In 2002, I joined the Station program’s Assessments
and Cost Estimation Office (ACEO), an organization
established to perform the kind of early warning,
“Where’s-my-program-headed?” assessments that
few program managers have the time or staff to do.
By the time I joined the team, the ACEO had
already established several unique tools with which to
develop meaningful summaries and “What’s-the-data-really-
telling-you?” assessments for the ISS Program
Manager. But one key program control tool remained
missing: earned value-based performance measurement.
Leading the development and implementation of a
program-wide EVM system became one of my early
tasks, to no small extent because I volunteered that I
understood EVM and believed in its utility.
But you’ve got to use the data
Mid-program EVM implementations, I soon discovered,
are widely held by industry to be difficult endeavors at
best. Although the ISS program was receiving monthly
EVM data from its major contractors, nobody was tying
them together to form a consolidated performance
message. And even if someone had, only about half of
the program’s work would have been covered under this
type of performance measurement.
Few seemed to be using the contractor EVM data we
were getting. Most managers were collecting it because
it was required, not because they saw the value inherent
in EVM reporting. The common feeling was that EVM
was expensive, faddish, a royal pain in the posterior, and
definitely not worth the effort. This feeling was expressed
even more strongly by managers of work content not
already encompassed by EVM reporting: “I’m getting all the data I need through planned vs. actual costs, plus the
technical updates I receive monthly from my leads…why
do I need earned value?”
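For readers who haven’t worked with earned value, here is a minimal sketch in Python, with made-up numbers, of the standard EVM quantities. It shows exactly what a planned-versus-actual comparison can miss: spending can match the plan to the dollar while the work itself lags behind.

```python
# Illustrative only: standard EVM quantities with hypothetical numbers.
planned_value = 100.0  # PV (BCWS): budgeted cost of work scheduled to date
actual_cost   = 100.0  # AC (ACWP): what has actually been spent to date
earned_value  = 80.0   # EV (BCWP): budgeted cost of work actually completed

# Planned vs. actual alone looks fine here: spending matches the plan.
plan_vs_actual = planned_value - actual_cost      # 0.0, seemingly on track

# Earned value reveals we paid full price for only 80% of the work.
cost_variance     = earned_value - actual_cost    # -20.0: over cost
schedule_variance = earned_value - planned_value  # -20.0: behind schedule
cpi = earned_value / actual_cost    # 0.8: 80 cents of work per dollar spent
spi = earned_value / planned_value  # 0.8: earning at 80% of the planned rate

print(f"CV={cost_variance}, SV={schedule_variance}, CPI={cpi}, SPI={spi}")
```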
That was only the beginning of the challenge.
ISS was already squarely in operations, even as the
last of the development effort was wrapping up. Some
astute managers started asking the very good question
of how meaningful EVM would be when applied to
what they considered to be essentially level-of-effort
work. Literature and Internet searches unearthed no
examples of implementation of EVM on programs
in the operations phase; nobody’s corporate memory
could recollect such an instance either. And it didn’t help that some veterans did remember a prior across-the-program EVM implementation, one abandoned largely because its overhead was perceived to outweigh its benefits.
Then there was the issue of timeframe. All knowledgeable
sources indicated that EVM implementation
was often a multi-year endeavor. Once initiated, EVM
systems were said to take at least four to six months to
“settle out” and produce meaningful data. My team’s
marching orders were to have a tested EVM system in
place in time for the start of the next fiscal year (which
at that time was less than five months away) and to have
results capable of withstanding outside scrutiny after
the first month of baseline operation.
Drumming up support
A crucial first step was to develop an implementation
plan and gain the Program Manager’s support.
We outlined an aggressive schedule that supported
conducting three dry runs of the new system. The
Program Manager agreed to our plan, as well as to our
request to present it to his control account managers at his next senior staff meeting. Having the Program
Manager openly support our efforts in that forum was
worth far more than any amount of lobbying we might
have attempted to do. We had a sanctioned plan in front
of everyone. Now we had to make it happen.
Dealing with PMS
Our philosophy of implementing an EVM system that
maximized return on investment included minimizing
the impact on managers’ existing workloads. Our new
Performance Measurement System (PMS—yes, we’ve
heard all the jokes) was to be based on earned value
concepts rather than to be a formal, certified EVM
system. The idea was to use existing schedules, metrics,
etc., rather than to reinvent the wheel. Considering that
our program was largely in the operations phase, we
also didn’t expect to cover as high a percentage of total work content with discrete earned value performance metrics as traditional EVM systems do.
We concentrated on measuring performance for
those tasks that, because of their risk, high cost,
or visibility, could cause potential problems for the
Program Manager. In this approach, we identified
and closely watched those items that could become
“gotchas.” Thus our PMS became closely aligned with
the program’s risk management system.
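There was a technical reason behind this selectivity, and behind the operations-phase skepticism mentioned earlier: in standard EVM practice, level-of-effort work is credited earned value equal to its planned value by definition, so its schedule variance is always zero and its metrics say very little. The sketch below contrasts the two earning methods with hypothetical task data; it is illustrative only, not our actual PMS logic.

```python
# Hypothetical contrast of two common EVM earning methods. For
# level-of-effort (LOE) accounts, EV is credited equal to PV by
# definition, so LOE work always reports SPI = 1.0. That is one reason
# discrete measurement was reserved for high-risk, high-visibility tasks.

def discrete_ev(budget_at_completion, percent_complete):
    """Discrete (percent-complete) earning: EV tracks actual progress."""
    return budget_at_completion * percent_complete

def loe_ev(planned_value_to_date):
    """Level-of-effort earning: EV equals PV by definition."""
    return planned_value_to_date

assembly_task = discrete_ev(budget_at_completion=500.0, percent_complete=0.6)
ops_support   = loe_ev(planned_value_to_date=200.0)
print(assembly_task, ops_support)  # 300.0 200.0
```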
Another facet of making our PMS palatable to
managers involved relieving them from as much of
the implementation effort as possible. For example,
our team shouldered the up-front work of developing
a PMS process tool that would minimize the effort
required for control account managers to make monthly
EVM inputs and retrieve processed data for analysis.
Our team drafted top-level, resource-loaded schedules
for those control accounts that didn’t already use one in routine status reporting. We reiterated our “low-impact implementation” message as we presented our
pre-developed schedules and formats to managers and
their support folks, then worked with them to answer
questions and revise the schedules.
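Our actual tool was built in Excel, but the heart of what it did each month, consolidating per-control-account inputs into one program-level status, can be sketched roughly as follows. The account names and structure here are hypothetical.

```python
# Rough, hypothetical sketch of a monthly program-level roll-up; our real
# PMS tool was Excel-based and structured differently.
from dataclasses import dataclass

@dataclass
class ControlAccount:
    name: str
    pv: float  # planned value to date
    ev: float  # earned value to date
    ac: float  # actual cost to date

def program_rollup(accounts):
    """Aggregate control-account data into program-level EVM indices."""
    pv = sum(a.pv for a in accounts)
    ev = sum(a.ev for a in accounts)
    ac = sum(a.ac for a in accounts)
    return {"CPI": ev / ac, "SPI": ev / pv, "CV": ev - ac, "SV": ev - pv}

accounts = [
    ControlAccount("Vehicle Ops", pv=1200.0, ev=1150.0, ac=1180.0),
    ControlAccount("Research",    pv=800.0,  ev=760.0,  ac=790.0),
]
print(program_rollup(accounts))
```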
Within ten weeks of the inaugural senior staff
meeting, we had our process defined, and the first
version of the PMS tool developed and validated. We
also had top-level, resource-loaded schedules for all of
our new control accounts, covering the three-month dry
run period laid out in our PMS implementation plan.
Similar schedules, covering upcoming fiscal year 2003,
were in place. An innovative, more understandable
way of looking at the EVM data—adapted from a DoD
format—was incorporated into our tool and ready for
debut with the ISS senior management. We developed
methods of projecting end-of-fiscal year expenditures,
as well as the split between unencumbered under-run
and content-laden roll-through—taking into account
such unorthodox factors as being in the operations
phase. Convergence metrics were devised to track
the system’s “settling out” and to project when the
EVM data would be mature enough to be considered
meaningful for management decision making.
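The particulars of those methods are beyond the scope of this article, but their flavor can be suggested by the standard CPI-based estimate at completion, paired below with a convergence check that is purely my illustration here, not our actual metric.

```python
# Standard EVM estimate at completion, plus a hypothetical convergence
# check; the program's actual projection and convergence methods differed.

def estimate_at_completion(bac, ev, ac):
    """Standard EVM projection: EAC = AC + (BAC - EV) / CPI."""
    cpi = ev / ac
    return ac + (bac - ev) / cpi

def has_settled(monthly_cpi, tolerance=0.05):
    """Hypothetical convergence metric: treat the data as settled once
    CPI has moved less than `tolerance` for two consecutive months."""
    deltas = [abs(b - a) for a, b in zip(monthly_cpi, monthly_cpi[1:])]
    return len(deltas) >= 2 and all(d < tolerance for d in deltas[-2:])

print(estimate_at_completion(bac=10_000.0, ev=4_000.0, ac=4_400.0))  # 11000.0
print(has_settled([0.78, 0.91, 0.88, 0.89]))  # True: last two moves are small
```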
But will the process work?
Starting with the first dry run, we made monthly
briefings of PMS results to the Program Manager and
his senior staff. The initial results were interesting: Any
given control account’s data could be all over the map,
but in aggregate the PMS estimate of overall program
status was very close to the management team’s “gut
feel.” The second month’s dry run results showed more
of the same behavior, and underscored what EVM
experts had predicted: The data should be expected
to vary widely from one month to the next until the
system “settled out.” By the third dry run, however, the
system already showed signs of stabilizing, particularly
the ISS-level aggregate data. The Program Manager
and his team were pleased with the initial results, as
well as with our tool’s data processing and presentation;
the go-ahead was given to proceed with a baseline PMS
for the new fiscal year.
The initial baseline run, completed within six months
of approval of our implementation plan, went as
smoothly as anyone could have hoped for. The new
resource-loaded schedules were completed just in time; the last-minute process and tool tweaks came
together the same way. The financial and earned
value data—once loaded into our PMS tool—resulted
in a very believable ISS status that was in line with
the senior managers’ understanding of the program’s
technical, cost, and schedule situation.
Perhaps most importantly, the EVM data sparked
questions that forced managers to look a bit deeper
into what was going on in their respective areas of
responsibility. Those healthy discussions alone made
all the previous months’ efforts worthwhile.
All of this was accomplished with the part-time
efforts of a half-dozen people on our team, plus a
couple of people from each of the ten new control
accounts we created—and is being maintained with
far less overhead than is commonly attributed to EVM
systems. Our home-grown Excel®-based PMS tool,
besides being “no-cost” compared with commercially
available software, enabled us to tailor everything at
will to meet our analysis needs. Our PMS, including
the unorthodox projection methods we developed, went
on to predict fiscal year closing statistics to within a
half percent a mere three months into baseline operations.
EVM has become a valuable tool in our assessment work. We swear by it.
- Rather than forcing a situation to conform to a solution that doesn't fit, stay flexible and willing to try new things so that known techniques can be tailored to the specific needs of a project.
- A project team's resistance to change is easier to overcome when the direct burden of implementing that change is minimized.
Why is a methodology developed more than a generation ago
still unpopular in many well-developed organizations, and why
does it still require a dedicated introduction effort?
Michael Jansen leads the Assessments
branch within the Program Planning
& Control Office of the International
Space Station (ISS) Program at the
Johnson Space Center (JSC). He is active
in NASA training, knowledge sharing, and community