RISK MANAGEMENT


An introduction and discussion on Risk Management together with recommendations for its implementation. Prepared by Chester Simmons.


1. Introduction

The management of risks is a central issue in the planning and management of any venture, but it is also something of an orphan within the acquisition establishment (at least in the U.S.). Risk management has not historically been a "branch activity" as noted in a bygone version of the Defense Systems Management College's System Engineering Handbook. (Reference 1 is the latest version of the guide.) "Branch" in this context refers, of course, to an organizational element within the engineering-development organizations common two or three decades ago, probably derived from "branch of service" designations. The connotation is that there were no proponents per se for risk management as there were for reliability, safety, systems, electrical, PP&C, propulsion, human factors, guidance, C3I, etc. The situation is still somewhat loosely defined.

The purpose here is to provide information for use in risk management by any and all stakeholders. The objective is not to foster risk management as an identifiable and separate specialty.

The prescriptive portions of the discussions are cast from the perspective of a contractor performing an effort for some customer, typically an agency of government. The emphasis is on cross-specialty, cross-discipline, cross-functional and cross-technology development programs since such programs maximize risk opportunities and occurrences. In terms of program phases, the discussions are intended for a program in the pre-proposal, proposal or start-up phases. The reason for this timing is that risk management should be proactive, and activity later than these phases is hardly proactive in terms of avoiding risks.

The discussions in this note rest on one underlying assumption: the programs under consideration involve significant development activities. That is, the software, hardware, operational concepts, etc., or combinations of these aspects, do not exist at the start of the program, and the development of these aspects is accomplished to some specification within some allocated set of time and monetary constraints.

1.1 Risk Definition
The simplest and possibly best definition of risk is:

The possibility of loss, injury, disadvantage or destruction.

Apply this definition to the issues of program management and you have the starting point for successful risk management.

Please note that the "Apply...to the issues of program management..." is meant to imply a concerned, experienced, energetic and capable effort towards any and all issues of immediate and long-range concern within the purview of program governance. The position here is that no definition of risk, no matter how convoluted, will reduce risk one iota. Management must know its job and must do it.

It is possible, of course, to gain some insight by considering the types of risks, such as programmatic, technical, cost, schedule and sometimes supportability. There is also the consideration that acquisition risks are part of, and often mingled with, risks encountered in other venues such as health, safety, insurance/underwriting, finance, business, environment and politics. However, what happens very often with elaborate definitions is that much time and energy are wasted trying to characterize a risk as opposed to managing it. Risks are so often interwoven as to type that they form Gordian knots, and a "cut the knot" attitude is best.

The recommendation here is that if a customer (either a contracting agency or a superior agency) requires some elaborate set of definitions (e.g., through contract terms) then use them (i.e., apply the Golden Rule), but otherwise avoid the trap of too much definition to the detriment of content. If cataloging of risks is desired, it is suggested that a matrix be used (Figure 1, Risk Identification Matrix).

The leftmost column of this risk identification matrix will be the risks, and across the top will be the categories: programmatic, technical, cost, schedule, supportability and others as appropriate. Each risk has the applicable items of the categories checked. This approach is easy to implement and it avoids needless discussions that will not contribute in proportion to the time spent. Columns for ownership, criticality, priority and relative rankings can be added as the understanding of the risks evolves, producing a useful graphic for risk management briefings.
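As a minimal sketch of this matrix in code (Python here; the risk names and category assignments are invented purely for illustration), the whole construct is just tabular data that can be printed for a briefing:

    # Risk identification matrix: risks down the left, categories across
    # the top, an "X" where a category applies. Entries are illustrative.
    CATEGORIES = ["Programmatic", "Technical", "Cost", "Schedule", "Supportability"]

    risks = {
        "New guidance software":  {"Technical", "Schedule"},
        "Unproven subcontractor": {"Programmatic", "Cost", "Schedule"},
    }

    print("Risk".ljust(26) + "".join(c.ljust(15) for c in CATEGORIES))
    for name, cats in risks.items():
        marks = "".join(("X" if c in cats else "-").ljust(15) for c in CATEGORIES)
        print(name.ljust(26) + marks)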

There are two definitions of risks that are currently fashionable within some procurement circles: proposal risks versus performance risks. The definitions tend to vary among sources. The preferred definitions are:

Proposal Risks:
Those risks inherent in the venture, i.e., to design and build a disposable external tank for a reusable spacecraft is inherently risky. Thus, an RFP for such a tank has embedded risks no matter who undertakes the development.

Performance Risks:
Those risks inherent in the proposed approach. A given contractor can implement an approach that has risks above and beyond those inherent in the venture. For example, a developer may elect to base key design decisions on analytical data rather than empirical data to reduce costs at some increase in risk.

These definitions must be addressed during a proposal if they are included in the RFP, but after an award they are probably not too useful to a performing organization. Some sources (e.g., Reference 2) define the proposal risk as being the risk associated with the contractor's approach and the performance risk as being related to the contractor's track record.

1.2 Risk Management Definition
Basically, risk management is the sum of all proactive management-directed activities within a program that are intended to acceptably accommodate the possibility of failures in elements of the program. "Acceptably" is as judged by the customer in the final analysis, but from an organization's perspective a failure is anything accomplished in less than a professional manner and/or with a less-than-adequate result.
The key words are possibility, management-directed and acceptably.

It is possibilities that are being accommodated. It is management's job to do the planning that will accommodate the possibilities. The customer is the final judge, but internal goals should be to a higher level than customer expectations.

Risk management as a shared or centralized activity must accomplish the following tasks:

- Identify concerns and identify risks*
- Evaluate the risks as to consequences and likelihoods
- Assess options for risk management
- Prioritize the risk management efforts*
- Develop risk management plans
- Authorize the implementation of the risk management plans*
- Track the risk management efforts and manage accordingly*

The activities marked with an asterisk are those that must be reserved for management's attention and action in those cases for which a risk management staff/secretariat is employed. This list, exclusive of the management functions, is consistent with the list espoused for years by the Defense Systems Management College (DSMC): risk planning, risk assessment, risk analysis and risk handling. The managerial functions are marked to once again emphasize that management is responsible and accountable for risk management.

1.2.1 Identify Concerns & Identify Risks
A concern to be evaluated as a potential risk is literally any issue about which a doubt exists in some context. Later a procedure will be recommended for accomplishing the review of concerns and identifying those that actually engender risks. Some differentiation is needed because difficult things often get confused with risky things. Also, some people use the risk tag to justify additional funding when, in fact, no risk exists.

Since risks will not be arbitrarily dropped as key management issues once they are identified, it is smart to spend the necessary time to identify concerns and then to assess the existence of the risks. Of course, risks identified by the customer in the RFP or some other formal fashion are automatically risks for the program.

There is also a need for differentiating between identifying concerns and identifying risks to reflect the fact that in a contracting organization the Program Manager is responsible for all risks for the contract, and it is his exclusive right to formally declare that an issue is or is not a risk. (Common sense indicates that the PM had better listen to his subordinates, but the responsibility is still his.)

Within the performing organization it is necessary for the PM to allocate responsibility for resolving risks to the appropriate function, specialty or discipline. Also, some individual needs to be tagged as the organizational focus for actions for each risk. The ownership of risks is essentially an allocation process tailored to the organization doing the job. Some organizations may elect to keep risk ownership and leadership at relatively high levels (e.g., functional leads, department heads, etc.) whereas in other cases it might be appropriate to allocate the ownership as low as possible in the organization considering spans and scopes of control for appropriate resources.

A point to be made at this time is that risks are seldom deeply held secrets. Experience indicates that virtually all risks of consequence are more or less common knowledge. This point will be discussed again later, but it is worth noting that program-killing, lawsuit-engendering risks have been common knowledge on more than one doomed program!

1.2.2 Risk Manager
A risk manager is recommended if a program is large enough to afford one. The role for this position will be to capture and formalize risk management activities and results. This role includes being spokesperson for the program on risks at major reviews and in reports. For example, at the SRR and SDR, it is invariably necessary to describe the common elements of the risk management program before specifics are discussed on a subsystem-by-subsystem basis; otherwise there is much repetition in formats. The risk manager can lay out the whole approach, and later presentations can focus on details of specific elements of the system.

The risk manager's domain is essentially a secretariat-type function. It is not a shaker-mover position. The risk manager does not have direct responsibility for any risks per se. This position is somewhat analogous to that of program planning and control (those persons responsible for C/SCSC-driven activities, performance management reporting, etc.). The reality is less exalted than the title. Specific duties are discussed below.

Experience indicates that programs of $100M/year will require a risk staff of probably no more than 3 persons for early phases (through SDR) and only one person later, possibly augmented by one or two staffers at the time of major reviews. Smaller programs can use proportionally smaller staffs to the point of having some person designated as a part-time risk manager.

Experience also indicates that major programs also tend to be segmented into major subcontracts (or teaming relationships). For subcontracts appropriate to the scale of a $100M/year prime program, a one-person risk staff for each subcontractor is probably adequate with some help at major reviews. It is assumed that the prime and lower-tier companies work in concert in risk management if not in cooperation.

Note: It is a fashion in some circles to project a risk management role that is considerably enhanced in scope relative to what is recommended here. In effect, there is one risk owner, the risk manager. In theory such a position sounds nice, but in fact it is felt that such an approach will not be as effective as having the risk owners also be the owners of the expertise, the resources and the mission to do the job. A separate highly-empowered risk manager will just be a nuisance in most cases, and a program manager who abdicates his responsibilities for risk management to such a position is truly at risk (and probably not too bright).

Another prejudice about this super role is that today's systems are too complex for any one person to really understand at the level of professional competence. Remember the following as a hard and fast rule: Having an opinion is a far cry from understanding, but an opinion is closer to understanding than understanding is to professional competence, and professional competence is the starting point for solving difficult problems. From this perspective, understanding is a relatively cheap commodity, but even understanding is almost impossible across the full span of today's systems. So, avoid the trap of an over-empowered risk management role if the system is at all complex.

The risk management role as recommended here is not as attractive as a direct design role, but it will have its moments.

1.2.3 Evaluate the Risks as to Consequences & Likelihoods
One of the more useful constructs of traditional risk management is that a risk as a possibility actually consists of a likelihood and of consequences. This definition is probably derived from the elementary mathematical concept of expectation of an event. Expectation for some event is defined as the product of its probability of occurrence and its value (in a generalized sense) if it occurs. Thus, a one-in-forty million lottery ticket for a prize of $20,000,000 has an expectation of fifty cents.
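The lottery example, worked in code (a trivial sketch using the figures above):

    # Expectation = probability of occurrence * value if it occurs.
    probability = 1 / 40_000_000   # one-in-forty-million ticket
    prize = 20_000_000             # $20,000,000 prize
    print(f"Expectation: ${probability * prize:.2f}")   # $0.50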

For risk management the situation is normally much more fuzzy than the simple lottery example, and there is usually very little precision in either the metric for the probability of occurrence or the metric for the consequences. Therefore, the possibility expressed as a combination of probability and consequences is usually subject to debate even if some of the pseudo-mathematical approaches are used (and some of these are recommended).

The recommendation here is to use whatever tools are available and meaningful in a given situation (and, as noted, some are recommended below), but not to get hung up on mathematical-appearing artifices that do not really have any more precision than an informed judgment. Again, avoid trying to untie a Gordian knot; just cut the thing.

There may be situations in which effectiveness analyses, engineering analyses, bean counting of interfaces, etc. may be necessary, but these are sideline issues to the exercising of judgment about the risks.

Note: It is somewhat surprising that the cost and schedule aspects of risk consequences are not cast in terms of a C/SCSC perspective that provides an effective if not scientific tie between cost and schedule parameters.

1.2.4 Assess Options for Risk Management
Risk management options are usually cited as risk handling options subdivided as avoidance, control, assumption, risk transfer, and knowledge and research. Generally, the assessment of management options is a hip shot since the necessary decisions must occur early in a program when things are still fuzzy. However, if experienced personnel are given the facts, one can expect very good decisions since there is seldom any real mystery about the practicality of the options available. (The practicality of any option is usually just an issue of schedule and funding.)

Avoidance: Use an alternate approach that does not have the risk. This mode is not always an option. There are programs that deliberately involve high risks in the expectation of high gains. However, this is the most effective risk management technique if it can be applied.

Control: The DSMC Risk Management Guide (RMG) defines this mode as: "Controlling risks involves the development of a risk reduction plan and then tracking to the plan." The key aspect is the planning by experienced persons. The plan itself may involve parallel development programs, etc.

Assumption: Simply accepting the risk and proceeding. A word of caution: There appears to be a tendency within organizations to gradually let the assumption of a risk take on the aura of a controlled risk. This mental evolution is the kind of wrongly conditioned thinking that led to the Challenger failure.

Risk Transfer: An attempt to pass the risk to another program element. Typically, used in the context of a government agency passing the risks to a contractor.

There are some discussions in the DoD acquisition literature that this mode trades government risk for profit to the contractor. This belief is apparently founded on elementary economic theory and the mistaken belief that an executive in a procuring agency has avoided risks by passing the buck. What the executive will have done is, at best, a CYA exercise.

Knowledge & Research: The DSMC RMG cites this mode as not being "true" risk handling, but rather a technique for strengthening other techniques. From a program management perspective this approach can best be viewed as an adaptation of the approach used by graduate students for their theses: intensive study associated with specialized testing. In effect, the student develops intellectual ownership of his problem in all of its aspects: theoretical, empirical and practical.

Essentially, this mode is simply doing one's homework.

This mode is critical for testing. The DoD's Test, Analyze, and Fix (TAAF) has a nice ring, but it is valid only in a very narrow context: testing of production and preproduction prototypes to remove bugs. However, TAAF has been mistakenly applied to earlier development phases. Failure to analyze prior to testing generally poses a risk that trends in the test data will not be understood or that key test results will mistakenly be taken as inconsequential.

1.2.5 Prioritize the Risk Management Efforts
Once the risks have been evaluated in terms of likelihood of occurrence and consequences, and when options for risk management have been reviewed, it is then meaningful to rank the risks for the program manager to assign priorities. The task of prioritizing the risks is performed at the senior staff level to assure that all political, business and programmatic factors are weighed in the priority assessment. The purpose is to avoid the "successful operation, but the patient died" syndrome. The risk manager earns some of his pay at this point by sorting all of the mechanical aspects of the risks (ranks and risk management options) and presenting them to the senior management as a package.

Note: The recommended risk management options will generally be of the "risk control" category above, and the risk management will be just special emphases or possibly additions to existing plans. For example, the risk management plan might be additional development tests, a re-review of make-or-buy decisions, a shift in schedule, etc.

Management must exercise its judgment to prioritize resources for risk management purposes. The ranked risks are reviewed in terms of combined likelihoods and consequences and in terms of program level concerns with missions, functions, business objectives and political aspects. Assuming that the senior management is satisfied with the completeness of the risk management efforts leading to the review (identification, evaluations, options, etc.), the risks can be ranked or re-ranked in terms of program priorities and the primary options selected for each for the planning of risk management.

The risk owners should be present to support the ranking and to assure that the priorities are reflected in their subsequent planning efforts.

Note: The customer should not be a part of these reviews since business interests beyond the customer's purview will be discussed. Risks stipulated by the customer are, of course, included as required.
1.2.6 Develop Risk Management Plans
At this point a hiccup in the average RFP will be discussed so that what is meant by risk management plan will be understood.

Most RFPs from beginning to end refer to a risk management plan in the singular, and this plan in the singular refers to all of the topics discussed here. However, allowance is typically not made for multiple risk plans for risks that often have significantly different characteristics. Therefore, the recommendation is that the risk management program encompass a two-tier approach to risk management plans: a risk management program plan (RMPP) and risk-specific risk management plans (RMPs). The RMPP essentially captures all aspects of risk management at the program level and those aspects common to all risks.

Note: In some risk manuals, plans roughly equivalent to Risk Management Plans are sometimes denoted as Risk Abatement Plans.

For example, the DSMC RMG provides a suggested outline for a risk plan that is not attuned to the recommendations and suggestions presented here. Four of five sections of that outline refer to common elements, and specific risk planning is limited to portions of the fifth and final section. With some slight modification this outline can be used as a basis for the RMPP that, in turn, defines the scope of risk specific plans.

Suggested contents for RMPPs and RMPs are given in Appendix A.

The RMPP should encompass an approach to risk management that commits the program to significant emphases for all risks considered to be of moderate or high rank. Such risks will have specific risk management plans, and each risk will be referenced in C/SCSC-based reporting, e.g., Variance Reports by CAMs will carry a flag indicating that a high or moderate risk is associated with the effort being reported.

Risks considered to be of a low ranking can be delegated to routine management, and such risks do not require specific risk management plans. Note: The relative treatment of high, moderate and low risks corresponds closely to the treatments suggested by Blanchard (Reference 3).

Note: The use of high, moderate and low categories does not preclude finer numerically-based rankings, but the finer grained rankings are not usually recommended.

For a large program (say hundreds of millions of dollars) the RMPP can be developed with a page count of no more than 35 pages (assuming a large number of graphics). Individual RMPs can be on the order of 75 pages for high risks and 25 pages for moderate risks. The difference in the RMPs as a function of risk category is that the RMP for a high risk should be a stand-alone document with minimal references (directly including budgets, schedules, technical data, etc.), whereas the RMP for a moderate risk can be largely based on references to appropriate sources.

1.2.7 Authorize the Implementation of the Risk Management Plans
This step is usually accomplished by the simple act of the program manager's signature on the signature pages of the RMPP and lower-tier RMPs. The plans are under configuration control following this step.

1.2.8 Track the Risk Management Efforts and Manage Accordingly
After the planning is accomplished and the RMPP is underway, the risk manager should be responsible for presenting the status of all risks at all reviews. Risk reviews should be a part of both technical and programmatic reviews.

A part of this risk management effort will be the implementation of a risk management board consisting of senior managers. These persons do not have to be risk owners although they may be. This board is convened routinely to provide high-level visibility to the risk management process. The risk manager and owners of significant risks present summaries of progress or non-progress in managing the risks. Also, the program is routinely reviewed for the occurrence of new risks.

The frequency at which the board meets will depend on the risks, the organization's structure (e.g., primarily internal responsibility versus significant subcontracting) and the overall schedule. As a minimum, the board should be convened prior to all major program reviews (SRR, SDR, etc.) to assure all parties have a mutual understanding of these critical areas before going to the customer.

Monthly risk board sessions can be appended to normal internal management reviews. These monthly risk reviews will normally be intra-organizational affairs (intra-prime, intra-subcontract, etc.). The risk managers of subordinate organizations can transmit summaries to the risk manager for the prime for inclusion in the prime's risk review. The special reviews should include all organizations. These reviews are normally difficult to schedule since they will occur in the hectic periods prior to major reviews, and they may have to be via videoconference or teleconference. The video-based conference is preferred, but either mode works relatively well since the risk issues tend to be relatively static. (A program with highly volatile risk management issues across the board would be in a world of hurt.)

2. Background
The background for risk management involves two facets of interest here: the fundamental causes of risks being realized in the acquisition of large complex systems, and the formal imposition of risk management as a bureaucratic and contract concern.

The Denver airport and some of the first of the rapid transit systems in the U.S. of the modern era illustrate that not only the DoD has trouble with the acquisition of large-scale systems. However, large-scale systems do not have to be seemingly impossible. Disney World, the other home of Mickey Mouse, is a testimonial that complex systems can work so effectively as to be almost transparent to the user. The discussions here will focus on known problems of the DoD's process as discussed in References 4 and 5.

In terms of the formality of risk management the focus will again be on the DoD process that begins with Reference 6.

2.1 Problems with DoD's Acquisition Process
The following is taken from the GAO's assessment of the DoD acquisition process (Reference 4):

OVERVIEW

The Department of Defense (DoD) spends billions of dollars each year developing and procuring major weapons systems. These expenditures have produced many of the world's most technologically advanced and capable weapon systems--as demonstrated during Operation Desert Storm. Nevertheless, the process through which weapons requirements are determined and systems acquired has often proved costly and inefficient--if not wasteful. In addition, the "high stakes" weapons acquisition process has proven vulnerable to fraud, waste, and abuse. It was this high stakes process--and the absence of adequate internal controls--that provided the breeding ground for the investigation and charges of influence-peddling known as "ill wind."

DOD has made some improvements in the weapons acquisition process over the years. Major reforms recommended by the President's Blue Ribbon Commission on Defense Management--the Packard Commission--in 1986 have been or are currently being implemented. In addition, the diminished Soviet threat and corresponding budget reductions are also prompting major changes in the way DOD acquires weapons systems. Top management within the Office of the Secretary of Defense has taken steps in an attempt to make the acquisition process more disciplined and to redefine the basic strategy for acquiring weapons. Moreover, key Members of Congress are calling for the military services to reevaluate their roles and functions.

*******
THE PROBLEM

Despite many efforts to reform and improve DoD's weapons acquisition process over the years, a number of fundamental problems persist. For example, despite an increased emphasis on the sound development and testing of weapons, we still see major commitments to programs, such as the B-2 bomber and the Airborne Self-Protection Jammer, without first seeing proof that these systems will meet critical performance requirements. Despite improved cost-estimating policies and procedures, we still see the unit costs of weapon systems, such as the DDG-51 destroyer and the C-17 transport, double. Despite the increased emphasis on developing systems that can be efficiently produced and supported, we have weapons, such as the Advanced Cruise Missile and the Apache helicopter, that still encounter costly production and support problems. Clearly, problems are to be expected in major weapons acquisitions, given the technical risks and complexities involved, but too often we find
-- systems being acquired that may not be the most cost-effective solution to the mission need,
-- overly optimistic cost and schedule estimates leading to program instability and cost increases,
-- program acquisition strategies that are unreasonable or risky at best,
-- too much being spent before a program is shown to be suitable for production and fielding, and
-- individuals seeking to improperly influence the outcome of the contracting process.

*******

THE CAUSES

While there are many reasons for these types of problems, the underlying cause of persistent and fundamental problems in DoD's weapons acquisition process is a prevailing culture that is dependent on generating and supporting new weapons acquisitions. The culture is made up of powerful incentives and interests that influence and motivate the behaviors of participants in the process. Participants include the various components of the Department of Defense, the Congress, and industry. Sometimes, these interests transcend the need to satisfy the most critical weapons requirements at minimal cost. Such interests may include protecting (1) service roles and missions, (2) service budget levels and shares, (3) service reputations, (4) organizational influence, (5) the industrial base, (6) jobs, and (7) careers. Collectively, these interests create an environment that encourages "selling" programs--a process that may entail undue optimism, self-interest, and other compromises of good judgment. In this environment, it may not be reasonable to expect program sponsors to present objective risk assessments, report realistic cost estimates, or perform thorough tests of prototypes when such measures may expose programs to disruption, deferral, or even cancellation. The "culture" is not the cause of all the problems in weapons acquisitions. Some problems can be attributed to basic errors in judgment or other motivating forces. For example, the "high stakes"--that is, the big money involved--in defense acquisitions can lead to influence-peddling and contracting fraud and abuse--as found in the "ill wind" investigation.

*******
GAO'S SUGGESTIONS FOR IMPROVEMENT

If changes in the acquisition of weapons are to be of a lasting nature, they must be directed at the system of incentives that has become self-sustaining and very difficult to dislodge. Incentives and opportunities that produce undesirable behaviors must be eliminated or minimized through effective internal controls and/or offset by stronger--positive or negative--incentives. Moreover, officials in top DOD management positions, as well as the acquisition work force in general, must be held to the highest standards of integrity and conduct. Specific suggestions for addressing several prevalent undesirable behaviors or conditions are described below.
Controlling Inter-Service Competition
Several actions are needed to change incentives and conditions leading to inter-service competition, self-interest, and the acquisition of unnecessary, overlapping, or duplicative capabilities. These actions could also reduce incentives for overselling programs. First, a consensus must be reached between the Congress and the administration on military strategy, the services' roles and missions, and future funding levels. Uncertainty surrounding current roles and missions encourages the services to acquire weapons that will support and protect traditional or desired capabilities. The inability of DOD to accurately predict outyear funding levels has resulted in optimistic spending plans that cannot be executed under actual funding levels. Secondly, determining needed capabilities and the particular types of weapons to fill those needs should not be left with individual branches and warfare communities within the services. The duplicative outcomes of the acquisition process are an outgrowth of the fact that system requirements mirror the traditions and self-preservation instincts of their sponsoring organizations. Making these decisions at the Office of the Secretary of Defense level could enable competing demands, available resources, and the needs of theater commanders to be more fairly assessed before a specific program is given life.
Discouraging the Overselling of Programs
A combination of internal controls and other forms of incentives and disincentives is needed to reduce the tendency to sell weapons programs through optimistic cost and schedule estimates and accelerated--and therefore, high risk--acquisition strategies. Under the existing culture, the success of participants' careers is more dependent on getting programs through the process than on achieving better program outcomes. Accordingly, overselling "works" in the sense that programs get started, funded, and eventually fielded. The fact that a given program costs more than estimated, takes longer to field, and does not perform as promised is secondary to getting a "new and improved" system to the field.
Limiting Technology Risks
Research and technology efforts need to be freed from program association until they mature to a specified level, such as the demonstration and validation phase. This idea is already embodied in DoD's new acquisition strategy, which calls for advanced technologies to prove their feasibility and producibility before they are incorporated into new or ongoing acquisition programs.
Limiting Opportunities for Fraud and Abuse
DOD must continuously review and ensure compliance with controls designed to (1) ensure the free flow of current and accurate information from the contractors and program offices to top decision makers and those with oversight responsibility and (2) prevent improper influencing of contract awards. Today, the prospects for constructive change are quite encouraging. The demise of the Soviet threat and declines in defense budgets have created a unique opportunity to effect lasting changes in the weapons acquisition process. Both the Department of Defense and the Congress have acted upon this opportunity and have shown a willingness to support the types of changes needed to improve acquisition outcomes. DOD must ensure that effective internal controls are in place to minimize cultural influences, incentives, and behaviors that are not in the best interest of the taxpayers.

2.2 Formal Acquisition Policy and Procedures
The basic policy and procedures for risk management in the U.S. DoD procurement processes flow from the "DoD 5000" documents, beginning with the policy, DoD Directive 5000.1, "Defense Acquisition." These documents should be understood by contracting organizations. In addition to defining the driving forces behind risk management for procurement, these documents are good sources of motherhood for proposals. There is a Web site, DoD Directives, that should be consulted for copies of and information on these documents.

3. Risk Concepts
There are only a few key concepts in the management of risks, and these concepts are easily mastered and applied.

3.1 First principles
There are no fundamental scientific laws in risk management akin to the laws of motion, conservation and continuity from which applied scientific results are obtained. Most of risk management is qualitative and subject to judgment colored by experience, prejudice and politics. However, there is one fundamental principle that can be postulated and used.

Specifically, any element of a venture that entails a new aspect for the performing organization is a source of risk. (Barring malfeasance, incompetence including criminal neglect, and accident, it can be argued that "newness" is the only real source of risk.) The attitude here is that if all risks associated with newness are accommodated then whatever remains will in all probability be of small import and impact.

Risk management thus involves identifying the new aspects of the venture in question, and then adopting strategies to avoid, mitigate or otherwise accommodate the issues identified according to priorities suitable for the program. There is a temptation to insert "to the customer's satisfaction" between "identified" and "according" in the previous sentence, but customer satisfaction is reflected in what is meant by suitable priorities.

In the present context, inexperience is a synonym for newness.

A caution: If inexperience is a primary source of risk then the hiring of experienced personnel may appear to be an immediate cure, but this approach must also be assessed for newness. If a previously unused consultant is hired to plug a gap in experience then the consultant poses a derived (and often very serious) risk. A person new to an organization, no matter how knowledgeable, is often more of a problem than a solution. Unless an organization has a good track record for using consultants then a special plan should be implemented to track the contribution of any consultants to assure that what is desired is being accomplished. If people are hired to plug gaps in experience then a similar risk prevails.

The secret to risk management is to be creative in applying tests for newness to the activities, tools, people and products that constitute the venture. The key issue can be a new product, a higher or lower price, a tighter or looser specification, a higher or lower production rate, a new customer, a different time of year, a larger or smaller physical scale, a new paint, a new glue, new computer programs, a new manager, a new production machine, a new performance envelope, a new environment, new personnel, new subcontractors, new terms for proven subcontractors, tight schedules, new performance tolerances, unfamiliar parties to an interface definition, new types and/or scopes of interfaces, new corporate environment, etc.

In effect, any and all aspects of a venture should be tested for newness in any conceivable nuance. In Section 5, Risk Management Tools, the WBS and SOW will be recommended as the framework for achieving closure in the search for newness.
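A minimal sketch of such a newness screen (Python; the aspects and the newness flags are invented for illustration):

    # Screen venture aspects for any nuance of newness; anything flagged
    # becomes a concern to be evaluated as a potential risk.
    aspects = [
        {"item": "Propulsion subsystem", "new_design": False, "new_supplier": False},
        {"item": "Guidance software",    "new_design": True,  "new_supplier": False},
        {"item": "Cryogenic test rig",   "new_design": False, "new_supplier": True},
    ]

    concerns = [a["item"] for a in aspects
                if any(v for k, v in a.items() if k != "item")]
    print("Concerns for the risk review:", concerns)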

The issue that is being begged at this point is that of the seriousness of the risks so identified. (The seriousness is measured as noted earlier as the combined consequences and likelihood.) No two ventures will have the same risks and no two organizations will face the same consequences for a given set of risks. Therefore, it is all but impossible to generalize about seriousness as opposed to newness. Some assessments of relative seriousness of consequences are given in the discussions of ranking tools.

However, experience indicates that the seriousness aspects tend to sort themselves once a given set of risks is postulated.

3.2 Ownership
Like risk itself, ownership of risk is a concept of many dimensions and interpretations. The most important aspect of ownership is a clear mutual understanding of the responsibilities among parties to a contract and/or the responsibilities among parties to a cooperative venture. The second most important aspect is for a similar understanding on an intra-organizational basis.

It is common for government customers to weigh risk in establishing the reach and scopes of procurement contracts. Part of this weighing is a consideration of risk retained by the government versus passing risks to a contractor (for higher profits). Such issues need to be fully understood by all parties to a contract. Failure to achieve this understanding can result in wrongly conceived priorities by the wrong organization and in the failure to assure that the real risk owner gets all facts and impacts germane to the risk.

Every risk identified in a program should have an organization tagged for ownership, and a position holder should be tagged as managerial lead for its resolution.

3.3 Types of Risks
The earlier disclaimer re the relative unimportance of defining risks by types is not being ignored here. Here the treatment loosely parallels that of the Risk Management Guide of the DSMC in which typing is accomplished through "risk facets" defined as a way of classifying risks. These facets are postulated as a means to understand and classify risks. One or more of the facets are assigned to any given risk.

The facets are the names that have often previously been applied as labels for the types of risk: technical, supportability, programmatic, cost and schedule. In effect, the earlier typing criteria are now considered as characteristics. These characteristics match the matrix labels recommended earlier. The Risk Management Guide has good discussions of these different facets. These discussions are just paraphrased here:

3.3.1 Programmatic Risks
Those risks that flow from or impose an impact on program governance, and those risks that impact program performance. The risks for governance may be external (political, statutory, litigious, or contractual) or internal (business priorities, staff limitations, ROI constraints, and learning curves). Risks that impact on program performance generally flow from issues of competence, experience, organizational culture, and skills of the management team.

In this context, in contrast to present fashion re leadership that denigrates managers versus leaders, it is most important that the management team understand the nuts and bolts of management of the design, development, integration, test and verification processes. Basically, it is important that the management team fully understand the System Engineering process and its implications at each step in the overall process.

3.3.2 Schedule Risks
At the highest level of concern, schedule risks are simply that not enough time exists to do the required job with the resources allocated... people and/or money and/or material. Problems with resources can be argued as being of a programmatic nature, i.e., an intrinsic flaw in the program. (Such arguments are, of course, the basis for the risk classification scheme recommended in Section 1.1.) At a managerial level, the concern is more focused. For example, how does one incorporate flexibility in the tail end of the schedule to permit some maneuvering room for coping with problems that will inevitably occur as time and resources diminish?

3.3.3 Cost Risks
At the highest level, cost risk is simply that there is not enough money to do the job required in the time allocated, including reserves for reasonable contingencies. Again, an intrinsic flaw in the program. The causes of such risks can be estimating errors, lowball bids, business decisions, lack of understanding of requirements and political expediency. A management technique is to focus on all elements of the program that are new and to ensure that management reserves are at least adequate compared to the costs of the new elements.

Technical Note: It occasionally appears that the procuring agencies do not understand what is reasonable in terms of accuracy of estimates. Often, the implied levels of concern are at odds with any reasonable assessment.

For example, the construction industry in the U.S. is a well-founded, well-understood and well-experienced industry (as it is, in fact, in any nation relative to local practices). In major construction the uncertainty in costs to build is historically about 30% at the stage of "door knob" estimates. As the design and specification of a particular project evolves to the level of detailed definitions, detailed drawings/specifications and detailed schedules, the uncertainty drops to 5% or so.

In small-scale residential construction, it is common practice for a general contractor/builder to add 25% to the quoted cost to construct any plan that the particular builder has not built before. (Secondary factors influence this margin, but the main factor is that of the uncertainties in the details. A significant other factor is that such homes tend to be custom builds, and buyers of custom homes tend to be picky.)

It would seem to be entirely unreasonable to expect smaller uncertainties in endeavors involving significant scratch development of state-of-the-art hardware and/or software.
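A back-of-the-envelope check in this spirit compares management reserve to the cost of the program's new elements (a sketch; the margin and the costs are illustrative, with the 25% figure borrowed from the builder's analogy above):

    # Compare management reserve to an uncertainty-weighted cost of the
    # new elements of the program. All figures are illustrative ($M).
    new_element_costs = {"new guidance software": 40.0, "new cryo tankage": 25.0}
    margin = 0.25  # cf. the builder's 25% for a plan not built before

    implied = margin * sum(new_element_costs.values())
    reserve = 12.0
    if reserve < implied:
        print(f"Reserve ${reserve:.1f}M is thin against ${implied:.1f}M "
              "implied by the new elements.")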

3.3.4 Technical Risks
The technical risks are performance risks associated with the end items. From the perspective of the buying organization the concern is that the system will not perform as required. From the perspective of the performing organization the concern is that the system will not meet its specifications (and hence not be purchased and/or not meet customer satisfaction goals).

3.3.5 Supportability Risks
The supportability risk is that an otherwise acceptable system will cost too much to operate and maintain over its life cycle in terms of time, personnel and material resources. It is a fact that most systems cost more to sustain than to develop, and this fact is not new. It was a matter of comment in Goode and Machol in 1957 (Reference 7).

3.4 Development Risks
A development effort always entails a measure of risk because such an effort always involves aspects that are new to the performing organization. The new aspects as a minimum are limited to "reach" aspects of the end item. For example, an experienced design-and-build team that is extending the performance range for a single parameter of a system probably has a minimal risk. However, a team formed as a result of winning a major proposal for stretching all envelopes for all subsystems of a complex system has many risks, some only remotely associated with the stretching of the performance envelopes. Such multiple-risk situations are major challenges and are the most interesting from a management perspective.

The management of risks associated with the development of the objective products is the emphasis in the next section of this note. Here, the focus is on some of those things that engender risks, but that are not directly aimed at the specification or SOW for the objective products. These are specific risks experienced in start-up situations.

3.4.1 Communications
One of the first risk situations facing such a team is that it invariably requires additional staffing. When new people are hired some of the negative aspects are that the collective awareness of the nuances of the program is diluted, and people start making decisions with less than complete understanding of the nuances of the program, the company or the customer. The one and only and simple solution is communicate, communicate and communicate. Regular staff meetings are a must. Also, the Program Plan, the SEMP, the TEMP and other planning documents are of course elements of effective start-up communications.

The purpose of such communications is to impart missions, functions, goals, priorities and other guiding information to all team members as soon as possible, particularly new team members. Every new employee should be given a "catch up" kit that contains all information about the program. (This approach ensures the quality and accuracy of the understanding by each employee.) This kit can include the RFP, the proposal and any planning that has been accomplished in at least draft form (program, system engineering, test, verification, staffing, training, logistics, etc.).

It is also recommended that each new employee get a thorough introduction to the roles, personalities and functions of all support organizations: contracts, tech pubs, quality, safety, computer services, manufacturing, cleanroom, test lab, shipping and receiving, etc. Where the quality lab is located and the fact that it has an arbor press is not something every newly hired stress analyst will know, but it is information that can get a quick and dirty compressive strength test performed if necessary.

Another recommendation is to instill the use of meeting and discussion (M&D) forms to reduce the risk of misunderstandings. These forms are no more than one-page summaries of all key meetings (including telecons and videocons). The recommendation is that a form be prepared for any meeting or discussion that produces decisions, actions or closures.
Typically, the M&D forms are sent to function, specialty and discipline managers who are responsible for distribution within their areas of responsibility. On occasion, the forms are shared with customers.
The forms should include the participants (with e-mail addresses and phone numbers), date, place, issues discussed and key results (decisions, actions or closures).
The M&D can be implemented as e-mail, but an archive should be established since these forms provide one of the best briefing packages for newly hired personnel.
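A sketch of the M&D record and its archive (Python; the field names simply follow the list above, and the sample entry is invented):

    # One-page Meeting & Discussion (M&D) record; instances can be
    # distributed as e-mail and appended to a program archive.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class MDForm:
        participants: list[str]   # names with e-mail addresses and phone numbers
        when: date
        place: str
        issues: list[str]         # issues discussed
        results: list[str]        # decisions, actions or closures

    archive: list[MDForm] = []
    archive.append(MDForm(["J. Doe <jdoe@example.com>, x1234"], date.today(),
                          "videocon", ["ICD 012 schedule"], ["Action: J. Doe"]))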

A special dividend of the M&D is that significantly less time will be spent in staff meetings providing background materials. The key aspects of almost every issue will have been previously communicated.

3.4.2 Engineering Data Base
Start-up organizations created to perform major new programs always suffer from the lack of a mature and pervasive engineering data base. Individuals bring applicable materials to the effort, but the organization as a whole does not have a common data base of materials, suppliers, standards, reports, handbooks, etc. from which to synthesize solutions to problems. This fact significantly impacts the effectiveness of the organization as the necessary assembly and dissemination of data and information is accomplished. Typically, a major scratch start-up requires about six months to a year before the useful data base exists and has become effectively disseminated within the program.

An aggressive data management function can accelerate the necessary diffusion of information and data (formal and informal).

3.4.3 Program Plan
The purpose and scope for a well-founded program plan is described elsewhere on this site. The risk of concern here is that the Program Plan is often confused with the Program Management Plan (PMP). The Program Plan is an executive level document whereas the PMP is at the level of configuration management, quality and system engineering plans. Too often, Program Managers fail to formulate and promulgate a succinct, but definitive plan for their programs. The result is that lower tier plans often set goals and priorities at odds with the overall mission.

A program plan needs to be produced to provide a summary of the program's missions, functions, goals, priorities and other guiding information.

Typically, the Program Plan should be prepared within a month or two of the start of the program (assuming a multi-year effort). For short efforts (say two years), the plan should be a kickoff document.

3.4.4 Concurrent Engineering Trick
There are simple ways to use concurrence to avoid some risks. Assuming that a program is organized with a PM, primary functional managers, and key support managers within the parent organization, one way to promote concurrence is simply to have all major documents (formal and informal) approved by all functional and support managers. This process is normally implemented as formal approval by the manager(s) of the producing department(s), with the other managers signing in concurrence. In effect, every manager reviews all major documents. (Of course, the reviews are normally delegated to subordinates, but the managers are held accountable.) As a minimum, the following managers must review all documents: design, system engineering, software, program management (PMS), test, manufacturing, contracts, subcontracts, etc.

A reciprocal process is to have all incoming materials routed to the same managers for review for (initial) impact assessments. "No impact" is an acceptable response, but the response is required.

A recommended approach is to have the data management function be responsible for the grunt work needed to make these procedures happen.
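That grunt work reduces to tracking outstanding responses per document, as in this sketch (the reviewer list follows the minimum set above; the responses are invented):

    # Track concurrence on a major document. "No impact" counts as a
    # response; a missing entry is an outstanding action.
    REVIEWERS = ["design", "system engineering", "software",
                 "program management", "test", "manufacturing",
                 "contracts", "subcontracts"]

    responses = {"design": "approved", "test": "no impact"}  # illustrative

    outstanding = [r for r in REVIEWERS if r not in responses]
    print("Outstanding reviews:", outstanding)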

The risks avoided by these procedures are chiefly those that flow from cross-functional impacts going undetected until they are expensive to accommodate.

4. Risk Management Structure
The basic structure recommended for risk management consists of a Risk Manager who is responsible for the definition, structure, implementation and coordination of a risk management approach consistent with the program, system engineering, test, manufacturing and verification plans. The risk manager works on the staff of the program manager. The risk management job is comparable to that of Configuration Manager, Data Manager, Program Management (PMS) and other staff-level positions that do not have a direct objective-product development role.

It is the Risk Manager's job to coordinate the risk management activities within the prime's organization and with all subcontractors. The Risk Manager assists the Program Manager in the PM's role of Risk Board Chairman.

The Risk Manager schedules and oversees the production of all risk reviews, either as stand-alone events or as part of management reviews. This entails alerting risk owners and risk board members to support requirements for such reviews. The Risk Manager is responsible for preparing and distributing the minutes from risk board meetings.

The risk manager is responsible for coordinating and presenting at least the summary of risk management activities at all major reviews.

4.1 Functions
The basic functions for risk management are:

Program Manager: Principal Risk Owner for Program

Risk Board: A Non-Voting Advisory Board for assisting PM in resolving risk management issues.

Risk Manager: Performs the duties described above. The Risk Manager also is responsible for:

Writing the Risk Management Program Plan.

Identifying requirements for risk management consultants.

Providing training in risk management.

Coordinating risk management inputs for ECPs.

Coordinating risk management activities for subcontractors.

Preparing briefing materials on risk management for the program manager.

4.2 Phases
The recommended approach to risk management involves three phases:

Pre-Proposal/Proposal

Start-up

Post-SDR

The emphasis will be on the risk manager's role in the discussions of these phases.

4.2.1 Pre-Proposal/Proposal
The primary functions for risk management are:

Stage a Proposal Manager's Risk Review for use as a proposal focus: develop a list of concerns, and then filter the list for risks for inclusion in the risk list. A description of a typical Proposal Manager's Risk Review is given in Appendix C.

Compile and maintain the status of the risk list.

Provide any risk training required for the proposal.

Write the inputs to the proposal re risk management.

Develop work-arounds to overcome any shortcomings of the RFP with respect to risk.

Provide whatever level of draft is required by the RFP for the Risk Management Program Plan. If no requirement is imposed, assure that at least an outline of the Risk Management Program Plan is included in the proposal.

Assure that all proposal elements are kept abreast of risk issues and the status of key risks.

4.2.2 Start-Up
For the present purposes, the start-up phase is defined as that period of the program prior to the completion of the SDR. Tasks for Risk Management include:

Finalize the Risk Management Program Plan.

Develop a definition of training required, and develop a training plan.

Coordinate the Risk Management Program Plans with the risk managers for subcontractors (primary) and team members. Train these organizations as required. (Usually just have the subcontractor Risk Managers attend the training at the prime.)

Stage a Program Manager's Risk Review to update the risk lists for post-award impacts.

Present the risk management approach and available results at program and technical reviews.

Coordinate the first and subsequent meetings of the Risk Management Board. Issue a roles and responsibilities write-up for Board members.

Coordinate risk management activities and actions with all standing committees and working groups (test, interface, etc.).

4.2.3 Post-SDR
The Risk Manager's primary job is to assist in the tracking of the risk management activities, and to accomplish the routine board and review functions. The Risk Manager provides a focus for risk assessment (re-review) for all ECPs. Any new risks are captured in the on-going process.

The Risk Manager's job can be abolished as a special activity at any time following the start-up provided there is confidence that the risk plans are being accomplished without significant problems. The risk assessment necessary for ECPs can be delegated to the Program Management Office (or whatever function is responsible for ECPs).

5. Risk Management Tools
The primary functions for the risk management tools are to assist in the assessment of risks, to assure that risk assessments address all pertinent aspects of the program and to provide specific means of overcoming the underlying bases for the risks. The WBS, SOW and Proposal are recommended as structures for assessing risks. Make-or-buy decisions, development tests and engineering analyses are, of course, means of mitigating the risks by overcoming inexperience and/or a lack of knowledge of specific issues.

The key to assessing risks is to identify any and all aspects of the program with some degree of newness. If this goal is accomplished then virtually all risks have been identified. The recommended review process is to have every functional element of the organization and the primary support organizations review every WBS element, every SOW paragraph and every proposal paragraph. Each reviewing organization will provide an item-by-item summary identifying items of no impact, items of concern and items that definitely involve new aspects.

Normally, these reviews are performed by the appropriate organizations as "homework" prior to the Program Manager's Risk Reviews.
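
For illustration, the item-by-item summaries are easily tabulated. A minimal Python sketch follows (the organization and item names are hypothetical); it collects, for each reviewed item, the organizations that flagged it as a concern or as involving new aspects:

    def flagged_items(reviews):
        """reviews: {organization: {item: rating}}, where a rating is
        'no impact', 'concern' or 'new aspect'. Returns, for each item,
        the organizations that flagged it and their ratings."""
        flagged = {}
        for org, items in reviews.items():
            for item, rating in items.items():
                if rating != "no impact":
                    flagged.setdefault(item, []).append((org, rating))
        return flagged

    reviews = {
        "Thermal":  {"WBS 1.2.3": "no impact",  "SOW 3.4": "concern"},
        "Software": {"WBS 1.2.3": "new aspect", "SOW 3.4": "no impact"},
    }
    print(flagged_items(reviews))
    # {'SOW 3.4': [('Thermal', 'concern')], 'WBS 1.2.3': [('Software', 'new aspect')]}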

5.1 WBS
The WBS encompasses the structure of everything that will be done or delivered in a program. Therefore, assessing each and every element of the WBS will, in most programs, assure overall closure of the risk assessment. Each WBS element should be reviewed by each organizational element as noted above. This approach is a beginning of concurrent engineering and assures that inter-functional, inter-discipline and inter-specialty concerns are accommodated.

Specific attributes of the WBS that make it a valid basis for such reviews are:

The WBS identifies in a structured form all elements of the program in each phase, and provides a comprehensive framework for assessing each and every aspect of the program for potential risks.

The specification trees map directly to the WBS, which provides traceability between performance requirements and risks for hardware and software items.

The WBS provides a direct exposition of the system hierarchy and interfaces for purposes of identifying risk propagation.

The WBS can also provide a single point-of-contact for each risk through the management structure, i.e., the individual responsible for the CWBS work package.

One problem with the WBS as a review tool is that care must be taken to assure that all external influences on any elements are considered in the reviews. Such influences include interfaces of any type (intra-program and external) and such issues as GFE, special test equipment, etc. The specification and ICD "trees" can provide a structure to assure interfaces are not neglected.

There is also the consideration that the WBS must be well formed or it becomes a risk in itself and a shaky basis for reviewing risks. Problems with the WBS should be reported on an element-by-element basis as an issue for consideration as a risk. A typical problem is the lack of interface hardware elements when such hardware is clearly needed. Awkward WBS constructs can also create risks. For example, some WBS structures are very difficult for purposes of subcontracting, manufacturing scheduling, ICO for the prototypes, interface control, etc.

5.2 Statement of Work
The SOW should be examined in a fashion similar to the WBS review, to the same extent and for the same purposes. This review should follow that of the WBS with special emphases on the items of concern from the earlier review.

5.3 Proposal
The proposal may or may not be an element of the contract. Some agencies apparently do not count it as a binding contract term but, in any event, it provides another structure against which risk can be judged. It should be reviewed in the fashion of the review for the WBS and SOW.

5.4 Make or Buy Decisions
The make-or-buy process usually weighs risk as a factor in the decision to use internal or external resources. An often-used and reliable vendor supplying goods and/or services that the vendor routinely provides is a low risk. However, the use of a new vendor who is working in an area new to that organization is at least as risky as doing the work internally. That said, accepting the risk of using a subcontractor for a development effort can often be cost effective if the vendor has specific analytical skills and/or test capabilities that would be too costly to duplicate internally.

5.5 Risk Ranking Tools
There are any number of ways to rank risks. If a program or project is small enough (or the risks sufficiently well known) then the risks can be ranked qualitatively in a skull session among senior and knowledgeable staff members. For larger programs, numerical models can be used, remembering that the results are not absolutes. The numerical procedure presented here is derived almost in total from techniques presented in recent and past publications of the Defense Systems Management College.
Blanchard (Reference 3) includes the procedure with an extension to accommodate weighting factors not used here. (The charts and formulas are also better presented in Blanchard.)
Basically, the numerical ranking of a risk is calculated as:

Risk = Pf + Pc - (Pf x Pc)

where Pf is the likelihood of occurrence determined from Table 1, and Pc is the normalized consequence factor determined from Table 2.
Pf is calculated as the average of the values assigned to the columns of Table 1, averaged over the applicable columns only. Thus, if an item under review does not include software, only three values are assigned and the divisor is three.

Risk values are calculated for each risk and the results incorporated into the risk matrix (Figure 1).
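
The calculation is easily scripted. A minimal Python sketch follows; the column names are assumptions standing in for the Table 1 categories, which vary by program:

    def likelihood(column_scores):
        """Pf: the average of the values assigned to applicable columns
        only. None marks a column (say, software) that does not apply
        to the item under review."""
        values = [v for v in column_scores.values() if v is not None]
        return sum(values) / len(values)

    def risk_factor(pf, pc):
        """Risk = Pf + Pc - (Pf x Pc); ranges 0 to 1, higher is worse."""
        return pf + pc - pf * pc

    pf = likelihood({"hardware maturity": 0.3, "software maturity": None,
                     "complexity": 0.5, "dependency": 0.1})  # divisor is 3
    print(round(risk_factor(pf, 0.4), 2))  # 0.58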

Figure 2 can also be used to display relative rankings while maintaining the separate Pf and Pc values. There is no derivation of the curves in the figure. The zones are simply notional to reflect general levels of concern. The curves can be adjusted to suit individual preferences.

5.6 Risk Software

To Be Supplied

5.7 Development Testing
Development testing is almost exclusively a risk mitigation activity. Therefore, the design and implementation of the development test program is a major element of risk management, and it should be approached as such.
The scope of development testing is taken to be any test that precedes acceptance testing in time and scope. Thus, the scope includes everything from any initial feasibility testing through acceptance of the first production articles or the flight article. This scope includes all tests by all agencies (e.g., pre-proposal bench tests, customer IT&E, etc.).
Also, System Integration Laboratory type activities are lumped into the overall development test program.
Such tests are performed in support of scratch development of hardware, modifications to existing designs, and planned improvements; that is, whatever is done to develop a design basis and/or verify the design. The design basis includes the front-end aspects such as feasibility, sizing, tuning, scaling, and calibration data for analytical models. Verification is, of course, the back-end concern with proof of performance.

Having made all of these assertions, it should be noted that the focus here is on the initial phases of development testing by a contractor in an acquisition program. The reason is that this phase of testing more than any other drives the cost and prospects of the test program.

5.7.1 Test Goals
Obviously, the goals of a test program can address technical, programmatic and political risks. Testing to support the meeting of exit criteria when such criteria are imposed is an example of all three types of goals rolled into one. In the following it is assumed that the test goals are known. The implementation of a test structure to achieve the goals is the focus here, and it is seen that to a surprising degree the test structure is independent of the goals!

In this context there is good news and bad news.

5.7.2 Good News & Bad News
The bad news: The design of a development test program is an art. As an art, this design task is best done by an experienced and gifted individual or individuals, people who have the knack for getting the most data with the least resources. Smarter, more experienced and more gifted people will do a better job than others. The bad news is, of course, that such people may not (and statistically will not) be available.

When faced with a budget problem for the development tests of a 1500 °C furnace for a Spacelab application, a bright young design engineer suggested that only the hot section of the two-section bore-type furnace be built and tested as an engineering model, since only the hot section was pushing the state of the art. He showed that this "bobtail" test article would provide data for calibration of the analytical models for the hot and cold ends, provide the necessary materials testing, test the control system, etc. Considerable cost was avoided, the manufacturing schedule was compressed, and all of the original development test goals were satisfied. The fellow has a knack for seeing such possibilities.

The good news is that an adequate, if not brilliant, test plan can be developed by virtually anyone with the time and resources to examine the program's structure and the willingness to iterate the test planning through a couple of cycles.

5.7.3 Build-Test Matrix
The model recommended for the planning of tests is shown in summary form in Figure 3. The model consists of the generation breakdown (or WBS or drawing tree or whatever is available), a build-test matrix for the test program (constructed in support of development of the SEMP and the RMPP), and the master schedule for the test program. These elements are shown for a notional system with three major subsystems, A, B and C, with a level of risk identified for each (high or low in this graphic).

In this notional example, A might be a fluid system, B includes all electronics, and C might be the software. Assume that the software is a first-time application by the developing organization. Hence, it is high risk. This software risk is assumed to bleed over to the hardware for element A1 (say motor-controlled valves rather than manually controlled). Other elements are assumed to be low risks. Note: Proof of principle testing is assumed to have been previously accomplished or not needed here.

The tree and the schedule are, of course, usual elements of program planning. The build-test matrix is a little less usual. This matrix describes the (downward) flow of maturity of hardware as one goes from initial bench-level breadboard tests to final tests of the all-up system. Three key aspects of the matrix should be noted:

Elements of the system enter the matrix (come into existence) at vertical locations that reflect the degree of risk. A system element "moves" downward as it matures through the various phases: breadboard, brassboard, engineering model and pre-production prototype. Not all elements pass through all phases. Mature (less risky) items can enter the flow at lower positions. Note that A1 and C of the notional example enter at the highest level because of their high risk.

Each "box" of the matrix indicates a test that implies: specific purposes and data requirements, test articles, test fixtures, special test equipment, facilities, personnel, documentation, planned test intervals, and test budgets. The overall structure of the matrix and these specific aspects are readily captured in a data base format and/or wall chart (a sketch is given below). The design of a test program is really a matter of juggling these individual test requirements to develop an overall flow that is effective. However, the concept of the build-test matrix provides the framework for keeping the details in order.

The build-test matrix loosely follows the generation breakdown, but in an inverted form. This relationship is implied by the large flow arrow of the chart. The mapping is not absolute in detail, but it is always present to some degree in any development effort. In fact, this mapping is one test of the quality of the design of a generation breakdown. If the matrix does not follow to some degree from the breakdown then the breakdown/matrix should be seriously considered for revision. (Of course, the breakdown must also pass muster against make-or-buy planning, the specification tree, interface working group structure, etc.)

The build-test matrix should also reflect the master test schedule that, in turn, must be tied to the program master schedule. In particular the build-test matrix should support the schedule requirements. Again, as with the breakdown, the matrix and the schedule have (or should have) a common underlying structure. If this structure is not obvious then something is wrong and the design of the schedule and matrix should be iterated.
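
As noted above, the matrix is readily captured in a data base format. A minimal Python sketch follows; the field names and example values are illustrative assumptions, and any database or spreadsheet would serve as well:

    from dataclasses import dataclass

    @dataclass
    class TestCell:
        element: str   # system element, e.g. "A1" or "C"
        phase: str     # breadboard / brassboard / engineering model / prototype
        purpose: str   # specific purposes and data requirements
        articles: str  # test articles and fixtures
        facility: str  # special test equipment and facilities
        interval: str  # planned test interval (tied to the master schedule)
        budget: float  # test budget, dollars

    matrix = [
        TestCell("C", "breadboard", "first-use software checkout",
                 "host rig", "SIL", "1996-Q2", 120e3),
        TestCell("B", "engineering model", "electronics verification",
                 "EM unit", "bench", "1996-Q4", 60e3),
    ]

    # "Juggling" the flow is then a matter of filtering and sorting the
    # records, e.g. every test that needs the System Integration Laboratory:
    sil_cells = [c for c in matrix if c.facility == "SIL"]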

5.7.4 Comments to the Build-Test Matrix
The matrix is presented as a reflection of the breakdown and the schedule. In many instances, the planning of this matrix drives the other two items.

The matrix can be used to identify at what points test articles, support equipment, documentation, personnel, facilities, etc. are needed in terms of maturity, inclusion as procurable items in the breakdown, and required timing from the schedule. Also, the matrix can be used to identify dead-end hardware to avoid and to identify possibilities of reuse and early use of specific items.

As a visual aid, the generation breakdown can be highlighted in color (red = high risk, blue = moderate risk, green = low risk).

It should be understood that, for illustration purposes, only tiers of the build-test matrix down to the subsystem level are shown. In reality, some elements of the system may require tiers from the component level.

Such lower tier testing should be based on incrementally achieving functionality (or, more accurately, confidence in the functionality of designs) in the items that make up each assembly before proceeding to the next tier.

A good approach is illustrated by the real example of a company performing a first-time (for it) development of a high-temperature, automatically controlled furnace with sample-handling mechanisms. The company's expertise was in power supplies, thermal analyses, structural design, and some competence in control software. The development cycle for the mechanisms was first a kinematics prototype, update of the kinematics prototype to powered operation (lab power with manual control), update of the control to prototype software control, incorporation of engineering models of the final power supplies, and finally incorporation of the final control. The result was an extended, but very low risk, approach.

5.7.5 Tricks of the Trade
The best of test programs uses a minimum of resources (people, time, money and hardware) and produces a maximum of data to support the test goals. Some tricks of the trade are:

Finally, the interdependency of analysis and test is a key element of what the tests must accomplish. This aspect is discussed in the section on integration of analysis and testing.

5.8 Engineering Analysis
There is, of course, no need to justify analysis as a key factor in the design and development process. However, there is an apparent need to lobby for the development of analysis plans, and for general rules in the SEMP and/or program plan re the balance of test and analysis to be performed. The latter point is discussed first.

5.8.1 Test Analysis Rule
It is recommended that some sort of rule be promulgated re how specific features of the design will be verified as the development process unfolds. In this sense, verification is the process of establishing a confidence level in design decisions at each point in the development process. An example of such a rule might be that every WBS element with a functional and/or interface requirement be twice tested and twice analyzed for each function and/or interface prior to the beginning of the acceptance activities. The rule can include the proviso that at least one of the tests or analyses must be in an integrated configuration of at least the subsystem level (for a system-subsystem configuration for the end-item(s)). The functions of concern include: dynamics, stress, thermal, power, control, mass properties, stability, etc.

Thus a tire for an all-terrain vehicle may be assessed for its influence on vehicle dynamics analytically with a spring-mass model, tested at the suspension-assembly level to, in part, verify the knowledge from the simple analyses, and, of course, instrumented and tested in cross-country ride tests of the system.

Every product manager, subsystem manager, Design Build Team, etc. should have some such rule and verify that it is met or justify omissions. This is a simple rule and it is not hard and fast, but it will help integrate the test-analysis program.
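
Checking such a rule is straightforward once verification events are logged. A minimal Python sketch follows, assuming a hypothetical event log of (WBS element, function, kind) records:

    from collections import Counter

    def rule_gaps(events, required=2):
        """events: (wbs_element, function, kind) tuples, with kind being
        'test' or 'analysis'. Returns the element/function pairs that
        fall short of the required count of either kind."""
        counts = Counter(events)
        pairs = {(e, f) for e, f, _ in events}
        return sorted(f"{e}/{f}" for e, f in pairs
                      if counts[(e, f, "test")] < required
                      or counts[(e, f, "analysis")] < required)

    events = [("1.2.3", "dynamics", "analysis"),
              ("1.2.3", "dynamics", "analysis"),
              ("1.2.3", "dynamics", "test"),
              ("1.2.3", "dynamics", "test"),
              ("1.2.4", "thermal", "analysis")]
    print(rule_gaps(events))  # ['1.2.4/thermal'] -- justify or add coverage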

5.8.2 System Functional Analysis Plans
The second aspect of the analysis recommendations is that each analytical discipline be required to submit System Functional Analysis Plans that detail what will be analyzed when, with what model, and with what expected results. The SFAPs will span the range of analyses from hand-calculations to final CDR-level models/simulations. The purpose is to assure that there is closure between the scopes of the test and analyses and to avoid omissions in the analytical efforts.

The plans will describe the fidelity of the modeling for each phase of the program: pre-SDR, pre-PDR, pre-CDR and pre-Test (development, qualification and acceptance). These plans will describe the decisions supported by the analyses. Analysis schedules will be included to show the support to trade studies, development tests, monitoring of subcontractors, development of specifications, assessments of environments, etc.

The plans will define test data required to support model development and/or calibrate the analytical tools.

Incremental analysis reports will be prepared for each major milestone: concept analyses at the SDR, a preliminary report at the PDR and a final report at the CDR. Each report contains the data required for the initial activities of the next phase.
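
For illustration, an SFAP entry reduces to a simple record. The following Python sketch shows one possible tabular form; the field names and example entries are assumptions, not a prescribed format:

    from dataclasses import dataclass

    @dataclass
    class SfapEntry:
        discipline: str  # e.g. thermal, stress, dynamics
        phase: str       # pre-SDR, pre-PDR, pre-CDR or pre-Test
        model: str       # fidelity, from hand calculation to full simulation
        decision: str    # design decision the analysis supports
        test_data: str   # test data required to calibrate the model

    plan = [
        SfapEntry("thermal", "pre-SDR", "hand calculation",
                  "radiator sizing trade", "none"),
        SfapEntry("thermal", "pre-CDR", "finite-element model",
                  "verify hot-case margins", "engineering-model calibration run"),
    ]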

5.8.3 Types of Analyses
There are few things that cannot be analyzed in support of design at some level of utility with the tools and techniques available today, but too often organizations opt for the more expensive and riskier empirical basis for product development (scratch and evolutionary). Programs that do not have a strong analytical culture and that do not carefully integrate analysis and test in a sort of leapfrogging to the final products are not doing as professional a job as they should and could for their customers.

Analyses tend to be divided into engineering and specialty analyses. The engineering analyses include such aspects as thermal, stress, dynamics, kinematics, power distribution, and control. Such analyses should be performed for all mechanical and human elements of any system that must perform in anything other than benign environments doing anything other than routine, non-critical and non-safety functions. Any "stress" whatsoever should invoke analysis at the level of engineering models. For example, analysis of the human element should include such aspects as the physiothermal response if the environment is outside the comfort zone. In assessing the need for such engineering analysis, the rule should be that omissions must be justified.

Specialty analyses include such aspects as signatures, radar cross sections, C3I analyses and trades, vulnerability, survivability, human factors, effectiveness studies (at various levels of engagement), mobility, logistics, reliability, penetration analyses for armor, terrain generation, fire control, and timing. Every program should include an evaluation of the scope and fidelity of the possible analyses, and then omit analyses only upon justification. (Lack of time, money and priority is often sufficient justification.) A list of possible analyses can be derived rather easily from literature searches.

6. The Human Element
Considerations of and concerns for the human element are at the core of management of any endeavor, but risk management is not human-centered. It focuses on such things as the policies, procedures, planning and results of a program without any real regard for who does what.

However, from a risk management perspective it is common to ponder how we ever get anything done considering our frailties and often less-than-grand motives; and yet we do get things done. Therefore, the following is offered as a statement of the positive as much as a condemnation of the negative effects of human nature on the failure of programs.

All programs experience some degree of risk and associated failure simply because of clumsy, dumb, indifferent, misdirected, naive, gutless, intellectually lazy and intellectually dishonest behaviors on the part of their principals. And this is the short list for this type of behavior. There is not too much that can be done about the majority of such factors from the perspective of risk management except marvel at our ability to be our own worst enemy.

Some generalized examples from experience include:
Clumsy: Shoddy equipment calibration for quality control with the Hubble optics, in a situation screaming for caution.

Dumb: Designing a WBS in the development phase of a program to preclude competition in later manufacture-to-print phases of the program.

Indifferent: Providing shoddy or incomplete work simply because there is little visibility to the effort and the user has little clout.

Naive: The belief that because something sounds logical it will "play out" as envisioned.

Misdirected: Product development teams for the sake of having such teams, i.e., fashion for fashion's sake. (See virtually any recent RFP from the U.S. military.)

Gutless: Management's acceptance of a customer's meaningless, but disruptive inputs...simply to avoid conflict.

Intellectually Lazy: Management by buzz words rather than by sweat equity.

Intellectually Dishonest: Any or all of the rationalizations to defend a failing program from funding cuts, i.e., the program manager's syndrome.

Note that criminal behaviors are not discussed, but are a factor to consider in some environments and organizations.

All of these are behavioral problems on the part of individuals or groups of individuals. While we are all subject to such behavior, we should make an effort to be professional in our approach and also to avoid situations and organizations in which such behavior by others is tolerated. When you find that the inmates have taken over the asylum...check out!

7. References & Background Information
1. "System Engineering Management Guide," Technical Management Department, Defense Systems Management College, 1989

2. Best Value Contracting Workshop II (notebook & videos), June 30-July 1, 1994, Orlando, Florida, National Training Systems Association, Two Colonial Place, 2101 Wilson Boulevard, Suite 400, Arlington, Virginia 22201-3061

3. "System Engineering Management," B. S. Blanchard, John Wiley & Sons, Inc., 1991

4. "Defense Weapons Systems Acquisition," Report No. GAO/HR-93-7, United States General Accounting Office, December 1992

5. "Critical Issues in The Defense Acquisition Culture, Government and Industry Views from the Trenches," J. Ronald Fox, Project Chairman, Defense Systems Management College-Executive Institute, December 1994

6. DoD Directive 5000.1, "Defense Acquisition," March 15, 1996 (and associated documents)

7. "System Engineering, An Introduction to the Design of Large-scale Systems," H. H. Goode & R. E. Machol, McGraw-Hill, 1957


Bibliographical Materials

8. "Risk Management, Concepts and Guidance," Defense Systems Management College, Ft. Belvoir, VA 22060-5426

9. "Program Manager's Notebook," Defense Systems Management College, 1989

10. "Product Design and Development," K. T. Ulrich & S. D. Eppinger, McGraw-Hill, 1995

11. "Design to Reduce Technical Risk," AT&T, McGraw-Hill, 1993

12. "Testing to Verify Design and Manufacturing Readiness," AT&T, McGraw-Hill, 1993

13. "System Engineering," EIA Interim Standard 632 (draft), The Electronic Industries Association, 2001 Pennsylvania Avenue, Washington, DC 20006-1813, 1994

14. Military Standard, Engineering Management, Mil-Std-499A, 1974

15. Military Standard, Engineering Management, Mil-Std-499B (pre-coordination draft, not for official use), 1991

16. "Program Manager's Workstation (Blue Book, Course Manual BMP Course No. 101A)," Best Manufacturing Practices Center of Excellence, 4321 Hartwick Road, Suite 400, College Park, Maryland 20740

17. "Best Practices-How to Avoid Surprises in the World's Most Complicated Process, The Transition from Development to Production," Report NAVSO P-6071, Department of the Navy, March 1986

18. DoD 4245.7-M, "Transition from Development to Production, Solving the Risk Equation"



Appendix A: Risk Management Program Plan (RMPP) Outline

1. Introduction
1.1 Scope of RMPP
1.2 Program Overview & Description

1.2.1 End Items
1.2.2 Organization
1.2.3 Schedule, Milestones & Reviews

2. Risk Management Practices

3. Baseline Risks
3.1 Risks Defined by Contract
3.2 Customer Retention of Risks
3.3 Risk Reviews (accomplishments and/or plans)

3.4 Risks

4. Risk Management Tools
4.1 Development Test Approach
4.2 Analysis Plan
4.3 System Simulations Laboratory
4.4 Maturation Plan
4.5 Verification Plan

5. References

RMPP Appendix A
Risk Ranking Tools
Risk Templates



Appendix B: Risk Management Plan (Outline)

Executive Summary
Describe the risk, its rank and (very briefly) how it will be managed... three paragraphs at most.

1. Risk Background
Description (include identification numbers, "name", description)
How Identified (Review, RFP, Problem, etc.)
Risk Indicators

2. Risk Assessment
Accomplishments
Consequences (summary of worksheet)
Probability (summary of worksheet)
Rank Calculation (show point on risk space)
Affected WBS Elements
Affected Interfaces
Affected SOW Elements
Make-or-Buy Considerations
Affected Schedules & Budgets
Risk Propagation

3. Risk Ownership
Organizations and Position Holders

4. Risk Management Approach
Goals & Description of Approach
Accomplishments
Detailed Description of Selected Method
Risk Specific Tasking (design, test, analysis, simulation, etc.)
Risk Specific Schedules
Risk Specific Budgets
Success Criteria & Decision Points
Brief Descriptions of Alternative Approaches

5. References

Appendices as Required



Appendix C

1. Purpose

The purpose of the proposal manager's review is to achieve a solid basis for the proposal in the following respects:

- Effectiveness of communications in the boiler room environment of the proposal via a common nomenclature re the specific risks and risk management in general

- Baseline definitions of risks and integrated priorities as a basis for business decisions during the proposal

- Firm control by the proposal manager of all facets of all risks

- Capture into the proposal effort of customer risks explicitly cited in the bid package

- Capture into the proposal of the experience of the marketing and chase teams

- Capture into the proposal effort of all specific RFP requirements for risk management

- A basis for transition from the business orientation of the proposal to a performance orientation upon successful award.

2. Participants

- Proposal Manager (not a deputy, not a surrogate... the PM)

- Risk Management Staff (usually RM and an assistant)

- Finance

- Subcontracts (as representatives for probable subcontractors)

- Contracts

- Engineering Discipline Chiefs (structural, mechanical, electrical, thermal, software, etc.)

- Functional Chiefs (system engineering, manufacturing, design, test, etc.)

- Proposal Production Staff Manager

3. Agenda

- Introduction (task at hand, personalities, schedule for session and schedule for post-review activities): 5-10 minutes

Corporate culture will indicate if the meeting is rigid or if people can wander in and out, if refreshments are needed, etc. A free-for-all atmosphere is generally good since, in part, the session is one of brainstorming.

- Proposal Manager's Commitment Statement: Extemporaneous (2-3 minutes)

- Program Description (Chase Leader or Proposal Manager): 5-10 minutes

- Risk Management Primer (Risk Manager): 15-45 minutes

- RFP Requirements: 5-15 minutes

- Issues & Risk Identification Round Robin (outlined in the Primer session): 20-40 minutes (programs in the $1M range) to 4 hours (programs in the $100M range).

The RM leads a person-by-person (first-cut) assessment of the issues of concern to each person in the meeting. No exceptions. Comments are captured onto overheads or whiteboards as handwritten notes (this is the assistant's job, among other things). Literally, up and down the rows, around the table or whatever. No real distinction is made between risks and concerns at this time.

It is helpful to have a checklist of WBS elements and other maps of the system to assure coverage. These are marked as issues are discussed.

Each issue is identified by a name (e.g., radar mounting natural frequency) and a one- or two-line description (e.g., gear has an annoying and possibly dangerous tendency to bang against its stops on climb out).

While the "experts" are expected to initiate specific risks, allied areas (technical and non-technical) are encouraged to participate. When someone does not speak up as expected, assume they have a concern and ask them directly if their silence is ominous.

- Summary Review: 30-60 minutes

After the last person has contributed, the list is tallied ("We have 24 items."), and each item is discussed in terms of a sanity check, adequacy of the name, correction of the description, a leader for the item for the proposal and probable seriousness. The PM then gives his seal of approval for the item as something requiring at least special attention and probably risk management treatment for the proposal. Issues the PM deems too sensitive for team-level awareness are struck from the list and returned to his personal management. Note: The PM can get issues of a routine nature assigned to him for standard risk management.

- Closure: 3-5 minutes

The review closes with the PM noting that the RM will be visiting each individual over the course of the next several days to begin the process of implementing risk management for the proposal. These visits correspond to the interview process of the DSMC's procedures.

The Proposal Manager will note that a brief follow-up meeting may be necessary if the interviewing process significantly impacts the issues list to the point that it must be re-introduced to the team at large. He also notes that risk management will be reported at all status meetings of the proposal team.

He finally notes that an official list of risks will be published for the team's use. This list will include the necessary identification of the risks, assessments of seriousness and consequences and assignments of responsibilities.

4. Comments

This review format works. It is an effective tool for identifying risks, creating a common understanding of risks by all responsible individuals and for the initiation of an integrated and complete risk management approach for the proposal.

The review promotes a disciplined approach to handling these sensitive issues.

There are several factors that are felt to contribute to the effectiveness of the review:

- Early in any effort the enthusiasm is at a high point and parochial positions have not had too long to emerge and solidify.

- Risks are generally well known, and this format lends itself to the capture of common knowledge.

- The PM has direct and immediate control of sensitive issues, so pointless activities are nipped in the bud. Such issues can be pet peeves of key principals, people confusing difficult tasks with risky tasks, and issues that must be very carefully managed from a business sensitivity basis.

5. Preparation

The risk manager must prepare for the review. Some things that must be accomplished are:

- Develop a production and meeting schedule around the PM's schedule. Schedule the resources for the review.

- Develop a list of persons required to attend and a mailing list.

- Review the RFP to identify risks and the risk management requirements explicitly required for the proposal.

- Identify customer risk management standards applicable to the phase of the program (development, preproduction, production, etc.). Obtain copies of the standards for the proposal room library. Send copies of key documents to all participants prior to the review.

- Develop an RM Primer tailored to the situation after assessing the scope of work and holding discussions with the PM and marketing/chase principals. Strive for a succinct presentation of the generic and specific aspects of risk management.

- Develop either a checklist of the WBS or a surrogate list of applicable disciplines, functions, deliverables and interfaces to steer the review process. Discuss it with the PM and other knowledgeable individuals.


