APB Breaches


The following learning objectives are covered in this lesson:
  • Identify when program deviations occur and the actions that should be taken by the acquisition manager.
  • Relate the Acquisition Program Baseline (APB) to planning, control, and risk management in attaining cost, schedule and performance goals.

1. A program deviation occurs when the Program Manager has reason to believe that the current estimate for a given cost, schedule or performance parameter does not meet the threshold value specified for that parameter in the Acquisition Program Baseline. The PM must follow certain procedures whenever this occurs:
  • The PM must immediately inform the Milestone Decision Authority (MDA) when a program deviation occurs.
  • Within 30 days of the deviation, the PM must explain to the MDA the reason for the deviation and what steps need to be taken to bring the program back on track.
  • Within 90 days of the deviation, one of the following scenarios must take place:
    • The program is brought back on track; or
    • A new APB is approved, revising only the parameters that were breached; or
    • An OIPT-level review is conducted to evaluate the PM's proposed baseline revisions, and feedback is given to the MDA, or in the case of a major program, to the Defense Acquisition Executive; or
    • If it's not possible for at least one of these actions to take place within 90 days, then the MDA should hold a formal program review to determine the status of the program.
2. Cost, schedule, and performance parameters are interrelated, and a change in one parameter can affect the others. For example, the materials needed for a lighter aircraft may cost more and take longer to design and manufacture than materials in a heavier aircraft. In that case, performance would affect both cost and schedule parameters. Therefore it is important to involve all the key stakeholders when considering changes to the APB. 

Integrated Baseline Review


The following learning objectives are covered in this lesson:
  • Identify the primary factors that the government should review to evaluate the contractor's PMB during an Integrated Baseline Review (IBR).
  • Identify the three reasons for PMB changes, and recognize their impact.
1. The Cost Performance Index (CPI) and Schedule Performance Index (SPI) indicate the performance efficiency factors that the contractor has achieved to date. Anytime the CPI or SPI is running significantly below 1.0, rebaselining may be necessary in order to complete the program. Generally, a CPI or SPI falling 10% or more below 1.0 is considered significant. The To Complete Performance Index (TCPI) indicates the efficiency factor that the contractor must achieve from "time now" to meet the budget at completion (BAC) or estimate at completion (EAC).
A TCPI greater than 1.0 indicates the contractor must work more efficiently than they have in the past to stay within the BAC or meet the EAC. These performance indices may indicate the need to conduct an Integrated Baseline Review (IBR). The IBR assesses the validity of the performance measurement baseline (PMB) and identifies the risks associated with executing to the current PMB. Participants in an IBR typically include the government PM and technical staff, along with their contractor counterparts. During an IBR, the primary factors that are evaluated include:
  • The technical scope of the PMB
  • Program schedule requirements
  • Effective resource allocation to ensure that the work can be accomplished
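The index definitions above use the standard earned value formulas (CPI = EV/AC, SPI = EV/PV, TCPI = (BAC - EV)/(target - AC)). Here is a minimal sketch in Python; the status figures are hypothetical sample numbers, not from any real program:

```python
def cpi(ev, ac):
    """Cost Performance Index: earned value / actual cost."""
    return ev / ac

def spi(ev, pv):
    """Schedule Performance Index: earned value / planned value."""
    return ev / pv

def tcpi(bac, ev, ac, target=None):
    """To Complete Performance Index: the efficiency required on the
    remaining work to meet the target (BAC by default, or an EAC)."""
    target = bac if target is None else target
    return (bac - ev) / (target - ac)

# Hypothetical status (in $K): BAC = 1000, EV = 400, AC = 500, PV = 450
bac, ev, ac, pv = 1000.0, 400.0, 500.0, 450.0
print(round(cpi(ev, ac), 2))        # 0.8  -> more than 10% below 1.0: significant
print(round(spi(ev, pv), 2))        # 0.89 -> behind schedule as well
print(round(tcpi(bac, ev, ac), 2))  # 1.2  -> must far outperform the 0.8 achieved to date
```

In this sample, a contractor that has achieved a CPI of 0.8 would need to perform at 1.2 for the rest of the effort to stay within the BAC, which is the kind of gap that triggers rebaselining discussions.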

2. There may be considerable risks associated with the current PMB, indicating a need to rebaseline the program in order to make it executable. A change to the PMB can result from any one of three causes:
  • Contract changes: applies only to changes or contract modifications directed by the government, not the contractor.
  • Internal re-planning: occurs when the contractor's original plan needs adjustment in response to problems or to the opportunity to capitalize on efficiencies. The contractor PM replans the remaining work using the remaining budget and schedule.
  • Formal re-programming: occurs when the remaining budget and schedule are unrealistic for the remaining work, the contractor requires more time and money, and the original objectives cannot be met. When the replanned PMB exceeds the contract target cost, the result is an over target baseline (OTB).

Software Problems


The following learning objectives are covered in this lesson:
  • Apply a generic problem-solving model to an acquisition situation.
  • Apply one or more selected qualitative tools (e.g., fishbone diagram) to resolve a problem.
  • Identify developer practices essential for creation of high quality software.
  • Identify the requirements for interoperability testing.
1.    One problem-solving technique is the cause and effect diagram or "fishbone" diagram.  By analyzing all the possible causes of a problem, the fishbone diagram focuses on determining the root cause of a problem, rather than on symptoms or solutions.  Typically, the fishbone diagram begins with a statement of the problem in a box on the right side of the diagram--the "head" of the fish.  Then categories of major causes are identified and drawn to the left--the "bones" of the fish.  These major causes are broken down into all the related causal factors that might contribute to the major causes.  Finally, the causal factors are examined and narrowed down to the most significant elements of the problem to determine the ultimate cause or causes.
[Figure: fishbone diagram (https://learn.dau.mil/CourseWare/83_7/rem/images/fishbone.jpg)]
2. The Software Program Managers Network has identified several software best practices based on interviews with software experts and industry leaders.  Here is a synthesized list of some of those characteristics, which are essential for the creation of high quality software:
Adopt Continuous Program Risk Management
Risk management is a continuous process beginning with the definition of the concept and ending with system retirement. Risks need to be identified and managed across the life of the program.
Estimate Cost and Schedule Empirically
Initial software estimates and schedules should be looked on as high risk due to the lack of definitive information available at the time they are defined.
Use Metrics to Manage
All programs should have in place a continuous metrics program to monitor issues and determine the likelihood of risks occurring. Metrics information should be used as one of the primary inputs for program decisions.
Track Earned Value
Earned value requires each task to have both entry and exit criteria and a step to validate that these criteria have been met prior to the award of the credit. Earned value credit is binary with zero percent being given before task completion and 100 percent when completion is validated.
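The binary (0/100) crediting rule described above can be sketched as follows. The task list and budget figures are hypothetical, chosen only to show that partially complete work earns no credit:

```python
def earned_value(tasks):
    """0/100 earned value: a task earns its full budgeted value only
    after its exit criteria have been validated; partial work earns 0."""
    return sum(t["budget"] for t in tasks if t["validated"])

# Hypothetical tasks (budgets in $K); 'validated' means the task's
# exit criteria were checked and confirmed met.
tasks = [
    {"name": "design review",  "budget": 50, "validated": True},
    {"name": "code module A",  "budget": 80, "validated": True},
    {"name": "code module B",  "budget": 70, "validated": False},  # 90% done still earns 0
]
print(earned_value(tasks))  # 130
```

The design choice here is deliberate: by refusing partial credit, the metric cannot be inflated by tasks that are perpetually "almost done."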
Track Defects against Quality Targets
All programs need to have pre-negotiated quality targets, which are an absolute requirement to be met prior to acceptance by the customer. Programs should implement practices to find defects early in the process, as close in time to the creation of the defect as possible, and should manage the defect rate against the quality targets. Meeting quality targets should be a subject at every major program review.
Treat People as the Most Important Resource
A primary program focus should be staffing positions with qualified personnel and retaining this staff through the life of the project. The program should not implement practices (e.g., excessive unpaid overtime) that will force voluntary staff turnover. The effectiveness and morale of the staff should be a factor in rewarding management.
Adopt Life Cycle Configuration Management
All programs, irrespective of size, need to manage information through a preplanned configuration management (CM) process. This discipline requires as a minimum:
  • Control of shared information
  • Control of changes
  • Version control
  • Identification of the status of controlled items (e.g., memos, schedules); and
  • Reviews and audits of controlled items.
Manage and Trace Requirements
Before any design is initiated, requirements for that segment of the software need to be agreed to. Requirements need to be continuously traced from the user requirement to the lowest level software component.
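The traceability chain described above, from the lowest-level software component up to the user requirement, can be represented as a simple parent-link structure. The requirement identifiers below are hypothetical:

```python
# Hypothetical traceability links: each requirement points to the
# higher-level requirement it was derived from.
trace_links = {
    "SW-101": "SYS-10",   # software component requirement -> system requirement
    "SW-102": "SYS-10",
    "SYS-10": "USER-1",   # system requirement -> user capability requirement
}

def trace_to_user(req, links):
    """Walk the traceability chain upward until the root
    (user requirement) is reached."""
    chain = [req]
    while chain[-1] in links:
        chain.append(links[chain[-1]])
    return chain

print(trace_to_user("SW-101", trace_links))  # ['SW-101', 'SYS-10', 'USER-1']
```

A requirement whose chain does not terminate at a user requirement is untraceable, which is exactly the condition this practice is meant to prevent.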
Use System-Based Software Design
All methods used to define system architecture and software design should be documented in the system engineering management plan and software development plan and be frequently and regularly evaluated through audits conducted by an independent program organization.
Ensure Data and Database Interoperability
All data and database implementation decisions should consider interoperability issues and, as interoperability factors change, these decisions should be revisited.
Define and Control Interfaces
Before completion of system-level requirements, a complete inventory of all external interfaces needs to be completed. Internal interfaces should be defined as part of the design process. All interfaces should be agreed upon and individually tested.
Design Twice, Code Once
Traceability needs to be maintained through the design and verified as part of the inspection process. Design can be incrementally specified when an incremental release or evolution life cycle model is used provided the CM process is adequate to support control of incremental designs.
Assess Reuse Risks and Costs
The use of reusable components, whether COTS (Commercial Off-The-Shelf), GOTS (Government Off-The-Shelf), or any other non-developmental items (NDI), should be a primary goal, but any such use should be treated as a risk and managed through risk management.
Inspect Requirements and Design
All products that are placed under CM and are used as a basis for subsequent development need to be subjected to a formal inspection defined in the software development plan. The program needs to fund inspections and track rework savings.
Manage Testing as a Continuous Process
All testing should follow a preplanned process, which is agreed to and funded. Every test should be described in traceable procedures and have pass-fail criteria.
Compile and Smoke Test Frequently
Smoke testing should qualify new capability or component only after successful regression test completion. All smoke tests should be based on a traceable procedure and run by an independent organization (not the engineers who produced it). Smoke test results should be visible and provided to all project personnel.

3.    Interoperability problems can best be identified through the use of actual, live systems to mitigate risk.  Joint interoperability is defined as the ability of systems to provide services to and accept services from other systems and to use the services exchanged to enable them to operate effectively together.  The Joint Interoperability Test Command is responsible for verifying the interoperability of systems to the parameters outlined in the ICD, CDD, CPD and ISP. 

Reprogramming Funds


The following learning objectives are covered in this lesson:
  • Select the appropriate public law (i.e., Misappropriation Act, Anti-deficiency Act, Bona Fide Need) that applies to the use of appropriated funds under specific circumstances.
  • Given a funding shortfall, apply the rules governing reprogramming of appropriated funds in each appropriation category to resolve the problem.
  • Identify the role of Operational Assessment (OA) in reducing program risk.
  • Identify the risks and benefits associated with combined DT/OT.
1. Congress has passed laws to ensure the proper use of the funds they make available for defense acquisition programs:
  • The Misappropriation Act states that funds appropriated by Congress can only be used for the programs and purposes for which the appropriation was made. Using Research, Development, Test and Evaluation (RDT&E) funds to pay for the procurement of items, for example, would violate the Misappropriation Act.
  • The Anti-Deficiency Act prohibits the obligation of funds in excess of an appropriated amount or in advance of receiving an appropriation. In other words, you can't spend more funds than you have or before you have them. Incurring a contractual obligation without having the funds to cover it, for example, would violate the Anti-Deficiency Act.
  • The Bona Fide Need Rule states that appropriated funds may be obligated only to meet a legitimate need arising during the period in which the appropriation is available for new obligations. If a research and development contract were awarded with FY03 RDT&E funds, and a new requirement arises in FY05 beyond the scope of that contract, then using FY03 RDT&E funds to pay for the new requirement would violate the Bona Fide Need Rule.
2. Although there are strict rules governing the use of appropriated funds, Congress recognizes that there are certain situations where some flexibility is needed. Reprogramming is the use of funds for purposes other than those intended by Congress at the time originally appropriated. Note that reprogramming only applies to funds that have already been appropriated by Congress.
Prior approval from Congress is required to move funds between appropriations, to increase the quantities of major systems procured, new starts, or for designated special interest items. However, most reprogramming actions in DoD are approved at the service or agency level, without the involvement of Congress, using below threshold reprogramming. Below threshold reprogramming allows the transfer of funds among programs within an appropriation category, subject to certain limitations. Up to $20 million of procurement funds can be transferred into a line item, and up to $10 million of RDT&E funds can be transferred into a program element, through below-threshold reprogramming.
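The dollar thresholds above can be expressed as a simple rule check. This is a sketch of only the two limits stated in the text; actual below-threshold reprogramming rules carry additional limitations (e.g., cumulative and outflow restrictions) that are not modeled here:

```python
# Below-threshold reprogramming (BTR) limits on funds transferred INTO
# a procurement line item or RDT&E program element (amounts in $M).
BTR_LIMITS = {"procurement": 20.0, "rdte": 10.0}

def requires_prior_approval(appropriation, amount_millions):
    """Return True if a transfer into the line item/program element
    exceeds the BTR limit and would therefore need prior congressional
    approval (simplified sketch)."""
    return amount_millions > BTR_LIMITS[appropriation]

print(requires_prior_approval("procurement", 15))  # False: within the $20M limit
print(requires_prior_approval("rdte", 12))         # True: exceeds the $10M limit
```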

3. An Early Operational Assessment (EOA) is typically conducted sometime before the Design Readiness Review held in the System Development and Demonstration (SDD) phase. Using prototype systems, the EOA identifies potential operational effectiveness and suitability issues during system development. An Operational Assessment (OA) is conducted before Milestone C. Using engineering development models or pre-production systems, the OA provides operational effectiveness and suitability data before low rate initial production begins.

4. Sometimes developmental and operational testing are combined to save resources, time and money. DT and OT are typically combined when the data, resources, objectives, test scenarios, and measures of effectiveness of both tests are similar and compatible. DoD policy encourages combined testing as long as the objectives of both types of testing are met. Combined testing eliminates redundant activities and raises operational concerns in time to make changes in the system design. However, combined tests require extensive coordination, are more difficult to design, and risk compromising test objectives.

Combining DT and OT does not remove the requirement to conduct initial operational test and evaluation (IOT&E), which is required by law for ACAT I and ACAT II programs. IOT&E uses production representative systems and typical user personnel in a scenario that is as realistic as possible. Successful IOT&E is required for the milestone decision authority to make the full-rate production decision. 

Design Changes


The following learning objectives are covered in this lesson:
  • Identify how instability of user capability needs, design, and production processes impact program cost and schedule.
  • Identify the purpose of specific technical reviews and their relationship to the acquisition process.
  • Identify the roles, responsibilities, and methods for interface control and technical data management.
  • Recognize how configuration management impacts all functional disciplines (e.g., test, logistics, manufacturing, etc.)
  • Identify the impact on configuration management when commercial items are used in the system.
  • Relate the different types of program unique specifications to their appropriate configuration baselines and technical review requirements.
  • Trace the maturation of system design information as it evolves through the acquisition life cycle of a system.
  • Identify the relationship between configuration baselines, specifications, and configuration management planning.
  • Identify key acquisition best practices, including commercial practices that impact the relationship between government and industry.
1. Technical reviews are conducted throughout the acquisition life cycle to reduce program risk.  They are event-driven, not schedule-driven, and help determine whether to proceed with development or production.  Technical reviews are used to clarify design requirements, assess design maturity, and evaluate the system configuration at various points in the development process.  They provide a forum for communication across different disciplines in the system development process and establish common configuration baselines from which to proceed to the next level of design.
Types of technical reviews include:
  • System Requirements Review (SRR), in which the system specification is evaluated to ensure that system requirements are consistent with the preferred concept and available technologies.
  • Preliminary Design Review (PDR), in which the top-level design for each configuration item function and interface is evaluated to determine if it is ready for detailed design.
  • Critical Design Review (CDR), in which the detailed Product Baseline is evaluated to determine if system design documentation is good enough to begin production (hardware) or final coding (software).
  • Test Readiness Review (TRR), in which test objectives, procedures and resources are evaluated to determine if the system is ready to begin formal testing.

2. Configuration management is a systems analysis and control tool that is used in the systems engineering process to control the design of a product as it evolves from a top-level concept into a highly detailed design. Through configuration management, we ensure that designs are traceable to requirements, interfaces are well defined and understood, change is controlled and documented, and product documentation is consistent and current.
Configuration management involves development of program unique specifications and other technical data to document the design.  As design requirements are finalized at different levels of detail, configuration baselines are established to formally document those requirements and to define an item's functional and physical characteristics. The baselines progress from the overall system level (functional baseline), to the more specific configuration item level (allocated baseline), down to the detailed level (product baseline):
 

  • Functional baseline ("system specification"): overall system performance requirements, including interfaces. UAV example: the night vision requirement.
  • Allocated baseline ("design to" specification): item performance specifications, i.e., the performance characteristics of specific configuration items, including form, fit, and function requirements. UAV example: the specific light levels and resolutions required of a digital camera for the night vision capability, and the interface requirement for the camera to attach to the air vehicle.
  • Product baseline ("build to" baseline): item detail specifications covering process, procedure, and material details, plus technical documentation. UAV example: camera shutter design details, the detailed design of the video transport circuit, and the drawing showing the locking mechanism for the camera body.
The Government must determine which baselines should come under Government control.  Generally speaking, the Government maintains control of the functional, or system-level baseline; either the Government or contractor can maintain the allocated baseline; and the contractor is usually responsible for the product, or 'build-to' level baseline and below.

3. Interface management involves the definition and control of the boundaries at which product subsystems come into contact with other components of the system.  Effective interface management involves identifying, developing, and maintaining the external and internal interfaces necessary for system operation.  Interface management can become a configuration management challenge when a product is modified.
The contractor is usually responsible for design and control of internal interfaces, while the Government is responsible for external interfaces.  An Interface Control Working Group (ICWG) is often used to establish formal communication links between Government and contractor personnel involved in system interface design.

4. Once a system is fielded, configuration management documentation becomes the basis for supporting the system, whether that support is provided by the contractor or by the Government.  Interoperability and maintenance issues can become very problematic if configuration management isn't done properly.  Even minor changes to a commercial item can create configuration challenges and impact logistics, testing, production and other functional areas.
The contractor will ultimately document the functional, performance, and physical characteristics of their product in a Technical Data Package (TDP).  Ensuring that the TDP is comprehensive and updated regularly is especially important if the Government is going to maintain or modify the system. 

Reviews, Simulations and Tests


The following learning objectives are covered in this lesson:
  • Recognize the importance of modeling and simulation in the defense acquisition process.
  • Recognize the contribution of STEP (Simulation, Test & Evaluation Process) to the development of a system.
  • Distinguish among various types of DT&E (e.g., Production Qualification Tests, Production Acceptance Test and Evaluation).
  • Recognize the relationship between risk management and exit criteria.
  • Identify the information required for a milestone review.
1. One way to effectively manage acquisition risk is through the use of exit criteria, which serve as a litmus test as to whether the program is on track to achieve its goals. In order for exit criteria to be meaningful, they must be unique to not only the program itself, but to each phase of the program. Exit criteria are proposed by the Program Manager and approved by the Milestone Decision Authority (MDA).
Exit criteria can take many forms. However, the criteria should be measurable and reflect progress made in high risk areas of the program. Examples include the achievement of technical capabilities as seen in test results or the maturity of a manufacturing process. Thus, exit criteria are event-driven and considered at program reviews throughout the life of a program. They are critical "show-stoppers": failure to meet an exit criterion could prevent a program from making further progress.
2. Milestone reviews are conducted by the MDA to initiate technology development, authorize program initiation and entry into the SDD phase, and to commit to production and deployment. Information for milestone reviews may be required by statute or regulation. The specific information required for each milestone review can be found in Enclosure 3 of DoDI 5000.2.
3. The use of modeling and simulation (M&S) can be very helpful during the acquisition process. Used as a predictor of future capabilities, M&S can be an inexpensive way to test various capabilities. However, M&S should not be used as a substitute for good test data. Models and simulations can also be modified and reused later in the acquisition process, which can reduce costs in the long run.
The Simulation, Test and Evaluation Process (STEP) is a DoD initiative that attempts to integrate modeling and simulation into the test and evaluation process by combining the two into one process. It is important to note that neither one is used as a replacement or substitute for the other, but are used together to complement each process.
Simply put, the STEP process, which can be used throughout the system life cycle, can be broken down into four steps: Model-Test-Fix-Model. The key to this process is to fix problems as they are discovered, not at the end of the process, and then begin the STEP process all over again in an effort to isolate new problems that might have arisen.
While M&S can be very effective, simulations only provide predictions of a system's performance and effectiveness. Thus, by combining M&S data with the empirical, measurable data provided by T&E, the two processes enhance each other and should result in long term efficiencies and cost savings.
4. Developmental Testing and Evaluation (DT&E) can take many forms during the acquisition process, depending upon what stage of the life cycle the program is in.
  • Component tests take place on individual system parts before being merged into the system as a whole. Component testing is conducted both on hardware items and on software items before they are integrated with system hardware.
  • Integration testing is used to assess compatibility of individual hardware and software components as they are aggregated to form subsystems or systems.
  • Environmental testing, sometimes referred to as the "shake-rattle-roll" part of the testing process, attempts to define how different components react under various conditions, such as temperature and shock.
  • Production Qualification Testing (PQT) is conducted on initial production articles to verify the effectiveness of the manufacturing process.
  • Production Acceptance Testing and Evaluation (PAT&E) is conducted on production items to verify that these items have met contract requirements.
  • Live Fire T&E provides an assessment of system vulnerability and/or lethality.
  • Modification testing can be used during production, or following system deployment, to determine the need for or benefits of any system changes.

Operational and Live Fire Tests



The following learning objectives are covered in this lesson:
  • Identify which organizations develop, coordinate, or approve Critical Operational Issues (COIs).
  • Identify which organizations develop, coordinate, or approve Critical Technical Parameters (CTPs).
  • Recognize how Measures of Effectiveness (MOE) and Measures of Suitability (MOS) are used throughout the T&E process.
  • Recognize the purpose and objectives of Live Fire Test and Evaluation.
  • Distinguish among various types of DT&E (e.g., Production Qualification Tests, Production Acceptance Test and Evaluation).

1. Developmental test and evaluation is essential in determining a system's readiness for initial operational test and evaluation (IOT&E). The results of developmental testing are formally reviewed in an Operational Test Readiness Review (OTRR) prior to proceeding with IOT&E.
  • Critical Technical Parameters (CTPs) are key parameters and developmental testing criteria that are derived from the CDD and from technical performance measures as specified by the Systems Engineering Plan. The CTPs are developed, coordinated, and approved by the T&E IPT within the Program Management Office. Examples of CTPs are an aircraft's cruising speed, range, and altitude.

2. Two types of developmental testing become important as a system nears and enters production:
  • Production Qualification Testing (PQT) is conducted on a small number of initial production items to evaluate the effectiveness of the manufacturing process.
  • Production Acceptance Testing and Evaluation (PAT&E) is conducted on items as a form of quality assurance to ensure that contractual obligations are being met.

3. Operational test and evaluation is conducted to determine if a system will successfully meet the user's capability needs.
  • Critical Operational Issues (COIs) indicate the operational effectiveness and operational suitability needs of a system. They are expressed in the form of a question, developed by an independent operational test agency, and broken down into quantifiable MOEs and MOSs. An example of a COI is: "Does the aircraft accomplish its mission in the battlefield environment?"
  • Measures of Effectiveness (MOEs) are specific, objective measures of system performance that are closely related to mission accomplishment. An example of a MOE is: "Number of targets destroyed."
  • Measures of Suitability (MOSs) are specific, objective measures of how well a system can be maintained and utilized by the end user. They are written and approved by an independent operational test agency. An example of a MOS is: "Aircraft Mean Time Between Failure (MTBF)."
4. In summary, COIs are the primary operational issues that must be answered by the testing program, while MOEs and MOSs may be thought of as the quantifiable measures that can be used to determine whether the COIs have been addressed successfully. In turn, CTPs provide developmental test data that help support the MOEs and MOSs.

5. Under Public law (Title 10, US Code 2366), Live Fire Test and Evaluation is required for certain major systems before full rate production can begin:
  • Survivability testing is required for "covered" systems that are occupied by personnel and designed to provide the personnel some degree of protection in combat situations.
  • Lethality testing is required for all major munitions and missile programs to determine whether the weapon can reliably disable or destroy its target.
Live Fire Test and Evaluation results are sent to the Director, Operational Test and Evaluation (DOT&E), acting as the OSD agent, who then reports them to Congress before a program can move forward beyond LRIP and on to full rate production.