Using Verification Metrics to Estimate DO-178B/C Projects

By Gustavo Cubas, Engineering Manager, Avionyx.

While all software projects are unique by definition, they share commonalities that allow companies and project managers to learn from previous experiences.  Looking at past project performance is the first step towards having a well-planned and controlled project.  In order to learn from previous projects, a systematic look at your metrics database is a must.  This paper summarizes many valuable lessons learned that can increase your estimation accuracy in your next DO-178C compliant software project.

The challenge

Before any project starts, we all want to know how much it will cost, how long it will take, how many people will be needed, and, in general, what resources will be necessary to finish it on time and under budget.

Deciding Which Metrics to Use

It is tempting to characterize project productivity and performance by tracking the number of hours per requirement.  While such a metric can be useful as a rule of thumb, it is not enough for budgeting and scheduling a project at the task level.

Productivity is the result of a complex equation that must consider quantitative and qualitative variables.  A partial list of such variables includes the following:

Table 1. Factors that affect verification effort

Qualitative variables:
  • Schedule constraints
  • Team experience (application, tools, process, customer)
  • Project manager experience & mentoring skills
  • Tool capabilities
  • Testing method (manual, automated, high-level, low-level, black-box, white-box, etc.)
  • Level of abstraction and quality of requirements
  • Application complexity
  • Software criticality level
  • Process complexity (plans & standards)
  • Code & requirements stability
  • Software methodology, language

Quantitative variables:
  • # of requirements
  • # of test cases
  • Total statements
  • Total executable statements
  • # of decision points (cyclomatic complexity)
  • Source lines of code per requirement
  • # of test stations per engineer

A project effort estimate must consider the qualitative factors to ensure that previous metrics are relevant.  For instance, when estimating the effort for a graphical library running on a 32-bit RISC processor with a Real-Time Operating System, metrics from a previous project based on a modest 16-bit processor may not be accurate due to differences in algorithmic complexity, development environment, or concurrency.
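One simple way to apply such qualitative factors is to scale a historical metric by a multiplier per factor. The sketch below is illustrative only; the factor names and multiplier values are hypothetical, not from the article:

```python
# Hypothetical sketch: adjusting a baseline metric from a past project
# to account for qualitative differences. All numbers are illustrative.

def adjust_metric(baseline_hours_per_unit, factors):
    """Scale a historical productivity metric by qualitative risk factors.

    factors: dict of factor name -> multiplier (>1.0 means more effort
    than the reference project, <1.0 means less).
    """
    adjusted = baseline_hours_per_unit
    for multiplier in factors.values():
        adjusted *= multiplier
    return adjusted

# Reference project: 2.0 hours per test case on a 16-bit bare-metal target.
estimate = adjust_metric(2.0, {
    "RTOS concurrency": 1.3,       # added concurrency analysis
    "graphics algorithms": 1.2,    # higher algorithmic complexity
    "experienced team": 0.9,       # team already knows the toolchain
})
print(round(estimate, 2))  # 2.81
```

The multipliers themselves should come from your own metrics database, not from guesswork.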

The Approach

First, Commit

Before you start collecting metrics, you will need your team’s commitment to collect them.  If they are not willing to spend five minutes per day filling in a timecard or entering their hours in a time-tracking system, you will need to start by creating awareness in your team of the importance of this endeavor.

Then, Plan

The next step is to plan which metrics you need to track.  Detailed activity-level metrics are good for controlling task performance, but they may be too fine-grained for tracking overall project productivity or for future estimation reference.  This means that if you collect metrics from a detailed project schedule, you also need a second layer of metrics for the category of each activity.  This mapping is illustrated in the following figure.


Figure: Task Classification
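The two-layer mapping can be sketched as a lookup from detailed schedule tasks to coarse metric categories, so that activity-level hours roll up into reusable metrics. Task and category names below are hypothetical examples:

```python
# Illustrative sketch: rolling detailed schedule tasks up into coarse
# metric categories. The task names here are invented for the example.

TASK_CATEGORY = {
    "Write test cases for FMS module": "Test Case Development",
    "Write test procedures for FMS module": "Test Procedure Development",
    "Peer review of FMS test cases": "Test Cases & Procedures Review",
    "Weekly status report": "Project Management",
}

def rollup(timecards):
    """Sum hours per category from (task, hours) timecard entries."""
    totals = {}
    for task, hours in timecards:
        category = TASK_CATEGORY.get(task, "Uncategorized")
        totals[category] = totals.get(category, 0.0) + hours
    return totals

totals = rollup([
    ("Write test cases for FMS module", 6.0),
    ("Peer review of FMS test cases", 2.0),
    ("Write test cases for FMS module", 4.0),
])
print(totals["Test Case Development"])  # 10.0
```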

Get the right tools

You will need to get a time-tracking system that simplifies collecting these metrics.  You can use a specialized program or you can use an application such as MS Project Server® that allows team members to report the hours charged to an activity.  The former option is generally simpler and more straightforward, while the latter option allows you to update your schedule online.

Collect metrics

Once you start collecting metrics, it will be important to monitor your team to ensure they record their hours correctly.  A weekly or biweekly review by the project manager should suffice for this purpose.

Make sure you track rework separately: almost all projects contain some rework, and updates to previously developed material can distort your metrics.  You can have a rework category for every task category, or you can simply track all rework in a single category, depending on the level of sophistication you want to achieve.
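Keeping rework apart can be as simple as tagging each timecard entry. The sketch below (with hypothetical entries) separates first-pass hours from rework hours so first-pass productivity metrics stay undistorted:

```python
# Sketch with invented entries: tag each timecard entry as rework or not,
# so update cycles do not inflate first-pass productivity metrics.

def split_rework(entries):
    """Split (category, hours, is_rework) entries into first-pass
    and rework hour totals per category."""
    first_pass, rework = {}, {}
    for category, hours, is_rework in entries:
        bucket = rework if is_rework else first_pass
        bucket[category] = bucket.get(category, 0.0) + hours
    return first_pass, rework

first_pass, rework = split_rework([
    ("Test Procedure Development", 8.0, False),
    ("Test Procedure Development", 3.0, True),  # fixing procedures after a PR
])
print(first_pass["Test Procedure Development"],
      rework["Test Procedure Development"])  # 8.0 3.0
```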

And Finally, Analyze

To analyze your collected metrics, first identify the independent variables that determine the effort required to complete a task.  For instance, when assessing the work required for test procedure development, the total number of hours spent developing test procedures is less relevant than the hours spent per test case.  This is because every engineer may choose to group test cases into test procedures differently, which distorts the first metric.
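With hypothetical numbers, the distortion is easy to see: two engineers spend the same total hours on the same test cases, but one groups them into fewer procedures, so hours per procedure diverges while hours per test case stays comparable:

```python
# Illustrative numbers: 12 test cases, 24 hours of work, grouped into
# procedures differently by two engineers.

def per_procedure(total_hours, n_procedures):
    return total_hours / n_procedures

def per_test_case(total_hours, n_test_cases):
    return total_hours / n_test_cases

# Engineer A: 12 test cases in 3 procedures; Engineer B: 12 in 6.
print(per_procedure(24.0, 3), per_procedure(24.0, 6))    # 8.0 4.0 -- misleading
print(per_test_case(24.0, 12), per_test_case(24.0, 12))  # 2.0 2.0 -- comparable
```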

Activities can be classified in three categories, those that:

  • require a fixed amount of work, e.g., producing a review record or conducting an audit;
  • represent a percentage of the work of other tasks, e.g., project management;
  • depend on parameters of the system to be verified, e.g., developing test cases.

Metrics are particularly relevant to activities in the latter category.
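An overall estimate can combine the three categories: sum the fixed-work tasks and the parameter-driven tasks, then add the percentage-based tasks on top. All rates and counts below are hypothetical:

```python
# Sketch combining the three activity categories. Every number here is
# an invented example, not data from the article.

def estimate_total(fixed_hours, parametric_hours, pm_fraction):
    """Fixed-work tasks plus parameter-driven tasks, with management
    overhead applied as a percentage of the rest."""
    base = fixed_hours + parametric_hours
    return base * (1.0 + pm_fraction)

fixed = 4.0 * 10        # e.g., 10 audits at 4 hours each (fixed work)
parametric = 1.5 * 200  # e.g., 200 test cases at 1.5 hours each
total = estimate_total(fixed, parametric, pm_fraction=0.10)  # PM as 10%
print(round(total, 1))  # 374.0
```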

The following table lists the driving factors of the most common verification activities in a DO-178B/C software project along with the most effective metrics to track them:

Table 2. Driving factors for the most common verification activities

SW Quality Engineering
  Driving factors: # of planned audits & transition records (see your SW Quality Assurance Plan), or the percentage of SQE work against total verification activities
  Metrics to track: hours per audit; hours per transition record; % of SQE work hours

Configuration Management
  Driving factors: # of baselines & document releases; # of Problem Reports
  Metrics to track: hours to create a baseline; hours to release a document; hours per Problem Report or Change Request; % of CM work hours / total project hours

Project Management
  Driving factors: team size; project complexity
  Metrics to track: PM hours per team member; % of PM work hours / total project hours

Test Case Development
  Driving factors: # of requirements; # of conditions per requirement; complexity of requirements; cyclomatic complexity
  Metrics to track: test cases per requirement; test cases per decision point

Test Procedure Development
  Driving factors: # of test cases
  Metrics to track: test procedure hours per test case

Test Cases & Procedures Review
  Driving factors: development time
  Metrics to track: % of review time vs. % of development time

Code Reviews
  Driving factors: # of statements; # of items in the code review checklist; code complexity; complexity of the coding standard
  Metrics to track: hours per statement

Timing Analysis
  Driving factors: # of time-related requirements
  Metrics to track: hours per time-related requirement

Stack Analysis
  Driving factors: depth of the function call tree; size of local variables; tools available for stack analysis
  Metrics to track: call tree depth; # of hours per function that uses the stack

To illustrate the use of some of the factors in Table 2, suppose you want to estimate the work required to conduct code reviews on an application.  You can start by analyzing some qualitative factors such as the size and complexity of the Code Review checklist, the Coding Standard, and the number of requirements the code implements.  Then you have to determine if you have metrics from a project with similar characteristics in terms of technology and complexity.  If you realize you don’t have relevant metrics, you will have to factor in this risk.  Once you have identified the project whose metrics you plan to use as a reference, you can adjust those metrics considering the level of experience of your team and any differences you may have identified.
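The code-review estimate described above can be sketched numerically: scale a reference hours-per-statement metric by a risk factor reflecting how well the reference project matches. The numbers below are hypothetical:

```python
# Hypothetical worked example of the code-review estimate: all figures
# are invented for illustration.

def code_review_hours(n_statements, ref_hours_per_statement, risk_factor):
    """Estimate code review effort from a reference metric.

    risk_factor > 1.0 inflates the estimate when the reference project is
    less similar or the team is less experienced.
    """
    return n_statements * ref_hours_per_statement * risk_factor

# Reference project: 0.05 h/statement. New project: 8,000 statements,
# a stricter coding standard, and a newer team -> 20% margin.
print(round(code_review_hours(8000, 0.05, 1.2), 1))  # 480.0
```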

Ongoing analysis of metrics during project execution can help you identify problems as they occur.  Metrics will show you not only budget or schedule overruns but also planning errors and technical difficulties the team members may be facing.

Finally, keep in mind that in order to compare apples with apples, you have to make sure you are considering non-quantitative factors such as software level, type of testing, and the characteristics of the project (processor type, use of an RTOS, algorithmic complexity, etc.), as well as the risk associated with the team’s experience with the application, tools, process, and each other.


Metrics reflect the degree of success of a project and the areas that need improvement.  A well-tuned metrics system is one of the most valuable tools your company can have to increase project predictability and to identify problems on the fly.  The overhead cost of a metrics-tracking system can be minimized by using friendly tools your engineers can operate with little hassle.  In short, metrics are the best way to learn from project experiences; you cannot afford not to learn from them, or you risk making the same mistakes again.

Do you want to learn more about this topic?

Avionyx has gathered many valuable lessons in nearly 20 years of developing DO-178B/C software projects.  If you would like to learn more from our experience with software metrics, feel free to contact us.
