Using Verification Metrics to estimate DO-178B/C Projects

Get the right tools

You will need a time-tracking system that makes collecting these metrics easy.  You can use a specialized time-tracking program, or an application such as MS Project Server® that lets team members report the hours charged to each activity.  The former option is generally simpler and more straightforward, while the latter also lets you update your schedule online.

Collect metrics

Once you start collecting metrics, it will be important to monitor your team to ensure they record their hours correctly.  A weekly or biweekly review by the project manager should suffice for this purpose.

Make sure you track rework separately: almost all projects contain some rework, and updates to previously developed materials can distort your metrics.  Depending on the level of sophistication you want to achieve, you can define a rework category for every task category or simply track all rework in a single category.
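As an illustration, here is a minimal sketch of a time entry that keeps rework separate from first-pass work.  The field and category names are placeholders chosen for this example, not a prescribed format.

```python
# Minimal sketch of a time entry that keeps rework separate from first-pass work.
# Field names and categories are illustrative placeholders, not a required format.
from dataclasses import dataclass

@dataclass
class TimeEntry:
    engineer: str
    task_category: str   # e.g., "Test Procedure Development", "Code Reviews"
    hours: float
    is_rework: bool      # True when updating previously developed material

def hours_by_category(entries):
    """Sum hours per task category, reporting rework separately."""
    totals = {}
    for entry in entries:
        key = (entry.task_category, "rework" if entry.is_rework else "initial")
        totals[key] = totals.get(key, 0.0) + entry.hours
    return totals

# Example: 6 h of new test procedures plus 2 h of rework on existing ones.
log = [
    TimeEntry("engineer_a", "Test Procedure Development", 6.0, False),
    TimeEntry("engineer_a", "Test Procedure Development", 2.0, True),
]
print(hours_by_category(log))
```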

And finally, analyze

To analyze the metrics you collect, first identify the independent variables that determine the effort required to complete a task.  For instance, when assessing the amount of work required for test procedure development, the total number of hours spent developing test procedures is less relevant than the number of hours spent per test case covered.  The first metric can be distorted by how test cases are grouped into test procedures, since every engineer may choose to group them differently.
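A small sketch with made-up numbers shows why the normalized metric is the more robust one:

```python
# Made-up numbers showing why "hours per test case" is more robust than raw
# "hours per test procedure": the grouping of test cases into procedures varies
# from engineer to engineer, but the normalized metric does not.
procedures = [
    {"name": "TP-001", "hours": 12.0, "test_cases": 8},  # many cases in one procedure
    {"name": "TP-002", "hours": 3.0,  "test_cases": 2},  # few cases per procedure
]

total_hours = sum(p["hours"] for p in procedures)
total_cases = sum(p["test_cases"] for p in procedures)

print(f"Hours per procedure: {total_hours / len(procedures):.1f}")  # depends on grouping
print(f"Hours per test case: {total_hours / total_cases:.2f}")      # independent of grouping
```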

Activities can be classified into three categories: those that

  • require a fixed amount of work, e.g., producing a review record or conducting an audit.
  • represent a percentage of the work of other tasks, e.g., project management.
  • depend on parameters of the system to be verified, e.g., development of test cases.

Metrics are particularly relevant to activities in the last category.
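One way to picture the three categories is as three simple estimator shapes.  The rates and counts below are hypothetical placeholders used only to show the difference in form.

```python
# Three simple estimator shapes matching the categories above.
# All rates and counts are hypothetical placeholders.

def fixed_effort(item_count, hours_each):
    """Fixed amount of work per item, e.g., audits or review records."""
    return item_count * hours_each

def percentage_effort(base_hours, fraction):
    """Work that is a percentage of other tasks, e.g., project management."""
    return base_hours * fraction

def parameter_driven_effort(parameter_value, hours_per_unit):
    """Work driven by a parameter of the system, e.g., test case development."""
    return parameter_value * hours_per_unit

audits     = fixed_effort(item_count=4, hours_each=6.0)        # 24 h
test_cases = parameter_driven_effort(120, 1.5)                 # 120 test cases x 1.5 h
management = percentage_effort(audits + test_cases, 0.10)      # 10% of the other work
print(f"{audits:.0f} h audits, {test_cases:.0f} h test cases, {management:.0f} h management")
```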

The following table lists the driving factors of the most common verification activities in a DO-178B/C software project along with the most effective metrics to track them:

 
Table 2. Driving factors for the most common verification activities

| Task | Driving factors | Metrics to track |
|------|-----------------|------------------|
| SW Quality Engineering | # of planned audits & transition records (see your SW Quality Assurance Plan), or the percentage of SW Quality Engineer work against total verification activities | Hours per audit; hours per transition record; % of SQE work hours |
| Configuration Management | # of baselines & document releases; # of Problem Reports | Hours to create a baseline; hours to release a document; hours per Problem Report or Change Request; % of CM work hours / total project hours |
| Project Management | Team size; project complexity | PM hours per team member; % of PM work hours / total project hours |
| Test Case Development | # of requirements; # of conditions per requirement; complexity of requirements; cyclomatic complexity | Test cases per requirement; test cases per decision point |
| Test Procedure Development | # of test cases | Test procedure hours per test case |
| Test Cases & Procedures Review | Development time | % of review time vs. % of development time |
| Code Reviews | # of statements; # of items in Code Review Checklist; code complexity; complexity of the Coding Standard | Hours per statement |
| Timing Analysis | # of time-related requirements | Hours per time-related requirement |
| Stack Analysis | Depth of function call tree; size of local variables; tools used for stack analysis | Call tree depth; hours per function that uses the stack |

To illustrate the use of some of the factors in Table 2, suppose you want to estimate the work required to conduct code reviews on an application.  Start by analyzing qualitative factors such as the size and complexity of the Code Review Checklist, the Coding Standard, and the number of requirements the code implements.  Then determine whether you have metrics from a past project with similar characteristics in terms of technology and complexity; if you do not, you will have to factor in that risk.  Once you have identified the project whose metrics you plan to use as a reference, adjust those metrics for your team's level of experience and any other differences you have identified.
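A hypothetical worked example of that adjustment might look like the following.  Every figure here is assumed purely for illustration, not taken from a real project.

```python
# Every figure here is assumed for illustration only.
reference_hours_per_statement = 0.05  # metric from a past project of similar complexity
statements_to_review = 18_000         # size of the new application

experience_factor = 1.2  # newer team, assumed ~20% slower than the reference team
risk_factor       = 1.1  # checklist and coding standard differ; no closely matching metric

estimate_hours = (statements_to_review
                  * reference_hours_per_statement
                  * experience_factor
                  * risk_factor)
print(f"Estimated code review effort: {estimate_hours:.0f} hours")  # ~1188 hours
```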

Ongoing analysis of metrics during project execution can help you identify problems as they occur.  Metrics will show you not only budget or schedule overruns but also planning errors and technical difficulties the team members may be facing.
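A simple sketch of such ongoing monitoring, with illustrative planned and actual values, compares each tracked metric against its plan and flags the ones drifting over budget:

```python
# Illustrative planned vs. actual values for two tracked metrics.
planned = {
    "Test Case Development (h per test case)": 0.8,
    "Code Reviews (h per statement)": 0.05,
}
actual = {
    "Test Case Development (h per test case)": 1.1,
    "Code Reviews (h per statement)": 0.04,
}

for metric, plan in planned.items():
    ratio = actual[metric] / plan
    status = "OVER PLAN -- investigate" if ratio > 1.10 else "on track"
    print(f"{metric}: planned {plan}, actual {actual[metric]} -> {status} ({ratio:.0%})")
```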

Finally, keep in mind that to compare apples with apples you must also consider non-quantitative factors such as software level, type of testing, and the characteristics of the project (processor type, use of an RTOS, algorithmic complexity, etc.), as well as the risk associated with the team's experience with the application, tools, process, and each other.

Conclusion

Metrics reflect the degree of success of a project and the areas that need improvement.  A well-tuned metrics system is one of the most valuable tools your company can have to increase project predictability and to identify problems on the fly.  The overhead of a metrics tracking system can be minimized by using friendly tools your engineers can adopt with little hassle.  In short, metrics are the best way to learn from project experience; you cannot afford not to learn from them, or you risk making the same mistakes again.

Do you want to learn more about this topic?

Avionyx has gathered many valuable lessons in nearly 20 years of developing DO-178B/C software projects.  If you would like to learn more from our experience with software metrics, feel free to contact us.



 