Wednesday, December 15, 2010

Static Analysis Reporting For Success

When looking at effective reporting with static analysis, you have to consider the following:
  • Who is the audience of the information?
  • How do you turn data into actionable information?
  • How does the information support business goals?
  • How is this information delivered?
Static analysis improves software quality and security, so there are a number of potential "actors" affected by the information.  These may include:
  • CEO/CTO
  • VP of Engineering, VP of Products
  • Director of Engineering, QA
  • Manager of Engineering, QA
  • Director of Tools, Infrastructure, Build/Release
  • Manager of Tools, Infrastructure, Build/Release
  • Architect
  • Engineer / Developer / Programmer
  • Tools, Build/Release Engineer
  • QA Engineer
  • Governance / Compliance Analyst
  • Product Manager
Each individual has a stake in the successful usage of static analysis, and each constituent has their own piece of the puzzle to manage or be aware of.  We unfortunately do not have room in this blog post to cover each of these roles.

Business Goals
Creating metrics without business goals as a framework leads to wasted cycles, missing information and superfluous data.  Some common business goals that we see for static analysis are:
  • No static analysis defects in code (sometimes called "Clean")
    • The business goal is to improve software quality and security by addressing all potential issues reported by static analysis.
    • Variations of this include "no critical or high-priority issues" or no defects in a particular safety-critical component of the code.
    • This often requires the defect count to be "managed" down, meaning that there should be a planned downward slope in the number of defects.
    • Targets may be established at certain milestones; for instance, the number of defects should be reduced by 10% with every release or month.  Possibly only the high and critical defects would be managed in this way.
    • Some organizations even manage down "false positives" with the goal that the code should be written so cleanly that a static analyzer couldn't be confused.
  • No new defects (sometimes called "No new harm")
    • The business goal is to at least keep software quality and security at the status quo.  The business argument is that new code tends to be the buggiest and that legacy code has already had a chance to be "exercised."
    • Variations of this include "no new critical or high-priority issues" or no new defects in a particular safety-critical component of the code.  Complexity metrics can also be used as indicators of trouble areas.
    • A baseline should be established (such as at the beginning of a branch pull), and all new defects (or a high and critical subset) introduced through changes in the code should be fixed (a minimal sketch of this baseline comparison appears after this list).
  • Improve Quality and Security Through Voluntary Usage
    • The business goal is to give developers a convenient, optional way to fix software problems.  In practice, though, usage typically decreases over time.
    • The total number of defects fixed should be reported in order to quantify the tool's worth.
  • Benchmark Against Competition
    • The business goal is to be at parity with or better than industry benchmarks.
    • Defect densities are compared to publicly available data, such as open source projects.
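To make the "no new harm" and benchmarking goals more concrete, here is a minimal sketch in Python.  It assumes the analysis tool can export each run as a CSV of defects; the column names, file names, and line-count figure are hypothetical placeholders, not any particular tool's format.

    # Minimal sketch of a "no new harm" check plus a defect-density figure.
    # Assumes each run exports a CSV of defects with (hypothetical) columns:
    # checker, file, function -- real tools differ, so treat the field names
    # and file names below as placeholders.
    import csv

    def load_fingerprints(path):
        # Identify a defect by checker, file and function rather than line
        # number, so unrelated edits that shift code don't register as "new".
        with open(path, newline="") as f:
            return {(row["checker"], row["file"], row["function"])
                    for row in csv.DictReader(f)}

    baseline = load_fingerprints("defects_at_branch_pull.csv")  # baseline run
    current = load_fingerprints("defects_latest_build.csv")     # latest run

    new_defects = current - baseline    # introduced since the branch pull
    fixed_defects = baseline - current  # no longer reported since the pull

    kloc = 412.0  # thousands of lines analyzed, taken from the run's statistics
    print(f"New defects since baseline: {len(new_defects)}")
    print(f"Defects fixed since baseline: {len(fixed_defects)}")
    print(f"Defect density: {len(current) / kloc:.2f} defects per KLOC")

Real tools generally provide a more robust notion of defect identity than this simple fingerprint, but the set difference against a baseline is the essence of the "no new harm" report.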
Sample Reports

The number and types of reports that organizations use to get more value from static analysis are too numerous to cover in blog or article form.  We include just a few report concepts to give you an idea of the kind of reporting that has proven useful to organizations:

Executive Dashboard
  • Number of defects outstanding in the current development branch.  These are, of course, likely to be the types of defects that must be fixed as part of the acceptance criteria.
  • Trend of high-priority defects reported and fixed since the branch pull.  Graphs with pretty pictures always impress executives.
  • Benchmark of quality level compared to industry standards
  • Defect density (current and trend)
  • Latest analysis run statistics - was it broken?  How much coverage was there?
Managerial Dashboard
  • Number of new defects reported since yesterday
  • Number of open defects for each component owner (sketched after these lists)
  • Ranked list of number of fixes by component and by developer
  • Trend in complexity and defect type
  • False positive rate overall and by component and by developer
  • List of open defects by priority and by age
Administrator Dashboard
  • Latest build statistics including lines of code analyzed, number of new defects, analysis times, coverage statistics
  • New false positives marked to review for configuration changes
  • Alerts for broken builds or builds that exceed certain performance and coverage thresholds
Architects
  • Complexity trend over time
  • Ranking of complexity by function
  • New false positives marked for review to audit
  • Defect density by component 
Developers
  • New defects introduced in my code
  • Outstanding defects in my queue
  • Code complexity for my code
  • Benchmarks against other developers in my company
And much, much more.
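As one small illustration of turning raw data into these views, here is a rough Python sketch of the managerial report of open defects by component owner and by age.  The CSV export and its column names (owner, days_open) are hypothetical; substitute whatever your analysis tool actually produces.

    # Rough sketch of one managerial view: open defects by owner and by age.
    # The input file and column names are placeholders, not a real tool's format.
    import csv
    from collections import Counter

    with open("open_defects.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    by_owner = Counter(row["owner"] for row in rows)
    by_age = Counter("30+ days" if int(row["days_open"]) >= 30 else "under 30 days"
                     for row in rows)

    print("Open defects by component owner:")
    for owner, count in by_owner.most_common():
        print(f"  {owner}: {count}")

    print("Open defects by age:")
    for bucket, count in sorted(by_age.items()):
        print(f"  {bucket}: {count}")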

Information Delivery
How information is received plays an important role in how usable the system is.  In general, utilize whatever process flows already exist - for instance:
  • Generate email alerts with a link to get to the details (a minimal sketch appears at the end of this section)
  • Create a new bug tracking entry for new defects (if they meet specific criteria).  Some organizations group static analysis bugs into a single bug tracking entry
  • Display information in an existing continuous integration dashboard
  • Publish information in a wiki or equivalent intranet
  • Display information in a code review tool
  • Generate a PDF of information as part of an executive dashboard
The fewer additional steps required, the more likely the tool will be used.  Creating separate, independent paths to the information will often cause the tool to go unused.
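As a sketch of the first delivery path above, the following Python uses only the standard library to send an alert email that links back to the existing reporting system rather than duplicating the details.  The SMTP host, addresses, and dashboard URL are placeholders for your own environment.

    # Sketch of the "email alert with a link" delivery path.
    import smtplib
    from email.message import EmailMessage

    def send_new_defect_alert(owner_email, defect_count, dashboard_url):
        msg = EmailMessage()
        msg["Subject"] = f"Static analysis: {defect_count} new defect(s) assigned to you"
        msg["From"] = "static-analysis@example.com"   # placeholder sender
        msg["To"] = owner_email
        # Keep the body short; the link carries the reader into the existing
        # dashboard instead of repeating the defect details in the mail.
        msg.set_content(f"{defect_count} new defect(s) were reported in last "
                        f"night's analysis run.\nDetails: {dashboard_url}")
        with smtplib.SMTP("mail.example.com") as smtp:  # placeholder SMTP host
            smtp.send_message(msg)

    send_new_defect_alert("developer@example.com", 3,
                          "https://build.example.com/analysis/latest")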
