Open Source Quality Improving, But Development Assumptions Need Revising

Coverity has been scanning open source projects' code for two years with its defect-detecting tool, Prevent. It has found that the quality of the code is improving, and also that some favorite, long-held assumptions about software development may not be true.

For example, it has long been assumed that for any project, commercial or open source, the larger the code mass, the more frequently defects will be encountered, due to the growing complexity. With the open source projects, "there's no multiplier based on the size of the code base," said David Maxwell, Coverity's open source strategist.

A bigger code base will have more code defects, but only at the same frequency as when it was half its current size. As a matter of fact, said Maxwell in an interview, "If you give me the size of the code base, I can estimate the number of defects." In other words, the increase is linear and predictable.

Likewise, it used to be considered wise to break down a single, complex function in a program into smaller functions, letting each solve a piece of the problem. After inspecting functions ranging from just 14 lines up to 345 lines in length, Maxwell says, "long functions don't have more defects." He suspects that breaking down a coherent, long function tends to complicate matters and "increase the amount of indirection" contained in a program, making it harder to keep defect-free. So longer functions may become more acceptable, he said.

Over the past two years Coverity has examined the code of 250 freely available open source projects, representing a total of 55 million lines of code. It is doing so under contract with the Department of Homeland Security, which wanted a check on the open source code frequently adopted by government agencies. Coverity has found that defects occur in open source code at a rate similar to that found in commercial code, but conscientious open source projects frequently acknowledge and clean them up faster than commercial code distributors.

By repeatedly conducting static analysis of each project, Coverity has run 14,238 scans over the two-year period, analyzing a total of 10 billion lines of code (many of them multiple times). Static analysis examines a program's source code without running the program.

Coverity posts the results of its scans periodically to its Web site, http://scan.coverity.com. A report on its activity, "Scan Report On Open Source Software 2008," is available at www.coverity.com.

Coverity is coming to the end of a two-year, $300,000 contract with the Department of Homeland Security, but it says it's going to continue the scans and post the results. "Nobody is ever going to have access to this much commercial source code," compared to the 55 million lines available through the open source projects, Maxwell noted. Commercial vendors typically sell and distribute compiled versions of their products, keeping the source code under lock and key as a proprietary asset.

As Coverity posts the results of its scans, open source developers go and investigate the nature of the defects reported. On average, 13.2% are false positives, meaning the flagged code shows the characteristics of a defect but could not actually fail in the running program. So the totals posted may be higher than the number of bugs that a project's developers finally agree to. Maxwell said that false-positive rate is a good one for static code analysis.

The scans have found that defect rates in some open source projects, such as Linux and Samba, are far lower than in others or in common commercial code. The 2.6 version of the Linux kernel had a defect rate of 0.127 defects per 1,000 lines of code, compared to a normal rate of 1 per 1,000 lines of code. That version of the kernel had 3.6 million lines of code.

Samba was even lower at 0.024 bugs per 1,000 lines, while the PostgreSQL database project had a 0.041 rate.

Coverity reported the FreeBSD project's scans publicly, but the project fixes the bugs within its own product without reporting an ultimate rate. Robert Watson, president of the FreeBSD Foundation, said in a statement on the report that having the scans has helped the project team "improve our software development methodology."

"Null pointer dereference" was the most common defect reported, a frequent shortcoming in C programs. It occurs when a program tries to read or write memory through a pointer that is null, that is, one that holds no valid address. Because the dereference cannot succeed, the operation fails, often crashing or stalling the application.




Source: informationweek.com
Posted By: IndoSourceCode
