E-Voting Tests Get Failing Grade 

By Kim Zetter | Wired News

Nov. 01, 2004

In 1996, a federal testing lab responsible for evaluating voting systems in the United States examined the software for a new electronic voting machine made by I-Mark Systems of Omaha, Nebraska.

The tester included a note in the lab's report praising the system for having the best voting software he had ever seen, particularly the security and use of encryption.

Doug Jones, Iowa's chief examiner of voting equipment and a computer scientist at the University of Iowa, was struck by this note. Usually testers are careful to be impartial.

But Jones was not impressed with the system. Instead, he found poor design that used an outdated encryption scheme proven to be insecure. He later wrote that such a primitive system "should never have come to market."

But come to market it did. By 1997, I-Mark had been purchased by Global Election Systems of McKinney, Texas, which in turn was purchased by Diebold in 2002. Diebold marketed the I-Mark machine as the AccuVote-TS and subsequently signed an exclusive $54 million contract to supply Georgia with the touch-screen machines statewide. In 2003, Maryland signed a similar agreement.

Last year, computer scientists found that the Diebold system still possessed the same flaws Jones had flagged six years earlier, despite subsequent rounds of testing.

"I thought surely something must have changed in all of that time," Jones said. "There's really very little excuse for the examiners not to have noticed."

Before 1990, the United States had no standards for testing and evaluating voting equipment. Anyone who wanted to make a voting system and sell it to election officials could do so. In 1990, the Federal Election Commission tried to address that weakness by establishing national standards for designing and testing voting equipment. Accredited labs were to evaluate systems at the federal level, while states instituted processes to perform additional testing at the local level.

Election officials point to this "rigorous" testing according to standards as evidence that the systems are fine. But a study commissioned by Ohio last year found all the top e-voting systems had security flaws that testers failed to catch.

The certification process is, in fact, rife with problems, having long been neglected by federal and state authorities who don't have funding or the authority from Congress to oversee the process properly.

The problems arise because:

? The "independent testing labs," or ITAs, that test voting systems are not completely independent of the companies that make the voting equipment. Although the top level of certification is called "federal testing," private labs with no connection to the government conduct the testing. The vendors pay those labs to test their systems, giving the vendors control over such parts of the testing process as who views the results. This lack of transparency means state officials who buy voting machines seldom know about machine problems that occurred during testing.

• The federal standards for voting systems are flawed. They demand little security from vendors and contain loopholes that allow parts of voting systems to slip through without being tested. An upgrade to the standards is in the works but won't be available until mid-2005 and may not fix all of the standards' flaws.

• Procedures for tracking certified software are poor, so even if labs test voting systems, no one can ensure that the software used in elections is the same software that got tested. California discovered this problem last year when it found that Diebold installed uncertified software on machines in 17 counties.

Despite the problems, few election administrators admit the certification process is inadequate. This doesn't surprise Jones.

"If election officials admit that the standards and certification process are bad, then public confidence in elections is threatened (and) participation in elections will go down," Jones said. "So the question is, do you talk about this? The answer seems to be, for a lot of people in the election community, no."

When the standards came out in 1990, they addressed punch-card, optical-scan and first-generation direct-recording electronic machines, the precursor to today's touch-screen machines. But it would take another four years before any testing occurred, because Congress failed to provide the FEC with funds or a mandate to oversee testing.

In 1992, the National Association of State Election Directors, an informal association of election administrators, assumed the voluntary task of accrediting labs and overseeing the testing process. In 1994, Wyle Laboratories in Huntsville, Alabama, became the first lab to test voting equipment. Two other labs followed suit later on. 
 
   
In the past year, voting activists have decried the secretive nature of voting-machine testing, saying no one knows how labs test equipment or how the equipment performs in tests. Generally only vendors and a handful of computer consultants who volunteer for NASED see test reports, and the latter sign nondisclosure agreements, or NDAs.

States can obtain lab reports by making review a condition of certification. But Jones said the reports contain little information to help him evaluate systems, and he's not allowed to speak with the testing labs because the labs sign NDAs with the vendors.

David Jefferson, a computer scientist with Lawrence Livermore Laboratories and a member of California's voting systems panel, doesn't fault the labs; NDAs are common in the testing industry. But he faults NASED for failing to force vendors to make testing more transparent.

"It's more important for the public interest that the reports be openly debated and published," Jefferson said. "Voting systems are not just ordinary commercial products. They are the fundamental machinery of democracy."

Tom Wilkey, former chair of NASED's voting systems board, said as long as the federal government refuses to pay for testing, which runs between $25,000 and $250,000 per system, the only way to get testing done is to have vendors pay for it. As long as they pay for it, they can demand NDAs. The situation isn't ideal, he said, but it's better than no testing at all.

The labs say there's no mystery to how they test voting systems. The voting standards describe what to look for in a system, and a NASED handbook lists the military testing standards they follow.

Testing is a two-part process. The first covers the hardware and firmware (the software program on the voting machine). The second covers the election-management software that sits on a county's server and programs ballots, counts votes and produces election reports.

Only three labs test voting equipment. Wyle tests hardware and firmware, and Ciber Labs, also based in Huntsville, tests software. SysTest, in Colorado, began testing software in 2001 and now tests hardware and firmware, as well.

The hardware testing consists of "shake 'n' bake" tests that measure things like how systems perform under extreme temperatures and whether the hardware and software work the way the company says they do.

As for software, Carolyn Coggins, SysTest's director of ITA operations, said the labs examine counting accuracy and read the source code line by line to look for adherence to coding conventions and security flaws, "such as hard-coded passwords." (The latter was one of the flaws testers failed to catch in the Diebold system year after year.) She said they also test for Trojan horses and "time bombs," malicious code that activates at a specific time or under certain conditions.
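
To make the idea concrete, the sketch below (in Python, using a hypothetical source directory name) shows the kind of line-by-line scan for hard-coded credentials that such a review implies. It is an illustration only, not the labs' actual tooling or any vendor's code.

    import re
    from pathlib import Path

    # Flag obvious hard-coded credential assignments, e.g. password = "letmein".
    HARDCODED_SECRET = re.compile(
        r'\b(password|passwd|pin|secret|key)\s*=\s*["\'][^"\']+["\']',
        re.IGNORECASE,
    )

    def scan_source_tree(root):
        """Return (file, line number, text) for each suspicious assignment found."""
        findings = []
        root_path = Path(root)
        if not root_path.is_dir():          # nothing to scan
            return findings
        for path in root_path.rglob("*.c*"):  # C and C++ sources
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if HARDCODED_SECRET.search(line):
                    findings.append((str(path), lineno, line.strip()))
        return findings

    # "voting_system_src" is a hypothetical directory standing in for a vendor's code drop.
    if __name__ == "__main__":
        for file, lineno, text in scan_source_tree("voting_system_src"):
            print(f"{file}:{lineno}: possible hard-coded credential: {text}")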

Once a system passes testing, states are supposed to run functional tests to make sure the machines meet state requirements. Before elections, counties run logic and accuracy tests to ensure that votes going into machines match those coming out of them.
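
A logic and accuracy test boils down to a simple invariant: tallies computed from a test deck of known ballots must match the tallies the machine reports. The Python sketch below illustrates that invariant; read_ballot is a stand-in for the vendor's tabulation path, not any real system's interface.

    from collections import Counter

    def logic_and_accuracy_test(test_deck, read_ballot):
        """Run a test deck of known ballots through a tabulator and compare
        the expected tallies with the tallies the machine reports."""
        expected, reported = Counter(), Counter()
        for ballot in test_deck:                      # ballot: {contest: choice}
            for contest, choice in ballot.items():
                expected[(contest, choice)] += 1
            for contest, choice in read_ballot(ballot).items():
                reported[(contest, choice)] += 1
        return expected == reported

    # A correct tabulator passes: the votes going in match the votes coming out.
    deck = [{"Governor": "A"}, {"Governor": "B"}, {"Governor": "A"}]
    print("PASS" if logic_and_accuracy_test(deck, read_ballot=lambda b: b) else "FAIL")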

If done properly, state tests can uncover problems that slip by labs. But Steve Freeman, a member of NASED's technical committee who also tests systems for California, said state tests are often nothing more than sales demonstrations for vendors to show off a system's bells and whistles.

One of the biggest problems with testing is the lack of communication between state officials and labs. There's no procedure for tracing a problematic system back to the lab that passed it, or for forcing vendors to fix their flaws. In 1997, Jones wanted to notify the lab that passed the I-Mark system that it was flawed, but NDAs prevented him from doing so. He did tell the vendor about the flaws, but the company was unresponsive.

"There are 50 states," Jones said. "If one of them asks tough questions ... you just go in search of states where they don't ask hard questions."

Additionally, there's no process for sharing information about defects with other election districts. In Wake County, North Carolina, during its general election in 2002, a software flaw caused touch-screen machines made by Election Systems & Software to fail to record ballots cast by 436 voters. ES&S later revealed it had fixed the same problem in another county a week earlier but had failed to warn Wake officials.

"There should be a channel through which the deficiency is communicated so that ITAs are officially informed (about problems), and the FEC, or some other government agency, can be informed as well," Jones said. 
   
Of all the tests performed on voting systems, the source-code review receives the most criticism. Jones said that, despite Coggins' assertions, the review concentrates more on programming conventions than on secure design. The reports he saw focused on whether programmers included comments in the code or used an acceptable number of characters on each line, rather than whether votes in the system would be safe from manipulation.

"They're the kinds of things you would enforce in a freshman or sophomore-level programming course," Jones said. "There's no deep examination of cryptographic protocols to see whether the programmers made the best choices in terms of security."

In their defense, the labs say they can only test what the standards tell them to test.

"There's a big misunderstanding that the labs have control over anything," said Shawn Southworth, who coordinates software testing for Ciber. "We test the systems to the standards and that's all we do. If the standards aren't adequate, then they need to be d or changed."

"Our job is to hold the vendors' feet to the fire, but our job is not to build the fire," Coggins said.

Everyone agrees the standards are flawed. Although the 1990 standards were updated in 2002, they contain a loophole that allows commercial off-the-shelf software like the Windows operating system to go unexamined if a vendor says it hasn't modified the software.

But Jones said a faulty system once slipped through testing because a lab declined to examine the operating system after the vendor upgraded to a new version of Windows. The new version had the unintentional consequence of revealing every vote cast by previous voters to the next voter who used the machine.

The biggest bone of contention in the standards is security. Jones said the standards don't specify how vendors should secure their systems.

"They say the system shall be secure," Jones said. "That's basically the extent of it."

Southworth said the labs try to encourage vendors to go beyond the standards to design better equipment, but they can't force them to do so.

But Jones said even if the standards don't require specific security features, the labs should be smart enough to catch blatant flaws like the ones the Ohio examiners uncovered.

"These reports demonstrate that there really is a reasonable expectation of what it means for a system to be secure ... and that there are objective questions that should be asked that never got asked by the labs," Jones said.

Standards and testing are moot, however, if states can't ensure that the software on voting machines is the same software that got tested. By the time a lab tests a system, which can take three to six months, voting companies can upgrade or patch their software a dozen times. States have no way to determine if vendors altered the software on their machines after it went through testing. They have to rely on vendors to tell them.

A voting software library established at the National Institute of Standards and Technology last week will help alleviate this problem, but it can't solve all certification ills.
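
The idea behind such a reference library is simple: publish cryptographic fingerprints of the certified build so anyone can check the software actually installed against them. The sketch below is a minimal illustration of that comparison, assuming hypothetical file paths and a hypothetical table of reference hashes; it is not the library's actual interface.

    import hashlib

    def sha256_of(path):
        """Fingerprint an installed component for comparison with the certified build."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_installation(installed_files, reference_hashes):
        """Return the components whose installed hash differs from the reference hash."""
        return [name for name, path in installed_files.items()
                if sha256_of(path) != reference_hashes.get(name)]

    # Hypothetical usage: reference hashes would come from the software library and
    # installed paths from machines actually deployed in a county, e.g.
    #   verify_installation({"ballot_station": "/opt/voting/ballot_station.bin"},
    #                       {"ballot_station": "<hash recorded for the certified build>"})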

Two years ago, Congress established a new agency to oversee the standards and testing. The Election Assistance Commission is currently upgrading voting standards and establishing new procedures to make testing more transparent. But the agency is already experiencing funding problems, and it will likely take several years before all the problems are ironed out.


