June 30, 2004
To the U.S. Election Assistance Commission

Dear Commissioners:

Please consider the following information as you prepare your "best practices" document.

As you probably know, on June 24, the House Science Subcommittee on Environment, Technology, and Standards held a hearing to examine how voting equipment is tested and certified. The experts and Congressmen agreed that the testing and certification process is completely inadequate. Dr. Michael Shamos said the process was "not only broken" but "virtually non-existent." No one present disagreed.

This means that NONE of the voting equipment currently in the field has been adequately tested.

Those present at the hearing also agreed that the process cannot be fixed by this November. Certainly the entire process cannot be fixed, but the local pre-election testing could be vastly improved. Adequate local testing would add significantly to the reliability of the election outcomes.

Local pre-election testing is particularly crucial. Here's why. Every voting system includes a key component, called the ballot definition file (BDF), that is never subjected to an outside review. Given that BDFs determine the way votes are recorded and counted, the lack of independent oversight of these files is a major security vulnerability. If BDFs are incorrectly prepared, the wrong candidate could be elected.

BDFs are unique for each election and define all the races and candidates for each precinct. BDFs tell the voting machine software how to interpret a voter's touches on a screen or marks on an optical scan ballot (including absentee ballots), how to record those selections as votes, and how to combine them into the final tally. Local pre-election testing is the only process that checks the accuracy of the BDF constructed for the specific election. For more information on BDFs, see: https://www.votersunite.org/info/ballotprogrammingintro.asp

Based on my nine months of intensive study in this area and interviews with many local election officials, I believe the following recommendations are essential:

First: eliminate the vendors from the pre-election/election/post-election processes as much as possible. When it is necessary to use their services, make sure they are closely supervised and the work they do is open to public oversight.

Second: improve pre-election testing.

For all types of voting machines, pre-election testing must be sufficiently sensitive to detect errors in the way touches on the screen or marks on a ballot are mapped to the result counters. A mis-mapping can cause one candidate's votes to go to an opponent, and it can produce other types of miscounting as well. Indeed, this has happened in many elections.
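The mis-mapping hazard described above can be sketched in a few lines of Python. The dictionary-based "BDF" here is purely illustrative -- real ballot definition files are proprietary vendor formats -- but it shows how the very same ballots produce opposite outcomes when two positions are swapped in the mapping:

```python
# Illustrative sketch only: real BDFs are vendor-specific formats;
# every name and value here is invented for the example.

def tally(ballots, bdf):
    """Count votes by mapping each ballot position to a candidate."""
    counts = {candidate: 0 for candidate in bdf.values()}
    for position in ballots:
        counts[bdf[position]] += 1
    return counts

# Correct mapping: position 1 is Smith, position 2 is Jones.
correct_bdf = {1: "Smith", 2: "Jones"}
# Mis-mapped version: the two positions are swapped.
swapped_bdf = {1: "Jones", 2: "Smith"}

# 60 voters choose position 1 (Smith), 40 choose position 2 (Jones).
ballots = [1] * 60 + [2] * 40

print(tally(ballots, correct_bdf))  # Smith wins, 60 to 40
print(tally(ballots, swapped_bdf))  # the same ballots now elect Jones
```

A test that enters only one or two ballots per race cannot distinguish these two mappings reliably, which is why the scripted test ballots must exercise every position in every ballot style.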

Pre-election testing for optical scan machines is inadequate in many counties. Unfortunately, testing on the paperless, unauditable DREs varies from completely inadequate to virtually non-existent. Some counties run the DRE's self-test, which is merely a simulation; it does not reproduce all aspects of an election, since no one is pressing the buttons to cast the votes. Some counties simply cast enough ballots to ensure that all the buttons work. Some only test a small percentage of the voting machines. No county that I know of tests its DREs with even a fraction of the thoroughness of optical scanner testing (which is often inadequate as well).

Minimum requirements for pre-election testing should include:

a) Counties should do their own ballot programming if at all possible. Currently many counties have their voting machine vendor do the ballot programming; others hire a contract programmer to do it. If county officials must contract out this most important work, then they must make their pre-election testing much more stringent.

b) Counties with optical scan machines should create their own test decks. Currently many counties have the vendor create their decks of test ballots, especially counties with hundreds of ballot styles. If it is unrealistic for a county to create its own test ballots, they should at least add several ballots of each ballot style to each deck -- and then make sure the results match what is expected.

c) Test decks for optical scan machines should be produced by marking a set of the ballots produced for the live election. Recently in Sevier County, Arkansas, the test ballots were produced separately from the live ballots, and the live ballots were then printed incorrectly. Since pre-election testing is intended to catch errors in the ballots as well as in the ballot programming, producing test ballots in a different print run than the live ballots makes no sense at all.

d) Counties that have the vendor do the ballot programming, and print the optical scan ballots, and create the test decks must add extensively to the test deck.

e) Pre-election testing for DREs should be a mock election. This means:

1) Each ballot style should be tested by manually entering a test script of ballots into one of the DREs. This means approximately 30-50 ballots need to be entered by hand, the way a voter would enter them, for EVERY ballot style. This tests the ballot program, which presumably is the same on every DRE, so this process would not have to be repeated for every DRE.

2) EVERY DRE must be tested to make sure it will start up and that the buttons are working correctly. This means that several ballots should be entered on each one: enough to press each button on the screen and to try every feature. The results should match the ballots entered.

3) Testing should not be done in test mode, but in election mode. I realize this adds a complication to the testing process, but test mode runs through different software than election mode does, and since the software is a trade secret, there is no way of knowing how different it is.

4) The results of all these tests should be accumulated in the central tabulation system, so the county has truly run a mock election.

f) For both optical scans and DREs: If a ballot style doesn't pass the test, it must be reprogrammed and the entire test must be done again. If a machine doesn't pass, but the testing passes on other machines, the flawed machine must be taken out of service. This means that counties ought to allow more than just a few days for the test. Reprogramming and re-testing can take quite a bit of time.
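The pass/fail logic in (e) and (f) can be sketched as a small Python check. The data layout and machine names are invented for illustration, not taken from any vendor's system; the point is that each machine's reported counts are compared against hand-computed expectations, failing machines are pulled from service, and the remaining results are accumulated centrally as in a real mock election:

```python
from collections import Counter

# Hand-computed expected counts for the scripted test ballots entered
# on each DRE during the mock election (invented example data).
expected_per_machine = {
    "DRE-01": {"Smith": 20, "Jones": 15},
    "DRE-02": {"Smith": 18, "Jones": 17},
    "DRE-03": {"Smith": 22, "Jones": 13},
}

# Counts each machine actually reported; DRE-02 has swapped its totals,
# so it should fail the test and be taken out of service.
reported_per_machine = {
    "DRE-01": {"Smith": 20, "Jones": 15},
    "DRE-02": {"Smith": 17, "Jones": 18},
    "DRE-03": {"Smith": 22, "Jones": 13},
}

def failed_machines(expected, reported):
    """Return the machines whose reported counts differ from expectation."""
    return sorted(m for m in expected if reported.get(m) != expected[m])

def central_tally(reported, exclude):
    """Accumulate per-machine results centrally, as a central tabulation
    system would, skipping machines that failed the test."""
    central = Counter()
    for machine, counts in reported.items():
        if machine not in exclude:
            central.update(counts)
    return dict(central)

bad = failed_machines(expected_per_machine, reported_per_machine)
print("Failed machines:", bad)                   # ['DRE-02']
print(central_tally(reported_per_machine, bad))  # totals from passing machines only
```

Note that the comparison is per machine, not just on the grand total: two compensating errors on different machines could leave the combined tally looking correct while both machines are faulty.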

Note: recently in Doña Ana County, New Mexico, pre-election testing showed that the Sequoia Insight optical scanners were not incrementing the ballot counters correctly. Having no procedures to deal with a failed test, the county used the machines in the live election anyway.

Thank you for considering these essential testing procedures.


Ellen Theisen