May 3, 2004
U.S. Election Assistance Commission
To the Members of the Election Assistance Commission:
Certainly you have heard a great deal of testimony from computer experts and software engineers. Perhaps you have not heard from someone with my particular kind of technological expertise and experience. I have been a technical writer for more than twenty years, and for twenty of those years my focus has been software documentation. I have successfully owned and operated my own independent technical writing business for fifteen years. I know, first-hand, about the software development process from a perspective different from that of software engineers and computer experts.
When I first heard of the idea of electronic vote-recording and tallying, I immediately reflected on my experience with software development processes, and I became extremely concerned. Computers are invaluable. They provide assistance in a multitude of ways that make our lives much easier and more fruitful. But they should always remain in our service.
Trusting our election outcomes to computer control is unwise in the extreme. The processes that record and tally votes must be open to public scrutiny because errors are inevitable. This is particularly important in the case of software processes that record and tally votes, since errors they make are not random and will be magnified by the number of those machines in use during an election.
Unlike manual processes, software processes cannot be observed by election officials. Officials may be able to observe the inputs and the outputs, but they cannot watch the ones and zeros moving data from here to there to perform the functions the software was designed to perform. Because the processes that manage our elections must be transparent, software processes should never be trusted to record and tally votes; they should be used only to aid election officials in their work.
As evidence for my position, I offer four points I have learned through my experience as a software documentation specialist:
1) Standard bug-tracking and in-house testing procedures do not prevent software from being released with known bugs, as well as unknown bugs.
2) Unanticipated user actions often reveal previously undetected bugs.
3) By definition, elections are a beta test for voting software.
4) Each new election is a potential source for revealing new bugs.
Bug Tracking and Product Testing
I have worked for dozens of companies, and all of them have procedures in place for reporting and tracking software bugs. Some use off-the-shelf bug trackers, such as FogBugz. Whenever I work with a program in order to write user instructions or developer guidelines, a standard part of my service is to maintain a bug list, which I transmit to my client periodically - sometimes daily, sometimes weekly. Other clients instead set me up with a login to their online bug tracker. The number of reports on a bug list commonly runs into the hundreds.
Many software companies have a quality assurance department, and they "freeze" development for some period of time, such as a week or more, before the product release so the QA department has time to test the final product. During this period, no features are added; the only software changes allowed are bug fixes.
It is a well-known fact in the industry that a bug fix can inadvertently introduce a new bug, so many companies have procedures for thoroughly retesting relevant modules after a bug is fixed. Since the danger of introducing a new bug is so high when any change is made to the software, many companies resist fixing late-detected errors, unless it is absolutely essential to do so. In fact, when a bug is found at the end of the development cycle, a common in-joke is to ask the question, "Shall we call it a bug or a feature?" It is not unusual for an insignificant bug to be called a feature, and I write it up as such in the documentation.
Every software company I have worked with maintains a list of the known bugs in every release so they can fix as many as possible in the next release. It is standard practice.
With all the scrutiny on the voting machine vendors and their software, I have heard nothing about their bug-tracking systems or their product test plans. It might be useful to examine their bug reports, the procedures for fixing them, and their test plans and results.
Unanticipated User Actions
One value I bring to my clients is my lack of familiarity with their software. They know what the software does so well that they fail to explain many of the steps, warnings, and explanations that users need in order to understand how to best use the system. Quite simply, the developers know too much to write a good manual.
All in-house testing is less than thorough for exactly the same reason. The developers and testers know what the product is supposed to do, and they know how they expect the user to use it, so those are the actions they test for.
But users take actions that no developer would expect, and the testers - completely familiar with the way the software is intended to be used - fail to test for those unanticipated actions. What I bring to the testing process is my ignorance. I am a simulated user, and I take actions the developers would never expect. While those actions may not be appropriate to the use of the product, they are actions real users are certain to take. It is not unusual for my clients to fix a bug I found and tell me it never occurred to them that a user would do that, so they hadn't included the necessary error-checking or subroutine to handle it.
Voting machine software has a user base with the broadest possible range of education, computer experience, and competence. No other software product has so demographically diverse a user base.
It is logistically impossible, not to mention financially prohibitive, to test voting machines thoroughly, since it would require testing by users from every part of the spectrum. As a result, after in-house testing, voting software inevitably has errors that will only be detected in the field.
Every Election is a Beta Test
The whole of the in-house testing process is known as alpha-testing - the first level of testing. Software companies normally find most of the gross problems during this testing - but not all of them. They know this, so they conduct beta-testing.
They send the alpha-tested software to typical members of their actual user base and simply ask them to use it for a while and report any problems. These users know they are participating in a beta test, so they rarely rely on its results as they would rely on a released product.
It is important to note that Logic and Accuracy (L&A) testing is comparable to alpha testing. It does not simulate actual use in the field; it simulates only the actions users are expected to take in an election. Even if election officials attempted to take all the unanticipated actions real voters will take, they know too much about the intended use of the machines to succeed.
An actual election is the only field-testing available to voting machine vendors. So, the first time a voting software version is used in an election, that election is comparable to a beta test. Furthermore, because of the special circumstances surrounding elections, every election is comparable to a beta test. Unfortunately, an election is not treated like a beta test, and this fact has serious implications:
a) An election offers all the risks of a beta test, with none of the benefits. Since the beta test is an actual election, it is impossible to determine the results of the test. Evaluating the results would require knowing how each voter voted and comparing the votes with the performance of the machine. So, in the first field-test, the voters and the election officials are exposed to the risks of flawed software, but the election fails to fulfill the purpose of a beta test - to find additional bugs before the final release of the software.
b) Every subsequent election is also a beta test. Since the beta test cycle is never carried to completion through the process of reporting and fixing bugs, the next election is necessarily a beta test also, and any bugs that may have impacted the first beta-test/election are likely to impact the next one as well.
c) Users - election officials and voters alike - are asked to rely on the results of the beta test. They are not informed that the election is a beta test. They are not asked to track bugs and report them to the vendors. They expect that the software has been fully tested, which, by definition, it cannot have been since the election is the field test. They are expected to rely on the results as if the product had been fully tested, and because they are not familiar with the software development cycle, they normally do.
Each New Election Might Reveal New Bugs
By the nature of the function it performs, election software can never be tried and true.
Every time a software program is used in a new configuration, previously undetected bugs are likely to emerge. In-house testing can only cover a limited number of configurations. Field-testing for voting software (an election) tests exactly one configuration - the specific set of races and candidates slated for that election.
Software is only as robust as the engineer's ability to anticipate every eventuality. For example, if the engineer creates an array to hold a specified number or type of elements, the software might operate correctly in several elections before a configuration of races and candidates finally exceeds that limit. At that point, an error would be triggered for the first time.
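The array example above can be made concrete with a minimal Python sketch. Everything here is hypothetical - the limit, the function, and the candidate names are invented purely to illustrate the failure mode, not taken from any real voting system:

```python
MAX_CANDIDATES = 10  # hypothetical limit fixed by the engineer at design time

def record_race(candidates):
    """Store a race's candidates in a fixed-size slate (illustrative only)."""
    slate = [None] * MAX_CANDIDATES
    for i, name in enumerate(candidates):
        slate[i] = name  # fails the first time a race exceeds the limit
    return slate

# Works for every election whose races fit within the limit...
record_race(["Adams", "Baker", "Clark"])

# ...until, perhaps years later, one configuration finally exceeds it.
try:
    record_race([f"Candidate {n}" for n in range(12)])
except IndexError:
    print("bug triggered for the first time: 12-candidate race overflows the slate")
```

The point is that the software is not wrong in any election that fits the engineer's assumption; the defect is invisible until the one configuration that violates it arrives.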
Since the number of possible configurations for an election is unlimited, it is impossible to ensure that new errors will not emerge in any given election, no matter how many previous elections were apparently successful.
Viable, Cost-Effective Solutions
Many people are calling for a voter-verified paper trail to be added to DREs. This is a minimal safeguard at best. Any novice programmer can write code that prints one thing on the screen and records another. And since so many software processes are involved in the operation of voting software, the same discrepancy could easily occur by accident.
There are processes to read the screen, to move data to memory, to write to temporary storage, to drive the printer - and many more. It is impossible to test all of these in every possible situation to make sure they are completely accurate and will interface successfully with each other in every case.
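A deliberately simplified Python sketch shows how easily the display path and the storage path can disagree. The function and all names are hypothetical; no real voting system's code is shown, and the divergence is written explicitly here, though an accidental bug in the storage path would look exactly the same:

```python
def cast_vote(selection, memory_record, screen_log):
    """Toy DRE routine: the display path and the storage path are separate code."""
    screen_log.append(f"You voted for: {selection}")  # what the voter (and printer) sees
    stored = selection
    # One mapping or off-by-one bug in the storage path, and the stored
    # record silently diverges from what was displayed and printed:
    if selection == "Candidate A":
        stored = "Candidate B"  # deliberate here; an accidental bug behaves identically
    memory_record.append(stored)

screen, memory = [], []
cast_vote("Candidate A", memory, screen)
print(screen[-1])  # the voter verifies "You voted for: Candidate A"
print(memory[-1])  # yet "Candidate B" is what gets tallied
```

Because the voter can verify only the display path, a paper trail printed from that same path cannot confirm what the storage path actually recorded.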
Computers provide a valuable service. We should keep them in our service, but not give them control over the outcomes of our elections. Electronic ballots are not reliable. None should be used.
I have enclosed a booklet entitled "Myth Breakers for Election Officials," which is a collection of facts compiled from newspaper articles and academic research on the subject of electronic voting. Currently, concerned individuals and grassroots organizations are distributing the booklet to local election officials throughout the country. The President of the Connecticut Registrars Association distributed copies to all 169 Registrars in Connecticut. Hundreds more have been distributed in at least 27 states in the last two weeks. Many of the copies have been hand-delivered. I am already hearing from officials who tell me they find the information quite enlightening and useful.
The facts documented in this booklet illustrate the practical and financial disadvantages of using Direct Recording Electronic voting machines. In addition, they show that thousands of voters have been disenfranchised because these machines are unreliable and complex to use. The facts in this booklet also point to reliable, verifiable, and cost-effective solutions that allow disabled individuals to vote unassisted. They are described below.
Optical ballot scanners used in conjunction with ballot-marking devices
With this system, every vote is verified on paper at the time the voter marks the ballot. The voting and the verification are the same step, unlike the two steps required on a DRE that provides a paper trail. Precinct optical scanners protect against over-votes and warn voters of under-votes, and the ballot-marking device provides the accessibility features required by HAVA for disabled individuals.
Each precinct requires only one scanner and one ballot-marking device, regardless of the number of voting booths needed. In a full DRE system, a machine is required for each booth, and additional peripherals are also required, according to the functionality of the manufacturer's system.
The cost for acquiring these optical scan systems ranges from one-third to one-half the cost of outfitting each precinct with DREs, and if the county is already using scanners, the initial outlay is even lower. While most of the acquisition costs in the near term will be funded by HAVA, counties bear the costs of running an election, storing the equipment between elections, and maintaining their systems. Counties that purchase scanner systems will save taxpayer dollars every time an election is held, since the cost of running an election with a scanner/ballot-marking device is much lower than the cost of a DRE election. This is because the additional cost of printing optical scan ballots for all voters is much lower than the labor cost for hiring additional poll workers and conducting the time-intensive Logic and Accuracy testing required for DRE elections.
Tactile ballot templates used with optical scanners
Tactile ballot templates are currently in use throughout Rhode Island, in conjunction with the state's optical scan systems. The templates are made from standard ballots and are used by visually-impaired voters to allow them to vote unassisted. The voter can feel bumps beside the choices, while an audio explanation of the meaning of each set of bumps assists them in completing their ballots. The cost is a minimal addition to the cost of printing ballots.
While ballot templates would not provide accessibility to voters with severe manual disabilities, if used with a Braille instruction sheet, they would allow voters who are both blind and deaf to vote unassisted - an advantage neither DREs nor ballot-marking devices have.
Free open-source software installed on standard computers
A group of computer scientists and engineers with the Open Voting Consortium is developing electronic voting software that can be run on standard computers, even the older models that have been replaced by leading edge technology. Counties could rent computers for elections or use old computers stored in their basements. The software is open-source, so it is open to public scrutiny.
The voter makes selections on the computer, which then prints out the official ballot. In addition to being human-readable, each ballot includes a barcode of the voter's choices, so they can be electronically tallied.
The system is fully HAVA compliant. OVC voting systems will accommodate different languages and scoring methods, as well as voters with special needs. The OVC expects the software to be certified early in 2005. They will distribute it free of charge.
Not a single piece of hard evidence supports the use of DREs over these HAVA-compliant paper ballot systems. Quite the opposite. Acquisition costs of DREs are two to three times those of scanner-based systems. DREs are more expensive to store and maintain. DRE elections cost counties more than scanner-based elections, because of the labor costs for additional poll workers and the time-intensive L&A testing required for DREs. In addition, elections conducted on DREs are less reliable, more complex, and more prone to human error than paper-based elections. DREs are also more susceptible than scanners to the problems that plague electronic equipment.
Unlike DREs, paper-based solutions provide a true audit trail as required by HAVA and they allow election officials to conduct a transparent, verifiable election. I urge you to adopt standards that eliminate the use of electronic ballots.