Engineering and Ethics – Neal Ford

With the assistance of technology, we quite literally control the world. Like Icarus’ wings, technology brings both benefits and dangers. Software is technology too, which means that the things we write can be put to uses both good and ill. Anyone who works with technology must understand the implications of what they create. This essay is an exploration of ethics in software development, with the goal of making you think about the sometimes invisible implications of what you build.

You may ask yourself, What is that beautiful house?
You may ask yourself, Where does that highway lead to?
You may ask yourself, Am I right… Am I wrong?
You may say to yourself, My God… what have I done?

Lyrics from “Once in a Lifetime,” from the album Remain in Light by Talking Heads. Composed by David Byrne, Brian Eno, Chris Frantz, Jerry Harrison, and Tina Weymouth.

Once upon a time, I was a C programmer. One of the books that taught me a huge amount about how to become an idiomatic C programmer was C Chest and Other Treasures by Allen Holub, a collection of the best of the “C Chest” column from Dr. Dobb’s Journal. As much as it taught me about C, the book’s most lasting impression came from its appendix. I read the appendix because of the lyrics quoted above it, the same lyrics at the start of this article. The appendix’s topic seemed odd to me then, but over the years I’ve realized its importance. The appendix was about ethics in software, and that’s the subject of this article as well.

Predicting the Future

On the No Fluff, Just Stuff tour this year, I presented a keynote in some cities entitled Smithing in the 21st Century. Ostensibly, the keynote is about how to predict the long-term future of technology, but it also touches on the topic of ethics because it’s not possible to talk about the future of technology without also discussing its implications, both positive and negative.

When trying to predict the future, I tell people to watch for technology accelerators: events or inventions that advance technology faster than its normal pace. Unfortunately, one of the most effective technology accelerators is war. Before the US Civil War, medicine was shockingly primitive; the war forced rapid advances in surgical technique. Afterwards, surgeons toured the country, teaching seminars that demonstrated their new techniques. Before World War I, airplanes were a novelty item; by the time the war was over, they were becoming sophisticated, reliable machines.

If you look up the word “computer” in a dictionary printed before 1945, the definition was “one who computes”. Computers existed during World War II, mostly as large rooms full of women (because the men were overseas) using mechanical calculators and slide rules. These computers were calculating artillery trajectories, and they couldn’t do it fast enough to satisfy the military. This shortcoming was one of the prime motivators for the military to fund ENIAC, the world’s first operational, general-purpose electronic digital computer. The other motivation for ENIAC’s funding (and its first real application) was the Manhattan Project at Los Alamos, New Mexico. One of the fascinating characters at Los Alamos was Richard Feynman.

Feynman and Regret

Richard Feynman was a Nobel Prize-winning physicist who, among much else, did pioneering work on quantum computing. One famous observation about Feynman (he was so entertaining that a couple of books exist just about his crazy shenanigans) is that anyone who acts the way he did without being a Nobel Prize winner is considered a kook; if you act that way and are a Nobel Prize winner, you’re just eccentric.

Feynman was profoundly eccentric. One of the famous stories about Feynman happened while he was at Los Alamos as one of the physicists. Understandably, security was tight at Los Alamos, and one day Feynman came across a hole in the fence in a remote part of the compound. He notified his superiors, but nothing happened. After a few days, he figured out how to force its repair. He crawled out through the hole in the fence, walked around to the compound’s only gate, and made a point of greeting the guards on duty as he entered. He then went straight back out through the hole and came in through the front gate again. The third time he walked in through the gate without ever having walked out of it, the guards stopped him, forced him to explain, and the hole was repaired.

Feynman was also chosen to serve on the board assessing fault in the space shuttle Challenger disaster. Most of the other board members were engineers and NASA officials; Feynman was the only theoretical scientist on the panel. One of the key issues at the core of the problem was the O-rings, the rubber-like seals in the shuttle’s solid rocket boosters. The main argument presented by the engineers defending the design was that the O-rings did not become stiff when the temperature fell below a certain level, and therefore were not to blame for the accident. While this testimony was being presented, Feynman took the small sample of O-ring material that had been given to each of the board members, placed a C-clamp around it, and dropped it into the glass of ice water in front of him. When it came time for him to comment, he barely needed to say a word: he fished the O-ring material out of the glass and demonstrated that the material loses its flexibility at freezing temperatures. Feynman was smart, but he also thought deeply, and sometimes in unintuitive ways.

In spite of his whimsical side, he also suffered for much of his life because of his involvement in the Manhattan Project. Years later, he wrote:

…with any project like that you continue to work trying to get success, having decided to do it. But what I did – immorally I would say – was to not remember the reason I said I was doing it, so that when the reason changed, not the singlest thought came to my mind that meant now I have to reconsider why I am continuing to do this.

He’s talking about the Manhattan Project. When this project started, the future of the world looked dire. World War II was raging and each side was trying to leverage technology to gain an advantage. Clearly, very good reasons existed for this project at its outset. Once the scientists and engineers started working on it, they got caught up in the problem solving aspects. It became this great unsolved puzzle that they all worked incredibly hard to crack. And they did.

However, during the predawn test of the first atomic bomb, many of the scientists who witnessed the detonation were appalled at what they had helped create. By the time the project reached fruition, the war in Europe was over, and it was clear that the Allies would win the war in the Pacific, albeit with much effort and loss of life. Feynman’s point reveals a mental blind spot common to most engineering types: it is easy to get caught up in the details of a problem and forget (temporarily or permanently) why you are solving it.

Engineering and Ethics

Engineers working on potentially dangerous technology must never lower their guard, and few things carry more potential for danger than applications of computation.

What is the most surveilled society in human history? You might guess the USSR under Stalin, China under Mao, or Iraq under Saddam Hussein. The answer is present-day London, which has over 10,000 surveillance cameras. It is almost impossible to stand in a public space in London and not appear on a camera. With the advent of software that can scan and recognize faces, you are trackable wherever you go in London. Security and crime fighting are the reasons the cameras exist, but what if the government changes and decides that it would like to keep track of selected non-criminals? This is the sticky ethical situation we find ourselves in: security is a good thing, but how do you defend against possible misuse? This problem isn’t as far removed as you might think. The next city to be so outfitted is New York City, where work is under way now to install cameras…and software.

Here is an even tougher judgment call. A few websites have popped up under the category of “entertainment shopping” (they aren’t allowed to call themselves auction sites). Here’s a typical deal: you purchase bids in pre-packaged blocks of 25, and each bid costs you 75 cents, with no volume discount. When you participate in an “auction”, each bid raises the purchase price 15 cents and extends the auction time by 15 seconds. Once the auction ends, the winner pays the final price, but none of the bidders get their spent bid tokens back.

On a recent day, an iPod touch went up for “bid”. The Apple list price on that day was $229. The winner ended up paying $187.65, which looks like a good deal. However, 1,251 total bids were placed, and at 75 cents apiece those bid tokens cost the bidders $938.25. Adding that to the $187.65 final price, the “auction” site actually sold the iPod touch for $1,125.90! Now you can see why they can’t really call it an auction.
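To make the arithmetic concrete, here is a minimal sketch of the economics. This is a hypothetical calculation of my own: the fee structure and bid count come from the example above, and it assumes the price starts at zero and every bid raises it 15 cents.

```python
# A hypothetical sketch of "entertainment shopping" economics.
# The fee structure ($0.75 per bid token, $0.15 price increment)
# and the bid count (1,251) come from the iPod touch example above;
# the sketch assumes the price starts at $0.00 and rises with every bid.

BID_FEE = 0.75          # price of each bid token, paid up front
PRICE_INCREMENT = 0.15  # amount each bid adds to the final sale price

def auction_economics(total_bids):
    """Return the winner's price, the total spent on bid tokens,
    and what the site actually collects for the item."""
    final_price = total_bids * PRICE_INCREMENT
    token_revenue = total_bids * BID_FEE
    return final_price, token_revenue, final_price + token_revenue

final_price, tokens, site_total = auction_economics(1251)
print(f"Winner pays:     ${final_price:,.2f}")  # $187.65
print(f"Bid tokens cost: ${tokens:,.2f}")       # $938.25
print(f"Site collects:   ${site_total:,.2f}")   # $1,125.90
```

Every token is sold whether or not its bid wins, which is why the site’s take can be several times the retail price of the item.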

Sites like this exploit a particular human weakness, the sunk cost effect (closely related to the endowment effect): once you’ve placed a few bids, you’re more likely to keep bidding rather than walk away from what you’ve already spent. This is not illegal in any way, but it does actively exploit a well-known cognitive bias. How would you feel working on this site?

Back to the Future

Earlier, I discussed the effect of war as a technology accelerator. The US is currently engaged in two armed conflicts, and at least one of the accelerated technologies is robotics. When the conflicts began, there were almost no unmanned aircraft in use; now there are more than 6,000, and the same is true for land-based robots. These aircraft run on software. It’s pretty easy to justify their existence when they are being used in a war to save lives, but what happens when this technology becomes ubiquitous and moves out of military applications?

Now, I’m not suggesting a doomsday scenario. Like a lot of geeks, I’m generally a sunny optimist, and that part of me imagines intelligent robots as helpers, like Robby the Robot from Forbidden Planet or Robot on Lost in Space. But part of me worries that we’ll end up with the Terminator instead.

Doctors take a Hippocratic oath: “At least do no harm”. Perhaps software developers should take a similar oath, which I’ll call the Miles Dyson promise:

“At least don’t create any super intelligent homicidal robots.”

About the Author

Neal Ford is Software Architect and Meme Wrangler at ThoughtWorks, a global IT consultancy with an exclusive focus on end-to-end software development and delivery. He is the designer and developer of applications, instructional materials, magazine articles, courseware, and video/DVD presentations, and the author and/or editor of six books spanning a variety of technologies, most recently The Productive Programmer. He focuses on designing and building large-scale enterprise applications. He is also an internationally acclaimed speaker, having delivered more than 600 talks at over 100 developer conferences worldwide. Check out his web site at http://www.nealford.com.

Reprinted from NFJS the Magazine, Volume II, Issue IX
