Fit the Tool to the Person, Not the Person to the Tool

I recently had a conversation with a student about the ethical quandaries raised by the advent of self-driving cars. Hypothetically, for instance, how would a self-driving car react to a pedestrian who stepped out in front of it? Whose safety would it be programmed to privilege?

The relatively tech-savvy student was unfazed. Obviously this would only be a problem until pedestrians were forced out of the picture. He took it for granted that the recalcitrant human element would be eliminated as a matter of course in order to perfect the technological system. I don’t think he took this to be a “good” solution, but he intuited the sad truth that we are more likely to bend the person to fit the technological system than to design the system to fit the person.

Not too long ago, I made a similar observation:

… any system that encourages machine-like behavior from its human components is a system poised to eventually eliminate the human element altogether. To give it another turn, we might frame it as a paradox of complexity. As human beings create powerful and complex technologies, they must design complex systemic environments to ensure their safe operation. These environments sustain further complexity by disciplining human actors to abide by the necessary parameters. Complexity is achieved by reducing human action to the patterns of the system; consequently, there comes a point when further complexity can only be achieved by discarding the human element altogether. When we design systems that work best the more machine-like we become, we shouldn’t be surprised when the machines ultimately render us superfluous.

A few days ago, Elon Musk put it all very plainly:

“Tesla co-founder and CEO Elon Musk believes that cars you can control will eventually be outlawed in favor of ones that are controlled by robots. The simple explanation: Musk believes computers will do a much better job than us to the point where, statistically, humans would be a liability on roadways [….] Musk said that the obvious move is to outlaw driving cars. ‘It’s too dangerous,’ Musk said. ‘You can’t have a person driving a two-ton death machine.'”

Mind you, such a development, were it to transpire, would be quite a boon for the owner of a company working on self-driving cars. And we should also bear in mind Dale Carrico’s admonition “to consider what these nonsense predictions symptomize in the way of present fears and desires and to consider what present constituencies stand to benefit from the threats and promises these predictions imply.”

If autonomous cars become the norm and transportation systems are designed to accommodate their needs, it will not have happened because of some force inherent in the technology itself. It will happen because interested parties will make it happen, with varying degrees of acquiescence from the general public.

This was precisely the case with the emergence of the modern highway system that we take for granted. Its development was not a foregone conclusion. It was heavily promoted by government and industry. As Walter Lippmann observed during the 1939 World’s Fair, “General Motors has spent a small fortune to convince the American public that if it wishes to enjoy the full benefit of private enterprise in motor manufacturing, it will have to rebuild its cities and its highways by public enterprise.”

Consider as well the film below, produced by Dow Chemical in support of the Federal-Aid Highway Act of 1956:

Whatever you think about the virtues or vices of the highway system and a transportation network premised on the primacy of the automobile, my point is that such a system did not emerge in a cultural or political vacuum. Choices were made; political will was exerted; money was spent. So it is now, and so it will be tomorrow.

5 thoughts on “Fit the Tool to the Person, Not the Person to the Tool”

  1. Well, the answer is obvious. Elect a government that works for us, the people, and not for large corporations and vested interests; then let it regulate in our interests, not theirs. We need to experiment, innovate, improve, and use science, technology, and social understanding to improve everyone’s lot, reverse the damage that has already been done, and prevent future catastrophes, but we need imagination and understanding to predict outcomes better before implementation. But it’s late in the day.

  2. A current example of money being spent is in the recent Dodge commercials that border on outright propaganda.

    The classic line is “They believed driving was a holy endeavor.” Every time I see that commercial I think, “That’s why the US does not have an effective high-speed rail system.” It also speaks to the need for people to experience the transcendent.

    This has relevance locally, where the I-4 Ultimate Project is feeling more and more like a sham all the time: $2.3 billion to improve a 21-mile stretch of I-4, which will include toll lanes priced according to the amount of traffic rather than the traditional distance traveled.

    A pitch is that it is about personal choice: whether to take the toll lane, paying $3, $4, or $7, or to continue driving in slower traffic. Yes, personal choice that is limited to one roadway (I-4) and one form of transportation (the automobile). The money is not being spent to add to or improve infrastructure that would benefit buses, SunRail, or bicycle lanes, which would add several more choices.

  3. A few thoughts: My goal is to find enjoyable ways to solve problems so that everyone has an opportunity to become more of who they have been made to be and life is simultaneously more enjoyable.

    -) How can we talk about the ethical quandaries more profitably? I don’t think we have trouble getting people to recognize the ethical quandaries associated with technological convergence. From my observation, it has been more difficult to concisely get the people who have influence and political clout to understand these problems. By default, people who have influence and clout also have limited time. Making an intuitive leap, I think that solving political-technological problems (the quandary) will be bottlenecked by collaborative problem solving. Do you have any suggestions, from your experience, on how we might remove that bottleneck more quickly so as to solve these problems more efficiently?

    -) It makes me anxious thinking about the new political complexity involved in the situations you are describing here. Because the decisions affect the lives of so many, I wish there were a way for anyone to understand the complexity and weight of the decisions in an unbiased, informative way.

    -) While I understand using the highway system case study as an analogy for the future need to rebuild public policy and infrastructure, I am curious to see how the next 10 years pan out for the digital infrastructure that may be necessary to build.

    -) I don’t fully believe that self-driving cars will solve safety issues, because I don’t fully believe that all the code I have seen written is error-free. Code can’t fix itself yet, and errors of logic, from simple to evolved, still go through iteration. The potential for more safety is there, but I can’t help recognizing that a good bit of the reason for the campaigning around autonomous vehicles is directly tied to the ability to quantify and monetize vehicles in new ways.

    Cheers for the post! I’m curious to hear more of your thoughts.
