#title The Unabomber on Robots
#subtitle The need for a philosophy of technology geared toward human ends
#author Jai Galliott
#source Robot Ethics 2.0, Chapter 24. <[[https://doi.org/10.1093/oso/9780190652951.001.0001][www.doi.org/10.1093/oso/9780190652951.001.0001]]>
#lang en
#pubdate 2025-07-14T20:40:23.023Z
#topics robots, autonomous, cars, ethics, law, philosophy, policy, responsibility, psychology, artificial intelligence, Philosophy of Science, analysis of Ted’s ideas & actions
#date Published online: 19 October 2017. Published in print: 30 November 2017
#notes Draws on direct correspondence and prison interviews with Kaczynski and applies his broader views to the robotization of humanity.
According to conventional wisdom, Theodore John Kaczynski is little more than a Harvard-educated mathematics professor turned schizophrenic terrorist and lone murderer. Most will remember that what catapulted the University and Airline *Bomber* (the “Unabomber”) to the top of the FBI’s “Most Wanted” list in the United States was his engagement in a nationwide bombing campaign against people linked to the development and use of modern technology. Many dismiss Kaczynski as insane. However, this does nothing to refute his argument, namely that the continued development of our technological society threatens humanity’s survival and, as said society cannot be reformed, confers upon us a moral obligation to ensure that it is destroyed before it collapses with disastrous consequences.
This chapter draws on direct correspondence and prison interviews with Kaczynski and applies his broader views to the robotization of humanity. It also engages with the recent work of Peter Ludlow, a philosopher and prominent critic of Kaczynski, who argues that his account is ridden with *ad hominem* attacks and devoid of logical arguments. Demonstrating that Ludlow is mistaken, at least on the latter point, the chapter maps Kaczynski’s arguments and portrays him as a rational man with principal beliefs that are, while hardly mainstream, entirely reasonable. Contra Ludlow, it is argued that the problem is not merely that those in power seize control of technology and then exploit their technological power to the detriment of the public. That is, it is not simply a matter of putting technology back in the hands of the people through “hacktivism” or open-source design and programming. The modern technological system is so complex, it will be argued, that people are forced into choosing between using jailed technology controlled by those within existing oppressive power structures and dedicating their lives to building knowledge and understanding of the software and robotics that facilitate participation in the technological system. In this sense, Kaczynski is right in that the system does not exist to satisfy human needs. It is morally problematic that human behavior has to be modified to fit the needs of the system in such a radical way. We must therefore either accept as permissible revolt aimed at bringing about near-complete disengagement from technology or recover a philosophy of technology truly geared toward human ends—parts set against the dehumanizing whole.
*** 1. The Unabomber Manifesto
Over the course of nearly twenty years, Ted Kaczynski mailed or personally placed no fewer than sixteen explosive packages that he handcrafted in his utility-free cabin in the Montana woods, taking the lives of three individuals, maiming twenty-four others, and instilling fear in much of the general population (Federal Bureau of Investigation 2008). He was the kind of “lone wolf” domestic terrorist that today’s presidential administrations would have us think Da’ish is recruiting and was, at the time, an incredibly stealthy nuisance to law enforcement agencies that were busy dealing with traditional foreign enemies, namely socialist Russia. Despite incredibly precise bomb-making that left no evidence as to the provenance of the tools of his destruction, what eventually led to Kaczynski’s capture was the forced publication of his 35,000-word manifesto in the *New York Times* and *Washington Post*. Although authorship was attributed to a group referred to as the Freedom Club (FC)—thought to be derived from the name of the anarchist group (the Future of the Proletariat, FP) in Joseph Conrad’s *The Secret Agent* (1907), from which much inspiration was drawn[1]—Kaczynski was actually the lone member of the group, and his unique prose allowed his brother to identify him as the author, leading to his eventual arrest. The manifesto is well structured, with numbered paragraphs and detailed footnotes but few references.[2] It is best described as a socio-philosophical work about contemporary society, especially the influence of technology. In essence, the Unabomber thinks modern society is on the wrong track and advocates an anarchist revolution against technology and modernity. Indeed, the manifesto begins by declaring that “the Industrial Revolution and its consequences have been a disaster for the human race” (Kaczynski 1995, 1) and quickly proceeds to link the ongoing nature of the disaster to “The Left” and to offer a scathing analysis of liberals:
Almost everyone will agree that we live in a deeply troubled society. One of the most widespread manifestations of the craziness of our world is leftism…. Leftists tend to hate anything that has an image of being strong, good and successful. They hate America, they hate Western civilization, they hate white males, they hate rationality…. Modern leftish philosophers tend to dismiss reason, science, objective reality and to insist that everything is culturally relative. They are deeply involved emotionally in their attack on truth and reality. They attack these concepts because of their own psychological needs. For one thing, their attack is an outlet for hostility, and, to the extent that it is successful, it satisfies the drive for power. (Kaczynski 1995, 1, 15, 18)
The manifesto connects this almost Nietzschean criticism of leftism to the concept of “oversocialization.”[3] Kaczynski writes that the moral code of our technological society is such that nobody can genuinely think, feel, or act in a moral way, so much so that people must deceive themselves about their own motives and find moral explanations for feelings and actions that in reality have a non-moral origin (1995, 21–32). According to the Unabomber, this has come about partly because of “The Left,” which appears to rebel against society’s values, but actually has a moralizing tendency in that it accepts society’s morality, then accuses society of falling short of its own moral principles (Kaczynski 1995, 28). Kaczynski’s point is that leftists will not be society’s savior from technoindustrial society, as is often thought. As will be argued in the next section, this hate-filled discussion of leftism merely distracts from the broader manifesto, the remainder of which is much more concerned with the actual impact of technology upon society.
Kaczynski’s ideas on this topic have deep philosophical underpinnings and are grounded in the writings of Jacques Ellul, who penned *The Technological Society* (1964) and other highly perceptive works about the technological infrastructure of the modern world. While Kaczynski details the effect of “technology” and “technologies”—in other words, tools that are supposedly used for human ends—he is not specifically interested in any particular form of technology, whether military or civilian, robotic or otherwise. Robots are important, he notes, but from his point of view and for the most part, they are important only to the extent that they form part of the overall picture of technology in the modern world (pers. comm. 2013). In this respect, he joins Ellul in being more concerned about “la technique,” which is technology as a unified entity or the overarching whole of the technoindustrial society in which, according to Ellul and his contemporaries, everything has become a means and there is no longer an end (Ellul 1951, 62). That is to say that it would be a mistake to focus on technology as a disconnected series of individual machines. For both Kaczynski and Ellul, technology’s unifying and all-encompassing nature and efficiency in just about every field of human activity are what make it dehumanizing, in that its absolute efficiency and lack of ends do away with the need for humanity.
Ellul wrote that “the machine tends not only to create a new human environment, but also to modify man’s very essence” and that “the milieu in which he lives is no longer his. He must adapt himself, as though the world were new, to a universe for which he was not created” (1964, 325). Kaczynski shares this sentiment and provides an illustration that encapsulates his version of the same argument:
Suppose Mr. A is playing chess with Mr. B. Mr. C, a Grand Master, is looking over Mr. A’s shoulder. Mr. A of course wants to win his game, so if Mr. C points out a good move for him to make, he is doing Mr. A a favor. But suppose now that Mr. C tells Mr. A how to make ALL of his moves. In each particular instance he does Mr. A a favor by showing him his best move, but by making ALL of his moves for him he spoils his game, since there is no point in Mr. A’s playing the game at all if someone else makes all his moves. The situation of modern man is analogous to that of Mr. A. The system makes an individual’s life easier for him in innumerable ways, but in doing so it deprives him of control over his own fate. (1995, n. 21)
His view is that a technoindustrial society requires the cooperation of large numbers of people, and the more complex its organizational structure, the more decisions must be made for the group as a whole. For example, if an individual or small group of individuals wants to manufacture a robot, all workers must make it in accordance with the design specifications devised at the company or industry level, and all inputs must be regular; otherwise the robot is likely to be of little use and perhaps even unreliable. Decision-making ability is essentially removed from the hands of individuals and small groups, and given to large organizations and industry groups (if not robots themselves), which is problematic, as it limits individual autonomy and disrupts what Kaczynski calls the “power process” (1995, 33–7).
Most human beings, he says, have an innate biological need for this power process, which consists of autonomously choosing goals and making an effort to attain them, thereby satisfying certain drives (Kaczynski 1995, 33). Kaczynski splits these human drives into three categories: drives that can be satisfied with minimal effort, drives that can be satisfied with significant effort, and drives that have no realistic chance of being satisfied no matter how much effort is exerted. The first category leads to boredom. The third category leads to frustration, low self-esteem, depression, and defeatism. The power process is satisfied by the second category, but the problem is that industrial society and its technology push most goals into the first and third categories, at least for the vast majority of people (Kaczynski 1995, 59). When decisions are made by the individual or small-scale organizations, the individual retains the ability to influence events and has power over the circumstances of his or her own life, which satisfies the need for autonomy. But since industrialization, Kaczynski argues, life has become greatly regimented by large organizations because of the demand for the proper functioning of industrial society, and the means of control by large organizations have become more effective, meaning that goals become either trivially easy or nigh impossible. For example, for most people in the industrial world, merely surviving in a welfare state requires little effort. And even when it requires effort—that is, when people must labor—most have very little autonomy in their jobs, especially with the robotization of many of today’s workforces. All of what Kaczynski describes is thought to result in modern man’s unhappiness, with which people cope by taking on surrogate activities—artificial goals pursued not for their own sake but for the sake of fulfillment (Kaczynski 1995, 39). The desire for money, excessive sexual gratification, and the latest piece of technology would all be examples of surrogate activities. And, of course, if robotization accelerates and Kaczynski is right, life will become much worse for people.
At the most abstract level, then, the manifesto perpetuates the idea that by forcing people to conform to machines rather than vice versa, technoindustrialization creates a sick society hostile to human potential. The system must, by its very nature, force people to behave in ways that are increasingly remote from the natural pattern of human behavior. Kaczynski gives the example of the system’s need for scientists, mathematicians, and engineers, and how this equates to heavy pressure being placed on adolescent human beings to excel in these fields, despite the fact that it is arguably not natural for said beings to spend the bulk of their time sitting at a desk absorbed in directed study in lieu of being out in the real world (1995, 115). Because technology demands constant change, it also destroys local, human-scale communities and encourages the growth of crowded and unlivable megacities indifferent to the needs of citizens. The evolution toward a civilization increasingly dominated by technology and the related power structures, it is argued, cannot be reversed on its own, because “while technological progress *as a whole* continually narrows our sphere of freedom, each new technical advance *considered by itself* appears to be desirable” (Kaczynski 1995, 128 [emphasis in the original]). Because humans must conform to the machines and the machinery of the system, society regards as sickness any mode of thought that is inconvenient for the system, particularly anti-technological thought, for the individual who does not fit within the system causes problems for it. The manipulation of such individuals to fit within the system is, as such, seen as a “cure” for a “sickness” and therefore good (Chase 2000; Kaczynski 1995, 119).
Since the technoindustrial system and technologies like robots threaten humanity’s survival, and given the belief, examined in the next section, that the nature of technoindustrial society is such that it cannot be reformed in a way that reconciles freedom with technology, Kaczynski (1995, 140) argues that the system must be destroyed. Indeed, his manifesto states that the system will probably collapse on its own when the weight of human suffering becomes unbearable for the masses. It is argued that the longer the system persists, the more devastating the ultimate collapse. The moral course of action is, therefore, to hasten the onset of the breakdown and reduce the extent of the devastation.
*** 2. Revolution or Reform?
To argue that revolutionaries ought to do everything possible to bring about this collapse to avoid technology’s far more destructive triumph is clearly to discount the possibility of reform or effective regulation; and, for Kaczynski’s many critics, it is likely to be seen as the flawed logic of someone who, either because he is sitting in an air-conditioned high-rise or hiding in a cave, is somehow not connected to life. Peter Ludlow (2013), for instance, dismisses Kaczynski’s argument on the basis of a “smorgasbord of critical reasoning fails” that are, in fact, mostly restricted to the early sections critical of leftism. The source of contention is Kaczynski’s *ad hominem* reasoning and what is purported to be a genetic fallacy. To be clear, those sections of the manifesto concerning the Left do involve *ad hominem* attacks and, while one clearly cannot define all leftist individuals as masochists because some “protest by lying down in front of vehicles … intentionally provoke police or racists to abuse them” (Kaczynski 1995, 20), it should be pointed out that not all *ad hominem* reasoning is fallacious, especially when it relates to the kinds of practical and moral reasoning utilized in the manifesto. But even if one were to grant that Kaczynski’s reasoning in the relevant sections on leftism is fallacious, it is far from obvious that this compromises Kaczynski’s overall argument. It must, for instance, be admitted that the Unabomber provides some worthwhile insights and that Kaczynski’s analyses of autonomy and deprivation of control over one’s fate are fundamentally accurate. It is difficult to deny that society has evolved to a point where most people work on tasks that have little to no tangible value outside of being part of a much larger, incomprehensibly complex, and largely technological process; and that this creates an (at least occasional) lingering feeling of alienation for people as the needs of the complex system take precedence over their own needs. Indeed, a recent worldwide Gallup poll reports that only 13% of people are psychologically engaged in their work (Crabtree 2013).
It must also be acknowledged that Ludlow points to Kaczynski’s *ad hominem* attacks only insofar as they support his claim that Kaczynski commits a genetic fallacy in pointing to the Left’s and humanity’s inability to evolve/adapt to technology via non-alienating means. But even if a genetic fallacy is committed here, and indeed there is evidence that it is, this does not necessarily falsify Kaczynski’s core belief; to think otherwise is to ignore a number of arguments provided by Kaczynski, which not only illuminate the non-genetic reasons that technoindustrial society has assumed its present form but also explain why reform is so difficult in the modern context. In exploring these arguments, this section disputes Ludlow’s claims, demonstrates the difficulty of reform through examples, and highlights that the disturbing future Kaczynski predicts is much closer than we think, if not already upon us.
One of the first reasons Kaczynski provides for why technoindustrial society cannot be reformed in any substantial way in favor of freedom is that modern technology is a unified and holistic system in which all parts are dependent on one another, like cogs in a machine (1995, 121). You cannot, he says, simply get rid of the “bad” parts of technology and retain only the “good” or desirable parts. To clarify, he gives an example from modern medicine, writing that progress in medical science depends on progress in related fields, including chemistry, physics, biology, engineering, computer science, and others. Advanced medical treatments, he writes, “require expensive, high-tech equipment that can be made available only by a technologically progressive, economically rich society. Clearly you can’t have much progress in medicine without the whole technological system and everything that goes with it” (1995, 121). This is certainly also true of modern robotic surgery, for instance, which depends on the medical-technical ecosystem for the maintenance of environments conducive to surgery, the development and training of highly specialized surgeons, and the manufacture and maintenance of the robotic equipment. Kaczynski maintains that even if some elements of technological progress could be maintained without the rest of the technological system, this would in itself bring about certain evils (1995, 122). Suppose, for example, that we were able to exploit new research and use DNA nanobots to successfully treat cancer. Those with a genetic tendency to cancer would then be able to survive and reproduce like everyone else, such that natural selection against cancer-enabling genes would cease and said genes would spread throughout the population, degrading the population to the point that a eugenics program would be the only solution. Kaczynski would have us believe that humans will eventually become a manufactured product, compounding the existing problems in technoindustrial society (1995, 122).
Many will object to such slippery-slope arguments, but Kaczynski bolsters his argument by suggesting that lasting compromise between technology and freedom is impossible because technology is by far the more powerful social force and repeatedly encroaches upon and narrows our freedom, individual and collective (1995, 125). This point is supported by the fact that most new technological advances considered by themselves seem to be desirable. Few people, for instance, could have resisted the allure of electricity, indoor plumbing, mobile phones, or the internet. As already mentioned, each of these and innumerable other technologies seem worthy of employment on the basis of a cost-benefit analysis in which the threatening aspects of technology are balanced with temptingly attractive features, such that it is often considered absurd not to utilize a particular piece of technology. Yet technologies that initially appear not to threaten freedom regularly prove to threaten freedom in very serious ways after the initial adoption phase. Kaczynski provides a valuable example related to the development of the motor industry:
A walking man formerly could go where he pleased, go at his own pace without observing any traffic regulations, and was independent of technological support-systems. When motor vehicles were introduced they appeared to increase man’s freedom. They took no freedom away from the walking man, no one had to have an automobile if he didn’t want one, and anyone who did choose to buy an automobile could travel much faster than the walking man. But the introduction of motorized transport soon changed society in such a way as to restrict greatly man’s freedom of locomotion. When automobiles became numerous, it became necessary to regulate their use extensively. In a car, especially in densely populated areas, one cannot just go where one likes at one’s own pace, as one’s movement is governed by the flow of traffic and by various traffic laws. One is tied down by various obligations: license requirements, driver test, renewing registration, insurance, maintenance required for safety, monthly payments on purchase price. Moreover, the use of motorized transport is no longer optional. Since the introduction of motorized transport the arrangement of our cities has changed in such a way that the majority of people no longer live within walking distance of their place of employment, shopping areas and recreational opportunities, so that they HAVE TO depend on the automobile for transportation. Or else they must use public transportation, in which case they have even less control over their own movement than when driving a car. Even the walker’s freedom is now greatly restricted. In the city he continually has to stop and wait for traffic lights that are designed mainly to serve auto traffic. In the country, motor traffic makes it dangerous and unpleasant to walk along the highway. (1995, 127)
The important point illustrated by the case of motorized transport is that while a new item of technology may be introduced as an option that individuals can choose as they see fit, it does not necessarily remain optional. In many cases, the new technology changes society in such a way that people eventually find themselves forced to use it. This will no doubt occur yet again as the driverless car revolution unfolds. People will adopt robotic vehicles because they are safer than ever and spare society from thousands of accidents, only to later find that such action actually contributes to the atrophy of deliberative skills (including moral reasoning) necessary to make decisions and remain safe on the road. Indeed, the commencement of this process is already evident from the inappropriate use of vehicles with limited autonomous functionality, suggesting that it will not be long before society reaches a point at which those who desire to drive their cars manually will be unable to do so because of various obligations and laws imposed by the technoindustrial system aimed at further minimizing accidents and maximizing efficiency at the cost of freedom.
If one still thinks reform is the most viable option, Kaczynski provides yet another argument demonstrating why technology is such a powerful social force. Technological progress, he argues, marches in only one direction (1995, 129). Once a particular technical innovation has been made, people usually become dependent on it so that they can never again go without said innovation, unless a new iteration of it becomes available and yields some supposedly desirable attribute or benefit. One can imagine what would happen, for example, if computers, machines, and robots were to be switched off or eliminated from modern society. People have become so dependent on these technologies and the technological system that turning them off or eliminating them would seem to amount to suicide for the unenlightened within that system. Thus, the system can move in only one direction: forward, toward greater technologization. This occurs with such rapidity that those who attempt to protect freedom by engaging in long and difficult social struggles to hold back individual threats (technologies) are likely to be overcome by the sheer number of new attacks. These attacks, it is worth noting, will increasingly come from developing nations. The possible creation of advanced industrial and technological structures in regions such as the Middle East and East Asia, in particular, could pose real problems. While many will regard what the West is doing with modern technology as reckless, it arguably exercises more self-restraint in the use of its technoindustrial power than those elsewhere are likely to exercise (Kaczynski 2001). The danger rests not only in the use of intentionally destructive technologies such as military robotics, which have already proliferated from China to a variety of other less-developed nations, but also in seemingly benign applications of technologies (e.g., genetic technologies and nanobots) that may have unanticipated and potentially catastrophic consequences.
These arguments are not tied to Kaczynski’s *ad hominem* attacks on the Left or his disparaging remarks about humanity’s limited success in evolving alongside technology, and they offset Ludlow’s concern that Kaczynski commits a genetic fallacy. That is to say that Kaczynski sees reform as an unviable option based on both genetic and other forms of inductive reasoning, the latter of which seem to stand. The problem is that most people in modern society, a good deal of the time, are not particularly foresighted and take little account of the dangers that lie ahead, meaning that preventative measures are either implemented too late or not at all. As Patrick Lin (2016) writes, “[W]isdom is difficult to sustain. We’re having to re-learn lost lessons—sometimes terrible lessons—from the past, and intergenerational memory is short.” The Greenhouse Effect, for example, was predicted in the mid-nineteenth century, and no one did much about it until recently, when it was already too late to avoid the consequences of global warming (Kaczynski 2010, 438). And the problem posed by the disposal of nuclear waste should have been evident as soon as the first nuclear power plants were established many decades ago; but both the public and those more directly responsible for managing nuclear waste disposal erroneously assumed that a solution would eventually be found, while nuclear power generation pushed ahead and was eventually made available to third-world countries with little thought for the ability of their governments to dispose of nuclear waste safely or prevent weaponization (Kaczynski 2010, 438–9).
Experience has therefore shown that people commonly place the potentially insoluble problem of dealing with untested technological solutions on future generations and, while we cannot infer from past events that a future event will occur, we can ask what the future might look like based on analysis of recent technological developments. Kaczynski asks us to postulate that computer scientists and other technicians are able to develop intelligent machines that can do everything better than humans (1995, 171). In this case, it may be that machines are permitted to make all of their own “decisions” or that some human control is retained. If the former occurs, we cannot conjecture about the result or how machines might behave, except to say that humankind would be at the mercy of the machines. Some will object that such control will never be handed over to machines, but it is conceivable that as society comes to face an increasing number of challenges, people will hand over more and more decisions to machines by virtue of their ability to yield better results in handling complex matters, potentially reaching a point at which the volume and nature of the decisions will be incomprehensible to humans, meaning that machines will be in effective control. To unplug the machines would, yet again, be tantamount to committing suicide (Kaczynski 1995, 173). If, on the other hand, humans retain some control, it is likely that the average person will exercise control over only limited elements of their private machines, be it their car or computer, with higher-level functions and broader control over the system of systems maintained by an all too narrow core of elites (Kaczynski 1995, 174). To some extent, this already occurs, with companies like Tesla, Inc. placing restrictions on their autonomous vehicles in response to inappropriate use by a few among the many. It must also be recognized that even if computer scientists fail in their efforts to develop strong artificial intelligence of broad application, so that human decision-making remains necessary, machines seem likely to continue taking over the simpler tasks, such that there will be a growing surplus of human workers who are either unable or unwilling to sublimate their needs and substitute their skills to support and preserve the technoindustrial system. This provides further reason for lacking confidence in the future.
*** 3. Revolt and the Hacktivist Response
Kaczynski offers the foregoing arguments in support of what he sees as the only remaining option: the overthrow of technology by force. It is in advocating an overthrow, however, that the Unabomber parts ways with Ellul, who insisted that his intention was only to diagnose the problem and explicitly declined to offer any solution. The Unabomber, recognizing that revolution will be considered painful by those living in what he sees as conditions analogous to those of Plato’s cave, says that the revolutionary ideology should be presented in two forms: a popular (exoteric) form and a subtle (esoteric) form (Kaczynski 1995, 186–8). On the latter, more sophisticated level, the ideology should address people who are intelligent, thoughtful, and rational, with the objective of creating a core of people who will be opposed to the industrial system with full appreciation of the problems and ambiguities involved, and of the price that has to be paid for eliminating the system. This core would then be capable of dragging less willing participants toward the light. On the popular level, the ideology should be propagated in a simplified form that enables the unthinking majority to see the conflict between technology and freedom in unambiguous terms, but it should not be so intemperate or irrational as to alienate those rational types already committed.
The revolution can be successful if this occurs, Kaczynski thinks, because in the Middle Ages there were four main civilizations that were equally “advanced”: Europe, the Islamic world, India, and the Far East (1995, 211). He says that three of the four have remained relatively stable since those times, and only Europe became dynamic, suggesting that rapid development toward technological society can occur only under specific conditions, which he hopes to overturn. But here the genetic fallacy Ludlow identifies surfaces again, and it is, of course, incorrect to say that there were four main civilizations that were roughly equal in terms of technological enablers and that only Europe became dynamic. Ever since the human species moved out of East Africa, it has been devising technologies along the way, as required. The species itself has always been dynamic. There are myriad examples of that dynamism in ancient civilizations, such as those of the Sumerians and the Indus Valley (Galliott 2015, 1). The “four main civilizations” idea amounts to taking a cross section of human history, focusing on the civilizations found in that slice, and regarding them as the be-all and end-all of the concept of “civilization.”
It is also questionable whether people in that or any similar slice of history had freedom of the kind to which we should like to return. Plenty of preindustrial societies, from that of the Pharaohs to the medieval Church, imposed tight organizational controls upon their populaces. Some might also point to the fact that most people who lived in preindustrial societies were subsistence farmers, barely able to get by, and with less control over their lives than your typical factory worker. In this context, one might argue that technology brings with it greater autonomy. But Kaczynski takes issue only with organization-dependent technology that depends on large-scale social organization and manipulation, not small-scale technology that can be used by small communities without external assistance (1995, 207–12). That is to say that he does not necessarily advocate doing away with all technology, down to the primitive spear, and there is a reason people romanticize the “old ways” of agrarian or preindustrial society, or even warfare and combat. Even if pre-robot jobs were mundane, dangerous, and repetitive, they were at least measurable and, on some level, relevant in a visible and tangible way. Without technology, you could determine whether you had successfully raised your crops, forged your widgets, or survived a battle. This served as affirmation of a job well done, something largely absent in today’s society. It would seem that humans have not yet fully evolved, emotionally or physically, to live in industrial societies dominated by technology.
Ludlow, on the contrary, believes that like bees and beavers, and spiders and birds, we are technological creatures and that we are on the verge of evolving in a meaningful way. In his view, alienation does not come from technology (any more than a beaver can be alienated from its dam or a bird from its nest), but rather surfaces when technology is “jailed” and people cannot tear apart the object and see what makes it tick. So it is not technology that alienates us, but corporate control of technology—for example, when Apple makes it difficult for people to reprogram their iPhones or robot manufacturers jail their code. This is the point, he says, where hacking becomes important. Hacking is fundamentally about our having the right and responsibility to unleash information, open up the technologies of everyday life, learn how they function, and repurpose those technologies at will. Ludlow argues that there is no need to overthrow the technoindustrial system and that a hacktivist response represents evolution that can put control of technology back in the hands of everyone, not just the powerful and elite.
Yet it is not clear how, or to what extent, this solves the problem. Imagine that, through hacktivism, society has progressed to a point at which all technology is open source. This solves the problem only where the technology is cheap and practical for people to create or manipulate themselves, a requirement that may be satisfied by the most basic forms of software and hardware but is unlikely to be satisfied by complex software and hardware systems (Berry 2010). Until recently, most people could (if they wished) understand most technologies in everyday use—cars, lighting, household appliances, and the like. But this is becoming less true by the day as people place more emphasis on technology functioning effectively and on how they themselves may integrate with the technoindustrial system, compromising their “sense that understanding is accessible and action is possible” (Turkle 2003, 24). We will shortly enter an era in which most people will not be able, even if they wish, to understand the technologies in use in their everyday lives. There is, therefore, much to Kaczynski’s point that the system does not exist to satisfy human needs. To repeat a point made earlier, people should not be forced into choosing between using jailed technology controlled by those within existing oppressive power structures and dedicating their lives to building knowledge and understanding of the software and robotics that facilitate participation in the technological system. After all, there is no point in having everything open source if one still cannot understand it: complete access to the workings of complex technology is no solution if that technology is beyond most people’s comprehension.
Another problem is that the very nature of the technoindustrial system is such that decisions have to be made that affect millions of people, with little real input on their part. Even in a future society in which open-source programming, open standards, and the like are common, decisions will need to be made about which codes and standards to adopt or about how to utilize a particular piece of technology or code on a large scale. Suppose, for example, that the question is whether a particular state should impose restrictions on the functionality of electric autonomous vehicles—some of which have already been sold and are on public roads—to improve public safety until such time as the owners of these vehicles become more effective in overseeing their operation. Let us assume that the question is to be decided by referendum. The only individuals who typically have influence over such decisions will be a handful of people in positions of power. In this case, they are likely to originate from manufacturers of autonomous electric vehicles, conventional gasoline vehicle manufacturers, public safety groups, environmental groups, oil lobbyists, nuclear power lobbyists, labor unions, and others with sufficient money or political capital to have political parties take their views into account. Let us say that 300,000,000 people will be affected by the decision and that there are just 300 individuals among them who will ordinarily have more influence than the single vote they legitimately possess. This leaves 299,999,700 people with little or no perceptible influence over the decision. Even if 50% of those many millions were able to use the internet and/or hacking techniques to overcome the influence of the larger corporations and lobby groups and have the matter decided in their favor, this still leaves the other 50% without any perceptible influence over the decision. Of course, this oversimplifies how technologies and standards are adopted in reality, but the point here is that “technological progress does not depend on a majority consensus” (Lin 2016). The will of the masses can be overcome by one person, a team of inventors, a room full of hackers, or a multinational corporation aiming to develop and field a technology that reinforces the needs and efficiency of the technoindustrial system, which represents a miscarriage of justice that dominates the human person.
*** 4. The Way Forward: Toward a Philosophy of Technology Geared Toward Human Ends
If hacking and open-source development are ineffective as a countermovement—and we concede the correctness of Kaczynski’s basic analysis that it is immoral for human behavior to be modified to fit the needs of the technological system and its elites at the cost of human freedom and autonomy—we must find another way to challenge the premise that reform is futile, or otherwise reluctantly admit that a revolution against the technoindustrial system and its many machines is understandable and perhaps permissible. That is, some way must be found to reach an optimum level of technology, whether in terms of certain kinds of technologies, technologies in certain spheres of life, or those of a particular level of complexity, and to establish social order against the morally problematic forces described in this chapter. Philosophy is, of course, well suited to guiding us in this pursuit. In the context of robotics, this might begin with advocacy for a more meaningful and international code of robot ethics, one that is not merely the preserve of techno-optimists and that addresses the concerns of those who desire to live an anarcho-primitivist lifestyle. Foreseeing the attractiveness of such codes, Kaczynski writes that a code of ethics will always be influenced by the needs of the technoindustrial system, so that said codes always have the effect of restricting human freedom and autonomy (1995, 124). Even if such codes were to be reached by some kind of deliberative process, he writes, the majority would always unjustly impose on the minority. This might not be the case, however, if, in addition to a robust code of ethics, international society were also to build or recover a philosophy of technology truly geared toward human ends—parts set against the dehumanizing whole. What this might look like is for others to determine but, as a minimum, it would have to go beyond the existing philosophy of technology—which examines the processes and systems that exist in the practice of designing and creating artifacts, and which looks at the nature of the things so created—with a view toward incorporating a more explicit requirement to explore the way in which these same processes and systems mutate human beings, influence power and control, and erode freedom and autonomy.
This might be a long shot, but consider that Kaczynski himself has argued that revolution needs to be conducted on both the esoteric and exoteric levels to be effective. In the proposed case of reform short of violent revolution, a code of ethics would operate at the exoteric level, and the philosophy of technology geared toward human ends would operate at the esoteric level, persuading those of sound reason, with the united system potentially yielding better results in protecting the rights of those who wish to withdraw from technological society. Going forward, the imperative to accommodate individuals who wish to disengage from technology will grow stronger as they are further alienated by unprecedented investment in robots and artificial intelligence, investment likely to be perceived as society buying into totalitarianism, the eradication of nature, and the subjugation of human beings, and with the potential to fuel further terrorist attacks by those on the extreme fringe of the technoindustrial system.
*** Explanatory Note
This chapter draws on direct correspondence with Kaczynski and his original manifesto. This will be to the distaste of those who feel the manifesto was published with blood and that the only proper response is to demonize the author or ignore him completely. Therefore, it must be emphasized that the goal here is to neither praise nor criticize, but rather to improve understanding. It must also be acknowledged that the views expressed here are those of the author and do not represent those of any other person or organization. The author would like to thank Ryan Jenkins for a number of valuable comments that greatly improved this chapter.
*** Notes
[1] In Joseph Conrad’s novel *The Secret Agent*, a brilliant but mad professor, not dissimilar to Kaczynski, abandons academia, disgusted with the enterprise, and isolates himself in a tiny room, his “hermitage.” There, clad in dirty clothes, he fashions a bomb used in an attempt to destroy an observatory derisively referred to as “that idol of science.” More generally, Conrad wrote about alienation and loneliness and portrayed science and technology as nefarious forces naively utilized by the public, further indicating the extent to which Kaczynski drew on Conrad.
[2] All citations of the manifesto refer to paragraph numbers, reflecting the military-style numbered paragraphs and how Kaczynski himself refers to the text.
[3] In the sense that Nietzsche (2002, §13) often argued that one should not blame the strong for their “thirst for enemies and resistances and triumphs” and that it was a mistake to resent the strong for their actions. Note that this is just one connection to Nietzsche. Both men shunned academic careers: Kaczynski in mathematics, Nietzsche in philology. Each tried to make the most of a relatively solitary existence, with Nietzsche writing, “Philosophy, as I have understood and lived it to this day, is a life voluntarily spent in ice and high mountains” (2004, 8)—words that could well have been penned by Kaczynski in his mountain cabin, where he spent much time prior to his capture. Nietzsche also wrote of the “will to power,” while Kaczynski wrote of the “power process.” All of this suggests that Kaczynski had great admiration for Nietzsche.
*** Works Cited
Berry, Wendell. 2010. “Why I Am Not Going to Buy a Computer.” In *Technology and Values: Essential Readings*, edited by Craig Hanks. Malden, MA: Wiley-Blackwell.
Chase, Alston. 2000. “Harvard and the Making of the Unabomber (Part Three).” *Atlantic Monthly* 285 (6): 41–65. [[https://theatlantic.com/magazine/archive/2000/06/harvard-and-the-making-of-the-unabomber/378239/][www.theatlantic.com/magazine/archive/2000/06/harvard-and-the-making-of-the-unabomber/378239/]].
Conrad, Joseph. 1907. *The Secret Agent*. London: Methuen & Co.
Crabtree, Steve. 2013. “Worldwide, 13% of Employees Are Engaged at Work.” Gallup. [[https://gallup.com/poll/165269/worldwide-employees-engaged-work.aspx][www.gallup.com/poll/165269/worldwide-employees-engaged-work.aspx]].
Ellul, Jacques. 1951. *The Presence of the Kingdom.* Translated by Olive Wyon. London: SCM Press.
Ellul, Jacques. 1964. *The Technological Society*. Translated by John Wilkinson. New York: Alfred A. Knopf.
Federal Bureau of Investigation. 2008. “FBI 100: The Unabomber.” [[https://fbi.gov/news/stories/2008/april/unabomber_042408][www.fbi.gov/news/stories/2008/april/unabomber_042408]].
Galliott, Jai. 2015. *Military Robots: Mapping the Moral Landscape*. Farnham: Ashgate.
Kaczynski, Theodore. 1995. “Industrial Society and Its Future.” *New York Times* and *Washington Post*, September 19.
Kaczynski, Theodore. 2001. Letter to Anonymized Scholarly Recipient in the UK, November 1. [[https://scribd.com/doc/297018394/Unabomber-Letters-Selection-6][www.scribd.com/doc/297018394/Unabomber-Letters-Selection-6]].
Kaczynski, Theodore. 2010. “Letter to David Skrbina.” In *Technological Slavery: The Collected Writings of Theodore J. Kaczynski, a.k.a. “The Unabomber,”* edited by David Skrbina. Los Angeles: Feral House.
Lin, Patrick. 2016. “Technological vs. Social Progress: Why the Disconnect?” American Philosophical Association Blog. [[https://blog.apaonline.org/2016/05/19/technological-vs-social-progress-why-the-disconnect/][www.blog.apaonline.org/2016/05/19/technological-vs-social-progress-why-the-disconnect/]].
Ludlow, Peter. 2013. “What the Unabomber Got Wrong.” *Leiter Reports*. [[https://leiterreports.typepad.com/blog/2013/10/what-the-unabomber-got-wrong.html][www.leiterreports.typepad.com/blog/2013/10/what-the-unabomber-got-wrong.html]].
Nietzsche, Friedrich. 2002. *Beyond Good and Evil.* Translated by Judith Norman. Edited by Rolf-Peter Horstmann. Cambridge: Cambridge University Press.
Nietzsche, Friedrich. 2004. *“Ecce Homo: How One Becomes What One Is” & “The Antichrist: A Curse on Christianity.”* Translated by Thomas Wayne. New York: Algora.
Turkle, Sherry. 2003. “From Powerful Ideas to PowerPoint.” *Convergence* 9 (2): 19–25.