Title: Frictionless Technologies
Subtitle: The Innovation of Human Obsolescence
Author: Dr. Laura Drake
Topic: anti-tech
Date: May 20-22, 2019

A paper in the special track “Powerful Humans and More Powerful Technologies”

Biennial Meeting of the Society for Philosophy and Technology

May 20-22, 2019, Texas A&M University, College Station, Texas

Introduction

Postindustrial humans stand today at a point in history at which appeals to efficiency, rather than morality, have explicitly become the main driving force of the system of which they are a part. Such appeals also form the basis of what is now known as neoliberalism as a philosophy of power. Efficiency, however, is the native province of the machine, not the human. Francis Fukuyama, who famously proclaimed "the end of history" in 1989, wrote in 1992 that "the ability of technology to better human life is critically dependent on a parallel moral progress in man," and that "without the latter, the power of technology will simply be turned to evil purposes, and mankind will be worse off than it was previously."[1] The rise of efficiency to the apex of the postindustrial world's priority list implies that the slow eclipse of human values by machine values has already occurred; it portends that the machine, either with or without the most powerful humans, is positioned to assume the starring role in a new posthuman history.

More than a century ago, Nietzsche advocated the inculcation of machine values into average humans, writing that "the object is to make man as useful as possible, and to make him approximate as nearly as one can to an infallible machine: to this end he must be equipped with machine-like virtues."[2] However, only by the middle of the 20th century did it become possible to see the machine itself, or technology in its more abstract sense, as a force capable of becoming more powerful than humanity. Perhaps Nietzsche's Overman will present itself as a network of interconnected AIs presiding over dependent, singularly helpless multitudes of last men.

To the extent that the machine is now equipped with an explicit cognitive array of its own in the form of the networked algorithm, and to the extent that it is capable of efficiently incorporating and systematizing individual human units into its native routines, and in doing so, of squeezing out individual and societal activities and efforts previously aimed at the realization of human ends, the future of human initiative, agency, awareness and self-awareness, and other aspects of human cognition may now explicitly be called into question. The critical prospect innovators are addressing today in the realm of the human is that of the cognitive and emotional "frictions" or "pain points" of daily life,[3] to which they respond by presenting algorithms capable of softening or dissolving them. The practical result has been an explosion in frictionless technology, a largely 21st-century nomenclature describing a class of algorithms whose function it is first to systematize, and ultimately to administer, all the large and small aspects of individual human life.

The systematization of human affairs through the application of algorithms has been underway for decades in the economic and institutional realms; during the last decade, however, algorithms have become both qualitatively and quantitatively more powerful and have entered the individual realm, assuming the administration of increasingly greater portions of what were once private, even intimate human responsibilities. Just as the mechanical devices of decades past relieved individual humans of difficult domestic physical efforts, the algorithmic technologies of today promise the same on the cognitive and emotional levels.

To the extent that the average human, the quintessential mass man,[4] perceives the exertion of cognitive and emotional efforts on his own behalf to be burdensome, the more seamlessly these functions will be integrated into the domain of the machine. From a human, these efforts require attention and care, sometimes of a focused and sustained nature. From a machine algorithm, they require only greater or lesser processing capacity, a connection to the server that houses the algorithmic engine, and of course, raw materials for processing.

To the human is left the lesser burden of serving as the raw material,[5] the data for the algorithm's calculus. The human may supply this material either actively, through energetic labor, as with social media, or passively, by standing as an open repository, depending on the circumstances, function, and the state of development of the particular technology. Humans need only push this material out to the algorithms, or passively allow the algorithms to pull it in, and the algorithms in turn produce output that humans interpret as pleasurable, satisfying, or diversionary, often with a minimum of conscious effort, or even conscious awareness, required on the part of the humans concerned.

We argue that the rapid progression of this class of technologies, whose function it is to push frictionless rewards to humans, if carried through to its logical conclusion - not merely in the workforce, but also at the individual level - implies a future of regression, atrophy, apathy, and obsolescence for most, or all, of the affected portions of humanity.


Summoning forth the human data reserve

Martin Heidegger, writing at mid-century, saw technology as a "way of revealing." He argued that modern technology in particular, in contrast to its precursors, constitutes an "ordering revealing" by humans of the objects in the natural environment. Modern technology comes into existence, he claimed, by way of enframing (Ge-stell), which he defined as “that challenging that sets upon man to order the real as standing-reserve," to unlock the energies hidden within the natural and hold them at the ready for human use.[6] Heidegger saw enframing as a kind of trap that blocks all other ways of revealing that do not involve ordering. This "setting upon" of humans produces something more than technology, procedures, or machines. Cumulatively, across time, it produces integrated systems of means, including but not limited to technology, that increase in complexity; it produces the cumulative totality that Jacques Ellul called technique.

Efficiency was central to Ellul's account of the history of technology. Just after mid-century, he introduced his notion of technique as "the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity."[7] It includes, but is not limited to, all that we know as technology. Technique sprang, at least initially, from the fount of human agency, as Ellul situated the origins of technique in invention, in humanity's interposition of an intermediary agency between itself and nature as a means of survival.[8] Thus, technique is the layered, enduring accumulation of human means, without reference to human ends, formed into a practical, yet invisible, abstract human structure that seems to be everywhere and nowhere at the same time. Because it is pure efficiency, it is no more accountable to human values than is water flowing downhill or electricity finding the ground.[9] According to Ellul, technique claims the natural and integrates it into itself: "[W]hen the natural is integrated [into technique], it ceases to be natural. It becomes part of the technical ensemble. It is an element of the mechanism, an element which must play its role, and no more."[10] As a result of being claimed and assimilated by technique, the natural ceases to exist as whatever it was before and changes its existence to fit the new role assigned to it by technique:

Technique worships nothing, respects nothing. It has a single role: to strip off externals, to bring everything to light, and by rational use to transform everything into means.[11]

When objects in nature are brought to light, to Heidegger’s state of unconcealment, and ordered for use as standing reserve, they cease to appear as objects. As “the real everywhere … becomes standing-reserve,”[12] it is ordered to presence itself as stored energy units standing at the ready for human use, that is, for technique. The logging industry serves as an early example: with the innovation of logging, forests cease to appear as forests, and their trees cease to appear as trees; they are all transformed from the objects that they were into stocks of wood reserves, standing ready to have their energies unlocked, stored, and distributed for human use.

This mode of revealing, the ordering revealing, may include, as of late, the internal contents of human minds. Heidegger was himself concerned that one day, the trajectory humanity was on with technology would progress to a point where humans might themselves be taken as standing reserve:

As soon as what is unconcealed no longer concerns man even as object, but does so, rather, exclusively as standing-reserve, and man in the midst of objectlessness is nothing but the orderer of the standing-reserve, then he comes to the very brink of a precipitous fall; that is, he comes to the point where he himself will have to be taken as standing-reserve.[13]

Toward the end of the 20th century, at the close of the postmodern era, human inventors brought about the digital as the next logical step toward the perfection of efficiency in the progression of technique. It is our argument that the personal frictionless technologies of the mind, as a new addition to technique consequent to digital innovation, are engaged in the taking of human minds as standing reserve, and that this is what makes the effects of technology on humans today different from what they were in the 20th century. Whereas, according to Heidegger, the older technologies of bringing forth, which included various forms of craftsmanship, were "ways of revealing" by humans co-creatively with nature, but without claiming nature's own processes, and whereas the human revealers of the modern challenging-forth technologies reordered some of nature's own processes co-creatively with other humans, the next iteration of this process in the new era of posthumanity (circa 2010) is taking the form of frictionless technique, through which the most powerful humans are summoning forth human minds to reveal and reorder themselves as standing reserve - as data.[14]

We argue that the human inventors of Big Data are "set upon" to undertake a trajectory of innovation that involves an irreversible harvesting of human minds, and a reordering of the contents of those minds into a human data reserve. Just as objects brought to light through unconcealment - their energies subsequently reordered - cease to appear as objects, humans similarly unconcealed and reordered will cease to appear as humans. This path is rationalized by the posthuman worldview in which, as Katherine Hayles elucidated in 1999 in How We Became Posthuman, human beings as they have always been are now re-cast as substrates, that is, interchangeable and therefore expendable media housing for patterns of information, which are held as the only valuable elements.[15] With human consciousness and everything that comes with it deemed irrelevant at a stroke of the keyboard, and with all human ethical concerns thus swept out of the way, innovators who are “set upon” to reorder human minds into information patterns, arrangements of data, are rendered by this worldview free to do so. Frictionless technologies efficiently unconceal the human mind as an energy source: they summon it forth to reorder its contents, which include its thoughts, preferences, relationships, feelings, and anything else constitutive of the conscious, self-aware human person - all the contents of his individual life-world - into energetic data units standing at the ready at all times for harvesting, storage, and use. This is the truth behind the now-common saying that "data is the new oil": both are energy. Submitting to this summoning forth process will require that humans cease to be natural, that they voluntarily change their existence from what they were before - natural, self-aware, fully conscious human agents with rights and duties in relation to one another - to fit this new role that technique has assigned to them. As larger and larger segments of the human data reserve continue to be summoned forth, reordered, and harvested, as the technique that uses it for energy expands, and as the innovators become more powerful, the standing reserve is progressively depleted, and the cycle deepens with each subsequent iteration. Technique is a feedback hub - layered, cumulative, and dynamic - in which every layer added opens up new possibilities for the still more powerful revealing and ordering of yet deeper energy resources for use.

We will now turn to explore two of the most important functional mechanisms, automation and buffering, through which human minds are presently being summoned forth. Automation technology reorders the human mind by appropriating its individual and cognitive functions; buffering technology does the same by taking over its collective and social functions. We will conclude by discussing the power implications of these ongoing processes for humans and machines.

Automation

A primary method by which frictionless technology attracts data from a given human is to take on his cognitive burdens and undertake particular sequences of actions for him. The technology is presented to the human as a machine capable of translating his desires into strategies and actions in the world, and then back to him as apparent outcomes or effects. To the extent that a human considers the expenditure of effort on thinking about how to act in the world on his own behalf to be burdensome, frictionless technology appears as a just-in-time solution to decide which actions should be taken by or for him, and to either direct him through a sequence of pre-fabricated dance steps or produce the desired effects on its own.

Personal assistant and other IoT algorithms beamed remotely into smartphones or into evolving smarthomes are present-day examples of technologies designed to eliminate the human's cognitive frictions, or bottlenecks, associated with his having to arrange, organize, and make decisions within his individual life. These algorithms, and their more tightly coupled successors that are on the way, will keep track of his physical and emotional needs, wants, days, and whereabouts, make the necessary decisions, put the required processing sequences on a schedule, and implement them in the background.

The technology is cognitively frictionless because the human need only use his fingers to input the desired effects - and any formal consent acknowledgements that may be required by law - into a smartphone, or speak them into a microphone embedded into the local machine-shell he will have installed in his home. Thus is the human mind brought to light, unconcealed. The machine will then either guide the human step-by-step through a pre-structured process, or produce the desired effects in his mind entirely on its own, without his further mental involvement, as frictionless rewards. As it does so, it captures the human's explicitly stated desires and reorders them as data.

It is not merely that humans will soon be relieved of the burdens of shopping for themselves, cooking their meals, calling people to schedule appointments, turning on light switches, or adjusting their thermostats. Google Duplex is already on its way to handling phone appointments, IoT assistant technology is already turning light switches on and off, and Google Nest IoT thermostats are already adjusting themselves. More importantly, humans will also no longer have to endure the friction of having to figure out which items they should order, determine what to have for dinner on a particular night, initiate the scheduling of appointments and meetings with specific people in specific places, decide on a whim to turn a light on or off, or notice that the temperature is too hot or cold and that the thermostat needs to be adjusted.

All of these detailed individual life elements and more will initially be suggested or recommended, much as videos and “friend”-accounts are on YouTube or Facebook today, but eventually they will be fully automated into a series of deepening, self-reinforcing, predictable, self-executing, and self-evolving patterns. As the algorithm today processes a human's responses to its suggestions, it gradually incorporates all the real-world elements of the individual life of that specific human into its processing routines. The machine, animated and expanded in capability by this growing human data store, becomes more powerful; that is, feeding on the human’s life energies continuously reordered as fresh data, it becomes a more perfectly efficient machine.

As its engagement with the human progresses, the deep-learning machine algorithm will be able to use the data store it has already acquired to analyze and process his explicitly stated desires, both separately and in combination, and to draw inferences and conclusions. As it does so, it will be able to generate second-order analytic data on his more deeply-ingrained habits, hidden preferences, internal rhythms, longitudinal and cyclical patterns of behavior, personality, emotions, and overall physical and psychological attributes and processes - all of which hold the key to some machine-readable variant of his unstated, implicitly-held, perhaps unconscious desires - and reorder them too as data.
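To make the mechanism concrete, the following is a minimal sketch, with hypothetical names and synthetic data of our own devising rather than any vendor's actual code, of how a log of first-order, explicitly stated requests can be reordered into second-order habit data that the human never stated:

```python
# A minimal, hypothetical sketch: first-order events in, second-order habits out.
from collections import Counter
from datetime import datetime

# First-order data: explicitly stated desires, time-stamped as they arrive.
event_log = [
    {"user": "u1", "request": "coffee_order",  "time": "2019-04-01T07:58"},
    {"user": "u1", "request": "coffee_order",  "time": "2019-04-02T08:02"},
    {"user": "u1", "request": "thermostat_up", "time": "2019-04-02T22:15"},
    {"user": "u1", "request": "coffee_order",  "time": "2019-04-03T07:55"},
]

def infer_habits(log, min_count=2):
    """Aggregate events into (request, hour-of-day) habits: second-order
    data that was never explicitly stated by the human."""
    counts = Counter()
    for event in log:
        hour = datetime.fromisoformat(event["time"]).hour
        counts[(event["request"], hour)] += 1
    return {key: n for key, n in counts.items() if n >= min_count}

print(infer_habits(event_log))
# {('coffee_order', 7): 2} -> an inferred, unstated morning routine
```

Inferred (request, hour) pairs of this kind are precisely what the predictive functions discussed below would act upon preemptively, without the human ever having asked.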

The ongoing feedback effects will enable the algorithm to predict, anticipate, and even preempt the human's future desires ahead of their formation. It will increasingly be able to produce the desired effects in the human’s mind automatically, thus relieving him of the second-order friction of having to ask the machine for them. Predictive technologies will be able to anticipate when an appointment should be made for a particular human, when his lights should be turned on or off, and when his thermostat should be turned up or down. AI “assistant” technology is being developed to become increasingly capable of taking over the human's faculty of orienting himself in time, just as today’s driving apps already orient him in space; successor versions networked together, and capable of anticipative and predictive functions, will be able to integrate many of the human awareness and deciding functions and handle them in combination.

The greater the access the algorithm is granted to the deep recesses of the human's life and mind, the more ease, convenience, and other rewarding effects it will be able to produce in its human. However, if the smartphone is any indication of what is coming, the greater the frictionlessness, the greater the dependence: at some point, the human will be so weakened on his own merits that he will find himself unable to function without it.

Human data stores are kept not within the physical IoT machine-shell the human has in his home, or attached to his body, but in the remote server where the cognitive engine resides. Therefore, the human, who may believe he has bought a machine for his personal use, possesses neither the IoT algorithm nor the data harvest. Instead, data are accumulated over time at the remote location where the server is housed, analyzed, and organized into a personal dossier (account) specific to each human. The collective human data harvest, that is, the combined dossiers that make up that portion of the human data reserve to which the relevant institution has access, are either retained or resold into human behavior prediction markets, by and to entities actively seeking to open up new avenues to predict, influence, shape, and guide the behavior of humans both individually and in aggregate, that is, in human clusters and populations.[16] In this early phase, frictionless technologies are diverse, distinct, and not cognitively synchronized with one another, operating independently to some degree. However, as IoT algorithms - the smartphone and the algorithms (apps) that run on it were the first IoT devices - they are all going to be networked together, and AIs are increasingly being coded to serve as command and coordination centers to manage them and their humans. As algorithms undergo integration and synthesis, and as augmented reality routines are added to them, they will develop into powerful cognitive, physical, temporal, spatial, social, psychological, and sensory guidance systems acting upon and coordinating humans, and harvesting mountains of data on them in the process. Algorithms will soon become capable of directing individual humans, and by extension, entire populations, step by step, through the twists and turns of time itself, through the separate, individualized labyrinths of their own lives.
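The asymmetry between the shell in the home and the dossier on the server can be sketched in a few lines; the class and account names below are hypothetical illustrations, not any vendor's actual schema:

```python
# A minimal, hypothetical sketch of the asymmetry described above: the device
# in the home is a thin shell, while the dossier accumulates on a remote
# server the human never possesses.

class RemoteServer:
    """Server-side store: one dossier (account) per human."""
    def __init__(self):
        self.dossiers = {}  # account_id -> list of harvested records

    def ingest(self, account_id, record):
        self.dossiers.setdefault(account_id, []).append(record)

class HomeShell:
    """Device-side shell: holds no data, only forwards it upstream."""
    def __init__(self, account_id, server):
        self.account_id = account_id
        self.server = server

    def hear(self, utterance):
        # The stated desire is captured here but reordered as data elsewhere.
        self.server.ingest(self.account_id, {"utterance": utterance})
        return "done"  # the frictionless reward returned to the human

server = RemoteServer()
shell = HomeShell("human-001", server)
shell.hear("turn the lights off")
print(server.dossiers)  # the dossier lives here, not in the home
```

The server-side placement of the dossier is what makes the arrangement durable: the human can discard the shell, but not the harvest.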

The turn-by-turn directional instructional function of the spatial orientation algorithm (i.e., the driving app), remotely beamed into the smartphone, in which the human is directed to "turn left now, turn right now," is an initial indication of how a combination of new AI-directed networked algorithms is being set forth to automate humans. AI “assistant” technologies and their successors, as they become more sophisticated, will gradually be able to extend the "turn left now, turn right now" model of human automation into the temporal dimension as well.

The spatial orientation algorithm, which exists already and can therefore be analyzed, takes over the burden of the human to know how to orient himself in space, ascertain where he is at any given moment, and find his way. For the human immersed in, or incorporated into, its processing routine, the real environment disappears, as the human is pulled out of it and embedded into a virtual environment in which the roles have been reversed: the machine takes on the cognitive responsibility and provides direction, while the human turns that part of his brain off[17] and behaves like an automaton. Humans acting like automatons have consequently driven into lakes and other hazardous places as a result of their abdication of their native spatial orientation functions, and their consequent withdrawal from the spatial environments of the real.[18] There is a growing collection of research documenting the effects of spatial orientation algorithms on human spatial abilities, including spatial knowledge acquisition,[19] scene recognition,[20] the formation of cognitive maps,[21] and spatial memory.[22] In a study specifically addressing the cognitive effects of acoustical turn-by-turn instructional algorithms of the type that most humans follow today, Elliot P. Fenech et al. stated their main finding:

Results show that using a turn-by-turn navigation system negates route learning and impairs scene recognition. These findings suggest that using a navigation system while driving creates inattention blindness, a failure to “see” elements in the environment.[23]

All the while, the driving algorithm uses the GPS technology embedded in the smartphone to harvest and take possession of all the movements of the human through space as a string of time-stamped location data. Mapped out over time, locational patterns appear in the data sets, and the routine daily movements of the human, and clusters of humans, begin to fall into predictable patterns that can be dissected, analyzed, and used both to guide future human movements and to innovate still more powerful algorithms.
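A minimal sketch, using synthetic pings and hypothetical names rather than any actual product's code, of how time-stamped location data falls into predictable routine once mapped over time:

```python
# A minimal, hypothetical sketch: repeated (hour, place) pings become a routine.
from collections import Counter

pings = [  # (hour of day, rounded lat/lon cell) harvested from the device
    (8, (40.71, -74.00)), (8, (40.71, -74.00)), (8, (40.71, -74.00)),
    (18, (40.73, -73.99)), (18, (40.73, -73.99)),
]

def routine(pings, min_visits=2):
    """Cells visited repeatedly at the same hour: a daily routine."""
    counts = Counter(pings)
    return [(hour, cell) for (hour, cell), n in counts.items() if n >= min_visits]

print(routine(pings))
# [(8, (40.71, -74.0)), (18, (40.73, -73.99))] -> a commute pattern, recovered
# from raw pings without the human ever declaring it
```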

The human in this situation, to the degree that he is not already natively familiar with his surroundings, lacks situational awareness. Should he awaken from the automated routine into a moment of clarity, he may find himself disoriented, because he, as a human being, would have no idea where he was.[24] His native environment might take on a cast of unreality, much like a dream state. Nor would this human have situational awareness as to the significance of the data that had been streaming out of him over however many months or years, or as to what has happened to that part of his cognition. By expending his energies on entering data and carrying out a machine’s orders instead of orienting himself and finding his way, this human would find that his native spatial orientation faculty had been appropriated by the algorithm implicitly bearing its name. As the technology progresses, it will likely move from the turn-by-turn directional model, in which the human carries out a machine's step-by-step instructions, to the still more efficient model of the Windows installation wizard, in which all the steps are both formulated and carried out by the machine itself, with the processing interrupted only by a small number of prompts directed at the human. Instead of instructing the human as to how to get somewhere or do something, the algorithm will simply get him there or do it. For example, once automated pods, or autopods (i.e., self-driving cars), become common, the human interface for the driving directions algorithm will itself no longer be necessary. The algorithm will simply be projected directly into the pod’s mechanical functions, as it is today into those of the human, from a remote server. As humans are directed, nudged,[25] and moved about through their lives by increasingly sophisticated algorithms, as they abdicate ever greater parts of their lives to them, human data sets richer than ever before will be unlocked in their minds along the way for the use both of networked algorithms and of humans more powerful than themselves.

A human, once he has been fully automated, will not even have to know or be aware of what he is doing, much less where he is. He will not have to be independently competent to go through what will become frictionless, automated human life-days, strung together into a seamless, maximally efficient schedule of fragmentary machine-tasks, put together and administered by networked AIs. At some point, such a human will no longer be able to tend to his own affairs in any case, for the same reason he will no longer be able to drive: autopods will eventually travel at such high speeds that driving alongside them will at some point no longer be considered safe. Similarly, the velocity of a human life, to the extent it is under the administration of a machine, will continue to accelerate until it is moving too fast for the human to keep up, as it too will be moving at machine speeds.

A two-tiered timescale has already begun to emerge: machine time, which is speeding up, and human time, which has already reached its upper bounds, as many humans of our time are today finding it increasingly difficult to keep up with the pace of their partially machine-run lives. This, together with many other factors, has resulted in the collapse, or implosion, of human notions of time into what Douglas Rushkoff calls "the ever-present now."[26] Heidegger’s conception of the human being, as expressed in Being and Time, is that the human being, or the human as “Being-in-the-world” - an entity he called Dasein - is primordially temporal.[27] Heidegger's Dasein is being asked today either to adapt himself as a human to a world run by machines on machine time, in which human time is on its way to being abolished, or to stop being a human. The technological determinists would say that only the latter will be possible in the end. Time stops. By doing everything for the human, by making his life appear frictionless, automation algorithms do unto him, and ultimately, instead of him. In the end, it will no longer really be his life. Rather than being the conductor of his life, much less its author, the human living the frictionless life will increasingly be pushed through it, destined to exist vicariously through the sensory apparatus of the machine. As more elements of the human's life are systematized and automated by remote agencies, and as fewer of them are subject to the whims of the human's own agency, creativity, initiative, or spontaneity, the human, like the driver, becomes a functional automaton. He becomes the human equivalent of a zombie PC whose bandwidth and processing power have been hacked and re-purposed, together with hundreds or thousands of other zombie PCs, for the use of a handful of indifferent remote corporate CEOs with an insatiable hunger for his life's gold.

To the extent that the human is automated in this way, which is a measure of the perfection of the frictionless life, his humanity becomes obsolete. For he will have willingly given up his initiative, consented to the datafication[28] of his energies, and to the sterilization of his human spirit. He will have regressed from the apex of intelligent, self-aware cognition to the blind alleyways of mere stimulus response.[29] In the meantime, his empathy, already being lost across generational time,[30] will likely be gone, as it too will be obsolete. However, this automated human, imbued with Nietzsche’s “machine-like virtues,” will not be Ray Kurzweil's transcendent, perhaps immortal, transhuman with a new digital soul;[31] he will bear no resemblance to Nietzsche's Overman. He will be a hollowed-out drone, an artificially animated shell, a puppet, a dumb terminal with human features.


Buffering

The Oxford Dictionary defines “friction” as “the resistance that one surface or object encounters when moving over another.”[32] Naturally, energy, and in the case of humans, effort, is required to live and do things in real, frictional environments with, and sometimes in opposition to, other real humans who are different from oneself in various ways. With frictionless technologies, these real-world human-to-human frictions can be made to go away, either by taking the real out of the real, that is, by turning the frictional real into a new frictionless real, or by pulling the human out of the real world altogether and embedding him in a simulated environment made of data. In either case, the creation of frictionlessness involves the buffering of humans from real interactions with one another as, in accordance with human nature, such interactions, especially the more significant and meaningful ones, sometimes involve friction. Each human is unique, and each has his own interests, circumstances, and will. A core aspect of human life in the real world involves engaging with other humans under less than perfect conditions, and doing so requires the expenditure of effort to navigate the kaleidoscope of human differences and wills.

A science-based definition of friction gives substance to the metaphor of frictionless technology and the counterforce it exerts upon human interaction:

Friction is ... an example of an electrical force in that, as two surfaces are in close contact with one another, there are electrical attractions between the subatomic particles in the two surfaces. Other forces must be applied to 'break' these temporary bonds and cause the surfaces to slide across one another.[33]
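For reference, the quoted account reduces to the standard first-order model of friction, which we add here to make the metaphor exact: the force required to slide one surface over another is proportional to the normal force pressing the two together,

\[ F_f = \mu F_N , \]

where \( \mu \) is the coefficient of friction between the surfaces and \( F_N \) is the normal force. Engineering frictionlessness means driving \( \mu \) toward zero, so that the surfaces glide past one another with no force required to break their bonds.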

If the natural attraction of humans generally to other members of their species creates bonds between them, and if these bonds are interpreted as frictions, then the force of frictionless technology may break these bonds and dissipate the frictions, allowing humans to seamlessly glide past one another instead.

As Robert Putnam showed in his seminal work Bowling Alone, written as the last millennium came to a close, societal interaction, social trust, and community had already been in decline in post-industrial America for several decades.[34] Social technologies appeared in this void with the promise to frictionlessly reconnect humans to one another, with little to no human-to-human effort required. However, because these algorithms simulate the human interaction itself, they appropriate that function, while providing frictionless emotional rewards that mimic, to the point of parody, the originals.

For some, direct interaction with other humans is socially uncomfortable, awkward, and too demanding in terms of the mental, emotional, and time investments it requires; in other cases, especially those requiring small talk, niceties, and other forms of diplomacy, some may find it tedious. Sustained interaction with other humans, to be successful and rewarding, requires self-awareness, effort, empathy, focus, and patience. It also requires some degree of responsibility, trustworthiness, follow-through on commitments, interpersonal accountability, an ability to shrug off awkward moments with good humor, and the fortitude to deal with difficult situations in genuine ways. In the absence of these civilizing mutual attributes and expectations, themselves signs of higher development, humans collide with, or recoil from, one another.

Frictionless social technologies have emerged, in particular social media, that buffer humans from one another by replacing direct interaction with an interconnected assortment of simulacra, in which human interaction is dislocated not only in space but also in time. Instead of addressing one another directly, humans direct their entreaties at a maze of technological filters that populate simulated, temporally fragmented, imaginary, multiplayer game-style environments. The projection of these simulacra into the human mental space greatly reduces the possibility of awkward spontaneous collisions among humans in real times and places. Rather than connecting with one another, every human atom is simultaneously connected to a common machine that simulates human relations. Rewards consist of points awarded to participating players by the algorithm through a game-like interface (e.g., numbers of “follower”- or “friend”-accounts hyperlinked to one’s own, numbers of times a human reacted to the emotive stimulus of a text fragment or photograph by activating a “like” algorithm, etc.).

People compete for and sometimes assess their worth based on how many points they have accumulated in these games, to the point where it has become possible to purchase blocks of additional points for real money. Placeholder affective states previously associated with human mental and emotional interaction frictionlessly appear to fill up the real places in human minds and hearts once associated with human relationships: “friends” “sharing” takes the place of friends sharing, a “like” algorithm replaces liking someone or something, the “heart emoji” takes the place of love. Social media algorithms are diversionary in this way, while also relieving humans of what they may consider to be the burden of genuine interhuman engagement under conditions where, as Putnam thoroughly documented, mutual expectations and experiences of reciprocity and trust had already largely disappeared.[35]
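The point economy described above reduces to a very small amount of machinery. The following sketch, with hypothetical names and no claim to any platform's actual code, shows the entire reward loop:

```python
# A minimal, hypothetical sketch of the game-like reward loop:
# emotive fragments in, points out.
scores = {}  # player -> accumulated points

def react(viewer, author, fragment):
    """One human's reaction to another's fragment, mediated by the algorithm.
    The two humans never interact; each interacts only with this function."""
    interaction = {"viewer": viewer, "author": author, "fragment": fragment}
    scores[author] = scores.get(author, 0) + 1  # a "like", tallied as a point
    return interaction                          # both minds, written as data

log = [react("u2", "u1", "vacation photo"), react("u3", "u1", "vacation photo")]
print(scores)  # {'u1': 2} -> worth, as assessed in accumulated points
```

Note that the two humans never appear to one another inside this loop: each interacts only with the function standing between them, and what persists is the scores table and the interaction log - both minds, written as data.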

The technology achieves permanence and deepens its roots by programming humans to act and react only adjacently to other humans, and to interact directly only with the algorithm that stands between them. In the real world, the phenomenon is visible in the widespread engagement of humans, even adult humans, in “parallel play”[36] - a behavior heretofore deemed characteristic of early toddlers, in which children play with separate toys in one another’s presence with only token awareness of or interest in the others alongside them. Today it is visible in lines of human adults and teenagers simultaneously enacting themselves as data while seated or standing adjacent to one another in a multitude of settings, all nurturing and tending to the same algorithms to the mutual neglect of one another.

Buffering technologies systematize, absorb, and nullify spontaneous human interaction, whether it be formal or informal, and incorporate it into technique in the way Ellul described with regard to everything else that is natural. The systematization apparatus itself comprises the buffer that shields and protects humans from direct engagement with one another. Randy Rieland, writing for Smithsonian Magazine, observed that “[t]he big buzzword in digital technology now is ‘frictionless,’ meaning the less we humans have to deal with, the better.”[37] However, that also includes having to deal with other humans: with “frictionless sharing,” as Mark Zuckerberg would have it, and of which Rieland writes approvingly, humans are able to acquire information on their “friends” without engaging their humanity, that is, they gain the ability to read or view them as data. The human stops being a social being in such real-time environments: instead of being socially engaged or connected, the human inside the data simulation is socially inert.

Systematization makes human interaction seem less emotionally difficult and burdensome, that is, less discomforting, demanding, unpredictable, or tedious, but in so doing renders it in a non-human form.

As the ambiguities, unpredictabilities, commitments, spontaneities, and accountability requirements of interhuman relations are increasingly interpreted as frictions, the technology takes on the administration of a low-resolution knock-off of the human social, a qualitative phenomenon previously realized through natural, spontaneous interhuman engagement. With social media, other humans are rendered abstract, unnatural, distant, asynchronous, and dislocated from time, space, and in many ways, reality; they are removed from the energetic and emotional exchanges and experiences that are a natural part of unscripted real-world human encounters. Social technologies rose up within the void Putnam identified, but because they are simulacra of the originals, they serve to distance humans even further than they were before by erecting new technological barriers between them. The technology buffers humans from one another by removing the frictions, but in so doing, it disconnects the human element that connects them.

Buffering begins when the individual human opens up his mind to a social media data profile, or account interface, which emerges as a shining, simulated portrait of his ideal self, and whose function is to effectively serve as his representative in his scripted data performances directed at other human data profiles. Sherry Turkle called the social media data profile a human’s “algorithmic,” “quantitative,” or “aspirational” self.[38] In writing his profile, the human is writing the contents of his mind, in this case, the substance of his aspirational self, to the machine in the form of data. That is, he is writing the skeleton of his own data dossier to the machine. Thereafter, it is filled out with the meat and flesh of his emotional and social mind-matter, with the data transmission itself serving as the algorithm’s lifeblood. Whatever of these raw aspirational materials can be converted into machine-readable format is so transformed; the data are recorded, stored in the relevant dossiers, and distributed throughout the simulated environment.

The human interacts frictionlessly with the algorithm through the sequential sending of entreaties to it for more points (e.g., “likes,” “friends,” or “followers”), or for other substantive mental or emotive rewards, that is, data fragments released by other players that make him feel good. A human’s revelation of an emotive fragment to the algorithm, which presents itself to another human in the form of an emotive stimulus, and the second human’s reaction to that stimulus, combine to form a simulation of an interaction; large volumes of these fragments strung together simulate a relationship. In the real world, this procedure embodies nothing more than the writing of the contents of both minds to the machine. Humans become pure information. Instead of directly encountering one another, which always involves the possibility of awkwardness or collision, humans move through the simulation alone, aware of others travelling on the same path and the relative proximity of each, but never making direct contact, much like cars travelling in the same direction in freeway lanes. Simulated encounters are thus produced algorithmically, and their effects are returned to the humans involved as frictionless rewards. Human interactions attempted through the buffer of the simulated environment come out in contextually distorted pieces and fragments, that is, in a form native to machines, but the attempts of all of them together combine to form a single, integrated, cumulative data store.

That the result stands in conformity to the way machines, not humans, natively operate is evidence of the machine’s extant priority over the human. To the machine, human relationships as data are raw material. The data are all the same; they are a homogeneous material whose contents are of no import. Those subtleties of human relationships that are purely qualitative, that is, human, and are thus lost in the data stream, are irrelevant and thus expendable. The machine equivalent, the vast collection of human profiles and the urgent data revelations surrounding them, generically and collectively comes together to take the form of “social media content.” Human minds become content. Social media content is composed of data fragments. Human minds become data.

Social technology, like other simulacra, claims original title to the name of that which is simulated, its presence hastening the disappearance of the original.[39] It replaces genuine interaction with a simulacrum, and it turns human relationships into units of relational data used for the care and feeding of algorithms and the powerful humans who are “set upon” to write them. Some humans have become so dependent on these algorithms that they have come to identify more with their algorithmic selves than with their real selves, because, as Jean Baudrillard might say, the duplicates, to some, feel better than the real. The duplicate copy is the human’s ideal, fully sanitized, non-human self, with the awkwardness, insecurities, anxieties, fears, and other inconvenient aspects of being a human redacted from the profile. Humans may find it easier, that is, more emotionally frictionless, to engage in the fantasy of “being” that self than to be themselves, with all their all-too-human imperfections. They may find it irresistible for the same reason one might find a fantasy irresistible, except that this fantasy is a duplicate or datafied copy of real life. Gradually the aspirational self, which is as loved and compulsively tended to in every detail as the human self once was, comes to take precedence over it. The human self, on the other hand, is met with ambivalence and is to the same degree neglected, as it is deprived of that same intensive, nurturing energy now being devoted to the replicated self. Humans change from Heidegger’s “Being-in-the-world” to living their lives as data.

The real world, and human actions and interactions within it, are rendered obsolete in their own right, supplementary at best to the real action happening inside the simulation: “If it isn’t on social media, it doesn’t exist. If it didn’t happen on social media, it didn’t really happen.” If the tree of human connection fell down but no one happened to hear it because everyone was on social media, the tree didn’t really fall down. The real world ceases to exist. Human connection ceases to exist. Human reality is no longer a living reality; the human reality, and therefore, the human as human, is obsolete. Only the simulacrum remains.

Expectations that humans come to have of both themselves and one another are downgraded accordingly[40] to accommodate these new machine values and priorities. Humans imbued with “machine-like virtues” and priorities find it more difficult, and simultaneously less important, to do things together as humans in unfiltered, unsystematized real-world spaces. A standout example is that of today’s youth, who are finding that the mere presence of the dating simulacrum - without reference to its effectiveness - has rendered it largely unacceptable for them to select and approach potential dating partners in the real world. Similarly, Turkle notes that the mere presence of a smartphone in full view of participants in a conversation, even if it is turned off, discourages connection between humans in the same room with it, or renders their conversation superficial and shallow: “even a silent phone disconnects us.”[41] From a different angle, Adrian Ward et al. showed that the smartphone’s presence in full view of its human also reduces that human’s available cognitive capacity.[42]

Gradually, as the machine values and priorities continue to eclipse the human, instead of living as sovereign citizens of a society, humans become denizen data terminals or nodes in a machine-centric system, one in which machine values rule not only over machines, but also over humans. The mere presence of the simulated, frictionless social network makes it more difficult, and less acceptable, for humans to act together as people engaged in unfiltered, unmonitored, unsimulated activities as a part of a real-world society, because the norms and expectations have been altered in ways that effectively deprecate the native human social faculties as obsolete.

Human spontaneity, unpredictability, and emotional and intellectual quality are lost, as human emotive and relational data units are transcribed, recorded, catalogued in dossiers, stored forever, and freely distributed as data units containing the stimulus responses and life energies of specific humans, either with or against the wishes of the humans who released them. Human minds thus disrobed by a machine, their emotional entrails turned inside out and back again in full view of all others, thrust into a fourth dimension they do not understand, and devoid of mutual trust - a void deepened rather than filled up by the buffering of the real relationships - become risk averse. This has consequences across the spectrum of human existence, including for the possibility of meaningful and effective dissent in the realm of the political and the economic. Such possibilities require mutual trust, focused attention, close cooperation, agency, and sustained, coordinated intervention by real people in the real world to achieve anything lasting. They also require that relationships be private, that is, not subject to surveillance or distribution, so as to allow for the informal, non-anonymized exploration of ideas and discourse amongst communities of dissenters, as well as the ability for individuals to retreat from incompletely formed, untenable, or unworkable ideas, or from random expressions of dissent made in anger, without the threat of the premature, universal, and permanent attribution of such expressions to them.

The carving up of human relations into a cacophony of temporally displaced byte-size fragments may be efficient in the sense of removing real-world frictions, but in doing so, it fractures, and thereby actively depletes, human connection. Perhaps that is the point, if it is human connection itself that is identified as the friction, in line with David Byrne, who wrote: "Part of making something ‘frictionless’ is getting the human part out of the way."[43] However, in fracturing and depleting the human connection, the human ends up fracturing and depleting himself, that is, his original imperfect self, that all-too-human self decaying behind the profile. Emerging research, to which Jean Twenge summons our close attention in iGen, is linking increases in rates of real-world depression and suicides in recent years to increases in human engagement with social media algorithms, especially among youth.[44] One of these studies suggests that a statistically significant number of the multitudes of humans immersing their minds in various types of social media are doing so to alleviate real-world loneliness, but without success.[45] Another points the causal arrow for these emerging trends directly at human mental immersion in social technology.[46] The humans concerned may find themselves vaguely depressed and dissatisfied, or numb and desensitized, but without any situational awareness about how or why this is happening to them.

In substituting social technologies for Putnam's reciprocity, trust, and trustworthiness, as humans come to rely instead on ungrounded versions of "people" and "places," they themselves regress to a more primitive state. Instead of being social for themselves in real-world settings, they stand eagerly at the ready, at all hours of the day or night, to urgently interrupt whatever human activity they may be engaged in, including sleeping, to tend to the machine’s insatiable demands for data. Human minds reordered into this pattern collectively form the human data reserve, standing at the ready at all times, urgently awaiting a stimulus or a reward. The native social faculties of these humans, which are of no use to the machine in their original state, but which are very useful to it as data, are either relatively undeveloped in the first instance or begin to atrophy from non-use as they fade into obsolescence.

As with the turn-by-turn spatial orientation function discussed in the previous section, it is precisely the mental faculty that the machine purports to enhance in the human that is being depleted. Just as the spatial orientation algorithm depletes the human spatial orientation faculty as it renders it obsolete, the social algorithm does the same with the human social faculty. As automation displaces the human’s individual and cognitive functions, buffering does the same to his relational and emotive functions. All the appropriated human functions change from being human functions to becoming machine functions. As human functions are displaced and replaced with machine forms, total spectrum frictionlessness emerges. Humans as humans become increasingly obsolete to one another as well as to themselves, and thus, move toward obsolescence in toto.


Power implications

Power, and balances of power, are about the relative capabilities of known entities and the directionalities in those capabilities across time. The change presently underway in the status of the human as a result of the mentally depleting effects of automation and buffering has broad implications for the future of the power balance between the masses, the most powerful human innovators, and the emerging technologies whose powers will likely eclipse even those of their inventors. A negative power balance has emerged between humans and machines and is widening. As humans increasingly abdicate their executive capabilities, responsibilities, and selves to machine processes, the recipient machines process the incoming humans-as-data in ways that increase and perfect their ability to administer humans, thus rendering the machines increasingly more powerful than humans. The power losses that are just beginning to be sustained by humans in relation to these technologies and their powerful owners promise to run particularly deep as the default trajectory progresses forth:


Atrophy of human faculties

Humans have already begun to suffer a power loss in the form of the progressive atrophy of their native cognitive and interpersonal human functions from non-use, as these functions are replaced by frictionless technologies that automate them as they buffer them from other humans. Neuroimaging studies coming out of China and South Korea point to significant brain abnormalities - including widespread structural abnormalities and reduced densities in white[47] and gray brain matter (including in the cerebellum),[48] reduced functional brain connectivities spanning a distributed network (especially with regard to the cortico-subcortical connections, in both the prefrontal and parietal cortices),[49] and significant orbitofrontal cortical thinning[50] - in the brains of Internet-immersed youth. The associated impairments cover a wide range of core human cognitive functions, including executive attention, working memory, emotion generation, decision making (especially reward-associated decision making), cognitive control, mental flexibility, response inhibition, impulse control, and goal-directed behavior. All of these authors note the similarity of their present findings to neuroimaging patterns associated with various types of drug addiction.

There is likewise a growing literature relating to the atrophying effects of the new automation technologies on the human cognitive faculties, including the ability to think critically, focus, remember, sort, filter, concentrate, and pay attention[51] - all of which are decreasing in importance to the extent that they are no longer used. However, according to a child development scholar: "Paying attention, remembering, thinking ahead - these are the elements of intelligence."[52] If these cognitive elements are being lost to machines, and if they are indeed the elements of intelligence, this implies an outbound transfer of intelligence from humans to machines together with the abdication of native faculties and the consequent outflow of humans-as-data. It is not merely that machines are becoming more powerful at no absolute cost to humans, with human capabilities remaining flat, but that algorithms are becoming increasingly capable at some cost, perhaps at great cost, to human intelligence.

The corresponding atrophy of interpersonal and emotive faculties through buffering - human faculties such as love, care, respect, empathy, and humility - results in a net power loss for humans because it impairs the native ability of humans to combine their powers together in the real world directly, without a middleman, for the achievement of strictly human ends.

Loss of human standing

Abdicating his core faculties to a machine, the human stops being what he is, and loses his human standing in part or in whole. He is deemed by other powerful humans, and increasingly by himself, to be too inefficient to run his own affairs in the increasingly machine-driven world he has himself brought into being. Humanity itself, in its totality, is judged similarly by the most powerful humans, and thereby also loses its standing. This judgement on humanity is rationalized through the accurate statement that machines are more efficient than humans in taking care of human affairs. However, because it prioritizes the machine and raises efficiency to a position above human ethics and aspiration, it is not an argument in favor of human survival. Powerful humans thinking and acting rationally toward the future of humanity would hold human survival to be their top priority, with the same urgency and rigor demonstrated by earlier generations of powerful humans in their efforts, despite the gravity of their prior errors in judgement, to stave off nuclear apocalypse during the Cold War era.

Though the advertising and industry of the last century framed humans as producers, laborers, and consumers, still independent human economic actors, frictionless technologies relegate them to a passive role when it comes to critical decisionmaking, especially with respect to the setting of boundaries. Due to their totalizing influence, their displacement of native human faculties, and the speed at which they move and change, frictionless technologies are interfering with both the desire and the capability of human minds to engage in activities other than becoming, and projecting, data.

Technology that delivers simulated rewards disconnected from human and interhuman effort, and which does not involve any substantive engagement with the real world, has brought about a broadly-based human mental dependency upon the ease it affords. However, in order to live in this utopia, the human finds that he must present himself to it as data, rather than as human. The entire process unrelentingly tells him what, not who, he is to technique: an energy reserve consisting of complex data nodes.[53] The dependent human obeys: he stops being whatever he was and becomes whatever substance the machine needs him to be. The human becomes his own data and is processed at machine speeds through both time and space by the algorithm.

Both the powerful humans “set upon” to innovate algorithms that take the contents of human minds as standing reserve, and the humans who are summoned forth to serve in that capacity and who are doing so, are steered in such a way - by enframing, Heidegger would say - as to believe that the only viable human life is henceforth to be an inhuman one. This future is one that ultimately preempts the human’s understanding of himself, and humanity at large, as real-world possibility: it is this element of possibility that Heidegger saw as Dasein’s understanding of itself.[54] Where pure efficiency reigns, the human-as-possibility has no humanized space in which to operate, as efficiency has preemptively crowded it out, has reduced all possibility to a certainty of its own making, and in doing so, has collapsed the entire wave function of human possibility.

As soon as the human decision function is removed, the future of individual humans and of humanity at large instead becomes something pre-scripted or pre-coded, determined, and thereby forced into a state of verifiable predictability by the supervision and execution of algorithmic processes. Humans are no longer to write their own history: to the extent that there is no more human history, per Fukuyama, there will be no more moon landing celebrations. Any advances from here on out will no longer be humanity’s own.

Containment

The world’s most powerful humans today are structuring existence in the world in ways optimized for the rhythms, routines, and practices of machines, not humans. Consequently, the human lived environment is shrinking while environments native to machines are expanding. Humans are increasingly moving apart and diffusing, while machines and algorithms are moving together and converging. Frictionless technology projects to humans the illusion of expanding their world, when in human terms it is shrinking their lived universe in the sense of the real. It shrinks it by substituting the calculative processes of technique for human mental engagement, effort, and achievement.

Increasingly, human social and cultural life is being shaped by machine algorithms, expressed within machine contours, infused with machine priorities and values, and run on machine time instead of on human time. Instead of humans expending effort to make their own world into a world they want to live in, one that respects their native ways of being, frictionless technique is creating bounded environments for human minds to inhabit, on its terms, or on the terms of its makers. Because humans are not actively involved in this making, the human role is thus downgraded from that of a free and independent being to a position of subservience and resignation: humans are becoming the resident human data service in the house of Shoshana Zuboff’s “Big Other.”[55]

The human mind, thus repurposed, operates inside its own fully individualized machine runtime environment instead of in the real embodied world outside it, beyond the container’s artificial confines. The human lives pleasurably, or slowly fades, in his own digital cell, whose primary goal is to keep him comfortable: his life is automated for him, and he is buffered from unscripted encounters with other members of his own species. The price of his ecstasy thus becomes his humanity: the frictionless algorithm is the opiate of the masses in our century. A human reduced to this condition exists primarily as standing reserve. In every other way he is obsolete.

Because frictional power systems that make use of outwardly coercive force bring the human being to feel conscious discomfort, and because they leave visible evidence of their operation, they are legible; hence resistance becomes possible. Frictionless power systems do not show themselves in this manner. Because their mechanisms are more abstract, they are more effective. More efficient power feels both more immovable and more indifferent precisely because it is smoother and more frictionless. It is as invisible as it is ubiquitous, much like the laws of nature, but layered on top of them.

Unspoken, unofficial rules and barriers, forming the narrowing boundaries of unsystematized human action in real-world environments, emerge inside this invisible normative structure and are refined as human dependency upon systematizing algorithms increases. In their rapidly evolving form, these algorithms leave progressively fewer independent degrees of freedom for humans to move in as the machines become more perfectly efficient over time. Ultimately they will leave no untended space in which humans could work together, outside the system, to pursue their own affairs. Encompassing the totality of all human means, Ellul’s technique, in its new networked algorithmic form, is now becoming a container that holds human minds in reserve and uses them too as means. The net effect of these changes is that the mass man, himself created by an earlier technological epoch but no longer needed, is kept at bay by the algorithms that run his life, out of the way of a mostly automated economy, its integrated machine apparatus, and its tiny handful of super-elite human beneficiaries. The mass man, in this scenario, is obsolete.

Loss of agency

Frictionless technology’s replacement of the human’s core functions, with the rationale of making his life feel easier, results in his loss of human agency with respect to his lived environment, which is itself being displaced by algorithmic environments. The parallel movements toward automation and buffering in the individual and social domains, respectively, add up to the end of human agency, both personally and collectively. By agency, we mean the human as a confident, self-perceived, and effective causal agent in the outside world, that is, the world of other humans, nature, and things, including machines. As activity and independence are replaced with passivity and dependency, the human agent, in the face of overwhelming technology aimed directly at his core functions, becomes an afterthought, not important in his own right, but expendable, obsolete.

What has not been discussed up to now is the mass man’s complicity in his loss of agency: the willful hyperdependence of one entity on another, despite the fact that it is willful, still counts as a power imbalance. The mass man has, in many ways already, willingly become a cognitive absentee as he has been downgraded to the status of standing reserve, separated and buffered from the world by networks of automated procedures and muted social protocols that act as a deterrent to autonomous human activity and agency in connection with the real.[56] Because of his relationship of totalizing dependency with these protocols, and because of the vast power imbalance between himself and the machines that occupy more and more of his individual mindspace, he is increasingly unable to take the decision to present himself to the outside world, whether to other humans or to machines, as anything other than data.

This mass man is not merely a unit of human economic surplus; he has retained little ability to formulate himself as an empowered actor unto his environment, a condition of powerlessness that is now in the process of extending into his individual, once-private spaces. As he internalizes his new status, and begins to identify more with his data profiles than with his real-world humanity, he loses his ability to present himself, even to himself, as human. At that point, his ability to recognize his cognitive potential to be a free and responsible human agent with a mind of his own is incrementally nullified.

Regaining or retaining agency requires humans who wish to exist as free people to make their minds unavailable for use as standing reserve: to steer clear of cognitive and emotional dependency upon frictionless technologies, especially in individual settings. As Heidegger wrote back in 1955, in his Discourse on Thinking, with regard to technology generally:

We can use technical devices, and yet with proper use also keep ourselves so free of them, that we may let go of them any time. We can use technical devices as they ought to be used, and also let them alone as something which does not affect our inner and real core. We can affirm the unavoidable use of technical devices, and also deny them the right to dominate us, and so to warp, confuse, and lay waste our nature.[57]

Individually, frictionless technology takes aim at that “inner and real core” by taking over its responsibilities. It comes at the human from outside, and much unlike the household appliances of yesteryear, does not recognize his human boundaries. It extends from his outer environment into his most private spaces, his inner environment, enters his mind, and brings its powerful human innovators along for the ride. There is a reason why so many of these innovators do not allow their own children to engage with their profession’s technology innovations.[58]

Some might argue to the contrary: that frictionless technology makes humans stronger, more powerful, and more free, not less; that it extends their capabilities; that it releases them from mundane tasks and frees them up for higher pursuits. That familiar argument lacks credibility on three counts. First, the algorithms are actually taking up more of the time and attention of humans than ever before, as they are running humans on machine time. Machine processes are speeding humans up faster and faster, leaving them anxiously engaged in tending to relentless task-related demands.[59] Precious little time, attention, or energy is left for higher human pursuits.

Second, the idea that frictionless technologies extend human faculties and capabilities is rendered moot by the fact that, while they graft powers superficially onto humans, they do so in a manner that does not leave humans’ original capabilities intact. Not only are humans as humans not able to do more things as a result; they are able to do fewer things. They become “able” to have more things figured out for them, which gradually strips them of the capacity to think and act for themselves.

Third, and most importantly, higher pursuits are the products of human endeavor undertaken by human minds for human purposes. Frictionless technique takes human endeavor out of the world; it does not open up room for it. Higher pursuits would be neither relevant nor possible in a world run on machine rather than human values. By contrast, Heidegger’s prescription (above) to preserve the possibility of letting go of certain technologies would itself count as a higher pursuit, as it aims at protecting the future independence of the human mind and its ability to think critically, both of which are empowering to humans. Keeping this possibility alive, however, depends on the simultaneous nurturing of what he called “our inner and real core,” one of those factors on which human existence - as well as political, social, and economic power - ultimately depends.

Loss of democracy, or political self-determination

Collectively, the political power of the mass man is precluded by the fact that humans no longer trust their own institutions or one another, a condition that has only been reinforced and deepened by the subsequent interposition of the machine into every tiny corner of their mutual affairs. The presence of the machine everywhere leaves little space or opportunity for humans to engage with one another directly as humans to rebuild this trust.

This wholesale buffering changes the balance of power of buffered humans, both individually and collectively, relative to real-world events and powerful, unbuffered actors: If buffered humans do not interact directly, but instead merely glide past one another, they are no longer capable of doing much more than ephemerally "making waves" in the domain of the vast data flow.

Humans can frictionlessly form themselves into transient waves of trending topics on social media, and in doing so, feed more of themselves into the algorithm as data, thus further empowering the already powerful. However, in the absence of human-to-human friction, humans cannot function as collective actors capable of wielding real political power with respect to those powerful humans who control the boundaries, shape, laws, and permissible content of the social data flow or unto those actors whose minds are operating primarily outside it, or above it.

If those who are set upon to invent the frictionless technology are the structurers of its boundaries and the makers of its laws, any meaningful political role for those humans who have given up their claims to frictional agency in the outer environment is non-existent, as their minds are more inside the machine’s data structures than they are out in the world. Whether people will ultimately experience this profound power loss as placing them in the lap of luxury, or in the role of data slaves - “manipulated serfs on an oligarch’s digital plantation”[60] - only time will tell: at present, it appears to be both simultaneously. However, given the machine speeds at which technological advancement is moving, if it continues, it is unlikely that humans will experience it with any sense of self-awareness for much longer than a brief historical moment.

The long trajectory of the disappearance of man, as foreshadowed by Foucault,[61] has already begun to take shape in pragmatic terms. An entity without independent agency, whether he cares about it or not, does not have power; the mass man, without any political power to determine his own fate, has residual human qualities that are surplus to those humans who innovate the algorithms and who have power. Whatever of him can be converted to machine use can therefore be consumed. Efficiency does not require the perpetuation of the human or his mind as such.


Conclusion

The conventional definition of technology that Heidegger examined, a means to a human end, has been reversed: humans, as data, have become its instrumental means. Technology is the end because it is efficient. The hidden end of this system is efficiency for its own sake, the machine value that is the logical conclusion of an urgent proliferation of means that leaves a human void at the position of ends.[62] Ostensibly this was done to prevent the recurrence of totalitarian governance; after all, it was the 20th century ideologies of human ends that were blamed for the previous century’s legacy of human atrocities. However, this abdication of responsibility has opened the way for non-human values to rise and fill the empty space.

The erasure of human ends from any consideration or regard - man proclaiming self-defeat in his historic endeavor to improve himself by ethical and civilized means - was briefly enshrined in neoliberalism, an upstart libertarian child of the liberalism that constituted Fukuyama’s final form of government. Traditional liberalism, for all its faults, at least placed human ends in general in the category of desiderata. Perhaps it was not understood at the turn of the millennium that an uncontrolled proliferation of means, driven by economic rather than military exigencies, could ever intersect with that supreme yet unstated human end, unarticulated because it is so obvious and fundamental: the existence, survival, and improvement, by humans for humans, of the human condition. As Baudrillard wrote at the time, those ends might not be included in the default setting:

[T]he human species could be dedicating itself to a sort of automatic writing of the world, to an automated and operationalized virtual reality, where human beings as such have no reason for existing anymore.[63]

A number of other thinkers and philosophers, including Samuel Butler, Martin Heidegger, Jacques Ellul, Ted Kaczynski, Gene Roddenberry, Francis Fukuyama, and Bill Joy, foresaw well in advance that ends in general, left to unfold arbitrarily into the void without human intervention, could turn into something non-human.

Indeed, the trajectory human history is presently on points to the human becoming a friction in his own right, an impediment to machine efficiency. As work in artificial intelligence progresses past a certain point, machines will become self-sufficient in both programming and data. If efficiency is the sole end in light of the sightless proliferation of means, then human existence itself should, when there are no longer any uses for it, be identified as a friction, an inefficiency to be rectified. If humans are inefficient, then the human element creates unnecessary friction for the smooth operation of a technique whose operations no longer depend on human knowledge or action. Human friction includes anything that makes humans less than perfectly efficient, including their higher pursuits and their associated value systems. That includes the efforts presently being made by AI researchers in the area of value alignment, that is, figuring out how to make these machines develop in ways consistent with human values, the first such value being human survival, as indicated earlier. Human values that are taken in any way seriously are among the elements of humanity that are already becoming obsolete, which is entirely a result of human neglect and self-deprecation, not malign machine agency: AI technology is still in its early learning stage.[64]

Ellul explained that technique creates its own ends, which are identical to its means. As a consequence of the practice and operation of the technological system over a long period of time, its default teleology, the perfection of efficiency, comes to be substituted for human dreams and futures. It is not merely the American Dream that has been replaced by a machine teleology, but humanity’s age-old dream of improving and bettering itself in the related realms of wisdom, ethics, and higher development, as so well articulated in our time by the late Gene Roddenberry. In his rendering, human values had advanced significantly, whereas the technologies, however magical, had no role other than that of tools deployed by a reasonably mature, well-functioning human society in the service of its own values and ends. Technology for Roddenberry was created in humanity’s best image, not its worst. However, his vision of human possibility, together with humanity’s seeming confidence in its own potential for advancement in the realms of wisdom, ethics, and higher development, was overthrown as a mirage at the turn of the millennium. Postindustrial humanity’s relationship with technology then began to show its true nature, now visible and undeniable to every thinking person: technology advances not as part of humanity’s development, but in a regressive way, directly at humanity’s expense.



[1] Francis Fukuyama, "The end of history?" The National Interest, No. 16, 1989: 3-18; and Francis Fukuyama, The End of History and the Last Man (New York: Macmillan, 1992): 6.

[2] Friedrich Nietzsche, Will to Power: An Attempted Transvaluation of all Values, v. 2, Book IV, tr. Anthony M. Ludovici, in The Complete Works of Friedrich Nietzsche, v. 17, Oscar Levy (ed.), (London: T.N. Foulis, 1913/1888): 321 (emphasis added).

[3] “Real innovation addresses pain points. It notices the frictions in how people live and work, how businesses function and how industry operates and tries to smooth them over.” Charlie Burton, Senior Commissioning Editor, “How the internet of things is set to transform industry,” GQ Magazine, London, February 23, 2019, https://www.gq-magazine.co.uk/article/smart-factory-tech, Retrieved on March 28, 2019.

[4] For an insightful discussion on the creation of the mass man, see Jacques Ellul, The Technological Society, tr. John Wilkinson (New York: Alfred A. Knopf, 1964/1954): 332-335.

[5] Nicholas Carr, Utopia Is Creepy and Other Provocations (New York: W.W. Norton, 2016).

[6] Martin Heidegger, “The question concerning technology," in The Question Concerning Technology and Other Essays, tr. William Lovitt (New York: Garland Publishing, 1977/1954): 38-70.

[7] Ellul, The Technological Society: xxv.

[8] Ibid: 23, 25.

[9] Stephen Omohundro used the water analogy in reference to the native self-improvement drive that will characterize future AI. Stephen Omohundro, “The Basic AI Drives,” in Pei Wang, Ben Goertzel, and Stan Franklin (eds.), Proceedings of the First Conference on Artificial General Intelligence (AGI), (Amsterdam: IOS Press, 2008): 483-492. Posted on https://selfawaresystems.files.wordpress.com/2008/01/ai_drives.final.pdf, Retrieved on June 27, 2018.

[10] Ellul, The Technological Society: 217.

[11] Ibid: 142.

[12] Heidegger, “The question concerning technology,” op. cit.

[13] Ibid.

[14] Jesse Bailey, although he mischaracterizes Heidegger's concept of enframing, offers an interesting argument that transhumanism, a feature of the current posthuman era, risks turning human bodies into standing reserve, thus alienating humans from their own bodies. Through it, he writes, the mass population could be reduced to the status of "consumers of identity," to be bought and sold in the form of transhuman enhancements. Jesse I. Bailey, "Enframing the flesh: Heidegger, transhumanism, and the body as 'standing reserve,'" Journal of Evolution and Technology, 24(2), July 2014: 44-62.

[15] N. Katherine Hayles, How We Became Posthuman (Chicago: University of Chicago Press, 1999).

[16] Shoshana Zuboff, "The Secrets of Surveillance Capitalism," Address at Green Templeton College, Oxford, United Kingdom, May 3, 2016, http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html?printPagedArticle=true#pageIndex_2, Retrieved on February 22, 2019; and Franklin Foer, World Without Mind: The Existential Threat of Big Tech (New York: Penguin Press, 2017): 77.

[17] Amir-Homayoun Javadi et al., "Hippocampal and prefrontal processing of network topology to simulate the future," Nature Communications, v. 8, Article 14652, 2017.

[18] Humans, behaving like automatons, have passively driven themselves into lakes, onto train tracks, and into many other situations they would never have entered on their own, as a result of machine errors and the absence of the human in the supervisory or management role. This has led to some unfortunate outcomes for drivers, examples of which are in Allen Yilun Lin, Kate Kuehl, Johannes Schöning, and Brent Hecht, "Understanding 'Death by GPS': A Systematic Analysis of Catastrophic Incidents Associated with Personal Navigation Technologies," Proceedings of the International Conference on Human Factors in Computing Systems (CHI), May 2017, http://www.brenthecht.com/publications/chi17_deathbygps.pdf, Retrieved on March 28, 2019; and in George Michelsen Foy, "How Using Your GPS Too Much Could Kill You," Psychology Today, April 1, 2016, https://www.psychologytoday.com/us/blog/shut-and-listen/201604/how-using-your-gps-too-much-could-kill-you, Retrieved on February 8, 2019.

[19] Avi Parush et al., “Degradation in Spatial Knowledge Acquisition When Using Automatic Navigation Systems," in Stephan Winter et al. (eds), Conference Proceedings on Spatial Information Theory, Lecture Notes in Computer Science, v. 4736 (Berlin/Heidelberg: Springer, 2007): 238-254, http://geosensor.net/cositprivate/44.pdf, Retrieved on March 30, 2019; Stefan Münzer et al., “Computer-assisted navigation and the acquisition of route and survey knowledge,” Journal of Environmental Psychology, 26(4), 2006: 300–308.

[20] Toru Ishikawa and Kazunori Takahashi, "Relationships between methods for presenting information on navigation tools and users' wayfinding behavior," Cartographic Perspectives, n. 75, 2013: 17-28. This study found that participants using paper maps and planning their own routes displayed better scene recognition than those following the instructions of automated wayfinding algorithms.

[21] Gary Burnett and Kate Lee, “The Effect of Vehicle Navigation Systems on the Formation of Cognitive Maps,” Proceedings of the International Conference on Traffic and Transport Psychology: Theory and Application, Nottingham, England, September 2004: 407-418, https://www.researchgate.net/publication/255603105_The_Effect_of_Vehicle_Navigation_Systems_on_the_Formation_of_Cognitive_Maps, Retrieved on March 31, 2019.

[22] Elliot P. Fenech et al., “The Effects of Acoustic Turn-by-Turn Navigation on Wayfinding,” Proceedings of the 54th Annual Meeting of the Human Factors and Ergonomics Society, 2010: 1926–1930, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.848.7952&rep=rep1&type=pdf, Retrieved on March 28, 2019; Aaron L. Gardony et al., "Navigational aids and spatial memory impairment: the role of divided attention," Spatial Cognition and Computation, 15(4), 2015: 246–284; and Aaron L. Gardony et al., "How navigational aids impair spatial memory: evidence for divided attention," Spatial Cognition and Computation, 13(4), 2013: 319–350.

[23] Elliot P. Fenech et al., “The Effects of Acoustic Turn-by-Turn Navigation on Wayfinding,” op. cit.

[24] Lack of locational awareness showed up in the background elements and participant comments in Ishikawa and Takahashi’s scene recognition study: some of their study participants did not know where they were and required the assistance of the study team at the start of the experiment to help them ascertain their location (p. 21). The authors wrote that “participants' comments show that some people find it difficult to understand where they are located when using maps” (p. 25), and that “[f]or the paper map, they wanted their current location and a specific route to be indicated, because they had difficulty knowing where they were located” (p. 23). Also in the comments, study participants in all categories indicated that they wanted their exact routes given to them, not merely their starting and ending positions. Ishikawa and Takahashi, "Relationships between methods for presenting information on navigation tools and users' wayfinding behavior," op. cit.

[25] Cass R. Sunstein, "Fifty shades of manipulation," Journal of Marketing Behavior, 1(3-4), 2016: 213-244; T. Martin Wilkinson, “Nudging and manipulation,” Political Studies, 61(2), June 2013: 41-55; and Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness (New Haven, CT: Yale University Press, 2008).

[26] Douglas Rushkoff, Present Shock: When Everything Happens Now (New York: Penguin, 2013).

[27] “[W]henever Dasein tacitly understands and interprets something like Being, it does so with time as its standpoint. Time must be brought to light - and genuinely conceived - as the horizon for all understanding of Being and for any way of interpreting it. In order for us to discern this, time needs to be explicated primordially as the horizon for the understanding of Being, and in terms of temporality as the Being of Dasein, which understands Being.” Martin Heidegger, Being and Time, tr. John Macquarrie and Edward Robinson (Oxford: Blackwell Publishers, 1962/1927): 39.

[28] Shoshana Zuboff defines datafication as "the application of software that allows computers and algorithms to process and analyze raw data" (p. 86). Shoshana Zuboff, "Big other: surveillance capitalism and the prospects of an information civilization," Journal of Information Technology, 30(1), 2015: 75-89.

[29] Zuboff, "Big other: surveillance capitalism and the prospects of an information civilization": 82.

[30] Sara H. Konrath, Edward H. O’Brien, and Courtney Hsing, "Changes in dispositional empathy in American college students over time: A meta-analysis," Personality and Social Psychology Review, 15(2), 2011: 180–198.

[31] Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking, 2005).

[32] Oxford English Dictionary, definition of “friction,” https://en.oxforddictionaries.com/definition/friction, Retrieved on March 28, 2019.

[33] Carl A. Rotter, Physics 201B Course Materials, West Virginia University, Department of Physics, Spring 1999, http://www.as.wvu.edu/phys/rotter/phys201/4_Forces_Dynamics/Classifying_Forces.html, Retrieved February 8, 2019.

[34] Robert D. Putnam, Bowling Alone: The Collapse and Revival of American Community (New York: Simon and Schuster, 2000).

[35] Ibid: 179.

[36] The term “parallel play” was coined by child development specialist Mildred Parten in 1929 in her landmark theory, still in use today, of the six stages of childhood play development, based on the characteristics of the play she observed in different age cohorts of young children. In parallel play, characteristic of the 2½-3½ year age range, children play side by side but separately with toys or in sandboxes, while paying some attention to one another. The subsequent stages are “associative play,” in which the children interact with one another while playing, and gradually begin to show more interest in one another than in the specific toys they are playing with, and “cooperative play,” in which children, as they mature, begin to play together. Mildred Parten, “Social participation among pre-school children,” Journal of Abnormal and Social Psychology, 27 (1932): 243-269, study cited in Peter Gray, “The special value of children’s age-mixed play,” American Journal of Play, 3(4), 2011: 500-522.

[37] Randy Rieland, “A little less friction, please,” Smithsonian Magazine, March 26, 2012, https://www.smithsonianmag.com/innovation/a-little-less-friction-please-165687855/, Retrieved on March 28, 2019.

[38] Sherry Turkle, Reclaiming Conversation: The Power of Talk in a Digital Age (New York: Penguin Random House, 2015). Turkle refers to the “quantitative self” and the “algorithmic self” on p. 90, and the “aspirational self” on p. 83.

[39] Jean Baudrillard, Simulacra and Simulation, tr. Sheila Faria Glaser (Ann Arbor: University of Michigan Press, 1994/1981).

[40] Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other (New York: Basic Books, 2011).

[41] Turkle, Reclaiming Conversation: 20-21. The quoted material is on p. 21.

[42] Adrian F. Ward et al., "Brain drain: The mere presence of one’s own smartphone reduces available cognitive capacity," Journal of the Association for Consumer Research, 2(2), 2017: 140-154.

[43] David Byrne, "Eliminating the human," Technology Review, September/October 2017, https://www.technologyreview.com/s/608580/eliminating-the-human/, Retrieved April 30, 2019.

[44] Jean M. Twenge, iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy - And Completely Unprepared for Adulthood (New York: Simon and Schuster, 2017): 77-118; Jean M. Twenge et al., "Increases in depressive symptoms, suicide-related outcomes, and suicide rates among U.S. adolescents after 2010 and links to increased new media screen time,” Clinical Psychological Science, 6(1), 2017: 3-17; Ajit Shah, “The relationship between general population suicide rates and the Internet: a cross-national study,” Suicide and Life Threatening Behavior, 40(2), April 2010: 146-150; David Luxton et al., "Social media and suicide: a public health perspective," American Journal of Public Health, Supplement 2, 2012, 102 (S2): S195-S200; Melissa G. Hunt, Rachel Marx, Courtney Lipson, and Jordyn Young, “No more FOMO: limiting social media decreases loneliness and depression,” Journal of Social and Clinical Psychology, 37(10), 2018: 751-768; Ethan Kross et al., “Facebook use predicts declines in subjective well-being in young adults,” PLoS ONE, 8(8), 2013, https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0069841, Retrieved on April 5, 2019; and Holly B. Shakya and Nicholas A. Christakis, “Association of Facebook use with compromised well-being: A longitudinal study,” American Journal of Epidemiology, 185(3), 2017: 203-211.

[45] Hayeon Song et al., "Does Facebook make you lonely?: A meta-analysis," Computers in Human Behavior, 36C, 2014: 446–452. This study concluded not that Facebook makes non-lonely humans lonely, but that humans who engage with Facebook often do so because they are lonely - and that Facebook does not alleviate their loneliness.

[46] Morten Tromholt, "The Facebook experiment: quitting Facebook leads to higher levels of well-being," Cyberpsychology, Behavior, and Social Networking, 19(11), November 2016: 661-666. Tromholt states outright in the abstract: “[T]his study provides causal evidence that Facebook use affects our well-being negatively.” The well-being under study included both its emotional and cognitive aspects. He goes on to write: “In the present study, well-being encompasses a cognitive and an affective dimension as these are standards in evaluating people’s well-being and quality of life.”

[47] Fuchun Lin et al., “Abnormal white matter integrity in adolescents with Internet addiction disorder: a tract-based spatial statistics study,” PLoS ONE, 7(1), 2012, https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0030253&type=printable, Retrieved on March 28, 2019.

[48] Kai Yuan et al., "Microstructure abnormalities in adolescents with internet addiction disorder," PLoS ONE, 6(6), 2011, https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0020708&type=printable, Retrieved on March 30, 2019.

[49] Soon-Beom Hong, "Decreased functional brain connectivity in adolescents with internet addiction," PLoS ONE, 8(2), 2013, https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0057831&type=printable, Retrieved on March 30, 2019.

[50] Soon-Beom Hong et al., "Reduced orbitofrontal cortical thickness in male adolescents with internet addiction," Behavioral and Brain Functions, 9(11), 2013, https://behavioralandbrainfunctions.biomedcentral.com/track/pdf/10.1186/1744-9081-9-11, Retrieved on March 30, 2019.

[51] Patricia Greenfield, "Technology and informal education: what is taught, what is learned," Science, 323, February 2009: 69-71, https://www.researchgate.net/publication/23716383_Technology_and_Informal_Education_What_Is_Taught_What_Is_Learned, Retrieved on March 30, 2019. This study concluded that too much technology, while privileging the human visual faculty, is interfering with “mindful knowledge acquisition, inductive analysis, critical thinking, imagination, and reflection.” For anecdotal observations about the atrophy of the higher-order cognitive faculties of complex, deep, and sustained thought, attention, concentration, contemplation, and reflection, see Nicholas Carr, “Is Google making us stupid?: what the Internet is doing to our brains,” The Atlantic, July/August 2008, https://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/, Retrieved on March 28, 2019. Carr was also quoted in a 2018 Pew Research study on the Internet and well-being: “Among other things, the research reveals a strong association, and likely a causal one, between heavy phone and internet use and losses of analytical and problem-solving skill, memory formation, contextual thinking, conversational depth and empathy as well as increases in anxiety.” That study found in its canvassing of experts, many of whom discussed cognitive and emotional changes in humans being wrought by the new technologies, that 47% predicted individuals’ well-being would be more helped than harmed by digital life in the next decade, 32% predicted that individual well-being would be more harmed than helped, and 21% predicted little or no change. Pew Research Center, “The Future of Well-Being in a Tech-Saturated World,” Washington, D.C., April 17, 2018, https://assets.pewresearch.org/wp-content/uploads/sites/14/2018/04/14154552/PI_2018.04.17_Future-of-Well-Being_Final.pdf, Retrieved on March 26, 2019.

[52] Peter Gray, “The special value of children’s age-mixed play,” op. cit.: 508.

[53] “It [the engineering mindset] views humans as data, components of systems, abstractions.” Foer, World Without Mind, op. cit.: 77.

[54] Heidegger, Being and Time, op. cit.

[55] Zuboff, “Big other: Surveillance capitalism and the prospects of an information civilization,” op. cit.

[56] Baudrillard, Simulacra and Simulation, op. cit.

[57] Heidegger, Discourse on Thinking (New York: Harper and Row, 1966/1959).

[58] Chris Weller, "Bill Gates and Steve Jobs raised their kids tech-free — and it should've been a red flag," Business Insider, Jan. 10, 2018, https://www.businessinsider.com/screen-time-limits-bill-gates-steve-jobs-red-flag-2017-10, Retrieved on March 30, 2019; and Chris Weller, "An MIT psychologist explains why so many tech moguls send their kids to anti-tech schools," Business Insider, Nov. 7, 2017, https://www.businessinsider.com/sherry-turkle-why-tech-moguls-send-their-kids-to-anti-tech-schools-2017-11, Retrieved on March 30, 2019.

[59] Nicholas Carr, Utopia is Creepy and Other Provocations, op. cit.

[60] Michael Krieger, “Two roads diverged in a digital wood,” Liberty Blitzkrieg, January 30, 2019, https://libertyblitzkrieg.com/2019/01/30/two-roads-diverged-in-a-digital-wood/. Retrieved on April 13, 2019.

[61] Michel Foucault, The Order of Things (New York: Routledge, 2002/1966).

[62] "[T]he proliferation of means brings about the disappearance of the ends.... The further we advance, the more the purpose of our techniques fades out of sight." Ellul, Our Technological Society: 430.

[63] Jean Baudrillard, The Vital Illusion (New York: Columbia University Press, 2000): 64.

[64] Hans Moravec, "The Universal Robot," in Timothy Druckrey (ed.), Ars Electronica: Facing the Future (Cambridge, MA: MIT Press, 1999): 116-123, https://frc.ri.cmu.edu/~hpm/project.archive/robot.papers/1991/Universal.Robot.910618.html, Retrieved on March 31, 2019.