Title: Anti-Tech 101
Subtitle: An Overview of the Central Issues
Author: David Skrbina
Date: Recorded May 21, 2023. Published May 27, 2023.

    Introduction

    Presentation

    Open Discussion

Introduction

Griffin: Thanks for joining us, everyone. Today we're doing a little presentation that David, Dr. Skrbina, put together for us: a little Anti-Tech 101, just going over some of the basic issues, the core fundamental issues, with a bit of philosophy of technology added on, and then using COVID as an example to talk through some of those fundamental ideas and principles. So David is going to present this PowerPoint first, and we're going to record that section. Afterwards we're going to move into a Q&A section, which will not be recorded; we'll stop the recording and then move into the Q&A, so people can feel free to ask whatever questions they want. That's pretty much the plan. We'll probably be about an hour and a half total, maybe more, maybe less, depending on what questions people have, if any. So if that all sounds good, you can go ahead and start whenever you're ready.


Presentation

Skrbina: So yeah, we're calling this Anti-Tech 101. It's an overview of some main issues: technology critique, philosophy of technology, basically just understanding the phenomenon of modern technology. We've got a few parts here. We're going to start with what we call technology in the modern world, just to set the context for things. So, a quick review: what are the basic conditions that we all live under today in modern society? We all kind of know these things. We have pervasive advanced technology. We're all relatively isolated from nature, spending extensive time indoors, in buildings and vehicles, and so forth. Globally, we have a growing human population, currently over 8 billion, on the way to 10 or 12 billion in the worst case. We have an unsustainable ecological footprint by almost any standard; whether we look at national levels or global levels, it's far beyond the biocapacity of the planet. Consequently, we're facing multiple environmental risks, including loss of species, soil depletion, deforestation, and climate change. I think we know those. There are ongoing risks to human health and well-being, both physical and mental; we'll talk a little bit about those as we go. And maybe more fundamentally, there's this condition of, let's say, damage to human freedom, human dignity, human autonomy. These are all conditions that we live under in advanced technological societies. This is not really new. We've known about these things for several decades, actually, and it's become worse in recent years. Just to give you one clue: back in 2015, Oxford University put together a list of global challenges faced by humanity. They listed twelve of them, twelve major threats to human existence that could potentially mean the end of humanity.
Nine of those twelve were technological factors or related to technology: climate change, nuclear war, global pandemic (which was, notably, predicted back in 2015), ecological collapse, economic and social collapse, synthetic biology, artificial intelligence (which is in the news in recent times), nanotechnology, and then a generic category of unforeseen technological consequences. It's striking that Stephen Hawking, the famous physicist, now deceased, made this comment about that time. He said most threats to humans come from science and technology. Nobody wants to talk about that; they don't want to admit it. At least Hawking had the courage to state that fact, and it's absolutely true. That was in 2015. Two years later, in 2017, the World Economic Forum put together a list of key emerging technologies that carried significant risk to humanity. The category of moderate risk included several technological factors: 3D printing, which includes bioprinting; nanomaterials; virtual reality and alternate reality; neurotechnologies, including drugs and brain interfaces; and things like quantum computing and neural networks, which have been part of the AI process. Those were the moderate risks. Under the category of high risk they included geoengineering, meaning changing the planet, presumably to accommodate the requirements of climate change; biotechnologies, including gene manipulation using the CRISPR tool; and then AI and robotics, a fairly broad category: AI in itself, of course, we're talking about in recent times, and robotics including things like self-driving cars, autonomous robots, military uses, and so forth. So the bottom line is, when we look at the situation globally, we see that all the major problems we face are, at root, technological problems.
That includes everything from chemical pollutants to greenhouse gases to overpopulation, destruction of nature, military conflicts, terrorism, and so forth. These are all caused by, or enabled by, advanced technology. It's ironic, because people assume that technology will solve our problems. But if, in fact, technology is the root cause of these problems, that's a whole different situation. If technology is the root cause of our problems and it's not the solution, then we're in the situation I stated at the bottom of the slide: technology is not the solution, it's the problem. There's a thing called technological optimism; people think technology will solve all the problems facing humanity. But that's not true, because the problems aren't technological. The more technology we have, the more problems we have, not the more solutions. So if there's an initial conclusion we can draw from this short analysis, it's this: we need to deeply re-examine the technological system itself and find ways to lessen its influence in the world, and we need to do it soon, because we haven't got much time. So next is basic philosophy of technology, just a few key concepts in that field. Obviously it's a huge area; it could take hours and hours to describe. Just some basic ideas. I wanted to start with four common myths about technology that people seem to believe. First, people believe that technology is something neutral. Second, that technology is under our control: since we make the technology, they feel we control the technology. Third, people tend to think technology promotes human well-being: because we control it and design things for our own interests, it must therefore promote human well-being. And fourth, people believe that technology can be reformed to mitigate, reduce, or eliminate any problems; that's the reform strategy. These are common, widely held views, implicitly held by almost everybody, including experts and specialists
in the field. I think all four of those are false; these are four myths. We'll talk briefly about each. So here's the truth. Technology, in fact, is not neutral. The neutrality claim is this idea that technology is just a tool, and that we can use the tool as we like: we can use it for good things or for bad, to help people or to harm people. It's just a tool, anything from a hammer or a power saw up to advanced AI and nuclear weapons; all those things are just tools, just neutral things, neither good nor bad, and it's all in how we use them. But when you look at it, and many, many thinkers and philosophers have looked at this question, they almost uniformly conclude that technology is not neutral. For one thing, the use is not optional. Many technologies begin as optional things and become mandatory, in a functional sense, in our lives. Take simple things like automobiles. A hundred years ago, when they were first developing automobiles and nobody had one, it was a kind of cute little entertainment; the rich guys bought one and went driving in the countryside on the weekends. Then they became more popular, more pervasive, cheaper, more widespread; then the cities evolved to require them, and now, of course, everybody pretty much functionally has to have a car. Same with computer technology, with the Internet, with email, with cell phones. All those things at one time were optional, and now, functionally, if you want to exist in modern-day society, you have to use them. Technology also introduces many unpredictable consequences. The more complex the technology, the more we are unable to anticipate what will come from it. That makes it not a neutral thing. Correspondingly, the risks are unpredictable. Again, as complexity grows, we don't really know what kind of risks we're facing. Are they small risks? Are they recoverable risks?
Are they catastrophic risks? We don't really know. And finally, it's not even clear that we have a net gain. We introduce new technologies and devices because we think they're going to help us, to create some kind of benefit for our lives or for society. But of course there are always trade-offs, and the question is whether the trade-off is worse than the benefit from the technology. That's the question of a net gain, and it's far from clear that we have a net gain; in many cases we have a net loss with advanced technologies. Second myth: technology, in fact, seems to be not under our control. It's a highly dispersed, diversified, global activity. Again, take something like artificial intelligence, because that's in the news. Lots of different research units are working on it: labs, corporations, universities, military institutions. Multiple countries are working on it, and private individuals are doing these things; it's highly dispersed around the planet, and no one person or organization can control it. Obviously, in such a condition, there are really no legal or ethical controls. We can ask for requirements or restraints, but those are purely voluntary. The net effect is that we tend to sink to the lowest common denominator: the person who's willing to use the most powerful technology the soonest, to his advantage, tends to come out ahead, which forces other people to respond in kind. And so you're introducing rapid, powerful, risky technologies without properly vetting them, because of emergency conditions and competitive pressures. What that does is drive us down to what we would call the lowest common denominator. And of course there are high incentives all along the way: incentives of profit, power, control, manipulation of people, and so forth. The bottom line: we see this a lot when we talk about advanced technologies.
I ask people to look out for this little phrase, this idea that we have no choice. We see this a lot in Ray Kurzweil and other people who talk about technologies, and I've seen it recently with advanced artificial intelligence. People say, yeah, there are problems, yeah, there are risks, but we have no choice. I heard this just recently from Bill Gates: if we don't do it, then the bad guys are going to do it, and we don't want the bad guys doing it, so we have to do it too. And of course, if we don't do it, the Chinese are going to do it; and the Chinese are saying, well, if we don't do it, the Americans are going to do it. So everybody's pointing at everyone else, and the bottom line is, you've got no choice, you have to press ahead. That's a disastrous condition. Third myth: technology does not aid our well-being; in fact, it harms our well-being in many cases. We have negative physical consequences to our health: illnesses, injuries, military uses, of course. We're seeing, especially in recent years, negative psychological consequences: stress, depression, psychosis for people engaged in intensive technology use at work and at school, especially young people who are spending lots of time online and on social media. We see lots of adverse psychological consequences. And finally, there are negative ethical consequences as well, in terms of living under a surveillance state, corruption of individuals and institutions, the general dehumanizing tendency of technology: all adverse consequences for human well-being. And lastly, the reform thesis. Everybody says, yeah, sure, we understand that, but we'll just fix the problems, we'll just reform the technology, and then the problems will go away. But that doesn't seem to really work if you look at the history. The historical evidence suggests that technological reform is at best partial.
You can reform or fix a small piece, a small component of the problem, but you never really get to the whole problem. It's often a temporary fix. And in the end it's often counterproductive, meaning you fix one problem but you introduce another problem, or more problems, worse than the original. So in that sense it's counterproductive. That's a very short analysis of those common myths, and I think we can show a good argument why all four of them are not true. Those are the standard defenses that we hear from the pro-tech people, and I think we need to be prepared to challenge every one of them, because there is data and evidence behind all of our counterclaims.

All right, so the structure of modern technology: a brief philosophical overview here for those of you who are interested. A lot of this comes from the work of Jacques Ellul, a key thinker, a key person in the critical analysis of technology. His book The Technological Society is an essential work, to be read by anybody who is concerned about technological society. It was the basis for Ted Kaczynski's manifesto; much of his thinking came from Ellul's Technological Society. Chapters one and two especially are essential reading; I've covered those many times in my courses. Ellul basically lays out a kind of structure, the main characteristics of modern technology. I'll just mention these here in passing; there's lots to be said, and I can't elaborate here, obviously. Ellul gives five key characteristics of modern technology. One is automatism, meaning that the process is self-directing. Technology seems to direct itself toward its own advancement in a kind of self-defining, self-directing way. We see this even in relatively simple technologies, and it's far more the case in advanced technologies. It's remarkable that Ellul identified such things back in the 1950s, let alone what we're facing today. He also came upon the second characteristic, self-augmentation, where technology is a self-growing or self-building phenomenon. It grows in a progressive and, let's say, ratcheted way: it moves forward, it doesn't move backward, it only moves forward in an irreversible way. And in a rapid way: not just increasing at a regular linear rate, but at an exponential rate. So we have rapid exponential growth in technology. And this is what we're seeing, particularly now with AI: rapid, forward-moving, irreversible, and exponential growth. The third characteristic that
Ellul identified was monism. He said, look, modern technology is basically an integrated, holistic phenomenon. The whole system works together; all the parts need each other and play together. It's like a watch, where all the components are required to make it work, which means you can't just extract one or two bad parts that you don't want. You can't take those out and still have your technological system. It demands all components of the system: you have to have the supply chain, the resource extraction, the processing, the manufacturing, the deployment. All these things are integrated in a tightly knit way, and it functions as a whole. The fourth characteristic was universalism. Technology basically expands globally, and it looks the same globally. We have the same basic technologies around the world: the same Internet, the same power structures, the same cell phones, tablets, automobiles, aircraft. It's basically the same; the technology expands universally in the same way. This is why even what in past times would have been far different cultures, Asian cultures or African cultures, once they become modernized, all look the same. They look like technological societies. And the fifth characteristic is autonomy, independent action: technology seems to function on its own, independently, and it's the driving factor behind society. This has a name, which I'll mention on the next slide: technological determinism, where technology is an autonomous force in society. It drives the other components of society: it drives the economic system, it drives politics, it drives social change, social values, pretty much all aspects of modern society. That's a very brief overview. I would definitely refer you to Ellul's book, especially chapter two. He's got tremendous elaboration and lots of good examples of all
five of these. To summarize, we would say that it appears the technological system advances independently of human wishes, and it frequently does so counter to our own best interests and counter to the interests of nature. We know this because, (a), technology is unquestionably advancing, and, (b), human well-being is unquestionably declining. Human health and welfare is declining; it's not getting better, it's getting worse. The state of the planet environmentally is not getting better; it's getting worse. But technology is advancing the whole time. How is that possible? The only explanation is that technology advances counter to our well-being and counter to the well-being of the planet. If it were the other way around, we would be getting healthier and happier and the planet would be flourishing, but it's not. Secondly, technology, as I said, is the key driving force behind political, social, and economic change. Thus we have a condition, in philosophy of technology, that we would call strong technological determinism: technology is the primary moving force in society today. It's the primary cause, and it's the primary threat. So again, to summarize: technology is the primary cause of social and environmental problems. Therefore, the obvious answer is that any long-term solution has to be a rollback, an extensive rollback, or an elimination of industrial technology. This is the revolutionary thesis; this is precisely Kaczynski's thesis, and the logic drives us there. I think any other position is probably indefensible. OK, a few more slides, and then we'll open it up to Q&A. So, part three: a look at the COVID pandemic, as a kind of interesting little case study. COVID is sort of over for now, but of course it's not really over, they say, first of all. And secondly, of course, we know the next pandemic is probably just around the corner.
It's out there brewing somewhere, in some lab or in some remote corner of the jungle, and somebody is about to pull it into human society. But just look at any pandemic; forget about COVID. Any pandemic requires certain things to happen. One, it requires dense populations. We did not have pandemics in hunter-gatherer days because populations were sparse; they were dispersed, not dense, and any local infections or contaminations were very local, with local consequences, and that was it. Now, of course, we have very dense populations. Two, you have to move the pathogen around, so any pandemic requires relatively (this is relative, of course) high-speed transportation systems. Today we have jet airplanes and high-speed trains, and even automobiles are sufficient to rapidly move pathogens around the planet in a day, and within societies easily within a matter of a few days or a few weeks. Three, what we also have, and what any pandemic requires, is a kind of interference with and manipulation of nature. For example, we have industrial animal farming, which tends to create new pathogens in intensive animal agriculture, on the one hand. On the other hand, we're invading wild animal habitats in the jungles and the tropics, exposing ourselves to new pathogens that may have been relatively benign in animal populations; then they get transferred to us and become very, very dangerous.

And then, of course, in the present day we have industrial technological causes of pathogens. For example, we have this nifty little process of genetic manipulation. Here's a little quote from the journalist Nicholas Wade. He said that ever since virologists gained the tools for manipulating a virus's genes, they have argued that they could get ahead of a potential pandemic by exploring how close a given animal virus might be to making the jump to humans, and that this justified lab experiments in enhancing the ability of dangerous animal viruses to infect people. So we undertook deliberate processes in labs to increase the potency of these indigenous viruses, to study them, to learn how they work, to manipulate them. There's a name for that: gain-of-function research. You probably heard about it during the COVID pandemic. It was going on in China, and I'm sure in many other places around the world, and certainly in the United States; researchers are doing gain-of-function work, enhancing the potency of viruses to study them, to learn how they spread. Which is, of course, extremely risky. Then on top of that, we have all the military uses, the bioweapons. Unquestionably, all the major militaries around the world, and that includes the US, China, Russia, India, Middle Eastern countries, are certainly working on genetic manipulation in terms of bioweapons: creating bio-contaminants, bio-pathogens that are genetically engineered to spread and to attack enemy nations or enemy peoples. And then one more, of course: we have the old lab accidents, which happen all the time. If you're working in a lab environment with dangerous and potent pathogens, these things are microscopic, and obviously they can escape. A lab worker gets contaminated, he leaves the lab not fully decontaminated, he goes out into society, and the thing escapes.
There are myriad opportunities for laboratory accidents. Another striking thing about COVID was the response. The solutions to the COVID pandemic were always technological solutions; more technology, more advanced technology, was always the answer to our problems. For example, what did we do? We had to develop a new vaccine, a new high-tech vaccine using something new, mRNA, messenger RNA. We had to invent it, or deploy it in a new way, to deal with the COVID pandemic. We had to do mass experimentation on people, mass because it was a mass phenomenon; we had to basically experiment with these vaccines on people: old people, sick people, young people, children. We still don't understand the consequences for the people who got sick. We had high-tech medical solutions: ventilators, medical facilities, special disinfection units, people wearing their space suits to avoid getting contaminated, and so forth. We had nifty little treatments like monoclonal antibodies and other nice little things that got invented to deal with these things. That's how you know you're dealing with a technological situation: all the solutions to the technological problem are technological solutions. That's a case study in a technological society: technology causes the problems, and it's supposedly the solution to the problems. So maybe we've temporarily solved the COVID pandemic, but we haven't really done anything fundamental; we haven't really addressed the root cause of the problem. So just to sum up the COVID thing: we have increasing evidence suggesting that the pandemic seems to have started in a lab. Nobody really wants to talk about that, but that seems to be the case. It was quite possibly genetically engineered; it might have been a bioweapon deployed, it might have been a lab leak; we don't know. We do know that all the solutions to the COVID pandemic were always more technology.
The true solution, getting to the root cause, would address the technological system that allowed the pandemic to exist and spread in the first place. For example: restricting high-speed transportation, staying out of wilderness areas and marking them off, reducing population levels and densities. Those are the root-cause solutions to any pandemic. We didn't even talk about those; they weren't even on the table. No one discussed them, no one debated them; they were out of the question. We just introduced our technological solutions. We did not address the root causes, which guarantees that there will be a future pandemic and that it will be worse than the first one. So that was my final point: more tech will lead to more and worse pandemics. That's virtually guaranteed.

That's really the main part of the presentation; just the last couple of slides here. Part four is about what you can do, just a few things. Of course, it's pretty overwhelming: we've got a massive technological system that's run amok, things are rapidly getting worse on multiple fronts, and you can kind of feel powerless and just depressed. But there are some things that all of us can do, so we jotted down a few. This is sort of the function of the Anti-Tech Collective. The first thing is to get well educated, so that you know what's going on. We have a library link at the ATC website which gives a list of essential reading material, so that's the basic thing, first of all: know what you're talking about. Know the history, know the essential pieces, books like Ellul's and obviously Kaczynski's work, and there are lots of other thinkers; I've written a couple of books myself. It's a matter of being well informed yourself. Then, as you do that, you can start to educate other people, because you've become a knowledgeable individual. You can speak out, you can write, you can do blogs; we're always interested in writers here for ATC, for blogs and essays or articles and so forth. You can connect with other people in your area. We do lots remotely, because we're functioning online here, obviously, but it's nice to work with people face to face: connect with other people in your area, work with them to raise awareness, help other people get educated. One thing to look out for is what I call the fake critics. There are lots of people who claim to be critical of technology, and they're not, because either they don't really understand what they're talking about, or they're just superficial thinkers, or, most likely, they're just convenient to the system because they don't present any profound challenges.
I'm thinking of people like Sherry Turkle and Jaron Lanier, these people who are paraded in the media as critics or skeptics of technology and who say really nothing of substance. They either just don't know what they're talking about or they have no ability to get to root causes. So it's useful to identify those people and avoid them, or at least criticize them, because they're due for some good criticism themselves. And of course, helping out our group: donations, financial donations, written contributions, and so forth are always helpful. And then even little things like becoming a member. We've had processes going on; we've been rethinking this recently, but we've had monthly meetings, we've had a newsletter. We try to network with similar groups around the world, to try to get this word out globally. So that's our presentation. Thanks for your time.


Open Discussion

Griffin: All right. Well, with that, we'll move into the Q&A. I can moderate things. If you have a question, I'll ask that you use the raise-your-hand function in Zoom, or if you want, you can ask the question in the text box in the chat and we can read it from there as well. I'll just go in the order that I see them. So if anyone has any questions or anything they wanted to ask Dr. Skrbina, or just talk about in general... Ethan, looks like you have a question. Go ahead.

Ethan: Hello, everyone. I've written two articles for this group; first of all, it's nice to meet everyone. I've also been reading a lot of Backwoods; it's actually really funny, because everything that he mentioned was in Backwoods, which is by Bellamy Fitzpatrick. It's really good reading material, and it's not going to make you lose your mind. It teaches about primitivism and forest gardening, so if you ever feel alone, you can always read that; that helps me out. I have two questions. The first is a matter of strategy, and the second is more of a spiritual question. So the first is a matter of strategy, and this is for everyone. It would seem to me that if there was some kind of radical, Kaczynskian type of collapse of the technological system, imagining that that can even happen, and I am going to critique this group a little bit here, imagining that that can even happen, there are already so many things that could go wrong. I mean, unattended oil wells, unattended nuclear power plants. I read about a mine near Berlin where they were trying to store nuclear waste and it went wrong. It just seems to me that right now, at this very moment in history, you have a situation where you can't go forward, but, and it's everything he talks about, right, you can't go back either. What are your solutions to address those problems? I know that you, Dr. Skrbina, have suggested that we go back to the early 1200s. I think that's what you said, right?

Skrbina: Well, that's right. You're right: there are a number of very dangerous things which you don't want to just abandon, because nuclear reactors will overheat and, you know, toxic waste will just flow out into the environment. So yes, I've argued for a kind of process I call creative reconstruction, which is systematically dismantling these most dangerous technologies so that they get to the point where they no longer pose an immediate threat to humanity if we have some kind of collapse, because then a lot of things are immediately unattended. I think now we have a window of time, maybe 10 years or 20 years or 30 years even; we have a window of some decades. I think there's a fair chance we could unwind those things, or at least get them to a stable state, so that if, you know, the global power system goes down, we won't be faced with additional catastrophes. We have enough problems on our hands as it is; you don't want to deal with overheated nuclear plants and toxic chemicals and all that kind of stuff. So you'd like to get to at least a semi-stable point where you could survive relatively well through a shutdown or loss of the system. That's what I've argued; to me, that's a rational course of action. Obviously, we're doing nothing of the sort; we're moving in the opposite direction, making things more complex and more dangerous.

Ethan: The other guy had a question.

Griffin: Jorge, I think it's you.

Alex: Yeah, if I can address that point by Ethan. I'm a bit more radical than Doctor Skrbina, and I think that the situation is so dire that those kinds of concerns can't hold us up from outright stopping the system, no matter what the consequences. It seems to me far more idealistic to think that there can be any kind of agreement, even in one nation. Supposing that China would decide to shut down nuclear power plants or coal plants or whatever (any nation, take your pick), they're going to be disadvantaged in international competition with other countries and societies, so it just seems like it's off the table for any of them to do that. You would need a dictatorial, authoritarian regime that would be able to decide that against the will of the population and be able to tank the economy. And you had a bit of a glimpse of that in Cambodia, or in Myanmar, Burma, where they have a low level of technology and they suffer for it, domestically as well as internationally. They're not able to compete. They don't have the military potential to exercise against competitor states. They don't have economic growth. They were authoritarian, so they were able to impose that on people, but they obviously didn't make a dent against the world's technological progress. So my response is that although it would be ideal if we could shut down oil wells and decommission nuclear reactors and so on and so forth, with all the threats that exist in the technological world, it's not worth trying to get that achievement at the cost of continuing the system as it is. And over time, nature will sort it all out. I don't think that's just idealistic or naive, because we can see examples of that in many polluted sites that, given enough time, nature has reclaimed and started to heal. So the continuation of industrial technological society will go on with the hope that, oh well, we're just one step away from getting the right people in power, or getting the agreement among the citizenry to shut these things down. And we're not really going to get that. And if we would get one of those steps forward, we would go two steps back in the advancement of technology in some other arena. And I just wanted to say: you had COVID as a primary example. I saw a quote from a former CDC director who said that every year some new microbial threat is discovered, some new pathogen, which presents a problem to humanity, and that any one of these can traverse the whole globe in a mere 36 hours. Now, he didn't say it's because of a problem of technology, but that's what it is. It's because of our transportation technologies. So it only takes 36 hours for any one of the annual flu threats to span the whole globe and affect people everywhere, who are then going to be traveling and amplifying the rate of transmission… As far as… That's all, I'm gonna leave it there. I lost my train of thought, so I'll come back to it.

Skrbina: Yeah, I guess I would just say in response that, strategically, I think we're in agreement. There's obviously room to discuss different tactics and how you would proceed. There's an argument for a more extreme, more radical approach, and I'm certainly open to that idea, to that discussion. I think maybe those things operate on different levels, right? I mean, there are things that can be done within a rational and, let's say, legal framework, in terms of discussing at least the most dangerous technologies, and maybe those things can be decommissioned, pulled out of use. I mean, we've done this before, right? We've done it to some extent with nuclear weapons. We've done it with certain chemical weapons and so forth, where we've disassembled and destroyed these things, and we have mutual inspection regimes and UN inspectors to try to confirm this kind of stuff. We could in principle do the same thing, starting with the most dangerous technologies, and sort of collectively agree. It doesn't take a global dictatorship to make those things happen, right? We do have some ability to reach international agreements. Maybe AI might be heading that way as one of these things, where if, you know, we and the European countries and China and India see that, oh my God, this is an existential threat (which it is), maybe there can be some kind of agreement globally for the major players to rein these things in. Obviously that is better than nothing. It's better to restrain things. I mean, even Ted Kaczynski kind of admitted that, yeah, maybe we need to buy ourselves some time with some of these intermediate steps as we're figuring out longer-term ways. And I fully agree with the long-term ways: the system has to end. We can't really sustain this modern industrial technological system. It just can't go on, so we have to get rid of it one way or the other, right? Like Kaczynski said, either slowly or quickly. We can't really predict that, but one way or another it's got to go. And I guess prudence would suggest you deal with the most obvious risks first, and then you work your way down the hierarchy and try to make progress on the other ones. But again, yeah, there's lots of room for discussion about specific approaches, specific tactics, how you tackle things.

Griffin: Did anyone else want to comment or question on that point? If not... Hoy? No? I'm sorry, Holly gave some feedback. Thank you for your feedback; we'll work on doing some more actionable stuff in the "what you can do" section, perhaps, on the website. Collapse Enthusiast said: Doctor, the relationship between tech progress and social progress is obvious. For example, slavery was replaced by industrial machines, rubber fueled feminism, and now we have synthetic hormones for trans people. Anyways, with regard to Kaczynski's ideas on leftism, what would you say to someone who wants to have modern moral values but wants to abolish the industrial society that makes them possible?

Skrbina: Yeah, I'm not sure what "modern moral values" means. If it means progressive, leftist values, I think that's probably a mistake, because that goes along with the mindset that got us into the situation in the first place. I mean, that's kind of the whole challenge to leftism that Kaczynski pressed, and I fully agree with it. A lot of these nominally progressive values have been counterproductive, to society, to human well-being, and to the planet. That would take a longer discussion than we can do here, and it's probably an important one to have at some point. But to talk about the conflict of values, oversocialized values: what would count as leftist values today are problematic on multiple levels, not the least of which is that they tend to promote technological solutions and the technological system itself. So we need to get away from those values. There's no virtue in such values, because they're leading us to collective disaster. We need to rethink values, ones that are appropriate for a non-technological age. And I think we have those; we know those. Just go back in time, go back to history, go back to either Renaissance-era values or, better yet, go back to ancient Greece. Look at the value structure of the Greeks, who were able to create great, noble, prosperous societies with very minimal technological systems. And those were certainly not leftist values by any means. There are obviously some things we might agree on, like slavery generally being bad, and I guess you could call that a modern value if you like. And women's rights, of course; there's some sense in which that's probably a valid contribution of modernity, if you want to count it. So we could compensate for some of those things that maybe weren't there in ancient times, but we have lots of good examples from history to work from and draw from. Yeah, it's an interesting question. There probably should be a whole debate about the detrimental component of typical leftist values, which have contributed to the problem.

Griffin: Looks like there's another question being typed, but I'll ask this one to you, David. It's from Constantine. Constantine said: Would it be correct to look at technological progress as a byproduct of the struggle for power, for example between states or corporations? If so, would an anti tech movement seek to discourage the things that lead people to competitive behaviors, such as a desire for conspicuous consumption or nationalism?

Skrbina: So, technological determinism says that the technology is primary, which means it's not a byproduct, it's not an incidental consequence of other things; it's the essential thing. So I guess I would challenge that on a very fundamental level, and I would say: look, technological processes are the primary driving forces. Those who acquire the use of more powerful technologies, of course, gain in power. That's sort of always been true; that's just the nature of more powerful technologies. Human power is an accompaniment of these powerful technological capabilities. But yeah, since I'm a proponent of strong technological determinism, I would say it's not a byproduct. It's the primary thing, and the power structures, the human power structures and the political structures, are the consequence of these technological abilities.

Alex: And I think also that trying to make people less competitive is an alteration of human nature, and it sort of lends toward the collectivist approach, making people into an ant or something like that.

Skrbina: Yeah, I would agree. I'm sort of a proponent of the old will-to-power thesis, right? Which says that's an intrinsic part of human life, animal life; in fact, all existence is kind of this exertion of your individual power, your presence, in the world. That's a very old philosophical idea, and I think that's true as well. That's probably part of the story of how technology functions in the first place. It is a kind of embodiment of will to power that we see in a non-biological realm. So yeah, I think we can't get rid of that; it's part of human existence, animal existence. I wouldn't want to stifle that anyway. But the problem is, you get people with strong wills to power who also have potent technologies, and then that becomes a very dangerous thing.

Ethan: Can I ask…?

Griffin: If you have any question, go ahead.

Alex: Can I ask…?

Griffin: Was that on this point, Jorge?

Alex: No, I just wondered if I could say something before Ethan's question. Who here was not already concerned about the problem of technology, and that maybe it's the core problem that human existence and life on Earth faces? Who is new to this? If nobody's new to this, can everybody see to bringing new people into the next anti tech 101? Can people try to reach out and recruit new people who are not familiar with this, people who are skeptical but concerned about climate change or concerned about AI or something? Bring them to listen to a discussion about how maybe technology overall is the core problem, not just one or two aspects of it. That's not really a big question that needs an answer, more of a rhetorical question, but if we're all just here because we already had this view, then we need to do something to expand it to people who don't share the view already.

Ethan: That was going to lead into my question. It is really challenging to walk around and try to tell people that technology is a problem. They don't even begin to... I mean, honestly, in my personal opinion, it's a miracle the human race has made it this far. It's honestly a miracle, because when you start talking to the average person, they get sucked down into all of these civilizational illusions and hierarchical things, and they cannot possibly register it. Like with COVID, exactly going back to what the doctor said: something like COVID is able to spread around because we have high-speed airliners going everywhere. If you told someone tomorrow that, hey, look, we gotta get rid of airplanes, they would lose their mind. They don't even know how to process that. But going back to what Jorge said: I think that if you're going to really seriously try to market it, quote unquote, or get more people in here, you can't, unfortunately, throw around Ted Kaczynski's name everywhere. You know what I mean? Most people can't even hear it. They can't even make the distinction between how the state has killed more people than a lone man in Montana. They can't make that distinction, so they just immediately shut the book. So I guess my point is that it may be a really good idea to have a short pamphlet or something like that that you can pass to someone, something that mentions these ideas, exactly what he had in the PowerPoint. That's what I have to say. But I guess my question is: how do we market these ideas and not look like a bunch of kooks? I mean, we know we're not crazy, obviously. You know what I mean?

Skrbina: Yeah, that's a really good question. We've kind of been debating that all along for the last couple of years here in our little group. How do you get the word out? How do you reach people who are already skeptical? How do you reach people who haven't even thought about this and are just at step zero? You kind of need different strategies to reach different people who are at different stages of their own thinking. So yeah, it's probably gonna be a multi-pronged strategy. People who are relatively knowledgeable can deal at a relatively high level. For the average person, maybe just say, you know, hey, it's a pain in the *** to have that cell phone with you day and night, 24/7. And, you know, boy, it's pretty depressing to have to go on social media all the time; it really makes me feel kind of crummy. Maybe hit people at their personal level. People who have kids, I think that's kind of a nice angle, because if you're a parent, especially a young parent, you see what it does to your kids. People tend to think, well, I'm pretty tough, I can handle it, but look what it's doing to my kids. My kids are watching too much TV, my kid's on the Internet, my kid has a cell phone. You can appeal to parents that way. So people are in different conditions, at different stages of their lives, and to me it's been helpful to hit people at different levels. We haven't done that very systematically. Maybe I need a brochure for the well educated, then a brochure for the young parent, and a brochure for the average person who has thought, you know, two seconds about this. Something like that, to hit different people at different levels, to get them into the process of thinking critically. It's a good point, and it's a huge challenge for sure.

Alex: I'd like to address that, and I see Aram has made a comment about reaching anti tech people. I know several people, and I myself was not anti tech; I was a leftist and thought the problem was capitalism alone, and that more state-oriented social programs, a safety net and such, more control and direction of people toward cooperation, was the answer. And we don't believe that anymore. So in my experience, it is possible to make people come around to being anti tech. But I think there is a realization among people of the losses that would happen without the technological system that we have, without the infrastructure and all the things it provides to the 8 billion people who are not capable of being sustained without it. And to Ethan's point, that a mention of Ted Kaczynski turns people off: those are probably not very considerate people. I mean, if a mention of Martin Luther King is responded to with, oh well, he was a philanderer, so I can't hear his message, those people are not very considerate. So I don't know that those are the people we need to reach anyway, or that they would provide great value to the group or to the movement. And I do agree with David's point that there is a case-by-case basis for making an approach to some degree, and that for people with children, the primary focus of concern in their life is usually that their children have a good life. I think that focusing on the negatives that are on the near-term horizon from technological progress is a good way to go with them. And I do agree with Aram that people just need to be made aware that there is this perspective, that technology is the problem and that it can be defeated, that it's not undefeatable. And yeah, I don't think it's really a problem in a bigger sense to present the ideas of Kaczynski, because I think they're true and accurate, and people turned off to him because of his crimes just aren't seeing the big picture. They're not capable, or they're choosing not to think on a deeper level.

Griffin: Any other questions or comments on anything?

Ethan: I think Jorge was right. I think if we come back in here, maybe we should each try to bring in one person and see what they have to say about it. I think that's a really good idea, because one person is only one person. It's a lot to dive into, like learning all of Ellul's stuff; a lot of material. I think somebody put it in the chat earlier, and Jorge talked about it: technological supremacy. And that is a really big deal. When the politician goes into office and says, well, we have to be ahead of China, they don't even consider the ramifications. They don't consider that type of thing. And I think the person that commented, Hoy, is hitting the nail on the head. He's saying that people shut down because they get kind of pessimistic. But I think it's absolutely true that if we got here, then we can go back, so there's no need to sit there and get all doom-and-gloom and say that this is it. You know what I mean? I think that is a really good message for people to hear. It's almost like therapy, honestly: if you got here in the first place, you can probably go a step back. You know what I mean?

Skrbina: I would agree. As long as we're still standing and breathing, we have at least a chance to do something. So in principle, we should take advantage of it and be smart, because we claim to be a smart species, right? The rational animal and all these nice sort of things. We should show it by acting in our own best interests, in our own self-defense, and in the defense of the planet. That's the way to move ahead. I would encourage people to think about AI, because that's ramping up quickly; that could be the next thing that hits the news in some significant way. There was a good article recently in Time magazine by Max Tegmark; I don't know if anybody's seen that one. There was a survey of people in AI, and they asked: what's the percent chance that AI will lead to human extinction? I don't know the numbers offhand, but something like 50% of the AI specialists said there's a more than 10% chance that AI will lead to human extinction in the coming decades. So that's a big chunk: borderline a majority of the AI people think there's a non-negligible chance, 10% or maybe higher, 20, 30 percent or more, that AI alone could wipe out humanity. So that's just one of multiple disaster scenarios that we're facing. As these things start to get into the news, it's kind of useful for us to take advantage of them, to point them out to people, to point out the root cause, to point out the need to get to the root of the problem. That sort of thing helps build our case, as we say: look, here's another case; oh, look, here's another one right there. That, I think, builds some credibility for the movement, which says: look, if we don't grapple with this thing at a fundamental level, it's going to literally be the end, and that'll just be it. So that can drive people to think in pretty extreme solutions, I think.

Alex: If I can add on to that: Max Tegmark is a big person behind AI development and in the technology world, and he's not alone in coming out with concerns about it. There are several others, driving forces and prominent people behind AI development and technology development for the last 20 years, saying AI is a serious threat and we need to be slowing down and getting prepared. I think this presents a window of opportunity for us to push the issue of technology itself, not just this one aspect, not just the newest development of AI or the next thing, but technology itself. These people might have opened themselves a bit to hearing that kind of message: that, oh, it's not just this thing, it's all these other things and their unintended consequences, the ramifications we couldn't foresee and predict, that have negatively impacted our society and our planet. I think they are becoming open to that. I don't know, because we haven't targeted them, but if we could try to target those people, that might pay off. As for actions we can take: bringing people to this, spreading the message online where it can get many views, or spreading it in person where you can make personal connections with people and have perhaps extended dialogues about it, bringing them into the fold that way. It does raise awareness. Even if I go out flyering on a university campus about this kind of event and people reject the message, they at least become aware that there is this ideology, that there are people thinking that technology needs to be destroyed, that it needs to be eradicated because it is the problem and it's not reformable, and that it can be eradicated. They become aware of that. And one of the things Kaczynski suggests, and I think other revolutionaries have suggested with their own movements, is that although they can be dismissed in the early stages, as crises build and the society builds to a head of conflict, the things that resonate may be things that were rejected in a prior era. So as the technology crisis worsens, people who had rejected anti-technology radicalism won't have forgotten it, and they might start to see it and say: oh, actually, I see what those people were talking about now. They weren't a bunch of kooks.

Skrbina: I've said the same thing. Right: what sounds like an unreasonable solution today becomes a very reasonable solution tomorrow, because things get worse. So yeah, that's absolutely right.

Alex: And I think also, to the question or the point that Ethan was raising, as far as whether we should lay all our cards on the table, or whether we should be a bit more guarded about every point of our position, every thought that we have: I can't say exactly what's the best strategy there. But I don't think that bringing anything too radical is always a turn-off. It's a turn-off to the people you don't want, but not to the people that you do want. You know, the people that the Communist Party wants to recruit are not going to be turned off by "Che Guevara was a murderer" or whatever. That doesn't bother them; they're not susceptible to that, and they're still recruited into the Communist Party. And similarly, I think the people who are susceptible or open to the idea that technology is a problem, that it is defeatable and there's no reforming it, that the only thing is to defeat it outright, are not going to be turned off by a radical message: that we need an insurrectionary global movement to destroy the pillars of the technological society, and that this guy who did these crimes was correct, and that's why he did these crimes. I don't think they're going to be turned off by that. So I tend not to think that's a problem, or that we need to shy away from mentioning Kaczynski or mentioning the radical aim, the ultimate goal. And as far as actionable stuff: if people sign up or connect with ATC in one way or another, then whenever we're having an event, or whenever we have some kind of messaging to put out, in hard copy in the real physical world or online, this can be shared, and then people can spread it through all their different networks and the associates and things they connect to. I don't see that as being a bad thing. So that's one small action that people could easily take, but it requires networking, plugging in.

Griffin: That was also part of the reason we did this one discussion and recorded it: now we're going to post this recording of the presentation, and that's a very succinct way to present some of the issues. I think, David, you only mentioned Kaczynski's name once, and it's not even on the PowerPoint. Just sharing that with people who aren't as familiar is one way of trying to reach any people that might be receptive to these ideas. If there are more questions, feel free to type or raise your hand. I'm just going to take the opportunity to post this very brief survey; if everyone could fill it out, it would be greatly appreciated. We're just using it to more effectively reach out to people and set up public meetings like these in the future. It's just a few questions, a Google form at the link I just posted in the chat. If you could fill that out, that would be fantastic.

Griffin: Other than that, are there any other questions or comments? Or anything anybody wanted to say?

Ethan: Well, I think I have one more. I guess I don't even know how to frame this properly. So, you talk about getting in the right people, right? And unfortunately, these are the types of things that can tear entire families apart. Like, if you want to destroy the entire technological system, try telling that to your grandpa or your brother; they don't even understand the message. Do you see what I'm saying? You really do kind of have to be radical in order to sustain that kind of mentality. And I think for some it's going to be some kind of brief thing: you're going to try to break it down rapidly. Or, in the case of the doctor, he wants to go back in time. So I think it should be made evident that there are people with different plans and ideas, and that while the shared idea is to get rid of the technology, people have differing methods on how to do that. Do you think that message is made clear in this group?

Skrbina: Well, yeah, the group has fairly clear aims. I think we've tried to portray that; I mean, that's kind of why we're an anti tech collective. That's a pretty simple sort of thing. You can start at a very basic level: yeah, man, I'm, like, anti tech. Or people throw out this little phrase, Luddite. If you pay attention to the media, they say, oh, so-and-so's a Luddite, and they use it like an insult. And I say, no, actually, I am a Luddite. I said that in my book, in print: yeah, I'm a Luddite, and that's actually a good thing, and everybody should be a Luddite, by the way, if you want to survive. So just simple little discussions like that, just to get people onto the anti tech side of the fence and get them to start thinking. You don't have to go all the way to "we need to smash the system." Some people are ready for that and some aren't. But yeah, if you're dealing with your grandparents or your wife or your husband, then you need to say, let's just move over to the anti tech camp first, and then we'll think about how that's going to work, and then maybe we go a little bit further down the road. It's kind of a gradual process along the way. So like I said before, there are different strategies, different approaches, different words that you might use for different people. You gotta know the whole range of possibilities, feel out the person you're talking to, see where they're at, and then try to move them step by step in the right direction.

Ethan: Thanks so much guys.

Griffin: Thank you, Ethan. Does anyone else have anything they want to ask?

Alex: Thank you, David, for doing it.

Griffin: Thank you, everyone, for participating. We'll hopefully have another one of these in the near future, and if we do have an anti tech 101 number two, try to bring in some new people: bring some friends that you think might be receptive, or aren't as familiar, or maybe are skeptical or totally against these ideas, and see if we can't convince them otherwise.

Skrbina: Even skeptical people; we want to hear the other views too. We're willing to do a little debate thing. I want to hear the defense, and then we'll sort of shoot those arguments down one by one. We can take either side of the debate, that's no problem.

Griffin: All right then, I will say see you later. Thanks, everyone, for coming. Thanks, Dave, for presenting.

Alex: Thanks. Thanks everyone.

Griffin: We'll catch you later. Take it easy.