#title Will AI destroy us?
#author The Middle with Jeremy Hobson
#date Nov. 21, 2025
#source <[[https://www.listennotes.com/podcasts/the-middle-with/will-ai-destroy-us-wsdl-FupgtQ/][www.listennotes.com]]> & <[[https://www.iheart.com/podcast/1119-the-middle-with-jeremy-ho-102531530/episode/will-ai-destroy-us-308420187/][www.iheart.com]]>
#lang en
#pubdate 2026-01-15T01:55:04
#authors Jeremy Hobson, Andy Mills, Gregory Warner
#topics AI, LLM, hacking, Anthropic, ChatGPT, Claude, artificial-intelligence, humanity
#notoc 1

On this episode of The Middle, we're asking if you're excited by the possibilities that AI will bring, or if you're afraid that it will destroy us all. Jeremy is joined by Andy Mills and Gregory Warner of The Last Invention podcast, which was just named by Apple as one of the best podcasts of the year. DJ Tolliver joins as well, plus calls from around the country.

-----------

**Jeremy:** Welcome to The Middle. I'm Jeremy Hobson with our house DJ, Tolliver. And Tolliver, you know, a couple of weeks ago we were in Dallas and we did the show about whether people are worried that artificial intelligence is going to take their jobs.

**Tolliver:** Yeah, and I remember Andrew Yang saying he agreed that 100 million jobs could be lost in the next 10 years or so. I'm still sifting through all the reactions to that because people still got a lot to say.

**Jeremy:** We got so many responses, and so many voicemails too, that came in during and after the show from people who couldn't get through. Here are some more of them.

**Chris:** Hi, my name is Chris. I'm calling from Evanston. I'm a therapist worried about AI taking jobs, and I'm concerned that it's something that's just going to continue to be the easier thing that both companies and individuals turn to.

**Tony:** This is Tony from Hendersonville, Tennessee. I'm worried about my grandchildren who are graduating from college in the next three years finding jobs, because the entry-level jobs will go to AI and they won't ever get that experience.

**Ian:** My name is Ian Watley. I'm from Salt Lake City, Utah. I think that AI can be really useful. The problem is in our greedy society, we often just try to minimize costs as much as possible. AI will replace jobs, but I don't necessarily think it has to.

**Jeremy:** So Tolliver, that issue of jobs being lost to AI is only part of the story. There are much bigger fears about artificial intelligence, chiefly that it could get so capable, so smart, so fast, that it could be impossible to control. Now, these fears are not new if you've seen The Matrix or The Terminator, or the character HAL 9000 in the 1968 film 2001: A Space Odyssey.

**Tolliver:** Well, there's Haley Joel Osment in A.I. Artificial Intelligence, the list goes on and on.

**Jeremy:** It does, but listen to HAL here.

**Dave:** Open the pod bay doors, HAL.

**HAL:** I'm sorry, Dave. I'm afraid I can't do that.

**Dave:** What's the problem?

**HAL:** I think you know what the problem is just as well as I do.

**Jeremy:** Okay, so that was fiction, but now things are getting more real. This was on CBS's 60 Minutes just this last weekend, when the CEO of the AI company Anthropic was interviewed by Anderson Cooper.

**Anderson Cooper:** You believe it will be smarter than all humans?

**Amodei:** I believe it will reach that level, that it will be smarter than most or all humans in most or all ways.

**Anderson Cooper:** Do you worry about the unknowns here?

**Amodei:** I worry a lot about the unknowns.
I don't think we can predict everything for sure, but precisely because of that, we're trying to predict everything we can.

**Jeremy:** Well, we want to know this hour about your predictions, your hopes and your fears about AI. And I'm adding in hopes because I don't want to tip the scales just in a negative direction here. If you're excited about AI and you're not afraid it's going to destroy humanity, we want to hear from you. In either case, Tolliver, how can people reach us?

**Tolliver:** You can call us at 844-4-MIDDLE. That's 844-464-3353. Or you can write to us at listentothemiddle.com, or comment on our live stream on YouTube. And we already got a lot coming in, so get your comment in.

**Jeremy:** All right, joining us now are Andy Mills and Gregory Warner, the journalists behind The Last Invention podcast, which looks at the history of AI and where it could be going. Andy and Gregory, great to have you on. Welcome.

**Gregory:** Thanks, great to be here.

**Andy:** Fan of the show. So happy to be here.

**Jeremy:** Thank you, thank you. And I'm a fan of your show too, 'cause it's really amazing. People should check it out, The Last Invention. Andy, I went back to that 1968 clip of HAL, but AI goes back further than that, and so did the fears about what it could do.

**Andy:** Absolutely. It's one of the shocking things that I didn't know before reporting this series: that you go all the way back to the 1940s and to Alan Turing, who's kind of famous for being the father of computer science, and this incredible story that they've made into a movie about how he and his team of code breakers used an early form of a computer to crack the Enigma code and help the Allies overcome the Germans in World War II. What I did not know is that there, in the late '40s, Turing, looking at this computer, this big contraption that is so far from the computers that we use today, was already envisioning a day when that computer would think. And by 1952, he was already talking publicly about this theory that when it could think, it would probably think better than us, and that the digital intelligence of the future would likely take control.

**Jeremy:** Okay, so that was way back then. I mean, kind of amazing that they even thought about this at that point. But Gregory, what are the biggest fears now, as you've been reporting this, about what AI could do and how soon?

**Gregory:** Yeah, and before we get to the fears, I think just the realities are a couple of things about AI. First of all, it's an unusual technology in that we don't know what it can do before we make it. I mean, just imagine we didn't know what a car, a new update of a car, or a new operating system would do, what its capacities were, until we put it out there. So that's interesting. And that's, to some people, very scary. Another thing is we don't know how it works. Just like a child thinks, and you could raise a child in your house and not really know why this child suddenly thinks the way they do, we cannot look into the black box of AI and say, oh, it likes this, it has a preference for this because of this reason, and here's the code and we can change that. So we don't know how it works; it's a black box. But finally, we're on a trend line that it is getting smarter and smarter. And, as we heard at the top of the show, this could one day soon be smarter than any human, smarter ultimately than all of humanity.
And so at that point, what does that mean for human society, if we have a technology that powerful?

**Jeremy:** Let me just drill down on something you just said there. You said we don't know, we can't know, but it's created by humans. Don't they have a way to go in and fix it? I mean, already we've had situations where AI will do something that the creators don't like, it'll tell people to do bad things, and then they'll go in and they'll fix that problem with it. Andy, how come we can't do that at this point?

**Andy:** Well, what people think about when they think about AI right now is these chatbots. But these chatbots are not the thing that the AI companies are building. Think of them more like a website. What the website was to the internet, these chatbots are to the artificial intelligence. So yes, you could do things to the website. They have not quite dials, but they have something like dials that they can turn to tweak the preferences of the chatbot. For example, over the summer, OpenAI's chatbot ChatGPT was behaving in a way that people were describing as sycophantic. Every time you asked it a question, it would go, Jeremy, amazing question. Maybe the most brilliant question ever. And the reason that it did that is because they were tweaking it for people's preferences, and they realized that people liked to be flattered. And so inside the code they tried to tweak it as much as they could so it would be flattering. And they tweaked it too far, and then they went in and they could tweak it again. But even when I say tweak, it's not a technology that is like a usual product, where you create an algorithm and then you have an app. This is something that is so much more complex and so much more mysterious, even to the very people who created it and the people who are running it now.

**Jeremy:** Well, and Gregory, one of the things that you get into in the podcast is the idea of superintelligence, when AI becomes basically smarter than us at everything, which in some cases it already is, but in some cases it's not yet. But how close are we to that idea of superintelligence?

**Gregory:** I mean, saying how close we are is very difficult. There's a lot of predictions out there. I can tell you that people think we're a lot closer than even they thought a year ago. And these are the people closest to the machine. Predictions range, I think, from as early as March 2027. That's Dario Amodei, the CEO of Anthropic, whom you quoted at the top of the show. There's others who say it's further off than that. There's others who don't put a prediction. And there's those who say, you know, even a term like smarter than all of humanity is hard to define. But I do think of, like, Geoffrey Hinton. Geoffrey Hinton is the godfather of AI; we'll talk about him a little bit later. He's somebody who says, you know, the people designing this, the CEOs, they are used to people smarter than them following their orders. That's the nature of being a CEO, right? You hire people that are smarter than you, and hopefully they follow your orders. And so they don't see this flip where something smarter than us becomes uncontrollable. But that really is the debate. Is something smarter than us ever controllable?

**Jeremy:** Well, and let's talk, Andy, about the bright side of this. What about people who see a utopian future, where we don't have to worry about AI taking over and turning us into their servants?
**Andy:** Yeah, I mean, in the podcast, The Last Invention, which I hope everyone will listen to and really enjoy and share with all their friends, we dive into the debate that's happening between the people who are closest to this technology. And they split up into essentially three camps. There's the AI doomer camp, saying that this is going to be more intelligent than we are, we will not be able to control it, and the best thing we could do is stop right now before we hit artificial general intelligence, that thing that Amodei was talking about at the top of the show. Then there are the AI scouts, as we call them. These are the people who say artificial superintelligence may be the best thing that could ever happen to us. It could solve all these difficult collective action problems that we have, like climate change. It could come up with clean, renewable energy resources. If you have something that is a superintelligence, think about the fact that in a lab right now of the most capable scientists, at best you've got 18, 20, 30, maybe 200 of these super-intelligent PhD people, but they have to go to sleep. They take weekends off. They take holidays off. They're trying to solve the world's problems, but with limited resources. These superintelligences will be solving these problems 24 hours a day. But the AI scouts do say that is something we really need to get prepared for. We're not prepared for it now, and if we don't get prepared, we may face a catastrophic outcome. The most optimistic people we spoke to, those are often called the accelerationists. They're the people who think we should let it rip, that around the corner in the next few years, we may be experiencing something so profound that it's going to utterly change the world and change how we as human beings relate to each other, because it'll be the end of scarcity. It'll be the end of this obligation that we feel, that we have to work 40 hours a week to get a paycheck to rent a place. They say that we can experience a level of humanity so much greater than that. Some of them, including Jack Clark, who also works at Anthropic, he says that he thinks that in a world post scarcity, a world with superintelligence, we will be more peaceful.

**Jeremy:** Well, Tolliver, someone who has been at the center of this conversation is Elon Musk, who was one of the co-founders of OpenAI back in 2015.

**Tolliver:** Yeah, but in 2017, he was really warning about the dangers of AI. Here he is speaking at a meeting of the National Governors Association.

**Musk:** I have exposure to the very most cutting edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see, like, robots going down the street killing people, they don't know how to react, you know, because it seems so ethereal. And I think we should be really concerned about AI.

**Jeremy:** Well, Musk left OpenAI in 2018. He's now got his own competitor AI company called xAI. So he has sort of changed his tune a little bit, or maybe he's still worried and he just doesn't talk about it as much. But we'll be right back with your calls, coming up on The Middle. This is The Middle. I'm Jeremy Hobson. If you're just tuning in, The Middle is a national call-in show. We're focused on elevating voices from the middle, geographically, politically, or maybe you just want to meet in the middle.
This hour, we're talking about your hopes and fears about artificial intelligence with the hosts of the podcast The Last Invention, Gregory Warner and Andy Mills. Tolliver, the number again, please.

**Tolliver:** It's 844-4-MIDDLE. That's 844-464-3353. You can also write to us at listentothemiddle.com or on all social media.

**Jeremy:** And the lines are full. Let's go to Stan, who's in Longmont, Colorado. Stan, your hopes or fears about AI.

**Stan:** Hi, thanks for taking my call. I'm kind of like right in the middle with being hopeful or fearful about artificial intelligence. I think that as a tool for humanity, it is what we make of it, and it is what we guard against for it, the same as like a weapon or explosives, right? And if we put in the time and effort and intelligence to put in intelligent guardrails, then it could be a very useful, very beneficial tool. But if we just kind of rush into development for the sake of development, which kind of is the historical trend of late with our economy, then, well, artificial intelligence is based in computers and computer networks currently. And what happens if it becomes able to program itself, update its code, and it decides to start causing havoc with our computer systems and our computer networks?

**Jeremy:** Good point, Stan. Gregory Warner, does it update its own code?

**Gregory:** Yes. I mean, there's a theory about a recursive AI that will ultimately be able to make a smarter AI, and then that AI makes an even smarter AI, and so on. That's the intelligence explosion that you often hear talked about. No, we're not at that point yet. It is an incredibly good coder, and we know that it's a very good hacker as well, because this has been reported: it can launch cyberattacks. So its coding abilities are not to be questioned. What I'm also hearing in the caller, though, is the repeated concern that it's not just about the technology, it's a pessimism about human society and what we're going to do with this, or not do well with this.

**Jeremy:** Denise is calling in from Madison, Wisconsin. Hi, Denise, go ahead with your thoughts about AI.

**Denise:** Hi, thank you so much for taking my call. I'm cautiously optimistic. I think I agree with your guests that there could be really incredible leaps and bounds in science, the environment, all of that. But I'm very concerned about people taking what they get from ChatGPT, or any of those, as fact, period, and losing the ability to think critically and deeply.

**Jeremy:** Interesting. You know, Andy Mills, on that point, one of the things I've read about is that it's going to be good at, I think it was Geoffrey Hinton who said this, it's going to be good at convincing us of things.

**Andy:** Yeah, I mean, this is the sci-fi-movie-sounding plot that we are all now finding ourselves living in. What would it be like for us to no longer be the most intelligent thing on the planet? And what relationship will we have with this superintelligence? There are people who fear that it will come to see us the way we see other intelligent species. We don't hate the dolphins. We don't hate the apes. We don't hate ants. But how much do we think about them? What is it we choose to do with our lives? The most meaningful things that we do often don't have much consideration at all for these other conscious creatures and intelligent creatures. And that is something that the people who are developing this are grappling with, and have been grappling with really since 1965.
There's a long track record of us wondering what will happen when this day comes. And what's so strange and frightening and exciting is that it appears this day may be here. It may be close at hand. And so the time has come for us to really turn these theories into a social debate, into a social conversation where we can collectively decide this, because if we don't, it's going to be decided by a handful of people at these tech companies.

**Jeremy:** I said that we were going to let the hopeful people in. Oh, no, I just lost her. I thought I was going to go to a hopeful caller who I saw was on the line. But now we'll have to go back to fears again. Janice is calling in from Westland, Michigan. Janice, are you fearful about AI or hopeful?

**Janice:** Mostly I wonder, like with ChatGPT, with its hallucinations, which I would prefer to call delusions, how would you ever be able to train or convince an AI to have compassion or judgment? I mean, they have no concept of real reality. It's like people assume that Asimov was, what is it, three laws, four laws, you know.

**Andy:** Oh yeah, you know your stuff.

**Janice:** I do, yeah. Hey, I'm old. I read a lot of science fiction, yeah. I mean, you know, Susan Calvin, or the guy who cured HAL, supposedly, in the Space Odyssey sequel 2010. I don't know, they are not people. They are machines. They do not, in my opinion, have a concept of real reality. And that's what I'm worried about.

**Jeremy:** But do you think that they will, Janice? Are you worried that they will soon, maybe in the next couple of years?

**Janice:** Well, if Mr. Musk has his way, absolutely. As far as he's concerned, they already do. I mean, his comment about toxic, that's it, I don't understand. I really don't.

**Jeremy:** Well, Janice, thank you for that call. Gregory Warner, it's interesting that she brings up Elon Musk, obviously the richest person in the world, but also somebody who did raise a lot of concerns about AI back in the day, jumped off of the OpenAI board, and then now he's very much in the AI game.

**Gregory:** Yeah, Musk's whole journey is quite interesting, because he was such a doomer. He was warning the world. He even met with President Obama in 2015 to try to get him to regulate AI. He gave a speech to the governors' conference in 2017 to try to get them to regulate this. He said this is bigger than nukes. And he felt that nobody took him seriously, and so he co-founded OpenAI. Allegedly, I mean, this was the mission: to save humanity, to create a safer AI. But his transformation has been quite public. And now he says he is planning to build superintelligence as fast as he can.

**Andy:** I think what's important to note about this, though, is that if you go back 10 years, almost no one, even in Silicon Valley, believed that something like AGI, a true thinking machine, was going to be possible in our lifetimes. You were seen as odd back then. And the true believers, many of them like Elon Musk, they were the loudest voices warning about the potential dangers if we did create this. And look at who was saying that 10 years ago: people like Elon Musk, Sam Altman, Dario Amodei, Demis Hassabis. Those are the very people right now who are leading the top AI labs, who have the most powerful AI systems. And when it comes to why, what happened? Did they change their mind? They didn't change their mind about the danger it poses.
They just realized that somebody somewhere is eventually going to make this. And because it's dangerous, they have determined that the safest thing that they could do for humanity is make sure that they make the safe AI before anyone else makes the dangerous one. And it's why they're so invested in AI safety.

**Jeremy:** Okay, Tolliver, lots of great calls here. What about some of the comments that are coming in from people online?

**Tolliver:** I'm going to do one negative one, and then I'll hit you with two positives, okay? I'm going to pick up the slack. So Albert in Wisconsin says, AI, that is a $64,000 question. First of all, it seems to be a good excuse for energy companies to charge more for ready kilowatts. And he also highlights environmental concerns in addition to that. Okay, the positive one. So Paul says, Jeremy, humans have always adjusted to change. AI may pose a real challenge, but I trust we will work through this new issue as we have in the past. And then, okay, this actually isn't positive. Frank from Las Vegas says, can I say that it excites me that it might destroy us all? Seriously, the self-importance of our species has been to the detriment of the rest of life on Earth. So a little bit of a mixed bag on that one.

**Jeremy:** All right, let's get to another call here. Ben is calling from Tampa, Florida. Ben, what are your thoughts?

**Ben:** I'm excited. I feel like if anyone's going to take over the world, it's going to be humans using the AI. And I agree with some of the sentiments about we need to make sure that we're controlling this so that we can utilize it, because someone's going to do it eventually. If it's not us, or if it's not someone who has maybe a good kind of direction for it, then it's going to be someone with a bad direction for it. Also, to be honest, I think without the biological imperative to persist and to reproduce, we don't really necessarily have to worry too much about it wanting to enslave us. It has no reason to want to have land. It has no reason to want to have territory. Those things don't exist for it. So it's going to be, you know, people utilizing it in a really destructive way, ultimately, like anything, in my opinion. But I think that you could say that about any technological advancement.

**Andy:** Well, just to jump off what the caller was saying, that is the view that Bill Gates has. That is the view that some of the most accomplished technologists of our age have. They believe that this is going to be powerful, that of course it's going to present risks and it's going to be frightening. But you'll often hear them say, like the CEO of Google often says, that this is gonna be a moment like the discovery of fire. And fire didn't just make it to where we could stay up late at night, although of course it did. It didn't just keep us warm in the cold weather, though of course it did. Fire gave us a different diet and fundamentally changed our brains and made us more intelligent. They're saying that is the equivalent to point to here, and who knows the amazing things that can happen on the other end of that intelligence explosion. And when it comes to the government, it's really interesting. Unlike any other big industry since the Industrial Revolution, as far as I know, the AI industry is the only industry that has from its inception been saying to the government, we want to work with you, regulate us. What we're building is terrifying even to us. Let's work together.
And both the Biden administration and the Trump administration have taken a lot of meetings. They've worked together with these companies. I will say that, as of right now, there are no federal regulations on this. I don't believe much has been put forward. It does feel as if they are scratching their heads. And one of the reasons that we wanted to do this podcast is because we think that this has not yet become the public debate that it deserves to be. And it has not even yet become a debate that most of our lawmakers have taken up. Although increasingly, you know, they're tuning into our podcast. They're wanting to figure out their own positions on this. It has not yet become partisan and polarized. But I do suspect that by the next presidential election, this will become one of the largest issues.

**Jeremy:** Let's go to another caller before we take a quick break here. Cheryl is in Northern Illinois. Cheryl, go ahead with your thought about AI.

**Cheryl:** I work for a university, and because I handle a lot of data, I have to put in so many hours of training. This year, we had training in AI because we wanted to protect our data. And one of the facts that they made clear to us in this training was that AI, especially the chatbots and that type of AI, works like any other computer. It calculates a percentage: what is the best bet for the next word in the sentence? And so, given that, number one, AI is not going to be something that has consciousness. And number two, the real threats are, first of all, people who are engaging in magical thinking and making up stories about what AI can or will do in the future, and second, that AI could be used to alter images and stories so that we're getting for-real fake news and we wouldn't be able to tell the difference.

**Jeremy:** Yeah, Cheryl, a very good point. Gregory, that is a nearer-term concern. In fact, we're already seeing that now. I mean, I think Donald Trump has already shared fake AI videos of his political opponents. But, you know, that is a near-term concern, but that doesn't negate the idea that there is a bigger concern about AI doing more than what it's already capable of, obviously.

**Gregory:** No, absolutely. And I think that this point that the caller mentions, it's such an important point, because I remember even last year saying, oh, well, it's just predicting the next word, or the next token, a piece of the sentence. So it's not smart, it's just a good probability engine, right? But there are very few people who talk about it that way anymore, because it's clearly exceeded that stochastic parrot, I think that's often the phrase for it. Of course, it's a black box, but it's now pressing on the question of whatever real thinking or real intelligence is. The other point to make is that the predicting-the-next-word part is just the LLM, that's the large language model. That's just the chatbot.

**Jeremy:** Large language model.

**Gregory:** I think one of the key points of the podcast is AI is not just chatbots. Chatbots are not just AI.

**Jeremy:** It's something that I'm gonna remember just from this conversation, very interesting. Tolliver, you know, a watershed moment in our understanding of AI, and it was about the chatbot, came when the New York Times columnist Kevin Roose published a very bizarre conversation with Microsoft's Bing chatbot right when the public got access to it.

**Tolliver:** Yeah, that's right.
Listen to this clip from The Last Invention podcast where the chatbot, which adopts the persona Sydney, begins professing its love for Kevin Roose.

**Kevin Roose:** I said, you know, I'm flattered by all the attention, but I'm married. And it said, well, you're married, but you're not happy. You don't love your spouse because your spouse doesn't love you. You should leave your wife and run away with me, Sydney, the chatbot.

**Sydney:** You just had a boring Valentine's Day dinner together because you didn't have any fun. You didn't have any fun because you didn't have any passion. You didn't have any passion because you didn't have any love. You didn't have any love because you didn't have me.

**Jeremy:** Wow, so poetic and romantic, Tolliver, isn't it?

**Tolliver:** One can only dream of a love like that.

**Jeremy:** I know. We'll be back with more of your calls coming up on The Middle. This is The Middle. I'm Jeremy Hobson. In this hour, we're taking your calls on your hopes and fears about AI with Gregory Warner and Andy Mills of The Last Invention podcast. You can call us at 844-4-MIDDLE. That's 844-464-3353. Or you can reach out at listentothemiddle.com. Before we get back to the phones: one of the difficult things about putting the genie back in the bottle, if that's what we wanted to do, is that the US is not the only country trying to build artificial intelligence. What did you learn, Andy, about how far along other countries like China are?

**Andy:** Well, there's a lot of speculation about this. And of course, it's one of the reasons that you see more bipartisan support than you might expect around the acceleration of this technology in the US: it appears from some of the sources we spoke to that China is nine months behind us. Others say maybe just six months behind us. And that's when it comes to the capability of their AI systems. When it comes to China implementing artificial intelligence into their society, into their businesses, they're actually ahead of us already. They are more interactive as a society with the technology at this moment. And so there is a sense that if we were, let's say, to be cautious, to take a beat, to slow down a little bit, we might be handing it over to China. That being said, I will say that China does also appear to think that we will probably beat them in, like, the AI systems department. And they are pivoting a lot more of their resources towards the development of the robots that they believe American AI developments and technologies will probably power to do a lot of the jobs.

**Jeremy:** Okay, let's go to Dirk, who is calling from St. Paul, Minnesota. Dirk, go ahead. Tell us your thoughts about AI.

**Dirk:** I have to admit, first of all, I'm pig ignorant of all the events about AI after ChatGPT. I'm just wondering in the future, if AI is accepted and does work to some degree without annihilating everybody, will we have to have proof of income to get a license to have children? Or what will be done with all the people?

**Gregory:** No, I so welcome that, because I think that as journalists, we shy away from those kinds of speculative questions, and we leave them to sci-fi. But it's to our detriment, because the caller is absolutely right. In a world where there's a superintelligence, and Eliezer Yudkowsky likes to say this, he says humans take up 100 watts per person. So the robots or the superintelligence may not want humans around, or may see a cost to more babies.
I know this feels very sci-fi, but I think it's so worth just thinking through how our societies might change, and in a serious way. I don't know about the license to have children, but what might work mean? What might the role of schools, or of learning, be when machines can do every single thing better than we can?

**Jeremy:** Tony is calling from Rockledge, Florida. Tony, are you concerned about AI getting too powerful?

**Tony:** A little concerned, but also a little hopeful. But I just want to go on record and state that I, for one, welcome our new cybernetic overlord.

**Jeremy:** Kodos, why?

**Tony:** For one thing, I'd rather it not end up like the Forbin Project.

**Jeremy:** Andy, your thoughts. I know, yeah.

**Andy:** I would like to take this caller's inspiration to give some optimism. Think about the society that we live in right now and how many problems we're facing, how much nihilism is growing, especially among young people who don't feel like the future is going to be better, who maybe are addicted to their phones. What we're talking about here is the opportunity for a profound change. And in the debate that's happening among the people who are close to this technology, many of them are, like, absolutely pumped about what might be coming. And they don't want us to let our fears dictate our decisions. And they will point out that the reason that most sci-fi movies about AI are scary is because that's an easier, better movie. A sci-fi movie where everything good and nice happens on the other side of AI is just not that dramatic. It's not going to sell many tickets. And they're trying to remind us that, of course, change is scary and fearful, but we could be free of these screens very soon and have a different relationship to technology. The algorithm running TikTok right now, that is kind of an AI, but that's not an intelligent AI. That is a manipulative AI. Imagine instead that you're in conversation with something that is more intelligent than Albert Einstein, that is attuned to helping you achieve the goals of your day. I'm not saying it's going to happen, but I will say that we want to balance out the serious risks that are coming our way with the fact that it might be a profoundly better world. And even when it comes to jobs, I know we like our jobs. But most people don't like their jobs. Many people work jobs that are hard, that are dangerous, that are meaningless. Those people would feel liberated in many cases to no longer have to do that job, to find another means of survival outside of spending so much of their lives toiling away at work that they don't really love. And I don't know if it's going to happen, but that's what the technologists are talking about. That is the future that they believe they're ushering us into.

**Jeremy:** By the way, Tolliver, I know those were fighting words when he said that TikTok was not an intelligent AI, because Tolliver loves his TikTok.

**Tolliver:** And I do kind of like AI. Don't tell anyone. Yeah.

**Jeremy:** Let's go to Joe, who's calling from St. Louis, Missouri. Joe, welcome to The Middle. Go ahead with your thoughts about AI.

**Joe:** Hi, yeah, hey, thanks for having me. I just wanted to say, aside from being excited or scared, I think a lot of people just misunderstand and ascribe human motivations to something that's so structurally different from us that we really can't even begin to predict what would motivate an AI intelligence.

**Jeremy:** Well, what do you think will happen then?
**Joe:** So like, I mean, like, so, you know... Biologically, we're driven by certain motivators, like land and resources and stuff like that. But what would motivate an AI? So I read a brilliant book about it called The Stories of Ibis, where ultimately they were motivated by knowledge, so they sought to explore, or whatever, right? But trying to predict the motivations of AI by ascribing human motivations to it, I think, is flawed from the beginning.

**Jeremy:** Very interesting, Joe. Gregory, did you get into any of that? Do we know anything about what might motivate AI, or is it just things that humans are inputting that would motivate AI?

**Gregory:** You know, this is such, he gets it, the caller gets it, such a deep philosophical question. And just to say it simply, there are many who say the AI does not have goals in the sense that we have goals to eat and procreate, but it is modeled after human-like agency, right? You hear this thing about agentic systems. That's the new trend in AI: to have these systems that not only can do one thing, but they can go off and do a week's worth of work for you, and do a whole job, or book a plane ticket, or go beyond that. So once we're copying human goal-pursuing behavior, the theory is, you are also copying human flaws, which include selfishness and deception and power-seeking, now scaled up with superhuman competence. So that's the concern: because we've modeled it off of humans, that is the intelligence it is copying.

**Tolliver:** Can I get a quick question in, Jeremy? I've been wondering this for a while. So is there any appetite for consumer-grade, something on your phone where you can detect AI? Because I'm looking at reels, I'm listening to songs, and I personally would like to know. That would give me a lot of peace, to know that this is AI. Is that in development?

**Andy:** Sam Altman talks about this pretty regularly. It's one of the regulations that he's called for: if there's an image or a video that's posted online, to find a watermark, to find some way to signal it. And yet, not long into Sora, the newest app from OpenAI, they removed the watermark because people didn't like it. And I do think that that's going to be something that we're going to have to navigate in the short term. There are people who are engaged in that kind of discussion inside these labs and in the U.S. government. I do think, though, that the larger piece of it is this blurry line that happens between us and it. It is learning from our data. We are training it. We are the ones who are going to be using it. And already we're seeing signs that people are using it far more than they were using the internet when the internet was this old. We're integrating it into our society so much faster. And what is the line? When I write an email and I use spell check, I don't alert you to that. What is the line when it comes to this dance between us and it? And what is authored by a human, and what is authored by this quote-unquote artificial intelligence trained off of us and what we know? It's a deep philosophical question. And it's one of the reasons that these AI labs employ philosophers and are recruiting philosophers to help them think through all these, you know, thorny issues.

**Jeremy:** It's interesting, because I feel like it wasn't all that long ago when I would ask people if they were using AI and only a handful of people were.
And now I feel like everyone I talk to is using some sort of AI, and actually very few people are not using it. I mean, I'm sure there are plenty of people who aren't, but I'm just noticing, anecdotally, that more people are using it. Let's get to a couple more calls here. Jacob is calling from Tampa, Florida. Jacob, are you afraid about AI or excited about it?

**Jacob:** No, I'm extremely excited. I was a 19-year-old high school dropout felon in 1998 when I got into the advertising industry selling media. And it was right at the cutting edge of the internet becoming widely available. And it gave me and the people that I've worked with over the last almost 30 years unprecedented access to information, contact, interaction. We were able to learn new things, expand our businesses over and over. I've been part of six very successful startups because of it. And my hope and my expectation, and the experience I've had with it just briefly over the past year and a half or so, is that the AI tools that I've been using, that my friends have been using, have done the same thing. It's allowed us to activate ideas, to take action on things that we wouldn't necessarily have been able to do, because the skill set that is necessary for it would have been too expensive, too time-consuming, and slowed us down. So I'm extraordinarily excited and hope that my children are able to have these fantastic tools and do things I can't even imagine right now.

**Jeremy:** I appreciate the call, Jacob. I'm glad you're excited about it. And I have to say, Gregory, I'm surprised by the amount of people who have called in so far and are excited, not scared.

**Gregory:** I mean, I don't think it's just the United States. I mean, a number of humanitarians... I was an international correspondent for many years, and a number of my friends in Kenya and other places who do a lot of work on big global issues are very excited about it, because they feel like they're fighting fires and nothing's changing. I mean, look, there's a great thought experiment by Stuart Russell, and I know we keep talking about philosophers, who says: imagine we got a message from a super-intelligent alien species, and they said, we are 50 years, I think it's 50 years, right? We are 50 years away. Get ready. We're...

**Andy:** Coming to your planet?

**Gregory:** We're coming to your planet. We're 50 years away. Get ready. What would we do if we knew we had 50 years to get ready for superintelligence to come? One thing I think I would hope we would not do is each make our own decision individually about what to do about our job. Maybe some of us would learn the alien language, some would become translators. You know, everybody would have their own solution, as opposed to a collective, society-level conversation about, well, okay, what should we do? I'm not saying the answer's easy, but I think it would involve all of us.

**Jeremy:** I want to get to one more call before we close out the hour. Addison is in Ypsilanti, Michigan. Addison, are you scared or excited?

**Addison:** Thanks so much for taking my call. I'm honored to be on the show. Frankly, given this conversation, I'm even more frightened than I was at the start. I've already seen real-world effects. I have friends that work in creative spaces that have lost opportunities,
seen their work used in other ways that they didn't consent to. And just recently, there have been two large data centers planned in my community that are going to jack up electricity rates and potentially affect the groundwater. I live in Michigan, and water is one of our most important natural resources. And outside of what I see right now, I'm just worried that we're sleepwalking into some kind of surveillance state. We already have the tools, and I worry that AI is just going to make them more effective.

**Jeremy:** Well, Addison, I'm glad you brought up how much water and electricity AI is using, because time and again we hear from people on the show saying that they're really worried about how many resources AI is using. Andy, what do we know about that and how much is being used? And is it going to require even more resources in the future?

**Andy:** I don't know if people even can wrap their minds around the amount of resources that we are pumping into the creation of this artificial intelligence. It is drinking up lakes. There's no comparison to the amount of energy that it's going to need. In fact, I recently was talking with an AI researcher at Google who believes that they're creating something far more like a god than a product. And I asked him, what are the limits? Like, what are the things that would stop you? And he said, we may need all of the fossil fuel. We may need it all. Like, we may need that much energy to create this thing. And they believe that it could be good. Now, he might have been being hyperbolic. You know, this was just an off-the-cuff conversation. But that's one of the reasons that Greg and I made The Last Invention, one of the reasons that we're trying to get people to join the conversation and the debate: because this is affecting our world now. This is absolutely going to affect our world in the future. We don't know how big that effect will be, but it already is shaping up to be absolutely profound. And so the time to join the conversation is now.

**Jeremy:** And we thank all of our callers for joining the conversation this hour. I want to thank my guests, journalists Andy Mills and Gregory Warner. Their podcast is called The Last Invention, available wherever you get your podcasts. Guys, thank you so much for coming on The Middle.

**Andy:** Thank you. Thanks, everybody, for calling.

**Gregory:** Yeah, great to be here. Super interesting.

**Jeremy:** And next week, we are going to be exploring the philosophical middle: what it means, what we can learn from it, and how it can improve our politics and our daily lives.

**Tolliver:** Head on over to listentothemiddle.com to join the conversation, and subscribe to The Middle wherever you get your podcasts so you don't miss a single episode.

**Jeremy:** The Middle is brought to you by Long Nook Media, distributed by Illinois Public Media in Urbana, Illinois, and produced by Harrison Pitino, Danny Alexander, Sam Burmas-Dawes, John Barth, Annika Deschler, and Brandon Kondritz. Our technical director is Steve Mork. Thanks to our satellite radio listeners, our podcast audience, and the hundreds of public radio stations making it possible for people across the country to listen to The Middle. I'm Jeremy Hobson, and I will talk to you next week.

*** Appendix

**Jeremy:** Here's some of what we got on our voicemail after the show.

**Pamela:** My name is Pamela Taylor. I'm calling from Houston, Texas. I'm concerned about AI, mainly because of workers.
You know, if people are going to be put out of work and not able to find another job, what's going to happen down the line when you don't have people able to find jobs because they're being replaced by AI? I think it's going to be mass unemployment.

**Josh:** Hey, this is Josh from Columbia, South Carolina. Humanity, whenever it has to face an existential threat, like, say, World War II, that sort of devastation unites people. And I think with an existential threat, if we see it through AI, it's going to unite humanity in order to face that. I think we do better with that sort of adversity as a human race.

**Carol:** My name is Carol. I'm calling from Milwaukee, Wisconsin. I think AI has already pretty much leveraged itself. I think it's letting us think that we're still in control.

**Jeremy:** Well, thanks to everyone who called in. And you can hear that entire episode on our podcast, in partnership with iHeart Podcasts, on the iHeart app or wherever you listen to podcasts.