
Mythic U
Join us to explore practices for discovering the stories that animate each of us. By understanding the meaningful stories that are your personal mythology you can choreograph your own unique way of attending to the needs of your soul. Hosted by Karen Foglesong and Erin Branham
Artificial Intelligence and the Soul
Artificial Intelligence is everywhere now, available for your use and possibly coming for your job! Karen and Erin discuss the hefty moral implications of AI for humans - and for the artificial intelligence itself. The discussion ranges from AI art in competitions to the Church of AI to (of course) Star Trek, the Borg, and Commander Data (with a little Star Wars thrown in for good measure).
SHOW NOTES
What Isaac Asimov Can Teach Us About AI - article in The Atlantic
The Mathematical Basis of the Arts by Joseph Schillinger, 1943 - Internet Archive
Inside the First Church of Artificial Intelligence - article in Wired
We want to hear from you! Please rate and review us wherever you find this podcast. Join our Patreon: patreon.com/yourmythicu
Karen Foglesong:Hi, everybody. Welcome back to Mythic U. I'm Karen Foglesong
Erin Branham:and I'm Erin Branham.
Karen Foglesong:Today we're discussing artificial intelligence, like everyone else. Why all of this AI stuff? Well, if you haven't heard, AI is coming for your job, your town, or your college essay. If you're a creative professional — a graphic designer, a writer, a visual artist — businesses are actively using AI rather than hiring you. I actually just heard about somebody being told specifically what pieces of the job description to put in their cover letter so that they didn't get pushed out of an AI interview process. And Marvel Studios just released a major TV show, Secret Invasion, starring Samuel L. Jackson, and its opening credit sequence was created by AI rather than a human artist. That could be scary, so let's start by defining it. What is AI?
Erin Branham:So when we talk about AI, most people know it stands for artificial intelligence, but what does that actually mean? Artificial intelligence is machine-based intelligence. It's been theoretically possible for a very long time. A little bit later, we'll talk about how we've thought about machine intelligence and how far back that goes, but it's becoming actually possible now — the technology has gotten to the point that we're getting closer to something that is like real machine-based intelligence. Debates around AI have overlapped with debates around animal intelligence and human intelligence. So when you try to get down to it: what does it actually mean to be intelligent, right? When we say that, what does that word mean? What would it mean for a machine to achieve that? What is consciousness? Sometimes the word sentience is used in science fiction and other kinds of conversations around artificial intelligence, but basically the question you have to ask — the thing that's trying to be achieved — is what makes a non-human intelligence equal to, equivalent to, or the same as human intelligence. Which means you basically have to truly understand what human intelligence and human consciousness -
Karen Foglesong:And we don't.
Erin Branham:And we don't, right, at least not fully.
Karen Foglesong:We don't. So we're building something we don't understand based on something we don't understand-
Erin Branham:Which I think is part of why it's a scary concept, yeah, right? Because you start going, okay, well, yeah, we're building something we don't understand based on something we don't understand — and in the way that humans have, we can reach far out of our realm and do stuff, and it's not always good. It's not always bad, either. I have to say, I am actually not a doomsayer about artificial intelligence. I don't hate the concept. I think there are ways in which it's really a beautiful quest. I just think we've got to be really, really, really, really careful.
Karen Foglesong:I'm not a doomsayer either, Erin. I'm curious more than anything. I'm curious about what it's doing, yeah. But I do have some, maybe, I don't know, some concerns at the same time, right?
Erin Branham:Concerns, absolutely, right. As we said, the kind of core question then is: what is intelligence? Right? Some people say information processing is really the measure of consciousness, and that's been part of why AI is coming to be now, because the storage capacity of computers has gotten large enough, and the processing speed has gotten fast enough, to go through all of that information and produce something akin to what a human would produce. They can take all that processing power and put it on a teeny tiny postage stamp now, rather than, like, a giant weight. The real factors, the real sort of hard limits of technology, have been overcome for the most part, and now we have things like ChatGPT that you can have a conversation with, and it'll feel, you know, pretty much like a human conversation. And that's why it's, you know, joked about with college essays, because it's starting to write — it can write a paper, it can write a novel. And creative people are starting to both kind of use it and be replaced by it at the same time. So when we talk about information processing, then there's also kind of the question of: what is consciousness, or what is self-awareness? Does this program actually have some level of self-awareness? When you talk to it, it will say things like, "Well, I am an information processing program," so it seems to know what it is and to be able to respond. So that's an interesting concept — how does self-awareness come? A lot of people argue that's an emergent property that happens when there's enough information and enough information processing.

Karen Foglesong:Yeah, I've heard that argument, but I remember in early biology classes that one of our markers of intelligence is self-awareness. So we've been trying to get monkeys and dogs and birds to look at themselves in mirrors for generations. Recognition of self in a mirror is a key factor in our intelligence, and these AI have already surpassed that.

Erin Branham:It seems like it. That's, you know, one of the great questions of information processing and artificial intelligence — what's called the Turing test. Alan Turing was one of the great early pioneers of computers and computer processing. There's been a great movie made about him, and he developed a kind of test which says: if you can have a conversation with a computer and you can't tell that it's not human, then it's done — if it can simulate being a human closely enough that a human can't tell the difference, then it is intelligent. Because how can you say it's not? If you can sit there and go, well, but is it just simulating it? Even if it is just simulating it, it doesn't matter.

Karen Foglesong:I mean, there's some argument to be made that our children are just simulating us — they're just copying us to get to a level where they have their own ideas too. And if there's anything to this accumulative idea of knowledge, which I think there is — because I've read many articles about how rote memorization in the early levels of school is important, because it gives you a filing system to build on, to add information to and categorize — then it seems like at some point there is some threshold where something else will happen beyond what we've fed into the computer.

Erin Branham:Well, that's been one of the other big quests of artificial intelligence: learning. Machine learning. One of the things that distinguishes humans is our ability to actively learn new information, new skills, new processes. You're talking about children, right? Very, very young children. I work in education, so I've done a lot of study of brain development. Children below the age of seven — their brains are literally different; they process the world differently than the way our brains process the world, and that's why little kids are the way they are, right? Everybody knows, right — three, four, five, six — they are just operating on a different level. They are just pulling in everything. Their brain is grabbing everything and storing it and connecting everything to everything else, which is why they're so wildly creative. Then at around seven, your brain switches to this new mode, which is called mastery. The first part is discovery; the second part is mastery. And mastery pares down. It goes through all those connections, and it says, that's not a real connection. Just because the ball was on the table when you were three does not mean a ball goes with a table, right? Because a three-year-old's brain thinks it does. There's literally a synaptic connection between those things. And then as you go along and have experiences over and over again, you start to say, oh, here's a real connection: chair goes with table. Ball may be near table, but it doesn't actually go there. It goes somewhere else — on the, you know, the court. And so you start to learn that, and you pare down and pare down. And there have been pieces of research and simulations that people have done with computers, that have tried to teach computers how to learn, and that is actually starting to happen again. 
That's one of the things I think ChatGPT does: as you talk to it, or have a conversation with it — or if you're, say, a writer working with it — you can continually say, no, I need this, I need that. Honestly, for this podcast, I work with Otter.ai to do the transcripts. And as it goes along, the more of them I do with it, the more it learns to get your name and my name correct on who is speaking.
Karen Foglesong:Oh, nice.
Erin Branham:And it learns our particular vocabulary patterns, and it can start to, you know, be more accurate than it was the very first one I ran through. It's just a program that I pay a teeny amount of money for, but it's actively learning in a way that computers really didn't before.
Karen Foglesong:And another aspect of learning, as far as I understand it, is imitation learning. Many creatures do this: the mother does an action and the baby follows the action, and that's like a first stage of learning. But something that humans do that is unique, to my understanding, is that if we get enough of those first stages of learning, we can take what we learned in those little stages and put them together in a brand new way that nobody has taught us and we've never seen imitated. Is that correct, with what you understand of it?
Erin Branham:Well, yeah, and that's part of what I thought as we were thinking about artificial intelligence — things like ChatGPT that can write your college thesis, or that novel that you always wished you could write but that you honestly can't on your own. Well, you know, if you could do it without the AI, you would have done it, right? That's what I tell myself. I'm 53 and I've always wanted to write a novel, and I have to come to a point of saying, you know, if I really loved writing that much, I would have written that much, and I haven't. And now I can make a computer do the actual hard work — which is honestly the part that's rewarding about writing, so I don't understand why you would do that anyway. What it does is it trawls through a whole set of content that human artists have made, and it generates some remix or recombination of that content, but what it produces is exactly as good as the raw material it was fed, yeah,
Karen Foglesong:like a giant collager.
Erin Branham:Exactly. And so what I would say is that that's the thing that makes it different from a human creating art, because humans make choices. You know, we also have a ton of content that we draw from to remix and to create things when we're making art. However, a human makes conscious choices about what to incorporate — it says, I'm going to draw from this inspiration and that inspiration. Your conscious mind tells you that, right? But then your unconscious mind is throwing a whole bunch of stuff in there that you're not really aware you're putting in there.
Karen Foglesong:Yes.
Erin Branham:So this is this weird, tricky thing: yes, I'm doing conscious stuff, but I'm also doing unconscious stuff, and it's happening in this way. And a human can transcend the source material, right? Can go somewhere that nobody has ever gone before, or take something — the thing that occurred to me was, I don't know if anybody's ever actually read the book The Godfather.
Karen Foglesong:No.
Erin Branham:But it is a smarmy, pulpy potboiler — not a great book, okay? Fun to read on the beach, but not a great book. And Francis Ford Coppola and all the people who created the movies could take that and turn it into an operatic, cinematic masterpiece.
Karen Foglesong:Right.
Erin Branham:Because they were able to find something else — something else emerged out of that. Now, I think artificial intelligence has the potential to reach a state where emergent properties happen, and I think that's largely what we're afraid of.
Karen Foglesong:Well, that's sending me in all kinds of different directions, but I think the most pertinent at this point is that I want to point out there was a man named Schillinger who believed that art could be produced with a mathematical equation. Because what we're talking about here is that kind of magic that happens in the conversation between the subconscious and the conscious when an artist is producing, right — that waterfall of the subconscious jumping in there, where you might make associations that your conscious mind wouldn't. Schillinger said that all art could be produced through a mathematical equation, and he actually did it. He produced two-dimensional artworks. He produced sculpture. He even came up with a formula to create the choreography of a dance piece depending upon the measurements of the space the dance was going to happen in. Now, as an artist, I experimented with this process, and for me, it was not fun. It didn't fulfill what happens — the magic, the so-called magic — when I'm creating even a dance piece. Because if I walk into the studio with a grid pattern and put it on, say, nine people who have volunteered to be a part of this piece, I have totally excluded anything that they could bring that might create an emergent property, a kind of newness based on the shape of their body, the way their minds work, the way their soul interacts with the beginning material, the thematic material that I give them, right? So I acknowledge that what Schillinger did is possible, yes, and with the two-dimensional pieces, if you look at them, a layman may not be able to tell the difference. They're valid pieces, and I assume that a computer would be able to do exactly what Schillinger did, because it's based on a mathematical equation. But you're losing that conversation with the subconscious. 
And then the next question brought up with AI is: can the AI develop a subconscious based on the imagery that it's being fed?
Erin Branham:That is — I think that gets down to the kind of core questions. If artificial intelligence is seeking to achieve something akin to human consciousness, and what is human consciousness, then you get into that concept of, well, what makes humans human, right? What is that special thing that has happened to humans amongst all of the creatures on earth? As far as we know — and, you know, we continue to find new information about the intelligence of cetaceans. Look at the orcas right this minute, who are plainly involved in some collective learning about how to sabotage human boats and yachts. We've seen this amongst a lot of what we think of as higher-order creatures. There's some level of consciousness, and the fact that it doesn't line up with human consciousness, or have the same values as human consciousness, I think sometimes is part of what makes it hard for us to understand it. Which always brings me back to one of my favorite quotes of all time from Douglas Adams, in The Hitchhiker's Guide to the Galaxy, about the three most intelligent creatures on earth: the mice, the most intelligent, and the humans and the dolphins, who were second and third and right neck and neck. And the humans always thought that they were smarter than the dolphins, because we had neat things like civilization and cities and war, and all the dolphins had ever done was muck about in the water — and the dolphins thought they were more intelligent than the humans for the exact same reason.
Karen Foglesong:Right?(laughing)
Erin Branham:Sorry, I love that. I've, I've been quoting that my whole life.
Karen Foglesong:I love that. And I just love that the answer to life, the universe and everything is 42. When I get really bunched up, I'll be like, just 42, Karen. 42.
Erin Branham:Right? Exactly.
Karen Foglesong:It's just as meaningless, everyone.
Erin Branham:100%. But when you look at the archaeological record, when we mark kind of the beginning of humanity, there are certain things that show up. Burials is one of the main things — you start to see a different treatment of a body after death than animals do, right? Burials with some kind of ritual, you know, ornaments with them, those kinds of things. And the other one is the appearance of art — marks on cave walls, or the drawings of Lascaux, or even the handprints, if you've ever seen them, of ancient artists who were leaving their mark. That kind of imprint on the universe seems to indicate soul — and we talk about soul here at Mythic U. So that is a very interesting thing when you get to artificial intelligence. Honestly, what seems to be happening with artificial intelligence right now is that, while theoretically, in the way people have thought about it for a long time, it was seeking after something that could be self-aware, something that could have the kind of complexity of human intelligence, now it seems there's a lot of drive toward just: can we get it to do human jobs basically as well as your average human? We seem to have taken that particular turn, which is not surprising. We're a capitalist culture. You know, it's going to go to, how can you turn this into a way to save money or make money?
Karen Foglesong:Yes.
Erin Branham:But it's unfortunate, because that other question — could we create a being, a, you know, a program, a thing or something that could have a soul — is an amazing thought.
Karen Foglesong:Yes, it is. But all of this goes in multiple directions for me simultaneously. I'm having to decide — which way do I want to go with this? But I think right here is what I worry about most with AI and this idea of soul: where is the shadow going? Because we've talked about in the past that we don't deal with our shadow well anyway, and we tend to push our shadow out onto the thing that we don't like. So if you don't like your neighbor and you haven't been working with your shadow, it intensifies the conflict feelings you have toward your neighbor. Or you see someone from a different culture, and it's easier to dislike them, because you're projecting your own shadow self onto them.
Erin Branham:We'd love to hear from you. You can email us at
Karen Foglesong:your mythic u@gmail.com. so y, o, u, r, m, y, t, h, i, c, u@gmail.com,
Erin Branham:Yes, because we would love to hear from you — any comments, suggestions, criticisms, and your stories. We definitely want to hear your stories. So, like we said, rate and review us on Apple Podcasts, or whatever your podcatcher of choice is. We'd love to hear from you: yourmythicu@gmail.com
Karen Foglesong:So I've read stories about, you know, folks who are dealing with lots of social media, and they're utilizing these bots — not robots, but program bots — to sift through the internet and come up with ideas for their social media posts, and the bots are becoming racist quite quickly.
Erin Branham:Yeah, absolutely. The kinds of bots that have been sent out to trawl the net, right — to go on Twitter and read — the source material they have to work from is Twitter itself, and then they generate tweets. And yeah, when that experiment was done, it immediately became wildly racist.
Karen Foglesong:Right?
Erin Branham:And started tweeting terrible, hateful things. And I think you're right. The thing that always scares us about artificial intelligence, even in our fiction, is this idea that if we create something like humans — well, that's creating something like humans, right? And it's a machine, so it potentially doesn't have the weaknesses that humans have, and therefore, you know, a superhuman creature that has current human values is terrifying.
Karen Foglesong:Yes, yeah, it absolutely is.
Erin Branham:Because we haven't dealt with ourselves. And I think that's what is really frightening about the quest: the people who are out there trying to create artificial intelligence to do these jobs, to achieve these things, may not have dealt with their own stuff when it comes to that, right? I mean, it sort of comes down to this: artificial intelligence can only be a mirror to us.
Karen Foglesong:Yes. So what kind of mirror are we creating? Because from what I see, what humans are the best at, what we have perfected, is the art of torture. That's what we're the best at. So, like, what are we going to do? Just torture one another with AI?
Erin Branham:Well, potentially. I think that's what it is. And when you look at how we have imagined it — the sort of myths of artificial intelligence in our popular consciousness — you know, we've thought about this for a very, very long time. In general, artificial intelligence in our fiction exists on a kind of spectrum. One end of it is the shadow end, right? It's terrifying. It's scary. It's going to do awful things. It's going to destroy all the humans. The other end is a very benevolent, very helpful artificial intelligence. You go back — when I was looking for it, I found that in 1872 there was a story called Erewhon, which is nowhere spelled backwards, by Samuel Butler, an English writer. He was the first person to write about machines possibly achieving human intelligence. And he thought about it in terms of natural selection, because Darwin's Origin of Species had just been published a little bit before this. So he writes a story in which machines were outlawed, because they would evolve by natural selection, and because they could evolve more rapidly than humans, they were sure to catch up and surpass us. So there's somebody who already has the idea — and I think we do think about machines and artificial intelligence in that way. We think of it all as that kind of survival of the fittest.
Karen Foglesong:Yes, yeah. And I've seen research that talks about how our philosophy and our creative thought shift as we shift the way we look at the world. The Industrial Revolution gave us this very machine-like, mechanized way of looking at the world. And this technological revolution gives us this either-or, one-or-zero kind of quality that doesn't leave a lot of room for gray — and gray is where I would argue the subconscious is hanging out. It's not (laughs) living in the black or the white. It's creating swirls, and swirls on top of swirls. So I think we have to leave room for that — for the swirly quality that is undefinable, or indefinable. I think we need the undefined as much as we need the defined.
Erin Branham:That's a very good point, and I would agree that feels hard for a computer to achieve, but it's an interesting thought. In terms of being made up out of simpler parts — that's the whole idea of emergent phenomena, right? Its language may be binary, but does that really mean it's restricted to one or zero in the final analysis? Is it possible for that to achieve different levels of complexity? Just like, you know, we're made up out of atoms, but we've achieved greater complexity.
Karen Foglesong:That's true.
Erin Branham:I wonder. I just wonder about that, and think about it. And — I was thinking as you were speaking — Jung's core idea is that the soul, the self, moves towards wholeness. And so these negative or shadow aspects of oneself, which we all have... I mean, this is kind of core,
Karen Foglesong:archetypal.
Erin Branham:Interestingly, this goes down to, like, the binary thing. You know, Campbell would always talk about — and many other people talk about — how we exist here in this universe, in our bodies, in the field of opposites, right? We exist in the field of opposites. So that's exactly what Jung's talking about. You cannot be wholly light and good, because we exist in the field of opposites, and therefore the dark and the evil is also with us at all times, and we have to try to reconcile that. Neurosis and problems come in your consciousness because you are not reconciling those negative aspects of yourself, right? And so you're projecting, as you were talking about earlier, Karen — you're projecting those parts of yourself you don't like onto your neighbor, onto other people and out into the world, right? And so that's what we're saying is sort of scary about artificial intelligence: it's not the artificial intelligence itself. I grew up on a lot of science fiction, and there's this idea that maybe this could be a purer, better intelligent organism than humans, but it's limited by the fact that humans are the ones creating it, right? And since we haven't dealt with our stuff — say somebody really does create this fantastic thing. Has that person dealt with their shadow? Or do they just program that into the machine?
Karen Foglesong:Right? Because, just the way you're explaining it, it manifests anywhere you give it an option to, if you do not look at it yourself — like, any open screen will do if it's left alone. But I also think this is a good place to bring in the Church of AI. Mm-hmm. Okay, I'm just going to say to everybody who's listening: I am a skeptic first. I don't trust you if you're like, "I have the secret to God." Okay, I'm gonna look, but I'm not gonna be jumping in with both feet, right? So I do feel that there's a little bit of a scam quality going on here — this person, what is it, Levandowski, has been involved with some other corporate espionage and things like that, and we know from historical fact that religion is an easy way to grift people. But he says that he is trying to actively create an effective God. There's a group of programmers and a group of folks who are interested, and they're all working together to create an AI program, and they're feeding it all of the descriptions — well, they say all, but, you know, we tend to be prejudiced in our meaning of all as well. So I'm hoping they're doing a world search for words about God and theories about God — and they're feeding this into this program, and they're going to see what happens. And this is the Church of AI that is happening in San Francisco right now.
Erin Branham:That's fascinating.
Karen Foglesong:It is. Because, again, there's another perfect stage for the shadow to come out unrestrained. But it's also really interesting. In part of my research for this episode, I looked around — and I'm also a curator in a state fair gallery competition, and last year we had an AI piece place in a category, and it caused a lot of upheaval. So the question of AI is very real for me. I'm dealing with it on a daily basis, interacting with people who are anti and pro, and often there's no middle ground to stand on. But I'd like to remind people that photography was thought of as a cheat when it first came out, too. So it's a tool. It's just another tool, and as you stated, it can only give us what we put into it, right? So I tend to come at this through the art. And while we were researching this episode, I found this one artist. He's a member of a particular AI art consortium, and he takes suggestions from his followers, and they asked for hell first. I thought it was pretty telling that they asked for hell first. But he had enough requests that he fed in all these images and descriptors of the human, and mostly Christian, perspective of hell, and he got, you know, what you'd expect: these kind of torturous, half-imaged, no-faced people, stretchy, pully, flaming things. And finally people were brave enough to ask for heaven — and we have fewer descriptions of heaven, you guys. Basically, what they got were these bucolic gardens and some spectacular waterfalls, but nothing of the gory detail we have for hell. So that's the scariest part for me, Erin. Like you're saying, what you feed into it has got a lot to do with what's going to come out of it. So if we're not dealing with our psyche or our soul — we don't even understand where it is, or what happens to it, or how it exists — and we're trying to create it in another being, another creature? 
Are we going to make a monster? This is, like —
Erin Branham:I think that's the question we've been asking ourselves for a long time. Like you said, when you go back and look at how we've thought about this: after we have this first one who's like, ah, scary, machines are going to out-compete us in natural selection, no good — right after that, we get the Tin Man in The Wizard of Oz in 1900
Karen Foglesong:and all he needs is a heart.
Erin Branham:That's right. All he needs is a heart, which he already has, right? Yep — by the time he gets to the end and he gets the heart, it's because he's already shown that it exists in him, that impulse, that soul. The heart is kind of a symbol of the soul in that one. Yeah. After that, we get the silent movie masterpiece Metropolis. Love that movie so much!
Karen Foglesong:So do I.
Erin Branham:1927 — and it has an evil AI, right? Maria, the robot, is created to replace the woman Maria. And so you get a sense there of that fear of being replaced, again, by machine intelligence. In 1920 we get Rossum's Universal Robots by the Czech writer Karel Čapek — which I'm sure I'm not pronouncing correctly — which is where we actually get the word robot. Wow — that comes from that piece of writing. And then in 1940 you start getting Asimov's robot stories. Now, Isaac Asimov was a scientist and a science fiction writer, and he actually laid down a lot of the core ideas that we have about what an android or a robot would be. He wrote I, Robot, The Bicentennial Man, The Positronic Man, and some others. So he establishes some really key concepts about artificial intelligence, including the three laws of robotics, which were fictional, but which many people who work on artificial intelligence keep in their mindset. You want to read those for us, Karen?
Karen Foglesong:Sure, number one, the first law, a robot may not injure a human being, or through inaction, allow a human being to come to harm. Second Law, a robot must obey the orders given it by human beings, except where such orders would conflict with the first law. Third Law, a robot must protect its own existence, as long as such protection does not conflict with the first or second law.
Erin Branham:So that is the idea: that if we put the right parameters on artificial intelligence, then we can keep it safe, right? And the characters in the I, Robot stories are very sympathetic characters. You like them, you root for them, and they follow these laws, and they are, you know, typically good. But then in 1968 we get HAL, the HAL 9000 in 2001: A Space Odyssey. And in the book, HAL gets caught in a logical conundrum, right? He's told he has to lie to the humans in the spaceship, and he doesn't want to lie, because that conflicts with some of his other ethical programming. So he determines that, well, if I just kill them, I won't have to lie to them. Which actually sounds to me like a little bit of a plot hole, because if he was so upset about lying to them, I'm sure he must have had an ethical rule not to kill them. But anyway. What's interesting about that is something recently happened, supposedly, maybe. A colonel in the Air Force recently told the story that they were doing a simulation trying to get a drone to take out a target. And the drone has a human operator, right? And the human operator can tell it not to go do that, you know, say to go there, but don't kill. So here's the quote: "the system started realizing that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat." That was said by Hamilton, the chief of AI Test and Operations with the US Air Force, during the Future Combat Air and Space Capabilities Summit in London in May. I got this from an article in The Guardian, and it goes on, quote, "So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective." Then they said they trained the system:
Hey, don't kill the operator. That's bad. You're gonna lose points if you do that. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with it, to stop it from killing the target. So it was just interesting, because it started to develop some
Karen Foglesong:workarounds,
Erin Branham:different pathways to get its points, yeah. Now, the article that I read in The Guardian said the US Air Force has denied that this happened.
Karen Foglesong:Oh, bizarre,
Erin Branham:But it's an interesting little tale, because you see a lot of things like it. Like I said, I'm a Trekkie. You go watch some old Original Series Star Trek, and they're always talking computers into blowing themselves up by giving them a logical conundrum.
Karen Foglesong:I just don't see that happening.
Erin Branham:In 1968 we also got the publication of Do Androids Dream of Electric Sheep? by Philip K. Dick, right? And this becomes Blade Runner in the 1980s. In the 70s, we get droids, like the droids of Star Wars, C-3PO and R2-D2. These are benevolent AI, right? Helpful AI who's going to sacrifice themselves to save the hero, the human protagonist, that kind of stuff. Yeah. But then, like I said, Do Androids Dream of Electric Sheep? becomes Blade Runner in the 1980s, which starts to ask much more complex questions. I'm sure there are other pieces of fiction that did this before, but that's the one most people are going to know. It's about: I am AI. Don't I deserve to live?
Karen Foglesong:Right?
Erin Branham:Right? Roy Batty's speech at the end, "I've seen things," you know, where he says, now it's "time to die." And he dies in the rain. It's this creature saying, I have consciousness. Do I not deserve to live? Do you get to kill me just because you created me?
Karen Foglesong:Right?
Erin Branham:It's one of the great, you know, tragic science fiction stories of all time.
Karen Foglesong:Well, that's the question. This is also all a reflection of us questioning our own creator. Do you get to kill me just because you created me? You know, like it's, it's almost a kind of a drama that we're playing out to try to answer our own question.
Erin Branham:Well, it certainly begs all these questions, these very things that we talk about here: what is the soul? How do we feed the needs of the soul? What are the needs of the soul? What are the ways that we can create solutions that actually are more human? And the benevolent AI in our stories oftentimes yearns to be human, yes, right? Yearns, feels that there is something they have not quite achieved that will make them human. And we didn't even talk about transhumanists, who are the ones who think that we should merge with technology
Karen Foglesong:Like the Borg
Erin Branham:in order to evolve, inevitably bringing up images of the Borg, the terrifying Borg. There's the concept of the unbenevolent AI, right? Yeah, the idea of the technology literally eating us down to our flesh, such that all individuality is erased.
Karen Foglesong:The weird thing with the Borg is that, yeah, that's what it does, it takes away our individuality, but otherwise they're not necessarily evil. They don't see themselves as evil. They see themselves as perfecting the universe. Like you mentioned Star Wars, C-3PO and R2-D2, and I had C-3PO Underoos when I was a kid (laughs) I miss Underoos. But the other side of that is the merging with the technology, which would have been Darth Vader at the time. He would have been,
Erin Branham:Ah, yes, you're right. Oh, that's right.
Karen Foglesong:kind of Borg-like, is evil, you know, that is the dark side. So we were taking a stance, then, that organic was better, one or the other. Kind of like, stick with your own kind. Don't mix.
Erin Branham:That's right. I forgot about Darth Vader. Vader is very much a pre-Borg image of merging,
Karen Foglesong:Right? Pre-Borg image
Erin Branham:of merging technology and humanity, and thus, you know, something evil, looks evil, all those things. That's the other thing: often, when you see those kinds of creatures, they're depicted as terrifying, horrifying looking, right? All of that, yes. Star Trek, besides giving us the Borg, also gives us the ultimate benevolent AI in Data, an officer on the crew, who is very much influenced by Asimov. He has a positronic brain, which is what Asimov postulated in his robot stories. Data is 100% bound by the three laws of robotics. Data sacrifices himself numerous times for his human companions, only wants to be more human, and in every way is a protagonist, somebody you root for within the stories. One of the most powerful stories Star Trek ever did around Data was called The Measure of a Man, by Melinda M. Snodgrass, whom I greatly admire. Data was found by Starfleet, and he's an officer in Starfleet. So a technician, a scientist, shows up one day and says, I'm going to take you apart, because we can make more of you, and that would be great for Starfleet. Yep. And Data's like, I would rather not be disassembled, thank you very much. And the guy's like, no, it's gonna be great, it's all gonna be fine. And Data's like, I don't think you actually can disassemble me and download my experiences and then upload them again. I don't think we have the technology to do that well. And the guy's like, no, it's gonna be fine. And Data has a great line where he says, there is an ineffable quality to experience that I do not believe would survive your procedure. And this guy's just like, what are you talking about? You're a robot!
Karen Foglesong:That's a beautiful line.
Erin Branham:I know. I love that line (laughs) So Data goes, well, then I'm just going to resign. If you're going to order me to do this, I'm going to resign. And the guy comes back and says, you can't resign, you're the property of Starfleet. And then a trial is held to decide, well, is Data property? Does he belong to Starfleet? Starfleet found him, an artificial being. Doesn't that mean he belongs to them? And then there's a really good discussion of, sort of, what are his legal rights? Can he refuse the procedure? They say, would you allow the computer on the Enterprise to refuse a refit? You know, some really interesting questions about, when you start doing this, where do the lines get drawn, right? And as the trial goes on, somebody makes a really good argument for why Data actually is property. And there is an exchange in which the captain goes and talks to the wise bartender, played beautifully by Whoopi Goldberg. If you've never seen this scene, she is absolutely divine in it, and she says to him, "Consider that in the history of many worlds, there have always been disposable creatures. They do the dirty work. They do the work that no one else wants to do because it's too difficult or too hazardous. And an army of Datas, all disposable. You don't have to think about their welfare. You don't have to think about how they feel. Whole generations of disposable people." And Captain Picard says, "You're talking about slavery." And she's, you know, playing devil's advocate here, so she goes, I think you're being a little harsh. And he's like, no, it's not harsh. We are hiding a horrific idea here behind the nice and simple word property. And it's really a very remarkable scene in which you go, oh, like what you were saying: we want them to have enough autonomy to do our jobs.
Karen Foglesong:Yep.
Erin Branham:Right to do the dirty work,
Karen Foglesong:but not so much that they don't want to do our jobs.
Erin Branham:Right? But not so much that they can refuse, yeah, all of that. And at the very end, the judge who has to rule on this says, we're dancing around the basic issue: does Data have a soul? And then she says, I don't know if he has one, I don't know if I have one, but he gets to choose. And so, you know, we really stumble into some hardcore ethical conundrums when it comes to artificial intelligence that bring us back again and again to the soul.
Karen Foglesong:Well, this is the same question. I mean, I'm ashamed to say this, you guys, but I still have relatives who believe that not all humans are equal. So it goes back to the same question over and over again: are there disposable people? And now, are there disposable entities? I would say absolutely not. There are never disposable entities. And we've got to figure out how to take out our own garbage, literally, figuratively, spiritually, all of those things,
Erin Branham:100%, that's why we're doing the work we do here at Mythic U, thinking about it, working through that garbage and through the shadow and all that stuff. Well, thank you. That was a really fun discussion. You had a good time?
Karen Foglesong:Absolutely, I did. And for listeners out there, there are several other AI stories out in the world, and if you want, tell us in the comments or drop us a line about your favorite, like M3GAN or Eagle Eye, or what's your perspective on AI. Shoot us a line. Let us know
Erin Branham:absolutely we'd love to hear from you. Please don't forget to rate and review us on Apple podcasts that helps other people find us and we thank you for listening.
Karen Foglesong:Have a great day, y'all, bye.
Unknown:Bye.
Karen Foglesong:Thank you for joining us at Mythic U. We want to hear from you. Please visit our website at mythicu.buzzsprout.com, that's m-y-t-h-i-c-u dot buzzsprout dot com, for more great information on choreographing your own spirituality. Leave us a comment and donate if you have the means and the interest. If you'd like to support our work more regularly, visit our Patreon and become a member of Mythic U. Depending on the level at which you join, members receive early access to new episodes, bonus episodes, and free Mythic U gifts.