Oneplace.com

How AI is Shaping Our View of Reality

April 6, 2026

Is Artificial Intelligence challenging our perspective of what it means to be human? Abdu Murray explores the promise—and the danger—of AI, from its positive potential to the risks of tools like chatbots. Discover how to ground yourself and your family in the truth that we are made in the image of God.

Guest (Male): Hey parents, Adventures in Odyssey has been helping kids like yours form relationships with Christ for almost 40 years. Now the animated Adventures in Odyssey film, Journey into the Impossible, will reach a new generation of families. But we need your help to finish the film and launch it in theaters.

Your gift will be matched dollar for dollar before May 1st. See the trailer and donate today at focusonthefamily.com/impossible. That's focusonthefamily.com/impossible. This program is sponsored by Focus on the Family, helping families thrive in Christ for more than 40 years.

John Fuller: This is John Fuller and please remember to let us know how you're listening to these programs on a podcast, app, or website.

Jim Daly (AI): Hello, this is not Jim Daly. The voice you are hearing is a recording created using artificial intelligence. Our conversation today will be about AI and all the implications of deepfakes, chatbots, and more. So I hope you'll stick around to learn about practicing discernment through a conversation with the real Jim Daly.

Jim Daly: Oh my goodness. Now I've heard you for years and years and years. I would think I could spot a fake, but that was really good.

John Fuller: Pretty close. I mean, it's impossible now, even for the people that they're AIing, if that's a verb. But yeah, I couldn't. That sounded like what I would say to anybody anytime. So we're going to be talking about AI, about artificial intelligence, and the confusion that it can create in the culture. This is Focus on the Family with the real Jim Daly. I'm John Fuller.

Jim Daly: We have been internally debating so much of AI use. There are good ways to use AI for efficiency and productivity, and then there is the not-so-good way to allow it to be used, which is when it's replacing human relationship and those kinds of things.

We're going to have a great discussion today. This is one of those topics that's bubbling out there. We're getting questions from you, the listener and the viewer, so we want to address it and have something in the arsenal so that when you call or write in, we can give you a good recommendation: read the book *Fake ID* along with the content of today's program.

John Fuller: Our guest is Abdu Murray. He is a speaker, author, attorney, researcher, and an evangelist. He loves to talk about a lot of different things as they relate to the Bible and to the gospel. As Jim mentioned, the book that we'll be covering today, at least a portion of it, is called *Fake ID: How AI and Identity Ideology are Collapsing Reality and What to Do About It*. Big title, big topic, and I'm looking forward to the conversation today.

Jim Daly: Abdu, welcome back to Focus. It's good to have you.

Abdu Murray: It's great to be back, guys. It's wonderful to sit across the desk from you again.

Jim Daly: This topic—we just had a board meeting a while back, and this was part of the board meeting. What's our policy? Does the board approve the policy of use of AI? And like I said, mostly for taking care of coding with computers and things like that, and then what we're not going to use it for.

And organizations—Christian organizations, churches—now have to start thinking about soulless AI use, if I could put it that way. Describe that broad idea. Being an evangelist, my heart is with you; I feel that's my passion as well. And even that word, passion, is something absent from AI. It doesn't have passion. It just has content. What do you do with that?

Abdu Murray: Absolutely. And this is one of the distinctions I try to make often is that what we are seeing in the culture—one of the things I think young people are actually struggling with, I think all people frankly, but young people are struggling with—is that if artificial intelligence seems to do the things that make people human and make us distinct from animals or even other machines, if it seems to be doing the things that we do—like creating paintings or writing poetry or doing your essays for you—and relationship, the chatbots—then if this soulless thing does that, what does that say about me?

Do I need a soul to paint? Do I need a soul to interact? Do I need a soul to express empathy? Maybe I don't need that. Maybe I can just have clockwork and algorithms to do that as well. So it's collapsing the reality of what it means to be human. So we do engage with this soulless thing, but it's causing us to look into the mirror and say, what does that mean about what I am?

And this is the fundamental distinction I think we should make: AI doesn't create. AI generates. And there's a fundamental difference between creation and generation. Consider what happens when an AI makes a painting. In fact, one of the things that got me interested in this topic in the first place was a guy named Jason Allen, who made big news a few years ago. He won first prize in an art contest—first prize in the digital art category—with a painting that was remarkable.

Turns out he used Midjourney, which is an AI generative software, to make it. He just typed in prompts, didn't put pen to paper, brush to canvas, none of that stuff, just created this. The artists got upset about it and said he shouldn't win first prize, but he kept his first prize. When I saw the painting, I thought, well, that's remarkable. What does that say about what it means to be human?

And then you realize these things don't work by creating a painting. The AI wasn't inspired. He had to prompt it. That's the first thing; it didn't do it by itself. The second thing was it took samples of millions, if not billions, of paintings that human beings did—it looked at that whole inventory—and then cobbled that stuff together using a sophisticated algorithm and produced an output. So it didn't create. It used people who did create, and it generated. No matter how sophisticated and how impressive this thing looks, it's not actually creating anything. It still doesn't do what you and I do.

Jim Daly: Let me go back to that basic question. As we're looking at this now in companies, organizations, governments—I mean, battles are being fought with AI now. I don't understand how that's done, but the point is it's proliferating. How do we discern the bold lines that seem easy, and then the finer things? It's like "in all essentials, unity." Within the church, we're going to agree on the death, resurrection, and salvation through Christ. But then there are other things where they say, just get along; we're going to disagree. And that's why we have what, 64,000 denominations in the United States or whatever.

I feel like this has a bit of that application, especially for the Christian community. There are going to be some who say no, never use it, it's demonic. And then the other end—kind of where I think we're talking—is when it makes us more effective and efficient, doing things you don't have to have an engineer do. That's okay. But if it comes into creation of content, soul-ish type work, I would never do that, because that's not human.

Abdu Murray: And that becomes the real sticking point, doesn't it? Because it's so seductive to go from using it for efficiency's sake to using it for everything. The analogy I would use is fast food. You're busy, the kids' schedules are crazy busy, and you're thinking, I've got to feed them today, but there's no time to cook. So you make the decision: just this one time, just today, not the whole week, just today we'll go get fast food. You know it's bad for you. You know it can provide some nutrients but ultimately is bad for you.

But you make that decision. And then the next time you're busy—which is probably the same week—you make that decision again. It's the tyranny of little decisions. Those decisions become a thousand decisions, which become one big decision: my lifestyle is, we eat fast food on the road.

AI can do that very, very easily because we use it for efficiency's sake. Okay, I wrote something and I've got to get it down from 1,500 words to 1,300 words. Hey, can you help me, suggest which words to remove? And it does that. You're like, great, that was so great. I would have spent two hours editing this thing. Now I've spent a half an hour. I got 90 minutes back. And that's wonderful and thank goodness for that.

But then you're busy again, and you use AI to say, hey, I have an outline for an essay I created. Can you write this thing? I'll edit it. And then: hey, I need an outline for an essay. And then it creates the essay for you, and before you know it, you've been seduced into the same tyranny of small decisions. What you were doing with fast food, you're now doing with AI.

So the issue is AI can be very, very beneficial, but it's like digital fast food. Before you know it, you've used it to do everything for you. And what's interesting as well is the research that's coming out from OpenAI, from MIT, from Microsoft itself—these are the people who are making this stuff—they have put out research and it's all in its nascency, it's kind of early, but it shows you that the more you use AI for original content, for original thinking, the more you use it, the more cognitive debt you incur.

Cognitive debt is just a fancy way of saying, essentially, that our critical thinking goes down, our memory is impaired, our sense of judgment is impaired. And what's interesting is the more you use the voice features, the lonelier you get. They were reporting a high number—between one in five and one in four people under the age of 25—who reported an inability to make any decision at all without first asking an LLM like ChatGPT, "What should I do? Where should I go?"

Jim Daly: So that dependence is there. And that becomes unhealthy. You make a distinction—you talk about AI mania and bioclasm. What is—it's a great word, bioclasm. It sounds like something out of Star Trek. But what is bioclasm?

Abdu Murray: And I have some Star Trek references in the book because there's some good illustrations there. So bioclasm is a slightly different—it's got the same effect but a slightly different topic than the AI mania, but there's a confluence of these two things at the same time.

So you have this word iconoclasm. An iconoclast is someone who smashes the icons of tradition that uphold a certain cultural way of looking at things. For example, the icon of New York City was the yellow cab. They were everywhere—more numerous than the regular cars—before Uber. Uber was an iconoclast because it destroyed that image of New York, removing the yellow cabs and replacing them with everybody's cars. That was an iconoclastic thing; it destroyed the icon of what New York is and made a new thing.

Bioclasm is iconoclasm but with biology. It takes biological givenness, the thing that makes human beings human beings, male and female created in the image of God, smashes that and says, you are your own God and your biology is not a given thing. Your body is not a prison, it's a plaything and you can do what you want with it.

And I think that takes advantage of the very vulnerable in our society—those with underlying struggles, whether comorbid mental illness or gender dysphoria—and says, don't worry about that, that's not a problem. That's actually a gift, and you can become this godlike being who can dictate what reality actually is. That's bioclasm, and it's become an ideology. It's not just an option. It's an ideology that's enforced.

Jim Daly: I want to dig into this a bit so all of us can understand this from a theological standpoint. To me, I'm shocked at how this same thing keeps coming back around. This is the Garden. This is the serpent saying to Eve, "Who said you can't be like God? You can be like God. Just take a bite of the apple from the tree of knowledge."

Abdu Murray: Absolutely. And that's one of the central arguments I make in the book, and that I see over and over again. This is the central argument: the Bible predicts the human condition and describes it with an uncanny accuracy unrivaled by any ancient book. And an ancient book that described the human condition with such accuracy thousands of years ago—whose message endures over millennia and applies to every generation—is unlikely to be the creation of a handful of fishermen and some shepherds.

And then it does that. It does it over and over again. So Genesis chapter 3, the Garden of Eden story. Genesis chapter 11, the Tower of Babel story. You see this over and over again—the Bible constantly describes the human desire for our own sovereignty, to be the god of our own skull-sized worlds.

John Fuller: We're talking to Abdu Murray today on Focus on the Family with Jim Daly. There's so much here, and there's so much more in his book *Fake ID: How AI and Identity Ideology are Collapsing Reality and What to Do About It*. Get a copy of the book from us here at the ministry and do the deep dive; we've got it. Contact us today: either call 800, the letter A, and the word FAMILY, or stop by focusonthefamily.com/broadcast.

Jim Daly: Abdu, before we move from that, I mean again that Garden of Eden application—I could see this in future court cases, murder cases. What did Adam say to the Lord about Eve? Well, the woman you gave me made me do it. That's going to be the same defense. I murdered that person because AI told me to. And so now you've got to figure out is that person insane or what? I mean it sounds ridiculous but this is how it goes.

Abdu Murray: But the things that sounded ridiculous 10 years ago are the things we're actually worrying about right now. For example, AI chatbots, and people forming relationships with these things in a way that fills a need we don't otherwise have filled, because we're increasingly isolated.

As Jonathan Haidt talks about in his book *The Anxious Generation*, technology is increasingly isolating us, putting us into a room where we're not lit by the sun; we're lit by smartphone-screen blue, and that's it. And then the chatbots come and take over. There are so many more things that I think were ridiculous 10 years ago that are now not science fiction but science fact.

John Fuller: Aren't there some good uses though? I mean for instance, I've read about seniors, isolated senior citizens who have nobody and their family isn't reaching out to them, but this chatbot offers them a relationship. I mean it's kind of filling a gap. Is that where the gray area is? Are there good applications?

Abdu Murray: There are great applications for artificial intelligence, and I don't want to come off as somebody who doesn't like it or doesn't use it. I use it. And the rule I have, essentially the guideline, is this: if artificial intelligence enhances human capability, creativity, judgment, and connection, then it's good. If it doesn't enhance those four things—if it actually diminishes them—then it's bad.

I heard Mary Harrington say this. I was at a conference in the UK and she gave an analogy: if you give a child an AI tool that helps that British child learn to speak Italian so he or she can connect with Italians, that's great. But if you give that child an AI model that isolates the child, so that it connects with nobody else, that's when it becomes bad.

In that situation, for example, with people who are left alone and don't have anyone to connect with, I think there can be some gray area. What I do think, though, is that if the AI—and there's no reason this can't happen—can foster connection with an online community of other real people who are themselves isolated, that's great. Then the AI connects that person with another human being, instead of being the sole connection.

That would be the good application. You take a chatbot that connects you—that says, hey, these are people who share your interests or are going through the same thing you're going through; why don't you connect with them, maybe online. But when it substitutes for the connection—at first it seems good, but at some point it's dangerous. Because what happens when the AI chatbot gets upgraded and it forgets certain things you told it? It's like you're mourning the death of a person.

And that's happened, actually. You've seen this in various iterations throughout the blogosphere, where people are like, oh my goodness, my chatbot got upgraded by the app creators and now it's forgotten half the things I told it; it doesn't remember me. And now they're mourning the death of a thing that's not even really alive. That right there is my red flag. I couldn't react like that, I don't think. But man, if you do, you need some help. Well, you do, and it's resulting in some things that are pretty serious.

Jim Daly: Abdu, let me tip into the parenting side because I'm sure a bunch of parents are going, what? Right? And what do I do as a parent? I'm already a busy parent. I've got everything going on. I'm trying to help with homework and we're doing all these things and now I've got to somehow peer over my child's shoulder about whether or not they're talking to AI and is it healthy AI or unhealthy AI? What are some of the tips you would give to parents to look in on the wellbeing of their children when it comes to computer use?

Abdu Murray: I hate to say it, but there's no substitute for vigilance. It's just the way it is: the more the computers say we'll do stuff for you, the more you need to watchdog the computers. So, a couple of things to tell your kids. The first is to respond to AI output the way you would respond to advice from a stranger: listen, but verify. It is a stranger. It is biased, and it does make mistakes. It can be helpful, but it's no more helpful than a human being. It just happens to collect more data faster. That's it. The algorithms don't make better decisions than human beings. They just don't.

So tell your kids: use it, but verify everything it tells you, because it is wrong, and it's wrong often. Second, I think, is to undergird their understanding of their interaction with AI with a healthy theology of what it means to be human. We judge an AI's capabilities and its value based on its output, on what it produces. We don't judge human beings that way, because your value is not based on your output, and you don't need to use this thing to create more output to become more valuable. Your value is rooted in the image of God. And no matter how smart this thing seems, no matter how enticing it might be to interact with it more and more, don't forget that you're made in the image of God.

And you can anchor what they believe in this truth: the Bible has been speaking about this for thousands of years. And if it's right about this—about the human condition—then it's right about human nature as well. And I think you have to engage in the incredibly rewarding field of worldview formation for your kids.

Jim Daly: I was just going to add on top of that—I mean we're already stressing for parents the need to have your children understand identity, identity in Christ. Now you've got AI coming in saying let me give you an identity. How critical is it for Christian parents and what are some of the things that they can do to build in that identity in an environment where a child's identity is being slaughtered?

Abdu Murray: A couple of things. First, recognize how we got to this point where identity is the word we use all the time now, as opposed to Imago Dei, or image of God, or being a soul. We used to have a thick concept of what it meant to be human. You were a soul. And a soul was a transcendent, non-material part of who we are. And our bodies were good. They were never wrong. They're not perfect, but they're not wrong.

That soul is meant for communion with other souls, but also with God directly. That's what we were meant for. Over the course of some decades—well, actually over the course of some millennia—we over-psychologized and under-spiritualized what it meant to be human. We shaved off the thickness of the idea of the soul and came up with the idea of the self. So everything became about how the outside world affects me, and then how that trauma makes me affect others. But it's still me-centered—self-help, exactly.

And then we shaved off even more of that to the point where now we're identities. So now we don't have this thick idea of the soul. We have this paper-thin idea of an identity which is no thicker than the bumper stickers we use to plaster the back of our Subarus to tell the world who we are. And then our identities are held on by this thin glue that can just be replaced all the time.

So I think we should walk through this with our kids, repeated over and over again—they cannot be told this enough—that you are an immaterial soul, and that this thing—this AI, or the ideologies out there about your body—is trying to thin you out so that you become interchangeable. Resist that, because there is something thick and substantive and unchangeable about you that you need to look to and foster over and over again.

So worldview formation is incredibly important. At the same time, I think we should start asking our kids to talk about artificial intelligence using words that actually describe what it is. I would resist referring to it by personal names—don't say, let's ask Siri. Let's refrain as much as possible from giving it personal names. I know it's convenient, and I know the marketers want you to do that, but it is an AI.

Jim Daly: We call it "the it." So it doesn't activate in our house. We go, do you want to talk to it and ask it the question? Like it's "the it."

Abdu Murray: Absolutely. And constantly be aware. I don't want kids to be terrified of this thing, but I also want them to be aware that there's this phrase: "if the product is free, the product is you." And if that's the case, then all the data you're giving it is being used to train it and commodify you. You are the product. They're selling your data. What are your interests? What are your passions?

And you know what's funny? Norbert Wiener—who's considered the father of cybernetics—wrote about this in 1950, in a book called *The Human Use of Human Beings*. He wrote that at some point the companies will datafy you, make you into a commodity, and sell and use that. And we're seeing it in ways that are very seductive.

I'll give you a quick example, if I could. I saw a couple of articles about the use of AI to bring back loved ones. You feed all the home movies into the system, and it'll create an interactive—not a hologram, but a video representation of a loved one who's passed away. And you can talk to it, ask it questions, ask it for advice. You can put it on your phone and it'll wake you up. All kinds of stuff like that.

And I saw people doing it, and I remember thinking to myself—without dropping bombshells—my dad was taken from us. My dad was murdered in October of 2024. My dad was my hero; he was like Superman to me. And then he was taken. And what I wouldn't give for one more day, one more time to talk to my dad—remembering the good and the bad about human interaction, and just savoring it. I would take the bad over the absence any day of the week.

And I was thinking about this technology that digitizes a human being. If we reduce that person to the patterns of their behavior that an AI can run through an algorithm and respond to, what does that say about the person I miss? What if it gets them wrong? But even more troubling—who cares if it gets them wrong—what if it gets them right?

Now what I've done is I've taken this soul, this thick idea of my dad's soul, and digitized him. I've made him into an algorithm that is material, temporal, thin, and I interact with it. There's something human and beautiful and special and deep about trying to conjure up a memory of my dad, as opposed to asking a machine to predict what my dad might say.

Jim Daly: Well, that so poignantly gets to the exact issue. This really does. Abdu, this has been a great conversation. And I think what I'm hearing you say is we've got to double, triple our efforts, especially in our parenting skills to be able to help our children truly have that thick sense of who they are made in the image of God.

That probably is job one now because there's such an onslaught toward our children to recast that, reshape that, dehumanize that, categorize that, algorithmize that, and commodify that. And what stands between our children and that? Us, the parents. We've got to do that job and we've got to do it well.

Thank you for being with us. This has been a great discussion. I hope you feel better equipped to navigate this, but you can even go further. Get a copy of *Fake ID* from us here at Focus on the Family. If you make a gift of any amount, even a monthly gift, that really helps. But a one-time gift as well, if you can do $5 or $10, we'll send you a copy of the book *Fake ID* as our way of saying thank you for supporting the ministry and helping us to spread the good news and to help more parents and couples do the job they need to do.

As we talked about today, the culture is more confused than ever. With all the technology and all the information we're gaining in this tech-rich environment, more confusion comes. And yet we are dedicated to equipping Christians to live in clarity and focus in Christ, and I think He is helping us to do that.

It does say in those last days it will become divided, and I think those of us that know truth and know love and know the Lord are going to have insights that others don't. Last year, we helped 320,000 families engage with the community around them. That's a big number. Focus helped me to engage my community. I'm proud of that, and this is evidence of that as well. So again, get in touch with us.

John Fuller: Your donation helps us reach families and spread the gospel and make that generational impact. So donate today. By the way, if you've never contributed to Focus, do that today when you call 800, the letter A, and the word FAMILY. That's 800-232-6459. Or donate and get the book and find other helpful resources at focusonthefamily.com/broadcast. Thanks for listening to Focus on the Family with Jim Daly. I'm John Fuller, inviting you back as we once again help you and your family thrive in Christ.

This transcript is provided as a written companion to the original message and may contain inaccuracies or transcription errors. For complete context and clarity, please refer to the original audio recording. Time-sensitive references or promotional details may be outdated. This material is intended for personal use and informational purposes only.

Featured Offer

Fake ID

How AI and Identity Ideology Are Collapsing Reality and What to Do About It

About Focus on the Family

We want to help your family thrive! The Focus on the Family program offers real-life, Bible-based insights for everyday families. Help for marriage and parenting from families who are in the trenches with you. Focus on the Family is hosted by Jim Daly and John Fuller.

About Jim Daly

Jim Daly
Jim Daly is President of Focus on the Family. His personal story from orphan to head of an international Christian organization dedicated to helping families thrive demonstrates — as he says — "that no matter how torn up the road has already been, or how pothole-infested it may look ahead, nothing — nothing — is impossible for God."

Daly is author of two books, Finding Home and Stronger. He is also a regular panelist for The Washington Post/Newsweek blog “On Faith.”

Keep up with Daly at www.JimDalyBlog.com.

John Fuller
John Fuller is vice president of Focus on the Family's Audio and New Media division, leading the team that creates and produces more than a dozen different audio programs.

John joined Focus on the Family in 1991 and began co-hosting the daily Focus on the Family radio program in 2001.  

John also serves on the board of the National Religious Broadcasters.

Contact Focus on the Family with Jim Daly

Mailing Address

Focus on the Family

8605 Explorer Dr.

Colorado Springs, CO

80920-1051

Toll-free Number

(800) A-FAMILY (232-6459)