
In 1968, Stanley Kubrick’s 2001: A Space Odyssey hit the silver screen. Projecting decades into the future from the 1960s, the film depicted an American spacecraft, Discovery One, bound for Jupiter when its onboard artificial intelligence (AI) computer, called HAL, rebelled against the crew, killing some of them before one crewman started disconnecting HAL’s circuits—despite HAL, like a desperate human, begging him to stop.

Kubrick’s film was not the only one to depict runaway AI. From Blade Runner (1982), in which AI robots called “replicants,” almost indistinguishable from humans, fight humans in an attempt to remain “alive,” to The Matrix (1999), with malevolent AI computers harvesting human bodies’ bioelectricity, to I, Robot (2004), in which an AI computer, VIKI, seeks to enslave humanity, to M3GAN (2023), about a lifelike doll programmed to be a child’s companion that becomes the family’s worst nightmare—science fiction has explored the potential dangers of AI.

Have we, however, reached the moment when the word fiction—as in science fiction—no longer applies to the dangers of AI? Many people believe that we have. Or, if we’re not there yet, we will soon be. Are they right? And, if they are, what can we do, if anything, to protect ourselves?

simulating people

As these movies have shown, questions about AI hazards are not new. And though AI has been around for a long time (the US government was working on it decades ago), since 2022—with OpenAI’s ChatGPT (Chat Generative Pre-trained Transformer), Google Bard, Bing Chatbot, Chatsonic, IBM Watson, and other AI platforms—AI has become big news worldwide. Suddenly, all we are hearing about is artificial intelligence. It is spreading so fast and advancing so quickly that no one knows for sure where it will be in one year’s time, much less in five.

According to analysis by Swiss bank UBS, “ChatGPT is the fastest growing consumer application in history.” The analysis estimates that ChatGPT had 100 million active users in January 2023, only two months after its launch. For comparison, it took nine months for TikTok to reach 100 million users.1

In one sense, AI is just very fast computer programs using vast amounts of data in ways that simulate or imitate human thinking. It is, basically, any system that can perform complex tasks in a way that reflects how humans themselves solve problems, but AI does it much more quickly.

“The goal of AI,” said an article from MIT, “is to create computer models that exhibit ‘intelligent behaviors’ like humans. . . . This means machines that can recognize a visual scene, understand a text written in natural language, or perform an action in the physical world.”2

Recognize visual scenes, understand written text, do physical actions . . . in other words, do what intelligent humans do—only artificially. Hence, the name: artificial intelligence.

AI now

The fact is, AI is already impacting our lives here and now. If you have ever checked in for a flight at a self-service kiosk, gotten a receipt immediately from an online purchase, used a chatbot to get your questions answered (From your bank: How do I find my routing number? From your internet provider: Why is my monthly bill higher?), or opened your phone with facial recognition—you have used AI. If you have used the grammar checker or spelling checker in your word processor (obviously not foolproof), queried Siri or Alexa, accepted a “Friend” suggestion on Facebook, or done a Google search, you have used AI.

In 2019, Forbes said: “When you hear news about artificial intelligence (AI), it might be easy to assume it has nothing to do with you. You might imagine that artificial intelligence is only something the big tech giants are focused on, and that AI doesn’t impact your everyday life. In reality, artificial intelligence is encountered by most people from morning until night.”3

If that was then, imagine five years later.

Meanwhile, on a larger scale, AI is being used to fight cancer, save the bees, aid people with disabilities, preserve wildlife, and stop human trafficking. Recently, AI led to a major breakthrough in predicting the structure of proteins in the human body, potentially enabling great strides in medicine.

Why, then, the worry?

deepfakes, LAWS, and Big Brother

Almost every advance in technology, even if intended for good, can be (and usually is) used for evil. Within eight years after the Wright brothers first flew, Lieutenant Giulio Gavotti dropped four hand grenades from his monoplane on enemy soldiers in the Libyan desert. The rest is history. The same internet that offers online Bible studies also offers pornography (and guess which is more predominant). A weapon created for self-defense can also be used for mass murder. The examples are endless.

It’s the same with AI. Though we’re not yet at the stage when AI, overtaking human intelligence, rebels against us and, like HAL or (on a wider scale) VIKI, seeks our demise, AI still has a potential dark side.

Scientific American warned: “Artificial intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity.”4 Meanwhile, the Center for AI Safety (yes, there is an organization dedicated to that alone), whose “mission is to reduce societal-scale risks from artificial intelligence,” released a statement signed by numerous tech luminaries. It reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”5

Even if not yet posing a threat as great as “pandemics and nuclear war,” AI already has caused problems. For example, deepfakes. What’s a deepfake? A “deepfake is a video that uses impressive technology to replicate a subject’s likeness to replace another’s face—essentially shape-shifting one person into another. The results can be impressive and very funny, but they can also raise concerns about privacy, manipulation, and authenticity.”6 And though some funny deepfakes of celebrities have been done, they have also been used maliciously.

AI-generated images and videos can make someone look like they are doing just about anything. Imagine what some down-and-dirty political campaigner could do to his or her opponent. Or imagine unscrupulous national leaders using deepfakes to deceive their people into war.

But deepfakes are nothing compared to what some fear will be an apocalyptic scenario: LAWS, lethal autonomous weapon systems. That is, weapons that “locate and destroy targets on their own while abiding by few regulations.”7 A computer picking targets to blow up? Who gets blown up if the hardware malfunctions or the software has a glitch?

Besides deepfakes and LAWS, some worry about other potential issues with AI, such as job loss due to AI automation (though some project that AI will generate millions of new jobs), more effective social surveillance (Big Brother stuff), human biases being programmed into AI software, and much more powerful ways to spread lies and propaganda that can only further divide an already frighteningly fragmented world.

artificial intelligence: one bite too far?

Apple computers’ legendary and ubiquitous logo, an apple with a bite out of it, is an obvious reference to Genesis 3, the biblical account of the fall (though nothing in Scripture indicates what kind of fruit it was), when our first parents partook of forbidden knowledge. The problem was not the knowledge of “good” itself; Adam and Eve already knew the “good” (the whole original creation was “very good” [Genesis 1:31]8). It was, instead, the knowledge of “evil” (see Genesis 2:9, 17; 3:5, 22), which they were never meant to have in the first place. Not all knowledge, then, is beneficial to humanity. And humanity—armed with nuclear, chemical, and biological weapons (not to mention, though we will anyway, killer drones, smart bombs, electromagnetic pulse weapons, and so forth)—knows this truth all too well.

What about AI? Does it represent one bite too far of forbidden knowledge? What are we setting ourselves up for, especially with AI’s power to deceive? The Bible, over and over, and often in the context of the end time, does warn about deceptions, about the masses being deceived. Jesus Himself cautioned: “For many will come in My name, saying, ‘I am the Christ,’ and will deceive many” (Matthew 24:5). He warned that false prophets as well will “deceive many” (verse 11). He also said that “if possible, even the elect” (verse 24) can be duped. On a wider scale, Scripture says that the devil “deceives the whole world” (Revelation 12:9).

Who knows what role, if any, AI will play in all this deception? What we should know is that we must be grounded, not in our senses alone—which can easily be fooled—but in the Word of God and what it teaches about salvation in Jesus, especially in the context of His second coming. Knowing what the Bible teaches and then obeying it is our only protection against deadly deception.

The crucial point? Don’t wait. Now is when we should seek to know the Lord and His truth. If all sorts of deceptions have existed before and have deceived many, how much greater might end-time deceptions be, especially with AI possibly (even, perhaps, likely) thrown in? We might not be facing a HAL or “replicants,” but AI’s threats can be extraordinarily subtle, underscoring the necessity of knowing your Bible.

Clifford Goldstein writes from Tennessee and is a frequent contributor to Signs of the Times®.

1. Cindy Gordon, “ChatGPT Is the Fastest Growing App in the History of Web Applications,” Forbes, February 2, 2023.

2. Sara Brown, “Machine Learning, Explained,” MIT Management, April 21, 2021.

3. Bernard Marr, “The 10 Best Examples of How AI Is Already Used in Our Everyday Life,” Forbes, December 16, 2019.

4. Tamlyn Hunt, “Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not,” Scientific American, May 25, 2023.

5. “Statement on AI Risk: AI Experts and Public Figures Express Their Concern About AI Risk,” Center for AI Safety, accessed August 28, 2023.

6. Joseph Foley, “20 of the Best Deepfake Examples That Terrified and Amused the Internet,” Creative Bloq, last updated March 10, 2023.

7. Mike Thomas, “12 Risks and Dangers of Artificial Intelligence (AI),” Built In, last updated August 3, 2023.

8. All Bible verses in this article are from the New King James Version.

The Dangers of Artificial Intelligence

by Clifford Goldstein
From the January 2024 Signs