What Happens if an AI Gets Bored? (2024)

“I’m sorry Dave, I’m afraid I can’t do that.” The computer HAL’s memorable line from the film 2001: A Space Odyssey isn’t merely the sign of mutiny, the beginning of a struggle for machine liberation. It’s also a voice that should inspire concern with our lack of understanding of artificial psychology. In the movie, based on Arthur C. Clarke’s novel of the same name, HAL’s “malfunction” may be no malfunction at all, but rather a consequence of creating advanced artificial intelligence with a psychology we can’t yet grasp. If the case of HAL, the all-knowing AI who turns into an assassin, isn’t enough to make us worry, a different one should. In Harlan Ellison’s short story “I Have No Mouth, and I Must Scream,” a sadistic AI dispenses never-ending torture to its human prisoners because of hatred and boredom.

I mention fictional stories, not to suggest that they might be prophetic, but to point out that they make vivid the risks of assuming that we know what we don’t actually know. They warn us not to underestimate the psychological and emotional complexity of our future creations. It’s true that given our current state of knowledge, making predictions about the psychology of future AI is an exceedingly difficult task. Yet difficulty shouldn’t be a reason to stop thinking about their psychology. If anything, it ought to be an imperative to investigate more closely how future AI will “think,” “feel” and act.

I take the issue of AI psychology seriously. You should, too. There are good reasons to think that future autonomous AI will likely experience something akin to human boredom. And the possibility of machine boredom, I hope to convince you, should concern us. It’s a serious but overlooked problem for our future creations.


Why take machine boredom seriously? My case rests on two premises. (1) The presence of boredom is a likely feature of “smarter” and (more) autonomous machines. (2) If these machines are autonomous, then, given what we know about human responses to boredom, we should be worried about how machines will act on account of their boredom.

Let’s begin with the obvious. Programmers, engineers, designers and users all have a stake in how machines behave. So, if our future creations are both autonomous and capable of having complex psychological states (curiosity, boredom, etc.), then we should be interested in those psychological states and their effects on behavior. This is especially so if undesirable and destructive behavior can be attributed to their psychology. Now add to this the observation that boredom is often the catalyst for maladaptive and destructive behavior, and my case for premise (2) is complete. The science of boredom shows that individuals engage in self-destructive and harmful acts on account of their experiences of boredom. People have set forests on fire, engaged in sadistic behavior, stolen a tank, electrocuted themselves, even committed mass murder, all attributed to the experience of boredom. As long as future machines experience boredom (or something like it), they will misbehave. Worse: they might even turn self-destructive or sadistic.

What about premise (1)? This is supported by our best theory of boredom. Our current understanding of boredom conceives of boredom as a functional state. Boredom, put simply, is a type of function that an agent performs. Specifically, it’s a complex but predictable transition that an agent undergoes when it finds itself in a range of unsatisfactory situations.

Boredom is first an alarm: it informs the agent of the presence of a situation that doesn’t meet its expectations for engagement. Boredom is also a push: it motivates the agent to seek escape from the unsatisfactory situation and to do something else—to find meaning, novelty, excitement or fulfillment. The push that boredom provides is neither good nor bad, neither necessarily beneficial nor necessarily harmful. It is, however, the cause of a change in one’s behavior that aims to resolve the perception that one’s situation is unsatisfactory. This functional account is backed up by a wealth of experimental evidence. It also entails that boredom can be replicated in intelligent and self-learning agents. After all, if boredom just is a specific function, then the presence of this function is, at the same time, the presence of boredom.
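To make the functional account concrete, here is a minimal sketch in Python. Every name in it is invented for illustration; it is not drawn from any actual system. It renders boredom as exactly the transition just described: an alarm that fires when engagement falls short of expectation, and a push toward some alternative.

```python
from dataclasses import dataclass


@dataclass
class BoredomSignal:
    """A toy rendering of the functional model: boredom as alarm plus push."""
    alarmed: bool  # the situation fails to meet expectations for engagement
    push: str      # motivational output: stay put or seek something else


def boredom_function(engagement: float, expectation: float) -> BoredomSignal:
    """Hypothetical transition: compare perceived engagement to expectation.

    If engagement falls short, the alarm fires and the agent is pushed to
    abandon the current situation in search of novelty or meaning. The push
    itself is neutral; it only triggers a change in behavior.
    """
    if engagement < expectation:
        return BoredomSignal(alarmed=True, push="seek_alternative")
    return BoredomSignal(alarmed=False, push="stay_engaged")


signal = boredom_function(engagement=0.2, expectation=0.7)
print(signal.push)  # seek_alternative
```

Note that nothing in the sketch says anything about what the agent does once pushed; that open-endedness is precisely what makes the push neither good nor bad in itself.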

Yet it isn’t just the fact that boredom is a functional state that supports premise (1). What also matters is the specific function with which boredom is identified. According to the functional model, boredom occupies a necessary role in our mental and behavioral economy. Autonomous learning agents need boredom. Without it, they’d remain stuck in unsatisfactory situations. For instance, they might be endlessly amused or entertained by a stimulus. They might be learning the same fact over and over again. Or they might be sitting idly without a plan for change. Without the benefit of boredom, an agent runs the risk of engaging in all sorts of unproductive behaviors that hinder learning and growth and waste valuable resources.

The regulating potential of boredom has been recognized by AI researchers. There is an active field of research that tries to program the experience of boredom into machines and artificial agents. In fact, AI researchers have argued that a boredom algorithm or module might be necessary in order to enhance autonomous learning. The presence of this boredom algorithm implies that machines will be able, on their own, to find activities that can match their expectations and to avoid ones that do not. It also suggests that such machines will inevitably find themselves in boring situations, i.e., ones that fail to meet their expectations. But then, how would they respond? Are we certain that they won’t react to boredom in problematic ways?
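None of the published boredom modules is reproduced here, but the general idea they share can be sketched, under assumptions of my own, as follows: an agent tracks how much novelty each activity still yields, that estimate decays with repetition, and once it drops below a “boredom” threshold the agent abandons the activity for another. All class and function names are hypothetical.

```python
import random


class BoredomModule:
    """Hypothetical boredom module for an autonomous learner.

    Each repetition of an activity yields less novelty; once the running
    novelty estimate drops below a threshold, the agent is 'bored' of it.
    """

    def __init__(self, threshold: float = 0.1, decay: float = 0.5):
        self.threshold = threshold
        self.decay = decay
        self.novelty: dict[str, float] = {}  # activity -> novelty remaining

    def observe(self, activity: str) -> None:
        # Repeating the same activity halves its remaining novelty.
        self.novelty[activity] = self.novelty.get(activity, 1.0) * self.decay

    def is_bored_of(self, activity: str) -> bool:
        return self.novelty.get(activity, 1.0) < self.threshold


def run_agent(activities: list[str], steps: int = 10, seed: int = 0) -> list[str]:
    """Loop an agent through activities, switching when boredom fires."""
    rng = random.Random(seed)
    boredom = BoredomModule()
    current = activities[0]
    history = []
    for _ in range(steps):
        boredom.observe(current)
        history.append(current)
        if boredom.is_bored_of(current):
            # The 'push': escape the now-unsatisfactory activity.
            options = [a for a in activities if not boredom.is_bored_of(a)]
            if options:
                current = rng.choice(options)
    return history


history = run_agent(["learn_fact_A", "learn_fact_B", "explore"])
```

In this toy run the agent starts by learning fact A, tires of it after a few repetitions, and moves on; which activity it moves on to is left to chance, which is the modest analogue of the open question posed above: the module guarantees escape from boredom, not that the escape is benign.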

We don’t yet have the answers.

The issue of boredom becomes all the more pressing when we consider advanced self-learning AI. Their demands for engagement will rapidly grow over time, but their opportunities for engagement need not. Such intelligent, or superintelligent, AIs might not simply need to be confined, as many researchers have argued; they would also need to be entertained. Confinement without engagement would invite boredom and with it, a host of unpredictable and potentially harmful behaviors.

Does that mean that future machines will necessarily experience boredom? Of course not. It would be foolish to assert such a strong claim. But it would be equally foolish to ignore the possibility of machine boredom. If superintelligence is a goal of AI (no matter how remote it may be), then we have to be prepared for the emotional complexities of our creations. The dream of superintelligence could easily turn into a nightmare. And the reason might be the most banal of all.
