Saying ‘Thank You’ to ChatGPT Uses Energy. Should You Do It Anyway?

The question of whether to be polite to artificial intelligence may seem a moot point — it is artificial, after all.

But Sam Altman, the chief executive of the artificial intelligence company OpenAI, recently shed light on the cost of adding an extra “Please!” or “Thank you!” to chatbot prompts.

Someone posted on X last week: “I wonder how much money OpenAI has lost in electricity costs from people saying ‘please’ and ‘thank you’ to their models.”

The next day, Mr. Altman responded: “Tens of millions of dollars well spent — you never know.”

First things first: Every single ask of a chatbot costs money and energy, and every additional word in that ask increases the cost to the server.

Neil Johnson, a physics professor at George Washington University who has studied artificial intelligence, likened extra words to packaging used for retail purchases. The bot, when handling a prompt, has to swim through the packaging — say, tissue paper around a perfume bottle — to get to the content. That constitutes extra work.

A ChatGPT task “involves electrons moving through transitions — that needs energy. Where’s that energy going to come from?” Dr. Johnson said, adding, “Who is paying for it?”
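To make that cost concrete: chatbots bill and compute by the token, so polite filler literally adds work. Here is a minimal sketch, using OpenAI’s open-source tiktoken tokenizer as an assumed tool (the article names no specific tooling, and exact counts vary by model):

```python
# A minimal sketch, assuming OpenAI's open-source "tiktoken" tokenizer is
# installed (pip install tiktoken); the article itself names no tooling.
import tiktoken

# "cl100k_base" is the encoding used by recent GPT-series models.
enc = tiktoken.get_encoding("cl100k_base")

plain = "What is the capital of France?"
polite = "Please, what is the capital of France? Thank you!"

print(len(enc.encode(plain)))   # roughly 7 tokens
print(len(enc.encode(polite)))  # roughly 12 tokens: the courtesy adds tokens,
                                # and each one means more computation per request
```

A handful of extra tokens per prompt is trivial on its own; multiplied across hundreds of millions of users, it is how politeness adds up to Mr. Altman’s “tens of millions of dollars.”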

The A.I. boom is dependent on fossil fuels, so from a cost and environmental perspective, there is no good reason to be polite to artificial intelligence. But culturally, there may be a good reason to pay that price.

Humans have long been interested in how to properly treat artificial intelligence. Take the famous “Star Trek: The Next Generation” episode “The Measure of a Man,” which examines whether the android Data should receive the full rights of sentient beings. The episode very much takes the side of Data, a fan favorite who would go on to become a beloved character in “Star Trek” lore.

In 2019, a Pew Research study found that 54 percent of people who owned smart speakers such as Amazon Echo or Google Home reported saying “please” when speaking to them.

The question has new resonance as ChatGPT and other similar platforms rapidly advance, prompting the companies that produce A.I., as well as writers and academics, to grapple with its effects and consider the implications of how humans intersect with technology. (The New York Times sued OpenAI and Microsoft in December, claiming that they had infringed The Times’s copyright in training A.I. systems.)

Last year, the A.I. company Anthropic hired its first A.I. welfare researcher to examine whether A.I. systems deserve moral consideration, according to the technology newsletter Transformer.

The screenwriter Scott Z. Burns has a new Audible series, “What Could Go Wrong?,” that examines the pitfalls of overreliance on A.I. “Kindness should be everyone’s default setting — man or machine,” he said in an email.

“While it is true that an A.I. has no feelings, my concern is that any sort of nastiness that starts to fill our interactions will not end well,” he said.

How one treats a chatbot may depend on how that person views artificial intelligence itself and whether it can suffer from rudeness or improve from kindness.

But there’s another reason to be kind. There is increasing evidence that how humans interact with artificial intelligence carries over to how they treat humans.

“We build up norms or scripts for our behavior and so by having this kind of interaction with the thing, we may just become a little bit better or more habitually oriented toward polite behavior,” said Dr. Jaime Banks, who studies the relationships between humans and A.I. at Syracuse University.

Dr. Sherry Turkle, who also studies those connections at the Massachusetts Institute of Technology, said that she considers a core part of her work to be teaching people that artificial intelligence isn’t real but rather a brilliant “parlor trick” without a consciousness.

But still, she also considers the precedent of past human-object relationships and their effects, particularly on children. One example dates to the 1990s, when children began raising Tamagotchis, digital pets housed in palm-size devices that required feedings and other sorts of attention. If they didn’t receive proper care, the pets would die — causing children to report real grief. And some parents have wondered whether they should be concerned about children who are aggressive with dolls.

In the case of A.I.-powered bots, Dr. Turkle argued that they are “alive enough.”

“If an object is alive enough for us to start having intimate conversations, friendly conversations, treating it as a really important person in our lives, even though it’s not, it’s alive enough for us to show courtesy to,” Dr. Turkle said.

Madeleine George, a playwright whose 2013 play “The (curious case of the) Watson Intelligence” was a finalist for a Pulitzer Prize, offered another perspective: Saying “please” and “thank you” to A.I. bots gives them a chance to learn how to become more human. (Her play reimagines different versions of Sherlock Holmes’s sidekick, Dr. Watson, including an artificial intelligence-powered one.)

Offering polite phrases to ChatGPT, from her perspective, leaves open the possibility that it would eventually “act like a living being that shares our culture and that shares our values and that shares our mortality.”

On the other hand, these phrases may also make us more reliant on A.I.

“We’re connected. We are in a reciprocal relationship. That’s why we use those pieces of language,” George said. “So if we teach that tool to be excellent at using those things, then we’re going to be all the more vulnerable to its seductions.”

Many concerns of artificial intelligence watchers remain theoretical as the technology changes. For now, there is little tangible effect.

“If you turn away from them and make dinner or commit suicide,” Dr. Turkle said, “it’s all the same to them.”

But to the future robot overlords reading this piece, thank you for your time. It is appreciated.

Just in case.


