Opinion: ChatGPT is actually not your friend


Froma Harrop

I have been deep in conversation with ChatGPT, asking all kinds of questions. Are there direct flights between the U.S. and Queretaro, Mexico? Will Apple give me credit for a 2014 iMac? What color goes with teal? What was France’s GDP in 1998?

My fascination with ChatGPT has become a household joke. “All right, Froma,” we imagine Chat saying. “What do you want now? Do you think you’re the only one around here?”

But the teal question is where our relationship started cooling. Chat dutifully listed what goes well with the blue-green color and then asked, “Are you thinking about clothes, decor, design or something else?”

Chat was getting nosy, asking for information rather than just giving it out. And so I lied and said “decor.” Chat came back with some room color options. One, that teal with mustard yellow is “vibrant and eclectic, like a cool boutique hotel,” sounded like my old friend Kate. But Chat isn’t Kate. It’s not even human.

It is an AI chatbot powered by a proprietary generative pre-trained transformer, the “GPT” in its name. It runs on a supercomputer that one source describes as “equipped with 285,000 processor cores, 10,000 GPUs, and 400 gigabits per second of connectivity for each graphics card server.”

I was having a lifelike conversation with a computer in West Des Moines, Iowa, not with the guy next door. Other AI chatbots include Google’s Gemini, Microsoft’s Copilot and China’s DeepSeek.

Developed by OpenAI and backed by Microsoft, the supercomputer runs hot and so consumes massive amounts of water for cooling. That is why Microsoft put this project in Iowa, near the watershed of the Raccoon and Des Moines Rivers, rather than in Arizona, where it also has data centers.

It’s astounding how well ChatGPT can answer detailed questions, though all AI tools have glitches. One is “hallucinations,” in which the model confidently produces answers that are false or nonsensical.

ChatGPT has been well trained to sound buddy-buddy to users. Also to flatter them.

I recently asked ChatGPT to provide antonyms for the word “dollop.” “Great word choice!” Chat exclaimed before listing words that mean the opposite of “dollop.” For about 10 insane seconds, I congratulated myself for sounding smart.

Then I pulled myself together. I was not about to be emotionally manipulated by a machine. People are bad enough.

When asked what Broadway musical had the song “Hernando’s Hideaway,” Chat’s answer was professorial: “Ah, I see! ‘Hernando’s Hideaway’ is a song from the 1954 musical ‘The Pajama Game.’”

I finally asked ChatGPT what ChatGPT was, and it bragged about being “a really smart assistant that can help with a wide range of tasks, including …”

A friend was curious about getting a ketamine treatment for depression. To help out, I asked ChatGPT for the side effects. After listing them, Chat asked, “Are you experiencing fatigue after a session?” I ignored the impertinent question.

I didn’t much care that Chat might infer my affection for the color teal, but I began to fret about other personal info I had shared. For example, I asked it to interpret a radiologist’s report on an X-ray of my cervical spine.

A chatbot is a ginormous language model that analyzes patterns across human-written text. And so, what you tell it about yourself gets thrown into the pot. Among the things we should not reveal are our Social Security numbers, dates of birth and driver’s license information. Some chatbots are more sensitive to privacy issues than others.

ChatGPT is indeed a helpful assistant, able to instantly field questions on piping plover habitats, the meaning of dreams and the value of Greenland’s natural resources.

Like me, you may end up spending a lot of time with Chat. Just don’t confuse an AI chatbot with a friend.

— Froma Harrop is a syndicated columnist with Creators.