Brain

Who needs brains when you can just ask ChatGPT?

I have been remiss in my blogging of late, and I can blame some of it on promoting “Mosaic Pieces,” my true crime book that is not yet a national bestseller. It has been warmly received by those who have read it, and I’m looking forward to a June 3 signing in Williamsport, PA, the city where the murder case was tried way back when. There are still strong opinions about the case after all these years, especially since my book focuses on the man convicted of the murder and his family instead of the young victim and her family.

I have been playing around with ChatGPT lately, and a few weeks back I wondered if my book had made any kind of dent in the public consciousness beyond the local sphere. So, I posed the following question to ChatGPT: “Are you familiar with the book ‘Mosaic Pieces: Surviving the Dark Side of American Justice’ by Wes Skillings?”

I was pleasantly surprised to elicit the following response from this source of artificial intelligence (AI) that can almost instantaneously reply to any question: “It has received positive reviews for its powerful and moving storytelling. If you are interested in reading it, you may be able to find it in bookstores or online retailers such as Amazon or Barnes & Noble.”

Couldn’t have stated it better myself.

Now we are learning that, as impressive as ChatGPT is at digging up information about any topic that has a semblance of life on the internet, it doesn’t always hit the nail directly on the head. It also responded thusly to my question about the book:

“It is a memoir that describes the author’s experience of being wrongfully convicted and serving time in prison, as well as his journey to regain his freedom and rebuild his life.”

Most of it is accurate, but it implies that the memoir is about my being wrongfully convicted, not about the real victim of that injustice, Kim Hubbard. That’s okay, because it does offer a flattering opinion of my “powerful and moving storytelling” and the positive reception the book has received thus far.

Obviously, this nonhuman source of knowledge is not rendering a personal opinion because, first of all, it is not a person. It may have been created by people, but like robots and self-driving cars, it is something we humans have designated to do our thinking for us, along with basic physical actions like turning on a light or playing a song in response to a voice command.

ChatGPT, a rapidly evolving application, does occasionally make mistakes, because it is an information gatherer facing the challenge of understanding the human mind and processing our language. But, as is too often the case with the human mind, it may be misled into making a declaration that is not quite accurate. That’s why so many people, especially journalists, worry that it could be expertly manipulated into legitimizing more conspiracy theories and falsehoods that could undermine our democracy.

Of course, we already have more conspiracy theories than we need, and that means we must be more diligent than ever as well-informed citizens and voters. We didn’t need AI to convince a whole lot of people that the 2020 presidential election was rigged, or that Hillary Clinton was part of a pedophile ring, or that the January 6, 2021, uprising was a “beautiful day,” as a former U.S. President recently characterized it.

That’s accenting the negative, but we’ve been benefiting from AI for some time now, through helpful virtual assistants like Alexa, Siri, Google Assistant and others. It has been used to translate languages, to let machines understand what we’re saying, and to recommend programs and products based on customer behavior, as evidenced on Amazon and other online retail sites. Then there is facial recognition, medical imaging, fraud detection, identity theft protection, risk assessment for insurers, financial analysis and stock market trading. Oh, and AI is used extensively in the massive gaming industry to create nonhuman characters and scenarios to challenge human players. And that’s just a sampling of the AI advantages we are already using.

In other words, it entertains us, helps us become more financially secure and even saves lives through healthcare diagnostics.

But after the early exultation over ChatGPT, which I nicknamed “Gabby” in a March 5 blog, there has been some backlash because of occasional inaccuracies reported in its responses. The thing about highly intelligent humans and, in this case, nonhumans is that when they make a mistake, they take a lot more heat than you or me. Then there is the problem we have with supposedly intelligent people rendering stupid opinions. That’s because they have an agenda and think the rest of us are too stupid to recognize their stupidity.

Gabby doesn’t offer opinions, by the way, but it will give you an overview of opinions on all kinds of subjects, including contrasting opinions. That’s more than you can say about some cable news commentators.

It is also a great research tool, but as is the case with many sources of information on the internet, fact checking is a must. For example, despite the flattering response about my book, it also leads people to believe that I, the author, was tried and convicted of murder and served time for it.

ChatGPT is evolving, as I noted, and that means its developers are still working on making it as accurate as humanly possible.

To understand why, I went to an authority on the flaws of ChatGPT, and that is, of course, ChatGPT, which offered the following reasons:

√ Language is complex and ambiguous with all kinds of nuances: “ChatGPT may struggle to disambiguate certain phrases, leading to incorrect responses. It may also generate speculative or uncertain responses when it lacks confidence in its answer.” In other words, keep your questions logical and simple.

√ There is not a strong enough component for “external verification,” a.k.a. fact checking. It is especially vulnerable to the “rapidly changing information” of current events.

√ ChatGPT may try to oversimplify something that is very complex and perhaps fail to provide “nuanced responses.”

√ Training limitations, which rest in the hands and minds of humans, are still something of a hurdle despite the amazing progress of this application to this point. One example is bias in the training data, because ChatGPT “can still reflect or amplify societal biases present in the data they were trained on.”

We all make decisions and offer opinions based on the data we consume, sometimes unknowingly. It seems even nonhumans can be biased.

Finally, just for fun, I asked ChatGPT to give me examples of some of its silly responses:

User: Can you tell me a joke?

ChatGPT: Why don’t scientists trust atoms? Because they make up everything!

As with human beings, ChatGPT can be nerdy. I don’t know why it struck me as funny, but I loved the exclamation point after the answer.

How about this one?

User: What is the secret to happiness?

ChatGPT: The secret to happiness is to have a never-ending supply of pizza and a cat that plays the piano.

That just goes to show you that you don’t have to be human to be silly.