How Can Contact Centers Use AI-Powered Chatbots Responsibly?

Chatbots have been maturing steadily for years. In 2022, however, they showed that they’re ready to take a giant leap forward.

When ChatGPT was unveiled a few short weeks ago, the tech world was abuzz about it. The New York Times tech columnist Kevin Roose called it “quite simply, the best artificial intelligence chatbot ever released to the general public,” and social media was flooded with examples of its ability to crank out convincingly human-like prose.[1] Some venture capitalists even went so far as to say that its launch may be as earth-shattering as the introduction of the iPhone in 2007.[2]

ChatGPT does indeed look like it represents a major step forward for artificial intelligence (AI) technology. But, as many users were quick to discover, it’s still marked by many flaws — some of them serious. Its advent signals not just a watershed moment for AI development, but an urgent call to reckon with a future that’s arriving more quickly than many expected.

Fundamentally, ChatGPT brings a new sense of urgency to the question: How can we develop and use this technology responsibly? Contact centers can’t answer this question on their own, but they do have a specific part to play.

ChatGPT: what’s all the hype about?

Answering that question first requires an understanding of just what ChatGPT is and what it represents. The technology is the brainchild of OpenAI, the San Francisco-based AI company that also released the innovative image generator DALL-E 2 earlier this year. ChatGPT was released to the public on Nov. 30, 2022, and quickly gained steam, reaching 1 million users within five days.

The bot’s capabilities stunned even Elon Musk, who co-founded OpenAI alongside Sam Altman and others. He echoed the sentiment of many people when he called ChatGPT’s language processing “scary good.”[3]

So, why all the hype? Is ChatGPT really that much better than any chatbot that’s come before? In many ways, it seems the answer is yes.

The bot’s knowledge base and language processing capabilities far outpace other technology on the market. It can churn out quick, essay-length answers to seemingly innumerable queries, covering a vast range of subjects and even answering in varied styles of prose based on user inputs. You can ask it to write a resignation letter in a formal style or craft a quick poem about your pet. It churns out academic essays with ease, and its prose is convincing and, in many cases, accurate. In the weeks after its launch, Twitter was flooded with examples of ChatGPT answering every type of question users could conceive of.

The technology is, as Roose points out, “Smarter. Weirder. More flexible.” It may truly usher in a sea change in conversational AI.[1]

A wolf in sheep’s clothing: the dangers of veiled misinformation 

For all its impressive features, though, ChatGPT still showcases many of the same flaws that have become familiar in AI technology. In such a powerful package, however, these flaws seem more ominous.

Early users reported a host of concerning issues with the technology. For instance, like other chatbots, it reflects the biases of the data it was trained on. Before long, users had prompted ChatGPT into spouting offensive comments: that women in lab coats were probably just janitors, or that only Asian or white men make good scientists. Despite the system’s reported guardrails, users were able to coax these kinds of biased responses out of it fairly quickly.[4]

More concerning, however, are ChatGPT’s human-like qualities, which make its answers all the more convincing. Samantha Delouya, a journalist for Business Insider, asked it to write a story she’d already written, and was shocked by the results.

On the one hand, the resulting piece of “journalism” was remarkably on point and accurate, albeit somewhat predictable. In less than 10 seconds, it produced a 200-word article quite similar to something Delouya might have written herself, so much so that she called it “alarmingly convincing.” The catch, however, was that the article contained fake quotes that ChatGPT had fabricated. Delouya spotted them easily, but an unsuspecting reader may not have.[3]

Therein lies the rub with this type of technology. Its mission is to produce content and conversation that’s convincingly human, not necessarily to tell the truth. And that opens up frightening new possibilities for misinformation and — in the hands of nefarious users — more effective disinformation campaigns.

What are the implications, political and otherwise, of a chatbot this powerful? It’s hard to say — and that’s what’s scary. In recent years, we’ve already seen how easily misinformation can spread, not to mention the damage it can do. What happens if a chatbot can mislead more efficiently and convincingly?

AI can’t be left to its own devices: the testing solution

Like many reading the headlines about ChatGPT, contact center executives may be wide-eyed about the possibilities of deploying this advanced level of AI for their chatbot solutions. But they must first grapple with these questions and craft a plan for using the technology responsibly.

Careful use of ChatGPT — or whatever technology comes after it — is not a one-dimensional problem. No single actor can solve it alone, and it ultimately comes down to an array of questions involving not only developers and users but also public policy and governance. Still, all players should seek to do their part, and for contact centers, that means focusing on testing.

The surest pathway to chaos is to leave chatbots to work out every user query on their own, without any human guidance. As we’ve already seen with even the most advanced form of this technology, that doesn’t always end well.

Instead, contact centers deploying increasingly advanced chatbot solutions must commit to regular, automated testing that exposes flaws and issues as they arise, before they snowball into bigger problems. Whether those flaws are simple customer experience (CX) defects or more dramatic information errors, you need to discover them early so you can correct the problem and retrain your bot. A lightweight regression suite, like the sketch below, is one way to start.
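To make that concrete, here is a minimal sketch of what automated chatbot testing can look like in practice. It assumes a hypothetical HTTP endpoint (CHATBOT_URL) that accepts a JSON message and returns a JSON reply; the endpoint shape, test utterances, and expected phrases are all invented for illustration, so adapt them to your own bot’s API.

```python
# Minimal chatbot regression-test sketch. CHATBOT_URL, the request/response
# shape, and the test content below are hypothetical examples.
import requests

CHATBOT_URL = "https://example.com/api/chat"  # hypothetical endpoint

# Each case pairs a user utterance with phrases the reply must contain and
# phrases it must never contain (e.g., a known factual error to guard against).
TEST_CASES = [
    {
        "message": "What are your support hours?",
        "must_include": ["9 a.m.", "5 p.m."],
        "must_exclude": ["24/7"],  # catches a previously seen wrong answer
    },
    {
        "message": "How do I reset my password?",
        "must_include": ["reset link"],
        "must_exclude": [],
    },
]

def run_tests() -> int:
    """Send each test utterance to the bot and check its reply. Returns the
    number of failing cases."""
    failures = 0
    for case in TEST_CASES:
        reply = requests.post(
            CHATBOT_URL, json={"message": case["message"]}, timeout=10
        ).json()["reply"]
        missing = [p for p in case["must_include"] if p.lower() not in reply.lower()]
        forbidden = [p for p in case["must_exclude"] if p.lower() in reply.lower()]
        if missing or forbidden:
            failures += 1
            print(f"FAIL: {case['message']!r}")
            if missing:
                print(f"  expected but missing: {missing}")
            if forbidden:
                print(f"  forbidden content found: {forbidden}")
    print(f"{len(TEST_CASES) - failures}/{len(TEST_CASES)} checks passed")
    return failures

if __name__ == "__main__":
    raise SystemExit(run_tests())  # nonzero exit fails a CI pipeline
```

Run on a schedule or in a CI pipeline, a suite like this catches regressions — a wrong support-hours answer, a fabricated claim — before your customers do, and flags exactly which conversations need retraining.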

Cyara Botium is designed to help contact centers keep chatbots in check. As a comprehensive chatbot testing solution, Botium can perform automated tests for natural language processing (NLP) scores, conversation flows, security issues, and overall performance. It’s not the only component in a complete plan for responsible chatbot use, but it’s a critical one that no contact center can afford to ignore.
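For illustration, Botium test cases are written as simple “convo” files in BotiumScript, where #me lines are user turns and #bot lines are the responses the bot is expected to give. The dialogue below is a made-up example of the format, not a real test from any deployment:

```
TC_REFUND_POLICY - bot states the correct return window

#me
What is your refund policy?

#bot
You can return most items within 30 days for a full refund.
```

Suites of convo files like this can then be executed automatically, for example with the Botium CLI, so every new release of your bot is checked against the same expected conversations.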

Learn more about how Botium’s powerful chatbot testing solutions can help you keep your chatbots in check, and reach out today to set up a demo.

[1] Kevin Roose, “The Brilliance and Weirdness of ChatGPT,” The New York Times, Dec. 5, 2022.

[2] CNBC, “Why tech insiders are so excited about ChatGPT, a chatbot that answers questions and writes essays.”

[3] Business Insider, “I asked ChatGPT to do my work and write an Insider article for me. It quickly generated an alarmingly convincing article filled with misinformation.”

[4] Bloomberg, “OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails.”
