Chatbot Security in the Age of AI

With each passing year, contact centers experience more of the benefits of artificial intelligence. This technology — once only a distant idea portrayed with wonder and fear in science fiction — is now a key part of how businesses and customers interact.

According to survey data from Call Centre Helper, customer satisfaction is the number one factor driving more brands to adopt artificial intelligence (AI) as a part of their customer service models. AI’s ability to enable self-service and handle more calls more efficiently will prove critical for contact center success going forward. Not only that, but many contact center leaders find that its capacity for data collection and live interaction analytics presents game-changing possibilities for customer experience (CX).[1]

Yet, despite its many benefits, the present-day reality of AI isn’t fully free of the fears it has so often stoked in science fiction stories. One of the most pressing concerns about this powerful, widespread technology is its threat to data security. For contact centers, which house massive volumes of customer data and rely on chatbots to engage many customers and collect their information, this is a serious concern that can’t be overlooked. Thankfully, though, it’s also one that can be addressed.

The growing problem — and cost — of data breaches

Data breaches have made the headlines many times in recent years. Major brands and organizations, from Microsoft and Facebook to Equifax and the Cash App, have had troves of sensitive customer data stolen in cyberattacks that affected millions of consumers.

Despite the high-profile headlines, however, these cyberattacks can still seem like unfortunate but isolated events. This couldn’t be further from the truth.

According to the Identity Theft Resource Center (ITRC), a nonprofit organization that supports victims of identity crime, there were 1,862 data breaches in 2021. That exceeds 2020 numbers by more than 68% and is 23% higher than the all-time record of 1,506 set in 2017. 83% of those 2021 data breaches involved sensitive customer data, such as Social Security numbers.[2]

For the companies that fall victim to these data breaches, the costs are enormous. Brand reputation is sullied and customer trust is eroded, both of which can take years to rebuild and result in millions in lost revenue.

Those effects are significant enough, but they’re not the only ones. The immediate costs of a data breach are substantial. According to IBM’s latest data, the average data breach for companies across the globe costs $4.35 million. In the U.S., it’s much higher — at $9.44 million. It also varies significantly by industry, with healthcare topping the list at $10.10 million.[3]

The risks of AI

There are various vectors for these data breaches, and companies must work to secure each nexus where customer data can be exposed. As repositories for vast amounts of customer data, contact centers represent one of the most critical areas to secure. This is particularly true in the era of cloud-based contact centers with remote workforces, as the potential points of exposure have expanded exponentially.

In some ways, AI enhances an organization’s ability to discover and contain a data breach. The IBM report notes that organizations with full AI and automation deployment were able to contain breaches 28 days faster than those without these solutions. This boost in efficiency saved those companies more than $3 million in breach-related costs.[3]

That said, AI also introduces new security risks. In the grand scheme of contact center technology, AI is still relatively new, and many of the organizational policies that govern the use of customer data have not yet caught up with the possibilities AI introduces.

Consider chatbots, for instance. Nowadays, these solutions are largely AI-driven, and they introduce a range of risks into the contact center environment.

“Chatbot security vulnerabilities can include impersonating employees, ransomware and malware, phishing and bot repurposing,” says Christoph Börner, senior director of digital at Cyara. “It is highly likely there will be at least one high-profile security breach due to a chatbot vulnerability [in 2023], so chatbot data privacy and security concerns should not be overlooked by organizations.”

As serious as data breaches are, the risks of AI extend far outside this arena. For instance, the technology makes companies uniquely vulnerable to AI-targeted threats, such as Denial of Service attacks, which specifically aim to disrupt a company’s processes in order to gain a competitive advantage.

Going a step further, we have yet to see what could happen if a company deploys newer and more advanced forms of AI, such as ChatGPT, which launched in November to widespread awe at its ability to craft detailed, human-like responses to an array of user questions. It also spouted plenty of misinformation, however. What happens when a brand comes under fire for its bot misleading customers with half-baked information or outright factual errors? What if it misuses customer data? These are bona fide security threats every contact center relying on AI needs to be thinking about.

Solving the problem of chatbot and data security

The threats may be many and varied, but the solutions for facing them are straightforward. Many are familiar to contact center leaders, including basic protocols like multi-factor authentication, end-to-end chatbot encryption, and login protocols for chatbot or other AI interfaces. But true contact center security in the age of AI must go further.

Returning again to chatbots, Börner notes, “Many companies that use chatbots don’t have the proper security testing to proactively identify these issues before it’s too late.”

The scope of security testing needed for AI systems like chatbots is far more extensive than what any organization can achieve through manual, occasional tests. There are simply too many vulnerabilities and potential compliance violations, and AI can’t be left to its own devices or entrusted with sensitive customer data without the appropriate guardrails.

Automated security testing provides those guardrails and exposes any potential weak spots so contact center software developers can review and address them before they result in a security breach. For chatbots, a solution like Cyara Botium adds an essential layer of security. Botium is a one-of-a-kind solution that enables fast, detailed security testing and provides guidance for resolving issues quickly and effectively. Its simple, code-free interface makes it easy to secure chatbot CX from end to end.

If your contact center is committed to AI-driven chatbots, you can’t afford to sleep on securing them. To learn more about how Botium can enhance security for your chatbots, check out this product tour.

[1] Call Centre Helper. “Artificial Intelligence in the Call Centre: Survey Results.”

[2] Identity Theft Resource Center. “Identity Theft Resource Center’s 2021 Annual Data Breach Report Sets New Record for Number of Compromises.”

[3] IBM. “Cost of a data breach 2022.”

Artificial Intelligence, Machine Learning