ChatGPT is not your AI strategy

Since its launch in November 2022, ChatGPT, together with Google Bard and other large language models (LLMs), has been the subject of articles in the most prestigious publications and on broadcast television, accumulated millions of posts and discussions worldwide, and sparked an overnight pivot in sales and investment strategy for many of the world’s largest organizations.

Employees, shareholders, customers, and partners are looking to organizational leaders to answer the questions: What is your AI strategy? What is your ChatGPT strategy? What does this mean for your workers?

This is a pivotal leadership moment. The approaches that worked for creating a digital strategy and a data strategy won’t work this time around, given the deeper questions raised by this technology together with the media attention it has received.

ChatGPT is a powerful tool, and if the market is imagined as a chessboard, it is like a pawn: capable of being promoted to one of the most powerful pieces on the board, but only if orchestrated together with the rest of the pieces.

An LLM is only one piece on the board

Setting a strategy for the future of the organization requires understanding the capabilities of LLMs as one piece on the board, and that understanding anchors on the question of authority.

In layman’s terms, these language models take prompts such as “Create an AI strategy” and, drawing on massive amounts of training data, produce answers that, at first glance, are surprisingly cogent.

At second glance, however, they distill information that already exists and recast it based on what the answer “seems” like it should be. They have no authority, in and of themselves, to tell you the actual answer.

If a researcher published a paper based on years of technical research, and a student with no technical experience summarized the paper in five bullet points, the summary might be an accurate rewording of the underlying paper. But the student would not know whether it was accurate, and could not answer any follow-up questions without going back and quoting something else from the research that seemed like it might answer the question.

The image for this article is a great example. It was generated by DALL·E 2 based on this prompt: “A photo of an ornately carved pewter chess set on a chess board in front of a window at sunrise.” The generated image does seem like a chess set on a chessboard, but any human who has ever learned how to play chess, expert or not, can instantly recognize that there should not be three kings on the board.

Practical applications of LLMs retain human authority, such as systems in which experts can interact with archived institutional knowledge. For example, if a network engineer describes a particular file she knows exists but whose name and location she has forgotten, an LLM can help surface far more precise recommendations than previous systems.
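To make that retrieval idea concrete, here is a minimal sketch, not any particular product’s API. The embed() function below is a toy stand-in (a hashed bag of words) for a real text-embedding model; the ranking logic is the point, and the file paths and summaries are invented for illustration.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: hash words into a vector."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v

def rank_files(description: str, file_summaries: dict[str, str], top_k: int = 3):
    """Rank archived files by cosine similarity between the engineer's
    description and a short summary of each file's contents."""
    q = embed(description)
    scored = []
    for path, summary in file_summaries.items():
        v = embed(summary)
        sim = float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9)
        scored.append((sim, path))
    return sorted(scored, reverse=True)[:top_k]

# Usage: the engineer describes the file she remembers, not its name.
summaries = {
    "/net/configs/bgp-throttle.cfg": "limits the rate of BGP route updates",
    "/net/scripts/backup.sh": "nightly backup of switch configurations",
}
print(rank_files("the config that throttles route updates", summaries))
```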

The key ingredient to the successful application of these models is that humans remain the authority on whether something is accurate and true, with LLMs serving as accelerants for experts to navigate and generate information.

The rest of the pieces

LLMs are only one type of piece on the board, alongside deep learning, reinforcement learning, autonomous artificial intelligence, machine teaching, sentiment analysis, and so on.

Ironically, many of the other pieces on the board have more readily available, practical applications than LLMs, even though fewer people are familiar with them.

For example, some companies have developed autonomous artificial intelligence systems to control machines for which no historical data existed. To account for that gap, they built simulations of the machine and its environment, paired them with curricula created by the humans who operated the machine, and used deep reinforcement learning so the system could create its own data, through simulated experience of what to do and what not to do, and learn to control the machine successfully.
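A minimal sketch of that simulation-plus-reinforcement-learning pattern follows, using tabular Q-learning rather than the deep variants used in production. The “machine” here is a deliberately simple assumption: a temperature the agent must drive to a setpoint by choosing heat, cool, or idle actions, with the simulator generating the experience the agent learns from.

```python
import random

ACTIONS = [-1, 0, 1]     # cool, idle, heat
TARGET = 10              # desired temperature bucket
STATES = range(21)       # discretized temperature, 0..20

def simulate(state: int, action: int) -> tuple[int, int]:
    """Toy simulator: the machine drifts with the action plus small noise.
    Reward is highest when the temperature sits on the setpoint."""
    nxt = min(max(state + action + random.choice([-1, 0, 0, 1]), 0), 20)
    return nxt, -abs(nxt - TARGET)

# Tabular Q-learning: the agent creates its own data through simulated trials.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(2000):
    s = random.randrange(21)
    for _ in range(50):
        # Epsilon-greedy: mostly exploit what is learned, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2, r = simulate(s, a)
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy drives the machine toward the setpoint.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in STATES}
print(policy[0], policy[20])  # expect +1 (heat) at 0 and -1 (cool) at 20
```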

Another powerful piece on the board is the application of artificial intelligence to streaming data in real time, moving organizations away from algorithms run in nightly or weekly batches (or even manual jobs) and toward intelligence and learning applied in the moment.
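As one illustrative sketch of the in-the-moment pattern (the event stream and thresholds here are assumptions, not a specific streaming platform’s API), each event below is scored as it arrives using running statistics maintained with Welford’s online algorithm, so no nightly batch job is needed.

```python
import math
import random

class OnlineAnomalyScorer:
    """Maintains a running mean/variance and flags events that deviate."""
    def __init__(self, threshold: float = 3.0):
        self.n, self.mean, self.m2, self.threshold = 0, 0.0, 0.0, threshold

    def score(self, x: float) -> bool:
        # Welford's online update: no batch recomputation required.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        std = math.sqrt(self.m2 / self.n) if self.n > 1 else 0.0
        # Flag the event immediately if it is far outside the norm so far.
        return std > 0 and abs(x - self.mean) > self.threshold * std

scorer = OnlineAnomalyScorer()
for i in range(1000):
    value = random.gauss(100, 5) if i != 500 else 400  # one injected spike
    if scorer.score(value):
        print(f"event {i}: anomalous value {value:.1f}")  # fires in the moment
```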

These kinds of applications have strong economic potential, but because they cannot be accessed by anyone at home on a laptop or phone, they are not as well-known, and leaders are at risk of missing the signal of near-term value within the noise.

Autonomous, real-time, and generative AI all have valuable applications, and the most compelling can be found in combining them for exponential value. For example, when a customer calls a customer support center, real-time AI can analyze the customer’s voice for sentiment and transcribe their speech to text. Until recently, that text was then used to search for and recommend knowledge articles to help the customer care agent resolve the customer’s concern within a matter of minutes.

Adding generative AI to this picture means the transcribed customer speech can be used as a prompt to infer intent and generate more precise recommended responses to customer challenges, in seconds rather than minutes. Human authority can be maintained by embedding the underlying knowledge article(s) below the generated text, so the customer care agent can validate each generated response before using it.
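A minimal sketch of that combined flow appears below. Every service in it (transcribe, sentiment, retrieve_articles, generate_reply) is a hypothetical stub standing in for whatever speech-to-text, search, and LLM services an organization actually uses; the shape of the pipeline and the authority check at the end are the point.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    body: str

# Placeholder services: illustrative stand-ins, not real APIs.
def transcribe(audio: bytes) -> str:
    return "my router keeps dropping the VPN connection"  # canned transcript

def sentiment(text: str) -> str:
    return "frustrated" if "keeps" in text else "neutral"  # toy heuristic

def retrieve_articles(text: str, top_k: int = 2) -> list[Article]:
    kb = [Article("Resetting VPN tunnels", "..."),
          Article("Updating router firmware", "...")]
    return kb[:top_k]  # toy lookup; a real system would search by relevance

def generate_reply(text: str, mood: str, articles: list[Article]) -> str:
    titles = ", ".join(a.title for a in articles)
    return f"(tone: {mood}) Suggested steps drawn from: {titles}"

def handle_call_audio(audio: bytes) -> tuple[str, list[str]]:
    text = transcribe(audio)                      # real-time speech to text
    mood = sentiment(text)                        # real-time sentiment analysis
    articles = retrieve_articles(text)            # search institutional knowledge
    draft = generate_reply(text, mood, articles)  # generative AI drafts a reply
    # Human authority: return the sources alongside the draft so the agent
    # can validate the generated response before using it.
    return draft, [a.title for a in articles]

print(handle_call_audio(b""))
```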

Amid the sea of change, with AI pieces receiving varying degrees of investment and recognition, the leaders who create the most value for their customers and organizations will be those who can see the entire board and understand the value of each piece without losing sight of the broader strategy in favor of a quick tactic.

Strategy can’t precede vision

The answer to the question of an AI strategy that makes the most of all the pieces on the board starts with vision. What is the envisioned future of the organization? What is the envisioned and desired future of the market?

The inevitable answer that comes to mind for many is to research trends or to gather data. What does Gartner or IDC say is the future?

These resources and practices are valuable and have their place, but the responsibility of setting the vision for the future of the organization cannot be outsourced, and it should not be a reaction to a hypothetical trend envisioned by someone else based on investments other organizations are making.

Leaders must start with the hard but essential question of what future they want to create for their people, their partners, and their customers, and then work backward to the present as the starting point. This process clarifies what investments must be made to create that future, with LLMs and other technologies serving not as the basis of strategy, but as powerful tools making the strategy possible.


About Brian Evergreen


Brian Evergreen advises Fortune 500 executives on artificial intelligence strategy. He’s the author of the book Autonomous Transformation: Creating a More Human Future in the Era of Artificial Intelligence, and the founder of The Profitable Good Company, a leadership advisory that partners with and equips leaders to create a more human future in the era of AI.
