
Language Savants: Risks and Opportunities of Generative BS


[Image: an empty-headed puppet]

AI chatbots powered by LLMs are geniuses at language play and idiots in other realms of intelligence – language savants.

This essay introduced the metaphor of the language savant; it originated in early 2023 as a short speech delivered at a CEO event.

I am an entrepreneur and have been doing AI work for 40 years. I’m best known as the cofounder and CTO of the company that created Siri. Launched by Apple 12 years ago, Siri may still be the most frequently used conversational AI, invoked on the order of a billion times a day all around the world.

Our vision for Siri was a personal assistant for everyone: you could just say what you wanted and have the AI figure out how to do it. I think we made a good start on this vision.

Today we have a new generation of conversational assistants, like ChatGPT. Of course, they are a lot better at having a conversation with you, but there’s something much more profound going on in these systems.

Today’s AI, based on large language models, is like having a personal assistant who has been trained on content that would take a million human lifetimes to consume, speaks over 100 languages, and can read and write as well as college graduates, at least when taking written tests. For example, GPT-4 can pass the bar exam on the first try without studying. It’s very, very good with human language.

I call this class of AI chatbot a language savant, because what it is not good at is knowing what it is talking about. Today’s language savant has no idea what is true or false, or even why it says the things it does.

Despite this inherent limitation, it can be prompted to perform a wide array of tasks, from writing essays to writing code. It can help people be more productive, performing some of their work on demand and, in some cases, automating it. It could be relevant to anyone who uses language in their work. Think about it: for a lot of people, consuming and generating language is core to what they do: students, teachers, office workers, CEOs, marketers, journalists, novelists, customer support agents, lawyers, politicians, pastors, professors, and salespeople. Anyone whose job is to consume and produce language can see their work significantly impacted by this technology.

And not only is it broad in applicability, it’s broad in reach. Because it’s in the form of software with a lightweight and universally accessible UI, you can scale it to as many users as you want. It could be available to every person with access to the Internet. That’s over half of the people on earth.

But there is a downside to this new AI, and it is inherent in the way the technology is built. The same facility with language that enables it to pass the bar exam also gives it superhuman skill at the generation of BS. It is not lying, because it doesn’t know it is BSing. It does not know and does not care what is true or false. It is an infinitely scalable fabricator of text and images. Today’s AI already has superhuman skills at impersonation, misinformation, disinformation, and fraud. In the wrong hands (especially in a lot of wrong hands), this superpower with BS could cause the breakdown of trust in human society. I don’t mean trust in some warm, fuzzy way. I mean the trust that is fundamental to the stability of a lot of things that we care about in this country, like:

  • Regulated markets – if you can’t trust that a transaction will go through, or that the thing you are buying is what it is claimed to be, markets do not function. The very idea of money requires trust.
  • The freedom of the press to inform us – if there is no way to tell the difference between journalism and propaganda, we have no free speech.
  • Free and fair elections cannot happen without trust.
  • Responsible, non-corrupt government – the very idea of representative government depends on trust.
  • Professions like law and medicine – the fiduciary relationships of lawyer to client and doctor to patient are based on trust.
  • Even organized religion depends on trust.

If you cannot trust anything that you read or hear or see or experience through your digital device, how will society function?

The only limitations on this technology are time and money. And we’re in the middle of an AI arms race that is throwing as much money as possible at this technology as fast as possible. There’s a very strong research result showing that as you add more computational power and data to training large language models, they get more and more capable, with no end in sight. That means that capital, which pays for the computation and data, can be used to displace labor at scale, not only to automate existing work, but to solve problems that humans cannot solve today. In another iteration of these models, they will be more knowledgeable, able to think through more options, process more information, and come up with more ideas than any human who ever existed.
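For readers who want the research behind that claim: it most likely refers to the empirical “scaling laws” for neural language models (Kaplan et al., 2020). A rough sketch of the finding is below; the exponents and constants are empirical curve fits, not exact laws.

```latex
% Sketch of the empirical scaling laws for language models (Kaplan et al., 2020).
% Test loss L falls as a smooth power law in model parameters N, dataset size D,
% and training compute C, with no plateau observed at the scales tested.
% N_c, D_c, C_c and the alpha exponents are fitted constants from that paper.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```

Lower loss has so far tracked broader capability in practice, which is why spending more on compute and data keeps buying more capable models.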

Note that I didn’t say it would be wiser, or make better decisions than any human, or even make decisions that are good for humans.

This is where we come in as leaders, citizens, and people with a stake in a stable, prosperous, flourishing human society. This technology is moving fast, but building bigger and better models will not automatically lead to the outcomes we want. These systems have to be engineered to align with human benefit, which is an unsolved problem today. That requires not only research breakthroughs, but also agreement as a society on what kinds of human benefit to pursue, and who gets to benefit.

We can start making decisions now about how this technology can and should be applied. It could absolutely revolutionize how humans work and learn. 

  • Every one of us could have a personal coach and tutor, not only during our school-age years, but continuously throughout our lives.
  • As professionals, we could all have a staff of researchers, writers, analysts, and even creative inspirations. We could have our own team of programmers and graphic designers, our own project managers, our own chief of staff.
  • As a society, we can choose to direct capital and talent toward aiming the new AI at important problems like sustainable energy production, climate change mitigation, curing disease, and helping us all live healthier lives.

We also have hard work to do to contain the coming wave of AI, so that we can continue to have stable systems based on trust and avoid the epistemological maelstrom. Fortunately, the conversation about containment has already begun, among the leaders of the tech firms who build the giant AI models, the AI researchers working to understand how those models can be contained, and the government officials who set policy on how they can and should be used. But we need to move fast, and move from talk to action, while there is still time.

I know this raises a lot of questions, and that’s the point. A superhuman language savant will help us work smarter individually, but it could also be used to unravel the threads of trust that help us work together collectively. Today we are at the elbow in the curve, the moment of nonlinear takeoff for this technology. Now is the time to think it through, to decide how we want things to turn out, and to ask the hard questions about how this might be controlled, and by whom.