AI, Big Data, and Humanity in 2024: Our new relationship with AI

Keynote at the Amsterdam RAI Center | February 2024

The latest AI models, planetary-scale LLMs with intelligent interfaces such as ChatGPT, are changing our relationship with AI. This presentation is an overview of the latest models: what makes them possible, how they are created, what they are great at, what to be careful with, and the exciting applications they enable. Gruber brings decades of experience in large-scale AI systems to explain how their mastery of language changes the game.

The presentation concludes with a Q&A session on how this technology could impact your life.


Transcript

[00:00] This is an amazing opportunity to talk about a really big topic: AI, humanity, big data. Let’s start a conversation today, and then you can continue it in the breakouts for the rest of the week.

[00:16] Let’s start with, well, an obvious thing: AI is a big deal. Yeah, it’s a big deal. It’s in all the conversations, but how big a deal? So let’s start with an authority on this: the guy who runs the biggest company in the world dedicated to AI, Google. They’ve been AI-first for many years.

[00:32] Eight years ago, he said that AI is a more profound technology than the invention of fire or electricity. And that was before all these large language models and all that stuff happened.

[00:44] And then Sam Altman, the CEO of OpenAI, which makes ChatGPT, won’t be outdone by a mere CEO of a trillion dollar company.

[00:53] He says that when he meets his goal of AGI, which he’s on record as promising within this decade, they’re going to capture the economic value of all of the galaxy for all of time. All right, what are these guys talking about?

[01:07] They’re not talking about science fiction. We’re talking about a technology that’s here, moving fast, as you know. We’re going to try to frame it: how does AI promise to deliver for humanity? What’s possible? And also, what’s going to be our role in this transition? What’s the new partnership between humans and AI?

[01:27] So let’s talk about what’s happening right now.

[01:30] There’s a partnership already between us and AI chatbots, which I’m going to use to represent the state of the art of AI. I know there’s a lot going on. Now, AI chatbots, like ChatGPT and its ilk from the other startups, are the UI to AI, the user interface to AI.

[01:49] The big-model AI in the cloud. More precisely, it’s a conversational interface to big-model AI in the cloud. Now, as you know, these models are based on machine learning. They’re created not by engineering, but by training. And by training on a lot of content, mostly stuff on the internet.

[02:08] Another way of thinking about it: it’s not just content on the internet. This isn’t arbitrary stuff. They’re trained on the legacy, the cultural legacy, of humanity. They’re taking our knowledge and putting it in a box. It’s kind of interesting. And because of this, they have certain properties.

[02:28] One thing: with their user interface wrapper, you can think of it as a capable assistant. You can think of this as the Siri vision delivered 14 years later. Yeah, you can really talk to this thing, and it seems to know a lot. What are these capabilities? Many of you know this, but you’d be surprised how clever these things are. They can take tests like nobody.

[02:47] They’re really good. They can get into college, no problem. If you want to fake your way into college, just give the test to ChatGPT; it passes with flying colors. More than that, they can pass the bar exam in the US. After you go to law school for multiple years, you take this one test that qualifies you to practice law.

[03:04] It’s horrible. People study for months. They mostly don’t pass the first time. ChatGPT passes it the first time, at the 90th percentile, without studying. Not bad. Now, they’re also an amazing translator. I didn’t know this until recently. These things speak and read and write 120 languages, including even some that they call low-resource languages, like Latvian and Welsh and Klingon.

[03:31] And if you give it something to read, of course, it can read it and abstract it and summarize it; basically, it’s an assistant that can do knowledge work. Now, the most important property of these guys is language. Language is really their superpower. And why is language a big deal? Why is it that, of all the things they could be super good at, math or something, they’re good at language?

[03:54] The reason is, first of all, we’re good at language. We now have an interface to the big AI that doesn’t require programming or data science. We, any of us, can access the power of the biggest technology on earth. That’s called Universal Access, and it’s been a dream of mine and everyone else in my field for years.

[04:11] It’s there now. And the second thing that’s kind of cool: if you think about I/O, there’s language on the input, language on the output, and what’s in the middle? That’s the class of tasks that the AI can start to automate. Well, what kinds of things have language on the input and the output?

[04:30] Basically, most white-collar work is now in range of automation. And that’s a sea change from the kinds of disruptions we’ve had from technology in history. What does that mean? It might mean not only lawyers and managers and ad writers, but even network engineers, who take language in and put language out sometimes.

[04:50] And language can be viewed broadly, because the new bots can take pictures and make pictures. Wow, multimodal. In fact, most of the graphics you’re going to see in this talk were made by an AI, and they’re better than I could ever do myself. Now, to start thinking about where this power came from, we have to look back and see how it happened.

[05:13] Because what happened was, it was a surprise. It was almost an accident. So I want to give you a little historical context to see where it came from. Let’s start with 2010, about 14 years ago, which is when we first showed the world Siri. It started out as an app and then Steve Jobs called us, we joined Apple, then we shipped it again, through Apple in 2011.

[05:35] I’m proud of the fact that we set the paradigm for the conversational front end to the AI back end. We set the pattern, and everyone copied it. No one has really found much to improve, except that modern ones actually do real conversation. How did it work? What were the key technologies in Siri that were relevant?

[05:51] Tom: So the first one is AI, but it was, as they say, old-school or classic AI: symbolic reasoning. It wasn’t math like it is today. There was a fairly deep stack of classical AI involved. And for the speech recognition, we used statistical machine learning, but not the kind that you see today. And then there are two more pieces of technology that most people don’t realize were essential.

[06:10] And they’re right in the middle of your industry. The second piece of technology is open APIs. There were 1,500 published open [APIs] available on the Internet at the end of the Web 2.0 era. And Siri was able to tap into them and use AI to orchestrate the services among all those APIs to do things like give you a reservation at a French restaurant next Thursday for two.

[06:31] In one shot. And the other technology? Well, it turns out that there were 24 of us. We started in 2008 and we shipped in 2010. We shipped to a million people the first year, then it was a hundred million the next year, and it kept going. How did we do this? We didn’t grow the team by orders of magnitude.

[06:54] We had elastic cloud. We had the illusion of infinite compute resources, because we could push a button and expand. We couldn’t have done Siri without that. So, yeah, you’re a part of this. Next, in the mid-2010s, you may have noticed that Siri stopped screwing up what you said to it so much.

[07:13] Why? Because it started using deep learning. Now, what’s deep learning? It’s neural nets, which are not new, and were not new then, applied to labeled data.

[07:25] Supervised learning on labeled data. So you have a bunch of speech labeled with the words, and it learns; you have a bunch of pictures labeled with captions, and it learns. So big data plus this neural net architecture; people got enthusiastic about how many layers, but it was essentially the same idea as ten years earlier. And then, one more thing.
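The supervised-learning idea described here, labeled examples plus repeated small corrections, can be sketched in a few lines. This is an illustrative toy (one weight, one bias, gradient descent on squared error), nothing like a production speech or vision model:

```python
# Toy supervised learning: fit y = w*x + b to labeled pairs (x, y).
# Each example's label tells the model how wrong it was, and the
# weights are nudged to reduce that error.
data = [(x, 2 * x + 1) for x in range(10)]  # "labeled" examples of y = 2x + 1

w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    for x, y in data:
        pred = w * x + b
        err = pred - y
        w -= lr * err * x   # gradient of (pred - y)^2 with respect to w
        b -= lr * err       # gradient with respect to b

print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

Deep learning is this same loop, scaled up to millions of weights arranged in layers, with speech or pixels on the input and human-supplied labels on the output.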

[07:46] It turns out that you need a lot of compute of a certain kind. These neural nets were not classic AI. They didn’t do symbolic reasoning. They just did math. Matrix algebra, basically. Floating point operations. And guess what? The video game industry had created hardware acceleration for that very kind of computation.

[08:02] GPUs. And so GPUs were an absolutely essential enabling condition for the deep learning revolution. And then, in 2017, this amazing breakthrough happened. A friend ran up to me and said, dude, you have to read this paper. If you only read one paper this year, read this one. Google has figured out how to make an AI that can learn from as much data as you feed it, with no labels, with no supervision.

[08:28] And it’s called a transformer. And it was called a transformer because it was the language people working on it, the translation people, going from French to English, German to French, and so on. The interesting thing was that this architecture allowed it to work on arbitrary text.

[08:41] It could just learn from examples of text. They didn’t have to be labeled with the parts of speech and all that stuff. And as a result, you could feed it a lot of text, because guess where it all was? The Internet was crawling with text. That was easy. And again, GPUs: this was inherently a distributed-processing kind of thing.
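The reason no labels are needed is that raw text supplies its own training signal: every word is the "label" for the words that precede it. A toy illustration of that self-supervised idea (a bigram counter, nothing like a real transformer, but the objective is the same shape):

```python
from collections import Counter, defaultdict

# Self-supervision in miniature: the text itself provides the targets,
# because each word is the "label" for the word before it.
text = "the cat sat on the mat and the cat slept"
words = text.split()

next_word = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    next_word[prev][nxt] += 1   # no human labeling anywhere

# Predict the most likely continuation of "the"
print(next_word["the"].most_common(1)[0][0])  # -> "cat"
```

A transformer replaces the count table with billions of learned weights and looks at far more than one preceding word, but it is trained on the same freely available signal: predict the next token.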

[08:57] So you could spread it out horizontally, and you could crank. Which meant, really, we could now build the first generation of large language models. And they started to do things that no one predicted they would do. Some people were paying attention; soon everyone was. Everyone jumped on the transformer model, and this particular group of researchers over at OpenAI, the startup, said, look, there’s a formula: GPUs, transformers, big data, as much as you want from the internet.

[09:29] What if we threw money at the problem? Could we scale this thing indefinitely, by just throwing more and more money at it? And Sam Altman was very good at getting more and more money. He went and got himself a billion dollars in pocket change from Microsoft, then another ten billion. They threw money at the problem, and their bet paid off.

[09:48] They won the bet, because they essentially discovered that if you scale this thing big enough, you get an amazing AI. And so what does this AI look like today? It looks like a cold, humming building somewhere in the desert: a data center, filled with lots of compute and lots of data. Now, if you look at it at this level of abstraction, you see it’s really hard to build these things.

[10:11] So from an economics point of view, there are only a few players, maybe a dozen in the world, who can build them. But everybody can use them. That has good and bad sides. So now we’re in the era where fortunes are being made, or at least fortunes are being invested, in companies that are verticalizing this general capability into industries.

[10:31] So let’s talk about a few of these. Let’s start with software engineering. I’m a recovering software engineer. When I wrote my code, I did my best, but I never really wrote all the tests and all the checks and all the stuff that you’re supposed to do. What’s happening now is you can have a copilot that can do all the tedious tasks that you would do if you had infinite time.

[10:52] I don’t think they’re replacing programmers right now, but they’re shoring up all the tedium, and there’s a fair amount of it. So my team at LifeScore, you heard the music earlier from LifeScore, that team had a 50 percent bump in productivity overnight from using these tools. Now, there are all kinds of ways, and you probably know a lot of them, that this has been infiltrating software engineering workflows.

[11:14] The key thing is that I believe, I predict, that the biggest effect will be quality improvement. Because the software we live with has so many bugs, and that’s because we don’t do systematic functional analysis, quality assurance, and so on. And, in the realm we’re talking about this week, security analysis.

[11:31] So the next industry I want to talk about is call centers, the very giant call centers. Call centers are full of fairly unhappy people, according to what I’ve read. They have the highest churn rate of any employment category I’ve heard of. Most people don’t last more than a year. And, of course, if we’ve been on the other end of that, it doesn’t always give the service we want.

[11:50] So it’s ripe for disruption. And we’re already seeing the first wave of this, where it’s replacing the low-level, entry-level, new-employee type of work, and the senior people are still essential to the job.

[12:05] Creative services. Now, this is an area in which I live and breathe now, like this talk. I couldn’t have done the visuals without AI, just couldn’t have done it; it would have been completely different visuals. I’m so happy we had the conversation, sitting down with my designer and AI as a partner; many of the visual metaphors you see in the talk weren’t in our heads at the beginning of the project.

[12:28] In just three days, we put together all these visuals, and we probably iterated over a couple hundred. That’s the kind of thing that generative AI can do now with images. It literally gives you visual ideas that you can then follow up on. 

[12:42] And then education. 

[12:45] It’s a universal problem, and it’s universally underserved. Technology’s been battering away at this for a while, but this is the first time in history where the machine can do everything the student can do in terms of the lessons. It can read and write and perform. It can generate the lessons and lesson plans.

[13:01] But more important than any of that, it can look at how the learner is learning, see what’s missing, and recommend changes. That’s called tutoring, of course. And tutoring has been shown, if you get your own tutor, like you pay for a tutor for your kid, to improve performance by one standard deviation on average.

[13:23] And if they’re struggling, two. That applies to every kid in the world. So now we have the possibility of AI literally shifting the Gaussian curve of education across four billion people. Kind of amazing. And this goes on to personalized AI, too. So we have personal assistant applications for health care, for mental health care, and for personal development.

[13:47] They’ve always been there, but now we have a conversational environment where it can truly listen and have rich context in a conversation about the issue. So let’s say you’re working on your hypertension or your diabetes. Or maybe you’re working on wellness and health and exercise.

[14:06] Maybe you’re working on anxiety or depression. There are techniques that do work in a conversational mode, like cognitive behavioral therapy, that are now being deployed in the context of really rich, intelligent AI conversations. And talk about serving an unserved need: during COVID, it was estimated that 3 billion people had clinically serious anxiety and depression.

[14:30] It may be down to two billion today. And there aren’t enough psychiatrists to go around. So this is an area of humanity where AIs could make a really big difference. And if you keep following that line, AI helping you learn and think a little bit better, you end up with this: augmented cognition.

[14:49] That is, just like glasses, when you put them on, are a technology that helps you see better, augmented cognition is a technology that helps you think better. It doesn’t have to be glasses, of course. It’s any way in which the digital presence, the intelligence, is in the loop of your experience.

[15:08] So when you’re looking through glasses, it’s in your visual experience. But think about it: as knowledge workers, we do everything digitally now. We talk to everybody socially and professionally. Everything we consume and read starts out digital and usually stays digital. Everything. We live a digitally mediated life.

[15:25] And that means that the AI can now watch, understand, remember, and advise. And even on the input side, it can advise on attention: on what, given your goals, is worth attending to. This is going to radically change how we view our partnership with technology, and it’s going to make us all smarter.

[15:46] Now, there are risks, and we’ve heard a lot about them already this week. You know this more than most, and I think that’s one of the things that makes it a privilege to talk to you folks, because you really are part of the solution here. You know there’s a battle out there. There’s an arms race going on between the white hats and the black hats.

[16:04] It’s cyber warfare. It’s fraud, impersonation. This stuff is scary because it can disrupt everything. And guess what the technology enabling the cyber warfare is? It’s AI. And it’s going to be the technology used to fight back.

[16:20] You understand what’s going on here. AI is a technology that can read and write code, and anyone can use it, because they can just talk language at it. They can iterate it, make a million bots, let them loose. What could go wrong there? Especially worrying is social engineering, because the ability of AI to pretend to be a human is now, at least at the level of the visuals and the voice, imperceptibly different from real humans.

[16:45] So, wow, look out for phishing, here we go. 

[16:48] All right, so now let’s talk about how these applications are put together, just for a second. This is obviously the system diagram of an old-fart engineer, but it’s funny because that’s really what’s going on. You have this OpenAI or some other model in the middle, and you wrap it with all these tubes and wires and stuff, and then you have your application on top. And the tubes and wires are things like fine-tuning and retrieval-augmented generation. It’s almost like frosting on top, because the cake was really hard to make in the first place.

[17:13] And you don’t have to make the cake every time. You don’t have to make the engine, the large language model, yourself anymore. And that’s changing the economics. This is the reason the VCs were falling over themselves last year to invest in this stuff: it doesn’t cost that much to make a state-of-the-art killer AI app anymore, which used to be impossible.
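One of the "tubes and wires" mentioned above, retrieval-augmented generation, is conceptually simple: fetch documents relevant to the question, paste them into the prompt, and let the model answer from them. Here is a minimal sketch; the scoring is deliberately naive word overlap (real systems use embeddings), and the final model call is left out because it depends on whichever API you use:

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (illustrative only)."""
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Wrap the retrieved context around the user's question."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Siri shipped through Apple in 2011.",
    "Transformers were introduced by Google in 2017.",
    "GPUs accelerate the matrix math behind deep learning.",
]
prompt = build_rag_prompt("When did Siri ship?", docs)
# The prompt now carries the relevant document; it would be sent to the
# model of your choice, which answers from the supplied context.
```

This is why the applications are cheap to build: the hard part, the model in the middle, is rented, and the wrapper is ordinary software.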

[17:33] So now let’s switch to how you and I might use this technology as consumers. The key skill you need to use chatbots is prompt engineering. You may have heard that phrase. I think of it as “careful what you ask for”. The key skill of prompt engineering starts with the idea that it is not a search engine. A lot of people think it’s a search engine: here’s a question, see what ChatGPT says. Well, you can use it that way, but that’s not really what it is, and it’ll lead you wrong if you do.

[18:04] It’s a conversational agent with a memory. Its superpower is language, not search. It can call a search engine, but so can you, and you’re better at it than it is right now. How do you use this? Summarization is fantastic. There was a 300-page paper about the DeepMind results when they did AlphaFold, this amazing result where they used AI to solve a big biology problem.

[18:26] 300 pages, with all kinds of diagrams. The first thing I did was give it to ChatGPT and say, read the paper and summarize it. It did. It did a fabulous job of summarizing a 300-page paper in about a minute. And I was like, whoa, okay, that changes the game. And now, once the abstract’s there, I’m having a conversation about the words I see in the summary.

[18:47] So why is protein folding important? That kind of thing. It’s amazing, and you can get a lot from this. So let’s give it a shot. Let’s roll up our sleeves. In preparation for this talk, I got ChatGPT out, and I asked it: I’m trying to explain why GPT-4 is such a big deal, so tell me why. So, specifically, I’m prompt engineering.

[19:05] I was actually in a conversation. It was bragging about capabilities. I grabbed that and said, okay, what exact capabilities make it better? It gave me the normal consultant-speak list of 10 things. And then I noticed item 6…

[19:20] It said it can make music. And I went, hold on a second. I know that space. The fish video you saw today, that’s AI and humans making music together; that was the music you heard. Well, ChatGPT doesn’t do that. But if I didn’t know that, if I wasn’t an expert, I wouldn’t have noticed. I would have just taken it at its word.

[19:41] But just for fun, I said, let’s see what happens if I ask it to justify its answer. So I copied and pasted the string and said, okay, now tell me what evidence supports this claim. And guess what it said? Boom. There is no evidence to support this claim. I didn’t mean it. I contradict myself. Never mind.

[20:00] I mean, really now? It’s a pretty good BS artist. But it’s just like, Oh, I don’t care if I contradict myself. It’s not a big deal to me. Here you go. You want to hear this? I’ll tell you that. And that’s the way it’s built. It’s called a hallucination.

[20:17] And this is fun, too. How do you visualize a hallucination? Involving octopuses and mushrooms, apparently. There you go. The better word is confabulation, but everyone calls it a hallucination. I call it BS generation, but whatever. And the funny thing here, if you think about it, is that this was the underhanded pitch.

[20:37] This was the best use case. Hey, ChatGPT, you think they haven’t curated that query before? If it was a search engine, it would never get that wrong. But you can’t curate out the way these things work. I knew it was false, but if you didn’t know it was false, you would listen. It was articulate. It was right there in item six out of ten.

[20:55] It looks legitimate, right? So why is that? How can something so simple be gotten wrong by something so clever? And the answer is, it’s not a genius. It’s a language savant. In other words, it’s an idiot savant of sorts. It’s really good at one thing, which is language, which is a big thing.

[21:13] But the other things that make a human being intelligent aren’t there. It has no strategy for thinking. Classic AI, by the way, did have a strategy for thinking. There’s no inference engine. There’s no way to know what it’s doing. It just remembers things. And it actually has no database of facts. So it doesn’t look up whether GPT-4 does music.

[21:35] It doesn’t work that way. It has this ancestral memory. It’s just, well, he’s asking about stuff that I do, um, music. It doesn’t know if it’s right or not. It just remembers. And if the coefficients shift and you ask it tomorrow, it’ll give you a different answer. And finally, this is the worst part.

[21:50] If you say, why did you think that? It’ll tell you something, but it has no basis for telling you something. It has no epistemology. It has no ability to reflect. Or understand how evidence leads to conclusions. It just remembers how to make things sound good. Keep that in mind when using this stuff. 

[22:09] You say, well, what can go wrong there? I’ll check my facts and so on. But people are hooking this up to machinery. So suppose someone came and said, I have this new sentient being, or apparently sentient being, that can talk with you in English back and forth and do things for you. It’s read a million lifetimes of stuff.

[22:26] It must be really smart. Let’s put it in the cockpit of an airplane and let it fly. Wouldn’t that be like putting a chimp in the cockpit? It’d cause a lot of mischief.

[22:37] These things need supervision. That’s a technological requirement, not just a moral one, right? If we let this thing loose on APIs and just say, hey, “figure something out on your own!”, we’re not going to get what we want. And then, even independent of applications and engineering and products and companies, everybody in the country, everybody in the world who lives as a citizen, our leaders, our children, our teachers, our preachers, whoever, we need to understand that critical thinking is no longer an optional skill.

[23:08] It’s a required skill to use this. We can’t have people running around saying, I believe because this sort of conspiracy theory pattern makes me think these things. We have to sharpen our pencils on this.

[23:18] And you know, it didn’t go so well with social media. So now we’re in a world where the AI is amplifying the weakness of not having critical thinking.

[23:27] That’s where we’re at. So what do we do about it? One thing is, we, actually you, as the industry, can do a lot about it. I worked out this metaphor with ChatGPT and with my designer. We got there after three or four conceptual breakthroughs. We thought of it kind of like a ship.

[23:44] I was in Antarctica, rolling through these 40-foot waves. It’s like that. You’re at the helm of the ship. It’s storming out there, lightning, crazy stuff. Boats are sinking all over the place. But the ship is protecting us: we have the network, we have the systems, we’re dry.

[23:59] And it looks like craziness, but we’ve been in these waters before. I’m hoping we can generate that technology. And it really means three things. We need to protect against fraud. Now, I mean fraud in the broad sense, and it’s important. Bots that impersonate people on social media are frauds.

[24:15] Okay, it’s illegal in most domains. We have to stop that. It’s ruining the Internet, ruining our mental health. There was already a 25 million dollar fraud just last week, where an entire video was made of this fake CFO authorizing a 25 million dollar payment. Okay, we’ve got to stop that kind of thing.

[24:31] And, secondly, we need to keep our Internet, including our businesses, our governments, our civil society, safe from cyberattack. Of course, I am literally preaching to the choir here. But this is life and death for the entire world if we don’t get this right. And thirdly, and I think this one is subtle, there’s a culture of the internet that we also need to protect.

[24:55] It was no coincidence that during the mid-2010s, machine learning results came amazingly fast, and they’re still coming at hyper speed. It’s because there’s a culture of massive openness in the machine learning AI community. Everything’s shared and trusted. And that’s now being transferred to the life sciences and the material sciences, where AI is starting to have influence.

[25:15] If we can’t trust what we share on the internet anymore, who it’s coming from, whether it’s been tampered with, we will lose the edge in collective intelligence we’ve created on this planet over the last 20 years. Okay, not to be too much of a bummer, but I can go one more. Imagine this: you have this new technology.

[25:34] It’s never been so powerful before. Only a few players can create it, okay? It costs 10 million dollars to make one of these things, and it takes teams you can’t just recruit. Only a few players can do it. And it has access to everyone’s data in the world. And, in fact, it’s easily used to manipulate democratic elections.

[25:52] And, by the way, half the population of Earth is going to be in an election this year, 2024. Imagine what happens if that goes wrong. What this is called is an Orwellian nightmare. It’s called Big Brother. And we don’t want that to happen. It doesn’t have to be. If it went to Big Brother, it would be because of the following things.

[26:13] If we only care about markets and populations and voter constituencies, and we don’t care about individuals, we’ll make an AI that optimizes for market extraction and population manipulation. If we think about individuals, we’ll optimize for something else.

[26:32] I have a metaphor for this. Let’s switch from Big Brother to Big Mother, because Big Mother knows your human weaknesses, but instead of taking advantage of you, it nurtures you. It nudges you towards health. Instead of AI that misinforms you, it’s AI that teaches you, that helps you think more clearly, not taking advantage of your weaknesses.

[26:56] And finally, it protects you against the forces of people using AI against you. I think we should build personalized AI, and I think we should build it in the metaphor, in the mindset, of Big Mother. Alright, let’s conclude. Last slide. I’m actually super optimistic. I’m very optimistic, even though I’m scared to death about what could go wrong.

[27:17] And, as long as we maintain a free society and a healthy internet, we have this other upside that the internet gives us. Let’s not forget: we can deliver all of these benefits of AI that you’ve heard about to every person on the planet. Well, there’s a few billion left, but you guys are busy giving them internet access, right?

[27:38] But we can give it to everybody in the world for almost free. It’s what we called, at Siri, “big think, small screen”. You can put big AI in the cloud and deliver it to everybody. And that means we can be part of the solution, right now, this year, in helping build a world where AI is used to protect and improve all of our lives.

[27:59] Thank you so much for your attention.

Q&A with Oliver Tuszik, President EMEA at Cisco

[28:07] Oliver: Wonderful, wonderful. By the way, when you were describing AI, I was writing this down: no thinking, no facts, no idea what’s true, making facts up. That sounds to me like a young teenager. So before we get to our discussion, let me just ask one question, because a lot of people asked me to ask you: who came up with the name Siri?

[28:30] Tom: Well, it was a designed name. We went through about a hundred names. We tested them all, for ethnicity, for gender, all these things, and it won. But also, the guy who came up with it was responding to one of our prompts. We said, if you were to name your next child, what would you name it?

[28:47] And he’s married to a Norwegian woman, and it’s a Norwegian name. It means “beautiful woman who leads you to victory”.

[28:53] Oliver: Wonderful. First of all, it was great to see you bring up the full story behind it. But when you look at the story, it looks like it got exponential in the last two years, maybe?

[29:04] And from what I’m hearing in all the other discussions, you predict it will get even faster in the future. The statement that it’s more disruptive than electricity, is that also your statement? (Tom: No, that’s Sundar Pichai.) Yeah, but would you say the same? It depends how you write history.

[29:20] Tom: Sure. What he’s talking about is AGI. So if we hit AGI, Artificial General Intelligence, if we hit it and if the concept still means what people think it means today, it will be fundamentally different, because we’re gonna be basically in the midst of a superintelligence that can do everything we can do, better and faster.

[29:39] And it’s gonna change everything. I agree. Electricity changed the nature of human labor. So did fire.

[29:45] Oliver: So this is why you’re also saying we should start with more governance, more critical thinking. You brought up critical thinking in our discussions pretty often. Tell us a bit more about how we can get to this critical thinking.

[29:58] Tom: Yeah, one simple thing, we talked about this before. Say you're a manager, and you're really good at managing people.

[30:03] What does that mean? That's not a skill you're born with, right? You learn to delegate, supervise, allocate tasks to people who have the right skills, monitor them, and then bring the results back together. That skill, combined with epistemology, with critical thinking skills, is what we're going to have to have.

[30:19] We're going to have to figure out how to have an army, or a small team, of AIs report to us. And I think it's a skill that every knowledge worker can start learning right now.

[30:30] Oliver: But is it something that most of our young generation might not be able to adapt to, or are they better at adapting?

[30:36] Tom: Well, they're better at what they think is multitasking, but really I think critical thinking is the critical dividing line. If we can start teaching critical thinking, asking why this thing is giving me this answer, and whether it's the right answer, then all the rest of it follows.

[30:50] Once you know that, you can judge the work of your reports and that kind of thing. So yeah, I hope they do. I think it's our job to lead by example as role models.

[30:59] Oliver: You talk a lot about controlling it and finding a way to govern it. The Europeans especially love to bring up new governance modalities, policies, and so on.

[31:11] The recent AI Act, is this something great or something rather stupid?

[31:17] Tom: Oh no, I'm actually bullish on the European regulation. Not for its own sake, but we have a problem here: if we just let large language models out without the right protections, they're going to expose the ability to do things like biological weaponry and cyber attacks to a much larger collection of amateurs, many of whom are deranged. That's one of the many risks, along with the ability to influence elections and things like this. So the European governments have been thinking about that for a while. And I do a little bit of work with World Economic Forum groups, and they've been influencing that legislation. I actually saw it.

[31:50] It's rare to see legislators actually respond to expert influence, and they're doing it thoughtfully. We need more of that. We need the Americans to pay attention. And they're listening, but we need them to act, right? And then we have to think worldwide, actually. The Europeans are pretty good at multi-country governance.

[32:07] We can't have different AI governance for every country. We have to have agreement worldwide.

[32:14] Oliver: Now, when you look around, these are mainly knowledge workers, white collar. Where the last industrial revolution always impacted blue-collar workers, this one will impact this group. But should we all be afraid?

[32:29] Tom: I don't think so. Steam power and then electricity led to the Industrial Revolution. It replaced muscle power with machine power. And I think we're going to replace a kind of muscle that we use in our jobs every day. We do a lot of busywork in a corporation that we don't need to do.

[32:45] But also, we cut corners, because we don't have time. An executive like you might have a big staff of people to delegate to, but most of us don't have that luxury. And even if you did, there are only so many of them. But now with AI, you can have them take care of communications and manage things that need to be done.

[32:58] Make sure things don't fall through the cracks, all kinds of little things that knowledge workers do. That's how I view it: as an augmentation of the human. And as long as we keep our governance, as long as we can be clear-headed about who is leading us, including governance in corporations, as long as boards of directors are humans, I think we're going to have a great time with augmentation.

[33:19] Now, if I were a low-level programmer or a low-level call center worker, I would start learning quickly how to become a high-level one using AI. That's the transition. That's what always happens when we have a disruption.

[33:39] Oliver: You said this in our discussion yesterday: if you're a great programmer, it makes you even better. If you're not a good one, you might lose your job.

[33:41] Tom: Yeah, it's time to become a good one. Although you can become a good one with the technology, actually.

[33:47] Oliver: And you started with Siri, so you've been using these kinds of engines for a long time now. And you already said that some or all of the graphical elements were produced with the help of, or by, an AI. Was the whole script written by the AI?

[34:03] The whole script? Oh, no, no. That's a funny thing. Yeah, the script. Did you just type in, "We need a great script"? Wouldn't that be great, with Cisco?

[34:10] Tom: No, actually, I like writing words. I like writing speeches; it's what I do. So I didn't want to let the AI have that.

[34:16] And that's not something I would delegate to anyone else. I hire a designer, because I'm not as good as they are, but I write my own stuff. That's the way it is: everyone has their thing. The designer isn't going to write his own words; he might have an AI help him with that. So it's part of this delegation and supervision issue.

[34:33] I think we figure out what we love to do, what we're good at and creative with, and work from there. Thinking of visual metaphors and how they tie to concepts, that's the luxury. That's the fun part of this kind of stuff. When you're working with your comms team or marketing, that's the good part, right?

[34:49] You want to do that and let the AIs do the boring stuff. 

[34:53] Oliver: So you brought up the example of ChatGPT making up that it can do music. Interestingly enough, you have a company, or you work with a company, that is creating AI-based music, and you call it adaptive music. Tell us a bit more about the idea behind it.

[35:10] Tom: Yeah, it's LifeScore. The idea is that when you have music that can be composed on the fly, once technology is involved, you can remix and reinvent an idea, but you can also do it while you're, say, driving or running.

[35:25] And that's what LifeScore does. One example is Bentley, the fancy cars. Their creative team hired us so that as you're driving to work, or being driven to work, in a Bentley, you get a nice soundtrack for the journey, like a movie that you're in. And it works with games and with exercise and all these other things.

[35:44] The AI is generating the music on the fly, but the AI is not composing the raw material. The reason Bentley liked this is that we're like the people who make the nice leather, not the people who make the cheap plastic. We have human composers who write the bits, who figure out how to make goosebumps rise when you hear it.

[36:03] And those are not automatable right now. The machine just takes those little composed pieces and assembles them to make the music.

[36:11] Oliver: So, will it calm me down when I'm driving 200 kilometers an hour and play some soft music, or will it play hard rock, making me...

[36:18] Tom: Both, actually.

[36:18] It depends on what you say: am I in sporty mode or am I in calm mode? And I think they did some studies. It hasn't been published, but there was a calm-down mode, and they found that people were actually driving slower and just chilling out.

[36:30] Oliver: Okay. Let’s get back to the risk side.

[36:34] We had a big customer discussion yesterday, and the conclusion at the end was that AI makes it much easier and cheaper for the bad guys. So are we, in the end, fueling the bad guys and making it easier to attack us? Is it something we can prevent, or are you asking Jeetu to fix it?

[36:55] Tom: Jeetu can fix it. That's right. Delegate. Yeah, we've seen this before. Arms races work this way. Sometimes it's much easier to destroy than to protect; it's always been that way. But we're at this point now where we've got open-source models of a certain caliber out in the wild, and you can't take them back.

[37:13] But the governance might say, let's not make the one that's ten times bigger willingly free for everyone to play with, because you can never put the genie back once you let it out. So that's one thing we as a society can do. But also, I think we're going to see that these are mostly amateurs trying to attack the world.

[37:29] And you are mostly professionals. So I think you can harness systematic shields. You've heard some of these things in the breakouts, I'm sure: ways of using AI to identify intrusion patterns and automate the protection response. I think there's room for a professional response that can outpace them. You may have to outspend them in hardware and software, but you can outpace them in the end.

[37:56] Oliver: There's still the discussion: is this hype? Are we even before the hype, or are we coming down to earth already? The interesting thing is that we recently did a survey with our customers, and more or less everybody said they are investing time to understand it. But only about 14% have a plan right now. So are we really just at the beginning of these companies utilizing it, or are they all missing something and should we be far ahead?

[38:23] Tom: This is a technological wave, and I think this wave is moving faster than most waves in technology. It's only been a year or two since this stuff came to public awareness. There's this first generation of apps: the wrappers, the prompt engineering layers, that kind of stuff.

[38:39] They're coming out now. They're not that hard to make. We should do more experiments in that space. But the idea isn't that you throw the AI at the problem and see what happens. You own the problem, and you engineer a solution. And for some parts of the problem, like if I had a problem summarizing text, I would definitely hire the AI to do that.

[38:58] If I have a problem with something else, like managing a bunch of people to solve a problem, I wouldn't hire the AI for that. Or in healthcare: you can't have AI doing diagnosis and treatment right now, but you can have it administering known therapies that have been proven.

[39:11] So that's, I think, where we're at now. We should be somewhat careful and thoughtful in how we deploy it. But it's also true that we should all appreciate how powerful it is. It's low-hanging fruit for us to use right now.

[39:25] Oliver: There were also a couple of comments yesterday that all these large language models behave rather like an older white man, and you were already talking about a "big mama." Are we creating something that will only be typical of a certain part of the people in the world, and how do you believe we can get to a "big mama"?

[39:49] Tom: Bias was a big problem, particularly in the labeled-data era of deep learning. You shouldn't be asking ChatGPT things for which its gender bias or racial bias matters. You shouldn't ask it, what do people think about X?

[40:04] Or, what's the characteristic face of a foo person? That's using it like a search engine. Instead, its core competence is in reading lots of language. And what you want is for it to be curated on data that's coherent, rational language, not trolls on the internet but sensible language, whether it's white or green or whatever. It doesn't matter who wrote it.

[40:28] And I think that's the key differentiator. Again, there are all kinds of ways; I'm not minimizing the bias-in-AI problem. Labeled supervised learning is still AI, and we're still deploying a lot of it. And when you're in that world, the training data biases are very significantly reflected in the output.

[40:44] But in this new world of LLMs, it's a little bit different.

[40:48] Oliver: You've been talking a lot about responsible usage, and as you might know, Cisco built a framework for responsible AI, and we invested a lot of time and money to find a way to adopt and utilize this tool in a responsible way. But what we really realized from a lot of our customers is that they're struggling to build up a responsible approach. Because you put such a big focus on this, is there any advice or guidance you could give on how to do it, while we're still managing big companies and trying to be profitable?

[41:17] Tom: Again, there are sort of two AIs that you can manage. If you're managing supervised learning AI, you're responsible for the optimization criteria.

[41:26] These things have an objective function; they're optimizing for something. So if you're the head of Facebook, you optimize for attention, for advertising dollars. Everybody has their optimization. And that had an unintended consequence, which is not so good. So you own it, from the top of the company all the way down to the engineers.

[41:41] And those objectives should be aligned with corporate objectives. When you're dealing with this new kind of AI, it's more in the verticalization of it where it becomes responsible or irresponsible. So, for example: "Hey, here's a tool, hook your API up to it and see what happens."

[41:57] That’s irresponsible. 

[41:59] Oliver: I need to come to my last question, sorry. We talked a lot about the risks, but you were always trying to keep a positive momentum. Now, if we do everything right and we utilize it in a responsible way for everybody, what would the future look like?

[42:14] Tom: Yeah, I am genuinely optimistic about that future.

[42:18] Again, I'm more worried about free society and the internet, but if we survive 2024, we have a future where the greatest technology ever created is available to everybody. That is fabulous. And it doesn't cost the climate, it doesn't cost human welfare, it isn't a zero-sum game.

[42:35] It's all upside. So that's what makes me happy.

Oliver: Perfect. Thanks a lot. This was great. Big applause.

Tom: Yeah, thank you. Thanks very much.