What AI and Big Data Can Do for Humanity
In this keynote, Tom shares his insights into the power and potential of big data and AI, with examples of how things can go wrong and of the potential for doing things right. He reveals the role of AI within social media platforms and details how its misuse has led to over-optimization for profit and unintended consequences for humanity. To counter this concerning trend, Tom offers examples of AI that is optimized for human mental health and well-being. The keynote is followed by an interview with John Markoff at the virtual TechFest conference.
Hi, today I’m going to offer you some insights into big data and AI: the potential of what can go right, and what can happen when things go wrong. Let’s start with a fact that won’t surprise you, but which is worth putting into perspective: a large portion of our global population spends a substantial amount of time online in systems designed to control their behavior. I’m not talking about an evil AI that has taken over the world. I’m talking about Facebook, YouTube, Twitter, Snapchat, and other sites that feed on human attention.
And they are successful. Altogether, over 2.7 billion people use Facebook and another 2 billion use YouTube. These platforms are designed to keep you there: once people start a YouTube session on their mobile phone, they end up staying in that session for over an hour on average. The feed is addictive. It’s common for people to check social media over 100 times a day.
So just to put this into perspective: these platforms now have more users than the two largest religions on Earth have followers, and neither religion requires praying 100 times a day. Now that is true devotion. How did the titans of the attention economy become so successful? These companies didn’t win because of brilliant design or clever psychology or con artistry. These systems can manipulate human behavior at this scale because of AI operating on big data. Every day, these sites run psychology experiments on their subjects that are orders of magnitude larger than anything ever published in a scientific journal. They gather every scrap of information about how people interact with these systems, aggregated over billions of people every day. They feed this data into the most powerful AI technology, deployed at an unprecedented scale. And what do they do with it? They learn from all of us how to get us all to play along.
Now, this is no secret or conspiracy theory. Of the billion hours of YouTube watched each day, more than two-thirds, about 70%, are driven by the AI recommendation algorithm. This is an intended consequence of AI, a point of pride for the company: the AI is driving usage. But let’s look at what kind of content these AI-driven recommendation engines offer. Say you do a Google search for the factual question, “Is the earth flat?”. If you ask Google, about 20% of the resulting pages claim the earth is flat. That’s too bad; whether it’s bots or something else doesn’t matter. Because you asked the question yourself, your critical mind is still engaged, and you’re able to make sense of the answers. However, if you check the content on this question recommended by the YouTube recommendation engine, which gives you material you didn’t ask for, 90% of the videos on whether the earth is flat say it’s flat.
If you’re curious about more examples like this, check out the site algotransparency.org.
And did you know that dumb AI likes to recommend false news? Why would that be? In an MIT study of Twitter published in Science, researchers found that news spreads much faster and farther if it’s false than if it’s true. Now, the AI models don’t know the difference between true and false. They only know that people seem to like certain kinds of content, and the model watches and factors that into its recommendations. So basically, what it’s doing is amplifying our human weakness for gossip.
Now, in 2017, the head of Facebook set the corporate objective of getting people to join Facebook groups to “bring the world closer together.” That’s a good idea. He told his team to go do it with AI. And it worked, maybe a little too well. The recommendation engine did its thing: when someone joined a new Facebook group, the engine recommended other groups to join. But the engine didn’t always recommend groups you would actually want to join. For example, if you joined a Facebook group for new mothers, guess what the engine recommended? Anti-vaccine conspiracy groups. And once you’re in one of those groups, imagine what other groups get recommended next.
Now, why blame AI? The Internet is full of misinformation and conspiracy theories; we all know this. But most people aren’t aware that AI is contributing to the problem by recommending this content to users, and real harm is done. This is not free speech or the wisdom of crowds. This is deadly misinformation at scale. In 2019, the World Health Organization declared that anti-vaccine misinformation, much of it driven by recommendation engines, is now a top-10 global health threat, up there with malaria. Cases of measles, a disease with an effective vaccine, have surged 30%. This is crazy. And this year, we have watched in horror as misinformation spread through algorithmically governed social media has led thousands of people to die unnecessarily during the global pandemic.
The harms done by algorithm-driven social media are described powerfully in the documentary The Social Dilemma, which is now available on Netflix and doing quite well. It’s about the work of the Center for Humane Technology. As a human being, I highly recommend it. The Center is advised by tech industry insiders like me, and it also tries to help change the industry from within. For example, after the publicity about anti-vaccine groups, Facebook patched the problem, and it has also done a lot to help with mental health issues. But this is a systemic problem, not an isolated incident. It’s a consequence of building systems that optimize an objective without aligning that objective with human impact. The problem is hard to solve because the systems are not governed by humans; they’re governed by machines running AI software at scales far beyond human capacity.
What’s happened is a Frankenstein story, a story of out-of-control technology and unintended consequences. All that data, fed into all those AI models, has produced a kind of superintelligence: an intelligence at controlling human attention. It has won the contest for human attention; we don’t have a chance. And in an adversarial game exploited by propagandists and political animals, AI that recommends false and potentially dangerous content is a threat to the epistemological contract on which our society is built.
Okay, is your head ready to explode? Don’t worry, it won’t, because this is not the superintelligence you hear about in Terminator or Skynet. It’s not artificial general intelligence, the kind of AI that could fool all of us all the time. The AI behind these recommendation systems is today’s narrow AI, applied to insane amounts of data about how people behave online. This AI has no idea what is true or false, helpful or harmful, or just plain crazy. It’s brilliant, but it’s an empty-headed idiot savant. All this narrow AI knows is what the data tells it will work to get people to do things that maximize the business objective. This is an AI expert at manipulating human attention, and nothing else.
So to really understand what’s going on, we need to get a little more technical and understand the key factor in the way AI works: the objective function. At the core of any AI system is the model, which takes an input like an image, performs a computation, and determines what class the input is in; that is, it classifies the image. The objective function sits alongside the model and evaluates how well the model is doing its job. The objective function must be a metric that the computer can compute by looking at data from observation or simulation.
Now, the objective function itself is core to the machine learning process. These AI algorithms you hear about? They don’t do anything unless they’re trained, and they’re trained with an objective function. Take that AI system for classifying photos: when it’s presented with a photo of a cat and gets the answer right, the objective function says yes; when it gets it wrong, the objective function says no. Repeat this over millions of examples of cats and other things, and the machine learning algorithm produces a model that’s good at classifying photos.
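To make that loop concrete, here is a minimal sketch in Python of an objective function driving training. Everything in it is illustrative: the two made-up features, the toy examples, and the perceptron-style weight update stand in for a real image classifier, but the shape of the feedback loop is the same: the objective says right or wrong, and the learner adjusts.

```python
def objective(prediction, label):
    """The objective function: 1 if the model got this example right, else 0."""
    return 1 if prediction == label else 0

def predict(weights, features):
    """A tiny linear model: weighted sum of features, thresholded at zero."""
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else 0

def train(examples, epochs=20, lr=0.1):
    """Perceptron-style training: nudge weights whenever the objective says 'wrong'."""
    weights = [0.0, 0.0]
    for _ in range(epochs):
        for features, label in examples:
            if objective(predict(weights, features), label) == 0:
                sign = 1 if label == 1 else -1
                weights = [w + lr * sign * f for w, f in zip(weights, features)]
    return weights

# Toy "photos" described by two invented features: (whisker-ness, wheel-ness).
# Label 1 = cat, 0 = not a cat.
examples = [((1.0, 0.0), 1), ((0.9, 0.1), 1), ((0.0, 1.0), 0), ((0.1, 0.9), 0)]
weights = train(examples)
accuracy = sum(objective(predict(weights, f), y) for f, y in examples) / len(examples)
```

On this toy data the learner reaches perfect accuracy within a few epochs; the point is only that the model never sees "cat" as a concept, just the objective function’s yes/no signal.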
Now, in AI systems that play games like chess or Go, the role of the AI model is to evaluate how likely a particular move is to lead to victory, so the program can choose the best move at each turn. The objective function evaluates models by how well they predict the value of moves. The only factor considered in the objective function is whether the move is on a winning path. This is typically computed by having the AI play the game against itself billions of times. After accumulating more experience than any human could in a lifetime, it has a good sense of which moves are winning moves, and it can beat the pants off any human.
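Once the value model exists, the move-selection side is almost trivial. A minimal sketch, with made-up move names and win probabilities standing in for a network trained by billions of self-play games:

```python
# Hypothetical value estimates for three legal moves; in a real system these
# numbers would come from a model trained through massive self-play.
estimated_win_prob = {"advance pawn": 0.31, "take knight": 0.77, "castle": 0.52}

def choose_move(legal_moves, value_model):
    """Pick the move the value model rates most likely to win."""
    return max(legal_moves, key=value_model)

best = choose_move(list(estimated_win_prob), estimated_win_prob.get)
```

All the intelligence lives in the value estimates; the choosing itself is just "take the maximum."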
Now, what about the attention economy? On a social media platform, the AI system is effectively playing a competitive game against its human users, all the time, to get the behavior it’s designed to maximize. The goal of the AI model is to determine, for each piece of content it could put in front of the user, which one will lead to more engagement, kind of like choosing moves in a game.
The machine learning that drives the recommendation engine is governed by an objective function based on three metrics: maximize human time on site, maximize clicks on things that might be ads, and maximize actions that get other humans involved, recruiting more people into the game. Now, with billions of people interacting with social media sites every day, the AI system has enough data to win the game with overwhelming force.
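A hypothetical sketch of what an engagement-style objective built from those three metrics could look like. The metric names and weights are illustrative assumptions, not any platform’s real formula:

```python
def engagement_objective(session, w_time=1.0, w_clicks=2.0, w_invites=5.0):
    """Score a user session on the three engagement metrics.
    Note what is absent: nothing about truth, harm, or wellbeing."""
    return (w_time * session["minutes_on_site"]
            + w_clicks * session["ad_clicks"]
            + w_invites * session["friends_recruited"])

# Two illustrative sessions: the second scores far higher by this objective,
# regardless of what content produced the engagement.
quiet = {"minutes_on_site": 10, "ad_clicks": 0, "friends_recruited": 0}
hooked = {"minutes_on_site": 90, "ad_clicks": 4, "friends_recruited": 2}
```

Whatever content maximizes this number wins, whether it is a cooking video or a conspiracy theory.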
The problem is, the human is on the wrong side of the equation. Nowhere in the objective functions of social media are the hard-to-quantify soft metrics on the impact of this behavior on critical thinking, mental health, and public discourse. As a result, we get the unintended consequences of automation.
Now, the AI engineers working on these systems don’t program them to recommend dangerous information. Of course not. These engineers are just doing their job, which is to make models that achieve the quantitative objectives established by their employers. What can we do about it? Can we fix the objective function? Yes, but it’s not easy. Remember, the objective function must be a metric that the computer can measure and compute. You can’t just tell the AI to “do the right thing for humanity.” Consider Asimov’s laws of robotics, where the first law is to do no harm to humans. Today’s idiot savant AI doesn’t know what that sentence means, and it can’t know.
This is a problem worth solving, but it’s not going to be easy, because human impact is much harder to measure than page views and clicks. How do you teach the machine that the content it is recommending will cause harm to humans, or lead them to harm others?
As long as the objective functions are blind to the content, as long as they say nothing about what’s actually in the material they’re recommending, the problems will remain.
Pretty scary, huh? Well, I think there’s an alternative. What if we put AI on the human team? This is what I call humanistic AI: AI designed to empower humans rather than compete with them. The key is to build human benefits into the objective function from the beginning. My strategy has been to look for applications that augment or collaborate with humans, and to define the objective function for those applications as the joint performance of the human-machine pair on a task that humans are struggling to achieve.
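Put next to the engagement objective, the humanistic version might add a wellbeing term so that harm drags the score down. Again, a hypothetical sketch: the metric names and weights are invented for illustration, and measuring "wellbeing_delta" is of course the hard part the talk describes.

```python
def humanistic_objective(session, w_task=1.0, w_wellbeing=3.0):
    """Joint human+machine score: reward success on the task, but weight
    the measured change in the user's wellbeing heavily, so that harm
    can outweigh task success."""
    return (w_task * session["task_success"]
            + w_wellbeing * session["wellbeing_delta"])

# Two illustrative outcomes: slightly lower task success with improved
# wellbeing beats higher task success achieved at the user's expense.
helpful = {"task_success": 0.8, "wellbeing_delta": 0.5}
harmful = {"task_success": 0.9, "wellbeing_delta": -0.6}
```

Under this objective, the "harmful" session scores negative even though it performed the task slightly better, which is exactly the alignment the plain engagement objective lacks.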
So let’s take an example: modern video games. What’s their objective function today? Maximize game time, just like social media. They present game stimuli to the user, the way Facebook puts things in your feed, and they’re optimized to present just the right amount and level of stimuli that the model predicts will produce the dopamine rush that reinforces addictive behavior.
But you know what doesn’t make sense to me? I’m not a gamer, but the people I know who play games do it to kill time, not to kill people. With the power of the fully immersive interfaces being developed right now, game designers no longer need to rely on the dopamine charge from violence to be successful. We can use game dynamics, which are powerful psychological motivators, to help players have empathy for other people, and maybe consider situations outside the filter bubbles created by today’s online world.
What if we changed the game so that AI is on our side? We can build an objective function that measures positive human behavior. A game company can create AI models that predict not only whether a given technique increases gameplay, but also whether it encourages behaviors that improve the user’s wellbeing and positive social interactions. And that’s exactly what a company called TRULUV.ai is doing. Instead of optimizing for dopamine-fueled addiction, their game-like experience offers a nurturing relationship with an AI character that optimizes for caring and compassion.
You might think this is crazy, but over 2 million people have already downloaded the early experiment, which is called #SelfCare. And it’s showing a new way for interface experiences to influence human behavior in a positive direction.
Now, in addition to helping us learn new behaviors, AI can help us understand what’s going on in our own minds and bodies. And this has huge potential.
Take, for example, the problem of serious mental illness. Millions suffer from conditions such as bipolar disorder and psychosis. And mental healthcare professionals are not very good at predicting when someone will suffer an episode of one of these conditions, even though early intervention makes a big difference in saving lives and restoring function.
What if we could deploy AI models that predict, from data you already have, when someone will need to go in for care? A company called Mindstrong has developed AI models that do just that. These models are based on what are called digital brain biomarkers, which measure human cognition and mood. It turns out that the human-computer interactions required to use a smartphone put a cognitive load on the person, providing a sensitive window into low-level cognitive circuit functions.
Now, the system gathers data on how you tap, scroll, and use the keyboard, and a big neural net uses that data to detect disruptions in these cognitive circuits. From changes in this behavior, the model can predict clinically relevant disruptions to cognitive control and working memory.
For example, in one real-world case, a patient under care for bipolar disorder and psychosis seemed to be doing okay, but her brain biomarkers indicated that something was wrong. She had some minor sleep issues but didn’t sense that anything was really going on. The biomarkers, however, detected something. Her healthcare team was alerted and discussed the situation with her, and she checked herself into the hospital for care and treatment. To her great relief, she was able to restore her mental health. The AI models had predicted a breakdown, which did occur during her hospital stay, where she could get treatment, prevent a crisis, and restore her health. This is a great example of a collaboration between humans and AI, each doing what they do best.
This objective function for mental healthcare is going after a new skill: a superhuman talent for predicting internal mental conditions inside a human brain by watching the person’s daily activity on their phone.
Another case of predictive AI addresses an issue that affects every human on the planet right now: knowing whether you are infected with COVID.
The company is called migraine.ai because they make an app for migraine sufferers that, much like Mindstrong, predicts from the data in your phone whether you’re going to have a migraine headache. It relies on selfie videos: you take a selfie video, and the app can read a number of vital signs and other information from it. That, together with a few questions it asks you, can predict migraines better than humans can, which turns out to be key for effective treatment.
When the pandemic hit, migraine.ai was able to apply their technology for gathering vital signs and digital biomarkers to create a COVID screening app. It can detect vitals like heart rate, heart rate variability, and respiration, as well as the kind of cough that is characteristic of COVID, which actually has a distinctive sound, all from a selfie video. The goal is to be able to predict whether you have COVID better than you can today. And as we know, a lot of cases are presymptomatic or asymptomatic.
These are only a few points of light in the galaxy of humanistic AI applications. How might we use this perspective to turn things around from a world where AI is used against humans to a world where AI is inherently aligned with humanity? I think the challenge is to change the way we define the objectives for AI technology. And we do that by redefining the objective functions we use to train and evaluate our models.
To conclude, let me ask you to ask yourself: if you’re an investor choosing AI companies, an entrepreneur making products, an engineer working on an AI project, or a manager overseeing such a project in a company, what would you want to put in your objective function? What can you insist should be measured, even if it’s hard? And what goals should be rewarded?
And for all of us, as members of our society and stewards of our planet, what externalities should we demand be counted in the objective functions that drive the AI systems that create wealth and prosperity? Thanks very much for your attention.