Tom Gruber at AI.x 2019

What Can AI Do for Humanity?

Keynote | AI.X Conference, Seoul, South Korea | June 25, 2019

In this talk to a gathering of over 1,000 AI practitioners, Tom showed how the misuse of AI by social media has led to unprecedented addiction and harm to society. He gave a technical analysis of how the technology is involved and how it might be deployed differently to avoid unintended consequences. He then described several new areas in which Humanistic AI applications, applying AI with different objectives, can produce intended consequences for human benefit, including health care, mental health care, augmentative communication, and memory enhancement. He has given versions of this talk to business leaders who are interested in double and triple bottom line governance and forging a new contract between technology and society.


Today, I’m going to talk about a question, “What can AI do for humanity?” And we’re going to look at a couple of ways it might be done. One, maybe not the right way, and then maybe the way I recommend. So, kind of a dark side and a light side. So, let’s start out with two views of how we could look at how AI can help humanity.

One is to make the smartest machines we can and compare them against humans, and when they get really good we apply them to human applications. The other way is to think from the beginning about the humanistic need, or the human-centric need, and formulate the AI around that goal.

So, let’s start with the machine-centric perspective. We’ve all seen this lovely utopian swoop of progress for AI, and it always ends in the sparkly future of general intelligence, which is either your ticket to retirement or your biggest nightmare, depending on who you ask.


I think of the endpoint of general artificial intelligence as a distraction today. Because long before we get anywhere close to artificial general intelligence, which is smarter than most people at most things, we have an intelligence, an AI right now, which is taking advantage of us at our weakest, and it already defeats us. And that AI is all about something you might call behavior control: persuasion and control of human behavior.

Now, what am I talking about? Imagine a newspaper headline that said this, “AI Outwits Billions of People!”


Well, we already have this. A large proportion of humanity today spends a substantial amount of time online, on systems whose purpose is to control our behavior. And I’m not talking about an evil AI genius who’s taking over, I’m talking about the attention economy, social media: Facebook, Snapchat, Google, YouTube and so on. The systems on which we all spend a lot of time. But the key attribute they have is, they feed on human attention. They are the Titans of the attention economy. And the attention economy, in short, is an exchange of attention for money; it trades your attention for money.

Now, these systems are very successful. They’re successful at getting our attention. 70% of American teens use social media multiple times a day, and it’s probably true in most countries around the world. A quarter of U.S. adults say they’re constantly online, when asked, and Millennials check their phones 150 times a day.

Now, if you’re a Millennial, sorry for being so surprised by that, but that seems a little bit extreme to me.


But this number really blows me away: there are 2.2 billion people on Facebook and 1.9 billion people on YouTube. Now just to put that in perspective, that’s more than the two major religions on Earth, and those don’t require you to pray 150 times a day. This is an amazingly big change in human behavior. It’s not only that people are participating; the time being spent is unprecedented in history.

Now, have you ever had someone send you a link, “Hey, check out this YouTube video”? You watch the little three-minute video, and then 60 minutes later you wake up from a trance and go, “Where did my hour go?” It turns out that statistically, that’s a really frequent event. In fact, the average time spent on a YouTube session on mobile is an hour. That’s not normal human behavior.


So, how did the attention economy become so successful? Were the founders brilliant designers? Were they amazing psychologists? Were they con artists? Nope. It was AI. They got their ability to manipulate human attention at scale from AI. Because every single day these sites run, essentially, psychology experiments on a number of subjects orders of magnitude greater than any published psychology research. And they gather every scrap of information about how people interact with the systems. Many of you know this. Even down to the milliseconds you spend watching a page or scrolling.


They feed this data from billions of users, billions of hours a day, into a pretty big database and they run some pretty good AI on it. And that teaches the AI how to get us to play along. Now, this is no secret or conspiracy theory. Every day, a billion hours of human time is spent watching YouTube, and more than two-thirds of it was driven by AI recommendation algorithms. This is not a secret; it’s a bragging point of the company. They’re very proud of this.

Now, those are the intended consequences. What are the unintended consequences? Well, as Tristan Harris of the Center for Humane Technology says, “While we’ve been upgrading our machines, we’ve been downgrading humans.” Derived empirically, these AI models have discovered techniques that exploit our human need for approval, our vulnerability to addiction, prejudice, tribalism, irrational biases of all sorts. Now, these are not the goals; the models were not programmed to do this. They empirically discovered these things about how human beings work, and those techniques got put into the sites because the metrics guided them there.


And this causes some potentially serious side effects. These unintended consequences include impaired critical thinking, addiction and obsession, serious mental illness. And essentially, as we all know, it’s rewarding outrage over civil dialogue. Now let’s just dig into these, because these are bold claims. Let’s start with impaired critical thinking.

Now, you know, there’s a lot of anti-vaccine misinformation in the world. And you might think, “No big deal.” It’s a big deal. The World Health Organization has labeled vaccine hesitancy, driven by anti-vaccine misinformation and conspiracy theories, a global health threat. One of the top 10, up there with measles and malaria. This is a health threat. But it’s a meme, a misinformation problem.


Okay. You might say, “Look, why blame social media, or AI?” I mean, the Internet’s full of misinformation, conspiracy theories. We all know this. But most people aren’t aware of the fact that AI is contributing to the problem by recommending this content to users.

So, let’s give an example. If you do a Google search for pages on the question of whether the Earth is flat, how many do you think come up flat versus round? Well, it turns out about 20% of the internet pages on the question say the Earth is flat. Now, I don’t know how they got that way. Maybe they’re Russian bots, I don’t know. But it doesn’t even matter, because when you do a Google search, you’re doing research. You’re asking the questions and you’re powering through the results. You’re doing critical thinking: “Do I believe this or not?”


However, if you check the content on this question that has been recommended by the YouTube recommendation engine, 90% says the Earth is flat. So, the corpus that is put in front of humanity for two-thirds of a billion hours a day is telling us that the Earth is flat. Interesting.

Now, these AI recommendation engines have no idea what is true or false, harmful or helpful, or just plain crazy. It’s not like the machine goes, “Oh, I’m gonna trick humans and tell them some stupid things.” No. They’re not artificial general intelligence. They are today’s narrow AI, applied to an insane amount of data about how humans work. And all they know is what works to drive the metrics, to maximize their objective function. These AI models are experts at one thing, manipulating human attention, and they know nothing of anything else.


Now, you could argue, “Look, we’re all adults. The Internet’s a crazy place, we’re all freely participating in this thing.” Well, hold on a sec. We didn’t evolve to defend ourselves against these artificial social constructs, these environments. They are effectively replacing healthy social life with an obsessive compulsion for likes, and follows, and shares and so on.

Okay, we also have legalized gambling in this world. But this is particularly hard on the humans whose brains have not fully developed to resist the stimuli. These are addictive stimuli, and adolescent brains do not have the prefrontal cortex development required to resist them. That doesn’t happen until the early 20s. So, this culture of obsession, this culture of girls comparing themselves with unrealistic models, just might be contributing to a very real crisis in mental health among teenagers, and especially girls.


Now, in a very large study of American adolescents since 2010, half a million Americans, that’s a big study, there was a massive increase in depressive symptoms, suicide rates and suicide-related outcomes, especially among girls, in just five years. During the rise of social media on mobile, through 2016, suicide rates rose 70% and serious depression rose 58%. The authors looked at correlations between this and screen time, and other uses of time, like exercise and going out with friends, and they found that it correlates incredibly well with how much time you spend online, particularly on social media.

Another insight into why this is happening: dumb AI favors fake news. “But you told me it doesn’t know, so how could it do that?” Well, it turns out that in a very large MIT study of, basically, the Twitter firestorm, they looked at how news travels on Twitter. And they found that fake news goes much farther and much faster than real news.


Now, the AI models don’t know the difference, but they know how to watch human behavior. And humans love to share gossip, especially false gossip. And so the AI algorithms say, “Wow, humans are sharing this stuff, it must be good. Rank it up and recommend it some more.” And that’s partly the problem here.

So, really what went wrong? These are unintended consequences. This is people like you and me building AI models, doing what we’re asked to do, to maximize the objective function, essentially established by our businesses. The problem is, in my view, the objective function is wrong. It is not accounting for the impact on the user and collectively on society.

To really understand this problem, let’s dig a little bit more into what I mean by objective function. At the core of any AI system, as you know, is the model, which takes an input like an image and performs an inference or reasoning step, like classifying the image.


The objective function is the thing that evaluates how well the model is doing on its task. And it’s used not only during run time, but also during training and evaluation. AI engineers can’t do their job without an objective function. It’s also important to note that the objective function has to be a metric that’s computable. It can’t be a goal like “do good.” It has to be computable from data or from observations.

And because of that, objective functions are almost always an approximation of the actual goal or purpose of the system. The machine learning process, which operates on objective functions, can only optimize for the computable function, not for the goal that the human has in their head. And it has no idea what the real design intent is.
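To make the goal-versus-proxy distinction concrete, here is a minimal sketch in Python. The names (`objective`, `always_positive`) are invented for illustration; the point is that the function scores the model against labeled data, not against the designer’s real intent.

```python
# Sketch: an objective function is a computable proxy for a goal.
# The true goal ("classify images the way a careful human would") is not
# computable, so we optimize a stand-in: error rate on a labeled dataset.
# All names here are illustrative, not from any particular framework.

def objective(model, labeled_examples):
    """Fraction of labeled examples the model gets wrong (lower is better)."""
    errors = sum(1 for x, label in labeled_examples if model(x) != label)
    return errors / len(labeled_examples)

# A trivial "model" and dataset, just to show the mechanics:
def always_positive(x):
    return "positive"

data = [("img1", "positive"), ("img2", "negative"), ("img3", "positive")]
print(objective(always_positive, data))  # 1 of 3 examples wrong
```

The learner only ever sees this number; whatever the designer meant but did not encode is invisible to it.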


So, for example, in game playing this isn’t a big problem, because in a game-playing system the role of the AI model is to evaluate how likely a particular move is to lead to a victory. And the program is saying, “Well, a good move is a move that leads to victory.”

The real goal is to win the game, but you can’t just say, “Win the game.” If you could program that, the game would be over. Instead, you say, “The objective function is: choose moves that, in simulation, tend to lead to victory.” And the only factor the objective function has to consider in a game, except maybe compute time, which isn’t that important anymore, is whether that move is on a winning path. So, winning is the only thing that matters to this objective function.

Now, in the world in which social media operates, these AI-based systems are effectively playing a competitive game against their human users all the time. They are designed to maximize certain user behavior: to get users to stay online, to stay on the site. What is that behavior? It’s literally built into the objective function. Three things: maximize time on the site, maximize clicking on ads, maximize spreading the virus to other carriers.


And the problem is, the human is on the wrong side of the equation. Nowhere in the objective function are the hard-to-quantify soft metrics on the impact of this behavior on critical thinking, on mental health, on public discourse. And even when the goal is well-intended, which it almost always is, say, to improve the human experience, the cold logic of the objective function, which is how the goal is operationalized, can get us into trouble.
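Here is a hedged sketch of what such a three-term engagement objective might look like, with hypothetical weights and field names; nothing here is taken from any real platform’s code. Notice what is structurally absent: there is no term for the user’s wellbeing, critical thinking, or the quality of public discourse, so the optimizer is blind to them.

```python
# Illustrative three-term engagement objective: time on site, ad clicks,
# and shares ("spreading the virus"). Weights and session fields are
# made up; the shape of the function is the point, not the numbers.

def engagement_score(session, w_time=1.0, w_clicks=5.0, w_shares=3.0):
    return (w_time * session["minutes_on_site"]
            + w_clicks * session["ad_clicks"]
            + w_shares * session["shares"])

doomscroll = {"minutes_on_site": 60, "ad_clicks": 2, "shares": 4}
quick_visit = {"minutes_on_site": 3, "ad_clicks": 0, "shares": 0}

# The hour-long trance scores far above the healthy three-minute visit,
# and nothing in the function can tell the difference in human cost.
print(engagement_score(doomscroll) > engagement_score(quick_visit))  # True
```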

Take an example. In 2017, the head of Facebook stated this corporate objective: “We’re gonna bring the world closer together. So we started building artificial intelligence to suggest groups, and it works. In the first six months, we helped 50% more people join meaningful communities.” That objective was used to train the AI that drove the recommendations, which drove people to join groups in droves.


And the recommendation engine did its thing. When someone joined a new group on Facebook, the engine recommended other Facebook groups to join. For example, say you’re a new mother and you joined a Facebook group for new mothers. Guess what the engine recommended? Anti-vaccine conspiracy groups. And once you’re in one of those groups, imagine what kinds of other groups it recommends.

And by the way, much of this material about social media and its impact is from the Center for Humane Technology, my friend Tristan Harris. They’re advised by industry insiders, and they advise industry. They have told Facebook and Google and all these places about this stuff, and industry is responding. Facebook, for instance, has patched this problem with the anti-vaccine recommendations, and they’re doing a lot of work to help with mental health issues.


But remember, this is a system problem, not an isolated incident. It was a consequence of the AI team optimizing for the quantitative objective of people joining groups, not for whether the groups were beneficial to the humans involved. Now, this is a hard problem to solve, because human impact is much harder to measure than page views and clicks. But as long as the objective function is blind to content, the problems will remain.

And in an adversarial game exploited by propagandists and scaremongers (not Facebook itself; these people exploit Facebook to do this), fake and dangerous content becomes even harder to detect. So, this is a hard problem.

Sounds pretty scary. But, there’s an alternative.

Remember, I promised a second way to think about how AI can impact humanity. This is what I call humanistic AI: AI which puts the human at the center. It is designed to empower humans rather than compete with them. The key is to think about how to build the human benefit into the objective function from the beginning.


My strategy is to look for applications that either augment or collaborate with humans, and to define the objective function as the joint performance of that collaboration, where the human and the machine operating together are the intelligent system, and that’s what we’re evaluating. Particularly when the thing they’re trying to do together is something that humans want to do.

Now, a simple example of joint performance: take the medical image classification problem; everyone’s familiar with this. The traditional approach is to define the objective function as the error in classifying an image, as defined on a set of labeled images. Well, you can give the humans the images, and you can give the machine the images, and you can run a horse race and see who wins. And you get okay results: the machines approximate the best humans.


However, if you define the objective function as the joint performance of the human and the machine together, you will actually get better overall performance, because you can train the AI to do well the things that the human doesn’t do well. For instance, in this case, finding very-hard-to-find isolated cells among millions of cells on a microscope slide. Machines are better at that. But the humans are better at rejecting false positives.
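One way to see why the joint objective wins is to score the human-plus-machine team on the same metric as the machine alone. The counts below are invented for illustration, but they follow the division of labor just described: machine recall, then human rejection of false positives.

```python
# Sketch: evaluating the team, not the machine alone, with F1 score
# (harmonic mean of precision and recall). All counts are hypothetical.

def f1(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)

# Machine alone: finds nearly every rare cell but flags many spurious ones.
machine_alone = f1(true_pos=95, false_pos=400, false_neg=5)

# Team: human review removes most false positives, losing only a few hits.
team = f1(true_pos=92, false_pos=10, false_neg=8)

print(team > machine_alone)  # True
```

If you only ever measure the machine in isolation, this gain is invisible; the joint objective is what drives research toward it.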

So, let’s look at some humanistic AI examples that I like. One is augmented communication. This was actually my first AI program in 1983. I called it a communication assistant. The problem here is the human has cerebral palsy or ALS, and they can’t speak with a normal voice. So, we augment them with a machine.

In this case, it’s a computer with a screen. It scans through items, and the person clicks, or sucks through a straw, or does something that they can do, and it selects an item. And then, slowly, it generates language, which is then emitted through a TTS system, a text-to-speech system. Now, this is very tedious and slow, especially letter by letter. So, I built an AI model that predicted what they were trying to say, which accelerated everything, so they could speak closer to real time.
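The prediction idea can be sketched in a few lines. This toy predictor ranks word completions by how often this particular user has said them; a real assistant would use a language model, but the personalization logic is the same. All names and utterances here are illustrative.

```python
# Sketch: a tiny personalized word predictor for an augmentative
# communication aid. It learns from what this user has said before and
# offers completions of a typed prefix, most frequent first.

from collections import Counter

class WordPredictor:
    def __init__(self):
        self.counts = Counter()

    def learn(self, utterance):
        """Update per-user word frequencies from one spoken utterance."""
        self.counts.update(utterance.lower().split())

    def predict(self, prefix, k=3):
        """Top-k completions of `prefix`, ranked by this user's usage."""
        matches = [w for w in self.counts if w.startswith(prefix.lower())]
        return sorted(matches, key=lambda w: -self.counts[w])[:k]

p = WordPredictor()
for said in ["i want water", "water please", "more water"]:
    p.learn(said)
print(p.predict("w"))  # "water" ranks first for this user
```

Because the ranking comes from the individual’s own history, a frequent word like “water” surfaces after one keystroke, which is exactly where the speed gain over generic auto-complete comes from.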


The system was able to enable kids with severe cerebral palsy to speak for the first time with a voice, and that was really meaningful. The objective function, however, wasn’t the conventional one. It was a joint performance measure: how quickly the human and the machine together could say what the person wanted to say.

And that drove the research towards things like highly personalized models, because personalized models outperform generic auto-complete models. And it also drove research into how you learn from humans to adapt to their context and their world. Just by changing how we viewed success, we no longer had word error rate as the measure. It was, “How much better does a human being do with augmentation?”


Now, today, this technology has gotten really good. Thirty-six years later, these communication assistants are powerful. The computing platform is now 100,000 times more powerful. This company Cognixion just last month released an app on Apple’s App Store that uses Face ID and the augmented reality stack in the phone. Some of these new phones can see you with volumetric and RGB cameras, and they can see your eyes. They can actually allow users to speak with their eyes. It’s really quite profound.

And even more mind-blowing, if you’ll pardon the pun, is this Cognixion system that uses EEG (electroencephalogram) signals to allow its users to speak with their brain. They developed the headset you see here, which has three electrodes on the back, over the occipital lobe of the brain, and an augmented reality display up front. By detecting brainwave activity, you can understand which objects in the visual field the user is attending to or thinking about.


This used to be really hard. The idea has been around for a while, but the technology wasn’t good enough until deep neural nets allowed us to make sense of the very noisy signals coming out of EEG. And now, these AI models can predict that you’re clicking on or pointing at an object with your brain really effectively.

This system enabled Lorenzo Minelli, a professor of languages who used to speak nine languages until rendered speechless by a serious stroke, to say this using only his brain. “Medicine is an art as well as a science, and great art requires great imagination, great empathy. Take the opportunity to see your own suffering in the suffering of others. For when you can see others’ suffering as your own, you will be greatly rewarded.” Now I’m really happy to see that men like that are not silenced by medical conditions.


Another area of human problems is mental health care. I won’t go into the statistics, but it’s a very big problem. And our science is a little behind. It turns out that for diseases like bipolar disorder, psychosis, PTSD, and depression, mental health professionals do not have objective scientific measurements to predict whether you’re going to get sick or have a mental breakdown.

There isn’t a measure or test like there is when you go to the physical doctor and get a blood test or something, where they can do an objective test. There’s no such thing for these diseases. You go to the psychiatrist, you talk, and they go through a checklist most of the time. Sometimes they move things around, but that’s about it.


So, some people ask, “Well, what if we could change this? What if we could deploy AI models that could predict, from data about how people are doing, whether they’re going to have a mental breakdown or an issue, without requiring a visit to the doctor at all?” Well, a company called Mindstrong has done exactly this. They developed AI models based on what they call digital brain biomarkers. And here’s how they work.

It turns out that operating a phone, as the next speaker might tell us, carries a pretty big cognitive load. Phones are not easy to use; operating a smartphone is actually a tough mental task. And so, by watching how people tap and scroll and use the keyboard, very similar to what online systems do when they’re monitoring your behavior on social media, you can actually gather a lot of data about human cognition, about 3,000 data points a day on average.

And it turns out that disruptions in cognitive function that are predictive of mental conditions and breakdowns, psychosis and so on, are actually detectable in these data. An AI model can predict when an episode is coming, very accurately, just by looking at how a person uses their phone: completely passive, completely unintrusive and very helpful.
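As a hedged illustration of the idea, and not Mindstrong’s actual method, here is how a disruption might be flagged against a person’s own baseline, using a simple z-score on one interaction feature. The feature, numbers, and threshold are all invented.

```python
# Sketch: flagging a disruption in passively collected interaction data.
# Each day yields one summary feature (say, mean inter-keystroke latency
# in milliseconds); a z-score against the person's own baseline flags
# days that deviate sharply. Thresholds and values are illustrative.

from statistics import mean, stdev

def flag_anomalies(baseline_days, new_days, threshold=3.0):
    """Return the new-day values more than `threshold` sigmas off baseline."""
    mu, sigma = mean(baseline_days), stdev(baseline_days)
    return [day for day in new_days if abs(day - mu) / sigma > threshold]

baseline = [210, 205, 215, 208, 212, 207, 211]   # a stable week, in ms
recent = [209, 214, 310, 206]                    # one wildly atypical day
print(flag_anomalies(baseline, recent))  # [310]
```

The key property, as in the talk, is that the signal is gathered passively: nobody has to fill out a questionnaire for the baseline or the alert to exist.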


Here’s an example. A patient was suffering from psychosis and bipolar disorder and she was doing okay, maybe having some sleep issues, but nothing to go to the hospital about. But then, her healthcare team was alerted by her biomarkers going crazy. The AI detected something going wrong. And so they had a conversation with the patient and she said, “Yeah, maybe I’ll go ahead and check myself into the hospital.”

It was a good thing she did, because she had a big breakdown and things went bad. But when that happens in a hospital, mental health care practitioners can help. They can treat and they can restore function. And that’s what happened. A crisis was prevented and she was able to get back to normal life again. If you don’t intervene when these things happen, people often end up on the street. The difference between getting this right and getting it wrong is huge.


And again, it’s an AI thing. But what’s interesting is not just AI doing whizzy things with data, but how it did it. Here’s a case where an expert-performance task, clinical prediction of mental illness, is not done well by the best human experts. This means you can’t just ask experts to go label a bunch of data; they certainly wouldn’t know what to do with your keyboard strokes if you asked them. But the data, the brain biomarkers, these 3,000 data points a day, are high-dimensional quantitative data, exactly the kind of stuff that our AI models love to eat for breakfast. And so, you can bring in an AI model that can make sense of high-dimensional quantitative data.

And then the humans are really good at recognizing when something really goes wrong. When someone’s having a psychotic breakdown, it’s pretty easy for trained psychiatrists to recognize and diagnose it. So they can basically provide the labels after the fact, and the AI can operate on data in the present and predict the future. And the objective function, now, is not error on a labeled training set so much as it is prediction accuracy. It’s a new kind of skill: a superhuman talent for predicting mental health conditions.


I love the irony of this, because this is a new way of humans and machines working together. Instead of downgrading human attention, we’re upgrading the way we care for people who need mental health care. The role of the machine is to do what machines do best, and the role of the human is to do what humans do best, which is care for people.

And in general, that’s the right theory here: you want a system where the machine and the human work together and each does what he, she or it does best.

These are only a couple of points of light in this humanistic AI galaxy.

So, think about how we might use this perspective to turn things around in the larger field of AI, from AI that’s accidentally at odds with human benefit to AI that’s aligned with human benefit.


I think the challenge is to change the way we define the objective functions for AI. It’s the inner loop of how we create and evaluate models.

This is not easy, because you can’t just say, “Go do good things, machine.” That’s not operational, it’s not computable; you can’t tell machines that. But I think we can do it, if we have the right leadership and the right incentives.

The misalignment with human benefit is often attributed to a single-minded profit motive: the business models for social media are broken, so we should, you know… And the only response, if you think about it that way, is, “Ooh, we’ll kill the business model.” Well, you can’t do that.


But there’s another way of thinking about business and leadership that allows you to do good and do well. It’s called the double bottom line. This is the concept that, in addition to measuring and valuing profit-making activities, a corporation can measure and value human impact. And corporate leaders can guide and reward their employees to achieve both kinds of results simultaneously. The joint performance, as it were. In an organization like this, AI teams can be tasked with developing models and defining objective functions that have human benefit in the equation.

This is not some crazy academic idea. It’s the philosophy of Chairman Chey, the chairman of SK Group. He wrote that, “Corporations pursuing social value not only enhance the loyalty of existing customers, but also attracts new customers, which will lead companies to long lasting stability and growth.” So, the double bottom line, with leaders like Chairman Chey behind it, is a real thing, and we can do this.

Now, here’s an example from the gaming industry. Imagine a double bottom line in gaming. Right now, the gaming industry optimizes for play: maximize play, maximize participation. What if it could also optimize for human behaviors that are pro-social, and for self-care behaviors?


Now, with the power of the new devices coming online, the AR glasses and all these amazing immersive audiovisual environments, I think we don’t need to rely on violence anymore. I think this objective function will steer the industry towards games in which the game is the human game: the game of being human and being successful in social interactions with other humans. That’s a more interesting game. And in fact, much more of our brain is dedicated to that problem than to violence. Chimps have violence covered. Human brains are built for social interaction.


And also, just from a selfish point of view: if you manage teams here in AI or something, you’re trying to recruit the best. If you’re a master in your field, whether it’s game design or AI science, you have your choice of where you want to work. And I think we should offer these individuals the choice to work in a place that does well and does good, and to get paid well.

You say, “Okay, naive California guy, the world runs on money.” Well, it turns out you can do well by doing good, even if you only have a single bottom line. Let’s take some examples.

Aging in place. I don’t need to give you the numbers, look them up. A huge portion of humanity is getting older, and they have money, and they vote and they want solutions to their problem of living at home. They don’t want to be put in institutions.

There’s a lot of AI work going on right now on how to let them stay at home, what’s called aging in place: living in dignity where you’ve lived, by having AI that monitors how they’re doing at home. What are their activities of daily life? You measure how they’re doing: how they’re walking around, how they’re going to the bathroom, how they’re eating, and so on.


And if something goes wrong, just like with the brain biomarkers, you can predict with AI that things are going wrong. There’s also self-care: the idea that you can have AI in there talking to them, checking in with them, which is what human outreach nurses do in this situation as well. But it can scale.

Another example of this, a specific but very powerful subset, is medical compliance, the problem of taking your medicine as prescribed. It turns out a huge portion of suboptimal medical outcomes are due to compliance problems. There’s AI work going on in the sort of straightforward things: “Show me that you took your pills.” Okay. That’s today’s AI; we can do that.

But I’m really excited about a new generation of virtual assistants that talk to people about how they’re taking their pills and their medicine, answering questions, helping them remember: “Which is the blue pill, which is the green pill? Did I take it today?” With the combination of better sensing and conversational agents, I think we can make big improvements in this space. And there are plenty of AI startups thinking about this.


In fact, the general case of this assistants-helping-humans idea you might call wellness coaching: virtual assistants that help not only with better exercise and better health, but also with chronic diseases like diabetes and hypertension. These are the number one preventable disease category in medicine, and they’re mostly driven by human behavior. So, if we can build AI that manipulates human behavior, imagine how well we can do by helping people do what they actually want to do, which is take better care of themselves.

Now, the interesting challenge and opportunity for AI research, as well as for industry, is the science of “nudging.” Maybe James Landry will talk about this in his talk. The science of how you decide, given what you know about a human’s biomarkers, what to advise them to do and when, is just beginning. It’s in a nascent stage.

Which is exciting, because we have a lot of people we could do this with, just like we do with social media, and we can start a cycle of learning from humans, trying things out and rapidly advancing the science of what kinds of nudges work. If you have a smartwatch, it probably taps you on the wrist and says, “Breathe,” or something. [laughs] That’s kind of a baby step. We can do a lot more than that.


Now, I particularly get excited about this concept of cognitive enhancement. This is the idea that AI could make us all more intelligent. And I’m not talking about cyber chips implanted in our brains. Not that. The polite word for that is invasive technologies. I’m talking about AI that, like eyeglasses or hearing aids, is a natural augmentation of our cognition. That helps us think just a little bit better.

Instead of asking, “How can we make our machines smarter?” we can ask, “How smart can our machines make us?” To me, the low-hanging fruit here is something called external personal memory. And we already do this with things like notebooks and audio recorders and cameras. The problem is that you have to remember to do the thing that records your world. And even then, you don’t have it indexed very well.


But what if we could employ AI to automatically read, watch, look at, and listen to everything that we do? It’s now possible for AI to scan written material well enough to build a reasonable semantic index, which means that you can recall anything you have seen by name, time, location, topic, and relationship to some other concept or topic you’ve already encountered. So essentially, the knowledge-graph level of AI can be applied to this problem.

And today’s natural language and speech interfaces, and even “show me” camera interfaces, are actually a much more natural way of asking questions. And so the combination of the fact that the digital world is in our inner loop, that our digital lives are comprehensive, that we can index them semantically, and that we can use intelligent user interfaces to retrieve from them makes the whole thing so much easier to do. You can imagine it being with you every day.


So imagine, for example, you read something on the way to the conference. It was something about this guy in gaming, and I don’t remember exactly what it was. And then the conversation starts happening. Has this happened to you? And you’re like, “Oh, wait a minute. What was that thing I read? I want to know what that thing I read is.”

Well, you could ask your personal memory, “Who is the person that I read about recently, who works in gaming for education?” And that’s essentially enough for an associative retrieval, based on a semantic index, to find it. The reason it would work is because it only has to search things in your personal history, not the web. And therefore, the precision is way, way higher.
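To make that concrete, here is a minimal sketch of the idea, not the actual system being described: crude keyword overlap stands in for a real semantic index, and the two “memory” items are invented for illustration. The point it demonstrates is the one above: because the search space is only your personal history, even a rough match is precise.

```python
import re
from collections import Counter

# Toy "personal memory": in a real system these entries would be built
# automatically from everything you read, watch, and hear.
memory = [
    {"title": "Gaming for education profile",
     "text": "an article about a designer who works in gaming for education"},
    {"title": "Near-infrared imaging TED talk",
     "text": "a TED talk with a chicken and a laser demonstrating near infrared imaging"},
]

def tokens(s):
    # Crude stand-in for semantic analysis: lowercase keywords only.
    return [w for w in re.findall(r"[a-z]+", s.lower()) if len(w) > 3]

def recall(query):
    """Return the memory item with the most keyword overlap with the query."""
    q = Counter(tokens(query))
    return max(memory,
               key=lambda item: sum((q & Counter(tokens(item["title"] + " " + item["text"]))).values()))

print(recall("Who is the person I read about who works in gaming for education?")["title"])
# → Gaming for education profile
```

With only two candidate documents, a vague associative query has exactly one plausible hit; against the whole web, the same query would drown in noise.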

Imagine also that this could be applied to your social world. A lot of times when we meet someone, we might exchange a text or an email with them, a correspondence; that’s all written material. It’s possible to have an AI look at that and essentially reconstruct a history of your digital relationship with someone, starting with their name. Wouldn’t it be nice not to get that wrong the second time?


And of course, since we all live on videos and stuff, what about videos? Well, yeah, it turns out you can do videos, too. You don’t have to have general intelligence that can understand the plot or anything like that. Just scene recognition and labeling is enough.

Here’s an example that happened to me the other day. “There was this TED talk by this person who had this chicken and a laser. Like, it’s really cool. And it’s some kind of imaging, and I don’t know what it is.” You could say to your personal memory, “The chicken laser woman on TED.” And it would get it, because how many times have you seen a laser chicken lady video? Probably just once. And it’s going to come up and show Mary Lou Jepsen’s great work in near-infrared imaging.

Now, let’s put the humans into the equation. If we want to build a humanistic AI, we need to factor in the human impact from the start. And we need to build it into the very process by which we define and iterate on our models. And this is an engineering challenge; this is not easy. But it does not require breakthroughs in AI, because the AI does not have to know the difference between right and wrong. It only has to be pointed in the direction of human benefit.


We saw several ways in which the same technology can be used for surveillance, persuasion, and behavior control, and for humanistic goals. Instead of population surveillance, you know, face recognition, monitoring for changes in identity, blah, blah, blah, all the stuff that’s scary, well, it turns out the technology underlying it all can also be used for aging in place, allowing older people to live with dignity at home.

And instead of allowing AI Chatbots to manipulate public opinion, we could build conversational assistants that help people remember to take their pills and manage their chronic diseases and stay healthy. And instead of using the data from billions of people interacting with these systems to manipulate them, we can use the data from people living digitally mediated lives, like you and me, to enhance our memories and help us meet our personal goals for wellness.


We have a historically unprecedented opportunity to use this new data about our human behavior. And instead of just using it to keep people on site, let’s use it to give insight to people.

And finally, let’s change the narrative about AI, which frankly is scaring people. And instead of talking about the future of AI as a superintelligence that takes all your jobs away and leaves you an inferior species, let’s talk about what we can do with super AI today.

Instead of talking about giving up our privacy and self-determination, which has a name, by the way, George Orwell’s “Big Brother,” I think we should start thinking about a similar-sounding but positive metaphor for humanistic AI: Big Mother.


Big Mother is AI that knows your human weaknesses, but instead of taking advantage of them, uses that knowledge to nurture you and nudge you towards your own health and wellness. This is AI that teaches rather than misleads. It’s AI that protects you against the forces that would take advantage of you, like Big Brother, and never betrays your trust. And it should do everything possible to help you be the best human being you can be, like a good mom. I think we should build Big Mother. Thank you.