
Knowledge Acquisition: Machines Learning from Humans

In the 1980s, when Tom was in graduate school, the first big wave of Artificial Intelligence applications was expert systems. Expert systems were a way for computer scientists to embody human expertise in computer programs. There were many successful expert systems for tasks such as diagnosis, classification, and design. For example, expert systems could diagnose infectious diseases from symptoms and configure complex component assemblies from requirements. But expert system technology did not come to dominate enterprise computing, which instead embraced the simpler model of programs built on relational databases. (Relational databases are just tables of facts, whereas expert systems could reason from facts and inputs to conclusions. But tables of facts are far easier to build and maintain, and easier wins over better in the history of technology adoption.)

The main reason knowledge systems did not become ubiquitous is the knowledge acquisition bottleneck: It’s hard to build systems that model and act on the knowledge of human experts. Tom worked on this problem for his doctoral research. Like others in the knowledge acquisition community, Tom argued that the programming approach should be replaced with a learning approach. Instead of programmers learning from experts and then writing computer programs, he posited, knowledge systems should be created by machines learning directly from experts.

This is “easier said than done,” which, ironically, is a key to understanding the solution that Tom proposed. In his thesis, published as a book in 1989, Tom showed a way for a machine to learn how human experts do what they do by watching them complete a task and then asking them to justify the decisions they made. The approach requires a user interface in which the human can interact with the machine, demonstrating what he or she is doing and justifying decisions, in a way that the machine can understand.
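
To make this concrete, here is a minimal sketch, in Python, of the kind of interaction loop such an interface implies. All of the names here are hypothetical, invented for illustration; this is not how Tom's actual system was built. The point is simply that each demonstrated action is recorded together with the expert's stated justification, in a structured form the machine can reason over later.

```python
from dataclasses import dataclass, field

@dataclass
class JustifiedStep:
    """One demonstrated decision plus the expert's stated rationale."""
    situation: dict     # what the expert could observe at this point
    action: str         # what the expert chose to do
    justification: str  # the expert's answer to "why did you do that?"

@dataclass
class DemonstrationLog:
    """Accumulates justified steps from one expert walkthrough of a task."""
    steps: list[JustifiedStep] = field(default_factory=list)

    def record(self, situation: dict, action: str, justification: str) -> None:
        self.steps.append(JustifiedStep(situation, action, justification))

def run_session(log: DemonstrationLog) -> None:
    """Watch the expert act, then prompt for a justification of each decision."""
    while True:
        action = input("Next action (blank to finish): ").strip()
        if not action:
            break
        situation = {"note": input("Describe the current situation: ").strip()}
        justification = input(f"Why did you choose '{action}' here? ").strip()
        log.record(situation, action, justification)
```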

In one experiment, Tom’s program interviewed a cardiac surgeon about how to treat a patient presenting with evidence of heart disease. The system needed to know enough about disease diagnosis to offer the surgeon an interface for working on the case, while it tried to learn expert strategy, such as the sequence of tests and interventions that minimizes risk. This was a very early form of what today is called “supervised machine learning” from individuals, focused on the still-unsolved problem of learning strategic knowledge (knowledge of how to do a task, rather than how to interpret information) directly from human experts. The article describing this approach was published in an early volume of the journal Machine Learning.

Later, Tom applied this principle of learning by demonstration to help organizations capture their knowledge as they design things, so-called “design rationale capture.” Again, the idea is for humans to do their work in the context of real problem solving, assisted by AI systems that know something about the domain, while the machine learns how and why the humans make their decisions. This is useful for knowledge sharing, since the machine-mediated knowledge can easily be searched and referenced, so that people can learn from it. This line of research in machine-augmented organizational learning was developed to maturity by one of Tom’s students, Ana Cristina Bicharra Garcia.
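
As a rough illustration of what machine-mediated knowledge buys an organization, here is a small sketch of a searchable rationale store. The names and structure are invented for this example, not taken from the actual design rationale systems of that era; the point is that once decisions are captured alongside their reasons, anyone can later retrieve the “why” behind a design.

```python
from dataclasses import dataclass

@dataclass
class RationaleRecord:
    """A design decision captured together with the reasoning behind it."""
    decision: str            # what was decided
    alternatives: list[str]  # options considered and rejected
    rationale: str           # why the chosen option won
    tags: list[str]          # domain terms for later retrieval

class RationaleStore:
    """A searchable memory of an organization's design decisions."""

    def __init__(self) -> None:
        self._records: list[RationaleRecord] = []

    def capture(self, record: RationaleRecord) -> None:
        self._records.append(record)

    def search(self, term: str) -> list[RationaleRecord]:
        """Return every record whose decision, rationale, or tags mention the term."""
        term = term.lower()
        return [r for r in self._records
                if term in r.decision.lower()
                or term in r.rationale.lower()
                or any(term in tag.lower() for tag in r.tags)]
```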

Today, we have a paradigm of machine learning in which models are created from examples of humans solving problems, without justifications. But these are largely black box models: that is, you can’t tell from looking at them how they work or why they come to the conclusions they do. The line of research that Tom was pursuing with learning by demonstration turns this problem on its head: instead of building black box models from labeled examples, it suggests that we can teach machines how to do things by showing them how we do things, and explaining why we do them this way. Trained this way, machines can justify their recommendations based on the rationalized exemplars shown to them during training. In other words, they could answer the question “why do you recommend doing that?” with “because experts have shown me that doing that in these conditions results in ….”
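
To suggest what answering “why?” from rationalized exemplars could look like mechanically, here is a toy sketch. The matching rule (overlap of observed conditions) and the clinical details are invented for illustration, and are far simpler than anything a real system would use; the point is that the justification is assembled from rationales the experts supplied during training.

```python
from dataclasses import dataclass

@dataclass
class RationalizedExemplar:
    """A training example that carries its own explanation."""
    conditions: frozenset[str]  # what was observable when the expert acted
    action: str                 # what the expert did
    rationale: str              # why the expert said it was the right move

def recommend(case: set[str], exemplars: list[RationalizedExemplar]) -> tuple[str, str]:
    """Return the best-matching exemplar's action plus a justification
    built from the rationale the expert gave during training."""
    best = max(exemplars, key=lambda e: len(e.conditions & case))
    matched = ", ".join(sorted(best.conditions & case))
    why = (f"because experts have shown me that when {matched}, "
           f"'{best.action}' is right: {best.rationale}")
    return best.action, why

exemplars = [
    RationalizedExemplar(frozenset({"chest pain", "abnormal ECG"}),
                         "order an angiogram",
                         "it confirms coronary blockage at low risk"),
    RationalizedExemplar(frozenset({"chest pain", "normal ECG"}),
                         "order a stress test",
                         "it rules out ischemia non-invasively"),
]

action, why = recommend({"chest pain", "abnormal ECG"}, exemplars)
print(action, "--", why)
```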