As a professional researcher, Tom chose not to pursue the tenure track and instead focused on building systems that did new things. However, he managed to knock out a few academic publications and achieve a modest influence in the literature (over 60,000 citations). Here are a few pieces of writing that were important at the time or may be of interest today. They are organized by general topic, although they typically span several of these areas of interest.
A Translation Approach to Portable Ontology Specifications
Thomas R. Gruber (1993). A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition, 5(2), pp. 199-220.
This is the official Ontolingua paper, the first publication of the fabled definition of an ontology as a specification of a conceptualization, with a theoretical grounding in AI agency and knowledge representation. The journal Knowledge Acquisition was later merged into the International Journal of Human-Computer Studies, and this paper was recognized as the highest-cited article in the history of those journals.
Abstract: To support the sharing and reuse of formally represented knowledge among AI systems, it is useful to define the common vocabulary in which shared knowledge is represented. A specification of a representational vocabulary for a shared domain of discourse — definitions of classes, relations, functions, and other objects — is called an ontology. This paper describes a mechanism for defining ontologies that are portable over representation systems.
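The abstract's core idea, that an ontology specifies a shared vocabulary of classes, relations, and functions, can be illustrated with a small sketch. This is a hypothetical Python rendering for intuition only, not the paper's Ontolingua notation; the names `Author`, `Document`, `authored`, and `publication_year` are invented for the example.

```python
from dataclasses import dataclass

# "Classes" in the ontology: categories of objects in the shared domain.
@dataclass(frozen=True)
class Author:
    name: str

@dataclass(frozen=True)
class Document:
    title: str
    year: int

# A "relation": a named association between objects, checked against a
# fact base of asserted triples.
def authored(author: Author, doc: Document, facts: set) -> bool:
    """True if the fact base asserts that `author` wrote `doc`."""
    return (author, "authored", doc) in facts

# A "function": a term that maps objects to values.
def publication_year(doc: Document) -> int:
    return doc.year
```

Two agents that agree on these definitions can exchange assertions like `(author, "authored", doc)` and interpret them the same way, which is the sense in which an ontology is a content-specific agreement.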
Toward Principles for the Design of Ontologies Used for Knowledge Sharing
Thomas R. Gruber (1993). Toward principles for the design of ontologies used for knowledge sharing. Originally in N. Guarino and R. Poli (Eds.), International Workshop on Formal Ontology, Padova, Italy. Revised August 1993. Published in International Journal of Human-Computer Studies, Volume 43, Issue 5-6, Nov./Dec. 1995, pp. 907-928, special issue on the role of formal ontology in information technology.
One of the first attempts at a software development methodology for ontologies. Introduces the notion that ontologies are designed artifacts and should be amenable to engineering methodologies. Proposes five design criteria for ontologies.
Original abstract: Recent work in Artificial Intelligence is exploring the use of formal ontologies as a way of specifying content-specific agreements for the sharing and reuse of knowledge among software entities. We take an engineering perspective on the development of such ontologies. Formal ontologies are viewed as designed artifacts, formulated for specific purposes and evaluated against objective design criteria. We describe the role of ontologies in supporting knowledge sharing activities, and then present a set of criteria to guide the development of ontologies for these purposes. We show how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data. Selected design decisions are discussed, and alternative representation choices are evaluated against the design criteria.
Tom Gruber (2008), Ontology. Entry in the Encyclopedia of Database Systems, Ling Liu and M. Tamer Özsu (Eds.), Springer-Verlag, 2009.
Provides a definition of ontology as a technical term for computer science, tracing its historical context from philosophy and AI. A definitional article in the Encyclopedia of Database Systems, updating the 1993 definition of ontology.
For a recent citation, see: Gruber T. (2016) Ontology. In: Liu L., Özsu M. (eds) Encyclopedia of Database Systems. Springer, New York, NY.
Enabling Technology for Knowledge Sharing
Robert Neches, Richard Fikes, Tim Finin, Thomas Gruber, Ramesh Patil, Ted Senator, and William R. Swartout (1991). Enabling technology for knowledge sharing. AI Magazine, 12(3):16-36, 1991.
The manifesto publication of the DARPA Knowledge Sharing Effort, describing the layers which eventually morphed and survived somewhat scathed in the formalisms of the Semantic Web. Historically interesting given what has happened since with XML.
Every Ontology Is a Treaty
Thomas Gruber (2004). Every Ontology is a Treaty. Interview for Semantic Web and Information Systems SIG of the Association for Information Systems. SIGSEMIS Bulletin, Volume 1, Issue 3. October, 2004.
Interview with a Semantic Web organization, published in their journal of record, in which I argue that ontologies are designed in a social context. It was a radical idea back then that the meaning of formal knowledge representations is situated in human society. The interview also predicted that standardizing ontologies top-down would fail.
Collaborative Engineering Based on Knowledge-sharing Agreements
Greg Olsen, Mark Cutkosky, Jay M. Tenenbaum, and Thomas R. Gruber (1994). Collaborative engineering based on knowledge-sharing agreements. American Society of Mechanical Engineers (ASME) International Computers in Engineering Conference, 1994.
This publication won the Best Paper award at a prestigious engineering conference.
Original abstract: The design of products by multi-disciplinary groups is a knowledge-intensive activity. Collaborators must be able to exchange information and share some common understanding of the information’s content. The hope, however, that a centralized standards effort will lead to integrated tools spanning the needs of engineering collaborators is misplaced. Standards cannot satisfy the information sharing needs of collaborators, because these needs cannot be standardized.
This paper discusses the design and use of a shared representation of knowledge (language and vocabulary) to facilitate communication among specialists and their tools. The paper advances the opinion that collaborators need the ability to establish and customize knowledge sharing agreements (i.e. mutually agreed upon terminology and definitions) that are usable by people and their machines. The paper describes a formal approach to representing engineering knowledge, describes its role in a computational framework that integrates a heterogeneous mix of software tools, and discusses its relationship to current and emerging data exchange standards.
An Ontology for Engineering Mathematics
Thomas R. Gruber and Greg R. Olsen. (1994). An ontology for engineering mathematics. In J. Doyle, P. Torasso, and E. Sandewall (Eds.), Fourth International Conference on Principles of Knowledge Representation and Reasoning, Gustav Stresemann Institut, Bonn, Germany, Morgan Kaufmann, 1994.
Possibly the first refereed publication of an AI ontology explicitly called out as an ontology. Defines a formal axiomatization of the mathematics sufficient to represent modern engineering models. The HTML version of this paper is deeply cross-indexed and contains the entire ontology in machine- and human-readable form.
Original abstract: We describe an ontology for mathematical modeling in engineering. The ontology includes conceptual foundations for scalar, vector, and tensor quantities, physical dimensions, units of measure, functions of quantities, and dimensionless quantities. The conceptualization builds on abstract algebra and measurement theory, but is designed explicitly for knowledge sharing purposes. The ontology is being used as a communication language among cooperating engineering agents, and as a foundation for other engineering ontologies. In this paper we describe the conceptualization of the ontology, and show selected axioms from definitions. We describe the design of the ontology and justify the important representation choices. We offer evaluation criteria for such ontologies and demonstrate design techniques for achieving them.
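As a rough illustration of the kind of conceptualization the abstract describes, physical quantities can be modeled as magnitudes paired with physical dimensions, so that dimensionally inconsistent operations are rejected. This is a hypothetical Python sketch for intuition, not the paper's KIF axiomatization, and it covers only three base dimensions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dimension:
    """A physical dimension as integer exponents of base dimensions."""
    length: int = 0
    mass: int = 0
    time: int = 0

    def __mul__(self, other: "Dimension") -> "Dimension":
        # Multiplying quantities adds the exponents of their dimensions.
        return Dimension(self.length + other.length,
                         self.mass + other.mass,
                         self.time + other.time)

@dataclass(frozen=True)
class Quantity:
    """A physical quantity: a magnitude paired with a dimension."""
    magnitude: float
    dim: Dimension

    def __add__(self, other: "Quantity") -> "Quantity":
        # Addition is defined only for quantities of the same dimension.
        if self.dim != other.dim:
            raise TypeError("cannot add quantities of different dimensions")
        return Quantity(self.magnitude + other.magnitude, self.dim)

    def __mul__(self, other: "Quantity") -> "Quantity":
        return Quantity(self.magnitude * other.magnitude, self.dim * other.dim)
```

Here addition is closed within a dimension while multiplication composes dimensions, mirroring the algebraic treatment of quantities and dimensions in the ontology.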
The Configuration Design Ontologies and the VT Elevator Domain Theory
Thomas R. Gruber, Greg R. Olsen, and J. Runkel (1994). The configuration design ontologies and the VT elevator domain theory. International Journal of Human-Computer Studies, Volume 44, Issue 3-4, March/April 1996.
One of the first journal articles presenting domain ontologies as a research contribution.
In the VT/Sisyphus experiment, a set of problem-solving systems was built against a common specification of a problem. An important hypothesis was that the specification could be given, in large part, as a common ontology. This article is that ontology. The ontology differs from normal software specification documents in two fundamental ways. First, it is formal and machine-readable (i.e., in the KIF/Ontolingua syntax). Second, the descriptions of the input and output of the task to be performed include domain knowledge (i.e., about elevator configuration) that characterizes semantic constraints on possible solutions, rather than describing the form (data structure) of the answer. The article includes an overview of the conceptualization, excerpts from the machine-readable Ontolingua source files, and pointers to the complete ontology library available on the Internet.
SHADE: Technology for Knowledge-based Collaborative Engineering
James G. McGuire, Daniel R. Kuokka, Jay C. Weber, Jay M. Tenenbaum, Thomas R. Gruber, and Greg R. Olsen. (1993). SHADE: Technology for knowledge-based collaborative engineering. Journal of Concurrent Engineering: Applications and Research (CERA), 1(2), 1993.
Abstract: Effective information sharing and decision coordination are vital to collaborative product development and integrated manufacturing. However, typical special-purpose CAE systems tend to isolate information at tool boundaries, and typical integrated CAE systems tend to limit flexibility and process innovation. The SHADE (SHAred Dependency Engineering) project strikes a balance between these undesirable extremes by supporting reconfigurable exchange of engineering knowledge among special-purpose CAE systems. SHADE’s approach has three main components: a shared knowledge representation (language and domain-specific vocabulary), protocols supporting information exchange for change notification and subscription, and facilitation services for content-directed routing and intelligent matching of information consumers and producers.
PACT: An Experiment in Integrating Concurrent Engineering Systems
Mark Cutkosky, Robert S. Engelmore, Richard E. Fikes, Thomas R. Gruber, Michael R. Genesereth, William S. Mark, Jay M. Tenenbaum, and Jay C. Weber. (1993). PACT: An experiment in integrating concurrent engineering systems. IEEE Computer, 26(1), 1993, pp. 28-37.
Abstract: The Palo Alto Collaborative Testbed (PACT) is a joint experiment in concurrent engineering being pursued by research groups at Stanford University, Lockheed, Hewlett-Packard, and Enterprise Integration Technologies. The current prototype integrates four preexisting concurrent engineering systems into a common framework. Each of the individual systems is used to model different aspects of a small robotic manipulator, and to reason about them from a different discipline (dynamics, digital electronics, and software). The initial PACT experiments have explored knowledge sharing in the context of a distributed simulation and simple incremental redesign scenario.
Notes: Submitted February 1993.
Semantic Computing and Web 3.0
Collective Knowledge Systems: Where the Social Web Meets the Semantic Web
Thomas Gruber (2007). Collective Knowledge Systems: Where the Social Web meets the Semantic Web. Web Semantics: Science, Services and Agents on the World Wide Web, Volume 6, Issue 1, February 2008, pp. 4-13.
Proposes a class of applications called Collective Knowledge Systems, which are the “killer apps” for the integration of the Social Web (2.0) and the Semantic Web. Characteristics of these systems, principles, and examples from real applications are included.
Original abstract: What can happen if we combine the best ideas from the Social Web and Semantic Web? The Social Web is an ecosystem of participation, where value is created by the aggregation of many individual user contributions. The Semantic Web is an ecosystem of data, where value is created by the integration of structured data from many sources. What applications can best synthesize the strengths of these two approaches, to create a new level of value that is both rich with human participation and powered by well-structured information? This paper proposes a class of applications called collective knowledge systems, which unlock the “collective intelligence” of the Social Web with knowledge representation and reasoning techniques of the Semantic Web.
Computer-assisted Semantic Annotation of Scientific Life Works
Edward Feigenbaum, Thomas Gruber, Will Snow (2007). Computer-assisted Semantic Annotation of Scientific Life Works.
Original Abstract: Describes a new research project for semantic annotation – the semantic equivalent of OCR – to be conducted at Stanford in the domain of digital life work archives.
Presentation to Stanford class (CS300) for research initiatives.
Ontology of Folksonomy: A Mash-up of Apples and Oranges
Thomas Gruber (2005). Ontology of Folksonomy: A Mash-up of Apples and Oranges. Int’l Journal on Semantic Web & Information Systems, 3(2), 2007.
A rebuttal to a popular anti-ontology blog, with a constructive call to action. Showed how “Folksonomy” isn’t incompatible with formal ontology and showed how to get the benefits of both.
Originally presented as an invited keynote to the First on-Line conference on Metadata and Semantics Research (MTSR’05) and released on www.metadata-semantics.org (no longer active). Published to the journal in 2007.
Collaboration and Collective Intelligence
2021: Mass Collaboration and the Really New Economy
Thomas Gruber (2001). 2021: Mass Collaboration and the Really New Economy. In TNTY Futures, Volume 1, Issue 6.
Speculation about political and economic changes due to Internet-based information transparency, written in 2001. TNTY Futures was the newsletter of The Next Twenty Years Series. They published articles that speculate about the future, twenty years out. Well, folks, crack open the time capsule!
Original Abstract: In twenty years, the Internet will force cataclysmic changes in business and politics, due to transparency about the quality of goods and services drawn from collective experience.
The Intraspect Knowledge Management Solution: Technical Overview
Thomas Gruber (1998). The Intraspect Knowledge Management Solution: Technical Overview.
Our first technical whitepaper, describing the Intraspect vision and how it was implemented.
Original abstract: The Intraspect Knowledge Management System provides a collaborative environment for knowledge work where people search for, collect, organize, share and collaborate around information. As they work in this environment, their work is captured and maintained as a group memory, making it available for knowledge sharing and reuse. Intraspect’s group memory contains only the information that has been put to use, and it captures the context of its use — who collected it, when it was used, for what task, how it was combined with other information, and what people in the organization said about it. The Intraspect solution is a comprehensive integration of intranet technologies for creating, working in and harvesting a group memory: distributed object storage, full text search, desktop integration, web publishing, email processing and distribution, agent-based monitoring, and brokering to other intranet services. Information in the Intraspect repository includes documents on desktops and file systems, static and dynamic web pages on remote servers, email from any Internet-compatible client or server, and any other information source served by the Internet standards HTTP or SMTP.
Enterprise Collaboration Management with Intraspect
Thomas Gruber (2001). Enterprise Collaboration Management with Intraspect. A Technical Overview White Paper, July 2001.
Slightly marketing-ized, but accurate description of the Intraspect product in 2001.
Original abstract: This paper describes Intraspect’s offerings from a technical perspective. There are four main sections, covering: the technology needed for enterprise collaboration software, how Intraspect addresses the need, the functionality of the Intraspect product line, and how the underlying technology works and can be used to create collaborative applications in the enterprise ecology.
Generative Design Rationale: Beyond the Record and Replay Paradigm
Thomas R. Gruber and Daniel M. Russell (1992). Generative design rationale: Beyond the record and replay paradigm. In T. Moran and J. H. Carroll (Eds.), Design Rationale: Concepts, Techniques, and Use. Lawrence Erlbaum Associates, 1995, pp. 323-349. ISBN 0-8058-1567-8.
Originally written in 1992, on the web in 1993, in print in 1995!
Original abstract: Research in design rationale support must confront the fundamental questions of what kinds of design rationale information should be captured, and how rationales can be used to support engineering practice. This paper examines the kinds of information used in design rationale explanations, relating them to the kinds of computational services that can be provided. Implications for the design of software tools for design rationale support are given. The analysis predicts that the “record and replay” paradigm of structured note-taking tools (electronic notebooks, deliberation notes, decision histories) may be inadequate to the task. Instead, we argue for a generative approach in which design rationale explanations are constructed, in response to information requests, from background knowledge and information captured during design. Support services based on the generative paradigm, such as design dependency management and rationale by demonstration, will require more formal integration between the rationale knowledge capture tools and existing engineering software.
Toward a Knowledge Medium for Collaborative Product Development
Thomas R. Gruber, Jay M. Tenenbaum, and Jay C. Weber. (1992). Toward a knowledge medium for collaborative product development. In John S. Gero (Eds.), Artificial Intelligence in Design ’92: Proceedings of the Second International Conference on Artificial Intelligence in Design. Boston: Kluwer Academic Publishers, 1992.
Original abstract: Information sharing and decision coordination are central problems for large-scale product development. However, existing computer tools mainly support isolated tasks, such as geometric modeling and manufacturing process planning. This paper proposes a knowledge representation to support knowledge sharing and communication for cooperative product development. The representation is being designed as a knowledge medium for human organizations, rather than a language for data exchange between tools. Existing product data, models, documents, and other forms of shared knowledge are encapsulated into a shared knowledge base as design elements. Relationships among design elements, and annotations describing their contents, are represented explicitly. The representation will afford variable levels of formalization of design knowledge. The minimal, “semiformal” level is an encapsulation of design elements as opaque objects and untyped relations among them (“hyperlinks”). Formal annotations on design elements and relationships can be incrementally enriched. The highest degree of formality includes declarative theories that support automated reasoning about how design changes impact other parts of the design and which members of the design team need to be notified. The paper analyzes the relationship between levels of formality in the shared representation and the computational services they enable.
NIKE: A National Infrastructure for Knowledge Exchange
Thomas R. Gruber, A. B. Tenenbaum, and Jay M. Tenenbaum (1994). NIKE: A National Infrastructure for Knowledge Exchange. Enterprise Integration Technologies, Menlo Park, CA.
Context: A widely circulated white paper proposing a web- and market-based infrastructure for collaborative, knowledge-based learning and work.
Abstract: This white paper advocates the development of National Information Infrastructure (NII) technologies to support lifelong learning. The immediate, predictable impact would be to overcome existing inefficiencies in the development and delivery of learning materials. An on-line marketplace will create powerful incentives to develop new materials and provide efficient means for their widespread distribution. Advanced authoring tools will allow millions of educators, students, and specialists to contribute to a growing body of learning materials.
The longer term opportunity is to integrate technologies for network-based learning and collaboration into the work environment. Knowledge workers would apply the same skills and tools used in learning — for finding, organizing, and sharing knowledge on the network — to get the job done. They would collaborate in virtual teams and organizations that cut across temporal, geographical, and institutional boundaries. They would contribute to organizational memories that preserve valuable expertise and experience when employees move on. The new way of learning, exploiting the potential of a National Information Infrastructure, will prepare a generation for a new way of working.
A Generic Knowledge-Base Access Protocol
Peter D. Karp and Thomas R. Gruber (1995). A Generic Knowledge-base Access Protocol. Proceedings of the International Joint Conferences on Artificial Intelligence, Montreal, 1995.
An ontology-based knowledge sharing API for AI people.
Intelligent User Interface
Model-based Virtual Document Generation
Thomas Gruber, Sunil Vemuri, and James Rice (1995). Model-Based Virtual Document Generation. International Journal of Human-Computer Studies, Volume 46, Issue 6, June 1997. Special issue: innovative applications of the World Wide Web. ISSN 1071-5819.
Describes the use of the web as a medium for virtual documents that generate natural language explanations of how things work.
Original Abstract: Virtual documents are hypermedia documents that are generated on demand in response to reader input. This paper describes a virtual document application that generates natural language explanations about the structure and behavior of electromechanical systems. The application structures the interaction with the reader as a question-answer dialog. Each “page” of the hyperdocument is the answer to a question, and each “link” is another question that leads to another answer. Unlike conventional hypertext documentation, the system dynamically constructs answers to questions from formal engineering models.
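The question-answer hypertext structure the abstract describes can be sketched as follows. This is a hypothetical toy model, not the actual system from the paper: a small dictionary stands in for the formal engineering models, and the example questions about a pump are invented.

```python
# Each "page" is the answer to a question; each "link" is a follow-up
# question leading to another page, generated on demand.
MODEL = {
    "What does the pump do?": (
        "The pump circulates coolant through the loop.",
        ["What drives the pump?"],
    ),
    "What drives the pump?": (
        "An electric motor drives the pump shaft.",
        [],
    ),
}

def render_page(question: str) -> str:
    """Generate one virtual-document page on demand for a question."""
    answer, followups = MODEL[question]
    links = "".join(f"\n  -> {q}" for q in followups)
    return f"Q: {question}\nA: {answer}{links}"
```

No page exists until a reader asks its question; in the real system the answer text is constructed from the engineering model rather than looked up, which is what distinguishes virtual documents from conventional hypertext.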
Machine-generated Explanations of Engineering Models: A Compositional Modeling Approach
Thomas R. Gruber and Patrice O. Gautier. (1993). Machine-generated explanations of engineering models: A compositional modeling approach. Proceedings of the 13th International Joint Conference on Artificial Intelligence, Chambery, France, pages 1502-1508, San Mateo, CA: Morgan Kaufmann, 1993.
Original Abstract: We describe a method for generating causal explanations, in natural language, of the simulated behavior of physical devices. The method is implemented in DME, a system that helps formulate mathematical simulation models from a library of model fragments using a Compositional Modeling approach. Because explanations are generated from models that are dynamically constructed from modular pieces, several of the limitations of conventional explanation techniques are overcome. Since the explanation system has access to the derivation of mathematical equations from the original model specification, the system can explain low-level quantitative behavior predicted by conventional simulation techniques in terms of salient behavioral abstractions such as physical processes, idealized components, and operating modes. Instead of relying on ad hoc causal models, crafted specifically for the explanation task, the program infers causal relationships among parameters in a constraint-based equation model. Rather than using canned, top-down templates, the text generator composes textual annotations associated with individual model fragments into coherent sentences. We show how these techniques can be combined to produce a variety of explanations about simulated systems.
Generating Explanations of Device Behavior Using Compositional Modeling and Causal Ordering
Patrice O. Gautier and Thomas R. Gruber (1993). Generating Explanations of Device Behavior Using Compositional Modeling and Causal Ordering. Proceedings of the Eleventh National Conference on Artificial Intelligence, Washington, D.C., AAAI Press/The MIT Press, 1993.
Original Abstract: Generating explanations of device behavior is a long-standing goal of AI research in reasoning about physical systems. Much of the relevant work has concentrated on new methods for modeling and simulation, such as qualitative physics, or on sophisticated natural language generation, in which the device models are specially crafted for explanatory purposes. We show how two techniques from the modeling research—compositional modeling and causal ordering—can be effectively combined to generate natural language explanations of device behavior from engineering models. The explanations offer three advances over the data displays produced by conventional simulation software: (1) causal interpretations of the data, (2) summaries at appropriate levels of abstraction (physical mechanisms and component operating modes), and (3) query-driven, natural language summaries. Furthermore, combining the compositional modeling and causal ordering techniques allows models that are more scalable and less brittle than models designed solely for explanation. However, these techniques produce models with detail that can be distracting in explanations and would be removed in hand-crafted models (e.g., intermediate variables). We present domain-independent filtering and aggregation techniques that overcome these problems.
Using the Web as an Application Interface
James Rice, Adam Farquhar, Phillippe Piernot, and Thomas Gruber (1995). Using the Web as an Application Interface. CHI ’96 Proceedings: Conference on Human Factors in Computing Systems, April 13-18, 1996, pp. 103-110, Vancouver, BC, Canada.
Pushes the “virtual document as interface” metaphor to the extreme, for that time in the Web’s evolution.
Machine Learning and Knowledge Acquisition
Nature, Nurture, and Knowledge Acquisition
Thomas R. Gruber (2013). Nature, Nurture, and Knowledge Acquisition. International Journal of Human-Computer Studies, Vol. 71, Issue 2, February 2013, pp. 191-194.
The nature vs. nurture dualism has framed the modern conversation in biology and psychology. There is an analogous distinction for Knowledge Acquisition and Artificial Intelligence. In the context of building intelligent systems, Nature means acquiring knowledge by being programmed or modeled that way. Nurture means acquiring knowledge by machine learning from data and information in the world. This paper develops the nature/nurture analogy in light of the history of Knowledge Acquisition, the current state of the art, and the future of intelligent machines learning from human knowledge.
Automated Knowledge Acquisition for Strategic Knowledge
Thomas R. Gruber (1989). Automated Knowledge Acquisition for Strategic Knowledge. Machine Learning, Volume 4, Issue 3-4 (December 1989), pp. 293-336.
Original Abstract: Strategic knowledge is used by an agent to decide what action to perform next, where actions have consequences external to the agent. This article presents a computer-mediated method for acquiring strategic knowledge. The general knowledge acquisition problem and the special difficulties of acquiring strategic knowledge are analyzed in terms of representation mismatch: the difference between the form in which knowledge is available from the world and the form required for knowledge systems. ASK is an interactive knowledge acquisition tool that elicits strategic knowledge from people in the form of justifications for action choices and generates strategy rules that operationalize and generalize the expert’s advice. The basic approach is demonstrated with a human-computer dialog in which ASK acquires strategic knowledge for medical diagnosis and treatment. The rationale for and consequences of specific design decisions in ASK are analyzed, and the scope of applicability and limitations of the approach are assessed. The paper concludes by discussing the contribution of knowledge representation to automated knowledge acquisition.
Interactive Acquisition of Justifications: Learning “Why” by Being Told “What”
Thomas R. Gruber (1991). Interactive Acquisition of Justifications: Learning “Why” by Being Told “What.” IEEE Expert, 6(4): 65-75, August 1991.
In this paper I describe an approach to automated knowledge acquisition in which users specify desired system behavior by constructing justifications of examples. Justifications are explanations of why example behaviors are appropriate in given situations. I analyze the problem of acquiring justifications, showing how current knowledge acquisition techniques are best suited for asking what-questions while justifications are naturally viewed as answers to why-questions. I sketch a new approach for acquiring justifications that transforms why-questions into what-questions, borrowing the sources of power of existing techniques. In this approach, users construct justifications by selecting facts that specify what is relevant in a situation from a space of facts provided by the elicitation tool. Justifications are then used to create operational mappings from situations to intended outcomes. I show how the approach is applied to two different knowledge acquisition problems: the acquisition of diagnostic strategy and the acquisition of design rationale. I conclude by identifying common characteristics of the two applications and discuss how their design distributes the cognitive load between human and machine.
The Acquisition of Strategic Knowledge
Thomas R. Gruber (1989). The Acquisition of Strategic Knowledge. San Diego: Academic Press, 1989. ISBN 0-12-304754-4.
The PhD thesis turned into a book.
The Acquisition of Strategic Knowledge deals with the automation of the acquisition of strategic knowledge and describes a knowledge acquisition program called ASK, which elicits strategic knowledge from domain experts and puts it in operational form. This book explores the dynamics of intelligent systems and how the components of knowledge systems (including a human expert) interact to produce intelligence. Emphasis is placed on how to represent knowledge that experts require to make decisions about actions. The move toward abstract tasks and how tasks are solved are discussed, along with their implications for knowledge acquisition, particularly the acquisition of expert strategies.
This book comprises eight chapters and begins with an overview of the knowledge acquisition problem for strategic knowledge, as well as the relevance of strategic knowledge to artificial intelligence. The next chapter describes a dialog session between the ASK knowledge acquisition assistant and the user (“the expert”). The discussion then turns to a software architecture with which to represent strategic knowledge; the design and implementation of an assistant for acquiring strategic knowledge; and approaches to knowledge acquisition. Two applications of the ASK system are considered: to evaluate the usability of the elicitation technique with real users and to test the adequacy of the strategy rule representation upon which the approach depends. The scope of ASK, its sources of power, and its underlying assumptions are also outlined.
This monograph will be a valuable resource for knowledge systems designers and those interested in artificial intelligence and expert systems.
Model Formulation as a Problem-solving Task: Computer-assisted Engineering Modeling
Thomas R. Gruber (1993). Model formulation as a problem-solving task: computer-assisted engineering modeling. In Knowledge Acquisition as Modeling, 1993, pp. 105-127, John Wiley & Sons, Inc., New York, NY, USA. ISBN:0-471-59368-0.
A meta-learning paper of the day, describing a knowledge-based theory for building knowledge-based systems.
Original Abstract: A central purpose of knowledge acquisition technology is to assist with the formulation of domain models that underlie knowledge systems. In this article we examine the model formulation process itself as a problem-solving task. Drawing from AI research in qualitative reasoning about physical systems, we characterize the model formulation task in terms of the inputs, the reasoning subtasks, and the knowledge needed to perform the problem solving. We describe the elements of a high-level representation of modeling knowledge, and techniques for providing intelligent assistance to the model builder. Applying the results from engineering modeling to knowledge acquisition in general, we identify properties of the representation that facilitate the construction of knowledge systems from libraries of reusable models.
A Method for Acquiring Strategic Knowledge
Thomas R. Gruber (1989). A Method for Acquiring Strategic Knowledge. Knowledge Acquisition, Volume 1, Issue 3 (September 1989), pp. 255-277.
Original Abstract: In this article we present an automated method for acquiring strategic knowledge from experts. Strategic knowledge is used by an agent to decide what action to perform next, where actions affect both the agent’s beliefs and the state of the external world. Strategic knowledge underlies expertise in many tasks, yet it is difficult to acquire from experts and is generally treated as an implementation problem. The knowledge acquisition method consists of the design of an operational representation for strategic knowledge, a technique for eliciting it from experts, and an interactive assistant that manages a learning dialog with the expert. The assistant elicits cases of expert-justified strategic decisions and generalizes strategic knowledge with syntactic induction guided by the expert. The knowledge acquisition method derives its power and limitations from the way in which strategic knowledge is represented and applied.
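As a rough illustration of what an "operational representation for strategic knowledge" can look like (the rule syntax and domain below are hypothetical, not the paper's actual formalism), a strategy rule can be read as a condition over the agent's current beliefs that recommends the next action; the first rule whose conditions hold fires.

```python
# Sketch of an operational strategy-rule representation.
# Rule syntax, priorities, and domain vocabulary are illustrative only.

STRATEGY_RULES = [
    # (conditions over current beliefs, recommended next action)
    ({"diagnosis_unknown", "cheap_test_available"}, "run cheapest test"),
    ({"diagnosis_unknown"},                         "gather more history"),
    ({"diagnosis_known"},                           "propose treatment"),
]

def next_action(beliefs):
    """Strategic knowledge in executable form: decide what to do next.
    The first rule whose conditions are all satisfied by the current
    beliefs recommends the action."""
    for conditions, action in STRATEGY_RULES:
        if conditions <= beliefs:  # subset test: every condition holds
            return action
    return "no applicable strategy"

print(next_action({"diagnosis_unknown", "cheap_test_available"}))
print(next_action({"diagnosis_known"}))
```

In this framing, eliciting a case of an expert-justified decision amounts to capturing one (conditions, action) pair, and generalization adjusts the conditions so the rule covers similar situations.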
Acquiring Strategic Knowledge From Experts
Thomas R. Gruber (1988). Acquiring Strategic Knowledge from Experts. International Journal of Man-Machine Studies, Volume 29, Issue 5 (November 1988), pp. 579-597. Reprinted in The Foundations of Knowledge Acquisition, 1990, pp. 115-133, Academic Press, ISBN:0-12-115922-1.
Abstract: This paper presents an approach to the problem of acquiring strategic knowledge from experts. Strategic knowledge is used to decide what course of action to take, when there are conflicting criteria to satisfy and the effects of actions are not known in advance. We show how strategic knowledge challenges the current approaches to knowledge acquisition: knowledge engineering, interactive tools for experts, and machine learning. We present a knowledge acquisition methodology embodied by an interactive tool that draws from each approach, automating much of what is currently performed by knowledge engineers, and synthesizing interactive and automatic learning techniques. The technique for eliciting strategic knowledge from experts and transforming it into an executable form addresses the technical problems of operationalization, encoding examples, biasing generalization, and the new terms problem.
Design for Acquisition: Principles of Knowledge-system Design to Facilitate Knowledge Acquisition
Thomas R. Gruber and Paul R. Cohen (1987). Design for Acquisition: Principles of Knowledge-system Design to Facilitate Knowledge Acquisition. International Journal of Man-Machine Studies, Volume 26, Issue 2 (February 1987), pp. 143-159.
Early attempt at a software engineering methodology for knowledge acquisition.
The problem of knowledge acquisition is viewed in terms of the incongruity between the representational formalisms provided by an implementation (e.g. production rules) and the formulation of problem-solving knowledge by experts. The thesis of this paper is that knowledge systems can be designed to facilitate knowledge acquisition by reducing representation mismatch. Principles of design for acquisition are presented and applied in the design of an architecture for a medical expert system called MUM. It is shown how the design of MUM makes it possible to acquire two kinds of knowledge that are traditionally difficult to acquire from experts: knowledge about evidential combination and knowledge about control. Practical implications for building knowledge-acquisition tools are discussed.