Results 1 - 5 of 5
Formalising the Knowledge Content of Case Memory Systems
 Progress in Case-Based Reasoning: First United Kingdom Workshop in Case-Based Reasoning
, 1995
Abstract

Cited by 6 (3 self)
Discussions of case-based reasoning often reflect an implicit assumption that a case memory system will become better informed, i.e. will increase in knowledge, as more cases are added to the case-base. This paper considers formalisations of this 'knowledge content' which are a necessary preliminary to more rigorous analysis of the performance of case-based reasoning systems. In particular we are interested in modelling the learning aspects of case-based reasoning in order to study how the performance of a case-based reasoning system changes as it accumulates problem-solving experience. The current paper presents a 'case-base semantics' which generalises recent formalisations of case-based classification. Within this framework, the paper explores various issues in ensuring that these semantics are well-defined, and illustrates how the knowledge content of the case memory system can be seen to reside in both the chosen similarity measure and in the cases of the case-base. ...
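The abstract's central observation, that the knowledge content of a case memory system resides in both the similarity measure and the stored cases, can be illustrated with a minimal sketch. The names and the two toy similarity measures below are illustrative assumptions, not definitions taken from the paper: the same case-base classifies the same query differently under different measures.

```python
# Illustrative sketch (all names are assumptions, not from the paper):
# the same case-base yields different answers under different similarity
# measures, so the system's 'knowledge' lives in both components.

case_base = [((1, 1, 0), "A"), ((0, 0, 1), "B")]  # (description, label) pairs

def overlap(x, y):
    # similarity = number of features on which the descriptions agree
    return sum(a == b for a, b in zip(x, y))

def first_feature_only(x, y):
    # a deliberately crude measure that attends to feature 0 alone
    return 1 if x[0] == y[0] else 0

def classify(query, sim):
    # answer with the label of the most similar stored case
    return max(case_base, key=lambda case: sim(query, case[0]))[1]

query = (1, 0, 1)
print(classify(query, overlap))             # case B agrees on 2 features
print(classify(query, first_feature_only))  # case A matches on feature 0
```

Here the case-base is held fixed, yet swapping the similarity measure changes the classification of the query, which is the informal sense in which knowledge content is shared between the two.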
Two Modelling Approaches Applied to User Interfaces to Theorem Proving Assistants
 Proceedings of the 2nd International Workshop on User Interface Design for Theorem Proving Systems: University of
, 1996
Abstract

Cited by 3 (0 self)
We model the domain of formal proof without allocating specific functions to the user or the computer. This allows us to define key attributes of proof succinctly. Then we model some aspects of the user's interaction with a theorem proving assistant which provides automation only of the lowest-level functions. This model is then used to understand some of the strengths and weaknesses of an existing system, PVS.
1 Introduction
Many failures of poor designs may be attributed to a mismatch between the vision of the designer and the reality of the user's performance. This applies particularly to user interfaces of theorem proving assistants (TPAs). We take an initial step towards techniques for analysing interactive TPAs which help the designer to understand the essential structures being transformed and the nature of the proving activity. In the same way as the designer of a new food processor would start by considering the function performed by food processors and how they are used by c...
Inductive Bias in Case-Based Reasoning Systems
 Department of Computer Science, University of York, York
, 1995
Abstract

Cited by 2 (1 self)
In order to learn more about the behaviour of case-based reasoners as learning systems, we formalise a simple case-based learner as a PAC learning algorithm, using the case-based representation ⟨CB, σ⟩. We first consider a 'naive' case-based learning algorithm CB1(σ_H) which learns by collecting all available cases into the case-base and which calculates similarity by counting the number of features on which two problem descriptions agree. We present results concerning the consistency of this learning algorithm and give some partial results regarding its sample complexity. We are able to characterise CB1(σ_H) as a 'weak but general' learning algorithm. We then consider how the sample complexity of case-based learning can be reduced for specific classes of target concept by the application of inductive bias, or prior knowledge of the class of target concepts. Following recent work demonstrating how case-based learning can be improved by choosing a similarity measure appropriate to t...
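The 'naive' learner described in this abstract is concrete enough to sketch: CB1 stores every observed case, and σ_H scores similarity as the number of features on which two boolean problem descriptions agree. The class and method names below are illustrative assumptions; only the storage rule and the feature-counting similarity come from the abstract.

```python
# A minimal sketch of the 'naive' case-based learner CB1(sigma_H) described
# above. Class/method names are illustrative assumptions; the behaviour
# (store every case; similarity = count of agreeing features) is the one
# the abstract describes.

def similarity(x, y):
    """sigma_H: number of features on which two descriptions agree."""
    return sum(a == b for a, b in zip(x, y))

class CB1:
    def __init__(self):
        self.case_base = []  # list of (description, label) pairs

    def learn(self, description, label):
        # CB1 simply collects all available cases into the case-base.
        self.case_base.append((description, label))

    def classify(self, description):
        # Answer with the label of the most similar stored case.
        _, label = max(self.case_base,
                       key=lambda case: similarity(description, case[0]))
        return label

cb = CB1()
cb.learn((1, 0, 1), "pos")
cb.learn((0, 1, 0), "neg")
print(cb.classify((1, 0, 0)))  # agrees with the first case on 2 features
```

The inductive-bias point of the paper would correspond, in this sketch, to replacing `similarity` with a measure tailored to the target concept class rather than the uniform feature count.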
Theorem Proving
 University of Glasgow, UK
, 1995
Abstract
2 Interfaces to Theorem Proving Assistants
So that a Theorem Proving Assistant may operate effectively there must be a close coupling between the human prover and the assistant. Information at a variety of levels is shared between them. This makes the interface a particularly important component in the whole system. In order to assess existing theorem proving system (human and assistant) interfaces and to build effective new interfaces, we need some criteria. Our ideal is that the assistant should support the human's notion of the proof rather than dictating one of its own. We explore ways in which the assistant can be improved by considering cognitive mechanisms employed by the human as well as system mechanisms supported by the assistant.
2.1 Cognitive Mechanisms
In analysing the human's role in the work of theorem proving, it may be useful to employ Rasmussen's categorisation of skill, rule and knowledge level behaviou...