Results 1 - 4 of 4
System Identification, Approximation and Complexity
International Journal of General Systems, 1977
Abstract

Cited by 34 (23 self)
This paper is concerned with establishing broadly-based system-theoretic foundations and practical techniques for the problem of system identification that are rigorous, intuitively clear and conceptually powerful. A general formulation is first given in which two order relations are postulated on a class of models: a constant one of complexity; and a variable one of approximation induced by an observed behaviour. An admissible model is such that any less complex model is a worse approximation. The general problem of identification is that of finding the admissible subspace of models induced by a given behaviour. It is proved under very general assumptions that, if deterministic models are required, then nearly all behaviours require models of nearly maximum complexity. A general theory of approximation between models and behaviour is then developed, based on subjective probability concepts and semantic information theory. The roles of structural constraints such as causality, locality, finite memory, etc., are then discussed as rules of the game. These concepts and results are applied to the specific problem of stochastic automaton, or grammar, inference. Computational results are given to demonstrate that the theory is complete and fully operational. Finally, the formulation of identification proposed in this paper is analysed in terms of Klir’s epistemological hierarchy, and both are discussed in terms of the rich philosophical literature on the acquisition of knowledge.
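The admissibility criterion stated in the abstract (a model is admissible when every strictly less complex model is a strictly worse approximation) can be sketched in a few lines. Representing candidate models as plain (complexity, error) pairs is my own illustration, not the paper's formalism.

```python
def admissible(candidates):
    """Return the admissible subset of candidate models.

    candidates: list of (complexity, error) pairs, where error measures
    how badly the model approximates the observed behaviour (lower is
    better). A candidate is admissible when every strictly less complex
    candidate is a strictly worse approximation.
    """
    result = []
    for c, e in candidates:
        if all(e2 > e for c2, e2 in candidates if c2 < c):
            result.append((c, e))
    return result
```

For example, with candidates `[(1, 0.5), (2, 0.3), (3, 0.3)]`, the third model is rejected: a less complex model (complexity 2) approximates the behaviour equally well, so the extra complexity buys nothing.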
Tenacious Tortoises: A formalism for argument over rules of inference
Computational Dialectics (ECAI 2000 Workshop), 2000
Abstract

Cited by 10 (7 self)
As multiagent systems proliferate and employ different and more sophisticated formal logics, it is increasingly likely that agents will be reasoning with different rules of inference. Hence, an agent seeking to convince another of some proposition may first have to convince the latter to use a rule of inference which it has not thus far adopted. We define a formalism to represent degrees of acceptability or validity of rules of inference, to enable autonomous agents to undertake dialogue concerning inference rules. Even when they disagree over the acceptability of a rule, two agents may still use the proposed formalism to reason collaboratively.
Supervaluationism and Its Logics
Abstract

Cited by 5 (0 self)
If we adopt a supervaluational semantics for vagueness, what sort of logic results? As it turns out, the answer depends crucially on how the standard notion of validity as truth preservation is recast. There are several ways of doing this within a supervaluational framework, the main alternative being between ‘global’ construals (e.g. an argument is valid if and only if it preserves truth-under-all-precisifications) and ‘local’ construals (an argument is valid if and only if, under all precisifications, it preserves truth). The former alternative is by far more popular, but I argue in favour of the latter, for (i) it does not suffer from a number of serious objections, and (ii) it makes it possible to restore global validity as a defined notion.

Supervaluationism is a mixed bag. It is sometimes described as the ‘standard’ theory of vagueness, at least in so far as vagueness is construed as a semantic phenomenon, but exactly what that standard theory amounts to is far from clear. In fact, it is pretty clear that there is not just one supervaluational semantics out there—there are lots of such semantics; and although it is true that they all exploit the same insight, their relative differences are by no means immaterial. For one thing, a lot depends on how exactly supervaluations are constructed, that is, on how exactly we come to establish the truth-value of a given statement. (And when I say that a lot depends on this I mean to say that different explanations may give rise to different philosophical worries, or justify different reactions.) Secondly, and equally importantly, a lot depends on how a given supervaluationary machinery is brought into play when it comes to explaining the logic of the language, that is, not the notion of truth, or ‘supertruth’, as it applies to individual statements, but the notion of validity, or ‘supervalidity’, as it applies to whole arguments.
(I am thinking for instance of how different explanations may bear on the question of whether, or to what extent, vagueness involves a departure from classical logic.) Here I want to focus on this second part of the story. However, since the notion of validity depends on the notion of truth—or so one may argue—I also want to comment briefly on the first.
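The global/local contrast can be made concrete with a toy sketch. The representation is my own, not the author's: a precisification is a classical truth assignment, and a statement is a function from assignments to truth values.

```python
def supertrue(stmt, precs):
    # Supertrue: true under every admissible precisification.
    return all(stmt(v) for v in precs)

def globally_valid(premises, conclusion, precs):
    # 'Global' construal: the argument preserves
    # truth-under-all-precisifications, i.e. supertrue premises
    # guarantee a supertrue conclusion.
    if all(supertrue(s, precs) for s in premises):
        return supertrue(conclusion, precs)
    return True  # holds vacuously when some premise is not supertrue

def locally_valid(premises, conclusion, precs):
    # 'Local' construal: under each precisification taken on its own,
    # true premises yield a true conclusion.
    return all(
        not all(s(v) for s in premises) or conclusion(v)
        for v in precs
    )
```

One simple way the two construals come apart in this toy setting: if p is true on only one precisification and q is true on none, the argument from p to q is globally valid, but only vacuously (p is not supertrue), while it fails locally on the precisification where p holds and q does not.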
Evaluating Defaults
2002
Abstract
We seek to find normative criteria of adequacy for nonmonotonic logic similar to the criterion of validity for deductive logic. Rather than stipulating that the conclusion of an inference be true in all models in which the premises are true, we require that the conclusion of a nonmonotonic inference be true in “almost all” models of a certain sort in which the premises are true. This “certain sort” specification picks out the models that are relevant to the inference, taking into account factors such as specificity and vagueness, and previous inferences. The frequencies characterizing the relevant models reflect known frequencies in our actual world. The criteria of adequacy for a default inference can be extended by thresholding to criteria of adequacy for an extension. We show that this avoids the implausibilities that might otherwise result from the chaining of default inferences. The model proportions, when construed in terms of frequencies, provide a verifiable grounding of default rules, and can become the basis for generating default rules from statistics.
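The “almost all” criterion can be illustrated with a brute-force sketch. The uniform enumeration of truth assignments and the particular threshold value below are my assumptions for illustration; the paper instead weights the relevant models by known frequencies in the actual world.

```python
from itertools import product

def models(atoms):
    # Enumerate every truth assignment over the given atoms.
    for values in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def default_accepted(premises, conclusion, atoms, threshold=0.9):
    """Accept a default inference when the conclusion holds in at
    least a threshold proportion of the models satisfying the premises.

    premises, conclusion: functions from a model (dict) to bool.
    """
    relevant = [m for m in models(atoms) if all(p(m) for p in premises)]
    if not relevant:
        return False
    proportion = sum(conclusion(m) for m in relevant) / len(relevant)
    return proportion >= threshold
```

For instance, over atoms a, b, c, the default from a to (a or b) is accepted (it holds in every relevant model), while the default from a to b is not (it holds in only half of the models satisfying a, below the threshold).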