Results 1 – 4 of 4
A Fast and Accurate Dependency Parser using Neural Networks
Abstract

Cited by 32 (3 self)
Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses only a small number of dense features, it can work very fast, while achieving an approximately 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1,000 sentences per second at 92.2% unlabeled attachment score on the English Penn Treebank.
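The core idea of the abstract can be sketched as follows: rather than firing millions of sparse indicator features, the parser looks up a handful of dense embeddings and scores the parser actions with a small feed-forward network. This is an illustrative sketch, not the authors' code; all dimensions, names, and the ReLU activation are assumptions (the paper itself uses a cube activation).

```python
import numpy as np

# Hypothetical dimensions, purely for illustration.
rng = np.random.default_rng(0)
vocab_size, embed_dim, n_feats, hidden, n_actions = 100, 8, 6, 16, 3

E = rng.normal(size=(vocab_size, embed_dim))         # word/POS/label embeddings
W1 = rng.normal(size=(hidden, n_feats * embed_dim))  # input -> hidden
b1 = np.zeros(hidden)
W2 = rng.normal(size=(n_actions, hidden))            # hidden -> action scores

def score_transitions(feature_ids):
    """Look up a few dense features and score the parser actions."""
    x = E[feature_ids].reshape(-1)    # concatenate the dense embeddings
    h = np.maximum(W1 @ x + b1, 0.0)  # hidden layer (ReLU here as a stand-in)
    return W2 @ h                     # unnormalized score per action

# Greedy step: pick the best of SHIFT / LEFT-ARC / RIGHT-ARC.
scores = score_transitions([3, 17, 42, 5, 9, 60])
best_action = int(np.argmax(scores))
```

The speed claim follows from this shape: each parsing step is one small dense matrix product instead of a lookup over millions of sparse features.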
Perception processing for general intelligence: Bridging the symbolic/subsymbolic gap
In: Proceedings of AGI-12, Lecture Notes in Computer Science, 2012
Abstract

Cited by 4 (2 self)
Bridging the gap between symbolic and subsymbolic representations is a key obstacle (perhaps the key obstacle) along the path from the present state of AI achievement to human-level artificial general intelligence. One approach to bridging this gap is hybridization: for instance, incorporation of a subsymbolic system and a symbolic system into an integrative cognitive architecture. Here we present a detailed design for an implementation of this approach, via integrating a version of the DeSTIN deep learning system into OpenCog, an integrative cognitive architecture including rich symbolic capabilities. This is a "tight" integration, in which the symbolic and subsymbolic aspects exert detailed real-time influence on each other's operations. Part I of this paper described in detail the revisions to DeSTIN needed to support this integration, which are mainly along the lines of making it more "representationally transparent," so that its internal states are easier for OpenCog to understand. We describe an extension of the approach beyond vision to include multisensory integration and perception-action integration. We discuss the potential use of this integrated system to control mobile robots, as exemplified via a thought-experiment involving eye-hand coordination.
Improvements to Training an RNN parser
Abstract

Cited by 1 (0 self)
Many parsers learn sparse class distributions over trees to model natural language. Recursive Neural Networks (RNNs) use much denser representations, yet can still achieve an F-score of 92.06% for right-binarized sentences up to 15 words long. We examine an RNN model by comparing it with an abstract generative probabilistic model using a Deep Belief Network (DBN). The DBN provides both an upward- and a downward-pointing conditional model, drawing a connection between RNN and Charniak-type parsers, while analytically predicting average scoring parameters in the RNN. In addition, we apply the RNN to longer sentences and develop two methods which, while having a negligible effect on short-sentence parsing, improve the parsing F-score by 0.83% on longer sentences.
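The "denser representation" an RNN parser builds can be sketched as follows: each constituent of a binarized tree gets a small vector computed from its children, and the tree's score is the sum of per-constituent scores. This is a minimal sketch under assumed dimensions, not the model from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4                                    # illustrative vector size
W = rng.normal(size=(d, 2 * d)) * 0.1    # shared composition matrix
b = np.zeros(d)
w_score = rng.normal(size=d)             # scoring vector for a constituent

def compose(left, right):
    """Merge two child vectors into a parent vector, plus a local score."""
    parent = np.tanh(W @ np.concatenate([left, right]) + b)
    return parent, float(w_score @ parent)

# Toy leaves for a three-word sentence, right-binarized: (w0 (w1 w2)).
leaves = [rng.normal(size=d) for _ in range(3)]
inner, s1 = compose(leaves[1], leaves[2])
root, s2 = compose(leaves[0], inner)
tree_score = s1 + s2                     # total score of this candidate tree
```

Parsing then amounts to searching over candidate binarizations for the highest-scoring tree; the shared matrix W is what makes the representation dense rather than a sparse class distribution.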
Bayesian Network Automata for Modelling Unbounded Structures
Abstract
This paper proposes a framework which unifies graphical model theory and formal language theory through automata theory. Specifically, we propose Bayesian Network Automata (BNAs) as a formal framework for specifying graphical models of arbitrarily large structures, or equivalently, specifying probabilistic grammars in terms of graphical models. BNAs use a formal automaton to specify how to construct an arbitrarily large Bayesian Network by connecting multiple copies of a bounded Bayesian Network. Using a combination of results from graphical models and formal language theory, we show that, for a large class of automata, the complexity of inference with a BNA is bounded by the complexity of inference in the bounded Bayesian Network times the complexity of inference for the equivalent stochastic automaton. This illustrates that BNAs provide a useful framework for developing and analysing models and algorithms for structure prediction.
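The simplest instance of the construction the abstract describes is unrolling one bounded network (a single transition/emission slice) into an arbitrarily long chain, i.e. a hidden Markov model: inference over the whole structure then costs one bounded-inference step per copy, matching the stated complexity bound. This sketch is an illustration of that special case, not the BNA formalism itself; all probabilities are made up.

```python
import numpy as np

# Bounded "slice" network: 2 hidden states, 2 observation symbols.
T = np.array([[0.7, 0.3], [0.4, 0.6]])   # state-transition table
E = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission table
prior = np.array([0.5, 0.5])

def chain_likelihood(obs):
    """Forward algorithm over the unrolled chain of slice copies."""
    alpha = prior * E[:, obs[0]]
    for o in obs[1:]:                     # one bounded-inference step per copy
        alpha = (alpha @ T) * E[:, o]
    return float(alpha.sum())             # P(observation sequence)

p = chain_likelihood([0, 1, 0, 0])
```

The per-step work is fixed by the bounded slice, so total cost grows linearly with the number of copies, an instance of the "bounded-network cost times automaton cost" bound the abstract states.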