Results 1 - 10 of 883
A unified architecture for natural language processing: Deep neural networks with multitask learning
2008
"... We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and sem ..."
Abstract - Cited by 340 (13 self)
Machine Learning Research: Four Current Directions
1997
"... Machine Learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (a) improving classification accuracy by learning ensembles of classifiers, (b) methods for scaling up super ..."
Abstract - Cited by 287 (0 self)
"... learning theory, neural networks, statistics, and pattern recognition have discovered one another and begun to work together. Second, machine learning techniques are being applied to new kinds of problems including knowledge discovery in databases, language processing, robot control, and combinatorial ..."
GenTHREADER: an efficient and reliable protein fold recognition method for genomic sequences
- J Mol Biol, 1999
"... A new protein fold recognition method is described which is both fast and reliable. The method uses a traditional sequence alignment algorithm to generate alignments which are then evaluated by a method derived from threading techniques. As a final step, each threaded model is evaluated by a neural n ..."
Abstract - Cited by 214 (10 self)
Making Pure Object-Oriented Languages Practical
- In OOPSLA '91 Conference Proceedings, 1991
"... In the past, object-oriented language designers and programmers have been forced to choose between pure message passing and performance. Last year, our SELF system achieved close to half the speed of optimized C but suffered from impractically long compile times. Two new optimization techniques, def ..."
Abstract - Cited by 122 (22 self)
"... deferred compilation of uncommon cases and non-backtracking splitting using path objects, have improved compilation speed by more than an order of magnitude. SELF now compiles about as fast as an optimizing C compiler and runs at over half the speed of optimized C. This new level of performance may make ..."
First and Second-Order Methods for Learning: between Steepest Descent and Newton's Method
- Neural Computation, 1992
"... On-line first order backpropagation is sufficiently fast and effective for many large-scale classification problems but for very high precision mappings, batch processing may be the method of choice. This paper reviews first- and second-order optimization methods for learning in feedforward neura ..."
Abstract - Cited by 177 (7 self)
"... neural networks. The viewpoint is that of optimization: many methods can be cast in the language of optimization techniques, allowing the transfer to neural nets of detailed results about computational complexity and safety procedures to ensure convergence and to avoid numerical problems. The review ..."
Efficient Reinforcement Learning through Symbiotic Evolution
- Machine Learning, 1996
"... . This article presents a new reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task. Symbiotic evolution promotes both cooperation and specialization, whi ..."
Abstract - Cited by 161 (38 self)
"... neuro-evolution approach without loss of generalization. Such efficient learning, combined with few domain assumptions, makes SANE a promising approach to a broad range of reinforcement learning problems, including many real-world applications. Keywords: Neuro-Evolution, Reinforcement Learning, Genetic Algorithms, Neural Networks"
Fast and Robust Neural Network Joint Models for Statistical Machine Translation
"... Recent work has shown success in using neural network language models (NNLMs) as features in MT systems. Here, we present a novel formulation for a neural network joint model (NNJM), which augments the NNLM with a source context window. Our model is purely lexicalized and can be integrated into an ..."
Abstract - Cited by 43 (1 self)
Fast Neural Net Simulation with a DSP Processor Array
1993
"... This paper describes the implementation of a fast neural net simulator on a novel parallel distributed-memory computer. A 60-processor system, named MUSIC, is operational and runs the back-propagation algorithm at a speed of 247 million connection updates per second (continuous weight update) ..."
Abstract - Cited by 7 (0 self)
Neural Learning Using Orthogonal Arrays
"... The paper proposes the use of orthogonal arrays for neural network learning. Learning can be seen as a search for the neural weights that give an optimal network performance. The search/optimization adopted here is inspired by methods based on Orthogonal Arrays (a special set of Latin Squares) ..."
Abstract
Generating Text with Recurrent Neural Networks
"... Recurrent Neural Networks (RNNs) are very powerful sequence models that do not enjoy widespread use because it is extremely difficult to train them properly. Fortunately, recent advances in Hessian-free optimization have been able to overcome the difficulties associated with training RNNs, making it ..."
Abstract - Cited by 73 (3 self)