Results 1 - 10 of 189

Architectural Styles and the Design of Network-based Software Architectures
, 2000
"... The World Wide Web has succeeded in large part because its software architecture has been designed to meet the needs of an Internet-scale distributed hypermedia system. The Web has been iteratively developed over the past ten years through a series of modifications to the standards that define its ..."
Abstract

Cited by 1119 (1 self)
... the architectural design of network-based application software through principled use of architectural constraints, thereby obtaining the functional, performance, and social properties desired of an architecture. An architectural style is a named, coordinated set of architectural constraints. This dissertation ...
Faster rates in regression via active learning
 in Proceedings of NIPS
, 2005
"... In this paper we address the theoretical capabilities of active sampling for estimating functions in noise. Specifically, the problem we consider is that of estimating a function from noisy pointwise samples, that is, the measurements which are collected at various points over the domain of the fun ..."
Abstract

Cited by 46 (9 self)
in comparison to the performance of classical (passive) methods. We present results characterizing the fundamental limits of active learning for various nonparametric function classes, as well as practical algorithms capable of exploiting the extra flexibility of the active setting and provably improving
Dynamic Network Functional Comparison via Approximate-bisimulation
"... Abstract: It is generally unknown how to formally determine whether different neural networks have a similar behaviour. This question intimately relates to the problem of finding a suitable similarity measure to identify bounds on the input-output response distances of neural networks, which has s ..."
Abstract
... setting the concept of δ-approximate bisimulation techniques for nonlinear systems. We have positively tested the proposed approach over continuous-time recurrent neural networks (CTRNNs).
Ranking via Sinkhorn Propagation
"... Abstract: It is of increasing importance to develop learning methods for ranking. In contrast to many learning objectives, however, the ranking problem presents difficulties due to the fact that the space of permutations is not smooth. In this paper, we examine the class of rank-linear objective fu ..."
Abstract
... functions, which includes popular metrics such as precision and discounted cumulative gain. In particular, we observe that expectations of these gains are completely characterized by the marginals of the corresponding distribution over permutation matrices. Thus, the expectations of rank-linear objectives ...
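The snippet above appeals to marginals of a distribution over permutation matrices; the Sinkhorn step such approaches build on can be sketched as follows (a minimal illustration, not the paper's algorithm; names and iteration count are assumptions):

```python
import numpy as np

def sinkhorn(scores, n_iters=50):
    """Sinkhorn normalization: alternately rescale rows and columns of a
    positive matrix so it converges toward a doubly stochastic matrix,
    whose rows and columns behave like marginals over permutations."""
    P = np.exp(scores)                        # strictly positive entries
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # rows sum to 1
        P = P / P.sum(axis=0, keepdims=True)  # columns sum to 1
    return P

rng = np.random.default_rng(0)
P = sinkhorn(rng.standard_normal((4, 4)))
# after iterating, row and column sums are all (approximately) 1
```

For positive matrices this alternating normalization converges linearly, which is why it is attractive inside a differentiable learning objective.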
Improved Generalization via Tolerant Training
 Journal of Optimization Theory and Applications
, 1998
"... Theoretical and computational justification is given for improved generalization when the training set is learned with less accuracy. The model used for this investigation is a simple linear one. It is shown that learning a training set with a tolerance τ improves generalization, over zero-toleranc ..."
Abstract

Cited by 8 (6 self)
Theoretical and computational justification is given for improved generalization when the training set is learned with less accuracy. The model used for this investigation is a simple linear one. It is shown that learning a training set with a tolerance τ improves generalization, over zero ...
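One way to picture the tolerance idea in the snippet above is an epsilon-insensitive loss on a linear model, where residuals smaller than the tolerance incur no penalty. This is a hedged sketch of the general idea, not the paper's exact formulation; the loss form, data, and names are assumptions:

```python
import numpy as np

def tolerant_loss(w, X, y, tau):
    """Sum of absolute residuals beyond the tolerance tau; residuals
    within +/- tau contribute nothing (epsilon-insensitive style)."""
    residuals = X @ w - y
    return float(np.sum(np.maximum(np.abs(residuals) - tau, 0.0)))

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.1])
w = np.array([1.0, 2.0])                 # fits the first two points exactly
print(tolerant_loss(w, X, y, tau=0.0))   # ≈ 0.1: only the third residual is penalized
print(tolerant_loss(w, X, y, tau=0.2))   # 0.0: every residual is within tolerance
```

Training against such a loss stops rewarding exact interpolation of the training data, which is the mechanism the abstract credits for better generalization.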
Reinforcement Learning with Modular Neural Networks for Control
 In IEEE International Workshop on Neural Networks Application to Control and Image Processing
, 1994
"... Reinforcement learning methods can be applied to control problems with the objective of optimizing the value of a function over time. They have been used to train single neural networks that learn solutions to whole tasks. Jacobs and Jordan [5] have shown that a set of expert networks combined via a ..."
Abstract

Cited by 14 (2 self)
Reinforcement learning methods can be applied to control problems with the objective of optimizing the value of a function over time. They have been used to train single neural networks that learn solutions to whole tasks. Jacobs and Jordan [5] have shown that a set of expert networks combined via ...
Learning Mixtures of Submodular Functions for Image Collection Summarization
"... We address the problem of image collection summarization by learning mixtures of submodular functions. Submodularity is useful for this problem since it naturally represents characteristics such as fidelity and diversity, desirable for any summary. Several previously proposed image summarization sco ..."
Abstract

Cited by 9 (2 self)
... scoring methodologies, in fact, instinctively arrived at submodularity. We provide classes of submodular component functions (including some which are instantiated via a deep neural network) over which mixtures may be learnt. We formulate the learning of such mixtures as a supervised problem via large ...
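As an illustration of the submodularity the snippet appeals to (a minimal sketch under assumed names, not the paper's learned mixture), a facility-location objective over a pairwise similarity matrix is monotone submodular and can be greedily maximized to pick a summary:

```python
import numpy as np

def facility_location(S, sim):
    """Coverage score of summary S: each item is credited with its
    similarity to the closest selected element (monotone submodular)."""
    if not S:
        return 0.0
    return float(np.sum(np.max(sim[:, S], axis=1)))

def greedy_summary(sim, k):
    """Greedy selection; for monotone submodular objectives this gives
    the classical (1 - 1/e) approximation guarantee."""
    S = []
    for _ in range(k):
        base = facility_location(S, sim)
        gains = {j: facility_location(S + [j], sim) - base
                 for j in range(sim.shape[0]) if j not in S}
        S.append(max(gains, key=gains.get))
    return S

# Toy similarity matrix: items 0 and 1 are near-duplicates, item 2 is distinct.
sim = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
summary = greedy_summary(sim, k=2)  # → [1, 2]: one of the pair, plus the outlier
```

The diminishing-returns property is what makes the greedy choice skip the redundant near-duplicate, which is exactly the diversity behaviour the abstract describes.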
Leakage-Resilient Pseudorandom Functions and Side-Channel Attacks on Feistel Networks
"... Abstract. A cryptographic primitive is leakage-resilient, if it remains secure even if an adversary can learn a bounded amount of arbitrary information about the computation with every invocation. As a consequence, the physical implementation of a leakage-resilient primitive is secure against every ..."
Abstract

Cited by 15 (1 self)
... round Feistel network over 2n bits making 4·(n+1)^(r-2) forward queries, if with each query we are also given as leakage the Hamming weight of the inputs to the r round functions. This complements the result from the previous item showing that a super-constant number of rounds is necessary.
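The Feistel structure attacked in the snippet above can be sketched generically (an illustrative toy, not the paper's construction; the round function f and key values here are arbitrary assumptions, and this toy is not secure):

```python
def feistel_encrypt(left, right, round_keys, f):
    """Generic Feistel network: each round XORs the round function of one
    half into the other, then swaps; invertible for any round function f."""
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return left, right

def feistel_decrypt(left, right, round_keys, f):
    """Runs the rounds with the key schedule reversed, undoing encryption."""
    for k in reversed(round_keys):
        left, right = right ^ f(left, k), left
    return left, right

# Toy 16-bit round function and keys (for illustration only).
f = lambda x, k: ((x * 2654435761) ^ k) & 0xFFFF
keys = [0x3, 0x8D, 0x3B, 0x1A]
ct = feistel_encrypt(0x1234, 0xABCD, keys, f)
assert feistel_decrypt(*ct, keys, f) == (0x1234, 0xABCD)
```

Because each half of the state passes through the round function, per-round leakage such as Hamming weights (as in the attack quoted above) exposes information about every round's input.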