Results 1 - 10 of 529
Improved Inference for Unlexicalized Parsing
, 2007
"... We present several improvements to unlexicalized parsing with hierarchically state-split PCFGs. First, we present a novel coarse-to-fine method in which a grammar’s own hierarchical projections are used for incremental pruning, including a method for efficiently computing projections of a grammar wi ..."
Abstract - Cited by 255 (29 self)
without a treebank. In our experiments, hierarchical pruning greatly accelerates parsing with no loss in empirical accuracy. Second, we compare various inference procedures for state-split PCFGs from the standpoint of risk minimization, paying particular attention to their practical tradeoffs. Finally, we
Multi-robot exploration controlled by a market economy
, 2002
"... This work presents a novel approach to efficient multirobot mapping and exploration which exploits a market architecture in order to maximize information gain while minimizing incurred costs. This system is reliable and robust in that it can accommodate dynamic introduction and loss of team members ..."
Abstract - Cited by 185 (16 self)
Efficient Online and Batch Learning using Forward Backward Splitting
"... We describe, analyze, and experiment with a framework for empirical loss minimization with regularization. Our algorithmic framework alternates between two phases. On each iteration we first perform an unconstrained gradient descent step. We then cast and solve an instantaneous optimization problem ..."
Abstract - Cited by 130 (1 self)
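The two-phase iteration the snippet describes (an unconstrained gradient step, then an instantaneous proximal problem) can be sketched for the ℓ1-regularized case, where the second phase has a closed form (soft-thresholding). The function name and signature here are illustrative, not the paper's API:

```python
import numpy as np

def forward_backward_step(w, grad, eta, lam):
    """One forward-backward splitting iteration (illustrative sketch).

    Phase 1: unconstrained gradient descent step on the empirical loss.
    Phase 2: solve min_v 0.5*||v - w_half||^2 + eta*lam*||v||_1, whose
             closed-form solution is coordinate-wise soft-thresholding.
    """
    w_half = w - eta * grad  # forward (gradient) step
    # backward (proximal) step: shrink each coordinate toward zero
    return np.sign(w_half) * np.maximum(np.abs(w_half) - eta * lam, 0.0)
```

With a zero gradient the step reduces to pure soft-thresholding, which makes the sparsity-inducing effect of the second phase easy to see.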
Realistic and Efficient Multi-Channel Communications in Wireless Sensor Networks
"... Abstract—This paper demonstrates how to use multiple channels to improve communication performance in Wireless Sensor Networks (WSNs). We first investigate multi-channel realities in WSNs through intensive empirical experiments with MicaZ motes. Our study shows that current multi-channel protocols a ..."
Abstract - Cited by 72 (3 self)
Communication-efficient distributed optimization of self-concordant empirical loss
- arXiv preprint arXiv:1501.00263
, 2015
"... Abstract We consider distributed convex optimization problems originated from sample average approximation of stochastic optimization, or empirical risk minimization in machine learning. We assume that each machine in the distributed computing system has access to a local empirical loss function, c ..."
Abstract - Cited by 4 (1 self)
, constructed with i.i.d. data sampled from a common distribution. We propose a communication-efficient distributed algorithm to minimize the overall empirical loss, which is the average of the local empirical losses. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps
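The inexact damped Newton step mentioned in the snippet can be illustrated with an exact linear solve (the paper instead computes the direction inexactly, via a distributed preconditioned conjugate gradient method); all names here are illustrative:

```python
import numpy as np

def damped_newton_step(w, grad, hess):
    """One damped Newton step for a self-concordant loss (exact-solve sketch).

    The damping factor 1/(1 + delta), where delta is the Newton decrement,
    keeps iterates inside the region where self-concordance guarantees apply.
    """
    v = np.linalg.solve(hess, grad)   # Newton direction H^{-1} g (exact here)
    delta = np.sqrt(grad @ v)         # Newton decrement sqrt(g^T H^{-1} g)
    return w - v / (1.0 + delta)      # damped update
```

On a simple quadratic loss the decrement equals the gradient norm, so the step halves the distance to the minimizer on the first iteration.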
Tributaries and deltas: Efficient and robust aggregation in sensor network streams
- In SIGMOD
, 2005
"... Existing energy-efficient approaches to in-network aggregation in sensor networks can be classified into two categories, tree-based and multi-path-based, with each having unique strengths and weaknesses. In this paper, we introduce Tributary-Delta, a novel approach that combines the advantages of th ..."
Abstract - Cited by 115 (2 self)
difficult aggregate for this context— finding frequent items—can be efficiently computed within the framework. To this end, we devise the first algorithm for frequent items (and for quantiles) that provably minimizes the worst case total communication for non-regular trees. In addition, we give a multi
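As a point of reference for the frequent-items problem the snippet mentions, here is the classic Misra-Gries summary (not the paper's algorithm for non-regular trees); its small, mergeable state is what makes summaries of this kind usable for in-network aggregation:

```python
def misra_gries(stream, k):
    """Misra-Gries frequent-items summary with at most k-1 counters.

    Any item occurring more than n/k times in a stream of length n is
    guaranteed to survive in the returned summary.
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # No free counter: decrement all, dropping those that hit zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

Two such summaries can be combined by summing counters and re-pruning, which is the property an aggregation tree needs when partial results flow upward toward the sink.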
Efficient and robust feature selection via joint ℓ2,1-norms minimization
- In NIPS
, 2010
"... Feature selection is an important component of many machine learning applications. Especially in many bioinformatics tasks, efficient and robust feature selection methods are desired to extract meaningful features and eliminate noisy ones. In this paper, we propose a new robust feature selection met ..."
Abstract - Cited by 71 (24 self)
method with emphasizing joint ℓ2,1-norm minimization on both loss function and regularization. The ℓ2,1-norm based loss function is robust to outliers in data points and the ℓ2,1-norm regularization selects features across all data points with joint sparsity. An efficient algorithm is introduced
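The ℓ2,1 norm referred to in the snippet sums the ℓ2 norms of a matrix's rows; penalizing it drives entire rows of the weight matrix to zero, which is why features are selected jointly across all data points. A minimal sketch:

```python
import numpy as np

def l21_norm(W):
    """ℓ2,1 norm of a matrix W: sum over rows of each row's ℓ2 norm.

    A row of W corresponds to one feature's weights across all outputs,
    so an ℓ2,1 penalty zeroes features as whole rows (joint sparsity).
    """
    return float(np.sum(np.sqrt(np.sum(W * W, axis=1))))
```

For a matrix whose rows are [3, 4] and [0, 0], the norm is 5 + 0 = 5: the zero row contributes nothing, reflecting a deselected feature.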
Efficient gathering of correlated data in sensor networks
- In ACM Trans. on Sensor Networks
, 2008
"... In this paper, we design techniques that exploit data correlations in sensor data to minimize communication costs (and hence, energy costs) incurred during data gathering in a sensor network. Our proposed approach is to select a small subset of sensor nodes that may be sufficient to reconstruct da ..."
Abstract - Cited by 69 (0 self)
DiSCO: Distributed Optimization for Self-Concordant Empirical Loss
"... We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and comm ..."
Abstract
and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are i.i.d. sampled and when
Efficient multicast stream authentication using erasure codes
- ACM Transactions on Information and System Security
, 2003
"... We describe a novel method for authenticating multicast packets that is robust against packet loss. Our focus is to minimize the size of the communication overhead required to authenticate the packets. Our approach is to encode the hash values and the signatures with Rabin’s Information Dispersal Al ..."
Abstract - Cited by 43 (2 self)