Results 1–10 of 38
A discriminative matching approach to word alignment
In Proceedings of HLT-EMNLP, 2005
Abstract

Cited by 85 (7 self)
We present a discriminative, large-margin approach to feature-based matching for word alignment. In this framework, pairs of word tokens receive a matching score, which is based on features of that pair, including measures of association between the words, distortion between their positions, similarity of the orthographic form, and so on. Even with only 100 labeled training examples and simple features which incorporate counts from a large unlabeled corpus, we achieve AER performance close to IBM Model 4, in much less time. Including Model 4 predictions as features, we achieve a 22% relative AER reduction over intersected Model 4 alignments.
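The matching formulation in this abstract can be made concrete with a toy example: once every candidate word pair has a score, a one-to-one alignment is a maximum-weight bipartite matching. The score matrix below is made up for illustration; in the paper the scores are learned linear functions of pair features.

```python
from itertools import permutations

# Toy 3x3 score matrix: scores[i][j] is a (made-up) matching score
# between source word i and target word j.
scores = [[5.0, 1.0, 0.0],
          [1.0, 4.0, 2.0],
          [0.5, 0.5, 3.0]]

# Brute-force maximum-weight bipartite matching over all one-to-one
# assignments (fine for a toy; real systems use combinatorial solvers).
n = len(scores)
best_perm, best_total = None, float("-inf")
for perm in permutations(range(n)):
    total = sum(scores[i][perm[i]] for i in range(n))
    if total > best_total:
        best_perm, best_total = perm, total

alignment = [(i, j) for i, j in enumerate(best_perm)]
# alignment -> [(0, 0), (1, 1), (2, 2)], total score 12.0
```

The brute force is exponential in sentence length; it only stands in for the polynomial-time matching algorithms the discriminative framework actually relies on.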
Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle-point problems
In SIAM Journal on Optimization
Abstract

Cited by 73 (12 self)
We propose a prox-type method with efficiency estimate O(ε⁻¹) for approximating saddle points of convex-concave C¹,¹ functions and solutions of variational inequalities with monotone Lipschitz continuous operators. Application examples include matrix games, eigenvalue minimization, and computing the Lovász capacity number of a graph, and these are illustrated by numerical experiments with large-scale matrix games and Lovász capacity problems.
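The matrix-game application mentioned above can be sketched with the closely related Euclidean extragradient scheme (the paper's prox-method generalizes this idea to non-Euclidean prox functions). The game, starting points, step size, and iteration count below are arbitrary choices for the sketch; for matching pennies the unique equilibrium is (1/2, 1/2) for both players.

```python
def project_simplex(v):
    # Euclidean projection onto the probability simplex (sort-based).
    u = sorted(v, reverse=True)
    cum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cum += ui
        t = (cum - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

# Matching-pennies payoff matrix for the saddle problem min_x max_y x^T A y.
A = [[1.0, -1.0], [-1.0, 1.0]]

def grad(A, x, y):
    gx = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]  # dL/dx
    gy = [sum(A[i][j] * x[i] for i in range(2)) for j in range(2)]  # dL/dy
    return gx, gy

x, y = [1.0, 0.0], [0.0, 1.0]
eta, T = 0.1, 2000
avg_x = [0.0, 0.0]
for _ in range(T):
    gx, gy = grad(A, x, y)
    # Extragradient: a predictor step, then a corrector step that reuses
    # the gradient evaluated at the predictor point.
    xh = project_simplex([x[i] - eta * gx[i] for i in range(2)])
    yh = project_simplex([y[j] + eta * gy[j] for j in range(2)])
    gxh, gyh = grad(A, xh, yh)
    x = project_simplex([x[i] - eta * gxh[i] for i in range(2)])
    y = project_simplex([y[j] + eta * gyh[j] for j in range(2)])
    avg_x = [avg_x[i] + x[i] / T for i in range(2)]
```

The averaged iterate `avg_x` approaches the equilibrium strategy (1/2, 1/2), consistent with the O(1/t) guarantee for averaged iterates of methods in this family.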
Convergence and no-regret in multiagent learning
In Advances in Neural Information Processing Systems 17, 2005
Abstract

Cited by 66 (0 self)
Learning in a multiagent system is a challenging problem due to two key factors. First, if other agents are simultaneously learning then the environment is no longer stationary, thus undermining convergence guarantees. Second, learning is often susceptible to deception, where the other agents may be able to exploit a learner's particular dynamics. In the worst case, this could result in poorer performance than if the agent were not learning at all. These challenges are identifiable in the two most common evaluation criteria for multiagent learning algorithms: convergence and regret. Algorithms focusing on convergence or regret in isolation are numerous. In this paper, we seek to address both criteria in a single algorithm by introducing GIGA-WoLF, a learning algorithm for normal-form games. We prove that the algorithm guarantees at most zero average regret, while demonstrating that it converges in many situations of self-play. We prove convergence in a limited setting and give empirical results in a wider variety of situations. These results also suggest a third new learning criterion combining convergence and regret, which we call negative non-convergence regret (NNR).
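The regret criterion above can be made concrete with a much simpler no-regret learner than GIGA-WoLF: plain projected gradient ascent (GIGA-style) against a fixed opponent in rock-paper-scissors. The payoffs, step size, and opponent are illustrative choices, not the paper's algorithm; the point is only that average external regret shrinks toward zero as the learner drifts to the best response.

```python
def project_simplex(v):
    # Euclidean projection onto the probability simplex (sort-based).
    u = sorted(v, reverse=True)
    cum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cum += ui
        t = (cum - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

# Rock-paper-scissors payoffs for the row player.
R = [[0.0, -1.0, 1.0],
     [1.0, 0.0, -1.0],
     [-1.0, 1.0, 0.0]]
opponent = [1.0, 0.0, 0.0]  # a fixed opponent that always plays rock
r = [sum(R[a][b] * opponent[b] for b in range(3)) for a in range(3)]

x = [1 / 3, 1 / 3, 1 / 3]
eta, T = 0.1, 200
payoff_sum, per_action = 0.0, [0.0, 0.0, 0.0]
for _ in range(T):
    payoff_sum += sum(x[a] * r[a] for a in range(3))
    for a in range(3):
        per_action[a] += r[a]
    # GIGA-style update: step along the payoff gradient, then project.
    x = project_simplex([x[a] + eta * r[a] for a in range(3)])

# External regret: best fixed action in hindsight vs. what we earned.
avg_regret = (max(per_action) - payoff_sum) / T
```

Against the stationary rock player the strategy converges to "always paper", so average regret decays; GIGA-WoLF's contribution is keeping such a guarantee while also converging in self-play, which this sketch does not attempt.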
Multi-task feature selection
In Workshop on Structural Knowledge Transfer for Machine Learning, 23rd International Conference on Machine Learning (ICML), 2006
Abstract

Cited by 51 (1 self)
We address joint feature selection across a group of classification or regression tasks. In many multi-task learning scenarios, different but related tasks share a large proportion of relevant features. We propose a novel type of joint regularization for the parameters of support vector machines in order to couple feature selection across tasks. Intuitively, we extend the ℓ1 regularization for single-task estimation to the multi-task setting. By penalizing the sum of ℓ2-norms of the blocks of coefficients associated with each feature across different tasks, we encourage multiple predictors to have similar parameter sparsity patterns. This approach yields convex, nondifferentiable optimization problems that can be solved efficiently using a simple and scalable extragradient algorithm. We show empirically that our approach outperforms independent ℓ1-based feature selection on several datasets.
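The ℓ1/ℓ2 regularizer described above is easy to state in code: sum, over features, of the ℓ2 norm of that feature's coefficients across tasks. The weight matrix and λ below are made up, and the group soft-threshold shown is the standard proximal step for this penalty, included only to show the shared-sparsity effect; the paper itself optimizes with an extragradient algorithm.

```python
import math

# Hypothetical coefficients: rows = tasks, columns = features.
W = [[0.5, 0.0, 2.0],
     [0.3, 0.0, -1.0]]

def column_norms(W):
    # l2 norm of each feature's coefficient block across tasks.
    return [math.sqrt(sum(row[j] ** 2 for row in W)) for j in range(len(W[0]))]

def l1_l2_penalty(W):
    # The joint regularizer: sum over features of the per-feature l2 norm.
    return sum(column_norms(W))

def group_soft_threshold(W, lam):
    # Proximal step for lam * (l1/l2 penalty): each feature's column is
    # shrunk toward zero, and weak columns vanish for ALL tasks at once,
    # which is exactly the shared sparsity pattern described above.
    norms = column_norms(W)
    scale = [max(0.0, 1.0 - lam / n) if n > 0 else 0.0 for n in norms]
    return [[row[j] * scale[j] for j in range(len(row))] for row in W]

shrunk = group_soft_threshold(W, 1.0)
# Feature 0 (column norm ~0.58 < 1) is removed for both tasks at once;
# feature 2 (column norm ~2.24) survives, merely rescaled.
```

Note how the threshold acts on whole columns rather than individual entries: that is the difference between this mixed norm and running independent ℓ1 selection per task.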
A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings
SIAM J. Control Optim., 1998
Abstract

Cited by 49 (0 self)
We consider the forward-backward splitting method for finding a zero of the sum of two maximal monotone mappings. This method is known to converge when the inverse of the forward mapping is strongly monotone. We propose a modification to this method, in the spirit of the extragradient method for monotone variational inequalities, under which the method converges assuming only that the forward mapping is monotone and (Lipschitz) continuous on some closed convex subset of its domain. The modification entails an additional forward step and a projection step at each iteration. Applications of the modified method to decomposition in convex programming and monotone variational inequalities are discussed.
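A minimal instance of forward-backward splitting, assuming the common special case where the backward mapping is A = ∂(λ|·|) and the forward mapping is B = ∇f for a smooth f: the forward step is a gradient step and the backward step a soft-threshold. This sketch is the *unmodified* method, which already converges here because B is strongly monotone; Tseng's modification, which adds an extra forward step and a projection per iteration, is what removes that strong-monotonicity requirement.

```python
# Find 0 in A(x) + B(x) with B(x) = x - 3 (the gradient of 0.5*(x-3)**2)
# and A = subdifferential of lam*|x|; the zero is x* = 3 - lam = 2.
lam, eta = 1.0, 0.5

def soft_threshold(v, t):
    # Backward (resolvent/prox) step for A = subdifferential of t*|.|.
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

x = 0.0
for _ in range(60):
    x_forward = x - eta * (x - 3.0)            # forward step through B
    x = soft_threshold(x_forward, eta * lam)   # backward step through A

# x is now numerically 2.0, the zero of the sum.
```

With η = 0.5 the iteration contracts toward x* = 2 at a geometric rate; the problem, step size, and iteration count are illustrative choices only.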
Structured prediction via the extragradient method
In Advances in Neural Information Processing Systems, 2006
Abstract

Cited by 27 (2 self)
We present a simple and scalable algorithm for large-margin estimation of structured models, including an important class of Markov networks and combinatorial models. We formulate the estimation problem as a convex-concave saddle-point problem and apply the extragradient method, yielding an algorithm with linear convergence using simple gradient and projection calculations. The projection step can be solved using combinatorial algorithms for min-cost quadratic flow. This makes the approach an efficient alternative to formulations based on reductions to a quadratic program (QP). We present experiments on two very different structured prediction tasks: 3D image segmentation and word alignment, illustrating the favorable scaling properties of our algorithm.
Modified Projection-Type Methods for Monotone Variational Inequalities
SIAM Journal on Control and Optimization, 1996
Abstract

Cited by 25 (9 self)
We propose new methods for solving the variational inequality problem where the underlying function F is monotone. These methods may be viewed as projection-type methods in which the projection direction is modified by a strongly monotone mapping of the form I − αF or, if F is affine with underlying matrix M, of the form I + αMᵀ, with α ∈ (0, 1). We show that these methods are globally convergent and, if in addition a certain error bound based on the natural residual holds locally, the convergence is linear. Computational experience with the new methods is also reported.

Key words. Monotone variational inequalities, projection-type methods, error bound, linear convergence.

AMS subject classifications. 49M45, 90C25, 90C33

1. Introduction. We consider the monotone variational inequality problem of finding an x* ∈ X satisfying

F(x*)ᵀ(x − x*) ≥ 0 for all x ∈ X,   (1)

where X is a closed convex set in ℝⁿ and F is a monotone and continuous function from ℝⁿ to ...
A Hybrid Approximate Extragradient-Proximal Point Algorithm Using the Enlargement of a Maximal Monotone Operator
, 1999
Abstract

Cited by 24 (15 self)
We propose a modification of the classical extragradient and proximal point algorithms for finding a zero of a maximal monotone operator in a Hilbert space. At each iteration of the method, an approximate extragradient-type step is performed using information obtained from an approximate solution of a proximal point subproblem. The algorithm is of a hybrid type, as it combines steps of the extragradient and proximal methods. Furthermore, the algorithm uses elements in the enlargement (proposed by Burachik, Iusem and Svaiter [2]) of the operator defining the problem. One of the important features of our approach is that it allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems. This yields a more practical proximal-algorithm-based framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. It is further demonstrated that the modified forward-backward splitting algorithm of Tseng [35]...
A new projection method for variational inequality problems
SIAM J. Control Optim., 1999
Abstract

Cited by 20 (11 self)
We propose a new projection algorithm for solving the variational inequality problem, where the underlying function is continuous and satisfies a certain generalized monotonicity assumption (e.g., it can be pseudomonotone). The method is simple and admits a nice geometric interpretation. It consists of two steps. First, we construct an appropriate hyperplane which strictly separates the current iterate from the solutions of the problem. This procedure requires a single projection onto the feasible set and employs an Armijo-type linesearch along a feasible direction. Then the next iterate is obtained as the projection of the current iterate onto the intersection of the feasible set with the halfspace containing the solution set. Thus, in contrast with most other projection-type methods, only two projection operations per iteration are needed. The method is shown to be globally convergent to a solution of the variational inequality problem under minimal assumptions. Preliminary computational experience is also reported.

Key words. Variational inequalities, projection methods, pseudomonotone maps
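The two-step geometry described in this abstract can be sketched on a small affine monotone VI. For simplicity the Armijo line search is replaced here by a single fixed-step trial point (valid for this data because F is Lipschitz and the step is small relative to the Lipschitz constant); the problem data are made up.

```python
# VI: find x* in X with <F(x*), x - x*> >= 0 for all x in X, where X is
# the nonnegative orthant and F(x) = M x + q is monotone (the symmetric
# part of M is positive definite).
M = [[1.0, 0.5], [-0.5, 1.0]]
q = [-1.5, -0.5]                     # the solution is x* = (1, 1), where F = 0

def F(x):
    return [sum(M[i][j] * x[j] for j in range(2)) + q[i] for i in range(2)]

def proj_orthant(x):
    return [max(xi, 0.0) for xi in x]

x, eta = [3.0, 0.0], 0.3
for _ in range(500):
    # Step 1: trial point (a fixed-step stand-in for the Armijo search
    # along a feasible direction) and the separating hyperplane normal.
    fx = F(x)
    z = proj_orthant([x[i] - eta * fx[i] for i in range(2)])
    g = F(z)
    s = sum(g[i] * (x[i] - z[i]) for i in range(2))
    gg = sum(gi * gi for gi in g)
    if gg == 0.0 or s <= 0.0:
        break                        # z already solves the VI
    # Step 2: project x onto the halfspace {u : <g, u - z> <= 0}
    # containing the solution set, then back onto the feasible set X.
    x = proj_orthant([x[i] - (s / gg) * g[i] for i in range(2)])
```

Each iteration uses exactly two projections, matching the count claimed in the abstract; the iterates shrink their distance to the solution set at every step (Fejér monotonicity) and here converge to (1, 1).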
Complementarity And Related Problems: A Survey
, 1998
Abstract

Cited by 14 (0 self)
This survey gives an introduction to some of the recent developments in the field of complementarity and related problems. After presenting two typical examples and the basic existence and uniqueness results, we focus on some new trends for solving nonlinear complementarity problems. Extensions to mixed complementarity problems, variational inequalities and mathematical programs with equilibrium constraints are also discussed.