Results 11–20 of 595
Dynamical and microphysical retrieval from Doppler radar observations using a cloud model and its adjoint. Part I: Model development and simulated data experiments
J. Atmos. Sci., 1997
Cited by 70 (6 self)
The purpose of the research reported in this paper is to develop a variational data analysis system that can be used to assimilate data from one or more Doppler radars. In the first part of this two-part study, the technique used in this analysis system is described and tested using data from a simulated warm rain convective storm. The analysis system applies the 4D variational data assimilation technique to a cloud-scale model with a warm rain parameterization scheme. The 3D wind, thermodynamical, and microphysical fields are determined by minimizing a cost function, defined by the difference between both radar-observed radial velocities and reflectivities (or rainwater mixing ratio) and their model predictions. The adjoint of the numerical model is used to provide the sensitivity of the cost function with respect to the control variables. Experiments using data from a simulated convective storm demonstrated that the variational analysis system is able to retrieve the detailed structure of wind, thermodynamics, and microphysics using either dual-Doppler or single-Doppler information. However, less accurate velocity fields are obtained when single-Doppler data are used. In both cases, retrieving the temperature field is more difficult than retrieving the other fields. Results also show that assimilating the rainwater mixing ratio obtained from the reflectivity data results in better performance of the retrieval procedure than directly assimilating the reflectivity. It is also found that the system is robust to variations in the Z–qr relation, but the microphysical retrieval is quite sensitive to parameters in the warm rain scheme. The technique is robust to random errors in radial velocity and calibration errors in reflectivity.
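The core idea in this abstract (minimize a cost defined by observation-minus-model residuals, with the gradient supplied by an adjoint) can be sketched in miniature. This is an illustrative toy, not the paper's system: `H`, `x`, and `y` are made-up stand-ins for the observation operator, model state, and radar data.

```python
import numpy as np

# Toy variational retrieval: recover a model state x from indirect
# observations y = H x + noise by minimizing a quadratic cost function.
# H stands in for the radar observation operator; applying H.T to the
# residual is the "adjoint" step that supplies the cost gradient.
# (Illustrative sketch only; names and sizes are not from the paper.)

rng = np.random.default_rng(0)
n_state, n_obs = 8, 20
H = rng.normal(size=(n_obs, n_state))            # observation operator
x_true = rng.normal(size=n_state)                # "true" state
y = H @ x_true + 0.01 * rng.normal(size=n_obs)   # noisy observations

def cost(x):
    r = H @ x - y
    return 0.5 * r @ r

def grad(x):                                     # adjoint applied to residual
    return H.T @ (H @ x - y)

x = np.zeros(n_state)
for _ in range(500):                             # plain gradient descent
    x -= 0.01 * grad(x)

print(cost(x))                                   # small residual cost
```

The same structure carries over when the forward map is a full cloud model integrated in time; the adjoint then back-propagates the residual through the model dynamics instead of through a single matrix transpose.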
Trust region Newton method for large-scale logistic regression
In Proceedings of the 24th International Conference on Machine Learning (ICML), 2007
Cited by 69 (12 self)
Large-scale logistic regression arises in many applications such as document classification and natural language processing. In this paper, we apply a trust region Newton method to maximize the log-likelihood of the logistic regression model. The proposed method uses only approximate Newton steps in the beginning, but achieves fast convergence in the end. Experiments show that it is faster than the commonly used quasi-Newton approach for logistic regression. We also compare it with existing linear SVM implementations.
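A trust-region Newton solve for regularized logistic regression can be sketched with `scipy.optimize.minimize(method='trust-ncg')`, which only needs the loss, gradient, and a Hessian-vector product rather than an explicit Hessian; that product is what makes Newton-type methods feasible at large scale. This is a generic sketch on synthetic data, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# L2-regularized logistic regression with labels y in {-1, +1}:
#   loss(w) = sum_i log(1 + exp(-y_i x_i.w)) + 0.5 ||w||^2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = np.where(X @ w_true > 0, 1.0, -1.0)

def loss(w):
    z = y * (X @ w)
    return np.logaddexp(0.0, -z).sum() + 0.5 * w @ w

def grad(w):
    z = y * (X @ w)
    return -(X.T @ (expit(-z) * y)) + w

def hessp(w, p):                      # Hessian-vector product, no full Hessian
    z = y * (X @ w)
    d = expit(z) * expit(-z)          # sigma(z) * (1 - sigma(z))
    return X.T @ (d * (X @ p)) + p

res = minimize(loss, np.zeros(5), jac=grad, hessp=hessp, method='trust-ncg')
print(res.success, np.linalg.norm(grad(res.x)))
```

The Hessian-vector product costs two matrix-vector products per call, so memory stays linear in the number of features even when forming X.T @ D @ X would be prohibitive.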
Supervised Random Walks: Predicting and Recommending Links in Social Networks
Cited by 62 (0 self)
Predicting the occurrence of links is a fundamental problem in networks. In the link prediction problem we are given a snapshot of a network and would like to infer which interactions among existing members are likely to occur in the near future or which existing interactions we are missing. Although this problem has been extensively studied, the challenge of how to effectively combine the information from the network structure with rich node and edge attribute data remains largely open. We develop an algorithm based on Supervised Random Walks that naturally combines the information from the network structure with node and edge level attributes. We achieve this by using these attributes to guide a random walk on the graph. We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future. We develop an efficient training algorithm to directly learn the edge strength estimation function. Our experiments on the Facebook social graph and large collaboration networks show that our approach outperforms state-of-the-art unsupervised approaches as well as approaches that are based on feature extraction.
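The scoring side of the method in this abstract can be sketched as a random walk with restart whose transition probabilities come from feature-parameterized edge strengths. The sketch below shows only this forward pass (the paper additionally learns the weight vector by gradient-based training, which is omitted here); the graph, features, and weights are invented.

```python
import numpy as np

def edge_strengths(features, w):
    """Edge strength a_uv = logistic(w . psi_uv); features maps edge -> vector."""
    return {e: 1.0 / (1.0 + np.exp(-np.dot(w, f))) for e, f in features.items()}

def walk_scores(n, edges, strengths, source, alpha=0.15, iters=200):
    """Random walk with restart to `source`; transitions weighted by strengths."""
    P = np.zeros((n, n))
    for (u, v) in edges:
        P[u, v] = strengths[(u, v)]
    row = P.sum(axis=1, keepdims=True)
    P = np.divide(P, row, out=np.zeros_like(P), where=row > 0)
    p = np.full(n, 1.0 / n)
    restart = np.zeros(n)
    restart[source] = 1.0
    for _ in range(iters):                     # power iteration to stationarity
        p = (1 - alpha) * (p @ P) + alpha * restart
    return p                                   # visit probabilities = link scores

# tiny 4-node example with made-up edge features
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 0)]
feats = {e: np.array([1.0, float(e[1] - e[0])]) for e in edges}
w = np.array([0.5, 0.2])                       # would be learned in the paper
scores = walk_scores(4, edges, edge_strengths(feats, w), source=0)
print(scores)
```

Candidate targets are then ranked by their visit probability; training adjusts `w` so that future link endpoints receive higher scores than non-links.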
Painless Unsupervised Learning with Features
Cited by 61 (3 self)
We show how features can easily be added to standard generative models for unsupervised learning, without requiring complex new training methods. In particular, each component multinomial of a generative model can be turned into a miniature logistic regression model if feature locality permits. The intuitive EM algorithm still applies, but with a gradient-based M-step familiar from discriminative training of logistic regression models. We apply this technique to part-of-speech induction, grammar induction, word alignment, and word segmentation, incorporating a few linguistically motivated features into the standard generative model for each task. These feature-enhanced models each outperform their basic counterparts by a substantial margin, and even compete with and surpass more complex state-of-the-art models.
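The featurized M-step the abstract describes can be sketched in isolation: a component multinomial over K outcomes is re-parameterized as a softmax over outcome features, and the M-step fits it to expected counts from the E-step by gradient ascent. The feature matrix and counts below are made up; with one indicator feature per outcome the fit recovers the ordinary count-normalized multinomial.

```python
import numpy as np

def featurized_m_step(expected_counts, features, steps=2000, lr=0.5):
    """Fit a softmax-parameterized multinomial to E-step expected counts.

    features: (K, D) matrix of outcome feature vectors; returns probabilities.
    """
    K, D = features.shape
    w = np.zeros(D)
    total = expected_counts.sum()
    for _ in range(steps):
        logits = features @ w
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # gradient of expected log-likelihood:
        #   sum_k c_k f_k  -  total * E_probs[f]
        g = expected_counts @ features - total * (probs @ features)
        w += lr * g / total
    return probs

counts = np.array([10.0, 5.0, 1.0])   # expected counts from a (toy) E-step
feats = np.eye(3)                     # one indicator feature per outcome
probs = featurized_m_step(counts, feats)
print(probs)                          # approaches counts / counts.sum()
```

Replacing the indicator features with shared, linguistically motivated features is what lets parameters be tied across outcomes, which is the point of the paper's construction.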
On the resolution of monotone complementarity problems
Comput. Optim. Appl., 1996
Cited by 53 (10 self)
A reformulation of the nonlinear complementarity problem (NCP) as an unconstrained minimization problem is considered. It is shown that any stationary point of the unconstrained objective function is already a solution of NCP if the mapping F involved in NCP is continuously differentiable and monotone. A descent algorithm is described which uses only function values of F. Some numerical results are given.
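One standard reformulation of this kind uses the Fischer–Burmeister function phi(a, b) = sqrt(a² + b²) − a − b, which is zero exactly when a ≥ 0, b ≥ 0, and ab = 0, so minimizing the merit function Psi(x) = ½ Σᵢ phi(xᵢ, Fᵢ(x))² to zero solves the NCP. The sketch below applies a derivative-free minimizer (echoing the abstract's "only function values of F") to a toy affine monotone problem; it is not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# NCP: find x >= 0 with F(x) >= 0 and x . F(x) = 0,
# here with the affine monotone map F(x) = M x + q.
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite => F monotone
q = np.array([-1.0, -1.0])

def F(x):
    return M @ x + q

def merit(x):
    """Fischer-Burmeister merit function Psi(x)."""
    a, b = x, F(x)
    phi = np.sqrt(a**2 + b**2) - a - b
    return 0.5 * np.dot(phi, phi)

# Derivative-free minimization: only values of F are ever evaluated.
res = minimize(merit, np.array([1.0, 1.0]), method='Nelder-Mead',
               options={'xatol': 1e-9, 'fatol': 1e-12, 'maxiter': 2000})
x = res.x
print(x, merit(x))                        # merit near zero => NCP solved
```

For this problem the solution is the interior point with F(x) = 0; the merit function dropping to (numerical) zero certifies that nonnegativity and complementarity hold.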
Super-resolution Enhancement of Text Image Sequences
2000
Cited by 52 (2 self)
The objective of this work is the super-resolution enhancement of image sequences. We consider in particular images of scenes for which the point-to-point image transformation is a plane projective transformation. We first describe the imaging model, and a maximum likelihood (ML) estimator of the super-resolution image. We demonstrate the extreme noise sensitivity of the unconstrained ML estimator. We show that the Irani and Peleg [9, 10] super-resolution algorithm does not suffer from this sensitivity, and explain that this stability is due to the error back-projection method, which effectively constrains the solution. We then propose two estimators suitable for the enhancement of text images: a maximum a posteriori (MAP) estimator based on a Huber prior, and an estimator regularized using the Total Variation norm. We demonstrate the improved noise robustness of these approaches over the Irani and Peleg estimator. We also show the effects of a poorly estimated point spread function (PS...
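The Huber-regularized MAP estimate the abstract proposes can be illustrated in one dimension: recover a high-resolution signal from a blurred, downsampled observation by gradient descent on a data term plus an edge-preserving Huber penalty on finite differences. Everything here (blur kernel, scale factor, weights) is invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hi, scale = 64, 2
x_true = np.zeros(n_hi)
x_true[20:44] = 1.0                            # step edges, like text strokes

kernel = np.array([0.25, 0.5, 0.25])           # toy blur (PSF)

def blur(x):
    return np.convolve(x, kernel, mode='same')

def down(x):
    return x[::scale]

y = down(blur(x_true)) + 0.01 * rng.normal(size=n_hi // scale)

def up(r):                                     # adjoint of downsampling
    out = np.zeros(n_hi)
    out[::scale] = r
    return out

def huber_grad(g, delta=0.05):                 # derivative of Huber penalty t^2/2
    return np.clip(g, -delta, delta)           # (linear with slope delta beyond)

lam, step = 0.1, 0.5
x = np.zeros(n_hi)
for _ in range(2000):
    r = down(blur(x)) - y
    data_grad = blur(up(r))                    # symmetric kernel => self-adjoint
    d = np.diff(x)
    prior_grad = (np.concatenate([[0.0], huber_grad(d)])
                  - np.concatenate([huber_grad(d), [0.0]]))
    x -= step * (data_grad + lam * prior_grad)

print(np.abs(x - x_true).mean())               # reconstruction error
```

Swapping the Huber derivative for a smoothed TV derivative gives the abstract's second estimator; the unconstrained ML case corresponds to lam = 0, which is exactly where the noise sensitivity appears.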
Conditional random fields for activity recognition
In Proceedings of the Sixth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2007), 2007
Cited by 46 (0 self)
Efficient, feature-based, conditional random field parsing
In Proc. ACL/HLT, 2008
Cited by 45 (4 self)
Discriminative feature-based methods are widely used in natural language processing, but sentence parsing is still dominated by generative methods. While prior feature-based dynamic programming parsers have restricted training and evaluation to artificially short sentences, we present the first general, feature-rich discriminative parser, based on a conditional random field model, which has been successfully scaled to the full WSJ parsing data. Our efficiency is primarily due to the use of stochastic optimization techniques, as well as parallelization and chart prefiltering. On WSJ15, we attain a state-of-the-art F-score of 90.9%, a 14% relative reduction in error over previous models, while being two orders of magnitude faster. On sentences of length 40, our system achieves an F-score of 89.0%, a 36% relative reduction in error over a generative baseline.
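The stochastic optimization the abstract credits for its efficiency can be illustrated on a far smaller conditional log-linear model: update parameters one example at a time with the gradient "observed features minus expected features", rather than accumulating a full-batch gradient. The toy classification data below stands in for the parser's much larger structured training problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 300, 4, 3
W_star = rng.normal(size=(k, d))
X = rng.normal(size=(n, d))
y = (X @ W_star.T).argmax(axis=1)             # synthetic, realizable labels

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

W = np.zeros((k, d))
lr = 0.1
for epoch in range(20):
    for i in rng.permutation(n):              # one example per update
        p = softmax(W @ X[i])                 # model distribution over labels
        g = np.outer(p, X[i])                 # expected feature counts
        g[y[i]] -= X[i]                       # minus observed feature counts
        W -= lr * g                           # stochastic gradient step

acc = ((X @ W.T).argmax(axis=1) == y).mean()
print(acc)
```

In the CRF parser the "expected feature counts" come from inside–outside sums over the chart rather than a softmax over three labels, but the update has the same shape.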
Geometric modeling in shape space
In Proc. SIGGRAPH, 2007
Cited by 44 (4 self)
Figure 1: Geodesic interpolation and extrapolation. The blue input poses of the elephant are geodesically interpolated in an as-isometric-as-possible fashion (shown in green), and the resulting path is geodesically continued (shown in purple) to naturally extend the sequence. No semantic information, segmentation, or knowledge of articulated components is used. We present a novel framework to treat shapes in the setting of Riemannian geometry. Shapes – triangular meshes or, more generally, straight-line graphs in Euclidean space – are treated as points in a shape space. We introduce useful Riemannian metrics in this space to aid the user in design and modeling tasks, especially to explore the space of (approximately) isometric deformations of a given shape. Much of the work relies on an efficient algorithm to compute geodesics in shape spaces; to this end, we present a multiresolution framework to solve the interpolation problem – which amounts to solving a boundary value problem – as well as the extrapolation problem – an initial value problem – in shape space. Based on these two operations, several classical concepts like parallel transport and the exponential map can be used in shape space to solve various geometric modeling and geometry processing tasks. Applications include shape morphing, shape deformation, deformation transfer, and intuitive shape exploration.