Results 1–10 of 24
On The Accurate Identification Of Active Constraints
, 1996
Abstract

Cited by 41 (8 self)
We consider nonlinear programs with inequality constraints, and we focus on the problem of identifying those constraints which will be active at an isolated local solution. The correct identification of active constraints is important from both a theoretical and a practical point of view. Such an identification removes the combinatorial aspect of the problem and locally reduces the inequality constrained minimization problem to an equality constrained one which can be more easily dealt with. We present a new technique which identifies active constraints in a neighborhood of a solution and which requires neither complementary slackness nor uniqueness of the multipliers. As an example of application of the new technique we present a local active set Newton-type algorithm for the solution of general inequality constrained problems for which Q-quadratic convergence of the primal variables can be proved under very weak conditions. We also present extensions to variational inequalities. Ke...
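The identification idea in the abstract can be sketched as a threshold test: a constraint is estimated active when its value lies within a residual-dependent tolerance of zero. The constraint functions and the tolerance rho = sqrt(residual) below are illustrative assumptions, not the paper's identification function:

```python
import math

# Hypothetical problem: minimize f(x) subject to g_i(x) <= 0, with
#   g1(x) = x[0] + x[1] - 1   (active at the solution (0.5, 0.5))
#   g2(x) = -x[0] - 2         (inactive there)
def g(x):
    return [x[0] + x[1] - 1.0, -x[0] - 2.0]

def estimate_active_set(x, residual):
    """Declare g_i active when g_i(x) >= -rho, where the tolerance rho
    shrinks with an optimality residual. rho = sqrt(residual) is an
    illustrative choice, not the paper's identification function."""
    rho = math.sqrt(residual)
    return [i for i, gi in enumerate(g(x)) if gi >= -rho]

# Near the solution the residual is small, so the tolerance is tight:
active = estimate_active_set([0.49, 0.49], residual=1e-2)
```

Once the active set is known, the inequality constrained problem reduces locally to an equality constrained one on those constraints.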
Asymptotic Performance of Vector Quantizers with a Perceptual Distortion Measure
 in Proc. IEEE Int. Symp. on Information Theory, p. 55
, 1997
Abstract

Cited by 30 (3 self)
Gersho's bounds on the asymptotic performance of vector quantizers are valid for vector distortions which are powers of the Euclidean norm. Yamada, Tazaki and Gray generalized the results to distortion measures that are increasing functions of the norm of their argument. In both cases, the distortion is uniquely determined by the vector quantization error, i.e., the Euclidean difference between the original vector and the codeword into which it is quantized. We generalize these asymptotic bounds to input-weighted quadratic distortion measures, a class of distortion measures often used for perceptually meaningful distortion. The generalization involves a more rigorous derivation of a fixed-rate result of Gardner and Rao and a new result for variable-rate codes. We also consider the problem of source mismatch, where the quantizer is designed using a probability density different from the true source density. The resulting asymptotic performance in terms of distortion increase in dB is shown...
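An input-weighted quadratic distortion has the form d(x, c) = (x − c)^T W(x) (x − c), where the weight matrix depends on the input rather than only on the quantization error. A minimal quantization sketch, with a hypothetical diagonal weighting standing in for a perceptual sensitivity matrix:

```python
import numpy as np

def weighted_distortion(x, c, W):
    """Input-weighted quadratic distortion d(x, c) = (x - c)^T W(x) (x - c)."""
    e = x - c
    return float(e @ W @ e)

def quantize(x, codebook, weight_fn):
    """Map x to the codeword minimizing the input-weighted distortion.
    weight_fn(x) returns the positive-definite weight matrix W(x); the
    diagonal weighting used below is an illustrative stand-in for a
    perceptual sensitivity matrix, not a measure from the paper."""
    W = weight_fn(x)
    dists = [weighted_distortion(x, c, W) for c in codebook]
    i = int(np.argmin(dists))
    return i, dists[i]

codebook = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
x = np.array([0.2, 0.9])
# Hypothetical weighting that emphasizes errors in the second coordinate:
idx, d = quantize(x, codebook, lambda x: np.diag([1.0, 10.0]))
```

Under the plain Euclidean norm x would map to the first codeword; the weighting flips the decision because errors in the second coordinate cost ten times more.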
Settable Systems: An Extension of Pearl’s Causal Model with Optimization, Equilibrium, and Learning
, 2008
Abstract

Cited by 10 (5 self)
Judea Pearl’s Causal Model is a rich framework that provides deep insight into the nature of causal relations. As yet, however, the Pearl Causal Model (PCM) has not had much impact on economics or econometrics. This may be due in part to the fact that the PCM is not as well suited to analyzing economic structures as might be desired. We offer the settable systems framework as an extension of the PCM that embodies features of central interest to economists and econometricians: optimization, equilibrium, and learning. Because these are common features of physical, natural, or social systems, our framework may prove generally useful. In particular, settable systems offer a number of advantages relative to the PCM for machine learning. Important distinguishing features of the settable systems framework are its countable dimensionality, its treatment of attributes, the absence of a fixed-point requirement, and the use of partitioning and partition-specific response functions to accommodate the behavior of optimizing and interacting agents. A series of closely related examples from game theory and machine learning with feedback demonstrates limitations of the PCM and motivates the distinguishing features of settable systems.
Exploiting Reactive Mobility for Collaborative Target Detection in Wireless Sensor Networks
Abstract

Cited by 6 (5 self)
Abstract—Recent years have witnessed the deployment of wireless sensor networks in a class of mission-critical applications such as object detection and tracking. These applications often impose stringent Quality of Service (QoS) requirements, including high detection probability, low false alarm rate, and bounded detection delay. Although a dense all-static network may initially meet these QoS requirements, it does not adapt to unpredictable dynamics in network conditions (e.g., coverage holes caused by death of nodes) or physical environments (e.g., changed spatial distribution of events). This paper exploits reactive mobility to improve the target detection performance of wireless sensor networks. In our approach, mobile sensors collaborate with static sensors and move reactively to achieve the required detection performance. Specifically, mobile sensors initially remain stationary and are directed to move toward a possible target only when a detection consensus is reached by a group of sensors. The accuracy of the final detection result is then improved as the measurements of mobile sensors have higher signal-to-noise ratios after the movement. We develop a sensor movement scheduling algorithm that achieves near-optimal system detection performance under a given detection delay bound. The effectiveness of our approach is validated by extensive simulations using real data traces collected by 23 sensor nodes. Index Terms—Data fusion, Algorithm/protocol design and analysis, Wireless sensor networks.
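The consensus-then-move idea can be sketched as below; the majority-vote fusion rule, the thresholds, and the movement model are illustrative assumptions, not the paper's scheduling algorithm:

```python
import math

def detection_consensus(readings, threshold, min_votes):
    """A group reaches consensus when at least min_votes static sensors
    report energy above threshold (an illustrative majority-vote rule,
    not the paper's fusion scheme)."""
    votes = sum(1 for r in readings if r > threshold)
    return votes >= min_votes

def step_toward(pos, target, speed):
    """Move a mobile sensor one step toward the suspected target;
    moving closer raises the measurement signal-to-noise ratio."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

# Static sensors report energies; the mobile sensor stays put until a
# consensus is reached, then moves toward the suspected target location.
readings = [0.9, 1.2, 0.3, 1.1]
mobile = (0.0, 0.0)
if detection_consensus(readings, threshold=0.8, min_votes=3):
    mobile = step_toward(mobile, target=(3.0, 4.0), speed=1.0)
```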
M-Lattice: A System For Signal Synthesis And Processing Based On Reaction-Diffusion
 ScD Thesis, MIT
, 1994
Abstract

Cited by 5 (3 self)
This research begins with reaction-diffusion, first proposed by Alan Turing in 1952 to account for morphogenesis: the formation of hydranth tentacles, leopard spots, zebra stripes, etc. Reaction-diffusion systems have been researched primarily by biologists working on theories of natural pattern formation and by chemists modeling dynamics of oscillating reactions. The past few years have seen a new interest in reaction-diffusion spring up within the computer graphics and image processing communities. However, reaction-diffusion systems are generally unbounded, making them impractical for many applications. In this thesis we introduce a bounded and more flexible nonlinear system, the "M-lattice", which preserves the natural pattern-formation properties of reaction-diffusion. On the theoretical front, we establish relationships between reaction-diffusion systems and paradigms in linear systems theory and certain types of artificial "neurally inspired" systems. The M-lattice is closel...
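As background, a minimal reaction-diffusion simulation shows the kind of system the thesis builds on. This is the standard 1D Gray-Scott model with common parameter values, not Turing's original equations or the M-lattice itself:

```python
import numpy as np

# Minimal 1D Gray-Scott reaction-diffusion sketch: two species u, v
# diffuse at different rates and react via the term u*v^2.
n, steps = 100, 200
du, dv, f, k, dt = 0.16, 0.08, 0.035, 0.065, 1.0
u, v = np.ones(n), np.zeros(n)
v[45:55] = 0.5                      # a local perturbation seeds the pattern
for _ in range(steps):
    lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u   # periodic Laplacian
    lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
    uvv = u * v * v
    u += dt * (du * lap_u - uvv + f * (1 - u))       # feed term f*(1-u)
    v += dt * (dv * lap_v + uvv - (f + k) * v)       # kill term (f+k)*v
```

The differing diffusion rates (du > dv) are what allow spatial patterns to emerge from a near-uniform state, the mechanism Turing identified.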
SDRE Flight Control For XCell and RMax Autonomous Helicopters
Abstract

Cited by 3 (0 self)
Abstract — This paper presents a state-dependent Riccati equation (SDRE) flight control approach and its application to autonomous helicopters. For our experiments, we used two different vehicles: an XCell90 small hobby helicopter and a larger vehicle based on the Yamaha RMax. The control design uses a 6-DOF nonlinear dynamic model that is manipulated into a pseudo-linear form where system matrices are given explicitly as a function of the current state. A standard Riccati equation is then solved numerically at each step of a 50 Hz control loop to design the nonlinear state feedback control law online. In addition, a static nonlinear compensator is designed to address issues with the mismatch between the original nonlinear dynamics and its pseudo-linear transformation.
I. NOMENCLATURE
u, v, w        vehicle velocities along X, Y and Z fuselage axes
p, q, r        vehicle angular (roll, pitch, yaw) velocities
φ, θ, ψ        Euler angles: roll, pitch and yaw
x, y, z        vehicle position in inertial frame
ulon, ulat     longitudinal and lateral cyclic control inputs
ucol, utcol    main and tail rotor collective control inputs
uw, vw, ww     wind velocities along X, Y, and Z fuselage axes
m              helicopter mass
Ixx, Iyy, Izz  main moments of inertia
T              rotor thrust
λ0             inflow ratio
µ              advance ratio
µz             normal airflow component
Vtip           speed of the rotor blade tip
a              rotor blade lift curve slope
CT             rotor thrust coefficient
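The SDRE recipe described above (write the dynamics in pseudo-linear form x' = A(x)x + B(x)u and re-solve a Riccati equation at every control step) can be sketched on a scalar toy system, not the helicopter model, where the algebraic Riccati equation has a closed-form positive root:

```python
import math

# Scalar SDRE sketch: the nonlinear plant x' = x^3 + u is written in
# pseudo-linear form x' = A(x) x + B u with A(x) = x^2 and B = 1.
# (An illustrative toy system, not the 6-DOF model from the paper.)
B, Q, R = 1.0, 1.0, 1.0

def sdre_gain(x):
    A = x * x                       # state-dependent coefficient A(x)
    # Scalar CARE  2*A*p - p^2*B^2/R + Q = 0, positive root:
    p = R * (A + math.sqrt(A * A + B * B * Q / R)) / (B * B)
    return B * p / R                # feedback gain K(x)

x, dt = 1.0, 0.01
for _ in range(1000):
    u = -sdre_gain(x) * x           # re-solve CARE, apply feedback online
    x += dt * (x ** 3 + u)          # Euler integration of the plant
```

The open-loop system x' = x^3 diverges from x = 1; the state-dependent feedback drives it to the origin. For vector systems, the same loop would call a numerical CARE solver instead of the closed-form root.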
The spatiotemporal MEG covariance matrix modeled as a sum of Kronecker products
 NeuroImage
, 2005
Abstract

Cited by 2 (0 self)
The single Kronecker product (KP) model for the spatiotemporal covariance of MEG residuals is extended to a sum of Kronecker products. This sum of KPs is estimated such that it approximates the spatiotemporal sample covariance best in matrix norm. In contrast to the single KP, this extension allows for describing multiple, independent phenomena in the ongoing background activity. Whereas the single KP model can be interpreted by assuming that background activity is generated by randomly distributed dipoles with certain spatial and temporal characteristics, the sum model can be physiologically interpreted by assuming a composite of such processes. Taking enough terms into account, the spatiotemporal sample covariance matrix can be described exactly by this extended model. In the estimation of the sum of KP model, it appears that the sum of the first two KPs describes between 67% and 93% of the sample covariance. Moreover, these first two terms describe two physiological processes in the background activity: focal, frequency-specific alpha activity, and more widespread non-frequency-specific activity. Furthermore, temporal non-stationarities due to trial-to-trial variations are not clearly visible in the first two terms and, hence, play only a minor role in the sample covariance matrix in terms of matrix power. Considering dipole localization, the single KP model appears to describe around 80% of the noise and therefore seems adequate. The emphasis of further improvement of localization accuracy should be on improving the source model rather than the covariance model.
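A sum-of-Kronecker-products approximation that is best in Frobenius (matrix) norm can be computed from an SVD of the Van Loan-Pitsianis rearrangement of the covariance, a standard construction consistent with the estimation described above; the matrices below are arbitrary examples, not MEG data:

```python
import numpy as np

def rearrange(C, m, n):
    """Van Loan-Pitsianis rearrangement: row (i, j) holds the vectorized
    (i, j)-th n-by-n block of C, so kron(S, T) maps to the rank-one
    matrix outer(vec S, vec T)."""
    return np.array([C[i*n:(i+1)*n, j*n:(j+1)*n].ravel()
                     for i in range(m) for j in range(m)])

def sum_of_kp(C, m, n, terms):
    """Best `terms`-term sum of Kronecker products S_k (x) T_k for C
    in Frobenius norm, from the truncated SVD of the rearrangement."""
    U, s, Vt = np.linalg.svd(rearrange(C, m, n), full_matrices=False)
    approx = np.zeros_like(C)
    for k in range(terms):
        S = np.sqrt(s[k]) * U[:, k].reshape(m, m)
        T = np.sqrt(s[k]) * Vt[k].reshape(n, n)
        approx += np.kron(S, T)
    return approx

# A covariance assembled from two spatial/temporal factor pairs is
# recovered exactly by a two-term sum of KP:
rng = np.random.default_rng(0)
S1, T1 = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
S2, T2 = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
C = np.kron(S1, T1) + np.kron(S2, T2)
C_hat = sum_of_kp(C, m=3, n=4, terms=2)
```

Truncating the SVD at k terms gives the best k-term sum in Frobenius norm, which mirrors the paper's observation that a few KP terms capture most of the sample covariance.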
New Visualization of Surfaces in Parallel Coordinates – Eliminating Ambiguity and Some "Over-Plotting"
, 2004
Abstract

Cited by 1 (1 self)
A point P ∈ R^n is represented in Parallel Coordinates by a polygonal line (see [Ins99] for a recent survey). Earlier [Ins85], a surface was represented as the envelope of the polygonal lines representing its points. This is ambiguous in the sense that different surfaces can produce the same envelope. Here the ambiguity is eliminated by considering the surface as the envelope of its tangent planes and, in turn, representing each of these planes by n-1 points [Ins99]. This, with some future extension, can yield a new and unambiguous representation of the surface consisting of n-1 planar regions whose properties lead to the recognition of the surface's properties (i.e., developable, ruled, etc. [Hun92]) and classification criteria.
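The basic representation is easy to state: a point in R^n becomes the polygonal line through the vertices (i, x_i) on n parallel vertical axes. A minimal sketch:

```python
def polyline(point):
    """Parallel-coordinates representation of a point in R^n: the
    polygonal line through the vertices (i, x_i), where the n parallel
    vertical axes sit at horizontal positions 0, 1, ..., n-1."""
    return [(i, xi) for i, xi in enumerate(point)]

# The point (1, 3, 2) in R^3 becomes a polygonal line with three vertices:
verts = polyline((1.0, 3.0, 2.0))
```

An envelope-based surface representation then draws the polylines of many sample points; the abstract's contribution is to avoid the resulting ambiguity by representing tangent planes instead.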
COMBINATION OF PARALLEL STOCHASTIC ALGORITHMS AND A DETERMINISTIC NONLINEAR LEAST SQUARES ALGORITHM FOR THE ANALYSIS OF EXTENDED X-RAY ABSORPTION FINE STRUCTURE (EXAFS) DATA
, 1999
Abstract
An improved method is presented for the analysis of extended X-ray absorption fine structure (EXAFS) data. The new method is a combination of a stochastic algorithm and a deterministic nonlinear least squares (NLLSQ) algorithm. This method is an improvement over the previously used analysis, where an irregular solution space was searched manually and refined by using the NLLSQ algorithm. The stochastic search part of the new algorithm samples the solution space more thoroughly and faster than the previous manual search; the deterministic NLLSQ part then refines the approximate solution generated by the stochastic algorithm. Reanalysis of previously analyzed data sets demonstrated that the new method is capable of finding both known and new solutions. The stochastic algorithm part of the new method was thoroughly investigated. Different stochastic algorithms, including genetic algorithms (GA), simulated annealing (SA), and combinations of GA and SA were compared. It was found that GA, GA with temperature control, and GA with distance-based mutation produce the best approximate solutions. It was shown experimentally and theoretically that the GA samples the solution
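The two-stage scheme, a stochastic global search seeding a deterministic NLLSQ refinement, can be sketched as below; plain random sampling stands in for the GA/SA stage, and the exponential model y = a*exp(-b*x) is an illustrative stand-in for an EXAFS fitting function:

```python
import math
import random

# Synthetic data from the illustrative model y = a * exp(-b * x):
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
a_true, b_true = 2.0, 1.3
ys = [a_true * math.exp(-b_true * x) for x in xs]

def sse(a, b):
    """Sum of squared residuals for parameters (a, b)."""
    return sum((y - a * math.exp(-b * x)) ** 2 for x, y in zip(xs, ys))

# Stage 1: stochastic search of the solution space (plain random
# sampling stands in for the GA/SA algorithms of the paper).
random.seed(0)
seed = min(((random.uniform(0, 4), random.uniform(0, 3))
            for _ in range(400)), key=lambda p: sse(*p))

# Stage 2: deterministic Gauss-Newton NLLSQ refinement from the seed,
# with step capping and halving so the objective never increases.
a, b = seed
for _ in range(50):
    r = [a * math.exp(-b * x) - y for x, y in zip(xs, ys)]
    J = [(math.exp(-b * x), -a * x * math.exp(-b * x)) for x in xs]
    # Solve the 2x2 normal equations  J^T J d = -J^T r  directly:
    g11 = sum(j[0] * j[0] for j in J)
    g12 = sum(j[0] * j[1] for j in J)
    g22 = sum(j[1] * j[1] for j in J)
    h1 = -sum(j[0] * ri for j, ri in zip(J, r))
    h2 = -sum(j[1] * ri for j, ri in zip(J, r))
    det = g11 * g22 - g12 * g12
    if det == 0.0:
        break
    da = (g22 * h1 - g12 * h2) / det
    db = (g11 * h2 - g12 * h1) / det
    norm = math.hypot(da, db)
    if norm > 1.0:                   # cap the step length for safety
        da, db = da / norm, db / norm
    t = 1.0
    while t > 1e-6 and sse(a + t * da, b + t * db) > sse(a, b):
        t *= 0.5                     # halve the step until sse decreases
    if sse(a + t * da, b + t * db) < sse(a, b):
        a, b = a + t * da, b + t * db
    else:
        break
```

The stochastic stage supplies a rough minimizer; Gauss-Newton then converges rapidly near the solution, mirroring the division of labor described in the abstract.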