Results 1–10 of 20
Trust and Distrust: New Relationships and Realities
 Academy of Management Review 23(3), 1998
Abstract

Cited by 66 (1 self)
We propose a new theoretical framework for understanding simultaneous trust and distrust within relationships, grounded in assumptions of multidimensionality and the inherent tensions of relationships, and we separate this research from prior work grounded in assumptions of unidimensionality and balance. Drawing foundational support for this new framework from recent research on simultaneous positive and negative sentiments and on ambivalence, we explore the theoretical and practical significance of the framework for future work on trust and distrust relationships within organizations.
Generalized Stochastic Subdivision
 ACM Transactions on Graphics, 1987
Abstract

Cited by 37 (2 self)
This paper describes the basis for techniques such as stochastic subdivision in the theory of random processes and estimation theory. The popular stochastic subdivision construction is then generalized to provide control of the autocorrelation and spectral properties of the synthesized random functions. The generalized construction is suitable for generating a variety of perceptually distinct, high-quality random functions, including those with non-fractal spectra and directional or oscillatory characteristics. It is argued that a spectral modeling approach provides a more powerful and somewhat more intuitive perceptual characterization of random processes than does the fractal model. Synthetic textures and terrains are presented as a means of visually evaluating the generalized subdivision technique. Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.
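The spectral control the abstract argues for can be illustrated by shaping white noise in the frequency domain. The following is a minimal sketch, assuming a 1-D power-law target spectrum; the paper's generalized subdivision works in the spatial domain, so this is an illustrative stand-in for spectral modeling, not the paper's construction:

```python
import numpy as np

def spectral_synthesis(n, beta, seed=0):
    """Synthesize a 1-D random function with power spectrum S(f) ~ 1/f^beta
    by assigning random phases to an amplitude envelope and inverting the FFT.
    (Illustrative sketch; `beta` controls the spectral falloff.)"""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)   # sqrt of the target power spectrum
    phase = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    spectrum = amp * np.exp(1j * phase)
    return np.fft.irfft(spectrum, n=n)     # real-valued synthesized function

x = spectral_synthesis(1024, beta=2.0)     # Brownian-like (1/f^2) profile
```

Choosing a non-power-law envelope, or one concentrated along a direction in 2-D, gives the non-fractal, directional, or oscillatory characteristics the abstract mentions.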
Determining Acceptance Possibility for a Quantum Computation is Hard for PH
1997
Abstract

Cited by 34 (3 self)
It is shown that determining whether a quantum computation has a nonzero probability of accepting is at least as hard as the polynomial time hierarchy. This hardness ...
Identification of Linear Parameter-Varying Systems using Nonlinear Programming
 Proceedings of the 35th IEEE Conference on Decision and Control, 1996
Abstract

Cited by 15 (1 self)
This paper deals with the identification of a linear parameter-varying (LPV) system whose parameter dependence can be written as a linear-fractional transformation (LFT). We formulate an output-error identification problem and present a parameter estimation scheme in which a prediction-error-based cost function is minimized using nonlinear programming; its gradients and (approximate) Hessians can be computed using LPV filters and inner products, and identifiable model sets (i.e., local canonical forms) are obtained efficiently using a natural geometrical approach. Some computational issues and experiences are discussed, and a simple numerical example is provided for illustration.
1 Introduction
Identification of LTI systems is an extremely well-studied research topic. Many successful algorithms have been developed and analyzed; these traditional algorithms are usually easily implemented and enjoy widespread use (Ljung, 1987). However, transfer-function-based approaches to system ident...
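The output-error scheme the abstract outlines, minimizing a prediction-error cost with nonlinear programming, can be sketched in a much simplified setting. The first-order model and the parameter names below are illustrative assumptions (LTI rather than LPV, and a generic optimizer in place of the paper's gradient/Hessian machinery):

```python
import numpy as np
from scipy.optimize import minimize

def simulate(theta, u, y0=0.0):
    """Simulate the hypothetical model y[k] = a*y[k-1] + b*u[k-1]."""
    a, b = theta
    y = np.empty_like(u)
    y[0] = y0
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + b * u[k - 1]
    return y

def cost(theta, u, y_meas):
    # Prediction-error (output-error) cost: squared simulation residual.
    return 0.5 * np.sum((y_meas - simulate(theta, u)) ** 2)

rng = np.random.default_rng(1)
u = rng.standard_normal(200)
y_meas = simulate((0.8, 0.5), u) + 0.01 * rng.standard_normal(200)

# Nonlinear programming step: numerical minimization of the cost.
res = minimize(cost, x0=np.zeros(2), args=(u, y_meas))
a_hat, b_hat = res.x   # should recover roughly (0.8, 0.5)
```

In the paper's LPV/LFT setting the simulation and gradient computations are themselves LPV filtering operations; the structure of the estimation loop, however, is the same.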
Dynamic harmonic regression
 Journal of Forecasting, 1999
Abstract

Cited by 13 (1 self)
Keywords: seasonal adjustment, dynamic harmonic regression
This paper describes in detail a flexible approach to non-stationary time series analysis based on a Dynamic Harmonic Regression (DHR) model of the Unobserved Components (UC) type, formulated within a stochastic state-space setting. The model is particularly useful for adaptive seasonal adjustment, signal extraction and interpolation over gaps, as well as forecasting or backcasting. The Kalman Filter and Fixed Interval Smoothing algorithms are exploited for estimating the various components, with the Noise Variance Ratio and other hyperparameters in the stochastic state-space model estimated by a novel optimisation method in the frequency domain. Unlike other approaches of this general type, which normally exploit Maximum Likelihood methods, this optimisation procedure is based on a cost function defined in terms of the difference between the logarithmic pseudo-spectrum of the DHR model and the logarithmic autoregressive spectrum of the time series. This cost function not only seems to yield improved convergence characteristics when compared with the alternative ML cost function, but it also has much reduced numerical requirements.
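The harmonic-regression idea underlying the DHR model can be shown with a static (non-adaptive) version: the seasonal component is a sum of sinusoids at the fundamental seasonal frequency and its harmonics. This sketch fits fixed coefficients by ordinary least squares; the paper's model lets these coefficients evolve stochastically and estimates them with the Kalman Filter instead:

```python
import numpy as np

def harmonic_design(t, period, n_harmonics):
    """Design matrix [1, cos(w t), sin(w t), cos(2w t), sin(2w t), ...]
    with w = 2*pi/period (static harmonic regression)."""
    cols = [np.ones_like(t)]
    for j in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * j / period
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

# Synthetic monthly series: level 2.0 plus an annual sinusoid plus noise.
t = np.arange(120, dtype=float)
rng = np.random.default_rng(0)
y = 2.0 + 1.5 * np.sin(2.0 * np.pi * t / 12.0) + 0.1 * rng.standard_normal(120)

X = harmonic_design(t, period=12, n_harmonics=2)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
seasonal = X[:, 1:] @ beta[1:]   # estimated seasonal component
```

The fitted coefficients recover the level (~2.0) and the amplitude of the annual sine term (~1.5); subtracting `seasonal` from `y` is the seasonal-adjustment step.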
Localization and Object-Tracking in an Ultra-wideband Sensor Network
2004
Abstract

Cited by 4 (4 self)
Geometric information is essential for sensor networks. We study two kinds of geometric information. One is the positions of the nodes; estimating these positions is called the localization problem. The other is the positions of objects in the sensor network. For the localization problem, we study the Cramér-Rao lower bound. For the anchor-free localization problem, where no nodes have known positions, we propose a new bound on the variance of the estimation error, because the Fisher Information Matrix is singular. For the anchored localization problem using only local information, we derive a lower bound to the Cramér-Rao bound on the position estimation. We find that the Cramér-Rao bounds in both cases are invariant under zooming of the whole sensor network. We also propose a novel two-step localization scheme. In the first step, we estimate an anchor-free coordinate system around every node. In the second step, we combine all the anchor-free coordinate systems together; then, using the anchored position information of some nodes, we transform the anchor-free coordinate system into an anchored coordinate system. For the object position estimation problem, we study different scenarios in terms of the number of nodes: single transmitter and single receiver; multiple transmitters (receivers) and a single receiver (transmitter); and multiple transmitters and multiple receivers. For each scenario, we give a position estimation scheme and analyze its performance. The Cramér-Rao bound for each scenario is also computed. We are particularly interested in the behavior of the Cramér-Rao bound as the number of sensors in the network grows to infinity. We find that the Cramér-Rao bound on object tracking is proportional to the reciprocal of the total received SNR.
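The second step of the two-step scheme, mapping an anchor-free coordinate estimate onto anchored positions, amounts to finding the rigid transform between two frames. A common way to do this is an orthogonal Procrustes fit; the function below is an illustrative assumption about how that step could look, not the thesis's exact algorithm:

```python
import numpy as np

def align_frames(P_free, P_anchor):
    """Least-squares rigid alignment: find rotation R and translation
    mapping the anchor-free coordinates P_free onto the anchored
    coordinates P_anchor (orthogonal Procrustes), and return the map."""
    mu_f, mu_a = P_free.mean(axis=0), P_anchor.mean(axis=0)
    A, B = P_free - mu_f, P_anchor - mu_a
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt                               # optimal orthogonal matrix
    return lambda P: (P - mu_f) @ R + mu_a   # frame-transfer function

# Demo: the same 5-node geometry expressed in a rotated, shifted frame.
rng = np.random.default_rng(2)
P = rng.standard_normal((5, 2))              # "anchored" positions
th = 0.7
Rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
P_free = P @ Rot + np.array([3.0, -1.0])     # anchor-free estimate
to_anchored = align_frames(P_free, P)        # recovers P from P_free
```

In practice only a subset of nodes (the anchors) would supply rows of `P_anchor`, and the fitted map would then be applied to every node's anchor-free coordinates.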
Analytical Methods for the Performance Evaluation of Binary Linear Block Codes
2000
"... University of Waterloo ..."
Non mean square error criteria for the training of learning machines
 in Proc. 13th Int. Conf. Machine Learning (ICML), 1996
Abstract

Cited by 2 (2 self)
In recent papers, Miller, Goodman & Smyth (1991, 1993) provided conditions on the cost function used for the training of a neural network in order to ensure that the output of the network approximates the conditional expectation of the desired output, given the input. However, they only considered the single-output case. In this paper, we provide another, rather straightforward, proof of the same results for the general multi-output case, with all developments presented in the context of estimation theory. More precisely, among a class of "reasonable" performance criteria, we provide necessary and sufficient conditions on the cost function so that the optimal estimate is the conditional expectation of the desired outputs, whatever the noise characteristics affecting the data. We furthermore provide a short overview of related results from estimation theory, and verify the developments numerically by comparing the optimal estimators of several performance criteria. We must stress that, while all these results are stated for a neural network, they hold in general for any learning machine that is trained to predict an output y as a function of an input x.
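The core claim, that the choice of cost function determines which statistic of the target the optimal estimator recovers, is easy to verify numerically. The sketch below (synthetic, skewed data so the optima differ) shows that the squared-error-optimal constant is the mean while the absolute-error-optimal constant is the median, i.e. only the former matches the (conditional) expectation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Skewed synthetic targets: mean ~ 1.0, median ~ ln 2 ~ 0.69.
rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=10_000)

# Optimal constant predictor under each criterion.
mse_opt = minimize_scalar(lambda c: np.mean((y - c) ** 2)).x   # -> mean(y)
mae_opt = minimize_scalar(lambda c: np.mean(np.abs(y - c))).x  # -> median(y)
```

Under squared error the optimum coincides with the sample mean; under absolute error it moves to the median, illustrating why the paper's conditions single out the criteria whose optimum is the conditional expectation.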
Theory and practice of simultaneous data reconciliation and gross error detection for chemical processes
Abstract

Cited by 1 (0 self)
Online optimization provides a means for maintaining a process near its optimum operating conditions by providing set points to the process’s distributed control system (DCS). To achieve plant-model matching for optimization, process measurements are necessary. However, pre-processing of these measurements is required, since they usually contain random and, less frequently, gross errors. These errors should be eliminated, and the measurements should satisfy the process constraints, before any evaluation of the process. In this paper, the importance and effectiveness of simultaneous procedures for data reconciliation and gross error detection are established. These procedures, which build on results from robust statistics, reduce the effect of gross errors. They provide results comparable to those from methods such as the modified iterative measurement test (MIMT) method, without requiring an iterative procedure. In addition to deriving new robust methods, novel gross error detection criteria are described and their performance is tested. Comparative results for the introduced methods are given for five literature cases and, more importantly, two industrial cases. Methods based on the Cauchy distribution and Hampel’s redescending M-estimator give promising results for data reconciliation and gross error detection with less computation.
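The reconciliation step itself, adjusting raw measurements so they satisfy a linear balance, has a standard closed form for the weighted least-squares case. The sketch below reconciles three flow readings against a single mass balance; it shows the baseline least-squares step only, not the paper's robust (Cauchy / Hampel M-estimator) variants:

```python
import numpy as np

def reconcile(y, A, W):
    """Weighted least-squares reconciliation:
    minimize (x - y)' W (x - y) subject to A x = 0,
    where W is the inverse measurement covariance.
    Closed form via the Lagrange multiplier lam."""
    Winv = np.linalg.inv(W)
    lam = np.linalg.solve(A @ Winv @ A.T, A @ y)
    return y - Winv @ A.T @ lam

A = np.array([[1.0, -1.0, -1.0]])   # mass balance: F1 - F2 - F3 = 0
W = np.eye(3)                       # equal measurement variances assumed
y = np.array([10.5, 6.2, 4.0])      # raw readings (imbalance of 0.3)
x = reconcile(y, A, W)              # adjusted readings satisfy A @ x = 0
```

A gross error in one reading distorts all three adjustments here, which is exactly the weakness the robust estimators in the paper are designed to limit.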
[Untitled note on Wiener interpolation]
Abstract
 Add to MetaCart
Wiener interpolation differs from polynomial interpolation approaches in that it is based on the expected correlation of the data. Wiener interpolation of discrete data is simple, requiring only the solution of a matrix equation. This note describes two derivations for discrete Wiener interpolation. Some advantages of Wiener interpolation are:
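The "matrix equation" the note refers to relates the sample-to-sample correlations to the sample-to-target correlations. A minimal sketch, assuming a Gaussian autocorrelation model (the correlation function and sample values below are illustrative choices, not the note's):

```python
import numpy as np

def wiener_interp(xs, ys, s, R=lambda d: np.exp(-d ** 2)):
    """Wiener (linear least-squares) interpolation at position s:
    weights w solve K w = r, where K holds correlations between the
    known samples and r the correlations between samples and target."""
    K = R(xs[:, None] - xs[None, :])   # sample-to-sample correlation matrix
    r = R(xs - s)                      # sample-to-target correlation vector
    w = np.linalg.solve(K, r)          # the matrix equation
    return w @ ys                      # estimate = weighted sum of samples

xs = np.array([0.0, 1.0, 2.0])
ys = np.array([0.0, 1.0, 0.0])
val = wiener_interp(xs, ys, 1.0)       # at a sample, reproduces the sample
```

Because the weights come from the assumed correlation `R` rather than from a polynomial basis, the interpolant's smoothness and behavior between samples are controlled by the correlation model, which is the contrast with polynomial interpolation the note draws.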