Results 1–10 of 277
Spatio-Temporal Coding for Wireless Communication
IEEE Trans. Commun., 1998
Cited by 282 (14 self)
Multipath signal propagation has long been viewed as an impairment to reliable communication in wireless channels. This paper shows that the presence of multipath greatly improves the achievable data rate if the appropriate communication structure is employed. A compact model is developed for the multiple-input multiple-output (MIMO) dispersive, spatially selective wireless communication channel. The multivariate information capacity is analyzed. Under high signal-to-noise ratio (SNR) conditions, the MIMO channel can exhibit a capacity slope, in bits per decibel of power increase, proportional to the minimum of the number of multipath components, the number of input antennas, and the number of output antennas. This desirable result is contrasted with the lower capacity slope of the well-studied case with multiple antennas at only one side of the radio link. A spatio-temporal vector-coding (STVC) communication structure is suggested as a means for achieving MIMO channel capacity. The complexity of STVC motivates a more practical reduced-complexity discrete matrix multitone (DMMT) space-frequency coding approach. Both of these structures are shown to be asymptotically optimum. An adaptive-lattice trellis-coding technique is suggested as a method for coding across the space and frequency dimensions that exist in the DMMT channel. Experimental examples that support the theoretical results are presented.
Index Terms: Adaptive arrays, adaptive coding, adaptive modulation, antenna arrays, broadband communication, channel coding, digital modulation, information rates, MIMO systems, multipath channels.
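At high SNR, each active eigenmode of a MIMO channel contributes roughly log2(10)/10 ≈ 0.332 bit per dB of extra power, so the capacity slope scales with the number of modes. A toy parallel-eigenmode sketch of this effect (the eigenmode gains below are hypothetical, not taken from the paper):

```python
import math

def mimo_capacity(gains, snr):
    # Parallel-eigenmode view of a MIMO channel: total capacity is the
    # sum of per-mode Shannon capacities.  `gains` are the nonzero
    # squared singular values of the channel matrix (made-up values here).
    return sum(math.log2(1.0 + snr * g) for g in gains)

gains = [1.0, 0.8, 0.5]                     # r = 3 active eigenmodes
c1 = mimo_capacity(gains, 10 ** (30 / 10))  # capacity at 30 dB SNR
c2 = mimo_capacity(gains, 10 ** (40 / 10))  # capacity at 40 dB SNR
slope = (c2 - c1) / 10.0                    # bits per dB of extra power
print(round(slope, 3))                      # close to 3 * log2(10)/10 ≈ 0.997
```

With r eigenmodes, the slope is roughly r times the single-antenna slope, which is the min(paths, inputs, outputs) scaling stated in the abstract.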
On the capacity of OFDM-based spatial multiplexing systems
IEEE Trans. Commun., 2002
Cited by 118 (15 self)
This paper deals with the capacity behavior of wireless Orthogonal Frequency Division Multiplexing (OFDM)-based spatial multiplexing systems in broadband fading environments, for the case where the channel is unknown at the transmitter and perfectly known at the receiver. Introducing a physically motivated multiple-input multiple-output (MIMO) broadband fading channel model, we study the influence of physical parameters, such as the amount of delay spread, cluster angle spread, and total angle spread, and of system parameters, such as the number of antennas and the antenna spacing, on ergodic capacity and outage capacity. We find that in the MIMO case, unlike the single-input single-output (SISO) case, delay-spread channels may provide an advantage over flat-fading channels not only in terms of outage capacity but also in terms of ergodic capacity. Therefore, MIMO delay-spread channels will in general provide both higher diversity gain and higher multiplexing gain than MIMO flat-fading channels.
On the Interdependence of Routing and Data Compression in Multi-Hop Sensor Networks
, 2002
Cited by 115 (7 self)
We consider a problem of broadcast communication in a multi-hop sensor network, in which samples of a random field are collected at each node of the network, and the goal is for all nodes to obtain an estimate of the entire field within a prescribed distortion value. The main idea we explore in this paper is that of jointly compressing the data generated by different nodes as this information travels over multiple hops, to eliminate correlations in the representation of the sampled field. Our main contributions are: (a) we obtain, using simple network flow concepts, conditions on the rate/distortion function of the random field that guarantee that any node can obtain the measurements collected at every other node in the network, quantized to within any prescribed distortion value; and (b) we construct a large class of physically motivated stochastic models for sensor data, for which we are able to prove that the joint rate/distortion function of all the data generated by the whole network grows more slowly than the bounds found in (a). A truly novel aspect of our work is the tight coupling between routing and source coding, explicitly formulated in a simple and analytically tractable model; to the best of our knowledge, this connection had not been studied before.
Scale-Space for Discrete Signals
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990
Cited by 94 (21 self)
We address the formulation of a scale-space theory for discrete signals. In one dimension it is possible to characterize the smoothing transformations completely, and an exhaustive treatment is given, answering the following two main questions: 1. Which linear transformations remove structure, in the sense that the number of local extrema (or zero-crossings) in the output signal does not exceed the number of local extrema (or zero-crossings) in the original signal? 2. How should one create a multiresolution family of representations with the property that a signal at a coarser level of scale never contains more structure than a signal at a finer level of scale? We propose that there is only one reasonable way to define a scale-space for 1-D discrete signals comprising a continuous scale parameter, namely by (discrete) convolution with the family of kernels T(n; t) = e^{-t} I_n(t), where I_n are the modified Bessel functions of integer order. Similar arguments applied in the continuous case uniquely lead to the Gaussian kernel.
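The kernel T(n; t) = e^{-t} I_n(t) is easy to evaluate directly; the sketch below (pure Python, with a naive power-series Bessel evaluation that is only adequate for small t) checks that it sums to one over the integers, as a smoothing kernel should:

```python
import math

def bessel_i(n, t, terms=60):
    # Modified Bessel function I_n(t) via its power series
    # (naive truncation; fine for small t, not production code).
    n = abs(n)
    return sum((t / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def discrete_gaussian(n, t):
    # The discrete scale-space kernel T(n; t) = exp(-t) * I_n(t).
    return math.exp(-t) * bessel_i(n, t)

# Since the sum of I_n(t) over all integers n equals e^t, the kernel is
# a probability distribution on the integers:
t = 2.0
total = sum(discrete_gaussian(n, t) for n in range(-20, 21))
print(round(total, 6))   # 1.0
```

Convolving a discrete signal with this kernel at increasing t gives the multiresolution family the abstract describes.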
Sure independence screening for ultrahigh dimensional feature space
, 2006
Cited by 92 (12 self)
Variable selection plays an important role in high-dimensional statistical modeling, which nowadays appears in many areas and is key to various scientific discoveries. For problems of large scale or dimensionality p, estimation accuracy and computational cost are two top concerns. In a recent paper, Candès and Tao (2007) propose the Dantzig selector using L1 regularization and show that it achieves the ideal risk up to a logarithmic factor log p. Their innovative procedure and remarkable result are challenged when the dimensionality is ultrahigh, as the factor log p can be large and their uniform uncertainty principle can fail. Motivated by these concerns, we introduce the concept of sure screening and propose a sure screening method based on correlation learning, called Sure Independence Screening (SIS), to reduce dimensionality from high to a moderate scale that is below the sample size. In a fairly general asymptotic framework, SIS is shown to have the sure screening property even for exponentially growing dimensionality. As a methodological extension, an iterative SIS (ISIS) is also proposed to enhance its finite-sample performance. With dimension reduced accurately from high to below the sample size, variable selection can be improved in both speed and accuracy, and can then be accomplished by a well-developed method such as SCAD, the Dantzig selector, lasso, or adaptive lasso.
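The correlation-screening step itself is simple. A self-contained sketch on synthetic data (the data, the choice of d, and the helper names are illustrative, not the paper's code):

```python
import math, random

def sis(X, y, d):
    # Sure Independence Screening sketch: rank features by absolute
    # marginal correlation with the response and keep the top d.
    n, p = len(X), len(X[0])

    def corr(j):
        xj = [row[j] for row in X]
        mx, my = sum(xj) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(xj, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in xj))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return sxy / (sx * sy)

    return sorted(range(p), key=lambda j: -abs(corr(j)))[:d]

random.seed(0)
n, p = 100, 500                       # p >> n, the regime SIS targets
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [3 * row[0] - 2 * row[1] + random.gauss(0, 0.5) for row in X]
selected = sis(X, y, 5)
print(selected)                       # the true features 0 and 1 survive
```

Screening 500 features down to 5 with a marginal statistic is what makes a subsequent penalized fit cheap; the paper's contribution is proving this keeps the true variables with probability tending to one.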
Regularized estimation of large covariance matrices
Ann. Statist., 2008
Cited by 89 (13 self)
This paper considers estimating a covariance matrix of p variables from n observations by either banding or tapering the sample covariance matrix, or estimating a banded version of the inverse of the covariance. We show that these estimates are consistent in the operator norm as long as (log p)/n → 0, and obtain explicit rates. The results are uniform over some fairly natural well-conditioned families of covariance matrices. We also introduce an analogue of the Gaussian white noise model and show that if the population covariance is embeddable in that model and well-conditioned, then the banded approximations produce consistent estimates of the eigenvalues and associated eigenvectors of the covariance matrix. The results can be extended to smooth versions of banding and to non-Gaussian distributions with sufficiently short tails. A resampling approach is proposed for choosing the banding parameter in practice. This approach is illustrated numerically on both simulated and real data.
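The banding operator itself is a one-liner: keep entries within k of the diagonal and zero the rest. A minimal sketch on a toy 4×4 sample covariance (illustrative numbers only):

```python
def band(matrix, k):
    # Banding operator B_k: keep entries within k of the diagonal,
    # zero everything else.
    p = len(matrix)
    return [[matrix[i][j] if abs(i - j) <= k else 0.0 for j in range(p)]
            for i in range(p)]

sample_cov = [
    [1.00, 0.50, 0.20, 0.05],
    [0.50, 1.00, 0.50, 0.20],
    [0.20, 0.50, 1.00, 0.50],
    [0.05, 0.20, 0.50, 1.00],
]
banded = band(sample_cov, 1)
print(banded[0])   # [1.0, 0.5, 0.0, 0.0]
```

Discarding the far-off-diagonal entries (which are estimated from little effective signal) is what buys consistency in operator norm when (log p)/n → 0; the paper's resampling procedure chooses k.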
Efficient numerical methods in non-uniform sampling theory
, 1995
Cited by 80 (9 self)
We present a new “second generation” reconstruction algorithm for irregular sampling, i.e. for the problem of recovering a band-limited function from its nonuniformly sampled values. The efficient new method is a combination of the adaptive weights method, which was developed by the first two named authors, and the method of conjugate gradients for the solution of positive definite linear systems. The choice of adaptive weights can be seen as a simple but very efficient method of preconditioning. Further substantial acceleration is achieved by utilizing the Toeplitz-type structure of the system matrix. This new algorithm can handle problems of much larger dimension and condition number than have been accessible so far. Furthermore, if some gaps between samples are large, the algorithm can still be used as a very efficient extrapolation method across the gaps.
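A bare-bones conjugate-gradient solver of the kind the abstract combines with preconditioning, applied to a small symmetric positive definite Toeplitz system (illustrative numbers; no adaptive weights or FFT acceleration here):

```python
def cg(matvec, b, iters=50, tol=1e-10):
    # Plain conjugate gradients for a positive definite system A x = b,
    # with A supplied only through matvec (e.g. a Toeplitz multiply).
    x = [0.0] * len(b)
    r = b[:]
    p = r[:]
    rr = sum(v * v for v in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rr / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rr_new = sum(v * v for v in r)
        if rr_new < tol:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

# Symmetric positive definite Toeplitz system, entries c[|i-j|].
c = [2.0, 0.5, 0.1]
def toeplitz_mul(v):
    n = len(v)
    return [sum(c[abs(i - j)] * v[j] for j in range(n) if abs(i - j) < len(c))
            for i in range(n)]

x = cg(toeplitz_mul, [1.0, 0.0, 0.0, 1.0])
print([round(v, 4) for v in x])
```

Because only matvec is needed, the Toeplitz structure lets each iteration run in O(n log n) via FFT-based multiplication, which is the acceleration the abstract refers to.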
Theoretical Foundations of Transform Coding
, 2001
Cited by 67 (6 self)
This article explains the fundamental principles of transform coding; these principles apply equally well to images, audio, video, and various other types of data, so abstract formulations are given. Much of the material presented here is adapted from [14, Chap. 2, 4]. The details of wavelet transform-based image compression and the JPEG2000 image compression standard are given in the following two articles of this special issue [38], [37].
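The transform-quantize-inverse pipeline in miniature, using a 2-point Haar transform as the orthonormal transform (a toy sketch of the principle, not from the article):

```python
import math

def haar2(pair):
    # 2-point orthonormal (Haar) transform: decorrelates adjacent
    # samples into a sum ("low-pass") and difference ("high-pass") part.
    a, b = pair
    s = 1.0 / math.sqrt(2.0)
    return (s * (a + b), s * (a - b))

def quantize(x, step):
    # Uniform scalar quantizer.
    return step * round(x / step)

# Transform coding = transform, quantize coefficients, inverse transform.
samples = (3.0, 3.2)                 # correlated neighbours
lo, hi = haar2(samples)
lo_q, hi_q = quantize(lo, 0.5), quantize(hi, 0.5)
rec = haar2((lo_q, hi_q))            # this Haar pair is its own inverse
print(rec)                           # both samples recovered within the step
```

Because the transform concentrates the signal energy in the low-pass coefficient, the high-pass coefficient quantizes to zero at no extra cost, which is exactly why coding in the transform domain beats quantizing the raw samples.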
Toeplitz and Circulant Matrices: A review
, 2001
Cited by 65 (0 self)
The fundamental theorems on the asymptotic behavior of eigenvalues, inverses, and products of "finite section" Toeplitz matrices and Toeplitz matrices with absolutely summable elements are derived in a tutorial manner. Mathematical elegance and generality are sacrificed for conceptual simplicity and insight, in the hope of making these results available to engineers lacking either the background or endurance to attack the mathematical literature on the subject. By limiting the generality of the matrices considered, the essential ideas and results can be conveyed in a more intuitive manner without the mathematical machinery required for the most general cases. As an application, the results are applied to the study of the covariance matrices, and their factors, of linear models of discrete-time random processes.
Acknowledgements: The author gratefully acknowledges the assistance of Ronald M. Aarts of the Philips Research Labs in correcting many typos and errors in the 1993 revision, Liu Mingyu in pointing out errors corrected in the 1998 revision, Paolo Tilli of the Scuola Normale Superiore of Pisa for pointing out an incorrect corollary and providing the correction, and David Neuhoff of the University of Michigan for pointing out several typographical errors and some confusing notation. For corrections, comments, and improvements to the 2001 revision, thanks are due to William Trench, John Dattorro, and Young-Han Kim. In particular, Trench brought the Wielandt-Hoffman theorem and its use to prove strengthened results to my attention. Section 2.4 largely follows his suggestions, although I take the blame for any introduced errors.
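The central fact behind the asymptotic theory is that circulant matrices are diagonalized by the DFT, so their eigenvalues are the DFT of the first row. A minimal check (the first row below is illustrative):

```python
import cmath

def circulant_eigs(first_row):
    # Eigenvalues of the circulant matrix with the given first row are
    # the discrete Fourier transform of that row.
    n = len(first_row)
    return [sum(first_row[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

# Symmetric first row, so the eigenvalues are real: 2 + 2*cos(pi*j/2),
# i.e. 4, 2, 0, 2 (up to floating-point noise).
eigs = circulant_eigs([2.0, 1.0, 0.0, 1.0])
print([round(e.real, 6) for e in eigs])
```

Toeplitz matrices are not exactly circulant, but the review's asymptotic equivalence results show that for absolutely summable elements the circulant eigenvalue formula describes the limiting eigenvalue distribution.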
An Active Contour Model For Mapping The Cortex
IEEE Trans. on Medical Imaging, 1995
Cited by 64 (13 self)
A new active contour model for finding and mapping the outer cortex in brain images is developed. A cross-section of the brain cortex is modeled as a ribbon, and a constant-speed mapping of its spine is sought. A variational formulation, an associated force balance condition, and a numerical approach are proposed to achieve this goal. The primary difference between this formulation and that of snakes is in the specification of the external force acting on the active contour. A study of the uniqueness and fidelity of solutions is made through convexity and frequency-domain analyses, and a criterion for selection of the regularization coefficient is developed. Examples demonstrating the performance of this method on simulated and real data are provided.