Results 1–10 of 418,031
The unbearable automaticity of being
 American Psychologist
, 1999
"... What was noted by E. J. hanger (1978) remains true today: that much of contemporary psychological research is based on the assumption that people are consciously and systematically processing incoming information in order to construe and interpret their world and to plan and engage in courses of act ..."
Abstract

Cited by 568 (14 self)
 Add to MetaCart
of action. As did E. J. hanger, the authors question this assumption. First, they review evidence that the ability to exercise such conscious, intentional control is actually quite limited, so that most of momenttomoment psychological life must occur through nonconscious means if it is to occur at all
The emotional dog and its rational tail: a social intuitionist approach to moral judgment
 Psychological Review
, 2001
"... This is the manuscript that was published, with only minor copyediting alterations, as: Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review. 108, 814834 Copyright 2001, American Psychological Association To obtain a repr ..."
Abstract

Cited by 629 (20 self)
 Add to MetaCart
This is the manuscript that was published, with only minor copyediting alterations, as: Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review. 108, 814834 Copyright 2001, American Psychological Association To obtain a
Half a century of research on the Stroop effect: An integrative review
 Psychological Bulletin
, 1991
"... The literature on interference in the Stroop ColorWord Task, covering over 50 years and some 400 studies, is organized and reviewed. In so doing, a set ofl 8 reliable empirical findings is isolated that must be captured by any successful theory of the Stroop effect. Existing theoretical positions a ..."
Abstract

Cited by 621 (14 self)
 Add to MetaCart
dimensions are likely to be more successful than are earlier theories attempting to locate a single bottleneck in attention. In 1935, J. R. Stroop published his landmark article on attention and interference, an article more influential now than it was then. Why has the Stroop task continued to fascinate us
The Lifting Scheme: A Construction Of Second Generation Wavelets
, 1997
"... . We present the lifting scheme, a simple construction of second generation wavelets, wavelets that are not necessarily translates and dilates of one fixed function. Such wavelets can be adapted to intervals, domains, surfaces, weights, and irregular samples. We show how the lifting scheme leads to ..."
Abstract

Cited by 541 (16 self)
 Add to MetaCart
. We present the lifting scheme, a simple construction of second generation wavelets, wavelets that are not necessarily translates and dilates of one fixed function. Such wavelets can be adapted to intervals, domains, surfaces, weights, and irregular samples. We show how the lifting scheme leads to a faster, inplace calculation of the wavelet transform. Several examples are included. Key words. wavelet, multiresolution, second generation wavelet, lifting scheme AMS subject classifications. 42C15 1. Introduction. Wavelets form a versatile tool for representing general functions or data sets. Essentially we can think of them as data building blocks. Their fundamental property is that they allow for representations which are efficient and which can be computed fast. In other words, wavelets are capable of quickly capturing the essence of a data set with only a small set of coefficients. This is based on the fact that most data sets have correlation both in time (or space) and frequenc...
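The split/predict/update pattern behind lifting can be shown in a few lines. The sketch below implements only the simplest instance, the Haar wavelet written as two lifting steps; the paper's construction is far more general, and the function names and test signal here are illustrative, not from the paper.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet transform written as lifting steps.

    Minimal sketch only: the general lifting scheme adapts these steps to
    intervals, surfaces, and irregular samples; the Haar case just shows
    the split / predict / update pattern.
    """
    s, d = x[0::2].copy(), x[1::2].copy()  # split into even / odd samples
    d -= s          # predict: each odd sample predicted by its even neighbor
    s += d / 2      # update: restore the running average (coarse signal)
    return s, d     # s = approximation, d = detail coefficients

def haar_lift_inverse(s, d):
    """Invert by undoing the lifting steps in reverse order."""
    s = s - d / 2   # undo update
    d = d + s       # undo predict
    x = np.empty(s.size + d.size)
    x[0::2], x[1::2] = s, d   # merge evens and odds back together
    return x

x = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 1.0, 0.0, 2.0])
s, d = haar_lift_forward(x)
assert np.allclose(haar_lift_inverse(s, d), x)  # perfect reconstruction
```

Because each lifting step only adds a function of one half of the data to the other half, every step is trivially invertible, which is what makes the in-place computation mentioned in the abstract possible.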
A scheduling model for reduced CPU energy
 Annual Symposium on Foundations of Computer Science
, 1995
"... The energy usage of computer systems is becoming an important consideration, especially for batteryoperated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job s ..."
Abstract

Cited by 550 (3 self)
 Add to MetaCart
The energy usage of computer systems is becoming an important consideration, especially for batteryoperated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job scheduling aimed at capturing some key aspects of energy minimization. In this model, each job is to be executed between its arrival time and deadline by a single processor with variable speed, under the assumption that energy usage per unit time, P, is a convex function of the processor speed s. We give an offline algorithm that computes, for any set of jobs, a minimumenergy schedule. We then consider some online algorithms and their competitive performance for the power function P(s) = sp where p 3 2. It is shown that one natural heuristic, called the Average Rate heuristic, uses at most a constant times the minimum energy required. The analysis involves bounding the largest eigenvalue in matrices of a special type.
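To make the Average Rate heuristic concrete, here is a sketch under assumed conventions: each job is a triple (arrival, deadline, work), and AVR runs the processor at the sum of the densities work/(deadline - arrival) of all currently live jobs. The job data and the choice p = 2 are illustrative, not from the paper.

```python
def avr_speed(jobs, t):
    """Processor speed at time t: sum of densities of jobs with a <= t < d."""
    return sum(w / (d - a) for (a, d, w) in jobs if a <= t < d)

def avr_energy(jobs, p=2.0):
    """Energy of the AVR schedule for P(s) = s**p, integrated piecewise.

    The AVR speed is constant between consecutive arrival/deadline events,
    so the integral reduces to a finite sum over those intervals.
    """
    events = sorted({a for a, _, _ in jobs} | {d for _, d, _ in jobs})
    energy = 0.0
    for t0, t1 in zip(events, events[1:]):
        s = avr_speed(jobs, t0)          # speed is constant on [t0, t1)
        energy += (s ** p) * (t1 - t0)
    return energy

jobs = [(0.0, 4.0, 2.0), (1.0, 3.0, 4.0), (2.0, 6.0, 1.0)]  # (a, d, w)
print(avr_energy(jobs, p=2.0))
```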
Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming
 Journal of the ACM
, 1995
"... We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least .87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution ..."
Abstract

Cited by 1231 (13 self)
 Add to MetaCart
We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least .87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution to a nonlinear programming relaxation. This relaxation can be interpreted both as a semidefinite program and as an eigenvalue minimization problem. The best previously known approximation algorithms for these problems had performance guarantees of ...
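The "simple and elegant technique" is random-hyperplane rounding. The sketch below shows only that rounding step; solving the semidefinite relaxation itself requires an SDP solver, so the unit vectors here are random stand-ins, whereas the actual algorithm rounds the optimal SDP solution. The graph and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def round_by_hyperplane(V, rng):
    """Partition vertices by the side of a uniformly random hyperplane.

    V is an (n, d) array whose rows are unit vectors; vertex i goes to the
    side given by the sign of <v_i, r> for a Gaussian direction r.
    """
    r = rng.standard_normal(V.shape[1])
    return np.sign(V @ r) >= 0            # boolean side assignment

def cut_value(edges, side):
    """Number of edges crossing the partition."""
    return sum(int(side[u] != side[v]) for u, v in edges)

# Hypothetical 4-cycle graph and placeholder "SDP" vectors for illustration.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
V = rng.standard_normal((4, 3))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # normalize rows to unit length
side = round_by_hyperplane(V, rng)
print(cut_value(edges, side))
```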
Critical Power for Asymptotic Connectivity in Wireless Networks
, 1998
"... : In wireless data networks each transmitter's power needs to be high enough to reach the intended receivers, while generating minimum interference on other receivers sharing the same channel. In particular, if the nodes in the network are assumed to cooperate in routing each others ' pack ..."
Abstract

Cited by 548 (19 self)
 Add to MetaCart
: In wireless data networks each transmitter's power needs to be high enough to reach the intended receivers, while generating minimum interference on other receivers sharing the same channel. In particular, if the nodes in the network are assumed to cooperate in routing each others ' packets, as is the case in ad hoc wireless networks, each node should transmit with just enough power to guarantee connectivity in the network. Towards this end, we derive the critical power a node in the network needs to transmit in order to ensure that the network is connected with probability one as the number of nodes in the network goes to infinity. It is shown that if n nodes are placed in a disc of unit area in ! 2 and each node transmits at a power level so as to cover an area of ßr 2 = (log n + c(n))=n, then the resulting network is asymptotically connected with probability one if and only if c(n) ! +1. 1 Introduction Wireless communication systems consist of nodes which share a common commu...
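The threshold πr² = (log n + c(n))/n is easy to probe numerically. The following sketch samples n nodes in a disc of unit area, gives each the radio range implied by a chosen c, and tests whether the resulting geometric graph is connected; n, the values of c, and the trial count are illustrative choices, not from the paper.

```python
import math, random

def is_connected(points, r):
    """Graph search over the geometric graph: edge iff distance <= r."""
    n, seen, stack = len(points), {0}, [0]
    while stack:
        xi, yi = points[stack.pop()]
        for j in range(n):
            if j not in seen:
                xj, yj = points[j]
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= r * r:
                    seen.add(j)
                    stack.append(j)
    return len(seen) == n

def trial(n, c, rng):
    R = 1 / math.sqrt(math.pi)                      # disc of unit area
    pts = []
    while len(pts) < n:                             # rejection-sample the disc
        x, y = rng.uniform(-R, R), rng.uniform(-R, R)
        if x * x + y * y <= R * R:
            pts.append((x, y))
    r = math.sqrt((math.log(n) + c) / (math.pi * n))  # pi*r^2 = (log n + c)/n
    return is_connected(pts, r)

rng = random.Random(1)
for c in (-2.0, 0.0, 4.0):   # larger c should push connectivity toward 1
    hits = sum(trial(200, c, rng) for _ in range(50))
    print(f"c = {c:+.1f}: connected in {hits}/50 trials")
```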
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Abstract

Cited by 1513 (20 self)
 Add to MetaCart
Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a powerlaw (or if the coefficient sequence of f in a fixed basis decays like a powerlaw), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude f  (1) ≥ f  (2) ≥... ≥ f  (N), and define the weakℓp ball as the class F of those elements whose entries obey the power decay law f  (n) ≤ C · n −1/p. We take measurements 〈f, Xk〉, k = 1,..., K, where the Xk are Ndimensional Gaussian
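A small numerical illustration of recovery from Gaussian measurements: the paper analyzes ℓ1-based reconstruction, and as a stand-in that needs no external solver the sketch below uses ISTA (iterative soft-thresholding), a basic ℓ1-regularized least-squares method. The dimensions, sparsity level, and regularization weight are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, S = 256, 80, 8                      # ambient dim, measurements, sparsity

f = np.zeros(N)                           # an exactly sparse test signal
support = rng.choice(N, S, replace=False)
f[support] = rng.standard_normal(S)

X = rng.standard_normal((K, N)) / np.sqrt(K)   # Gaussian measurement vectors
y = X @ f                                      # y_k = <f, X_k>

def ista(X, y, lam=1e-3, iters=3000):
    """Minimize 0.5*||X g - y||^2 + lam*||g||_1 by proximal gradient steps."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1/L, L = gradient Lipschitz const.
    g = np.zeros(X.shape[1])
    for _ in range(iters):
        g = g - step * X.T @ (X @ g - y)       # gradient step on the data term
        g = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return g

g = ista(X, y)
print("relative l2 error:", np.linalg.norm(g - f) / np.linalg.norm(f))
```

With K well above the sparsity S, the reconstruction error is small despite K being far below N, which is the qualitative phenomenon the abstract describes.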