Results 1–10 of 146
Limiting Privacy Breaches in Privacy Preserving Data Mining
In PODS, 2003
Abstract

Cited by 287 (11 self)
There has been increasing interest in the problem of building accurate data mining models over aggregate data, while protecting privacy at the level of individual records. One approach for this problem is to randomize the values in individual records, and only disclose the randomized values. The model is then built over the randomized data, after first compensating for the randomization (at the aggregate level). This approach is potentially vulnerable to privacy breaches: based on the distribution of the data, one may be able to learn with high confidence that some of the randomized records satisfy a specified property, even though privacy is preserved on average. In this paper, we present a new formulation of privacy breaches, together with a methodology, "amplification", for limiting them. Unlike earlier approaches, amplification makes it possible to guarantee limits on privacy breaches without any knowledge of the distribution of the original data. We instantiate this methodology for the problem of mining association rules, and modify the algorithm from [9] to limit privacy breaches without knowledge of the data distribution. Next, we address the problem that the amount of randomization required to avoid privacy breaches (when mining association rules) results in very long transactions. By using pseudorandom generators and carefully choosing seeds such that the desired items from the original transaction are present in the randomized transaction, we can send just the seed instead of the transaction, resulting in a dramatic drop in communication and storage cost. Finally, we define new information measures that take privacy breaches into account when quantifying the amount of privacy preserved by randomization.
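The randomize-then-compensate idea can be illustrated with classical randomized response on a single binary attribute. This is a minimal sketch, not the paper's amplification operator; `randomize` and `estimate_true_fraction` are illustrative names.

```python
import random

def randomize(bit, p):
    """Report the true bit with probability p, otherwise flip it.
    Only the randomized bit is ever disclosed."""
    return bit if random.random() < p else 1 - bit

def estimate_true_fraction(reports, p):
    """Compensate at the aggregate level:
    E[observed] = p*true + (1-p)*(1-true), so
    true = (observed - (1-p)) / (2p - 1)."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

random.seed(0)
truth = [1] * 3000 + [0] * 7000               # true fraction of 1s = 0.30
reports = [randomize(b, 0.8) for b in truth]  # disclosed data
estimate = estimate_true_fraction(reports, 0.8)
```

The aggregate estimate is accurate even though no individual report is trustworthy; the privacy-breach concern in the paper is precisely that such per-record uncertainty can still leak information for skewed distributions.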
Universal Space-Time Coding
IEEE Trans. Inform. Theory, 2003
Abstract

Cited by 128 (6 self)
A universal framework is developed for constructing full-rate and full-diversity coherent space-time codes for systems with arbitrary numbers of transmit and receive antennas. The proposed framework combines space-time layering concepts with algebraic component codes optimized for single-input single-output (SISO) channels. Each component code is assigned to a "thread" in the space-time matrix, thus allowing it full access to the channel spatial diversity in the absence of the other threads. Diophantine approximation theory is then used to make the different threads "transparent" to each other. Within this framework, a special class of signals which uses algebraic number-theoretic constellations as component codes is thoroughly investigated. The lattice structure of the proposed number-theoretic codes, along with their minimal delay, allows for polynomial-complexity maximum-likelihood (ML) decoding using algorithms from lattice theory. Combining the design framework with the Cayley transform allows the construction of full-diversity differential and noncoherent space-time codes. The proposed framework subsumes many of the existing codes in the literature, extends naturally to time-selective and frequency-selective channels, and allows for more flexibility in the trade-off between power efficiency, bandwidth efficiency, and receiver complexity. Simulation results that demonstrate the significant gains offered by the proposed codes are presented in certain representative scenarios.
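The full-diversity (rank) criterion that such designs target can be checked numerically on a toy example: a 2x2 diagonal code built from a golden-ratio rotation of integer symbol pairs. This is an illustrative algebraic rotation, far simpler than the paper's threaded construction; because the code is linear, it suffices to check that every nonzero symbol difference yields a nonzero determinant.

```python
import itertools, math

PHI = (1 + math.sqrt(5)) / 2    # golden ratio
PHIB = (1 - math.sqrt(5)) / 2   # its algebraic conjugate

def codeword(s1, s2):
    """2x2 diagonal space-time codeword from an integer symbol pair."""
    return [[s1 + PHI * s2, 0.0],
            [0.0, s1 + PHIB * s2]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Full diversity: the difference of any two distinct codewords is full
# rank.  By linearity, scanning nonzero integer differences suffices.
min_abs_det = min(
    abs(det2(codeword(d1, d2)))
    for d1, d2 in itertools.product(range(-2, 3), repeat=2)
    if (d1, d2) != (0, 0)
)
```

Here det = d1^2 + d1*d2 - d2^2, which is nonzero for every nonzero integer pair because its roots are irrational; that algebraic fact is exactly the kind of number-theoretic argument the abstract alludes to.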
Fast and effective orchestration of compiler optimizations for automatic performance tuning
In Proceedings of the International Symposium on Code Generation and Optimization (CGO), 2006
Abstract

Cited by 41 (1 self)
Although compile-time optimizations generally improve program performance, degradations caused by individual techniques are to be expected. One promising research direction to overcome this problem is the development of dynamic, feedback-directed optimization orchestration algorithms, which automatically search for the combination of optimization techniques that achieves the best program performance. The challenge is to develop an orchestration algorithm that finds, in an exponential search space, a solution that is close to the best, in acceptable time. In this paper, we build such a fast and effective algorithm, called Combined Elimination (CE). The key advance of CE over existing techniques is that it takes the least tuning time (57% of the closest alternative), while achieving the same program performance. We conduct the experiments on both a Pentium IV machine and a SPARC II machine, by measuring the performance of SPEC CPU2000 benchmarks under a large set of 38 GCC compiler options. Furthermore, through orchestrating a small set of optimizations causing the most degradation, we show that the performance achieved by CE is close to the upper bound obtained by an exhaustive search algorithm. The gap is less than 0.2% on average.
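A simplified sketch of the elimination idea behind CE, using a stand-in `measure` function in place of compiling and timing the program. The real CE algorithm batches removals using relative improvement percentages; this greedy one-at-a-time loop only illustrates the search structure.

```python
def combined_elimination(options, measure):
    """Start with all options on and repeatedly drop the single option
    whose removal most improves the measured runtime (lower is better),
    until no removal helps."""
    enabled = set(options)
    baseline = measure(enabled)
    while True:
        best_opt, best_time = None, baseline
        for opt in enabled:
            t = measure(enabled - {opt})
            if t < best_time:
                best_opt, best_time = opt, t
        if best_opt is None:
            return enabled
        enabled.discard(best_opt)
        baseline = best_time

# Toy cost model standing in for compile-and-run: flag "b" degrades
# performance, flags "a" and "c" help.
COST = {"a": -2.0, "b": +3.0, "c": -1.0}
def fake_measure(flags):
    return 100.0 + sum(COST[f] for f in flags)

best = combined_elimination({"a", "b", "c"}, fake_measure)
```

On this toy model the loop correctly eliminates the degrading flag and keeps the two helpful ones; the paper's contribution is doing this with few measurements despite interactions among 38 real GCC options.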
Robust analog/RF circuit design with projection-based posynomial modeling
IEEE/ACM ICCAD, 2004
Abstract

Cited by 24 (9 self)
In this paper we propose a RObust Analog Design tool (ROAD) for post-tuning analog/RF circuits. Starting from an initial design derived from hand analysis or analog circuit synthesis based on simplified models, ROAD extracts accurate posynomial performance models via transistor-level simulation and optimizes the circuit by geometric programming. Importantly, ROAD sets up all design constraints to include large-scale process variations to facilitate the trade-off between yield and performance. A novel convex formulation of the robust design problem is utilized to improve the optimization efficiency and to produce a solution that is superior to other local tuning methods. In addition, a novel projection-based approach for posynomial fitting is used to facilitate scaling to large problem sizes. A new implicit power iteration algorithm is proposed to find the optimal projection space and extract the posynomial coefficients with robust convergence. The efficacy of ROAD is demonstrated on several circuit examples.
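Once the exponents of a posynomial are fixed, the model is linear in its coefficients, so coefficient extraction from simulation samples reduces to least squares. A minimal sketch with a hypothetical two-term model f(W, L); the names, exponents, and sample points are illustrative, not the paper's projection-based fitting.

```python
import math

def posynomial(coeffs, exps, x):
    """f(x) = sum_k c_k * prod_i x_i**a_ki, with c_k >= 0."""
    return sum(c * math.prod(xi ** a for xi, a in zip(x, ak))
               for c, ak in zip(coeffs, exps))

def fit_two_term(exps, samples):
    """Least-squares fit of two coefficients via 2x2 normal equations:
    with fixed exponents the monomials form a linear basis."""
    rows = [[math.prod(xi ** a for xi, a in zip(x, ak)) for ak in exps]
            for x, _ in samples]
    ys = [y for _, y in samples]
    ata = [[sum(r[i] * r[j] for r in rows) for j in (0, 1)] for i in (0, 1)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in (0, 1)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    c0 = (aty[0] * ata[1][1] - ata[0][1] * aty[1]) / det
    c1 = (ata[0][0] * aty[1] - ata[1][0] * aty[0]) / det
    return c0, c1

# Hypothetical model f(W, L) = c0*W/L + c1*L/W for some circuit metric:
exps = [(1.0, -1.0), (-1.0, 1.0)]
true_c = (2.0, 0.5)
points = [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]
samples = [(x, posynomial(true_c, exps, x)) for x in points]
c0, c1 = fit_two_term(exps, samples)
```

Posynomials matter here because they become convex after a log change of variables, which is what makes the subsequent geometric-programming optimization tractable.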
Minimum Moment Aberration for Nonregular Designs and Supersaturated Designs
Statist. Sinica, 2003
Abstract

Cited by 24 (10 self)
A novel combinatorial criterion, called minimum moment aberration, is proposed for assessing the goodness of nonregular designs and supersaturated designs. The new criterion, which sequentially minimizes the power moments of the number of coincidences among runs, is a surrogate with tremendous computational advantages for many statistically justified criteria, such as minimum G2-aberration, generalized minimum aberration and E(s²). In addition, minimum moment aberration is conceptually simple and convenient for theoretical development.
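The power moments of the number of coincidences can be computed directly from the design matrix, which is the source of the criterion's computational advantage. A minimal sketch on a small two-level design:

```python
from itertools import combinations

def power_moments(design, max_m=3):
    """Power moments K_m of the number of coincidences delta_ij (the
    count of factors on which two runs agree), averaged over all run
    pairs.  Minimum moment aberration minimizes K_1, K_2, ... in turn."""
    pairs = list(combinations(design, 2))
    deltas = [sum(a == b for a, b in zip(r1, r2)) for r1, r2 in pairs]
    return [sum(d ** m for d in deltas) / len(pairs)
            for m in range(1, max_m + 1)]

# A 4-run, 3-factor two-level design:
design = [(0, 0, 0),
          (0, 1, 1),
          (1, 0, 1),
          (1, 1, 0)]
moments = power_moments(design)
```

Every pair of runs in this design agrees on exactly one factor, so all moments equal 1; designs whose coincidence counts are as equal as possible are exactly the ones this criterion favors.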
The Perfect Binary One-Error-Correcting Codes of Length 15: Part II, Properties
2009
Abstract

Cited by 20 (3 self)
A complete classification of the perfect binary one-error-correcting codes of length 15, as well as their extensions of length 16, was recently carried out in [P. R. J. Östergård and O. Pottonen, "The perfect binary one-error-correcting codes of length 15: Part I—Classification," submitted for publication]. In the current accompanying work, the classified codes are studied in great detail, and their main properties are tabulated. The results include the fact that 33 of the 80 Steiner triple systems of order 15 occur in such codes. Further understanding is gained on full-rank codes via i-components, as it turns out that all but two full-rank codes can be obtained through a series of transformations from the Hamming code. Other topics studied include (non)systematic codes, embedded one-error-correcting codes, and defining sets of codes. A classification of certain mixed perfect codes is also obtained.
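The length-15 Hamming code that the transformations start from can be constructed and checked for perfection in a few lines. This is a sketch of the standard syndrome construction, not the paper's classification machinery.

```python
def syndrome(word):
    """XOR of the 1-indexed positions of the set bits: the Hamming
    syndrome for the parity-check matrix whose columns are 1..15."""
    s = 0
    for i in range(15):
        if (word >> i) & 1:
            s ^= i + 1
    return s

# The [15,11] Hamming code: all length-15 words with zero syndrome.
codewords = [w for w in range(1 << 15) if syndrome(w) == 0]
n_codewords = len(codewords)

# Perfection: spheres of radius 1 around codewords tile the space,
# i.e. |C| * (1 + 15) = 2^15, and any word with nonzero syndrome s
# becomes a codeword by flipping bit s.
all_corrected = all(syndrome(w ^ (1 << (syndrome(w) - 1))) == 0
                    for w in range(1 << 15) if syndrome(w) != 0)
```

This exhibits the unique linear perfect code of length 15; the classification in Part I covers the much larger family of nonlinear ones whose properties Part II tabulates.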
Quantum algorithms for weighing matrices and quadratic residues
Algorithmica, 2002
Abstract

Cited by 16 (1 self)
In this article we investigate how the structure of combinatorial objects like Hadamard matrices and weighing matrices can be employed to devise new quantum algorithms. We show how the properties of a weighing matrix can be used to construct a problem for which the quantum query complexity is significantly lower than the classical one. It is pointed out that this scheme captures both Bernstein & Vazirani's inner-product protocol, as well as Grover's search algorithm. In the second part of the article we consider Paley's construction of Hadamard matrices to design a more specific problem that uses the Legendre symbol χ (which indicates whether an element of a finite field GF(p^k) is a quadratic residue or not). It is shown how, for a shifted Legendre function f_s(x) = χ(x + s), the unknown s ∈ GF(p^k) can be obtained exactly with only two quantum calls to f_s. This is in sharp contrast with the observation that any classical, probabilistic procedure requires at least k log p queries to solve the same problem.
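Classically, the Legendre symbol is computed via Euler's criterion. A minimal sketch of the shifted Legendre function the quantum algorithm queries, in the prime-field case k = 1 (the choice of p and s is illustrative):

```python
def legendre(x, p):
    """Legendre symbol chi(x) over GF(p), by Euler's criterion:
    +1 for a nonzero quadratic residue, -1 for a non-residue, 0 for x = 0."""
    if x % p == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

p, s = 11, 7
def f_s(x):
    """The shifted Legendre function chi(x + s) the algorithm queries."""
    return legendre(x + s, p)

# The quadratic residues mod 11:
residues = {x for x in range(1, p) if legendre(x, p) == 1}
```

A classical solver must probe f_s at many points to pin down the shift s, whereas the paper's quantum procedure recovers s exactly from just two queries to f_s.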
Performance modeling of analog integrated circuits using least-squares support vector machines
In Proc. Des. Autom. and Test Europe Conf.
Abstract

Cited by 16 (1 self)
This paper describes the application of Least-Squares Support Vector Machine (LS-SVM) training to analog circuit performance modeling as needed for accelerated or hierarchical analog circuit synthesis. The training is a type of regression, where a function of a special form is fit to experimental performance data derived from analog circuit simulations. The method is contrasted with a feasibility model approach based on the more traditional use of SVMs, namely classification. A Design of Experiments (DOE) strategy is reviewed which forms the basis of an efficient simulation sampling scheme. The results of our functional regression are then compared to two other DOE-based fitting schemes: a simple linear least-squares regression and a regression using posynomial models. The LS-SVM fitting has advantages over these approaches in terms of accuracy of fit to measured data, prediction of intermediate data points, and reduction of free model tuning parameters.
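Unlike standard SVM training, LS-SVM regression reduces to a single linear (KKT) system rather than a quadratic program. A self-contained sketch with an RBF kernel on toy 1-D data (not the paper's circuit data; `gamma` is the regularization constant):

```python
import math

def rbf(x, z, sigma=1.0):
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def lssvm_fit(xs, ys, gamma=1e4):
    """Solve the LS-SVM KKT system [0 1^T; 1 K+I/gamma][b; alpha] = [0; y];
    the model is f(x) = b + sum_i alpha_i * k(x_i, x)."""
    n = len(xs)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [rbf(xs[i], xs[j]) + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = solve(A, [0.0] + list(ys))
    b, alpha = sol[0], sol[1:]
    return lambda x: b + sum(a * rbf(xi, x) for a, xi in zip(alpha, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]   # stand-in for simulated performance data
model = lssvm_fit(xs, ys)
```

The "special form" mentioned in the abstract is exactly this kernel expansion; in the paper, `xs` would be DOE-selected design points and `ys` simulated circuit performances.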
Pairwise testing: A best practice that isn’t
22nd Annual Pacific Northwest Software Quality Conference, 2004
"... James Bach ..."
Fast automatic procedure-level performance tuning
In IEEE PACT, 2006
Abstract

Cited by 12 (1 self)
This paper presents an automated performance tuning solution, which partitions a program into a number of tuning sections and finds the best combination of compiler options for each section. Our solution builds on prior work on feedback-driven optimization, which tuned the whole program instead of each section. Our key novel algorithm partitions a program into appropriate tuning sections. We also present the architecture of a system that automates the tuning process; it includes several pre-tuning steps that partition and instrument the program, followed by the actual tuning and the post-tuning assembly of the individually optimized parts. Our system, called PEAK, achieves fast tuning speed by measuring a small number of invocations of each code section, instead of the whole-program execution time, as in common solutions. Compared to these solutions, PEAK reduces tuning time from 2.19 hours to 5.85 minutes on average, while achieving similar program performance. PEAK improves the performance of SPEC CPU2000 FP benchmarks by 12% on average over GCC -O3, the highest optimization level, on a Pentium IV machine.
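The section-selection step can be caricatured as keeping procedures whose profiled share of execution time passes a threshold. This is a toy sketch with a hypothetical profile; PEAK's actual partitioning algorithm is more involved.

```python
def pick_tuning_sections(profile, threshold=0.05):
    """Keep procedures whose share of total execution time exceeds the
    threshold; `profile` maps procedure name -> cumulative seconds.
    Each surviving section would then be tuned by timing a few of its
    invocations rather than whole-program runs."""
    total = sum(profile.values())
    return sorted(name for name, t in profile.items() if t / total > threshold)

# Hypothetical profile of a numeric benchmark:
profile = {"fft": 41.0, "solve": 33.0, "bc": 23.5, "init": 2.0, "log": 0.5}
sections = pick_tuning_sections(profile)
```

Short-running procedures like the init and logging code are excluded, which is what lets the per-section measurements stay cheap.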