Results 1 – 10 of 12
A Class of Software-Hardware Processors for Fingerprint Matching on the Fourier Domain
In this paper we propose and evaluate different architectures for performing the Phase-Only Correlation (POC) operation for fingerprint matching on FPGAs. In particular, we are interested in a class of architectures that differ in the extent to which the tasks are split between (dedicated) hardware and software (the MicroBlaze soft processor). We compare the performance of a dedicated 2D FFT hardware unit with that achieved using hardware arrays of 1D FFTs, both used to accelerate the computation of the POC operation. For the first time, we add reconfigurability and scalability to a real-time POC processing system.
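As a rough illustration of the POC operation this abstract accelerates, here is a NumPy sketch of standard phase-only correlation (my illustration, not the paper's FPGA implementation):

```python
import numpy as np

def poc(img1, img2):
    """Phase-Only Correlation of two equal-size images.

    The cross-power spectrum is normalised to unit magnitude, so only phase
    information survives; a sharp peak in the result indicates a match, and
    the peak location gives the translational offset."""
    f1 = np.fft.fft2(img1)
    f2 = np.fft.fft2(img2)
    cross = f1 * np.conj(f2)
    cross /= np.abs(cross) + 1e-12      # keep phase only
    return np.real(np.fft.ifft2(cross))

# A shifted copy of an image should give a POC peak of ~1.0 at the shift.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, 5), axis=(0, 1))
surface = poc(shifted, img)
peak = np.unravel_index(np.argmax(surface), surface.shape)   # -> (3, 5)
```

The paper's architectures map the 2D FFTs here onto dedicated hardware or onto arrays of 1D FFT units; the normalisation and inverse FFT are the remaining POC steps.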
Some Architectures for Chebyshev Interpolation
Digital architectures for Chebyshev interpolation are explored, and a word-serial variation is proposed. These architectures are contrasted with equispaced system structures. Further, the Chebyshev interpolation scheme is compared with conventional equispaced interpolation vis-à-vis reconstruction error and the relative number of samples. It is also shown that the use of a hybrid (or dual) analog-to-digital converter unit can reduce system power consumption by as much as one third of the original.
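For readers unfamiliar with the scheme being compared, a minimal software sketch of Chebyshev interpolation (my illustration; the paper's contribution is the digital architecture, not this algorithm):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_interpolate(f, deg, a=-1.0, b=1.0):
    """Interpolate f on [a, b] at deg + 1 Chebyshev nodes.

    Chebyshev nodes cluster near the interval ends, which suppresses the
    Runge oscillations that plague equispaced interpolation."""
    k = np.arange(deg + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))  # nodes on [-1, 1]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)              # mapped to [a, b]
    coeffs = C.chebfit(nodes, f(x), deg)
    return lambda t: C.chebval(2.0 * (t - a) / (b - a) - 1.0, coeffs)

# Runge's function: equispaced interpolation diverges here, Chebyshev does not.
runge = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)
p = cheb_interpolate(runge, 20)
t = np.linspace(-1.0, 1.0, 1001)
max_err = np.max(np.abs(p(t) - runge(t)))   # small, unlike the equispaced case
```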
Machine Learning and the Traveling Repairman
, 2000
The goal of the Machine Learning and Traveling Repairman Problem (ML&TRP) is to determine a route for a “repair crew,” which repairs nodes on a graph. The repair crew aims to minimize the cost of failures at the nodes, but as in many real situations, the failure probabilities are not known and must be estimated. We introduce two formulations for the ML&TRP. The first formulation is sequential: failure probabilities are estimated at each node, and then a weighted version of the traveling repairman problem is used to construct the route from the failure cost. We develop two models for the failure cost, based on whether repeat failures are considered, or only the first failure at a node. Our second formulation is a multi-objective learning problem for ranking on graphs. Here, we estimate failure probabilities simultaneously with determining the graph traversal route; the choice of route influences the estimated failure probabilities. This is in accordance with a prior belief that probabilities that cannot be well-estimated will generally be low. It also agrees with a managerial goal of finding a scenario where the data can plausibly support choosing a route that has a low operational cost.
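To make the first (sequential) formulation concrete, here is a tiny brute-force sketch of the weighted traveling repairman step, assuming failure probabilities have already been estimated (illustrative only; not the authors' algorithm, which must scale beyond toy instances):

```python
import itertools
import numpy as np

def weighted_trp(dist, fail_prob, start=0):
    """Brute-force weighted Traveling Repairman: choose the visiting order of
    the non-start nodes that minimises sum_i fail_prob[i] * arrival_time[i],
    where arrival time is the cumulative travel distance along the route.

    Exponential in the number of nodes -- illustration only."""
    nodes = [v for v in range(len(dist)) if v != start]
    best_cost, best_route = np.inf, None
    for order in itertools.permutations(nodes):
        t, cur, cost = 0.0, start, 0.0
        for v in order:
            t += dist[cur][v]
            cost += fail_prob[v] * t
            cur = v
        if cost < best_cost:
            best_cost, best_route = cost, (start,) + order
    return best_route, best_cost

# Toy instance with unit distances: high-risk nodes get visited first.
dist = [[0, 1, 1, 1],
        [1, 0, 1, 1],
        [1, 1, 0, 1],
        [1, 1, 1, 0]]
prob = [0.0, 0.9, 0.1, 0.5]
route, cost = weighted_trp(dist, prob)   # visits node 1, then 3, then 2
```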
2009 22nd International Conference on VLSI Design
The design of an N-comparator-based asynchronous Successive Approximation Register Analog-to-Digital Converter (SAR ADC) is described (with N = 6), operating at 20 MS/s and consuming only 5.6 mW, for low-power, high-speed applications such as communication systems. Resetting the comparators in each conversion cycle is avoided (reducing power consumption compared to [1]), and only N latches are used overall (including the comparator latches) for the output code. Furthermore, using only N comparators instead of 2^N − 1 as in [2] leads to large area savings at comparable power consumption. For example, a saving of ∼90% in comparator area is achieved for the 6-bit ADC design compared to the design in [2].
Design of a Two Dimensional PRSI Image Processor
 11TH EUROMICRO CONFERENCE ON DIGITAL SYSTEM DESIGN, ARCHITECTURES, METHODS AND TOOLS
, 2008
A digital processor capable of computing several two-dimensional Position, Rotation and Scale Invariant (PRSI) transforms on 64 × 64 pixel images is presented. The architecture is programmable to achieve the following five …
Machine Learning with Operational Costs
Cited by 2 (2 self)
This work concerns the way that statistical models are used to make decisions. In particular, we aim to merge the way estimation algorithms are designed with how they are used for a subsequent task. Our methodology considers the operational cost of carrying out a policy based on a predictive model. The operational cost becomes a regularization term in the learning algorithm’s objective function, allowing either an optimistic or pessimistic view of possible costs. Limiting the operational cost reduces the hypothesis space for the predictive model, and can thus improve generalization. We show that different types of operational problems can lead to the same type of restriction on the hypothesis space, namely the restriction to an intersection of an ℓq ball with a halfspace. We bound the complexity of such hypothesis spaces by proposing a technique that involves counting integer points in polyhedra.
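A minimal sketch of the operational-cost-as-regularizer idea, with an assumed linear operational cost c·w penalised pessimistically via a hinge term (my illustration; the paper's formulation and hypothesis-space analysis are far more general):

```python
import numpy as np

def fit_with_operational_cost(X, y, c, lam=1.0, steps=2000, lr=0.01):
    """Least squares with an operational-cost regulariser (pessimistic view):
    minimise ||Xw - y||^2 + lam * max(0, c . w), where c . w stands in for a
    linear operational cost of deploying model w. The penalty pushes the
    solution toward the halfspace {w : c . w <= 0}, shrinking the effective
    hypothesis space as the abstract describes."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y)
        if c @ w > 0:                   # subgradient of the hinge term
            grad = grad + lam * c
        w -= lr * grad / len(y)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
c = np.array([1.0, 0.0, 0.0])           # operational cost grows with w[0]
w_unreg = fit_with_operational_cost(X, y, c, lam=0.0)    # ~ w_true
w_reg = fit_with_operational_cost(X, y, c, lam=50.0)     # w[0] shrinks
```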
The Machine Learning and Traveling Repairman Problem
, 2011
Cited by 1 (1 self)
The goal of the Machine Learning and Traveling Repairman Problem (ML&TRP) is to determine a route for a “repair crew,” which repairs nodes on a graph. The repair crew aims to minimize the cost of failures at the nodes, but the failure probabilities are not known and must be estimated. If there is uncertainty in the failure probability estimates, we take this uncertainty into account in an unusual way: from the set of acceptable models, we choose the model that has the lowest cost when applied to the subsequent routing task. In a sense, this procedure agrees with a managerial goal, which is to show that the data can support choosing a low-cost solution.
The Influence of Operational Cost on Estimation
This work concerns the way that statistical models are used to make decisions. In particular, we aim to merge the way estimation algorithms are designed with how they are used for a subsequent task. Our methodology considers the operational cost of carrying out a policy based on a predictive model. The operational cost becomes a regularization term in the learning algorithm’s objective function, allowing either an optimistic or pessimistic view of possible costs. Limiting the operational cost reduces the hypothesis space for the predictive model, and can thus improve generalization. We show that different types of operational problems can lead to the same type of restriction on the hypothesis space, namely the restriction to an intersection of an ℓq ball with a halfspace. We bound the complexity of such hypothesis spaces by proposing a technique that involves counting integer points in polyhedra.
GENERALIZATION BOUNDS FOR LEARNING WITH LINEAR, POLYGONAL, QUADRATIC AND CONIC SIDE KNOWLEDGE
In this paper, we consider a supervised learning setting where side knowledge is provided about the labels of unlabeled examples. The side knowledge has the effect of reducing the hypothesis space, leading to tighter generalization bounds, and thus possibly better generalization. We consider several types of side knowledge: the first leads to linear and polygonal constraints on the hypothesis space, the second to quadratic constraints, and the last to conic constraints. We show how different types of domain knowledge can lead directly to these kinds of side knowledge. We prove bounds on complexity measures of the hypothesis space for quadratic and conic side knowledge, and show that these bounds are tight in a specific sense for the quadratic case.
ROBUST OPTIMIZATION USING MACHINE LEARNING FOR UNCERTAINTY SETS
Our goal is to build robust optimization problems for making decisions based on complex data from the past. In robust optimization (RO) generally, the goal is to create a policy for decision-making that is robust to our uncertainty about the future. In particular, we want our policy to best handle the worst possible situation that could arise out of an uncertainty set of possible situations. Classically, the uncertainty set is simply chosen by the user, or it might be estimated in overly simplistic ways with strong assumptions; in this work, we instead learn the uncertainty set from data collected in the past. The past data are drawn randomly from an (unknown), possibly complicated, high-dimensional distribution. We propose a new uncertainty set design and show how tools from statistical learning theory can be employed to provide probabilistic guarantees on the robustness of the policy.
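As a toy, data-driven stand-in for the learned uncertainty sets described here (a simple per-coordinate quantile box; the paper's set design and its probabilistic guarantees are more sophisticated):

```python
import numpy as np

def box_uncertainty_set(samples, alpha=0.1):
    """Build a per-coordinate box uncertainty set from past data: the box
    spanning the alpha/2 and 1 - alpha/2 empirical quantiles."""
    lo = np.quantile(samples, alpha / 2, axis=0)
    hi = np.quantile(samples, 1 - alpha / 2, axis=0)
    return lo, hi

def robust_worst_case_cost(x, lo, hi):
    """Worst-case linear cost u . x of decision x over the box: each
    coordinate of u independently takes whichever endpoint is worse."""
    return float(np.sum(np.where(x >= 0, hi * x, lo * x)))

# Past data from an unknown distribution; the box is learned, not hand-chosen.
rng = np.random.default_rng(2)
past = rng.normal(loc=[1.0, -1.0], scale=0.1, size=(1000, 2))
lo, hi = box_uncertainty_set(past)
x = np.array([1.0, 1.0])
wc = robust_worst_case_cost(x, lo, hi)   # above the nominal cost of 0.0
```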