Results 1 - 10 of 15,369
Face recognition: features versus templates
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993
"... Over the last 20 years, several different techniques have been proposed for computer recognition of human faces. The purpose of this paper is to compare two simple but general strategies on a common database (frontal images of faces of 47 people: 26 males and 21 females, four images per person). We ..."
Cited by 749 (25 self)
). We have developed and implemented two new algorithms; the first one is based on the computation of a set of geometrical features, such as nose width and length, mouth position, and chin shape, and the second one is based on almost-grey-level template matching. The results obtained on the testing sets
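As a rough sketch of the template-matching strategy the paper compares (not the authors' implementation; the function name and brute-force search below are illustrative):

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide a template over a grayscale image and return a map of
    normalized correlation scores; the peak marks the best match."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            scores[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    return scores
```

The paper's geometrical-feature strategy would instead measure quantities such as nose width and mouth position and compare those feature vectors directly.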
Local features and kernels for classification of texture and object categories: a comprehensive study
- International Journal of Computer Vision, 2007
"... Recently, methods based on local image features have shown promise for texture and object recognition tasks. This paper presents a large-scale evaluation of an approach that represents images as distributions (signatures or histograms) of features extracted from a sparse set of keypoint locations an ..."
Cited by 653 (34 self)
and learns a Support Vector Machine classifier with kernels based on two effective measures for comparing distributions, the Earth Mover’s Distance and the χ² distance. We first evaluate the performance of our approach with different keypoint detectors and descriptors, as well as different kernels
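A minimal sketch of the χ²-kernel construction described here, assuming unit-normalized histograms (the EMD kernel the paper also uses is omitted); the mean-distance choice of the scale A follows common convention, not necessarily the authors' exact code:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    # chi-square distance between two histograms (smaller = more similar)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def chi2_kernel_matrix(H, A=None):
    """K[i, j] = exp(-D_chi2(H[i], H[j]) / A); the scale A is commonly
    set to the mean chi-square distance over the training set."""
    n = len(H)
    D = np.array([[chi2_distance(H[i], H[j]) for j in range(n)]
                  for i in range(n)])
    if A is None:
        A = D.mean()
    return np.exp(-D / A)
```

The resulting matrix can be passed to an SVM that accepts precomputed kernels, e.g. sklearn.svm.SVC(kernel='precomputed').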
Cumulated Gain-based Evaluation of IR Techniques
- ACM Transactions on Information Systems, 2002
"... Modem large retrieval environments tend to overwhelm their users by their large output. Since all documents are not of equal relevance to their users, highly relevant documents should be identified and ranked first for presentation to the users. In order to develop IR techniques to this direction, i ..."
Cited by 694 (3 self)
Modern large retrieval environments tend to overwhelm their users by their large output. Since all documents are not of equal relevance to their users, highly relevant documents should be identified and ranked first for presentation to the users. In order to develop IR techniques to this direction
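A sketch of the (discounted) cumulated gain measure in the spirit of this paper, where gains at ranks deeper than the base b are divided by log_b(rank); many modern toolkits use a log2(rank + 1) variant instead:

```python
import numpy as np

def dcg(gains, b=2):
    """Cumulated discounted gain at every rank; gains is a list of
    graded relevance scores in ranked order."""
    gains = np.asarray(gains, dtype=float)
    ranks = np.arange(1, len(gains) + 1)
    discounts = np.where(ranks < b, 1.0, np.log(ranks) / np.log(b))
    return np.cumsum(gains / discounts)

def ndcg(gains, b=2):
    # normalize by the DCG of the ideal (descending-gain) ordering
    ideal = dcg(sorted(gains, reverse=True), b)
    return dcg(gains, b) / np.where(ideal == 0, 1.0, ideal)

print(ndcg([3, 0, 2, 1]))  # a ranking with misplaced relevant documents
```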
An almost ideal demand system.
- American Economic Review, 1980
"... Ever since Richard Stone (1954) first estimated a system of demand equations derived explicitly from consumer theory, there has been a continuing search for alternative specifications and functional forms. Many models have been proposed, but perhaps the most important in current use, apart from the ..."
Cited by 636 (0 self)
Ever since Richard Stone (1954) first estimated a system of demand equations derived explicitly from consumer theory, there has been a continuing search for alternative specifications and functional forms. Many models have been proposed, but perhaps the most important in current use, apart from
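For reference, the budget-share form of the Almost Ideal Demand System proposed in the paper, where w_i is the expenditure share of good i, p_j are prices, and x is total expenditure:

```latex
% AIDS budget-share equations (Deaton and Muellbauer, 1980)
\begin{align}
  w_i   &= \alpha_i + \sum_j \gamma_{ij} \ln p_j
           + \beta_i \ln\!\left(\frac{x}{P}\right), \\
  \ln P &= \alpha_0 + \sum_k \alpha_k \ln p_k
           + \tfrac{1}{2} \sum_j \sum_k \gamma_{kj} \ln p_k \ln p_j,
\end{align}
```

with adding-up, homogeneity, and symmetry restrictions on the α, β, and γ parameters.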
Motivation through the Design of Work: Test of a Theory - Organizational Behavior and Human Performance, 1976
"... A model is proposed that specifies the conditions under which individuals will become internally motivated to perform effectively on their jobs. The model focuses on the interaction among three classes of variables: (a) the psychological states of employees that must be present for internally motiv ..."
Cited by 622 (2 self)
another. It is, therefore, difficult to test the adequacy of the theory qua theory. Moreover, the approach provides little specific guidance about how (and how not to) proceed in carrying out work redesign activities, other than the general dictum to attend to both the technical and social aspects
Loopy belief propagation for approximate inference: An empirical study - Proceedings of Uncertainty in AI, 1999
"... Abstract Recently, researchers have demonstrated that "loopy belief propagation" -the use of Pearl's polytree algorithm in a Bayesian network with loops -can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performanc ..."
Cited by 676 (15 self)
likelihood weighting. 3.1 The PYRAMID network: All nodes were binary and the conditional probabilities were represented by tables; entries in the conditional probability tables (CPTs) were chosen uniformly in the range (0, 1]. 3.2 The toyQMR network: All nodes were binary and the conditional probabilities
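A compact sketch of the loopy belief propagation loop itself, for a pairwise MRF with tabular potentials stored as numpy arrays; the network constructions above (PYRAMID, toyQMR) are the paper's, but this code is only illustrative:

```python
import numpy as np

def loopy_bp(psi_node, psi_edge, edges, n_iters=50):
    """psi_node: node -> length-k potential; psi_edge: (u, v) -> k x k
    potential indexed [x_u, x_v]; edges: undirected (u, v) pairs."""
    m, neighbors = {}, {}
    for u, v in edges:
        m[(u, v)] = np.ones(len(psi_node[v]))  # message u -> v over x_v
        m[(v, u)] = np.ones(len(psi_node[u]))
        neighbors.setdefault(u, []).append(v)
        neighbors.setdefault(v, []).append(u)
    for _ in range(n_iters):
        new = {}
        for (a, b) in m:
            # product of a's potential and messages into a, except from b
            prod = psi_node[a].astype(float)
            for w in neighbors[a]:
                if w != b:
                    prod = prod * m[(w, a)]
            pot = psi_edge[(a, b)] if (a, b) in psi_edge else psi_edge[(b, a)].T
            msg = pot.T @ prod               # marginalize out a's state
            new[(a, b)] = msg / msg.sum()
        m = new
    beliefs = {}
    for u in neighbors:
        bel = psi_node[u].astype(float)
        for w in neighbors[u]:
            bel = bel * m[(w, u)]
        beliefs[u] = bel / bel.sum()         # approximate marginals
    return beliefs
```

On a graph with loops these beliefs are only approximations of the true marginals, which is exactly the quality the paper measures empirically.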
A Bayesian Framework for the Analysis of Microarray Expression Data: Regularized t-Test and Statistical Inferences of Gene Changes
- Bioinformatics, 2001
"... Motivation: DNA microarrays are now capable of providing genome-wide patterns of gene expression across many different conditions. The first level of analysis of these patterns requires determining whether observed differences in expression are significant or not. Current methods are unsatisfactory ..."
Cited by 491 (6 self)
with neighboring genes. An additional hyperparameter, inversely related to the number of empirical observations, determines the strength of the background variance. Simulations show that these point estimates, combined with a t-test, provide a systematic inference approach that compares favorably with simple t-test
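A sketch of the regularized t-test idea described here, assuming a background variance has already been estimated from genes of similar expression level; the pseudo-count nu0 and the degrees-of-freedom choice below are illustrative parameters, not the paper's exact prescription:

```python
import numpy as np
from scipy import stats

def regularized_ttest(x, y, sigma0_sq_x, sigma0_sq_y, nu0=10):
    """Shrink each group's sample variance toward a background variance
    with nu0 pseudo-observations, then form an ordinary two-sample t."""
    nx, ny = len(x), len(y)
    vx = (nu0 * sigma0_sq_x + (nx - 1) * np.var(x, ddof=1)) / (nu0 + nx - 2)
    vy = (nu0 * sigma0_sq_y + (ny - 1) * np.var(y, ddof=1)) / (nu0 + ny - 2)
    t = (np.mean(x) - np.mean(y)) / np.sqrt(vx / nx + vy / ny)
    df = (nu0 + nx - 2) + (nu0 + ny - 2)  # one plausible df choice
    return t, 2 * stats.t.sf(abs(t), df)
```

With few replicates (small nx, ny) the background term dominates, which is what stabilizes the estimate relative to a plain t-test.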
On the effectiveness of the test-first approach to programming
- IEEE Transactions on Software Engineering, 2005
"... Abstract—Test-Driven Development (TDD) is based on formalizing a piece of functionality as a test, implementing the functionality such that the test passes, and iterating the process. This paper describes a controlled experiment for evaluating an important aspect of TDD: In TDD, programmers write fu ..."
Cited by 80 (2 self)
functional tests before the corresponding implementation code. The experiment was conducted with undergraduate students. While the experiment group applied a test-first strategy, the control group applied a more conventional development technique, writing tests after the implementation. Both groups followed
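To make the test-first discipline under study concrete, one iteration of the cycle might look as follows (slugify is a hypothetical example function, not from the paper):

```python
import unittest

# Step 1: formalize the desired functionality as a test, before any
# implementation exists; running it now would fail with a NameError.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: write just enough code to make the test pass, then iterate.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

if __name__ == "__main__":
    unittest.main()
```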
N-gram-based text categorization - In Proc. of SDAIR-94, 3rd Annual Symposium on Document Analysis and Information Retrieval, 1994
"... Text categorization is a fundamental task in document processing, allowing the automated handling of enormous streams of documents in electronic form. One difficulty in handling some classes of documents is the presence of different kinds of textual errors, such as spelling and grammatical errors in ..."
Cited by 445 (0 self)
in email, and character recognition errors in documents that come through OCR. Text categorization must work reliably on all input, and thus must tolerate some level of these kinds of problems. We describe here an N-gram-based approach to text categorization that is tolerant of textual errors. The system
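A sketch of the ranked n-gram profiles and out-of-place distance this approach uses (the top-300 profile size follows the paper; the padding character is illustrative):

```python
from collections import Counter

def ngram_profile(text, n_max=5, top_k=300):
    """Rank character n-grams (lengths 1..n_max) of padded tokens by
    frequency and keep the top_k as the document/category profile."""
    counts = Counter()
    for token in text.lower().split():
        padded = f"_{token}_"
        for n in range(1, n_max + 1):
            for i in range(len(padded) - n + 1):
                counts[padded[i:i + n]] += 1
    return {g: rank for rank, (g, _) in enumerate(counts.most_common(top_k))}

def out_of_place(doc_profile, cat_profile):
    """Sum of rank displacements; n-grams absent from the category
    profile incur a maximum penalty."""
    max_penalty = len(cat_profile)
    return sum(abs(r - cat_profile[g]) if g in cat_profile else max_penalty
               for g, r in doc_profile.items())
```

Classification assigns a document to the category whose profile minimizes this distance, which tolerates scattered character-level errors because individual bad n-grams barely move the ranks.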
Automatically validating temporal safety properties of interfaces, 2001
"... We present a process for validating temporal safety properties of software that uses a well-defined interface. The process requires only that the user state the property of interest. It then automatically creates abstractions of C code using iterative refinement, based on the given property. The pro ..."
Cited by 433 (21 self)
that the process converges on a set of predicates powerful enough to validate properties in just a few iterations. 1 Introduction: Large-scale software has many components built by many programmers. Integration testing of these components is impossible or ineffective at best. Property checking of interface usage
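The temporal safety properties in question can be phrased as finite automata over interface events; the paper checks them statically against C code via iteratively refined abstractions, but a run-time monitor makes the property format itself concrete (the lock discipline below is a hypothetical example):

```python
# Property: a lock must alternate acquire/release, starting unlocked.
ERROR = "error"
TRANSITIONS = {
    ("unlocked", "acquire"): "locked",
    ("locked", "release"): "unlocked",
}

def check_trace(events):
    state = "unlocked"
    for e in events:
        state = TRANSITIONS.get((state, e), ERROR)
        if state == ERROR:
            return False  # safety property violated
    return True

assert check_trace(["acquire", "release", "acquire", "release"])
assert not check_trace(["acquire", "acquire"])   # double acquire
assert not check_trace(["release"])              # release while unlocked
```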