
Results 1 - 10 of 159,507

Towards Better NLP System Evaluation

by Karen Sparck Jones - In Proceedings of the Human Language Technology Workshop, 102-107, 1994
"... This paper considers key elements of evaluation methodology, indicating the many points involved and advocating an unpacking approach in specifying an evaluation remit and design. Recognising the importance of both environment variables and system parameters leads to a grid organisation for tests. T ..."
Abstract - Cited by 11 (0 self) - Add to MetaCart
This paper considers key elements of evaluation methodology, indicating the many points involved and advocating an unpacking approach in specifying an evaluation remit and design. Recognising the importance of both environment variables and system parameters leads to a grid organisation for tests

Neal-Montgomery NLP System Evaluation Methodology

by unknown authors
"... On what basis are the input processing capabilities of Natural Language software judged? That is, what are the capabilities to be described and measured, and what are the standards against which we measure them? Rome Laboratory is currently supporting an effort to develop a concise terminology for d ..."
Abstract - Add to MetaCart
for describing the linguistic processing capabilities of Natural Language Systems, and a uniform methodology for appropriately applying the terminology. This methodology is meant to produce quantitative, objective profiles of NL system capabilities without requiring system adaptation to a new test domain or text

Evaluating collaborative filtering recommender systems

by Jonathan L. Herlocker, Joseph A. Konstan, Loren G. Terveen, John T. Riedl - ACM Transactions on Information Systems, 2004
"... ..."
Abstract - Cited by 981 (19 self) - Add to MetaCart
Abstract not found

A Set Of Principles For Conducting And Evaluating Interpretive Field Studies In Information Systems

by Heinz K. Klein, Michael D. Myers , 1999
"... This article discusses the conduct and evaluation of interpretive research in information systems. While the conventions for evaluating information systems case studies conducted according to the natural science model of social science are now widely accepted, this is not the case for interpretive f ..."
Abstract - Cited by 914 (6 self) - Add to MetaCart
This article discusses the conduct and evaluation of interpretive research in information systems. While the conventions for evaluating information systems case studies conducted according to the natural science model of social science are now widely accepted, this is not the case for interpretive

MediaBench: A Tool for Evaluating and Synthesizing Multimedia and Communications Systems

by Chunho Lee, Miodrag Potkonjak, William H. Mangione-smith
"... Over the last decade, significant advances have been made in compilation technology for capitalizing on instruction-level parallelism (ILP). The vast majority of ILP compilation research has been conducted in the context of generalpurpose computing, and more specifically the SPEC benchmark suite. At ..."
Abstract - Cited by 966 (22 self) - Add to MetaCart
. At the same time, a number of microprocessor architectures have emerged which have VLIW and SIMD structures that are well matched to the needs of the ILP compilers. Most of these processors are targeted at embedded applications such as multimedia and communications, rather than general-purpose systems

Query evaluation techniques for large databases

by Goetz Graefe - ACM Computing Surveys, 1993
"... Database management systems will continue to manage large data volumes. Thus, efficient algorithms for accessing and manipulating large sets and sequences will be required to provide acceptable performance. The advent of object-oriented and extensible database systems will not solve this problem. On ..."
Abstract - Cited by 767 (11 self) - Add to MetaCart
is essential for the designer of database management software. This survey provides a foundation for the design and implementation of query execution facilities in new database management systems. It describes a wide array of practical query evaluation techniques for both relational and post

The FERET evaluation methodology for face recognition algorithms

by P. Jonathon Phillips, Hyeonjoon Moon, Syed A. Rizvi, Patrick J. Rauss
"... AbstractÐTwo of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial i ..."
Abstract - Cited by 1116 (26 self) - Add to MetaCart
AbstractÐTwo of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial

The Aurora Experimental Framework for the Performance Evaluation of Speech Recognition Systems under Noisy Conditions

by David Pearce, Hans-günter Hirsch, Ericsson Eurolab Deutschland Gmbh - in ISCA ITRW ASR2000 , 2000
"... This paper describes a database designed to evaluate the performance of speech recognition algorithms in noisy conditions. The database may either be used to measure frontend feature extraction algorithms, using a defined HMM recognition back-end, or complete recognition systems. The source speech f ..."
Abstract - Cited by 534 (6 self) - Add to MetaCart
This paper describes a database designed to evaluate the performance of speech recognition algorithms in noisy conditions. The database may either be used to measure frontend feature extraction algorithms, using a defined HMM recognition back-end, or complete recognition systems. The source speech

Cumulated Gain-based Evaluation of IR Techniques

by Kalervo Järvelin, Jaana Kekäläinen - ACM Transactions on Information Systems , 2002
"... Modem large retrieval environments tend to overwhelm their users by their large output. Since all documents are not of equal relevance to their users, highly relevant documents should be identified and ranked first for presentation to the users. In order to develop IR techniques to this direction, i ..."
Abstract - Cited by 694 (3 self) - Add to MetaCart
, it is necessary to develop evaluation approaches and methods that credit IR methods for their ability to retrieve highly relevant documents. This can be done by extending traditional evaluation methods, i.e., recall and precision based on binary relevance assessments, to graded relevance assessments
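The graded-relevance idea this abstract describes can be sketched as a cumulated gain (CG) and discounted cumulated gain (DCG) computation in the style of the paper; the example gain values and discount base below are illustrative assumptions, not figures from the paper itself.

```python
import math

def cumulated_gain(gains):
    """CG at each rank: the running sum of graded relevance gains."""
    cg, total = [], 0
    for g in gains:
        total += g
        cg.append(total)
    return cg

def discounted_cumulated_gain(gains, base=2):
    """DCG: from rank `base` onward, each gain is divided by log_base(rank),
    so a highly relevant document found late earns less credit than one
    found early."""
    dcg, total = [], 0.0
    for rank, g in enumerate(gains, start=1):
        total += g if rank < base else g / math.log(rank, base)
        dcg.append(total)
    return dcg

# Hypothetical graded relevance judgments (0-3) for a ranked result list.
gains = [3, 2, 3, 0, 1]
print(cumulated_gain(gains))            # [3, 5, 8, 8, 9]
print(discounted_cumulated_gain(gains))
```

Under binary relevance both rankings of a relevant/irrelevant set score alike; with graded gains, the discount rewards systems that place the grade-3 documents at the top.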

The Cricket Location-Support System

by Nissanka B. Priyantha, Anit Chakraborty, Hari Balakrishnan , 2000
"... This paper presents the design, implementation, and evaluation of Cricket, a location-support system for in-building, mobile, locationdependent applications. It allows applications running on mobile and static nodes to learn their physical location by using listeners that hear and analyze informatio ..."
Abstract - Cited by 1058 (11 self) - Add to MetaCart
This paper presents the design, implementation, and evaluation of Cricket, a location-support system for in-building, mobile, locationdependent applications. It allows applications running on mobile and static nodes to learn their physical location by using listeners that hear and analyze

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University