Results 1 - 10 of 4,082,300

The Nature of Statistical Learning Theory

by Vladimir N. Vapnik, 1999
"... Statistical learning theory was introduced in the late 1960’s. Until the 1990’s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990’s new types of learning algorithms (called support vector machines) based on the deve ..."
Abstract - Cited by 13236 (32 self)

Maximum likelihood from incomplete data via the EM algorithm

by A. P. Dempster, N. M. Laird, D. B. Rubin - JOURNAL OF THE ROYAL STATISTICAL SOCIETY, SERIES B, 1977
"... A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing value situat ..."
Abstract - Cited by 11972 (17 self)
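
As a concrete illustration of the incomplete-data setting this paper addresses, here is a minimal EM sketch for a two-component Gaussian mixture (my own toy example, not code from the paper): the E-step computes posterior responsibilities for the unobserved component labels, the M-step re-estimates the parameters, and the log-likelihood never decreases from one iteration to the next.

    # Minimal EM sketch for a two-component 1-D Gaussian mixture (illustrative only).
    import numpy as np

    def em_gmm2(x, n_iter=50):
        mu = np.array([x.min(), x.max()], dtype=float)   # crude initialisation
        var = np.array([x.var(), x.var()]) + 1e-6
        w = np.array([0.5, 0.5])                          # mixing weights
        for _ in range(n_iter):
            # E-step: posterior responsibility of each component for each point
            dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
            resp = w * dens
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means and variances from the responsibilities
            nk = resp.sum(axis=0)
            w = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        return w, mu, var

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
    print(em_gmm2(x))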

The Google File System

by Sanjay Ghemawat, Howard Gobioff, Shun-Tak Leung - ACM SIGOPS OPERATING SYSTEMS REVIEW, 2003
"... We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. While s ..."
Abstract - Cited by 1501 (3 self)

User Acceptance of Information Technology: Toward a Unified View

by Viswanath Venkatesh, Michael G. Morris, Gordon B. Davis, Fred D. Davis, 2003
"... Information technology (IT) acceptance research has yielded many competing models, each with different sets of acceptance determinants. In this paper, we (1) review user acceptance literature and discuss eight prominent models, (2) empirically compare the eight models and their extensions, (3) formu ..."
Abstract - Cited by 1807 (10 self)

Inverse Acoustic and Electromagnetic Scattering Theory, Second Edition

by David Colton, 1998
"... Abstract. This paper is a survey of the inverse scattering problem for time-harmonic acoustic and electromagnetic waves at fixed frequency. We begin by a discussion of “weak scattering ” and Newton-type methods for solving the inverse scattering problem for acoustic waves, including a brief discussi ..."
Abstract - Cited by 1061 (45 self)
discussion of Tikhonov’s method for the numerical solution of ill-posed problems. We then proceed to prove a uniqueness theorem for the inverse obstacle problems for acoustic waves and the linear sampling method for reconstructing the shape of a scattering obstacle from far field data. Included in our
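
The snippet above mentions Tikhonov's method for ill-posed problems; a minimal numerical sketch (my own illustration, not from the survey) is the regularized least-squares solution x = (AᵀA + αI)⁻¹Aᵀb, which damps the components that noise in the data would otherwise amplify.

    # Tikhonov-regularized least squares for an ill-conditioned system (illustrative sketch).
    import numpy as np

    def tikhonov(A, b, alpha):
        # Solve (A^T A + alpha I) x = A^T b; alpha > 0 stabilises the inversion.
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

    rng = np.random.default_rng(1)
    A = np.vander(np.linspace(0, 1, 20), 8, increasing=True)   # badly conditioned Vandermonde matrix
    x_true = rng.normal(size=8)
    b = A @ x_true + 1e-3 * rng.normal(size=20)                # noisy right-hand side
    print("cond(A) =", np.linalg.cond(A))
    print(tikhonov(A, b, alpha=1e-6))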

A Set Of Principles For Conducting And Evaluating Interpretive Field Studies In Information Systems

by Heinz K. Klein, Michael D. Myers, 1999
"... This article discusses the conduct and evaluation of interpretive research in information systems. While the conventions for evaluating information systems case studies conducted according to the natural science model of social science are now widely accepted, this is not the case for interpretive f ..."
Abstract - Cited by 914 (6 self)
field studies. A set of principles for the conduct and evaluation of interpretive field research in information systems is proposed, along with their philosophical rationale. The usefulness of the principles is illustrated by evaluating three published interpretive field studies drawn from

Bigtable: A distributed storage system for structured data

by Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, Robert E. Gruber - IN PROCEEDINGS OF THE 7TH CONFERENCE ON USENIX SYMPOSIUM ON OPERATING SYSTEMS DESIGN AND IMPLEMENTATION - VOLUME 7, 2006
"... Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications ..."
Abstract - Cited by 1028 (4 self)
for all of these Google products. In this paper we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.
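
The data model the abstract refers to is, in essence, a sparse, sorted, multi-dimensional map from (row key, column, timestamp) to an uninterpreted value; the toy class below mimics that shape in memory (a hypothetical illustration, not Google's API).

    # Toy in-memory sketch of a Bigtable-style map:
    # (row key, "family:qualifier" column, timestamp) -> value. Hypothetical illustration only.
    import time
    from collections import defaultdict

    class ToyBigtable:
        def __init__(self):
            # row -> column -> list of (timestamp, value), newest version first
            self.rows = defaultdict(lambda: defaultdict(list))

        def put(self, row, column, value, ts=None):
            cells = self.rows[row][column]
            cells.append((ts if ts is not None else time.time(), value))
            cells.sort(key=lambda c: -c[0])

        def get(self, row, column, n_versions=1):
            return self.rows[row][column][:n_versions]

        def scan(self, start_row, end_row):
            # rows are kept conceptually in lexicographic order, so range scans are cheap
            for r in sorted(self.rows):
                if start_row <= r < end_row:
                    yield r, {c: cells[:1] for c, cells in self.rows[r].items()}

    t = ToyBigtable()
    t.put("com.cnn.www", "contents:html", "<html>...</html>")
    t.put("com.cnn.www", "anchor:cnnsi.com", "CNN")
    print(t.get("com.cnn.www", "contents:html"))
    print(list(t.scan("com.a", "com.d")))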

Inducing Features of Random Fields

by Stephen Della Pietra, Vincent Della Pietra, John Lafferty - IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 1997
"... We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the ..."
Abstract - Cited by 670 (10 self)
the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques
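
To make the weight-fitting step concrete, here is a toy maximum-entropy model over a six-element sample space fitted with generalized iterative scaling (a simpler relative of the improved iterative scaling the paper uses; the features and empirical distribution are invented for illustration). At the fixed point the model's feature expectations match the empirical ones.

    # Generalized iterative scaling for a toy exponential (maximum-entropy) model.
    # Sample space, features and empirical distribution are invented for illustration.
    import numpy as np

    X = np.arange(6)                                                   # sample space {0,...,5}
    F = np.array([[float(x % 2 == 0), float(x >= 3)] for x in X])      # two binary features
    C = F.sum(axis=1).max()
    F = np.hstack([F, C - F.sum(axis=1, keepdims=True)])               # slack feature so every row sums to C

    p_emp = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.25])             # "empirical" distribution
    emp_exp = p_emp @ F                                                # empirical feature expectations

    lam = np.zeros(F.shape[1])
    for _ in range(200):
        p_model = np.exp(F @ lam)
        p_model /= p_model.sum()
        lam += np.log(emp_exp / (p_model @ F)) / C                     # GIS update

    p_model = np.exp(F @ lam); p_model /= p_model.sum()
    print(np.round(p_model @ F - emp_exp, 6))                          # ~0: expectations are matched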

Support Vector Machine Classification and Validation of Cancer Tissue Samples Using Microarray Expression Data

by Terrence S. Furey, Nello Cristianini, Nigel Duffy, David W. Bednarski, Michèl Schummer, David Haussler, 2000
"... Motivation: DNA microarray experiments generating thousands of gene expression measurements, are being used to gather information from tissue and cell samples regarding gene expression differences that will be useful in diagnosing disease. We have developed a new method to analyse this kind of data ..."
Abstract - Cited by 569 (1 self)
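
As a sketch of the kind of analysis described above, the snippet below trains a linear support vector machine with leave-one-out cross-validation on synthetic high-dimensional data (scikit-learn stands in for the authors' implementation, and random numbers stand in for real expression measurements).

    # Linear SVM with leave-one-out cross-validation on synthetic "expression" data
    # (illustrative stand-in for a microarray matrix with few samples and many genes).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_samples, n_genes = 40, 2000
    y = np.repeat([0, 1], n_samples // 2)            # tissue labels, e.g. tumour vs. normal
    X = rng.normal(size=(n_samples, n_genes))
    X[y == 1, :25] += 1.0                            # a small block of informative "genes"

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    print("leave-one-out accuracy:", scores.mean())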

Video Google: A text retrieval approach to object matching in videos

by Josef Sivic, Andrew Zisserman - In ICCV, 2003
"... We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, ill ..."
Abstract - Cited by 1636 (42 self)
, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre
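
The text-retrieval analogy can be sketched end to end: quantize local region descriptors into "visual words", weight them with TF-IDF, and rank frames by cosine similarity to a query. In the toy version below, k-means stands in for the paper's vector quantization and random vectors stand in for real viewpoint-invariant region descriptors.

    # Bag-of-visual-words retrieval sketch (illustrative only; random vectors replace real descriptors).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfTransformer
    from sklearn.metrics.pairwise import cosine_similarity

    rng = np.random.default_rng(0)
    frames = [rng.normal(size=(rng.integers(80, 120), 128)) for _ in range(20)]  # descriptors per frame
    vocab = KMeans(n_clusters=50, n_init=10, random_state=0).fit(np.vstack(frames))

    def bag_of_words(descriptors):
        words = vocab.predict(descriptors)            # assign each descriptor to its nearest visual word
        return np.bincount(words, minlength=50)

    counts = np.array([bag_of_words(f) for f in frames])
    tfidf = TfidfTransformer().fit_transform(counts)  # down-weight visual words common to many frames

    query = tfidf[0]                                  # frame 0 stands in for the user-outlined object
    ranking = cosine_similarity(query, tfidf).ravel().argsort()[::-1]
    print("frames ranked by similarity to the query:", ranking[:5])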