Results 1–10 of 149
The Research Thesis Was Done Under The Supervision of Dr. Yuval Ishai in the
"... I wish to express my deep gratitude to my advisor, Yuval Ishai, for his wise guidance, constant encouragement and many inspiring discussions. I would also like to thank Eyal Kushilevitz and Ronen Shaltiel for useful comments and suggestions regarding this thesis. The generous financial help of the T ..."
Abstract
I wish to express my deep gratitude to my advisor, Yuval Ishai, for his wise guidance, constant encouragement and many inspiring discussions. I would also like to thank Eyal Kushilevitz and Ronen Shaltiel for useful comments and suggestions regarding this thesis. The generous financial help of the T ...
Derandomized constructions of k-wise (almost) independent permutations
In Proceedings of the 9th Workshop on Randomization and Computation (RANDOM), 2005
"... Abstract Constructions of kwise almost independent permutations have been receiving a growingamount of attention in recent years. However, unlike the case of kwise independent functions,the size of previously constructed families of such permutations is far from optimal. This paper gives a new met ..."
Abstract

Cited by 24 (4 self)
Constructions of k-wise almost independent permutations have been receiving a growing amount of attention in recent years. However, unlike the case of k-wise independent functions, the size of previously constructed families of such permutations is far from optimal. This paper gives a new method for reducing the size of families given by previous constructions. Our method relies on pseudorandom generators for space-bounded computations. In fact, all we need is a generator that produces "pseudorandom walks" on undirected graphs with a consistent labelling. One such generator is implied by Reingold's log-space algorithm for undirected connectivity [35, 36]. We obtain families of k-wise almost independent permutations with an optimal description length, up to a constant factor. More precisely, if the distance from uniform for any k-tuple should be at most δ, then the size of the description of a permutation in the family is O(kn + log 1/δ).

1 Introduction

In explicit constructions of pseudorandom objects, we are interested in simulating a large random object using a succinct one and would like to capture some essential properties of the former. A natural way to phrase such a requirement is via limited access. Suppose the object that we are interested in simulating is a random function f: {0,1}^n → {0,1}^n and we want to come up with a small family of functions G that simulates it. The k-wise independence requirement in this case is that a function g chosen at random from G be completely indistinguishable from a function f chosen at random from the set of all functions, for any process that receives the value of either
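As a toy illustration of the k-wise independence requirement in the abstract above (not the paper's construction, and at a scale where the full symmetric group can be enumerated), the sketch below checks that for a fixed pair of distinct inputs, the pair of images under a uniformly random permutation of a 4-element domain is exactly uniform over ordered pairs of distinct outputs:

```python
from itertools import permutations

# Toy illustration of k-wise independence (k = 2, n = 4), not the
# paper's construction: for a fixed pair of distinct inputs, the pair
# of images under a permutation drawn from the family should be
# (almost) uniform over ordered pairs of distinct outputs. Here the
# "family" is all of S_4, which is exactly k-wise independent.

def image_distribution(family, inputs):
    """Count how often each tuple of images occurs over the family."""
    counts = {}
    for pi in family:
        img = tuple(pi[x] for x in inputs)
        counts[img] = counts.get(img, 0) + 1
    return counts

family = list(permutations(range(4)))        # all 24 permutations of {0,1,2,3}
counts = image_distribution(family, (0, 1))  # images of the fixed pair (0, 1)

# Each of the 4 * 3 = 12 ordered pairs of distinct outputs appears
# exactly 24 / 12 = 2 times, i.e., the image pair is exactly uniform.
assert len(counts) == 12 and all(c == 2 for c in counts.values())
```

The paper's point is that a derandomized family achieving δ-closeness to this behavior can be described in only O(kn + log 1/δ) bits, far fewer than a truly random permutation.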
Quantitative Feedback Theory Toolbox User’s Guide, Terasoft
, 1993
"... The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from Terasoft, Inc. ..."
Abstract

Cited by 16 (0 self)
The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from Terasoft, Inc.
A Comparative Study of Information Extraction Strategies
In Proc. of CICLing-02, 2002
"... Abstract. The availability of online text documents exposes readers to a vast amount of potentially valuable knowledge buried therein. The sheer scale of material has created the pressing need for automated methods of discovering relevant information without having to read it all. Hence the growing ..."
Abstract

Cited by 10 (2 self)
The availability of online text documents exposes readers to a vast amount of potentially valuable knowledge buried therein. The sheer scale of material has created a pressing need for automated methods of discovering relevant information without having to read it all; hence the growing interest in recent years in Text Mining. A common approach to Text Mining is Information Extraction (IE): extracting specific types (or templates) of information from a document collection. Although many works on IE have been published, researchers have not paid much attention to evaluating the contribution of syntactic and semantic analysis using Natural Language Processing (NLP) techniques to the quality of IE results. In this work we try to quantify the contribution of NLP techniques by comparing three strategies for IE: naïve co-occurrence, ordered co-occurrence, and the structure-driven method, a rule-based strategy that relies on syntactic analysis followed by the extraction of suitable semantic templates. We use the three strategies for the extraction of two templates from financial news stories. We show that the structure-driven strategy provides significantly better precision than the two other strategies (80–90% for the structure-driven method compared with only about 60% for the co-occurrence and ordered co-occurrence strategies). These results indicate that syntactic and semantic analysis is necessary if one wishes to obtain high accuracy.
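To make the weakest baseline concrete, here is a toy sketch of the naïve co-occurrence strategy (the paper's actual entity recognizers and templates are not shown; the organization list and money regex below are invented for the example): if an organization and a money amount appear in the same sentence, the pair is emitted as a candidate relation instance, ignoring word order and syntax entirely.

```python
import re

# Invented, minimal stand-ins for real named-entity recognition:
# a fixed list of organizations and a simple money-amount regex.
MONEY = re.compile(r"\$[\d,.]+(?:\s*(?:million|billion))?")

def naive_cooccurrence(text, orgs):
    """Pair every known organization with every money amount
    that occurs in the same sentence."""
    pairs = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        found_orgs = [o for o in orgs if o in sentence]
        for amount in MONEY.findall(sentence):
            pairs.extend((o, amount) for o in found_orgs)
    return pairs

text = "Acme Corp reported revenue of $12 million. Globex lost $3 million."
pairs = naive_cooccurrence(text, ["Acme Corp", "Globex"])
# Each organization is paired only with amounts from its own sentence.
```

The structure-driven strategy replaces this sentence-level pairing with syntactic analysis plus semantic templates, which is where the reported precision gap (80–90% vs. about 60%) comes from.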
Scaling up: Solving POMDPs through value-based clustering
In Proceedings of AAAI, 2007
"... Partially Observable Markov Decision Processes (POMDPs) provide an appropriately rich model for agents operating under partial knowledge of the environment. Since finding an optimal POMDP policy is intractable, approximation techniques have been a main focus of research, among them pointbased algor ..."
Abstract

Cited by 11 (3 self)
Partially Observable Markov Decision Processes (POMDPs) provide an appropriately rich model for agents operating under partial knowledge of the environment. Since finding an optimal POMDP policy is intractable, approximation techniques have been a main focus of research, among them point-based algorithms, which scale relatively well up to thousands of states. An important decision in a point-based algorithm is the order of backup operations over belief states. Prioritization techniques for ordering the sequence of backup operations reduce the number of needed backups considerably, but involve significant overhead. This paper suggests a new way to order backups, based on a soft clustering of the belief space. Our novel soft clustering method relies on the solution of the underlying MDP. Empirical evaluation verifies that our method rapidly computes a good order of backups, showing orders-of-magnitude improvement in runtime over a number of benchmarks.
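The backup operation the abstract refers to can be sketched as follows. This is a generic PBVI-style point backup at a single belief point; the paper's actual contribution, the clustering-based *ordering* of such backups, is not implemented here, and the input shapes (T, O, R) are assumptions of this sketch:

```python
import numpy as np

# Generic point-based backup at one belief point (PBVI-style sketch).
# Assumed, illustrative inputs: T[a] is the |S| x |S| transition
# matrix for action a, O[a] the |S'| x |Z| observation matrix,
# R[a] the reward vector over states, gamma the discount factor.

def point_backup(b, Gamma, T, O, R, gamma):
    """Return the best new alpha-vector for belief point b."""
    best_val, best_alpha = -np.inf, None
    for a in range(len(T)):
        alpha_a = R[a].astype(float).copy()
        for z in range(O[a].shape[1]):
            # Fold each existing vector back through T and O for
            # observation z, then keep the one best at belief b.
            cand = [T[a] @ (O[a][:, z] * g) for g in Gamma]
            vals = [b @ c for c in cand]
            alpha_a = alpha_a + gamma * cand[int(np.argmax(vals))]
        if b @ alpha_a > best_val:
            best_val, best_alpha = b @ alpha_a, alpha_a
    return best_alpha

# Sanity check: one action, identity transitions, one observation,
# and a zero initial vector, so the backup simply returns R.
alpha = point_backup(np.array([0.5, 0.5]), [np.zeros(2)],
                     [np.eye(2)], [np.ones((2, 1))],
                     [np.array([1.0, 0.0])], 0.9)
```

Because each backup touches every action, observation, and existing vector, the order in which belief points are backed up dominates runtime, which is what the paper's MDP-based soft clustering addresses.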
Preferences over sets
In AAAI, 2006
"... Typically, work on preference elicitation and reasoning about preferences has focused on the problem of specifying, modeling, and optimizing with preference over outcomes corresponding to single objects of interest. In a number of applications, however, the “outcomes ” of interest are really sets of ..."
Abstract

Cited by 11 (3 self)
Typically, work on preference elicitation and reasoning about preferences has focused on the problem of specifying, modeling, and optimizing with preferences over outcomes corresponding to single objects of interest. In a number of applications, however, the “outcomes” of interest are really sets of such atomic outcomes. For instance, when trying to form coalitions or committees, we need to select an optimal combination of individuals. In this paper we describe some initial work on specifying preferences over sets of objects, and selecting an optimal subset from a given set of objects. In particular, we show how TCP-nets can be used to handle this problem, and how an existing algorithm for preference-based constrained optimization can be adapted to the problem of optimal subset selection.
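The committee-formation example can be made concrete with a brute-force sketch of the optimal-subset-selection problem (TCP-nets themselves are not implemented here; the scoring function, per-item values plus a pairwise synergy bonus, is a made-up stand-in for a real preference model over sets):

```python
from itertools import combinations

def best_subset(items, value, synergy, k):
    """Pick the size-k subset maximizing item values plus
    pairwise synergy bonuses (exhaustive search)."""
    def score(subset):
        s = sum(value[i] for i in subset)
        s += sum(synergy.get(frozenset(p), 0) for p in combinations(subset, 2))
        return s
    return max(combinations(items, k), key=score)

# Invented example data: individual values and one synergy pair.
items = ["alice", "bob", "carol", "dan"]
value = {"alice": 3, "bob": 2, "carol": 2, "dan": 1}
synergy = {frozenset({"carol", "dan"}): 4}  # they work well together
committee = best_subset(items, value, synergy, 2)
# Despite alice's high individual value, the synergy makes
# {carol, dan} the best committee (score 2 + 1 + 4 = 7).
```

Exhaustive enumeration is only feasible for tiny sets, which is why the paper adapts a preference-based constrained-optimization algorithm instead.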
Multiscale edge detection and fiber enhancement using differences of oriented means
"... We present an algorithm for edge detection suitable for both natural as well as noisy images. Our method is based on efficient multiscale utilization of elongated filters measuring the difference of oriented means of various lengths and orientations, along with a theoretical estimation of the effect ..."
Abstract

Cited by 10 (2 self)
We present an algorithm for edge detection suitable for both natural and noisy images. Our method is based on efficient multiscale utilization of elongated filters measuring the difference of oriented means of various lengths and orientations, along with a theoretical estimation of the effect of noise on the response of such filters. We use a scale-adaptive threshold along with a recursive decision process to reveal the significant edges of all lengths and orientations and to localize them accurately even in low-contrast and very noisy images. We further use this algorithm for fiber detection and enhancement by utilizing a stochastic completion-like process from both sides of a fiber. Our algorithm relies on an efficient multiscale algorithm for computing all “significantly different” oriented means in an image in O(N log ρ) operations, where N is the number of pixels and ρ is the length of the longest structure of interest. Experimental results on both natural and noisy images are presented.
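The basic measurement behind the filter bank can be illustrated at a single scale (the paper's algorithm is multiscale and far more efficient; the sampling scheme below is a simplification invented for this sketch): average the image along a short oriented segment on each side of a candidate edge and take the absolute difference of the two means.

```python
import numpy as np

def oriented_means_diff(img, y, x, theta, L=5):
    """Toy single-scale difference-of-oriented-means response at
    pixel (y, x): mean over a length-L segment at angle theta,
    offset one pixel to each perpendicular side, then |difference|."""
    dy, dx = np.sin(theta), np.cos(theta)  # direction along the segment
    py, px = -dx, dy                       # unit perpendicular offset
    def side_mean(sign):
        vals = []
        for t in np.linspace(-L / 2, L / 2, L):
            yy = int(round(y + t * dy + sign * py))
            xx = int(round(x + t * dx + sign * px))
            if 0 <= yy < img.shape[0] and 0 <= xx < img.shape[1]:
                vals.append(img[yy, xx])
        return float(np.mean(vals))
    return abs(side_mean(+1) - side_mean(-1))

# A vertical step edge responds strongly to a vertical filter
# (theta = pi/2) and not at all to a horizontal one (theta = 0).
img = np.zeros((11, 11))
img[:, 6:] = 1.0
strong = oriented_means_diff(img, 5, 5, np.pi / 2)
weak = oriented_means_diff(img, 5, 5, 0.0)
```

The paper's contribution is computing all significantly different such means over lengths and orientations in O(N log ρ) rather than evaluating each filter independently as done here.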
A Domain Independent Environment for Creating Information Extraction Modules
In Proc. of the Int. Conf. on Information and Knowledge Management (CIKM-01), 2001
"... TextMining is a growing area of interest within the field of Data Mining and Knowledge Discovery. Given a collection of text documents, most approaches to Text Mining perform knowledgediscovery operations either on external tags associated with each document, or on the set of all words within each ..."
Abstract

Cited by 7 (2 self)
Text Mining is a growing area of interest within the field of Data Mining and Knowledge Discovery. Given a collection of text documents, most approaches to Text Mining perform knowledge-discovery operations either on external tags associated with each document, or on the set of all words within each document. Both approaches suffer from limitations. This paper focuses on an intermediate approach, one that we call text mining via information extraction, in which knowledge discovery takes place on focused, relevant terms, phrases and facts, as extracted from the documents.
On Decision-Theoretic Foundations for Defaults
Artificial Intelligence, 1995
"... In recent years, considerable effort has gone into understanding default reasoning. Most of this effort concentrated on the question of entailment, i.e., what conclusions are warranted by a knowledgebase of defaults. Surprisingly, few works formally examine the general role of defaults. We argue ..."
Abstract

Cited by 10 (1 self)
In recent years, considerable effort has gone into understanding default reasoning. Most of this effort concentrated on the question of entailment, i.e., what conclusions are warranted by a knowledge base of defaults. Surprisingly, few works formally examine the general role of defaults. We argue that an examination of this role is necessary in order to understand defaults, and suggest a concrete role for defaults: defaults simplify our decision-making process, allowing us to make fast, approximately optimal decisions by ignoring certain possible states. In order to formalize this approach, we examine decision making in the framework of decision theory. We use probability and utility to measure the impact of possible states on the decision-making process. More precisely, we examine when a consequence relation, which is the set of default inferences made by an inference system, can be compatible with such a decision-theoretic setup. We characterize general properties that such consequence relations must satisfy and contrast them with previous analyses of default consequence relations in the literature. In particular, we show that such consequence relations must satisfy the properties of cumulative reasoning. Finally, we compare our approach with Poole's decision-theoretic defaults, and show how both can be combined to form an attractive framework for reasoning about decisions.