Results 1–10 of 71
A fine-grained access control system for XML documents
 In ACM Transactions on Information and System Security (TISSEC)
Cited by 151 (5 self)
Web-based applications greatly increase information availability and ease of access, which is optimal for public information. The distribution and sharing via the Web of information that must be accessed in a selective way, such as electronic commerce transactions, require the definition and enforcement of security controls, ensuring that information will be accessible only to authorized entities. Different approaches have been proposed that address the problem of protecting information in a Web system. However, these approaches typically operate at the file-system level, independently of the data that have to be protected from unauthorized access. Part of this problem is due to the limitations of HTML, historically used to design Web documents. The eXtensible Markup Language (XML), a markup language promoted by the World Wide Web Consortium (W3C), is the de facto standard language for the exchange of information on the Internet and represents an important opportunity to provide fine-grained access control. We present an access control model to protect information distributed on the Web that, by exploiting XML's own capabilities, allows the definition and enforcement of access restrictions directly on the structure and content of the documents. We present a language for the specification of access restrictions, which uses standard notations and concepts, together with a description of a system architecture for access control enforcement based on existing technology. The result is a flexible and powerful security system offering a simple integration with current solutions.
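The element-level restriction the abstract describes can be pictured with a minimal sketch: an authorization is a set of allowed element paths, and a filter prunes every subtree the requester has not been granted. The rule format, role, and document below are hypothetical illustrations, not the authorization language the paper defines.

```python
# Minimal sketch of path-based access control over an XML document.
# The rule format and sample document are illustrative assumptions,
# not the paper's actual specification language.
import xml.etree.ElementTree as ET

def prune(elem, path, rules):
    """Remove every child element whose path is not granted to the subject."""
    for child in list(elem):
        child_path = f"{path}/{child.tag}"
        if child_path in rules:
            prune(child, child_path, rules)
        else:
            elem.remove(child)

doc = ET.fromstring(
    "<order><item>book</item><card>4111-1111</card></order>"
)
# Grant: this subject may see ordered items, but not payment details.
allowed = {"order/item"}
prune(doc, "order", allowed)
print(ET.tostring(doc, encoding="unicode"))  # <order><item>book</item></order>
```

A real enforcement engine would resolve rules per subject and propagate grants and denials down the document hierarchy; the pruning step above only shows the final filtering idea.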
A General Model for Authenticated Data Structures
 Algorithmica
, 2001
Cited by 66 (1 self)
Query answers from online databases can easily be corrupted by hackers or malicious database publishers. Thus it is important to provide mechanisms which allow clients to trust the results from online queries. Authentic publication is a novel approach which allows untrusted publishers to securely answer queries from clients on behalf of trusted offline data owners. Publishers validate answers using compact, hard-to-forge verification objects (VOs), which clients can check efficiently. This approach provides greater scalability (by adding more publishers) and better security (online publishers don't need to be trusted).
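The VO mechanism can be sketched with a Merkle hash tree: the trusted owner signs only the root digest, and the untrusted publisher returns each answer together with the sibling hashes on its path to the root. This is a minimal sketch assuming a power-of-two number of records; the names are illustrative and the signature step is elided (the client is assumed to already hold the owner-signed root).

```python
# Sketch of a Merkle-tree verification object (VO): the publisher proves
# each answer against a root digest signed by the trusted owner.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels of the Merkle tree, leaf hashes first."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def make_vo(levels, index):
    """Collect sibling hashes from leaf to root: the verification object."""
    vo = []
    for level in levels[:-1]:
        vo.append((index % 2, level[index ^ 1]))  # (am-I-the-right-child, sibling)
        index //= 2
    return vo

def verify(record, vo, signed_root):
    digest = h(record)
    for is_right, sibling in vo:
        digest = h(sibling + digest) if is_right else h(digest + sibling)
    return digest == signed_root

records = [b"alice:100", b"bob:250", b"carol:75", b"dave:300"]
levels = build_tree(records)
root = levels[-1][0]          # in practice, signed once by the trusted owner
vo = make_vo(levels, 1)       # publisher's proof for records[1]
assert verify(b"bob:250", vo, root)       # authentic answer accepted
assert not verify(b"bob:999", vo, root)   # tampered answer rejected
```

The VO holds one hash per tree level, so its size grows logarithmically in the number of records rather than with the database itself.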
Authentic Data Publication over the Internet
 Journal of Computer Security
, 2003
Cited by 61 (1 self)
Integrity-critical databases, such as financial information used in high-value decisions, are frequently published over the Internet. Publishers of such data must satisfy the integrity, authenticity, and non-repudiation requirements of clients. Providing this protection over public data networks is an expensive proposition. This is, in part, due to the difficulty of building and running secure systems. In practice, large systems cannot be verified to be secure and are frequently penetrated. The negative consequences of a system intrusion at the publisher can be severe. The problem is further complicated by data and server replication to satisfy availability and scalability requirements.
Authenticating Query Results in Edge Computing
 In ICDE
, 2004
Cited by 50 (3 self)
Edge computing pushes application logic and the underlying data to the edge of the network, with the aim of improving availability and scalability. As the edge servers are not necessarily secure, there must be provisions for validating their outputs. This paper proposes a mechanism that creates a verification object (VO) for checking the integrity of each query result produced by an edge server – that values in the result tuples are not tampered with, and that no spurious tuples are introduced. The primary advantages of our proposed mechanism are that the VO is independent of the database size, and that relational operations can still be fulfilled by the edge servers. These advantages reduce transmission load and processing at the clients. We also show how insert and delete transactions can be supported.
Authenticated Data Structures for Graph and Geometric Searching
 In CT-RSA
, 2001
Cited by 49 (20 self)
Following in the spirit of data structure and algorithm correctness checking, authenticated data structures provide cryptographic proofs that their answers are as accurate as the author intended, even if the data structure is being maintained by a remote host. We present techniques for authenticating data structures that represent graphs and collections of geometric objects. We use a model where a data structure maintained by a trusted source is mirrored at distributed directories, with the directories answering queries made by users. When a user queries a directory, it receives a cryptographic proof in addition to the answer, where the proof contains statements signed by the source. The user verifies the proof trusting only the statements signed by the source. We show how to efficiently authenticate data structures for fundamental problems on networks, such as path and connectivity queries, and on geometric objects, such as intersection and containment queries.
Tamper Detection in Audit Logs
 In Proceedings of the International Conference on Very Large Databases
, 2004
Cited by 39 (5 self)
Audit logs are considered good practice for business systems, and are required by federal regulations for secure systems, drug approval data, medical information disclosure, financial records, and electronic voting. Given the central role of audit logs, it is critical that they are correct and inalterable. It is not sufficient to say, "our data is correct, because we store all interactions in a separate audit log." The integrity of the audit log itself must also be guaranteed. This paper proposes mechanisms within a database management system (DBMS), based on cryptographically strong one-way hash functions, that prevent an intruder, including an auditor, an employee, or even an unknown bug within the DBMS itself, from silently corrupting the audit log. We propose that the DBMS store additional information in the database to enable a separate audit log validator to examine the database along with this extra information and state conclusively whether the audit log has been compromised.
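The one-way-hash idea can be sketched as a hash chain: each log entry is absorbed into a running digest, so a validator holding a previously notarized head detects any silent edit of past entries. The entry format and the notarization step below are illustrative assumptions, not the paper's DBMS-internal scheme.

```python
# Sketch of a hash-chained audit log: every entry commits to the whole
# history, so modifying any past entry breaks the chain.
import hashlib

def link(prev_hash: bytes, entry: bytes) -> bytes:
    return hashlib.sha256(prev_hash + entry).digest()

entries = [b"2024-01-01 admin login", b"2024-01-02 update row 17"]
chain = [b"\x00" * 32]                 # agreed-upon genesis value
for e in entries:
    chain.append(link(chain[-1], e))
notarized_head = chain[-1]             # periodically handed to a trusted notary

def validate(entries, notarized_head):
    digest = b"\x00" * 32
    for e in entries:
        digest = link(digest, e)
    return digest == notarized_head

assert validate(entries, notarized_head)
entries[0] = b"2024-01-01 nobody logged in"      # intruder edits the log
assert not validate(entries, notarized_head)     # tampering is detected
```

A separate validator that recomputes the chain, as in the last lines, plays the role of the paper's audit log validator: it states conclusively whether the stored log still matches the notarized digest.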
On the Cost of Authenticated Data Structures
 In Proc. European Symp. on Algorithms, volume 2832 of LNCS
, 2003
Cited by 38 (18 self)
Authenticated data structures provide a model for data authentication, where answers to queries contain extra information that can produce a cryptographic proof about the validity of the answers. In this paper, we study the authentication cost that is associated with this model when authentication is performed through hierarchical cryptographic hashing. We introduce measures that precisely model the computational overhead that is introduced due to authentication.
Efficient data structures for tamper-evident logging
 In Proceedings of the 18th USENIX Security Symposium
, 2009
Cited by 26 (4 self)
Many real-world applications wish to collect tamper-evident logs for forensic purposes. This paper considers the case of an untrusted logger, serving a number of clients who wish to store their events in the log, and kept honest by a number of auditors who will challenge the logger to prove its correct behavior. We propose semantics of tamper-evident logs in terms of this auditing process. The logger must be able to prove that individual logged events are still present, and that the log, as seen now, is consistent with how it was seen in the past. To accomplish this efficiently, we describe a tree-based data structure that can generate such proofs with logarithmic size and space, improving over previous linear constructions. Where a classic hash chain might require an 800 MB trace to prove that a randomly chosen event is in a log with 80 million events, our prototype returns a 3 KB proof with the same semantics. We also present a flexible mechanism for the log server to present authenticated and tamper-evident search results for all events matching a predicate. This can allow large-scale log servers to selectively delete old events, in an agreed-upon fashion, while generating efficient proofs that no inappropriate events were deleted. We describe a prototype implementation and measure its performance on an 80 million event syslog trace at 1,750 events per second using a single CPU core. Performance improves to 10,500 events per second if cryptographic signatures are offloaded, corresponding to 1.1 TB of logging throughput per week.
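The size gap the abstract quotes can be sanity-checked with back-of-envelope arithmetic, assuming 32-byte hashes (the paper's exact per-event constants differ): a hash-chain membership proof must replay every earlier event, while a tree path needs only about log2(n) sibling hashes.

```python
# Rough proof-size comparison for a log of 80 million events,
# assuming 32-byte hashes; constants are illustrative only.
from math import ceil, log2

n = 80_000_000          # events in the log
hash_bytes = 32

chain_proof = n * hash_bytes              # linear: replay the whole history
tree_proof = ceil(log2(n)) * hash_bytes   # logarithmic: one sibling per level

print(f"hash chain: ~{chain_proof / 1e9:.1f} GB of hashes")
print(f"tree path:  {tree_proof} bytes")  # same order as the paper's 3 KB proof
```

The tree path comes to a few dozen hashes regardless of how large the log grows, which is why the paper's prototype can answer membership challenges with kilobyte-sized proofs.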
Computational bounds on hierarchical data processing with applications to information security
 In Proc. Int. Colloquium on Automata, Languages and Programming (ICALP), volume 3580 of LNCS
, 2005
Cited by 26 (15 self)
Motivated by the study of algorithmic problems in the domain of information security, in this paper we study the complexity of a new class of computations over a collection of values associated with a set of n elements. We introduce hierarchical data processing (HDP) problems, which involve the computation of a collection of output values from an input set of n elements, where the entire computation is fully described by a directed acyclic graph (DAG). That is, individual computations are performed and intermediate values are processed according to the hierarchy induced by the DAG. We present an Ω(log n) lower bound on various computational cost measures for HDP problems. Essential in our study is an analogy that we draw between the complexity of any HDP problem of size n and searching by comparison in an ordered set of n elements, which shows an interesting connection between the two problems. In view of the logarithmic lower bounds, we also develop a new randomized DAG scheme for HDP problems that provides close to optimal performance, achieving cost measures whose constant factors in the (logarithmic) leading asymptotic term are close to optimal. Our lower bounds are general, apply to all HDP problems and, along with our new DAG construction, provide an interesting, and useful in the area of algorithm analysis, theoretical framework. We apply our results to two information security problems, data authentication through cryptographic hashing and multicast key distribution using key graphs, and get a unified analysis and treatment for these problems. We show that both problems involve HDP and prove logarithmic lower bounds on their computational and communication costs. In particular, using our new DAG scheme, we present a new efficient authenticated dictionary with improved authentication overhead over previously known schemes. Moreover, through the relation between HDP and searching by comparison, we present a new skip-list version where the expected number of comparisons in a search is 1.25 log₂ n + O(1).
Formalizing human ignorance: Collision-resistant hashing without the keys
 In Proc. Vietcrypt ’06
, 2006
Cited by 22 (0 self)
Abstract. There is a foundational problem involving collision-resistant hash functions: common constructions are keyless, but formal definitions are keyed. The discrepancy stems from the fact that a function H: {0,1}* → {0,1}^n always admits an efficient collision-finding algorithm; it's just that we human beings might be unable to write the program down. We explain a simple way to sidestep this difficulty that avoids having to key our hash functions. The idea is to state theorems in a way that prescribes an explicitly given reduction, normally a black-box one. We illustrate this approach using well-known examples involving digital signatures, pseudorandom functions, and the Merkle-Damgård construction. Key words: collision-free hash function, collision-intractable hash function, collision-resistant hash function, cryptographic hash function, provable security.