Results 21 – 30 of 77
Towards tiny trusted third parties
, 2005
Cited by 5 (4 self)
Many security protocols hypothesize the existence of a trusted third party (TTP) to ease handling of computation and data too sensitive for the other parties involved. Subsequent discussion usually dismisses these protocols as hypothetical or impractical, under the assumption that trusted third parties cannot exist. However, the last decade has seen the emergence of hardware-based devices that, to high assurance, can carry out computation unmolested; emerging research promises more. In theory, such devices can perform the role of a trusted third party in real-world problems. In practice, we have found problems. The devices aspire to be general-purpose processors but are too small to accommodate real-world problem sizes. The small size forces programmers to hand-tune each algorithm anew, if possible, to fit inside the small space without losing security. This tuning heavily uses operations that general-purpose processors do not perform well. Furthermore, perhaps by trying to incorporate too much functionality, current devices are also too expensive to deploy widely. Our current research attempts to overcome these barriers, by focusing on the effective use of tiny TTPs (T3Ps). To eliminate the programming obstacle, we used our experience building hardware TTP apps to design and prototype an efficient way to execute arbitrary programs on T3Ps while preserving the critical trust properties. To eliminate the performance and cost obstacles, we are currently examining the potential hardware design for a T3P optimized for these operations. In previous papers, we reported our work on the programming obstacle. In this paper, we examine the potential hardware designs. We estimate that such a T3P could outperform existing devices by several orders of magnitude, while also having a gate-count of only 30K-60K, one to three orders of magnitude smaller than existing devices.
Expressing trust in distributed systems: the mismatch between tools and reality
- in Forty-Second Annual Allerton Conference on Privacy, Security and Trust
, 2004
Cited by 5 (4 self)
Distributed systems typically support processes that involve humans separated by space and by organizational boundaries. Because of its ability to enable secure communications between parties that do not share keys a priori, public key cryptography is a natural building block for the elements of these computing systems to establish trust with each other. However, if the trust structure we build into the computing systems does not match the trust structure in the human systems, then this trust infrastructure has not achieved its goal. In this paper, we assess the inability of the standard PKI-based tools to capture many trust situations that really arise in current distributed systems, based on our lab’s experience trying to make these tools fit. We offer some observations for future work that may improve the situation.
A practical property-based bootstrap architecture
- In Proceedings of the ACM Workshop on Scalable Trusted Computing (STC)
, 2009
Cited by 5 (1 self)
Binary attestation, as proposed by the Trusted Computing Group (TCG), is a pragmatic approach for software integrity protection and verification. However, it also has various shortcomings that cause problems for practical deployment, such as scalability, manageability and privacy: On the one hand, data bound to binary values remain inaccessible after a software update, and the verifier of an attestation result has to manage a huge number of binary versions. On the other hand, the binary values reveal information on platform configuration that may be exploited maliciously. In this paper we focus on property-based bootstrap architectures with an enhanced boot loader. Our proposal improves the previous work in a way that allows a practical and efficient integration into existing IT infrastructures. We propose a solution to the version rollback problem that, in contrast to the existing approaches, is secure even if the TPM owner of the attested platform is untrusted, without requiring interaction with a trusted third party. Finally, we show how our architecture can be applied to secure boot mechanisms of Mobile Trusted Modules (MTM) to realize a “Property-Based Secure Boot”. This is especially important for human users, since with secure boot, users can rely on the fact that a loaded system is also in a trustworthy state.
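The core idea of property-based attestation described in this abstract — verifying abstract properties rather than raw binary hashes, plus resistance to version rollback — can be sketched as follows. This is an illustrative toy, not the paper's implementation; the property table, hash values, and function names are all hypothetical.

```python
# Illustrative sketch (not the paper's mechanism): a verifier maps binary
# measurement hashes to abstract (property, version) pairs, so attestation
# survives software updates, and a minimum-version check blocks rollback.
# All names, hashes, and table entries are hypothetical.

PROPERTY_TABLE = {
    # measurement hash -> (property, version)
    "a1b2c3": ("secure-kernel", 2),
    "d4e5f6": ("secure-kernel", 3),  # updated binary, same property
}

def verify(measurements, required_property, min_version):
    """Accept a platform if any measured component provides the required
    property at a version no older than min_version."""
    for digest in measurements:
        entry = PROPERTY_TABLE.get(digest)
        if entry is None:
            continue  # unknown binary: contributes nothing
        prop, version = entry
        if prop == required_property and version >= min_version:
            return True
    return False

# The updated binary still satisfies the verifier after an update:
assert verify(["d4e5f6"], "secure-kernel", min_version=3)
# The outdated binary is rejected, preventing a version rollback:
assert not verify(["a1b2c3"], "secure-kernel", min_version=3)
```

Note that the verifier manages one property entry per component rather than one entry per binary release, which is the manageability gain the abstract claims over plain binary attestation.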
Blind processing: Securing data against system administrators
- In IFIP/IEEE International Workshop on Management of Smart Grids
, 2010
Cited by 5 (3 self)
Multi-owner systems such as the power grid need information from all parties to operate efficiently. In general, however, information sharing is limited by market and other constraints. In addition, the emerging problem of demand-side management in distribution systems, part of “smarter grid” efforts, requires secure communication and execution between utilities and customers to ensure privacy. In this paper, we propose blind processing, a novel communication and execution approach for entities that compete with each other but need to cooperate for the overall good of the system. Our goal is to allow information exchange between system components with protection mechanisms against everyone, including system administrators. Shielding information will prevent access to the sensitive data while still providing a complete picture of the whole system in computations. Such a security mechanism can be provided by employing the functionality of Trusted Computing, a security technology that utilizes hardware and software modules to improve the trustworthiness of a system.
Prototyping an Armored Data Vault - Rights Management on Big Brother's Computer
- Springer-Verlag Lecture Notes on Computer Science
, 2002
Cited by 5 (1 self)
This paper reports our experimental work in using commercial secure coprocessors to control access to private data. In our initial project, we look at archived network traffic. We seek to protect the privacy rights of a large population of data producers by restricting computation on a central authority's machine. The coprocessor approach provides more flexibility and assurance in specifying and enforcing access policy than purely cryptographic schemes. This work extends to other application domains, such as distributing and sharing academic research data.
Leveraging IPsec for mandatory per-packet access control
- In Proceedings of the Second IEEE Communications Society/CreateNet International Conference on Security and Privacy in Communication Networks
, 2006
Cited by 4 (1 self)
Mandatory access control (MAC) enforcement is becoming available for commercial environments. For example, Linux 2.6 includes the Linux Security Modules (LSM) framework that enables the enforcement of MAC policies (e.g., Type Enforcement or Multi-Level Security) for individual systems. While this is a start, we envision that MAC enforcement should span multiple machines. The goal is to be able to control interaction between applications on different machines based on MAC policy. In this paper, we describe a recent extension of the LSM framework that enables labeled network communication via IPsec that is now available in mainline Linux as of version 2.6.16. This functionality enables machines to control communication with processes on other machines based on the security label assigned to an IPsec security association. We outline a security architecture based on labeled IPsec to enable distributed MAC authorization. In particular, we examine the construction of a xinetd service that uses labeled IPsec to limit client access on Linux 2.6.16 systems. We also discuss the application of labeled IPsec to distributed storage and virtual machine access control.
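The authorization decision described here — permitting communication only when the security label of the peer's IPsec security association is allowed against the local endpoint's label — can be sketched as a simple policy lookup. This is a conceptual toy, not the kernel mechanism: the label strings and policy table are hypothetical, and on a real system the check is made in-kernel by the LSM (e.g., SELinux) when traffic arrives on a labeled security association.

```python
# Conceptual sketch of the distributed MAC decision: a server consults its
# MAC policy using the security label bound to the client's IPsec security
# association. Labels and the policy table are hypothetical examples.

MAC_POLICY = {
    # (client SA label, server socket label) -> allowed?
    ("system_u:object_r:client_t", "system_u:object_r:service_t"): True,
    ("system_u:object_r:untrusted_t", "system_u:object_r:service_t"): False,
}

def authorize(sa_label, socket_label):
    """Allow traffic only if policy explicitly permits this label pair;
    anything not listed is denied (default-deny)."""
    return MAC_POLICY.get((sa_label, socket_label), False)

assert authorize("system_u:object_r:client_t", "system_u:object_r:service_t")
assert not authorize("system_u:object_r:untrusted_t",
                     "system_u:object_r:service_t")
```

The default-deny lookup mirrors why labeling the security association matters: without a label carried by the IPsec channel, the server would have no peer attribute on which to base a mandatory decision.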
Evaluation of Secure Peer-to-Peer Overlay Routing for Survivable SCADA Systems
- In WSC ’04: Proceedings of the 36th conference on Winter simulation
, 2004
Cited by 4 (1 self)
Supervisory Control And Data Acquisition (SCADA) systems gather and analyze data for real-time control. SCADA systems are used extensively in applications such as electrical power distribution, telecommunications, and energy refining. SCADA systems are obvious targets for cyber-attacks that would seek to disrupt the physical complexities governed by a SCADA system. This paper uses a discrete-event simulation to begin to investigate the characteristics of one potential means of hardening SCADA systems against a cyber-attack. When it appears that real-time message delivery constraints are not being met (due, for example, to a denial of service attack), a peer-to-peer overlay network is used to route message floods in an effort to ensure delivery. The SCADA system and peer-to-peer nodes all use strong hardware-based authentication techniques to prevent injection of false data or commands, and to harden the routing overlay. Our simulations help to quantify the anticipated tradeoffs of message survivability and latency minimization.
Verifying system integrity by proxy
- In TRUST
, 2012
Cited by 4 (2 self)
Users are increasingly turning to online services, but are concerned for the safety of their personal data and critical business tasks. While secure communication protocols like TLS authenticate and protect connections to these services, they cannot guarantee the correctness of the endpoint system. Users would like assurance that all the remote data they receive is from systems that satisfy the users’ integrity requirements. Hardware-based integrity measurement (IM) protocols have long promised such guarantees, but have failed to deliver them in practice. Their reliance on non-performant devices to generate timely attestations and ad hoc measurement frameworks limits the efficiency and completeness of remote integrity verification. In this paper, we introduce the integrity verification proxy (IVP), a service that enforces integrity requirements over connections to remote systems. The IVP monitors changes to the unmodified system and immediately terminates connections to clients whose specific integrity requirements are not satisfied, while eliminating the attestation reporting bottleneck imposed by current IM protocols. We implemented a proof-of-concept IVP that detects several classes of integrity violations on a Linux KVM system, while imposing less than 1.5% overhead on two application benchmarks and no more than 8% on I/O-bound micro-benchmarks.
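The proxy behavior the abstract describes — admit a client only while its integrity criteria hold on the monitored system, and cut the connection the moment a criterion is violated — can be sketched as a small state machine. This is a minimal illustration of the idea, not the IVP implementation; the class, method, and criterion names are hypothetical.

```python
# Minimal sketch of the integrity-verification-proxy idea: track which
# integrity criteria currently hold on the monitored system, and terminate
# any client connection whose criteria stop holding. Names are hypothetical.

class IntegrityVerificationProxy:
    def __init__(self):
        self.satisfied = set()    # integrity criteria currently met
        self.connections = {}     # connection id -> required criteria

    def connect(self, conn_id, required):
        """Admit a client only if all of its criteria hold right now."""
        if set(required) <= self.satisfied:
            self.connections[conn_id] = set(required)
            return True
        return False

    def on_measurement(self, criterion, holds):
        """Monitor callback: update state; on a violation, immediately
        drop every connection that depends on the failed criterion."""
        if holds:
            self.satisfied.add(criterion)
        else:
            self.satisfied.discard(criterion)
            doomed = [c for c, req in self.connections.items()
                      if criterion in req]
            for conn_id in doomed:
                del self.connections[conn_id]

ivp = IntegrityVerificationProxy()
ivp.on_measurement("approved-kernel", True)
assert ivp.connect("client-1", ["approved-kernel"])
ivp.on_measurement("approved-kernel", False)   # integrity violation
assert "client-1" not in ivp.connections       # connection terminated
```

Pushing the check into a monitor callback, rather than polling attestations per request, is what removes the per-connection attestation bottleneck the abstract attributes to conventional IM protocols.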
CorrectDB: SQL Engine with Practical Query Authentication
Cited by 3 (0 self)
Clients of outsourced databases need Query Authentication (QA) guaranteeing the integrity (correctness and completeness) and authenticity of the query results returned by potentially compromised providers. Existing results provide QA assurances for a limited class of queries by deploying several software cryptographic constructs. Here, however, we show that it is significantly cheaper and more practical to achieve QA by deploying server-hosted, tamper-proof co-processors, despite their higher acquisition costs; further, this provides the ability to handle arbitrary queries. To reach this insight, we extensively survey existing QA work and identify interdependencies and efficiency relationships. We then introduce CorrectDB, a new DBMS with full QA assurances, leveraging server-hosted, tamper-proof, trusted hardware in close proximity to the outsourced data.
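To make the "software cryptographic constructs" this abstract contrasts against concrete, here is a toy version of one classic QA technique for range queries: a hash chain over sorted tuples, where each returned tuple must extend its predecessor's digest, so omitting or forging a tuple breaks the chain. This is a simplified sketch of the general technique, not CorrectDB; a full scheme would also sign the chain and return the upper boundary tuple, and all names here are hypothetical.

```python
# Sketch of a software-cryptographic QA construct (hash chain over sorted
# tuples) of the kind surveyed in the paper; it checks correctness and
# lower-side completeness of a range-query answer. Names are hypothetical.

import hashlib

def h(prev, value):
    """Digest linking a tuple to its predecessor's digest."""
    return hashlib.sha256(prev + str(value).encode()).hexdigest().encode()

def chain(rows):
    """Owner-side: link each sorted tuple to the digest before it."""
    digest, links = b"", {}
    for v in sorted(rows):
        digest = h(digest, v)
        links[v] = digest
    return links

def verify_range(result, lo_bound, links):
    """Client-side: starting from the boundary tuple just below the queried
    range, each returned tuple must reproduce its published chain link."""
    digest = links[lo_bound]
    for v in result:
        digest = h(digest, v)
        if links.get(v) != digest:
            return False  # a tuple was forged or one was omitted
    return True

links = chain([10, 20, 30, 40, 50])
assert verify_range([30, 40], lo_bound=20, links=links)  # complete answer
assert not verify_range([40], lo_bound=20, links=links)  # 30 was omitted
```

The per-query proofs and per-query-type constructions such schemes require are precisely the limitation the paper argues a tamper-proof co-processor near the data avoids, since the trusted hardware can simply execute arbitrary queries itself.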
Trusted S/MIME Gateways
, 2003
Cited by 3 (0 self)
The utility of Web-based email clients is clear: a user is able to access their email account from any computer anywhere at any time. However, this option is unavailable to users whose security depends on their key pair being stored either on their local computer or in their browser. Our implementation seeks to solve two problems with secure email services. The first is that of mobility: users must have access to their key pairs in order to perform the necessary cryptographic operations. The second is one of transition: initially, users would not want to give up their regular email clients. Keeping these two restrictions in mind, we decided on the implementation of a secure gateway system that works in conjunction with an existing mail server and client. Our result is PKIGate, an S/MIME gateway that uses the DigitalNet (formerly Getronics) S/MIME Freeware Library and IBM’s 4758 secure coprocessor. This thesis presents motivations for the project, a comparison with similar existing products, software and hardware selection, the design, use case scenarios, a discussion of implementation issues, and suggestions for future work.

Acknowledgements: This thesis would not have been possible without the kind assistance of many people. First and foremost, thank you to Grace, Esther, Kristen, and Reid for reading and editing the many incarnations of this thesis and participating in my use case studies. Thanks to Neha for the daily sanity checks and emergency food and beverage supplies. A big thank you to Alex and John for all the help with coding, Linux, and the many other problems I encountered. Also, thank you Mike and Ling for your help with Sherlock. And last, but certainly not least, thank you to Professor Sean Smith for giving me an interesting problem to work on and all the support I needed to make things happen.