HotSec '11

USENIX Security '11

WORKSHOP PROGRAM ABSTRACTS

Building Secure Robot Applications
This position paper recognizes that general-purpose robots will become increasingly common and argues that we need to prepare ourselves to deal with security for robot applications in an intelligent way. We discuss ways that robots are similar to traditional computing devices and ways that robots are different, and we describe the challenges that arise. We propose a framework for providing security for robot applications, and we discuss three potential robot applications: a "fetch coffee" app, a "pretend to be a Labrador" app, and an "is my advisor in his office and available" app. We discuss some of the security needs of these applications and propose a few potential ways to address those security needs.

Security Fusion: A New Security Architecture for Resource-Constrained Environments
There is a huge demand for wireless sensors and RFID tags for remote surveillance and tracking. However, for such technologies to gain wide acceptance in industry, strong security must be integrated into them. Traditional cryptographic schemes are infeasible due to hardware, computation, and power constraints. To that end, we introduce a new security paradigm, namely security fusion. In this approach, strong security properties are synthesized from weaker point-to-point properties, thereby minimizing the resource requirements at each node without compromising system-level security. In this paper, we describe the concept of security fusion and give a motivating example using finite state machines.
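To make the flavor of this concrete, the following is a minimal sketch (our own toy construction, not the scheme in the paper) in which each node emits a deliberately weak, truncated MAC and a fusion center accepts a report only if every node's tag verifies, so blind forgery must defeat all of the weak checks at once. The key sizes, truncation, and topology are assumptions for illustration only.

    # Illustrative sketch only: a toy "fusion" of weak per-node checks into a
    # stronger system-level check. The node scheme, key sizes, and topology
    # are assumptions for illustration, not the construction in the paper.
    import hmac, hashlib, secrets

    TRUNC_BYTES = 1  # each node's MAC is truncated to 8 bits -- individually weak

    def node_tag(key: bytes, msg: bytes) -> bytes:
        """A resource-constrained node emits a heavily truncated MAC."""
        return hmac.new(key, msg, hashlib.sha256).digest()[:TRUNC_BYTES]

    def fusion_verify(keys, msg: bytes, tags) -> bool:
        """The fusion center accepts only if every node's weak tag verifies.
        Blindly forging all n tags succeeds with probability ~ 2^(-8n),
        even though each individual tag gives only 2^(-8)."""
        return all(hmac.compare_digest(node_tag(k, msg), t)
                   for k, t in zip(keys, tags))

    keys = [secrets.token_bytes(16) for _ in range(8)]   # 8 sensor nodes
    msg = b"door-3 opened at 02:14"
    tags = [node_tag(k, msg) for k in keys]
    print(fusion_verify(keys, msg, tags))                       # True
    print(fusion_verify(keys, b"door-3 closed at 02:14", tags)) # False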

DISTROY: Detecting Integrated Circuit Trojans with Compressive Measurements
Detecting Trojans in an integrated circuit (IC) is an important but hard problem. A Trojan is malicious hardware—it can be extremely small and remain dormant until triggered by some unknown circuit state. To allow wake-up, a Trojan could draw a minimal amount of power, for example, to run a clock or a state machine, or to monitor for a triggering event. We introduce DISTROY (Discover Trojan), a new approach that can efficiently and reliably detect the extremely small background power leakage that a Trojan creates, and thereby detect the Trojan itself. We formulate our method based on compressive sensing, a recent advance in signal processing that can recover a signal using a number of measurements approximately proportional to its sparsity rather than its size. We argue that circuit states in which the Trojan's background power consumption stands out are rare, and thus sparse, so that we can apply compressive sensing. We describe how this is done in DISTROY so as to afford sufficient measurement statistics to detect the presence of Trojans. Finally, we present our initial simulation results that validate DISTROY and discuss the impact of our work on the field of hardware security.
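As a rough illustration of the underlying primitive (not the authors' measurement setup), the sketch below recovers a sparse vector—standing in for the rare circuit states where Trojan leakage stands out—from far fewer random measurements than its length, using orthogonal matching pursuit; the dimensions, sparsity, and measurement matrix are arbitrary choices.

    # Sketch of the compressive-sensing idea only, not DISTROY itself:
    # a length-n signal with a few non-zero entries is recovered from
    # m << n random measurements y = A x.
    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily pick k columns of A that
        best explain the residual, then least-squares fit on that support."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        return x_hat

    rng = np.random.default_rng(0)
    n, m, k = 256, 40, 3                      # signal length, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(5, 1, k)  # sparse "leakage"
    A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
    y = A @ x
    x_hat = omp(A, y, k)
    print(np.allclose(x, x_hat, atol=1e-6))   # recovered from far fewer samples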

Privacy-Preserving Applications on Smartphones
Smartphones are becoming some of our most trusted computing devices. People use them to store highly sensitive information including email, passwords, financial accounts, and medical records. These properties make smartphones an essential platform for privacy-preserving applications. To date, this area remains largely unexplored mainly because privacy-preserving computation protocols were thought to be too heavyweight for practical applications, even for standard desktops. We propose using smartphones to perform secure multi-party computation. The limitations of smartphones provide a number of challenges for building such applications. In this paper, we introduce the issues that make smartphones a unique platform for secure computation, identify some interesting potential applications, and describe our initial experiences creating privacy-preserving applications on Android devices.
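For readers unfamiliar with secure computation, here is one minimal flavor of it—additive secret sharing for a joint sum. It is only an illustration and says nothing about the protocols, parameters, or trust assumptions the paper actually uses.

    # A minimal flavor of secure multi-party computation, shown only to make
    # the idea concrete; it is not the protocol used in the paper. Three
    # parties learn the sum of their private inputs without revealing the
    # inputs themselves (assuming they do not collude).
    import secrets

    P = 2**61 - 1  # a public prime modulus

    def share(value, n_parties):
        """Split value into n additive shares that sum to value mod P."""
        shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % P)
        return shares

    private_inputs = [12_000, 45_500, 9_900]          # e.g., salaries
    all_shares = [share(v, 3) for v in private_inputs]

    # Party i receives the i-th share of every input and publishes only the
    # sum of what it holds; no single share reveals anything about an input.
    partial_sums = [sum(col) % P for col in zip(*all_shares)]
    print(sum(partial_sums) % P == sum(private_inputs))   # True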

Public vs. Publicized: Content Use Trends and Privacy Expectations
From a semantic standpoint, there is a clear differentiation between the meanings of public and publicized content. The former includes any content that is accessible by anyone, while the latter emphasizes visibility: publicized content is actively made available. As a user's online experience becomes more personalized and data is increasingly pushed rather than pulled, the line between public and publicized content is inevitably blurred. In this position paper, we present quantitative evidence that despite this trend, in some settings users do not anticipate the use of public content beyond the narrow context in which it was disclosed; they do not anticipate that the content may be publicized. While providing a "publicized" option for data is an important counterpart to the ability to limit access to data (e.g., through access control lists), such an option must be accompanied both by greater user awareness of its ramifications and by transparency into data usage.

Herbert West—Deanonymizer
The vast majority of scientific journal, conference, and grant selection processes withhold the names of the reviewers from the original submitters, taking a better-safe-than-sorry approach for maintaining collegiality within the small-world communities of academia. While the contents of a review may not color the long-term relationship between the submitter and the reviewer, it is best to not require us all to be saints. This paper raises the question of whether the assumption of reviewer anonymity still holds in the face of readily-available, high-quality machine learning toolkits. Our threat model focuses on how a member of a community might, over time, amass a large number of unblinded reviews by serving on a number of conference and grant selection committees. We show that with access to even a relatively small corpus of such reviews, simple classification techniques from existing toolkits successfully identify reviewers with reasonably high accuracy. We discuss the implications of the findings and describe some potential technical and policy-based countermeasures.
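The kind of off-the-shelf pipeline at issue might look like the hypothetical sketch below: stylometric n-gram features plus a stock linear classifier trained on a small corpus of unblinded reviews. The corpus, labels, and feature choices are placeholders, not the paper's experimental setup.

    # Hypothetical sketch of the kind of readily available toolkit pipeline
    # the paper warns about; the reviews and reviewer labels are placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reviews = [
        "The evaluation is thorough, though Section 5 overstates generality.",
        "I remain unconvinced; the threat model is, frankly, implausible.",
        "Solid systems work. The authors should discuss deployment costs.",
        "Frankly, the related work ignores a decade of prior art.",
    ]
    reviewers = ["alice", "bob", "carol", "bob"]   # known authors of past reviews

    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # stylometric n-grams
        LogisticRegression(max_iter=1000),
    )
    model.fit(reviews, reviewers)

    blinded_review = "Frankly, the experiments ignore obvious baselines."
    print(model.predict([blinded_review]))   # best guess at the anonymous reviewer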

Towards Practical Avoidance of Information Leakage in Enterprise Networks
Preventing exfiltration of sensitive data is a central challenge facing many modern networking environments. In this paper, we propose a network-wide method of confining and controlling the flow of sensitive data within a network. Our approach is based on black-box differencing — we run two logical copies of the network, one with private data scrubbed, and compare the outputs of the two to determine if and when private data is being leaked. To ensure the outputs of the two copies match, we build upon recent advances that enable computing systems to execute deterministically at scale and with low overheads. We believe our approach could be a useful building block for general-purpose schemes that leverage black-box differencing to mitigate leakage of private data.
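A toy rendering of the idea, with assumed names and a stand-in service rather than the authors' system: run the same deterministic computation on real and scrubbed inputs and flag any output that differs, since a difference can only stem from the private fields.

    # Toy sketch of black-box differencing; names and the stand-in "service"
    # are assumptions for illustration, not the paper's system.
    def scrub(record):
        return {k: ("<scrubbed>" if k in {"ssn", "salary"} else v)
                for k, v in record.items()}

    def service(records):
        """Stand-in for a deterministic network service under observation."""
        return ["%s,%s" % (r["name"], r["dept"]) for r in records]

    records = [{"name": "Ada", "dept": "eng", "ssn": "123-45-6789", "salary": 120}]

    real_out = service(records)
    scrubbed_out = service([scrub(r) for r in records])
    leaking = [o for o, s in zip(real_out, scrubbed_out) if o != s]
    print(leaking or "no private data observed in outputs")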

Towards Client-side HTML Security Policies
With the proliferation of content-rich web applications, content injection has become an increasing problem. Cross-site scripting is the most prominent example of this. Many systems have been designed to mitigate content injection and cross-site scripting. Notable examples are BEEP, BLUEPRINT, and Content Security Policy, which can be grouped as HTML security policies. We evaluate these systems, including the first empirical evaluation of Content Security Policy on real applications. We propose that HTML security policies should be the defense of choice in web applications going forward. We argue, however, that current systems are insufficient for the needs of web applications, and that research is needed to determine the set of properties an HTML security policy system should have. We propose several ideas for research going forward in this area.
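For concreteness, a server can attach such a policy with a single response header. The sketch below uses the standardized Content-Security-Policy header and an example policy, rather than the vendor-prefixed header names and syntax browsers shipped around 2011.

    # Minimal server attaching an example Content-Security-Policy; the policy
    # string is illustrative, not a recommendation from the paper.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    POLICY = "default-src 'self'; script-src 'self'; object-src 'none'"

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body><h1>hello</h1></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            # Inline and third-party scripts are refused by the browser,
            # blunting injected <script> payloads.
            self.send_header("Content-Security-Policy", POLICY)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()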

TouchLogger: Inferring Keystrokes on Touch Screen from Smartphone Motion
Attacks that use side channels, such as sound and electromagnetic emanation, to infer keystrokes on physical keyboards are ineffective on smartphones without physical keyboards. We describe a new side channel, motion, on touch screen smartphones with only soft keyboards. Since typing on different locations on the screen causes different vibrations, motion data can be used to infer the keys being typed. To demonstrate this attack, we developed TouchLogger, an Android application that extracts features from device orientation data to infer keystrokes. TouchLogger correctly inferred more than 70% of the keys typed on a number-only soft keyboard on a smartphone. We hope to raise awareness of motion as a significant side channel that may leak confidential data.
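A hypothetical reconstruction of the attack's shape (not the authors' feature set or data) might window the orientation stream around each tap, summarize it with a few statistics, and feed those to an ordinary classifier:

    # Hypothetical sketch: per-keystroke windows of orientation readings
    # (pitch, roll) reduced to simple statistics for a stock classifier.
    # The sensor data here is synthetic.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def features(window):
        """window: (samples, 2) array of [pitch, roll] during one tap."""
        w = np.asarray(window)
        return np.concatenate([w.mean(axis=0), w.std(axis=0),
                               w.max(axis=0) - w.min(axis=0)])

    rng = np.random.default_rng(1)
    def fake_window(digit):          # stands in for real sensor logs
        base = np.array([digit * 0.03, -digit * 0.02])
        return base + rng.normal(0, 0.005, size=(30, 2))

    X = np.array([features(fake_window(d)) for d in range(10) for _ in range(20)])
    y = np.array([d for d in range(10) for _ in range(20)])

    clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    probe = features(fake_window(7))
    print(clf.predict([probe]))      # inferred key for an unseen tap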

On Dynamic Malware Payloads Aimed at Programmable Logic Controllers
With the discovery of the Stuxnet attack, increasing attention is being paid to the potential for malware to target Programmable Logic Controllers (PLCs). Despite much speculation about threats from PLC malware, the popular opinion is that automated attacks against PLCs are not practical without having a priori knowledge of the target physical process. In this paper, we explore the problem of designing PLC malware that can generate a dynamic payload based on observations of the process taken from inside the control system. This significantly lowers the bar for attacks against PLCs. We evaluate how PLC malware may infer the structure of the physical plant and how it can use this information to construct a dynamic payload to achieve an adversary's end goal. We find that at the very least, a dynamic payload can be constructed that causes unsafe behavior for an arbitrary process definition.

Effective Digital Forensics Research Is Investigator-Centric
Many technical mechanisms across computer security for attribution, identification, and classification are neither sufficient nor necessary for forensically valid digital investigations, yet they are often claimed to be useful or necessary. Similarly, when forensic research is evaluated using the viewpoints held by computer security venues, the challenges, constraints, and usefulness of the work are often misjudged. In this paper, we point out many key aspects of digital forensics with the goal of ensuring that research seeking to advance the discipline will have the highest possible adoption rate by practitioners. We enumerate general legal and practical constraints placed on forensic investigators that set the field apart. We point out the assumptions, often limited or incorrect, made about forensics in past work, and discuss how these assumptions limit the impact of contributions.
