CCS 2013

Monitor Integrity Protection with Space Efficiency and Separate Compilation

Low-level inlined reference monitors weave monitor code into a program for security. To ensure that monitor code cannot be bypassed by branching instructions, some form of control-flow integrity must be guaranteed. Past approaches to protecting monitor code either have high space overhead or do not support separate compilation. We present Monitor Integrity Protection (MIP), a form of coarse-grained control-flow integrity. The key idea of MIP is to arrange instructions in variable-sized chunks and dynamically restrict indirect branches to target only chunk beginnings. We show that this simple idea is effective in protecting monitor code integrity, enjoys low space and execution-time overhead, supports separate compilation, and is largely compatible with an existing compiler toolchain. We also show that MIP enables a separate verifier that completely disassembles a binary and verifies its security. MIP is designed to support inlined reference monitors. As a case study, we have implemented MIP-based Software-based Fault Isolation (SFI) on both x86-32 and x86-64. The evaluation shows that MIP-based SFI has competitive performance with other SFI implementations, while enjoying low space overhead.
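The core invariant (indirect branches may land only on chunk beginnings) can be illustrated with a toy check. The chunk addresses and helper name below are invented for illustration; MIP itself enforces the restriction dynamically on real binaries.

```python
# Hypothetical chunk layout: a set of variable-sized chunk start addresses.
CHUNK_STARTS = {0x1000, 0x1014, 0x1030, 0x1049}

def check_indirect_target(target: int) -> bool:
    """Permit an indirect branch only if it lands on a chunk beginning."""
    return target in CHUNK_STARTS

assert check_indirect_target(0x1014)       # chunk start: allowed
assert not check_indirect_target(0x1016)   # mid-chunk (e.g., a branch that
                                           # would skip an inlined monitor
                                           # check): blocked
```

Because woven-in monitor code sits inside chunks, a branch that would jump past it necessarily targets a mid-chunk address and is rejected.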

Ben Niu (Lehigh University), Gang Tan (Lehigh University)

A Security Framework for the Analysis and Design of Software Attestation

Software attestation has become a popular and challenging research topic at many established security conferences with an expected strong impact in practice. It aims at verifying the software integrity of (typically) resource-constrained embedded devices. However, for practical reasons, software attestation cannot rely on stored cryptographic secrets or dedicated trusted hardware. Instead, it exploits side-channel information, such as the time that the underlying device needs for a specific computation. As traditional cryptographic solutions and arguments are not applicable, novel approaches for the design and analysis are necessary. This is certainly one of the main reasons why the security goals, properties and underlying assumptions of existing software attestation schemes have been only vaguely discussed so far, limiting the confidence in their security claims. Thus, putting software attestation on a solid ground and having a founded approach for designing secure software attestation schemes is still an important open problem. We provide the first steps towards closing this gap. Our first contribution is a security framework that formally captures security goals, attacker models and various system and design parameters. Moreover, we present a generic software attestation scheme that covers most existing schemes in the literature. Finally, we analyze its security within our framework, yielding sufficient conditions for provably secure software attestation schemes. We expect that such a consolidating work allows for a meaningful security analysis of existing schemes, supports the design of secure software attestation schemes and will inspire new research in this area.

Frederik Armknecht (Universität Mannheim, Germany), Ahmad-Reza Sadeghi (Technische Universität Darmstadt/CASED), Steffen Schulz (Intel Corporation), Christian Wachsmann (Intel Collaborative Research Institute for Secure Computing at TU Darmstadt)

Users Get Routed: Traffic Correlation on Tor by Realistic Adversaries

We present the first analysis of the popular Tor anonymity network that indicates the security of typical users against reasonably realistic adversaries in the Tor network or in the underlying Internet. Our results show that Tor users are far more susceptible to compromise than indicated by prior work. Specific contributions of the paper include (1) a model of various typical kinds of users, (2) an adversary model that includes Tor network relays, autonomous systems (ASes), Internet exchange points (IXPs), and groups of IXPs drawn from empirical study, (3) metrics that indicate how secure users are over a period of time, (4) the most accurate topological model to date of ASes and IXPs as they relate to Tor usage and network configuration, (5) a novel realistic Tor path simulator (TorPS), and (6) analyses of security making use of all the above. To show that our approach is useful to explore alternatives and not just Tor as currently deployed, we also analyze a published alternative path selection algorithm, Congestion-Aware Tor. We create an empirical model of Tor congestion, identify novel attack vectors, and show that it too is more vulnerable than previously indicated.

Aaron Johnson (U.S. Naval Research Laboratory), Chris Wacek (Georgetown University), Rob Jansen (U.S. Naval Research Laboratory), Micah Sherr (Georgetown University), Paul Syverson (U.S. Naval Research Laboratory)

Certified Computer-Aided Cryptography: Efficient Provably Secure Machine Code from High-Level Implementations

We present a computer-aided framework for proving concrete security bounds for cryptographic machine code implementations. The front-end of the framework is an interactive verification tool that extends the EasyCrypt framework to reason about relational properties of C-like programs extended with idealised probabilistic operations in the style of code-based security proofs. The framework also incorporates an extension of the CompCert certified compiler to support trusted libraries providing complex arithmetic calculations or instantiating idealized components such as sampling operations. This certified compiler allows us to carry to executable code the security guarantees established at the high-level, and is also instrumented to detect when compilation may interfere with side-channel countermeasures deployed in source code. We demonstrate the applicability of the framework by applying it to the RSA-OAEP encryption scheme, as standardized in PKCS#1 v2.1. The outcome is a rigorous analysis of the advantage of an adversary to break the security of assembly implementations of the algorithms specified by the standard. The example also provides two contributions of independent interest: it bridges the gap between computer-assisted security proofs and real-world cryptographic implementations as described by standards such as PKCS, and demonstrates the use of the CompCert certified compiler in the context of cryptographic software development.

José Bacelar Almeida (HASLab, INESC TEC and Universidade do Minho), Manuel Barbosa (HASLab, INESC TEC and Universidade do Minho), Gilles Barthe (IMDEA Software Institute), François Dupressoir (IMDEA Software Institute)

Security Analysis of Pseudo-Random Number Generators with Input: /dev/random is not Robust

A pseudo-random number generator (PRNG) is a deterministic algorithm that produces numbers whose distribution is indistinguishable from uniform. A formal security model for PRNGs with input was proposed in 2005 by Barak and Halevi (BH). This model involves an internal state that is refreshed with a (potentially biased) external random source, and a cryptographic function that outputs random numbers from the continually refreshed internal state. In this work we extend the BH model to also include a new security property capturing how a PRNG should accumulate the entropy of the input data into the internal state after state compromise. This property states that a good PRNG should be able to eventually recover from compromise even if the entropy is injected into the system at a very slow pace, and expresses the real-life expected behavior of existing PRNG designs. Unfortunately, we show that neither the model nor the specific PRNG construction proposed by BH meet this new property, despite meeting a weaker robustness notion introduced by BH. On the practical side, we give a precise assessment of the Linux PRNGs, /dev/random and /dev/urandom. In particular, we show attacks proving that these PRNGs are not robust according to our definition, due to vulnerabilities in their entropy estimator and their internal mixing function. Finally, we propose a simple PRNG construction that is provably robust in our new and stronger adversarial model and we show that it is more efficient than the Linux PRNGs. We therefore recommend using this construction whenever a PRNG with input is used for cryptography.
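The refresh/next interface the model revolves around can be sketched as follows. This toy hash-based construction is purely illustrative and is not the paper's provably robust design; class and method names are invented.

```python
import hashlib

class ToyPRNG:
    """Illustrative PRNG with input in the refresh/next style: the state is
    continually refreshed with external (possibly biased) input, and outputs
    are derived from the state without revealing it."""

    def __init__(self):
        self.state = b"\x00" * 32

    def refresh(self, entropy_input: bytes) -> None:
        # Fold new external input into the internal state.
        self.state = hashlib.sha256(b"refresh" + self.state + entropy_input).digest()

    def next(self) -> bytes:
        # Output and successor state come from separate derivations, so an
        # output never equals (or exposes) the state it was produced from.
        out = hashlib.sha256(b"out" + self.state).digest()
        self.state = hashlib.sha256(b"state" + self.state).digest()
        return out
```

The robustness property studied in the paper asks, roughly, that repeated `refresh` calls eventually restore security after `state` is compromised, even when each call carries very little entropy.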

Yevgeniy Dodis (New York University), David Pointcheval (École Normale Supérieure), Sylvain Ruhault (École Normale Supérieure and Oppida), Damien Vergnaud (École Normale Supérieure), Daniel Wichs (Northeastern University)

FANCI: Identification of Stealthy Malicious Logic Using Boolean Functional Analysis

Hardware design today bears similarities to software design. Often vendors buy and integrate code acquired from third-party organizations into their designs, especially in embedded/system-on-chip designs. Currently, there is no way to determine if third-party designs have built-in backdoors that can compromise security after deployment. The key observation we use to approach this problem is that hardware backdoors incorporate logic that is nearly-unused, i.e. stealthy. The wires used in stealthy backdoor circuits almost never influence the outputs of those circuits. Typically, they do so only when triggered using external inputs from an attacker. In this paper, we present FANCI, a tool that flags suspicious wires in a design that have the potential to be malicious. FANCI uses scalable, approximate, Boolean functional analysis to detect these wires. Our examination of the TrustHub hardware backdoor benchmark suite shows that FANCI is able to flag all suspicious paths in the benchmarks that are associated with backdoors. Unlike prior work in the area, FANCI is not hindered by incomplete test suite coverage and thus is able to operate in practice without false negatives. Furthermore, FANCI reports low false positive rates: less than 1% of wires are reported as suspicious in most cases. All TrustHub designs were analyzed in a day or less. We also analyze a backdoor-free out-of-order microprocessor core to demonstrate applicability beyond benchmarks.
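The influence metric this kind of analysis builds on can be computed exhaustively for tiny functions (FANCI itself works on approximate, sampled truth tables to scale to real designs). The helper below is an illustrative reconstruction, not the tool's code.

```python
from itertools import product

def control_value(f, n_inputs: int, idx: int) -> float:
    """Fraction of input patterns on which flipping input `idx` flips f's
    output. Exhaustive here; FANCI approximates this on sampled truth tables."""
    flips = sum(f(bits) != f(tuple(b ^ (i == idx) for i, b in enumerate(bits)))
                for bits in product((0, 1), repeat=n_inputs))
    return flips / 2 ** n_inputs

# A backdoor-style trigger wire: it affects the output only when every other
# input is already 1 -- "nearly unused", hence stealthy and flagged.
trigger = lambda b: b[0] & b[1] & b[2] & b[3]
assert control_value(trigger, 4, 0) == 2 / 16
# An ordinary wire influences the output on every pattern.
assert control_value(lambda b: b[0] ^ b[1], 2, 1) == 1.0
```

Wires whose control values are all near zero almost never matter to the circuit's outputs, which is exactly the profile of a trigger input waiting on an attacker-supplied pattern.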

Adam Waksman (Columbia University), Matthew Suozzo (Columbia University), Simha Sethumadhavan (Columbia University)

Rethinking SSL Development in an Appified World

The Secure Sockets Layer (SSL) is widely used to secure data transfers on the Internet. Previous studies have shown that the state of non-browser SSL code is catastrophic across a large variety of desktop applications and libraries as well as a large selection of Android apps, leaving users vulnerable to Man-in-the-Middle attacks (MITMAs). To determine possible causes of SSL problems on all major appified platforms, we extended the analysis to the walled-garden ecosystem of iOS, analyzed software developer forums and conducted interviews with developers of vulnerable apps. Our results show that the root causes are not simply careless developers, but also limitations and issues of the current SSL development paradigm. Based on our findings, we derive a proposal to rethink the handling of SSL in the appified world and present a set of countermeasures to improve the handling of SSL using Android as a blueprint for other platforms. Our countermeasures prevent developers from willfully or accidentally breaking SSL certificate validation, offer support for extended features such as SSL Pinning and different SSL validation infrastructures, and protect users. We evaluated our solution against 13,500 popular Android apps and conducted developer interviews to judge the acceptance of our approach and found that our solution works well for all investigated apps and developers.

Sascha Fahl (Leibniz Universität Hannover), Marian Harbach (Leibniz Universität Hannover), Henning Perl (Leibniz Universität Hannover), Markus Koetter (Leibniz Universität Hannover), Matthew Smith (Leibniz Universität Hannover)

Security Analysis of Integrated Circuit Camouflaging

Camouflaging is a layout-level technique that hampers an attacker from reverse engineering by introducing, in one embodiment, dummy contacts into the layout. By using a mix of real and dummy contacts, one can camouflage a standard cell whose functionality can be one of many. If an attacker cannot resolve the functionality of a camouflaged gate, he/she will extract an incorrect netlist. In this paper, we analyze the feasibility of identifying the functionality of camouflaged gates. We also propose techniques to make the dummy contact-based IC camouflaging technique resilient to reverse engineering. Furthermore, we judiciously select gates to camouflage by using techniques which ensure that the outputs of the extracted netlist are controllably corrupted. The techniques leverage IC testing principles such as justification and sensitization. The proposed techniques are evaluated using ISCAS benchmark circuits and OpenSparc T1 microprocessor controllers.

Jeyavijayan Rajendran (Polytechnic Institute of NYU), Michael Sam (Polytechnic Institute of NYU), Ozgur Sinanoglu (New York University Abu Dhabi), Ramesh Karri (Polytechnic Institute of NYU)

Protocol Misidentification Made Easy with Format-Transforming Encryption

Deep packet inspection (DPI) technologies provide much-needed visibility and control of network traffic using port-independent protocol identification, where a network flow is labeled with its application-layer protocol based on packet contents. In this paper, we provide the first comprehensive evaluation of a large set of DPI systems from the point of view of protocol misidentification attacks, in which adversaries on the network attempt to force the DPI to mislabel connections. Our approach uses a new cryptographic primitive called format-transforming encryption (FTE), which extends conventional symmetric encryption with the ability to transform the ciphertext into a format of our choosing. We design an FTE-based record layer that can encrypt arbitrary application-layer traffic, and we experimentally show that this forces misidentification for all of the evaluated DPI systems. This set includes a proprietary, enterprise-class DPI system used by large corporations and nation-states. We also show that using FTE as a proxy system incurs no latency overhead and as little as 16% bandwidth overhead compared to standard SSH tunnels. Finally, we integrate our FTE proxy into the Tor anonymity network and demonstrate that it evades real-world censorship by the Great Firewall of China.
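The rank/unrank idea behind FTE can be shown in miniature: recode ciphertext bytes into strings of a target format. A real FTE scheme ranks into the language of an arbitrary regex and includes the encryption itself; this toy fixes the format to lowercase strings of a computed length and omits encryption entirely, so every name here is illustrative.

```python
import string

ALPHABET = string.ascii_lowercase   # the target "format": lowercase strings

def unrank(n: int, length: int) -> str:
    """Return the n-th string of [a-z]^length in lexicographic order."""
    chars = []
    for _ in range(length):
        n, r = divmod(n, 26)
        chars.append(ALPHABET[r])
    return "".join(reversed(chars))

def rank(s: str) -> int:
    """Inverse of unrank: the lexicographic index of s within [a-z]^len(s)."""
    n = 0
    for c in s:
        n = n * 26 + ALPHABET.index(c)
    return n

def encode_ciphertext(ct: bytes) -> str:
    """Recode ciphertext bytes as a string in the target format."""
    n = int.from_bytes(ct, "big")
    length = -(-len(ct) * 800 // 470)   # ceil(bits / log2(26)); log2(26) ~ 4.70
    return unrank(n, length)

def decode_ciphertext(s: str, ct_len: int) -> bytes:
    return rank(s).to_bytes(ct_len, "big")
```

A DPI engine keying on byte patterns sees only format-conforming strings, yet the mapping is invertible, so the receiver recovers the exact ciphertext.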

Kevin P. Dyer (Portland State University), Scott E. Coull (RedJack, LLC.), Thomas Ristenpart (University of Wisconsin-Madison), Thomas Shrimpton (Portland State University)

Heart-to-Heart (H2H): Authentication for Implanted Medical Devices

We present Heart-to-Heart (H2H), a system to authenticate external medical device controllers and programmers to Implantable Medical Devices (IMDs). IMDs, which include pacemakers and cardiac defibrillators, are therapeutic medical devices partially or wholly embedded in the human body. They often have built-in radio communication to facilitate non-invasive reprogramming and data readout. Many IMDs, though, lack well designed authentication protocols, exposing patients to over-the-air attack and physical harm. H2H makes use of ECG (heartbeat data) as an authentication mechanism, ensuring access only by a medical instrument in physical contact with an IMD-bearing patient. Based on statistical analysis of real-world data, we propose and analyze new techniques for extracting time-varying randomness from ECG signals for use in H2H. We introduce a novel cryptographic device pairing protocol that uses this randomness to protect against attacks by active adversaries, while meeting the practical challenges of lightweight implementation and noise tolerance in ECG readings. Finally, we describe an end-to-end implementation in an ARM-Cortex M-3 microcontroller that demonstrates the practicality of H2H in current IMD hardware. Previous schemes have had goals much like those of H2H, but with serious limitations making them unfit for deployment, such as naively designed cryptographic pairing protocols (some of them recently broken). In addition to its novel analysis and use of ECG entropy, H2H is the first physiologically-based IMD device pairing protocol with a rigorous adversarial model and protocol analysis.

Masoud Rostami (ECE Dept, Rice University), Ari Juels (RSA Laboratories), Farinaz Koushanfar (Rice University)

When Kids' Toys Breach Mobile Phone Security

Touch-based verification — the use of touch gestures (e.g., swiping, zooming, etc.) to authenticate users of touch screen devices — has recently been widely evaluated for its potential to serve as a second layer of defense to the PIN lock mechanism. In all performance evaluations of touch-based authentication systems, however, researchers have assumed naive (zero-effort) forgeries in which the attacker makes no effort to mimic a given gesture pattern. In this paper we demonstrate that a simple “Lego” robot driven by input gleaned from general population swiping statistics can generate forgeries that achieve alarmingly high penetration rates against touch-based authentication systems. Using the best classification algorithms in touch-based authentication, we rigorously explore the effect of the attack, finding that it increases the Equal Error Rates of the classifiers by between 339% and 1004% depending on parameters such as the failure-to-enroll threshold and the type of touch stroke generated by the robot. The paper calls into question the zero-effort impostor testing approach used to benchmark the performance of touch-based authentication systems.

Abdul Serwadda (Louisiana Tech University), Vir Phoha (Louisiana Tech University)

Path ORAM: An Extremely Simple Oblivious RAM Protocol

We present Path ORAM, an extremely simple Oblivious RAM protocol with a small amount of client storage. Partly due to its simplicity, Path ORAM is the most practical ORAM scheme for small client storage known to date. We formally prove that Path ORAM requires log^2 N / log X bandwidth overhead for block size B = X log N. For block sizes bigger than Omega(log^2 N), Path ORAM is asymptotically better than the best known ORAM scheme with small client storage. Due to its practicality, Path ORAM has been adopted in the design of secure processors since its proposal.
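The protocol's read-path/remap/write-back structure does fit in a few dozen lines. The sketch below follows that structure but omits encryption and the client/server split, so it shows only the access-pattern logic; the class and its field names are illustrative, and Z=4 is a commonly used bucket size rather than the only choice.

```python
import random

class PathORAM:
    """Toy Path ORAM: a binary tree of buckets (the untrusted server's
    storage), a small client stash, and a position map."""
    Z = 4  # bucket capacity

    def __init__(self, depth: int):
        self.n_leaves = 2 ** depth
        self.tree = {}    # node index -> list of (block id, leaf, data)
        self.pos = {}     # block id -> assigned leaf      (client-side)
        self.stash = {}   # block id -> (leaf, data)       (client-side)

    def _path(self, leaf: int):
        node = leaf + self.n_leaves   # leaves live at indices n..2n-1
        while node >= 1:
            yield node                # leaf first, root (index 1) last
            node //= 2

    def access(self, blk, data=None):
        """Read (data=None) or write a block; every access touches one path."""
        leaf = self.pos.get(blk, random.randrange(self.n_leaves))
        self.pos[blk] = random.randrange(self.n_leaves)   # uniform remap
        for node in self._path(leaf):                     # read path into stash
            for b, l, d in self.tree.pop(node, []):
                self.stash[b] = (l, d)
        old = self.stash[blk][1] if blk in self.stash else None
        if data is not None or blk in self.stash:
            self.stash[blk] = (self.pos[blk], data if data is not None else old)
        for node in self._path(leaf):   # write back, deepest bucket first
            bucket = []
            for b, (l, d) in list(self.stash.items()):
                if len(bucket) < self.Z and node in set(self._path(l)):
                    bucket.append((b, l, d))
                    del self.stash[b]
            self.tree[node] = bucket
        return old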

Emil Stefanov (UC Berkeley), Marten van Dijk (University of Connecticut), Elaine Shi (University of Maryland), Christopher Fletcher (MIT), Ling Ren (MIT), Xiangyao Yu (MIT), Srinivas Devadas (MIT)

Identity, Location, Disease and More: Inferring Your Secrets from Android Public Resources

The design of Android is based on a set of unprotected shared resources, including those inherited from Linux (e.g., Linux public directories). However, the dramatic development in Android applications (app for short) makes available a large amount of public background information (e.g., social networks, public online services), which can potentially turn such originally harmless resource sharing into serious privacy breaches. In this paper, we report our work on this important yet understudied problem. We discovered three unexpected channels of information leaks on Android: per-app data-usage statistics, ARP information, and speaker status (on or off). By monitoring these channels, an app without any permission may acquire sensitive information such as smartphone user’s identity, the disease condition she is interested in, her geo-locations and her driving route, from top-of-the-line Android apps. Furthermore, we show that using existing and new techniques, this zero-permission app can both determine when its target (a particular application) is running and send out collected data stealthily to a remote adversary. These findings call into question the soundness of the design assumptions on shared resources, and demand effective solutions. To this end, we present a mitigation mechanism for achieving a delicate balance between utility and privacy of such resources.

Xiaoyong Zhou (Indiana University, Bloomington), Soteris Demetriou (University of Illinois at Urbana-Champaign), Dongjing He (University of Illinois at Urbana-Champaign), Muhammad Naveed (University of Illinois at Urbana-Champaign), Xiaorui Pan (Indiana University, Bloomington), Xiaofeng Wang (Indiana University, Bloomington), Carl Gunter (University of Illinois at Urbana-Champaign), Klara Nahrstedt (University of Illinois at Urbana-Champaign)

Zero-Knowledge Using Garbled Circuits: How To Prove Non-Algebraic Statements Efficiently

Zero-knowledge protocols are one of the fundamental concepts in modern cryptography and have countless applications. However, more than 30 years after their introduction, there are only very few languages (essentially those with a group structure) for which we can construct zero-knowledge protocols that are efficient enough to be used in practice. In this paper we address the problem of how to construct efficient zero-knowledge protocols for generic languages and we propose a protocol based on Yao’s garbled circuit technique. The motivation for our work is that in many cryptographic applications it is useful to be able to prove efficiently statements of the form e.g., “I know x s.t. y = SHA-256(x)” for a common input y (or other “unstructured” languages), but no efficient protocols for this task are currently known. It is clear that zero-knowledge is a subset of secure two-party computation (i.e., any protocol for generic secure computation can be used to do zero-knowledge). The main contribution of this paper is to construct an efficient protocol for the special case of secure two-party computation where only one party has input (like in the zero-knowledge case). The protocol achieves active security and is essentially only twice as slow as the passive secure version of Yao’s garbled circuit protocol. This is a great improvement with respect to the cut-and-choose technique to make Yao’s protocol actively secure, where the complexity grows linearly with the security parameter.

Marek Jawurek (SAP Research), Florian Kerschbaum (SAP Research), Claudio Orlandi (Aarhus University)

Fully Automated Analysis of Padding-Based Encryption in the Computational Model

Computer-aided verification provides effective means of analyzing the security of cryptographic primitives. However, it has remained a challenge to achieve fully automated analyses yielding guarantees that hold against computational (rather than symbolic) attacks. This paper meets this challenge for public-key encryption schemes built from trapdoor permutations and hash functions. Using a novel combination of techniques from computational and symbolic cryptography, we present proof systems for analyzing the chosen-plaintext and chosen-ciphertext security of such schemes in the random oracle model. Building on these proof systems, we develop a toolset that bundles together fully automated proof and attack finding algorithms. We use this toolset to build a comprehensive database of encryption schemes that records attacks against insecure schemes, and proofs with concrete bounds for secure ones.

Gilles Barthe (IMDEA Software Institute), Juan Manuel Crespo (IMDEA Software Institute), Benjamin Grégoire (INRIA Sophia-Antipolis), César Kunz (IMDEA Software Institute), Yassine Lakhnech (Université de Grenoble, VERIMAG), Benedikt Schmidt (IMDEA Software Institute), Santiago Zanella-Béguelin (Microsoft Research)

Obfuscation Resilient Binary Code Reuse through Trace-oriented Programming

With the wide existence of binary code, it is desirable to reuse it in many security applications, such as malware analysis and software patching. While prior approaches have shown that binary code can be extracted and reused, they are often based on static analysis and face challenges when coping with obfuscated binaries. This paper introduces trace-oriented programming (TOP), a general framework for generating new software from existing binary code by elevating the low-level binary code to C code with templates and inlined assembly. Different from existing work, TOP gains benefits from dynamic analysis such as resilience against obfuscation and avoidance of points-to analysis. Thus, TOP can be used for malware analysis, especially for malware function analysis and identification. We have implemented a proof-of-concept of TOP and our evaluation results with a range of benign and malicious software indicate that TOP is able to reconstruct source code from binary execution traces in malware analysis and identification, and binary function transplanting.

Junyuan Zeng (University of Texas at Dallas), Yangchun Fu (University of Texas at Dallas), Kenneth Miller (University of Texas at Dallas), Zhiqiang Lin (University of Texas at Dallas), Xiangyu Zhang (Purdue University), Dongyan Xu (Purdue University)

Chucky: Exposing Missing Checks in Source Code for Vulnerability Discovery

Uncovering security vulnerabilities in software is key to operating secure systems. Unfortunately, only some security flaws can be detected automatically, and the vast majority of vulnerabilities are still identified by tedious auditing of source code. In this paper, we strive to improve this situation by accelerating the process of manual auditing. We introduce Chucky, a method to expose missing checks in source code. Many vulnerabilities result from insufficient input validation and thus omitted or false checks provide valuable clues for finding security flaws. Our method proceeds by statically tainting source code and identifying anomalous or missing conditions linked to security-critical objects. In an empirical evaluation with five popular open-source projects, Chucky is able to accurately identify artificial and real missing checks, which ultimately enables us to uncover 12 previously unknown vulnerabilities in two of the projects (Pidgin and LibTIFF).

Fabian Yamaguchi (University of Goettingen), Christian Wressnegger (idalab GmbH), Hugo Gascon (University of Goettingen), Konrad Rieck (University of Goettingen)

AUTOCRYPT: Enabling Homomorphic Computation On Servers To Protect Sensitive Web Content

Web servers are vulnerable to a large class of attacks that can allow a network attacker to steal sensitive web content. In this work, we investigate the feasibility of a web server architecture, wherein the vulnerable server VM runs on a trusted cloud. All sensitive web content is made available to the vulnerable server VM in encrypted form, thereby limiting the effectiveness of data-stealing attacks through server VM compromise. In this context, the main challenge is to allow the legitimate functionality of the untrusted server VM to work. As a step towards this goal, we develop a tool called AutoCrypt, which transforms a subset of existing C functionality in the web stack to operate on encrypted sensitive content. We show that such a transformation is feasible for several standard Unix utilities available in a typical LAMP stack, with no developer effort. Key to achieving this expressiveness over encrypted data is our scheme to combine and convert between partially-homomorphic encryption (PHE) schemes using a small TCB in the trusted cloud hypervisor. We show that x86 code transformed with AutoCrypt achieves performance that is significantly better than its alternatives (downloading to a trusted client, or using fully-homomorphic encryption).

Shruti Tople (National University of Singapore), Shweta Shinde (National University of Singapore), Prateek Saxena (National University of Singapore), Zhaofeng Chen (National University of Singapore)

Belief Semantics of Authorization Logic

A formal belief semantics for authorization logics is given. The belief semantics is proved to subsume a standard Kripke semantics. The belief semantics yields a direct representation of principals’ beliefs, without resorting to the technical machinery used in Kripke semantics. A proof system is given for the logic; that system is proved sound with respect to the belief and Kripke semantics. The soundness proofs are mechanized in Coq.

Andrew Hirsch (George Washington University), Michael Clarkson (George Washington University)

Delegatable Pseudorandom Functions and Applications

We put forth the problem of delegating the evaluation of a pseudorandom function (PRF) to an untrusted proxy and introduce a novel cryptographic primitive called delegatable pseudorandom functions, or DPRFs for short: A DPRF enables a proxy to evaluate a pseudorandom function on a strict subset of its domain using a trapdoor derived from the DPRF secret key. The trapdoor is constructed with respect to a certain policy predicate that determines the subset of input values which the proxy is allowed to compute. The main challenge in constructing DPRFs is to achieve bandwidth efficiency (which mandates that the trapdoor is smaller than the precomputed sequence of the PRF values conforming to the predicate), while maintaining the pseudorandomness of unknown values against an attacker that adaptively controls the proxy. A DPRF may be optionally equipped with an additional property we call policy privacy, where any two delegation predicates remain indistinguishable in the view of a DPRF-querying proxy: achieving this raises new design challenges as policy privacy and bandwidth efficiency are seemingly conflicting goals. For the important class of policy predicates described as (1-dimensional) ranges, we devise two DPRF constructions and rigorously prove their security. Built upon the well-known tree-based GGM PRF family, our constructions are generic and feature only logarithmic delegation size in the number of values conforming to the policy predicate. At only a constant-factor efficiency reduction, we show that our second construction is also policy private. Finally, we describe how these new security and efficiency properties render our DPRF schemes particularly useful in numerous security applications, including RFID, symmetric searchable encryption, and broadcast encryption.
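The GGM-based delegation idea can be sketched directly: a trapdoor for a bit-prefix is just the corresponding inner node's key. The helper names below are invented, SHA-256 stands in for the length-doubling PRG, and the paper's actual constructions cover arbitrary ranges (not just aligned prefixes) with logarithmically many such node keys.

```python
import hashlib

def _g(key: bytes, bit: int) -> bytes:
    """One half of a length-doubling PRG, modeled with SHA-256."""
    return hashlib.sha256(key + bytes([bit])).digest()

def ggm_prf(key: bytes, x: int, n_bits: int) -> bytes:
    """GGM tree PRF: descend from the root along the bits of x (MSB first)."""
    k = key
    for i in reversed(range(n_bits)):
        k = _g(k, (x >> i) & 1)
    return k

def prefix_trapdoor(key: bytes, prefix: int, prefix_bits: int) -> bytes:
    """Inner-node key for a bit-prefix: whoever holds it can evaluate the
    PRF on exactly the inputs under that prefix, and nothing else."""
    k = key
    for i in reversed(range(prefix_bits)):
        k = _g(k, (prefix >> i) & 1)
    return k

def eval_with_trapdoor(trapdoor: bytes, suffix: int, suffix_bits: int) -> bytes:
    """The proxy finishes the descent using only the delegated node key."""
    return ggm_prf(trapdoor, suffix, suffix_bits)
```

The trapdoor is a single hash value, yet it stands in for the 2^(n_bits - prefix_bits) PRF outputs under the prefix, which is the bandwidth-efficiency point the abstract emphasizes.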

Aggelos Kiayias (National and Kapodistrian University of Athens), Stavros Papadopoulos (Hong Kong University of Science and Technology), Nikos Triandopoulos (RSA Laboratories and Boston University), Thomas Zacharias (National and Kapodistrian University of Athens)

Practical Dynamic Proofs of Retrievability

In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.

Elaine Shi (University of Maryland), Emil Stefanov (UC Berkeley), Charalampos Papamanthou (University of Maryland)

ASIST: Architectural Support for Instruction Set Randomization

Code injection attacks continue to pose a threat to today’s computing systems, as they exploit software vulnerabilities to inject and execute arbitrary, malicious code. Instruction Set Randomization (ISR) is able to protect a system against remote machine code injection attacks by randomizing the instruction set of each process. This way, the attacker will inject invalid code that will fail to execute on the randomized processor. However, all the existing implementations of ISR are based on emulators and binary instrumentation tools that (i) incur a significant runtime performance overhead, (ii) limit the ease of deployment of ISR, (iii) cannot protect the underlying operating system kernel, and (iv) are vulnerable to evasion attempts trying to bypass ISR protection. To address these issues we propose ASIST: an architecture with hardware and operating system support for ISR. We present the design and implementation of ASIST by modifying and mapping a SPARC processor onto an FPGA board and running our modified Linux kernel to support the new features. The operating system loads the randomization key of each running process into a newly defined register, and the modified processor decodes the process’s instructions with this key before execution. Moreover, ASIST protects the system against attacks that exploit kernel vulnerabilities to run arbitrary code with elevated privileges, by using a separate randomization key for the operating system. We show that ASIST transparently protects all applications and the operating system kernel from machine code injection attacks with less than 1.5% runtime overhead, while only requiring 0.7% additional hardware.
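The core mechanism can be illustrated in a few lines. This toy Python sketch is a simplification of ISR (ASIST keeps the per-process key in a hardware register and the exact transform is an assumption here): code is XOR-encoded with the process key at load time, so bytes injected without knowledge of the key decode to invalid instructions.

```python
def xor_with_key(code: bytes, key: bytes) -> bytes:
    # The same XOR serves as encode (loader) and decode (processor fetch stage).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(code))

program = bytes.fromhex("b801000000c3")      # placeholder instruction bytes
key = bytes.fromhex("deadbeef")              # per-process randomization key

loaded = xor_with_key(program, key)          # loader randomizes legitimate code
assert xor_with_key(loaded, key) == program  # decode stage recovers it exactly

shellcode = bytes.fromhex("31c0cd80")        # injected without the key...
assert xor_with_key(shellcode, key) != shellcode  # ...decodes to garbage
```

Using a separate key for the kernel, as the abstract describes, means even code injected with a stolen user-space key fails when run at elevated privilege.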

Antonis Papadogiannakis (Institute of Computer Science, Foundation for Research and Technology Hellas), Laertis Loutsis (Institute of Computer Science, Foundation for Research and Technology Hellas), Vassilis Papaefstathiou (Institute of Computer Science, Foundation for Research and Technology Hellas), Sotiris Ioannidis (Institute of Computer Science, Foundation for Research and Technology Hellas)

Honeywords: Making Password-Cracking Detectable

We propose a simple method for improving the security of hashed passwords: the maintenance of additional honeywords (false passwords) associated with each user’s account. An adversary who steals a file of hashed passwords and inverts the hash function cannot tell if he has found the password or a honeyword. The attempted use of a honeyword for login sets off an alarm. An auxiliary server (the honeychecker) can distinguish the user password from honeywords for the login routine, and will set off an alarm if a honeyword is submitted.
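A minimal sketch of the split: the main server stores only the shuffled "sweetwords" (hashed, in practice), while the honeychecker stores only the index of the real password. Class and function names here are illustrative, not the paper's API.

```python
import secrets

class HoneyChecker:
    # Auxiliary hardened server: knows only which index is the real password.
    def __init__(self):
        self.index = {}

    def set_index(self, user, i):
        self.index[user] = i

    def check(self, user, i):
        return i == self.index[user]

def enroll(user, password, honeywords, checker):
    # Main server keeps the shuffled sweetwords; a thief of its password file
    # cannot tell the real password from the honeywords.
    sweet = honeywords + [password]
    secrets.SystemRandom().shuffle(sweet)
    checker.set_index(user, sweet.index(password))
    return sweet

def login(user, attempt, sweet, checker):
    if attempt not in sweet:
        return "reject"
    if checker.check(user, sweet.index(attempt)):
        return "accept"
    return "ALARM: honeyword submitted"
```

An attacker who cracks the stolen file and submits a honeyword triggers the alarm, which is exactly the detection signal the scheme is after.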

Ari Juels (RSA), Ronald Rivest (MIT)

Practical Constructions and New Proof Methods for Large Universe Attribute-Based Encryption

We propose two large universe Attribute-Based Encryption constructions. In a large universe ABE system any string can be used as an attribute and attributes need not be enumerated at system setup. Our first construction establishes a novel large universe Ciphertext-Policy ABE scheme on prime order bilinear groups, while the second achieves a significant efficiency improvement over the large universe Key-Policy ABE systems of Lewko-Waters and Lewko. Both schemes are selectively secure in the standard model under two q-type assumptions similar to ones used in prior works. Our work brings back program-and-cancel techniques to this problem and aims at providing practical large universe ABE implementations. To showcase the efficiency improvements over prior constructions, we provide implementations and benchmarks of our schemes in Charm, a programming environment for rapid prototyping of cryptographic primitives. We compare them to implementations of the only three published constructions that offer unbounded ABE in the standard model.

Yannis Rouselakis (University of Texas at Austin), Brent Waters (University of Texas at Austin)

Multi-Cloud Oblivious Storage

We present a 2-cloud oblivious storage (ORAM) system that achieves 2.6X bandwidth cost between the client and the cloud. Splitting an ORAM across 2 or more non-colluding clouds allows us to reduce the client-cloud bandwidth cost by at least one order of magnitude, shifting the higher-bandwidth communication to in-between the clouds where bandwidth provisioning is abundant. Our approach makes ORAM practical for bandwidth-constrained clients such as home or mobile Internet connections. We provide a full-fledged implementation of our 2-cloud ORAM system, and report results from a real-world deployment over Amazon EC2 and Microsoft Azure.

Emil Stefanov (UC Berkeley), Elaine Shi (University of Maryland)

FPDetective: Dusting the Web for Fingerprinters

In the modern web, the browser has emerged as the vehicle of choice, which users are to trust, customize, and use to access a wealth of information and online services. However, recent studies show that the browser can also be used to invisibly fingerprint the user: a practice that may have serious privacy and security implications. In this paper, we report on the design, implementation and deployment of FPDetective, a framework for the detection and analysis of web-based fingerprinters. Instead of relying on information about known fingerprinters or third-party-tracking blacklists, FPDetective focuses on the detection of the fingerprinting itself. By applying our framework with a focus on font detection practices, we were able to conduct a large-scale analysis of the million most popular websites of the Internet, and discovered that the adoption of fingerprinting is much higher than previous studies had estimated. Moreover, we analyze two countermeasures that have been proposed to defend against fingerprinting and find weaknesses in them that might be exploited to bypass their protection. Finally, based on our findings, we discuss the current understanding of fingerprinting and how it is related to Personally Identifiable Information, showing that there needs to be a change in the way users, companies and legislators engage with fingerprinting.

Gunes Acar (KU Leuven), Marc Juarez (Institut d'Investigació en Intel·ligència Artificial and KU Leuven), Nick Nikiforakis (KU Leuven), Claudia Diaz (KU Leuven), Seda Gurses (New York University and KU Leuven), Frank Piessens (KU Leuven), Bart Preneel (KU Leuven)

librando: Transparent Code Randomization for Just-in-Time Compilers

Just-in-time compilers (JITs) are here to stay. Unfortunately, they also provide new capabilities to cyber attackers, namely the ability to supply input programs (in languages such as JavaScript) that will then be compiled to executable code. Once this code is placed and marked as executable, it can then be leveraged by the attacker. Randomization techniques such as constant blinding raise the cost to the attacker, but they significantly add to the burden of implementing a JIT. There are a great many JITs in use today, but not even all of the most commonly used ones randomize their outputs. We present librando, the first comprehensive technique to harden JIT compilers in a completely generic manner by randomizing their output transparently ex post facto. We implement this approach as a system-wide service that can simultaneously harden multiple running JITs. It hooks into the memory protections of the target OS and randomizes newly generated code on the fly when marked as executable. In order to provide “black box” JIT hardening, librando needs to be extremely conservative. For example, it completely preserves the contents of the calling stack, presenting each JIT with the illusion that it is executing its own generated code. Yet in spite of the heavy lifting that librando performs behind the scenes, the performance impact is surprisingly low. For Java (HotSpot), we measured slowdowns by a factor of 1.15x, and for compute-intensive JavaScript (V8) benchmarks, a slowdown of 3.5x. For many applications, this overhead is low enough to be practical for general use today.

Andrei Homescu (University of California Irvine), Stefan Brunthaler (University of California, Irvine), Per Larsen (University of California, Irvine), Michael Franz (University of California, Irvine)

AppIntent: Analyzing Sensitive Data Transmission in Android for Privacy Leakage Detection

Android phones often carry personal information, attracting malicious developers to embed code in Android applications to steal sensitive data. With known techniques in the literature, one may easily determine if sensitive data is being transmitted out of an Android phone. However, transmission of sensitive data in itself does not necessarily indicate privacy leakage; a better indicator may be whether the transmission is by user intention or not. When transmission is not intended by the user, it is more likely a privacy leakage. The problem is how to determine if transmission is user intended. As a first solution in this space, we present a new analysis framework called AppIntent. For each data transmission, AppIntent can efficiently provide a sequence of GUI manipulations corresponding to the sequence of events that lead to the data transmission, thus helping an analyst to determine if the data transmission is user intended or not. The basic idea is to use symbolic execution to generate the aforementioned event sequence, but straightforward symbolic execution proves to be too time-consuming to be practical. A major innovation in AppIntent is to leverage the unique Android execution model to reduce the search space without sacrificing code coverage. We also present an evaluation of AppIntent with a set of 750 malicious apps, as well as 1,000 top free apps from Google Play. The results show that AppIntent can effectively help separate the apps that truly leak user privacy from those that do not.

Zhemin Yang (Fudan University), Min Yang (Fudan University), Yuan Zhang (Fudan University), Guofei Gu (Texas A&M University), Peng Ning (NC State University), X. Sean Wang (Fudan University)

Preventing Accidental Data Disclosure in Modern Operating Systems

Modern OSes such as Android, iOS, and Windows 8 have changed the way consumers interact with computing devices. Tasks are often completed by stringing together a collection of purpose-specific user applications (e.g., a barcode reader, a social networking app, a document viewer). As users direct this workflow between applications, it is difficult to predict the consequence of each step. Poor selection may result in accidental information disclosure when the target application unknowingly uses cloud services. This paper presents Aquifer as a policy framework and system for preventing accidental information disclosure in modern operating systems. In Aquifer, application developers define secrecy restrictions that protect the entire user interface workflow defining the user task. In doing so, Aquifer provides protection beyond simple permission checks and allows applications to retain control of data even after it is shared.

Adwait Nadkarni (North Carolina State University), William Enck (North Carolina State University)

OASIS: On Achieving a Sanctuary for Integrity and Secrecy on Untrusted Platforms

We present OASIS, a CPU instruction set extension for externally verifiable initiation, execution, and termination of an isolated execution environment with a trusted computing base consisting solely of the CPU. OASIS leverages the hardware components available on commodity CPUs to achieve a low-cost, low-overhead design.

Emmanuel Owusu (Carnegie Mellon University), Jorge Guajardo (Robert Bosch LLC Research and Technology Center, Pittsburgh, USA), Jonathan McCune (Carnegie Mellon University), Jim Newsome (Carnegie Mellon University), Adrian Perrig (ETH Zurich, CyLab / Carnegie Mellon University), Amit Vasudevan (Carnegie Mellon University)

Automatic Verification of Protocols with Lists of Unbounded Length

We present a novel automatic technique for proving secrecy and authentication properties for security protocols that manipulate lists of unbounded length, for an unbounded number of sessions. This result is achieved by extending the Horn clause approach of the automatic protocol verifier ProVerif. We extend the Horn clauses to be able to represent lists of unbounded length. We adapt the resolution algorithm to handle the new class of Horn clauses, and prove the soundness of this new algorithm. We have implemented our algorithm and successfully tested it on several protocol examples, including XML protocols coming from web services.

Bruno Blanchet (INRIA Paris-Rocquencourt), Miriam Paiola (INRIA Paris-Rocquencourt)

Ensuring High-Quality Randomness in Cryptographic Key Generation

The security of any cryptosystem relies on the secrecy of the system’s secret keys. Yet, recent experimental work demonstrates that tens of thousands of devices on the Internet use RSA and DSA secrets drawn from a small pool of candidate values. As a result, an adversary can derive the device’s secret keys without breaking the underlying cryptosystem. We introduce a new threat model, under which there is a systemic solution to such randomness flaws. In our model, when a device generates a cryptographic key, it incorporates some random values from an “entropy authority” into its cryptographic secrets and then proves to the authority, using zero-knowledge-proof techniques, that it performed this operation correctly. By presenting an entropy-authority-signed public key certificate to a third party (like a certificate authority or SSH client), the device can demonstrate that its public key incorporates randomness from the authority and is therefore drawn from a large pool of candidate values. Where possible, our protocol protects against eavesdroppers, entropy authority misbehavior, and devices attempting to discredit the entropy authority. To demonstrate the practicality of our protocol, we have implemented and evaluated its performance on a commodity wireless home router. When running on a home router, our protocol incurs a 1.7x slowdown over conventional RSA key generation and it incurs a 3.6x slowdown over conventional EC-DSA key generation.
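The commit-then-combine core of such a protocol, minus the zero-knowledge proofs, can be sketched as follows. The hash commitment and the key-derivation step are illustrative assumptions, not the paper's exact construction.

```python
import hashlib
import secrets

def commit(value: bytes, nonce: bytes) -> bytes:
    # Hash commitment: binding, and hiding for high-entropy inputs.
    return hashlib.sha256(nonce + value).digest()

# Device: pick local entropy x and commit to it BEFORE seeing the authority's
# value, so neither party can steer the final seed toward a chosen weak value.
x, nonce = secrets.token_bytes(32), secrets.token_bytes(32)
c = commit(x, nonce)

# Entropy authority: contribute fresh randomness r after receiving c.
r = secrets.token_bytes(32)

# Device: derive the key-generation seed from both contributions. In the real
# protocol, a zero-knowledge proof convinces the authority that the published
# public key really incorporates r, without revealing x.
seed = hashlib.sha256(x + r).digest()
assert commit(x, nonce) == c  # opening the commitment verifies the device's input
```

Even if the device's own randomness pool is weak, the seed is drawn from a large space as long as the authority's contribution r is fresh.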

Henry Corrigan-Gibbs (Stanford University), Wendy Mu (Stanford University), Dan Boneh (Stanford University), Bryan Ford (Yale University)

Verifiable Delegation of Computation on Outsourced Data

We address the problem in which a client stores a large amount of data with an untrusted server in such a way that, at any moment, the client can ask the server to compute a function on some portion of its outsourced data. In this scenario, the client must be able to efficiently verify the correctness of the result despite no longer knowing the inputs of the delegated computation, it must be able to keep adding elements to its remote storage, and it does not have to fix in advance (i.e., at data outsourcing time) the functions that it will delegate. Even more ambitiously, clients should be able to verify in time independent of the input-size – a very appealing property for computations over huge amounts of data. In this work we propose novel cryptographic techniques that solve the above problem for the class of computations of quadratic polynomials over a large number of variables. This class covers a wide range of significant arithmetic computations – notably, many important statistics. To confirm the efficiency of our solution, we show encouraging performance results, e.g., correctness proofs have size below 1 kB and are verifiable by clients in less than 10 milliseconds.

Michael Backes (Saarland University and Max Planck Institute for Software Systems), Dario Fiore (Max Planck Institute for Software Systems), Raphael M. Reischuk (Saarland University)

Shady Paths: Leveraging Surfing Crowds to Detect Malicious Web Pages

The web is one of the most popular vectors to spread malware. Attackers lure victims to visit compromised web pages or entice them to click on malicious links. These victims are redirected to sites that exploit their browsers or trick them into installing malicious software using social engineering. In this paper, we tackle the problem of detecting malicious web pages from a novel angle. Instead of looking at particular features of a (malicious) web page, we analyze how a large and diverse set of web browsers reach these pages. That is, we use the browsers of a collection of web users to record their interactions with websites, as well as the redirections they go through to reach their final destinations. We then aggregate the different redirection chains that lead to a specific web page and analyze the characteristics of the resulting redirection graph. As we will show, these characteristics can be used to detect malicious pages. We argue that our approach is less prone to evasion than previous systems, allows us to also detect scam pages that rely on social engineering rather than only those that exploit browser vulnerabilities, and can be implemented efficiently. We developed a system, called SpiderWeb, which implements our proposed approach. We show that this system works well in detecting web pages that deliver malware.

Gianluca Stringhini (University of California, Santa Barbara), Christopher Kruegel (University of California, Santa Barbara), Giovanni Vigna (University of California, Santa Barbara)

Blackbox Traceable CP-ABE: How to Catch People Leaking Their Keys by Selling Decryption Devices on eBay

In the context of Ciphertext-Policy Attribute-Based Encryption (CP-ABE), if a decryption device associated with an attribute set S_D appears on eBay, and is alleged to be able to decrypt any ciphertexts with policies satisfied by S_D, no one including the CP-ABE authorities can identify the malicious user(s) who built such a decryption device using their key(s). This has been known as a major practicality concern in CP-ABE applications, for example, providing fine-grained access control on encrypted data. Due to the nature of CP-ABE, users get decryption keys from authorities associated with attribute sets. If there exist two or more users with attribute sets that are supersets of S_D, existing CP-ABE schemes cannot distinguish which user is the malicious one who builds and sells such a decryption device. In this paper, we extend the notion of CP-ABE to support Blackbox Traceability and propose a concrete scheme which is able to identify, among multiple users whose keys are associated with supersets of S_D, a user whose key has been used in building the decryption device. The scheme is efficient with sub-linear overhead, and when compared with the very recent (non-traceable) CP-ABE scheme of Lewko and Waters in Crypto 2012, it can be considered an extension with fully collusion-resistant blackbox traceability added, i.e. an adversary can access an arbitrary number of keys when building a decryption device while the new tracing algorithm can still identify at least one particular key which must have been used for building the underlying decryption device. We show that this new scheme is secure against adaptive adversaries in the standard model, and is highly expressive by supporting any monotonic access structures. Its additional traceability property is also proven against adaptive adversaries in the standard model.
Of independent interest, in this paper we also consider a scenario we call “found-in-the-wild”. In this scenario, a decryption device is found, for example, on a black market, and reported to an authority (e.g. a law enforcement agency). The decryption device is found to be able to decrypt ciphertexts with a certain policy, say A, while the associated attribute set S_D is missing. In this found-in-the-wild scenario, we show that the Blackbox Traceable CP-ABE scheme proposed in this paper can still find the malicious users whose keys have been used for building the decryption device, and that it achieves selective traceability in the standard model under this scenario.

Zhen Liu (Shanghai Jiao Tong University, City University of Hong Kong), Zhenfu Cao (Shanghai Jiao Tong University), Duncan Wong (City University of Hong Kong)

AVANT-GUARD: Scalable and Vigilant Switch Flow Management in Software-Defined Networks

Among the leading reference implementations of the Software Defined Networking (SDN) paradigm is the OpenFlow framework, which decouples the control plane from the data plane into a centralized application. In this paper, we consider two aspects of OpenFlow that pose security challenges, and we propose two solutions that could address these concerns. The first challenge is the inherent communication bottleneck that arises between the data plane and the control plane, which an adversary could exploit by mounting a “control plane saturation attack” that disrupts network operations. Indeed, even well-mined adversarial models, such as scanning or denial-of-service (DoS) activity, can produce more potent impacts on OpenFlow networks than on traditional networks. To address this challenge, we introduce an extension to the OpenFlow data plane called “connection migration”, which dramatically reduces the amount of data-to-control-plane interactions that arise during such attacks. The second challenge is that of enabling the control plane to expedite both detection of, and responses to, the changing flow dynamics within the data plane. For this, we introduce “actuating triggers” over the data plane’s existing statistics collection services. These triggers are inserted by control layer applications to both register for asynchronous callbacks and insert conditional flow rules that are only activated when a trigger condition is detected within the data plane’s statistics module. We present Avant-Guard, an implementation of our two data plane extensions, evaluate the performance impact, and examine its use for developing more scalable and resilient SDN security services.

Seungwon Shin (Texas A&M University), Vinod Yegneswaran (SRI International), Phillip Porras (SRI International), Guofei Gu (Texas A&M University)

Polyglots: Crossing Origins by Crossing Formats

In a heterogeneous system like the web, information is exchanged between components in versatile formats. A new breed of attacks is on the rise that exploits the mismatch between the expected and provided content. This paper focuses on the root cause of a large class of attacks: polyglots. A polyglot is a program that is valid in multiple programming languages. Polyglots allow multiple interpretations of the content, providing a new space of attack vectors. We characterize what constitutes a dangerous format in the web setting and identify particularly dangerous formats, with PDF as the prime example. We demonstrate that polyglot-based attacks on the web open up insecure communication across Internet origins. The paper presents novel attack vectors that infiltrate the trusted origin by syntax injection across multiple languages and by content smuggling of malicious payload that appears formatted as benign content. The attacks lead to both cross-domain leakage and cross-site request forgery. We perform a systematic study of PDF-based injection and content smuggling attacks. We evaluate the current practice in client/server content filtering and PDF readers for polyglot-based attacks, and report on vulnerabilities in the top 100 Alexa web sites. We identify five web sites that are vulnerable to syntax injection attacks. Further, we have found two major enterprise cloud storage services to be susceptible to content smuggling attacks. Our recommendations for protective measures on server side, in browsers, and in content interpreters (in particular, PDF readers) show how to mitigate the attacks.

Jonas Magazinius (Chalmers University of Technology), Billy Rios (Google), Andrei Sabelfeld (Chalmers University of Technology)

Membership Privacy: A Unifying Framework For Privacy Definitions

XXX

Ninghui Li (Purdue University), Wahbeh Qardaji (Purdue University), Dong Su (Purdue University), Yi Wu (Purdue University), Weining Yang (Purdue University)

Anonymous Credentials Light

We define and propose an efficient and provably secure construction of blind signatures with attributes. Prior notions of blind signatures did not lend themselves to the construction of anonymous credential systems, not even if we drop the unlinkability requirement of anonymous credentials. Our new notion in contrast is a convenient building block for anonymous credential systems. The construction we propose is efficient: it requires just a few exponentiations in a prime-order group in which the decisional Diffie-Hellman problem is hard. Thus, for the first time, we give a provably secure construction of anonymous credentials that can work in the elliptic group setting without bilinear pairings and is based on the DDH assumption. In contrast, prior provably secure constructions were based on the RSA group or on groups with pairings, which made them prohibitively inefficient for mobile devices, RFIDs and smartcards. The only prior efficient construction that could work in such elliptic curve groups, due to Brands, does not have a proof of security.

Foteini Baldimtsi (Brown University), Anna Lysyanskaya (Brown University)

Catching Click-Spam in Search Ad Networks

Click-spam in online advertising, where unethical publishers use malware or trick users into clicking ads, siphons off hundreds of millions of advertiser dollars meant to support free websites and apps. Ad networks today, sadly, rely primarily on security through obscurity to defend against click-spam. In this paper, we present Viceroi, a principled approach to catching click-spam in search ad networks. It is designed based on the intuition that click-spam is a profit-making business that needs to deliver higher return on investment (ROI) for click-spammers than other (ethical) business models to offset the risk of getting caught. Viceroi operates at the ad network where it has visibility into all ad clicks. Working with a large real-world ad network, we find that the simple-yet-general Viceroi approach catches over six very different classes of click-spam attacks (e.g., malware-driven, search-hijacking, arbitrage) without any tuning knobs.

Vacha Dave (UC San Diego), Saikat Guha (Microsoft Research India), Yin Zhang (The University of Texas at Austin)

Vetting Undesirable Behaviors in Android Apps with Permission Use Analysis

The Android platform adopts permissions to protect sensitive resources from untrusted apps. However, after permissions are granted by users at install time, apps could use these permissions (sensitive resources) with no further restrictions. Thus, recent years have witnessed the explosion of undesirable behaviors in Android apps. An important part of the defense is the accurate analysis of Android apps. However, traditional syscall-based analysis techniques are not well-suited for Android, because they cannot capture critical interactions between the application and the Android system. This paper presents VetDroid, a dynamic analysis platform for reconstructing sensitive behaviors in Android apps from a novel permission use perspective. VetDroid features a systematic framework to effectively construct permission use behaviors, i.e., how applications use permissions to access (sensitive) system resources, and how these acquired permission-sensitive resources are further utilized by the application. With permission use behaviors, security analysts can easily examine the internal sensitive behaviors of an app. Using real-world Android malware, we show that VetDroid can clearly reconstruct fine-grained malicious behaviors to ease malware analysis. We further apply VetDroid to 1,249 top free apps in Google Play. VetDroid can assist in finding more information leaks than TaintDroid, a state-of-the-art technique. In addition, we show how we can use VetDroid to analyze fine-grained causes of information leaks that TaintDroid cannot reveal. Finally, we show that VetDroid can help identify subtle vulnerabilities in some (top free) applications otherwise hard to detect.

Yuan Zhang (Fudan University), Min Yang (Fudan University), Bingquan Xu (Fudan University), Zhemin Yang (Fudan University), Guofei Gu (Texas A&M University), Peng Ning (NC State University), X. Sean Wang (Fudan University), Binyu Zang (Fudan University)

Policy-based Secure Deletion

Securely deleting data from storage systems has become difficult today. Most storage space is provided as a virtual resource and traverses many layers between the user and the actual physical storage medium. Operations to properly erase data and wipe out all its traces are typically not foreseen, particularly not in networked and cloud-storage systems. This paper introduces a general cryptographic model for policy-based secure deletion of data in storage systems, whose security relies on the proper erasure of cryptographic keys. Deletion operations are expressed in terms of a policy that describes data destruction through deletion attributes and protection classes. The policy links attributes as specified in deletion operations to the protection class(es) that must be erased accordingly. A cryptographic construction is presented for deletion policies given by directed acyclic graphs; it is built in a modular way from exploiting that secure deletion schemes may be composed with each other. The model and the construction unify and generalize all previous encryption-based techniques for secure deletion. Finally, the paper describes a prototype implementation of a Linux filesystem with policy-based secure deletion.
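The key-erasure idea behind such constructions can be sketched minimally. In this toy Python sketch (names are illustrative, and the XOR wrapping stands in for real authenticated encryption), each per-file key is wrapped under the key of its protection class, so erasing that one class key securely deletes every file assigned to the class:

```python
import hashlib
import secrets

def wrap(class_key: bytes, file_key: bytes) -> bytes:
    # Toy key wrapping: XOR with a hash-derived pad (stand-in for real AEAD).
    pad = hashlib.sha256(class_key).digest()
    return bytes(a ^ b for a, b in zip(file_key, pad))

class DeletionStore:
    def __init__(self):
        self.class_keys = {}  # erasable storage: the only secrets ever deleted
        self.wrapped = {}     # persistent storage: wrapped per-file keys

    def store(self, name: str, protection_class: str) -> bytes:
        ck = self.class_keys.setdefault(protection_class, secrets.token_bytes(32))
        fk = secrets.token_bytes(32)
        self.wrapped[name] = (protection_class, wrap(ck, fk))
        return fk  # used to encrypt the file's contents

    def recover(self, name: str) -> bytes:
        pc, blob = self.wrapped[name]
        if pc not in self.class_keys:
            raise KeyError("protection class deleted: file key unrecoverable")
        return wrap(self.class_keys[pc], blob)  # unwrap = wrap (XOR)

    def delete_class(self, protection_class: str):
        # Securely erasing this one small key deletes all files in the class,
        # no matter how many storage layers hold copies of the ciphertexts.
        del self.class_keys[protection_class]
```

The paper generalizes this one-level picture to policies given by directed acyclic graphs of attributes and protection classes, composed from such building blocks.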

Christian Cachin (IBM Research Zurich), Kristiyan Haralambiev (IBM Research Zurich), Hsu-Chun Hsiao (Carnegie Mellon University), Alessandro Sorniotti (IBM Research Zurich)

How to Keep a Secret: Leakage Deterring Public-key Cryptosystems

How is it possible to prevent the sharing of cryptographic functions? This question appears to be fundamentally hard to address since in this setting the owner of the key is the adversary: she wishes to share a program or device that (potentially only partly) implements her main cryptographic functionality. Given that she possesses the cryptographic key, it is impossible for her to be prevented from writing code or building a device that uses that key. She may though be deterred from doing so. We introduce leakage-deterring public-key cryptosystems to address this problem. Such primitives have the feature of enabling the embedding of owner-specific private data into the owner’s public key so that given access to any (even partially functional) implementation of the primitive, the recovery of the data can be facilitated. We formalize the notion of leakage-deterring in the context of encryption, signature, and identification and we provide efficient generic constructions that facilitate the recoverability of the hidden data while retaining privacy as long as no sharing takes place.

Aggelos Kiayias (National and Kapodistrian University of Athens and University of Connecticut), Qiang Tang (National and Kapodistrian University of Athens and University of Connecticut)

PHANTOM: Practical Oblivious Computation in a Secure Processor

We introduce PHANTOM [1], a new secure processor that obfuscates its memory access trace. To an adversary who can observe the processor’s output pins, all memory access traces are computationally indistinguishable (a property known as obliviousness). We achieve obliviousness through a cryptographic construct known as Oblivious RAM or ORAM. We first improve an existing ORAM algorithm and construct an empirical model for its trusted storage requirement. We then present PHANTOM, an oblivious processor whose novel memory controller aggressively exploits DRAM bank parallelism to reduce ORAM access latency and scales well to a large number of memory channels. Finally, we build a complete hardware implementation of PHANTOM on a commercially available FPGA-based server, and through detailed experiments show that PHANTOM is efficient in both area and performance. Accessing 4KB of data from a 1GB ORAM takes 26.2us (13.5us for the data to be available), a 32x slowdown over accessing 4KB from regular memory, while SQLite queries on a population database see 1.2-6x slowdown. PHANTOM is the first demonstration of a practical oblivious processor and can provide strong confidentiality guarantees when offloading computation to the cloud.

Martin Maas (UC Berkeley), Eric Love (UC Berkeley), Emil Stefanov (UC Berkeley), Mohit Tiwari (UT Austin), Elaine Shi (University of Maryland), Krste Asanovic (UC Berkeley), John Kubiatowicz (UC Berkeley), Dawn Song (UC Berkeley)

An Empirical Study of Cryptographic Misuse in Android Applications

Developers use cryptographic APIs in Android with the intent of securing data such as passwords and personal information on mobile devices. In this paper, we ask whether developers use the cryptographic APIs in a fashion that provides typical cryptographic notions of security, e.g., IND-CPA security. We develop program analysis techniques to automatically check programs on the Google Play marketplace, and find that 10,327 out of 11,748 applications that use cryptographic APIs – 88% overall – make at least one mistake. These numbers show that applications do not use cryptographic APIs in a fashion that maximizes overall security. We then suggest specific remediations based on our analysis towards improving overall cryptographic security in Android applications.
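
A toy sketch can make the IND-CPA failure behind many such mistakes concrete. The "block cipher" below is a hash-based stand-in (not a real cipher, and not from the paper), but it shows why a deterministic mode such as ECB leaks plaintext structure while a randomized counter mode does not:

```python
import hashlib

BLOCK = 16

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Toy "block cipher" for illustration only -- NOT a real cipher.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB: each block is encrypted independently and deterministically.
    assert len(plaintext) % BLOCK == 0
    return b"".join(toy_encrypt_block(key, plaintext[i:i + BLOCK])
                    for i in range(0, len(plaintext), BLOCK))

def ctr_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # CTR with a fresh nonce: equal plaintext blocks yield different ciphertext.
    out = bytearray()
    for i in range(0, len(plaintext), BLOCK):
        keystream = toy_encrypt_block(key, nonce + i.to_bytes(8, "big"))
        out += bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], keystream))
    return bytes(out)

key = b"0" * 16
msg = b"ATTACK AT DAWN!!" * 2          # two identical 16-byte blocks

ecb = ecb_encrypt(key, msg)
print(ecb[:BLOCK] == ecb[BLOCK:])      # True: ECB leaks block equality

ctr = ctr_encrypt(key, b"fresh-nonce-", msg)
print(ctr[:BLOCK] == ctr[BLOCK:])      # False: CTR hides it
```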

Manuel Egele (Carnegie Mellon University), David Brumley (Carnegie Mellon University), Yanick Fratantonio (University of California, Santa Barbara), Christopher Kruegel (University of California, Santa Barbara)

On the Security of TLS Renegotiation

The Transport Layer Security (TLS) protocol is the most widely used security protocol on the Internet. It supports negotiation of a wide variety of cryptographic primitives through different cipher suites, various modes of client authentication, and additional features such as renegotiation. Despite its widespread use, only recently has the full TLS protocol been proven secure, and only the core cryptographic protocol with no additional features. These additional features have been the cause of several practical attacks on TLS. In 2009, Ray and Dispensa demonstrated how TLS renegotiation allows an attacker to splice together its own session with that of a victim, resulting in a man-in-the-middle attack on TLS-reliant applications such as HTTP. TLS was subsequently patched with two defence mechanisms for protection against this attack. We present the first formal treatment of renegotiation in secure channel establishment protocols. We add optional renegotiation to the authenticated and confidential channel establishment model of Jager et al., an adaptation of the Bellare–Rogaway authenticated key exchange model. We describe the attack of Ray and Dispensa on TLS within our model. We show generically that the proposed fixes for TLS offer good protection against renegotiation attacks, and give a simple new countermeasure that provides renegotiation security for TLS even in the face of stronger adversaries.

Florian Giesen (Ruhr-Universität Bochum), Florian Kohlar (Ruhr-Universität Bochum), Douglas Stebila (Queensland University of Technology)

OAKE: A New Family of Implicitly Authenticated Diffie-Hellman Protocols

Cryptographic algorithm standards play an important role both in the practice of information security and in cryptography theory research. Among them, the KEA and OPACITY (KEA/OPACITY, in short) protocols, and the MQV and HMQV ((H)MQV, in short) protocols, are a family of implicitly authenticated Diffie-Hellman key-exchange (IA-DHKE) protocols that are among the most efficient authenticated key-exchange protocols known and are widely standardized. In this work, from some new design insights, we develop a new family of practical IA-DHKE protocols, referred to as OAKE (standing for “optimal authenticated key-exchange”). We show that the OAKE protocol family combines, in essence, the advantages of both (H)MQV and KEA/OPACITY, while avoiding or alleviating the disadvantages of both.

Andrew C. Yao (IIIS, Tsinghua University, Beijing, China), Yunlei Zhao (Software School, Fudan University, Shanghai, China)

Diglossia: Detecting Code Injection Attacks With Precision and Efficiency

Code injection attacks continue to plague applications that incorporate user input into executable programs. For example, SQL injection vulnerabilities rank fourth among all bugs reported in CVE, yet all previously proposed methods for detecting SQL injection attacks suffer from false positives and false negatives. This paper describes the design and implementation of DIGLOSSIA, a new tool that precisely and efficiently detects code injection attacks on server-side Web applications generating SQL and NoSQL queries. The main problems in detecting injected code are (1) recognizing code in the generated query, and (2) determining which parts of the query are tainted by user input. To recognize code, DIGLOSSIA relies on the precise definition due to Ray and Ligatti. To identify tainted characters, DIGLOSSIA dynamically maps all application-generated characters to shadow characters that do not occur in user input and computes shadow values for all input-dependent strings. Any original characters in a shadow value are thus exactly the taint from user input. Our key technical innovation is dual parsing. To detect injected code in a generated query, DIGLOSSIA parses the query in tandem with its shadow and checks that (1) the two parse trees are syntactically isomorphic, and (2) all code in the shadow query is in shadow characters and, therefore, originated from the application itself, as opposed to user input. We demonstrate that DIGLOSSIA accurately detects both SQL and NoSQL code injection attacks while avoiding the false positives and false negatives of prior methods. By recasting the problem of detecting injected code as a string propagation and parsing problem, we gain substantial improvements in efficiency and precision over prior work. Our approach does not require any changes to the databases, Web servers, or Web browsers, adds virtually unnoticeable performance overhead, and is deployable today.
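
The shadow-character idea can be sketched in a few lines. The sketch below simplifies DIGLOSSIA considerably: it uses a toy lexer and performs only the code-origin check, eliding the full parse-tree isomorphism test; the token names and the private-use shadow alphabet are illustrative choices, not the paper's:

```python
import re

SHIFT = 0xE000  # shift app-generated ASCII into a private-use "shadow" alphabet

def build_query(parts):
    """parts: list of (text, is_app_generated). Returns (query, shadow_query)."""
    query = "".join(text for text, _ in parts)
    shadow = "".join(
        "".join(chr(ord(c) + SHIFT) for c in text) if app else text
        for text, app in parts)
    return query, shadow

TOKEN = re.compile(r"(?P<str>'[^']*')|(?P<num>\d+)|(?P<kw>\w+)|(?P<op>[=<>();,*])")

def injected(parts):
    _, shadow = build_query(parts)
    # lex the shadow query on its un-shadowed text so spans line up with `shadow`
    plain = "".join(chr(ord(c) - SHIFT) if ord(c) >= SHIFT else c for c in shadow)
    for m in TOKEN.finditer(plain):
        if m.lastgroup in ("kw", "op"):
            a, b = m.span()
            # code tokens must consist purely of shadow (app-generated) characters
            if any(ord(c) < SHIFT for c in shadow[a:b]):
                return True
    return False

pre = ("SELECT * FROM users WHERE name='", True)
post = ("'", True)
print(injected([pre, ("alice", False), post]))        # False: input stays a value
print(injected([pre, ("' OR '1'='1", False), post]))  # True: OR came from input
```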

Sooel Son (The University of Texas at Austin), Kathryn McKinley (Microsoft Research and The University of Texas at Austin), Vitaly Shmatikov (The University of Texas at Austin)

Tappan Zee (North) Bridge: Mining Memory Accesses for Introspection

The ability to introspect into the behavior of software at runtime is crucial for many security-related tasks, such as virtual machine-based intrusion detection and low-artifact malware analysis. Although some progress has been made in this task by automatically creating programs that can passively retrieve kernel-level information, two key challenges remain. First, it is currently difficult to extract useful information from user-level applications, such as web browsers. Second, discovering points within the OS and applications to hook for active monitoring is still an entirely manual process. In this paper we propose a set of techniques to mine the memory accesses made by an operating system and its applications to locate useful places to deploy active monitoring, which we call tap points. We demonstrate the efficacy of our techniques by finding tap points for useful introspection tasks such as finding SSL keys and monitoring web browser activity on five different operating systems (Windows 7, Linux, FreeBSD, Minix and Haiku) and two processor architectures (ARM and x86).

Brendan Dolan-Gavitt (Georgia Institute of Technology), Tim Leek (MIT Lincoln Laboratory), Josh Hodosh (MIT Lincoln Laboratory), Wenke Lee (Georgia Institute of Technology)

Fast Two-Party Secure Computation with Minimal Assumptions

We present a protocol for two-party secure computation in the presence of malicious adversaries that is based on garbled circuits and the cut-and-choose paradigm, and that minimizes the required cryptographic assumptions: beyond oblivious transfer, it relies only on symmetric-key primitives. We describe an implementation and an experimental evaluation showing that the protocol is fast in practice.

Abhi Shelat (University of Virginia), Chih-Hao Shen (University of Virginia)

Secure Data Deletion from Persistent Media

Secure deletion is the task of deleting data irrecoverably from a physical medium. In this work, we present a general approach to the design and analysis of secure deletion for persistent storage that relies on encryption and key wrapping. We define a key disclosure graph that models the adversarial knowledge of the history of key generation and wrapping. We introduce a generic update function and prove that it achieves secure deletion of data against a coercive attacker; instances of the update function implement the update behaviour of all arborescent data structures including B-Trees, extendible hash tables, linked lists, and others. We implement a B-Tree instance of our solution. Our implementation is at the block-device layer, allowing any block-based file system to be used on top of it. Using different workloads, we find that the storage and communication overhead required for storing and retrieving B-Tree nodes is small and that this therefore constitutes a viable solution for many applications requiring secure deletion from persistent media.
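
The key-wrapping idea behind the update function can be illustrated with a toy store (XOR keystreams derived from a hash stand in for a real cipher; all names are hypothetical): deleting a record discards its wrapped key and rotates the master key, so the record's ciphertext left on the persistent medium becomes unrecoverable:

```python
import os, hashlib

def keystream(key: bytes, n: int) -> bytes:
    # hash-based keystream: a stand-in for a real stream cipher
    out, c = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + c.to_bytes(4, "big")).digest()
        c += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class SecureStore:
    """Toy key-wrapping store: each record has its own key, wrapped under a
    master key held in a small securely-deletable area."""
    def __init__(self):
        self.master = os.urandom(16)   # lives in securely-deletable storage
        self.wrapped = {}              # name -> wrapped per-record key
        self.blobs = {}                # name -> ciphertext on persistent media

    def put(self, name, data):
        k = os.urandom(16)
        self.blobs[name] = xor(data, keystream(k, len(data)))
        self.wrapped[name] = xor(k, keystream(self.master + name.encode(), 16))

    def get(self, name):
        k = xor(self.wrapped[name], keystream(self.master + name.encode(), 16))
        blob = self.blobs[name]
        return xor(blob, keystream(k, len(blob)))

    def delete(self, name):
        del self.wrapped[name]         # drop the wrapped key...
        keys = {n: xor(w, keystream(self.master + n.encode(), 16))
                for n, w in self.wrapped.items()}
        self.master = os.urandom(16)   # ...rotate the master key...
        self.wrapped = {n: xor(k, keystream(self.master + n.encode(), 16))
                        for n, k in keys.items()}
        # ...the old blob stays on disk but is now cryptographically erased

store = SecureStore()
store.put("a", b"secret A")
store.put("b", b"secret B")
store.delete("a")
print(store.get("b"))   # b'secret B': survivors stay readable
```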

Joel Reardon (ETH Zurich), Hubert Ritzdorf (ETH Zurich), David Basin (ETH Zurich), Srdjan Capkun (ETH Zurich)

Quantifying the Security of Graphical Passwords: The Case of Android Unlock Patterns

Graphical passwords were proposed as an alternative to overcome the inherent limitations of text-based passwords, inspired by research that shows that the graphical memory of humans is particularly well developed. A graphical password scheme that has been widely adopted is the Android Unlock Pattern, a special case of the Pass-Go scheme with grid size restricted to 3x3 points and restricted stroke count. In this paper, we study the security of Android unlock patterns. By performing a large-scale user study, we measure actual user choices of patterns instead of theoretical considerations on password spaces. From this data we construct a model based on Markov chains that enables us to quantify the strength of Android unlock patterns. We found empirically that there is a high bias in the pattern selection process, e.g., the upper left corner and three-point long straight lines are very typical selection strategies. Consequently, the entropy of patterns is rather low, and our results indicate that the security offered by the scheme is less than the security of only three-digit randomly-assigned PINs for guessing 20% of all passwords (i.e., we estimate a partial guessing entropy G_0.2 of 9.10 bits). Based on these insights, we systematically improve the scheme by finding a small, but still effective change in the pattern layout that makes graphical user logins substantially more secure. By means of another user study, we show that some changes improve the security by more than doubling the space of actually used passwords (i.e., increasing the partial guessing entropy G_0.2 to 10.81 bits).
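
A minimal version of such a Markov model, trained here on a handful of made-up example patterns (nodes numbered 0..8, row-major), already reproduces the kind of bias the study measures, e.g. a pattern starting in the common upper-left corner scores far higher than its reverse:

```python
from collections import defaultdict

# Hypothetical sample of user-chosen 3x3 unlock patterns (NOT the study's data)
sample = [
    [0, 1, 2, 5, 8], [0, 3, 6, 7, 8], [0, 1, 2, 4, 6],
    [0, 4, 8, 5, 2], [6, 3, 0, 1, 2], [0, 1, 4, 7, 8],
]

start = defaultdict(int)
trans = defaultdict(lambda: defaultdict(int))
for p in sample:
    start[p[0]] += 1
    for a, b in zip(p, p[1:]):
        trans[a][b] += 1

def prob(pattern, alpha=0.1):
    """Laplace-smoothed first-order Markov probability of a pattern."""
    n = 9  # nodes in the 3x3 grid
    pr = (start[pattern[0]] + alpha) / (len(sample) + alpha * n)
    for a, b in zip(pattern, pattern[1:]):
        total = sum(trans[a].values())
        pr *= (trans[a][b] + alpha) / (total + alpha * n)
    return pr

# the biased start node (upper-left corner, 0) dominates the reversed pattern
print(prob([0, 1, 2, 5, 8]) > prob([8, 5, 2, 1, 0]))   # True
```

Sorting all patterns by this probability yields the guessing order from which partial guessing entropies such as G_0.2 are computed.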

Sebastian Uellenbeck (Ruhr-University Bochum), Markus Dürmuth (Ruhr-University Bochum), Christopher Wolf (Ruhr-University Bochum), Thorsten Holz (Ruhr-University Bochum)

When Private Set Intersection Meets Big Data: An Efficient and Scalable Protocol

Large scale data processing brings new challenges to the design of privacy-preserving protocols: how to meet the increasing requirements of speed and throughput of modern applications, and how to scale up smoothly when data being protected is big. Efficiency and scalability become critical criteria for privacy preserving protocols in the age of Big Data. In this paper, we present a new Private Set Intersection (PSI) protocol that is extremely efficient and highly scalable compared with existing protocols. The protocol is based on a novel approach that we call oblivious Bloom intersection. It has linear complexity and relies mostly on efficient symmetric key operations. It has high scalability due to the fact that most operations can be parallelized easily. The protocol has two versions, a basic protocol and an enhanced protocol; the security of the two variants is analyzed and proved in the semi-honest model and the malicious model, respectively. A prototype of the basic protocol has been built. We report the result of performance evaluation and compare it against the two previously fastest PSI protocols. Our protocol is orders of magnitude faster than these two protocols. To compute the intersection of two million-element sets, our protocol needs only 41 seconds (80-bit security) and 339 seconds (256-bit security) on moderate hardware in parallel mode.
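
The non-oblivious core of the idea, a plain Bloom-filter intersection, can be sketched as follows; the oblivious-transfer machinery that actually hides each party's set from the other is elided, so this sketch shows only the data structure, not the privacy guarantee:

```python
import hashlib

class Bloom:
    """Minimal Bloom filter with k hash functions derived from SHA-256."""
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _indices(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item: bytes):
        for j in self._indices(item):
            self.bits[j] = 1

    def __contains__(self, item: bytes):
        return all(self.bits[j] for j in self._indices(item))

client_set = {b"alice", b"bob", b"carol"}
server_set = {b"bob", b"dave", b"carol"}

bf = Bloom()
for e in client_set:
    bf.add(e)

# the server tests its own elements against the client's filter;
# membership answers are exact up to the filter's false-positive rate
intersection = {e for e in server_set if e in bf}
print(sorted(intersection))   # contains b'bob' and b'carol'
```

Linear work in the set sizes and the embarrassingly parallel per-element tests are what the abstract's scalability claim rests on.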

Changyu Dong (University of Strathclyde), Liqun Chen (Hewlett-Packard Laboratories), Zikai Wen (University of Strathclyde)

Formal Verification of Information Flow Security for a Simple ARM-Based Separation Kernel

A separation kernel simulates a distributed environment using a single physical machine by executing partitions in isolation and appropriately controlling communication among them. We present a formal verification of information flow security for a simple separation kernel for ARMv7. Previous work on information flow kernel security leaves communication to be handled by model-external means, and cannot be used to draw conclusions when there is explicit interaction between partitions. We propose a different approach where communication between partitions is made explicit and the information flow is analyzed in the presence of such a channel. Limiting the kernel functionality as much as meaningfully possible, we accomplish a detailed analysis and verification of the system, proving its correctness at the level of the ARMv7 assembly. As a sanity check we show how the security condition is reduced to noninterference in the special case where no communication takes place. The verification is done in HOL4 taking the Cambridge model of ARM as basis, transferring verification tasks on the actual assembly code to an adaptation of the BAP binary analysis tool developed at CMU.

Mads Dam (KTH), Roberto Guanciale (KTH), Narges Khakpour (CSC, KTH), Hamed Nemati (KTH), Oliver Schwarz (SICS Swedish Institute of Computer Science)

25 Million Flows Later: Large-scale Detection of DOM-based XSS

In recent years, the Web witnessed a move towards sophisticated client-side functionality. This shift caused a significant increase in the complexity of deployed JavaScript code and thus a proportional growth in potential client-side vulnerabilities, with DOM-based Cross-site Scripting being a high-impact representative of such security issues. In this paper, we present a fully automated system to detect and validate DOM-based XSS vulnerabilities, consisting of a taint-aware JavaScript engine and corresponding DOM implementation as well as a context-sensitive exploit generation approach. Using these components, we conducted a large-scale analysis of the Alexa top 5000. In this study, we identified 6167 unique vulnerabilities distributed over 480 domains, showing that 9.6% of the examined sites carry at least one DOM-based XSS problem.
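
The taint-tracking core of such a system can be caricatured in a few lines: strings carry per-character taint flags, concatenation propagates them, and a sink check flags user-controlled characters that can open a markup or script context. This is a deliberately tiny stand-in for the paper's taint-aware JavaScript engine:

```python
class TStr:
    """Taint-aware string: the value plus a per-character taint flag."""
    def __init__(self, s: str, tainted: bool):
        self.s, self.t = s, [tainted] * len(s)

    @staticmethod
    def concat(*parts):
        out = TStr("", False)
        for p in parts:
            out.s += p.s
            out.t += p.t
        return out

def sink_check(ts: TStr) -> str:
    # flag user-controlled characters that can open markup/script context
    for ch, tainted in zip(ts.s, ts.t):
        if tainted and ch in "<>\"'":
            return "potential DOM-based XSS"
    return "ok"

# e.g. user input read from location.hash, written into the document
user = TStr("<img src=x onerror=alert(1)>", True)
page = TStr.concat(TStr("<div>", False), user, TStr("</div>", False))
print(sink_check(page))     # potential DOM-based XSS

benign = TStr.concat(TStr("<div>", False), TStr("hello", True), TStr("</div>", False))
print(sink_check(benign))   # ok
```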

Sebastian Lekies (SAP AG), Ben Stock (Friedrich-Alexander-University Erlangen- Nuremberg), Martin Johns (SAP AG)

ShadowReplica: Efficient Parallelization of Dynamic Data Flow Tracking

Dynamic data flow tracking (DFT) is a technique broadly used in a variety of security applications that, unfortunately, exhibits poor performance, preventing its adoption in production systems. We present ShadowReplica, a new and efficient approach for accelerating DFT and other shadow memory-based analyses, by decoupling analysis from execution and utilizing spare CPU cores to run them in parallel. Our approach enables us to run a heavyweight technique, like dynamic taint analysis (DTA), twice as fast, while concurrently consuming fewer CPU cycles than when applying it in-line. DFT is run in parallel by a second shadow thread that is spawned for each application thread, and the two communicate using a shared data structure. We avoid the problems suffered by previous approaches, by introducing an off-line application analysis phase that utilizes both static and dynamic analysis methodologies to generate optimized code for decoupling execution and implementing DFT, while it also minimizes the amount of information that needs to be communicated between the two threads. Furthermore, we use a lock-free ring buffer structure and an N-way buffering scheme to efficiently exchange data between threads and maintain high cache-hit rates on multi-core CPUs. Our evaluation shows that ShadowReplica is on average ~2.3× faster than in-line DFT (~2.75× slowdown over native execution) when running the SPEC CPU2006 benchmark, while similar speedups were observed with command-line utilities and popular server software. Astoundingly, ShadowReplica also reduces the CPU cycles used by up to 30%.
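
The decoupling hinges on a cheap single-producer/single-consumer queue between the application thread and its shadow thread. A toy Python rendition of such a ring buffer follows; note that Python's GIL papers over the memory-ordering issues a real lock-free implementation must handle:

```python
import threading

class RingBuffer:
    """Single-producer/single-consumer ring buffer: each thread writes only
    its own index, which is what lets the real scheme avoid locks."""
    def __init__(self, size=1024):
        self.buf = [None] * size
        self.size = size
        self.head = 0   # written only by the producer
        self.tail = 0   # written only by the consumer

    def push(self, item):
        nxt = (self.head + 1) % self.size
        while nxt == self.tail:        # buffer full: spin (a real impl backs off)
            pass
        self.buf[self.head] = item
        self.head = nxt

    def pop(self):
        while self.tail == self.head:  # buffer empty: spin
            pass
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.size
        return item

rb = RingBuffer()
results = []

def shadow_thread():
    while True:
        event = rb.pop()
        if event is None:              # sentinel: application finished
            break
        results.append(event)          # stand-in for applying a DFT rule

t = threading.Thread(target=shadow_thread)
t.start()
for i in range(1000):
    rb.push(("taint", i))              # application thread enqueues DFT events
rb.push(None)
t.join()
print(len(results))   # 1000
```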

Kangkook Jee (Columbia University), Vasileios P. Kemerlis (Columbia University), Angelos D. Keromytis (Columbia University), Georgios Portokalidis (Stevens Institute of Technology)

Impact of Integrity Attacks on Real-Time Pricing in Smart Grids

Modern information and communication technologies used by smart grids are subject to cybersecurity threats. This paper studies the impact of integrity attacks on real-time pricing (RTP), a key feature of smart grids that uses such technologies to improve system efficiency. Recent studies have shown that RTP creates a closed loop formed by the mutually dependent real-time price signals and price-taking demand. Such a closed loop can be exploited by an adversary whose objective is to destabilize the pricing system. Specifically, small malicious modifications to the price signals can be iteratively amplified by the closed loop, causing inefficiency and even severe failures such as blackouts. This paper adopts a control-theoretic approach to deriving the fundamental conditions of RTP stability under two broad classes of integrity attacks, namely, the scaling and delay attacks. We show that the RTP system is at risk of being destabilized only if the adversary can compromise the price signals advertised to smart meters by reducing their values in the scaling attack, or by providing old prices to over half of all consumers in the delay attack. The results provide useful guidelines for system operators to analyze the impact of various attack parameters on system stability, so that they may take adequate measures to secure RTP systems.
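
A linearized toy simulation illustrates the destabilizing effect of the delay attack; the model and constants below are illustrative, not the paper's:

```python
def simulate(steps=60, gain=1.5, delay=0, p0=1.2):
    """Toy linearized RTP loop: the operator nudges the price toward market
    clearing, but consumers react to a possibly delayed price signal."""
    target = 1.0                       # equilibrium clearing price
    p = [p0] * (delay + 1)
    for t in range(delay, delay + steps):
        seen = p[t - delay]            # price signal the consumers act on
        excess_demand = target - seen  # linearized demand/supply mismatch
        p.append(p[t] + gain * excess_demand)
    return p

no_attack = simulate(delay=0)
attacked = simulate(delay=1)           # adversary replays a one-step-old price
print(abs(no_attack[-1] - 1.0) < 1e-6)   # True: the loop converges
print(abs(attacked[-1] - 1.0) > 10)      # True: delayed feedback diverges
```

With no delay the error shrinks by a factor |1 - gain| per step; feeding consumers a one-step-old price changes the characteristic equation so the same gain produces growing oscillations, the instability the paper's conditions delimit.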

Rui Tan (Advanced Digital Sciences Center, Illinois at Singapore), Varun Badrinath Krishna (Advanced Digital Sciences Center, Illinois at Singapore), David K. Y. Yau (Advanced Digital Sciences Center, Illinois at Singapore and Singapore University of Technology and Design), Zbigniew Kalbarczyk (University of Illinois at Urbana-Champaign)

Predictability of Android OpenSSL's Pseudo Random Number Generator

OpenSSL is the most widely used library for SSL/TLS on the Android platform. The security of OpenSSL depends greatly on the unpredictability of its Pseudo Random Number Generator (PRNG). In this paper, we reveal a vulnerability of the OpenSSL PRNG on Android. We first analyze the architecture of OpenSSL specific to Android, and the overall operation process of the PRNG from initialization until the session key is generated. Owing to the nature of Android, the Dalvik Virtual Machine in Zygote initializes the states of the OpenSSL PRNG early upon booting, and SSL applications copy the PRNG states of Zygote when they start. Therefore, applications that use OpenSSL generate random data from the same initial states, which is a potential problem that may seriously affect the security of Android applications. Next, we investigate the possibility of recovering the initial states of the OpenSSL PRNG. To do so, we must predict the nine external entropy sources of the PRNG. We show that these sources can be obtained in practice if the device is fixed. For example, the complexity of the attack was O(2^{32+t}) on our smartphone, where t is the bit complexity for estimating the system boot time. In our experiments, we were able to restore the PRNG states in 74 out of 100 cases. Assuming that we knew the boot time, i.e., t=0, the average time required to restore was 35 min on a PC with four cores (eight threads). Finally, we show that it is possible to recover the PreMasterSecret of the first SSL session with O(2^{58}) computations using the restored PRNG states, if the application is implemented using the org.webkit package and the key exchange scheme is RSA. This shows that the vulnerability of the OpenSSL PRNG can be a real threat to the security of Android.
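
The attack pattern, enumerating a small entropy space offline until the derived output matches an observed value, can be sketched with Python's Mersenne Twister standing in for the OpenSSL PRNG; the entropy space here is artificially small and all constants are made up:

```python
import random

def prng(seed: int) -> random.Random:
    # stand-in for a PRNG seeded only from predictable values (e.g. boot time)
    return random.Random(seed)

secret_boot_ms = 34_567                     # unknown to the attacker
session_key = prng(secret_boot_ms).getrandbits(128)

# the attacker simply enumerates the (small) entropy space offline
recovered = next(t for t in range(100_000)
                 if prng(t).getrandbits(128) == session_key)
print(recovered == secret_boot_ms)   # True
```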

Soo Hyeon Kim (The Attached Institute of ETRI and KOREA Unisversity), Daewan Han (The Attached Institute of ETRI), Dong Hoon Lee (KOREA University)

Addressing the Concerns of the Lacks Family: Quantification of Kin Genomic Privacy

The rapid progress in human-genome sequencing is leading to a high availability of genomic data. This data is notoriously very sensitive and stable in time. It is also highly correlated among relatives. A growing number of genomes are becoming accessible online (e.g., because of leakage, or after their posting on genome-sharing websites). What are then the implications for kin genomic privacy? We formalize the problem and detail an efficient reconstruction attack based on graphical models and belief propagation. With this approach, an attacker can infer the genomes of the relatives of an individual whose genome is observed, relying notably on Mendel’s Laws and statistical relationships between the nucleotides (on the DNA sequence). Then, to quantify the level of genomic privacy as a result of the proposed inference attack, we discuss possible definitions of genomic privacy metrics. Genomic data reveals Mendelian diseases and the likelihood of developing degenerative diseases such as Alzheimer’s. We also introduce the quantification of health privacy, specifically the measure of how well the predisposition to a disease is concealed from an attacker. We evaluate our approach on actual genomic data from a pedigree and show the threat extent by combining data gathered from a genome-sharing website and from an online social network.
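
The Mendelian core of the inference can be sketched by brute-force enumeration rather than belief propagation (a single locus with a uniform prior; a deliberately tiny stand-in for the paper's graphical model):

```python
from collections import Counter
from itertools import product

def child_dist(parent1: str, parent2: str) -> dict:
    """Mendel's law: the child draws one allele from each parent uniformly."""
    outcomes = Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))
    total = sum(outcomes.values())
    return {g: c / total for g, c in outcomes.items()}

# Observed: one parent is Aa and their child is aa. Infer the unobserved
# parent by Bayesian inversion over a uniform prior on {AA, Aa, aa}.
candidates = ["AA", "Aa", "aa"]
likelihood = {g: child_dist("Aa", g).get("aa", 0.0) for g in candidates}
norm = sum(likelihood.values())
posterior = {g: l / norm for g, l in likelihood.items()}
print(posterior)   # AA is ruled out; aa is twice as likely as Aa
```

Belief propagation scales the same local computation to whole pedigrees with many loci, which is what makes the reconstruction attack practical.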

Mathias Humbert (EPFL), Erman Ayday (EPFL), Jean-Pierre Hubaux (EPFL), Amalio Telenti (Institute of Microbiology, University Hospital and University of Lausanne)

deDacota: Toward Preventing Server-Side XSS via Automatic Code and Data Separation

Web applications are constantly under attack. They are popular, typically accessible from anywhere on the Internet, and they can be abused as malware delivery systems. Cross-site scripting flaws are one of the most common types of vulnerabilities that are leveraged to compromise a web application and its users. A large set of cross-site scripting vulnerabilities originates from the browser’s confusion between data and code. That is, untrusted data input to the web application is sent to the clients’ browser, where it is then interpreted as code and executed. While new applications can be designed with code and data separated from the start, legacy web applications do not have that luxury. This paper presents a novel approach to securing legacy web applications by automatically and statically rewriting an application so that the code and data are clearly separated in its web pages. This transformation protects the application and its users from a large range of server-side cross-site scripting attacks. Moreover, the code and data separation can be efficiently enforced at run time via the Content Security Policy enforcement mechanism available in modern browsers. We implemented our approach in a tool, called deDacota, that operates on binary ASP.NET applications. We demonstrate on six real-world applications that our tool is able to automatically separate code and data, while keeping the application’s semantics unchanged.
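
The transformation can be sketched as: hoist inline script bodies into an external file, reference that file from the page, and emit a Content Security Policy the browser enforces. The regex-based rewriter below is a toy illustration with hypothetical names (the real tool operates on compiled ASP.NET applications):

```python
import re

def separate(html: str):
    """Hoist inline <script> bodies into an external file and emit a CSP that
    forbids inline script, so injected data can no longer execute as code."""
    scripts = []

    def hoist(m):
        scripts.append(m.group(1))
        return ""

    page = re.sub(r"<script>(.*?)</script>", hoist, html, flags=re.S)
    # reference the hoisted code from a same-origin file (hypothetical path)
    page = page.replace("</head>", '<script src="/app.js"></script></head>')
    headers = {"Content-Security-Policy": "script-src 'self'"}
    return page, "\n".join(scripts), headers

html = "<html><head><script>init();</script></head><body>hi</body></html>"
page, external_js, headers = separate(html)
print(external_js)                   # init();
print("<script>init" not in page)    # True: no inline code remains in the page
```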

Adam Doupé (University of California, Santa Barbara), Weidong Cui (Microsoft Research), Mariusz Jakubowski (Microsoft Research), Marcus Peinado (Microsoft Research), Christopher Kruegel (University of California, Santa Barbara), Giovanni Vigna (University of California, Santa Barbara)

Seeing Double: Reconstructing Obscured Typed Input from Repeated Compromising Reflections

Of late, threats enabled by the ubiquitous use of mobile devices have drawn much interest from the research community. However, prior threats all suffer from a similar, and profound, weakness - namely the requirement that the adversary is either within visual range of the victim (e.g., to ensure that the pop-out events in reflections in the victim’s sunglasses can be discerned) or is close enough to the target to avoid the use of expensive telescopes. In this paper, we broaden the scope of the attacks by relaxing these requirements and show that breaches of privacy are possible even when the adversary is around a corner. The approach we take overcomes challenges posed by low image resolution by extending computer vision methods to operate on small, high-noise, images. Moreover, our work is applicable to all types of keyboards because of a novel application of fingertip motion analysis for key-press detection. In doing so, we are also able to exploit reflections in the eyeball of the user or even repeated reflections (i.e., a reflection of a reflection of the mobile device in the eyeball of the user). Our empirical results show that we can perform these attacks with high accuracy, and can do so in scenarios that aptly demonstrate the realism of this threat.

Yi Xu (University of North Carolina at Chapel Hill), Jared Heinly (University of North Carolina at Chapel Hill), Andrew White (University of North Carolina at Chapel Hill), Jan-Michael Frahm (University of North Carolina at Chapel Hill), Fabian Monrose (University of North Carolina at Chapel Hill)

Computationally Complete Symbolic Attacker and Key Exchange

Recently, Bana and Comon-Lundh introduced the notion of computationally complete symbolic attacker to deliver unconditional computational soundness to symbolic protocol verification. First we explain the relationship between their technique and Fitting’s embedding of classical logic into S4. Then, based on predicates for “key usability”, we provide an axiomatic system in their framework to handle secure encryption when keys are allowed to be sent. We examine both IND-CCA2 and KDM-CCA2 encryption, in both the symmetric and asymmetric settings. For unforgeability, we consider INT-CTXT encryption. This technique does not require the usual limitations of computational soundness such as the absence of dynamic corruption, the absence of key-cycles or unambiguous parsing of bit strings. In particular, if a key-cycle possibly corrupts CCA2 encryption, our technique delivers an attack. If it does not endanger security, the security proof goes through. We illustrate how our notions can be applied in protocol proofs.

Gergei Bana (INRIA, Paris), Koji Hasebe (University of Tsukuba), Mitsuhiro Okada (Keio University)

Deduction Soundness: Prove One, Get Five for Free

Most computational soundness theorems deal with a limited number of primitives, thereby limiting their applicability. The notion of deduction soundness of Cortier and Warinschi (CCS‘11) aims to facilitate soundness theorems for richer frameworks via composition results: deduction soundness can be extended, generically, with asymmetric encryption and public data structures. Unfortunately, that paper also hints at rather serious limitations regarding further composition results: composability with digital signatures seems to be precluded. In this paper we provide techniques for bypassing the perceived limitations of deduction soundness and demonstrate that it enjoys vastly improved composition properties. More precisely, we show that a deduction sound implementation can be modularly extended with all of the five basic cryptographic primitives (symmetric/asymmetric encryption, message authentication codes, digital signatures, and hash functions). We thus obtain the first soundness framework that allows for the joint use of multiple instances of all of the basic primitives. In addition, we show how to overcome an important restriction of the bare deduction soundness framework which forbids sending encrypted secret keys. In turn, this prevents its use for the analysis of a large class of interesting protocols (e.g.~key exchange protocols). We allow for more liberal uses of keys as long as they are hidden in a sense that we also define. All primitives typically used to send secret data (symmetric/asymmetric encryption) satisfy our requirement which we also show to be preserved under composition.

Florian Böhl (Karlsruhe Institute of Technology), Véronique Cortier (LORIA CNRS), Bogdan Warinschi (University of Bristol)

Cross-Origin Pixel Stealing: Timing Attacks Using CSS Filters

Timing attacks rely on systems taking varying amounts of time to process different input values. This is usually the result of either conditional branching in code or differences in input size. Using CSS default filters, we have discovered a variety of timing attacks that work in multiple browsers and devices. The first attack exploits differences in time taken to render various DOM trees. This knowledge can be used to determine boolean values such as whether or not a user has an account with a particular website. Second, we introduce pixel stealing. Pixel stealing attacks can be used to sniff user history and read text tokens.

Robert Kotcher (Carnegie Mellon University), Yutong Pei (Carnegie Mellon University), Pranjal Jumde (Carnegie Mellon University), Collin Jackson (Carnegie Mellon University)

Low-Fat Pointers: Compact Encoding and Efficient Gate-Level Implementation of Fat Pointers for Spatial Safety and Capability-based Security

Referencing outside the bounds of an array or buffer is a common source of bugs and security vulnerabilities in today’s software. We can enforce spatial safety and eliminate these violations by inseparably associating bounds with every pointer (fat pointer) and checking these bounds on every memory access. By further adding hardware-managed tags to the pointer, we make them unforgeable. This, in turn, allows the pointers to be used as capabilities to facilitate fine-grained access control and fast security domain crossing. Dedicated checking hardware runs in parallel with the processor’s normal datapath so that the checks do not slow down processor operation (0% runtime overhead). To achieve the safety of fat pointers without increasing program state, we compactly encode approximate base and bound pointers along with exact address pointers for a 46b address space into one 64-bit word with a worst-case memory overhead of 3%. We develop gate-level implementations of the logic for updating and validating these compact fat pointers and show that the hardware requirements are low and the critical paths for common operations are smaller than processor ALU operations. Specifically, we show that the fat-pointer check and update operations can run in a 4 ns clock cycle on a Virtex 6 (40nm) implementation while only using 1100 6-LUTs or about the area of a double-precision floating-point adder.
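
The flavor of the compact encoding can be sketched in software: choose a block size 2^e so the object spans fewer than 2^MANT blocks, store only the low mantissa bits of the base and top block numbers, and reconstruct exact block-granular bounds from the pointer's own address bits. The field layout below is an illustrative simplification, not the paper's exact format:

```python
ADDR = 46                 # address bits in the 64-bit word
MANT = 6                  # mantissa bits for each approximate bound
MASK = (1 << MANT) - 1

def encode(addr, base, size):
    """Pack a pointer with approximate base/bound into one word."""
    e = 0
    # grow the block size until the allocation spans fewer than 2^MANT blocks
    while ((base + size - 1) >> e) - (base >> e) >= MASK:
        e += 1
    bm = (base >> e) & MASK                # base-block mantissa
    tm = ((base + size - 1) >> e) & MASK   # top-block mantissa
    return (e << (ADDR + 2 * MANT)) | (bm << (ADDR + MANT)) | (tm << ADDR) | addr

def bounds(word):
    """Recover block-granular bounds from the pointer's own high address bits
    plus the two mantissas (the stored address lies inside its object)."""
    e = word >> (ADDR + 2 * MANT)
    bm = (word >> (ADDR + MANT)) & MASK
    tm = (word >> ADDR) & MASK
    blk = (word & ((1 << ADDR) - 1)) >> e
    base_blk = blk - ((blk - bm) & MASK)   # nearest block at/below blk that
    top_blk = blk + ((tm - blk) & MASK)    # matches bm; symmetrically for tm
    return base_blk << e, ((top_blk + 1) << e) - 1

def check(word, new_addr):
    lo, hi = bounds(word)
    return lo <= new_addr <= hi

base, size = 0x1234, 100
word = encode(base + 50, base, size)
print(check(word, base))          # True
print(check(word, base + 99))     # True
print(check(word, base + 100))    # False here (bounds are block-granular)
```

Because bounds are kept at block granularity, slightly out-of-bounds addresses within the rounded-up top block would still pass; the paper's scheme pads allocations so this slack is never exploitable.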

Albert Kwon (University of Pennsylvania, Philadelphia), Udit Dhawan (University of Pennsylvania, Philadelphia), Jonathan Smith (University of Pennsylvania, Philadelphia), Thomas Knight (BAE Systems), Andre Dehon (University of Pennsylvania, Philadelphia)

BIOS Chronomancy: Fixing the Core Root of Trust for Measurement

In this paper we look at the implementation of the Core Root of Trust for Measurement (CRTM) from a Dell Latitude E6400 laptop. We describe how the implementation of the CRTM on this system doesn’t meet the requirements set forth by either the Trusted Platform Module (TPM) PC Client specification or NIST 800-155 guidance. We show how novel tick malware, a 51-byte patch to the CRTM, can replay a forged measurement to the TPM, falsely indicating that the BIOS is pristine. This attack is broadly applicable, because all CRTMs we have seen to date are rooted in mutable firmware. We also show how flea malware can survive attempts to reflash infected firmware with a clean image. To fix the untrustworthy CRTM we ported an open source “TPM-timing-based attestation” implementation from running in the Windows kernel, to running in an OEM’s BIOS and SMRAM. This created a new, stronger CRTM that detects tick, flea, and other malware embedded in the BIOS. We call our system “BIOS Chronomancy”, and we show that it works in a real vendor BIOS, with all the associated complexity, rather than in a simplified research environment.

John Butterworth (MITRE), Corey Kallenberg (MITRE), Xeno Kovah (MITRE), Amy Herzog (MITRE)

PCTCP: Per-Circuit TCP-over-IPsec Transport for Anonymous Communication Overlay Networks

Recently, there have been several research efforts to design a transport layer that meets the security requirements of anonymous communications while maximizing the network performance experienced by users. In this work, we argue that existing proposals suffer from several performance and deployment issues and we introduce PCTCP, a novel anonymous communication transport design for overlay networks that addresses the shortcomings of the previous proposals. In PCTCP, every overlay path, or circuit, is assigned a separate kernel-level TCP connection that is protected by IPsec, the standard security layer for IP. To evaluate our work, we focus on the Tor network, the most popular low-latency anonymity network, which is notorious for its performance problems that can potentially deter its wider adoption and thereby impact its anonymity. Previous research showed that the current transport layer design of Tor, in which several circuits are multiplexed in a single TCP connection between any pair of routers, is a key contributor to Tor’s performance issues. We implemented, experimentally evaluated, and confirmed the potential gains provided by PCTCP in an isolated testbed and on the live Tor network. We ascertained that significant performance benefits can be obtained using our approach for web clients, while maintaining the same level of anonymity provided by the network today. Our realistic large-scale experimental evaluation of PCTCP shows improvements of more than 60% for response times and approximately 30% for download times compared to Tor. Finally, PCTCP only requires minimal changes to Tor and is easily deployable, as it does not require all routers on a circuit to upgrade.

Mashael Alsabah (Qatar Computing Research Institute), Ian Goldberg (University of Waterloo)

Towards Reducing the Attack Surface of Software Backdoors

Backdoors in software systems have probably existed since the very first access control mechanisms were implemented, and they are a well-known security problem. Despite a wave of public discoveries of such backdoors over the last few years, this threat has only rarely been tackled so far. In this paper, we present an approach to reduce the attack surface for this kind of attack, and we strive for an automated identification and elimination of backdoors in binary applications. We limit our focus to the examination of server applications within a client-server model. At the core, we apply variations of the delta debugging technique and introduce several novel heuristics for identifying those regions of a binary application in which backdoors are typically installed (i.e., authentication and command processing functions). We demonstrate the practical feasibility of our approach on several real-world backdoors found in modified versions of the popular software tools ProFTPD and OpenSSH. Furthermore, we evaluate our implementation not only on common instruction set architectures such as x86-64, but also on commercial off-the-shelf embedded devices powered by a MIPS32 processor.

Felix Schuster (Ruhr-Universität Bochum), Thorsten Holz (Ruhr-Universität Bochum)

Breaking and Entering through the Silicon

As the surplus market of failure analysis equipment continues to grow, the cost of performing invasive IC analysis continues to diminish. Hardware vendors in high-security applications utilize security by obscurity to implement layers of protection on their devices. High-security applications must assume that the attacker is skillful, well-equipped and well-funded. Modern security ICs are designed to make readout of decrypted data and changes to the security configuration of the device impossible. Countermeasures such as meshes and attack sensors thwart many state-of-the-art attacks. Because of the perceived difficulty and lack of publicly known attacks, the IC backside has largely been ignored by the security community. However, the backside is currently the weakest link in modern ICs because no devices currently on the market are protected against fully-invasive attacks through the IC backside. Fully-invasive backside attacks circumvent all known countermeasures utilized by modern implementations. In this work, we demonstrate the first two practical fully-invasive attacks against the IC backside. Our first attack is fully-invasive backside microprobing. Using this attack we were able to capture decrypted data directly from the data bus of the target IC’s CPU core. We also present a fully-invasive backside circuit edit. With this attack we were able to set security and configuration fuses of the device to arbitrary values.

Clemens Helfmeier (Semiconductor Devices, TU Berlin), Dmitry Nedospasov (Security in Telecommunications, TU Berlin), Christopher Tarnovsky (IOActive Inc.), Jan Krissler (Security in Telecommunications, TU Berlin), Christian Boit (Semiconductor Devices, TU Berlin), Jean-Pierre Seifert (Security in Telecommunications, TU Berlin)

Using SMT Solvers to Automate Design Tasks for Encryption and Signature Schemes

Cryptographic design tasks are primarily performed by hand today. Shifting more of this burden to computers could make the design process faster, more accurate and less expensive. In this work, we investigate tools for programmatically altering existing cryptographic constructions to reflect particular design goals. Our techniques enhance both security and efficiency with the assistance of advanced tools including Satisfiability Modulo Theories (SMT) solvers. Specifically, we propose two complementary tools, AutoGroup and AutoStrong. AutoGroup converts a pairing-based encryption or signature scheme written in (simple) symmetric group notation into a specific instantiation in the more efficient, asymmetric setting. Some existing symmetric schemes have hundreds of possible asymmetric translations, and this tool allows the user to optimize the construction according to a variety of metrics, such as ciphertext size, key size or computation time. The AutoStrong tool focuses on the security of digital signature schemes by automatically converting an existentially unforgeable signature scheme into a strongly unforgeable one. The main technical challenge here is to automate the “partitioned” check, which allows a highly-efficient transformation. These tools integrate with and complement the AutoBatch tool (ACM CCS 2012), but also push forward on the complexity of the automation tasks by harnessing the power of SMT solvers. Our experiments demonstrate that the two design tasks studied can be performed automatically in a matter of seconds.

Joseph A. Akinyele (Johns Hopkins University), Matthew Green (Johns Hopkins University), Susan Hohenberger (Johns Hopkins University)

Detecting Stealthy, Distributed SSH Bruteforcing

In this work we propose a general approach for detecting distributed malicious activity in which individual attack sources each operate in a stealthy, low-profile manner. We base our approach on observing statistically significant changes in a parameter that summarizes aggregate activity, bracketing a distributed attack in time, and then determining which sources present during that interval appear to have coordinated their activity. We apply this approach to the problem of detecting stealthy distributed SSH bruteforcing activity, showing that we can model the process of legitimate users failing to authenticate using a beta-binomial distribution, which enables us to tune a detector that trades off an expected level of false positives versus time-to-detection. Using the detector we study the prevalence of distributed bruteforcing, finding dozens of instances in an extensive 8-year dataset collected from a site with several thousand SSH users. Many of the attacks—some of which last months—would be quite difficult to detect individually. While a number of the attacks reflect indiscriminate global probing, we also find attacks that targeted only the local site, as well as occasional attacks that succeeded.
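The beta-binomial test at the heart of such a detector can be sketched as follows. The prior parameters `a`, `b` and the threshold `alpha` below are placeholders one would fit from a site's historical login data, not the paper's values:

```python
from math import lgamma, exp

def log_beta(x, y):
    # log B(x, y) via the log-gamma function, numerically stable
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def beta_binomial_pmf(k, n, a, b):
    """P[k failures out of n login attempts] under a beta-binomial model:
    each interval's failure probability is itself Beta(a, b)-distributed,
    capturing the overdispersion of legitimate users' failed logins."""
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_choose + log_beta(k + a, n - k + b) - log_beta(a, b))

def tail_prob(k, n, a, b):
    """P[K >= k]: how surprising is seeing at least k failures?"""
    return sum(beta_binomial_pmf(j, n, a, b) for j in range(k, n + 1))

def is_anomalous(failures, attempts, a=1.5, b=20.0, alpha=1e-4):
    # Flag an interval whose aggregate failure count is implausibly high
    # under the model of legitimate users failing to authenticate.
    return tail_prob(failures, attempts, a, b) < alpha
```

Choosing `alpha` directly realizes the trade-off the abstract mentions: a smaller threshold means fewer false positives but a longer time-to-detection for a slow, distributed attack.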

Mobin Javed (UC Berkeley), Vern Paxson (UC Berkeley and ICSI)

Relational Abstraction in Community-Based Secure Collaboration

Under scientific collaborations, resource sharing tends to be highly dynamic and often ad hoc. The dynamic characteristics and sharing patterns of ad-hoc collaborative sharing impose a need for comprehensive and flexible approaches to reflect and cope with the unique access control requirements associated with the ad-hoc collaboration. In this paper, we propose a role-based access management framework to enable secure resource sharing, especially focusing on the digital information sharing in the heterogeneous scientific collaboration environments. Our framework incorporates a role-based approach to address distributed access control, delegation and dissemination control involved in the resource sharing within such environments. A set of XACML-based policy schemas is proposed to specify policies for our framework. To demonstrate the feasibility of our framework, we design and implement a proof-of-concept prototype system called ShareEnabler, which is based on a peer-to-peer information sharing toolkit developed by Lawrence Berkeley National Laboratory.

Philip Fong (University of Calgary), Pooya Mehregan (University of Calgary), Ram Krishnan (University of Texas at San Antonio)

Relational Abstract Interpretation for the Verification of 2-Hypersafety Properties

Information flow properties of programs can be formalized as hyperproperties specifying the relation of multiple executions. In this paper, we therefore introduce a framework for proving 2-hypersafety properties by means of abstract interpretation. The main idea is to apply abstract interpretation on the self- compositions of the control flow graphs of programs. As a result, our method is inherently capable of analyzing relational properties of even dissimilar programs. Constructing self-compositions of control flow graphs is nontrivial. Therefore, we present an algorithm for constructing quality self-compositions driven by a tree distance measure between the abstract syntax trees of subprograms. Finally, we demonstrate the applicability of the approach by proving intricate information flow properties of programs written in a simple language for tree manipulation motivated by the Web Services Business Process Execution Language.
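The self-composition idea behind the framework can be illustrated dynamically: to check a 2-hypersafety property such as noninterference, run two copies of the program side by side on inputs that agree on public data and compare their public outputs. This is a toy runtime sketch of the 2-safety notion, not the paper's static abstract-interpretation analysis over composed control flow graphs:

```python
def noninterferent(prog, low, high_pairs):
    """2-safety check by (dynamic) self-composition: prog is run twice
    with the same low (public) input but different high (secret) inputs.
    A differing low output witnesses an information-flow violation, i.e.
    a counterexample pair of executions."""
    for h1, h2 in high_pairs:
        if prog(low, h1) != prog(low, h2):
            return False
    return True

def leaky(low, high):
    # The public output depends on the secret: insecure.
    return low + (1 if high > 0 else 0)

def secure(low, high):
    # The secret never influences the output: secure.
    return low * 2
```

The static analysis in the paper effectively quantifies over *all* such execution pairs at once by running abstract interpretation on a self-composition of the program's control flow graph, rather than sampling pairs as above.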

Máté Kovács (Technische Universität München), Helmut Seidl (Technische Universität München), Bernd Finkbeiner (Saarland University)

Content-Based Isolation: Rethinking Isolation Policy Design on Client Systems

Modern client platforms, such as iOS, Android, Windows Phone, and Windows 8, have progressed from a per-user isolation policy, where users are isolated but a user’s applications run in the same isolation container, to an application isolation policy, where different applications are isolated from one another. However, this is not enough because mutually distrusting content can interfere with one another inside a single application. For example, an attacker-crafted image may compromise a photo editor application and steal other images processed by the editor. In this paper, we advocate a content-based principal model in which the OS treats content owners as its principals and isolates content of different owners from one another. Our key contribution is to generalize the content-based principal model from web browsers, namely, the same-origin policy, into an isolation policy that is suitable for all applications. The key challenge we faced is to support flexible isolation granularities while remaining compatible with the web. In this paper, we present the design, implementation, and evaluation of our prototype system that tackles this challenge.

Alexander Moshchuk (Microsoft Research), Helen Wang (Microsoft Research), Yunxin Liu (Microsoft Research Asia)

mXSS Attacks: Attacking well-secured Web-Applications by using innerHTML Mutations

Back in 2007, Hasegawa discovered a novel Cross-Site Scripting (XSS) vector based on the mistreatment of the backtick character in a single browser implementation. This initially looked like an implementation error that could easily be fixed. Instead, as this paper shows, it was the first example of a new class of XSS vectors, the class of mutation-based XSS (mXSS) vectors, which may occur in innerHTML and related properties. mXSS affects all three major browser families: IE, Firefox, and Chrome. We were able to place stored mXSS vectors in high-profile applications like Yahoo! Mail, Rediff Mail, OpenExchange, Zimbra, Roundcube, and several commercial products. mXSS vectors bypassed widely deployed server-side XSS protection techniques (like HTML Purifier, kses, htmlLawed, Blueprint and Google Caja), client-side filters (XSS Auditor, IE XSS Filter), Web Application Firewall (WAF) systems, as well as Intrusion Detection and Intrusion Prevention Systems (IDS/IPS). We describe a scenario in which seemingly immune entities are being rendered prone to an attack based on the behavior of an involved party, in our case the browser. Moreover, it proves very difficult to mitigate these attacks: In browser implementations, mXSS is closely related to performance enhancements applied to the HTML code before rendering; in server side filters, strict filter rules would break many web applications since the mXSS vectors presented in this paper are harmless when sent to the browser. This paper introduces and discusses a set of seven different subclasses of mXSS attacks, among which only one was previously known. The work evaluates the attack surface, showcases examples of vulnerable high-profile applications, and provides a set of practicable and low-overhead solutions to defend against these kinds of attacks.

Mario Heiderich (Ruhr-Universität Bochum), Jörg Schwenk (Ruhr-Universität Bochum), Tilman Frosch (Ruhr-Universität Bochum), Jonas Magazinius (Chalmers University of Technology), Edward Z. Yang (Stanford University)

HIFS: History Independence for File Systems

Ensuring complete irrecoverability of deleted data is difficult to achieve in modern systems. Simply overwriting data or deploying encryption with ephemeral keys is not sufficient. The mere (previous) existence of deleted records impacts the current system state implicitly at all layers. This can be used as an oracle to derive information about the past existence of deleted records. Yet there is hope. If all system layers would exhibit history independence, such implicit history-related oracles would disappear. However, achieving history independence efficiently is hard due to the fact that current systems are designed to heavily benefit from (data and time) locality at all layers through heavy caching, and existing history independent data structures completely destroy locality. In this work we devise a way to achieve history independence while preserving locality (and thus remain practical). We then design, implement and experimentally evaluate the first history independent file system (HIFS). HIFS guarantees secure deletion by providing full history independence across both file system and disk layers of the storage stack. It preserves data locality, and provides tunable efficiency knobs to suit different history-sensitive application scenarios.
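History independence itself is a simple invariant to state: the stored representation must be a canonical function of the current contents, so two insert/delete histories that end in the same logical state are byte-identical, leaving no oracle about deleted records. A minimal uniquely-represented set illustrates the invariant (our toy example; it deliberately ignores the locality-preserving layout that is HIFS's actual contribution):

```python
import bisect

class CanonicalSet:
    """A uniquely represented set: the layout depends only on the current
    contents, never on the order of past inserts and deletes."""

    def __init__(self):
        self._items = []                     # kept sorted at all times

    def insert(self, x):
        i = bisect.bisect_left(self._items, x)
        if i == len(self._items) or self._items[i] != x:
            self._items.insert(i, x)

    def delete(self, x):
        i = bisect.bisect_left(self._items, x)
        if i < len(self._items) and self._items[i] == x:
            self._items.pop(i)

    def serialize(self):
        # Canonical "on-disk" image: equal contents => equal bytes,
        # so a deleted element leaves no trace in the representation.
        return repr(self._items)
```

A sorted array is canonical but destroys insertion locality, which is exactly the tension the abstract describes: HIFS's contribution is achieving this equal-state-equal-representation property while still preserving locality across the file system and disk layers.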

Sumeet Bajaj (Stony Brook University), Radu Sion (Stony Brook University)

Delta: Automatic Identification of Unknown Web-Based Infection Campaigns

Identifying malicious web sites has become a major challenge in today’s Internet. Previous work focused on detecting if a web site is malicious by dynamically executing JavaScript in instrumented environments or by rendering web sites in client honeypots. Both techniques bear a significant evaluation overhead, since the analysis can take up to tens of seconds or even minutes per sample. In this paper, we introduce a novel, purely static analysis approach, the Delta-system, that (i) extracts change-related features between two versions of the same website, (ii) uses a machine-learning algorithm to derive a model of web site changes, (iii) detects if a change was malicious or benign, (iv) identifies the underlying infection vector campaign based on clustering, and (v) generates an identifying signature. We demonstrate the effectiveness of the Delta-system by evaluating it on a dataset of over 26 million pairs of web sites collected by running alongside a web crawler for a period of four months. Over this time span, the Delta-system successfully identified previously unknown infection campaigns, including a campaign that targeted installations of the Discuz!X Internet forum software, injected infection vectors into these forums, and redirected to an installation of the Cool Exploit Kit.
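The change-centric feature extraction (step (i)) can be sketched with a line diff between two versions of a page, counting inserted markup that is typical of infection vectors. The feature names and patterns below are our illustrative choices, not the Delta-system's learned feature set:

```python
import difflib
import re

# Hypothetical indicators of injected infection vectors; a real system
# would learn its feature model rather than hard-code patterns.
SUSPICIOUS = {
    "new_script": re.compile(r"<script\b", re.I),
    "new_iframe": re.compile(r"<iframe\b", re.I),
    "new_eval":   re.compile(r"\beval\s*\(", re.I),
}

def change_features(old_html, new_html):
    """Features derived only from what *changed* between two versions,
    so unchanged (benign) page content costs nothing to analyze."""
    feats = {name: 0 for name in SUSPICIOUS}
    diff = difflib.ndiff(old_html.splitlines(), new_html.splitlines())
    for line in diff:
        if line.startswith("+ "):        # content added in the new version
            for name, pat in SUSPICIOUS.items():
                feats[name] += len(pat.findall(line))
    return feats
```

Operating purely on the textual delta is what lets such an approach run next to a crawler at scale, instead of paying seconds-to-minutes of dynamic analysis per sample.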

Kevin Borgolte (UC Santa Barbara), Christopher Kruegel (UC Santa Barbara), Giovanni Vigna (UC Santa Barbara)

Measuring Password Guessability for an Entire University

Despite considerable research on passwords, empirical studies of password strength have been limited by lack of access to plaintext passwords, small data sets, and password sets specifically collected for a research study or from low-value accounts. Properties of passwords used for high-value accounts thus remain poorly understood. We fill this gap by studying the single-sign-on passwords used by over 25,000 faculty, staff, and students at a research university with a complex password policy. Key aspects of our contributions rest on our (indirect) access to plaintext passwords. We describe our data collection methodology, particularly the many precautions we took to minimize risks to users. We then analyze how guessable the collected passwords would be during an offline attack by subjecting them to a state-of-the-art password cracking algorithm. We discover significant correlations between a number of demographic and behavioral factors and password strength. For example, we find that users associated with the computer science school make passwords more than 1.5 times as strong as those of users associated with the business school. In addition, we find that stronger passwords are correlated with a higher rate of errors entering them. We also compare the guessability and other characteristics of the passwords we analyzed to sets previously collected in controlled experiments or leaked from low-value accounts. We find more consistent similarities between the university passwords and passwords collected for research studies under similar composition policies than we do between the university passwords and subsets of passwords leaked from low-value accounts that happen to comply with the same policies.

Michelle L. Mazurek (Carnegie Mellon University), Saranga Komanduri (Carnegie Mellon University), Timothy Vidas (Carnegie Mellon University), Lujo Bauer (Carnegie Mellon University), Nicolas Christin (Carnegie Mellon University), Lorrie Faith Cranor (Carnegie Mellon University), Patrick Gage Kelley (University of New Mexico), Richard Shay (Carnegie Mellon University), Blase Ur (Carnegie Mellon University)

Unauthorized Origin Crossing on Mobile Platforms: Threats and Mitigation

With the progress in mobile computing, web services are increasingly delivered to their users through mobile apps, instead of web browsers. However, unlike the browser, which enforces origin-based security policies to mediate the interactions between the web content from different sources, today’s mobile OSes do not have a comparable security mechanism to control the cross-origin communications between apps, as well as those between an app and the web. As a result, a mobile user’s sensitive web resources could be exposed to harm from a malicious origin. In this paper, we report the first systematic study on this mobile cross-origin risk. Our study inspects the main cross-origin channels on Android and iOS, including intent, scheme and web-accessing utility classes, and further analyzes the ways popular web services (e.g., Facebook, Dropbox, etc.) and their apps utilize those channels to serve other apps. The research shows that lack of origin-based protection opens the door to a wide spectrum of cross-origin attacks. These attacks are unique to mobile platforms, and their consequences are serious: for example, using carefully designed techniques for mobile cross-site scripting and request forgery, an unauthorized party can obtain a mobile user’s Facebook/Dropbox authentication credentials and record her text input. We report our findings to related software vendors, who all acknowledged their importance. To address this threat, we designed an origin-based protection mechanism, called Morbs, for mobile OSes. Morbs labels every message with its origin information, lets developers easily specify security policies, and enforces the policies on the mobile channels based on origins. Our evaluation demonstrates the effectiveness of our new technique in defeating unauthorized origin crossing, its efficiency, and its convenience for developers.

Rui Wang (Microsoft Research), Luyi Xing (Indiana University), Xiaofeng Wang (Indiana University), Shuo Chen (Microsoft Research)

The Impact of Vendor Customizations on Android Security

The smartphone market has grown explosively in recent years, as more and more consumers are attracted to the sensor-studded multipurpose devices. Android is particularly ascendant; as an open platform, smartphone manufacturers are free to extend and modify it, allowing them to differentiate themselves from their competitors. However, vendor customizations will inherently impact overall Android security and such impact is still largely unknown. In this paper, we analyze ten representative stock Android images from five popular smartphone vendors (with two models from each vendor). Our goal is to assess the extent of security issues that may be introduced from vendor customizations and further determine how the situation is evolving over time. In particular, we take a three-stage process: First, given a smartphone’s stock image, we perform provenance analysis to classify each app in the image into three categories: apps originating from the AOSP, apps customized or written by the vendor, and third-party apps that are simply bundled into the stock image. Such provenance analysis allows for proper attribution of detected security issues in the examined Android images. Second, we analyze permission usages of pre-loaded apps to identify overprivileged ones that unnecessarily request more Android permissions than they actually use. Finally, in vulnerability analysis, we detect buggy pre-loaded apps that can be exploited to mount permission re-delegation attacks or leak private information. Our evaluation results are worrisome: vendor customizations are significant on stock Android devices and on the whole responsible for the bulk of the security problems we detected in each device. Specifically, our results show that on average 85.78% of all pre-loaded apps in examined stock images are overprivileged with a majority of them directly from vendor customizations. In addition, 64.71% to 85.00% of vulnerabilities we detected in examined images from every vendor (except for Sony) arose from vendor customizations. In general, this pattern held over time – newer smartphones, we found, are not necessarily more secure than older ones.

Lei Wu (North Carolina State University), Michael Grace (North Carolina State University), Yajin Zhou (North Carolina State University), Chiachih Wu (North Carolina State University), Xuxian Jiang (North Carolina State University)

Flexible and Scalable Digital Signatures in TPM 2.0

Trusted Platform Modules (TPM) are multipurpose hardware chips, which provide support for various cryptographic functions. Flexibility, scalability and high performance are critical features for a TPM. In this paper, we present the new method for implementing digital signatures that has been included in TPM version 2.0. The core part of this method is a single TPM signature primitive, which can be called by different software programmes, in order to implement signature schemes and cryptographic protocols with different security and privacy features. We prove security of the TPM signature primitive under the static Diffie-Hellman assumption and the random oracle model. We demonstrate how to call this TPM signature primitive to implement anonymous signatures (Direct Anonymous Attestation), pseudonym systems (U-Prove), and conventional signatures (the Schnorr signature). To the best of our knowledge, this is the first signature primitive implemented in a limited hardware environment capable of supporting various signature schemes without adding additional hardware complexity compared to a hardware implementation of a conventional signature scheme.

Liqun Chen (HP Labs), Jiangtao Li (Intel Labs)

Outsourced Symmetric Private Information Retrieval

In the setting of searchable symmetric encryption (SSE), a data owner D outsources a database (or document/file collection) to a remote server E in encrypted form such that D can later search the collection at E while hiding information about the database and queries from E. Leakage to E is to be confined to well-defined forms of data-access and query patterns while preventing disclosure of explicit data and query plaintext values. Recently, Cash et al. presented a protocol, OXT, which can run arbitrary boolean queries in the SSE setting and which is remarkably efficient even for very large databases. In this paper we investigate a richer setting in which the data owner D outsources its data to a server E but D now wishes to allow clients (third parties) to search the database such that clients learn the information D authorizes them to learn, but nothing else, while E still does not learn about the data or queried values, as in the basic SSE setting. Furthermore, motivated by a wide range of applications, we extend this model and requirements to a setting where, similarly to private information retrieval, the client’s queried values need to be hidden also from the data owner D even though the latter still needs to authorize the query. Finally, we consider the scenario in which authorization can be enforced by the data owner D without D learning the policy, a setting that arises in court-issued search warrants. We extend the OXT protocol of Cash et al. to support arbitrary boolean queries in all of the above models while withstanding adversarial non-colluding servers (D and E) and arbitrarily malicious clients, and while preserving the remarkable performance of the protocol.

Stanislaw Jarecki (University of California, Irvine), Charanjit Jutla (IBM T.J. Watson Research Center), Hugo Krawczyk (IBM), Marcel C. Rosu (IBM T.J. Watson), Michael Steiner (IBM Research)

LogGC: Garbage Collecting Audit Log

System-level audit logs capture the interactions between applications and the runtime environment. They are highly valuable for forensic analysis that aims to identify the root cause of an attack, which may have occurred long ago, or to determine the ramifications of an attack for recovery from it. A key challenge of audit log-based forensics in practice is the sheer size of the log files generated, which could grow at a rate of gigabytes per day. In this paper, we propose LogGC, an audit logging system with garbage collection (GC) capability. We identify and overcome the unique challenges of garbage collection in the context of computer forensic analysis, which makes LogGC different from traditional memory GC techniques. We also develop techniques that instrument user applications at a small number of selected places to emit additional system events so that we can substantially reduce the false dependences between system events to improve GC effectiveness. Our results show that LogGC can reduce audit log size by 14 times for regular user systems and 37 times for server systems, without affecting the accuracy of forensic analysis.
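The core garbage-collection criterion can be sketched as reachability over the event dependence graph: an event is collectable when nothing that still exists (or that ever flowed into something that still exists) depends on it. This is a simplification of LogGC's actual rules; in particular it omits the instrumentation-based reduction of false dependences described in the abstract:

```python
def collect(events, live_objects):
    """Garbage-collect audit events by backward reachability.

    events: list of (src, dst) edges meaning information flowed
    src -> dst (e.g. a process wrote a file, a process read a file).
    Keep only events whose destination lies on a backward path from a
    still-live object; everything else is forensically irrelevant.
    """
    preds = {}
    for src, dst in events:
        preds.setdefault(dst, []).append(src)
    needed, frontier = set(live_objects), list(live_objects)
    while frontier:                      # walk dependence edges backwards
        node = frontier.pop()
        for p in preds.get(node, []):
            if p not in needed:
                needed.add(p)
                frontier.append(p)
    return [(s, d) for s, d in events if d in needed]
```

In this model, writes to a temporary file that was later deleted and never influenced anything live are dropped, which is the kind of event LogGC reports as dominating real audit logs.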

Kyu Hyung Lee (Purdue University), Xiangyu Zhang (Purdue University), Dongyan Xu (Purdue University)

The Robustness of Hollow CAPTCHAs

We carry out a systematic study of existing visual CAPTCHAs based on distorted characters that are augmented with anti-segmentation techniques. Applying a systematic evaluation methodology to 15 current CAPTCHA schemes from popular web sites, we find that 13 are vulnerable to automated attacks. Based on this evaluation, we identify a series of recommendations for CAPTCHA designers and attackers, and possible future directions for producing more reliable human/computer distinguishers.

Haichang Gao (Xidian University), Wei Wang (Xidian University), Jiao Qi (Xidian University), Xuqin Wang (Xidian University), Xiyang Liu (Xidian University), Jeff Yan (Newcastle University)

Security Analysis of a Widely Deployed Locking System

Electronic locking systems are rather new products in the physical access control market. In contrast to mechanical locking systems, they provide several convenient features such as more flexible access rights management, the possibility to revoke physical keys and the claim that electronic keys cannot be cloned as easily as their mechanical counterparts. While for some electronic locks, mechanical flaws have been found, only a few publications analyzed the cryptographic security of electronic locking systems. In this paper, we analyzed the electronic security of an electronic locking system which is still widely deployed in the field. We reverse-engineered the radio protocol and cryptographic primitives used in the system. While we consider the system concepts to be well-designed, we discovered some implementation flaws that allow the extraction of a system-wide master secret with a brute-force attack or by performing a Differential Power Analysis attack on any electronic key. In addition, we discovered a weakness in the Random Number Generator that allows opening a door without breaking cryptography under certain circumstances. We suggest administrative and technical countermeasures against all proposed attacks. Finally, we examine electronic lock security standards and recommend changes to one widely used standard that can help to improve the security of newly developed products.

Michael Weiner (Technische Universität München), Maurice Massar (Technische Universität Kaiserslautern), Erik Tews (Technische Universität Darmstadt), Dennis Giese (Technische Universität Darmstadt), Wolfgang Wieser (Ludwig-Maximilians-Universität München)

Elligator: Elliptic-Curve Points Indistinguishable from Uniform Random Strings

Censorship-circumvention tools are in an arms race against censors. The censors study all traffic passing into and out of their controlled sphere, and try to disable censorship-circumvention tools without completely shutting down the Internet. Tools aim to shape their traffic patterns to match unblocked programs, so that simple traffic profiling cannot identify the tools within a reasonable number of traces; the censors respond by deploying firewalls with increasingly sophisticated deep-packet inspection. Cryptography hides patterns in user data but does not evade censorship if the censor can recognize patterns in the cryptography itself. In particular, elliptic-curve cryptography often transmits points on known elliptic curves, and those points are easily distinguishable from uniform random strings of bits. This paper introduces high-security, high-speed elliptic-curve systems in which elliptic-curve points are encoded so as to be indistinguishable from uniform random strings. At a lower level, this paper introduces a new bijection between strings and about half of all curve points; this bijection is applicable to every odd-characteristic elliptic curve with a point of order 2, except for curves of $j$-invariant 1728. This paper also presents guidelines to construct, and two examples of, secure curves suitable for these encodings.

Daniel Bernstein (University of Illinois at Chicago), Mike Hamburg (Cryptography Research), Anna Krasnova (RU Nijmegen), Tanja Lange (Technische Universiteit Eindhoven)

Geo-Indistinguishability: Differential Privacy for Location-Based Systems

The growing popularity of location-based systems, which allow unknown/untrusted servers to easily collect huge amounts of information regarding users’ locations, has recently started raising serious privacy concerns. In this paper we introduce geo-indistinguishability, a formal notion of privacy for location-based systems that protects the user’s exact location, while allowing approximate information – typically needed to obtain a certain desired service – to be released. This privacy definition formalizes the intuitive notion of protecting the user’s location within a radius $r$ with a level of privacy that depends on $r$, and corresponds to a generalized version of the well-known concept of differential privacy. Furthermore, we present a mechanism for achieving geo-indistinguishability by adding controlled random noise to the user’s location. We describe how to use our mechanism to enhance LBS applications with geo-indistinguishability guarantees without compromising the quality of the application results. Finally, we compare state-of-the-art mechanisms from the literature with ours. It turns out that, among all mechanisms independent of the prior, ours offers the best privacy guarantees.
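A common way to realize a noise mechanism of this kind is planar (2-D) Laplace noise, whose density decays exponentially with distance from the true location. The sketch below is illustrative and not necessarily the paper's exact construction; it treats coordinates as planar units and uses the fact that the radial part of the planar Laplace distribution is Gamma(2, 1/epsilon):

```python
import math
import random

def planar_laplace(lat, lon, epsilon):
    """Perturb a location with 2-D (planar) Laplace noise.

    The planar Laplace density is proportional to exp(-epsilon * d),
    where d is the distance from the true point. In polar coordinates
    the angle is uniform and the radius follows Gamma(shape=2,
    scale=1/epsilon). Coordinates are treated as planar units here for
    simplicity (a real system would work in meters).
    """
    theta = random.uniform(0, 2 * math.pi)       # uniform direction
    r = random.gammavariate(2, 1.0 / epsilon)    # noisy distance
    return lat + r * math.cos(theta), lon + r * math.sin(theta)

# Report a location with privacy parameter epsilon = 0.1 per unit distance:
noisy = planar_laplace(48.8566, 2.3522, epsilon=0.1)
```

Smaller epsilon means more noise (the expected displacement is 2/epsilon), matching the intuition that the privacy level depends on the radius being protected.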

Miguel E. Andrés (École Polytechnique), Nicolás E. Bordenabe (INRIA and École Polytechnique), Konstantinos Chatzikokolakis (CNRS and École Polytechnique), Catuscia Palamidessi (INRIA and École Polytechnique)

Scheduling Blackbox Mutational Fuzzing

Black-box mutational fuzzing is a simple yet effective technique to find bugs in software. Given a set of program-seed pairs, we ask how to schedule the fuzzing of these pairs in order to maximize the number of unique bugs found at any point in time. We develop an analytic framework using a mathematical model of black-box mutational fuzzing and use it to evaluate 26 existing and new randomized online scheduling algorithms. Our experiments show that one of our new scheduling algorithms outperforms the multi-armed bandit algorithm in the current version of the CERT Basic Fuzzing Framework (BFF) by finding 1.5x more unique bugs in the same amount of time.
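As a toy illustration of randomized online scheduling over program-seed pairs (not one of the paper's 26 algorithms), here is an epsilon-greedy scheduler; `fuzz_once` is a hypothetical callback that runs one fuzzing epoch on a pair and returns the set of unique bug identifiers it found:

```python
import random

def schedule(pairs, fuzz_once, rounds, eps=0.1):
    """Epsilon-greedy online scheduling of (program, seed) fuzzing pairs.

    `fuzz_once(pair)` is assumed to run one fuzzing epoch and return a
    set of unique bug identifiers (hypothetical interface). With
    probability eps we explore a random pair; otherwise we exploit the
    pair with the best observed bugs-per-run rate.
    """
    bugs = {p: set() for p in pairs}   # unique bugs found per pair
    runs = {p: 0 for p in pairs}       # fuzzing epochs spent per pair
    for _ in range(rounds):
        if random.random() < eps or all(r == 0 for r in runs.values()):
            p = random.choice(pairs)   # explore
        else:                          # exploit the best observed rate
            p = max(pairs, key=lambda q: len(bugs[q]) / max(runs[q], 1))
        bugs[p] |= fuzz_once(p)
        runs[p] += 1
    return set().union(*bugs.values())
```

The objective mirrors the abstract's: maximize unique bugs found at any point in time, given a fixed fuzzing budget.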

Maverick Woo (Carnegie Mellon University), Sang Kil Cha (Carnegie Mellon University), Samantha Gottlieb (Carnegie Mellon University), David Brumley (Carnegie Mellon University)

MinimaLT: Minimal-latency Networking Through Better Security

MinimaLT is a new network protocol that provides ubiquitous encryption for maximal confidentiality, including protecting packet headers. MinimaLT provides server and user authentication, extensive Denial-of-Service protections, privacy-preserving IP mobility, and fast key erasure. We describe the protocol, demonstrate its performance relative to TLS and unencrypted TCP/IP, and analyze its protections, including its resilience against DoS attacks. By exploiting the properties of its cryptographic protections, MinimaLT is able to eliminate three-way handshakes and thus create connections faster than unencrypted TCP/IP.

W. Michael Petullo (University of Illinois at Chicago), Jon Solworth (University of Illinois at Chicago), Daniel Bernstein (University of Illinois at Chicago), Tanja Lange (TU Eindhoven), Xu Zhang (University of Illinois at Chicago)

More Efficient Oblivious Transfer and Extensions for Faster Secure Computation

Protocols for secure computation enable parties to compute a joint function on their private inputs without revealing anything but the result. A foundation for secure computation is oblivious transfer (OT), which traditionally requires expensive public key cryptography. A more efficient way to perform many OTs is to extend a small number of base OTs using OT extensions based on symmetric cryptography. In this work we present optimizations and efficient implementations of OT and OT extensions in the semi-honest model. We propose a novel OT protocol with security in the standard model and improve OT extensions with respect to communication complexity, computation complexity, and scalability. We also provide specific optimizations of OT extensions that are tailored to the secure computation protocols of Yao and Goldreich-Micali-Wigderson and reduce the communication complexity even further. We experimentally verify the efficiency gains of our protocols and optimizations. By applying our implementation to current secure computation frameworks, we can securely compute a Levenshtein distance circuit with 1.29 billion AND gates at a rate of 1.2 million AND gates per second. Moreover, we demonstrate the importance of correctly implementing OT within secure computation protocols by presenting an attack on the FastGC framework.

Gilad Asharov (Bar-Ilan University), Yehuda Lindell (Bar-Ilan University), Thomas Schneider (TU Darmstadt), Michael Zohner (TU Darmstadt)

Efficient Targeted Key Subset Retrieval in Fractal Hash Sequences

This paper presents a new hash chain traversal strategy which improves the performance of hash chain based one-time authentication schemes. This work is motivated by the need for efficient message authentication in low-latency multicast systems. Proposed solutions such as TV-OTS rely on hash-chain-generated values for keys, achieving reliable security by using only a small subset of the generated values from each chain. However, protocols using hash chains are limited by the rate at which a hash chain traversal is able to supply keys. The new algorithm uses the same structure as Fractal Hash Traversal, but eliminates redundant operations incurred when used with applications such as TV-OTS. Performance is measured in terms of savings and is proportional to the chain distance between consecutively retrieved values. For a distance of delta, we achieve Theta(delta * 2^delta) savings, which is shown analytically and supported by empirical tests.
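For context, the hash-chain construction that such one-time schemes build on can be sketched as follows (this is the naive store-everything version, not the paper's fractal traversal, which keeps only O(log n) intermediate "pebbles"):

```python
import hashlib

def build_chain(seed: bytes, n: int) -> list:
    """Generate a hash chain v[0..n] with v[i] = SHA-256(v[i-1]).

    The anchor v[n] is published; earlier values are revealed in
    reverse order (v[n-1], v[n-2], ...) as one-time keys, since anyone
    can hash forward to the anchor but nobody can invert the hash.
    """
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(value: bytes, anchor: bytes, distance: int) -> bool:
    """Check that hashing `value` `distance` times yields the anchor."""
    for _ in range(distance):
        value = hashlib.sha256(value).digest()
    return value == anchor

chain = build_chain(b"secret-seed", 100)
anchor = chain[-1]
# Reveal the value 5 steps before the anchor as a one-time key:
assert verify(chain[95], anchor, 5)
```

Traversal algorithms such as Fractal Hash Traversal differ only in how they recompute chain values on demand without storing the whole chain.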

Kelsey Cairns (Washington State University), Thoshitha Gamage (Washington State University), Carl Hauser (Washington State University)

Hang with Your Buddies to Resist Intersection Attacks

Some anonymity schemes might in principle protect users from pervasive network surveillance–but only if all messages are independent and unlinkable. Users in practice often need pseudonymity–sending messages intentionally linkable to each other but not to the sender–but pseudonymity in dynamic networks exposes users to intersection attacks. We present Buddies, the first systematic design for intersection attack resistance in practical anonymity systems. Buddies groups users dynamically into buddy sets, controlling message transmission to make buddies within a set behaviorally indistinguishable under traffic analysis. To manage the inevitable tradeoffs between anonymity guarantees and communication responsiveness, Buddies enables users to select independent attack mitigation policies for each pseudonym. Using trace-based simulations and a working prototype, we find that Buddies can guarantee non-trivial anonymity set sizes in realistic chat/microblogging scenarios, for both short-lived and long-lived pseudonyms.

David Wolinsky (Yale University), Ewa Syta (Yale University), Bryan Ford (Yale University)

Duppel: Retrofitting Commodity Operating Systems to Mitigate Cache Side Channels in the Cloud

This paper presents the design, implementation and evaluation of a system called Duppel that enables a tenant virtual machine to defend itself from cache-based side-channel attacks in public clouds. Duppel includes defenses for time-shared caches such as per-core L1 and L2 caches. Experiments in the lab and on public clouds show that Duppel effectively obfuscates timing signals available to an attacker VM via these caches and incurs modest performance overheads (at most 7% and usually much less) in the common case of no side-channel attacks. Moreover, Duppel requires no changes to hypervisors or support from cloud operators.

XXX

Cover Your ACKs: Pitfalls of Covert Channel Censorship Circumvention

In response to increasingly sophisticated methods of blocking access to censorship circumvention schemes such as Tor, recently proposed systems such as SkypeMorph, FreeWave, and CensorSpoofer have used voice and video conferencing protocols as “cover channels” to hide proxy connections. We demonstrate that even with perfect emulation of the cover channel, these systems can be vulnerable to attacks that detect or disrupt the covert communications while having no effect on legitimate cover traffic. Our attacks stem from differences in the channel requirements for the cover protocols, which are peer-to-peer and loss tolerant, and the covert traffic, which is client-proxy and loss intolerant. These differences represent significant limitations and suggest that such protocols are a poor choice of cover channel for general censorship circumvention schemes.

John Geddes (University of Minnesota), Maxfield Schuchard (University of Minnesota), Nicholas Hopper (University of Minnesota)

Protecting Sensitive Web Content from Client-side Vulnerabilities with Cryptons

Web browsers isolate web origins, but do not provide direct abstractions to isolate sensitive data and control computation over it within the same origin. As a result, guaranteeing security of sensitive web content requires trusting all code in the browser and client-side applications to be vulnerability-free. In this paper, we propose a new abstraction, called Crypton, which supports intra-origin control over sensitive data throughout its life cycle. To securely enforce the semantics of Cryptons, we develop a standalone component called Crypton-Kernel, which extensively leverages the functionality of existing web browsers without relying on their large TCB. Our evaluation demonstrates that the Crypton abstraction supported by the Crypton-Kernel is widely applicable to popular real-world applications with millions of users, including webmail, chat, blog applications, and Alexa Top 50 websites, with low performance overhead.

Xinshu Dong (National University of Singapore), Zhaofeng Chen (Peking University), Hossein Siadati (Polytechnic Institute of New York University), Shruti Tople (National University of Singapore), Prateek Saxena (National University of Singapore), Zhenkai Liang (National University of Singapore)

An Architecture for Practical Actively Secure MPC with Dishonest Majority

We present a runtime environment for executing secure programs via a multi-party computation protocol in the preprocessing model. The runtime environment is general and allows arbitrary reactive computations to be performed. A particularly novel aspect is that it automatically determines the minimum number of rounds needed for a computation, given a specific instruction sequence, and it then uses this to minimize the overall cost of the computation. We report on experiments with various non-trivial functionalities. We show how, by utilizing the ability of modern processors to execute multiple threads at a time, one can obtain various tradeoffs between latency and throughput.
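The round-minimization idea (interactive operations such as multiplications on secret-shared values cost a communication round, while independent ones can share a round) can be illustrated as a depth computation over a hypothetical instruction format; this is a sketch of the idea, not the paper's algorithm:

```python
def min_rounds(instructions):
    """Minimum communication rounds for a straight-line instruction sequence.

    Each instruction is (dest, op, sources). Ops in INTERACTIVE (e.g.
    multiplication of secret-shared values) cost one communication
    round; linear ops (additions, scalar multiplications) are free.
    An instruction's round is the maximum round among its inputs, plus
    one if it is interactive, so the answer is the interactive depth
    of the dependency graph.
    """
    INTERACTIVE = {"mul"}
    round_of = {}
    depth = 0
    for dest, op, sources in instructions:
        r = max((round_of.get(s, 0) for s in sources), default=0)
        if op in INTERACTIVE:
            r += 1
        round_of[dest] = r
        depth = max(depth, r)
    return depth

# (a*b) * (c*d): the two independent multiplications share round 1,
# and the final multiplication needs round 2.
prog = [("t1", "mul", ["a", "b"]),
        ("t2", "mul", ["c", "d"]),
        ("t3", "mul", ["t1", "t2"])]
assert min_rounds(prog) == 2
```

Batching all independent interactive operations into the same round is what lets a runtime trade latency against throughput.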

Marcel Keller (University of Bristol), Peter Scholl (University of Bristol), Nigel Smart (University of Bristol)

Configuration-based IDS for Advanced Metering Infrastructure

Smart grid deployment initiatives have been witnessed in recent years. Smart grids provide bi-directional communication between meters and the headend system through the Advanced Metering Infrastructure (AMI). Recent studies highlight the threats targeting AMI. Despite the need for Intrusion Detection Systems (IDS) tailored to the smart grid, very limited progress has been made in this area. Unlike traditional networks, the smart grid has its own unique challenges, such as devices with limited computational power and potentially high deployment cost, that restrict the deployment options for intrusion detectors. We show that the smart grid exhibits deterministic and predictable behavior that can be accurately modeled to develop an intrusion detection system. In this paper, we show that AMI behavior can be modeled using event logs collected at smart collectors, which in turn can be verified against specification invariants generated from the configurations of the AMI devices. Event logs are modeled using a fourth-order Markov chain, and specifications are written in Linear Temporal Logic (LTL). The approach provides robustness against evasion and mimicry attacks; however, we discuss how it can still be evaded to a certain extent. We validate our approach on a real-world dataset of thousands of meters collected at the AMI of a leading utility provider.
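The event-log modeling step can be sketched as a simple fourth-order (n-gram) Markov model that flags transitions never seen during training; this is an illustrative reduction that omits the LTL specification checking:

```python
from collections import defaultdict

ORDER = 4  # fourth-order Markov chain, as in the approach above

def train(log):
    """Learn transition counts from a sequence of event symbols.

    The state is the tuple of the previous ORDER events; we count how
    often each next event follows each state.
    """
    model = defaultdict(lambda: defaultdict(int))
    for i in range(len(log) - ORDER):
        state = tuple(log[i:i + ORDER])
        model[state][log[i + ORDER]] += 1
    return model

def anomalies(model, log):
    """Flag positions whose 4-event history never predicted the next event."""
    flagged = []
    for i in range(len(log) - ORDER):
        state = tuple(log[i:i + ORDER])
        if model[state][log[i + ORDER]] == 0:
            flagged.append(i + ORDER)
    return flagged
```

Because AMI traffic is highly deterministic, the trained model is small and unseen transitions are strong anomaly signals; a production detector would use probabilities and thresholds rather than a hard zero-count test.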

Muhammad Qasim Ali (University of North Carolina at Charlotte), Ehab Al-Shaer (University of North Carolina at Charlotte)

SAuth: Protecting User Accounts from Password Database Leaks

Password-based authentication is the dominant form of access control in web services. Unfortunately, it proves to be more and more inadequate every year. Even if users choose long and complex passwords, vulnerabilities in the way they are managed by a service may leak them to an attacker. Recent incidents in popular services such as LinkedIn and Twitter demonstrate the impact that such an event could have. The use of one-way hash functions to mitigate the problem is countered by the evolution of hardware, which enables powerful password-cracking platforms. In this paper we propose SAuth, a protocol which employs authentication synergy among different services. Users wishing to access their account on service S will also have to authenticate for their account on service V, which acts as a vouching party. Both services S and V are regular sites visited by the user every day (e.g., Twitter, Facebook, Gmail). Should an attacker acquire the password for service S, he will be unable to log in unless he also compromises the password for service V, and possibly more vouching services. SAuth is an extension of, not a replacement for, existing authentication methods. It operates one layer above, without ties to a specific method, thus enabling different services to employ heterogeneous systems. Finally, we employ password decoys to protect users who share a password across services.

Georgios Kontaxis (Columbia University), Elias Athanasopoulos (Columbia University), Georgios Portokalidis (Stevens Institute of Technology), Angelos D. Keromytis (Columbia University)

An Analysis of the EMV Channel Establishment Protocol

This paper shows how the functionality associated with EMV-compliant payment cards can be securely emulated in software on platforms supporting Trusted Computing technology. We describe a detailed system architecture encompassing user enrolment, card deployment (in the form of software), card activation, and subsequent transaction processing. Our proposal is compatible with the existing EMV transaction processing architecture, and thus integrates fully and naturally with already deployed EMV infrastructure. We show that our proposal, which effectively makes available the full security of PoS transactions for Internet-based CNP transactions, has the potential to significantly reduce the opportunity for fraudulent CNP transactions.

Christina Brzuska (Tel Aviv University), Nigel P. Smart (University of Bristol), Bogdan Warinschi (University of Bristol), Gaven J. Watson (University of Bristol)

Beheading Hydras: Performing Effective Botnet Takedowns

Devices infected with malicious software typically form botnet armies under the influence of one or more command and control (C&C) servers. The botnet problem has reached such levels that federal law enforcement agencies have had to step in and take action against botnets by disrupting (or “taking down”) their C&Cs, and thus their illicit operations. Lately, more and more private companies have started to independently take action against botnet armies, primarily focusing on their DNS-based C&Cs. While well-intentioned, their C&C takedown methodology is in most cases ad hoc, and limited by the breadth of knowledge available about the malware that facilitates the botnet. With this paper, we aim to bring order, measure, and reason to the botnet takedown problem. We propose a takedown analysis and recommendation system, called rza, that allows researchers to perform two tasks: 1) perform a postmortem analysis of past botnet takedowns, and 2) obtain recommendations on how to successfully execute future botnet takedowns. As part of our system evaluation, we perform a postmortem analysis of the recent Kelihos, Zeus and 3322.org takedowns. We show that while some of these takedowns were effective, others did not appear to have a significant long-term impact on the targeted botnet. In addition to the postmortem analysis, we provide takedown recommendation metrics for 45 currently active botnets, and we find that 42 of them can likely be disabled entirely using a DNS-based takedown strategy alone.

Yacin Nadji (Georgia Institute of Technology), Manos Antonakakis (Damballa Inc.), Roberto Perdisci (University of Georgia), David Dagon (Georgia Institute of Technology), Wenke Lee (Georgia Institute of Technology)

PoWerStore: Proofs of Writing for Efficient and Robust Storage

Existing Byzantine fault tolerant (BFT) storage solutions that achieve strong consistency and high availability are costly compared to solutions that tolerate simple crashes. This cost is one of the main obstacles to deploying BFT storage in practice. In this paper, we present PoWerStore, a robust and efficient data storage protocol. PoWerStore’s robustness comprises tolerating network outages, the maximum number of Byzantine storage servers, any number of Byzantine readers and crash-faulty writers, and guaranteeing high availability (wait-freedom) and strong consistency (linearizability) of read/write operations. PoWerStore’s efficiency stems from combining lightweight cryptography, erasure coding and metadata write-backs, where readers write back only metadata to achieve strong consistency. Central to PoWerStore is the concept of Proofs of Writing (PoW), a novel data storage technique inspired by commitment schemes. PoW rely on a 2-round write procedure, in which the first round writes the actual data and the second round only serves to prove the occurrence of the first round. PoW enable efficient implementations of strongly consistent BFT storage through metadata write-backs and low-latency reads. We implemented PoWerStore and show its improved performance compared to existing robust storage protocols, including protocols that tolerate only crash faults.

Dan Dobre (NEC Labs Europe), Ghassan Karame (NEC Labs Europe), Wenting Li (NEC Labs Europe), Matthias Majuntke (Capgemini Deutschland), Neeraj Suri (TU Darmstadt), Marko Vukolić (Eurecom)

Privacy-Preserving Matrix Factorization

Collaborative filtering (CF) techniques are becoming increasingly popular with the evolution of the Internet. Such techniques recommend products to customers using similar users’ preference data. The performance of CF systems degrades with an increasing number of customers and products. To reduce the dimensionality of filtering databases and to improve performance, Singular Value Decomposition (SVD) is applied to CF. Although filtering systems are widely used by e-commerce sites, they fail to protect users’ privacy. Since many users might decide to give false information because of privacy concerns, collecting high-quality data from customers is not an easy task. CF systems using such data might produce inaccurate recommendations. In this paper, we discuss SVD-based CF with privacy. To protect users’ privacy while still providing recommendations with decent accuracy, we propose a randomized perturbation-based scheme.
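A minimal sketch of the randomized-perturbation idea, assuming zero-mean uniform noise (the specific noise distribution and range here are illustrative, not the paper's exact scheme):

```python
import random

def perturb(ratings, gamma=1.0):
    """Mask each rating with zero-mean uniform noise in [-gamma, gamma].

    The server sees only the perturbed values, yet aggregate
    computations (such as the SVD-based predictions) remain
    approximately correct, because the noise cancels in expectation.
    """
    return [r + random.uniform(-gamma, gamma) for r in ratings]

# A user masks their ratings before uploading them to the CF server:
masked = perturb([4.0, 2.5, 5.0], gamma=1.0)
```

The parameter gamma trades privacy against recommendation accuracy: larger noise hides individual ratings better but degrades the SVD model.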

Valeria Nikolaenko (Stanford), Stratis Ioannidis (Technicolor), Udi Weinsberg (Technicolor), Marc Joye (Technicolor), Nina Taft (Technicolor), Dan Boneh (Stanford)

PICCO: A General-Purpose Compiler for Private Distributed Computation

Secure computation on private data has been an active area of research for many years and has received a renewed interest with the emergence of cloud computing. In recent years, substantial progress has been made with respect to the efficiency of the available techniques and several implementations have appeared. The available tools, however, lacked a convenient mechanism for implementing a general-purpose program in a secure computation framework suitable for execution in not fully trusted environments. This work fills this gap and describes a system, called PICCO, for converting a program written in an extension of C into its distributed secure implementation and running it in a distributed environment. The C extension preserves all current features of the programming language and allows variables to be marked as private and used in general-purpose computation. The secure distributed implementation of compiled programs is based on linear secret sharing, achieving efficiency and information-theoretic security. Our experiments also indicate that many programs can be evaluated very efficiently on private data using PICCO.
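The linear-secret-sharing foundation can be illustrated with additive sharing over a field, the simplest member of that family (PICCO itself uses a threshold scheme, so this is only a sketch of why linear operations need no interaction between parties):

```python
import random

PRIME = 2**61 - 1  # field modulus (illustrative choice)

def share(secret, n):
    """Additive linear secret sharing: n random shares summing to secret.

    Any n-1 shares are uniformly random and reveal nothing; all n
    shares together reconstruct the secret.
    """
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

s = share(42, 3)
assert reconstruct(s) == 42

# Linear operations work share-wise, with no communication between parties:
a, b = share(10, 3), share(32, 3)
c = [(x + y) % PRIME for x, y in zip(a, b)]
assert reconstruct(c) == 42
```

Multiplication of two shared values is the operation that requires interaction, which is why compilers like PICCO focus on minimizing it.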

Yihua Zhang (University of Notre Dame), Aaron Steele (University of Notre Dame), Marina Blanton (University of Notre Dame)

Control-Alt-Hack: The Design and Evaluation of a Card Game for Computer Security Awareness and Education

We scoped, designed, produced, and evaluated the effectiveness of a recreational tabletop card game created to raise awareness of, and alter perceptions regarding, computer security. We discuss our process, the challenges that arose, and the decisions we made to address those challenges. As of May 2013, we have shipped approximately 800 free copies to 150 educators. We analyze and report on feedback from 22 of these educators about their experiences using Control-Alt-Hack with over 450 students in classroom and non-classroom contexts. The responses from the 14 educators who reported on their use of the game in a classroom context variously indicated that: their students’ awareness of computer security as a complex and interesting field was increased (11/14); they would use the game again in their classroom (10/14); and they would recommend the game to others (13/14). Of note, 2 of the 14 classroom educators reported that they would not otherwise have covered the material. Additionally, we present results from user studies with 11 individuals and find that their responses indicate that 8 of the 11 had an increased awareness of computer security or a changed perception; furthermore, all of our intended goals are touched upon in their responses.

Tamara Denning (University of Washington), Adam Lerner (University of Washington), Adam Shostack, Tadayoshi Kohno (University of Washington)