Bytecode, such as that produced by Java and .NET compilers, has
become an important topic of interest for both industry and academia.
The industrial interest stems from the fact that
bytecode is typically used for Internet and mobile device
applications (smart cards, phones, tablets, etc.), where security
is a major issue. Moreover, bytecode is device independent and
allows dynamic loading of classes, which introduces an extra
challenge for the application of formal methods. Also, the
unstructuredness of the code and the pervasive presence of the
operand stack result in further challenges for the analysis of
bytecode. This workshop will focus on theoretical and practical
aspects of semantics, verification, analysis, certification and
transformation of bytecode.
The workshop will be a mixture of extended abstracts, position papers, and invited presentations. The goal is to make the workshop an active discussion forum for all work related to bytecode. No formal proceedings of the workshop will be published, but selected papers will be invited for a special issue of Science of Computer Programming.
|December 9, 2012, extended to December 23, 2012|
|December 16, 2012, extended to December 28, 2012|
|January 21, 2013|
|February 8, 2013|
Call For Papers
Submission

We solicit extended abstracts and position papers of at most 8 pages, describing work related to semantics, verification, analysis and transformation. In particular, we explicitly welcome tool demonstrations. Submissions may overlap with submissions to other conferences or journals. There will be a lightweight reviewing process, where submissions will be judged on their interest to the workshop audience.
There will not be any formal proceedings of the workshop. All accepted submissions will be distributed among the workshop participants. After the workshop, presenters of extended abstracts, position papers, tool demos and of invited presentations may be invited to contribute to a special issue of Science of Computer Programming. Invitations will be based on submissions and presentations.
Papers can be submitted (as PDF) through the EasyChair page.
- Miguel Gomez-Zamalloa (Complutense University of Madrid, Spain)
- Germán Puebla (Technical University of Madrid, Spain)
- Richard Bubel (Technische Universität Darmstadt)
- Andrew Craik (Oracle Labs, Brisbane, Australia)
- Jürgen Giesl (RWTH Aachen)
- Marieke Huisman (University of Twente)
- Francesco Logozzo (Microsoft Research)
- Mark Marron (IMDEA Software Institute)
- Corina Pasareanu (CMU/NASA Ames Research Center)
- Ricardo Peña (Complutense University of Madrid)
- David Pichardie (INRIA Rennes - Bretagne Atlantique)
- Fausto Spoto (Dipartimento di Informatica, Verona)
- Wolfgang Ahrendt (Chalmers University of Technology)
- Étienne Payet (LIM-IREMIA, Université de la Réunion, France)
09:30  Invited Speaker: Wolfgang Ahrendt
Developments in Object-Oriented Source Code Verification - an Inspiration for Bytecode Verification?
In the area of source-code-level verification of object-oriented software, the speaker has over the past years contributed, with various co-workers, to a number of developments, such as the compositional semantics and verification of concurrent and distributed objects, logics for object creation, the correctness of transformation rules, and verification-based test case generation; recently he has started to work on the combination of static and runtime verification. This talk will present the essence of these directions, also discussing their potential for the analysis of lower-level artifacts, like bytecode. The work this talk is based on is largely conducted in connection with the KeY framework and tool for object-oriented source code verification. The talk will briefly introduce KeY and its underlying 'dynamic logic', to provide a common ground for the discussion of the different directions.
10:30  José Miguel Rojas and Corina Pasareanu
Compositional Symbolic Execution through Program Specialization (slides)
Scalability is a major challenge in symbolic execution. The large number of paths that need to be explored and the large size of the constraints that must be carried often compromise the effectiveness of symbolic execution for software testing in practice. Compositional symbolic execution aims to alleviate these scalability issues by executing the methods of a program separately, storing their results in method summaries and using such summaries to incrementally execute the complete program. We present a novel compositional approach that leverages partial evaluation, a well-established technique for automatically specializing a program with respect to some of its inputs. We report on its design and implementation in Symbolic PathFinder and on promising preliminary evaluation results.
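The summary-reuse idea behind compositional symbolic execution can be sketched in a few lines. The toy below is only an illustration under simplifying assumptions: path conditions are plain Python predicates rather than SMT constraints, and the example methods are invented; the actual system works on Java bytecode inside Symbolic PathFinder. Each method is explored once into (path condition, result) pairs, and call sites conjoin those summaries instead of re-exploring the callee.

```python
# Toy compositional symbolic execution: methods are summarized once as
# (path-condition, result-expression) pairs over a symbolic input, and
# summaries are composed at call sites instead of re-exploring the callee.

from itertools import product

def summarize_abs():
    """Summary of: def abs(v): return v if v >= 0 else -v"""
    return [
        (lambda v: v >= 0, lambda v: v),    # path 1: v >= 0 -> v
        (lambda v: v < 0,  lambda v: -v),   # path 2: v < 0  -> -v
    ]

def summarize_sign():
    """Summary of: def sign(v): return 1 if v > 0 else (0 if v == 0 else -1)"""
    return [
        (lambda v: v > 0,  lambda v: 1),
        (lambda v: v == 0, lambda v: 0),
        (lambda v: v < 0,  lambda v: -1),
    ]

def compose(caller_paths, callee_summary):
    """For each caller path ending in a call, conjoin the callee's path
    conditions on the passed value instead of re-executing the callee."""
    composed = []
    for (pc, expr), (cpc, ceff) in product(caller_paths, callee_summary):
        cond = lambda v, pc=pc, cpc=cpc, expr=expr: pc(v) and cpc(expr(v))
        eff = lambda v, ceff=ceff, expr=expr: ceff(expr(v))
        composed.append((cond, eff))
    return composed

# Whole-program summary for: def f(x): return sign(abs(x) - 1)
step1 = compose([(lambda v: True, lambda v: v)], summarize_abs())
step1 = [(pc, (lambda v, e=e: e(v) - 1)) for pc, e in step1]
paths = compose(step1, summarize_sign())

# 2 abs-paths x 3 sign-paths = 6 composed paths (some infeasible per input).
print(len(paths), "composed paths")
for x in (-7, -1, 0, 1, 7):
    feasible = [eff(x) for pc, eff in paths if pc(x)]
    assert feasible == [1 if abs(x) > 1 else 0 if abs(x) == 1 else -1]
```

The point of the composition step is visible in the path count: the caller never re-executes `abs` or `sign`; it only conjoins their recorded path conditions.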
11:30  Aibek Sarimbekov, Yudi Zheng, Danilo Ansaloni, Lubomir Bulej, Lukas Marek, Walter Binder, Petr Tuma and Zhengwei Qi
Java Bytecode Instrumentation – Reconciling Developer Productivity (slides)
Dynamic program analysis tools serve many important software engineering tasks such as profiling, debugging, testing, program comprehension, and reverse engineering. Many dynamic analysis tools rely on program instrumentation and are implemented using low-level instrumentation libraries, resulting in tedious and error-prone tool development. The Domain-Specific Language for Instrumentation (DiSL) has recently been released with the promise to boost the productivity of tool developers targeting the Java Virtual Machine, without impairing the performance of the resulting tools. DiSL offers high-level programming abstractions especially designed for instrumentation-based dynamic analysis. In this paper we assess whether DiSL meets its promise. We perform a controlled experiment to measure tool development time and correctness of the developed tools, comparing DiSL with a prevailing, state-of-the-art instrumentation library. Our study shows that DiSL significantly improves developer productivity.
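As a rough analogue of the snippet-style abstraction described above, the Python sketch below weaves "before"/"after" analysis snippets around functions. The `weave` helper and the call-counting analysis are invented for illustration; DiSL itself weaves snippets into JVM bytecode, not Python functions.

```python
# A minimal Python analogue of snippet-based instrumentation: the tool
# developer writes high-level entry/exit snippets and a weaver applies
# them, instead of rewriting the analyzed code by hand.

import functools

def weave(before=None, after=None):
    """Weave entry/exit snippets around a function (our stand-in for a
    method-entry/method-exit join point)."""
    def transform(fn):
        @functools.wraps(fn)
        def woven(*args, **kwargs):
            if before:
                before(fn.__name__, args)
            result = fn(*args, **kwargs)
            if after:
                after(fn.__name__, result)
            return result
        return woven
    return transform

# An "analysis": record every invocation and result, written without
# touching any low-level representation of the analyzed code.
calls = []
def on_entry(name, args): calls.append((name, args))
def on_exit(name, result): calls.append((name, "->", result))

@weave(before=on_entry, after=on_exit)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(3)
entries = [c for c in calls if len(c) == 2]
print(len(entries))   # number of fib invocations observed
```

The productivity argument is the same as in the paper: the analysis author writes only `on_entry`/`on_exit`, while the weaving mechanics stay inside the framework.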
12:00  Andrey Breslav, Evgeny Geraschenko, Svetlana Isakova and Alexey Sedunov
KAnnotator: Inferring precise type information from Java byte codes (slides)
Many extensions to Java's type system rely on richer types than Java requires. Usually this problem is worked around by utilizing annotations: a metadata mechanism supported by Java. The biggest problem with such approaches is that the "real world" is not annotated: creators of popular Java libraries (beginning with the JDK developers) did not annotate them for any of the sophisticated analyses researchers devise, and annotating a code base of a few hundred thousand methods (the size of Java's standard library) is a huge project that, to our knowledge, has not been carried out for any new analysis reported in the literature. The KAnnotator project aims at tackling this problem by providing machinery capable of automatically inferring annotations for entire libraries. In this paper we introduce a tool that infers nullability and mutability annotations from Java libraries given in the form of bytecode. The tool was used to analyze popular libraries such as the JDK, Guava and ASM, and has shown good results in both precision and performance.
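The flavor of such an inference can be shown with a toy fixpoint over a made-up call graph. Everything below is invented for illustration (the tiny IR, the two-point lattice, the method names); KAnnotator itself analyzes compiled class files, not this IR.

```python
# Toy fixpoint inference of nullability annotations for method return
# values: start optimistically with NotNull everywhere and propagate
# "may be null" facts through calls until nothing changes.

# IR: method -> list of possible return sources
methods = {
    "newList": [("const", "[]")],             # always returns a fresh object
    "lookup":  [("null",), ("const", "v")],   # may return null
    "wrap":    [("call", "newList")],         # returns whatever newList returns
    "find":    [("call", "lookup")],          # returns whatever lookup returns
}

# Two-point lattice: NotNull < Nullable (Nullable is the safe top element).
def join(a, b):
    return "Nullable" if "Nullable" in (a, b) else "NotNull"

def infer(methods):
    ann = {m: "NotNull" for m in methods}     # optimistic start
    changed = True
    while changed:
        changed = False
        for m, sources in methods.items():
            val = "NotNull"
            for s in sources:
                if s[0] == "null":
                    val = join(val, "Nullable")
                elif s[0] == "call":
                    val = join(val, ann[s[1]])  # propagate callee's annotation
            if val != ann[m]:
                ann[m], changed = val, True
    return ann

ann = infer(methods)
print(ann)
```

The fixpoint matters for `find`: its nullability is not visible locally and only emerges once `lookup` has been classified as Nullable.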
14:00  Invited Speaker: Étienne Payet
Modeling the Android platform (slides)
Android is currently the world's most popular mobile platform. One of its key features is the possibility for a user to install new applications on a device. Applications can be downloaded from anywhere in the Dalvik bytecode format and they do not have to be digitally signed before installation. As a consequence, reliability has become a major concern for users, as buggy applications can hang the device or malicious code can steal private data. Many analyses have been presented so far for finding bugs or malicious code in Android programs. A few of them rely on formal operational semantics for the Dalvik virtual machine executing the bytecode. But until now, no formal semantics for important specific features of the Android platform, such as the inter-component communication mechanism, has been proposed. In this talk, we review the semantics that have been proposed for Dalvik and we introduce a first attempt at defining an operational semantics for a part of the Android platform encompassing the "activity" component communication mechanism.
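To give a concrete feel for what "an operational semantics for the activity mechanism" means, here is a drastically simplified executable sketch, not the semantics from the talk: the configuration is a stack of activities, and `start_activity`/`back` are transition rules that rewrite it while firing the standard Android lifecycle callbacks in order.

```python
# Toy operational semantics for activity-based communication: a program
# state is an activity stack; transitions rewrite the stack and emit
# lifecycle callbacks (onCreate, onStart, onResume, ...).

trace = []

class Activity:
    def __init__(self, name):
        self.name = name
    def cb(self, event):
        trace.append((self.name, event))

def start_activity(stack, new):
    """startActivity(intent): pause the current top, push and launch new."""
    if stack:
        stack[-1].cb("onPause")
    stack.append(new)
    for event in ("onCreate", "onStart", "onResume"):
        new.cb(event)
    if len(stack) > 1:
        stack[-2].cb("onStop")      # old top stops once new top is visible
    return stack

def back(stack):
    """Back button: destroy the top activity and resume the one below."""
    top = stack.pop()
    top.cb("onPause")
    if stack:
        for event in ("onRestart", "onStart", "onResume"):
            stack[-1].cb(event)
    for event in ("onStop", "onDestroy"):
        top.cb(event)
    return stack

stack = start_activity([], Activity("Main"))
stack = start_activity(stack, Activity("Detail"))
stack = back(stack)
print([a.name for a in stack])      # the Main activity is on top again
```

Even at this level of abstraction, the interesting questions of the talk show up: the transition rules are inter-component (they touch two activities at once), which is exactly what a Dalvik-only semantics cannot express.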
15:00  Marieke Huisman and Matej Mihelčić
Specification and Verification of GPGPU programs using Permission-based Separation logic (slides)
Graphics Processing Units (GPUs) are increasingly used for general-purpose applications because of their low price, energy efficiency and enormous computing power. Considering the importance of GPU applications, it is vital that the behaviour of GPU programs can be specified and proven correct formally. This paper presents our ideas on how to verify GPU programs written in OpenCL, a platform-independent low-level programming language for GPU kernels. Our verification approach is modular, based on permission-based separation logic. We first present the main ingredients of our logic, and then illustrate its use on several example kernels. We show in particular how the logic is used to prove data-race-freedom and functional correctness of GPU applications.
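The core accounting idea of permission-based separation logic can be shown with a toy checker. This is a sketch under invented names and a deliberately crude model (a kernel is just a list of declared accesses): a write to an array cell needs the full permission, a read needs any positive fraction, and the permissions held by all threads for one cell may never sum to more than 1.

```python
# Toy permission accounting for GPU-style kernels: flag every array cell
# whose declared accesses would oversubscribe its permission (a potential
# data race between threads).

from collections import defaultdict

def check_kernel(accesses):
    """accesses: (thread_id, cell, kind) triples, kind in {'read', 'write'}.
    Returns the cells whose permission accounting fails."""
    writers, readers = defaultdict(set), defaultdict(set)
    for tid, cell, kind in accesses:
        (writers if kind == "write" else readers)[cell].add(tid)
    bad = []
    for cell in set(writers) | set(readers):
        w, r = writers[cell], readers[cell]
        # two writers, or a writer plus a distinct reader, demands more
        # than the full permission for the cell
        if len(w) > 1 or (w and (r - w)):
            bad.append(cell)
    return sorted(bad)

# Race-free pattern: each thread writes only its own cell; everyone may
# share read fractions of a cell nobody writes.
ok = [(i, i, "write") for i in range(4)] + [(i, 5, "read") for i in range(4)]
assert check_kernel(ok) == []

racy = ok + [(1, 0, "write")]   # thread 1 also writes thread 0's cell
print(check_kernel(racy))       # [0]
```

In the actual logic this accounting is done statically per kernel specification, with fractional permissions in the pre- and postconditions rather than by enumerating accesses.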
16:00  Enrico Scapin and Fausto Spoto
Field-Sensitive Unreachability and Non-Cyclicity Analysis (slides)
Field-sensitive static analyses of object-oriented code use approximations of the computational states where fields are taken into account, for better precision. This article presents a novel and sound definite analysis of Java bytecode that states two strictly related properties: field-sensitive unreachability between program variables and field-sensitive non-cyclicity of program variables. The latter exploits the former for better precision. We use abstract interpretation to approximate the concrete semantics and an analysis framework based on constraint graphs, whose nodes are program points and whose arcs propagate information according to the semantics of each bytecode instruction. Our analysis has been designed with the goal of improving client analyses such as termination analysis, asserting the non-cyclicity of variables w.r.t. specific fields.
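Both properties are, at bottom, questions about a field-labelled graph over program variables, which a small sketch can make concrete. The graph, the example variables, and the toy definitions below are invented; the paper's analysis derives such information soundly from bytecode via abstract interpretation rather than from a given graph.

```python
# Toy field-sensitive points-to graph: an edge v --f--> w means "field f
# of an object reachable from v may reach w". Unreachability and
# non-cyclicity then become graph questions.

from collections import defaultdict

edges = defaultdict(list)               # var -> [(field, var)]
def may_point(v, f, w): edges[v].append((f, w))

def reachable(v, w, fields=None, seen=None):
    """May w be reached from v, optionally only through the given fields?"""
    if v == w:
        return True
    seen = seen if seen is not None else set()
    if v in seen:
        return False
    seen.add(v)
    return any(reachable(t, w, fields, seen)
               for f, t in edges[v] if fields is None or f in fields)

def reach_set(v):
    seen, stack = set(), [v]
    while stack:
        x = stack.pop()
        if x in seen:
            continue
        seen.add(x)
        stack.extend(t for _, t in edges[x])
    return seen

def cyclic(v):
    """May v reach a node that lies on a cycle?"""
    return any(reachable(y, x) for x in reach_set(v) for _, y in edges[x])

may_point("xs", "next", "node"); may_point("node", "next", "node")  # cyclic list
may_point("t", "left", "u");     may_point("t", "right", "w")       # acyclic tree

assert cyclic("xs") and not cyclic("t")
assert reachable("t", "u", fields={"left"})       # field-sensitivity:
assert not reachable("t", "u", fields={"right"})  # u only via field "left"
```

The last two assertions show the field-sensitive refinement the abstract mentions: `u` is reachable from `t`, but only through the `left` field, which is exactly the kind of fact a termination analysis can exploit.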
16:30  Michael Barnett, Mehdi Bouaziz, Manuel Fahndrich and Francesco Logozzo
A Case for Static Analyzers in the Cloud (Position paper) (slides)
A cloud-based static analyzer runs as a service. Clients issue analysis requests through the local network or over the Internet. The analysis takes advantage of the large computation resources offered by the cloud: the underlying infrastructure ensures scaling and unlimited storage. Cloud-based analyzers may relax performance-precision trade-offs usually associated with desktop-based analyzers. More cores enable more precise and responsive analyses. More storage enables perfect caching of the analysis results, shareable among different clients, and queryable off-line. To realize these advantages, cloud-based analyzers need to be architected differently from desktop ones. We describe our ongoing effort of moving a desktop analyzer, Clousot, into a cloud-based one, Cloudot.
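The "perfect caching" idea can be sketched in a few lines. The sketch assumes a simplistic model (a method is a string, the analyzer is a stub, the cache is an in-process dict standing in for cloud storage) and none of the names below come from Clousot or Cloudot: results are keyed by a content hash of the method body, so identical methods submitted by different clients, or in different sessions, are analyzed once.

```python
# Content-addressed caching of analysis results: any client submitting a
# method body the service has already seen gets the stored result back.

import hashlib

CACHE = {}      # sha256 of method body -> analysis result ("cloud" store)
analyzed = []   # record of bodies we actually had to analyze

def analyze(body):
    """Stub analysis; expensive in real life, hence worth caching."""
    analyzed.append(body)
    return {"warnings": body.count("assert")}

def analyze_cached(body):
    key = hashlib.sha256(body.encode()).hexdigest()
    if key not in CACHE:
        CACHE[key] = analyze(body)
    return CACHE[key]

# Two clients submit overlapping code bases: shared methods analyzed once.
client_a = ["def f(): assert x", "def g(): pass"]
client_b = ["def f(): assert x", "def h(): return 1"]
for body in client_a + client_b:
    analyze_cached(body)
print(len(analyzed))   # 3, not 4: f's result was shared between clients
```

Keying by content hash rather than by file name is what makes the cache shareable among mutually unknown clients, which is the design point the position paper argues for.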
17:00  Pavle Subotic, Andrew Edward Santosa and Bernhard Scholz
Static Analysis by Elimination (slides)
In this paper we describe a program analysis technique for finding value ranges for variables in the LLVM compiler infrastructure. Range analysis has several important applications for embedded systems, including elimination of assertions in programs, automatically deducing numerical stability, eliminating array bounds checking, and integer overflow detection. Determining value ranges poses a major challenge in program analysis because it is difficult to ensure the termination and precision of the program analysis in the presence of program cycles. This work uses a technique where loops are detected intrinsically within the program analysis. Our work combines methods of elimination-based data flow analysis with abstract interpretation. We have implemented a prototype of the proposed framework in the LLVM compiler framework and have conducted experiments with a suite of test programs to show the feasibility of our approach.
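The termination problem the abstract mentions is the classic one for range analysis, and a minimal sketch shows it. Note this is plain textbook interval analysis with widening and one narrowing pass, not the elimination-based algorithm of the paper; the loop shape and function names are invented.

```python
# Toy range analysis of: i = init; while i < bound: i += 1
# Naive fixpoint iteration would take 'bound' abstract iterations;
# widening jumps an unstable bound straight to infinity, and a single
# narrowing pass recovers precision from the loop guard.

NEG, POS = float("-inf"), float("inf")

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    """Classic interval widening: any bound that moved goes to infinity."""
    lo = old[0] if new[0] >= old[0] else NEG
    hi = old[1] if new[1] <= old[1] else POS
    return (lo, hi)

def analyze_loop(init, bound):
    """Abstract range of i after the loop (assumes init < bound)."""
    start = (init, init)
    i = start
    while True:
        body_in = (i[0], min(i[1], bound - 1))       # guard i < bound holds
        body_out = (body_in[0] + 1, body_in[1] + 1)  # i += 1
        nxt = widen(i, join(start, body_out))
        if nxt == i:
            break                                    # post-fixpoint reached
        i = nxt
    # one narrowing pass: recompute without widening from the fixpoint
    body_in = (i[0], min(i[1], bound - 1))
    i = join(start, (body_in[0] + 1, body_in[1] + 1))
    return (max(i[0], bound), i[1])                  # on exit the guard fails

print(analyze_loop(0, 10))   # (10, 10): i is exactly 10 after the loop
```

The result is precise enough to discharge an assertion like `assert i == 10` after the loop, which is the assertion-elimination use case the abstract lists; the paper's contribution is reaching such results by detecting loops within an elimination-based dataflow framework rather than by iterate-and-widen.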