Research objectives

The research objectives of the CompCert project are twofold: (1) to investigate the development and correctness proof of verified compilers, and (2) to improve the tools that such proofs require, namely program proof technology and mechanized semantics. More specifically, the research conducted in CompCert falls into the following four themes.

A formally verified C compiler

We have developed and formally verified a realistic compiler for a large subset of the C language, called Clight. More details on this subset of C and on the compiler itself can be found on this separate page. We use the Coq proof assistant not only to conduct the correctness proof, but also to program most of the compiler; the code extraction facility of Coq then generates executable Caml code from the Coq functional specifications.
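For illustration, here is a minimal sketch of this workflow on a toy function (the function is of course a stand-in; in recent versions of Coq the extraction plugin must be loaded explicitly):

    (* A toy Coq function, standing in for a piece of the compiler. *)
    Definition double (n : nat) : nat := n + n.

    (* Load the extraction plugin (needed in recent Coq versions) and
       print the Caml code generated for [double] and its dependencies. *)
    Require Extraction.
    Recursive Extraction double.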

The compiler is structured in 11 passes and uses 7 intermediate languages. One intermediate language, called Cminor, is a much simplified version of Clight: typeless, but still structured. The other intermediate languages are variations on register transfer language (RTL), also known as "3-address code". The compiler implements state-of-the-art register allocation by graph coloring (using the Appel-George coloring algorithm), as well as two dataflow-based optimizations: constant propagation and common subexpression elimination (by value numbering applied to extended basic blocks).
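To give an idea of this representation style, a much simplified, RTL-like instruction set can be sketched in Coq as follows (the types and constructors below are illustrative and do not reproduce CompCert's actual definitions):

    (* Illustrative sketch of an RTL-style intermediate representation. *)
    Definition reg  := nat.   (* pseudo-register, unlimited supply *)
    Definition node := nat.   (* label of a control-flow-graph node *)

    Inductive operation : Type :=
      | Oconst (n : nat)      (* load a constant *)
      | Oadd                  (* integer addition *)
      | Omul.                 (* integer multiplication *)

    (* Each instruction names its argument and result pseudo-registers
       and its successor node(s) in the control-flow graph. *)
    Inductive instruction : Type :=
      | Inop    (succ : node)
      | Iop     (op : operation) (args : list reg) (dest : reg) (succ : node)
      | Icond   (cond : reg) (if_true : node) (if_false : node)
      | Ireturn (res : reg).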

To deploy additional, more ambitious optimizations, we are currently studying the "verified validator" approach (a form of translation validation), where advanced optimizations such as trace-based instruction scheduling and lazy code motion are performed by untrusted Caml code, then validated a posteriori for correctness using symbolic interpretation. Only the validator needs to be written and proved correct in Coq.
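Schematically, and with purely illustrative names, the approach pairs an untrusted transformation with a Coq-proved checker:

    (* Sketch of the a-posteriori validation pattern (names are illustrative). *)
    Parameter code : Type.
    Parameter transform : code -> code.      (* untrusted optimization, written in Caml *)
    Parameter check : code -> code -> bool.  (* validator, written and proved in Coq *)

    (* If [check] accepts, the transformed code is used; otherwise we
       conservatively fall back to the original code.  Only [check] needs
       a proof that acceptance implies semantic preservation. *)
    Definition optimize (c : code) : code :=
      let c' := transform c in
      if check c c' then c' else c.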

Mechanized semantics

A central issue in compiler verification is to define formal semantics for the source, intermediate and target languages that lend themselves to machine-assisted reasoning over program transformations. Initially, we used natural semantics, a.k.a. "big-step" semantics, for most of our languages. These semantics could describe only terminating executions and let us observe only the exit code of the whole program, resulting in rather weak semantic preservation theorems.

To observe the behaviors of programs more precisely, we now use trace-based semantics, where the behavior of a program is defined as the trace of all input/output events it performs. To reason over non-terminating as well as terminating executions, we use a combination of transition semantics (a.k.a. "small-step" semantics) for the target and low-level intermediate languages and coinductive natural semantics for the source and high-level intermediate languages. Adding coinductively-defined evaluation rules to natural semantics makes it possible to describe non-terminating evaluations precisely.
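For instance, possibly-infinite traces of events are naturally expressed as a coinductive type in Coq; the following minimal sketch (simpler than the actual CompCert definitions) shows such a type together with a guarded corecursive definition over it:

    (* Infinite traces of input/output events (illustrative sketch). *)
    Parameter event : Type.

    CoInductive traceinf : Type :=
      | Econsinf : event -> traceinf -> traceinf.

    (* The trace of a program that performs the same event forever;
       coinduction makes this circular definition legitimate. *)
    CoFixpoint forever (e : event) : traceinf := Econsinf e (forever e).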

To validate these operational semantics and pave the way to connections with program provers, we also study axiomatic semantics for Cminor and Clight, based on separation logic.

Improving Coq as a programming language and as a theorem prover

The Coq proof assistant plays a crucial role in the CompCert development: it is both the programming language in which the verified parts of the compiler are written, and the prover used to establish semantic preservation results.

We participate in the development of the Function mechanism, introduced in version 8.1 of Coq to facilitate the definition of, and reasoning over, functions defined by deep pattern-matching and structural or well-founded recursion. This mechanism automates the generation of rich, easy-to-use induction and inversion principles over such definitions.
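A minimal example of the mechanism (the names of the generated principles follow Coq's conventions):

    Require Import FunInd.  (* loads the plugin providing Function in recent Coq versions *)

    Function fact (n : nat) {struct n} : nat :=
      match n with
      | O => 1
      | S p => n * fact p
      end.

    (* Besides [fact] itself, the command generates an equation lemma
       [fact_equation] and an induction principle [fact_ind] that follows
       the shape of the definition, usable via [functional induction]. *)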

Coq is based on a logic of total (terminating) functions. However, it is difficult to prove termination of some of the functions involved in a compiler, and such termination proofs are not really useful since we are mostly interested in the partial correctness of the compiler. We therefore investigate an approach to defining, and reasoning over, partial functions written in Coq using general recursion. This approach is based on Tarski's fixpoint theorem and involves classical logic (an axiom of description), which can nonetheless be realized at extraction time by a Caml fixpoint combinator.
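The flavor of this technique can be sketched as follows; for brevity, the combinator is simply axiomatized here rather than derived from Tarski's theorem, and all names are illustrative:

    (* A general fixpoint combinator on partial functions (result type
       [option B]), assumed here rather than constructed as in the actual
       development. *)
    Axiom fixp :
      forall (A B : Type),
        ((A -> option B) -> A -> option B) -> A -> option B.

    (* At extraction time, the axiom is realized by an ordinary Caml
       fixpoint combinator. *)
    Require Extraction.
    Extract Constant fixp =>
      "fun f -> let rec fix x = f fix x in fix".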

The availability of efficient, verified, purely functional ("persistent") data structures plays a crucial role in the CompCert development. We improved the FSet library of finite sets (implemented efficiently as balanced binary trees), and developed and verified an FMap library of finite maps based on similar principles. Both are now part of the Coq standard library.
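As a small usage example of the finite-map interface (module paths are those of recent standard-library releases and may vary across Coq versions):

    Require Import FMapAVL OrderedTypeEx.

    (* Finite maps over natural-number keys, implemented as AVL trees. *)
    Module NatMap := FMapAVL.Make(Nat_as_OT).

    Definition m : NatMap.t bool :=
      NatMap.add 2 false (NatMap.add 1 true (NatMap.empty bool)).

    (* Lookups compute, so simple facts follow by reflexivity. *)
    Example find_1 : NatMap.find 1 m = Some true.
    Proof. reflexivity. Qed.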

Finally, we are also interested in automating parts of the correctness proofs by using automated first-order theorem proving within interactive Coq proofs. The Zenon theorem prover is developed partly within the CompCert project. We conducted preliminary experiments on using first-order theorem proving to establish properties of the memory model used in CompCert.

Towards a trusted execution path for Coq programs

We write the CompCert compiler in Coq, then use Coq's extraction facility to produce Caml code, then use the Caml compiler to generate an executable compiler. But how can we be sure that Coq's extraction facility and the Caml compiler introduce no bugs in the compiler? This is the general problem of trust in compilers that CompCert focuses on, just "one level up". We are therefore interested in formal correctness guarantees for this approach to the development of proved programs, or, in other words, in developing a trusted execution path for programs written and proved in Coq.

We are developing and verifying a compiler from mini-ML (the pure, strict functional programming language that is the target of Coq's extraction mechanism) to the Cminor intermediate language. This compiler can then reuse the back-end of the CompCert C compiler to generate provably correct PowerPC executable code.
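To give an idea of the source language, the abstract syntax of a mini-ML-like language could be sketched in Coq as follows (constructor names are illustrative, not those of the actual development):

    (* Sketch of a mini-ML abstract syntax: variables are de Bruijn indices;
       constructors and pattern-matching model Caml datatypes. *)
    Inductive term : Type :=
      | Tvar    (n : nat)                              (* variable (de Bruijn index) *)
      | Tabs    (body : term)                          (* function abstraction *)
      | Tapp    (f arg : term)                         (* function application *)
      | Tlet    (def body : term)                      (* let binding *)
      | Tconstr (tag : nat) (args : list term)         (* datatype constructor *)
      | Tmatch  (scrut : term) (branches : list term). (* pattern matching *)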

Unlike C, the mini-ML language needs a run-time system: essentially, a memory allocator and garbage collector. Such a run-time system, hand-written in Cminor, must be proved correct, and its proof of correctness must be interfaced with that of the mini-ML front-end compiler. We are currently making progress in this direction.

Finally, we are also investigating several approaches to proving the correctness of a code extraction mechanism similar to that of Coq. One approach is to prove the correctness of the extractor itself, once and for all. Another approach is to provide tools that establish semantics preservation for one particular run of extraction.