λk.(k blog): Posts tagged 'research'
<h1>What is a model?</h1>
<p>William J. Bowman, 2023-06-15</p>
<p>What is a model, particularly of a programming language? I’ve been struggling with this question a bit for some time. The word “model” is used a lot in my research area, and although I have successfully (by some metrics) read papers whose topic is models, used other people’s research on models, built models, and trained others to do all of this, I don’t really understand what a model is.</p>
<p>Before I get into a philosophical digression on what it even means to understand something, let’s ignore all that and try to discover what a model is from first principles.</p>
<!-- more-->
<h2 id="definitions-of-model">Definitions of “model”</h2>
<p>The obvious place to start to understand the meaning of a word is to read its definition. This is actually no help at all. There are lots of uses of the word “model”, with several definitions. Here are some.</p>
<p><strong>Definition 0</strong> In science and engineering, a model is “an abstract description of a concrete system using mathematical concepts and language”. <a href="https://en.wikipedia.org/wiki/Mathematical_model" title="Mathematical model">Wikipedia</a> provides a nice introduction to this kind of model, and the <a href="https://plato.stanford.edu/entries/model-theory/#Modelling" title="Models and Modelling">Stanford Encyclopedia of Philosophy</a> provides a nice explanation in the context of model theory, which will be relevant later in this post.</p>
<p><strong>Definition 1</strong> A <em>syntactic model</em> (of a type theory) is defined by <a href="https://doi.org/10.1145/3018610.3018620" title="The Next 700 Syntactical Models of Type Theory">Boulier, Pédrot, and Tabareau</a> as a translation from one type theory into another that preserves typing, the definition of false, and definitional equivalence. This syntactic model enables the source type theory to inherit properties of the target type theory—such as consistency.</p>
<p><strong>Definition 2</strong> A <em>model</em> (of a <em>vocabulary</em> also called a <em>language</em>
<script type="math/tex">\sigma</script>) in the sense of model theory (as defined by <a href="https://doi.org/10.1007/978-3-662-07003-1" title="Elements of Finite Model Theory">Elements of Finite Model Theory</a>) is a <em>
<script type="math/tex">\sigma</script>-structure</em> (“also called a <em>model</em>”) defining a set <em>A</em> along with 3 sets providing interpretations of that vocabulary. These sets are
<script type="math/tex">Ic_A</script>, which interprets each constant in
<script type="math/tex">\sigma</script> as an element of
<script type="math/tex">A</script>,
<script type="math/tex">IP_A</script>, which interprets each n-ary predicate symbol or relation symbol from
<script type="math/tex">\sigma</script> as an n-ary (set-theoretic) relation between elements of
<script type="math/tex">A</script>, and
<script type="math/tex">If_A</script>, which interprets each n-ary function symbol in
<script type="math/tex">\sigma</script> as a (set-theoretic) function from n elements of
<script type="math/tex">A</script> to an element of
<script type="math/tex">A</script>.</p>
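<p>To make <strong>Definition 2</strong> concrete, here is a toy sketch (my own illustration, not Libkin’s notation) of a σ-structure in Python: a vocabulary with one constant symbol, one unary function symbol, and one binary relation symbol, interpreted over a four-element universe:</p>

```python
# A toy sigma-structure: vocabulary {c, f(1), R(2)} interpreted over A.
# (Hypothetical names; a sketch of Definition 2, not any standard library.)

A = {0, 1, 2, 3}                      # the universe

Ic = {"c": 0}                         # constants -> elements of A
If = {"f": lambda x: (x + 1) % 4}     # n-ary function symbols -> functions on A
IP = {"R": {(x, y) for x in A for y in A if x < y}}  # relation symbols -> relations on A

def interp_term(t):
    """Interpret a term built from constants and function symbols."""
    if isinstance(t, str):            # a constant symbol
        return Ic[t]
    f, *args = t                      # ("f", subterm, ...)
    return If[f](*(interp_term(a) for a in args))

print(interp_term(("f", ("f", "c"))))   # f(f(c)) = 2
print((0, 3) in IP["R"])                # R(0, 3) holds: True
```

The structure is just the universe plus these three interpretation maps; nothing yet says anything has to be true of it.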
<p><strong>Definition 3</strong> The above definition is confusing, since it conflates <em>structure</em> and <em>model</em>, which the text later distinguishes with the following separate definition. A <em>model</em> (of a <em>theory</em> (over a vocabulary
<script type="math/tex">\sigma</script>)) is a <em>structure</em> (“also called a <em>model</em>”) of <em>vocabulary</em>
<script type="math/tex">\sigma</script> such that every sentence in the theory is interpreted in the structure to make the sentence true. (A <em>theory</em> is a set of sentences drawn from a vocabulary.) My rephrasing of the definition of model is intentionally confusing and difficult to parse, to make apparent the inherent confusingness created by the several layers of definitions and one definition that defines “model” using a second definition of “model”.</p>
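<p>To untangle the layers a little: a structure is promoted to a <em>model of a theory</em> exactly when every sentence of the theory comes out true under its interpretation. A minimal sketch (my own, with sentences represented as Python predicates, which is a simplification of real model-theoretic syntax):</p>

```python
# A structure is a model of a theory iff every sentence in the theory
# is true under the structure's interpretation. (Toy representation:
# sentences as Python thunks over the universe; names are mine.)

A = {0, 1, 2, 3}
plus = lambda x, y: (x + y) % 4       # interpretation of a binary function symbol
zero = 0                              # interpretation of a constant symbol

theory = [
    lambda: all(plus(x, zero) == x for x in A),                   # x + 0 = x
    lambda: all(plus(x, y) == plus(y, x) for x in A for y in A),  # x + y = y + x
]

def is_model():
    return all(sentence() for sentence in theory)

print(is_model())  # True: this structure makes every axiom true
```

Swap in a different interpretation of <code>plus</code> that breaks an axiom, and the same structure machinery remains a structure but stops being a model of this theory.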
<p><strong>Definition 4</strong> <a href="https://ncatlab.org/nlab/show/structure+in+model+theory#Definition" title="Definition of 'structure' in model theory">Nlab hosts an article</a> with a much clarified definition, which distinguishes <em>language</em>, <em>theory</em>, <em>structure</em>, and <em>model</em> carefully. In particular, it is careful to call <em>structure</em> only the interpretation of the <em>language</em> (called <em>vocabulary</em> above), and to call <em>model</em> only an interpretation that makes true the <em>axioms</em> composing the <em>theory</em> of the <em>language</em>.</p>
<p><strong>Definition 5</strong> <a href="https://twitter.com/carloangiuli/status/1640421574733078528?s=20">Carlo Angiuli once gave me the following definition of model</a>:</p>
<blockquote>
<p>A collection of interpretation functions that interpret every syntactic category such that the original relationship is respected.</p>
<p>e.g.,
<br /> - interpret every context as a set,
<br /> - interpret every (non-dependent) type as a set, and
<br /> - interpret every term-of-a-type indexed-by-a-context as an element-of-the-interpretation-of-that-type indexed-by-elements-of-the-interpretation-of-that-context.</p>
<p>Implicit in this definition is that the interpretations must respect equality — because if you don’t respect equality of arguments then you’re not a function!</p></blockquote>
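<p>Here is a loose sketch of that definition for a tiny non-dependent language (my own toy example: booleans and negation, with every name hypothetical), interpreting the one type as a set and well-typed terms as elements of that set:</p>

```python
# Interpret a tiny language: types as sets, well-typed terms as elements.
# Toy mini-language: "true", "false", ("not", e). A sketch of Definition 5.

TYPE_INTERP = {"Bool": {True, False}}      # every (non-dependent) type -> a set

def interp(term):
    """Interpret a term of type Bool as an element of the set for Bool."""
    if term == "true":
        return True
    if term == "false":
        return False
    op, arg = term                          # ("not", subterm)
    return not interp(arg)

v = interp(("not", ("not", "false")))
print(v)                                    # False
print(v in TYPE_INTERP["Bool"])             # the interpretation lands in [[Bool]]: True
```

The “original relationship is respected” part shows up as the final check: interpreting a term of type <code>Bool</code> always produces an element of the interpretation of <code>Bool</code>.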
<p>This definition seems to be close to <strong>Definition 2</strong>, as it doesn’t mention axioms and their interpretation. However, it might be <strong>Definition 3</strong> instead, as there could be implicit in the definition of <a href="https://www.williamjbowman.com/blog/2023/06/07/what-is-syntax/" title="What is syntax?">syntax</a> an inclusion of all judgements of the programming language, and therefore in the phrase “such that the original relationship is respected” a requirement that axioms of those judgements become true.</p>
<p>Also implicit in this definition of model is what it is a model <em>of</em>. Perhaps a programming language, but it again depends on what <a href="https://www.williamjbowman.com/blog/2023/06/07/what-is-syntax/" title="What is syntax?">syntax</a> means and thus what “every syntactic category” refers to.</p>
<p>I’m interested in the requirement that the interpretation is a collection of <em>functions</em>, which seems to be missing or only implied in some model theory definitions of “model”.</p>
<h2 id="so-what-is-a-model">So what <em>is</em> a model?</h2>
<p>One of the first things that jumps out at me after reviewing the above definitions is that to understand each definition, you <em>have</em> to reframe the definition of <em>model</em> into <em>model [of what]</em>. It really never makes sense to give a definition of merely “model”.</p>
<p><strong>Definition 0</strong> defines a <em>model of a system [of the real world]</em>. <strong>Definition 1</strong> defines a <em>model of a type theory</em>. <strong>Definitions 2</strong> and <strong>3</strong> give definitions of a <em>model</em>, in the sense of model theory, but of two different objects: <em>model of a vocabulary (or language)</em>, which is more often called a structure, and <em>model of a theory</em> (which everyone seems to agree is a “model”). <strong>Definition 4</strong> makes this distinction very clear. <strong>Definition 5</strong> seems to use “model” in the model-theoretic sense, but has abstracted a bit away from a particular notion of <em>theory</em> and generalized to <a href="https://www.williamjbowman.com/blog/2023/06/07/what-is-syntax/" title="What is syntax?"><em>syntax</em></a>.</p>
<h2 id="what-is-a-model-of-a-programming-language">What is a model <em>of</em> a programming language?</h2>
<p>I’ve had two problems understanding the word “model” in the context of programming languages.</p>
<p>First, we use “model” in three different senses, and I have neither understood that nor understood the relationships between them.</p>
<ol>
<li>Model in the sense of an abstract description of a system. This is <strong>Definition 0</strong>. This sense of “model” means something like “mathematical description”. What we want is a description in which we can work using math, so we can make predictions about the real world. Ideally, the predictions we make will be true.</li>
<li>Model in the strict sense of model theory. These are <strong>Definitions 2, 3, and 4</strong>. This sense of “model” is the closest to having a strict definition. It often carries a set-theoretic connotation, asking for a set defining the domain of values, and three interpretation functions that interpret specific parts of a theory in specific ways.</li>
<li>Model in the generalized sense, inheriting from or related to model theory. I hesitate to even call this distinct from the second sense, but I will anyway. I also hesitate to speculate about history—perhaps this sense actually predates model theory. But I distinguish it from the second sense because it frequently generalizes away from the strict three-category “constant symbol”, “predicate symbol”, “function symbol” specification and doesn’t seem beholden to set theory. <strong>Definitions 1 and 5</strong> use “model” in this sense.</li></ol>
<p>In the second sense of “model”, the first sense of the word remains—we’re still interested in a description of some system (the <em>theory</em>), and of using the model to make predictions or reason. However, since the theory is also mathematical, we can be more rigid about our reasoning requirements—axioms of the theory <em>must</em> be true of the model, and relationships <em>must</em> be preserved in the model. This is rarely true of a model of the real world; <em>e.g.</em>, the Newtonian model of gravity works pretty well, until it doesn’t, so it’s a model that doesn’t <em>quite</em> make all axioms true or preserve all relationships.</p>
<p>The third sense seems closer to the idea of <a href="https://en.wikipedia.org/wiki/Semantics_of_logic" title="Semantics of Logic"><em>semantics</em></a>, in the mathematical logic sense of the word as assigning meaning or interpretation to <a href="https://www.williamjbowman.com/blog/2023/06/07/what-is-syntax/" title="What is syntax?">syntax</a>. In this sense, the word “model” frequently avoids committing to set theory as a formal foundation, generalizes away from the three interpretation functions, and focuses instead on the <em>relationships</em> between uninterpreted <a href="https://www.williamjbowman.com/blog/2023/06/07/what-is-syntax/" title="What is syntax?">syntax</a> being preserved by the interpretation. For example, in <strong>Definition 1</strong>, the relationships of interest are well-typedness, definitional equivalence, and falsehood, and the formal foundation is type theory. Category theory seems to come closest to a complete formalization of this sense of the word “model”, although I’ve had a hell of a time understanding that.
Nlab articles don’t say this explicitly, but reading between the lines in articles linked to from <a href="https://ncatlab.org/nlab/show/model+theory" title="Nlab Article on Model Theory">the Nlab article on model theory</a> for the words <a href="https://ncatlab.org/nlab/show/internal+language" title="Nlab Article linked from 'syntax'">syntax</a> and <a href="https://ncatlab.org/nlab/show/structure" title="Nlab Article linked from 'semantics'">semantics</a> implies that the <a href="https://www.williamjbowman.com/blog/2023/06/07/what-is-syntax/" title="What is syntax?">idea of syntax</a>, <em>i.e.</em>, uninterpreted symbols with relationships between themselves and judgements about them, can be formalized in category theory, and then so can the idea of semantics, <em>i.e.</em>, providing an interpretation in some other domain of those uninterpreted symbols; a domain in which one can use all the power of the other domain to reason about the judgements one wishes to make about the uninterpreted symbols.</p>
<p>The second problem with the word “model” is that we frequently work with two senses simultaneously.</p>
<p>When I write down a programming language, I’m often trying to <em>model</em> (in the first sense) a real programming language (or some feature of it), one actual software developers use to make real things happen in the real world. I am not merely describing a mathematical object for study. (Okay, sometimes I do that, but usually to the first end, eventually.) When I write down such a model, I may describe the abstract syntax, the typing judgement, and an abstract machine or reduction rules. These form a pretty good mathematical description of how a real language behaves. A compiler will reject syntactically invalid expressions. It may then type check the abstract syntax tree, and reject some possibly semantically invalid expressions. If judged well typed, the compiler may transform the tree into something that runs, and that run-time behaviour can be predicted using the reduction rules.</p>
<p>However, for much programming languages work, I’m not interested in merely predicting the behaviour of a single program. I might want to predict behaviour or properties of the entire language, or its typing judgement, etc. To reason about single programs, the <em>model</em> (in the first sense) may work well. But it might not work well for, say, trying to decide whether certain types can even be inhabited. To solve this, we might build a <em>model</em> (in the second or third sense). We interpret the abstract syntax tree and typing judgement in some other domain. That is, the AST and the typing judgement, being a <em>model</em> in the first sense, form a <em>theory</em> in the model theoretic sense. We can then construct a model (in the second sense) of a model (in the first sense). The Stanford Encyclopedia of Philosophy <a href="https://plato.stanford.edu/entries/model-theory/#Modelling" title="Models and Modelling">article on model theory</a> goes into this in detail in the context of model theory, which is great.</p>
<p>What’s more interesting is how these two senses of model interact in programming languages. If one is interested in a model, in the second sense, it may inform how one develops a model (in the first sense). If I know I will want to construct a model (in the second sense) to reason about the typing judgement, I may decide that single-step reduction rules are actually irrelevant; I only care that certain program equivalences hold, really, and any implementation that has those equivalences suffices. So rather than create a model (in the first sense) with an abstract machine or small-step operational semantics, I’ll specify an equivalence judgement. This might give less predictive power about a real world implementation, but allow the predictions I do make to apply to many implementations.</p>
<p>If you see these patterns, you may have some insight into how the author is approaching their work, and in what senses they are using the word “model”.</p>
<!-- ## References-->
<h1>What is syntax?</h1>
<p>William J. Bowman, 2023-06-07</p>
<p>I’m in the middle of confronting my lack of knowledge about denotational semantics. One of the things that has confused me for so long about denotational semantics, which I didn’t even realize was confusing me, was the use of the word “syntax” (and, consequently, “semantics”).</p>
<p>For context, the contents of this note will be obvious to perhaps half of programming languages (PL) researchers. Perhaps half enter PL through math. That is not how I entered PL. I entered PL through software engineering. I was very interested in building beautiful software and systems; I still am. Until recently, I ran my own cloud infrastructure—mail, calendars, reminders, contacts, file syncing, remote git syncing. I still run some of it. I run secondary spam filtering over university email for people in my department, because our department’s email system is garbage. I am <em>way</em> better at building systems and writing software than math, but I’m interested in PL and logic and math nonetheless. Unfortunately, I lack a lot of background and constantly struggle with a huge part, perhaps half, of PL research. The most advanced math course I took was Calculus 1. (Well, I took a graduate recursion theory course too, but I think I passed that course because it was a grad course, not because I did well.)</p>
<p>So when I hear “syntax”, I think “oh sure. I know what that is. It’s the grammar of a programming language. The string, or more often the tree structure, used to represent the program text.” And that led me to misunderstand half of programming languages research.</p>
<!-- more-->
<h2 id="the-first-meaning-of-syntax">The First Meaning of Syntax</h2>
<p>Syntax has two meanings in programming languages, and both meanings can frequently be found in the same paper.</p>
<p>The first meaning is the one I gave above. I could give a definition of the syntax (in the first sense) of the lambda-calculus as follows.</p>
<pre><code>e ::= x | (lambda (x) e) | (e e)</code></pre>
<p>Ah. Beautiful syntax.</p>
<p>If we were following a standard text, such as <a href="http://www.cs.cmu.edu/~rwh/pfpl/2nded.pdf">Harper’s <em>Practical Foundations for Programming Languages (2nd ed)</em></a>, we might next define the “semantics” of this “syntax”. We might define the “static semantics”, <em>i.e.</em>, the type system or binding rules, then the “dynamic semantics”, <em>i.e.</em>, the rules governing the evaluation behaviour of the syntax. For example, I might write the following small-step operational semantics.</p>
<pre><code>((lambda (x) e) e') -> e[x := e']</code></pre>
<p>Ah. Beautiful semantics.</p>
<p>Except, everything I wrote above, reduction rule included, is also <em>syntax</em> and <em>not semantics</em>.</p>
<h2 id="historical-interlude">Historical Interlude</h2>
<p>The words “syntax” and “semantics” come from mathematical logic.</p>
<p>In that context, “syntax” describes sentences, statements, symbols, formulas, etc., without respect to any meaning. You can write down a logical formula, say, "∀ X.P(X, A)" (where “A” is a logical constant, “X” is a variable, and “P” is a predicate symbol), and it has no meaning; it’s mere syntax. It might be true, or might be false, depending on the interpretation of “P”, “A”, and "∀". I could say that it means “all leaves are green”, which would be false. A more relevant example for PL might be the syntax <code>((lambda (x) x+1) 2) = 3</code>, which I would certainly like to be true, but it very much depends on what I mean. If <code>+</code> means string append as in JavaScript, then the statement is false since <code>''.concat(1, 2) = '12'</code>. Wikipedia is a good start for trying to understand this history of the word “syntax”: <a href="https://en.wikipedia.org/wiki/Syntax_(logic)">https://en.wikipedia.org/wiki/Syntax_(logic)</a></p>
<p>By contrast, in that same context, “semantics” is the means by which syntax is given an interpretation. Perhaps the most widely used approach to providing an interpretation of syntax is model theory, which I never learned. In model theory, we start with a “syntax” (or “theory”). This theory is a collection of constants, function symbols, and predicate symbols. A model, then, is a map from the uninterpreted syntax to some interpretation that preserves relationships. I’ll say more about this in a later post, but for now, consider the following example. I might provide a model of our earlier example that interprets <code>+</code> as <code>''.concat</code>, and <code>=</code> is mapped to, say, <code>===</code>. This preserves relationships, if all my constants are mapped to strings. Wikipedia is a good source for this history too: <a href="https://en.wikipedia.org/wiki/Semantics_of_logic">https://en.wikipedia.org/wiki/Semantics_of_logic</a>.</p>
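<p>That the truth of <code>((lambda (x) x+1) 2) = 3</code> depends entirely on the interpretation can be replayed in a few lines (my own sketch, in Python rather than JavaScript, with the syntax represented as plain tuples):</p>

```python
# The same uninterpreted syntax, interpreted under two different models.
expr = ("=", ("+", 2, 1), 3)   # syntax for "2 + 1 = 3"; means nothing by itself

def interp(e, plus, eq):
    """Interpret a syntax tree, given interpretations of "+" and "="."""
    if not isinstance(e, tuple):
        return e
    op, a, b = e
    a, b = interp(a, plus, eq), interp(b, plus, eq)
    return plus(a, b) if op == "+" else eq(a, b)

# Model 1: numbers; "+" is addition. The sentence comes out true.
print(interp(expr, lambda a, b: a + b, lambda a, b: a == b))  # True

# Model 2: strings; "+" appends, as in JavaScript's ''.concat. Now it's false.
print(interp(expr,
             lambda a, b: str(a) + str(b),
             lambda a, b: str(a) == str(b)))                  # False: "21" != "3"
```

Same syntax, two interpretations, opposite truth values: the sentence only becomes true or false relative to a model.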
<h2 id="when-semantics-is-the-syntax">When Semantics is the Syntax</h2>
<p>What’s interesting about this history is how it was adopted in programming languages, and evolved in two different ways. On the one hand, a programming language grammar is <em>syntax</em>, in the sense of being uninterpreted statements. That syntax can be given a semantics, an interpretation, by using operational semantics (this is the sense in which operational semantics <em>is</em> a semantics). The operational semantics provides an interpretation to our grammar.</p>
<p>But, in another sense, the grammar, typing rules, and evaluation rules (the “syntax”, “static semantics”, and “dynamic semantics”) are mere syntax, in the older logical sense. They are a theory, in the model-theoretic sense. To see why, we must understand what the earlier example <code>((lambda (x) x+1) 2) = 3</code> means. Or in fact, realize that it doesn’t mean anything at all.</p>
<p>To write this down is to write down a proposition about the grammar: that one piece of the grammar is equal to another. Except I didn’t write a proposition that the two were equal. I wrote the uninterpreted proposition symbol <code>=</code>, the syntax <code>=</code>, next to two pieces of uninterpreted grammar, two other pieces of syntax. Every syntactic judgment about our grammar is itself syntax, in the model theoretic sense. At least, this is true if we follow the tradition of writing them down synthetically, axiomatically, about the grammar, as is done in standard programming languages textbooks such as <a href="https://www.worldcat.org/search?q=bn:0262162091"><em>Types and Programming Languages</em></a> or <a href="http://www.cs.cmu.edu/~rwh/pfpl/2nded.pdf"><em>Practical Foundations for Programming Languages</em></a>.</p>
<p>In this view, the typing rules and reduction relations are syntax. This is bizarre from a software engineering perspective, but makes sense from the mathematical logic perspective.</p>
<p>With this perspective, it might make sense to call “operational semantics” “syntactic semantics”, or to imagine a tower of syntax and semantics where one level’s semantics become the next level’s syntax. This view finally helped me make sense of why we call “syntactic logical relations” <em>syntactic</em>, when they are clearly semantics. (A problem I danced around in <a href="https://www.williamjbowman.com/blog/2023/03/24/what-is-logical-relations/">my previous post on logical relations</a>.)</p>
<p>This perspective is also useful, for two reasons. The first is that reasoning purely syntactically, while very general, prevents you from importing any other reasoning principles from any other domain. By viewing the typing system as syntax, and then building a model of it (and by necessity, the programming language terms) in, say, set theory, we can import all set-theoretic reasoning in our attempts to reason about our type system. But more than that, we can reinterpret the syntax freely, to prove general results. While I might have written a type system using syntax that looks like numbers, I could build a model that interprets that type system as over strings, and know that actually the entire system is safe for strings, too. Appropriately generalized, I wouldn’t need to do any additional proofs.</p>
<p>Unfortunately, this double meaning of the word syntax seems to be completely taken for granted by some. nLab is a good example of this. To quote from the introduction to the nLab model theory page:</p>
<blockquote>
<p>On the one hand, there is <a href="https://ncatlab.org/nlab/show/internal+language">syntax</a>. On the other hand, there is <a href="https://ncatlab.org/nlab/show/structure">semantics</a>. Model theory is (roughly) about the relations between the two: model theory studies classes of <a href="https://ncatlab.org/nlab/show/models">models</a> of <a href="https://ncatlab.org/nlab/show/theories">theories</a>, hence classes of “<a href="https://ncatlab.org/nlab/show/structures+in+model+theory">mathematical structures</a>”.</p></blockquote>
<p>What’s most interesting about this quote isn’t what it says, but what it links to. The link for “syntax” is to the page on the internal logic of a category. From the software perspective, this is not syntax, but semantics. How on earth could it be syntax? The link for “semantics” is to the page on structure, the idea of equipping a category with a particular functor. How on earth is that any more semantics than the original abstract nonsense version of syntax?</p>
<p>Before I understood “syntax”, I couldn’t make any sense of that, but now I’m beginning to understand. The internal logic of a category in some sense must be able to express the grammar of a language, and the judgments of a language, but in a purely syntactic way—in the same way that when I write down the grammar and typing rules of a language, I don’t refer to any interpretation of those symbols beyond the way I combine them on the page. Then the semantics or structure is the particular functor over that category, providing an interpretation, a semantics, of that original category (the syntax).</p>
<p>Anyway, now I think I’m ready to understand what a model is.</p>
<h1>In What Sense is WebAssembly Memory Safe?</h1>
<p>William J. Bowman, 2023-05-18</p>
<p>I’ve been trying to understand the semantics of memory in WebAssembly, and realized that “memory safety” doesn’t mean what I expect in WebAssembly.</p>
<!-- more-->
<h2 id="what-is-memory-safety">What is memory safety?</h2>
<p>Here are some definitions.</p>
<blockquote>
<p>Memory safety is a feature of programming languages that prevents certain types of memory-access bugs, such as out-of-bounds reads and writes, and use-after-free bugs. In an app that manages a list of to-do items, for example, an out-of-bounds read could involve accessing the nonexistent sixth item in a list of five, while a use-after-free bug could involve accessing one of the items on an already deleted to-do list.</p></blockquote>
<p> <a href="https://spectrum.ieee.org/memory-safe-programming-languages">https://spectrum.ieee.org/memory-safe-programming-languages</a></p>
<blockquote>
<p>Memory safety is the state of being protected from various software bugs and security vulnerabilities when dealing with memory access, such as buffer overflows and dangling pointers. For example, Java is said to be memory-safe because its runtime error detection checks array bounds and pointer dereferences.</p></blockquote>
<p> <a href="https://en.wikipedia.org/wiki/Memory_safety">https://en.wikipedia.org/wiki/Memory_safety</a></p>
<h2 id="memory-unsafety-in-wasm">Memory (un)safety in Wasm</h2>
<p>WebAssembly (Wasm) is a <em>language</em> that guarantees “type safety … [preventing] invalid calls or illegal accesses to locals, … memory safety, and … inaccessibility of code addresses or the call stack”.</p>
<p>(Technically, the Wasm paper describes Wasm as a binary code format that happens to be presented as a language.)</p>
<p>Formally, a whole Wasm program that type checks is guaranteed to either be a well-typed value, or take an evaluation step to a well-typed program, or evaluate to the well-known dynamic error “trap”.</p>
<p>This is in contrast to an unsafe language like C. A well-typed C program might take a step to a well-typed program, or it might evaluate to a value of arbitrary type or no type. For example, a well-typed program of type <code>char</code> that reads from a buffer might evaluate to a well-typed <code>char</code>, or it might evaluate to an arbitrary integer that does not correspond to any character because you were reading uninitialized memory.</p>
<p>For example, consider the following C program.</p>
<pre><code>// unsafe.c
#include <unistd.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char** argv) {
  char* buf = malloc(0);              // a zero-byte buffer
  memcpy(buf, "Hello world\n", 12);   // out-of-bounds write: undefined behaviour
  write(1, buf, 12);
  return 0;
}</code></pre>
<p>(compiled with <code>clang -o unsafe.exe unsafe.c</code>; run with <code>./unsafe.exe</code>)</p>
<p>This program creates a buffer of size <code>0</code>, writes “Hello world\n” to it, and tries to print that to standard out. The program printed “Hello world” when I ran it, but it’s undefined behaviour, so anything could happen. I tried writing a loop that <code>malloc</code>d lots of memory and wrote arbitrary numbers, but never managed to crash the program. Still, it’s not memory safe.</p>
<p>The equivalent Wasm program is below.</p>
<pre><code>;; safe.wat
(module
  (import "wasi_unstable" "fd_write" (func $fd_write (param i32 i32 i32 i32) (result i32)))
  (memory 0)
  ;;(memory 1)
  (export "memory" (memory 0))
  (data (i32.const 0) "Hello World\n")
  (func $main (export "_start")
    (i32.store (i32.const 12) (i32.const 0))   ;; iov_base: string starts at address 0
    (i32.store (i32.const 16) (i32.const 12))  ;; iov_len: the string is 12 bytes
    (call $fd_write (i32.const 1) (i32.const 12) (i32.const 1) (i32.const 20))
    drop))</code></pre>
<p>(run with <code>wasmtime safe.wat</code>)</p>
<p>In this example, we create a string “Hello World\n” at address 0 in the module’s memory. We then create (encode) a new <code>iovs</code> just after it, starting at address <code>12</code>, with a pointer to address <code>0</code> and length <code>12</code>. Then we call <code>fd_write</code>, from the wasi API.</p>
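<p>The byte layout those two stores create can be sketched in Python with <code>struct</code> (a simulation of the layout only, assuming little-endian 32-bit fields as Wasm uses):</p>

```python
import struct

# Simulate the module's linear memory and the two i32.store instructions.
memory = bytearray(64)
memory[0:12] = b"Hello World\n"         # the data segment at address 0

struct.pack_into("<i", memory, 12, 0)   # (i32.store (i32.const 12) (i32.const 0)):  iov_base = 0
struct.pack_into("<i", memory, 16, 12)  # (i32.store (i32.const 16) (i32.const 12)): iov_len  = 12

# What fd_write reads back from the iovs encoded at address 12:
base, length = struct.unpack_from("<ii", memory, 12)
print(memory[base:base + length].decode())  # Hello World
```

The <code>iovs</code> is not a first-class value anywhere; it exists only as this byte pattern inside the one linear memory.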
<p>Unfortunately, we declared the memory size to be <code>0</code>, so trying to allocate this string fails, traps safely, and the process exits with an error message.</p>
<p>So Wasm is memory safe, right?</p>
<p>Well, sort of, but there’s a pretty key distinction here.</p>
<p>In C, we are creating a new pointer with <code>malloc</code>. We are allocating a new data structure, then using it (unsafely).</p>
<p>In Wasm, there is exactly one memory for the entire module. Inside that memory, we encode 2 data structures: our string, and the <code>iovs</code> structure used by <code>fd_write</code>. All accesses to the global memory are safe. But not all accesses to the encoded data structures are.</p>
<p>Most applications will create data structures within the memory. That’s what our call to <code>fd_write</code> did. The two <code>store</code>s actually create an <code>iovs</code> structure in the global memory. We have no guarantees, within Wasm, about that data structure.</p>
<p>For example, here’s our Hello World program in Wasm which uses the <code>memory</code> safely and correctly, but creates an <code>iovs</code> whose length is claimed to be 100, larger than the actual string.</p>
<pre><code>;; unsafe.wat
(module
  (import "wasi_unstable" "fd_write" (func $fd_write (param i32 i32 i32 i32) (result i32)))
  (memory 1)
  (export "memory" (memory 0))
  (data (i32.const 0) "Hello World\n")
  (func $main (export "_start")
    (i32.store (i32.const 12) (i32.const 0))    ;; iov_base: string starts at address 0
    (i32.store (i32.const 16) (i32.const 100))  ;; iov_len: claims 100 bytes, but the string is 12
    (call $fd_write (i32.const 1) (i32.const 12) (i32.const 1) (i32.const 20))
    drop))</code></pre>
<p>(run with <code>wasmtime unsafe.wat</code>)</p>
<p>When I run this, I get “Hello world\nd” printed to stdout. I have no idea where that trailing <code>d</code> comes from, and it didn’t crash, suggesting it read uninitialized memory of some kind.</p>
<p>Arguably, this is cheating: Wasm does not and cannot make claims about external system functions, and <code>wasi</code> is unstable. But IMO the root of the error isn’t really about <code>wasi</code>.</p>
<p>Really, the root cause of this error is memory unsafety, but of a data structure encoded within a Wasm module. In a truly memory-safe language, if I try to access the 100th element of a 12-character string, I get an error:</p>
<div class="brush: sh">
<pre><code>> racket
Welcome to Racket v8.9 [cs].
> (string-ref "Hello world\n" 100)
; string-ref: index is out of range
; index: 100
; valid range: [0, 11]
; string: "Hello world\n"
; [,bt for context]</code></pre></div>
<p>But that doesn’t happen in Wasm.</p>
<p>Wasm memory safety doesn’t apply to <em>data structures</em> implemented (encoded) within the <code>memory</code>. It only applies to the module’s <code>memory</code>, which is protected from other modules, even those running in the same process’s virtual address space.</p>
<p>This means Wasm modules are protected from each other, and so this kind of memory unsafety probably isn’t a security risk, only a cause of logic bugs.</p>
<p>In Wasm, data structures have to be encoded anyway, since Wasm doesn’t provide any kind of structured data primitives; you only have integers, and some integers are interpreted as addresses into <code>memory</code>. But, when you encode such data structures in the <code>memory</code> and use them incorrectly, you have no guarantees about what happens. You could read some arbitrary data (from your own module), or read some uninitialized memory (from your own module). I.e., you get out-of-bounds reads and writes.</p>
<p>In another view of this, <code>memory</code> is the only data structure in Wasm, and it is memory safe. That’s all the language can be responsible for; if you go about encoding weird things inside that data structure, errors are likely. But this doesn’t seem like what people would expect when they hear “memory safe”. At least, it’s not what I expected at first.</p>What is logical relations?urn:https-www-williamjbowman-com:-blog-2023-03-24-what-is-logical-relations2023-03-24T23:32:03Z2023-03-24T23:32:03ZWilliam J. Bowman
<p>I have long struggled to understand what a logical relation is. This may come as a surprise, since I have used logical relations a bunch in my research, apparently successfully. I am not afraid to admit that despite that success, I didn’t really know what I was doing—I’m just good at pattern recognition and replication. I’m basically a machine learning algorithm.</p>
<p>So I finally decided to dive deep and figure it out: what is a logical relation?</p>
<p>As with my previous note on realizability, this is a copy of my personal notebook on the subject, which is NOT AUTHORITATIVE, but maybe it will help you.</p>
<!-- more-->
<p>Here’s my working definition of a logical relation:</p>
<ol>
<li>A realizability semantic model,</li>
<li>built of predicates over syntax,</li>
<li>that <em>reflects</em> judgments and structures from semantics to syntax.</li></ol>
<p>Point 1 is subtle; it implies that the logical relation is both a model, and a <a href="/blog/2022/10/05/what-is-realizability/">realizability semantics</a>. Unfortunately, I still don’t know what a model is, so I’m going to have to work with the following probably wrong oversimplification: the logical relation must take (syntactically) equal terms to (semantically) equal terms. Which notion of syntactic equality though? I’m not sure, and I’m going to ignore it for now.</p>
<p>Point 2 is actually more specific than necessary. We don’t need predicates over syntax specifically, but really over some base model. It’s easier for me to think of this as “syntax”, though.</p>
<p>Point 3 is quite difficult to make precise without recasting a lot of this in a mathematical framework. Jon Sterling gave me the following helpful definition:</p>
<blockquote>
<p>A logical relation on a model M (viewed as a category) is then a model that is constructed in the following way:</p>
<ol>
<li>
<p>Choose some functor R : M —> E where E is a sufficiently structured category (e.g. the category of sets, or something else!). The most basic example of a functor R is the “global sections functor” M —> Set, which sends every type in M to the set of <em>closed elements</em> of that type. This is exactly the usual “non-Kripke logical relations”; to get Kripke logical relations, you replace Set with a functor category (presheaf category) and choose a more interesting functor R.</p>
<li>
<p>Now define a new category G, as a category whose objects are pairs of an object A of M, together with a subobject of R(A). A morphism in G from (A,A’) to (B, B’) is given by a morphism (f : A -> B) that sends elements satisfying A’ to elements satisfying B’.</p></li>
<li>
<p>You have to show that the category G is actually a model of your language (e.g. show that it has function spaces, booleans, whatever). Doing so is the FTLR.</p></li></ol>
<p>Note that there are some ways to generalize the situation above, but this is basically what logical relations are.</p></blockquote>
<p>Point 3 is also more specific than necessary; “syntax” can be generalized to “base model”.</p>
<p>Despite the complexity, we can see point 3 in action in some examples below.</p>
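<p>As a toy warm-up before the history, here is what such a type-indexed predicate looks like as executable Python (my own construction, not any paper’s definition; the finite <code>samples</code> tuple stands in for quantification over all related arguments, so only first-order types are exercised honestly):</p>

```python
def in_rel(ty, v, samples=(0, 1, 2)):
    """Type-indexed predicate over "semantic" values (Python objects).

    `ty` is either "base" or ("->", dom, cod).
    """
    if ty == "base":
        return isinstance(v, int)  # base realizers: plain numbers
    _, dom, cod = ty
    # The "logical" condition: v is related at dom -> cod iff it
    # sends every argument related at dom to a result related at cod.
    return callable(v) and all(
        in_rel(cod, v(a)) for a in samples if in_rel(dom, a)
    )

arrow = ("->", "base", "base")
assert in_rel(arrow, lambda x: x + 1)        # reflects the structure
assert not in_rel(arrow, lambda x: "stuck")  # lands outside the relation
```

<p>Membership at a function type is exactly the structure-reflection of point 3: things classified as functions in the syntax must act as functions in the semantics on related inputs.</p>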
<!-- https://types.pl/@wilbowma/110079663033785019-->
<h2 id="what-are-logical-relations-historically">What are logical relations, historically?</h2>
<h3 id="tait1967---intensional-interpretations-of-functionals-of-finite-type-i">tait1967 - Intensional interpretations of functionals of finite type I</h3>
<p>Logical relations are sometimes called “Tait’s Method”, dating back to <a href="https://doi.org/10.2307/2271658">Tait</a>, as far as I can tell.</p>
<p>In this paper, Tait proves that System T with bar induction is a conservative extension of intuitionistic analysis U_1, which is intuitionistic arithmetic plus quantification over functions plus the axiom (schema) of choice plus bar induction. This conservative extension property is the semantic property of interest. The proof starts with a proof that T without bar induction is a conservative extension of just intuitionistic arithmetic (no choice or bar induction).</p>
<p>To do this, Tait develops a type-indexed predicate over System T terms (without bar induction), providing a U_0 term for all T terms of each type. These predicates M_t, C_t, and E_t are (I think) what we refer to as a logical relation. In particular, the C_t relation provides the interpretation of T values of type t, M_t seems to deal with variables, and E_t seems to be a binary relation defining semantics (“weak α-definitional equality”) of terms.</p>
<p>Theorem V (page 205) uses this logical relation to prove that, for all semantic values at the same type, (weak α-) definitional equality is decidable: they either are or are not related in E_t. This seems to be the key point: the definitional equality is reflected out of the semantics of terms, so it can apply to the syntax of terms.</p>
<p>This use of logical relations seems to also be a realizability semantics, since it assigns syntactic types to a collection of semantic terms, by induction over syntactic types, where the realizers are a subset of all possible semantic terms.</p>
<p>However, it seems to be more than a realizability semantics, too. What seems very important in this paper is that the semantics preserves structure, namely definitional equality. Perhaps implicitly though, other pieces are important. For example, T functions are interpreted as U functions, although it’s not clear to me that this is critical.</p>
<p>This is in contrast to Kleene’s (kleene1945) realizability, which did not seem concerned with structure, but only the existence of the realizers.</p>
<h3 id="plotkin1973---lambda-definability-and-logical-relations">plotkin1973 - Lambda-definability and logical relations</h3>
<p><a href="https://www.cl.cam.ac.uk/~nk480/plotkin-logical-relations.pdf">Plotkin</a> seems to be responsible for the name, and perhaps rediscovering logical relations in the context of programming languages.</p>
<p>Plotkin helpfully gives us a definition of “logical”, as well, and it seems quite importantly related to part 3 of my working definition. Plotkin defines a relation R as logical if it is:</p>
<ol>
<li>a subset of any D_k from the carrier of any D∞ model (this seems to correspond to “admissible relations” in modern logical relations parlance);</li>
<li>the relation is preserved by functions in D. That is, the relation holds on a function f in D_k iff for all arguments x, R(x) implies R(f x) (extended to the n-ary case for n-ary relations).</li></ol>
<p>This suggests that it is important the logical relation is somehow interpreting syntactic structures as semantic structures, as in the case of Tait’s model interpreting syntactic functions as semantic functions. More generally, we likely want this property of all structures in the languages: syntactic pairs are interpreted as semantic pairs, etc. Jon’s category theoretical definition seems to generalize Plotkin’s definition nicely.</p>
<p>This denotational logical relation also shows us a logical relation that is not defined over syntax. Instead, it is a relation over some arbitrary non-trivial D∞ model. The author mentions that since they can interpret syntax in a D∞ model, they informally treat the logical relation as over syntax sometimes, which I suppose could be made formal easily enough.</p>
<h2 id="how-is-logical-relations-used-in-pl">How is “logical relations” used in PL?</h2>
<h3 id="ahmed2006---step-indexed-syntactic-logical-relations-for-recursive-and-quantified-types">ahmed2006 - Step-Indexed Syntactic Logical Relations for Recursive and Quantified Types</h3>
<p><a href="https://doi.org/10.1007/11693024_6">In this paper</a>, Ahmed is concerned with <em>syntactic</em> logical relations for recursive and quantified types, in particular for reasoning about contextual equivalence. Likely due to Ahmed’s work, this kind of syntactic logical relation seems to be what most people mean or think when they say “logical relation”, although that may be changing.</p>
<p>The desired property of the logical relation then is that two related semantic terms should be contextually equivalent in the syntax. That is, the logical relation reflects (from semantics to syntax) equivalence.</p>
<p>Strangely (for a realizability model), this particular syntactic logical relation also reflects typing: semantic terms in the relation are also guaranteed to be well-typed in the syntax. In contrast, some uses of “logical relations” enable semantic terms to be syntactically ill-typed. Such logical relations might be better called realizability models, although they do still reflect some structure, so perhaps reflecting typability is not a critical point of reflecting structure.</p>
<p>Ahmed, in the introduction, points out an interesting distinction: logical relations can be either denotational or syntactic. Syntactic logical relations model syntax as sets of syntactic values such that some property holds over that syntax; this is useful for proving properties of the operational semantics directly. Denotational logical relations instead model syntax as denotational objects, such as, <em>e.g.</em>, sets of set-theoretic functions over elements of a D∞ model in plotkin1973; this is useful for easily proving meta-theoretic properties by reflecting properties of the denotation into the syntax, but not necessarily for proving anything about the operational semantics directly.</p>
<p>For example, Tait uses a “denotational logical relation” into intuitionistic analysis to prove that definitional equality of System T is decidable—the definition of definitional equality, in the model, and its proof of decidability, are reflected back into the syntax; this requires no operational semantics at all. Plotkin uses a denotational logical relation, into domain theory, to show that certain λ-calculus constructs are or are not definable—existence of a term in the logical relation is reflected into the syntax as a definable expression. Neither of these is a syntactic logical relation; the semantic values never mention syntactic values directly.</p>
<p>Ahmed uses a “syntactic logical relation” to prove something about the operational semantics, namely, to prove contextual equivalence (an operational notion), indirectly. Direct proofs of contextual equivalence are difficult. So instead, a semantic proof of equivalence is reflected back into the syntax as a proof of contextual equivalence. This requires structuring the logical relation into a denotation of sets of syntactic terms that evaluate in the operational semantics, so that being in the relation tells us something about evaluation in the operational semantics, which tells us something about contextual equivalence.</p>
<h3 id="abel2018---decidability-of-conversion-for-type-theory-in-type-theory">abel2018 - Decidability of conversion for type theory in type theory</h3>
<p><a href="https://doi.org/10.1145/3158111">Abel <em>et al.</em></a> define a syntactic logical relation for typed, reducible (and equivalent) terms, to prove decidability of conversion for type theory. Here, the use of syntactic logical relation is important for proving a particular conversion algorithm over the syntax is decidable.</p>
<p>The interesting feature of this logical relation is the generalization from a model inductively defined over types, to inductively defined over judgments. This demonstrates a weakness in my working definition of logical relation and realizability, since I defined “realizability” in terms of models inductively defined over types.</p>
<h3 id="timany2022---a-logical-approach-to-type-soundness">timany2022 - A Logical Approach to Type Soundness</h3>
<p><a href="https://iris-project.org/pdfs/2022-submitted-logical-type-soundness.pdf">This paper</a> is interesting because it uses a syntactic logical relation that intentionally does not reflect typing, as many syntactic logical relations do. Semantically valid terms are not necessarily syntactically valid. In other ways, it looks very much like a logical relation: syntactic pairs are semantic pairs, sums sums, functions functions, etc.</p>
<p>The key property this paper is interested in is type safety: all well-typed terms are well-defined in the operational semantics, i.e., they evaluate to values or well-defined errors or fail to terminate, but importantly, do not get stuck. “in <em>the</em> operational semantics” is important to understanding why this is a syntactic logical relation; it must model terms as sets of syntactic values to reason about the operational semantics given in the paper.</p>
<p>However, one could imagine proving a slightly different form of type safety with a denotational logical relation. Giving a logical relation into an arbitrary model with a well-defined notion of evaluation would be implicitly a proof of type safety: that <em>there exists</em> a model that is type safe. The ability to reflect from semantics to syntax provides a mechanism for constructing that evaluation over syntax. So while the denotational logical relation provides no direct proof about the operational semantics, it may provide a mechanism for a type-safe-by-construction operational semantics. (This reflecting evaluation out of the semantics seems very related to the idea of normalization-by-evaluation, but I’m not clear on this.)</p>What is realizability?urn:https-www-williamjbowman-com:-blog-2022-10-05-what-is-realizability2022-10-05T21:54:39Z2022-10-05T21:54:39ZWilliam J. Bowman
<p>I recently decided to confront the fact that I didn’t know what “realizability” meant. I see it in programming languages papers from time to time, and could see little rhyme or reason to how it was used. Any time I tried to look it up, I got some nonsense about constructive mathematics and Heyting arithmetic, which I also knew nothing about, and gave up.</p>
<p>This blog post is basically a copy of my personal notebook on the subject, which is NOT AUTHORITATIVE, but maybe it will help you.</p>
<!-- more-->
<p>My best understanding of realizability right now, in programming languages (PL) terms, is:</p>
<ol>
<li>A technique for assigning each <em>syntactic type</em> to a collection of <em>semantic terms</em>;</li>
<li>By <em>induction</em> over syntactic types;</li>
<li>Where the semantic terms that are <em>realizers</em>—i.e., included in the collection related to some syntactic type—are a sub-collection of all possible terms in the semantic domain. That is, there are valid semantic terms not associated with any syntactic type.</li></ol>
<p>I use the word “collection” rather than “set” to avoid invoking set theory.</p>
<p>Graphically, we can represent this as follows:</p>
<div class="figure"><img src="/img/realizability.png" alt="" title="Realizability" />
<p class="caption"></p></div>
<p>The point of the technique is that clause 2 gives us a proof technique by induction, and clause 3 means we can relate the collection of terms (or proofs) to some other well-known collection. This yields a proof technique for metatheoretic properties about the collection, such as that there are only terminating terms in the collection of realizers, or there are only recursive functions and therefore some classical things remain unprovable.</p>
<p>I’m not entirely sure that clause 2, induction, is necessary, and I can’t find anything explicit about clause 3, but they seem to be true historically and in many uses of the term.</p>
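<p>Restated as executable Python (a toy of my own, not drawn from any paper; the finite argument sample is an illustration-only stand-in for quantifying over all realizers), the three clauses look like this:</p>

```python
def realizes(ty, v):
    """Clause 2: defined by induction over the syntactic type `ty`."""
    if ty == "nat":
        return isinstance(v, int) and v >= 0
    if isinstance(ty, tuple) and ty[0] == "->":
        _, dom, cod = ty
        return callable(v) and all(realizes(cod, v(a))
                                   for a in range(3) if realizes(dom, a))
    return False

# Clause 1: each syntactic type picks out a collection of semantic terms.
assert realizes("nat", 7)
assert realizes(("->", "nat", "nat"), lambda n: n * 2)

# Clause 3: the semantic domain is strictly larger than the realizers;
# these perfectly good semantic values realize neither type above.
assert not any(realizes(t, v)
               for t in ("nat", ("->", "nat", "nat"))
               for v in (-1, "junk", lambda n: "stuck"))
```

<p>The point of clause 3 is visible in the last assertion: the semantic domain (here, all Python values) contains things no syntactic type can claim.</p>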
<p>Okay so how did I get to this understanding?</p>
<h2 id="what-is-realizability-historically">What is realizability, historically?</h2>
<h3 id="kleene1945---on-the-interpretation-of-intuitionistic-number-theory">kleene1945 - On the Interpretation of Intuitionistic Number Theory</h3>
<p>Realizability seems to come from Kleene’s paper <a href="https://doi.org/10.2307/2269016">“On the Interpretation of Intuitionistic Number Theory”</a>. I say “seems to” as Kleene attributes the “detailed investigation of the notion of realizability” to David Nelson, attributes several of the results in the paper to Nelson, and claims that the main results of the paper are joint work with Nelson. But the paper only has Kleene’s name on it, and Kleene claims in the first footnote that they introduced the idea of realizability to Nelson in a seminar. So anyway, realizability seems to come from Kleene, and this is the canonical paper cited for the technique.</p>
<p>In this paper, realizability is quite specific. It’s a technique that takes an intuitionistic first-order logic formula about Peano arithmetic (Heyting arithmetic) and constructs a natural number from it, representing the (constructive) proof of that formula. Only provable formulas are realized. The point of this exercise is to prove various metatheorems about the realized language: whether it is consistent, and which of the intuitionistic formulae are provable or unprovable.</p>
<p>Intuitively, something is unprovable if the formula exists, but a realization of it does not. This can be shown by connecting the formula to the set of realizers (in this case, natural numbers), and showing that there cannot exist a related natural number (or, more often, a function on natural numbers represented by its Gödel number) with the properties required of the realizability interpretation. The simplest example: since “false” is unprovable (it has no realization, by construction), the intuitionistic logic is consistent.</p>
<p>This also lets us prove something about the class of all provable statements. Since we have a method for constructing something from any provable (or true) statement, we can say something about the set of all provable statements in relation to the realizers. Kleene mentions one consequence is that the intuitionistic calculus cannot prove the existence of any function other than a general recursive function, since those are the only functions constructed in the realizability interpretation. This tells us, for example, that the intuitionistic calculus is different from classical set theory, which contains other functions.</p>
<p>An important detail in this paper that clarifies the distinction between the intuitionistic and the classical happens in Clause 6, on page 113. This is the definition of the realizability interpretation for existential quantification ∃x.A(x). This has a realization if, for some <strong>x</strong>, A(<strong>x</strong>) has a realization. It’s important to notice that this second “for some <strong>x</strong>” quantification happens in the metalanguage, namely, classical set theory, and therefore could be chosen by Choice. Kleene discusses this on page 118, where he uses the word “classically” as a modifier on various quantifiers to remind us that, when working with the quantification and realizers directly, we are working in a classical system in which intuitionistic proofs also exist.</p>
<p>What seems to be going on here is that the realizers are something like the intuitionistic subset of classical set theory. I think that statement isn’t exactly true; Kleene uses classical choice when working with the realizers to show there are unprovable theorems. For example, a realizer parameterized over (classically) <em>all</em> variables may not correspond to an intuitionistic formula. So it’s not that the realizers are only intuitionistic, I think. But any particular realizer is (must be)? The important point may be that the realizers are a subset of the whole system, and thus we can prove interesting metatheorems that rely on distinguishing the realizers (and therefore, the formulae they realize) from all the things in the full system.</p>
<h3 id="amadio1998---domains-and-lambda-calculi-chapter-15">amadio1998 - Domains and Lambda-Calculi, Chapter 15</h3>
<p>Chapter 15 of Amadio and Curien’s book <a href="https://doi.org/10.1017/cbo9780511983504">“Domains and Lambda-Calculi”</a> introduces realizability in its historical context. The introduction formalizes Kleene’s work as an example, and discusses its use.</p>
<p>They emphasize two things, which seem to confirm some of my understanding:</p>
<ol>
<li>The realizability relation is defined inductively over <em>formulas</em>, and relates <em>formulas</em> to <em>proofs</em>.</li>
<li>Its use lets us reason about all <em>proofs</em> in the system.</li></ol>
<p>This is the best definition of realizability I’ve seen, and it applies both to Kleene’s original work and to uses in PL.</p>
<p>The authors point out that Kleene’s original goal was to prove consistency. They then confirm my above intuitions, that the realizability interpretation also lets us prove metatheorems about what is provable/unprovable in the realized system. However, they note that one application of this is to find <em>unprovable</em> <em>true</em> statements, which can be consistently axiomatized back into the original system. There are proofs in the set of realizers, i.e., true statements, that are never constructed by the realizability interpretation. These could be added back to the original system to enrich it.</p>
<p>This latter use seems to confirm one feature of realizability that isn’t explicitly stated anywhere, but seems to be true of all realizability interpretations I’ve seen: that the realizers are a strict subsystem of some larger formal system.</p>
<h2 id="how-is-realizability-used-in-pl">How is “realizability” used in PL?</h2>
<p>In programming languages, we’re not often concerned with intuitionistic vs classical logic; we’re working constructively by default. In fact, many of the uses of “realizability” in PL don’t seem to be related to logic at all, but to modeling well-typed programs. And while, sure, these are related by Curry-Howard, the difference seems important to me. So what does realizability mean in this context?</p>
<p>In most uses in PL, the important feature seems to be clause 3 in my definition above: the collection of all values is larger than the set of realizers. In PL, this suggests that we’re ascribing types to “untyped” terms, and the realizers are those that are semantically well typed, but not necessarily syntactically well typed. The full collection contains also untyped terms, and we can therefore prove through realizability that the type system rules out ill-typed terms.</p>
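<p>A tiny (invented) Python example of that gap between semantic and syntactic typing:</p>

```python
def realizes_nat(v):
    # semantic typing: membership in the collection of nat realizers
    return isinstance(v, int) and v >= 0

# A simple syntactic type system rejects a conditional whose branches
# have different types, so this "program" is syntactically ill-typed...
value = 1 if True else "junk"

# ...but the term it evaluates to is semantically well typed: it is a
# realizer of nat, so realizability can still say something about it.
assert realizes_nat(value)
```

<p>The type system rules out this term even though, semantically, nothing goes wrong; realizability gives us the vocabulary to state that precisely.</p>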
<p>There do seem to be some examples in PL that are explicitly relating classical and intuitionistic ideas, namely those trying to import constructive interpretations of classic logic. I’m not really interested in those, and I think the connection to realizability is much more clear in those applications, so I’ll ignore that area.</p>
<p>Let’s look at some examples.</p>
<h2 id="benton2010---realizability-and-compositional-compiler-correctness-for-a-polymorphic-language">benton2010 - Realizability and Compositional Compiler Correctness for a Polymorphic Language</h2>
<p>In <a href="https://nickbenton.name/cccmsrtr.pdf">“Realizability and Compositional Compiler Correctness for a Polymorphic Language”</a>, Benton and Hur define a “realizability” interpretation of System F types, realized by terms in a low-level language, for proving some compiler correctness properties. The terms realize the types, and this lets us talk about which low-level programs are valid to link with, without restricting the set of linkable programs to only those generated by the compiler.</p>
<p>This has lost all connection to intuitionistic vs classical logic, but I suppose it keeps the key features of the technique: types (formulas) of one language are realized by terms in another, and there is some concern that the realizers should be a subset of all terms. Not all low-level programs should be valid, but some set of them should be.</p>
<h2 id="nakano2000---a-modality-for-recursion">nakano2000 - A Modality for Recursion</h2>
<p><a href="https://doi.org/10.1109/LICS.2000.855774">“A Modality for Recursion”</a> was actually the start of my realizability journey. This paper starts by defining a collection of models (β-models) of the untyped λ-calculus. It then defines the class of realizability models, in terms of β-models, for an extrinsically typed λ-calculus with equi-recursive types. A realizability model is parameterized by a β-model, and is a relation inductively defined over types to their realizers, which are values drawn from the β-model.</p>
<p>So why is this realizability? Well, I don’t see anything to do with intuitionistic vs classical. But, the set of all values is larger than the set of realizers, which seems to be important to all uses of “realizability”, and important for this result in particular. In this paper, this is used to show that the dot modality rules out some valid β-model terms, namely those that would correspond to non-terminating λ terms.</p>
<p>Later in the paper, they define a “realizability interpretation”. This seems to be distinct from the collection of all realizability models in that they pick a particular set of realizers? So, it ought to be a realizability model, I guess? But they don’t say so explicitly. The interpretation is still quite heavily parameterized, but it does seem to fix or restrict the set of realizers. Anyway, this interpretation includes all the features of my definition above: it’s inductively defined over types, relating types to (a semantic model of) untyped λ terms, for the purposes of proving something about the collection of realizers as they relate to the collection of all untyped λ terms.</p>The A Means Aurn:https-www-williamjbowman-com:-blog-2022-06-30-the-a-means-a2022-06-30T17:25:55Z2022-06-30T17:25:55ZWilliam J. Bowman
<p>I have argued about the definition of “ANF” many times. I have looked at the history and origins, and studied the translation, and spoken to the authors. And yet people insist I’m “quacking” because I insist that “ANF” means “A-normal form”, where the “A” only means “A”.</p>
<p>Here, I write down the best version of my perspective so far, so I can just point people to it.</p>
<!-- more-->
<p>I want to answer three questions: what does the <em>A</em> mean, why does the <em>A</em> matter, and where does the <em>A</em> come from.</p>
<h2 id="what-does-the-a-mean">What does the <em>A</em> mean?</h2>
<p>The “A” in “A-normal form” refers to a particular formal object, named “A” (not “administrative”), with respect to which there is a normal form with certain useful properties. This form is “A normal”—none of the A reductions apply to terms in this form—hence, A-normal form.</p>
<p>While it’s true that the history of ANF is concerned with “administrative reductions” <em>in CPS</em>, this is an informal concept, modeled by the formal object “A”.</p>
<p>In truth, “A” is several formal objects, defined somewhat differently in at least 3 different papers. Only one of these is arguably called “administrative”, but is about CPS, and not what we now call ANF.</p>
<p>“A” appears in “The Essence of Compiling with Continuations”, page 5. Under the discussion of the CPS, optimization, and un-CPS diagram, the authors observe that this diagram begs for a completion, some direct process, “A”, that simply normalizes a term within the same language. This diagram is reproduced below:
<script type="math/tex; mode=display">
\begin{array}{ccc}
e & \overset{CPS}{\to} & e' \\
\overset{A}{\downarrow} && \overset{\beta}{\downarrow} \\
e_A & \overset{unCPS}{\leftarrow} & e_O
\end{array}</script> They ask: what set of reductions, call this set A, is such that normalizing with respect to A would produce a normal form, A-normal form, that characterizes the use of CPS in practice?</p>
<p>The same pattern appears in “Reasoning about Programs in Continuation-Passing Style”, page 1:</p>
<blockquote>
<p>Thus, we refine this question as follows: Is there a set of axioms, A, that extend the call-by-value λ-calculus such that: …</p></blockquote>
<p>The authors go on to define the set A, never calling it <em>administrative</em>, deriving A instead from the inverse CPS translation.</p>
<p>We could argue that Sabry’s thesis, Chapter 3, Section 1, “Administrative Source Reduction: The A-Reductions”, names the A-reductions “administrative”. He goes on to analyse those reductions considered to be the administrative ones, defining βlift and βflat in terms of CPS. He then defines in Definition 3.1, the administrative source reduction (A-reductions). However, these refer to reductions over CPS terms, and are distinct from the reductions considered for ANF. While they are the origin of ANF, they do not produce terms in what we now call ANF. A term in “administrative normal form” with respect to that set of reductions would actually be in CPS. That’s not what we mean when we say ANF; we mean normal with respect to the set A defined in “The Essence of Compiling with Continuations”.</p>
<p>Maintaining this distinction between the formal object A and the informal notion of administrative reductions is important for two reasons. First, it helps remind us that ANF is a form ultimately about normalizing a specific set of reductions, not the output of a particular translation, which is important in practice. Implementations often relax ANF until code generation, by omitting some of the A reductions, typically, A2 in “The Essence of Compiling with Continuations”—even that paper relaxes A2 in their implementation in the appendix, because A2 leads to exponential code duplication or requires object-language continuations (“join points”). It’s hard to even formally discuss this relaxation if we do not have the set of normalized reductions in mind. Second, the idea that ANF is free of “administrative” redexes is absurd, since the idea of the administrative redex is an informal concept: a reduction that isn’t really necessary but merely an artifact of the translation. It is easy to introduce such administrative redexes in ANF; e.g., <code>let x = y in x</code> contains an extra unnecessary ζ redex, but it is in ANF. It is, however, free of A reductions.</p>
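<p>To make “normal with respect to A” concrete, here is a sketch of an A-normalizer in Python (my own re-encoding, over tuples standing in for terms, in the style of the higher-order normalizer from “The Essence of Compiling with Continuations”; like that paper’s appendix implementation, it relaxes A2, normalizing <code>if</code> branches in place rather than duplicating the surrounding context into them):</p>

```python
import itertools

fresh = (f"t{i}" for i in itertools.count())  # fresh-variable supply

def normalize(e, k):
    """A-normalize expression `e`, passing the normalized term to `k`.

    Terms are tuples: ("let", (x, rhs), body), ("if", c, thn, els),
    applications (f, arg, ...), or atoms (variables and constants).
    """
    if isinstance(e, tuple) and e[0] == "let":
        _, (x, rhs), body = e
        # A1-style: re-associate nested lets outward, flattening them
        return normalize(rhs, lambda n: ("let", (x, n), normalize(body, k)))
    if isinstance(e, tuple) and e[0] == "if":
        _, c, thn, els = e
        # Relaxed A2: name the test, normalize the branches in place
        return name(c, lambda t: k(("if", t,
                                    normalize(thn, lambda v: v),
                                    normalize(els, lambda v: v))))
    if isinstance(e, tuple):  # application: all subterms must be trivial
        f, *args = e
        return name(f, lambda fv: names(args, lambda avs: k((fv, *avs))))
    return k(e)  # atoms are already trivial

def name(e, k):
    # A3-style: bind any non-trivial subterm to a fresh variable
    def bind(n):
        if isinstance(n, tuple):
            t = next(fresh)
            return ("let", (t, n), k(t))
        return k(n)
    return normalize(e, bind)

def names(es, k):
    if not es:
        return k(())
    return name(es[0], lambda v: names(es[1:], lambda vs: k((v,) + vs)))

# Flattening in action: a let nested in a let's right-hand side moves out.
result = normalize(("let", ("x", ("let", ("y", ("h", "z")), "y")), "x"),
                   lambda v: v)
assert result == ("let", ("y", ("h", "z")), ("let", ("x", "y"), "x"))
```

<p>Each clause corresponds to a reduction in the normalized set; dropping or relaxing clauses yields the relaxed forms (relaxed A2, monadic form) discussed in this post, which is easy to state precisely only when ANF is understood as a normal form with respect to such a set.</p>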
<h2 id="why-does-the-a-matter">Why does the <em>A</em> matter?</h2>
<p>I don’t actually care what the “A” means, or what the authors intended it to mean. I care that we think about ANF as a normal form, normal with respect to a specific set of reductions.</p>
<p>This most recent rant was triggered by a conversation with a reviewer, who, after observing that the “A” actually stood for “administrative”, asked whether our ANF translation could be decomposed into two translations, one that did everything but normalize the <code>if</code>s (handling <code>if</code> is annoying in ANF, as it either requires being clever or causes code duplication), and then separately handle <code>if</code>.</p>
<p>The answer is completely obvious… if you think about ANF as a normal form with respect to a set of reductions, and not as merely the output of some translation process, nor “CPS but, like, without administrative redexes”. Since ANF is a normal form with respect to “A”, we can easily decompose it into multiple normal forms, thus deriving several decomposed translations: remove the A reduction that normalizes <code>if</code>, and you get another normal form. Remove the rules that normalize <code>if</code> and nested <code>let</code>, and you get monadic form.</p>
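<p>The decomposition is easy to sketch. In the toy Python fragment below (my own illustrative representation, not Flanagan et al.’s), the normalizer is parameterized by the set of reductions: including the <code>if</code>-lifting rule yields an ANF-like form at the cost of duplicating the <code>let</code> body, while dropping that one rule leaves <code>if</code> in a right-hand side, a monadic-style form.</p>

```python
# Toy terms: ('num', n) | ('var', x) | ('if', e0, e1, e2) | ('let', x, rhs, body)
# (an illustrative representation of my own, just enough for the decomposition)

def flatten_let(t):
    # let x = (let y = e1 in e2) in e3 --> let y = e1 in (let x = e2 in e3)
    if t[0] == 'let' and t[2][0] == 'let':
        x, (_, y, e1, e2), e3 = t[1], t[2], t[3]
        return ('let', y, e1, ('let', x, e2, e3))

def lift_if(t):
    # let x = (if e0 e1 e2) in e3 --> if e0 (let x = e1 in e3) (let x = e2 in e3)
    # Note that e3 is duplicated: this is the rule implementations often omit.
    if t[0] == 'let' and t[2][0] == 'if':
        x, (_, e0, e1, e2), e3 = t[1], t[2], t[3]
        return ('if', e0, ('let', x, e1, e3), ('let', x, e2, e3))

def normalize(t, rules):
    """Normal form with respect to whichever set of reductions we choose."""
    if t[0] == 'let':
        t = ('let', t[1], normalize(t[2], rules), normalize(t[3], rules))
    elif t[0] == 'if':
        t = ('if',) + tuple(normalize(s, rules) for s in t[1:])
    for rule in rules:
        r = rule(t)
        if r is not None:
            return normalize(r, rules)
    return t

prog = ('let', 'x', ('if', ('var', 'b'), ('num', 1), ('num', 2)), ('var', 'x'))

# With the if rule: ANF-like, but the body ('var', 'x') is duplicated.
assert normalize(prog, [flatten_let, lift_if]) == \
    ('if', ('var', 'b'),
     ('let', 'x', ('num', 1), ('var', 'x')),
     ('let', 'x', ('num', 2), ('var', 'x')))

# Without it: a monadic-style form keeping if in a let right-hand side.
assert normalize(prog, [flatten_let]) == prog
```

<p>Each subset of the rules determines its own normal form, and hence its own translation; that is the whole point.</p>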
<p>But all of this is much more complicated to explain if you think of ANF as a particular translation or a particular syntactic form, and not as a normal form with respect to the set A. And that is very likely how you will think of ANF if you think <em>A</em> means <em>administrative</em>.</p>
<h2 id="where-does-the-a-come-from">Where <em>does</em> the <em>A</em> come from??</h2>
<p>Incidentally, I spoke with Amr after he read this blog post. The “A” originates in a result by Curry, who proved some theorems about any combinatory logic extended by a set <em>A</em> of ground equations: <a href="https://staff.fnwi.uva.nl/p.h.rodenburg/Varia/RelCLlam.pdf">https://staff.fnwi.uva.nl/p.h.rodenburg/Varia/RelCLlam.pdf</a></p>
<p>This led Matthias to ask Amr to create a set <em>A</em>, such that bla bla bla.</p>
<p>Amr admits he may have intended a pun between <em>A</em> and <em>administrative</em>, but doesn’t remember.</p>
<h1>How I Redex—Experimenting with Languages in Redex</h1>
<p><em>2019-10-06, William J. Bowman</em></p>
<p>Recently, I asked my research assistant, Paulette, to create a Redex model. She had never used Redex, so I pointed her to the usual tutorials:</p>
<ul>
<li><a href="https://redex.racket-lang.org/">https://redex.racket-lang.org/</a></li>
<li><a href="https://docs.racket-lang.org/redex/tutorial.html">https://docs.racket-lang.org/redex/tutorial.html</a></li>
<li><a href="https://docs.racket-lang.org/redex/redex2015.html">https://docs.racket-lang.org/redex/redex2015.html</a></li></ul>
<p>While she was able to create the model from the tutorials, she was left with the question “what next?”. I realized that the existing tutorials and documentation for Redex do a good job of explaining <em>how</em> to implement a Redex model, but fail to communicate <em>why</em> and <em>what</em> one does with a Redex model.</p>
<p>I decided to write a tutorial that introduces Redex from the perspective I take when working on language models: Redex as a tool for experimenting with language models. The tutorial was originally going to be a blog post, but it ended up quite a bit longer than is reasonable to read in a single page, so I’ve published it as a document here:</p>
<div style="text-align: center">
<h3><a href="/doc/experimenting-with-redex/">Experimenting with Languages in Redex</a></h3></div>
<h1>Untyped Programs Don't Exist</h1>
<p><em>2018-01-19, William J. Bowman</em></p>
<p>Lately, I’ve been thinking about various (false) dichotomies, such as typed vs untyped programming and type systems vs program logics. In this blog post, I will argue that untyped programs don’t exist (although the statement will turn out to be trivial).</p>
<p><a id="orgf07fde6"></a></p>
<h4 id="tldr">TLDR</h4>
<p>All languages are typed, but may use different enforcement mechanisms (static checking, dynamic checking, no checking, or some combination). We should talk about how to use types in programming—<em>e.g.</em> tools for writing and enforcing invariants about programs—instead of talking about types and type checking as properties of languages.</p>
<!-- more-->
<h2 id="table-of-contents">Table of Contents</h2>
<div id="table-of-contents">
<div id="text-table-of-contents">
<ul>
<li><a href="#orgf07fde6">1. TLDR</a></li>
<li><a href="#orgaf4345b">2. Some Context</a></li>
<li><a href="#orgdaebb53">3. Definitions</a></li>
<li><a href="#org124f9c8">4. Is X a Typed Language?</a></li>
<li><a href="#org8701978">5. But I Don't Get Type Errors in X!</a></li>
<li><a href="#orgbd43271">6. Untyped Programs Don't Exist.</a></li>
<li><a href="#org53d6b39">7. Conclusion</a></li>
<li><a href="#related">8. Related Reading</a></li></ul></div></div>
<p><a id="orgaf4345b"></a></p>
<h2 id="some-context">Some Context</h2>
<p>In most of my academic work, I work with “typed” languages. These languages have some nice properties for the metatheorist and compiler writer. Types lend themselves to strong automated reasoning, automatically eliminate large classes of errors, and simplify the job of whoever is reasoning about the programs. The downside is that the programmer must essentially statically prove properties of their program in such a way that a machine can understand the theorem and check the proof.</p>
<p>When I’m hacking, I write in “untyped” languages. I write programs in Racket, scripts in bash, plugins and tools in JavaScript, papers in LaTeX, build systems in Make, and so on. These languages lend themselves to experimentation and avoid the overhead of proving properties of the programs up front. The downside is that the computer cannot help the programmer, since the programmer has not communicated the invariants about the program in a way the computer can understand.</p>
<p>“But surely”, a type evangelist says, “the very same benefits of types for metatheory help one develop the program in the first place? Why do you hobble yourself by omitting types? Come join us in the land of light! Use types from the start!”</p>
<p>“Dear friend, I couldn’t agree more!”, I reply, “Types are invaluable to developing my programs, but your ‘typed’ language <strong>prevents</strong> me from writing down my types!”</p>
<p>"Well, certainly there are some limitations of typed languages," the type evangelist concedes, "but we could also choose to ignore the type system, create the uni-type, and program in the error monad. Now we have the benefits of both worlds!"</p>
<p>"Don’t you see,", I say excitedly, "that is just what I’m doing! My ‘untyped’ languages are, in fact, well-typed. My programs run implicitly in the error monad. What’s more, I am not required to <strong>prove</strong> it, for it is simply <strong>true</strong>."</p>
<p>A grave look comes over my interlocutor’s face. "But you forfeit all benefits of static typing. Your errors are reported later, and there are performance implications, and …"</p>
<p>"Exactly.", I interrupt. "We are not arguing about typing, for all programs are well typed. We are arguing about pragmatics."</p>
<p><a id="orgdaebb53"></a></p>
<h2 id="definitions">Definitions</h2>
<p>Before I can argue that untyped programs don’t exist, I need some near-formal definitions to work with. I posit that the following are reasonable, intuitive definitions of types and programs.</p>
<p><strong>Definition.</strong> An <em>expression</em> is a symbol or sequence of symbols given some interpretation.</p>
<p><strong>Example.</strong></p>
<ul>
<li><code>5</code> is an expression, whose interpretation is the number five.</li>
<li><code>e₁ + e₂</code> is an expression, whose interpretation is the mathematical addition function applied to expressions <code>e₁</code> and <code>e₂</code>.</li>
<li><code>function(): return 5;</code> is an expression, whose interpretation is a mathematical function that when applied to any number of arguments returns the expression <code>5</code>.</li></ul>
<p><strong>Definition.</strong> A <em>type</em> is a statement of the invariants of some expressions.</p>
<p><strong>Example.</strong></p>
<ul>
<li>a <em>register word</em> is a type describing the kinds of values that fit in an x86 register, such as a collection of 32 bits. A register word supports operations such as:
<ul>
<li>move a value of type register word into a register</li>
<li>move a value of type register word from one register into another</li></ul></li>
<li>a <em>pointer</em> is a type describing a memory address. It is either uninitialized or a valid memory address. A pointer supports operations such as:
<ul>
<li>initialization, giving an uninitialized pointer a value</li>
<li>dereference, reading the value of the memory address of an initialized pointer</li></ul></li>
<li>a <em>Nat</em> is a type describing an element of the set of natural numbers. A Nat supports operations such as:
<ul>
<li>addition</li>
<li>multiplication</li>
<li>subtraction, but only when subtracting a smaller natural number from a larger natural number</li></ul></li></ul>
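<p>As a concrete illustration of the <em>Nat</em> example, here is a sketch in Python (the class and its checks are hypothetical, purely illustrative) of a type whose invariants are enforced dynamically:</p>

```python
# A Nat whose invariants are enforced dynamically (illustrative sketch).
class Nat:
    def __init__(self, n):
        # The invariant of the type: a non-negative integer.
        if not isinstance(n, int) or n < 0:
            raise TypeError(f"{n!r} is not a natural number")
        self.n = n

    def __add__(self, other):
        return Nat(self.n + other.n)

    def __sub__(self, other):
        # Subtraction is defined only when subtracting a smaller (or equal)
        # Nat from a larger one; the constructor check above enforces this.
        return Nat(self.n - other.n)

assert (Nat(5) - Nat(3)).n == 2
try:
    Nat(3) - Nat(5)          # violates the subtraction invariant
    assert False
except TypeError:
    pass                     # enforced dynamically, as a type error
```

<p>The point is only that the invariants exist either way; the <code>raise</code> is one possible enforcement mechanism among several.</p>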
<p><strong>Definition.</strong> A <em>language</em> is a collection of expressions.</p>
<p><strong>Example.</strong></p>
<ul>
<li>arith is a language containing the following expressions <em>e</em>:
<ul>
<li><code>0</code>, <code>1</code>, <code>2</code>, …, and <code>-1</code>, <code>-2</code>, <code>-3</code>, …, each representing an integer</li>
<li><code>e₁ + e₂</code>, where <code>e₁</code> and <code>e₂</code> are integers</li>
<li><code>e₁ - e₂</code>, where <code>e₁</code> and <code>e₂</code> are integers</li></ul></li>
<li>JavaScript is a language, defined by the ECMAScript standard, and extended by various implementations.</li></ul>
<p><strong>Definition.</strong> A <em>program</em> is a collection of expressions from some language.</p>
<p><strong>Example.</strong></p>
<ul>
<li><code>5 + 5</code> is an arith program.</li>
<li><code>5</code> is a JavaScript program.</li></ul>
<p><a id="org124f9c8"></a></p>
<h2 id="is-x-a-typed-language">Is X a Typed Language?</h2>
<p>Is x86 assembly a typed language?</p>
<p>I say yes.</p>
<p>First, x86 assembly is a language. The language x86 assembly meets our definition of a language: it defines a collection of symbols or sequences of symbols given some interpretation. For example, <code>mov ax, bx</code> is an x86 assembly program that moves the contents of register <code>bx</code> to register <code>ax</code>.</p>
<p>Second, x86 is typed. Each expression in x86 assembly has invariants stated about it. For example, x86 defines the type “little endian”, which describes the particular encoding of binary data, such as numbers, over which operations like addition are defined. The division operation is typed: division is only defined when the denominator is non-zero. Attempting to divide by zero causes a type error (a dynamic exception).</p>
<p>I would make the same argument for every other language. C is a typed language. So is JavaScript. And Racket. And Haskell.</p>
<p><a id="org8701978"></a></p>
<h2 id="but-i-dont-get-type-errors-in-x">But I Don’t Get Type Errors in X!</h2>
<p>First, you probably do. Second, when you don’t, that’s a major problem.</p>
<p>Let’s visit x86 for a moment, to see dynamically enforced type errors. Recall that division is not defined when the denominator is zero. The result of division by zero in x86 is defined to be a general-protection exception, error code 0. That is a type error. It’s a type error describing that you attempted to divide by zero, and that this is ill-typed. It is a dynamically enforced type error.</p>
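<p>The same phenomenon is easy to observe in any language with checked dynamic semantics. Here is an illustrative sketch in Python (my choice of language here, not the post’s), where each failed run-time check surfaces as a dynamically enforced type error in this post’s sense:</p>

```python
# Each failed run-time check is a dynamically enforced type error,
# in this post's vocabulary (illustrative sketch).
def observe(thunk):
    try:
        return ('ok', thunk())
    except (TypeError, ZeroDivisionError, NameError) as e:
        return ('type error', type(e).__name__)

assert observe(lambda: 10 // 2) == ('ok', 5)
# division invariant: the denominator must be non-zero
assert observe(lambda: 1 / 0) == ('type error', 'ZeroDivisionError')
# addition invariant: the operands must have compatible types
assert observe(lambda: 'a' + 1) == ('type error', 'TypeError')
# naming invariant: names must be bound before they are used
assert observe(lambda: undeclared) == ('type error', 'NameError')
```

<p>None of these invariants disappear in a “dynamic” language; only the moment of enforcement changes.</p>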
<p>Let’s move to C, in which we can easily see two different kinds of type errors: static and unenforced. The language C includes expressions like <code>x=e</code>, where <code>x</code> is a declared name and <code>e</code> is an expression. The expression <code>x=e</code> raises a static error when <code>x</code> is undeclared; this is a static type error. It is a statically enforced invariant that names must be declared before they are used. Other invariants are not enforced at all, such as the notorious undefined behavior. For example, <code>bool b; if(b) { ... };</code> violates a C invariant, namely that uninitialized scalars are never used. However, C does not attempt to enforce this invariant, either statically or dynamically. The result of this sequence of symbols is undefined in C.</p>
<p><a id="orgbd43271"></a></p>
<h2 id="untyped-programs-dont-exist">Untyped Programs Don’t Exist</h2>
<p>First, a few more definitions based on the above arguments about the languages x86 and C.</p>
<p><strong>Definition.</strong> A <em>type error</em> is an error raised during the enforcement of a type, <em>i.e.</em>, during the enforcement of an invariant about an expression.</p>
<p><strong>Definition.</strong> <em>Undefined behavior</em> is the result of interpreting a non-expression, <em>i.e.</em>, a sequence of symbols that have no meaning because some invariant has been violated.</p>
<p><strong>Theorem.</strong> Untyped Programs Don’t Exist.</p>
<p><strong>Proof.</strong> Recall that programs consist of expressions from a language. Expressions are sequences of symbols that have meaning. But <em>undefined behavior</em> only results from non-expressions. As programs are composed of expressions, a program cannot have undefined behavior. Therefore, all programs obey the invariants required by the expressions in the language. That is, all programs are well typed, and untyped programs don’t exist. <strong>QED.</strong></p>
<p>I warned you it was a trivial theorem.</p>
<p><a id="org53d6b39"></a></p>
<h2 id="conclusion">Conclusion</h2>
<p>The theorem is trivial, but still useful because it helps us reframe our discussion.</p>
<p>Really, the statement is just a rephrasing of type safety: “well typed programs don’t go wrong”. For type safety, what we show is that programs exhibit only defined behavior. The difference is that type safety is typically thought of as a property of a <em>language</em>, and in particular, of statically typed languages. We should think about type safety differently: it is a property we must enforce of <em>programs</em>. Enforcing it via static typing of every program in the language is one useful way, but it is not the only way, and we cannot always hope to have type safety of a language.</p>
<p>Instead of arguing about untyped vs typed, a non-existent distinction, we should accept that all programs have invariants that must be obeyed, <em>i.e.</em>, all programs are typed. The argument we must have is about the pragmatics of types and type checking.</p>
<ul>
<li>how can we express types about complex languages like x86 and C</li>
<li>under what situations should we enforce types, <em>i.e.</em>, check types</li>
<li>is type checking useful</li>
<li>should we check types statically or dynamically</li>
<li>should we allow the programmer to circumvent types checking</li>
<li>is type checking decidable</li>
<li>should it be</li></ul>
<p><a id="related"></a></p>
<h2 id="related-reading">Related Reading</h2>
<h4 id="httpdxdoiorg107146bricsv7i3220167the-meaning-of-types---from-intrinsic-to-extrinsic-semantics-reynold-2000"><a href="http://dx.doi.org/10.7146/brics.v7i32.20167"><em>The Meaning of Types - From Intrinsic to Extrinsic Semantics</em></a> (Reynolds 2000)</h4>
<p>This paper proves equivalence of an intrinsically typed language, in which meaning is only assigned to well-typed programs, and an extrinsically typed language, in which programs are first given meaning and can separately be ascribed types and proved to inhabit those types. In the extrinsic semantics, Reynolds treats all programs as existing in the universal domain, and uses embedding-projection pairs essentially as run-time contracts, since, <em>e.g.</em>, only a function can be called. In my mind, this work essentially proves the same theorem as this blog post: even when the semantics of programs treats typing as happening “after” semantics, the semantics still requires types.</p>
<h4 id="httpsexistentialtypewordpresscom20110319dynamic-languages-are-static-languagesdynamic-languages-are-static-languages-harper-2011"><a href="https://existentialtype.wordpress.com/2011/03/19/dynamic-languages-are-static-languages/"><em>Dynamic Languages are Static Languages</em></a> (Harper 2011)</h4>
<p>This blog post argues that dynamic languages are just straitjacketed versions of static languages, and therefore they aren’t really a separate class of languages. In many ways, I agree with this blog post. Because “dynamic” languages lack any static enforcement, they can be a hindrance when you do know how to encode the types you want, and they can lead to weird, type-confusing programming patterns. My favorite example pattern is from Racket, where the value <code>#f</code> is sometimes used at type <code>bool</code> and sometimes at type <code>Maybe A</code>. This can lead to annoying problems with functions like <code>findf</code> over a list of <code>bool</code>s. However, I think it ignores some of the pragmatics. For example, while sum types give you incredible expressive power, tagged sums are very annoying to use in many languages that enforce static typing, while very simple to use when you are not required to statically prove a term inhabits a sum.</p>
<h4 id="httpsmediumcomsamthon-typed-untyped-and-uni-typed-languages-8a3b4bedf68con-typed-untyped-and-uni-typed-languages-tobin-hochstadt-2014"><a href="https://medium.com/@samth/on-typed-untyped-and-uni-typed-languages-8a3b4bedf68c"><em>On Typed, Untyped, and Uni-typed Languages</em></a> (Tobin-Hochstadt 2014)</h4>
<p>This blog post begins to get at some of the same criticisms of Harper’s view, and starts to talk about pragmatics.</p>
<h4 id="httpblogsperlorgusersovid201008what-to-know-before-debating-type-systemshtmlwhat-to-know-before-debating-type-systems-smith-2010"><a href="http://blogs.perl.org/users/ovid/2010/08/what-to-know-before-debating-type-systems.html"><em>What to Know Before Debating Type Systems</em></a> (Smith 2010)</h4>
<p>This blog post, reproduced in 2010 on a Perl blog, does a great job of breaking down some false dichotomies and fallacies in discussions about type systems. It goes into more depth than this article on some distinctions in type systems, when they are meaningful and when they are not, and I pretty much agree with it.</p>
<h4 id="httpswww2ccsneueduracketpubsdissertation-felleisenpdfthe-calculi-of-lambda-v-cs-conversion-a-syntactic-theory-of-control-and-state-in-imperative-higher-order-programming-languages-felleisen-1987"><a href="https://www2.ccs.neu.edu/racket/pubs/dissertation-felleisen.pdf"><em>The Calculi of Lambda-v-CS Conversion: A Syntactic Theory of Control and State in Imperative Higher-order Programming Languages</em></a> (Felleisen 1987)</h4>
<p>The abstract and chapter 1 of this dissertation have something to say about syntax and semantics, which I think are very related to the topic of this blog post. In particular, the thoughts on symbolic-syntactic reasoning I think are vital to understanding the trade-offs in different enforcements of typing.</p>
<h4 id="httphomessoicindianaedujsiekwhat-is-gradual-typingwhat-is-gradual-typing-siek-2014"><a href="http://homes.soic.indiana.edu/jsiek/what-is-gradual-typing/"><em>What is Gradual Typing</em></a> (Siek 2014)</h4>
<p>This blog post discusses some trade-offs in static vs dynamic typing, in the context of gradual typing. To me, advancement in gradual typing is crucial in making typing enforcement more pragmatic. However, I disagree with some of the “good points” in this blog post. For example, the point “Dynamic type checking doesn’t get in your way” is a bad point to me; it’s also an argument in favor of no enforcement and undefined behavior. I also find some examples of gradual typing to be great evidence of what is wrong with gradual typing. For example, the program <code>add1(true)</code> at the end of the post should be refuted by a gradual type system, but passes current “plausibility checkers”, even when <code>add1</code> has static type annotations requiring that its argument be a number.</p>
<h1>The reviewers were right to reject my paper</h1>
<p><em>2017-10-08, William J. Bowman</em></p>
<p>I submitted two papers to POPL 2018. The first, <a href="https://williamjbowman.com/papers#cps-sigma">“Type-Preserving CPS Translation of Σ and Π Types is Not Not Possible”</a>, was accepted. The second, “Correctly Closure-Converting Coq and Keeping the Types, Too” (draft unavailable), was rejected.</p>
<p>Initially, I was annoyed about the reviews. I’ve since reconsidered the reviews and my work, and think the reviewers were right: this paper needs more work.</p>
<!-- more-->
<p>In short and in my own words, the reviews criticized my work as follows:</p>
<ol>
<li>The translation requires ad-hoc additions to the target language.</li>
<li>There is no proof of type soundness of the target language.</li>
<li>The work ignores the issue of computational relevance, compiling irrelevant things like functions in Prop.</li>
<li>The key insight is poorly explained, lost in the details of the Calculus of Inductive Constructions (CIC).</li></ol>
<p>Initially, I thought that the reviews were unfair. I had worked out type-preserving closure conversion for much of CIC! We have an argument for why the target ought to be sound, but a formal proof would be too much. It took many dissertations to work out the soundness of CIC! As for computational relevance, well sure, we’re compiling too much, but we’re preserving all the information! Computational relevance is hard; one dissertation has been written on the subject and another is in the works. Figuring out computational relevance is important, but will be a separate project in itself! As for ad-hoc, well, I disagree, but maybe I communicated badly; that’s on me.</p>
<p>And that’s, essentially, what I wrote in my rebuttal. However, now I’m reconsidering my position.</p>
<p>But first, some context.</p>
<p>In this POPL submission, I developed a type-preserving closure conversion for CIC. An early version of this work was presented as a student research competition poster at POPL 2017, which you can find <a href="https://williamjbowman.com/papers#cccc-popl17-src">here</a>. In this paper, I scaled that work from the Calculus of Constructions to CIC; I added inductive types, guarded recursion, the universe hierarchy, and Set vs Prop. To do that, I made some compromises. I decided not to formally prove soundness, but to give an argument as follows: use types that can be encoded in CIC, and give a syntactic guard condition that seems plausible but might have minor bugs that need to be repaired (which is the pragmatic approach to termination taken by Coq). As mentioned before, proving CIC sound was quite a challenge, and I felt it unrealistic to try to prove this target language sound. I also ignored computational relevance, for two reasons. First, I couldn’t find a great formal description of how to treat <code>Type</code>; there seems to be some kind of static analysis involved in giving it semantics via extraction. Second, after reading a lot about relevance, I think Set vs Prop is sort of the wrong way to encode it anyway, so I’d want to compile those into distinct concepts in the long run. So I decided to do CIC, since it’s a more realistic source, but treat soundness of the target and relevance as future work.</p>
<p>To judge this work, we have to look at the type-preserving compilation literature. Since the reviews came out, I’ve been rereading the literature as I work on my thesis proposal, and talking to my committee; this helped put the reviews in a new context for me. The de-facto standard by which we judge type-preserving compilation work is “System F to Typed Assembly Language”. That work does not compile a realistic programming language; it compiles System F. Essentially, it shows how to preserve one feature—parametric polymorphism—into a statically typed assembly language. And it took four of the best in our field to do that and do it <em>“right”</em>. While they do not handle a practical source language, they do handle a complicated type-theoretic feature, preserve it through a realistic compiler to an assembly-like language, and prove type soundness of that target language.</p>
<p>Judged by this standard, I can see the reviewers’ criticism as this: this paper was focusing on the wrong things. I am an academic, not an engineer. Instead of trying to handle all of CIC so that I have a practical source language, I should focus on compiling the new type theoretic feature—full spectrum dependent types—and doing that <em>right</em>. I should carve off the subset that I know how to do well, how to explain well, and how to prove correct. I should leave scaling to all the pragmatic features of CIC as future work, so that I have time to figure out how to do those features <em>right</em>.</p>
<p>So, thank you POPL anonymous reviewers for evaluating my work. You’ve given me a new perspective on my work and I think I know how to improve it.</p>
<h1>What even is compiler correctness?</h1>
<p><em>2017-03-24, William J. Bowman</em></p>
<p>In this post, I precisely define common compiler correctness properties. Compiler correctness properties are often referred to by vague terms such as “correctness”, “compositional correctness”, “separate compilation”, “secure compilation”, and others. I make these definitions precise and discuss the key differences. I give examples of research papers and projects that develop compilers satisfying each of these properties.</p>
<!-- more-->
<h3 id="what-is-a-language">What is a Language</h3>
<p>Our goal is to give a generic definition to compiler correctness properties without respect to a particular compiler, language, or class of languages. We first give a generic definition of a Language over which a generic Compiler can be defined.</p>
<p>A Language
<script type="math/tex"> \mathcal{L}</script> is defined as follows.
<script type="math/tex; mode=display">
\newcommand{\peqvsym}{\overset{P}{\simeq}}
\newcommand{\ceqvsym}{\overset{C}{\simeq}}
\newcommand{\leqvsym}{\overset{\gamma}{\simeq}}
\newcommand{\ctxeqvsym}{\overset{ctx}{\simeq}}
\newcommand{\neweqv}[3]{#2 \mathrel{#1} #3}
\newcommand{\peqv}{\neweqv{\peqvsym}}
\newcommand{\ceqv}{\neweqv{\ceqvsym}}
\newcommand{\leqv}{\neweqv{\leqvsym}}
\newcommand{\ctxeqv}{\neweqv{\ctxeqvsym}}
\begin{array}{llcl}
\text{Programs} & P \\
\text{Components} & C \\
\text{Linking Contexts} & \gamma \\
\text{Link Operation} & \gamma(C) & : & P \\
\text{Program Equivalence} & \peqvsym & : & P \to P \to Prop \\
\text{Linking Equivalence} & \leqvsym & : & \gamma \to \gamma \to Prop \\
\text{Component Equivalence} & \ceqvsym & : & C \to C \to Prop \\
\text{Observational Equivalence} & \ctxeqvsym & : & C \to C \to Prop \\
\end{array}</script> where
<script type="math/tex">\ctxeqvsym</script> is the greatest compatible and adequate equivalence on Components.</p>
<p>A Language
<script type="math/tex">\mathcal{L}</script> has a notion of Programs
<script type="math/tex">P</script>. Programs can be evaluated to produce observations. Program Equivalence
<script type="math/tex">\peqv{P_1}{P_2}</script> defines when two Programs produce the same observations. A Language also has a notion of Components
<script type="math/tex">C</script>. Unlike Programs, Components cannot be evaluated, although they do have a notion of equivalence. However, we can produce a Program from a Component by linking. We Link by applying a Linking Context
<script type="math/tex">\gamma</script> to a Component
<script type="math/tex">C</script>, written
<script type="math/tex">\gamma(C)</script>. Linking Contexts can also be compared for equivalence using Linking Equivalence
<script type="math/tex">\leqv{\gamma_1}{\gamma_2}</script>. Observational Equivalence is a “best” notion of when two Components are related. Note, however, that a Language’s Observational Equivalence is completely determined by other aspects of the language. We are not free to pick this relation.</p>
<p><span class="example">C is a Language; its definition is as follows. Let
<script type="math/tex">P</script> be any well-defined whole C program that defines a function <code>main</code>; such a program would produce a valid executable when compiled. Let
<script type="math/tex">C</script> be any well-defined C program that defines a function <code>main</code>, but requires external libraries to be linked either dynamically or statically. Such a component would produce a valid object file when compiled, but would not run without first being linked. Let
<script type="math/tex">\gamma</script> be directed graphs of C libraries with a C header file. Define the Link Operation by static linking libraries at the C level. Define two Programs to be Program Equivalent when the programs both diverge, both raise the same error, or both terminate leaving the machine in the same state. Define two Linking Contexts to be Equivalent when they are exactly the same. Define two Components to be Component Equivalent when both are Program Equivalent after Linking with Linking Equivalent Linking Contexts.</span></p>
<p><span class="example">Coq (or, CIC) is a Language; its definition is as follows. Let
<script type="math/tex">P</script> be any closed, well-typed Coq expression. Let
<script type="math/tex">C</script> be any open, well-typed Coq expression. Let
<script type="math/tex">\gamma</script> be maps from free variables to Programs of the right type. Define the Link Operation as substitution. Define Component Equivalence and Program Equivalence as definitional equality. Define Linking Equivalence by applying Program Equivalence pointwise to the co-domain of the maps.</span></p>
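<p>These definitions can be instantiated quite mechanically. Below is a sketch in Python of a third, tiny Language (a toy arithmetic language of my own, purely illustrative): Components are terms with free variables, Linking Contexts are finite maps from variables to closed terms, the Link Operation is substitution, and Program Equivalence compares observations.</p>

```python
# A toy instance of the Language interface (illustrative sketch).
# Terms: ('num', n) | ('var', x) | ('add', e1, e2)

def evaluate(p):
    """Programs (closed terms) are evaluated to produce an observation."""
    if p[0] == 'num':
        return p[1]
    if p[0] == 'add':
        return evaluate(p[1]) + evaluate(p[2])
    raise ValueError(f"not a Program: free variable {p[1]!r}")

def link(gamma, c):
    """The Link Operation gamma(C): substitute closed terms for free variables."""
    if c[0] == 'var':
        return gamma[c[1]]
    if c[0] == 'add':
        return ('add', link(gamma, c[1]), link(gamma, c[2]))
    return c

def prog_equiv(p1, p2):
    """Program Equivalence: the two Programs produce the same observation."""
    return evaluate(p1) == evaluate(p2)

component = ('add', ('var', 'x'), ('num', 3))       # open term: a Component
program = link({'x': ('num', 2)}, component)        # linking yields a Program
assert evaluate(program) == 5
assert prog_equiv(program, ('num', 5))
```

<p>Component Equivalence would then quantify over Linking Equivalent contexts, exactly as in the C and Coq examples above.</p>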
<h3 id="what-is-a-compiler">What is a Compiler</h3>
<p>Using our generic definition of Language, we define a generic Compiler as follows.</p>
<p>
<script type="math/tex; mode=display">
\newcommand{\newsteqvsym}[1]{_S\!\!#1_T}
\newcommand{\psteqvsym}{\newsteqvsym{\peqvsym}}
\newcommand{\lsteqvsym}{\newsteqvsym{\leqvsym}}
\newcommand{\csteqvsym}{\newsteqvsym{\ceqvsym}}
\newcommand{\psteqv}{\neweqv{\psteqvsym}}
\newcommand{\csteqv}{\neweqv{\csteqvsym}}
\newcommand{\lsteqv}{\neweqv{\lsteqvsym}}
\begin{array}{llcl}
\text{Source Language} & \mathcal{L}_S \\
\text{Target Language} & \mathcal{L}_T \\
\text{Program Translation} & \leadsto & : & P_S \to P_T \\
\text{Component Translation} & \leadsto & : & C_S \to C_T \\
\text{Cross-Language (S/T) Program Equivalence} & \psteqvsym \\
\text{S/T Linking Equivalence} & \lsteqvsym \\
\text{S/T Component Equivalence} & \csteqvsym \\
\end{array}</script></p>
<p>Every Compiler has a source Language
<script type="math/tex">\mathcal{L}_S</script> and target Language
<script type="math/tex">\mathcal{L}_T</script>. We use the subscript
<script type="math/tex">_S</script> when referring to definition from
<script type="math/tex">\mathcal{L}_S</script> and
<script type="math/tex">_T</script> when referring to definitions from
<script type="math/tex">\mathcal{L}_T</script>. Every Compiler defines a translation from
<script type="math/tex">\mathcal{L}_S</script> Programs to
<script type="math/tex">\mathcal{L}_T</script> Programs, and similarly a translation on Components. A Compiler also defines cross-language relations on Programs, Components, and Linking Contexts.</p>
<p><span class="example">We can define a Compiler from C to x86 as follows. Let
<script type="math/tex">\mathcal{L}_S</script> be the Language for C defined earlier. Define a Language for x86 similarly. Let <code>gcc</code> be both the Program and Component Translation. Define S/T Program Equivalence as compiling the Source Language Program to x86, and comparing the machine states after running the x86 programs. Define S/T Linking Equivalence similarly to the definition given for the C Language. Define S/T Component Equivalence by linking with S/T Equivalent Linking Contexts and referring to S/T Program Equivalence.</span></p>
<p><span class="example">We can define a Compiler from Coq to ML as follows. Let
<script type="math/tex">\mathcal{L}_S</script> be the Language for Coq defined earlier. Define a Language for ML similarly. Let the Coq-to-ML extractor be both the Program and Component Translation. Define Program Equivalence via a closed cross-language logical relation indexed by source types. Define Component Equivalence by picking related substitutions, closing the Components, and referring to the Program Equivalence. Define Linking Equivalence by applying Program Equivalence pointwise to the co-domain of the map.</span></p>
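<p>To make the shape of these definitions concrete, here is a toy Compiler sketch in Python (entirely my own construction, not any compiler from the literature): the source is a small arithmetic language, the target is a little stack machine, and S/T Program Equivalence says that evaluating the source Program agrees with running the compiled target Program.</p>

```python
# A toy Compiler instance (illustrative sketch).
# Source terms: ('num', n) | ('add', e1, e2)
# Target programs: lists of ('push', n) and ('add',) instructions.

def compile_prog(p):
    """Program Translation: source expressions to stack-machine code."""
    if p[0] == 'num':
        return [('push', p[1])]
    return compile_prog(p[1]) + compile_prog(p[2]) + [('add',)]

def eval_source(p):
    return p[1] if p[0] == 'num' else eval_source(p[1]) + eval_source(p[2])

def run_target(code):
    stack = []
    for instr in code:
        if instr[0] == 'push':
            stack.append(instr[1])
        else:  # 'add'
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

def st_prog_equiv(ps, pt):
    """S/T Program Equivalence: source evaluation matches target execution."""
    return eval_source(ps) == run_target(pt)

src = ('add', ('num', 2), ('add', ('num', 3), ('num', 4)))
assert st_prog_equiv(src, compile_prog(src))   # whole-program agreement
```

<p>A whole-program correctness theorem for this toy instance would quantify the final assertion over all source Programs.</p>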
<h3 id="what-even-is-compiler-correctness">What Even is Compiler Correctness</h3>
<h4 id="type-preservation">Type Preservation</h4>
<p>The simplest definition of compiler correctness is that we compile Programs to Programs, i.e., our compiler never produces garbage. A slightly less trivial definition is that we compile Components to Components. In the literature, these theorems are called “Type Preservation”. Typically, Type Preservation also connotes that the target language has a non-trivial type system and that the compiler itself is non-trivial.</p>
<p><span class="theorem">Type Preservation (Programs)
<br />
<script type="math/tex"> P_S \leadsto P_T</script></span></p>
<p><span class="theorem">Type Preservation (Components)
<br />
<script type="math/tex">C_S \leadsto C_T</script></span></p>
<p>Type Preservation is only interesting when the source and target languages provide sound type systems that enforce sophisticated high-level abstractions. Even then, it still requires other properties or tests to ensure the compiler is non-trivial. For the user to understand the guarantees of Type Preservation, it is still necessary to understand the target language type system and the compiler.</p>
<p><span class="example">A compiler that compiles every Program to 42 is type-preserving, in the trivial sense. By definition, the source Programs are Programs and 42 is a valid Program in many Languages. However, if you were to call such a compiler “Type Preserving”, the academic community may laugh at you.</span></p>
<p><span class="example">A C-to-x86 compiler is type-preserving, in the trivial sense. Neither C nor x86 provide static guarantees worth mentioning. If you were to call such a compiler “Type Preserving”, the academic community may laugh at you.</span></p>
<p><span class="example">The <a href="http://compcert.inria.fr/">CompCert</a> C-to-Mach compiler is type-preserving, in a weak but non-trivial sense. CompCert enforces a particular memory model and notion of memory safety, and preserves this specification through the compiler down to Mach, a low-level machine-independent language. The assembler is type-preserving only in a trivial sense, since x86 provides no static guarantees to speak of.</span></p>
<p><span class="example">The Coq-to-ML extractor is type-preserving, in a pretty trivial sense. As ML has a less expressive type system than Coq, and the extractor often makes use of casts, Type Preservation provides few guarantees for Components. For example, it is possible to cause a segfault by linking an extracted ML program with a stateful ML program.</span></p>
<p><span class="example">The <a href="https://www.cs.princeton.edu/~dpw/papers/tal-toplas.pdf">System F-to-TAL</a> compiler is type-preserving in a strong sense. System F provides strong data hiding and security guarantees via parametric polymorphism. TAL provides parametric polymorphism and memory safety, allowing all of System F’s types to be preserved. Even so, type preservation could hold even if we compiled everything to a trivial TAL program, such as <code>halt[int]</code>. However, a quick look at the definition of the compiler or a small test suite is sufficient to convince us that the compiler is non-trivial, and thus Type Preservation is meaningful in this context.</span></p>
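<p>To make the shape of a Type Preservation theorem concrete, here is a minimal sketch for a hypothetical toy compiler (none of these names come from the papers above): a source language with booleans is lowered to a target language with only integers, and the claim is that every well-typed source Program compiles to a well-typed target Program.</p>

```python
# Hypothetical toy languages, for illustration only.
# Source types: 'int' and 'bool'. Target type: 'int' only.

def src_typecheck(e):
    """Return the type of a well-typed source term, else raise."""
    tag = e[0]
    if tag == 'lit':
        return 'int'
    if tag in ('true', 'false'):
        return 'bool'
    if tag == 'add':
        assert src_typecheck(e[1]) == src_typecheck(e[2]) == 'int'
        return 'int'
    if tag == 'if':
        assert src_typecheck(e[1]) == 'bool'
        t = src_typecheck(e[2])
        assert t == src_typecheck(e[3])
        return t
    raise TypeError(e)

def tgt_typecheck(e):
    """Target has only 'int'; 'if0' branches on an integer scrutinee."""
    tag = e[0]
    if tag == 'lit':
        return 'int'
    if tag in ('add', 'if0'):
        for sub in e[1:]:
            assert tgt_typecheck(sub) == 'int'
        return 'int'
    raise TypeError(e)

def compile_(e):
    """Lower booleans to integers: true -> 0, false -> 1, if -> if0."""
    tag = e[0]
    if tag == 'lit':
        return e
    if tag == 'true':
        return ('lit', 0)
    if tag == 'false':
        return ('lit', 1)
    return (('if0',) if tag == 'if' else ('add',)) + tuple(compile_(sub) for sub in e[1:])

p = ('if', ('true',), ('add', ('lit', 1), ('lit', 2)), ('lit', 0))
assert src_typecheck(p) == 'int'            # well-typed source Program ...
assert tgt_typecheck(compile_(p)) == 'int'  # ... compiles to a well-typed target Program
```

A compiler that maps every term to <code>('lit', 42)</code> would also pass this check, which is exactly the triviality caveat discussed above.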
<h4 id="whole-program-correctness">Whole-Program Correctness</h4>
<p>The next definition is what I would intuitively expect of all compilers (that are bug free). A source Program should be compiled to a “related” target Program. In the literature, this theorem is referred to as “Whole-Program Correctness” or “Semantics Preservation”. Note that any Whole-Program Correct or Semantics Preserving Compiler is also trivially Type Preserving. Such a Compiler may also be Type Preserving in a non-trivial sense.</p>
<p><span class="theorem">Whole-Program Correctness
<br /> If
<script type="math/tex">P_S \leadsto P_T</script> then
<script type="math/tex">\psteqv{P_S}{P_T}</script></span></p>
<p>A whole-program compiler provides no guarantees if we attempt to compile a Component and then link. Since many, arguably all, Programs are actually Components, Whole-Program Correctness is of limited use. A notable exception is the domain of embedded systems, where writing a whole source program may be practical.</p>
<p><span class="example">The <a href="http://compcert.inria.fr/">CompCert</a> C-to-Asm compiler is proven correct with respect to Whole-Program Correctness, with machine checked proofs. CompCert refers to this guarantee as “semantics preservation”. Prior versions of CompCert pointed out that, while it is possible to Link after compilation, “the formal guarantees of semantic preservation apply only to whole programs that have been compiled as a whole by CompCert C.” More recent versions lift this restriction, as we discuss shortly.</span></p>
<p><span class="example">The <a href="https://cakeml.org/">CakeML</a> CakeML-to-Asm compiler is proven correct with respect to Whole-Program Correctness, with machine checked proofs. CakeML is “a substantial subset of SML”. Asm is one of several machine languages: ARMv6, ARMv8, x86–64, MIPS–64, and RISC-V. The assemblers here, unlike in <a href="http://compcert.inria.fr/">CompCert</a>, are proven correct.</span></p>
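<p>Whole-Program Correctness can be read as an executable property: run the source Program, run the compiled target Program, and compare observations. A minimal sketch for a hypothetical toy language (nothing here is CompCert or CakeML):</p>

```python
# Toy source: arithmetic expressions. Toy target: a stack machine.
# S/T Program Equivalence here is simply equality of final results.

def eval_src(e):
    """Direct interpreter for source Programs."""
    if e[0] == 'lit':
        return e[1]
    if e[0] == 'add':
        return eval_src(e[1]) + eval_src(e[2])
    if e[0] == 'mul':
        return eval_src(e[1]) * eval_src(e[2])

def compile_(e):
    """Compile an expression to postfix stack-machine code."""
    if e[0] == 'lit':
        return [('push', e[1])]
    return compile_(e[1]) + compile_(e[2]) + [(e[0],)]

def run_tgt(code):
    """Execute stack-machine code; the top of the stack is the observation."""
    stack = []
    for instr in code:
        if instr[0] == 'push':
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == 'add' else a * b)
    return stack[-1]

p = ('add', ('lit', 2), ('mul', ('lit', 3), ('lit', 4)))
assert eval_src(p) == run_tgt(compile_(p)) == 14   # P_S is S/T Equivalent to P_T
```

The real theorem quantifies over all Programs; a test like this can only falsify it, which is why CompCert and CakeML carry machine-checked proofs instead.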
<h4 id="compositional-correctness">Compositional Correctness</h4>
<p>Intuitively, Compositional Correctness is the next step from Whole-Program Correctness. Compositional Correctness should give us guarantees when we can compile a Component, then Link with a valid Linking Context in the target Language.</p>
<p><span class="theorem">Compositional Correctness
<br /> If
<script type="math/tex">C_S \leadsto C_T</script> and
<script type="math/tex">\lsteqv{\gamma_S}{\gamma_T}</script> then
<script type="math/tex">\psteqv{\gamma_S(C_S)}{\gamma_T(C_T)}</script></span></p>
<p>To understand the guarantees of this theorem, it is necessary to understand how Linking Contexts are related between the source and target Languages. For instance, some compilers may allow linking with arbitrary target Linking Contexts. Some compilers may restrict linking to only Linking Contexts produced by the compiler.</p>
<p>The phrase “Compositional Correctness” usually connotes that the relation
<script type="math/tex">\lsteqvsym</script> is defined independently of the compiler—that is, it is used to mean there is a specification separate from the compiler for cross-language equivalent Linking Contexts. This supports more interoperability, since linking is permitted even with Linking Contexts produced from other compilers, or handwritten in the target language, as long as they can be related to source Linking Contexts.</p>
<p><span class="theorem">Compositional Compiler Correctness
<br /> If
<script type="math/tex">C_S \leadsto C_T</script> and
<script type="math/tex">\lsteqv{\gamma_S}{\gamma_T}</script> then
<script type="math/tex">\psteqv{\gamma_S(C_S)}{\gamma_T(C_T)}</script> (where
<script type="math/tex">\lsteqvsym</script> is independent of
<script type="math/tex">\leadsto</script>)</span></p>
<p>The phrase “Separate Compilation” usually connotes that linking is only defined with Linking Contexts produced by the same compiler. That is, when
<script type="math/tex">\lsteqv{\gamma_S}{\gamma_T} \iff \gamma_S \leadsto \gamma_T</script>.</p>
<p><span class="theorem">Correctness of Separate Compilation
<br /> If
<script type="math/tex">C_S \leadsto C_T</script> and
<script type="math/tex">\gamma_S \leadsto \gamma_T</script> then
<script type="math/tex">\psteqv{\gamma_S(C_S)}{\gamma_T(C_T)}</script></span></p>
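<p>As a concrete (and hypothetical) reading of Correctness of Separate Compilation: a Component with a free variable is compiled on its own, its Linking Context is compiled by the same toy compiler, and linking-then-running agrees across the two Languages. A sketch:</p>

```python
# Toy source: arithmetic with free variables; a Linking Context is a map
# from variables to closed source terms. Toy target: a stack machine whose
# Linking Context is a store of integers.

def eval_src(e, env):
    if e[0] == 'var':
        return env[e[1]]
    if e[0] == 'lit':
        return e[1]
    if e[0] == 'add':
        return eval_src(e[1], env) + eval_src(e[2], env)

def compile_(e):
    """Components compile to stack code; free variables become 'load's."""
    if e[0] == 'var':
        return [('load', e[1])]
    if e[0] == 'lit':
        return [('push', e[1])]
    return compile_(e[1]) + compile_(e[2]) + [('add',)]

def run_tgt(code, store):
    stack = []
    for instr in code:
        if instr[0] == 'push':
            stack.append(instr[1])
        elif instr[0] == 'load':
            stack.append(store[instr[1]])
        else:
            stack.append(stack.pop() + stack.pop())
    return stack[-1]

comp_s = ('add', ('var', 'x'), ('lit', 1))   # C_S, open in x
gamma_s = {'x': ('lit', 41)}                 # gamma_S, a source Linking Context
# gamma_S compiled by the same compiler gives gamma_T:
gamma_t = {v: run_tgt(compile_(e), {}) for v, e in gamma_s.items()}

src_obs = eval_src(comp_s, {v: eval_src(e, {}) for v, e in gamma_s.items()})
tgt_obs = run_tgt(compile_(comp_s), gamma_t)
assert src_obs == tgt_obs == 42              # gamma_S(C_S) is equivalent to gamma_T(C_T)
```

For Compositional Correctness in the stronger sense, <code>gamma_t</code> could instead be any store of integers S/T Equivalent to some source Linking Context, including one written by hand.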
<p>Some papers present a variant of “Semantics Preservation” stated on Components instead of Programs. Usually, this theorem implies Compositional Correctness. This requires a cross-language equivalence on Components, which is usually defined in terms of linking with S/T Equivalent Linking Contexts and observing S/T Equivalent Programs.</p>
<p><span class="theorem">Semantics Preservation
<br /> If
<script type="math/tex">C_S \leadsto C_T</script> then
<script type="math/tex">\csteqv{C_S}{C_T}</script></span></p>
<p>Some papers define interoperability between source Components and target Linking Contexts, and between target Components and source Linking Contexts. This supports a broader notion of linking and thus a more widely applicable Compositional Correctness guarantee. However, it requires understanding the source/target interoperability semantics to understand the guarantee. There is no way to relate the resulting behaviors back to the source language, in general.</p>
<div class="theorem">Open Compiler Correctness
<br />
<ol>
<li>If
<script type="math/tex">C_S \leadsto C_T</script>, then for all
<script type="math/tex">\gamma_T</script>,
<script type="math/tex">\psteqv{\gamma_T(C_S)}{\gamma_T(C_T)}</script></li>
<li>If
<script type="math/tex">C_S \leadsto C_T</script>, then for all
<script type="math/tex">\gamma_S</script>,
<script type="math/tex">\psteqv{\gamma_S(C_S)}{\gamma_S(C_T)}</script></li></ol></div>
<p>Note that to satisfy Open Compiler Correctness, two new notions of linking are required: one that links target Linking Contexts with source Components, and one that links source Linking Contexts with target Components.</p>
<p>Most compilers that generate machine code aim to be Compositional Compilers, since we should be able to link the output with any assembly, even that produced by another compiler. Compilers that target .NET VM, JVM, and LLVM are similar.</p>
<p>Some languages, like Coq and Racket, target special purpose VMs and aim only to be Separate Compilers.</p>
<p>Languages with FFIs can be thought to aim for a limited form of Open Compiler Correctness. For instance, we can link Java and C code for certain limited definitions of Linking Contexts. The full spirit of the theorem is limited to research projects, for now.</p>
<p><span class="example">The <a href="https://www.cs.princeton.edu/~appel/papers/compcomp.pdf">Compositional CompCert</a> compiler extends <a href="http://compcert.inria.fr/">CompCert</a> and its correctness proofs to guarantee Compositional Compiler Correctness. Linking is defined for any target Linking Context whose <em>interaction semantics</em> are related to a source Linking Context. The paper’s Corollary 2 titled “Compositional Compiler Correctness” is a generalized version of our theorem by the same name. They allow for compiling an arbitrary number of Components, then linking those with a Linking Context.</span></p>
<p><span class="example">The <a href="https://people.mpi-sws.org/~viktor/papers/sepcompcert.pdf">SepCompCert</a> compiler extends <a href="http://compcert.inria.fr/">CompCert</a> and its correctness proofs to guarantee Separate Compiler Correctness. This work notes that Separate Compiler Correctness is significantly easier to prove, increasing the proof size by only 2%, compared to the 200% required by Compositional CompCert. This work was merged into CompCert as of <a href="https://github.com/AbsInt/CompCert/releases/tag/v2.7">version 2.7</a>.</span></p>
<p><span class="example">The <a href="http://plv.mpi-sws.org/pils/paper.pdf">Pilsner</a> compiler is an MLish-to-Assemblyish compiler that guarantees Compositional Compiler Correctness. Linking is defined for any target Linking Context that is related to a MLish Component by a PILS.</span></p>
<p><span class="example"><a href="http://www.ccs.neu.edu/home/amal/papers/voc.pdf">Perconti and Ahmed</a> develop a compiler from System F to a low-level typed IR that guarantees Open Compiler Correctness. In every Language, Linking is defined for any Linking Context in the source, intermediate, or target languages that has a compatible type. This paper defined a multi-language semantics in which all languages can interoperate.</span></p>
<h4 id="full-abstractionsecure-compilation">Full Abstraction/Secure Compilation</h4>
<p>Some properties of a program cannot be stated in terms of a single run of the program. We require more sophisticated compiler correctness theorems to show these properties are preserved. For instance, security properties, such as indistinguishability of a ciphertext from a random string, are relational properties. These can only be stated as a property relating two Programs in the same Language.</p>
<p>Fully abstract compilers seek to preserve these relational properties. Since security properties are often relational, such compilers are sometimes called “Secure Compilers”. Often, we also want to <em>reflect</em> equivalence, which usually follows from Compositional Correctness. Full abstraction refers specifically to preserving and reflecting Observational Equivalence. Papers on this topic often focus on equivalence preservation, since equivalence reflection by itself usually follows from compiler correctness, and preservation is the direction of interest for stating security properties.</p>
<p>Compilers that guarantee these properties are limited to research projects, as there are many open problems to be solved. The key difficulty lies in proving Equivalence Preservation, which essentially requires “decompiling” a target Program into a source Program.</p>
<p><span class="theorem">Equivalence Preservation
<br /> If
<script type="math/tex">\ceqv{C_S}{C'_S}</script> and
<script type="math/tex">C_S \leadsto C_T</script> and
<script type="math/tex">C'_S \leadsto C'_T</script> then
<script type="math/tex">\ceqv{C_T}{C'_T}</script></span></p>
<p><span class="theorem">Equivalence Reflection
<br /> If
<script type="math/tex">\ceqv{C_T}{C'_T}</script> and
<script type="math/tex">C_S \leadsto C_T</script> and
<script type="math/tex">C'_S \leadsto C'_T</script> then
<script type="math/tex">\ceqv{C_S}{C'_S}</script></span></p>
<p><span class="theorem">Full Abstraction
<br /> Let
<script type="math/tex">C_S \leadsto C_T</script> and
<script type="math/tex">C'_S \leadsto C'_T</script>.
<script type="math/tex">\ctxeqv{C_S}{C'_S}</script> iff
<script type="math/tex">\ctxeqv{C_T}{C'_T}</script></span></p>
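<p>Because Equivalence Preservation is universally quantified over contexts, it cannot be established by testing; but testing a finite sample of target Linking Contexts gives a cheap falsification check. A hypothetical sketch, reusing a toy stack-machine compiler, with two source-equivalent Components:</p>

```python
# Two source Components that are observationally equivalent (x + x vs 2 * x).
# We check that no Linking Context in a finite sample distinguishes their
# compiled forms. Passing this is necessary, not sufficient, for preservation.

def compile_(e):
    if e[0] == 'var':
        return [('load', e[1])]
    if e[0] == 'lit':
        return [('push', e[1])]
    return compile_(e[1]) + compile_(e[2]) + [(e[0],)]

def run_tgt(code, store):
    stack = []
    for instr in code:
        if instr[0] == 'push':
            stack.append(instr[1])
        elif instr[0] == 'load':
            stack.append(store[instr[1]])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == 'add' else a * b)
    return stack[-1]

c_s  = ('add', ('var', 'x'), ('var', 'x'))   # C_S  = x + x
c_s2 = ('mul', ('lit', 2), ('var', 'x'))     # C'_S = 2 * x, source-equivalent

contexts = [{'x': n} for n in range(-5, 6)]  # a finite sample of gamma_T
assert all(run_tgt(compile_(c_s), g) == run_tgt(compile_(c_s2), g)
           for g in contexts)                # no distinguishing context found
```

In a richer target (say, one with mutable state or timing observations) a Linking Context could distinguish the two compiled Components, and the proof burden is exactly ruling out all such contexts.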
<p><span class="example"><a href="https://williamjbowman.com/papers/#niforfree">Bowman and Ahmed</a> develop an Equivalence Preserving and Reflecting compiler from The Core Calculus of Dependency (DCC) to System F. DCC guarantees certain security properties, which are preserved by encoding using parametric polymorphism. This compiler also satisfies Compositional Compiler Correctness, using a cross-language logical relation to define relatedness of Components between languages. This compiler is not Fully Abstract, as it does not define Contextual Equivalence. Instead, the compiler Preserves and Reflects the security property of interest.</span></p>
<p><span class="example"><a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2013/01/js-star.pdf">Fournet et al.</a> develop a Fully Abstract compiler from a language similar to a monomorphic subset of ML with exceptions to JavaScript. This paper demonstrates a key difficulty in Fully Abstract compilers. Often, the source language must be artificially constrained (in this case, by eliminating polymorphism and adding exceptions) in order to support back-translation.</span></p>
<p><span class="example"><a href="https://people.mpi-sws.org/~marcopat/marcopat/Publications_files/logrel-for-facomp.pdf">Devriese et al.</a> develop a Fully Abstract compiler from STLC to the untyped lambda calculus. This paper developed a key innovation in back-translation techniques, allowing a more expressive, less typed target language to be back-translated to a less expressive, more typed source language.</span></p>Toward Type-Preserving Compilation of Coq, at POPL17 SRCurn:https-www-williamjbowman-com:-blog-2017-01-03-toward-type-preserving-compilation-of-coq-at-popl17-src2017-01-03T21:41:11Z2017-01-03T21:41:11ZWilliam J. Bowman
<p>Almost two months ago, my colleagues in the Northeastern PRL wrote about <a href="http://prl.ccs.neu.edu/blog/2016/11/17/src-submissions/">three of our POPL 2017 Student Research Competition submissions</a>. There was a fourth submission, but because I was hard at work completing proofs, it wasn’t announced.</p>
<h2 id="toward-type-preserving-compilation-of-coq">Toward Type-Preserving Compilation of Coq</h2>
<p><a href="https://williamjbowman.com/papers#cccc-popl17-src">Toward Type-Preserving Compilation of Coq</a>
<br /> William J. Bowman
<br /> 2016
<br /></p>
<blockquote>
<p>A type-preserving compiler guarantees that a well-typed source program is compiled to a well-typed target program. Type-preserving compilation can support correctness guarantees about compilers, and optimizations in compiler intermediate languages (ILs). For instance, <a href="http://dx.doi.org/10.1145/268946.268954">Morrisett <em>et al.</em> (1998)</a> use type-preserving compilation from System F to a Typed Assembly Language (TAL) to guarantee absence of stuckness, even when linking with arbitrary (well-typed) TAL code. <a href="http://doi.acm.org/10.1145/231379.231414">Tarditi <em>et al.</em> (1996)</a> develop a compiler for ML that uses a typed IL for optimizations.</p>
<p>We develop type-preserving closure conversion for the Calculus of Constructions (CC). Typed closure conversion has been studied for simply-typed languages (<a href="http://dx.doi.org/10.1145/237721.237791">Minamide1996</a>, <a href="https://dl.acm.org/citation.cfm?id=1411227">Ahmed2008</a>, <a href="http://doi.acm.org/10.1145/2951913.2951941">New2016</a>) and polymorphic languages (<a href="http://dx.doi.org/10.1145/237721.237791">Minamide1996</a>, <a href="http://dx.doi.org/10.1145/268946.268954">Morrisett1998</a>). Dependent types introduce new challenges to both typed closure conversion in particular and to type preservation proofs in general.</p></blockquote>ICFP 2016urn:https-www-williamjbowman-com:-blog-2016-10-15-icfp-20162016-10-15T21:15:00Z2016-10-15T21:15:00ZWilliam J. Bowman
<p>Full disclosure: This blog post is sponsored in part by ACM SIGPLAN. ACM SIGPLAN! Pushing the envelope of language abstractions for making programs better, faster, correcter, stronger.</p>
<h4 id="tldr">TLDR</h4>
<p>I went to ICFP again this year. I’m a frequent attendee. Last year I had <a href="/papers/#niforfree">a paper</a> and <a href="https://youtu.be/-vgWefEXHt0">gave a talk</a>. This year I had <a href="/papers/#fabcc">a paper</a>, but someone else gave <a href="https://www.youtube.com/watch?v=Hylji4ezQHE">the talk</a>. But I also gave a <a href="http://conf.researchr.org/event/hope-2016/hope-2016-papers-growing-a-proof-assistant">talk</a> at HOPE 2016. I met some people and saw some talks and pet a deer.</p>
<hr />
<!-- more-->
<p>I’m a fifth year Ph.D. candidate studying compiler correctness, dependent types, and (functional) programming language abstractions. ICFP is my second home.</p>
<p>This year, I met some cool new researchers, several of whose names I’ve already forgotten (sorry new friends). I met <a href="https://zoep.github.io/">Zoe Paraskevopoulou</a>, who works on the CertiCoq project, a combination of my two favorite things: compiler correctness and dependent types. We talked a bit about this because I too have been looking at correctly compiling dependent types. I also met <a href="http://pleiad.cl/people/etanter">Éric Tanter</a>, who works on, among many things, gradual typing and dependent types. He gave <a href="https://youtu.be/GwmZTGd1rZs">a talk</a> on a method for verified interoperability with dependent types, which is related to certain kinds of compiler correctness problems that interest me, such as compositional compiler correctness and full abstraction. He’s also interested in Racket, so we spent some time discussing <a href="/papers/#cur">Cur</a>.</p>
<p>I got some new ideas for Cur. <a href="http://davidchristiansen.dk/">David Christiansen</a>’s talk on <a href="https://youtu.be/pqFgYCdiYz4">Elaborator Reflection: Extending Idris in Idris</a> did a great job of motivating the problem and comparing meta-programming styles in proof assistants. The elaborator monad looks like a good abstraction for reasoning about certain kinds of extensions, and I need to figure out how to make it good for reasoning about the complex extensions possible in Cur. <a href="https://distrinet.cs.kuleuven.be/people/jesper">Jesper Cockx</a>’s talk on <a href="https://youtu.be/TbyAfTCbyHQ">Unifiers as Equivalences</a> demonstrated ideas that might let me implement unification as a user defined extension in a proof-relevant way.</p>
<p>I saw most of the other talks, and a bunch of talks at the workshops. I have pages and pages of notes, and dozens of items in my TODO list to go and review papers and talks that I didn’t properly digest the first time. I hope I finish those by next year.</p>Post-ECOOPurn:https-www-williamjbowman-com:-blog-2016-08-10-post-ecoop2016-08-10T19:46:50Z2016-08-10T19:46:50ZWilliam J. Bowman
<p>I returned from ECOOP a few weeks ago, and have been trying to figure out what I got of the experience. I’ll focus on two big things.</p>
<p>For a long time I have been debating what I should do after I graduate, which I usually phrase as “industry vs academia”. I’m coming to understand this is a false dichotomy, as most dichotomies are. (It helps that a friend <a href="https://twitter.com/chckadee/status/761312153370517504">spelled it out for me</a>.) Dave Herman’s talk, on starting and running a research lab doing academic-style work (e.g., developing a principled, safe programming language) in industry, helped me see that. Shriram’s summer school lectures were equally helpful, and sort of the dual of this: taking objects from industry—scripting languages—and applying academic rigor to them. ECOOP, more than any other conference I’ve been to, brought together industry and academia in a smooth spectrum. I wish I had attended as a younger student.</p>
<p>The other big thing was a crystallized version of thoughts I had on programming languages. Matthias Felleisen on Racket and Larry Wall on Perl 6 helped me see this: anything you might want to do to or in a program should be expressible in your programming language (Matthias said it better). This is what annoys me about languages like C, Java, and Coq. C has the preprocessor and <code>make</code> and the dynamic linker, etc. Java has Eclipse. Coq has OCaml plugins. All of these languages require doing “more” than writing programs, but have no way to express it in the language. Racket (and, apparently, Perl 6) pulls those things into the language so that those too become just writing programs: extend the reader, dynamically load a library, muck about with the top level, add new syntax.</p>
<p>I got a handful of smaller things: insights about what objects are best at, what a long-term (~25 year) research agenda looks like, an appreciation for the 99 different designs for any given program.</p>
<p>ECOOP was a great experience. If I go again, though, I hope the summer school won’t conflict with the entire research track.</p>ECOOP 2016urn:https-www-williamjbowman-com:-blog-2016-07-15-ecoop-20162016-07-15T20:18:06Z2016-07-15T20:18:06ZWilliam J. Bowman
<p>Full disclosure: This blog post is sponsored and required by the National Science Foundation (NSF): The NSF! Funding SCIENCE! since 1683 or whenever.</p>
<h4 id="tldr">TLDR</h4>
<p>I’m going to ECOOP to see a part of the PL community I wouldn’t normally see, talk to people that I wouldn’t normally talk to, attend the co-located summer school, and figure out what I want to do with my (academic) life. If you want to know why I might do those things, <a href="https://williamjbowman.com">read a little about me</a>.</p>
<h4 id="the-long-story">The long story</h4>
<p>On Sunday, I am heading to ECOOP. I have never been to ECOOP, the conference is a little outside of my specialty, I do not know anyone there, and I do not even have a paper or talk at one of the workshops. However, a few weeks ago I ignored an email from one of the mailing lists that said there was some NSF funding that students should apply for. Then I saw an email from Jan Vitek on a local mailing list saying students should really apply for this funding and get to go to Rome.</p>
<p>“Huh”, I thought to myself, “I wonder what’s interesting in Rome”. I went to the <a href="http://2016.ecoop.org/program/program-ecoop-2016">ECOOP program</a> and started looking around.</p>
<p>The Curry On program looks interesting. This co-located conference should help me understand how PL applies to industry problems. Unfortunately, I’m going to miss most or all the first day. But the talk I’m most interested in is the final keynote, “Building an Open Source Research Lab”; hopefully this will give me some insights on this <a href="https://williamjbowman.com/blog/2015/11/02/to-academia-or-not-to-academia">industry vs academia problem I have been struggling with</a>.</p>
<p>There is also a summer school. While the history of typed and untyped languages looks fascinating, I’m going to have to skip part of it to learn about type specialization of JavaScript programs; I prove things on type-preserving compilation and I want to see more work that uses types for optimizations. Next up, the lecture on “Building a Research Program for Scripting Languages” should help me better understand what an academic career will look like, and give me some idea of how to be a good academic. Then I’m going to learn how to build a JIT compiler for free, because despite being a compilers expert, I don’t know anything about JIT compilers. Finally, I’m going to learn a little about experimental evaluation; I normally do theory and proofs, but I imagine one day I might need to measure something.</p>
<p>Unfortunately, the summer school is in parallel with most of the conference talks, so it’s going to be tough to decide how much of the summer school to miss in order to see new research.</p>
<p>“Yeah”, I thought after much consideration, “I guess there are some interesting things to see in Rome”. I’m a little concerned about the accommodations and venue though; I understand that a lot of the architecture in Rome is <em>very</em> old.</p>Conference talks reconsideredurn:https-www-williamjbowman-com:-blog-2015-08-29-conference-talks-reconsidered2015-08-30T04:18:36Z2015-08-30T04:18:36ZWilliam J. Bowman
<p>A couple weeks ago, I wrote that I was beginning to hate conference talks. The next morning, I woke up with 50+ Twitter notifications caused by people debating that point. I have reconsidered my views.</p>
<p>In my earlier post, I point out that the typical advice I hear is “The talk should be an ad for the paper”. After several discussions, I think this is bad advice. Instead, <a href="http://composition.al/">Lindsey Kuper</a> and <a href="https://www.cs.cmu.edu/~cmartens/">Chris Martens</a> encouraged me to ignore this advice and instead make my talk a performance.</p>
<p>At first, I was unsure what this meant. In fact, I am still not quite sure what this means. What does it mean to perform a paper? But I followed the advice anyway.</p>
<p>Essentially I tried to communicate, at a high-level, why I think this work is cool, and what parts of the work are most interesting. I tried to tell a story about what inspired this work, why I care about it, and what came out of it. I did not try to show many technical details; I showed only those necessary to tell the story of this work. I did not try to explain the particulars of all this work; I showed only those necessary to fit the work into the context of the story I wanted to tell.</p>
<p>I think the end result is actually an effective ad for the paper. However, by approaching the talk differently, I produced a much better talk (IMHO). And thankfully, I am not alone in that opinion. For example, I was very excited after my initial practice talk when Matthias called the talk “90% perfect”, in defiance of a NU PRL tradition of not dwelling on positive aspects and only giving constructive <em>criticism</em> after a practice talk.</p>
<p>A video of this talk is <a href="https://youtu.be/-vgWefEXHt0">online here</a>.</p>Conference talksurn:https-www-williamjbowman-com:-blog-2015-08-08-conference-talks2015-08-08T22:24:06Z2015-08-08T22:24:06ZWilliam J. Bowman
<p>I am beginning to hate conference talks. I am in the midst of writing a conference talk for my <a href="/papers#niforfree">recently accepted paper</a>. Although I have only given one conference talk thus far, I have attended several conferences and listened to many talks. These experiences have convinced me that conference talks are largely pointless.</p>
<p>I do not find conferences to be pointless. The papers are usually well written, if dense. The conferences themselves always lead to interesting conversations with clever people. I always return from a conference filled with creative energy. And, I admit, I like the excuse to travel to interesting locales.</p>
<p>However, the talks themselves are pointless. Most talks I have attended are terrible. Those that are not terrible I do not remember much of anyway, except that I should go read that paper. Of those talks, I would have made the same decision after reading the abstract for the paper. The talks add nothing because the talk slots are too short to communicate any technical material.</p>
<p>It is not entirely the fault of the speakers. For one, there is little incentive to give a good talk. If you give a good talk, then maybe you convince someone to read your paper, and maybe people remember who you are. This might be important if you are on the job market, but it does not matter for everyone else. Besides, most people will forget the talk in a month, good or bad.</p>
<p>Even if you are a perfectionist, so that incentive does not matter, it is not easy to craft a good talk. Conference papers are often complex and dense pieces of work. Frequently, the papers omit many details due to space, so completely understanding the work requires not only the paper but a technical appendix or code artifact published separately. Authors (usually (maybe only sometimes)) spend a great deal of time polishing these papers and supplementary materials to effectively communicate a complex and dense piece of work. The slot for the conference talk is 15–20 minutes, in which a speaker must fit a 12-page paper plus supplementary material?</p>
<p>“No! Obviously as a speaker you must <em>not</em> do that. The talk should be an advertisement for the paper. It should be an overview of the paper. It should communicate the key technical ideas and convince people to read the paper.”</p>
<p>What silly advice. I hate advertisements. Why should I sit through sessions and sessions of advertisements?</p>
<p>“No! Obviously as an audience member you must <em>not</em> do that. Just go read the abstracts and find the talks you want to attend. Skip the rest to have conversations with colleagues and authors.”</p>
<p>Okay, so the audience is going to read the abstract to convince them to see a talk that convinces them to read the paper whose abstract they just read? This is circular reasoning that wastes the time of both the speaker and the audience.</p>
<p>As a speaker and writer, I have already spent a lot of time and effort on the paper. I have crafted the abstract and introduction to communicate the key technical ideas and give an overview of the paper as precisely and concisely as possible. Shortly thereafter, I have carefully written the rest of the paper to effectively communicate the technical contributions in as much detail yet as concisely as page limits allow. Besides, I had to write them anyway to effectively communicate my research. Why should I reproduce these efforts in a short talk that must communicate less due to the nature of the talk and the audience?</p>
<p>As an audience member, if I want an overview of the paper, the abstract and introduction section provide this. The author already spent a great deal of time writing these sections, which communicate more thoughts in less time than the talk will. If I want more details, these sections are conveniently located with the rest of the paper. Besides, I need to read the abstract anyway to figure out which talks to attend and which papers to read. Why should I then sit through a talk that advertises a paper that I have already decided whether or not to read?</p>
<p>“Well the talks give an excuse and talking points around which we can organize a conference.”</p>
<p>Well, why can’t we find a better excuse or better talking points? Why not give longer, highly technical talks that supplement the paper, or question-and-answer sessions for those who have read the paper and want more? Or why not make the papers more open-ended so talks can be more speculative?</p>
<p>I do not know what should go in place of the current conference talks, but the current system seems utterly pointless and results in completely wasted effort.</p>Notes on "Ur: Statically-Typed Metaprogramming ..."urn:https-www-williamjbowman-com:-blog-2015-02-14-notes-on-ur-statically-typed-metaprogramming2015-02-14T17:22:11Z2015-02-14T17:22:11ZWilliam J. Bowman
<p>Today I read <a href="http://adam.chlipala.net/papers/UrPLDI10/">Ur: Statically-Typed Metaprogramming with Type-level Record Computation</a>. This paper presents the Ur language, a functional programming language based on an extension of System Fω. The novel idea is to use type-level functions as a form of type-safe meta-programming. The paper claims this enables safe heterogeneous and homogeneous meta-programming in Ur.</p>
<p>The interesting insight is that type-level computation may be valuable outside of dependently typed languages. The paper quickly and easily makes this case. The type-level computations reduce type annotations by enabling the programmer to compute types rather than manually write them everywhere. This could be a useful form of meta-programming in any typed language.</p>
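<p>As a rough illustration of the idea, here is a Haskell sketch using a closed type family (this is not Ur’s actual record calculus, just an analogue of type-level computation in a more familiar setting). The type family computes a curried function type from a type-level list of argument types, so the programmer writes one compact annotation instead of spelling out the full type:</p>

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
import Data.Kind (Type)

-- A type-level function: compute a curried function type
-- from a type-level list of argument types and a result type.
type family Fn (args :: [Type]) (r :: Type) :: Type where
  Fn '[]       r = r
  Fn (a ': as) r = a -> Fn as r

-- The annotation below is computed: Fn '[Int, Int, Int] Int
-- reduces to Int -> Int -> Int -> Int.
add3 :: Fn '[Int, Int, Int] Int
add3 x y z = x + y + z
```

<p>Here the type checker evaluates <code>Fn</code> at compile time; Ur’s type-level record computation plays an analogous, though far richer, role, computing record types that describe things like database schemas.</p>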
<p>The claims about heterogeneous and homogeneous meta-programming seem overstated. Ignoring the novel ability to compute type annotations, type-safe heterogeneous programming could be accomplished just as easily in any other type-safe language. I could just as easily (or more easily) write a program in Coq, ML, Haskell, or Typed Racket that generates HTML and SQL queries as I could in Ur. As for homogeneous meta-programming, limiting meta-programs to record computations at the type level seems to severely restrict the ability to generate code at compile time and abstract over syntax, features provided by general-purpose meta-programming systems such as Racket’s macros or Template Haskell.</p>Beluga and explicit contextsurn:https-www-williamjbowman-com:-blog-2014-09-10-beluga-and-explicit-contexts2014-09-11T00:56:44Z2014-09-11T00:56:44ZWilliam J. Bowman
<p>In my recent work, I found it useful to pair a term and its context in order to more easily reason about weakening the context. At the prompting of a colleague, I’ve been reading about Beluga, <a href="http://www.cs.mcgill.ca/~complogic/beluga/flops10/flops.pdf" title="Beluga: programming with dependent types, contextual data, and contexts">[1]</a> <a href="http://www.cs.mcgill.ca/~bpientka/papers/ppdp-pientka.pdf" title="Programming with Proofs and Explicit Contexts">[2]</a>, and their support for programming with explicit contexts. The idea seems neat, but I’m not quite sure I understand the motivations or implications.</p>
<p>So it seems Beluga has support for describing what a context contains (schemas), describing in which context a type/term is valid, and referring to the variables in a context by name without explicitly worrying about alpha-renaming. This technique supports reasoning about binders with HOAS in more settings, such as in the presence of open data and dependent types. Since HOAS simplifies reasoning about binders by taking advantage of the underlying language’s implementation of substitutions, this can greatly simplify formalized meta-theory in the presence of advanced features which previously required formalizing binders using more complicated techniques like De Bruijn indices. By including weakening, meta-variables, and parameter variables, Beluga enables meta-theory proofs involving binders to be much more natural, i.e., closer to pen-and-paper proofs.</p>
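<p>For readers unfamiliar with the technique, here is a minimal HOAS sketch in Haskell (an illustration of the general idea only, not Beluga’s contextual types or schemas): binders are represented by host-language functions, so substitution and alpha-renaming are inherited from the host language for free.</p>

```haskell
-- Higher-order abstract syntax: the Lam constructor holds a Haskell
-- function, so the host language manages binding and substitution.
data Term
  = Lam (Term -> Term)  -- a binder is a host-language function
  | App Term Term
  | Lit Int             -- base values, so results are observable

-- Weak-head evaluation: beta-reduction is just Haskell application;
-- no substitution function or fresh-name generation is needed.
whnf :: Term -> Term
whnf (App f a) =
  case whnf f of
    Lam body -> whnf (body a)
    g        -> App g a
whnf t = t

-- Observe a result as an Int when the term evaluates to a literal.
toInt :: Term -> Maybe Int
toInt t = case whnf t of
  Lit n -> Just n
  _     -> Nothing
```

<p>For example, <code>toInt (App (Lam (\x -&gt; x)) (Lit 7))</code> evaluates to <code>Just 7</code> with no substitution machinery in sight. The catch is that this simple encoding breaks down in the settings the Beluga papers target, such as analyzing open terms under binders, which is where explicit contexts come in.</p>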
<p>Obviously this is great for formalized meta-theory. While I have seen how HOAS can simplify life for the meta-theorist, and seen how it fails, I don’t fully understand the strengths and weaknesses of this work, or how it compares to techniques such as the <a href="http://www.chargueraud.org/softs/ln/" title="LN: Locally nameless representation">locally nameless</a> representation. I’m also not sure if there is more to this work than a better way to handle formalization of binding (which is a fine, useful accomplishment by itself).</p>
<p>If anyone can elaborate on or correct my understanding, please do.</p>FASTRurn:https-www-williamjbowman-com:-blog-2013-03-13-fastr2013-03-13T07:05:00Z2013-03-13T07:05:00ZWilliam J. Bowman
<p>FASTR is a bill to ensure all publicly funded research is open access. I urge you all to contact your congresspeople and demand they support this bill.</p>
<!-- more-->
<p>If you need help, <a href="https://action.eff.org/o/9042/p/dia/action/public/?action_KEY=9061">the EFF has a page</a> from which you can contact your congresspeople. You can use their template, or my template below that has been customized for researchers.</p>
<blockquote>
<p>As your constituent, and as a university researcher, I am urging you to support the Fair Access to Science & Technology Research Act (FASTR is S. 350 in the Senate and H.R. 708 in the House).</p>
<p>As a researcher, I want my research distributed widely, to anyone who is willing to read it! We in the scientific community are often held to the whims of for-profit journals and publishers. Because of the monopoly-like grip they have on what constitutes a high-quality publishing venue, we must publish through them to advance our careers and to get our work seen in the field, while they seek to maximize profit at the expense of taxpayer dollars and the advancement of knowledge.</p>
<p>This research is developed, written, reviewed, digitally typeset, and presented AT NO COST to these publishers, BY US RESEARCHERS, who are often funded with taxpayer dollars through public universities and government agencies like the National Science Foundation. Some venues, through obscene application of copyright, do not allow authors to provide digital copies of THEIR OWN WORK via their personal websites or other means of distribution.</p>
<p>As a result, students, researchers at less well-funded institutions, and citizens have difficulty accessing information they need; professors have a harder time reviewing and teaching the state of the art; cutting-edge research remains hidden.</p>
<p>FASTR helps fix this. The bill makes government agencies design and implement a plan to facilitate public access to the results of their investments. Any researcher who receives federal funding must submit a copy of resulting journal articles to the funding agency, which will then make that research widely available within six months.</p>
<p>Please secure our rights as taxpayers, and our rights as scientists, and promote the progress of science by supporting FASTR.</p></blockquote>