Supercompilation and Normalisation by Evaluation
Gavin Mendel-Gleason & Geoff Hamilton
Lero@DCU, Dublin City University
Why Bother?
• Wanted to figure out when one thing was like another.
• Thought that Supercompilation's intuitive process trees might have a deeper meaning.
• Normalisation is a well-used framework for finding canonical terms.
• But normalisation is stuck in a terminating world.
Normalisation
• In the Curry-Howard setting, normalisation is cut elimination and proof simplification.
• We can tell if two proofs are the same if they are syntactically identical after normalisation.
• Normalisation just requires the application of appropriate reduction rules.
Curry-Howard Correspondence
• We try to draw a correspondence between the world of proofs and the world of programs:
• Proofs <=> Programs
• Propositions <=> Types
System F
• System F is a simple language with an interesting type system.
• It allows quantification over types: implicational monadic second-order logic.
• ΛA. (λx:A. x) : ∀A. A→A
• It is strongly normalising, so we can find a “value” for any program by applying reduction rules.
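A minimal Haskell sketch (my own addition, not from the slides) of the same idea, with RankNTypes standing in for System F's explicit type abstraction:

{-# LANGUAGE RankNTypes #-}

-- Λ A . (λ x : A . x) : ∀ A . A→A, rendered in Haskell
polyId :: forall a. a -> a
polyId x = x

-- Type application (r A) becomes instantiation at use sites:
usePolyId :: (Bool, Int)
usePolyId = (polyId True, polyId (3 :: Int))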
System F – Extended
• Types {A,B,C} := 1 | X | A→B | ∀X.A | A+B | A*B | μX.A
• Terms {r,s,t} := x | f | () | λx:A.t | ΛX.t | r s | r A | inl(t,B) | inr(t,A) | (t,s) | in(t,A) | out(t,A) | split r as x1,x2 in s | case r of inl(x1) => s ; inr(x2) => t
• Ctx {G} := . | G,X | G,x:A
• Δ = a map from function constants to terms.
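For concreteness, here is the extended syntax as a Haskell AST (a sketch of my own; the constructor names are mine, the grammar is the slide's):

data Type
  = Unit                                -- 1
  | TVar String                         -- X
  | Arrow Type Type                     -- A→B
  | Forall String Type                  -- ∀X.A
  | Sum Type Type                       -- A+B
  | Prod Type Type                      -- A*B
  | Mu String Type                      -- μX.A

data Term
  = Var String                          -- x
  | Fun String                          -- function constant f (looked up in Δ)
  | UnitVal                             -- ()
  | Lam String Type Term                -- λx:A.t
  | TLam String Term                    -- ΛX.t
  | App Term Term                       -- r s
  | TApp Term Type                      -- r A
  | Inl Term Type                       -- inl(t,B)
  | Inr Term Type                       -- inr(t,A)
  | Pair Term Term                      -- (t,s)
  | In Term Type                        -- in(t,A)
  | Out Term Type                       -- out(t,A)
  | Split Term String String Term       -- split r as x1,x2 in s
  | Case Term String Term String Term   -- case r of inl(x1) => s ; inr(x2) => t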
What have we done?
• Everything there is representable except unfolding.
• We can do sums, products and even least and greatest fixed points without extension – using a Church encoding (sketched below) – but it's slow.
• We've gained general recursion and lost normalisation.
• Cut evaluation into two pieces!
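The Church encodings mentioned above look like this in Haskell (a hedged sketch of mine, with RankNTypes again playing ∀; the slowness shows because every operation re-traverses the whole encoded value):

{-# LANGUAGE RankNTypes #-}

-- Products as their own eliminator:
type CPair a b = forall c. (a -> b -> c) -> c

cpair :: a -> b -> CPair a b
cpair x y k = k x y

cfst :: CPair a b -> a
cfst p = p (\x _ -> x)

-- Naturals as the least fixed point of 1 + X, Church-style:
type CNat = forall a. (a -> a) -> a -> a

czero :: CNat
czero _ z = z

csucc :: CNat -> CNat
csucc n s z = s (n s z)

cadd :: CNat -> CNat -> CNat
cadd m n s z = m s (n s z)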
We can't compare anymore
• We can still compare programs for syntactic equality, but that tells us nothing about their unfolding behaviour.
• We want a behavioural model of the program.
• Let's see how things unfold.
When is one thing like another?
• Morris-style contextual equivalence says that we want to know that C[e] = C[e'] for any context C.
• It's hard to quantify over contexts.
• Gordon tells us about another path – we can treat functional programs as a transition system.
• Equivalence becomes a question of bisimulation.
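One standard way to write the definition (my phrasing, observing termination): e ≅ e' iff for every closing context C, C[e]⇓ ⟺ C[e']⇓. The quantification over all contexts C is exactly what makes this hard to use directly.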
Two players, internal and external
• “A point to watch is to make a distinction between internal and external behaviour” – Plotkin
• We don't know what free variables are going to do except for what their type says.
• This is how supercompilation has always built process trees – nothing new.
• Our graph is built from a term t, using [t].
Transition System
• T = (S, A, →), where → ⊆ S × A × S
• A: a set of actions – here determined by the language
• S: a set of states – which are terms
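A small Haskell rendering of this (my own names, for illustration only):

-- T = (S, A, →) with the relation as a list of labelled edges
data LTS s a = LTS
  { states :: [s]
  , trans  :: [(s, a, s)]   -- → ⊆ S × A × S
  }

-- The a-successors of a state: where one experiment can take us.
succs :: (Eq s, Eq a) => LTS s a -> s -> a -> [s]
succs lts s a = [ s' | (p, l, s') <- trans lts, p == s, l == a ]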
A Difference without a distinction
• It's well known that we can distribute case – but we see it plainly here: just compare edge labels and leaves to match.
• Where do these transitions come from?
• They come from one-hole evaluation contexts – the atomic experiments that define the reduction semantics.
• case [] of ... | split [] as ... | [] b
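As a hedged illustration (mine, in Haskell with Either for sums): the distribution law says these two terms differ syntactically but make matching transitions.

lhs, rhs :: Either a1 a2 -> (a1 -> b -> c) -> (a2 -> b -> c) -> b -> c
lhs r s t b = (case r of Left x -> s x; Right y -> t y) b
rhs r s t b = case r of Left x -> s x b; Right y -> t y b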
Arbitrary Bisimulation
• To show that a ~ b:
• Whenever (a, α, a') ∈ G1, then there is some (b, α, b') ∈ G2 and a' ~ b'
• Whenever (b, α, b') ∈ G2, then there is some (a, α, a') ∈ G1 and a' ~ b'
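For finite transition systems the two conditions can be checked directly; here is a naive sketch (my own code, not the paper's) that computes the greatest bisimulation by deleting failing pairs until a fixed point:

import Data.List (nub)

type Trans s a = [(s, a, s)]

outgoing :: Eq s => Trans s a -> s -> [(a, s)]
outgoing ts s = [ (l, s') | (p, l, s') <- ts, p == s ]

bisimilar :: (Eq s, Eq a) => Trans s a -> Trans s a -> s -> s -> Bool
bisimilar g1 g2 a b = (a, b) `elem` gfp candidates
  where
    statesOf ts r = nub (r : [ p | (p, _, _) <- ts ] ++ [ q | (_, _, q) <- ts ])
    candidates = [ (x, y) | x <- statesOf g1 a, y <- statesOf g2 b ]
    -- (x, y) survives if every move of one side is matched by the other
    ok rel (x, y) =
         all (\(l, x') -> any (\(l', y') -> l == l' && (x', y') `elem` rel)
                              (outgoing g2 y)) (outgoing g1 x)
      && all (\(l, y') -> any (\(l', x') -> l == l' && (x', y') `elem` rel)
                              (outgoing g1 x)) (outgoing g2 y)
    gfp rel =
      let rel' = filter (ok rel) rel
      in if length rel' == length rel then rel else gfp rel'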
Use Park's Principle
• We do this by coming up with a relation closed under the (monotone) bisimulation conditions, which we can use in place of ~; we can then show it is a subset of ~, and hence establish ~.
• In practice this just means we have to be careful to have done something before reusing our hypothesis.
• That is: assume a ~ b, but make sure we have taken a transition before using it.
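In symbols (my restatement of the standard principle): if F is the monotone map on relations whose greatest fixed point is bisimilarity, Park's principle (coinduction) reads

  R ⊆ F(R)  ⟹  R ⊆ νF = ~

so any relation that survives one round of transitions is already contained in bisimilarity.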
Composition is Bisimilar
• [t] ∘ [s] ~ [t s]
• [t] ∘ T ~ [t T]
• We can go back and forth between splitting out separate graphs and combining them into one.
Pending Questions
• Equivalence of process trees should give a contextual equivalence – we shouldn't need to show improvement, but I've yet to prove this.
• Do we need the improvement theorem?
• In the f 0 = 42 example [Bird], we have equational rewriting going wrong, but we don't have a replacement of one normal form by an equivalent normal form.
• What fragments will we get normal forms for?
-- Slide code, deduplicated. Nat and NatStream are assumed by the slide;
-- their standard definitions are added here so the code runs.
data Nat = Z | S Nat
data NatStream = SCons Nat NatStream

data CoTrue = T | D CoTrue

true, false :: CoTrue
true = T
false = D false

eq :: Nat -> Nat -> CoTrue
eq Z Z = true
eq (S x) (S y) = eq x y
eq _ _ = false

{- Also called min on natural numbers -}
join :: CoTrue -> CoTrue -> CoTrue
join T b = T
join a T = T
join (D a) (D b) = D (join a b)

ex :: (Nat -> CoTrue) -> NatStream -> CoTrue
ex p (SCons x s) = D (join (p x) (ex p s))
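A usage sketch (my addition, building on the definitions above): ex searches an infinite stream productively, emitting one D per element inspected and stabilising to T once the predicate holds.

fromZero :: Nat -> NatStream
fromZero n = SCons n (fromZero (S n))

-- ex (eq (S (S Z))) (fromZero Z) evaluates to D (D (D T)):
-- the witness 2 sits three elements into the stream.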