Solid Intuition/Types

From SE Wiki
Revision as of 16:37, 22 December 2010 by (Talk)


aPToP generalises the notion of a function to a scope, which is used for both functions and quantifiers. All variables are bound inside a scope. All identifiers which are not variables denote constants. Examples of constants are ⊤, ⊥, 0, 1, +, - and all other symbols which have a fixed meaning in the theory. From a minimalist standpoint a constant is the same as a variable whose introduction we cannot see (i.e. we cannot zoom out of its scope). During proofs we can zoom into a scope, and then this scope's variables also behave like constants. The difference between variables and constants is that variables can be instantiated. But since the rule of instantiation says that all instances of a variable have to be instantiated with the same expression, we can only do this once we have zoomed out far enough.

To further generalize our model of expressions, all operators are treated like variables which have a function type.

All variables are bunches, although many are elementary bunches ("elements" for short), and this distinction is crucial. All variables are also strings, although some are one-item strings ("items" for short); this distinction is also important but not as omnipresent in the proof rules. (Note: the proof rules are our only semantics, since we do not attach any other meaning to an expression.) The reader might wonder how a variable can be both a bunch and a string, and smell a contradiction here. But there is no contradiction as long as we do not claim that a bunch cannot also be a string and vice versa. Indeed we will say that the bunch construction operators (null, ",") and the string construction operators (nil, ";") distribute over each other, so a bunch of strings becomes a string of bunches. For the vast majority of functions (scopes), arguments are elements (though not necessarily items), which means that we can distribute all such functions over bunches without any harm. It also means that all ∀ quantifications are over elements. Now, again, the reader may ask how some variables can be non-elements at all, since we said above that all variables can be seen as having a scope somewhere. Well, we have a second kind of function whose arguments may be any kind of bunch (again, items or not) and whose application does not distribute over bunches. This seems to work fine.
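To make the distribution idea concrete, here is a minimal sketch in Python. The encoding is my own (not part of aPToP): a bunch is modelled as a frozenset, and an element is a one-item bunch, so "f distributes over bunches" becomes elementwise application.

```python
def bunch(*items):
    """Bunch union, written 'a, b' in the theory; modelled as a frozenset."""
    return frozenset(items)

def distribute(f, b):
    """Apply an elementwise function to a bunch: since f distributes
    over bunch union, f applied to (a, b) is f(a), f(b)."""
    return frozenset(f(x) for x in b)

small = bunch(0, 1, 2)
print(sorted(distribute(lambda n: n + 1, small)))  # [1, 2, 3]
```

The model is of course finite and untyped; it only illustrates why distributing functions are harmless to apply to non-elementary arguments.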

(I guess the reader can see from this how the question of itemness is not that important: functions never distribute over strings. Indeed the only things that distribute over strings are bunch operators, but that is because string operators (just like most operators of the theory) distribute over bunches. Strings might seem a little scary because they initially look like bunches, but actually they are pretty tame. (Not lame, though!))

Finally, all program variables are both items and elements. Programming is relatively simple; sets and lists exist as ordinary data structures. (And as soon as we have invented modules, the programmer can choose a different implementation for every set and list that he wants to use.)

Discovering element-hood

To use the law of function application ‹ x → expr › arg = (replace x in expr with arg), we have to show that "arg" is an element. This check is done together with type-checking (see next section) and therefore starts at the leaves of an expression and works toward the root.
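The application law itself can be sketched as naive substitution. This is my own toy encoding (expressions as plain strings, a user-supplied element-hood predicate), not the wiki's implementation, and string replacement ignores variable capture entirely:

```python
def apply_scope(var, body, arg, is_element):
    """Apply the scope 'var -> body' to arg by textual substitution,
    but only after the element-hood side condition on arg holds."""
    if not is_element(arg):
        raise ValueError("argument must be an element to use the law")
    # Naive textual replacement; a real implementation would
    # substitute on the syntax tree and respect inner scopes.
    return body.replace(var, arg)

# 'x -> x + x' applied to 3 gives 3 + 3
print(apply_scope("x", "x + x", "3", lambda a: a.isdigit()))  # 3 + 3
```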

First of all, there's a caveat which holds for element-hood checking as well as for type-checking: both are sound and incomplete. For instance, a solution bunch will always check as non-elementary, creating a proof obligation for the user to discharge if she truly believes that this bunch has exactly one element. Likewise, in a context that contains x:int and x>0, it is left to the user to show x:nat. (This is much like PVS.)

For purposes of checking element-hood there are three kinds of variables and functions. The easiest are functions which distribute over bunches: e-checking can simply skip over them and check that all their arguments are elements. Another kind are those which are not (necessarily) elements, such as "nat", "a,b", "null" and the solution quantifier. The third kind is the most important: functions that take bunch arguments and return elements. Those are very few in the book (namely ":", "=", "{}", "¢"), plus any that users may add. Now to check element-hood we simply start with the non-function variables at the leaves of an expression (whose element-hood is known from the scope through which they were introduced) and work toward the root. All we need to check is that every (possibly) non-elementary expression is elementified by one of the functions which do so.
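The three kinds of operators suggest a very small recursive checker. The following Python sketch uses an encoding of my own (expressions as nested tuples, leaves as variable names, the operator tables as illustrative guesses), so the names are hypothetical:

```python
DISTRIBUTING = {'+', '-', 'succ'}     # skip over; just check the arguments
ELEMENTIFYING = {':', '=', 'set', 'size'}  # take bunches, return elements
# Operators like 'nat', ',' and 'null' fall in neither table: their
# results may be proper bunches, so the checker rejects them.

def is_elementary(expr, leaf_elem):
    """Soundly (but incompletely) decide element-hood; leaf_elem gives
    the element-hood of variables, known from their introducing scopes."""
    if isinstance(expr, str):                  # a leaf variable
        return leaf_elem.get(expr, False)
    op, *args = expr
    if op in ELEMENTIFYING:
        return True                            # elementifies any argument
    if op in DISTRIBUTING:
        return all(is_elementary(a, leaf_elem) for a in args)
    return False                               # proof obligation remains

print(is_elementary(('+', 'x', 'y'), {'x': True, 'y': True}))  # True
print(is_elementary(('+', 'x', ('nat',)), {'x': True}))        # False
```

Note how the `False` branch is exactly the incompleteness discussed above: the checker never claims element-hood it cannot see, and leaves the rest to the user.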

Type checking

Our theory, aPToP, uses the bunch inclusion operator ":" to make statements about types. For non-function expressions the right-hand side of a typing statement is just an ordinary bunch. For function expressions the right-hand side is another function; there is an axiom defining what ":" means when comparing functions, and expressions like "int→bool" are just shorthands for functions. Every typing statement is an ordinary expression of the theory and can be proven using ordinary proof techniques. The use of ":" means that it is very natural for a variable to have many types. For example, since "nat : int", for any "x : nat" we also have "x : int" by simple transitivity. Also we can have different typing axioms for one and the same function, e.g. "_+_ : nat→nat→nat" and "_+_ : int→int→int" (although neither type is included in the other). This is all for free, without the need of special rules to reason about it.
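The transitivity argument can be checked in a toy model. This sketch is an assumption of mine, not part of the theory: types are modelled as finite frozensets and ":" as subset inclusion, which is enough to see how "x : nat" and "nat : int" give "x : int" for free.

```python
# Finite stand-ins for the bunches nat and int (model only).
nat = frozenset(range(0, 100))
int_ = frozenset(range(-100, 100))

def included(a, b):
    """Bunch inclusion 'a : b', modelled as subset."""
    return a <= b

x = frozenset({5})            # an element: a one-item bunch
assert included(nat, int_)    # nat : int
assert included(x, nat)       # x : nat
print(included(x, int_))      # True, by transitivity of inclusion
```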

When using aPToP in practice, usually all typing is implicit since it is trivial and doesn't interfere with the interesting parts of a proof. Only in rare circumstances do we have to think about the validity of a variable instantiation. Our type checker embodies this by discharging most typing side-conditions via some very simple rules described here. All other typing conditions will simply be left as proof obligations to the user. (With successfully typed subexpressions as antecedents and full context from the surrounding proof, of course.)

As in the e-check, all a type-checker has to do is start from the leaves of an expression, which all have type statements because they were introduced via some scope, and then work its way toward the root using the following simple law about types: "x:A ∧ f:A→B ⇒ f x : B". To make type-checking bother-free for the user, this law is hard-wired into the type-checker. From a semantic standpoint, however, it is just an ordinary proven expression (which follows from the definition of ":") and can also be used in proofs; for instance, when doing a non-trivial typing proof by hand.
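The hard-wired law can be sketched in a few lines. Again the encoding is hypothetical (types as strings, function types as tuples); the point is only that the rule either fires or leaves a proof obligation, exactly as described above:

```python
def apply_type(f_type, arg_type):
    """Implements x:A ∧ f:A→B ⇒ f x : B. Returns the result type B
    when the rule fires, or None, leaving the typing condition as a
    proof obligation for the user."""
    if isinstance(f_type, tuple) and f_type[0] == 'fun':
        _, dom, cod = f_type
        if arg_type == dom:
            return cod
    return None

succ_type = ('fun', 'nat', 'nat')      # succ : nat -> nat
print(apply_type(succ_type, 'nat'))    # nat
print(apply_type(succ_type, 'int'))    # None -> left to the user
```

A fuller checker would also try the other typing axioms of the same function (e.g. "_+_ : int→int→int" when the nat axiom fails), since one function may have several incomparable types.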

In order to express laws on sets and functions in the most general fashion, we will eventually need type variables. For a first prototype, however, it is sufficient to define a bunch "basic = number, [number], {number}, number→number, number→number→number", which is a type for most expressions we will ever encounter. Then we can write, for example, ∀‹ A, B : basic |→ A,B = B,A ›.