When doing something that has apparently been done often and seems to be well understood, it is helpful to look at what one is doing differently from others. The question is: will the differences in approach lead to substantial differences in outcome, or will one merely produce a minor variant on a worn-out theme?
On this page I will argue that although the differences between Solid Intuition and existing provers are simple and easy to explain, they run deep and will lead to significant differences in practice. Many known problems do not arise in this approach, while many new problems will show up.
I will sort the differences into categories and start with the goals and objectives of Solid Intuition versus existing provers, since these are the root of many of the other differences.
Here is a summary:
- Solid Intuition focuses first and foremost on the User Interface and the User Experience. The logical and mathematical aspects are subordinate to this.
- Proofs as we construct them are primarily a means of communication between humans. A proof shows not only that something is true, but also why it is true. (For example, in programming a proof justifies that all parts of a program are necessary, when they are all needed in the proof. We can also see which part of the program is needed in which part of the proof. Our conjecture is that this gives readers and authors of proofs a degree of understanding that makes it easier to change and adapt programs, and also to develop solutions to similar and more complex problems.)
- The logical foundations and the theory of programming that we are using are the simplest and most practical that have been proposed to date.
Table of Differences
| Existing provers | Solid Intuition |
|---|---|
| powerful | easy to use |
| has a meta-logic / core logic | has only one logic |
| has a minimal set of rules, to allow different logics to be implemented by changing the axioms | has a redundant set of rules, to make proofs easier and the tool more consistent |
| "to prove" is understood as discovering whether something is a theorem | "to prove" is understood as making it transparent to anyone that something is a theorem |
| designed as a software development tool; later also used for teaching purposes | designed as a teaching tool; maybe also usable for industrial software development (models are the Pascal and Smalltalk programming languages) |
The proof script that results from proving a theorem with a software theorem prover resembles a detailed journal of everything a user had to do to make the computer accept the proof. It contains all the gory technical details, omitting nothing, like a researcher's work recorded on video.
The proof editor, on the other hand, intends to show the result of the proving process: the essence of the researcher's work as it would be published, for example, in a journal article. The only technical details contained in the presentation are those that the author has chosen to present.
Goals / Objectives
The name prover, used by HOL, PVS, Isabelle and others, implies that these tools have been designed to prove: they are machines that prove automatically while being directed by a human programmer. I say programmer instead of user because that person does not enter proofs directly; rather, she has to write tactics and scripts, which are nothing but programs written in a very high-level language specialised for directing a prover. As a consequence, the user must learn not only the language in which propositions are written and the language in which proofs are written, but also the language in which proof scripts are written.
On the other hand, the name proof editor, which we use for Solid Intuition, implies that the program is merely a helper for the user in doing her proofs. The important difference is that the proof editor tries to speak the language of the user, while a prover usually imposes its own language and way of working on the user. The language used by Solid Intuition is deliberately designed to correspond to what people use when they have to do the work on their own.
Important advice for anyone who wants to be successful: specialize! We follow that advice by tailoring our tool to one kind of program (or rather, one kind of programming): the development of general algorithms. These are the basic algorithms of computer science (searching, sorting, numerics, memory allocation, ...) that are taught to all students and that are still reimplemented from time to time (not as often as they used to be, since function libraries have finally made their breakthrough in industry). Areas that still need to implement their own basic algorithms are high-safety and high-security programming, and occasionally also high-efficiency programming.
Most other provers use some kind of sequent calculus to manage the local theorems and current hypotheses in proofs. But since the operators of the sequent calculus mirror those of boolean algebra, an additional layer of complexity is added. Since we are using equational proofs and context rules, we don't need that additional layer: boolean algebra suffices for everything. Likewise, many formal systems try to develop very general theories; for example, they might start with the axioms for sets (as Abrial does in his "B book") and then construct the integers from sets. By comparison, introducing both integers and sets with their own axioms is much easier. Everything about integers can then be proven by referring only to the integer axioms. Finally, the same approach also holds for specifications of software modules: we can try to model them via sets, as the Z notation does, or we can just give their laws as algebraic specifications (as in LARCH). Of course, when a software module has a more complex state, an indirect specification becomes necessary. But still, we have the advantage of specifying everything as simply and straightforwardly as possible.
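To illustrate the equational style (this example is generic boolean algebra, not taken from Solid Intuition's actual rule library), here is a short calculational proof that weakening is a theorem, each step justified by a named law:

```latex
\begin{align*}
        & x \Rightarrow (y \Rightarrow x) \\
=\;\;   & \{\text{material implication, twice}\} \\
        & \lnot x \lor (\lnot y \lor x) \\
=\;\;   & \{\text{symmetry and associativity of } \lor\} \\
        & (\lnot x \lor x) \lor \lnot y \\
=\;\;   & \{\text{excluded middle}\} \\
        & \mathit{true} \lor \lnot y \\
=\;\;   & \{\text{zero of } \lor\} \\
        & \mathit{true}
\end{align*}
```

No sequents, turnstiles, or structural rules appear: every step is an equality between boolean expressions, which is the point of the argument above.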
A similar argument applies to our formalisation of programs. A specification is just a boolean expression that describes the relation between inputs and outputs. A program also describes this relation, but is additionally executable by a computer. The refinement relation between programs and specifications is just boolean implication. This way, we don't need any mechanism to extract proof obligations: the refinement statement itself is what we prove. For a comparison between this approach and what other people use, see Ric's paper What's wrong with formal programming methods? (1991).
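A minimal sketch of this, with invented variable names (the convention of unprimed variables for inputs and primed variables for outputs is assumed): take a specification that merely demands the state increase, and a program that increases it by one. The refinement claim is then an ordinary boolean implication, provable by arithmetic alone:

```latex
\begin{align*}
\text{specification:} \quad & S \;=\; (x' > x) \\
\text{program:}       \quad & P \;=\; (x' = x + 1) \\
\text{refinement:}    \quad & \forall x, x' \cdot\; P \Rightarrow S
\end{align*}
```

There is nothing to extract: the last line is itself the theorem to be proved, and it follows by substituting $x' = x + 1$ into $x' > x$ and observing $x + 1 > x$.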
The idea of having one simple set of "core rules", on which everything else can be implemented and in which any logic can be modelled, appeals to almost everyone. Researchers can't resist the generality implied by this approach. But in most cases, only one logic is ever implemented inside the prover's meta-logic (or core logic). In Isabelle, for example, HOL is the embedded logic used for most applications. In that case, the core logic is just an additional layer in the software that has to be understood and put to use.
In our approach we will try to resist the temptation to be too general and instead focus on just the proof rules and the logic that are needed for the refinement of programs.
Other provers are usually implemented in functional and/or logic programming languages. This makes the development of inference algorithms easier, but the design of a good user interface harder. We'll do it the other way round: since the user interface is the most important thing, we'll use a language, tools, and methods that best support the design and implementation of good user interfaces.
Furthermore, we'll make use of the current state of the art in software engineering regarding design by contract, documentation, testing, collaboration, and bug/feature management.
One consequence of this is that we can make up for not having a meta-logic / core logic by creating a good contract-/interface-based design and cleanly isolating the inference rules from the rest of the program. The part of the editor that must be validated to ensure the system is sound will then be clearly separated from all the fancy user-interface functions. Other provers use a text-based interface between the graphical part and the underlying proof engine. We claim that a well-documented, contract-based OO or function-call interface can do the same job even better.
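The isolation idea can be sketched as follows. This is a toy illustration, not Solid Intuition's actual design; all class and method names are invented. The point is that only the small, validated kernel can construct theorem objects, so the user-interface code, however fancy, cannot compromise soundness:

```python
# Sketch: trusted inference rules behind a narrow, contract-documented
# interface. Names are invented for illustration only.

class Theorem:
    """A proven boolean expression, here represented as a nested tuple,
    e.g. ('=', lhs, rhs). Only the kernel may create instances; the
    constructor checks a private token to enforce this contract."""
    _token = object()

    def __init__(self, expr, token):
        assert token is Theorem._token, "only the kernel may create theorems"
        self.expr = expr


class Kernel:
    """The only part of the program that must be validated for soundness.
    UI code may display, search, and suggest, but every Theorem it ever
    holds was produced by one of these rules."""

    @staticmethod
    def reflexivity(term):
        # axiom scheme: term = term
        return Theorem(('=', term, term), Theorem._token)

    @staticmethod
    def transitivity(ab, bc):
        # from a = b and b = c, conclude a = c (preconditions checked)
        assert ab.expr[0] == '=' and bc.expr[0] == '='
        assert ab.expr[2] == bc.expr[1], "middle terms must match"
        return Theorem(('=', ab.expr[1], bc.expr[2]), Theorem._token)


# Usage: client code can only combine kernel rules, never forge a Theorem.
t1 = Kernel.reflexivity('x')
t2 = Kernel.transitivity(t1, t1)
print(t2.expr)  # ('=', 'x', 'x')
```

This is essentially the LCF kernel discipline expressed through an object-oriented contract instead of an abstract type in a functional language, which is the claim made above: the interface style changes, the isolation guarantee does not.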