### BibTeX

@MISC{Groote_,
  author = {Jan Friso Groote and François Monin and Jaco van de Pol},
  title = {},
  year = {}
}

### Abstract

We provide a treatise about checking proofs of distributed systems by computer using general purpose proof checkers. In particular, we present two approaches to verifying and checking the verification of the Serial Line Interface Protocol (SLIP), one using rewriting techniques and one using the so-called cones and foci theorem. Both verifications are carried out in the setting of process algebra. Finally, we present an overview of literature containing checked proofs.

Note: the research of the second author is supported by Human Capital Mobility (HCM).

### Proof checkers

Anyone trying to use a proof checker, e.g. Isabelle, encounters several difficulties. One difficulty is to get used to the strict logical rules that govern the reasoning allowed by the proof checker. Most of us have been educated in a mathematical style, which can best be described as intuitive reasoning with steps that are chosen to be sufficiently small to be acceptable to others. We all know examples of sound-looking proofs of obviously wrong facts ('1 = -1', 'every triangle is isosceles', 'in every group of people all members have the same age'). In fact it is quite common for mathematical proofs to contain flaws. Especially the correctness of distributed programs and protocols is a delicate matter, due to their nondeterministic and discrete character.

Proof checkers are intended to ameliorate this situation. One must get rid of the sloppiness of mathematical reasoning and get used to a more logical way of inferring facts. This is not to say that one should eliminate the mathematical intuition that helps guide the proof: the logical reasoning steps are so detailed that one easily loses track, and if this happens, even relatively short proofs are impossible to find. A typical exercise that was carried out using Coq during our first encounters with theorem checkers gives an impression of the time required to provide a formal proof. We wanted to show that there does not exist a largest prime number.
A well known mathematical proof of this fact goes like this. Suppose there exists a largest prime n. Then the product of all prime numbers exists; let it be m. Now consider m + 1. Clearly, dividing m + 1 by any prime number yields remainder 1, and therefore m + 1 is itself also a prime number, contradicting that n is the largest prime.

The formal proof requires that first a definition of the natural numbers, the induction principle, multiplication, divisibility and primality are given. Nowadays most theorem checkers contain libraries where some of these notions, together with elementary lemmas, are predefined and pre-proven. As a second step it is necessary to construct the product m of all prime numbers up to n (it is easier to construct the product of all numbers up to n) and prove that m + 1 is not divisible by any number between 2 and n. When doing this, it will turn out that the strict inductive proofs are not at all trivial, and need some thinking to find the appropriate induction hypotheses. It took more than a full month to provide the formalized proof, and we believe this to be typical for somebody with little experience in proof checking.

However, after having mastered a theorem checker, and after having proof checked the first theorems, the benefits of proof checking become very obvious. In the first place one starts to appreciate the power of higher order logics, and learns to see the difference between a proof, which can be transformed so as to be checked by a proof checker, and a 'proof' (or better, an 'intuitive story') for which the relation with a logical counterpart cannot be seen. On a more concrete level, one finds flaws in almost any proof, and correctness proofs of distributed systems or protocols are no exception; such flaws may even have an impact on the correctness of the protocol.
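As an aside to the largest-prime exercise above, the arithmetic core of the argument can be replayed concretely for a small bound; the following is of course an illustration in ordinary Python, not a formal proof:

```python
# Illustration of the largest-prime argument: for a supposed largest prime n,
# take m = the product of all primes up to n; then m + 1 leaves remainder 1
# when divided by each of those primes.
from math import prod

def primes_up_to(n):
    """All primes p with 2 <= p <= n (naive trial division)."""
    return [p for p in range(2, n + 1)
            if all(p % d != 0 for d in range(2, p))]

n = 13                      # pretend 13 were the largest prime
ps = primes_up_to(n)        # [2, 3, 5, 7, 11, 13]
m = prod(ps)
assert all((m + 1) % p == 1 for p in ps)   # no prime up to n divides m + 1
```

The formal proof must establish exactly this remainder property by induction, which is where the month of work went.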
A typical example is the equality between an implementation and a specification stated on page 118.

### Proof checkers and concurrency

Concurrency and proof checkers are orthogonal fields. This means that proof checkers are not particularly aimed at any concurrency theory. Because we are most acquainted with proof checking within the context of process algebra, we provide a perspective from this field. However, most of our conclusions and guidelines carry over directly to any other perspective. There are three requirements that need to be fulfilled for a theorem checker to be usable to check proofs of correctness of distributed systems.

1. The proof checker must be sufficiently expressive to encode the concepts occurring in the concurrency theory.

3. Finally, to really get a proof checker to work, the theory must be made effective. This means that either the formal proof must not contain too large a number of steps, so that they can all be entered by hand, or the proof checker allows large parts of the proof to be constructed by the checker.

In one of our earliest encounters with a proof checker [8], we expanded the parallel operator into alternative and sequential composition using the standard axioms of ACP [5]. Given the large number of applications of axioms that were needed, we had to develop specific expansion theorems. We have spent a lot of effort to make process algebraic proofs more amenable to being checked by computer. This has resulted in a method using cones and foci, which has been applied to a fully checked proof of the correctness of a distributed summing protocol [33]. Independently, an investigation into rewrite techniques has been carried out, which has been applied to the core of Philips' new Remote Control standard [36]. In the next sections we illustrate both techniques on the SLIP protocol.
### The SLIP protocol

The Serial Line Interface Protocol (SLIP) is one of the protocols that is very commonly used to connect individual computers via a modem and a phone line. It allows only a single stream of bidirectional information. This is a drawback, and therefore the SLIP protocol is gradually being replaced by the Point to Point Protocol (PPP), which allows multiple streams, such that several programs at one side can connect to several programs at the other side via one single line.

Basically, the SLIP protocol works by sending blocks of data. Each block is a sequence of bytes that ends with the special end byte. Confusion can occur when the end byte is also part of the ordinary data sequence. In this case, the end byte is 'escaped' by placing an esc byte in front of it. Similarly, to distinguish an ordinary esc byte from the escape character esc, each esc in the data stream is replaced by two esc characters. In our modeling of the protocol, we ignore the process of dividing the data into blocks, and only look at the insertion and removal of esc characters in the data stream. We model the system by three components: a sender, inserting escape characters, a channel, modeling the medium along which data is transferred, and a receiver, removing the escape characters.

We use four data types N, Bool, Byte and Queue to describe the SLIP protocol and its external behaviour. The sort N contains the natural numbers. With 0 and S we denote the zero element and the successor function on N. Numerals (e.g. 3) are used as abbreviations. The function eq : N × N → Bool is true when its arguments represent the same number. The sort Bool contains exactly two constants t (true) and f (false), and we assume that all required boolean connectives are defined. The sort Byte contains the data elements to be transferred via the SLIP protocol.
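The insertion and removal of escape characters just described can be sketched as follows; this is a minimal Python illustration with assumed placeholder values for the esc and end bytes, not the process-algebraic model used below:

```python
# Sketch of the escaping scheme described in the text: an esc byte is
# inserted before every esc or end byte, and the receiver strips it again.
# The concrete byte values are illustrative placeholders.
ESC, END = 0xDB, 0xC0

def slip_escape(data):
    """Sender side: insert an esc byte before every esc or end byte."""
    out = []
    for b in data:
        if b in (ESC, END):
            out.append(ESC)
        out.append(b)
    return out

def slip_unescape(data):
    """Receiver side: drop each esc byte and take the following byte literally."""
    out, i = [], 0
    while i < len(data):
        if data[i] == ESC:
            i += 1          # skip the escape; the next byte is taken literally
        out.append(data[i])
        i += 1
    return out

msg = [1, ESC, 2, END, 3]
assert slip_unescape(slip_escape(msg)) == msg
```

The verification problem treated below is precisely that this round trip is transparent when sender, channel and receiver run concurrently.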
As the definition of a byte as a sequence of 8 bits is very detailed and actually irrelevant, we only assume about Byte that it contains at least two not necessarily different constants esc and end, and a function eq : Byte × Byte → Bool that represents equality. Using a proof checker, we can find out that we indeed did not need any other assumption on bytes.

Furthermore, to describe the external behaviour of the system, we introduce a sort Queue, which we describe in slightly more detail to avoid the typical confusion that occurs with less standard data types. Queues are constructed using the empty queue ∅ and the constructor in : Byte × Queue → Queue. The function toe yields the oldest element of a queue: toe(in(d, ∅)) = d and toe(in(d, in(d', q))) = toe(in(d', q))).

We provide below the precise description of the SLIP protocol. For this we use process algebra with data in the form of µCRL ([5, 34]). The processes are defined by guarded recursive equations for the channel C, the sender S and the receiver R. Using the r action the sender reads a byte from a protocol user, who wants to use the service of the SLIP protocol to deliver this byte elsewhere. Using the two-armed condition p ◁ c ▷ q, which must be read as 'if c then p else q', it is obvious that if b equals esc or end, first an additional esc is sent to the channel (via action s1) before b itself is sent. Otherwise, b is sent without prefix. The receiver is equally straightforward. After receiving a byte b from the channel (via r1) it checks whether it is an esc. If so, it removes it and delivers the trailing end or esc. Otherwise, it just delivers b. Both the sender and the receiver repeat themselves indefinitely, too. In the fourth equation the SLIP protocol is defined by putting the sender, channel and receiver in parallel. We let the actions r1 and s1 communicate, and the resulting action is called c1. Similarly, r2 and s2 communicate into c2. This is defined using the communication function γ, by letting γ(ri, si) = ci for i = 1, 2.
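The queue operations assumed here can be mimicked with Python lists, newest element at the front; this is only a sketch of the intended semantics of in, toe and untoe, not the algebraic specification:

```python
# Sketch of the Queue sort with Python lists: in(d, q) adds at the front,
# toe yields the oldest element, untoe removes it.
def q_in(d, q):
    """in : Byte x Queue -> Queue."""
    return [d] + q

def toe(q):
    """toe(in(d, [])) = d; toe(in(d, in(d', q))) = toe(in(d', q))."""
    return q[-1]

def untoe(q):
    """The queue without its oldest element."""
    return q[:-1]

q = q_in(3, q_in(2, q_in(1, [])))   # [3, 2, 1]
assert toe(q) == 1
assert untoe(q) == [3, 2]
```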
The encapsulation operator ∂_{r1,s1,r2,s2} forbids the actions r1, s1, r2 and s2 from occurring on their own, by renaming these actions to δ, which represents the process that cannot do anything. In this way the actions are forced to communicate. The hiding operator τ_{c1,c2} hides these communications by renaming them to the internal action τ, which can largely be eliminated in weak bisimulation using the axioms xτ = x and x + τx = τx.

We want to obtain a better understanding of the protocol, because although it is rather simple, it is not straightforward to understand its external behaviour completely. Data that is read at r is of course delivered in sequence at s, without loss or duplication of data. So, the protocol behaves like a kind of queue. The reader should now, before reading further, take a few minutes to determine the size of this queue.¹

Actually, the protocol behaves as a queue of size three, as long as there are no esc and end bytes being transferred. Simultaneously, one byte can be stored in the receiver, one in the channel and one in the sender. If an esc or end is in transfer, it matters whether it occurs at the first or second position in the queue. At the first position the esc or end is ultimately neatly stored in the receiver, taking up one byte position, allowing two other bytes to be simultaneously in transit. If this esc or end occurs at the second position, there must be a leading esc in the channel C, and the esc or end itself must be in the sender. Now there is no place for a third byte. So, the conclusion is that the protocol behaves as a queue of size three, except when an esc or end occurs at the second position in the queue, in which case the size is two. This explains the full predicate defined above, and yields the description of the external behaviour of the SLIP protocol below. If the queue is not full, an additional byte b can be read. If the queue is not empty, an element can be delivered.
Spec(q : Queue) = Σ_{b:Byte} r(b)·Spec(in(b, q)) ◁ ¬full(q) ▷ δ + s(toe(q))·Spec(untoe(q)) ◁ ¬empty(q) ▷ δ

The theorem that we are interested in proving and proof checking is:

Theorem 3.1. Slip = Spec(∅), where '=' is interpreted as being branching or weakly bisimilar.

In Section 4 below we prove Theorem 3.1 directly, using process algebraic axioms and rewriting techniques to make this approach tenable for proof checkers. In Section 5 we apply the cones and foci theorem and check the set of rather straightforward preconditions in PVS. The checked proofs can be obtained by contacting the authors.

### Using rewrite systems in Isabelle/HOL

The direct proof method in process algebra consists of three steps:

1. Unfold the implementation by repeatedly calculating its first step expansion. This results in a system of guarded recursive equations.
2. Shrink this system by using the laws of weak (or branching) bisimulation.
3. Prove that the specification obeys the smaller set of equations.

The RSP principle then guarantees that the specification and implementation are equal. The bulk of the work is in the first step expansion. Given a process τ_I ∂_H(S ∥ C ∥ R), its first step expansion is a process of the form Σᵢ aᵢ · τ_I ∂_H(Sᵢ ∥ Cᵢ ∥ Rᵢ), with aᵢ the possible first steps of the process. The process Sᵢ denotes the sender after performance of aᵢ. The first step expansion must be repeated for the derivatives τ_I ∂_H(Sᵢ ∥ Cᵢ ∥ Rᵢ). In this way, the computation tree of a process can be unfolded. To avoid an infinite unfolding of the process, names are introduced. These names can be used for sharing parts of the tree. The procedure of expansion is continued until a closed system of guarded equations is found. The introduction of new names and the criterion to terminate the unfolding remain the creative part of the proof.

¹ When trying to prove the correctness of the SLIP protocol, we erroneously took the size of the queue to be one. When proving equality between the SLIP protocol and such a queue, it quickly became obvious that this was a stupid thought. So, we took three for the size. But this is not correct, either.
The first step expansion is rather straightforwardly calculated using the axioms of process algebra. However, due to the large number of applications of axioms, automation is desired. In Section 4.2 we will present a conditional higher-order rewrite system that, given a parallel process, computes its first-step expansion without running into exceedingly large intermediary terms. But first we provide the laws of process algebra and their implementation in Isabelle/HOL. The method is applied to the SLIP protocol in Sections 4.3 and 4.4.

### Formulation of Process Algebra in Isabelle

In Isabelle, terms have types, and the types are contained in classes. We introduce new classes act and data, and a communication function gamma. Here act is the class of action alphabets on which gamma is well-defined, and data is the class of types that may occur as data types in the processes. Given an alphabet 'a::act, a type constructor 'a proc is declared for the processes over the (polymorphic) alphabet 'a. After that, the process algebra operators are declared, and infix notation is introduced. Finally, this approach uses the iterative construct y <|<| z instead of the recursive definition x = y ** x ++ z. In traditional notation this is written y*z, meaning that y is repeated zero or more times, and then z is executed. Recursive definitions would introduce new names (x) that must be manually folded and unfolded during proofs. As an example, the type of the summation operator is ('d => 'a proc) => 'a proc. Here 'd and 'a are type variables, restricted to class data (for data types) and act (for action alphabets), respectively. Finally, the axioms of process algebra are turned into rules for Isabelle/HOL. Below we give a list of the axioms we used. Note that we work with weak bisimulation, which is slightly easier than branching bisimulation in the direct proof method.
The conditions gamdef a b c and gamundef a b can be read as γ(a, b) = c and 'γ(a, b) is undefined', respectively.

A4   "(x ++ y) ** z = x ** z ++ y ** z"
A5   "(x ** y) ** z = x ** y ** z"
A6   "x ++ delta = x"
A7   "delta ** x = delta"
D1   "-(a mem H) --> enc H (a<d>) = a<d>"
D1d  "enc H delta = delta"
D2   "a mem H --> enc H (a<d>) = delta"
TI1  "-(a mem H) --> hide H (a<e>) = a<e>"
TI1d "hide H delta = delta"
TI2  "a mem H --> hide H (a<e>) = tau"
TI3  "hide H (x ++ y) = hide H x ++ hide H y"
TI4  "hide H (x ** y) = hide H x ** hide H y"
BKS1 "x <|<| y = x ** (x <|<| y) ++ y"

### 4.2 A rewrite system for the expansions

In order to find the first step expansion of a term, we have to apply the laws of process algebra with care. Many of these laws (regarded as rewrite rules) make copies of subterms, leading to an unnecessary blow-up of intermediate terms (cf. CM1). Rather than programming a rewrite strategy in the theorem prover, we enlarge the usual rewrite rules with the context in which they may be applied. In this way we can control the application of the duplicating rewrite laws. The essence of our strategy is to avoid the generation of many subterms that will eventually be encapsulated. We assume that the subterm to be expanded is of the shape enc H (D ++ p). Here D can be seen as the head and p as the tail of the list of summands to be processed. The rewrite rules are found by case analysis on the form of D. We will make sure that the duplication of subterms can only take place in the head of the term. The encapsulation is used to remove idle subterms as quickly as possible. In order to start the system, a term enc H (x || y || z) first has to be transformed into enc H (x || y || z ++ delta). From then on the general shape will be enc H (D LL u ++ p), so we need an analogon of the previous rule: D is either a single component or the communication between two components.
These cases are dealt with by the following non-duplicating rules: CM2, CM3, CM5, CM6, CM7, CF1, CF2 and CF2' (and possibly their symmetric counterparts). Only the rules for alternative components (CM4, CM8 and CM9) are duplicating and have to be replaced. Eventually, the first summand is so small that it either can be discarded by the conditional rewrite rule

a mem H ==> enc H (a<d> ** x ++ p) = enc H p,

or it contributes to the final result. In that case we apply

-(a mem H) ==> enc H (a<d> ** x ++ p) = enc H p ++ a<d> ** enc H x,

in order to proceed with the next summand, which is the head of p. The summation symbols ($) are pulled to the front of the individual summands, using rules S4, S5, S6, S7 and its symmetric variant S7'. Eventually, a lot of summation signs can be eliminated after communication takes place. As the latter rule is non-duplicating, we don't need the encapsulation context to steer its application. The iteration construct is only unfolded in certain contexts. Finally, conditionals are pulled to the top of the terms. The complete set of rewrite rules can be found in the appendix. These rules have been proven in Isabelle using a much simpler rewrite system (basically the completion of the process algebra laws, cf. [1]). The rules have been gathered in a simplification set called expand_rs. Also, tactics to automatically prove side conditions like a ∈ H and gamdef a b c have been put into this simplification set. Finally, a tactic choose is defined, which (non-deterministically) applies the rule enc H p = enc H (p ++ delta), in order to bring the term into the required shape. Using backtracking, the user can really choose which terms to expand.

datatype Act = r | r1 | c1 | s1 | r2 | c2 | s2 | s

rule gamma_def "gamma == [(r1,s1,c1), (r2,s2,c2)]"

We are now ready to define the protocol itself. Because we can now use iteration, we don't need axioms but only definitions.
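The blow-up that this controlled strategy avoids is easy to see concretely. As a rough illustration (plain Python, unrelated to the Isabelle implementation), fully distributing alternative over sequential composition turns n binary choices into 2^n summands:

```python
from itertools import product

# n sequentially composed binary alternatives, e.g. (a0 ++ b0) ** (a1 ++ b1) ** ...
# Exhaustive distribution (rule A4) yields one summand per combination of
# branches, i.e. 2**n summands: the intermediate-term blow-up that the
# context-controlled rules are designed to keep in check.
def expand(alternatives):
    """Return every sequential trace of the fully distributed term."""
    return [sum(branch, ()) for branch in product(*alternatives)]

n = 4
procs = [(("a%d" % i,), ("b%d" % i,)) for i in range(n)]
assert len(expand(procs)) == 2 ** n
```

In the actual expansion most of these summands are encapsulated away, which is why discarding them as early as possible pays off.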
For brevity we omit the types. The first command unfolds the definitions in the left-hand side of the equation. The next command places the condition as an assumption in the context. Then one of the enc's is chosen and expanded using the expand_rs system. This is repeated for a second expansion. Note that the default choice of the system was wrong, so we had to backtrack. After that we unfold the definitions in the right-hand side. Then we call the rewrite system for hiding, tau_rs. Finally the left- and right-hand sides are compared. The latter step uses laws for commutativity of the alternative (A1) and parallel composition. Isabelle will not loop on such rules because it uses ordered rewriting. By doing some subtle substitutions in the equations above, and using the tau-laws (tau1, tau2) and the derived law τ(x + y) + x = τ(x + y), we reduce the system to the following set of equations. These equations form a system of guarded recursive equations, of which Slip is a solution.

### Using cones and foci in PVS

If protocols become more complex, it is not enough to automate basic steps; one must resort to effective meta theorems. As an example we present here the cones and foci theorem, or general equality theorem [35, 33], and explain the formalisation of Theorem 3.1 and its proof in PVS (see [78]). The basic observation underlying this method is that most verifications follow basically the same structure. The cones and foci theorem circumvents those verification steps that are similar, and focuses on the parts that are different for each verification. However, in order to be able to formulate such a general theorem, the format of processes as used up till now is too general. Therefore, we introduce the so-called linear process equation format, to which large classes of processes can be automatically translated [13]. Some remarks about this format are in order. First one should distinguish between the sum symbol with index i ∈ I and the sum with index e_i : E_i.
The first one is a shorthand for a finite number of alternative composition operators. The second one is a binder of the data variable e_i.

Definition 5.1. A linear process equation (LPE) over data type D is an expression of the form

X(d:D) = Σ_{i∈I} Σ_{e_i:E_i} a_i(f_i(d, e_i)) · X(g_i(d, e_i)) ◁ b_i(d, e_i) ▷ δ

LPEs are defined here having a single data parameter. The LPEs that we will consider generally have more than one parameter, but using cartesian products and projection functions, it is easily seen that this is an inessential extension. Finally, we note that sometimes (and we actually do so below) it is useful to group summands per action, such that Σ_{i∈I} can be replaced by Σ_{a∈Act}, where Act is the set of action labels. Such LPEs are called clustered, and by introducing some auxiliary sorts and functions, any LPE can be transformed into a clustered LPE (provided actions have a unique type). We call an LPE convergent if there are no infinite τ-sequences.

We obtained this form by identifying three explicit states in the sender and receiver, and two in the channel. We list below a number of invariants of LinImpl that are sufficient to prove the results in the sequel. The proof of the invariants is straightforward, except that we need invariant 2 to prove invariant 3.

Lemma 5.5. The following expressions are invariants for LinImpl:

2. eq(s_s, 2) → (eq(b_s, esc) ∨ eq(b_s, end));

The next step is to relate the implementation and the specification. In order to do this abstractly, we first introduce a clustered linear process equation representing the implementation:

X(d:D_p) = Σ_{a∈Act} Σ_{e_a:E_a} a(f_a(d, e_a)) · X(g_a(d, e_a)) ◁ b_a(d, e_a) ▷ δ

and a clustered linear process equation representing a specification. Note that the specification does not have internal τ steps. We relate the implementation to the specification by means of a state mapping h : D_p → D_q. The mapping h maps states of the implementation to states of the specification.
In order to prove implementation and specification branching bisimilar, the state mapping should satisfy certain properties, which we call matching criteria, because they serve to match states and transitions of implementation and specification. They are inspired by numerous case studies in protocol verification, and reduce complex calculations to a few straightforward checks. In order to understand the matching criteria we first introduce an important concept, called a focus point. A focus point is a state in the implementation without outgoing τ-steps; the focus condition FC(d) is ¬∃e_τ:E_τ (b_τ(d, e_τ)). The set of states from which a focus point can be reached via internal actions is called the cone belonging to this focus point. Now we formulate the criteria. We discuss each criterion directly after the definition. Here and below we assume that ¬ binds stronger than ∧ and ∨, which in turn bind stronger than →.

Definition 5.7. Let h : D_p → D_q be a state mapping. The following criteria are called the matching criteria. We refer to their conjunction by C_{p,q,h}(d).

1. The LPE for p is convergent.
2. ∀e_τ:E_τ (b_τ(d, e_τ) → h(d) = h(g_τ(d, e_τ)))

Criterion (2) says that if in a state d of the implementation an internal step can be done (i.e. b_τ(d, e_τ) is valid), then this internal step is not observable. This is described by saying that both states relate to the same state in the specification. Criterion (4) says that in a focus point of the implementation, an action a can be performed in the implementation if it is enabled in the specification. Using the matching criteria, we would like to prove that, for all d:D_p, C_{p,q,h}(d) implies that p(d) and q(h(d)) are branching bisimilar.

For the SLIP protocol we define the state mapping using the auxiliary function cadd. The expression cadd(c, b, q) yields a queue with byte b added to q if the boolean c equals true. If c is false, it yields q itself.
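To make this concrete, here is a small sketch of the conditional add and of a state mapping in this style; the state encodings and component conditions in h are illustrative stand-ins in Python, not the paper's exact definition:

```python
ESC = "esc"   # stand-in for the escape byte

def cadd(c, b, q):
    """Conditional add: cadd(t, b, q) = in(b, q), cadd(f, b, q) = q."""
    return [b] + q if c else q

# Hypothetical state mapping in the style described in the text: build the
# abstract queue from the bytes held by sender, channel and receiver,
# including each byte only when its component's condition holds.
def h(bs, ss, bc, sc, br, sr):
    return cadd(ss != 0, bs,                        # sender not about to read
           cadd(sc == 1 and bc != ESC, bc,          # channel carries a literal byte
           cadd(sr != 0 and br != ESC, br, [])))    # receiver holds a deliverable byte

assert h("x", 0, "y", 0, "z", 0) == []              # all components idle: empty queue
assert h("x", 1, "y", 1, "z", 1) == ["x", "y", "z"] # three bytes in transit
```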
Hence the conditional add is defined by the equations cadd(f, b, q) = q and cadd(t, b, q) = in(b, q). The state mapping is in this case:

h(b_s, s_s, b_c, s_c, b_r, s_r) = cadd(¬eq(s_s, 0), b_s, cadd(eq(s_c, 1) ∧ ¬eq(b_c, esc), b_c, cadd(¬eq(s_r, 0) ∧ ¬eq(b_r, esc), b_r, ∅)))

So, the state mapping constructs a queue out of the state of the implementation, containing at most b_s, b_c and b_r, in that order. The byte b_s from the sender is in the queue if the sender is not about to read a new byte (¬eq(s_s, 0)). The byte b_c from the channel is in the queue if the channel is actually transferring data (eq(s_c, 1)) and it does not contain an escape character indicating that the next byte must be taken literally. Similarly, the byte b_r from the receiver must be in the queue if the receiver is not empty and b_r is not an escape character. The focus condition of the SLIP implementation can easily be extracted and is (slightly simplified using the invariant):

Proof. We apply Theorem 5.8 by taking LinImpl for p, Spec for q, and the state mapping and invariant provided above. We simplify the conclusion by observing that the invariant and the focus condition are true for s_s = 0, s_c = 0 and s_r = 0. By moreover using that h(b_1, 0, b_2, 0, b_3, 0) = ∅, the lemma is a direct consequence of the general equality theorem. We are only left with checking the matching criteria:

1. The measure 13 − s_s − 3s_c − 4s_r decreases with each τ step.

toe(untoe(h(b_s, s_s, b_c, s_c, b_r, s_r))) = b_r and eq(b_r, esc) ∨ eq(b_r, end) are shown in a similar way.

(f) We must show that the invariant, the focus condition and ¬empty(h(b_s, s_s, b_c, s_c, b_r, s_r)) imply eq(s_r, 2) ∨ (eq(s_r, 1) ∧ ¬eq(b_r, esc)). Assume FC and, towards using contraposition, ¬eq(s_r, 2) ∧ (¬eq(s_r, 1) ∨ eq(b_r, esc)). Using the invariant we deduce eq(s_c, 0) ∨ (eq(b_c, esc) ∧ eq(s_c, 1)). By the second conjunct of FC (contraposition), we obtain ¬eq(s_c, 1), so by the invariant, eq(s_c, 0), and by the first conjunct of FC, eq(s_s, 0) holds. By the definition of the state mapping h, we easily see that h(b_s, s_s, b_c, s_c, b_r, s_r) = empty.

(a) Trivial.
(f) Use toe(cadd(c_1, b_1, cadd(c_2, b_2, in(b_3, ∅)))) = b_3.

6. (a) Trivial using the definitions. (f) Idem. □

Using Lemmas 5.3 and 5.9 it is easy to see that Theorem 3.1 can be proven. Only now do we come to the actual checking of this protocol in PVS. We concentrate on proving the invariant and the matching criteria. We must choose a representation for all concepts used in the proof. As this would make the paper too long, we only provide some definitions and highlight some steps of the proof, giving a flavour of the input language of PVS.

We start off defining the data types. We use as much of the built-in data types of PVS as possible. The advantage of this is that we can use all knowledge of PVS about these data types. A disadvantage is that the semantics of the data types in PVS may differ from the semantics of the data types in the protocol, leading to mismatches between the computerized proof and the intended proof. The types N and Bool are built-in types of PVS and need not be defined. We declare Byte to be a nonempty type, with two elements esc and endb (end is a predefined symbol and can therefore not be used). For queues we take the built-in type list and parameterize it with bytes. The types of the parameters of the linear implementation and the specification are now given by DX and DY respectively. The type upto(n) denotes a finite type with the natural numbers up to and including n. A function such as untoe can now be defined in the following way:

```
untoe(q: Queue): RECURSIVE Queue =
  IF null?(q) THEN null
  ELSE IF null?(cdr(q)) THEN null
       ELSE cons(car(q), untoe(cdr(q)))
       ENDIF
  ENDIF
MEASURE (LAMBDA (q: Queue): length(q))
```

The functions car, cdr and null? are built into PVS. The MEASURE statement is added to help PVS find criteria for the well-foundedness of the definition, which is in this case obtained via the length of the queue. Below we show how a linear process equation is modeled.
In essence the information content of an LPE is the set D, the index set I, the sets E_i, the actions a_i and the functions f_i, g_i and b_i. We only provide the LPE representation for the linear implementation of the SLIP protocol. The set D is given as DX defined above. We group all τ-actions, which leaves us with three summands. We assume this a priori (and have even encoded this bound in all theorems), as making it more generic would make the presentation less clear. Using the knowledge that there are only three summands, we can define the sets E_i very explicitly: E1: TYPE=Byte, E2: TYPE=upto(0) and E3: TYPE=upto(3).

To assert the MAINTHM theorem described below in PVS, corresponding to the main Lemma 5.9, GET is to be applied with the instantiation L_IMPL, L_SPEC, stmapp, Inv, where Inv is an encoded expression of the invariants defined in Lemma 5.5. After application of the GET theorem one is confronted with a long list of proof obligations. They could be proved with several separate lemmas. To give an impression of what they look like, we provide below the lemma that corresponds to the sixth matching criterion. It has been proven using the built-in grind tactic.

Coq has by far the nicest underlying theory, which is, however, not very easy to understand. Coq uses a strict separation between constructing a proof and checking it. Actually, using the Curry-Howard isomorphism, a term (=proof) of a certain type (=theorem) is constructed using the vernacular of Coq. After that the term and type are sent to a separate type checker, which double checks whether the term is indeed of that type, or equivalently whether the proof is indeed a proof of the theorem. In a few rare cases we indeed constructed proofs that were incorrect, which were very nicely intercepted in this way. This gives Coq by far the highest reliability of the provers. A disadvantage of Coq is that it is relatively hard to get going.
This is due to the fact that the theory is difficult, and there are relatively few and underdeveloped libraries. Furthermore, automatic proof search is less supported in Coq than in PVS and Isabelle. Isabelle is the most difficult theorem prover to learn. This is due to the fact that the user must have knowledge of both the object logic (HOL, but there are others) and the metalogic (higher order minimal logic). An advantage of this two-level approach is that proof search facilities have a nice underpinning in the meta logic. These facilities include backtracking, higher order unification and resolution. Although there are no proof objects that are separately checked, as in Coq, Isabelle operates through a kernel, making it much more reliable than PVS. Term rewriting is an exception, as it has been implemented outside this kernel for efficiency reasons, but it is very powerful, as ordered conditional higher-order rewriting is implemented, and rather efficient.

In the context of process algebra [5] most such checks have been carried out using the language µCRL [34]. It has been encoded in the Coq system and applied to the verification of the alternating bit protocol [8, 7] and Milner's scheduler. Temporal logic has been mainly used for proving safety (invariance) properties and liveness (eventuality) properties of concurrent systems. The temporal logic of actions (TLA) was developed by Lamport. A subset of the temporal formalism of Manna and Pnueli has also been used. The Unity community has also used the Larch Prover to study a communication protocol over faulty channels. Other formal frameworks have been applied to the verification of the previous examples. The alternating bit protocol was checked in Coq in [25]. We can mention [77], where the Fisher mutual exclusion protocol and the railroad crossing controller were verified in PVS. The former is also done with PVS in [54], and the latter is proved with the Boyer-Moore prover in [83].
In [80], the steam boiler was checked by Vitt and Hooman, also using PVS. The last author also verified a processor-group membership protocol and the binary exponential backoff protocol [44, 45], and a safety property, together with a real-time progress property, of the ACCESS bus protocol in [43]. Also the biphase mark protocol has been verified. As an interesting benchmark problem for specification and verification, the interactive convergence clock synchronization algorithm [51] has been mechanically checked with the Boyer-Moore prover in [82] and with PVS in [73]. Also, several versions of the oral messages algorithm [52] have been proved correct in [84] with the new version ACL2 [46] of Nqthm, and with PVS in [76, 72, 55]. Nqthm has also been used. In recent years, numerous protocols have been checked in the field of security systems with modal logic or general purpose formal methods. Among many checked cryptographic protocols, the protocols of [6, 69, 70] were proved using Isabelle, and the protocols of [4, 11] were proved with Coq. Examples of protocols or distributed systems have also been verified in a combination of theorem proving and model checking. An 8.2m-bit multiplier was verified with LP for arbitrary values of m.

### A The set of rewrite rules (appendix to Section 4)

We present the set of rewrite rules in four parts. First the rules for ACP and standard concurrency are presented. Then the extensions with the sum operator, the star operator and the conditionals are presented. All equations are to be read as rewrite rules from left to right.

### A.1 ACP with standard concurrency

There are three kinds of rules here. Purely administrative rules, that just rearrange the terms. Then there are rules to compute left merges and communication merges. Finally, there are rules to contract terms containing deltas.
Administrative rules: these rules can be easily proved from the axioms, apart from the associativity of the merge. The latter requires a lot of sophisticated applications of the laws of standard concurrency. In Isabelle, this proof heavily depends on ordered rewriting with the appropriate rules and backtracking.

These rules follow easily from the axioms, apart from the fourth and fifth. The latter require a case distinction on whether d = e and whether γ(a, b) is defined or not. All cases are easy.