### Table 8: Timed automata for the parallel operator

1996

"... In PAGE 27: ... So, for the sake of correctness in our de nitions, we choose a wide enough set of bound clocks in ck(p). We give the rules for the timed automaton in Table8 . Operators jj A and jA are the left-merge and the communicating versions of the parallel operator, respectively.... In PAGE 54: ...4, pjjAq a; - p0jjAck(q) and j= (v[ (pjjAq) a0] + d)( ) ^ @(pjjAq)). By rules in Table8 , a = 2 A and p a; - p0, (pjjAq) = (p) [ (q) and @(pjjAq) = @(p) ^ @(q). Since ncv(pjjAq), j= (v[ (p) a0] + d)( ) and j= (v[ (p) a0] + d)(@(p)).... In PAGE 55: ...4, pjjAq a; - ck(p)jjAq0 and j= (v[ (pjjAq) a0] + d)( ^ @(pjjAq)). By rules in Table8 , a = 2 A and q a; - q0, (pjjAq) = (p) [ (q) and @(pjjAq) = @(p) ^ @(q). Again, by rules in Table 8, p0jjAq a; - ck(p0)jjAq0, (p0jjAq) = (p0) [ (q) and @(p0jjAq) = @(p0) ^ @(q).... In PAGE 56: ...4, pjjAq a; ^ 00 - p0jjAq0 and j= (v[ (pjjAq) a0] + d)(( ^ 00) ^ @(pjjAq)). By rules in Table8 , a 2 A and p a; - p0, q a; 00 - q0, (pjjAq) = (p) [ (q) and @(pjjAq) = @(p) ^ @(q). Since ncv(pjjAq), j= (v[ (p) a0] + d)( ^ @(p)).... In PAGE 57: ...4, ck(p)jjAq a; - p0jjAck(q) and j= (v[ (ck(p)jjAq) a0] + d)( ^ @(ck(p)jjAq)). By rules in Table8 , a = 2 A and p a; - p0, (ck(p)jjAq) = (q) and @(ck(p)jjAq) = @(p) ^ @(q). By de nition of S1, there exists v, v0 and d0 such that v var(p) = (v[ (p) a0] + d0) var(p), v0 var(p0) = (v0[ (p0) a0] + d0) var(p0) and (p; v)Rvar(q)(p0; v0).... In PAGE 59: ...4, ck(p)jjAq a; - ck(ck(p))jjAq0, and j= (v[ (ck(p)jjAq) a0] + d)( ^ @(ck(p)jjAq)). By rules in Table8 , a = 2 A and q a; - q0, (ck(p)jjAq) = (q) and @(ck(p)jjAq) = @(p) ^ @(q). Again, by rules in Table 8, ck(p0)jjAq a; - ck(ck(p0))jjAq0, (ck(p0)jjAq) = (q) and @(ck(p0)jjAq) = @(p0) ^ @(q).... In PAGE 60: ...4, ck(p)jjAq a; ^ 00 - p0jjAq0 and j= (v[ (ck(p)jjAq) a0]+d)(( ^ 00)^@(ck(p)jjAq)). By rules in Table8 , a 2 A and p a; - p0, q a; 00 - q0, (ck(p)jjAq) = (q) and @(ck(p)jjAq) = @(p) ^ @(q). 
By de nition of S1, there exists v, v0 and d0 such that v var(p) = (v[ (p) a0] + d0) var(p), v0 var(p0) = (v0[ (p0) a0] + d0) var(p0) and (p; v)Rvar(q)(p0; v0).... ..."

Cited by 48
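The rule fragments quoted above follow the usual SOS format for parallel composition of timed automata. A hedged reconstruction (the symbol names φ, κ, ∂ and the exact rule shapes are assumptions inferred from the garbled excerpt, not taken verbatim from the paper) might read:

```latex
% Interleaving (a \notin A): the idle component's clocks are checkpointed by ck(.)
\frac{p \xrightarrow{a,\,\varphi} p'}
     {p \parallel_A q \xrightarrow{a,\,\varphi} p' \parallel_A ck(q)}
\qquad
\frac{q \xrightarrow{a,\,\varphi''} q'}
     {p \parallel_A q \xrightarrow{a,\,\varphi''} ck(p) \parallel_A q'}
\qquad (a \notin A)

% Synchronization (a \in A): both components move and both constraints must hold
\frac{p \xrightarrow{a,\,\varphi} p' \quad q \xrightarrow{a,\,\varphi''} q'}
     {p \parallel_A q \xrightarrow{a,\,\varphi \wedge \varphi''} p' \parallel_A q'}
\qquad (a \in A)

% Clock sets and invariants compose componentwise
\kappa(p \parallel_A q) = \kappa(p) \cup \kappa(q), \qquad
\partial(p \parallel_A q) = \partial(p) \wedge \partial(q)
```

This matches the excerpt's case split on a ∈ A versus a ∉ A and its componentwise equations for the clock set and the invariant.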

### Table 3: Computational times associated with parallel implementation of the finite element model.

2007

"... In PAGE 64: ... This ensured that the perturbed displacement data sets were still contained within the atlases. Results Parallel implementation of the Finite Element Model Table3 illustrates the computational time necessary to solve the biphasic model on a finite element mesh containing 19468 nodes and 104596 elements, using 16 processors (2.8GHz,... ..."

### Table 1. Steps of parallel computation

1995

"... In PAGE 7: ... Finally the computation of the slave portion xs corresponding to an eigenvector u of the slave problem can be done in parallel as well (Kssj ? p( u)Mssj)(xs)j = ?(Ksmj ? p( u)Msmj) u; j = 1; ; r : 4 Substructuring and parallel processes To each substructure we attach one process named `Sj apos; and with the master infor- mation we associate one further process called `Ma apos;. These processes work in parallel as shown in Table1 . For the necessary communication each `Sj apos; is connected to `Ma apos; directly or indirectly.... In PAGE 7: ... A detailed description is contained in [14]. Table1 shows how this parallel eigensolver which consists of the processes `Ma apos; and `R1 apos;,.... In PAGE 11: ... 6 Numerical results The parallel concept was tested on a distributed memory PARSYTEC transputer system equipped with T800 INMOS transputers (25MHz, 4 MB RAM) under the distributed operating system `helios apos;. Since each processor has a multiprocessing capability we were able to execute more than one process from Table1 on every processor which turned out to be extremly important for a good load balancing of the system. We do not discuss the mapping of the process topology to the processor network.... In PAGE 13: ... Table 4). For the parallel solution of the matrix eigenvalue problem via condensation and improvement using the Rayleigh functional according to Table1 we proposed in [16] the following proceeding.... ..."

Cited by 10
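The per-substructure back-substitution quoted above, (K_ss^j − p(ū) M_ss^j)(x_s)_j = −(K_sm^j − p(ū) M_sm^j) ū, is embarrassingly parallel: each substructure j solves its own small dense system independently. A minimal sketch in plain Python (function names and the tiny solver are illustrative, not the paper's implementation):

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for the small
    dense per-substructure systems used below."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def slave_portion(K_ss, M_ss, K_sm, M_sm, p, u):
    """Solve (K_ss - p*M_ss) x_s = -(K_sm - p*M_sm) u for one substructure.
    Since no substructure's system depends on another's, all r of these
    solves can run in parallel, one per `S_j' process."""
    n, m = len(K_ss), len(u)
    A = [[K_ss[i][k] - p * M_ss[i][k] for k in range(n)] for i in range(n)]
    b = [-sum((K_sm[i][k] - p * M_sm[i][k]) * u[k] for k in range(m)) for i in range(n)]
    return solve(A, b)
```

In the paper's setting each `S_j' process would run `slave_portion` on its own data while `Ma' holds only the master eigenvector ū.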


### Table 4: Class for nondeterministic finite automata

2005

"... In PAGE 2: ... But we provide methods to test for epsilon1-transitions and to convert an epsilon1-NDFA to a NDFA. See Table4 , Appendix A, for more details. 2.... ..."

Cited by 4
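The ε-NDFA-to-NDFA conversion mentioned in the excerpt is the standard ε-closure construction. A self-contained sketch (the dictionary-based representation and function names are assumptions, not the paper's class interface):

```python
def eps_closure(states, eps):
    """States reachable from `states` via ε-transitions alone.
    eps: dict mapping state -> set of ε-successors."""
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

def remove_epsilon(delta, eps, start, finals, alphabet):
    """Build an equivalent ε-free NFA: the new move on (q, a) is the
    ε-closure of delta applied to the ε-closure of {q}.
    delta: dict mapping (state, symbol) -> set of states."""
    states = {start} | {q for (q, _) in delta} | {t for v in delta.values() for t in v}
    new_delta, new_finals = {}, set()
    for q in states:
        cl = eps_closure({q}, eps)
        if cl & finals:                 # q is accepting if its closure hits a final state
            new_finals.add(q)
        for a in alphabet:
            targets = set()
            for s in cl:
                targets |= delta.get((s, a), set())
            if targets:
                new_delta[(q, a)] = eps_closure(targets, eps)
    return new_delta, start, new_finals

def accepts(delta, start, finals, word):
    """Simulate an ε-free NFA on a word."""
    cur = {start}
    for a in word:
        cur = set().union(*(delta.get((q, a), set()) for q in cur))
    return bool(cur & finals)
```

For example, an ε-NDFA for a*b with an ε-edge from the a-loop state to the b-reading state converts to an NDFA accepting exactly the same language.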

### Table 5. Timed automata for the parallel operator (κ(p) = C, κ(q) = C′)

"... In PAGE 14: ... We say that pjjAq, pjj Aq and pjAq do not have con ict of variables if neither p or q do and (bv(p) \ var(q)) [ (bv(q) \ var(p)) = ;. We give the rules for the timed automaton in Table5 . Operators jj A and jA are the left-merge and the communicating versions of the parallel operator, respectively.... ..."

### Table 2. Finite automata pattern-matching architectures

"... In PAGE 14: ...2. Finite Automata Designs All the possible configurations of history and decoding components for finite automata (FA) designs are listed in Table2 . There is one design (GfS) that is not feasible because its his- tory and decoding styles conflict (i.... ..."

### Table 1: Messages in the finite state automata graph and their meanings

"... In PAGE 3: ... Note that the supplier then remains in its initial state until it receives the initial message from the final assembly plant. Table1 summarizes the messages and their meanings. The final assembly plant maintains a different finite state automata graph for each supplier.... ..."

### Table 2. Sieve parallelism degree for several computation and communication granularities

"... In PAGE 9: ...Table2 presents the number of parallel tasks and inter-tasks messages required to compute the prime numbers up to 100 000, for several //task computation and communication granularities. The grain-sizes values were selected to show representatives values of the sieve execution times.... ..."

Cited by 1
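The granularity trade-off the excerpt describes — grain size controls how many parallel tasks and messages a sieve run generates — can be illustrated with a segmented sieve of Eratosthenes. This is a generic sketch of the technique, not the paper's implementation; the function names and the one-message-per-task accounting are assumptions:

```python
import math

def sieve_tasks(n, grain):
    """Split sieving of [2, n] into independent segment tasks of width
    `grain`. Each task needs one message carrying the base primes, so
    the task/message count is roughly (n - sqrt(n)) / grain."""
    base = math.isqrt(n)
    # Sequential sieve of the base primes up to sqrt(n).
    is_p = [True] * (base + 1)
    is_p[0:2] = [False, False]
    for i in range(2, math.isqrt(base) + 1):
        if is_p[i]:
            is_p[i*i::i] = [False] * len(is_p[i*i::i])
    base_primes = [i for i, b in enumerate(is_p) if b]
    segments = [(lo, min(lo + grain - 1, n)) for lo in range(base + 1, n + 1, grain)]
    return base_primes, segments

def sieve_segment(base_primes, lo, hi):
    """One parallel task: sieve the segment [lo, hi] using the base primes."""
    flags = [True] * (hi - lo + 1)
    for p in base_primes:
        start = max(p * p, ((lo + p - 1) // p) * p)  # first multiple of p in range
        for m in range(start, hi + 1, p):
            flags[m - lo] = False
    return [lo + i for i, f in enumerate(flags) if f]
```

Halving `grain` doubles the task and message counts, which is the computation/communication trade-off Table 2 quantifies.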

### Table 2. Checks and verification

Here, the problem of the space and time complexity of the finite state machines (automata) used for recognizing languages arises. In general, the classical regular language operators (concatenation, alternative, repetition) do not introduce any exponential growth of the state space of a parsing finite state automaton. However, behavior protocols also employ the and-parallel, composition, and adjustment operators, which introduce exponential complexity in the resulting automata and might lead to the state explosion problem. In fact, the composition and adjustment operators behave better than the and-parallel operator in terms of the required state space, as they comprise synchronization of events, thus reducing the interleaving of traces.

2002

Cited by 112
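The contrast drawn above — free interleaving explodes the state space, synchronization tames it — can be demonstrated with a small reachability computation over two labeled transition systems. This is a generic illustration of the principle, not code from the cited paper:

```python
from collections import deque

def parallel_reach(t1, t2, init, sync):
    """Reachable states of two LTSs composed in parallel.
    Actions in `sync` must be taken jointly; all others interleave freely.
    t1, t2: dict mapping state -> list of (action, next_state)."""
    seen, todo = {init}, deque([init])
    while todo:
        s1, s2 = todo.popleft()
        moves = []
        for a, n1 in t1.get(s1, []):
            if a in sync:
                # Synchronized action: both components move together.
                moves += [(n1, n2) for b, n2 in t2.get(s2, []) if b == a]
            else:
                moves.append((n1, s2))  # component 1 moves alone
        for a, n2 in t2.get(s2, []):
            if a not in sync:
                moves.append((s1, n2))  # component 2 moves alone
        for m in moves:
            if m not in seen:
                seen.add(m)
                todo.append(m)
    return seen
```

With n components of k states each, unrestricted and-parallel interleaving can reach up to k^n composite states, while synchronizing the shared events prunes the reachable set, which is exactly why the composition and adjustment operators behave better than the and-parallel operator.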