### Table 1. Control architecture using parallel processing technique.

2006

### Table 1: Execution time for 100 iterations of neural net application (in seconds)

"... In PAGE 12: ... This version does not use AVS or Schooner. Table1 gives the performance results for the sequential version and the parallel version executing on the Paragon using 1, 2, 4, and 8 processors, respectively. Note that the 1 processor case is not a true parallel computation, but does provide a baseline for the parallel code.... In PAGE 12: ... The third reports the same overhead as a percentage of total execution time rather than absolute values. Table1 shows that the Schooner overhead varies depending on the configuration and the size of the neural net. For this application, the overhead is relatively independent of the number of processors since Schooner is only used to connect the AVS control module with the single master process, no matter how many computational processes are used.... ..."

### Table 1: Execution time for 100 iterations of neural net application (in seconds)

1995

"... In PAGE 4: ... For compar- ison, approximately 200,000 iterations are needed to reach a reasonable final result in a 32 by 32 neural net. Table1 gives a comparison between runningthe compu- tation on the workstation where AVS is running and on the Intel Paragon using different numbers of processors. The first line of the table shows the different neural net sizes used in the computation, where 16 16 means the neural net processing elements are arranged in a 16 by 16 grid, for a total of 256 points.... In PAGE 5: ... Of course, the frequency of monitoring can easily be changed by the user using the control widgets provided by the control module. From Table1 , we can see that the sequential version of the neural net application runs slightly faster on a single node of the Paragon than on the Sparc 10/41. When more processors are deployed and a parallel version is executed, the Paragon far out-performs the Sparc.... ..."

Cited by 1

### Table 2. Control parallel setting

2005

"... In PAGE 8: ... Thus in order to show the robustness of PGSA, there is no need to tune all these parameters to a specific function. The parameters, used in the test, are shown in Table2 . Function F8 is much more difficult to get the global minimal value, so the larger population size and mutation rate were used.... ..."

Cited by 1

### Table 1. Variable mapping for model predictive control (MPC) controllers (columns: Process, Demand)

2003

"... In PAGE 3: ... p and m represent the controller prediction and move hori- zons, respectively, while k represents time. r represents the setpoint trajectory, u is the control signal/manipulated variable, and y is the estimated output; the relationship of these variables to demand network information is summa- rized in Table1 and further discussed in Section 3. The three terms in the MPC cost function penalize predicted setpoint tracking error, excess movement of the manipu- lated variable, and deviation of the manipulated variable from a target value, respectively.... In PAGE 4: ... Single-Product, Two-Echelon, Two-Node Problem Analysis Inthissection,asingle-product,two-nodedemandnetwork representing a factory and a retailer is used to establish the linkages between the inventory management problem and the process control one. The assignment of demand net- work variables to process control variables is summarized in Table1 . Figure 2 illustrates the material flows from the factory to the retailer and on to the customer.... ..."

Cited by 2

### Table 9: Axioms for parallel composition

1996

"... In PAGE 28: ... This last situation would be incorrect since the time for process q would then not have progressed. Axioms for parallel composition are given in Table9 . Operator ck is just required in order to de ne associated timed automata.... In PAGE 30: ... Theorem 6.10 (Soundness) For all p and q obtained by extending Lv with jjA, jj A and jA, if p = q is deduced by means of equational reasoning using axioms in Table 4 and axioms in Table9 , then p $ q. Proof.... In PAGE 30: ...he case of axioms in Table 4 was proven in Theorem 5.2. For axiom PC it is routine to prove that R def= f((pjjAq; v); (pjj Aq + qjj Ap + pjAq; v)g [ Id [ f((p0jjAq0; v); (q0jjAp0; v))j p0 and q0 are any term g is a timed bisimulation. Let p = q any other axiom in Table9 . It is easy to prove that R def= f(p; q)g [ Id is a symbolic bisimulation except for LM3 and LM8 for which it is a symbolic bisim- ulation up to $f, and for CM0 for which R def= f(pjAq; qjAp)g [ f(p0jjAq0; q0jjAp0)j p0 and q0 are any term g could be proven to be a symbolic bisimulation.... In PAGE 30: ... Proof. Consider axioms in Table9 from left to right as rewrite rules modulo axioms in Table 4 and CM0. It is simple to prove that the normal form is a term q 2 L.... In PAGE 31: ... The components of the system can be described as follows. TRAIN = appr; fjxjg ( (x lt; 5) (x gt; 2)7!in; (x lt; 5) out; (x lt; 5) exit; TRAIN ) GATE = lower; before lt; 1 (down; raise; between(1; 2)(up; GATE)) CONTROLLER = appr; urgent1(lower; exit; before lt; 1 (raise; CONTROLLER)) SYSTEM = CONTROLLERjjfappr;exit;lower;raiseg(TRAINjj;GATE) By using axioms in Table9 parallel operations can be eliminated. Assuming only one clock for each component, the expression obtained at this point will contain 3 clocks and 19 states.... In PAGE 33: ... Thus, each component of the system can be modeled as follows. 
TRAIN = appr; between[3; 4](in; before2(out; before1(exit; TRAIN))) GATE = lower; before1(down; GATE0) GATE apos; = raise; ( before2(lower; GATE0) + between(1; 2](up; GATE)) CONTROLLER = appr; urgent1(lower; CONTROLLER0) CONTROLLER apos; = exit; before1(appr; CONTROLLER0 + raise; CONTROLLER) SYSTEM = CONTROLLERjjfappr;exit;lower;raiseg(TRAINjj;GATE) By using axioms in Table9 parallel operations can be eliminated. Assuming only one clock for each component, the expression obtained at this point will contain 3 clocks and 26 states.... ..."
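The timed bisimulation quoted in the soundness proof for axiom PC corresponds to the standard expansion of parallel composition into left merge and communication merge. In ACP-style notation (the left-merge and communication-merge symbols here are the conventional ones; the exact notation of the paper's Table 9 may differ), the axiom reads:

```latex
\mathrm{PC}: \quad
p \parallel_A q \;=\;
p \mathbin{\lfloor\!\lfloor_A} q \;+\;
q \mathbin{\lfloor\!\lfloor_A} p \;+\;
p \mid_A q
```

That is, a parallel composition either lets the left component act first, lets the right component act first, or synchronizes both on an action in A.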

Cited by 48

### Table 2: The speedup Sn for the Petri net shown in Figure 7.

"... In PAGE 17: ...equentially. If k = 10, then all of the tokens in i may be processed in parallel. The number of available processors is n. Table2 shows the speedup of the TIGRA... ..."

### Table 23. gchef data and no-net-control results.

"... In PAGE 76: ...tandard deviation of 0.1. With no net control applied, the same network topology was also trained. The outcome for the net control case was recorded in Table 24, and the no net control was recorded in Table23 . The net control percent error was 5.... ..."

### Table 1. Controls included with the .NET Compact Framework

"... In PAGE 20: ...NET Framework, hosting third-party controls, bitmaps and menus. Table1 lists the controls included with the .NET Compact Framework.... ..."

### Table 2: A Look at Parallel Processing

1994

"... In PAGE 4: ... For a problem of size 1000, we expect a high degree of parallelism. Thus, it is not surprising that we get such high efficiency (see Table2 ). The actual percentage of parallelism, of course, depends on the algorithm and on the speed of the uniprocessor on the parallel part relative to the speed of the uniprocessor on the non-parallel part.... ..."

Cited by 321