### Table 3 Upper Frequency Limit

"... In PAGE 37: ... This last limit is the lowest in frequency and is the one that usually prevails. The hierarchy of these limits is shown in Table 3 and discussed below. Table 3 Upper Frequency Limit ... ..."

### Table II. Sample Split Statistics This table reports a set of descriptive statistics for deseasonalised returns Ut+j and test statistics for three sub-samples. The sub-sample selection is based on the frequency of limit hits/moves. Limit hits/moves predominantly occur in the months of June and July 1988. Hence, we distinguish a PRE, a LIMIT, and a POST sample. These sub-samples correspond, respectively, with the first, second and third entry in each cell. The descriptive statistics are the first, second, third and fourth empirical moments, and the first-order serial correlation coefficient. A * indicates a significant rejection, at the 5% level, of the null hypothesis of normality (for skewness and kurtosis) or of no autocorrelation (for the serial correlation coefficient).
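The statistics described in the caption can be reproduced along the following lines. This is an illustrative sketch, not the authors' code; the 5%-level cut-offs use the standard asymptotic variances 6/n and 24/n for skewness and kurtosis under normality, and 1/n for the serial correlation under the null of no autocorrelation.

```python
import numpy as np

def sample_stats(u):
    """First four empirical moments and lag-1 serial correlation of a
    deseasonalised return series u, with 5%-level rejection flags."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    mean = u.mean()
    std = u.std(ddof=1)
    z = (u - mean) / u.std(ddof=0)           # standardised series
    skew = np.mean(z ** 3)
    kurt = np.mean(z ** 4)                   # raw kurtosis; 3 under normality
    rho1 = np.corrcoef(u[:-1], u[1:])[0, 1]  # first-order serial correlation
    # Asymptotic 5%-level tests (|z| > 1.96) under the respective nulls.
    flags = {
        "skew*": abs(skew) > 1.96 * np.sqrt(6.0 / n),
        "kurt*": abs(kurt - 3.0) > 1.96 * np.sqrt(24.0 / n),
        "rho1*": abs(rho1) > 1.96 / np.sqrt(n),
    }
    return mean, std, skew, kurt, rho1, flags
```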

### Table 1. Resource consumption of two temporal-filter implementations of stage S1 on a Virtex II XC2V6000-4 (results taken from the DK synthesizer (Celoxica 2006)). The IIR filters require about 22% more resources. Although they achieve a faster clock rate, this stage does not limit the system clock frequency, so the faster rate is not a significant improvement in the framework of the optical flow system.
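The trade-off behind the caption (an IIR filter needs extra multipliers but only one stored state frame, while an N-tap FIR must buffer N frames) is visible in a one-line recursion. A first-order IIR temporal smoother, with an assumed coefficient `alpha` (not a value from the paper), looks like:

```python
def iir_smooth(frames, alpha=0.25):
    """First-order IIR temporal filter: y[t] = alpha*x[t] + (1-alpha)*y[t-1].
    Needs one multiply-accumulate and a single state frame per pixel,
    whereas an N-tap FIR filter must buffer N full frames."""
    state = None
    for frame in frames:
        state = frame if state is None else alpha * frame + (1 - alpha) * state
        yield state
```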

### Table 2.3: Investigated chaos generator types: frequency limits and application fields

### Table 1: A classification of user interface specification notations with respect to status and event.

"... In PAGE 2: ... Where possible, we have included references to work that specifically relates to the specification of interactive systems or user interfaces. There are a couple of important points to make in reference to Table 1. We can ask about the compositionality provided by any specification approach.... In PAGE 3: ...The last entry in Table 1 refers to the new model of specification that we will present in the next section. This classification of previous approaches points quite clearly to the absence of any one approach that treats both status and event information symmetrically.... ..."

### Table 1 Properties of techniques for dimensionality reduction.

"... In PAGE 11: ...2. General properties In Table 1, the thirteen dimensionality reduction techniques are listed by four general properties: (1) the convexity of the optimization problem, (2) the main free... In PAGE 11: ... We discuss the four general properties below. For property 1, Table 1 shows that most techniques for dimensionality reduction optimize a convex cost function. This is advantageous, because it allows for finding the global optimum of the cost function.... In PAGE 11: ... Because of their nonconvex cost functions, autoencoders, LLC, and manifold charting may suffer from getting stuck in local optima. For property 2, Table 1 shows that most nonlinear techniques for dimensionality reduction have free parameters that need to be optimized. By free parameters, we mean parameters that directly influence the cost function that is optimized.... In PAGE 11: ... The main advantage of the presence of free parameters is that they provide more flexibility to the technique, whereas their main disadvantage is that they need to be tuned to optimize the performance of the dimensionality reduction technique. For properties 3 and 4, Table 1 provides insight into the computational and memory complexities of the computationally most expensive algorithmic components of the techniques. The computational complexity of a dimensionality reduction technique is of importance to its applicability.... In PAGE 12: ...duction technique is determined by data properties such as the number of datapoints n, the original dimensionality D, the target dimensionality d, and by parameters of the techniques, such as the number of nearest neighbors k (for techniques based on neighborhood graphs) and the number of iterations i (for iterative techniques).
In Table 1, p denotes the ratio of nonzero elements in a sparse matrix to the total number of elements, m indicates the number of local models in a mixture of factor analyzers, and w is the number of weights in a neural network. Below, we discuss the computational complexity and the memory complexity of each of the entries in the table.... ..."
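As a concrete instance of the "convex optimization, no free parameters" end of the spectrum discussed above, PCA fits in a few lines. This is an illustrative sketch in plain NumPy; the complexity entries of the paper's table are not reproduced here.

```python
import numpy as np

def pca(X, d):
    """Minimal PCA sketch: a convex dimensionality-reduction technique with
    no free parameters beyond the target dimensionality d.  The cost is
    dominated by the SVD of the centred n x D data matrix."""
    Xc = X - X.mean(axis=0)                        # centre the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T                           # project onto top-d components
```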

### Table 1: Comparison of model reduction conditions

"... In PAGE 4: ... The former procedure is computationally cheaper, while the latter might allow the incorporation of additional requirements on A_r, B_r, C_r, D_r into the reduced-order model. To get a feeling for how the new sufficient conditions perform, some statistics are collected and summarized in Table 1; these are the average values over the 22 numerical examples studied in [5]. The dimension of the model is originally (m, n, p) = (1, 4, 2), which is then reduced to (m, r, p).... In PAGE 5: ... In general, it is not the smallest upper bound. The "actual error" e in Table 1 means the infimum value that satisfies Definition 2 for E := G - G_r, which is the tightest upper bound of the l2-l2 induced gain we can derive so far for the error system, so it might be of interest. The values are summarized in Table 1. It should be understood that although the new sufficient conditions in Proposition 5 and Corollary 6 always improve e over e0, it does not follow that the "actual error" must be smaller. On average it is, according to Table 1. It is also observed that although Proposition 4 gives the smallest bound, the "actual error" is on average the highest.... In PAGE 5: ... The "tightness" in each case, reflected as the ratio of the error bound to the actual error, is recorded in the table.
Table 1 shows that on average, the new sufficient conditions can reduce the error bound by about 40%, and the "actual error" by about 20%. Another important indicator is the "success rate" in Table 1. If the sufficient condition gives a reduced-order model G_r such that e <= 10% of the gain of G - D, it is regarded as successful.... ..."
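For orientation only: the best-known a priori error bound in model reduction is the classical balanced-truncation bound, sketched below in plain NumPy. This is a generic baseline for comparison, not the paper's new sufficient conditions or its induced-gain setting.

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 by Kronecker vectorisation; adequate for
    small state dimensions such as the paper's examples."""
    n = A.shape[0]
    I = np.eye(n)
    X = np.linalg.solve(np.kron(A, I) + np.kron(I, A), -Q.reshape(-1)).reshape(n, n)
    return (X + X.T) / 2.0                     # symmetrise against round-off

def balanced_truncation(A, B, C, r):
    """Classical balanced truncation with the a priori bound
    ||G - G_r||_inf <= 2 * sum of the discarded Hankel singular values."""
    P = lyap(A, B @ B.T)                       # controllability Gramian
    Q = lyap(A.T, C.T @ C)                     # observability Gramian
    Lp = np.linalg.cholesky(P)
    U, s2, _ = np.linalg.svd(Lp.T @ Q @ Lp)    # s2 = squared Hankel singular values
    hsv = np.sqrt(s2)                          # Hankel singular values, descending
    T = Lp @ U / np.sqrt(hsv)                  # balancing transformation
    Ti = np.linalg.inv(T)
    Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
    return Ab[:r, :r], Bb[:r], Cb[:, :r], 2.0 * hsv[r:].sum()
```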

### Table 3. Adjusted ORs of disability at baseline (2001), according to quintiles of WC, by sex

"... In PAGE 6: ... Table 3 shows the OR of disability at baseline by quintiles of WC, adjusted for age, educational level, tobacco use, alcohol consumption, and physical activity. In men, the highest quintile of WC registered a significantly greater frequency of limitation in mobility and agility, and of RDA, than did the lowest quintile.... In PAGE 6: ...05 for all types of disability except RDA). Furthermore, compared with women in the lowest WC quintile, those in the highest quintile were significantly more likely to be disabled in IADL and in bathing or dressing (Table 3). However, after further adjustment for BMI (Model 2), the association of WC with impairment in IADL and bathing or dressing among women lost statistical significance.... ..."
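The adjusted ORs in the table come from multivariable logistic models; the reported OR and its 95% confidence interval are the exponentiated coefficient and its Wald interval. A minimal sketch of that transformation (the coefficient and standard error below are hypothetical inputs, not values from the study):

```python
import math

def or_with_ci(beta, se, z=1.96):
    """Odds ratio and 95% Wald CI from a logistic-regression coefficient.
    The association is significant at the 5% level when the CI excludes 1."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))
```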

### Table 2: Results obtained by adding model reduction techniques. 7.2 Future Work Although the model reduction techniques described here reduce a model's state space, complex models can still be constructed that cannot be handled in the available memory. It may be possible to use compositional techniques [1, 16] to handle such models: instead of model checking a complete model, the model is decomposed into manageable parts which are validated separately. Another possibility to reduce the memory requirements during model checking is to use a binary decision diagram (BDD) [70] to represent the visited states instead of a state space cache or a bit vector. BDDs are used in a number of current model checking systems [34, 53, 71]. Other model reduction techniques which have been investigated include exploiting symmetry [22], abstractions [23] and unfoldings [72].

"... In PAGE 102: ... For the partial order semantic rule and transition folding only communication transitions were considered eligible.

|                 | DFS  | Sleep | POSR | Folding | Sleep+POSR+Folding |
|-----------------|------|-------|------|---------|--------------------|
| States Explored | 5809 | 2686  | 4450 | 4338    | 1112               |
| Time (seconds)  | 6.3  | 2.8   | 4.9  | 4.6     | 1.4                |

In Table 2 the reduction in the number of unique states generated for the elevator, communication bridge and the X-Windows models is given.... In PAGE 102: ... While generating the state space of the X-Windows model over-writes did occur and therefore the number of unique states visited cannot be determined. The number of unique states in Table 2 for the X-Windows model... ..."
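All of the reduction figures above act on the same underlying loop: an explicit-state search that records visited states. A minimal sketch of that loop (the `successors` function is a stand-in for a model's transition relation; a BDD or bitstate hash would replace the `visited` set to cut memory):

```python
def explore(initial, successors):
    """Explicit-state reachability: depth-first search with a visited-state
    store.  State-space reduction techniques shrink the set of states this
    loop must visit; BDDs change how `visited` is represented."""
    visited, stack = set(), [initial]
    while stack:
        s = stack.pop()
        if s in visited:
            continue
        visited.add(s)
        stack.extend(successors(s))
    return visited
```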

### Table 1. Deterministic techniques (columns: Technique, % Reduction)

"... In PAGE 5: ... In all the plots, the unrolling degree achieved by our two techniques is also plotted by a line graph on the right-hand-side y-axis. The average percentage reduction (and standard deviation) in power consumption achieved by the deterministic (stochastic) techniques are summarized in columns 3 and 4 of Table 1 (2) for the JPEG and MPEG-1 decoding algorithms, respectively. For the JPEG decoder, our techniques on average give reductions in power over a functional pipelining technique that exploits DPM and not DVS.... ..."
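The DPM-versus-DVS comparison rests on a simple energy argument: dynamic power scales roughly with V²f, and the supply voltage can be lowered together with the clock frequency. A toy model of why DVS beats pure power-down (the linear voltage-frequency scaling is our assumption, not the paper's model):

```python
def dvs_energy_ratio(utilisation):
    """Toy DVS model: if a task needs only `utilisation` of the cycle
    budget, frequency (and, assumed linearly, voltage) can be scaled down
    by that factor, so energy per task ~ C*V^2 drops to utilisation^2 of
    the full-speed energy.  DPM (run flat out, then idle) saves only the
    idle interval's power, not this quadratic factor."""
    if not 0.0 < utilisation <= 1.0:
        raise ValueError("utilisation must be in (0, 1]")
    return utilisation ** 2
```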