### Table 2. Forward and backward simulation schemes in CMAQ (columns: Forward Model, Adjoint Model)

"... In PAGE 11: ...Table2 for an operational scheme of the forward and backward simulations). 11 255 260 265 270 275 Figure 4 shows comparisons between adjoint and BF horizontal advection sensitivities.... In PAGE 14: ... As a result: (a) the same numerical scheme can be used to horizontally diffuse concentrations and adjoints in forward and backward simulations; and (b) continuous and discrete horizontal diffusion adjoints are identical. We use the same subroutine for horizontal diffusion of adjoints with an internal decoupling from (division by) densities (see Table2 ). Adjoint and BF sensitivity fields show good agreement in Figure 7 where only horizontal diffusion in one direction is included in the simulations and sensitivities of final ozone at the 21st x-cross section with respect to initial ozone at the 20th column are shown.... In PAGE 15: ... This leads to significant computational savings when multiple backward simulations are performed. In general, the order in which processes are called during the backward simulation is reverse of that in the forward simulation ( Table2 ). As only chemistry requires knowledge of concentrations in the current implementation, checkpoints are written and read before each chemistry call.... ..."

### Table 2: The results for talker characterization with the explicit duration HMM

2000

"... In PAGE 10: ... The application however was to separate talkers in a small population (close-set). Table2 are the results of the ATC algorithm run on separating talkers in small populations. The experiment is to be given a set of C1 training sentences from a target talker, be able to pick the target talker when you have one sentence from the target talker and C8 A0 BD imposters.... In PAGE 10: ... Characterization vec- tors can be considered spanning a talker feature space. From the results of Table2 , this initial intuition is well founded. Talkers can be characterized by the characterization vector.... ..."

### Table 4: Forward-Backward results (F-measure) on the development sets

2002

"... In PAGE 3: ... Finally, the probability C8 AG D0CTD2 AG DB CT CY CQ CY AH CYBX CY AH is assumed to be exponentially distributed. Table4 shows the results obtained by stacking the FB algorithm on top of fnTBL. Comparing the results with the ones in Table 2, one can ob- serve that the global search does improve the perfor- mance by 3 F-measure points when compared with fnTBL+Snow and 5 points when compared with the fnTBL system.... In PAGE 3: ... Comparing the results with the ones in Table 2, one can ob- serve that the global search does improve the perfor- mance by 3 F-measure points when compared with fnTBL+Snow and 5 points when compared with the fnTBL system. Also presented in Table4 is the per- formance of the algorithm on perfect boundaries; more than 6 F-measure points can be gained by improving the boundary detection alone. Table 5 presents the detailed performance of the FB algo- rithm on all four data sets, broken by entity type.... ..."

Cited by 5

### Table 4. Intraprocedural forward and backward slicing algorithms (partial).

"... In PAGE 14: ... Methods GetBranchExpressionsBackwardUp( ) and GetBranchExpressionsBackwardDown( ) track and mark those expressions that may potentially affect the execution of a given state- ment being sliced. Table4 shows a portion of the intraprocedural forward and backward slicing algorithms. The forward slicing algorithm, in brief, invokes ComputeDUChain( ) transi- tively to facilitate the forward slicing process.... ..."

### Table 1. Jacobian matrices for the forward and backward distortion models.

2004

Cited by 45

### Table 4: Source distribution estimation with forward-backward iteration (estimated probabilities with/without hard decision, by SNR)

"... In PAGE 7: ... Similar to the de nition of , we can de ne t(i; j) = ProbfOt+1; Ot+2; ON; Lt = jjct = ig; which is computed using the backward recursion N(i; j) = (L; j)b(i; j; O); t(i; j) = Pk t+1(k; j + lk)Probfct+1 = kjct = i; Lt = jgb(k; j + lk; O); 8t = N ? 1; ; 1: (7) To update HMM parameters, we de ne t(i; j) = Probfct = i; Lt = jjOg = t(i; j) t(i; j) ProbfOg ; (8) t(i; k; j) = Probfct = i; ct+1 = k; Lt = jjOg; = Pl t(i; l)Probfct+1 = ijct = k; Lt = jg t+1(l + lk; k)b(k; l + lk; O) ProbfOg ; (9) where ProbfOg = Pi T(i; L). We can update the distribution of the rst codeword to f 0g and the probabilities for codewords to fp0 ikg, using: 8 lt; : 0(i) = 1(i); p0 ik = Pt=1; T?1;j=1; ;L;Probfct+1=mjct=i;Lt=jg gt;0;8m t(i;j;k) Pt=1; T?1;j=1; ;L;Probfct+1=mjct=i;Lt=jg gt;0;8m t(i;j) : (10) Table4 provides an example of the results of source distribution estimation. The re- sults in the table are based on one packet containing 100 symbols from the VLC x1=0, x2= 10, and x3=11, transmitted over a AWGN channel with di erent noise powers.... In PAGE 8: ...Of course, even if hard decisions are used for the estimation, the original soft observations need to be retained in memory so that the SISO VLC decoding algorithm de- scribed in the previous section can be utilized. Simulations with other codeword alphabets produced result consistent with Table4 and con rm that the approach in equation (10) can produce an accurate estimate of the source distribution, even in the presence of a very low SNR. Table 4: Source distribution estimation with forward-backward iteration Estimated probabilities with/without hard decision (SNR)... ..."

### Table 1: Performance improvement in the recognition of the E-set alphabet after selective training. Shown are closed-set and open-set speaker results using both the Forward-Backward and selective training methods.

"... In PAGE 14: ...left-to-right model with 4 mixtures in each state. The closed-set and open-set error rates before and after employing the selective training method are listed in Table1 . An error rate reduction of 26.... ..."