### Table 1: Summary of algorithms for Weighted Non-Negative Matrix Factorisation under Euclidean Distance (ED) and KL Divergence (KLD)

### Table 3: A GapL algorithm for the determinant over non-negative integers

"... In PAGE 22: ... Essentially, HA gives us a uniform polynomial-size, polynomial-width branching program, corresponding precisely to GapL. Table 3 lists the code for an NL machine computing, through its gap function, the determinant of a matrix A with non-negative integral entries. ..."
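As a concrete (if exponential-time) illustration of the gap idea, the Leibniz expansion writes det(A) as the total weight of the even permutations minus the total weight of the odd permutations; when all entries are non-negative, both sums are non-negative counts and the determinant is exactly their gap. The sketch below is illustrative only and is not the NL machine of Table 3:

```python
from itertools import permutations

def det_as_gap(a):
    """Determinant of a square matrix with non-negative integer entries,
    expressed as a 'gap': total weight of even permutations minus total
    weight of odd permutations (the GapL flavour, brute-forced)."""
    n = len(a)
    even = odd = 0
    for perm in permutations(range(n)):
        # Parity of the permutation via counting inversions.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        weight = 1
        for i in range(n):
            weight *= a[i][perm[i]]
        if inversions % 2 == 0:
            even += weight
        else:
            odd += weight
    return even - odd  # the gap

A = [[2, 1], [1, 3]]
print(det_as_gap(A))  # 2*3 - 1*1 = 5
```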


### Table 1: The algorithms tested in this article. These include a number of different algorithms for finding matrix factorisations under non-negativity constraints and a neural network algorithm for performing competitive learning through dendritic inhibition.

"... In PAGE 4: ... 3. Results. In each of the experiments reported below, all the algorithms listed in Table 1 were applied to learning the components in a set of p training images. The average number of components that were correctly identified over the course of 25 trials was recorded. ..."

Cited by 3

### Table 1: Data reconstruction errors under the L2 norm. cICA = contextual ICA, NMF = non-negative matrix factorisation with β = 0, NMFe = NMF with exponential prior having β = 0.1

"... In PAGE 6: ... Table 1: Data reconstruction errors under the L2 norm. cICA = contextual ICA, NMF = non-negative matrix factorisation with β = 0, NMFe = NMF with exponential prior having β = 0.1. Table 1 shows the data reconstruction results across ..."
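For orientation, plain NMF without the exponential prior can be sketched with the standard Lee–Seung multiplicative updates for the L2 (Frobenius) objective. The matrix sizes, rank, and iteration count below are illustrative assumptions, and the prior of NMFe is not implemented:

```python
import numpy as np

def nmf_l2(V, k, n_iter=500, seed=0):
    """Plain NMF: Lee-Seung multiplicative updates that decrease the
    L2 (Frobenius) reconstruction error ||V - W H||.  V, W, and H all
    stay entrywise non-negative throughout."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic non-negative data with exact rank 4 (illustrative).
rng = np.random.default_rng(1)
V = rng.random((20, 4)) @ rng.random((4, 30))
W, H = nmf_l2(V, k=4)
err = np.linalg.norm(V - W @ H)
print(err / np.linalg.norm(V))  # relative L2 reconstruction error
```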


### Table 1. A summary of certain non-negative integer parameters

"... In PAGE 24: ... The next lemma involves four of the seven parameters 1–5, s, and t. In Table 1, which appears below, these seven parameters are summarized. ... In PAGE 26: ... Table 1. A summary of certain non-negative integer parameters. In Table 1, for easy reference, we have summarized information about the seven parameters 1–5, s, and t, each of which must be a non-negative integer. Four of these parameters, t, 1, 4, and 5, change their values according to the case we are in. ..."


### Table 1: Minimax thresholds for non-negative garrote.

"... In PAGE 9: ... The minimax thresholds for the soft shrinkage were derived in Donoho and Johnstone (1994); the minimax thresholds for the hard shrinkage were computed by Bruce and Gao (1996b); and the minimax thresholds for the firm shrinkage can be found in Gao and Bruce (1997). The minimax thresholds for the non-negative garrote were computed from (14) and tabulated in Table 1. The values in Table 1 were computed using a grid search with increments of 0.00001. At each grid point, the supremum was computed using the S-Plus non-linear minimization function nlmin(). The obtained minimax bounds are also listed in Table 1 and plotted, along with the minimax bounds for the soft, the hard, and the firm shrinkage, in Figure 2. The minimax bounds for the soft, the hard, and the firm shrinkage are all listed in Table 1 of Gao and Bruce (1997). From Figure 2, we can see that the garrote has tighter minimax bounds than both the hard and the soft shrinkage rules, and bounds comparable to those of the firm shrinkage rule. ..."
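For reference, the non-negative garrote shrinks a coefficient x to x(1 − λ²/x²)₊: it behaves like soft thresholding for |x| just above the threshold λ but approaches the identity (like hard thresholding) for large |x|. A minimal sketch comparing the three rules (the specific inputs are illustrative):

```python
def garrote(x, lam):
    """Non-negative garrote shrinkage: x * (1 - lam^2 / x^2)_+ ."""
    if abs(x) <= lam:
        return 0.0
    return x - lam * lam / x  # shrinkage vanishes as |x| grows

def soft(x, lam):
    """Soft thresholding: sign(x) * (|x| - lam)_+ ."""
    if abs(x) <= lam:
        return 0.0
    return x - lam if x > 0 else x + lam

def hard(x, lam):
    """Hard thresholding: keep x unchanged iff |x| > lam."""
    return 0.0 if abs(x) <= lam else x

lam = 2.0
for x in (1.0, 2.5, 10.0):
    print(x, garrote(x, lam), soft(x, lam), hard(x, lam))
# At x = 2.5: garrote gives 0.9 (between soft's 0.5 and hard's 2.5);
# at x = 10:  garrote gives 9.6, already close to hard's 10.
```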

### Table 2: Complexity results for single-machine problems with non-negative time-lags

1998

"... In PAGE 15: ... Furthermore, single-machine problems with classical precedence constraints reduce polynomially to the corresponding problems with constant positive time-lags where the value l is fixed. All known complexity results for single-machine problems with non-negative finish-start time-lags are summarized in Table 2. Besides the problems mentioned at the end of the last section, there are many other open problems in this area, which can be found under the address http://www.... ..."
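To make the constraint concrete: a finish-start time-lag l between jobs i and j means j may start no earlier than C_i + l, so l = 0 recovers a classical precedence constraint. A minimal sketch (job names, processing times, and the fixed sequence are illustrative assumptions) computing earliest completion times for a given single-machine sequence:

```python
def completion_times(sequence, p, lags):
    """Earliest-start schedule for one machine processing jobs in the
    given order, subject to finish-start time-lags:
    lags[(i, j)] = l  means job j may start no earlier than C_i + l.
    Returns the completion time C_j of each job."""
    completion = {}
    machine_free = 0
    for j in sequence:
        start = machine_free  # the machine handles one job at a time
        for (i, k), l in lags.items():
            if k == j and i in completion:
                start = max(start, completion[i] + l)  # respect the lag
        completion[j] = start + p[j]
        machine_free = completion[j]
    return completion

p = {"a": 3, "b": 2, "c": 4}
lags = {("a", "c"): 5}  # c must wait 5 time units after a finishes
C = completion_times(["a", "b", "c"], p, lags)
print(C)  # a done at 3, b at 5; c starts at max(5, 3 + 5) = 8, done at 12
```

With l = 0 the same code enforces an ordinary precedence constraint, matching the reduction mentioned in the excerpt.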

Cited by 12