### Table 1: Collaborative filtering data

"... In PAGE 1: ... Our goal is to use all the available information in a single probabilistic model. 2 Content-based versus Collaborative Filtering. Table 1 shows the data typically available to a recommender system. Here, the rows represent users, each with an associated vector of covariates (i.... ..."

### Table 1. User ratings Matrix for Collaborative Filtering

"... In PAGE 6: ... Table 1. User ratings Matrix for Collaborative Filtering. The data in Table 1 correspond to a sample of user ratings in a database.... ..."

### Table 8. Information filtering method of the systems

"... In PAGE 24: ... Information filtering methods. There are three main information filtering methods: demographic, content-based and collaborative. Table 8 shows the information filtering techniques used by the various systems analyzed. 4.... ..."

### Table 9. User Profile Matching Technique of the Systems based on Collaborative Filtering

2001

"... In PAGE 45: ... After this, the common techniques used to compute the similarity between users are explained (the nearest neighbor, clustering and classifiers). Table 9 shows the user profile matching techniques used by the different analyzed systems. 10.... ..."
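The excerpt above mentions nearest-neighbor matching over user profiles. A minimal sketch of that idea, using cosine similarity between rating vectors; the users, items, and ratings below are purely illustrative and not taken from any of the surveyed systems:

```python
# Sketch of nearest-neighbor user matching for collaborative filtering.
# All data and names here are hypothetical placeholders.
import math

def cosine_similarity(u, v):
    """Cosine similarity over items both users have rated (rating > 0)."""
    common = [i for i in range(len(u)) if u[i] > 0 and v[i] > 0]
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def nearest_neighbors(target, ratings, k=2):
    """Return the k users whose rating vectors are most similar to `target`."""
    scored = [(cosine_similarity(ratings[target], ratings[other]), other)
              for other in ratings if other != target]
    scored.sort(reverse=True)
    return [user for _, user in scored[:k]]

# Tiny hypothetical user-item ratings matrix (rows: users, cols: items; 0 = unrated).
ratings = {
    "alice": [5, 3, 0, 1],
    "bob":   [4, 0, 0, 1],
    "carol": [1, 1, 0, 5],
}
print(nearest_neighbors("alice", ratings, k=1))
```

Clustering and classifier-based matching, also named in the excerpt, would replace the pairwise similarity step with a learned grouping or a supervised model over the same rating vectors.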

Cited by 3

### Tables method subspace sparsity

2004

Cited by 4

### Table 2: The running time (in seconds) for different algorithm combinations using different sparsity functions.

2007

"... In PAGE 6: ... We initialized the bases randomly and ran each algorithm combination (by alternately optimizing the coefficients and the bases) until convergence. Table 2 shows the running times for different algorithm combinations. First, we observe that the Lagrange dual method significantly outperformed gradient descent with iterative projections for both L1 and epsilon-L1 sparsity; a typical convergence pattern is shown in Figure 1 (left).... ..."

Cited by 8

### Table 2: Comparison of SNR and S value of predicted datasets before (raw data) and after (filtered data) removing false positive protein-protein interaction pairs

2007

"... In PAGE 6: ... Therefore, we define SNR as the ratio of the true positive fraction of a predicted dataset to the true positive fraction of a randomly selected dataset with the same sample size. The true positive fraction of a dataset is the ratio of protein pairs matched with the experimental dataset to the total number of pairs in the same dataset. SNR was calculated for all four predicted datasets for each of yeast and worm in the following two circumstances; the effect of the rules on the reduction of false positive predictions was measured by the strength (S). As seen in Table 2, SNR values for all filtered data were larger than those for the corresponding raw data, indicating that the proposed algorithm can reduce false positive prediction of PPI pairs. Depending on the PPI-predicting method employed, the S value varies from 2.... ..."
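From the definitions in the excerpt, SNR reduces to a ratio of two true-positive fractions. A minimal sketch with hypothetical counts (the actual pair counts and datasets are not given in the excerpt):

```python
# SNR as defined in the excerpt: the true positive fraction (TPF) of a
# predicted PPI dataset divided by the TPF of a randomly selected dataset
# of the same size. All counts below are hypothetical placeholders.

def true_positive_fraction(matched_pairs, total_pairs):
    """Fraction of pairs in a dataset that match the experimental dataset."""
    return matched_pairs / total_pairs

def snr(pred_matched, pred_total, rand_matched, rand_total):
    return (true_positive_fraction(pred_matched, pred_total)
            / true_positive_fraction(rand_matched, rand_total))

# Hypothetical example: 250 of 1000 predicted pairs match the experimental
# set, versus 125 of 1000 randomly selected pairs.
print(snr(250, 1000, 125, 1000))  # -> 2.0
```

An SNR above 1 means the predictor beats random selection; the excerpt's observation that filtering raises SNR corresponds to the filtered dataset having a higher true positive fraction at the same sample size.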

### Table 1: The features of recommender systems based on the EVA framework, information retrieval and filtering technology and collaborative filtering

in Because Men Don’t Bite Dogs: A Logical Framework for Identifying and Explaining Unexpected News ∗