### Table 3: Firm Wage Inequality and Firm Performance: Within and Between Firm Effects

"... In PAGE 15: ... In order to shed light on these issues we apply panel data methods. The first and third column of Table3 introduce firm fixed effects.17 These estimates refer to the impact of changes in inequality over time within a firm on standardised wages and thus refer to short- term effects.... In PAGE 15: ... In Table 4 we check the robustness of our results. Here the turning point of the hump-shape is very similar to the one found in the OLS- regression, whereas the coefficients in Table3 indicate a positive inequality wage relation over the most part (up to two standard deviations above the mean) of the inequality scale. Especially for blue-collar workers, the fixed effects estimates reveal a limited impact of temporary changes in within-firm wage dispersion on firm outcomes.... In PAGE 16: ...technological differences and other effects of particular industries, we also include in addition the log average real wage of the industry in which the respective firm is operating. Table 3 Columns 2 and 4 of Table3 present the results of the between-firm (group-means) regressions. The estimates are weighted by the number of periods a firm is observed in the sample.... In PAGE 16: ... Here however, the turning point is much later: at a value of firm wage inequality sigma of 0.28 in Table3 . This means, that for the most part of the empirical distribution of the inequality measure sigma, a positive dispersion-wage schedule can be observed.... ..."

### Table 4: Parameters of least-squares fit to Maxflips scaling (value; 68.3% confidence interval)

1995

Cited by 4

### TABLE 3 ORDINARY LEAST SQUARES (OLS) REGRESSIONS: ESTABLISHING THE PREDICTIVE VALIDITY OF THE DESTINATION PERSONALITY SCALE (N = 250)

### Table 2: The city-block multidimensional scaling representation of the stimulus domain.

1990

Cited by 1

### Table 6: Sum of Squared Differences Between the Estimates when Dividing the HIV Data in Two Groups of 4 Replicates. The posterior mean reduces the variability by 83% relative to the raw log ratios.

2003

Cited by 2

### Table 1 Ordinary Least Squares

2001

"... In PAGE 11: ... IV. Statistical Results Table1 presents basic statistical relationships between per capita consumption, income, and the two measures of wealth. The first three columns present regression results for the panel of countries (228 observations on 14 countries), while the next three columns report the results for the panel of states (3498 observations on 50 states and the District of Columbia).... ..."

Cited by 2

### Table 5: Least Squares projection

"... In PAGE 16: ...014% within minutes.18 17 Table5 reports the maximum values from 10 runs of the collocation algorithm. 18We have also found this result to hold in other problems.... ..."

### Table 1 Iteration scaling with grid size. Nit = C·Ne^b

"... In PAGE 6: ... Similar data is also shown by Burns [2] using diagonal preconditioning for a three dimensional problem with spatial domain decomposition. Table1 summarizes the iteration scaling for three solvers - generalized minimal residual (GMRES), stabilized biconjugate gradient (BiCGStab), and conjugate gradient squared (CGS) - using diagonal, Neumann, and least squares preconditioning 2. Assuming that the number of iterations scales as Nit = CNb e, Table 1 gives values for C and b for each solver/preconditioner combination.... In PAGE 6: ... Table 1 summarizes the iteration scaling for three solvers - generalized minimal residual (GMRES), stabilized biconjugate gradient (BiCGStab), and conjugate gradient squared (CGS) - using diagonal, Neumann, and least squares preconditioning 2. Assuming that the number of iterations scales as Nit = CNb e, Table1 gives values for C and b for each solver/preconditioner combination. All of the data shown in Table 1 were obtained using the AZTEC Krylov matrix solver library developed at Sandia National Laboratories in Albuquerque, New Mexico.... In PAGE 6: ... Assuming that the number of iterations scales as Nit = CNb e, Table 1 gives values for C and b for each solver/preconditioner combination. All of the data shown in Table1 were obtained using the AZTEC Krylov matrix solver library developed at Sandia National Laboratories in Albuquerque, New Mexico. The data in Table 1 shows that, of the three solvers considered , the BiCGStab solver requires the fewest number of iterations for a given preconditioner.... In PAGE 6: ... All of the data shown in Table 1 were obtained using the AZTEC Krylov matrix solver library developed at Sandia National Laboratories in Albuquerque, New Mexico. The data in Table1 shows that, of the three solvers considered , the BiCGStab solver requires the fewest number of iterations for a given preconditioner. BiCGStab also provides the slowest increase in the number of iterations, i.... 
In PAGE 6: ... A similar result was obtained by Burns [2]. The data in Table1 also shows that preconditioning has a bene cial e ect on the BiCGStab algorithm in terms of the iteration count. The additional cost of the Neumann and least squares preconditioners, however, tend to negate this bene t and 2Using a Krylov subspace size of 30 for GMRES and a polynomial order of 2 for Neumann and least... ..."
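The scaling law Nit = C·Ne^b is linear in log-log space, so C and b can be recovered with an ordinary least-squares fit on the logged data. A sketch using synthetic iteration counts (the values below are made up for illustration, not AZTEC measurements):

```python
import numpy as np

# Synthetic (grid size, iteration count) pairs following Nit = C * Ne**b
# with C = 2 and b = 0.5 chosen arbitrarily for the demonstration.
Ne = np.array([1e3, 4e3, 1.6e4, 6.4e4])
Nit = 2.0 * Ne**0.5

# Taking logs linearizes the model: log(Nit) = log(C) + b*log(Ne),
# which is a standard least-squares problem in [1, log(Ne)].
A = np.column_stack([np.ones_like(Ne), np.log(Ne)])
coeffs, *_ = np.linalg.lstsq(A, np.log(Nit), rcond=None)
C, b = np.exp(coeffs[0]), coeffs[1]
print(C, b)  # recovers C ≈ 2.0, b ≈ 0.5
```

A smaller fitted exponent b corresponds to the "slowest increase in the number of iterations" attributed to BiCGStab in the snippet.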

### Table 2: Matrix Y obtained from X via least-squares standardization.

"... In PAGE 6: ... The least-squares ap- proximation also leads to the average and standard deviation as the most appropriate values. The standardized matrix Y =#28y ik #29 obtained with these shift and scale parameters is presented in Table2 which is not symmetric anymore. However, other approximation criteria may lead to di#0Berently de- #0Cned a k and b k .... In PAGE 7: ... The other preferred distances are Euclidean distance squared, d 2 ij := X k2J jx ik , x jk j 2 ; and the city-block metric, d c := X k2J jx ik , x jk j: Curiously, because of binary entries, these latter distances coincide, in this particular case, with each other and with the Hamming distance. Table 4 is matrix A = YY T of scalar products of the rows of matrix Y in Table2 . It is a similarity matrix.... ..."
