### Table 1 Performance of three algorithms for several gradient-domain compositing problems. For each dataset, we show the number of megapixels, the number of variables per color channel in the reduced linear system as a percentage of the total number of pixels, and the error between the solutions computed using the reduced and full linear systems (error is measured using the 8-bit red channel, with both an average per-pixel RMS error and the maximum error across all pixels). We show the time and memory performance of three algorithms: quadtree-based (QT), hierarchical basis preconditioning (HB), and locally adapted hierarchical basis preconditioning (LHB). Each panorama was stitched from five source images.

"... In PAGE 1: ... (Fifth row) The result computed in this reduced space, which can be computed much more efficiently, is vi- sually identical to the full gradient-domain solution. The numerical error is shown in Table1 . Images courtesy of Tobias Oberlies.... In PAGE 4: ...46 9 118 112 Table 2 Performance of quadtree-based gradient-domain compositing for several very large panoramas. 4 Experimental results We compare the performance of our technique against our imple- mentation of two other algorithms for several datasets of different sizes ( Table1 ), and show several results that were too large to com- pute in available memory using other algorithms (Table 2). Most of our results are panoramas whose seams were computed using hi- erarchical graph cuts [Agarwala et al.... In PAGE 4: ... Most of our results are panoramas whose seams were computed using hi- erarchical graph cuts [Agarwala et al. 2005], though the first result in Table1 demonstrates image region copy-and paste with manu- ally chosen seams. In the interest of space, most of our results can only be seen on the project web site, although the Rainier dataset is shown in Figure 1.... In PAGE 4: ... Even when we scale the computed offsets by ten to generate the visualization in the third row of Figure 1, no differences are visible. The error values in Table1 explain why. For color values that range from 0 to 255, the per-pixel RMS error is in the hundredths.... ..."

### Table 2: Running times of BLAS3 and QUADTREE

1997

"... In PAGE 7: ...2 Multiprocessing behavior. Table2 contains the multiprocessing results for the BLAS3 and QUADTREE algorithms on the SGI ONYX and SGI POWER CHAL- LENGE. These times are also graphed in Figures 10 and 11.... In PAGE 9: ...3 A strange case. All of the running times for the BLAS3 algorithm on matrices of or- der 2048 are reproducibly out of line in Table2 on the SGI POWER CHALLENGE. Under multiprocessing those for order 4096 are also surprisingly slow.... ..."

Cited by 80
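The BLAS3-versus-QUADTREE comparison above concerns dense matrix multiplication carried out block-recursively over quadrants, the access pattern a quadtree memory layout is designed to serve. As a rough sketch of what a quadrant-recursive multiply looks like (illustrative only; the benchmarked codes, base-case size, and storage layout are not reproduced here):

```python
import numpy as np

def qmul(A, B):
    """Multiply square matrices with power-of-two order by recursing on
    quadrants: each quadrant of C is a sum of two quadrant products."""
    n = A.shape[0]
    if n <= 2:                       # small dense base case
        return A @ B
    h = n // 2
    C = np.empty_like(A)
    for i in (0, h):                 # row block of C
        for j in (0, h):             # column block of C
            C[i:i + h, j:j + h] = (qmul(A[i:i + h, :h], B[:h, j:j + h]) +
                                   qmul(A[i:i + h, h:], B[h:, j:j + h]))
    return C

A = np.arange(16.0).reshape(4, 4)
B = np.arange(16.0).reshape(4, 4)
print(np.allclose(qmul(A, B), A @ B))  # → True
```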

### Table 2. New Parallel Quadtree Construction Methods.

1991

"... In PAGE 3: ... Overview of Parameters. problem pointer based quadtree linear quadtree hypercube PRAM hypercube PRAM convert image to quad tree (s = p = M) t = O(log2 M) t = O(log M) t = O(log M) t = O(log M) convert boundary code to quad tree (s = p = b) t = O(h log b) t = O(h log b) t = O(log b (h+ log2logb)) t = O(h log b) Table2 . New Parallel Quadtree Construction Methods.... In PAGE 5: ... We describe algorithms for converting images represented either by a binary array or a boundary code into pointer based as well as linear quadtrees. Table2 summarizes the obtained results. Furthermore, all previous papers studied only the parallel processing of linear quadtrees with path encoding.... ..."

Cited by 2
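The snippet above gives parallel (hypercube/PRAM) bounds for converting a binary image into a quadtree. The underlying representation is easy to state sequentially: split the image into quadrants, emitting a leaf wherever a block is uniform. A minimal sequential sketch, not the paper's parallel algorithm (the function name and nested-tuple encoding are mine):

```python
def image_to_quadtree(img, x=0, y=0, size=None):
    """Pointer-based quadtree of a square 0/1 image (side a power of two):
    a uniform block becomes a leaf 0 or 1, otherwise a tuple of the four
    child nodes in (NW, NE, SW, SE) order."""
    if size is None:
        size = len(img)
    first = img[y][x]
    if all(img[y + dy][x + dx] == first
           for dy in range(size) for dx in range(size)):
        return first                          # uniform block -> leaf
    half = size // 2
    return (image_to_quadtree(img, x, y, half),
            image_to_quadtree(img, x + half, y, half),
            image_to_quadtree(img, x, y + half, half),
            image_to_quadtree(img, x + half, y + half, half))

img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 1]]
print(image_to_quadtree(img))  # → (1, 0, 0, (0, 0, 0, 1))
```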

### Table 1: Costs of patterned and unpatterned matrices as quadtrees.

1995

"... In PAGE 12: ... Similarly, no representation of a quadtree-permutation ma- trix has both northeast and southwest quadrants 0 while both its northwest and southeast are simultaneously either 0 or I. Table1 summarizes space and access-time asymptotes extracted from the an- alytic results of Wise and Franco [23]. They show how familiarly patterned ma- trices are uniformly represented in expectedly shrinking space, albeit with pro- portional overhead beyond case-specific data structures.... In PAGE 12: ... The expected path here reflects the cost to access a random [i; j] element of a matrix, from the root of the entire tree. Although Table1 shows that this measure also decreases with patterning, good quadtree algorithms will not probe these structures from their roots; instead the recurrences of these algorithms apply locally to deeper subtrees. The next section shows how addition and multiplication, for instance, decompose to independent, parallel processes on subtrees.... ..."
