### TABLE I. UPPER BOUND OF THE ERROR TERM AS A FUNCTION OF SCALE FOR THREE 512 × 512 IMAGES. AT MODERATE AND FINE RESOLUTIONS, THE CORRECT SEGMENTATION IS UNCERTAIN AND THE ERROR TERM IS VERY SMALL.

1994

Cited by 156

### Table 4: Capacities of structures used in delay calculations

"... In PAGE 8: ... From the access latencies, we compute the access penal- ties for each of the three clock scaling rates (f16, f8, and fSIA). In Table4 , we show both the number of bits per entry and the number of ports for each of the baseline structures. These param- eters are used to compute the delay as a function of capacity and technology as well as the capacity as a function of access penalty and technology.... In PAGE 8: ... In the issue window entry, we show two bit ca- pacities, one for the instruction queue and one for the tag-matching CAM. In the third column of Table4 , we show the baseline structure sizes that we use for our pipeline scaling experiments. In Table 5, we show the actual access penalty of the structures for the fixed baseline capacities.... ..."

### Table 1: Scale Factors

"... In PAGE 14: ... Fraction Modified degree-of-modification parameters: the percentage of design modified (DM), the percentage of code modified (CM), and the percentage of modification to the original integration effort required for integrating the reused software (IM). The Software Understanding increment (SU) is obtained from Table1 . SU is expressed quantitatively as a percentage.... In PAGE 15: ... Self-descriptive code; documentation up-to- date, well-organized, with design rationale. SU Increment to ESLOC 50 40 30 20 10 Table1 : Rating Scale for Software Understanding Increment SU The other nonlinear reuse increment deals with the degree of Assessment and Assimilation (AA) needed to determine whether a fully-reused software module is appropriate to the application, and to integrate its description into the overall product description.... In PAGE 31: ... External Inquiry (Queries) Count each unique input-output combination, where an input causes and generates an immediate output, as an external inquiry type. Table1 0: User Function Types Each instance of these function types is then classified by complexity level. The complexity levels determine a set of weights, which are applied to their corresponding function counts to determine the Unadjusted Function Points quantity.... In PAGE 33: ...anguage, etc.) in order to assess the relative conciseness of implementation per function point. COCOMO II does this for both the Early Design and Post-Architecture models by using tables such as those found in [Jones 1991] to translate Unadjusted Function Points into equivalent SLOC. 
| Language | SLOC / UFP |
| --- | --- |
| Ada | 71 |
| AI Shell | 49 |
| APL | 32 |
| Assembly | 320 |
| Assembly (Macro) | 213 |
| ANSI/Quick/Turbo Basic | 64 |
| Basic - Compiled | 91 |
| Basic - Interpreted | 128 |
| C | 128 |
| C++ | 29 |
| ANSI Cobol 85 | 91 |
| Fortran 77 | 105 |
| Forth | 64 |
| Jovial | 105 |
| Lisp | 64 |
| Modula 2 | 80 |
| Pascal | 91 |
| Prolog | 64 |
| Report Generator | 80 |
| Spreadsheet | 6 |

Table 11: Converting Function Points to Lines of Code

5.4 Cost Drivers. The Early Design model uses KSLOC for size.... In PAGE 34: ... design model counterparts. It involves the use and combination of numerical equivalents of the rating levels. Specifically, a Very Low Post-Architecture cost driver rating corresponds to a numerical rating of 1, Low is 2, Nominal is 3, High is 4, Very High is 5, and Extra High is 6. For the combined Early Design cost drivers, the numerical values of the contributing Post-Architecture cost drivers (Table 12) are summed, and the resulting totals are allocated to an expanded Early Design model rating scale going from Extra Low to Extra High.

| Early Design Cost Driver | Counterpart Combined Post-Architecture Cost Drivers |
| --- | --- |
| RCPX | RELY, DATA, CPLX, DOCU |
| RUSE | RUSE |
| PDIF | TIME, STOR, PVOL |
| PERS | ACAP, PCAP, PCON |
| PREX | AEXP, PEXP, LTEX |
| FCIL | TOOL, SITE |
| SCED | SCED |

Table 12: Early Design and Post-Architecture Effort Multipliers

The Early Design model rating scales always have a Nominal total equal to the sum of the Nominal ratings of its contributing Post-Architecture elements.... In PAGE 34: ... scale from Very Low (= 1) to Very High (= 5). Adding up their numerical ratings produces values ranging from 3 to 15. These are laid out on a scale, and the Early Design PERS rating levels assigned to them, as shown in Table 21.
| | Extra Low | Very Low | Low | Nominal | High | Very High | Extra High |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sum of ACAP, PCAP, PCON ratings | 3, 4 | 5, 6 | 7, 8 | 9 | 10, 11 | 12, 13 | 14, 15 |
| Combined ACAP and PCAP percentile | 20% | 39% | 45% | 55% | 65% | 75% | 85% |
| Annual personnel turnover | 45% | 30% | 20% | 12% | 9% | 5% | 4% |

Table 13: PERS Rating Levels

The Nominal PERS rating of 9 corresponds to the sum (3 + 3 + 3) of the Nominal ratings for ACAP, PCAP, and PCON, and its corresponding effort multiplier is 1.0.... In PAGE 35: ... As with PERS, the Post-Architecture RELY, DATA, CPLX, and DOCU rating scales in Table 21 provide detailed backup for interpreting the Early Design RCPX rating levels.

| | Extra Low | Very Low | Low | Nominal | High | Very High | Extra High |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sum of RELY, DATA, CPLX, DOCU ratings | 5, 6 | 7, 8 | 9-11 | 12 | 13-15 | 16-18 | 19-21 |
| Emphasis on reliability, documentation | Very little | Little | Some | Basic | Strong | Very strong | Extreme |
| Product complexity | Very simple | Simple | Some | Moderate | Complex | Very complex | Extremely complex |
| Database size | Small | Small | Small | Moderate | Large | Very large | Very large |

Table 14: RCPX Rating Levels.... In PAGE 35: ... A summary of its rating levels is given below and in Table 21.

| | Very Low | Low | Nominal | High | Very High | Extra High |
| --- | --- | --- | --- | --- | --- | --- |
| RUSE | | none | across project | across program | across product line | across multiple product lines |

Table 15: RUSE Rating Level Summary.... In PAGE 36: ... Version 1.4 - Copyright University of Southern California 32

| | Low | Nominal | High | Very High | Extra High |
| --- | --- | --- | --- | --- | --- |
| Sum of TIME, STOR, PVOL ratings | 8 | 9 | 10-12 | 13-15 | 16, 17 |
| Time and storage constraint | ≤ 50% | ≤ 50% | 65% | 80% | 90% |
| Platform volatility | Very stable | Stable | Somewhat volatile | Volatile | Highly volatile |

Table 16: PDIF Rating Levels.... In PAGE 36: ...

| | Extra Low | Very Low | Low | Nominal | High | Very High | Extra High |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sum of AEXP, PEXP, LTEX ratings | 3, 4 | 5, 6 | 7, 8 | 9 | 10, 11 | 12, 13 | 14, 15 |
| Applications, platform, language and tool experience | ≤ 3 mo. | 5 months | 9 months | 1 year | 2 years | 4 years | 6 years |

Table 17: PREX Rating Levels.... In PAGE 37: ...
Very strong support of collocated or simple M/S devel. Table 18: FCIL Rating Levels.

| | Very Low | Low | Nominal | High | Very High | Extra High |
| --- | --- | --- | --- | --- | --- | --- |
| SCED | 75% of nominal | 85% | 100% | 130% | 160% | |

Table 1... ..."
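Two of the mechanisms this COCOMO II excerpt walks through are mechanical enough to sketch in code: translating Unadjusted Function Points into SLOC via the language ratios of Table 11, and collapsing the Post-Architecture ACAP, PCAP, and PCON ratings (Very Low = 1 ... Very High = 5) into an Early Design PERS level per Table 13. The numeric values below come from the snippet; the function names and the table excerpt are my own, not from the COCOMO II definition:

```python
# SLOC per Unadjusted Function Point, excerpted from Table 11.
SLOC_PER_UFP = {"Ada": 71, "C": 128, "C++": 29, "Fortran 77": 105, "Pascal": 91}

def ufp_to_sloc(ufp: int, language: str) -> int:
    """Translate Unadjusted Function Points into equivalent SLOC."""
    return ufp * SLOC_PER_UFP[language]

# PERS rating levels from Table 13: the ACAP, PCAP, and PCON ratings are
# summed and mapped onto the expanded Extra Low ... Extra High scale.
PERS_LEVELS = [
    ((3, 4), "Extra Low"), ((5, 6), "Very Low"), ((7, 8), "Low"),
    ((9, 9), "Nominal"), ((10, 11), "High"), ((12, 13), "Very High"),
    ((14, 15), "Extra High"),
]

def pers_rating(acap: int, pcap: int, pcon: int) -> str:
    total = acap + pcap + pcon
    for (lo, hi), level in PERS_LEVELS:
        if lo <= total <= hi:
            return level
    raise ValueError("ratings must each be between 1 and 5")

print(ufp_to_sloc(100, "C"))   # -> 12800
print(pers_rating(3, 3, 3))    # all Nominal -> Nominal
```

Note how the mapping preserves the invariant the snippet states: the Nominal total (3 + 3 + 3 = 9) always lands on the Nominal Early Design level.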

### Table 3: Average root MSE for various thresholding techniques, moderate sample sizes

1996

"... In PAGE 15: ... Ultimately, the balance between minimizing MSE and visual quality must be determined by the application. To compare thresholding methods numerically, a large-scale simulation study was conducted, with results contained in Table3 and Table 4. The simulation was programmed in FORTRAN using the pseudo-random number generation routine RNNOA from the IMSL subroutine library.... In PAGE 15: ...ependent estimator with two values of : 0.05, and 0.01. The averages of the square roots of the mean square errors for all methods are tabulated in Table3 for moderate sample sizes (n = 128, n = 256, and n = 512) and in Table 4 for large sample sizes (n = 1024 and n = 2048). A graphical representation of part of Table 3 is Figure 5, which plots average root MSE for each of the methods considered for a single sample size (n = 256) as the signal-to-noise ratio increases.... In PAGE 15: ...ependent estimator with two values of : 0.05, and 0.01. The averages of the square roots of the mean square errors for all methods are tabulated in Table 3 for moderate sample sizes (n = 128, n = 256, and n = 512) and in Table 4 for large sample sizes (n = 1024 and n = 2048). A graphical representation of part of Table3 is Figure 5, which plots average root MSE for each of the methods considered for a single sample size (n = 256) as the signal-to-noise ratio increases. Similarly, Figure 6 is a plot from Table 4 for n = 1024.... ..."

Cited by 18
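The comparison described in the snippet, average root MSE of thresholding estimators over repeated noisy simulations, can be illustrated with soft thresholding at the universal threshold. This is a generic NumPy sketch, not the FORTRAN/IMSL code the study used, and it thresholds coefficients directly rather than running a full wavelet transform:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def avg_root_mse(signal, sigma=1.0, reps=50, rng=None):
    """Average root MSE of a soft-threshold estimator using the
    universal threshold sigma * sqrt(2 log n), over `reps` noisy draws."""
    rng = rng or np.random.default_rng(0)
    n = len(signal)
    t = sigma * np.sqrt(2 * np.log(n))
    rmses = []
    for _ in range(reps):
        noisy = signal + rng.normal(0.0, sigma, n)
        est = soft_threshold(noisy, t)
        rmses.append(np.sqrt(np.mean((est - signal) ** 2)))
    return float(np.mean(rmses))

# Moderate sample size, as in Table 3 (n = 256), on a sparse spike signal.
n = 256
signal = np.zeros(n)
signal[::32] = 5.0  # a few large coefficients, rest zero
print(avg_root_mse(signal, sigma=1.0))
```

Averaging root MSE over many replications, as the study does, is what smooths out the run-to-run variability of any single noisy draw.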

### TABLE III. Performance and cost parameters scaled for 55.9 TByte capacity (columns: transtec 3000, transtec 6600, Dell, EMC, IBM, IBM)

### TABLE VII. CAPACITY SCALING EFFECT ON THE NUMBER OF BICS LOCATIONS (columns: IDmax = 10 mA, IDmax = 30 mA; rows: chip)

2003

### Table 37: Survey of FPGA-Implemented Processor Capacity

"... In PAGE 57: ... In the most extreme case of spatial limitations, we might end up building a processor-like design on top of the FPGA. Table37 summarizes the capacity density provided by several processors which have been built on top of FPGAs. From Table 37, we see that such processors, when optimized for the FPGA, have a peak capacity of about 2 ALU bit operations/ 2 s, or about one fourth the capacity of a custom processor.... In PAGE 57: ... Table 37 summarizes the capacity density provided by several processors which have been built on top of FPGAs. From Table37 , we see that such processors, when optimized for the FPGA, have a peak capacity of about 2 ALU bit operations/ 2 s, or about one fourth the capacity of a custom processor. The architecture for R16 and jr16 are moderately straight RISC processor architectures, and are likely to yield about the same fraction of this capacity as most other RISC processors.... ..."

### Table 1: Capacity Matrix of 11-node COST 239 EON

1995

"... In PAGE 2: ... 2 The Problem The problem this paper seeks to address is that of producing a route and wavelength allocation plan for the 11-node central network of the proposed partitioned COST 239 EON (see Figure 1). The capacity requirements for the 11-node EON, given in Table1 , have been generated from the real tra c data (scaled to long-term levels, and completed using the PFD model). As only 2.... ..."

Cited by 2