Results 1 - 10 of 81,228

Table 2: Comparison of End-to-End Execution Time

in Hierarchical Control of Multiple Resources in Distributed Real-time and Embedded Systems
by Nishanth Shankaran, Xenofon D. Koutsoukos, Chenyang Lu, Douglas C. Schmidt, Yuan Xue 2006
"... In PAGE 8: ... Average end-to-end execution time consists of (1) net- work transmission latency and (2) processing time at the receiver node. Table2 compares the end-to-end execu- tion time when the system was operated with and without HiDRA. Since the system crashed when the number of ob-... In PAGE 9: ...8 0 200 400 600 800 1000 1200 1400 1600 1800 2000 Error in Pixels Images UAV-3 With HiDRA Without HiDRA (c) UAV-3 Figure 7: Target-tracking Precision end execution time as 1. Table2 shows that end-to-end execution time is much lower when the system is operated with HiDRA than when it operates without HiDRA. HiDRA responds to fluctuation in resource requirements by constant monitoring of resource utilization.... ..."
Cited by 2
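
The excerpt above decomposes average end-to-end execution time into network transmission latency plus processing time at the receiver node, and reports results normalized against the run without HiDRA. A minimal sketch of that bookkeeping, with invented numbers (the paper's measurements are not reproduced here):

```python
# Illustrative only: end-to-end execution time = network transmission latency
# plus processing time at the receiver node, normalized against a baseline run.
# All numbers are invented, not the paper's measurements.
def end_to_end_time(network_latency_ms, processing_ms):
    return network_latency_ms + processing_ms

runs = {
    "with HiDRA":    end_to_end_time(network_latency_ms=12.0, processing_ms=30.0),
    "without HiDRA": end_to_end_time(network_latency_ms=55.0, processing_ms=45.0),
}

baseline = runs["without HiDRA"]   # the paper represents this case as 1
for name, t in runs.items():
    print(f"{name}: {t:.1f} ms (normalized {t / baseline:.2f})")
```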

Table 1. Delay formulae for the worst case end-to-end delay in an ATM network.

in Integrated End-to-end Delay Analysis in ATM Networks
by Joseph Kee-Yin Ng, Shibin Song, Chengzhi Li, Wei Zhao
"... In PAGE 15: ... Delay formulae for the worst case end-to-end delay in an ATM network. Table1 shows the delay formulae of the worst case end-to-end delay bounds derived by the decomposed method, the integrated method by guaranteed service curves, and our new integrated method by system equivalency. The derivation of delay formulae with the decomposed method is given in Appendix B.... ..."

Table 1. Performance of various answer selection modules in TextMap, an end-to-end QA system.

in Multiple-Engine Question Answering in TextMap
by Abdessamad Echihabi, Ulf Hermjakob, Eduard Hovy, Daniel Marcu, Eric Melz, Deepak Ravichandran 2003
"... In PAGE 9: ...Table1 summarizes the results: it shows the percentage of correct, exact answers returned by each answer selection module with and without ME-based re-ranking, as well as the percentage of correct, exact answers returned by an end-to-end QA system that uses all three answer selection modules together. Table 1 also shows the performance of these systems in terms of percentage of correct answers ranked in the top 5 answers and the corresponding MRR scores.... In PAGE 9: ...returned by each answer selection module with and without ME-based re-ranking, as well as the percentage of correct, exact answers returned by an end-to-end QA system that uses all three answer selection modules together. Table1 also shows the performance of these systems in terms of percentage of correct answers ranked in the top 5 answers and the corresponding MRR scores. The results in Table 1 show that appropriate weighting of the features used by each answer selection module as well as the ability to capitalize on global features, such as the counts associated with each answer, are extremely important means for increasing the overall performance of a QA system.... In PAGE 9: ... Table 1 also shows the performance of these systems in terms of percentage of correct answers ranked in the top 5 answers and the corresponding MRR scores. The results in Table1 show that appropriate weighting of the features used by each answer selection module as well as the ability to capitalize on global features, such as the counts associated with each answer, are extremely important means for increasing the overall performance of a QA system. ME re-ranking led to significant increases in performance for each answer selection module individually.... In PAGE 10: ... For example, Maximum Entropy naturally integrated additional features into the knowledge- based answer selection module; a significant part of the 9.2% increase in correct answers reported in Table1 can be attributed to the addition of redundancy features, a source of knowledge that was unexploited by the base system. References Bikel, D.... ..."
Cited by 8
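
The excerpt measures each answer selection module by the fraction of correct exact answers, the fraction of questions with a correct answer in the top 5, and MRR. A small sketch of how those three standard ranking metrics are computed from per-question ranked answer lists (the data below is invented and this is not TextMap code):

```python
# Standard QA ranking metrics: top-1 accuracy, top-5 accuracy, and mean
# reciprocal rank (MRR). `ranked_answers` holds each question's answers in
# ranked order; `gold` holds the correct answer strings. Data is made up.
def qa_metrics(ranked_answers, gold):
    top1 = top5 = mrr = 0.0
    for answers, correct in zip(ranked_answers, gold):
        rank = next((i + 1 for i, a in enumerate(answers) if a == correct), None)
        if rank is not None:
            top1 += rank == 1
            top5 += rank <= 5
            mrr += 1.0 / rank
    n = len(gold)
    return top1 / n, top5 / n, mrr / n

ranked = [["1969", "1972"], ["Paris", "Lyon", "Nice"], ["blue", "red"]]
gold = ["1969", "Lyon", "green"]
print(qa_metrics(ranked, gold))   # -> (0.333..., 0.666..., 0.5)
```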

Table 1. Delay formulae for the worst case end-to-end delay in an ATM network.

in A New Method for Integrated End-to-End Delay Analysis in ATM Networks
by Joseph Kee-Yin Ng, Shibin Song, Chengzhi Li, Wei Zhao
"... In PAGE 6: ... A. Comparison of Delay Formulae Table1 shows the delay formulae for the worst case end- to-end delay bounds derived by the decomposed method, the integrated method by guaranteed service curves and our new integrated method by system equivalency. The derivation of delay formulae with the decomposed method is given in Appendix B or in [24].... ..."

Table 5: For each deployment we plot the system MTTF, the number of cut-sets, and the end-to-end latency.

in A Formal Approach to Fault Tree Synthesis for the Analysis of Distributed Fault Tolerant Systems
by Mark L. McKelvin, Jr., Gabriel Eirea, Claudio Pinello, Sri Kanajan, Alberto L. Sangiovanni-Vincentelli 2005
"... In PAGE 8: ... After generating the fault trees for the three deployments, and analyzing them in the Item Toolkit, we obtain the results as in Table 5. Table5 shows that the additional redundancy improves the MTTF only marginally, whereas the use of a more ro- bust sensor fusion algorithm yields much better results in this example. The end-to-end latency is the result of a tim-... ..."
Cited by 2
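
The snippet ties system MTTF to the minimal cut-sets extracted from the synthesized fault trees. As a hedged illustration of that connection (not the paper's Item Toolkit analysis), the sketch below estimates MTTF by Monte Carlo simulation: components fail at exponentially distributed times, and the system fails as soon as every component of some minimal cut-set has failed. The failure rates and cut-sets are invented:

```python
import random

# Monte Carlo MTTF estimate from minimal cut-sets (illustrative only).
# A cut-set is a set of components whose joint failure brings the system down.
def estimate_mttf(failure_rates, cut_sets, trials=20000):
    total = 0.0
    for _ in range(trials):
        # Draw an exponential failure time for every component.
        t_fail = {c: random.expovariate(lam) for c, lam in failure_rates.items()}
        # The system dies when the last component of the earliest cut-set dies.
        total += min(max(t_fail[c] for c in cs) for cs in cut_sets)
    return total / trials

rates = {"sensorA": 1e-4, "sensorB": 1e-4, "ecu": 5e-5, "bus": 2e-5}  # per hour
cut_sets = [{"sensorA", "sensorB"}, {"ecu"}, {"bus"}]                 # made up
print(f"estimated MTTF ~ {estimate_mttf(rates, cut_sets):,.0f} hours")
```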

Table 3 shows the average end-to-end BITE cost measured at the client side by running one standard OO7 T1 traversal against Thor, SNAP and Diff respectively. Hybrid has the latency of SNAP for recent snapshots, and latency of Diff otherwise. The end-to-end BITE latency (page fetch cost) increases over time as pages are archived. Table 3 lists the numbers corresponding to a particular point in system execution history with the intention of providing general indication of BITE performance on different representations compared to the performance of accessing the current database. The perfor...

in Thresher: An efficient storage manager for copy-on-write snapshots
by Liuba Shrira 2006
"... In PAGE 12: ...ne extent is 5.42ms. The cost of constructing the re- quested page version by applying the diff-pages back to the checkpoint page is negligible. Table3 : End-to-end BITE performance... ..."
Cited by 1
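
The excerpt mentions that a requested page version is rebuilt by applying diff-pages back to a checkpoint page. A minimal sketch of that reconstruction step, under an assumed (offset, bytes) diff format; Thresher's actual on-disk representation is not described in the snippet, so everything below is illustrative:

```python
# Illustrative reconstruction of a page version from a checkpoint page plus a
# chain of diffs. The (offset, data) diff format here is an assumption, not
# Thresher's actual representation.
def apply_diffs(checkpoint_page: bytes, diffs) -> bytes:
    page = bytearray(checkpoint_page)
    for offset, data in diffs:            # each diff overwrites a byte range
        page[offset:offset + len(data)] = data
    return bytes(page)

checkpoint = bytes(16)                                  # a blank 16-byte "page"
diff_chain = [(0, b"\x01\x02"), (8, b"\xff\xff\xff")]   # made-up modifications
snapshot_page = apply_diffs(checkpoint, diff_chain)
print(snapshot_page.hex())
```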

Table 2: End-to-end Roundtrip Latency

in Analysis of techniques to improve protocol processing latency
by David Mosberger, Larry L. Peterson, Patrick G. Bridges 1996
"... In PAGE 8: ...Table 2: End-to-end Roundtrip Latency Table2 shows the end-to-end results. The rows are sorted according to decreasing latency, with each row giving the performance of one version of the TCP/IP and RPC stacks.... ..."
Cited by 52
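
The table being summarized reports end-to-end roundtrip latency for different versions of the TCP/IP and RPC stacks. As a loose illustration of what such a measurement looks like (a loopback TCP echo, nothing like the paper's instrumented protocol stacks), a small sketch:

```python
import socket, threading, time

# Minimal end-to-end roundtrip latency measurement over a loopback TCP echo
# (illustrative only; the paper measures real protocol-stack implementations).
def echo_server(sock):
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

srv = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

with socket.create_connection(srv.getsockname()) as cli:
    samples = []
    for _ in range(1000):
        t0 = time.perf_counter()
        cli.sendall(b"x" * 16)          # small request, like an RPC null call
        cli.recv(64)                    # wait for the echoed reply
        samples.append(time.perf_counter() - t0)
    print(f"median roundtrip: {sorted(samples)[len(samples)//2] * 1e6:.1f} us")
```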