
The 28:1 Grant/Sackman legend is misleading, or: How large is interpersonal variation really? (1999)

by Lutz Prechelt
Results 1 - 10 of 14

A social network approach to free/open source software simulation

by Patrick Wagstrom, Jim Herbsleb, Kathleen Carley - In Proceedings First International Conference on Open Source Systems, 2005
Abstract - Cited by 26 (0 self)
Free and Open Source Software (F/OSS) development is a complex process that is just beginning to be understood. The development process is frequently characterized as disparate volunteer developers collaborating to make a piece of software. The developers of F/OSS, like those of all software, spend a significant portion of their time in social communication to foster collaboration. We have analyzed several methods of communication (a social networking site, project mailing lists, and developer weblogs) to gain an understanding of the social network structure behind F/OSS projects. This social network data was used to create a model of F/OSS development that allows for multiple projects, users, and developers with varying goals and socialization methods. Using this model we have been able to replicate some of the known phenomena observed in F/OSS and to provide a first step toward a robust model of F/OSS.

Citation Context

...e repeated multiple times depending on the skill of the agent. Distribution of agent skill is provided by the analysis of the different levels from Advogato.Org and research on interpersonal variation [12, 5]. At the end of each simulation period, the developers and users of a project vote on which set of changes they wish to keep. Users will vote yes for a set of changes if it improves their overall fitn...

Plat Forms: A Web Development Platform Comparison by an Exploratory Experiment Searching for Emergent Platform Properties

by Lutz Prechelt, IEEE CS
Abstract - Cited by 7 (1 self)
Abstract—Background: For developing web-based applications, there exist several competing and widely used technological platforms (consisting of a programming language, framework(s), components, and tools), each with an accompanying development culture and style. Research question: Do web development projects exhibit emergent process or product properties that are characteristic and consistent within a platform but show substantial differences across platforms, or do team-to-team individual differences outweigh such differences, if any? Such a property could be positive (i.e., a platform advantage), negative, or neutral, and it might be unobvious which is which. Method: In a non-randomized, controlled experiment, framed as a public contest called “Plat Forms”, top-class teams of three professional programmers competed to implement the same requirements for a web-based application within 30 hours. Three different platforms (Java EE, PHP, or Perl) were used by three teams each. We compare the resulting nine products and process records along many dimensions, both external (usability, functionality, reliability, security, etc.) and internal (size, structure, modifiability, etc.). Results: The results cover a wide spectrum: First, there are results that many people would have called “obvious” or “well known”, say, that Perl solutions tend to be more compact than Java solutions. Second, there are results that contradict conventional wisdom, say, that our PHP solutions appear in some (but not all) respects to be at least as secure as the others. Finally, one result makes a statement we have not seen discussed previously: Along several dimensions, the amount of within-platform variation between the teams tends to be smaller for PHP than for the other platforms. Conclusion: The results suggest that substantial characteristic platform differences do indeed exist in some dimensions, but possibly not in others.

Citation Context

...d in [9, Section 2.3]. Our biggest constancy concern is within-platform variation between the teams, which must be very small or will make it impossible to reliably detect existing platform differences [17]. Our only hope is to have a very homogeneous set of teams in the contest. We attempted to solve this problem by going for top-class teams (rather than average teams) only: Their performance is most l...

Evaluating Methods and Technologies in Software Engineering with Respect

by Gunnar R. Bergersen, Dag I. K. Sjøberg, 2012
Abstract - Cited by 2 (2 self)
Abstract—Background: It is trivial that the usefulness of a technology depends on the skill of the user. Several studies have reported an interaction between skill levels and different technologies, but the effect of skill is, for the most part, ignored in empirical, human-centric studies in software engineering. Aim: This paper investigates the usefulness of a technology as a function of skill. Method: An experiment that used students as subjects found recursive implementations to be easier to debug correctly than iterative implementations. We replicated the experiment by hiring 65 professional developers from nine companies in eight countries. In addition to the debugging tasks, performance on 17 other programming tasks was collected and analyzed using a measurement model that expressed the effect of treatment as a function of skill. Results: The hypotheses of the original study were confirmed only for the low-skilled subjects in our replication. Conversely, the high-skilled subjects correctly debugged the iterative implementations faster than the recursive ones, while the difference between correct and incorrect solutions for both treatments was negligible. We also found that the effect of skill (odds ratio = 9.4) was much larger than the effect of the treatment (odds ratio = 1.5). Conclusions: Claiming that a technology is better than another is problematic without taking skill levels into account. Better ways to assess skills as an integral part of technology evaluation are required.
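The odds-ratio comparison in the abstract above (skill effect 9.4 vs. treatment effect 1.5) can be sketched with a small example. Note the 2x2 counts below are invented purely for illustration; they are not the study's data, and `odds_ratio` is a hypothetical helper name.

```python
# Odds ratio from a 2x2 outcome table; counts are hypothetical, chosen
# only to show how a large skill effect dwarfs a small treatment effect.

def odds_ratio(a, b, c, d):
    """Odds ratio for the 2x2 table:
                 success  failure
      group X       a        b
      group Y       c        d
    """
    return (a / b) / (c / d)

# Hypothetical: high- vs. low-skilled subjects debugging correctly.
skill_or = odds_ratio(45, 15, 12, 38)      # (45/15)/(12/38) = 9.5
# Hypothetical: recursive vs. iterative treatment.
treatment_or = odds_ratio(30, 25, 28, 32)  # (30/25)/(28/32) ≈ 1.37

print(f"skill effect (OR):     {skill_or:.1f}")
print(f"treatment effect (OR): {treatment_or:.1f}")
```

An odds ratio near 1 means the grouping barely changes the odds of success, which is why a skill OR several times larger than the treatment OR supports the abstract's conclusion.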

Citation Context

...ults. Meta-analysis has also confirmed that individual variability in programming is large, even though it may appear less than the 1:28 differences reported in the early days of software engineering [36]. Nevertheless, large variability in skill levels implies that one should be meticulous when defining the sample population as well as the target population in empirical studies in software engineerin...

An Empirical Study of Working Speed Differences Between Software Engineers for Various Kinds of Task

by Lutz Prechelt, 2000
Abstract - Cited by 2 (0 self)
How long do different software engineers take to solve the same task? In 1967, Grant and Sackman published their now famous number of 28:1 interpersonal performance differences, which is both incorrect and misleading. This article presents the analysis of a larger dataset of software engineering work time data taken from various controlled experiments. It corrects the false 28:1 value, proposes more appropriate metrics, presents the results for the larger dataset, and further analyzes the data for distribution shapes and effect sizes.
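The abstract's point about the 28:1 figure being misleading can be illustrated with a hedged sketch: a max/min ratio is dominated by the two most extreme individuals, whereas a quantile-based ratio is not. The work times below are invented, and the specific Q3/Q1 metric is this sketch's assumption, not necessarily the metric Prechelt proposes.

```python
# Contrast a Grant/Sackman-style max/min ratio with a quantile-based
# spread metric on made-up work times (hours) for twelve engineers
# solving the same task.

import statistics

times = [2.1, 2.4, 2.7, 3.0, 3.2, 3.5, 3.9, 4.4, 5.0, 6.1, 8.3, 25.0]

def max_min_ratio(xs):
    """Slowest over fastest: driven entirely by the two extremes."""
    return max(xs) / min(xs)

def quartile_ratio(xs):
    """75th over 25th percentile: robust to single outliers."""
    q = statistics.quantiles(xs, n=4)  # [Q1, Q2, Q3]
    return q[2] / q[0]

print(f"max/min ratio: {max_min_ratio(times):.1f}")   # ≈ 11.9
print(f"Q3/Q1 ratio:   {quartile_ratio(times):.1f}")  # ≈ 2.1
```

A single slow outlier inflates the max/min ratio by a factor of five here while leaving the quartile ratio essentially unchanged, which is the core of the argument against headline max/min figures.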

Construction and Validation of an Instrument for Measuring Programming Skill

by Gunnar R. Bergersen, Dag I. K. Sjøberg
Abstract - Cited by 1 (0 self)
Abstract—Skilled workers are crucial to the success of software development. The current practice in research and industry for assessing programming skills is mostly to use proxy variables of skill, such as education, experience, and multiple-choice knowledge tests. There is as yet no valid and efficient way to measure programming skill. The aim of this research is to develop a valid instrument that measures programming skill by inferring skill directly from the performance on programming tasks. Over two days, 65 professional developers from eight countries solved 19 Java programming tasks. Based on the developers' performance, the Rasch measurement model was used to construct the instrument. The instrument was found to have satisfactory (internal) psychometric properties and correlated with external variables in compliance with theoretical expectations. Such an instrument has many implications for practice, for example, in job recruitment and project allocation. Index Terms—skill, programming, performance, instrument, measurement
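Instruments like the one described above build on the dichotomous Rasch model: the probability that a person of ability theta solves an item of difficulty b. A minimal sketch (the parameter values are illustrative, not taken from the study):

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model:
    P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, P(correct) is exactly 0.5.
print(rasch_p(0.0, 0.0))             # 0.5
# A more able person (theta = 2) on an easier item (b = -1):
print(round(rasch_p(2.0, -1.0), 3))  # 0.953
```

Skill estimation then amounts to fitting the theta and b values that make the observed pattern of correct and incorrect responses most likely.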

Measuring the Human Factor with the Rasch Model

by Dirk Wilking, David Schilli, Stefan Kowalewski - in Balancing Agility and Formalism in Software Engineering, ser. Lecture Notes in Computer Science
Abstract - Cited by 1 (0 self)
Abstract. This paper presents a test for measuring the C language knowledge of a software developer. The test was grounded with a web experiment comprising 151 participants. Their background ranged from pupils to professional developers. The resulting variable is based on the Rasch Model. Therefore single questions as well as the entire test could be assessed. The paper describes the experiment, the application of the Rasch Model in software engineering, and further concepts of measurement.

Citation Context

...mming experiments, the effect strength of novel techniques appears problematic. Novel techniques are always of major interest, but their effect is sometimes so small that other factors mask it. One of the masking factors is assumed here to be the developer's programming ability. Experience with software development projects, language knowledge, algorithm knowledge, development environment knowledge, and other person-related abilities may have an effect on the time a participant needs to develop a program. Indirectly, this is shown by performance estimation of developers as done in [11]. A factor of three is reported as the difference in performance, with natural outliers to be found sometimes. Regarding this from a software engineering view, the length of a development task might depend on the person executing the programming task. Finding a technique with an influence of approximately the same strength as a factor of three appears at least problematic. The use of the Rasch Model within practical software engineering is limited. First of all, a developer's knowledge is subject to change during a project. A static assessment using a single test in the beginning thus is not ap...

Commentary

by Derek M. Jones, 2008
Abstract
The material in the C99 subsections is copyright © ISO. The material in the C90 and C++ sections that is quoted from the respective language standards is copyright © ISO. Credits and permissions for quoted material are given where that material appears.

Citation Context

... been selected in several alternative ways, others interrelate to each other. The functionality available in C can affect the way an algorithm is coded (not forgetting individual personal differences [1069, 1070]). Sections of source may only be written that way because that is how things are done in C; they may be written differently, and have different execution time characteristics, [1071] in other langua...

Abstract Submission to IEEE Transactions on Software Engineering

by unknown authors
Abstract
An empirical study of working speed differences between software engineers for various kinds of task. How long do different software engineers take to solve the same task? In 1967, Grant and Sackman published their now famous number of 28:1 interpersonal performance differences, which is both incorrect and misleading. This article presents the analysis of a larger dataset of software engineering work time data taken from various controlled experiments. It corrects the false 28:1 value, proposes more appropriate metrics, presents the results for the larger dataset, and further analyzes the data for distribution shapes and effect sizes.

Citation Context

... power of five different tests using the group pairs from variance.data as the empirical basis. Since this analysis has many technical caveats, I will not delve into the details here (please refer to [13] instead) and just point out the main conclusion: Depending on the actual data samples, any single test can sometimes mislead and it is therefore advisable to use several tests at once and present the...

A continuous, evidence-based approach to discovery and assessment of software engineering best practices

by Philip M. Johnson
Abstract not found

Citation Context

...original dataset in combination with other published datasets indicates a smaller but still significant multiple— from 2:1 to 6:1 depending upon conditions and the kind of statistical comparison used [39]. There is even evidence that some programmers may actually decrease overall productivity, a phenomenon known as the “net negative producing programmer” [44]. While comparison of different individual'...

The New C Standard -- An Economic . . .

by Derek M. Jones, 2005
Abstract not found

Citation Context

... been selected in several alternative ways, others interrelate to each other. The functionality available in C can affect the way an algorithm is coded (not forgetting individual personal differences [1068, 1069]). Sections of source may only be written that way because that is how things are done in C; they may be written differently, and have different execution time characteristics, [1070] in other langua...


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University