Results 1 - 10 of 14
A social network approach to free/open source software simulation
- In Proceedings First International Conference on Open Source Systems
, 2005
"... Free and Open Source Software (F/OSS) development is a complex process that is just beginning to be understood. The actual development process is frequently characterized as disparate volunteer developers collaborating to make a piece of software. The developers of F/OSS, like all software, spend a ..."
Abstract
-
Cited by 26 (0 self)
- Add to MetaCart
(Show Context)
Free and Open Source Software (F/OSS) development is a complex process that is just beginning to be understood. The actual development process is frequently characterized as disparate volunteer developers collaborating to make a piece of software. The developers of F/OSS, like developers of all software, spend a significant portion of their time in social communication to foster collaboration. We have analyzed several methods of communication (a social networking site, project mailing lists, and developer weblogs) to gain an understanding of the social network structure behind F/OSS projects. This social network data was used to create a model of F/OSS development that allows for multiple projects, users, and developers with varying goals and socialization methods. Using this model, we have been able to replicate some of the known phenomena observed in F/OSS and take a first step toward a robust model of F/OSS development.
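The paper's simulation model is not reproduced in this listing. Purely as an illustrative sketch (the joining rule below is a generic preferential-attachment assumption, not necessarily the authors' mechanism), the following shows how a minimal agent-based rule, in which developers join projects with probability proportional to current project size, yields the heavily skewed developers-per-project distribution commonly reported for F/OSS:

```python
import random
from collections import Counter

# Illustrative sketch only, not the authors' model: at each step, one developer
# either founds a new project or joins an existing one, with the choice of
# existing project weighted by its current size (preferential attachment).
random.seed(42)
NEW_PROJECT_PROB = 0.05

project_sizes = [1]  # developer count per project; start with a single project
for _ in range(10_000):
    if random.random() < NEW_PROJECT_PROB:
        project_sizes.append(1)  # a new project is founded
    else:
        idx = random.choices(range(len(project_sizes)), weights=project_sizes)[0]
        project_sizes[idx] += 1

# Most projects stay tiny while a few accumulate most of the developers.
counts = Counter(project_sizes)
print("projects:", len(project_sizes), "| largest project:", max(project_sizes))
print("projects with at most 2 developers:",
      sum(n for size, n in counts.items() if size <= 2))
```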
Plat Forms: A Web Development Platform Comparison by an Exploratory Experiment Searching for Emergent Platform Properties
"... Abstract—Background: For developing web-based applications, there exist several competing and widely used technological platforms (consisting of a programming language, framework(s), components, and tools), each with an accompanying development culture and style. Research question: Do web developmen ..."
Abstract
-
Cited by 7 (1 self)
- Add to MetaCart
(Show Context)
Background: For developing web-based applications, there exist several competing and widely used technological platforms (consisting of a programming language, framework(s), components, and tools), each with an accompanying development culture and style. Research question: Do web development projects exhibit emergent process or product properties that are characteristic and consistent within a platform but differ substantially across platforms, or do team-to-team individual differences outweigh such differences, if any? Such a property could be positive (i.e., a platform advantage), negative, or neutral, and it may not be obvious which is which. Method: In a non-randomized, controlled experiment, framed as a public contest called “Plat Forms”, top-class teams of three professional programmers competed to implement the same requirements for a web-based application within 30 hours. Three different platforms (Java EE, PHP, or Perl) were used by three teams each. We compare the resulting nine products and process records along many dimensions, both external (usability, functionality, reliability, security, etc.) and internal (size, structure, modifiability, etc.). Results: The results cover a wide spectrum. First, there are results that many people would have called “obvious” or “well known”, for example that Perl solutions tend to be more compact than Java solutions. Second, there are results that contradict conventional wisdom, for example that our PHP solutions appear in some (but not all) respects to be at least as secure as the others. Finally, one result makes a statement we have not seen discussed previously: along several dimensions, the amount of within-platform variation between the teams tends to be smaller for PHP than for the other platforms. Conclusion: The results suggest that substantial characteristic platform differences do indeed exist in some dimensions, but possibly not in others.
Evaluating Methods and Technologies in Software Engineering with Respect
, 2012
"... Abstract—Background: It is trivial that the usefulness of a technology depends on the skill of the user. Several studies have reported an interaction between skill levels and different technologies, but the effect of skill is, for the most part, ignored in empirical, human-centric studies in softwar ..."
Abstract
-
Cited by 2 (2 self)
- Add to MetaCart
(Show Context)
Background: It is self-evident that the usefulness of a technology depends on the skill of the user. Several studies have reported an interaction between skill levels and different technologies, but the effect of skill is, for the most part, ignored in empirical, human-centric studies in software engineering. Aim: This paper investigates the usefulness of a technology as a function of skill. Method: An experiment that used students as subjects found recursive implementations to be easier to debug correctly than iterative implementations. We replicated the experiment by hiring 65 professional developers from nine companies in eight countries. In addition to the debugging tasks, performance on 17 other programming tasks was collected and analyzed using a measurement model that expressed the effect of treatment as a function of skill. Results: The hypotheses of the original study were confirmed only for the low-skilled subjects in our replication. Conversely, the high-skilled subjects correctly debugged the iterative implementations faster than the recursive ones, while the difference between correct and incorrect solutions was negligible for both treatments. We also found that the effect of skill (odds ratio = 9.4) was much larger than the effect of the treatment (odds ratio = 1.5). Conclusions: Claiming that one technology is better than another is problematic without taking skill levels into account. Better ways to assess skills as an integral part of technology evaluation are required.
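For orientation, the effect sizes quoted above are odds ratios. The standard definition (given here for reference; this is not a reproduction of the paper's measurement model) is

\[ \mathrm{OR} = \frac{p_1/(1-p_1)}{p_2/(1-p_2)}, \]

where p_1 and p_2 are the probabilities of, for example, a correct debugging solution in the two groups being compared. On this scale, the reported skill effect (OR = 9.4) is much larger than the treatment effect (OR = 1.5).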
An Empirical Study of Working Speed Differences Between Software Engineers for Various Kinds of Task
, 2000
"... How long do different software engineers take to solve the same task? In 1967, Grant and Sackman published their now famous number of 28:1 interpersonal performance differences, which is both incorrect and misleading. This article presents the analysis of a larger dataset of software engineering wo ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
How long do different software engineers take to solve the same task? In 1967, Grant and Sackman published their now famous figure of 28:1 for interpersonal performance differences, a value that is both incorrect and misleading. This article presents an analysis of a larger dataset of software engineering work-time data taken from various controlled experiments. It corrects the erroneous 28:1 value, proposes more appropriate metrics, presents results for the larger dataset, and further analyzes the data for distribution shapes and effect sizes.
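The paper's exact replacement metrics are not reproduced here. As a minimal sketch on invented numbers, the comparison below illustrates why a maximum-to-minimum ratio such as 28:1 overstates typical interpersonal differences relative to a quantile-based ratio:

```python
import numpy as np

# Hypothetical work times (hours) for one task; invented data, not from the paper.
times = np.array([2.1, 2.8, 3.0, 3.4, 3.9, 4.2, 5.0, 6.5, 9.8, 58.0])

# A slowest-vs-fastest ratio is dominated by single outliers.
extreme_ratio = times.max() / times.min()

# A quantile-based ratio (here 75th vs. 25th percentile) describes the spread
# among typical subjects and is far less sensitive to outliers.
q25, q75 = np.percentile(times, [25, 75])
robust_ratio = q75 / q25

print(f"max/min ratio: {extreme_ratio:.1f}:1")   # about 27.6:1
print(f"Q75/Q25 ratio: {robust_ratio:.1f}:1")    # about 2.0:1
```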
Construction and Validation of an Instrument for Measuring Programming Skill
"... Abstract—Skilled workers are crucial to the success of software development. The current practice in research and industry for assessing programming skills is mostly to use proxy variables of skill, such as education, experience, and multiple-choice knowledge tests. There is as yet no valid and effi ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
Skilled workers are crucial to the success of software development. The current practice in research and industry for assessing programming skills is mostly to use proxy variables of skill, such as education, experience, and multiple-choice knowledge tests. There is as yet no valid and efficient way to measure programming skill. The aim of this research is to develop a valid instrument that measures programming skill by inferring skill directly from performance on programming tasks. Over two days, 65 professional developers from eight countries solved 19 Java programming tasks. Based on the developers' performance, the Rasch measurement model was used to construct the instrument. The instrument was found to have satisfactory (internal) psychometric properties and correlated with external variables in compliance with theoretical expectations. Such an instrument has many implications for practice, for example, in job recruitment and project allocation.
Index Terms: skill, programming, performance, instrument, measurement
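The Rasch measurement model mentioned above has a standard dichotomous form, shown here only for orientation (the paper's actual scoring model, for example a polytomous variant, may differ):

\[ P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{e^{\theta_n - b_i}}{1 + e^{\theta_n - b_i}}, \]

where \theta_n is the skill of developer n, b_i is the difficulty of task i, and X_{ni} = 1 indicates a correct solution to task i by developer n; both parameter sets are estimated jointly from the observed task outcomes.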
Measuring the human factor with the Rasch model
- In Balancing Agility and Formalism in Software Engineering, ser. Lecture Notes in Computer Science
"... Abstract. This paper presents a test for measuring the C language knowledge of a software developer. The test was grounded with a web experiment comprising 151 participants. Their background ranged from pupils to professional developers. The resulting variable is based on the Rasch Model. Therefore ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
This paper presents a test for measuring a software developer's knowledge of the C language. The test was grounded in a web experiment with 151 participants, whose backgrounds ranged from pupils to professional developers. The resulting variable is based on the Rasch model, so individual questions as well as the entire test could be assessed. The paper describes the experiment, the application of the Rasch model in software engineering, and further concepts of measurement.
Commentary
, 2008
"... The material in the C99 subsections is copyright © ISO. The material in the C90 and C++ sections that is quoted from the respective language standards is copyright © ISO. Credits and permissions for quoted material is given where that material appears. ..."
Abstract
- Add to MetaCart
(Show Context)
The material in the C99 subsections is copyright © ISO. The material in the C90 and C++ sections that is quoted from the respective language standards is copyright © ISO. Credits and permissions for quoted material are given where that material appears.
An empirical study of working speed differences between software engineers for various kinds of task
- Submission to IEEE Transactions on Software Engineering
A continuous, evidence-based approach to discovery and assessment of software engineering best practices
"... ..."
(Show Context)