Results 11–20 of 829
Imprecision in Engineering Design
 ASME Journal of Mechanical Design
, 1995
"... Methods for incorporating imprecision in engineering design decisionmaking are briefly reviewed and compared. A tutorial is presented on the Method of Imprecision (MoI), a formal method, based on the mathematics of fuzzy sets, for representing and manipulating imprecision in engineering design. The ..."
Abstract

Cited by 56 (6 self)
Methods for incorporating imprecision in engineering design decision-making are briefly reviewed and compared. A tutorial is presented on the Method of Imprecision (MoI), a formal method, based on the mathematics of fuzzy sets, for representing and manipulating imprecision in engineering design. The results of a design cost estimation example, utilizing a new informal cost specification, are presented. The MoI can provide formal information upon which to base decisions during preliminary engineering design and can facilitate set-based concurrent design.
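The fuzzy-set machinery underlying the MoI can be illustrated with a minimal sketch. This is not the paper's formulation: the triangular membership function, the min-aggregation rule, and the two design attributes (cost, weight) are common textbook choices used here purely for illustration.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def overall_preference(memberships):
    """Conservative (min) aggregation of preferences across design attributes."""
    return min(memberships)

# Hypothetical design point: preference in cost and weight attributes.
mu_cost = triangular(120.0, 100.0, 110.0, 150.0)   # 0.75
mu_weight = triangular(8.0, 5.0, 8.0, 12.0)        # 1.0
overall = overall_preference([mu_cost, mu_weight]) # 0.75
```

The min operator is the most conservative aggregation: the overall design preference is only as good as its worst attribute.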
A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem
 European Journal of Operational Research
, 2006
"... Over the last decade many metaheuristics have been applied to the flowshop scheduling problem, ranging from Simulated Annealing or Tabu Search to complex hybrid techniques. Some of these methods provide excellent effectiveness and efficiency at the expense of being utterly complicated. In fact, seve ..."
Abstract

Cited by 51 (11 self)
Over the last decade many metaheuristics have been applied to the flowshop scheduling problem, ranging from Simulated Annealing or Tabu Search to complex hybrid techniques. Some of these methods provide excellent effectiveness and efficiency at the expense of being utterly complicated. In fact, several published methods require substantial implementation efforts, exploit problem-specific speed-up techniques that cannot be applied to slight variations of the original problem, and often reimplementations of these methods by other researchers produce results that are quite different from the original ones. In this work we present a new iterated greedy algorithm that iteratively applies two phases: destruction, where some jobs are eliminated from the incumbent solution, and construction, where the eliminated jobs are reinserted into the sequence using the well-known NEH construction heuristic. Optionally, a local search can be applied after the construction phase. Our iterated greedy algorithm is both very simple to implement and, as shown by experimental results, highly effective when compared to state-of-the-art methods.
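The destruction–construction loop can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the parameter values (`d`, `iters`), the instance data, and the simple always-accept rule are stand-ins for whatever acceptance criterion the paper actually uses, and the optional local search is omitted.

```python
import random

def makespan(seq, p):
    """Completion time of the last job on the last machine (permutation flowshop).
    p[j][k] is the processing time of job j on machine k."""
    m = len(p[0])
    comp = [0.0] * m
    for j in seq:
        comp[0] += p[j][0]
        for k in range(1, m):
            comp[k] = max(comp[k], comp[k - 1]) + p[j][k]
    return comp[-1]

def neh_insert(seq, job, p):
    """NEH step: insert `job` at the position that minimizes the makespan."""
    best = None
    for i in range(len(seq) + 1):
        cand = seq[:i] + [job] + seq[i:]
        if best is None or makespan(cand, p) < makespan(best, p):
            best = cand
    return best

def iterated_greedy(p, d=2, iters=50, seed=0):
    """Iterated greedy: destruct d jobs, reconstruct via NEH insertion."""
    rng = random.Random(seed)
    seq = list(range(len(p)))
    best = seq[:]
    for _ in range(iters):
        removed = rng.sample(seq, d)                  # destruction phase
        partial = [j for j in seq if j not in removed]
        for j in removed:                             # construction phase
            partial = neh_insert(partial, j, p)
        if makespan(partial, p) <= makespan(best, p):
            best = partial
        seq = partial                                 # simplified acceptance
    return best, makespan(best, p)
```

The returned sequence is never worse than the identity-order starting solution, since the best sequence seen is tracked separately from the incumbent.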
MultiModal Identity Verification Using Expert Fusion
 Information Fusion
, 2000
"... The contribution of this paper is to compare paradigms coming from the classes of parametric, and nonparametric techniques to solve the decision fusion problem encountered in the design of a multimodal biometrical identity verification system. The multimodal identity verification system under con ..."
Abstract

Cited by 48 (0 self)
The contribution of this paper is to compare paradigms coming from the classes of parametric and non-parametric techniques to solve the decision fusion problem encountered in the design of a multimodal biometric identity verification system. The multimodal identity verification system under consideration is built of d modalities in parallel, each one delivering as output a scalar number, called a score, stating how well the claimed identity is verified. A decision fusion module receiving as input the d scores has to take a binary decision: accept or reject the claimed identity. We have solved this fusion problem using parametric and non-parametric classifiers. The performances of all these fusion modules have been evaluated and compared with other approaches on a multimodal database containing both vocal and visual biometric modalities. Keywords: multimodal identity verification, biometrics, decision fusion.
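A weighted-sum rule is about the simplest fusion module of the kind described: map the d modality scores to one combined score and threshold it. This sketch is not from the paper; in practice the weights and threshold would be trained, e.g. on a development set, and the paper compares far richer parametric and non-parametric classifiers.

```python
def fuse_scores(scores, weights, threshold):
    """Weighted-sum fusion of d modality scores.

    Returns True (accept the claimed identity) when the combined
    score reaches the threshold, False (reject) otherwise.
    """
    combined = sum(w * s for w, s in zip(weights, scores))
    return combined >= threshold

# Hypothetical two-modality example: a voice score and a face score.
accept = fuse_scores([0.9, 0.8], weights=[0.5, 0.5], threshold=0.7)  # True
reject = fuse_scores([0.2, 0.3], weights=[0.5, 0.5], threshold=0.7)  # False
```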
Statistical strategies for avoiding false discoveries in metabolomics and related experiments
, 2006
"... Many metabolomics, and other highcontent or highthroughput, experiments are set up such that the primary aim is the discovery of biomarker metabolites that can discriminate, with a certain level of certainty, between nominally matched ‘case ’ and ‘control ’ samples. However, it is unfortunately ve ..."
Abstract

Cited by 39 (10 self)
Many metabolomics, and other high-content or high-throughput, experiments are set up such that the primary aim is the discovery of biomarker metabolites that can discriminate, with a certain level of certainty, between nominally matched ‘case’ and ‘control’ samples. However, it is unfortunately very easy to find markers that are apparently persuasive but that are in fact entirely spurious, and there are well-known examples in the proteomics literature. The main types of danger are not entirely independent of each other, but include bias, inadequate sample size (especially relative to the number of metabolite variables and to the required statistical power to prove that a biomarker is discriminant), excessive false discovery rate due to multiple hypothesis testing, inappropriate choice of particular numerical methods, and overfitting (generally caused by the failure to perform adequate validation and cross-validation). Many studies fail to take these into account, and thereby fail to discover anything of true significance (despite their claims). We summarise these problems, and provide pointers to a substantial existing literature that should assist in the improved design and evaluation of metabolomics experiments, thereby allowing robust scientific conclusions to be drawn from the available data. We provide a list of some of the simpler checks that might improve one’s confidence that a candidate biomarker is not simply a statistical artefact, and suggest a series of preferred tests and visualisation tools that can assist readers and authors in assessing papers. These tools can be applied to individual metabolites by using multiple univariate tests performed in parallel across all metabolite peaks. They may also be applied to the validation of multivariate models. We stress in ...
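Benjamini–Hochberg step-up control of the false discovery rate is one of the standard corrections for the multiple-testing danger described above. A minimal sketch (this specific procedure is a well-known method, not necessarily the one the paper recommends):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini–Hochberg step-up procedure.

    Returns the indices of hypotheses rejected at FDR level q:
    sort the m p-values, find the largest rank k with
    p_(k) <= q * k / m, and reject the k smallest.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])

# Four univariate tests (e.g. one per metabolite peak): only the
# first survives FDR correction at q = 0.05.
rejected = benjamini_hochberg([0.01, 0.04, 0.03, 0.20], q=0.05)  # [0]
```

Without correction, two of the four raw p-values fall below 0.05; the step-up rule keeps only the one that is small relative to its rank.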
A Concept Exploration Method For Determining Robust Top-Level Specifications
, 1997
"... In the early stages of design of complex systems, it is necessary to explore the design space to determine a suitable range for specifications and identify feasible starting points for design. Thus, a robust concept exploration method have been developed to improve the efficiency and effectiveness o ..."
Abstract

Cited by 37 (18 self)
In the early stages of design of complex systems, it is necessary to explore the design space to determine a suitable range for specifications and identify feasible starting points for design. Thus, a robust concept exploration method has been developed to improve the efficiency and effectiveness of the process of identifying suitable starting points for the design of complex systems. Using this method, quality concepts (robustness) are introduced into the choice of the initial specifications for design. Concept exploration is implemented by integrating the Response Surface Method, robust design techniques, and the compromise Decision Support Problem. The proposed approach is demonstrated by determining top-level specifications for airframe geometry and the propulsion system for the High Speed Civil Transport aircraft. The focus in this paper is on illustrating the approach rather than on the results per se. Word Count: 6986. Key words: concept exploration, robust design, specifi...
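The core idea of robust design, preferring specifications whose response is both good on average and insensitive to noise, can be sketched as follows. The mean-plus-variance loss and the toy response function are illustrative stand-ins, not the paper's Response Surface Method or compromise Decision Support Problem formulation.

```python
def robust_choice(designs, noise_levels, response):
    """Pick the design whose response is best on average and
    least sensitive to variation in the noise factor."""
    def loss(x):
        ys = [response(x, z) for z in noise_levels]
        mean = sum(ys) / len(ys)
        var = sum((y - mean) ** 2 for y in ys) / len(ys)
        return mean + var          # illustrative robustness trade-off
    return min(designs, key=loss)

# Hypothetical response: minimized at x = 1, with a noise term z * x.
resp = lambda x, z: (x - 1) ** 2 + z * x
best = robust_choice([0, 1, 2], noise_levels=[-0.1, 0.1], response=resp)  # 1
```

Design x = 0 is completely noise-insensitive but performs worse on average, so the combined criterion still prefers x = 1.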
An Experimental Comparison of Usage-Based and Checklist-Based Reading
 IEEE Transactions on Software Engineering
, 2003
"... Software quality can be defined as the customer’s perception of how a system works. Inspection is a method to control the quality throughout the development cycle. Reading techniques applied to inspections help reviewers to stay focused on the important parts of an artefact when inspecting. However, ..."
Abstract

Cited by 36 (9 self)
Software quality can be defined as the customer’s perception of how a system works. Inspection is a method to control quality throughout the development cycle. Reading techniques applied to inspections help reviewers stay focused on the important parts of an artefact when inspecting. However, many reading techniques focus on finding as many faults as possible, regardless of their importance. Usage-based reading helps reviewers focus on the most important parts of a software artefact from a user’s point of view. This paper is an extended abstract of a technical report describing an experiment comparing usage-based and checklist-based reading. The results show that reviewers applying usage-based reading are more efficient and effective in detecting the most critical faults from a user’s point of view than reviewers using checklist-based reading. Usage-based reading may be preferable for software organisations that utilise, or will start utilising, use cases in their software development.
Benchmark Instances for Project Scheduling Problems
 Handbook on Recent Advances in Project Scheduling
, 1998
"... this paper can be stated as follows: A set V = f0; 1; : : : ; n; n+1g of activities has to be processed. Fictitious activity 0 and n+1 correspond to the "project start" and to the "project end", respectively. The activities use renewable resources and consume nonrenewable resourc ..."
Abstract

Cited by 35 (4 self)
this paper can be stated as follows: A set V = {0, 1, …, n, n+1} of activities has to be processed. Fictitious activities 0 and n+1 correspond to the "project start" and the "project end", respectively. The activities use renewable resources and consume non-renewable resources. The sets of renewable and non-renewable resources are denoted by K
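Given the activity set V = {0, 1, …, n, n+1} with fictitious start and end activities, a resource-free forward pass over the precedence relations illustrates the notation. This is a generic critical-path-style sketch, not from the paper; renewable and non-renewable resource constraints are deliberately ignored.

```python
def earliest_starts(durations, preds):
    """Forward pass over precedence relations: earliest start of each activity.

    Activity 0 is the fictitious project start and the last activity the
    fictitious project end (both with duration 0). Activities are assumed
    to be indexed in topological order; preds[j] lists the predecessors of j.
    """
    es = [0] * len(durations)
    for j in range(len(durations)):
        for i in preds[j]:
            es[j] = max(es[j], es[i] + durations[i])
    return es

# Hypothetical 4-activity project: start (0), two real activities (1, 2),
# end (3). The end's earliest start equals the project makespan.
es = earliest_starts([0, 3, 2, 0], [[], [0], [0], [1, 2]])  # [0, 0, 0, 3]
```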
Using Experimental Design to Find Effective Parameter Settings for Heuristics
 Journal of Heuristics
, 2001
"... In this paper, we propose a procedure, based on statistical design of experiments and gradient descent, that finds effective settings for parameters found in heuristics. We develop our procedure using four experiments. We use our procedure and a small subset of problems to find parameter settings fo ..."
Abstract

Cited by 35 (1 self)
In this paper, we propose a procedure, based on statistical design of experiments and gradient descent, that finds effective settings for parameters found in heuristics. We develop our procedure using four experiments. We use our procedure and a small subset of problems to find parameter settings for two new vehicle routing heuristics. We then set the parameters of each heuristic and solve 19 capacity-constrained and 15 capacity- and route-length-constrained vehicle routing problems ranging in size from 50 to 483 customers. We conclude that our procedure is an effective method that deserves serious consideration by both researchers and operations research practitioners. Key words: statistical design of experiments, heuristics, vehicle routing.
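The combination of a screening design with a descent refinement can be sketched as follows. This is an illustrative reconstruction, not the authors' procedure: a 2-level full factorial design plus simple coordinate descent stands in for their experimental designs and gradient steps, and `objective` would in practice be the heuristic's solution cost averaged over a small subset of problems.

```python
import itertools

def tune(objective, ranges, step=0.25, iters=20):
    """Evaluate a 2-level full factorial design over the parameter ranges,
    then refine the best corner by coordinate descent with a fixed step
    (expressed as a fraction of each parameter's range)."""
    corners = list(itertools.product(*[(lo, hi) for lo, hi in ranges]))
    best = list(min(corners, key=objective))          # screening phase
    for _ in range(iters):                            # refinement phase
        improved = False
        for i, (lo, hi) in enumerate(ranges):
            for delta in (-step, step):
                cand = best[:]
                cand[i] = min(hi, max(lo, cand[i] + delta * (hi - lo)))
                if objective(cand) < objective(best):
                    best = cand
                    improved = True
        if not improved:
            break
    return best

# Hypothetical smooth objective with optimum at (0.5, 0.25).
obj = lambda p: (p[0] - 0.5) ** 2 + (p[1] - 0.25) ** 2
settings = tune(obj, [(0.0, 1.0), (0.0, 1.0)])  # [0.5, 0.25]
```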
Tuning Search Algorithms for Real-World Applications: A Regression Tree Based Approach
, 2004
"... The optimization of complex realworld problems might benefit from well tuned algorithm's parameters. We propose a methodology that performs this tuning in an effective and efficient algorithmical manner. This approach combines methods from statistical design of experiments, regression analysis ..."
Abstract

Cited by 33 (5 self)
The optimization of complex real-world problems might benefit from well-tuned algorithm parameters. We propose a methodology that performs this tuning in an effective and efficient algorithmic manner. This approach combines methods from statistical design of experiments, regression analysis, design and analysis of computer experiments, and tree-based regression. It can also be applied to analyze the influence of different operators or to compare the performance of different algorithms. An evolution strategy and a simulated annealing algorithm that optimize an elevator supervisory group controller system are used to demonstrate the applicability of our approach to real-world optimization problems.
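A one-split regression stump conveys the essence of tree-based regression for tuning: from sampled (parameter, performance) pairs, find the threshold that best separates good from bad parameter regions. A minimal single-parameter sketch, not the authors' (multi-parameter, multi-level) trees:

```python
def best_split(xs, ys):
    """One-level regression tree (stump): find the threshold on the
    parameter x that minimizes the summed squared error of the two
    leaf means over the performance values y."""
    def sse(vals):
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)
    best_t, best_err = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        err = sse(left) + sse(right)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Hypothetical samples: performance (lower is better) drops sharply
# once the parameter exceeds 2, and the stump recovers that threshold.
t = best_split([1, 2, 3, 4], [10, 10, 0, 0])  # 2
```

Recursing on each leaf yields a full regression tree; here one split already identifies the promising region x > 2.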