|Authors||E. Arisholm, L. Briand and B. C. D. Anda|
|Title||Proceedings of the First Workshop on Empirical Studies of Model-Driven Engineering (ESMDE)|
|Affiliation||Software Engineering|
|Publication Type||Edited books|
|Year of Publication||2008|
It is often difficult to rigorously evaluate Model-Driven Engineering (MDE) technologies. Performing empirical studies requires skills, experience and tacit knowledge that are in many ways very different from "core" MDE research. Furthermore, empirical studies often entail large investments of human resources, time and money. Nevertheless, evaluations of MDE technologies are needed in order to demonstrate the soundness, applicability, and cost effectiveness of proposed technologies in various development contexts.

The aim of this workshop is to exemplify and discuss ways in which proposed model-driven engineering (MDE) technologies should be evaluated, with a specific emphasis on how to plan, conduct, analyze and report the results of empirical studies. The workshop will focus on the challenges of empirical studies involving human users, since MDE technologies are typically expected to be used by software engineers to improve various quality aspects of software systems and the productivity of software development. More detailed topics include:

- What are the main obstacles and potential remedies when performing empirical studies of MDE?
- What are the main threats to the validity of empirical studies of MDE, and how should they be dealt with? For example, since MDE often represents new and complex technology, the selection and training of the human subjects who participate in empirical studies often become critical factors.
- What are the most important outcome variables for the costs and benefits of MDE?
- How can quality be measured in the context of MDE?
- Can we define an unambiguous set of (benchmark) outcome measures to facilitate meta-analyses across subjects, systems, tasks and technologies?

The goal of the workshop is to pave the way for the development of an MDE-specific framework for the empirical evaluation of MDE technologies, or at least to provide a minimum standard of evaluation that published work in the MDE community should abide by.