
Industrial Engineering: Concepts, Methodologies, Tools and Applications
Information Resources Management Association, USA

3-Volume Set

Volume I

Managing Director: Lindsay Johnston
Senior Editorial Director: Heather Probst
Book Production Manager: Jennifer Romanchak
Publishing Systems Analyst: Adrienne Freeland
Assistant Acquisitions Editor: Kayla Wolfe
Development Manager: Joel Gamon
Development Editor: Chris Wozniak
Assistant Production Editor: Deanna Jo Zombro
Cover Design: Nick Newcomer

Published in the United States of America by Engineering Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue, Hershey, PA 17033
Tel: 717-533-8845  Fax: 717-533-8661
E-mail: [emailprotected]
Web site: http://www.igi-global.com

Copyright © 2013 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data
Industrial engineering : concepts, methodologies, tools, and applications / Information Resources Management Association, editor.
v. cm.
Includes bibliographical references and index.
ISBN 978-1-4666-1945-6 (hardcover) -- ISBN 978-1-4666-1946-3 (ebook) -- ISBN 978-1-4666-1947-0 (print & perpetual access)
1. Industrial engineering. 2. Industrial engineering--Case studies. I. Information Resources Management Association.
T56.I43 2013
620--dc23
2012023210

British Cataloguing in Publication Data A Cataloguing in Publication record for this book is available from the British Library. The views expressed in this book are those of the authors, but not necessarily of the publisher.


Preface

The constantly changing landscape of Industrial Engineering makes it challenging for experts and practitioners to stay informed of the field’s most up-to-date research. That is why Information Science Reference is pleased to offer this three-volume reference collection that will empower students, researchers, and academicians with a strong understanding of critical issues within Industrial Engineering by providing both broad and detailed perspectives on cutting-edge theories and developments. This reference is designed to act as a single source on conceptual, methodological, technical, and managerial issues, as well as to provide insight into emerging trends and future opportunities within the discipline.

Industrial Engineering: Concepts, Methodologies, Tools and Applications is organized into eight distinct sections that provide comprehensive coverage of important topics. The sections are: (1) Fundamental Concepts and Theories, (2) Development and Design Methodologies, (3) Tools and Technologies, (4) Utilization and Application, (5) Organizational and Social Implications, (6) Managerial Impact, (7) Critical Issues, and (8) Emerging Trends. The following paragraphs summarize what to expect from this invaluable reference tool.

Section 1, Fundamental Concepts and Theories, serves as a foundation for this extensive reference tool by addressing crucial theories essential to the understanding of Industrial Engineering. Introducing the book is “Defining, Teaching, and Assessing Engineering Design Skills” by Nikos J. Mourtos, which lays the groundwork for the basic concepts and theories discussed throughout the rest of the book. Another chapter of note in Section 1 is “Integrating ‘Designerly’ Ways with Engineering Science” by Ian de Vere and Gavin Melles, which discusses novel techniques for adding aspects of design science to the stricter roles of engineering practice.
Section 1 concludes, and leads into the following portion of the book, with a segue chapter, “Tracing the Implementation of Non-Functional Requirements,” by Stephan Bode and Matthias Riebisch. Where Section 1 leaves off with fundamental concepts, Section 2 discusses the architectures and frameworks in place for Industrial Engineering.

Section 2, Development and Design Methodologies, presents in-depth coverage of the conceptual design and architecture of Industrial Engineering, focusing on aspects including parametric design, service design, fuzzy logic, control modeling, supply chain systems, and many more topics. Opening the section is “Learning Parametric Designing” by Marc Aurel Schnabel. This section is vital for developers and practitioners who want to measure and track the progress of Industrial Engineering through the multiple lenses of parametric design. Through case studies, this section lays excellent groundwork for later sections that will get into present and future applications for Industrial Engineering, including, of note: “Decision Support Framework for the Selection of a Layout Type” by Jannes Slomp and Jos A.C. Bokhorst, and “Internal Supply Chain Integration” by Virpi Turkulainen. The section concludes with an excellent work by Mousumi Debnath and Mukeshwar Pandey, titled “Enhancing Engineering Education Learning Outcomes Using Project-Based Learning.”

Section 3, Tools and Technologies, presents extensive coverage of the various tools and technologies used in the implementation of Industrial Engineering. Section 3 begins where Section 2 left off, though it describes the more concrete tools in place in the modeling, planning, and application of Industrial Engineering. The first chapter, “Semantic Technologies in Motion,” by Ricardo Colomo-Palacios, lays a framework for the types of works found in this section and is a perfect resource for practitioners looking for the fundamentals of the semantic technologies currently in practice in Industrial Engineering. Section 3 is full of excellent chapters like this one, including such titles as “Optimization and Mathematical Programming to Design and Planning Issues in Cellular Manufacturing Systems under Uncertain Situations,” “Multi-Modal Assembly-Support System for Cellular Manufacturing,” and “An Estimation of Distribution Algorithm for Part Cell Formation Problem,” to name a few.

Where Section 3 described specific tools and technologies at the disposal of practitioners, Section 4 describes successes, failures, best practices, and different applications of the tools and frameworks discussed in previous sections. Section 4, Utilization and Application, describes how the broad range of Industrial Engineering efforts has been utilized and offers insight on and important lessons for their applications and impact. Section 4 includes the widest range of topics because it describes case studies, research, methodologies, frameworks, architectures, theory, analysis, and guides for implementation. Topics range from serious games, enterprise resource planning, and crisis management to air travel development and design.
The first chapter in the section, “Using Serious Games for Collecting and Modeling Human Procurement Decisions in a Supply Chain Context,” was written by Souleiman Naciri, Min-Jung Yoo, and Rémy Glardon. The breadth of topics covered in the section is also reflected in the diversity of its authors, who come from countries all over the globe, including Germany, Slovenia, Norway, Hong Kong, Malaysia, Brazil, Cyprus, Turkey, the United States, and more. Section 4 concludes with an excellent view of a case study of a new programme, “UB1-HIT Dual Master’s Programme,” by David Chen, Bruno Vallespir, Jean-Paul Bourrières, and Thècle Alix.

Section 5, Organizational and Social Implications, includes chapters discussing the organizational and social impact of Industrial Engineering. The section opens with “Process Innovation with Ambient Intelligence (AmI) Technologies in Manufacturing SMEs” by Kathryn J. Hayes and Ross Chapman. Where Section 4 focused on the many broad applications of Industrial Engineering technology, Section 5 focuses exclusively on how these technologies affect human lives, either through the way people interact with each other or through how they affect behavioral and workplace situations. Other interesting chapters of note in Section 5 include “Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral” by Cengiz Kahraman, Selçuk Çebi, and Ihsan Kaya, and “Direct Building Manufacturing of Homes with Digital Fabrication” by Lawrence Sass. Section 5 concludes with a fascinating study of a new development in Industrial Engineering, “Firm-Specific Factors and the Degree of Innovation Openness” by Valentina Lazzarotti, Raffaella Manzini, and Luisa Pellegrini.

Section 6, Managerial Impact, presents focused coverage of Industrial Engineering as it relates to effective uses of offshoring, network marketing, knowledge management, e-government, knowledge dissemination, and many more utilities.
This section serves as a vital resource for developers who want to utilize the latest research to bolster the capabilities and functionalities of their processes. The section begins with “Offshoring Process,” a great look into whether or not offshoring practices could help a given business, alongside best practices and some new trends in the field. The 13 chapters in this section offer unmistakable value to managers looking to implement new strategies that work at larger bureaucratic levels. The section concludes with “Research Profiles” by Gretchen Jordan, Jonathon Mote, and Jerald Hage. Where Section 6 leaves off, Section 7 picks up with a focus on some of the more theoretical material of this compendium.

Section 7, Critical Issues, presents coverage of academic and research perspectives on Industrial Engineering tools and applications. The section begins with “Cultural Models and Variations” by Yongjiang Shi and Zheng Liu. Other issues covered in detail in Section 7 include design paradigms, knowledge dynamics, layout structuring, design ethos, and much more. The section concludes with “Engineer-to-Order” by Ephrem Eyob and Richard Addo-Tenkorang, a great transitional chapter between Sections 7 and 8 because it examines an important trend going into the future of the field. The last chapter offers a theoretical look into future and potential technologies, a topic covered in more detail in Section 8.

Section 8, Emerging Trends, highlights areas for future research within the field of Industrial Engineering, opening with “Advanced Technologies for Transient Faults Detection and Compensation” by Matteo Sonza Reorda, Luca Sterpone, and Massimo Violante. Section 8 contains chapters that look at what might happen in the coming years to extend the already staggering number of applications for Industrial Engineering. Other chapters of note include “Embedded RFID Solutions Challenges for Product Design and Development” and “Green Computing as an Ecological Aid in Industry.” The final chapter of the book looks at an emerging field within Industrial Engineering, in the excellent contribution “Zero-Downtime Reconfiguration of Distributed Control Logic in Industrial Automation and Control” by Thomas Strasser and Alois Zoitl.
Although the primary organization of the contents in this multi-volume work is based on its eight sections, offering a progression of coverage of important concepts, methodologies, technologies, applications, social issues, and emerging trends, the reader can also identify specific contents by utilizing the extensive indexing system at the end of each volume. Furthermore, to ensure that the scholar, researcher, and educator have access to the entire contents of this multi-volume set, as well as additional coverage that could not be included in the print version of this publication, the publisher will provide unlimited multi-user electronic access to the online aggregated database of this collection for the life of the edition, free of charge, when a library purchases a print copy. This aggregated database provides far more content than can be included in the print version, in addition to continual updates. This unlimited access, coupled with the continuous updates to the database, ensures that the most current research is accessible to knowledge seekers.

As a comprehensive collection of research on the latest findings related to using technology to provide various services, Industrial Engineering: Concepts, Methodologies, Tools and Applications provides researchers, administrators, and all audiences with a complete understanding of the development of applications and concepts in Industrial Engineering. Given the vast number of issues concerning usage, failure, success, policies, strategies, and applications of Industrial Engineering in countries around the world, this collection addresses the demand for a resource that encompasses the most pertinent research in the technologies being employed to globally bolster the knowledge and applications of Industrial Engineering.

Table of Contents

Volume I

Section 1
Fundamental Concepts and Theories

This section serves as a foundation for this exhaustive reference tool by addressing underlying principles essential to the understanding of Industrial Engineering. Chapters found within these pages provide an excellent framework in which to position Industrial Engineering within the field of information science and technology. Insight regarding the critical incorporation of global measures into Industrial Engineering is addressed, while crucial stumbling blocks of this field are explored. With 12 chapters comprising this foundational section, the reader can learn and choose from a compendium of expert research on the elemental theories underscoring the Industrial Engineering discipline.

Chapter 1
Defining, Teaching, and Assessing Engineering Design Skills .............................................................. 1
Nikos J. Mourtos, San Jose State University, USA

Chapter 2
Why Get Your Engineering Programme Accredited? ........................................................................... 18
Peter Goodhew, University of Liverpool, UK

Chapter 3
Quality and Environmental Management Systems in the Fashion Supply Chain ................................ 21
Chris K. Y. Lo, The Hong Kong Polytechnic University, Hong Kong

Chapter 4
People-Focused Knowledge Sharing Initiatives in Medium-High and High Technology Companies:
Organizational Facilitating Conditions and Impact on Innovation and Business
Competitiveness .................................................................................................................................... 40
Nekane Aramburu, University of Deusto, Spain
Josune Sáenz, University of Deusto, Spain

Chapter 5
Integrating ‘Designerly’ Ways with Engineering Science: A Catalyst for Change within Product
Design and Development ...................................................................................................................... 56
Ian de Vere, Swinburne University of Technology, Australia
Gavin Melles, Swinburne University of Technology, Australia

Chapter 6
E-Learning for SMEs: Challenges, Potential and Impact ..................................................................... 79
Asbjorn Rolstadas, Norwegian University of Science and Technology, Norway
Bjorn Andersen, Norwegian University of Science and Technology, Norway
Manuel Fradinho, Cyntelix, The Netherlands

Chapter 7
Categorization of Losses across Supply Chains: Cases of Manufacturing Firms ................................ 98
Priyanka Singh, Jet Airways Limited, India
Faraz Syed, Shri Shankaracharya Group of Institutions, India
Geetika Sinha, ICICI Lombard, India

Chapter 8
Collaborative Demand and Supply Planning Networks ..................................................................... 108
Hans-Henrik Hvolby, Aalborg University, Denmark
Kenn Steger-Jensen, Aalborg University, Denmark
Erlend Alfnes, Norwegian University of Science and Technology, Norway
Heidi C. Dreyer, Norwegian University of Science and Technology, Norway

Chapter 9
Instructional Design of an Advanced Interactive Discovery Environment: Exploring Team
Communication and Technology Use in Virtual Collaborative Engineering Problem
Solving ................................................................................................................................................ 117
YiYan Wu, Syracuse University, USA
Tiffany A. Koszalka, Syracuse University, USA

Chapter 10
Modes of Open Innovation in Service Industries and Process Innovation: A Comparative
Analysis .............................................................................................................................................. 137
Sean Kask, INGENIO (CSIC-UPV), Spain

Chapter 11
Production Competence and Knowledge Generation for Technology Transfer: A Comparison
between UK and South African Case Studies .................................................................................... 159
Ian Hipkin, École Supérieure de Commerce de Pau, France

Chapter 12
Tracing the Implementation of Non-Functional Requirements .......................................................... 172
Stephan Bode, Ilmenau University of Technology, Germany
Matthias Riebisch, Ilmenau University of Technology, Germany

Section 2
Development and Design Methodologies

This section provides in-depth coverage of conceptual architectures and frameworks to provide the reader with a comprehensive understanding of the emerging developments within the field of Industrial Engineering. Research fundamentals imperative to the understanding of developmental processes within Industrial Engineering are offered. From broad examinations to specific discussions on methodology, the research found within this section spans the discipline while offering detailed, specific discussions. From basic designs to abstract development, these chapters serve to expand the reaches of development and design technologies within the Industrial Engineering community. This section includes 15 contributions from researchers throughout the world on the topic of Industrial Engineering.

Chapter 13
Learning Parametric Designing .......................................................................................................... 197
Marc Aurel Schnabel, The Chinese University of Hong Kong, Hong Kong

Chapter 14
Service Design: New Methods for Innovating Digital User Experiences for Leisure ........................ 211
Satu Miettinen, Savonia University of Applied Sciences, Finland

Chapter 15
A Mass Customisation Implementation Model for the Total Design Process of the Fashion
System ................................................................................................................................................ 223
Bernice Pan, Seamsystemic Design Research, UK

Chapter 16
Integration of Fuzzy Logic Techniques into DSS for Profitability Quantification in a Manufacturing
Environment ....................................................................................................................................... 242
Irraivan Elamvazuthi, Universiti Teknologi PETRONAS, Malaysia
Pandian Vasant, Universiti Teknologi PETRONAS, Malaysia
Timothy Ganesan, Universiti Teknologi PETRONAS, Malaysia

Chapter 17
Control Model for Intelligent and Demand-Driven Supply Chains ................................................... 262
Jan Ola Strandhagen, SINTEF Technology and Society, Norway
Heidi Carin Dreyer, Norwegian University of Science and Technology, Norway
Anita Romsdal, Norwegian University of Science and Technology, Norway

Chapter 18
Reducing Design Margins by Adaptive Compensation for Thermal and Aging Variations ............... 284
Zhenyu Qi, University of Virginia, USA
Yan Zhang, University of Virginia, USA
Mircea Stan, University of Virginia, USA

Chapter 19
Modeling Closed Loop Supply Chain Systems .................................................................................. 313
Roberto Poles, University of Melbourne, Australia

Chapter 20
A Production Planning Optimization Model for Maximizing Battery Manufacturing
Profitability ......................................................................................................................................... 343
Hesham K. Alfares, King Fahd University of Petroleum & Minerals, Saudi Arabia

Chapter 21
Multi-Objective Optimization of Manufacturing Processes Using Evolutionary
Algorithms .......................................................................................................................................... 352
M. Kanthababu, Anna University, India

Chapter 22
Decision Support Framework for the Selection of a Layout Type ..................................................... 377
Jannes Slomp, University of Groningen, The Netherlands
Jos A.C. Bokhorst, University of Groningen, The Netherlands

Chapter 23
Petri Net Model Based Design and Control of Robotic Manufacturing Cells ................................... 393
Gen’ichi Yasuda, Nagasaki Institute of Applied Science, Japan

Chapter 24
Lean Thinking Based Investment Planning at Design Stage of Cellular/Hybrid Manufacturing
Systems ............................................................................................................................................... 409
M. Bulent Durmusoglu, Istanbul Technical University, Turkey
Goksu Kaya, Istanbul Technical University, Turkey

Chapter 25
Internal Supply Chain Integration: Effective Integration Strategies in the Global
Context ............................................................................................................................................... 430
Virpi Turkulainen, Aalto University, Finland

Chapter 26
Equipment Replacement Decisions Models with the Context of Flexible Manufacturing
Cells ................................................................................................................................................... 453
Ioan Constantin Dima, Valahia University of Târgovişte, Romania
Janusz Grabara, Częstochowa University of Technology, Poland
Mária Nowicka-Skowron, Częstochowa University of Technology, Poland

Chapter 27
Enhancing Engineering Education Learning Outcomes Using Project-Based Learning:
A Case Study ...................................................................................................................................... 464
Mousumi Debnath, Jaipur Engineering College and Research Centre, India
Mukeshwar Pandey, Jaipur Engineering College and Research Centre, India

Section 3
Tools and Technologies

This section presents extensive coverage of the various tools and technologies available in the field of Industrial Engineering that practitioners and academicians alike can utilize to develop different techniques. These chapters enlighten readers about fundamental research on the many tools facilitating the burgeoning field of Industrial Engineering. It is through these rigorously researched chapters that the reader is provided with countless examples of the up-and-coming tools and technologies emerging from the field of Industrial Engineering. With 14 chapters, this section offers a broad treatment of some of the many tools and technologies within the Industrial Engineering field.

Chapter 28
Semantic Technologies in Motion: From Factories Control to Customer Relationship
Management ....................................................................................................................................... 477
Ricardo Colomo-Palacios, Universidad Carlos III de Madrid, Spain

Chapter 29
Similarity-Based Cluster Analysis for the Cell Formation Problem .................................................. 499
Riccardo Manzini, University of Bologna, Italy
Riccardo Accorsi, University of Bologna, Italy
Marco Bortolini, University of Bologna, Italy

Chapter 30
Performance Comparison of Cellular Manufacturing Configurations in Different Demand
Profiles ............................................................................................................................................... 522
Paolo Renna, University of Basilicata, Italy
Michele Ambrico, University of Basilicata, Italy

Chapter 31
Optimization and Mathematical Programming to Design and Planning Issues in Cellular
Manufacturing Systems under Uncertain Situations .......................................................................... 539
Vahidreza Ghezavati, Islamic Azad University, Iran
Mohammad Saidi-Mehrabad, University of Science and Technology, Iran
Mohammad Saeed Jabal-Ameli, University of Science and Technology, Iran
Ahmad Makui, University of Science and Technology, Iran
Seyed Jafar Sadjadi, University of Science and Technology, Iran

Chapter 32
Multi-Modal Assembly-Support System for Cellular Manufacturing ............................................... 559
Feng Duan, Nankai University, China
Jeffrey Too Chuan Tan, The University of Tokyo, Japan
Ryu Kato, The University of Electro-Communications, Japan
Chi Zhu, Maebashi Institute of Technology, Japan
Tamio Arai, The University of Tokyo, Japan

Chapter 33
Modeling and Simulation of Discrete Event Robotic Systems Using Extended Petri
Nets .................................................................................................................................................... 577
Gen’ichi Yasuda, Nagasaki Institute of Applied Science, Japan

Chapter 34
Human-Friendly Robots for Entertainment and Education ............................................................... 594
Jorge Solis, Waseda University, Japan & Karlstad University, Sweden
Atsuo Takanishi, Waseda University, Japan

Chapter 35
Dual-SIM Phones: A Disruptive Technology? ................................................................................... 617
Dickinson C. Odikayor, Landmark University, Nigeria
Ikponmwosa Oghogho, Landmark University, Nigeria
Samuel T. Wara, Federal University Abeokuta, Nigeria
Abayomi-Alli Adebayo, Igbinedion University Okada, Nigeria

Chapter 36
Data Envelopment Analysis in Environmental Technologies ............................................................ 625
Peep Miidla, University of Tartu, Estonia

Chapter 37
Constrained Optimization of JIT Manufacturing Systems with Hybrid Genetic
Algorithm ........................................................................................................................................... 643
Alexandros Xanthopoulos, Democritus University of Thrace, Greece
Dimitrios E. Koulouriotis, Democritus University of Thrace, Greece

Chapter 38
Comparison of Connected vs. Disconnected Cellular Systems: A Case Study ................................. 663
Gürsel A. Süer, Ohio University, USA
Royston Lobo, S.S. White Technologies Inc., USA

Chapter 39
AutomatL@bs Consortium: A Spanish Network of Web-based Labs for Control Engineering
Education ........................................................................................................................................... 679
Sebastián Dormido, Universidad Nacional de Educación a Distancia, Spain
Héctor Vargas, Pontificia Universidad Católica de Valparaíso, Chile
José Sánchez, Universidad Nacional de Educación a Distancia, Spain

Volume II

Chapter 40
An Estimation of Distribution Algorithm for Part Cell Formation Problem ...................................... 699
Saber Ibrahim, University of Sfax, Tunisia
Bassem Jarboui, University of Sfax, Tunisia
Abdelwaheb Rebaï, University of Sfax, Tunisia

Chapter 41
A LabVIEW-Based Remote Laboratory: Architecture and Implementation ...................................... 726
Yuqiu You, Morehead State University, USA

Section 4
Utilization and Application

This section discusses a variety of applications and opportunities available that can be considered by practitioners in developing viable and effective Industrial Engineering programs and processes. This section includes 14 chapters that review topics from case studies in Cyprus to best practices in Africa and ongoing research in the United States. Further chapters discuss Industrial Engineering in a variety of settings (air travel, education, gaming, etc.). Contributions included in this section provide excellent coverage of today’s IT community and how research into Industrial Engineering is impacting the social fabric of our present-day global village.

Chapter 42
Using Serious Games for Collecting and Modeling Human Procurement Decisions in a Supply
Chain Context ..................................................................................................................................... 744
Souleiman Naciri, Laboratory for Production Management and Processes, Ecole Polytechnique Fédérale de Lausanne, Switzerland
Min-Jung Yoo, Laboratory for Production Management and Processes, Ecole Polytechnique Fédérale de Lausanne, Switzerland
Rémy Glardon, Laboratory for Production Management and Processes, Ecole Polytechnique Fédérale de Lausanne, Switzerland

Chapter 43
Serious Gaming Supporting Competence Development in Sustainable Manufacturing ................... 766
Heiko Duin, BIBA – Bremer Institut für Produktion und Logistik GmbH, Germany
Gregor Cerinšek, Institute for Innovation and Development of University of Ljubljana, Slovenia
Manuel Fradinho, The Foundation for Scientific and Industrial Research at the Norwegian Institute of Technology, Norway
Marco Taisch, Politecnico di Milano, Italy

Chapter 44
Reengineering for Enterprise Resource Planning (ERP) Systems Implementation: An Empirical
Analysis of Assessing Critical Success Factors (CSFs) of Manufacturing Organizations ................. 791
C. Annamalai, Universiti Sains Malaysia, Malaysia
T. Ramayah, Universiti Sains Malaysia, Malaysia

Chapter 45
Optimal Pricing and Inventory Decisions for Fashion Retailers under Value-At-Risk Objective:
Applications and Review .................................................................................................................... 807
Chun-Hung Chiu, City University of Hong Kong, Hong Kong
Jin-Hui Zheng, The Hong Kong Polytechnic University, Hong Kong
Tsan-Ming Choi, The Hong Kong Polytechnic University, Hong Kong

Chapter 46
Implementation of Rapid Manufacturing Systems in the Jewellery Industry in Brazil: Some
Experiences in Small and Medium-Sized Companies ....................................................................... 817
Juan Carlos Campos Rúbio, Universidade Federal de Minas Gerais, Brasil
Eduardo Romeiro Filho, Universidade Federal de Minas Gerais, Brasil

Chapter 47
Cases Illustrating Risks and Crisis Management ............................................................................... 838
Simona Mihai Yiannaki, European University, Cyprus

Chapter 48
Aircraft Development and Design: Enhancing Product Safety through Effective Human Factors
Engineering Design Solutions ............................................................................................................ 858
Dujuan B. Sevillian, Large Aircraft Manufacturer, USA

Chapter 49
Adoption of Information Technology Governance in the Electronics Manufacturing Sector in
Malaysia ............................................................................................................................................. 887
Wil Ly Teo, Universiti Teknologi Malaysia
Khong Sin Tan, Multimedia University, Malaysia

Chapter 50
An Environmentally Integrated Manufacturing Analysis Combined with Waste Management in a
Car Battery Manufacturing Plant ....................................................................................................... 907
Suat Kasap, Hacettepe University, Turkey
Sibel Uludag Demirer, Villanova University, USA
Sedef Ergün, Drogsan Pharmaceuticals, Turkey

Chapter 51
Ghabbour Group ERP Deployment: Learning From Past Technology Failures ................................ 933
M. S. Akabawi, American University in Cairo, Egypt

Chapter 52
Matching Manufacturing and Retailing Models in Fashion .............................................................. 959
Simone Guercini, University of Florence, Italy

Chapter 53
Production Information Systems Usability in Jordan ......................................................................... 975
Emad Abu-Shanab, Yarmouk University, Jordan
Heyam Al-Tarawneh, Ministry of Education, Jordan

Chapter 54 Research into the Path Evolution of Manufacturing in the Transitional Period in Mainland China.................................................................................................................................................... 990 Tao Chen, SanJiang University, China, Nanjing Normal University, China, & Harbin Institute of Technology, China Li Kang, SanJiang University, China, & Nanjing Normal University, China Zhengfeng Ma, Nanjing Normal University, China Zhiming Zhu, Hohai University, China Chapter 55 UB1-HIT Dual Master’s Programme: A Double Complementary International Collaboration Approach............................................................................................................................................ 1001 David Chen, IMS-University of Bordeaux 1, France Bruno Vallespir, IMS-University of Bordeaux 1, France Jean-Paul Bourrières, IMS-University of Bordeaux 1, France Thècle Alix, IMS-University of Bordeaux 1, France Section 5 Organizational and Social Implications This section includes a wide range of research pertaining to the social and behavioral impact of Industrial Engineering around the world. Chapters introducing this section critically analyze and discuss trends in Industrial Engineering, such as participation, attitudes, and organizational change. Additional chapters included in this section look at process innovation and group decision making. Also investigating a concern within the field of Industrial Engineering is research which discusses the effect of customer power on Industrial Engineering. With 13 chapters, the discussions presented in this section offer research into the integration of global Industrial Engineering as well as implementation of ethical and workflow considerations for all organizations.

Chapter 56 Process Innovation with Ambient Intelligence (AmI) Technologies in Manufacturing SMEs: Absorptive Capacity Limitations....................................................................................................... 1026 Kathryn J. Hayes, University of Western Sydney, Australia Ross Chapman, Deakin University Melbourne, Australia Chapter 57 Teaching Technology Computer Aided Design (TCAD) Online . .................................................... 1043 Chinmay K Maiti, Indian Institute of Technology, India Ananda Maiti, Indian Institute of Technology, India Chapter 58 Implementing Business Intelligence in the Dynamic Beverages Sales and Distribution Environment....................................................................................................................................... 1064 Sami Akabawi, American University in Cairo, Egypt Heba Hodeeb, American University in Cairo, Egypt

Chapter 59 Sharing Scientific and Social Knowledge in a Performance Oriented Industry: An Evaluation Model......................................................................................................................... 1085 Haris Papoutsakis, Technological Education Institute of Crete, Greece Chapter 60 Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral.............................................................................................................................................. 1115 Cengiz Kahraman, Istanbul Technical University, Turkey Selçuk Çebi, Karadeniz Technical University, Turkey İhsan Kaya, Selçuk University, Turkey Chapter 61 Operator Assignment Decisions in a Highly Dynamic Cellular Environment.................................. 1135 Gürsel A. Süer, Ohio University, USA Omar Alhawari, Royal Hashemite Court, Jordan Chapter 62 Capacity Sharing Issue in an Electronic Co-Opetitive Network: A Simulative Approach................ 1153 Paolo Renna, University of Basilicata, Italy Pierluigi Argoneto, University of Basilicata, Italy Chapter 63 Evaluation of Remote Interface Component Alternatives for Teaching Tele-Robotic Operation............................................................................................................................................ 1180 Goldstain Ofir, Tel-Aviv University, Israel Ben-Gal Irad, Tel-Aviv University, Israel Bukchin Yossi, Tel-Aviv University, Israel Chapter 64 Cell Loading and Family Scheduling for Jobs with Individual Due Dates....................................... 1201 Gürsel A. Süer, Ohio University, USA Emre M. Mese, D.E. Foxx & Associates, Inc., USA Chapter 65 Evaluation of Key Metrics for Performance Measurement of a Lean Deployment Effort................ 1220 Edem G. Tetteh, Paine College, USA Ephrem Eyob, Virginia State University, USA Yao Amewokunu, Virginia State University, USA Chapter 66 Direct Building Manufacturing of Homes with Digital Fabrication.................................................. 1231 Lawrence Sass, Massachusetts Institute of Technology, USA

Chapter 67 eRiskGame: A Persistent Browser-Based Game for Supporting Project-Based Learning in the Risk Management Context................................................................................................................. 1243 Túlio Acácio Bandeira Galvão, Rural Federal University of the Semi-Arid – UFERSA, Brazil Francisco Milton Mendes Neto, Rural Federal University of the Semi-Arid – UFERSA, Brazil Mara Franklin Bonates, Rural Federal University of the Semi-Arid – UFERSA, Brazil Chapter 68 Effect of Customer Power on Supply Chain Integration and Performance....................................... 1260 Xiande Zhao, Chinese University of Hong Kong, Hong Kong Baofeng Huo, Xi’an Jiaotong University, China Barbara B. Flynn, Indiana University, USA Jeff Hoi Yan Yeung, Chinese University of Hong Kong, Hong Kong Chapter 69 Firm-Specific Factors and the Degree of Innovation Openness........................................................ 1288 Valentina Lazzarotti, Carlo Cattaneo University, Italy Raffaella Manzini, Carlo Cattaneo University, Italy Luisa Pellegrini, University of Pisa, Italy Section 6 Managerial Impact This section presents contemporary coverage of the social implications of Industrial Engineering, more specifically related to the corporate and managerial utilization of information sharing technologies and applications, and how these technologies can be extrapolated to be used in Industrial Engineering. Core ideas such as service delivery, gender evaluation, public participation, and other determinants that affect the intention to adopt technological innovations in Industrial Engineering are discussed. Equally as crucial, chapters within this section discuss how leaders can utilize Industrial Engineering applications to get the best outcomes from their shareholders and their customers.

Chapter 70 Offshoring Process: A Comparative Investigation of Danish and Japanese Manufacturing Companies.......................................................................................................................................... 1312 Dmitrij Slepniov, Aalborg University, Denmark Brian Vejrum Wæhrens, Aalborg University, Denmark Hiroshi Katayama, Waseda University, Japan Chapter 71 Network Marketing and Supply Chain Management for Effective Operations Management....................................................................................................................................... 1336 Raj Selladurai, Indiana University Northwest, USA

Chapter 72 Knowledge Management in SMEs: A Mixture of Innovation, Marketing and ICT: Analysis of Two Case Studies............................................................................................................................... 1350 Saïda Habhab-Rave, ISTEC, Paris, France Chapter 73 Developments in Modern Operations Management and Cellular Manufacturing............................. 1362 Vladimír Modrák, Technical University of Kosice, Slovakia (Slovak Republic) Pavol Semančo, Technical University of Kosice, Slovakia (Slovak Republic) Chapter 74 Fashion Supply Chain Management through Cost and Time Minimization from a Network Perspective......................................................................................................................................... 1382 Anna Nagurney, University of Massachusetts Amherst, USA Min Yu, University of Massachusetts Amherst, USA

Volume III Chapter 75 An Exploratory Study on Product Lifecycle Management in the Fashion Chain: Evidences from the Italian Leather Luxury Industry......................................................................... 1402 Romeo Bandinelli, Università degli Studi di Firenze, Italy Sergio Terzi, Università degli Studi di Bergamo, Italy Chapter 76 Knowledge Dissemination in Portals................................................................................................. 1418 Steven Woods, Boeing Phantom Works, USA Stephen Poteet, Boeing Phantom Works, USA Anne Kao, Boeing Phantom Works, USA Lesley Quach, Boeing Phantom Works, USA Chapter 77 A Comparative Analysis of Activity-Based Costing and Traditional Costing Systems: The Case of Egyptian Metal Industries Company............................................................................. 1429 Khaled Samaha, American University in Cairo, Egypt Sara Abdallah, British University in Egypt, Egypt Chapter 78 Complex Real-Life Supply Chain Planning Problems ..................................................................... 1441 Behnam Fahimnia, University of South Australia, Australia Mohammad Hassan Ebrahimi, InfoTech International Company, Iran Reza Molaei, Iran Broadcasting Services, Iran

Chapter 79 E-Government Clusters: From Framework to Implementation......................................................... 1467 Kristian J. Sund, Middlesex University Business School, UK Ajay Kumar Reddy Adala, Centre for e-Governance, India Chapter 80 Hybrid Algorithms for Manufacturing Rescheduling: Customised vs. Commodity Production.......................................................................................................................................... 1488 Luisa Huaccho Huatuco, University of Leeds, UK Ani Calinescu, University of Oxford, UK Chapter 81 Negotiation Protocol Based on Budget Approach for Adaptive Manufacturing Scheduling ........... 1517 Paolo Renna, University of Basilicata, Italy Rocco Padalino, University of Basilicata, Italy Chapter 82 Research Profiles: Prolegomena to a New Perspective on Innovation Management........................ 1539 Gretchen Jordan, Sandia National Laboratories, USA Jonathon Mote, Southern Illinois University, USA Jerald Hage, University of Maryland, USA Section 7 Critical Issues This section contains 13 chapters, giving a wide variety of perspectives on Industrial Engineering and its implications. Such perspectives include reading in privacy, gender, ethics, and several more. The section also discusses new ethical considerations within social constructivism and gender gaps. Within the chapters, the reader is presented with an in-depth analysis of the most current and relevant issues within this growing field of study. Crucial questions are addressed and alternatives offered, and topics discussed such as creative regions in Europe, ethos as an enabler of organizational knowledge creation, and design of manufacturing cells based on graph theory.

Chapter 83 Cultural Models and Variations......................................................................................................... 1560 Yongjiang Shi, Institute for Manufacturing, University of Cambridge, UK Zheng Liu, University of Cambridge, UK Chapter 84 New Design Paradigm: Shaping and Employment............................................................................ 1574 Vladimir M. Sedenkov, Belarusian State University, Belarus Chapter 85 Dynamics in Knowledge . ................................................................................................................. 1595 Shigeki Sugiyama, University of Gifu, Japan

Chapter 86 Tool and Information Centric Design Process Modeling: Three Case Studies.................................. 1613 William Stuart Miller, Clemson University, USA Joshua D. Summers, Clemson University, USA Chapter 87 Application of Dynamic Analysis in a Centralised Supply Chain..................................................... 1638 Mu Niu, Northumbria University, UK Petia Sice, Northumbria University, UK Ian French, University of Teesside, UK Erik Mosekilde, The Technical University of Denmark, Denmark Chapter 88 The Drivers for a Sustainable Chemical Manufacturing Industry..................................................... 1659 George M. Hall, University of Central Lancashire, UK Joe Howe, University of Central Lancashire, UK Chapter 89 Cellular or Functional Layout?........................................................................................................... 1680 Abdessalem Jerbi, University of Sfax, Tunisia Hédi Chtourou, University of Sfax, Tunisia Chapter 90 Random Dynamical Network Automata for Nanoelectronics: A Robustness and Learning Perspective......................................................................................................................................... 1699 Christof Teuscher, Portland State University, USA Natali Gulbahce, Northeastern University, USA Thimo Rohlf, Genopole, France Alireza Goudarzi, Portland State University, USA Chapter 91 Creative Regions in Europe: Exploring Creative Industry Agglomeration and the Wealth of European Regions.............................................................................................................................. 1719 Blanca de-Miguel-Molina, Universitat Politècnica de València, Spain José-Luis Hervás-Oliver, Universitat Politècnica de València, Spain Rafael Boix, Universitat de València, Spain María de-Miguel-Molina, Universitat Politècnica de València, Spain Chapter 92 Design of Manufacturing Cells Based on Graph Theory................................................................... 1734 José Francisco Ferreira Ribeiro, University of São Paulo, Brazil Chapter 93 Ethos as Enablers of Organisational Knowledge Creation................................................................ 1749 Yoshito Matsudaira, Japan Advanced Institute of Science and Technology, Japan

Chapter 94 Engineering Design as Research . ..................................................................................................... 1766 Timothy L.J. Ferris, Defence and Systems Institute, University of South Australia, Australia Chapter 95 Engineer-to-Order: A Maturity Concurrent Engineering Best Practice in Improving Supply Chains................................................................................................................................................ 1780 Richard Addo-Tenkorang, University of Vaasa, Finland Ephrem Eyob, Virginia State University, USA Section 8 Emerging Trends

This section highlights research potential within the field of Industrial Engineering while exploring uncharted areas of study for the advancement of the discipline. Introducing this section are chapters that set the stage for future research directions and topical suggestions for continued debate, centering on new venues and forums for discussion. A pair of chapters on supply chain management and green computing makes up the middle of this final 14-chapter section, and the book concludes with a look ahead into the future of the Industrial Engineering field with “Zero-Downtime Reconfiguration of Distributed Control Logic in Industrial Automation and Control.” In all, this text will serve as a vital resource to practitioners and academics interested in the best practices and applications of the burgeoning field of Industrial Engineering.

Chapter 96 Advanced Technologies for Transient Faults Detection and Compensation .................................... 1798 Matteo Sonza Reorda, Politecnico di Torino, Italy Luca Sterpone, Politecnico di Torino, Italy Massimo Violante, Politecnico di Torino, Italy Chapter 97 Augmented Reality for Collaborative Assembly Design in Manufacturing Sector........................... 1821 Rui (Irene) Chen, The University of Sydney, Australia Xiangyu Wang, The University of Sydney, Australia Lei Hou, The University of Sydney, Australia Chapter 98 E-Business/ICT and Carbon Emissions............................................................................................. 1833 Lan Yi, China University of Geosciences (Wuhan), China Chapter 99 Building for the Future: Systems Implementation in a Construction Organization.......................... 1853 Hafez Salleh, University of Malaya, Malaysia Eric Lou, University of Salford, UK

Chapter 100 Embedded RFID Solutions: Challenges for Product Design and Development................................ 1873 Álvaro M. Sampaio, Polytechnic Institute of Cávado and Ave, Portugal & University of Minho, Portugal António J. Pontes, University of Minho, Portugal Ricardo Simões, Polytechnic Institute of Cávado and Ave, Portugal & University of Minho, Portugal Chapter 101 Future Trends in SCM........................................................................................................................ 1885 Reza Zanjirani Farahani, Kingston University London, UK Faraz Dadgostari, Amirkabir University of Technology, Iran Ali Tirdad, University of British Columbia, Canada Chapter 102 Green Computing as an Ecological Aid in Industry.......................................................................... 1903 Oliver Avram, Ecole Polytechnique Fédérale de Lausanne, Switzerland Ian Stroud, Ecole Polytechnique Fédérale de Lausanne, Switzerland Paul Xirouchakis, Ecole Polytechnique Fédérale de Lausanne, Switzerland Chapter 103 Improving Energy-Efficiency of Scientific Computing Clusters....................................................... 1916 Tapio Niemi, Helsinki Institute of Physics, Finland Jukka Kommeri, Helsinki Institute of Physics, Finland Ari-Pekka Hameri, University of Lausanne, Switzerland Chapter 104 Organic Solar Cells Modeling and Simulation.................................................................................. 1934 Mihai Razvan Mitroi, Polytechnic University of Bucharest, Romania Laurentiu Fara, Polytechnic University of Bucharest, Romania & Academy of Romanian Scientists, Romania Andrei Galbeaza Moraru, Polytechnic University of Bucharest, Romania Chapter 105 Programming Robots in Kindergarten to Express Identity: An Ethnographic Analysis.................... 1952 Marina Umaschi Bers, Tufts University, USA Alyssa B. Ettinger, Tufts University, USA Chapter 106 Prototyping of Robotic Systems in Surgical Procedures and Automated Manufacturing Processes............................................................................................................................................ 1969 Zheng (Jeremy) Li, University of Bridgeport, USA

Chapter 107 Software Process Lines: A Step towards Software Industrialization................................................. 1988 Mahmood Niazi, Keele University, UK & King Fahd University of Petroleum and Minerals, Saudi Arabia Sami Zahran, Process Improvement Consultant, UK Chapter 108 Super High Efficiency Multi-Junction Solar Cells and Concentrator Solar Cells............................. 2003 Masafumi Yamaguchi, Toyota Technological Institute, Japan Chapter 109 Zero-Downtime Reconfiguration of Distributed Control Logic in Industrial Automation and Control .............................................................................................................................................. 2024 Thomas Strasser, AIT Austrian Institute of Technology, Austria Alois Zoitl, Vienna University of Technology, Austria Martijn Rooker, PROFACTOR GmbH, Austria

Section 1

Fundamental Concepts and Theories

This section serves as a foundation for this exhaustive reference tool by addressing underlying principles essential to the understanding of Industrial Engineering. Chapters found within these pages provide an excellent framework in which to position Industrial Engineering within the field of information science and technology. Insight regarding the critical incorporation of global measures into Industrial Engineering is addressed, while crucial stumbling blocks of this field are explored. With 10 chapters comprising this foundational section, the reader can learn and choose from a compendium of expert research on the elemental theories underscoring the Industrial Engineering discipline.


Chapter 1

Defining, Teaching, and Assessing Engineering Design Skills

Nikos J. Mourtos, San Jose State University, USA

ABSTRACT

The paper discusses a systematic approach for defining, teaching, and assessing engineering design skills. Although the examples presented in the paper are from the field of aerospace engineering, the principles apply to engineering design in general. What makes the teaching of engineering design particularly challenging is that the necessary skills and attributes are both technical and non-technical and come from the cognitive as well as the affective domains. Each set of skills requires a different approach to teach and assess. Implementing a variety of approaches for a number of years at SJSU has shown that it is just as necessary to teach affective skills as it is to teach cognitive skills. As one might expect, each set of skills presents its own challenges.

DOI: 10.4018/978-1-4666-1945-6.ch001

INTRODUCTION

Design is the heart of engineering practice. In fact, many engineering experts consider design as being synonymous with engineering. Yet engineering schools have come under increasing criticism since World War II for overemphasizing analytical approaches and engineering science at the expense of hands-on design skills (Seely, 1999; Petrosky, 2000). As the editor of Machine Design put it, schools are being charged with not responding to industry needs for hands-on design talent, but instead are grinding out legions of research scientists (Curry, 1991). In response to this criticism and to increase student retention, many engineering schools, including SJSU, introduce design at the freshman level to excite students about engineering. Freshman design also helps students put the entire curriculum into perspective, by viewing each subject as a necessary tool in the design process. Design is also distributed across a variety of junior- and senior-level courses in the form of mini design projects and is finally experienced in a more realistic setting in a two-semester senior design capstone experience.

The paper first attempts to provide a comprehensive definition of design skills. Subsequently, it presents a model for curriculum design that addresses these skills. Lastly, it presents ideas for assessing student competence in design. What makes teaching engineering design particularly challenging is that the necessary skills and attributes are technical as well as non-technical, and come from the cognitive as well as the affective domains. For example, the ability to define “real world” problems in practical (engineering) terms, to investigate and evaluate prior solutions, and to develop constraints and criteria for evaluation are technical skills, while the ability to communicate the results of a design, to work in teams, and to decide on the best course of action when a decision has ethical implications are non-technical skills. Most technical skills are cognitive; however, there are several skills from the affective domain as well, such as the willingness to spend time reading, gathering information and defining the problem, and the willingness to take risks and cope with ambiguity, to welcome change and manage stress. All these skills, technical and non-technical, cognitive and affective, are essential for engineers, yet each requires a different approach to teach and assess.

DEFINING ENGINEERING DESIGN SKILLS

What is Engineering?

To define the skills necessary for design engineers, we need to start with the definition of engineering itself. Nicolai (1988) defines engineering as the design of a commodity for the benefit of mankind. Obviously, the word design is key to the definition of engineering. Engineers design things in their attempt to solve everyday problems and improve the quality of our lives. As Theodore von Kármán put it: “A scientist discovers that which exists. An engineer creates that which never was.”

What is Design?

The next step in our search for design skills is to define design itself. “Design is a process through which one creates and transforms ideas and concepts into a product that satisfies certain requirements and constraints.” Design requirements are usually technical and describe the performance expectations of the product, as specified by the customer or a perceived need. For example, a new passenger airplane may have mission requirements such as:

• A range of 3,000 km (i.e., the distance it will be able to fly without refueling).
• A payload of 100 passengers (i.e., the number of passengers along with their luggage it will be able to carry).
• A flight speed of 750 km/hr at a cruise altitude of 10 km.
• A takeoff field length of 1,500 m at standard sea level conditions.
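Requirements like these are essentially structured data, and it can help to see them that way. The sketch below is illustrative only: the class and function names and the candidate design's numbers are invented for this example, not taken from the chapter. It encodes the four mission requirements above and checks a hypothetical design against them.

```python
from dataclasses import dataclass

@dataclass
class MissionRequirements:
    range_km: float                # distance flown without refueling
    payload_pax: int               # passengers along with their luggage
    cruise_speed_kmh: float        # at the given cruise altitude
    cruise_altitude_km: float
    takeoff_field_length_m: float  # at standard sea level conditions

@dataclass
class CandidateDesign:
    range_km: float
    payload_pax: int
    cruise_speed_kmh: float
    cruise_altitude_km: float
    takeoff_field_length_m: float

def meets(req: MissionRequirements, d: CandidateDesign) -> list[str]:
    """Return the list of requirements the candidate design fails."""
    failures = []
    if d.range_km < req.range_km:
        failures.append("range")
    if d.payload_pax < req.payload_pax:
        failures.append("payload")
    if d.cruise_speed_kmh < req.cruise_speed_kmh:
        failures.append("speed")
    if d.takeoff_field_length_m > req.takeoff_field_length_m:
        failures.append("takeoff field length")  # here, shorter is better
    return failures

# The requirements quoted in the text; the candidate's numbers are made up.
req = MissionRequirements(3000, 100, 750, 10, 1500)
candidate = CandidateDesign(3200, 100, 760, 10.5, 1650)
print(meets(req, candidate))  # ['takeoff field length']
```

A real requirements check involves far more than threshold comparisons, but even this toy version makes concrete the distinction between a requirement (what the customer specifies) and a candidate design (what the engineer proposes).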

The performance requirements specified by an airline (the customer), however, are not the only technical requirements that a passenger airplane must meet. To be certified, the plane must also satisfy additional airworthiness requirements. For example, FAR 25.121 part (b) refers to the ability of the plane to climb with one engine inoperative and requires that:

• In the takeoff configuration with the landing gear fully retracted, but without ground effect, the airplane must be able to maintain a steady climb gradient of at least 2.4% for two-engine airplanes, 2.7% for three-engine airplanes, and 3% for four-engine airplanes, at a climb speed that is also specified and known as V2 (Flightsim Aviation Zone, 2010).

Such airworthiness requirements often prove to be more challenging than the original performance requirements specified by the customer. Additional design requirements, not specified by the customer, are not unique to aerospace engineering. For example, civil and architectural engineers must satisfy building code requirements, usually set by cities or countries.

The definition of design also mentions constraints. Constraints are sometimes difficult to distinguish from requirements. They may be viewed as limitations stated in regards to materials, cost, environmental factors, etc. For example, the Hughes H-4 Hercules aircraft, the largest flying boat ever built, was made out of wood because of wartime restrictions on the use of aluminum (Wikipedia, 2011). Another example is the noise standards for transport aircraft (Flightsim Aviation Zone, 2010).

In summary, design engineers must satisfy technical requirements, as specified by the customer, and possibly additional technical requirements related to safety. Furthermore, they must be concerned with the broader impact of their designs on individuals, society, and the environment. This has become increasingly important in our interconnected, globalized world. Pink (2005) adds yet another challenge to engineering design, one that relates to aesthetics. He argues that because of the ‘abundance’ of products we have come to expect in the 21st century, the lower manufacturing cost in many countries, and the fact that many engineering tasks can now be automated, it is no longer enough to create a product that’s reasonably priced and adequately functional. It must also be beautiful, unique, and meaningful. This requirement adds a new dimension to engineering design, a dimension that has much in common with the creative arts.
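The FAR 25.121(b) rule quoted above depends only on the number of engines, which makes it easy to express in code. The following sketch is a hypothetical illustration (the function name and the sample gradients are invented); it captures only the 2.4% / 2.7% / 3% thresholds named in the text.

```python
# Required one-engine-inoperative steady climb gradient at V2,
# keyed by number of engines, per the FAR 25.121(b) passage above.
REQUIRED_GRADIENT = {2: 0.024, 3: 0.027, 4: 0.030}  # 2.4%, 2.7%, 3.0%

def climb_requirement_met(engines: int, achieved_gradient: float) -> bool:
    """True if the achieved OEI climb gradient meets the requirement."""
    return achieved_gradient >= REQUIRED_GRADIENT[engines]

print(climb_requirement_met(2, 0.026))  # True: 2.6% >= 2.4%
print(climb_requirement_met(4, 0.028))  # False: 2.8% < 3.0%
```

Note how the same achieved gradient can pass for a twin but fail for a four-engine design, which is one reason airworthiness requirements often prove harder to meet than the customer's original performance requirements.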

The Engineering Design Process

The next step in our search for design skills is to look at the engineering design process. Figure 1 is an attempt to illustrate this iterative process, as it takes place in our brain (Nicolai, 1998). Design begins with brainstorming of ideas. This takes place in the right (creative) part of the brain. There are virtually no rules in generating these ideas. In fact, it is desirable to come up with as many ideas as possible and allow for “wild” ideas as well as conventional ones. While brainstorming, the right brain tends to be holistic, intuitive, and highly nonlinear (i.e., it jumps around). It sees things in their context as well as metaphorically, recognizes patterns, focuses on relationships between the various parts, and cares about aesthetics. Subsequently, each idea is evaluated in the left (analytical) part of the brain under very rigid rules. The left brain acts as a filter on the ideas generated, deciding which ones are viable under the current rules and which ones are not. The left brain tends to be logical, sequential, computer-like. It sees things literally and focuses on categories. As Figure 1 illustrates, the design process involves an iterative cycling through a sequence that involves creative, imaginative exploration, objective analytical evaluation, and finally making a decision. It is in this context, known also as convergent-divergent thinking (Nicolai, 1998), that one should look for the skills and attributes necessary for a good design engineer.

Figure 1. The engineering design process: an iteration between creative synthesis and analytical evaluation (adapted from Nicolai, 1998)

But there is more to the iterative nature of engineering design than the interchange between the right and the left brain illustrated in Figure 1; iteration is also necessary because of the open-ended nature of design. It is simply not possible to follow a linear, step-by-step process to arrive at a single answer or a unique product that meets our need. First of all, design requires numerous assumptions because there are always so many unknowns. Some of these assumptions may be proven wrong down the road, requiring us to go back, make changes, and repeat our calculations, hence the need for iteration. The non-unique nature of design becomes obvious when one looks at the multitude of products available in the market to address a given need.

Figure 2 illustrates the engineering design process. Engineering design begins with identifying a need. This need is articulated in terms of specific technical requirements that the product must meet. Following this design specification, engineers research existing solutions to the problem before proposing any new ones. Brainstorming is the most creative part in the design process. The members of the design team who brainstorm typically bring various perspectives and expertise to the problem. The goal is to create as many ideas as possible, including unusual and wild ones. To achieve this goal, participants are not allowed to criticize any ideas put forth. Rather, to create synergy, they are encouraged to build on others’ ideas. After brainstorming, the group selects two or three of these ideas to move forward with evaluation. Each proposed concept is analyzed systematically using appropriate engineering science in an effort to prove its feasibility and functionality. Hopefully, at least one of these concepts will prove feasible through analysis. A model is then built for actual testing. The tests will hopefully validate one of the proposed concepts, at which point the design is finalized and goes into production.

Figure 2. The engineering design process: From identifying a need to production

Design also requires compromise because requirements often conflict with each other. For example, to provide comfort for airplane passengers one needs a large cross-sectional area. But a large cross-sectional area results in greater drag and compromised fuel efficiency, especially at high speeds. A successful aircraft designer must decide where to draw the line between these two conflicting requirements.
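The comfort-versus-drag compromise can be illustrated with a deliberately crude model. Everything below is a toy: the linear comfort model, the quadratic drag penalty, and all the numbers are invented for illustration. The point is only that when requirements pull in opposite directions, the best design sits at an interior compromise rather than at either extreme.

```python
def design_score(diameter_m: float) -> float:
    comfort = diameter_m                   # toy model: wider cabin, happier passengers
    drag_penalty = 0.1 * diameter_m ** 2   # toy model: drag grows with frontal area
    return comfort - drag_penalty

# Scan a handful of candidate fuselage diameters and keep the best compromise.
candidates = [3.0, 4.0, 5.0, 6.0, 7.0]
best = max(candidates, key=design_score)
print(best)  # 5.0
```

The score improves with diameter up to a point and then degrades as drag dominates, so the scan picks the middle candidate: exactly the "where to draw the line" decision described above.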

Skills and Attributes of Design Engineers

Clearly, engineering design is a very complex process and, as such, it requires several very different sets of skills. These are briefly discussed in the following sub-sections.

Analytical Skills

The right-hand side of Figure 1 attests to the need for traditional engineering analytical skills: solid fundamentals in mathematics, physical science (e.g., physics, chemistry, etc.), and engineering science (e.g., fluid mechanics, thermodynamics, dynamics, etc.). The need for such skills has been articulated in the desired attributes of a global engineer (The Boeing Company & Rensselaer Polytechnic Institute, 1997), as well as in ABET EC 2000, Outcome 3a (Engineering Accreditation Commission):

“A good grasp of engineering science fundamentals, including: mechanics and dynamics, mathematics (including statistics), physical and life sciences, and information science/technology

An ability to apply knowledge of mathematics, science, and engineering”

Open-Ended Problem Solving Skills

Design skills build upon open-ended problem solving skills. Outcome 3e of ABET EC 2000 (Engineering Accreditation Commission) highlights the need for such skills when it states that engineering graduates must be able to identify and formulate engineering problems in addition to being able to solve such problems. Students who are open-ended problem solvers exhibit the attributes listed below (Woods, 1997). Mourtos, Okamoto, and Rhee (2004) classified these attributes according to the various levels of Bloom’s taxonomy of educational objectives in the cognitive and the affective domains (Bloom, 1984; Bloom, Krathwohl, & Masia, 1984):

a. Are willing to spend time reading, gathering information, and defining the problem (Affective)
b. Use a process, as well as a variety of tactics and heuristics, to tackle problems (Cognitive)
c. Monitor their problem-solving process and reflect upon its effectiveness (Affective and Cognitive)
d. Emphasize accuracy rather than speed (Affective and Cognitive)
e. Write down ideas and create charts / figures while solving a problem (Affective and Cognitive)
f. Are organized and systematic (Affective)
g. Are flexible (keep options open, can view a situation from different perspectives / points of view) (Affective)
h. Draw on the pertinent subject knowledge and objectively and critically assess the quality, accuracy, and pertinence of that knowledge / data (Cognitive)
i. Are willing to risk and cope with ambiguity, welcoming change and managing stress (Affective)
j. Use an overall approach that emphasizes fundamentals rather than trying to combine various memorized sample solutions (Cognitive)

It is interesting to note that the need for flexibility (attribute g) is also established as a desired attribute for a global engineer in a context much broader than engineering problem solving (The Boeing Company & Rensselaer Polytechnic Institute, 1997): “Flexibility: the ability and willingness to adapt to rapid and/or major change.” The observation that some of these attributes are associated with the affective domain suggests that engineering design is not all about cognitive skills; it is also about acquiring the right attitudes. Although it is not difficult to illustrate the need for such skills in class, their assessment is more challenging and requires special rubrics. Mourtos (2010) presents an example of a set of rubrics developed to assess open-ended problem solving skills.

A View for Total Engineering

Design engineers must be generalists and acquire a basic understanding of a variety of subjects, from within as well as outside their major – in fact, even from outside of engineering – to develop a view for total engineering. This need has been expressed in three desired attributes for a global engineer (The Boeing Company & Rensselaer Polytechnic Institute, 1997):


• A good understanding of the design and manufacturing process (i.e., understands engineering and industrial perspective)
• A multidisciplinary, systems perspective, along with a product focus
• An awareness of the boundaries of one’s knowledge, along with an appreciation for other areas of knowledge and their interrelatedness with one’s own expertise

For example, an aircraft designer must have a good understanding of the basic aeronautical engineering disciplines: aerodynamics, propulsion, structures and materials, stability and control, performance, weight and balance. In addition, he/she must develop an understanding of how each part is manufactured and how its design and manufacturing affect the acquisition and operation cost of the airplane. This example illustrates the multidisciplinary nature of engineering design. Clearly, being an expert in one of the fields involved and inadequate in one or more of the rest will not work well for a design engineer. Furthermore, engineers must take into consideration a variety of constraints when they design a new product. Some of these constraints are technical; some are non-technical. This expectation is stated in Outcome 3c of ABET EC 2000 (Engineering Accreditation Commission): “Engineering graduates must have an ability to design a system, component, or process to meet desired needs within realistic constraints such as economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability.” The importance of taking into consideration non-technical constraints (e.g., social, political, ethical, safety) is further reinforced in other ABET outcomes as well, where engineering graduates are expected to have:


“3f: an understanding of professional and ethical responsibility.

3h: the broad education necessary to understand the impact of engineering solutions in a global, economic, environmental, and societal context.

3j: a knowledge of contemporary issues”

The same expectation appears in two more of the desired attributes for a global engineer (The Boeing Company & Rensselaer Polytechnic Institute, 1997):

• A basic understanding of the context in which engineering is practiced, including: customer and societal needs and concerns, economics and finance, the environment and its protection, the history of technology and society
• High ethical standards (honesty, sense of personal and social responsibility, fairness, etc.)

In summary, the design engineer must develop an aptitude for systems thinking and maintain sight of the big picture, which is often influenced by technical as well as non-technical factors. Clearly, it is very difficult to quantify a set of specific skills to describe the ideal design engineer. Nevertheless, in an effort to facilitate the teaching and assessment of these design skills, the BSAE Program at SJSU adapted the following set of performance criteria. Aerospace engineering graduates must be able to:

a. Research, evaluate, and compare aerospace vehicles designed for similar missions.
b. Follow a prescribed process to develop the conceptual / preliminary design of an aerospace vehicle.
c. Develop economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability constraints and ensure that the vehicle they design meets these constraints.
d. Select an appropriate configuration for an aerospace vehicle with a specified mission.
e. Develop and compare alternative configurations for an aerospace vehicle, considering trade-offs and appropriate figures of merit.
f. Apply aerospace engineering principles (e.g., aerodynamics, structures, flight mechanics, propulsion, stability and control) to design the various vehicle subsystems.
g. Develop final specifications for an aerospace vehicle.

Ability to Use Design Tools

Freehand Drawing and Visualization

Drawing is the ability to translate a mental image into a visually recognizable form. Eventually any design drawing is rendered as a Computer–Aided Drawing (CAD) with the help of appropriate software. However, CAD is not the best medium when a creative design engineer wants to convey an idea of “how things work” to nontechnical people. Freehand pictorial drawing is most easily and universally understood. Furthermore, a freehand drawing can be a very effective and quick way to communicate ideas in three dimensions when concepts evolve quickly, as is the case during the early stages of design (e.g., brainstorming), at which point it is not worth investing time and effort in a CAD. Leonardo da Vinci (1452 – 1519) was one of the earliest engineers who demonstrated mastery in freehand drawing, making it possible for us today to visualize how his inventions worked and appreciate his genius (Figure 3). Freehand drawing is a right-brain activity because it is free of technical symbols and it is closely associated with our ability to visualize things in three dimensions, an indispensable design skill.

Computer–Aided Drawing and Computer–Aided Design

Unlike freehand drawing with its artistic flavor, engineering drawing is a precise discipline based

on the principles of orthographic projection. In contrast to freehand drawing, engineering drawing emphasizes accuracy, something that has been greatly enhanced by the use of modern computers and graphic capabilities. Today a CAD model is much more than a computer-generated engineering drawing; it involves an extensive database detailing the attributes of an object and allows the object to be rotated, sectioned, and viewed from any angle. This capability is indispensable in the design of complex engineering equipment, such as an airplane, because engineers can now superimpose the various subsystems and immediately see potential conflicts. CAD has led to Computer-Aided Manufacturing (CAM), where the machines that manufacture the various components receive their operating instructions directly from the database in the computer.

Figure 3. Design for flying by Leonardo da Vinci (the drawings of Leonardo da Vinci)

Kinematics

A design engineer needs skills in kinematics, since the various parts of an engineering product move, rotate, and may also expand / retract or fold. An understanding of kinematics (e.g., selecting the proper mechanism and visualizing its operation) allows the design engineer to evaluate what will work and what will not. For example, in the design of an airplane landing gear, the designer must be able to visualize how the gear will fold and retract into its proper space and make sure that it will not conflict with other components in the process. The skills described in this section fall under Outcome 3k of ABET EC 2000, which states that engineering graduates must have an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice.

Interpersonal, Communication, and Team Skills

Interpersonal and Team Skills


Archimedes designed his screw pump (Wikipedia, 2007) alone. This was not uncommon in the ancient world. Similarly, Leonardo da Vinci designed his engineering devices, such as the one shown in Figure 3, alone. Today, working alone to design an engineering product is, for the most part, a thing of the past unless, of course, the product is a very simple one. The complexity of modern engineering products requires engineers to work in teams; in fact, sometimes several teams must work together. For example, in the design of a new transport, it is typical to have a team of engineers for each of the disciplines mentioned above (aerodynamics, controls, manufacturing, etc.). These teams work closely together to meet the same set of mission and airworthiness requirements, while at the same time making sure there are no conflicts between the various airplane sub-systems. Hence, although earlier we expressed the need for design engineers to be generalists, so they can appreciate the multidisciplinary requirements that come into play in the design of a new product, it is not possible for an individual to have enough expertise in each and every one of the technical areas to adequately perform the detail design of all the subsystems, not to mention the analysis of


the impact of a new product in a global, economic, environmental, and societal context. Outcome 3d of ABET EC 2000 states that engineering graduates must have an ability to function on multidisciplinary teams. In today’s multicultural world, this outcome also implies an ability to collaborate with people of different cultures, abilities, and backgrounds. This is further elaborated in the following four desired attributes for a global engineer (The Boeing Company & Rensselaer Polytechnic Institute, 1997):

• An awareness of and strong appreciation for other cultures and their diversity, their distinctiveness, and their inherent value.
• A strong commitment to team work, including extensive experience with and understanding of team dynamics.
• An ability to think both critically and creatively, in both independent and cooperative modes.
• An ability to impart knowledge to others.

The following performance criteria have been chosen to assess this outcome in the BSAE Program at SJSU. Students working in teams are expected to:

• Be committed to the team and the project; be dependable, faithful, and reliable.
• Attend all meetings, arrive on time or early, and come prepared and ready to work.
• Exhibit leadership by taking initiative, making suggestions, providing focus.
• Be creative, bring energy and excitement to the team, and have a “can do” attitude; spark creativity in others.
• Gladly accept responsibility for work and get it done; exhibit a spirit of excellence.
• Demonstrate abilities the team needs and make the most of these abilities by giving fully to the team.
• Communicate clearly with team members when speaking and writing.
• Understand the direction of the team.
• Bring a positive attitude to the team, encourage others, seek consensus, and bring out the best in others.

Communication Skills

Design requires clear and effective communication not only between team members, but also between the team and third parties (management, customers, etc.). Communication usually takes two forms, oral and written, and can be informal, such as between team members, or formal, such as when the team presents information to third parties. All four types are crucial for the success of a project. Needless to say, good verbal communication requires not only the ability to express one’s ideas clearly but also the ability to listen carefully and understand ideas and concerns expressed by others. The need to communicate effectively is outlined in Outcome 3g of ABET EC 2000. In the BSAE Program at SJSU the following performance criteria were selected to express the skills embedded in this outcome. Ability to:

a. Produce well-organized reports, following guidelines.
b. Use clear, correct language and terminology while describing experiments, projects, or solutions to engineering problems.
c. Describe accurately in a few paragraphs a project / experiment performed, the procedure used, and the most important results (abstracts, summaries).
d. Use appropriate graphs and tables following published engineering standards to present results.

It is interesting to note that the desired attribute for a global engineer relating to communication skills includes listening as well as graphic skills as part of the list (The Boeing Company & Rensselaer Polytechnic Institute, 1997): “Good communication skills, including written, verbal, graphic, and listening.” Although graphic skills were discussed earlier in the context of freehand drawing and CAD, the term graphic here includes the ability to prepare engineering graphs that illustrate, for example, parametric studies pertinent to a particular design. One thing that becomes obvious in this discussion is that the skills and attributes necessary for competent engineering design are so integrated that in some cases it is not even possible to draw clear, distinctive lines between them.

CURRICULUM AND COURSE DESIGN FOR TEACHING ENGINEERING DESIGN SKILLS

Like any set of skills, design skills must be introduced early in the curriculum, practiced often, and culminate in a realistic design experience if students are to achieve the level of mastery prescribed in ABET EC 2000 and expected in industry. The following subsections describe how design skills are introduced at the freshman level, dispersed throughout the BSAE curriculum, and culminate in a senior design capstone sequence. The Project-Based Learning (PBL) pedagogical model is used in all the courses where design is taught, and students work in teams for all design projects. Non-traditional ways of assessing design skills are also discussed.

First-Year Design

At SJSU engineering design is first taught in our Introduction to Engineering course (E10). E10 is a one-semester, two-hour lecture / three-hour laboratory course for freshmen, required of all engineering majors. Engineering design is taught


through hands-on projects (PBL) as well as through case studies in engineering failures, which also bring up the subject of engineering ethics. For each project, students work in teams to research, brainstorm, design, build, test, and finally demonstrate a device in class (Mourtos & Furman, 2002). Typically, students participate in two or three projects during the semester. This course design followed well-established research, which shows that first-year design courses help attract and retain engineering students (Ercolano, 1996). E10 students report significant gains in their understanding of design and ethics, as well as in design report writing and briefing skills (Mourtos & Furman, 2002). They report slightly lower gains in open-ended problem solving skills, including estimation and mathematical modeling. On the other hand, they report low gains in team skills. This was due to the fact that team skills were not taught explicitly at the time of the assessment. Despite a significant amount of time spent working in teams, students needed more guidance and coaching on skills such as conflict resolution, task delegation, and decision making. These skills are now taught more explicitly. In addition to student self-reporting, authentic assessment data from course instructors show that engineering freshmen perform fairly well in their design assignments.

Design Globally Dispersed

Teaching and Assessment of Open-Ended Problem Solving Skills

In the BSAE Program design is dispersed throughout the curriculum, so students have an opportunity to practice design in a variety of subjects. Student design practice begins with open-ended problems to help them develop the related skills and attributes described earlier. For example, to help students develop:

a. A habit of doing research before attempting to solve a problem: an extensive literature review is required for all open-ended problems and design projects.
b. Competency in the use of a process, as well as specific tactics and heuristics to solve a problem: a problem-solving methodology is taught, and students are required to use it in the solution of all open-ended problems.
c. An ability to monitor their progress following a problem-solving process: students write a reflection on the effectiveness of their problem-solving process and identify their strengths and weaknesses.
d. A value system in which accuracy is more important than speed: students are given sufficient time to tackle problems, whether in class (exams) or outside of class, and their grading depends heavily on the accuracy of their calculations.
e. A habit of writing down ideas and creating sketches, charts, and figures while solving a problem: students are graded not only on their final answer but also on how well they integrate such features in their solution of problems.
f. An organized and systematic way of approaching problems: students are expected to document in their solutions every step of the problem-solving methodology they are required to follow.
g. An open-mindedness and flexibility when solving problems: students are required to consider, analyze, discuss, and present multiple approaches and solutions to a problem.
h. A risk-taking attitude when solving problems: innovative approaches are encouraged; students are not penalized for presenting such solutions, even when the final outcome is not the best.
i. An ability to use an overall approach that emphasizes fundamentals rather than combining memorized solutions, as well as an ability to cope with ambiguity and manage stress: open-ended problems are practiced in all upper division courses.

Design was originally introduced through projects in several junior level aerospace engineering courses. For example, in aerodynamics (AE162), students designed an airfoil for an ultralight aircraft and a wing for a high subsonic transport, both of which had to meet very specific requirements. Similarly, in propulsion (AE167) students designed a compressor and a turbine and they subsequently matched them for placement in a jet engine with specific thrust requirements. In an effort to address the compartmentalization of traditional engineering curricula this approach was modified in 2005. In each of the junior fall and spring semesters, students now define their own design project that involves applications from at least two courses, taken concurrently in the particular semester (Mourtos, Papadopoulos, & Agrawal, 2006). For example, one project involved the design of a ramjet inlet and required integration of compressible flow (AE164) and propulsion principles (AE167). Another, more ambitious project involved the design of a flexible wing for high maneuverability and required integration of principles from aerospace structures (AE114), aerodynamics (AE162), flight mechanics (AE165), and computational fluid dynamics (AE169). This project-based integration of the curriculum offers students an opportunity to appreciate the integrative nature of aerospace engineering design on a smaller scale, before they delve into a much more demanding senior design experience.

Senior Design Capstone Experience

In their senior year, aerospace engineering students may specialize in aircraft (AE171A&B) or spacecraft (AE172A&B) design. Both course sequences involve the conceptual and preliminary design of an aerospace vehicle. Depending on the project, the experience may also include the detail design and manufacturing of the vehicle. Although only one of these course sequences is


required, a few students choose to take both in lieu of technical electives.

Teaching and Assessment of Team Skills

As anyone who has ever worked in a team knows, team skills are not acquired automatically simply by working in a team; they need to be taught explicitly, practiced regularly, and assessed periodically, just like any other set of skills. Although team skills are now taught in E10 and assessed in every course that involves a team project or experiment, it is in the senior design course sequence that these skills are formally taught and assessed. As the course meets once a week for two and a half hours, the first 15 to 30 minutes are dedicated to building an understanding of how effective teams work. At the beginning of the year, after teams are formed, students engage in various team-building activities. Lessons from these activities are discussed in class. Subsequently, in each class meeting students present and discuss one of the 17 laws of teamwork (Maxwell, 2001). Finally, at the end of each semester students submit a team member report card, in which they evaluate the performance of their teammates as well as their own, using the performance criteria for effective teamwork defined earlier, which are also shown in Table 1.

Table 1. Team member report card

These peer reviews are taken into consideration when assigning individual course grades.
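As one concrete illustration of how peer reviews might feed into grades, the sketch below blends peer and self ratings into a grade multiplier. The 0–5 rating scale, the 20% self-weight, and the `individual_factor` function are assumptions for illustration only; the text does not specify the course's actual weighting policy.

```python
# Illustrative sketch of folding peer reviews into individual grades.
# The 0-5 scale and the 20% self-weight are assumptions for illustration,
# not the actual grading policy of the course described in the text.

def individual_factor(peer_ratings, self_rating, self_weight=0.2):
    """Blend peer and self ratings (0-5 scale) into a 0-1 grade multiplier."""
    peer_avg = sum(peer_ratings) / len(peer_ratings)
    blended = (1 - self_weight) * peer_avg + self_weight * self_rating
    return blended / 5.0

team_grade = 90.0  # grade earned by the team's project (assumed)
factor = individual_factor(peer_ratings=[4, 5, 4], self_rating=5)
print(f"individual grade: {team_grade * factor:.1f}")
```

Whatever the exact scheme, the design choice that matters is that a single team grade is modulated per student, so peer feedback has real consequences without overriding the team's shared result.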


Assessment of Total Engineering Skills

Many student teams choose to participate in the SAE (Society of Automotive Engineers) Aero Design or the AIAA (American Institute of Aeronautics and Astronautics) Design/Build/Fly competitions. In addition to the conceptual and preliminary design, these teams carry out the detail design of their airplane, which they proceed to build and test. Clearly, these competitions give students an opportunity to go beyond a design on paper and experience challenges related to manufacturability and cost. Often engineering professionals from the aerospace industry mentor students in their designs. Participation in design competitions offers unique learning experiences through interactions with students, faculty, and engineers from educational institutions and companies around the country (US) and the world; both the SAE and the AIAA competitions attract student teams from universities around the world. Furthermore, participation provides unique opportunities for authentic assessment of student design skills by engineering professionals. In addition to the engagement factor, which in itself enhances the students’ learning experience in engineering design (Mourtos, 2003), the flight competition itself provides the ultimate test for their designs.

Assessment of Technical Communication Skills

Although students must pass a technical writing course (E100W) and have several design and lab reports evaluated in previous courses, it is again the senior design capstone experience that offers opportunities for more realistic assessment of technical communication skills. For example, students who participate in design competitions have their design reports and drawings evaluated by a team of professional engineers, from whom they receive a score sheet and written feedback. Teams also present their design orally and receive

a separate evaluation of their presentation. This kind of feedback naturally adds to any comments given by the course instructor throughout the year. In fact, in many cases it carries a greater weight. In addition to participating in design competitions, students are encouraged to submit and present papers to conferences (e.g., Johnson et al., 2009; Casas et al., 2008). Whether a student conference or a professional conference, participation provides similar benefits in terms of evaluating student written and oral communication skills.

Safety, Ethics, and Liability Issues

Safety, ethics, and liability issues are addressed in the course through aerospace case studies involving accidents. Students research background information for each case, make a class presentation, and argue the various issues in class. A written report is also required. Students generally engage in these discussions and perform fairly well in their written assignments, not only because safety, ethics, and liability provide an interesting dimension to aerospace vehicle design but also because these assignments are the only ones addressing ABET Outcome 3f in the BSAE Curriculum and, as such, have been designated as “gateway” assignments. Hence, students must receive a score of 70% or better on these assignments to pass the course, regardless of their performance in the technical aspects of their design.

Economic, Environmental, Societal, and Global Impact

Students discuss in one of their reports the impact of their designs in an economic, environmental, societal, and global context. For example, a team that designed a solar-powered UAV performed a simple analysis of the environmental impact of their airplane by estimating the emissions from a small internal combustion engine with comparable power. They also discussed operating cost, taking


into consideration the replacement cost of their expensive solar panels every time their UAV crashed. On the other hand, it is not always possible to find interesting and realistic social, political, and other types of constraints for all airplanes that students choose to design. Nevertheless, it is important that students develop at least a basic understanding of such issues, as well as ways to properly research them before attempting to address them. To develop such an understanding of these issues as they relate to aircraft design, students perform an additional individual assignment by selecting and researching a topic of interest to them. For example, two very interesting topics selected by students were the impact of airplanes on cultural integration and the contribution of jet aircraft contrails to global warming. Students are required to find at least five references related to their topic, at least two of which must be technical journal articles, conference papers, or technical reports. For the rest of their references students may use newspaper or magazine articles and the web. Students study these references and prepare a two-page paper summarizing the key points of their research and a ten-minute presentation for the class. In their presentation students must include two key questions related to their issue, as a way to facilitate class discussion.
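A back-of-envelope estimate like the solar-UAV team's emissions comparison can be sketched as follows. Every value here (engine power, specific fuel burn, CO2 per liter of gasoline, annual flight hours) is an assumed, illustrative number, not data from the actual student project.

```python
# Rough CO2 estimate for a small gasoline engine of comparable power,
# i.e., the emissions a solar-powered UAV avoids. All values are
# illustrative assumptions, not data from the student project.

POWER_KW = 1.5          # engine power comparable to the UAV's motor (assumed)
FUEL_L_PER_KWH = 0.45   # fuel burn of a small piston engine (assumed)
CO2_KG_PER_L = 2.3      # approximate CO2 released per liter of gasoline
HOURS_PER_YEAR = 100.0  # annual flight time (assumed)

energy_kwh = POWER_KW * HOURS_PER_YEAR
fuel_l = energy_kwh * FUEL_L_PER_KWH
co2_kg = fuel_l * CO2_KG_PER_L
print(f"avoided emissions: ~{co2_kg:.0f} kg of CO2 per year")
```

Even such a crude chain of multiplications gives students a defensible order-of-magnitude claim to put in an environmental-impact discussion, which is precisely the level of analysis the assignment asks for.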

Graphic Communication Skills

To introduce students to freehand drawing, a collaboration has been established with the SJSU School of Art and Design. A team of students from the graduate class Artists Teaching Art Seminar (Art 276) visits the aircraft design class to offer a three-hour workshop on freehand drawing, which includes contour drawing, gesture drawing, and perspective. Both groups of students have been very positive about their experience: the art students because they are given an opportunity to practice their teaching skills in a realistic setting; the aircraft design students because they get an opportunity to express themselves creatively


Figure 4. Example of a free-hand drawing in the early design stages

within the context of a very demanding engineering course. An example of a free-hand drawing illustrating a possible configuration for a small solar-powered UAV is shown in Figure 4. Engineering students tend to be very capable with computer programs, including those used in design. For example, a student produced an artist’s concept of his proposed very large, luxury airship as a way of helping his audience visualize the level of comfort and luxury afforded in this kind of vehicle and provide a contrast with the interior one finds in most airliners today (Figure 5). Naturally, three-view CAD drawings are expected from students in all final design reports. Students are introduced to CAD early in their curriculum with a required freshman-level course in Design & Graphics (ME20). In addition, Computer Aided Design (ME165) is a popular technical elective for many students.

REQUIRED SKILLS FOR FACULTY WHO TEACH ENGINEERING DESIGN

An additional challenge in teaching design is the level of design competence of the faculty who teach design courses.


Figure 5. Example of an artist’s concept drawing for the interior of a very large, luxury airship

A thorough analysis of this issue is beyond the scope of this article; however, it is worth mentioning two distinct reasons that contribute to this challenge:

a. Successful completion of a Ph.D. degree, required for a faculty position at most engineering schools, entails primarily the development of analytical (left-brain) research skills. On the other hand, as we have seen, design requires both analytical and creative skills.
b. To earn tenure and promotion in an academic setting, engineering faculty are required to perform research, publish in refereed journals, and seek external funding. To maximize their chances for success under this kind of pressure, engineering faculty continue the same line of research they did in graduate school. After all, the venues available for publishing design work or seeking funding to do such work are limited compared with traditional areas of engineering research.

Hence, faculty members who are asked to teach a design course often find themselves unprepared. One way to address this deficiency is to require engineering faculty to undergo some training in engineering design before teaching a design course. There are many workshops on design for faculty members, as well as for engineers who work in industry, sponsored by professional societies, universities, and engineering companies. Professional societies also offer summer fellowships for engineering faculty willing to spend a summer in industry working alongside design engineers. Another way to address this issue is to hire adjunct faculty with current design experience from industry to teach design courses. This solution, however, poses its own problems:

a. While some engineering schools are strategically located in areas where adjunct faculty with design experience are available, not every engineering school is blessed with proximity to engineering companies that may provide such faculty. This issue can be addressed in creative ways. For example, to accommodate an adjunct faculty member who teaches a design course at SJSU, a blended course has been scheduled: traditional (face-to-face) and online. The instructor flies in from another state every other week and spends three hours with the students. In between, the course is conducted online using appropriate software.

b. Teaching any subject, including design, requires not only expertise in the subject matter but also appropriate pedagogical knowledge (Mourtos, 2007). Unfortunately, most engineering faculty do not possess such knowledge, as it is not a requirement in their job description. This is true for full-time as well as part-time faculty. Our experience at SJSU has shown that both full-time and adjunct faculty have opportunities to develop pedagogical knowledge through experience and reflection, by teaching a variety of courses over time, as well as through optional pedagogical training available at most universities. As a result, some (certainly not all) of the faculty do develop appropriate pedagogical content knowledge over time and become effective teachers.

CONCLUSION An attempt has been made to provide a comprehensive list of skills, technical and non-technical, for design engineers. These skills include analytical and open-ended problem-solving skills, a view for total engineering, interpersonal and team skills, communication skills, as well as fluency with modern tools and techniques used in engineering design. In addition to these skills, design engineers must develop certain attributes, such as curiosity to learn new things and explore new ideas, self-confidence in making design decisions, willingness to take risks by trying new concepts and thinking out-of-the-box, and persistence to keep trying when things don’t work. The paper presented course and curriculum design from the BSAE Program at SJSU that addresses these skills and attributes, and touched briefly on the challenge of engineering faculty competence in design skills and pedagogy. Some of the elements in this curriculum were introduced several years ago and have been assessed extensively; the results indicate that students indeed acquire an adequate level of competence in some of these skills. Other elements, such as the teaching of freehand drawing through the collaboration with the College of Arts and Design, were introduced


only recently and have not yet been assessed. In any case, the attributes of a design engineer, as described above, are difficult to measure and will require the development of special rubrics.

REFERENCES

Bloom, B. S. (1984). Taxonomy of educational objectives; Handbook 1: Cognitive domain. Reading, MA: Addison-Wesley.

Bloom, B. S., Krathwohl, D. R., & Masia, B. B. (1984). Taxonomy of educational objectives; Handbook 2: Affective domain. Reading, MA: Addison-Wesley.

Casas, L. E., Hall, J. M., Montgomery, S. A., Patel, H. G., Samra, S. S., Si Tou, J., et al. (2008). Preliminary design and CFD analysis of a fire surveillance unmanned aerial vehicle. In Proceedings of the Thermal-Fluids Analysis Workshop.

Curry, D. T. (1991). Engineering schools under fire. Machine Design, 63(10), 50.

Dym, C. L., Agogino, A. M., Eris, O., Frey, D. D., & Leifer, L. J. (2005). Engineering design thinking, teaching, and learning. Journal of Engineering Education, 94(1), 103–120.

Engineering Accreditation Commission & Accreditation Board for Engineering and Technology. (2009). Criteria for accrediting engineering programs, effective for evaluations during the 2010-2011 cycle. Retrieved August 20, 2010, from http://www.abet.org/forms.shtml

Ercolano, V. (1996). Freshmen: These first-year design courses help attract and retain engineering students. ASEE Prism, 21-25.

Flightsim Aviation Zone. (2010). Federal Aviation Regulations, Part 25 – Airworthiness Standards, Transport Category Airplanes. Retrieved April 18, 2011, from http://www.flightsimaviation.com/data/FARS/part_25.html

Flightsim Aviation Zone. (2010). Federal Aviation Regulations, Part 36 – Noise Standards. Retrieved April 30, 2011, from http://www.flightsimaviation.com/data/FARS/part_36.html

Johnson, K. T., Sullivan, M. R., Sutton, J. E., & Mourtos, N. J. (2009). Design of a skydiving glider. In Proceedings of the Aerospace Engineering Systems Workshop.

Maxwell, J. C. (2001). The 17 indisputable laws of teamwork: Embrace them and empower your team. Nashville, TN: Thomas Nelson.

Mourtos, N. J. (2003). From learning to talk to learning engineering; Drawing connections across the disciplines. World Transactions on Engineering & Technology Education, 2(2), 195–204.

Mourtos, N. J. (2007). Course design: A 21st century challenge (pp. 1–4). San Jose, CA: Center for Faculty Development and Support, San Jose State University.

Mourtos, N. J. (2010). Challenges students face when solving open-ended problems. International Journal of Engineering Education, 26(4).

Mourtos, N. J., DeJong-Okamoto, N., & Rhee, J. (2004). Open-ended problem-solving skills in thermal-fluids engineering. Global Journal of Engineering Education, 8(2), 189–199.

Mourtos, N. J., & Furman, B. J. (2002). Assessing the effectiveness of an introductory engineering course for freshmen. In Proceedings of the 32nd IEEE/ASEE Frontiers in Education Conference.

Mourtos, N. J., Papadopoulos, P., & Agrawal, P. (2006). A flexible, problem-based, integrated aerospace engineering curriculum. In Proceedings of the 36th IEEE/ASEE Frontiers in Education Conference.

Nicolai, L., & Pinson, J. (1988). Aircraft design short course. Dayton, OH: Bergamo Center.

Nicolai, L. M. (1998). Viewpoint: An industry view of engineering design education. International Journal of Engineering Education, 14(1), 7–13.

Petroski, H. (2000). Back to the future. ASEE Prism, 31-32.

Pink, D. H. (2005). A whole new mind: Why the right-brainers will rule the future. New York, NY: Riverhead Books.

Reuteler, D. (2010). The drawings of Leonardo da Vinci. Retrieved August 20, 2010, from http://www.drawingsofleonardo.org/

Seely, B. E. (1999). The other re-engineering of engineering education, 1900-1965. Journal of Engineering Education, 285-294.

The Boeing Company & Rensselaer Polytechnic Institute. (1997). A manifesto for global engineering education: Summary report of the Engineering Futures Conference. Seattle, WA.

Wikipedia. (2007). Archimedes’ screw. Retrieved August 18, 2010, from http://en.wikipedia.org/wiki/Archimedes%27_screw

Wikipedia. (2011). Hughes H-4 Hercules. Retrieved April 18, 2011, from http://en.wikipedia.org/wiki/Hughes_H-4_Hercules

Woods, D. R., Hrymak, A. N., Marshall, R. R., Wood, P. E., Crowe, C. M., & Hoffman, T. W. (1997). Developing problem-solving skills: The McMaster problem-solving program. Journal of Engineering Education, 86(2), 75–91.

This work was previously published in the International Journal of Quality Assurance in Engineering and Technology Education (IJQAETE), Volume 2, Issue 1, edited by Arun Patil, pp. 14-30, copyright 2012 by IGI Publishing (an imprint of IGI Global).


Chapter 2

Why Get Your Engineering Programme Accredited? Peter Goodhew University of Liverpool, UK

ABSTRACT In many countries engineering degree programmes can be submitted for accreditation by a professional body and/or graduate engineers can be certified or registered. Where this is available most academic institutions feel that they must offer accredited engineering programmes. The author suggests that these processes are at best ineffective (they do not achieve their aims) and at worst they are destructive of creativity, innovation and confidence in the academic community. The author argues that such processes (including any internal certification within the Conceive-Design-Implement-Operate, i.e., CDIO Initiative) should be abandoned completely. The author proposes alternative ways of maintaining the quality of engineering design and manufacture, which place the responsibility where it properly lies – with the manufacturer or contractor. This is a polemic piece, not a referenced review of accreditation.

INTRODUCTION In many countries, undergraduate engineering programmes can be submitted to a national body for accreditation. Graduates from accredited programmes are eligible, often with an additional requirement for relevant work experience, for registration as a professional engineer. In the UK this accreditation is overseen by the Engineering Council via UK-SPEC and opens the way to CEng, IEng or EngTech qualifications. In the USA, ABET serves a similar function, while in Australia the appropriate body is Engineers Australia. In all cases the programme, its students, and sometimes its graduates, are scrutinised by a committee of professional engineers before accreditation is awarded for a fixed period, such as five years. The accreditation process involves substantial paperwork and usually a one- or two-day visit, so it is quite costly for both the educational institution and the professional body. I argue in this article that this considerable effort does not represent good value for money and in some cases may have a negative effect on the quality of engineering education.

DOI: 10.4018/978-1-4666-1945-6.ch002

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


THE CASE AGAINST ACCREDITATION Did the accreditation of professional engineering programmes prevent the disastrous crash of the Airbus A330, flight AF 447, in June 2009? Equally, is it responsible for the fact that the Eiffel tower has remained standing for 120 years? Or that my iPhone is so brilliant? No, no and no. So what is accreditation supposed to be for? At the highest level I presume that the intention is to ensure and enhance the quality and safety of engineered products throughout the world. At a more mundane (and self-interested) national level it might be intended to enable the world-wide transferability, and thus profitability, of a nation’s engineering industry by ensuring the international credibility and employability of its engineers. These seem laudable objectives, but delivering them is several steps removed from the accreditation of university programmes. The logic is presumably that the employers of professional engineers must have confidence, via external testimony, in their skills and their fitness to practise. This confidence is engendered by their status as professional (chartered in UK parlance, registered in other jurisdictions) engineers, part of the qualification for which is that, at some time in the past, they graduated from an accredited degree programme. These engineers also have to demonstrate appropriate experience in employment and membership of a professional body. I find the whole system of accreditation unsatisfactory in two ways: it does not deliver the intended outcome (and so is ineffectual) and, additionally, it can damage our education system and thus our students and graduates. First, the charge that it is ineffectual: engineered products are conceived, designed, made and operated (CDIO-ed) by engineers employed by large or small companies. Some, but certainly not all, of these engineers may be chartered. They will usually have earned their chartered status by virtue

of the work undertaken in their first few years of employment, backed up by the degree they were awarded several years earlier. Since receiving their chartered status they will have been encouraged to undertake continuous professional development, but this will not have been checked. A fifty-year-old chartered engineer is thus operating on the basis of a validation process twenty years ago and a degree awarded about 25 to 30 years ago. The accreditation of this degree, so long ago, has almost no relevance to the engineering practices in use today. Indeed, if the degree was typical of those awarded 25 years ago, it will have contained a significant amount of engineering science and very few tests of engineering aptitude or attitude (which is of course why we have the CDIO movement). The fitness to practise of an individual engineer will in reality depend on what they have done, seen and learned during their working life, which is almost independent of the content of their first degree. Indeed, the technical content of a degree in one engineering discipline may have almost no overlap with the content of another engineering discipline, so it is hard to argue that subject content has anything to do with being, or thinking like, an engineer. Furthermore, an engineer employed today may be working in an area unrelated to their original area of study. This is very likely for bioengineers, nanoengineers, environmental engineers, nuclear engineers and others working in interdisciplinary areas. Their original degree would either have been un-accredited or the accreditation would relate to a different disciplinary area. How can this in any way validate or assure the quality of their current work? A third issue is the effectiveness of the quality assurance provided by chartered status.
I have already asserted that there are almost no checks on the continued professional development of chartered engineers, but equally there are almost no cases of the de-registration of rogue chartered engineers (and even if there were, they would certainly – like doctors – be de-registered after


they had committed a grave misjudgment or offence, not before). So the accreditation of programmes is certainly ineffectual, but it is also damaging to the education process. University departments of Engineering spend a great deal of time preparing for accreditation visits, and tuning their degree programmes to fit the perceived requirements of their professional bodies. They do this not to improve their programmes (most programme leaders do not believe that the comments of accreditors will achieve this) but out of fear that they will no longer be able to compete in the marketplace for students if they are not accredited. This fear is probably misplaced, but no department has the courage to put it to the test. Accreditation panels almost always feel that they should make some critical (framed as helpful) comments, but these usually reflect the prejudices of individual panel members, who are rarely experts in higher education and are frequently elderly and out of date. (I have resolved never to accept another invitation to sit on an accreditation panel now that I have reached 65.) The damage to the system is that the threat of accreditation makes our engineering departments more conservative and less willing to change or innovate, as well as taking time and money which would be better spent on the education of their students. It also reinforces (unhelpfully) the audit culture which has over-run our universities in the last twenty years (at least in the UK). It would be unreasonable to criticise the existing system of accreditation without making some attempt to suggest what might replace it to provide the assurance of quality demanded by society. My suggestion is that the responsibility for the safety and quality of products (from multi-billion tunnels to five-penny toys) should remain where it legally is – with the manufacturer or major contractor. These businesses should assure themselves that their workers are appropriately skilled and work

to appropriate safety and ethical standards. To achieve this they might need to strengthen their recruitment procedures to include a real assessment of candidates’ current abilities and skill sets. They would also want, as many do, to ensure periodically that their employees are up to date. They might wish to buy in the necessary training expertise, perhaps even from a local university, but they will not be much helped by a past accreditation. The proof of the quality of training, and of initial education, will be demonstrated by the performance of the employee – supervised and checked by experienced colleagues – not by their possession of a yellowing piece of paper. I notice that I have not mentioned professional bodies. What might their role be? Certainly not as accreditors, but perhaps as honest brokers between employers and trainers and educators, or as forums for discussion (but not regulation) of best practice. In which case perhaps there should be an upper age limit for service on any committee or as an officer – shall we say 50 – and those in their dotage (like me) should only speak when asked.

CONCLUSION The arguments I have advanced here also apply to the certification of undergraduate programmes as CDIO-compliant. Such a scheme would cost effort (and almost certainly money) to implement, would cost even more to police (so this would be unlikely to happen), and would still offer no assurance of the quality of an engineering graduate. A further argument, which applies particularly to CDIO members, is that (unlike many other engineering teaching departments) they have already shown their commitment to improving engineering education and are thus the least likely programmes to need the additional discipline offered by a certification process. So I strongly suggest that we do not bother.

This work was previously published in the International Journal of Quality Assurance in Engineering and Technology Education (IJQAETE), Volume 2, Issue 2, edited by Arun Patil, pp. 93-95, copyright 2012 by IGI Publishing (an imprint of IGI Global).


Chapter 3

Quality and Environmental Management Systems in the Fashion Supply Chain Chris K. Y. Lo The Hong Kong Polytechnic University, Hong Kong

ABSTRACT Consumers and stakeholders have rising concerns over product quality and environmental issues; therefore, quality and environmental management have become important topics for today’s fashion products manufacturers. This chapter presents empirical evidence on the adoption of quality management systems (QMS) and environmental management systems (EMS) and their impact on the supply chain efficiency of fashion and textiles related firms. Although both management systems are commonly adopted in the manufacturing industries and are becoming a passport to business, their actual impact specifically on the fashion supply chain has not been explored. By investigating the adoption of ISO 9000 (a quality management system) and ISO 14000 (an environmental management system) in U.S. fashion and textiles firms, we estimate their impact on manufacturers’ supply chain performance. Based on 284 publicly listed fashion and textiles manufacturing firms in the U.S., we find that the firms’ operating cycle time shortened by 15.12 days over a five-year period. In the cross-sectional analysis, the results show that early adopters of ISO 9000 and high-tech textiles related firms obtained more supply chain benefits. We find only mixed results on the impact of ISO 14000 on supply chain performance.

DOI: 10.4018/978-1-4666-1945-6.ch003

BACKGROUND The quality of textiles products at each stage in the fashion supply chain is essential for the success of a fashion product. The quality level delivered to the final customer is the result of the quality management practices of each link in the fashion supply chain; thus, each actor is responsible for its own quality issues (Romano & Vinelli, 2001). This is because the quality of the final product that reaches the customer is clearly the result of a chain of successive, inter-linked phases: spinning, weaving, apparel, and distribution. Quality management in the supply chain is therefore particularly


relevant in the fashion and textiles industries (Romano & Vinelli, 2001). Quality management is defined as an integrated approach to achieving and sustaining high quality output, focusing on the maintenance and continuous improvement of processes and on defect prevention at all levels and in all functions of the organization, in order to meet or exceed customer expectations (Flynn, Schroeder, & Sakakibara, 1994). Customer expectations of product quality, however, go beyond physical attributes and workmanship. According to ISO 9000, quality is defined as customer expectations over actual performance (ISO, 2004). Consumers’ expectations of fashion products nowadays also include environmental attributes, for instance, the use of sustainable materials and the control of environmental impacts during the manufacturing processes. Therefore, both quality and environmental management have become important focuses for today’s fashion and textiles manufacturers. International buyers for major brands often use quality management systems (QMS) and environmental management systems (EMS) as major tools to select capable fashion and textiles suppliers (Boiral, 2003; Boiral & Sala, 1998), to ensure that their products and raw materials can meet customers’ expectations on quality and environmental aspects. To respond to the call for management systems in various industries, the International Organization for Standardization (ISO) developed ISO 9000 in 1987 and ISO 14000 in 1996, which are generic QMS and EMS for worldwide application. The number of ISO 9000 certified firms has increased persistently since the standard’s introduction some 20 years ago. According to recent statistics (ISO, 2009), almost one million firms or business divisions in 175 countries have adopted ISO 9000. In the past five years, almost 800,000 firms or business units have adopted ISO 9000, representing an increase of almost 570%.
For ISO 14000, it has been adopted by 188,815 firms or business divisions in 155 countries (ISO, 2009). From 2006 to 2008, almost 60,000 firms or business units adopted ISO 14000, representing an increase of about 47% (ISO, 2009). Multinational enterprises (MNEs) with operations in more than one country are widely recognized as key agents in the diffusion of ISO certifications across national borders. The diffusion of ISO 9000 in the fashion and textiles industries is particularly pronounced. In the early 1990s, the European Committee for Standardization (Comité Européen de Normalisation, CEN) developed importing regulations for use by the European Union (EU) countries. CEN requires manufacturing firms importing products into the European market to comply with the ISO 9000 standard. Imports of fashion and textiles products into EU countries fall under this regulation. The requirement of ISO 9000 was then taken up by major MNEs, which use ISO-based criteria to certify their own suppliers and have developed their internal quality management systems according to the ISO guidelines (Guler, Guillen, & Macpherson, 2002). Many suppliers to MNEs subsequently required their upstream suppliers or business partners to be ISO certified, leading to the widespread diffusion of the standard in the global supply chain. ISO 14000 follows the global diffusion pattern of ISO 9000 and has become the most widely adopted EMS in the world (Corbett & Kirsch, 2001). It is a set of management processes and procedures requiring firms to identify, measure, and control their environmental impacts (Bansal & Hunter, 2003). With the aim of improving the environmental performance of a firm, compliance with the standard is audited and certified by an independent, third-party certification body (Jiang & Bansal, 2003). The initial version of ISO 14000 was a consolidation of various elements of BS 7750, a British environmental management standard, and the European Eco-Management and Audit Scheme (EMAS). Regulations of different countries towards the adoption of ISO 14000 also affect the diffusion of this standard.
European countries and


some Asian countries, such as Japan (Bansal & Roth, 2000) and Singapore (Quazi, Khoo, Tan, & Wong, 2001), provide a favourable legislative environment for firms to adopt EMS, while the U.S. provides a comparatively less favourable one (Kollman & Prakash, 2001, 2002). The regulatory environment within a country affects the costs and perceived benefits of ISO 14000 adoption (Delmas, 2002). Jennings and Zandbergen (1995) maintained that the greater the pressure from the environment, the faster the diffusion of EMS. Due to globalization, environmental laws and regulations affect not only firms within a particular country but also firms that import and export goods across countries. Christmann and Taylor (2001) found that polluting firms in developing countries that export a large proportion of their output to developed countries are more likely to adopt ISO 14000, even though they may be tempted by lax domestic environmental regulations. Although the original objective of ISO 14000 is quite different from ISO 9000’s, the two share the same management framework and both diffuse along the global supply chain. In the literature, some scholars have investigated the interactions between ISO 9000 and ISO 14000. Corbett and Kirsch (2001) found that ISO 9000 appears to be an important factor explaining the diffusion of ISO 14000, suggesting that the motivations behind the two, such as attracting potential customers, overlap significantly. Pan (2003) also found a strong linkage between the motivations for implementing ISO 9000 and ISO 14000 and the perceived benefits of adoption. Albuquerque, Bronnenberg, and Corbett (2007) further investigated the global diffusion patterns of ISO 9000 and ISO 14000 and found that both certifications diffuse across countries primarily by geography. In addition, the experience of adopting ISO 9000 can help certified firms implement ISO 14000 effectively, as the two systems are very similar in terms of implementation requirements (Poksinska, Dahlgaard, & Eklund, 2003).

LITERATURE REVIEW OF QMS AND EMS ADOPTION IN THE FASHION SUPPLY CHAIN In the fashion and textiles literature, researchers mainly focus on the usefulness of QMS and EMS in the supplier selection process. Buyers use management certifications (e.g., ISO 9000 and ISO 14000) as an instrument to determine whether a supplier is capable of following industry standards (Motwani, Youssef, Kathawala, & Futch, 1999; Teng & Jaramillo, 2005; Thaver & Wilcock, 2006). Teng and Jaramillo (2005) developed a model for the evaluation of suppliers in the fashion supply chain. They proposed five performance clusters to evaluate textiles supplier performance: delivery, flexibility, cost, reliability, and quality. They suggested using QMS certifications as evidence in the quality performance evaluation; suppliers receive a higher score in the evaluation model if they are ISO 9000 certified. There are only a few anecdotal cases that discuss the impact of QMS on the fashion supply chain. Sarkar (1998) found that a textiles mill in India obtained higher customer satisfaction through increased employee involvement and product quality improvement due to ISO 9000 adoption. Romano and Vinelli (2001) conducted a case study of the Marzotto Group, one of the most important Italian textiles and apparel manufacturers, examining its relationships with both upstream and downstream suppliers. They found that a quality management system is the “glue” that makes the supply network operate as a “whole system”. Adanur and Allen (1995) conducted the first industry survey of ISO 9000 in the U.S. textiles industry and found that certified firms experienced decreases in production time and product returns. The certified firms also reported fewer raw material rejections from ISO 9000


certified suppliers. A recent study shows that the adoption of ISO 9000 can improve adopting firms’ supply chain efficiency in the general manufacturing industries in the U.S. (Lo, Yeung, & Cheng, 2009), suggesting that the adoption of QMS might help certified firms become more efficient nodes in their supply chains. The implementation of QMS is often undertaken under pressure from major customers; thus small- and medium-sized textiles manufacturers have no choice but to comply with customer requirements on these certifications. They might pursue ISO 9000 without genuine knowledge of QMS. Allen and Oakland (1988, 1991a, 1991b), in three survey studies of 183 textiles firms, found that small textiles firms lack correct knowledge about QMS compared to large firms. They concluded that there is a distinct lack of good quality management practices within the British textiles industry. Fatima and Ahmed (2010) studied ISO 9000 certified firms in Pakistan’s bedwear textiles industry. They found that 60% of the firms offered poor training, 70% had poorly defined quality policies and objectives, and 70% had ineffective internal audits. Their findings show that despite the high adoption rate of ISO 9000 in the industry, there is a lack of real implementation of QMS, and ISO 9000 is merely a passport into export markets. Fashion and textiles manufacturers in developing countries face self-regulation pressures to adopt QMS to gain legitimacy from the MNEs of developed countries (Christmann & Taylor, 2001). In the operations management (OM) literature, there are some criticisms of EMS’s benefits in firm operations. The critics believe that environmental initiatives often transfer costs previously borne by society back to the firms (McGuire, Sundgren, & Schneeweis, 1988). The increased liability and environmental obligations could also have a negative impact on firms’ operational performance and production flexibility (i.e., limited choices of environmentally friendly dyes and fabrics, which are often more expensive). Therefore, textiles firms’ operations managers


would hesitate to adopt EMS in their production. On the other hand, advocates of EMS believe that ISO 14000 can improve environmental performance, which eventually improves firm performance through cost-saving and revenue-gain pathways (Klassen & McLaughlin, 1996). In fact, the perceived benefits of EMS in the fashion supply chain are quite direct. The dyeing process in textiles processing can produce huge amounts of emissions that may lead to fines and high restoration costs. The impact of EMS is especially important for the wet processing of natural fibres, which is water, energy, and pollution intensive (Ren, 2000). Managing environmental impact through a systematic process helps firms reduce water and energy consumption, as well as avoid serious pollution incidents. Therefore, EMS adoption is particularly relevant to fashion and textiles related firms. However, compared with ISO 9000, the number of empirical works focused on EMS adoption in the fashion supply chain is very limited. Only a few survey studies have discussed the sustainability of the fashion supply chain. For example, Fresner (1998) conducted a case study of an Austrian textiles mill and found that the adoption of ISO 9000 helped reduce solid waste, and that the mill then pursued ISO 14000 for further improvements in productivity. Brito et al. (2008) found that the adoption of ISO 14000 in the European fashion and textiles industries would improve customer service and cost optimization for the adopting firms, and eventually improve the overall performance of the whole supply chain. However, neither study provided objective evidence of such impact. To explore how QMS and EMS affect fashion and textiles firms’ supply chain efficiency, more empirical evidence is needed.
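The certification-based supplier evaluation discussed in this section, in the spirit of Teng and Jaramillo's (2005) five performance clusters, can be sketched as a simple weighted score in which an ISO 9000 certified supplier receives a higher score. The equal weights, 0-10 rating scale, and size of the certification bonus below are hypothetical illustrations, not values taken from the cited model.

```python
# Illustrative sketch of a cluster-weighted supplier score with an
# ISO 9000 certification bonus. Weights, scale, and bonus are hypothetical.

CLUSTERS = ["delivery", "flexibility", "cost", "reliability", "quality"]

def supplier_score(ratings, weights=None, iso9000_certified=False, iso_bonus=0.5):
    """Combine per-cluster ratings (0-10) into a single weighted score."""
    if weights is None:
        weights = {c: 1.0 / len(CLUSTERS) for c in CLUSTERS}  # equal weights
    base = sum(weights[c] * ratings[c] for c in CLUSTERS)
    # Certified suppliers receive a bonus, reflecting the use of QMS
    # certification as evidence of quality performance.
    return base + (iso_bonus if iso9000_certified else 0.0)

ratings = {"delivery": 8, "flexibility": 6, "cost": 7, "reliability": 9, "quality": 8}
print(round(supplier_score(ratings, iso9000_certified=True), 2))  # 8.1
```

In practice a buyer would calibrate the weights to its sourcing priorities; the point of the sketch is only that certification enters the evaluation as an explicit score component.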

Quality and Environmental Management Systems in the Fashion Supply Chain

HYPOTHESIS DEVELOPMENT

Supply chain efficiency could refer to lead-time performance, delivery promptness, and inventory level (Kojima, Nakashima, & Ohno, 2008). If the members of a fashion supply chain can deliver their products to their downstream customers in a shorter period, the overall supply chain performance is improved. The well-known Supply Chain Operations Reference (SCOR) model suggests that a firm’s operating cycle time is an important indicator for measuring supply chain efficiency (Stewart, 1995, 1997). To measure the supply chain efficiency of a particular firm, accounting-based performance indicators, namely 1) inventory days, 2) accounts receivable days, and 3) operating cycle time, can be used (Lo, et al., 2009). We discuss how ISO 9000 and ISO 14000 could affect the supply chain efficiency of adopting firms as follows. ISO 9000 requires the adopting firm to ensure that product quality is constantly measured and that appropriate corrective actions are taken whenever defects occur. These actions must be undertaken through a well-defined management system that continuously monitors potential quality problems. Therefore, the defect rate of fashion products should decrease, defects should be detected and corrected early in the production processes, and less scrap and rework need to be handled in the manufacturing processes (Naveh & Marcus, 2005). As a result, the overall time required to turn raw materials into fashion and textiles products that fulfil customer orders should be shortened (i.e., lower inventory days). ISO 14000 adoption could also lead to lower inventory days, as it requires organizations to implement pollution prevention procedures to avoid environmental spills, crises, and liabilities that might incur huge restoration effort (Bansal & Hunter, 2003; Brio, Fernandez, Junquera, & Vazquez, 2001). Therefore, ISO 14000 certified firms should face less frequent mandatory pollution restoration that could seriously disrupt their operations.
ISO 14000 certified firms are also perceived as less risky than their non-certified counterparts, so less frequent environmental inspections from customers and regulators are required (Potoski & Prakash, 2005), leading to a further shortening of inventory days. Therefore, we hypothesize that the time required to convert raw materials into textiles products (i.e., inventory days) is shorter after ISO 9000 or ISO 14000 implementation. As the initial objectives of ISO 9000 and ISO 14000 and their impacts on the supply chain differ, we develop two parallel hypotheses to estimate their impacts on supply chain performance independently.

H1a: The adoption of QMS leads to lower inventory days.

H1b: The adoption of EMS leads to lower inventory days.

The perceived benefits of ISO 9000 are not just confined to improving product quality, but also include enhancing customer services (Buttle, 1997). If ISO 9000 implementation can improve product quality and customer services, the customer order fulfilment time should be shorter. Moreover, if there is any quality problem with the fashion products, payment would be postponed as the defective products are returned for rework. Customers may not pay for the products until the reworked products are delivered and meet their quality requirements. As a result, the time between product delivery and customer payment (i.e., accounts receivable days) should be shorter for ISO 9000 certified firms with higher product and service quality (Lo, et al., 2009). By taking proactive measures to prevent environmental crises, an ISO 14000 certified firm is able to prevent the mistaken use of hazardous materials and violations of environmental regulatory requirements in the customer’s market, which might result in large-scale product recalls. Once a product recall is needed, the accounts receivable days will be significantly affected (i.e., lengthened). Besides, as customers are more favourable toward environmentally friendly products and organizations, ISO 14000 adoption can establish a positive corporate image and trust from customers in the



long run (Bansal & Hunter, 2003). Therefore, such firms have the potential to bargain for more favourable payment terms from customers who trust and are loyal to them (i.e., customers are willing to pay earlier). This hypothesis can be tested by measuring the accounts receivable days.

H2a: The adoption of QMS leads to lower accounts receivable days.

H2b: The adoption of EMS leads to lower accounts receivable days.

In the fashion supply chain, the operating cycle consists of manufacturing time (the time required to turn raw materials into products), delivery time (the time required to deliver products from the manufacturer to customers), and payment fulfilment time (the time required for customers to pay for their accepted products). The total time incurred in these processes is known as the operating cycle or “cash-to-cash cycle” (Eskew & Jensen, 1996). Therefore, we hypothesize that the operating cycle should be shorter after the implementation of ISO 9000 and ISO 14000.

H3a: The adoption of QMS leads to a shorter operating cycle.

H3b: The adoption of EMS leads to a shorter operating cycle.

METHODOLOGY AND DATA COLLECTION

In this research, we focus on fashion and textiles related firms in the manufacturing industry (SIC codes 2000-3999). All firms in these industries whose descriptions contain keywords such as “Fashion”, “Textiles”, “Dye”, “Apparels” and “Fabrics” were included in our sample. To generate our sample, we identified ISO 9000 and ISO 14000 certified firms and their years of certification from the registration data of Quality Digest and Who’s Registered, which are online registration databases. Since each firm


could have multiple plants/sites certified, we follow the practice of previous research (e.g., Corbett, Montes-Sancho, & Kirsch, 2005; Naveh & Marcus, 2005) by focusing on the first certification, because this is the only time period representing the change from non-certified to certified firm status. Additional certifications after the first only represent continuous improvement. After compiling the data from the online databases and from Standard and Poor’s financial database, COMPUSTAT, we found 284 ISO 9000 certified and 61 ISO 14000 certified publicly listed fashion or textiles manufacturing firms in the U.S. We define the year of formal ISO certification as the certification year (year 0). To measure the abnormal change in performance over a long-term period (the event window), we start by defining the base year, in which there is no prior impact from the preparation for ISO certification on the sample firms. To pass the certification audit, the average preparation time is 6-18 months prior to registration (Corbett, et al., 2005). Therefore, year -2 is taken as the base year. As we focus only on the first certification, we can assume that there is no impact from ISO implementation on any of the sample firms during the base year (year -2). ISO requires a third-party audit to verify that the system has been effectively implemented, so a strong impact of ISO on performance should appear in the certification year, when the firm passes the audit. The performance of certified firms should also experience the impact in the years after ISO certification. Therefore, we set year -2 as the base year of the event period and measure the changes over the next five years (year -1, year 0, year 1, year 2, and year 3).
To estimate the abnormal changes in supply chain performance within the event window, we compare the actual performance with the expected performance of the sample firms, which is based on the changes in performance of control firms (Barber & Lyon, 1996). The selection of control firms should be based on a combination of three


criteria: pre-event performance, industry, and firm size (Barber & Lyon, 1996). First, matching on pre-event performance can avoid the mean-reversion problem of accounting data and control for the impact of other factors on firms’ performance (Barber & Lyon, 1996). Second, industry economic status can account for up to 20% of the changes in financial performance (McGahan & Porter, 1997). Moreover, environmental issues and the impact of EMS are industry-specific (Russo & Fouts, 1997), so we must control for industry type in matching sample and control firms. Third, previous studies have suggested that operating performance varies by size (e.g., Fama & French, 1995). We thus match sample and control pairs based on these three matching criteria. We generate the sample-control pairs and regard these pairs as the performance-industry-matched group. The first step is to match each sample firm to a portfolio of control firms based on at least a two-digit SIC code and 90%-110% of performance in year -2. In Step 2, if a sample firm has no matched control firm from Step 1, we use at least a one-digit SIC code and 90%-110% of performance to match control firms. If no control firm is matched in Step 2, we use only the 90%-110% performance band as the matching criterion (Step 3). In our sample, there are 248 ISO 9000 certified firms, of which 58 are ISO 14000 certified. We discard 57 firms that did not have financial information (operating cycle) in year -2. Of the remaining 191 fashion and textiles related firms, 154 firms are matched in Step 1 (80.6%) and five firms are matched in Step 2. No firm is matched in Step 3. We thus matched 159 sample firms with at least one control firm in year -2. As the financial information presented in Table 1 shows, the number of matched sample-control pairs gradually decreases from year 1 to year 3, owing to a lack of financial information for either the sample firms or the control firms.
For the other matching group, i.e., the performance-industry-size-matched group, we further control for firm size. We use a 50%-200% range of total assets to control for firm size. In other words, we match the sample and control firms with at least a two-digit SIC code, 50%-200% firm size (in terms of total assets), and 90%-110% of pre-certification performance in year -2 (Step 1). Where a sample firm cannot be matched to any control firm in Step 1, we relax the industry-matching criterion to at least a one-digit SIC code, while keeping the other matching criteria unchanged (Step 2). If no control firm is matched in Step 2, we use only the 50%-200% firm size and 90%-110% performance bands as the criteria (Step 3). The reason for taking these steps to create two control groups is to match most of the firms without compromising the tightness of the matches on performance (Hendricks & Singhal, 2008).
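The three matching steps above can be sketched as a filter cascade. This is a minimal re-implementation under our own assumptions: firms are plain dicts, and the field names `sic`, `perf` and `assets` are invented stand-ins for the COMPUSTAT items, not names from the chapter.

```python
def match_controls(sample, controls, use_size=False):
    """Three-step control-firm matching (illustrative sketch).

    Each firm is a dict with 'sic' (4-digit SIC code as a string),
    'perf' (year -2 performance) and 'assets' (total assets).
    Performance band: 90%-110% of the sample firm's year -2 performance;
    optional size band (performance-industry-size-matched group): 50%-200%
    of the sample firm's total assets.
    """
    def qualifies(s, c, sic_digits):
        # Industry criterion: first 1 or 2 SIC digits must agree (0 = skip).
        if sic_digits and c["sic"][:sic_digits] != s["sic"][:sic_digits]:
            return False
        # Pre-event performance band.
        if not (0.9 * s["perf"] <= c["perf"] <= 1.1 * s["perf"]):
            return False
        # Optional firm-size band.
        if use_size and not (0.5 * s["assets"] <= c["assets"] <= 2.0 * s["assets"]):
            return False
        return True

    for step, sic_digits in ((1, 2), (2, 1), (3, 0)):
        portfolio = [c for c in controls if qualifies(sample, c, sic_digits)]
        if portfolio:
            return step, portfolio  # matched at this step
    return None, []                  # unmatched

sample = {"sic": "2200", "perf": 100.0, "assets": 50.0}
controls = [
    {"sic": "2300", "perf": 105.0, "assets": 60.0},  # only a 1-digit SIC match
    {"sic": "2210", "perf": 150.0, "assets": 55.0},  # outside the performance band
]
print(match_controls(sample, controls))  # matched in Step 2
```

Passing `use_size=True` reproduces the tighter performance-industry-size-matched variant with the same cascade.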

Measurements of Indicators

We measure the fashion and textiles firms’ supply chain efficiency via inventory days, accounts receivable days, and operating cycle time. For the calculation of inventory days, we first divide the cost of goods sold by the average inventory to obtain the inventory turnover ratio. We then divide 365 days by the inventory turnover ratio to obtain the inventory days; the unit of inventory days is days (see Formula 1). For accounts receivable days, similarly, we first calculate the accounts receivable turnover by dividing credit sales by average accounts receivable. We then divide 365 days by the accounts receivable turnover to estimate the number of accounts receivable days (see Formula 2). The overall operating cycle is the sum of inventory days and accounts receivable days, and it represents the time required to turn raw materials into cash from customers. The corresponding formulas are as follows. Letting IT be the inventory turnover ratio, given by

IT = COGS / Avg.Inv.,
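The three indicators are simple ratios of accounting figures, so they can be stated directly in code. The sketch below is our own illustration of Formulas 1-3, with invented figures:

```python
def inventory_days(cogs, avg_inventory):
    """Inventory days: 365 / inventory turnover, where IT = COGS / average inventory."""
    return 365.0 / (cogs / avg_inventory)

def receivable_days(credit_sales, avg_receivables):
    """Accounts receivable days: 365 / receivable turnover, where ART = CS / average AR."""
    return 365.0 / (credit_sales / avg_receivables)

def operating_cycle(cogs, avg_inventory, credit_sales, avg_receivables):
    """Operating ("cash-to-cash") cycle: inventory days + accounts receivable days."""
    return (inventory_days(cogs, avg_inventory)
            + receivable_days(credit_sales, avg_receivables))

# Example (invented figures): COGS of 7.3M against 1M average inventory gives
# 50 inventory days; credit sales of 14.6M against 1M average receivables
# gives 25 receivable days, for a 75-day operating cycle.
print(operating_cycle(7_300_000, 1_000_000, 14_600_000, 1_000_000))  # 75.0
```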


Table 1. Abnormal operating cycles of sample firms for ISO 9000 adoption over the three-year (year -2 to year 1), four-year (year -2 to year 2), and five-year (year -2 to year 3) periods, based on performance-industry-matched and performance-industry-size-matched control groups. p < 0.1*; p < 0.05**; p < 0.01*** for one-tailed tests; % negative is in percentage.

Performance-industry-matched

year -2 to year 1:
  Operating Cycle           N = 149   Mean = -18.44***   Median = -13.24***   % negative = 63
  Inventory Days            N = 149   Mean = -14.00***   Median = -10.31***   % negative = 62
  Accounts Receivable Days  N = 149   Mean = -9.95**     Median = -1.05**     % negative = 51

year -2 to year 2:
  Operating Cycle           N = 142   Mean = -11.59***   Median = -14.13***   % negative = 63
  Inventory Days            N = 142   Mean = -11.00***   Median = -9.69***    % negative = 63
  Accounts Receivable Days  N = 142   Mean = -1.78       Median = -1.56       % negative = 54

year -2 to year 3:
  Operating Cycle           N = 139   Mean = -17.24***   Median = -15.12***   % negative = 67
  Inventory Days            N = 139   Mean = -14.48***   Median = -9.47***    % negative = 65
  Accounts Receivable Days  N = 139   Mean = -7.56       Median = -4.32***    % negative = 61

Performance-industry-size-matched

year -2 to year 1:
  Operating Cycle           N = 144   Mean = -12.43   Median = -5.51   % negative = 57
  Inventory Days            N = 144   Mean = -9.57    Median = -2.86   % negative = 55
  Accounts Receivable Days  N = 144   Mean = -1.57    Median = 0.20    % negative = 50

year -2 to year 2:
  Operating Cycle           N = 137   Mean = -9.07    Median = -3.92   % negative = 59
  Inventory Days            N = 137   Mean = -11.28   Median = -3.50   % negative = 55
  Accounts Receivable Days  N = 137   Mean = 2.32     Median = 0.61    % negative = 47

year -2 to year 3:
  Operating Cycle           N = 131   Mean = -14.22   Median = -7.18   % negative = 60
  Inventory Days            N = 131   Mean = -12.54   Median = -5.46   % negative = 59
  Accounts Receivable Days  N = 131   Mean = -2.41    Median = 0.38    % negative = 48

where

COGS = cost of goods sold,
Avg.Inv. = average inventory balance,

we have

I = 365 / IT.  (1)

Similarly, letting ART be the accounts receivable turnover ratio, given by

ART = CS / Avg.AR,

where

CS = credit sales,
Avg.AR = average accounts receivable balance,

we have

AR = 365 / ART.  (2)

OC = I + AR  (3)

where OC = operating cycle, I = number of inventory days, AR = number of accounts receivable days.

We estimate abnormal supply chain efficiency (i.e., inventory days, accounts receivable days, and operating cycle time) within the event window as the difference between sample post-event performance (i.e., actual performance in year 1, year 2 and year 3) and expected performance (in year 1, year 2 and year 3). We estimate expected performance as the sum of sample pre-event performance (i.e., in year -2) and the change in the median performance of the control firms in that period (i.e., from year -2 to year 1, year 2 and year 3). The formulas are as follows:

AP(t+j) = PS(t+j) - EP(t+j),

EP(t+j) = PS(t+i) + (PC(t+j) - PC(t+i)),

where

AP – abnormal performance,
EP – expected performance,
PS – performance of sample firms,
PC – median performance of control firms,
t – year of ISO 9000 / ISO 14000 certification,
i – starting year of comparison (i = -2),
j – ending year of comparison (j = 1, 2, or 3).

We obtain the performance data from the COMPUSTAT database. Since the first ISO 9000 (ISO 14000) certification was awarded in 1990 (1996) and we need performance data at least two years before certification (year -2) and three years after certification (year 3) for the analysis, we obtain performance data covering the period 1988 to 2008. We conduct the Wilcoxon signed-rank (WSR) test to examine the median abnormal performance. We also carry out the sign test to determine whether the percentage of negative abnormal performance is significantly higher than 50%. To check for consistency, we further conduct the parametric t-test on the mean abnormal performance to ensure that our findings are robust. Table 1 and Table 2 present the results for ISO 9000 and ISO 14000 on supply chain efficiency, respectively.
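The expected- and abnormal-performance computation in the two formulas above can be sketched directly. This is a toy illustration with invented numbers, not the study’s data:

```python
def abnormal_performance(ps, pc_median, i=-2, j=1):
    """AP(t+j) = PS(t+j) - EP(t+j), with
    EP(t+j) = PS(t+i) + (PC(t+j) - PC(t+i)).

    ps and pc_median map event-relative years to the sample firm's performance
    and the matched control portfolio's median performance (e.g., operating-
    cycle days); i is the base year, j the ending year of the comparison.
    """
    expected = ps[i] + (pc_median[j] - pc_median[i])
    return ps[j] - expected

# Toy example: the sample firm's operating cycle falls from 120 to 95 days,
# while its matched controls' median only falls from 118 to 113 days, so the
# abnormal change is -25 - (-5) = -20 days.
ps = {-2: 120.0, 1: 95.0}
pc = {-2: 118.0, 1: 113.0}
print(abnormal_performance(ps, pc, i=-2, j=1))  # -20.0
```

A negative AP thus means the firm improved (shortened its cycle) by more than its control portfolio did over the same window.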

RESULTS

We begin our discussion by examining abnormal supply chain efficiency for ISO 9000 certified textiles and textiles-related firms.

Table 2. Abnormal operating cycles of sample firms for ISO 14000 adoption over the three-year (year -2 to year 1), four-year (year -2 to year 2), and five-year (year -2 to year 3) periods, based on performance-industry-matched and performance-industry-size-matched control groups. p < 0.1*; p < 0.05** for one-tailed tests; % negative is in percentage.

Performance-industry-matched

year -2 to year 1:
  Operating Cycle           N = 45   Mean = -13.36**   Median = -11.15**   % negative = 64
  Inventory Days            N = 45   Mean = -6.36**    Median = -4.86**    % negative = 64
  Accounts Receivable Days  N = 45   Mean = -15.45**   Median = -2.54**    % negative = 62

year -2 to year 2:
  Operating Cycle           N = 41   Mean = 0.30    Median = -3.93   % negative = 59
  Inventory Days            N = 41   Mean = -3.76   Median = -6.96   % negative = 61
  Accounts Receivable Days  N = 41   Mean = -2.84   Median = -0.14   % negative = 51

year -2 to year 3:
  Operating Cycle           N = 33   Mean = -5.49   Median = -4.37   % negative = 55
  Inventory Days            N = 33   Mean = -6.22   Median = -4.17   % negative = 58
  Accounts Receivable Days  N = 33   Mean = -3.90   Median = -2.33   % negative = 58

Performance-industry-size-matched

year -2 to year 1:
  Operating Cycle           N = 40   Mean = -3.21   Median = -2.99   % negative = 55
  Inventory Days            N = 40   Mean = -4.78   Median = -2.01   % negative = 55
  Accounts Receivable Days  N = 40   Mean = 1.63    Median = 3.74    % negative = 40

year -2 to year 2:
  Operating Cycle           N = 37   Mean = 2.71    Median = -4.66   % negative = 59
  Inventory Days            N = 37   Mean = -2.26   Median = -2.64   % negative = 59
  Accounts Receivable Days  N = 37   Mean = 5.03    Median = 3.99    % negative = 43

year -2 to year 3:
  Operating Cycle           N = 32   Mean = -4.66   Median = -3.03   % negative = 53
  Inventory Days            N = 32   Mean = -5.08   Median = -3.51   % negative = 56
  Accounts Receivable Days  N = 32   Mean = -1.01   Median = -1.92   % negative = 56

The cumulative results of the three-year to five-year changes (from year -2 to year 1, year 2 and year 3) provide a clearer picture of the long-term impact of ISO 9000 adoption on firms’ supply chain efficiency in the fashion supply chain. For the performance-industry-matched group, from the pre-implementation base year (year -2) to the first post-certification year (year 1), the median (mean) cumulative change in operating cycle is -13.24 days (-18.44 days), which is significant at the 1% (1%) level. More than half of the sample firms (63%) experience a shorter operating cycle, a proportion significantly higher than 50% (p < 0.01). The median (mean) abnormal change in inventory days is -10.31 days (-14.00 days), which is significant at the 1% (1%) level, with 62% of the sample firms shortening their inventory days. The median (mean) abnormal change in accounts receivable days is -1.05 days (-9.95 days), which is significant at the 5% (5%) level. More than half of the sample firms (51%) experience shorter accounts receivable days.

For the performance-industry-matched group from year -2 to year 2, the median (mean) cumulative change in operating cycle is -14.13 days (-11.59 days), which is significant at the 1% (1%) level. About 63% of the sample firms experience a shorter operating cycle, a proportion significantly higher than 50% (p < 0.01). The median (mean) abnormal change in inventory days is -9.69 days (-11.00 days), which is significant at the 1% (1%) level, with 63% of the sample firms shortening their inventory days. The median (mean) abnormal change in accounts receivable days is -1.56 days (-1.78 days). More than half of the sample firms (54%) experience shorter accounts receivable days.

For the performance-industry-matched group from year -2 to year 3, the median (mean) cumulative change in operating cycle is -15.12 days (-17.24 days), which is significant at the 1% (1%) level.
About 67% of sample firms experience a shorter operating cycle, which is significantly higher than 50% (p < 0.01). The

median (mean) abnormal change in inventory days is -9.47 days (-14.48 days), which is significant at the 1% (1%) level, with 65% of the sample firms shortening their inventory days. The median (mean) abnormal change in accounts receivable days is -4.32 days (-7.56 days). More than half of the sample firms (61%) experience shorter accounts receivable days. The results for abnormal supply chain performance are similar between the performance-industry-matched group and the performance-industry-size-matched group, except for the accounts receivable days results. In both matching groups, the impacts of ISO 9000 on operating cycle and inventory days are statistically significant. Hypotheses H1a and H3a are supported. However, the abnormal impact of ISO 9000 on accounts receivable days is not statistically significant in the performance-industry-size-matched group. These mixed results suggest that accounts receivable days are sensitive to firm size; hypothesis H2a is only partially supported. The overall results are robust across the three-year, four-year and five-year cumulative windows, revealing that the impact of ISO 9000 on supply chain performance is long lasting in the fashion and textiles industries. The reduction in operating cycle is largest in the period from year -2 to year 3, confirming that the impact of ISO 9000 endures: the certified textiles related firms improve their supply chain efficiency continuously over the five-year period. Table 2 presents the results for the ISO 14000 matching groups. For the performance-industry-matched group from year -2 to year 1, the median (mean) cumulative change in operating cycle is -11.15 days (-13.36 days), which is significant at the 5% (5%) level. More than half of the sample firms (64%) experience a shorter operating cycle, a proportion significantly higher than 50% (p < 0.05).
The median (mean) abnormal change in inventory days is -4.86 days (-6.36 days), which is significant at the 5% (5%) level, with 64% of the sample firms shortening their inventory days.



The median (mean) abnormal change in accounts receivable days is -2.54 days (-15.45 days), which is significant at the 5% (5%) level. More than half of the sample firms (62%) experience shorter accounts receivable days. The abnormal median changes of all three supply chain performance indicators are negative in the four-year (year -2 to year 2) and five-year (year -2 to year 3) periods. However, they are not statistically significant in nearly all the statistical tests. These results suggest that the impact of ISO 14000 on fashion and textiles related firms’ supply chain efficiency is temporary; it diminishes after year 1. We find no significant impact of ISO 14000 on the three indicators in the performance-industry-size-matched group. These mixed results suggest that hypotheses H1b, H2b and H3b are not supported. The adoption of ISO 14000 has some short-term impact on supply chain efficiency, but it is not enduring. We discuss these findings in the discussion section.

MODERATING FACTORS OF ISO 9000 ADOPTION IN THE FASHION SUPPLY CHAIN

We try to provide a deeper understanding of the association between ISO 9000 adoption and the improvement in operating cycle. We focus only on ISO 9000 because we found no significant change in operating cycle from ISO 14000 adoption in the previous section. We construct a regression model to study how firm-level characteristics affect the impact of ISO 9000 on abnormal operating cycle over the five-year event period (from year -2 to year 3). We use firm size and the original financial performance of the ISO 9000 certified firms as control factors. The abnormal performance in operating cycle of ISO 9000 certified firms could be more positive for larger firms. Larger firms normally have more resources for hiring external consultants, providing additional training, and dedicating additional manpower to facilitate the implementation process of ISO 9000. We use firms’ total assets to represent firm size. We also control for the financial performance of the firm because firms that are more profitable are more efficient in operations. As ISO 9000 adoption calls for improvement in a firm’s operational efficiency, firms that are more efficient may be able to implement the system more effectively than less efficient firms. Firms that are more profitable could also have more resources for ISO 9000 implementation. We estimate the financial performance of a firm as the firm’s ROA in year 3. We include three independent variables in the regression model: labour intensity, R&D intensity, and time of ISO 9000 adoption. The arguments and predictions for the three indicators are as follows.

Independent variables:

1. Labour intensity: The abnormal change in operating cycle could be more positive (i.e., a greater shortening of the operating cycle) in more labour-intensive firms, because these firms might have a greater need to standardize their operating procedures to ensure smooth production processes. Labour intensity is calculated as the number of employees over the firm’s total assets.

2. R&D intensity: Industries with higher R&D intensity normally face more rapid technological and product changes. There are thus more opportunities for new product development for high-technology textiles firms, which allows them to implement efficient process designs to a greater extent. Therefore, the positive impact of ISO 9000 on abnormal operating cycle could be higher in firms with higher levels of R&D intensity. We measure R&D intensity as R&D expenses over sales.

3. Time of ISO 9000 adoption: According to institutional theory, early adoptions of organizational innovations are motivated by technical and economic needs (DiMaggio & Powell, 1983), while later adopters respond to the growing social legitimacy of the innovations as taken-for-granted organizational structure improvements (Westphal, Gulati, & Shortell, 1997). ISO 9000 is a well-recognized example of an institutionalized management practice (Guler, et al., 2002). Therefore, we predict that early adopters of ISO 9000 could have a larger improvement in operating cycle than later ones, as the former are motivated by the technical benefits of ISO 9000.

Table 3 presents the cross-sectional regression results. We use the abnormal performance in operating cycle of the performance-industry-matched group for the analysis. For this model, the F-value is 2.89, which is significant at the 1% level. The adjusted R2 value is 8.0%, which is comparable to those observed in previous studies that used cross-sectional regression models to explain abnormal performance (e.g., Hendricks & Singhal, 2008). We find that firm size and labour intensity do not moderate the association between ISO 9000 adoption and abnormal operating cycle.

Fashion and textiles firms’ ROA is negatively related (p < 0.01) to abnormal operating cycle, meaning that more profitable textiles firms can further shorten their operating cycle time through ISO 9000 adoption. The coefficient of R&D intensity is negative and statistically significant at the 1% level (p < 0.01), meaning that high-technology fashion and textiles firms benefit more from the reduction of operating cycle time after ISO 9000 adoption. The coefficient of time of adoption is positive and statistically significant at the 5% level. Late adopters of ISO 9000 obtain less abnormal benefit from ISO 9000 adoption (i.e., they have a longer abnormal operating cycle compared with early adopters). This shows that the institutionalization of ISO 9000 observed in general manufacturing industries also appears in the fashion supply chain.
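The cross-sectional model can be sketched as an ordinary-least-squares fit of abnormal operating-cycle change on the five covariates. This is a synthetic illustration only: the data are randomly generated, the “true” coefficients merely echo the magnitudes in Table 3, and `numpy.linalg.lstsq` stands in for whatever statistical package the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 109  # number of observations in the chapter's model

# Synthetic covariates (invented for illustration).
X = np.column_stack([
    np.ones(n),                   # intercept
    rng.lognormal(6, 1, n),       # firm size (total assets)
    rng.normal(0.05, 0.1, n),     # ROA in year 3
    rng.normal(0.02, 0.01, n),    # labour intensity
    rng.normal(0.03, 0.02, n),    # R&D intensity
    rng.integers(1990, 2005, n),  # year of ISO 9000 adoption
])
true_beta = np.array([-50.0, 0.001, -108.0, 200.0, -61.5, 2.9])
y = X @ true_beta + rng.normal(0, 5, n)  # abnormal operating-cycle change

# OLS estimate of the coefficients.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta_hat, 1))
```

With real data one would also report t-statistics and the (adjusted) R-squared; the least-squares step itself is unchanged.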

DISCUSSION AND SUMMARY

This study provides empirical evidence on the impact of QMS and EMS adoption on firms’ supply chain performance in the fashion supply chain. Based on the 248 ISO 9000 and 61 ISO 14000 certified publicly-listed fashion or textiles related

Table 3. Estimated coefficients (t-statistics in parentheses) from the regression of abnormal operating cycle change from year -2 to year 3. A “+” denotes the predicted sign.

  Independent variables           Coefficient (t-statistic)
  Intercept                       -5879     (-1.684)**
  Firm size (+)                   0.001     (0.663)
  ROA (+)                         -108.259  (-2.911)***
  Labour intensity                200.802   (0.939)
  R&D intensity (+)               -61.501   (-2.463)***
  Year of ISO 9000 adoption (+)   2.940     (1.683)**

  Number of observations          109
  Model F value                   2.89***
  R squared (%)                   12.2
  Adjusted R squared (%)          8.0

Note: Significance levels (one-tailed tests) of independent variables: p < 0.1*; p < 0.05**; p < 0.01***.

c* = { c = 1, …, C : ICM_ic = max_{c'=1,…,C} {ICM_ic'} ∧ NTask_ic = max_{c''=1,…,C} {NTask_ic''} ∧ Time_ic = max_{c'''=1,…,C} {Time_ic'''} }  (18)

that is, machine i is assigned to the cell with the maximum ICM value; ties are broken first by the maximum number of tasks NTask_ic and then by the maximum time Time_ic.
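Read as an assignment rule, Equation (18) selects, for each machine i, the cell with the maximum ICM value, breaking ties by the number of tasks and then by the processing time. In code this is a lexicographic max over the candidate cells (a sketch; the dictionaries below are invented stand-ins for the ICM_ic, NTask_ic and Time_ic values):

```python
def assign_machine(icm, ntask, time):
    """Pick the cell with maximal ICM; ties broken by NTask, then by Time.

    icm, ntask, time: dicts mapping a cell index c to ICM_ic, NTask_ic and
    Time_ic for one machine i (illustrative data structures, not the
    chapter's own).
    """
    # Tuples compare element by element, giving the lexicographic tie-break.
    return max(icm, key=lambda c: (icm[c], ntask[c], time[c]))

# Cells 1 and 2 tie on ICM; cell 2 wins on the number of tasks.
icm = {1: 0.8, 2: 0.8, 3: 0.5}
ntask = {1: 3, 2: 4, 3: 6}
time = {1: 12.0, 2: 9.0, 3: 20.0}
print(assign_machine(icm, ntask, time))  # 2
```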


Similarity-Based Cluster Analysis for the Cell Formation Problem

The assignment of parts to the manufacturing cells yields the so-called block-diagonal incidence matrix, shown in Figure 5 for the application of the proposed procedure to the instance illustrated in Section 5.

Step 5: Plant Layout Configuration This step deals with the determination of the location of each manufacturing resource (machines and human resources) in the production area. Layout decisions are significantly influenced by the configuration of cells and part families in CM systems, but they are omitted in this chapter. Nevertheless, a few significant performance measures of the layout results are quantified, as explained below.

CLUSTERING PERFORMANCE EVALUATION

Sarker (2001) presents, discusses, and compares the most notable measurements of grouping efficiency in CM. The measurements adopted in the experimental analysis illustrated in the following are based on these definitions:

• Block. This is a submatrix of the machine-part matrix composed of rows representing a part family and columns representing the related machine cell.
• Void. This is a “zero” element appearing in a diagonal block (see Figure 4).
• Exceptional element. This is a “one” appearing in off-diagonal blocks (see Figure 4). The exceptional element causes intercell movements.

A set of CM measurements of performance quantified in the proposed experimental analysis is now reported and discussed. The (high) and (low) labels refer to the expected values for best performing CF and CM.

Problem Density: PD

PD = number of “ones” in the incidence matrix / number of elements in the incidence matrix  (19)

Global Inside cells density: ICD (high)

ICD = number of “ones” in diagonal blocks / number of elements in diagonal blocks  (20)

Figure 5. Block-diagonal matrix. Simple matching, farthest neighbour, 75th percentile.



Exceptional elements: EE (low)

EE = number of exceptional elements in the off-diagonal blocks (21)

Ratio of non-zero elements in cells: REC

REC = total number of "ones" / number of elements in diagonal blocks (22)

Grouping Efficiency: η (Sarker 2001) (high)

It is a weighted average of two functions and it is defined as:

η = q·η1 + (1 − q)·η2 (23)

η1 = e_d / Σ_{r=1..k} M_r N_r

η2 = 1 − e_o / (mn − Σ_{r=1..k} M_r N_r)

where

e_d  number of 1s in the diagonal blocks
e_o  number of 1s in the off-diagonal blocks
k    number of diagonal blocks
M_r  number of machines in the rth cell
N_r  number of components in the rth part-family
q    weighting factor (0 ≤ q ≤ 1) that fixes the relative importance between voids and inter-cell movements. If q = 0.5 both get the same importance: this is the value adopted in the numerical example and in the experimental analysis illustrated in Sections 5 and 6.

Quality Index: QI (Seifoddini and Djassemi 1994, 1996) (high)

It is a measure of the independence of machine-component groups. High values of QI are expected in presence of high independence. QI is defined as:

QI = 1 − ICW / PW (24)

where ICW is the total intercellular workload and PW the total plant workload. ICW and PW can be defined as:

ICW = Σ_{c=1..C} Σ_{i=1..m} Y_ic (Σ_{k=1..n} (1 − Z_kc) X_ik m_k T_ik) (25)

PW = Σ_{i=1..m} Σ_{k=1..n} X_ik m_k T_ik (26)

where n is the number of parts, k = 1,…,n the generic part, m the number of machines and i = 1,…,m the generic machine (this is the notation previously introduced), and

Y_ic = 1 if machine i is assigned to cell c, 0 otherwise
Z_kc = 1 if part k is assigned to cell c, 0 otherwise
X_ik = 1 if part k has an operation on machine i, 0 otherwise
m_k  volume of part k
T_ik  processing time of part k on machine i
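A sketch of the Quality Index computation (Eqs. 24–26), assuming operations are stored as a (machine, part) → T_ik map and part volumes as m_k; the data layout and all names are illustrative, not from the chapter.

```python
# Quality Index QI = 1 - ICW/PW (eqs. 24-26). ops maps (machine, part) to
# the processing time T_ik of the operations that exist (X_ik = 1);
# volume[k] is m_k; machine_cell / part_cell give the cell assignments.

def quality_index(ops, volume, machine_cell, part_cell):
    pw = sum(volume[k] * t for (i, k), t in ops.items())       # eq. 26
    icw = sum(volume[k] * t for (i, k), t in ops.items()
              if machine_cell[i] != part_cell[k])              # eq. 25
    return 1.0 - icw / pw                                      # eq. 24

# Two parts, three machines; part 0 (cell A) visits one machine of cell B.
ops = {(0, 0): 10, (1, 0): 5, (2, 1): 20}   # (machine, part): T_ik
volume = {0: 2, 1: 1}                        # m_k
machine_cell = {0: "A", 1: "B", 2: "B"}
part_cell = {0: "A", 1: "B"}
print(quality_index(ops, volume, machine_cell, part_cell))   # 1 - 10/50 = 0.8
```

Summing intercellular workload over operations whose machine cell differs from the part cell is equivalent to Eq. 25: each operation (i, k) is counted once, with weight (1 − Z_k,cell(i)).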


QI measures the intracellular movements, which are to be maximized while minimizing the intercellular ones. The authors now introduce a new grouping efficiency based on QI as previously defined.

Grouping Efficiency based on QI: ηQI (high)

ηQI = q·η1 + (1 − q)·QI (27)

The adopted value of the weighting factor q is:

q = Σ_{r=1..k} M_r N_r / (mn) (28)

Grouping Efficacy: τ (Kumar and Chandrasekharan 1990) (high)

Grouping efficacy can be quantified by the application of the following equation:

τ = (e − e_o) / (e + e_v) (29)

where

e    total number of "ones" in the matrix (i.e. the total number of operations)
e_o = EE  number of exceptional elements (number of "ones" in the off-diagonal blocks)
e_v  number of voids (number of "zeros" in the diagonal blocks).

Grouping measure: ηG (Miltenburg and Zhang 1991) (high)

It gives higher values if both the number of voids and the number of exceptional elements are fewer, and it is defined as:

ηG = ηu − ηm (30)

ηu = e1 / (e1 + e_v)

ηm = e_o / e

where

ηu  ratio of the number of 1s to the number of total elements in the diagonal blocks (this is the inside cell density, ICD)
ηm  ratio of exceptional elements to the total number of 1s in the matrix
e1  number of 1s in the diagonal blocks.

Group technology efficiency: GTE (Nair and Narendran 1998) (high)

It is defined as the ratio of the difference between the maximum number of inter-cell travels possible and the number of inter-cell travels actually required to the maximum number of inter-cell travels possible:

GTE = (I − U) / I (31)

I = Σ_{j=1..n} (r_j − 1)

U = Σ_{j=1..n} Σ_{s=1..r_j−1} x_js

where

I    maximum number of inter-cell travels
U    number of inter-cell movements required by the system
r_j  number of operations of component j
x_js = 0 if operations s, s + 1 of component j are performed in the same cell, 1 otherwise.



Bond efficiency: BE (high)

This is an important index because it depends both on the within-cell compactness (through the ICD) and on the minimization of inter-cell movements (through the GTE). It is defined as:

BE = q·ICD + (1 − q)·GTE (32)

The adopted value of the weight q in the experimental analysis is 0.5.
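The block-based measures above (grouping efficacy τ, grouping efficiency η with q = 0.5, and grouping measure ηG) can be sketched together for a binary matrix with a given cell/family partition; the names and the toy example are illustrative, not from the chapter.

```python
# tau (eq. 29), eta (eq. 23, q = 0.5) and eta_G (eq. 30) for a binary
# machine-part matrix; cells[c] and families[c] index the c-th diagonal
# block, as in the earlier sketch.

def block_counts(a, cells, families):
    e = sum(sum(row) for row in a)                       # all ones
    diag = {(i, k) for c, f in zip(cells, families) for i in c for k in f}
    e_d = sum(a[i][k] for i, k in diag)                  # ones in blocks
    e_v = len(diag) - e_d                                # voids
    e_o = e - e_d                                        # exceptional elems
    return e, e_d, e_o, e_v

def measures(a, cells, families, q=0.5):
    e, e_d, e_o, e_v = block_counts(a, cells, families)
    m, n = len(a), len(a[0])
    size_diag = e_d + e_v                                # elements in blocks
    tau = (e - e_o) / (e + e_v)                          # eq. 29
    eta1 = e_d / size_diag
    eta2 = 1 - e_o / (m * n - size_diag)
    eta = q * eta1 + (1 - q) * eta2                      # eq. 23
    eta_g = e_d / (e_d + e_v) - e_o / e                  # eq. 30
    return tau, eta, eta_g

A = [[1, 1, 0],
     [1, 0, 1],
     [0, 0, 1]]
tau, eta, eta_g = measures(A, [[0, 1], [2]], [[0, 1], [2]])
print(round(tau, 3), round(eta, 3), round(eta_g, 3))   # 0.667 0.775 0.6
```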

NUMERICAL EXAMPLE

This section presents a numerical example relating to a problem oriented instance presented by De Witte (1980), made of 19 parts and 12 machines. Manufacturing input data are reported in Table 2. Table 3 reports the 12×19 machine-part incidence matrix useful for the evaluation of a general purpose similarity index.

A General Purpose Evaluation

This section presents the results obtained by the application of a general purpose similarity index in cluster analysis for the cell formation problem. Table 4 reports the result of the evaluation of the general purpose index known as Simple Matching (SI) and defined in Table 1. Figure 4 shows the dendrogram generated by the application of the fn rule combined with the SI similarity coefficient. In particular, a sequence of numbers is explicitly reported in the figure for each node of the diagram. The generic node corresponds to a specific aggregation ordered in agreement with the similarity metric and the adopted hierarchical rule. The list of nodes and aggregations, the related values of similarity, and the number of objects per group are also reported in Table 5. The obtained number of nodes is 11.
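Simple matching between two machine rows counts the agreeing positions (parts processed by both machines plus parts processed by neither) over all parts. Below is a minimal sketch in the classical Sokal–Michener form SI = (a + d)/(a + b + c + d); Table 1 of the chapter gives the exact variant used to produce Table 4, so this is an illustration of the idea rather than a reproduction of those values.

```python
# Classical simple matching coefficient between two binary machine rows:
# the fraction of parts on which the two rows agree (both 1 or both 0).

def simple_matching(row_i, row_j):
    matches = sum(1 for x, y in zip(row_i, row_j) if x == y)
    return matches / len(row_i)

m_a = [1, 1, 0, 0, 1]   # illustrative 5-part rows, not from Table 3
m_b = [1, 0, 0, 0, 1]
print(simple_matching(m_a, m_b))   # 4 agreements out of 5 -> 0.8
```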


Now it is possible to define a partitioning of the available set of machines by the identification of a cut value, the so-called "cutting threshold similarity value". The adopted level of homogeneity within the generic cluster is the percentile-based threshold measure discussed in Section 3. Given the dendrogram in Figure 4 and assuming a threshold percentile cut value equal to 20°, the corresponding range of similarity is (0.585, 0.622) as demonstrated in Section 3. The obtained configuration of the manufacturing cells (nine different cells are obtained) is:

Cell 1 (single machine): M12
Cell 2 (single machine): M11

Table 2. Manufacturing input data, De Witte (1980)

Part | Volume | Work Cycle | Processing Time
p1  | 2 | m1, m4, m8, m9 | 20, 15, 10, 10
p2  | 3 | m1, m2, m6, m4, m8, m7 | 20, 20, 15, 15, 10, 25
p3  | 1 | m1, m2, m4, m7, m8, m9 | 20, 20, 15, 25, 10, 15
p4  | 3 | m1, m4, m7, m9 | 20, 15, 25, 15
p5  | 2 | m1, m6, m10, m7, m9 | 20, 15, 20, 25, 15
p6  | 1 | m6, m10, m7, m8, m9 | 15, 50, 25, 10, 15
p7  | 2 | m6, m4, m8, m9 | 15, 15, 10, 15
p8  | 1 | m3, m5, m2, m6, m4, m8, m9 | 30, 50, 20, 15, 15, 10, 15
p9  | 1 | m3, m5, m6, m4, m8, m9 | 30, 50, 15, 15, 10, 15
p10 | 2 | m3, m6, m4, m8 | 30, 15, 15, 10
p11 | 3 | m6, m12 | 15, 20
p12 | 1 | m11, m7, m12 | 40, 25, 20
p13 | 1 | m11, m10, m7, m12 | 40, 50, 25, 20
p14 | 3 | m11, m7, m10 | 40, 25, 50
p15 | 1 | m11, m10 | 40, 50
p16 | 2 | m11, m12 | 40, 20
p17 | 1 | m11, m7, m12 | 40, 25, 20
p18 | 3 | m6, m7, m10 | 15, 25, 50
p19 | 2 | m10, m7 | 50, 25


Table 3. Machine-part incidence matrix (rows m1–m12, columns p1–p19, derived from the work cycles in Table 2; "·" denotes a zero)

      p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13 p14 p15 p16 p17 p18 p19
m1     1  1  1  1  1  ·  ·  ·  ·  ·   ·   ·   ·   ·   ·   ·   ·   ·   ·
m2     ·  1  1  ·  ·  ·  ·  1  ·  ·   ·   ·   ·   ·   ·   ·   ·   ·   ·
m3     ·  ·  ·  ·  ·  ·  ·  1  1  1   ·   ·   ·   ·   ·   ·   ·   ·   ·
m4     1  1  1  1  ·  ·  1  1  1  1   ·   ·   ·   ·   ·   ·   ·   ·   ·
m5     ·  ·  ·  ·  ·  ·  ·  1  1  ·   ·   ·   ·   ·   ·   ·   ·   ·   ·
m6     ·  1  ·  ·  1  1  1  1  1  1   1   ·   ·   ·   ·   ·   ·   1   ·
m7     ·  1  1  1  1  1  ·  ·  ·  ·   ·   1   1   1   ·   ·   1   1   1
m8     1  1  1  ·  ·  1  1  1  1  1   ·   ·   ·   ·   ·   ·   ·   ·   ·
m9     1  ·  1  1  1  1  1  1  1  ·   ·   ·   ·   ·   ·   ·   ·   ·   ·
m10    ·  ·  ·  ·  1  1  ·  ·  ·  ·   ·   ·   1   1   1   ·   ·   1   1
m11    ·  ·  ·  ·  ·  ·  ·  ·  ·  ·   ·   1   1   1   1   1   1   ·   ·
m12    ·  ·  ·  ·  ·  ·  ·  ·  ·  ·   1   1   1   ·   ·   1   1   ·   ·

Cell 3 (single machine): M10
Cell 4 (single machine): M7
Cell 5 (two machines): M3, M5
Cell 6 (single machine): M9
Cell 7 (two machines): M8, M4
Cell 8 (single machine): M2
Cell 9 (single machine): M1

In case a cut value corresponds to one or more nodes generated by the hierarchical process of aggregation, it is possible to include (or exclude) the node in the formation of cells. In particular, assuming a level of threshold similarity equal to 80°, two alternative configurations can be obtained as the result of the inclusion/exclusion of one or more nodes of the dendrogram located in correspondence of the cutting level:

Case 1: Including node 10 and node 9

Table 4. Simple matching similarity matrix

      m1     m2     m3     m4     m5     m6     m7     m8     m9     m10    m11    m12
m1   1.0000
m2   0.5479 1.0000
m3   0.4021 0.5479 1.0000
m4   0.5118 0.5118 0.5118 1.0000
m5   0.4389 0.5847 0.6576 0.4750 1.0000
m6   0.3292 0.4021 0.4750 0.4389 0.4389 1.0000
m7   0.4021 0.3292 0.1826 0.2195 0.2195 0.2556 1.0000
m8   0.4389 0.5118 0.5118 0.6215 0.4750 0.5118 0.2194 1.0000
m9   0.5118 0.4389 0.4389 0.5479 0.4750 0.4389 0.2924 0.5479 1.0000
m10  0.3292 0.3292 0.3292 0.1465 0.3653 0.3292 0.4750 0.2194 0.2924 1.0000
m11  0.2924 0.3653 0.3653 0.1826 0.4021 0.1465 0.3653 0.1826 0.1826 0.4389 1.0000
m12  0.3292 0.4021 0.4021 0.2194 0.4389 0.2556 0.3292 0.214  0.2194 0.3292 0.5847 1.0000



Table 5. List and configuration of nodes generated by the fn rule & SI similarity coefficient

Node | Group 1 | Group 2 | Simil. | Objects in Group
1    | M3      | M5      | 0.658  | 2
2    | M4      | M8      | 0.622  | 2
3    | M11     | M12     | 0.585  | 2
4    | M1      | M2      | 0.548  | 2
5    | Node 2  | M9      | 0.548  | 3
6    | M7      | M10     | 0.475  | 2
7    | Node 4  | Node 5  | 0.439  | 5
8    | Node 1  | M6      | 0.439  | 3
9    | Node 7  | Node 8  | 0.329  | 8
10   | Node 6  | Node 3  | 0.329  | 4
11   | Node 9  | Node 10 | 0.146  | 12
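The fn (farthest neighbour, i.e. complete linkage) aggregation that generates Table 5 can be sketched as follows: at every step the pair of clusters with the highest similarity is merged, where the similarity between two clusters is the minimum pairwise similarity of their members. The function and variable names are illustrative; the three-machine example uses values from Table 4 and reproduces nodes 1 and 8 of Table 5.

```python
# Farthest-neighbour (complete-linkage) agglomeration on a similarity
# matrix. sim is a symmetric mapping keyed by frozenset pairs of objects.

from itertools import combinations

def farthest_neighbour(objects, sim):
    clusters = [frozenset([o]) for o in objects]
    nodes = []                                   # (merged cluster, level)
    while len(clusters) > 1:
        # cluster-to-cluster similarity = MIN pairwise similarity
        best = max(combinations(clusters, 2),
                   key=lambda pair: min(sim[frozenset((a, b))]
                                        for a in pair[0] for b in pair[1]))
        level = min(sim[frozenset((a, b))] for a in best[0] for b in best[1])
        clusters = [c for c in clusters if c not in best] + [best[0] | best[1]]
        nodes.append((best[0] | best[1], level))
    return nodes

objs = ["M3", "M5", "M6"]
sim = {frozenset(("M3", "M5")): 0.658,
       frozenset(("M3", "M6")): 0.475,
       frozenset(("M5", "M6")): 0.439}
for cluster, level in farthest_neighbour(objs, sim):
    print(sorted(cluster), round(level, 3))
```

The first merge joins M3 and M5 at 0.658 (node 1 of Table 5); the second joins that pair with M6 at min(0.475, 0.439) = 0.439 (node 8).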

The second column in Table 8 reports the values of the performance evaluation obtained for the case study of this numerical example adopting the Simple Matching similarity index, the fn heuristic, and a cutting threshold percentile value equal to 75°.

A Problem Oriented Evaluation

Cell 1 (four machines): M12, M11, M10, M7
Cell 2 (eight machines): M6, M5, M3, M9, M8, M4, M2, M1.

Case 2: Not including node 10 and node 9

Cell 1 (two machines): M12, M11
Cell 2 (two machines): M10, M7
Cell 3 (three machines): M6, M5, M3
Cell 4 (five machines): M9, M8, M4, M2, M1.

Table 6 reports the result of the evaluation of the problem oriented similarity coefficient proposed by Nair and Narendran (1998). Figure 6 shows the dendrogram obtained by the application of the fn clustering rule combined with this similarity coefficient to the literature instance of interest. The generic node of the dendrogram corresponds to a specific aggregation ordered in agreement with the adopted similarity metric and the adopted hierarchical rule. The list of nodes and aggregations, the related values of similarity, and the number of objects per group are reported in Table 7. The obtained number of nodes is 11.

Table 6. Nair & Narendran similarity matrix

      m1     m2     m3     m4     m5     m6     m7     m8     m9     m10    m11    m12
m1   1.0000
m2   0.5000 1.0000
m3   0.0000 0.2220 1.0000
m4   0.6920 0.5000 0.4210 1.0000
m5   0.0000 0.2860 0.6670 0.2350 1.0000
m6   0.3450 0.3480 0.3640 0.5450 0.2000 1.0000
m7   0.5620 0.3080 0.0000 0.3890 0.0000 0.4620 1.0000
m8   0.5000 0.5560 0.4710 0.8570 0.2670 0.6450 0.2940 1.0000
m9   0.6670 0.2220 0.2350 0.7150 0.2670 0.4520 0.4720 0.6150 1.0000
m10  0.1670 0.0000 0.0000 0.0000 0.0000 0.3870 0.7060 0.0770 0.2310 1.0000
m11  0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.4000 0.0000 0.0000 0.4550 1.0000
m12  0.0000 0.0000 0.0000 0.0000 0.0000 0.2310 0.2070 0.0000 0.0000 0.0950 0.5880 1.0000


Figure 6. Dendrogram by the application of Nair & Narendran similarity coefficient and the farthest neighbour

Table 7. List and configuration of nodes generated by the fn rule & Nair and Narendran (1998) similarity coefficient

Node | Group 1 | Group 2 | Simil. | Objects in Group
1    | M4      | M8      | 0.857  | 2
2    | M7      | M10     | 0.706  | 2
3    | M1      | M9      | 0.667  | 2
4    | M3      | M5      | 0.667  | 2
5    | M11     | M12     | 0.588  | 2
6    | Node 1  | M6      | 0.545  | 3
7    | M2      | Node 6  | 0.348  | 4
8    | Node 3  | Node 7  | 0.222  | 6
9    | Node 2  | Node 5  | 0.095  | 4
10   | Node 8  | Node 4  |        | 8
11   | Node 10 | Node 9  |        | 12

Assuming %p = 20°:

T_value_20° ∈ [simil{⌈0.20×11⌉}, simil{⌊0.20×11⌋}] = [simil{3}, simil{2}] = [0.667, 0.706]

The obtained configuration of the manufacturing cells (eleven different cells are obtained) is:

Single machine cells: Cell 1 (M12), Cell 2 (M11), Cell 4 (M5), Cell 5 (M3), Cell 6 (M3), Cell 7 (M6), Cell 9 (M2), Cell 10 (M9), Cell 11 (M1)
Double machine cells: Cell 3 (M7, M10), Cell 8 (M8, M4).

Assuming %p = 80°:

T_value_80° ∈ [simil{⌈0.80×11⌉}, simil{⌊0.80×11⌋}] = [simil{9}, simil{8}] = [0.095, 0.222]

The obtained configuration of the manufacturing cells (four different cells are obtained) is:

Cell 1 (two machines): M11, M12
Cell 2 (two machines): M7, M10
Cell 3 (two machines): M3, M5
Cell 4 (six machines): M6, M8, M4, M2, M9, M1
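The percentile-based cut above can be sketched assuming the rule T_value_p ∈ [simil{⌈p·K⌉}, simil{⌊p·K⌋}] over the K = 11 node similarities in aggregation order; the similarities of nodes 10 and 11 are not printed in the source and are left as placeholders.

```python
# Percentile-based cutting threshold over the dendrogram node similarities.
# levels must be given in aggregation order (node 1 first); indices into it
# are 1-based, as in the simil{.} notation of the text.

import math

def threshold_range(levels, p):
    k = len(levels)
    hi_idx, lo_idx = math.ceil(p * k), math.floor(p * k)
    return levels[hi_idx - 1], levels[lo_idx - 1]

# Node similarities from Table 7; nodes 10 and 11 are missing in the source.
levels = [0.857, 0.706, 0.667, 0.667, 0.588, 0.545, 0.348, 0.222, 0.095,
          None, None]
print(threshold_range(levels, 0.20))   # (0.667, 0.706)
print(threshold_range(levels, 0.80))   # (0.095, 0.222)
```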



Table 8. Performance evaluation of numerical example; 75° percentile

Similarity index                | ID  | Simple matching | Nair & Narendran
Problem Density                 | PD  | 0.329 | 0.329
Inside Cells Density            | ICD | 0.705 | 0.828
REC                             | REC | 0.962 | 1.293
Exceptional Elements            | EE  | 20    | 27
Grouping Efficiency [%]         | η   | 60.2  | 57.9
Grouping Efficiency QI [%]      | ηQI | 68.8  | 72.8
Group Technology Efficiency [%] | GTE | 61.9  | 45.5
Bond Efficiency [%]             | BE  | 66.2  | 66.1
Grouping Efficacy [%]           | τ   | 82.9  | 84.5
Grouping measure                | ηG  | 0.438 | 0.468

The third column in Table 8 reports the values of the performance evaluation obtained for the case study of this numerical example adopting the Nair and Narendran similarity index, the fn heuristic, and a cutting threshold percentile value equal to 75°. Which is the best similarity index? This question cannot be answered as it stands, because the previous sections demonstrate that several factors affect the performance of the system configuration: the similarity index, the clustering rule, the threshold cutting value of similarity, and the part assignment rule. As a consequence it is useful to measure the simultaneous effects generated by different combinations of these critical factors. The next section presents an experimental analysis conducted on the instance proposed by De Witte (1980), comparing the performance obtained adopting general purpose and problem oriented similarity metrics.

EXPERIMENTAL ANALYSIS

This section presents the results obtained by the application of the proposed systematic procedure to cell formation and part assignment to cells (part family formation), as the result of different settings of the similarity and hierarchical procedure illustrated in the previous sections. This what-if analysis is applied to the problem oriented instance introduced by De Witte (1980) and reported in Table 2. This analysis represents the first step to identify the best combination of

Table 9. What-if analysis, factors and levels

Similarity Coefficient — general purpose: J, SI, H, B, SO, R, SK, O, RM, RR; problem oriented: S, GS, SH (fbk=0.6; fek=0.4), N
Rule: CLINK, ALINK, SLINK
Percentile: 10°, 25°, 40°, 50°, 75°
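The what-if grid of Table 9 can be enumerated directly; for the general purpose side, each combination of similarity coefficient, clustering rule and percentile is one experimental run (the labels follow Table 9, the enumeration itself is an illustrative sketch).

```python
# Full-factorial enumeration of the general purpose side of Table 9:
# every (coefficient, rule, percentile) triple is one run of the analysis.

from itertools import product

coefficients = ["J", "SI", "H", "B", "SO", "R", "SK", "O", "RM", "RR"]
rules = ["CLINK", "ALINK", "SLINK"]
percentiles = [10, 25, 40, 50, 75]

runs = list(product(coefficients, rules, percentiles))
print(len(runs))   # 10 * 3 * 5 = 150 runs
```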

Figure 7. Block-diagonal matrix: Nair and Narendran, farthest neighbour, 75° percentile



values, called levels, for the parameters, called factors, of the decision problem. Table 9 reports the adopted levels for each factor in the experimental analysis. Figures 8 to 10 present the main effects plots (Minitab® Statistical Software) for the following performance indices: ηG (called η(G) in the figures), τ, and BE.

Similarity indices perform differently in terms of ηG, τ, and BE. In particular, problem oriented (PO) indices perform better than general purpose (GP) ones. The CLINK rule and a percentile threshold value equal to 50° (or 75°) seem to be the best levels for setting the clustering algorithm. The best performing indices are Seifoddini – S (1987) and Nair and Narendran – N (1998).

Figure 8. Main effects plot for grouping measure

Figure 9. Main effects plot for grouping efficacy



Figure 10. Main effects plot for bond efficiency

Figure 11 shows that the number of exceptional elements significantly depends on the adopted threshold value of group similarity, while the adopted similarity index is not significant. ηQI, called η(QI) in Figure 12, shows an anomalous trend compared with the previous graphs.

Figure 11. Main effects plot for exceptional elements


Figure 13 shows the trend of the EE for different couples of factors, highlighting the importance of the percentile threshold value of group similarity. Similarly, Figure 14 shows the importance of the threshold value of similarity and of the CLINK rule for grouping items.


Figure 12. Main effects plot for grouping efficacy based on QI

CONCLUSION AND FURTHER RESEARCH

This chapter illustrates the CFP as supported by similarity based manufacturing clustering, and a hierarchical and systematic procedure for supporting managers in the configuration of cellular manufacturing systems by the application of cluster analysis and similarity indices. In particular, both general purpose and problem oriented indices are illustrated and applied. The experimental analysis conducted on a literature problem oriented case study represents the first basis for the identification of the best setting of the cell formation problem and supporting decision models and tools.

Figure 13. Exceptional elements for couples of factors



Figure 14. Interaction plot for τ

For the first time, this chapter successfully applies the threshold group similarity index to a problem oriented similarity environment. The threshold value was introduced by the authors in a previous study on the evaluation of general purpose indices (Manzini et al. 2010). This chapter confirms the importance of this threshold cut value for the dendrogram when it is expressed as a percentile of the number of nodes. Further research is expected to extend the experimental analysis to more case studies and applications. Finally, it is important to improve the critical process of part family formation and the decisions regarding the duplication of machines and resources in different manufacturing cells in order to minimize intercellular flows.

REFERENCES

Aldenderfer, M. S., & Blashfield, R. K. (1984). Cluster analysis. Sage University Paper Series on Quantitative Applications in the Social Sciences, No. 07-044. Beverly Hills, CA: Sage.


Alhourani, F., & Seifoddini, H. (2007). Machine cell formation for production management in cellular manufacturing systems. International Journal of Production Research, 45(4), 913–934. doi:10.1080/00207540600664144

Bindi, F., Manzini, R., Pareschi, A., & Regattieri, A. (2009). Similarity-based storage allocation rules in an order picking system: An application to the food service industry. International Journal of Logistics Research and Applications, 12(4), 233–247. doi:10.1080/13675560903075943

De Witte, J. (1980). The use of similarity coefficients in production flow analysis. International Journal of Production Research, 18, 503–514. doi:10.1080/00207548008919686

Gupta, T., & Seifoddini, H. (1990). Production data based similarity coefficient for machine-component grouping decisions in the design of a cellular manufacturing system. International Journal of Production Research, 28, 1247–1269. doi:10.1080/00207549008942791

Heragu, S. (1997). Facilities design. Boston, MA: PWS Publishing Company.


Kumar, C. S., & Chandrasekharan, M. P. (1990). Grouping efficacy: A quantitative criterion for goodness of block diagonal forms of binary matrices in group technology. International Journal of Production Research, 28(2), 233–243. doi:10.1080/00207549008942706

Manzini, R., & Bindi, F. (2009). Strategic design and operational management optimization of a multi stage physical distribution system. Transportation Research Part E: Logistics and Transportation Review, 45, 915–936. doi:10.1016/j.tre.2009.04.011

Manzini, R., Bindi, F., & Pareschi, A. (2010). The threshold value of group similarity in the formation of cellular manufacturing system. International Journal of Production Research, 48(10), 3029–3060. doi:10.1080/00207540802644860

Manzini, R., Persona, A., & Regattieri, A. (2006). Framework for designing and controlling a multicellular flexible manufacturing system. International Journal of Services and Operations Management, 2, 1–21. doi:10.1504/IJSOM.2006.009031

McAuley, J. (1972). Machine grouping for efficient production. Production Engineering, 51, 53–57. doi:10.1049/tpe.1972.0006

Mosier, C. T. (1989). An experiment investigating the application of clustering procedures and similarity coefficients to the GT machine cell formation problem. International Journal of Production Research, 27(10), 1811–1835. doi:10.1080/00207548908942656

Nair, G. J., & Narendran, T. T. (1998). CASE: A clustering algorithm for cell formation with sequence data. International Journal of Production Research, 36, 157–179. doi:10.1080/002075498193985

Papaioannou, G., & Wilson, J. M. (2010). The evolution of cell formation problem methodologies based on recent studies (1987-2008): Review and directions for future research. European Journal of Operational Research, 206, 509–521.

Sarker, B. R. (2001). Measures of grouping efficiency in cellular manufacturing systems. European Journal of Operational Research, 130, 588–611. doi:10.1016/S0377-2217(99)00419-1

Seifoddini, H. (1987). Incorporation of the production volume in machine cell formation in group technology applications. Proceedings of the 9th International Conference on Production Research ICPR, (pp. 2348-2356).

Seifoddini, H., & Djassemi, M. (1994). Analysis of efficiency measures for block diagonal machine-component charts. Proceedings of the 16th International Conference on Computers and Industrial Engineering, Ashikaga, Japan.

Seifoddini, H., & Djassemi, M. (1996). The threshold value of a quality index for formation of cellular manufacturing systems. International Journal of Production Research, 34(12), 3401–3416. doi:10.1080/00207549608905097

Sokal, R. R., & Sneath, P. H. A. (1968). Principles of numerical taxonomy. San Francisco, CA: W. H. Freeman.

Stawowy, A. (2004). Evolutionary strategy for manufacturing cell design. Omega: The International Journal of Management Science, 34, 1–18. doi:10.1016/j.omega.2004.07.016

Yin, Y., & Yasuda, K. (2006). Similarity coefficient methods applied to cell formation problem: A taxonomy and review. International Journal of Production Economics, 101, 329–352. doi:10.1016/j.ijpe.2005.01.014

This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 140-163, copyright 2012 by Business Science Reference (an imprint of IGI Global).


Chapter 30

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles

Paolo Renna, University of Basilicata, Italy
Michele Ambrico, University of Basilicata, Italy

ABSTRACT

Cellular manufacturing systems (CMSs) are an effective response to an economic environment characterized by high market variability. The aim of this chapter is to compare different configurations of cellular models through the main performance measures. These configurations are fractal CMSs (defined FCMS) and cellular systems with remainder cells (defined RCMS), compared to the classical CMS used as a benchmark. FCMSs consist of a cellular system characterized by identical cells, each capable of producing all types of parts. RCMSs consist of a classical CMS with an additional cell (remainder cell) that in specific conditions is able to perform all the technological operations. A simulation environment based on Rockwell ARENA® has been developed to compare the different configurations assuming a constant mix of demand and different congestion levels. The simulation results show that RCMSs can be a competitive alternative to traditional cells by developing opportune methodologies to control the loading of the cells.

DOI: 10.4018/978-1-4666-1945-6.ch030

INTRODUCTION

Competitiveness in today's market is much more intense compared to past decades. Considerable resources are invested in facilities planning and re-planning in order to adapt manufacturing systems to market changes. A well-established manufacturing philosophy is the group technology concept. Group technology (GT) can be defined as a manufacturing philosophy identifying similar parts and grouping them together to take advantage

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles

of their similarities in manufacturing and design (Selim et al., 1998). It is the basis of so-called cellular manufacturing systems (CMSs). In the current production scenario, demand for products is characterized by continuous fluctuations in terms of volumes, type of product (part mix) and new product introductions, and the life cycle of products has significantly shortened. The planning horizon needs to be divided into smaller horizons (time buckets), and the length of each period is related to the characteristics of the products. These characteristics need to be considered in the design process of a manufacturing system. The introduction of cellular manufacturing systems has already brought significant improvements. They are conceived with the aim of reducing costs such as setup costs and handling costs, and also of reducing lead time and work in process (WIP). They combine the advantages of the flow shop and the job shop, but a further step can be accomplished to be competitive in the market. They allow significant improvements in product quality, worker satisfaction, and space utilization. Benefits and disadvantages (Irani et al., 1999) are shown in Table 1. Irani et al. documented that companies implementing cellular manufacturing have a very high probability of obtaining improvements in various areas. The first column of Table 1 shows the percentage of case studies with improvements and the second column reports the average percentage of improvement of each measure. Similarly, the third column shows the percentage of cases with worsening and the fourth column the average rate of deterioration. The demand volatility and continuous new product introduction lead to re-configuring cellular manufacturing systems several times in order to keep a high level of performance. For the above reasons, new configurations have been proposed in the literature, such as the Virtual Cell Manufacturing System (VCMS), the Fractal Cell Manufacturing System (FCMS) and the Dynamic Cell Manufacturing System (DCMS), with the aim of keeping the flexibility of manufacturing systems high. The concept of DCMS was introduced for the first time by Rheault et al. (1995). It provides a physical reconfiguration of the cells. The reconfiguration activity can be periodic or can result from the variation of performance parameters. Reconfiguration can mean duplicating machines,

Table 1. Benefits and disadvantages of CMS

Measure                    | % cases with improvements | Avg. % improvement | % cases with worsening | Avg. % worsening
Tooling cost               | 31%  | -10% | 69% | +17%
Labor cost                 | 91%  | -33% | 9%  | +25%
Setup time                 | 84%  | -53% | 16% | +32%
Cycle time                 | 84%  | -40% | 16% | +30%
Machine utilization        | 53%  | +33% | 47% | -20%
Subcontracting             | 57%  | -50% | 43% | +10%
Product quality            | 90%  | +31% | 10% | -15%
Worker satisfaction        | 95%  | +36% | 5%  | -
Space utilization          | 17%  | -25% | 83% | +40%
WIP inventory              | 87%  | -58% | 13% | +20%
Labor turnover/absenteeism | 100% | -50% | -   | -
Variable production cost   | 93%  | -18% | 7%  | +10%



relocating machines between cells, removing machines, or also subcontracting some parts to other companies. These problems must be addressed by the decision maker. The concept of VCMS requires that the machines are dedicated to a part family, but these machines are not necessarily close together in a classical cell. One machine can belong simultaneously to different cells. Hence the sharing of machines makes the system more flexible. Moreover, the machines are not shifted as in a dynamic cellular system, therefore the costs of reallocation are eliminated. On the other hand, the increase in the movements of parts (or batches) across machines must be considered. A further problem may be the complication in the measurement of the performance of the cells: monitoring stations are usually located outside the cell, but in this case the cell does not exist physically. The FCMSs are based on the construction of identical cells that are not built for different families. The idea comes from Skinner (1974): to build a factory within a factory with duplication of processes. Each cell can work all products. Working times will be greater, but these configurations are very effective if there are changes in the part mix, in cases of machine breakdowns, or, for example, when there are flash orders. A further idea was mentioned by Maddisetty (2005), who referred to so-called remainder cells; we can call these systems RCMSs. In addition to the traditional cells referring to the product families, an additional cell can be created that operates when conditions such as machine failures or overloaded machines occur. Focusing on an advanced design, the RCMSs could provide interesting results in terms of competitiveness. Our goal in this chapter is to compare the various approaches to the design of manufacturing systems, making a complete performance comparison. In particular, we aim to compare the following systems: CMSs, FCMSs and RCMSs. A simulation environment has been developed to compare the performance (WIP, throughput time, tardiness, throughput and average utilization) using the classic CMS as a benchmark. The aim is to evaluate the responses of the different systems when market fluctuations occur in terms of arrival demand. The chapter is structured as follows. Section 2 provides an overview of the literature on the various manufacturing system configurations, while in Section 3 the system context is formulated. Section 4 gives a brief description of the scheduling approaches. Section 5 presents the simulation environment and the case study, while Section 6 discusses the simulation results. In Section 7 conclusions and future developments are discussed.

BACKGROUND

Recently, several authors have investigated the configuration of manufacturing cells in order to keep a high level of performance when market conditions change. Hachicha et al. (2007) proposed a simulation based methodology which takes into consideration the stochastic aspect in the CMS. They took into account the existence of exceptional elements between the parts and the effect of the corresponding inter-cell movements. They compared two strategies: permitting intercellular transfer and exceptional machine duplication. They used simulation (Rockwell Arena) and analyzed the following performance measures: mean transfer time, mean machining time, mean wait time, and mean flow time. They assumed demand fixed and known for the parts. They did not consider failures of machines or maintenance policies. A multi-objective dynamic cell formation was presented by Bajestani et al. (2007), whose purpose was to minimize simultaneously the total cell load variation and the sum of miscellaneous costs (machine cost, inter-cell material handling cost, and machine relocation cost). Since the problem


is NP-hard, they used a scatter search approach for finding a locally Pareto-optimal frontier. Safei et al. (2007) proposed to use an approach based on fuzzy logic for the design of CMSs under uncertain and dynamic conditions. They began by finding that in most of the research related to DCMSs the input parameters were considered deterministic and certain. Therefore they introduced fuzzy logic as a tool for the expression of the uncertainty in design parameters such as part demand and available machine capacity. Ahkioon et al. (2007) investigated DCMSs focusing on routing flexibility. They studied the creation of alternate contingency process routings in addition to alternate main process routings for all part types. Contingency routings have the function of providing continuity in case of exceptional events such as machine breakdowns but also flash orders. Furthermore, their work provided discussions on the trade-off between the additional cost related to the formation of contingency routings and the advantages of increased flexibility. The linearized model proposed by the authors was solved with CPLEX. Aryanezhad et al. (2008) developed a new model which simultaneously embraces the dynamic cell formation and worker assignment problems. They focused on two separate components of cost: the machine based costs, such as production costs, inter-cell material handling costs and machine costs, and the human related costs, such as hiring costs, firing costs, training costs and wages. They compared two models: one considered the machine costs only and the other considered both machine costs and human related costs. The model was NP-hard even though they did not consider the learning curve. Wang et al. (2008) proposed a nonlinear multi-objective mathematical model for the dynamic cell formation problem by giving weights to three conflicting objectives: machine relocation costs, utilization rate of machine capacity, and total number of inter-cell moves over the planning horizon. A scatter search approach was developed to solve the nonlinear model. The results were compared with those obtained by CPLEX. They considered certain demand and did not consider machine breakdowns. Safei et al. (2009) proposed an integrated mathematical model of the multi-period cell formation and production planning in a dynamic cellular manufacturing system (DCMS). The focus was on the effect of the trade-off between production and outsourcing costs on the reconfiguration of the cells. Balakrishnan (2005) discussed cellular manufacturing systems under conditions of changing product demand. He made a conceptual comparison to virtual cell manufacturing and discussed a case study. Kesen et al. (2008) investigated three different types of system (cellular layout, process layout and virtual cells) by using simulation. They paid attention to the following performance measures: mean flow time and mean tardiness. Based on these simulations they used regression meta-models to estimate the systems' behaviours. They only considered one family-based scheduling scheme and did not consider extraordinary events such as machine failures. Vakharia et al. (1999) proposed and validated analytical approximations for comparing the performance of virtual cells and multistage flow shops. First they used these approximations and hypothetical data to identify some key factors that influence the implementation of virtual cells in a multistage flow shop environment. Then they concluded with an application of the approximations to industrial data. Kesen et al. (2009) examined the behaviours of VCMs, process layouts and cellular layouts. They addressed the VCMs by using a family-based scheduling rule. The different systems were compared by simulation. Subsequently they developed ant colony optimization based meta-models to reflect the systems' behaviours. Kesen et al. (2010) presented a genetic algorithm based heuristic approach for job scheduling

525

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles

in virtual manufacturing cells (VMCs). Cell configurations were made to optimize the scheduling objective under changing demand conditions. They considered the case with multiple jobs and different processing routes. It was considered multiple machine types with several identical machines in each type and they were located in different locations in the shop floor. The objective was to minimize the total travelling distance. To evaluate the effectiveness of the genetic algorithm heuristic they compared it with a mixed integer programming solution. Results showed that genetic algorithm was promising in finding good solutions in very shorter. Uday Venkatadri et al.(1997) proposed a methodology for designing job shops under the fractal layout organization as an alternative to the more traditional function and product organizations. The challenge in assigning flow to workstation replicates was that flow assignment is in itself a layout dependent decision problem. They proposed an iterative algorithm that updated layouts depending on flow assignments, and flow assignments based on layouts. Their work has had the far-reaching consequence of demonstrating the validity of the fractal layout organization in manufacturing systems (FCMSs). Montreuil (1999) developed a new fractal alternative for manufacturing job shops which allocated the total number of workstations for most processes equally across several fractal cells. He introduced fractal organization and he briefly discussed the process of implementing fractal designs. He illustrated a case example and he showed that system is characterized by great flexibility. Maddisetty (2005) discussed the design cells in a probabilistic demand environment. He discussed idea of remainder cells (RCMS). A remainder cell is a kind of lung to cope in changes in demand. He examined the following performance: total WIP, average flow time, machine utilization. 
He proposed a comparison using three different approaches: mathematical, heuristic, and simulation.


Süer et al. (2010) proposed a new layered cellular manufacturing system that forms dedicated, shared and remainder cells to deal with probabilistic demand, and compared its performance with the classical cellular manufacturing system. Simulation and statistical analysis were performed to help identify the best design within and among both the layered and the classical cellular designs. They observed that the average flow time and total WIP were not always the lowest when additional machines were used by the system, but the layered cellular system performed better when demand fluctuations were observed. There are several limitations in the existing literature. In previous research the demand of products was usually determined at the beginning of each period and was known. A change in part mix was rarely assumed. Frequently the bottleneck station in each cell was considered fixed and independent of the part type. Exceptional events such as machine failures and maintenance were almost never taken into account; the same holds for flash orders and backorders. The concept of the learning curve was rarely covered. Furthermore, researchers hardly ever focused on a wide range of performance measures. In this chapter the objective is to evaluate the reaction of different manufacturing system configurations (CMSs, FCMSs and RCMSs) to fluctuations in arriving demand. The configurations are investigated considering the same machines for all cases; the machines are arranged so as to obtain each particular configuration. The analysis conducted makes it possible to highlight the most promising configurations in terms of performance measures. Another objective of the chapter is to develop a simulation environment based on the Rockwell Arena® tool in order to analyse the different configurations.
Simulation allows building a model with fewer simplifications than mathematical models, which require significant simplifications (linearization) in cases of complex systems. Moreover, the dynamic behaviour modelled here (demand not known a priori, unexpected events like machine breakdowns) cannot be captured with mathematical models.

MANUFACTURING SYSTEM CONTEXT

The objective of this chapter, as mentioned, is to compare the performance of different manufacturing systems. In particular, the configurations analyzed by using simulation tools based on the software Rockwell ARENA® are: CMS, FCMS and RCMS. Moreover, a further configuration has been considered by changing the layout of the machines, obtaining a CMS in line. The manufacturing system consists of M general-purpose machines that are used for each configuration. Three part families have been considered, with a constant mix of each part family. We introduce the following assumptions for the model:

• the demand for each part type is unknown a priori and is extracted randomly from an exponential distribution; therefore, the parameter to set is the exponential parameter;
• set-up times are not simulated: once the manufacturing cells are configured, the setup times are very low for the product family assigned to each cell;
• the due date is obtained by multiplying the processing time by an index greater than or equal to 1;
• machine breakdowns and maintenance are not considered;
• intra-cell handling times are negligible;
• it is assumed that parts move in units;
• each configuration has the same number of machines, in order to make the comparison under the same conditions.
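The first assumption can be sketched in code (an illustration only; the function name and mix values are our choices, and the chapter itself implements the arrival process with Arena blocks):

```python
import random

def sample_arrivals(mean_interarrival, mix, horizon):
    """Generate (arrival_time, family) pairs over one simulated period.

    mean_interarrival: mean of the exponential inter-arrival distribution
    mix: dict mapping part family -> probability (assumed to sum to 1)
    horizon: length of the simulated period, in the same time unit
    """
    t, events = 0.0, []
    families = list(mix)
    weights = [mix[f] for f in families]
    while True:
        t += random.expovariate(1.0 / mean_interarrival)
        if t > horizon:
            return events
        events.append((t, random.choices(families, weights=weights)[0]))

# illustrative run: 5-minute mean inter-arrival over one 43,200-minute month
random.seed(1)
arrivals = sample_arrivals(5.0, {"P1": 0.4, "P2": 0.4, "P3": 0.2}, 43200)
```

Only the mean of the exponential distribution needs to be set, which is why the inter-arrival time is the single parameter varied in the experiments below.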

The performance measures used to compare the manufacturing systems are the following:

• Work in Process (WIP);
• Average utilization of the manufacturing system;
• Throughput time;
• Average throughput time;
• Tardiness (total over all the parts);
• Throughput.

Figure 1 describes the parameters and the performance measures analyzed in this research. The first manufacturing system configuration considered is the classical cellular system (CMS); its scheme is shown in Figure 2. The system manufactures N product families with N cells. Each cell is specialized to perform the technological operations required by the product family assigned to it (no setup time is necessary). A CMS with a different routing has also been considered, as shown in Figure 3. The second configuration considered is the FCMS. In this case, the allocation of machines to cells is performed so as to obtain N identical cells. Each cell manufactures all product families, with higher processing times, because the machines must be able to perform all the required technological operations. The scheme of the FCMS is shown in Figure 4. The third configuration considered is the RCMS. In this configuration there are N cells for the N product families. In addition, there is a further cell, called the remainder cell, where all operations can be performed with higher processing times. It may be useful in case of machine failures, but also in case of congestion of the system. The scheme of the RCMS is shown in Figure 5. Each configuration includes the same number of machines, and the time to manufacture each part is assumed to be the same, except in the fractal cells (belonging to the FCMS) and the remainder cell (belonging to the RCMS), where machines can produce all kinds of parts with a higher processing time (general-purpose machine configuration). Therefore the processing time of the i-th machine in a fractal cell (ptif) and the processing time of the i-th machine in the remainder cell (ptir) are greater than the processing time of the i-th machine in the j-th cell of the CMS (a machine configured for the technological operations of a particular family) (ptij):

over = ptif / ptij = ptir / ptij ,  over > 1  (1)

Figure 1. Manufacturing configurations analysis
Figure 2. CMS configuration
Figure 3. CMS in line configuration
Figure 4. FCMS configuration
Figure 5. RCMS configuration

LOADING POLICY

In the previous section we discussed the different cell configurations. Each configuration needs a loading policy to operate. In the classical CMS, parts arrive in the system and each family has its own competent cell. For the CMS we have provided two different layouts: one

with parallel machines and the other with machines in line, as described above. In the FCMS configuration, parts arrive in the system and are routed to the cell with the lowest workload. The RCMS needs a specific loading policy for the use of the remainder cell. Parts arrive in the system and each cell is designed for a part family. In each cell there is a controller that adopts the following strategy: it measures the number of parts in queue at each machine. If the measured value in the j-th cell is greater than a maximum threshold of the cell (defined Smaxj), then the part is conveyed to the remainder cell. Similarly, when the measured value is less than a minimum threshold of the j-th cell (defined Sminj), then the part is again assigned to the cell designed for its part family. The logic of the controller described above is shown in the flowchart of Figure 6.
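This threshold logic, with its hysteresis between Smaxj and Sminj, can be sketched as follows (a minimal illustration under our naming; the chapter implements it with Arena blocks, as in Figure 6):

```python
class CellController:
    """Threshold controller for one dedicated RCMS cell (sketch).

    While the queue length exceeds smax, arriving parts of this family
    overflow to the remainder cell; once the queue falls below smin they
    are assigned to the dedicated cell again (hysteresis between the
    two thresholds).
    """
    def __init__(self, smax, smin):
        self.smax, self.smin = smax, smin
        self.overflow = False  # current routing state of the cell

    def route(self, queue_len):
        if queue_len > self.smax:
            self.overflow = True       # congested: divert to remainder cell
        elif queue_len < self.smin:
            self.overflow = False      # relieved: back to the dedicated cell
        return "remainder" if self.overflow else "dedicated"

ctrl = CellController(smax=2, smin=1)  # the most stringent threshold pair
```

Note that between the two thresholds the controller keeps its current state, so routing does not oscillate when the queue hovers around a single value.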

SIMULATION ENVIRONMENT

The manufacturing system consists of M = 10 machines. All the different configurations are obtained by re-allocating the same set of available machines. It is considered that each machine functions

for 24 hours a day; therefore the total number of minutes the system works is 43,200 per month, which is the simulation horizon considered. In order to highlight only the differences among the configurations, it is assumed that each part needs 40 minutes of processing in total. This technological time is divided by the number of machines used in the process, depending on the manufacturing configuration. As introduced above, it is equal for all parts except those made in the fractal cells and the remainder cell, where machines take more time. We assume three product families. The product mix is as follows: Product 1 (40%), Product 2 (40%) and Product 3 (20%). We have analyzed the performance of four different cellular systems changing one parameter: the average inter-arrival time. We have considered five different values of the inter-arrival time, which lead to different congestion levels of the manufacturing system (see Table 2). These values were selected to keep the average utilization of the machines in a range from 0.56 (low utilization) to 0.99 (high utilization). The demand for each part type is unknown a priori and is extracted randomly from an exponential distribution with mean equal to the inter-arrival time reported in Table 2.

Figure 6. The logic of RCMS

Table 2. Average inter-arrival times: 4, 4.5, 5, 6, 7

The due date is obtained as the sum of the arrival time (tnow) and the technological working time (WT) multiplied by an index (DdateINDEX), as shown in equation 2:

Ddate = tnow + (WT ⋅ DdateINDEX)  (2)

WT is equal to 40 minutes. DdateINDEX is 1.5 for parts 1 and 2, while it is 1 for part 3. The lower index of part 3 is justified by its lower demand in the part mix, so its due date is not shifted. The due dates are the same for all configurations examined; therefore they do not affect the comparison, but they are included in the model for completeness. The cellular systems analyzed are those already mentioned: CMS, CMS in line, RCMS and FCMS. The benchmark system is the CMS. The simulation environment has been developed with the Rockwell Arena® tool. Arena is based on a block-diagram representation that makes the simulation environment more familiar.

Figure 7. Arrival and exit stations
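Equation 2 and the index values above translate directly into code (a sketch; the function names are ours):

```python
def due_date(tnow, wt=40.0, ddate_index=1.5):
    """Equation 2: Ddate = tnow + WT * DdateINDEX."""
    return tnow + wt * ddate_index

def tardiness(exit_time, ddate):
    """Delay counted only when the part leaves after its due date."""
    return max(0.0, exit_time - ddate)

# a part of family 1 arriving at t = 100 is due at 100 + 40 * 1.5 = 160
d = due_date(100.0)        # -> 160.0
late = tardiness(172.5, d) # -> 12.5
```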


The arrival stations of the parts and the exit station are shown in Figure 7. The first three boxes are the arrival stations, where each part is assigned a delivery time and a destination in the respective cell for processing; the parts then leave the arrival station. The exit station is the same for all configurations: if the delivery time has been met, the WIP is updated and the part leaves the system; otherwise the delay is calculated.

Cellular Manufacturing System

In this case we consider three production cells. The first two cells contain four identical machines working in pairs in parallel; these cells are dedicated to product types 1 and 2 respectively. The third cell contains 2 machines for products of type 3 (the smaller share of the product mix). Each machine has a process time of 20 minutes. The scheme is shown in Figure 8, where each rectangle indicates the working time.

Cellular Manufacturing System in Line

In this case we also consider 3 production cells. The first two cells contain 4 machines in line, each with a process time of 10 minutes; these cells are dedicated to types 1 and 2 respectively. The third cell contains 2 machines for product type 3, each with a process time of 20 minutes. The scheme is shown in Figure 9.

Figure 8. CMS considered in simulation
Figure 9. CMS in line considered in simulation

Fractal Cellular Manufacturing System

In this case there are 5 identical cells. Each cell contains 2 machines and each cell is able to work on the whole product mix. The scheme is shown in Figure 10. Naturally the machines perform the manufacturing operations with a higher process time (see equation 1), because they are not dedicated to a part family but are configured for all operations. The process time of each machine is equal to 20 time units increased by 20% (over = 1.2).

Figure 10. FCMS considered in simulation

Remainder Cellular Manufacturing System

In this case there are 3 cells (one for each part type) plus a remainder cell, for which a loading policy based on the number of parts in queue in the other cells is defined. The scheme is shown in Figure 11. The three machines operating in cell 1 (product type 1) have a process time of 13.33 minutes; the same holds for the machines operating in cell 2 (product type 2). The two machines operating in cell 3 have a process time of 20 minutes. The machines assigned to the remainder cell perform the manufacturing operations with a higher process time (see equation 1), because they are configured for all operations; the process time of each machine is equal to 20 time units increased by 20% (over = 1.2).

Figure 11. RCMS considered in simulation

In this work, different instances of the same loading policy for the use of the remainder cell have been investigated. Each cell has a controller that measures the number of parts in queue at each machine; using thresholds, parts can be conveyed to the remainder cell. In Arena the controller is shown in Figure 12. The first “scan” block checks the maximum threshold (Smaxj) and assigns the part accordingly; similarly, the second “scan” block checks the minimum threshold (Sminj). For the maximum (Smaxj) and minimum (Sminj) thresholds, six cases have been considered, equal for all three cells (see Table 3).
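The per-machine process times quoted for the four configurations all follow from the 40-minute work content of a part and the over = 1.2 factor of equation 1; a small helper (our illustration, not from the chapter) makes the arithmetic explicit:

```python
TOTAL_WORK = 40.0  # minutes of processing each part needs in total
OVER = 1.2         # penalty factor of general-purpose machines (equation 1)

def process_time(machines_sharing_work, general_purpose=False):
    """Time one machine spends on a part in a given cell type."""
    t = TOTAL_WORK / machines_sharing_work
    return t * OVER if general_purpose else t

# dedicated CMS cell (2 machines share the work): 20 min per machine
# CMS in line (4 machines): 10 min; RCMS cells 1-2 (3 machines): 13.33 min
# fractal and remainder cells (2 general-purpose machines): 20 * 1.2 = 24 min
```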

SIMULATION RESULTS

The length of each simulation is fixed at 43,200 minutes. During this period the average inter-arrival time and the part mix are both constant. Table 4 reports the design of the simulation experiments conducted for the four configurations of the manufacturing system. Combining the five inter-arrival times with the four system configurations and, for the last configuration (RCMS), with the six threshold cases, 45 experimental classes are obtained.

Figure 12. Control blocks cell 1


For each experimental class, a number of replications has been conducted sufficient to assure a 5% confidence interval and a 95% confidence level for each performance measure. As previously described, the performance measures investigated are the following:

• Work in Process (WIP);
• Average utilization of the manufacturing system (av. utilization);
• Throughput time for each part j (thr. time j);
• Average throughput time (average thr. time);
• Total tardiness time of all the parts (tardiness);
• Throughput (thr.).

The objective of the analysis of the simulation results is the comparison between the different manufacturing configurations and the classical cellular configuration (CMS, used as the base for percentage
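The replication stopping rule can be sketched as follows (a simplified check under a normal approximation; the function name is ours and the chapter's exact procedure is not specified):

```python
import math
import statistics

def ci_is_tight(samples, rel_halfwidth=0.05, z=1.96):
    """Return True when the replications already give a narrow interval:
    the 95% confidence-interval half-width must fall within 5% of the
    sample mean of the performance measure."""
    if len(samples) < 2:
        return False  # cannot estimate variability from one replication
    mean = statistics.fmean(samples)
    half = z * statistics.stdev(samples) / math.sqrt(len(samples))
    return half <= rel_halfwidth * abs(mean)
```

In practice one would add replications until `ci_is_tight` holds for every performance measure of the experimental class.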

Table 3. Threshold values

Case | Smax | Smin
1 | 7 | 5
2 | 5 | 3
3 | 3 | 2
4 | 4 | 1
5 | 3 | 1
6 | 2 | 1


Table 4. Experimental classes (each configuration is simulated at the five average inter-arrival times, in this order)

Exp. No. | Configuration | Inter-arrival times
1–5 | CMS | 4, 4.5, 5, 6, 7
6–10 | CMS in line | 4, 4.5, 5, 6, 7
11–15 | FCMS | 4, 4.5, 5, 6, 7
16–20 | RCMS(7,5) | 4, 4.5, 5, 6, 7
21–25 | RCMS(5,3) | 4, 4.5, 5, 6, 7
26–30 | RCMS(3,2) | 4, 4.5, 5, 6, 7
31–35 | RCMS(4,1) | 4, 4.5, 5, 6, 7
36–40 | RCMS(3,1) | 4, 4.5, 5, 6, 7
41–45 | RCMS(2,1) | 4, 4.5, 5, 6, 7

computation). The aim is to use the performance parameters to highlight the behaviour of the different configurations when the volume of demand changes (variation of the average inter-arrival times). Table 5 shows the average utilization of the machines in the classical CMS at the different inter-arrival times; the simulations are thus performed at five congestion levels of the manufacturing system. It is important to emphasize that the results shown do not include machine breakdowns. Table 6 reports the first three measures (WIP, Tardiness and Throughput) for the different manufacturing configurations. Table 6 shows the

average values over the inter-arrival times with the respective standard deviations (St. dev). The standard deviation highlights the variability of the results when the inter-arrival time changes. The percentages refer to the comparison with the classical CMS: positive percentages represent an increase of the respective measure, negative percentages a decrease. Table 7 reports the same information for the throughput times of the different parts and for the average throughput time.

Table 5. Average utilizations (CMS)

Inter-arrival time | Av. utilization
4 | 0.99
4.5 | 0.88
5 | 0.80
6 | 0.66
7 | 0.57

Tables 6 and 7 show that the CMS with the in-line configuration has almost the same behaviour as the classical CMS, except for the tardiness, which increases significantly. Tables 6 and 7 also show that the fractal configuration (FCMS) is the worst configuration. This is because the scheduling policy used is too simple: an appropriate policy needs to be implemented for the FCMS. This is a limitation of the FCMS configuration, because a more complex control system has to

be designed. The standard deviation shows the variability of the performance measures with respect to the inter-arrival changes; indeed, the FCMS is the configuration with the highest dependence on the inter-arrival time. As the reader can notice, the RCMS performance depends on the choice of the threshold values. Table 8 reports the variation of performance observed for three values of the inter-arrival time (5, 6, and 7). The percentages always refer to the comparison with the classical CMS. Among the various configurations of the RCMS, only the one with the most interesting results (thresholds 2, 1) is shown (see Table 8). Except for the value of tardiness (when the inter-arrival time is

Table 6. Simulation results (percentage differences vs. the classical CMS, averaged over the inter-arrival times)

Configuration | WIP avg | WIP St. dev | Tardiness avg | Tardiness St. dev | Throughput avg | Throughput St. dev
CMS (in line) | 2.15% | 1.62% | 85.97% | 179.28% | 0.01% | 0.19%
FCMS | 495.96% | 699.08% | 956.98% | 1583.56% | -4.49% | 6.74%
RCMS(7,5) | 62.55% | 64.65% | 148.50% | 49.65% | -0.77% | 1.66%
RCMS(5,3) | 76.76% | 91.60% | 136.93% | 49.38% | -0.99% | 2.10%
RCMS(3,2) | 107.54% | 118.07% | 134.58% | 71.72% | 18.64% | 45.45%
RCMS(4,1) | 95.70% | 133.95% | 133.78% | 96.78% | -1.47% | 2.92%
RCMS(3,1) | 132.86% | 170.05% | 191.64% | 237.27% | -1.62% | 3.36%
RCMS(2,1) | 203.37% | 265.32% | 315.70% | 514.08% | -2.21% | 3.81%

Table 7. Simulation results (percentage differences vs. the classical CMS, averaged over the inter-arrival times)

Configuration | Thr. time 1 avg | St. dev | Thr. time 2 avg | St. dev | Thr. time 3 avg | St. dev | Average thr. time avg | St. dev
CMS (in line) | 3.27% | 1.24% | 2.21% | 2.68% | 0.42% | 0.79% | 2.17% | 1.58%
FCMS | 551.65% | 775.44% | 547.81% | 770.79% | 352.77% | 508.21% | 496.34% | 699.14%
RCMS(7,5) | 76.62% | 63.94% | 75.71% | 63.20% | -12.66% | 13.99% | 53.20% | 43.86%
RCMS(5,3) | 88.43% | 87.56% | 87.40% | 86.09% | -7.56% | 5.58% | 62.97% | 63.25%
RCMS(3,2) | 113.21% | 101.38% | 112.44% | 100.65% | 17.07% | 44.23% | 87.72% | 78.76%
RCMS(4,1) | 101.34% | 117.21% | 100.78% | 116.89% | -0.88% | 16.07% | 74.25% | 89.56%
RCMS(3,1) | 139.38% | 162.54% | 136.99% | 159.71% | 12.55% | 31.48% | 104.75% | 125.62%
RCMS(2,1) | 208.51% | 275.98% | 205.19% | 272.66% | 38.91% | 75.03% | 161.62% | 218.90%


Table 8. Simulation results: inter-arrival comparison (percentage differences vs. the classical CMS)

Configuration | Inter-arrival | WIP | Thr. time 1 | Thr. time 2 | Thr. time 3 | Average thr. time | Tardiness | Throughput
CMS in line | 5 | 3.20% | 4.41% | 3.21% | 0.83% | 3.08% | 7.56% | 0.17%
CMS in line | 6 | 3.08% | 3.72% | 4.00% | 0.02% | 2.94% | 5.88% | 0.22%
CMS in line | 7 | 2.56% | 3.56% | 3.51% | 0.15% | 2.77% | 4.80% | -0.23%
FCMS | 5 | 65.16% | 77.98% | 76.58% | 29.02% | 64.87% | 247.06% | 0.08%
FCMS | 6 | 13.64% | 19.60% | 19.42% | -4.64% | 13.72% | 22.65% | -0.10%
FCMS | 7 | 13.23% | 17.97% | 17.96% | 0.26% | 13.92% | 22.87% | -0.58%
RCMS(2,1) | 5 | 18.39% | 31.34% | 30.25% | -18.17% | 18.25% | 63.03% | 0.08%
RCMS(2,1) | 6 | 7.95% | 14.47% | 14.27% | -11.52% | 8.18% | 6.19% | -0.20%
RCMS(2,1) | 7 | 9.17% | 13.43% | 13.43% | -5.43% | 9.14% | 12.61% | 0.12%

equal to 5), the other performance measures converge to values close to the CMS configuration, with differences of about 10%. The best relative performance of the RCMS is obtained with an inter-arrival time equal to 5, therefore with a medium-high average utilization of the manufacturing system (see Table 5). At the other congestion levels, high and low, the alternative configurations perform considerably worse than the CMS. This is confirmed in Figure 13, which shows the profile of performance at the different congestion levels. Figures 14 and 15 show the comparison of the performance measures. It is clear that the FCMS configuration performs worse in all cases, especially for an average inter-arrival time equal to 5; the design of this configuration needs to be rethought. For

higher inter-arrival times the differences tend to decline. The behaviour of the RCMS is more interesting and leaves more room for improvement. Observing the curve of RCMS(2,1) in Figure 13, it is interesting to note that the throughput time of product 3 is better than in the other configurations. This is probably due to the fact that cell 3 has lower loads (since part 3 is 20% of the mix) and it obtains more synergy from the remainder cell. In that configuration, queues larger than 2 units (parts) are not tolerated; the remainder cell is therefore used frequently, and this is the key to the better behaviour of this system configuration. The results indicate that a better balance of utilizations between the dedicated cells and the remainder cell leads to an improvement in performance.
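As a plausibility check on the congestion levels discussed here, the simulated utilizations of Table 5 can be compared with the offered load (a back-of-envelope calculation of ours, not from the chapter):

```python
# With 40 minutes of work per part, mean inter-arrival time IA and M = 10
# machines, the offered utilization is WT / (IA * M); it closely matches
# the simulated averages of Table 5.
WT, M = 40.0, 10

def offered_utilization(inter_arrival):
    return WT / (inter_arrival * M)

table5 = {4: 0.99, 4.5: 0.88, 5: 0.80, 6: 0.66, 7: 0.57}
for ia, simulated in table5.items():
    assert abs(offered_utilization(ia) - simulated) < 0.02
```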

Figure 13. Performance comparison: RCMS


Figure 14. Performance comparison: interarrival time

Figure 15. Performance comparison: interarrival time

CONCLUSION AND FUTURE DEVELOPMENT

This chapter investigates several configurations of cellular manufacturing systems. A simulation environment is used to create equal operating conditions for the different cellular systems. Each simulation includes the same number of machines, so the comparison between systems is normalized. Volume changes are analysed by changing the inter-arrival times. The comparison of performance is of interest because the economic environment is extremely turbulent. In particular, our attention has focused on alternative approaches to traditional cells. A solution showing interesting results is the remainder cellular manufacturing system (RCMS). The results of this research can be summarized as follows:


• the classical cellular configuration with machines placed in line (CMS in line) is the best solution under static market conditions; its results are very close to the case of machines not in line (CMS);
• the fractal cellular configuration (FCMS) gives poor results, as it is conceived here in a static way; a more complex logic with different loading policies should be devised;
• the cellular manufacturing system with a remainder cell (RCMS) is already competitive in some cases with larger inter-arrival times; the best configuration is the one with the most stringent threshold values, which implies a greater use of the remainder cell.

From this it follows that the RCMS could become very competitive when a turbulent market involves a greater use of the remainder cell, and similarly in the presence of disturbances on the manufacturing system (such as machine breakdowns or maintenance). In the literature, the FCMS and the RCMS are known not to be very competitive against the classical CMS. But in previous studies remainder cells were often used as support cells, reserved for special circumstances. Our proposal is to adopt loading policies designed to achieve a strategic use of the remainder cells. The simple loading policies included in the simulation models show how the remainder cell can be used to maintain different performance measures under certain conditions. This work aims to demonstrate that, under certain dynamic conditions, the proposed configurations can be competitive with the classical CMS. Furthermore, this chapter demonstrates the strong dependence of the results on the design of the loading approaches, which deserve special attention. Future research could focus on defining complex loading policies able to maintain high performance of the manufacturing system in different operating conditions, also taking into account the need for maintenance and possible

failures of the machines (also those belonging to the remainder cell). Such policies will certainly improve both the RCMS and the FCMS. Moreover, in the RCMS the machine loading logic has a strong influence on performance. Under dynamic conditions with market fluctuations, these strategies using remainder cells can avoid reconfigurations of the manufacturing system, avoiding downtimes and reducing costs. Future works could investigate systems that integrate the configurations shown in this chapter with intelligent decision-making systems able to interpret the variability of real production scenarios; it would also be interesting to analyze the economic aspects of the different manufacturing solutions and how they may influence the choices.

REFERENCES

Ahkioon, S., Bulgak, A. A., & Bektas, T. (2009). Cellular manufacturing systems design with routing flexibility, machine procurement, production planning and dynamic system reconfiguration. International Journal of Production Research, 47(6), 1573–1600. doi:10.1080/00207540701581809

Aramoon Bajestani, M., Rabbani, M., Rahimi Vahed, A. R., & Baharian Khoshkhou, G. (2009). A multiobjective scatter search for a dynamic cell formation problem. Computers & Operations Research, 36, 777–794. doi:10.1016/j.cor.2007.10.026

Aryanezhad, M. B., Deljoo, V., & Mirzapour Al-ehashem, S. M. J. (2009). Dynamic cell formation and the worker assignment problem: A new model. International Journal of Advanced Manufacturing Technology, 41, 329–342. doi:10.1007/s00170-008-1479-4

Chen, C. H., & Balakrishnan, J. (2005). Dynamic cellular manufacturing under multiperiod planning horizons. Journal of Manufacturing Technology Management, 16(5), 516–530. doi:10.1108/17410380510600491


Chen, C. H., & Balakrishnan, J. (2007). Multiperiod planning and uncertainty issues in cellular manufacturing: A review and future directions. European Journal of Operational Research, 177, 281–309. doi:10.1016/j.ejor.2005.08.027

Rheault, M., Drolet, J., & Abdulnour, G. (1995). Physically reconfigurable virtual cells: A dynamic model for a highly dynamic environment. Computers & Industrial Engineering, 29(1–4), 221–225. doi:10.1016/0360-8352(95)00075-C

Hachicha, W., Masmoudi, F., & Haddar, M. (2007). An improvement of a cellular manufacturing system design using simulation analysis. International Journal of Simulation Modelling, 4(6), 193–205. doi:10.2507/IJSIMM06(4)1.089

Safei, N., Saidi-Mehrabad, M., & Babakhani, M. (2007). Designing cellular manufacturing systems under dynamic and uncertain conditions. Journal of Intelligent Manufacturing, 18, 383–399. doi:10.1007/s10845-007-0029-5

Irani, S. A., Subramanian, S., & Allam, Y. S. (1999). Introduction to cellular manufacturing system. In Irani, S. A. (Ed.), Handbook of cellular manufacturing systems (pp. 29–30). John Wiley & Sons. doi:10.1002/9780470172476.ch

Safei, N., & Tavakkoli-Moghaddam, R. (2009). Integrated multi-period cell formation and subcontracting production planning in dynamic cellular manufacturing systems. International Journal of Production Economics, 120, 301–314. doi:10.1016/j.ijpe.2008.12.013

Kelton, W. D., & Sadowski, R. P. (2009). Simulation with Arena. McGraw-Hill.

Kesen, S. E., Sanchoy, K., & Gungor, Z. (2010). A genetic algorithm based heuristic for scheduling of virtual manufacturing cells (VMCs). Computers & Operations Research, 37, 1148–1156. doi:10.1016/j.cor.2009.10.006

Kesen, S. E., Toksari, M. D., Gungor, Z., & Guner, E. (2009). Analyzing the behaviors of virtual cells (VCs) and traditional manufacturing systems: Ant colony optimization (ACO)-based metamodels. Computers & Operations Research, 36(7), 2275–2285. doi:10.1016/j.cor.2008.09.002

Maddisetty, S. (2005). Design of shared cells in a probabilistic demand environment. PhD Thesis. College of Engineering and Technology of Ohio University, Ohio, USA.

Montreuil, B. (1999). Fractal layout organization for job shop environments. International Journal of Production Research, 37(3), 501–521. doi:10.1080/002075499191643

Selim, M. S., Askin, R. G., & Vakharia, A. J. (1998). Cell formation in group technology: Review evaluation and directions for future research. Computers & Industrial Engineering, 34(1), 3–20. doi:10.1016/S0360-8352(97)00147-2

Süer, G. A., Huang, J., & Maddisetty, S. (2010). Design of dedicated, shared and remainder cells in a probabilistic demand environment. International Journal of Production Research, 48(19), 5613–5646. doi:10.1080/00207540903117865

Vakharia, A. J., Moily, J., & Huang, Y. (1999). Evaluating virtual cells and multistage flow shops: An analytical approach. International Journal of Flexible Manufacturing Systems, 11, 291–314. doi:10.1023/A:1008117329327

Venkatadri, U., Rardin, R. L., & Montreuil, B. (1997). A design methodology for fractal layout organization. IIE Transactions, 29, 911–924. doi:10.1080/07408179708966411

Wang, X., Tang, J., & Yung, K. (2009). Optimization of the multi-objective dynamic cell formation problem using a scatter search approach. International Journal of Advanced Manufacturing Technology, 44, 318–329. doi:10.1007/s00170-008-1835-4

This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 366-384, copyright 2012 by Business Science Reference (an imprint of IGI Global).


Chapter 31

Optimization and Mathematical Programming to Design and Planning Issues in Cellular Manufacturing Systems under Uncertain Situations

Vahidreza Ghezavati, Islamic Azad University, Iran

Mohammad Saeed Jabal-Ameli, University of Science and Technology, Iran

Mohammad Saidi-Mehrabad, University of Science and Technology, Iran

Ahmad Makui, University of Science and Technology, Iran

Seyed Jafar Sadjadi, University of Science and Technology, Iran

ABSTRACT In practice, demands, costs, processing times, set-up times, routings, and other inputs to classical cellular manufacturing system (CMS) problems may be highly uncertain, which can have a major impact on the characteristics of the manufacturing system. Developing models for the cell formation (CF) problem under uncertainty is therefore a promising area for researchers; it belongs to a relatively new class of CMS problems that has not been researched well in the literature. Random parameters can be either continuous or described by discrete scenarios. If probability information is known, uncertainty is described using a (discrete or continuous) probability distribution on the parameters; otherwise, continuous parameters are normally limited to lie in some pre-determined intervals. This chapter introduces basic concepts about uncertainty themes associated with cellular manufacturing systems and briefly reviews the literature for this type of problem. The chapter also discusses the characteristics of different mathematical models in the context of cellular manufacturing.

DOI: 10.4018/978-1-4666-1945-6.ch031

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION During the past few decades, various optimization techniques and mathematical programming approaches have been proposed for cellular manufacturing systems under different random situations. In cellular manufacturing, once the work cells and the scheduling of parts in each cell are determined, the cycle time of a specific cell may exceed that of the other cells, creating a bottleneck in the manufacturing system. There are two approaches to decrease the cycle time in a bottleneck cell: duplicating bottleneck machines or outsourcing exceptional parts, which are known as group scheduling (GS) in the literature. Selecting either approach to balance cycle times among all cells can change the machine layout characteristics through changes in the type and number of machines. Consequently, the formation of cells also changes with the scheduling decisions. Thus, scheduling is an operational issue that must be addressed concurrently with the design stage in an integrated problem, so that the best performance of the cells is achieved. Note that the scheduling problem includes many tactical parameters with random and uncertain characteristics. In addition, uncertainty or fluctuations in input parameters lead to fluctuations in scheduling decisions, which can dilute the effect of the cell formation decisions. Figure 1 indicates the transmission of uncertainty from tactical parameters to the CMS problem. Thus, in order to intensify the effectiveness of the solution, the integrated problem must be studied under uncertain conditions so that the final solution is robust and immune against fluctuations in the input parameters. In the concerned problem, the uncertain parameters can be listed as follows:

• Demand
• Processing time
• Routings or machine-part matrix
• Machines' availability
• Failure rate of machines
• Capacities
• Lead times
• Set-up considerations
• Market aspects
• …

where the impact of each factor is discussed in the following sections.

PROBLEM BACKGROUND Group technology (GT) is a management theory that aims to group products with similar processes, similar manufacturing characteristics, or both. A cellular manufacturing system (CMS) is a manufacturing

Figure 1. Illustration of uncertainty transmission to the CMS decision



concept that groups products into part families based on their similarities in manufacturing processing. Machines are likewise grouped into machine cells based on the parts they are supposed to manufacture. The CMS framework is an important application of the group technology (GT) philosophy. The basic purpose of CM is to identify machine cells and part families concurrently, and to assign part families to machine cells so as to minimize the intercellular and intracellular movement costs of parts. Some real-world limitations in CF are:

• The available capacity of machines must not be exceeded,
• Safety and technological necessities must be met,
• The number of machines in a cell and the number of cells must not exceed an upper bound,
• Intercellular and intracellular costs of handling material between machines must be minimized,
• Machines must be utilized effectively (Heragu, 1997).

Aggregating traditional considerations with newer ones such as scheduling, stochastic approaches, processing time, variable demand, sequencing, and layout considerations can be more practical. This survey highlights studies that are relevant to uncertainty planning of CMS problems; however, a survey of deterministic conditions is also presented. Cellular manufacturing decisions are strategic decisions which can be affected by operational decisions such as scheduling, production planning, layout considerations, utilities, and productivity. Thus, for effective decision making related to cell formation design, it is necessary to integrate the strategic and operational decisions in a single problem. Recently, researchers have made some efforts to integrate the two types of decisions. The gap in the literature is that most of these studies assume deterministic situations, while in

the real world most of the operational parameters are uncertain; thus, integrated problems must be studied further under uncertain situations. In the literature on CMS problems, uncertainty has been considered under different circumstances. We have classified previous research into different groups, which are discussed next.

Group 1: Uncertainty appears either in demand or in the product mix. In this group, two approaches, fuzzy theory and stochastic optimization, are used to handle uncertainty. In some studies, stochastic demand is aggregated with tactical aspects such as production planning (Hurley and Whybark 1999), the layout problem (Song and Hitomi 1996), or dynamic and multi-period conditions (Balakrishnan and Cheng 2007). In other studies, uncertainty in product demand has been resolved by a fuzzy approach (Safaei et al. 2008).

Group 2: Researchers formulated and analyzed the CMS problem considering fuzzy coefficients in the objective function and constraints (Papaioannou and Wilson 2009).

Group 3: Processing times of products are assumed to be uncertain, where mathematical programming and fuzzy approaches are implemented to obtain results that are immune against perturbations of the uncertain data. Some studies, such as Sun and Yih (1996) and Andres et al. (2007), attempted to achieve solutions by heuristic procedures. Other studies have formulated the problem as a queuing network and analyzed it by queuing theory (Yang and Deane 1993).

Group 4: Uncertainty normally appears due to fluctuations in design aspects during the production process. Since fluctuations in design aspects are not certain events, the uncertainty can be formulated by a set of future scenarios. In this way, some studies applied interval



coefficients to resolve uncertainty (Shanker and Vrat 1998).

Group 5: In some explorations, uncertainty has been considered in the availability of production equipment. Some works have formulated the CMS problem applying probability theory (Kuroda and Tomita 2005; Hosseini 2000). In addition, some of them considered multiple processing routes to be substituted once a machine encounters a failure (Siemiatkowski and Przybylski 2007; Asgharpour and Javadian 2004).

Group 6: Uncertainty has been recognized in similarity coefficients. For example, a new similarity coefficient has been introduced that applies fuzzy theory and then transforms the result into a binary matrix (Ravichandran and Chandra Sekhara Rao 2001).

Group 7: The capacity level of machines is considered uncertain. Since this critical parameter plays an important role in determining the bottleneck machine, it is vital to make decisions that remain flexible under any realization of this parameter (Szwarc et al. 1997).

Group 8: Finally, uncertainty in the CMS problem has been detected in the arrival times of products to cells. Classical models assume that all products are available at the beginning of the production planning horizon, while in real applications products may arrive at cells at unknown times. Researchers have modeled the CMS problem as a queuing network to resolve this uncertainty (Yang and Deane 1993).

The literature in deterministic situations can be described as follows. There exists much research on designing CMS in different areas, such as cell formation integrated with scheduling (Solimanpur et al. 2004; Aryanezhad and Aliabadi et al. 2011), considering exceptional elements in CF (Tsai et al. 1997; Mahdavi et al. 2007), and works applying meta-heuristics and


heuristic methods to solve large-scale problems, which are more practical and appealing for real-case problems (Xiaodan Wu et al. 2006; Venkataramanaiah 2007).

OPTIMIZATION APPROACHES IN UNCERTAIN SITUATIONS Rosenhead et al. (1972) divided decision environments into three groups: deterministic, risk, and uncertain. In deterministic situations, all problem parameters are considered given. In risk problems, the parameters have probability distribution functions known to the decision maker, while in uncertain situations there is no information about the probabilities. Problems classified under risk are called stochastic, and the primary objective is to optimize the expected value of the system outcome. The uncertain problems are known as robust, and the primary objective is mainly to optimize the performance of the system under worst-case conditions. The aim of both stochastic and robust optimization methods is to find a solution with suitable performance under any realization of the uncertain parameters. Random parameters can be either continuous or described by discrete scenarios. If probability information is known, uncertainty is described by continuous or discrete distribution functions; if no information is available, parameters are assumed to lie in predefined intervals. Scenario planning is a method in which decision makers address uncertainty by specifying a number of possible future states. In such conditions, the goal is to find solutions which perform well under all scenarios. In some cases, scenario planning replaces forecasting as a way to assess trends and potential changes in the industry environment (Mobasheri et al. 1989). Decision makers can thus develop strategic responses to a range of environmental adjustments, more


adequately preparing themselves for the uncertain future. Under such conditions, scenarios are qualitative descriptions of possible future states, evolving from the present state with consideration of potential key industry events. In other cases, scenario planning is used as a tool for modeling and solving specific operational problems (Mulvey 1996). While scenarios here also depict a range of future states, they do so through quantitative descriptions of the various values that the problem's input parameters may take. Scenario-based planning has two main drawbacks. The first is that identifying scenarios and assigning probabilities to them is a difficult task. The second is that the number of scenarios cannot be increased freely because of limitations on computation time, which consequently limits the range of future situations covered in decision making. On the other hand, this approach has the advantage that it preserves statistical correlation between parameters (Snyder 2006).

DECISION MAKING APPROACHES IN UNCERTAIN SITUATIONS There are different approaches which can be applied in the modeling process, depending on the problem characteristics: stochastic optimization (SO), robust optimization (RO), and queuing theory (QT), with the decision tree defined as follows.

• Stochastic Optimization
  ◦ Discrete planning: set of scenarios
  ◦ Continuous optimization
  ◦ Mean value model: the most popular objective in any SO problem is to optimize the expected value of the system outcome, for example, minimizing expected cost or maximizing expected income.
  ◦ Mean-variance model: in some studies, the variance and the expected value of system performance are considered simultaneously in the optimization problem.
• Probability Approaches
  ◦ Max probability optimization: maximizing the probability of a random event, namely that the solution performs well under each realization of the random parameters.
  ◦ Chance constrained programming: a probabilistic event placed in the problem's constraint set, such as a service level constraint.
• Queuing Theory & Markov Chains: a well-known approach.
• Robust Optimization

The objective in any stochastic optimization problem mainly focuses on optimizing the expected value of the system outcome, such as maximizing expected profit or minimizing total expected cost. In any stochastic program we must determine which variables are considered in the first stage (design variables) and which in the second stage (control variables); in other words, which variables must be determined first and which must be determined after the uncertainty is resolved. In the modeling process for the cellular manufacturing problem, the cell formation decisions are the first-stage variables and the operational and tactical decisions are the second-stage variables. If both decisions are made in a single stage, the model reduces to a deterministic problem in which the uncertain parameters are replaced by their mean values.
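The first-stage/second-stage split described above can be sketched with a toy capacity-sizing example (all numbers are hypothetical and not from the chapter): the design variable is fixed before the uncertainty resolves, and each demand scenario then contributes its second-stage cost to the expectation.

```python
def expected_cost(capacity, scenarios, build=1.0, shortage=3.0):
    """First-stage (design) decision: capacity, fixed before demand is known.
    Second-stage (control) outcome: a shortage penalty for unmet demand,
    evaluated per scenario and weighted by the scenario probability."""
    cost = build * capacity
    for prob, demand in scenarios:
        cost += prob * shortage * max(0.0, demand - capacity)
    return cost

scenarios = [(0.6, 80.0), (0.4, 120.0)]  # (probability, demand), hypothetical
best = min(range(0, 161, 10), key=lambda c: expected_cost(c, scenarios))
# best == 120: covering the high-demand scenario is worth the extra build cost
```

Replacing the mean values of the scenarios with this explicit enumeration is exactly what distinguishes the two-stage model from the single-stage (deterministic) reduction mentioned above.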

Mean-Variance Models The mean value models address only the expected performance of the system, without reflecting the fluctuations in performance or the decision maker's risk aversion. However, a portion of the literature incorporates the company's level of risk aversion into the decision-making process, classically by applying a mean-variance objective function:

Min Z = E(Cost) + λ·Var(Cost)
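As a sketch of the objective above, with hypothetical per-scenario costs and probabilities:

```python
def mean_variance_objective(costs, probs, risk_weight):
    """E[Cost] + risk_weight * Var(Cost) over a discrete scenario set."""
    expected = sum(p * c for p, c in zip(probs, costs))
    variance = sum(p * (c - expected) ** 2 for p, c in zip(probs, costs))
    return expected + risk_weight * variance

# Hypothetical per-scenario costs of one cell configuration
costs = [100.0, 140.0, 90.0]
probs = [0.5, 0.3, 0.2]
risk_neutral = mean_variance_objective(costs, probs, 0.0)  # = E[Cost] = 110
risk_averse = mean_variance_objective(costs, probs, 0.1)   # 110 + 0.1 * 400 = 150
```

A larger risk weight λ penalizes configurations whose cost swings widely across scenarios, even when their expected cost is low.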



Probabilistic Approaches While the mean-variance models consider only the expected value or variance of the stochastic objective function, there is an extensive portion of the literature which considers probabilistic information about the performance of the system; for example, maximizing the probability that the performance is good, or minimizing the probability that it is bad, under suitable and predefined definitions of "good" and "bad". We introduce two such approaches: (1) max-probability problems; (2) chance-constrained programming.

Queuing Theory for CMS Problem Queuing theory can be applied to many manufacturing or service systems, including cellular manufacturing systems. For example, in a machine shop, jobs wait to be machined (Heragu 1997b). In a queuing system, customers arrive by some arrival process and wait in a queue for the next available server. In the manufacturing framework, customers can be regarded as parts, and servers may be machines or work cells. The input process shows how parts arrive at a queue in a cell. An arrival process is commonly identified by the probability distribution of the number of arrivals in any time interval. The service process is usually described by a probability distribution; the service rate is the number of parts served per unit time, and the arrival rate is the number of parts arriving per unit time. Thus, measures of a queuing system, such as the probability that each server is busy (the utilization factor) and the waiting time in queues (whose minimization reduces the work in process in the cells), can be optimized and the cells formed optimally.
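The measures mentioned here can be computed directly from the textbook steady-state results for an M/M/1 queue; the rates below are illustrative, not from the chapter:

```python
def mm1_metrics(lam, mu):
    """Steady-state measures of an M/M/1 queue; requires lam < mu."""
    assert lam < mu, "unstable queue: arrival rate must be below service rate"
    rho = lam / mu                # utilization: P(server busy)
    l_sys = rho / (1 - rho)       # mean number of parts in the system
    wq = rho / (mu - lam)         # mean waiting time in the queue
    ws = 1.0 / (mu - lam)         # mean total time in the system
    return rho, l_sys, wq, ws

# e.g. a machine serving 5 parts/hour with parts arriving at 4 parts/hour
rho, l_sys, wq, ws = mm1_metrics(lam=4.0, mu=5.0)
# rho = 0.8, l_sys ≈ 4 parts, wq ≈ 0.8 h, ws = 1.0 h
```

Forming cells so that each machine's effective λ stays well below its μ keeps both the utilization factor and the queue-time measures in an acceptable range.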

Robust Optimization When there is no probability information about the uncertain parameters, the expected cost and


other objectives discussed in the previous section are inappropriate. Many measures of robustness have been introduced for this condition. The two most common are minimax cost and minimax regret, which are directly related to one another. Just as in the stochastic optimization case, the uncertain parameters in robust optimization problems may be considered either discrete or continuous. Discrete parameters are formulated using scenario-based planning. Continuous parameters are normally assumed to lie in some predefined interval, because it is often impossible to consider a "worst-case scenario" when parameter values are unbounded; this type of uncertainty is described as "interval uncertainty". The two most common robustness measures consider the regret of a solution, which is the difference (absolute or percentage) between the cost of a solution in a given scenario and the cost of the optimal solution for that scenario. Regret is sometimes described as opportunity loss: the difference between the quality of a given strategy and the quality of the strategy that would have been chosen had one known what the future held (Snyder 2006). As described earlier, the performance of a cellular manufacturing system is heavily influenced by tactical and operational decisions such as scheduling, production planning, and layout. Notably, these tactical and operational decisions depend on many uncertainties that affect the system; as a result, they suffer from uncertainty, which is in turn transferred into the cell formation decisions. Therefore, it is essential for researchers to recognize the different types of uncertainty in the problem and to make decisions with regard to their impact on the problem. The most important uncertain parameters in the manufacturing cell formation problem are:

• Demand
• Processing time
• Routings or machine-part matrix
• Machines' failure rate
• Capacities
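A minimal sketch of the minimax-regret measure described above, using hypothetical design costs per demand scenario:

```python
def minimax_regret(cost_table):
    """cost_table[d] lists the cost of design d under each scenario.
    Regret of d in scenario s = cost(d, s) - best achievable cost in s.
    Returns the design minimizing the worst-case regret, plus all regrets."""
    n_scen = len(next(iter(cost_table.values())))
    best_per_scenario = [min(costs[s] for costs in cost_table.values())
                         for s in range(n_scen)]
    regrets = {d: max(costs[s] - best_per_scenario[s] for s in range(n_scen))
               for d, costs in cost_table.items()}
    return min(regrets, key=regrets.get), regrets

costs = {  # hypothetical cell designs evaluated under three demand scenarios
    "two_cells": [100, 180, 250],
    "three_cells": [130, 150, 200],
}
choice, regrets = minimax_regret(costs)
# choice == "three_cells": worst-case regret 30 vs. 50 for "two_cells"
```

Note that no probabilities appear anywhere: the criterion protects against the worst scenario regardless of how likely it is, which is exactly what distinguishes robust from stochastic optimization.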

One of the factors causing uncertainty in the problem is associated with product design changes during the course of production. With changes in product design, many features of the product are altered. Design changes can occur for a variety of reasons, such as changes in customer expectations, short product life cycles, and the entry of competing products into the market. Under such circumstances, many characteristics of the products, such as demand and processing time, undergo a process of change. Note that the causes of change described here are not certain future events, and thus they have to be predicted as a set of discrete scenarios. In such a case the decision space of the problem is discrete and can be optimized by discrete optimization. As discussed earlier, one of the product features which can change due to changes in product design is the product routing. The sequence of machines which a product has to visit may change, and therefore the part-machine incidence may change. In such cases, the values within the part-machine matrix, unlike in classical models where they were only zero or one, can

be a probabilistic value between zero and one. In such problems, discrete optimization can be applied in the formulation. Another factor with uncertainty is the rate of access to machines, owing to their failures. Since failures and machine downtime are not certain events, the accessibility of machines at the time of forming the manufacturing cells is also subject to uncertainty. Another parameter that is uncertain and can affect the formation of work cells is capacity. This includes different items: the processing capacity of the machines for parts as well as the physical capacities of the manufacturing framework. Such variations must be predicted at the beginning of the planning horizon. A summary of the above discussion can be found in Table 1.

MATHEMATICAL MODELLING In this section, different mathematical models with different optimization approaches, comprising two new models and one published model, are discussed. The selected approaches are stochastic optimization and queuing theory.

Table 1. Summary of uncertainty developments in the CMS problem

No. | Uncertain parameter    | Optimization approach | Decision space
----|------------------------|-----------------------|----------------------
1   | Demand                 | Stochastic            | Continuous & discrete
2   | Processing time        | Stochastic            | Continuous & discrete
3   | Processing time        | Robust                | Continuous & discrete
4   | Processing time        | Queuing theory        | Continuous
5   | Routing                | Stochastic            | Discrete
6   | Routing                | Queuing theory        | Discrete
7   | Capacity               | Stochastic            | Discrete
8   | Machines' availability | Queuing theory        | Continuous & discrete
9   | Machines' availability | Stochastic            | Continuous & discrete
10  | Lead times             | Stochastic & robust   | Continuous & discrete


Model 1 In this section, a bi-objective mathematical model to form manufacturing cells is presented, where the uncertainty lies in the part-machine matrix. As discussed earlier, due to changes in the design characteristics of products, several factors are subject to change, such as the processing routings of parts. Thus, based on scenario planning, different routing processes can be forecast for a part under uncertainty; each part can have a different processing routing in each scenario. Therefore, in order to design the cellular configuration efficiently, all planning conditions must be considered. In the current problem, the factor with uncertainty is the part-machine matrix. In classical models, only zero-one elements are used in the part-machine matrix, while in the presented problem each element can be a continuous value between zero and one. Each element denotes the probability that part i visits machine j, with regard to all scenarios. For example, if there are two scenarios with probabilities 0.4 and 0.6, respectively, we have:

p1 = 0.4 ⇒ Routing in scenario 1 for part 1: Machine 1 → Machine 2 → Machine 3 → Finish
p2 = 0.6 ⇒ Routing in scenario 2 for part 1: Machine 1 → Machine 4 → Machine 3 → Finish

        M.1   M.2   M.3   M.4
a[1j] = [ 1    0.4    1    0.6 ]

where element [ij] indicates the probability that part i is processed on machine j. Since machines 1 and 3 appear in the routing in both scenarios, part 1 must surely visit them (with probability 1) to complete its process. Based on the first scenario, this part visits machine 2 with probability 0.4, and based on the second scenario, machine 4 with probability 0.6. As can be seen, in the introduced part-machine matrix


each element can take a value between zero and one, based on the probabilities of the scenarios. In the mathematical model presented in this section, the first objective function minimizes the costs associated with under-utilization in the manufacturing system, while the second objective function optimizes a random event in the manufacturing system, unlike classical models, which optimize only deterministic criteria. As discussed in the definitions of a cellular manufacturing system, one of the most important objectives is to minimize the number of intercellular movements. In this problem, since the processing route of each part is uncertain, the number of intercellular movements is uncertain too. The random event considered for optimization is "minimizing the probability that the number of intercellular movements exceeds the upper bound limitation". For computing the above objective, the following notation is defined:

Parameters:

aijs = 1 if part i needs to be processed on machine j in scenario s; 0 otherwise
ps: probability of occurrence of scenario s
N: maximum number of intercellular movements allowed in each scenario

Decision variables:

ns: number of intercellular movements in scenario s
es = 1 if the number of intercellular movements in the scenario-s configuration exceeds the upper bound N; 0 otherwise

or, equivalently:

es = 1 if ns ≥ N; 0 if ns < N

zs: auxiliary integer variable for each scenario
xik = 1 if part i is assigned to cell k; 0 otherwise
yjk = 1 if machine j is assigned to cell k; 0 otherwise

In order to minimize under-utilization costs, the first objective function is defined as:

Min Z1 = ∑s ∑i ∑j ∑k ps × (1 − aijs) × xik × yjk   (1)

Also, based on the above definitions, an attractive random event to minimize as the second objective can be defined as follows:

P(number of intercellular movements ≥ N)

This random event must be optimized by minimizing its probability of occurrence, which leads to maximum utility for the decision maker in the final solution. In other words, the above probability transforms into the following function:

Min Z2 = ∑s es × ps   (2)

Since there are s scenarios in the proposed problem, corresponding to s mutually exclusive random events, the probability of the overall event equals the sum of the probabilities of the individual events. In other words, for mutually exclusive random events s1, s2, …, sn we have:

P(s1 ∪ s2 ∪ … ∪ sn) = P(s1) + P(s2) + … + P(sn)

As a result, if in scenario s the number of intercellular movements exceeds the upper bound limitation, then the excess intercellular transportation occurs with probability ps. Finally, the sum of the probabilities of the scenarios whose intercellular transportation restriction is violated gives the final probability for the problem. In this model, the objective functions and the following constraints are effective:

Min Z1 = ∑s ∑i ∑j ∑k ps × (1 − aijs) × xik × yjk
Min Z2 = ∑s es × ps

Constraints:

∑k xik = 1   ∀i   (3)
∑k yjk = 1   ∀j   (4)
ns − ∑i ∑j ∑k aijs × xik × (1 − yjk) = 0   ∀s   (5)
zs − ⌊ns / N⌋ = 0   ∀s   (6)
zs ≤ M × es   ∀s   (7)
xik, yjk, es ∈ {0, 1}
zs integer, zs ≥ 0
ns ≥ 0

where M is a sufficiently large constant.

The first objective minimizes the total expected cost associated with under-utilization, incurred when a part and a machine that the part does not need to visit are placed together in the same cell. The second objective minimizes the probability that the number of intercellular movements exceeds the maximum allowed. Constraint set (3) says that each part must be assigned to a single cell. Constraint set (4) states that each machine can be assigned to only one cell. Constraint set (5) computes the total number of intercellular movements in each scenario. In constraint set (6), the auxiliary variable zs is zero if the number of intercellular movements in scenario s is less than the maximum limit, and an integer value of at least 1 otherwise. Constraint set (7) guarantees that if ns ≥ N then es will be 1; otherwise, es will be 0.
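To make the two objectives concrete, here is a small brute-force sketch (illustrative data only, not from the chapter) that evaluates Z1 and Z2 for every assignment of two parts and two machines to two cells, with aijs and the scenario probabilities chosen arbitrarily:

```python
from itertools import product

def evaluate(a, probs, x, y, N):
    """Z1: expected under-utilization (part i and machine j share a cell but
    a[s][i][j] = 0); Z2: total probability of scenarios whose number of
    intercellular movements n_s reaches the bound N."""
    S, I, J = len(a), len(a[0]), len(a[0][0])
    z1 = sum(probs[s] * (1 - a[s][i][j])
             for s in range(S) for i in range(I) for j in range(J)
             if x[i] == y[j])
    z2 = 0.0
    for s in range(S):
        n_s = sum(a[s][i][j] for i in range(I) for j in range(J) if x[i] != y[j])
        if n_s >= N:
            z2 += probs[s]
    return z1, z2

# Hypothetical instance: 2 parts, 2 machines, 2 cells, 2 scenarios.
a = [[[1, 0], [0, 1]],   # scenario 1: part 0 needs machine 0, part 1 machine 1
     [[1, 1], [0, 1]]]   # scenario 2: part 0 additionally needs machine 1
probs = [0.7, 0.3]
best = min(((evaluate(a, probs, x, y, N=1), x, y)
            for x in product(range(2), repeat=2)
            for y in product(range(2), repeat=2)),
           key=lambda t: t[0])
# best co-locates part i with machine i: objectives (Z1, Z2) = (0.0, 0.3)
```

A real instance would use an integer-programming solver instead of enumeration, and the lexicographic `min` here simply prioritizes Z1 over Z2 rather than treating the model as truly bi-objective.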

Model 2: Applying Queuing Theory to the CMS Problem In this section, we formulate a CMS problem as a queuing system. We assume a birth-death process with constant arrival (birth) and service completion (death) rates. The role of the birth-death process in automated manufacturing systems is described in detail in Viswanadham and Narahari (1992). Specifically, let λ and μ be the arrival and service rates of parts, respectively, per unit time. If the arrival rate is greater than the service rate, the queue will grow infinitely. The ratio of λ to μ is called the utilization factor, or the probability that a machine is busy, and is defined as ρ = λ/μ. Therefore, for a system in steady state, this ratio must be less than one. In this research, we assume an M/M/1 queuing system for each machine in the CMS, where part i arrives at the cells with rate λi and the parts are served by the machines. In this condition, because different parts (different customers) are processed on each machine and each part has a different arrival rate, ρ for each machine (server) is computed using the following property. Figure 2 illustrates the modeling of a cellular manufacturing system by the queuing theory approach.

Figure 2. A CMS problem and queuing theory framework (Ghezavati and Saidi-Mehrabad 2011)



Property 1: The minimum of independent exponential random variables is also exponential. Let F1, F2, …, Fn be independent exponential random variables with parameters λ1, λ2, …, λn, and let Fmin = min{F1, F2, …, Fn}. Then for any t ≥ 0:

P(Fmin > t) = P(F1 > t) × P(F2 > t) × … × P(Fn > t) = e^(−λ1·t) · e^(−λ2·t) ⋯ e^(−λn·t) = e^(−[λ1 + λ2 + … + λn]·t)
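Property 1 can also be checked numerically; the sketch below compares an empirical tail probability against the predicted exp(−Σλi·t), with arbitrary rates and t:

```python
import math
import random

def min_exponential_tail(rates, t, n=200_000, seed=7):
    """Empirical P(min of independent exponentials > t) vs. the exponential
    tail exp(-sum(rates) * t) predicted by Property 1."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if min(rng.expovariate(r) for r in rates) > t)
    return hits / n, math.exp(-sum(rates) * t)

empirical, predicted = min_exponential_tail([0.5, 1.0, 1.5], t=0.5)
# predicted = exp(-1.5); the empirical estimate is close for large n
```

This is the mechanism behind the superposition of arrival streams used next: the minimum of the per-part inter-arrival times behaves like a single exponential with the summed rate.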

An interesting implication of this property for inter-arrival times is discussed in Hillier and Lieberman (1995). Suppose n types of customers arrive at a queuing system, the ith type having exponential inter-arrival times with parameter λi. Assume an arrival has just taken place. By the memoryless property of the exponential distribution, the time remaining until the next arrival of each type is also exponential. Using Property 1, the inter-arrival time for the entire queuing system, i.e., the effective arrival rate (the minimum among all the inter-arrival times), has an exponential distribution with parameter:

λeff = Σ_{i=1}^{N} λi

Hence, the utilization factor, i.e., the probability that machine j is busy, is the effective arrival rate divided by the service rate:

ρj = λeff / μj = (Σ_{i=1}^{N} λi) / μj   (8)

Chance Constrained Programming

Since both the arrival times and the service times are uncertain, the amount of time each customer spends at the server is uncertain too. In order to prevent long waiting times for each customer, a chance constraint must be included in the formulation. Note that the distribution of the total time a customer spends in an M/M/1 system is as follows:

P(Ws ≥ t) = e^(−μ(1−ρ)t)   (9)

Proof: Assume there are N customers in the system when a new customer arrives. By conditional probability:

P(Ws ≥ t) = Σ_{n=0}^{∞} P(Ws ≥ t | N = n) × P(N = n)   (10)

On the other hand, the total time the new customer has to wait is:

Wq = F1 + F2 + … + Fn   (11)

where Fi denotes the service time of customer i. So:

Ws = Wq + F_{n+1}   (12)

where F_{n+1} denotes the service time of the newly arrived customer. The sum of n+1 independent exponential random variables with rate μ is an Erlang random variable with parameters n+1 and μ. So:

P(Ws ≥ t | N = n) = P(Σ_{i=1}^{n+1} Fi > t) = ∫_t^∞ μ e^(−μy) (μy)^n / n! dy   (13)

Note that the probability of there being n customers in an M/M/1 system is:



pn = ρ^n (1 − ρ), where ρ = λ/μ   (14)

Based on Equations 13 and 14, Equation 10 can be computed as:

P(Ws ≥ t) = Σ_{n=0}^{∞} ρ^n (1 − ρ) ∫_t^∞ μ e^(−μy) (μy)^n / n! dy   (15)

= μ(1 − ρ) ∫_t^∞ e^(−μy) Σ_{n=0}^{∞} ρ^n (μy)^n / n! dy   (16)

Also, by the exponential series, we have:

Σ_{n=0}^{∞} ρ^n (μy)^n / n! = e^(ρμy) = e^(λy)   (17)

If we substitute Equation 17 into Equation 16, Equation 9 is proven; it follows that Ws has an exponential distribution with parameter μ − λ. In order to satisfy the service level, this probability must be at most α, so the chance constraint is P(Ws ≥ t) ≤ α. To linearize this nonlinear constraint, the following steps are performed:

P(Ws ≥ t) ≤ α   (18)

⇒ e^(−μ(1−ρ)t) ≤ α   (19)

⇒ −μ(1 − ρ)t ≤ Ln(α)   (20)
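The exponential sojourn-time result behind this derivation can be sanity-checked by simulating the queue directly. The sketch below (illustrative, not from the chapter; parameter values invented) uses the standard Lindley recursion for the waiting time:

```python
# Simulation sketch: in a stable M/M/1 queue the sojourn time Ws is
# exponential with rate mu - lambda, so P(Ws >= t) = exp(-mu*(1-rho)*t),
# the quantity bounded by alpha in the chance constraint.
import math
import random

lam, mu, t, alpha = 3.0, 5.0, 1.0, 0.2
rho = lam / mu
rng = random.Random(7)

n = 200_000
exceed = 0
wq = 0.0                                  # waiting time of current customer
for _ in range(n):
    ws = wq + rng.expovariate(mu)         # sojourn = wait + own service
    if ws >= t:
        exceed += 1
    # Lindley recursion: next customer's wait
    wq = max(0.0, ws - rng.expovariate(lam))

empirical = exceed / n
theory = math.exp(-mu * (1 - rho) * t)    # e^{-2}, about 0.135
print(round(empirical, 3), round(theory, 3), empirical <= alpha)
```

Here the machine satisfies the chance constraint for t = 1 and α = 0.2, since e^(−2) ≈ 0.135 ≤ 0.2.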

The resulting constraint states that a customer will stay in the system longer than the critical time t with probability at most α.

Property 2. If n types of customers, with different arrival rates λi, have to visit a server to receive service, then the probability that a random customer visiting the server is of type i is:

pi = λi / Σj λj   (21)
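Property 2 can likewise be checked by simulation (type names and rates invented for illustration), by merging independent Poisson streams and recording which stream produces each arrival:

```python
# Quick check of Property 2 (illustrative sketch): when independent
# Poisson streams with rates lambda_i are merged, a random arrival is
# of type i with probability lambda_i / sum_j lambda_j.
import random

rates = {"A": 2.0, "B": 3.0, "C": 5.0}
rng = random.Random(1)
counts = dict.fromkeys(rates, 0)

for _ in range(100_000):
    # Race the next arrival of each stream; the earliest one wins.
    winner = min(rates, key=lambda k: rng.expovariate(rates[k]))
    counts[winner] += 1

total = sum(counts.values())
for k in rates:
    share = rates[k] / sum(rates.values())
    print(k, round(counts[k] / total, 3), share)  # fractions near 0.2, 0.3, 0.5
```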

In the proposed model, the characteristics of a Jackson service network are applied. In a Jackson network, each customer has to visit multiple servers in order to complete the stages of service; for example, each part is routed to several machines to complete its operations. In such a network, the input rate of a machine performing a part's first operation equals the arrival rate of that part to the system, while the input rate of a machine performing the second operation equals the output rate of the previous server (machine). Similarly, the input rate of a machine performing the third operation equals the exit rate of the second machine, and so on for the remaining machines. In a cellular manufacturing problem formulated as a queuing system, each part visits machines, possibly in multiple cells, according to its process routing in order to receive service; Figure 3 illustrates this process. For each machine, the effective input rate is made up of two terms. The first is the sum of the arrival rates of parts that visit the machine in their first operation. The second is the sum of the input rates of parts that visit the machine in a later operation; this rate equals the output rate of the previous machine. Figure 3 illustrates the difference between the arrival rates at the machines for a specific part. In this model, this procedure is applied to compute the effective input rate for each machine. A part-machine matrix in which the operation sequences of the parts are specified is used; this helps us formulate the problem as a Jackson network. Each element of this matrix is defined as follows:

Optimization and Mathematical Programming to Design and Planning Issues in Cellular Manufacturing

Figure 3. Arrival rate for part 1 into the different machines based on the routing

aik = j, if the kth process of part i is completed by machine j; 0 otherwise

bij = k, if part i refers to machine j to complete its kth process; 0 otherwise

zij = 1, if the operation on machine j is the first operation of part i; 0 otherwise

cij = 1, if part i needs to be processed on machine j; 0 otherwise

Other parameters are defined as follows:

λi = arrival rate of part i to the manufacturing system
μj = service rate of machine j (1/μj denotes the average operation time on machine j)
pij = the probability that a random part of type i leaves machine j
β = penalty rate applied to the arrival process if an intercellular movement occurs. It is assumed that if an operation of a part has to be transferred to another cell (an intercellular movement), the arrival rate is multiplied by β, which accounts for the transfer time and the waiting time between cells.
λjeff = effective arrival rate for machine j

Based on the above definitions, λjeff is computed by the following equation:

λjeff = Σ_{i=1}^{m} zij × λi + Σ_{i=1}^{m} (1 − zij) × cij × λeff_a(i,bij−1) × p_i,a(i,bij−1)

In the above equation, two terms are considered in computing the effective input rate of each machine: the first is the sum of the arrival rates of parts that visit machine j in their first operation; the second is the sum of the input rates of parts that visit the machine in a later operation. The number of the operation completed by machine j is bij, based on the defined parameters; thus, the number of the previous operation is bij − 1. Finally, according to the definition of aik (the machine that completes the kth operation of part i), the machine that completes the previous operation of part i is a(i, bij−1). Therefore, in the second term, the effective arrival rate for parts visiting machine j after their first operation is computed as the effective arrival rate of the preceding machine multiplied by the probability that part i leaves that machine. For example, assume customers arrive at a book store according to a Poisson process with rate 10 per hour, where 60 percent are men and 40 percent are women. Then the number of men arriving at the store is Poisson with rate 10 × 0.6 per hour, and the number of women arriving at the store is Poisson with rate 10 × 0.4 per hour. Note that if an operation of part i on machine j requires an intercellular transfer, machine j is penalized by increasing the arrival rate of part i: the rate is multiplied by β. Finally, the model must determine whether each operation requires an intercellular transfer or not; an operation of part i on machine j requires one when machine j and part i are not located in the same cell. Based on the above description, λeff is computed as Equation 22.
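The routing logic described above can be sketched in code. The following is a hypothetical, simplified illustration (invented part routings and rates, a constant leave probability, and no penalty on the first operation; the chapter's full Equation 22 additionally conditions every term on the cell assignment variables):

```python
# Hypothetical sketch of the effective-arrival-rate computation for a
# Jackson-type routing. The first machine on a part's route receives the
# part's external arrival rate; each later machine receives the effective
# rate of the preceding machine, scaled by the leave probability p and,
# if the move crosses cells, by the intercellular penalty beta.

beta = 1.2                       # intercellular movement penalty
p_leave = 1.0                    # P(part leaves previous machine), simplified

routings = {                     # part -> ordered machine routing (invented)
    "part1": ["M1", "M2", "M3"],
    "part2": ["M2", "M3"],
}
arrival = {"part1": 4.0, "part2": 2.0}   # external arrival rates (invented)
cell_of = {"M1": 0, "M2": 0, "M3": 1}    # machine -> cell assignment (invented)

lam_eff = {m: 0.0 for m in cell_of}
for part, route in routings.items():
    rate = arrival[part]
    prev = None
    for machine in route:
        if prev is not None:
            rate *= p_leave                      # part leaves previous machine
            if cell_of[prev] != cell_of[machine]:
                rate *= beta                     # intercellular transfer penalty
        lam_eff[machine] += rate
        prev = machine

print(lam_eff)  # {'M1': 4.0, 'M2': 6.0, 'M3': 7.2}
```

Here M3 sits in a different cell than M2, so both parts reaching it are penalized by β, giving 4 × 1.2 + 2 × 1.2 = 7.2.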

Mathematical Model

In this section, a mathematical model that optimizes cell formation decisions based on queuing theory is proposed. The objective function minimizes the total cost, consisting of the under-utilization cost. A chance constraint is also included in order to prevent excessive waiting times of parts in the queue in front of each machine. As discussed, with each machine modeled as an M/M/1 queue, the chance constraint achieves this goal.

Min Z = Σk Σi Σj (1 − aij) × xik × yjk   (23)

Constraints:

Equation 22 (effective arrival rate, for every machine j):

λjeff = Σ_{i=1}^{m} [(xik × yjk) × λi + xik × (1 − yjk) × λi × β] × zij + Σ_{i=1}^{m} p_i,a(i,bij−1) × [(xik × yjk) × λeff_a(i,bij−1) + xik × (1 − yjk) × λeff_a(i,bij−1) × β] × cij × (1 − zij)   ∀j   (24)

ρj − λjeff / μj = 0   ∀j   (25)

Σk xik = 1   ∀i   (26)

Σk yjk = 1   ∀j   (27)

−μj × (1 − ρj) × t ≤ Ln(α)   ∀j   (28)

ρj ≤ 1   ∀j   (29)

pij − (λi × cij) / (Σr λr × crj) = 0   ∀i, j   (30)

xik, yjk ∈ {0, 1};  ρj, pij ≥ 0



Constraints (24) and (25) compute the effective arrival rate and the utilization factor for each machine, respectively. Constraint set (28) guarantees satisfaction of the chance constraint for each machine: the probability that a part has to wait more than the critical time t is at most α. Constraint set (29) ensures that the utilization factor of each machine is less than one. Constraint set (30) determines the probability that a random part leaving machine j is of type i.

Model 3

Recently, Ghezavati and Saidi-Mehrabad (2010) proposed a stochastic cellular manufacturing problem in which uncertainty is captured by discrete fluctuations in the processing times of parts on machines. The aim of their model was to optimize the scheduling cost (expected maximum tardiness cost) and the cell formation costs concurrently. The mathematical model is presented below; interested readers are referred to the paper for more details.

Parameters:

uij: Cost of part i not utilizing machine j
Mmax: Maximum number of machines permitted in a cell
Cmax: Maximum number of cells permitted
ps: Probability that scenario s occurs
tijs: Processing time of part i on machine j in scenario s
DDi: Due date of part i
pc: Penalty cost per unit time of delay
ci: Penalty cost of subcontracting part i
aij = 1, if part i is required to be processed on machine j; 0 otherwise

Decision variables:

xik = 1, if part i is processed in cell k; 0 otherwise
yjk = 1, if machine j is assigned to cell k; 0 otherwise
Zis[r] = 1, if part i is assigned to sequence position [r] in scenario s; 0 otherwise



F[r]ks: The time at which processing of the part in sequence position [r] ends in cell k under scenario s
FD[r]ks: Due date of the part in sequence position [r] in cell k under scenario s
L[r]ks: Tardiness of the part in sequence position [r] in cell k under scenario s
MLs: Maximum tardiness occurring in scenario s
Diks: Total processing time part i needs in cell k under scenario s
T[r]ks: Total processing time of the part in sequence position [r] assigned to cell k under scenario s

Cell formation decisions are scenario-independent: they must be made before the scenarios occur, they are based on similarities in processing parts, and they are independent of the realized processing times. Scheduling decisions are scenario-dependent; thus the variables Z, D, T, FD, L, ML, and F are indexed by scenario, since they are made after the scenario, and hence the processing times, are realized.

Mathematical Model (Ghezavati & Saidi-Mehrabad, 2010)

Minimize Z = Σs pc × ps × MLs + Σk Σj Σi ci × aij × xik × (1 − yjk) + Σk Σj Σi uij × (1 − aij) × xik × yjk   (31)

Subject to:

Σk xik = 1   ∀i   (32)

Σk yjk = 1   ∀j   (33)

Σr Zis[r] = 1   ∀i, s   (34)

Σi xik × Zis,[r+1] ≤ Σi xik × Zis[r]   ∀k, s, r   (35)

Diks = Σj aij × tijs × xik × yjk   ∀i, k, s   (36)

Σi xik × Zis[r] ≤ 1   ∀r, s, k   (37)

T[r]ks = Σi Zis[r] × Diks   ∀k, s, r   (38)

F[r]ks = Σ_{α=1}^{[r]} T[α]ks   ∀k, s, r   (39)

FD[r]ks = Σi xik × Zis[r] × DDi   ∀k, s, r   (40)

L[r]ks = max{0, F[r]ks − FD[r]ks}   ∀k, s, r   (41)

MLs = Max{L[r]ks : k = 1, …, C and [r] = 1, …, P}   ∀s   (42)

Σj yjk ≤ Mmax   ∀k   (43)

xik, yjk, Zis[r] ∈ {0, 1}   (44)

Diks, T[r]ks, F[r]ks, FD[r]ks ≥ 0   (45)

Constraint sets (32), (33), and (43) are the cell formation constraints, and constraint sets (34) through (42) perform the scheduling computations and the related logical restrictions.
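The scheduling recursions can be illustrated on a toy instance (data invented): within one cell and one scenario, completion times accumulate along the sequence (constraint 39), tardiness is the positive part of completion time minus due date (41), and the scenario's maximum tardiness enters the objective (42).

```python
# Toy illustration of the scheduling recursions (38)-(42) for one cell
# and one scenario; processing times and due dates are invented.

durations = [3.0, 2.5, 4.0]      # T_[r]: processing time of the part in position r
due_dates = [4.0, 5.0, 8.0]      # FD_[r]: due date of the part in position r

finish, tardiness = [], []
t = 0.0
for d, dd in zip(durations, due_dates):
    t += d                               # F_[r] = sum of T_[1..r]      (39)
    finish.append(t)
    tardiness.append(max(0.0, t - dd))   # L_[r] = max(0, F - FD)       (41)

ML = max(tardiness)                      # maximum tardiness, this scenario (42)
print(finish, tardiness, ML)  # [3.0, 5.5, 9.5] [0.0, 0.5, 1.5] 1.5
```

In the full model this MLs is weighted by pc × ps and summed over scenarios in the objective (31).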

Linearization Approaches

In the above formulation, binary and continuous variables are multiplied by each other, so nonlinear terms appear in the formulation. Two common types of nonlinear terms are:

Type 1: Pure 0-1 polynomial terms, in which n binary variables are multiplied by each other, such as Z = x1 × x2 × … × xn.

Type 2: Mixed 0-1 polynomial terms, in which n binary variables are multiplied by each other and the product is multiplied by a continuous variable, such as Z = x1 × x2 × … × xn × Y.

Type 1 can be linearized by introducing the following auxiliary constraints:

Z ≤ xi,  i = 1, 2, …, n

Z ≥ Σ_{i=1}^{n} xi − (n − 1)

Z ≥ 0
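These Type 1 constraints can be verified exhaustively; note that the lower bound Z ≥ Σ_{i=1}^{n} xi − (n − 1), together with Z ≥ 0 and Z ≤ xi, pins Z to the exact product at every binary point. A small illustrative check:

```python
# Exhaustive check that the Type 1 auxiliary constraints
#   Z <= x_i (for all i),  Z >= sum(x_i) - (n - 1),  Z >= 0
# force Z = x_1 * x_2 * ... * x_n at every binary point.
from itertools import product

n = 3
for xs in product([0, 1], repeat=n):
    lo = max(0, sum(xs) - (n - 1))   # tightest lower bound on Z
    hi = min(xs)                     # tightest upper bound from Z <= x_i
    assert lo == hi == int(all(xs))  # interval collapses to the product
print("Type 1 linearization verified for all", 2 ** n, "binary points")
```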

Also, for linearization of Type 2 in a minimization problem, the following auxiliary constraints are applied:

P1: Nonlinear problem
Min Z = x1 × x2 × … × xn × y
St: L(X, Y)

P2: Linear form
Min Z
St:
Z ≥ y − U × (n − Σ_{i=1}^{n} xi)
Z ≥ 0
L(X, Y)

where U is an upper bound on the continuous variable y, and Z is therefore a continuous variable (Ghezavati and Saidi-Mehrabad 2011).

CONCLUSION

In summary, this chapter established basic principles of uncertainty in cellular manufacturing systems. Since the CMS problem is affected by tactical decisions such as scheduling, production planning, layout considerations, utilization aspects, and many other factors, each CMS problem must be aggregated with tactical decisions in order to achieve maximum efficiency. Tactical decisions involve many uncertain parameters, and since strategic decisions are influenced by tactical decisions, CMS decisions are mixed with uncertainty. Popular approaches for analyzing uncertain problems include: Stochastic Optimization, Discrete Planning over a Set of Scenarios, Continuous Optimization, the Mean Value Model, the Mean-Variance Model, Max Probability Optimization, Chance Constrained Programming, Queuing Theory and Markov Chains, and Robust Optimization. This chapter proposed two sample mathematical models and reviewed one published model (Ghezavati and Saidi-Mehrabad 2010). Processing routings, inter-arrival and service times, and processing times were assumed to be uncertain, and stochastic optimization and queuing theory were used to resolve the uncertainty in the formulation process. A complete survey of meta-heuristic methods for solving CMS problems can be found in Ghosh et al. (2011). For future directions, the following developments are suggested for researchers and readers:

• Uncertain processing times optimized by a robust approach in continuous or discrete space.
• Uncertain capacities optimized by a stochastic or robust approach in discrete space.



• Uncertain machines' availability optimized by stochastic or queuing theory approaches in continuous or discrete space.
• Aggregating the CMS problem with logistics considerations in uncertain environments.
• Aggregating the CMS problem with production planning aspects in uncertain environments.
• Aggregating the CMS problem with layout considerations in uncertain environments.
• Aggregating the CMS problem with scheduling concerns in uncertain environments.

REFERENCES

Andres, C., Lozano, S., & Adenso-Diaz, B. (2007). Disassembly sequence planning in a disassembly cell. Robotics and Computer-Integrated Manufacturing, 23(6), 690–695. doi:10.1016/j.rcim.2007.02.012

Aryanezhad, M. B., & Aliabadi, J. (2011). A new approach for cell formation and scheduling with assembly operations and product structure. International Journal of Industrial Engineering Computations, 2, 533–546. doi:10.5267/j.ijiec.2010.06.002

Asgharpour, M. J., & Javadian, N. (2004). Solving a stochastic cellular manufacturing model using genetic algorithm. International Journal of Engineering, Transactions A: Basics, 17(2), 145–156.

Balakrishnan, J., & Cheng, C. H. (2007). Dynamic cellular manufacturing under multi-period planning horizon. European Journal of Operational Research, 177(1), 281–309. doi:10.1016/j.ejor.2005.08.027

Ghezavati, V. R., & Saidi-Mehrabad, M. (2010). Designing integrated cellular manufacturing systems with scheduling considering stochastic processing time. International Journal of Advanced Manufacturing Technology, 48(5-8), 701–717. doi:10.1007/s00170-009-2322-2

Ghezavati, V. R., & Saidi-Mehrabad, M. (2011). An efficient hybrid self-learning method for stochastic cellular manufacturing problem: A queuing-based analysis. Expert Systems with Applications, 38, 1326–1335. doi:10.1016/j.eswa.2010.07.012

Ghezavati, V. R., & Saidi-Mehrabad, M. (2011). An efficient linearization technique for mixed 0-1 polynomial problems. Journal of Computational and Applied Mathematics, 235(6), 1730–1738. doi:10.1016/j.cam.2010.08.009

Ghosh, T., Sengupta, S., Chattopadhyay, M., & Dan, P. K. (2011). Meta-heuristics in cellular manufacturing: A state-of-the-art review. International Journal of Industrial Engineering Computations, 2(1), 87–122. doi:10.5267/j.ijiec.2010.04.005

Heragu, S. (1997a). Facilities design (p. 316). Boston, MA: PWS Publishing Company.

Heragu, S. (1997b). Facilities design (p. 345). Boston, MA: PWS Publishing Company.

Hillier, F. S., & Lieberman, G. J. (1995). Introduction to operations research (6th ed.). New York, NY: McGraw-Hill.

Hosseini, M. M. (2000). An inspection model with minimal and major maintenance for a system with deterioration and Poisson failures. IEEE Transactions on Reliability, 49(1), 88–98. doi:10.1109/24.855541

Hurley, S. F., & Clay Whybark, D. (1999). Inventory and capacity trade-off in a manufacturing cell. International Journal of Production Economics, 59(1), 203–212. doi:10.1016/S0925-5273(98)00101-7

Kuroda, M., & Tomita, T. (2005). Robust design of a cellular-line production system with unreliable facilities. Computers & Industrial Engineering, 48(3), 537–551. doi:10.1016/j.cie.2004.03.004

Mahdavi, I., Javadi, B., Fallah-Alipour, K., & Slomp, J. (2007). Designing a new mathematical model for cellular manufacturing system based on cell utilization. Applied Mathematics and Computation, 190, 662–670. doi:10.1016/j.amc.2007.01.060

Mobasheri, F., Orren, L. H., & Sioshansi, F. P. (1989). Scenario planning at Southern California Edison. Interfaces, 19(5), 31–44. doi:10.1287/inte.19.5.31

Mulvey, J. M. (1996). Generating scenarios for the Towers Perrin investment system. Interfaces, 26(2), 1–15. doi:10.1287/inte.26.2.1

Papaioannou, G., & Wilson, J. M. (2009). Fuzzy extensions to integer programming of cell formation problem in machine scheduling. Annals of Operations Research, 166(1), 1–19. doi:10.1007/s10479-008-0423-1

Ravichandran, K. S., & Chandra Sekhara Rao, K. (2001). A new approach to fuzzy part family formation in cellular manufacturing system. International Journal of Advanced Manufacturing Technology, 18(8), 591–597. doi:10.1007/s001700170036

Rosenhead, J., Elton, M., & Gupta, S. K. (1972). Robustness and optimality as criteria for strategic decisions. Operational Research Quarterly, 23(4), 413–431.

Safaei, N., Saidi-Mehrabad, M., Tavakkoli-Moghaddam, R., & Sassani, F. (2008). A fuzzy programming approach for cell formation problem with dynamic & uncertain conditions. Fuzzy Sets and Systems, 159(2), 215–236. doi:10.1016/j.fss.2007.06.014

Shanker, R., & Vrat, P. (1998). Post design modeling for cellular manufacturing system with cost uncertainty. International Journal of Production Economics, 55(1), 97–109. doi:10.1016/S0925-5273(98)00043-7

Siemiatkowski, M., & Przybylski, W. (2007). Modeling and simulation analysis of process alternative in cellular manufacturing of axially symmetric parts. International Journal of Advanced Manufacturing Technology, 32(5-6), 516–530. doi:10.1007/s00170-005-0366-5

Snyder, L. V. (2006). Facility location under uncertainty: A review. IIE Transactions, 38, 537–554. doi:10.1080/07408170500216480

Solimanpur, M., Vrat, P., & Shankar, R. (2004). A heuristic to optimize makespan of cell scheduling problem. International Journal of Production Economics, 88, 231–241. doi:10.1016/S0925-5273(03)00196-8

Song, S.-J., & Hitomi, K. (1996). Determining the planning horizon and group part family for flexible cellular manufacturing. Production Planning and Control, 7(6), 585–593. doi:10.1080/09537289608930392

Sun, Y.-L., & Yih, Y. (1996). An intelligent controller for manufacturing cell. International Journal of Production Research, 34(8), 2353–2373. doi:10.1080/00207549608905029

Szwarc, D., Rajamani, D., & Bector, C. R. (1997). Cell formation considering fuzzy demand and machine capacity. International Journal of Advanced Manufacturing Technology, 13(2), 134–147. doi:10.1007/BF01225760

Tsai, C. C., Chu, C. H., & Barta, T. (1997). Analysis and modeling of a manufacturing cell formation problem with fuzzy integer programming. IIE Transactions, 29(7), 533–547. doi:10.1080/07408179708966364

Venkataramanaiah, S. (2007). Scheduling in cellular manufacturing systems: An heuristic approach. International Journal of Production Research, 1, 1–21.

Viswanadham, N., & Narahari, Y. (1992). Performance modeling of automated manufacturing systems. Englewood Cliffs, NJ: Prentice Hall.

Wu, X. D., Chu, C. H., Wang, Y. F., & Yan, W. L. (2006). Concurrent design of cellular manufacturing systems: A genetic algorithm approach. International Journal of Production Research, 44(6), 1217–1241. doi:10.1080/00207540500338252

Yang, J., & Deane, R. H. (1993). Setup time reduction and competitive advantage in a closed manufacturing cell. European Journal of Operational Research, 69(3), 413–423. doi:10.1016/0377-2217(93)90025-I

This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 298-316, copyright 2012 by Business Science Reference (an imprint of IGI Global).



Chapter 32

Multi-Modal Assembly-Support System for Cellular Manufacturing

Feng Duan, Nankai University, China
Jeffrey Too Chuan Tan, The University of Tokyo, Japan
Ryu Kato, The University of Electro-Communications, Japan
Chi Zhu, Maebashi Institute of Technology, Japan
Tamio Arai, The University of Tokyo, Japan

ABSTRACT

Cellular manufacturing meets diversified production and quantity requirements flexibly. However, its efficiency depends mainly on the operators' working performance. In order to improve this efficiency, an effective assembly-support system should be developed to assist operators during the assembly process. In this chapter, a multi-modal assembly-support system (MASS) is proposed, which aims to support operators in both the information and the physical aspects. To protect operators in the MASS system, five main safety designs, at both the hardware and the control levels, are also discussed. With the information and physical support of the MASS system, the assembly complexity and the burden on the assembly operators are reduced. To evaluate the effect of MASS, a group of operators was asked to execute a cable harness task. From the experimental results, it can be concluded that by using this system, the operators' assembly performance is improved and their mental workload is reduced. Consequently, the efficiency of cellular manufacturing is improved.

DOI: 10.4018/978-1-4666-1945-6.ch032

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

Traditionally, when mass production dominated industry, assembly systems were designed as automated manufacturing lines aimed at producing a single specific product without much flexibility. Nowadays, the tastes of consumers change from time to time; traditional automated manufacturing lines therefore cannot provide flexibility and efficiency at the same time. To solve this problem, the cellular manufacturing system, also called the cell production system, has been introduced. In this system, an operator manually assembles each product from start to finish (Isa & Tsuru, 2002; Wemmerlov & Johnson, 1997). The operator enables a cellular manufacturing system to meet diversified production and quantity requirements flexibly. However, due to the negative population growth in Japan, it will become difficult to staff cellular manufacturing systems with enough skilled operators in the near future. How to improve the assembly performance of the operators and how to reduce their assembly burden are two important factors that limit the efficiency of the cellular manufacturing system; without an effective supporting system, it will be difficult to maintain cellular manufacturing in Japan. Taking advantage of the strengths of operators and robots while avoiding their weaknesses, a new cellular manufacturing system, the human-robot collaboration assembly system, was proposed (Duan, 2008). In this system, the operators are only required to execute the complicated and flexible assembly tasks that need human assembly skills, while the robots execute the monotonous and repetitive tasks, such as repeated parts feeding during the assembly process (Arai, 2009).

To make this system applicable to assembling a variety of products in different manufacturing circumstances, the following assembly sequence is assumed: each assembly part is collected from the tray shelf by manipulators; all the parts are automatically fed to the operator on a tray as a kit of parts; the operator grasps each individual part and assembles it into the final product; the assembled product is transferred out to the next station; and so on. In the following, a multi-modal assembly-support system (MASS) is introduced, which aims to support an assembly operator in a cellular manufacturing system on both the information side and the physical side while satisfying actual manufacturing requirements. The MASS system utilizes robots to support the operator and several information devices to monitor and guide the operator during the assembly process. Since it is a human-robot collaboration assembly system, a safety strategy must be designed to protect the operator with a reasonable cost-benefit balance in a real production line. The remainder of the chapter is organized as follows: first, the background information and related studies are introduced; then, the entire MASS system and its subsystems are briefly described. After that, the two manipulators and the mobile base used to feed assembly parts to the operator are described in the physical support part. The assembly information support part contains a discussion of a multimedia-based assembly table and the corresponding devices. The safety standard and safety design are presented in the safety strategy part. Taking a cable harness task as an example, the effect of the MASS system is evaluated. Finally, the conclusion and future work are given.

PREVIOUS RELATED STUDIES

To improve the efficiency of cellular manufacturing, various systems have been designed to improve the assembly performance of the operators and to reduce their assembly burden.


Seki (2003) invented a production cell called “Digital Yatai” which monitors the assembly progress and presents information about the next assembly process. Using a semi-transparent head mount display, Reinhart (2003) developed an augmented reality (AR) system to supply information to the operator. These studies support the operator from information aspect. To reduce the operator’s physical burden and improve the assembly precision, Hayakawa (1998) employed a manipulator to grasp the assembly parts during the assembly process. This improved the assembly cell in physical support aspect. Sugi (2005) aimed to support the operators from both information side and physical side, and developed an attentive workbench (AWB) system. In this system, a projector was employed to provide assembly information to the operator; a camera was used to detect the direction of an operator’s pointing finger; and several self-moving trays were used to deliver parts to the operator. Although AWB achieved its goal of supporting operators from both information aspect and physical aspect, the direct supporting devices are just a projector and several self-moving trays, which are general purpose instruments that cannot meet the actual manufacturing requirements. In the coming aging society, it will be impossible to maintain the working efficiency if everything is done manually by the operator in the current cellular manufacturing system. In order to increase working efficiency, many researchers have used robot technologies to provide supports to the operator (Kosuge, 1994; Bauer, 2008; Oborski, 2004). According to these studies, human-robot collaboration has potential advantages to improve the operator’s working efficiency. However, before implementing this proposal, the most fundamental issue will be the safety strategy, which allows the operators and the robots to execute the collaboration work in their close proximity. 
Human-robot collaboration has been studied in many aspects but has not been utilized in the real manufacturing systems. This is mainly because

safety codes on industrial robots (ISO 12100, ISO 10218-1, 2006) prohibit the coexistence of an operator in the same space of a robot. According to the current industrial standards and regulations, in a human-robot collaboration system, a physical barrier must be installed to separate the operator and the assisting robot. Under this condition, the greatest limitation is that the close range assisting collaboration is impossible. Based on the definition of Helms (2002), there are four types of human-robot collaboration: Independent Operation, Synchronized Cooperation, Simultaneous Cooperation, and Assisted Cooperation. The assisted cooperation is the closest type of collaboration, which involves the same work piece being processed by the operator and the robot together. In this kind of human-robot collaboration, the operator is working close to the working envelope of the assisting robot without physical separation, so that both of them can work on the same work piece in the same process. The most distinguished concept of this study is that the assisting robot in this work is active and is able to work independently as robot manipulator. The advantage of this collaboration is to provide a human-like assistance to the operator, which is similar with the cooperation between two operators. This kind of assistance can improve the working efficiency by automating portion of the work and enable the operator to focus only on the other portion of work which requires human skill and flexibility. However, since the active robot is involved, this kind of collaboration is extremely dangerous and any mistake can be fatal (Beauchamp & Stobbe, 1995). The challenge of this research work is to design an effective assembly supporting system, which can support the operator in both physical and information aspects. During the assembly process, employing of the assisting robot is an effective method to reduce the operator’s assembly burden while improving the working efficiency. 
Multi-Modal Assembly-Support System for Cellular Manufacturing

This raises the safety issue inherent in this kind of close-range active human-robot collaboration, for which there are currently no industrial safety standards or regulations. Besides the design of the assembly supporting system, the scope of this work also covers a safety design study and the development of a prototype production cell for cellular manufacturing.

MULTI-MODAL ASSEMBLY-SUPPORT SYSTEM

Structure of the Entire System

Following the fundamental idea that letting robots and operators share the assembly tasks can maximize their respective advantages, the MASS system was designed; its subsystems are shown in Figure 1 as a structure view and in Figure 2 as a system configuration. The entire MASS system is divided into a physical support part and an assembly information support part, as shown in Figure 1.

Figure 1. Structure of the entire MASS system

1. Physical Supporting Part: The physical supporting part aims to support operators in the physical aspect. It is composed of two manipulators with six degrees of freedom and a mobile base, which have two functions: one is to deliver assembly parts from a tray shelf to an assembly table; the other is to grasp the assembly parts and prevent any wobbling during the assembly process.
2. Information Supporting Part: The assembly information supporting part is designed to aid operators in the assembly information aspect. An LCD TV, a speaker, and a laser pointer are employed to provide assembly information to guide the operator.

Figure 2. Configuration of MASS system

3. Safety Control Part: To guarantee the operator's safety during the assembly process, vital sensors monitor the operator's physical condition, and a series of safety strategies protects the operator from injury by the manipulators. This part controls the collaboration between a robot and an operator (see also Figure 2).

In the developed MASS system, there are two stations connected through an intelligent part tray, as shown in Figure 2, on which all the necessary parts are fed into the assembly station and the assembled products are shipped out from the assembly station.

1. Part Feeding Station: Only robots work here. It is mainly in charge of part handling, such as bin picking, part feeding, kitting, and part transferring.
2. Assembly Station: An operator executes the assembly tasks with the aid of the robots. Supporting information from the MASS system is used to improve the operator's assembly efficiency.

Figure 1 illustrates the setup of the MASS system, in which an operator assembles a product on the workbench in the assembly station area. The operator is supported with assembly information and with physical holding of parts for assembly. In this study, the sample product to assemble is a cable harness with several connectors and fastener plates. Even experienced operators may spend about 15 minutes finishing this assembly task.

Simulator of the Entire System

To reduce the design period, a simulator of the entire system was developed in this study based on ROBOGUIDE (FANUC ROBOGUIDE) and OpenGL (Neider, 1993), as shown in Figure 3. This simulator can not only reproduce the actual motion of the manipulators but also predict collisions in the workspace.



Figure 3. Simulator of the entire MASS system

Since the MASS system is a human-robot cooperative assembly system, for the operator's safety the distance between the manipulators and the operator should be optimized to prevent collisions between them. Furthermore, the moving trajectories of the manipulators should also be optimized to prevent collisions among the manipulators themselves. To shorten the development period, all of these optimization tasks are performed in the simulator first and then evaluated in the actual MASS system. With the aid of the simulator, the distance between the manipulators and the operator can be adjusted easily, and the trajectories of the manipulators' end points can be reproduced conveniently during motion. Therefore, based on the simulation results, the actual system could be constructed conveniently.
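To make the simulator's collision-prediction role concrete, the following is a minimal sketch of the kind of proximity check such a tool performs: bodies are approximated by bounding spheres, and a collision is predicted when the spheres come closer than a safety margin. All names, positions, and radii here are illustrative assumptions, not taken from the actual ROBOGUIDE model.

```python
import math

def spheres_collide(center_a, radius_a, center_b, radius_b, margin=0.05):
    """Predict a collision if two bounding spheres are within `margin` metres."""
    dist = math.dist(center_a, center_b)          # Euclidean distance between centers
    return dist < radius_a + radius_b + margin

# Hypothetical manipulator wrist vs. operator torso, both as spheres (metres)
wrist = ((0.4, 0.0, 1.0), 0.10)
torso = ((0.9, 0.0, 1.1), 0.25)
print(spheres_collide(*wrist, *torso))            # -> False (about 0.51 m apart)
```

A real simulator would run such checks along the whole planned trajectory, but the principle of testing clearance against a margin is the same.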


Physical Support

To provide the physical support of the MASS system, two manipulators with six degrees of freedom are installed on a mobile base and used to deliver assembly parts to the operator, as shown in Figures 1-4. A CCD camera with an LED light is mounted on each manipulator for recognition of the picking target from a scrambled part bin. The manipulators are utilized in the part feeding station to:

1. Draw a part bin from the part shelves;
2. Pick parts from the bin one by one;
3. Kit parts onto a tray;
4. Visually check the parts in a tray.

Figure 4. Assembly operations with the aid of manipulators

The parts are efficiently fed by the manipulators, because one manipulator holds a bin up while the other grasps a part out, as an operator would. Since the bin-picking system can work 24 hours a day, it enables high productivity. The base carries a few trays and moves to the assembly station, where it docks with the electric charging connector. In the assembly station, an operator continuously assembles parts one by one, which are transferred by one of the mobile twin manipulators. To increase assembly precision and reduce the operator's burden, one manipulator can grasp an assembly part to prevent wobbling during assembly, and the operator executes the assembly task with the manipulator's assistance, as shown in Figure 4. Obviously, the assisting manipulators move near the operator during the assembly process. To achieve this collaboration, the manipulators have to penetrate the operator's area. Since such penetration is prohibited by the industrial robot regulations (ISO 12100), a new countermeasure must be developed. After finishing an assembly step, the operator pushes a footswitch to send a control command to the manipulators; the manipulators then provide the next assembly part to the operator, and the assembly information for the next assembly step is given. Without this control command, the manipulators cannot move to the next step. Furthermore, the operator can stop the manipulators with an emergency button

when an accident occurs. These strategies enable the manipulators to support human operators physically in an effective and safe manner.
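The footswitch-gated working sequence described above can be sketched as a small interlock: the manipulators advance to the next step only on an explicit operator command, and an emergency stop latches all motion until reset. This is an illustrative sketch, not the actual MASS controller; the class and method names are invented.

```python
class StepInterlock:
    """Advance assembly steps only on footswitch commands; latch on e-stop."""

    def __init__(self, n_steps):
        self.n_steps = n_steps
        self.step = 0
        self.estopped = False

    def footswitch(self):
        """Operator confirms the current step is finished."""
        if self.estopped or self.step >= self.n_steps:
            return self.step          # no motion without a valid command
        self.step += 1                # manipulator feeds the next part
        return self.step

    def emergency_stop(self):
        self.estopped = True          # all manipulator motion is inhibited

    def reset(self):
        self.estopped = False         # operator restarts after the problem is solved

cell = StepInterlock(n_steps=3)
cell.footswitch()                     # step 1 completed, part 2 supplied
cell.emergency_stop()
cell.footswitch()                     # ignored while stopped
cell.reset()
print(cell.footswitch())              # -> 2
```

The key design point matches the text: without the control command the manipulators cannot move to the next step, so the operator's own pace gates all robot motion.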

Assembly Information Support

Previous studies, such as the Digital Yatai (Seki, 2003), have already demonstrated that providing assembly information to the operator during the assembly process can not only improve assembly efficiency but also reduce assembly errors. Taking advantage of these previous studies, and also considering the characteristics of human cognition, an assembly information supporting system is designed to guide operators by indicating the next assembly sequence and/or an appropriate way of operation. The developed system has three major advantages:

1. Each assembly sequence is instructed step by step;
2. Considering the characteristics of human cognition, the assembly information can be provided in formats that are easily understandable for humans, including text, voice, movies, animations, and flashing emphasis marks;
3. The assembly information can be selected and provided to the operator according to his assembly skill level.

The total software system of the MASS system, shown in Figure 5, has been developed. It consists of three subsystems:

1. Multi-modal Assembly Skill TransfER (MASTER);
2. Multi-modal Assembly Information SupportER (MAISER);
3. Multi-modal Assembly FOSTER (MAFOSTER).

MASS is designed to extract skill information from skilled operators by MASTER and to transfer it to novice operators by MAISER, as illustrated in Figure 5. Here, a human assembly skill model was proposed (Duan, 2009), which extracts and transfers human assembly skills as a cognition skill part and a motor skill part. In the cognition skill part, using questionnaires, MASTER obtains the differences in cognition skill between skilled and novice operators. In the motor skill part, MASTER mainly utilizes a motion capture system to obtain the differences in motor skill between skilled and novice operators, especially in assembly pose and assembly motion (Duan, Tan, Kato, & Arai, 2009). MAISER provides understandable instructions to novice operators by displaying multi-modal information about the assembly operations. MAFOSTER controls the interface devices to organize a comfortable environment for operators to execute the assembly task, as a foster does. MAISER works mainly off-line in a data-preparation phase and watches on-line the state of the operator to avoid bad motions and dangerous states (Duan, Tan, Kato, & Arai, 2009). MAISER takes the role of the instruction phase. The interface devices are installed as shown in Figure 1 and Figure 4:

Figure 5. Software system of MASS system

1. LCD TV: The horizontal assembly table with a built-in 37-inch LCD TV, as shown in Figure 4, may be the first such application for assembly. Since it enables operators to read the instructions without shifting their gaze in a different direction, assembly errors can be decreased. The entire assembly scheme is divided into several simple assembly steps, and the corresponding assembly information is written in PowerPoint slides (Zhang, 2008). During the assembly process, these PowerPoint slides are displayed on the LCD TV and switched by footswitch.
2. Laser Pointer: Showing the assembly position to the operator is an effective way to reduce assembly mistakes. To this end, a laser pointer, fixed in the environment, is projected onto the task to indicate the accurate assembly position, as shown in the left photo of Figure 4. The position can be changed by the motion of the manipulator. The operator can insert a wire into the instructed assembly position with the aid of the laser spot.
3. Audio Speakers: To help the operator easily understand the assembly information, a speaker and a wireless Bluetooth earphone are used to assist the operator with voice information.
4. Footswitch: During the assembly process, it is difficult for the operator to switch the PowerPoint slides with his hands. Therefore, a footswitch is used, as shown in Figure 1. There are two kinds of footswitches: footswitch A has three buttons, and footswitch B has one button. By stepping on the different buttons of footswitch A, the operator can move the PowerPoint slides forward or backward. By stepping on the button of footswitch B, the operator commands the manipulators to supply the necessary assembly parts, or makes the manipulators change the position and orientation of the assembly part during the assembly process.
5. Assembly Information: The assembly support information is provided to the operators to improve productivity by promoting good understanding of the assembly tasks and skill transfer with audio-visual aids. As the software structure for the assembly task description is not discussed in this study, please refer to our papers (Duan, 2008; Tan, 2008). Applying Hierarchical Task Analysis (HTA), one assembly task is divided into several simpler assembly steps, whose corresponding information is stored in multimedia. Then the appropriate level of information is displayed on the LCD panel, as shown in Figure 6. In each PowerPoint slide, the assembly parts and assembly tools are illustrated with pictures. The assembly positions are noted with color marks. Following the assembly flow chart, videos showing the assembly motions of experienced operators appear to guide the novices in executing the assembly tasks. To facilitate the operator's understanding of the assembly process, the colors of the words in the slides are the same as the actual colors of the assembly parts; for example, there are a "blue cable" and a "grey cable" in Figure 6. In each slide, several design principles of data presentation are applied, such as the multimedia principle, the coherence principle, and the spatial contiguity principle (Mayer, 2001). In Figure 6, three types of information are displayed: (a) text instruction, (b) pictorial information, and (c) movie; the sequence of assembly is also illustrated. During the assembly process, the PowerPoint slides are output to the LCD TV and switched by the operator's footswitch.
6. Assembly Information Database: In this multimedia-based assembly supporting system, the assembly information is classified into text, audio, and video files. The assembly guidance is concisely written in the text files. Guidance for each assembly step is recorded in the audio files. After the standard motions of the experienced operators are recorded and analyzed into primitive assembly motions, they are saved into video files. Tan (2008) set up an assembly information database to preserve all of these assembly information files and provide them to the operator depending on the situation. This database contains training data and assembly data: training data are designed for novices, and their assembly information files contain assembly details; assembly data are used to assist experienced operators by indicating the assembly sequence but not the assembly details. As a consequence, this system may help both novice and experienced operators enter the workforce. All the operators who used the assembly table with the LCD evaluated it positively, reporting that the instructions on the LCD could be read easily and understood smoothly.

Figure 6. Multimedia based assembly supporting information
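The skill-level dispatch just described can be sketched as a small lookup: novices receive full training data (text, audio, and video), while experienced operators receive only the sequence indication. The file names and categories below are invented for illustration; the actual database of Tan (2008) is not specified at this level of detail.

```python
def select_assembly_info(skill_level, step):
    """Return the assembly-information files for one step, by skill level."""
    if skill_level == "novice":
        # Training data: full details in every modality (hypothetical file names)
        return {
            "text":  f"step{step}_instruction.txt",
            "audio": f"step{step}_guidance.wav",
            "video": f"step{step}_motion.avi",
        }
    # Assembly data: sequence indication only, no assembly details
    return {"text": f"step{step}_sequence.txt"}

print(select_assembly_info("novice", 3)["video"])   # -> step3_motion.avi
print(select_assembly_info("expert", 3))            # -> {'text': 'step3_sequence.txt'}
```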

Safety Strategy

The MASS system is a human-robot cooperative system. Although employing the assisting robots to support the operator can increase assembly efficiency and reduce the assembly burden, this collaboration can be extremely dangerous, because an active robot is involved and any mistake can be fatal. To protect the operator during the assembly process, several safety designs covering both hardware and software are proposed and developed in this manufacturing system to achieve good human-robot collaboration. The fundamental concepts are:


1. Risk assessment by ISO regulations;
2. Area division by safety light curtains, as illustrated in Figure 7;
3. Speed/force limiter by servo controller;
4. Collision protectors by physical devices;
5. Collision detector by IP cameras;
6. Inherent safety theory.

Risk Assessment by ISO Regulation

Since there are no direct industrial safety standards and regulations that govern this type of close-range active human-robot collaboration, the safety design in this work is formulated by collective reference to related safety standards and regulations: first, the safety of the component systems is verified (non-collaboration safety), and then the safety of the system as a whole is assessed (collaboration safety). Table 1 summarizes the industrial safety standards and regulations referred to in the development of the mobile robot manipulator system and the total system. This chapter mainly focuses on human-robot collaboration safety; therefore, the non-collaboration safety of the component systems is omitted. However, it is important to bear in mind that the following safety designs for collaboration are built in accordance with the referred


Table 1. Related safety standards and regulations

Related to mobile robot manipulator system development:
- IEC 60364-4-41 (JIS C0364-4-41): Low-voltage electrical installations – Part 4-41: Protection for safety – Protection against electric shock
- IEC 60364-7-717: Electrical installations of buildings – Part 7-717: Requirements for special installations or locations – Mobile or transportable units
- IEC 61140 (JIS C0365): Protection against electric shock – Common aspects for installation and equipment
- BS EN 1175-1: Safety of industrial trucks – Electrical requirements – Part 1: General requirements for battery powered trucks
- ISO 10218-1 (JIS B8433-1): Robots for industrial environments – Safety requirements – Part 1: Robot

Related to total system development:
- ISO 12100-1 (JIS B9700-1): Safety of machinery – Basic concepts, general principles for design – Part 1: Basic terminology, methodology
- ISO 12100-2 (JIS B9700-2): Safety of machinery – Basic concepts, general principles for design – Part 2: Technical principles
- ISO 14121-1 (JIS B9702): Safety of machinery – Risk assessment – Part 1: Principles
- ISO 14121-2: Safety of machinery – Risk assessment – Part 2: Practical guidance and examples of methods
- ISO 13849-1 (JIS B9705-1): Safety of machinery – Safety-related parts of control systems – Part 1: General principles for design
- BS EN 954-1: Safety of machinery – Safety-related parts of control systems – General principles for design
- ANSI/RIA R15.06: Industrial Robots and Robot Systems – Safety Requirements
- ISO 13852 (JIS B9707): Safety of machinery – Safety distances to prevent danger zones being reached by the upper limbs
- ISO 14119 (JIS B9710): Safety of machinery – Interlocking devices associated with guards – Principles for design and selection
- ISO 13854 (JIS B9711): Safety of machinery – Minimum gaps to avoid crushing of parts of the human body
- ISO 14118 (JIS B9714): Safety of machinery – Prevention of unexpected start-up
- ISO 13855 (JIS B9715): Safety of machinery – Positioning of protective equipment with respect to the approach speeds of parts of the human body
- ISO 14120 (JIS B9716): Safety of machinery – Guards – General requirements for the design and construction of fixed and movable guards

standards and regulations at the component level. The EU standard permits the collaboration of robots with the operator when the total output of the robots is less than 150 N at the tip of the end-effector. The Japanese standard requires that each actuator have a power of less than 80 W. The collaboration safety design is presented as hardware design and control design in the following.

Area Division by Safety Light Curtains

The software systems in the robot controller and other computers are prepared with Dual Check Safety (DCS), which checks the speed and position data of the motors with two independent CPUs in the robot controller. In the risk assessment, we listed 168 risks and took a countermeasure for each



Figure 7. Three robot working zones for safety

so as to satisfy the required performance level. Whatever the definition of industrial robots, it is strictly prohibited for robots to share the same space with the operator; thus a cage is required to separate the operator from the robots. For the area division, the whole cell in Figure 7 is divided into a human area (H), a robot area (R), and a buffer area (B) by safety fences, photoelectric sensors, and light curtains, in order to obtain safe working areas and to monitor border crossings. Robots are allowed to operate at high speed in area R but only at low speed in area B. In area H, strong restrictions are applied to robot motions. When a manipulator moves too close to the operator and crosses light curtain 2, the power of the manipulator is cut off by the light curtain, and consequently the manipulator stops.
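The three-zone policy above can be encoded as a simple lookup plus a light-curtain override. The only figure taken from the chapter is the 150 mm/s collaboration limit quoted in the next section; the R and B speeds below are illustrative assumptions.

```python
# Maximum permitted Cartesian speed per zone (mm/s). R and B values are
# hypothetical; 150 mm/s in the human area H is the limit quoted in the text.
MAX_SPEED_MM_S = {"R": 1000.0, "B": 250.0, "H": 150.0}

def allowed_speed(zone, light_curtain_2_crossed):
    """Speed ceiling for a manipulator given its zone and curtain state."""
    if light_curtain_2_crossed:
        return 0.0                      # power to the manipulator is cut: it stops
    return MAX_SPEED_MM_S[zone]

print(allowed_speed("R", False))        # -> 1000.0 (high-speed motion permitted)
print(allowed_speed("H", True))         # -> 0.0 (curtain crossed, power cut)
```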

Speed/Force Limiter by Servo Controller

As shown in Figure 8, the speed of the mobile manipulators is limited by the servo controller, and the force/torque at the end-effector is also limited by software. The controller also has an abnormal-force limiting function in case of an unexpected collision of the manipulator with the environment.


Based on the recommendations of the safety standards and the risk assessment, during the collaboration process the speed of the mobile manipulators is limited to below 150 mm/s, and the working area of the robot is restricted to the pink region in Figure 8. The minimum distance between the robot gripper and the surface of the workbench is 120 mm, according to ISO 13854.
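A sketch of the two servo-side limits just stated: commanded Cartesian speed is clamped to 150 mm/s during collaboration, and any gripper pose that would leave less than the 120 mm ISO 13854 minimum gap above the workbench is rejected. The function names are illustrative; only the two numeric limits come from the text.

```python
SPEED_LIMIT_MM_S = 150.0   # collaboration speed limit (from the text)
MIN_GAP_MM = 120.0         # ISO 13854 minimum gap gripper-to-bench (from the text)

def clamp_speed(commanded_mm_s):
    """Never let the commanded collaboration speed exceed the limit."""
    return min(commanded_mm_s, SPEED_LIMIT_MM_S)

def pose_is_safe(gripper_height_mm, bench_height_mm):
    """Reject target poses that would crush a hand against the bench."""
    return gripper_height_mm - bench_height_mm >= MIN_GAP_MM

print(clamp_speed(400.0))               # -> 150.0 (clamped)
print(pose_is_safe(900.0, 800.0))       # -> False (only 100 mm clearance)
```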

Collision Protectors by Physical Devices

During the assembly process, several physical collision protectors have been designed for accident avoidance and operator protection.

1. Mobile Base: To prevent the operator from being hurt by the manipulators, the localization accuracy of the mobile base must be maintained. With a vision system that detects marks on the floor, the system achieves a localization accuracy of 5 mm and 0.1°. The base is equipped with a bumper switch for object collision detection and a wheel guard to prevent foreign objects from being tangled in the wheels, as illustrated in Figure 8.


Figure 8. Robot speed, force, and area restrictions

2. Footswitch: In the MASS system, the twin mobile manipulators assist the operator in executing the assembly tasks. Without a safety strategy, the operator could be injured by the manipulators. An effective working sequence is one way to reduce the probability of collision between the operator and a manipulator: the manipulators are prevented from moving in the direction of the operator while he performs an assembly task. To realize the proposed working sequence, a footswitch is used to control the manipulators, as illustrated in Figure 9. When the operator finishes an assembly step, he steps on the footswitch, which signals the manipulators to provide the assembly parts for the next step.
3. Emergency Button: When an accident occurs, the operator can push the emergency button on the right-hand side of the assembly workbench to stop the entire system, as shown in Figure 9. After the problem has been solved, the operator pushes the reset button to restart the assembly process.
4. Safe Bar: In addition, a steel safe bar is installed in front of the assembly workbench (see Figure 9). If the other strategies fail to stop a manipulator from colliding with the operator, this safe bar can protect the operator.

Collision Detector by IP Cameras

The developed system uses robots whose capability exceeds the limits of both the EU and Japanese standards. Even though various countermeasures are introduced, the risk assessment shows residual risks. For intelligent compensation of safety, two IP cameras are utilized to monitor the operator's safety (see Figure 10); that is, the cameras track color marks on the head and shoulders of the operator to measure body posture and position in order to estimate the operator's condition (Duan, 2009). The vision monitoring system has a positioning accuracy of 30 mm and a processing delay of 0.6 s.
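To see why the 0.6 s delay and 30 mm error matter, a back-of-the-envelope check in the spirit of ISO 13855 (minimum distance = approach speed x total response time + penetration allowance) can be made. The approach speed and robot stopping time below are assumptions; only the delay and accuracy figures come from the text.

```python
K_MM_S = 1600.0      # assumed human approach speed (ISO 13855 walking value)
T_DETECT_S = 0.6     # vision-system processing delay (from the text)
T_STOP_S = 0.2       # assumed manipulator stopping time (hypothetical)
C_MM = 30.0          # vision positioning error used as penetration allowance (from the text)

# Minimum separation the monitor must maintain to stop the robot in time
min_separation_mm = K_MM_S * (T_DETECT_S + T_STOP_S) + C_MM
print(min_separation_mm)  # -> 1310.0
```

Under these assumptions the camera monitor alone would need over a metre of separation, which illustrates why it is only a supplementary layer on top of the light curtains and physical protectors rather than the primary safeguard.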



Figure 9. Collision protectors by physical devices

Figure 10. Operator safety monitoring system

Inherent Safety Theory

Although several safety strategies are adopted, there is no guarantee that a collision between a manipulator and an operator will never occur. Therefore, the manipulators are modified according to inherent safety theory (Ikeda & Saito, 2005) to reduce injury to the operator. The sharp edges of the manipulators are softened into obtuse-angled brims. The force and speed of the manipulators are reduced as much as possible while still meeting the assembly requirements. In addition, the overall mobile manipulator system is built with a low center of gravity to prevent tipping.

Evaluation of MASS System

To evaluate the effect of the MASS system, a group of operators was required to execute a cable harness assembly task, as illustrated in Figure 11. In this task, operators must insert the correct cables into the corresponding holes in the connector. After that, following the cable routes, operators must fix the cables to the jigs on the assembly table. The operators executed the cable


harness task in two cases: (1) all of the assembly information, including the cable type, the position of the hole in the connector, and the assembly step, was provided only by the assembly manual (Exp I); (2) operators executed the cable harness task with the support of MASS (Exp II). Two parameters were measured in the experiments: assembly time and assembly errors. The assembly time was compared between the conventional manual assembly setup (Exp I) and the new setup (Exp II). Five novice operators and five experts each performed three assembly trials for both setups. Figure 12 shows that the overall performance is better (shorter assembly time) in the new setup (Exp II). Novices and experts show almost the same assembly time from the first trial in the new setup, as indicated by the dotted lines. This means that the assembly can be executed in the minimum time even by unskilled operators. Compared to the assembly time of the conventional setup (Exp I), the novice operators need only 50% of the time in the MASS system (Exp II), which indicates doubled productivity. Note that


Figure 11. Cable harness task

the assembly time at the third trial converges to the minimum in all cases. This implies that the assembly operation is easy to learn and that the human ability to learn is high. In other words, this system may be beneficial for very frequent changes of products. In terms of assembly quality, a 10% to 20% assembly error rate (insertion error)

is observed in the conventional setup (Exp I), while in the new cell production setup (Exp II) the error is completely prevented by the robot assistance, especially by the guidance of the laser pointer and by the instruction of the assembly sequences. According to the experimental results, it can be concluded that the developed MASS system

Figure 12. Difference of assembly time by experts and novices



can accelerate the operator's assembly process as well as prevent assembly errors. According to Zhang (2008), this cable harness task is a kind of cognitive assembly task (Norman, 1993); therefore, the mental workload of the operators cannot be ignored. To evaluate the mental workload of the operators in Exp I and Exp II, the NASA-TLX (Task Load Index) method (NASA, n.d.) was used. After the operators finished the cable harness task, they were required to answer questionnaires. Based on the NASA-TLX method, the mental workload of the operators can be computed. The mental workload in Exp I is 62, which is much higher than that in Exp II, which is 38. This means that with the support of MASS, the mental workload of the operators can be reduced significantly.
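For readers unfamiliar with NASA-TLX, the score is computed as follows: six subscale ratings (0-100) are combined with weights obtained from 15 pairwise comparisons (the weights sum to 15), and the weighted sum is divided by 15. The sample ratings and weights below are invented to show the arithmetic; the chapter reports only the overall scores of 62 (Exp I) and 38 (Exp II).

```python
SCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_score(ratings, weights):
    """Weighted NASA-TLX workload: sum(rating * weight) / 15."""
    assert sum(weights.values()) == 15   # 15 pairwise comparisons in total
    return sum(ratings[s] * weights[s] for s in SCALES) / 15.0

# Hypothetical questionnaire results for one operator
ratings = {"mental": 70, "physical": 40, "temporal": 60,
           "performance": 50, "effort": 65, "frustration": 55}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
print(round(tlx_score(ratings, weights), 1))   # -> 61.3
```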

CONCLUSION

This work aims to realize a new cellular manufacturing system for frequent changes of products. In this chapter, a multi-modal assembly-support system (MASS) was developed for a cellular manufacturing system. In the MASS, two manipulators are used in place of operators to execute the laborious tasks. Based on the assembly information database and the assembly information supporting system, this system is capable of meeting the assembly and training requirements of both experienced and novice operators. Besides developing the actual system, a simulator of the entire assembly system was created to reduce the time and cost required for development. To protect the operator from harm, several safety strategies and pieces of equipment were presented. According to inherent safety theory, the two manipulators were modified to reduce injury to the operators even if they are struck by the manipulators. To evaluate the effect of MASS, a group of experienced and novice operators was required to execute a cable harness task. According


to the experimental results, with the support of MASS, not only are the assembly time and error ratios reduced, but the mental workload of the operators is also reduced. Therefore, MASS allows an operator to receive physical and informational support while working in an actual manufacturing assembly process. Future studies should be directed at identifying and monitoring the conditions that contribute to operator fatigue, and at estimating operator intention, during the assembly process; these efforts will lead to improvements in operator comfort and assembly efficiency.

ACKNOWLEDGMENT This study is supported by NEDO (New Energy and Industrial Technology Development Organization) as one of “Strategic Projects of Element Technology for Advanced Robots”. The author Feng DUAN is supported by the Fundamental Research Funds for the Central Universities (No.65010221). We appreciate NEDO and MSTC for accelerating the development of this practical system. In particular, we would like to acknowledge FANUC Company for their excellent cooperation and technical support.

REFERENCES

Arai, T., Duan, F., Kato, R., Tan, J. T. C., Fujita, M., Morioka, M., & Sakakibara, S. (2009). A new cell production assembly system with twin manipulators on mobile base. Proceedings of the 2009 International Symposium on Assembly and Manufacturing (pp. 149-154). Suwon, Korea.

Bauer, A., Wollherr, D., & Buss, M. (2008). Human-robot collaboration: A survey. International Journal of Humanoid Robotics, 5(1), 47–66. doi:10.1142/S0219843608001303


Beauchamp, Y., & Stobbe, T. J. (1995). A review of experimental studies on human-robot system situations and their design implications. The International Journal of Human Factors in Manufacturing, 5(3), 283–302. doi:10.1002/hfm.4530050306

Colgate, J. E., Wannasuphoprasit, W., & Peshkin, M. A. (1996). Cobots: Robots for collaboration with human operators. Proceedings of the International Mechanical Engineering Congress and Exhibition, 58, 433–439.

Duan, F. (2009). Assembly skill transfer system for cell production. Unpublished doctoral dissertation, The University of Tokyo, Japan.

Duan, F., Morioka, M., Tan, J. T. C., & Arai, T. (2008). Multi-modal assembly-support system for cell production. International Journal of Automation Technology, 2(5), 384–389.

Duan, F., Tan, J. T. C., Kato, R., & Arai, T. (2009). Operator monitoring system for cell production. Advanced Robotics, 23, 1373–1391.

Hayakawa, Y., Kitagishi, I., & Sugano, S. (1998). Human intention based physical support robot in assembling work. Proceedings of the 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 930-935). Victoria, B.C., Canada.

Helms, E., Schraft, R. D., & Hagele, M. (2002). Rob@work: Robot assistant in industrial environments. Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication (pp. 399-404). Berlin, Germany.

Ikeda, H., & Saito, T. (2005). Proposal of inherently safe design method and safe design indexes for human-collaborative robots. Specific Research Reports of the National Institute of Industrial Safety, NIIS-SRR-No. 33, 5–13.

ISO 10218-1. (2006). Robots for industrial environments – Safety requirements – Part 1: Robot.

Isa, K., & Tsuru, T. (2002). Cell production and workplace innovation in Japan: Towards a new model for Japanese manufacturing? Industrial Relations, 41(4), 548–578. doi:10.1111/1468-232X.00264

ISO 12100. (2010). Safety of machinery. Retrieved from http://www.iso.org/iso/iso_catalogue/catalogue_ics/catalogue_detail_ics.htm?csnumber=27239

ISO 14121-1. (2007). Safety of machinery – Risk assessment – Part 1: Principles.

Kosuge, K., Yoshida, H., Taguchi, D., Fukuda, T., Hariki, K., Kanitani, K., & Sakai, M. (1994). Robot-human collaboration for new robotic applications. Proceedings of the 20th International Conference on Industrial Electronics, Control, and Instrumentation (pp. 713-718).

Mayer, R. E. (2001). Multi-media learning. New York, NY: Cambridge University Press.

NASA. (n.d.). NASA-TLX for Windows. US Naval Research Laboratory. Retrieved from http://www.nrl.navy.mil/aic/ide/NASATLX.php

Neider, J., Davis, T., & Woo, M. (1993). OpenGL programming guide: The official guide to learning OpenGL. Addison-Wesley Publishing Company.

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Addison-Wesley Publishing Company.

Oborski, P. (2004). Man-machine interactions in advanced manufacturing systems. International Journal of Advanced Manufacturing Technology, 23(3-4), 227–232. doi:10.1007/s00170-003-1574-5

Reinhart, C., & Patron, C. (2003). Integrating augmented reality in the assembly domain — Fundamentals, benefits and applications. Annals of the CIRP, 52(1), 5–8. doi:10.1016/S0007-8506(07)60517-4

575

Multi-Modal Assembly-Support System for Cellular Manufacturing

Roboguide, F. A. N. U. C. (n.d.). Robot system animation tool. Retrieved from http://www.fanuc. co.jp/en/ product/robot/ro boguide/index.html Seki, S. (2003). One by one production in the “Digital Yatai” —Practical use of 3D-CAD data in the fabrication. Journal of the Japan Society of Mechanical Engineering, 106(1013), 32–36. Sugi, M., Nikaido, M., Tamura, Y., Ota, J., Arai, T., & Kotani, K. … Sato, Y. (2005). Motion control of self-moving trays for human supporting production cell “attentive workbench”. Proceedings of the 2005 IEEE International Conference of Robotics and Automation (pp. 4080-4085). Barcelona, Spain.

Tan, J. T. C., Duan, F., Zhang, Y., Watanabe, K., Pongthanya, N., & Sugi, M. … Arai, T. (2008). Assembly information system for operational support in cell production. The 41st CIRP Conference on Manufacturing Systems (pp. 209-212). Wemmerlov, U., & Johnson, D. J. (1997). Cellular manufacturing at 46 user plants: Implementation experiences and performance improvements. International Journal of Production Research, 35(1), 29–49. doi:10.1080/002075497195966 Zhang, Y., Duan, F., Tan, J. T. C., Watanabe, K., Pongthanya, N., & Sugi, M. … Arai, T. (2008). A study of design factors for information supporting system in cell production. The 41st CIRP Conference on Manufacturing Systems (pp. 319-322).

This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 412-427, copyright 2012 by Business Science Reference (an imprint of IGI Global).



Chapter 33

Modeling and Simulation of Discrete Event Robotic Systems Using Extended Petri Nets

Gen'ichi Yasuda
Nagasaki Institute of Applied Science, Japan

ABSTRACT

This chapter deals with modeling, simulation, and implementation problems encountered in robotic manufacturing control systems. Extended Petri nets are adopted as a prototyping tool for expressing real-time control of robotic systems, and a systematic method based on hierarchical Petri nets is described for their direct implementation. A coordination mechanism is introduced to coordinate the event activities of the distributed machine controllers through firability tests of shared global transitions. The proposed prototyping method allows a direct coding of the inter-task cooperation by robots and intelligent machines from the conceptual Petri net specification, so that it increases the traceability and the understanding of the control flow of a parallel application specified by a net model. This approach can be integrated with off-the-shelf real-time executives. Control software using multithreaded programming is demonstrated to show the effectiveness of the proposed method.

DOI: 10.4018/978-1-4666-1945-6.ch033

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

1. INTRODUCTION

Complex robotic systems such as flexible manufacturing systems require sophisticated distributed real-time control systems. A major problem concerns the definition of the user tasks and the cooperation between the subsystems, especially since the intelligence is distributed at a low level (the machine level). Controlling such systems generally requires a hierarchy of control units corresponding to several abstraction levels. At the bottom of the hierarchy, i.e. the machine control level, are the programmable logic controllers (PLCs). The next level performs coordination of the PLCs. The third level implements scheduling, that is, the real-time assignment of workpieces and tools to machines. At the machine level, PLCs perform local logical control operations of flexible, modular, high-speed machines through the use of multiple independent drives (Holding, et al. 1992). Implementation languages can be based on ladder diagrams or, more recently, state machines (Silva, et al. 1982). However, when the local control is of greater complexity, these kinds of languages may not be well adapted.

The development of industrial techniques makes the sequential control system for a robotic manufacturing system larger and more complicated, with some subsystems operating concurrently and cooperatively (Neumann, 2007). In the area of real-time control of discrete event robotic systems, the main problems the system designer has to deal with are concurrency, synchronization, and resource sharing. Presently, the implementation of such control systems makes large use of microcomputers. Real-time executives are available with complete sets of synchronization and communication primitives. However, coding the specifications is hazardous work, and debugging the implementation is particularly difficult when concurrency is important. It is therefore important to have a formal tool powerful enough to support validation procedures before implementation. Conventional specification languages do not allow an analytical validation; consequently, the only way to validate is via simulation and step-by-step debugging. On the other hand, a specification method based on a mathematical tool may be more restrictive, but analytical procedures can strongly reduce the simulation step. Rapid prototyping is an economical and crucial way to experiment with, debug, and improve specifications of parallel applications. The increasing complexity of the synchronizing mechanisms involved in concurrent system design makes necessary a prototyping step starting from a formal, already verified model.
Petri nets make it possible to validate and evaluate a model before its implementation. The formalism, allowing a validation of the main properties of the Petri net control structure (liveness, boundedness, etc.), guarantees that the control system will not immediately fall into a deadlocked situation. In the field of flexible manufacturing cells, this last aspect is essential because the sequences of control are complex and change very often. When using Petri nets, events are associated with transitions. Activities are associated with the firing of transitions and with the marking of places, which represents the state of the system. The net model can describe the execution order of sequential and parallel tasks directly and without ambiguity (Silva, 1990). Pure synchronization between tasks, choices between alternatives, and rendezvous can be naturally represented. Moreover, at the machine level, Petri nets have been successfully used to represent the sequencing of elementary operations (Yasuda, et al. 1992).

In addition to a graphic representation differentiating events and states, Petri nets offer the possibility of progressive modeling by using stepwise refinements or modular composition. Libraries of well-tested subnets allow component reusability, leading to significant reductions in the modeling effort. The possibility of progressive modeling is absolutely necessary for large and complex systems, because the refinement mechanism allows the building of hierarchically structured net models. Furthermore, a real-time implementation of the Petri net specification by software called a token player can avoid implementation errors, because the specification is directly executed by the token player and the implementation of the control sequences preserves the properties of the model. In this approach, the Petri net model is stored in a database, and the token player updates the state of the database according to the operation rules of the model. For control purposes, this solution is very well suited to the need for flexibility because, when the control sequences change, only the database needs to be changed.
Some techniques derived from Petri nets have been successfully introduced as an effective tool for describing control specifications and realizing the control in a uniform manner (Murata, et al. 1986). However, in the field of flexible manufacturing cells, the net model becomes complicated and lacks readability and comprehensibility (David, et al. 1992). Therefore, the flexibility and expandability are not satisfactory for dealing with specification changes of the control system. Despite the advantages offered by Petri nets, the synthesis, correction, and updating of the system model and the programming of the controllers are not simple tasks (Zhou, et al. 1993), (Desrochers, et al. 1995), (Lee, et al. 2006). Some Petri net implementation methods have already been proposed for simulation purposes or for application prototyping (Butler, 1991), (Garcia, 1998), (Piedrafita, et al. 2008). However, an implementation method for hierarchical and distributed control of complex robotic systems has not been sufficiently established so far (Breant, et al. 1992), (Girault, et al. 2003), (Zhou, et al. 1999). If such control can be implemented using Petri nets, the modeling, simulation, and control can be realized consistently.

This chapter describes a Petri net based prototyping method for real-time control of complex robotic systems. The presented method, based on the author's previous works (Yasuda, et al. 2010), (Yasuda, 2010), involves three major steps and progressively gathers all the information needed for the control system design and the code generation for simulation experiments. The first step consists in specifying the conceptual net model for overall system control. The second step consists in transforming the conceptual net model into the detailed net model. Based on the hierarchical and distributed structure of the system, the specification procedure is a top-down approach from the conceptual level to the detailed level. The third step consists in decomposing the detailed net into local net models for machine control and the coordination model.
The coordination algorithms are simplified since the robots and machines in the system are separately controlled using dedicated task execution programs. In order to deal with complex models, a hierarchical approach is adopted for the coordination model design. In this way, the macro representation of the system is broken down to generate the detailed nets at the local machine control level. Finally, C++ code generation using multithreaded programming is described for the prototype hierarchical and distributed control system.

2. MODELING OF DISCRETE EVENT SYSTEMS USING EXTENDED PETRI NETS

A Petri net is a directed graph whose nodes are places, shown by circles, and transitions, shown by bars. Directed arcs connect places to transitions and transitions to places. Formally, a Petri net is a bipartite graph represented by the 4-tuple G = {P, T, I, O} such that:

P = {p1, p2, …, pn} is a finite, non-empty set of places;
T = {t1, t2, …, tm} is a finite, non-empty set of transitions;
P ∩ T = ∅, i.e. the sets P and T are disjoint;
I: T → P^∞ is an input function, a mapping from transitions to bags of places;
O: T → P^∞ is an output function, a mapping from transitions to bags of places.

The input function I maps a transition tj to a collection of places I(tj), known as the input places of the transition. The output function O maps a transition tj to a collection of places O(tj), known as the output places of the transition. The pre-incidence matrix of a Petri net is C^- = [c^-_ij], where c^-_ij = 1 if pi ∈ I(tj) and c^-_ij = 0 if pi ∉ I(tj); the post-incidence matrix is C^+ = [c^+_ij], where c^+_ij = 1 if pi ∈ O(tj) and c^+_ij = 0 if pi ∉ O(tj). The incidence matrix of the Petri net is then C = C^+ - C^-.

Each place contains an integer (positive or zero) number of marks or tokens. The number of tokens in each place is defined by the marking vector M = (m1, m2, …, mn)^T. The number of tokens in one place pi is simply indicated by M(pi). The firing of a transition will change the token distribution (marking) in the net according to the transition firing rule. In the basic Petri net, a transition tj is enabled if, ∀pi ∈ I(tj), Mk(pi) ≥ w(pi, tj), where Mk is the current marking and w(pi, tj) is the weight of the arc from pi to tj. A sequence of firings will result in a sequence of markings. A marking Mn is said to be reachable from a marking M0 if there exists a sequence of firings that transforms M0 into Mn. The set of all possible markings reachable from M0 is denoted R(M0). A Petri net is said to be k-bounded, or simply bounded, if the number of tokens in each place does not exceed a finite number k for any marking reachable from M0, i.e., ∀pi ∈ P, ∀M ∈ R(M0), M(pi) ≤ k (Reisig, 1985), (Murata, 1989).

In the basic Petri net, bumping occurs when, despite the holding of a condition, the preceding event occurs; this can result in the multiple holding of that condition. From the viewpoint of discrete event process control, bumping phenomena should be excluded, so the firing rule was modified to make the system free of this phenomenon. Because the modified Petri net must be 1-bounded, for each place pi, mi = 0 or 1, and the weight of every arc is 1. A Petri net is said to be ordinary if all of its arc weights are 1. The axioms of the modified Petri net are as follows:

1. A transition tj is enabled if for each place pk ∈ I(tj), mk = 1, and for each place pl ∈ O(tj), ml = 0.
2. When an enabled transition tj fires, the marking M is changed to M′, where for each place pk ∈ I(tj), m′k = 0, and for each place pl ∈ O(tj), m′l = 1.
3. In any initial marking, there must not exist more than one token in each place.
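As an illustration of these axioms (a Python sketch, not the chapter's own code; the two-place cyclic net used here is hypothetical):

```python
# Sketch of the modified (safe) Petri net firing rule: a transition is
# enabled only if every input place holds a token AND every output place
# is empty, so no place ever holds more than one token (1-boundedness).

I = {"t1": ["p1"], "t2": ["p2"]}   # input places of each transition
O = {"t1": ["p2"], "t2": ["p1"]}   # output places of each transition

def enabled(t, m):
    return all(m[p] == 1 for p in I[t]) and all(m[p] == 0 for p in O[t])

def fire(t, m):
    assert enabled(t, m)
    for p in I[t]:
        m[p] = 0          # consume the token from each input place
    for p in O[t]:
        m[p] = 1          # produce a token in each output place

marking = {"p1": 1, "p2": 0}
fire("t1", marking)        # t1 is enabled: p1 is marked and p2 is empty
```

After the firing, the marking is {p1: 0, p2: 1}; t1 is now disabled both because p1 is empty and because its output place p2 is occupied, which is exactly the exclusion of bumping described above.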

A transition without any input place is called a source transition, and one without any output place is called a sink transition. A source transition is unconditionally enabled, and the firing of a sink transition consumes a token in each of its input places but does not produce any. According to these axioms, the number of tokens in a place never exceeds one; thus the modified Petri net is essentially 1-bounded and is said to be a safe graph.

Besides the guarantee of safeness, considering not only the modeling but also the actual control of robotic systems, additional input and output interfaces connecting the net to its environment are required. The extended Petri net adopts the following two elements: 1) the gate arc, and 2) the output signal arc (Hasegawa, et al. 1984). A gate arc connects a transition with a signal source and, depending on the signal, either permits or inhibits the occurrence of the event corresponding to the connected transition. Gate arcs are classified as permissive or inhibitive, and internal or external. An output signal arc sends a command request signal from a place to an external machine. The interfaces are a set of transitions which represent the communication activities of the net with its environment. Thus the firing rule of a transition is as follows: an enabled transition may fire when it has no internal permissive arc signaling 0, no internal inhibitive arc signaling 1, no external permissive arc signaling 0, and no external inhibitive arc signaling 1.

A robotic action is modeled by two transitions and one condition, as shown in Figure 1. At the "Start" transition the command associated with the transition is sent to the corresponding robot or machine. At the "End" transition the status report is received. When a token is present in the "Action" place, the action is in progress. The "Completed" place can be omitted, in which case the "End" transition is fused with the "Start" transition of the next action. Activities can be assigned an amount of time units so that they can be monitored in time for real performance evaluation. The firing of a transition is indivisible; a firing has a duration of zero. Extended Petri nets with timing conditions, where each activity is assigned an amount of time units, can also be used, as shown in Figure 2.

Figure 1. Extended Petri net model of robotic action with external permissive and inhibitive gate arcs

Figure 2. Examples of representation of timing conditions: (a) external timer with output signal arc and external gate arc, (b) timed transition

Through the simulation steps, the transition vector table is efficiently used to extract enabled or fired transitions. The flow chart of the simulation and evaluation procedure is shown in Figure 3. At each step of the simulation of a robotic task, the configuration of the robots can be seen in the graphic simulation.

The data structure of the extended Petri net simulator is made up of several tables corresponding to the structural information of the net specifying the robotic task. These tables are the following:

1. The table of the labels of the input and output places for each transition;
2. The table of the transitions which are likely to be arbitrated for each conflict place;
3. The table of the gate arcs, which are internal or external, permissive or inhibitive, for each transition;
4. The table of marking, which indicates the current marking of each place;
5. The table of the places-to-tasks mapping, which points out the tasks that have to be put into the ready state when the corresponding place receives a token;
6. The table of the "end of task" transitions, which associates with each task the set of transitions with external gate arcs switched each time an "end of task" message is received by the simulator. The "end of task" transitions are only fired on the reception of an "end of task" message.
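These six tables might be organized, for instance, as plain dictionaries (an illustrative Python sketch; all key names are hypothetical, not the chapter's actual data structures):

```python
# Hypothetical layout of the simulator's six relational tables for a
# one-transition fragment of a net (illustrative only).

tables = {
    "io_places":     {"t1": {"in": ["p1"], "out": ["p2"]}},     # table 1
    "conflicts":     {"p1": ["t1", "t3"]},                      # table 2
    "gate_arcs":     {"t1": [("external", "permissive")]},      # table 3
    "marking":       {"p1": 1, "p2": 0},                        # table 4
    "place_to_task": {"p2": "unload"},                          # table 5
    "end_of_task":   {"unload": ["t2"]},                        # table 6
}

def ready_tasks(tables):
    """Tasks to put in the ready state: their place currently holds a token."""
    return [task for place, task in tables["place_to_task"].items()
            if tables["marking"].get(place) == 1]
```

With this layout, delivering a token to p2 (table 4) is what moves the "unload" task to the ready state via the place-to-task mapping (table 5).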

Figure 3. Flow chart of simulation and evaluation procedure

3. PETRI NET MODELS OF COOPERATIVE CONTROL OF CONCURRENT TASKS

A robotic task consists of several subtasks or operations, and a subtask consists of several actions. Conceptually, robotic processes are represented as sequential constructs or state machines, where each transition has exactly one incoming arc and exactly one outgoing arc. The structure of a place having two or more output transitions is referred to as a conflict, decision, or choice, depending on the application. State machines allow the representation of decisions, but not the synchronization of parallel activities. In a net model of a robotic task, the set of places can be classified into three groups: idle, operation, and resource places. A token in an idle place indicates that the robot is ready to work and waiting for a specified signal from another robot or its environment. An operation place represents an operation to be processed for a workpiece or part in a manufacturing sequence; initially it has no token. Resource places represent resources (robots and machines), and their initial tokens represent the number of available resource units.

A task executed by a robot or machine can be seen as a connection of more detailed subtasks. For example, transferring an object from a start position to a goal position is a sequence of the following subtasks: moving the hand to the start position, grasping the object, moving to the goal position, and putting the object at the specified place. Figure 4 shows the net representation of a robotic task: a pick and place operation with input and output conveyors. While the place "Robot" in Figure 4(a) indicates that the state of the robot is "ready" when the token is in the place, in Figure 4(b) it indicates that the state of the robot is "operating". The place is also the macro representation of the pick and place operation. The parallel net in Figure 4(b) is equivalent to the cyclic net in Figure 4(a) with respect to the enabling conditions of all the transitions in the net. The parallel net assures that the robot can load or unload only one workpiece at a time. Figure 4(c) shows a possible evolution of the dynamic behavior of the net.


Figure 4. Net representation of robotic task: (a) cyclic net model of robot, (b) equivalent parallel net model, (c) possible state of net model

Furthermore, the subtasks are translated into more detailed actions. A hierarchical approach consists in building a model by stepwise refinements, where at each step some parts of the model are replaced by a more complex model. A modular approach by model composition makes it possible to build complex validated Petri nets. Figure 5 shows the view of a hierarchical net representation on the graphic net simulator.

Figure 5. Hierarchical net representation of robotic task (loading) on graphic net simulator

A specification procedure for discrete event robotic systems based on Petri nets is as follows. First, the conceptual level activities of the system are defined through a net model considering the task specification corresponding to the aggregate discrete event process. The places which represent the subtasks indicated in the task specification are connected by arcs via transitions in the specified order corresponding to the flow of subtasks and a workpiece. The places representing the robots and machines used for the subtasks are also added, connected to the transitions which correspond to the beginning and ending of their subtasks. Then, the places describing the subtasks are substituted by subnets based on the activity specification and the required control strategies, in a manner which maintains the structural properties (Yasuda, et al. 2010).

For concurrent control in the conceptual net model, two implementation methods of synchronous interaction between two tasks, each executed by one robot, are shown in Figure 6(a), (b) (Yasuda, 2000), while an implementation with asynchronous communication based on the well-known signal/wait (semaphore) concept is shown in Figure 6(c). A coordination mechanism is introduced to coordinate concurrent systems with separate robots which require interaction with each other, such as synchronization and resource conflict resolution. Figure 7 shows the net model of a coordination mechanism that conducts synchronous interaction by means of synchronous communication between two robots; this is also the detailed representation of the shared transition. A shared transition for synchronous interaction by separate robots is said to be a global transition, while a transition for an independent action by a single robot is said to be a local transition. For synchronous interaction, the coordination algorithm is formally expressed using logical variables such that

the global transition is fired if all of the associated transitions in the local net models are fired. The firing condition of a global transition Gj in the conceptual net, which represents the event of a synchronous action by S robots, is written as

Gj = t_j,1 ∩ t_j,2 ∩ … ∩ t_j,S   (1)

where the corresponding event of the action by each robot is represented by t_j,sub (sub = 1, …, S), and ∩ denotes the logical product operation.

Figure 6. Net representation of synchronous interaction between two concurrent tasks: (a) synchronous communication with a shared transition, (b) interlock with mutual gate arcs, (c) asynchronous communication (signal/wait)

Figure 7. Net model of coordination mechanism for synchronous interaction by means of synchronous communication with a shared transition

Figure 8 shows a net model of a coordination mechanism to execute selective control by means of synchronous communication, where the decision place executes an arbitration rule, such as order or priority, to select an independent action by one robot or a cooperative action by two robots.

A hierarchical and distributed control system is composed of one system controller and several machine controllers. The coordination mechanism as well as the conceptual net model of the system is implemented in the system controller, and the detailed net models are allocated to the machine controllers. The coordination program is substantially the firing test program of the global transitions, using the firing rule and a set of relational data tables of the global transitions with their input and output places and associated gate arcs. The coordination procedure, through communication between the coordinator and the machine controllers, is as follows:

1. When a machine controller receives a command start signal from the coordinator, it starts the execution of the requested command task.
2. At the end of the execution, the machine controller sends a status report as a gate condition to the coordinator.
3. When the coordinator receives the new gate condition, it updates the net model and tests the firing conditions associated with the gate condition in the net. If a new transition is fired, then the command associated with the transition is sent to the corresponding machine controller.
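The coordinator's side of this procedure, including the Eq. (1) test that a global transition fires only when every participating controller's condition holds, can be sketched as follows (an illustrative Python sketch; the class and method names are hypothetical, not the chapter's generated code):

```python
# Coordinator sketch: gate conditions arrive from machine controllers
# (step 2); the coordinator updates its table, re-tests firability
# (step 3), and starts the next command tasks on firing (step 1).
# A global transition is firable only if ALL of its gate conditions
# hold, i.e. the logical product of Eq. (1).

class Coordinator:
    def __init__(self, transitions, controllers):
        # transitions: name -> {"gates": [...], "commands": [(machine, cmd)]}
        self.transitions = transitions
        self.controllers = controllers      # machine name -> start function
        self.gate = {}                      # gate condition table

    def firable(self, t):
        # Eq. (1): logical product over all required gate conditions.
        return all(self.gate.get(g, False) for g in self.transitions[t]["gates"])

    def on_status_report(self, gate_name):
        self.gate[gate_name] = True         # step 2: a machine reports back
        for t, spec in self.transitions.items():
            if gate_name in spec["gates"] and self.firable(t):
                for g in spec["gates"]:
                    self.gate[g] = False    # consume the gate conditions
                for machine, cmd in spec["commands"]:
                    self.controllers[machine](cmd)  # step 1: start next task
```

For a global transition shared by a robot and a machining center, `on_status_report` fires it only after both status reports have arrived, mirroring the simultaneous firing of shared transitions described above.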

For the actual control, the operations of each machine or robot are broken down into a series of unit motions, each represented by a mutual connection between places and transitions. A place stands for a concrete unit motion of a machine. From these places, output signal arcs are connected to the external machines, and external gate arcs from the machines are connected to the transitions of the net when needed, for example, to synchronize and coordinate operations. When a token enters a place that represents a subtask, the machine defined by the machine code is informed to execute the specified subtask with positional and control data; this code and these data are defined as the place parameters. Figure 9 shows the net representation of real-time execution control of a robotic unit action.

Figure 8. Net model of coordination mechanism for selective control



Figure 9. Net representation of execution control of robotic action using output signal arc and external permissive gate arc

4. IMPLEMENTATION OF DISTRIBUTED CONTROL WITH MULTITHREADS

The example robotic system has one robot, one machining center, and two conveyors, one for carrying in and the other for carrying out. The main execution of the system is indicated by the following task specification:

1. A workpiece is carried in by the input conveyor.
2. The workpiece is loaded into the machining center by the robot.
3. The workpiece is processed by the machining center.
4. The workpiece is unloaded from the machining center by the robot.
5. The workpiece is carried out by the output conveyor.
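Since each step has exactly one predecessor and one successor, the conceptual net for this specification is a sequential chain of subtask places joined by transitions; sketched below (illustrative Python; the place and transition names are hypothetical, loosely following Figure 10):

```python
# The five subtasks of the example form a sequential (state machine) net:
# each consecutive pair of subtask places is joined by one transition.

subtasks = [
    "carry_in",   # 1. input conveyor
    "load",       # 2. robot
    "process",    # 3. machining center
    "unload",     # 4. robot
    "carry_out",  # 5. output conveyor
]
# t1: carry_in -> load, t2: load -> process, ... (4 transitions in total)
transitions = {f"t{i + 1}": (subtasks[i], subtasks[i + 1])
               for i in range(len(subtasks) - 1)}
```

In the full model of Figure 10, the transitions bordering "load" and "unload" are the shared (global) transitions, since they involve both the robot and another machine.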

The robotic system works in the following way. A workpiece comes in on the input conveyor up to the take-up position. The robot waits at the waiting position in front of the conveyor and, when the conveyor stops, approaches the take-up position, grips the workpiece, and returns to the waiting position. Then it turns, goes into the working space of the machining center, and leaves the workpiece there. After automatic gripping of the workpiece, the robot draws back and waits for the machining center to complete the processing. After the processing, the robot goes to the machining center, takes the workpiece from the opened vice, and carries it over to a free position on the output conveyor.

Figure 10. Net representation of robotic task at conceptual level

The discrete event processes of the robotic system are represented at the conceptual level as shown in Figure 10. All the transitions are used to coordinate the overall system. Shared transitions between the robot and a machine represent synchronous interactions and are coordinated as global transitions to be fired simultaneously. In this step, if necessary, control conditions, such as the capacity of the system, must be connected between the respective subtasks to regulate the execution of the manufacturing process. Next, each place representing a subtask at the conceptual level is translated into a detailed subnet, which is used for the local control of each robot or machine.

The prototyping method was applied to semi-automatically produce a C++ program from the net models on a general PC using multithreaded programming (Grehan, et al. 1998). Then, by executing the coordination program and the net based controller algorithms, based on the loaded information, on a network of dedicated microcomputers, final-test experiments can be performed (Yasuda, 2010). The multithreaded control software, composed of one modeling thread, one simulation thread, and several task execution threads, was designed and written in Microsoft Visual C# under Windows XP SP3. The simulation thread executes the coordination program and the conceptual net based controller algorithm, while the task execution threads execute the local net based controller algorithms, which control the robots and machines through serial interfaces using the command/response concept. An example diagram of two-level net based concurrent real-time control of two external machines, using one simulation and two task execution threads, is shown in Figure 11.

The modeling thread, which is the main thread, executes the event driven net modeling, drawing and modification based on task specification using windows button clicks and mouse operations, as shown in Figure 12. When the transformation of graphic data of the net model into internal structural data is finished, the simulation thread is activated using window buttons by the user from the modeling thread. The simulation thread executes the enabling and firing test using gate conditions as shown in Figure 13, and when a transition is fired, the simulation thread activates the task execution thread and initiates the execution of a subtask by sending commands attached to the fired transition. When all the subtasks in the system are in progressive, the simulation thread waits for the turning on of any gate condition repeatedly. If a subtask is completed, the gate condition is turned on, then the simulation thread receives the gate signal through the external gate arc and updates the table of gate conditions. The program structure and the main C# code of the task execution thread are illustrated in Figure 14 and Listing 1. One task execution thread is allocated in each machine. When a task execution thread is activated, it sends the direct commands with specified positional and control data

587

Modeling and Simulation of Discrete Event Robotic Systems Using Extended Petri Nets

Figure 11. Two-level net based concurrent real-time control of two external machines

Figure 12. Program structure and main C# code of modeling thread

588

Modeling and Simulation of Discrete Event Robotic Systems Using Extended Petri Nets

Figure 13. Program structure of simulation thread

through the serial interface to the dedicated robot or machine. Then the thread searches for the target gate condition and repeatedly waits for a status report through the interface. When the subtask ends its activity normally, the thread receives the normal-end report. Then the thread turns the target gate condition on and the current gate condition off, so that the simulation thread can proceed with the next activations through the external gate arc. During simulation and task execution, the simulator decides whether the system is in a deadlocked situation, and the user can stop the task execution through the simulator at any time. In the real-time control, both the simulation thread and the task execution threads access the external gate variables as shared variables: the task execution threads write the values of the gate variables after the completion of subtasks, and the simulation thread reads them for the firability test considering gate conditions. The mutual exclusive access control and its C# code were implemented as shown in Figure 15 and Listing 2, respectively.

Figure 14. Program structure of task execution thread

The method function uses the C# "lock" statement for mutually exclusive access to shared


Listing 1. Task execution thread

variables so that, while one thread is calling the function, the other thread cannot call it. Using the method function call, the simulator thread waits for the external gate signal; after a task execution thread writes the permissive value of the target gate variable, the simulator thread reads it. Experimental results of multithread scheduling for one and two task execution threads are shown in Figures 16(a) and (b), respectively. In the case of two threads, one thread takes charge of a robot controller and the other takes charge of a PLC for the sequence control of a conveyor, both through serial interfaces. Here, the time slice of the OS is about 15 ms, and a timer routine is inserted at the reference position of each thread to capture the time at which the method function is called.
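The shared-gate-variable pattern described above can be sketched in C# as follows. This is an illustrative sketch, not the chapter's Listing 2: the class name `GateVariables` and its members are our own assumptions. It shows a task execution thread writing a gate variable under the `lock` statement and the simulation thread reading it for the firability test.

```csharp
using System;
using System.Threading;

// Illustrative sketch (not the chapter's actual listing): task execution
// threads write shared gate variables under C#'s "lock" statement, and
// the simulation thread reads them for the firability test.
class GateVariables
{
    private readonly object mutex = new object();
    private readonly bool[] gates = new bool[8];

    // Called by a task execution thread after a subtask completes.
    public void WriteGate(int index, bool value)
    {
        lock (mutex) { gates[index] = value; }  // only one thread may enter at a time
    }

    // Called by the simulation thread during the firability test.
    public bool ReadGate(int index)
    {
        lock (mutex) { return gates[index]; }
    }

    static void Main()
    {
        var g = new GateVariables();

        // Task execution thread: turn the target gate condition on.
        var worker = new Thread(() => g.WriteGate(3, true));
        worker.Start();

        // Simulation thread (here, the main thread): poll the gate,
        // then proceed with the next activation.
        while (!g.ReadGate(3)) Thread.Sleep(1);
        worker.Join();
        Console.WriteLine("gate 3 on; transition can fire");
    }
}
```

Because every access goes through the locked methods, a gate value written by one thread is visible to the other without a data race, which is the property the firability test relies on.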

Numerous experiments show that the gate condition is transferred through the shared memory from the task execution thread to the simulation thread with little delay. Experiments using a real robot and conveyors show that the simulation thread and the task execution threads are executed concurrently with equal priority, and that the values of the external gate variables are changed successfully in the conceptual net model. The global transitions fire simultaneously with the transitions of the conceptual net of the whole system task. The robot cooperated with the conveyors and the machining center, and the example robotic system performed the task specification successfully.

Figure 15. Program structure of method function for mutual exclusive access


Listing 2. Method function for mutual exclusive access

Figure 16. Experimental results of multithreads scheduling: (a) one task execution thread, (b) two task execution threads


In accordance with the implementation using multithreaded programming, a hierarchical and distributed implementation under a real-time operating system on a network of microcomputers connected via a shared serial bus is also possible, where each microcomputer is dedicated to the local net model of a subsystem in the overall robotic system. After the arrival of a request, the response carrying the status information crosses the communication network, reaches the input buffer of the controller's network board, and is written into the cache memory. The control data issued by the controller is written into the cache memory shared with the network board, sent to its target microcomputer, and there causes the new controlled status. By replacing traditional directly wired systems with a shared communication network, more flexible, reliable, and efficient control performance can be expected.

5. CONCLUSION

A prototyping methodology to build hierarchical and distributed control systems corresponding to the hardware structure of robotic control systems has been presented. The conceptual net is used to coordinate distributed local machine controllers using the decomposed information of the global transitions representing cooperative interaction between machines; the coordination mechanism can be implemented repeatedly in each layer of the control hierarchy of the system. The overall control structure of the example robotic system was implemented on a general PC with a serial interface using multithreaded programming. For the example system, detailed net models can be automatically generated using the database of robotic operations. The hierarchical approach allows us to reuse validated net models, such as loading, unloading, and specific handling operations, already defined for other purposes, which is an efficient way to deal with complex net models. The conceptual and local net models are


small enough that all of the net-based controllers can be implemented on general microcomputers or PLCs. Thus, the modeling, simulation, and control of large and complex manufacturing systems can be performed consistently using Petri nets.


This work was previously published in Prototyping of Robotic Systems: Applications of Design and Implementation, edited by Tarek Sobh and Xingguo Xiong, pp. 51-69, copyright 2012 by Information Science Reference (an imprint of IGI Global).


Chapter 34

Human-Friendly Robots for Entertainment and Education

Jorge Solis
Waseda University, Japan & Karlstad University, Sweden

Atsuo Takanishi
Waseda University, Japan

ABSTRACT

Even though the market is still small at the moment, applications of robots are gradually spreading from the industrial manufacturing environment to other important challenges, such as supporting an aging society and educating new generations. The development of human-friendly robots drives research that aims at autonomous or semi-autonomous robots that are natural and intuitive for the average consumer to interact with, communicate with, and work with as partners, and that can learn new capabilities. In this chapter, we give an overview of research on mechanism design and the implementation of intelligent control strategies on different platforms, and on their application to the entertainment and education domains. In particular, the development of an anthropomorphic saxophonist robot (designed to mechanically reproduce the organs involved in saxophone playing) and of a two-wheeled inverted pendulum (designed to introduce the principles of mechanics, electronics, control, and programming at different education levels) will be presented.

INTRODUCTION

The development of anthropomorphic robots is inspired by the ancient dream of humans replicating themselves. However, human behaviors are difficult to explain and model. The recent technological advances in robot technology,

artificial intelligence, power computation, etc. have contributed to enabling humanoid robots to roughly emulate the physical dynamics and motor dexterity of humans. Nowadays, humanoid robots are capable of displaying motor dexterity for dancing, playing musical instruments, talking, etc. Although the long-term goal of truly autonomous humanoid robots has yet to be accomplished, the

DOI: 10.4018/978-1-4666-1945-6.ch034

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


feasibility of integrating them into people's daily lives is coming closer. Toward developing humanoid robots capable of interacting more naturally with human partners, robots are required to process and display human-like emotions. The way a person interacts with a humanoid robot is quite different from interacting with the majority of today's industrial robots. Modern robots are generally viewed as tools that human specialists use to perform hazardous tasks in remote environments. In contrast, human-like personal robots are often designed to engage people in order to achieve social or emotional goals. The development of socially intelligent and socially skillful robots drives research to develop autonomous or semi-autonomous robots that are natural and intuitive for the average consumer to interact with, communicate with, work with as partners, and teach new capabilities. In addition, this domain motivates new questions for robotics researchers, such as how to design for a successful long-term relationship in which the robot remains appealing and provides consistent benefit to people over weeks, months, and even years. The benefit that social robots provide extends far beyond strict task-performing utility to include educational, health and therapeutic, domestic, social, and emotional goals (e.g., entertainment, companionship, communication, etc.), and more. However, these mechanical devices are still far from understanding and processing emotional states as humans do. Research on musical performance robots seems a particularly promising path toward helping to overcome this limitation, because music is a universal communication medium, at least within a given cultural context. Furthermore, research into robotic musical performance can shed light on aspects of expression that traditionally have been hidden behind the rubric of "musical intuition." The late Prof. Ichiro Kato argued that artistic activities such as playing a keyboard instrument would require human-like intelligence and dexterity (Kato, et al., 1973). In 1984, at Waseda University, the WABOT-2 was

the first attempt to develop an anthropomorphic music robot capable of playing a concert organ (Sugano & Kato, 1987). Then, in 1985, the WASUBOT, also built at Waseda, could read a musical score and play a repertoire of 16 tunes on a keyboard instrument. More recently, thanks to technological advances in power computation, Musical Information Retrieval (MIR), and robot technology, several researchers have focused on developing anthropomorphic robots and interactive automated instruments capable of interacting with musical partners. As a result, different kinds of automated wind-instrument-playing machines and humanoid robots have been developed (Doyon & Liaigre, 1966; Klaedefabrik, 2005; Solis, et al., 2008; Takashima & Miyawaki, 2006; Solis, et al., 2009a; Dannenberg, 2005; Toyota Motor Corporation, 2011; Degallier, 2006; etc.). Other researchers have focused on analyzing wind-instrument playing from a musical engineering approach, by performing experiments with simplified mechanisms (Ando, 1970; Guillemain, et al., 2010; etc.), and from a physiological point of view, by analyzing medical imaging data of professional players (Mukai, 1992; Fletcher, 2001; etc.). In this research, we particularly deal with the development of an anthropomorphic saxophone-playing robot designed to mechanically emulate the organs involved in saxophone playing. Due to the interdisciplinary nature of this research, our collaboration with musicians, musical engineers, and medical doctors will certainly contribute to better reproducing and understanding human motor control from an engineering point of view. Certainly, the performance of any musical instrument is not well defined, and it is far from a straightforward challenge due to the many different perspectives and subject areas involved.
An idealized musical robot requires many different complex systems working together, integrating musical representation, techniques, expressions, detailed control, and sensitive multimodal interactions within the context of a piece, as well as interactions between performers; and the list grows. Due to the inherently interdisciplinary nature of the topic, this research can contribute to further enhancing musical understanding, interpretation, performance, education, and enjoyment. However, if we consider using such complex mechanisms to introduce undergraduate students to the principles of robot technology, it could be difficult for them to gain hands-on experience with an anthropomorphic robot. On the other hand, the continuously falling birthrate in developed countries is resulting in a reduction in the number of students, most of whom are moving away from scientific fields. This situation may severely affect industry, which risks losing competitive power in the future due to a shortage of talented engineers. Moreover, the curricula of engineering universities currently lack practical design elements, resulting in a shortage of opportunities for promoting the creativity of students. For this purpose, several attempts to build educational robots have been made during the past few decades (Miller, et al., 2008).

DEVELOPMENT OF ANTHROPOMORPHIC MUSICAL ROBOTS

Background

During the golden era of automata, the "Flute Player" developed by Jacques de Vaucanson was designed and constructed as a means to understand the human breathing mechanism (Doyon & Liaigre, 1966). Vaucanson presented "The Flute Player" to the Academy of Science in 1738. For the occasion, he wrote a lengthy report carefully describing how his flutist could play exactly like a human. The design principle was that every single mechanism corresponded to a muscle (Vaucanson, 1979). Thus, Vaucanson arrived at those sounds by mimicking the very means by which a man would make them. Nine bellows were


attached to three separate pipes that led into the chest of the figure. Each set of three bellows was attached to a different weight to give out varying degrees of air, and all the pipes then joined into a single one, equivalent to a trachea, continuing up through the throat and widening to form the cavity of the mouth. The lips, which bore upon the hole of the flute, could open and close and move backwards or forwards. Inside the mouth was a movable metal tongue, which governed the airflow and created pauses. More recently, the "Flute Playing Machine" developed by Martin Riches was designed to play a specially made flute, somewhat in the manner of a pianola, except that all the working parts are clearly visible (Klaedefabrik, 2005). The Flute Playing Machine is composed of an alto flute, a blower, electro-magnets, and electronics. The design principle is basically transparent in a double sense: the visual scores can be easily followed, so that the visual and acoustic information is synchronized. The pieces it plays are drawn with a felt-tip pen on long transparent music rolls, which are then optically scanned by the photocells of a reading device. The machine has a row of 15 photocells, which read the felt-tip pen markings on a transparent roll. Their amplified signals operate the 12 keys of the flute and the valve, which controls the flow of air into the embouchure. The two remaining tracks may be used for regulating the dynamics or sending timing signals to a live performer during a duet. Since 1990, the authors have been focusing on the development of an anthropomorphic flutist robot designed to mechanically emulate the anatomy and physiology of the organs involved in flute playing. In 2007, the Waseda Flutist Robot No. 4 Refined IV (WF-4RIV) was developed. The WF-4RIV has a total of 41-DOFs and is composed of the following simulated organs (Solis, et al., 2008): lungs, lips, tongue, vocal cord, fingers, and other simulated organs to hold the flute (i.e., neck and arms). The lips mechanism is composed of 3-DOFs to realize accurate


control of the motion of the superior lip (control of the airstream's thickness), the inferior lip (control of the airstream's angle), and the sideway lips (control of the airstream's length). The artificial lip is made of a thermoplastic rubber named "Septon" (Kuraray Co. Ltd., Japan). The lung system is composed of two sealed acrylic cases. Each case contains a bellow, which is connected to an independent crank mechanism. The crank mechanism is controlled using an AC motor so that the robot can breathe air into the acrylic cases and breathe air out of them by controlling the speed of motion of the bellow. Finally, the vocal cord is composed of 1-DOF, and the artificial glottis is also made of Septon. In order to add vibration to the incoming air stream, a DC motor linked to a couple of gears is used. One of the first attempts to develop a saxophone-playing robot was made by Takashima at Hosei University (Takashima & Miyawaki, 2006). This robot, named APR-SX2, is composed of three main components: a mouth mechanism (a pressure-controlled oscillating valve), an air supply mechanism (the source of energy), and fingers (to make the column of air in the instrument shorter or longer). The artificial mouth consists of flexible artificial lips and a reed-pressing mechanism. The artificial lips were made of a rubber balloon filled with silicone oil of the proper viscosity. The air supplying system (lungs) consists of an air pump and a diffuser tank with a pressure control system (the supplied air pressure is regulated from 0.0 MPa to 0.02 MPa). The APR-SX2 was designed under the principle that the instrument played by the robot should not be changed. A finger mechanism was designed to play the saxophone's keys (actuated by solenoids), a modified mouth mechanism was designed to attach to the mouthpiece, and no tonguing mechanism was implemented (this function is normally reproduced by the tongue motion). The control system implemented for the APR-SX2 consists of one computer dedicated to the control of the key fingering, air pressure and flow, pitch of the tones, tonguing,

and pitch bending. In order to synchronize the entire performance, the musical data was sent to the control computer through MIDI in real time. In particular, the SMF format was selected to determine the status of the tongue mechanism (on or off), the vibrato mechanism (pitch or volume), and pitch bend (applied force on the reed). Because the APR-SX2 was developed under the condition that the instrument played by the robot should not be changed or remodeled at all, a total of twenty-three fingers (actuated by solenoids) were used to play the saxophone's keys. In contrast, the authors proposed in Solis et al. (2009b) the development of an anthropomorphic saxophonist robot as an approach to enable interaction with musical partners. Therefore, as a long-term goal, we expect the proposed saxophonist robot to be able not only to perform a melody, but also to dynamically interact with a musical partner (i.e., walking while playing the instrument, etc.). As a first result of our research, we presented the Waseda Saxophonist Robot No. 1 (WAS-1), which was composed of the 15 Degrees of Freedom (DOFs) required to play an alto saxophone (Solis, et al., 2009a). In particular, a lower lip (1-DOF), tongue (1-DOF), oral cavity, artificial lungs (air pump: 1-DOF and air flow valve: 1-DOF), and fingers (11-DOFs) were developed. Both the lips and the oral cavity were made of a thermoplastic rubber (named Septon and produced by Kuraray Co.). An improved version, the Waseda Saxophonist Robot No. 2 (WAS-2), was then presented, in which the design of the artificial lips was improved and a human-like hand was designed (Solis, et al., 2010a). Furthermore, an


Overblowing Correction Controller was implemented in order to assure a steady tone during the performance, by using the pitch feedback signal to detect the overblowing condition and by defining a recovery position to correct it (Solis, et al., 2010b). However, the range of sound pressure was still too limited to reproduce the dynamic effects of the sound (e.g., decrescendo), and deviations in the pitch were detected. Therefore, the design of the oral cavity shape was improved to expand the range of sound pressure, and potentiometers were attached to each finger to implement a dead-time compensation controller. From the control system point of view, a Pressure-Pitch Controller has been proposed to ensure accurate control of the pitch during the steady phase of the sound produced by the saxophone. Thus, in the following subsection, we describe the mechanical improvements to the oral cavity and finger mechanisms, as well as the implementation of a finger dead-time compensation controller and a Multiple-Input Multiple-Output controller to ensure accurate control of both air pressure and sound pitch.
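The overblowing-correction idea described above (detect the overblowing condition from the pitch feedback signal, then command a predefined recovery position) can be illustrated with a minimal C# sketch. This is not the robot's actual controller: the threshold ratio, lip positions, and names below are invented for the example.

```csharp
using System;

// Illustrative sketch of one overblowing-correction step: if the measured
// pitch jumps far above the target (e.g. toward an upper partial), command
// a predefined recovery lip position; otherwise keep the current position.
// All constants here are invented for the example.
class OverblowCorrector
{
    const double OverblowRatio = 1.8;   // measured/target ratio taken as overblowing
    const double RecoveryPos = 0.40;    // predefined recovery lip position (normalized)

    // Returns the lip position to command, given target and measured pitch.
    public static double Correct(double targetHz, double measuredHz, double currentPos)
    {
        if (measuredHz > targetHz * OverblowRatio)
            return RecoveryPos;          // overblowing detected: go to recovery position
        return currentPos;               // steady tone: keep the current position
    }

    static void Main()
    {
        // A measured pitch one octave above the target triggers recovery;
        // a small deviation does not.
        Console.WriteLine(Correct(440.0, 880.0, 0.55));
        Console.WriteLine(Correct(440.0, 442.0, 0.55));
    }
}
```

In the real system the decision would run inside the pitch-feedback loop; this sketch only shows the detect-and-recover branch in isolation.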

Anthropomorphic Saxophonist Robot: Mechanism Design and Control Implementation

In 2010, we developed the Waseda Saxophonist Robot No. 2 Refined (WAS-2R), which improves the shape of the oral cavity to increase the sound volume range and adds sensors to each finger to reduce the response delay. In particular, the WAS-2R is composed of 22-DOFs that reproduce the physiology and anatomy of the organs involved in saxophone playing, as follows (Figure 1): 3-DOFs to control the shape of the artificial lips, 16-DOFs for the human-like hand, 1-DOF for the tonguing mechanism, and 2-DOFs for the lung system. In addition, to improve the stability of the pitch of the produced sound, a pressure-pitch controller system has been implemented.

598

In the previous mechanism, it was possible to confirm the enhancement of the sound range produced by WAS-2 (Solis, et al., 2010a). However, we found that the note C3 could not be produced. Therefore, we analyzed in more detail the oral cavity (in particular, the gap between the palate and the tongue) of professional saxophonists while playing the instrument. For this purpose, we used an ultrasonic probe (ALOKA ProSound II, SSD-6500SV) to obtain images of the oral cavity of professional players while producing the note C4. By analyzing the obtained images, we observed that when a higher-volume sound is produced, there is a large gap between the palate and the tongue; in contrast, while producing lower-volume sounds, the gap is considerably narrowed. As a result of these measurements, a new oral cavity for the WAS-2R was designed (Figure 2). Based on the measurements obtained from the professional players' images, the sectional area was designed as 156 mm² (the previous one was 523 mm²).

Figure 1. The Waseda saxophonist robot no. 2 refined (WAS-2R)


Figure 2. Detail of the oral cavity of WAS-2R

In the previous mechanism, a human-like hand (actuated by a wire-driven mechanism) had been designed to enable the WAS-2 to push all the keys of the alto saxophone (Solis, et al., 2010a). However, due to the use of the wire-driven mechanism, a dynamic response delay (approximately 110 ms) was observed. Therefore, in order to reduce this delay, we proposed embedding sensors to measure the rotational angle of each finger (Figure 3). For this purpose, a rotary sensor (RDC506002A from Alps Co.) was embedded into each finger mechanism. In particular, each sensor was placed on a fixing mount produced by a rapid prototyping device (CONNEX 500). As a result, we were able to attach the sensing system without increasing the size of the whole mechanism. RC servo motors have been

used to control the wire-driven mechanism designed for each finger. As the end-effector, an artificial finger made of silicon was designed. In order to control the sixteen RC motors, the RS-485 serial communication protocol was used. On the other hand, the previous mouth mechanism was designed with 1-DOF to control the vertical motion of the lower lip. Based on the up/down motion of the lower lip, it became possible to control the pitch of the saxophone sound. However, it is difficult to control the sound pressure by means of 1-DOF. Therefore, the mouth mechanism of the WAS-2 consists of 2-DOFs designed to control the up/down motion of both the lower and upper lips (Figure 4a). In addition, a passive 1-DOF was implemented to modify the shape of the sideway lips. The artificial lips were also made of Septon. In particular, the arrangement of the lip mechanism is as follows: upper lip (the rotation of the motor axis is converted into vertical motion by means of a timing belt and ball screw to avoid air leakage), lower lip (a timing belt and ball screw convert the rotational movement of the motor axis into vertical motion to change the amount of pressure on the reed), and sideway lip. In order to select the motor for the mouth mechanism, the force required to press the reed and the maximum stroke of the pins embedded in the lip were considered. The target time for

Figure 3. Details of the finger mechanism of WAS-2R


Figure 4. Mechanism details of the WAS-2R: a) mouth mechanism; b) tonguing mechanism; c) lung mechanism

the positioning was set to 100 ms. In order to assure a compact design for the mechanism, a ball screw and timing belt were used. Due to the space constraints, the ball screw SG0602-45R88C3C2Y (KSS Co.) was used. The shaft diameter is 6 mm, and the lead is 2 mm. From these, the allowable axial load and allowable revolution were calculated. The requirement of the system is to move 10 mm in 100 ms; therefore, the average speed v and acceleration a are 0.1 m/s and 4 m/s², respectively. In order to move the pin attached to both sides of the lip, the total mass of the moving part is 0.05 kg. The axial load generated when the pin is pulled is given by (1), and this value is the maximum axial load applied to the ball screw. The core diameter of the ball screw is 5.1 mm; therefore, the minimum moment of inertia of area of the screw shaft is given by (2).

Fa = 8 + m·a = 8 + 0.05 × 4 = 8.2 [N]   (1)

I = (π/64)·d1⁴ = (π × 5.1⁴)/64 = 33.2 [mm⁴]   (2)

The buckling load is computed by (3), where la is the distance between the two mounting surfaces (40 mm), E is the Young's modulus (2.1×10⁵ N/mm²), and η1 is the factor according to the mounting method (2.0). As a result of the above calculations, we confirmed that the selected ball screw is safe in use.

P1 = (η1·π²·E·I/la²) × 0.5 = 12485.6 [N]   (3)

Then, we verified the critical speed. Since the reduction ratio is 1, the required motor revolution is given by (4); the sectional area S of the screw shaft is computed by (5).

Nm = (Vmax × 1000 × 60/l) × (A1/A2) = (0.20 × 1000 × 60/2) × 1 = 6000 [rpm]   (4)

S = (π × 5.1²)/4 = 20.4 [mm²]   (5)

Finally, the allowable revolution of the threaded shaft can be computed as (6), where S is the sectional area (20.4 mm²), γ is the density (7.85×10⁻⁶ kg/mm³), and λ1 is the factor according to the mounting method (3.927). From the above calculations, we confirmed that the required revolution is allowable; thus, we decided to use this ball screw.

N1 = (60·λ1²/(2π·la²)) × √(E·I·g/(γ·S)) × 0.8 = (60 × 3.927²/(2π × 40²)) × √((2.1×10⁵ × 6.21 × 9.8×10³)/(7.85×10⁻⁶ × 20.4)) × 0.8 = 1520061 [rpm]   (6)
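The ball screw sizing above can be checked numerically. The following C# sketch recomputes equations (1), (2), (4), and (5) from the values stated in the text; the class and method names are our own, and the buckling and critical-speed results (3) and (6) are omitted because they depend on tabulated factors.

```csharp
using System;

// Quick numeric check of the ball screw values in (1), (2), (4), and (5),
// using the figures given in the text; a verification sketch only.
class BallScrewCheck
{
    // (1) axial load [N]: 8 N pulling force plus the inertial term m*a
    public static double AxialLoad(double m, double a) => 8.0 + m * a;

    // (2) minimum moment of inertia of area of the screw shaft [mm^4]
    public static double MomentOfInertia(double d1) => Math.PI * Math.Pow(d1, 4) / 64.0;

    // (4) required motor revolution [rpm] for speed vmax [m/s] and lead [mm]
    public static double RequiredRpm(double vmax, double lead) => vmax * 1000 * 60 / lead;

    // (5) sectional area of the screw shaft [mm^2]
    public static double SectionalArea(double d1) => Math.PI * d1 * d1 / 4.0;

    static void Main()
    {
        Console.WriteLine($"Fa = {AxialLoad(0.05, 4.0):F1} N");       // 8.2 N
        Console.WriteLine($"I  = {MomentOfInertia(5.1):F1} mm^4");    // 33.2 mm^4
        Console.WriteLine($"Nm = {RequiredRpm(0.20, 2.0):F0} rpm");   // 6000 rpm
        Console.WriteLine($"S  = {SectionalArea(5.1):F1} mm^2");      // 20.4 mm^2
    }
}
```

The printed values reproduce the text's results for (1), (2), (4), and (5), confirming the arithmetic of the sizing steps.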

After confirming the ball screw specifications, the selection of the motor was verified. For the mouth mechanism, the RE-25 motor (Maxon Co.) was used. The rotary torque T1 required to translate rotary motion into linear motion against the external load is defined as (7), where η is the efficiency of the ball screw (0.9).

T1 = (Fa·l/(2π·η)) × A = (8.2 × 2/(2π × 0.9)) × 1 = 2.90 [N·mm]   (7)

Because the preload torque Td of the selected ball screw is 3.0–7.0 N·mm, the generated preload torque T2 is defined as (8).

T2 = Td × A = 7.0 × 1 = 7.0 [N·mm]   (8)

Considering the inertia moments of the screw shaft and of the pulley on the motor side, the inertia moment J is computed as (9), where JS is the inertia moment of the screw shaft (2.5×10⁻⁸ kg⋅m²) and JB is the inertia moment of the pulley on the motor side (9.11×10⁻⁷ kg⋅m²).

J = m⋅(l/2π)² × 10⁻⁶ + JS + JB = 0.05 × (2/2π)² × 10⁻⁶ + 2.50×10⁻⁸ + 9.11×10⁻⁷ = 9.41×10⁻⁷ [kg⋅m²]   (9)

Because the acceleration time is 0.05 s, the angular acceleration is computed as (10); therefore, the required acceleration torque T3 is given by (11).

ω̇ = (2π⋅Nm)/(60⋅t) = (2π × 6000)/(60 × 0.050) = 12566.3 [rad/s²]   (10)

T3 = J × ω̇ × 10³ = 9.41×10⁻⁷ × 12566.3 × 10³ = 11.83 [N⋅mm]   (11)

From the torques calculated above, the total required acceleration torque TK is given by (12). The effective value of the torque required from the motor is then computed as (13). As a result of these calculations, it is verified that the RE-25 motor covers the required specifications.

TK = T1 + T2 + T3 = 2.90 + 7.0 + 11.83 = 21.73 [N⋅mm]   (12)
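The torque budget of Equations (7)-(12) can likewise be reproduced numerically (a sketch; the variable names are ours):

```python
import math

# Chapter values for the RE-25 motor check (Equations (7)-(12))
Fa = 8.2        # maximum axial load [N] (Equation (1))
lead = 2.0      # ball-screw lead [mm]
eta = 0.9       # ball-screw efficiency
A = 1.0         # reduction ratio
Td = 7.0        # worst-case preload torque [N*mm]
m = 0.05        # moving mass [kg]
J_S = 2.50e-8   # inertia moment of the screw shaft [kg*m^2]
J_B = 9.11e-7   # inertia moment of the motor-side pulley [kg*m^2]
N_m = 6000      # required revolution [rpm]
t_acc = 0.05    # acceleration time [s]

T1 = Fa * lead / (2 * math.pi * eta) * A              # Eq. (7): external-load torque [N*mm]
T2 = Td * A                                           # Eq. (8): preload torque [N*mm]
J = m * (lead / (2 * math.pi))**2 * 1e-6 + J_S + J_B  # Eq. (9): total inertia [kg*m^2]
omega_dot = 2 * math.pi * N_m / (60 * t_acc)          # Eq. (10): angular accel. [rad/s^2]
T3 = J * omega_dot * 1e3                              # Eq. (11): acceleration torque [N*mm]
TK = T1 + T2 + T3                                     # Eq. (12): total required torque [N*mm]
print(T1, T2, T3, TK)
```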



Trms = √((T1²⋅t1 + T2²⋅t2 + TK²⋅t3)/t) = √((2.90² × 0.10 + 7.0² × 0.10 + 21.73² × 0.05)/(0.10 + 0.10 + 0.05)) = 5.033 [N⋅mm]   (13)

On the other hand, the tonguing mechanism is shown in Figure 4b. The motion of the tongue tip is controlled by a DC motor connected to a link attached to the motor axis. In this way, the airflow can be blocked by controlling the motion of the tongue tip. Thanks to this tonguing mechanism of the WAS-2, the attack and release of a note can be reproduced. In order to select the motor for the tongue mechanism, we assumed a response time of 20 ms. As the motor of the tongue mechanism should rotate 20 deg in 20 ms, the average angular speed is 17.45 rad/s. On the other hand, to approximate the real lingual motion speed, the maximum angular speed is 34.9 rad/s; the corresponding angular acceleration is 3490.7 rad/s². The torque required to rotate the tongue mechanism covered with SEPTON is 5.5×10⁻² N⋅m, and the inertia moment about the center of rotation of the parts rotating with the tongue is 1.19×10⁻⁵ kg⋅m². Therefore, the total torque Ttotal required for driving the tongue mechanism is computed by (14).

Ttotal = T + I⋅θ̈ = 5.5×10⁻² + 1.19×10⁻⁵ × 3490.7 = 0.09654 [N⋅m] = 96.54 [N⋅mm]   (14)
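The tongue-mechanism requirement of Equation (14) follows from the numbers above. The sketch below assumes a triangular velocity profile (peak speed twice the average, reached in half the response time), which reproduces the chapter's figures:

```python
import math

# Tongue-mechanism requirement from the chapter: rotate 20 deg in 20 ms
angle = math.radians(20)     # required rotation [rad]
t_resp = 0.020               # response time [s]
omega_avg = angle / t_resp           # average angular speed [rad/s]
omega_max = 2 * omega_avg            # peak speed approximating lingual motion [rad/s]
alpha = omega_max / (t_resp / 2)     # angular acceleration [rad/s^2]

T_load = 5.5e-2              # torque to rotate the SEPTON-covered tongue [N*m]
I_rot = 1.19e-5              # inertia moment about the rotation center [kg*m^2]
T_total = T_load + I_rot * alpha     # Equation (14) [N*m]
print(omega_avg, omega_max, alpha, T_total)
```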

As a result of the calculations above, and because of the motor size, the motor RE-30 (Maxon Co.) was selected for the tongue mechanism. Regarding the WAS-2R's air source, a DC servo motor has been used to control the motion of the air pump diaphragm, which is connected to an eccentric crank mechanism (Figure 4c). This mechanism has been designed to provide a minimum airflow of 20 L/min and a minimum pressure of 30 kPa. In addition, a DC servo motor has been designed to control the motion of an air valve so that the air delivered by the air pump is effectively rectified. In order to select the motor for the lung mechanism, the requirement specification was based on the maximum oral cavity pressure (8 kPa) and on the external force F computed by (15), where Fa is the inertial force, Fk the spring force, and Fp the pressure force. The force Fl applied to the motor arm is then computed by (16), where θ is the angle of rotation and φ is the angle of the arm. Finally, based on the motor load torque T given by (17), where r is the arm length, the motor RE-30 (Maxon Co.) has been selected.

F = Fa + Fk + Fp   (15)

Fl = F⋅sin(φ + θ)/cos φ   (16)

T = Fl⋅r   (17)
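Equations (15)-(17) can be wrapped into a small helper; the numeric values below are illustrative only, since the chapter does not list the actual forces and geometry:

```python
import math

def motor_load_torque(F_a, F_k, F_p, theta, phi, r):
    """Equations (15)-(17): load torque on the lung-mechanism motor.

    F_a, F_k, F_p: inertial, spring, and pressure forces [N]
    theta: rotation angle [rad], phi: arm angle [rad], r: arm length [m]
    """
    F = F_a + F_k + F_p                              # Eq. (15): external force
    F_l = F * math.sin(phi + theta) / math.cos(phi)  # Eq. (16): force on the motor arm
    return F_l * r                                   # Eq. (17): load torque [N*m]

# Hypothetical sample values, for illustration only:
T = motor_load_torque(F_a=2.0, F_k=3.0, F_p=5.0, theta=0.2, phi=0.3, r=0.05)
print(T)
```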

Regarding the control system, in our previous research a feed-forward air pressure controller with dead-time compensation was implemented to ensure the accurate control of the air pressure during the attack time (Solis, et al., 2010b), and a simple ON/OFF controller was implemented for the control of the finger mechanism. In particular, feedback error learning during the attack phase of the sound was used to create the inverse dynamics model of the Multiple-Input Single-Output (MISO) controlled system based on Artificial Neural Networks (ANN). In addition, an Overblowing Correction Controller (OCC) was proposed and implemented in order to ensure a steady tone during the performance, using the pitch feedback signal to detect the overblowing condition and defining a recovery position (offline) to correct it (Solis, et al., 2010b). However, we still detect deviations in the pitch while playing the saxophone.


Therefore, we proposed the implementation of the control system shown in Figure 5a. In particular, the improved control system includes a dead-time compensation controller for the finger mechanism (to reduce the effect of the response delay due to the wire-driven mechanism) and a Pressure-Pitch Controller (PPC) for the control of the valve and lip mechanisms (to assure the accurate control of the pitch). Regarding the implementation of the dead-time compensation control, for each finger of the WAS-2R the pressing time of the saxophone's key is measured by means of the embedded potentiometer sensor (defined as LN, where N represents the total number of DOFs designed for the finger mechanism). By including the dead-time factor (referred to as e^(−sL)), it is possible to compensate the finger's response delay during saxophone playing (Kim, et al., 2003). As for the implementation of the control system, a pressure-pitch controller acting during the sustain phase of the sound has been proposed, not only to ensure the accurate control of the air pressure during the attack phase, but also to ensure the accurate control of both the air pressure and the sound pitch during the sustain phase. For this purpose, we implemented a feedback error learning method (Kawato & Gomi, 1992) to create the inverse model of the proposed Multiple-Input Multiple-Output (MIMO) system, which is computed by means of an ANN. During the training process, the inputs of the ANN are defined as follows (Figure 5b): the pressure reference (PressureREF) and the pitch reference (PitchREF). In this

Figure 5. Detail of the control system implemented for the WAS-2R: a) block diagram of the improved control system; b) detail of the ANN during the learning phase based on the feedback error learning method



case, a total of six hidden units were used (determined experimentally by varying the number of hidden units). As outputs, the positions of the air valve (ΔValve) and lower lip (ΔLip) are controlled to ensure the accurate control of the air pressure and pitch required to produce the saxophone sound. Moreover, during the training phase, the air pressure (PressureRES) and sound pitch (PitchRES) are used as feedback signals, and both outputs from the feedback controller are used as teaching signals for effectively training the ANN. As a result of the training phase, the created inverse model is used during a saxophone playing performance.
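The feedback error learning scheme described above can be sketched numerically. The following is our own illustration, not the authors' implementation: the linear two-input/two-output "plant" standing in for the valve/lip-to-pressure/pitch dynamics, the proportional feedback gain, and the learning rate are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
# ANN: 2 inputs (PressureREF, PitchREF), 6 hidden units, 2 outputs (dValve, dLip)
W1 = rng.normal(0.0, 0.1, (6, 2)); b1 = np.zeros(6)
W2 = rng.normal(0.0, 0.1, (2, 6)); b2 = np.zeros(2)

def ann(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

# Hypothetical linear plant: (valve, lip) commands -> (pressure, pitch) responses
PLANT = np.array([[0.8, 0.1],
                  [0.2, 0.9]])

ref = np.array([1.0, 0.5])     # [PressureREF, PitchREF] (arbitrary units)
res = np.zeros(2)              # [PressureRES, PitchRES]
kp, lr = 0.5, 0.2
for _ in range(144):           # same number of training steps as the chapter
    u_fb = kp * (ref - res)            # feedback controller output = teaching signal
    u_ff, h = ann(ref)                 # feed-forward command from the ANN
    res = PLANT @ (u_ff + u_fb)        # plant response to the summed command
    # Backpropagate with u_fb as the output error (feedback error learning)
    delta_h = (W2.T @ u_fb) * (1.0 - h**2)
    W2 += lr * np.outer(u_fb, h); b2 += lr * u_fb
    W1 += lr * np.outer(delta_h, ref); b1 += lr * delta_h

print(np.linalg.norm(u_fb))    # shrinks toward zero as the ANN absorbs the inverse model
```

As the ANN converges to the plant's inverse, the feedback controller's contribution fades, which is the defining property of feedback error learning.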

Musical Performance

In order to verify whether the re-designed shape of the oral cavity contributes to extending the range of sound pressure, we compared the previous mechanism with the new one while playing the notes from C3 to C5. The average sound pressure ranges for the WAS-2R and WAS-2 are 17.7 dB and 9.69 dB, respectively; an intermediate player and a professional player reach 13.2 dB and 22.6 dB, respectively. From this result, we confirmed an increment of 83% thanks to the new shape of the oral cavity. Therefore, we could conclude that the shape of the gap between the palate and tongue has a strong influence on the sound pressure range. Thanks to this considerable improvement in the range of sound pressure, we proposed to compare the reproduction of the decrescendo, a dynamic effect that gradually reduces the loudness of the sound. For this purpose, we programmed the WAS-2 and WAS-2R to play the principal theme of the "Moonlight Serenade" composed by Glenn Miller. The experimental results are shown in Figure 6a. As we may observe, the WAS-2R was able to reproduce the effect nearly as well as the professional performance.

On the other hand, in order to determine the effectiveness of the proposed pressure-pitch controller in reducing the pitch deviations while playing the saxophone, we programmed the WAS-2R to play the main theme of the "Moonlight Serenade" before and after training the inverse model. As for the neural network parameters, a total of 6 hidden units were used, and the training process comprised a total of 144 steps. The experimental results are shown in Figure 6b, where 1 [cent] is defined as (equi-tempered semitone/100). As we could observe, the deviations of the pitch after the training (Standard Error of 41.7) are considerably less than before training (Standard Error of 2372.8).
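The cent scale used in Figure 6b converts a frequency ratio into a pitch deviation; a minimal helper:

```python
import math

def cents(f, f_ref):
    """Pitch deviation in cents: 1 cent = 1/100 of an equi-tempered semitone."""
    return 1200.0 * math.log2(f / f_ref)

# One equi-tempered semitone above A4 (440 Hz) is exactly 100 cents:
semitone_up = 440.0 * 2 ** (1 / 12)
print(round(cents(semitone_up, 440.0)))   # 100
```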


Figure 6. Experimental results: a) reproduction of decrescendo effect; b) comparing the deviations of the pitch before and after training the inverse model of the proposed MIMO system with the WAS-2R.


DEVELOPMENT OF EDUCATIONAL ROBOTS

Background

Even though several universities and companies have been building robotic platforms for educational purposes, we may observe that there is still no platform designed to intuitively introduce the principles of RT from the fundamentals to their application to solving real-world problems. In fact, most of the current educational platforms focus on providing the basic components that enable students to build their own systems. However, such platforms are used merely to introduce basic control methods (i.e., sequential control), basic programming (i.e., flow chart design, the C language), and basic mechanism design. As an approach to cover different aspects of Robot Technology, in this project we focused on developing an educational tool designed to introduce, at different educational levels, the principles of developing mechatronic systems. In particular, the development of an inverted pendulum mobile robot has been proposed. In fact, the inverted pendulum has been the subject of numerous studies in automatic control (Grasser, et al., 2002; Salerno & Angeles, 2007; Koyanagi, et al., 1992; Kim, et al., 2003; Pathak, et al., 2005), introduction to mechatronics (Solis & Takanishi, 2009), etc.

Several attempts to build educational robots have been made during the past few decades (Miller, et al., 2008). In fact, the development of educational robots started in the early 1980s with the introduction of the Heathkit HERO-1 (Heath Co.). This robot was designed to encourage students to learn how robots are built; however, no information on the theory or principles behind the assembly was given. More recently, several other companies, in cooperation with universities and research centers, have been trying to introduce educational robots to the market. Some examples are as follows: K-Team (K-TEAM Ltd.) introduced the Hemisson, a low-cost educational robot designed to provide an introduction to robot programming by using reduced computational power and few sensors. Another example is the LEGO® Mindstorms RCX, a good tool for early and fast robot design using LEGO blocks (LEGO Ltd.). In Japan, we can also find examples such as the RoboDesigner kit, designed to provide a general platform that enables students to build their own robots (Japan Robotech Ltd.), and ROBOVIE-MS from ATR Robotics, designed as an educational tool to introduce the principles of mechanical manufacturing, assembly, and operational programming of a small-sized humanoid robot. From the perspective of introducing RT to undergraduate students, such a platform is a good example for providing experience in control design, signal processing, distributed control systems, and the consideration of real-time constraints for real applications. However, most of the currently proposed robots do not consider educational issues while designing the inverted pendulum (i.e., the possibility of changing the center of mass, etc.). In addition, the authors consider it important to introduce human-robot interaction to motivate students' further interest (i.e., the size of the robot should fit the size of a personal mobile computer, etc.). Therefore, the authors have proposed the development of a two-wheeled inverted-pendulum-type mobile robot designed to cover the basic principles of electronics, mechanical engineering, and programming, as well as more advanced topics in control engineering, complex programming, and embedded systems. As a result of our research, the Waseda Wheeled Vehicle No. 2 Refined (WV-2R) has been introduced (Solis, et al., 2009c). In particular, the WV-2R has been designed to enable students to verify the changes in the response of the robot while varying some of its physical parameters.
From the experimental results, we confirmed some of the educational functions of the proposed robot (i.e., PID tuning, varying the center of mass, etc.). However, a hand-made control



board was used, and several problems with the wire connections were detected. Furthermore, the WV-2R did not include any additional mechanism for proposing different kinds of robot contests. Finally, from our discussions with undergraduate students, we found that the development of a simulator could considerably increase their knowledge.

Figure 7. The Waseda wheeled vehicle no. 2 refined II (WV-2RII).

Two-Wheeled Inverted Pendulum Robot: Mechanism Design and Control Implementation

In 2010, the Waseda Wheeled Vehicle Robot No. 2 Refined II (WV-2RII) was developed as an educational robot designed to implement different educational issues to introduce undergraduate students to the principles of RT (Figure 7). The specifications are shown in Table 1. The WV-2RII is composed of two actuated wheels, a general-purpose control board (Figure 8a), an adjustable weighting bar attached to the pendulum, gyro and accelerometer sensors, a remote controller (Figure 8b), and two optional mechanisms that can be easily attached to or detached from the main body of the robot. In particular, the general-purpose control board consists of a 32-bit ARM microcontroller, 10 general I/O ports, 2 motor drivers, an LCD display, 8 LEDs, a ZigBee module, and 2 servo connectors. The WV-2RII is endowed with two active wheels actuated by DC motors. The model description is shown in Figure 9, where the parameters are defined as follows:

θ: Tilt angle of the chassis
φ: Rotation angle of the wheel
m1: Mass of the chassis
m2: Mass of the wheel
J1: Moment of inertia of the chassis
J2: Moment of inertia of the wheel
l: Distance between the wheel axis and the robot's center of mass
r: Wheel radius


Table 1. Specifications of the WV-2RII

Height: 530 mm
Weight: 3.8 kg
DOFs: 2
Microcontroller: STM32F103VB × 1
Sensors: Accelerometer × 1; Rate gyro × 1; Optical encoder × 2
Motors: RDO-37BE50G9 (12 V) × 2
Power supply: Battery 6 V × 1; RC battery 12 V × 1
Remote controller: ZigBee (2.4 GHz)

Figure 8. a) general-purpose control board; b) remote controller for the WV-2RII


Figure 9. The model of the two-wheeled inverted pendulum robot

By using the above parameters, and by defining T as the motor torque, n as the reduction ratio of the gear, and S as the frictional force on the wheel along the horizontal ground plane (where fx and fy are the components of the force acting between the wheel and the pendulum at the center of the wheel), we may write Equations (18)-(23):

m2⋅x″ = S − fx   (18)

J2⋅φ″ = nT − rS   (19)

m1⋅(x″ + (l⋅sin θ)″) = fx   (20)

m1⋅(l⋅cos θ)″ = −m1⋅g + fy   (21)

J1⋅θ″ = fy⋅l⋅sin θ − fx⋅l⋅cos θ − nT   (22)

x = r⋅φ   (23)

Equations (24) and (25) follow from the above equations upon elimination of the intermediate variables fx, fy, and S. From Equation (24), we may notice that when the angular acceleration of the body is less than zero, it is possible to correct the vertical inclination of the body back to the upright position.

φ″ = (nT − m1⋅r⋅l⋅(θ″⋅cos θ − θ′²⋅sin θ))/(J2 + r²⋅m1 + r²⋅m2)   (25)
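The model of Equations (18)-(25) can be exercised numerically. The sketch below (our own, using the WV-2RII parameters quoted in this section) integrates the chassis dynamics with zero motor torque and shows that a small initial tilt diverges, which is why feedback control is required:

```python
import math

# WV-2RII physical parameters (from the chapter)
m1, m2 = 2.247, 0.800        # chassis and wheel masses [kg]
J1, J2 = 0.015, 0.002        # chassis and wheel inertias [kg*m^2]
l, r, g = 0.0477, 0.0725, 9.81

def theta_ddot(theta, theta_dot, nT):
    """Equation (24): angular acceleration of the chassis."""
    D = J2 / r + r * m1 + r * m2
    num = (m1 * g * l * math.sin(theta)
           - m1 * l * math.cos(theta) / D
             * (nT + m1 * r * l * theta_dot**2 * math.sin(theta))
           - nT)
    den = J1 + m1 * l**2 - (m1**2 * r * l**2 * math.cos(theta)**2) / D
    return num / den

# Euler integration with zero torque: the upright pendulum falls over
theta, theta_dot, dt = 0.05, 0.0, 1e-3
for _ in range(200):                     # 0.2 s
    theta_dot += theta_ddot(theta, theta_dot, nT=0.0) * dt
    theta += theta_dot * dt
print(theta)    # the tilt has grown beyond the initial 0.05 rad
```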

Equation 24:

θ″ = (m1⋅g⋅l⋅sin θ − (m1⋅l⋅cos θ)/(J2/r + r⋅m1 + r⋅m2) ⋅ (nT + m1⋅r⋅l⋅θ′²⋅sin θ) − nT) / (J1 + m1⋅l² − (m1²⋅r⋅l²⋅cos²θ)/(J2/r + r⋅m1 + r⋅m2))

If we define the maximum tilt angle of the chassis as 50 degrees and substitute the physical parameters of the WV-2RII (m1 = 2.247 kg; m2 = 0.800 kg; J1 = 0.015 kg⋅m²; J2 = 0.002 kg⋅m²; g = 9.81 m/s²; l = 0.0477 m; r = 0.0725 m) into Equation (24), we obtain the following relation:

∴ nT ≥ 2.2 [N⋅m]

Based on the above relation, we selected the motor RDO-37BE50G9 (stall torque 0.160 N⋅m, gear ratio 9:1). If we consider a coefficient of safety of 0.8 for the power generated by the two motors, then nT is 2.3 N⋅m, which satisfies the required specification.

On the other hand, as previously introduced, we have developed two additional mechanisms that can be easily attached to the main body of the WV-2RII: a kicking mechanism for soccer (Figure 10a) and an arm mechanism for sumo (Figure 10b). The soccer-kicking mechanism is composed of a spring, a hook, a stopper, and a DC motor. In order to kick the ball, a tension spring is used to increase the speed of movement of the kicking mechanism (maximum output load of 22 N). Basically, the kicking mechanism is attached to a hook which is displaced until, at a certain point, the hook is automatically released by a stopper, and the reaction force accumulated in the spring is used to kick the ball. On the other hand, the sumo-arm mechanism is composed of a slider-crank mechanism actuated by a DC motor, an arm base actuated by an RC motor to adjust the pitch of the whole arm mechanism, and a pushing plate with embedded switches for detecting contact with the opponent. In the gear wheels of the slider of the crank mechanism, a fixed rack and a movable rack are used: the rotational motion of the crank is transmitted to the gear wheels, and the movable rack moves at twice the stroke of the fixed rack. From this, the

arm mechanism provides a large stroke (around 88 mm) with a compact mechanism.

As a further example of the application of the WV-2RII for showing the potential of the proposed system, a female undergraduate student (with a mechanical engineering background) on an internship at Waseda University was asked to design an upper body with an appearance and gestures that are appealing to children, using this robot as a base. For this purpose, we asked the student to design the upper body mechanism and to develop the commands required for controlling it from the remote controller integrated with the WV-2RII. The mechanism designed by the internship student is shown in Figure 11a. The proposed upper body uses 4 RC motors to control the motion of the head (2-DOFs) and arms/wings (2-DOFs), lending more expression to the robot. The possible motions realizable by the upper body are shown in Figure 11b.

In Figure 12, the block diagram of the control system implemented for the WV-2RII is shown. As we may observe, the WV-2RII is controlled by a feedback control system. In particular, the rate gyro measures the body angular velocity (θ′) and the encoder measures the wheel rotation angle (φ). Because the drift in the signal obtained from the gyro is extremely small, the use of a high-pass filter is not required; therefore, a low-pass filter is only used to compute the

Figure 10. Detail of the additional mechanisms designed for WV-2RII: a) soccer-kicking mechanism; b) sumo-arm mechanism



Figure 11. Pictures of the possible motions of the upper body mounted on the WV-2RII

body angular velocity (θ′); the cut-off frequency is 0.32 Hz. To obtain the body angle and the wheel angular velocity, the body angular velocity and the wheel angle are integrated and differentiated, respectively. In order to control all the parameters, a feedback controller has been implemented using Equation (26), where the parameters k1-k6 are the gain coefficients of the controller, which are tuned to assure the stabilization of the system. Furthermore, a current feedback controller has been implemented as Equation (27), where the parameters k7 and k8 are tuned to assure the accurate control of the command current to each motor. As for the command control signals, θREF, φ′REF, and α′REF are set to zero, while the other commands are sent by the remote controller.
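The signal chain just described (low-pass filtering and integrating the gyro rate, differentiating the encoder angle) can be sketched as follows; the control period is an assumption of ours, as only the 0.32 Hz cut-off is stated in the chapter:

```python
import math

class StateEstimator:
    """Body angle and wheel rate estimation from gyro and encoder samples."""

    def __init__(self, fc=0.32, dt=0.001):
        # First-order IIR low-pass coefficient for cut-off frequency fc [Hz]
        self.alpha = dt / (dt + 1.0 / (2.0 * math.pi * fc))
        self.dt = dt            # control period [s] (assumed)
        self.rate = 0.0         # filtered body angular velocity [rad/s]
        self.theta = 0.0        # body angle (integral of filtered rate) [rad]
        self.phi_prev = 0.0     # previous wheel angle sample [rad]

    def update(self, gyro_rate, phi):
        """One control step: filter + integrate the gyro, differentiate the encoder."""
        self.rate += self.alpha * (gyro_rate - self.rate)   # low-pass filter
        self.theta += self.rate * self.dt                   # body angle
        phi_dot = (phi - self.phi_prev) / self.dt           # wheel angular velocity
        self.phi_prev = phi
        return self.theta, self.rate, phi_dot
```

Feeding a constant gyro rate and a ramping encoder angle, the filtered rate settles on the true rate and the finite difference recovers the constant wheel velocity.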

Equations 26 and 27:

ioutR = k1⋅θ + k2⋅θ′ + k3⋅(φ − φREF) + k4⋅(φ′ − φ′REF) + k5⋅(α − αREF) + k6⋅(α′ − α′REF)
ioutL = k1⋅θ + k2⋅θ′ + k3⋅(φ − φREF) + k4⋅(φ′ − φ′REF) − k5⋅(α − αREF) − k6⋅(α′ − α′REF)   (26)

uR = k7⋅(ioutR − iR)
uL = k8⋅(ioutL − iL)   (27)

Figure 12. Control block diagram implemented for the WV-2RII

Control Stability

In order to verify the robustness of the controller implemented for the WV-2RII, we placed the pendulum horizontally on the ground with the control deactivated. From this starting position, we activated the control system and set the vertical position (90 degrees) as the control goal. From this experiment, we may observe the dynamic response of the WV-2RII by analyzing the measured body angle θ and motor current. The experimental results are shown in Figure 13. As we may observe, the WV-2RII requires around 0.8 s to reach the target position, and a maximum current of 3 A is required (the current circuit has been designed to support peak currents of up to 7 A).
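The feedback law of Equations (26) and (27) can be written as a small function; the gains and state values below are illustrative only, since the tuned coefficients are not listed in the chapter:

```python
def wheel_commands(state, refs, k):
    """Equation (26): current commands for the right/left wheel motors.

    state: (theta, theta_dot, phi, phi_dot, alpha, alpha_dot) measured values
    refs:  (phi_ref, phi_dot_ref, alpha_ref, alpha_dot_ref) command values
    k:     gain coefficients k1..k8 as a tuple (k[0] = k1, ...)
    """
    theta, theta_dot, phi, phi_dot, alpha, alpha_dot = state
    phi_ref, phi_dot_ref, alpha_ref, alpha_dot_ref = refs
    common = (k[0] * theta + k[1] * theta_dot
              + k[2] * (phi - phi_ref) + k[3] * (phi_dot - phi_dot_ref))
    steer = k[4] * (alpha - alpha_ref) + k[5] * (alpha_dot - alpha_dot_ref)
    return common + steer, common - steer      # (i_outR, i_outL)

def current_loop(i_out, i_meas, k_i):
    """Equation (27): inner current feedback loop for one motor."""
    return k_i * (i_out - i_meas)

# Hypothetical gains and state, for illustration only:
i_r, i_l = wheel_commands(
    state=(0.1, 0.0, 0.2, 0.0, 0.05, 0.0),
    refs=(0.0, 0.0, 0.0, 0.0),
    k=(10.0, 1.0, 2.0, 0.5, 3.0, 0.3, 0.8, 0.8))
print(i_r, i_l)
```

Note how the α terms enter with opposite signs on the two wheels, producing a differential (steering) component on top of the common balancing command.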

FUTURE RESEARCH DIRECTIONS

Conventionally, anthropomorphic musical robots are mainly equipped with sensors that allow them to acquire information about their environment. Based on the anthropomorphic design of humanoid robots, it is therefore important to emulate two of the human's most important perceptual organs: the eyes and the ears. For this purpose, the humanoid robot integrates vision sensors in its head and aural sensors attached to its sides for stereo-acoustic perception. In the case of a musical interaction, a major part of a typical performance (i.e., jazz) is based on improvisation. In these parts, musicians take turns playing solos based on the harmonies and rhythmical structure of the piece. Upon finishing a solo section, one musician gives a visual signal, a motion of the body or of the instrument, to designate the next soloist. Toward enabling multimodal interaction between musicians and musical robots, a Musical-based Interaction System (MbIS) will be integrated on the Waseda Saxophonist Robot (Figure 14a). The MbIS has been conceived for enabling the interaction between the musical robot and musicians (Petersen, et al.,

Figure 13. Experimental results while programming the WV-2RII to rise from the ground by analyzing the body angle and the applied motor current



Figure 14. a) proposed musical-based interaction system; b) two-wheeled double inverted pendulum

2010). Even though the WAS-2R still requires several improvements from the mechanical and control points of view, we expect that the robot can be used in practical applications such as the entertainment of elderly people, the reproduction of performances by famous saxophone players who have passed away, and the education of young players.

On the other hand, in order to introduce interactive educational robotic systems, the educational platform (both for university students and for engineers in industry) must be designed to cover the basic principles of electronics, mechanics, and programming, as well as more advanced topics in control, advanced programming, and human-robot interaction. Moreover, to enhance the entertainment aspect, the educational platform could also include some aspects of art (i.e., music, etc.) to teach other basics such as signal processing (i.e., music information retrieval, etc.), recognition systems (i.e., Hidden Markov Models, etc.), game design (i.e., audio/motion design), etc. Further challenges in the dynamic control of a two-wheeled double inverted pendulum robot can also be conceived (Figure 14b). Based on this approach, the platform could be used in classes beyond the classical Electrical, Mechanical, and Mechatronics Engineering curricula, including Music Engineering (Martin, et al., 2009; Yanco, et al., 2007). The WV-2RII is now being commercialized as "MiniWay" by Japan Robotech Ltd. Even though this robot has been designed as an educational robot, it is possible to conceive (with some mechanical and control design modifications) different kinds of practical applications, such as baggage transportation within an airport, guidance for visitors, or the entertainment of children at museums.

CONCLUSION

In this chapter, the mechanism design and control implementation proposed for two different human-friendly robotic platforms have been introduced.



In particular, the developments of an anthropomorphic saxophonist robot and of a two-wheeled inverted pendulum robot have been detailed. The saxophonist robot has been designed to reproduce the organs involved in saxophone playing, and a feed-forward controller has been implemented in order to accurately control both the air pressure and the sound pitch during a musical performance. On the other hand, the two-wheeled inverted pendulum robot has been designed to introduce the principles of robot technology at different educational levels, and a feedback controller has been implemented in order to assure the stability of the inverted pendulum.

ACKNOWLEDGMENT

Part of the research on the Waseda Saxophonist Robot and the Waseda Vehicle Robot was done at the Humanoid Robotics Institute (HRI), Waseda University, and at the Center for Advanced Biomedical Sciences (TWIns). This research was supported (in part) by a Grant-in-Aid for the WABOT-HOUSE Project by Gifu Prefecture. This work was also supported (in part) by the Global COE Program "Global Robot Academia" from the Ministry of Education, Culture, Sports, Science, and Technology of Japan. Finally, the study on the Waseda Saxophonist Robot was supported (in part) by a Grant-in-Aid for Young Scientists (B) provided by the Japanese Ministry of Education, Culture, Sports, Science, and Technology, No. 23700238 (J. Solis, PI).

Dannenberg, R. B., Brown, B., Zeglin, G., & Lupish, R. (2005). McBlare: A robotic bagpipe player. In Proceedings of the International Conference on New Interfaces for Musical Expression, (pp. 80-84). ACM.

de Vaucanson, J. (1979). Le mécanisme du fluteur automate: An account of the mechanism of an automaton: Or, image playing on the German-flute. In Vester, F. (Ed.), The Flute Library: First Series No. 5. Dordrecht, The Netherlands: Uitgeverij Frits Knuf.

Degallier, S., Santos, C. P., Righetti, L., & Ijspeert, A. (2006). Movement generation using dynamical systems: A humanoid robot performing a drumming task. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, (pp. 512-517). IEEE.

Doyon, A., & Liaigre, L. (1966). Jacques Vaucanson: Mécanicien de génie. Paris, France: PUF.

Fletcher, N., Hollenberg, L., Smith, J., & Wolfe, J. (2001). The didjeridu and the vocal tract. In Proceedings of the International Symposium on Musical Acoustics, (pp. 87-90). ACM.

Grasser, F., D'Arrigo, A., Colombi, S., & Rufer, A. (2002). JOE: A mobile, inverted pendulum. IEEE Transactions on Industrial Electronics, 49(1), 107–114. doi:10.1109/41.982254

Guillemain, P., Vergez, C., Ferrand, D., & Farcy, A. (2010). An instrumented saxophone mouthpiece and its use to understand how an experienced musician plays. Acta Acoustica, 96(4), 622–634. doi:10.3813/AAA.918317

REFERENCES

Heath Company. (2011). Website. Retrieved from http://www.hero-1.com/broadband/.

Ando, Y. (1970). Drive conditions of the flute and their influence upon harmonic structure of generated tone. Journal of the Acoustical Society of Japan, 297-305.

Japan Robotech Ltd. (2011). Website. Retrieved from http://www.japan-robotech.com/eng/index. html.


K-TEAM. (2011). Website. Retrieved from http:// www.k-team.com.


Kato, I., Ohteru, S., Kobayashi, H., Shirai, K., & Uchiyama, A. (1973). Information-power machine with senses and limbs. In Proceedings of the CISMIFToMM Symposium on Theory and Practice of Robots and Manipulators, (pp. 12-24). ACM.

Miller, D., Nourbakhsh, I., & Siegwart, R. (2008). Robots for education. In Siciliano, B., & Khatib, O. (Eds.), Springer Handbook of Robotics (pp. 1287–1290). Berlin, Germany: Springer. doi:10.1007/978-3-540-30301-5_56

Kawato, M., & Gomi, H. (1992). A computational model of four regions of the cerebellum based on feedback-error-learning. Biological Cybernetics, 68, 95–103. doi:10.1007/BF00201431

Mukai, S. (1992). Laryngeal movement while playing wind instruments. In Proceedings of International Symposium on Musical Acoustics, (pp. 239–241). ACM.

Kim, H., Kim, K., & Young, M. (2003). On-line dead-time compensation method based on time delay control. IEEE Transactions on Control Systems Technology, 11(2), 279–286. doi:10.1109/ TCST.2003.809251

Pathak, K., Franch, J., & Agrawal, S. K. (2005). Velocity and position control of a wheeled inverted pendulum by partial feedback linearization. IEEE Transactions on Robotics, 21(3), 505–513. doi:10.1109/TRO.2004.840905

Kim, Y. H., Kim, S. H., & Kwak, Y. K. (2003). Dynamic analysis of a nonholonomic two-wheeled inverted pendulum robot. In Proceedings of the Eighth International Symposium on Artificial Life and Robotics, (pp. 415-418). ACM.

Petersen, K., Solis, J., & Takanishi, A. (2010). Musical-based interaction system for the waseda flutist robot: Implementation of the visual tracking interaction module. Autonomous Robots Journal, 28(4), 439–455. doi:10.1007/s10514-010-9180-5

Klaedefabrik, K. B. (2005). Martin Riches: Maskinerne / The Machines. Berlin, Germany: Kehrer Verlag.

Salerno, A., & Angeles, J. (2007). A new family of two wheeled mobile robot: Modeling and controllability. IEEE Transactions on Robotics, 23(1), 169–173. doi:10.1109/TRO.2006.886277

Koyanagi, E., Iida, S., & Yuta, S. (1992). A wheeled inverse pendulum type self-contained mobile robot and its two-dimensional trajectory control. In Proceedings of ISMCR, (pp. 891-898). ISMCR.

Kuraray Co. (2011). Website. Retrieved from http://www.kuraray.co.jp/en/.

LEGO. (2011). Website. Retrieved from http://mindstorms.lego.com/.

Martin, F., Greher, G., Heines, J., Jeffers, J., Kim, H. J., & Kuhn, S. (2009). Joining computing and the arts at a mid-size university. Journal of Computing Sciences in Colleges, 24(6), 87–94.

Solis, J., Ninomiya, T. N., Petersen, K., Takeuchi, M., & Takanishi, A. (2009a). Development of the anthropomorphic saxophonist robot WAS-1: Mechanical Design of the simulated organs and implementation of air pressure. Advanced Robotics Journal, 24, 629–650. doi:10.1163/016918610X493516 Solis, J., Petersen, K., Ninomiya, T., Takeuchi, M., & Takanishi, A. (2009b). Development of anthropomorphic musical performance robots: From understanding the nature of music performance to its application in entertainment robotics. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, (pp. 2309-2314). IEEE Press.

613

Human-Friendly Robots for Entertainment and Education


KEY TERMS AND DEFINITIONS

Anthropomorphic Musical Robots: A robot designed to reproduce the organs involved in playing a musical instrument, capable of displaying both motor dexterity and intelligence.

Bio-Inspired Robotics: A robot that mechanically emulates or simulates living biological organisms.

Education Robots: A robot used by students, composed of low-cost components commonly found on any robotic platform.

Feed-Forward Error Learning: A computational theory of supervised motor learning that can be used as a training method to compute the inverse dynamics model of the controller system.

Human-Friendly Robotics: A research field focused on the development of new methodologies for the design, control, and safe operation of robots designed to naturally and intuitively interact, communicate, and work with humans as partners.

Humanoid Robots: A robot designed to reproduce the human body in order to interact naturally with human partners within the human environment.

Inverted Pendulum Robot: A robot composed of an inverted pendulum attached to a mobile base equipped with motors that drive it along a horizontal plane.

This work was previously published in Service Robots and Robotics: Design and Application, edited by Marco Ceccarelli, pp. 130-153, copyright 2012 by Information Science Reference (an imprint of IGI Global).


Chapter 35

Dual-SIM Phones: A Disruptive Technology?

Dickinson C. Odikayor, Landmark University, Nigeria
Ikponmwosa Oghogho, Landmark University, Nigeria
Samuel T. Wara, Federal University Abeokuta, Nigeria
Abayomi-Alli Adebayo, Igbinedion University Okada, Nigeria

ABSTRACT

Dual-SIM mobile phones use technology that permits two SIMs to be held in one phone, giving simultaneous access to the services of two mobile networks. The technology's disruptive character is examined with reference to the mobile phone market in Nigeria and other parts of the world. The earlier market trend was an inclination toward "newer" and "better" phones, in favour of established single-SIM manufacturers such as Nokia and Samsung. The introduction of dual-SIM phones, manufactured mainly by Chinese firms, shifted user preference toward phones that permit dual and simultaneous network access, and the technology has compelled its adoption by established manufacturers so that they may remain competitive. It is a clear case of a disruptive technology, and this chapter focuses on its need, effects, and disruptive nature.

DOI: 10.4018/978-1-4666-1945-6.ch035

1.0 INTRODUCTION

Christensen (1997) coined the term "disruptive technology" in his book The Innovator's Dilemma. Such technologies surprise the market by generating a considerable improvement over existing technology, and this can be attained in a number of ways. A disruptive technology may be less expensive and less complicated than the existing technology yet attract more potential users (www.wisegeek.com); at other times it may be expensive and complicated, requiring highly skilled personnel and infrastructure to implement. Two types of technology change have shown different effects on industry leaders. Sustaining technology sustains the rate of improvement in a



product's performance in the industry. Dominant industry firms are always at the fore in developing and adopting such technologies. Disruptive technology changes or disrupts that performance path and frequently results in the failure of industry-leading firms. Few technologies are inherently disruptive or sustaining; it is the impact created by the strategy or business model that the technology enables that is disruptive (Christensen & Raynor, 2003). The advent of the Global System for Mobile communication (GSM) resulted in a major communication leap worldwide, and the mobile phone became an indispensable electronic gadget defining the modern world (Sally, Sebire, & Riddington, 2010). Mobile phone manufacturers continue to add features to their products beyond the basic functions of communication, with the purpose of sustaining the market for those products. The mobile phone has become a gadget with a full range of services, from basic telephony to business, leisure, and entertainment features. However, performance issues with mobile network services furnished a further basis for users to acquire multiple SIMs (Subscriber Identity Modules) for improved access. The problems that led to this were initially the poor network coverage and poor performance of mobile network service providers in the country, and later the lower call tariffs. Mobile phone users acquired as many phones as the networks to which they were subscribed, and the trend still exists today. An opportunity was thus created for a product that would satisfy user needs with regard to multiple-SIM capability.

1.1 History of the Mobile Phone

The history of the mobile phone began in the 1920s, when it was first used in taxis and cars as a two-way radio. Cell phones evolved over time like any other electronic equipment, and each stage or era was interesting in its own right. From its first official


use by the Swedish police in 1946 to the connection of a hand-held phone to the central telephone network, modern cell phones evolved tremendously. Ring (1947) created a communication architecture of hexagonal cells for cell phones, and an engineer's later discovery that cell towers can both transmit and receive signals in three different directions led to further advancement. Early cell phone users were confined to certain blocks of area, each served by a base station covering a small land area; it was not possible to remain in reach beyond such defined boundaries until Joel's development of the handoff system, which enabled users to roam freely across cell areas without interruption to their calls. Cell phones carried analog services between 1982 and 1990. In 1990, Advanced Mobile Phone Services (AMPS) turned the analog services digital and went online ("History of Cell Phone," 2010).

1.1.1 First Generation (1G) Mobile Phones

The USA Federal Communication Commission (FCC) approved for public use the first cell phone, the Motorola DynaTAC 8000X, developed by Dr. Martin Cooper, though it was made available to the public market only after 15 years. It was considered a lightweight cell phone at about 28 ounces, with dimensions of 13 x 1.75 x 3.5 inches. First generation mobile phones worked with Frequency Division Multiple Access (FDMA) technology; they were large, heavy to carry, and used only for voice communication ("History of Cell Phone," 2010).

1.1.2 Second Generation (2G) Mobile Phones

Second generation (2G) mobile phones were introduced in the 1990s and worked with both GSM and CDMA (Code Division Multiple Access) technologies. 2G network signals are digital, whereas 1G signals are analog. 2G cell phones were smaller, weighing between 100 and 200 grams, hand-held, and portable. Later improvements included faster internet access with GPRS (General Packet Radio Service) and subsequently EDGE (Enhanced Data rates for Global Evolution) technology, as well as file sharing with other mobile devices over infrared or Bluetooth. Other improvements included the Short Message Service (SMS), smaller batteries, longer battery life, etc. Owing to all these improvements, the mobile phone customer base expanded rapidly worldwide.

1.1.3 Third Generation (3G) Mobile Phones

Most present-day mobile phones are third generation phones. The standards used on 3G phones differ from one model to another, depending essentially on the network providers. These phones can stream live video and radio, make video calls, send e-mail, and receive mobile TV, with high internet access speeds due to HSDPA (High Speed Downlink Packet Access) and WCDMA (Wideband Code Division Multiple Access) technology. They also use Wi-Fi and touch-screen technology, apart from performing all the functions of 2G mobile phones ("History of Cell Phone," 2010).

1.2 Dual-SIM Mobile Phones

A dual-SIM mobile phone has the capacity to hold two SIM cards. The earliest form of this technology used dual-SIM adapters in single-SIM phones, which of course had only one transceiver. The adapter rendered a slim phone bulky, and sometimes the SIM card needed to be trimmed to fit into the adapter and the phone. The dual-SIM adapter could hold two SIMs at a time and was small enough to fit behind the battery of a regular mobile phone. However, both SIMs could not be active at the same time; switching from one SIM to the other was done by restarting the phone, and this combination is called a standby dual-SIM phone. Recent dual-SIM phones have both SIMs active simultaneously, with no need to restart the phone; these are referred to as active dual-SIM phones. Most of them have two built-in transceivers, of which one may support both 2G and 3G while the other supports only 2G. Another type of dual-SIM phone supports both GSM and CDMA networks. A new generation of dual-SIM phones makes use of only one transceiver yet provides two active SIMs simultaneously, e.g. the LG GX200. Some dual-SIM phones use call-management software and can divert calls from one SIM to the other SIM's voicemail when a call is in progress, or simply indicate that the line is busy. Both SIMs share the phone's memory, so they share the same contact list and SMS and MMS message library (Li, 2010). A recent introduction is mobile phones capable of holding three SIM cards; an example is the Akai Trio ("Dual SIM," 2011).
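The call-management behaviour just described (answer when the transceiver is free, divert the other SIM's caller to its voicemail while a call is in progress, otherwise signal busy) can be sketched as a toy model. This is purely an illustration of the logic, not any vendor's actual firmware:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class DualSimPhone:
    """Toy model of call handling in a single-transceiver active dual-SIM phone."""
    busy_sim: Optional[str] = None  # SIM currently holding a call, if any
    voicemail: Dict[str, List[str]] = field(
        default_factory=lambda: {"SIM1": [], "SIM2": []})

    def incoming_call(self, sim: str, caller: str) -> str:
        if self.busy_sim is None:
            # Transceiver free: answer the call on whichever SIM it arrived.
            self.busy_sim = sim
            return f"{sim} answering {caller}"
        if sim != self.busy_sim:
            # A call is in progress on the other SIM: divert to its voicemail.
            self.voicemail[sim].append(caller)
            return f"{sim} diverted {caller} to voicemail"
        # Same SIM already in a call: report busy.
        return f"{sim} busy for {caller}"
```

A standby dual-SIM phone, by contrast, would simply be unreachable on the inactive SIM until the phone is restarted with that SIM selected.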

1.3 Telephony in Nigeria

Up until 2001, Nigeria experienced problems with the services provided by its then main communications service provider, the Nigerian Telecommunications Plc (NITEL), including inefficient service, lack of access, and the limitation of services to institutional locations, since mainly landlines were deployed. In 1992, the telecommunications industry in Nigeria was deregulated. The first step was the commercialization or corporatization of NITEL, while the second was the establishment of the Nigerian Communications Commission (NCC), the telecommunications industry regulator (Alabi, 1996). The deregulation led to the introduction of Global System for Mobile communication (GSM) network providers operating on the 900/1800



MHz spectrum: MTN Nigeria, Econet (now Airtel), Globacom, and Mtel, in 2001. As a result, the use of mobile phones soared and replaced the unreliable services of the Nigerian Telecommunications Limited (NITEL). With an estimated 45.5 million mobile phones in use as at August 2007, and most people having more than one cell phone, Nigeria has witnessed phenomenal growth in this sector ("Telecommunications in Nigeria," 2011).

2.0 THE NEED FOR DUAL-SIM MOBILE PHONES

The GSM service in Nigeria came with its own problems, as subscribers were not getting value for their money. Tariffs were high, and the GSM service providers were plagued with numerous problems such as instability in power supply, insecurity of infrastructure, call drops, and difficulty in network accessibility. Owing to the peculiar nature of power supply in Nigeria, GSM service providers had difficulty powering their cell sites. Electric power generators installed at base stations to supplement or provide power meant additional deployment and operational cost, and this inadvertently led to increases in call tariffs. GSM service providers also incurred additional cost in securing installed facilities: they keep high numbers of security personnel on their payroll to guard their installations against theft and vandalism. As of October 2007, Airtel (formerly Zain) had 2,500 base stations, MTN 2,900, and Globacom 3,000 in Nigeria (Adegoke, Babalola, & Balogun, 2008). With two security personnel per cell site, one can appreciate the scale of the cost. These costs enter the total cost of operation, again leading to increases in call tariffs. The presence of security personnel does not, however, guarantee the safety of these facilities, as there are reported cases of stolen generators and fuel siphoned from reservoirs (Njoku, 2007).


Major complaints from network subscribers concerned the inability to access the network to initiate calls. A subscriber had to dial several times before a call could go through; sometimes, after dialing several times, a subscriber might be connected to a wrong number. Often, established calls were abruptly terminated in the middle of conversations. This can happen for several reasons: loss of signal between the network and the mobile phone, the subscriber moving outside the network coverage area, or the call being dropped upon handoff between cells on the same provider's network. Other causes include cell sites running at full capacity with no room for additional traffic, poor network configuration such that a cell is not aware of incoming traffic from a mobile device, and the call being lost when the mobile phone cannot find an alternative cell for handoff.

2.1 The GSM Service

Network accessibility, dropped calls, and high tariffs appeared to be the most worrisome issues for the average GSM subscriber. A common maxim at the time was, "of what use is a mobile phone when it cannot be used at will?" Disturbingly, GSM service network problems often persisted for days and, on rare occasions, for weeks. These problems were common to all the service networks, but when one network was down, service was often available on the others. The logical option for subscribers was subscription to multiple networks; this of course meant acquiring multiple GSM phones, with the attendant inconvenience of having to keep more than one handset. Many subscribers using multiple handsets experienced the loss or theft of some of these phones. Most Nigerians therefore desired a means of having two SIMs on one phone to overcome the problem of carrying more than one phone. Major mobile phone manufacturers, however, concentrated on producing sophisticated phones with impressive features such as cameras,


FM radio, memory cards, and WAP, GPRS, and EDGE capabilities at the time. These companies excelled in product performance, using current technology to produce better and more durable phones with each new release, and sustained product performance with their innovations and high-tech phones.

3.0 THE DISRUPTIVE IMPACT OF DUAL-SIM TECHNOLOGY

Mobile phones of all brands, shapes, and sizes were introduced into the phone market at the onset, just as GSM service providers were expanding network coverage. Common household names included Nokia, Samsung, Sagem, Sony Ericsson, and LG; there were a few other brands, albeit insignificant compared with these. The trend was slick, high-tech mobile phones with improved performance and durability. However, Chinese phone manufacturing companies disrupted this market trend and became major players in the Nigerian mobile phone market through the introduction of dual-SIM-capable phones, popularly called "China Phones". Although these products did not equal the existing brands in performance, appearance, and durability, they provided an innovative intervention for the target market by giving access to multiple service networks on a single phone. With the additional advantage of being cheap and easily affordable, the Nigerian market embraced the product, and most of the features of the existing sophisticated phones are also available on the dual-SIM phones. According to the market research company GFK Retail and Technology, 30 per cent of mobile phones sold in Nigeria are dual-SIM (Rattue, 2011). This development, directly related to the phenomenal growth of multi-SIM devices globally, is not confined to Nigeria: in Indonesia, Vietnam, Ghana, and India, the market share has grown from one in ten in 2009 to one in four by the quarter of 2010. According to the report, in the Middle East and Africa, one in every

10 mobile phones sold is dual-SIM. In Asia, 16 per cent of all mobile phones sold have dual-SIM capability, an increase from 13 per cent at the beginning of 2010 (Rattue, 2011). There were, however, warranty issues with the first adapter-type dual-SIM phones: the adapters could be used with normal single-SIM phones, but using them voided the phone's warranty. Also, "China Phones" that are active dual-SIM phones are bought from dealers without warranty; when asked why, the dealers reply that they equally bought them wholesale without warranty. Another issue is durability: "China Phones" break down unpredictably, and in the event of a fault, local repair shops find it difficult to get replacement parts, as there are no service centres or parts shops for such products. The lack of International Mobile Equipment Identity (IMEI) numbers in the unbranded made-in-China handsets makes them untraceable and creates security concerns. In spite of these shortcomings, demand for them is ever increasing, as low income earners can easily afford them, and most local mobile phone outlets sell mostly these "China Phones". Established global mobile phone manufacturers face stiff competition from Chinese brands and "fakes" in the Nigerian mobile phone market. The Chinese brands have achieved this by enticing consumers with an attractive combination of features at affordable prices, chief among them the dual-SIM capability that established manufacturers are now slowly introducing (Rattue, 2011). Samsung's D880 Duos was not very successful when introduced, since calls were possible only with its primary SIM, unlike the Chinese brands, which offered dual call capability; to initiate a call from the secondary SIM of a Samsung D880, it first had to be made the primary SIM. This difficulty, in addition to its high cost, made it unsuccessful.
Subsequent Samsung active dual-SIM models performed better, but their cost remained high. Nokia introduced its cheap dual-SIM phone, the Nokia C series, in Nigeria only


Dual-SIM Phones

in 2010. It is generally accepted in the country that Nokia phones are more durable than others; however, the Nokia C1-00 is a standby dual-SIM phone, as only one of its SIMs is active at a time. Initially, most Nigerians embraced dual-SIM phones because of the inconvenience of carrying two mobile phones. Presently, there is improvement in the country's power sector, and there have been reductions in GSM network inaccessibility and in the rate of dropped calls, although insecurity still remains an issue at cell sites. The inclination to own multiple mobile phones is currently driven not only by these factors but also by new ones, including lower call tariffs, promotions by the various GSM service providers to entice customers, and privacy and personal security concerns.

3.1 The Way Forward

The following are some of the problems facing the dual-SIM "China Phones," together with possible steps to address them:

• No-warranty issues
• Poor durability
• No service centres
• Difficulty in getting replacement parts
• Security issues (no IMEI number)

Cases of voided warranties resulting from the use of dual-SIM adapters on normal single-SIM phones have been drastically reduced, if not eliminated, by recent active dual-SIM mobile phones. The lack of warranty for a product often creates doubt in the minds of customers as to the durability or authenticity of the product. Wholesale dealers who buy these "China Phones" should be made to demand that the manufacturers issue warranties for them; this will encourage more patronage. The issue of durability can be a result of poor design or of the use of substandard materials to implement the dual-SIM technology. Since most of these phones are cheap compared with dual-SIM phones manufactured by big and


popular mobile phone manufacturers such as Nokia and Samsung, the likelihood is that the use of substandard materials is the cause of the poor durability. Better materials will raise production cost and product cost, but manufacturers can strike a balance and still produce phones that are reasonably priced. Initially, the "China Phones" had short battery life, but the phones now come with an extra battery. Chinese phone manufacturing companies need to establish service centres in the country, or to train and certify a handful of local mobile phone repair shop owners who will in turn pass on the skills acquired, so that there will be enough skilled technicians able to repair these phones in the event of faults. Replacement parts for "China Phones" should be made available to the trained technicians through the service centres. There is also a need for regulation to stop the use of dual-SIM mobile phones without IMEI numbers. The International Mobile Equipment Identity (IMEI) number is unique to every GSM and WCDMA mobile phone and is printed inside the battery compartment of the phone; it can be displayed on the screen by entering *#06# on the keypad. In India, when a large percentage of people used such phones, mobile operators implanted IMEIs onto them rather than bar services, but the Indian government then placed a ban on the use of phones without IMEI, which took effect from December 1, 2009.
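As an aside, the last digit of a 15-digit IMEI is a Luhn check digit, so software on the handset or in the network can at least verify that a reported IMEI is well formed. A minimal sketch of that check in Python (the standard Luhn algorithm, shown here only as an illustration):

```python
def imei_check(imei: str) -> bool:
    """Validate the Luhn check digit of a 15-digit IMEI string."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    # Walk the digits from the right, doubling every second one and
    # summing the digits of each product (Luhn's algorithm).
    for i, ch in enumerate(reversed(imei)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

For example, the Luhn-valid sample number 490154203237518 passes the check, while altering its final digit makes the check fail. Of course, such a check only catches malformed numbers; it cannot detect handsets shipped with no IMEI at all or with a cloned one.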

4.0 CONCLUSION

The need to communicate in spite of the poor network coverage and quality of service of the mobile (GSM) service providers drove mobile phone users in Nigeria to own multiple single-SIM mobile phones in order to guarantee access to available network services, with the associated problems of multi-phone ownership. Major mobile phone manufacturers, who preferred a sustaining technology model, responded to the growing market with improved and more sophisticated products. Disruptive market innovation in the form of dual-SIM mobile phones met the market's anticipation. These dual-SIM "made in China" phones were not as attractive or as durable as the existing sophisticated brands, but they offered most of the same features and were cheap and affordable. The dual-SIM phones' performance was also affected by short battery life, as well as by the absence of warranties, technical support and service outlets, and replacement parts; additional security issues are associated with the phones' lack of IMEI numbers. Nevertheless, the dual-SIM innovation met a market need and is widely used in Nigeria. Whereas the electricity supply problems, insecurity, and other factors that degraded telecoms service quality and thereby informed multiple phone ownership are declining, personal security concerns and the preference for lower-tariff offerings continue to inform multiple network access. As such, dual-SIM phones remain a popular market choice. The problems associated with these dual-SIM products must, however, be addressed by the China-based manufacturers and other market players.

REFERENCES

Adegoke, A. S., Babalola, I. T., & Balogun, W. A. (2008). Performance evaluation of GSM mobile system in Nigeria. Pacific Journal of Science and Technology, 9(2), 436–441.

Alabi, G. A. (1996). Telecommunications in Nigeria. Retrieved March 10, 2011, from www.africa.upenn.edu

Christensen, C. M. (1997). The innovator's dilemma. Harvard Business School Press.

Christensen, C. M., & Raynor, M. E. (2003). The innovator's solution. Harvard Business School Press.

Dual SIM. (2011). Retrieved March 10, 2011, from http://en.wikipedia.org/wiki/Dual_SIM

Dual SIM mobile phones. (2009). Retrieved March 10, 2011, from http://www.dualsimmobilephones.com/2009/09/dual-sim-mobile-phones/

History of cell phone. (2010). Retrieved March 10, 2011, from www.historyofcellphones.net/

Li, R. (2010). Cell phone mysteries: What is dual-SIM? Retrieved March 9, 2011, from www.articles.webraydian.com

Njoku, C. (2007). The real problem with GSM in Nigeria. Retrieved March 9, 2011, from http://www.nigeriavillagesquare.com/index2.php?option=com_content&do_pdf=1&id=7829

Rattue, A. (2009). Buoyant Nigerian market sees 15 million mobile handsets sold in 2009. Retrieved July 12, 2011, from http://www.gfkrt.com/news_events/market_news/single_sites/005203/index.en.html

Rattue, A. (2011). Multi SIM phenomenon continues in emerging mobile markets. Retrieved July 12, 2011, from http://www.gfkrt.com/news_events/market_news/single_sites/007260/index.en.html

Sally, M., Sebire, G., & Riddington, E. (2010). GSM/EDGE: Evolution and performance. John Wiley and Sons Ltd. doi:10.1002/9780470669624

Telecommunications in Nigeria. (2011). Retrieved March 10, 2011, from http://en.wikipedia.org/wiki/Telecommunications_in_Nigeria#mw-head

KEY TERMS AND DEFINITIONS

Active Dual-SIM Phone: A dual-SIM phone that has both SIMs activated, so that calls can be made or received on either SIM simultaneously and there is no need to restart the phone or switch between SIMs.

Cell Phone: The American name for a mobile phone.

China Phone: A substandard and sometimes unbranded dual-SIM mobile phone manufactured in China.

Dual-SIM Phone: A mobile phone capable of holding two SIM cards, which may or may not have both SIM cards activated to make or receive calls simultaneously.

Standby Dual-SIM Phone: A dual-SIM mobile phone that has one SIM activated at a time and must be restarted to activate the other SIM or switch between SIMs.

This work was previously published in Disruptive Technologies, Innovation and Global Redesign: Emerging Implications, edited by Ndubuisi Ekekwe and Nazrul Islam, pp. 462-469, copyright 2012 by Information Science Reference (an imprint of IGI Global).


Chapter 36

Data Envelopment Analysis in Environmental Technologies

Peep Miidla
University of Tartu, Estonia

ABSTRACT

Contemporary scientific and economic developments in environmental technology suggest that it is of great importance to introduce new approaches that enable the comparison of different scenarios for their effectiveness, their distributive effects, their enforceability, their costs, and many other dimensions. Data Envelopment Analysis (DEA) is one such method. DEA is receiving increasing attention as a tool for evaluating and improving the performance of manufacturing and service operations. It has been extensively applied in the performance evaluation and benchmarking of several types of organizations, so-called Decision Making Units (DMUs), with similar goals and objectives. Among these are schools, hospitals, libraries, bank branches, and production plants, but also climate policies, urban public transport systems, renewable energy plants, pollutant emission systems, environmental economics, etc. The purpose of this chapter is to present the basic principles of DEA and give an overview of its application possibilities for the problems of environmental technology.

INTRODUCTION

The Earth is the common home for all of us, and because of this the great attention paid to environmental problems is more than natural and urgent. The lack of economic value of environmental goods often leads to over-exploitation and degradation of these resources. It is extremely important to monitor and control interactions between production technologies and the environment. To keep and conserve the natural environment, environmental technology is developed. Independently of the application areas of the environmental sciences, new approaches and methods, particularly of mathematical modeling, are needed and welcome in this area. It is well known that mathematical modeling is an efficient method for investigating different processes, their simulation and prediction.

DOI: 10.4018/978-1-4666-1945-6.ch036

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


Data Envelopment Analysis (DEA) is a relatively new data-oriented mathematical method for evaluating the performance of a set of peer entities, traditionally called Decision Making Units (DMUs), which convert multiple inputs into multiple outputs. Since DEA was first introduced in its present form in 1978 (Charnes et al., 1978), we have seen a great variety of applications of DEA in evaluating the performances of many different kinds of entities engaged in many different activities, in many different contexts, and in many different countries. These DEA applications have used DMUs of various forms, such as hospitals, schools, universities, cities, courts, and business firms, including the performance of countries, regions, etc. DEA is frequently applied in many areas of the applied economic sciences, including agricultural economics, development economics, financial economics, public economics, and macroeconomic policy; in addition to its traditional confinement to productivity and efficiency analysis, it has also diffused into the fields of environmental economics and environmental technology. As it requires very few theoretical assumptions, DEA has opened up possibilities for use in cases which have been resistant to other approaches because of the complex (often unknown) nature of the relations between the multiple inputs and multiple outputs involved in a DMU.

There are several examples of areas of environmental science where DEA is used and where remarkable theoretical and practical results have been achieved. In the framework of the problems raised by climate change, one of the major threats to the Earth's sustainability, DEA has been applied to assess the relative performance of different climate policy scenarios as DMUs, accounting for their long-term economic, social and environmental impacts as input and output parameters (Bosetti & Buchner, 2008). Quantitative techniques are crucial here in order to make adequate decisions quickly. In the context of climate change, measuring carbon emission performance is also important (Zhou et al., 2008). In another context, the paper by Piot-Lepetit (1997) considers the usefulness of DEA for estimating potential input reductions and assessing potential reductions of the environmental impact of agricultural inputs. We can see the same in the paper by Madlener et al. (2006), where the performance of biogas plants is assessed. In the study of Rodrigues Diaz et al. (2004), DEA was used to select the most representative irrigation districts in Andalusia. One can also find the use of DEA to assess corporate enactment of Environmentally Benign Manufacturing as work parts move from place to place in a company (Wu et al., 2006); this work touches on green manufacturing problems. DEA is used even in political decision making (Taniguchi et al., 2000), and to assess the performance of tourism management by local governments when economic and environmental aspects are considered equally relevant (Bosetti et al., 2004).

The structure of the present chapter is the following. First an overview of the DEA method is given. Today DEA has developed into several forms, versions and modifications, each of which has specific application features. Below we formulate only the basic version, because a lot of literature is available in libraries and on the internet. The main part of the chapter deals with case studies whose results have been achieved by using DEA. There is a rising trend to apply DEA, and naturally a selection of such studies is included. Finally the reader finds the conclusions and references. One important objective of this chapter is to emphasize that environmental technologies are very open to innovation, and using new methods of mathematical modeling is a part of this.


OVERVIEW OF THE DATA ENVELOPMENT ANALYSIS

In this section we give a short description of Data Envelopment Analysis (DEA). More profound treatments of the topic can be found in many books (e.g. Cooper et al., 2004; Cooper et al., 2007; Thanassoulis, 2003), and there are also edited volumes (Charnes et al., 1994) and papers dealing with the applications of DEA. Comprehensive information about DEA can be found on the web page of Ali Emrouznejad (Emrouznejad, 1995-2001) and in the paper by Emrouznejad et al. (2008). It is also important to mention that all the papers referred to in the next section, "Case Studies," contain a sufficient overview of the DEA versions or modifications used in the particular cases under consideration.

DEA and Benchmarking

Data Envelopment Analysis belongs to the wider class of efficiency measuring methods, the so-called frontier methods, and is a data-oriented approach for evaluating the performance of a set of peer entities called Decision Making Units (DMUs). DEA is a multi-factor productivity analysis model for measuring the relative efficiencies of DMUs and is receiving increasing attention as a tool for evaluating and improving the performance of manufacturing and service operations. DEA is a powerful methodology for identifying the best practice frontier and has been extensively applied in the performance evaluation and benchmarking of schools, hospitals, bank branches, production plants, public-sector agencies, etc. Technically, DEA represents a collection of non-parametric linear programming techniques developed to estimate the relative efficiency of DMUs. Largely the result of multi-disciplinary research in economics, engineering and management, DEA is a theoretically sound framework that offers many advantages over traditional efficiency measures such as performance ratios and regression.

The most important feature of DEA is its ability to handle effectively the multidimensional nature of inputs and outputs in the production and management process. Efficiency as an economic category is widely used in profit-targeted organizations and enterprises. The simple idea that greater input into a system brings greater output and increases profit lies at the base of the general meaning of efficiency. Theoretically and practically several types of efficiency are in use, and each of them gives a different possibility for interpretation and for making new decisions. The same is seen in DEA concepts. Here, for each DMU, the efficiency score in the presence of multiple input and output factors is defined as the weighted sum of outputs divided by the weighted sum of inputs. All scores are between zero and one; DMUs whose efficiency equals one are certified as fully efficient, or simply efficient. For every inefficient DMU, i.e. for those whose efficiency score is less than one, DEA identifies a set of corresponding efficient units, the so-called reference group, that can be utilized as benchmarks for the improvement of activities. The estimation of the efficiency of a single DMU, on the basis of other organizations acting in the same economic environment, is the main advantage of the DEA approach. The reference group is the set of real DMUs whose outputs and inputs enter the composite virtual DMU with nonzero weights. The fact that the efficiencies of reference DMUs are taken equal to 1 (or 100%) does not mean that the quality of work of those base organizations could not be improved. The results of applying DEA only fix the situation at the moment when the data for input and output parameters were collected.

Geometrically, for every DMU evaluated, the reference set determines a frontier in the input-output parameter space. This is called the best practice production frontier, and the points which correspond to efficient DMUs are situated on it. If the group of DMUs under consideration has n members with all the values of the required input-output parameters, then in the parameter space we also have n data points. The best practice frontier is constructed as a piecewise linear surface in this space which envelops the set of data points and separates it from the origin of coordinates. Fully efficient DMUs are vertices of this surface. Geometrically, efficiency is a radial measure quantitatively equal to the ratio of the distances from the origin of coordinates to the composite virtual DMU and to the DMU being evaluated. The evaluation of every single non-efficient DMU gives in general a different efficiency coefficient and an individual reference set, because the data points are different. Efficient DMUs are exceptions: as mentioned, the efficiency of a DMU which belongs to a reference set is set equal to one, but beyond that there may be other DMUs whose efficiency is also equal to one but which do not belong to any reference set. In all textbooks one can find exact explanations of these geometrical issues (see e.g. Cooper et al., 2004).

The data for DEA is a set of parameters evaluated for each DMU. The parameters are divided into input and output parameters which represent different activities of the DMUs under consideration. For example, if we consider public libraries as DMUs, the input parameters could be fixed as yearly expenditures on the acquisition of new books, yearly expenditures on salaries, the size of collections, and the area of the library rooms. As output parameters one may consider the number of readers and the number of loans. It is interesting to mention some important conclusions of such a study of Estonian libraries (Miidla & Kikas, 2009). In the years 2002 to 2005, eight central libraries, i.e. 40% of the whole selection, used their resources effectively. The relative efficiency scores of the remaining libraries varied from 0.740 to 0.979. The data of the four years showed that the efficiency scores of the central public libraries of Estonia were falling, i.e. the average score decreased. In 2005, six of the 20 central libraries were scale efficient, i.e. of optimal size.

The list of parameters and their division into inputs and outputs can be chosen and fixed differently, depending on the particular goals of the research and, naturally, on the application area. One can say that DMUs convert multiple inputs into multiple outputs. Interestingly, the inputs and outputs do not need to have a common measure; they can be quantities of completely different units: meters, dollars, number of persons, number of lakes in a region, etc. It is important that in the whole selection the inputs and outputs of all DMUs have values, although there are also approaches which allow the use of DEA in the case of missing data (Kao & Liu, 2000). In what follows, the words 'selection' or 'selection group' refer to the whole set of DMUs under consideration.

The DEA approach uses a linear programming model for measuring the relative efficiencies of those DMUs on the basis of the given data. First, a DMU is fixed and a hypothetical composite operating unit, a virtual DMU based on all units in the selection group, is constructed. The input and output of this composite virtual DMU are determined by computing the weighted average of the inputs and outputs of all real DMUs in the selection group, and the efficiency score of the initially fixed DMU is defined as the ratio of the output to the input of this constructed composite DMU. This procedure is repeated for each single DMU, and as the output of the DEA application itself one gets an array of these relative efficiencies, which lie between zero and one. Thus the DEA approach is a kind of peer comparison method. Constraints in the linear programming model require all outputs of the composite virtual DMU to be greater than or equal to the outputs of the DMU being evaluated. So, if the selection group has n members, then for evaluating all of the members from the point of view of relative efficiency we need to establish and solve n linear programming problems. It should be mentioned that in advanced use of DEA, more than one linear programming problem per DMU may need to be solved when a more detailed analysis is required (Thanassoulis, 2003).

If the inputs of the virtual composite unit can be shown to be less than the inputs of the DMU being evaluated, the composite DMU will be shown to have the same or more output for less input. In this case, the model shows that the composite virtual DMU is more efficient than the DMU being evaluated; in other words, the DMU under evaluation is less efficient than the virtual DMU. Since the composite DMU is based on all DMUs in the selection group, the DMU being evaluated can be judged relatively inefficient when compared to the other units in the selection. The estimation of the efficiency of a single DMU on the basis of others acting in the same environment, the so-called reference group, is the main advantage of the DEA approach. The reference group is the set of real DMUs whose outputs and inputs enter the composite virtual DMU with nonzero weights. This makes DEA very attractive, because in the case of environmental entities it is difficult to speak about efficiency in one common sense for all of them, as is possible, for example, for profit organizations. The efficiency of reference DMUs is taken equal to 1 (or 100%). The results of applying DEA fix only the situation in the time frame in which the data for input and output parameters were collected; the environment may have already changed, and applying these results to the following year can be misleading.

Geometrically, for every DMU evaluated, the reference set determines the frontier in the input-output parameter space. This is called the best practice production frontier, although in the case of environmental technologies the term should be used with care, and the points which correspond to efficient DMUs are situated on it. As mentioned before, the frontier itself is a piecewise linear surface in the input-output parameter space, an envelope of the production possibility set. Points corresponding to the reference, or fully efficient, DMUs are vertices of this frontier. Efficiency is geometrically a radial measure quantitatively equal to the ratio of the distances from the origin of coordinates to the composite virtual DMU and to the DMU being evaluated. This ratio is precisely the relative efficiency. The evaluation of every single DMU gives in principle a different frontier and an individual reference set. The efficiency of DMUs in the reference set is set equal to one, and there may also be other DMUs whose efficiency equals 1 which do not belong to any reference set; they enter the composite virtual DMU with zero weight. The best practice frontier relies only on the initial data, i.e. on the inputs and outputs of all DMUs in the selection. The algorithm of efficiency estimation does not require the construction of the best practice frontier; the numerical method gives the answer without geometrical interpretation.

Considering the DEA methodology, one has to make assumptions about the scaling properties of the input parameters, the returns to scale, i.e. the influence of input variation on output changes. Assume that the input parameters are all changed in the same proportion. If the outputs change in the same proportion, we speak of constant returns to scale. Otherwise, when the outputs do not change at the same rate, we speak of variable returns to scale; more precisely, of increasing or decreasing returns to scale if the outputs change in a greater or smaller proportion. The scale issue deserves attention when different DEA models are in use, and the scaling property leads to the notion of the optimal size of a DMU. Namely, a DMU is shown to be of optimal size when it is efficient both in the sense of constant returns to scale and variable returns to scale. If the DMU is smaller than optimal, it usually works under increasing returns to scale. Conversely, DMUs oversized compared to the optimal size work under decreasing returns to scale. This might also be important in some aspects of application in environmental technology.

The DEA approach can be input-oriented or output-oriented. In the first case the main question is: how much is it possible to decrease the input parameters of inefficient DMUs while keeping the present output? In the case of the output-oriented method the main question is: how much is it possible to increase the outputs while keeping the present input? The choice between these two possibilities depends again on the context of application; the two cases lead to different modifications of the DEA method.
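As a small numeric illustration of the score definition above — the weighted sum of outputs divided by the weighted sum of inputs — the following sketch uses the library example with invented figures and arbitrary weights. (In DEA proper, the weights are not fixed by hand but chosen for each DMU by the optimization itself, as described below.)

```python
def efficiency_score(outputs, inputs, v, u):
    """Weighted sum of outputs divided by weighted sum of inputs."""
    virtual_output = sum(vi * yi for vi, yi in zip(v, outputs))
    virtual_input = sum(ui * xi for ui, xi in zip(u, inputs))
    return virtual_output / virtual_input

# Hypothetical library DMU: outputs = (readers, loans),
# inputs = (acquisition budget, floor area); all figures invented.
score = efficiency_score(outputs=[1200, 35000], inputs=[50000, 400],
                         v=[1.0, 0.02], u=[0.01, 1.0])
```

Note that with arbitrary weights the ratio need not lie between zero and one; DEA normalizes the problem so that the best achievable score is exactly 1.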

Mathematical Formulation

In their initial study, Charnes, Cooper, and Rhodes (Charnes et al., 1978) described DEA as a 'mathematical programming model applied to observational data that provides a new way of obtaining empirical estimates of relations, such as the production functions and/or efficient production possibility surfaces, that are cornerstones of modern economics.' In this article they proposed the following fractional program model, known as the CCR DEA model.

Assume that there are n DMUs to be evaluated. Each DMU consumes varying amounts of k different inputs to produce m different outputs. Specifically, DMU j consumes amount xji of input i and produces amount yjr of output r. We assume that xji ≥ 0 and yjr ≥ 0, and also assume, without loss of generality, that each DMU has at least one positive input and one positive output value.

The first approach gives us the fractional program problem for evaluating the efficiency score of DMU number s. In this form, as introduced by Charnes, Cooper, and Rhodes, the ratio of outputs to inputs is used to measure the relative efficiency of the DMU under evaluation relative to the ratios of all n DMUs. We can interpret the CCR construction as the reduction of the multiple-output/multiple-input situation for each DMU to that of a single virtual output and a single virtual input. For a particular DMU, the ratio of this single virtual output to the single virtual input provides a measure of efficiency that is a function of the multipliers v1,…,vm, u1,…,uk. In mathematical programming parlance, this ratio, which is to be maximized, forms the objective function for the particular DMU being evaluated. Symbolically the problem is the following:

Find max {(v1 y1s + … + vm yms) / (u1 x1s + … + uk xks)},

maximization over v1,…,vm, u1,…,uk,

subject to:

{(v1 y1i + … + vm ymi) / (u1 x1i + … + uk xki)} ≤ 1, i = 1,…,n,

v1,…,vm, u1,…,uk ≥ 0,

where v1,…,vm, u1,…,uk are weights given to the observed output and input values, correspondingly.

In this form, the fractional program problem has an infinite number of solutions: if (v1*,…,vm*, u1*,…,uk*) is a solution, i.e. optimal, then (α v1*,…, α vm*, α u1*,…, α uk*) has the same property for every α > 0. The additional condition

u1 x1s + … + uk xks = 1

makes it possible to convert this fractional problem into the following linear programming problem for the estimation of the efficiency of DMU number s.

Find

z* = max (μ1 y1s + … + μm yms),

maximization over μ1,…,μm, γ1,…,γk,

subject to:

(μ1 y1i + … + μm ymi) – (γ1 x1i + … + γk xki) ≤ 0, i = 1,…,n,

γ1 x1s + … + γk xks = 1,

μ1,…,μm, γ1,…,γk ≥ 0.

This is the basic linear programming problem for obtaining the efficiency score z* of DMU number s. The problem needs to be run n times to identify the relative efficiency scores of all the DMUs. Each DMU selects the input and output weights that maximize its efficiency score. As mentioned, a DMU is considered efficient if it obtains a score of 1; a score of less than 1 implies that it is inefficient. When working with the literature, a reader can find several versions of DEA computational schemes and should clearly understand which version is in use in each particular case, particularly its assumptions. This chapter does not offer an exhaustive discussion but simply an example of a DEA method formulation.

Below we give the dual of this linear program. Find

Θ* = min Θ,

minimization over the weights λ1,…,λn,

subject to:

λ1 + … + λn = 1,

λ1 x1,i + … + λn xn,i ≤ Θ xs,i , i = 1,…,k,

λ1 y1,j + … + λn yn,j ≥ ys,j , j = 1,…,m,

λ1 ≥ 0,…, λn ≥ 0.

As in the case of the main problem, for a complete realization of the DEA it is necessary to solve the linear programming problem for every single DMU in the selection. This gives us the array of relative efficiencies, the solutions Θ* for every single DMU, and only then is it possible to start the interpretation process. The dual problem is important because the units involved in the construction of the composite DMU, i.e. those for which the corresponding weight λi > 0, can be used as benchmarks for improving the inefficiency of the tested DMUs. DEA allows computing the improvements required in an inefficient DMU's inputs and outputs in order to make it efficient. If a DMU is inefficient, the corresponding solution Θ* is less than one; the solution also gives us the parameters of the corresponding composite virtual DMU, which is efficient, of course, i.e. the weights for its inputs and outputs. This virtual DMU also gives the corresponding point on the production efficiency frontier; 'dragging' the inefficient DMU's point to the frontier means a decrease in the inputs of this DMU. Sometimes it happens that the composite DMU is located on a part of the production efficiency frontier which is parallel to some coordinate plane. In this case it is possible to shift the virtual data point of the composite DMU further down along this parallel part towards the origin of coordinates without decreasing outputs or increasing other inputs. This surplus is called a slack, and it gives additional information about the inefficiency of the DMU under consideration. Slacks show the real possibility to decrease the corresponding input parameter, based on the existing real example of another DMU working in such conditions. The interpretation of inefficiency in the case of input-oriented DEA, represented by the dual linear programming problem, is easy. Namely, if we have k input parameters under consideration and the relative efficiency Θ* of a DMU is less than one, Θ* < 1, then to reach full relative efficiency (Θ* = 1) we must decrease the inputs (x1,…,xk) of this DMU by the factor Θ*, i.e. to the values (Θ*x1,…, Θ*xk). Any further discussion of these questions does not fall under the goal of the present chapter. It is interesting to mention that in numerous applications of environmental technology, a great variety of versions and possibilities of Data Envelopment Analysis have been used. The trend is increasing; the full DEA bibliography contains more than 4000 journal articles (Emrouznejad, 1995-2001) spanning many scientific fields.
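The input-oriented dual (envelopment) problem described above is an ordinary linear program and can be solved with any off-the-shelf LP solver. The following minimal sketch uses SciPy's `linprog`; the three-DMU data set is invented purely for illustration, and the `vrs` flag toggles the convexity constraint λ1 + … + λn = 1 (variable versus constant returns to scale, as discussed earlier).

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, s, vrs=True):
    """Input-oriented envelopment DEA score Theta* for DMU s.

    X: (n, k) array of inputs, Y: (n, m) array of outputs.
    Decision variables: [theta, lambda_1, ..., lambda_n].
    vrs=True adds the convexity constraint sum(lambda) = 1
    (variable returns to scale); vrs=False drops it (constant
    returns to scale).
    """
    n, k = X.shape
    m = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                      # minimize theta
    A_ub = np.zeros((k + m, 1 + n))
    b_ub = np.zeros(k + m)
    A_ub[:k, 0] = -X[s]             # sum_j lambda_j x_ji <= theta * x_si
    A_ub[:k, 1:] = X.T
    A_ub[k:, 1:] = -Y.T             # sum_j lambda_j y_jr >= y_sr
    b_ub[k:] = -Y[s]
    A_eq = b_eq = None
    if vrs:
        A_eq = np.ones((1, 1 + n))
        A_eq[0, 0] = 0.0            # theta is not in the convexity constraint
        b_eq = [1.0]
    bounds = [(0, None)] * (1 + n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[0]

# Invented toy data: 3 DMUs, 2 inputs, 1 (identical) output.
X = np.array([[2.0, 3.0], [4.0, 6.0], [3.0, 3.0]])
Y = np.array([[1.0], [1.0], [1.0]])
scores = [dea_efficiency(X, Y, s) for s in range(3)]   # VRS scores
```

The loop at the end mirrors the procedure in the text: one LP per DMU, yielding the array of relative efficiencies Θ* for the whole selection.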

CASE STUDIES

Next we explore the possibilities of DEA application in different areas of environmental technology. The use of DEA as a quantitative non-parametric performance measurement technique, based on linear programming, for assessing the relative efficiency of homogeneous peer entities called Decision Making Units (DMUs), has been successfully implemented in environmental research projects. DEA makes it possible to compare the efficiency of each DMU to that of an ideal operating unit, rather than to the average performance. This ideal unit is constructed only on the basis of the data for the whole set of DMUs, and because of this several DMUs always become fully efficient. The definition of a DMU is generic and flexible, and this supports the dissemination of DEA into new areas. We explain the use of DEA on the basis of some examples, and most certainly not all application areas are covered; the overview given in this chapter is in no way exhaustive of the developments in the field of DEA. The goal is to show some details of the approach, possible choices of DMUs, the corresponding input and output parameters, and the conclusions made on the basis of DEA. The interpretations of environmental efficiency in the different cases are of interest. The order of the examples is of no importance, as it is impossible to rank urgent environmental protection problems.

Climate Policy Scenarios

In (Bosetti & Buchner, 2008), DEA is extended from its traditional application into a quantitative method to assess the relative performance of different climate policy scenarios, accounting for their long-term economic, social and environmental impacts. Indeed, contemporary developments in the political, scientific and economic debate on climate change suggest that it is of critical importance to develop new approaches, particularly quantitative ones, to compare policy scenarios for their environmental effectiveness, their distributive effects, their enforceability, their costs, and many other dimensions. As input parameters for the DEA application, economic, environmental and social costs for every possible policy, as well as indicators of the current climate situation, are considered here. Among the outputs, in turn, there are indicators for which lower values are preferred and indicators of benefits and welfare. The authors discuss eleven simulated climate policy scenarios, compute three indicators for each of them (cumulated discounted GDP over the century, temperature increase by 2100, and the Gini equity indicator by 2100), apply DEA, and draw interesting conclusions.

Two alternative DEA approaches to comparing the sustainability of different policy scenarios are used. One of them is based on the efficiency score defined as a relative ratio, and the other is based on Competitive Advantage measured in terms of absolute prices. The first case fits the traditional DEA application described above. Relative efficiency estimates are computed for each policy, where efficiency is measured as the ratio of the weighted sum of outputs to the weighted sum of inputs, and are obtained by solving a series of linear programming problems. Both constant returns to scale and variable returns to scale assumptions are used. The interpretation of relative efficiency, computed using the DEA method, is interesting: a policy is 100% efficient if and only if 1) none of its outputs can be increased without either increasing one or more of its inputs or decreasing some of its other outputs; and 2) none of its inputs can be decreased without either decreasing some of its outputs or increasing some of its other inputs. In the second approach, DEA is applied in order to obtain weights, while for each scenario the net economic impact, expressed in monetary value, is aggregated through these weights with the social and environmental impacts; the scenarios as DMUs have their efficiencies expressed in their own unitary measures, on the basis of the data from real activity. Three major findings are pointed out: 1) stringent climate policies can outperform less ambitious proposals if all sustainability dimensions are taken into account; 2) a carefully chosen burden-sharing rule is able to bring together climate stabilization and equity considerations; and 3) the most inefficient strategy results from the failure to negotiate a global post-2012 climate agreement. In the conclusions, the simulated scenarios and the interpretational role of DEA are discussed in detail. It is remarkable that it is possible to support the political, scientific and economic debate on climate change using the DEA method.

Measuring Environmental Performance Index

The Environmental Performance Index (EPI) is a method of quantifying and numerically benchmarking the environmental performance of a country's policies, and in recent years it has been widely adopted and quoted by policy analysts and decision makers. The construction of an aggregated EPI, which offers condensed information on environmental performance, has evolved into an important focus in systems analysis. Among the existing approaches to developing EPIs, some are data-driven while others are theory-driven. In the article by Zhou et al. (2008) we see an example of the direct approach, where an aggregated EPI is obtained directly from the observed quantities of the inputs and outputs of the environmental system studied, using Data Envelopment Analysis. This work is an example of the application of environmental DEA technology, in which outputs are assumed to be weakly disposable (Färe et al., 2004) and which has been widely used to measure industrial productivity when undesirable outputs exist. In recent years this approach has gained popularity in environmental performance measurement due to its empirical applicability. The common procedure for applying DEA to measure environmental performance is to first incorporate undesirable outputs in the traditional DEA framework and then calculate the undesirable-output-oriented (environmental) efficiencies. In fact, many studies have been devoted to modeling undesirable factors in DEA, e.g. the data translation approach (Seiford & Zhu, 2005) and the utilization of environmental DEA technology. In the article (Zhou et al., 2008), different DEA methods for environmental performance measurement are described (constant returns to scale, variable returns to scale, and non-increasing returns to scale), and a study on measuring the carbon emission performance of eight world regions is presented. The centre of attention is the growing concern about global climate change due to carbon dioxide (CO2) emissions worldwide. The single input, desirable output and undesirable output are: total energy consumption (Mtoe, megatonne of oil equivalent), GDP (gross domestic product, billion 1995 US$) and CO2 emissions (Mt), respectively.


Data Envelopment Analysis in Environmental Technologies

The eight regions under consideration (i.e. the DMUs for the DEA application) are: OECD, Middle East, Former USSR, Non-OECD Europe, China, Asia (excluding China), Latin America and Africa. In this study all the proposed models are radial DEA-based models, in which the efficiency score is computed as the ratio of the distances from the origin of coordinates to the DEA frontier and to the data point of the corresponding DMU. However, in some circumstances it may be difficult to compare DMUs using only the proposed environmental performance indexes, because radial DEA efficiency measures have weaker discriminating power than non-radial DEA models. The authors propose incorporating the different environmental DEA technologies with non-radial DEA efficiency scores, and combining them with slacks-based efficiency measures. The results of the study are interesting. For instance, if the pure EPI is chosen and the reference technology exhibits variable returns to scale, OECD has a better carbon emission performance than Africa even though it has a larger carbon intensity and carbon factor.
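To make the radial environmental DEA model concrete, the following sketch computes a constant returns to scale environmental performance index in which the undesirable output is weakly disposable. The data are small hypothetical values for four DMUs, not the eight-region figures of Zhou et al. (2008); the index for a DMU is the smallest factor λ by which its CO2 emissions can be scaled down while the reference technology still produces at least its GDP from no more than its energy input.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical illustrative data (NOT the values from Zhou et al., 2008):
x = np.array([5.0, 3.0, 6.0, 4.0])   # input: energy consumption
y = np.array([9.0, 4.0, 7.0, 6.0])   # desirable output: GDP
b = np.array([6.0, 2.0, 8.0, 3.0])   # undesirable output: CO2 emissions

def environmental_efficiency(k, x, y, b):
    """Radial CRS environmental DEA index for DMU k:
    minimise lambda such that the reference technology produces at
    least y_k, exactly lambda*b_k (weak disposability of the bad
    output), using at most x_k.  Decision vars: [lambda, z_1..z_n]."""
    n = len(x)
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimise lambda
    A_ub = [np.concatenate(([0.0], -y)),          # sum(z*y) >= y_k
            np.concatenate(([0.0], x))]           # sum(z*x) <= x_k
    b_ub = [-y[k], x[k]]
    A_eq = [np.concatenate(([-b[k]], b))]         # sum(z*b) = lambda*b_k
    b_eq = [0.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

scores = [environmental_efficiency(k, x, y, b) for k in range(len(x))]
print(np.round(scores, 4))   # hypothetical data -> [1.  1.  0.4375  1.]
```

A score of 1 marks a DMU on the environmental frontier; DMU 2 here could cut its emissions to 43.75% of their current level while keeping its output, according to the hypothetical reference technology.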

Greenhouse Gas Technologies

In the paper (Lee et al., 2008), the authors use an Analytic Hierarchy Process (AHP) and Data Envelopment Analysis (DEA) hybrid model to weigh the relative preferences of greenhouse gas technologies in Korea. The Analytic Hierarchy Process is a subjective method used to analyze qualitative criteria in order to generate a weighting of the operating units, and is known as a decision-making method that can be used to solve unstructured problems. In general, decision making involves tasks such as planning, the generation of a set of alternatives, the setting of priorities, the selection of the best policy once a set of alternatives has been established, the allocation of resources, the determination of requirements, the prediction of outcomes, the design of systems, the measurement of performance, the ensuring of system stability, optimization and the resolution of conflicts. The authors employed a long-term perspective when establishing the criteria to evaluate energy technology priorities for the greenhouse gas plan. They used the AHP to generate the relative weights of the criteria and alternatives in the greenhouse gas plan. Thereafter, the relative weights were applied to the data used in the DEA efficiency measurement. This study represents an example in which the AHP/DEA hybrid model has been used to determine the energy technology priorities for the greenhouse gas plan. The results obtained using this hybrid model provide the government with an effective decision-making tool and also represent a consensus of experts in the greenhouse gas planning sector. Nine greenhouse gas technologies were considered as DMUs for the DEA application: CO2 capture, storage and conversion technology; non-CO2 gas technology; advanced combustion technology; next-generation clean coal technology; clean petroleum and conversion technology; DME (di-methyl ether) technology; GTL (gas-to-liquid) technology; gas hydrate technology; and greenhouse gas mitigation policy. The parameters of the DEA consist of a single input factor and multiple output factors. The input factor is the investment cost associated with the development of greenhouse gas technologies; the unit of investment cost was million US dollars in 2006. There are five output factors, namely the possibility of developing the technology, the potential quantity of energy savings, market size, investment benefit, and ease of technology spread. All outputs are multiplied by the relative weights calculated using the AHP approach (the criteria concern the United Nations Framework Convention on Climate Change, UNFCCC, economic spin-off, technical spin-off, urgency of technology development, and quantity of energy use) and are thus applied in conjunction with the output factors employed as part of the DEA approach.
As a result of the application of the AHP/DEA approach, one greenhouse gas technology, namely non-CO2 gas technology with an efficiency score of 1, was found to be more efficient than the other eight greenhouse gas technologies. In conclusion, this hybrid model can be used to efficiently compute the relative efficiency scores of greenhouse gas technologies. This paper also shows decision makers and policy makers in the energy sector that multi-criteria decision-making problems can be solved using scientific procedures.
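The AHP weighting step used in this hybrid scheme can be sketched as follows. The pairwise-comparison matrix below is hypothetical and does not reproduce the expert judgements of Lee et al. (2008); the priority weights are the normalized principal eigenvector of the comparison matrix, and the consistency ratio checks whether the judgements are acceptably consistent (CR < 0.1 by Saaty's rule of thumb).

```python
import numpy as np

# Hypothetical 3x3 pairwise-comparison matrix on Saaty's 1-9 scale
# (criterion i vs criterion j); reciprocal by construction.
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])

# AHP priority weights: normalized principal right eigenvector of A.
vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
w = vecs[:, i].real
w = w / w.sum()

# Consistency index CI = (lambda_max - n)/(n - 1); random index RI(3) = 0.58.
n = A.shape[0]
lambda_max = vals[i].real
CR = (lambda_max - n) / (n - 1) / 0.58

print(np.round(w, 3), round(CR, 3))
# The weights w would then multiply the corresponding DEA output data.
```

In the hybrid model, a weight vector of this kind scales each output factor before the DEA efficiency scores are computed.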

Tourism Management

DEA is also effectively used for assessing the performance of tourism management by local governments when economic and environmental aspects are considered equally relevant. In (Bosetti et al., 2004) the focus is on the comparison of Italian municipalities located in coastal areas. DEA is applied in order to assess the efficiency status of the management units considered. In this analysis, the DMU represents a municipality producing the tourism good, for which two different inputs are given. The first is the cost of managing the tourism infrastructure. More exactly, the cost of producing tourism services is proxied by the total number of beds, which is considered an approximation of management expenses and is computed by adding up the number of all beds in hotels, camping sites, registered holiday houses and other receptive structures. The second input is the environmental cost deriving from the increased number of people depending on the same environmental endowment; this parameter is represented by the amount of solid waste in tons per year. As the output parameter, an indicator measuring the rate of use of existing beds is considered, as a general approximation of the profit derived from the tourism industry. The authors note that in this study output-oriented models have been preferred to input-oriented ones, as they better suit the issues considered relevant for management purposes and better help in addressing the germane questions. This choice also gives meaning to the input and output indicators, because in order to augment the efficiency of an inefficient municipality, the most direct policy means is to introduce constraints on the uncontrolled deployment of environmental resources, rather than to restrict the dimension of the tourism business. Also, variable returns to scale models have mainly been considered, although an analysis using a constant returns to scale DEA model has also been conducted on the same data set. The main result of this DEA performed over the data set is a ranking of the municipalities considered. For each municipality the analysis specifies not only the relative efficiency score, but also the potential improvements in the case of scores lower than one.
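An output-oriented, constant returns to scale DEA run of the kind described in this section can be sketched as follows. The four municipalities and all figures are hypothetical, not the data of Bosetti et al. (2004); each municipality's score φ is the largest factor by which its output could be expanded by a combination of peers that uses no more of either input.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical municipality data (not from Bosetti et al., 2004):
beds  = np.array([2000.0, 1500.0, 3000.0, 1800.0])   # proxy for management cost
waste = np.array([ 500.0,  300.0,  900.0,  400.0])   # solid waste, t/year
occ   = np.array([ 140.0,  120.0,  180.0,  150.0])   # bed-use indicator (output)

def output_expansion(k):
    """Output-oriented CRS DEA: maximise phi such that the peer
    combination uses no more of either input than municipality k while
    producing at least phi times its output.  phi = 1 means efficient;
    phi > 1 gives the feasible proportional output expansion.
    Decision vars: [phi, z_1..z_n]."""
    n = len(occ)
    c = np.zeros(n + 1)
    c[0] = -1.0                                      # maximise phi
    A_ub = [np.concatenate(([0.0], beds)),           # sum(z*beds)  <= beds_k
            np.concatenate(([0.0], waste)),          # sum(z*waste) <= waste_k
            np.concatenate(([occ[k]], -occ))]        # phi*occ_k <= sum(z*occ)
    b_ub = [beds[k], waste[k], 0.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return -res.fun

phi = [output_expansion(k) for k in range(len(occ))]
print(np.round(phi, 4))   # hypothetical data -> [1.1905  1.  1.3889  1.]
```

Here municipalities 1 and 3 are efficient, while municipality 0 could in principle raise its bed-use indicator by about 19% with its current inputs; the reciprocal 1/φ gives a score lower than one, as in the ranking described above.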

Consumer Durables

A broad program, 'Measurement of Eco-efficiency using Data Envelopment Analysis', has been introduced in Finland by the Ministry of the Environment. Economic activity consumes natural resources as its input, and produces undesirable emissions and waste as its output. The empirical approach, which starts from classical economic theory and takes substitution possibilities into account by estimating so-called efficient production frontiers, leads to the natural understanding that a production enterprise is called efficient if the consumption of any input cannot be decreased without a corresponding decrease of at least one output or an increase of undesirable outputs. The objective of this project is to investigate the applicability of the DEA method to the measurement of eco-efficiency, and to develop the method further towards a more comprehensive framework supporting management and incentive mechanisms. The paper by Kuosmanen and Kortelainen (2004) is a very good introduction to how DEA can be used in environmental evaluation and for the comparative assessment of firm performance in this context. In (Kortelainen & Kuosmanen, 2005) a method for the eco-efficiency analysis of consumer durables utilizing DEA was developed. The key innovation of the paper is to measure efficiency in terms of absolute shadow prices that are optimized endogenously within the model to maximize the efficiency of the product and producer. The approach is illustrated by an application to the eco-efficiency evaluation of a class of consumer durables, Sport Utility Vehicles. To assess the eco-efficiency of a product, one needs to account for both the private net economic benefits and the external social costs that arise during the use phase of the product's life-cycle. The authors note that DEA seeks endogenously determined optimal, so-called shadow prices that present every consumer durable in the most favorable light when compared to other products, and that the method does not require any prior arbitrary assumptions on how to set the prices of environmental pressures. The key idea of the approach is to test whether there are any nonnegative efficiency prices at which a consumer durable is efficient. By definition, a socially efficient product needs to fulfill the conditions of inactivity and optimality. The inactivity condition means that the value added of the consumer durable has to be nonnegative at the optimal prices; the rationale behind this condition is that consumers can remain inactive and not purchase any of the goods if the costs outweigh the benefits. The optimality condition demands that the consumer durable must be the optimal choice at some efficiency prices. The goods are eco-efficient if the shadow price of at least one environmental pressure is positive. Using the efficiency measures and the shadow prices, all goods are classified into the following categories: efficient goods; eco-efficient goods; weakly efficient, economical goods; inefficient goods; inefficient, but environmentally friendly goods; and inefficient, environmentally harmful goods.
The authors calculated efficiency scores for 88 different models of Sport Utility Vehicles using the absolute shadow price approach. For comparison, efficiency scores were also estimated with the environmental efficiency DEA approach, in which environmental pressures were modeled as negative outputs. The fuel costs and all environmental pressures were measured per kilometer, which was simultaneously the value of the (desirable) output, so the DEA model was invariant to the returns to scale (RTS) specification; all alternative RTS specifications yielded exactly the same results.

Irrigation Districts Management

The application of DEA is proposed as a methodology for assigning the correct weightings in the calculation of indexes and for overcoming the subjectivity in the interpretation of results in the management of Andalusian irrigation districts (Spain). The case was presented and discussed by Rodríguez Díaz et al. (2004). This study was used to select the most representative irrigation districts in Andalusia, which were then studied in greater depth. Andalusia is a region of southern Spain, a typical Mediterranean region where irrigation and wealth have been closely linked over time and where 815,000 hectares of irrigated area are divided into 156 irrigation districts. In addition to allowing the production of winter crops, irrigation makes it possible to produce a larger number of crops during the extremely dry summers that are characteristic of this Mediterranean climate, something that would otherwise not be possible under dry-land agriculture. The input-oriented DEA model was applied to all the irrigation districts together, and separately to the interior districts. The authors show that two types of clearly differentiated agricultural districts coexist in Andalusia: interior and littoral districts. In this research the input parameters for the DEA application were: the irrigated surface area in hectares, labor in annual working units, and the total volume of water applied to an irrigation district. The total value of agricultural production in Euros was considered as the output parameter. Following the DEA, none of the interior districts achieved high efficiencies in the numerical experiments where all districts were considered. This leads to the conclusion that the littoral districts serve as the reference region for the interior districts. For this reason, the DEA model was applied separately to the interior districts only, and then it was possible to draw important conclusions. In particular, the DEA study allowed the five most representative irrigation districts of Andalusia to be selected for a more detailed benchmarking study, and the DEA turned out to be a useful tool for detecting local inefficiencies and determining possible improvements for irrigation.

Biogas Plants

It is widely recognized today that the largest source of atmospheric pollution is fossil fuel combustion, on which current energy production and use patterns rely heavily; the most crucial environmental problems therefore derive from the energy demand needed to sustain human needs and economic growth. The paper (Madlener et al., 2006) presents an interesting study assessing the efficiency of 41 agricultural biogas plants in Austria. The two input parameters were the amount of organic dry substrate used and the labor time spent on plant operation. Among the three outputs, two were desirable: the net electricity produced and the external heat, which refer respectively to the amount of electricity and heat delivered by the biogas plant for external consumption (i.e. net of what the biogas plant consumes itself), including farm operations not directly related to the biogas plant. The third output parameter, methane emissions to the atmosphere, was an undesirable output that contributes to the greenhouse gas problem. The paper contains a detailed discussion of the DEA efficiency interpretation for the biogas plants under consideration.

National Park Management

Wilderness protection is another growing necessity for modern societies, particularly in areas where population density is extremely high and where, during the twentieth century, the erosion of territory, and hence of ecosystems, due to human activities increased dramatically, as for example in Europe. Conservation, however, implies very high opportunity costs, and thus it is crucial to create incentives for efficient management practices, to promote benchmarking and to improve conservation management. A methodology based on DEA for assessing the relative efficiency of the management units of protected areas, and for indicating how it could be improved, is proposed in (Bosetti & Locatelli, 2005). In it, 17 Italian National Parks (National Park Management offices) are considered as DMUs. Three different models have been used to perform the DEA; they differ in the choice of input and output indicators. The set of input parameters contains economic costs, computed by aggregating management costs, variable costs and extraordinary expenses. The area extension was considered as a proxy for fixed costs, which were assumed to be proportional to the area covered by the park. The output parameters are: the number of visitors to the park, as an indicator of its attractiveness, providing potential indirect benefit to the local economy; the number of the park's employees, as an indicator of the direct and indirect social and economic benefits; the number of economic businesses which are directly linked, empowered or created thanks to the presence of the park, e.g. the farmers producing within the protected area; the number of protected species, as a good proxy for the environmental quality and biodiversity of the park; and the number of students who visit the park on environmental education trips, as a proxy for the social and educational benefits deriving from the park. In some models the inverse of the mentioned biodiversity indicator was included as an input.


Several different definitions of ecological efficiency are known. In the case of protected areas, the problem of efficiency becomes even more complicated, because management and financial features have to be considered as well. In this research the definition emerges in the DEA application results: when a DMU scores maximum efficiency according to all three models, one can say that its management has attained the sustainable development goal in a very broad sense. In cases where DMUs are partially inefficient, the authors use the DEA to obtain information concerning potential improvements in management. In the final conclusions it is pointed out that DEA is a good benchmarking technique for monitoring multi-objective efficiency. The DEA also provides the possibility of a detailed analysis of potential improvements in the management of National Parks.

Measuring Residential Energy Efficiency

In the paper (Grösche, 2009) the energy efficiency improvements of US single-family homes between 1997 and 2001 are estimated using a two-stage procedure. In the first stage, an indicator of energy efficiency is derived by means of Data Envelopment Analysis, and the analogy between the DEA estimator and traditional measures of energy efficiency is demonstrated. The second stage employs a bootstrapped truncated regression technique to decompose the variation in the obtained efficiency estimates into a climatic component and factors attributed to efficiency improvements. The author notes that the improvement of residential energy efficiency is one major goal of energy policy makers. Put simply, DEA can be considered a generalization of energy efficiency defined as the ratio between the amount of a particular produced service s and the amount of energy e consumed for its production (s/e). The household's total energy consumption, measured in kWh, serves as the only input for the DEA. As for the outputs (the 'produced' energy services), the author approximates the demand for space heating, cooling and lighting by the size of the living space. The number of household members serves as a proxy for the amount of hot water preparation and cooked meals. To account for energy consumption due to the use of electric appliances, the joint number of TV sets, video players, DVD players, and computers is incorporated. The overall number of refrigerators and freezers in the household is likewise included in the estimation. The results of the study are mixed: a substantial part of the variation in efficiency scores is due to climatic influences, but households have nevertheless improved their energy efficiency. In particular, households heating mainly with fuel oil or natural gas show significant improvements. A key advantage of the applied procedure is its ease in measuring residential energy efficiency improvements.
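The claim that DEA generalizes the simple ratio s/e can be made explicit. With a single energy input and several service outputs, the CCR efficiency of household k is the largest weighted-output-to-energy ratio attainable under weights that leave no household's weighted output above its energy use. The sketch below uses hypothetical data with only two outputs, not the variables or values of Grösche (2009):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical household data (not from Grösche, 2009):
e = np.array([100.0, 80.0, 120.0, 90.0])   # energy input, kWh (scaled)
Y = np.array([[150.0, 4.0],                # outputs: living space, members
              [100.0, 2.0],
              [160.0, 3.0],
              [120.0, 5.0]])

def energy_efficiency(k, e, Y):
    """CCR multiplier form with a single input: the score is
    max_u (u . y_k) / e_k  subject to  u . y_j <= e_j  for every
    household j and u >= 0, i.e. the best attainable generalized
    service-per-energy ratio for household k."""
    res = linprog(-Y[k],                   # maximise u . y_k
                  A_ub=Y, b_ub=e,          # u . y_j <= e_j for all j
                  bounds=[(0, None)] * Y.shape[1], method="highs")
    return -res.fun / e[k]

scores = [energy_efficiency(k, e, Y) for k in range(len(e))]
print(np.round(scores, 4))   # hypothetical data -> [1.  0.8333  0.8889  1.]
```

By linear programming duality this multiplier score coincides with the input-oriented envelopment score, so an inefficient household's score indicates the fraction of its energy that frontier households would need to produce the same services.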

Pig Fattening Farms

In the paper (Lauwers & van Huylenbroeck, 2003) a method for analyzing the environmental efficiency of Belgian pig fattening farms, based on the farm materials balance and worked out with the DEA technique, is proposed. In its most fundamental sense, the materials balance is the mass-flow equation of the raw materials used in the economic system and of the residuals disposed of in the natural environment. The environmental efficiency of a farm is defined similarly to the economic allocative efficiency of farms as DMUs in the DEA approach. Nutrient surplus in pig fattening, as a typical balance indicator, is used to illustrate the concept in a two-input, one-output case. The input parameters are feed (nutrients) and the piglets per rotation. Because of juridical constraints on farm dimension, the farmers' profit maximization objective turns into a maximization of the gross margin per pig place; thus the pig fattening process is simplified to one output, the marketable meat production. Several versions of DEA are applied and discussed: input-oriented and output-oriented. The main conclusion is that ignoring the balance feature of environmental issues, such as nutrient surplus, might be the main reason why traditional integral analyses of economic and environmental efficiency yield contradictory conclusions.

Environmentally Friendly Manufacturing

The essence of environmentally friendly manufacturing is to define sustainable development in terms of manufacturing: conserving nature's services (resource supply, water, energy) while development remains centered on economics and trade, with a timeline of 20-100 years. Corporate environmental strategy is still evolving today, and it is not surprising that manufacturing operations are increasingly required to consider environmental impacts and sustainable development. In Wu et al. (2006) the application of DEA is discussed for measuring the efficiency, through material loss and environmental impact, of a closed-loop manufacturing process in the computer industry. A multi-stage DEA is utilized to measure each manufacturing phase, comparing the starting materials and environmental status to the materials available once the product has been recycled, and the environmental damage (or substances added to the environment) once a product life-cycle has been completed. The multi-stage DEA becomes important for multiple processes, for instance the closed-loop manufacturing process, where the outputs from one process can be the inputs for the next. Inputs and outputs varied; the whole set of parameters consisted of expenditure on research and development, years of experience, expenditure on raw materials, number of product parts, use of harmful materials, energy consumption, product recyclability as the percentage of the product/components that is reusable, emission of pollutants, modification flexibility, amount of products returned, product recyclability and material recovered. The values for the various inputs/outputs were obtained from the literature. This work is a good example of measuring a company's environmental conduct in manufacturing by applying DEA. The conclusion is that if an organization has multiple locations or is responsible for multiple products, the multi-stage DEA is a promising approach. In addition, on the basis of the model it is possible to analyze a company's environmental impact and motivate the company to improve the corresponding indicators.

Grain Farms

The aim of the study (Vasiliev et al., 2008) was to analyze the efficiency of Estonian grain farms after Estonia's transition to a market economy and during the accession period to the European Union, in 2000-2004. Here DEA is used with the following input parameters: variable expenditures, total investment capital, the surface of land used and the annual working units. The output was fixed as the total production volume in monetary value. The results obtained were: the mean total technical efficiency varied from 0.70 to 0.78, and 62% of the grain farms were operating under increasing returns to scale. The most purely technically efficient farms were the smallest and the largest, but the productivity of small farms was low compared to larger farms because of their small scale. However, based solely on the DEA model it is not possible to determine the optimum farm scale and the range of Estonian farm sizes operating efficiently.

Final Remarks

Environmental technologies are cleaner and resource-efficient technologies which can decrease material inputs, reduce energy consumption and emissions, discover valuable by-products and minimize waste disposal problems, or some combination of these. Data Envelopment Analysis is widely used in many areas for estimating the peer-based efficiency of Decision Making Units, which are defined in each single case. Although it is not the only method of quantitative analysis and efficiency modeling, DEA is highly recommended as an approach with interpretable output, and it has proved its effectiveness as a productivity analysis tool. The primary advantages of the technique are that it considers multiple input and output factors of the DMUs and does not require the parametric assumptions of traditional multivariate methods. In general, the inputs can include any resources utilized by a DMU, and the outputs can range from actual products produced to a range of performance and activity measures. DEA has several versions and modifications, and each of these models and methods can be useful in a variety of manufacturing and service areas.

REFERENCES

Bosetti, V., & Buchner, B. (2008). Using DEA to assess the Relative Efficiency of Different Climate Policy Portfolios. Ecological Economics, 68(5), 1340-1354. doi:10.1016/j.ecolecon.2008.09.007

Bosetti, V., Lanza, A., & Cassinelli, M. (2004). Using Data Envelopment Analysis to Evaluate Environmentally Conscious Tourism Management. FEEM Working Paper 59. Milan: Fondazione Eni Enrico Mattei.

Bosetti, V., & Locatelli, G. (2005). A Data Envelopment Analysis Approach to the Assessment of Natural Parks' Economic Efficiency and Sustainability. The Case of Italian National Parks. FEEM Working Paper No. 63.05. Available at SSRN: http://ssrn.com/abstract=718621

Charnes, A., Cooper, W. W., Lewin, A. Y., & Seiford, L. M. (Eds.). (1994). Data envelopment analysis: Theory, methodology, and applications. Boston: Kluwer.

Charnes, A., Cooper, W. W., & Rhodes, E. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2(6), 429-444. doi:10.1016/0377-2217(78)90138-8

Cooper, W. W., Seiford, L. M., & Tone, K. (2007). Data Envelopment Analysis: A Comprehensive Text with Models, Applications, References and DEA-Solver Software (2nd ed.). New York: Springer.

Cooper, W. W., Seiford, L. M., & Zhu, J. (Eds.). (2004). Handbook on Data Envelopment Analysis (2nd ed.). Boston: Kluwer.

Emrouznejad, A. (1995-2001). Ali Emrouznejad's DEA HomePage. Warwick Business School, Coventry CV4 7AL, UK. http://www.deazone.com/

Emrouznejad, A., Parker, B., & Tavares, G. (2008). Evaluation of research in efficiency and productivity: A survey and analysis of the first 30 years of scholarly literature in DEA. Socio-Economic Planning Sciences, 42(3), 151-157. doi:10.1016/j.seps.2007.07.002

Färe, R., & Grosskopf, S. (2004). Modeling undesirable factors in efficiency evaluation [comment]. European Journal of Operational Research, 157, 242-245. doi:10.1016/S0377-2217(03)00191-7

Grösche, P. (2009). Measuring residential energy efficiency improvements with DEA. Journal of Productivity Analysis, 31(2), 87-94. doi:10.1007/s11123-008-0121-7

Kao, C., & Liu, S.-T. (2000). Data Envelopment Analysis with Missing Data: An Application to University Libraries in Taiwan. The Journal of the Operational Research Society, 51, 897-905.

Kortelainen, M., & Kuosmanen, T. (2005). Eco-Efficiency Analysis of Consumer Durables Using Absolute Shadow Prices. EconWPA Working Paper at WUSTL, No. 0511022.


Kuosmanen, T., & Kortelainen, M. (2004). Data Envelopment Analysis in Environmental Valuation: Environmental Performance, Eco-efficiency and Cost-Benefit Analysis. Discussion Paper No. 21, Department of Business and Economics, University of Joensuu.

Lauwers, L. H., & van Huylenbroeck, G. (2003). Materials Balance Based Modelling of Environmental Efficiency. In 2003 Annual Meeting, August 16-22, 2003, Durban, South Africa, No. 25916, International Association of Agricultural Economists.

Lee, S. K., Mogi, G., Shin, S. C., & Kim, J. W. (2008). Measuring the Relative Efficiency of Greenhouse Gas Technologies: An AHP/DEA Hybrid Model Approach. In Proceedings of the International MultiConference of Engineers and Computer Scientists 2008, Vol. II, IMECS 2008, 19-21 March, 2008, Hong Kong (pp. 1615-1619).

Madlener, R., Antunes, C. H., & Dias, L. C. (2006). Multi-Criteria versus Data Envelopment Analysis for Assessing the Performance of Biogas Plants. CEPE Working Paper No. 49, Centre for Energy Policy and Economics (CEPE), Zurich.

Miidla, P., & Kikas, K. (2009). The efficiency of Estonian central public libraries. Performance Measurement and Metrics, 10(1), 49-58. doi:10.1108/14678040910949684

Piot-Lepetit, I., Vermersch, D., & Weaver, R. D. (1997). Agriculture's environmental externalities: DEA evidence for French agriculture. Applied Economics, 29(3), 331-338. doi:10.1080/000368497327100

Rodríguez Díaz, J. A., Camacho Poyato, E., & López Luque, R. (2004). Applying Benchmarking and Data Envelopment Analysis (DEA) Techniques to Irrigation Districts in Spain. Irrigation and Drainage, 53, 135-143. doi:10.1002/ird.128

Seiford, L. M., & Zhu, J. (2005). A response to comments on modeling undesirable factors in efficiency evaluation. European Journal of Operational Research, 161(2), 579-581. doi:10.1016/j.ejor.2003.09.018

Taniguchi, M., Akinaga, J., & Abe, H. (2000). Evaluation for Neighboring Environment considering Comparative Study and DEA Analysis. Infrastructure Planning Review, 17, 423-430.

Thanassoulis, E. (2003). Introduction to the Theory and Application of Data Envelopment Analysis: A Foundation Text with Integrated Software. Norwell, MA: Kluwer Academic Publishers.

Vasiliev, N., Astover, A., Mõtte, M., Noormets, M., Reintam, E., Roostalu, H., & Matveev, E. (2008). Efficiency of Estonian grain farms in 2000-2004. Agricultural and Food Science, 17(1), 31-40. doi:10.2137/145960608784182272

Wu, T., Fowler, J., Callerman, T., & Moorehead, A. (2006). Multi-stage DEA as a Measurement of Progress in Environmentally Benign Manufacturing. In The 16th International Conference on Flexible Automation and Intelligent Manufacturing (pp. 221-228), Limerick, Ireland, June 2006.

Zhou, P., Ang, B. W., & Poh, K. L. (2008). Measuring environmental performance under different environmental DEA technologies. Energy Economics, 30(1), 1-14. doi:10.1016/j.eneco.2006.05.001

ADDITIONAL READING

A Data Envelopment Analysis (DEA) Home Page. (1996). http://www.etm.pdx.edu/dea/homedea.html

Beasley, J. E. (1996). OR-Notes. http://people.brunel.ac.uk/~mastjjb/jeb/or/dea.html


Coelli, T. J., Rao, D. S. P., O'Donnell, C. J., & Battese, G. E. (2005). An Introduction to Efficiency and Productivity Analysis. Springer.

Molinero, C. M., & Woracker, D. (2008). Data Envelopment Analysis: A Non-Mathematical Introduction. Working paper. Available at SSRN: http://ssrn.com/abstract=6317

Thore, S. A. (Ed.). (2002). Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation. Springer.

Zhu, J. (Developer). DEAFrontier. http://www.deafrontier.net/

KEY TERMS AND DEFINITIONS

Constant Returns to Scale: The outputs of a DMU change in the same proportion as its inputs; producers are able to scale inputs and outputs linearly without increasing or decreasing efficiency.

Decision Making Unit: Decision making units form a homogeneous set of peer entities which convert multiple inputs into multiple outputs and whose efficiency is under consideration in DEA.

Efficiency (Computed in Data Envelopment Analysis): The weighted sum of outputs divided by the weighted sum of inputs.

Environmental Technologies: Cleaner and resource-efficient technologies which can decrease material inputs, reduce energy consumption and emissions, discover valuable by-products, and minimize waste disposal problems, or some combination of these.

Relative Efficiency: A decision making unit is rated as fully efficient on the basis of the available evidence if and only if the performances of the other decision making units do not show that some of its inputs or outputs can be improved without worsening some of its other inputs or outputs.

Variable Returns to Scale: The outputs of a decision making unit change in a different proportion than its inputs, either increasingly or decreasingly.

This work was previously published in Environmental Modeling for Sustainable Regional Development: System Approaches and Advanced Methods, edited by Vladimír Olej, Ilona Obršálová and Jirí Krupka, pp. 242-259, copyright 2011 by Information Science Reference (an imprint of IGI Global).


Chapter 37

Constrained Optimization of JIT Manufacturing Systems with Hybrid Genetic Algorithm

Alexandros Xanthopoulos, Democritus University of Thrace, Greece
Dimitrios E. Koulouriotis, Democritus University of Thrace, Greece

ABSTRACT

This research explores the use of a hybrid genetic algorithm in a constrained optimization problem with a stochastic objective function. The underlying problem is the optimization of a class of JIT manufacturing systems. The approach investigated here is to interface a simulation model of the system with a hybrid optimization technique which combines a genetic algorithm with a local search procedure. As a constraint handling technique we use penalty functions, namely a "death penalty" function and an exponential penalty function. The performance of the proposed optimization scheme is illustrated via a simulation scenario involving a stochastic demand process satisfied by a five-stage production/inventory system with unreliable workstations and stochastic service times. The chapter concludes with a discussion on the sensitivity of the objective function with respect to the arrival rate, the service rates, and the decision variable vector.

DOI: 10.4018/978-1-4666-1945-6.ch037

INTRODUCTION

This chapter addresses the problem of production coordination in serial manufacturing lines which consist of a number of unreliable machines linked with intermediate buffers. Production coordination in systems of this type is essentially the control of the material flow that takes place within the system in order to resolve the trade-off between minimizing holding costs and maintaining a high service rate. A time-honored approach to modeling serial manufacturing lines is to treat them as Markov processes (Gershwin, 1994; Veatch and Wein, 1992) and then solve the related Markov Decision Problem (MDP) by using standard iterative algorithms such as



policy iteration (Howard, 1960), value iteration (Bellman, 1957), and so on. However, the classic dynamic programming (DP) approach entails two major drawbacks: Bellman's curse of dimensionality, i.e. the computational explosion that takes place as the system state space grows, and the need for a complete mathematical model of the underlying problem. The limitations of the DP approach gave rise to the development of sub-optimal yet efficient production control mechanisms. A class of production control mechanisms that implement the JIT (Just In Time) manufacturing philosophy, known as pull-type control policies/mechanisms, has come to be widely recognized as capable of achieving quite satisfactory results in serial manufacturing line management. Pull-type control policies coordinate the production activities in a serial line based only on actual occurrences of demand, rather than on demand forecasts and production plans as is the case in MRP-based systems.

In this chapter, six important pull control policies are examined, namely Kanban and Base Stock (Buzacott and Shanthikumar, 1993), Generalized Kanban (see Buzacott and Shanthikumar (1992), for example), Extended Kanban (Dallery and Liberopoulos, 2000), CONWIP (Spearman et al., 1990) and CONWIP/Kanban Hybrid (Paternina-Arboleda and Das, 2001). Pull production control policies are heuristics characterized by a small number of control parameters that assume integer values. Parameter selection significantly affects the performance of a system operating under a certain pull control policy and is therefore a fundamental issue in the design of a pull-type manufacturing system. In this chapter the performance of JIT manufacturing systems is evaluated by means of discrete-event simulation (Law and Kelton, 1991). In order to optimize the control parameters of the system, the simulation model is interfaced with a hybrid optimization technique which combines a genetic algorithm with a local search procedure.
The application of simulation together with optimization meta-heuristics for the modeling and


The application of simulation together with optimization meta-heuristics for the modeling and design of manufacturing systems is an approach that has attracted considerable attention in recent years. In Dengiz and Alabas (2000) simulation is used in conjunction with tabu search to determine the optimum parameters of a manufacturing system, while Bowden et al. (1996) utilize evolutionary programming techniques for the same task. Alabas et al. (2002) develop a simulation model of a Kanban system and explore the use of a genetic algorithm, simulated annealing, and tabu search to determine the number of kanbans. Simulated annealing for optimizing the simulation model of a manufacturing system controlled with kanbans is applied in Shahabudeen et al. (2002), whereas Hurrion (1997) constructs a neural network meta-model of a Kanban system using data provided by simulation. Koulouriotis et al. (2008) apply Reinforcement Learning methods to derive near-optimal production control policies in a serial manufacturing system and compare the proposed approach to existing pull-type policies. Some indicative applications of genetic algorithms (GAs) to manufacturing problems can be found in Yang et al. (2007), Yamamoto et al. (2008), Smith and Smith (2002), Shahabudeen and Krishnaiah (1999) and Koulouriotis et al. (2010). Panayiotou and Cassandras (1999) develop a simulation-based algorithm for optimizing the number of kanbans and carry out a sensitivity investigation using finite perturbation analysis. It has been suggested in the literature that the results of a genetic algorithm can be enhanced by conducting a local search around the best solutions found by the GA (for related work see Yuan, He and Leng, 2008, and Vivo-Truyols, Torres-Lapasió and García-Álvarez-Coque, 2001); on that basis, this hybrid optimization scheme has been adopted in the present study.

The main contributions of this work are the following. The performance of six important pull production control policies in a hypothetical scenario is investigated using discrete-event simulation.
In order to determine the control parameters of each policy the proposed hybrid


GA is employed. The objective function to be optimized is a weighted sum of the mean Work In Process (WIP) inventories, subject to the constraint of maintaining the service level (SL) above a specified target. Because the objective function is stochastic, we use resampling, i.e., performing multiple evaluations of the same parameter vector and using the mean of these evaluations as the fitness measurement of the individual, a practice discussed by Fitzpatrick and Grefenstette (1988) and Hammel and Bäck (1994). As a constraint handling technique two types of penalty functions are explored: a "death penalty" function and an exponential penalty function. The exponential penalty function is designed according to an empirical method based on locating points which lie on the boundary between the feasible and infeasible regions in the output of the genetic algorithm with the "death penalty" function. Our numerical results support the intuitive perception that the "death penalty" approach will most of the time yield worse results than the exponential penalty function, which penalizes solutions according to the level of the constraint violation. The chapter concludes with a discussion on how the objective function behaves for different levels of the arrival rate and the service rates, as well as on its sensitivity to the decision variable vector.

The remaining material of this chapter is structured as follows. Sections "Base Stock Control Policy" to "CONWIP/Kanban Hybrid Control Policy" give a brief description of six important pull production control policies for serial manufacturing lines. Sections "Optimization Problem: Objective Function" and "Hybrid Genetic Algorithm" discuss the main aspects of the simulation optimization methodology that we followed, namely the formal definition of the parameter optimization problem and issues concerning the genetic algorithm and local search procedure that were used.
We report our findings from the simulation experiments that we conducted for one serial line starting from

section “Experimental Results: Simulation Case” and thereafter. Finally, in the last section we state our concluding remarks and point to possible directions for future research.

SYSTEM DESCRIPTION: JIT PRODUCTION CONTROL POLICIES

We examined serial manufacturing lines that produce a single product type and consist of a number of workstations/machines with intermediate buffers. We assume that the first machine is never starved. Customer demands arrive at random time intervals and each requests the release of one finished part from the finished goods buffer. Demands are satisfied immediately from the finished parts inventory; if no parts are available in the last buffer, the demand is backordered. We do not consider customer impatience in our model, so no demand is ultimately lost to the system. Each manufacturing facility can work on only one part at a time during a manufacturing cycle. All machines have random production times, times between failures, and repair times. As soon as a stage i part is manufactured, it is placed in the output buffer of that station, and a control policy coordinates the release of parts from that buffer to the next machine. The unreliability of the manufacturing operations, along with the stochastic demand for final products, dictates the use of safety buffers of intermediate and finished parts in order to attain the target service rate. However, safety stocks incur significant holding costs that could put the manufacturer at a competitive disadvantage; it is therefore essential to balance the trade-off between minimizing WIP inventories and maintaining a high service level. Figure 1 shows a manufacturing system with three stations in tandem. The following sections briefly explain the way the Kanban, Base Stock, CONWIP, CONWIP/Kanban Hybrid, Extended Kanban and Generalized Kanban control policies for serial lines operate.


Figure 1. A three-station manufacturing line

BASE STOCK CONTROL POLICY

A Base Stock manufacturing line (see Buzacott and Shanthikumar, 1993) is completely described by N parameters, the base stock levels Si of each production station, i = 1, 2, ..., N, where N is the number of the system's workstations. The Si parameters correspond to the number of parts that exist in the system's buffers when the system is in its initial state, that is, before any demands have arrived. The policy operates as follows: when a demand arrives, it is immediately transmitted to every manufacturing station, authorizing it to start working on a new station i part. Base Stock has the advantage of reacting rapidly to incoming demand, with the drawback of providing no control at all on the system's inventories.

KANBAN CONTROL POLICY

The Kanban control policy was originally developed by the Toyota Motor Company and became the topic of considerable research thereafter (Sugimori et al., 1977; Buzacott and Shanthikumar, 1993; Berkley, 1992; Karaesmen and Dallery, 2000). A Kanban manufacturing line's control parameters are the production authorizations Ki of each station, i = 1, 2, ..., N. The Ki parameter corresponds to the maximum number of parts that are allowed in station i (manufacturing facility plus output buffer). Workstation i is authorized to start working on a new part as soon as a finished station i part is released from its output buffer. The information of a demand arrival is transmitted from the last manufacturing station to the first one, station by station; if there is a buffer with no parts in it, this transmission is interrupted. The Kanban policy offers very tight synchronization between the production stations of the system at the expense of a relatively slow response to demand fluctuations.
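The kanban-card bookkeeping described above can be sketched in a few lines of C++ (an illustrative sketch only; the struct and member names are ours, not taken from the chapter's simulator):

```cpp
#include <cassert>

// Minimal model of one kanban-controlled station. At most K parts may be in
// the station (manufacturing facility plus output buffer); starting work
// consumes a free kanban card, and releasing a finished part downstream
// returns one.
struct KanbanStage {
    int K;    // production authorizations (kanban cards) for this station
    int wip;  // parts currently in facility + output buffer

    // A new part may enter only while a card is free, i.e. wip < K.
    bool tryStart() {
        if (wip >= K) return false;  // blocked: all kanbans are in use
        ++wip;
        return true;
    }
    // Releasing a finished part to the downstream station frees one card.
    void release() {
        if (wip > 0) --wip;
    }
};
```

With K = 2, two parts can be started, a third attempt is blocked, and releasing a finished part re-authorizes production; this is the interruption of demand transmission described above, seen from a single station.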

CONWIP CONTROL POLICY

CONWIP is an abbreviation of CONstant Work In Process (Spearman et al., 1990). Under this policy the total number of parts that exist in the system (the Work In Process) can never exceed a certain level, which is the C control parameter of the policy. Parameter C is equal to the sum of the system's base stocks Si, i = 1, 2, ..., N. All machines in a CONWIP line are authorized to produce whenever they are able to (they are operational and have a raw part to work on), except the first one. The first machine of the system is authorized to start working on a new part as soon as a unit from the finished parts buffer is released to a customer.

GENERALIZED KANBAN AND EXTENDED KANBAN CONTROL POLICIES

These two control policies combine the merits of Base Stock and Kanban: they react rapidly to the arrival of demands and effectively control the WIP at the same time. They are described by two parameters per station, the base stocks Si and the production authorizations Ki (Ki ≥ Si), which are borrowed from the Base Stock and Kanban policies respectively. The finite number of production authorizations guarantees that the system's inventories will not exceed the predefined levels, but station coordination here is not as tight as in Kanban: a station can be granted a production authorization even if a part is not released from its output buffer. For a detailed description of the way Generalized Kanban and Extended Kanban operate, the reader is referred to Liberopoulos and Dallery (2000) and Buzacott and Shanthikumar (1992).

CONWIP/KANBAN HYBRID CONTROL POLICY

A CONWIP/Kanban Hybrid system (see Paternina-Arboleda and Das (2001), for example) operates, as the name implies, under a combination of the CONWIP and Kanban control policies. The departure of a finished part from the system authorizes the first station to allow a new raw part to enter the system. All workstations except the last one have a finite number of production authorizations Ki, i = 1, 2, ..., N − 1. The station production authorizations Ki, the base stock SN of the last workstation, and the total WIP allowed in the system (parameter C) are the CONWIP/Kanban Hybrid's control parameters.

OPTIMIZATION PROBLEM: OBJECTIVE FUNCTION

The mathematical formulation of the parameter optimization problem for serial lines controlled by pull production control policies is given below. Let x = [x_1 x_2 ... x_n], x_i ∈ Z, be the control parameter vector of some pull production control policy, i.e. the station i production authorizations (kanbans) in a Kanban system, the initial buffer levels in a Base Stock system, etc. The objective is to find the control parameter values x that maximize the expected value of the stochastic objective function f(x, ω), subject to the constraint of maintaining the service level (SL) equal to or above a specified target t:

maximize: E[f(x, ω)], x_i ∈ Z, i = 1, 2, ..., n  (1)

subject to: E[SL(x, ω)] ≥ t  (2)

ω is used to denote the stochastic nature of f and SL. SL is an unknown function of x, and t ∈ ℝ⁺. The evaluation of the functions f and SL is the result of a simulation experiment. The value of SL is the number of demands satisfied from on-hand inventory divided by the total number of demands which arrived to the system. The value of f is a weighted sum of the mean Work In Process inventories and is calculated according to (3):

f = −Σ_{i=1}^{N} h_i H̄_i  (3)

where h_i stands for the cost of storing one item in output buffer i per time unit, and H̄_i is the average inventory level in buffer i. We know that the optimal solution in this type of problem is located very close to the boundary between the feasible and infeasible regions. An additional difficulty in obtaining the optimal solution emanates from the fact that fitness measurements contain random "noise" caused by the simulation model.
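Equation (3) is straightforward to restate as code; a minimal sketch (the function name and signature are our own, not part of the chapter's simulator):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Equation (3): f is the negated weighted sum of the average buffer levels,
// so larger (less negative) values of f mean lower holding cost. The vector h
// holds the per-unit, per-time-unit storage costs and Hbar the average
// inventory levels reported by the simulation.
double objective(const std::vector<double>& h, const std::vector<double>& Hbar) {
    double f = 0.0;
    for (int i = 0; i < (int)h.size(); ++i)
        f -= h[i] * Hbar[i];  // f = -sum_i h_i * Hbar_i
    return f;
}
```

For example, with two buffers, h = {1.0, 2.0} and average levels {3.0, 4.0}, f = −(1·3 + 2·4) = −11.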


HYBRID GENETIC ALGORITHM

In order to solve the optimization problem stated in the previous section we propose a hybrid optimization technique which combines a genetic algorithm with a local search procedure. The genetic algorithm evolves a population of candidate solutions (individuals), where each solution is evaluated with the use of a simulation model, and the individual with the highest fitness value found by the GA is used to initialize the local search procedure. The fitness v(x) of each individual is given by Equation (4):

v(x) = (1/m) Σ_{i=1}^{m} [f(x, ω) + p(SL(x, ω))]  (4)

where f(x, ω) is calculated according to (3), p(·) is a properly defined penalty function of the service level, and m is a positive integer (the sample size). The parameters of the GA are the chromosome length l, the population size s, the sample size m, a positive integer e called the elite count, the crossover probability Pcross, the mutation probability Pmut, and the maximum number of generations g. The individuals which constitute the genetic algorithm population are encoded as binary bit-strings; parameter l therefore controls the size of the search space. Parameter e determines the number of individuals that pass deterministically to the next generation. The local search procedure is characterized by a single parameter δ ∈ ℝ⁺. Let xcur be the current solution of the local search algorithm, vcur its fitness value, and vbest the best fitness value found so far. If we denote the search space by S and a distance function (e.g. Euclidean distance) by dist(·), then the neighborhood of xcur is written as N(xcur) = {y ∈ S : dist(xcur, y) ≤ δ}.

The pseudocode of the hybrid genetic algorithm is presented below.

1. Input GA parameters: chromosome length l, population size s, sample size m, elite count e, crossover probability Pcross, mutation probability Pmut, maximum number of generations g
2. Initialize population randomly, set generation_counter ← 0
3. WHILE (generation_counter < g)
   a. evaluate population, set generation_counter ← generation_counter + 1
   b. scale fitness values proportionally to raw fitness measurements
   c. apply selection operator:
      - select the e individuals with the highest fitness values
      - select the remaining s − e individuals using stochastic uniform selection
   d. apply crossover operator
   e. apply mutation operator
4. return individual xbest with highest fitness
5. Initialize local search algorithm: xcur ← xbest, define neighborhood parameter δ
6. evaluate xcur, set vbest ← vcur, flag ← TRUE
7. WHILE (flag = TRUE)
   a. evaluate all points in N(xcur)
   b. select xnew ∈ N(xcur) with best fitness value vnew
   c. IF (vnew > vbest) THEN set vbest ← vnew, xcur ← xnew ELSE flag ← FALSE
8. return xcur. Terminate
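The resampled, penalized fitness of Equation (4) can be sketched as follows (a sketch under stated assumptions: `simulate` stands in for one independently seeded run of the discrete-event model, and all names are ours):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// One replicate's output: the objective value f and the observed service level.
struct Sample {
    double f;
    double sl;
};

// Equation (4): average m noisy evaluations of f(x, w) + p(SL(x, w)).
// `simulate` performs one replicate for a fixed parameter vector x;
// `penalty` is the constraint-handling function p.
double fitness(int m,
               const std::function<Sample()>& simulate,
               const std::function<double(double)>& penalty) {
    double sum = 0.0;
    for (int i = 0; i < m; ++i) {
        Sample s = simulate();       // one independently seeded replicate
        sum += s.f + penalty(s.sl);  // penalized objective value
    }
    return sum / m;                  // resampled fitness v(x)
}
```

With a feasible solution the penalty term vanishes and v(x) estimates E[f]; with an infeasible one the penalty dominates, which is what steers the GA back toward the feasible region.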

The selection operator (Step 3.c) determines which individuals will be chosen to create the next generation. The first e individuals in terms of fitness value pass to the next generation by default. The remaining s − e individuals are selected with the use of a stochastic uniform selection routine. This technique can be visualized as a line in which each individual corresponds to a section of length proportional to its scaled fitness value. The algorithm moves along the line in equal-sized steps and, at each step, selects the individual from the section it lands on.

In the crossover stage (Step 3.d), pairs of individuals are selected at random with probability Pcross ∈ (0, 1) in order to be recombined. In the implementations of the GA for the one-parameter-per-workstation manufacturing systems we used the single-point crossover method. For the remaining systems (Extended and Generalized Kanban) uniform crossover was used; according to this technique, two individuals exchange bits on the basis of a randomly generated binary vector of equal length called a crossover mask. The mutation operator (Step 3.e) modifies the value of a bit in the population with probability Pmut ∈ (0, 1).

The genetic algorithm terminates when it completes a predefined number of iterations g and returns the individual with the highest fitness value, which is used to initialize the local search algorithm. The complexity of the hill-climbing procedure is O(t × k), where t is the number of iterations and k the neighborhood size. The complexity of the genetic algorithm depends on the number of generations, the size of the population, and the genetic operators/parameters used.
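The hill-climbing phase (steps 5-8 of the pseudocode) can be sketched for integer parameter vectors with the minimal neighborhood δ = 1, i.e. x itself plus every vector obtained by changing one coordinate by ±1. This is an illustrative sketch with a deterministic evaluator; in the chapter each evaluation is itself a resampled simulation:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Steepest-ascent hill climbing over integer vectors. Each iteration
// enumerates the delta = 1 neighborhood of xcur, moves to the best neighbor if
// it improves on the best fitness seen so far, and stops otherwise.
std::vector<int> localSearch(std::vector<int> xcur,
                             const std::function<double(const std::vector<int>&)>& v) {
    double vbest = v(xcur);
    bool improved = true;
    while (improved) {
        improved = false;
        std::vector<int> xnew = xcur;
        double vnew = vbest;
        for (int i = 0; i < (int)xcur.size(); ++i) {  // enumerate N(xcur)
            for (int step : {-1, 1}) {
                std::vector<int> y = xcur;
                y[i] += step;
                double vy = v(y);
                if (vy > vnew) { vnew = vy; xnew = y; }  // track best neighbor
            }
        }
        if (vnew > vbest) { vbest = vnew; xcur = xnew; improved = true; }
    }
    return xcur;
}
```

On a deterministic concave test function such as v(x) = −(x0 − 3)² − (x1 − 7)² the procedure climbs from any start to the maximizer; with noisy simulation-based fitness it merely fine-tunes the GA's best individual, as described above.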

EXPERIMENTAL RESULTS: SIMULATION CASE

We examined a five-machine manufacturing line with equal operation times. The base simulation scenario consists of the following parameters. Machines operate with service rates which are normally distributed random variables with mean 1.1 parts/time unit and standard deviation 0.01. Repair-to-failure times are exponentially distributed with mean 1000 time units, and failures are operation dependent. Repair times are also assumed exponential, with a mean time to repair (MTTR) of 10 time units. Times between two successive customer arrivals are exponential random variables with mean 1.11 time units, i.e. the arrival rate is Ra = 0.9. Since the service rates are all equal to 1.1 parts/time unit and the machines are failure-prone, the maximum attainable throughput rate under any control policy will be Tmax < 1.1. Consequently, an arrival rate of 0.9 parts/time unit corresponds to a heavy-loading simulation case. The inventory costs for storing one part per time unit in buffer i are h = [h1 h2 ... h5] = [1.0 1.2 1.44 1.73 2.07]. Note that the holding costs increase at a rate of 20% when moving downstream from buffer to buffer; this increase reflects the value added to a part as it is progressively converted into a final product. The system operates under a complete backordering policy, which means that no customer demand is ultimately lost to the system.

The justification for selecting the aforementioned probability distributions to model the arrival process, the service rates, etc. can be found in the queueing theory and manufacturing systems literature; some indicative references are the influential works of Law and Kelton (2000) and Bhat (2008). The input parameters of the simulation model were selected so as to mimic a situation where the system is under heavy loading conditions. This is a case of primary interest, since the differences in performance between the various pull-type control policies are most clearly illustrated when the manufacturing line is pushed towards its maximum throughput rate. In order to investigate the sensitivity of the production/inventory system under examination for different levels of arrival rates and service rates, as well as the robustness of the solutions obtained by the proposed optimization methodology, four variants of the base simulation scenario are also considered.
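A back-of-envelope check of the Tmax < 1.1 claim follows from the standard long-run availability formula MTBF / (MTBF + MTTR); this derivation is ours, not the chapter's, and ignores blocking and starvation between stations:

```cpp
#include <cassert>
#include <cmath>

// Long-run fraction of time a machine is up, given exponential failure and
// repair times (standard renewal-theory result, assumed here).
double availability(double mtbf, double mttr) {
    return mtbf / (mtbf + mttr);
}

// Derated service rate: an upper bound on the throughput of one machine.
double effectiveRate(double rate, double mtbf, double mttr) {
    return rate * availability(mtbf, mttr);
}
```

With the base-case values, availability(1000, 10) ≈ 0.990, so each machine can sustain at most about 1.1 × 0.990 ≈ 1.089 parts/time unit: below the nominal 1.1, yet comfortably above the arrival rate Ra = 0.9, which is what makes the scenario heavily but not impossibly loaded.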
In the first two variants, all inputs to the simulation model are kept constant except for the arrival rate, which is set to Ra = 1.0 and Ra = 0.8 respectively, i.e. we examine the system's behaviour for increased/decreased demand for final products. In the remaining two variants of the base simulation case we vary the standard deviation of the service rates: in one case the system's performance is evaluated for service rates that vary significantly around the mean (st.d. = 0.1), and in the other we examine what would happen if the "randomness" of the service rates decreased (st.d. = 0.001). The system configuration for the five simulation cases is presented in Table 1.

Table 1. Simulation scenario parameters

             Ra    Rp    st.d.   MTBF   MTTR   inventory costs
base case    0.9   1.1   0.01    1000   10     h
variant 1    1.0   1.1   0.01    1000   10     h
variant 2    0.8   1.1   0.01    1000   10     h
variant 3    0.9   1.1   0.1     1000   10     h
variant 4    0.9   1.1   0.001   1000   10     h

The goal is to maximize the expected value of the weighted sum of the mean Work In Process inventories, f = −Σ_{i=1}^{5} h_i H̄_i, subject to the constraint E[SL(x)] ≥ 90.0%.

HYBRID GENETIC ALGORITHM PARAMETERS

The dimension of the related optimization problem for the Kanban, Base Stock, CONWIP and CONWIP/Kanban Hybrid systems is dim = 5. For the Extended and Generalized Kanban systems the dimensionality of the problem rises to dim′ = 10. The authors conducted a series of pilot experiments in order to find the most suitable hybrid genetic algorithm parameters for this particular problem. An important issue was to resolve the trade-off between the quality of the final solution and the computational cost, as the evaluation of the fitness of candidate solutions is computationally expensive. We experimented


with population sizes in the range [20, 50], crossover probabilities in the range [0.3, 0.8] and mutation probabilities in the range [0.001, 0.1]. For the one-parameter-per-stage policies the single-point crossover operator was implemented, whereas for the two-parameter-per-stage policies we applied uniform crossover. The reason for making this distinction is that offspring produced with the latter technique are generally more diverse compared to their parents than offspring generated by single-point crossover. This is a desirable property because the search space for the two-parameter-per-stage policies is orders of magnitude larger than that for the one-parameter-per-stage policies, and therefore a more intense exploration strategy is required. The neighborhood of the local search algorithm was set to include all points around the current point x with Euclidean distance equal to or less than 1: N(x) = {y ∈ S : Σ_{i=1}^{n} (x_i − y_i)² ≤ 1}. Given that the decision variables are integers, this is the minimum neighborhood size one could select; but since the major part of the search is carried out by the genetic algorithm and the local search procedure is used merely to fine-tune the already obtained solutions, a small neighborhood is acceptable. Admittedly, the parameters of the optimization algorithm were initialized heuristically, and one cannot discard the possibility that different parameter values could yield better results, but a full factorial experiment for the design of the optimization scheme would fall beyond the scope of this chapter.

The genetic algorithm's parameters that were ultimately selected are: population size s = 30, crossover probability Pcross = 0.5, and mutation probability Pmut = 0.05. The individual which scored the highest fitness value passes to the next generation with probability 1, i.e. the elite count parameter was set to e = 1. Each individual was evaluated m = 50 times, where each replicate was executed for 80,000 time units. The GA produced 100 generations for the problems with dimensionality dim = 5 (Kanban, Base Stock, CONWIP, and CONWIP/Kanban Hybrid systems). For the problems with dimensionality dim′ = 10 (Extended Kanban and Generalized Kanban systems) the GA produced 240 generations of individuals.

COMPUTATIONAL COST

The simulators for the six pull-type manufacturing systems as well as the proposed optimization algorithm were coded in C++, and the experiments were conducted on a PC with an AMD Athlon processor at 1.8 GHz and 512 MB RAM. The factor that primarily affects the execution time of the hybrid GA is the control parameter evaluation, i.e. the computational cost of the simulation model. Every solution evaluation, that is, 50 independently seeded executions of the simulation model, lasts approximately 5 seconds, and therefore the evaluation of a generation of candidate solutions (30 individuals) takes about 2.5 minutes to complete. The execution of the hybrid GA for a one-parameter-per-stage policy (100 generations) lasts approximately 4.7 hours, of which 4.2 hours are consumed by the simulation model. The execution of the hybrid GA for a two-parameter-per-stage policy (240 generations) lasts approximately 11 hours, of which 10 hours are devoted to the solution evaluation phase.
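The timing figures above follow from simple counting; the helpers below reproduce the arithmetic (the 5-second per-evaluation cost and the population size are the chapter's, the function names are ours):

```cpp
#include <cassert>
#include <cmath>

// Wall-clock time for evaluating one generation: every individual costs one
// solution evaluation (50 seeded replicates, about 5 s in the chapter).
double generationSeconds(int popSize, double secondsPerEvaluation) {
    return popSize * secondsPerEvaluation;
}

// Total simulation time of a whole GA run, in hours.
double runHours(int generations, int popSize, double secondsPerEvaluation) {
    return generations * generationSeconds(popSize, secondsPerEvaluation) / 3600.0;
}
```

generationSeconds(30, 5.0) gives 150 s, i.e. the 2.5 minutes per generation quoted above; runHours(100, 30, 5.0) ≈ 4.17 h of simulation within the 4.7 h total, and runHours(240, 30, 5.0) = 10 h within the 11 h total, matching the reported breakdown.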

DEATH PENALTY RESULTS

For the implementation of the hybrid genetic algorithm with the "death" penalty we used the following penalty function:

p(x) = 0.0 if E[SL(x)] ≥ 90.0%; −1000.0 if E[SL(x)] < 90.0%  (5)

Constrained Optimization of JIT Manufacturing Systems with Hybrid Genetic Algorithm

We reiterate that the expected value E[SL(x)] is the arithmetic mean of m measurements of SL. This is a very straightforward implementation: every individual that does not satisfy the service level constraint is penalized heavily and will most probably be discarded in the next iteration of the algorithm. The results from the hybrid genetic algorithm runs for each control policy are displayed in Table 2. The rows containing the results of the standard genetic algorithm (without the local search component) are labeled with the initials GA followed by the control policy's name, while the results of the hybrid algorithm are labeled with the initials GAL and the control policy's name. We made this distinction in order to clarify whether the local search offers a significant improvement or not. The last column of Table 2 contains the fitness values of the corresponding parameter sets, calculated according to Equation (4). The CONWIP system scored the highest fitness value vb = −25.29, followed by the Generalized Kanban, the Extended Kanban, the Hybrid CONWIP/Kanban, the Kanban and the Base Stock systems in decreasing fitness value order. With the exception of the CONWIP system, the local search algorithm enhanced the best fitness values found by the standard genetic algorithm by 1.35% to 9.34%. It is important to stress that this improvement refers to the fitness values and not the actual objective function values of the optimization problem.

Table 2. Best parameter sets and fitness values for "death" penalty function (n.i. stands for "no improvement")

| Policies | x1(/x1') | x2(/x2') | x3(/x3') | x4(/x4') | x5(/x5') | C | v(x) |
|---|---|---|---|---|---|---|---|
| GA_Kanban (Ki) | 2 | 2 | 1 | 8 | 13 | - | -32.11 |
| GAL_Kanban (Ki) | 1 | 2 | 1 | 8 | 12 | - | -29.12 |
| GA_BaseStock (Si) | 2 | 7 | 2 | 14 | | - | -31.70 |
| GAL_BaseStock (Si) | 1 | 6 | 2 | 14 | | - | -29.49 |
| GA_CONWIP (Si, C) | 5 | 3 | 5 | 6 | | 19 | -25.29 |
| GAL_CONWIP (Si, C) | n.i. | n.i. | n.i. | n.i. | n.i. | n.i. | n.i. |
| GA_Hybrid (Ki, i=1,2,3,4, B5, C) | 1 | 1 | 3 | 7 | 10 | 22 | -28.75 |
| GAL_Hybrid (Ki, i=1,2,3,4, S5, C) | 1 | 1 | 2 | 7 | 10 | 21 | -26.71 |
| GA_E. Kanban (Ki/Si) | 10/0 | 11/2 | 9/1 | 2/2 | 23/15 | - | -26.54 |
| GAL_E. Kanban (Ki/Si) | 6/0 | 11/2 | 6/1 | 2/2 | 23/15 | - | -26.18 |
| GA_G. Kanban (Ki/Si) | 8/4 | 2/0 | 15/3 | 16/2 | 14/14 | - | -1028.14 |
| GAL_G. Kanban (Ki/Si) | 6/2 | 2/0 | 15/3 | 15/2 | 14/14 | - | -26.15 |

In the case of the Generalized Kanban system, the local search algorithm appears to have "repaired" the infeasible solution found by the standard genetic algorithm. The average percentage of infeasible solutions in the final generations of the genetic algorithm runs was 7.2%. Table 3 contains the objective function values E[f(x)]* and service levels E[SL(x)]*% with 95% confidence bounds for the best parameter sets found by both the standard genetic algorithm and the hybrid genetic algorithm with the "death" penalty function. These data were produced by running 50 replicates of each of the six simulation models for tsim = 1,500,000.0 time units and then averaging the corresponding variables. This is an 18.75 times

Table 3. Objective function values E[f(x)]* and % service levels E[SL(x)]*% for best parameter sets found by standard GA and hybrid GA with "death penalty" (95% confidence). (K stands for Kanban, BS for Base Stock, C for CONWIP, C/K H for CONWIP/Kanban Hybrid, EK for Extended Kanban, GK for Generalized Kanban; n.i. stands for "no improvement")

| Policies | GA: E[SL(x)]*% | GA: E[f(x)]* | Hybrid GA (with local search): E[SL(x)]*% | Hybrid GA (with local search): E[f(x)]* |
|---|---|---|---|---|
| K | 91.17 ± 0.09 | -32.07 ± 0.04 | 90.08 ± 0.09 | -29.13 ± 0.03 |
| BS | 90.37 ± 0.09 | -31.73 ± 0.02 | 90.03 ± 0.11 | -29.53 ± 0.02 |
| C | 89.87 ± 0.08 | -25.26 ± 0.01 | n.i. | n.i. |
| C/K H | 91.34 ± 0.08 | -28.83 ± 0.03 | 90.43 ± 0.11 | -26.79 ± 0.04 |
| EK | 90.29 ± 0.07 | -26.57 ± 0.0 | 90.21 ± 0.09 | -26.23 ± 0.03 |
| GK | 90.23 ± 0.09 | -28.18 ± 0.02 | 89.91 ± 0.01 | -26.12 ± 0.02 |


longer simulation than that used to evaluate the fitness of the individuals in the genetic algorithm. By using exhaustively long simulation times we can compute far more accurate estimators (indicated by the superscript *), which can be considered to approximate the true expected values of these performance measures. This way, relatively safe conclusions can be drawn regarding both the quality of the solutions found by the hybrid genetic algorithms and the performance of each of the competing pull-type control policies. Of course, by increasing the simulation time and/or the resampling during the optimization itself, the algorithm would be less likely to be misled by "lucky" candidate solutions which score well once by chance, but the resulting computational cost is prohibitive. Apart from that, we are interested in establishing whether the algorithm is capable of locating good, and hopefully optimal, solutions in the presence of a relatively low signal-to-noise ratio.

By observing the data in Table 3 we see that the local search algorithm produced actual improvements in the objective function values while preserving the feasibility of the solutions in all cases except the CONWIP and Generalized Kanban systems. The results regarding the Generalized Kanban system are somewhat contradictory: in Table 2 the local search algorithm appears to have repaired the infeasible solution, while in Table 3 the original solution is found to be feasible and the local search solution is the one which violates the constraint. These results merely demonstrate an inherent weakness, in "noisy" environments, of search algorithms that generate a single point per iteration, compared to genetic algorithms. A hill-climbing method like the one used here compares candidate solutions in each iteration only with the best solution found so far, and is therefore easily misled by a "lucky" solution. In a genetic algorithm, on the contrary, for a solution to be maintained it must outweigh an entire collection of solutions and not just the previous best one. As an overall assessment, we could argue that the results presented in this section support the conclusion that even with this simple static penalty function a genetic algorithm can produce quite good solutions.

Two typical plots of the best solution found by the genetic algorithm with the "death" penalty function versus the number of generations can be found in Figure 2. The two plots exhibit a broadly similar pattern. For illustration purposes we use a time window of 140 generations and divide each plot area into two regions with a vertical dotted line. Notice that on the left side of the plots, if we disregard random fluctuations caused by the simulation model, the two curves are increasing

Figure 2. Typical plots of best fitness value found by GA with “death” penalty function versus number of iterations (140 iterations window)


almost monotonically. In this region, the best solution found by the algorithm does not lie near the boundary between the feasible and infeasible regions. At the point where the curve meets the vertical dotted line, the curve suddenly "dives". This indicates that the currently best individual is marginally feasible (or infeasible) and that it failed to satisfy the constraint in this particular evaluation. As a consequence, it was penalized heavily and substituted by another individual which happened to have a lower fitness value. From this point on, the curve displays similarly abrupt fluctuations, indicating that the population evolves towards the boundary between the feasible and infeasible regions and the optimal solution.
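The evaluation scheme just described, averaging m noisy simulation replications and assigning a heavy "death" penalty to any individual whose mean service level misses the target, can be sketched as follows. The simulation stand-in, its coefficients, and the penalty magnitude of 1000 are illustrative assumptions, not values from the chapter:

```python
import random
import statistics

TARGET_SL = 90.0        # target service level (%), as in the chapter
DEATH_PENALTY = 1000.0  # illustrative magnitude; the chapter does not report a constant

def simulate(params, seed):
    """Hypothetical stand-in for one simulation replication.

    Returns (objective f(x), service level SL) with additive Gaussian
    noise, mimicking the stochastic simulation output."""
    random.seed(seed)
    base_f = -25.0 - 0.1 * sum(params)   # assumed objective shape
    base_sl = 88.0 + 0.25 * sum(params)  # assumed service-level shape
    return base_f + random.gauss(0.0, 0.5), base_sl + random.gauss(0.0, 0.5)

def death_penalty_fitness(params, m=5):
    """Fitness = mean of m replications; an individual whose mean service
    level falls below the target is penalized heavily and will almost
    surely be discarded at the next selection step."""
    reps = [simulate(params, s) for s in range(m)]
    mean_f = statistics.mean(r[0] for r in reps)
    mean_sl = statistics.mean(r[1] for r in reps)
    if mean_sl < TARGET_SL:
        return mean_f - DEATH_PENALTY  # "death" penalty
    return mean_f
```

A feasible parameter set keeps its (noisy) objective estimate as fitness, while an infeasible one drops far below every feasible competitor, which is exactly why a marginally feasible incumbent "dives" when one evaluation misses the target.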

DESIGNING EXPONENTIAL PENALTY FUNCTION

Intuitively, the "death" penalty approach, as attractive as it is due to its simplicity, does not seem to be the best way to handle constraints. Even the slightest violation of the imposed constraints results in heavily penalizing a good solution. This way, an individual which scores excellently for a series of consecutive generations may be discarded by the algorithm. This is an undesirable property in the kind of optimization problem we are dealing with, where fitness measurements are distorted by random fluctuations caused by the stochasticity of the simulation model. Another weakness of this approach is that it damages the diversity of the population, as the majority of the individuals are crowded in the feasible region. Given that the optimal solution lies on the feasibility boundary, the search would probably be more efficient if the population evolved towards the boundary from both the feasible and infeasible regions. For example, it is unclear why a slightly infeasible individual which is located very close to the optimal solution should be assigned a worse fitness value than a feasible individual that scores poorly. For all of the above reasons, the idea of penalizing infeasible solutions according to the level of the constraint violation seems more appealing (see Venkatraman and Yen (2005) for guidelines on designing penalty functions).

The problem that needs to be addressed now is how to design such a "soft-limit" penalty function. A reasonable choice is to use an exponential penalty function p(x) = c^u, where c = const ∈ ℝ and u = t − E[SL(x)] is the difference between the target service level t and the measured expected service level E[SL(x)]. The intuitive minimal penalty rule (Le Riche et al., 1995) suggests that the penalty for infeasible individuals should be just above the threshold below which infeasible solutions score better than their feasible, possibly optimal, neighbors. In practice, however, this is quite difficult to achieve. The procedure we followed in order to implement this intuition, at least to some extent, is the following. Using the output of the executions of the genetic algorithm with the "death" penalty function, we created plots like the ones in Figure 2. By examining these plots it was easy to locate solutions that were very close to the feasibility boundary (these points are indicated by the characteristic "dive" of the curve). The next step was to examine the neighborhood of such a point in order to determine how a small change in the parameters affected the service level SL as well as the objective function f(x). The value of SL is affected primarily by the control parameters of the last three machines, so we could limit ourselves to a relatively small neighborhood. Having collected these data, we were able to select the parameter c of the penalty function p(x) = c^u in the spirit of the "minimal penalty rule". This is an empirical technique that may not be easy, or even possible, to apply to other problems; nevertheless, it provides the means to design a penalty function that works well and outperforms the "death" penalty approach most of the time, as supported by the experimental results presented in the following section.


EXPONENTIAL PENALTY FUNCTION RESULTS

After following the procedure outlined in the previous section, we were able to construct the following penalty function (6):

p(x) = 0.0 for u ≤ 0.0;  p(x) = 9.0^u for 0.0 < u < 3.0;  p(x) = 9.0^3 for 3.0 ≤ u,   (6)

where u = t − E[SL(x)] is the difference between the target service level t = 90.0% and the measured service level E[SL(x)]. Note that for service levels equal to or lower than 87.0% we fix the penalty at a constant value. The reason is that we want the raw fitness values v(x) to remain within a range in which the selection operator of the genetic algorithm works properly. We reiterate that in our implementation the values of the individuals are scaled proportionally to their raw fitness measurements prior to selection.

The results from the hybrid genetic algorithm runs for each control policy are displayed in Table 4. The rows containing the results of the standard genetic algorithm (without the local search component) are labeled with the initials GA followed by the control policy's name, while the results of the hybrid algorithm are labeled with the initials GAL and the control policy's name. The last column of Table 4 contains the fitness values of the corresponding parameter sets. The CONWIP system scored the highest fitness value vb = −25.31, followed by the Extended Kanban, the Hybrid CONWIP/Kanban, the Generalized Kanban, the Base Stock and the Kanban systems in decreasing fitness value order. The hybrid optimization algorithm outperformed the standard genetic algorithm in the cases of the Base Stock, the Extended Kanban and the Generalized Kanban systems. For the three remaining systems the local search failed to improve v(x). Table 5 contains the objective function values E[f(x)]* and service levels E[SL(x)]*% with 95% confidence bounds of the best parameters found by both

Table 4. Best parameter sets and fitness values for exponential penalty function (n.i. stands for "no improvement")

| Policies | x1(/x1') | x2(/x2') | x3(/x3') | x4(/x4') | x5(/x5') | C | v(x) |
|---|---|---|---|---|---|---|---|
| GA_Kanban (Ki) | 1 | 1 | 1 | 7 | 13 | - | -28.05 |
| GAL_Kanban (Ki) | n.i. | n.i. | n.i. | n.i. | n.i. | - | n.i. |
| GA_BaseStock (Si) | 3 | 17 | | | | - | -27.19 |
| GAL_BaseStock (Si) | 2 | 17 | | | | - | -25.95 |
| GA_CONWIP (Si, C) | 6 | 1 | 6 | 6 | | 19 | -25.31 |
| GAL_CONWIP (Si, C) | n.i. | n.i. | n.i. | n.i. | n.i. | n.i. | n.i. |
| GA_Hybrid (Ki, i=1,2,3,4, B5, C) | 1 | 4 | 5 | 9 | 1 | 20 | -25.92 |
| GAL_Hybrid (Ki, i=1,2,3,4, S5, C) | n.i. | n.i. | n.i. | n.i. | n.i. | n.i. | n.i. |
| GA_E. Kanban (Ki/Si) | 4/1 | 10/0 | 2/2 | 4/2 | 22/15 | - | -25.82 |
| GAL_E. Kanban (Ki/Si) | 3/1 | 10/0 | 2/2 | 4/2 | 22/15 | - | -25.72 |
| GA_G. Kanban (Ki/Si) | 12/5 | 7/0 | 14/4 | 13/1 | 16/14 | - | -29.34 |
| GAL_G. Kanban (Ki/Si) | 9/2 | 3/0 | 13/4 | 13/1 | 15/14 | - | -25.95 |


Table 5. Objective function values E[f(x)]* and % service levels E[SL(x)]*% for best parameter sets found by standard GA and hybrid GA with "exponential penalty" (95% confidence). (K stands for Kanban, BS for Base Stock, C for CONWIP, C/K H for CONWIP/Kanban Hybrid, EK for Extended Kanban, GK for Generalized Kanban; n.i. stands for "no improvement")

| Policies | GA: E[SL(x)]*% | GA: E[f(x)]* | Hybrid GA (with local search): E[SL(x)]*% | Hybrid GA (with local search): E[f(x)]* |
|---|---|---|---|---|
| K | 90.10 ± 0.08 | -28.01 ± 0.03 | n.i. | n.i. |
| BS | 90.34 ± 0.12 | -27.20 ± 0.03 | 89.95 ± 0.13 | -25.97 ± 0.03 |
| C | 89.94 ± 0.10 | -25.27 ± 0.02 | n.i. | n.i. |
| C/K H | 90.21 ± 0.09 | -25.87 ± 0.02 | n.i. | n.i. |
| EK | 90.07 ± 0.11 | -25.84 ± 0.02 | 89.98 ± 0.10 | -25.71 ± 0.02 |
| GK | 90.31 ± 0.09 | -29.34 ± 0.02 | 89.73 ± 0.09 | -25.89 ± 0.02 |

the standard genetic algorithm and the hybrid genetic algorithm with the exponential penalty function. These data were produced by running 50 replicates of each of the six simulation models for tsim = 1,500,000.0 time units and then averaging the corresponding variables. All three solutions found by the local search algorithm when initialized with the solutions of the genetic algorithm were marginally infeasible. The local search algorithm falsely interpreted the effect of random noise as an actual improvement and thus substituted infeasible solutions for feasible ones. Of course, we cannot rule out that this was caused in part by the penalty function itself. However, we must mention that the amount of the constraint violation was rather trivial. The average percentage of infeasible solutions in the final generations of the genetic algorithm runs with the exponential penalty function was 8.5%.
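The failure mode blamed here, a point-by-point search remembering a single optimistic noisy measurement forever, is easy to reproduce with a toy model (the objective shape, noise level and step size below are all illustrative assumptions):

```python
import random

def noisy_eval(x, sigma=0.5):
    # toy objective: true value is -|x - 3|, observed through Gaussian noise
    return -abs(x - 3.0) + random.gauss(0.0, sigma)

def hill_climb(x0, steps=200, sigma=0.5, seed=0):
    """Point-by-point search that keeps only the best noisy score seen so
    far; a single 'lucky' draw is remembered forever, so the recorded best
    score is optimistically biased relative to the true objective value."""
    random.seed(seed)
    best_x, best_score = x0, noisy_eval(x0, sigma)
    for _ in range(steps):
        cand = best_x + random.uniform(-0.5, 0.5)
        score = noisy_eval(cand, sigma)
        if score > best_score:  # compared only against the incumbent
            best_x, best_score = cand, score
    return best_x, best_score

# averaged over runs, the remembered score exceeds the true objective value
runs = [hill_climb(0.0, seed=s) for s in range(30)]
bias = sum(score - (-abs(x - 3.0)) for x, score in runs) / len(runs)
```

A GA selection step, by contrast, compares each candidate against a whole population of freshly re-evaluated individuals, so a single lucky draw is much less likely to dominate.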

DISCUSSION ON THE PERFORMANCE OF THE TWO PENALTY FUNCTIONS

By comparing the data in Tables 3 and 5 we notice that the standard genetic algorithm with the exponential penalty function outperforms both the standard genetic algorithm and the hybrid algorithm with the "death penalty" function for all systems except the CONWIP and the Generalized Kanban. In terms of objective function value, the use of the exponential penalty function rather than the "death penalty" improved the solution by 3.84% for the Kanban system, by 7.89% for the Base Stock system and by 3.43% for the CONWIP/Kanban Hybrid system. For the Extended Kanban system we observed a 1.5% lower value of E[f(x)]*,

while for the CONWIP system the results were practically the same. Only for the Generalized Kanban system did the "death" penalty approach produce a 4% better solution than the exponential penalty approach. The superiority of the exponential penalty function over the "death" penalty function can be explained qualitatively as follows. Figure 3 shows typical plots of the genetic algorithm's convergence with the "death" penalty and the exponential penalty function. Notice that at some point near the 60th generation both curves are at approximately the same height. The best solutions found by the two implementations of the algorithm at these points probably belong to the same level set and lie somewhere


close to the feasibility boundary. In some subsequent iteration of the algorithm with the "death" penalty, this solution apparently violates the constraint and is therefore discarded. The height of the curve shows that the individual which replaced it has a significantly lower fitness value. This is not the case for the algorithm with the exponential penalty, where the properly designed penalty function prevents the good solution from being discarded, at least not in favor of a much worse candidate solution. Concluding the discussion on the performance of the hybrid GA, we summarize our major findings: i) the incorporation of a local search element can enhance the genetic algorithm's performance, with the caveat that the local search algorithm is more susceptible than the genetic algorithm to falsely interpreting random noise as actual objective function improvements; ii) the "death penalty" approach will usually yield worse results than a function which penalizes solutions according to the level of the constraint violation, like the exponential penalty function used here.
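The contrast between the two penalty schemes reduces to a simple numeric comparison: under the "death" penalty a marginal violation costs as much as a gross one, while under the exponential penalty of Equation (6) the cost grows with the violation (the death-penalty magnitude of 1000 is an illustrative assumption):

```python
def death_penalty(u, big=1000.0):
    # any violation, however small, is punished maximally
    return big if u > 0.0 else 0.0

def exponential_penalty(u, c=9.0, u_cap=3.0):
    # Equation (6): the penalty grows with the violation, capped at c**u_cap
    return 0.0 if u <= 0.0 else c ** min(u, u_cap)

for u in (0.1, 1.0, 3.0):  # service-level violations of 0.1%, 1% and 3%
    print(u, death_penalty(u), round(exponential_penalty(u), 2))
```

A violation of u = 0.1 costs about 1.25 fitness units under the exponential scheme versus the full 1000 under the "death" penalty, which is exactly why a near-optimal, marginally infeasible individual survives selection in one case and is wiped out in the other.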

COMPARISON OF PULL TYPE PRODUCTION CONTROL POLICIES – SENSITIVITY ANALYSIS

Table 6 presents the objective function values and the corresponding service levels for the six JIT

control policies with the best parameters found by the proposed optimization strategy. Note that the Base Stock, CONWIP and Extended Kanban solutions attain a service level below 90%, but since the constraint is within the 95% confidence half-width we consider them to be feasible. The CONWIP policy ranks first, followed closely by the Extended Kanban, Hybrid and Base Stock policies. The Kanban and Generalized Kanban policies occupy the last two positions of the objective function value ranking. Since in this simulation scenario the demand process pushes the manufacturing system towards its maximum throughput rate, the poor performance of the Kanban mechanism is anticipated: this policy offers tight coordination between the manufacturing stages but does not respond rapidly to incoming orders. On the other hand, the performance of the Generalized Kanban system is somewhat unexpected, since it is supposed to be an enhancement of the original Kanban policy. However, this is not the case for the control policy that is most closely related to the Generalized Kanban, the Extended Kanban mechanism, which ranks second. The main characteristic of the Base Stock policy, namely fast reaction to demand, is supported by the experimental output. Finally, the fact that in a CONWIP or CONWIP/Kanban Hybrid system the WIP tends to accumulate in the last buffer allows these two policies to achieve a high service level while operating in a lean manufacturing mode.

Figure 3. Typical plots of best fitness value found by GA with "death" and exponential penalty functions versus number of iterations

Table 6. Objective function values – service levels of pull control policies with best parameters for base simulation case

| | Kanban | Base Stock | CONWIP | CONWIP/Kanban Hybrid | Extended Kanban | Generalized Kanban |
|---|---|---|---|---|---|---|
| E[f(x)] | -28.01 ± 0.03 | -25.97 ± 0.03 | -25.27 ± 0.02 | -25.87 ± 0.02 | -25.71 ± 0.02 | -28.18 ± 0.02 |
| E[SL(x)] | 90.10 ± 0.08 | 89.95 ± 0.13 | 89.94 ± 0.10 | 90.21 ± 0.09 | 89.98 ± 0.10 | 90.23 ± 0.09 |

Table 7 shows the statistics of the system's performance measures for the four variants of the basic simulation case. In the case where the demand rate increases (first column of Table 7) we notice that the service level as well as the average WIP decreases for all policies, but some control mechanisms are more sensitive to this change than others. Specifically, the service level in the Kanban and Hybrid systems decreases dramatically, whereas the Base Stock and Generalized Kanban policies seem to be more robust with respect to the increase of the demand rate. In the second variant (decreased arrival rate) of the basic simulation case one can see that all six control policies achieve practically the same service level. This is an indication that when the demand can be easily satisfied by the manufacturing system, the role of the production control policy diminishes. In this case the distribution of the objective function values over the control mechanisms also tends to level out. The increase of the standard deviation of the processing times (variant 3) has an effect similar to that of the increase of the demand rate. This can be attributed to the resulting decreased coordination among the various production stages, which increases the frequency of machine starvation and blockage events. Again, the Kanban mechanism is the most affected by this parameter, due to its tight production coordination scheme, while the Generalized Kanban mechanism seems to react rather robustly. Finally, the decrease of the standard deviation of the processing times (variant 4) seems to have a negligible effect on the system's behavior, as indicated by the experimental data presented in the last column of Table 7.

Table 7. Objective function values – service levels of pull control policies with best parameters for variants of base simulation case

| Policy | Ra=1.0: E[f(x)]* | Ra=1.0: E[SL(x)]* | Ra=0.8: E[f(x)]* | Ra=0.8: E[SL(x)]* | st.d.=0.1: E[f(x)]* | st.d.=0.1: E[SL(x)]* | st.d.=0.001: E[f(x)]* | st.d.=0.001: E[SL(x)]* |
|---|---|---|---|---|---|---|---|---|
| K | -15.21 | 55.61 | -32.77 | 96.52 | -20.92 | 76.2 | -28.32 | 90.43 |
| BS | -23.75 | 64.2 | -28.57 | 96.05 | -25.48 | 88 | -26.01 | 90.07 |
| C | -19.15 | 62.89 | -28.68 | 96.18 | -24.54 | 87.91 | -25.33 | 90.12 |
| H | -14.71 | 57.91 | -30.06 | 96.38 | -21.23 | 79.43 | -26.03 | 90.42 |
| EK | -18.05 | 61.89 | -29.33 | 96.21 | -24.96 | 88.05 | -25.74 | 90.07 |
| GK | -20.57 | 62.92 | -31.74 | 96.29 | -27.56 | 88.72 | -28.17 | 90.27 |

Tables 8 and 9 contain data regarding the sensitivity of the system's behavior with respect to the parameters of the controlling policy. For example, in Table 8, the cells in the i-th row that belong to the columns labeled "Base Stock" show the objective function value and service level that result when the i-th component (parameter Si) of the corresponding decision variable vector is increased by the minimum possible value, i.e., by one. In general, the service level (objective function) is an increasing (decreasing) function of the control parameters. However, the rate with which the service level/objective function changes depends on the type of the control policy and the index (position) of the parameter in the parameter vector. For instance, in the five-station Kanban system, adding an additional kanban in the last stage will result in a larger decrease of the objective function value than adding an extra kanban in any of the upstream stages. It is interesting to observe the cases of the CONWIP and CONWIP/Kanban Hybrid systems, where the unitary increase of a control parameter in any of the stages 2, 3, 4, 5 seems to have the same effect. This can be explained by the fact that since

the last workstation is authorized to produce whenever it is able to, all parts in upstream buffers are continuously "pushed" towards the finished goods buffer, and therefore WIP in the intermediate stages is scarce. By increasing the initial stock in the first buffer, the average WIP in the intermediate stages increases, and thus this change has a greater impact on the objective value and service level. The Generalized Kanban and Extended Kanban policies are characterized by two parameters per stage, and therefore the sensitivity analysis must consider both of them. The systems' performance for a unitary change in the base stock of the i-th stage is shown in the columns labeled "base stocks", whereas the cells under the label "free kanbans" contain the system performance when the total number of kanbans of the i-th stage is increased by one but the base stock remains unaltered. The effect of adding an extra unit of base stock to a stage of a Generalized/Extended Kanban system is similar to that of adding a kanban to a stage of a Kanban system. The mean WIP is also an increasing function of the control parameters (Ki − Si), but, as one can see from Table 9, rather large changes in the number of free kanbans are needed for a significant change in the objective function value to occur.
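The one-parameter-at-a-time probing behind Tables 8 and 9 can be sketched generically; here `evaluate` stands in for a full simulation run, and the toy objective below is purely illustrative:

```python
def sensitivity_rows(x, evaluate):
    """Increase each control parameter by the minimum step (one kanban or
    one unit of stock) and record the resulting (f, SL) pair, mirroring
    the rows of Tables 8 and 9."""
    rows = []
    for i in range(len(x)):
        x_plus = list(x)
        x_plus[i] += 1  # unitary increase of parameter i
        f, sl = evaluate(tuple(x_plus))
        rows.append((i + 1, f, sl))
    return rows

# illustrative stand-in for the simulation: more cards -> lower f, higher SL
toy_eval = lambda v: (-25.0 - 0.5 * sum(v), 80.0 + 0.4 * sum(v))
rows = sensitivity_rows((1, 2, 1, 8, 13), toy_eval)
```

With a real simulation model as `evaluate`, the per-index differences in the returned rows reproduce exactly the kind of comparison made above, for example that a unit change in a late-stage parameter moves the objective more than the same change upstream.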

Table 8. Objective function value – service level sensitivity to parameter vector (Kanban, Base Stock, CONWIP, Hybrid)

| i | Kanban: E[f(x)]* | Kanban: E[SL(x)]* | Base Stock: E[f(x)]* | Base Stock: E[SL(x)]* | CONWIP: E[f(x)]* | CONWIP: E[SL(x)]* | CONWIP/Kanban Hybrid: E[f(x)]* | CONWIP/Kanban Hybrid: E[SL(x)]* |
|---|---|---|---|---|---|---|---|---|
| 1 | -29.1 | 90.32 | -27.15 | 90.35 | -27.53 | 90.92 | -28.03 | 91.13 |
| 2 | -29.34 | 90.51 | -27.19 | 90.26 | -27.24 | 90.82 | -27.84 | 91.02 |
| 3 | -29.52 | 90.65 | -27.77 | 90.80 | -27.24 | 90.77 | -27.83 | 91.02 |
| 4 | -29.61 | 90.51 | -27.83 | 90.74 | -27.24 | 90.81 | -27.79 | 91.0 |
| 5 | -29.86 | 90.94 | -27.86 | 90.78 | -27.24 | 90.83 | -27.76 | 90.99 |


Table 9. Objective function value – service level sensitivity to parameter vector (Extended Kanban, Generalized Kanban)

| i | EK base stocks (Si): E[f(x)]* | EK base stocks (Si): E[SL(x)]* | EK free kanbans (Ki−Si): E[f(x)]* | EK free kanbans (Ki−Si): E[SL(x)]* | GK base stocks (Si): E[f(x)]* | GK base stocks (Si): E[SL(x)]* | GK free kanbans (Ki−Si): E[f(x)]* | GK free kanbans (Ki−Si): E[SL(x)]* |
|---|---|---|---|---|---|---|---|---|
| 1 | -26.75 | 90.21 | -25.83 | 90.07 | -29.18 | 90.35 | -28.20 | 90.27 |
| 2 | -27.12 | 90.47 | -25.79 | 90.01 | -29.65 | 90.63 | -28.27 | 90.29 |
| 3 | -27.16 | 90.55 | -25.8 | 90.1 | -29.58 | 90.69 | -28.29 | 90.24 |
| 4 | -27.38 | 90.66 | -25.77 | 90.06 | -29.81 | 90.78 | -28.26 | 90.28 |
| 5 | -27.61 | 90.86 | -25.73 | 90.06 | -30.08 | 91.13 | -28.21 | 90.32 |

CONCLUSION AND FUTURE RESEARCH

We implemented a hybrid optimization technique which combines a genetic algorithm with a local search procedure to find optimal decision variables for a family of JIT manufacturing systems. The goal was to minimize a weighted sum of the mean Work-In-Process inventories subject to the constraint of maintaining a target service level. Our numerical results indicate that the performance of a genetic algorithm can be enhanced by incorporating a local search component; however, the local search algorithm is more susceptible than the genetic algorithm to falsely interpreting random noise as actual objective function improvements. Moreover, our results support the intuitive perception that penalizing candidate solutions according to the level of constraint violation will usually yield better results than the "death penalty" approach. The performance of the JIT control policies with optimized parameters is presented analytically and commented upon. Finally, we conducted a sensitivity analysis with respect to the variation of the demand rate, the standard deviation of the service rates and the control parameter vector. The results of the analysis offer considerable insight into the


underlying mechanics of the JIT control policies under consideration. Constraint handling and “noisy” or dynamic environments in the context of genetic optimization of manufacturing systems are currently active research fields. Indicatively, a relatively recent and interesting direction is to use evolutionary multi-objective techniques to handle constraints as additional objectives.

REFERENCES

Alabas, C., Altiparmak, F., & Dengiz, B. (2002). A comparison of the performance of artificial intelligence techniques for optimizing the number of kanbans. The Journal of the Operational Research Society, 53(8), 907–914. doi:10.1057/palgrave.jors.2601395

Bellman, R. E. (1957). Dynamic Programming. Princeton: Princeton University Press.

Berkley, B. J. (1992). A review of the kanban production control research literature. Production and Operations Management, 1(4), 393–411. doi:10.1111/j.1937-5956.1992.tb00004.x


Bhat, U. N. (2008). An Introduction to Queueing Theory: Modelling and Analysis in Applications. Boston: Birkhauser.

Bowden, R. O., Hall, J. D., & Usher, J. M. (1996). Integration of evolutionary programming and simulation to optimize a pull production system. Computers & Industrial Engineering, 31(1&2), 217–220. doi:10.1016/0360-8352(96)00115-5

Buzacott, J. A., & Shanthikumar, J. G. (1992). A general approach for coordinating production in multiple cell manufacturing systems. Production and Operations Management, 1(1), 34–52. doi:10.1111/j.1937-5956.1992.tb00338.x

Buzacott, J. A., & Shanthikumar, J. G. (1993). Stochastic Models of Manufacturing Systems. New York: Prentice Hall.

Dallery, Y., & Liberopoulos, G. (2000). Extended kanban control system: combining kanban and base stock. IIE Transactions, 32(4), 369–386. doi:10.1080/07408170008963914

Fitzpatrick, J. M., & Grefenstette, J. J. (1988). Genetic algorithms in noisy environments. Machine Learning: Special Issue on Genetic Algorithms, 3, 101–120.

Gershwin, S. B. (1994). Manufacturing Systems Engineering. New York: Prentice Hall.

Hammel, U., & Bäck, T. (1994). Evolution strategies on noisy functions: how to improve convergence properties. Parallel Problem Solving from Nature, 3, 159–168.

Howard, R. (1960). Dynamic Programming and Markov Processes. Cambridge, MA: MIT Press.

Hurrion, R. D. (1997). An example of simulation optimization using a neural network metamodel: finding the optimum number of kanbans in a manufacturing system. The Journal of the Operational Research Society, 48(11), 1105–1112.

Karaesmen, F., & Dallery, Y. (2000). A performance comparison of pull type control mechanisms for multi-stage manufacturing. International Journal of Production Economics, 68, 59–71. doi:10.1016/S0925-5273(98)00246-1

Koulouriotis, D. E., Xanthopoulos, A. S., & Gasteratos, A. (2008). A Reinforcement Learning Approach for Production Control in Manufacturing Systems. In 1st International Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems (pp. 24–31). Patra, Greece.

Koulouriotis, D. E., Xanthopoulos, A. S., & Tourassis, V. D. (2010). Simulation optimisation of pull control policies for serial manufacturing lines and assembly manufacturing systems using genetic algorithms. International Journal of Production Research, 48(10), 2887–2912. doi:10.1080/00207540802603759

Law, A., & Kelton, D. (2000). Simulation Modelling and Analysis. New York: McGraw Hill.

Le Riche, R., Knopf-Lenoir, C., & Haftka, R. T. (1995). A Segregated Genetic Algorithm for Constrained Structural Optimization. In Sixth International Conference on Genetic Algorithms (pp. 558–565).

Liberopoulos, G., & Dallery, Y. (2000). A unified framework for pull control mechanisms in multi-stage manufacturing systems. Annals of Operations Research, 93, 325–355. doi:10.1023/A:1018980024795

Panayiotou, C. G., & Cassandras, C. G. (1999). Optimization of kanban-based manufacturing systems. Automatica, 35(3), 1521–1533. doi:10.1016/S0005-1098(99)00074-6

Paternina-Arboleda, C. D., & Das, T. K. (2001). Intelligent dynamic control policies for serial production lines. IIE Transactions, 33, 65–77. doi:10.1080/07408170108936807


Shahabudeen, P., Gopinath, R., & Krishnaiah, K. (2002). Design of bi-criteria kanban system using simulated annealing technique. Computers & Industrial Engineering, 41(4), 355–370. doi:10.1016/S0360-8352(01)00060-2

Venkatraman, S., & Yen, G. G. (2005). A generic framework for constrained optimization using genetic algorithms. IEEE Transactions on Evolutionary Computation, 9(4), 424–435. doi:10.1109/TEVC.2005.846817

Shahabudeen, P., & Krishnaiah, K. (1999). Design of a Bi-Criteria kanban system using Genetic Algorithm. International Journal of Management and System, 15(3), 257–274.

Vivo-Truyols, G., Torres-Lapasio, J. R., & García-Alvarez-Coque, M. C. (2001). A hybrid genetic algorithm with local search: I. Discrete variables: optimisation of complementary mobile phases. Chemometrics and Intelligent Laboratory Systems, 59, 89–106. doi:10.1016/S0169-7439(01)00148-4

Smith, G. C., & Smith, S. S. F. (2002). An enhanced genetic algorithm for automated assembly planning. Robotics and Computer-Integrated Manufacturing, 18(5-6), 355–364. doi:10.1016/S0736-5845(02)00029-7

Spearman, M. L., Woodruff, D. L., & Hopp, W. J. (1990). CONWIP: a pull alternative to kanban. International Journal of Production Research, 28, 879–894. doi:10.1080/00207549008942761

Sugimori, Y., Kusunoki, K., Cho, F., & Uchikawa, S. (1977). Toyota production system and kanban system materialization of just-in-time and respect-for-humans systems. International Journal of Production Research, 15(6), 553–564. doi:10.1080/00207547708943149

Veatch, M. H., & Wein, L. M. (1992). Monotone control of queueing networks. Queueing Systems, 12, 391–408. doi:10.1007/BF01158810

Yamamoto, H., Qudeiri, J. A., & Marui, E. (2008). Definition of FTL with bypass lines and its simulator for buffer size decision. International Journal of Production Economics, 112(1), 18–25. doi:10.1016/j.ijpe.2007.03.007

Yang, T., Kuo, Y., & Cho, C. (2007). A genetic algorithms simulation approach for the multi-attribute combinatorial dispatching decision problem. European Journal of Operational Research, 176(3), 1859–1873. doi:10.1016/j.ejor.2005.10.048

Yuan, Q., He, Z., & Leng, H. (2008). A hybrid genetic algorithm for a class of global optimization problems with box constraints. Applied Mathematics and Computation, 197, 924–929. doi:10.1016/j.amc.2007.08.081

This work was previously published in Supply Chain Optimization, Design, and Management: Advances and Intelligent Methods, edited by Ioannis Minis, Vasileios Zeimpekis, Georgios Dounias and Nicholas Ampazis, pp. 212-231, copyright 2011 by Business Science Reference (an imprint of IGI Global).


Chapter 38

Comparison of Connected vs. Disconnected Cellular Systems: A Case Study

Gürsel A. Süer, Ohio University, USA
Royston Lobo, S.S. White Technologies Inc., USA

ABSTRACT

In this chapter, two cellular manufacturing systems, namely connected cells and disconnected cells, are studied, and their performance is compared with respect to average flowtime and work-in-process inventory under a make-to-order demand strategy. The study was performed in a medical device manufacturing company considering (a) the existing system and (b) variations of the existing system obtained by considering different process routings. Simulation models for each system and each option were developed in the ARENA 7.0 simulation software. The data used to model these systems were obtained from the company and cover a period of nineteen months. For the existing system, no dominance was established between connected and disconnected cells, as mixed results were obtained for different families. On the other hand, when different process routings were used, the connected system outperformed the disconnected system. It is suspected that the additional operation required in the disconnected system, as well as the batching requirement at the end of packaging, led to the poor performance of the disconnected cells. Finally, increased routing flexibility improved the performance of the connected cells, whereas it had adverse effects in the disconnected cells configuration.

DOI: 10.4018/978-1-4666-1945-6.ch038

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

Cellular manufacturing is a well-known application of Group Technology (GT). Cellular design typically involves determining appropriate part families and corresponding manufacturing cells. This can be done by grouping parts into families first and then forming machine cells based on those families, by forming machine cells first and deriving the part families from them, or by carrying out both formations simultaneously. In a cellular manufacturing system, there may be a manufacturing cell for each part family, or some manufacturing cells may process more than one part family, depending on the flexibility of the cells. The factors affecting cell formation can differ under various circumstances; among them are the volume of work to be performed by the machine cell, variations in the routing sequences of the part families, and processing times.

A manufacturing system in which goods are produced only after customer orders are received is called a make-to-order system. This type of system helps reduce inventory levels since no finished goods inventory is kept on hand.

In this chapter, two types of cellular layouts are analyzed, namely connected cells (single-stage cellular system) and disconnected cells (multi-stage cellular system), and their performance is compared under various circumstances for a make-to-order company. This problem was observed in a medical device manufacturing company. The management was interested in such a comparison to finalize the cellular design. It was also important to research the impact of flexibility within each system for different combinations of family routings. A similar situation of connected vs. disconnected cellular design was also observed in a shoe manufacturing company and in a jewelry manufacturing company. The authors believe that this problem has not been addressed in the literature before even though it has been observed in more than one company, and it is therefore worth studying.

BACKGROUND

The connected cells represent a continuous flow: products enter the cells in the manufacturing area, complete the machining operations, and exit through the corresponding assembly and packaging area after completing the assembly and packaging operations. In other words, the output of a cell in the manufacturing area becomes the input to the corresponding cell in the assembly and packaging area. The biggest advantage of connected cells is that material flow is smoother, and hence flowtime is expected to be shorter; this is also expected to result in lower WIP inventory.

This chapter focuses on a cellular manufacturing system similar to the one shown in Figure 1. There are three cells in the manufacturing area and three cells in the assembly and packaging area. In these cells, M1 through M3 represent the machines in the manufacturing area, and A1, A2, and P1 through P3 represent the machines in the assembly and packaging area. The products essentially follow a unidirectional flow. The three cells in the manufacturing area are similar since they have similar machines, and all the products can be manufactured in any of the cells. However, the situation is more complicated in the assembly and packaging area, where the three cells have restrictions on the products they can process. Therefore, the manufacturing cell to which a product is assigned is dictated by the packaging cell(s) in which it can be processed later on. This constraint makes the manufacturing system less flexible.

Figure 1. Connected cells

In the disconnected cell layout, the products enter the manufacturing area, complete the machining operations, and exit this area. On exiting the manufacturing area, the products can go to more than one of the assembly and packaging cells. In other words, the output from the cells in the manufacturing area can become an input for some of the cells in the assembly and packaging area (partially flexible disconnected cells) or for all of them (completely flexible disconnected cells). Figure 2 shows a partially flexible disconnected cells case in which parts from cell 1 of the manufacturing area can go to any of the cells in the assembly and packaging area, parts from cell 2 can go only to cells 2 and 3, and parts from cell 3 can go only to cell 3 of the assembly and packaging area. The disconnected system design allows more flexibility. On the other hand, due to interruptions in the flow, some delays may occur, which may eventually lead to higher flowtimes and WIP inventory levels.

Figure 2. Disconnected cells with partial flexibility

LITERATURE REVIEW

A group of researchers compared the performance of cellular layouts with process layouts. Flynn and Jacobs (1987) developed a simulation model using SLAM for an actual shop to compare the performance of a group technology layout against a process layout. Morris and Tersine (1990) developed simulation models for a process layout and a cellular layout using SIMAN; the two performance measures used were throughput time and work-in-process inventory (WIP). Yazici (2005) developed a simulation model using Promodel, based on data collected from a screen-printing company, to ascertain the influence of volume, product mix, routing, and labor flexibilities in the presence of fluctuating demand; a comparison between one-cell and two-cell configurations versus a job shop was made to determine the shortest delivery and highest utilization. Agarwal and Sarkis (1998) reviewed the conflicting results in the literature regarding the superiority of cellular layouts vs. functional layouts; they attempted to identify and compile the existing studies and understand the conflicting findings. Johnson and Wemmerlov (1996) analyzed twenty-four model-based studies and concluded that the results of these works cannot assist practitioners in making choices between existing layouts and alternative cell systems. Shafer and Charnes (1993) studied cellular manufacturing under a variety of operating conditions; queueing-theoretic and simulation models of cellular and functional layouts were developed for various shop operating environments to investigate several factors believed to influence the benefits associated with a cellular manufacturing layout.

Another group of researchers focused on analyzing cellular systems. Selen and Ashayeri (2001) used a simulation approach to identify improvements in the average daily output of an automotive company through management of buffer sizes, reduced repair time, and cycle time. Albino and Garavelli (1998) simulated a cellular manufacturing system using Matlab to study the effects of resource dependability and routing flexibility on the performance of the system. Based on the simulation results, the authors concluded that as resource dependability decreases, flexible routings for part families can increase productivity; from an economic standpoint, however, they concluded that the benefits are greatly reduced by the costs of increased routing flexibility and resource dependability. Caprihan and Wadhwa (1997) studied the impact of fluctuating levels of routing flexibility on the performance of a Flexible Manufacturing System (FMS). Based on the results obtained, the authors concluded that there is an optimal flexibility level beyond which system performance tends to decline, and that an increase in routing flexibility, when made available at an associated cost, seldom tends to be beneficial. Suer, Huang, and Maddisetty (2009) discussed layered cellular design to deal with demand variability; they proposed a methodology to design a cellular system consisting of dedicated cells, shared cells, and a remainder cell.

Other researchers studied make-to-order and make-to-stock production strategies. Among them, DeCroix and Arreola-Risa (1998) studied the optimality of a make-to-order (MTO) versus a make-to-stock (MTS) policy for a manufacturing setup producing various heterogeneous products facing random demands. Federgruen and Katalan (1999) investigated a hybrid system comprising MTO and MTS items and presented a host of alternatives to prioritize the production of the MTO and MTS items. Van Donk (2000) used the concept of the decoupling point (DP) to develop a frame to help managers in the food processing industries decide which of their products should be MTO and which should be MTS. Gupta and Benjaafar (2004) presented a hybrid strategy combining the MTO and MTS modes of production. Nandi and Rogers (2003) simulated a manufacturing system to study its behavior in a make-to-order environment under a control policy involving an order release component and an order acceptance/rejection component.

The authors are not aware of any other study that focuses on comparing the performance of connected cells with disconnected cells, and we therefore believe this is an important contribution to the literature.


DESCRIPTION OF THE SYSTEM STUDIED: THE CASE STUDY

This section describes the medical device manufacturing company where the experimentation was carried out. The products essentially follow a unidirectional flow. The manufacturing process is divided into two main areas, namely fabrication and packaging. Each area consists of three cells, and the cells are not identical. The one-piece flow strategy is adopted in all cells. The company has well-defined families, which are determined based on packaging requirements. Furthermore, the cells have already been formed. The average flowtime and the work-in-process inventory are the performance measures used to evaluate the performance of the connected cells and the disconnected cells.

Product Families

The products are grouped under three families: Family 1 (F1), Family 2 (F2), and Family 3 (F3). The finished products are vials containing blood sugar strips; each vial contains 25 strips. The numbers of products in families 1, 2, and 3 are 11, 21, and 4, respectively. The families described here were already formed by the manufacturer based on the number of vials (subfamilies) included in the box. Family 1 requires one subassembly (S), one box (B1), one label (L), and one insert for instructions (I); family 2 (F2) requires 2 subassemblies, one box (B2), one label, and one insert; and family 3 (F3) requires 4 subassemblies, one box (B3), one label, and one insert to become a finished product, as shown in Table 1. Obviously, this family classification is strictly from a manufacturing perspective; the marketing department uses its own family definition based on product-function-related characteristics. The family definition has been made based on the limitations of the packaging machines: not all packaging machines can insert 4 vials into a box. This seemingly simple issue becomes an obstacle in assigning products to packaging cells and, furthermore, a restriction in assigning products even to manufacturing cells in the connected cellular design.

Fabrication Cells

The fabrication area is where the subassemblies are manufactured. This area contains three cells which manufacture a single common subassembly; hence, all three families can be manufactured in any of the three cells. The fabrication area has a conveyor system which transfers the products from one machine to another based on the one-piece flow principle.

Operations in Fabrication Cells

There are three operations associated with the fabrication area:

• Lamination
• Slicing and Bottling
• Capping

The machines used for operation 1 in all three cells are similar and operate at the same rate (120 vials/min), but the number of machines within each cell varies. Operation 2 has machines that process 17 vials/min and 40 vials/min. Similarly, operation 3 has machines that process 78 vials/min and 123 vials/min. Table 2 shows the distribution of machines and their production rates among the three cells.

Table 1. Product structures of families

| Family | S | L | I | B1 | B2 | B3 |
|---|---|---|---|---|---|---|
| F1 | 1 | 1 | 1 | 1 | — | — |
| F2 | 2 | 1 | 1 | — | 1 | — |
| F3 | 4 | 1 | 1 | — | — | 1 |
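For illustration, the product structures in Table 1 can be expressed as a small bill-of-materials lookup. This is a sketch in our own notation (the dictionary and function names are hypothetical, not part of the company's systems), assuming the per-family component counts listed above:

```python
# Component requirements per finished box, as in Table 1:
# S = subassembly (vial), L = label, I = insert, B1/B2/B3 = family-specific box.
BOM = {
    "F1": {"S": 1, "L": 1, "I": 1, "B1": 1},
    "F2": {"S": 2, "L": 1, "I": 1, "B2": 1},
    "F3": {"S": 4, "L": 1, "I": 1, "B3": 1},
}

def components_for_order(family, boxes):
    """Total components needed to fill an order of `boxes` finished boxes."""
    return {part: qty * boxes for part, qty in BOM[family].items()}
```

For example, an order of 10 family-3 boxes consumes 40 subassemblies, 10 labels, 10 inserts, and 10 B3 boxes.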


Table 2. Number of machines and their production rates in fabrication cells (per-machine rates shown in the column headers; the Op. 3 type assignment is inferred so that the bottleneck column is consistent)

| Cell | Op. 1 (120 vials/min) | Op. 2 Type I (17 vials/min) | Op. 2 Type II (40 vials/min) | Op. 3 Type I (78 vials/min) | Op. 3 Type II (123 vials/min) | Output of Bottleneck (vials/min) |
|---|---|---|---|---|---|---|
| Cell 1 | 1 | 2 | 2 | — | 1 | 114 |
| Cell 2 | 1 | 4 | — | 1 | — | 68 |
| Cell 3 | 2 | 3 | 2 | 2 | — | 131 |
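The bottleneck column of Table 2 follows from a simple capacity calculation: each operation's capacity is the sum of the rates of its machines, and a cell's output is the minimum capacity over its three operations. The sketch below is our own illustration, not part of the chapter's models:

```python
# Per-machine rates in vials/min: Op.1 = 120; Op.2 Type I/II = 17/40; Op.3 Type I/II = 78/123.
RATES = {"op1": 120, "op2_I": 17, "op2_II": 40, "op3_I": 78, "op3_II": 123}

def cell_output(machines):
    """Cell output = min over operations of the summed per-operation machine rates."""
    capacity = {
        "op1": machines.get("op1", 0) * RATES["op1"],
        "op2": machines.get("op2_I", 0) * RATES["op2_I"]
               + machines.get("op2_II", 0) * RATES["op2_II"],
        "op3": machines.get("op3_I", 0) * RATES["op3_I"]
               + machines.get("op3_II", 0) * RATES["op3_II"],
    }
    return min(capacity.values())

# Machine counts consistent with Table 2.
cells = {
    "Cell 1": {"op1": 1, "op2_I": 2, "op2_II": 2, "op3_II": 1},
    "Cell 2": {"op1": 1, "op2_I": 4, "op3_I": 1},
    "Cell 3": {"op1": 2, "op2_I": 3, "op2_II": 2, "op3_I": 2},
}
```

In every cell, operation 2 turns out to be the binding constraint (e.g., cell 1: min(120, 2 x 17 + 2 x 40, 123) = 114 vials/min), which is consistent with operation 2 later being treated as the bottleneck in the simulation models.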

Packaging Cells

The packaging area also has a conveyor system, similar to the fabrication area, which transfers products within the packaging cells and also from the fabrication cells to the packaging cells. In the packaging area, the subassemblies produced in the fabrication area are used to produce the various finished products. Packaging cell 1 is semi-automatic, while cells 2 and 3 are automatic. This difference in the types of machines results in constraints that do not allow the packaging of certain products in certain cells. There are a total of 36 finished products, which differ in the quantity of vials they contain, the type of raw material the vials are made of, and the country to which they are shipped. The original cell feasibility matrix for the families is given in Table 3; the restrictions are due to constraints in the packaging of the vials.

Operations in Packaging Cells

There are five operations performed in the packaging area, and each operation requires one machine:

• Feeding (performed only in the case of disconnected cells)
• Labeling
• Assembly (automatic in cells 2 and 3, semi-automatic in cell 1)
• Sealing
• Bar Coding

Table 4 shows the production rates of the machines in all cells.

ALTERNATE DESIGNS CONSIDERED

In this section, the current product-cell feasibility restrictions are discussed for both connected and disconnected cellular systems.

Table 3. Feasibility matrix of families and packaging cells

| Family | Packaging Cell 1 | Packaging Cell 2 | Packaging Cell 3 |
|---|---|---|---|
| F1 | X | X | — |
| F2 | X | X | X |
| F3 | X | — | X |

Connected Cells

In this system, cells are set up such that the packaging cells form an extension, or continuation, of the respective fabrication cells. In other words, the output of a cell in the fabrication area becomes the input for the corresponding packaging cell; hence, it is referred to as a connected system. The connected system for the current product-cell feasibility is shown in Figure 3. The output rates of family 1, family 2, and family 3 are essentially determined by the bottleneck (slowest) machine in each cell of the fabrication or packaging area, and they are shown in Table 5.

Table 4. Production rates for assembly-packaging machines in vials/minute

| Cell | Family | Op. 4 | Op. 5 | Op. 6 | Op. 7 | Op. 8 |
|---|---|---|---|---|---|---|
| Cell 1 | Family 1 | 160 | 135 | 80 | 150 | 150 |
| Cell 1 | Family 2 | 160 | 135 | 80 | 150 | 150 |
| Cell 1 | Family 3 | 160 | 135 | 80 | 150 | 150 |
| Cell 2 | Family 1 | 160 | 135 | 100 | 150 | 150 |
| Cell 2 | Family 2 | 160 | 135 | 180 | 150 | 150 |
| Cell 2 | Family 3 | NA | NA | NA | NA | NA |
| Cell 3 | Family 1 | NA | NA | NA | NA | NA |
| Cell 3 | Family 2 | 160 | 135 | 150 | 150 | 150 |
| Cell 3 | Family 3 | 160 | 135 | 280 | 150 | 150 |

Figure 3. Cell routing of families for the connected system

Disconnected Cells

In this case, the output of a cell in the fabrication area can become an input for more than one cell in the packaging area, depending on the constraints in the packaging area. This can be considered a partially flexible disconnected cells system. The cell routing for each family is shown in Figure 4. In this figure, solid lines indicate that all the products processed in that particular fabrication cell can be processed in the assembly and packaging cell to which they are connected, whereas dashed lines show that only some of the products processed in the fabrication cell can be processed in the corresponding assembly and packaging cell. This provides a greater amount of flexibility with respect to the routing of the parts in the cellular system. The output rates of family 1, family 2, and family 3 depend on the fabrication-packaging cell combination, and they are determined by the slowest machine, as shown in Table 6.

Cases Considered

The experimentation discussed in this chapter can be grouped into the following cases:

• Original family-cell feasibility matrix: Production orders are based on customer orders.
• Various family-cell feasibility options: Seven different family-cell feasibility options have been considered, as given in Table 7. In this case too, production orders are based on customer orders.


Table 5. Output rates for cells in the connected system

| Cell # | Family # | Bottleneck Output in Fabrication Area (vials/min) | Bottleneck Output in Packaging Area (vials/min) | Output Rate (vials/min) |
|---|---|---|---|---|
| Cell 1 | Family 1 | 114 | 80 | 80 |
| Cell 1 | Family 2 | 114 | 80 | 80 |
| Cell 1 | Family 3 | 114 | 80 | 80 |
| Cell 2 | Family 1 | 68 | 100 | 68 |
| Cell 2 | Family 2 | 68 | 135 | 68 |
| Cell 3 | Family 2 | 131 | 135 | 131 |
| Cell 3 | Family 3 | 131 | 135 | 131 |
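The output rates in Tables 5 and 6 follow the same rule: a fabrication-packaging pairing runs at the slower of the two cell bottlenecks. A minimal sketch, our own illustration with the bottleneck figures taken from Tables 2 and 4:

```python
FAB_BOTTLENECK = {1: 114, 2: 68, 3: 131}  # vials/min, from Table 2

# Packaging bottleneck by (cell, family): slowest machine in the corresponding Table 4 row.
PACK_BOTTLENECK = {
    (1, "F1"): 80, (1, "F2"): 80, (1, "F3"): 80,
    (2, "F1"): 100, (2, "F2"): 135,
    (3, "F2"): 135, (3, "F3"): 135,
}

def routing_output(fab_cell, pack_cell, family):
    """A routing combination runs at the slower of its two cell bottlenecks."""
    return min(FAB_BOTTLENECK[fab_cell], PACK_BOTTLENECK[(pack_cell, family)])
```

For instance, family 2 routed through fabrication cell 1 (114) and packaging cell 2 (135) yields 114 vials/min, matching the corresponding row of Table 6.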

Figure 4. Cell routing of families for the disconnected system

Table 6. Output rate of each routing combination for the disconnected system

| Family # | Fabrication Cell (Bottleneck Output, vials/min) | Packaging Cell (Bottleneck Output, vials/min) | Output Rate of Routing Combination (vials/min) |
|---|---|---|---|
| Family 1 | Cell 1 (114) | Cell 1 (80) | 80 |
| Family 1 | Cell 1 (114) | Cell 2 (100) | 100 |
| Family 1 | Cell 2 (68) | Cell 1 (80) | 68 |
| Family 1 | Cell 2 (68) | Cell 2 (100) | 68 |
| Family 1 | Cell 3 (131) | Cell 1 (80) | 80 |
| Family 1 | Cell 3 (131) | Cell 2 (100) | 100 |
| Family 2 | Cell 1 (114) | Cell 1 (80) | 80 |
| Family 2 | Cell 1 (114) | Cell 2 (135) | 114 |
| Family 2 | Cell 1 (114) | Cell 3 (135) | 114 |
| Family 2 | Cell 2 (68) | Cell 1 (80) | 68 |
| Family 2 | Cell 2 (68) | Cell 2 (135) | 68 |
| Family 2 | Cell 2 (68) | Cell 3 (135) | 68 |
| Family 2 | Cell 3 (131) | Cell 1 (80) | 80 |
| Family 2 | Cell 3 (131) | Cell 2 (135) | 131 |
| Family 3 | Cell 1 (114) | Cell 1 (80) | 80 |
| Family 3 | Cell 1 (114) | Cell 3 (135) | 114 |
| Family 3 | Cell 2 (68) | Cell 1 (80) | 68 |
| Family 3 | Cell 2 (68) | Cell 3 (135) | 68 |
| Family 3 | Cell 3 (131) | Cell 1 (80) | 80 |
| Family 3 | Cell 3 (131) | Cell 3 (135) | 131 |

Table 7. Different family-cell feasibility options (entries list the product families each cell can process)

| Cellular System | Cell Type | Cell | O1 | O2 | O3 | O4 | O5 | O6 | O7 |
|---|---|---|---|---|---|---|---|---|---|
| Connected Cells | Fab. Cells | C1 | 1 | 1,2,3 | 1,2 | 1,2,3 | 1 | 1,2 | 1,2 |
| Connected Cells | Fab. Cells | C2 | 2 | 1,2,3 | 2,3 | 1,2 | 2 | 2,3 | 2,3 |
| Connected Cells | Fab. Cells | C3 | 3 | 1,2,3 | 1,3 | 2,3 | 3 | 1,3 | 1,3 |
| Connected Cells | Pack. Cells | C1 | 1 | 1,2,3 | 1,2 | 1,2,3 | 1 | 1,2 | 1,2 |
| Connected Cells | Pack. Cells | C2 | 2 | 1,2,3 | 2,3 | 1,2 | 2 | 2,3 | 2,3 |
| Connected Cells | Pack. Cells | C3 | 3 | 1,2,3 | 1,3 | 2,3 | 3 | 1,3 | 1,3 |
| Disconnected Cells | Fab. Cells | C1 | 1 | 1,2,3 | 1,2 | 1,2,3 | 1 | 1,2,3 | 1,2 |
| Disconnected Cells | Fab. Cells | C2 | 2 | 1,2,3 | 2,3 | 1,2 | 2 | 1,2,3 | 2,3 |
| Disconnected Cells | Fab. Cells | C3 | 3 | 1,2,3 | 1,3 | 2,3 | 3 | 1,2,3 | 1,3 |
| Disconnected Cells | Pack. Cells | C1 | 1,2,3 | 1,2,3 | 1,2,3 | 1,2,3 | 1 | 1,2 | 1,2 |
| Disconnected Cells | Pack. Cells | C2 | 1,2,3 | 1,2,3 | 1,2,3 | 1,2,3 | 2 | 2,3 | 2,3 |
| Disconnected Cells | Pack. Cells | C3 | 1,2,3 | 1,2,3 | 1,2,3 | 1,2,3 | 3 | 1,3 | 1,3 |

METHODOLOGY USED

This section describes the methodology used to develop the different simulation models in Arena 7.0.

Input Data Analysis

Input data such as customer order distributions, their respective inter-arrival times, processing times, and routings were all obtained based on the data provided by the company. The data provided were basically the total sales volume in vials for each part belonging to one of the three families over a period of nineteen months. Table 8 shows the customer order size and inter-arrival time distributions for each product.

Table 8. Inter-arrival time and customer order size distributions for products

| Family # | Product # | Inter-arrival Time Distribution | Customer Order Size Distribution |
|---|---|---|---|
| Family 1 | 1 | 0.999 + WEIB(0.115, 0.54) | 1.09 + LOGN(1.56, 1.06) |
| Family 1 | 2 | 0.999 + WEIB(0.0448, 0.512) | TRIA(18, 23.7, 52) |
| Family 1 | 3 | 1.11 + EXPO(1.87) | 9 + WEIB(7.66, 1.27) |
| Family 1 | 4 | 2 + LOGN(3.19, 3.68) | 2 + 17 * BETA(0.387, 0.651) |
| Family 1 | 5 | 4 + LOGN(5.05, 14) | 207 + LOGN(86.5, 139) |
| Family 1 | 6 | UNIF(0, 26) | TRIA(6, 12.5, 71) |
| Family 1 | 7 | -0.001 + 26 * BETA(0.564, 0.304) | UNIF(9, 80) |
| Family 1 | 8 | TRIA(0, 6.9, 23) | EXPO(25.3) |
| Family 1 | 9 | NORM(13.7, 7.49) | NORM(108, 30.8) |
| Family 1 | 10 | 6 + WEIB(3.78, 0.738) | TRIA(98, 120, 187) |
| Family 1 | 11 | UNIF(0, 26) | UNIF(14, 34) |
| Family 2 | 12 | 0.999 + WEIB(0.0126, 0.405) | 5 + WEIB(7.51, 0.678) |
| Family 2 | 13 | 1 + LOGN(0.99, 2.62) | 2 + 11 * BETA(0.412, 0.527) |
| Family 2 | 14 | 1.24 + EXPO(1.46) | 30 + 26 * BETA(0.643, 1.08) |
| Family 2 | 15 | EXPO(7.06) | 2 + 34 * BETA(0.321, 0.519) |
| Family 2 | 16 | 0.999 + WEIB(0.0313, 0.503) | NORM(149, 57.1) |
| Family 2 | 17 | 0.999 + WEIB(0.195, 1.12) | NORM(23, 14.2) |
| Family 2 | 18 | TRIA(0, 11.2, 25) | 101 * BETA(0.822, 0.714) |
| Family 2 | 19 | 26 * BETA(0.649, 0.42) | EXPO(154) |
| Family 2 | 20 | EXPO(7.4) | UNIF(0, 90) |
| Family 2 | 21 | UNIF(0, 26) | TRIA(0, 231, 330) |
| Family 2 | 22 | 28 * BETA(1.11, 0.547) | TRIA(0, 224, 325) |
| Family 2 | 23 | 27 * BETA(0.679, 0.429) | EXPO(119) |
| Family 2 | 24 | 28 * BETA(0.468, 0.255) | TRIA(425, 1.05e+003, 2.5e+003) |
| Family 2 | 25 | 1.16 + LOGN(2.48, 1.76) | NORM(867, 534) |
| Family 2 | 26 | EXPO(7.03) | NORM(68, 32.8) |
| Family 2 | 27 | TRIA(0, 4.44, 25) | EXPO(13.8) |
| Family 2 | 28 | 9 + 17 * BETA(0.559, 0.0833) | 24 * BETA(0.67, 0.969) |
| Family 2 | 29 | 28 * BETA(0.466, 0.301) | NORM(420, 168) |
| Family 2 | 30 | 28 * BETA(0.932, 0.479) | NORM(267, 110) |
| Family 2 | 31 | 2 + 26 * BETA(0.314, 0.458) | TRIA(0, 274, 381) |
| Family 2 | 32 | UNIF(0, 26) | TRIA(0, 297, 368) |
| Family 3 | 33 | 0.999 + WEIB(0.0117, 0.424) | TRIA(843, 1.19e+003, 2e+003) |
| Family 3 | 34 | 1.33 + 1.96 * BETA(0.3, 0.636) | WEIB(6.83, 0.613) |
| Family 3 | 35 | 1 + LOGN(5.23, 7.03) | 37 + LOGN(147, 1.51e+003) |
| Family 3 | 36 | 4 + 22 * BETA(0.305, 0.197) | TRIA(0, 543, 591) |

Simulation Models

The models were run 24 hours a day, representing three shifts around the clock. Setup times and material handling times were negligible. Preemption was not allowed due to material control restrictions imposed by the FDA. Vials move between machines based on one-piece flow. The simulation models are discussed for the different cases separately in the following paragraphs.

Case 1, Connected Cells: After the entities are created, they are routed to cells 1, 2, or 3 based on the family they belong to. The entities enter the fabrication area as a batch equivalent to the customer order size. Once a batch of entities enters a cell, the batch is split and there is one-piece flow within the cell. Entities belonging to a family go to one of its feasible cells based on the shorter queue length at the second operation. This is done because the second operation in each cell was identified as the bottleneck operation based on trial runs. In cells 1 and 3, the entities undergo operation 1 and then go to operation 2, where two types of machines, slow (Type I) and fast (Type II), are available for processing. The entities are routed to either type of machine based on a percentage that was decided after a number of simulation runs in order to minimize queue lengths and hence waiting times. In cell 1, 30% of the entities were routed to the Type I machines and the rest to the Type II machines; in cell 3, 40% were routed to the Type I machines and the rest to the Type II machines. Each entity leaving a fabrication cell enters the corresponding packaging cell; for example, entities from cell 1 of the fabrication area enter cell 1 of the packaging area. The entities entering the packaging area undergo processing through operation 4. In the fifth operation, the vials are grouped based on the family they belong to: family 1 consists of only 1 vial, family 2 of 2 vials, and family 3 of 4 vials. The vials batched in Arena after operation 5 are processed in operations 6, 7, and 8, where they are boxed, sealed, and coded. In the final batching, the vials are batched together based on the final customer order sizes; the final batch sizes are the same as the input batch sizes. There is an associated waiting time, since entities may have to wait until the required batch size is reached before being disposed. The warm-up time for the model was determined to be 2000 hours based on steady-state analysis, and the simulation was run for 2500 hours after the end of the warm-up period.

Case 1, Disconnected Cells: The entities enter the fabrication area in batches, as explained for the connected system. The batches of entities in the disconnected system are, however, routed differently: here, the batches are routed to cell 1, cell 2, or cell 3 of the fabrication area based on the shortest queue length at the bottleneck operation, which is operation 2 as explained earlier. The flexibility of routing the families to any of the cells is the only major difference between the connected and disconnected systems in the fabrication area; the processing times of the machines and the sequence of operations are the same for both systems. Since the flow is disconnected in this system, the entities are batched again to the same customer order sizes at the end of the fabrication area. The batches entering the packaging area are routed to specific packaging cells based on the shortest queue length, as shown earlier in Table 4. These batches are then split, and the entities follow one-piece flow. Also, there is an extra feeding operation at the start of the packaging cells to accommodate the transfer of entities from fabrication to packaging. The method by which entities are transferred from fabrication to packaging and the extra feeding operation are the only major differences between the connected and disconnected systems in the packaging area; again, the processing times of the machines and the sequence of operations are the same for both systems.

Case 2: This case is very similar to case 1 except that the routings for products are varied as given in Table 7. In this table, Option 5 (O5) is the least flexible arrangement, where each cell can process only one product family for both connected and disconnected cells. Option 2 (O2) is the most flexible arrangement, with all three cells capable of running all three product families in both connected and disconnected cells. The remaining options vary in flexibility between O5 and O2. In option 1, the system is highly inflexible in connected cells, whereas it is very flexible in the packaging cells of the disconnected arrangement (three product families for each cell). In options 3, 4, 6, and 7, each product family can be run in at least two cells. In option 3, the packaging cells of the disconnected arrangement are more flexible (once again, three product families for each cell). In option 4, a little more flexibility is added to both connected and disconnected cells (cell 1 can run three families). In option 6, more flexibility is added to the fabrication cells of the disconnected system (three product families for each cell). In option 7, each family can be run in two cells. However, the models for options 1 and 5 did not stabilize, and therefore they were not included in the comparisons. Production order quantities for products 33 and 36 were reduced by 40% and 50%, respectively, to fit into the existing capacity for case 1.

Validation and verification are an inherent part of any computer simulation analysis. The models were verified and validated before statistical analysis was performed for all scenarios.
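The routing rule used in the models, sending an arriving batch to the feasible cell with the shortest queue at the bottleneck (second) operation, can be sketched outside of Arena as follows. This is an illustrative stand-in for the Arena decision logic, not the actual model; the names, queue snapshot, and tie-breaking convention are our own assumptions:

```python
def route_batch(family, feasible_cells, queue_len):
    """Pick the feasible cell whose operation-2 (bottleneck) queue is shortest;
    ties are broken by lowest cell number (an assumed convention)."""
    return min(feasible_cells[family], key=lambda c: (queue_len[c], c))

# Hypothetical snapshot of operation-2 queue lengths, and the family-cell feasibility
# from Table 3 (packaging area of the original system).
queues = {1: 40, 2: 12, 3: 25}
feasible = {"F1": [1, 2], "F2": [1, 2, 3], "F3": [1, 3]}
```

With this snapshot, a family-1 batch goes to cell 2 and a family-3 batch to cell 3; in the disconnected system the same rule is applied twice, once at the fabrication area and once at the packaging area.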

comparisons for the families for the same performance measures but the comparisons are made between different connected systems from cases 1 and 2. Table 13 also displays comparisons for the families for the same performance measures but the comparisons are made between different disconnected systems from cases 1 and 2. Results are denoted as significant (S) or not significant (NS) based on the conclusions reached. Also whenever significant, better option was denoted in a parenthesis. The significance of the results was based on the p-value obtained from the T-test conducted for an alpha level of 0.05. As mentioned earlier, no results for options 1 and 5 were obtained as the system did not stabilize. As observed in Table 11, for case 1, the flowtimes and work-in-process were observed to be different and the disconnected system had lower flowtimes and WIP for families F1 and F3 while the difference was significant for F1. On the other hand, WIP was significantly lower for F2 in the connected system. For case 2 with all the options considered, when there was a significant difference, this was always in favor of connected systems. For option 2, the flowtime for family 2 and the WIP for all three families for the connected system were significantly lower than those of in the disconnected system. For options 3, 6, and 7 which were the same for the connected system, the flowtimes and WIP for families 1 and 2 were significantly lower than the disconnected

RESULTS OBTAINED The results obtained from simulation analysis for average flowtime and average work-in-process inventory are summarized in Tables 9 and 10, respectively. The results are based on 100 replications. The statistical analysis was conducted using the statistical functions available in Excel. A t-test assuming unequal variances for two samples was conducted for a 95% confidence interval for each family under each system. Table 11 displays the comparison for each family with respect to flowtimes and work-in-process between connected and disconnected systems. Table 12 displays Table 9. Average flowtime results for all cases Cases and Options

Connected Cells Configuration F1

Disconnected Cells Configuration

F2

F3

F1

F2

F3

C1

42.66

50.52

87.53

31.19

54.39

71.61

C2-02

31.08

45.98

66.61

32.55

51.62

73.79

C2-03

24.91

39.84

67.06

27.24

46.93

83.48

C2-04

41.26

51.15

78.25

35.14

49.66

79.49

C2-06

Same as C2-03

31.88

51.17

73.80

C2-07

Same as C2-03

70.67

45.91

78.06

674

Comparison of Connected vs. Disconnected Cellular Systems

Table 10. Average work-in-process results for all cases

| Cases and Options | Connected F1 | Connected F2 | Connected F3 | Disconnected F1 | Disconnected F2 | Disconnected F3 |
|---|---|---|---|---|---|---|
| C1 | 128.59 | 1403.77 | 1381.40 | 100.15 | 1622.52 | 1182.29 |
| C2-O2 | 90.00 | 1184.19 | 1052.06 | 99.70 | 1563.94 | 1267.13 |
| C2-O3 | 70.67 | 1046.90 | 1246.10 | 86.36 | 1425.42 | 1442.67 |
| C2-O4 | 126.10 | 1425.71 | 1269.42 | 111.27 | 1667.29 | 1409.31 |
| C2-O6 | Same as C2-O3 | | | 97.46 | 1555.34 | 1273.61 |
| C2-O7 | Same as C2-O3 | | | 80.34 | 1380.77 | 1532.79 |

Table 11. Connected vs. disconnected configuration for each family

| Cases and Options | Flowtime F1 | Flowtime F2 | Flowtime F3 | WIP F1 | WIP F2 | WIP F3 |
|---|---|---|---|---|---|---|
| C1 | S (D) | NS | NS | S (D) | S (C) | NS |
| C2-O2 | NS | S (C) | NS | S (C) | S (C) | S (C) |
| C2-O3 | S (C) | S (C) | S (C) | S (C) | S (C) | NS |
| C2-O4 | NS | NS | NS | NS | S (C) | NS |
| C2-O6 | S (C) | S (C) | NS | S (C) | S (C) | NS |
| C2-O7 | S (C) | S (C) | NS | S (C) | S (C) | NS |

Table 12. Comparison between connected systems

| Cases and Options | Flowtime F1 | Flowtime F2 | Flowtime F3 | WIP F1 | WIP F2 | WIP F3 |
|---|---|---|---|---|---|---|
| O2 vs O3 | S (O2) | S (O2) | NS | S (O2) | S (O2) | NS |
| O2 vs O4 | S (O2) | NS | NS | S (O2) | S (O2) | NS |
| O3 vs O4 | S (O3) | S (O3) | NS | S (O3) | S (O3) | NS |
| C1 vs O2 | S (O2) | NS | NS | S (O2) | S (O2) | NS |
| C1 vs O3, O6, O7 | S (O3) | S (O3) | NS | S (O3) | S (O3) | NS |
| C1 vs O4 | NS | NS | NS | NS | NS | NS |

system. For option 4, the WIP for family 2 in the connected system was the only significant result. From Table 12, it can be observed that option 2 (O2) provided the best results, with lower flowtimes and WIP, when compared to the rest of the options within the connected system, followed by option 3 (O3). From Table 13, it can be observed that the flowtimes and WIP for options 3 and 7 (O3, O7) were consistently and significantly better when compared to the rest of the options in the disconnected cells configuration. Moreover, when these two options were compared against each other, no significant difference was observed for any of the families or performance measures. A comparison between models C1 and O2 did not yield any significant results either, and both were clearly


Table 13. Summary table of results for disconnected system: cases 1 and 2

| Cases and Options | Flowtime F1 | Flowtime F2 | Flowtime F3 | WIP F1 | WIP F2 | WIP F3 |
|---|---|---|---|---|---|---|
| O2 vs O3 | S (O3) | S (O3) | NS | S (O3) | S (O3) | NS |
| O2 vs O4 | S (O4) | S (O4) | NS | S (O4) | S (O4) | NS |
| O2 vs O6 | S (O6) | S (O6) | NS | S (O6) | S (O6) | NS |
| O2 vs O7 | S (O7) | S (O7) | NS | S (O7) | S (O7) | NS |
| O3 vs O4 | S (O3) | NS | NS | S (O3) | S (O3) | NS |
| O3 vs O6 | S (O3) | S (O3) | NS | S (O3) | S (O3) | NS |
| O3 vs O7 | NS | NS | NS | NS | NS | NS |
| O4 vs O6 | NS | NS | NS | S (O6) | NS | NS |
| O4 vs O7 | S (O7) | S (O7) | NS | S (O7) | S (O7) | NS |
| C1 vs O2 | NS | NS | NS | NS | NS | NS |
| C1 vs O3 | S (O3) | S (O3) | NS | S (O3) | S (O3) | S (C1) |
| C1 vs O4 | NS | NS | NS | NS | NS | NS |
| C1 vs O6 | NS | NS | NS | NS | NS | NS |
| C1 vs O7 | S (O7) | S (O7) | NS | S (O7) | S (O7) | NS |

inferior in performance when compared with the rest of the options.
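The pairwise comparisons in Tables 11 through 13 were produced with a two-sample t-test assuming unequal variances (Welch's test) at alpha = 0.05, computed with Excel's statistical functions. The same test can be sketched in Python; the replication data below are invented for illustration (the chapter used 100 replications per configuration), and with that many replications the t distribution is close to normal, so |t| > 1.96 serves as a reasonable large-sample cut-off.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Two-sample t-test assuming unequal variances (Welch's test).

    Returns the t statistic and the Welch-Satterthwaite degrees of
    freedom, mirroring Excel's unequal-variance t-test.
    """
    na, nb = len(sample_a), len(sample_b)
    se2_a = variance(sample_a) / na   # squared standard error, sample a
    se2_b = variance(sample_b) / nb   # squared standard error, sample b
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2_a + se2_b)
    df = (se2_a + se2_b) ** 2 / (
        se2_a ** 2 / (na - 1) + se2_b ** 2 / (nb - 1)
    )
    return t, df

# Hypothetical flowtime replications for one family (made-up numbers).
connected    = [31.1, 30.4, 32.0, 31.6, 30.9, 31.3, 30.7, 31.8]
disconnected = [32.6, 33.1, 32.2, 32.9, 33.4, 32.5, 33.0, 32.7]

t, df = welch_t(connected, disconnected)
significant = abs(t) > 1.96  # large-sample approximation at alpha = 0.05
print(f"t = {t:.2f}, df = {df:.1f}, significant: {significant}")
```

A negative t here favors the connected configuration (lower mean flowtime), which would be recorded as "S (C)" in the tables above.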

CONCLUSION

In this chapter, the performance of connected and disconnected cellular systems was compared under a make-to-order strategy in a real cellular setting. In the existing system (case 1), no cellular manufacturing design dominated the other; that is, mixed results were obtained as to which system performed better for each family. The flowtime and work-in-process for family 1 were lower in the disconnected system, while the WIP for family 2 was lower in the connected system. The other comparisons did not yield any significant results, and hence dominance could not be established in favor of either cellular system. In case 2, which is an extension of case 1, the impact of considering alternate cell routings for each part family was studied for both connected cells and disconnected cells. In most cases, connected cells outperformed disconnected


cells with respect to both average flowtime and WIP, especially for families 1 and 2. This leads to the conclusion that the connected system is the better system in this situation, since families 1 and 2 make up 32 of the 36 products and account for about 85% of the production orders in the system by volume. The average flowtime and WIP conclusions are similar but not identical; that is, there were instances where flowtime was significantly better but the corresponding WIP was not, and vice versa. If one wanted to choose the best connected cell configuration, it would be option 2, possibly because option 2 had the highest flexibility among all options: each family could be routed to any of the fabrication and packaging cells. Options 3 and 4 and case 1 followed in order of performance, leading to the conclusion that an increase in the routing flexibility of the families resulted in significantly lower flowtimes and WIP. A similar comparison among all options developed for the disconnected system showed that options 3 and 7 performed better than the rest of the options. Option 3 had complete flexibility in


the packaging area but limited flexibility in the fabrication area, and option 7 had limited flexibility in both areas. Limited flexibility, as applicable to these two options, means that each family could go to at least two specified cells. On the other hand, option 2 was the worst performing system among the options for case 2 even though it had the highest flexibility. This can be attributed to the fact that routing decisions were made based on queue sizes only. Family 3 products have the highest processing times, and it is possible that the queues in all cells contained family 3 products, leading to higher lead times for the parts joining those queues. For case 1, and also for option 2 from case 2, the disconnected system was modified to remove the extra feeding operation and the batching at the end of the fabrication area. This was done in order to determine why the connected system performed better than the disconnected system in most of the comparisons made. The two modified simulation models were run and the results were statistically analyzed. In case 1, the flowtime for family 1 and the WIP for family 2 were significantly better for the disconnected system. In the original comparison, the WIP and flowtime for family 1 in the disconnected system were better, and the WIP for family 2 in the connected system was significantly better. The rest of the comparisons did not yield any significant results. For option 2, none of the comparisons yielded significant results, as opposed to the original comparison, in which the connected system clearly performed better than the disconnected system. From these results it can be concluded that the extra operation and the extra batching increase the average WIP and flowtimes for each of the families and could be responsible for the disconnected system not performing as well as or better than the connected system.


This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 37-52, copyright 2012 by Business Science Reference (an imprint of IGI Global).



Chapter 39

AutomatL@bs Consortium:

A Spanish Network of Web-based Labs for Control Engineering Education

Sebastián Dormido, Universidad Nacional de Educación a Distancia, Spain
Héctor Vargas, Pontificia Universidad Católica de Valparaíso, Chile
José Sánchez, Universidad Nacional de Educación a Distancia, Spain

ABSTRACT

This chapter describes the effort of a group of Spanish universities to unify recent work on the use of Web-based technologies in teaching and learning engineering topics. The network was intended to be a space where students and educators could interact and collaborate with each other as well as a meeting space for different research groups working on these subjects. The solution adopted in this chapter goes one step beyond the typical scenario of Web-based labs in engineering education (where research groups demonstrate their engineering designs in an isolated fashion) by sharing the experimentation resources provided by the different research groups that participated in this network. Finally, this work highlights the key points of this project and provides some remarks about the future use of Web-based technologies in school environments.

DOI: 10.4018/978-1-4666-1945-6.ch039

INTRODUCTION

The evolution of the Internet has changed the education landscape drastically (Bourne et al. 2005, Rosen 2007). What was once considered distance education is now called online education.

In other words, the method of teaching and learning is based on the use of the Internet to complete educational activities. A specific example of this new teaching model is the Spanish University for Distance Education (UNED). Compared to other Spanish universities, this institution has the largest number of students because distance education allows students to obtain a degree or improve

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


their professional skills without having to change their lifestyles. UNED is not a unique institution; there are many universities around the world with an online presence, such as the Open University in Colombia, Open Universities Australia, the Open University in the UK, the Open University of Catalonia, FernUniversität in Germany, and many more. The existence of these institutions confirms the viability and importance of computer-assisted teaching and learning through the Internet. The implementation of a distance learning model is not an easy task in engineering and science studies (Williams 2007). In addition to textual/multimedia information and other resources required to demonstrate theoretical aspects in an online course, hands-on laboratories should also be included. This requirement is particularly necessary for control engineering, which is an inherently interdisciplinary field in which progress is achieved through a mix of mathematics, modeling, computation, and experimentation (Astrom 2006). In this context, students should be able to:

• Understand the underlying scientific model of the phenomenon being studied.
• Become acquainted with the limits of the model (i.e., how accurately the model reflects real behavior and to what extent it remains a basic approximation).
• Learn how to manipulate the parameters of the model in order to fine-tune the behavior of the real system. (Dormido 2004)

To achieve these goals, the implementation of an effective Web-based educational environment for any engineering topic should cover three aspects of technical education: concept, interpretation, and operation. The student should be provided with an opportunity to become an active player in the learning process (Dormido et al. 2005). In this context, the potential of Web-based experimental applications such as virtual laboratories (Valera et al. 2005), remote laboratories (Casini et al. 2004, Brito et al. 2009) and


games (Eikaas et al. 2006) as pedagogical support tools in the learning/teaching of control engineering has been presented in many works. In fact, in the last decade, several academic institutions have explored the World Wide Web (WWW) to develop their courses and experimental activities in a distributed context. However, most of these developments have focused only on the technical issues related to building Web-enabled applications for performing practical activities through the Internet (e.g., how to start up remote monitoring of a real device or how to build sophisticated virtual interfaces). At most, these implementations may include a set of Web pages with a list of activities that need to be carried out by the users. Some examples of these implementations are provided in the additional reading section at the end of the chapter. In general, these developments do not take into account the social context of the interactions and the collaboration that is typically generated in traditional hands-on laboratories (Nguyen 2007). Indeed, direct contact with teachers and interactions with classmates are valuable resources that may be reduced or even disappear when hands-on experimental sessions are conducted via Web-based laboratories. New trends in the use of Web-based resources for teaching and learning in the engineering disciplines include the use of Web 2.0 technologies such as social software in building virtual representations of face-to-face (f2f for short) laboratories in a networked, distributed environment (Gillet et al., 2009). This objective was grounded in the idea that educational institutions and many workplaces are equipped with a type of tool that connects people, content, and learning activities and can thus transfer information and knowledge. Learning to learn is the new challenge for the new generation of students. In other words, they have to learn to use Web resources to improve their teaching and learning.
Commonly, a mix of Web-based technologies and software agents (Salzmann & Gillet 2008) is used to develop remote experimentation systems


for pedagogical purposes. For this reason, most remote experimentation systems are custom-made solutions. This means that the selection of software tools and the global system architecture are not simple tasks due to the wide variety of software frameworks that are available. This chapter describes the structure of the remote experimentation system used in this study, which is based on the use of three software tools: Easy Java Simulations (Easy Java 2010), LabVIEW (LabVIEW 2010), and eMersion (eMersion 2010).

BACKGROUND

In a typical scenario for remote experimentation, universities provide the overall infrastructure required for the remote experimentation services offered to students, including a set of didactic setups that are specially designed for hands-on laboratories, a set of server computers used to interface these processes, and a main server computer providing the complementary Web-based resources necessary to use the remote labs. The system users (clients) can access experimentation services from any Internet connection. However, developing a complete environment for experimentation services is not an easy task. For this reason, this section presents a systematic approach for developing such systems. Although the development of an application with the previously described features can be structured in multiple ways, we have divided the problem into two levels or "layers." The first layer is the experimentation layer, which includes all the necessary software and hardware components needed to develop the experimental applications for the Web-based virtual or remote laboratories. Because Web-based labs do not supply all the elements needed to provide remote experimentation services, complementary Web-based resources are needed to manage students' learning. Thus, the e-learning layer incorporates the development of

functionalities required to support teaching and learning through the Internet (Vargas et al., 2008).

Layer 1: The Experimentation Layer

The experimentation layer includes the design methodology and construction of a graphical user interface for clients as well as the server application. This layer was developed with a client and server structure. The first step in the development process was the analysis of the requirements and specifications for implementing the interface. The following subsections include some recommendations for developing this layer.

Requirements and Specifications for the Client

• The software should be multiplatform. For example, Java has the required characteristics for designing this kind of application; the user only needs a Web browser with Java support to access the Web-based lab.
• The protocol used to communicate with the server should include low-level protocols to stream data through the Internet, such as TCP (Transmission Control Protocol) or UDP (User Datagram Protocol). These protocols allow for better control of data packet transmissions in networks.
• The graphical user interface must be simple and intuitive. It should be user-friendly and useful in different environments.
• Either virtual or remote access to the laboratory should be enabled with the same graphical interface. In simulation mode, the state of the system and its associated variables must be updated based on the evolution of a mathematical model of the process. In remote mode, these variables should instead be updated according to the changes in the real plant at the remote location.
• Video feedback should also be included to provide distance users with a sense of presence in the laboratory.
• Event scheduling to program faults in the system should be included. The system could then be used to analyze systems in the presence of noise or disturbance measurements, and the robustness of the system could also be evaluated under anomalous operating situations.
• Finally, it is recommended that users be able to define experiments in an easy manner. For example, a programmed change of a setpoint value could be required to observe the process response at different operating points.

Requirements and Specifications for the Server

From a software design point of view, the server is composed of a set of modules (some of them optional) that are described below:

• A data exchange module: This software remains in a listening state while waiting for remote connections from users. It receives commands and queries from clients and applies these inputs to a physical system. The responses are retrieved from the real plant through the instrumentation hardware and sent back to the client. The link between the server and clients would be established via the TCP/IP protocol suite.
• An access management module: This module would manage all of the information related to the management of users and timeslot bookings for the use of the real plants. A database manager would be used to handle the users' reservations and physical resources. Each record in this database would correspond to a booking scheduled for a specific time and date.
• An instrumentation module: This module would incorporate all of the hardware needed to connect the physical system to the server.
• A remote visualization module: This module would allow users to examine what is happening with the physical system during its remote manipulation by a client. Video cameras would be used to facilitate this feature. The system must be able to transmit a sense of realism to encourage its usage and increase the motivation of the users while they complete tasks.

Each module in the server has a counterpart on the client side. For example, to read the video stream captured by the server, the client application must implement a software module that retrieves and decodes the video stream from the remote camera and then renders it in the interface.
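The request/response cycle of the data exchange module can be illustrated with a minimal TCP listener paired with a client on a local socket. This is a Python sketch, not the consortium's Java/LabVIEW implementation; the command and state formats (e.g., "setpoint=2.5" and the canned "level" reading) are invented for illustration, and a real server would drive the instrumentation hardware instead.

```python
import socket
import threading

# Bind and listen first (port 0 = any free port) so the client
# cannot race the listener.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def data_exchange_module():
    """Stand-in for the server's data exchange module: wait for a client,
    read one command, and answer with a (faked) current plant state."""
    conn, _ = srv.accept()
    with conn:
        command = conn.recv(1024).decode()            # e.g. "setpoint=2.5"
        conn.sendall(f"ack:{command};level=1.87".encode())  # fake reading

server = threading.Thread(target=data_exchange_module)
server.start()

# Client side: send a control command and read the plant state back.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"setpoint=2.5")
    reply = cli.recv(1024).decode()

server.join()
srv.close()
print(reply)  # ack:setpoint=2.5;level=1.87
```

The same listen/receive/apply/respond pattern underlies the chapter's TCP-based architecture, with the instrumentation hardware standing where the canned reply is here.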

Layer 2: The E-Learning Layer

The previous section presented the main features that need to be taken into account when analyzing, designing, and building the experimentation layer. Additionally, a second key aspect that should be addressed is the development and/or use of a Web-based learning management system (LMS) to support a student's learning process. This platform should organize user access to the experimentation modules that are available and allow students and teachers to interact and collaborate with one another. The implementation phase would require the following:

• Simplification of the organization of user groups.
• Notification services by email, instant messaging, news, and other methods.
• Documentation such as practical guides, task protocols, instruction manuals, and any other information needed to perform a remote experimentation session autonomously.
• A sequence of activities that students must carry out during an experimental session. There can be two types of tasks: (1) tasks in simulation mode and (2) tasks in remote mode. Tasks in simulation mode are tasks that students must carry out prior to performing the experiments on the real plant. These tasks should be completed with a graphical user interface that allows students to work in a simulated environment. The objective should be to gain adequate insight into the procedures involved in the experiment. These tasks would reduce the time students spend on activities using the real plant. Remote access should not be allowed if the student has not satisfactorily completed the required tasks in simulation mode. If the student's work was evaluated positively by the teaching staff, then access to remote mode would be granted.
• A method for managing students and the assessment of their work, as well as uploading reports.
• An automatic booking system to schedule access to the physical resources.

At the end of the development process, the experimentation and e-learning layers have to be integrated to produce the final Web application for virtual and remote labs. This integration needs to establish certain links and channels between the Web modules from both layers. For example, in our particular framework, we made it possible to save data collected from an experimentation applet (experimentation layer) in a shared Web space that was part of the e-learning layer. The data stored in this space could be retrieved later for analysis.
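The booking and access rules above combine two checks: the student must have passed the simulation-mode tasks, and the requested plant must be free in the requested timeslot. A small Python sketch of that reservation logic follows; all names, the approval set, and the hourly slot granularity are illustrative assumptions, not the consortium's actual scheme.

```python
from datetime import datetime

# (plant, slot) -> student; one reservation per plant per timeslot.
reservations: dict[tuple[str, datetime], str] = {}

# Students whose simulation-mode tasks were approved by the teaching staff.
approved_for_remote = {"alice"}

def book(student: str, plant: str, slot: datetime) -> bool:
    """Grant the slot only if the student passed simulation mode
    and no one else holds the same plant at the same time."""
    if student not in approved_for_remote:
        return False            # must finish simulation-mode tasks first
    if (plant, slot) in reservations:
        return False            # plant already taken in that timeslot
    reservations[(plant, slot)] = student
    return True

slot = datetime(2010, 5, 3, 16, 0)
print(book("alice", "tank-1", slot))  # True: approved and slot free
print(book("bob", "tank-1", slot))    # False: bob not yet approved
```

In the real system this check sits behind the access management module's database rather than an in-memory dictionary, but the gating logic is the same.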

IMPLEMENTATION

The implementation process of the remote experimentation system described in this chapter can be itemized into two independent processes that were combined to create the final environment. These developments can be summarized as follows:

• Building hybrid laboratories for pedagogical purposes (the experimentation layer).
• Integrating the hybrid laboratories into a Learning Management System (LMS) to publish resources and provide mechanisms for accessing the real plants (the e-learning layer).

Building Hybrid Laboratories for Pedagogical Purposes

A hybrid laboratory provides remote software simulations and real experiments in a single environment that can be accessed over the Internet. The client/server approach is commonly used for the technical implementation of both features (Callaghan et al. 2006, Zutin et al. 2008). Specifically, when a student is conducting an experiment in a virtual manner, he or she works with a mathematical model of a process. When developing the simulated portion of a hybrid laboratory, developers not only need to create a technology that covers all the aspects related to the use of simulations in a local mode; the applications must also work well in a distributed environment. The graphical user interface, for instance, could be a pure HTML/JavaScript application, or it could require a plug-in such as Flash, Java or ActiveX that runs in a Web browser. Although one of the most relevant features of Java is the simplicity of the language, creating a graphical simulation in this programming language is not a straightforward task. Conceiving relatively complex Web-based applications requires advanced knowledge of object-oriented



programming and other features of Java (Esquembre 2005). For this reason, the following subsection presents Easy Java Simulations (EJS), a software tool that was used to create the client interfaces for the hybrid laboratories in this chapter.

EJS as a Development Tool for Hybrid Laboratories

EJS is a freeware, open-source tool that was developed in Java and is specially designed for the creation of discrete computer simulations (Christian & Esquembre 2007). EJS was originally designed for users with little programming experience. However, users need to know the analytical model of the process and the design of the graphical interface in detail. The architecture of EJS was derived from the model-view-control (MVC) paradigm, a philosophy that is based on the fact that interactive simulations must include three parts:

Figure 1. MVC paradigm abstraction of EJS


• The model, which describes the process under study in terms of (1) variables, which define the different possible states of the process, and (2) the relationships between these variables, which are expressed by computer algorithms.
• The control, which defines certain actions a user can perform on the simulation.
• The view, which shows a graphical representation (either realistic or schematic) of the process states.
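For the single-tank process mentioned below, the model part reduces to one state variable (the liquid level) and one relationship (its rate of change). A rough Python sketch under a Torricelli-type outflow assumption, with all numeric parameters invented for illustration (EJS itself would express this in Java and render the view graphically):

```python
import math

# Model: state variable (level h) plus its governing relationship,
# integrated with a simple Euler step. Illustrative parameters:
# A = tank cross-section, a = outlet coefficient, dt = step size.
A, a = 0.5, 0.1
dt = 0.1

def step(h, q_in):
    """Advance the tank level one step: dh/dt = (q_in - a*sqrt(h)) / A."""
    return max(0.0, h + dt * (q_in - a * math.sqrt(h)) / A)

# View (the graphical side in EJS): here just the final printed state.
h = 0.0
for _ in range(200):
    h = step(h, q_in=0.05)   # constant inflow, as a user-set parameter
print(round(h, 3))           # approaches equilibrium where a*sqrt(h) = q_in
```

Separating `step` (model) from the printing (view) mirrors the MVC split the bullets describe: the same model could be rendered by any interface.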

EJS makes programming simple by eliminating the control element of the MVC paradigm, merging part of it into the view and part into the model, as shown in Figure 1. Thus, applications can be created in two steps: (1) defining the model to simulate with the built-in simulation mechanism of EJS and (2) building the view by showing the model state and incorporating the changes made by users. Figure 1 shows a simple virtual lab created


by EJS for teaching basic control concepts based on the well-known single-tank process. Although EJS was initially conceived as a software tool to create interactive simulations for teaching physics, it has been successfully applied to many other research areas, including physical systems, mechanical systems, control systems (as in this chapter), and medical systems. Thus, EJS can be classified as a general-purpose tool intended to create interactive simulations of scientific phenomena based on models. Finally, one of the most important features of EJS is that its applications can be easily distributed through the Internet in applet form. Applets are Java programs that can be executed in the context of a Web browser in a way similar to Flash or HTML/JavaScript applications. To find more information about the EJS mechanism, creating simulations, and additional features, please visit the EJS homepage (http://www.um.es/fem/EjsWiki/).

LabVIEW Server for Developing Hybrid Laboratories

Carrying out the remote operation of any physical device is challenging if we take into account the number of technical aspects that must be solved. In most cases, technical issues such as performance, interaction level, visual feedback,

real-time control, user perception, safety and fault tolerance require extensive research (Salzmann & Gillet 2002). In general, remote experimentation through the Internet requires an awareness of the current state of the distant real plant, so that a user can change the value of any input parameter of the remote system and perceive the effect of this change with a minimal transmission delay. Figure 2 shows the process in which a client application maintains a connection with a remote server to control a real plant remotely. The server side sends a continuous flow of information, represented by the information blocks s(k), that reflects the current state of the plant, and it receives information blocks c(k) containing the changes in system input parameters carried out by a remote user. The client side receives the information on the state of the system sent by the server (contained in the s(k) blocks) while simultaneously waiting for a user's interaction in order to report the changes to the server side with new information blocks c(k).

Figure 2. Stream of information between client and server

Regarding the software solutions for the server side, there are many options for programming the real-time control loop (Matlab/Simulink, C++, Scicos, etc.). In this chapter, we propose a working scheme for LabVIEW developers. The set of tasks that should be executed in the LabVIEW server to enable remote experimentation are described below:

• Control task: This loop involves the execution of three sub-tasks: (1) recover the control parameters from the communication task, (2) acquire data and perform closed-loop control, and (3) transmit the system state to the communication task.
• Video task: This loop involves the execution of two sub-tasks: (1) acquire images from the video camera and (2) transmit the images to the communication task.
• Communication task: This loop involves the execution of three sub-tasks: (1) receive control data from the clients and write the data to the control task, (2) read the system state from the control task and the images from the video task, and (3) link the state and images and then send them to the client.
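The three-loop split can be imitated outside LabVIEW with a bounded queue standing in for the RT FIFO blocks. The Python sketch below (loop length, the toy first-order control law, and the state-vector contents are all invented for illustration) runs a control loop that publishes state vectors to a communication loop in a separate thread:

```python
import queue
import threading

state_fifo = queue.Queue(maxsize=10)   # stands in for an RT FIFO block
N_SAMPLES = 5

def control_task():
    """Time-critical loop: read the 'sensor', apply control, publish state."""
    setpoint, measurement = 2.0, 0.0
    for k in range(N_SAMPLES):
        measurement += 0.5 * (setpoint - measurement)  # toy closed loop
        state_fifo.put({"k": k, "y": measurement})     # state vector

def communication_task(out):
    """Consumes state vectors; a real server would forward them over TCP."""
    for _ in range(N_SAMPLES):
        out.append(state_fifo.get())

states = []
t1 = threading.Thread(target=control_task)
t2 = threading.Thread(target=communication_task, args=(states,))
t1.start(); t2.start()
t1.join(); t2.join()
print([round(s["y"], 2) for s in states])  # converges toward the setpoint 2.0
```

The queue decouples the two loops exactly as the RT FIFO queues do between the LabVIEW control and communication tasks: the producer never blocks on the network, and the consumer never perturbs the control timing.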

Figure 3 shows a LabVIEW block diagram that corresponds to this communication architecture. The three loops in the diagram run concurrently to

perform the main tasks: control, communication, and video acquisition. The control task is a time-critical activity running at a sampling period of 20 ms with a higher priority than the other two threads. The Analog Input Block reads the analog input signal from the sensor, its output is compared to the setpoint input of the PID Block, and the result is fed into the Analog Output Block. The resulting value is then sent to the actuator, which completes the control task. The data structure composed of the setpoint value, the PID control parameters, the command to the actuator, and other variables is known as the control vector. This vector is sent from the communication task to the control task through RT FIFO queue blocks (RT FIFO queues act as a fixed-size queue so that writing data to an RT FIFO does not overwrite previous elements). These variables are produced when users interact with the client interface. The data array formed by the value sent to the actuator, the measurement from the sensor, the current time, and other variables is known as the state vector, and these values are transferred from the control

Figure 3. Three loops running concurrently in the LabVIEW server


AutomatL@bs Consortium

task to the communication task through RT FIFO queue blocks. The video task is a non-time-critical activity because the loss of some video frames is generally acceptable to the user. For most applications, sending five images per second is enough to obtain adequate visual feedback from the remote system (Salzmann et al. 1999). The communication task concatenates the current measurements (state vector) and the video frame into a new vector. This resulting vector is sent to the client using a TCP Write Block. In parallel, the control vector is received from the clients through the TCP Read Block and is passed to the control task through RT FIFO queues. The TCP protocol is used in both implementations because it guarantees packet delivery and bandwidth adaptation, although at the cost of extra transmission delays (Lim 2006). A possible alternative would be the UDP protocol, which provides better control of the transmission delay. However, UDP guarantees neither packet delivery nor bandwidth adaptation, so the designer is responsible for implementing these features. Once the server side is complete, the EJS application on the client side must be modified to exchange information with the LabVIEW server. In other words, the virtual lab must be transformed into a remote lab by receiving data from the real system instead of the simulated one. The steps that allow a virtual lab to connect to the server architecture are explained in the next section. First, a set of Java methods was programmed in EJS to control the connection with the LabVIEW server. Table 1 shows an example implementation of the methods connect(), disconnect(), sender(), and receiver(). Specifically, the upper part of Table 1 shows the excerpts of Java code used for establishing and releasing the connection with a server computer. TCP sockets are used to access the network layer. In Java, socket programming creates an object and generates calls to the

methods of the object. On the left, the connection is established in Line 4. To create a socket object, the domain name (or IP address) and the service port of the remote server are needed. Then, in Lines 5 and 6, the input/output stream buffers are created. These buffers act as FIFO queues whose filling and emptying depend on possible delays in the network communication. The disconnection from the server is made by invoking the close() method on the socket and on the input/output stream buffers (Lines 5, 6, and 7 in the upper right part of Table 1). The receiver() and sender() methods, in turn, should be launched on independent Java threads when the connect() method is invoked. The sender() method reports changes in the user view that affect the operation of the remote equipment (for example, a change in a controller parameter). The receiver() method recovers the incoming data sent from the LabVIEW server. As both pieces of code show, the format of the exchanged variables must be defined. In the receiver() method, the current time (t), liquid level (h), and input flow in automatic mode (qautomatic) are received. These values constitute the states (measurements) of the system. In the sender() method, the control mode (m/a), the input flow in manual mode (qman), the PID parameters (Kp, Ti, and Td), and the setpoint value (ref) are sent to the server side whenever a user interaction is detected. These data are rendered to the client with an EJS view. The interface contains the same graphical elements as the virtual lab, but now the dynamic behavior of the elements is updated using the measurements obtained from the server. Once the methods have been added to the EJS client, the programming logic that discriminates between working in simulation mode or remote mode has to be created.
This logic determines whether the variables of the hybrid lab are updated from the evolution of the mathematical model (virtual lab) or from the real measurements obtained from the server when the remote working mode is active (remote lab).
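As a concrete, hypothetical sketch of this discrimination logic, the evolution step below drives the view either from server measurements or from one Euler step of a one-tank model; the field names and model constants are assumptions for illustration, not the project's code:

```java
// Hypothetical sketch of the hybrid-lab evolution step on the EJS client:
// remote mode copies the measurements filled in by receiver(), while
// simulation mode integrates a simple one-tank model.
public class HybridEvolution {
    boolean remoteMode = false;
    double t = 0.0;   // current time shown in the view
    double h = 0.0;   // liquid level shown in the view

    // Latest values received from the LabVIEW server (set by receiver()).
    double serverTime, serverLevel;

    public void evolve(double dt, double inflow) {
        if (remoteMode) {
            // Remote lab: the real measurements drive the view.
            t = serverTime;
            h = serverLevel;
        } else {
            // Virtual lab: one Euler step of dh/dt = k*qin - c*sqrt(h)
            // (k and c are assumed model constants).
            double k = 1.0, c = 0.5;
            h += dt * (k * inflow - c * Math.sqrt(Math.max(h, 0.0)));
            t += dt;
        }
    }
}
```

The rest of the view code never needs to know which mode is active; it simply reads t and h after each evolution step.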



Table 1. Excerpt of Java code to communicate with the LabVIEW server from the EJS client

Connect with the server:

1  public boolean connect(){
2    connected = false;
3    try{
4      javaSocket = new Socket("onetank.dia.uned.es", 2055);
5      in = new DataInputStream(javaSocket.getInputStream());
6      out = new DataOutputStream(javaSocket.getOutputStream());
7      if (javaSocket != null) {   // if connected...
8        connected = true;         // ...the connection is ok
9        _play();                  // start the evolution
10     }
11   }catch (java.io.IOException io) {
12     System.out.println("Problems connecting to host.");
13   }
14   return connected;
15 }

Disconnect from the server:

1  public void disconnect(){
2    if (connected) {
3      if (javaSocket != null){
4        try {
5          in.close();          // close input stream
6          out.close();         // close output stream
7          javaSocket.close();  // close connection
8          javaSocket = null;
9          in = null;
10         out = null;
11         connected = false;
12       }catch (java.io.IOException e){
13         System.out.println("Close socket error.");
14       }
15     }
16   }
17 }

Receive data from the server:

1  public void receiver(){
2    if (connected) {
3      try {
4        time = in.readFloat();        // read time from server
5        level = in.readFloat();       // read level from server
6        qautomatic = in.readFloat();  // read input flow from server
7      }catch (java.io.IOException e) {
8        System.out.println("Error receiving data.");
9      }
10   }
11 }

Send data to the server:

1  public void sender(){
2    if (connected) {
3      try {
4        out.writeBoolean(m/a);  // write control mode (manual/automatic)
5        out.writeFloat(qman);   // write input flow in manual mode
6        out.writeFloat(Kp);     // write proportional gain
7        out.writeFloat(Ti);     // write integral time
8        out.writeFloat(Td);     // write derivative time
9        out.writeFloat(ref);    // write setpoint
10       out.flush();            // flush data to the server
11     }catch (java.io.IOException e) {
12       System.out.println("Error sending data.");
13     }
14   }
15 }
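The byte layout these methods assume can be exercised without a socket. The sketch below is hypothetical (in-memory streams replace the TCP connection, and the class name is assumed); it round-trips a state vector in the same big-endian format that DataOutputStream and DataInputStream use:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical check of the wire format expected by receiver(): the server
// writes the state vector as three big-endian floats (t, h, qautomatic).
public class StateVectorFormat {
    public static byte[] encode(float t, float h, float qAutomatic) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeFloat(t);
        out.writeFloat(h);
        out.writeFloat(qAutomatic);
        out.flush();
        return buf.toByteArray();
    }

    public static float[] decode(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        return new float[] { in.readFloat(), in.readFloat(), in.readFloat() };
    }
}
```

Both sides must agree on the field order and count; a mismatch silently shifts every subsequent read, which is why the chapter stresses defining the exchange format up front.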

Integrating Hybrid Laboratories into a Learning Management System

Remote and virtual control laboratories alone do not provide all the resources necessary to teach students in a distributed scenario. This section describes the Web infrastructure used to support the students' learning process. eMersion (Gillet et al. 2005) is the LMS tool we chose for publishing the virtual and remote laboratories on the Internet. This environment was designed to emulate the social interactions and collaborations that exist in a face-to-face (f2f) laboratory.


eMersion Description

Figure 4 shows a complete view of the eMersion experimentation environment during a practical session with a DC motor system in remote mode. From a structural point of view, the environment is composed of five independent Web applications: the navigation bar, the eJournal, the experimentation console, online information, and external applications. The navigation bar provides access to the other Web resources of the environment. From the link labeled "Access Protocol," users can obtain a complete user's guide for the environment.


Figure 4. Learning Management System to publish web-based labs

The eJournal resource provides a shared workspace in which users communicate and collaborate during the learning process. The eJournal allows students to save, retrieve, and share their experimental results and documents. Furthermore, the presentation of results and the discussions with the teaching staff can be carried out using the options it provides. Users can also organize the information collected during the experimental sessions through online repositories, and work tracking and awareness can be implemented on top of this information. The experimentation console corresponds to the EJS interfaces in which students carry out their experimental activities. These interfaces can interchange data with the eJournal space (see Figure 4). Thus, students can use the results obtained in the experimentation sessions (images of the system's evolution or data registers) to prepare their reports for the final assessment. Online information is a collection of HTML pages and PDF files that gives students access to all the documentation necessary to solve the laboratory assignments.

Finally, eMersion offers the ability to integrate external Web applications. In this context, the following subsection describes the automatic booking and authentication system developed to organize students' access to the physical resources. This application was successfully integrated into eMersion.

A Flexible Scheme for Authentication and Booking of Physical Resources

A flexible scheme that lets students book a physical resource located in the laboratory was added to the LMS. Essentially, students fill out a reservations database from the client through a Web interface. The system includes three main modules. First, a Java applet was developed to perform new bookings on the client side (see Client applet for bookings in Figure 5). Second, a centralized server application, also written in Java, manages reservations, synchronization, and communication between the client applet for bookings and the Lab-Server (see Bookings Main Server in Figure 5). Finally, an



Figure 5. A flexible scheme for bookings and the authentication process

additional Java module located in the Lab-Server was developed (see Java Interface in Figure 5). This module informs the Bookings Main Server of the current state of the Lab-Server and of other parameters that the central server requires. The full process for booking a physical resource in the laboratory is divided into two stages, described below (see Figure 5):

Reservation Phase: The flow of states during a reservation is as follows:

• The Applet for bookings requests a new reservation (step 1)
• The Bookings Main Server takes the request and saves it in a local database (DB) (step 2)
• The Bookings Main Server asks the Java Interface for its time zone (step 3)
• The Java Interface provides its time zone to the Bookings Main Server (step 3)
• The Bookings Main Server calculates the time lag and amends the timeslot
• The Bookings Main Server reports the new register to the Java Interface (step 4)
• The Java Interface receives the register and inserts it into the Lab-Server DB (step 5)
• The Bookings Main Server tells the client that the new reservation has been made (step 1)
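The "calculates the time lag and amends the timeslot" step can be expressed with java.time. The helper below is a hypothetical sketch (zone handling only, no database side; the class and method names are assumed):

```java
import java.time.LocalDateTime;
import java.time.ZoneId;

// Hypothetical sketch of the timeslot amendment: a slot stored in the
// Bookings Main Server's zone is re-expressed in the Lab-Server's zone,
// as reported by its Java Interface in step 3.
public class TimeslotAdjuster {
    public static LocalDateTime amend(LocalDateTime slotStart,
                                      ZoneId serverZone, ZoneId labZone) {
        return slotStart.atZone(serverZone)
                        .withZoneSameInstant(labZone)
                        .toLocalDateTime();
    }
}
```

Using zone identifiers rather than fixed offsets makes the correction robust to daylight-saving changes between the booking date and the reserved slot.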

Authentication Phase: The flow of states during authentication is as follows:

• The Applet for experimentation starts the process by sending the user credentials (step 6)
• The Identity Checking Module receives the keys and checks whether the user exists in the local DB. If the user exists, it then checks whether the connection attempt falls between the start time and the end time of the reserved timeslot (step 7)
• The Identity Checking Module sends the result of the check to the Applet for experimentation (step 6)
• If the check succeeds, the Applet for experimentation is granted access to the Target Plant (steps 6 and 8)
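The window check in step 7 amounts to a closed-interval test on the reserved timeslot; a hypothetical sketch (names assumed):

```java
import java.time.LocalDateTime;

// Hypothetical sketch of the Identity Checking Module's timeslot test: the
// connection attempt must fall inside the reserved [start, end] window.
public class SlotCheck {
    public static boolean allowed(LocalDateTime attempt,
                                  LocalDateTime start, LocalDateTime end) {
        return !attempt.isBefore(start) && !attempt.isAfter(end);
    }
}
```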

The Applet for bookings, in which students schedule their reservations for any experiment, is shown in Figure 4. When a student requests a reservation, the response of the bookings system must indicate the date and time assigned for the student to use the remote plant. Other booking and authentication systems with similar features can be found in the literature. One of the most relevant is the booking and authentication mechanism for the iLabs Shared System (http://icampus.mit.edu/ilabs/), developed at the Massachusetts Institute of Technology (MIT). This system employs a middle-tier Service Broker that manages the interaction between users and Lab-Servers. In this architecture, all reservations are hosted in the Service Broker, and users must pass through it each time they want to work with the real plants. Unlike the iLabs system, our system offers some additional features. When a user makes a new booking, the reservation is hosted and managed in a middle tier, as in the iLabs Shared System, but it is also stored in a simple database located in each Lab-Server. The subsequent authentication process is therefore carried out directly with the Lab-Server, bypassing the middle tier. Another advantage of this architecture is that if a Lab-Server holding valid bookings is damaged, those bookings can later be retrieved by the Lab-Server from the Central Server. Finally, the administrators of a Lab-Server can also manage bookings locally, so if there were problems with the central servers, bookings could still be made manually.

AUTOMATL@BS NETWORK

The AutomatL@bs network (http://lab.dia.uned.es/automatlab) is a consortium of seven Spanish universities that decided to expand their use of virtual and remote laboratories for engineering education to a national level. The universities taking part in this project are UNED, the University of Almería (UAL), the University of Alicante (UA), the Polytechnic University of Valencia (UPV), the Polytechnic University of Catalonia (UPC), Miguel Hernández University (UMH), and the University of León (UNILEON). The main challenge of this work has been to manage and coordinate the integration of the hardware, software, and human resources into a Web-based experimentation environment hosted by the Department of Computer Science and Automatic Control of UNED in Madrid. The main aims of this project were:

• Enabling students to access practical experiments that are not available at their universities.
• Increasing the quality and robustness of the network of virtual and remote laboratories for a larger number of students and teachers with different teaching concerns.

The Web-based laboratories were offered to a total of 112 master's degree candidates at the engineering schools of the consortium. Figure 6 shows the GUIs of the virtual and remote laboratories of each participating university. Each GUI has the same arrangement of graphical elements, which provides a uniform structure. The Web-based laboratories of the AutomatL@bs network were also documented following the same guidelines and criteria. In general, the documentation defines a set of tasks or activities that students should carry out so they can be evaluated effectively by the teaching staff. This sequence of activities was divided into two phases: PRE-Lab activities and Lab activities. PRE-Lab activities are based on the use of the experimentation console in simulation mode; the teaching team could thus be sure that each student had prior knowledge of the system before using it in an actual experiment. Lab activities are based on the use of the experimentation console in remote mode. Remote access to the system is granted by the teaching staff once students have finished their PRE-Lab work in simulation mode and the work is considered satisfactory.



Figure 6. The available remote systems in AutomatL@bs

Description of the Pedagogical Scenario

Students from each university worked on three of the nine available remote systems (three hosted at UNED and six at the other universities) offered by the AutomatL@bs project: one lab from their own university and two labs from other locations. Students were then required to use the system to learn how to operate the interfaces. Finally, after several sessions, the students could complement their work with the Web-based AutomatL@bs experimentation system at their convenience through an outside Internet connection. UNED students were first offered the chance to access the systems available at UNED (a servo-motor system, a heat-flow system, and a three-tank system). Later on, they could complete their work remotely through the Internet. During these experimental sessions, students were able to save their data measurements and parameters for writing their final reports. The students placed their reports in the eJournal space for evaluation, and teaching assistants from each university were in charge of the evaluation.

Outcomes

To obtain feedback regarding the use of the system, the students were required to complete evaluation questionnaires. We designed questions based on the guidelines of Ma and Nickerson (2006) to evaluate the infrastructure and technical quality of the system as well as its educational value and the experiences of students. Some of the more relevant questions are listed below.

Technical questions:

1. How would you describe the quality of the virtual laboratories?
2. How would you describe the quality of the remote laboratories?
3. Have you experienced hardware or software problems?
4. Did you appreciate the uniform structure of the client interfaces?
5. How was the navigation experience for the global system options?

Educational value questions:

1. How would you evaluate the quality of your learning with Web-based laboratories compared to traditional methods?
2. How would you describe your learning speed using remote and virtual labs compared to traditional methods?
3. In general terms, are you satisfied with the usability of the system?
4. What were the most important learning resources when you were learning to use the system?
5. How would you evaluate the level of difficulty of using the system?

Although this assessment was not an exhaustive evaluation, it provided initial information on what was necessary, or unnecessary, to include in this methodology for future engineering courses. The outcomes obtained from this survey are summarized in Table 2.

Sub-scale Number 1 provided a first general view of whether students felt satisfied with this new method for performing their practical experiments. The results showed that 19% and 69% of students strongly agreed and agreed, respectively, that they were satisfied with the system. Other questions about the advantages of using remote experiments in the educational process were also included. The results showed that the use of new technologies, especially the Internet, encouraged students to conduct most of their practical exercises using this resource. Sub-scale Numbers 2 and 3 show comparative information about learning with the new technological methods compared to traditional methods. In cases where students reported dissatisfaction (9%), the primary reason was that they were not able to work directly with the laboratory equipment. A way to solve this problem could be to apply an educational methodology based on blended learning: first, a face-to-face class would be held in which students could interact and experiment in situ with the real plant; the students would then be allowed to access the experimental environment remotely to complete their practical exercises. Regarding the quality of the hybrid laboratories (Sub-scale Numbers 4 and 5), most students positively evaluated their development in terms of user functionality. Any negative results might

Table 2. Summary of the survey outcomes

Sub-scale                                      A%   B%   C%   D%   E%
1. Satisfaction degree                         19   69    7    5    -
2. Learning compared to traditional methods    15   51   25    8    1
3. Facility of using the system                19   62   11    8    -
4. Quality of virtual labs                     33   48   15    4    -
5. Quality of remote labs                      25   38   25   10    2
6. Most important learning resource            18   44   27   11    -

Key to the scales:
1. A: Strongly Agree, B: Agree, C: Neutral, D: Disagree, E: Strongly Disagree
2. A: Much better, B: Better, C: Equal, D: Less, E: Much less
3. A: Strongly Agree, B: Agree, C: Neutral, D: Disagree, E: Strongly Disagree
4. A: Very good, B: Good, C: Acceptable, D: Bad, E: Very bad
5. A: Very good, B: Good, C: Acceptable, D: Bad, E: Very bad
6. A: Documentation, B: Questions to teacher, C: Simulation, D: Connection to plant, E: Others



have been a consequence of the Internet connection speed, because slow connections lead to delays. Some of the students performed their experiments using old dial-up connections (56 kbps), so the exchange of data with some processes was not fast enough and the user interfaces updated slowly. The experiments were also tested with low-speed ADSL lines (512/128 kbps), and the results were satisfactory. Finally, Sub-scale Number 6 shows that queries to the teaching staff and the documentation of the practical exercises were essential resources for positive student performance.
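A rough bandwidth estimate shows why dial-up struggled while low-speed ADSL sufficed. The 10 KB compressed-frame size below is an assumption for illustration; the chapter gives only the 5 frames/s figure and the line speeds:

```java
// Back-of-the-envelope video bandwidth estimate: frames per second times
// frame size in kilobytes, converted to kilobits per second.
public class BandwidthEstimate {
    public static double kbps(double framesPerSecond, double frameKiloBytes) {
        return framesPerSecond * frameKiloBytes * 8.0; // 1 KB = 8 kbit
    }
}
```

At 5 frames/s and an assumed 10 KB per frame, the video feed alone needs about 400 kbit/s, well beyond a 56 kbps modem but within the 512 kbps downstream of the ADSL lines mentioned above.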

CONCLUSION

Virtual and remote experimentation for engineering education can be considered a mature technology. However, the process of transforming a classic control experiment into an interactive Web-based laboratory is not an easy task. This chapter provides a systematic approach for developing prototypes of remote laboratories from a pedagogical perspective using three tools: EJS, LabVIEW, and eMersion. This approach incorporates the development of online experimentation environments and provides an effective scheme to switch between the simulation and teleoperation of real systems. The AutomatL@bs project has benefited the participating universities over the last three academic years. The results from the evaluation described above allowed us to debug the system and identify necessary improvements in the framework. First, the number and variety of available experiments will be increased by enrolling new universities in the AutomatL@bs project (with special interest in universities from South America). To cope with this challenge, other LMSs, such as Moodle or Sakai, are currently being evaluated. Second, the applets and all the materials are being adapted to the SCORM standards to simplify porting to


another LMS. Additionally, the applets of the simulated physical processes are being integrated into the ComPADRE digital library (http://www. compadre.org) to gain visibility. These changes could help integrate and deploy our project in other institutions. We will also attempt to let students carry out their practical experiments using other devices (such as mobile phones and PDAs) and user interfaces (including e-mail, Web forms, and HTML/JavaScript thin interfaces).

REFERENCES

Astrom, K. J. (2006, June). Challenges in control education. Paper presented at the 7th IFAC Symposium on Advances in Control Education (ACE), Madrid, Spain.

Bourne, J., Harris, D., & Mayadas, F. (2005). Online engineering education: Learning anywhere, anytime. International Journal of Engineering Education, 91(1), 131–146.

Brito, N., Ribeiro, P., Soares, F., Monteiro, C., Carvalho, V., & Vasconcelos, R. (2009, November). A remote system for water tank level monitoring and control - a collaborative case-study. Paper presented at the 3rd IEEE International Conference on e-Learning in Industrial Electronics (ICELIE), Porto, Portugal.

Callaghan, M. J., Harkin, J., McGinnity, T. M., & Maguire, L. P. (2006). Client-server architecture for remote experimentation for embedded systems. International Journal of Online Engineering, 2(4), 8–17.

Casini, M., Prattichizzo, D., & Vicino, A. (2004). The automatic control telelab: A web-based technology for distance learning. IEEE Control Systems Magazine, 24(3), 36–44. doi:10.1109/MCS.2004.1299531


Christian, W., & Esquembre, F. (2007). Modeling physics with Easy Java Simulations. The Physics Teacher, 45(10), 475–480. doi:10.1119/1.2798358

Dormido, S. (2004). Control learning: Present and future. Annual Reviews in Control, 28(1), 115–136. doi:10.1016/j.arcontrol.2003.12.002

Dormido, S., Canto, S. D., Canto, R. D., & Sánchez, J. (2005). The role of interactivity in control learning. International Journal of Engineering Education: Special Issue on Control Engineering Education, 21(6), 1122–1133.

Easy Java. (2010). EJS wiki homepage. Retrieved November 10, 2010, from http://www.um.es/fem/EjsWiki/

Eikaas, T. I., Foss, B. A., Solbjorg, O. K., & Bjolseth, T. (2006). Game-based dynamic simulations supporting technical education and training. International Journal of Online Engineering, 2(2), 1–7.

eMersion. (2010). eMersion project homepage. Retrieved November 10, 2010, from http://lawww.epfl.ch/page28147.html

Esquembre, F. (2005). Creación de simulaciones interactivas en Java. Madrid, Spain: Pearson Prentice Hall.

Gillet, D., El Helou, S., Marie, J., & Rosamund, S. (2009, September). Science 2.0: Supporting a doctoral community of practice in technology enhanced learning using social software. Paper presented at the 4th European Conference on Technology Enhanced Learning (EC-TEL), Nice, France.

Gillet, D., Nguyen, A. V., & Rekik, Y. (2005). Collaborative web-based experimentation in flexible engineering education. IEEE Transactions on Education, 48(4), 696–704. doi:10.1109/TE.2005.852592

LabVIEW. (2010). NI LabVIEW homepage. Retrieved November 10, 2010, from http://www.ni.com/labview/

Lim, D. (2006). A laboratory course in real-time software for the control of dynamic systems. IEEE Transactions on Education, 49(3), 346–354. doi:10.1109/TE.2006.879243

Ma, J., & Nickerson, J. V. (2006). Hands-on, simulated, and remote laboratories: A comparative literature review. ACM Computing Surveys, 38(3), 1–24.

Nguyen, A. V. (2007, July). Activity theoretical analysis and design model for web-based experimentation. Paper presented at the 12th International Conference on Human-Computer Interaction, Beijing, China.

Oppenheim, A., Willsky, A., & Hamid, S. (1996). Signals and systems (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Rosen, M. A. (2007). Future trends in engineering education. In Aung, W. (Ed.), Innovations 2007: World innovations in engineering education and research (pp. 1–11). Arlington, VA: International Network for Engineering Education and Research/Begell House Publishing.

Salzmann, C., & Gillet, D. (2002, July). Real-time interaction over the Internet. Paper presented at the 15th IFAC World Congress, Barcelona, Spain.

Salzmann, C., & Gillet, D. (2008). From online experiments to smart devices. International Journal of Online Engineering, 4(S1), 50–54.

Salzmann, C., Gillet, D., & Huguenin, P. (1999). Introduction to real-time control using LabVIEW with an application to distance learning. International Journal of Engineering Education, 16(3), 255–272.



Valera, A., Diez, J. L., Vallés, M., & Albertos, P. (2005). Virtual and remote control laboratory development. IEEE Control Systems Magazine, 25(1), 35–39. doi:10.1109/MCS.2005.1388798

Vargas, H., Sánchez, J., Duro, N., Dormido, R., Dormido-Canto, S., & Farias, G. (2008). A systematic two-layer approach to develop web-based experimentation environments for control engineering education. Intelligent Automation and Soft Computing, 14(4), 505–524.

Williams, R. (2007). Flexible learning for engineering. In Aung, W. (Ed.), Innovations 2007: World innovations in engineering education and research (pp. 279–290). Arlington, VA: International Network for Engineering Education and Research/Begell House Publishing.

Zutin, D. G., Auer, M. E., Bocanegra, J. F., López, E. R., Martins, A. C. B., Ortega, J. A., & Pester, A. (2008). TCP/IP communication between server and client in multi user remote labs applications. International Journal of Online Engineering, 4(3), 42–45.

ADDITIONAL READING

Abdulwahed, M., & Nagy, Z. K. (2009). Applying Kolb's experiential learning on laboratory education, case study. Journal of Engineering Education, 98(3), 283–294.

Aliane, N., Pastor, R., & Mariscal, G. (2010). Limitations of remote laboratories in control engineering education. International Journal of Online Engineering, 6(1), 31–33.

Christian, W., Esquembre, F., & Mason, B. (2009, September). Easy Java Simulations and the ComPADRE library. Paper presented at the 14th International Workshop on Multimedia in Physics Teaching and Learning (MPTL14), Udine, Italy.


de la Torre, L., Sánchez, J., & Dormido, S. (2009, September). The Fisl@bs portal: A network of virtual and remote laboratories for physics education. Paper presented at the 14th International Workshop on Multimedia in Physics Teaching and Learning (MPTL14), Udine, Italy.

Dormido, R., Vargas, H., Duro, N., Sánchez, J., Dormido-Canto, S., & Farias, G. (2008). Development of a web-based control laboratory for automation technicians: The three-tank system. IEEE Transactions on Education, 51(1), 35–44. doi:10.1109/TE.2007.893356

Duro, N., Dormido, R., Vargas, H., Dormido-Canto, S., Sánchez, J., Farias, G., & Dormido, S. (2008). An integrated virtual and remote control lab: The three-tank system as a case study. Computing in Science & Engineering, 10(4), 50–59. doi:10.1109/MCSE.2008.89

Fakas, G. J., Nguyen, A. V., & Gillet, D. (2005). The electronic laboratory journal: A collaborative and cooperative learning environment for web-based experimentation. Computer Supported Cooperative Work, 14(3), 189–216. doi:10.1007/s10606-005-3272-3

Gillet, D., Nguyen, A. V., & Rekik, Y. (2005). Collaborative web-based experimentation in flexible engineering education. IEEE Transactions on Education, 48(4), 696–704. doi:10.1109/TE.2005.852592

Gomes, L., & Bogosyan, S. (2007). Special section on e-learning and remote laboratories within engineering education - first part. IEEE Transactions on Industrial Electronics, 54(6), 3054–3056. doi:10.1109/TIE.2007.907007

Guzmán, J. L., Vargas, H., Sánchez, J., Berenguel, M., Dormido, S., & Rodríguez, F. (2007). Education research in engineering studies: Interactivity, virtual and remote labs. In Morales, A. V. (Ed.), Distance Education Issues and Challenges (pp. 131–167). Hauppauge, NY: Nova Science Publishers.

ILOUGH-LAB. (2010). The Ilough-lab: A process control lab in the Chemical Engineering Lab at Loughborough University, UK. Retrieved November 10, 2010, from http://www.ilough-lab.com

ISILAB. (2010). ISILab: Internet Shared Instrumentation Laboratory, University of Genoa. Retrieved November 10, 2010, from http://isilab.dibe.unige.it

Jara, C., Esquembre, F., Candelas, F., Torres, F., & Dormido, S. (2009, October). New features of Easy Java Simulations for 3D modelling. Paper presented at the 8th IFAC Symposium on Advances in Control Education (ACE09), Kumamoto, Japan.

LABSHARE. (2010). LabShare, University of Technology, Sydney. Retrieved November 10, 2010, from http://www.labshare.edu.au

Lareki, A., Martínez, J., & Amenabar, N. (2010). Towards an efficient training of university faculty on ICTs. Computers & Education, 54(2), 491–497. doi:10.1016/j.compedu.2009.08.032

Martín, C., Urquía, A., & Dormido, S. (2007). Object-oriented modelling of virtual-laboratories for control education. In Tzafestas, S. G. (Ed.), Web-based Control and Robotics Education (pp. 103–125). Springer Verlag. doi:10.1007/978-90-481-2505-0_5

Nguyen, A. V., Rekik, Y., & Gillet, D. (2006). Iterative design and evaluation of a web-based experimentation environment. In Lambropoulus, N., & Zaphiris, P. (Eds.), User-Centered Design of Online Learning Communities (pp. 286–313). Idea Group. doi:10.4018/978-1-59904-358-6.ch013

NUS. (2010). Internet remote experimentation, National University of Singapore. Retrieved November 10, 2010, from http://vlab.ee.nus.edu.sg/~vlab/

Restivo, M. T., & Silva, M. G. (2009). Portuguese universities sharing remote laboratories. International Journal of Online Engineering, 5(Special issue IRF'09), 16–19.

Tan, K. K., Wang, K. N., & Tan, K. C. (2005). Internet-based resources sharing and leasing system for control engineering research and education. International Journal of Engineering Education, 21(6), 1031–1038.

TELELAB. (2010). Automatic Control Telelab, University of Siena. Retrieved November 10, 2010, from http://act.dii.unisi.it

Vargas, H., Salzmann, Ch., Gillet, D., & Dormido, S. (2009, October). Remote experimentation mashup. Paper presented at the 8th IFAC Symposium on Advances in Control Education (ACE09), Kumamoto, Japan.

Vargas, H., Sánchez, J., Salzmann, Ch., Esquembre, F., Gillet, D., & Dormido, S. (2009). Web-enabled remote scientific environments. Computing in Science & Engineering, 11(3), 34–46. doi:10.1109/MCSE.2009.61

KEY TERMS AND DEFINITIONS

Control Engineering: An engineering discipline that applies control theory to design systems with predictable behaviors. The practice of control engineering uses sensors to measure the output performance of the device being controlled (e.g., a process or a vehicle), and those measurements can be used to provide feedback to the input actuators that make corrections toward the desired performance.

Controller: A device that monitors and affects the operational conditions of a given dynamical system. The operational conditions are typically referred to as output variables of the system, which can be affected by adjusting certain input variables.

Hybrid Laboratories: Web-based laboratories where students can work with a simulation of a dynamic system (virtual lab) or with its real counterpart (remote lab).

LabVIEW (Laboratory Virtual Instrumentation Engineering Workbench): A graphical programming environment from National Instruments used to develop sophisticated measurement, test, and control systems with intuitive graphical icons and wires that resemble a flowchart.

Model–View–Controller (MVC): An architectural pattern used in software engineering.

PID Controller: A proportional–integral–derivative controller is a generic control-loop feedback mechanism (controller) widely used in industrial control systems.

Sharable Content Object Reference Model (SCORM): A collection of standards and specifications for Web-based e-learning. It defines communications between client-side content and a host system called the run-time environment, which is commonly supported by a learning management system.

This work was previously published in Internet Accessible Remote Laboratories: Scalable E-Learning Tools for Engineering and Science Disciplines, edited by Abul K.M. Azad, Michael E. Auer and V. Judson Harward, pp. 206-225, copyright 2012 by Engineering Science Reference (an imprint of IGI Global).


Chapter 40

An Estimation of Distribution Algorithm for Part Cell Formation Problem Saber Ibrahim University of Sfax, Tunisia Bassem Jarboui University of Sfax, Tunisia Abdelwaheb Rebaï University of Sfax, Tunisia

ABSTRACT

The aim of this chapter is to propose a new heuristic for the Machine Part Cell Formation problem, an important step in the design of a Cellular Manufacturing system. The objective is to identify part families and machine groups and consequently to form manufacturing cells, with respect to minimizing the number of exceptional elements and maximizing the grouping efficacy. The proposed algorithm is a hybrid that combines a Variable Neighborhood Search heuristic with the Estimation of Distribution Algorithm. Computational results are presented and show that this approach is competitive and even outperforms existing solution procedures proposed in the literature.

INTRODUCTION

The principal objective of Group Technology is to reduce the intercellular flow of parts and to provide an efficient grouping of machines into cells. The main contribution of this chapter is to develop an efficient clustering heuristic based on

evolutionary algorithms and to apply the proposed heuristic for Machine Part Cell Formation Problem which includes the configuration and capacity management of manufacturing cells. We propose to apply a novel population based evolutionary algorithm called Estimation of Distribution Algorithm in order to form part families and machine cells simultaneously.

DOI: 10.4018/978-1-4666-1945-6.ch040

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


The objective of the proposed heuristic is to minimize exceptional elements and to maximize the goodness of clustering, and thus to minimize intercellular movements. In order to guarantee the diversification of solutions, we added an efficient local search technique, Variable Neighborhood Search, at the improvement phase of the algorithm. Many researchers have combined local search with evolutionary algorithms to solve this problem; however, the Estimation of Distribution Algorithm has not yet been applied to the general Group Technology problem. Furthermore, we use a modified structure of the probabilistic model within the proposed algorithm. In order to quantify the goodness of the obtained solutions, we present two evaluation criteria, namely the percentage of exceptional elements and the grouping efficacy. A comparative study was carried out against the best-known evolutionary algorithms as well as the well-known clustering methods.

LITERATURE REVIEW

A wide body of publications has appeared on the subject of Group Technology (GT) and Cellular Manufacturing Systems (CMS). The history of approaches to this problem began with classification and coding schemes. Several authors have proposed ways of classifying the methods for the Cell Formation Problem, including descriptive methods, cluster analysis procedures, graph partitioning approaches, mathematical programming approaches, artificial intelligence approaches, and other analytical methods. Burbidge (1963) was the first to develop a descriptive method for identifying part families and machine groups simultaneously. In his work “Production Flow Analysis” (PFA), Burbidge proposed an evaluative technique, inspired by an analysis of the information given in route cards,


to find a total division into groups, without any need to buy additional machine tools. Then, researchers applied array-based clustering techniques which use a binary matrix A called the “Part Machine Incidence Matrix” (PMIM) as input data. Given i and j, the indexes of parts and machines respectively, an entry aij of 1 means that part i is processed by machine j, whereas an entry of 0 indicates that it is not. The objective of the array-based techniques is to find a block diagonal structure of the initial PMIM by rearranging the order of both rows and columns; the allocation of machines to cells and of parts to the corresponding families is then trivial. McCormick et al. (1972) were the first to apply this type of procedure to the CFP. They developed the Bond Energy Analysis (BEA), which seeks to identify and display natural variable groups and clusters that occur in complex data arrays, and to uncover and display the associations and interrelations of these groups with one another. King (1980) developed the Rank Order Clustering (ROC) algorithm. In ROC, binary weights are assigned to each row and column of the PMIM. The process then gathers machines and parts by reorganizing columns and rows in decreasing order of their weights. Chan & Milner (1981) developed the Direct Clustering Algorithm (DCA), which forms component families and machine groups by restructuring the machine component matrix progressively; a systematic procedure is used instead of relying on intuition to determine what row and column rearrangements are required to achieve the desired result. King & Nakornchai (1982) improved the ROC algorithm by applying a quicker sorting procedure which locates rows or columns having an entry of 1 at the head of the matrix.
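As an illustration of the ROC idea just described, a minimal sketch might look like the following; this is a hedged reading of the procedure (rows as machines, columns as parts, with simplified tie-breaking), not King's exact formulation.

```python
def roc_order(matrix):
    """Illustrative sketch of Rank Order Clustering (King, 1980): read each
    row as a binary number over the current column order, sort rows by
    decreasing value, do the same for columns, and repeat until the
    ordering is stable."""
    m, p = len(matrix), len(matrix[0])
    rows, cols = list(range(m)), list(range(p))
    for _ in range(m + p):  # cap the iterations; ROC normally settles quickly
        row_w = {r: sum(matrix[r][c] << (p - 1 - j) for j, c in enumerate(cols))
                 for r in rows}
        new_rows = sorted(rows, key=lambda r: -row_w[r])
        col_w = {c: sum(matrix[r][c] << (m - 1 - i) for i, r in enumerate(new_rows))
                 for c in cols}
        new_cols = sorted(cols, key=lambda c: -col_w[c])
        if new_rows == rows and new_cols == cols:
            break
        rows, cols = new_rows, new_cols
    return rows, cols

# two natural blocks: machines {0, 2} share parts {0, 2}; machine 1 uses part 1
print(roc_order([[1, 0, 1],
                 [0, 1, 0],
                 [1, 0, 1]]))  # → ([0, 2, 1], [0, 2, 1])
```

Reordering the matrix by the returned row and column permutations brings the two blocks together on the diagonal.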
Chandrasekharan & Rajagopalan (1986a) proposed a modified ROC called MODROC, which takes the cells formed by the ROC algorithm and applies a hierarchical clustering procedure to them. Later, other array-based clustering techniques were proposed, namely


the Occupancy Value method of Khator & Irani (1987), the Cluster Identification Algorithm (CIA) of Kusiak & Chow (1987), and the Hamiltonian Path Heuristic of Askin et al. (1991). McAuley (1972) was the first to apply similarity coefficients to clustering problems. He applied the Single Linkage procedure to the CF problem and used the Jaccard coefficient, which is defined for any pair of machines as the ratio of the number of parts that visit both machines to the number of parts that visit at least one of them. Other clustering techniques were then developed, namely Single Linkage Clustering (SLC), Complete Linkage Clustering (CLC), Average Linkage Clustering (ALC), and Linear Cell Clustering (LCC). Kusiak (1987) proposed a linear integer programming model maximizing the sum of similarity coefficients defined between pairs of parts. The category most used in the literature in recent years is heuristics and metaheuristics. Such heuristics are based essentially on Artificial Intelligence approaches, including Genetic Algorithms (GA), Simulated Annealing (SA), Tabu Search (TS), Evolutionary Algorithms (EA), neural networks, and fuzzy mathematics. In what follows, we present some research papers that used this type of heuristic for designing CM systems. Boctor (1991) developed an SA approach to deal with large-scale problems. Sofianopoulos (1997) proposed a linear integer formulation of the CF problem and employed an SA procedure to improve solution quality, taking as objective the minimization of inter-cellular flow between cells. Caux et al. (2000) proposed an approach combining the SA method for the CF problem with a branch-and-bound method for routing selection. Lozano et al. (1999) presented a Tabu Search algorithm that systematically explores feasible machine cell configurations, determining the corresponding part families using a linear network flow model.
They used a weighted sum of intra-cell voids and inter-cellular moves to evaluate the quality of the solutions. Solimanpur

et al. (2003) developed an Ant Colony Optimization algorithm to solve the inter-cell layout problem by modelling it as a quadratic assignment problem. Kaparthi et al. (1993) proposed an algorithm based on neural networks for the part-machine grouping problem. Xu & Wang (1989) developed two approaches of fuzzy cluster analysis, namely fuzzy classification and fuzzy equivalence, in order to incorporate the uncertainty in the measurement of similarities between parts. They also presented a dynamic part-family assignment procedure using the methodology of fuzzy pattern recognition to assign new parts to existing part families. Recently, many researchers have focused on AI-based approaches for solving the part-machine grouping problem. Venugopal & Narendran (1992a) proposed a bi-criteria mathematical model with a solution procedure based on a genetic algorithm. Joines et al. (1996) presented an integer programming model solved using a Genetic Algorithm for the CF problem. Zhao & Wu (2000) presented a genetic algorithm to solve the machine-component grouping problem with multiple objectives: minimizing costs due to inter-cell and intra-cell part movements, minimizing the total within-cell load variation, and minimizing exceptional elements. Gonçalves & Resende (2002) developed a GA-based method which incorporates a local search to obtain machine cells and part families. The GA is responsible for generating sets of machine cells, and the mission of the local search heuristic is to construct the corresponding part families and to enhance their quality. Gonçalves & Resende (2004) then employed a similar algorithm to find the initial machine cells first and to obtain the final clusters by applying the local search. Mahdavi et al. (2009) presented a GA-based procedure to deal with the CF problem with nonlinear terms and integer variables. Stawowy (2006) developed a non-specialized Evolutionary Strategy (ES) for the CF problem.
His algorithm uses a modified permutation-with-separators encoding scheme and a unique concept of separator movements during mutation. Andrés



& Lozano (2006) were the first to apply the Particle Swarm Optimization (PSO) algorithm to the CF problem, with the objective of minimizing inter-cell movements while imposing a maximum cell size.
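Several of the surveyed similarity-based methods build on the Jaccard coefficient defined earlier (McAuley, 1972). As a small illustration, treating each machine as its row of the incidence matrix:

```python
def jaccard(machine_a, machine_b):
    """Jaccard similarity between two machines' rows of the part-machine
    incidence matrix: parts processed on both machines divided by parts
    processed on at least one of them."""
    both = sum(1 for a, b in zip(machine_a, machine_b) if a and b)
    either = sum(1 for a, b in zip(machine_a, machine_b) if a or b)
    return both / either if either else 0.0

# two machines over 5 parts: 2 parts in common, 4 in the union
m1 = [1, 1, 0, 1, 0]
m2 = [1, 0, 0, 1, 1]
print(jaccard(m1, m2))  # → 0.5
```

Clustering procedures such as SLC, CLC, and ALC then merge the machines (or clusters) with the highest pairwise similarity.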

ESTIMATION OF DISTRIBUTION ALGORITHM

The Estimation of Distribution Algorithm, first introduced by Mühlenbein & Paaß (1996), belongs to the Evolutionary Algorithms family. It adopts probabilistic models to reproduce individuals in the next generation, instead of crossover and mutation operations. This type of algorithm uses different techniques to estimate and sample the probability distribution. The probabilistic model is represented by conditional probability distributions for each variable. This model is estimated from the information of the individuals selected in the current generation, where good individuals are selected with respect to their fitness. The process is repeated until the stop criterion is met. Such a reproduction procedure allows the algorithm to search for optimal solutions efficiently. However, it considerably decreases the diversity of the genetic information in the generated population when the population size is not large enough. For this reason, the incorporation of a local search technique is encouraged in order to enhance the performance of the algorithm. As a result, the Estimation of Distribution Algorithm can reach good solutions by predicting population movements in the search space without needing many parameters. The main steps in this procedure are shown in the following pseudo code:

Estimation of Distribution Algorithm
1. Initialize the population according to some initial distribution model.
2. Form P' individuals from the current population using a selection method.
3. Build a probability model p(x) from the P' individuals, using both the information extracted from the selected individuals in the current population and the previously built model.
4. Sample p(x) by generating new individuals from the probability model and replace some or all individuals in the current population.
5. End the search if stop criteria are met; otherwise return to Step 2.

This method can be divided into two different classes. The first class assumes that there are no dependencies between the variables of the current solution during the search. These are known as non-dependency Estimation of Distribution Algorithms: Population Based Incremental Learning (Baluja, 1994) and the Univariate Marginal Distribution Algorithm (Mühlenbein & Paaß, 1996). The second class takes these variable dependencies into account: Mutual Information Maximization for Input Clustering (De Bonet et al., 1997), the Bivariate Marginal Distribution Algorithm (Pelikan & Mühlenbein, 1999), the Factorized Distribution Algorithm (Mühlenbein et al., 1999), and the Bayesian Optimization Algorithm (Pelikan et al., 1999a). Generally, non-dependency algorithms are expected to have worse modelling ability than those with variable dependencies (Zhang et al., 2004), but combining heuristic information or local search with non-dependency algorithms can compensate for this disadvantage.

Univariate EDAs

This category assumes that each variable is independent; that is, the algorithm does not consider any interactions among the variables of the solution. As a result, the probability model distribution p(x) becomes simply the product of the univariate marginal probabilities of all variables in the solution and is expressed as follows:


p(x) = ∏_{i=1}^{n} p(x_i)

Due to the simplicity of the model of distribution used, the algorithms in this category are computationally inexpensive, and they perform well on problems with no significant interaction among variables. In what follows, we present the well-known works related to this category.

Population Based Incremental Learning

Population Based Incremental Learning was proposed by Baluja (1994). The algorithm starts with the initialisation of a probability vector. In each iteration, it updates and samples the probability vector to generate new solutions. The main steps in this procedure are shown in the following pseudo code:

Population Based Incremental Learning
1. Initialise a probability vector p={p1,p2,...,pn} with 0.5 at each position. Here, each pi represents the probability of a 1 at the ith position of the solution.
2. Generate a population P of M solutions by sampling probabilities in p.
3. Select a set D from P consisting of N promising solutions.
4. Estimate univariate marginal probabilities p(xi) for each xi.
5. For each i, update pi using pi = pi + λ(p(xi) − pi).
6. For each i, if the mutation condition is passed, mutate pi using pi = pi(1−μ) + random(0 or 1)·μ.
7. End the search if stop criteria are met; otherwise return to Step 2.

Univariate Marginal Distribution Algorithm

The Univariate Marginal Distribution Algorithm was proposed by Mühlenbein & Paaß (1996). We note that it can be seen as a variant of Population Based Incremental Learning with λ=1 and μ=0. Different variants of the Univariate Marginal Distribution Algorithm have been proposed, and the mathematical analysis of their workflows has been carried out (Mühlenbein, 1998; Mühlenbein et al., 1999; Gonzalez et al., 2002). The main steps in this procedure are shown in the following pseudo code:

Univariate Marginal Distribution Algorithm
1. Generate a population P composed of M solutions.
2. Select a set P' from P consisting of N promising solutions.
3. Estimate univariate marginal probabilities p(xi) from P' for each xi.
4. Sample p(x) to generate M new individuals and replace P.
5. End the search if stop criteria are met; otherwise return to Step 2.
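As an illustration, a univariate EDA of this kind fits in a few lines of Python. The sketch below follows the UMDA steps for binary strings on the OneMax toy problem; it is a hedged sketch with illustrative parameter names, not the chapter's implementation, and the small smoothing term is our own addition to keep probabilities away from 0 and 1.

```python
import random

def umda(fitness, n, pop_size=60, n_select=20, iters=100, seed=0):
    """Sketch of the Univariate Marginal Distribution Algorithm for binary
    strings: re-estimate each marginal p(x_i) from the selected set every
    generation and resample the whole population from their product.
    (PBIL would instead keep a persistent probability vector and move it
    toward the marginals by a learning rate λ.)"""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=fitness, reverse=True)
        selected = pop[:n_select]                       # truncation selection
        # univariate marginals, lightly smoothed so no p(x_i) hits 0 or 1
        p = [(sum(s[i] for s in selected) + 1) / (n_select + 2)
             for i in range(n)]
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
    return max(pop, key=fitness)

# OneMax toy problem: maximise the number of 1s in the string
best = umda(sum, n=15)
print(sum(best))  # close to (usually exactly) 15
```

The absence of pairwise terms is what makes the model cheap: estimating it is one counting pass over the selected individuals.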

Bivariate EDAs

In contrast with the univariate case, the probability model contains factors involving the conditional probabilities of pairs of interacting variables. This class of algorithms performs better on problems where pair-wise interactions among variables exist. In what follows, we present the well-known works related to this category.

Mutual Information Maximization for Input Clustering

Mutual Information Maximization for Input Clustering uses a chain model of the probability distribution (de Bonet et al., 1997), which can be written as:



p(x) = p(x_π1 | x_π2) p(x_π2 | x_π3) ⋯ p(x_π(n−1) | x_πn) p(x_πn)

where Π={π1,π2,...,πn} is a permutation of the numbers {1,2,...,n} used as an ordering for the pair-wise conditional probabilities. At each iteration, the algorithm first tries to learn the linkage, using a greedy algorithm to find a permutation Π; this greedy search does not always yield an accurate model. Once the permutation Π is learnt, the algorithm estimates the pair-wise conditional probabilities and samples them to obtain the next set of solutions.

Combining Optimizers with Mutual Information Trees

Combining Optimizers with Mutual Information Trees, proposed by Baluja & Davies (1997, 1998), also uses pair-wise interactions among variables. The model of distribution used by this algorithm can be written as follows:

p(x) = ∏_{i=1}^{n} p(x_i | x_j)

where xj is known as the parent of xi, and xi as a child of xj. This model is more general than the chain model used by Mutual Information Maximization for Input Clustering, as two or more variables can have a common parent.

Bivariate Marginal Distribution Algorithm

The Bivariate Marginal Distribution Algorithm was proposed by Pelikan & Mühlenbein (1999) as an extension to the Univariate Marginal Distribution Algorithm. Its model of distribution can be seen as an extension to the Combining Optimizers with Mutual Information Trees model and can be written as follows:

p(x) = ∏_{x_k ∈ Y} p(x_k) · ∏_{x_i ∈ X∖Y} p(x_i | x_j)

where Y⊆X represents the set of root variables. As a result, the Bivariate Marginal Distribution Algorithm is the most general algorithm in this class and can cover both univariate and bivariate interactions among variables.

Multivariate EDAs

Here the model of probability distribution becomes more complex than those used by univariate and bivariate Estimation of Distribution Algorithms. Any algorithm considering interactions between variables of order higher than two can be placed in this class. As a result, the complexity of constructing such a model increases exponentially with the order of interaction, making it infeasible to search through all possible models. In what follows, we present the well-known works related to this category.

Extended Compact Genetic Algorithm

The Extended Compact Genetic Algorithm was proposed by Harik (1999) as an extension to the Compact Genetic Algorithm. Its model of distribution is distinct from the previously described models in that it only considers marginal probabilities and does not include conditional probabilities. It also assumes that a variable appearing in one set of interacting variables cannot appear in another set. The model of distribution used by the Extended Compact Genetic Algorithm can be written as follows:

p(x) = ∏_{k ∈ m} p(x_k)


where m is the set of disjoint subsets of the n variables and p(x_k) is the marginal probability of the set of variables x_k in subset k.

Factorised Distribution Algorithm

The Factorised Distribution Algorithm was proposed by Mühlenbein et al. (1999) as an extension to the Univariate Marginal Distribution Algorithm. The probability p(x), for a given linkage, can be expressed in terms of conditional probabilities between sets of interacting variables. In general, the Factorised Distribution Algorithm requires the linkage information in advance, which may not be available in a real-world problem.

Bayesian Optimization Algorithm

The Bayesian Optimization Algorithm was proposed by Pelikan et al. (1999a). The probabilistic model p(x) is expressed in terms of a set of conditional probabilities as follows:

p(x) = ∏_{i=1}^{n} p(x_i | π_i)

where πi is a set of variables having a conditional interaction with xi; no variable in πi can have xi or any child of xi as its parent. An extension called the hierarchical Bayesian Optimization Algorithm has also been proposed by Pelikan & Goldberg (2000). The idea is to improve the efficiency of the algorithm by using a Bayesian network with local structure (Chickering et al., 1997) to model the distribution, and a restricted tournament replacement strategy, based on the work of Harik (1994), to form the new population.

Estimation of Bayesian Network Algorithm

The Estimation of Bayesian Network Algorithm was proposed by Etxeberria & Larrañaga (1999) and Larrañaga et al. (2000), and also uses Bayesian networks as its model of probability distribution. The algorithm has been applied to various optimisation problems, such as graph matching (Bengoetxea et al., 2000, 2001b), partial abductive inference in Bayesian networks (de Campos et al., 2001), the job scheduling problem (Lozano et al., 2001b), the rule induction task (Sierra et al., 2001), the travelling salesman problem (Robles et al., 2001), partitional clustering (Roure et al., 2001), and knapsack problems (Sagarna & Larrañaga, 2001).

Learning Factorised Distribution Algorithm

The Learning Factorised Distribution Algorithm was proposed by Mühlenbein & Mahnig (1999b) as an extension to the Factorised Distribution Algorithm. The algorithm does not require the linkage in advance. In each iteration, it computes a Bayesian network and samples it to generate new solutions. The main steps of the Bayesian Optimization Algorithm (BOA), the Estimation of Bayesian Network Algorithm (EBNA) and the Learning Factorised Distribution Algorithm (LFDA) are shown in the following pseudo code:

BOA, EBNA and LFDA
1. Generate a population P of M solutions.
2. Select N promising solutions from P.
3. Estimate a Bayesian network from the selected solutions.
4. Sample the Bayesian network to generate M new individuals and replace P.
5. End the search if stop criteria are met; otherwise return to Step 2.



Markov Network Factorised Distribution Algorithm and Markov Network Estimation of Distribution Algorithm

The Markov Network Factorised Distribution Algorithm and the Markov Network Estimation of Distribution Algorithm were proposed by Santana (2003a, 2005). They use a Markov network (Pearl, 1988; Li, 1995) as the model of distribution for p(x). The first algorithm uses a technique called the junction graph approach, while the second uses a technique called the Kikuchi approximation to estimate the Markov network.

PROBLEM STATEMENT

Manufacturing Cell Formation consists of grouping, or clustering, machines into cells and parts into families according to their similar processing requirements. The best-known and most efficient idea for achieving the objective of cell formation is to convert the initial Part Machine Incidence Matrix into a matrix that has a block diagonal structure. Through this process, entries with a ‘1’ value are grouped to form mutually independent clusters, and those with a ‘0’ value are arranged outside these clusters. Once a block diagonal matrix is obtained, machine cells and part families are clearly visible. However, the process engenders intercellular movements that require extra cost or time, due to the presence of parts that are processed by machines not belonging to their corresponding cluster. These parts are called Exceptional Elements. As a result, the objective of the block diagonalization is to change the original matrix into a matrix form minimizing Exceptional Elements and maximizing the goodness of clustering. For the cell formation problem, this matrix can be regarded as a binary matrix A which shows the relationship between any given m machines and p parts. Rows and columns represent machines and parts respectively. Each element of the matrix is a binary entry aij, where an entry of 1 indicates that machine i processes part j, while an entry of 0 means the contrary. In Figure 1, we illustrate a (5×7) incidence matrix from King & Nakornchai (1982).

Figure 1. King & Nakornchai (1982) initial matrix

Figure 2 provides a block diagonal form of the initial matrix illustrated above. The obtained matrix does not contain any intercellular movement, which means that it represents the optimal solution for the given matrix, with 2 cells and 3 machines per cell. In this chapter, we deal with two efficient evaluation criteria, namely the Grouping Efficacy (GE) and the Percentage of Exceptional Elements (PE). The Grouping Efficacy, proposed by Kumar & Chandrasekharan (1990), is considered one of


Figure 2. A block diagonal matrix with no exceptional elements

the best criteria for distinguishing ill-structured matrices from well-structured ones as the matrix size increases, and it is expressed as follows:

GE = (e − e0(X)) / (e + ev(X))

Where: e0(X) is the number of Exceptional Elements in the solution X, e is the number of 1s in the Part Machine Incidence Matrix, and ev(X) is the number of voids in the solution X. The second evaluation criterion, the “Percentage of Exceptional Elements (PE)”, was developed by Chan & Milner (1982) and is expressed as follows:

PE = (e0(X) / e) × 100
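Both criteria can be computed directly from the incidence matrix and a candidate assignment. The following is a hedged sketch (rows as machines, columns as parts; function and argument names are ours, not the chapter's):

```python
def grouping_measures(A, machine_cell, part_family):
    """Grouping Efficacy and Percentage of Exceptional Elements for an
    incidence matrix A under a given machine-cell / part-family
    assignment. An exceptional element is a 1 outside the diagonal
    blocks; a void is a 0 inside them."""
    m, p = len(A), len(A[0])
    e = sum(A[i][j] for i in range(m) for j in range(p))       # all 1s
    e0 = sum(A[i][j] for i in range(m) for j in range(p)
             if machine_cell[i] != part_family[j])             # exceptional elements
    ev = sum(1 - A[i][j] for i in range(m) for j in range(p)
             if machine_cell[i] == part_family[j])             # voids
    GE = (e - e0) / (e + ev)
    PE = 100.0 * e0 / e
    return GE, PE

# perfect 2-cell block diagonal matrix: GE = 1, PE = 0
A = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1]]
print(grouping_measures(A, [0, 0, 1], [0, 0, 1, 1]))  # → (1.0, 0.0)
```

Any 1 moved outside its block simultaneously raises PE and lowers GE, which is why the chapter treats the two criteria as complementary objectives.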

Some other performance measurements can be used to evaluate manufacturing cell design results. In what follows, we present some of them. The Grouping Efficiency was developed by Chandrasekharan & Rajagopalan (1989). It

expresses the goodness of the obtained solutions and depends on the utilization of machines within cells and on inter-cell movements. A grouping efficiency of 1 indicates that there are no voids and no exceptional elements in the diagonal blocks, which implies a perfect clustering of parts and machines. Although grouping efficiency has been widely used in the literature, it has an important limitation: its inability to discriminate good-quality groupings from bad ones. Indeed, when the matrix size increases, the effect of 1s in the off-diagonal blocks becomes smaller, and in some cases the effect of inter-cell moves is not reflected in the grouping efficiency. The Machine Utilization Index (MUI) is defined as the percentage of the time that the machines within cells are being utilized most effectively, and it is expressed as follows:

MUI = e / ∑_i (m_i × p_i)

where mi indicates the number of machines in cell i and pi indicates the number of parts in cell i. The Group technology efficiency is defined as the ratio of the difference between the maximum number of inter-cell travels possible and the number of inter-cell travels actually required by the system, to the maximum number of inter-cell travels possible.



The Group efficiency is defined as the ratio of the difference between the total number of maximum external cells that could be visited and the total number of external cells actually visited by all parts, to the total number of maximum external cells that could be visited. The Global efficiency is defined as the ratio of the total number of operations performed within the suggested cells to the total number of operations in the system.

PROPOSED EDA FOR MPCF PROBLEM (EDA-CF)

Solution Representation and Initial Population

Generally, for a Cell Formation Problem, a solution is represented by an m-dimensional vector X=[x1,x2,...,xm] where xi represents the assignment of machine i to a specific cell. The problem consists in partitioning the set of m machine assignments into a given number of cells. The created solutions must respect all the constraints defined in Section 3.3. We choose to generate the initial population randomly, following a uniform distribution.
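The encoding and uniform initialization above can be sketched as follows (a minimal sketch; the function name and parameters are illustrative, and the chapter's feasibility constraints are not enforced here):

```python
import random

def random_population(pop_size, m, C, seed=0):
    """Random initial population for the cell formation EDA: each
    individual is an m-vector X where X[i] is the cell assigned to
    machine i, drawn uniformly from the C cells."""
    rng = random.Random(seed)
    return [[rng.randrange(C) for _ in range(m)] for _ in range(pop_size)]

pop = random_population(pop_size=4, m=5, C=2)
print(pop)  # 4 individuals, each assigning 5 machines to cells 0..1
```

Infeasible individuals (empty cells, over-capacity cells) are not repaired at initialization; they are discouraged later through the penalty term of the fitness functions.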

Selection

The goal is to allow good individuals to be selected more often for reproduction. We adopt a truncation selection procedure to create new individuals: in each iteration, we randomly select P1 individuals from the 50% best individuals in the current population. These P1 individuals will

be reproduced in the next generation using the probabilistic model to form new individuals.

Probabilistic Model and Creation of New Individuals

After the selection phase, a probabilistic model is applied to the P1 selected individuals in order to generate new individuals. The probabilistic model provides the assignment probability of machine i to cell j and is expressed in Box 1, where ε>0 is a factor which guarantees that the model provides probabilities Pij≠0.

Replacement

Replacement is the final step of our search procedure. It is based on the following idea: when a new individual is created, we compare it to the worst individual in the current population and retain the better one.

Fitness Function

A fitness function is used for evaluating the aptitude of an individual to be kept, or to be used for reproducing new individuals in the next generation. In the proposed algorithm, we use two fitness functions, F1 and F2, to pursue the objectives of minimizing the percentage of Exceptional Elements and maximizing the Grouping Efficacy, respectively. Let mi be the number of machines assigned to cell i. We define F1 and F2 as follows:

F1(X) = e0(X) + Pen(X)

Box 1.

Pij = (number of times machine i appears in cell j + ε) / (number of selected individuals + C × ε)
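The Box 1 model is a smoothed frequency count over the selected individuals. A minimal sketch (function and variable names are ours):

```python
def assignment_probabilities(selected, C, eps=0.1):
    """Probabilistic model of Box 1: P_ij is the smoothed frequency with
    which machine i is assigned to cell j among the selected individuals;
    eps > 0 keeps every probability strictly positive."""
    n_sel, m = len(selected), len(selected[0])
    P = [[0.0] * C for _ in range(m)]
    for i in range(m):
        for j in range(C):
            count = sum(1 for X in selected if X[i] == j)
            P[i][j] = (count + eps) / (n_sel + C * eps)
    return P

selected = [[0, 1, 1], [0, 0, 1]]   # two selected individuals, 3 machines, 2 cells
P = assignment_probabilities(selected, C=2)
print(P[0])  # machine 0 appears in cell 0 twice: [(2+0.1)/2.2, (0+0.1)/2.2]
```

Each row of P sums to 1, so a new individual can be sampled by drawing a cell for every machine from its row.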


and

F2(X) = GE(X) − Pen(X)

where

Pen(X) = α1 ∑i=1..C max{0, mi − kmax} + α2 ∑i=1..C max{0, 1 − mi}

expresses the distance between the solution X and the feasible space. This penalty under-evaluates the fitness of a solution X when X violates the constraints of the problem; i.e., a penalty is incurred either when the number of machines assigned to a cell exceeds the cell capacity or when machines are assigned to a number of cells that exceeds the fixed number of cells C.

Variable Neighborhood Search Algorithm
Variable Neighborhood Search is a recent metaheuristic for combinatorial optimization developed by Mladenović & Hansen (1997). The basic idea is to explore different neighborhood structures and to change them within a local search algorithm, using shaking strategies to escape from local optima and identify better ones. The main steps of this procedure are shown in the following pseudocode:

Variable Neighborhood Search
Select the set of neighborhood structures Nk, k = {1, 2, ..., nmax}, that will be used in the search; find an initial solution X; choose a stopping condition. Repeat the following steps until the stopping condition is met:
Set k = 1
Repeat the following steps until all neighborhood structures are used:
1. Shaking: generate a point X′ at random from the kth neighborhood of X (X′ ∈ Nk(X)).
2. Local search: apply some local search method with X′ as the initial solution; denote by X′′ the obtained local optimum.
3. Move or not: if the local optimum X′′ is better than the incumbent, or if some acceptance criterion is met, move there (X ← X′′) and set k = 1; otherwise, set k ← k + 1.
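The VNS loop above can be sketched generically as follows. This is an illustrative skeleton, not the authors' implementation; the shaking and local-search functions shown are toy stand-ins over a 0/1 vector with `sum` as the fitness to minimize.

```python
import random

def vns(x0, neighborhoods, local_search, fitness, max_iter=10):
    """Basic VNS loop: shake in the k-th neighborhood, descend with a
    local search, and move (resetting k to 1) only when the local
    optimum improves on the incumbent."""
    x = list(x0)
    for _ in range(max_iter):
        k = 1
        while k <= len(neighborhoods):
            x_shaken = neighborhoods[k - 1](x)   # X' drawn from N_k(X)
            x_local = local_search(x_shaken)     # X'' = local optimum
            if fitness(x_local) < fitness(x):    # move or not
                x, k = x_local, 1
            else:
                k += 1
    return x

def flip_k(x, k):
    """Toy shaking move: flip k random bits of a 0/1 vector."""
    y = list(x)
    for i in random.sample(range(len(y)), k):
        y[i] = 1 - y[i]
    return y

def clear_ones(x):
    """Toy local search: with fitness = sum, the all-zero vector is
    always the reachable local optimum."""
    return [0] * len(x)

random.seed(1)
best = vns([1, 0, 1, 1], [lambda x, k=k: flip_k(x, k) for k in (1, 2)],
           clear_ones, fitness=sum, max_iter=5)
```

Note the `k=k` default argument in the lambdas, which binds each neighborhood to its own shaking distance.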

Local Search Procedure
Generally, a local minimum with respect to one neighborhood structure is not necessarily a local optimum with respect to another. For this reason, we use two local search procedures based on two different neighborhood structures. The first neighborhood structure consists of selecting one machine and inserting it into a new cell. The second consists of selecting two machines from two different cells and swapping them. We apply these two local search procedures iteratively until no further improvement of the current solution is possible.
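A combined descent over the two neighborhoods (insertion and swap) can be sketched as below. This is an illustrative sketch under the vector encoding used earlier; the toy fitness counts mismatches against a known target grouping and is not the chapter's objective.

```python
from itertools import combinations

def local_search(x, n_cells, fitness):
    """Iterate the two moves until neither improves the solution:
    (1) insertion: reassign one machine to a different cell;
    (2) swap: exchange the cells of two machines in different cells."""
    best = list(x)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):                       # insertion moves
            for j in range(n_cells):
                if j != best[i]:
                    cand = list(best)
                    cand[i] = j
                    if fitness(cand) < fitness(best):
                        best, improved = cand, True
        for i, k in combinations(range(len(best)), 2):   # swap moves
            if best[i] != best[k]:
                cand = list(best)
                cand[i], cand[k] = cand[k], cand[i]
                if fitness(cand) < fitness(best):
                    best, improved = cand, True
    return best

# Toy fitness: number of mismatches against a known good grouping.
target = [0, 0, 1, 1]
result = local_search([1, 0, 1, 0], n_cells=2,
                      fitness=lambda x: sum(a != b for a, b in zip(x, target)))
```

Running both move types in the same improvement loop mirrors the text: a solution is only returned once it is locally optimal for both neighborhood structures.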

Shaking Phase
The main idea is to define a set of neighborhood structures that yield a distance equal to k between the solution X and the new neighbor solution X′, where the distance is defined as the number of positions in which the two vectors X and X′ differ. We therefore define Nk as the neighborhood structure obtained by applying k random insertion moves.
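Under that definition, shaking can be sketched as k random insertion moves on distinct machines (an illustrative sketch; `shake` is our name):

```python
import random

def shake(x, k, n_cells):
    """N_k(X): apply k random insertion moves on distinct machines,
    each reassigning the machine to a different random cell, so the
    result differs from X in exactly k positions."""
    y = list(x)
    for i in random.sample(range(len(y)), k):
        y[i] = random.choice([j for j in range(n_cells) if j != y[i]])
    return y

random.seed(2)
x = [0, 0, 1, 1, 2]
x_prime = shake(x, k=2, n_cells=3)
distance = sum(a != b for a, b in zip(x, x_prime))
```

Because the k machines are distinct and each is forced into a different cell, the Hamming distance between X and X′ is exactly k, as the text requires.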


COMPARATIVE STUDY
To show the competitiveness of the proposed EDA-CF algorithm, this section provides a comparative study against well-known approaches that have treated the Cell Formation problem. In all experiments, the proposed algorithm was coded in C++ and run on a Pentium IV computer with a 3.2 GHz processor and 1 GB of memory.

Test Data Set
To evaluate the goodness of the clusters obtained by the proposed heuristic for the MPCF problem, 30 problems taken from the literature were tested. These data sets cover a variety of sizes (from 5 machines and 7 parts to 40 machines and 100 parts), difficulty levels, and both well-structured and ill-structured matrices. For each instance, the initial matrix is solved by the Estimation of Distribution Algorithm and then improved by the Variable Neighborhood Search procedure; the cells are then formed and the machine layout in each cell is obtained optimally. Table 1 shows the different problems and their characteristics. The columns give, respectively, the source of the data set, the problem size, the number of cells C, the maximum number of machines per cell kmax, and the matrix density. All problems can be easily accessed from the references, and they are transcribed directly from the original articles in which they appeared. The appendix gives the block diagonal matrices of the solutions improved by the proposed algorithm. The maximum permissible number of cells C has been set equal to the best-known number of cells found in the literature. The following equation expresses the density of the initial binary matrix, which indicates how the one-entries are distributed inside the matrix:


density = (∑i=1..m ∑j=1..n aij) / (m × n)
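The density formula amounts to the fraction of one-entries in the m×n incidence matrix, as in this small illustrative sketch (`matrix_density` is our name):

```python
def matrix_density(a):
    """Density of an m-by-n binary machine-part incidence matrix:
    (sum of all a_ij) / (m * n), i.e. the fraction of one-entries."""
    m, n = len(a), len(a[0])
    return sum(sum(row) for row in a) / (m * n)

# 3 ones in a 2-by-3 matrix -> density 0.5
d = matrix_density([[1, 0, 1],
                    [0, 1, 0]])
```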

Comparative Study
In this section, we evaluate the proposed algorithm by comparing it with the best results obtained by several well-known algorithms with respect to the Grouping Efficacy and the Percentage of Exceptional Elements measures. In all tests, the proposed EDA-CF algorithm proved competitive against the best available solutions for the same required number of cells. As a stopping condition for our algorithm, we fixed the maximal computational time to 5 seconds and the maximal number of iterations of the Variable Neighborhood Search algorithm to 3. The parameter values were fixed as: ε = 0.1; α1 = 50; α2 = 500; P = 200; and P1 = 3.

Comparison Respecting the Grouping Efficacy Measure
In this subsection we perform a comparative study against the best algorithms presented in the literature. These algorithms can be classified into two categories. The first category corresponds to population-based algorithms, including the Genetic Algorithm (GA) of Onwubolu & Mutingi (2001), the Grouping Genetic Algorithm (GGA) of Brown & Sumichrast (2001), the Evolutionary Algorithm (EA) of Gonçalves & Resende (2004) and the Hybrid Grouping Genetic Algorithm (HGGA) of James et al. (2007). The second category comprises clustering-based methods, including ZODIAC of Chandrasekharan & Rajagopalan (1987), GRAFICS of Srinivasan & Narendran (1991) and the MST Clustering Algorithm of Srinivasan (1994). Table 2 reports the results obtained by the proposed algorithm alongside these algorithms, whose results were taken from the original publications.


Table 1. Test problems from cellular manufacturing literature

No.  References                                 Size     C    kmax   Density
1    King & Nakornchai, 1982                    5×7      2    4      0.400
2    Waghodekar & Sahu, 1984                    5×7      2    5      0.5714
3    Seifoddini, 1989                           5×18     2    12     0.5111
4    Kusiak & Cho, 1992                         6×8      2    6      0.2987
5    Kusiak & Chow, 1987                        7×11     3    4      0.2250
6    Boctor, 1991                               7×11     3    4      0.2044
7    Seifoddini & Wolfe, 1986                   8×12     3    5      0.6100
8    Chandrasekharan & Rajagopalan, 1986        8×20     3    9      0.2400
9    Chandrasekharan & Rajagopalan, 1986        8×20     2    11     0.3067
10   Mosier & Taube, 1985                       10×10    3    4      0.3223
11   Chan & Milner, 1982                        10×15    3    5      0.3646
12   Stanfel, 1985                              14×14    5    6      0.1726
13   McCormick et al., 1972                     16×24    6    7      0.2240
14   King, 1980                                 16×43    5    13     0.1831
15   Mosier & Taube, 1985                       20×20    5    5      0.2775
16   Carrie, 1973                               20×35    4    10     0.1957
17   Boe & Cheng, 1991                          20×35    5    8      0.2186
18   Chandrasekharan & Rajagopalan, 1989 - 1    24×40    7    8      0.1365
19   Chandrasekharan & Rajagopalan, 1989 - 2    24×40    7    8      0.1354
20   Chandrasekharan & Rajagopalan, 1989 - 3    24×40    7    8      0.1437
21   Chandrasekharan & Rajagopalan, 1989 - 4    24×40    9    8      0.1365
22   Chandrasekharan & Rajagopalan, 1989 - 5    24×40    9    7      0.1375
23   Chandrasekharan & Rajagopalan, 1989 - 6    24×40    9    7      0.1365
24   McCormick et al., 1972                     27×27    4    12     0.2977
25   Kumar & Vanelli, 1987                      30×41    11   6      0.1041
26   Stanfel, 1985                              30×50    12   7      0.1033
27   Stanfel, 1985                              30×50    11   7      0.1113
28   King & Nakornchai, 1982                    36×90    9    27     0.0935
29   McCormick et al., 1972                     37×53    2    35     0.4895
30   Chandrasekharan & Rajagopalan, 1987        40×100   10   6      0.1041

As seen in Table 2, in all the benchmark problems, the grouping efficacy of the solution obtained by the proposed method is either better than that of the other methods or equal to the best one. We note that the solutions obtained by the GA method for problems 1, 7, 13, 24, 28 and 29 were not available. In five problems, namely 20, 21, 22, 23 and 24, the grouping efficacy of the solution obtained by the proposed method is better than that of all other methods; in other words, the proposed method outperforms all the other methods, and the best solutions for these problems are reported in this paper for the first time. In the problems 2, 3, 9, 15 and 17, the solution obtained by the proposed method is as good as the best solution available in the literature. In five problems, namely 4, 8, 10, 18 and 19, all the methods obtained the same grouping efficacy.

Compared with the clustering methods, the results obtained by the proposed algorithm are equal to or better than those of the ZODIAC, GRAFICS and MST methods in all cases except problems 25 and 30. More specifically, for 6 problems (23%) the EDA-CF obtains grouping efficacy values equal to the best ones found by the three compared clustering methods, and for 19 problems (73%) it improves on them.

Table 2. Summary of GE performance evaluation results (grouping efficacy by column; values are listed for problems 1-30 in order, with unavailable entries omitted)

No: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
Size: 5×7 5×7 5×18 6×8 7×11 7×11 8×12 8×20 8×20 10×10 10×15 14×24 16×24 16×43 20×20 20×35 20×35 24×40 24×40 24×40 24×40 24×40 24×40 27×27 30×41 30×50 30×50 36×90 37×53 40×100
C: 2 2 2 2 2 3 5 3 2 3 3 4 6 4 5 4 5 7 7 7 7 7 7 6 14 13 14 17 2 10
GA: 62.50 77.36 76.92 50.00 70.37 85.25 55.91 72.79 92.00 63.48 86.25 34.16 66.30 44.44 100.00 85.11 73.51 37.62 34.76 34.06 40.96 48.28 37.55 83.90
GGA: 82.35 69.57 79.59 76.92 60.87 70.83 69.44 85.25 55.32 75.00 92.00 72.06 51.58 55.48 40.74 77.02 57.14 100.00 85.11 73.51 52.41 46.67 45.27 52.53 61.39 57.95 50.00 43.78 52.47 82.25
EA: 73.68 52.50 79.59 76.92 53.13 70.37 68.30 85.25 58.72 69.86 92.00 69.33 52.58 54.86 42.96 76.22 58.07 100.00 85.11 73.51 51.97 47.06 44.87 54.27 58.48 59.66 50.51 42.64 56.42 84.03
HGGA: 82.35 69.57 79.59 76.92 60.87 70.83 69.44 85.25 58.72 75.00 92.00 72.06 52.75 57.53 43.18 77.91 57.98 100.00 85.11 73.51 53.29 48.95 47.26 54.02 63.31 59.77 50.83 46.35 60.64 84.03
ZODIAC: 73.68 56.52 39.13 68.30 85.24 58.33 70.59 92.00 64.36 32.09 53.76 21.63 75.14 100.00 85.10 37.85 20.42 18.23 17.61 52.14 33.46 46.06 21.11 32.73 52.21 83.92
GRAFICS: 73.68 60.87 53.12 68.30 85.24 58.33 70.59 92.00 64.36 45.52 54.39 38.26 75.14 100.00 85.10 73.51 43.27 44.51 41.67 47.37 55.43 56.32 47.96 39.41 52.21 83.92
MST: 85.24 58.72 70.59 64.36 48.70 54.44 75.14 100.00 85.10 73.51 51.81 44.72 44.17 51.00 55.29 58.70 46.30 40.05 83.66
EDA-CF: 73.68 69.57 79.59 76.92 58.62 70.37 68.30 85.25 58.72 70.59 92.00 70.51 51.96 54.86 43.18 76.27 57.98 100.00 85.11 76.97 72.92 53.74 48.95 54.98 45.22 59.43 50.78 45.94 55.43 83.81
CPU: 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.015 0.015 0.046 0.031 1.232 0.078 0.093 0.031 0.092 5.171 0.732 0.670 0.233 7.260 0.562 0.447 1.406 5.094 4.318 7.421

Comparison Respecting the Percentage of Exceptional Elements Measure
Table 3 provides a comparison of the proposed algorithm against the best results available in the literature, made with respect to the Percentage of Exceptional Elements criterion. PEa denotes the best-known Percentage of Exceptional Elements found in the literature. We note that the compared solutions for problems 15, 16, 17, 24, 26, 27, 28 and 29 were not available. The results show that, in all the benchmark problems, the number of exceptional elements of the solution obtained by the proposed method is either lower than the best reached values or equal to the best ones. In eleven problems, namely 3, 6, 7, 12, 13, 14, 19, 20, 21, 22 and 23, the PE of the solution obtained by the EDA-CF is better than that of all other methods; in other words, the proposed method outperforms all the other methods. In nine problems, namely 1, 4, 5, 8, 9, 10, 11, 18 and 25, all the methods obtained the same Percentage of Exceptional Elements.

Table 3. Comparison between the obtained results and the best-known results with respect to the PE criterion

No.  Problem Source                             Size     C    PE      CPU    PEa
1    King & Nakornchai, 1982                    5×7      2    0.000   0.000  0.000
2    Waghodekar & Sahu, 1984                    5×7      2    0.150   0.000  0.125
3    Seifoddini, 1989                           5×18     2    0.000   0.000  0.1957
4    Kusiak & Cho, 1992                         6×8      2    0.0909  0.000  0.0909
5    Kusiak & Chow, 1987                        7×11     2    0.1304  0.000  0.1304
6    Boctor, 1991                               7×11     3    0.0952  0.000  0.1905
7    Seifoddini & Wolfe, 1986                   8×12     5    0.1714  0.000  0.2857
8    Chandrasekharan & Rajagopalan, 1986        8×20     3    0.1475  0.000  0.1475
9    Chandrasekharan & Rajagopalan, 1986        8×20     3    0.2967  0.000  0.2967
10   Mosier & Taube, 1985                       10×10    3    0.000   0.000  0.000
11   Chan & Milner, 1982                        10×15    3    0.000   0.015  0.000
12   Stanfel, 1985                              14×24    4    0.0328  0.015  0.1639
13   McCormick et al., 1972                     16×24    8    0.3721  0.031  0.4302
14   King, 1980                                 16×43    4    0.2063  0.031  0.2222
15   Mosier & Taube, 1985                       20×20    6    0.3693  0.078  -
16   Carrie, 1973                               20×35    5    0.1985  0.031  -
17   Boe & Cheng, 1991                          20×35    5    0.1764  0.062  -
18   Chandrasekharan & Rajagopalan, 1989 - 1    24×40    7    0.000   0.031  0.000
19   Chandrasekharan & Rajagopalan, 1989 - 2    24×40    7    0.0308  0.451  0.0769
20   Chandrasekharan & Rajagopalan, 1989 - 3    24×40    7    0.1087  5.171  0.1527
21   Chandrasekharan & Rajagopalan, 1989 - 4    24×40    7    0.0992  0.732  0.1527
22   Chandrasekharan & Rajagopalan, 1989 - 5    24×40    7    0.2652  0.670  0.3740
23   Chandrasekharan & Rajagopalan, 1989 - 6    24×40    7    0.2824  0.233  0.4214
24   McCormick et al., 1972                     27×27    6    0.2350  0.203  -
25   Kumar & Vanelli, 1987                      30×41    14   0.1094  0.219  0.1094
26   Stanfel, 1985                              30×50    13   0.2754  3.109  -
27   Stanfel, 1985                              30×50    14   0.1225  0.406  -
28   King & Nakornchai, 1982                    36×90    17   0.1254  0.969  -
29   McCormick et al., 1972                     37×53    3    0.000   0.109  -
30   Chandrasekharan & Rajagopalan, 1987        40×100   10   0.0907  7.421  0.0857

CONCLUSION
Cellular manufacturing is a production technique that increases productivity and efficiency on the production floor. In this chapter, we have presented the first Estimation of Distribution Algorithm (EDA) method to solve the Machine Part Cell Formation problem. Detailed numerical experiments have been carried out to investigate the EDA's performance. Although the EDA approach does not require any problem-specific information, the use of sensible heuristics can improve the optimisation and speed up convergence. For this reason, we used the Variable Neighborhood Search (VNS) procedure in the improvement phase of the algorithm. The results from the test cases presented here show that the proposed EDA-CF algorithm is a very competitive algorithm compared with the previously published metaheuristics applied to the same problem. It has been shown that EDAs provide efficient and accurate solutions for the test cases. The results are promising and encourage further studies on other versions of Group Technology problems, introducing sequence data, machine utilization and routings.

REFERENCES
Andrés, C., & Lozano, S. (2006). A particle swarm optimization algorithm for part–machine grouping. Robotics and Computer-Integrated Manufacturing, 22, 468–474. doi:10.1016/j.rcim.2005.11.013

Askin, R. G., Creswell, J. B., Goldberg, J. B., & Vakharia, A. J. (1991). A Hamiltonian path approach to reordering the part-machine matrix for cellular manufacturing. International Journal of Production Research, 29, 1081–1100. doi:10.1080/00207549108930121 Baluja, S. (1994). Population-based incremental learning: A method for integrating genetic search based function optimization and competitive learning. (Technical Report CMU-CS-94-163). Computer Science Department, Carnegie Mellon University. Baluja, S., & Davies, S. (1997). Using optimal dependency-trees for combinatorial optimization: Learning the structure of the search space. In Proceedings of the 1997 International Conference on Machine Learning.


Baluja, S., & Davies, S. (1998). Fast probabilistic modeling for combinatorial optimization. In AAAI-98. Bengoetxea, E., Larranaga, P., Bloch, I., & Perchant, A. (2001b). Solving graph matching with EDAs using a permutation–based representation. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation. Kluwer Academic Publishers. doi:10.1007/978-1-4615-1539-5_12 Bengoetxea, E., Larranaga, P., Bloch, I., Perchant, A., & Boeres, C. (2000). Inexact graph matching using learning and simulation of Bayesian networks. An empirical comparison between different approaches with synthetic data. In Workshop Notes of CaNew2000: Workshop on Bayesian and Causal Networks: From Inference to Data Mining, fourteenth European Conference on Artificial Intelligence, ECAI2000. Berlin. Boctor, F. (1991). A linear formulation of the machine-part cell formation problem. International Journal of Production Research, 29(2), 343–356. doi:10.1080/00207549108930075 Brown, E., & Sumichrast, R. (2001). CFGGA: A grouping genetic algorithm for the cell formation problem. International Journal of Production Research, 36, 3651–3669. doi:10.1080/00207540110068781 Burbidge, J. L. (1963). Production flow analysis. Production Engineering, 42, 742–752. doi:10.1049/tpe.1963.0114 Caux, C., Bruniaux, R., & Pierreval, H. (2000). Cell formation with alternative process plans and machine capacity constraints: A new combined approach. International Journal of Production Economics, 64(1-3), 279–284. doi:10.1016/ S0925-5273(99)00065-1


Chan, H. M., & Milner, D. A. (1982). Direct clustering algorithm for group formation in cellular manufacture. Journal of Manufacturing Systems, 1, 65–75. doi:10.1016/S0278-6125(82)80068-X Chandrasekharan, M. P., & Rajagopalan, R. (1986a). MODROC: An extension of rank order clustering for group technology. International Journal of Production Research, 24(5), 1221– 1264. doi:10.1080/00207548608919798 Chandrasekharan, M. P., & Rajagopalan, R. (1987). ZODIAC: An algorithm for concurrent formation of part-families and machine-cells. International Journal of Production Research, 25(6), 835–850. doi:10.1080/00207548708919880 Chandrasekharan, M. P., & Rajagopalan, R. (1989). Groupability: Analysis of the properties of binary data matrices for group technology. International Journal of Production Research, 27(6), 1035–1052. doi:10.1080/00207548908942606 Chickering, D., Heckerman, D., & Meek, C. (1997). A Bayesian approach to learning Bayesian networks with local structure. In Proceedings of Thirteenth Conference on Uncertainty in Artificial Intelligence, (pp. 80–89). (Technical Report MSRTR- 97-07), Microsoft Research, August, 1997. De Bonet, J., Isbell, C. L., & Viola, P. (1997). MIMIC: Finding optima by estimating probability densities. Advances in Neural Information Processing Systems, 9, 424–430. De Campos, L. M., Gamez, J. A., Larranaga, P., Moral, S., & Romero, T. (2001). Partial abductive inference in Bayesian networks: An empirical comparison between GAs and EDAs. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation. Kluwer Academic Publishers. doi:10.1007/978-1-4615-1539-5_16


Etxeberria, R., & Larranaga, P. (1999). Optimization with Bayesian networks. In Proceedings of the Second Symposium on Artificial Intelligence. Adaptive Systems. CIMAF 99, (pp. 332-339). Cuba. Gonçalves, J., & Resende, M. (2004). An evolutionary algorithm for manufacturing cell formation. Computers & Industrial Engineering, 47, 247–273. doi:10.1016/j.cie.2004.07.003 Goncalves, J. F., & Resende, M. (2002). A hybrid genetic algorithm for manufacturing cell formation. Technical report. Gonzalez, C., Lozano, J. A., & Larranaga, P. (2002). Mathematical modelling of UMDAc algorithm with tournament selection: Behaviour on linear and quadratic functions. International Journal of Approximate Reasoning, 31(3), 313–340. doi:10.1016/S0888-613X(02)00092-0 Harik, G. (1994). Finding multiple solutions in problems of bounded difficulty. Tech. Rep. IlliGAL Report No. 94002, University of Illinois at Urbana-Champaign, Urbana, IL. Harik, G. (1999). Linkage learning via probabilistic modeling in the ECGA. Tech. Rep. IlliGAL Report No. 99010, University of Illinois at Urbana-Champaign. Harik, G., Lobo, F., & Goldberg, D. E. (1998). The compact genetic algorithm, (pp. 523-528). (IlliGAL Report No. 97006). James, T. L., Brown, E. C., & Keeling, K. B. (2007). A hybrid grouping genetic algorithm for the cell formation problem. Computers & Operations Research, 34, 2059–2079. doi:10.1016/j.cor.2005.08.010 Joines, J. A., Culbreth, C. T., & King, R. E. (1996). Manufacturing cell design: An integer programming model employing genetic algorithms. IIE Transactions, 28(1), 69–85. doi:10.1080/07408179608966253

Kaparthi, S., Suresh, N. C., & Cerveny, R. P. (1993). An improved neural network leader algorithm for part-machine grouping in group technology. European Journal of Operational Research, 69, 342–355. doi:10.1016/03772217(93)90020-N Khator, S. K., & Irani, S. A. (1987). Cell formation in group technology: A new approach. Computers & Industrial Engineering, 12, 131–142. doi:10.1016/0360-8352(87)90006-4 King, J. R. (1980). Machine-component grouping formation in group technology. International Journal of Management Science, 8(2), 193–199. King, J. R., & Nakornchai, V. (1982). Machinecomponent group formation in group technology: Review and extension. International Journal of Production Research, 20(2), 117–133. doi:10.1080/00207548208947754 Kumar, K. R., & Chandrasekharan, M. P. (1990). Grouping efficacy: A quantitative criterion for block diagonal forms of binary matrices in group technology. International Journal of Production Research, 28(2), 233–243. doi:10.1080/00207549008942706 Kusiak, A. (1987). The generalized group technology concept. International Journal of Production Research, 25, 561–569. doi:10.1080/00207548708919861 Kusiak, A., & Chow, W. S. (1987). Efficient solving of the group technology problem. Journal of Manufacturing Systems, 6(2), 117–124. doi:10.1016/0278-6125(87)90035-5 Larranaga, P., Etxeberria, R., Lozano, J. A., & Pena, J. M. (2000). Combinatorial optimization by learning and simulation of Bayesian networks. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, (pp. 343–352). Stanford.



Li, S. Z. (1995). Markov random field modeling in computer vision. Springer-Verlag. Lozano, J. A., Sagarna, R., & Larranaga, P. (2001b). Solving job scheduling with estimation of distribution algorithms. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation (pp. 231–242). Kluwer Academic Publishers. doi:10.1007/978-1-4615-1539-5_11 Lozano, S., Adenso-Diaz, B., Eguia, I., & Onieva, L. (1999). A one step tabu search algorithm for manufacturing cell design. The Journal of the Operational Research Society, 50, 509–516. Mahdavi, I., Paydar, M. M., Solimanpur, M., & Heidarzade, A. (2009). Genetic algorithm approach for solving a cell formation problem in cellular manufacturing. Expert Systems with Applications, 36, 6598–6604. doi:10.1016/j.eswa.2008.07.054

Muhlenbein, H., & Mahnig, T. (1999b). FDA - A scalable evolutionary algorithm for the optimization of additively decomposed functions. Evolutionary Computation, 7, 353–376. doi:10.1162/ evco.1999.7.4.353 Muhlenbein, H., Mahning, T., & Ochoa, A. (1999). Schemata, distributions and graphical models in evolutionary optimization. Journal of Heuristics, 5, 215–247. doi:10.1023/A:1009689913453 Muhlenbein, H., & Paaß, G. (1996). From recombination of genes to the estimation of distribution. Binary parameters. Lecture Notes in Computer Science, 1411. Parallel Problem Solving from Nature, PPSN, IV, 178–187. Onwubolu, G. C., & Mutingi, M. (2001). A genetic algorithm approach to cellular manufacturing systems. Computers & Industrial Engineering, 39(1–2), 125–144. doi:10.1016/ S0360-8352(00)00074-7

McAuley, J. (1972). Machine grouping for efficient production. Production Engineering, 51(2), 53–57. doi:10.1049/tpe.1972.0006

Pearl, J. (1988). Probabilistic reasoning in intelligent systems. Palo Alto, CA: Morgan Kaufman Publishers.

McCormick, W. T. Jr, Schweitzer, P. J., & White, T. W. (1972). Problem decomposition and data reorganization by a cluster technique. Operations Research, 20(5), 993–1009. doi:10.1287/ opre.20.5.993

Pelikan, M., Goldberg, D. E., & Cantu-Paz, E. (1999a). BOA: The Bayesian optimization algorithm. In Banzhaf, W., Daida, J., Eiben, A. E., Garzon, M. H., Pelikan, V., & Goldberg, D. E. (Eds.), Hierarchical problem solving by the Bayesian optimization algorithm. IlliGAL Report No. 2000002. Urbana, IL: Illinois Genetic Algorithms Laboratory, University of Illinois at Urbana-Champaign.

Mladenović, N., & Hansen, P. (1997). Variable neighborhood search. Computers & Operations Research, 24, 1097–1100. doi:10.1016/S0305-0548(97)00031-2 Muhlenbein, H. (1998). The equation for response to selection and its use for prediction. Evolutionary Computation, 5(3), 303–346. doi:10.1162/evco.1997.5.3.303


Pelikan, P., & Muhlenbein, H. (1999). The bivariate marginal distribution algorithm. In Roy, R., Furuhashi, T., & Chandhory, P. K. (Eds.), Advances in soft computing-engineering design and manufacturing (pp. 521–535). London, UK: Springer.


Robles, V., de Miguel, P., & Larranaga, P. (2001). Solving the travelling salesman problem with estimation of distribution algorithms. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation. Kluwer Academic Publishers. Roure, J., Sanguesa, R., & Larranaga, P. (2001). Partitional clustering by means of estimation of distribution algorithms. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation. Kluwer Academic Publishers. Sagarna, R., & Larranaga, P. (2001). Solving the knapsack problem with estimation of distribution algorithms. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation. Kluwer Academic Publishers. Santana, R. (2003a). A Markov network based factorized distribution algorithm for optimization. Proceedings of the 14th European Conference on Machine Learning (ECML/PKDD 2003); Lecture Notes in Artificial Intelligence, 2837, (pp. 337–348). Berlin, Germany: Springer-Verlag. Santana, R. (2005). Estimation of distribution algorithms with Kikuchi approximation. Evolutionary Computation, 13, 67–98. doi:10.1162/1063656053583496 Sierra, B., Jimenez, E., Inza, I., Larranaga, P., & Muruzabal, J. (2001). Rule induction using estimation of distribution algorithms. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation. Kluwer Academic Publishers. doi:10.1007/978-1-4615-1539-5_15 Sofianopoulou, S. (1997). Application of simulated annealing to a linear model for the formation of machine cells in group technology. International Journal of Production Research, 35, 501–511. doi:10.1080/002075497195876

Solimanpur, M., Vrat, P., & Shankar, R. (2003). Ant colony optimization algorithm to the inter-cell layout problem in cellular manufacturing. European Journal of Operational Research, 157(3), 592–606. doi:10.1016/S0377-2217(03)00248-0 Srinivasan, G. (1994). A clustering algorithm for machine cell formation in group technology using minimum spanning trees. International Journal of Production Research, 32, 2149–2158. doi:10.1080/00207549408957064 Srinivasan, G., & Narendran, T. T. (1991). GRAFICS - A non-hierarchical clustering algorithm for group technology. International Journal of Production Research, 29(3), 463–478. doi:10.1080/00207549108930083 Stawowy, A. (2006). Evolutionary strategy for manufacturing cell design. OMEGA: The International Journal of Management Science, 34(1), 1–18. doi:10.1016/j.omega.2004.07.016 Venugopal, V., & Narendran, T. T. (1992a). A genetic algorithm approach to the machine component grouping problem with multiple objectives. Computers & Industrial Engineering, 22(4), 469–480. doi:10.1016/0360-8352(92)90022-C Xu, H., & Wang, H. P. (1989). Part family formation for GT applications based on fuzzy mathematics. International Journal of Production Research, 27(9), 1637–1651. doi:10.1080/00207548908942644 Zhang, Q., Sun, J., Tsang, E., & Ford, J. (2004). Hybrid estimation of distribution algorithm for global optimisation. Engineering Computations, 21(1), 91–107. doi:10.1108/02644400410511864 Zhao, C., & Wu, Z. (2000). A genetic algorithm for manufacturing cell formation with multiple routes and multiple objectives. International Journal of Production Research, 38(2), 385–395. doi:10.1080/002075400189473


8

4

16

1

1

19

1

1

21

1

1

28

1

37

7

14

23

9

10

17

2

5

11

19

1

1

39

1

1

8

12

18

3

20

1

13

21

22

1

1

1

1

1

1

1

1

1

1 1

6

1

1

1

7

1

1

1

1

1

20 29

1

40

1

1

1 1

1

10

1

13

1

14

1

22

1

1

1

1

1

1

1

1

1

1

1

1

35

1

1

1

36

1

1

1

4 5

1

18 26

1

27 30 1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1 1 1

1

continued on following page

An Estimation of Distribution Algorithm for Part Cell Formation Problem

3 1

15

1

25

2

6

1

38

32

24

APPENDIX

718

Table 4. Problem 20

4

16

7

14

23

24

9

10

17

2

5

11

19

6

8

3

20

1

13

21

22

11

1

1

12

1

1

15

1

1

23

1

1

1

1

1

1

1

1

1 1

1

1

1

9

1

1

1

1

16

1

1

1

1

17

1

1

1

1

24

12

15

18

1

31

1

34

33

1

1 1

1

Table 5. Problem 21 3

20

2

1

1

11

1

1

12

1

1

15

1

1

23

1

1

24

1

1

1

1

31 34

6

8

12

15

18

1

13

21

22

2

5

11

19

4

16

9

10

17

7

14

23

24 1

1 1 1

1

1

719

4

1

1

1

1

1

5

1

1

1

1

1

18

1

1

1

1

26

1

1

1

1

1 1

continued on following page

An Estimation of Distribution Algorithm for Part Cell Formation Problem

Table 4. Continued

720

Table 5. Continued 6

8

12

15

18

27

3

20

1

1

1

1

1

30

1

1

1

1

1

13

21

1

22

2

5

11

19

1

1

1

1

1

1

1

1

16

1

1

1

1

1

1

17

1

1

33

1

1

1

1 1 1 1

22

1

35 36

7

14

23

24

1

1

1 1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

8 1

21 28

1

1

1

1

1

1

1

37

1

1

1

39

1

1

1

1

6 7

1

1

1

1

1

1

1

1

1

29 40

17

1

14

20

10

1

13

38

9

1 1

1

1 1

1

1

3

1

1

1

25

1

1

1

1

1

1

32

1

1

An Estimation of Distribution Algorithm for Part Cell Formation Problem

10

1

16

1

9

19

4

2

3

20

1

1

11

1

1

12

1

1

15

1

1

23

1

1

24

1

1

31 34 10

2

5

11

19

8

12

15

18

1

13

21

16

7

24

9

1

17

14

23

1

1 1

1

1

1

1

1

1 1

22

1

1 1

1

1

1

1

1

1

35

1

1 1

1

1

4

1

5

1

1

18

1

26

1

1

1

1

1

1

1

1

1

1

1

1

1

1

27

1

1

1

30

1

1

1

1

1

1

1 1

1 1

1

9

1

1

1

16

1

1

1

1

1

1

1

1

17 1

6

721

8

4

1

1

20

10

1

13

33

22

1

14

36

6

1 1

1

1 1

1

1 1

1

1

1

1 1

1

continued on following page

An Estimation of Distribution Algorithm for Part Cell Formation Problem

Table 6. Problem 22

722

Table 6. Continued 3

4

16

1

1

21

1

1

28

1

19

20

2

5

11

19

6

8

12

15

18

1

1

13

21

22

10

1

37

1

38

7

24

1

1

17

1

1

1

1

1

1

14

23

1

1

39

1

1

1

1

32

1 1

29 1

1

3

1

1

25

1

1

1

1

1

13

22

Table 7. Problem 23 1

21

9

1

1

33

1

1

2

3

20

7

14

23

24

9

4

16

10

1 1

11

1

2

5

11

1

1

15

1

1

1

23

1

1

1

34

1

1

32 6

1

12

15

18

1

1

1 1

17

1

1

1

8

1

1

3

6

1

12

25

19

1 1

1

1

1

1 1

1 1

1 1

1

continued on following page


Table 8. Problem 24 (machine-part incidence matrix; the cell contents were not recoverable from the source extraction)


This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 164-188, copyright 2012 by Business Science Reference (an imprint of IGI Global).


Chapter 41

A LabVIEW-Based Remote Laboratory: Architecture and Implementation

Yuqiu You, Morehead State University, USA

ABSTRACT

Current technology enables the remote access of equipment and instruments via the Internet. While more and more remote control solutions have been applied in industry via Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet, there is also growing demand for such technologies in the academic environment (Salzmann, Latchman, Gillet, and Crisalle, 2003). One typical application of remote control solutions is the development of a remote virtual laboratory. The development of a remote-laboratory facility enables distance students to participate in laboratory experiences. The ability to offer remote students lab experiences is vital to effective learning in the areas of engineering and technology. This chapter introduces a LabVIEW-based remote wet process control laboratory developed for manufacturing automation courses. The system architecture, hardware integration, hardware and software interfacing, programming tools, lab development based on the system, and future enhancements are demonstrated and discussed in the chapter.

INTRODUCTION

DOI: 10.4018/978-1-4666-1945-6.ch041

As distance learning has progressed from basic television broadcasting into web-based Internet telecasting, it has become a very effective teaching tool (Kozono, Akiyama and Shimomura, 2002). Laboratory experiences are important for engineering and technology students to reinforce theories and concepts presented in class lectures. The development of a remote-laboratory facility will enable participation in laboratory experiences by distance students. The ability to offer remote students these lab experiences is vital to effective learning. The development of a remote

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


virtual laboratory is also motivated by the fact that presently, as never before, the demand for access to laboratory facilities is growing rapidly in engineering and technology colleges. Making the laboratory infrastructure accessible as virtual laboratories, available 24 hours a day and 7 days a week, goes far in addressing these challenges, and would also contribute to lowering the costs of operating laboratories. Additionally, remote virtual laboratories provide the opportunity for students to explore the advanced technologies used in manufacturing remote control/monitor systems, and therefore prepare them for their future careers. This chapter introduces a LabVIEW-based remote process control system, established to provide a web-based online virtual laboratory for an online computer-integrated manufacturing course. The physical setup of the system includes a wet process system, a FieldPoint control system with an NI cFP-2000 intelligent controller and eight I/O modules, a desktop computer, an Ethernet hub, and an Internet D-Link camera. The wet process system is composed of three water tanks, pumps, discrete valves, continuous valves, temperature sensors, level sensors, and pressure transmitters. The software used for the system interfacing is LabVIEW 8.0 from National Instruments together with the HyperText Markup Language (HTML). The desktop computer works as a control server as well as a web server for the system, providing a local interface for system control and maintenance and a remote interface for students to control and monitor the wet process through the Internet. The desktop computer, the intelligent controller, and the Internet camera communicate with each other through the Ethernet hub, and also connect to the Internet. All the sensors, valves, and pumps in the wet process system are wired to the I/O modules of the intelligent controller. Details of the system wiring are examined later in this chapter.

This system has been used in the lab of a computer-integrated manufacturing course for graduate students in the Manufacturing Technology option of the Technology Management program. Students are introduced to system integration, process control, LabVIEW FieldPoint programming, and the development of web-based manufacturing applications by involvement in the lab activities through the Internet. This chapter explores the integration of mechatronic equipment, computer software, and networking techniques to achieve a remotely controllable system. It demonstrates the development of a LabVIEW-based FieldPoint control system for a virtual laboratory. The implementation of the laboratory, future enhancements, and related research are discussed in this chapter.

BACKGROUND

In engineering and technology fields, the laboratory experiences associated with a technology curriculum are vital to understanding concepts (Saygin & Kahraman, 2004). They are also typically limited to a short group session each week due to time and space constraints. Increasingly popular distance courses are hard-pressed to provide realistic lab experience at all. Simulation, which has seen increased use in education, is an especially valuable tool when it precedes instruction, but it does not provide the problem-solving realism of actual hands-on experience (Deniz, Bulancak & Ozcan, 2003). Completing a project by remote operation of real equipment more nearly replicates problem solving as it would occur in the workplace, and lends itself to teaching the processes and practice that are involved in true experimentation (Cooper, Donnelly & Ferreira, 2002). With the rapid development of computer networks and Internet technologies, along with dramatic improvements in the processing power of personal computers, remote virtual laboratories are now a reality. In the early 1990s, the first remotely shared control system laboratory was proposed at the 1991 American Society for Engineering Education (ASEE) Frontiers in Education Conference. The system enabled sharing of laboratory data between universities using networked workstations. By its nature, automated manufacturing lends itself to remote access for education, and education that incorporates remote experimentation may better prepare students for the workplace of the future. With the development of standards for online lab sessions, the Accreditation Board for Engineering and Technology validated remote labs as educational tools (Carnevale, 2002). Saygin and Kahraman (2004) successfully implemented lab exercises for manufacturing education and systems-related courses using remote technology. Asumadu and Tanner (2006), who developed a remote wiring and measurement laboratory that utilizes a "virtual breadboard," acknowledge the flexibility and spontaneity of the tool, which has potential for global access. Gurocak (2001) describes Programmable Logic Controller (PLC) and robot labs which are delivered via the remote lab concept by Internet connection to the PLCs and closed-circuit TV of the labs. Thamma, Huang, Lou, and Diez (2004) integrated computer-integrated manufacturing equipment into a remote lab system via a Java-based web site.

The virtual laboratory demonstrated in this chapter was developed by using LabVIEW FieldPoint control technology and a web-based application. The remote control panel provides real-time control and monitoring functions to remote clients, along with live video of the real system operation. Remote clients can access the control panel from a web browser on their computers with the LabVIEW Runtime Engine installed as a plug-in. LabVIEW, developed by National Instruments, is a graphical programming language for building virtual instruments (VIs) for control systems. A VI developed in the LabVIEW environment provides an interface between a user and a control process. The main concept of such an interface is to provide a general view of the process and facilitate full control of the operations. LabVIEW is widely used in developing automatic control solutions in real-world industries, research studies, and academic laboratories. The locally controlled setup can be turned into a remotely controlled one by moving the user interface away from the physical setup with web-based functions. LabVIEW also provides advanced communication methods for the integration of LabVIEW VIs with other applications, such as ActiveX containers, File Input and Output, and .NET constructor nodes.

FieldPoint is a proprietary method for interfacing devices to computers developed by National Instruments, but it is very similar in principle to the fieldbus interfacing concept used by many process control equipment suppliers. The idea of fieldbus grew out of the problem of interfacing hundreds or thousands of sensors and actuators to Programmable Logic Controllers (PLCs) and process control computers in large industrial plants. Instead of connecting each sensor or actuator to a central plant computer with hundreds or thousands of kilometers of wiring, fieldbus connects related groups of sensors and actuators to a local microcomputer that communicates with the central plant computer via an Ethernet local area network (LAN). The result was an enormous reduction in wiring and a corresponding increase in reliability. FieldPoint control in the LabVIEW environment is composed of four components: the FieldPoint interface hardware, the FieldPoint Object Linking and Embedding for Process Control (OPC) server, the LabVIEW FieldPoint handler, and the Measurement and Automation Explorer (MAX). The FieldPoint interface hardware includes an intelligent controller, the I/O modules, and the devices connected to the modules. The FieldPoint OPC server is an invisible element of the software that is invoked whenever the FieldPoint connection is set up with a LabVIEW application programmed for the control system. Communication with the FieldPoint unit does not necessarily occur at the precise instant when the application instructs it to happen. Instead, communication occurs both as a direct result of requests from the application and as a result of the configuration of the intelligent controller. Any LabVIEW application that uses FieldPoint can be conveniently structured by incorporating all FieldPoint operations into a single module, a FieldPoint handler. This module performs four different operations: initialization, close, read, and write. The handler directs all the communications between the FieldPoint unit and the LabVIEW application. The MAX provides an interface for the setup and configuration of the FieldPoint hardware. From the MAX interface, the LabVIEW application can locate and recognize the FieldPoint hardware, and the devices in the FieldPoint unit can be tested through data communication directly from the MAX. In the system introduced in this chapter, a virtual interface programmed in the LabVIEW graphical language provides a control panel for users to interact with the control process through FieldPoint Ethernet communication and the communication between the FP controller and the I/O modules.

The remote laboratory introduced in this chapter provides an approach to implementing LabVIEW interfacing technology in manufacturing process control systems to provide remote virtual lab activities for students in a manufacturing engineering program. The development of this laboratory also has students explore the integration of computer and networking technologies into manufacturing control systems for higher flexibility and productivity. Research and experiments related to web-based manufacturing control systems and remote virtual laboratories are being conducted based on this system.
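LabVIEW VIs are graphical, but the handler pattern described above (one module that funnels all FieldPoint traffic through initialization, read, write, and close operations) can be sketched in text form. The class below is a conceptual illustration with simulated values; the names and channel-address format are assumptions patterned on the addressing convention shown later in the chapter, not NI's API:

```python
# Conceptual sketch of a FieldPoint "handler" module: all I/O for the
# application passes through one object performing the four operations
# named in the text: initialize, read, write, and close.

class FieldPointHandler:
    def __init__(self):
        self.connected = False
        self.channels = {}          # channel address -> last value (simulated)

    def initialize(self, controller_ip):
        """Open the connection to the FieldPoint controller unit."""
        self.controller_ip = controller_ip
        self.connected = True

    def write(self, address, value):
        """Send a command value (e.g., switch a pump on) to one channel."""
        if not self.connected:
            raise RuntimeError("handler not initialized")
        self.channels[address] = value

    def read(self, address):
        """Return the latest value for one channel."""
        if not self.connected:
            raise RuntimeError("handler not initialized")
        return self.channels.get(address, 0)

    def close(self):
        """Release the connection."""
        self.connected = False


# Example: turn on pump 1 (digital output module at port 7, channel 0).
handler = FieldPointHandler()
handler.initialize("139.102.29.56")
handler.write(r"cFP-DO-400@7\Channel 0", 1)
state = handler.read(r"cFP-DO-400@7\Channel 0")
handler.close()
```

Funneling every read and write through one module mirrors the design rationale given in the text: the rest of the application never talks to the hardware directly, so the communication details can change without touching the control logic.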

SYSTEM OVERVIEW

Physical Setup of the System

The technology used for the development of the virtual laboratory system in the LabVIEW environment is FieldPoint. As mentioned earlier in this chapter, the physical setup of the system includes a wet process system, a FieldPoint control system with an NI cFP-2000 intelligent controller and eight I/O modules, a desktop computer, an Ethernet hub, and an Internet D-Link camera. As shown in Figure 1, the intelligent controller, the Internet camera, and the desktop computer are connected to an Internet hub, and then connected to the Internet. Clients (students) can access the remote control panel of the system over the Internet connection.

Figure 1. The system setup

The wet process control system comprises three tanks, two pumps, five discrete valves, and two continuous valves, as shown in Figures 2 and 3. The sensors used in the system include three temperature sensors, three level sensors, one flow rate sensor, and two pressure sensors. Temperature sensors and level sensors monitor each tank's water level and temperature. Two pressure sensors were installed to monitor the incoming flow pressure of tank 1 and tank 2. A flow sensor was installed to measure the incoming flow rate of the main tank. Figure 2 shows a picture of the real wet process system setup. In order to better demonstrate the level changes of the water in each tank, the water was dyed green. Also, a physical control panel with pushbuttons and a panel with light indicators were added to the system for local maintenance and control. The physical intelligent controller and its I/O modules can be seen in the picture as a blue panel mounted on the wall. Figure 3 shows the components of the physical setup, with the pipeline connections running between tanks and the locations of all the system components.

Figure 2. Picture of the wet process control system

Figure 3. Physical setup diagram of the wet process system

All the valves, pumps, and sensors were wired to the input/output modules of the intelligent controller. Four input/output modules are used in this system: the analog input module cFP-AI-110, the analog output module cFP-AO-200, the digital output module cFP-DO-400, and the temperature module cFP-RTD-124. The National Instruments cFP-AI-110 is an 8-channel single-ended input module for direct measurement of millivolt, low-voltage, or milliampere current signals from a variety of sensors and transmitters. It delivers filtered low-noise analog inputs with 16-bit resolution, and features overranging, HotPnP (plug-and-play) operation, and onboard diagnostics. The National Instruments cFP-AO-200 is an 8-channel analog output module for 4 to 20 mA and 0 to 20 mA current loops. The module includes open-circuit detection for wiring and sensor troubleshooting and short-circuit protection against wiring errors. It features HotPnP operation, so it is automatically detected and identified by the configuration software. The National Instruments cFP-DO-400 module features eight sourcing digital output channels. Each channel is compatible with voltages from 5 to 30 VDC and can source up to 2 A per channel, with a maximum of 9 A squared per module (the sum of the squares of the output currents from all eight channels must be no greater than 9). Each channel has an LED to indicate the channel on/off state. The module features 2300 V transient isolation between the output channels and the backplane. It also features HotPnP operation and is automatically detected and identified by the configuration software. The National Instruments cFP-RTD-124 is an 8-channel input module for direct measurement of 2- and 4-wire RTD temperature sensor signals. With current excitation, signal conditioning, double-insulated isolation, input noise filtering, and a high-accuracy delta-sigma 16-bit analog-to-digital converter, it delivers reliable, accurate temperature or resistance measurements. Table 1 provides a detailed list of the input and output devices in the physical setup of the system and the type of FieldPoint I/O module they were wired to. As shown in Table 1, the discrete valves and the pumps were wired to the digital output module; the continuous control valves were wired to the analog output module; the temperature sensors were wired to the RTD module for temperature readings; and the level sensors, the flow rate sensor, and the pressure sensors were wired to the analog input module.
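The cFP-DO-400 current budget quoted above (at most 2 A per channel, and a sum of squared channel currents no greater than 9) can be checked with a few lines of arithmetic. This is a hedged illustration of the stated specification, not NI code; the function name is an assumption:

```python
# Check a set of cFP-DO-400 channel currents against the limits stated
# in the text: each of the eight channels may source up to 2 A, and the
# sum of the squares of all channel currents must not exceed 9.

def do400_load_ok(currents_amps):
    if len(currents_amps) > 8:                       # module has 8 channels
        return False
    if any(i > 2.0 for i in currents_amps):          # per-channel limit
        return False
    return sum(i * i for i in currents_amps) <= 9.0  # module-level limit

# Eight channels at 1 A each: 8 * 1^2 = 8 <= 9, within budget.
ok = do400_load_ok([1.0] * 8)
# Eight channels at 1.5 A each: 8 * 1.5^2 = 18 > 9, over budget even
# though each channel individually stays under the 2 A limit.
too_much = do400_load_ok([1.5] * 8)
```

The second case shows why the sum-of-squares rule matters: per-channel limits alone do not guarantee the module as a whole stays within its rating.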

LabVIEW Interfacing

As mentioned in the previous section, the devices of the wet process system, including all the valves, pumps, and sensors, were wired to the four different input/output modules of the FieldPoint controller unit. All these devices were set up and configured through the Measurement & Automation Explorer (MAX), and each of them was assigned a unique ID starting with an IP address, as shown in Table 1. The FieldPoint controller unit is recognized by the computer as a network node with its IP address. For the system demonstrated here, 139.102.29.56 was assigned to the controller. The four input/output modules were then recognized as different communication ports under this IP address; in this system, they are recognized as ports 1, 2, 5, and 7, respectively. Each device wired to the same input/output module is identified by a unique channel number. The unique ID for each device therefore is a combination of the IP


Table 1. I/O addressing of the control system

LabVIEW Addressing in Programming                       Device #   Device Description

FP@139_102_29_56\cFP-DO-400@7 (Digital Output Module)
  \Channel 0                                            FZ-101     Pump 1
  \Channel 1                                            FZ-102     Pump 2
  \Channel 2                                            FV-201     Discrete valve 1
  \Channel 3                                            FV-202     Discrete valve 2
  \Channel 4                                            FV-203     Discrete valve 3
  \Channel 5                                            FV-204     Discrete valve 4
  \Channel 6                                            FV-205     Discrete valve 5

FP@139_102_29_56\cFP-AO-200@5 (Analog Output Module)
  \Channel 0                                            ZZ-301     Continuous control valve 1
  \Channel 1                                            ZZ-302     Continuous control valve 2

FP@139_102_29_56\cFP-RTD-124@2 (Temperature Module)
  \Channel 0                                            TIT-301    Temperature Sensor 1
  \Channel 1                                            TIT-302    Temperature Sensor 2
  \Channel 2                                            TIT-303    Temperature Sensor 3

FP@139_102_29_56\cFP-AI-110@1 (Analog Input Module)
  \Channel 2                                            LIT-101    Level Sensor 1
  \Channel 3                                            LIT-102    Level Sensor 2
  \Channel 4                                            LIT-103    Level Sensor 3
  \Channel 5                                            PIT-201    Flow Rate Sensor
  \Channel 6                                            PIT-202    Pressure Sensor 1
  \Channel 1                                            PIT-203    Pressure Sensor 2

address (identifying the controller unit), the port number (identifying the specific module), and the channel number (identifying the device). Once the physical system was connected and configured for LabVIEW communication, a virtual interface programmed in the LabVIEW graphical language can provide a control panel for users to interact with the control process on the local server as well as over the Internet from a client computer, as shown in Figure 4. The virtual interface is called a virtual instrument (VI) in the LabVIEW environment. As shown in Figure 4, this virtual interface has five major areas for users to interact with the real control process: a process simulator, a mode control panel with digital indicators, a stop button panel, a live video window, and waveform graphs for data tracking. The process simulator simulates the real wet control process by using control icons and indicator icons. In manual mode, users can control each individual device of the


real process by clicking on the control icons, such as the valves and motors. Indicator icons display the status of those devices by changing their color to green or red. The mode control panel is used to change the control mode, and can also provide real data readings from the sensors. The stop button panel provides different buttons to stop major devices and the system itself. The current values of the tank levels, temperatures, incoming flow rate for the main tank, and incoming pressure for tank 1 and tank 2 are displayed by the graphic and digital indicators on the interface. A video window was integrated into the interface for users to monitor the real process through an Internet camera. Two waveform graph windows provide historical data tracking of the temperature and incoming pressure of each tank. Behind the front panel of the virtual interface is the block diagram, programmed to provide the data flow, mathematical operations, and logic operations of the virtual instrument. The application


Figure 4. The virtual interface programmed in LabVIEW

programming utilized Case Structure functions to provide three different modes: auto mode, manual mode, and supervision mode. A While Loop function was used to establish continuous data-retrieving and command-sending cycles from and to the physical wet process system. Part of the block diagram programmed for this interface is shown in Figure 5, which is the live-video retrieving diagram for the Internet camera. Real-time control of the real system from the virtual interface takes place through the FieldPoint Ethernet communication and the communication between the FP controller and the I/O modules. The FP controller, FP OPC Server, and FP manager are installed and configured through the Measurement & Automation Explorer (MAX). The communication between the FP controller and the I/O modules is similar among different types of network modules. The FP controller communicates with each module through the Ethernet module using the TCP/IP protocol. It uses the .iak file to determine which resource to communicate with. Each I/O module cycles through its internal routine of sampling all channels, digitizing the values, and updating the values

Figure 5. Block diagram programmed in LabVIEW


on the module channel registers (buffer). This cycle time is set for each module and is specified as the all-channel update rate. FieldPoint Ethernet communication uses an asynchronous communication architecture called event-driven communication. The network module automatically sends updates to a client when data changes. The server then caches the data from the I/O modules and uses it to respond to read requests from the virtual interface. The network module scans all I/O channels with subscriptions to determine whether a value has changed by comparing the current value to the cached value for each channel. If a change has occurred, the network module puts the difference between the two values in the transmit queue. The FP Server receives this information and sends an acknowledgement to the network module. The network module periodically sends and receives a time-synchronization signal so that it can adjust its clock and provide proper timestamping. When signals do not change over long periods of time, the client sends periodic re-subscribe messages to verify that the system is still online. LabVIEW's architecture allows for easy integration of the laboratory environment for remote manipulation. The main concept of turning the locally controlled setup into a remotely controlled one is moving the user interface away from the physical setup. The local computer works as the web server as well as the control server. A number of clients can log onto the server, but only one user at a time is granted the control right: that user can actually control the process from the remote front panel (VI), while the others can only monitor it. There is a waiting queue for users; when the control right becomes available, it is granted to the next user in the queue. The remote client can be any computer with Internet access.
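The event-driven update cycle described above (scan the subscribed channels, compare each current value with its cached value, and queue only the channels that changed) can be sketched as follows. The function and variable names are illustrative assumptions, and the sketch queues the new value rather than the numeric difference for simplicity:

```python
# Sketch of the change-detection scan described in the text: compare
# each subscribed channel's current value against the cached value and
# queue an update only for the channels that changed.

def scan_for_changes(current, cache, transmit_queue):
    """current/cache: dicts mapping channel tag -> value."""
    for channel, value in current.items():
        if value != cache.get(channel):
            # Queue the update, then refresh the cache so the next scan
            # reports only new changes.
            transmit_queue.append((channel, value))
            cache[channel] = value

cache = {"LIT-101": 40, "TIT-301": 21.5}
queue = []
# New readings arrive: only the level sensor LIT-101 has changed.
scan_for_changes({"LIT-101": 55, "TIT-301": 21.5}, cache, queue)
```

Transmitting only changes is what makes the architecture asynchronous and bandwidth-efficient: a steady temperature reading generates no network traffic at all between re-subscribe messages.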
The only tool that the client needs is a web browser with the LabVIEW Runtime Engine installed. The LabVIEW Runtime Engine is plug-in software provided by National Instruments to support the web application. Normally it is installed automatically on the client's machine the first time the user tries to view a front panel. The client can browse to the webpage integrated with the remote control panel by entering the Uniform Resource Locator (URL) address of the web server in the browser. The client only updates the screen and gets information from front panel interactions. The client cannot make changes to VIs; execution happens only on the server machine. The local server hosts a LabVIEW web server, which publishes the VI to the Internet. Through the LabVIEW Real Time Engine (RTE), the local server can communicate with the remote client. It controls the process according to the data from the remote control panel and sends the updated data back to the remote control panel. Remote clients are not required to have the whole LabVIEW software installed to view VIs for control and monitoring; they only require the LabVIEW RTE plug-in. The security of the control system is ensured by management on the server side. On the server side, a user's permission to access the LabVIEW control panel is managed by editing the allowed list of IP addresses for clients. Also, access to the LabVIEW control panel can be limited to a specific domain or a group of domains. The virtual interface running on the server can be configured to be available to or hidden from certain users. During remote control and monitoring, the IP address of the active client is shown on the server. The lab instructor can always monitor the usage of the remote control panel and make sure only authorized clients have access permission. In the process of remote control, the lab instructor can take over the control right on the server side at any time in case of system malfunctions, user errors, or any unusual situations.
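The server-side policy just described (an IP allow-list, a single active control right, and a waiting queue of monitor-only clients) can be sketched as a small state machine. All names here are illustrative assumptions; in practice LabVIEW manages this through its web server configuration rather than user code:

```python
# Sketch of the remote-access policy described in the text: clients on
# the allow-list may connect; one client at a time holds the control
# right, and the rest wait in a FIFO queue as monitor-only viewers.

class ControlRightManager:
    def __init__(self, allowed_ips):
        self.allowed = set(allowed_ips)
        self.controller = None      # IP of the client holding control
        self.waiting = []           # FIFO queue of waiting client IPs

    def connect(self, ip):
        """Return 'control', 'monitor', or 'denied' for a new client."""
        if ip not in self.allowed:
            return "denied"
        if self.controller is None:
            self.controller = ip
            return "control"
        self.waiting.append(ip)
        return "monitor"

    def release(self):
        """Controller leaves (or the instructor revokes the right);
        control passes to the next client in the queue, if any."""
        self.controller = self.waiting.pop(0) if self.waiting else None
        return self.controller

mgr = ControlRightManager(["10.0.0.5", "10.0.0.6"])
first = mgr.connect("10.0.0.5")    # granted the control right
second = mgr.connect("10.0.0.6")   # may only monitor
blocked = mgr.connect("10.9.9.9")  # not on the allow-list
nxt = mgr.release()                # control passes to the waiting client
```

The instructor's takeover described in the text corresponds to calling `release` on the server side regardless of what the active client is doing.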


System Operation

Figure 6 shows the process simulator on the LabVIEW virtual interface. This process diagram simulates the real process of the system with controls/indicators corresponding to the components of the real system. Three tank indicators represent the main tank, tank 1, and tank 2 of the wet process trainer, respectively. The green bar of each tank indicates the current water level, which can vary from 0 percent to 100 percent. In this diagram, valves and pumps are controls for their corresponding parts in the real process. FV-201, FV-202, FV-203, FV-204, and FV-205 are controls for discrete valves. These five discrete valves and the two pumps are controlled by ON/OFF Boolean signals; their status can be changed by a mouse click. The color of each control represents its status: red represents OFF, while green represents ON. ZZ-301 and ZZ-302 represent the two continuous control valves in the real process. Their status is controlled by the digital control below the valve icons. The value of each digital control can be changed by clicking the arrows beside it, from 0 to 100 in increments of 10. A value of 0 represents the closed status of the valve, and a value of 100 represents the fully open status. The color of ZZ-301 and ZZ-302 changes to green when the value of their digital control is equal to or greater than 10, indicating an ON status; otherwise, it changes to red, indicating an OFF status. This process simulator provides a direct visual view of the whole wet process system for students to understand the process and identify the function of each component of the system. It can be used in the manual control mode for control testing and in the supervision mode for system maintenance and trouble-shooting. Figure 7 displays the part of the virtual interface with the power control, mode controls, and digital indicators. The power control is used to turn the system on and off, and is green when the system is running. The six indicators to the right of the mode buttons are digital

Figure 6. Process simulator


Figure 7. Mode controls and digital indicators

indicators displaying current values of tank levels, liquid temperatures, incoming flow rate, and incoming pressures. The buttons with red labels are used to select a control mode for the system operation. This virtual interface provides three different modes for process control: supervision mode, manual mode, and auto mode. Clients (students) are assigned different control capabilities when different modes are selected.

In supervision mode, all the valves and pumps in the wet process system can be turned on and off by users regardless of the readings from the sensors. This mode can be enabled only for maintenance and troubleshooting purposes, and it is not available to remote clients for security reasons.

In manual mode, the status change of valves and pumps depends on both the current situation of the system and the commands from the user. For example, if the water level of the main tank is lower than 20 percent or valve FV-201 is closed, pump 1 cannot be activated even if the user intends to do so by clicking on the pump icon on the process simulator of the virtual interface. When certain conditions are met from the sensor readings (that is, in a safe situation), the user can manually control any device of the wet process system. This mode is available for both local and remote users. It helps users to test device status, get familiar with the control interface, and adjust control parameters when necessary.

In auto mode, the system demonstrates a fully automated control process to users, depending on the water levels of each tank. The user can only control continuous control valves by changing the percentage values without changing the on/off status of the valves. The user can adjust the percentage of the continuous valves by clicking the toggle switch beside the digital control or by typing values into the digital controls directly. These three modes provide flexibility for students to explore the wet control process, and also ensure the security needed to protect the physical setup.

The LabVIEW interface panel provides three emergency stop buttons and one reset button in the stop button panel. The E-Stop button stops the whole system when pressed. The Stop 1 and Stop 2 buttons disable pump 1 and pump 2, respectively, when pressed. The Reset button is used only in auto mode to reset the system when the main tank level reaches its limits. This stop button panel is available for both local and remote users. The lab instructor can also disable the system by clicking the stop button in the LabVIEW window on the server side, or by pressing the emergency stop button located on the physical system in any emergency situation.

The live video window integrated in the LabVIEW virtual interface displays the live video from the Internet camera to remote users. Remote users can clearly view the liquid levels and the light indicators from a remote interface. There are seven green light indicators on the indicator panel mounted on the physical system; they are wired to the five discrete valves and two pumps to indicate their on/off status. This helps remote users compare the status of devices on the virtual interface with those on the physical system when necessary for their operation. It is a great tool to help remote users observe the operations of the real system.
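The manual-mode interlock described above can be sketched as a simple permissive check. The 20 percent threshold and valve FV-201 come from the text; the function names and the decision to model the interlock as a pure predicate are our own illustrative choices, not the actual LabVIEW logic.

```python
# Sketch of the manual-mode interlock: pump 1 may start only when the
# main tank level is at least 20 percent AND valve FV-201 is open,
# regardless of the user's click on the pump icon.

MIN_LEVEL_PCT = 20.0  # main tank level threshold stated in the text

def pump1_start_permitted(main_tank_level_pct: float, fv201_open: bool) -> bool:
    """Return True when the sensor readings indicate a safe start condition."""
    return main_tank_level_pct >= MIN_LEVEL_PCT and fv201_open

def handle_pump1_click(main_tank_level_pct: float, fv201_open: bool) -> str:
    """Resolve a user click on the pump 1 icon in manual mode."""
    if pump1_start_permitted(main_tank_level_pct, fv201_open):
        return "pump1_on"
    return "request_ignored"  # unsafe condition: the click has no effect
```

The same pattern generalizes to the other valves and pumps: each actuator gets a permissive function evaluated against live sensor data before any user command is honored.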

LABORATORY IMPLEMENTATION

Six labs have been developed and implemented in the computer-integrated manufacturing course as part of the required lab activities.

Lab 1: Introduction to the LabVIEW Environment

The purpose of this lab is to introduce the LabVIEW graphical programming environment to students. A simple single-axis motor control system is used to help students get familiar with the LabVIEW front panel and block diagram. First, students remotely access the single-axis motor control system developed in LabVIEW to control and monitor a stepper motor over the Internet. The LabVIEW interface for this motor control system is shown in Figure 8.

A LabVIEW program consists of two parts: the front panel and the block diagram. The front panel is used to design graphical interfaces; a control palette provides various controls and indicators to be used for a control interface. The program that runs behind the graphical interface is called the block diagram. A function palette associated with the block diagram provides all kinds of functions and operations. In the second part of this lab, students are required to install a LabVIEW student version on their computers. Students open the LabVIEW front panel window and block diagram window to explore each control and indicator used in this simple program, so that they can get familiar with the LabVIEW programming environment.

Figure 8. Single-axis stepper motor control interface


Lab 2: Programming the Motion Control

The purpose of this lab is to help students understand the major components in a motion control system and the functions of basic motion control VIs, and to gain skills in programming motion control systems. Students are required to develop a simple single-axis motor control program by following instructions provided by the instructor, send the program to the lab instructor, and run their own program for remote motor control. A board ID and an axis number are assigned to each student for the programming process. The lab instructor configures the motor and monitors students' remote operations on the control server.
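The overall structure of such a single-axis program (configure the assigned board and axis, load motion parameters, start, wait) can be sketched as below. All command names here are hypothetical placeholders that only mirror the steps a student's VI performs; they are not the real motion-control API.

```python
# Hypothetical outline of the single-axis control program each student writes.
# The board ID and axis number are the values assigned by the lab instructor.

def run_single_axis_move(board_id: int, axis: int, target_steps: int,
                         velocity: int, send) -> list:
    """Build the command sequence for one move and pass it to the server
    via the `send` callable (executed remotely on the control server)."""
    commands = [
        ("configure_axis", board_id, axis),
        ("load_velocity", board_id, axis, velocity),
        ("load_target_position", board_id, axis, target_steps),
        ("start_motion", board_id, axis),
        ("wait_for_move_complete", board_id, axis),
    ]
    for cmd in commands:
        send(cmd)
    return commands
```

In the actual lab the equivalent steps are wired graphically on the block diagram; the sketch simply makes the sequencing explicit.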

Lab 3: Introduction to Remote Process Control Using FieldPoint

The purpose of this lab is to help students understand the integration of input and output devices in FieldPoint, examine the available devices and technologies used for remote process control applications, and explore the implementation of remote process control applications. Students are required to operate the virtual process control system through the Internet (as shown in Figure 2), examine each mode available on the virtual interface, and understand the mechanism for the system integration. The lab instructor monitors the control process while students use the virtual control interface to access the wet process setup in the lab.

Lab 4: Programming a Simple Process Control Program in FieldPoint

The purpose of this lab is to gain skills in designing and programming a process control system with digital input/output signals and Boolean operations. Students are required to design a process control system with a subset of the components available in the physical system setup, including valves and pumps, send their program to the lab instructor, and test their program through the Internet after it is loaded onto the control server.

Lab 5: Programming for Remote Measurements

The purpose of this lab is to help students understand the mechanism for retrieving analog data remotely, examine the available devices and technologies for remote data acquisition, and gain skills in developing a remote data retrieval system. Students are required to design a LabVIEW VI for remotely retrieving analog data from temperature sensors, level sensors, and pressure sensors. These sensors are already installed as part of the wet process control system, as shown in Figure 3 and Table 1. The lab instructor sends the data sheets of the sensors to students. Students then design the interface and program the block diagram, using the technical data provided by the lab instructor, to retrieve signals from the sensors and display correct data on their program interfaces. Students send their programs to the lab instructor and test them after the programs are loaded into the system.
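The core of such a measurement VI is converting the raw analog signal into engineering units using the sensor's data sheet. The sketch below assumes a linear 4-20 mA transmitter; the engineering-unit ranges are illustrative and are not taken from the actual sensors in the lab.

```python
def scale_current_loop(current_ma: float, eng_min: float, eng_max: float) -> float:
    """Linearly map a 4-20 mA transmitter signal to engineering units."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError(f"signal out of range: {current_ma} mA")
    fraction = (current_ma - 4.0) / (20.0 - 4.0)
    return eng_min + fraction * (eng_max - eng_min)

# Illustrative ranges (NOT from the chapter's data sheets):
# a level sensor spanning 0-100 % and a temperature sensor spanning 0-100 degC.
level_pct = scale_current_loop(12.0, 0.0, 100.0)  # mid-scale reading
temp_c = scale_current_loop(4.0, 0.0, 100.0)      # low end of the span
```

On the block diagram, the same arithmetic is wired between the analog input channel and the front panel indicator, with the span constants copied from the data sheet the instructor provides.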

Lab 6: Design a Virtual Manufacturing Work Cell

This lab encourages students to apply the knowledge, skills, and experience gained from the lectures and lab activities to design a virtual manufacturing cell for remote control and monitoring. Students are required to use sensors for measurements and motors to simulate machine status. The machines in the manufacturing cell include three Computer Numeric Controlled (CNC) machines, three industrial robots, and one conveyor system. Based on what they have learned, students need to integrate the machines and devices into a connected network using a FieldPoint system, assign an Input/Output address to each device and machine, design the control interface, and program the block diagram. Students then send their programs to the lab instructor, test their programs by operating them, and monitor the sensors, valves, and motors of the process control system through the Internet.

The educational value of these online lab activities has been assessed through students' feedback. Most students find these labs very interesting, convenient to access, and easy to follow. They consider the labs necessary for understanding the concepts of mechatronic system integration, remote control, and Human-Machine Interface (HMI). Students gain experience by exploring, operating, and programming the system. In addition to helping students understand the concepts and principles of remote control applications in manufacturing, these lab activities provide the following major benefits, collected from students' feedback:

• Hands-on experience with LabVIEW programming.
• Great hands-on experience with online control and monitoring.
• A broader view of the future of industrial networking in implementing computer-integrated manufacturing.
• A convenient way to access lab facilities.
• A flexible schedule for working on lab activities.
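The I/O address assignment step in Lab 6 amounts to maintaining a device-to-channel map for the cell. The sketch below is a hypothetical illustration; the module name and channel numbers are invented for the example and are not the course's actual FieldPoint configuration.

```python
# Hypothetical I/O map for the virtual manufacturing cell: three CNC
# machines, three robots, and one conveyor, each assigned a
# (module, channel) address on a FieldPoint digital output module.
io_map = {
    "cnc1": ("FP-DO-401", 0), "cnc2": ("FP-DO-401", 1), "cnc3": ("FP-DO-401", 2),
    "robot1": ("FP-DO-401", 3), "robot2": ("FP-DO-401", 4), "robot3": ("FP-DO-401", 5),
    "conveyor": ("FP-DO-401", 6),
}

def address_of(device: str) -> tuple:
    """Look up the I/O address of a device; raises KeyError for unknown names."""
    return io_map[device]

def devices_on_module(module: str) -> list:
    """List, in sorted order, all devices wired to a given module."""
    return sorted(d for d, (m, _) in io_map.items() if m == module)
```

Keeping the map in one place makes it easy to check that every machine in the cell has exactly one address before wiring the control interface to it.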

However, some students did mention that time delays in the control process caused problems for their remote operation, and that the programming difficulties with remote measurements were frustrating because testing was not available until a program had been completed. These problems could be solved in the future by implementing more web-based applications as the system is enhanced. Also, the online laboratory is currently not available 24/7 to online students due to safety and security concerns. But it does provide convenient access to labs for online students: they can schedule their lab activities in evenings and on weekends with the lab instructor, when either the lab instructor or a lab assistant can sit by the server or monitor the process through the Internet.

ISSUES OF RELEVANCE TO THE LABORATORY

According to feedback from students, lab instructors, and faculty, several issues related to this virtual laboratory will affect its future development. These include the influence of network bandwidth on information transmission for remote control, the user management system, and the limited functions of the LabVIEW web server for supporting online programming.

In this virtual process control system, time delays exist in data transmission, especially when the client accesses the system through a dial-in network connection. This is caused by differences in network bandwidth. In the development of the remote laboratory, not only parameter and administrative data but also audio and video data need to be transmitted over network connections. Web cameras bring live images of the physical setups to remote clients. It is therefore critical to use the available bandwidth efficiently; otherwise, time delays will mar the whole remote experiment. Several networking techniques can be used to address this problem. For example, setting respective priorities for the transmission of different data types can ensure that critical data is transmitted without delay. Another technique is data compression, but it involves a trade-off because of the additional delay resulting from the compression and decompression processes; this delay should be kept much smaller than the transmission delay. Data compression is especially useful for audio/video transmission, which involves a huge amount of data. At the same time, the server needs to adapt to the different bandwidths of remote clients: some might be on the same campus LAN as the server, while others may connect from home over a dial-up line.

When the laboratory was implemented in the graduate course, there were only 12 students in the class, and there were no complaints from students about the access method. If the laboratory is implemented in classes with more students, user management becomes an issue: not all students would like to wait in a queue for their online lab activities. A user management system will be developed using Visual Studio 2005 to integrate an interface and an Access 2007 database and to communicate with the LabVIEW web server to realize a user reservation system. This will increase flexibility for students, allowing them to log in and schedule their lab activities online. It will also provide a more secure way to manage users through permission assignments.

The functionality of the LabVIEW web server is the second issue in the remote virtual laboratory. In this remote control process, the LabVIEW web server publishes the VIs to the Internet, but clients can only update and get information through front panel interactions; clients cannot make changes to VIs directly from the remote interface. To let students learn and practice programming in the LabVIEW environment, clients must have the capability to re-program the process and re-download their programs to the controller for testing purposes. This lengthens each program-test cycle for students and requires the lab instructor to load students' programs onto the server manually. To achieve a remote programming function, the LabVIEW web server must be separated from the control server. Programming languages with powerful web-based functionality, such as JavaScript and VB.NET, are recommended for extending the LabVIEW web server.
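The data-priority technique mentioned above can be pictured as a priority queue on the server side: control-critical packets are always transmitted before bulky audio/video frames. This is an illustrative sketch, not the laboratory's actual networking code; the priority classes are our own labels.

```python
import heapq
import itertools

# Lower number = higher priority; the counter preserves FIFO order
# among packets that share the same priority class.
CONTROL, SENSOR, VIDEO = 0, 1, 2

class TransmitQueue:
    """Send control data before sensor data, and sensor data before video."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def enqueue(self, priority: int, payload: str) -> None:
        heapq.heappush(self._heap, (priority, next(self._seq), payload))

    def next_packet(self) -> str:
        """Pop the highest-priority packet queued so far."""
        return heapq.heappop(self._heap)[2]

q = TransmitQueue()
q.enqueue(VIDEO, "frame-001")
q.enqueue(CONTROL, "e-stop")
q.enqueue(SENSOR, "level=42%")
# The emergency stop command is transmitted first despite arriving later.
```

Under this scheme a video frame can never delay an emergency stop, which is exactly the guarantee the bandwidth discussion calls for.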
To better address students' requirements for virtual laboratories and improve system performance, a form will be developed for students and lab instructors to evaluate the performance of the system. As the laboratories are implemented in more classes with more students, the evaluations and feedback will provide more ideas for system improvement and better implementation.

CONCLUSION

Remote virtual laboratories accessed through the Internet are feasible for long-distance applications. Experience from developing this virtual laboratory and implementing it in a computer-integrated manufacturing course shows that multiple aspects must be taken into consideration to obtain adequate performance of the online laboratory. These include the connection and communication between the web server and the physical setup (machines and processes), and the connection and communication between the web server and the Internet. As a next step in adding more systems to this laboratory, data acquisition, motion control, FieldPoint controllers, Programmable Logic Controllers (PLCs), and industrial robots will be integrated to achieve a virtual flexible manufacturing cell that can be operated and monitored through the Internet. Technologies for system integration and web-based human-machine interfaces (HMI) need to be applied in future development. The future system will also provide an ideal research platform for studying the performance of advanced web-based technologies in manufacturing environments, and the efficiency of system integration for improving flexible manufacturing systems.


KEY TERMS AND DEFINITIONS

FieldPoint: A proprietary method for interfacing devices to computers, developed by National Instruments; it is similar in principle to the fieldbus interfacing methods used by many process control equipment suppliers.

LabVIEW: NI LabVIEW is a graphical development environment for rapidly creating flexible and scalable test, measurement, and control applications. It is the major programming tool used in developing the virtual control interfaces.


Process Control: Process control is a statistics and engineering discipline that deals with architectures, mechanisms, and algorithms for controlling the output of a specific process.

VIs: Virtual instruments. In this chapter, VIs are the graphical user interfaces programmed in the LabVIEW environment for the purposes of motion control and process control.

This work was previously published in Internet Accessible Remote Laboratories: Scalable E-Learning Tools for Engineering and Science Disciplines, edited by Abul K.M. Azad, Michael E. Auer and V. Judson Harward, pp. 1-17, copyright 2012 by Engineering Science Reference (an imprint of IGI Global).


Section 4

Utilization and Application

This section discusses a variety of applications and opportunities available that can be considered by practitioners in developing viable and effective Industrial Engineering programs and processes. This section includes 14 chapters that review topics from case studies in Cyprus to best practices in Africa and ongoing research in the United States. Further chapters discuss Industrial Engineering in a variety of settings (air travel, education, gaming, etc.). Contributions included in this section provide excellent coverage of today’s IT community and how research into Industrial Engineering is impacting the social fabric of our present-day global village.


Chapter 42

Using Serious Games for Collecting and Modeling Human Procurement Decisions in a Supply Chain Context

Souleiman Naciri, Laboratory for Production Management and Processes, Ecole Polytechnique Fédérale de Lausanne, Switzerland

Min-Jung Yoo, Laboratory for Production Management and Processes, Ecole Polytechnique Fédérale de Lausanne, Switzerland

Rémy Glardon, Laboratory for Production Management and Processes, Ecole Polytechnique Fédérale de Lausanne, Switzerland

ABSTRACT

Computer simulation is often used for studying specific issues in supply chains or for evaluating the impact of eligible design and calibration solutions on the performance of a company and its supply chain. In computer simulations, production facilities and planning processes are modeled in order to correctly characterize the supply chain behavior. However, very little attention has been given so far in these models to human decisions. Because human decisions are very complex and may vary across individuals or with time, they are largely neglected in traditional simulation models. This restricts the models' reliability and utility. The first thing that must be done in order to include human decisions in simulation models is to capture how people actually make decisions. This chapter presents a serious game called DecisionTrack, which was specifically developed to capture the human decision-making process in operations management (the procurement process). It captures both the information the human agent consults and the decisions he or she makes.

DOI: 10.4018/978-1-4666-1945-6.ch042

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

In fast-paced markets, companies try to improve product service level and quality while decreasing costs in order to gain market share. Whereas some companies implement solutions to improve performance without prior verification, a wiser approach is to use computer simulation to evaluate the impact of potential solutions on the performance of the company and its supply chain. In these simulations, production facilities and planning processes are modeled in order to capture company behavior. However, the main drawback of this approach is that little attention is given to the human decisions that take place in this context. Human decisions are very complex, varying across individuals and even across time for a single individual. Traditional simulation models thus neglect this component. But this in turn limits the models' reliability; in fact, modeled companies often exhibit different behavior than their real-world counterparts.

The challenge is to be able to capture how human decisions are made, use this knowledge to develop reliable human decision-making models, and then implement these models in computer simulations. For this purpose, the first task is to capture human decisions as they are made rather than as they should be made. Capturing actual human decisions is not straightforward, however, because people are not very good at verbalizing what they know (Vermersch, 2006).

The utility of the conventional simulation approach for studying system behavior has been demonstrated (Robinson, 2005), even though it does not involve active user participation during simulation runs. However, for the purpose of knowledge elicitation (Edwards et al., 2004) as well as user training or education, using a more advanced simulation technique that integrates visual simulation and user interaction (Van der Zee & Slomp, 2009) is a promising approach.

This chapter presents a serious game called DecisionTrack that was specifically designed to capture the human decision-making process in a procurement context. The main motivation for developing the game is to be able to take full advantage of simulations that include active user interaction for the purpose of quantitatively analyzing decision-making behavior. This serious game captures both the information consulted by the player and the decisions he or she makes. This is done repetitively during the game, because an operational decision (procurement) is required from the player on a daily basis. This leads to the capture of a series of decision versus consulted-information pairs that can later be used to develop human decision-making models. Subsequently, the outputs of the game are analyzed using four metrics that characterize each player's behavior in terms of data consultation and decisions.

In the rest of this chapter, we lay out the basic concepts of supply chains, decision-making in a supply chain context, and previously developed serious games (Background); describe current weaknesses and define the goal of the research (Motivation and Goals); describe the details of our serious game (DecisionTrack Game); outline our analysis and interpretation approach (Analysis and Interpretation); illustrate a case study (Application Case); discuss strengths, weaknesses, and further challenges of our serious game (Issues and Controversies); outline potential further development (Research Directions); and draw final conclusions (Conclusions).

BACKGROUND

1. Supply Chain

In the current global economy, enterprises do not act as isolated companies but are integrated into complex networks involving many entities (manufacturing, transportation, warehousing, etc.) that are linked by complex material flows (such as products and components) and information flows (such as customer orders or production orders). This is schematically illustrated in Figure 1.

Figure 1. Schematic representation of an enterprise network (supply chain)

Terms such as 'Supply Chain' or 'Value Adding Network' are used to describe these complex networks. For simplicity, we will just use the term 'supply chain'. Within a manufacturing company (one constitutive entity of a supply chain; see "Manufacturer" in Figure 1), the main material and information flows can be schematically described as illustrated in Figure 2.

Customer orders are received and entered into the company's order book. The order book serves as the basis, together with market demand forecasts, for creating a plan called the Master Production Schedule (MPS). The MPS contains confirmed and expected customer orders, listed according to their desired delivery dates. Based on the MPS, a procedure is run to anticipate the need for product assembly, part production, and component procurement. This procedure, called Manufacturing Resources Planning (MRP), is widely used in repetitive manufacturing.

Figure 2. Schematic representation of the material and information flows within a manufacturing company
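The core of the MRP procedure just described is netting the MPS requirements against on-hand inventory, period by period, and offsetting the resulting order proposals by the procurement lead time. The following single-item sketch is an illustrative simplification, not the planning engine discussed in the chapter:

```python
def mrp_net(gross_requirements, on_hand, lead_time):
    """Return planned order releases, one entry per period.

    gross_requirements: demand per period derived from the MPS
    on_hand: starting inventory
    lead_time: periods between order release and receipt
    """
    releases = [0] * len(gross_requirements)
    for period, gross in enumerate(gross_requirements):
        if on_hand >= gross:
            on_hand -= gross  # demand is covered from stock
        else:
            shortage = gross - on_hand  # net requirement for this period
            on_hand = 0
            release_period = max(0, period - lead_time)  # lead-time offset
            releases[release_period] += shortage
    return releases

# With 25 units on hand and a one-period lead time, the shortages in
# periods 1 and 2 become order proposals released one period earlier.
plan = mrp_net([10, 40, 30], on_hand=25, lead_time=1)
```

The time-paced order propositions that DecisionTrack presents to the player correspond to output of this kind; the planner then confirms, modifies, or groups them.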


2. Decision Making in a Supply Chain Context

In each entity of a supply chain, people are constantly making decisions; these decisions are the cornerstone of company management and have very important, direct consequences on company performance. Human decisions can be classified according to the time horizon affected by the decision as strategic (long-term), tactical (medium-term), or operational (short-term). In supply chain management, strategic decisions are tied to the network's configuration (for example, the selection of a manufacturing location). Tactical decisions involve the calibration of the network's main management parameters (for example, the level of safety stocks). Finally, operational decisions are related to the execution of repetitive tasks (for example, launching production orders).

Simulations are often used in tools that support strategic and tactical decisions. If these simulation models are to help humans make strategic and tactical decisions, they must be able to reliably reproduce the behavior of the actual supply chain. But the supply chain is strongly affected by operational decisions that are continuously being made by humans. Paradoxically, operational human decision-making has hardly been taken into account in supply chain simulation models, thus limiting the reliability of these tactical and strategic decision-making tools.

Operational decisions in a supply chain context are characterized by their repetitive nature; i.e., the same decision type (for example, launching a production order) must be made frequently (for example, daily). The decision situation may change for each decision occurrence, however. The decision made is thus dependent on both the decision context and the human decision-making behavior, as schematically represented in Figure 3.

Figure 3. Schematic representation of an operational human decision-making process in a supply chain context

Many operational decisions are made in a supply chain context, from shipping to planning and procurement. In particular, one output of the MRP procedure described above is a set of lists of time-paced propositions for launching assembly, production, and procurement orders. A human decision is then required to execute the MRP-proposed orders. This decision involves confirming the MRP propositions, modifying them (date and/or quantity), or grouping some of them. In the specific case of the procurement process considered here, the operational decision can be illustrated as shown in Figure 4.

Figure 4. Illustration of the procurement decision-making process

The decision elicitation problem can therefore be formalized as follows. Each planner j makes decisions Dij at time i, according to the decision context he/she perceives at time i (DCij). This decision context encompasses the updated proposed procurement plan (MRPi), as well as the collected information (CIij) gathered by planner j at time i. Thus, the decisions Dij made by planner j at time i can be expressed as:

Dij = fj (DCij) (1)

Dij = fj (MRPi, CIij) (2)

Consequently, the elicitation process consists of identifying the decision-making behavior fj of planner j in order to predict planner j's decisions according to the information at hand. The first task in identifying fj is to capture the decision inputs and outputs:

• Decision inputs:
◦ MRP proposed orders (MRPi), which create the decision alternatives;
◦ The information collected by planner j (CIij).
• Decision outputs: the MRP proposals as modified and validated by the planner (Dij).
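Equation (2) says that a planner's decision is a function of the MRP proposals and of the information he or she chose to consult. The elicitation task can therefore be pictured as accumulating (decision context, decision) pairs; the record layout below is our illustrative guess at such a log, not the game's actual data format.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One observation pair (DCij, Dij) for planner j at time i."""
    planner: str
    day: int
    mrp_proposals: list                            # MRPi: proposed (item, qty, date)
    consulted: list = field(default_factory=list)  # CIij: data the planner viewed
    decision: dict = field(default_factory=dict)   # Dij: confirmed/modified orders

log = []  # the accumulated observation pairs used to identify fj

def record_consultation(rec: DecisionRecord, info: str) -> None:
    rec.consulted.append(info)  # every lookup is tracked, not just the decision

def record_decision(rec: DecisionRecord, decision: dict) -> None:
    rec.decision = decision
    log.append(rec)  # the completed (context, decision) pair joins the log
```

A series of such records, one per simulated day and planner, is exactly the input needed to fit or mine a model of the behavior fj.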

3. Serious Games and Dynamic Decision Making

Serious games have been used for several decades for studying dynamic decision-making (DDM). DDM encompasses the decision-making processes that take place within an environment characterized by dynamics, complexity, opaqueness, and dynamic complexity (Gonzalez et al., 2005). In that paper, the authors review the ten most significant serious games developed since the 1980s. One of these is of particular interest here, as it involves operations and supply chain management. The Beer Game was initially developed in the 1960s as a board game before being converted into a serious game in the late 1980s by Sterman (Sterman, 1989). The Beer Game has four entities (factory, distributor, wholesaler, and retailer) that provide the market with beer. The goal is to achieve the highest service level with the lowest costs. Players are separated into four groups, each group being in charge of the replenishment decisions of a single supply chain entity. This game has been popular with users (master's students and managers) because it illustrates supply chain dynamics and the virtues of collaboration across supply chain partners very well. However, one disadvantage of this game is that it is difficult to use it to understand player decision-making behavior. Indeed, the Beer Game does not track the information consulted by players, and thus the causal relations between the system state and the decisions made by the players cannot be explicitly rendered.

This drawback is also observable in a simulation game developed to study maintenance decisions in Ford assembly lines (Robinson et al., 2005). In this game, a production process is simulated until a breakdown occurs, at which point the simulation stops and players (maintenance operators) are presented with the breakdown characteristics. They are then asked to choose a decision from a discrete set of alternatives (repair now, repair later, ask somebody else to do it, etc.). Each breakdown instance is recorded, including both the breakdown characteristics and the corresponding decision. The authors then used this set of instances to model and reproduce maintenance operator decision-making behavior. However, by presenting the entire set of breakdown characteristics to maintenance operators at each breakdown, the assumption is made that all of the breakdown characteristics are equally important in the decision-making process, whereas some of them would probably not have been consulted if the operator had had to look for relevant information himself.

Serious games are a very efficient way of representing complex and dynamic decision-making contexts (time dependency, feedback loops, endogenous and exogenous variations), but they often fail to handle a very important phase of dynamic decision making, the situation assessment, which can vary greatly from person to person.

MOTIVATION AND GOALS

Conventional data collection techniques such as observation, questionnaires, and interviews are not well suited for the elicitation process described above (Naciri, 2010). Several hurdles (the time required, the difficulty of designing suitable questionnaires, the difficulty that interviewees have in explicitly describing their actions and decisions) make it difficult to rely on such techniques to collect relevant data for analyzing and modeling human decision-making behavior. On the contrary, serious games seem better suited for the elicitation of human decisions in an operational context. They have several demonstrated benefits:

• First, serious games require players to "act" and not to "explain"; therefore, "action satellites" can be avoided. According to Vermersch (2006), "action satellites" refer to the four dimensions of the action (context, judgments, theoretical knowledge, goals) that interviewees often cite instead of discussing the action (or decision) itself.
• Second, in serious games several player actions can be recorded, giving results at a faster pace than conventional techniques.
• Third, it is possible to analyze how decisions made at time t influence the environment at time t+1, making it possible to capture the dynamic aspects of decisions and, in particular, the notion of feedback.
• Finally, serious games create an environment in which player actions can be recorded, as well as the context in which decisions occur. Thus, once the simulation is finished, it is possible to pair the simulation context with the decisions that were made.

Participatory simulation via serious games thus appears to be a well-adapted tool for capturing the various dimensions of human decision-making, and thus for obtaining a quantitative understanding of human decision making behaviors. The goal of this work is to develop a serious game that will elicit operational human decisions


Using Serious Games for Collecting and Modeling Human Procurement Decisions in a Supply Chain Context

in a supply chain context, more specifically, in procurement. Because the objective is to reliably generate human decision-making situations that are representative of what happens in industry, the game must fulfill the following criteria:

•	Interfaces similar to typical industrial tools: this ensures that players (procurement agents) act (make decisions) as they would in their working environments. It thus ensures that the information collected in the virtual environment is representative of procurement agent behavior in the “real world.”
•	Available information similar to that in actual ERP systems: this ensures that players are familiar with the information at hand in the simulator and that no specific training is required to explain how the information is displayed.
•	A decision pace that avoids player stress: one of the main drawbacks of real-time serious games is that the pace is often accelerated, giving the player little time to make decisions. Consequently, players must control the simulation pace in order to have enough time to make relevant decisions.

The main function of the developed serious game, DecisionTrack, is to collect data to identify:

1.	What information procurement agents are interested in, and
2.	What kinds of decisions procurement agents make.

According to Van der Zee & Slomp (2009), a framework for game design has four phases:

•	Initialization: definition of the scope and objectives of the game.
•	Design: detailed development of the basic ideas formulated in the initialization phase. The outcome of this phase is a simulation game concept.
•	Construction: construction of the game using software or other physical elements.
•	Operation or game running: actual use of the game, which may include a test of the game for its intended purpose.

We have covered the “Initialization” phase of the game design framework in this section. The next section explains the development methodology of the DecisionTrack serious game.

DECISIONTRACK GAME: FROM DESIGN TO IMPLEMENTATION

1. Definition of Game Concept

The DecisionTrack game was designed according to the following objectives:

1.	To provide the player with a realistic decision making context (similar to the one he/she is usually involved with),
2.	To allow the player to consult the information he/she feels is relevant,
3.	To allow the player to make the decisions he/she feels are relevant,
4.	To capture player actions (consulted information and decisions) and to save this information as readable data (a log file).

The underlying motivation behind these four objectives is to build a decision making context that is similar to the one players are used to working with, in order to capture decision making situations that are as close as possible to those they encounter in reality. This can be done not only by creating realistic decision-making situations, but also by not constraining players to limited sets of data or decisions. Finally, keeping track of players’ actions makes it possible to subsequently identify the relationship between the decision-making context and the decision made.

2. Virtual Decision Making Environment The virtual decision making environment chosen for this game is a two-tier supply chain, in which the player is in charge of the procurement process. His/her task is to modify (if needed) and validate the procurement orders based on MRP propositions, as described in the previous section. In order to provide players with a familiar decision making environment, a virtual supply chain with a commonly used production management policy (make to stock) is used. It is illustrated in Figure 2. Several entities representing departments of the company such as production, warehousing, planning (i.e., MPS, MRP and the procurement process under study) are included in this virtual environment. It also includes some external entities such as customers and suppliers. The circles (containing the letter “i”) on the top of entity icons in Figure 2 indicate that information concerning the corresponding entity is available for player consultation.

Table 1 summarizes the game elements that are included in DecisionTrack. The following subsections describe in detail how the game elements are constructed.

3. DecisionTrack Interfaces

The game is developed in Java 1.6 (Java SE 6), taking full advantage of the built-in Java Swing libraries to implement complex user interactions. The main goal of the DecisionTrack interfaces is to provide information about the company and its supply chain. This information, which is modified daily, enables players to update their knowledge of the system.

Displayed Information

The relevant information to be displayed was identified by conducting an analysis of the procurement process with experts in the procurement field. This analysis provided valuable insight into the kinds of information procurement agents consult, and the kinds of decisions they make. Two ERPs (Enterprise Resource Planning systems) widely used in Switzerland, SAP (SAP, 2011)

Table 1. Summary of DecisionTrack main game elements

Game elements | Definition
Model and Scenarios | A 2-tier supply chain; context: a manufacturing company, MRP information to consult; decision making options: accept the MRP propositions, postpone, anticipate, group orders by modifying the order launching date
Game process | 1. Presentation of the game to each player; 2. Each player plays the game; 3. Data collection and analysis; 4. Player decision making modeling
Events | New customer orders, new forecasts, component deliveries from suppliers (on time and late deliveries)
Periods | One period per simulation day; a complete run with a single player lasts at least 30 periods
Roles | Procurement agent
Results | Performance indicators such as inventory levels and service level
Indicators | MRP data, supplier-related data, customer-related data; above-mentioned performance indicators (see “Results”)
Symbols, Materials | Various user interfaces (windows) that mimic real ERP systems in companies



and Proconcept ERP (Proconcept, 2011) were investigated, in order to design DecisionTrack’s interfaces to mimic the interfaces procurement agents are accustomed to working with.

Tracking Methods

As stated before, players can navigate through DecisionTrack interfaces in order to update their knowledge of the decision context. Because different people search for information in different ways, it is essential to track which specific information is consulted. This is accomplished by isolating the information related to each supply chain entity in a separate tab of the game window. Each tab is associated with a mouse-listener that is activated once the tab is selected. The record of the activated tab is stored in a “log file”. In this way it is possible to track which supply chain entity a player is interested in. Because several pieces of information may be needed to describe a supply chain entity, a single interface may not have enough space to correctly display the whole set of information. In these cases, tabs may contain two or more sub-tabs between which the entity-related information is split. In cases where a single sub-tab contains heterogeneous information, checkboxes are added to help track the consulted information. These checkboxes are unchecked by default, which makes the corresponding information unavailable. When a player is interested in a piece of information, he/she checks the corresponding checkbox, revealing that information. For sub-tabs and checkboxes, the consultation-tracking technique is similar to the one described for tabs: using mouse-listeners attached to each graphic element, the name and the value (when relevant) of the consulted information are recorded in the log file. Tabs, sub-tabs, information panels and checkboxes are organized hierarchically, as illustrated in Figure 5.

Figure 5. Illustration of the hierarchical structure of the information within a DecisionTrack window
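The tab-based consultation tracking described above can be sketched as follows; the class name and log format are illustrative, and a Swing ChangeListener is used here in place of the chapter's mouse-listeners, since it fires on every tab selection:

```java
import java.util.ArrayList;
import java.util.List;
import javax.swing.JLabel;
import javax.swing.JTabbedPane;

/**
 * Minimal sketch of DecisionTrack-style consultation tracking.
 * Each tab selection is appended to an in-memory "log file".
 */
public class ConsultationTracker {
    private final List<String> log = new ArrayList<>();   // stands in for the log file
    private final JTabbedPane tabs = new JTabbedPane();

    public ConsultationTracker(String... tabNames) {
        for (String name : tabNames) {
            tabs.addTab(name, new JLabel(name + " information"));
        }
        // Record which supply chain entity the player consults; the listener is
        // attached after the tabs are built, so only player-driven selections are logged.
        tabs.addChangeListener(e ->
                log.add("CONSULTED:" + tabs.getTitleAt(tabs.getSelectedIndex())));
    }

    public void select(int index) { tabs.setSelectedIndex(index); }

    public List<String> getLog() { return log; }
}
```

Selecting the Suppliers tab and then the MRP tab would append "CONSULTED:Suppliers" and "CONSULTED:MRP" to the log, which can later be paired with the decisions recorded in the same file.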



Interfaces for Collecting Data

The central part of Figure 6 illustrates the window corresponding to the Customers tab. This interface provides various pieces of information. The first information panel (1) shows the evolution of the service level (the ratio of orders delivered on time) since the beginning of the simulation. The second information panel (2) contains only the current service level, and the third panel (3) contains the list of pending orders (orders that have not yet been delivered). To get to this specific interface, the player must go through the following steps:

1.	Select the Customers tab
2.	Select the Service level sub-tab
3.	Activate the Service level checkbox in the upper information panel
4.	Activate the Service level checkbox in the second information panel

The right hand side of Figure 6 shows the corresponding data recorded in the log file.

Interfaces for Making Decisions

All the interfaces except Purchaser contain information for consultation that cannot be directly modified by the player. The information contained in the Purchaser tab (proposed procurement orders generated by the MRP algorithm) can be validated by the player with or without modifications. The Purchaser interface is illustrated in Figure 7. As shown in the bottom panel, several procurement orders with a proposed launching date and quantity are displayed. According to his/her knowledge of the system state, the player can modify the propositions by clicking on an order, and then on the line corresponding to the new launching date. Four kinds of modifications can be made:

•	Anticipation (the new order date is closer to the current simulation date than the former one),
•	Postponement (the new order date is later than the former one),
•	Grouping (the new launching date of the order corresponds with the launching date of existing orders),
•	Do Nothing (no modification is made to the proposed order).

Once the player has chosen one of these alternatives, he/she can click on the padlock to validate the order and send it to the supplier. All the decisions made by the player are recorded in the log file, so that the correspondence

Figure 6. Illustration of a specific interface (left) and the corresponding log file records (right)



Figure 7. Snapshot of the “purchaser” tab

between player knowledge of the system state and the decisions he/she has made can be identified.
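The four modification types can be classified mechanically from the proposed and validated launching dates. A minimal sketch follows; the class name and the rule giving grouping priority when the chosen date matches another order are assumptions, not taken from the chapter:

```java
import java.time.LocalDate;
import java.util.Set;

/**
 * Hypothetical classifier for the four DecisionTrack decision types.
 */
public class DecisionClassifier {

    public enum Decision { ANTICIPATION, POSTPONEMENT, GROUPING, DO_NOTHING }

    /**
     * @param proposed    launching date proposed by the MRP algorithm
     * @param chosen      launching date validated by the player
     * @param otherOrders launching dates of the other pending orders
     */
    public static Decision classify(LocalDate proposed, LocalDate chosen,
                                    Set<LocalDate> otherOrders) {
        if (chosen.equals(proposed)) {
            return Decision.DO_NOTHING;       // proposition accepted unchanged
        }
        if (otherOrders.contains(chosen)) {
            return Decision.GROUPING;         // new date coincides with an existing order
        }
        // Otherwise the move is purely temporal: earlier or later than proposed.
        return chosen.isBefore(proposed) ? Decision.ANTICIPATION
                                         : Decision.POSTPONEMENT;
    }
}
```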

4. DecisionTrack Scenario

Game scenarios refer to the evolution of game variables across time. When designing a new game, two categories of variables must be differentiated. The first category encompasses variables that cannot be modified by the player: exogenous variables. The second category comprises variables that can be modified by the player through his/her decisions: monitoring variables. The latter help the player monitor the impact of his/her decisions on company performance. The exogenous variables are identified through a literature review and discussions with domain experts. These variables must vary across time


to create decision-making situations that require specific attention. In this research, the goal is to identify how procurement agents make decisions according to the evolution of the environment (suppliers and customers). Table 2 provides the list of all exogenous and monitoring variables used in this study. Among the monitoring variables, it is worth noting that the service level value depends on the way the player updates the proposed procurement plan. The selected exogenous variables that should vary across the simulation are those that are related to the supply chain environment, such as component delivery times, market behavior (customer orders), and forecasts (see Table 2). By making the above-mentioned exogenous variables change across time in the game scenario, a difference appears between planning algorithm


Table 2. Exogenous and monitoring variables

Displayed information | Location | Exogenous variable | Monitoring variable
Customer orders and forecasts | Customers tab, Status sub-tab | YES | NO
Actual and predicted component delivery time | Supplier tab, Graphics sub-tab | YES | NO
MRP tables | MRP tab, Status sub-tab | NO | YES
Inventory levels | Stock tab, Status sub-tab, Graphics sub-tab | NO | YES
Work in process | Production tab, Status sub-tab | NO | YES
Service level | Customers tab, Service Level sub-tab | NO | YES

propositions (which are based on theoretical MRP parameters) and the current simulated situation. Such gaps create critical decision-making situations that require the player to make decisions that impact company performance. The gaps between the planning algorithm’s recommendations and the evolution of the company’s environment correspond to realistic and very common situations that routinely arise in companies in which the MRP parameters are not updated according to variations in the environment.

Supplier Delivery Time Variations

Supplier delivery times are set in the scenario to differ from the theoretical delivery time introduced in the MRP algorithm. In this way, the actual delivery time may be either 1) longer than the theoretical one (which leads to delivery delays), or 2) shorter than the theoretical one (which leads to deliveries that enter the stock earlier than expected). The scenario is set so that situation 1) occurs most often, encouraging the player to make decisions instead of choosing the status-quo alternative. Supplier-predicted delivery time data are provided in the form of a table in the Supplier tab. In addition to the predicted delivery time, the actual delivery time is reported after each order delivery. All three delivery times (theoretical, predicted and actual) are reported as shown in Figure 8.

Forecast Patterns

In addition to customer orders that represent the actual impact of the market on the “central” company, forecasts are designed to anticipate market requirements. The remainder of this section discusses forecast patterns (evolution of the forecasted demand over the planning periods) and whether they adequately predict customer order patterns. Forecasted demand and customer order patterns are designed so as to lead to either overestimation (Delta > 0) or underestimation (Delta < 0) of demand, corresponding to Case 1, Delta > 0, and to Case 2, Delta < 0.

For both linear demand and multiplicative demand, we consider x as a nonnegative random variable with mean μ and standard deviation σ, with a density function f(∙) and a cumulative distribution function F(∙). We define the inverse function of F(∙) by F−1(∙), and F̄(∙) = 1 − F(∙). Note that the above fashion retailing model is similar to the newsvendor model with price-dependent demand; therefore, we can apply the price-dependent demand newsvendor problem to capture the fashion retailing problem. Now, we consider the case in which the fashion retailer is risk-sensitive and follows the VaR approach to determine the joint optimal retail price and order quantity. We first define Πi(q, r) as the fashion retailer’s profit under demand function i = L, M, for a given confidence level α, where 0 < α < 1.

> 1000 | 9
No Info. | 27
Total | 74
Total | 74

Production Information Systems Usability in Jordan

Table 2. Detailed type of information systems used in Jordanian factories

# | Item (question asked) | Number of factories
1 | Do you use any type of Information Technology | 66
2 | Do you use accounting information systems | 60
3 | Do you use special sales systems | 42
4 | Do you use production information systems | 45
5 | Do you use inventory and warehousing systems | 47
6 | Do you use computer-aided design systems | 23
7 | Do you use human resource and salary systems | 52
8 | Do you use quality assurance/control systems | 36
9 | Do you use distribution systems | 25
10 | Do you use procurement information systems | 37
11 | Do you use manufacturing aiding systems | 25

systems, 13 factories employ distributed systems, but integrated together. Also, results indicated that 20 factories employed enterprise systems that are utilized in many functions and tasks. Finally, only 8 factories have extended systems that reach suppliers, distributors and customers (SCMS or ERP). The second part of the survey included items related to the systems used in these factories. 47 managers indicated that they are interested in

extending and using part or all of the systems listed in Table 2. Also, the distribution of the source and type of these systems was as follows: 32 factories used locally designed systems, and 33 factories used ready-made imported (off-the-shelf) systems. One of the objectives of this study was to examine the relationships between variables such as computer diffusion, sales and employee size. This was done through correlations between those variables, to test whether any relationship exists. Also, to relate demographics to the main objective of this work, we computed a new construct based on the count of the number of systems employed by the firm (for example, if the manager of firm XYZ checked yes for three systems: accounting information systems, HR systems and sales systems, then the total number of systems employed was three). This count was then correlated against each of the variables mentioned. The correlation matrix is depicted in Table 3. The results indicated significant correlations between the three variables and the total number of systems deployed. Also, significant correlations existed between all three variables.
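The figures in Table 3 are standard Pearson product-moment correlations. A minimal sketch of the computation follows; the helper class name is illustrative, not from the chapter:

```java
/**
 * Pearson product-moment correlation, the statistic behind the
 * correlation matrices reported in this study.
 */
public class Pearson {

    public static double correlation(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        // r = cov(x, y) / (sd(x) * sd(y)), computed from raw sums
        double cov = sxy - sx * sy / n;
        double vx = sxx - sx * sx / n;
        double vy = syy - sy * sy / n;
        return cov / Math.sqrt(vx * vy);
    }
}
```

Applied to the firm-level series (number of employees, sales size, number of computers, total number of systems), this reproduces the kind of matrix shown in Table 3.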

The Intentions of Managers to Adopt PIS The second major objective of this study was to explore managers’ intentions to adopt or continue using PIS systems utilizing Rogers’ IDT model. In a separate study, the researcher employed a

Table 3. Correlations matrix of the demographics against the number of systems employed

 | Number of employees | Sales size | Number of computers
Number of employees | 1 | |
Sales size | 0.551** | 1 |
Number of computers | 0.636** | 0.810** | 1
Total number of systems | 0.437** | 0.394** | 0.398**

** Correlation is significant at the 0.001 level



survey that introduced a description of PIS and asked questions related to the different constructs of the model. The main source for the items used in the study was Moore and Benbasat’s (1991) work. The items were translated into Arabic, and the translation was tested with 10 language experts. The exploratory nature of this study makes such a method convenient, and the size of the data allows for such a test. Data were collected on a seven-point Likert scale, where 1 indicates strong disagreement with the statement and 7 indicates strong agreement. The total number of surveys collected was 91, from factories in Al-Hasan Industrial Zone (total number distributed = 100). The survey included 3 items for measuring rate of adoption, 5 items for measuring relative advantage, 3 items for measuring compatibility, 3 items for measuring image, 2 items for measuring voluntariness, 2 items for measuring trialability, 2 items for measuring visibility, 4 items for measuring results demonstrability, and 4 items for measuring ease of use. Table 4 shows some descriptive statistics related to the constructs. Correlations between all variables depicted in the IDT model were also calculated; they are shown in Table 5. As shown in the matrix, all correlations were significant except two (with different levels of significance). All variables were then entered to calculate the regression coefficients between the rate of adoption and all variables. Results indicated that a significant correlation exists between the variables and the rate of adoption, where the coefficient of determination R2 = 27.6%, with a p value less than 0.001 (F8,82 = 5.285, p < 0.001). Results are shown in Table 6.
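The reported coefficient of determination can be reproduced from observed and model-fitted adoption ratings. A small sketch follows; the class is a hypothetical helper, and the study's actual SPSS-style computation is not shown in the chapter:

```java
/**
 * Coefficient of determination R^2 from observed and fitted values,
 * the quantity reported for the multiple regression in Table 6.
 */
public class RSquared {

    public static double rSquared(double[] y, double[] yHat) {
        double mean = 0;
        for (double v : y) mean += v;
        mean /= y.length;
        double ssRes = 0, ssTot = 0;   // residual and total sums of squares
        for (int i = 0; i < y.length; i++) {
            ssRes += (y[i] - yHat[i]) * (y[i] - yHat[i]);
            ssTot += (y[i] - mean) * (y[i] - mean);
        }
        return 1.0 - ssRes / ssTot;    // share of variance explained
    }
}
```

With the study's data, yHat would be the fitted values from the eight-predictor regression, giving R2 = 0.276 as reported.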

DISCUSSION OF RESULTS

This exploratory work tried to answer two major questions using multiple studies and methods. The first objective was to explore the extent to which PIS are used and adopted in manufacturing companies in Jordan. Through a descriptive survey, opinions were collected from managers of 74 factories in an industrial zone in Jordan. Results indicated that 66 factories (89%) used at least one system related to production and operations. The most popular systems used in Jordan were accounting information systems (60 factories, 81%), and the least used were computer-aided design systems (23 factories, 31%). Results also indicated that PIS were used in 45 factories (61%), and inventory and warehousing systems in 47 factories (63.5%). The results indicated a fair adoption rate for such systems. Part of the reason is the influence of partnerships with global firms and international organizations that outsource part of their production within Al-Hasan

Table 4. Descriptive statistics related to constructs in the IDT model

Variable | Number of Surveys | Min | Max | Mean | Standard Deviation
Rate of adoption | 91 | 1 | 7 | 4.762 | 1.775
Relative advantage | 91 | 2 | 7 | 5.777 | 1.123
Ease of use | 91 | 1 | 7 | 5.409 | 1.334
Image | 91 | 1 | 7 | 4.538 | 1.505
Compatibility | 91 | 2 | 7 | 4.597 | 1.362
Result demonstrability | 91 | 1 | 7 | 4.797 | 1.517
Visibility | 91 | 1 | 7 | 4.637 | 1.540
Trialability | 91 | 1 | 7 | 5.588 | 1.303
Voluntariness | 91 | 1 | 7 | 4.654 | 1.615



Table 5. Correlation matrix showing the IDT variables

 | RoA | RA | EoU | I | C | RD | V | T | Vol
Rate of adoption (RoA) | 1 | | | | | | | |
Relative advantage (RA) | .271** | 1 | | | | | | |
Ease of use (EoU) | .030 | .332** | 1 | | | | | |
Image (I) | .386** | .397** | .481** | 1 | | | | |
Compatibility (C) | .351** | .269* | .419** | .637** | 1 | | | |
Result demonstrability (RD) | .440** | .281** | .489** | .579** | .598** | 1 | | |
Visibility (V) | .310** | .256* | .367** | .465** | .578** | .495** | 1 | |
Trialability (T) | .289** | .244* | .301** | .292** | .460** | .337** | .635** | 1 |
Voluntariness (Vol) | .207** | .156 | .390** | .274** | .455** | .422** | .526** | .453** | 1

**. Correlation is significant at the 0.01 level (2-tailed). *. Correlation is significant at the 0.05 level (2-tailed).

Industrial Zone. It also seems that accounting and human resources systems were more popular because these are major functions for any firm regardless of its size of operations. This conclusion might be a result of another test we did on the size of firms with respect to their sales and number of employees. Use of supply chain management systems (SCMS) was not popular, as only 8 firms used an extended system (11%); the reason is the distinctiveness of the sample, as the sample in the first study came from an industrial zone, where international contracts are more common and a system closed to the local market is enforced in this free zone. Such a situation creates less need for an extended integrated system (SCMS). Finally, enterprise systems were not popular either, as only 20 factories indicated using such an integrated type of system (27%). When trying to explain the correlations between the total number of systems (a measure of usability in this study), the number of computers, the number of employees and the total sales, it seems obvious that the size of the firm is a

Table 6. Coefficients table for the multiple regression test

Variable | Beta | Std Error | Std Beta | t | Sig
Constant | 1.334 | 1.026 | | 1.301 | 0.197
Relative advantage | 0.246 | 0.158 | 0.156 | 1.556 | 0.124
Ease of use | -0.506 | 0.148 | -0.380 | -3.411 | 0.001
Image | 0.269 | 0.156 | 0.228 | 1.720 | 0.089
Compatibility | 0.016 | 0.178 | 0.013 | 0.092 | 0.927
Result demonstrability | 0.444 | 0.146 | 0.380 | 3.038 | 0.003
Visibility | -0.010 | 0.156 | -0.009 | -0.067 | 0.947
Trialability | 0.210 | 0.164 | 0.154 | 1.283 | 0.203
Voluntariness | 0.041 | 0.125 | 0.037 | 0.329 | 0.743

Dependent variable: Rate of adoption, method: enter



direct influencer (predictor) of the usability of IT. Larger firms have more employees and larger sales, and thus tend to utilize technology to improve operations and gain competitive advantage in the market. It is also logical to conclude that firm size is directly correlated with the complexity of operations, and thus firms adopt IT to better control operations and improve the flow of material and information. Finally, we can conclude that firms with higher sales have a greater tendency to invest in IT and thus buy more computers and adopt more types of systems. The second objective was to explore managers’ intention to adopt such systems. Results indicated a high intention to adopt PIS for two main reasons. The first was the high mean values of the predictors, which indicate managers’ positive perceptions of the adoption process: all means were above 4.5 out of 7, indicating high acceptance with respect to all variables used. The second reason is the highly significant bivariate correlations with the rate of adoption, though this also suggests that the method used and the large number of predictors were a limitation. The only variable with no significant correlation is ease of use, which supports the limitation of the method used. The highest correlation was between rate of adoption and results demonstrability (0.440**). On the other hand, when entered together, the set of variables competed for the variance, and only two variables showed significant prediction of the rate of adoption: results demonstrability (the ability to see tangible results from the system) and ease of use (where the complexity of the system is a major obstacle to using it). The IDT model explained 27.6% of the variance in the rate of adoption.
Results might have some limitations, as the IDT has 8 predictors competing for the variance in the rate of adoption, which might limit the ability to explain the dependent variable well. The regression method used entered all variables forcefully, and this might be the reason behind this surprising result. As this study is


an exploratory one, we can conclude that a larger sample size and a thorough conceptual analysis of the predictors would lead to better utilization of variables and more accurate results.

CONCLUSION

This paper aimed at exploring the status of using IT, and specifically PIS, in the area of production and manufacturing in Jordan. The study utilized two samples for two separate studies; the first was a sample of managers mainly related to IT in a group of factories in Al-Hasan Industrial Zone and other areas, mainly in the northern part of Jordan, to explore the usage of PIS and other types of systems in the industrial area. The second study utilized another sample (after four months and from a different set of factories), from the same area and from the northern part of the country, to test the IDT using an instrument translated from Moore and Benbasat’s (1991) work. Results indicated that systems like accounting information systems and HR and payroll systems were the most used among firms, and distribution and manufacturing aiding systems were the least used in the sample. The role of IS in the production area was highly appreciated, and a major conclusion is that the size of the firm predicts computer usage and the diversity of systems used. The second study resulted in high and significant indicators in predicting the adoption rate, and most of the constructs used in the IDT were significantly correlated with the rate of adoption. But when regressing all indicators on the rate of adoption, only two competed for the variance and yielded significant explanation of the variability of the dependent variable: results demonstrability and ease of use.
One of the limitations of this research, which limits its generalizability, is the usage of two separate samples. To relate the real usage of PIS to the adoption rate, the same sample should have been used. Still, the inferred results of this work are valid, but researchers are encouraged to use one sample and extend its size to improve statistical generalizability. The second limitation of this study is the instrument used; this study used an instrument translated from the original English one used in Moore and Benbasat (1991), and thus researchers are encouraged to refine the Arabic instrument to improve the language and to improve the content and face validity of the instrument. Finally, research related to PIS and the factors influencing the adoption of such systems is not highly popular, which resulted in high competition between variables. The IDT needs a larger sample, or some of the variables should be dropped on conceptual grounds. When exploring systems like ERP or PIS, which are considered complicated and comprehensive, one needs to keep relative advantage and ease of use, but further exploration needs to be done to reduce the scale size and improve the predictability of the model.

FUTURE RESEARCH DIRECTIONS

Research in this area is needed, and this work is considered a first step in validating the instrument and testing factors influencing the rate of adoption. It is highly important to continue such research using longitudinal settings to explore the adoption process and check the validity of the results. Future research is needed to validate the instrument and apply it to more settings and environments. Another needed direction is the multi-stage process applied by Moore and Benbasat (1991), where the adoption rate is investigated over time and a better conceptual perspective is reached through the reduction of variables. One idea is to compare the predictability of other models with the IDT, such as the Technology Acceptance Model (TAM), the Theory of Reasoned Action (TRA), the Theory of Planned Behavior (TPB) and its extension, the Decomposed Theory of Planned Behavior (DTPB). As we now know the situation of PIS usability in industrial zones, would that knowledge facilitate better research in other environments, such as local industrial areas and other major factories in Jordan? Also, would exploring other types of systems lead to a different conclusion? Finally, results indicated a weakness in utilizing computer-aided design systems; future research can explore the reasons behind such a phenomenon and whether it is related to the industrial development of the sector in general or to the global partnerships with local factories specifically in the industrial zone.

REFERENCES

Agarwal, R. (2000). Individual acceptance of information technologies. In Zmud, R. (Ed.), Framing the domains of IT management (pp. 85–104). Cincinnati, OH: Pinnaflex Education Resources, Inc.

Agarwal, R., & Prasad, J. (1998). A conceptual and operational definition of personal innovativeness in the domain of information technology. Information Systems Research, 9(2), 204–215. doi:10.1287/isre.9.2.204

Brancheau, J. C., & Wetherbe, J. C. (1990). The adoption of spreadsheet software: Testing innovation diffusion theory in the context of end-user computing. Information Systems Research, 1(2), 115–143. doi:10.1287/isre.1.2.115

Carton, F., & Adam, F. (2008). ERP and functional fit: How integrated systems fail to provide improved control. The Electronic Journal of Information Systems Evaluation, 11(2), 51–60. Retrieved from http://www.ejise.com


Production Information Systems Usability in Jordan

Ciurana, J., Garcia-Romeu, M., Ferrer, I., & Casadesus, M. (2008). A model for integrating process planning and production planning and control in machining processes. Robotics and Computer-Integrated Manufacturing, 24, 532–544. doi:10.1016/j.rcim.2007.07.013

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. Management Information Systems Quarterly, 13(3), 319–340. doi:10.2307/249008

Deloitte Consulting. (1999). ERP's second wave [Report]. Deloitte Consulting. Retrieved from http://www.deloitte.com

DeLone, W., & McLean, E. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60–95. doi:10.1287/isre.3.1.60

Department of Statistics, Jordan. (2008). Statistics related to the Jordanian industrial sector. Retrieved from http://www.dos.gov.jo/dos_home_a/gpd.htm

Ende, J., Jaspers, F., & Gerwin, D. (2008). Involvement of system firms in the development of complementary products: The influence of novelty. Technovation.

Fan, J., & Fang, K. (2006). ERP implementation and information systems success: A test of DeLone and McLean's model. In PICMET 2006 Conference Proceedings, July 2006, Turkey (pp. 9–13).

Fichman, R. G., & Kemerer, C. F. (1999). The illusory diffusion of innovation: An examination of the assimilation gaps. Information Systems Research, 10(3), 255–275. doi:10.1287/isre.10.3.255

Fitzgerald, L., & Kiel, G. (2001). Applying a consumer acceptance of technology model to examine adoption of online purchasing. Retrieved February 2004, from http://130.195.95.71:8081/WWW/ANZMAC2001/anzmac/AUTHORS/pdfs/Fitzgerald1


Gnaim, K. (2005). Innovation is one of quality aspects. Quality in Higher Education, 1(2).

Gupta, S., & Keswani, B. (2008). Exploring the factors that influence user resistance to the implementation of ERP. Hyderabad, India: The ICFAI University Press.

Hardgrave, B. C., Davis, F. D., & Riemenschneider, C. K. (2003). Investigating determinants of software developers to follow methodologies. Journal of Management Information Systems, 20(1), 123–151.

Hssain, A., Djeraba, C., & Descotes-Genon, B. (1993). Production information systems design. In Proceedings of the International Conference on Industrial Engineering and Production Management (IEPM33), Mons, Belgium, June 1993.

Hsu, C., & Rattner, L. (1990). Information modeling for computerized manufacturing. IEEE Transactions on Systems, 20(4).

Hunton, J., Lippincott, B., & Reck, J. (2003). Enterprise resource planning systems: Comparing firm performance of adopters and nonadopters. International Journal of Accounting Information Systems, 4, 165–184. doi:10.1016/S1467-0895(03)00008-3

Jordan Industrial Cities. (2008). Statistics from the website of the JIC. Retrieved from http://www.jci.org.jo

Lo, C., Tsai, C., & Li, R. (2005, January). A case study of ERP implementation for opto-electronics industry. International Journal of the Computer, the Internet and Management, 13(1), 13–30.

Lu, K., & Sy, C. (2008). A real-time decision-making of maintenance using fuzzy agent. Expert Systems with Applications.

McCrea, B. (2008, November). ERP: Gaining momentum. Logistics Management, 44–46.


Microsoft. (2003). Microsoft business solutions. Retrieved from http://www.microsoft.com

Mirchandani, D. A., & Motwani, J. (2001). Understanding small business electronic commerce adoption: An empirical analysis. Journal of Computer Information Systems, 41(3), 70–73.

Moore, G., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192–222. doi:10.1287/isre.2.3.192

Mourtzis, D., Papakostas, N., Makris, S., Xanthakis, V., & Chryssolouris. (2008). Supply chain modeling and control for producing highly customized products. Manufacturing Technology Journal.

Plouffe, C., Hulland, J., & Vandenbosch, M. (2001). Research report: Richness versus parsimony in modeling technology adoption decisions: Understanding merchant adoption of a smart card-based payment system. Information Systems Research, 12(2), 208–222. doi:10.1287/isre.12.2.208.9697

SSA Global & IBM. (2006). SSA ERP on SOA platform [Report]. Retrieved from http://www.ssaglobal.com

Rogers, E. M. (1983). Diffusion of innovations. New York: Free Press.

Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.

SAP. (2006). SAP customer success story. Retrieved from http://www.sap.com

Singla, A. (2005). Impact of ERP systems on small and mid sized public sector enterprises. Journal of Theoretical and Applied Information Technology, 119–131.

Smadi, S. (2001). Employees' attitudes towards the implementation of the Japanese model Kaizen for performance improvement and meeting competitive challenges in the third millennium: The Jordanian private industrial sector. Abhath Al-Yarmouk, 313–335.

Smith, F. O. (2008, May). Oracle says it will leapfrog competitors in manufacturing intelligence. Manufacturing Business Technology, 26–29.

Speier, C., & Venkatesh, V. (2002). The hidden minefields in the adoption of sales force automation technologies. Journal of Marketing, 66(3), 98–111. doi:10.1509/jmkg.66.3.98.18510

Theodorou, P., & Giannoula, F. (2008). Manufacturing strategies and financial performance: The effect of advanced information technology: CAD/CAM systems. The International Journal of Management Science, 36, 107–121.

Trari, A. (2008). Yarmouk University Library [in Arabic]. Retrieved from http://library.yu.edu.jo/

Tsai, W., & Hung, S. (2008). E-commerce implementation: An empirical study of the performance of enterprise resource planning systems using the organizational learning model. International Journal of Management, 25(2).

Turban, E., Leidner, D., McLean, E., & Wetherbe, J. (2008). Information technology for management (6th ed.). Hoboken, NJ: John Wiley.

Wang, T., & Hu, J. (2008). An inventory control system for products with optional components under service level and budget constraints. European Journal of Operational Research, 189, 41–58. doi:10.1016/j.ejor.2007.05.025

Wang, W., Hsieh, J., Butler, J., & Hsu, S. (2008). Innovative complex information technologies: A theoretical model and empirical examination. Journal of Computer Information Systems, (Fall), 27–36.

This work was previously published in Enterprise Information Systems Design, Implementation and Management: Organizational Applications, edited by Maria Manuela Cruz-Cunha and Joao Varajao, pp. 270-286, copyright 2011 by Information Science Reference (an imprint of IGI Global).


Chapter 54

Research into the Path Evolution of Manufacturing in the Transitional Period in Mainland China

Tao Chen, SanJiang University, China; Nanjing Normal University, China; & Harbin Institute of Technology, China

Li Kang, SanJiang University, China; & Nanjing Normal University, China

Zhengfeng Ma, Nanjing Normal University, China

Zhiming Zhu, Hohai University, China

ABSTRACT

Manufacturing transition is an important part of industrial upgrading. At present, Chinese scholars study the problem of manufacturing chiefly from two perspectives. The first is to discuss the status quo of Chinese manufacturing from the perspective of industrial competitiveness, with countermeasures put forward for manufacturing upgrading. The second is to discuss the upgrading of manufacturing directly from the perspective of the global value chain, with the following proposal put forward: Chinese manufacturing upgrading should stretch from the low end to both ends of the value chain. In addition, discussions have also been made of the role of producer services in promoting manufacturing and the role of governmental regulation in upgrading manufacturing. Although both perspectives are rational, they share a defect: each is based on the hypothesis that the institutional environment in which manufacturing lies is stationary, and manufacturing is considered and measured with institutions treated as exogenous variables, so the impact of the institutional environment on manufacturing upgrading is overlooked. Based on a review of previous literature, this chapter analyzes and discusses the path evolution of manufacturing in the transitional period in mainland China.

DOI: 10.4018/978-1-4666-1945-6.ch054

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


1 INTRODUCTION

Industry, especially manufacturing, is the foundation and pillar of the national economy. For most developed and developing countries, the leading role and fundamental function of manufacturing cannot be replaced by those of any other industrial sector. Since reform and opening-up began over 30 years ago, and especially since China joined the WTO in 2001, the Chinese economy has been developing very rapidly, along with the continuous rise of its economic aggregate. This is closely linked with the swift development of manufacturing; it may be said that manufacturing forms the backbone of the Chinese economy. The same trend has appeared in other countries: the governments of many Western countries have again brought forward plans for "reindustrialization", i.e. paying renewed attention to the important contribution of manufacturing to economic growth. Therefore, China cannot develop its economy without the development of manufacturing; instead, we must pay attention to the role and function of manufacturing. At present, Chinese manufacturing is developing very rapidly. In the past ten years, both the volume and value of production of Chinese industry have been growing rapidly. Calculated at a constant price, the annual average growth of the total production value of Chinese manufacturing from 1995 to 2003 was 14.53%. By 2003, the total production value of manufacturing had reached about 12.27 trillion yuan. According to the calculations of the UN Statistics Division and the UN Industrial Development Organization, the annual average growth rate of Chinese manufacturing from 1998 to 2003 reached as high as 9.4%, while the same figure for developing countries in the same period was only 4.4%. The total export volume of Chinese manufactured goods divided by the number of employees in manufacturing rose from 1,763.92 US dollars in 1995 to 9,570.09 US dollars in 2004; the annual average growth rate from 1998 to 2001 was 18.30%. Since China joined

WTO, the growth rate of exports has been increasing even more rapidly: the annual average growth rate from 2002 to 2004 reached 22.50% (Jin et al., 2007). In 2008, the number of manufacturing enterprises in China was 396,950, with an added value of 44,135.836 billion yuan, total assets of 32,340.308 billion yuan, and 77,315.7 thousand employees. The number of manufacturing enterprises, total production value, total assets and number of employees in 2008 grew by 174.90%, 497.13%, 243.69% and 67.37% respectively over 2000. Among them, the total production value had the biggest growth: nearly 500% (Lin, 2010). However, behind the rapid development of Chinese manufacturing lies a series of problems yet to be solved in an effective way. First, the per capita added value of Chinese manufacturing is far lower than the world average. Calculated at constant 2000 prices, the per capita added value of Chinese manufacturing in 2006 was 610 US dollars, lower than that of the developing regions of West Asia and Europe and of Latin America and the Caribbean, and merely equivalent to 13.5% of that of industrialized countries (see Table 1) (Li et al., 2009). Second, the regional distribution and industrial structure of Chinese manufacturing are seriously imbalanced. From the perspective of regional distribution, a huge gap exists in the distribution of manufacturing among eastern, central and western China. The total production value, added value, total assets and number of employees of manufacturing in eastern China account for 73.68%, 68.85%, 70.14% and 72.50% of the national totals respectively; the corresponding figures in central China are 16.23%, 18.86%, 16.91% and 16.75%; in western China they are 10.10%, 12.29%, 12.95% and 10.75%. From the perspective of industrial structure, the distribution of manufacturing is also imbalanced.
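The annual average growth rates quoted here are compound rates, so they can be recovered from two endpoint values. A minimal sketch of the arithmetic, using the per-employee export figures from the text (the chapter's own averages were computed over sub-periods and may use a different method):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1 / years) - 1

# Export value of manufactured goods per manufacturing employee (US dollars),
# as quoted in the text: 1,763.92 in 1995 rising to 9,570.09 in 2004.
rate = cagr(1763.92, 9570.09, 2004 - 1995)
print(f"Implied average annual growth, 1995-2004: {rate:.2%}")
```

Over the full 1995–2004 span this works out to roughly 20.7% per year; the 18.30% and 22.50% figures in the text refer to the 1998–2001 and 2002–2004 sub-periods respectively.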
Nearly 66% of the added value of manufacturing in 2008 was concentrated in ten industrial sectors, including ferrous metal smelting and communication equipment and computers (Lin,



Table 1. International comparison of the per capita added value of manufacturing (constant 2000 prices, US dollars)

Country group                               1991   1994   1995   1998   2000   2006
Industrialized countries other than CIS     3573   3614   3730   3996   4291   4509
African countries south of the Sahara         29     26     26     27     27     30
East Asia and South Asia                     128    151    164    166    199    267
Latin America and the Caribbean              664    694    680    736    739    792
West Asia and Europe                         431    448    460    518    535    664
CIS countries                                422    227    216    198    239    369
North Africa                                 164    162    167    183    195    207
China                                        167    233    254    313    366    610

Note: The data of China include those of Taiwan Province and Hong Kong Special Administrative Region, but exclude those of Macao Special Administrative Region. Data source: UN Industrial Development Organization (UNIDO)

2010). This imbalance will lead to a huge waste of human, material and financial resources in eastern, central and western China, especially in central and western China, and to the ineffective utilization of resources, thus setting back industrial development and economic progress. Third, the added value of Chinese manufacturing makes up a relatively high proportion of China's GDP. From 2003 to 2007, the proportion of the added value of Chinese manufacturing in China's GDP did not change much, always remaining between 34% and 40%. This proportion was not only far higher than that of such developed countries as the USA, Japan, Germany, the UK and France, but also far higher than that of such developing countries as Brazil, India and Mexico. Although some research indicates that China is now in the mature period of new industrialization (Li et al., 2009), this relatively high index not only reflects the characteristics of the industrial structure of China in the middle stage of industrialization, but also indicates that there possibly exists a problem of disproportion to some extent in the Chinese economic structure (Jin et al., 2007). We should not only develop our economy, but also upgrade our manufacturing. We can solve the


problems existing in Chinese manufacturing only by transforming manufacturing from the low end to the high end, and from resource wastage and environmental pollution to resource conservation and environmental protection, through a series of practical measures. Thus the sustainable, rapid and healthy development of the Chinese economy in the future will be closely linked with the upgrading of manufacturing.

2 THE INTERNATIONAL COMPETITIVENESS OF CHINESE MANUFACTURING

Of all the fundamental theories of industrial international competitiveness put forward so far, the most influential are the theory of comparative advantage (Ricardo, 1817) and the theory of competitive advantage (Porter, 1990; Liu et al., 2006). In Paul Krugman's International Economics (the most widely distributed and authoritative textbook in this field), "comparative advantage" is defined as follows: "If the opportunity cost for producing a certain product in a country is lower than in other countries, then


this country has a comparative advantage in producing this product". Therefore, the fundamental principle concerning comparative advantage and international trade is: "If a country exports commodities in which it has comparative advantage to another country, then both countries can benefit from the trade between them". In contrast, according to the theory of the competitive advantage of nations put forward by Professor Porter, competition among countries in a market economy in fact goes on among enterprises, and enterprises with international competitive advantage are concentrated in only a limited number of industrial sectors. Therefore, industrial sectors should be used as the basic units for studying national competitive advantage. The competitive advantage of enterprises not only comes from the enterprises themselves, but also originates from the microeconomic foundations on which their development relies: a diamond system consisting of factor conditions, demand conditions, the competitive context of corporate strategy and structure, and related supporting industrial sectors. Therefore, attention to national competitive advantage should be focused on the cultivation of these microeconomic foundations (Liu et al., 2006). As regards the index-based appraisal of industrial competitiveness, the industrial competitiveness of a country can be appraised by using

more than one index. As indicated by relevant research, we can derive somewhat different judgments if we appraise the competitiveness of Chinese manufacturing and its subsectors using different indexes. Some indexes reveal that the international competitiveness of a certain industrial sector of China is rising, while other indexes reveal that the international competitiveness of the same sector is declining. As a matter of fact, this phenomenon often occurs when we observe a complicated thing from different perspectives, or when the same complicated thing manifests itself in different ways in different aspects. Jin et al. (2007) combine many indexes into one to express the trend of change of the industrial competitiveness of China in a comprehensive way. Through their research, they worked out the composite index of the international competitiveness of Chinese manufacturing (see Table 2). By comparing Chinese manufacturing with American and Japanese manufacturing, they formed the opinion that the competitiveness of Chinese manufacturing has been rising continuously. From the perspective of its subsectors, the competitiveness of Chinese manufacturing in "other products" in Table 2 is the strongest. Office products and telecommunication equipment, whose competitiveness has exceeded

Table 2. The composite index of the international competitiveness of Chinese manufacturing

Sector                                                Comparative advantage   Competitive advantage   Composite index
Manufacturing                                                         102.8                   132.1             117.5
1. Iron and steel                                                     210.0                   172.5             191.3
2. Chemical finished products and relevant products                    88.2                   115.7             102.0
3. Other semi-finished products                                        98.1                   130.5             114.3
4. Mechanical and transportation equipment                            119.8                   164.1             142.0
   Including: Office products and electronic products                 134.9                   190.9             162.9
5. Textiles                                                            93.5                   132.8             113.2
6. Apparel                                                             83.2                   113.5              98.4
7. Other products                                                      89.2                   108.1              98.7
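The composite index in Table 2 appears to be the simple arithmetic mean of the two component indexes, e.g. (102.8 + 132.1) / 2 = 117.45, which rounds to the published 117.5; the excerpt does not state the aggregation rule, so this averaging rule is an inference from the figures. A minimal sketch under that assumption:

```python
# Rows from Table 2: (comparative advantage, competitive advantage, published composite).
# Assumption (inferred, not stated in the source): composite = mean of the two components.
table2 = [
    ("Manufacturing", 102.8, 132.1, 117.5),
    ("1. Iron and steel", 210.0, 172.5, 191.3),
    ("5. Textiles", 93.5, 132.8, 113.2),
]

for sector, comparative, competitive, published in table2:
    composite = (comparative + competitive) / 2
    # Each published composite matches the mean to within rounding.
    assert abs(composite - published) <= 0.06, sector
    print(f"{sector}: computed {composite:.2f}, published {published}")
```

The same check holds for every row of the table, which suggests the composite is an unweighted average of the comparative-advantage and competitive-advantage indexes.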



that of textiles, are called "the second most competitive industrial sector of manufacturing". Transportation equipment (especially automobiles), chemical products, integrated circuits and components, and power machinery and equipment have the weakest competitiveness in China, and their competitiveness is still shrinking continuously. Compared with the EU, China has the most obvious disadvantage in chemical products (especially medicines), transportation equipment (especially automobiles) and power machinery and equipment. In contrast, office and telecommunication equipment, other products (especially personal and household goods), apparel and textiles enjoy a relatively high competitive advantage. Finally they reached the following conclusion: the present elevation of the international competitiveness of Chinese industrial enterprises is, to a very large extent, not determined by the industrial enterprises themselves; instead, it depends on the development of Chinese finance and mass media. In our opinion, there are two important conditions for guaranteeing the enhancement of the international competitiveness of Chinese manufacturing: the first is that Chinese manufacturing enterprises must be able to secure financial support from Chinese financial enterprises all over the world; the second is that China must have some media (including newspapers, TV and radio) that are influential among mainstream audiences in foreign countries (especially European countries and the USA). Therefore, while assisting Chinese industrial enterprises in "going abroad", the Chinese government must consider how to assist Chinese financial institutions (especially banks) and media (especially newspapers) in "going abroad". In addition, Chen et al. (2009) studied the international competitiveness of Chinese and American manufacturing through empirical analysis on the basis of a hierarchy-based view of industrial competitiveness.
In their opinion, industrial competitiveness consists of four hierarchies, which are in turn (from bottom to top): the source of competitiveness—industrial environment; the


essence of competitiveness—productivity; the performance of competitiveness—market share; and the result of competitiveness—industrial profitability. These four hierarchies are logically interlinked in a cycle. The ultimate goal of industrial competitiveness is to generate profit; to generate profit, an industry must first demonstrate stronger competitiveness than other countries in trade; the foundation for enhancing trade competitiveness is to enhance the productivity of the manufacturing sector; to enhance productivity, investment is needed in the construction of soft and hard environments, including technical innovation, advanced equipment and education & training; and the capital for investment in environments relies in turn on industrial profit. Only by investing profits (the highest hierarchy of competitiveness) in environments (the first hierarchy) can an industry enter a new round of the cycle of productivity and market share, and acquire continuously reinforced market competitiveness. On this basis, they analyzed the international competitiveness of 30 types of manufacturing in China, and derived an inconsistent conclusion: measuring industrial competitiveness with the index of profitability and the index of productivity yields a very high goodness of fit, which proves that industries with higher productivity also have higher profitability in the Chinese domestic market. However, the ranking based on the index of profitability and the index of productivity is quite different from the ranking based on the index of market share: the Gamma coefficient is negative, which shows that industrial sectors with higher profitability and higher productivity do not necessarily have a bigger global market share, and vice versa.
This proves that the first hierarchy and explanatory variable of industrial competitiveness, the factor of industrial environment, exerts a very great influence upon the status of competitiveness; industrial structure, industrial policy, trade policy, the institutional environment, etc. can bring about differences among the


three competitiveness indexes. In American manufacturing, by contrast, not only are the profitability and productivity indexes interrelated, but the market-share and productivity indexes are also somewhat interrelated; namely, the rankings produced by these three competitiveness indexes do not differ greatly. This proves that American manufacturing has a more mature development environment than Chinese manufacturing, so production efficiency can be smoothly transformed into market share and profit in the USA. From this we can see the importance of environments, namely, the important influence of factors such as institutions upon the upgrading of manufacturing.
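The Gamma coefficient used in this comparison of rankings is a rank-association statistic: it counts concordant versus discordant pairs between two orderings, and a negative value means the orderings tend to disagree. A minimal sketch with hypothetical sector ranks, not the study's data:

```python
from itertools import combinations

def goodman_kruskal_gamma(x, y):
    """Goodman-Kruskal gamma: (concordant - discordant) / (concordant + discordant)."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
        # pairs tied on either variable contribute to neither count
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical ranks of five sectors: by productivity and by global market share.
productivity_rank = [1, 2, 3, 4, 5]
market_share_rank = [4, 5, 3, 1, 2]
print(goodman_kruskal_gamma(productivity_rank, market_share_rank))  # negative: rankings disagree
```

A gamma of +1 means the two rankings agree perfectly, -1 that they are fully reversed; the negative value reported by Chen et al. (2009) indicates that high-productivity sectors tended to rank low on market share.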

3 THE UPGRADING OF MANUFACTURING FROM THE PERSPECTIVE OF GLOBAL VALUE CHAIN

Discussing the upgrading of manufacturing from the perspective of the global value chain is the mainstream of contemporary research into industrial upgrading. As the internationalization of the contemporary economy intensifies, features different from those of previous tides of globalization have emerged in production, trade, investment, turnover, means of organization, and so on. As is well known, division of labor is the source of economic growth, while industrial transfer is an important means to realize the spatial division of labor. As the international division of labor has intensified from among industries, to among different products within each industrial sector, and then to the different working procedures for each product, international industrial transfer has likewise evolved from spatial transition among different industries, to transition among different products, and then to transition among the different working procedures of each product. The intensification of the division of

labor and industrial transfer are the key features distinguishing this round of globalization from previous rounds. On the other hand, the global value chain is the leading force boosting the intensification of the division of labor and coordinating industrial transfer. This force has not only changed the microscopic foundation of globalization, but also produced a revolutionary effect upon competition models and development strategies. As regards the theory of the global value chain, Porter (1985) put forward the theory of the value chain from the perspective of enterprise competition strategy. This theory is a framework for analyzing enterprise activities under conditions of international competition. Its core claim is that the value created by enterprises in fact originates from specific value activities on a relevant value chain, and that grasping the "strategic link" is the key to controlling the entire value chain and the relevant industrial sector. In the 1990s, scholars (Gereffi & Korzeniewicz, 1994; Gereffi, 1999) put forward the theoretical framework of the Global Commodity Chain. This theory directly related the value-added chain to global industrial organization, and on that basis made a comparative study of commodity chains driven by producers and by purchasers. To eliminate the limitation of the word "commodity", and to highlight the importance of the creation of relative value by enterprises and the acquisition of value on the chain, Gereffi and numerous researchers in this field agreed at the beginning of the 21st century to replace "global commodity chain" with the term "Global Value Chain (GVC)". The classification of global value chains is basically based on the dichotomy under the framework of the global commodity chain, namely the producer-driven value chain and the purchaser-driven value chain put forward by Gereffi. The producer-driven value chain means boosting market demand through producers' investment, thus forming a system of vertical division of labor of local production
The producer-driven value chain means boosting market demand through producers’ investment, thus forming a system of vertical division of labor of local production



supply chain. Under this value chain, investors include not only transnational companies with technical advantages seeking market expansion, but also national governments trying to boost local economic development and set up an independent industrial system. The purchaser-driven value chain means that large purchaser organizations with strong brand advantages or sales channels coordinate and control the production, design and marketing activities aimed at the target market. This value chain is characterized by labor intensity and is typified by consumer goods (e.g. apparel, shoes, toys, consumer electronics, etc.). Another important element of the theory of the global value chain concerns the governance model of the value chain. So far, no uniform conclusion has been reached in academic circles on the classification of the governance models of the global value chain. Humphrey and Schmitz (2002) distinguished the following four governance models on the basis of differences in organizational coordination and power distribution: the market-based model; the model based on an equilibrium network; the capture-based model; and the hierarchy-based model. Enterprises in developing countries generally enter the assembly and manufacturing link of capture-based GVCs as subcontractors by relying on cheap labor. The high competition and low income brought by low entry barriers put subcontracting enterprises under huge pressure to upgrade.
Through an empirical analysis of the world textile and apparel industry, Gereffi (1999) worked out the sequential upgrading model under GVC, namely technical upgrading, then product upgrading, then functional upgrading, then chain upgrading, and optimistically held that subcontracting enterprises in developing countries can smoothly realize this sequential upgrading by joining GVCs and accepting support from leading enterprises in developed countries in such aspects as technical diffusion, employee training and equipment


introduction. With this sequential upgrading, the performance of subcontracting enterprises, namely the quantity of value they create and acquire, also increases gradually. Gereffi's analysis has two problems. First, as indicated by much practice in developing countries, the above-mentioned model of sequential upgrading cannot be realized automatically (Humphrey & Schmitz, 2002). Moreover, upgrading will change the balance of power and the structure of income distribution in the GVC; therefore, the upgrading of subcontracting enterprises is subject to suppression from leading enterprises, and the size of the upgrading barrier depends on the governance model of the GVC. Second, upgrading does not necessarily mean the enhancement of the performance of subcontracting enterprises. In a capture-based GVC, the upgrading of subcontracting enterprises is a kind of passive upgrading aimed at obeying the global strategies of leading enterprises. By continuously searching for and supporting new subcontractors and intensifying competition, leading enterprises capture the newly added value created by the upgrading of subcontracting enterprises. In contrast, in market-based and network-based governance with more equal power relations, enterprise upgrading is a kind of active upgrading that adapts to competition and seeks profit, and the enterprises can accordingly acquire the benefits brought by upgrading. Drawing on the research of Liu (2007), and in connection with the practice of developing countries, Zhuo (2009) worked out four matching models of governance, upgrading and enterprise performance. (1) Market-based governance: independent and slow sequential upgrading; slow enhancement of performance. Under this governance model, the transaction targets are mature standardized products; the division of labor and transactions among enterprises are based on market contracts conducted "at arm's length"; therefore, enterprise upgrading is a kind of endogenous, independent upgrading based on competency. This upgrading is free from the control and hindrance of other enterprises, but it is relatively slow, so the enhancement of
Under this governance model, the transaction target is mature standardized products; the division of labor and transaction among enterprises are based on market contracts marked by “at arm’s length”; therefore, enterprise upgrading is a kind of endogenous and independent upgrading based on competency. This upgrading is free from the control and hindrance of other enterprises, but it is relatively slow, so the enhancement of

Research into the Path Evolution of Manufacturing in the Transitional Period in Mainland China

enterprise performance is also gradual. (2) Governance based on an equilibrium network: independent and fast sequential upgrading, leading to fast enhancement of performance. Governance based on an equilibrium network is a method of coordinating division of labor and transactions based on complementary competencies, the sharing of knowledge and technology, and relative equality; it does not involve a relationship of controlling and being controlled, so enterprises can independently carry out upgrading in various forms. In this kind of network, the partial innovation required by a high degree of division of labor greatly reduces the investment required for upgrading and mitigates investment risks, while the sharing of competencies, technology and knowledge accelerates the realization of upgrading and the enhancement of performance. (3) Capture-based governance: rapid but passive technical and product upgrading, making it difficult to enhance performance, which may even decline. In the capture-based GVC, in order to guarantee the diversity of products, the timeliness of supply and the reliability of product quality, transnational companies of developed countries have to assist subcontracting enterprises in accelerating technical and product upgrading. At the same time, by such means as patent pools, strategic isolation, brand reinforcement and retail-market mergers, they endeavor to raise the entry barriers to high value-added links, including design, R&D and marketing, and to slow down the functional and chain upgrading of subcontracting enterprises, so as to prevent their own core competencies and income from being eroded.
Under such circumstances, it is generally difficult for subcontracting enterprises to realize sequential upgrading, and the income created by rapid technical and product upgrading is also captured by transnational companies as competition intensifies in the assembly and manufacturing link. (4) Hierarchy-based governance: rapid but passive technical upgrading and slow product upgrading, making it difficult to enhance performance greatly. The hierarchy-based governance model is a method of production organization and coordination under which transnational companies from developed countries, in order to reduce costs and occupy markets, establish joint ventures through FDI in other countries and control and operate these enterprises by relying on such core competencies as ownership and R&D design. In the hierarchy-based GVC, because joint ventures can directly obtain the technology, brands, capital and equipment of transnational companies, they can realize technical upgrading rapidly and make their products meet the uniform global quality standards of transnational companies. Product upgrading, however, is relatively slow because it depends on the market development and competitive status of the countries receiving the investment, and functional and chain upgrading are strictly controlled by transnational companies. It is also difficult to enhance the performance of joint ventures greatly, because transnational companies squeeze their profit margins by charging high fees for technology transfer, key parts and components, and brand licensing. Taking the Yangtze River Delta as their research subject, Liu et al. (2009) analyzed the disadvantages of Chinese manufacturing upgrading under the GVC. In their opinion, merging into the GVC has weakened the relationships among industrial departments in different regions of China and exerted an unfavorable influence on the integrated development of the regional real economy. The technology transfer and technology spillover embedded in outsourcing and subcontracting activities have an obvious "breakpoint" and "isolation" effect on the industrial development of developing countries, and the imbalance and asymmetry of GVC income distribution make capital accumulation difficult.
In the "captured" value chain, owing to the change of the global competition environment, the continuous entry of suppliers and the competition effect of cheap commodities, the foundation of some formerly obvious advantages acquired by relying on large international purchasers has already been seriously eroded and replaced by increasingly obvious defects and shortcomings, which seriously threatens the industrial upgrading process of developing countries. Under the GVC, when enterprises of developing countries acting as international subcontractors try to build global brands and independently establish sales-terminal channels at home and abroad, they meet the "positional block" of transnational companies that control core technical patents and product standard systems, and of large international purchasers that control the sales terminals and brands of the international demand market. Through so-called "learning by exporting", therefore, they can at most learn the skills of product and technical upgrading, not the real core technology and functional upgrading skills. For this reason, Liu et al. held that the export-oriented development strategy based on the GVC made a huge contribution to economic takeoff in the early stage of the Yangtze River Delta's economic development, but that this strategy alone cannot make the Yangtze River Delta a base of advanced manufacturing or shift it from manufacturing to creation. On the basis of attaching equal importance to international and domestic markets, China should integrate the industrial linkages and circulation system on which Chinese enterprises rely for survival and development, shape the governance structure of the domestic value chain, and adjust the relationships among Chinese industries located in different regions, so as to lay a solid platform for the manufacturing upgrading of the Yangtze River Delta and the integrated development of the regional economy.


4 THE UPGRADING OF MANUFACTURING FROM OTHER PERSPECTIVES

Promoting the upgrading of manufacturing by developing producer services is an indirect upgrading method. Modern producer services are industrial sectors offering services directly to productive or commercial activities as intermediaries, including finance, insurance, accounting, R&D design, law, technical and management consultancy, transportation, telecommunication, modern logistics, advertising, marketing, branding, personnel, administration and property management. Marked by high knowledge and intelligence intensity, growth, employment and influence, producer services are derived from the matrix of manufacturing and therefore have a natural, intrinsic industrial linkage and interaction with it. As indicated by the research of Park and Chan (1989), the development of manufacturing can bring about that of producer services, which can in turn promote the upgrading of manufacturing; there is an obvious positive correlation between the two. Arguing from the opposite direction, Farrell and Hitchens (1990) pointed out that a lack of producer services, or producer services with inadequate price competitiveness, sets back the efficiency, competitiveness and operation of local manufacturing, thus undermining the region's development process. Other scholars, including Zhi (2001) and Zhou (2003), analyzed the issue from the perspective of industrial amalgamation: with the continuous advancement of the information technology revolution, the traditional boundary between services and manufacturing becomes ever vaguer, and the two tend to develop in an interactive, amalgamated way. Liu (2008) also held that the development of producer services can reduce the set-up costs of manufacturing enterprises, thus helping them form their core competitiveness and their interaction through industrial linkages.


5 CONCLUSION

There is a large body of research by experts and scholars on the upgrading of manufacturing. Studies approaching the upgrading of manufacturing from such perspectives as industrial competitiveness, the global value chain and "the indirect method of upgrading" each have their own theoretical bases and practical significance, and each has made its due contribution. However, the existing literature neglects to study the upgrading of manufacturing from the perspective of entire national institutions, a macroscopic perspective that is admittedly difficult to grasp. From the emergence of institutional economics to the gradual refinement of new institutional economics, we should stop treating institutions as hypothetical exogenous variables, turn these exogenous variables into endogenous ones, and study practical problems from this perspective. Useful insights can be drawn from transaction costs and institutional history, and from both the state theory and the enterprise theory of new institutional economics, so that the upgrading of manufacturing can be studied and practiced in a better way. Economic development cannot proceed without institutions, because economic development is promoted by people, and people promote it by devising institutions; economic development, an objective fact, is thus promoted subjectively by people, which makes it important to study this subjective representation and what lies behind it. In Chinese industrial upgrading, for example, government promotion plays a very great role, yet in decision making the government does not always have a thorough understanding of industrial upgrading. A major misunderstanding has arisen in current industrial upgrading: many of us believe that industrial upgrading can be realized through a transition from the low-level segment of one industrial sector to the low-level segment of another. As a result, we keep providing processing sites to large transnational companies of developed countries under economic globalization. Understanding this simple transfer from the low end of one industrial sector to the low end of another as industrial upgrading is, in fact, very unreasonable. How to regulate this understanding and the behavior of the government from the institutional perspective, and how to measure the opportunity cost of this kind of upgrading from the perspective of transaction costs, will therefore be an indispensable part of our future research.

ACKNOWLEDGMENT

I would like to express my sincere gratitude for the financial aid of the Natural Science Foundation of China (grant numbers 70971031 and 71031003), Sanjiang University (project number K08010), and the innovation project for graduate students of the Business School of Nanjing Normal University (project number 10CX_003G).

REFERENCES

Chen, L. M., Wang, X., & Rao, S. Y. (2009). The comparison between the international competitiveness of Chinese and American manufacturing: Empirical analysis based on the hierarchy opinion of industrial competitiveness. China Industrial Economy, 6, 57–66.

Farrell, P. N., & Hitchens, D. M. W. N. (1990). Producer services and regional development: A review of some major conceptual policy and research issues. Environment & Planning A, 22, 1141–1154. doi:10.1068/a221141


Gereffi, G. (1999). International trade and industrial upgrading in the apparel commodity chain. Journal of International Economics, 48(1), 37–70. doi:10.1016/S0022-1996(98)00075-0

Gereffi, G., Humphrey, J., & Sturgeon, T. (2005). The governance of global value chains. Review of International Political Economy, 12(1), 78–104. doi:10.1080/09692290500049805

Humphrey, J., & Schmitz, H. (2002). How does insertion in global value chains affect upgrading in industrial clusters? Regional Studies, 36(9), 1017–1027. doi:10.1080/0034340022000022198

Jin, B., Li, G., & Chen, Z. (2007). The status-quo analysis and enhancement countermeasures for the international competitiveness of Chinese manufacturing. Finance & Trade Economics, 3, 3–10.

Li, G., Jin, B., & Dong, M. J. (2009). Basic judgment over the development status quo of Chinese manufacturing. Review of Economic Research, 41, 46–49.

Liu, Z. B., & Zhang, J. (2007). Forming, breakthrough and strategies of captive network in developing countries at global outsourcing system: Based on a comparative survey of GVC and NVC. China Industrial Economy, 5, 39–47.

Liu, Z. B., & Zheng, J. H. (2008). Driving the Yangtze River Delta with service industry (pp. 56–59). Beijing, China: The Press of Renmin University of China.

Park, S. H., & Chan, K. S. (1989). A cross-country input-output analysis of intersectoral relationships between manufacturing and services and their employment implications. World Development, 17(2), 199–212. doi:10.1016/0305-750X(89)90245-3

Porter, M. E. (2002). National competitive advantages. Beijing, China: Huaxia Press.

Zhi, C. Y. (2001). The industrial amalgamation of information telecommunication industry. China Industrial Economy, 2, 24–27.

Lin, Y. L. (2010). The status quo of Chinese manufacturing and research into its comparison with that of foreign countries. The Journal of North China Electric Power University, 3, 32–37.

Zhou, Z. H. (2003). Industrial amalgamation: The new power of industrial development and economic growth. China Industrial Economy, 4, 46–52.

Liu, L. Q., & Tan, L. W. (2006). Two-dimensional appraisal of industrial international competitiveness: Thoughts against the background of the global value chain. China Industrial Economy, 12, 37–44.

Zhuo, Y. (2009). The governance of global value chain: Upgrading and the performance of local enterprises: A questionnaire survey and empirical analysis based on Chinese manufacturing enterprises. Finance & Trade Economics, 8, 93–98.

Liu, Z. B., & Yu, M. C. (2009). From GVC to NVC: The integration and industrial upgrading of the Yangtze River Delta. Academic Learning, 5, 59–67.

This work was previously published in Comparing High Technology Firms in Developed and Developing Countries: Cluster Growth Initiatives, edited by Tomas Gabriel Bas and Jingyuan Zhao, pp. 134-144, copyright 2012 by Information Science Reference (an imprint of IGI Global).


Chapter 55

UB1-HIT Dual Master’s Programme:

A Double Complementary International Collaboration Approach David Chen IMS-University of Bordeaux 1, France

Jean-Paul Bourrières IMS-University of Bordeaux 1, France

Bruno Vallespir IMS-University of Bordeaux 1, France

Thècle Alix IMS-University of Bordeaux 1, France

ABSTRACT

This chapter presents a double complementary international collaboration approach between the University of Bordeaux 1 (UB1) and Harbin Institute of Technology (HIT). Within this framework, the higher education collaboration (a dual Master's degree programme) is supported by research collaboration that has existed for more than 15 years. Furthermore, this collaboration is based on the complementary competencies of the two sides: production system engineering (UB1) and software system engineering (HIT). After a brief introduction giving the background and an overview, the complementarities between UB1 and HIT are assessed. A formal model of the curriculum of the dual UB1-HIT Master's programme is then shown in detail, and a unified case study on manufacturing resource planning (MRPII) learning is presented. Preliminary results of the Master's programme are discussed on the basis of an investigation of the first two cohorts of students.

DOI: 10.4018/978-1-4666-1945-6.ch055

BACKGROUND AND OVERVIEW

Research relationships between the University of Bordeaux 1 (UB1, France) and Harbin Institute of Technology (HIT, China) have existed for several years, and both parties have established strong

and long-term relationships with their industries over some 30 years. In the research domain of computer-integrated manufacturing and production system engineering and integration, the cooperation between the University of Bordeaux 1 (IMS-LAPS: Laboratory for the Integration of Materials into Systems, Automation and Production Science Department) and China started in 1993. Several Europe-China projects coordinated by UB1 have been carried out in this domain (1993-1995; 1996-1997; 1998-2002), involving more than seven major Chinese universities, among them Tsinghua University, Xi'an Jiaotong University, Harbin Institute of Technology, Huazhong University of Sciences and Technologies, and others. More recently, the cooperation between the University of Bordeaux 1 and Harbin Institute of Technology has been strengthened to develop enterprise interoperability research activities in the Interop Network of Excellence (2004-2007) under the auspices of the European 6th Framework Programme for Research & Development (FP6) (European Commission, 2003b). There is also a long and strong cooperation between UB1 and HIT in research on other topics, including enterprise system modelling, engineering and integration. However, co-operation in higher education was not so well developed in the past, so it was logical to extend the existing co-operation from the research base to higher education. In September 2006, therefore, UB1 and HIT launched a dual master's degree programme on enterprise software and production systems. This programme relies on the know-how of HIT in computer sciences and enterprise software applications, and of UB1 in enterprise modelling, integration and interoperability research. The joint international programme aims to train future system architects of production systems, able to model, analyze, design and implement solutions covering organization, management and computer science in order to improve the performance of both manufacturing and service enterprises. It also aims to develop students' ability to work and grow in an international environment, particularly in China or France but also in most other countries where the themes covered by the programme are, and will continue to be, vital.

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


The programme is organized over two years. The first year's courses are given at HIT and are concerned with industrially oriented computer sciences. The second year's courses are given at UB1 and are dedicated to production management and engineering. The first two cohorts of the master's programme successfully completed their studies and their industry internships in China and France, and obtained the Master's Degree of the University of Bordeaux 1 and the Master's Degree of Harbin Institute of Technology in September 2008 and 2009. Table 1 gives an overview of the organization of the two-year programme. All courses are given in English, including examinations and the internship defense. One characteristic is that the industry internship can be carried out in China, in France, or in any third country in the world.

Table 1. Organisation of the dual master's programme

Year 1
Teaching/training    Semester   Location
Project              First      Harbin or Bordeaux
Internship           First      World
Courses              Second     Harbin

Detail:
• Project (135h / 9 ECTS - European Credit Transfer System),
• Training in enterprise (305h / 21 ECTS),
• Algorithm and System Design and Analysis (90h / 6 ECTS),
• Database Design and Application (analysis and design) (94h / 6 ECTS),
• Software Architecture and Quality (93h / 6 ECTS),
• Project Management and Software Development (92h / 6 ECTS),
• Object-Oriented Technology and UML (86h / 6 ECTS).

Year 2
Teaching/training    Semester   Location
Courses              Third      Bordeaux
Training in company  Fourth     World

Detail:
• Modelling of industrial systems (135h / 9 ECTS),
• Production management (135h / 9 ECTS),
• Industry performance measurement (45h / 3 ECTS),
• Industry systems integration (90h / 6 ECTS),
• Option (45h / 3 ECTS),
• Training in enterprise (450h / 30 ECTS).
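As a quick consistency check on the credit figures in Table 1 (a sketch only; the course names and ECTS values are copied from the table), the components of each year sum to the standard annual load of 60 ECTS:

```python
# ECTS breakdown taken from Table 1 of the programme description.
year1 = {"Project": 9, "Training in enterprise": 21,
         "Algorithm and System Design and Analysis": 6,
         "Database Design and Application": 6,
         "Software Architecture and Quality": 6,
         "Project Management and Software Development": 6,
         "Object-Oriented Technology and UML": 6}
year2 = {"Modelling of industrial systems": 9,
         "Production management": 9,
         "Industry performance measurement": 3,
         "Industry systems integration": 6,
         "Option": 3,
         "Training in enterprise": 30}

# Each academic year carries the standard 60-ECTS load.
print(sum(year1.values()), sum(year2.values()))  # → 60 60
```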


The internship placements are mainly in companies, large as well as small and medium-sized enterprises (SMEs), which have industrial co-operation projects with China, though not exclusively. Besides IT-oriented work, the internships are situated in the manufacturing sector as well as in services, typically as a person responsible for industrial management (production, quality, and maintenance), a person in charge of the design, development and implementation of software applications, a consultant, or a project leader.

COMPLEMENTARITIES UNDERPINNING THE COLLABORATION

Software Engineering and Production System Engineering

As mentioned above, this collaboration is based on the complementary strengths of UB1 (production system engineering) and HIT (software system engineering). Whether an enterprise is considered from the general point of view, as a system providing goods and services, or from the narrower point of view of its information system, both approaches relate to the fundamental philosophy of engineering. In both cases the purpose is to design an overall architecture for the system that is consistent and relevant to a predefined mission, and models and simulations play a central role in both approaches. Production system engineers view the enterprise as a system with a purpose related to a strategy. Within this purpose and strategy, performances are defined that enable evaluation of how well the enterprise runs. The necessity of communication and cooperation between sections within a company, or between companies within a network, has led to the important concept of integration. Today, the numerous forms of co-operation and the versatility

they require bring into prominence the concept of interoperability, which can be broadly understood as loose integration. Because of the complexity of the enterprise, it is always considered relative to a reference (a conceptual model or reference architecture). With respect to this reference, the engineering methodologies used are supported by modelling languages and frameworks (enterprise modelling), whose role is to enable understanding of the structure and behavior of the enterprise. The existing diversity of languages and software supports creates the need to analyze them in detail in order to compare them and potentially use them together. In this perspective a purely syntactical approach is not enough, so current scientific developments in this field relate to semantics and deal with meta-models and ontology. Furthermore, the human being must always be remembered as a component of the enterprise; for this reason the relation of the models to decision-making (of design and/or of management) is an important issue, whatever the approach used. From a software engineering point of view, the need for integration can be met through the provision and implementation of software tools, mainly enterprise resource planning (ERP) tools. This domain then focuses on the analysis of IT solutions, implementation projects, IT solution performance analysis, and the identification of the usability domain and the limitations of classical methods. The ways in which the functions of the information system are integrated using such IT tools are well understood today, and the organizational challenges are also quite well known. The main outstanding issues relate to supporting the processes of the enterprise by consistently integrating the several IT solutions whose functionalities generally cover more than is required. In this context, the capability to match the models of the enterprise (the requirements) with the models emerging from the IT solutions (the so-called space of solutions) becomes crucial. Finally, a continuing core problem is ensuring the permanent alignment of the information system, and its various implemented IT solutions, with the strategy of the company. Because the economic environment is dynamic, this necessarily leads to a policy of continuous engineering. In summary, both domains concern the design, integration and control of systems under performance conditions. In order to meet dynamic requirements and take changing constraints into account, it is necessary to continually improve the understanding of the interactions between the various models and to gather and integrate the various points of view, such as organization, software, and so on. In this drive to keep improving performance, exploiting the complementarities between software engineering and production system engineering is thoroughly necessary.

Enterprise Interoperability as an Emerging Topic Related to These Complementarities

Enterprise interoperability is a topic currently emerging at the confluence of software engineering and production system engineering. It is a topic of considerable and growing scientific and technical research, fundamentally because of the considerations presented above. Worldwide, the competitiveness of enterprises, including SMEs, will in the future strongly depend on their ability to develop and implement networked dynamic organisations massively and rapidly. New technologies for interoperability within and between enterprises will have to emerge to radically solve the recurrent difficulties encountered, largely due to the lack of conceptual approaches, in structuring and interlinking enterprises' systems (information, production, decision) (European Commission, 2003b). Today, research on the interoperability of enterprise applications does not exist as such. As a result of the IST Thematic Network IDEAS (Baan, 2003), the roadmap for interoperability research emphasises the need to integrate three key thematic components, shown in Figure 1:

• software architectures and enabling technologies, to provide implementation solutions
• enterprise modelling, to define interoperability requirements and support solution implementation
• ontology, to identify interoperability semantics in the enterprise.

Interoperability is seen as the ability of a system or product to work with other systems or products without special effort on the part of the user or customer (Baan, 2003). The ISO 16100 standard (2002) defines manufacturing software interoperability as the ability to share and exchange information using common syntax and semantics to meet an application-specific functional relationship through the use of a common interface. Interoperability between enterprise applications can be defined more simply as the ability of enterprise software and applications to interact usefully. Interoperability is considered to be achieved if the interaction can take place at three levels at least (data, application and business) through the architecture of the enterprise model and taking semantics into account, as shown in Figure 2.

Figure 1. Three key thematic components and their integration (Baan, 2003; European Commission, 2003b)

Figure 2. The three levels of interoperability (European Commission, 2003a)

At the beginning of the 2000s, research in the interoperability domain in Europe was poorly structured, fragmented, and sometimes unnecessarily overlapping. There was no unified, consistent vision and no co-ordination between the various European research centres, university laboratories and other bodies. This was true not only of pure research but of training and education as well. To improve this situation, two important initiatives were launched by the European Commission: the Interop Network of Excellence and the Athena Integrated Project (European Commission, 2003a; 2003b).
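The data-level interoperability described above — common syntax, common semantics, common interface — can be illustrated by a deliberately simplified sketch. The application names, internal field names and mappings below are invented for the example; only the pattern (two applications exchanging a message through an agreed common schema) reflects the definitions in the text:

```python
import json

# Hypothetical common schema agreed between two applications: the shared
# field names fix the semantics, JSON fixes the syntax.
COMMON_SCHEMA = {"order_id", "quantity", "due_date"}

def export_from_app_a(record):
    """Application A maps its internal vocabulary onto the shared schema."""
    msg = {"order_id": record["OrdNo"],
           "quantity": record["Qty"],
           "due_date": record["DueDt"]}
    assert set(msg) == COMMON_SCHEMA  # the common interface is respected
    return json.dumps(msg)            # common syntax: JSON

def import_into_app_b(payload):
    """Application B reads the shared schema into its own vocabulary."""
    msg = json.loads(payload)
    return {"job": msg["order_id"],
            "lot_size": msg["quantity"],
            "deadline": msg["due_date"]}

# Neither application needs to know the other's internal data model.
record = {"OrdNo": "A-42", "Qty": 100, "DueDt": "2013-06-01"}
job = import_into_app_b(export_from_app_a(record))
print(job)  # → {'job': 'A-42', 'lot_size': 100, 'deadline': '2013-06-01'}
```

The point of the sketch is that the two applications interact usefully without special effort on the user's part precisely because syntax and semantics were agreed in advance; the application and business levels of Figure 2 raise the same requirement at coarser granularity.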

The Interop Network of Excellence and the Athena Integrated Project

Interop NoE was a Network of Excellence (47 organizations, 15 countries) supported by the European Commission for a three-year period (2003-2006) (European Commission, 2003b). This Network of Excellence aimed to extract value from the sustainable integration of the thematic components above and to develop new, industrially significant knowledge. Interop's role was to create the conditions for a technological breakthrough, so that enterprise investment would not simply be pulled along by the incremental evolution of commercially available IT. Consequently, Interop's joint programme of activities aimed to:

• integrate the knowledge in ontology, enterprise modelling, and architectures to give sustainable sense to interoperability
• structure the European research community and influence organisations' programmes to achieve critical research mass
• animate the community and spread industrially significant research knowledge outside the network.

In more detail, the joint research activities were composed of the following work packages:

• enterprise modelling and unified enterprise modelling language (UEML): unifying for interoperability and integration
• ontologies for interoperability
• domain architecture and platforms
• domain interoperability
• synchronization of models for interoperability
• model-driven interoperability
• model morphisms
• semantic enrichment of enterprise modelling, architectures and platforms
• business/IT alignment
• methods, requirements and method engineering for interoperability
• interoperability challenges of trust, confidence/security
• services/take-up towards SMEs.

Athena (Advanced Technologies for Interoperability of Heterogeneous Enterprise Networks and their Applications) was also an Integrated Project supported by the European Commission for the three-year period (2003-2006) (European Commission, 2003a). Its objective was to be the most comprehensive and systematic European research initiative in the field of enterprise application interoperability, removing barriers to the exchange of information within and between organizations. It would perform research and apply the results in numerous industrial sectors, cultivating and promoting the networked business culture. Research and development work was carried out hand in hand with activities conceived to give sustainability and community relevance to the work done. Research was guided by business requirements defined by a broad range of industrial sectors and integrated into piloting and training. Athena would be a source of technical innovations leading to prototypes, technical specifications, guidelines and best practices, trailblazing new knowledge in this field. It would mobilize a critical mass of interoperability stakeholders and lay the foundation for a permanent, world-class hub for interoperability. Projects running within Athena were organized in three action lines in which the activities would take place: the research and development activities were carried out in action line A, action line B took care of community building, and action line C hosted all management activities (European Commission, 2003a). Concerning the R&D action line, six projects were initially defined as follows:

• enterprise modelling in the context of collaborative enterprises (A1)
• cross-organisational business processes (A2)
• knowledge support and semantic mediation solutions (A3)
• interoperability framework and services for networked enterprises (A4)
• planned and customisable service-oriented architectures (A5)
• model-driven and adaptive interoperability architectures (A6).

Figure 3. Interaction of Athena action lines

Relations between the three action lines are shown in Figure 3. Interop NoE and Athena IP have strongly influenced and contributed to research and development on enterprise interoperability in Europe and beyond. Harbin Institute of Technology was also invited to participate in Interop NoE meetings and in the creation of the Interop Virtual Laboratory, which is considered one of the important achievements of this Network of Excellence.

The Interop Virtual Laboratory (Interop-VLab)

Interop-VLab, a sustainable European scientific organization, is the continuation of the Interop Network of Excellence. It aims to federate and integrate current and future research laboratories, both academic and industrial, in order to fulfil objectives that a participating organization could not achieve alone. It is supported by local institutions to promote interoperability in local industry and public administration. Interop-VLab's mission includes the following:

• Promoting the enterprise interoperability domain and acting as a reference: establishing a sustainable organization, at European level, to facilitate and integrate high-level research in the domain of enterprise interoperability and to be a reference for scientific and industrial, private and public organisations
• Contributing to the European Research Area: contributing to solving one of the main issues of the European Research Area - the high fragmentation of scientific initiatives - by synergistically mobilizing European research capacities, enabling the achievement of critical mass by aggregating resources to match major future research challenges that would not be possible for individual organisations
• Developing education and professional training: promoting and supporting initiatives of European higher education institutions in the domain
• Promoting innovation in industry and public services: facing the industrial challenge of creating networks and synergies, Interop-VLab aims to promote and support applied research initiatives addressing innovation and the reinforcement of interoperability between enterprises at European, national and local levels; this approach will also help to create synergy between European, national and local research programmes.

Harbin Institute of Technology is the leading partner of the China Pole of Interop-VLab. The China Pole consists of ten important Chinese universities spread across China. Besides research-related projects, an Interop master's degree programme involving Interop-VLab members, including HIT and UB1, was also planned.

FORMAL MODEL OF THE UB1-HIT DUAL MASTER'S DEGREE CURRICULUM

This section presents the details of the dual UB1-HIT master's degree curriculum. Because this programme is built on two separate disciplines and carried out in two locations in two different countries, the main challenge to its success would be the development of a deep mutual understanding of the curriculum implemented in each location, and a close collaboration between the two teams to avoid unnecessary redundancies and emphasize synergistic complementarities. To meet this objective, a detailed and explicit representation of the curricula was necessary. Usually, university training curricula are presented in textual form, often using tables. In general, inter-relationships between the various courses and lectures tend not to be identified and/or explicitly described and considered. Sometimes this can create difficulties for students in fully understanding the relationships between component courses and their logic, and consequently in mastering the overall knowledge that they need to acquire (Alix et al., 2009). Based on feedback from the students after three years of running on an experimental basis, it is necessary to present the overall curriculum of the master's degree programme in a more formal and explicit way, so that both students and teachers on both sides can have a clear and unambiguous understanding of the contents of the programme and of their roles within it. The purpose of this section is therefore to present the formal model of the UB1-HIT dual master's programme curriculum. Unified Modelling Language (UML) was chosen to model the lectures delivered in the two years and the possible relationships between the series of lectures in the two years. Complementarities and potential future improvements are also discussed below.

Model of Year 1 Curriculum in HIT

This section describes the model of the Year 1 curriculum carried out at the Harbin Institute of Technology School of Software in China. The Year 1 training focuses on software engineering, information systems analysis and design, programming techniques and IT project management. This curriculum is mainly organized in three modules, as shown in Figure 4: Language; Science and Methodology; IT Technique. In the Language module, there are two courses, English and French.

• English: Because all the courses of this joint master's programme are in English, a good command of English is very important. The objective is to give the students the ability to read and write reports and papers in English, and to communicate with professors fluently, orally and aurally, in English.
• French: This course aims to teach the Chinese students everyday French, which can help them adapt to French daily life when they arrive in France.

Figure 4. UML model of year 1 curriculum at HIT

The Science and Methodology module aims to teach students how to carry out scientific research and how to analyse objects and the relationships among them. This module contains two courses, dialectics and operational research.

• dialectics: This course teaches students the resolution of disagreement through rational discussion and, ultimately, the search for truth.
• operational research: This course shows how to use mathematical modelling, statistics and algorithms to develop optimal solutions to complex problems, improve decision-making and make processes more efficient, in order finally to achieve a management goal.
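The kind of optimisation problem treated in the operational research course can be illustrated with a small example. The sketch below solves a 0/1 knapsack problem by dynamic programming; the "jobs", profits and machine-hours are invented purely for illustration and are not taken from the course material:

```python
def knapsack(values, weights, capacity):
    """Return the best total value achievable within the capacity,
    using 0/1 dynamic programming over items and remaining capacity."""
    best = [0] * (capacity + 1)  # best[c] = best value with capacity c
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Hypothetical data: four candidate jobs, their profits and machine-hours
profits = [60, 100, 120, 40]
hours = [10, 20, 30, 15]
print(knapsack(profits, hours, 50))  # best profit within 50 machine-hours -> 220
```

The downward iteration over capacities is the classic trick that prevents the same job from being counted twice within one planning decision.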


The IT Technique module is the main part of the first year of study. It centres on software engineering, offering a series of IT technique courses, such as databases and Java programming, as well as a series of software management courses, such as software quality assurance and IT project management. In addition, there is a practical course in this module, in order to put both IT and project management knowledge into practice.

• IT: this set of modules aims to teach students the skills of designing and implementing IT solutions for different kinds of firm. The modules are as follows.
◦ databases: this module focuses on how to use a relational database, including designing a proper entity relationship model (ERM), creating correct data views based on the ERM, querying data with structured query language (SQL), defining stored procedures for a database, etc.
◦ algorithm analysis: this module is an important part of broader computational complexity theory, providing theoretical estimates for the resources needed by an algorithm to solve a given computational problem: it shows how to analyse an algorithm, how to determine the amount of resources (such as time and storage) necessary to execute it, and finally how to optimise the program.
◦ software architecture: this module shows how to analyse, design and simulate the structure or structures of a system - the software components, the externally visible properties of those components, and the relationships between them.
◦ Java programming: this module introduces one of the most popular programming languages: after completing this module, students should be able to implement an executable application and to learn other programming languages by themselves.
◦ object-oriented design and UML: unified modelling language (UML) is a standardized general-purpose modelling language in the field of software engineering: it includes a set of graphical notation techniques to create visual models of software-intensive systems; after this course, students should be able to use UML to design a proper software system model.
• Management: This set of modules contains lectures on the methodology of IT project management. The courses involve the following modules.
◦ software quality assurance (SQA): this topic covers the software engineering processes and methods used to monitor and ensure quality: it encompasses the entire software development process - software design, coding, source code control, code reviews, change management, configuration management, and release management.
◦ IT project management: this topic shows how to plan an IT project and how to anticipate and avoid the risks of failure in IT project development: after this course, students should be able to use the methodology learned to reduce the cost of an IT project and make it as efficient and successful as possible.
◦ software development process management: this module gives more details about SQA and IT project management in the development phase of a project.

• Practical work: This module gives students a chance to put their knowledge into practice. Students are required to manage a full IT project by themselves, from requirements analysis and system model design to software implementation, testing and deployment: after completing this module, students will have an overall understanding of software engineering.
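The relational concepts covered in the databases course (entity relationship model, foreign keys, SQL joins) can be sketched with Python's built-in sqlite3 module. The two-table schema below is hypothetical, loosely echoing the product/part structure used later in the Turbix case, and is not part of the actual course material:

```python
import sqlite3

# Hypothetical one-to-many schema of the kind derived from an ERM:
# one product row owns several part rows via a foreign key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE part (
        id INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES product(id),
        name TEXT
    );
""")
conn.execute("INSERT INTO product VALUES (1, 'R3')")
conn.executemany("INSERT INTO part VALUES (?, ?, ?)",
                 [(1, 1, 'E2'), (2, 1, 'P5')])

# A join query of the kind practised in the SQL exercises
rows = conn.execute("""
    SELECT product.name, part.name
    FROM product JOIN part ON part.product_id = product.id
    ORDER BY part.id
""").fetchall()
print(rows)  # [('R3', 'E2'), ('R3', 'P5')]
```

An in-memory database keeps the example self-contained; the same statements would work against a file-backed database.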

Model of Year 2 Curriculum in UB1

This section presents the model of the Year 2 curriculum at the University of Bordeaux 1 in France. This training focuses on enterprise system engineering and, in particular, enterprise modelling, production management, and enterprise integration and interoperability.

The curriculum of year 2 is organised in five modules, as shown in Figure 5: MSI (industrial system modelling); ESI (industrial system management); MPI (industrial system performance); PRI (industrial system integration); OPT (option: bibliographical research work).

Figure 5. UML model of year 2 curriculum at University of Bordeaux

The MSI module is mainly concerned with enterprise modelling and design. It starts with a lecture on system theory, laying down the fundamental concepts of the systemic view of the enterprise. Enterprise modelling then focuses on the GRAI (graphs of interlinked results and activities) and IDEF (integration definition) methodologies (IDEF0 function modelling, IDEF1 information modelling and IDEF3 process modelling). The MOOGO (method for object-oriented business process optimization) process modelling tool, developed by the Fraunhofer Institute for Production Systems and Design Technology (IPK) in Berlin, and Petri net formal modelling are complementary to GRAI and IDEF. Productic (production science) is a lecture presenting the general problems and state of the art of enterprise engineering. In parallel, design theory and innovation are presented to give an understanding of the basic concepts and principles of enterprise system design.

The ESI module focuses on production planning and control techniques, with the emphasis on the MRPII method. MRPII teaching is mainly organised around an extended case study (details are given below), including (a) paper exercises, (b) game-based simulation and (c) computerisation using the Prélude software (Chen & Vallespir, 2009). Sales forecasting and inventory management methods (for example, the order point method) support both manufacturing resource planning (MRPII) implementation and supply chain management, which is another important lecture in this module. In addition, other recent methods, such as KANBAN, based on JIT (just in time) and lean manufacturing, complement MRPII. In parallel, project management techniques such as the PERT (programme evaluation and review technique) method are also presented.

The MPI module covers enterprise performance evaluation. Besides the Taguchi method and the reliability approach, which can be related to design issues in the earlier MSI module (as shown in Figure 5), a large part of the teaching is focused on quality concepts and methods. Benchmarking is also considered an important approach to improving the performance and quality of enterprise systems and products. Another lecture is concerned with problems and solutions for recycling, which is becoming more important in modern industrialised societies. Finally, a game-based simulation shows how to link the flows (physical, information) in an enterprise to performance (quality, delay), and how to act on the flows to improve performance.
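The earliest-time calculation at the heart of the PERT/CPM techniques taught in the ESI module can be sketched in a few lines. The activity network and durations below are hypothetical, chosen only to show the forward pass:

```python
# Forward pass of the PERT/CPM earliest-time calculation on a
# hypothetical activity network (durations in days).
durations = {"A": 3, "B": 5, "C": 2, "D": 4}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

earliest_finish = {}

def finish(task):
    # earliest finish = earliest start (latest predecessor finish) + duration
    if task not in earliest_finish:
        start = max((finish(p) for p in predecessors[task]), default=0)
        earliest_finish[task] = start + durations[task]
    return earliest_finish[task]

project_duration = max(finish(t) for t in durations)
print(project_duration)  # A -> B -> D is the critical path: 3 + 5 + 4 = 12
```

A full PERT treatment would add a backward pass to compute latest times and slack; the forward pass alone already yields the project duration and critical path.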
The PRI module is about enterprise integration and interoperability. Here, enterprise integration is approached principally through enterprise architecture and framework modelling approaches, such as CIMOSA (computer integrated manufacturing open system architecture), PERA (Purdue enterprise reference architecture) and GERAM (generalised enterprise reference architecture and methodology). In parallel, the basic concepts, framework and metrics of enterprise interoperability are also presented, because these are becoming a significant new trend, replacing traditional integration-oriented projects. It is also noteworthy that teaching in this module is largely based on e-learning on the one hand and, on the other, on seminars presented by well-known European experts in MDI (model driven interoperability), A&P (architecture & platform) for interoperability, and ontology for interoperability.

Finally, the OPT module was originally designed to be a slot for optional courses. For the time being it has only one option (bibliographical research work). The students are asked to choose a subject proposed by the professors and to perform bibliographical research on it. This work is done in groups of two students. Each group must write a report, present the work and answer questions in front of a jury. It is an initiation to research and aims to develop the students' capability to carry out bibliographical research.

Complementarities and Possible Improvements

Relationships between the courses in years 1 and 2 are tentatively identified as indicated in Figure 6. Several types of relationship are defined as follows:

• is-a relationship: for example, the IT project management lecture given in year 1 is a particular type of the (general) project management studied in year 2
• part-of relationship: the software quality assurance lecture in year 1 is part of the more general quality course in year 2


Figure 6. Links between the courses of the two years

• support relationship: one course is used as preparation or as a means for another; for example, object-oriented design and UML are used to develop MDI and implement A&P in year 2. Enterprise modelling techniques can also be used to model user requirements at a higher level of abstraction in software system design, for example at the CIM (computation independent model) level of the model driven architecture (MDA) framework.

Several complementarities can be identified:

• At the global level, courses on computer science are complemented by training on enterprise and production systems. This allows HIT students to acquire supplementary knowledge to be better able to develop production-system-oriented software such as enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM) and others. On the other hand, UB1 students, who are more familiar with industrial systems, are empowered with software development skills.
• At a more detailed level, and from the modelling point of view, enterprise modelling (mainly at the conceptual level, focusing on global system modelling) is complementary to IT-oriented modelling. This is also true from the architecture perspective, where enterprise architecture needs to be detailed in IT architecture and IT architecture must be consistent with enterprise architecture.
• Both years 1 and 2 deal with design issues. Design-related lectures in year 2 (design innovation, design theory, Taguchi, reliability, etc.) provide generic design concepts and principles complementary to the software design techniques learned in year 1.

At the course level, several potential improvements are envisaged:

• Better coordination of the project management courses of the two years is needed. A consistent framework is necessary to position each lecture and to show links and complementarities.
• More explicit relations between IT architecture and enterprise architecture must be defined, in particular the alignment between business and IT, and the consistent elaboration of IT architectures in relation to enterprise architecture.

A UNIFIED MRPII TRAINING CASE STUDY

Professional training in universities on MRPII-based production planning and control techniques, as well as their implementation, is one of the key issues in most production-related master's degree programmes in France. Quite often, MRPII-based education and training do not reach a satisfactory level in university curricula. There are several reasons for this. One is the lack of production and industry concepts and experience among most master's degree level students. Another relates to the highly conceptual character of production planning and management methods, which require mastery of many abstract ideas, definitions and terms. The third is that the lectures, exercises and practical work on computers usually deal with different, discrete examples, case studies and illustrations. A unified common case study allowing students to learn, understand, analyse and practise MRPII-based production planning techniques is still elusive.

In this section, an innovative and experimental MRPII training project is presented. This project was first implemented in the master's degree programme in engineering, direction and performance of industrial systems (IPPSI) at the University of Bordeaux 1 during the academic year 2008-2009, and has been partly used on an experimental basis in the dual UB1-HIT master's programme. The characteristic of this project is to combine an MRPII game, enterprise modelling (the GRAI methodology) and software implementation within a single common case study. The objective is to provide the students with a unified and consistent case study for learning MRPII-based production planning, from the fundamental concepts, through paper exercises and manual game simulation, to the implementation of an MRPII-based software system. After presenting the principles and broad organisation of the project, we show the various phases the students follow to learn MRPII-based production planning and control in a gradual and systematic manner. The experiences of the students, obtained through formal feedback, and possible improvements in the approach are also discussed.

Description of the Case

Turbix (Centre International de la Pédagogie d'Entreprise (CIPE), 2008b) is a small company that manufactures reduction gears referenced R1 to R8 (8 finished products). The reduction gears are composed of two types of parts: E1-E8, manufactured in the company, and P1-P5, purchased externally. The E1-E8 parts are manufactured from two types of raw material, M1 and M2. Figure 7 shows the structure of R3. Turbix is organised in two workshops: the machine shop, which manufactures the E parts, and the assembly shop, which manufactures the finished products (R). Masteel and Fournix are two suppliers providing the raw materials (M) and the purchased parts (P), respectively. The overall organisation and physical flow are shown in Figure 8.

Figure 7. Example: R3 product structure (Centre International de la Pédagogie d'Entreprise (CIPE), 2008a)


Figure 8. Organisation and physical flow of Turbix (Centre International de la Pédagogie d'Entreprise (CIPE), 2008a)

Figure 9. Turbix management architecture

Because of different customer lead times, R1 and R2 are produced according to sales forecasts established beforehand. R3-R8 are manufactured against firm customer orders. E1-E8 and P1-P5 are manufactured and purchased according to the needs of R1-R8 production. M1 and M2 are purchased according to the needs of E1-E8 production. On the basis of this physical organisation, the architecture of the production management implemented in Turbix is presented in Figure 9.

First Component: The Manufacturing Resource Planning (MRPII) Game

The objective of the MRPII game (Centre International de la Pédagogie d'Entreprise (CIPE), 2008b) is to allow a group of participants to discover for themselves how the MRPII method works and what steps must be followed to implement MRPII software in a company. Using this game, participants can plan production and purchasing orders with the MRPII technique, and simulate the execution of the planned orders through the various functions of the company: commercial service, manufacturing service, inventory/stocks, purchasing service, etc. During the simulation, each participant takes a precisely defined role and responsibility. In detail, the game allows students:

• to understand the structure and functioning of the existing production system
• to plan the master production schedule (MPS) for the finished products and draw up the material requirements planning (MRP) for parts E and P
• to calculate load and perform load levelling
• and finally to simulate the functioning of the production system over a period of two months, all consistent with the management architecture in Figure 9.
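The MRP calculation that the students perform by hand, netting gross requirements against stock and exploding them through the bill of materials, can be sketched as below. The product structure borrows the Turbix part names (R3, E2, P5, M1), but the quantities and stock levels are hypothetical, not the actual case data, and lead times and time-phasing are deliberately omitted:

```python
# Hypothetical single-level-at-a-time MRP explosion:
# parent -> {component: quantity per parent}
bom = {"R3": {"E2": 1, "P5": 2}, "E2": {"M1": 3}}
stock = {"R3": 0, "E2": 4, "P5": 0, "M1": 10}

def explode(item, gross, orders):
    """Net gross requirements against stock, record a planned order,
    then explode the net quantity down to the components."""
    net = max(0, gross - stock.get(item, 0))
    if net:
        orders[item] = orders.get(item, 0) + net  # planned order quantity
        for component, qty in bom.get(item, {}).items():
            explode(component, net * qty, orders)
    return orders

print(explode("R3", 10, {}))  # planned orders for a demand of 10 R3
# {'R3': 10, 'E2': 6, 'M1': 8, 'P5': 20}
```

A real MRPII run would time-phase these orders by lead time and aggregate requirements per planning period; the netting-and-explosion logic, however, is exactly the one practised on paper in the game.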

Second Component: The GRAI Methodology

The GRAI methodology (Vallespir & Doumeingts, 2006) was developed at the Department for Automation and Production Science/Graphs of Interlinked Results and Activities (LAPS/GRAI) of the Laboratory for the Integration of Materials in Systems (IMS) at the University of Bordeaux 1. The methodology sets out to model, analyse and design the decision-making sub-systems of a production management system. The method consists of:

• a conceptual reference model defining the set of fundamental concepts
• modelling formalisms
• a structured approach.

The GRAI methodology is used in the project to model and analyse the existing production system of Turbix, to detect its potential inconsistencies and to design a new, improved system.

Third Component: The Prélude Production MRPII Software

Prélude Production is MRPII-compliant software developed for professional training and teaching purposes (Centre International de la Pédagogie d'Entreprise (CIPE), 2008a). Its user-friendly interface allows students to learn how to use MRPII software gradually. In the project, this software is used to computerise the production planning and management activities of the Turbix company. After Prélude Production is implemented in the company, it is used to plan and control the daily production activities. It is also used together with the game to perform a simulation. Figure 10 shows the main functions of the Prélude Production software.

Figure 10. Main functions of Prélude Production software (Centre International de la Pédagogie d’Entreprise (CIPE), 2008a)


The Programme and the Implementation of the Project

In this section, we present the programme for the project and its organisation and implementation. The project is carried out by the students over several months. Two groups of students are formed, each of about 10 students. Figure 11 gives the overall logic of the project.

Figure 11. Overall logic of the project

Initialisation Phase

To start the project, the objective, the organisation and timetable, and the expected results at the end of each phase are presented to the two groups of students.

Playing the Game

The next phase aims to show the students how to carry out the planning and simulation without the MRPII software tool. The objective is to allow students to develop a better understanding of the basic concepts and techniques of the MRPII calculation and, at the same time, a thorough understanding of the existing Turbix system. The game is played over a day and a half. At the beginning, the students use the traditional inventory management technique (the order point method) to manage the Turbix system for a one-month period (January). Then they are asked to migrate to the MRPII technique. A manual MRPII calculation is done to plan all the orders needed for the finished products (R1-R8) and the parts (E1-E8 and P1-P5). Load calculations on the four lines in the assembly workshop (the L1-L3 assembly lines and the L4 test line) are carried out in order to validate the master production schedule (MPS). The MRPII simulation is then run on a day-by-day basis for the following month (February), covering the production activities (purchasing, manufacturing and assembly) and the management activities (order release, production follow-up, inventory, order close-up, etc.).
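The order point method used in the first part of the game reduces to a simple rule: reorder when the stock position falls to the expected demand over the replenishment lead time plus a safety stock. A minimal sketch, with hypothetical figures for one purchased part (not the actual Turbix data):

```python
def order_point(daily_demand, lead_time_days, safety_stock):
    """Reorder point: expected demand over the replenishment lead time
    plus a safety stock to absorb demand and supply variability."""
    return daily_demand * lead_time_days + safety_stock

# Hypothetical figures for one purchased part
rop = order_point(daily_demand=40, lead_time_days=5, safety_stock=60)
print(rop)  # reorder when the stock position falls to 260 units
```

Comparing this stationary, per-item rule with the demand-driven MRPII explosion the students move on to is precisely the pedagogical point of the migration step.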


Existing System Analysis

After the MRPII game, the students are asked to analyse the functioning of the existing production system on the basis of the knowledge and experience gained during the game. The GRAI method, using the GRAI grid and nets, is used to model the decision-making structure of the existing production management (PM) system. Based on the model of the existing system, GRAI rules can be applied to detect possible inconsistencies. If inconsistencies are found (for example, bad decision horizon or period values), the students propose the corrections needed to improve its functioning.


Simulation of Improved System

After the analysis and possible redesign of the production system in Turbix, the students play the game again. The game simulation is done on the new system, with the set of suggested corrections and modifications to the existing system implemented. For example, one possible suggestion from the students is to adjust the planning horizon values at the MPS and MRP levels to allow improved co-ordination between them.

Implementation of Prélude Production Software

During this phase, the students are asked to computerise the production planning and control activities in Turbix using the Prélude Production software. For this task, the students are divided into small groups (two students per group per computer). First, the students need to compile all the relevant technical data (bills of materials, routings, items and workstations) and put them in an appropriate form to be entered into the computer. Then a small scenario (using a number of sales forecasts and firm orders) is given to the students to allow them to test the Prélude Production implementation for Turbix. This tends to be a very interesting task because the students need to find any errors they made during data collection.

MRPII Software Based Simulation

After validation of these test results, the simulation can begin. During the simulation, the students are asked to perform the same activities as during the game, but this time using the MRPII software. This phase allows students to compare the two simulations: the game simulation without computer aid and the simulation with the MRPII software (Prélude Production).

DISCUSSION AND REMARKS

The experiment carried out with the master's class of 2008 clearly showed the students' interest and the feasibility of the project. The main added values of the project were found to be the following:

• The project allowed the students not only to learn MRPII concepts and techniques, but also to practise MRPII-based production control in a concrete, single case study. The students could evaluate and compare the problems, difficulties and benefits at the different stages of the project using the same case.
• The game played before the computerisation stage allowed the students to take an active part in the activities of the enterprise as if they were actors in the company, thus putting them in a situation similar to that in a real enterprise.
• The use of the GRAI method before computerisation allowed the detection of possible inconsistencies in the system. The benefits are to show the usefulness of enterprise modelling in improving company performance, and to computerise a re-engineered system after the correction of inconsistencies.
• The project showed that the computerisation of production management is not only a matter of software. Before introducing an MRPII package in a company, it is necessary to analyse and re-engineer the existing system to make it consistent, to have the appropriate technical data, to define the most suitable parameters for the software, etc.

This project has contributed to improving MRPII-based production management training courses in French universities by providing a unified case study framework which covers the various types of exercise (understanding fundamental concepts, paper-based MRPII planning and manual simulation, enterprise modelling/analysis and re-engineering, computerisation, and MRPII software-based simulation). One improvement planned for the near future is reinforcing the use of the GRAI methodology in the second phase of the project. It will also be necessary to investigate ways of extending the time horizon of the simulation (from two months to perhaps six months, or preferably one year). Extending the time horizon will allow the simulation of a long-term production plan and the incorporation of some strategic production management decisions.

PRELIMINARY ACHIEVEMENTS AND ASSESSMENT

This section describes the results of the first two cohorts of students on the master's programme, and the feedback received from them. The students are asked to give a personal overview and overall appreciation of the content of the programme, the difficulties encountered in studying and comprehending each year, and the benefits expected at the end of the programme. Finally, the students who have earned all the European Credit Transfer System (ECTS) credits at the end of the second year of the programme, and are therefore eligible for the dual master's degrees, are asked for a professional perspective on the advantages of the programme and degree. The first cohort, class 2008, had thirteen students: twelve Chinese and one French. The second cohort, class 2009, had fourteen students in year 2: ten Chinese and four French. The third cohort, class 2010, has fifteen students in year 1: ten Chinese and five French. The relatively low, although growing, number of French students is probably because the predisposition to go abroad for study is weak in France, and the students who do go abroad tend to be pioneers. The employment opportunities for the graduates are in both manufacturing and service companies.


Graduates can become managers (more specifically production, quality or maintenance managers), R&D engineers and managers, consultants, and project coordinators and managers in the general domain of implementing enterprise software applications (such as ERP, SCM, PLM and many others) in large companies and in SMEs. If, as seems likely, the internship is a springboard to employment, another employment opportunity is in research teams and projects in academic institutions. Indeed, in 2008 eight students did their final internship in an academic or research laboratory, three in France and five in other European countries. In 2009, ten students chose research internships, five in French laboratories and five in other European ones.

Survey of the Opinions of the Students

In December 2008, a questionnaire was sent to all the students of class 2008 and class 2009. The objective was to obtain an evaluation of the programme, taking into account the students' difficulties, the facilities available and their expectations before, during and after the programme, and to obtain feedback on the professional experience gained during the two periods of internship by the two cohorts. A simplified view of the questionnaire used is presented below.

Questionnaire Used

1. Position, name and address of the company or university?
2. Position of the internship activity (daily job) in the company?
3. Competencies before year 1, before year 2 and at the end of year 2?
4. Difficulties met and facilities provided during the first and the second year of the programme?
5. Advantages and disadvantages offered/encountered in relation to the double competency, EM (enterprise modelling) and IT (information technology), of the programme?
6. Thoughts about the continuity between Harbin and Bordeaux?
7. In your daily job do you use the double competency (if not, which one do you use)? What advantages does IT/EM knowledge offer in your job?
8. Differences and similarities between the form and operation of the internship in Harbin and in Bordeaux?
9. Is the double competency an advantage in finding a job or PhD position?

Seven students of class 2008 replied: two employed in private companies, three PhD students, and two looking for a job or further training opportunities. Twelve students of class 2009 replied.

Results from Class 2008

• Competencies: Most students (5) had low or only a fundamental level of software programming skills, and some students (2) had no software domain knowledge but principally mathematics or control theory and engineering respectively, before year 1 of the programme. At the end of the first year, almost all the students (6) had achieved competency in software engineering, especially software architecture, software development, Java and databases. At the end of two years, most students believed they had acquired (i) knowledge about enterprise modelling and production management, (ii) knowledge about enterprise modelling methods like GRAI and IDEF, (iii) a deep understanding of SCM, quality assurance and performance measurement, and (iv) knowledge, through the bibliographic research work, in academic fields like ontology and interoperability. After the full programme, most of the students agreed that they had made progress in the English and French languages.
• Difficulties and facilities: During the first year of the programme, most problems came from language misunderstandings, which made some courses difficult to assimilate. Three students thought that the course load was heavy even though they had a good studying environment (two of them lacked knowledge and experience in software engineering). In the second year of the programme, four of the seven students who responded thought that topics such as interoperability and service-oriented architecture were too conceptual and difficult to comprehend. With insufficient background knowledge of practical enterprise cases, abstract models and the connections between them are hard to understand.
• Double competency statement: Students had acquired knowledge of software development and enterprise modelling by the end of the two years. They have a good knowledge of how IT works in the enterprise and also a good understanding of business processes, which can help them to find the right technology when they design an enterprise management system. In their daily jobs, five of the seven students use this double competency. IT knowledge is used directly and regularly by those employed in companies, while the PhD students use IT to implement programs to prove, analyze and demonstrate their research results. For those in employment, enterprise management knowledge supports their understanding of the framework and architecture of the issues they work on, and supports the design of solutions in their daily work.
• Teaching specificities: The teaching in the IT domain tends to be considered more theoretical, while the teaching in the EM domain is considered more practical because of the game-based simulations and exercises, which can be seen as playing realistic roles. The enterprise games are also considered a useful tool to explore a particular context, and have special value because most of these games tend to be team-oriented. In Harbin the internship takes place at the same time as the courses; consequently, students have a complete project in which they use IT to carry out the work. An advantage, according to the students, is that they can go deeper into detail by asking the teachers for information, but sometimes this becomes too closely detailed to form a proper overall view. In Bordeaux, the internship has a specific period, and the subject in question is sometimes disconnected from the courses, even if it deals with management. This requires more individual initiative and creativity, because the students can feel alone in confronting their problems even though they can ask their teacher. But it is considered a strong advantage that the students are totally immersed in the company.

Results from Class 2009

• Competencies: At the beginning of year 1, nine of these students had competencies in software engineering: operating systems, data structures, databases, IT project management, software quality assurance and some popular development languages such as Java, C++ and .NET. One student had specialized in automatic control, and another had knowledge linked to mechanical engineering and production management. Thus the competencies were much more diverse than in the previous cohort. Before year 2, most of the students (9) had improved their programming skills as software engineers. By then they had more experience in programming and project management, and knowledge of advanced databases, algorithms, software architecture and so on. They had also improved their communication skills, with a good level of French and fluent English. The other two students had acquired knowledge of programming using Java, database design, and IT project management. At the end of semester 4, all the students had gained knowledge in enterprise computing and engineering, including production management, enterprise modelling, and quality management.
• Difficulties and facilities: As for the previous cohort, the first difficulty cited is language. The second arises from the fact that students are not au fait with the production environment, so concepts relative to an enterprise are difficult to comprehend: the concept of interoperability, for example, is understood, but the finer details are not, and while the model-driven architecture and enterprise modelling methods are readily learned, the lack of experience makes their use far from obvious. All the students complained about the schedule of the course, with too many courses planned in too short a period and too many different types of knowledge to be learned in different areas/domains.
• Double competency statement: Despite these difficulties, students agreed that they had acquired a double competency. Not only did they know how to program, but they also understood how an enterprise works using IT technologies. The background of one domain was felt to be a great help when working in the other. The dual competency provides more choices for a future career. Even if it is not easy to re-orient one's mind from the software view to the enterprise view, they were confident that they would be able to bring these views together in the future.
• Teaching specificities: In Bordeaux, there are more games-based training exercises, as opposed to the programming practicals in Harbin. As regards the internship, they did not find major differences between Harbin and Bordeaux. In the first year, the goal was to develop software systems, and students worked directly from the analysis, then designed the system and wrote the code. In the second year, students needed to read materials about the production system to gain a holistic understanding of the subject.

Remarks

The dual master's degree programme represented a real challenge for all the responding students, because of the demanding multidisciplinary and cross-domain training over the two years. The students also became very aware of the interests and needs of companies, which are very close to the topics and subjects dealt with in the programme.

Needs Expressed During the Internship (So-Called M2)

This section provides an analysis of the year-2 internships of class 2008, because the internships of class 2009 were still in progress. As mentioned earlier, in 2008 five students did their internship in private companies. One internship topic concerned pure management issues, while all the others combined IT problems with the use of enterprise modelling methods to analyze and model enterprise systems.

Topic Relative to Management Only

A well-known large company in the construction materials domain had proposed a study on pricing strategy, because the market is becoming more and more competitive and the pricing strategy must be adjusted to take into account product turnover, life-cycle phase and other dynamic variables. This study focused on the analysis and comparison of the commonly used pricing strategies: premium pricing, value pricing, cost-plus pricing, competitive pricing and penetration pricing. The internship project showed that the value strategy was the best strategy for new products and high-end products, but that for all other products the competitive strategy was the best one. This conclusion has enabled the company to improve customer loyalty, keep market share and make the expected profit (Jia, 2008).

Internship Studies Involving Combined Topics

One study analysed the possibilities of applying data mining techniques to cross-selling in order to increase the overall sales of a company specialising in construction materials. The study elaborated a process methodology based on data mining software, and described the way to build mining models for cross-selling analysis. The student described how to write associative prediction queries, integrate these queries into a web cross-selling application, and then discussed the architecture of a web application with data mining predictions (Li, 2008).
Another subject proposed by the same company concerned the exchange of data between the servers of the 120 commercial agencies which constitute the company. The objectives of the company were to (i) find a solution which could monitor the servers, (ii) analyse their performance, and (iii) predict potential problems and inform the system administrator in advance. Furthermore, the company needed software to help the administrator in his daily work: verifying the backup machine, the working situation of the servers, and other tasks. The mission of the student was to choose an appropriate solution to satisfy the company's needs and then design and implement the architecture on the existing system (Yang, 2008).
A third combined-topics study related to a small company specializing in internet search engines. The main challenge for the company was to offer internet users relevant information about an enterprise, a product or a service. For this task the search engine limits the referencing to the web site of the enterprise in order to have consistent and precise information; blogs, personal pages and forum pages are avoided. The student participated fully in the whole project, from the requirement analysis phase to the development phase, including learning and using specific languages, technologies, etc. The student was both a contributor to the project and its project manager during the development phase (Wang, 2008).
The fourth internship was carried out in the Oracle project pole of a large French multinational company. The student worked on ERP technology, taking into account the requirements of a specific customer, a public-sector administration for which the company maintains the Oracle IT system. The student worked on the purchase order process from the demand to the invoice payments, including the orders and receipts. This allowed him to study the complete acquisition workflow and to introduce some new concepts of finance and accounting, and new concepts and ontology in the financial area (Fausser, 2008).
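The cross-selling study described above rests on association rules mined from transaction data: "customers who bought A also bought B", qualified by support and confidence. As an illustration only (the internship used a commercial data mining suite; the item names, baskets and thresholds below are invented), a minimal pairwise rule miner can be sketched as follows:

```python
from itertools import combinations
from collections import Counter

def pair_rules(transactions, min_support=0.3, min_confidence=0.6):
    """Mine simple A -> B association rules from market-basket data.

    support(A, B)    = fraction of transactions containing both items
    confidence(A->B) = support(A, B) / support(A)
    """
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for basket in transactions:
        items = set(basket)
        item_counts.update(items)
        pair_counts.update(combinations(sorted(items), 2))

    rules = []
    for (a, b), count in pair_counts.items():
        support = count / n
        if support < min_support:
            continue  # prune infrequent pairs before testing either direction
        for ante, cons in ((a, b), (b, a)):
            confidence = count / item_counts[ante]
            if confidence >= min_confidence:
                rules.append((ante, cons, support, confidence))
    return rules

# Invented example baskets for a building-materials retailer.
baskets = [
    {"cement", "sand", "gravel"},
    {"cement", "sand"},
    {"cement", "rebar"},
    {"sand", "gravel"},
    {"cement", "sand", "rebar"},
]
for ante, cons, sup, conf in pair_rules(baskets):
    print(f"{ante} -> {cons}  support={sup:.2f} confidence={conf:.2f}")
```

A rule such as "rebar -> cement" would then drive the web application's cross-selling suggestions; the production system additionally handled query generation against the mining models, which this sketch omits.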

Analysis of the Topics by the Private Companies Involved

Information on the projects carried out by the students during their internships in private companies was collected through the reports they submitted at the end of year 2 of the programme. Three of the five subjects were proposed by one enterprise. This indicates the difficulty of finding industrial internships in France, mainly due to the language barrier: French companies find the double competency of the students very interesting, but most are not prepared to integrate students who do not speak French. A second important conclusion is that most of the topics (four out of five) required the double competency, and in those cases the students successfully applied IT techniques to improve the performance of those companies.


CONCLUSION

This chapter presents an international collaboration between the University of Bordeaux 1 and Harbin Institute of Technology. This collaboration is characterised by the fact that it is based on:

• a long-term strategy of both institutions (UB1 and HIT) to develop sustainable cooperation in the domain of interoperability, which is considered a priority subject on both sides;
• two complementary sets of competencies: enterprise modelling, interoperability and production system sciences at UB1, and computer sciences and software engineering at HIT, which complement each other both in R&D and in this education programme;
• the combination of research activities and education/training, which allows the programme to benefit from the latest advances in research on enterprise software application interoperability (such as the European Union R&D projects Athena, Interop, and others).

This collaboration model has considerable potential to be duplicated and extended to other universities and other countries. The formal UML model used to represent the joint master's programme curriculum allows explicit identification of all elementary lectures and of the possible relationships between lectures and modules. We believe that this formal modelling approach can help students to better understand the training curriculum and lead to an improved quality of education. Furthermore, it also allows the teachers involved to check the overall consistency of the curriculum, to better coordinate and organise their lectures, to avoid unnecessary redundancies and overlapping coverage, to introduce possible consolidations, and to bring out synergies and complementarities.
The feedback on the students' experience of the dual master's degree programme shows that it responds to real business needs and concerns. Even with the language barrier, more companies are becoming interested in students with the double competency. For the students, even though the programme is difficult to assimilate during the two years because of its breadth and density, they are satisfied at the end because they have come to understand the crucial impact of IT on enterprise performance.
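The consistency checks that a formal curriculum model enables can be illustrated with a toy sketch. The actual model described by Chen et al. (2010) is expressed in UML; the class structure, module names, topics and helper functions below are invented purely to show the idea of detecting overlapping coverage and unsatisfied prerequisites:

```python
from dataclasses import dataclass, field

@dataclass
class Lecture:
    title: str
    topics: frozenset            # elementary topics the lecture covers
    prerequisites: tuple = ()    # titles of lectures assumed already taken

@dataclass
class Module:
    name: str
    lectures: list = field(default_factory=list)

def overlapping_topics(modules):
    """Report topics taught by more than one lecture across all modules."""
    seen = {}
    for module in modules:
        for lecture in module.lectures:
            for topic in lecture.topics:
                seen.setdefault(topic, []).append((module.name, lecture.title))
    return {t: locs for t, locs in seen.items() if len(locs) > 1}

def missing_prerequisites(modules):
    """Report prerequisites that no lecture in the curriculum provides."""
    titles = {lec.title for m in modules for lec in m.lectures}
    return {lec.title: [p for p in lec.prerequisites if p not in titles]
            for m in modules for lec in m.lectures
            if any(p not in titles for p in lec.prerequisites)}

# Invented sample curriculum fragment.
em = Module("Enterprise Modelling", [
    Lecture("GRAI method", frozenset({"grai grids", "decision modelling"})),
    Lecture("Production management", frozenset({"mrp", "decision modelling"}),
            prerequisites=("Databases",)),
])
se = Module("Software Engineering", [Lecture("Java", frozenset({"oop"}))])

print(overlapping_topics([em, se]))      # "decision modelling" is taught twice
print(missing_prerequisites([em, se]))   # "Databases" is assumed but never taught
```

Checks of this kind are what the text above calls avoiding redundancies and verifying overall consistency; in the real programme they operate on the UML curriculum model rather than on ad hoc Python objects.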

ACKNOWLEDGMENT

The authors thank Zhiying Tu and Zhenzhen Jia (University of Bordeaux 1) for their contribution to this chapter.

REFERENCES

Alix, T., Jia, Z., & Chen, D. (2009). Return on experience of a joint master programme on enterprise software and production systems. In B. Wu & J.-P. Bourrières (Eds.), Educate adaptive talents for IT applications in enterprises and interoperability. Proceedings of 5th China-Europe International Symposium on Software Industry Oriented Education (pp. 27-36). Talence, France: University of Bordeaux. Baan, A. Z. (2003, March). IDEAS roadmap for e-business interoperability: Interoperability development for enterprise application and software – roadmaps (IST-2001-37368). Paper presented at the e-Government Interoperability Workshop, Brussels, Belgium. Centre International de la Pédagogie d'Entreprise (CIPE). (2008b). Jeu de la GPAO (gestion de la production assistée par ordinateur). Retrieved October 30, 2010, from http://www.cipe.fr

Centre International de la Pédagogie d'Entreprise (CIPE). (2008a). Manufacturing resources planning software package. Retrieved October 30, 2010, from http://www.cipe.fr Chen, D., & Vallespir, B. (2009). MRPII learning project based on a unified common case-study: Simulation, reengineering and computerization. In B. Wu & J.-P. Bourrières (Eds.), Educate adaptive talents for IT applications in enterprises and interoperability. Proceedings of 5th China-Europe International Symposium on Software Industry Oriented Education (pp. 233-240). Bordeaux, France: University of Bordeaux. Chen, D., Vallespir, B., & Bourrières, J.-P. (2007). Research and education in software engineering and production systems: A double complementary perspective. In B. Wu, B. MacNamee, X. Xu, & W. Guo (Eds.), Proceedings of the 3rd China-Europe International Symposium on Software Industry-Oriented Education (pp. 145-150). Dublin, Ireland: Blackhall. Chen, D., Vallespir, B., Tu, Z., & Bourrières, J. P. (2010, May). Towards a formal model of UB1-HIT joint master curriculum. Paper presented at the 6th China-Europe International Symposium on Software Industry Oriented Education, Xi'an, China. European Commission. (2003a). ATHENA Advanced Technologies for Interoperability of Heterogeneous Enterprise Networks and their Applications: Integrated project proposal. European 6th Framework Programme for Research & Development (FP6-2002-IST-1). Brussels, Belgium: European Commission. European Commission. (2003b). INTEROP - interoperability research for networked enterprises, applications and software, network of excellence, proposal part B. European 6th Framework Programme for Research & Development. Brussels, Belgium: European Commission.


Fausser, J. (2008). Maintenance of Oracle e-business suite V11 (Internal report of internship of M2). Talence, France: University of Bordeaux. International Organization for Standardization (ISO DIS 16100). (2000). Manufacturing software capability profiling - part 1: Framework for interoperability (ISO TC/184/SC5, ICS 25.040.01). Jia, Z. (2008). Pricing strategy for Point P based on analysis and comparison of commonly used pricing strategies (Internal report of internship of M2). Talence, France: University of Bordeaux. Li, Z. (2008). Data mining applied in cross-selling (Internal report of internship of M2). Talence, France: University of Bordeaux. Vallespir, B., & Doumeingts, G. (2006). The GRAI (graphs of interlinked results and activities) method. Talence, France: University of Bordeaux 1, Interop Network of Excellence project tutorial. Wang, Y. (2008). Improvement and extension of professional search engine (Internal report of internship of M2). Talence, France: University of Bordeaux. Yang, W. (2008). Monitoring work situation of servers in Saint-Gobain Point P (Internal report of internship of M2). Talence, France: University of Bordeaux.

KEY TERMS AND DEFINITIONS

Dual Master's Degree: A master's degree programme involving at least two universities/institutions from two different countries, allowing students to obtain two degrees, one from each institution.
Enterprise Modelling: Representing the enterprise in terms of its structure, organisation and operations according to various points of view (technical, economic, social and human).
Interoperability: A property referring to the ability of diverse systems and organizations to work together (inter-operate). The term is often used in a technical systems engineering sense, or alternatively in a broad sense that takes into account the social, political and organizational factors that affect system performance.
Production Management: A set of techniques for planning, implementing and controlling industrial production processes to ensure smooth and efficient operation. Production management techniques are used in both manufacturing and service industries.
Software Engineering: A profession dedicated to designing, implementing, and modifying software so that it is of higher quality, more affordable, more maintainable and faster to build.

This work was previously published in Software Industry-Oriented Education Practices and Curriculum Development: Experiences and Lessons, edited by Matthew Hussey, Bing Wu and Xiaofei Xu, pp. 57-81, copyright 2011 by Engineering Science Reference (an imprint of IGI Global).


Section 5

Organizational and Social Implications

This section includes a wide range of research pertaining to the social and behavioral impact of Industrial Engineering around the world. Chapters introducing this section critically analyze and discuss trends in Industrial Engineering, such as participation, attitudes, and organizational change. Additional chapters included in this section look at process innovation and group decision making. Also investigating a concern within the field of Industrial Engineering is research which discusses the effect of customer power on Industrial Engineering. With 13 chapters, the discussions presented in this section offer research into the integration of global Industrial Engineering as well as implementation of ethical and workflow considerations for all organizations.


Chapter 56

Process Innovation with Ambient Intelligence (AmI) Technologies in Manufacturing SMEs: Absorptive Capacity Limitations

Kathryn J. Hayes
University of Western Sydney, Australia

Ross Chapman
Deakin University, Melbourne, Australia

ABSTRACT

This chapter considers the potential for absorptive capacity limitations to prevent SME manufacturers from benefiting from the implementation of Ambient Intelligence (AmI) technologies. The chapter also examines the role of intermediary organisations in alleviating these absorptive capacity constraints. To establish the context of the research, a review is provided of the role of SMEs in the Australian manufacturing industry, together with the impacts of government innovation policy and absorptive capacity constraints on SMEs in Australia. Advances in the development of ICT industry standards, and the proliferation of software and support for the Windows/Intel platform, have brought technology to SMEs without the need for bespoke development. The results from the joint European and Australian AmI-4-SME projects suggest that SMEs can successfully use "external research sub-units" in the form of industry networks, research organisations and technology providers to offset internal absorptive capacity limitations.

DOI: 10.4018/978-1-4666-1945-6.ch056

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

Through case study research, this chapter discusses some of the challenges Small and Medium Enterprises (SMEs) in the manufacturing sector face in identifying and adopting Ambient Intelligence (AmI) technologies to improve their operations. Ambient Intelligence technologies are also known as pervasive computing or ubiquitous computing, and we include the descriptions of these terms when we refer to AmI technologies. Our study includes case studies of three Australian SMEs and a comparison with similar application requirements in a German SME manufacturer. The outcomes of the study are likely to be applicable to small firms in many nations.
The 1980s and 90s saw the operations of many large manufacturers revolutionized by the introduction of process and technological innovations (Gunasekaran & Yusuf, 2002). While there have been uneven adoption rates in smaller businesses and across different nations (Chong & Pervan, 2007; Oyelaran-Oyeyinka & Lal, 2006), it is clear that technological innovations such as Electronic Data Interchange, Business Process Re-engineering, Enterprise Resource Planning and robotic automation, amongst others, have played key roles in increasing manufacturing productivity. At the beginning of the twenty-first century this transformation continues. Ambient Intelligence (AmI) technologies are being positioned as the next performance- and productivity-enhancing purchase for manufacturers, and a potential means for manufacturers in developed nations to counter perceived threats from lower labour-cost countries (Kuehnle, 2007). Thus, the key objectives of this chapter are to consider potential applications of AmI technologies in Australian SME manufacturers, and to discuss the opportunities and shared challenges faced by such firms in adopting these technologies.
In doing this, we will compare different levels of absorptive capacity and technological readiness in Australian firms, seeking possible reasons for similarities and differences in their comparative technology adoption processes. The chapter also examines the role of intermediary organisations in alleviating these absorptive capacity constraints. Our overarching research question is: "Can external intermediaries overcome absorptive capacity limitations in SMEs seeking process innovation through the application of AmI technologies?" In order to understand the issues surrounding this problem, a brief overview of ICT (Information and Communication Technologies) adoption in manufacturing and an explanation of Ambient Intelligence (AmI) technologies are provided in the following section. Following that, we examine the role of SMEs in the Australian manufacturing industry, together with the impacts of government innovation policy and absorptive capacity constraints on SMEs in Australia.

BACKGROUND

ICT Adoption for Business Performance Improvement

Brown and Bessant (2003) described the global manufacturing environment developing in this new century as an increasingly competitive landscape, characterised by on-going demands for improved flexibility, delivery speed and innovation. A frequently occurring element in manufacturers' responses to these pressures is the implementation of increasingly sophisticated ICTs. The benefits of incorporating ICTs for business responsiveness have been identified as: more effective and more efficient information flows; assistance with value-adding improvements to current processes; greater access to efficiency-enhancing innovations throughout the value chain (Australian Productivity Commission, 2004); and the ability to access world markets through e-commerce (Kinder, 2002). ICT adoption has been considered worth the risk, given the competitive pressures placed on business to keep pace with technology. For example, in Australia the uptake of ICTs increased dramatically toward the latter part of the 1990s and into the 21st century. Reports show that in 1993-94, 50 per cent of firms used computers, with 30 per cent having internet access; by 2000-01 these figures had increased to 85 per cent and 70 per cent respectively (Australian Productivity Commission, 2004). Recent figures (Australian Bureau of Statistics, 2009) reveal that almost all Australian SMEs use ICTs, and 96% of them access the internet through a broadband connection.
One of the latest developments in the application of ICTs to business improvement is that of Ambient Intelligence (AmI) technologies. The objective of AmI is to broaden and improve the interaction between human beings and digital technology through the use of ubiquitous computing devices. By using a wider range of simplified interaction devices, ensuring more effective communication between devices (particularly via wireless networks) and embedding technology into the work environment, AmI provides increased efficiency, more intuitive interaction with technology (Campos, Pina, & Neves-Silva, 2006) and improved value and productivity (Maurtua, Perez, Susperregi, Tubio, & Ibarguren, 2006).

Ambient Intelligence (AmI) Technologies

Existing literature (Kopacsi, Kovacs, Anufriev, & Michelini, 2007; Li, Feng, Zhou, & Shi, 2009; Maurtua et al., 2006; Vasilakos, 2008; Weber, 2003) points to the co-existence of three features in any AmI technology: ubiquitous computing power, ubiquitous communication, and adaptive, human-centric interfaces. Regardless of arguments about terminology and definitions (the terms "pervasive computing" and "ubiquitous computing" are in common use in the US, while "ambient intelligence" is favoured in the EU), these technologies are already commonplace. The beep signalling the automatic deduction of a road toll from your account as your car passes under a toll gate is one aspect of an AmI technology known as Radio-Frequency Identification (RFID). RFID technology is having an impact in many industries, some of which are not normally associated with high levels of ICT adoption. For example, during 2006, in NSW alone (one of the seven states and territories within mainland Australia), more than 1.2 million head of cattle were automatically tracked from farm to saleyard to abattoir as their RFID ear tags passed through RFID sensor gates (NSW Farmers Association, 2007).
In addition to increasing process speed and efficiency, AmI technologies have the potential to track employee and customer activity. While concerns about the impact of technology upon power relations in the workplace are not new (Zuboff, 1988), the characteristics of AmI technologies present new challenges to worker privacy, informed consent and dignity. AmI technologies may, intentionally or not, dramatically increase employee surveillance and monitor consumer activity over the entire product life cycle. This potential, and the very nature of 'ubiquitous computing', raises important ethical issues. Proposals to use RFID tags to track sufferers of Alzheimer's disease (Caprio, 2005) and children provide examples of the ethical dilemmas AmI technologies can present. While these issues are beyond the scope of this chapter, we suggest Cochran et al. (2007) for a review of the ethical challenges associated with RFID.
Social factors associated with the introduction and implementation of AmI technologies may be exacerbated in small and medium businesses. In addition to concerns shared with corporate workers, such as disquiet about their personal data potentially being sold to marketing groups and anxiety about the security of the information gathered, members of small businesses are particularly exposed to AmI's ability to 'break the boundaries of work and home through their pervasiveness and "always on" nature' (Ellis, 2004, p. 8).
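The cattle-tracking example above illustrates the basic AmI pattern: passive sensors emit a stream of raw reads that must be folded into a meaningful business record. As a rough sketch only (the tag IDs, gate names and event format below are invented, not taken from the NSW scheme), reader-gate events might be turned into per-animal movement histories like this:

```python
from collections import defaultdict
from datetime import datetime

def movement_history(events):
    """Fold (timestamp, gate, tag_id) RFID reads into per-tag itineraries.

    Consecutive duplicate reads at the same gate (an animal lingering in
    the antenna field) are collapsed into a single movement record.
    """
    history = defaultdict(list)
    for ts, gate, tag in sorted(events):
        trail = history[tag]
        if not trail or trail[-1][1] != gate:
            trail.append((ts, gate))
    return dict(history)

# Invented reads: tag 982-0001 moves farm -> saleyard -> abattoir.
reads = [
    (datetime(2006, 5, 1, 8, 0), "farm-gate", "982-0001"),
    (datetime(2006, 5, 1, 8, 0, 30), "farm-gate", "982-0001"),  # duplicate read
    (datetime(2006, 5, 2, 9, 15), "saleyard-in", "982-0001"),
    (datetime(2006, 5, 4, 6, 40), "abattoir-in", "982-0001"),
    (datetime(2006, 5, 2, 9, 16), "saleyard-in", "982-0002"),
]
trail = movement_history(reads)["982-0001"]
print([gate for _, gate in trail])  # prints ['farm-gate', 'saleyard-in', 'abattoir-in']
```

The same fold-duplicates-then-record pattern applies whether the "tag" is an ear tag, a pallet label or an employee badge, which is precisely why the surveillance concerns discussed in this section arise.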
While some profit-maximising small business owners welcomed this blurring of work and home boundaries, others did not, preferring to keep the work and family spheres separate. Ellis cautions that, in order to overcome existing negative preconceptions of AmI technologies, users must feel


they control the devices and the data they produce, and be able to override them and cope with system failures. In particular, AmI needs to be presented as "smarter" than existing ICTs and able to correct some of the problems associated with traditional forms of ICT support. In short, Ellis (2004, p. 9) asserts, "AmI needs to be associated with undoing some of the more problematic aspects of existing ICTs, to be accepted and not resisted as a more invasive, insidious and controlling form of what already exists."
Much of the promise of Ambient Intelligence (AmI) technologies rests upon connecting increasingly sophisticated and powerful sensors with existing computing facilities. McCullough (2001) identified the need to expand our thinking beyond the notion of filling environments with physical objects when considering Ambient Intelligence technologies. There is no longer such a thing as "empty space" when sensors and processing power combine to produce an environment that is "aware" of the locations, actions and information needs of humans. Clearly, the extent of existing Information and Communications Technology (ICT) infrastructure in an organisation will affect AmI technology implementations, providing either a "clean slate" from which to start or the opportunity to integrate new AmI capabilities with existing ICT systems and processes. In much the same way as the advent of mobile phones in China and India provided people unable to afford a landline with access to telephone services, AmI technologies may prove a way for SME organisations to "leap frog" a stage of ICT implementation and move directly to wireless and similar AmI technologies.
Many other applications of AmI technologies are appearing as technologists extend the concept into areas such as “wearable technology” (clothing that incorporates sensors and interface devices), more intuitive home space designs, shopping assistance and the creation of seamless interfaces between work, home and leisure activities. While many of these applications currently seem unrelated to improving business productivity, it is clear that the applications for business can only grow as the technologies become more sophisticated and less expensive. As Rao and Zimmerman (2005, p. 3) state, “there is a gap in the scholarly discussion addressing the business issues related to it, and the role of pervasive computing in driving business innovation”. It is in this context that the following case studies of four small-to-medium enterprise (SME) manufacturers – three Australian and one German firm – have been undertaken. In each firm, critical process analysis was carried out to examine possible process weaknesses and existing ICT systems, and recommendations were made concerning a selection of AmI technologies with the potential to boost business performance.

AMBIENT INTELLIGENCE TECHNOLOGY IN MANUFACTURING

This section considers the applicability of several emerging AmI technologies to three SME manufacturers in New South Wales, Australia, and compares the situation within these SMEs with that of one German SME manufacturer undertaking a similar technological adoption. In doing this, the section also addresses questions about the preparedness of SMEs, particularly concerning their absorptive capacity limitations and how these may be overcome. Later sections also consider the potential impact of Ambient Technologies on the employees of the organisations studied.

AmI technology is much more than RFID inventory control systems. Wireless, multi-modal services and speech recognition systems have the potential to increase manufacturing flexibility by supporting dynamic reconfiguration of process and assembly lines, and by improving human-machine interfaces to reduce process times (Maurtua et al., 2006). Also, maintenance and distribution processes may be improved by linking common mobile wireless devices, such as mobile phones, Personal Digital Assistants (PDAs) or even pagers
to production alert systems (Stokic, Kirchhoff, & Sundmaeker, 2006).
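The linkage described above – production equipment raising alerts that are pushed to workers’ mobile devices – can be illustrated with a minimal, self-contained sketch. All class, machine and threshold names below are hypothetical, chosen only for illustration; a real deployment would route messages through a wireless messaging layer (for example, an SMS gateway or a publish/subscribe broker) rather than in-memory callbacks.

```python
# Minimal in-memory sketch of a production alert system that notifies
# subscribed mobile devices (phones, PDAs, pagers) when a sensor reading
# crosses a threshold. Names are illustrative, not from any real system.

class AlertBroker:
    """Routes alert messages from production sensors to subscribed devices."""

    def __init__(self):
        self.subscribers = []  # callables that deliver a message to a device

    def subscribe(self, deliver):
        self.subscribers.append(deliver)

    def publish(self, message):
        for deliver in self.subscribers:
            deliver(message)


def monitor_reading(broker, machine_id, temperature_c, limit_c=80.0):
    """Publish an alert if a (hypothetical) temperature limit is exceeded."""
    if temperature_c > limit_c:
        broker.publish(
            f"ALERT {machine_id}: temperature {temperature_c} C exceeds {limit_c} C"
        )


# Example: two devices subscribe; one over-limit reading alerts both.
received = []
broker = AlertBroker()
broker.subscribe(lambda msg: received.append(("foreman-phone", msg)))
broker.subscribe(lambda msg: received.append(("maintenance-pager", msg)))

monitor_reading(broker, "press-01", 72.5)  # within limits, no alert sent
monitor_reading(broker, "press-01", 85.0)  # over limit, both devices notified
```

The point of the sketch is architectural: the sensor-side code does not need to know which devices are listening, which is what makes it feasible to add or reconfigure mobile endpoints without touching the production line’s monitoring logic.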

Small and Medium Manufacturers in Australia

Organisations with between 20 and 199 workers employ 56% of Australia’s workforce (Wiesner, McDonald, & Banham, 2007). The Australian Bureau of Statistics (ABS) defines a small business as one employing fewer than 20 people, and a medium enterprise as one employing between 20 and 200 people (ABS, 2001). The most recent ABS figures available (2007) for Australia indicate that there are around 47,000 manufacturing firms employing between 1 and 20 people, around 10,000 employing between 20 and 200 people, and only 873 employing over 200 people. In turnover terms, around 29,000 manufacturing firms reported annual turnover between $500,000 and $10 million, while only 3,300 firms reported turnover of $10 million or above. It is clear that the bulk of manufacturing in Australia occurs in small-to-medium firms.

While SME firms employ the majority of manufacturing workers, their expenditure on R&D notably lags behind that of large manufacturers. Within the manufacturing industry, companies with more than 200 employees were responsible for 73% of total industry R&D expenditure, with only 27% contributed by the SME sector (ABS, 2007). However, in their exploration of the cost and impact of logistics technologies in US manufacturing firms, Germain, Droge and Daugherty (1994) found that for manufacturing managers wanting to innovate with logistics technology, organisational size provides an advantage that transcends both the cost and nature of the technology. These authors confirmed the view established in many previous studies: that organisational size is positively correlated with technology adoption. This link between manufacturing organisation size and increased ability to extract benefit from technological innovations may provide some explanation for the fact that while Australia’s manufacturing
output has quadrupled since the mid-1950s, the Australian Government Productivity Commission (2003) states that, overall, it has not grown at the same rate as the service sector. The Productivity Commission also describes Australia’s manufacturing sector as having “missed out on the productivity surge” of the mid-1990s, while noting signs of improved manufacturing productivity in 2002 and 2003. The widespread availability of off-the-shelf ICT systems has probably meant that a great many more SMEs are adopting ICTs today than in 1994; however, the limited resources of many such firms (both financial and human) almost certainly mean reduced awareness of, and limited capacity to exploit, newer technologies commonly appearing in larger manufacturers. Given the significance of SMEs in Australian employment and the perceived need to increase manufacturing productivity, examination of the potential improvements available through the systemic application of AmI technology to SME manufacturers forms an important topic for research and government policy.
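The scale of the SME sector can be made concrete with a simple arithmetic check on the approximate ABS (2007) firm counts quoted in this section:

```python
# Share of Australian manufacturing firms in each size band, using the
# approximate ABS (2007) counts quoted above.
small = 47_000   # 1-20 employees
medium = 10_000  # 20-200 employees
large = 873      # over 200 employees

total = small + medium + large
sme_share = (small + medium) / total

print(f"SME share of manufacturing firms: {sme_share:.1%}")  # roughly 98.5%
```

By firm count, SMEs make up roughly 98.5% of Australian manufacturers, which underlines the contrast with their 27% share of industry R&D expenditure.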

Absorptive Capacity

This chapter applies the concept of absorptive capacity to manufacturing SMEs. We argue that SMEs can benefit from AmI technologies by using specialised intermediary organisations to overcome the “absorptive capacity” limitations evident in many SME organisations. Cohen and Levinthal (1990) proposed that internal Research & Development activities serve two purposes: to generate innovations, and to provide the ability to absorb relevant knowledge appearing in the external environment. The absorptive capacity of a firm comprises these two categories of activity. Their foundational paper conceptualised absorptive capacity in the context of large U.S. manufacturers, as evidenced by their survey of identifiable “R&D lab managers” (Cohen & Levinthal, 1990, p. 142) and their discussion of “communication inefficiencies” between business units. But