Workshop on Evolutionary Computing and Explainable AI 2023
The workshop will be held 2-3:50pm on 16 July 2023, at the GECCO conference in Lisbon, Portugal, 15-19 July 2023.
- Welcome & Introduction by the workshop organizers (5 min)
- Invited Talk: Mengjie Zhang, "Genetic Programming for Explainable Artificial Intelligence" (30 min + 10 min Q+A)
- "Explaining a Staff Rostering Genetic Algorithm using Sensitivity Analysis and Trajectory Analysis", Martin Fyvie, John A.W. McCall, Lee A. Christie, Alexander E.I. Brownlee (15 min + 5 min Q+A)
- "From Fitness Landscapes to Explainable AI and Back", Sarah L. Thomson, Jason Adair, Alexander E.I. Brownlee, Daan van den Berg (15 min + 5 min Q+A)
- "Towards Principled Synthetic Benchmarks for Explainable Rule Set Learning Algorithms", David Pätzel, Michael Heider, Jörg Hähner (15 min + 5 min Q+A)
- Concluding remarks (5 min)
Call for papers
Explainable artificial intelligence (XAI) has gained significant traction in the machine learning community in recent years, driven by the need to generate "explanations" of how these typically black-box tools operate that are accessible to a wide range of users. Nature-inspired optimisation techniques are also often black box in nature, and the attention of the explainability community has begun to turn to explaining their operation too. Many of the processes that drive nature-inspired optimisers are stochastic and complex, presenting a barrier to understanding how solutions to a given optimisation problem have been generated.
Explainable optimisation can address some of the questions that arise during the use of an optimiser: Is the system biased? Has the problem been formulated correctly? Is the solution trustworthy and fair? By providing mechanisms that enable a decision maker to interrogate an optimiser and answer these questions, trust in the system is built. On the other hand, many approaches to XAI in machine learning are based on search algorithms that interrogate or refine the model to be explained, and so have the potential to draw on the expertise of the EC community. Furthermore, many of the broader questions (such as what kinds of explanation are most appealing or useful to end users) are faced by XAI researchers in general.
From an application perspective, XAI may be crucial in answering exactly these questions. The goal of XAI and related research is to develop methods to interrogate AI processes with this aim, supporting decision makers while also building trust in AI decision support through more readily understandable explanations.
Following the success of the first workshop hosted at GECCO 2022, we seek contributions on a range of topics related to this theme, including but not limited to:
- Interpretability vs explainability in EC and their quantification
- Landscape analysis and XAI
- Contributions of EC to XAI in general
- Use of EC to generate explainable/interpretable models
- XAI in real-world applications of EC
- Possible interplay between XAI and EC theory
- Applications of existing XAI methods to EC
- Novel XAI methods for EC
- Legal and ethical considerations
- Case studies / applications of EC & XAI technologies
Papers will be double-blind reviewed by members of our technical programme committee.
Authors can submit short contributions, including position papers, of up to 4 pages, and regular contributions of up to 8 pages, in each case following the GECCO paper formatting guidelines. Software demonstrations are also welcome.
- Submission opening: 13 February 2023
- Submission deadline: 14 April 2023
- Notification: 3 May 2023
- Camera-ready: 10 May 2023
- Presenter mandatory registration: 10 May 2023
- Workshop: 16 July 2023
Workshop papers must be submitted using the GECCO submission system. After logging in, authors need to select the "Workshop Paper" submission form and, in the form, select the workshop they are submitting to. To see a sample of the "Workshop Paper" submission form, go to GECCO's submission system and select "Sample Submission Forms". Submitted papers must not exceed 8 pages (excluding references) and must comply with the GECCO 2023 Papers Submission Instructions. It is recommended to use the same templates as papers submitted to the main tracks.

Each paper submitted to this workshop will be rigorously reviewed in a double-blind process: authors should not know who the reviewers of their work are, and reviewers should not know who the authors are. To this end, submitted papers must be ANONYMIZED. This means that they should NOT contain any element that may reveal the identity of their authors, including author names, affiliations, and acknowledgments. Moreover, any references to the authors' own work should be made as if the work belonged to someone else.

All accepted papers will be presented at the ECXAI workshop and appear in the GECCO 2023 Conference Companion Proceedings. By submitting a paper, the author(s) agree that, if their paper is accepted, they will:
- Submit a final, revised, camera-ready version to the publisher on or before the camera-ready deadline.
- Register at least one author by the author registration deadline to participate in the conference.
- Provide a pre-recorded version of the talk and be present during its online transmission (which will occur during the days of the conference) to answer questions from the (online) audience.
As a published ACM author, you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects.
Technical Programme Committee
- Mauro Castelli
- Matthew Craven
- Alberto Franzin
- Julie Jacques
- Ed Keedwell
- Benjamin Lacroix
- Eric Medvet
- Fabrício Olivetti de França
- Roman Šenkeřík
- Ryan Urbanowicz
- Marco Virgolin
- Sean Walton
- Ciprian Zavoianu
Organisers (in alphabetical order)
Jaume Bacardit is Reader in Machine Learning at Newcastle University in the UK. He received a BEng and MEng in Computer Engineering and a PhD in Computer Science from Ramon Llull University, Spain, in 1998, 2000 and 2004, respectively. Bacardit's research interests include the development of machine learning methods for large-scale problems, the design of techniques to extract knowledge from and improve the interpretability of machine learning algorithms, currently known as Explainable AI, and the application of these methods to a broad range of problems, mostly in biomedical domains. He leads/has led the data analytics efforts of several large interdisciplinary consortia: D-BOARD (EU FP7, €6M, focusing on biomarker identification), APPROACH (EU-IMI, €15M, focusing on disease phenotype identification) and PORTABOLOMICS (UK EPSRC, £4.3M, focusing on synthetic biology). Within GECCO he has organised several workshops (IWLCS 2007-2010, ECBDL'14, ECXAI 2022), been co-chair of the EML track in 2009, 2013, 2014, 2020 and 2021, and Workshops co-chair in 2010 and 2011. He has 90+ peer-reviewed publications that have attracted 5600+ citations and an h-index of 35 (Google Scholar).
Alexander (Sandy) Brownlee is a Senior Lecturer in the Division of Computing Science and Mathematics at the University of Stirling, where he leads the Data Science & Intelligent Systems research group. His main topics of interest are in search-based optimisation methods and machine learning, with a focus on decision support tools, and applications in civil engineering, transportation and software engineering. He has published over 70 peer-reviewed papers on these topics. He has worked with several leading businesses including BT, KLM, and IES on industrial applications of optimisation and machine learning. He serves as a reviewer for several journals and conferences in evolutionary computation, civil engineering and transportation, and is currently an Editorial Board member for the journal Complex And Intelligent Systems. He has been an organiser of several workshops and tutorials at GECCO, CEC and PPSN on genetic improvement of software.
Stefano Cagnoni graduated in Electronic Engineering at the University of Florence, Italy, where he also obtained a PhD in Biomedical Engineering and was a postdoc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004. Recent research grants include: a grant from Regione Emilia-Romagna to support research on industrial applications of Big Data Analysis; the co-management of industry/academy cooperation projects, including the development, with Protec srl, of a new-generation computer vision-based fruit sorter and, with the Italian Railway Network Society (RFI) and Camlin Italy, of an automatic inspection system for train pantographs; and an EU-funded "Marie Curie Initial Training Network" grant for a four-year research training project in Medical Imaging using Bio-Inspired and Soft Computing. He was Editor-in-Chief of the "Journal of Artificial Evolution and Applications" from 2007 to 2010. From 1999 to 2018, he was chair of EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, later a track of the EvoApplications conference. From 2005 to 2020, he co-chaired MedGEC, a workshop on medical applications of evolutionary computation at GECCO. He has co-edited journal special issues dedicated to Evolutionary Computation for Image Analysis and Signal Processing, and co-organised ECXAI 2022. He is a member of the Editorial Boards of the journals "Evolutionary Computation" and "Genetic Programming and Evolvable Machines". He was awarded the "Evostar 2009 Award" in recognition of the most outstanding contribution to Evolutionary Computation.
Giovanni Iacca is an Associate Professor in Computer Engineering at the Department of Information Engineering and Computer Science of the University of Trento, Italy, where he founded the Distributed Intelligence and Optimization Lab (DIOL). Previously, he worked as a postdoctoral researcher in Germany (RWTH Aachen, 2017-2018), Switzerland (University of Lausanne and EPFL, 2013-2016), and The Netherlands (INCAS3, 2012-2016), as well as in industry in the areas of software engineering and industrial automation. He is currently co-PI of the PATHFINDER-CHALLENGE project "SUSTAIN" (2022-2026); previously, he was co-PI of the FET-Open project "PHOENIX" (2015-2019). He has received two best paper awards (EvoApps 2017 and UKCI 2012). His research focuses on computational intelligence, distributed systems, and explainable AI, applied for example to medicine. In these fields, he has co-authored more than 120 peer-reviewed publications. He is actively involved in the organization of tracks and workshops at some of the top conferences in the field of computational intelligence, and he regularly serves as a reviewer for several journals and conference committees.
John McCall is Director of the National Subsea Centre at Robert Gordon University. He has researched in machine learning, search and optimisation for 30 years, making novel contributions to a range of nature-inspired optimisation algorithms and predictive machine learning methods, including EDAs, PSO, ACO and GAs. He has 150+ peer-reviewed publications in books, international journals and conferences. These have received over 2700 citations, with an h-index of 24. Professor McCall has made a strong contribution to the theory and practice of estimation of distribution algorithms (EDAs) and has been active in this area since 2004. He has made many contributions to the use of Markov Random Fields in EDAs and also developed RKEDA for special application in permutation spaces. He has applied EDAs to several real-world problems including cancer chemotherapy, agricultural biocontrol, prostate cancer staging, offshore energy exploration, flow metering and analysis, and transport network design. Professor McCall and his research team at RGU specialise in industrially-applied optimisation and decision support, working with major international companies including BT, BP, EDF, CNOOC and Equinor, as well as a diverse range of SMEs. Major application areas for this research are: vehicle logistics, fleet planning and transport systems modelling; predictive modelling and maintenance in energy systems; and decision support in industrial operations management. John and his team attract direct industrial funding as well as grants from UK and European research funding councils and technology centres. Professor McCall is a founding director of Celerum, which provides consultancy and Optimisation as a Service (OaaS) software, specialising in freight logistics. He is also a founding director and CTO of PlanSea Solutions, which focuses on marine logistics planning.
He has served on a number of industry advisory bodies including the OGTC Academic Panel and the ScotlandIS - SDS Digital Skills Partnership Advisory Board. He chaired the Education Board of The Data Lab Technology Centre from 2014 to 2017. Professor McCall has served as a member of the IEEE Evolutionary Computing Technical Committee and as an Associate Editor of IEEE Computational Intelligence Magazine and the IEEE Systems, Man and Cybernetics Journal, and he is currently an Editorial Board member for the journal Complex And Intelligent Systems. He frequently organises workshops and special sessions at leading international conferences. Most recently, he co-organised the Workshop on Evolutionary Computation and Explainable AI (ECXAI) at GECCO 2022. This was the first major workshop on explainable AI held at a major conference in evolutionary computation and attracted significant interest, with over 100 attendees (more than 10% of conference attendees). Professor McCall is currently co-editing a special issue of TELO on Evolutionary Computation and Explainable AI. In January 2022, Professor McCall attended Dagstuhl seminar 22182, where he gave a presentation, "EDAs, Structure and Epistasis", on concepts of essential and inessential structure discovery in EDAs. He also led an introductory discussion session on explainability between EDA and theory researchers at the seminar.
David Walker is a Lecturer in Computer Science at the University of Plymouth. He obtained a PhD in Computer Science in 2013 for work on visualising solution sets in many-objective optimisation. His research focuses on developing new approaches to solving hard optimisation problems with Evolutionary Algorithms (EAs), as well as identifying ways in which the use of Evolutionary Computation can be expanded within industry, and he has published journal papers in all of these areas. His recent work considers the visualisation of algorithm operation, providing a mechanism for visualising algorithm performance to simplify the selection of EA parameters. While working as a postdoctoral research associate at the University of Exeter his work involved the development of hyper-heuristics and investigating the use of interactive EAs in the water industry. Since joining Plymouth Dr Walker’s research group includes a number of PhD students working on optimisation and machine learning projects. He is active in the EC field, having run an annual workshop on visualisation within EC at GECCO since 2012 in addition to his work as a reviewer for a number of EC-related journals. He is a member of the IEEE Taskforce on Many-objective Optimisation. At the University of Plymouth he is a member of both the Centre for Robotics and Neural Systems (CRNS) and the Centre for Secure Communications and Networking.