Workshop on Evolutionary Computing and Explainable AI 2022

Description

The workshop will span two sessions (10:50–12:40, 13:40–15:30) on the first day of the GECCO conference, Saturday, 9 July 2022.

Programme

Slot 1 (Intro, invited talk & XAI4EC papers)

10:50. Introduction from the organisers and Paper 1: Jaume Bacardit, Alexander E.I. Brownlee, Giovanni Iacca, John McCall, Stefano Cagnoni, David Walker. “The intersection of Evolutionary Computation and Explainable AI” [15 min + 5 min Q&A] [onsite]

11:10. Invited talk: Will N. Browne. “Inherent explainability in AI via EC”. [30 min + 10 min Q&A] [online]

11:50. Paper 2: Manjinder Singh, Alexander Brownlee, David Cairns. “Towards Explainable Metaheuristic: Mining Surrogate Fitness Models for Importance of Variables” [15 min + 5 min Q&A] [online]

12:10. Paper 3: Mathew Walter, David Walker, Matthew Craven. “An Explainable Visualisation of the Evolutionary Search Process” [15 min + 5 min Q&A] [onsite]

12:30. Open Discussion [10 min] [onsite/online]

Slot 2 (EC4XAI papers & discussion)

13:40. Paper 4: Martina Saletta, Claudio Ferretti. “Towards the Evolutionary Assessment of Neural Transformers Trained on Source Code” [15 min + 5 min Q&A] [onsite]

14:00. Paper 5: Leonardo Lucio Custode, Giovanni Iacca. “Interpretable AI for policy-making in pandemics” [15 min + 5 min Q&A] [onsite]

14:20. Paper 6: Hormoz Shahrzad, Babak Hodjat, Risto Miikkulainen. “Evolving Explainable Rule Sets” [15 min + 5 min Q&A]

14:40. Paper 7: Hayden Andersen, Andrew Lensen, Will N. Browne. “Improving the Search of Learning Classifier Systems Through Interpretable Feature Clustering” [15 min + 5 min Q&A] [online]

15:00. Open Discussion [30 min] [onsite/online]

Call for papers

Explainable artificial intelligence (XAI) has gained significant traction in the machine learning community in recent years because of the need to generate “explanations” of how these typically black-box tools operate that are accessible to a wide range of users. Nature-inspired optimisation techniques are also often black-box in nature, and the explainability community has begun to consider how their operation, too, might be explained. Many of the processes that drive nature-inspired optimisers are stochastic and complex, presenting a barrier to understanding how solutions to a given optimisation problem have been generated.

Explainable optimisation can address some of the questions that arise during the use of an optimiser: Is the system biased? Has the problem been formulated correctly? Is the solution trustworthy and fair? Providing mechanisms that enable a decision maker to interrogate an optimiser and answer these questions builds trust in the system. Conversely, many approaches to XAI in machine learning are themselves based on search algorithms that interrogate or refine the model to be explained, and so have the potential to draw on the expertise of the EC community. Furthermore, many of the broader questions (such as what kinds of explanation are most appealing or useful to end users) are faced by XAI researchers in general.

From an application perspective, the goal of XAI and related research is to develop methods to interrogate AI processes with the aim of answering such questions. This can support decision makers while also building trust in AI decision support through more readily understandable explanations.

We seek contributions on a range of topics related to this theme.

Papers will be double-blind reviewed by members of our technical programme committee.

Authors can submit short contributions, including position papers, of up to 4 pages, and regular contributions of up to 8 pages; both categories must follow the GECCO paper formatting guidelines. Software demonstrations will also be welcome.

Important dates

Submission

Workshop papers must be submitted using the GECCO submission system (https://ssl.linklings.net/conferences/gecco/). After logging in, authors need to select the “Workshop Paper” submission form and, in the form, select the workshop they are submitting to. To see a sample of the “Workshop Paper” submission form, go to GECCO’s submission system and select “Sample Submission Forms”.

Submitted papers must not exceed 8 pages (excluding references) and must comply with the GECCO 2022 Papers Submission Instructions (https://gecco-2022.sigevo.org/Paper-Submission-Instructions). It is recommended to use the same templates as papers submitted to the main tracks.

Each paper submitted to this workshop will be rigorously reviewed in a double-blind review process: authors should not know who the reviewers of their work are, and reviewers should not know who the authors are. To this end, submitted papers must be ANONYMIZED: they should NOT contain any element that may reveal the identity of their authors, including author names, affiliations, and acknowledgments. Moreover, any references to the authors’ own work should be made as if the work belonged to someone else.

All accepted papers will be presented at the ECXAI workshop and appear in the GECCO 2022 Conference Companion Proceedings. By submitting a paper, the author(s) agree that, if their paper is accepted, they will:

As a published ACM author, you and your co-authors are subject to all ACM Publications Policies (https://www.acm.org/publications/policies/toc), including ACM’s new Publications Policy on Research Involving Human Participants and Subjects (https://www.acm.org/publications/policies/research-involving-human-participants-and-subjects).

Technical Programme Committee

Organisers (in alphabetical order)

Jaume Bacardit

jaume.bacardit@newcastle.ac.uk

Jaume Bacardit is Reader in Machine Learning at Newcastle University in the UK. He received a BEng and an MEng in Computer Engineering and a PhD in Computer Science from Ramon Llull University, Spain, in 1998, 2000 and 2004, respectively. Bacardit’s research interests include the development of machine learning methods for large-scale problems, the design of techniques to extract knowledge from and improve the interpretability of machine learning algorithms (now known as Explainable AI), and the application of these methods to a broad range of problems, mostly in biomedical domains. He leads/has led the data analytics efforts of several large interdisciplinary consortia: D-BOARD (EU FP7, €6M, focusing on biomarker identification), APPROACH (EU-IMI, €15M, focusing on disease phenotype identification) and PORTABOLOMICS (UK EPSRC, £4.3M, focusing on synthetic biology). Within GECCO he has organised several workshops (IWLCS 2007-2010, ECBDL’14), been co-chair of the EML track in 2009, 2013, 2014, 2020 and 2021, and Workshops co-chair in 2010 and 2011. He has 90+ peer-reviewed publications that have attracted 4600+ citations, with an h-index of 31 (Google Scholar).

Alexander Brownlee

alexander.brownlee@stir.ac.uk

Alexander (Sandy) Brownlee is a Lecturer in the Division of Computing Science and Mathematics at the University of Stirling. His main topics of interest are search-based optimisation methods and machine learning, with a focus on decision support tools and applications in civil engineering, transportation and software engineering. He has published over 70 peer-reviewed papers on these topics. He has worked with several leading businesses, including BT, KLM, and IES, on industrial applications of optimisation and machine learning. He serves as a reviewer for several journals and conferences in evolutionary computation, civil engineering and transportation, and is currently an Editorial Board member for the journal Complex & Intelligent Systems. He has been an organiser of several workshops and tutorials at GECCO, CEC and PPSN on genetic improvement of software.

Stefano Cagnoni

cagnoni@ce.unipr.it

Stefano Cagnoni graduated in Electronic Engineering at the University of Florence, Italy, where he also obtained a PhD in Biomedical Engineering and was a postdoc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004. Recent research grants include a grant from Regione Emilia-Romagna to support research on industrial applications of Big Data Analysis; the co-management of industry/academia cooperation projects, namely the development, with Protec srl, of a new-generation computer vision-based fruit sorter and, with the Italian Railway Network Society (RFI) and Camlin Italy, of an automatic inspection system for train pantographs; and an EU-funded “Marie Curie Initial Training Network” grant for a four-year research training project in Medical Imaging using Bio-Inspired and Soft Computing. He was Editor-in-Chief of the “Journal of Artificial Evolution and Applications” from 2007 to 2010. From 1999 to 2018 he chaired EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, later a track of the EvoApplications conference. From 2005 to 2020 he co-chaired MedGEC, a workshop on medical applications of evolutionary computation at GECCO. He has co-edited journal special issues dedicated to Evolutionary Computation for Image Analysis and Signal Processing, and is a member of the Editorial Boards of the journals “Evolutionary Computation” and “Genetic Programming and Evolvable Machines”. He was awarded the “Evostar 2009 Award” in recognition of the most outstanding contribution to Evolutionary Computation.

Giovanni Iacca

giovanni.iacca@unitn.it

Giovanni Iacca is an Associate Professor at the Department of Information Engineering and Computer Science of the University of Trento, Italy, where he founded the Distributed Intelligence and Optimization Lab (DIOL). Previously, he worked as a postdoctoral researcher in Germany (RWTH Aachen, 2017-2018), Switzerland (University of Lausanne and EPFL, 2013-2016) and The Netherlands (INCAS3, 2012-2016), as well as in industry in the areas of software engineering and industrial automation. He was co-PI of the FET-Open project “Phoenix” (2015-2019) and has received two best paper awards (EvoApps 2017 and UKCI 2012). His research focuses on computational intelligence, stochastic optimization, and distributed systems. In these fields he has co-authored almost 100 peer-reviewed publications, and he is actively involved in the organization of tracks and workshops at leading international conferences. He also regularly serves as a reviewer for several journals and sits on the program committees of various international conferences.

John McCall

j.mccall@rgu.ac.uk

John McCall is Head of Research for the National Subsea Centre at Robert Gordon University. He has researched in machine learning, search and optimisation for 25 years, making novel contributions to a range of nature-inspired optimisation algorithms and predictive machine learning methods, including EDA, PSO, ACO and GA. He has 150+ peer-reviewed publications in books, international journals and conferences; these have received over 2400 citations, with an h-index of 22. John and his research team specialise in industrially applied optimisation and decision support, working with major international companies, including BT, BP, EDF, CNOOC and Equinor, as well as a diverse range of SMEs. Major application areas for this research are: vehicle logistics, fleet planning and transport systems modelling; predictive modelling and maintenance in energy systems; and decision support in industrial operations management. John and his team attract direct industrial funding as well as grants from UK and European research funding councils and technology centres. John is a founding director and CEO of Celerum, which specialises in freight logistics. He is also a founding director and CTO of PlanSea Solutions, which focuses on marine logistics planning. John has served as a member of the IEEE Evolutionary Computation Technical Committee and an Associate Editor of IEEE Computational Intelligence Magazine and the IEEE Systems, Man and Cybernetics Journal, and he is currently an Editorial Board member for the journal Complex & Intelligent Systems. He frequently organises workshops and special sessions at leading international conferences, including several GECCO workshops in recent years.

David Walker

david.walker@plymouth.ac.uk

David Walker is a Lecturer in Computer Science at the University of Plymouth. He obtained a PhD in Computer Science in 2013 for work on visualising solution sets in many-objective optimisation. His research focuses on developing new approaches to solving hard optimisation problems with Evolutionary Algorithms (EAs), as well as identifying ways in which the use of Evolutionary Computation can be expanded within industry, and he has published journal papers in all of these areas. His recent work considers the visualisation of algorithm operation, providing a mechanism for visualising algorithm performance to simplify the selection of EA parameters. While working as a postdoctoral research associate at the University of Exeter, he developed hyper-heuristics and, more recently, investigated the use of interactive EAs in the water industry. Since joining Plymouth, Dr Walker has built a research group that includes a number of PhD students working on optimisation and machine learning projects. He is active in the EC field, having run an annual workshop on visualisation within EC at GECCO since 2012, in addition to his work as a reviewer for journals such as IEEE Transactions on Evolutionary Computation, Applied Soft Computing, and the Journal of Hydroinformatics. He is a member of the IEEE Taskforce on Many-objective Optimisation. At the University of Plymouth he is a member of both the Centre for Robotics and Neural Systems (CRNS) and the Centre for Secure Communications and Networking.