KEYNOTE SPEAKERS


Plamen Angelov

Organization: School of Computing and Communications, Lancaster University, UK
Homepage: https://angeloventelsensys.wixsite.com/plamenangelov


Prof. Plamen Angelov (MEng 1989, PhD 1993, DSc 2015) is a Fellow of the IEEE, the IET and the HEA. He is a Governor of the International Neural Networks Society (INNS) and served as its Vice President for two terms, until the end of 2020. He holds a Personal Chair in Intelligent Systems at Lancaster University, UK. He has authored or co-authored 350+ peer-reviewed publications in leading journals and peer-reviewed conference proceedings, 6 patents and three research monographs (Wiley, 2012; Springer, 2002 and 2019), cited over 12,300 times with an h-index of 58. He is the founding Director of the LIRA (Lancaster Intelligent, Robotic and Autonomous systems) Research Centre (https://www.lancaster.ac.uk/lira), which includes over 50 academics across 15 departments from all faculties of the University. He has an active research portfolio in computational intelligence and machine learning, with internationally recognised results in online and evolving learning and explainable AI. Prof. Angelov leads numerous projects (including several multimillion ones) funded by UK research councils, the EU, industry and the UK MoD. His research was recognised by the 2020 Dennis Gabor Award "for outstanding contributions to engineering applications of neural networks", as well as 'The Engineer Innovation and Technology 2008 Special Award' and the 'For Outstanding Services' award (2013) from IEEE and INNS. He is the founding co-Editor-in-Chief of Springer's journal Evolving Systems and Associate Editor of several leading international scientific journals, including IEEE Transactions on Fuzzy Systems, IEEE Transactions on Cybernetics and IEEE Transactions on AI, as well as Fuzzy Sets and Systems, Soft Computing and others. He has given over a dozen plenary and keynote talks at high-profile conferences and has served as General co-Chair of a number of such conferences. He has also been a member of the International Program Committee of 100+ international conferences (primarily IEEE). More details can be found at https://angeloventelsensys.wixsite.com/plamenangelov.

Keynote talk - From Hyper-parametric towards Prototype-based Deep Learning

Machine Learning (ML) and AI justifiably attract the attention and interest not only of the wider scientific community and industry, but also of society and policy makers. However, even the most powerful (in terms of accuracy) algorithms, such as deep learning (DL), can give a wrong output, which may be fatal. Because of the hyper-parametric, cumbersome and opaque models used by DL, some authors have started to talk about a dystopian "black box" society. Despite the success in this area, the way computers learn is still fundamentally different from the way people acquire new knowledge, recognise objects and make decisions. People do not need a huge amount of annotated data. They learn by example, using similarities to previously acquired prototypes, not by using parametric analytical models.

Current ML approaches focus primarily on accuracy and overlook explainability, the semantic meaning of the internal model representation, reasoning and its link with the problem domain. They also overlook the effort required to collect and label training data, and they rely on assumptions about the data distribution that are often not satisfied. The ability to detect the unseen and unexpected, and to start learning these new classes in real time with no or very little supervision, is critically important and is something no currently existing classifier can offer. The challenge is to fill this gap between high accuracy and semantically meaningful solutions. The most efficient algorithms that have recently fuelled interest in ML and AI are also computationally very hungry: they require specific hardware accelerators such as GPUs, huge amounts of labelled data, and time. They produce parameterised models with hundreds of millions of coefficients, which are impossible for a human to interpret or manipulate. Once trained, such models are inflexible to new knowledge: they cannot dynamically evolve their internal structure to start recognising new classes, and they are good only for what they were originally trained for. They also lack robustness, formal guarantees about their behaviour, and explanatory and normative transparency. This makes the use of such algorithms problematic in high-stakes, complex problems such as aviation, health and bail decisions, where a clear rationale for a particular decision is very important and errors are very costly. All these challenges and identified gaps require a dramatic paradigm shift and a radically new approach.

In this talk the speaker will present such a new approach towards the next generation of computationally lean ML and AI algorithms that can learn in real time using normal CPUs on computers, laptops and smartphones, or can even be implemented on chip, which will dramatically change the way these new technologies are applied. It is explainable-by-design. It focuses on addressing the open research challenge of developing highly efficient, accurate ML algorithms and AI models that are transparent, interpretable, explainable and fair by design. Such systems are able to learn lifelong and continuously improve without the need for complete re-training, can start learning from few training data samples, explore the data space, detect and learn from unseen data patterns, and collaborate seamlessly with humans or other such algorithms.
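To make the contrast with hyper-parametric models concrete, the sketch below illustrates the general idea of prototype-based classification in Python. It is an illustrative assumption, not the speaker's actual algorithm; the class name, distance threshold and example data are invented for the example. Each class is represented by stored prototype vectors, prediction is by similarity to the closest prototype, new classes can be added at any time without retraining, and samples far from every known prototype are flagged as unseen.

import numpy as np

# Minimal nearest-prototype classifier (illustrative sketch only).
class PrototypeClassifier:
    def __init__(self, threshold=5.0):
        self.prototypes = []        # list of (label, vector) pairs
        self.threshold = threshold  # distance beyond which a sample counts as "unseen"

    def add_prototype(self, label, vector):
        self.prototypes.append((label, np.asarray(vector, dtype=float)))

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        label, best = None, float("inf")
        for lbl, p in self.prototypes:          # find the closest prototype
            d = np.linalg.norm(x - p)
            if d < best:
                label, best = lbl, d
        return label if best <= self.threshold else "unknown"

# Learn two classes from a single example each, then meet an unseen pattern.
clf = PrototypeClassifier()
clf.add_prototype("cat", [0.9, 0.1])
clf.add_prototype("dog", [0.1, 0.9])
print(clf.predict([0.85, 0.15]))   # -> "cat"
print(clf.predict([10.0, 10.0]))   # -> "unknown": a candidate for a new class

A sample flagged as "unknown" could simply be stored as the prototype of a new class, which is the sense in which prototype-based models can evolve their structure without complete retraining.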


Julian F. Miller

Organization: Department of Electronic Engineering, University of York, UK
Homepage: https://www.cartesiangp.com/julian-miller

Dr. Miller obtained a BSc in Physics from the University of London, a PhD in Nonlinear Mathematics from the City University, and a Postgraduate Certificate in Learning and Teaching in Higher Education from the University of Birmingham. He is currently an Honorary Fellow (formerly Reader) in the Department of Electronic Engineering at the University of York. He has chaired or co-chaired seventeen international workshops, conferences and conference tracks in Genetic Programming (GP) and Evolvable Hardware. He is a former associate editor of IEEE Transactions on Evolutionary Computation and an associate editor of the journals Genetic Programming and Evolvable Machines and Natural Computing. He is on the editorial boards of the journals Evolutionary Computation, International Journal of Unconventional Computing and Journal of Natural Computing Research. He has publications in genetic programming, evolutionary computation, quantum computing, artificial life, evolvable hardware, computational development and nonlinear mathematics. Dr. Miller is a highly cited author with over 10,000 citations and over 230 refereed publications in these areas, including 44 journal publications. He has given fourteen tutorials on genetic programming and evolvable hardware at leading conferences in evolutionary computation. He received the prestigious EvoStar award in 2011 for outstanding contribution to the field of evolutionary computation. He is the inventor of a highly cited method of genetic programming known as Cartesian Genetic Programming and edited the first book on the subject in 2011. He is also well known for proposing "evolution-in-materio", which asserts that computational functions can be evolved directly in materials by evolving configurations of applied physical variables, without requiring a detailed understanding of the materials.

Keynote talk - Cartesian Genetic Programming

Cartesian Genetic Programming (CGP) is a very general technique that can be applied to a wide range of computational problems in many fields. It is a form of Genetic Programming (GP), which finds solutions to computational problems by evolving programs and other structures. CGP uses a very simple, integer, address-based representation of a program in the form of a directed graph. In various studies, CGP has been shown to be efficient in comparison with other GP techniques. The classical form of CGP has undergone a number of developments which have made it more useful, efficient and flexible in various ways. These include self-modifying CGP (SMCGP), cyclic connections (recurrent CGP), the encoding of artificial neural networks (CGPANN) and automatically defined functions (modular CGP). SMCGP uses functions that cause the evolved programs to change themselves as a function of time. Recurrent CGP allows evolution to create programs which contain cyclic, as well as acyclic, connections. CGP-encoded artificial neural networks represent a powerful training method for neural networks.
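For readers unfamiliar with the representation, the following minimal Python sketch (an assumption for exposition, not Dr. Miller's reference implementation; the function set and genome are invented for the example) shows how an integer, address-based CGP genome encodes a feed-forward directed graph and how it is decoded and executed.

import operator

# Each node is a triple (function_index, input_address_1, input_address_2).
# Addresses refer either to program inputs (0..n_inputs-1) or to earlier nodes,
# so the genome encodes a feed-forward directed graph.
FUNCTIONS = [operator.add, operator.sub, operator.mul]

def evaluate(genome, output_addresses, inputs):
    values = list(inputs)                 # addresses 0..len(inputs)-1 hold the inputs
    for func_idx, a, b in genome:         # decode the nodes from left to right
        values.append(FUNCTIONS[func_idx](values[a], values[b]))
    return [values[addr] for addr in output_addresses]

# Example with two inputs x0, x1: node 2 computes x0 * x1, node 3 computes node2 + x0,
# and the single output reads node 3, so the program is x0*x1 + x0.
genome = [(2, 0, 1), (0, 2, 0)]
print(evaluate(genome, output_addresses=[3], inputs=[3.0, 4.0]))   # -> [15.0]

In an evolutionary run, mutation simply changes these integer genes (function indices and addresses), and only the nodes actually reachable from the outputs affect the program's behaviour.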

CGP has been applied successfully to a variety of real-world problems, such as digital circuit design and visual object recognition and classification. More recently, it has been used to create a general AI approach inspired by biological development, in which evolved neural programs create ANNs that simultaneously solve multiple problems. CGP has a dedicated website at www.cartesiangp.com.


Alessandro Lenci

Organization: Department of Philology, Literature, and Linguistics, University of Pisa, Italy
Homepage: https://people.unipi.it/alessandro_lenci/

Alessandro Lenci, PhD, is Full Professor in Linguistics at the University of Pisa and the director of the Computational Linguistics Laboratory (CoLingLab: http://colinglab.humnet.unipi.it/) at the Department of Philology, Literature, and Linguistics. He has published extensively on Natural Language Processing (NLP) and cognitive science. His main research areas are distributional semantics and its applications in linguistic and cognitive research, computational lexical semantics, computational models of verb argument structure, and tools and resources for NLP. He has co-organized workshops and conferences (e.g., the Dagstuhl Seminar on Computational Models of Semantics: Formal and Distributional Approaches to Meaning, 11-15 November 2013) and has served on the program committees of several conferences (ACL, COLING, EMNLP, LREC, etc.). He was co-chair of the Computational Models of Human Language Acquisition and Processing area at EMNLP 2013 and of the Semantic Processing, Distributional Semantics and Compositional Semantics area at COLING 2014. He was co-organizer of the ACL 2016 and 2018 Workshops on Cognitive Aspects of Computational Language Acquisition, PC co-chair of *SEM 2018, and local co-organizer of ACL 2019 in Florence. In 2020, he received the "10-year Test-of-Time Award" from the Association for Computational Linguistics. His latest main effort is a book on Distributional Semantics, to be published soon by Cambridge University Press in the series Studies in Natural Language Processing.

Keynote talk - Do machines really talk like us? The knowledge of language in AI systems

State-of-the-art Natural Language Processing (NLP) and Artificial Intelligence (AI) models have reached an unprecedented ability to "mimic" human linguistic skills, from machine translation to text generation. But how much of this success depends on their having really acquired human-like abilities to learn and use language? One key feature of human cognition, which also grounds natural language, is a particularly sophisticated ability to learn, from a relatively limited exposure to data, knowledge that is general enough to abstract away from the input it was learnt from and can therefore be applied to interpret unseen situations. Moreover, humans learn by experiencing and acting in the world and by using advanced inferential skills. On the other hand, the most sophisticated Neural Language Models, like GPT and its friends, typically acquire their linguistic knowledge only from huge amounts of text. Understanding to what extent machines can match human linguistic skills lies at the core of the current debate in computational linguistics and AI. Exploring this issue requires investigating the nature of the generalizations made by machines and comparing them with those made by humans. Therefore, AI research should never ignore the question of the cognitive plausibility of the computational models it uses and of the linguistic abilities they acquire. The relevance of this issue is huge not only on theoretical grounds, but also for the possibility of designing AI systems and applications that could really use and understand natural language the way humans do.