Artificial Intelligence (AI) has huge potential to bring accuracy, efficiency, cost savings and speed to a whole range of human activities, and to provide entirely new insights into behaviour and cognition. AI, whether embedded in systems or embodied in artefacts (e.g. robots), is increasingly pervasive. It affects everyone, and has the capability to transform public and private organisations and the services and products they offer.
The development and use of AI raise fundamental ethical issues for society, which are of vital importance to our future. There is already much debate concerning the impact of AI on labour, social interactions (including healthcare), privacy, fairness and security (including peace initiatives and warfare). The societal and ethical impact of AI spans many domains: machine classification systems raise questions about privacy and bias, for instance, while autonomous vehicles raise questions about safety and responsibility. Researchers, policy-makers, industry and society all recognise the need for approaches that ensure the safe, beneficial and fair use of AI technologies, that consider the implications of ethically and legally relevant decision-making by machines, and that address the ethical and legal status of AI. These approaches include the development of methods and tools, consultation and training activities, and governance and regulatory efforts.
Responsible Artificial Intelligence Agents (RAIA) will bring together researchers from AI, ethics, philosophy, robotics, psychology, anthropology, cognitive science, law, regulatory governance studies and engineering to discuss and work on the complex challenges of designing and regulating AI systems as these become part of our daily life. RAIA focuses on three aspects that together can ensure that AI is developed for societal good (e.g. contributing to the UN Sustainable Development Goals), that it is built using verifiable and accountable processes, and that its impact is governed by fair and inclusive mechanisms and institutions.
The workshop will focus on three areas of research:
- Responsible Design of Intelligent Systems: concerns the integrity of all stakeholders as they research, design, construct, use, manage and dismantle AI agents, and the governance mechanisms required to prevent misuse of these agents. The focus here is the prioritisation of ethical, legal and policy considerations in the development and management of AI agents, to ensure their responsible design and production.
- Machine Ethics: understanding, developing and evaluating ethical agency and reasoning abilities as part of the behaviour of artificial autonomous systems (such as AI agents and robots). Even though AI agents are increasingly able to take decisions and perform actions that have moral impact, they are artefacts and therefore neither ethically nor legally responsible. Individual humans or human corporations should remain the moral (and legal) agents. We can delegate control to purely synthetic intelligent systems without delegating responsibility or liability to them. With the term machine ethics, we refer to the computational and theoretical methods and tools that support the representation, evaluation, verification and transparency of ethical deliberation by machines, with the aim of supporting and informing human responsibility on tasks shared with those machines. That is, machine ethics concerns the methods, algorithms and tools needed to endow AI agents with the capability to reason about the ethical aspects of their decisions, and the ethically informed methodologies for developing AI agents whose behaviour is guaranteed to remain within acceptable ethical constraints (a minimal sketch of such constrained decision-making follows this list).
- Ethics and values in multi-cultural contexts: Underlying the two objectives above is the recognition that different groups, societies and application contexts may have different needs and expectations concerning human values and ethical principles. Responsible AI should therefore follow accepted human values and priorities (e.g. the UN Sustainable Development Goals, or human rights), while at the same time taking special interests into account and respecting, for example, minorities. Besides aligning ethics and identifying shared moral bounds and legal rules, in multi-cultural societies there is also the need to ensure awareness of the different value priorities and interpretations that different members of society may hold. The workshop will therefore also consider contributions on the analysis and development of adequate methodologies for value elicitation and awareness, in order to identify the values and priorities held by different groups and to explore mechanisms for addressing value conflicts.
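To make the machine-ethics aspect concrete, the following is a minimal sketch, assuming a toy agent whose candidate actions are screened by hard ethical constraints before a utility-based choice is made. All names here (Action, no_harm, choose) are illustrative inventions, not a prescribed architecture.

```python
# Minimal sketch of ethically constrained decision-making: candidate
# actions are filtered by hard ethical constraints, and only then is the
# highest-utility permissible action selected. Toy example only.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Action:
    name: str
    utility: float       # task-level benefit of the action
    harms_person: bool   # toy ethical feature of the action


# An ethical constraint maps an action to permissible / not permissible.
Constraint = Callable[[Action], bool]


def no_harm(action: Action) -> bool:
    """Deontic-style hard constraint: actions that harm a person are forbidden."""
    return not action.harms_person


def choose(actions: List[Action], constraints: List[Constraint]) -> Action:
    """Filter out impermissible actions, then maximise utility.

    Raising on an empty permissible set keeps the human in the loop,
    reflecting the view that responsibility stays with people, not machines.
    """
    permissible = [a for a in actions if all(c(a) for c in constraints)]
    if not permissible:
        raise RuntimeError("No ethically permissible action: defer to a human")
    return max(permissible, key=lambda a: a.utility)


if __name__ == "__main__":
    options = [
        Action("fast_route", utility=0.9, harms_person=True),
        Action("safe_route", utility=0.6, harms_person=False),
    ]
    print(choose(options, [no_harm]).name)  # -> safe_route
```

Note that the constraint-then-optimise structure is only one design choice; argumentation-based or case-based ethical reasoners, as invited below, would replace the simple Boolean filter.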
The responsible design, development and use of AI agents is of utmost relevance to applications such as self-driving vehicles, companion and healthcare robots, electronic health records (EHR), and ranking and profiling algorithms, which are already affecting society. In all these applications, AI agent reasoning should be able to take societal values and moral and ethical considerations into account, weigh the respective priorities of values held by different stakeholders in different multicultural contexts, explain its reasoning, and guarantee transparency. The concept of Responsible AI Agents is more than the ticking of ethical ‘boxes’ in a report, the development of add-on features, or switch-off buttons in AI systems. Rather, responsibility is fundamental to autonomy and should be one of the core stances underlying AI research. For example, advances in computer vision and classification must go hand in hand with ethical consideration of their use as autonomous “decision makers” in target identification.
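As an illustration of weighing the value priorities of different stakeholders, here is a small hypothetical sketch: each action is scored against a shared set of values, and each stakeholder group holds its own elicited priority weights. The data, names and averaging rule are assumptions for illustration, standing in for the elicitation methodologies discussed above.

```python
# Illustrative sketch of aggregating stakeholder value priorities for an
# EHR-style decision. Scores and weights are invented toy data.

from typing import Dict

# How well each action satisfies each value, on a 0..1 scale (toy data).
action_scores: Dict[str, Dict[str, float]] = {
    "share_record":    {"safety": 0.9, "privacy": 0.2},
    "withhold_record": {"safety": 0.4, "privacy": 0.9},
}

# Value priorities elicited from two stakeholder groups (weights sum to 1).
stakeholder_weights: Dict[str, Dict[str, float]] = {
    "clinicians": {"safety": 0.8, "privacy": 0.2},
    "patients":   {"safety": 0.3, "privacy": 0.7},
}


def aggregate(action: str) -> float:
    """Average each group's weighted score, giving groups equal standing.

    Other social-choice rules (e.g. maximin over groups) could replace the
    mean to resolve value conflicts differently.
    """
    group_scores = [
        sum(w * action_scores[action][v] for v, w in weights.items())
        for weights in stakeholder_weights.values()
    ]
    return sum(group_scores) / len(group_scores)


if __name__ == "__main__":
    for act in action_scores:
        print(f"{act}: {aggregate(act):.2f}")
    # share_record: 0.59, withhold_record: 0.62 under these toy weights
```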
The workshop therefore invites papers on, among others, the following topics:
- AI design methodologies taking into account ethical and social consequences of AI agents
- Computational methods for understanding, developing, and evaluating ethical agency
- Engineering techniques for autonomous systems to incorporate ethical principles and social norms
- Ethically informed design methodologies for AI agents
- Formalisms (logics, algebras, argumentation, case-based reasoning etc.) for representing and reasoning about ethics, legal constraints, and social norms for AI agents
- Social simulation approaches for evaluation of AI agents and socio-technical systems
- Verification and validation of ethical behaviour
The post-proceedings of the workshop will be published by Springer, containing improved and/or extended versions of the workshop contributions as well as invited contributions.