Würzburg Centre for Social Implications of Artificial Intelligence (SOCAI)


Research Area 3. Machine Normativity

Technological advances in the internal architecture of machines allow them to make increasingly complex decisions and to evaluate situations. In doing so, machines enter a domain that has until now been reserved for humans: the capacity to make value judgments. Value judgments are distinct from classical optimisation tasks, which machine intelligence already handles well. When humans evaluate facts, one can at least assume that they grasp the underlying meaning of the events unfolding in the world; a comparable machine grasp of the connections between events has so far not materialised.

At the same time, there is a great need to enable machines to carry out such assessments. An autonomous vehicle will have to cope with a large number of these evaluative tasks in the course of a typical journey. The much-discussed dilemma situations of unavoidable accidents, which would require weighing human lives against one another, may well remain exceptional. Yet the programming of these vehicles will have to confront complex decisions: if a wild animal jumps onto the road, the ensuing choice of manoeuvre requires a moral assessment that the machine must be able to make. Not only in autonomous driving, but also in health care and in autonomous weapons systems, the normative capacity to assess certain events becomes a precondition for using the technology. Moreover, the evolving field of Legal Tech attempts to offer solutions for dealing with the open texture of legal norms.
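To make the wild-animal scenario concrete, the following sketch frames the manoeuvre choice as a comparison of expected harms across candidate actions. It is a minimal illustration, not a proposed system: the actions, probabilities and harm weights are invented placeholders, and assigning those weights is precisely the value judgment at issue.

# Illustrative sketch only: an evaluative driving decision framed as
# comparing expected harms across actions. All probabilities and harm
# weights below are invented placeholders, not proposed values.

from dataclasses import dataclass

@dataclass
class Outcome:
    probability: float   # chance this outcome occurs if the action is taken
    harm: float          # normatively weighted harm score (assumption)

# Hypothetical action models for "a wild animal jumps onto the road".
ACTIONS = {
    "brake_hard": [Outcome(0.7, 2.0),   # animal struck despite braking
                   Outcome(0.3, 0.5)],  # near miss, minor risk to occupants
    "swerve":     [Outcome(0.1, 2.0),   # animal struck anyway
                   Outcome(0.2, 6.0),   # vehicle leaves lane, occupants at risk
                   Outcome(0.7, 0.0)],  # clean avoidance
}

def expected_harm(outcomes):
    return sum(o.probability * o.harm for o in outcomes)

def choose_action(actions):
    # Pick the action minimising expected harm; the normative content is
    # hidden entirely in the harm weights, which is exactly the part a
    # machine cannot yet derive on its own.
    return min(actions, key=lambda a: expected_harm(actions[a]))

print(choose_action(ACTIONS))  # -> "swerve" under these invented numbers

Everything normative in this sketch is concentrated in the harm numbers; the optimisation itself is trivial. This is the sense in which value judgments differ from classical optimisation tasks.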

Normative decisions by machines raise diverse technical issues. How could a machine, at both the software and the hardware level, weigh different possible actions against moral and legal assessments? How could safety systems be designed to prevent particularly dangerous system failures? How can we ensure that the decisions machines make on behalf of humans reflect established cultural and social values? What would it mean for our relationship with machines if they could conclude contracts and arguably act morally? Research in this area addresses the full range of such questions through an interdisciplinary approach spanning law, philosophy and computer science.
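One conceivable software-level structure, sketched below under invented assumptions, treats legal rules as hard constraints and a moral assessment as a soft ranking over the remaining options. The predicates, action names and scores are hypothetical illustrations only.

# Illustrative sketch: legal rules act as hard constraints and a moral
# score ranks the remaining options. All predicates and scores are
# invented for illustration.

def is_legally_permissible(action):
    # Hypothetical hard constraints, e.g. encoded from traffic law.
    forbidden = {"cross_solid_line", "exceed_speed_limit"}
    return action not in forbidden

def moral_score(action):
    # Hypothetical soft assessment reflecting cultural and social values.
    scores = {"brake_hard": 0.6, "swerve_on_shoulder": 0.8,
              "cross_solid_line": 0.9, "continue": 0.1}
    return scores.get(action, 0.0)

def decide(candidate_actions):
    legal = [a for a in candidate_actions if is_legally_permissible(a)]
    if not legal:
        raise RuntimeError("no legally permissible action available")
    return max(legal, key=moral_score)

print(decide(["continue", "brake_hard", "swerve_on_shoulder", "cross_solid_line"]))
# -> "swerve_on_shoulder": morally best among the legally permissible options

Whether legal norms should dominate moral ones in this lexical fashion is itself one of the open questions of this research area; the sketch merely shows that the two kinds of assessment can occupy structurally different roles.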

The subproject “Normative Decision Making by Machines” examines in particular the technical aspects of such decisions. How can machines acquire the capacity to make decisions based on human values? Which machine learning methods would this require? Which safety mechanisms could prevent individual catastrophic system failures?
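As one hypothetical example of such a learning process, the sketch below fits a simple linear “value function” from pairwise human judgments, in the spirit of preference learning. The features, data and hyperparameters are all invented for illustration and stand in for the far harder question of whose values are learned.

# Illustrative sketch: learning a linear value function from pairwise
# human judgments ("option a is better than option b"). All data and
# hyperparameters below are invented placeholders.

import math

# Each option is a feature vector; each judgment says the first option
# was preferred by a human rater (hypothetical data).
judgments = [
    ([1.0, 0.2], [0.3, 0.9]),
    ([0.8, 0.1], [0.4, 0.7]),
    ([0.9, 0.3], [0.2, 0.8]),
]

w = [0.0, 0.0]  # weights of the learned value function

def utility(x):
    return sum(wi * xi for wi, xi in zip(w, x))

for _ in range(500):                 # plain gradient ascent
    for preferred, rejected in judgments:
        # Bradley-Terry style: P(preferred > rejected) = sigmoid(u_p - u_r)
        p = 1.0 / (1.0 + math.exp(utility(rejected) - utility(preferred)))
        for i in range(len(w)):      # push utilities apart where the model errs
            w[i] += 0.1 * (1.0 - p) * (preferred[i] - rejected[i])

print(w)  # learned weights; containing catastrophic failures of such a
          # learned function is the safety question raised above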

In the subproject “Machines as Legal, Political and Moral Actors” we ask first and foremost which legal, philosophical and social consequences follow from machines’ capacity to make normative decisions. Could machines thereby acquire a special status, perhaps comparable to that of animals, and what would ultimately distinguish them from humans? Similarly, what legal consequences attach to the automated formation of contracts by machines?

In the subproject “Automation of Legal Argumentation” we explore automation questions specific to the law. Can lawyers be replaced by algorithms? What would automated administrative and judicial proceedings look like? What special legal remedies would automated legal reasoning require? From a technical perspective, we are interested in practical solutions for automating these processes, in both hardware and software.
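As a minimal illustration of rule-based legal automation, the sketch below reduces a simplified fault-based liability norm to conditions an algorithm can check. The rule is an invented simplification; the open texture of real legal norms (terms like “reasonable” or “proportionate”) is precisely what resists such encoding.

# Illustrative sketch: a statutory norm reduced to explicit conditions
# that an algorithm can check. The rule below is a crude invented schema,
# not an encoding of any actual statute.

from typing import NamedTuple

class Case(NamedTuple):
    damage_caused: bool
    acted_unlawfully: bool
    at_fault: bool

def liable_in_damages(case: Case) -> bool:
    # All three elements must be satisfied for the legal consequence
    # (liability in damages) to follow.
    return case.damage_caused and case.acted_unlawfully and case.at_fault

print(liable_in_damages(Case(True, True, False)))   # -> False: no fault
print(liable_in_damages(Case(True, True, True)))    # -> True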