Advances in the field of machine learning make it conceivable that complex judgment processes performed by machines could become reality. This gain in practical capability stems from a property of artificial neural networks: they can respond to environmental complexity better than traditional, rule-based forms of computer programming. Normative judgments rarely allow clear-cut distinctions, and they can therefore be represented more adequately in the open structure of neural networks. The subproject "Normative Decision-Making by Machines" examines in particular the technical aspects of these decisions. How could machines acquire the capacity to make decisions based on human values? Which machine learning processes would this require?
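The contrast drawn above between rule-based programming and the open structure of neural networks can be illustrated with a minimal sketch. The example below is purely hypothetical (the names, the threshold, and the sigmoid "model" are illustrative assumptions, not part of the project): a hard-coded rule yields an abrupt binary verdict, whereas a learned model can output a graded judgment that better reflects the absence of clear-cut distinctions in normative questions.

```python
import math

def rule_based_judgment(harm_score: float) -> str:
    # Traditional programming: a fixed threshold, clear-cut by design.
    return "impermissible" if harm_score > 0.5 else "permissible"

def graded_judgment(harm_score: float, weight: float = 4.0) -> float:
    # A (trivially simplified) learned model would instead output a
    # degree of impermissibility between 0 and 1, here via a sigmoid.
    return 1.0 / (1.0 + math.exp(-weight * (harm_score - 0.5)))

print(rule_based_judgment(0.49))   # flips abruptly between 0.49 and 0.51
print(rule_based_judgment(0.51))
print(graded_judgment(0.49))       # stays close to 0.5: a graded verdict
```

The point is not the particular function but the output type: a borderline case near the threshold produces a near-indifferent graded value rather than an arbitrary hard flip.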
An important aspect of normative decisions taken by machines is their realization in a complex environment and the prevention of technical errors. This raises the question of adequate safety systems. Together with Viacheslav Gromov (AITAD, Offenburg), who supports the technical side of the project, we will examine potential solutions for creating a sufficient safeguard for value judgments produced by machines. The approaches discussed range from purely software-based solutions to hardware-integrated safety systems that monitor those value judgments and correct them in an emergency. Which safety systems are available to prevent individual catastrophic system failures?
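On the software end of the spectrum described above, one conceivable safeguard is a runtime monitor that wraps the model's value judgment and overrides it in an emergency. The following is a minimal sketch under stated assumptions, not the project's actual design; all names (`Decision`, `safety_monitor`, the confidence threshold, the fallback action) are hypothetical illustrations.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str
    confidence: float  # the model's confidence in its own judgment

# Hypothetical safe fallback: stop and defer rather than act.
SAFE_FALLBACK = Decision(action="halt", confidence=1.0)

def safety_monitor(model: Callable[[dict], Decision],
                   hard_constraints: Callable[[Decision], bool],
                   min_confidence: float = 0.8) -> Callable[[dict], Decision]:
    """Wrap `model` so that judgments violating a hard constraint, or made
    with low confidence, are replaced by a safe fallback action."""
    def monitored(situation: dict) -> Decision:
        decision = model(situation)
        if not hard_constraints(decision) or decision.confidence < min_confidence:
            return SAFE_FALLBACK  # emergency correction
        return decision
    return monitored

# Usage with a toy stand-in for the learned model:
toy_model = lambda s: Decision(action=s["proposal"], confidence=0.95)
no_harm = lambda d: d.action != "harmful_action"
guarded = safety_monitor(toy_model, no_harm)

print(guarded({"proposal": "assist"}).action)          # passes through
print(guarded({"proposal": "harmful_action"}).action)  # overridden to "halt"
```

A hardware-integrated variant would enforce the same check outside the software stack (e.g. an independent watchdog circuit), so that a failure of the monitored software cannot disable the monitor itself.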