Digital transformation affects the constitutional order of the state and its public institutions, challenging their legal foundations. To keep pace with the growing presence of digitalisation in citizens' lives, technological advancement must be integrated into the framework of state action. This is most clearly visible in the sphere of public administration, where state action is subject to constitutional checks and balances that reflect democratic principles. The legislature is entrusted with striking a balance between upholding these constitutional principles and not stifling innovation.
Public administrative action must comply with constitutional principles. At the same time, actions taken by public authorities must reflect democratic principles in order to ensure their legitimacy. The traditional understanding of democratic legitimacy presupposes decision making by legal officials who are, ultimately, human beings placed in decision-making positions through a legitimating process of selection. This understanding will need to evolve as artificial intelligence (AI) plays an increasing role in official decision making.
The first pillar asks whether the official use of machine learning tools can be a legitimate option in the context of democratic processes. The second pillar addresses the current state of digitalisation in citizens' lives and their growing expectation of efficient and effective public services delivered by means of machine learning tools. In principle, the use of such tools would simplify access to public administrative authorities and speed up decision making. Here we ask whether, presuming their applications are considered legitimate, the state's duty to serve the interests of its citizens could give rise to a constitutional obligation to introduce machine learning tools. The third pillar deals with state liability for errors generated by machine learning tools: to what extent, and in what manner, are such errors attributable to individual public administrators? The decisive factor is whether the individual administrator, acting as a representative of the state, thereby gives rise to liability on the part of the state as a whole.