Homo Digitalis

New technologies are entering our lifeworld at a rapid pace and demand reflection on how we as a society deal with them. The Homo Digitalis working group addresses precisely these questions.

Whether autonomous vehicles or so-called "ethical algorithms" in Artificial Intelligence – it is no longer enough simply to build technology that works reliably. The rapid development of the Internet in particular shows that, after some time, entirely unforeseen collateral phenomena find their way into our lifeworld. These demand fundamental reflection on how we as a society can and should deal with them.

Working group leader:
Univ.-Prof. Dr. Peter Reichl
Universität Wien, Cooperative Systems Research Group
peter.reichl(at)univie.ac.at

Current contributions on the topic of Homo Digitalis

The AI Crisis Group

On the one hand, AI systems are present in more and more aspects of our lives; on the other hand, they seem to be responsible for a myriad of global crises, ranging from the decline of liberal democracy to the loss of faith in scientific institutions. But we have to go further and explain how the paradigm crisis of AI is entangled with broader social crises.

Hamid Ekbia

The recent resurgence of AI presents a Janus-faced image of a broad but indeterminate set of practices that has come to be carried out under the rubric of AI. One face represents a miracle cure for many social and economic issues of our times – from world poverty to disease, drought, and economic disparity – while the other face projects AI as the cause of those same social ills and many more. The first image encourages the introduction of AI systems into more and more aspects of our lives, while the second holds AI responsible for a myriad of global crises, ranging from the decline of liberal democracy to the loss of faith in scientific institutions. Neither image is accurate or realistic, because AI systems can neither cause these crises nor cure them on their own. Rather, they are expressions of existing problems, which can be either mitigated or aggravated by the introduction of AI systems.

The recent global pandemic brought this reality to light, exposing deeply entrenched fractures in our societies. AI systems have been inserted into these cleavages on the basis of either misguided trust in their abilities or extrinsic agendas, often reinforcing and amplifying the gaps. But they did not have to; the reason they do is that AI is currently in a state of paradigm crisis in the Kuhnian sense of the term. To avoid this and to put AI to positive and productive use in dealing with contemporary issues, a change of paradigm is needed. That this is the case can be demonstrated in almost any area of AI – from face recognition to language translation, and from corporate recruiting to public services.

The common denominator in all of this is that technology and its capabilities are put at the center of AI research, driven by a mindset that can best be described as a "challenge-me-if-you-dare" paradigm. While that may have worked for a while, it is becoming increasingly clear that it no longer does. Following decades of ups and downs, of false starts and hyped hopes, that paradigm has run its course, leading to a growing crisis in AI. To avert the crisis, we have to change the way we think about AI in deep ways – we need, in other words, a paradigm shift. To shift the paradigm, we propose to reformulate the field's central question from "Can computers do X?" to "What are the conditions of possibility for computers to do X?"

But we have to go further and explain how the paradigm crisis of AI is entangled with broader social crises. We have to identify, in other words, the mechanisms that enable the insertion of AI into the preexisting fractures of the social system, as well as the conditions of possibility that give rise to such mechanisms. Many commentators have recently noted the flaws and fallibilities of AI systems, often framed in terms of the "ethics" of system design or the imposition of constraints on application domains such as the military. Such measures, while valuable, do not adequately explain the underlying social, material, and cultural conditions of possibility for the emergence of these flaws and biases. We need to uncover and explain the conditions of possibility that have given rise to crisis both in AI and in the broader social arena.

We approach this question with a focus on the following:

  1. the mechanisms that have enabled the intervention of AI in the preexisting cleavages of the social system;
  2. the conditions of possibility that drive such mechanisms along the dimensions of
    • infrastructures (physical, technical, organizational);
    • institutions (social, economic, cultural, regulatory, legal); and
    • information (technoscientific, media, public);
  3. the centrality of the reigning paradigm in AI in cultivating the above conditions; and
  4. the alternative scenarios that can emerge from changing the paradigm and the attendant conditions.

This is what we seek to do in the AI Crisis Group. We are a group of researchers and practitioners from around the globe with various backgrounds (AI, human-computer interaction, informatics, law, political science, science and technology studies, etc.) and different degrees of engagement with AI. We invite anyone with an interest in these topics – activists, artists, intellectuals, policy makers, researchers, and members of the public – to join us in our conversations and in our efforts to understand and address these complex issues. If you are interested, you can reach us at hekbia(at)iu.edu.