

Interdisciplinary Specialisation Module VIAD – The Rule of Algorithmic Decisions


Explorations of the aesthetics of algorithmic decision-making
Also offered for
Number and Type | BDE-VIAD-V-I-5555.06.21H.001 / Module Offering |
---|---|
Module | Interdisciplinary Specialisation Module VIAD – The Rule of Algorithmic Decisions |
Organizer | Department of Design |
Leaders | Dr. phil. Björn Franke, NN |
Time | Tue 26 October 2021 to Fri 12 November 2021 / 8:30 a.m.–5 p.m. |
Number of Participants | 8–18 |
ECTS | 4 Credits |
Prerequisites | None |
Teaching Format | Design studio |
Target Groups | Compulsory elective module, Bachelor of Design, 5th semester |
Learning Objectives / Competencies | The design studio course aims to stimulate a critical examination of the role of algorithms in human decision-making and of the biases and ideologies they imply. |
Contents | Decisions made by machines often appear to be objective, rational and free of arbitrariness and bias. Countless films portray this view of computerised machines whose only fault seems to be that they act too “rationally.” However, algorithmic decision-making is by no means free of bias: on the one hand, the construction of such systems follows a particular ideology and worldview; on the other, their selection mechanisms and decision-making processes depend on the data available to them for decision-making and training. The bias of algorithms is also evident in less visible ways, where decisions favour one group of people over another. In the vast majority of cases this happens unintentionally and without the developers being aware of it (“indirect discrimination”). Especially in the area of social equality and justice, seemingly fair decisions can lead to gross injustices that favour one group over another. In this design studio course, we will materialise and visualise the problems of algorithmic decision-making in both social and personal contexts, based on texts, films and artefacts. Through design experiments and explorations, we will speculate about situations of algorithmic decision-making through various media, for example words, sounds, images, artefacts or performances. The outcome of our investigation will be presented in a final exhibition. |
Bibliography / Literature | Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (London: Penguin Books, 2017). Michael Kearns and Aaron Roth, The Ethical Algorithm: The Science of Socially Aware Algorithm Design (Oxford: Oxford University Press, 2019). Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: St. Martin's Press, 2017). Sarah Brayne, Predict and Surveil: Data, Discretion, and the Future of Policing (Oxford: Oxford University Press, 2020). Brian Jefferson, Digitize and Punish: Racial Criminalization in the Digital Age (Minneapolis: University of Minnesota Press, 2020). |
Assessment / Attendance Requirements | Active and regular attendance (min. 80%); reading; practical course work; presentations; exhibition. |
Dates | 26 October – 12 November 2021 (excluding Mondays) |
Duration | 3 weeks |
Grading | Grades A–F |
Remarks | The seminar will be in English and German, with texts and films being mainly in English. |
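The “indirect discrimination” described in the module contents can be sketched in code: a scoring rule that never looks at group membership can still produce unequal outcomes when it relies on a proxy feature (here, a postcode-based score) that correlates with one group. All names, data and thresholds below are invented purely for illustration.

```python
# Hypothetical loan-scoring sketch: the rule is "neutral" (it never sees
# the group label), yet group B is disadvantaged because its members
# tend to have low postcode scores -- a proxy for group membership.
applicants = [
    # (group, postcode_score, income_score) -- invented data
    ("A", 0.9, 0.6),
    ("A", 0.8, 0.5),
    ("A", 0.7, 0.7),
    ("B", 0.3, 0.6),
    ("B", 0.4, 0.8),
    ("B", 0.2, 0.7),
]

def approve(postcode_score: float, income_score: float,
            threshold: float = 0.55) -> bool:
    # Seemingly objective rule: average two numeric scores.
    return 0.5 * postcode_score + 0.5 * income_score >= threshold

def approval_rate(group: str) -> float:
    # Share of a group's applicants that the rule approves.
    members = [a for a in applicants if a[0] == group]
    approved = [a for a in members if approve(a[1], a[2])]
    return len(approved) / len(members)

print(f"Group A approval rate: {approval_rate('A'):.2f}")
print(f"Group B approval rate: {approval_rate('B'):.2f}")
```

No developer intent is modelled here; the disparity emerges solely from the correlated input data, which is the mechanism of indirect discrimination the course examines.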