Previously in VOILA!

Season 2

Thursday, October 3, 2024, 4:00 pm to 6:00 pm: AI, Data Science and Social Justice, Pr. Ben Green

Conference "Algorithmic Realism: Data Science Practices to Promote Social Justice"
By Pr. Ben Green


Registration is now closed.



Abstract

Data scientists have confronted a gap between their desire to improve society and the harmful impacts of their creations. While algorithms once appeared to be valuable tools that advance social justice, today they appear to be dangerous tools that exacerbate inequality and oppression.

In this talk, Ben Green argues that improving society with algorithms requires transforming data science from a formal methodology focused on mathematical models into a practical methodology for addressing real-world social problems. The current approach to data science - which he calls “algorithmic formalism” - focuses on the formal mathematical attributes of algorithms. As a result, even when data scientists follow disciplinary standards of rigor, their attempts to improve society often entrench injustice.

In response to these limitations, he introduces “algorithmic realism,” a data science methodology that designs and evaluates algorithms with a focus on real-world impacts. Algorithmic realism expands the data science pipeline, providing concrete strategies for how data scientists can promote social justice. This new methodology also suggests more socially beneficial directions for the burgeoning fields of data science ethics, algorithmic fairness, and human-centered data science.

Bio

Ben Green is Assistant Professor in the School of Information at the University of Michigan and Adjunct Professor (by courtesy) at the Gerald R. Ford School of Public Policy. He holds a PhD in applied mathematics from Harvard University, with a minor in science, technology and society.

Ben Green studies the ethics of government algorithms, focusing on algorithmic fairness, human-algorithm interactions and AI regulation.

Through his research, Ben Green aims to support design and governance practices that prevent algorithmic harm and advance social justice.

His first book, The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future, was published in 2019 by MIT Press. He is working on a second book, Algorithmic Realism: Data Science Practices to Promote Social Justice. Ben Green is also an Associate at Harvard's Berkman Klein Center for Internet & Society and a Research Fellow at the Center for Democracy & Technology.

Schedule

The round-table discussion following the conference will address the theme of AI, Data Science and Social Justice, and will be moderated by Prof. Lucile Sassatelli of Université Côte d'Azur and Rémy Sun, researcher at Inria.
Pr. Green's presentation and the round-table will be in English, with the option of French subtitles.

Thursday, November 7, 2024, 4:00 pm to 6:00 pm: AI and (non)existential risk, Pr. Jack Stilgoe (event postponed)

Conference "The organized irresponsibility of artificial intelligence"
By Pr. Jack Stilgoe


SEMINAR POSTPONED
For personal reasons, Jack Stilgoe will not be able to present his lecture on November 7, so the event has been postponed until season 3.




Abstract

As the promises of artificial intelligence attract growing social, political and financial attention, risks and responsibilities are being imagined in ways that serve the interests of a technoscientific elite. In the UK and elsewhere, organisations are starting to institutionalise a mode of governance that presumes to know and take care of public concerns. And new research communities are forming around questions of AI ‘safety’ and ‘alignment’. In this talk, Jack Stilgoe will draw on research into public and expert attitudes and reflect on his role as a proponent, analyst and actor in debates about ‘Responsible AI’.

Bio

Jack Stilgoe is Professor of Science and Technology Studies at University College London, where he studies the governance of emerging technologies. He is a member of the UKRI Responsible AI leadership team. He was principal investigator of the ESRC Driverless Futures project (2019-2022). He worked with EPSRC and ESRC to develop a framework for responsible innovation that is now used by the Research Councils. Among other publications, he is the author of "Who's Driving Innovation?" (2020, Palgrave) and "Experiment Earth: Responsible innovation in geoengineering" (2015, Routledge). Previously, he worked in science and technology policy at the Royal Society and the Demos think tank. He is a Fellow of the Turing Institute and a Trustee of the Royal Institution.

Schedule

The round-table discussion following the conference will address the theme of AI and (non)existential risk, and will be moderated by researchers from Université Côte d'Azur.
Pr. Stilgoe's presentation and the round-table will be in English.
Refreshments will be served at the end of the round table.

Thursday, December 5, 2024, 4:00 to 6:00 pm: AI and industrial policy, Pr. Susannah Glickman

Conference "The Politics of Exponential Growth: AI, Hardware and Industrial Policy"
By Pr. Susannah Glickman

Replay of the conference "The Politics of Exponential Growth: AI, Hardware and Industrial Policy" available on our YouTube channel.

Registration is now closed.



Abstract

Discussions of AI and technology policy more generally have often elided the role of the state. This elision is a product of the shifting politics, institutions and infrastructures around this industry but it does not reflect an honest accounting of the historic role of industrial policy. This talk will examine this oft-neglected history, braiding together political histories, ideological shifts, political economy, and material infrastructure in order to give an account and appraisal of the current resurgence of interest in tech industrial policy.

Bio

Susannah Glickman is an assistant professor at Stony Brook University. Her research and teaching focus on the history and political economy of computation and information through the transformations in global American science that occurred at the end of the Cold War. Relatedly, she is interested in how institutions deal with the category of the future, history and the origins of the category “tech.” She has a background in mathematics and anthropology and works between the fields of science and technology studies and history, mixing archival and oral history methods.

Her current book project examines the infrastructures that have historically made ever-improving semiconductors and quantum technologies possible, with particular attention to how ideology and other kinds of narratives are translated into policy and granular practices, and how, reciprocally, those material practices are translated back into ideology.

Schedule

The round-table discussion following the conference will address the theme of AI and industrial policy and will be moderated by Prof. Ludovic Dibiaggio, lecturer and researcher at Université Côte d'Azur (GREDEG), and Prof. Simone Vannuccini, Junior Professor Chair in the Economics of Artificial Intelligence and Innovation at Université Côte d'Azur.
Pr. Glickman's presentation and the round-table will be in English, with the option of French subtitles.

Thursday, December 19, 2024, 4:00 pm to 6:00 pm: AI and the environment, Pr. Aurélie Bugeau

Conference "Quels impacts environnementaux pour l'IA ?" ("What are the environmental impacts of AI?")
By Pr. Aurélie Bugeau

Replay of the conference "Quels impacts environnementaux pour l'IA ?" available on our YouTube channel.

Registration is now closed.



Abstract

Artificial intelligence is presented as an essential tool for adaptation and mitigation of environmental problems. Nevertheless, a growing number of studies show that the environmental impacts and energy costs of AI methods can be significant, especially when considering the entire lifecycle of the AI service and the digital equipment required. In this presentation, I'll start by explaining why we're interested in the link between AI and the environment, and then outline the state of knowledge on the environmental impacts of AI.

Bio

Aurélie Bugeau is Professor of Computer Science at the University of Bordeaux and has been a junior member of the IUF since 2022. She conducts her research at LaBRI, within the Numérique et Soutenabilité (NeS) team. Her research initially focused on image processing and analysis, particularly for image and video restoration applications. Since 2020, she has focused on the environmental impacts of digital technology, and on how to measure and predict them.

Schedule

The round-table discussion following the conference will address the theme of AI and the environment and will be moderated by Martine Olivi and Guillaume Urvoy-Keller, both researchers from Université Côte d'Azur.
Pr. Bugeau's presentation and the round-table will be in French, with the option of English subtitles.

Season 1

EFELIA-3IA Côte d'Azur - VOILA seminar season 1
> April 11, 2024, 4-6pm: AI and work

By Pr. Paola Tubaro, CNRS and ENSAE

"The global work of AI: A journey between France, Brazil, Madagascar and Venezuela"

Abstract: Work plays a major, if largely unrecognized, role in the development of artificial intelligence (AI). Machine learning algorithms rely on data-intensive processes that call on humans to perform repetitive, hard-to-automate yet essential tasks such as labeling images, sorting items from lists and transcribing audio files. Networks of subcontractors recruit "data workers" to perform these tasks, often in low-income countries where labor markets stagnate and the informal economy dominates. We take a closer look at the working conditions and profiles of data workers in Brazil, Venezuela, Madagascar and, as an example from a wealthier country, France.
The globalized supply chains that link these workers to the main AI production sites extend the economic dependencies of the colonial era and reinforce the inequalities inherited from the past.

Bio: Paola Tubaro is Director of Research at CNRS and Professor at ENSAE. Trained as an economist turned sociologist, she conducts interdisciplinary research at the crossroads of social sciences, network analysis and computer science. She is currently studying the place of human labor in global artificial intelligence production networks, social inequalities in work on digital platforms, and the social mechanisms of online disinformation production and dissemination. She has also published in the fields of data and research methods and ethics.

Schedule: The round table from 5 to 6 p.m. following the presentation will address the theme of AI and work, and will be moderated by Léonie Blaszyk-Niedergang, doctoral student in law at Université Côte d'Azur, and Gérald Gaglio, professor of sociology at UniCA. Pr. Tubaro's presentation and the round-table discussion that follows will be in French; the slides will be in English.

Replay of the conference "The global work of AI: A journey between France, Brazil, Madagascar and Venezuela" available on our YouTube channel.


> April 18, 2024, 4-6pm: AI and ethics

By Pr. Giada Pistilli, Senior Ethicist, Hugging Face

"Exploring the ethical dimensions of major language models across languages"

Abstract: This talk presents a study of the ethical implications of large language models (LLMs) in several languages, based on philosophical concepts of ethics and applied ethics. Using a comparative analysis of open and closed LLMs with prompts on sensitive issues translated into several languages, the study employs qualitative methods such as thematic analysis and content analysis to examine the outcomes of LLMs for ethical considerations, in particular looking at cases where models refuse to respond or trigger content filters. Key areas of investigation include the variability of LLM responses to identical ethical questions in different languages, the effect of question wording on responses, and the consistency of LLM refusals to answer value-laden questions in varied linguistic and thematic contexts.

Bio: Giada Pistilli is a philosophy researcher specializing in ethics applied to conversational AI. Her research focuses on ethical frameworks, value theory, applied and descriptive ethics. After obtaining a master's degree in ethics and political philosophy at Sorbonne University, she continued her doctoral research at the same faculty. Giada is also the lead ethicist at Hugging Face, where she conducts philosophical and interdisciplinary research on AI ethics and content moderation.

Schedule: The round-table discussion from 5pm to 6pm following the presentation will address the theme of AI and ethics. It will be moderated by Frédéric Precioso, Professor of Computer Science and AI at UniCA, and Jean-Sébastien Vayre, Senior Lecturer in Sociology at UniCA. Giada Pistilli's presentation will be in English, while the subsequent round table will be in French.

Replay of the conference "Exploring the ethical dimensions of major language models across languages" available on our YouTube channel.


> May 23, 2024, 4-6pm: AI and bias

Place: Inria Sophia Antipolis (Amphi Kahn Morgenstern)

By: Pr. Sachil Singh, York University

"The datafication of healthcare: An eye on racial surveillance and unintended algorithmic biases."

Abstract: Algorithms are increasingly used in healthcare to improve hospital efficiency, reduce costs and better inform patient diagnoses and treatment plans. Once implemented, algorithms may strike end-users as abstract, autonomous and detached from their designers. On the contrary, I present preliminary findings from ongoing interviews with data scientists to challenge the perception of algorithms as objective, neutral and unbiased technologies. I also share concerns about patient surveillance, particularly in relation to the collection of racial data purportedly to improve healthcare algorithms. Add to this healthcare professionals' own racial biases, and the cumulative impact can exacerbate already existing racial inequalities, even beyond healthcare.

Bio: Sachil Singh is Assistant Professor of Physical Culture and Health Technologies in Computerized Societies. Based in the Faculty of Health at York University in Toronto, his main areas of research are medical sociology, surveillance, algorithmic biases and race. A sociologist by training, Dr. Singh is currently studying the unintended biases of data scientists in their creation of healthcare algorithms. He is also co-editor of the top-ranked interdisciplinary journal Big Data & Society.

Schedule: The round table from 5 to 6 p.m. following the presentation will address the theme of AI and socio-technical systems. It will be moderated by Anne Vuillemin, Professor of Science and Techniques of Physical and Sports Activities (STAPS) at UniCA, and Valentina Tirloni, Senior Lecturer (HDR) in Communication Sciences at Université Côte d'Azur. Sachil Singh's presentation will be in English, while the round-table discussion that follows will be in English and French.

Replay of the conference "The datafication of healthcare: an eye on racial surveillance and unintended algorithmic biases" available on our YouTube channel.


> June 6, 2024, 4-6pm: AI and language technologies for education

This session takes the form of a cineforum, where we encourage everyone in the audience to engage in exchanges and shared reflections. We'll start by watching Professor Emily M. Bender's recorded presentation on "Meaning making with artificial interlocutors and risks of language technology". Professor Bender is co-author of major contributions (such as [1]) identifying possible risks associated with large language models (such as ChatGPT) and possible measures to mitigate these risks. We will then discuss the video together, taking your questions and comments and providing further clarification and explanation from experts in large language models. We'll offer you a framework for exchanges around the applications of language models to teaching at university, in order to start thinking about how best to approach current moves to incorporate AI tools into teaching.

[1] E. M. Bender, T. Gebru, A. McMillan-Major, and S. Shmitchell, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?," in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada: ACM, Mar. 2021, pp. 610–623. doi: 10.1145/3442188.3445922.


Watch Professor Emily M. Bender's presentation on "Meaning making with artificial interlocutors and risks of language technology".