The 2024 Elkana Forum breaks new ground by addressing the epistemological and social questions currently pivotal to AI research from a multidisciplinary perspective that crosses the sciences-humanities divide.
The first decades of the twenty-first century have been marked by profound upheavals. Arguably, the two most significant disruptions have been the crumbling of the political order (in terms of both international security and the stability of liberal democracies) and the digitalization of knowledge, culminating most recently in the rise of generative AI. These twin crises have created deep uncertainty, shaking the foundations of how we understand knowledge, govern ourselves, and trust each other and our institutions.
The 2024 Elkana Forum, themed “Digital Trust,” will respond to these challenges. We aim to foster a multidisciplinary conversation exploring the complex connections between scientific and technological disruption, political fragmentation, and the changing ways we understand knowledge. We believe these issues are deeply intertwined, and understanding their connections is key to navigating the complexities of the digital age. With this Forum, we hope to take a first step towards establishing common ground between AI researchers and scholars interested in the dynamics of knowledge in its broader historical, philosophical, and political contexts.
At the heart of our inquiry lies the challenge of knowledge production and dissemination in an era dominated by AI-driven systems. Our emphasis here is on the interplay between epistemology and legitimacy: the factors that distinguish human intelligence from artificial intelligence, the limits and errors of each, and the operations of prediction and forecasting that have become so vital to modern science. Ethical and social considerations arising from generative AI and large language models (LLMs), such as biases in training data, privacy concerns, and the potential for misinformation, are deeply tied to these more fundamental epistemological questions. How we address them depends on our conception of knowledge, its sources, and its reliability.
Against this backdrop, the interplay between information technologies and the humanities has become an acute problem. Questions that have long occupied scholars of history, philosophy, and science and technology studies are now key to the design of LLMs and generative AI systems. These include the notion of “hallucinations,” degrees of validity (or “truth”), the definition of context, the distinction between language and concepts, and many others. We would like to explore how seemingly purely technical processes, such as model training and tokenization, transformer architectures, decoding algorithms, and prompt dynamics, intersect with deeper philosophical and historical inquiries into knowledge generation and representation. This reflection, which goes beyond the technology alone, may prove critical to understanding the broader implications of AI for human self-conceptions and societal norms. Indeed, the rise of generative AI and LLMs encourages and even compels us to reevaluate what it means to “think,” “reason,” and “understand.”
The changed role of knowledge systems and knowledge-based authority in society has coincided with the destabilization of democratic institutions and the challenging of political authority. The renaissance of populism, the questioning of the traditional separation of powers, and the rise of unelected actors all go hand in hand with the dramatic growth of digital and AI-driven platforms in the political arena. One could debate the causal relationship between these two processes, but their intricate interweaving cannot be ignored. How do we reconcile the efficiency and data-driven insights of algorithmic decision-making and LLM-generated knowledge with the need for public trust and legitimacy? What are the trade-offs among transparency, privacy, and political stability? What new images of knowledge and processes of legitimation may help address our current predicament?