Background

The special session of the Elkana Forum, “Research with Generative Chatbot AI? Rethinking Scientific Practice,” brings together leading experts from different fields, including AI specialists, to explore the broader implications of generative chatbot AI technologies (GC-AI) for research. The event is designed to foster wide-ranging dialogue through short statements and panel discussions. Its goal is to establish an interdisciplinary platform for inquiry into GC-AI technologies, considering them from a broad multidisciplinary, historical, and theoretical perspective.

Digital and AI-based methods have long played an important role in scientific research. Examples are numerous and can be found in fields ranging from biology (e.g., predicting protein structure and function, modeling regulatory networks) to the digital humanities (e.g., topic modeling). However, since the introduction of GPT-4, and in view of experiences with its deployment, it has become increasingly clear that AI may alter scientific research radically in the near future—possibly, a future that has already arrived. The new abilities of generative chatbots and the likely further development of large language models (LLMs) and similar software systems could radically change research methods as well as scholarly writing and publication practices. Documenting experimental results in natural science papers, composing philosophical arguments, and crafting historical narratives may soon become semi-automated or even fully automated processes carried out by AI.

One major challenge is GC-AI technology’s ability to generate scientific hypotheses and experimental designs when prompted appropriately. These two stages of the scientific method—the formulation of a hypothesis and the design of a research methodology—have traditionally been considered core competencies of good scientists, as well as an inseparable part of their agency. This apparent shift raises pressing fundamental questions about scientific agency, intellectual property, secrecy, patents, and industrial designs, to name just a few.

Another challenge raised by GC-AI is the need to rethink what textual authorship means. Noam Chomsky recently described the use of this technology as “high-tech plagiarism.” We think the problem goes deeper. At stake are pressing questions that require computational, legal, and philosophical competencies to answer: What is the causal relationship between the scientist as author and the words on paper or screen as the output she owns? Does authorship mean prompting or instigating a text, or does it have to mean writing that text in full? And how do possible revisions of these criteria of authorship and ownership affect related categories such as quotation, paraphrase, and originality? The answers to these questions may well vary across disciplines. The experimental sciences, for instance, may be more at ease with the automatic generation of texts than the humanities or law, where human authorship is the decisive criterion for an original scholarly contribution.

With the help of specially invited experts, we expect a fruitful exchange of ideas that will help shape the future of research in the service of humanity.