Day-colloquium on Image Corpora and AI
October 8, 2024 • 9:30am-5pm
Room C-2059, Carrefour des arts et des sciences, Université de Montréal, and online
Under the leadership of Emmanuel Château-Dutier and Julien Schuh, our center is happy to organize a day-colloquium on Image Corpora and AI with the PictorIA consortium, which brings together various French research stakeholders around the challenges of automatic image processing.
This event gathers experts and scholars from various fields to explore the intersection of image collections, data structuring, and artificial intelligence. Through a series of presentations, experience-sharing sessions, and practical workshops, we aim to foster interdisciplinary collaboration and dialogue. This day-colloquium is preceded by a workshop (“Working with vast collections of digital images today”) held the day before.
[Register to receive the Zoom link.]
Program
- 9.30am-9.45am — Welcome
- 9.45am-12pm — Press photography archives
- 9.45am-10.30am: Thierry Gervais (The Image Centre, Toronto Metropolitan University), “Facing Black Star and Beyond”
In 2023, I co-edited with Vincent Lavoie the latest IMC Books volume, Facing Black Star. In this book, 18 scholars (established academics, recent graduates, and curators) discussed their encounters with the photo agency collection of black-and-white prints preserved at the Image Centre at Toronto Metropolitan University. Comprising almost 300,000 prints, this collection has generated many research expectations but also some frustrations due to its volume, its materiality, and its accessibility. The book was an opportunity to offer a snapshot of the state of research on this collection after more than a decade in a public institution. In this talk, I will discuss the outcomes of the book, the difficulties encountered when working with a large press photography collection, and how AI could serve as an effective tool.
- 10.30am-11.15am: Chantal Wilson (The Image Centre, Toronto Metropolitan University), “AI Driven Cataloguing: A Case Study for The Rudolph P. Bratty Family Collection”
In 2023, The Image Centre (IMC) digitized the more than 21,000 items that comprise the Rudolph P. Bratty Family Collection of press photographs drawn from the New York Times Photo Archive. The IMC worked collaboratively with Edward Burtynsky and Arkiv360 to automate the digitization process and effectively capture the collection, along with its folded captions, tear sheets, and attached ephemera, in a matter of weeks. Post-digitization, AI software developed by Arkiv360 was used to apply keywords and subject information in English and French prior to the IMC's cataloguing revisions. This case study will review workflows and metadata results, and propose new questions that have arisen from developing this project's AI-driven cataloguing approach.
- 11.15am-12pm: Béatrice Joyeux-Prunel (Université de Genève), “A Corpus for the Study of Globalization through Images: Challenges and Perspectives”
The “Visual Contagions” project employs automated image analysis to track the global circulation of images (patterns, subjects, styles, individual images) in illustrated periodicals, before moving on to studies using other sources, at different scales, and more traditional historical and critical methods. One of the main challenges in this endeavor is the preparation of a global corpus that is not only as representative and comprehensive as possible, but also allows for accurate dating and localization of the images being analyzed, while minimizing the loss of images during the segmentation and matching stages. Although there are numerous strategies to address this challenge, perhaps the biggest hurdle is moving away from the desire for perfect corpora and, above all, understanding what algorithmic manipulation does to the corpus, and thus what can truly be made of the results it yields.
- 12pm-1pm — Lunch
- 1pm-3pm — Methodologies
- 1pm-1.45pm: Robert Sanderson (Yale University), “IIIF and Linked Art: Putting the AI in FAIR”
This talk will explore how linked open usable data makes it easier to build baseline corpora, to annotate them with human or machine annotators, and then to use (generative or other) AI to enrich the knowledge graph for discovery and research.
- 1.45pm-2.30pm: Julien Schuh (Université Paris Nanterre), “Complexifying image annotation”
Annotation formats for training visual AI models (such as COCO) excessively simplify image content. How can we produce annotations that reflect the complexity of interpretive processes generated by the reception of images? We will propose reflections to operationalize various models of image reading and to produce annotations based on different objectives, approaches, levels of precision, and depth.
- 2.30-3pm — Coffee break
- 3pm-4pm — Panel Discussion: “Enriching Metadata with AI” – moderated by Emmanuel Château-Dutier (Université de Montréal); Panelists: Clarisse Bardiot, Emmanuel Château-Dutier, Robert Sanderson, Julien Schuh, Anne-Violaine Szabados (CNRS), Alice Truc (Université de Montréal)
- 4pm-5pm — Perspectives
- 4pm-4.45pm: Clarisse Bardiot (Université de Rennes 2), “Exploring the Visual Digital Traces of Performing Arts”
From Stage to Data is an ERC-funded project that began in 2024, exploring the paradigm shift of digital traces for the historiography of contemporary performing arts. Drawing on the Avignon Festival and its approximately 2,000 performances, and through the lenses of resurgence and collaboration, the project examines the historical and aesthetic dimensions of mise en scène since World War II. Three corpora are considered: the programs of the approximately 2,000 performances, thousands of photographs and videos, and the study of specific creative processes through their digital traces. For each of these corpora, machine learning techniques are applied to uncover creation contexts and networks, aesthetic resonances, and models of creative processes. The project is currently in the early stages of processing the data and setting up methods and workflows, which will be presented to show the ongoing development of the research.
- 4.45pm-5pm — Closing remarks
This content was last updated on October 2, 2024 at 10:57 am.