Issue #90
by Gary Marchionini (UNC School of Information & Library Science)
Artificial Intelligence continues to dominate scientific, academic, business, and popular imagination as 2024 draws to a close. Several significant events illustrate how attention has intensified. Two Nobel Prizes announced in October went to scientists who strongly influenced AI foundations and applications.¹ The prize in physics was shared by two scholars who developed neural network algorithms: John Hopfield, for Hopfield networks that mimic associative memory, and Geoffrey Hinton, for Boltzmann machines. The prize in chemistry was shared by David Baker, who constructed novel proteins in the laboratory, and by Demis Hassabis and John Jumper, who created the AlphaFold system that helps scientists predict protein structures (the protein folding problem) and thereby guide protein design. These recognitions demonstrate both the growing significance of the computer and information sciences and the interdisciplinary and evolving nature of science and scholarship. Earlier the same week, Eric Schmidt, former CEO of Google, caused headlines and controversy with comments at an AI conference in Washington, DC, suggesting that humanity will not meet its climate change mitigation goals and so should speed ahead with massive, energy-consuming data centers so that AI can eventually solve the climate challenge.² Meanwhile, the fruits of scholarly discussions and debates led to publication in September, by the Stanford Cyber Policy Center along with the Digital Economy Lab at HAI, of The Digitalist Papers: Artificial Intelligence and Democracy in America.³ This set of essays by prominent legal, technical, and political scholars addresses the ways that AI and other technologies may affect democratic life. These three examples stand out in the contemporary stream of articles, podcasts, postings, conferences, and books devoted to AI.
This “all AI all the time” buzz surrounds the daily, incremental discussions and actions that information professionals and professors apply to work and learning. How many iSchool professors and students have already changed the way that they work and learn by using generative AI or other tools? What policies for how we execute and document our work have arisen in your workplace? In this moment of extreme attention, it is difficult to discern which AI developments are significant, let alone revolutionary. History tells us that socio-technical changes take place over years, decades, or generations rather than minutes and days. Just as Nobel Prize-worthy work gets done decades before its impact is honored, we would do well to examine how long-term efforts directed toward fundamental problems of information science are progressing.
In the midst of the high-profile developments above, I was pleased to read a paper in the October issue of Communications of the ACM, The Semantic Reader Project,* that reports progress on a fundamental information processing challenge that has long bedeviled humanity: how can we improve the consumption and understanding of written information? From Doug Engelbart’s 1960s grand challenge to augment the intellect through tools to ideate, write, and communicate, engineers and HCI researchers in our field have made significant progress on representing text (e.g., from word processing tools to PDF formats, markup, and policies), finding pertinent texts (e.g., search engines), collaborative writing (e.g., Google Docs), and multimedia augmentations of text. We have some understanding of how children learn to read and have developed tools and interventions to support that process, including for those with special needs (e.g., dyslexia, visual impairments). Helping humanity cope with the exponentially exploding amounts of technical and informal text in our lives remains a grand challenge for our field. We want to help individuals and groups browse, read, gloss, interpret, and understand the meaning of text and then integrate that understanding into their personal intellects. New tools and techniques are crucial to continued progress. The Semantic Reader Project is one example of how incremental and systematic research can be executed by interdisciplinary teams of scholars who apply intelligent user interfaces to assist readers. The project currently focuses on reading PDF documents; however, its design principles, user studies, and evolving tools illustrate how science progresses through patient and systematic work that adopts and applies new ideas and technologies to address long-standing problems of human information interaction and processing. Our students should be learning about this cadence of systematic, incremental group work driven by important problems.
Viral mentions, moments, and memes are ephemeral; broad-impact research, and especially Nobel Prizes, take time.
1: Nobel Prize Outreach AB. (October 14, 2024). The Nobel Prize. https://www.nobelprize.org/
2: Varanasi, K., & Niemeyer, L. (October 6, 2024). Former Google CEO Eric Schmidt says we should go all in on building AI data centers because 'we are never going to meet our climate goals anyway'. Business Insider. https://www.businessinsider.com/eric-schmidt-google-ai-data-centers-energy-climate-goals-2024-10
3: The Stanford Cyber Policy Center & Digital Economy Lab at HAI. (September 2024). The Digitalist Papers: Artificial Intelligence and Democracy in America. https://www.digitalistpapers.com/
*: Lo, K., et al. (September 19, 2024). The Semantic Reader Project. Communications of the ACM, 50-61. https://doi.org/10.1145/3659096