Issue #97
by Gary Marchionini (UNC School of Information & Library Science)
As 2024 draws to a close, informed scholars pause to reflect on the moment and look forward to the new year. The ‘all-AI-all-the-time’ drumbeat has continued to deafen, ranging from critics like Gary Marcus asking hard and embarrassing questions about technique and intent, to self-serving billionaires promoting technological determinism, with increasing numbers of thoughtful scholars like Arvind Narayanan and Sayash Kapoor (AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference)¹ filling the information spaces between. Even popular venues like the Economist’s The World Ahead 2025² include a top-ten trend on whether AI will sparkle or fizzle and whether the world’s trillion-dollar data center investments will yield better lives for humanity or become toxic energy sinks that blister our increasingly volatile planet.
The December issue of Communications of the ACM³ includes a rich suite of articles and opinion pieces on AI topics. These include an overview of the European Union AI Act and how it may influence public trust and risk (Bellogin, Grau, Larson, Schimpf, Sengupta, and Solmaz); a call to make AI more accessible to people with disabilities (Mankoff, Kasnitz, Camp, Lazar, and Hochheiser); and an interesting provocation by Meredith Ringel Morris titled “Prompting Considered Harmful.” Morris discusses two kinds of limitations of prompting as the user interface for AI. First, she rightly points out that prompting is not natural language interaction but stilted one-way prodding that, in today’s systems, is also subject to invisible backend changes, which together lead to errors and frustration. I and others have suggested that prompt engineering is an important focus of research and education for information professionals, harking back to librarians’ expertise with Boolean queries in early online search systems and subsequent work on natural language search strategies taught in what used to be called reference courses. On reflection, Morris is correct that we must be more ambitious in broadening how people interact with AI agents beyond today’s prompting. Her argument is that we must look toward more natural, high-bandwidth interfaces (e.g., direct manipulation, gesture, affective, and non-invasive brain UIs) and mixed-initiative systems that use the information seeker’s context to engage in something more like conversation, as well as offering more traditional constraint-based UIs such as menus or templates.
Morris’ second point is that prompting used by researchers and developers as part of their evaluation and reporting may lead to ‘prompt-hacking.’ This includes crafting many prompts to obtain a ‘good’ result without reporting how the crafting was done, not checking subtle prompt variants, and not testing prompts across multiple models or systems, or over time. She calls for journals and review committees to develop guidelines requiring authors to specify the details of their prompt engineering, a challenge close to the hearts and expertise of information professionals and scholars. This short paper is an example of how science progresses: questions arise about assumptions underlying fundamental problems, which in turn lead to systematic studies and policies, and which, over longer periods and in conjunction with related work, steadily drive us forward, connecting what we knew, know now, and hope to know in the future.
There will always be hype cycles, some of which turn out to be Gartner Hype Cycles that end in the eventual adoption and integration of innovation into ordinary ways of life. Dizzying hyperventilation over the ‘latest’ thing (e.g., cloud, blockchain, data science, large language models, generative AI, quantum computing, artificial life) typically marks iterations in an evolutionary process of human curiosity and adaptation. Significant technical, political, and economic changes perch on the 2025 horizon. To information scholars with long-term perspectives, I offer a new year’s wish: Buckle up, pay attention, and go forth with critical eyes wide open and fundamental optimism about human resilience.
1: Narayanan, A., & Kapoor, S. (2024). AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. Princeton University Press.
2: Standage, T. (November 18, 2024). The World Ahead: Tom Standage’s ten trends to watch in 2025. The Economist. https://www.economist.com/the-world-ahead/2024/11/18/tom-standages-ten-trends-to-watch-in-2025
3: ACM. (December 2024). Communications of the ACM, 67(12). https://cacm.acm.org/issue/december-2024/