The futures of agency
Exploring the liminal spaces between action and responsibility
26 October 2023 (Thursday), 14:00-15:00, cowork

Weak signals of change: human intelligence and/or artificial intelligence?

High expectations now surround AI as a tool for the automated scanning of change signals. Acknowledging this potential, we propose HI+AI augmentation as a desirable model for futures studies, and for signals scanning in particular.
Based on our own applied tests of AI for scanning purposes, analysis of the existing body of work on AI-assisted futures research, and a comparative analysis of technological platforms for monitoring, scenario building and related tasks, we explore the following limitations of applying AI to futures research: (1) limitations of scope and of the digital representation of signals; (2) limitations of prioritization, as AI favors strong signals over weak ones; (3) limitations of AI and LLMs as black boxes, which call for specific insight-validation methods; and (4) limitations of noise and relevance, since filtering out noisy findings requires detailed prompts that may not yet be clear at the start of research.
The proposed topic is especially relevant for weak signals of change, which are often the key motivation for signals scanning. We explain the limitations, and ways to work with them, with reference to identifying weak signals with high potential impact.
However, the goal of this paper is not to dwell on current limitations but rather to suggest an approach for using AI tools to effectively augment human analysts. We discuss what each limitation implies for the use of AI and what can answer the current issues (e.g. the prioritization of stronger signals can be balanced by using inverse models).
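As a minimal sketch of the "inverse model" idea, the example below re-ranks scanned signals so that rarely mentioned but potentially high-impact items rise above widely reported ones. The Signal structure, the log-damped mention count, and the example scores are illustrative assumptions, not our actual implementation.

```python
# Hypothetical sketch: surface weak (rarely mentioned) signals by dividing
# an impact estimate by a log-damped prevalence count, inverting the usual
# bias toward strong, widely reported signals.
import math
from dataclasses import dataclass

@dataclass
class Signal:
    title: str
    mentions: int        # how often the signal appears in the scanned corpus
    impact_score: float  # analyst- or model-assigned potential impact, 0..1

def inverse_rank(signals: list[Signal]) -> list[Signal]:
    """Rank signals by impact divided by log-damped prevalence,
    so low-frequency, high-impact items come first."""
    return sorted(
        signals,
        key=lambda s: s.impact_score / math.log(2 + s.mentions),
        reverse=True,
    )

if __name__ == "__main__":
    # Purely illustrative corpus of scanned signals.
    corpus = [
        Signal("generative AI in newsrooms", mentions=950, impact_score=0.7),
        Signal("community-run weather sensing", mentions=12, impact_score=0.6),
        Signal("AI chip export controls", mentions=400, impact_score=0.8),
    ]
    for s in inverse_rank(corpus):
        print(f"{s.title}: {s.impact_score / math.log(2 + s.mentions):.3f}")
```

Under these assumed weights, the rarely mentioned signal outranks the heavily covered ones, which is the behaviour an inverse prioritization scheme is meant to restore.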

Head of research
Algorithm Trend Intelligence