The path forward for Humans + AI forecasting and foresight

Forecasting and foresight are intrinsically human, as Marty Seligman and colleagues show in their wonderful book Homo Prospectus: “it is anticipating and evaluating future possibilities for the guidance of thought and action that is the cornerstone of human success.”

Yet AI can, in particular conditions, be exceptionally competent at forecasting. It is also showing strong capabilities in the broader domain of foresight.

In a Humans + AI world, the question is how we best combine human and AI capabilities for effective forecasting and foresight.

Research in AI-Augmented predictions

Still a strong reference in the space is last year’s paper AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy.

I interviewed the lead author Philipp Schoenegger for the Humans + AI podcast shortly after the paper came out, and summarized the paper here, including the finding that LLM assistance improved human forecasters’ prediction accuracy by 23%.

Humans + AI implications

The implications of the paper are that the era of the lone human forecaster is officially over; the baseline for high performance is now the “centaur”: a synergistic human-AI team. While this research confirms the extraordinary potential, it also illuminates the immediate, critical work required to move from these initial experiments to robust, high-value foresight capabilities.

The findings in the paper suggest four areas of focus.

Primary domains for improving Humans + AI foresight

Shift focus from AI instruction to interaction design

The most profound insight is that the process of engaging with a quantitative AI may be as powerful as the quality of its advice. The fact that a deliberately “noisy,” biased assistant still generated significant accuracy gains reveals that the true value lies in the dialogue—the structured back-and-forth that forces deliberation and surfaces assumptions.

The immediate research imperative is therefore to stop obsessing over the “perfect prompt” and begin designing and testing a portfolio of interaction modalities. We must build AI not as an oracle, but as a Socratic partner, a sparring partner, and a dedicated Red Team analyst.

Engineer for resilience, not just accuracy

The study’s results proved to be powerful but brittle, with a single outlier question dramatically skewing the outcomes. This demonstrates a critical vulnerability of current approaches. Our next wave of work must be to design for robustness: we need to build systems that can sense and adapt to anomalous data, high uncertainty, or user confusion.

The immediate engineering task is to create interfaces where the AI can transparently flag its own confidence levels and actively stress-test the user’s assumptions, building an anti-fragile capability that can be trusted when the stakes are high.

Redefine the core skills of the expert forecaster

The research challenged the prevailing narrative that AI primarily helps novices, finding the accuracy uplift was not significantly different for high- and low-skilled forecasters. This implies the emergence of an entirely new meta-skill: the ability to architect insight in collaboration with a non-human intelligence.

The most valuable human experts will no longer be those with the best recall, but those best able to interrogate AI systems, identify hybrid biases, and design effective human-machine workflows. The urgent task for organisations is to begin retraining their analysts for this new reality, focusing on strategic collaboration as a core competency.

Scale from the centaur to the collective intelligence “herd”

While the research showed that individual performance was enhanced, the impact on collective intelligence was ambiguous. We now have proof of concept for the single centaur; the grand challenge is orchestrating a “wise herd.” Further work must explore how to use AI to augment and protect the cognitive diversity that is the very foundation of crowd wisdom.

This requires designing platforms where a network of AIs can act as facilitators, deliberately introducing conflicting viewpoints, surfacing minority opinions, and preventing the premature convergence that kills collective insight. This is the most fertile and vital ground for the next phase of research.
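To make the “wise herd” idea concrete, a common baseline from the forecasting literature is to pool individual probability forecasts in log-odds space, with an optional extremization factor to correct for shared information across forecasters. The sketch below is illustrative only (the function name and extremization value are assumptions, not from the paper):

```python
import math

def aggregate_forecasts(probs, extremize=1.0):
    """Pool individual probability forecasts via mean log-odds.

    probs: list of probabilities in (0, 1) from different forecasters.
    extremize: values > 1 push the pooled estimate away from 0.5,
    a common correction when forecasters share overlapping information.
    """
    logits = [math.log(p / (1 - p)) for p in probs]
    pooled = extremize * sum(logits) / len(logits)
    return 1 / (1 + math.exp(-pooled))

# Three forecasters who disagree about an event:
pooled = aggregate_forecasts([0.6, 0.7, 0.8])
print(round(pooled, 3))  # roughly 0.707
```

Note that this kind of mechanical pooling is exactly what the cognitive-diversity concern is about: if AI assistance nudges all forecasters toward similar inputs, the pooled estimate can become confidently wrong, which is why the facilitation role described above matters.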


If you are interested in this topic please join the world’s leading community on AI-Augmented Foresight in the Humans + AI Explorers Community.

Jul 15, 2025