From Idea to Durable Impact: What the cAIre Final Report Reveals About AI4Good
- Dra. Begoña G. Otero
- 5 days ago
- 4 min read

Today we publish the final report of cAIre Subproject 1.4 (AI4Good)[1], titled From Idea to Durable Impact. It is the outcome of eighteen months of research into why so many socially oriented AI initiatives fail to move from prototype to real-world impact, and what public policy can do to change that equation.
The starting point: many good ideas, few that survive
Europe's innovation ecosystem does not lack creativity or social motivation. The two editions of the Hackathon OdiseIA4Good prove the point: in 2025, over 300 participants developed 25 projects; in 2026, 112 teams from five continents competed, with 60 reaching the grand finale. Topics ranged from school-dropout prevention and adolescent depression to food traceability, labour-market access for persons with disabilities, and support for caregivers and social workers.
Yet the longitudinal follow-up of six projects from the 2025 cohort reveals a recurring pattern: the principal bottleneck is not technology, nor a shortage of ideas, nor team motivation. It is the structural fragility surrounding social innovation: the absence of legal form, the lack of institutional adoption channels, and a persistent mismatch between available support, which is designed for conventional start-ups, and what public-purpose projects actually need.
A problem of incentives, not just resources
The report argues that the gap between promising ideas and durable impact is best understood as a problem of misaligned incentives. Using principal-agent theory as an analytical lens, it shows that ecosystem actors (funders, juries, mentors, accelerators) claim to seek social impact, yet often reward what is easiest to observe in the short term: polished presentations, demonstrable prototypes, media visibility. Meanwhile, the elements essential for durability (governance, contextual adaptation, institutional embedding, long-term maintenance) are systematically undervalued.
This diagnosis connects with broader European debates on innovation, translating their macro-level analyses to the micro level of the social AI initiatives studied in this project.
Vulnerability: not only of beneficiaries, but of innovators themselves
One of the report's most significant findings is that vulnerability does not only affect the people whom projects aim to help. It also affects many of the innovator teams themselves: they work without stable funding, without organisational structure, and without institutional support. They are trying to design solutions for vulnerable contexts while operating under vulnerable conditions of their own. This carries direct policy implications: support measures should focus not only on end users, but also on the organisational resilience of those building public-interest AI.
A set of recommendations for policymakers
The report offers six concrete recommendations addressed to innovation policymakers:
- Redesign innovation support to value governance and long-term public impact, not only technical novelty.
- Enable harmonised legal and regulatory frameworks that sustain mission-oriented innovation.
- Strengthen public-sector adoption channels, using urban sandboxes and testing environments such as those already operating in several local areas in Spain.
- Foster AI localism, recognising that cities and local administrations are often the best arena for responsible experimentation and practical adoption.
- Connect recognition with continuity, ensuring that prizes and visibility form part of a sustained support pathway.
- Evaluate with governance-sensitive criteria, incorporating structural vulnerability, inclusion, accountability, institutional fit, and the capacity to sustain impact over time.
What this report does not claim
The findings presented here are based on a qualitative, case-level methodology centred on the Hackathon OdiseIA4Good and a small-scale longitudinal follow-up. The report does not claim statistical representativeness, nor does it attempt to draw continent-wide conclusions from a single programme. Its empirical base is deliberately limited in scale: six projects tracked over time, two hackathon editions observed in depth, and a curated set of practice-based documents analysed for recurring patterns. The connections drawn to broader European policy debates are analytical, not empirical: they situate the findings within a larger structural context, but they do not substitute for the larger comparative research that would be needed to generalise across jurisdictions and innovation systems. The value of this work lies in making visible patterns that are consistent with, and illustrative of, wider concerns, not in providing definitive answers. Future research, as intended by the OdiseIA4Good Foundation, ideally involving cross-programme and cross-country designs, would strengthen the evidence base considerably.
Looking ahead
The report also identifies a future research agenda centred on three key questions: which organisational models best sustain AI4Good initiatives over time, what conditions facilitate the transition from prototype to public-sector adoption, and how to adapt existing innovation-support instruments for AI4Good ventures. These questions are fundamental if the next cycle of innovation policy is to go beyond generating more ideas and instead create the conditions for the best ones to survive.
An invitation
This report is not an endpoint but a starting point for a debate Europe needs to have: how to make social innovation with AI more than a showcase of good intentions. If you work in innovation policy, impact finance, local public administration, or social-enterprise ecosystems, we invite you to read it and join the conversation.
To download the full report, click here
[1] This work is part of the cAIre project (Caring for vulnerable groups through AI Governance and fair AI in the workplace), within the Digital Futures Project funded by Google.org.




