
cAIre Project

cAIre: “Caring for vulnerable groups through AI Governance and fair AI in the workplace”. We research AI governance, reimagine the future of work and strengthen pan-European digital collaboration for a positive, equitable and inclusive impact on society.


Driving AI Governance, the Future of Work and Pan-European Collaboration

The project investigates governance approaches, explores how AI can support vulnerable groups, and promotes collaboration between grantees of the Digital Futures Fund in Europe.

  • AI Governance and Opportunities for Vulnerable Groups: Research on AI governance and support for vulnerable groups.

  • Impact of AI on the Future of Work: Analysis of work transformation and development of educational programmes.

  • Pan-European Digital Collaboration: Promoting the exchange of ideas and best practices in Europe.

AI Documentation Directory

Within the framework of OdiseIA's cAIre action (cAIre Project), this Artificial Intelligence Resource Directory on Governance and Employment Impact brings together the largest collection of reports, specialised literature, judgments, web resources, tools, guides, legislation and case law (filter by ‘document type’) on Artificial Intelligence Governance and the Employment Impact of Artificial Intelligence, as well as cybersecurity and education.

The directory is compiled by more than 60 team members organised into the subgroups below.

AI Governance

Documentation on AI governance initiatives, their impact on democratic processes, AI for Good initiatives and associated jurisprudence.

Employment impact

Documentation, from multiple perspectives, on the impact that AI has and will have on employment: success stories, professions of the future, inclusion and risks.

Publications

1.1 Retrieval-Augmented Generation (RAG)

This document details the specifications and development of a Retrieval-Augmented Generation (RAG) system designed to query the European AI Act. The system was built with LangChain and combines Hugging Face embeddings (the all-mpnet-base-v2 model), the OpenAI GPT-4 model and the Pinecone vector database.
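The publication itself does not reproduce source code here; the following is a minimal sketch of such a pipeline using LangChain's integration packages (langchain-huggingface, langchain-pinecone, langchain-openai). It assumes a Pinecone index, hypothetically named "eu-ai-act", already populated with embedded chunks of the Regulation, and OPENAI_API_KEY / PINECONE_API_KEY set in the environment; the chain actually used in the cAIre system may differ.

# Minimal RAG sketch over the EU AI Act (illustrative, not the project's code).
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_openai import ChatOpenAI
from langchain_pinecone import PineconeVectorStore
from langchain.chains import RetrievalQA

# Embeddings: the sentence-transformers model mentioned in the publication.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

# Vector store backed by a (hypothetical) Pinecone index holding AI Act chunks.
vector_store = PineconeVectorStore(index_name="eu-ai-act", embedding=embeddings)

# GPT-4 answers questions grounded in the retrieved articles.
llm = ChatOpenAI(model="gpt-4", temperature=0)
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",  # concatenate the retrieved chunks into the prompt
    retriever=vector_store.as_retriever(search_kwargs={"k": 4}),
)

answer = qa_chain.invoke({"query": "Which AI practices does the AI Act prohibit?"})
print(answer["result"])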

1.1 Essential aspects. Protected and vulnerable subjects

The EU AI Regulation, a global regulatory milestone, must be considered as a whole, covering both AI systems and general-purpose AI models. It should therefore be borne in mind that, with limited exceptions, virtually any AI tool developed and/or used in the EU is subject to the AI Act. Its study should accordingly be approached from a holistic perspective, since any AI system or model subject to the Regulation is bound by the objectives set by the EU co-legislators.

1.1 General-purpose AI models

Contents

1. Regulation of AI systems. A horizontal regulatory approach

2. The emergence of general-purpose AI models within the horizontal regulatory approach

3. Codes of good practice

1.1 We all (go through situations in which we) are vulnerable

I've met a few astronauts. Their image is the furthest thing imaginable from that of a vulnerable being. Their level of physical and intellectual preparation, their composure, and their ability to reason under pressure are not inventions of the movies. However, when, in the film 2001: A Space Odyssey, astronaut David Bowman orders HAL 9000 to open the hatch of the Discovery, the actor Keir Dullea, who plays Dr. Bowman, is the very image of vulnerability...

1.3 Another inconvenient truth: the social emergency of AI incidents - We must do something about it

1.3 Assessment of the impact of AI externalities on society and vulnerable groups

A taxonomy of AI externalities affecting vulnerable individuals, differentiating between technical and socio-psychological factors according to the context.
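As a purely illustrative sketch of how such a taxonomy could be encoded, the categories below follow the technical versus socio-psychological split described above; the specific externality names, contexts and groups are hypothetical examples, not taken from the publication.

# Illustrative encoding of a taxonomy of AI externalities (hypothetical entries).
from dataclasses import dataclass
from enum import Enum

class FactorType(Enum):
    TECHNICAL = "technical"                        # e.g. biased data, opacity
    SOCIO_PSYCHOLOGICAL = "socio-psychological"    # e.g. loss of autonomy, exclusion

@dataclass
class Externality:
    name: str
    factor_type: FactorType
    context: str          # deployment context in which the harm arises
    affected_group: str   # vulnerable group primarily exposed

# Hypothetical examples showing how entries would be classified.
examples = [
    Externality("algorithmic bias in hiring", FactorType.TECHNICAL,
                "recruitment platforms", "older workers"),
    Externality("over-reliance on automated advice", FactorType.SOCIO_PSYCHOLOGICAL,
                "public-service helplines", "users with low digital literacy"),
]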

Google lines of work

Media coverage

Lines of work
