evald.ai Sources

Google DeepMind Evaluation Filter

Protecting People from Harmful Manipulation

Google DeepMind releases new findings and an evaluation framework to measure AI's potential for harmful manipulation in areas like finance and health, with the goal of enhancing AI safety.

Deepening AI Safety Research with UK AI Security Institute (AISI)

Google DeepMind and the UK AI Security Institute (AISI) are strengthening their collaboration through a new research partnership focused on critical safety research areas, such as monitoring AI reasoning and evalua…

Google DeepMind strengthens the Frontier Safety Framework

Today, we’re publishing the third iteration of our Frontier Safety Framework (FSF), our most comprehensive approach yet to identifying and mitigating severe risks from advanced AI models. This updat…

Taking a responsible path to AGI

We’re exploring the frontiers of AGI while prioritizing technical safety, proactive risk assessment, and collaboration with the broader AI community.