The Frontier Safety Framework Version 1.0

Sunday 19th May, 2024 - Bruce Sterling

*I have yet to read any “AI Safety” document that made me feel any safer.

*Not that I fret all the time about AI safety; it’s that the documents themselves don’t make much sense or inspire any confidence. They read like scholastic theology about angels on pins.

https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/introducing-the-frontier-safety-framework/fsf-technical-report.pdf

Frontier Safety Framework
Version 1.0

The Frontier Safety Framework is our first version of a set of protocols that aims to address severe risks that may arise from powerful capabilities of future foundation models. In focusing on these risks at the model level, it is intended to complement Google’s existing suite of AI responsibility and safety practices, and enable AI innovation and deployment consistent with our AI Principles.

In the Framework, we specify protocols for the detection of capability levels at which models may pose severe risks (which we call “Critical Capability Levels (CCLs)”), and articulate a spectrum of mitigation options to address such risks. We are starting with an initial set of CCLs in the domains of Autonomy, Biosecurity, Cybersecurity, and Machine Learning R&D.

Risk assessment in these domains will necessarily involve evaluating cross-cutting capabilities such as agency, tool use, and scientific understanding. We will be expanding our set of CCLs over time as we gain experience and insights on the projected capabilities of future frontier models.
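The Framework describes this detect-then-mitigate protocol only in prose. Purely as an illustration of the structure it implies, here is a minimal Python sketch of a CCL registry with per-domain evaluation thresholds and escalating mitigations. Every class name, score, threshold, and mitigation below is invented for the example; none of it comes from the DeepMind document.

```python
from dataclasses import dataclass
from enum import Enum


class Domain(Enum):
    # The four initial CCL domains named in the Framework.
    AUTONOMY = "autonomy"
    BIOSECURITY = "biosecurity"
    CYBERSECURITY = "cybersecurity"
    ML_RND = "machine_learning_rd"


@dataclass(frozen=True)
class CriticalCapabilityLevel:
    """One hypothetical CCL: a capability threshold in a domain,
    paired with a spectrum of mitigation options (mildest first)."""
    domain: Domain
    description: str
    threshold: float              # invented eval score that triggers the CCL
    mitigations: tuple[str, ...]  # invented mitigation options


def triggered_ccls(scores, ccls):
    """Return every CCL whose domain's eval score meets or exceeds
    its threshold."""
    return [c for c in ccls if scores.get(c.domain, 0.0) >= c.threshold]


# Invented example values, for illustration only.
CCLS = [
    CriticalCapabilityLevel(
        domain=Domain.CYBERSECURITY,
        description="Can autonomously find and exploit known vulnerabilities",
        threshold=0.8,
        mitigations=("tighten weight security", "gate or restrict deployment"),
    ),
]

eval_scores = {Domain.CYBERSECURITY: 0.85}
for ccl in triggered_ccls(eval_scores, CCLS):
    print(f"CCL reached in {ccl.domain.value}: consider {ccl.mitigations}")
```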

We aim to have this initial framework implemented by early 2025, which we anticipate should be well before these risks materialize. The Framework is exploratory and based on preliminary research, which we hope will contribute to and benefit from the broader scientific conversation. It will be reviewed periodically and we expect it to evolve substantially as our understanding of the risks and benefits of frontier models improves….