Scientists have developed a new benchmark to assess the safety of artificial general intelligence (AGI) models. Designed as an early warning system, the benchmark evaluates factors such as decision-making autonomy, goal alignment, and scalability to flag potentially harmful AGI models before deployment. The aim is to mitigate the risks posed by AGI's capabilities and its potential for unintended consequences, such as damage to critical infrastructure or societal instability. Because AGI is developing rapidly and could be misused, proactive safety measures are needed, making the benchmark a crucial tool for responsible AI development. Ultimately, it aims to ensure that AGI benefits society while minimizing existential risks.
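
The evaluate-and-flag workflow described above can be sketched in code. This is a purely illustrative toy, not the actual benchmark: the factor names, weights, and threshold (`AgiRiskProfile`, `WEIGHTS`, `DEPLOYMENT_THRESHOLD`, `flag_before_deployment`) are all hypothetical placeholders chosen to show how per-factor scores might be aggregated into a single pre-deployment decision.

```python
from dataclasses import dataclass

@dataclass
class AgiRiskProfile:
    """Hypothetical per-factor risk scores in [0, 1]; higher means riskier."""
    autonomy: float      # decision-making autonomy
    misalignment: float  # degree of goal misalignment
    scalability: float   # capability-scaling potential

# Illustrative weights and cutoff -- not taken from any published benchmark.
WEIGHTS = {"autonomy": 0.4, "misalignment": 0.4, "scalability": 0.2}
DEPLOYMENT_THRESHOLD = 0.6

def risk_score(p: AgiRiskProfile) -> float:
    """Weighted aggregate of the evaluated factors."""
    return (WEIGHTS["autonomy"] * p.autonomy
            + WEIGHTS["misalignment"] * p.misalignment
            + WEIGHTS["scalability"] * p.scalability)

def flag_before_deployment(p: AgiRiskProfile) -> bool:
    """True if the model should be held back for further safety review."""
    return risk_score(p) >= DEPLOYMENT_THRESHOLD

low = AgiRiskProfile(autonomy=0.2, misalignment=0.1, scalability=0.3)
high = AgiRiskProfile(autonomy=0.9, misalignment=0.8, scalability=0.7)
print(flag_before_deployment(low))   # False: aggregate score 0.18
print(flag_before_deployment(high))  # True: aggregate score 0.82
```

A real benchmark of this kind would replace the hand-picked weights with empirically validated evaluations per factor, but the overall shape (score each factor, aggregate, compare against a deployment threshold) matches the early-warning role described above.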