The United Kingdom released its five “ambitions” for its global artificial intelligence (AI) safety summit on Sept. 4, with a strong focus on the risks posed by AI and on policy to support the technology.
The summit, which will take place on Nov. 1–2, is expected to unite thought leaders from around the world, including academics, politicians and major tech companies developing AI, to create a common understanding of how to regulate the technology.
According to the announcement, it will primarily focus on “risks created or significantly exacerbated by the most powerful AI systems” and the need for action. It will also focus on how safe AI development can be used for public good and overall quality of life improvement.
Additionally, the summit will touch on a way forward for international collaboration on AI safety and how to support international laws, AI safety measures for individual organizations and areas for “potential collaboration on AI safety research.”
The summit will be spearheaded by U.K. Prime Minister Rishi Sunak’s representatives for the AI Safety Summit, Jonathan Black and Matt Clifford.
Sunak called the U.K. a “global leader” in AI regulation and highlighted that his government wants to accelerate AI investment to improve productivity. Earlier this year it was announced that the U.K. would be receiving “early or priority access” to Google and OpenAI’s newest AI models.
On Aug. 31, the U.K.’s Science, Innovation and Technology Committee (SITC) released a report recommending that Britain align itself with countries holding similar democratic values to safeguard against the misuse of AI by malicious actors.
Prior to that announcement, on Aug. 21, the U.K. government said it will spend $130 million on AI semiconductor chips as part of its effort to create an “AI Research Resource” by mid-2024.