TechScape: What we learned from the global AI summit in South Korea

One day and six (very long) agreements later, can we call the meeting to hammer out the future of AI regulation a success?

What does success look like for the second global AI summit? As the great and good of the industry (and me) gathered last week at the Korea Institute of Science and Technology, a sprawling hilltop campus in eastern Seoul, that was the question I kept asking myself.

If we’re ranking the event by the quantity of announcements generated, then it’s a roaring success. In less than 24 hours – starting with a virtual “leaders’ summit” at 8pm and ending with a joint press conference with the South Korean and British science and technology ministers – I counted no fewer than six agreements, pacts, pledges and statements, all demonstrating the success of the event in getting people around the table to hammer out a deal.

The first 16 companies have signed up to voluntary artificial intelligence safety standards introduced at the Bletchley Park summit, Rishi Sunak has said on the eve of the follow-up event in Seoul.

“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” Sunak said. “It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”

The network of AI safety institutes will begin sharing information about models – their limitations, capabilities and risks – as well as monitoring specific “AI harms and safety incidents” where they occur, and sharing resources to advance global understanding of the science of AI safety.

At the first “full house” meeting of those countries on Wednesday, [Michelle Donelan, the UK technology secretary] warned the creation of the network was only a first step. “We must not rest on our laurels. As the pace of AI development accelerates, we must match that speed with our own efforts if we are to grip the risks and seize the limitless opportunities for our public.”

Twenty-seven nations, including the United Kingdom, Republic of Korea, France, United States, United Arab Emirates, as well as the European Union, have signed up to developing proposals for assessing AI risks over the coming months, in a set of agreements that bring the AI Seoul summit to an end. The Seoul Ministerial Statement sees countries agreeing for the first time to develop shared risk thresholds for frontier AI development and deployment, including agreeing when model capabilities could pose “severe risks” without appropriate mitigations. This could include helping malicious actors to acquire or use chemical or biological weapons, and AI’s ability to evade human oversight, for example by manipulation and deception or autonomous replication and adaptation.
