AI security
The Bletchley gathering, held in 2023, a year after ChatGPT shocked the world, was billed as the AI Safety Summit.
The meetings’ names have changed as they have grown in size and scope, and last year, at the AI Action Summit in Paris, dozens of countries signed a statement calling for efforts to make AI technology “open” and “ethical” through regulation.
But the United States did not sign on, with Vice President J.D. Vance warning that “excessive regulation… could destroy a transformative sector just as it is taking off”.
The Delhi Summit has the loose theme of “People, Progress, Planet” – referred to as the three “P’s”.
Still, AI safety remains a priority, including the dangers of misinformation such as deepfakes.
Last month, Elon Musk’s Grok AI tool drew a global backlash after it allowed users to create sexualised images of real people, including children, from simple text prompts.
“Child safety and digital harm are also moving up the agenda, especially as generative AI lowers the barrier to creating harmful content,” Kelly Forbes, director of the AI Asia Pacific Institute, told AFP.
“There is real scope for change,” although it may not happen quickly, said Forbes, whose organization is researching how Australia and other countries can require platforms to confront the issue.
