Scientists react to The Bletchley Declaration from countries at the AI Safety Summit.
NOVEMBER 1, 2023
Dr Stuart Armstrong, Co-Founder and Chief Researcher, Aligned AI, said:
"It's our view that alignment is a capability: if an AI is likely to act against its users' interests, then it's unusable – and hence unprofitable. Only the most reliable AIs can be effectively used. And the short-term harms are a preview of the long-term risks: AIs that don't do what we want them to do, and deviate in unforeseen and unexpected ways. We strongly believe that by focusing on this core issue, bringing AIs in alignment with human values, we can address the big and the small dangers of AI."
Will Cavendish, global digital leader at sustainable development consultancy Arup, and former DeepMind and UK Government advisor, said:
"It's great to see the communique acknowledge the enormous global opportunities for AI as well as safety and regulation, which is rightly an important part of the summit. We can't afford to be scared of AI, as we simply can't solve humanity's biggest challenges, like climate change and the biodiversity crisis, without embracing safe and effective AI. When examining regulation, attendees at the summit must remember to consider an 'ethic of use' – where we have an obligation to use technology that will help humanity – rather than only an ethic of 'do no harm'."
Professor Anthony Cohn, Professor of Automated Reasoning at the University of Leeds and Foundational Models Theme lead at the Alan Turing Institute, said:
"The Bletchley declaration is to be welcomed as an initial step towards regulating and managing risks of AI whilst still allowing the world to benefit from the many potential (and existing) benefits that AI can bring. The involvement of many of the key countries and organisations worldwide adds to the credibility of the declaration. This is inevitably the first of many steps that will be needed and indeed two further meetings are already envisaged within the next year.
The present declaration is heavy on vision but, unsurprisingly, light on detail, and the "devil will indeed be in the detail" when it comes to actually creating an effective international regulatory regime that ensures the safe deployment of AI whilst also facilitating beneficial applications. The degree of regulation should be commensurate with the potential risks, though these may not always be apparent or easy to estimate. Whilst further meetings are already planned for 2024, there are also immediate risks of AI, particularly in the context of forthcoming elections, where AI-generated text and "deep fakes" have the potential to disrupt the democratic process.
There is an urgent need to educate the public on the risks and limitations of AI and to be wary of believing everything they see or read in the unregulated media. Moreover, anyone using an AI system, like a chatbot, should clearly be made aware that they are engaging with an AI, not a human. These short-term risks, which also include problems deriving from biased training data and the resulting biased outputs, are more urgent to address than the possible, but still distant, risks relating to the conceivable creation of AGI, on which the declaration is more focused. It is worth noting that it is likely that further agreements will be needed to make clear where the responsibility for a deployed AI system rests. Finally, the declaration makes clear the need for further research on AI safety, which will be important not only to evaluate the safety of AI systems but also to ensure their safety and trustworthiness."
Dr Andrew Rogoyski, Director of Innovation and Partnerships at the Surrey Institute for People-Centred AI, University of Surrey, said:
"The Bletchley communique is welcome and illustrates the importance of AI but it falls short of binding arrangements that will control and shape the type of AI being developed at such great pace. The UK and US AI Safety Institutes are a sensible step forward but my fear is that by the time they are established and understand how to operate, the world of AI will have already moved on.
"Ian Hogarth illustrated the anticipated growth of AI power in 2024, indicating that the control of the development of AI needs to go beyond good intentions and fine principles, and to do so at a pace that will be unfamiliar to many policy makers.
"Vera Jourova, on behalf of the EU, made the important point that the AI debate shouldn't be about risk vs benefit, it should be both at the same time, it's not one or the other. In order to reap the rewards of such powerful technologies, we will need to create controls, technically and legislatively, ideally in international accord.
"The UAE made the point that governance of AI needs to take new forms, unfamiliar to traditional governmental approaches, a way that can more effectively match the extreme pace of technical development.
"It was interesting to see China stress the importance of open source, alongside the general consensus emerging that AI must be a benefit for all, and not serve to further concentrate power and wealth in the hands of the few."
"One of the concerns in Europe is that the EU AI Act will disadvantage European AI companies. Not everyone agrees but I noted a throwaway line by Vera Jourova that the EU were going to make available some major high performance computing facilities, free of charge, to companies wishing to develop AI platforms, which could prove to be a major draw for talent in the EU."
Professor Michael Huth, Head of the Department of Computing, Imperial College London, said:
"This joint declaration of 29 nations on the safety of AI is important as an expression of intent for future collaborations that transcend, yet are consistent with, national approaches to AI regulation. The development of global standards for the safety of AI, and the establishment of similar standards for privacy and other aspects of trustworthiness of AI should be a key objective of those collaborations. The declaration strengthens the value of transparent approaches to AI R&D – which researchers in our Department of Computing and at my start-up xayn.com already practice."
Dr Marc de Kamps, Associate Professor in the University of Leeds' School of Computing, whose areas of expertise include Machine Learning, said:
"The creation of an AI safety institute could be a very positive step as it could play a major role in monitoring international developments and in helping to foster a broad discussion about the societal impacts of AI.
"However, a moratorium on risky AI will be impossible to enforce. No international consensus will emerge about how this should work. From a technological perspective it seems impossible to draw a boundary between 'useful' and 'risky' technology and experts will disagree on how to balance these. The government, rightly, has chosen to engage with the risks of AI, without being prescriptive about whether certain research is off limits."
"However, the communique is unspecific about the ways in which its goals will be achieved and is not explicit enough about the need for engagement with the public."
...
Illustration. Source: (**)
Source:
(**) https://www.academia.edu/108965915/Chair_s_Summary_of_the_AI_Safety_Summit_2023_Bletchley_Park