AI Safety Summit: Where were the real data experts?

You can’t fail to have noticed that last week the UK held a landmark event in the field of artificial intelligence at the historic Bletchley Park. The AI Safety Summit allegedly brought together international governments, leading AI companies, civil society groups, and research experts to discuss and address the risks associated with frontier AI.

It was supposed to mark a significant step towards fostering a shared understanding of the risks and creating a framework for international collaboration to ensure the responsible development of AI technology.

But, in reality, was the summit all mouth and no trousers? The headlines smacked of hysterical clickbait. A case in point: “Elon Musk tells world to plan for the best but prepare for the worst.” Hardly helpful.

The need for the Summit is clear. As AI technology evolves rapidly, the stakes are higher than ever, and it is crucial for all stakeholders to acknowledge the potential challenges and threats that arise with its advancement. Frontier AI, with its unprecedented capabilities, undoubtedly requires a more robust approach to safety, ethics, and governance.

One of the stated goals of the Summit was to establish a forward process for international collaboration on frontier AI safety. Again, a no-brainer. It is vital to create a coordinated approach to address the emerging challenges and support national and international frameworks for AI safety, bridging gaps and fostering effective solutions.

In addition to international cooperation, the summit discussed the responsibilities that individual organisations must take on to enhance frontier AI safety. While innovation is essential, it must be accompanied by a commitment to ethical AI development, accountability, and transparency. Once again, a sensible and much-needed discussion point.

Another key agenda item at the summit was the identification of areas for potential collaboration in AI safety research. This includes evaluating model capabilities, establishing benchmarks, and developing new standards to support governance. These research initiatives will play a pivotal role in ensuring AI technologies are developed and utilised safely. Tick. Agree with this one too.

And, finally, the AI Safety Summit aimed to emphasise the positive impact of AI when developed responsibly. From healthcare to climate change, AI has the potential to address some of the world’s most pressing issues. By ensuring the safe development of AI, this technology can be harnessed for the greater good globally.

I’m not arguing with this either. Already, we are seeing the value of AI in both of these areas. In healthcare, we are using AI to better manage the supply and demand of care workers and patients, ultimately saving lives. On climate change, we’ve helped businesses reduce their energy consumption by more than a third through the application of AI and ML technology.

But, and it’s a big one, how could the summit achieve all of its aims, when the frontline AI workers weren’t at the table?

It is frustrating that the summit primarily attracted the “usual suspects” – top-level managers and administrators who discussed the future of AI without sufficient input from the front-line practitioners who code, develop, and work directly with AI on a daily basis.

In essence, the absence of these crucial front-line voices can lead to a superficial discussion that doesn’t consider the practical challenges and opportunities that developers face daily. Their perspective is invaluable in shaping the future of AI, as they are the ones working hands-on with the technology and understanding its intricacies.

Moreover, as AI becomes increasingly integral to various industries, understanding the specific needs and goals of businesses is crucial for tailoring safety measures and governance frameworks that align with their strategies. By actively involving data experts in the discussion, future summits can more easily bridge the gap between the theoretical aspects of AI safety and the real-world applications that drive economic growth and innovation. This collaboration would ensure that AI is not just seen as a technological marvel but as a tool to catalyse business success while maintaining ethical and responsible practices.

The UK’s AI Safety Summit at Bletchley Park certainly represented an important step in the right direction – a collaborative effort to address the risks and challenges associated with frontier AI. However, to truly meet its goals, it will be essential going forward to heed the voices of those who are actually on the front line. Only with our insights can a comprehensive framework be developed that truly promotes the responsible and safe development of AI.

In my book, genuine collaboration means involving all stakeholders, and from where I’m sitting there are gaping holes.

Paul Alexander is chief executive of Beyond: Putting Data to Work