AI Safety Summit: ISBA and IPA outline ad industry code

Two key advertising and marketing trade bodies have published 12 guiding principles for agencies and clients on the use of generative AI in advertising.

Timed to coincide with the opening of the UK Government’s AI Safety Summit, agency body the IPA and client organisation ISBA say the principles are broad-brush and designed to ensure that the industry embraces AI in an ethical way that protects both consumers and those working in the creative sector.

They cover issues around transparency, intellectual property rights, human oversight and more.

These principles are not exhaustive and apply only to the creative process rather than other areas of the industry. The IPA and ISBA will consider publishing additional best practice guidance around the use of AI in other areas in due course. The 12 principles are:

– AI should be used responsibly and ethically.

– AI should not be used in a manner that is likely to undermine public trust in advertising (for example, through the use of undisclosed deepfakes, or fake, scam or otherwise fraudulent advertising).

– Advertisers and agencies should ensure that their use of AI is transparent where it features prominently in an ad and is unlikely to be obvious to consumers.

– Advertisers and agencies should consider the potential environmental impact when using generative AI.

– AI should not be used in a manner likely to discriminate or show bias against individuals or particular groups in society.

– AI should not be used in a manner that is likely to undermine the rights of individuals (including with respect to use of their personal data).

– Advertisers and agencies should consider the potential impact of the use of AI on intellectual property rights holders and the sustainability of publishers and other content creators.

– Advertisers and agencies should consider the potential impact of AI on employment and talent. AI should be additive and an enabler – helping rather than replacing people.

– Advertisers and agencies should perform appropriate due diligence on the AI tools they work with and only use AI when confident it is safe and secure to do so.

– Advertisers and agencies should ensure appropriate human oversight and accountability in their use of AI (for example, fact and permission checking so that AI generated output is not used without adequate clearance and accuracy assurances).

– Advertisers and agencies should be transparent with each other about their use of AI. Neither should include AI-generated content in materials provided to the other without the other’s agreement.

– Advertisers and agencies should commit to continual monitoring and evaluation of their use of AI, including any potential negative impacts, not limited to those described above.

IPA director of legal and public affairs Richard Lindsay said: “The use of AI has grown exponentially in all industries, bringing with it huge opportunities as well as a wealth of new legal, regulatory and ethical challenges that need to be understood and addressed. The importance of AI is evidenced by the Government’s bringing together of world leaders and tech giants at this week’s AI Safety Summit.

“Generative AI will undoubtedly be transformative for our industry, but it is vital that it is used in an ethical, responsible, and legally compliant way. These principles, which we have worked on with our colleagues at ISBA, are designed to help agencies and advertisers navigate that process and continue to produce outstanding creative work while taking advantage of the remarkable new tools available to them.”

ISBA director of public affairs Rob Newman added: “From individual brands to trade bodies, in sector after sector of the economy, people are scrambling to work out what the AI revolution means for them. In many ways, the jury is out. AI could help us create transformational marketing… or it could exacerbate existing crises in trust and transparency which already plague the industry.

“We’re pleased to have made this start on laying down some guardrails so that AI doesn’t create new problems, but contributes towards the trusted and responsible advertising environment that the public, regulators and lawmakers want to see.”

Separately, in September, the Advertising Association formed a new AI Taskforce, bringing together senior representatives from across its membership, with the aim of building a coordinated policy approach to the technology. The Taskforce has yet to publish its initial findings.
