The advertising and marketing industry might be going gangbusters for artificial intelligence, but brands risk the wrath of their customers if they do not disclose whether campaigns have been created using the seemingly ubiquitous tech.
So says a new IPA study, conducted by Opinium, which quizzed 2,000 people aged 18+ on the ethics and etiquette of using AI and compared the findings with those of similar research carried out in 2018.
The report comes at a time when brands and their agencies are increasingly turning to AI-driven systems. Just last week, Publicis Groupe launched a new data-driven content platform, PX, which is powered by AI, while VCCP has become the first agency group to launch a standalone global AI creative agency and Google has launched a raft of AI creative services in a direct challenge to the agency market.
The IPA study reveals that almost three quarters (74%) of consumers believe brands should disclose the use of AI-generated content and that fully automated AI-driven marketing campaigns should be carefully regulated.
The core findings show a continued high level of consumer desire for AI transparency; a decrease in consumers’ belief that AI should police them; and a significant decrease in the belief that “the robots” deserve rights and respect.
Coupled with this, 75% of people want to be notified when they are not dealing with a real person. While this overall figure remains high, it is down from the 84% recorded in 2018, when the technology was arguably still in its infancy.
Furthermore, the report shows that two-thirds (67%) of Brits think AI should not pretend to be human or act as if it has a personality. Again, while this overall figure remains high, it is lower than the 74% recorded in 2018.
Comparing the 2023 data with that recorded in 2018, the survey also reveals a considerable increase in consumers’ desire not to be policed, or disagreed with, by AI.
In the latest dataset, just over half of consumers (51%) believe AI should have the right to report them if they are engaging in illegal activity, a significant decrease from the 67% recorded in 2018.
The same trend appears when consumers are asked whether AI should be allowed to make it known if it disagrees with them. In 2018, 51% said they believed this was acceptable; in 2023, this fell to 42%.
Regarding who holds responsibility and liability for the use of AI, three-fifths (61%) think humans must accept liability if the use of AI results in an accident. This is a slight decrease from the 64% measured in 2018.
The survey also shows that consumer respect when dealing with AI has dropped considerably in recent years: the proportion of people who think they should be polite and exhibit good manners when interacting with virtual assistants has fallen from 64% in 2018 to 48% in 2023, a relative decrease of 25%.
In addition, less than a quarter (24%) of Brits believe that “robot rights” should be introduced to ensure the humane treatment of AI, a decrease from 30% in 2018.
IPA President Josh Krichefski, who is also GroupM chief executive for EMEA & UK, said: “AI provides incredible opportunities for our business. As these findings demonstrate, however, the public are understandably cautious about its use – and increasingly so in some areas. It is therefore our responsibility to be transparent and accountable when using AI to ensure the trust of our customers.”