Advertising and marketing bosses might talk a good game about the implementation of AI tools, but those at the coalface are increasingly concerned about job security and feel unheard in the workplace, leaving the technology at risk of becoming a fragmented tool.
That is according to a new study from Behave and Mediaplus UK, which quizzed 200 CTOs and COOs and 1,000 white-collar workers across the media, FMCG, finance and tech industries, and reveals a growing disconnect that poses a serious risk to successful AI integration in the UK.
Among the stark findings is the fact that around three-quarters (74%) of media industry workers are moderately to extremely concerned about their personal job security in the next five years. Over-dependence on AI is the biggest issue for people who work in advertising departments (33%), tied with ethical concerns (33%) and followed by ensuring data privacy (26%); in the media industry overall, maintaining human oversight was seen as the biggest challenge, at 33%.
Meanwhile, the media industry recorded the lowest level of management responsiveness, with 29% of respondents feeling they are rarely heard, or not heard at all, when it comes to AI concerns.
The finance and tech industries recorded the highest levels of management responsiveness, with 83% and 82% of respondents respectively feeling moderately to extremely heard.
The tech industry is the most likely to use AI for work, while the media industry is the least likely of those surveyed, at 59%. Within media, however, 69% of advertising workers use AI for work, yet they are the least likely to use AI for personal purposes (50%).
In addition, 73% of those in media see AI as an opportunity, below the cross-industry average of 79%. The same proportion (73%) have intermediate to expert proficiency with AI tools, again below the average of 80%, with tech (87%) and finance (82%) workers leading the way.
Advertising workers would also rather see guidelines established by a dedicated AI ethics committee (25%) than by senior management (17%) or external regulatory bodies (21%), although the media industry as a whole prefers external bodies (24%) to lead in regulating AI use.
Behave Innovation & Strategy Director Dr Alexandra Dobra-Kiel said: “AI is too often implemented, not adopted. This detached approach risks AI becoming a fragmented tool confined to isolated pockets of the business.
“AI’s true potential lies in elevating us toward a new horizon of human excellence, not just efficiency. But this requires us to move beyond mere implementation. We must motivate teams to embrace AI as an enabler, provide the proficiency to leverage it, and instil ethical responsibility in its development. Only then can we achieve true adoption.
“Without a nuanced approach to AI ethics, organisations risk creating a ‘black box’ – a tool deployed without sufficient transparency or understanding, which can stifle adoption and innovation.
“We must transcend the shallowness of checklists and regulatory compliance. True ethics demands open discourse, questioning our deepest assumptions, and a profound consideration of AI’s impact on both those who create it and those it affects. Ethical AI adoption is not a matter of rules, but of conscience.”