Firms admit AI fears justified; 27% have even banned it

More than nine out of 10 businesses recognise they must do more to reassure customers that their data is used only for intended and legitimate purposes when adopting artificial intelligence tools, with more than a quarter feeling so strongly that they have banned its use until they can get their house in order.

So says the seventh edition of Cisco’s Data Privacy Benchmark Study, based on responses from 2,600 privacy and security professionals across 12 geographies, which concludes that privacy is now so much more than just a regulatory compliance matter.

In fact, among the top concerns about the rise of AI, businesses cited the threats to an organisation’s legal and intellectual property rights (69%), and the risk of disclosure of information to the public or competitors (68%).

Even so, most organisations are aware of these risks and are putting in place controls to limit exposure: as well as the 27% that have banned its use altogether, more than three-fifths (63%) have established limits on what data can be entered, while a similar proportion (61%) restrict which generative AI tools employees can use.

Nonetheless, many individuals have entered information that could be problematic, including employee information (45%) or non-public information about the company (48%).

Interestingly, the priorities organisations set to build consumer trust differ from those of consumers themselves.

Consumers identified their top priorities as getting clear information on exactly how their data is being used, and not having their data sold for marketing purposes.

When asked the same question, businesses identified their top priorities as complying with privacy laws (25%) and avoiding data breaches (23%). It suggests additional attention on transparency would be helpful — especially with AI applications where it may be difficult to understand how the algorithms make their decisions.

Organisations do at least recognise the need to reassure their customers about how their data is being used, and 98% said that external privacy certifications are an important factor in their buying decisions, the highest level the study has recorded.

Cisco chief legal officer Dev Stahlkopf said: “Organisations see GenAI as a fundamentally different technology with novel challenges to consider. More than 90% of respondents believe GenAI requires new techniques to manage data and risk. This is where thoughtful governance comes into play. Preserving customer trust depends on it.”

Despite the costs and requirements that compliance with privacy laws may impose on organisations, 80% of professionals said such legislation has had a positive impact on them, and only 6% said the impact has been negative. Strong privacy regulation boosts consumer confidence and trust in the organisations they choose to share their data with.

At a time when many governments and organisations are putting in place data localisation requirements to keep certain data within country or region, most businesses (91%) believe their data would be inherently safer if stored within their country or region. However, 86% said that a global provider, operating at scale, can better protect their data compared to a local provider.

Over the past five years, privacy spending has more than doubled, benefits have trended up, and returns have remained strong.

This year, 95% of professionals believe that privacy’s benefits exceed its costs, and the average organisation reports getting privacy benefits of 1.6 times their spending. Further, 80% indicated getting significant “loyalty and trust” benefits from their privacy investments, and this is even higher (92%) for the most privacy-mature organisations.

In 2023, the largest organisations (10,000+ employees) increased their privacy spending by up to 8% compared to the previous year. However, smaller organisations saw lower investment; businesses with between 50 and 249 employees on average decreased their privacy spend by 25%.

Cisco vice president and chief privacy officer Harvey Jang said: “Nearly all (94%) of data security professionals said their customers would not buy from them if they did not adequately protect data.

“They are looking for hard evidence the organisation can be trusted. Privacy has become inextricably tied to customer trust and loyalty. This is even more true in the era of AI, where investing in privacy better positions organisations to leverage AI ethically and responsibly.”
