
Target security and compliance gaps to achieve responsibly sourced AI
Executives say compliance and cybersecurity are the toughest obstacles blocking their AI readiness. But leading organisations have found effective ways to navigate these challenges.
A new study focused on the quality of data sources that inform AI models warns that organisations are struggling to work toward AI readiness in a secure and compliant way.
The research from Iron Mountain, published in partnership with FT Longitude and based on a survey of senior leaders at 500 large organisations worldwide, finds the majority of organisations (64 per cent) have gaps in their information management frameworks for AI readiness. The executives surveyed consider cybersecurity and compliance risks to be the toughest challenges.
They are right to be concerned. Stricter data protection and privacy laws, AI regulations such as the EU AI Act and rising security threats increase the risk of severe penalties, including fines, reputational damage and customer safety issues. For instance, enthusiasm to quickly connect AI models to information sources can mean access permissions are neglected. This leaves AI models at risk of exposing users to information they should not have access to. “When sourcing AI models, it’s vital to take critical cybersecurity measures, such as controlling access layers,” warns Narasimha Goli, CTO of Iron Mountain.
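The access-permission risk described above can be made concrete. A minimal sketch, in which all names and structures are illustrative assumptions rather than any vendor's actual API: filter documents by the requesting user's roles *before* they reach the AI model, so the model can never surface content the user is not entitled to see.

```python
# Hypothetical sketch: enforce access permissions before an AI model
# retrieves documents, so it cannot expose content the requesting user
# should not have access to. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set = field(default_factory=set)  # roles permitted to read this document

def retrieve_for_user(query: str, user_roles: set, corpus: list) -> list:
    """Return only documents the user's roles permit, BEFORE the model sees them."""
    permitted = [d for d in corpus if d.allowed_roles & user_roles]
    # ...a real system would then rank `permitted` by relevance to `query`
    # and pass only that subset to the model
    return permitted

corpus = [
    Document("payroll-2024", "salary data...", {"hr"}),
    Document("handbook", "holiday policy...", {"hr", "engineering", "sales"}),
]

docs = retrieve_for_user("holiday policy", {"engineering"}, corpus)
print([d.doc_id for d in docs])  # only the document an engineer may read
```

The key design point is that the permission check sits in the retrieval layer, not in the prompt: documents a user cannot read never enter the model's context in the first place.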
The significance of automating compliance and security as AI-led operations scale
How, then, can organisations move safely and securely towards AI readiness? The research identifies a cohort of leaders that is demonstrating superior results, including greater revenue and profitability uplifts, because of how they manage their data. These organisations are optimising their systems for the collection, storage and disposal of their proprietary data while keeping security and compliance front of mind.
One standout example of this advanced preparation is that these leading organisations are much more likely than their peers to be automating their adherence to essential compliance and security measures. Nine in 10 leaders (91 per cent) use automation-driven compliance, particularly for data sharing, privacy and retention. This is 23 percentage points higher than the average for non-leader organisations. Significantly, almost every organisation in the leadership cohort (98 per cent) has introduced automated validation checkpoints for data accuracy.
Similarly, every single leader has introduced data encryption and other security measures into their workflows. As organisations ramp up their use of AI to the point at which manual security and compliance checks become impractical, this focus on automation will be critical. Human intervention will remain integral to setting up appropriate guardrails and validating outputs, but leading organisations are adopting governance and risk management by design.
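An automated validation checkpoint of the kind described above can be sketched in a few lines. This is a hypothetical illustration, not Iron Mountain's or any leader's actual implementation; the check names, record fields and retention window are all assumptions.

```python
# Hypothetical sketch of an automated validation checkpoint: each record
# must pass every check before it is admitted to an AI training or
# retrieval dataset. Checks and fields are illustrative.

def not_empty(record):
    """Data-accuracy check: reject records with no usable text."""
    return bool(record.get("text", "").strip())

def within_retention(record, max_age_years=7):
    """Compliance check: reject records past an assumed retention window."""
    return record.get("age_years", 0) <= max_age_years

CHECKS = [not_empty, within_retention]

def validate(records):
    """Split records into (admitted, rejected) according to every check."""
    admitted, rejected = [], []
    for r in records:
        (admitted if all(check(r) for check in CHECKS) else rejected).append(r)
    return admitted, rejected

records = [
    {"id": 1, "text": "contract terms", "age_years": 2},
    {"id": 2, "text": "", "age_years": 1},           # fails not_empty
    {"id": 3, "text": "old memo", "age_years": 12},  # fails within_retention
]
admitted, rejected = validate(records)
print(len(admitted), len(rejected))  # 1 2
```

Because the checkpoint runs on every record automatically, it scales with data volume in a way manual review cannot, while humans remain responsible for defining and tuning the checks themselves.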
Rohit Dhawan, Director of AI and Advanced Analytics at Lloyds Banking Group, says, “Technologies ensuring adherence to data privacy regulations will become essential as scrutiny increases.”
One organisation that recognises this point is investment management company AllianceBernstein, which is building an internal chatbot harnessing AI to support its risk and compliance teams. Andrew Chin, Chief AI Officer at AllianceBernstein, emphasises that the bot’s high accuracy rate is due to the fact that it draws data only from a source of “golden documents” that employees have verified as highly credible and error-free. “We know that the answers from this dataset are going to be 100 per cent accurate, and that creates momentum,” Chin says. “[Also,] it motivates colleagues to add more documents.”
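The "golden documents" pattern Chin describes can be sketched as a retrieval filter. This is a simplified, hypothetical illustration of the idea, not AllianceBernstein's system; the document structure and refusal behaviour are assumptions.

```python
# Hypothetical sketch of the "golden documents" idea: the chatbot draws
# only from sources a human has verified as credible and error-free,
# and refuses to answer when no verified source exists.

docs = [
    {"id": "policy-v3", "text": "Risk limits are reviewed quarterly.", "verified": True},
    {"id": "draft-note", "text": "Possibly outdated guidance.", "verified": False},
]

def golden_sources(documents):
    """Keep only documents a human reviewer has marked as verified."""
    return [d for d in documents if d["verified"]]

def answer(question, documents):
    sources = golden_sources(documents)
    if not sources:
        return "No verified source available."  # refuse rather than guess
    # ...in a real system, a model would generate an answer grounded in
    # `sources`; here we simply return the grounding material itself
    return sources[0]["text"]

print(answer("How often are risk limits reviewed?", docs))
```

The accuracy guarantee comes from the corpus, not the model: by construction, every answer is grounded in material a human has already vetted.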
The importance of good data lineage
The leaders in this research are also more likely to have strong data lineage strategies, accurately tracking how employees have generated, managed and exploited data across their systems. Around 94 per cent of the leaders consistently track data lineage to ensure AI models are trained on the highest-quality and most relevant data available.
“We need models grounded in our own datasets and the capability to show where the data came from and how the model has arrived at a particular answer,” says Swami Jayaraman, Senior Vice President and Chief Enterprise Architect at Iron Mountain. “Having human beings looking at each step taken by the AI model and assessing that against regulatory and compliance metrics is going to be paramount.”
Building a single, unified view of data ownership and sourcing will facilitate that process, and resources such as taxonomy glossaries and guidance on risk appetite will ensure consistency. Jayaraman suggests that the best approach is multidisciplinary, with comprehensive input from legal and compliance teams.
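One common way to implement the lineage tracking described above is an append-only event log with links from each dataset to its parents. The sketch below is a hypothetical illustration under that assumption; all field names are invented for the example.

```python
# Hypothetical sketch of data lineage tracking: an append-only log records
# who generated, transformed, or consumed each dataset, so any model can be
# traced back to its upstream sources. Field names are illustrative.
from datetime import datetime, timezone

lineage_log = []

def record_event(dataset_id, actor, action, derived_from=None):
    """Append one lineage event; earlier entries are never mutated or deleted."""
    lineage_log.append({
        "dataset": dataset_id,
        "actor": actor,
        "action": action,            # e.g. "created", "cleaned", "trained-on"
        "derived_from": derived_from or [],
        "at": datetime.now(timezone.utc).isoformat(),
    })

def trace(dataset_id):
    """Walk back through `derived_from` links to find all upstream datasets."""
    upstream, queue = set(), [dataset_id]
    while queue:
        current = queue.pop()
        for event in lineage_log:
            if event["dataset"] == current:
                for parent in event["derived_from"]:
                    if parent not in upstream:
                        upstream.add(parent)
                        queue.append(parent)
    return upstream

record_event("raw-contracts", "alice", "created")
record_event("clean-contracts", "bob", "cleaned", derived_from=["raw-contracts"])
record_event("model-v1", "ci-bot", "trained-on", derived_from=["clean-contracts"])
print(sorted(trace("model-v1")))  # ['clean-contracts', 'raw-contracts']
```

The append-only design matters for compliance: because history is never rewritten, auditors and reviewers can reconstruct exactly which data fed a given model and who touched it along the way.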
It will also be important to explain data lineage and outcomes to stakeholders, who may lack technical expertise or understanding of fundamental data protection and security issues. For example, employees may be unaware of how data regulation may block the use of some data in specific contexts and may misinterpret outputs from the model by assuming it has considered such data. Almost all (96 per cent) of the leaders in this research use AI dashboards to explain data lineage to non-technical stakeholders, compared with 83 per cent of other organisations.
AI “nutrition labels” for visibility
According to the data, 98 per cent of the leaders use AI nutrition labels, similar to the nutrition labels widely used to show the salt, fat, fibre and sugar content of different foods. Iron Mountain’s Goli adds, “Nutrition labels create transparency by providing more information about the datasets used to train the AI model.”
Developing such labelling can help organisations demonstrate compliance with critical regulations and reassure stakeholders of the reliability of the model’s outputs. Indeed, given the proliferation of open-source AI models, tools that offer advanced levels of transparency will be essential, giving users easy access to source codes and model weightings. In some jurisdictions, AI nutrition labels may become mandatory, and organisations should expect the broader regulatory burden to grow.
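An AI nutrition label of the kind described above is essentially a structured, machine-readable summary of a model's training data. The sketch below is a hypothetical example; there is no single standard format, and every field shown is an assumption for illustration.

```python
# Hypothetical sketch of an AI "nutrition label": a structured summary of
# the datasets behind a model, published alongside it for transparency.
# The fields are illustrative, not an industry standard.
import json

def nutrition_label(model_name, datasets):
    """Summarise training-data provenance in a machine-readable label."""
    return {
        "model": model_name,
        "dataset_count": len(datasets),
        "sources": [d["name"] for d in datasets],
        "personal_data": any(d["contains_personal_data"] for d in datasets),
        "licences": sorted({d["licence"] for d in datasets}),
    }

datasets = [
    {"name": "internal-contracts", "contains_personal_data": True, "licence": "proprietary"},
    {"name": "public-filings", "contains_personal_data": False, "licence": "open"},
]

label = nutrition_label("contract-assistant-v2", datasets)
print(json.dumps(label, indent=2))
```

Publishing such a label alongside a model gives regulators, customers and internal stakeholders a quick, consistent view of what the model was trained on, much as a food label summarises ingredients.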
The value of pursuing AI readiness in this context is significant from a compliance perspective. But the bigger opportunity is to leverage the right data, with transparency, to feed robust AI models that enable the organisation to ensure growth and productivity while protecting it from breaches and failures that could undermine trust.
See Iron Mountain’s executive summary for more on how large organisations are working toward AI readiness
