Heightened Risks Seen as Agentic AI Gains Traction: Report

With 95 percent of enterprises reporting AI-related incidents, new research reveals a wide gap between AI adoption and responsible AI readiness, exposing most enterprises to reputational risk and financial loss, according to the Infosys Knowledge Institute (IKI).

The research arm of Infosys, a global provider of next-generation digital services and consulting, highlighted critical insights into the state of responsible AI (RAI) implementation across enterprises, particularly with the advent of agentic AI.

The report, “Responsible Enterprise AI in the Agentic Era,” surveyed over 1,500 business executives and interviewed 40 senior decision-makers across Australia, France, Germany, the UK, the US, and New Zealand.

Findings show that while 78 percent of companies see RAI as a business growth driver, only 2 percent have adequate RAI controls in place to safeguard against reputational risk and financial loss.

The report analyzed risks resulting from poorly implemented AI, including privacy violations, ethical lapses, bias or discrimination, regulatory non-compliance, and inaccurate or harmful predictions. It found that 77 percent of organizations reported financial loss, and 53 percent suffered reputational damage, from such AI-related incidents.

“Enterprises are navigating a complex landscape, where AI’s promise of growth is accompanied by significant operational and ethical risks,” said Jeff Kavanaugh, head of Infosys Knowledge Institute. “Our research clearly shows that while many are recognizing the importance of Responsible AI, there’s a substantial gap in practical implementation. Companies that prioritize robust, embedded RAI safeguards will not only mitigate risks and potentially reduce financial losses but also unlock new revenue streams and thrive as we transition into the transformative agentic AI era.”

AI risks are widespread and can be severe, with 95 percent of C-suite and director-level executives reporting AI-related incidents in the past two years.

Nearly 39 percent characterize the damage experienced from such AI issues as “severe” or “extremely severe,” and 86 percent of executives aware of agentic AI believe it will introduce new risks and compliance issues.

RAI capability is patchy and insufficient at most enterprises, the research arm of Infosys found.

Only 2 percent of companies (termed “RAI leaders”) met the full standards set in the Infosys RAI capability benchmark, termed “RAISE BAR,” while 15 percent (“RAI followers”) met three-quarters of the standards.

The “RAI leader” cohort experienced 39 percent lower financial losses and 18 percent lower severity from AI incidents.

To achieve these results, leaders developed improved AI explainability, proactively evaluated and mitigated bias, rigorously tested and validated AI initiatives, and had a clear incident response plan.

Executives view RAI as a growth driver, with 78 percent of senior leaders saying RAI aids revenue growth, and 83 percent indicating that future AI regulations would boost, rather than inhibit, the number of future AI initiatives.

On average, companies believe they are underinvesting in RAI by 30 percent.

“With the scale of enterprise AI adoption far outpacing readiness, companies must urgently shift from treating RAI as a reactive compliance obligation to embracing it proactively as a strategic advantage,” the report stated.

“Drawing from our extensive experience working with clients on their AI journeys, we have seen firsthand how delivering more value from enterprise AI use cases would require enterprises to first establish a responsible foundation built on trust, risk mitigation, data governance, and sustainability,” said Balakrishna D.R., EVP – global services head, AI and Industry Verticals at Infosys. “This also means emphasizing ethical, unbiased, safe, and transparent model development. To realize the promise of this technology in the agentic AI future, leaders should strategically focus on platform and product-centric enablement and proactive vigilance of their data estate. Companies should not discount the important role a centralized RAI office plays as enterprise AI scales, and new regulations come into force.”

To help organizations build scalable, trusted AI systems that fuel growth while mitigating risk, Infosys recommends the following actions:

  • Study the practices of high-maturity RAI organizations that have already faced diverse incident types and developed robust governance.
  • Combine decentralized product innovation with centralized RAI guardrails and oversight.
  • Use platform-based environments that enable AI agents to operate within pre-approved data and systems.
  • Create a centralized function to monitor risk, set policy, and scale governance.
