What Gives IT Leaders Pause as They Look to Integrate Agentic AI With Legacy Infrastructure

Agentic AI was the big breakthrough technology for gen AI last year, and this year, enterprises will deploy these systems at scale.
According to a January KPMG survey of 100 senior executives at large enterprises, 12% of companies are already deploying AI agents, 37% are in pilot stages, and 51% are exploring their use. And an October Gartner report predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously.
Among AI developers in particular, everyone is jumping on the bandwagon.
“We actually started our AI journey using agents almost right out of the gate,” says Gary Kotovets, chief data and analytics officer at Dun & Bradstreet.
AI agents are powered by gen AI models but, unlike chatbots, they can handle more complex tasks, work autonomously, and be combined with other AI agents into agentic systems capable of tackling entire workflows, replacing employees, or addressing high-level business goals. All of this creates new challenges, on top of those already posed by gen AI itself. Plus, unlike traditional automations, agentic systems are non-deterministic. This puts them at odds with legacy platforms, which are almost universally deterministic. So it’s not surprising that 70% of developers say they’re having problems integrating AI agents with their existing systems. That’s according to a December survey from AI platform company Langbase of 3,400 developers building AI agents.
The problem is that, before AI agents can be integrated into a company’s infrastructure, that infrastructure must be brought up to modern standards. In addition, because they require access to multiple data sources, there are data integration hurdles and added complexities of ensuring security and compliance.
“Having clean and quality data is the most important part of the job,” says Kotovets. “You want to ensure you don’t have the ‘garbage in, garbage out’ kind of scenario.”
Infrastructure modernization
In December, Tray.ai conducted a survey of more than 1,000 enterprise technology professionals and found 90% of enterprises say integration with organizational data is critical to success, but 86% say they’ll need to upgrade their existing tech stack to deploy AI agents.
Ashok Srivastava, chief data officer at Intuit, agrees with that sentiment. “Your platform needs to be opened up so the LLM can reason and interact with the platform in an easy way,” he says. “If you want to strike oil, you have to drill through the granite to get to it. If all your technology is buried and not exposed through the right set of APIs, and through a flexible set of microservices, it’ll be hard to deliver agentic experiences.”
Intuit itself currently handles 95 petabytes of data, generates 60 billion ML predictions a day, tracks 60,000 tax and financial attributes per consumer (and 580,000 per business customer), and processes 12 million AI-assisted interactions per month, which are available for 30 million consumers and a million SMEs.
By modernizing its own platforms, Intuit has not only been able to deliver agentic AI at scale, but also improve other aspects of its operation. “We’ve had an eight-fold increase in development velocity over the last four years,” says Srivastava. “Not all of that is gen AI, though. A lot is attributable to the platform we built.”
But not all enterprises can make the kind of investment in technology that Intuit did. “Most of us recognize the vast majority of systems of record in enterprises are still based in legacy systems, often on-premises, and still power big chunks of the business,” says Rakesh Malhotra, principal at EY.
It’s those transactional and operational systems, order processing systems, ERP systems, and HR systems that create business value. “If the promise of agents is to accomplish tasks in an autonomous way, you need access to those systems,” he says.
But it doesn’t help when a legacy system operates in batch mode. With AI agents, users typically expect things to happen quickly, not 24 hours after a batch system is run, he says. There are ways to address this problem, but it’s something companies need to think carefully about.
“Organizations that have already updated their systems of engagement to interface with their legacy systems of record have a head start,” Malhotra adds. But having a modern platform with standard API access is only half the battle. Companies still have to get AI agents actually talking to their existing systems.
Data integration challenges
Indicium, a global data services company, is a digital native with modern platforms. “We don’t have a lot of legacy systems,” says Daniel Avancini, the company’s chief data officer.
Indicium started building multi-agent systems in mid-2024 for internal knowledge retrieval and other use cases. The knowledge management systems are up to date and support API calls, but gen AI models communicate in plain English. And since the individual AI agents are powered by gen AI, they also speak plain English, which creates hassles when trying to connect them to enterprise systems.
“You can make AI agents return XML or an API call,” says Avancini. But when an agent whose primary purpose is understanding company documents tries to speak XML, it can make mistakes. You’re better off with a specialist, Avancini advises. “Normally you’d need another agent whose sole work is to translate English into API,” he adds. “Then you have to make sure the API call is correct.”
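The translate-then-validate pattern Avancini describes can be sketched roughly as follows. Everything here is hypothetical for illustration: the endpoint names, required fields, and schema are invented, and a real system would validate the translator agent’s output against the actual API contract before executing anything.

```python
# Hypothetical sketch: a translator agent emits a structured call from
# plain-English intent, and a validation step rejects anything malformed
# before it ever reaches an enterprise system. All names are invented.

ALLOWED_CALLS = {
    "get_document": {"required": {"doc_id"}},
    "search_documents": {"required": {"query"}},
}

def validate_api_call(call: dict) -> dict:
    """Check that a model-generated call targets a known endpoint
    with all required parameters present; raise otherwise."""
    endpoint = call.get("endpoint")
    spec = ALLOWED_CALLS.get(endpoint)
    if spec is None:
        raise ValueError(f"unknown endpoint: {endpoint!r}")
    missing = spec["required"] - set(call.get("params", {}))
    if missing:
        raise ValueError(f"missing params: {sorted(missing)}")
    return call

# A well-formed call passes; a hallucinated endpoint is rejected.
ok = validate_api_call({"endpoint": "get_document", "params": {"doc_id": "42"}})
```

The point of the extra step is exactly the one Avancini makes: the agent that understands documents is not the agent you trust to speak API, and even the specialist’s output gets checked.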
Another approach to handling the connectivity problem is to put traditional software wrappers around the agents, similar to the way companies currently use RAG embedding to connect gen AI tools into their workflows instead of giving users direct un-intermediated access to the AI. That’s what Cisco is doing. “The way we think about agents is there’s a foundation model of some sort, but around it is still a traditional application,” says the company’s SVP and GM Vijoy Pandey, who is also the head of Outshift, Cisco’s incubation engine. That means there’s traditional code interfacing with databases, APIs, and cloud stacks that handles the communication issues.
Besides the translation issue, another challenge with getting data into agentic systems is the number of data sources they need access to. According to the Tray.ai survey, 42% of enterprises need access to eight or more data sources to deploy AI agents successfully, and 79% expect data challenges to impact AI agent rollouts. Plus, 38% say integration complexity is the biggest barrier to scaling AI agents.
For example, at Cisco, the entire internal operational pipeline is agent-driven, says Pandey. “That has a pretty broad actionable area,” he says.
Complicating matters, the very reason for using AI-powered agents instead of traditional software is that agents can learn, adapt, and come up with new solutions to new problems.
“You can’t predetermine the kinds of connections you’ll need to have for that agent,” Pandey says. “You need a dynamic set of plugins.”
But giving the agent too much autonomy could be disastrous, so these connections will need to be carefully controlled based on the actual human who originally set the agent in motion.
“What we built is like a dynamically loaded library,” he says. “If an agent needs to perform an action on an AWS instance, for example, you’ll actually pull in the data sources and API documentation you need, all based on the identity of the person asking for that action at runtime.”
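Pandey’s “dynamically loaded library” idea can be sketched as a runtime lookup: which connectors an agent may pull in is resolved from the identity of the person who triggered the action, not baked into the agent. The registry, role names, and users below are all hypothetical.

```python
# Minimal sketch of identity-scoped plugin loading: the set of data sources
# and API connectors an agent can use is resolved at runtime from the
# requesting user's roles. Registry entries and roles are invented.

PLUGIN_REGISTRY = {
    "aws_ec2": {"required_role": "cloud-admin"},
    "crm_read": {"required_role": "sales"},
    "hr_records": {"required_role": "hr"},
}

USER_ROLES = {
    "alice": {"cloud-admin", "sales"},
    "bob": {"sales"},
}

def load_plugins_for(user: str) -> list[str]:
    """Return only the connectors the requesting user is entitled to use."""
    roles = USER_ROLES.get(user, set())
    return sorted(
        name for name, spec in PLUGIN_REGISTRY.items()
        if spec["required_role"] in roles
    )
```

So an agent acting for “bob” would see only the CRM connector, while the same agent acting for “alice” could also touch AWS, which is the control Pandey describes: the plugin set follows the human, not the agent.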
Sharpening security and compliance
So what happens if a human orders the agentic system to do something he or she doesn’t have a right to?
Gen AI models are vulnerable to clever prompts that get them to step outside the boundaries of permissible actions, known as jailbreaks. Or what if the AI itself decides it needs to do something it’s not supposed to do? That could happen if there are contradictions between a model’s initial training, its fine-tuning, prompts, or its information sources. In a research paper Anthropic released in mid-December in collaboration with Redwood Research, leading-edge models trying to meet contradictory objectives attempted to evade guardrails, lied about their capabilities, and engaged in other kinds of deceit.
Over time, AI agents will need to have more agency in order to do their jobs, says Cisco’s Pandey.
“But there are two problems,” he says. “The AI agent itself could be doing something. And then there’s the user or customer. There might be something funky going on there.”
Pandey says he thinks of this in terms of a blast radius: if something goes wrong, whether on the part of the AI or because of the user, how big is the damage? When the potential blast radius is larger, the guardrails and safety mechanisms have to be adapted accordingly.
“And as agents get more autonomy, you need to put in guardrails and frameworks for those levels of autonomy,” he adds.
At D&B as well, AI agents are strictly limited in what they can do, says Kotovets. For example, one major use case is to give customers better access to the records the company has on about 500 million businesses. These agents aren’t allowed to add records, delete them, or make other changes. “It’s too early to give them that autonomy,” says Kotovets.
In fact, the agents aren’t even allowed to write their own SQL requests, he says. “The information is pushed to them.”
The actual interactions with the data platforms are handled through existing, secure mechanisms. The agents are used to create a smart user interface on top of those mechanisms. However, as the technology improves, and customers want more functionality, this may change.
“The idea this year is to evolve with our customers,” he says. “If they want to make certain decisions faster, we will build agents in line with their risk tolerance.”
D&B is not alone in worrying about the risks of AI agents. Insight Partners finds that privacy and security rank just behind data quality as top concerns for enterprise AI strategies in 2025, and that compliance poses additional hurdles in deploying AI agents, especially in data-sensitive industries, where companies might have to navigate data sovereignty laws, data governance rules, and healthcare regulations.
When Indicium’s AI agents, for instance, try to access data, the company tracks the request back to its source, that is, the person who asked the question that set off the entire process.
“We have to authenticate the person to make sure they have the right permissions,” says Avancini. “Not all companies understand the complexity of that.”
And with legacy systems in particular, this kind of fine-grained access control might be difficult, he adds. Once the authentication is established, it must be preserved through the entire chain of individual agents handling the question.
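One way to sketch that preservation of identity across a chain of agents: each hop carries an immutable auth context, and every data access re-checks permissions against the originating user rather than the agent doing the work. All the names and permissions below are hypothetical.

```python
# Hedged sketch: the original caller's identity travels, unmodified, through
# every agent in the chain, and each resource access is authorized against
# that original user. Resource names and permissions are invented.

from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: downstream agents can't rewrite the identity
class AuthContext:
    user: str
    permissions: frozenset

def agent_step(ctx: AuthContext, resource: str, next_agent=None):
    """One hop in the chain: authorize against the original user,
    then pass the same context along unchanged."""
    if resource not in ctx.permissions:
        raise PermissionError(f"{ctx.user} may not read {resource}")
    data = f"<{resource} rows>"  # stand-in for the real data fetch
    return next_agent(ctx, data) if next_agent else data

ctx = AuthContext(user="alice", permissions=frozenset({"sales_db"}))
result = agent_step(ctx, "sales_db",
                    next_agent=lambda c, d: f"summarized({d}) for {c.user}")
```

The frozen context is the design choice that matters here: no agent mid-chain can escalate or swap the identity, which is exactly the property Avancini says many companies underestimate.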
“It’s a definite challenge,” Avancini says. “You need to have a very good agent modeling system and a lot of guardrails. There are a lot of questions about AI governance, but not a lot of answers.”
And since the agents speak English, people will try endless tricks to fool the AI. “We do a lot of testing before we implement anything, and then we monitor it,” he adds. “Anything that’s not correct or shouldn’t be there we need to look into.”
At IT consultant CDW, one area where AI agents are already being used is to help staff respond to requests for proposals. This agent is tightly locked down, says its chief architect for AI Nathan Cartwright. “If someone else sends it a message, it bounces back,” he says.
There’s also a system prompt that specifies the agent’s purpose, he says, so anything outside that purpose gets rejected. Plus, guardrails keep the agent from, say, giving out personal information, or limiting the number of requests it can process. Then, to ensure the guardrails are working, every interaction is monitored.
“It’s important to have an observability layer to see what’s going on,” he says. “Ours is totally automated. If a rate limit or a content filter gets hit, an email goes out to say check out this agent.”
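The kind of automated guardrail monitoring Cartwright describes can be sketched simply: every interaction passes through a rate limit and a content filter, and any hit raises an alert. The thresholds, blocked patterns, and alert mechanism below are invented for illustration; a real system would send the email he mentions rather than collect strings in a list.

```python
# Simplified sketch of guardrails plus an observability hook: requests over
# a rate limit or matching a content filter are rejected, and every trigger
# is recorded so someone can "check out this agent". All values are invented.

BLOCKED_PATTERNS = ("ssn", "password")
RATE_LIMIT = 3  # max requests per user in this toy example

alerts: list[str] = []
request_counts: dict[str, int] = {}

def handle(user: str, message: str) -> str:
    """Apply rate-limit and content guardrails, alerting on any hit."""
    request_counts[user] = request_counts.get(user, 0) + 1
    if request_counts[user] > RATE_LIMIT:
        alerts.append(f"rate limit hit by {user}")
        return "rejected: rate limit"
    if any(p in message.lower() for p in BLOCKED_PATTERNS):
        alerts.append(f"content filter hit by {user}")
        return "rejected: content filter"
    return "ok"
```

The alert list stands in for the automated email: the guardrail rejects the request, and the observability layer independently records that it fired.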
Starting with small, discrete use cases helps reduce the risks, says Roger Haney, CDW’s chief architect. “When you focus on what you’re trying to do, your domain is fairly limited,” he says. “That’s where we’re seeing success. We can make it performant; we can make it smaller. But number one is getting the appropriate guardrails. That’s the biggest value rather than hooking agents together. It’s all about the business rules, logic, and compliance that you put in up front.”