L-R: Neha Bothra, Associate Editor, Forbes India, moderated a thought-provoking panel discussion with Nitin Seth, Co-Founder & CEO of Incedo, Srikanth Velamakanni, Co-Founder & Group CEO of Fractal Analytics, and Senthil Ramani, Global Lead – Data and AI at Accenture, at the NASSCOM Technology & Leadership Forum 2025.
As artificial intelligence (AI) opens new frontiers, conversations in Silicon Valley have shifted from generative AI (GenAI) to Agentic AI. At the Nasscom Technology and Leadership Forum 2025, held on February 25 in Mumbai, the excitement around Agentic AI was palpable: CEOs, consultants, and strategists exchanged notes on how they can use the transformative power of this technology to drive efficiency and innovation for large-scale impact across industries.
Agentic AI uses a combination of AI technologies such as machine learning (ML) and large language models (LLMs), evolving from tools into agents capable of independent decision-making. While it unlocks immense potential for innovation and efficiency, it also raises critical concerns about responsible AI behaviour and highlights the need for guardrails and robust frameworks to address challenges such as AI hallucinations, bias, misuse, and other unintended consequences.
In a special panel discussion moderated by Forbes India’s Associate Editor Neha Bothra, Senthil Ramani, global lead, Data and AI, Accenture; Nitin Seth, co-founder and CEO, Incedo; and Srikanth Velamakanni, co-founder, Group CEO, Fractal, discussed the application of Agentic AI in diverse sectors including health care, finance and manufacturing, the need for ‘experimentation’, and more. Edited excerpts:
Q. What is Agentic AI and why is everyone talking about it?
Ramani: I tend to tell my clients that every Monday, AI changes. I’m scared to be here because in the next 20 minutes, I don’t know what’s going to happen. It’s an extremely nonlinear world that we live in with AI. And the reason is that if GenAI brought the marginal cost of generation to practically zero, what Agentic AI is doing is bringing the marginal cost of action to zero. And I think that’s a profound point for us.
If you say, hey, GenAI cracked the code on language, what is Agentic AI doing? It is fundamentally fusing different forms of AI, creating them into actions, into reasoning, to be able to perform human-like behaviour, which is fundamentally important for us. That’s what is happening, which is why Agentic AI is very cool. It is there to stay. And in fact, it is going to be the one catalyst for scaled AI in the future.
Q. How is it being used by companies to solve real-world problems?
Seth: The most pervasive use of Agentic AI is in customer service. AI has been there in customer service through bots for a long time; it’s just reaching the next level. If there was an L1, L2, L3 and different levels of support, now there’s a much deeper set of problems that you are able to solve through Agentic AI. So that’s one thing happening across multiple industries.
The second thing is fraud detection, across telecom and financial services. Telecom is very aggressive in terms of applying AI, while financial services, especially banks, are extremely conservative. So, the only AI use cases I see my banking clients adopting are around fraud, anti-money laundering (AML), compliance, and risk.
The third area is broadly called anomaly detection. In telecom, there is this notion of autonomous networks. The heart of a telecom system is the network, and it’s a very complex and large beast. It’s worth tens of billions of dollars. And there’s a move towards making it self-managed in terms of detecting issues, self-curing, preventive maintenance.
Q. In India, we are trying to push manufacturing to create jobs. Do you think it’s in conflict with the use of Agentic AI?
Ramani: I see this as convergence and not conflict. Imagine the power of adding language models and teaching world models, as we call them: you can have a single robot do multiple tasks. Imagine the power of the multiplier that you get from that. And you can take that back and start creating a greenfield site. So, if Amazon wants to create a site, or Ikea wants to create a site, they can now do it through a digital twin concept at a 50 percent faster rate, with better cost reductions, and figure out where to bring in the labour, creating the convergence as part of what they do.
We’ve also used a lot of AI and Agentic AI within Accenture. Marketing is a huge department for us. And within marketing and communications, we’ve been applying Agentic AI. We have hundreds of agents running in Accenture. We’ve had a 6 percent cost reduction in our ability to do it, and at least a 55 percent reduction in terms of overall spend. Our go-to-market speed has increased as well. I just want to caution that it’s still early days. The economics is still settling down.
Q. What’s your outlook for the widespread adoption of Agentic AI in India, and how has your experience with Cogentiq been as you focus on Agentic AI to improve enterprise productivity?
Velamakanni: The first thing is the emergence of what I call Software 2.0. So far, we have been living in the world of Software 1.0, which was about creating a bunch of rules: this is how the software should behave for specific users and specific things. It’s all menu-driven, structured, logical, organised into rules. That is Software 1.0.
In the last one or two years, a layer of intelligence has been added to that software. What I now see is the emergence of Software 2.0, which is products that have been built ground-up. Agents are fundamentally the backbone of that product. And then you add maybe a UI, UX layer that interacts with the user, and that’s how you get stuff done. So that is a new world we are walking into.
Cogentiq as a product, for example, is imagining that kind of a Software 2.0 world. For example, today in customer service, we are able to create an agent-assist solution, which assists the human agent in answering questions, auditing and summarising calls, and so on, and is delivering tremendous performance. Human agents sometimes make mistakes; they have a 10 percent error rate. Plus, they take a long time.
So, can I get answers in a way that reduces error rates, improves accuracy, and reduces hold times? That’s something we are doing with an agent-assist solution. [Also] think about these sorts of things across the board, in augmenting human performance, and creating software that points to an autonomous world.
Right now, everything is document intensive. There’s a lot of manual reviewing of everything. If an agent can perform all these tasks, we will free up a lot of people to do other things. There are very few people who are experienced enough to do this work, and they take a full day to review one 500-page document of this kind. With Agentic AI, you can bring that down to maybe 5 minutes and automate that entire process.
What you’re seeing is a once-in-a-lifetime opportunity to reimagine every business process with AI. Whether it is marketing and campaigns, trade, finance, customer service or health care, you can reimagine every business process with AI.
Q. How soon do your clients want to incorporate this in their strategies? What are the budgets for this?
Seth: I can’t talk very intimately about India, but I can certainly talk about the US, which is where a lot of this is being led. Number one, the level of activity is very different across industries. And it depends on the level of regulation in that industry and the industry’s legacy. A tremendous level of activity is happening in telecom, whereas financial services and health care are very, very cautious.
The biggest thing that is happening is software development. Any company worth its salt is looking to drive productivity improvement in the software development process. So, the majority of what I see happening is still internal. In the external use cases, my wealth management clients are certainly experimenting with more autonomous advice. The banks are more regulated, while wealth managers are less regulated. So, they are certainly experimenting with advice. But I would say at this point, the use cases that are closest to getting to production are internal. I would say that the ratio is 90:10, where 90 percent is internal processes, and 10 percent is customer-facing processes.
In the next 12 to 18 months, this will change dramatically. Because if you look at the progress in the last six to 12 months, it’s hard to believe. Foundation models were launched just a couple of years back, and we were trying to figure out how they fit into the overall tech and data infrastructure. From there, we are now able to drive significant actions in a number of areas. In those areas, the productivity improvements are significant; in parts of it, it’s like 40 percent productivity improvement. These are very large numbers.
If you think about all the internal work that happens, and if you start getting 20 to 40 percent productivity improvement, what does it mean for jobs and for our industry? There’s a lot to be unpacked there. With the infrastructure that is getting set up, the focus will shift to more business- and customer-oriented use cases. They are tougher to do because there is more re-envisioning to be done.
Q. Agentic AI highlights the need for responsible AI behaviour in autonomous systems. What frameworks are required to ensure these models do not have unintended consequences?
Ramani: I want to reframe that into the objective of what we’re trying to solve. The purpose is more important than how we solve it. So, the reframing of responsible AI is not just about ethics; it’s about the fact that there is a growth agenda associated with it, and that reframes the conversation significantly. It is purpose and profit coming together. So, that’s really the first thing.
The second thing is how you put the guardrails in place; the guardrails don’t start just with policies. A lot of organisations take their cybersecurity policy and reframe it into an RAI [responsible AI] policy. On one hand, it’s a good thing to do because you can get the deliverable done faster. But it’s not a great thing because, with AI becoming Software 2.0 and the bedrock of every organisation, you would want to build responsibility in right from where you source data, through the use case, to the final point of decision-making. The end-to-end needs to be done as part of these frameworks as well.
What’s interesting is that there are two momentums. There’s an enterprise momentum in AI, and a government momentum in AI. Now, these two need to converge and not collide.
Velamakanni: If we think of responsible AI as a way to enable innovation, it is far more powerful. See, if you talk to the big AI companies, the ones who are building frontier models, and the big thought leaders of AI, they’re always talking about safety: “AI could come and kill everyone”. So, first of all, you may have the right intentions, you may have the right purpose, but it may take the wrong path and solve some other problem and, in the process, kill everyone. For example, to reduce carbon emissions [AI could] say, “Human beings are creating carbon, let’s kill them”. AI safety is a paramount topic, and you need that big red button you can push to stop everything from happening.
AI safety has to be thought through ground-up in every model. But the immediate problems are not that. The immediate problems are bias and unfairness. AI has to be much more responsible in making sure it is accurate, transparent, and fair. There’s a whole host of policies. You have to take these responsible AI principles, create toolkits, and automate those toolkits in such a way that before you launch any product or service with AI embedded in it, you’re running through the toolkit and it says, ‘I tested all these scenarios, based on all these principles, and all these checks have been made and, therefore, it’s a green and it is a go for launch’. That’s what we should see, whether it is India at a country level, at an industry level, or at a product level.
Q. How can companies prepare for this journey in terms of empowering teams and addressing these challenges to minimise the downside?
Seth: Obsessive experimentation. See, we are in a world of duality. The safety and governance question is very important. But if you over-index on it, you will not do anything. At this point of time, especially when I think about India, there has to be a clarion call for experimentation. A lot of it is going to fail, and we have to be prepared for that. But if you don’t do it, nothing will happen. There are three imperatives in terms of where this is going and how you derive value from it.
The first imperative is the 90:10. In the 90 percent of pilots that are happening, how do you take those pilots to scale? It requires changes in policy, in how people work, and all of that. It’s a big management challenge and you need a lot of force of will to drive that kind of change. That’s the first thing: A lot of pilots have taken place over the last six, 12, 18 months, and how do we make them real, and take them to scale.
The second bit is the shift in the 90:10, so that you begin to not just improve productivity in your internal organisation but impact your customer and business model in more fundamental ways. It’s a tougher problem, because you need to understand the value chain very deeply.