
Agentic AI is here and ethics can't be an afterthought

July 21, 2025


We're entering the third wave of AI, known as agentic AI, and it's reshaping how we work and live. Imagine an AI system that not only analyzes your medical scans but also autonomously coordinates with specialists, schedules follow-ups, and adjusts treatment plans in real time. This isn't science fiction anymore; it's already happening across industries, from finance and retail to healthcare and cybersecurity. Read my previous blog about what makes agentic AI the future of autonomous intelligence.

Gartner predicts that by 2028, 15% of daily work decisions will be made autonomously, up from 0% in 2024. That's a massive shift in how the critical choices affecting our lives get made, and it's why developers, businesses, and governments need to act now. Regulations like the EU AI Act are leading the charge, but we shouldn't rely on policy alone. This blog delves into the ethical complexities of agentic AI and the crucial steps we must take to ensure this powerful technology serves everyone fairly, not just efficiently.

Why agentic AI systems need ethical frameworks

Traditional AI systems followed strict rules and required constant human oversight. Agentic AI is different because it operates like an independent team member. It can analyze data, select the appropriate tools, and take action without waiting for approval.

Consider a hospital AI that monitors inventory levels and automatically orders supplies when stock runs low, or an IT security system that detects a cyberattack and immediately blocks the suspicious activity. These capabilities save time and prevent problems, but they also create new risks.
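To make the pattern concrete, here is a minimal sketch of what such an autonomous reorder agent might look like, with a simple spending guardrail. Everything in it (the InventoryItem type, the thresholds, the place_order function) is hypothetical, invented for illustration rather than drawn from any real hospital system.

```python
# A minimal sketch of an autonomous reorder agent with a spending
# guardrail. All names and thresholds here are hypothetical.
from dataclasses import dataclass

REORDER_THRESHOLD = 20      # units on hand that trigger a reorder
MAX_AUTONOMOUS_SPEND = 500  # orders above this need human sign-off

@dataclass
class InventoryItem:
    name: str
    units_on_hand: int
    unit_cost: float

def place_order(item: InventoryItem, quantity: int) -> None:
    print(f"Ordered {quantity} x {item.name}")

def check_and_reorder(item: InventoryItem, quantity: int = 100) -> None:
    if item.units_on_hand >= REORDER_THRESHOLD:
        return  # stock is fine; nothing to do
    cost = quantity * item.unit_cost
    if cost <= MAX_AUTONOMOUS_SPEND:
        place_order(item, quantity)  # routine: act autonomously
    else:
        print(f"Escalating {item.name} order (${cost:.0f}) for approval")

check_and_reorder(InventoryItem("saline bags", units_on_hand=12, unit_cost=3.5))
```

Even a guardrail this simple illustrates the core design question of agentic AI: which actions an agent may take on its own, and which it must escalate.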

The autonomous nature of these systems means that mistakes get amplified quickly. If an AI system has biased training data, it might unfairly deny loans to thousands of applicants before anyone notices the pattern. If it misinterprets a task, it could send confidential information to the wrong recipients or delete essential files.

Then there's the transparency challenge. Many agentic AI systems are so complex that even their creators struggle to explain precisely how they make decisions. If an AI flags you as a credit risk, how can you appeal the decision if no one understands the reasoning behind it? This "black box" problem creates serious accountability gaps.

The data foundation problem with agentic AI

Modern AI systems consume enormous amounts of data. Your smartwatch tracks your activity, your credit card records your purchases, and your smartphone monitors your location. Estimates suggest that we generate around 400 million terabytes of data every day.

Agentic AI systems don't just store this information; they analyze it and actively use it to predict behavior, assess risk, and make decisions that affect your life. The challenge is that most people are unaware of how much data they share or how it's being used. The lengthy, complex consent agreements that govern this data are often deliberately designed to be confusing, and research shows that very few users read them thoroughly. As a result, people remain unaware of how their data might influence an AI's decision about their job application, insurance rates, or loan approval.

This lack of informed consent creates a trust problem. When people don't understand how their data is being used, they lose confidence in the systems that rely on that data. This is especially concerning when AI systems utilize personal information in unexpected ways, such as analyzing fitness tracker data to predict health risks or examining work patterns to assess job performance.

What are the primary ethical challenges associated with agentic AI? 

Algorithmic bias and fairness: One of the most serious concerns with agentic AI is its potential to perpetuate and amplify existing biases. These systems learn from historical data, which often reflects past discriminatory practices. For example, if an AI system is trained on hiring data from a company that historically favored certain demographic groups, it might continue to discriminate against women or minorities. The problem becomes worse because AI systems can process applications much faster than human recruiters, meaning biased decisions affect a larger number of people in a shorter amount of time.
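One common way to surface this kind of bias is to compare selection rates across groups. The sketch below applies the "four-fifths" disparate-impact heuristic to a handful of fabricated decisions; the group labels and outcomes are placeholders, not real hiring data.

```python
# A minimal bias-audit sketch: compare selection rates across groups
# and apply the "four-fifths" disparate-impact rule of thumb.
# The records below are fabricated placeholders.
from collections import defaultdict

decisions = [  # (group, hired) -- hypothetical audit-log entries
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, hires = defaultdict(int), defaultdict(int)
for group, hired in decisions:
    totals[group] += 1
    hires[group] += hired

rates = {g: hires[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates)                        # selection rate per group
print(f"impact ratio: {ratio:.2f}")
if ratio < 0.8:                     # the four-fifths heuristic
    print("Potential disparate impact -- review the model and its data")
```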

Privacy and data security: Agentic AI systems can analyze your data in ways you may not have anticipated. An IT security system might monitor your login patterns and flag you as a potential security risk based on unusual activity. While this might seem reasonable for security purposes, it could lead to restricted access or even job consequences if the system makes incorrect assumptions.

Transparency and explainability: When an AI system makes a decision that affects an individual, like denying a loan or flagging their account, that individual should have the right to understand why. Unfortunately, many agentic AI systems operate as "black boxes," making it impossible to trace their reasoning. This lack of transparency is particularly problematic in high-stakes domains such as healthcare, finance, and criminal justice. If people can't understand how an AI system made a decision, they can't effectively challenge it or learn from it to prevent similar issues in the future.
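One path out of the black box is to favor models whose decisions decompose into per-factor contributions. The sketch below uses a hypothetical linear credit-scoring model, with invented weights and features, to show how each factor's contribution can double as a plain-language reason.

```python
# A minimal explainability sketch: a transparent linear scoring model
# whose per-feature contributions serve as the explanation.
# Weights and features are invented for illustration.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "late_payments": -0.3}

def score_with_explanation(applicant: dict[str, float]) -> tuple[float, list[str]]:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    # Rank factors by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = [f"{name} changed your score by {value:+.2f}" for name, value in ranked]
    return total, reasons

score, reasons = score_with_explanation(
    {"income": 1.2, "debt_ratio": 0.9, "late_payments": 2.0}
)
print(f"score = {score:.2f}")
for line in reasons:
    print(" -", line)
```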

Guidelines for building ethical frameworks for agentic AI

  • Establish accountability: Every AI decision should leave a clear trail that humans can follow and understand. This means implementing systems that can explain their reasoning in simple terms and provide audit trails for important decisions (see the audit-trail sketch after this list). Companies need to establish clear policies about when and how AI systems can act autonomously, aligned with business objectives and regulatory requirements, so that AI-driven decisions support, rather than undermine, organizational goals.
  • Promote fairness through better data: Bias in AI systems starts with biased data. To build fair systems, companies must actively work to ensure their training data represents the diversity of people who will be affected by AI decisions. This means regularly auditing datasets for bias, seeking out underrepresented voices, and correcting historical imbalances. It's not enough to collect more data. The data must be more accurate and representative of the real world.
  • Implement transparency standards: People deserve to know when AI systems are making decisions about them and how those decisions are reached. This doesn't mean overwhelming users with technical details, but rather providing clear, understandable explanations of the key factors that influenced a decision. For example, if an AI system increases someone's insurance premium, it should clearly spell out the main factors behind that change without resorting to jargon that most people won't understand.
  • Human-centered design: The goal of agentic AI shouldn't be to replace human judgment entirely, but to enhance human capabilities. The most effective systems maintain human oversight while automating routine tasks and providing valuable insights. Consider a cybersecurity system that detects potential threats but requires human approval before taking significant actions, such as blocking user accounts (see the approval-gate sketch after this list).
  • Education and digital literacy: As agentic AI becomes more prevalent, education becomes crucial. People need to understand how these systems work, when they're being used, and how to respond when AI decisions seem unfair or incorrect. This education should cover practical skills, such as managing privacy settings, recognizing AI-driven decisions, and knowing how to appeal or challenge those decisions. It should also build the broader digital literacy that enables people to navigate an increasingly AI-powered world.
  • Ongoing monitoring and improvement: Ethical AI development is not a one-time effort; it is an ongoing process. As agentic AI systems learn and evolve, we need mechanisms to monitor their performance, identify emerging ethical issues, and make necessary adjustments. This includes establishing regular auditing processes, creating feedback mechanisms for users affected by AI decisions, and developing rapid-response protocols for when systems cause harm or operate outside expected parameters.
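As a concrete illustration of the accountability point above, here is a minimal audit-trail sketch: each autonomous action is appended to a log with its inputs and stated reason, so a human can later reconstruct what happened. The field names and the loan-screening scenario are hypothetical.

```python
# A minimal decision-audit-trail sketch. Every autonomous action is
# appended to a log a human can review later. Names are hypothetical.
import json
import time

AUDIT_LOG = "decisions.jsonl"

def record_decision(actor: str, action: str, inputs: dict, reason: str) -> None:
    entry = {
        "timestamp": time.time(),
        "actor": actor,    # which agent made the call
        "action": action,  # what it did
        "inputs": inputs,  # the data it acted on
        "reason": reason,  # its stated rationale, in plain terms
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_decision(
    actor="loan-screening-agent",
    action="flag_for_review",
    inputs={"application_id": "A-1042", "debt_ratio": 0.92},
    reason="debt ratio above the 0.85 review threshold",
)
```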
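And for the human-centered design point, a minimal approval-gate sketch: the agent handles routine threats on its own but queues high-impact actions, like blocking an account, for explicit human sign-off. The severity scale and actions are invented for illustration.

```python
# A minimal human-in-the-loop sketch: routine actions run autonomously,
# high-impact ones wait for a human. Severity scale is invented.
from queue import Queue

approval_queue: Queue = Queue()  # reviewed by a human operator

def handle_threat(account: str, severity: int) -> str:
    if severity < 7:
        return f"auto-quarantined session for {account}"  # routine: act now
    # High impact: propose the action, don't execute it.
    approval_queue.put({"action": "block_account", "account": account})
    return f"block of {account} queued for human approval"

print(handle_threat("alice", severity=3))
print(handle_threat("bob", severity=9))
```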
