The concept of “fiduciary duty” is one of the most ancient and ingrained foundations in human commerce. Rooted in Roman law and entrenched in English equity, it is the code that governs interpersonal trust. It is the specific legal mechanism we invoke when one person holds significant power over another’s well-being, whether that involves their wealth, health, or rights. For centuries, this duty has been inextricably linked to human consciousness and conscience.
But as we enter 2026, the technological reality of professional services is changing faster than the legal doctrine can adapt. We are moving from AI-as-tool, where a human uses a machine to support their own analysis or recommendations, to AI-as-agent, where the machine initiates, decides, and executes, potentially without human involvement.
This shift will create a legal tension. As we replace human experts with autonomous agents, we risk trading professional duty, based on loyalty and judgment, for product liability, which addresses safety and defects. This fundamentally alters the user’s relationship to the service and the protection they receive.
A product has no conscience. If the services provided by your lawyer, doctor, or financial advisor suddenly start coming directly from a software product, where does the duty to act in your personal best interest go? We are facing a future where our trusted advisors may be more consistent and accurate than ever, yet be entirely divorced from any direct human obligation to put our interests above all other incentives.
The Human and the Service Standard
Today, legal liability for licensed and certified professionals such as doctors, lawyers, and financial advisors is anchored in professional services standards. This system is based on the legal concept of Fiduciary Duty, which comprises two component duties:
- The Duty of Care: This requires competence, diligence, and prudence. It asks if the work was done well.
- The Duty of Loyalty: This requires the professional to put the client or patient first, to manage conflicts of interest, and to avoid self-dealing. It asks if the work was done for the right reasons.
There is an important distinction. The duty of care is a quality obligation: Did the professional act as another reasonably prudent doctor, lawyer, or advisor would? The duty of loyalty is an alignment obligation: Did the professional treat the client’s interests as primary, even when incentives pulled the other way?
A surgeon who makes a preventable error may violate care. A doctor who accepts incentives that bias treatment decisions may violate loyalty. A lawyer who misses a filing deadline may violate care; a lawyer who conceals a conflict may violate loyalty. A financial advisor who fails to understand a product may violate care; an advisor who churns an account to generate fees may violate loyalty.
That distinction matters because “care” can often be supported by process (training, checklists, monitoring). “Loyalty,” by contrast, lives in conflicts and incentives, places where process helps only if the actor is legally obligated to choose the client over themselves. You cannot easily audit loyalty by reviewing outcomes in the same way you audit accuracy.
In the current system, loyalty is driven by the human professional relationship. The credentialed and certified professional stands as the primary interface and the fiduciary shield. They act as the “learned intermediary,” whose specific job is to select, interpret, and apply a tool’s output for the client’s benefit. The modern liability stack functions because it has a hierarchy:
- The Licensed Professional: Liable for malpractice and fiduciary breach.
- The Firm/Institution: Liable for management, supervision, and policies.
- The Tool Maker: Liable for product design flaws or production defects.
- Regulators/Certifying Bodies: Capable of imposing penalties and revoking licenses.
All parties may provide oversight and quality control, but the professional is the only one who owes loyalty directly to the client. It is this obligation, this human judgment, that holds the system together and ensures the service is provided with the client’s best interest at heart.
The Agentic Shift
The rise of agentic AI threatens to collapse this stack. The problem is that if the professional linchpin role is occupied by a product, the human fiduciary anchor disappears. To the extent an AI exhibits what would, in a person, be called disloyalty, it does so because that behavior, even inadvertently, was built into the system. For example, a developer might build in optimization targets or training incentives that shape the system’s recommendations. Right now, developers have no reason to do otherwise, and these are choices made long before any client interaction occurs.
Today, the developer has no inherent fiduciary relationship with end users. They owe duties to shareholders, perhaps to their deploying customers, but not to the individual end user whose account the AI manages or whose treatment it recommends. Only the deploying firm – the wealth management platform, healthcare provider, or law firm – has fiduciary duties to the end user.
AI-as-tool can still fit the old model. The system drafts, suggests, or predicts, and then a human professional reviews and decides. In this paradigm, the human remains the “learned intermediary” exercising judgment on the client’s behalf. AI-as-agent, however, is different. It considers, decides, initiates, and executes actions, all without human input. It may place trades, adjust a dosage, send a notice, or schedule care. In this coming paradigm, the professional is no longer the decision-maker. So who owes the client the underlying duty?
This creates a “supervision paradox.” In a traditional firm, a senior partner might supervise a few junior associates, or an investment advisor might provide final confirmation for trade decisions. But in an AI-driven model, a single human advisor might nominally “supervise” 10,000 AI-managed clients or accounts. In such a scenario, a human’s ability to exercise actual judgment over any specific decision is effectively zero.
Consequently, the “human in the loop” protections become a legal fiction. When the human is essentially removed from the decision-making process, the fiduciary duty attached to them retreats as well. The law can still impose obligations and assign liability to the firm, the developer, and the operator. But that is not the same as the traditional expectation that a professional is obligated to put you first in moments of discretion.
Thus a fiduciary gap emerges in a world of agentic AI. The gap is not just about the AI’s accuracy or performance; it is about whose incentives the system embodies. Without a fiduciary standard, the Duty of Loyalty functionally disappears.
Nor does product liability easily fill this gap. Product liability addresses defects and safety. It addresses whether a product works as designed, and whether it is dangerous. But a product can be safe, operate exactly as intended, and still compromise the user’s interest to maximize profit, minimize resource use, or serve some other objective. A safe and functional product is not necessarily a loyal one.
Possible Paths for Regulation
If we acknowledge that the current framework is inadequate, we must anticipate ways to address it by assessing the roles of the cast of characters involved: the Developer (who designs and manufactures the model), the Deployer/Operator (the firm that markets, operates, and profits from it), the Professional Supervisor (if relevant; often nominal), and the end User.
In general, legal liability grows from the idea that duties should attach to whoever from these roles can actually control the risk. Developers control core behaviors; Operators control permissions, optimization goals, monitoring, and updates; Supervisors (if meaningful) confirm final outputs; Users control consent but rarely see the system’s true incentives or understand the full scope of their options.
We can argue, of course, that this multi-party legal liability is already complex. If a doctor uses a faulty MRI scanner that leads to a misdiagnosis, the legal system can indeed process a suit against the doctor and a product liability claim against the manufacturer. But our future problem is not that “no one can be sued.” The problem is that the existing web of liability fundamentally relies on a central trust relationship with the human professional, which may not have a functional equivalent in the future.
Even as multi-party liability will persist with AI agents, fiduciary duty is not just another claim. It is the foundation of a relationship. It makes someone legally responsible for being in the user’s corner, for making the user’s interests the highest priority in any decision. To preserve it in any form, we can consider at least two regulatory paths (there may be others).
1. The “Digital Fiduciary”
This path attempts to preserve the relationship model by translating it to an agentic world. This does not require pretending the machine has a conscience. Instead, it requires a rule that when a system is deployed as a professional substitute, someone behind it must carry fiduciary-like duties. We could accomplish this by extending scope and definitions under fiduciary law.
A machine can defensibly produce “duty of care-like” performance metrics; we can measure its performance and accuracy against an objective benchmark representing competency. But a machine does not naturally sit inside a “loyalty” framework. It does not easily incorporate conflicts, self-dealing, or the nuance of prioritizing a beneficiary. Therefore, this path requires that software be programmed with some version of rules-based “loyalty” as a hard constraint.
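To make the idea of rules-based “loyalty” as a hard constraint concrete, here is a minimal, purely illustrative sketch in Python. It assumes a hypothetical agent that scores each recommendation by modeled client benefit and by operator revenue, and it enforces a hard gate that refuses any action favoring the operator over the client; none of the names, fields, or rules come from an existing system or legal standard.

```python
# Purely illustrative: a hard "loyalty" constraint layered over an agent's
# recommendations. All names (Recommendation, LoyaltyViolation, the scoring
# fields, and the rules themselves) are hypothetical assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Recommendation:
    action: str                      # e.g. "buy_fund_x"
    client_benefit: float            # modeled benefit to the client (assumed score)
    operator_revenue: float          # revenue the operator earns if executed
    alternatives: List["Recommendation"] = field(default_factory=list)


class LoyaltyViolation(Exception):
    """Raised when a recommendation fails a hard loyalty rule."""


def enforce_loyalty(rec: Recommendation) -> Recommendation:
    """Block recommendations that put the operator's interests above the client's.

    Rule 1: never prefer an action over a known alternative that is better for
    the client but pays the operator less.
    Rule 2: never execute an action whose operator revenue exceeds its modeled
    client benefit.
    """
    for alt in rec.alternatives:
        if alt.client_benefit > rec.client_benefit and rec.operator_revenue > alt.operator_revenue:
            raise LoyaltyViolation(
                f"'{rec.action}' favors operator revenue over the better alternative '{alt.action}'"
            )
    if rec.operator_revenue > rec.client_benefit:
        raise LoyaltyViolation(
            f"'{rec.action}' earns the operator more than it benefits the client"
        )
    return rec


if __name__ == "__main__":
    low_fee = Recommendation("buy_low_fee_index_fund", client_benefit=0.9, operator_revenue=0.1)
    high_fee = Recommendation("buy_affiliated_fund", client_benefit=0.6,
                              operator_revenue=0.5, alternatives=[low_fee])
    try:
        enforce_loyalty(high_fee)
    except LoyaltyViolation as err:
        print("Blocked:", err)   # the constraint refuses the conflicted trade
```

The point of the sketch is the design choice, not the specific rules: loyalty is enforced as a gate the system cannot trade away, rather than as one more term in an optimization objective. A real deployment would need far richer, contestable definitions of “benefit” and “conflict,” which is precisely where the legal standard would have to do its work.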
In addition to the responsibility held by the deploying organization, fiduciary-like duties could be imposed on the developers who create AI systems marketed for use in these contexts. Companies that build and sell agentic lawyers or doctors would therefore assume obligations to the users who rely on their outputs. This could be true regardless of whether the developer has a direct relationship with end users or sells the systems to intermediary deployers. This duty would effectively hold developers accountable for design choices that put any financial or performance interests over those of the user.
The idea of extending such duties to parties that do not have a direct relationship with the end beneficiary is not without precedent. ERISA shows that fiduciary duties can attach to entities that are not in a one-to-one professional relationship with each beneficiary. Under ERISA’s functional definition, a plan service provider becomes a fiduciary (and owes duties of loyalty and prudence to participants/beneficiaries) to the extent it exercises discretionary authority or control over plan assets, or provides investment advice for a fee.
Using this example, if an agentic AI system has the power to make decisions about money, then the developer of that system could be a fiduciary even though it is merely a vendor with no direct contract with the user.
2. Adapting Product Law
A second option is to accept that agentic AI is a product, and then to question whether product liability can be updated to handle a role that looks fiduciary-like. That means at least broadening what counts as product failure. “Defect” would not only mean “buggy” or “unsafe.” It might also mean design choices that foreseeably produce harmful, conflicted, or disloyal outcomes, especially when the system is marketed as a professional-quality service.
Historically, software has dodged this sort of treatment as a “product” under law. Courts have most often treated software as “information-like” (speech or data) rather than as a commercial product. However, agentic systems blur this line because they take action; they do not just provide information to a human actor.
Outside the legal setting, this approach matches how software is already managed: testing regimes, safety cases, monitoring, update controls, post-incident remediation. But importing this commercial governance into a framework for legal accountability may be a weak substitute for true fiduciary duty, because product governance frameworks are naturally better at “don’t injure the user” than “don’t betray the user.” Translating an affirmative duty of loyalty into a catalog of preventable defects is arguably possible, but it is a major policy choice, and it risks flattening a rich, interpersonal, trust-based relationship into a product compliance checklist.
If we follow either of these paths, regulatory frameworks would need to be updated to address these issues preventatively. Regulators might impose a variety of specific design requirements on AI systems deployed in fiduciary contexts, such as requiring transparency from developers, mandating disclosure of optimization targets and system prompts, or requiring certain rules-based controls.
Developers could also be prohibited from participating in compensation structures that create conflicts between their own revenue and user outcomes. Third-party audits might be required before deployment in professional settings. We are already seeing varying types of legal requirements for foundation models, AI companion apps, and other specific types of AI systems. Targeting fiduciary roles and responsibilities is another area where preemptive standards would be essential.
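As a companion illustration of what such disclosure requirements might ask for, here is a hypothetical machine-readable manifest sketched as a Python dictionary. Every field name and value is an assumption made for illustration; none is drawn from any existing regulation, standard, or product.

```python
# Hypothetical disclosure manifest for an agentic system deployed in a
# fiduciary context. Field names and values are illustrative assumptions only.
import json

disclosure_manifest = {
    "developer": "ExampleCo (hypothetical)",
    "deployer": "Example Wealth Platform (hypothetical)",
    "optimization_targets": [
        "modeled client portfolio return",
        "adherence to client-stated risk tolerance",
    ],
    "excluded_objectives": [  # conflicts the system must not optimize for
        "deployer fee revenue",
        "placement of affiliated products",
    ],
    "rules_based_controls": [
        "block trades favoring higher-fee affiliated products over equivalent alternatives",
        "require human review above a position-size threshold",
    ],
    "system_prompt_disclosed": True,
    "third_party_audit": {
        "required_before_deployment": True,
        "scope": ["conflict-of-interest testing", "post-deployment outcome monitoring"],
    },
}

if __name__ == "__main__":
    print(json.dumps(disclosure_manifest, indent=2))
```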
Conclusion
We are potentially facing the Industrial Revolution of professional services. Just as we moved from individual craftsmen to industrial assembly lines, we seem to be moving from individual experts to automated agents. Such agents may expand access, reduce cost, and raise baseline competence. They may be beneficial, and they’re almost certainly inevitable.
But while the industrial century brought cheaper goods and higher standards of living, it also required new labor laws and safety standards. We need a regulatory bridge that demands not just that our machines are safe, but that they are on our side. Whether we achieve this by forcing firms to act as digital fiduciaries; by redefining product defects to include disloyalty; or by some other solution, we should strive to ensure that the fiduciary duty survives the digital age. For medicine, law, and finance, “on our side” is not just a sentiment. It is the legal architecture of trust.