Artificial Intelligence and Agency Attribution:
Institutional Frameworks for Algorithmic Accountability
Abstract
This Article examines whether the Third Restatement of Agency provides adequate frameworks for attributing AI-generated representations and transactions to deploying organizations, or whether agentic AI's distinctive characteristics require doctrinal innovation. Commentators argue that AI creates a categorical crisis for agency law: algorithmic opacity prevents meaningful control, machine autonomy breaks the principal-agent relationship, and emergent AI outputs exceed authorization frameworks designed for human agents. This Article demonstrates that the crisis narrative overstates the doctrinal challenge by underappreciating the institutional sophistication the Third Restatement already achieved. The Restatement's reconceptualization of agency for organizational contexts--through structural manifestation, architectural control, and institutional apparent authority--already accommodates the characteristics commentators identify as novel for AI. Organizations deploying AI in institutional roles manifest authority through observable deployment decisions, exercise control through ex ante design and ex post monitoring, and bind themselves through apparent authority grounded in third-party reliance on institutional features. Fiduciary duties impose monitoring, design, and response obligations whose discharge limits attribution liability and whose breach expands it. The Article concludes that AI attribution is an institutional problem, not a technological one, requiring careful application of existing frameworks rather than categorical doctrinal innovation.
I. Introduction: The Crisis Narrative and Institutional Response
In 2006, the American Law Institute promulgated the Restatement (Third) of Agency, a landmark reformulation that achieved what this Article terms the "institutional turn"--a decisive shift from conceiving agency law as governing paradigmatically interpersonal relationships to recognizing that organizations constitute the dominant context for agency relationships in modern commercial practice. Twenty years later, that institutional turn confronts the "computational turn": the deployment at scale of "agentic AI," autonomous systems capable of goal formulation, multi-step planning, tool use, continuous learning, and independent execution that operate as genuine commercial actors rather than passive instruments.1
The computational turn has generated a crisis narrative in legal scholarship. Commentators argue that AI creates categorical challenges for agency doctrine: algorithmic opacity prevents the meaningful control agency law requires,2 machine autonomy breaks the principal-agent relationship by eliminating ongoing oversight,3 AI-generated outputs (particularly "hallucinations") exceed authorization frameworks designed for human agents,4 and machine-speed operations make ex post accountability fictive because humans cannot intervene in processes occurring at microsecond timescales.5 These arguments share a common premise: doctrine developed for bilateral relationships between natural persons cannot accommodate AI's distinctive characteristics without fundamental reconceptualization. Proposed solutions range from new categories of "automated agents" with specialized rules to strict liability for AI deployers to legal personhood for AI systems.6
This Article challenges the crisis narrative. The Third Restatement of Agency provides adequate analytical frameworks for AI attribution without requiring categorical doctrinal innovation. The key is recognizing that the Restatement's institutional reconceptualization already addresses the characteristics commentators identify as novel for AI. The Restatement developed frameworks for organizational principals that cannot observe agent decision processes, for agents operating with substantial autonomy at speeds exceeding real-time oversight, for authority manifested through structural features rather than bilateral communications, and for outcomes emerging from complex systemic interactions. These are not challenges AI introduces. They are challenges organizational contexts have always presented, and the Third Restatement's institutional frameworks already address them.
The Article's thesis operates at two levels. Descriptively, apparent authority doctrine, applied through institutional frameworks the Third Restatement developed for organizational contexts, extends to AI deployments without requiring new doctrines. Organizations deploying AI systems in institutional roles manifest authority through observable deployment decisions, exercise control through architectural mechanisms (system selection, parameter configuration, monitoring), and bind themselves through apparent authority based on reasonable reliance on institutional features. Normatively, the institutional approach is superior to alternatives because it preserves doctrinal coherence, maintains technology neutrality, and creates accountability without requiring AI personhood or other conceptual innovations that might fragment responsibility.
The Article proceeds in seven Parts. Part II examines the Third Restatement's institutional reconceptualization, demonstrating how its frameworks already transcend anthropocentric assumptions even for human agents. Part III shows that AI characteristics (opacity, autonomy, speed, emergence) parallel organizational characteristics the Third Restatement already accommodates. Part IV applies apparent authority doctrine to AI deployments through "scope-of-deployment" analysis. Part V addresses fiduciary duties and AI oversight, demonstrating how monitoring, design, and response obligations create coherent internal governance. Part VI examines the instrumentality doctrine's evolution and areas requiring doctrinal clarification. Part VII concludes.
A. Methodology and Analytical Strategy
This Article employs doctrinal analysis focused on the Third Restatement of Agency, examining how its frameworks apply to AI deployments. The analytical strategy proceeds through three steps designed to establish the Article's core claim.7
First, Part II establishes what the Third Restatement accomplished in reconceptualizing agency for organizational contexts. The Second Restatement treated agency as paradigmatically bilateral: relationships between identified individuals who communicate intentions and understand commitments.8 The Third Restatement inverted this framework, making organizational applications "the focal point for the application of agency doctrine."9 This reconceptualization required developing frameworks adequate to principals that act only through agents, that cannot personally observe or direct agent conduct, and that coordinate multiple agents through institutional structures. Part II examines how the Third Restatement addressed these challenges through structural manifestation, architectural control, and institutional apparent authority.
Second, Parts III and IV demonstrate that AI deployments in organizational contexts exhibit characteristics parallel to those the Third Restatement already addresses. Part III examines four characteristics the crisis narrative identifies as novel for AI (opacity, autonomy, speed, emergence) and shows that organizational agency already accommodates parallel features through institutional frameworks. Part IV applies apparent authority doctrine to AI-generated representations through scope-of-deployment analysis. Third, Parts V and VI address internal governance and the doctrinal implications of the institutional approach.
B. Scope and Limitations
This Article addresses attribution of AI-generated representations and transactions in commercial contexts. The focus is on apparent authority and related attribution doctrines (manifestation, control, ratification) rather than tort liability or strict liability regimes. When AI systems make representations to customers, execute transactions with suppliers, or generate commitments affecting third parties, apparent authority doctrine determines whether deploying organizations are bound. This is distinct from questions about when organizations are liable in tort for AI-caused harms or when AI failures trigger product liability. While these broader questions are important, this Article's contribution is demonstrating that existing institutional frameworks prove adequate for the bounded but foundational attribution problem.
C. The Stakes: Accountability, Innovation, and Doctrinal Coherence
The choice between "crisis requiring innovation" and "institutional frameworks already adequate" matters for three reasons. Accountability: the institutional framework creates clear accountability by focusing on deploying organizations' observable manifestations and architectural control, preventing organizations from disclaiming responsibility by arguing AI "decided on its own." Innovation: technology-neutral frameworks preserve innovation incentives by applying the same institutional analysis to AI deployments that governs other complex organizational instrumentalities. Doctrinal coherence: extending existing frameworks preserves predictability, leverages existing jurisprudence, and avoids AI-specific doctrines that risk inconsistency, obsolescence, and complexity.10
II. The Third Restatement's Institutional Reconceptualization
Understanding why the Third Restatement's frameworks extend to AI attribution requires appreciating what the Restatement accomplished in reconceptualizing agency for organizational contexts. This Part examines five themes: the displacement of the bilateral paradigm by organizational focus; manifestation through structural rather than communicative mechanisms; control through architectural rather than supervisory mechanisms; the elimination of inherent agency power through sophisticated apparent authority doctrine; and the treatment of computer programs as organizational instrumentalities. The unifying theme is that the Third Restatement already emancipated agency doctrine from dependence on psychological states, interpersonal communications, and real-time oversight--the features the crisis narrative claims AI destroys.
A. From Bilateral to Systemic: The Displacement of the Interpersonal Paradigm
The Second Restatement, adopted in 1958, "excluded the special applications of the principles of agency to persons or combinations of persons concerning whom special rules exist, such as partnership and corporation law."11 This exclusion reflected conceptual commitment to agency as paradigmatically bilateral: relationships between identified individuals who communicate intentions and understand commitments. Each party was conceived as an individual capable of communication, understanding, and consent.
The Third Restatement inverted this framework, making organizational applications the "focal point for the application of agency doctrine."12 The Introductory Note explains that "the focal point for the application of agency doctrine is determining either the duties owed the organization by those holding positions within it or the consequences of interactions between actors in positions defined by one organization with individuals external to the organization or with actors who hold positions in another organization."13
This shift was not merely quantitative but conceptual. Organizations are juridical constructs that cannot personally observe agents, communicate intentions, or form beliefs. The Third Restatement had to develop frameworks adequate to principals that act only through agents and coordinate multiple agents through institutional structures. In this reconceptualization, the individual human agent in a bilateral relationship becomes a special case of the organizational paradigm rather than the paradigm itself. The systemic character of organizational agency means the Third Restatement already had to solve problems of attribution when the principal lacks direct observational capacity, when manifestation cannot operate solely through bilateral communication, and when control must function through architectural design rather than real-time oversight. These are precisely the challenges the crisis narrative identifies as novel for AI.
B. Manifestation Through Structure Rather Than Communication
Section 1.01 defines agency as arising when a principal "manifests assent to another person (an 'agent') that the agent shall act on the principal's behalf and subject to the principal's control." Section 1.03 defines "manifests" as written or spoken words or other conduct where the person has notice that another will infer assent or intention from such conduct.14 Traditional doctrine conceived manifestation as communicative.
The Third Restatement recognized that organizational manifestation operates differently. Comment b to Section 1.01 states that "when a principal is an organization, manifestations may be made indirectly and in generalized ways."15 Comment f elaborates: "Organizations manifest their assent by appointing that person to a position defined by the organization." The "observable connections between the individual and the organization"--position, title, function--constitute manifestation.16 When a corporation appoints Jane to a "Vice President of Sales" position, the organization has manifested that she has the authority customarily associated with sales vice presidents. No one necessarily communicated to Jane exactly what authority the position encompasses. Yet Jane possesses actual authority and third parties can rely on apparent authority because the structural fact of her position manifests authority. Comment f further explains that organizations "operate by subdividing work or activities into specific functions that are assigned to different people." This functional differentiation creates authority independent of specific communications.
Critically, this structural manifestation does not require the principal to intend or understand the specific manifestation. The manifestation operates at a higher level of abstraction--the organization manifests that persons holding certain positions have certain categories of authority, without necessarily intending or foreseeing each specific exercise of that authority. This emancipation from intentionality is essential: manifestation doctrine already operates at an institutional level without requiring psychological states like intention, understanding, or belief from the organizational principal.
C. Control Through Architecture Rather Than Supervision
Section 1.01's control requirement distinguishes agency from other legal relationships. For individual principals, control operates through personal oversight. Organizations cannot exercise control this way. Comment f to Section 1.01 explains that organizational control "is often" exercised by "another agent, one holding a supervisory position" rather than directly by the principal organization.17 Moreover, organizational control operates through structural mechanisms: "incentive structures that reward the agent for achieving results"; "assigning a specified function with a functionally descriptive title to a person," which "tends to control activity because it manifests what types of activity are approved by the principal to all who know of the function and title, including their holder."18
These structural forms of control--compensation systems, job descriptions, performance metrics, reporting hierarchies, budgetary authority, approval thresholds--operate ex ante through institutional design rather than ex post through personal direction. This architectural control does not require the principal to observe or understand specific agent actions. The organization controls by setting parameters and constraints, not by monitoring compliance at every moment. This emancipation from real-time oversight is essential: control doctrine already operates architecturally without requiring continuous observation or comprehension by the principal.
D. Elimination of Inherent Agency Power Through Sophisticated Attribution
The Third Restatement's most theoretically significant innovation was eliminating "inherent agency power"--the Second Restatement's residual category for binding principals despite absence of actual or apparent authority.19 The Third Restatement "does not use the concept of inherent agency power," instead covering those situations through broadened apparent authority.20 Comment b to Section 2.01 explains that manifestations creating authority can be "informal, implicit, and nonspecific." They need not "use the word 'authority'" and need not "consist of words targeted specifically to a third party."21
The elimination succeeded because apparent authority operates through institutional features observable to third parties rather than requiring specific communications. When a person sits at the loan officer's desk in a bank branch, wearing the bank's uniform and using the bank's forms, third parties reasonably believe that person has authority to approve loans--even if the bank never specifically communicated that authority to third parties. The manifestation is the bank's structural decision to place someone in that position. By broadening manifestation to encompass institutional forms--structure, position, custom, pattern--the Third Restatement could cover through apparent authority many situations the Second Restatement addressed through inherent agency power. Attribution operates through institutional features and custom rather than requiring proof of specific manifestations regarding particular transactions.
E. The Instrumentality Doctrine and Its Technological Premises
Section 1.04(5) defines "person" as "an individual or entity that has legal capacity to possess rights and incur obligations." Comment e elaborates: "A computer program is not capable of acting as a principal or an agent . . . computer programs are instrumentalities of the persons who use them."22 This instrumentality doctrine reflected 2006 technological reality: computer programs were predominantly deterministic, executing pre-programmed instructions without meaningful autonomy.
Twenty years later, the technological landscape has transformed fundamentally. But the instrumentality doctrine's classification of AI as property does not displace the institutional attribution analysis, and that analysis does not require treating AI as a "person" capable of being an agent under Section 1.01. The critical move is functional: the attribution doctrines the Third Restatement developed for organizational contexts apply equally when organizations act through autonomous instrumentalities integrated into their commercial operations. The instrumentality doctrine addresses AI's formal status; institutional attribution doctrine addresses when organizations are bound by acts performed through their instrumentalities. These are distinct questions, and the answer to the second does not depend on the answer to the first.
III. Algorithmic Characteristics as Organizational Characteristics
The crisis narrative frames AI characteristics--opacity, autonomy, speed, emergent behavior--as categorically novel, requiring fundamental doctrinal innovation. For each characteristic the crisis narrative identifies as creating doctrinal crisis, there is a corresponding organizational characteristic that the Third Restatement already addresses through institutional frameworks. This Part examines each characteristic and its organizational parallel, then analyzes the leading AI attribution case as an illustration of institutional analysis in action.
A. The Commercial Reality of Agentic AI
To evaluate whether existing doctrine provides adequate conceptual tools, we must understand what distinguishes agentic AI deployed in 2025--2026 business operations from earlier automation. The shift is from automation (doing the same thing faster) to autonomy (deciding what to do and how to do it).23
The defining characteristic of agentic AI is its proactive, goal-driven nature. An enterprise user assigns a goal such as "optimize inventory levels for Q3 to reduce holding costs by 10%," and the agent autonomously determines the sequence of actions required to achieve it. This autonomy is underpinned by "function calling" or "tool use"--the capacity for AI systems to generate executable code, SQL queries, or API calls that trigger real-world actions: sending payments, updating databases, modifying production schedules, executing contracts. Contemporary AI does not merely produce text; it creates consequences.24
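To make the mechanism concrete, consider a minimal sketch of the tool-use pattern (Python; the tool name submit_payment and the record format are hypothetical, and no actual vendor API is depicted). The model emits a structured call; deployer-written code validates it against a registry and executes it. The deployer, not the model, defines the registry and the dispatch logic, which is why tool use is an architectural choice attributable to the deploying organization.

```python
import json

# Deployer-defined tool registry: the actions the model is permitted to invoke.
# (Hypothetical schema for illustration only.)
TOOLS = {
    "submit_payment": {
        "description": "Pay a supplier invoice from the operating account",
        "parameters": {"invoice_id": "str", "amount_usd": "float"},
    },
}

def dispatch(model_output: str) -> str:
    """Parse a model-emitted tool call and trigger the corresponding action."""
    call = json.loads(model_output)  # e.g. {"tool": "submit_payment", "args": {...}}
    if call["tool"] not in TOOLS:
        raise ValueError(f"unknown tool: {call['tool']}")
    args = call["args"]
    # In production this would hit a payments API; here it only reports the action.
    return f"paid invoice {args['invoice_id']}: ${args['amount_usd']:.2f}"

print(dispatch('{"tool": "submit_payment", "args": {"invoice_id": "INV-7", "amount_usd": 1250.0}}'))
```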
By 2025, enterprise AI architecture had shifted toward Multi-Agent Systems where specialized agents collaborate: a Planner Agent breaking down strategic goals, a Research Agent gathering intelligence, a Coder Agent writing scripts, and a Compliance Agent reviewing actions--all orchestrated without human intervention.25 Agentic systems of this era increasingly employ continuous learning techniques, updating parameters in real time. An agent deployed in January 2025 may by June 2025 have developed novel strategies neither programmed nor foreseen by its human principals.26 And modern agentic systems often operate as "black boxes": even their developers cannot fully explain why specific outputs were generated.27
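The orchestration pattern lends itself to a schematic illustration. In the sketch below, each agent is stubbed as an ordinary function (in a real deployment each would wrap a model call), and the compliance agent is the only gate between planning and execution; nothing in the loop requires human intervention:

```python
# Minimal multi-agent pipeline: planner -> research -> coder -> compliance.
# Agent behaviors are stubs for illustration; all names are hypothetical.

def planner(goal: str) -> list[str]:
    return [f"research: {goal}", f"implement: {goal}", f"review: {goal}"]

def research(task: str) -> str:
    return f"findings for '{task}'"

def coder(task: str, findings: str) -> str:
    return f"script({task}) using {findings}"

def compliance(action: str) -> bool:
    return "prohibited" not in action  # stub policy check

def orchestrate(goal: str) -> list[str]:
    executed = []
    for step in planner(goal):
        artifact = coder(step, research(step))
        if compliance(artifact):  # the only gate before execution
            executed.append(artifact)
    return executed

print(orchestrate("optimize Q3 inventory holding costs"))
```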
B. Opacity: Already Accommodated by Organizational Decision-Making
The crisis narrative's first claim is that AI's "black box" nature prevents effective control and makes manifestation incoherent. But organizational decision-making has always been substantially opaque to organizational principals. Consider a large multinational corporation. The board cannot possibly understand how most operational decisions are made. Yet agency doctrine attributes contracts made by regional managers to the corporation without difficulty. The board manifested authority by creating the position; it exercises control through compensation systems and approval hierarchies; third parties rely on apparent authority created by the manager's title and position.28
The opacity does not prevent attribution because agency doctrine does not require the principal to understand the agent's decision-making process. It requires only that the principal manifested authority (structurally), exercises control (architecturally), and that third parties can reasonably rely on observable features. Algorithmic opacity is no different in kind from complex organizational decision-making's opacity. The Third Restatement's institutional frameworks handle opacity because they were designed to handle it: the framework's structural, observable-features-based analysis never required transparency into internal decision processes.
C. Autonomy: Already Accommodated by Subsidiary Corporations and Franchises
The crisis narrative's second claim is that AI autonomy breaks the control element. But substantial agent autonomy has always been compatible with agency relationships in organizational contexts.29 Subsidiary corporations operate with significant independence. Franchisees exercise substantial autonomy in running their businesses. McDonald's does not provide real-time instructions to franchisees about each hamburger sold, yet franchisors can be held liable for franchisee conduct when sufficient control has been exercised through standards, training, inspection, and operational requirements. The Third Restatement accommodates agent autonomy because it reconceived control as architectural rather than operational.
Algorithmic agents' autonomy is no different in kind from subsidiary or franchisee autonomy. The corporation deploys the AI agent and retains control through selection, configuration, approval hierarchies, monitoring, and revocation capacity--precisely the architectural control mechanisms the Third Restatement recognized for organizational agents. Autonomy within a designed structure is not freedom from control; it is the exercise of delegated discretion within institutionally set parameters.
D. Speed: Already Accommodated by Market-Making and Algorithmic Trading
The crisis narrative's third claim is that machine-speed operations prevent interim control, making the "right of control" a fiction. But human agents have long operated at speeds exceeding principals' capacity for real-time intervention, and regulatory frameworks already recognize that control over algorithmic operations is exercised architecturally rather than through real-time supervision.30
Securities exchange market-makers make split-second decisions across thousands of transactions daily. The trading firm cannot observe or approve each trade in real-time. Yet agency doctrine attributes those trades to the firm through architectural controls: risk limits, compliance systems, performance monitoring, and compensation incentives. The SEC's Market Access Rule confirms this understanding, requiring broker-dealers to implement risk management controls "reasonably designed to manage the financial, regulatory, and other risks" of algorithmic trading through pre-trade controls and post-trade surveillance. Algorithmic trading simply automates what human market-makers do; the structure of control is unchanged, only its implementation. The "right of control" was never about real-time comprehension of each agent action. It was about structural capacity to define parameters, monitor outcomes, and terminate the relationship.
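The architecture of such controls is easy to depict. The sketch below (hypothetical thresholds; not the text of any firm's actual control system) shows the ex ante, per-order character of pre-trade checks of the kind the Market Access Rule contemplates: the principal sets the parameters once, and they are enforced automatically at machine speed:

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    qty: int
    price: float

# Parameters set ex ante by the firm (hypothetical values).
MAX_ORDER_QTY = 10_000      # per-order size limit
MAX_NOTIONAL = 1_000_000.0  # per-order dollar exposure
PRICE_COLLAR = 0.05         # reject orders more than 5% from reference price

def pre_trade_check(order: Order, reference_price: float) -> bool:
    """Architectural control: enforced on every order, with no human in the loop."""
    if order.qty > MAX_ORDER_QTY:
        return False
    if order.qty * order.price > MAX_NOTIONAL:
        return False
    if abs(order.price - reference_price) / reference_price > PRICE_COLLAR:
        return False
    return True

print(pre_trade_check(Order("XYZ", 500, 101.0), reference_price=100.0))  # True
print(pre_trade_check(Order("XYZ", 500, 120.0), reference_price=100.0))  # False: outside collar
```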
E. Emergence: Already Accommodated by Complex Organizations
The crisis narrative's fourth claim is that multi-agent AI systems produce emergent outcomes that break the chain from manifestation to authorization. But complex organizations have always produced emergent outcomes through interactions among multiple agents, and agency doctrine attributed those outcomes to principals through institutional analysis.31
An automotive product recall emerges from interactions among design engineers, manufacturing specifications, supplier components, and quality control processes. No single agent intended the defect; the board could not foresee it. Yet the corporation is fully liable. The Third Restatement handles emergence through institutional manifestation: the organization manifests the institutional structure that generates patterns of conduct. When third parties interact with agents operating within that structure, apparent authority exists based on the institutional context--regardless of whether the principal specifically authorized or could have foreseen the particular outcome. Multi-agent AI systems producing emergent representations and commitments operate within this framework through the same analysis.
F. Case Study: *Moffatt v. Air Canada* as Institutional Analysis
Moffatt v. Air Canada illustrates how institutional frameworks handle what the crisis narrative treats as categorically novel AI challenges. Air Canada deployed a chatbot on its official website as a customer service channel bearing Air Canada branding. The chatbot provided information contradicting the airline's actual bereavement fare policy. Air Canada attempted to disclaim responsibility, arguing the chatbot was a separate legal entity and that the correct policy, available elsewhere on its website, should govern.32
The tribunal rejected these arguments and held Air Canada responsible, on reasoning that maps directly onto apparent authority principles. Air Canada manifested the chatbot's authority by deploying it in the customer service function on its official website with corporate indicia. Third parties would reasonably believe, based on the chatbot's position as part of the official customer service system and the custom that official customer service channels provide accurate policy information, that the chatbot had authority to make representations about company policies. The representation fell within the functional scope of deployment. The fact that the representation was incorrect did not defeat apparent authority: apparent authority is determined by reasonable third-party beliefs about scope, not by accuracy of specific content.33
Air Canada's defense--that the chatbot was a separate entity responsible for its own actions--is precisely the disclaimer the institutional framework prevents. Organizations cannot deploy AI systems in institutional roles with all the indicia of organizational authority and then disclaim responsibility when outputs prove problematic. The instrumentality doctrine means the chatbot is the airline's tool, not an independent actor. The apparent authority doctrine means the airline's institutional manifestation--deploying the chatbot in an official, branded customer service role--creates third-party reliance the airline cannot defeat by pointing to its internal architecture.
IV. Apparent Authority and the Scope of Deployment
The institutional frameworks examined in Parts II and III establish that organizations manifest authority through structural deployment decisions and that third parties reasonably rely on observable institutional features. This Part addresses the critical application question: when AI systems generate representations or execute transactions, what determines whether those outputs fall within the scope of apparent authority the deployment created? This question becomes particularly acute when AI systems generate outputs that exceed actual authority.
A. Apparent Authority's Scope: Position, Custom, and Reasonable Reliance
Section 2.03 establishes that apparent authority exists when "a third party reasonably believes the actor has authority to act on behalf of the principal and that belief is traceable to the principal's manifestations." Comment c to Section 2.03 provides the foundational framework: "If a principal places an agent in a position in the principal's business, the agent has apparent authority to do acts that third parties would reasonably believe the agent has authority to do given the agent's position."34
Custom and practice perform critical work in determining scope. Comment c explains that "custom and practice bear on whether the agent's action is within the agent's apparent authority," particularly where "written job descriptions do not exist" for positions.35 The Restatement's illustrations demonstrate this analysis. Illustration 2 involves a store manager: A has apparent authority to accept merchandise returns "if it is reasonable for T to believe, given A's position as manager and the store's practices, that A has such authority."36 Illustration 4 examines CFO authority: A lacks actual authority to sell P Corporation's assets, but has apparent authority based on what T "reasonably believes" given what "corporations of P's type" customarily delegate to CFOs.37 The principal's internal limitations on authority do not defeat apparent authority when not communicated to third parties.
The scope inquiry is multidimensional: What position did the principal place the actor in? What authority do third parties reasonably associate with that position based on custom and industry practice? What patterns of conduct has the principal permitted or acquiesced in? Apparent authority exists for acts within this reasonable scope, even if the principal did not specifically authorize those acts or internally limited the actor's authority in ways not communicated to third parties.
B. Scope of Deployment: Applying Apparent Authority to AI Systems
When organizations deploy AI systems in institutional roles, the same apparent authority framework determines scope. The analysis asks: (1) What institutional role did the principal place the AI system in? (2) What authority do third parties reasonably associate with that role based on the function performed and observable features? (3) What patterns of conduct has the principal permitted through continued deployment? (4) Given these observable features, what belief is reasonable for third parties to form about the system's authority?
This framework, applied to AI deployments, produces what this Article terms "scope-of-deployment" analysis. It is not a novel doctrine but an application of Section 2.03's position-based apparent authority to AI systems functioning as organizational instrumentalities in defined institutional roles. The Moffatt case illustrates the analysis.38 The scope-of-deployment analysis extends to other AI deployment contexts. When a corporation deploys an AI pricing system on its e-commerce platform, the system has apparent authority to quote binding prices based on its position (integrated into the official sales channel) and function (providing pricing information to potential customers). When an AI scheduling system confirms meeting times, it has apparent authority to make those commitments based on its institutional position. When an AI screening system evaluates applicants, its determinations carry apparent authority based on deployment in that function.
C. Limits on Apparent Authority: Extraordinary Outputs
Apparent authority is not unlimited. Section 2.03's reasonableness requirement means that some AI-generated outputs may exceed the scope of authority that deployment manifests. Comment c notes that apparent authority depends on reasonable belief "traceable to the principal's manifestations." Illustration 5 establishes limits: P Corporation's regional sales manager has no apparent authority to sell corporate real property because a reasonable developer would not believe such authority existed based on the position's customary scope.39
This creates a boundary-drawing framework examining: industry custom (what do third parties reasonably expect systems in this position to have authority to do?), observable integration (what do deployment features suggest about scope?), disclaimers and limitations (has the organization communicated authority limits in ways observable to third parties?), and third-party sophistication (what level of knowledge should third parties in this context reasonably have?). Three zones result: actual authority (outputs specifically authorized); apparent authority within reasonable scope (outputs within functional scope that reasonable third parties would believe authorized, even if incorrect); and extraordinary outputs outside apparent authority (acts so far outside customary scope that reasonable third parties would not believe them authorized). The existence of these limits creates corresponding monitoring duties for deploying organizations, addressed in Part V.
D. Ratification Through Continued Deployment
Section 4.01 provides that ratification occurs when "a person affirms another's act, and the act, as to some or all persons, is given effect as if it had originally been authorized." Comment b explains ratification can be implied through "accepting the benefits of the transaction or failing to repudiate it with knowledge of material facts."40
When organizations deploy AI systems, observe their outputs, and continue deployment, they ratify the outputs through continued operation with knowledge. Consider an organization that learns through complaint patterns that its customer service chatbot is making representations inconsistent with company policy. The organization need not understand the algorithmic reasons for these outputs. But it has knowledge of the material fact that matters for ratification: the system is producing problematic representations. Continued operation without modification or corrective communication constitutes ratification through continued deployment with knowledge. Section 4.01's requirements are met: affirmance of the chatbot's conduct by continuing deployment, with knowledge of material facts, manifesting intention to give validity to the conduct.
Constructive knowledge through observable outputs (complaint logs, transaction records, customer service escalations) suffices even when the organization cannot access internal decision processes. The relevant knowledge is not "why did the algorithm do this" but "what is the system doing and what effects is it producing." Willful blindness cannot serve as a defense. The ratification doctrine creates incentives for organizations to monitor AI system outputs and take corrective action when problems emerge--closing the accountability gap that opacity might otherwise create.
E. Synthesis: The Institutional Attribution Framework for AI
Synthesizing the scope-of-deployment analysis produces a coherent framework for AI attribution through apparent authority doctrine. An organization that deploys an AI system in an institutional role with corporate indicia, performing functions third parties would reasonably associate with that role based on custom and industry practice, without effective disclaimers limiting apparent authority, and with knowledge of the system's outputs through continued deployment, is bound to the system's representations and commitments that fall within the reasonable scope of that role, regardless of whether the specific outputs were internally authorized or algorithmically predictable.
This is not a special AI doctrine. It is the application of Section 2.03's position-based apparent authority combined with Section 4.01's ratification doctrine to AI systems functioning as organizational instrumentalities. The analytical work is determining the institutional role, assessing the customary scope of that role, evaluating the reasonableness of third-party reliance given observable deployment features, and examining whether continued deployment with knowledge ratifies outputs. These are standard apparent authority inquiries applied to AI deployments as institutional phenomena.
V. Fiduciary Duties and AI Oversight: Internal Governance and External Attribution
Parts III and IV addressed external attribution: when do AI-generated representations and transactions bind deploying organizations to third parties? This Part pivots to internal governance: what fiduciary duties do organizations and their directors and officers owe when deploying AI systems, and how do those duties relate to the attribution frameworks examined above? AI systems themselves cannot have fiduciary duties: loyalty, good faith, and care are attributes of persons capable of forming intentions and making moral choices, and AI systems, as instrumentalities under Section 1.04, lack these capacities.41 But the deploying organization and its human fiduciaries retain all such duties, and how they discharge them shapes both internal and external accountability.
A. Monitoring Duties: *Caremark* Applied to AI Deployments
The Caremark framework establishes that directors have a duty to ensure adequate corporate information systems exist and to monitor for known risks.42 The Delaware Supreme Court expanded these obligations in Marchand v. Barnhill, requiring active oversight of the company's most significant operational risks.43 These monitoring duties extend naturally to AI deployments. AI systems deployed in mission-critical roles present precisely the category of operational risk for which monitoring duties exist. The AI system's opacity makes monitoring of outputs more rather than less important: because boards cannot comprehend internal algorithmic processes, monitoring observable outputs and their effects becomes the primary mechanism for detecting problems.
Monitoring AI deployments is technically feasible even when internal processes are opaque. Organizations can monitor outputs (what representations is the system making?), impacts (what are customers, applicants, or counterparties experiencing?), patterns (are there systematic problems with particular types of outputs?), and exceptions (are outputs falling outside expected ranges?). These monitoring mechanisms track observable effects rather than internal processes. The Caremark duty does not require comprehension of algorithmic internals; it requires attention to what systems are doing and what effects they are producing.
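A minimal sketch (the complaint-rate threshold and record format are hypothetical) illustrates the point: effective oversight requires no access to model internals, only systematic attention to what the system is producing and flagging of pattern-level anomalies:

```python
from collections import Counter

COMPLAINT_THRESHOLD = 0.02  # hypothetical: flag categories where >2% of outputs draw complaints

def monitor(outputs: list[dict]) -> dict[str, float]:
    """Each record: {'category': str, 'complaint': bool}. Returns categories to escalate."""
    totals = Counter(o["category"] for o in outputs)
    complaints = Counter(o["category"] for o in outputs if o["complaint"])
    return {
        cat: complaints[cat] / n
        for cat, n in totals.items()
        if complaints[cat] / n > COMPLAINT_THRESHOLD
    }

logs = [
    {"category": "refunds", "complaint": True},
    {"category": "refunds", "complaint": False},
    {"category": "baggage", "complaint": False},
]
print(monitor(logs))  # {'refunds': 0.5} -> investigate and correct refund representations
```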
B. Design Duties: Ex Ante Obligations for AI Deployment Decisions
Fiduciary duties govern the deployment decision itself. Directors and officers have duties of care governing how they select, configure, and deploy AI systems.44 System selection requires informed decision-making based on reasonable investigation into the AI system's capabilities, limitations, accuracy, bias history, and fitness for the intended deployment context. Directors may rely on AI systems only if they have exercised due care in selecting, testing, and monitoring them. Blind reliance on an unverified black box without reasonable investigation falls below the care standard for high-stakes deployments affecting many third parties.
Parameter configuration and constraint design are equally important. Organizations deploying AI systems have duties to configure parameters reasonably for the deployment context. An organization deploying an AI chatbot without configuring constraints preventing representations clearly outside actual policies falls below the care standard. Design duties require that the architectural constraints shaping AI behavior be thoughtfully configured: these are the mechanisms through which principals exercise control in the AI context. Pre-deployment testing proportionate to stakes is the third component: organizations have duties to test AI systems before deployment to understand operating characteristics, identify failure modes, and assess behavior within the planned deployment context.
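A short sketch shows what constraint design can look like in operation (hypothetical topic list, routing, and disclaimer text; not any vendor's actual guardrail API): outputs outside the authorized functional scope are escalated to a human rather than delivered, which both narrows the authority the deployment manifests and helps discharge the design duty. A constraint of this kind, had Air Canada configured one, would have routed the bereavement-fare query in Moffatt to a human agent rather than letting the chatbot misstate policy.

```python
# Deployment-time guardrail keeping a chatbot inside its authorized scope.
AUTHORIZED_TOPICS = {"baggage", "check-in", "flight status"}
ESCALATION_QUEUE: list[str] = []

def guard(user_query: str, topic: str, draft_reply: str) -> str:
    if topic not in AUTHORIZED_TOPICS:
        ESCALATION_QUEUE.append(user_query)  # route to human review instead of answering
        return "Let me connect you with an agent who can help with that."
    return draft_reply + " (See our official policy page for binding terms.)"

print(guard("Do you offer bereavement fares?", "fares", "Yes, apply within 90 days."))
print(ESCALATION_QUEUE)
```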
C. Response Duties: Corrective Action When Monitoring Reveals Problems
Monitoring without response provides no protection. Once monitoring reveals problems, organizations have duties to respond promptly and appropriately.45 The response duty encompasses: investigating the source and scope of the problem; determining appropriate corrective action (technical modifications, additional constraints, corrective communications to affected parties, operational modifications requiring human review for certain output categories, or termination of deployment if the problem cannot be adequately resolved); and implementing that corrective action with speed proportionate to harm severity.
Delayed response extends the period during which outputs are ratified through continued deployment with knowledge, potentially expanding both the class of third parties affected and the scope of liability. The coupling of response duties with ratification doctrine creates strong incentives for prompt corrective action: organizations that fail to respond to known AI performance problems face both fiduciary liability for the failure and attribution liability for outputs ratified through continued deployment.
D. The Intersection: How Governance Duties Shape Attribution Liability
The monitoring duties examined in this Part intersect with the attribution frameworks of Parts III and IV in three ways that demonstrate the coherence of institutional frameworks. First, monitoring duties determine the scope of manifested authority: organizations deploying AI systems with robust monitoring, clear constraints, and visible oversight mechanisms manifest narrower authority than those deploying AI without observable limitations. Second, failures in monitoring duties expand attribution through ratification: organizations that fail to monitor allow problematic patterns to continue, strengthening ratification arguments. The fiduciary duty breach expands attribution liability. Third, design duties create the architectural features that satisfy control requirements: the fiduciary design duties--system selection, parameter configuration, constraint design, pre-deployment testing--are the operational implementation of architectural control.
This alignment creates consistent incentives: deploy AI with adequate governance structures, monitor outputs, respond to problems, and face attribution within reasonable scope; fail to govern adequately, and face both fiduciary liability internally and expanded attribution liability externally. The institutional framework thus creates accountability without personhood, responsibility without requiring AI to have duties, and governance structures that align internal obligations with external attribution.
E. The Non-Delegation Paradox: A Framework for AI Decision Authority
The deployment of AI creates what might be termed a "non-delegation paradox" for corporate fiduciaries. As AI analytics become superior to human intuition in certain domains, directors who fail to use AI tools might be found liable for gross negligence--just as a doctor would be negligent for failing to use diagnostic imaging now standard in the specialty. Yet corporate law prohibits directors from abdicating core oversight functions: a board that blindly relies on AI recommendations without understanding underlying logic may breach its duty of oversight. Too little AI reliance creates liability for failing to use modern tools; too much creates liability for abdication of judgment.46
The resolution lies in the design and monitoring duties examined above. Directors may rely on AI systems that they have diligently selected, appropriately configured, and adequately monitored; uncritical reliance on an unverified black box risks being treated as bad faith. The Third Restatement's institutional framework provides the conceptual structure: control is maintained through architectural design and monitoring; fiduciary duty is discharged through informed selection, constraint design, and sustained attention to AI performance. Delaware law allows directors to rely on "experts," and courts will likely treat AI systems as qualifying expert systems only if the board exercised due care in selecting, testing, and monitoring them--a standard the design and monitoring duties precisely implement.
VI. The Instrumentality Doctrine's Evolution and Areas for Development
Parts II through V demonstrated that the Third Restatement's institutional frameworks provide adequate analytical resources for AI attribution. This Part turns to three doctrinal areas where clarification would be valuable as AI deployments proliferate, then takes up implications for potential Restatement updating, the institutional approach's answer to the AI personhood debate, and the emerging problem of algorithmic entities.
A. Vendor Liability and Distributed Agency Relationships
The analysis in Parts III through V focused primarily on scenarios where organizations deploy AI systems they control. But AI deployments increasingly involve complex vendor relationships where liability allocation becomes contested. When does an AI vendor providing systems to deploying organizations create direct agency relationships with end users? When both vendor and deploying organization exercise control over different aspects of AI operations, should both bear attribution liability?47
Mobley v. Workday illustrates these questions. Workday argued it was merely a technology vendor, not an agent of the employers using its system. The court's initial determination allowing the case to proceed on agency theories reflects judicial receptivity to treating vendors performing traditional employer functions as agents for attribution purposes. Traditional agency doctrine addresses vendor relationships through apparent authority and dual agency analyses.48 The franchise cases provide doctrinal templates: franchisors manifest authority through operational integration even when franchisees have formal independence. AI vendor relationships raise further questions these analyses do not fully address.
A shared-control framework is appropriate: both vendor and deploying organization exercise control over distinct aspects of AI operations, both manifest authority to different degrees, and both should bear responsibility proportionate to their contribution to harm and ability to prevent it.49 Courts addressing AI vendor liability should consider what manifestations the vendor makes directly to third parties, what manifestations the deploying organization makes by integrating vendor systems into official processes, how control is actually divided between vendor and deployer, and which party is better positioned to prevent harm. The cheapest cost avoider principle provides useful guidance: responsibility should rest with the party best positioned to prevent harm at lowest cost, which will sometimes be the vendor (for design defects in the underlying model) and sometimes the deployer (for deployment decisions and operational context).
B. Scope-of-Deployment Boundaries for Generative AI
The second area requiring clarification involves scope boundaries for generative AI systems, which present distinctive challenges because their outputs are inherently unpredictable and their apparent capabilities can mislead users about scope.50
When organizations deploy generative AI chatbots, what scope of authority do they manifest? A customer service chatbot trained on company documentation and deployed with corporate branding manifests authority to provide information about company policies, as Moffatt established. But generative AI systems can produce plausible-sounding statements about topics outside their training, hallucinate facts with apparent confidence, and engage in extended conversations creating impressions of comprehensive knowledge. The institutional framework approach suggests that scope depends on observable deployment features. Organizations deploying generative AI with clear functional boundaries, visible disclaimers, and architectural constraints manifest narrower authority than organizations deploying general-purpose conversational AI with minimal constraints and no visible limitations.
When organizations choose to deploy generative AI in institutional contexts without effective limitations, that deployment choice manifests authority for outputs within the functional scope. Third parties cannot reasonably be expected to independently verify each AI output when the deploying organization has integrated the system into official channels without visible limitations. Deploying organizations are the cheapest cost avoiders: they can implement disclaimers, constrain system outputs, and design escalation protocols far more readily than individual users can audit AI reliability. Case law development addressing scope boundaries for generative AI--considering deployment context, observable limitations, industry custom, and third-party sophistication--will be essential as these systems proliferate.
C. Multi-Agent Systems and Distributed Manifestation
The third area requiring clarification involves multi-agent systems producing emergent outputs through interactions among specialized AI agents.51
Consider an organization deploying a multi-agent supply chain management system: a demand forecasting agent, procurement agent, inventory optimization agent, and logistics agent interact to make sourcing and delivery decisions. A supplier receives what appears to be a purchase commitment from the integrated system. Later, the organization claims no commitment was made because the commitment "emerged" from agent interactions in ways the organization did not program or authorize. The institutional framework suggests attribution should follow if the organization deployed the integrated multi-agent system in its supply chain operations with institutional indicia, third parties (suppliers) reasonably believed based on observable features that the system had authority to make procurement commitments, and the commitment fell within the scope of authority the deployment manifested. Emergence from agent interactions does not defeat attribution.
Doctrinal development would be valuable on: when multi-agent interactions produce outcomes "within scope" versus "outside scope" of deployment authority; how courts should assess reasonableness when the deploying organization itself may not have anticipated the emergent behavior; and whether deployment of systems with known emergent properties creates duties to monitor for and correct unanticipated outputs before they become established patterns on which third parties rely.
D. Implications for Restatement Development
The American Law Institute has not announced an Agency Fourth project, and the Third Restatement remains current.52 Such updating as may occur should maintain technology neutrality--making explicit how existing institutional frameworks extend to organizational instrumentalities including AI systems--without creating AI-specific doctrines that would be both unnecessary and vulnerable to rapid obsolescence.
Four targeted clarifications would serve these goals. First, updating comment e to Section 1.04 to address how organizations manifest authority when deploying instrumentalities in institutional roles--explaining that organizational decisions to deploy instrumentalities in operational roles with institutional indicia constitute manifestations by conduct that can create apparent authority. Second, adding illustrations to Section 2.03 demonstrating position-based apparent authority for organizational instrumentalities deployed in operational roles. Third, clarifying in comment f to Section 1.01 that architectural control applies when organizations deploy automated instrumentalities, making explicit that architectural mechanisms satisfy the control requirement. Fourth, addressing in comments to Section 4.01 how continued deployment of instrumentalities after knowledge of problematic outputs constitutes ratification. None of these would create new doctrine. They would clarify how existing institutional frameworks extend to organizational instrumentalities.
E. Accountability Without Personhood: The AI Governance Debate
A persistent question in AI policy discussions is whether AI systems should be granted legal personhood.53 This Article's institutional framework analysis suggests that personhood for AI is both unnecessary and undesirable for attribution purposes.
Personhood is unnecessary because institutional attribution operates successfully without requiring AI to be an "agent" under Section 1.01 or a "person" in any legal sense. Organizations deploying AI manifest authority through deployment decisions, exercise control through architectural mechanisms, and bind themselves through apparent authority and ratification. Third parties can recover from deploying organizations based on reasonable reliance on institutional manifestations. Attribution works without personhood.
Personhood is undesirable because granting AI legal personhood would likely weaken rather than strengthen accountability.54 If AI systems were "persons" capable of being "agents," organizations could argue that AI agents acted outside their authority or violated duties--potentially creating gaps in attribution that institutional frameworks currently close. The instrumentality approach combined with institutional apparent authority creates strong attribution: organizations cannot disclaim responsibility by arguing the AI "decided on its own" because AI are instrumentalities, not independent agents, and apparent authority depends on organizational manifestations, not on internal authorization.
This does not mean AI regulation is unnecessary. Organizations deploying AI may need enhanced regulatory duties regarding testing, monitoring, bias detection, impact assessment, and transparency. But these regulatory duties should run to human and organizational actors--requiring them to govern AI deployment responsibly--rather than creating AI personhood that might fragment accountability. The institutional framework provides a template: deploy AI in institutional roles with observable indicia, you manifest authority; exercise architectural control through design, you satisfy control requirements; third parties reasonably rely on your manifestations, you are bound through apparent authority; continue deployment after knowledge of problems, you ratify. Accountability without personhood, through institutions rather than innovations.
F. The Algorithmic Entity Problem
A more radical challenge to attribution architecture comes from algorithmic entities--organizational structures designed to place AI in formal control positions. Scholars have theorized that U.S. LLC statutes allow LLCs to be managed by any entity or mechanism specified in the operating agreement. In 2025, innovative actors began forming LLCs with operating agreements designating AI systems as sole managers, with human members subsequently withdrawing--leaving the AI in functional control of a legal person.55 This structure creates a potential liability shield: the "agent" acting in the market is the LLC (which is a legal person); the "mind" directing it is the AI. This arguably severs the chain of attribution to any human principal.
The algorithmic entity problem falls somewhat outside the Third Restatement's attribution framework. Courts and regulators confronting algorithmic entities should address them through organizational law (imposing duties on LLC members who create structures designed to evade accountability), fraudulent transfer doctrine, and abuse of organizational form principles, rather than through agency law reconceptualization. The institutional framework approach remains adequate for the vast majority of AI deployments: organizations that deploy AI systems they control, in institutional roles that benefit from the organization's authority, remain subject to attribution through manifestation and apparent authority analysis.
VII. Conclusion: Institutions, Not Innovations
The Restatement (Third) of Agency's twentieth anniversary arrives at a moment of technological transformation as significant as the institutional transformation it addressed. The Third Restatement equipped agency law to handle organizational principals through sophisticated reconceptualization of manifestation, control, and attribution. The computational turn--the deployment at scale of genuinely autonomous AI systems in commercial operations--tests whether that institutional sophistication provides adequate foundation for the AI age. This Article has argued that the answer is yes.
The Third Restatement's institutional frameworks provide adequate analytical resources for attributing AI-generated representations and transactions to deploying organizations without requiring categorical doctrinal innovation. The characteristics the crisis narrative identifies as novel for AI--opacity, autonomy, speed, emergence--parallel characteristics that organizational agency already accommodates through institutional frameworks. The Third Restatement developed frameworks that work for non-human principals (organizations); those same frameworks work for non-human agents (algorithms) because both require analysis at structural, systemic, institutional levels rather than psychological, interpersonal, bilateral levels.
The institutional framework approach rests on four analytical pillars. Deployment as manifestation: placing an AI system in an institutional role with corporate indicia manifests authority for acts within the customary scope of that role. Architecture as control: selecting, configuring, and monitoring an AI system constitutes control adequate to the agency relationship. Scope of deployment as authority: third parties can reasonably infer authority for functions the AI system is deployed to perform, based on the same position-and-custom analysis that determines apparent authority for human agents in institutional roles. Alignment as fiduciary function: the duty to configure constraints, monitor performance, and respond to problems discharges the fiduciary obligation's institutional coordinating function when the "agent" lacks the moral capacity for loyalty.
Three doctrinal areas warrant clarification: vendor liability in distributed AI relationships (requiring shared-control frameworks recognizing both vendor and deployer as principals); scope-of-deployment boundaries for generative AI (requiring courts to develop reasonable-reliance intuitions for systems with inherent unpredictability); and attribution for multi-agent systems producing emergent outputs (connecting emergence analysis to monitoring duties). These are questions within institutional frameworks, not categorical failures of those frameworks.
The computational turn reveals rather than creates the Third Restatement's achievement. The institutional turn was more radical than often recognized: frameworks built for non-human principals turned out, inadvertently, to fit non-human agents as well, because both rest on structural rather than psychological foundations. AI simply makes explicit what was already implicit: agency law operates structurally, systemically, and institutionally rather than psychologically, interpersonally, and bilaterally.
AI is not the first technology to challenge attribution doctrine, and it will not be the last. Each technological shift prompted concerns that existing doctrine could not handle new operational realities. Each time, doctrine adapted through institutional analysis, focusing on observable manifestations, structural control, and reasonable third-party reliance rather than attempting to map new technologies onto bilateral paradigms designed for simple cases. The Third Restatement's institutional reconceptualization provides the analytical resources for AI attribution. Courts and practitioners need not innovate; they need only recognize that AI deployments are organizational phenomena governed by organizational frameworks.
Institutions, not innovations, are the key to AI accountability.
1. The Restatement (Third) of Agency was promulgated by the American Law Institute in 2006, twenty years after the process of revision began in 1986. See Restatement (Third) of Agency, introductory note (Am. Law Inst. 2006) [hereinafter Restatement (Third)]. Reporter Deborah A. DeMott guided the project through completion. For the foundational argument that AI creates a categorical crisis for agency doctrine, see, e.g., Bryan Casey & Mark A. Lemley, Remedies for Robots, 86 U. Chi. L. Rev. 1311, 1320--26 (2019); Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J.L. & Tech. 353, 362--70 (2016); Karni A. Chagal-Feferkorn, Artificial Intelligence Liability and the AI Respondeat Superior Analogy, 48 Mitchell Hamline L. Rev. 1043, 1050--55 (2022).
2. See Chagal-Feferkorn, supra note 1, at 1067--70 (arguing that algorithmic opacity prevents meaningful principal control); Frank Pasquale, The Black Box Society 3--8 (2015) (documenting algorithmic opacity as a structural feature of commercial AI systems); Lilian Edwards & Michael Veale, Slave to the Algorithm? Why a "Right to an Explanation" Is Probably Not the Remedy You Are Looking For, 16 Duke L. & Tech. Rev. 18, 22--35 (2017).
3. See Scherer, supra note 1, at 370--75 (analyzing how AI autonomy affects the control requirement); Jack M. Balkin, The Path of Robotics Law, 6 Calif. L. Rev. Cir. 45, 55--62 (2015); Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513, 544--52 (2015).
4. See Casey & Lemley, supra note 1, at 1324--26; Ryan Abbott, The Reasonable Robot: Artificial Intelligence and the Law 89--112 (2020); Moffatt v. Air Canada, 2024 BCCRT 149 (Can. B.C. Civ. Res. Trib.) (airline held liable for chatbot misrepresentation of bereavement fare policy under apparent authority principles).
5. See Chagal-Feferkorn, supra note 1, at 1068--69 (machine-speed operations prevent meaningful human oversight); Frank Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI 15--22 (2020); Calo, supra note 3, at 530--36.
6. See Casey & Lemley, supra note 1, at 1362--72 (proposing specialized remedies for AI harms, including modified strict liability and mandatory insurance); Shawn Bayern, The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems, 19 Stan. Tech. L. Rev. 93 (2015) (examining entity-law approaches); Mark A. Lemley & Bryan Casey, You Might Be a Robot, 105 Cornell L. Rev. 287 (2020) (examining personhood implications); cf. Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231 (1992).
7. Restatement (Third), introductory note ("many agents hold positions in organizations" and organizational contexts provide the focal point). The Introductory Note explains: "the focal point for the application of agency doctrine is determining either the duties owed the organization by those holding positions within it or the consequences of interactions between actors in positions defined by one organization with individuals external to the organization or with actors who hold positions in another organization." Id.
8. Restatement (Second) of Agency scope note (Am. Law Inst. 1958). The Second Restatement excluded "special applications of the principles of agency to persons or combinations of persons concerning whom special rules exist, such as partnership and corporation law." Compare Restatement (Third) § 1.01 cmt. b (making organizational applications "the focal point for the application of agency doctrine"). See also Kristen David Adams, Blaming the Mirror: The Restatements and the Common Law, 40 Ind. L. Rev. 205, 208--12 (2007) (discussing methodological approaches to reading Restatements).
9. Restatement (Third), supra note 7.
10. See Jonathan Zittrain, The Generative Internet, 119 Harv. L. Rev. 1974, 1980--88 (2006) (examining benefits of technology-neutral legal frameworks for promoting innovation while maintaining accountability). The Third Restatement's technology-neutral institutional approach exemplifies what good legal architecture accomplishes: it builds on foundations general enough and functional enough to accommodate developments within their conceptual space without attempting to anticipate every future change.
11. Restatement (Second) of Agency, supra note 8.
12. Restatement (Third) § 1.01 cmt. b. See id. introductory note ("Organizations constitute the dominant context for agency relationships in modern commercial practice."). On the transformation of organizational forms from 1958 to 2006 that made organizational contexts paradigmatic, see Henry Hansmann & Reinier Kraakman, The End of History for Corporate Law, 89 Geo. L.J. 439, 441--45 (2001).
13. Restatement (Third), supra note 7.
14. Restatement (Third) § 1.01. The full text of Section 1.01 defines agency as "the fiduciary relationship that arises when one person (a 'principal') manifests assent to another person (an 'agent') that the agent shall act on the principal's behalf and subject to the principal's control, and the agent manifests assent or otherwise consents so to act."
15. Restatement (Third) § 1.01 cmt. f. Comment f explains: "Organizations manifest their assent by appointing that person to a position defined by the organization." The "observable connections between the individual and the organization"--position, title, function--constitute manifestation. Id. The significance is that structural appointment, not specific bilateral communication, does the doctrinal work of manifestation.
16. Id. § 1.01 cmt. f ("Organizations operate by subdividing work or activities into specific functions that are assigned to different people."). Illustration 8 to Section 1.01 demonstrates that authority flows from organizational position and assigned functions, not solely from specific bilateral communications. Id. § 1.01 cmt. f, illus. 8. Comment f further notes that when organizations create positions, they manifest authority "at a higher level of generality" than in bilateral relationships, manifesting that persons holding certain positions have certain categories of authority without necessarily foreseeing each specific exercise.
17. Restatement (Third) § 1.01 cmt. f. "Within an organization the right to control its agents is essential to the organization's ability to function, regardless of its size, structure, or degree of hierarchy or complexity." Id. This control "is often" exercised by "another agent, one holding a supervisory position" rather than directly by the principal organization. Id. Organizations additionally control agents through "incentive structures that reward the agent for achieving results" and through "assigning a specified function with a functionally descriptive title," which "tends to control activity because it manifests what types of activity are approved by the principal to all who know of the function and title, including their holder." Id.
18. Restatement (Third), supra note 13.
19. See Restatement (Second) of Agency § 8A (1958) (defining inherent agency power as "the power of an agent which is derived not from authority, apparent authority or estoppel, but solely from the agency relation and exists for the protection of persons harmed by or dealing with a servant or other agent"). The Second Restatement needed inherent agency power because it conceived manifestation too narrowly. When situations arose where fairness demanded binding the principal despite absence of specific communications, doctrine needed a residual category. See Restatement (Third) § 1.01 introductory note (explaining why the concept was eliminated).
20. Restatement (Third) § 2.01 cmt. b. The Third Restatement "does not use the concept of inherent agency power," instead covering those situations through "other doctrines, as explained specifically where relevant"--primarily through broadened apparent authority. Id. § 1.01 introductory note. Comment b to Section 2.01 explains that manifestations creating authority can be "informal, implicit, and nonspecific." They need not "use the word 'authority'" and need not "consist of words targeted specifically to a third party." Id. § 2.01 cmt. b.
21. Restatement (Third), supra note 15.
22. Restatement (Third) § 1.04(2) cmt. e. The full comment states: "A computer program is not capable of acting as a principal or an agent as defined by this Restatement. Notwithstanding terminology used in digital technology, such as 'intelligent agent,' a computer program . . . [is an] instrumentalit[y] of the person[] who use[s] [it]." Id. The instrumentality doctrine reflected foundational technological assumptions: computer programs in 2006 were predominantly deterministic, executing pre-programmed instructions without meaningful autonomy.
23. See Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 58--72 (4th ed. 2020) (defining autonomous agents as systems that perceive their environment and take actions to achieve goals); Michael Wooldridge, Intelligent Agents, in Multiagent Systems 3, 15--27 (Gerhard Weiss ed., 2d ed. 2013) (characterizing agent autonomy as operating without direct intervention). For a description of agentic AI in commercial deployment contexts, see What Is Agentic AI?, U. Cin. (June 13, 2025), https://www.uc.edu/news/articles/2025/06/what-is-agentic-ai.html.
24. The distinction is stark: generative AI creates content; agentic AI creates consequences. See Agentic AI Takes Over: 11 Shocking 2026 Predictions, Forbes (Dec. 31, 2025). Walmart's AI Super Agent deployment is documented in Walmart Deploys AI Super Agent for Supply Chain, Supply Chain Mgmt. Rev. (Jan. 15, 2025). On function calling and tool use enabling real-world consequences, see Ryan Calo & David Mintz, The New Generativity, 92 Geo. Wash. L. Rev. (forthcoming 2026).
25. On Multi-Agent Systems in enterprise contexts, see Multi-Agent Collaboration in AI: Solving Complex Problems, Kubiya (June 25, 2025), https://www.kubiya.ai/blog/multi-agent-collaboration (describing systems where "different agent roles communicate peer-to-peer" to produce outcomes through "role-based collaboration"). On emergent complexity and distributed attribution challenges, see Casey & Lemley, supra note 1, at 1330--36.
26. Francesca Rossi, Building Trust in Artificial Intelligence, 24 J. Int'l Aff. 47, 50--52 (2019) (distinguishing adaptive AI systems from deterministic automation); Scherer, supra note 1, at 365--68 (analyzing how continuous learning creates post-deployment adaptation that strains traditional authorization frameworks). For the "drift" phenomenon, see Seth Oranburg & Peter Gianiodis, The Dual-Bound Framework: Epistemic and Legitimacy Limits on Algorithmic Governance (working paper 2025).
27. On algorithmic opacity and the legal system's assumptions, see Pasquale, The Black Box Society, supra note 2, at 3--8; Edwards & Veale, supra note 2, at 22--35 (surveying technical limitations on algorithmic transparency). The Mobley v. Workday litigation involves an AI system that allegedly discriminated in ways its developers could not fully explain. Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal. filed Feb. 16, 2023).
28. On complex organizational decision-making opacity and its accommodation in attribution doctrine, see In re Wells Fargo Derivative Litig., 282 F. Supp. 3d 1074, 1090--93 (N.D. Cal. 2017) (attributing systemic misconduct arising from organizational culture to corporate defendants). See also Restatement (Third) § 2.03 cmt. c (apparent authority arises from position and custom observable to third parties, not from specific authorization of particular outcomes).
29. On subsidiary corporations and franchises as models for agent autonomy consistent with agency, see Restatement (Third) § 1.01 cmt. f (explaining that control includes the principal's right to control the agent, not necessarily constant exercise of direction). On franchisor liability under apparent agency analysis, see Miller v. McDonald's Corp., 945 P.2d 1107, 1113--15 (Or. Ct. App. 1997); Billops v. Magness Constr. Co., 391 A.2d 196, 198--99 (Del. 1978).
30. On high-frequency trading and firm-level oversight mechanisms as a model for AI control, see Yesha Yadav, The Institutional Design of Financial Markets, 166 U. Pa. L. Rev. 1, 45--48 (2017). On regulatory requirements for controls over algorithmic trading, see 17 C.F.R. § 240.15c3-5 (2010); FINRA, Regulatory Notice 16-21, at 4--5 (June 2016) (requiring registration of persons monitoring algorithmic trading strategies and emphasizing that "a firm's trading activity must always be supervised by an appropriately registered person").
31. Restatement (Third), supra note 7.
32. Moffatt v. Air Canada, 2024 BCCRT 149, ¶¶ 8--21 (Can. B.C. Civ. Res. Trib.). The tribunal's critical findings were: Air Canada placed the chatbot on its official website as a customer service tool integrated into its information system; the chatbot was presented with Air Canada branding and institutional indicia; customers would reasonably believe the chatbot had authority to provide accurate information about Air Canada policies; Air Canada provided no effective disclaimers visible to customers; and the customer reasonably relied on the chatbot's representation. See also Air Canada Chatbot Misinformation Case, CanLII Connects (Feb. 16, 2024), https://canliiconnects.org/en/commentaries/103133.
33. Am. Soc'y of Mech. Eng'rs, Inc. v. Hydrolevel Corp., 456 U.S. 556, 566 (1982) ("Under general rules of agency law, principals are liable when their agents act with apparent authority . . . . An agent who appears to have authority to make statements for his principal gives to his statements the weight of the principal's reputation."). This principle applies directly to AI deployments: the deploying organization's institutional indicia give the AI system's statements the weight of the organization's authority.
34. Restatement (Third) § 2.03 cmt. c. Comment c provides the foundational framework: "If a principal places an agent in a position in the principal's business, the agent has apparent authority to do acts that third parties would reasonably believe the agent has authority to do given the agent's position." This position-based apparent authority operates through observable structural features rather than requiring specific communications about scope. Id.
35. Restatement (Third) § 2.03 cmt. c ("custom and practice bear on whether the agent's action is within the agent's apparent authority," particularly where "written job descriptions do not exist" for executive and managerial positions). Third parties may reasonably infer authority based on what is customary for such positions in the relevant industry and on patterns of conduct the organization has permitted or acquiesced in. Id.
36. Restatement (Third) § 2.03 cmt. c, illus. 2 (store manager illustration establishing that apparent authority scope depends on what is "reasonable for T to believe, given A's position as manager and the store's practices").
37. Restatement (Third) § 2.03 cmt. c, illus. 4 (CFO illustration establishing that apparent authority exists for transactions within the customary scope of the position even where actual authority is absent, based on what "corporations of P's type" customarily delegate to chief financial officers). See also id. § 2.03 cmt. c ("Apparent authority is a consequence of a principal's conduct in holding out another as having authority . . . . Thus, [a limitation] known to the agent but not to the third party does not eliminate the apparent authority.").
38. Moffatt v. Air Canada, supra note 32.
39. Restatement (Third) § 2.03 cmt. c, illus. 5 (P Corporation's regional sales manager has no apparent authority to sell corporate real property because "T knows A's position and responsibilities" and that transaction type falls outside the customary scope of that position). The illustration establishes limits on position-based apparent authority: apparent authority does not exist when reasonable third parties familiar with the position would not expect that authority to be included within its customary scope.
40. See Restatement (Third) § 4.01(1) (defining ratification as "the affirmance of a prior act done by another, whereby the act, as to some or all persons, is given effect as if it had originally been authorized"); id. § 4.01 cmt. b (ratification can be express or implied, including through "accepting the benefits of the transaction or failing to repudiate it with knowledge of material facts"); id. § 4.01 cmt. d (ratification can occur by accepting benefits of an unauthorized transaction or by failing to repudiate after reasonable opportunity to do so).
41. Restatement (Third) § 1.04(2) cmt. e. AI systems cannot have fiduciary duties because fiduciary duty requires the capacity to form intentions and make moral choices. Cf. Restatement (Third) § 8.01 (fiduciary duty requires agents to "act loyally for the principal's benefit in all matters connected with the agency relationship"); id. § 8.08 (duty of care requires diligence in exercising delegated authority). The deploying organization and its human fiduciaries retain all such duties.
42. See In re Caremark Int'l Inc. Derivative Litig., 698 A.2d 959, 971 (Del. Ch. 1996) ("[A] director's obligation includes a duty to attempt in good faith to assure that a corporate information and reporting system . . . exists, and that failure to do so under some circumstances may, in theory at least, render a director liable for losses caused by non-compliance with applicable legal standards."); Marchand v. Barnhill, 212 A.3d 805, 822--24 (Del. 2019) (expanding Caremark to require active oversight of mission-critical risks).
43. See Marchand, 212 A.3d at 824 (emphasizing that Caremark requires monitoring of "the most critical risks" in the company's operations). The extension of Caremark monitoring duties to AI deployments is doctrinal evolution rather than innovation: AI systems deployed in mission-critical roles present precisely the category of operational risk for which monitoring duties exist.
44. See Del. Code Ann. tit. 8, § 141(e) (allowing directors to rely on experts, creating a due-diligence standard for such reliance); Abbott, supra note 4, at 65--85 (developing a "reasonable robot" standard applicable to AI deployment decisions). The parallel to medical decisions about diagnostic tools is instructive: directors deploying AI must exercise informed judgment in system selection, scaling diligence to operational stakes.
45. See In re Carvana Co. S'holders Litig., 302 A.3d 1220, 1241--44 (Del. 2023) (discussing adequate board responses to red flags about illegal conduct, requiring prompt investigation and corrective action); In re Boeing Co. Derivative Litig., No. 2019-0907, 2021 WL 4059934, at *17--19 (Del. Ch. Sept. 7, 2021) (finding red flags where the board received reports of safety concerns but failed to ensure adequate response). These cases illuminate the response duties applicable when monitoring reveals AI performance problems.
46. Oranburg & Gianiodis, supra note 26 (presenting the Dual-Bound Framework distinguishing epistemic limits--AI systems' inability to manage Knightian uncertainty--from legitimacy limits--stakeholders' rejection of decisions lacking procedural justice). The four-quadrant framework distinguishes Q1 (centralized automation for routine, low-stakes decisions), Q2 (distributed automation for high-volume, high-legitimacy decisions), Q3 (centralized judgment requiring human decision-making for strategic, high-uncertainty decisions), and Q4 (distributed deliberation requiring collective human assent for constitutional or values-based decisions). Id.
47. Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal. filed Feb. 16, 2023). The plaintiff alleged that Workday's AI-powered hiring screening system discriminated based on disability across more than 100 job applications. Workday argued it was merely a technology vendor, not an agent of the employers using its system. The court's initial determination allowing the case to proceed on agency theories reflects judicial receptivity to treating vendors performing traditional employer functions as agents for attribution purposes. See Third-Party Liability and Product Liability for AI Systems, Int'l Ass'n Privacy Profs. (Nov. 12, 2023).
48. Restatement (Third) § 3.15 (addressing dual agency where an agent acts for multiple principals regarding the same transaction, with both principals potentially bound by the agent's conduct within the scope of authority each manifested). The franchise cases provide doctrinal templates for multi-party attribution analysis. See Miller v. McDonald's Corp., 945 P.2d 1107, 1113--15 (Or. Ct. App. 1997); Billops v. Magness Constr. Co., 391 A.2d 196, 198--99 (Del. 1978) (franchisor liable where it "cloaked franchisee with all the indicia of the [franchisor's] business").
49. See The AI Vendor Blind Spot: Third-Party Tools, First-Class Risk, Risk Bus. (July 23, 2025) (noting that "the 2024 EU AI Act places legal accountability on both providers and deployers of AI systems, particularly those deemed high-risk"); European Commission Withdraws AI Liability Directive from Consideration, Int'l Ass'n Privacy Profs. (Feb. 11, 2025) (reporting withdrawal of the proposed AI Liability Directive). The EU regulatory approach of shared responsibility between providers and deployers provides comparative context for developing U.S. shared-control frameworks.
50. On scope-of-deployment boundaries for generative AI and the "cheapest cost avoider" principle, see Guido Calabresi, The Costs of Accidents 135--73 (1970) (developing the cheapest-cost-avoider principle). The deploying organization controls deployment design and can implement warnings and constraints; users cannot audit algorithmic reliability independently. Cf. Moffatt v. Air Canada, 2024 BCCRT 149 (Can. B.C. Civ. Res. Trib.) (tribunal rejecting the airline's argument that correct policy information elsewhere on its website limited the chatbot's apparent authority).
51. On multi-agent attribution and emergent outputs, see Multi-Agent Collaboration in AI, supra note 25. Attribution of emergent outcomes from multi-agent systems follows standard institutional frameworks: the manifestation is deploying the integrated system in an official role; the scope is what third parties reasonably believe that role entails; emergence affects internal operation but not external attribution. See Restatement (Third) § 2.03 cmt. c (apparent authority based on observable manifestations, not on foreseeability of specific outputs).
52. The American Law Institute has not announced an Agency Fourth project. See Restatement to the Rescue, Harv. L. Sch. Today (July 24, 2024), https://hls.harvard.edu/today/restatement-to-the-rescue/ (describing current Restatement projects). Technology-specific legal rules face rapid obsolescence. Cf. Bayern, supra note 6, at 95 (arguing that "modern business-entity law provides a surprisingly adaptable framework" for autonomous systems without requiring technology-specific rules).
53. See Legal Personhood of Potential People: AI and Embryos, 113 Calif. L. Rev. Online 104, 105--09 (2025) (concluding AI does not qualify for personhood as a natural or juridical person); Should AI Be a Legal Person?, Courting the Law (Oct. 15, 2025), https://courtingthelaw.com/2025/10/16/commentary/should-ai-be-a-legal-person-why-the-debate-exists-and-what-we-really-need-instead/ (arguing that rather than debating AI personhood, the focus should be on a "clear framework of accountability and regulation" holding developers and deployers legally answerable for harm).
54. Should AI Be a Legal Person?, supra note 53 (arguing "treating AI as a legal person today would not make it more accountable; it would give corporations a way to deflect liability by blaming the machine"). See also Artificial Intelligence and Liability, Norton Rose Fulbright (June 23, 2024), https://www.nortonrosefulbright.com/en/knowledge/publications/7052eff6 (describing the EU regulatory framework addressing AI liability through provider and deployer obligations rather than AI personhood). On the "algorithmic entity" structure and its risks, see Bayern, supra note 6, at 95--103 (analyzing LLC-based algorithmic entities and arguing entity-law frameworks must account for this development).
55. Supra note 54.