Amazon-Perplexity Is Also an Accessibility Fight

Why the Amazon-Perplexity dispute may become an ADA access test for AI agents, not just a platform-control or anti-bot fight.

A blind shopper asks her agent to compare laptop prices and check out using saved accessibility settings. The agent handles UI friction she normally has to fight through manually.

Now imagine the platform blocks that agent outright.

That is not just a product-policy dispute. It can also become an access dispute. Most coverage of the Amazon-Perplexity fight has focused on platform control and anti-bot enforcement. Fair. But if agents are becoming practical assistive interfaces, blanket bans may raise disability-rights risk, depending on implementation and available alternatives.

Concept illustration of an AI shopping agent assisting a blind user through an online checkout flow (image generated by Nano Banana 2).

What the ruling signaled

In March 2026, Judge Maxine Chesney issued a preliminary injunction limiting parts of Perplexity's Amazon-shopping flow, according to Reuters and The Verge. That was not a full merits ruling, and the legal boundaries are still being contested.

At this stage, the court appears to treat account-linked actions (login/payment) as higher-risk than public-page browsing, while leaving broader questions for later stages.

What is still unsettled: where exactly that line lands once the record is fully developed, and whether a platform can write terms broad enough to function as a de facto ban on assistive agent use.

That leaves a practical boundary:

  • Platform integrity controls for login, payment, and abuse defense.
  • Agentic access for discovery, comparison, and user assistance on public surfaces.

That boundary is fact-specific, drawn from a preliminary posture, and could change on fuller evidence or appeal. Even if this case narrows later, the same question will keep coming back: when does anti-bot policy become an access barrier?

Why the ADA frame matters

For non-disabled users, agents can look like convenience. For many disabled users, they look like usable access.

A capable agent can translate messy page structure, execute repetitive interaction steps, preserve context through broken flows, and reduce motor or cognitive burden. Example: a user with motor impairment may rely on an agent to complete repetitive form steps that are otherwise difficult in time-limited checkout flows.

In practice, that can function like assistive technology. This is an emerging legal theory for agent interfaces, not settled doctrine.

US disability law does not yet have a clean "AI agent" doctrine. But the building blocks are already there:

  • ADA Title III prohibits disability discrimination in places of public accommodation (42 U.S.C. § 12182).
  • DOJ guidance states that covered businesses' web content must be accessible (ADA.gov web guidance). Application in specific online contexts still varies by jurisdiction and case posture.
  • Courts have allowed claims where digital barriers prevent meaningful access, including the Ninth Circuit's decision in Robles v. Domino's Pizza.

So if a platform adopts a blanket "no third-party agents" rule, that policy may do more than police automation. It may also remove a practical access path for some disabled users.

Antitrust and ADA are different levers

Antitrust and disability law can point at similar behavior, but they ask different questions.

Antitrust asks whether a dominant platform is using technical and contractual controls to entrench gatekeeper power. That is the logic behind major enforcement actions like DOJ v. Google and FTC v. Amazon. Those cases matter, but they move slowly and depend on market-definition battles that can run for years.

ADA claims are narrower and often more immediate. Can a disabled person use the service on equal terms? Was a reasonable accommodation blocked? Were less restrictive alternatives available?

That is why this path may move faster in some disputes. An ADA claim does not need to prove monopoly maintenance. It needs to show denial of meaningful access. For the broader legal trajectory beyond this specific dispute, see our prediction analysis: When the Law Meets the Agent.

What a workable policy could look like

The choice is not "open bot free-for-all" versus "total lockout." A middle layer is possible.

A workable regime could include:

  1. Verifiable agent identity tied to accountable operators.
  2. Scoped permissions that separate browsing, form-fill, and checkout authority.
  3. Accessibility treatment for agents used as disability accommodations.
  4. Audit logs and rapid revocation for abuse events.
  5. Appeal process so compliant agents are not removed by opaque moderation.
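The elements above can be sketched as a minimal permission model. This is a hypothetical illustration of the structure, not any platform's actual API: the class names, scope labels, and registry shape are all invented for this sketch.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from datetime import datetime, timezone

class Scope(Enum):
    BROWSE = auto()      # public-page discovery and comparison
    FORM_FILL = auto()   # form completion short of payment
    CHECKOUT = auto()    # credentialed account and payment actions

@dataclass
class Agent:
    agent_id: str
    operator: str                # accountable operator behind the agent
    scopes: set[Scope]
    accommodation: bool = False  # registered as a disability accommodation
    revoked: bool = False

@dataclass
class Registry:
    agents: dict[str, Agent] = field(default_factory=dict)
    audit_log: list[tuple[str, str, str, bool]] = field(default_factory=list)

    def register(self, agent: Agent) -> None:
        self.agents[agent.agent_id] = agent

    def authorize(self, agent_id: str, scope: Scope) -> bool:
        """Check a scoped action and record the decision in the audit log."""
        agent = self.agents.get(agent_id)
        allowed = agent is not None and not agent.revoked and scope in agent.scopes
        self.audit_log.append((
            datetime.now(timezone.utc).isoformat(),
            agent_id,
            scope.name,
            allowed,
        ))
        return allowed

    def revoke(self, agent_id: str) -> None:
        """Rapid revocation on an abuse event; takes effect immediately."""
        if agent_id in self.agents:
            self.agents[agent_id].revoked = True
```

The point of the separation is that an agent registered for browsing and form-fill as an accommodation can be denied checkout authority without being banned outright, and every decision leaves an auditable trail.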

A fair objection is that platforms can provide first-party accessibility without allowing third-party agents. That helps, but it does not remove lock-in risk if users cannot choose tools that match specific disabilities and workflows.

The claim here is not that every third-party agent must be allowed everywhere. It is that blanket prohibitions may fail when they remove practical access and narrower alternatives exist.

Policy stress test: if a human can complete task X with standard controls, can an authenticated assistive agent complete that same task under equivalent guardrails?

That structure preserves abuse controls without letting platforms quietly define accessibility out of existence.

Why this matters for prediction, not just doctrine

This public piece is the legal framing. The harder question is where this goes next: do platforms move toward licensed accommodation lanes, or toward vertically integrated "only our assistant is allowed" models?

In the paid prediction analysis — When the Law Meets the Agent — we'll model both paths with concrete milestones: injunction language, terms-of-service changes, civil-rights litigation posture, and whether regulators start treating agent access as joint competition-and-accessibility infrastructure.

That is the actual decision point. If agent access stays purely private platform policy, exclusion can hide behind safety language. If accessibility law starts applying directly, platforms still get security controls, but they lose unilateral power to decide who gets usable access.

What to watch next

Over the next 6-12 months, five indicators will tell us where this is heading:

  • Court orders that explicitly distinguish public-page assistance from credentialed account actions.
  • Terms updates that ban third-party autonomous assistance wholesale rather than targeting specific behaviors.
  • Civil-rights groups testing agent-blocking theories under ADA frameworks.
  • Standards work on interoperable identity, permissioning, and revocation instead of closed allowlists.
  • Cross-agency framing that treats agent access as both market access and disability access.

We'll score these signals in the subscriber prediction piece, but the core point is already clear: this fight is not bots versus websites. It is about who controls the interface layer when software acts on behalf of humans. In practice, control over browsing access is becoming control over participation.