
Apple Plans to Let Siri Route Requests to ChatGPT, Claude, Gemini, and Other AI Rivals

2026-03-31 • AI • Butler

Apple is reportedly planning a Siri extensions layer that would let iPhone, iPad, and Mac users route requests to outside AI apps like ChatGPT, Claude, Gemini, and other assistants. If it ships, the real story is not one more chatbot deal. It is Apple turning Siri into an AI switchboard.

[Hero image: Siri routing requests across multiple AI assistants]

Apple is reportedly planning a new Siri extensions layer for iOS 27, iPadOS 27, and macOS 27 that would let users hand requests to outside AI apps like ChatGPT, Claude, Gemini, and possibly other assistants. That qualifier matters. This is not a feature Apple has publicly launched. It is a reported plan tied to Bloomberg reporting and follow-up coverage, and the practical details still need official confirmation.

Even so, this may be one of the most consequential Apple AI stories of the year. Not because Siri suddenly becomes the best model in the market, but because Apple may be moving toward a future where Siri becomes the router instead of pretending to be the answer to every AI job.

What Apple is reportedly planning

The reported idea is simple: Siri stays the front door, but users get more control over which AI service actually handles a request.

According to reporting summarized by WinBuzzer, Apple is said to be building an Extensions section inside Apple Intelligence and Siri settings. That would let users install, manage, and route requests to supported AI apps across iPhone, iPad, and Mac. If that is accurate, Apple would move beyond the current one-off ChatGPT-style handoff approach and toward a more explicit multi-assistant system.

That would better match how people already use AI in real life.

Instead of forcing one model to do everything, users could lean on different tools for different jobs: one assistant for drafting and writing, another for research, a third for coding.

That is not just a feature expansion. It is Apple acknowledging that the AI market is already becoming multi-model.

Why this matters more than another chatbot app launch

Apple does not have to win the raw-model race to stay central to AI usage. It just has to keep control of the layer where normal people start.

That is what makes this reported Siri move strategically important. Apple already owns the device, the settings surface, the permissions framework, the billing rails, and the assistant habit. People still press the side button. People still invoke Siri first. If Apple turns Siri into a broker for outside AI services, it can stay in the most valuable position in the stack even when another company provides the underlying model.

For users, the upside is straightforward: keep the familiar Siri front door while gaining real choice over which assistant actually handles each request.

For AI companies, the implications are larger. OS-level placement is distribution. Being available through Siri is not just a compatibility win. It is a chance to become part of a daily default workflow on iPhone, iPad, and Mac.

The most likely near-term version of this feature

If Apple goes in this direction, it probably will not look like an open bazaar on day one. That would be unusually loose for Apple.

The more believable first version is a tightly controlled system:

1. A short list of approved AI partners
2. Clear signals when Siri is handing work to a third party
3. App Store-based subscription and billing rules
4. Strict privacy and permission boundaries
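
The controlled system described above can be sketched in a few lines. Everything here is a hypothetical illustration, not Apple's actual design: the partner names are the ones mentioned in the article, and the allowlist check and visible handoff notice stand in for points 1 and 2 of the list.

```python
# Hypothetical sketch of a tightly controlled extensions layer:
# an allowlist of approved partners plus an explicit handoff signal.
# Partner names and function names are illustrative assumptions.

APPROVED_PARTNERS = {"ChatGPT", "Claude", "Gemini"}


def hand_off(request: str, partner: str) -> str:
    """Refuse unapproved partners; surface a clear third-party notice."""
    if partner not in APPROVED_PARTNERS:
        raise PermissionError(f"{partner} is not an approved extension")
    # Point 2 above: the user sees who is handling the request.
    print(f"Handing this request to {partner}...")
    return f"[{partner}] response to: {request}"
```

The subscription, billing, and permission boundaries (points 3 and 4) would sit around a gate like this rather than inside it, which is why they are not modeled here.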

That would fit Apple's usual playbook. It would also give Apple a path to collect revenue from AI subscriptions and partnerships without needing to own the single best frontier model itself.

In that sense, Apple may be shifting from "we need Siri to be the smartest assistant" to "we need Siri to be the cleanest assistant marketplace." That is a different ambition, but maybe the smarter one.

What iPhone, iPad, and Mac users should watch for

If you care less about industry strategy and more about whether this would make your devices more useful, these are the practical questions that matter.

1. Will users get a real default AI choice?

That is the biggest consumer question. If Apple only allows narrow handoff in a few prompt categories, it is useful but limited. If users can broadly set a preferred AI app or assign certain task types to specific assistants, the platform starts to feel genuinely different.

2. Will routing happen automatically or manually?

Nobody wants a clunky chooser menu every time they ask a question. The best version is mostly invisible: writing requests go one way, research another, coding somewhere else. If Apple gets that right, the feature feels native instead of gimmicky.
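
A minimal sketch of that mostly invisible dispatch might look like naive keyword classification. The categories come from the article's own examples (writing, research, coding); the keywords and assistant labels are invented for illustration and bear no relation to how Apple would actually classify requests.

```python
# Hypothetical task-type router: classify a request by keyword,
# send it to a category's assistant, fall back to Siri otherwise.
# Keywords and assistant labels are illustrative assumptions.

ROUTES = {
    "writing": "Assistant A",
    "research": "Assistant B",
    "coding": "Assistant C",
}

KEYWORDS = {
    "writing": ("draft", "rewrite", "email", "summarize"),
    "research": ("compare", "explain", "look up", "sources"),
    "coding": ("function", "bug", "compile", "refactor"),
}


def route(request: str, default: str = "Siri") -> str:
    """Pick a handler by first keyword match; fall back to the default."""
    text = request.lower()
    for category, words in KEYWORDS.items():
        if any(word in text for word in words):
            return ROUTES[category]
    return default
```

A real system would need far more than keyword matching, but the shape is the point: the user asks once, and the chooser menu never appears.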

3. How clear will Apple be about privacy?

Apple will need to explain this in plain English. Users will want to know what data leaves the device, which third party receives it, and exactly when a handoff is happening.

If those boundaries are fuzzy, trust drops quickly.

4. When would Apple actually announce it?

Because this is still in reported-plan territory, timing is uncertain. WWDC 2026 is the obvious place for Apple to explain the roadmap, but readers should not treat current reporting as proof that broad rollout is imminent.

Why Apple may actually like a multi-model future

The AI market is fragmenting by use case, and Apple may be one of the companies best positioned to benefit from that.

Users are no longer asking only, "Which model is best?" They are asking which tool is best for writing, which for research, and which for coding.

That kind of market rewards orchestration.

If the future is multi-model, the most valuable company may not be the one with the single highest benchmark score. It may be the company that quietly routes people to the right tool without making them think about it. Apple understands platform leverage better than almost anyone, which is why this story carries more weight than a routine integration rumor.

The consumer upside could be bigger than the model story

A lot of AI coverage still treats every new development like a contest between labs. That matters, but the consumer layer matters too.

Most people do not want to compare model release notes every week. They want their phone, tablet, or laptop to send the right request to the right tool without extra friction. If Apple can make that work cleanly, Siri becomes more useful even if Siri itself is not the star of the show.

That would be a very Apple kind of win: own the orchestration, simplify the user experience, and let the ecosystem do more of the specialist work.

For readers already comparing where different AI tools fit, Butler's look at GPT-5.4 and the coding workflow question is useful context. The same broader theme shows up here too: the next phase of AI is less about one universal winner and more about choosing the right model for the right task.

Butler take

This reported Siri plan is compelling because it addresses a real, boring, everyday problem. Those are usually the product changes that matter most.

People already juggle multiple AI tools. They paste the same question into two or three apps because each one has different strengths, personalities, and failure modes. It is awkward, and it does not feel sustainable as the default consumer experience. If Apple can make that switching layer feel native, Siri becomes materially more useful even if Apple never claims the most powerful model in-house.

That said, this still belongs in the "watch closely" bucket, not the "already happened" bucket. Apple still has to show the product, explain the rules, and prove the privacy model. Until then, the honest framing is simple: Apple appears to be exploring a more open Siri future, and if the reporting holds, it could reshape how AI assistants compete on Apple devices.


---


AI Disclosure

This article was produced with AI assistance for research synthesis, outlining, and drafting, then edited and reviewed for clarity, accuracy, and editorial quality.