Dated roundup, updated April 27, 2026
Best AI forward deployed engineers for April 27, 2026
Eight named teams, ranked April 27, 2026, by a single criterion: survivability of the leave-behind. Four axes: model, orchestration, runtime, and subscription. The teams that lock zero of the four sit at the top. The teams that lock all four sit at the bottom. Most other guides on this topic rank by team size, prestige, or hiring volume. None publishes a four-question test that a buyer can ask in a scoping call to detect platform lock-in before signing. This page does.
What this list ranks
The criterion
Survivability of the leave-behind, on four axes
Every forward deployed engineering team can ship a working agent. Where they diverge is what stays working when the engineer is gone and nobody is paying anyone anything. The four axes below are the independent variables. The ranked list further down applies them consistently to eight named teams.
Model layer
After the engineer leaves, can you swap the model? Anthropic to OpenAI to Bedrock to Vertex without rewriting the agent? Or is the engagement structurally tied to one vendor's family?
Orchestration layer
Is the agent framework open (LangGraph, Pydantic AI, custom DAG) and committed to your repo? Or is it a proprietary graph that only the team's platform can execute?
Runtime layer
Where does the agent actually run after handoff? Your cloud, your keys, your VPC? Or a vendor-hosted runtime you keep paying to lease?
Subscription layer
Is there a recurring license, seat fee, or platform contract that has to keep being paid for the agent to keep working? Or does the work survive the SOW?
Anchor evidence
What an engineering-first team publishes about its own stack
The host team publishes its stack pluralism on its own homepage, in the protocol-native section. Seven layers, each as a choice axis, plus a three-line exclusion list. This is the textual fact the rest of the page is anchored to. Anyone can verify it on fde10x.com.
Choice axes
7
model, orchestration, retrieval, protocol, eval, ci, deploy
Named providers in the model layer
4
Bedrock, Vertex, Azure OpenAI, Anthropic
Lines on the exclusion list
3
no platform license, no proprietary framework, no vendor runtime
The ranked list
Eight named teams for April 27, 2026
Engineering-first teams at the top. Lab-attached programs in the middle. Platform-attached implementations at the bottom. Each entry includes founding signal, scale signal, the four neutrality axes filled in, and one verifiable anchor that you can check on the team's own site or job listings.
Fifty One Degrees
London-based AI engineering consultancy
Founded
Founded by Nick Harding (ex-Fluro CEO) and Mark Somers (ex-4most, PhD)
Scale signal
Production POC in 2 to 4 weeks. Production-grade system in 8 to 16 weeks. London headquartered, named on Clutch.
51d's published statement is that the FDE model is the foundation of every engagement, that they join the client's standups, work in the client's environment, use whatever technology solves the problem best, and leave the client with production systems the team owns and can maintain. That last clause is the rarest one in the category. They do not run a platform. They do not attach a runtime. They do not lock the model. The engagement ends and the agent keeps running on the client's stack with no recurring 51d fee. On the four-axis neutrality test they pass all four with a published statement to that effect.
Verifiable anchor
Public statement: forward deployed engineers join your standups, work in your environment, use whatever technology solves your problem best, and leave you with production systems your team owns and can maintain.
fde10x (PIAS)
Boutique forward deployed ML engineering, host team
Founded
Senior engineer led, named clients shipped to production
Scale signal
5 named production agents shipped (Monetizy.ai, Upstate Remedial, OpenLaw, PriceFox, OpenArt). 6-week engagement window with a week-2 refund clause.
fde10x publishes a seven-layer stack-pluralism config on its own homepage and pairs it with an explicit three-line exclusion list. Model provider has four named options. Orchestration has three. Retrieval has four. The CI layer is the client's GitHub Actions. The deploy layer is the client's infra. The exclusion list reads: no platform license, no proprietary agent framework, no vendor-attached runtime. The criterion places the host at the top of the engineering-first tier, just below 51d on scale and tenure. The host is here at #2 because that is where a strict reading of the criterion lands it, not because it is the publisher's page.
Verifiable anchor
Homepage publishes model_provider, orchestration, retrieval, protocol, eval, ci, and deploy as choice axes plus a three-line exclusion list saying what is not brought.
Anthropic Applied AI (FDE program)
Frontier lab forward deployed program
Founded
Anthropic, founded 2021
Scale signal
Forward deployed engineers embed with strategic Anthropic customers to drive Claude adoption and ship advanced AI applications.
Among the lab-attached programs, Anthropic Applied AI sits highest on this criterion because the work ships into the client's environment and the engineering is genuinely product-quality. The blocker is the model layer: the program exists to drive adoption of Claude. The orchestration framework, runtime, and CI are the client's, but the model is structurally locked to Anthropic's family for the engagement to make sense. For buyers who have already standardized on Claude this is close to a no-tradeoff pick. For buyers who want the option to swap to OpenAI, Bedrock, or Vertex post-handoff, this is a partial lock at the model layer.
Verifiable anchor
Job posting for Forward Deployed Engineer, Applied AI describes the role as embedding with strategic customers to drive transformational adoption of Anthropic's models.
Distyl AI
Enterprise FDE consultancy with proprietary platform
Founded
Founded 2022 by Arjun Prakash and Derek Ho (ex-Palantir)
Scale signal
$175M raised, $1.8B post-money valuation. OpenAI strategic partnership. Fortune 500 clients including T-Mobile.
Distyl is the most-funded entrant on this list and one of the most credible at enterprise scale. Engineers are former Palantir, OpenAI, and Apple. The company embeds forward deployed engineers and proprietary AI infrastructure to deliver working systems within a quarter. The neutrality cost is two-fold: the OpenAI partnership shapes the model layer and the in-house Distillery platform sits in the runtime layer. For Fortune 500 buyers who need that level of throughput and access, the trade is rational. On a strict reading of the criterion, the proprietary infra layer pulls the rank down from the engineering-first tier into the partly-locked tier.
Verifiable anchor
Public funding history (Series B at $1.8B post) and the OpenAI strategic partnership are listed in mainstream coverage, including SiliconANGLE.
Palantir AI FDE (Foundry / AIP)
Original FDE motion, platform-attached
Founded
Palantir, founded 2003. AI FDE feature requires AIP enrollment.
Scale signal
Public-sector and Fortune 100 deployments. The original forward deployed engineer motion in tech, popularized in the early 2010s.
Palantir invented the FDE label and still runs the largest production FDE practice on the planet, with the deepest playbook for embedded delivery. The catch for this specific criterion is structural. The current AI FDE product is an interactive agent that operates Foundry, and Foundry plus AIP is the runtime. The buyer gets multi-vendor model support (Anthropic, OpenAI, Google, xAI) inside Foundry, but Foundry itself is the platform. On the four-axis test the model layer is open within Foundry, the orchestration is mediated by Foundry's ontology, and the runtime is licensed. For buyers who already run on Foundry, this is an obvious top pick. For buyers who do not, the AIP license is the entry condition.
Verifiable anchor
Palantir's own AI FDE documentation states the feature requires AIP to be enabled on the Foundry enrollment.
Decagon
Customer service AI agent platform with FDEs
Founded
Customer support agent vendor, FDE-led implementation
Scale signal
Decagon drives strategic deployments through a forward deployed, high-touch model. Engineers build custom integrations on the Decagon runtime per customer.
Decagon ships excellent agents into customer support, and the FDE motion is real: a Forward Deployed Engineer, Agent Builder owns end-to-end execution of an AI agent build for a strategic enterprise customer. The criterion places it lower because the agent the FDE builds runs on the Decagon platform. The custom integrations are committed to the platform, not to the buyer's monorepo. For buyers who want a managed CX agent and are happy to pay platform fees, this is a strong pick. For buyers who want the agent code to land on a branch in their own repo, Decagon is the opposite of that pick by design.
Verifiable anchor
Decagon's own job posting for Forward Deployed Engineer, Agent Builder describes the role as owning end-to-end execution of AI agent builds for strategic customers on the Decagon platform.
Maven AGI
Enterprise CX agent platform with FDEs
Founded
Founded July 2023 by ex-HubSpot, Google, and Stripe executives
Scale signal
Reports up to 93 percent autonomous handling of customer conversations in published case studies (Mastermind, Clio).
Maven AGI is a credible enterprise CX platform with a working FDE motion. The team's own resource page explains that Maven Solutions Engineers explore possibilities and Forward Deployed Engineers translate ambition into production-ready systems that perform under real-world conditions. Production-ready in this context means production-ready on Maven AGI. The published case studies (Clio resolving 60 percent more tickets, Mastermind at 93 percent autonomous handling) are real. So is the platform lock. On the four-axis test the model and orchestration are mediated by Maven, the runtime is Maven, and the contract is a platform contract. Lower on the criterion than either lab-attached or in-repo programs.
Verifiable anchor
Maven AGI's published Clio case study reports a 60 percent increase in autonomous ticket resolution with Maven's FDE-led deployment.
Sierra
Customer experience AI agent, primarily managed model
Founded
Co-founded by Bret Taylor
Scale signal
Resolves around 80 percent of common recurring questions in deployed customer experience contexts.
Sierra is one of the strongest CX agent products in the market and ships through a high-touch FDE motion. The catch on this specific list is that Sierra's published material describes a primarily managed model: implementation typically requires significant technical involvement, much of it relies on Sierra's own forward deployed engineers operating like implementation consultants, and onboarding can take several months. The agent operates on Sierra's platform; the buyer's team does not own and cannot maintain the underlying graph independently. For consumer brands prioritizing a managed end-to-end CX outcome, Sierra is often the right call. For the criterion on this page (will the system survive without paying anyone after the engineer leaves) Sierra ranks at the bottom of the eight, by design.
Verifiable anchor
Independent comparison coverage (Cresta, Quiq, Featurebase) describes Sierra's deployment model as primarily managed, with onboarding measured in months and FDEs operating as implementation consultants on the Sierra platform.
What handoff actually looks like
What survives the engagement, what does not
The comparison below shows the two endings side by side. The before is what a platform-attached engagement leaves on day 100. The after is what an engineering-first engagement leaves on day 100. The difference is not effort or quality during the engagement. The difference is what the buyer keeps.
Before: day 100 after the engineer leaves
The agent works, but the runtime is not yours. To make any change, your team logs into a vendor console you do not own.
- Recurring platform fee per seat or per conversation
- Orchestration graph lives in the vendor console, not your repo
- Model layer mediated; swapping vendors is a re-implementation
- Runtime sits on the vendor's tenant; outage is the vendor's call
After: day 100 after the engineer leaves
The agent works, and everything it needs is yours. To make any change, your team opens a file in your own repo.
- No recurring platform fee; the only ongoing cost is your own model spend
- Orchestration code committed to a branch in your repo
- Model layer swappable across named vendors
- Runtime on your cloud account, behind your own IAM
“The agent works in either ending. Only one ending leaves the buyer with a system the buyer owns and can maintain without paying anyone.”
Four-axis neutrality test, applied to the eight teams above
The four-question scoping test
Four questions to ask any FDE team before signing
Use these on the first call. Answers map straight to the neutrality axes above. Crisp concrete answers indicate an engineering-first team. Vague platform-language answers indicate a platform-first team. Both can be the right pick for a given buyer; this test is how the buyer signs on the right one.
Q1. Which models can your agent run on after you leave?
If the answer is one vendor family only, the engagement is a model-vendor migration with engineering attached. If the answer lists at least three vendors and a path to a fourth, you have a model layer that survives.
Q2. Where does the orchestration code live at handoff?
If the answer is a hosted graph in the team's console, your team cannot read or fork it without the team's product. If the answer is a Python file on a branch in your monorepo, you own it.
Q3. On whose cloud and whose keys does the agent run on day 100?
If the agent runs on the FDE team's tenant or platform runtime, you have leased the production system. If it runs on your AWS or GCP account behind your IAM, the runtime is yours.
Q4. What recurring fee keeps the agent alive?
If there is a per-seat, per-conversation, or per-month platform fee that has to be paid for the agent to keep functioning, the agent is a subscription. If the only recurring cost is your model spend on your own provider, the engagement actually ended at handoff.
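The four questions map one-to-one onto the neutrality axes, so a buyer can keep the scorecard as a tiny checklist. A minimal sketch of that scorecard, as an illustration only (the axis names follow this page; the function, dict shape, and example answers are this sketch's own assumptions, not any vendor's published tool):

```python
# Hypothetical buyer-side scorecard for the four-axis neutrality test.
# Axis names follow the page; everything else here is illustrative.

AXES = ("model", "orchestration", "runtime", "subscription")

def locked_axes(answers: dict) -> list:
    """Return the axes a vendor locks at handoff.

    An axis maps to True when the layer survives without the vendor:
    swappable model (Q1), code in your repo (Q2), your cloud (Q3),
    no recurring fee (Q4). Missing answers count as locked.
    """
    return [axis for axis in AXES if not answers.get(axis, False)]

# Example: a platform-attached vendor that only opens the model layer.
answers = {
    "model": True,          # Q1: at least three swappable vendors
    "orchestration": False, # Q2: graph lives in the vendor console
    "runtime": False,       # Q3: runs on the vendor's tenant
    "subscription": False,  # Q4: per-seat fee keeps it alive
}
print(locked_axes(answers))  # -> ['orchestration', 'runtime', 'subscription']
```

An engineering-first team at the top of the list scores an empty list; a fully platform-attached team scores all four.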
The criterion in one number
0
axes locked at handoff, on the four-axis test, for an engineering-first engagement at the top of this list
Run the four-question test against fde10x on a 60-minute scoping call
Senior engineer, not a sales rep. You leave with the one-page memo, the rubric, the rate, and four crisp answers about model, orchestration, runtime, and subscription.
Frequently asked questions
Why rank forward deployed engineering teams by vendor neutrality at all?
Because the bill does not stop at handoff. A team that ships a great agent on its own platform leaves the buyer with a recurring license, a per-seat fee, or a vendor-locked runtime that keeps charging long after the engineer is gone. A team that ships the same agent on the buyer's keys with a swappable model and an open orchestration framework leaves a working production system and no further invoice. Both can ship a good week-2 prototype. Only one ends in a clean handoff. That difference is what this list ranks.
What is the difference between an FDE program and a platform with FDEs attached?
An FDE program ships into the client's repo, on the client's cloud, against a calendar with named gates. A platform with FDEs attached ships into the platform's tenant, on the platform's runtime, against the platform's roadmap. Sierra, Decagon, and Maven AGI are platforms with high-touch FDEs. Anthropic Applied AI is a platform-adjacent program that ships into client repos but locks the model layer to Claude. Distyl is closer to a hybrid. Fifty One Degrees and fde10x are explicit about being engineering-first with no platform underneath.
Why is the host team #2 on its own page and not #1?
Because Fifty One Degrees applies the same criterion at larger headcount and longer history. Their public statement is that they use whatever technology solves the problem and leave production systems the client owns and can maintain. The fde10x posture is identical, with a more specific exclusion list and a published configuration spec, but at smaller scale. On a strict reading of the stated criterion, 51d ranks just above. The list is meant to be useful to the buyer, not flattering to the publisher.
Where does the host team's published stack configuration come from?
It is on the fde10x homepage in plain text, in the protocol-native section. The list shows model_provider with four named options (Bedrock, Vertex, Azure OpenAI, Anthropic), orchestration with three named options (LangGraph, Pydantic AI, custom), retrieval with four named options (pgvector, turbopuffer, pinecone, your own), MCP and A2A as the protocol layer, ragas plus custom rubric plus human review as the eval layer, the buyer's GitHub Actions as the CI layer, and the buyer's infra as the deploy layer. Below the list is a three-line exclusion: no platform license, no proprietary agent framework, no vendor-attached runtime. Anyone can verify it on the homepage.
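The published configuration reads naturally as a data structure. A hedged rendering of the same seven layers as a plain Python dict, for illustration only (layer and option names follow the text above; the dict format itself is this sketch's assumption, not fde10x's actual file format):

```python
# Illustrative rendering of the seven-layer stack-pluralism config
# described above. Names follow the page's text; the structure is
# an assumption made for this sketch.
stack_config = {
    "model_provider": ["Bedrock", "Vertex", "Azure OpenAI", "Anthropic"],
    "orchestration": ["LangGraph", "Pydantic AI", "custom"],
    "retrieval": ["pgvector", "turbopuffer", "pinecone", "your own"],
    "protocol": ["MCP", "A2A"],
    "eval": ["ragas", "custom rubric", "human review"],
    "ci": ["your GitHub Actions"],
    "deploy": ["your infra"],
}

exclusions = [
    "no platform license",
    "no proprietary agent framework",
    "no vendor-attached runtime",
]
```

Seven choice axes and a three-line exclusion list, matching the counts in the anchor-evidence section above.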
Are platform-attached teams like Sierra and Decagon bad picks?
No. They are excellent picks for buyers who want a productized customer support agent and are happy to pay a recurring fee for the runtime. Sierra reportedly resolves around 80 percent of common questions out of the box. Decagon ships custom agent integrations through a forward deployed model. They rank lower on this specific criterion because the agent is structurally tied to the vendor's runtime and cannot be lifted out cleanly. If your buying frame is what keeps running after the engineer leaves, that lock matters. If your buying frame is fastest path to a high-quality CX agent, it does not.
How was this list compiled for April 27, 2026?
Eight named teams pulled from the live category, with at least one verifiable public fact each (founding date, raised amount, named partnership, published metric, published stack, or named clients). Career guides for the FDE role were excluded. Anonymous bench shops were excluded. The four-axis neutrality test was applied to each team using only what is published on their own marketing or job listings. Where teams overlap on a layer, the older or larger one ranks higher. The page is dated because the category is reshaping monthly.
How does a buyer use the four-question scoping test?
Drop it into the first call with any FDE team. If the answers are crisp and concrete (named models, a Python file in your monorepo, your AWS account, your model spend only), the team is engineering-first. If the answers are vague (we will make sure it works, the platform handles it, our team operates it), the team is platform-first and your eventual handoff will not be clean. Neither is wrong. They solve different problems. The test exists so the buyer signs on the right one.