Chapter 1: AI Transformation, Legacy, and FDE

I’m about to start a new chapter in my career, and before I do, I want to take some time — within the limits of what I can legally share — to organize what I learned and felt over the last four years leading AI transformation work inside a major Korean conglomerate’s semiconductor division. This memoir is for me as much as anyone: a way of pausing for a moment, after riding the LLM wave nonstop since 2022, and turning my memories into something durable. And the place I want to start is the question that, in some form, every conversation I had during those four years eventually circled back to: why is AX (AI transformation, in the shorthand we used) so hard inside a legacy enterprise that wasn’t born out of AI, and why has the Forward Deployed Engineer (FDE) become the role I think will matter most in the years ahead?

Software engineers, individual variation notwithstanding, tend toward something like otaku temperament. The most extreme illustration is that at company dinners we often spend more time talking about technology than about the company itself. This tendency intensifies in fields that haven’t stabilized yet, where no winner has been declared, and where the pace of change is brutal — and AI is exactly that kind of field. If you’ve ever talked to anime otaku or to people who live and breathe the stock market, you know: otaku don’t stop at “knowing things.” Because they already know the field deeply, the conversation rarely lingers on simple knowledge transfer. The real subject is prediction — of the future, of possibilities. Who would win in a fight between characters A and B if the original work never had them face off? If the Iran war eventually moves CPI, how does the equity market actually react?

I had this same temperament. Even when our conversations started with what AI can do today, they always tilted toward how AI is going to remake the world, the company, the team. Anyone who has invested in deeply futuristic categories like biotech understands: this kind of prediction-talk takes a long timeline as its baseline. Back in 2022 and 2023, when I was deep in AI-native development, if you’d asked me how AI was going to change organizations and how we should prepare, I’d have given you answers like “the gradual replacement of repetitive work,” “the PM-ification of every developer,” “a future where only decision-making survives” — answers that quietly assumed a five-year-plus timeline before they paid off. This is where the first stumbling block of AX is hiding. AI engineers see the future. They have to, because LLMs and AI are unfinished technology. Trends and leading groups rotate every three months like food fads, and within them, voices like Yann LeCun’s argue about the limits of LLMs and what comes after. Because of this, the AX we imagine for an organization tends to be the big-current-of-the-river kind: maybe it doesn’t pay off this quarter, but to hold strategic ground we develop our own models, we define a structured data pipeline that lets internal knowledge accumulate in an AI-native way, we build the culture that turns data into an asset — preparation for a future that hasn’t arrived yet but whose direction is unmistakable.

Operators don’t see it that way. Their timeline isn’t five or ten years. And their reaction is, frankly, completely rational. When you don’t know what shape the company itself will be in five or ten years, the kind of long-horizon research budget that pays for “preparing for that future” is a fundamentally different concept from the budget you have to spend now in order to convert technology into revenue. And software (SW), from a legacy enterprise’s point of view, is one of the fuzziest, hardest-to-quantify investments — yet impossible to live without. It’s probably no coincidence that SW in many large legacy enterprises ends up structured as an SI (system integration) organization or as outsourced delivery. You can’t simply blame the operators for this. After all, what do we ourselves least want to hear from a stock analyst? “From a long-term perspective…” It might not be wrong, but it’s of no use to the next buy or sell decision in front of you. And so, from inside an actual AX initiative, the cognitive gap is unavoidable. The leading group of AI labs — the companies driving AI itself — are operating from a base, a level of capability, and ultimately a cost structure that a legacy enterprise simply cannot match step for step.

For instance, among the announcements Meta, Google, and Anthropic have been pushing hard, some keywords keep recurring: “X% of Claude Code is generated by AI,” “Y% of new code at Google and Meta is now written by AI,” “All code reviews at Meta are now done by AI.” Good. Honestly, also remarkable. And in my view, the right direction. But the moment you try to import these case studies into a legacy enterprise, you collide with a different set of base questions. Is our code actually managed in a proper version-control system like git? Do our developers internalize branching rules and follow real test/QA discipline? Is our codebase in a state where AI-native development is even possible — properly documented and modular? Does our organization run a real code-review culture, where reviews actually shape the work?

Then come the capability and tooling gaps. Can we freely combine the most cutting-edge AI-native tools from outside? Do our engineers have a working understanding of vibe coding? Are we in an environment where code quality and engineering tradeoffs are debated openly — and where AI can meaningfully participate in that debate?

And finally, a question that’s easy to skip but that has come back into the discourse recently: when we replace one developer’s specific task at our company, does it produce the same cost benefit that Meta gets from replacing the equivalent task? Is our AX cost-efficient on a per-token basis?
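
To make that question concrete, here is a back-of-envelope sketch. Every number in it is an illustrative assumption, not a measurement from any real organization; the point is only that the same model at the same token price can be a clear win in a frontier-style shop and a marginal one in a legacy shop, once retries and human review time are counted.

```python
# Back-of-envelope model: does routing one developer task through an LLM
# pay off at OUR cost structure? All numbers are illustrative assumptions.

def task_cost_human(hours_per_task: float, loaded_hourly_rate: float) -> float:
    """Fully loaded cost of a developer doing the task by hand."""
    return hours_per_task * loaded_hourly_rate

def task_cost_llm(
    tokens_per_attempt: int,        # prompt + completion tokens per run
    usd_per_million_tokens: float,  # blended API price
    attempts_per_task: float,       # retries until the output is usable
    review_hours: float,            # human time to verify and fix the output
    loaded_hourly_rate: float,
) -> float:
    """Cost of the same task via an LLM, including the human review tail."""
    api = tokens_per_attempt * attempts_per_task * usd_per_million_tokens / 1e6
    return api + review_hours * loaded_hourly_rate

# Frontier-style setup: clean repo, strong tests -> few retries, light review.
frontier = task_cost_llm(50_000, 10.0, 1.5, 0.25, 120.0)
# Legacy setup: messy codebase, no tests -> many retries, heavy review.
legacy = task_cost_llm(50_000, 10.0, 6.0, 2.0, 120.0)
human = task_cost_human(3.0, 120.0)

print(f"human: ${human:.2f}, frontier-LLM: ${frontier:.2f}, legacy-LLM: ${legacy:.2f}")
# human: $360.00, frontier-LLM: $30.75, legacy-LLM: $243.00
```

Under these made-up numbers the token cost is noise; the whole difference between a 10x win and a marginal one lives in the retry count and the review hours, and those are exactly the things the readiness questions above determine.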

Pull all of that together and you get this: a domain young enough that its theory, possibilities, and limits are still undefined, not yet cost-efficient as a technology, and with such a wide gap between leaders and followers that simply copy-pasting the leaders’ approach is rarely efficient — and often outright impossible. Said differently, what looks to AI engineers like the future we should be moving toward, a horizon five to ten years out, looks to operators like a present already conquered by someone else, where the question is ROI now. That’s the most acute version of the gap. Looking back over the last four years, I have to admit honestly: more of my AX work was completing the DX (digital transformation) that had been deferred or hidden or incomplete than actually building anything AI-native. Less “deploying AI” than “paving the road that AI needs in order to walk.”

You can’t, however, turn around and tell operators that AX is impossible, that AI shouldn’t be adopted. Setting aside any hierarchy or politics, this is just sound business logic. If AI in the end produces no cost advantage, why would any operator on earth adopt it? It might fly inside a research lab, but not inside a P&L-bearing business unit. Investing is no different. Imagine walking into a fund manager’s office and being told, “How could anyone predict where stocks will go? You might gain, you might lose. If you gain, we’ll share the upside; if you lose, you’ll share the downside with us.” Who is going to hand that fund their money? You need long-horizon strategists, sure, but you also need analysts and chartists and active traders who can move your account today. And this — this is exactly why FDE matters so much when a legacy enterprise actually tries to do AX.

So what is the FDE supposed to do? Slap operators awake to reality? Or somehow ship projects that aren’t realistic in the first place? If the latter were possible, that would be one extraordinary FDE — pay them whatever they want. But an FDE isn’t a god, isn’t an AGI from the future, and the chance of making the genuinely impossible possible is roughly nil. This is my first time working in a formally defined FDE position, so my view will probably evolve. But based on my time working as an internal FDE so far, the role I keep arriving at is Translator. This grows out of a fact that’s easy to forget: operators don’t actually care about the technology itself. The reason operators get aggressive about AI is that AI promises a kind of cost efficiency that didn’t previously exist. If the goal is the demonstration of technology — the “implementation” Meta or Google or Anthropic put on stage — the difficulty curve is one thing. If the goal is the achievement of the customer’s purpose — what the operator actually needs — the difficulty curve bends in surprising ways. We can’t make AI write 80% of all our code overnight the way Meta does. We can’t yet build self-evolving models grounded in our internal knowledge the way Anthropic talks about. But we can reduce the time developers spend on code work. We can stitch our fragmented internal knowledge together so that a model behaves as though it knows the company. So when what operators actually want is not “technical progress” but “the buyer’s purpose, achieved,” even AX that looked impossible suddenly opens up gaps you can wedge yourself into on their timeline.
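
In practice, “stitching fragmented internal knowledge together” usually means some flavor of retrieval-augmented prompting: index the wikis, tickets, and design docs offline, pull the few snippets relevant to a question, and put them in front of the model. A minimal sketch in Python; the index, the embedding vectors, and the prompt wording are all illustrative stand-ins, not any real internal system.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec: list[float],
             index: list[tuple[str, list[float]]],
             k: int = 3) -> list[str]:
    """index holds (snippet, embedding) pairs built offline from the
    company's scattered sources: wikis, tickets, design docs."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [snippet for snippet, _ in ranked[:k]]

def build_prompt(question: str, snippets: list[str]) -> str:
    """Prepend retrieved internal context so a generic model can answer
    as though it 'knows the company'."""
    context = "\n---\n".join(snippets)
    return (
        "Answer using only the internal context below. "
        "If the context is insufficient, say so.\n\n"
        f"[internal context]\n{context}\n\n[question]\n{question}"
    )
```

Nothing here is clever, and that is the point: the hard part in a legacy enterprise is the offline step this sketch assumes away, getting those sources into a clean, indexable state, which is exactly the deferred DX from earlier.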

This is the most important translation point — for AI-native B2B companies whose growth depends on driving AX in client organizations, and equally for internal teams whose performance is measured in shipped AX. And my prediction is that, unlike one-decade-and-out roles like the old Webmaster, FDE is going to keep mattering, because the cognitive gap around AI is already wide and still widening. Whether the work is internal AX or B2B AI sales, FDE is going to be the most important role in that translation layer.

A confession in closing: this entire piece is colored by the confirmation bias of someone whose career is now committed to FDE. I want to be honest about that.


Next post: what kind of “productivity” does the AI-driven productivity boost actually deliver — from a software perspective?