FRAMEWORK: Filter Equity

The Distribution Moat in the AI Era

The record labels survived the Compact Disc and almost survived Napster; what they did not survive was Apple's list of buyers. The AI era is running the same move, faster and larger, and the producers most exposed to the loss do not yet understand they are the producers. I call the position that outlasts them Filter Equity, and it is the only asset class I would underwrite for the rest of this decade.

April 26, 2026 · 3,610 words · 16 min read

The record labels survived the Compact Disc, and they almost survived Napster. What they did not survive was the combination of two shifts that, at the time, looked independent. Digital distribution collapsed the physical supply chain; then a hardware company from Cupertino with no music expertise, a long list of buyers already predisposed to trust it, and the willingness to sell each song for ninety-nine cents inherited the margin the labels had assumed would stay with the catalog. The labels owned the songs. Apple owned the buyers. Within a decade, the industry had settled into a shape the labels had not planned for and could not reverse — the buyer relationship had absorbed almost all of the surplus that the labels had spent a century assuming was theirs, and the asset the labels had been protecting was no longer the asset that commanded the price.

The AI era is running the same move now, faster and larger, and the producers most exposed to the loss do not yet understand they are the producers. The difference this time — the reason the transfer is about to be one of the larger wealth migrations in modern business history rather than another Craigslist-quietly-ends-classifieds cycle — is a macro condition almost no one in tech investing has spent the last decade taking seriously.

Aggregation Theory, Unretired

The easy mistake in 2026 is to treat Aggregation Theory as a 2015 artifact. It is not. It is the only frame that correctly predicts the winners of the current AI cycle, and it does so better when you take the framework literally rather than metaphorically.

Thompson's original claim had three parts. First, the internet commoditizes the supply of anything digitizable. Second, once supply is commoditized, the limiting factor in the value chain is the relationship with the user — attention, trust, default behavior. Third, whoever holds that relationship aggregates the surplus, because they can force modularity on the supply layer while remaining integrated on the demand layer. Google did this to publishers. Facebook did it to advertisers. Amazon did it to retailers. Netflix did it to studios, then to itself. Each one picked one side of a two-sided relationship and defended that side to the exclusion of everything else.

For most of the 2015–2022 period, the AI layer looked like it would break the pattern. Training a frontier model was a capital-intensive, integrated act — compute, data, algorithms, safety, serving, all coupled — and the conventional wisdom was that integration was where the value would sit. A billion dollars to train a frontier model meant the lab owned the business. Claude's differentiation meant Anthropic owned the business. The hyperscalers owned the compute, therefore the hyperscalers won. That is the frame that produced the current valuation stack.

The frame broke this quarter. Thompson noted earlier in April that in AI, marginal costs really are close to zero. What you actually have is a very large fixed cost — the training run — and then near-free replication, plus a second-order inference cost that has been collapsing at roughly the pace of the charts every investor memo now carries. Big fixed cost, near-zero marginal cost on the thing itself: these are exactly the conditions Aggregation Theory was written to describe. The value has nowhere to sit except on the demand side.

The standard rebuttal is that models are still differentiated — Claude has a voice, GPT leads on reasoning, Gemini owns multi-modal — so the supply layer is not truly commoditized. The rebuttal is wrong on a timescale the market has not yet priced. Open-weight models have already crossed the quality threshold where, for most production use cases, the output is indistinguishable from the best proprietary frontier. At that point the question is no longer which model is best. The question is who stands between the user and any model. The model becomes a component. The relationship becomes the business.

Aggregation Theory, taken literally, does not predict that the best model wins. It predicts that the party deciding which model the user reaches for wins, on a timescale faster than the models can differentiate themselves. A year of audience-building can win against a decade of compute-building. That is the uncomfortable piece for almost everyone currently employed inside a frontier lab.

Why AI Makes Supply More Supply-Like

The deeper argument — the part where most operators, including me a year ago, have been quietly wrong — is that AI is not merely commoditizing digital goods. It is commoditizing the making of digital goods, which is a different and more violent shift.

Consider the marginal unit of value on the old internet: a blog post, a SaaS tool, a mobile app. Each one took human labor to produce, which meant the supply side was always finite relative to demand. Aggregators captured value, but so did the best individual producers, because producing was genuinely hard. The creator-economy thesis rested on the idea that a human-produced supply curve had a ceiling, and so the top producers could capture scarcity rents on the part of the curve that only they could reach.

AI removes the ceiling. A single operator can now, in a weekend, produce what would have taken a three-person team two weeks in 2022. The tooling is not indistinguishable from a senior engineer — it still misses, still hallucinates, still requires taste — but the gap has narrowed to the point where the supply curve for "one more app, one more blog post, one more workflow" has effectively flattened. When the supply curve flattens, price approaches marginal cost, and the only way to earn above marginal cost is to own something that is not on the supply curve at all.

Look at the places where money is still visibly being made at full margin in 2026. Stratechery's Filter Equity — a single operator's decade of accumulated reader trust on the subject of tech strategy, valued by around fifty thousand paying subscribers at roughly three hundred dollars a year — is an asset that has survived two platform cycles, three algorithm overhauls, and the entire generative-AI wave without a visible discount. Lyn Alden's Filter Equity on macro is a comparable asset, built on a comparable cadence, earning comparably in a different sector. Doomberg's anonymous Filter Equity on energy and commodities out-earns the bank-backed research desks covering the same material, which have spent the last decade losing to a pseudonymous newsletter the way the labels lost to a hardware company from Cupertino. None of these operators sells analysis. They sell a reader's continuing decision to route attention toward a specific voice on a specific subject, delivered in a register that reader has learned to trust through receipts accumulating too slowly for any competitor to short.
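The Stratechery figures above imply a run rate worth making explicit. A minimal back-of-envelope check, taking the essay's two round numbers at face value (both are rough estimates, not audited figures):

```python
# Implied annual run rate from the essay's round figures.
# Both inputs are rough estimates quoted in the text, not audited numbers.
subscribers = 50_000       # "around fifty thousand paying subscribers"
price_per_year = 300       # "roughly three hundred dollars a year"

annual_run_rate = subscribers * price_per_year
print(f"Implied run rate: ${annual_run_rate:,}/year")  # → $15,000,000/year
```

Whatever the exact inputs, the order of magnitude is the point: a solo filter earning at full margin, with no supply curve underneath it to commoditize.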

The position has the structure of an equity claim on a cognitive asset. Each issue published at quality adds to the stake; each silent week depreciates it; acquired trust cannot be hedged, cannot be inherited, and does not transfer when the operator retires or changes subjects. It is illiquid by construction, which is precisely why it does not commoditize when the rest of the supply curve flattens. An operator who has spent five years earning a thousand units of it is in a better strategic position than the same operator with a finished product and no filter, because the filter holder can plug any commoditized supply into the reader relationship and capture the margin, while the product builder has to keep rebuilding a supply curve that AI keeps flattening underneath them. The filter was the business the whole time. The rest of it — the analysts, the terminals, the distribution infrastructure, the models, the enterprise sales motion — was scaffolding. Scaffolding commoditizes. Trust does not.

The Lyn Alden Correction

The Thompson version of the story ends roughly here, with a confident aggregation thesis and a side note about capex. The story is incomplete without a macro correction, and the clearest statement of that correction comes from Lyn Alden's framework of fiscal dominance. Alden, in June 2025, stated the AI-specific version of the conclusion this essay is building toward more directly than any paraphrase could: "Rest of World equities do well, and AI does well, but the AI gains mainly benefit the end-user with only thin profit margins (or outright unprofitable) for the companies themselves, due to so much capex." That is the conclusion. What follows is the mechanism.

Fiscal dominance, as Alden defines it, is the regime in which sovereign debt is large enough that raising interest rates increases the federal deficit by a larger dollar figure than it slows down bank lending. "And that's a pretty key transition," she writes — the Fed's reaction function, which in a normal regime disciplines credit formation, becomes structurally subordinate to the Treasury's funding calendar. Alden, bluntly: "amid fiscal dominance, almost every choice the Fed makes sucks." Monetary policy, she notes elsewhere, "is in the backseat now."

The transmission into AI capex runs in three specific moves. Move one is at the Treasury: the funding need that fiscal dominance produces forces persistent, structurally large debt issuance, which the primary-dealer system absorbs onto the balance sheets of the largest financial institutions, which are the same institutions that then extend credit to the hyperscalers. Move two is at the pricing: when sovereign duration is being suppressed by the Treasury's own bill-heavy issuance mix and by periodic reverse-repo drains, the term premium on long corporate debt compresses in sympathy, and the marginal cost of financing another training cluster sits closer to a semi-sovereign rate than to any defensible estimate of the cluster's risk-adjusted cost of equity. Move three is at the board: when every hyperscaler sees the same subsidized financing curve, the capital-allocation hurdle for overbuild collapses, because the cost of being the hyperscaler that got outbid on compute is structurally higher than the cost of building one more region that might sit at forty percent utilization for two years. Not one of the three moves requires a miscalculation by a specific actor. All three are the rational response to a regime nobody in the chain has the authority to unilaterally escape.

The ambient numbers are bigger than most technology readers carry in their heads. M2 money supply has grown from roughly $15.4 trillion at the start of 2019 to roughly $22 trillion in the most recent print, a 43% expansion in six years. CPI has risen roughly 22% from January 2020 through the most recent release. The wage-price gap has widened against the consumer every year since. These are the conditions inside which the hyperscaler capex cycle is being financed, and they are the conditions Alden has been publishing through in the fiscal-dominance essays and her monthly newsletter, which is itself one of the clearer living examples of Filter Equity in macro.
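The M2 figure is easy to verify from the essay's own round inputs (actual FRED prints will differ slightly; the CPI number is taken as given):

```python
# Percent expansion of M2 implied by the essay's round inputs,
# in trillions of dollars (start of 2019 vs. the latest print cited above).
m2_2019 = 15.4
m2_now = 22.0

expansion = (m2_now - m2_2019) / m2_2019
print(f"M2 expansion: {expansion:.0%}")  # → 43%
```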

Run the Aggregation Theory prediction through this macro filter. If fiscal dominance produces capex overbuild on the supply side of AI, and if Aggregation Theory already predicts that the supply side loses to the demand side, then the overbuild is not a standalone risk. It is an active subsidy from the compute producers to the demand owners, paid in the currency of commoditization. Every additional GPU that ships, every additional hyperscale region that goes online, puts a specific number of dollars of surplus in motion — dollars that have to land somewhere — and the only position not priced in compute is the position that owns the end-user relationship the compute is about to serve.

[Figure 1 diagram: Fiscal Issuance (Treasury · dealers) → Compute (hyperscaler capex) → Models (frontier labs) → Filter Equity (demand). The investor-memo thesis places the surplus at the compute and model layers; the surplus migrates to the demand side.]
Figure 1. Fiscal dominance subsidizes hyperscaler financing at a semi-sovereign rate, flattening the supply curve at Compute and Models. Commoditization passes the dollars through the middle of the chain; the residue settles at whoever owns the demand-side filter.

The best hedge against an AI capex cycle is not to short the compute layer. Alden has been clear that these setups can outlast any individual's liquidity. The best hedge is to hold the demand that will consume whatever the capex produces, because the demand is the only position whose price rises as the supply side over-builds.

What the Spreadsheet Actually Says

The operator case is the one I can show numbers for. The brand I operate is a portfolio of six revenue lines — education data, AI infrastructure, consumer health, single-family rentals, macro commentary, and a newsletter — with a five-year target of $500,000 in annual brand revenue, excluding rental income, drawn from software and content alone. The quarterly plan sitting underneath that target, public in the operating doc, is aggressive enough that a single capital-allocation error — overbuilding product at the expense of audience — would compound into a permanent miss.

Three things from that spreadsheet are worth quoting. First, of the six revenue lines, exactly zero depend on owning a proprietary AI model. Every AI-dependent piece of the portfolio is built on top of third-party inference the operator does not own and does not control. Second, five of the six lines have the same trailing-twelve-month limiting factor — audience size — and only the rental book is limited by something else. Third, the share of operator time budgeted to content and distribution in the current quarter is roughly 17% of the thirty-five-hour week, against roughly 50% budgeted to product work.
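Converted into hours, the budget split looks like this. A trivial sketch; the thirty-five-hour week and the 17% / 50% shares are the essay's own figures:

```python
# Weekly hours implied by the time-budget shares quoted above.
week_hours = 35
content_share = 0.17   # content and distribution
product_share = 0.50   # product work

content_hours = week_hours * content_share   # ~6 hours/week
product_hours = week_hours * product_share   # 17.5 hours/week
print(f"content: ~{content_hours:.0f} h/wk vs product: {product_hours:.1f} h/wk")
```

Roughly six hours a week on the compounding asset against seventeen and a half on the commoditizing one: that is the inversion the paragraph below calls upside down.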

The ratio is upside down. The marginal return on an additional unit of Filter Equity, at the portfolio's current scale, is roughly three times the marginal return on an additional shipped feature, because M2 has grown 43% while CPI has risen 22% and real wages are rising in fractions of a percent, and that is not a regime that rewards producing more of what is already oversupplied. It is a regime that rewards owning the demand attached to a specific, continuous, trusted identity, because attention is the one good whose supply curve actually tightens as AI makes every other supply curve flatter.

If I were starting over in April 2026, with the same capital base and the same operator constraints, I would not build a sixth product. I would double the time budget for content, cut the active product lines from five to three, and redirect the freed capital into extending the owned-surface writing cadence from monthly to weekly, at this caliber, for twelve months. The math is straightforward: each additional unit of Filter Equity published at this depth compounds into the same flywheel that Stratechery, Alden, and Doomberg have already proven pricing on, and each deferred feature-build avoids a sixth bet on a supply curve I do not control.

The counter — the one I keep interrogating — is that I am overfitting to my current moment. The Stratechery–Alden–Doomberg set is a survivorship sample of N=3, drawn from a population of would-be filter operators most of whom quit before compounding ignited; the inference that Filter Equity is the durable position may simply be the inference that winners look like winners, which is a cheaper insight than it appears. It remains possible that the next generation of models opens a product space where compute is the moat again, at least for a window, and that the right move for a solo builder is to trade against distribution for the duration of that window. I do not believe that is what happens. I am not willing to bet the portfolio against it.

Filter Equity Beats Compute

So: the bet. Filter Equity is the correct unit of analysis for a solo operator in 2026. Aggregation Theory is more load-bearing today than when Thompson wrote it a decade ago, because AI has turned the supply curve flatter than any prior commoditization cycle produced. Fiscal dominance is the macro condition that extends the distribution-over-technology regime further than most builders expect — through the end of this decade, probably longer, because the fiscal issuance that produces the capex subsidy is not a near-term political variable.

Then again. If compute scaling delivers one more step-function jump in capability — something qualitatively new, the way GPT-4 was qualitatively new versus GPT-3 — the supply side re-integrates for a window, and the owners of the best compute reclaim surplus before the market can route around them. The capability jump can outlast the attention an audience lends to a specific filter in exactly the way Alden warns a fiscal regime can outlast an individual's liquidity. A Filter Equity position built for a commoditized supply curve does not automatically survive a re-integrated one.

If that scenario lands, the Filter Equity position does not disappear, but it has to narrow. The operator picks a single subject, defends it to an extreme against the newly re-integrated supply, and accepts a smaller total addressable audience in exchange for a position the compute winners cannot aggregate around. The filter that survives is the one narrow enough that no model, however good, can replace the reader's reason to come back to this specific voice on this specific topic — the one for whom every subject-specific inflection of the AI wave produces one or two paragraphs of commentary the reader has come to depend on from exactly one source. A capability jump changes what the operator writes about. It does not change who the reader trusts to metabolize it.

The real cost of the trade, in either direction, is not the capex or the audience. It is the number of years the operator has to compound before the position matures. The distribution bet compounds over five to ten years: longer than any single product cycle, but shorter than the macro regime it depends on is likely to last. The capability jump compounds over the eighteen months between announcements, faster than any individual has ever sustainably rotated. The builders who thrive in this decade will be the ones who choose early and narrow aggressively into one or the other; the ones who attempt both end up owning neither, paying the compounding cost of each trade without accruing the compounded return.

The labels that survived the shift to digital did not survive by shipping more catalog. The ones that survived did so by accepting, earlier than their peers, that the scarce asset had migrated and repositioning for it in its new home. The AI industry is running the same migration now, on a faster curve, against a capex regime the Fed cannot discipline and a demand side nobody in the chain has taken seriously enough. The operators who pick their side first will be the ones still working in 2035. The operators who do not will be reading about them.


This is a free monthly dispatch from JJ Fisher LLC — one operator running a portfolio of software products, single-family rentals, and a macro/market-signal practice. Commentary, not investment advice. The real-estate line is excluded from the revenue target; see the public operating doc for the full math. If this was useful, subscribe here and it lands in your inbox once a month. Weekly paid tier begins 2027.
