Photo creds: Sand Timer in Morning Sunlight on Wooden Table, by Fernando Capetillo
Over the past year of reporting on AI, I've watched astronomical valuations get slapped on startups building foundational infrastructure: chipsets, core models, and research labs. But here's the thing: we're in a sandbox era, where everyone's experimenting, and the real winners won't be the ones selling the newest iteration of the shovel. They'll be the ones digging up value that people actually use.
The Bubble Is Real (But It's a Filter, Not a Crash)
Getting back to those valuations, I'll open by saying that an AI bubble is, in fact, happening. It isn't an apocalyptic one, though: I'd argue it's a market-level filter bringing us back closer to our fundamentals. Some have even gone on the record to say it; the CEO of an AI startup had the confidence to admit it [1], so hats off to you!
Let's start with the elephant in the room: there are loud whispers that we are in an AI bubble. I think Ali Ghodsi is only partially right. Yes, he's right that pouring large amounts of investment capital into research labs (Perplexity, OpenAI, Mistral) that monetize on a subscription basis sends your fund's money to work on a treadmill of unsustainable costs. But I'd argue it's simply not the wisest bet to place when you consider those labs through the lens of the traditional venture capital lifecycle.
Harking back to the dot-com era, generic e-commerce platforms died, but focused players like eBay and Amazon thrived by solving real problems. I'd argue the same logic holds firmly in today's "Industry 4.0" rise, or "AI-dustrial revolution" if you prefer. Considering that over $12B of 2024's AI funding went to "fundamental infra" [2], my position is that simple tools non-developers actually pay for will come out on top. These resource-heavy AI models certainly have their applications, but for many of the uses we're currently seeing in AI products (the application-level stuff that businesses and users tap into regularly), "rent-a-model" subscriptions are bound to be abandoned by a large portion of startup and business users.
Vertical Integration Is Eating the World
Here's the secret: no one wants to outsource their brain. Startups and enterprises alike are racing to in-house their AI stacks. Why? Control. Reliability. Cost. OpenAI's outages and eye-watering bills are a wake-up call: if your product depends entirely on a third-party LLM/GenAI model, you're playing Jenga with your valuation.
But let's be real: most companies won't build their own chips. What they will do is fine-tune open-source models (Llama, Mistral, DeepSeek) on proprietary data to solve niche problems. Think healthcare diagnostics, legal contract review, or logistics optimization. These aren't "AI companies"; they're vertical SaaS tools with AI under the hood. And they're thriving because they own their workflows.
Much of the secret sauce for AI stands on a foundation of open-source technologies that are free to use thanks to permissive licensing that's genuinely meant to allow customization and commercialization. Historically, this idea of privatizing models is a very "new" thing, and it steps outside of common practice. Furthermore, the entrance of players like DeepSeek into the space (minus some of the model's political opinions) marks a direct threat to the privatized research labs' perceived supremacy. Big money + broad vision ≠ best results. Read more on open-source benefits and tradeoffs [3].
But from what I've lived, seen, and heard, almost everyone ends up doing some amount of internal R&D on custom models. As the technology behind an individual AI product matures, its customization needs become so niche that, in many cases, the larger public models get dropped in exchange for vertical integration: in-housing the core tech becomes a priority for serious founders. Startups paying $20K/month for API access to platforms like OpenAI will eventually ask: "Why not fine-tune an open-source model for major cost reductions?"
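To make that question concrete, here's a minimal sketch of what "fine-tune an open-source model" can look like in practice, assuming the Hugging Face transformers and peft libraries and a LoRA-style adapter. The base model name and hyperparameters below are illustrative placeholders, not a recommendation.

```python
# A minimal sketch, not a production recipe: attach a LoRA adapter to an
# open-weight model with Hugging Face's transformers + peft so that only a
# small fraction of parameters gets trained on your proprietary data.
# Model name and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "mistralai/Mistral-7B-v0.1"  # any open-weight base model you're licensed to use

tokenizer = AutoTokenizer.from_pretrained(BASE)   # used later to tokenize your dataset
model = AutoModelForCausalLM.from_pretrained(BASE)

lora_config = LoraConfig(
    r=16,                                  # low adapter rank keeps training cheap
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # adapt attention projections only
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # typically well under 1% of the base model

# From here: run a standard training loop (e.g. transformers.Trainer) over your
# domain-specific dataset, then serve the small adapter alongside the frozen
# base model instead of paying per-token API fees.
```

The exact libraries matter less than the economics: once the adapter is trained, serving costs are compute you control rather than a per-token bill that scales with someone else's pricing.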
The future belongs to startups that layer domain expertise on top of commoditized infrastructure. From what I'm seeing, your allocated capital might see more movement (up or down) when it carries a bias away from core infrastructure ventures.
My point here isn't to steer entirely away from core tech; it's rather to flag that investors might enjoy longer-term benefits from skewing their positions toward product companies rather than core technology (which people eventually vertically integrate and in-house).
Here's a bit of data to substantiate this: over $12 billion of the roughly $97 billion invested in AI in 2024 went to "fundamental infra" startups rather than to product companies selling to non-developers [4].
Humble sidenote: I can't write this much about LLMs and the past waves of GenAI without expressing gratitude for what the collective research efforts of the past have yielded us. Sincerely, thank you.
What's the Investment Takeaway from Vertical Inevitability? The Caterpillars Will Grow Wings (If They Evolve)
Let's address the elephant's cousin: yes, many AI startups today look like "LLM + SaaS UI" toys. Critics call them amateurish. But guess what? These "MicroSaaS" ventures are the caterpillar stage of AI implementations. They're cheap to build, fast to market, and perfect for testing demand. Take Tome (AI presentations) or Character.AI (custom chatbots). Their stacks are simple, but they're gaining market share today while figuring out their moats. Will they get copied? Absolutely. But survival depends on what they do next with their initial burst of momentum: reinvesting revenue into proprietary data, custom models, or workflow integrations that competitors can't replicate.
REMINDER: These startups and MicroSaaS ventures are the caterpillar stage of AI products as we know them. They've got a short chrysalis stage ahead before becoming full-grown creatures of their own breed.
Our current sandbox era rewards speed. Startups that cling to outsourced LLMs will commoditize themselves into oblivion. Those that evolve (by verticalizing, owning their tech, and solving messy real-world problems) will graduate to the next phase.
Are these caterpillar SaaS products a way to gain market share while securing a runway to better understand the problem space? Absolutely. It's the founders' responsibility from that point onward to reinvest in building differentiation. Remember, the law of the jungle still applies: with more growth, the product (if it survives) will mature and evolve into a later phase.
My current inclination is to stay bullish on product companies. The sandbox era is chaotic, but it's where the future gets built. Time to invest like it.
Here are some markers I'm watching in this market's startups:
Bets on vertical workflows, not horizontal infrastructure. Startups with industry-specific datasets (e.g., AI for biotech labs, not another chatbot API).
Founders building with bricks, not with concrete. I'm watching for tools and products designed to help clients operationalize open-source models (fine-tuning platforms, evaluation tools) and bring them into production environments.
Founders with a relentless focus on capital efficiency. The best AI startups I've seen lately are respecting their fundamentals:
Bootstrapping niche tools;
Scaling with revenue;
THEN scaling with VC cash.
The future continues to be weird, yes, but it's also brimming with possibilities.
Sam from The AI Product Report
Want to talk about this further? Need more customized help on the matter? Email me at [email protected]; don't be shy, let's talk.

I know there are a LOT of other AI product topics to cover, like feedback loops and ethics, so let me know if that's something you want to see discussed!
Here's an anonymous channel for you to send me your thoughts if the comments section isn't for you!
