Financial Models Are Now Commoditized. Thinking Is Not.
I’ve been reading more about how AI and large language models can now build complete financial models, work that once required entire teams of analysts.
The common refrain in the comments is that the only real bottleneck left is interpretation: the ability to choose the right assumptions and trends.
But that’s precisely the point—very few people, even at senior levels, consistently extract the right assumptions from the noise. Corporate access, data feeds, and institutional tools only matter if the mind using them knows what to prioritize.
Take Cassandra Unchained, for example: a sharp thinker who wrote nearly 10,000 words questioning whether Palantir’s revenue should be viewed through the lens of military-industrial contract constraints, and therefore whether the stock deserves a premium multiple.
That’s a nuanced conclusion built from abundant data.
So if AI hands everyone sophisticated models, the true edge lies in the feedback loop of the individual interpreting them, and the reality is that very few people deliberately sharpen that loop.
When it comes to long-term fundamentals, the signal often becomes ambiguous, and discernment—not computation—remains the scarce resource.


