Last updated on 22/06/2025
Stop me if you’ve heard this one before.
A plucky corporation, headed by bright and relatively charismatic scientists, is about to change the world forever. They’ve built a system capable of reasoning through problems once thought exclusively the domain of human cognition. Industry, academia, and defense are on the cusp of transformation. Every technologist in the room has clenched their collective asshole.
The year is 1984, and Doug Lenat has just introduced Cyc, a bold attempt to encode the common sense of the world into a giant logic-based knowledge base.
Or maybe it’s 2011, and we’re all watching IBM Watson wipe the floor with Jeopardy! champions. Its corporate owners have convinced us all that success on the same scale in medical diagnostics and enterprise search is just around the corner.
Or perhaps it’s 1956, and Logic Theorist, built by Newell, Simon, and Shaw, is proving mathematical theorems from first principles, launching what we’d come to call the field of artificial intelligence.
We’ve been here before. Several times. This is not to say today’s breakthroughs aren’t impressive (they are), but calm the hell down. This scale of breakthrough isn’t unprecedented, even in my lifetime. LLMs aren’t going to disrupt every job and every business overnight, and all evidence indicates they’re going to follow a trajectory so tried and true that we have a name for it.
Nth Verse, Same as the First
Right now, GenAI feels urgent. Hype cycles feed on that urgency. If you’re unfamiliar, the Gartner Hype Cycle is the FOMO machine that basically drives tech’s emotional rollercoaster: first the Innovation Trigger, then the Peak of Inflated Expectations (“This changes everything!”), followed by the Trough of Disillusionment (“Oh no, this isn’t turnkey!”), and finally a slow climb toward the Plateau of Productivity, where it quietly does one or two useful things and only gets mentioned on Hacker News when someone launches a side project.
Right now, I think we’re close to that peak. The headlines are breathless. The demos are magical. The pitch decks write themselves. I’ve lost colleagues to startups that can’t quite explain what it is they’re actually offering.
The thing is, if you’ve been in tech long enough, you start to see a pattern: bold claims, jaw-dropping demos, the promise of generality… followed by the soul-crushing, wallet-shattering realization that everything useful still takes real engineering work.
This piece isn’t about dismissing generative AI, much as I might like it to be. The market has spoken, and one voice in the wilderness won’t change which way the wind is blowing.
Instead, it’s about equipping you, whether you’re a developer, leader, or just GenAI-curious, with a little context. I keep hearing that’s the thing I need to add to my LLMs to get them to behave in the way everyone claims they will. Every technology revolution looks obvious in hindsight, and by looking at similar events in our industry’s past, maybe we can better understand the current moment.
We’ve Seen This Before
The Symbolic Dream
In the early days of AI, intelligence was treated as a logic puzzle. If we could just write down all the rules, the machine would reason like we do.
And to be fair, that wasn’t entirely foolish. Programs like MYCIN in the ’70s outperformed human doctors on narrow medical tasks. SHRDLU, built by Terry Winograd, could interpret natural language and manipulate virtual blocks in response. It felt… alive.
These systems were impressive. Fragile, sure, but impressive. They broke the moment you pushed them out of their sandbox. Stray more than a little off the demo script and they fell over.
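If you’ve never seen one of these systems up close, here’s a toy sketch in Python of the basic idea (nothing like MYCIN’s or SHRDLU’s actual machinery, just an illustration): hand-written facts and rules, forward-chained until nothing new falls out. It works beautifully, right up until you ask about something nobody wrote a rule for.

```python
# Toy symbolic reasoning: facts are strings, rules pair premises with a conclusion.
facts = {"socrates is a man"}
rules = [
    ({"socrates is a man"}, "socrates is mortal"),
    ({"socrates is mortal"}, "socrates will die"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

known = forward_chain(facts, rules)
print("socrates will die" in known)  # True: the rules chain together nicely
print("plato is mortal" in known)    # False: step off script and it knows nothing
```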
Still, ambition ran high. Marvin Minsky’s lab at MIT famously assigned machine vision to students as a summer project, confident the hard parts would yield quickly. The tools were promising. The researchers were brilliant. The grad student labor was cheap, and the path seemed clear.
It was in this environment that our protagonist, Doug Lenat, entered the scene.
Enter Cycorp
In 1984, Lenat launched the Cyc project. Its mission: encode the entirety of human common sense into a formal logic-based ontology. Not just facts, but reasoning, defaults, context, contradiction handling. If a child could infer it, Cyc should be able to, too.
That might sound nuts to you, or it might sound familiar given the current talk about LLMs and pre-trained transformers. It didn’t sound nuts at the time, though. Reasonable people, giants of research, thought this was a sensible path forward.
Nor was it an engineering feat without precedent, even for Lenat, who had previously built Eurisko, a system that seemed to invent new heuristics for itself. He drew on work like Newell and Simon’s General Problem Solver, McCarthy’s Advice Taker, and Feigenbaum’s expert systems. And, standing on the shoulders of those giants, he aimed higher: Cyc would tackle not special-purpose reasoning, but general reasoning.
For a while, it looked like it might work. Cyc had funding from DARPA. It positioned itself not as flash, but rigor. They were the sober ones. The principled ones. They made incremental progress and celebrated each new milestone, until they went quiet. Suddenly, they’d been obviously doomed to failure all along.
Cyc and the Hype Cycle
Cyc rode the hype cycle like it was on rails.
Innovation Trigger
In 1984, Douglas Lenat launched the Cyc project, aiming to encode all of human common sense into a machine-readable format. Unlike earlier expert systems confined to specific domains, Cyc’s ambition was universal: capture “all common sense knowledge in all domains that humans have ever common-sensed.” (If you want the longer, weirder, and frankly much more interesting version of this story, go read Yuxi Liu’s essay on Cyc. It’s excellent, and will absolutely derail your next 20 minutes.)
Peak of Inflated Expectations
By the mid-1990s, Cyc had amassed a vast knowledge base, and its potential applications seemed boundless. Lenat envisioned Cyc as the foundation for a future AGI system, capable of reasoning across diverse domains. The project’s scope expanded dramatically, with the number of required rules growing from an initial estimate of 1 million to 4 million, reflecting the complexities of human knowledge and ambiguity.
Trough of Disillusionment
Despite its grand vision, Cyc faced significant challenges. Encoding knowledge manually proved to be slow, brittle, and expensive. The system’s reliance on hand-crafted rules made it difficult to scale and adapt to new information. Every mad dash to encode new knowledge enhanced the system, but it still fell short of the promise.
As machine learning and statistical approaches gained prominence, Cyc’s symbolic methodology began to appear outdated. Critics pointed out that Cyc couldn’t learn by itself; all knowledge had to be painstakingly entered by developers.
Slope of Enlightenment (Sort of)
Cyc never entirely disappeared. It found niche applications in areas like healthcare and military projects, where its structured knowledge base could be leveraged effectively. However, its broader vision of achieving general intelligence remained unfulfilled. The project’s legacy lies in its influence on ontology design and knowledge representation, serving as a cautionary tale about the limits of hand-coded AI systems.
Now I ask you, sincerely, does any of this sound familiar? History rarely repeats, but it often rhymes…
Progress is Real, But It’s Not Free
It’s easy to forget how hard-won modern AI progress has been. In 2015, DeepMind released a landmark paper showing how a deep reinforcement learning system could learn to play Atari games from pixels alone. This was, in technical terms, a big freaking deal. Previous systems had access to game memory. This one learned from what the screen looked like, same as a human.
As I said, that’s a big fucking deal!

But it didn’t get there on vibes. It took:
- Careful reward shaping
- Recurrent models to add memory
- Compression tricks to handle high-dimensional inputs (see the sketch below)
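To give a flavor of that last bullet, here’s a minimal sketch (Python with numpy, heavily simplified from the published pipeline) of the kind of pixel preprocessing the Atari work leaned on: grayscale, downsample, and stack the last few frames so a network with no memory of its own can still infer motion.

```python
import numpy as np
from collections import deque

def preprocess(frame_rgb: np.ndarray) -> np.ndarray:
    """Collapse a 210x160x3 Atari frame into a small grayscale image."""
    gray = frame_rgb.mean(axis=2)            # crude grayscale
    return gray[::2, ::2].astype(np.uint8)   # 2x downsample by striding

class FrameStack:
    """Keep the last k processed frames; the stack is the agent's 'state'."""
    def __init__(self, k: int = 4):
        self.frames = deque(maxlen=k)

    def push(self, frame_rgb: np.ndarray) -> np.ndarray:
        small = preprocess(frame_rgb)
        if not self.frames:                   # first frame: pad the whole stack
            self.frames.extend([small] * self.frames.maxlen)
        else:
            self.frames.append(small)
        return np.stack(self.frames, axis=0)  # shape (k, 105, 80): the net's input

stack = FrameStack()
state = stack.push(np.zeros((210, 160, 3), dtype=np.uint8))
print(state.shape)  # (4, 105, 80)
```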
Progress was real, but nonlinear, expensive, and brittle. Every leap required major engineering insight, research investment, and iteration. And each leap required more than the last. That’s research in a nutshell. Small ratcheting advancements and the occasional lurch forward.
Today’s tools are better. They are more robust, more flexible, and undeniably more impressive in demos. They have to be. It’s been a decade since we started laying the groundwork for modern large-scale reinforcement learning.
The thing is though, as impressive as they are, they’re not magic. They’ve come an incredibly long way from “we can play Pitfall now,” absolutely. However, they still fail in weird ways. They need careful deployment. Their generality is bounded, and narrow applications still dominate real-world use.
We aren’t anywhere close to “the beginning” of investment in this type of technology. We’re deep in the throes of diminishing returns, and barring some breakthrough in the underlying technology, all we can do is throw more process around the outputs of LLMs and more hardware at the problem, buying ever larger context windows and parameter counts.
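What does “process around the outputs” look like in practice? Usually something like this: a minimal sketch, assuming a hypothetical call_llm function standing in for whatever model API you’re actually using. Parse, validate, retry. None of it is glamorous, and all of it is ordinary engineering.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your actual model API call."""
    raise NotImplementedError

def structured_answer(prompt: str, required_keys: set[str], max_retries: int = 3) -> dict:
    """Ask for JSON, check it, and retry with feedback when the model flubs it."""
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue                          # not even JSON; just try again
        if not isinstance(parsed, dict):
            continue                          # valid JSON, but not an object
        missing = required_keys - parsed.keys()
        if not missing:
            return parsed                     # output passed our checks
        # Feed the failure back to the model and retry
        prompt += f"\n\nYour last answer was missing keys: {sorted(missing)}"
    raise RuntimeError(f"no valid response after {max_retries} attempts")
```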
There are theoretical limits to how far we can go by adding more hardware, and each step down that road prices out another group of eager researchers and hobbyists, ultimately slowing progress overall. These tools may be the worst they’ll ever be, but I think they’re also nearing a ceiling that will hold until the next big breakthrough.
Panic, Passivity, and the Middle Path
So how should we react to GenAI?
Don’t panic. Yes, some organizations are throwing money at LLMs hoping to ride the wave. They build brittle prototypes, burn out talent, and get disillusioned when the return isn’t immediate. Big, world-changing technologies are long-lived. It’s a marathon, not a sprint, folks.
But don’t freeze, either. Skepticism is warranted, but complete inaction carries its own risks. Remember how long many dismissed statistical learning, even as it quietly started to outperform symbolic systems in the wild?
There is a middle path: skeptical pragmatism.
Start from problems, not press releases. Build pilots, not platforms. Be curious before being committed.
You don’t have to believe in AGI to believe GenAI is useful. And you don’t have to join the cult to learn how the tool works.
How to Ride the Cycle Without Getting Burned
AI has been “revolutionary” before.
It was revolutionary in 1956, when Logic Theorist mimicked human reasoning.
It was revolutionary in 1984, when Cyc promised artificial common sense.
It was revolutionary in 2011, when Watson conquered Jeopardy!
It was revolutionary in 2015, when RL agents learned to play games from raw pixels.
And it’s revolutionary now.
But most revolutions don’t end in glory or catastrophe. They end in infrastructure. In things we come to rely on without thinking twice. That’s not failure. That’s how technology actually lands.
So, how can you take advantage of the current enthusiasm without overcommitting and touching the hot stove?
- Start with a real problem. Don’t ask “How do we use GenAI?” Ask “What do we need to solve?” Cyc, Watson, and a dozen other systems you’ve never heard of failed to deliver on their potential in part because they chased generality without a grounding use case.
- Build pilots instead of platforms. Domain-specific prototypes give feedback faster. Cyc tried to be universal from day one, and subsequently drowned in edge cases.
- Disregard FOMO, acquire fluency. Learn what GenAI is good for. Understand the tradeoffs. Stay curious, but stay skeptical. Especially when someone tells you “this changes everything.”

Still Here, Still Shipping
AI has always overpromised. Sometimes it underdelivers too. More often though, once the buzz fades and the headlines move on, what’s left standing becomes part of the infrastructure. We talk about constraint solving, optimization, and machine learning as if they were never sub-fields of AI, because they’ve become products in their own right.
Cyc didn’t give us AGI. But it helped clarify what intelligence isn’t. Watson didn’t revolutionize medicine, but it pushed the industry toward language-first systems that models like GPT now power. Even the misses move us forward.
You don’t need to predict the future to make progress. You need focus, fluency, and a clear view of what’s worth building right now.
That’s what we help our partners do. We work with teams to cut through the noise, identify what matters, and deliver software that actually works—at scale, under constraint, and in the real world.
Because in the long run, it’s not the hype that wins.
It’s what you ship.