I started working for PowerSurge Technologies, Inc. in the 1995-1996 school year. I was in high school. The next year I left, got my GED, and was invited to become the CEO. I advertised JMac’s CGI Archive and JMacLabs.com in my high school yearbook. It was a different time.
That sounds ridiculous now, and it was a little ridiculous then. The internet was still niche enough that this did not read as a normal teenage activity. It read as a very particular kind of obsession. But that is part of why I keep coming back to the dot-com era when I think about AI. Before the market got truly insane, before analysts learned the vocabulary of internet inevitability, before the average person had decided this was going to matter, there was already a class of people building as if the future had plainly arrived and everyone else was just late.
I was one of them.
The dot-com era was not clean or coherent. It was exhilarating, sloppy, useful, destructive, and full of people who were directionally right long before they had any idea how to turn that insight into durable economics.
That also sounds familiar.
Mania Arrives in Layers
One thing I want to be careful about in this series is time. Historical analogies get stupid quickly when people flatten chronology.
If you look at the broad market, later academic work makes a persuasive case that the real aggregate dot-com bubble did not fully show up until roughly 1998 through early 2000. That matters. It keeps us honest. But if you were inside the early technical subculture, the feeling of mania started well before that.
When I first got online in any meaningful way, it did not feel like a normal product category slowly maturing. It felt like a door had opened in the wall.
Until a small ISP called Silverlink came along, even getting onto AOL meant long-distance charges. Then suddenly there was a local path into this strange emerging world where people built BBSes, maintained link directories, traded traffic, reviewed websites, wrote CGI scripts, and treated distribution itself as something that had just been rewritten. I became jmac, all lower case, because that was the world I entered. I ran link-exchange systems. I helped review sites and hand out little web awards that doubled as marketing loops. I built logging tools. I was offering a transparent-pixel analytics service before most normal people had any reason to know such a thing existed.
We were trying to make the internet more usable, more visible, and more obviously valuable. We were builders. We were also evangelists. We weren’t intentionally trying to drive any hype, but those two categories are not always as separable as people pretend in hindsight. Or as inseparable as some would have you believe.
I was absolutely one of the people who thought the internet was going to change everything. Not a lot of things. Everything.
And I could not understand why everyone around me did not immediately see it.
That emotional structure is one of the clearest parallels to AI now. Long before the current spending wave, long before enterprise buyers started putting “AI strategy” on slides, there was already a set of true believers who had touched something real and concluded, not irrationally, that this was going to alter the shape of work. They were not wrong. But being right at that stage does not protect you from getting the business timing, the market structure, or the investment economics badly wrong.
I Built Real Things
My vantage on the dot-com run-up was not as a venture capitalist or an analyst. From 1995 to 2001, I was first the CEO of a moderately successful web hosting and dedicated server provider, then part of a self-funded and unsuccessful hardware-less competitor focused on services and internet adoption for businesses. By the end of 2001 I was exhausted enough by the whole thing that I was helping start a media company and an aviation business just to get away from technology for a while.
That progression matters to everything I did later because it gave me a close look at both the real utility and the real delusion.
At my first company we built on other people’s hardware first. Then, on a shoestring, we began deploying our own. We were not one of the companies that everyone threw money at. We were more likely to get picked up by local news as scrappy teenagers who were ahead of the curve. That is a useful place to stand if you want to understand a technology shift. You are close enough to the work to see what actually helps, but not so insulated by capital that you can mistake scale theatre for traction.
Some of what we built now sounds ordinary enough to be invisible. That is exactly the point.
I built a frontend to BIND because normal people were never going to love raw DNS configuration. Making DNS accessible to ordinary users felt important because it was important. I built portals and automation that tied billing to self-service hosting account provisioning. Today that sounds banal. At the time it was part of making internet infrastructure legible to people who were not going to become sysadmins. This was the era when you registered a domain name online and then, a few months later, paid a $70 paper invoice for it.
That is one reason I keep thinking about current AI interfaces. ChatGPT did not invent machine learning, and it certainly did not invent intelligence. Listen to one of my in-person rants on Big Data -> Machine Learning -> AI sometime. What it did, very effectively, was remove a large amount of friction between ordinary people and a capability they were not going to access through research papers, APIs, or a command line. That is structurally similar to what many of us were doing in the early internet era. The first durable value often does not come from the deepest underlying system. It comes from the layer that lets ordinary people finally touch it.
The Split Between Operators and Hype Machines
The dot-com years are often remembered as if everyone inside them were either a visionary or a fraud. Reality was more crowded than that.
There were people building real operating businesses. Some succeeded. Some failed honourably. Some were too early. Some were undercapitalised. Some were good at the work and bad at the financing story. There were also people who mostly learned to talk wealthy friends into bankrolling an idea long enough to make it look larger than life, attract venture backing, and aim for public markets with massively negative earnings.
That also sounds familiar.
I do not want to over-moralise this. Some of the people in that second camp were cynics. Most genuinely believed their own story. Some were just surfing incentives that rewarded narrative over discipline. Markets do not require corruption to behave badly. They only require enough people to believe that someone else will buy the story at a higher price.
I was a real operating believer. That had advantages. I built things I thought were useful. I scoped more reasonably than some of the pure hype merchants. I came through in better shape than people who believed every fantasy around them.
It also had disadvantages.
I did not position myself to capture the most absurd upside. I did not jump on every bandwagon I did not believe in. I was one of the people the larger fortunes were built on without actually receiving them. That is not a complaint. It is just one of the operator lessons from that era. Being closer to reality does not automatically mean you win the financial cycle. If you’ve followed me through AI you’ll see that I’ve approached it similarly, but I am older, wiser, and financially stable.
5GuysTech.com, LLC Was a Useful Failure
I learned at least as much from 5GuysTech as I did from the more successful ventures around it.
The premise was not insane. A lot of businesses should have been using the internet and were not. I thought the gap was obvious. So we tried to bridge it in a way that, in retrospect, tells on the period beautifully: a real brick-and-mortar office for an internet company, where people could come in, sit down, explain their web needs, and get help becoming part of this new world.
That was one of the biggest wastes of money of my early career.
The core problem was timing. We were too late for the first wave of technical enthusiasts, the people already willing to piece things together themselves, and too early for the second wave of real mainstream users, the ones who would eventually absorb the internet into ordinary commercial life. That timing error cascaded into everything else. Demand was weak. Margins were compressed. The sales model was terrible. Without a huge amount of capital to subsidise market education, the whole setup was a sink.
There is a broader lesson in that which applies far beyond the internet and very much applies to AI: being right too early is still a way of being wrong in business.
This is one reason I am skeptical of simple triumphalist retrospectives. When people look backward from a world the internet already transformed, they tend to imagine that anyone who saw the future clearly should have done well. That is not how technological transitions work. They are full of good ideas that arrive before demand, before trust, before habit, before the surrounding economics are ready.
Directionally right is not the same thing as economically right.
What the Bubble Built Anyway
That said, I do not think the right lesson from the dot-com bust is that the whole thing was fake. And I believe the same is, ultimately, true of AI.
It was a bubble. A lot of money was misallocated. A lot of investors got crushed. A lot of workers got hurt. A lot of companies disappeared for good reason.
And society still got something out of it.
What it got was not a magical guarantee that every internet business deserved to exist. What it got was infrastructure, interface layers, and eventually consumer behaviour that would have taken much longer to emerge under a tidier, more disciplined capital cycle.
Search is high on that list. Before search became good, the internet was far more directory-driven, artisanal, and awkward than people remember. Discovery was a real constraint. Search changed what the internet was for because it changed what the internet was like to use.
Online payments matter just as much. E-commerce does not become ordinary until paying online becomes boring. Once payment trust becomes infrastructure instead of an event, entire categories of commerce stop feeling weird.
Dark fibre and network overbuild also belong in the story. I have spent enough years in network operations to have a long memory for this. A lot of investor capital went into capacity that looked absurdly excessive. Some of it was. But later advances, including optical improvements people had not fully priced in, changed the value of that underlying buildout. The bubble did not simply finance websites with bad earnings. It also helped finance transport capacity, hosting structure, and data centre habits that later industries inherited more cheaply than they otherwise would have.
What it did not do was create mainstream consumer behaviour overnight. That part took longer. Gen X and then Millennials had to grow up inside the change and then become the spending base for it. The people who adopted early often did best, but society as a whole needed a generational handoff before the internet became less of a tool you used and more of a condition of normal life.
That is another point worth holding onto now. Public benefit and investor return are not the same thing. A market can massively misprice a technology wave and still leave behind rails that change everything later.
That Is Why AI Feels Familiar
This is the narrow sense in which AI reminds me of the dot-com era.
Not because every detail matches. It does not. AI is arriving through existing cloud platforms, existing devices, and existing distribution channels in a way the early commercial internet did not. Adoption is faster. Interfaces are simpler. The starting base is larger.
But the structure repeats.
There are true believers. There are opportunists. There is an enormous amount of money rushing toward whatever looks like the next defensible layer. There are serious capabilities sitting next to unserious business models. There are people insisting this changes everything tomorrow and people insisting it changes nothing at all.
I do not buy either position.
I think AI is here. I think it already does many useful things. I think it is going to alter how work gets done across a wide range of fields. I also think a lot of current spending will prove undisciplined, a lot of current products will turn out to be expensive ways to produce mediocre output, and a lot of people are mistaking subsidised usage for durable economics.
One place where I differ from some of the loudest evangelists is on cost. People keep saying this is the worst AI will ever be. Maybe. But that is not the same as saying it will simply keep getting cheaper in every meaningful sense.
At the commodity layer, some AI clearly has gotten cheaper and may keep doing so. But at the frontier, and especially at the level of total enterprise deployment, the picture is much messier. Power, capacity, depreciation, integration, workflow redesign, and quality control all cost real money. It is entirely plausible that the cheapest low-end tokens are still ahead of us while the all-in cost of using top-tier AI effectively in serious business remains higher than many people now assume.
That is why I have started framing the current moment differently. This may not be the worst AI we will ever have. But there is a good chance it is the cheapest AI many people will see for years, once the subsidy of free capital departs and scarcity and real operating costs settle into view.
What Skeptics and Evangelists Both Miss
The skeptics who dismiss AI entirely are making the same mistake many people made about the internet. They are reacting, understandably, against bad claims by deciding the underlying capability does not matter. That is usually the wrong move.
AI is not going away. Some jobs are going to be compressed, reshaped, or eliminated. We should not lie about that. It will hurt people. It already is. Standing in the way of adaptation by trying to build artificial moats around categories of work is more likely to destroy whole industries than save them.
At the same time, the evangelists often speak as if every improvement in model quality automatically translates into lower cost and broad human replacement. I do not think that follows from the evidence we have now.
AI is often better than an average person at bounded knowledge work. It is rarely better than a true expert in a narrow domain. What it does well, often extremely well, is synergise with experts. A strong developer can direct multiple agents and get more done. A good doctor can use AI as a second opinion. A capable analyst can interrogate data faster. That is real. It is also not the same thing as saying expertise no longer matters.
In some ways, I think the near-term effect is less “AI replaces the expert” and more “AI gives more people partial access to expert-shaped assistance.” That is a major change. It may also be one of the reasons creative fields are being hit emotionally and commercially faster than some of the technical ones. Creative work feels personal. Once people see synthetic systems competing in that territory, the shock lands differently.
The Question Leaders Actually Have to Answer
The most expensive delusion I see right now is not believing that AI matters. It is forcing expensive AI solutions on customers who do not like them and employees who are not using them, even though they cost more, produce lower-quality results, and take more time.
That is not transformation. That is executive theatre with a compute bill attached.
The practical question is much less glamorous. Where does AI actually improve cost, quality, speed, or decision support all at once? Where does it fit the workflow instead of insulting it? Where does it make strong people stronger rather than creating a second-rate substitute for judgment?
Those are operating questions, not branding questions.
That is also the real lesson I took from living through the dot-com era from the inside. Bubbles are not just stories about greed. They are stories about sequencing. Real capability arrives. Believers overinterpret it. Capital overreacts to it. A lot of bad businesses get built around it. Some good infrastructure gets financed anyway. Then the bill comes due, and the serious work begins.
I think we are somewhere in that cycle now.
If I have one sentence I want an executive reader to keep, it is this: every revolution has huge winners and huge losers, and the path is not chasing either extreme but making the changes that keep you and your company relevant, useful, and profitable after the mania burns off.
That was true of the internet.
I believe it is about to be true of AI.
Further Reading
- J. Bradford DeLong and Konstantin Magin, “A Short Note on the Size of the Dot-Com Bubble”
- Shane Greenstein and Ryan McDevitt, “The Broadband Bonus: Accounting for Broadband Internet’s Impact on U.S. GDP”
- Anders Humlum and Emilie Vestergaard, “The Rapid Adoption of Generative AI”
- Daron Acemoglu, “The Simple Macroeconomics of AI”