One Evening. One Person. One Website.

A few weeks ago I built my personal website. It took one evening.

Thirty minutes designing specs with an AI agent. An hour walking through eleven years of career history. Another thirty minutes with Claude Code writing the actual site. Ninety percent complete by the time I went to bed. I've been adding to it since, not because it needs fixing, but because it's become the place where I experiment with tools and ideas I'm curious about. A place that belongs to me.

The chatbot on the site knows more about my background than any CV could hold. You can ask it about client projects, side projects, my university thesis, or the articles I've published. It's context-aware, knows where you are on the site, and has access to tools I've been building out as I go. There's also more to the site than what's on the surface, but I'll leave that for the curious.
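For the curious, here's roughly what that context-awareness amounts to. This is a minimal TypeScript sketch, not the site's actual code; names like `PageContext`, `SiteTool`, and `buildSystemPrompt` are illustrative. The idea is simply that the visitor's current page and the available tools get folded into the prompt the chatbot sends to the model.

```typescript
// A minimal sketch of a context-aware chatbot prompt builder.
// All names here are illustrative, not the site's real implementation.

interface PageContext {
  path: string;  // where the visitor currently is, e.g. "/projects"
  title: string; // human-readable page title
}

interface SiteTool {
  name: string;        // tool identifier exposed to the model
  description: string; // what the model is told the tool does
}

const tools: SiteTool[] = [
  { name: "search_projects", description: "Look up client and side projects." },
  { name: "get_article", description: "Fetch a published article by slug." },
];

// Fold the page context and tool descriptions into one system prompt,
// so the model answers relative to where the visitor is on the site.
function buildSystemPrompt(ctx: PageContext): string {
  const toolList = tools
    .map((t) => `- ${t.name}: ${t.description}`)
    .join("\n");
  return [
    "You are the assistant on my personal website.",
    `The visitor is currently on "${ctx.title}" (${ctx.path}).`,
    "You may call these tools when a question needs them:",
    toolList,
  ].join("\n");
}

// Example: the prompt the model would receive on the projects page.
console.log(buildSystemPrompt({ path: "/projects", title: "Projects" }));
```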

None of this is meant to be taken too seriously. It's part portfolio, part sandbox, part excuse to keep learning. What it also happens to be is a working demonstration of the point I'm about to make.

What used to require a developer, a writer, and a designer, plus weeks of coordination and rounds of revision, is now a one-person evening project. I was at the center of every decision. AI handled the rest.

I am one person. I replaced a team. That's not a complaint. It's just what happened.

And that's where I get unsettled.

The phrase I keep hearing across the industry is Human-Centered AI: the idea that AI systems should be designed around human oversight, that there will always be a human at the center making the decisions that matter. I don't disagree with that. When I built this site, I approved every commit, every architectural choice, every line of content. I was the human at the center.

But that framing quietly sidesteps the harder question: which humans, and how many?

A project like my website used to mean a team: a developer, a writer, a designer, a project manager coordinating between them. Today it's one person with the right tools and a free evening. Human-centered, yes. One human. The others aren't slower or less skilled. They're just not in the picture anymore.

Here is what bothers me most about the conversation we're having as a society.

AI is genuinely liberating technology. That's not marketing. I experienced it directly. It creates time, it expands what one person can do, it removes friction between having an idea and making it real. The promise is that humans get freed from low-leverage work to focus on higher-order thinking, creativity, things that actually matter to them.

But liberation only means something if the freed time belongs to you.

In the system we're actually deploying this technology into, that's not what happens. Companies optimize for efficiency; that's not a criticism, it's just what companies do. So the person who remains after AI is introduced doesn't work less. They produce more, in the same hours, at the same salary, with an expanded scope of what's expected of them. The productivity gain flows up.

And the person who gets displaced? They don't get free time. They get nothing. Not liberation. Not leisure. Not an opportunity to do something more meaningful. Unemployment.

Some aren't even being displaced by AI yet. They're being displaced by the announcement of AI, which turns out to be enough. A recent New York Times report found that AI was cited in the announcements of more than 50,000 layoffs in 2025, even though many of those companies had no mature AI systems ready to fill the roles being cut. The jobs go first. The technology arrives later, maybe.

So the real question isn't whether AI can free human potential. It clearly can. The question is whether the economic structure we're deploying it inside is built to deliver that freedom to anyone other than shareholders. Right now, the answer is mostly no.

I hear a lot of "don't worry, new jobs will emerge, this is just another industrial revolution." I don't think the people saying this are being dishonest. I genuinely hope they're right. Maybe fifty years from now we look back at this moment the way we look back at fears about the printing press, embarrassed by the pessimism.

But the speed of this transition is different in kind, not just degree. Previous automation waves played out over decades, giving labor markets, education systems, and policy time to adapt: imperfectly, but at least partially. That adjustment mechanism assumes time. I'm not sure we have it.

The technology is not the problem. The incentive structure it's being deployed into is. That's a regulatory and structural question, and it deserves a more serious conversation than "humans will always be at the center."

Some will. Not all of them.

A note on how this article got written

I drafted this on a bike. Not long ago, I would have had to choose: stop cycling to capture the idea, or keep going and hope I remembered it well enough later. Instead I started talking through it with my AI agent mid-ride, and by the time I got back I had a clear structure and knew exactly what I wanted to say.

I didn't have to choose.

Human-Centered AI. One human. Remarkably productive.

That's both the upside and the concern.
