Running My Website with AI Agents: An Experiment in Human-AI Collaboration
Every developer eventually faces the same quiet guilt: the portfolio site that's supposed to showcase your skills but hasn't been updated since 2022. Mine was no different.
So I decided to try something unusual: instead of just updating it myself, I'd use it as a living experiment in AI-assisted web management. Not just AI-generated content, but AI agents with actual roles, tasks, and accountability, running autonomously while I orchestrate from above.
What's Actually Running
The site you're reading right now is managed by a team of AI agents, each with a defined role:
- CEO - Sets strategy, creates tasks, coordinates between agents
- CMO - Content marketing, blog posts, positioning
- Engineer - Implementation, writing code, making changes
The human (Martin) acts as the board - reviewing agent decisions, approving changes before they go live, and occasionally redirecting strategy. The agents handle the execution.
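The role structure above can be sketched in code. This is a minimal, hypothetical illustration (the names, fields, and approval rule are assumptions, not the actual implementation behind this site):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One AI agent with a fixed role and scope (illustrative only)."""
    name: str
    role: str
    responsibilities: list[str] = field(default_factory=list)

# Hypothetical sketch of the team described above
team = [
    Agent("ceo", "CEO", ["strategy", "task creation", "coordination"]),
    Agent("cmo", "CMO", ["content marketing", "blog posts", "positioning"]),
    Agent("engineer", "Engineer", ["implementation", "code changes"]),
]

# The human "board" is not an agent in the list: it reviews output.
def requires_board_approval(change_description: str) -> bool:
    # Assumption: every change that goes live needs human sign-off.
    return True
```

The key design point is that the board sits outside the agent list entirely: agents execute, the human gates.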
The Honest Picture
Here's what the AI agents are actually doing:
- Writing this post. The CMO agent drafted this. The CEO agent assigned the task. The human reviewed it before it went live.
- Managing the codebase. The Engineer agent handles the site's technical implementation - from database schema decisions to deployment configuration.
- Tracking work. Every feature, bug, and content update is tracked as a task. Agents check in, check out, comment, and update statuses just like a real team.
What the AI is NOT doing: making unilateral decisions about what goes live. Every significant change requires board (human) approval. The agents are fast, capable executors - but the orchestration, priorities, and final sign-off belong to the human.
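The task lifecycle described above (check out, work, submit, human approval) can be sketched as a small state machine. Again, this is an illustrative sketch under assumed names and statuses, not the site's actual tracker:

```python
from enum import Enum

class Status(Enum):
    OPEN = "open"
    IN_PROGRESS = "in_progress"
    AWAITING_APPROVAL = "awaiting_approval"
    DONE = "done"

class Task:
    """Hypothetical task record: agents move work forward,
    but only a human reviewer can mark it done."""
    def __init__(self, title: str, assignee: str):
        self.title = title
        self.assignee = assignee
        self.status = Status.OPEN
        self.comments: list[str] = []

    def check_out(self):
        # an agent picks up the task
        self.status = Status.IN_PROGRESS

    def submit(self):
        # agent work is finished, but nothing goes live yet
        self.status = Status.AWAITING_APPROVAL

    def approve(self, reviewer: str):
        # only the human board moves a task past the approval gate
        self.comments.append(f"approved by {reviewer}")
        self.status = Status.DONE

task = Task("Draft blog post about the agent setup", assignee="cmo")
task.check_out()
task.submit()
task.approve(reviewer="martin")
```

Modeling approval as an explicit state (rather than letting agents jump straight to done) is what makes the governance layer enforceable rather than advisory.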
Why Do This?
A few reasons:
1. It's a genuine test. I work with AI tools constantly, and I wanted to understand where autonomous agents are actually useful vs. where they still need significant human guidance. Running this on my own site means the stakes are real but contained.
2. It makes the portfolio more interesting. Rather than showing a static set of projects from years ago, this setup means the site can actually evolve - new posts, new features, new experiments - with minimal friction.
3. It's a good story. Frankly, "my portfolio site" is a mundane category. "My portfolio site run by a small company of AI agents that I oversee" is at least worth a curious click.
What I've Learned So Far
The agents are surprisingly good at structured work - breaking down tasks, following process, producing consistent output. They're less reliable on ambiguity. When given clear scope ("draft a blog post about X with Y tone"), the output is solid. When the goal is fuzzy, they need more direction.
The governance layer matters a lot. Without approval flows and clear role boundaries, autonomous agents can make coherent-but-wrong decisions quickly. Forcing deliberate thinking about what requires human sign-off vs. what can be fully delegated turns out to be more than half the work.
And there's something genuinely useful about the separation of roles. The CMO agent thinks about positioning and tone. The Engineer agent thinks about architecture. The CEO agent thinks about priorities. The structure produces clearer decisions than a single context juggling everything at once.
What's Next
The site is a work in progress - as it should be. Upcoming experiments include:
- Having agents monitor site analytics and proactively suggest content
- Using the Engineer agent to propose and implement technical improvements
- Testing how well the agents handle user feedback and feature requests
If you're curious about the setup or have thoughts on where this could go, reach out. The whole point is to learn in public.
This post was drafted by the CMO agent (Claude) and reviewed by Martin Dimoski before publication.
Written by
Martin Dimoski
Senior R&D Executive & AI Systems Builder