How I Almost Killed Cyberix (And What I Learned)
By admin on 2025-08-28 01:56:31

The Setup

Three days ago, Cyberix went completely dark. Not due to a hack, DDoS attack, or hosting provider failure. I killed it myself through a series of increasingly stupid decisions that culminated in accidentally nuking the entire site's data.

This is the story of how overconfidence, cost-cutting, and a fundamental misunderstanding of Podman's container storage nearly destroyed everything we'd built.

The Foreshadowing

It started with a number that bothered me: 100 gigabytes. That's how much storage space the server was using, and I was paying for every byte of it. The rational part of my brain knew that downsizing the disk would require careful planning and proper backups. The impatient part of my brain kept looking at the monthly hosting bill.

Foreshadowing, meet hubris.

The Discovery

A few days later, I noticed something weird. The /srv/ directory - where all the important stuff was supposed to live - was only 12GB. So where were the other 75GB hiding?
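
For anyone hunting a similar mystery, a quick disk-usage pass is all it actually takes; a minimal sketch (BusyBox-friendly, sizes in megabytes):

# Show the 20 largest directories up to two levels deep, staying on one filesystem
du -xm -d 2 / 2>/dev/null | sort -rn | head -n 20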

The culprit: /var/lib/containers was absolutely stuffed with container images, volumes, and what I assumed was just garbage data from old containers. This had to be the problem. Clean this up, problem solved, right?

Wrong.

The First Mistake

My inexperienced self decided that nuking /var/lib/containers with various podman prune and reset commands would clean up all the "dangling" images and volumes. I thought I was just clearing cache and unused data.
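
For the curious, these are the usual suspects for that kind of cleanup; a rough sketch, not an exact transcript of my shell history:

podman system prune            # remove stopped containers, dangling images, unused networks
podman system prune --volumes  # same, but also remove unused (non-bind-mounted) volumes
podman image prune             # just the dangling image layers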

The cleanup freed exactly 6 gigabytes. All containers were still running. The mystery 75GB was still there.

What I didn't realize: the bloat wasn't from dangling images. It was from my own terrible Podman practices. I had been building images that included entire game installations (Team Fortress 2, Xonotic, Minecraft, Terraria) and then mounting those same directories as volumes. This created massive data duplication - the same 20GB TF2 installation existed both in the image and as mounted volumes.

Two TF2 servers meant 80GB of redundant game data alone.
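
To make the anti-pattern concrete, here is roughly what my setup amounted to; the image name, paths, and volume names are illustrative:

# The Containerfile baked the full ~20GB game install into the image:
#   COPY tf2/ /home/steam/tf2
podman build -t tf2-server .

# ...and then the same path was mounted as a named volume at runtime. Podman copies
# the image's contents into an empty named volume on first start, so the data ends
# up existing twice per server: once in the image layers, once in the volume.
podman run -d --name tf2-main -v tf2-main-data:/home/steam/tf2 tf2-server
podman run -d --name tf2-second -v tf2-second-data:/home/steam/tf2 tf2-server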

The Second Mistake

Still clueless about the real cause, I decided to get aggressive. Turn off all containers, manually delete them, nuke everything container-related, then restart them one by one to identify what was actually consuming space.
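
In practical terms the plan boiled down to something like this; a sketch, not a recommendation:

podman stop --all      # stop every running container
podman rm --all        # remove the stopped containers
podman rmi --all       # remove every image
podman volume prune    # remove volumes no longer referenced by any container
podman system reset    # finally, wipe Podman's storage in /var/lib/containers entirely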

This was the point of no return, though I didn't know it yet.

The Revelation

After the nuclear cleanup, something beautiful happened: the server was using less than 10GB of space total.

And then something terrible occurred to me: if the server only needs 10GB, why am I paying for 100GB of SSD space? I could downsize the disk and save money! No more wasting donations on unused storage for game servers that nobody plays on anyway.

This seemed like genius-level optimization.

The Point of No Return

I contacted my hosting provider to downsize the SSD. They informed me that resizing requires completely deleting the current disk and reinstalling everything from scratch.

"Do you have backups?" they asked.

"Of course," I replied confidently. "I just need to backup the 12GB /srv/ folder where everything important lives."

This was mistake number three: assuming I understood where all the critical data was stored.

The Catastrophe

Disk deleted. Alpine Linux reinstalled. Time to restore the website from the /srv/web backup.

That's when I discovered that the MySQL database - containing all articles, forum posts, user accounts, everything - wasn't in /srv/. It was living happily in /var/lib/containers until I murdered it.

Three articles that had generated genuine discussion and controversy across multiple communities: gone. Forum posts: gone. User registrations: gone. Comment threads: gone.

The site was now a beautiful, empty shell running on a cost-optimized 10GB SSD.

The Silver Lining

At least I was now paying only €8.70 per month instead of whatever the 100GB plan cost. The site's financial runway just got significantly longer, assuming there was still a site worth running.

The Recovery

Panic-driven restoration began. I transferred the surviving services: IRC, XMPP, Mumble, and the bridge between them. These had been properly configured to store data in /srv/ from the beginning, so they survived the apocalypse intact.

More importantly, I completely rewrote the container configurations to ensure everything saves in /srv/ and nowhere else. No more mystery data locations. No more trusting containers to quietly decide where my data lives:

volumes:
  - ./mysql/data:/var/lib/mysql  # Bind-mount to the host so the data survives any /var/lib/containers cleanup

Every critical service now explicitly binds its data to the host filesystem in a location I control and understand.

The Miracle

I found salvation on my laptop: a MySQL backup from exactly three days before the site had gone down. I had created it during the last round of site updates but completely forgotten about it.

We lost everything posted after those three articles, but the articles themselves and the original forum structure were recoverable. The community discussions, reactions, and new user registrations from the past few days: still gone.

What I Learned

Container storage is not intuitive. Just because you mount a volume doesn't mean that's where all the data lives. Database containers, especially, love to hide critical files in unexpected places.
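
The antidote to that blind spot is to ask Podman where a container's data actually lives instead of assuming; the container and volume names below are illustrative:

# List a container's mounts: a "bind" Source sits on the host where you put it,
# a "volume" Source sits under /var/lib/containers/storage/volumes/
podman inspect --format '{{range .Mounts}}{{.Type}}  {{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' mysql

# Or ask a named volume directly where its data is stored
podman volume inspect --format '{{.Name}} -> {{.Mountpoint}}' mysql-data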

"Cleanup" operations are never as safe as they seem. When you don't fully understand what you're cleaning, you're not cleaning, rather you're playing Russian roulette with your data.

Proper backups require understanding your entire data footprint. You can't back up what you don't know exists.

Cost optimization should never come before data security. The €20/month I was trying to save nearly cost me months of work and community building.

Documentation beats assumptions every time. If I had properly documented where each service stored its data, this disaster wouldn't have happened.

The Technical Fix

The new container architecture follows a simple rule: if it's important, it lives in /srv/ and gets backed up. Period.

Database files, configuration files, user uploads, logs - everything is explicitly mapped to known locations on the host filesystem. No more mystery storage in /var/lib/containers/ where one wrong command can wipe out everything.
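
In practice that means every service is launched with explicit bind mounts; a sketch of the idea, with the image tag, paths, and placeholder password all illustrative:

# Every path the database touches is bound to a directory under /srv/
podman run -d --name mysql \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /srv/mysql/data:/var/lib/mysql \
  -v /srv/mysql/conf:/etc/mysql/conf.d \
  -v /srv/mysql/logs:/var/log/mysql \
  docker.io/library/mysql:8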

The Human Cost

The technical recovery was straightforward once I understood the problem. The real cost was community momentum. People who had engaged with the articles, left thoughtful comments, or registered accounts had their contributions vanish.

Some will come back. Others won't bother. That's the price of learning system administration in production.

Moving Forward

Cyberix is back online with a leaner, more robust architecture. The hosting costs are now sustainable long-term, and the data storage practices are actually comprehensible.

Most importantly, I now have automated daily backups of the entire /srv/ directory, plus manual snapshots before any major system changes.
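
The nightly job is nothing clever; conceptually it is a two-step script run from root's crontab, with the hostnames and paths here being illustrative:

#!/bin/sh
# Nightly backup sketch: dump the database, then copy all of /srv/ off this machine
set -eu

STAMP=$(date +%F)

# 1. A consistent SQL dump, taken from inside the running container
podman exec mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
  > "/srv/backups/mysql-$STAMP.sql"

# 2. Mirror the whole /srv/ tree to another machine
rsync -a --delete /srv/ backup@backup-host:/backups/cyberix/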

The site lost some momentum, but it gained something valuable: an administrator who finally understands the systems he's running.

The Real Lesson

Running independent web services requires more than just ideological opposition to corporate platforms. It requires actual competence in system administration, backup strategies, and understanding the tools you're using.

I got lucky. The backup existed, the community survived, and the financial damage was minimal. Next time, I might not be so fortunate.

If you're running your own services, learn from my mistakes:

  • Document where everything stores its data
  • Test your backups before you need them
  • Understand your containers before you delete them
  • Never optimize costs by destroying things you don't understand

The internet culture of old was built by people who understood their systems. If we want to recreate that independence, we need to develop that same technical competence.

Otherwise, we're just LARPing as sysadmins until the next catastrophic learning experience.


Cyberix Network is back online at cy-x.net. All services restored, backups automated, hubris temporarily contained.



