About Boring Reliability

Boring Reliability is about building software that is calm, predictable, and surprisingly robust, even when the metaphors are a bit unhinged.

Under the banner of Calm UX & Dark Magic, the focus is on readable interfaces, test-driven workflows for AI-assisted development, and turning legacy systems into stable foundations instead of haunted crypts.

I experiment with GS-TDD (Gold Standard TDD), responsible AI tooling, and frontends that feel quiet even when the underlying stack is anything but.

Read the origin story: The Day My AI Cheated

Who is the Dark Magician?

My name is Dennis Schmock. I'm a Senior Software Engineer with a reliability bias: I care less about hype and more about whether your system is still calm and boring at 03:00 on a Tuesday during a deploy.

I don't think “years of experience” is a useful proxy for competence. You can do this for a decade and still ship chaos. I care about boring, verifiable outcomes: predictable releases, fewer incidents, and systems that are easy to change without breaking production.

That's why I created Gold Standard TDD (GS-TDD): a pragmatic way to use tests as contracts so you can refactor, ship AI-assisted code, and evolve legacy systems without turning your stack into a haunted castle.

Why boring?

My philosophy is simple: Software should be boring, so life can be fun.

I'm allergic to Hype-Driven Development. Shiny stacks and copy-pasted AI code might look impressive on day one — but they're terrible at 10,000 users, three years of feature creep, and a rotating team.

Boring Reliability means:

  • Calm UX – interfaces that are quiet, legible and predictable.
  • GS-TDD – tests as contracts, not checklist theater.
  • Legacy as an asset – treat old systems as foundations to reinforce, not haunted crypts to bulldoze.
  • Responsible AI – AI-assisted workflows that are testable, observable and auditable, not just “magic”.

This site exists to show that you can have dark magic in the UI and boring reliability in the guts — at the same time.

How this site is tested

This site practices what it preaches. Every feature follows GS-TDD (Gold Standard TDD): tests define behavior first, implementation comes second, and refactoring happens without breaking contracts.

The test suite includes:

  • Unit tests with Vitest + Testing Library – Component behavior, API routes, and utilities are tested in isolation using BDD-style (Given-When-Then) assertions.
  • E2E tests with Playwright – Critical user journeys (navigation, blog reading, interaction flows) are verified end-to-end in real browsers.
  • Mutation testing with Stryker – The rate limiter and other critical paths are mutation-tested to catch weak tests that pass but don't actually verify behavior.

If you see a feature marked "Coming soon", it means the tests aren't written yet - not that the feature is half-built. GS-TDD applies to this blog too: no tests, no feature.

Mr. Reliable (the chat assistant)

The chat interface uses a rate-limited OpenAI API route. The rate limiter is currently in-memory (resets on deploy), so in multi-instance production setups you'd want Redis or similar. For a single-instance blog, it's boringly adequate.

If Mr. Reliable is temporarily unavailable, it's likely an OpenAI API issue or rate limit hit - not magic, just reality.