The journey of a thousand lines of code begins with a single semicolon
About Old Dog, New Flex
Old Dog, New Flex is not just a fun play on an old saying. It’s a long-game project about staying useful, responsible, and grounded in an industry that rarely pauses long enough to ask why.
I've spent nearly three decades building software — through multiple hype cycles, architectural revolutions, and more than a few moments where "this changes everything" quietly became "this just adds complexity."
I’m not here because I’ve stopped learning.
I’m here because I haven’t.
I'm here because this old dog can still learn a new trick or two.
What "Old Dog, New Flex" Actually Means
The phrase isn’t self-deprecating. It’s descriptive.
Old Dog
Scar tissue. Pattern recognition. Knowing how systems fail, not just how they compile.
New Flex
Learning continuously. Updating mental models. Adapting tools without abandoning judgment.
Together, they describe a way of working that values:
- context over novelty
- responsibility over automation
- durability over momentum
This isn’t about resisting change.
It’s about surviving it with your principles intact.
What You'll Find Here
I write in three modes, depending on what the moment calls for:
Learn
Clear explanations of what’s actually changing in technology — especially around AI — and what’s just being renamed. These are grounded in real systems, real tradeoffs, and real consequences.
Share
What I’m building, how I’m thinking about tools, and why I make certain architectural or product decisions. Not polished case studies — working notes from the field.
Journey
Reflections on burnout, longevity, reinvention, and what it means to keep learning later in your career without pretending you’re starting from scratch.
Some posts are technical.
Some are reflective.
All of them are intentional.
What This Site Is Not
This isn’t:
- A news site.
- A trend forecast.
- A personal brand content machine.
I don’t publish hot takes for engagement or chase every new tool for relevance. If something doesn’t hold up after the noise fades, it doesn’t belong here.
Silence is allowed.
Iteration is expected.
Restraint is a feature.
Why AI Shows Up So Often Here
AI isn't interesting because it is powerful.
It's interesting because it redistributes responsibility.
A lot of modern tooling optimizes for speed without adequately addressing judgment, accountability, or downstream impact. That gap is where things tend to break — technically and ethically.
Much of what I write here explores that tension:
- Where humans stay in the loop.
- Where AI and automation help.
- Where it quietly makes things worse.
Not from theory, but from use.
Why I'm Building This in Public
I'm not trying to teach from a podium.
I’m documenting how I’m thinking and working now, with everything that comes from having done this for a long time — including mistakes, recalibration, and changed opinions.
If you’re early in your career, maybe this saves you time.
If you’re further along, maybe this feels familiar.
Either way, you don't need permission to evolve, just context and courage.
A Final Note
Old Dog, New Flex isn’t about proving relevance or chasing the next wave.
It’s about staying sharp without becoming cynical.
Learning without erasing experience.
And building things that hold up after the noise fades.
I’m not here because I’ve stopped learning.
I’m here because I haven’t.
After all, I'm living proof that you can teach an Old Dog a New Flex.
Latest
What I Mean By Learning, and What I Don't
Learning isn’t consumption or speed. It’s what survives interruption, shapes decisions, and holds up when tools and certainty change.
Prediction Is Not Resilience
An examination of why better prediction — especially with AI — doesn’t automatically make systems resilient, and how over-trust in forecasts can increase fragility when real-world failures cascade.
Why Old Dog, New Flex Exists
A personal and editorial statement on why Old Dog, New Flex exists: documenting how an experienced software engineer is adapting to AI, learning in public, and drawing clear lines around responsibility, clarity, and long-term thinking.
The Responsible AI Adult: Why We Still Need Humans in the Loop
We need Responsible AI Adults in software—just like we need them in healthcare, education, journalism, and yes, graphic design. Not to control the outcomes, but to safeguard the inputs and validate the behavior.
Stop Coding, Start Supervising
In a bold statement at CES 2026, Nvidia CEO Jensen Huang told engineers to stop writing code and start supervising it. This post explores how that single line reveals a major paradigm shift in software engineering—from coding as craft to AI-guided orchestration. For veteran developers, it’s not just a workflow change—it’s an identity crisis. And it’s not limited to tech. Drawing inspiration from adjacent fields like design, this post argues that engineers aren’t being replaced, but promoted.
AI is Fast, Production is Fragile
AI can write code fast — but production is fragile. A practical look at why bad prompts cause real outages, and the system I use to apply AI safely in real software engineering.