It’s time to come to grips with the fact that some versions of roles are already disappearing. Project managers who only track tasks. Product managers who only write specs and manage backlogs. QA engineers who only execute scripted tests. These execution-only layers are being compressed because AI is very good at structured output and procedural repetition. That does not mean the people in those roles are obsolete. It means the narrow version of the role is.
The encouraging part is that most people in those roles already understand more than they give themselves credit for. A PM who has written hundreds of specs understands user intent, sequencing, trade-offs, and the difference between a good idea and a viable one. A QA engineer who has tested dozens of releases understands edge cases, user behavior, product fragility, and where real-world usage diverges from documentation. A project manager who has navigated stakeholders understands incentives, dependencies, and how work actually moves through a company. None of that knowledge is “just coordination.” It’s adjacent depth. The shortest path to a second discipline is not starting over. It’s expanding from the thinking you were already doing.
A practical way to begin is one most people are underusing: LLMs compress apprenticeship, not by doing the work for you, but by walking you through it while you do your daily work. There are no stupid questions to ask a model, and models are unusually efficient at going down rabbit holes in useful ways if you let them. Instead of just accepting output, push back. Ask why a decision was made, what breaks at scale, and, maybe more importantly, what you are overcomplicating by scaling too early. When you write a spec, have the model explain the architectural implications. When you review a pull request, surface the product trade-offs embedded in the code. This doesn’t require around-the-clock exploration. It requires curiosity layered into work you are already doing.

I used to delegate infrastructure work to specialized ops engineers and stay focused on backend systems and AI. Instead of continuing to delegate, I spent a few days creating environments from scratch, breaking them, understanding why they broke, and rebuilding them with an LLM as a guide. I wasn’t copying generated code blindly. I was asking it to explain each layer until I understood what was happening under the hood (IAM remains the most creatively punishing permissions model I’ve ever navigated). That same setup now takes me hours instead of days. That is what expanding into an adjacent discipline looks like: you compound what you already know and remove the parts that used to require permission.