Copilot for Managers?
Just as AI came for artists first, it may affect the manager role sooner than we think.
I've been poking around ChatGPT and other LLMs, trying to decide whether they can replace me. Spoiler: no.
But LLMs are really good at capturing intent and synthesizing solutions from the available literature, and, assuming that literature is high quality, that leads to pretty solid answers for simple, common problems.
I’m talking about everyday problems like:
Bob is an experienced developer but lately the quality of his work has gone down
I’m stuck on this project. These are my stakeholders and this is the story so far. How can I get unstuck?
Alice is a great system designer but her new team isn’t listening. How can I facilitate?
There are tons of great books and blog posts out there that cover these situations in exhaustive detail. LLMs like ChatGPT know about them and are really good at giving answers that include:
Things you might not have considered
Checklists to go through
Possible steps to take
Some tools, like phind.com, even link back to the content that informed the answer, which is a great opportunity to do further research and learn a bit.
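To make it concrete, here's a minimal sketch of what asking an LLM one of these everyday questions could look like, using the OpenAI Python client. The model name, system prompt, and example situation are my own assumptions for illustration, not a description of any particular product:

```python
# Minimal sketch of a "Copilot for Managers" style prompt using the OpenAI Python client.
# The model name, system prompt, and example situation are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an assistant for engineering managers. Given an everyday management "
    "situation, suggest angles the manager may not have considered, a short "
    "checklist to work through, and possible next steps. Mention the frameworks "
    "or books that inform your suggestions."
)

situation = (
    "Bob is an experienced developer but lately the quality of his work has gone down. "
    "How should I approach our next 1:1?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model would do
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": situation},
    ],
)

print(response.choices[0].message.content)
```

The interesting part is the system prompt, not the plumbing: it encodes exactly the kind of output I described above (things you might not have considered, checklists, next steps).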
So far, the main limitation is that I can't feed in real information about the problems I'm encountering, because of privacy and security concerns.
Also, I know there's unique value in my experience, context, and perception that an LLM can't replace. Quite often, problems are merely symptoms, and the root cause goes deeper; an LLM won't be able to diagnose that.
And I just don't see ChatGPT making strategic, staffing, budgeting, product-design, or technology-direction decisions any time soon, if ever. Those are out of the question until we get some kind of general AI.
But for simple, everyday situations, I think an LLM could save a lot of time. If we can solve the privacy problem (and providers like Azure and Lengoo offer options), then an AI Assistant for Managers would be totally doable.
Someone, somewhere, must be building a Copilot for Managers, and it will not only save time but also raise interesting existential questions about our role.
Oi Pedro!
Great initial exploration of the topic. Having worked on a service that tries to do exactly what you say is not possible, let me tell you: it is!
Now, we can go into more detail if you want, but my own experience with LLMs and tools based on them is that we can easily mimic the behavior of an average manager, and even some great ones (with the help of a human), using LLMs and some extra spice on top.
My own hypothesis is that LLMs will have a much bigger impact on management/leadership than they will on coders/testers (even if they will have an impact on all of them).
Here's my reasoning: the problems you mention as examples of what LLMs cannot help with are exactly what they can help with the most. In other words, reflection, summarization, and generation of ideas are the easy things for an LLM.
What they can't do (yet) is implement the ideas they come up with, collect feedback from the affected team members, and then react to it. This is where the manager/human comes in: filtering, applying, collecting feedback, and getting back to the LLM with more information.
I'd say that LLMs, even today, are already better than the average manager at most of these tasks. They just can't interact with the world around them yet. But they will. It's a question of time.
What do you think about this take?
Interesting question and reflection, Pedro.
I'm probably biased, but what we know about change and learning is that they require us to make our own connections and arrive at our own insights. That's why coaching, when done well, is probably the most effective way of helping people change.
I am not (yet) seeing anything AI-based that can spark the type of insight that only comes from being asked the right question at the right time, within a person's unique context. Not to mention that a big part of coaching is the resonant connection between two human beings who trust each other, something we're evolutionarily very attuned to.
Never say never. But for now, I'll say "not yet."