It’s 2:15 PM. A ticket just landed in your L2 queue. Priority: High. Client: frustrated. Description: “User can’t access email.”
That’s it. That’s the whole ticket.
Your L2 tech opens it, stares at it for a moment, and starts from scratch. What version of Outlook? Cloud or on-prem? Is it one user or five? Did anyone try anything, or did L1 just forward the call? Ten minutes later, they’ve figured out enough to actually start troubleshooting. The client has been waiting that whole time.
Nobody did anything wrong. And that’s exactly the problem.
The Dispatcher Gets the Blame. The Ticket Deserves It.
When service desks slow down, the conversation usually turns to the same suspects: not enough staff, the wrong tools, a dispatcher who needs to be faster. But in most cases, the queue isn’t the problem; what’s entering the queue is.
Dispatchers route tickets based on what’s in them. If the information is vague, incomplete, or missing the one detail that actually matters, no amount of routing skill compensates. The dispatcher looks at “user can’t access email,” guesses at priority, assigns it to whoever seems available, and moves on. It’s not a failure of judgment. It’s a failure of inputs.
The same is true at the L1-to-L2 handoff. When an escalated ticket arrives without scope, without context, without steps already taken, L2 doesn’t inherit a ticket. They inherit a mystery. And solving the mystery comes out of their billable time.
This is the part MSPs have quietly normalized: the assumption that some amount of reconstruction work is just part of the job. It isn’t. As research into MSP dispatch workflows points out, dispatchers and L1 techs operate under constant cognitive load, and this work is invisible to leadership but central to service quality. It’s a tax on your most expensive techs, paid out in 10-minute increments, dozens of times a day.
What That Tax Actually Costs
Here’s a rough calculation worth running on your own team.
Assume your L2 techs spend an average of 12 minutes per escalated ticket reconstructing context that should have been in the ticket to begin with. If your service desk handles 20 escalations a day, that’s four hours of L2 time — every single day — spent on work that happened before any real troubleshooting began.
Annualized, that works out to roughly 1,000 hours per year (four hours a day across about 250 working days), absorbed by a team of three L2 techs. The cost compounds fast: HDI’s benchmarking research shows that escalation costs are cumulative. A ticket that could have been resolved at L1 for around $22 costs an additional $62 once it reaches L2, for a combined $84. Every unnecessary escalation driven by incomplete ticket information pays that premium for no reason.
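The arithmetic above is easy to adapt to your own numbers. Here is a minimal sketch in Python using the article’s figures (12 minutes per escalation, 20 escalations a day, HDI’s $22 and $62 cost benchmarks); the 250 working days per year is my assumption, not a figure from the article.

```python
# Back-of-envelope model of the "context reconstruction tax".
# Swap in your own team's numbers to run the same calculation.

RECONSTRUCTION_MIN_PER_TICKET = 12   # article's assumption
ESCALATIONS_PER_DAY = 20             # article's assumption
WORKING_DAYS_PER_YEAR = 250          # my assumption: ~50 weeks x 5 days
L1_COST, L2_PREMIUM = 22, 62         # HDI benchmark figures cited above

def daily_hours_lost() -> float:
    """Hours of L2 time spent each day reconstructing missing context."""
    return RECONSTRUCTION_MIN_PER_TICKET * ESCALATIONS_PER_DAY / 60

def annual_hours_lost() -> float:
    """The same tax, annualized across the whole team."""
    return daily_hours_lost() * WORKING_DAYS_PER_YEAR

def escalation_premium(unnecessary_escalations: int) -> int:
    """Extra dollars paid when tickets that could close at L1 reach L2."""
    return unnecessary_escalations * L2_PREMIUM

print(daily_hours_lost())      # 4.0 hours per day
print(annual_hours_lost())     # 1000.0 hours per year
print(escalation_premium(5))   # 310 dollars for five avoidable escalations
```

None of these numbers will match your shop exactly; the point is that the model is simple enough to run against your own ticket data.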
It doesn’t show up in any report. There’s no line item called “context reconstruction overhead.” It just looks like L2 is slow, or the queue is backed up, or you need to hire.
The Root Cause Is Upstream
The fix isn’t at the L2 level, and it isn’t a dispatcher problem. It’s an intake problem.
What does a complete ticket actually need to contain before it moves? At minimum: the scope of impact (one user or many), the environment context (what platform, what version), what’s already been tried, the specific error or behavior, and a clear SLA risk flag if one applies. That’s not a lot. But left undefined, none of it reliably shows up.
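That minimum standard is concrete enough to express as a checklist your intake process can enforce. A minimal sketch in Python follows; the field names are illustrative assumptions, not any particular PSA’s schema, so map them to whatever your ticketing system actually stores.

```python
# Minimal intake-completeness check. Field names are hypothetical;
# adapt them to your PSA's actual ticket fields.
REQUIRED_FIELDS = (
    "scope_of_impact",    # one user or many?
    "environment",        # platform and version, e.g. "M365 Business Premium"
    "steps_tried",        # what L1 already attempted
    "observed_behavior",  # the specific error or symptom
)

def missing_fields(ticket: dict) -> list[str]:
    """Return the required fields that are absent or blank."""
    return [f for f in REQUIRED_FIELDS if not str(ticket.get(f, "")).strip()]

def is_routable(ticket: dict) -> bool:
    """A ticket moves to dispatch only when every required field is filled.
    An SLA-risk flag is deliberately optional: it only applies sometimes."""
    return not missing_fields(ticket)

# The ticket from the opening anecdote fails the check immediately:
vague = {"observed_behavior": "User can't access email"}
print(missing_fields(vague))  # ['scope_of_impact', 'environment', 'steps_tried']
```

Whether this runs as a PSA workflow rule, a required-field form, or a dispatcher’s manual checklist matters less than the fact that “complete” is now defined somewhere other than in each tech’s head.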
The reason is simple: L1 techs working fast under pressure will do the minimum the process requires. If the process doesn’t define what “complete” looks like, “complete” becomes “I created the ticket and moved on.” That’s not laziness; it’s a rational response to an undefined standard.
MSPs that close this gap don’t do it by coaching L1 harder. They do it by building accountability into intake itself: defining what a routable ticket looks like, making incomplete tickets visible before they consume senior tech time, and giving dispatchers the authority to push work back when the information isn’t there.
What “Fixed” Looks Like
When intake standards are defined and enforced, the entire service desk runs differently.
L2 opens a ticket and the context is already there. Scope: three users. Platform: M365 Business Premium. Steps tried: OWA test confirmed the issue is account-level, not client-side. Probable cause: permissions change in the last 24 hours. That ticket takes 20 minutes to resolve instead of 35, not because the tech is faster, but because they started at step three instead of step zero.
Dispatchers make better routing decisions because they’re working with real information. Senior techs stay on senior work because incomplete tickets get flagged before they reach the L2 queue. And the queue itself becomes a more accurate picture of what your team is actually dealing with, not a mix of real work and unfinished context-gathering.
The service desk didn’t get bigger. The process just got cleaner.
One Thing to Try This Week
Pull your last 20 escalated tickets. Count how many had enough information for an L2 tech to start troubleshooting immediately without asking a follow-up question or calling the client back for clarification.
If the number is less than half, you don’t have a dispatcher problem or a staffing problem. You have an intake problem. And that’s the more fixable one.
If you’re curious how other MSPs are building structure around service delivery workflows, including how scheduling fits into a tighter dispatch model, TimeZest is worth a look.