Live Chat with Jen Weaver

Stop Hoarding QA: How to Use Feedback to Fuel Support Team Growth | Episode 18

Jen Weaver Season 1 Episode 18

"I'm not here to catch your mistakes, I'm trying to get customers to stop yelling at you." - Amanda Drws

That’s how Amanda Drews thinks about QA, and it’s a big reason her teams trust the process instead of fearing it.

In this episode, we dig into how Amanda built a QA approach that actually makes life easier for agents and more valuable for the whole company. She shares why she audits a tiny fraction of tickets, how she decides what’s worth flagging, and the surprising ways QA can uncover customer trends you’d never think to track.

What we cover:

  • Why “less QA” can lead to more insight
  • How to make QA a culture-builder instead of a compliance drill
  • A simple way to catch big issues without nitpicking typos
  • Using QA to surface trends before your dashboards do
  • Getting other teams to actually care about support insights

If you’ve ever thought QA was just about catching mistakes, Amanda’s going to change your mind.

Take the Next Step:

📬 Subscribe for weekly tactical tips → Get Weekly Tactical CX and Support Ops Tips

🔍 Follow Amanda Drews on LinkedIn for more insights → Amanda Drews on LinkedIn

🎙 Keep listening → More Episodes of Live Chat with Jen Weaver

🗣 Follow Jen for more CX conversations → Jen Weaver on LinkedIn

🤖 Sponsored by Supportman: https://supportman.io


Episode Time Stamps:

00:00 – Why QA Shouldn't Catch Mistakes

01:32 – Amanda’s Weekly QA and CSAT Ritual

04:15 – QA Is Not Stress Relief

07:48 – Audit Less Than 10% of Tickets

09:55 – QA Insights for Marketing Wins

13:12 – Training Doesn’t Stop After Onboarding

16:44 – How QA Builds Team Safety

20:08 – Weighted Scorecards, Not Gotchas

23:31 – Share QA Gold in Slack Channels

Speaker 1:

Hey, I'm not here to catch your mistakes. I'm trying to get customers to stop yelling at you. My only goal is for you to have as smooth of a day as possible, with as few customers who are angry and stressed out. And this is how it gets done, because all of these things, all of these errors, you're not treating them as, well, the agent made a mistake. It's, how do I set up systems that make it impossible for this to happen?

Speaker 3:

Hey friends, welcome to Live Chat with Jen Weaver. Today I'm chatting with Mandy Drews, an absolute expert in QA and a support leader who treats quality assurance less like a compliance drill and more like a calming force for anxious teams. She argues that QA on more than 10% of tickets is really just a signal, not a solution. Instead, she focuses on amplifying the QA wins, what's discovered during your QA process, to many different teams at your company. She walks us through her weighted, point-based rubric that dings a stray typo lightly but flags systemic errors loudly. Even better, she shows how a lean QA loop can surface trends, like sudden spikes in certain types of questions (specifically, menopause and sleep connections; we'll get into that), before any dashboard knows how to track them.

Speaker 3:

So grab your headphones. Let's learn how less QA can drive more insight, more safety and more trust on your team. Before we get started, though, our QA tool, Supportman, is what makes this podcast possible, so if you're listening to this podcast, head over to the YouTube link in the show notes to get a glimpse. Supportman sends real-time QA from Intercom to Slack, with daily threads, weekly charts and done-for-you AI-powered conversation evaluations. It makes it so much easier to QA Intercom conversations right where your team is already spending their day: in Slack. All right, on to today's episode. We're here with Amanda Drews, who is close to my heart, a QA expert, and I've been so inspired by Amanda's work. We're doing a feature, not really new anymore, called a week in the life, and I love that you can give us a perspective on a week in the life of a support leader in your previous role.

Speaker 1:

Yeah, sure. So at my last role I was the head of support for a consumer tech product, a wellness device. It was definitely pretty chaotic. I think anybody who works at a startup knows just how different week to week can go. But at the same time, you know, there were some things that were pretty regular.

Speaker 1:

And in a normal week, Monday was always kind of dedicated to what went wrong over the weekend and getting caught up. Tuesdays and Wednesdays were typically the days where I was meeting with other departments, giving them updates about what we had found in the previous week and hearing their questions for the next week, and also meeting with my team to just touch base and make sure that everybody knew what was coming down the pipeline. And then Thursday was always my QA and CSAT overview day, and I would meet with my quality assurance specialist, I would talk to all of my supervisors, and we would all sit down and say, okay, how is QA going, how is CSAT going, what are we seeing in social media, and what questions did the other departments have that maybe we can focus on answering in the next week?

Speaker 3:

I know we jumped the gun a little bit with the week in the life, but for folks who don't know you, could you share a little bit about your previous role? You were the senior manager and head of support, and I think I remember you telling me you grew the team quite a bit during your time.

Speaker 1:

Yeah, so at my most recent role, we had a crazy backlog of tickets. We were building this department and also kind of at the same time, finding out what even were the technical issues that our customers were running into. What were the issues that we were running into with e-commerce? You know, this was June of 2020. So USPS was losing like half of our international orders. What do we do about that?

Speaker 3:

So as you grew that team, you also worked on QA quite a bit and became pretty opinionated. What do you think the most important part of a QA program is?

Speaker 1:

My impression of QA, from conversations with other people, is that a lot of it is really just stress soothing, is what I call it. People are anxious. It's an anxiety tax.

Speaker 3:

An anxiety tax. I love that.

Speaker 1:

Yeah, and it's not necessarily about improving training or improving our messaging or anything like that. It's really just a case of, people are worried that there's something going wrong in the background and they want to soothe that anxiety, to make sure there's nothing going wrong. But they're leaving it at that, because the only question that they're trying to answer is: is there something going wrong in the background?

Speaker 3:

Which they're anxious about, and so it's like, if I just give myself the illusion that I'm doing due diligence here...

Speaker 1:

Yeah, and then there's also not really a set metric or understanding of, well, what does it mean if things are going wrong? What's an acceptable error rate? They don't go into QA trying to think about that or trying to answer that question, and so it never gets answered.

Speaker 3:

You have to know what questions you want to answer as you go into it. Exactly, and they have to be pretty specific. Otherwise, again, you're not going to end up answering them.

Speaker 1:

So when I go into QA, a lot of the things that I'm thinking about are, you know, how good is my training? This new hire who is getting more QA than the rest, are they really internalizing and understanding the training, or do I need to make a change? Do my agents understand that feature? Did I fully, adequately prep them for this...

Speaker 3:

big feature launch or no? And if you're asking that question in QA, it's kind of a last catch of whether that worked, but really you should be answering that question in training. Exactly, exactly.

Speaker 1:

And so that's kind of that last calibration, to say, yes, I got it right. And then the other part of that is, you can't just say yes or no; you have to say, okay, if the answer is no, what do I do next time, so that we are not catching it after we have already made this mistake?

Speaker 3:

Right, yeah, if you just go, well, we caught this and the specialist just needs to do better, then that doesn't really solve the systemic problem. Yeah, exactly.

Speaker 1:

And, additionally, you know, one of the things that I've been seeing a lot is companies talking about like being customer obsessed, and that's something that gets thrown around a lot. But then I also have to ask, like, okay, well, how is your QA supporting that? Because there are plenty of opportunities in QA to not only be finding out, like, what is going wrong, but also what's going really well and making sure that one customer who has this amazing experience that's maybe not being captured in your data collection or not being captured in CSAT, is still being distributed to the rest of the team, to marketing, to product, to give a better understanding of, like, what makes a truly exceptional experience for a customer.

Speaker 3:

Yeah, and a lot of teams are doing a small number of QA conversations, a small percentage of their total conversations, and even if you do a large percentage, you're not guaranteed to catch every issue.

Speaker 1:

Yeah, exactly, I definitely have the opinion that QA should be like 10% or less of your tickets.

Speaker 1:

It should be really like a spot check confirming anything new is working well. It should be a little bit of a check to make sure that old ideas and old concepts haven't gotten rusty. And then, you know, that last quarter or so of QA should be like, okay, what's working well? What happened in these really good CSATs? What happened in, hopefully... I mean, I love when my agents flag conversations that say, hey, this went really well, like I want to brag about it, put it in a brag box. And that's when support QA really has implications for every team.
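As a rough illustration of that allocation, keeping QA to 10% or less of tickets and splitting it between new launches and new hires, older concepts that might be getting rusty, and a slice reserved for what went well, here is a minimal Python sketch. The bucket names, split, and 10% cap are assumptions for illustration, not a prescribed ratio.

import random

# Minimal sketch: sample at most 10% of tickets, split across the three buckets
# described above. The proportions below are illustrative assumptions.
SAMPLE_CAP = 0.10
BUCKET_SPLIT = {"new_launches_and_hires": 0.50, "older_concepts": 0.25, "went_well": 0.25}

def pick_qa_sample(tickets_by_bucket):
    """tickets_by_bucket maps a bucket name to a list of ticket IDs."""
    total = sum(len(ids) for ids in tickets_by_bucket.values())
    budget = int(total * SAMPLE_CAP)  # hard cap: never review more than 10%
    sample = []
    for bucket, share in BUCKET_SPLIT.items():
        pool = tickets_by_bucket.get(bucket, [])
        k = min(len(pool), int(budget * share))
        sample.extend(random.sample(pool, k))
    return sample

# Example: 200 tickets total means at most 20 reviews this week.
tickets = {
    "new_launches_and_hires": list(range(100)),
    "older_concepts": list(range(100, 160)),
    "went_well": list(range(160, 200)),
}
print(len(pick_qa_sample(tickets)))  # 20 or fewer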

Speaker 3:

Can you break that down? Like, what are some of the teams that really should be utilizing QA data?

Speaker 1:

If marketing is doing A/B testing, they know, based on click rates, based on open rates, which of their A/B worked better, but they don't necessarily know why, and that's data that support can provide. They can say, this is what resonated. But there will be times where there are concepts or things that just really resonate with customers, where either it's not easy to collect automatically in any kind of sidebar, or we don't know that we should be collecting it. And that's where QA comes in. We were, you know, working for this wellness company.

Speaker 1:

A big concept was sleep, so we did have some kind of general data collection happening around, is this helping people sleep, yes or no. But it was QA that came to us and said, hey, I'm seeing a lot more questions around sleep for women in menopause. And why would we collect that data? We wouldn't know to collect that data. But QA was the one who said, I'm looking at all of these general questions, all of these general tickets, and here's a trend that I'm seeing. And that means I can go to marketing and I can say, hey, now that I'm looking for this, menopause is being mentioned in like 10% of our tickets around sleep, just doing a word search for menopause. And then marketing can turn around and say, okay, let's have a webinar about this, let's write some articles about this. And we ended up having like a 500-person webinar talking exclusively about how to manage stress and sleep for women in menopause.
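The check Amanda describes, noticing that a term like "menopause" shows up in roughly 10% of sleep-related tickets, can start as a simple word search over exported ticket text. Here is a minimal Python sketch of that idea; the ticket data and keyword lists are hypothetical placeholders, not her actual tooling.

# Minimal sketch: how often does a keyword appear inside tickets about one topic?
# Ticket text, topic terms, and keyword are hypothetical placeholders.

def keyword_trend(tickets, topic_terms, keyword):
    """Share of topic-related tickets that also mention the keyword."""
    topic_tickets = [t for t in tickets if any(term in t.lower() for term in topic_terms)]
    if not topic_tickets:
        return 0.0
    hits = sum(1 for t in topic_tickets if keyword in t.lower())
    return hits / len(topic_tickets)

tickets = [
    "Can't sleep since starting menopause, does the device help?",
    "Sleep tracking seems off after the last update.",
    "Where is my replacement charger?",
]
# With real ticket volumes, a value near 0.10 would match the ~10% trend Amanda mentions.
print(keyword_trend(tickets, topic_terms=["sleep"], keyword="menopause"))

Run weekly against a ticket export, something this simple is enough to hand marketing a concrete number before any dashboard exists for it.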

Speaker 3:

That blows my mind, because what you've done there is, marketing, maybe that's not even your ICP, right, that age group of women particularly, but marketing has learned from customer support interactions what actual customers are really interested in, which means potential customers would be interested in that, and that's something you can't really get from anywhere else.

Speaker 1:

Yeah, exactly, exactly. And they weren't our ICP. They're not even our main, I guess, purchasing demographic, but they are a main demographic of people reaching out to support. And so, you know, here's this whole segment of our market that was not being utilized and whose questions were not being answered, needs not being met. And yeah, it was great, it was super interesting too.

Speaker 3:

That's a great example. I know we also talked about the support training team a little bit and how they can benefit from QA. I think a lot of times people go from training to graduating their onboarding, and then that data is just not returned to the training team. But what kind of a program would you recommend for getting that information back to the training team to iterate?

Speaker 1:

Yeah, so my trainer was part of the QA meetings. You know, it's not just about experience or practicality, it's also getting that data and getting that feedback. And training doesn't ever stop. It's really important to make sure that when you're going through your training (hopefully your training is good), you're looking at example tickets, you're looking at example questions. That doesn't need to stop when training ends.

Speaker 1:

It's really helpful for agents to see how other agents are answering those questions, how they're handling de-escalation, how they're handling issues where they don't know how to answer. And when you set up this rolling system where your trainer gets to see how these new hires are doing in QA and CSAT and roll that into training, you can start doing interesting things, like these listening sessions. Every other week we had a CS-only listening session, and that was for calls that went really well. And again, QA would be supplying very specific reasons for what exactly went really well and what exactly this agent picked up on that made that experience better. And it also gave agents an opportunity; you know, we set up bonuses. We're like, hey, did you have a call that went a little wonky and you don't know why? We'll give you a bonus if you share it with the team and describe what you're not sure about.

Speaker 3:

That's great. I love that. It incentivizes something that is usually disincentivized, which is being vulnerable in public about something I'm not sure I did well.

Speaker 1:

Exactly. And you know, I always say: low stakes, high quality. Because when you share things like that, when you're like, this went wrong, and the stakes are really low, then you can bring the quality of everybody up. One of the things I was always trying to tell my agents is, we don't all need to have the same bad phone call, we don't all need to make the same mistake. One person could do it once, and we could all learn from that, and that would be fine.

Speaker 1:

And so we had these listening sessions, and even for the ones where the conversations went well, you open the floor and you say, hey, have you had a call that went like this? What worked well for you? This worked really well in this situation; when would it not work? What are other times? And then once a month we also did whole-company listening sessions. We gave bonuses for those too, because the agents were horrified. They were like, what are you talking about? I'm going to listen to my call? And we're like, this is a call where we think you did so great, though. We don't care.

Speaker 3:

We hate this. Yeah, I totally understand that. But on the other hand, it's the customer information that the rest of the company really needs. So often in support, we have this real closeness to the customer, and it's hard to get that across to other teams.

Speaker 1:

Yeah, and there's certainly something about, you know, you can tell people, oh, this problem is a real pain point for us, but then if you hear a customer on the phone with their voice breaking because they are so stressed out, it's a different experience. And when you see, or when you hear, an agent picking up on these clues and picking up on these undertones, it also does a lot for really hammering home the level of skill, experience and professionalism that customer support agents have to bring. It's often considered an entry-level job, and it can be really hard to convince other departments that this is an exceptionally skilled role and a highly technical role. So having that opportunity for these agents to really show off the level of things that they are picking up on, things that other departments don't even hear on that first listen to the call, I think that's also just a fabulous opportunity for support to really show its value.

Speaker 3:

Yeah, and we think of it as just another day. But often other teams are blown away by the ability we have to deal with issues. It's a complex job. So, with QA and data collection, are there interesting trends or feedback that you want to kind of talk through?

Speaker 1:

I think that ops, you know, that kind of system ops, runs into the same issue as QA: okay, but what are you doing with that data? Are you just collecting it because you know it's there and you feel like you should be collecting it, and you have all this data around your customer to feel good about? But what are you doing with it? Are you customer obsessed, or are you customer good enough?

Speaker 3:

A lot of teams would be grateful to shoot for customer good enough, unfortunately. But a lot of teams talk about being customer obsessed. QA is not just the data, and it's not even just those projects where you prove value to marketing, for example. It's also a culture tool. So when you have a culture of ongoing learning as a team, instead of just coaching mistakes, how does QA support that? How can you structure a QA program so that it does that?

Speaker 1:

One of the other pieces of commentary that I've seen around QA, and around QA and AI in particular, is that using AI means QA is no longer a popularity contest, and that immediately put my hackles up, because I was like, absolutely not. You have learned something very valuable there, which is that you have a department problem, not a QA or a tool problem, because certainly QA should not be a popularity contest. If it is, I want to know; I don't want AI to fix it. It's the idea of, well, do you need to fix QA, or do you need to fix your department or your customer journey as a whole? These are very different things, you know, very band-aid, treat-the-symptoms versus treat-the-root-cause. And QA can be this system that is set up as support, particularly if you do go into it with the assumption that your agents are professionals, that they are the experts in their field, and your QA is there not to catch their mistakes but to align their expertise with your training, and to make sure that when things are going particularly well, when an agent does go a little bit off script and a customer loves it, everybody finds that out and it's celebrated.

Speaker 1:

I mean, like I said, when there is something behind these hard numbers and this data that really points to the humanity and the soft skills that come in with support, that are so valuable, that's something that QA can be doing. And then it turns into this much more open and supportive culture where agents are less concerned about making errors. They're a lot more open when the idea is, hey, I'm not here to catch your mistakes, I'm trying to get customers to stop yelling at you. My only goal is for you to have as smooth of a day as possible, with as few customers who are angry and stressed out. And this is how it gets done, because all of these things, all of these errors, you're not treating them as, well, the agent made a mistake. It's, how do I set up systems that make it impossible for this to happen?

Speaker 3:

So QA really needs to ally itself with the customer support reps, so that it's on their side, not here to nitpick at them. I like what you've shared with me: you have a rubric, a beautiful spreadsheet, that is amazing; I was just like, okay, can I have that? But one thing about it is, sometimes what can make QA feel adversarial is if I make one typo, then I get dinged for that, right? And on the reverse, sometimes it can miss serious errors that happen only one time but were about billing or were really difficult. So what's your solution for keeping QA on our side, right, it's here for me as a specialist, but making sure it finds the right problems and not just random things?

Speaker 1:

So I have looked at some QA tools and my process is certainly that of a startup with no money, so we had Google Docs and that was what we could afford when we were first building this QA template.

Speaker 3:

I think a lot of teams are in that position. For sure, QA is scrappy more often than not for small teams, I think.

Speaker 1:

Yes, we built this template. Instead of, you know, starting at 100% and ticking down, what we did was this weighted QA with points that added up, and we made it pretty granular. So there are things that, like you said, a typo, I don't care about a typo. One typo is not a big deal unless it's coming up in every single ticket you QA. You know, like I said, some typos are a one-point error; it's not a big deal unless it's trending across a lot of tickets. Other things, like accidentally leaving in a macro insert that you forgot to edit out, maybe you weight that more as a four or five. I have done that, and it's so painful.

Speaker 1:

Who among us has not done this? I always tell my agents, if you haven't done it, don't worry, you will soon. It's not a big deal unless you're doing it all the time. And then there might also be things that you would consider either auto-fails or much more significant errors that you want to weight a little heavier. And it also gives you the option to decide: okay, what's your hard cutoff of how many errors can occur in a ticket before you get properly concerned? And if you're also tracking, are they happening multiple times over the course of the month? Are they happening in any particular type of ticket? Then you can get this really good understanding of exactly how your team is functioning.
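As a rough sketch of that weighted, points-based approach (not Amanda's actual spreadsheet): each error type carries a weight, minor slips add little, heavier mistakes add more, and a ticket only gets flagged once it crosses a cutoff or hits an auto-fail. The weights, error names, and cutoff below are assumptions for illustration.

# Minimal sketch of a weighted QA scorecard. All weights, names, and the cutoff
# are illustrative assumptions, not a recommended rubric.
ERROR_WEIGHTS = {
    "typo": 1,                 # minor, unless it trends across many tickets
    "unedited_macro": 4,       # leftover macro text the agent forgot to edit
    "wrong_billing_info": 10,  # serious: also treated as an auto-fail below
}
AUTO_FAIL = {"wrong_billing_info"}
FLAG_THRESHOLD = 5             # total points before a ticket gets a closer look

def score_ticket(errors):
    """Return (total_points, flagged) for the error labels found on one ticket."""
    total = sum(ERROR_WEIGHTS.get(e, 0) for e in errors)
    flagged = total >= FLAG_THRESHOLD or any(e in AUTO_FAIL for e in errors)
    return total, flagged

print(score_ticket(["typo"]))                    # (1, False): one typo, no big deal
print(score_ticket(["typo", "unedited_macro"]))  # (5, True): worth reviewing
print(score_ticket(["wrong_billing_info"]))      # (10, True): auto-fail

Tracking those per-ticket totals by agent, by month, and by ticket type is what turns the same scorecard into the trend view she describes.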

Speaker 3:

For someone who is listening, who's on a team, who maybe does have a QA program, but it's really just scorecards and compliance. What would you say is one thing that they can do to really make QA have more impact than just we're testing out whether a rep is doing a good job or not?

Speaker 1:

I think that, particularly with the fact that we are all online and on Slack all the time, or whatever you use (Slack is my example), if you want to show value, don't wait to be asked. And I would say, don't necessarily put it in an email, but put it in a Slack message. You know, maybe you're part of the marketing general Slack channel or something like that; find those channels, if they're public, join them, and just say, hey, here is something that I found, here is something that might be useful for you. And if you are consistently doing that, if you are consistently paying attention to the questions that they're asking in their meetings, in their Slack channels, and you're able to come back the next week and say, our QA found this answer, even if they weren't asking you, right, you just kind of have to announce it.

Speaker 1:

And you know, that's one of the things that we did with these whole-company listening sessions: nobody asked for it, we just said, we think this would be useful for you. And sometimes, like our first meeting, I think one person from another department showed up. But we recorded it, we went through it, we posted a list of some of the findings from that call and said, here's what we found, here's some things that might be useful, here's the recording. And the next month, a lot more people showed up.

Speaker 1:

I love that. And you know, I can raise my hand and say I have fallen into that as well, and in the process of doing data analytics and QA, got humbled. But I think what I really wish they understood is that investment in QA, well-done QA, will save you so much time and money in the long run. You know, maybe this month it does not impact your profits, maybe this quarter it's net neutral, but as time goes on, if you really do want to be customer obsessed, you have to invest in QA, you have to invest in data analytics. It's necessary, it truly is, and, assuming you're doing it right, it will pay you back tenfold.

Speaker 3:

I can't thank you enough for being on the podcast. You're a delight to chat with.

Speaker 1:

Thank you, Jen. You're such a great interviewer. I will come back anytime you want me. Thanks.

Speaker 3:

I appreciate that. Thanks for listening. Before you head out, here's a quick recap of Mandy's proven QA playbook, because small, repeatable moves will help your team more than giant wishlist items. So first, you want to audit under 10% of your tickets. Keep reviews below a tenth of total tickets so QA stays a spotlight, not a bottleneck. This is what Mandy recommends for teams who are doing manual QA.

Speaker 3:

Two, aim at what's new: point a spotlight at fresh launches or training rollouts so you can make sure you catch problems as they crop up. Three, weight what matters: a granular, points-based rubric lets a missed macro or an accuracy problem outweigh something like a stray typo. Four, automate the basics: dashboards can track first response time and other easy metrics, so humans can hunt for other insights. Five, broadcast what's gold: push findings back to product, marketing and training so that wins and fixes can scale. Six, recalibrate your QA often: when your patterns shift (new recurring typos, new stressors for customers), tweak your systems before tickets start to pile up. If you put just those six habits into action, you'll trade catching mistakes for empowering your specialists to solve problems ahead of time. I hope that's super helpful. I'll see you in the next Live Chat episode.
