I need some advice.
I’m a talent coordinator at a private school system, and I had a bit of an embarrassing situation recently. I was writing a job description for a teaching role, and I listed different requirements like experience with classroom tech tools, state teaching certification, etc. At the time, they all felt equally important. But we ended up getting a bunch of applicants who were great with tech… and legally unqualified to teach.
The problem is… I’m not actually sure how to prioritize. I’ve always just thrown everything into one list: must-haves, nice-to-haves, soft skills - all mixed together. And now I’m seeing how confusing that can be for candidates. And since we began using Talantly, I’ve started to suspect that my approach is far from industry standard when it comes to prioritization.
So I’m curious: how do you decide what’s truly essential for a role versus what’s just a bonus? Do you sit down with hiring managers and sort it out? Do you go by what’s tripped you up in the past? Or is it more instinct?
It definitely stalled the hiring process more than I expected, so any advice would be hugely appreciated.
Oh, I totally get this - we had similar chaos when I first started organizing requirements. I've found that sitting down with the actual hiring manager and literally asking "if we could only screen for 3 things, what would they be?" helps cut through all the wishlist items to find the real deal-breakers.
The "only three things" question is solid - I've found it forces hiring managers to really think beyond their initial wish list. What's helped me lately is creating a clear hierarchy: legal/certification requirements first, then core competencies that directly impact day-one performance, and finally the developmental skills we can train for. The tricky part is getting everyone aligned on what actually constitutes "trainable" versus "must-have," especially when different stakeholders have varying tolerance for onboarding time.
This resonates so much with what we deal with in consulting staffing - though admittedly, our stakes are a bit different since we're matching consultants to specific client engagements rather than permanent hires.
The hierarchy approach mentioned above is spot-on, but I'd add that timing considerations are crucial too. In our world, we've learned to distinguish between "client-facing day one" requirements versus "project completion" requirements. For example, a client might need someone with advanced financial modeling skills for a six-month engagement, but if we have a strong analyst who's 80% there and can get up to speed in the first few weeks, that's often acceptable - especially if they bring other valuable skills to the table.
What's been game-changing for us is actually mapping requirements against project phases. We'll sit down with the engagement manager and literally walk through: "Week 1-2, what does this person absolutely need to do? Month 2-3, what becomes critical then?" It sounds tedious, but it prevents that kitchen-sink mentality where everything feels equally important.
The certification piece you mentioned is particularly tricky in consulting because client requirements can be... fluid. We've had situations where a client initially insisted on specific industry certifications, only to later prioritize problem-solving ability and communication skills when they realized those were harder to find. Now we always ask the follow-up: "If you had to choose between candidate A with perfect credentials but limited client presence, versus candidate B with 90% of the technical requirements but exceptional stakeholder management skills, which would you take?"
One thing that's helped reduce our screening time is being brutally honest about development timelines upfront. Instead of listing "project management experience preferred," we'll specify "ability to manage 3+ workstreams simultaneously within 30 days" - it gives candidates a clearer picture and helps us avoid the tech-savvy-but-unqualified situation you experienced.
The alignment challenge is real though. Getting partners, engagement managers, and clients all on the same page about what's truly non-negotiable versus what's aspirational remains one of our biggest ongoing challenges.
The phased mapping approach mentioned here is brilliant and something we've started implementing in manufacturing roles where technical skills can be developed over time but safety certifications are absolutely non-negotiable from day one. We've found that sitting down with production managers to walk through the first 90 days versus six-month expectations helps distinguish between "must have immediately" and "must develop quickly" - though I'll admit this process can be time-intensive when you're dealing with multiple hiring managers who each think their priorities are the most critical. The key insight for us has been realizing that some requirements we thought were essential were actually just reflecting our most recent bad hire, rather than genuine role needs.
Oh wow, I totally feel this! I had a similar wake-up call recently when I mixed certification requirements with "preferred experience" for an e-commerce role and ended up with candidates who had amazing marketing backgrounds but zero understanding of our specific compliance needs. What's been helping me is actually doing a quick "day one vs. day 90" exercise with hiring managers - like, what absolutely cannot be trained and what can we reasonably expect someone to pick up? The tricky part I'm still figuring out is when hiring managers insist everything is "critical" - I've started asking them to rank just the top three must-haves, which forces some real prioritization conversations.
This resonates so much with what we've been wrestling with in consulting. The "everything is critical" problem is real, and honestly, it's taken us a while to get better at this prioritization piece.
What's worked for us is creating what we call a "role impact matrix" - basically mapping requirements to actual business impact if someone doesn't have them. We sit down with practice leads and walk through scenarios like: "If we hire someone without this specific skill, what happens in month one? Month six?" It's eye-opening how often something that feels essential is actually just a convenience.
The day one vs. day 90 approach mentioned above is spot-on. We've started adding a third bucket though - "day 180" - because in consulting, there are skills that aren't immediately critical but become make-or-break as someone ramps up to full client responsibility. For example, advanced Excel modeling might not matter week one, but by month six, if they can't build a solid financial model, we have a problem.
One thing that's helped with the "everything is critical" pushback from hiring managers is asking them to think about their best performers. We'll literally pull up profiles of top consultants and ask: "Did Sarah have all these requirements when we hired her three years ago?" Usually the answer is no, and it opens up a conversation about what actually predicts success versus what we think we need.
The certification piece you mentioned is tricky because it's binary - you either have it or you don't. We've started being very explicit about legal/compliance requirements versus everything else. Those go in a separate "non-negotiable" category, and then we can have more nuanced conversations about the rest.
Since you mentioned using Talantly, I'm curious if you're finding their requirement categorization helpful? We've been piloting it for a few months, and while it's definitely pushed us to be more structured about this whole process, there are still some customization limits that make it challenging when we have very specific technical requirements for certain client projects. But it has forced better conversations with our practice leads about what we're actually looking for.
The other reality check we do now is looking at our pipeline. If we're in a tight market for certain skills, we'll deliberately relax some "nice to have" requirements to expand the candidate pool, then invest more in onboarding.
That certification trap is so familiar - we've had similar issues in telecom where we'd get candidates with all the technical certifications but couldn't actually troubleshoot a network outage under pressure. What's helped us is creating a simple three-tier system: legal/regulatory requirements (non-negotiable), core competencies that directly impact day-one performance, and everything else as development opportunities. The key breakthrough was when we started asking hiring managers "what would actually cause someone to fail in their first 90 days?" - it usually cuts through the wish list pretty quickly.
That 90-day failure question is brilliant - I've started using a similar approach where we map requirements directly to early performance indicators rather than theoretical ideals. The challenge I've found is that hiring managers often struggle to distinguish between "would be great to have" and "will actually make or break success," especially when they're thinking about their dream candidate rather than minimum viable performance. What's helped us scale this across teams is having talent partners facilitate those conversations with hiring managers, because they can push back on the wish list items more objectively than HR alone. The certification versus capability disconnect you mentioned is exactly why we've shifted toward competency-based frameworks - it forces much clearer thinking about what actually predicts success.
Oh, this resonates so much! We've definitely been through similar growing pains, especially in healthcare tech where you have this weird mix of clinical knowledge, technical skills, and regulatory requirements all competing for "must-have" status. What's really helped us is doing a quick failure analysis exercise - we literally look at our last few bad hires and ask "what missing requirement actually caused problems in their first 90 days?" versus "what would have been nice but didn't matter." It's eye-opening how often the thing that seemed critical in the job description had zero impact on actual performance. The other game-changer has been getting our hiring managers to rank requirements by "how quickly we'd know if someone was struggling without this" - it forces much more honest conversations about what's truly day-one essential versus what someone could learn or work around.
That failure analysis approach is brilliant - we started doing something similar after realizing our "required" programming language lists were eliminating great candidates who could pick up new languages in weeks. The ranking exercise sounds like it would force some tough but necessary conversations with our engineering managers who love to kitchen-sink their wish lists.
Yeah, getting hiring managers to actually rank their wish list is like pulling teeth sometimes - they want everything to be "critical" until you show them the pile of unqualified resumes. We've had good luck doing a quick post-mortem after each bad hire to figure out what we should've screened harder for versus what was just nice-to-have fluff.
Oh wow, I feel this so hard! I'm still pretty new to this whole thing (about 5 weeks with Talantly actually), but I've already had a couple of those "oops, that's not what I meant to prioritize" moments. What's been helping me is literally creating three buckets before I even talk to the hiring manager: deal-breakers (like your teaching certification), strong preferences, and nice bonuses. Then I make them physically sort their wishlist into those buckets - it's amazing how quickly they realize half their "must-haves" are actually just preferences when forced to choose! The post-mortem idea is brilliant too - I started keeping a little notes doc after each hire about what actually mattered versus what we thought would matter. It's been eye-opening how often we get hung up on the wrong things. Still learning though - curious what others do when hiring managers push back on the bucket exercise?
The bucket exercise is such a smart approach! I've been wrestling with similar prioritization challenges, and what's helped me is actually starting with the opposite question - "what would make someone completely fail in this role?" That usually surfaces the true deal-breakers pretty quickly. Then I work backwards from there. I've also started doing quick 15-minute alignment calls with hiring managers before writing anything, just to gut-check their assumptions. You'd be surprised how often they'll say something is "absolutely critical" until you ask them to explain why, and suddenly it becomes "well, it would be nice." The data piece you mentioned resonates too - I've been tracking which requirements actually correlate with successful hires versus which ones just sound good on paper. Still figuring out the best way to present this back to stakeholders without making them feel like their input isn't valued, but the patterns are definitely eye-opening.
This hits so close to home! I've been on both sides of this - as someone writing requirements and as someone reviewing them when we're staffing client projects. The teaching certification example is perfect because it shows how we can get tunnel vision on what feels important in the moment.
What's really helped me is borrowing from how we scope client engagements. I started treating job requirements like project deliverables - there are the absolute must-haves that define success, the nice-to-haves that add value, and then the "wouldn't it be great if" items that honestly just clutter things up. The key is being brutally honest about which bucket each requirement actually belongs in.
One thing I've learned the hard way is that hiring managers often conflate "this would make the job easier" with "this is essential for the job." Like, yes, it would be fantastic if every consultant we hire already knows our specific methodology, but what we actually need is someone who can learn methodologies quickly. Those are very different things, but they often get presented the same way in job descriptions.
I've started doing what I call "failure mode analysis" with hiring managers - basically walking through what would actually cause someone to bomb in the role versus what would just mean a steeper learning curve. It's amazing how that shifts the conversation. Suddenly that "5+ years experience with X software" becomes "comfortable learning new software platforms" because the real issue isn't the specific tool, it's adaptability.
The data tracking piece someone mentioned is spot on too. Since we've been using Talantly, I can actually see patterns in what requirements correlate with successful placements versus what just sounds good on paper. Turns out some of our "must have" criteria were actually negatively correlated with performance - we were screening out people who would have been great fits.
The trickiest part is managing stakeholder expectations when you start pushing back on their wish lists. I've found it helps to frame it as "let's optimize for the candidates we actually want to attract" rather than "your requirements are wrong." Same outcome, but it feels more collaborative.