The 30% Problem
You’ve seen it before: two candidates arrive with the same license, the same certificates, and similar résumés. One becomes a steady contributor; the other becomes a recurring problem. That gap is the 30% problem: roughly 30% of a job’s predictable success is due to technical knowledge or skill; the remaining 70% is made up of human-factor behaviors — the small, repeatable acts that determine whether competence becomes consistent performance. Scheig’s work is built on this insight: credentials confirm who can; behavior reveals who should.
Our practical mantra says it plainly:
Hire for behaviors. Train for skills.
This post explains why that 70% matters and offers a clear, practitioner-focused account of how Scheig converts top-performer insights into behavior-based hiring tools — from SME job analysis to the Applicant Score Report.
Why the 70% is where the outcome lives
Technical skill is necessary, but it’s rarely sufficient. The behaviors that separate reliable performers from risky hires are often small and mundane: following checklists, documenting work, escalating appropriately, persistently following through on routine tasks, and learning from feedback. These behaviors prevent rework, protect safety, and sustain quality. Scheig’s job analyses across sectors repeatedly show that human-factor behaviors account for most of the variance in productivity, safety, and retention — hence the 70% emphasis.
That’s why hiring systems that stop at credentials, polished résumés, or unstructured interviews leave organizations exposed. Those signals are easy to read; they are not the most predictive.
A high-level view of how Scheig builds behavior-based assessments
We’ll keep this a “fly-over” of the design path: the steps are straightforward, but the rigor is what makes them predictive.
- Start with the proven performers
Every instrument begins with the people who actually produce the outcomes the business wants. Scheig works with clients to identify and validate the recognized top performers — the subject-matter experts (SMEs) whose day-to-day work becomes the content base for the assessment. These are not just long-tenured staff: they are the people whose outputs the organization wants to replicate.
- Capture the work in behavioral detail
Through interviews and facilitated workshops with SMEs, Scheig collects a comprehensive inventory of job behaviors—often hundreds of discrete statements that describe how the job is actually performed. These behaviors encompass knowledge, skills, abilities, willingness, and personal characteristics, all expressed through observable actions. This is the raw material: the job described in behavioral language.
- Rank and prioritize what truly predicts success
Scheig applies a proprietary methodology—a mix of guided SME feedback and psychometric/statistical analysis—to rank-order those behaviors from most important to least important for job success. The result is a prioritized list of top high-performance behaviors: the Job Success Profile™. This profile is not a quota of items; it’s the distilled, prioritized signal that powers every downstream deliverable.
- Measure those behaviors reliably
From the Job Success Profile™, Scheig builds the SelectRight™ assessment, which typically contains up to three parts:
- Interest & Willingness Checklist — a job preview and communication tool where applicants indicate experience and willingness to perform listed behaviors; it flags potential fit issues but is not scored into the primary T-Score.
- Self-Rating Checklist — paired high/low behavior statements controlled for social desirability; both paired statements sound equally good. Applicants select statements that best describe them, producing a score that reflects behavioral match to the Job Success Profile™.
- Critical Incidents — multiple-choice items drawn from SME input that present real, on-the-job scenarios and ask applicants to choose how a superior performer would respond. These MCQs are written to expose judgment, escalation, safety focus, and other job-specific behaviors; they are scored against SME benchmarks and folded into the applicant’s behavioral score.
- Standardize, report, and decide
Scores are standardized to the T-score scale (mean 50, SD 10). The Applicant Score Report (ASR) summarizes each candidate’s T-Score, flags unwillingness or experience gaps, and presents behavioral-match information for hiring teams. Paired with a behaviorally anchored interview guide, the ASR becomes the primary decision-support tool.
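Scheig’s rank-ordering methodology is proprietary, so the sketch below should be read as an illustration of the general idea only: it aggregates hypothetical SME orderings with a simple mean-rank (Borda-style) method. The behavior names and the aggregation rule are assumptions, not Scheig’s actual procedure.

```python
# Illustrative only: Scheig's ranking methodology is proprietary.
# This sketch aggregates hypothetical SME rankings by mean rank
# (Borda-style) to produce a prioritized behavior list.

def prioritize_behaviors(sme_rankings):
    """sme_rankings: list of lists, each one SME's ordering of behavior
    names from most to least important. Returns behaviors sorted by
    average rank position (lower = more important)."""
    positions = {}
    for ranking in sme_rankings:
        for pos, behavior in enumerate(ranking):
            positions.setdefault(behavior, []).append(pos)
    mean_rank = {b: sum(p) / len(p) for b, p in positions.items()}
    return sorted(mean_rank, key=mean_rank.get)

smes = [
    ["documents work", "runs safety checks", "escalates issues"],
    ["runs safety checks", "documents work", "escalates issues"],
    ["runs safety checks", "escalates issues", "documents work"],
]
print(prioritize_behaviors(smes))
# → ['runs safety checks', 'documents work', 'escalates issues']
```

In practice the aggregation would also involve psychometric screening of the items, but the output shape is the same: a prioritized list that becomes the Job Success Profile™.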
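To make the scoring ideas above concrete, here is a hedged sketch of how paired-statement and critical-incident items could be credited against SME benchmarks. The item formats, option keys, and one-point-per-match rule are illustrative assumptions, not Scheig’s actual scoring implementation.

```python
# Hypothetical illustration: one point whenever the applicant's choice
# matches the SME-benchmarked high-performance option.

def score_self_rating(choices, high_keys):
    """choices: applicant's selected statement per item pair ('A' or 'B').
    high_keys: the option SMEs identified as the high-performance
    behavior for each pair."""
    return sum(c == k for c, k in zip(choices, high_keys))

def score_critical_incidents(answers, sme_benchmarks):
    """One point for each scenario answered the way SMEs say a
    superior performer would respond."""
    return sum(a == b for a, b in zip(answers, sme_benchmarks))

raw = score_self_rating(["A", "B", "B"], ["A", "A", "B"]) \
    + score_critical_incidents(["C", "D"], ["C", "D"])
print(raw)  # → 4 (2 self-rating matches + 2 incident matches)
```

Note that the Interest & Willingness Checklist is deliberately absent here: per the description above, it informs the fit conversation but is not folded into the primary score.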
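The T-score standardization above can be written as T = 50 + 10z, where z is the applicant’s standardized deviation from a norm group. A minimal sketch (the norm-group values below are invented for illustration):

```python
# Minimal sketch of T-score standardization (mean 50, SD 10).
# The norm-group raw scores are made up for illustration.
import statistics

def to_t_score(raw, norm_group):
    """Z-transform a raw score against the norm group, then rescale
    so the mean maps to 50 and one SD maps to 10 points."""
    mu = statistics.mean(norm_group)
    sigma = statistics.stdev(norm_group)  # sample SD of the norm group
    return 50 + 10 * (raw - mu) / sigma

norms = [22, 25, 27, 30, 26, 24, 28, 29, 23, 26]
print(round(to_t_score(31, norms)))  # → 69
```

The appeal of the T scale is readability: a hiring manager can see at a glance that 50 is typical of the norm group and that each 10 points is one standard deviation, without interpreting raw item counts.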
Why this approach dissolves the mystery of success
Two features make Scheig’s approach decisive.
First, job specificity. Items and scenarios come from the people who actually do the work. That content validity (the assessment measures the person for this job, not the person in the abstract) materially improves prediction of on-the-job outcomes. Scheig emphasizes that measuring behavior, not general personality, is the more predictive path.
Second, end-to-end alignment. The ranked behaviors power not only the assessment but also the structured interview guides, post-hire evaluations, and training modules (the AdvanceRight™ development suite). That coherence means that hiring, onboarding, and development use the same language and targets: you hire for the behaviors you value, then train for the technical gaps.
Practical examples and returns
Consider a licensed electrician. Two candidates may carry identical certifications. The behaviors that separate the dependable electrician from the problematic one are simple and observable: consistently running safety checks, documenting testing, calling out questionable conditions immediately, and replacing suspect parts rather than deferring them. An assessment that measures those behaviors reduces callbacks, safety incidents, and insurance exposure. Across roles (teachers, clinicians, drivers, attorneys) the pattern repeats: the behavior, not the credential alone, explains the performance delta.
When organizations shift resources from guessing (polished interviews and résumé reading) to measuring behavior (SelectRight™, the ASR, and structured interviews), the returns arrive quickly, and Scheig’s validation work documents strong reliability and criterion validity when this rigorous, SME-anchored pathway is followed.
What hiring teams get — and why it matters
When organizations shift from credential-first selection to behavior-first selection, outcomes change quickly. You stop betting on charm, rehearsal, and titles, and start hiring people who habitually do the small things that prevent problems. The practical returns are clearer: faster time-to-productivity, fewer safety incidents, less managerial firefighting, and lower turnover. Scheig’s client materials indicate strong reliability and criterion validity when assessments are based on rigorous job analysis and SME anchoring.
A final, simple test you can use tomorrow
If you’re hiring for a technical role, pause before you give the résumé too much power. Ask the hiring team: “Which specific behaviors must this person do every day to make the job work in month one and month six?” Make that list your truth test. Use it to shape interview questions and the short assessment items that should be part of every finalist’s process.
The 30% problem is a practical lesson: technical skill opens the door; behavior keeps the door open, reliably, every day. Measure those behaviors, and much of the mystery of good hiring disappears.
Hire for behaviors. Train for skills.