Organizations hire based on hope more often than they admit. A polished résumé, a confident interview, a relevant credential — they all ease uncertainty, but none of them reliably predicts who will perform consistently over time. Scheig’s assessment methodology replaces that guesswork with evidence. Built on comprehensive job analysis, SME anchoring, and rigorous psychometrics, it delivers defensible, repeatable, and highly accurate judgments about who is likely to succeed on the job. Here’s why that claim is not just marketing language — it’s the logical outcome of how our assessments are developed and validated.
1) We Measure the Job, Not an Abstract Trait
Most conventional instruments assess general intelligence, personality traits, or isolated skills. While these measures are valuable, they are indirect: they describe the person in an abstract way and rely on inference to connect traits to behavior. Scheig reverses that model. Every assessment starts with a detailed inventory of actual job behaviors observed in recognized top performers. These behaviors are not hypothetical or abstract; they are the concrete actions, decisions, and routines that high performers use to achieve results. By measuring a candidate’s alignment with how the role is genuinely performed, we develop an assessment with direct content validity — the strongest type of validity for selection tools.
2) We Start with the Best People — The Right Content Anchor
Validity starts with content. Scheig’s method gathers hundreds of behavioral statements from subject-matter experts (SMEs) — the individuals whose results the organization most aims to replicate. This SME-driven inventory (often 300–500 items) captures the full complexity of the role, including knowledge, skills, abilities, willingness, and personal characteristics phrased as observable behaviors. This comprehensive approach prevents the common error of omission: you cannot validly measure what you never asked about. Because the items reflect the real-world practices of top performers, they are inherently job-relevant and legally defensible.
3) We Prioritize What Actually Predicts Performance — Statistically and Experientially
Gathering behaviors is necessary, but not enough. The key step is identifying which behaviors truly influence performance. Scheig’s panels combine expert judgment with proprietary statistical analysis to rank the behaviors by predictive importance. This combination — expert insights turned into measurable priorities — offers two benefits. First, it creates a concise, actionable Job Success Profile™: a straightforward, prioritized standard for hiring and development. Second, it prevents overemphasizing flashy but non-predictive traits. The result is an evidence-based blueprint: not a wish list of ideal traits, but the specific behaviors that, when present, are linked to better outcomes.
4) The Instrument Design Neutralizes Common Measurement Problems
Scheig’s SelectRight™ instruments are designed to address common threats to validity:
- Social desirability: The Self-Rating section pairs high and low behavior statements so applicants cannot simply select the socially desirable option. This design yields a more honest behavioral signal than simple self-report items.
- Job relevance: Critical Incidents (Section 3) are SME-sourced, multiple-choice scenarios that present actual tradeoffs and plausible distractors. Because the “correct” answers reflect the judgments of top performers, these items assess job-specific decision-making rather than abstract reasoning.
- Standardization: Anchored scoring rubrics and T-Score normalization ensure fair comparisons. By converting raw responses to a standardized distribution (T=50, SD=10), Scheig allows hiring teams to compare candidates on a common scale rather than raw, incomparable scores.
These design choices directly counter the common validity threats that often affect ad-hoc hiring tools.
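To make the T-score conversion mentioned above concrete: a raw score is first standardized against a norming group, then rescaled so the mean lands at 50 with a standard deviation of 10 (T = 50 + 10·z). The sketch below is illustrative only; the function name and the norming statistics are hypothetical placeholders, not values from an actual Scheig norming sample.

```python
# Illustrative sketch of T-score normalization (T = 50 + 10 * z).
# norm_mean and norm_sd are hypothetical placeholders for statistics
# that would come from a real norming sample.
def to_t_score(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    z = (raw_score - norm_mean) / norm_sd  # standardize against the norm group
    return 50 + 10 * z                     # rescale to mean 50, SD 10

# Two candidates with otherwise incomparable raw scores land on a common scale:
print(to_t_score(42, norm_mean=36, norm_sd=6))  # one SD above the mean -> 60.0
print(to_t_score(30, norm_mean=36, norm_sd=6))  # one SD below the mean -> 40.0
```

Because every score sits on the same distribution, a hiring team can read "T = 60" as "one standard deviation above the norm group" regardless of which instrument produced it.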
5) Empirical Validation and Reliability Are Built into the Process
Scheig’s methodology is not a one-time design; it is an iterative, evidence-based system. After items are created, assessments are piloted, item statistics are calculated, and psychometric properties (reliability, item discrimination, criterion-related validity) are analyzed and improved. The SelectRight™ assessments show strong test-retest reliability and high overall accuracy when instruments are carefully developed from thorough job analysis — evidence that the method works when applied rigorously. This empirical cycle — design, pilot, analyze, refine — drives ongoing validity.
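Two of the item statistics named above can be sketched briefly: internal-consistency reliability (Cronbach's alpha) and item discrimination (how strongly each item correlates with the rest of the test). The response matrix below is made-up sample data, not Scheig pilot data, and the code is a generic textbook computation rather than Scheig's proprietary analysis.

```python
# Illustrative pilot-analysis sketch: Cronbach's alpha and item discrimination.
# Rows are respondents, columns are items; all values are invented sample data.
from statistics import pvariance, mean

responses = [
    [3, 4, 3, 5],
    [2, 2, 3, 3],
    [4, 5, 4, 5],
    [1, 2, 2, 2],
    [3, 3, 4, 4],
]

k = len(responses[0])                     # number of items
totals = [sum(row) for row in responses]  # each respondent's total score

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
item_vars = [pvariance([row[i] for row in responses]) for i in range(k)]
alpha = (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))

def discrimination(item: int) -> float:
    """Correlation of one item with the total of the remaining items."""
    xs = [row[item] for row in responses]
    rest = [t - x for t, x in zip(totals, xs)]
    mx, mr = mean(xs), mean(rest)
    cov = sum((x - mx) * (r - mr) for x, r in zip(xs, rest))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sr = sum((r - mr) ** 2 for r in rest) ** 0.5
    return cov / (sx * sr)

print(round(alpha, 2))  # -> 0.95 for this sample data
print([round(discrimination(i), 2) for i in range(k)])
```

An item with a low or negative discrimination would be flagged for revision or removal in the next refinement pass; a low alpha would signal that the items are not measuring a coherent construct.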
6) The Profile-To-Practice Alignment Increases Predictability
A valid test is useful only if it influences decisions in ways that lead to desired outcomes. Scheig’s Job Success Profile™ is not just an academic concept; it becomes the standard for selection, structured interviews, and training. The same prioritized behaviors that define the assessment also shape behaviorally-anchored interview questions and post-hire development diagnostics. That alignment results in predictable outcomes: organizations hire for the key behaviors, then train for skills — a practical sequence that shortens time-to-productivity and reduces risk.
7) Legal Defensibility and Organizational Trust
Because the content is directly linked to job analysis and SME input, Scheig’s assessments withstand common legal criticisms: they assess job-relevant behaviors, are standardized, and depend on transparent scoring. The combination of content validity, pilot validation, and standardized reporting creates tools that are both effective and defensible in high-stakes situations. That legal and ethical rigor is as important as predictive accuracy: organizations must be able to explain and justify their selection process.
Bottom Line
Scheig’s assessment method isn’t credible because we claim it is; it’s credible because of its foundation: comprehensive SME-driven behavior inventories, proprietary and statistically supported prioritization, design features that reduce bias, empirical testing and refinement, and alignment from selection through development. The result is an evidence-based system that measures what matters and predicts who will perform reliably.
If your hiring still relies on polished résumés and subjective impressions, the predictive gap remains—and it can be costly. Scheig’s approach bridges that gap by revealing the job’s hidden blueprint as something measurable and actionable. When you focus on the behaviors that lead to outcomes, your hiring becomes less of a gamble and more of a dependable investment.
FAQs
What makes Scheig’s assessments different from traditional personality or skills tests?
Scheig evaluates job-specific behaviors instead of abstract traits. Every item is based on a detailed list of observable behaviors from top performers, giving the assessments direct content validity for the role. Rather than inferring job fit from general traits, Scheig focuses on concrete actions and decisions that predict job success.
How does Scheig ensure the assessment asks about the right things?
Scheig starts with a thorough job analysis and SME anchoring: subject-matter experts develop hundreds of behavioral statements (300–500 items) that capture the full complexity of the role. These behaviors are then ranked through expert judgment and proprietary statistical methods to create a focused Job Success Profile™ that emphasizes the behaviors most predictive of performance.
How does the instrument design reduce common measurement problems, such as faking or irrelevant content?
Design features directly address typical threats to validity: the Self-Rating section pairs high- and low-behavior statements to control for social desirability bias; Critical Incident multiple-choice scenarios evaluate job-specific decision-making, with distractors drawn from SME input; and standardized scoring (anchored rubrics and T-score normalization) ensures fair, comparable results across candidates.
Are Scheig’s assessments empirically validated and reliable?
Yes. Scheig follows an iterative empirical cycle—pilot testing, item-statistical analysis, and psychometric evaluation (reliability, item discrimination, criterion-related validity). The SelectRight™ instruments demonstrate strong test–retest reliability and high accuracy when developed from a thorough job analysis. Continuous refinement maintains and improves validity over time.
Will these assessments hold up under legal or fairness review?
Scheig ensures legal defensibility by connecting content directly to job analysis and SME input, standardizing administration and scoring, monitoring for adverse impact, and documenting validation efforts. Because the items are relevant to the job and the methodology is transparent, the assessments are both ethically and legally solid for high-stakes selection.