HRM Chapter 5: Personnel Selection and Assessment
Summary
This document provides an overview of personnel selection and assessment methods. The chapter covers job analysis, selection tools, and performance measures, drawing on insights from experts in the field. It details the process, from job examination to decision making.
Full Transcript
5. Personnel selection and assessment

5.1. What is this chapter about?
The personnel selection process involves choosing the most qualified applicant for a job, and it extends beyond hiring to decisions about employee transfer, rotation, and promotion. This traditional selection paradigm comprises three key stages:
1. Job Examination and KSAO Hypotheses: In the first stage, the job is analyzed to identify tasks and responsibilities. Hypotheses are then formulated about the Knowledge, Skills, Abilities, and Other characteristics (KSAOs) required for effective job performance. This aligns with the concepts of job analysis and competency modeling covered in Chapter 3.
2. Selection Tool Development or Choice: The second stage involves selecting or creating tools (e.g., cognitive ability tests, personality inventories) to measure the identified KSAOs. The choice of a selection method is closely linked to the development of on-the-job performance measurements, as the ultimate goal is to predict an individual's success in the job.
3. Information Comparison and Decision Making: In the third stage, the information collected during the selection process is compared to the critical job requirements. This comparison leads to decisions regarding an applicant's suitability, such as being unqualified, qualified but not top-ranked, placed on a candidate reserve list, or receiving a job offer. Performance appraisals, discussed in the next chapter, play a crucial role in evaluating the effectiveness of the selection process.
This summary is based on the insights of experts in personnel selection and assessment, including Walter Borman, David Chan, Filip Lievens, Kevin Murphy, Robert Ployhart, Ann Marie Ryan, Paul Sackett, and Neal Schmitt. The chapter's content also draws inspiration from the work of Robert Roe, whose methodical approach and clear descriptions influenced the author's passion for the selection profession.

5.2. Binning and Barrett's selection model
Personnel selection is based on the belief that enduring traits can predict an individual's future job performance. Binning and Barrett's model outlines the relationships between psychological constructs and their operational measures. Starting with the performance domain, it encompasses job-related tasks and achievements. This model helps unravel the complexities of conducting research in personnel selection and performance measurement.

5.3. The performance domain
Performance Dimensions:
Binning and Barrett's Definition: Performance is a subset of behaviors contributing to organizational goals. It is multidimensional, encompassing various work behaviors.
Campbell's Eight Major Dimensions:
1. Job-Specific Task Proficiency Behaviors: Technical or core tasks central to the job (e.g., programming, teaching).
2. Task Proficiency of a Non-Job-Specific Nature: Common tasks across jobs (e.g., maintaining a clean workplace).
3. Communication: Clear expression in writing and orally, regardless of content.
4. Demonstrating Effort: Commitment, exerting effort, and persistence.
5. Maintaining Personal Discipline: Adherence to rules, arriving on time, and avoiding substance abuse.
6. Facilitating Peer and Team Performance: Helping colleagues with work and personal problems, serving as a role model.
7. Supervisory/Leadership Behavior: Influencing subordinates, including leadership, coaching, and empowerment.
8. Management/Administrative Tasks: Tasks defining organizational actions without direct interaction with subordinates.
Rotundo and Sackett's Performance Components:
Task Performance: Activities recognized as part of the job, contributing to the technical core.
Contextual Performance (Extra-Role Behavior): Voluntary tasks not formally part of the job, contributing to the social and psychological atmosphere.
Clusters of Contextual or Extra-Role Behavior:
Altruism: Helping others (e.g., volunteering, assisting overworked colleagues).
Conscientiousness: Going beyond minimum requirements (e.g., punctuality).
Sportsmanship: Maintaining positive attitudes for the greater good.
Civic Virtue: Constructive involvement in the organization's political process.
Courtesy: Gestures to avoid problems for colleagues (e.g., leaving shared equipment in good condition).
Counterproductive Performance (Counterproductive Work Behavior): Intentional behavior contrary to the organization's interests.
Examples of Counterproductive Performance: Absenteeism, aggression, theft, fraud, harassment, sabotage, discrimination, drug use, etc.
Understanding these components provides a comprehensive evaluation of job performance.

5.4. Performance measures
Translating the Performance Domain into Measurable Criteria:
Once someone is hired, the next step is determining how to measure their success on the job. The validity of the selection procedure is limited by the chosen criteria. Choosing appropriate criteria is challenging due to the nuanced nature of job success.
Challenges in Defining Success:
Job success is not binary; it varies among employees.
Employees succeed to different extents, from exceeding expectations to underperforming.
Determining what defines success is influenced by policy decisions.
Success criteria may change over time due to evolving job requirements or environmental factors.
Objective Criterion Measures:
Objective measures involve tangible, quantifiable data stored in HR information systems:
Personnel data: turnover, transfers, training, promotions, salary growth.
Productivity data: units produced, units sold, commission-based salary.
Quality data: cost of failed work, waste, errors, customer complaints, absenteeism.
Drawbacks of Objective Measures:
1. Limited Insight into Job Requirements: Objective measures might not capture essential job competencies. Example: Focusing on a secretary's typing speed and mistakes overlooks interpersonal skills and problem-solving abilities.
2. Contamination of Measures: Objective measures can be influenced by external factors beyond the individual's control. Example: A sales representative's success may be influenced by market trends (e.g., an energy crisis).
Subjective Criterion Measures:
Subjective measures involve assessments of staff performance collected semiannually or annually (e.g., performance appraisals). These assessments should align with crucial job competencies.
Complementarity of Objective and Subjective Measures:
Objective and subjective measures are not interchangeable; they provide complementary insights. They generally show a moderately positive correlation (between .20 and .40). Collecting both types of data is recommended for a comprehensive evaluation.
In summary, the challenge lies in balancing tangible, objective measures with subjective assessments to holistically evaluate an individual's performance, considering both job requirements and external influences.

5.5. Predictor constructs
In the context of personnel selection, the left part of Binning and Barrett's model focuses on predictive hypotheses about future job performance.
This involves identifying the key knowledge, skills, abilities, and other constructs (KSAOs) that are believed to be related to success in the job. For instance, one company may prioritize extraversion as a predictor for sales performance, while another may emphasize agreeableness for customer loyalty. These choices depend on the specific job requirements and the organization's definition of successful performance. Constructs, such as gravity in science, are abstract summaries used to explain and predict patterns, and in the case of personnel selection, they are essential for forming predictive theories about candidate success on the job.

5.5.1. Personality
Personality refers to a person's dynamic and organized set of characteristics influencing their thoughts, motivations, and behaviors. Originating from the Latin word "persona," meaning mask, personality is analogous to the theatrical representation of characters. In personnel selection, personality research has transitioned from diverse traits to a focus on the Big Five personality model: emotional stability, extraversion, openness, conscientiousness, and agreeableness. These traits, often remembered as OCEAN, are assumed to be normally distributed, statistically independent, and relatively stable throughout life, although alternative models like HEXACO also exist.

5.5.1.1. Openness (to experience)
Openness to experience reflects a person's inclination for curiosity and innovation over consistency and caution. High scorers seek new experiences, embrace cultural diversity, and are open to novel thoughts and emotions, while low scorers prefer routine and tradition and have more limited interests.

5.5.1.2. Conscientiousness
Conscientiousness involves being efficient, organized, and dependable, emphasizing traits like carefulness, thoroughness, responsibility, and planning. Those high in conscientiousness are task-focused, orderly, hardworking, achievement-oriented, and better at motivating themselves to accomplish desired tasks.

5.5.1.3. Extraversion
Extraversion involves being outgoing, energetic, and comfortable in social interactions, with extraverts being enthusiastic, talkative, assertive, and energized by the company of others. In contrast, introverts are introspective, reserved, and prefer solitude, often feeling overwhelmed in social settings, but they contribute unique talents and abilities to the world, as highlighted by Susan Cain in her TED talk "The power of introverts."

5.5.1.4. Agreeableness
Agreeableness involves being friendly, compassionate, and cooperative, with agreeable individuals being likable, good-natured, and gentle. They tend to avoid conflict, show empathy, and engage in prosocial behaviors, while those scoring low may exhibit coldness, antagonism, and selfish tendencies, potentially becoming manipulative and deceptive.

5.5.1.5. Neuroticism
Neuroticism relates to being sensitive, nervous, and prone to stress, with traits like anxiety, depression, and insecurity. Those scoring high on neuroticism may face challenges coping with negative events, while those scoring low are typically calm, even-tempered, and less reactive to stress.

5.5.2. Cognitive abilities
Cognitive abilities, or intelligence, as defined by the American Psychological Association, involve understanding complex ideas, adapting to the environment, learning from experience, and engaging in various forms of reasoning. Despite this definition, ongoing scholarly debates persist about the precise meaning and structure of intelligence.
5.5.2.1. The Cattell-Horn-Carroll theory
The Cattell-Horn-Carroll (CHC) theory of cognitive abilities views intelligence as hierarchical. At the lowest level are specific factors tied to a particular task (e.g., repeating sentences). Narrow abilities are clusters of these specific factors, and broad abilities are groups of narrow abilities that correlate with each other. General intelligence (g) at the top suggests a common cause for all cognitive abilities.

5.5.2.2. The triarchic theory of intelligence
Robert Sternberg's triarchic theory of intelligence introduces three interconnected forms of intelligence:
1. Analytic Intelligence: Involves problem-solving, analysis, evaluation, explanation, and comparison. Example: Measured by standard IQ tests, focusing on tasks like solving puzzles or logical reasoning.
2. Creative Intelligence: Relates to using existing knowledge in innovative ways to handle new problems or adapt to new situations. Example: Showcased by individuals who can think outside the box, create novel solutions, or approach challenges with originality.
3. Practical Intelligence: Encompasses the ability to interact successfully with the everyday world, often involving tacit knowledge (action-oriented knowledge acquired through personal experience). Example: Demonstrated by individuals who navigate real-world situations effectively, relying on skills like adaptability and interpersonal understanding.
Sternberg argues that these intelligences are interconnected, and individuals exhibit a unique blend of strengths across the three categories. Practical intelligence, crucial for everyday success, involves the application of knowledge to achieve personally valued goals in real-world scenarios.
However, critics like Linda Gottfredson question the distinction between practical and analytical intelligence. Gottfredson contends that traditional intelligence tests already predict practical success indicators, such as income, job prestige, and even avoiding legal issues. She argues against the need for a separate concept of practical intelligence, suggesting that it might be task-specific knowledge rather than a broad cognitive aspect.

5.5.2.3. The theory of multiple intelligences
Howard Gardner proposed the theory of multiple intelligences, challenging the traditional view of intelligence focused on verbal-linguistic and logical-mathematical abilities. He suggested that there are at least seven other human intelligences, including musical, visual-spatial, bodily-kinesthetic, naturalistic, interpersonal, intrapersonal, and existential intelligence. However, Gardner's theory has faced criticism for lacking empirical evidence, relying on subjective judgment, and conflicting with the high correlations found in traditional intelligence tests, which support the prevailing concept of general intelligence. Additionally, the theory discourages the use of standardized tests, making it challenging to verify or refute.

5.5.3. Emotional intelligence
Emotional intelligence, gaining significant attention since the mid-1990s, is debated for its classification as an ability, a personality trait, or a blend of both due to its dual nature involving intelligence and emotion. This construct has sparked discussions in both popular and academic literature.
5.5.3.1. Ability-based model of emotional intelligence
The ability-based model of emotional intelligence posits that individuals differ in their capacity to process emotional information and connect it to broader cognitive processes, providing an objective standard for gauging emotional intelligence. This model identifies four broad abilities:
1. Perceiving emotions: Recognizing and accurately identifying others' emotions, often through nonverbal cues like facial expressions and vocal tones. It also involves understanding one's own emotions.
2. Using emotions: Effectively leveraging the impact of emotions on cognitive activities, such as creativity and risk-taking. For instance, emotionally intelligent individuals may adjust their behavior based on the systematic effects of different emotions.
3. Understanding emotions: Accurately reasoning about various aspects of emotions, including labeling emotions precisely and anticipating emotional reactions to future events. This ability involves recognizing connections between events and emotional responses.
4. Regulating emotions: Effectively managing emotions by increasing, maintaining, or decreasing their intensity or duration. Emotionally intelligent individuals can set goals for modifying emotions and choose appropriate strategies for regulation, such as delivering a fiery speech or a motivational talk during half-time in sports.

5.5.3.2. Trait models of emotional intelligence
Trait models of emotional intelligence see it as a blend of how individuals perceive their emotional abilities, like regulating emotions, and inherent traits such as assertiveness and self-esteem. Advocates argue that emotional intelligence in trait models can be reliably measured through individuals reporting on their own emotional perceptions and dispositions.

5.5.3.3. Mixed models of emotional intelligence
Mixed models of emotional intelligence encompass mental abilities related to intelligence and emotions, alongside personality traits like warmth and persistence, as well as motivation. Despite claims by Goleman in his widely read 1995 book, which suggested that this combination predicts leadership success, current research tends to dispute the idea that emotional intelligence significantly predicts leadership outcomes when factors like personality and intelligence are taken into account.

5.5.4. Work experience
Work experience, a fundamental aspect in personnel research, is a nuanced construct. Scholars like Quiñones and colleagues have identified two key dimensions of work experience: specificity (related to tasks, jobs, or organizations) and measurement modes (including amount, time, and type).
Specificity Dimension:
1. Task Level: Individuals may differ in their experience performing specific tasks, which can be measured by the number of times they perform a particular task (amount), the types of tasks (type), and the time spent on a task (time).
2. Job Level: Work experience at this level involves variations in the total number of jobs held (amount), the types of jobs performed (type), and the time spent in each job (time).
3. Organizational Level: Differences in organizational experience can be measured by the number of organizations an individual has worked for (amount), the types of organizations (type), and the time spent in each organization (time).
4. Team Level and Occupation Level: These additional levels of specificity, introduced by Tesluk and Jacobs, include experiences related to teams and occupations.
Measurement Modes:
1. Amount: Reflects the quantity of experiences, such as the number of times a task is performed, the total number of jobs, or the number of organizations an individual has worked for.
2. Time: Captures the temporal aspect of experiences, including the time spent on tasks, in jobs, or with organizations.
3. Type: Involves the nature or characteristics of experiences, such as the types of tasks performed, the kinds of jobs held, or the organizations worked for.
Additional Dimensions by Tesluk and Jacobs:
1. Density: Introduced to capture the intensity of experiences, like the number of challenging situations in a specific period, influencing subsequent outcomes.
2. Timing: Considers when a work event occurs relative to a career sequence, recognizing that the timing of experiences can impact their effectiveness.
Understanding these dimensions provides a comprehensive view of work experience, considering not only the quantity and time spent but also the specific nature and context of the experiences individuals accumulate in their careers.

5.5.5. Vocational interests
Vocational interests, reflecting preferences for work activities and environments, are vital in employee selection, assuming that people thrive and perform best in jobs they find interesting. John Holland's six-factor model, known as the RIASEC model, categorizes individuals into six types based on their interests:
1. Realistic (Doers): Prefer practical, hands-on activities, often outdoors. Examples include software technicians and mechanical engineers.
2. Investigative (Thinkers): Enjoy scholarly, intellectual, and scientific work, focusing on observation, learning, and problem-solving. Examples include economists and mathematicians.
3. Artistic (Creators): Thrive in creative, expressive, and unconventional activities, using imagination and innovation. Examples include architects and copywriters.
4. Social (Helpers): Prefer work involving helping, teaching, and caring for others. Examples include teachers and social workers.
5. Enterprising (Persuaders): Enjoy assertive, persuasive, and leadership-oriented activities, influencing and managing for organizational goals. Examples include sales representatives and business managers.
6. Conventional (Organizers): Prefer well-ordered and routine activities, often with clerical or numerical tasks. Examples include tax accountants and financial analysts.
Holland arranges these types in a circular order, forming a hexagon with varying distances indicating the degree of similarity. The model is often represented as a circumplex, interpreted dimensionally as Data-Ideas and People-Things or Sociability and Conformity. This model helps understand and categorize individuals' vocational interests for career-related decisions.

5.5.6. Person-organisation fit
Person-Organization (PO) fit refers to the compatibility between individuals and organizations, with other types of fit, like job, work group, and supervisor compatibility, also being important. Amy Kristof-Brown distinguishes two types of PO fit:
1. Supplementary Fit: This occurs when an individual shares characteristics (e.g., personality, preferences) with others in the environment. For example, a sales firm hiring an extravert to match the existing extravert team.
2. Complementary Fit: This occurs when an individual's characteristics fill a gap or add something missing to the environment. For instance, a sales firm hiring an introvert to bring a reflective perspective that was lacking.
Kristof-Brown also distinguishes between:
Needs-Supplies Fit: When an organization fulfills individuals' needs, desires, or preferences, like meeting salary expectations.
Demands-Abilities Fit: When an individual possesses the abilities required to meet organizational demands, such as having the right skills for a specific job.
PO fit involves more than just a "feeling of connection," encompassing various dimensions such as skills, preferences, and values that contribute to the overall compatibility between individuals and their organizational context.

5.5.7. Physical and psychomotor abilities
Physical and psychomotor abilities remain crucial despite the growing importance of intellectual skills in the evolving nature of work. Unlike cognitive abilities, physical and psychomotor abilities lack a clear hierarchical structure, meaning that excelling in one does not guarantee proficiency in others. Fleishman's taxonomy outlines psychomotor abilities, focusing on factors like speed, control, and precision in movement, while also identifying physical abilities related to strength and physical proficiency.

5.6. Predictor measures
1. Sign vs. Sample Method:
Sign Method:
Definition: Assumes a theoretical link between unobservable traits (e.g., personality) and job performance.
Measurement: Indicators of latent characteristics are taken as expected job performance.
Example: "Extrovert people make better salespersons" is a hypothesis linking a trait (extroversion) to job performance.
Sample Method:
Definition: Involves direct representation of job content in the selection instrument.
Measurement: Uses work samples or other sample instruments that mirror aspects of the job.
Example: Asking an applicant to give a sales pitch as a representation of a sales task.
2. Temporal Perspective:
Definition: Measures differ based on whether they focus on past, present, or future behaviours.
Examples:
Past: A behavioural interview asking about past experiences.
Present: A role play where applicants demonstrate current skills.
Future: A situational judgment test where applicants imagine how they would react in future work situations.
3. Psychometric Properties:
Reliability:
Definition: Concerned with the consistency of scores.
Methods: Test-retest reliability (stability), internal consistency, and equivalence (parallel forms).
Example: A reliable intelligence test should produce consistent scores when taken by the same person at different times.
Validity:
Definition: Focuses on whether a test measures what it is supposed to measure.
Types:
Content Validity: Ensures that the measure covers all relevant aspects.
Construct Validity: Examines if the scale measures the intended construct.
Criterion-Related Validity: Compares the scale with a criterion measure.
Example: A sales skills test is valid if it correlates with actual sales performance.
4. Incremental Validity:
Definition: Assesses whether a new predictor measure adds predictive ability beyond existing measures.
Method: Hierarchical multiple regression analyzes the additional variance explained by the new measure (a short worked sketch appears at the end of section 5.7.4).
Example: Adding a situational judgment test to a selection battery to improve prediction beyond interviews and cognitive tests.
5. Face Validity:
Definition: Concerns whether applicants perceive the content of the predictor measure as relevant to the job.
Considerations: Tasks should be perceived as related to the job for the test to be judged as face valid.
Example: A role-play scenario for a sales position is more face-valid than questions about unrelated childhood experiences.
6. Adverse Impact:
Definition: Evaluates subgroup differences in scores, which can impact hiring or promotion decisions.
Concern: Subgroup differences may lead to underrepresentation of minority groups.
Example: If a test consistently disadvantages one gender or ethnic group, it may be causing adverse impact.
7. Concerns in Measurement:
Relevance to Applicants:
Issue: Seemingly irrelevant tests may lead to a negative perception by applicants.
Impact: Applicants may not take the test seriously, affecting the validity of the measure.
Adverse Impact:
Issue: Systematic differences in scores between majority and minority groups.
Impact: Unfair disadvantage to minority groups, leading to underrepresentation in the organization.
This breakdown covers the main aspects to weigh when evaluating predictor measures.

5.6.1. Biodata
Biodata, short for biographical data, involves gathering information about an individual's background and life history, including objective details like education and employment history, and subjective preferences. Biodata enthusiasts believe that a person's past performance is the best predictor of future performance. This information is typically collected through application forms, but what sets biodata apart is that the questions are scored based on their empirical or theoretical connection to job performance.
Key Points:
1. Definition: Biodata includes various aspects of an individual's life, from factual details (e.g., education and employment history) to personal preferences.
2. Predictive Power: Advocates argue that past performance is a strong indicator of future job success.
3. Data Collection: Typically gathered through application forms, where questions are carefully designed to predict specific work-related performance.
4. Scoring: Biodata questions are scored based on how well they relate to the job criteria, either empirically (from previous samples) or theoretically (informed by job analysis).
5. Items in Biodata: Biodata items can be verifiable (e.g., birthplace) or non-verifiable (e.g., attitudes). Verifiable items are more consistent and less susceptible to faking.
6. Validity: Biodata has been shown to be a valid predictor of suitability, with varying degrees of validity across studies (typically in the range of 0.20-0.35).
7. Incremental Validity: Studies suggest that biodata provides additional useful information beyond established personality and cognitive ability measures.
In summary, biodata is a systematic way of collecting and scoring information about an individual's background to predict their suitability for a job. It goes beyond the typical application form by scoring the responses, and studies indicate its value in predicting job outcomes.

5.6.2. References and letters of recommendation
References, or letters of recommendation, are commonly used in hiring processes. They are letters from people who know the job applicant, like former employers or teachers, and provide insights into the applicant's suitability for the job. There are different ways to obtain references:
Applicants can provide recommendation letters directly from their recommenders.
Applicants can share contact details of references, and the employer contacts them for information.
Employers may use structured online forms to gather information from references.
References can be categorized as structured or unstructured. Unstructured references, like recommendation letters, can be overly positive and lack specific job-related details. Structured references involve specific questions for referees, making the process more organized.
However, references have some challenges:
Different referees may provide varied information, and their ratings may not align well (low inter-rater reliability).
References, especially structured ones, often have limited predictive power for job performance (low validity).
Several reasons contribute to these challenges:
Referees may tend to be overly positive, a phenomenon known as the Pollyanna principle: the tendency to focus on the positive aspects and remember more positive experiences.
Referees might not have incentives to be critical, and their main concern is the applicant, not the hiring company.
Referees might provide similar references for all applicants, revealing more about themselves than about the applicants.
Personal biases and mood can also influence the content of references.
Referees may want to keep good employees and might write positive references even for those they want to see leave.
To improve the reliability and validity of references, using standardized forms, involving multiple referees, using comparative ranking scales, and ensuring referee anonymity can be helpful. However, it is still uncertain whether references can offer unique insights compared to other assessment methods like tests, interviews, and biodata.

5.6.3. Cognitive ability tests
1. Hierarchical Structure of Cognitive Abilities: Cognitive abilities exhibit a hierarchical structure, encompassing general mental ability (GMA) and specific skills like numerical, spatial, verbal, and perceptual abilities. Tests such as the Stanford-Binet, the Wechsler Adult Intelligence Scale, and others measure these cognitive abilities, evolving continuously through developments from various psychometric test developers and publishers.
2. Validation through Extensive Research: Over 50 years of research involving a vast sample of more than five million individuals establishes the robustness of cognitive ability tests. The comprehensive meta-analyses conducted by researchers like Hunter and Hunter, with a database of 515 studies and over 38,000 participants, solidify the conclusion that cognitive ability tests are unparalleled predictors of job and training performance.
3. Training Performance Predictions: Cognitive ability tests excel in predicting learning, job knowledge acquisition, and training performance, with correlations ranging from .50 to .70. Validities generalize across diverse jobs, organizations, and settings, with GMA and specific quantitative and verbal abilities showing the highest predictive power.
4. Job Performance Predictions: Cognitive ability tests demonstrate strong predictability for overall job performance, with correlations in the range of .35 to .55. Validities extend across different job categories, organizations, and settings, showcasing their consistency and reliability.
5. Adverse Impact Concerns: The analysis of race and ethnic group differences reveals variations in cognitive ability scores. On average, Blacks and Hispanics score lower than Whites, raising concerns about adverse impact, especially for lower-complexity jobs.
6. Applicant Perceptions and Reactions: Meta-analytic research highlights that cognitive ability tests receive positive ratings from applicants.
Applicants recognize these tests as scientifically valid methods that respect privacy, providing them with a platform to demonstrate their capabilities.
7. Drawback - Perceived Interpersonal Coldness: A common drawback is the perception of cognitive ability tests as 'interpersonally cold,' particularly among applicants for higher positions. To address this, some consulting firms develop tests with business-related content, incorporating realistic scenarios to enhance engagement.
In essence, cognitive ability tests emerge as powerful tools for predicting performance, yet considerations about adverse impact and the need for improved applicant engagement remain prominent in discussions surrounding their application in personnel selection.

5.6.4. Personality inventories
1. Personality Inventories Overview: Personality inventories are standardized questionnaires that assess an individual's typical behaviour, thinking, and reactions to various situations. These are not tests with right or wrong answers; instead, they compare an individual's responses to a norm group, determining their position on different personality scales.
2. Examples of Personality Inventories: Well-known examples include the NEO-PI, HEXACO-PI-R, TIPI, OPQ, CPI, and Hogan Personality Inventory. A website (https://ipip.ori.org/) offers over 250 measures of broad and narrow personality traits without a fee.
3. Predictive Power of Conscientiousness: Decades of research consistently show that conscientiousness is a strong predictor of job performance, especially for roles with high autonomy.
4. Big Five Personality Traits and Job Performance: Table 5.6 illustrates the relationships between the Big Five personality traits (openness, conscientiousness, extraversion, agreeableness, neuroticism) and overall job performance. Different traits are predictive of specific criteria; for example, openness correlates with creativity and adaptability, while agreeableness predicts success in customer service and team performance.
5. Correlations with Cognitive Ability Measures: Personality variables generally have low correlations with cognitive ability measures but contribute incrementally to predicting job performance when used together.
6. Limited Ethnic Differences and Low Adverse Impact: Studies indicate few differences between Whites and ethnic minorities in most personality variables. Personality inventories are unlikely to result in adverse impact, making them fair across diverse groups.
7. Concerns and Criticisms: Criticisms include concerns about intentional response distortion or "faking" by applicants to present a socially desirable image. Despite this, social desirability does not significantly reduce the predictive validity of personality inventories, as the ability to behave in socially desirable ways itself predicts success in job performance. Some questions may be criticized for not respecting privacy or for lacking relevance to the job.
8. Other-Report Measures: Organizations are increasingly using other-report measures, where individuals who know the applicant well (supervisors, colleagues, family) provide ratings of the applicant's personality. Meta-analytic research suggests that these other-reports have incremental validity over self-reports, capturing not just the self-assessed "identity" but also the "reputation" of the person.
In summary, personality inventories are valuable tools in employee selection, offering insights into an individual's traits, with conscientiousness being a particularly strong predictor of job performance.
While concerns about intentional response distortion exist, research suggests that personality assessments remain effective and fair in predicting success in job roles.

5.6.5. Emotional intelligence tests
Emotional intelligence (EI) is approached through two main models: the trait model and the ability model.
In the trait model, individuals self-report their emotional abilities based on statements. However, this approach has limitations, including the self-serving bias, where individuals tend to overestimate their emotional intelligence. Moreover, there is evidence of people faking responses on self-report questionnaires, raising concerns about the accuracy of self-perceptions. Meta-analytic evidence indicates that self-report measures of emotional intelligence are more strongly linked to personality traits than performance-based measures are.
The ability model, on the other hand, assesses emotional intelligence through tasks and problem-solving related to emotions. The Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) is a widely used performance-based measure that covers four branches of emotional intelligence: perceiving, using, understanding, and regulating emotions. Despite its advantages, the MSCEIT is challenging to access due to copyright restrictions. Other performance-based measures include assessments of empathic accuracy, where respondents identify emotions in various media recordings, and the Situational Test of Emotional Understanding (STEU), which evaluates the ability to understand emotions based on scenarios. The STEU assesses respondents' choices of emotions in different scenarios, linking events to emotional responses.
In summary, while the trait model relies on self-reporting, the ability model employs performance-based measures to provide more reliable insights into individuals' actual emotional intelligence.

5.6.6. Integrity tests
Organizations are increasingly incorporating integrity tests into their hiring processes to evaluate the values, integrity, and potential dark-side behaviours of prospective employees. This trend is motivated by the growing challenges of theft and counterproductive work behaviour in the workplace. Here are more detailed insights into the types of integrity tests and their applications:
Overt Integrity Tests:
Purpose: Overt integrity tests have a dual focus. One section seeks admissions of past wrongdoing, while the other gauges attitudes toward theft.
Examples: Questions are designed to elicit admissions of past theft or wrongdoing and to measure attitudes toward illegal behaviour.
Concerns: These tests are susceptible to faking, as candidates might provide socially desirable responses, compromising their accuracy.
Personality-Oriented Integrity Tests:
Structure: Similar to personality inventories, these tests encompass a broader set of scales, including traits like dependability, conscientiousness, conformity, risk-taking, and hostility.
Examples: The Hogan Development Survey (HDS) is a representative personality-oriented integrity test that assesses traits such as dominance.
Purpose: These tests aim to predict counterproductive performance by identifying deviant personality profiles that may be detrimental to organizational well-being.
Other Integrity Measurement Methods:
Situational Judgment Tests (SJTs) and Conditional Reasoning Tests:
Approach: These tests indirectly measure implicit biases and rationalizations linked to various motives, such as aggression.
Example: Conditional reasoning tests present candidates with reasoning problems, assessing their unconscious biases related to aggression.
Validity: The effectiveness of these tests relies on candidates being unaware that they are being evaluated for specific traits.
Rationale for Integrity Testing:
Preventing Issues: Organizations use integrity tests to identify potential maladaptive traits, especially subclinical forms of the dark triad: narcissism, Machiavellianism, and psychopathy.
Discretion and Righteousness: Certain professions requiring a high level of discretion, personal discipline, and righteousness, such as cash couriers or police officers, integrate integrity tests into their selection process.
Values-Based Competencies: With values like integrity and ethical conduct becoming integral to organizational competency frameworks, measuring them in the selection process is deemed essential.
In summary, integrity tests serve as a proactive measure for organizations to mitigate potential risks associated with employee behaviour, particularly in roles demanding high levels of trust and ethical conduct.

5.6.7. Vocational interest tests
Interest measures, often used in vocational guidance, are gaining relevance in employee selection. The broader goal of selection is to match a person's characteristics with job requirements. While interests are crucial, they are more commonly applied in situations where many applicants may be assigned to various positions, such as in the military.
Key Interest Questionnaires:
1. Self-Directed Search (SDS)
2. Jackson Vocational Interest Survey (JVIS)
3. Kuder Occupational Interest Survey (KOIS)
4. Strong Interest Inventory (SII)
Role of Interests:
Correlation with Job Performance: Research indicates a weak correlation (.14) between interests and job performance.
Predictive Power: Interests are better predictors of training performance, employee turnover, and, notably, job satisfaction.
Motivation vs. Ability: Interests predict motivation to do a job but have a weaker connection to the ability to perform the job effectively.
Use and Limitations:
Faking: Interest measures can be easily manipulated, making them more suitable for counselling, career guidance, and outplacement than for strict selection processes.
In summary, while interests play a vital role in predicting motivation and job satisfaction, they are considered a necessary but insufficient condition for overall job performance, especially in situations where applicants may be assigned to various roles.

5.6.8. Employment interviews
Employment Interviews Overview:
1. Definition: Employment interviews are interactive sessions where one or more interviewers ask applicants questions to evaluate their qualifications for employment decisions.
2. Popularity: Interviews are widely used globally and preferred by supervisors and HR practitioners for assessing applicants.
3. Applicant Perception: Applicants find interviews fair compared to other selection methods and expect them as part of the hiring process.
Objectives of Employment Interviews:
1. Information Exchange: The interviewer seeks details about the applicant's job history and education. The applicant aims to explain and present herself positively.
2. Assessing Suitability: The interviewer evaluates whether the applicant has the required qualifications and assesses personality fit. The applicant gauges the organization's work environment for suitability.
3. Personal Contact: The interview adds a personal touch to the selection process, acting as its flagship.
4. Presenting the Job and Organization: Each interview aims to provide an accurate, realistic view of the job, the organization, and the rewards offered.
Importance of the Objectives: The significance of each objective varies based on the economic climate and labour market conditions, which influence the power dynamic between applicants and organizations.
Interview Types:
1. Recruitment Interview: Focuses on selling the organization and position to applicants, often conducted at career fairs.
2. Screening Interview: An initial short interview based on a CV or application form.
Types of Employment Interviews:
1. One-on-One Interview: One interviewer and one applicant.
2. Tandem Interview: Two interviewers and one applicant.
3. Panel Interview: More than two interviewers and one applicant.
4. Series Interview: Multiple interviews in succession with different individuals, providing a comprehensive view.
5. Group Interview: One interviewer and multiple applicants.
Structured vs. Unstructured Interviews:
Structured Interview: Follows a predetermined script and standardized questions.
Unstructured Interview: Lacks a strict format, allowing more flexibility in the conversation.
Note: Most interviews fall on a spectrum between fully structured and unstructured, with varying degrees of organization and flexibility.

5.6.8.1. Unstructured interviews
Unstructured interviews aim to go beyond facts, focusing on truly understanding the applicant. The goal is to establish trust, postpone judgments, and create an environment where the interviewee can openly discuss personal aspects. The interviewer adapts to the interviewee's preferred topics, seeking spontaneity and authenticity. Success depends on a shared goal of finding a good fit between the person and the organization. Challenges include applicants presenting socially desirable impressions, which affects the accuracy of predicting future job performance.

5.6.8.2. Structured interviews
Structured Interview Overview:
1. Definition: Planned, standardized, and led by the interviewer or panel, with predetermined themes.
2. Objective: Enhance psychometric properties by increasing standardization or assisting the interviewer in question selection and evaluation.
3. Components of Structure:
Content Dimension:
Basing questions on job analysis.
Asking the same questions to each applicant.
Limiting prompting and follow-up.
Using better question types.
Controlling ancillary information.
Not allowing questions from applicants until after the interview.
Evaluation Dimension:
Rating each answer or using multiple scales.
Using anchored rating scales.
Taking notes.
Using multiple interviewers.
Using the same interviewer(s) across all applicants.
Not discussing applicants or answers between interviews.
Providing interviewer training.
Using statistical analyses for decision-making (a small scoring sketch appears below, after the common structured interview types).
4. Autonomy Impact: More structure may improve psychometric properties but can affect interviewer motivation by limiting autonomy.
Common Structured Interviews:
1. Situational Interview: Based on critical incidents turned into questions about how applicants would behave in specific situations. Assesses intentions as predictors of future behaviour. Can be perceived as boring and impersonal, with the potential for socially desirable responses.
2. Behaviour Description Interview: A structured interview in which applicants describe past critical job-related situations. Probes for real-life situations and past reactions, reducing socially desirable answers. Applicants generally prefer the personal nature of unstructured interviews.
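To make the evaluation-dimension ideas above more concrete, here is a minimal sketch, with entirely made-up applicants and ratings, of how two interviewers' anchored ratings could be combined mechanically and how their agreement could be checked. The applicant labels, scores, and the use of a plain correlation as an agreement index are illustrative assumptions, not material from the chapter; published interview studies typically report agreement with intraclass correlations, but the underlying logic of comparing raters across the same applicants is similar.

```python
# Hypothetical sketch: combining structured-interview ratings from two
# interviewers and checking how well they agree. All applicants and
# scores are invented for illustration. Requires Python 3.10+.

from statistics import correlation, mean

# Each interviewer rates every applicant's answers on anchored 1-5 scales.
ratings_interviewer_a = {"applicant_1": 4.0, "applicant_2": 2.5, "applicant_3": 3.5, "applicant_4": 4.5}
ratings_interviewer_b = {"applicant_1": 3.5, "applicant_2": 2.0, "applicant_3": 4.0, "applicant_4": 4.5}

applicants = sorted(ratings_interviewer_a)
scores_a = [ratings_interviewer_a[name] for name in applicants]
scores_b = [ratings_interviewer_b[name] for name in applicants]

# A simple proxy for inter-rater reliability: the correlation between the
# two interviewers' ratings across the same applicants.
inter_rater_r = correlation(scores_a, scores_b)

# Statistical (mechanical) combination: average the panel's ratings per
# applicant instead of forming a holistic impression in discussion.
panel_scores = {name: mean([ratings_interviewer_a[name], ratings_interviewer_b[name]]) for name in applicants}

print(f"Inter-rater correlation: {inter_rater_r:.2f}")
for name, score in sorted(panel_scores.items(), key=lambda kv: kv[1], reverse=True):
    print(name, score)
```

The higher the agreement between interviewers across applicants, the more the reliability figures reported in the research findings below become attainable in practice.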
Scientific Research on Structured vs. Unstructured Interviews:
1. Biases in Unstructured Interviews: Subject to rating errors (halo effect, horns effect, etc.). Confirmation bias leads interviewers to interpret information so that it confirms their first impressions. Unrelated characteristics (gender, attractiveness) can influence judgments.
2. Inter-Rater Reliability: Structured interviews (>.80) show higher reliability than unstructured interviews (<.70). Situational interviews slightly outperform behaviour description interviews in reliability.
3. Validity and Predictions: Structured interviews have higher predictive and concurrent validity than unstructured interviews. Predictive validity coefficients for structured interviews are around .40-.50, better than those of unstructured interviews (.20-.40).
4. Unbiasedness: Structured interviews show less bias toward minority groups and lead to fewer court disputes than unstructured interviews.
5. Training and Self-Control: Unstructured interviews require thorough training, flexibility, and high self-control, making them harder to learn.
Conclusion: Structured interviews are psychometrically superior to unstructured interviews. The choice depends on the interview's objective: structured for measuring competencies, unstructured for rapport and information exchange. Interviewers' preferences for autonomy should be considered, especially for seasoned interviewers.

5.6.9. Work samples
Work Sample Test Overview:
1. Definition: A test in which applicants perform actual tasks similar to those on the job, assessing physical and/or psychological aspects.
2. Fidelity Level: Work samples aim for high realism, replicating job conditions closely.
3. Scoring Types:
Objective Work Samples: Reflect actual work tasks. Scored based on time, errors, or standardized checklists. Often mimic industrial functions, typing tasks, driving tests, or simulations.
Subjective Work Samples: Focus on decision-making, negotiation, or discussion. Scored subjectively through ratings. Commonly used in assessment centres.
Subjective Work Sample Examples:
1. In-Basket Exercises: Simulations with materials found in a manager's in-basket. Applicants prioritize, make decisions, and propose actions within a time limit.
2. Fact-Finding: Involves problem-solving with limited information. Applicants ask a panel of resource persons questions to gather information for a solution.
3. Role-Play: One-to-one exercises where applicants conduct interviews or conversations. Assessed by an observer, and the strategy and outcome are discussed afterwards.
4. Case Analysis: Applicants analyze a realistic problem, interpret data, and provide recommendations in a report.
5. Oral Presentation: Applicants present an analysis and solution to assessors, who ask questions. Visual aids like flip charts can be used.
6. Group Discussion: A group of applicants collaborates to solve a case or management problem. Assessors observe and record contributions.
Predictive Validity and Reception:
1. Predictive Validity: Work sample tests, in some cases, show higher predictive validity than general mental ability. A recent meta-analysis indicates a predictive validity coefficient of .33.
2. Applicant Reception: Generally well received by applicants due to high face validity (correspondence between the test and job tasks).
Drawbacks of Work Sample Tests:
1. Adverse Impact: Recent research shows a systematic disadvantage for Black applicants, leading to adverse impact.
2. Development Cost: High development cost; tests are job-specific (different tests for different jobs).
3. Criterion Domain Limitation: Captures only a small portion of the criterion domain (what the job entails).
4. Experience Requirement: Often requires applicants to have work experience in order to take the tests.
5. Multidimensional Nature: Measures multiple constructs, making it challenging to identify exactly which constructs are being measured.
6. Maximum Performance Measurement: Measures what the applicant is maximally capable of, not their typical performance level on the job.

5.6.10. Assessment centres
Assessment Centre Overview:
1. Definition: A procedure, not a location, in which group and individual exercises, tests, and simulations are used to identify qualities essential for good job performance.
2. History: Originated in the 1930s for recruiting German army officers, later adopted by the British and American militaries, and reluctantly applied in business in the 1960s.
3. Usage: A standard technique in various sectors, with about 25% of larger companies in Flanders and the Netherlands regularly using them.
Characteristics of Assessment Centres:
1. Participant Activities: Participants go through various exercises (group and individual) reflecting real job situations, similar to work sample tests.
2. Exercise Duration: Exercises are relevant to the job and are often called situational tests. Assessment centres usually take no more than half a day today.
3. Assessors: Psychologists and line managers (two levels higher in the hierarchy than the applicants). They focus on applicant behaviour rather than verbal claims.
4. Behavioural Emphasis: Role plays systematically elicit behaviour, and assessors evaluate competencies based on observed behaviour.
5. Multiple Assessors: Multiple assessors observe and evaluate, sometimes using video recordings for feedback.
6. Integration of Ratings: Ratings are integrated through discussion or statistical methods for better predictions.
Exclusions from the Assessment Centre Label: Panel or series interviews as the only technique, using only one simulation, paper-and-pencil tests, using only one assessor, or using several simulations without integrating the results.
Effectiveness and Benefits:
1. Predictive Validity: The predictive validity coefficient is around .29, with incremental validity over other measures.
2. Reliability: Inter-rater reliability varies between .60 (moderate) and .90 (high), depending on assessor experience and training.
3. Cost-Benefit Analysis: Despite being expensive, the benefits exceed the costs; assessment centres are considered an investment.
4. Face Validity: With high face validity, applicants see the relevance to their managerial job.
5. Social Desirability: Applicants are less likely to behave in socially desirable ways during exercises.
6. Consistency Challenges: Assessors tend to simplify evaluations, using a limited number of competencies. Correlations between competency ratings across different exercises are low.
Conclusion: Assessment centres are an effective method for evaluating competencies, but challenges exist in maintaining consistency across different exercises, making final judgments more complex.

5.6.11. Situational judgment tests
High-Fidelity Simulations: Work Samples and Assessment Centres
1. Definition: Work samples and assessment centres are selection procedures known as high-fidelity simulations because they provide realistic job situations to which applicants respond immediately.
2. Characteristics: Realism and job relevance are high; applicants are confronted with realistic job scenarios.
3. Drawbacks: Relatively expensive.
Can only be conducted with small groups of applicants (typically in the final phase of selection).
Alternative Selection Tool: Situational Judgment Tests (SJTs)
1. Purpose: Developed as an alternative to high-fidelity simulations, offering a reasonable degree of realism and job relevance suitable for large groups of applicants.
2. Definition: SJTs assess an applicant's judgment in workplace situations. They present scenarios and a list of possible actions, and applicants must evaluate the courses of action.
3. Formats: Written, verbal, video-based, or computer-based. An example SJT item presents situations and response options in written form (a low-fidelity simulation).
4. Skills Measured: Primarily assess applied social skills like interpersonal skills, teamwork, and leadership, but can measure various other skills.
5. Predictive Validity: Moderate predictive validity of approximately .26. SJTs measure procedural knowledge about effective ways to handle situations.
6. Instructions: Behavioural instructions (what one 'would' do) or knowledge instructions (what one 'should' do).
Strengths of SJTs:
1. Incremental Validity: Provide additional predictive information beyond cognitive ability tests and personality inventories.
2. Subgroup Differences: Generate fewer subgroup differences between minority and majority groups compared to other selection methods.
3. Face Validity: High face validity, especially with video and multimedia material. Video-based SJTs, known as medium-fidelity simulations, stop at key moments, prompting applicants to indicate their responses.
4. Predictive Validity (Medium Fidelity): Medium-fidelity SJTs with video and multimedia technology have higher predictive validity than written counterparts, justifying higher development costs.

5.6.12. Physical fitness tests
Physical Fitness Examination for Job Selection:
1. Approach: The physician assumes a person is generally fit for work unless specific physical limitations are revealed during the examination.
2. Activities Assessed: The exam focuses on basic physical activities like standing, walking, climbing, bending, etc.
3. Working Conditions Considered: Physical working conditions, such as indoor/outdoor work, cold or humid atmospheres, heights, and underwater work, are examined.
4. Purpose: Determines whether performing the job could harm the applicant's physical health.
5. Selection Decision: Occupational health provides information on whether applicants are allowed or not allowed to do the job. No ranking of applicants based on physical fitness is typically involved.
6. Relation to Job Performance: Physical fitness is not always directly linked to job performance and is often trainable.
Special Cases: Physical Tests for Certain Professions (e.g., Military, Firefighter, Police Officer):
1. Tests Involved: Extensive physical tests, like obstacle courses, timed running, cycling, and balance beam walking.
2. Gender Differences: Due to natural genetic differences, men often outperform women in these tests. To ensure equal opportunities, organizations may set different standards for men and women.
3. Equal Opportunity Measures: Lower standards for women are often set to increase the number of women in these professions. Despite the different standards, men and women are expected to perform the same tasks on the job.
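The discussion of group differences in section 5.6.12 above, and of adverse impact in section 5.7.3 below, rests on comparing how subgroups fare under a given standard. The sketch below shows one simple way to quantify that comparison: compute each group's selection rate under a common cutoff and under group-specific cutoffs, then compare the rates. All scores, cutoffs, and the use of a selection-rate ratio are invented for illustration and are not figures or rules taken from the chapter.

```python
# Hypothetical sketch: comparing subgroup selection rates to spot potential
# adverse impact, and seeing how group-specific standards change the rates.
# Scores, cutoffs, and group means are simulated for illustration only.

import numpy as np

rng = np.random.default_rng(7)

# Simulated physical-test scores with a mean difference between two groups,
# mirroring the kind of group differences described in section 5.6.12.
scores_men = rng.normal(loc=60, scale=10, size=200)
scores_women = rng.normal(loc=52, scale=10, size=200)

def selection_rate(scores: np.ndarray, cutoff: float) -> float:
    """Share of applicants in a group who reach the cutoff."""
    return float((scores >= cutoff).mean())

# A single common standard for everyone.
rate_men = selection_rate(scores_men, 58.0)
rate_women = selection_rate(scores_women, 58.0)
print(f"Single standard   - men: {rate_men:.2f}, women: {rate_women:.2f}, "
      f"ratio: {rate_women / rate_men:.2f}")

# Group-specific standards, as some organizations use to equalize opportunity.
rate_men_adj = selection_rate(scores_men, 58.0)
rate_women_adj = selection_rate(scores_women, 50.0)
print(f"Separate standards - men: {rate_men_adj:.2f}, women: {rate_women_adj:.2f}, "
      f"ratio: {rate_women_adj / rate_men_adj:.2f}")
```

The same rate comparison can be applied to any predictor and any pair of subgroups; the policy question of whether to adjust standards or choose a different predictor is the trade-off taken up in section 5.7.3.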
5.7. Choosing predictor measures
When designing a selection process, it is crucial to focus on the effectiveness (predictive and incremental validity) of instruments while also considering their cost, acceptance by applicants and interviewers, and the potential for adverse impact. Balancing these factors ensures a well-rounded and efficient selection process that aligns with organizational goals and values.

5.7.1. Predictive validity
The structured interview is more effective than the unstructured interview, with a predictive validity of about 0.42. This means that roughly 18% of the differences in job performance can be explained by variations in interview scores (the squared validity coefficient: 0.42² ≈ 0.18). While this might seem modest, it is comparable to or even higher than the effectiveness of various medical interventions like Viagra, Ibuprofen, or coronary artery bypass surgery. It is important to note that these values are averages across different studies, jobs, and organizations, and the actual predictive power depends on factors like interview structure and question quality. Additionally, the results use overall job performance as a criterion, but the effectiveness of selection methods can vary for different aspects of job performance, such as task performance, contextual performance, and counterproductive performance.

5.7.2. Stakeholder reactions
Besides how well a selection method predicts job performance, it is crucial to consider how applicants perceive the process. Factors like feeling treated fairly, the relevance of tests to the job (face validity), and overall acceptance of the procedure matter. Interestingly, methods with high predictive validity are generally viewed as more relevant by applicants, with the exception of the unstructured interview. It is important to take into account the reactions of both applicants and interviewers when designing a selection procedure, as their satisfaction can affect the procedure's reliability and validity as well as the organization's image.

5.7.3. Subgroup differences and adverse impact
When making decisions about selection procedures, it is important to consider how different groups might be affected. The challenge is that the most effective selection methods can also lead to more significant differences between groups. For example, cognitive ability tests, while highly effective, may show notable gaps between different ethnic groups. Organizations face a choice between optimal prediction, which may result in less diversity, and prioritizing diversity while potentially using less effective selection tools. One approach is to use alternative methods that measure various skills and minimize subgroup differences. Another involves creating items that are fair to all groups, and there are strategies, like offering practice opportunities, to address these concerns.

5.7.4. Incremental validity and cost
When using different selection methods together, it is essential to understand how much each method adds to the overall predictive validity. Cognitive ability tests are often considered a starting point because they are cost-effective. However, structured interviews, personality assessments measuring conscientiousness, and work samples significantly improve predictive validity beyond cognitive ability tests. This means that these additional methods measure different aspects, making the combined selection process more valuable in predicting job performance. In summary, a cost-effective and highly valid selection process could include a cognitive ability test along with a conscientiousness measurement.
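As a rough illustration of the incremental-validity logic summarized in section 5.7.4 (and referenced in section 5.6), the sketch below simulates applicant data and compares the variance in job performance explained by a cognitive ability test alone with the variance explained after a conscientiousness score is added, i.e., hierarchical regression and the change in R². All scores, effect sizes, and variable names are hypothetical; they are not the chapter's reported coefficients.

```python
# Hypothetical sketch of incremental validity via hierarchical regression:
# does adding conscientiousness to a cognitive ability test explain extra
# variance in job performance? All data are simulated for illustration.

import numpy as np

rng = np.random.default_rng(42)
n = 500

cognitive = rng.normal(size=n)          # standardized cognitive ability scores
conscientiousness = rng.normal(size=n)  # standardized conscientiousness scores
# Simulated job performance influenced by both predictors plus noise
# (the 0.5 and 0.3 weights are arbitrary choices for this example).
performance = 0.5 * cognitive + 0.3 * conscientiousness + rng.normal(scale=0.8, size=n)

def r_squared(predictors: np.ndarray, outcome: np.ndarray) -> float:
    """Proportion of variance in the outcome explained by a linear model."""
    X = np.column_stack([np.ones(len(outcome)), predictors])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    residuals = outcome - X @ beta
    return 1 - residuals.var() / outcome.var()

# Step 1: cognitive ability alone.
r2_step1 = r_squared(cognitive.reshape(-1, 1), performance)
# Step 2: cognitive ability plus conscientiousness.
r2_step2 = r_squared(np.column_stack([cognitive, conscientiousness]), performance)

print(f"R^2 with cognitive ability only:    {r2_step1:.3f}")
print(f"R^2 after adding conscientiousness: {r2_step2:.3f}")
print(f"Incremental validity (Delta R^2):   {r2_step2 - r2_step1:.3f}")
```

The same two-step comparison generalizes to any new predictor, such as adding a structured interview, a work sample, or a situational judgment test to an existing battery: the question is always how much the second step raises R² beyond the first.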