FINDING AND MEASURING TALENT **2**

**INTRODUCTION**

We saw in Chapter 1 that one of the key areas of work psychology theory and application is around employee selection; that is, finding the best people to fill organisational roles. Finding and developing these people is often referred to as talent management (TM). We should note that there is some debate over whether TM should focus only on 'high-potential' employees or on identifying the talent and contribution that each employee can make. In this chapter we take the latter, broader view and look at ways in which psychology can help us to identify and measure potential job performance for both current and future employees. We begin by examining talent management from both a business and a psychological perspective. Selecting the right people for the right jobs is important as the first key step in managing the organisation's talent effectively. We are concerned with how identification and management of talent rest on accurate measurement of complex phenomena, and we look at how job analysis and competencies form the foundation for good selection processes. Because measurement is such a key issue here, we consider how the reliability and validity of our measures affect the quality of selection decisions, before moving on to review the challenge inherent in attempting to conduct fair assessments of people's future performance. Having established the ways in which we can compare different measurements of talent, we move on to a detailed review of different selection methods available to organisations. Throughout this section, we will be reminded that these methods can also be used in assessing current employees' performance and training needs, which is the subject of Chapter 3.

This chapter will help you answer the following questions:

- How can psychology contribute to talent management?
- What are the different methods of identifying employee talent and how do we know which are the best ones to use?
- Why is accurate measurement of employee talent important?
- How can we ensure fairness and overcome bias in job assessments?

**Case Study: JPMorgan crosses the line -- *guanxi* and job selection in China**

HR-related challenges, such as finding qualified talent, have been identified as the biggest challenge facing foreign companies in China (EU SME Centre, 2015), affecting 70--80% of businesses. Economic reforms in China from the late 1970s have increasingly enabled companies to have flexibility in recruiting to match their needs, rather than being told how many employees to recruit by the central government (Zheng et al., 2009). But they also face significant pressures in finding qualified candidates. Consultancy company McKinsey noted that HR professionals believed less than 10% of job candidates in China had the requisite skills demanded by world-class companies (Farrell & Grant, 2005). This was down to a lack both of practical skills developed in an educational environment that instead emphasises theory and of the requisite language skills to work internationally, resulting in intense pressure on companies trying to recruit and select appropriate people for their needs. In China, *guanxi* (usually translated as relationships and networks) is critical to business survival as well as the job search process and can influence the job selection process in two ways.
*Guanxi* acts first as a channel to distribute both formal and informal job information and second as a means to influence employers and increase a candidate's chances of gaining the job (Weng & Xu, 2018). There is evidence that the informational advantage gained through *guanxi* is important in gaining a job, but the influence element becomes less important in a market-based approach to selection with open recruitment processes (Weng & Xu, 2018). Where a candidate recognises they have a lower match on the required qualifications and experience for a job, they are more likely to use *guanxi* to gain that job. However, the widespread use of *guanxi* does not necessarily mean it is viewed positively by the population. Research has shown that while Chinese people recognise that hiring based on personal ties can be profitable, in that it may bring future business opportunities through the extended network, they do not believe it is fair (Liu et al., 2015). So at what point does *guanxi* turn into bribery and corruption?

A high-profile example of a company that got it wrong is JPMorgan Chase. JPMorgan Chase is one of the oldest financial institutions in the USA and one of the largest in the world, with assets of $2.6 trillion (JPMorgan, 2019). It operates in over 100 markets and employs a quarter of a million people; but in 2013, it hit the headlines when a US-based Securities and Exchange Commission (SEC) investigation into its hiring practices in China was opened. The SEC is a regulatory body responsible for protecting investors and ensuring that markets are fair and orderly. As part of its mission, it investigates allegations of foreign corruption.

In 2006, JPMorgan started a programme called 'Sons and Daughters' that provided a separate recruitment track for the children of Chinese officials. It was initially portrayed as a means to reduce nepotism and avoid problems with bribery through having a higher level of scrutiny of these well-connected applicants. But instead of this, job applicants from powerful Chinese families often had fewer interviews and sometimes did not meet the bank's academic and expertise standards (Silver-Greenberg & Protess, 2013). While there is nothing wrong with hiring well-connected people, a problem arises if a job is given as a bribe to gain further business from that person's connections. For example, Tang Xiaoning is the son of the chairman of the China Everbright Group, and after his hiring in 2010, JPMorgan's business dealings with China Everbright went from non-existent to substantial. And after Zhang Xixi, the daughter of a very high-ranking official in China's railway ministry, was hired, the bank was chosen to advise the ministry on becoming a public company and later to advise the operators of a Beijing high-speed railway on its public stock offering (Silver-Greenberg & Protess, 2013). Was this just sensible hiring of well-connected young people or was it a sign of corruption and bribery? The investigation revealed that over a seven-year period from 2006, JPMorgan hired 100 people in China and the rest of Asia at the request of government officials and subsequently gained business amounting to $100 million. The director of the SEC Enforcement Division, Andrew J.
Ceresney, concluded, 'JPMorgan engaged in a systematic bribery scheme by hiring children of government officials and other favoured referrals who were typically unqualified for the positions on their own merit' (Gera, 2016). The bank even had spreadsheets to track how much money came from each client whose referrals were given jobs. The end result of the investigation was that the bank had to pay a $264 million fine. Far from its vaunted aim of reducing nepotism and corruption, the Sons and Daughters programme had in fact enhanced and encouraged unfair and unethical hiring practices.

**TALENT MANAGEMENT: BUSINESS AND PSYCHOLOGY**

TM can be approached from two different directions. The first is from a business perspective, where the focus is on having the right people in the right place at the right time, and considers issues such as employee turnover, succession planning and development. The main question we need to answer here is: *Why think about talent?* The second way to approach talent management is from a psychological perspective, where the focus is on scientifically differentiating between people and identifying the factors that contribute to success. The main question here is: *How can we identify talent?* Both of these approaches need to be incorporated into a successful talent management strategy.

*The business perspective: why think about talent?*

Talent management is often used as a term for the way an organisation manages its employees, from identifying the best applicants for a job through to developing them and managing their performance. Originally defined as managing human resources in a way that would contribute to organisational profitability, TM is now a broader issue. As businesses have increasingly recognised wider, more sustainable measures of organisational success than simple profitability, there have been calls for TM to prioritise employees as stakeholders (Collings, 2014). For a more in-depth look at the debates around TM, see the Fad or Fact feature.

The aim of talent management is to maximise the competitive advantage an organisation can gain from utilising its human capital effectively (Collings & Mellahi, 2009). To achieve this, Collings and Mellahi suggest that a strategic approach to talent management needs to fulfil three key aims:

1. **Identify pivotal positions** that have a differential impact on an organisation's competitive advantage. It is worth noting that these key positions are not limited to higher management, but can include posts at any level of the organisation.
2. **Develop a talent pool** of high-performing employees who can fill those positions. This means the organisation will have to think ahead: not just look for people to fill positions when they become vacant, but develop people for those future vacancies. This needs to be done both internally (by developing current employees) and externally (by recruitment) in order to meet the organisation's requirements.
3. **Create a differentiated HR architecture** that allows the organisation to invest appropriately in different groups of employees. With the aim of encouraging motivation, commitment and development, the organisation will invest more in those employees who can contribute most to its strategic objectives.
**Fad or Fact: A war for talent?**

The 'War for Talent' was a phrase coined by consultants at McKinsey & Company at the turn of the twenty-first century and popularised in the book of the same name (Michaels et al., 2001). The phrase was meant to capture what the authors described as a **critical shift** in the criteria for business success. They claimed that success is dependent on the organisation's ability to attract, develop and retain 'talent'. Although very influential in its recognition of the centrality of employee talent to organisational success, the **war for talent** approach was also limited in that it defined that success solely in terms of profits or shareholder value.

While many authors have made a convincing case for the centrality of TM in organisational success, there remains debate over what 'talent' actually is and especially whether it is inborn or something that can be developed. There are four different approaches to distinguishing between more and less talented employees (Winsborough & Chamorro-Premuzic, 2016). The first is known as the 80/20 rule and defines the talented employees as the 20% or so who contribute 80% of the productivity. The second defines talent as the best or maximum performance an individual can give; while the third views talent as effortless performance, that is, the talented person can easily perform well, whereas the less talented person has to put in a lot of effort for the same level of performance. And finally, the fourth approach suggests that talent is when our skills, knowledge and abilities match the requirements of the job. However, these different approaches are **heuristics** -- that is, ways of making sense of talent -- rather than clearly defined and evidenced distinctions, and serve to emphasise what a wide-ranging and ill-defined concept 'talent' is.

Lewis and Heckman (2006) suggest that this debate over what talent and TM are presents a problem for those wishing to make claims about its importance. They recommend a more strategic approach based on reliable and valid TM **measures**. These measures are essential if we are to answer important questions accurately: **How do we identify high performers? How can we identify untapped potential? Who will respond best to development?** It is here that work psychology can make an invaluable contribution.

- What is 'talent'? Write down your own definition and then compare it with the four heuristics above: which one represents how you view talent?
- If you were to sum up your own 'talent' in a few words or a short phrase, what would it be?
- With a partner, discuss the extent to which you think 'talent' is natural or acquired over the course of your life.
- What are the implications of this debate for businesses wishing to manage the talent within their organisations?

*The psychological perspective: how can we measure talent?*

When we are talking about talent in a business context, what we really mean is how well a person performs a job. But assessing performance is notoriously difficult. Who judges how well someone is performing? We may get very different answers if we ask a person's manager, co-workers, or their customers and clients. What kinds of performance measurements do we want to use? Should we consider how well people contribute to a team or how well they complete individual tasks? And how can we separate individual from team or organisational performance?
This problem of measuring talent and performance is even more difficult when it comes to selecting people for jobs, because we have such limited time and experience of the person to make relevant judgements.

Psychological theory and research can make a real contribution to talent assessment. At heart, this is about taking a scientific approach to assessing people. We start by identifying what needs to be measured and then develop reliable and valid measurements that can be used fairly to distinguish between people. The advantage of these methods of assessment is that they can sometimes even identify hidden talents. For example, an employee might have great potential as a team leader but never have had the opportunity to demonstrate it. Using work samples or psychometric testing, the organisation may be able to uncover and utilise this hidden talent. Once the talent has been identified, the organisation can then build its talent pool by recruiting the right people and developing current employees. Within an overall TM framework, assessment, selection and development should all be integrated. This starts with a job analysis in order to identify what is needed in each job and can then develop into an organisation-wide competency framework.

job analysis

The aim of job analysis is to identify the knowledge, skills and behaviours that are associated with job performance. In the USA, the Occupational Information Network (O*NET, https://www.onetonline.org) has collected job analysis information for over 974 occupations and combined it with other organisational information to create a database that identifies everything needed for entry into and success in a range of different occupations. It includes required qualifications, descriptions of the day-to-day tasks involved in each job and detailed lists of the specific skills and behaviours that job holders need to display.

There are two approaches that can be taken in job analysis. The first focuses on the tasks and the second on the worker. Task-oriented analysis identifies the duties and tasks required in a specific job and views the job role as independent of the people who perform those tasks. Worker-oriented analysis, on the other hand, focuses on the individual characteristics, such as knowledge, experience and personality, needed for performance. While task-oriented job analysis sees individual attributes as representing unimportant variability, the worker-oriented approach believes that those individual attributes are the key to high performance rather than just adequate performance. This latter approach also tends to be broader, aiming to identify characteristics that will be helpful across the organisation.

The aim of job analysis is to identify the knowledge, skills, abilities and other characteristics (KSAOs) needed to perform the job:

- **Knowledge** is the foundation: learning about the context, content, process or procedures of the job. For example, a work psychologist might need foundational knowledge of specific work motivation theories.
- **Skills** are specific, practiced psychomotor tasks or the application of abilities to a job task. To continue our example, a work psychologist would need critical thinking skills to identify and solve problems.
- **Abilities** are physical, mental or social capabilities that can be applied in a range of contexts. For example, a work psychologist needs the ability to verbally communicate information and ideas.
- **Other characteristics** include personality traits, motivation, interests, values and experiences. For example, work psychologists often value working with others in a non-competitive way.

**Figure 2.1** KSAOs in job analysis
Much of the work psychology research around predicting work performance has focused on finding out what KSAOs a specific measure can predict and how good it is when compared to other measures. Bartram (2005) suggests that instead of this measurement-focused approach, it is more useful to adopt a *criterion-focused approach*. A criterion-focused approach looks at meaningful workplace behaviours or outcomes (known as competencies) and tries to identify what will best predict them. This change in focus from job- or task-based management to competency-based management of human resources is seen as an essential part of an organisation's continuous evolution (Soderquist et al., 2010). Rather than a long list of unrelated KSAOs for each job, this alternative approach to utilising job analysis information is to develop clear descriptions of *competencies* that are important for good performance, both in a specific role and in terms of contributing to overall organisational goals.

the role of competencies

There is some discussion over the precise definition of competencies, from a person-centred approach that defines them as characteristics of an individual that underlie effective or superior performance (Boyatzis, 1982) to a more outcome-oriented approach that defines them as 'sets of behaviours that are instrumental in the delivery of desired results or outcomes' (Bartram et al., 2002, p. 7). It is this latter definition that is most useful in identifying and measuring talent within an organisation, because it focuses on the aspect that is most important to an organisation: what a person actually does. The usefulness and popularity of competencies in TM and general HR literature are primarily due to the way in which they capture a range of individual characteristics in a performance-focused manner, incorporating not only what a person *can* do, but what they *want* to do (Ryan et al., 2009). Competencies are a common feature of many people management policies and practices, and a whole industry has developed around their application in organisations. Matching employee competencies and job requirements is claimed to improve employee and organisational performance, as well as leading to increased satisfaction (Spencer et al., 1992).

Competencies need to be based on behavioural indicators -- that is, on observable behaviours that indicate certain levels of performance. It is useful for an organisation to specify both positive and negative indicators so that good and bad performance can be identified. This competency framework can then be used at every stage of the talent management process: to select candidates for roles, to assess current employees' performance and to evaluate development needs. There are, of course, many workplace behaviours that we might be interested in and the various lists of competencies developed by different organisations and researchers may at first seem overwhelming.
How can we be sure that any competency framework is sufficient to measure the important factors of work performance while not including any extraneous elements? Tett et al. (2000) constructed a comprehensive and detailed list of management competencies that included 53 different competencies in 9 groups. For example, the *Traditional Functions* group included *Problem Awareness* and *Strategic Planning*, while the *Communication* group included *Listening Skills* and *Public Presentation*. The drawback of this approach is that the sheer volume of competencies that it produced made it unwieldy.

One suggestion for an integrative and comprehensive model of competencies, based on meta-analyses of many different competency models, is the Great Eight. This model is supported by research evidence showing how different competencies cluster together, and is consistent with practitioners' models (Bartram, 2005). The eight competencies are:

- Leading and Deciding
- Supporting and Cooperating
- Interacting and Presenting
- Analysing and Interpreting
- Creating and Conceptualising
- Organising and Executing
- Adapting and Coping
- Enterprising and Performing

A generalised model like the Great Eight has the advantage of being more cost-effective, as the organisation does not need to conduct detailed job analysis of every single role in order to identify a unique framework. It does, of course, still need tailoring to individual roles and levels within the organisation. The *Apply: Case Study Competency Framework in the UK Civil Service* activity illustrates how one organisation has developed an organisation-wide competency framework with each competency tailored to individual jobs.

**Apply Case Study: Competency Framework in the UK Civil Service**

The Civil Service Competency Framework (2012--17) is a good illustration of how a generic competency framework can provide a large organisation with strategic direction. Based on the organisation's values of honesty, integrity, impartiality and objectivity, it identifies three clusters of competencies that will lead to high performance within the organisation: Setting Direction, Delivering Results and Engaging People. Each cluster consists of three or four competencies and each competency is clearly defined with behavioural indicators of both effective and ineffective behaviour. These behaviours are further organised by the level of the person's post, which is essential if the competencies are going to be implemented throughout the organisation.

- Go to https://www.gov.uk/government/publications/civil-service-competency-framework and look through the detailed framework, particularly the descriptions of behavioural indicators. Why do you think it is important to develop both positive and negative behavioural indicators?
- Compare the competencies in this framework with the Great Eight framework (you can read the details in the full paper: Bartram, D. (2005). The Great Eight Competencies: A Criterion-Centric Approach to Validation. *Journal of Applied Psychology*, *90*(6), 1185--1203. https://doi.org/10.1037/0021-9010.90.6.1185). Can you see any similarities or overlaps between the two models? What benefits are there to an organisation developing its own competency model?
In our discussion of competencies, we have already started addressing one of the major issues in TM and indeed in work psychology in general: measurement. We will now move to consider this in more detail.

**MEASURING PSYCHOLOGICAL VARIABLES**

A central issue for psychology at work is how we can accurately measure the variables we are interested in. Whether it is in identifying talent or assessing someone's current job performance for promotion or development opportunities, our decisions will only be as good as the measures we rely on. In this section we will review the essential criteria by which we can judge how good those measurements or assessments are, and build a more detailed understanding of how they can be used to inform employee-resourcing decisions.

There are two main ways of judging how accurate a psychological measure is: *reliability* and *validity*. A reliable measure is one that gives consistent results. A valid measure is one that is actually measuring what it claims to. It is easy to understand these concepts when we think about physical measurements. For example, we can use centimetres to measure length or height. If you were to measure your height in centimetres with a tape measure today and do the same tomorrow, you would get the same result: centimetres are a reliable measure. Trying to measure your height using millilitres, on the other hand, would be an example of an invalid measure: millilitres are a valid measure of volume, but not of length. Yet assessing the validity and reliability of measures of complex phenomena in the workplace is not so straightforward. We need to have a clear understanding of what psychological measurements can and cannot do if we are to use them appropriately to enhance our decision-making.

*Reliability*

We saw above that a reliable measure is one that gives us consistent results. There are two main types of reliability we might want to know about, depending on the kind of measurement we are looking at. The first is the reliability of the *assessors* and the second is the reliability of the *measure* itself.

Reliability of assessors is established using **inter-rater reliability**. Several assessment methods try to achieve an objective measure of a person by ensuring that different assessors use standardised forms and questions. For example, different interviewers may use the same interview schedule or several observers of a role-play activity may use the same form to assess candidates' performance. The objectivity of these measures depends entirely on how much agreement there is between raters scoring the same candidate.

Reliability of measures can be assessed in several ways (a computational sketch follows the list):

- **Test--retest reliability**: the test must produce consistent results if given to the same people on different occasions. Usually, a gap of 4--6 weeks is used to ensure that memory effects do not impact on the reliability scores.
- **Internal consistency**: this is used to check that different parts of the test are measuring the same thing. For example, if we have a questionnaire that measures cognitive ability, are a person's scores on one half of the questionnaire similar to their scores on the other?
- **Parallel forms**: if we have more than one version of a test, a person's scores must be similar on both versions.
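To make these checks concrete, here is a minimal sketch in Python of how test--retest reliability and internal consistency might be computed from raw scores. The data and function names are hypothetical, and a real evaluation would use a validated statistics package and a properly sized sample.

```python
import numpy as np

def test_retest_reliability(scores_time1, scores_time2):
    """Pearson correlation between the same test taken on two occasions."""
    return np.corrcoef(scores_time1, scores_time2)[0, 1]

def cronbach_alpha(item_scores):
    """Internal consistency from an (n_people, n_items) score matrix."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: five candidates, four questionnaire items
item_scores = [[4, 5, 4, 5],
               [2, 3, 2, 2],
               [5, 5, 4, 4],
               [3, 3, 3, 4],
               [1, 2, 2, 1]]
print(f"Cronbach's alpha = {cronbach_alpha(item_scores):.2f}")

time1 = [12, 18, 9, 15, 20]   # scores today
time2 = [13, 17, 10, 14, 19]  # scores after a 4-6 week gap
print(f"test-retest r = {test_retest_reliability(time1, time2):.2f}")
```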
*Validity*

A valid measure is one that assesses what it actually claims to, and there are several types of validity we might want to establish:

- **Face** validity is the easiest type of validity to demonstrate and simply asks whether the measure *appears* to assess what it claims to. Good face validity helps candidates to 'buy in' to the assessment and to believe that the process is fair.
- **Content** validity is more theoretical. While the average person in the street might believe that a measure is valid, it also needs to be based on solid research and up-to-date theory about what the concept includes. To establish content validity, we would need to ask experts in the field the extent to which a measure is representative of the *whole* of the ability or attribute that it claims to measure. One way to do this is to conduct thorough literature reviews and gain feedback from experts in the area.
- **Construct** validity asks whether the assessment accurately measures the construct that it claims to. For example, does a particular teamwork role-play at an assessment centre really measure a candidate's teamworking ability or is it actually measuring their leadership skills? This is a more difficult type of validity to demonstrate because often the constructs we are trying to measure are quite abstract. However, one way in which this can be done is by establishing that a new measure of a construct correlates with older, established measures in the ways we would theoretically expect. Scores on the new measure need to converge with similar measures and diverge from different measures in order to demonstrate that the measure is really capturing what it claims to:
  - *Convergent*: a person's teamworking score in the role-play correlates with their teamworking score on a self-report questionnaire.
  - *Divergent*: a person's teamworking score in the role-play has a low correlation with a leadership score on a self-report questionnaire.
- **Criterion** validity is perhaps the most important type of validity to consider and refers to how well a measure predicts something important in the real world of work. The 'criterion' is the outcome that we are trying to predict. For example, we might want to use an intelligence test to select employees who will benefit the most from a training programme. In this case, the intelligence test is the 'predictor' and the benefit from the training programme is the 'criterion', and we are assuming that the more intelligent employees will benefit the most from the training. But if we are to use measurements accurately and responsibly, we cannot just assume this kind of link between our measurement and outcome criterion -- we need to demonstrate it clearly. This link can be established *concurrently* or *predictively*:
  - *Concurrent* criterion validity is where the test and criterion measure are taken at the same time, indicating how much the test scores predict current employees' performance.
  - *Predictive* criterion validity is where the criterion is measured some time after the test, indicating whether the test predicts future performance.

Although criterion-related validity is the gold standard of any measure we might use in selecting employees, it is also quite difficult to achieve. Its usefulness depends entirely on how carefully the criterion was chosen and how well it can be assessed. Job performance, for example, is difficult to assess and is subject to many influencing factors that may be completely unrelated to the predictor. We can use other criteria that could be equally important to an organisation and perhaps more easily measured, such as absenteeism or turnover rates. Whatever the difficulties, establishing criterion validity for any measure used to assess candidates is a key part of applying these psychological tools well.
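As a small illustration of criterion-related validity, the sketch below correlates hypothetical selection-test scores with job performance ratings gathered six months later (a predictive design); collecting the ratings at the same time as the test scores would make it a concurrent design instead. All figures are invented for the example.

```python
import numpy as np

# Hypothetical data: selection test scores (predictor) and
# supervisor performance ratings six months later (criterion)
test_scores = [55, 62, 48, 70, 66, 51, 59, 73]
performance = [3.1, 3.4, 2.8, 4.2, 3.9, 3.0, 3.3, 4.0]

# Predictive criterion validity: correlation between predictor and later criterion
validity = np.corrcoef(test_scores, performance)[0, 1]
print(f"criterion validity r = {validity:.2f}")
```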
*Useful rules of thumb for evaluating reliability and validity information*

There are two main statistics employed for demonstrating the reliability or validity of an assessment method. Both are measures of 'agreement' or similarity between numerical scores.

The first statistic is a correlation and tells you how similar two sets of scores are. These two sets of scores could be from two versions of the same test, two different raters or the same test completed by the same people at different times. Correlations can be positive (an increase in score on one measure is associated with an increase on the second measure) or negative (an increase on one measure is associated with a decrease on the second). They are represented by *r* and can have an absolute value between 0 and 1. The stronger the relationship between the two sets of scores, the closer *r* is to 1.

The second statistic is the reliability coefficient, also known as the Cronbach alpha and represented by *α*. It assesses how closely the different items on a questionnaire agree with each other and also gives a score between 0 and 1, where 1 indicates perfect agreement between the different items. A high agreement means that the different items are all assessing the same concept.

Psychometric tests produced by reputable publishers will have this information clearly available in the test manuals, but it is also worth trying to find it for other assessment methods you might use. You can use the rules of thumb in Figure 2.2 when evaluating whether a measure is reliable and valid:

- Reliability: internal consistency *α* > 0.7; test--retest or inter-rater *r* > 0.7.
- Construct validity: ability tests *r* > 0.7; personality questionnaires *r* > 0.6.
- Criterion-related validity: any *r* > 0 shows some predictive power. Remember: (a) ensure that the criterion is relevant to your purpose and (b) the higher the correlation, the better.

**Figure 2.2** Rules of thumb for assessing reliability and validity

**Apply: Assessing validity and reliability**

Go online and find a psychometric test manual (e.g. Psytech provides all its technical manuals as free downloads: https://www.psytech.com/Resources/TechnicalManuals). Write a short report or prepare a brief presentation that evaluates your chosen test in terms of its reliability and validity, using the rules of thumb outlined in this chapter to understand the statistical results presented in the manual. Make recommendations about the kinds of job roles or applications for which the tests might be appropriate and note any cautions regarding their use.
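For the exercise above, it can help to encode the Figure 2.2 thresholds explicitly. The sketch below is a hypothetical helper for flagging weak statistics pulled from a test manual; the cut-offs are this chapter's rules of thumb, not universal standards.

```python
def evaluate_measure(alpha=None, retest_r=None, construct_r=None,
                     criterion_r=None, personality=False):
    """Flag reliability/validity statistics against the Figure 2.2 rules of thumb."""
    flags = []
    if alpha is not None and alpha <= 0.7:
        flags.append(f"internal consistency low (alpha={alpha:.2f}, want > 0.7)")
    if retest_r is not None and retest_r <= 0.7:
        flags.append(f"test-retest/inter-rater low (r={retest_r:.2f}, want > 0.7)")
    threshold = 0.6 if personality else 0.7  # personality questionnaires: r > 0.6
    if construct_r is not None and construct_r <= threshold:
        flags.append(f"construct validity low (r={construct_r:.2f}, want > {threshold})")
    if criterion_r is not None and criterion_r <= 0:
        flags.append("criterion validity shows no predictive power (r <= 0)")
    return flags or ["meets the rule-of-thumb thresholds"]

# Hypothetical figures read from a test manual
print(evaluate_measure(alpha=0.82, retest_r=0.65, criterion_r=0.31))
```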
*Fairness in assessment*

So far, our discussions of measurements and selection methods have focused on the need to choose reliable and valid methods to underpin talent management decisions. But there is another important issue to consider, and that is how these methods are perceived and experienced by the people who take part in them. Fairness in selection and assessment is one of the most important determinants of how people perceive these methods. Perceptions of fairness are influenced by (Arvey & Renz, 1992):

- The components and processes of the selection procedure (e.g. how objective the procedure is or how consistently it is applied across the applicants).
- The nature of the information used to make decisions (e.g. job-related items are perceived as fairer than variables that seem unrelated; information collected in a way that seems to invade a candidate's privacy is seen as unfair).
- The results or outcomes of the selection procedure (e.g. do there seem to be the 'right' number of people from disadvantaged groups?).

Fairness is a complicated concept in selection, because selection is necessarily based on discriminating between people. This discrimination is fair if it is able to distinguish between people who (will) perform well and those who (will) perform less well -- that is, when the measurement is related to work performance. The discrimination is unfair if it is based on measurements or characteristics that are unrelated to a person's work performance.

Unfair discrimination can occur in two ways, referred to as 'sameness' and 'difference' (Liff & Wajcman, 1996). Discrimination can be based on treating people differently when they are actually the same; for example, hiring a younger person rather than an equally well-qualified older person. Or it can be based on treating people the same when account should be taken of their differences. For example, the British police force used to have a height requirement of 5ft 10in. This was changed to 5ft 8in for men and 5ft 4in for women and then ultimately removed, as it was recognised that a height requirement was not essential to job performance and it unfairly discriminated against women and people from certain ethnic backgrounds. The French police force removed its minimum height requirement in 2010, also recognising that it was not a justifiable measure of how well someone could do the job.

Equality laws are an attempt to ensure that employers are not able to discriminate unfairly against applicants based on their race, ethnicity, gender, religion, disability and so on. The advantage of using more reliable and scientific approaches to assess people's suitability for jobs or work performance is that this kind of discrimination can be reduced. However, it is unfortunately not true that these methods will guarantee equality of treatment: there are the problems of *bias* and *adverse impact*.

**Apply Case study: Discrimination and fairness**

Joe is the CEO of a large insurance company. When he first took over the position two years ago, there was a lot of upheaval in the top management and he appointed a new Finance Director and HR Director, both of whom were women. He was pleased with their performance and received good feedback from their departments. However, within the last six months, both new directors have gone on maternity leave and one has given notice that she will not be returning to work. Joe is now involved in the selection process for a new Finance Director.

- If you were in Joe's position, how would you feel? What do you think the impact of this experience will be on the selection efforts of this organisation?
- Do you think it would be fair of Joe to ask potential female candidates for the post if they intended to have a baby in the next few years?
bias

Bias occurs when a measure (whether a psychometric test or scores on a structured interview or any other measure) systematically over- or underestimates the performance of people from different groups. Imagine we are evaluating a sales team and we compare supervisor ratings of performance with actual sales figures. We might find that the supervisor ratings are consistently lower for women than for men, but that this does not correspond to a difference in their respective sales volumes. In this case, the measure (supervisor rating) is either overestimating the men's performance or underestimating the women's.

Bias can also occur with psychometric tests. To continue our example, imagine that all the salespeople complete a test designed to measure their persuasiveness and interpersonal skills. We find that women score higher on this test than men, but it is again not reflected in a difference in sales figures, showing that this test would provide a biased assessment of the salespeople's performance, in this case underestimating the men's performance in comparison with the women's. If we were to use this test in selection, it might well lead to adverse impact.

adverse impact

Adverse impact occurs when a difference between groups on a particular measure results in a lower success rate for one group. This could be success rates in hiring decisions or in the likelihood of being selected for development opportunities. Yet many methods of assessing people that are commonly used in selection show sub-group differences in scores. In a comprehensive review of mainly US evidence, Hough et al. (2001) summarised the differences between groups (age, gender and ethnic groups) on three main types of criteria commonly used in employee selection (cognitive abilities, personality and physical abilities). They concluded that many methods could result in adverse impact on different groups and made detailed suggestions for how organisations could combat this, for example in providing test coaching for candidates who are not familiar with psychometric tests. Many psychometric test publishers also offer example questions and items on their websites to allow test-takers the opportunity to become familiar with the way questions are phrased and how they should respond to them.

In assessing whether a selection method shows adverse impact, a good rule of thumb is the *four-fifths rule*, which originated in the USA. It states that selection is biased, or shows adverse impact, if the selection rate for one group of people is *less than four-fifths* (or 80%) of the rate of another group. This is only a rule of thumb rather than a definite indication of bias, and should be seen as a warning flag that further investigations might be needed.
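As a worked illustration, the four-fifths rule can be checked directly from application and hiring counts. The sketch below uses invented numbers; the 0.8 cut-off is the rule of thumb described above, and a result below it is a warning flag, not proof of bias.

```python
def four_fifths_check(hired_a, applied_a, hired_b, applied_b):
    """Warn if one group's selection rate is < 4/5 of the other's (adverse impact flag)."""
    rate_a = hired_a / applied_a
    rate_b = hired_b / applied_b
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return rate_a, rate_b, ratio, ratio < 0.8

# Hypothetical: 40 of 100 applicants hired from one group vs 15 of 60 from another
rate_a, rate_b, ratio, flag = four_fifths_check(40, 100, 15, 60)
print(f"selection rates: {rate_a:.0%} vs {rate_b:.0%}, ratio = {ratio:.2f}")
if flag:
    print("Below four-fifths: possible adverse impact - investigate further.")
```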
An argument is sometimes made against efforts to increase diversity by claiming that including diversity as a selection criterion would result in a reduction in performance. We already know that there are significant disadvantages to some groups, including bias in methods or restrictions on access to opportunities. But we can also address this argument from a different perspective. What if we assume that this claim is true and that reducing our requirements in order to recruit someone from an under-represented group has a negative effect on performance? What effect would it have?

An interesting study addressing this issue was conducted using an approach from operations research that looks at the trade-off between multiple objectives, in this case aiming for the highest predicted performance combined with the lowest adverse impact (De Corte et al., 2007). It showed that accepting a 5% reduction in selection quality can increase the minority hiring rate by more than 50%, thus achieving the dual purpose of effective selection *and* a diverse workforce.

These issues of fairness and bias are also important to understand when evaluating the contribution of technology to selection, so we return to them throughout this book. See, for example, the Psychology and Technology features about artificial intelligence (Chapters 6 and 8) and big data (Chapter 12). Remember that technology-based selection decisions are only as good as the data and the decision parameters they are programmed with: if the data are flawed or the ways the decision is made are biased, the result will be too.

The best way to avoid bias and adverse impact, as well as increase the perceived fairness of an assessment measure, is of course to ensure that the measure is closely related to actual job performance. As we have seen throughout the chapter, this relies on an accurate job analysis that identifies competencies associated with high performance and an informed choice of method that will reliably and validly measure those competencies.

**International Perspectives: Variation in selection practices**

Most job applications begin with a CV or application form. In some countries you are nearly always required to attach a photo (e.g. Germany or Austria). In others (e.g. the UK or USA) this seldom happens. There are relatively few studies comparing what type of selection methods are most popular in different countries, and those that do exist tend to focus on comparing countries within a particular region (e.g. Europe). However, several differences have been noted; for example, assessment centres are more popular in the UK and Germany than in other Western European countries (Shackleton & Newell, 1994). An interesting question is why these differences might exist. Is it tradition or differences in available resources? Or could it be due to different cultural perceptions of how appropriate each of these methods is?

Ryan et al. (1999) addressed this question in a survey of nearly 1,000 organisations in 20 different countries from all continents except South America. They looked at the relationship between the country's culture and the type and number of selection methods used. They found clear differences in the selection procedures used in different countries, the most notable of which was a difference in the use of structured interviews, with about 60% of organisations in Australia and New Zealand reporting they used fixed interview questions compared with just over 10% in Italy and Sweden. Graphology was used very rarely in every country except France. The researchers also found that countries high in uncertainty avoidance (i.e. where people tend to feel uncomfortable with uncertainty and ambiguity) prefer to have more objective data to base their decisions on, which was evident in their greater use of selection tests. Educational qualifications were a popular contributor to hiring decisions worldwide.
While cultural differences are certainly related to the types of selection methods used, several authors have suggested that cultural dimensions alone are not enough to explain the variation in selection practices. Steiner and Gilliland (2001), for example, note that there is a certain level of consistency in people's perceptions of the fairness of different selection methods across several countries. In fact, it seems that the differences between countries are not that large and that the main determinant of how fair a method is perceived to be is job relatedness, no matter what country the applicant is from.

**Discussion point** Do you think attaching a photo to your CV is a good idea or not? Why? To what extent do you think your view on this is affected by your culture and the perceived fairness of this approach?

**IDENTIFYING TALENT**

In this section, we will explore the different methods that organisations can use to identify the people they need to fulfil their talent management strategy. While you may be familiar with some of these as selection methods, remember that 'selection' in this case is broader than just choosing the right person to fill a vacant post. These methods can also be used to decide who should go on a development course or what secondment someone might benefit from. In fact, while this type of internal development decision is often made in quite an informal manner, these decisions are key to the effective internal management of talent and ideally should be made using the most up-to-date, reliable and valid measures that we have.

Anderson and Cunningham-Snell (2000) collated the findings of several meta-analyses of the predictive validity of different selection methods and compared them with the popularity of the same methods. The results, shown in Table 2.1, highlighted how the most popular methods used in organisations were often not the best at predicting performance. In this section, we will review the selection methods in detail and consider when and how to use them most effectively.

*Interviews*

As we saw in Table 2.1, interviews are the most popular method in employee selection. The Chartered Institute of Personnel and Development in the UK reports that 78% of selection processes involve competency-based interviews (CIPD, 2017). Not just used in selection, interviews are also a popular format for performance appraisals, job analysis and training needs identification. Interviews are often criticised because of their potential for bias and subjectivity, and we need to be aware of what these biases are in order to avoid discrimination in selection and to train interviewers appropriately. The good news is that by drawing on decades of psychological research and recommendations, we can ensure that interviews have one of the top validity ratings of any selection method.

In interviews we are relying on the interviewer's judgement about a candidate, which can unfortunately be subject to several biases and distortions. These distortions are the result of shortcuts that our brains make: they are not in themselves 'bad' or 'wrong', but are ways of helping us make sense of the complicated world we live in without using up too much time or effort. While they can be useful in everyday life, they can cause problems when we are trying to make accurate assessments of others in an interview situation.
There are several different cognitive shortcuts that have been identified as sources of bias in interviews (see Table 2.2). Although these biases are natural ways in which our brains function, organisations can take steps to ensure that they are reduced and that interviews are conducted in a standardised manner.

**Apply: Selection methods**

With a partner, talk about your experiences of the selection process for different jobs you have applied for. Compile a list of all the different selection methods you have experienced and heard about. When your list is complete, compare it with the validity and popularity lists in Table 2.1. Discuss with a partner which methods you personally think are best or worst from the candidate's point of view and why. To what extent do you think candidate views should be considered when choosing selection methods? Why do you think so many organisations continue to use less valid selection methods?

**Table 2.1** Comparative predictive validity and popularity of various selection methods

| Predictive validity | Methods | Popularity | Methods |
|---|---|---|---|
| **High** (validity > 0.5) | Work samples; cognitive ability tests; structured interviews | **High** (used by > 80% of organisations) | Interviews; references; application forms; ability tests; personality tests |
| **Medium** (validity between 0.35 and 0.4) | Unstructured interviews; personality tests; assessment centres; biodata | **Medium** (used by around 60% of organisations) | Assessment centres |
| **Low** | References; self-assessment | **Low** | Biodata |

*Source*: adapted from Anderson and Cunningham-Snell (2000).

A useful way to structure answers to behavioural interview questions is the STAR approach (Situation, Task, Action, Results):

- **Situation**. Describe the situation you were in or the context of the task.
- **Task**. Explain the task you needed to complete or the problem you had to solve.
- **Action**. Explain what action you took to complete the task or solve the problem.
- **Results**. Explain the results of your actions. Try to focus on how your actions resulted in a success for the company.

Now, find a job that you are interested in and go through the job information provided (e.g. person specification and job description) to identify the competencies or qualities the organisation is looking for. Try writing questions for yourself, using a mixture of behavioural or situational approaches. You can then role-play the interview with a friend, gaining experience of being both interviewer and interviewee. This kind of practice can make the real job interview a much less intimidating experience, and knowing how these questions are constructed as well as being able to provide the interviewee with exactly the information they are looking for will certainly enhance your prospects.
*References*

References are among the most popular selection methods, but the problems with them are twofold: first, an applicant is unlikely to provide referees who will criticise them; and second, an increasing awareness of the need to evidence all statements made about previous employees has led to this kind of reference becoming quite bland and reporting only basic factual information. In fact, the best use of references may simply be fact-checking, and it may be even more useful for the employing organisation if it informs candidates at the outset that references will be used to check details of employment or education.

*Psychometrics*

Psychometric means 'measure of the mind' and psychometric tests are widely used in all areas of psychology. In their broadest sense, it could be argued that they are the basis of psychological science; without a way to measure psychological phenomena, psychology as a science would simply not be possible. Within the field of selection or assessment, psychometric tests refer to measures of either *ability* or *personality*, and Smith and Smith (2005, p. 187) define them as 'carefully chosen, systematic, standardised procedures for evoking a sample of responses from a candidate, which are evaluated in a quantifiable, fair and consistent way'.

More and more organisations are using psychometric tests, with a report in 2016 showing a global increase of 18% over the previous three years and over half of all those surveyed indicating that they use online assessments (AON, 2016). In the UK, 41% of organisations use general mental ability tests, and up to 53% use tests of specific skills (CIPD, 2017). There has been a significant increase in the number of organisations using these tests over the last couple of decades, and they are particularly popular in the selection of graduate, managerial and professional candidates.

Psychometric tests do not measure a person's performance on the job, but rather the psychological characteristics that can contribute towards performance. This means that they are more flexible than some other methods, such as work samples, and the same test can be used as part of the selection process for a range of different jobs. Tests are also ideally suited to large-scale selection processes because they are cost-effective and many of them can now be completed online. It is important, as with all other selection methods, that whatever psychological criterion we are assessing with a test has a clear and demonstrable link with job performance. If this link is not present, the test simply becomes an invalid measure. Reputable test publishers will have information on the types of people or jobs for which each of their tests is suitable, as well as publishing appropriate norm groups for score comparisons.

Norm groups are an essential component of interpreting and using psychometrics appropriately. A person's raw score on a test is of very little use on its own; it only becomes useful when we can compare it with other people's scores. Norm groups provide this comparison: they allow us to see how a person scored when compared with others.
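As an illustration of norm-referenced interpretation, the sketch below converts a raw score to a z-score and percentile against a norm group's published mean and standard deviation. The norm figures are invented for the example, and real interpretation should always follow the test manual and its published norm tables.

```python
from statistics import NormalDist

def norm_referenced(raw, norm_mean, norm_sd):
    """Convert a raw test score to a z-score and percentile within a norm group."""
    z = (raw - norm_mean) / norm_sd
    percentile = NormalDist().cdf(z) * 100  # assumes roughly normal norm-group scores
    return z, percentile

# Hypothetical graduate norm group: mean 24, SD 6 on a numerical reasoning test
z, pct = norm_referenced(raw=30, norm_mean=24, norm_sd=6)
print(f"z = {z:.2f}, percentile = {pct:.0f}")  # z = 1.00, percentile = 84
```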
There are some important reasons we should pay attention to norms:

- Remember that, despite all the efforts of test administrators to standardise test-taking, any measure will have a certain amount of error in it due to factors we cannot control. Converting raw test scores using norm groups will help the selectors know whether candidates' test scores are significantly different from each other or should be treated similarly.
- Some tests may have differential validity; that is, they predict work performance more or less accurately for different groups of people. For example, a university graduate's score on a certain test might predict their job performance better than a high school graduate's score. In this case, we would need to interpret their scores with reference to an appropriate norm group before using them as a predictor of future performance.
- If we want to make comparisons between an individual's scores on different tests, the scores will also need to be converted to normed scores. For example, we might want to know whether a graduate applicant's numerical reasoning skills are better than their verbal reasoning skills. Looking at the raw scores will not help with this because the tests are likely to be scored on different scales.

Given the complexities of interpreting and using psychometric scores, it is worth noting that in most countries, people conducting psychometric tests need to have specific qualifications to demonstrate their competence and understanding. In the UK, these qualifications are overseen by the British Psychological Society and there is a register of qualified Test Users that organisations or individuals can consult (http://www.psychtesting.org.uk). Three levels of qualification in psychometric test use are recognised by the European Federation of Psychological Associations, depending on competence and knowledge (British Psychological Society, 2018):

- **Assistant Test User** -- is trained in administering tests within well-defined and specific organisational contexts, for example routine selection procedures.
- **Test User** -- has the knowledge to be able to choose the appropriate test for a specific application and can work independently to interpret test scores.
- **Specialist Test User** -- is an experienced psychologist able to develop tests for occupational settings, advise and provide consultancy on psychometric testing and train other people in test use.

Tests can be classified according to whether they seek to measure maximum performance or typical performance/behaviour. Tests of maximum performance are also known as ability or aptitude tests, while tests of typical performance include personality assessments. We will now look at each of these in more detail.

maximum performance

Tests of maximum performance include cognitive ability tests as well as physical or sensory-motor tests. They aim to measure how well a person can do something, whether physical or mental. The tests can measure broad concepts, such as general intelligence, or specific skills, such as finger dexterity or error detection.

General mental ability (GMA) is a psychometric term for intelligence. Although there is a lot of discussion among theorists over precise definitions of what intelligence actually is, and it is often confused with general knowledge by laypeople, there is good agreement that intelligence is not *what* you know, but what you can *do* with it. Essentially, it is a measure of how accurately and quickly you can process complex information. It is easy to see why it would be important in job performance, particularly for complex jobs or those that require the ability to learn quickly.
GMA is an excellent predictor of both job performance and training success in many countries across the world (Salgado et al., 2003).

More specific tests can be used when we are seeking detailed, job-related abilities and can be measures of mental or physical ability. Common mental abilities that are tested in organisations include numerical, verbal, spatial and mechanical. For example, when recruiting graduates to a managerial position, an organisation might want to use a numerical reasoning test as part of its selection procedure rather than a minimum educational requirement. This would ensure that it did not overlook potentially promising applicants who did not complete a numeracy-focused degree. Physical tests can include sensory-motor and sensory acuity tests. For example, in recruiting people for assembly jobs, an organisation will want to identify those candidates who have good manual dexterity skills, and there are psychometric tests that will enable this. Interestingly, GMA seems to be a better predictor of job performance than more specific tests (Salgado et al., 2003), but candidates tend to view tests with more concrete items as more relevant to the application process than those that are more abstract (Smither et al., 1993). This may be an important consideration for an organisation wishing to make itself more attractive to potential candidates.

**Typical performance**

These types of test include measures of personality, interests and values. Many people make judgements about these aspects when they are conducting an interview, but of course, there is no way of knowing how accurate that judgement might be (look back at the list of subjective biases). Using rigorously constructed psychometric tests enables us to assess these criteria in a more reliable and valid way. These types of tests can be used in selection, just like ability tests, when there is a clear link with performance or trainability.

There are many different personality tests on the market. Tests used in selection tend to be based on a 'trait' model of personality because they enable us to compare scores against an average and make fine discriminations between people. Traits are consistent and enduring patterns of behaviour that these tests can measure on a scale. Some personality tests measure very broad conceptualisations of traits, while others focus on more specific ones. Personality type approaches, while often helpful in development applications, should be avoided for selection for two reasons: first, it is rare that a personality type as a whole is related to job performance; and second, we cannot make fine distinctions between people if they are in mutually exclusive categories.

The most well-supported model of personality, and one that has considerable support in occupational settings, is the Big Five model (Barrick & Mount, 1991). It identifies five broad personality traits:

- Extraversion -- outgoing and confident vs reserved and quiet.
- Agreeableness -- friendly and considerate vs forthright and argumentative.
- Emotional stability -- calm and unemotional vs sensitive and easily upset.
- Conscientiousness -- organised and dependable vs flexible and disorganised.
- Openness to experience -- creative and imaginative vs down to earth and conventional.
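As an illustration of how trait scores are typically produced before they are normed, the following minimal sketch scores a hypothetical Likert-style questionnaire with reverse-keyed items. The items, keys and responses are all invented; real instruments publish their own scoring keys and norm tables.

```python
LIKERT_MAX = 5  # responses run from 1 (strongly disagree) to 5 (strongly agree)

# Invented item keys: each item is assigned to a trait, and reverse-keyed
# items are worded against the trait (e.g. 'I keep quiet in groups' for
# extraversion), so their responses must be flipped before summing.
ITEMS = [
    {"trait": "extraversion", "reverse": False},
    {"trait": "extraversion", "reverse": True},
    {"trait": "conscientiousness", "reverse": False},
    {"trait": "conscientiousness", "reverse": True},
]

responses = [4, 2, 5, 1]  # one respondent's answers, in item order

scores: dict[str, int] = {}
for item, response in zip(ITEMS, responses):
    # Flip reverse-keyed responses (1 <-> 5, 2 <-> 4, ...) so that a high
    # value always indicates more of the trait.
    value = LIKERT_MAX + 1 - response if item["reverse"] else response
    scores[item["trait"]] = scores.get(item["trait"], 0) + value

print(scores)  # raw trait scores, which would then be normed for comparison
```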
**Key Research Study: Validity and utility of selection methods**

The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings
Frank Schmidt and John Hunter (1998)

**Background:** The most important characteristic of any employee selection method is its predictive validity -- namely, how well it can predict a person's future work performance. Validity determines the practical value of a method: the better a method is at predicting high performance, the greater the payoff for the organisation. Previous meta-analyses have shown that the validity of a measure can be generalised across different situations and jobs, so new validity studies do not need to be conducted for every single job.

**Aim:** This study evaluates 85 years of psychological research into the use of different assessment methods for selection, training and development, in an effort to identify which methods have the highest validity.

**Method:** The validities for 19 different selection methods (such as General Mental Ability, Work Samples, Job Experience and even Graphology) were collected from the most recent meta-analyses. The validities reported in this study are therefore based on thousands of studies involving millions of employees.

**Findings:**

- GMA is recommended as the 'primary personnel measure for hiring decisions, and one can consider the remaining 18 personnel measures as supplements to GMA measures' (p. 266).
- GMA is also the best predictor of success in job training.
- While work samples and structured interviews have a similar validity for job performance, they are much more costly to the organisation.
- Job experience has a low validity for predicting performance (remember, this is only the number of years in a similar job; it does not include any information about how well that job was performed), while age is completely unrelated.
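The validity coefficients at the heart of this study are correlations between selection scores and later job performance. Here is a minimal sketch of how such a coefficient is computed, on invented data and without the corrections for range restriction and criterion unreliability that Schmidt and Hunter applied in their meta-analyses.

```python
from statistics import correlation  # available from Python 3.10

# Invented data: selection-stage test scores and later performance ratings
# for eight hires. A real validity study would need a far larger sample
# and the statistical corrections mentioned above.
test_scores = [55, 62, 48, 71, 66, 59, 44, 68]
performance = [3.1, 3.6, 2.8, 4.2, 3.9, 3.2, 2.5, 3.8]

validity = correlation(test_scores, performance)
print(f"Estimated predictive validity: r = {validity:.2f}")
```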
**Psychology and Technology**

**Gamification**

The use of technology in assessment and selection has increased so fast in recent years that it has often outpaced ethical discussions on its use or psychological research on its validity (Chamorro-Premuzic et al., 2016). Game-based assessment (GBA) is an example of one of these very attractive and increasingly popular approaches to assessing people at work.

Gamification is 'using game-based mechanics, aesthetics and game thinking to engage people, motivate action, promote learning, and solve problems' (Kapp, 2012). Game mechanics include aspects such as moving up through levels, earning badges or points and having time constraints, while aesthetics refer to the elements of design that encourage engagement with the game. Game thinking includes whether the game contains elements of competition, exploration, cooperation or storytelling. In the gamification of assessments used for selection, these three principles are applied to try to engage job applicants and motivate them to complete assessments.

There are some misconceptions about GBA, most notably that an assessment can be 'gamified' very easily, for example by simply adding some animations or badges to it. While game elements can be added to assessments fairly simply, for instance by providing a 'progress bar' to give feedback to the respondent about how far through an assessment they are (Armstrong et al., 2016), these simple approaches will certainly not offer the whole range of rewards people are often looking for when using games. Developing good GBAs can be a resource-intensive and expensive process because they need to utilise a range of gaming principles *and* meet strict psychometric standards. A GBA developer needs to employ a range of specialists to ensure their games are developed properly, from psychologists and other scientists through to game designers and software developers.

Another misconception is that gamification can provide 'stealth assessments' because the players become so engaged in playing the game that they pay less attention to the fact they are actually being assessed (Fetzer, 2015). This has led some to claim that 'true' behaviours emerge in the game and the assessment is less affected by social desirability, but there is no evidence that the information captured in gamified assessments is any more 'real' than in traditional assessments. Church and Silzer (2016) caution against the uncritical adoption of new technological trends such as gamification because of the lack of research on their validity. Like any assessment method, game-based applications need to meet psychometric requirements of validity and reliability, and the result they produce is only useful in so far as it is related to important elements of job performance.

**Discussion point**

Try out a demo GBA (for example: Arctic Shores, https://www.arcticshores.com) and analyse it in terms of how it utilises elements of game mechanics, aesthetics and game thinking. How do these elements impact on how you interact with the game?

**A note on 'faking' in psychometrics**

Many people wonder whether psychometric tests can be faked. Given that these tests are commonly used in selection (whether for a job or a development opportunity), the motivation to increase one's chances of success will certainly be high. Ability tests cannot be faked as long as there is sufficient control to ensure that it is actually the candidate who is taking the test. However, there is evidence to show that about 30--50% of job applicants fake personality questionnaires, to an extent that would affect hiring decisions (Griffith et al., 2007). There are now special analyses for most personality questionnaires to indicate how 'socially desirable' someone's responses are. This helps the person analysing the scores to interpret them in relation to how other people have answered and identify whether the respondent may be faking their responses.

However, an interesting question revolves around the extent to which this 'faking' might actually continue into workplace behaviour. You might try to live up to the personality description you gave because you believe the job requires it. There is also recent research confirming that a large proportion of people report having different personalities at home and work. Many report that they behave very differently in the two roles but that both are still part of who they are (Sutton, 2017). While maintaining different 'role' personalities can cause some people significant amounts of stress, what is more important is how *authentic* a person feels. People who have very different role personalities, but nonetheless feel they are being authentically themselves, will not experience the stress of those who feel they are putting on an act at work (Sutton, 2018).
*Biodata*

Biodata is a collection of information about a person's life and work experience, such as qualifications or job history, that is used to predict how well they will perform in a specific job. Biodata questions ask about a person's concrete experience, rather than hypothetical or abstract scenarios (Gunter et al., 1993). For example, a biodata question might ask for someone's reasons for leaving a job or when they left school and then use the scores for these questions to predict future performance. Results from these questions are weighted according to how much they contribute to the desired outcome (e.g. high performance or low absenteeism) and collated much as they would be in a test, to give an overall score. You will probably be familiar with the way this kind of data is used because insurance companies employ it when calculating risks, and therefore what premiums they will charge.

Biodata is best used at the initial screening stage of selection and can be particularly useful for organisations that receive hundreds of applications, as the scoring and calculations, once set up, can be computerised and help to sift through the many applicants. When the biodata predictors are correctly developed (by establishing validity as described earlier in the chapter), they show a reasonable level of predictive validity, on a par with assessment centres and personality tests. Yet as we have seen, biodata is not a popular method of selection. Partly, this is likely to be due to the costs involved in developing validity data for each job; but it is also because these biodata items can have low face validity for applicants (Anderson et al., 2008).
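A minimal sketch of the kind of computerised biodata sift described above. The items and weights are invented for illustration; in practice each weight must be derived from a validation study linking the item to the desired outcome.

```python
# Invented, pre-validated item weights: positive items add to the predicted
# outcome, negative ones subtract (e.g. an item empirically linked to
# early turnover).
WEIGHTS = {
    "years_in_similar_role": 2.0,
    "completed_relevant_training": 5.0,
    "left_last_job_within_a_year": -3.0,
}

applicants = [
    {"id": "A001", "years_in_similar_role": 4,
     "completed_relevant_training": 1, "left_last_job_within_a_year": 0},
    {"id": "A002", "years_in_similar_role": 1,
     "completed_relevant_training": 0, "left_last_job_within_a_year": 1},
    {"id": "A003", "years_in_similar_role": 2,
     "completed_relevant_training": 1, "left_last_job_within_a_year": 0},
]

def biodata_score(applicant: dict) -> float:
    """Weighted sum of the applicant's scored biodata items."""
    return sum(weight * applicant[item] for item, weight in WEIGHTS.items())

# Rank the applicant pool and pass the strongest on to the next stage.
shortlist = sorted(applicants, key=biodata_score, reverse=True)[:2]
print([(a["id"], biodata_score(a)) for a in shortlist])
```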
*Assessment centres*

Although assessment centres are usually listed as a separate type of selection method, they are in fact simply a combination of other methods. They are based on the idea that the best way to accurately assess competencies is to use several different measures and triangulate the measurements. So for a particular job (say, an entry-level managerial role), a list of competencies is drawn up and a series of work sample exercises, psychometric tests and interviews are developed that will allow those competencies to be thoroughly assessed. As with interviews, it is essential that the assessors are properly trained (Lievens, 2001).

While a multi-measurement approach seems like a good idea and there is certainly some evidence showing that assessment centre ratings are related to supervisory performance ratings (Hermelin et al., 2012), research also indicates that there may be problems around the measurement of the competencies across different exercises. Sackett and Dreher (1982) found that ratings across all competencies *within* an exercise were consistent, but that ratings for each competency *across* the different exercises were not. This indicates that the ratings that were given do not actually represent the competencies very well and may explain why assessment centres, despite including several very valid methods of assessment, do not perform as highly overall.

*Work samples*

Work samples consistently show high validity in predicting job performance, as well as having high face validity for candidates. Instead of looking for signs that may indicate a person will be good at a job, work samples can provide evidence of how the person performs in a situation as close to the real job as possible (Wernimont & Campbell, 1968). This has the additional advantage of giving the candidate a realistic preview of what the job will be like, allowing a level of self-selection as well. Work samples are, however, reasonably costly to develop as they need to be specific to a particular job.

Work samples can range from a description of a situation, to which a candidate then has to provide a written response, to a role-play scenario, which may include trained actors. A simple example might be asking candidates for a position involving data input to undertake a data input task and then assessing them for accuracy and speed. However, different types of tasks are needed for jobs where the key performance criteria are more complex:

- **Group exercises** -- candidates take part in a group discussion or task while being observed by assessors. These assessors must, of course, be using a consistent scoring scheme for this approach to be valid. Tasks can include discussions over business strategy, budgets, expansion into new markets, etc.
- **In-tray exercises** -- these work sample exercises typically take the form of a collection of documents, memos, emails, etc. that the candidate has to read through, prioritise and develop an action plan for. They assess how well people can deal with a key part of many managerial and administrative roles.
- **Role-plays** -- these can include scenarios such as giving performance feedback to a subordinate or conducting a disciplinary meeting. The candidates are assessed on how well they take account of the preparatory information as well as their interpersonal skills.
- **Presentations** -- this is a key skill for many jobs and is commonly used as part of a selection procedure. The candidate can either be asked to prepare a presentation before attending the selection event or be given a limited amount of time to prepare within the event.

**Apply**

**Selecting an administrative assistant**

You work in HR, and your manager has tasked you with developing a selection process for a new administrative assistant to work in the department.

- First, identify the key skills and knowledge that you would want to assess in potential applicants. You can look back at the section on job analysis to find suggestions for how to evaluate these or gather information.
- Second, develop an outline of how you will assess each of the KSAOs or competencies you have drawn together.
- Third, consider some of the important details in these selection methods. For example, if you are using an interview, develop some questions and rating scales, noting how they are related to the selection criteria. Or if you would like to use a psychometric test, explore test publisher websites to find an appropriate one.

**DECISION TIME**

We are now in a position to make a decision about the best candidate for a job. Unfortunately, even with excellent job analysis and research-informed choice of selection methods, the decision of who to hire is not simple. For example, do we hire based on the person as they are now or on their potential after a training period? In addition, even the best data are no use if not used well. So at the decision stage we need to consider how to combine or weight data from different sources appropriately, make sure we follow standardised procedures and be confident in our strategy about how the final decision is made.

*Errors in selection decision-making*

The basic error that can be made in selection decisions is offering the job to the 'wrong' person. This can happen in two ways: false positives or false negatives. False positives occur if an applicant is accepted who should have been rejected; that is, they did well in all the selection phases and were offered employment, but performed badly in the job itself. This can be costly, even disastrous, to the organisation and may be difficult to rectify.

False negatives occur if we reject an applicant who should have been accepted. This could be someone who was rejected at some stage of the selection process but would have performed well in the post. The costs here include not only losing out on that applicant's high-quality work, but also the future impact if that applicant goes on to work for a competitor. In addition, if the rejection happens because of bias in selection procedures, there could be legal implications.
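On invented follow-up data, the two kinds of error can be counted directly; the practical difficulty, of course, is that an organisation rarely observes how its rejected applicants would have performed.

```python
# Invented records pairing each selection decision with actual (or, for
# rejected applicants, hypothetical) job performance.
outcomes = [
    {"hired": True,  "performed_well": True},   # correct acceptance
    {"hired": True,  "performed_well": False},  # false positive
    {"hired": False, "performed_well": True},   # false negative
    {"hired": False, "performed_well": False},  # correct rejection
    {"hired": True,  "performed_well": True},
    {"hired": False, "performed_well": True},   # another false negative
]

false_positives = sum(1 for o in outcomes if o["hired"] and not o["performed_well"])
false_negatives = sum(1 for o in outcomes if not o["hired"] and o["performed_well"])
print(f"False positives: {false_positives}, false negatives: {false_negatives}")
```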
As we have seen in this chapter, an important route to avoiding these errors is by choosing the best (most valid and reliable) selection measures that are demonstrably related to job competencies. Once we have these measures, we also need a clear and standardised procedure for combining them all to reach a decision. For example, if a candidate has a barely sufficient educational background but performs very well in interview, is this weighted more positively than a candidate with stellar educational achievements who only had average interview performance? This is where we need to consider how we combine data from different selection methods.

*Combining data from selection methods*

The information from our selection methods can be combined using *holistic* or *mechanical* approaches, as illustrated in Table 2.3.

| **Holistic** | **Mechanical** |
|---|---|
| This approach aims to assess the 'whole person' using human judgement and may adapt or use different assessments with different applicants. | This approach uses standardised administration and objective scoring, ensuring that all applicants complete the same assessments. |
| Information from the different assessments is reviewed to gain an overall impression of the candidate and a decision made based on gut instinct or expert intuition. | There are standard rules for combining the results of these assessments to produce an overall score for the candidates' predicted work performance. |

**Table 2.3** Combining data for selection decisions

Meta-analysis of selection decisions has shown that while the data collection phase of selection tends to be effective, the combination of all this information into a final score is much less so because the decision-makers still prefer using a holistic approach (Kuncel et al., 2013). Mechanical decisions are 50% more effective in predicting job performance than holistic decisions, so unfortunately this preference has a substantial negative impact on the organisation's performance.

At the heart of mechanical selection decisions is how we decide to 'weight' the different elements of the selection assessments. For example, is it more important that our candidates are able to perform well in the interview or on a test of numerical reasoning? In the mechanical approach, this can be done in several ways, such as developing regression equations or assigning weights based on theory or job analysis. Qualified work psychologists can be especially useful to the organisation in helping to develop these methods in ways that are tailored to the specific job and work context.
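A minimal sketch of one mechanical combination rule, on invented figures: each method score is standardised against its norms and then combined as a weighted sum of z-scores. In practice the weights would come from a regression model or from job analysis, as described above.

```python
# Hypothetical norm statistics and weights for three selection methods.
NORMS = {
    "interview": (3.5, 0.6),        # (mean, sd) on a 1-5 rating scale
    "numerical_test": (24.0, 5.0),
    "work_sample": (60.0, 10.0),
}
WEIGHTS = {"interview": 0.3, "numerical_test": 0.3, "work_sample": 0.4}

def composite(raw_scores: dict[str, float]) -> float:
    """Weighted sum of standardised method scores."""
    total = 0.0
    for method, raw in raw_scores.items():
        mean, sd = NORMS[method]
        total += WEIGHTS[method] * (raw - mean) / sd
    return total

candidate_a = {"interview": 4.4, "numerical_test": 22.0, "work_sample": 71.0}
candidate_b = {"interview": 3.2, "numerical_test": 33.0, "work_sample": 65.0}

# The same rule is applied to every candidate, which is what makes the
# decision 'mechanical' rather than holistic.
print(f"A: {composite(candidate_a):+.2f}, B: {composite(candidate_b):+.2f}")
```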
*Strategies for making selection decisions*

Now that we have the final scores -- that is, our best prediction of how well someone will perform the job -- how do we decide who to offer the job to? It might be an easy response to say we offer the job to whoever scores highest, the *top-down* approach. But this is likely to cause problems with adverse impact, especially if we use predictors closely associated with GMA. In addition, all of our methods still involve some level of error: they are not perfect measures of someone's potential job performance. This means that small differences in scores due to error could make a big difference in who is offered the job. One way to take account of this is to use *banding*, which groups candidates into bands within which scores are treated the same. This makes it clear to the decision-maker that all applicants scoring within a band are equally qualified.

Alternatively, scores can be used as *cut-offs*, rejecting candidates below the minimum score, a useful approach when there are several positions that need to be filled at once. The effectiveness of this approach depends on how we choose the cut-off: whether it is the minimum standard needed to perform the job effectively or just a desirable or traditionally expected score. For example, if we have a test of computer programming skills but the organisation is also keen to recruit people who can be trained in this skill, the score on this test should not be used as a minimum cut-off.

If we have several selection methods, the scores can be used as *multiple hurdles*: candidates have to pass one in order to proceed to the next stage. This is quite common, for example, in the use of online application forms followed by psychometric testing and then interviews. It is a cost-effective method for organisations that have large numbers of applicants that they need to filter down quickly. In this case, the cheaper methods of selection can be used first and the more expensive methods kept for later in the process when there are fewer candidates.

Where banding is used or there is a tie in candidates' scores with other methods, the selection panel will have to come to a decision, which needs to be fully documented. Just as with interviews, having more than one person involved in the decision can help to reduce individual bias. If the selection process has been designed well, from job analysis through to choice of selection method, appropriate scoring and combination of data, we can be confident that the scores represent the best prediction of future work performance.
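The cut-off, banding and multiple-hurdle strategies can all be expressed as simple rules. A minimal sketch with invented scores and thresholds; real cut-offs and band widths would be set from job analysis and the standard error of the measures, not guessed.

```python
scores = {"ana": 88.0, "ben": 79.5, "cho": 64.0, "dev": 91.0}

# Cut-off: reject anyone below the minimum standard for the job.
CUT_OFF = 70.0
passed = {name: s for name, s in scores.items() if s >= CUT_OFF}

# Banding: treat scores within a band (here 5 points, in practice derived
# from the standard error of measurement) as equivalent, so that trivial
# differences do not decide the outcome.
BAND_WIDTH = 5.0
top = max(passed.values())
top_band = sorted(name for name, s in passed.items() if s >= top - BAND_WIDTH)
print("Equally qualified top band:", top_band)

# Multiple hurdles: everyone who clears the cheap first stage proceeds to
# the more expensive second stage; the rest are filtered out early.
interview_ratings = {"ana": 4.1, "ben": 3.2, "dev": 3.8}  # second-stage data
INTERVIEW_MINIMUM = 3.5
finalists = [n for n in passed if interview_ratings.get(n, 0) >= INTERVIEW_MINIMUM]
print("Cleared both hurdles:", finalists)
```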
**SUMMARY**

We started this chapter by considering talent management from both a business and a psychological perspective. While the former focuses on why TM is beneficial to the organisation, the latter considers questions around how we can accurately measure talent. We reviewed the central role of competencies here, a topic we will return to in the next chapter.

The following section introduced the concepts of reliability and validity in measuring talent and how we can evaluate selection processes for bias and adverse impact, making recommendations for managers and organisations wishing to avoid these pitfalls. We then considered different methods of identifying talent in the selection process, including interviews, references, psychometric testing, biodata, assessment centres and work samples. One of the takeaway lessons from this chapter is how selection processes have to be closely linked to the specific job for both validity and perceived fairness.

**Case study revisited**

In the opening case study, we saw how the financial services giant JPMorgan had engaged in dubious hiring practices. Initially proposed as a means to develop relationships (*guanxi*) in new markets, the Sons and Daughters programme actually offered jobs to unqualified referrals in exchange for the referee's business. The bank was fined millions of dollars and made commitments to end this kind of unfair hiring.

*Guanxi* (or in English, networks and relationships) is an important part of business. Given what you have learned in this chapter, consider the following questions:

1. Are there situations where you think it is justifiable for an organisation to hire someone for their connections rather than their skills and expertise in a particular job? If so, is there an ethical way to do this?
2. As the concept of *guanxi* has become more widely understood in the West, it can lead to the assumption that most Chinese people would prefer a selection process that utilises it. However, the research reported in the case study shows this is not the case. Are there any common approaches to finding jobs in your own culture that you do not think are fair?
3. To what extent do you think the selection methods that we have reviewed in this chapter, and especially the concepts of reliability and validity of measures, are applicable across different cultures?

**Test yourself**

*Brief review questions*

1. What is a competency framework?
2. Define reliability and explain why it is important in selection.
3. List and describe the different types of validity.
4. What is the difference between behavioural and situational interview questions?
5. How would you define adverse impact?

*Discussion or essay questions*

1. What is talent management and how can it be informed by psychology?
2. To what extent is 'faking' in the job application process a problem for organisations?
3. Evaluate the different methods of selection and make a recommendation as to which is the 'best'.
4. How important is fairness in employee assessment? Do you think there are ever any circumstances when it is justifiable to use an assessment procedure that does not seem to be fair?
**Further reading**

- For more detail on the use of psychometrics in employee assessment, see Smith, M. & Smith, P. (2005). *Testing People at Work*. Oxford: Blackwell.
- There is a significant amount of research on cross-cultural perceptions of fairness in selection, for example this paper comparing the perceptions of people in six different Western cultures: Steiner, D. D., & Gilliland, S. W. (2001). Procedural Justice in Personnel Selection: International and Cross-Cultural Perspectives. *International Journal of Selection and Assessment*, *9*(1--2), 124--37. https://doi.org/10.1111/1468-2389.00169
- For useful and practical suggestions to improve selection decision-making, see Kuncel, N. R. (2008). Some New (and Old) Suggestions for Improving Personnel Selection. *Industrial and Organizational Psychology*, *1*(3), 343--6. https://doi.org/10.1111/j.1754-9434.2008.00059.x

**REFERENCES**

Anderson, N., & Cunningham-Snell, N. (2000). Personnel Selection. In N. Chmiel (Ed.), *Introduction to Work and Organizational Psychology: A European Perspective*. Oxford: Blackwell.

Anderson, N., Salgado, J., Schinkel, S., & Cunningham-Snell, N. (2008). Staffing the Organization: An Introduction to Personnel Selection and Assessment. In N. Chmiel (Ed.), *An Introduction to Work and Organizational Psychology: A European Perspective* (2nd ed.). Oxford: Blackwell.

AON. (2016). *HR's Quest to Predict Success and Get Meaningful Talent Data Fuels Growth in Online Assessment Industry Claims Global Study*. Retrieved April 12, 2019, from https://insights.humancapital.aon.com/talent-assessment-press-releases/hrs-quest-to-predict-success-and-get-meaningful-talent-data-fuels-growth-in-onl

Armstrong, M. B., Ferrell, J. Z., Collmus, A. B., & Landers, R. N. (2016). Correcting Misconceptions About Gamification of Assessment: More Than SJTs and Badges. *Industrial and Organizational Psychology*, *9*(3), 671--677. https://doi.org/10.1017/iop.2016.69

Arvey, R., & Renz, G. (1992). Fairness in the Selection of Employees. *Journal of Business Ethics*, *11*(5--6), 331--340. https://doi.org/10.1007/BF00870545

Barrick, M. R., & Mount, M. K. (1991). The Big Five Personality Dimensions and Job Performance: A Meta-analysis. *Personnel Psychology*, *44*(1), 1--26.

Barrick, M. R., Mount, M. K., & Judge, T. A. (2001). Personality and Performance at the Beginning of the New Millennium: What Do We Know and Where Do We Go Next? *International Journal of Selection and Assessment*, *9*(1--2), 9--30. https://doi.org/10.1111/1468-2389.00160

Bartram, D. (2005). The Great Eight Competencies: A Criterion-Centric Approach to Validation. *Journal of Applied Psychology*, *90*(6), 1185--1203. https://doi.org/10.1037/0021-9010.90.6.1185

Bartram, D., Robertson, I. T., & Callinan, M. (2002). Introduction: A Framework for Examining Organizational Effectiveness. In I. T. Robertson, M. Callinan, & D. Bartram (Eds.), *Organizational Effectiveness* (pp. 1--10). Chichester: Wiley. https://doi.org/10.1002/9780470696736.ch

British Psychological Society. (2018). *The BPS Qualifications in Test Use*. London: BPS.

Campion, M. A., Palmer, D. K., & Campion, J. E. (1997). A Review of Structure in the Selection Interview. *Personnel Psychology*. https://doi.org/10.1111/j.1744-6570.1997.tb00709.x

Chamorro-Premuzic, T., Winsborough, D., Sherman, R. A., & Hogan, R. (2016). New Talent Signals: Shiny New Objects or a Brave New World? *Industrial and Organizational Psychology*, *9*(3), 621--640.
https://doi.org/10.1017/iop.2016.6

Church, A. H., & Silzer, R. (2016). Are We on the Same Wavelength? Four Steps for Moving from Talent Signals to Valid Talent Management Applications. *Industrial and Organizational Psychology*, *9*(3), 645--654. https://doi.org/10.1017/iop.2016.65

CIPD. (2017). Survey Report: Resourcing and Talent Planning 2017. *CIPD Survey Report*, 1--49. https://doi.org/10.1108/PR-12-2012-0212

Collings, D. G. (2014). Toward Mature Talent Management: Beyond Shareholder Value. *Human Resource Development Quarterly*, *25*(3), 301--319. https://doi.org/10.1002/hrdq.21198

Collings, D. G., & Mellahi, K. (2009). Strategic Talent Management: A Review and Research Agenda. *Human Resource Management Review*, *19*(4), 304--313. https://doi.org/10.1016/j.hrmr.2009.04.001

De Corte, W., Lievens, F., & Sackett, P. R. (2007). Combining Predictors to Achieve Optimal Trade-Offs Between Selection Quality and Adverse Impact. *Journal of Applied Psychology*, *92*(5), 1380--1393. https://doi.org/10.1037/0021-9010.92.5.1380

EU SME Centre. (2015). *HR Challenges in China*. https://www.eusmecentre.org.cn/article/hr-challenges-china

Farrell, D., & Grant, A. (2005). *Assessing China's Looming Talent Shortage*. Shanghai: McKinsey Global Institute. https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/China/Addressing%20chinas%20looming%20talent%20shortage/MGI_Looming_talent_shortage_in_China_full_report.ashx

Fetzer, M. (2015). Serious Games for Talent Selection and Development. *TIP: The Industrial-Organizational Psychologist*, *52*(3), 117--125.

Gera, A. (2016, November 17). JPMorgan Agrees to Pay $264 Million Fine for "Sons and Daughters" Hiring Program in China. *Forbes*.

Griffith, R. L., Chmielowski, T., & Yoshita, Y. (2007). Do Applicants Fake? An Examination of the Frequency of Applicant Faking Behavior. *Personnel Review*, *36*(3), 341--355. https://doi.org/10.1108/00483480710731310

Gunter, B., Furnham, A., & Drakely, R. (1993). *Biodata: Biographical Indicators of Business Performance*. London: Routledge.

Hermelin, E., Lievens, F., & Robertson, I. T. (2012). The Validity of Assessment Centres for the Prediction of Supervisory Performance Ratings: A Meta-analysis. *International Journal of Selection and Assessment*, *15*(4), 405--411.

JPMorgan. (2019). *About Us*. JPMorgan Chase & Co. Retrieved April 17, 2019, from https://www.jpmorganchase.com/corporate/About-JPMC/about-us.htm

Kapp, K. M. (2012). *The Gamification of Learning and Instruction: Game-Based Methods and Strategies for Training and Education*. San Francisco: Pfeiffer.

Kristof-Brown, A. L., Zimmerman, R. D., & Johnson, E. C. (2005). Consequences of Individuals' Fit at Work: A Meta-analysis of Person--Job, Person--Organization, Person--Group, and Person--Supervisor Fit. *Personnel Psychology*, *58*(2), 281--342. https://doi.org/10.1111/j.1744-6570.2005.00672.x

Kuncel, N. R., Klieger, D. M., Connelly, B. S., & Ones, D. S. (2013). Mechanical Versus Clinical Data Combination in Selection and Admissions Decisions: A Meta-analysis. *Journal of Applied Psychology*, *98*(6), 1060--1072. https://doi.org/10.1037/a0034156

Latham, G. P., & Saari, L. M. (1984). Do People Do What They Say? Further Studies on the Situational Interview. *Journal of Applied Psychology*, *69*(4), 569--573.

Latham, G. P., Saari, L. M., Pursell, E. D., & Campion, M. A. (1980). The Situational Interview. *Journal of Applied Psychology*, *65*(4), 422--427.

Lewis, R. E., & Heckman, R. J. (2006). Talent Management: A Critical Review.
*Human Resource Management Review*, *16*(2), 139--154. https://doi.org/10.1016/j.hrmr.2006.03.001

Lievens, F. (2001). Assessor Training Strategies and Their Effects on Accuracy, Interrater Reliability, and Discriminant Validity. *Journal of Applied Psychology*, *86*(2), 255--264. https://doi.org/10.1037/0021-9010.86.2.255

Liu, X.-X., Keller, J., & Hong, Y.-Y. (2015). Hiring of Personal Ties: A Cultural Consensus Analysis of China and the United States. *Management and Organization Review*, *11*(1), 145--169. https://doi.org/10.1017/mor.2015.1

Michaels, E., Handfield-Jones, H., & Axelrod, B. (2001). *The War for Talent*. Boston: Harvard Business School Publishing.

Robertson, I. T., & Smith, M. (2001). Personnel Selection. *Journal of Occupational & Organizational Psychology*, *74*(4), 441--472.

Ryan, A. M., McFarland, L., Baron, H., & Page, R. (1999). An International Look at Selection Practices: Nation and Culture as Explanations for Variability in Practice. *Personnel Psychology*, *52*(2), 359--392. https://doi.org/10.1111/j.1744-6570.1999.tb00165.x

Ryan, G., Emmerling, R. J., & Spencer, L. M. (2009). Distinguishing High-Performing European Executives: The Role of Emotional, Social and Cognitive Competencies. *Journal of Management Development*, *28*(9), 859--875. https://doi.org/10.1108/02621710910987692

Sackett, P. R., & Dreher, G. F. (1982). Constructs and Assessment Center Dimensions: Some Troubling Empirical Findings. *Journal of Applied Psychology*, *67*(4), 401--410. https://doi.org/10.1037/0021-9010.67.4.401

Salgado, J. F., Anderson, N., Moscoso, S., Bertua, C., & de Fruyt, F. (2003). International Validity Generalization of GMA and Cognitive Abilities: A European Community Meta-analysis. *Personnel Psychology*, *56*(3), 573--605.

Schmidt, F. L., & Hunter, J. E. (1998). The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings. *Psychological Bulletin*, *124*(2), 262--274. https://doi.org/10.1037/0033-2909.124.2.262

Shackleton, V., & Newell, S. (1994). European Management Selection Methods: A Comparison of Five Countries. *International Journal of Selection and Assessment*, *2*(2), 91--102. https://doi.org/10.1111/j.1468-2389.1994.tb00155.x

Silver-Greenberg, J., & Protess, B. (2013, August 29). JPMorgan Hiring Put China's Elite on an Easy Track. *New York Times*.

Smither, J. W., Reilly, R. R., Millsap, R. E., Pearlman, K. T., & Stoffey, R. W. (1993). Applicant Reactions to Selection Procedures. *Personnel Psychology*, *46*(1), 49--76. https://doi.org/10.1111/j.1744-6570.1993.tb00867.x

Soderquist, K. E., Papalexandris, A., Ioannou, G., & Prastacos, G. (2010). From Task-Based to Competency-Based: A Typology and Process Supporting a Critical HRM T