Game-Related Assessments for Personnel Selection: A Systematic Review


Pedro J. Ramos-Villagrasa, Elena Fernández-del-Río, Ángel Castro


Summary

This article presents a systematic review of the use of game-related assessments (GRAs) in personnel selection. Focusing on the validity of GRAs and on applicant reactions to them, the researchers conclude that GRAs are a promising tool, although more research is needed to fully understand their effectiveness compared to traditional methods. The review also examines the factors involved in designing and implementing effective GRAs and addresses concerns surrounding their use in different contexts.



TYPE Systematic Review
PUBLISHED 28 September 2022
DOI 10.3389/fpsyg.2022.952002
OPEN ACCESS

Game-related assessments for personnel selection: A systematic review

Pedro J. Ramos-Villagrasa 1*, Elena Fernández-del-Río 1 and Ángel Castro 2

1 Department of Psychology and Sociology, Universidad de Zaragoza, Zaragoza, Spain
2 Department of Psychology and Sociology, Universidad de Zaragoza, Teruel, Spain

EDITED BY Kittisak Jermsittiparsert, University of City Island, Cyprus
REVIEWED BY Ioannis Nikolaou, Athens University of Economics and Business, Greece; Konstantina Georgiou, Athens University of Economics and Business, Greece; Pachoke Lert-asavapatra, Suan Sunandha Rajabhat University, Thailand
*CORRESPONDENCE Pedro J. Ramos-Villagrasa, [email protected]

SPECIALTY SECTION This article was submitted to Organizational Psychology, a section of the journal Frontiers in Psychology.
RECEIVED 24 May 2022; ACCEPTED 05 September 2022; PUBLISHED 28 September 2022

CITATION Ramos-Villagrasa PJ, Fernández-del-Río E and Castro Á (2022) Game-related assessments for personnel selection: A systematic review. Front. Psychol. 13:952002. doi: 10.3389/fpsyg.2022.952002

COPYRIGHT © 2022 Ramos-Villagrasa, Fernández-del-Río and Castro. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Industrial development in recent decades has led to using information and communication technologies (ICT) to support personnel selection processes. One of the most notable examples is game-related assessments (GRA), supposedly as accurate as conventional tests but which generate better applicant reactions and reduce the likelihood of adverse impact and faking. However, such claims still lack scientific support. Given practitioners' increasing use of GRA, this article reviews the scientific literature on gamification applied to personnel selection to determine whether the current state of the art supports their use in professional practice and to identify specific aspects on which future research should focus. Following the PRISMA model, a search was carried out in the Web of Science and Scopus databases, identifying 34 valid articles, of which 85.3% are empirical studies that analyze five areas: (1) validity; (2) applicant reactions; (3) design of GRA; (4) personal characteristics and GRA; and (5) adverse impact and faking. Together, these studies show that GRA can be used in personnel selection but that the supposed advantages of GRA over conventional tests are fewer than imagined. The results also suggest several aspects on which research should focus (e.g., construct validity, differences depending on the type of game, prediction of different job performance dimensions), which could help define the situations in which the use of GRA may be recommended.

KEYWORDS personnel selection, gamification, serious games, job performance, applicant reactions, game-based assessment

Introduction

The industrial development of recent decades has led to the emergence of digital selection procedures, that is, any use of Information and Communication Technologies (ICT) to improve the personnel selection process (Woods et al., 2020).
The incorporation of technology into selection has been remarkably successful, but research on this topic is still very scarce compared to its rapid adoption by professionals (Chamorro-Premuzic et al., 2017; Nikolaou, 2021). Frontiers in Psychology 01 frontiersin.org Ramos-Villagrasa et al. 10.3389/fpsyg.2022.952002 TABLE 1 Taxonomy of elements that make up the gaming experience. Digital selection procedures go beyond a mere change to technology-based assessment (e.g., face-to-face interview vs. Category interview by videoconference). Instead, they may involve changes 1. Action language. How the interaction between person and machine occurs in the assessment formats, the evaluation of work-performance (e.g., pointing and pressing, scrolling with the keys). predictors, and test correction (Tippins, 2015). One of the most 2. Assessment. How game information and goal achievement are recorded (e.g., noteworthy examples is gamification and game-related scores, progress bars). assessments (GRA; Woods et al., 2020). 3. Conflict / Challenge. Difficulty of the game, the type of problems that players Gamification consists of incorporating game elements into must face, and the degree of uncertainty or surprise when encountering such nongaming contexts (Nacke and Deterding, 2017), whereas problems. GRA are assessments based on gamification. Interest in GRA 4. Control. Variety of actions that the player can deploy (i.e., agency). for personnel psychology is now greater than ever: we have the 5. Environment. The place where the action of the game occurs and the player is recent reviews on technology applied to human resources in situated. which this technique has its own section (cfr. Tippins, 2015; 6. Game fiction. 
Degree of realism, whether the player is knowledgeable about Woods et al., 2020; Nikolaou, 2021); the most recent conferences the game world and whether the player’s actions within the game are of SIOP and EAWOP includes four and three presentations represented directly or indirectly. about games and personnel selection, respectively; and the last 7. Human interaction. Whether there is interaction between players and what volume of International Journal of Selection and Assessment type (e.g., comparative ranking, player vs. player matches). publishes a special issue dedicated to this topic. Although GRA 8. Immersion. To what extent the game contains perceptual elements that seem to be inextricably linked to technology (e.g., Tippins, encourage the player to immerse themselves in the game. 2015; Landers and Sanchez, 2022), game-related evaluations 9. Rules/Goals. The game has clear rules known to the player. that do not require the use of ICT can be designed and applied Adapted from Bedwell et al.’s (2012). (Melchers and Basch, 2022), for example, escape rooms, which can be developed without ICT (e.g., Connelly et al., 2018). However, successful worldwide games for personnel selection From a theoretical point of view, GRA are applications of are technology-based (e.g., Nawaiam, Owiwi, Wasabi Waiter), gamification science, “a social scientific, post-positivist and until now, research has been practically based only on subdiscipline of game science that explores the various design them. Hence, this article also focuses on technology- techniques and related concerns that can be used to add game based GRA. elements to existing real-world processes” (Landers et al., 2018, The use of GRA in personnel selection is growing because p. 318). 
An example is the work setting (Armstrong and Landers, they appear to reduce the risk of faking and improve candidates’ 2018) and its processes, such as recruitment (Korn et al., 2018), reactions without a substantial loss of predictive validity (Melchers selection (Hommel et al., 2022), or training (Armstrong and and Basch, 2022; Wu et al., 2022). This systematic review article Landers, 2017). was born within this context. However, the increasing use of GRA However, gamification does not reflect the different by personnel selection professionals does not necessarily imply approaches to the relationship between play and human resources, that their use is recommended. We need scientific evidence to as it can generate confusion between researchers and personnel support the equivalence of GRA to conventional selection selection professionals. To avoid this, Landers and Sanchez (2022) methods and determine whether they provide added value have proposed differentiating three terms: game-based assessment, (Nikolaou et al., 2019). This issue is relevant because the selection gameful design assessment, and gamification assessment. Game- processes must meet psychometric requirements and comply with based assessment refers to an evaluation method, while the other the legality and the promotion of applicants’ positive reactions two terms refer to the strategy used when designing evaluation (Salgado et al., 2017). Therefore, we propose the present systematic tests. We will define each of them following these authors’ review to determine the possible favorable evidence for GRA use proposal, qualifying it when necessary to establish a complete in professional practice and to analyze different types of GRA to taxonomy. guide future research. Game-based assessment refers to a selection method, that is, it measures a wide range of job-related constructs through games (Wu et al., 2022). 
Within game-based assessment, Game-related assessments: Concepts we could also differentiate between theory-driven games, and classification designed to evaluate constructs that are related to job performance, and data-driven games, where game scores are GRA are based on games. What elements characterize a game? related to the criterion instead of the constructs’ psychological Following Landers et al. (2018), they are the constructs that make entity (Landers et al., 2021; Auer et al., 2022). An example of up the play experience under different taxonomies. Bedwell et al.’s game-based assessment is Virus Slayer (Wiernik et al., 2022), a (2012) taxonomy is one of the most accepted in the organizational serious game to assess candidates for cyber occupations in the field, establishing nine categories described in Table 1. United States Air Force (USAF). Frontiers in Psychology 02 frontiersin.org Ramos-Villagrasa et al. 10.3389/fpsyg.2022.952002 Gameful design assessment consists of using game elements are not influenced by their personal characteristics (e.g., sex, to design a new assessment, as in the case of Owiwi, a situational experience with video games) or, if this occurs, knowing how to judgment test to evaluate professional skills to which game correct this effect when estimating the scores. Moreover, GRA elements have been added, such as the choice of a character, a should promote positive applicant reactions. narrative, etc. (Georgiou et al., 2019). The influence of GRA characteristics on assessment results Gamification assessment is a redesign strategy based on an must yet be explored because the use of GRA in selection is still in existing assessment test to which game elements are added, its infancy (Landers and Sanchez, 2022). However, there are modifying it in some way. 
An example of this strategy is Hommel enough studies to evaluate them concurrently and identify which et al.’s (2022) modification of the Wisconsin Card Sorting Test by issues future GRA research will need to address. incorporating a narrative context, the possibility of earning points, Bearing in mind both issues, rigor in the evaluation and the and a progression graph during the game. particular characteristics of each GRA, we propose to review the Landers and Sanchez (2022) focus on games developed to developing scientific literature on GRA applied to personnel evaluate what other classifications have called serious games selection with two objectives: (1) to determine whether the (Wiernik et al., 2022). However, conceptually, we can also include current state of the art supports their use in professional practice; the possibility of using conventional games to gather information (2) to identify specific aspects on which future research should about specific abilities, such as general cognitive ability (Quiroga focus. This will mitigate the general public’s misgivings concerning et al., 2019; Peters et al., 2021). Therefore, we propose to call this this new form of evaluation (al-Qallawi and Raghavan, 2022) and, second type of GRA playful games. at the same time, it will help to clarify the incipient research on From Landers and Sanchez’s (2022) terms, we propose a games-related assessment, which so far has shown some classification of GRA. As shown in Figure 1, the classification inconsistency, for example with the use of terms (Landers and begins on a continuum: at one end are the traditional assessments Sanchez, 2022). (e.g., tests, simulations), and at the other end, the playful games created for fun. 
Thus, we distinguish at least four types of GRA: (1) gamified assessment (e.g., Hommel et al., 2022); (2) gamefully Materials and methods designed assessment (e.g., Georgiou, 2021); (3) game-based assessments (e.g., Wiernik et al., 2022); and (4) playful games used Inclusion criteria for assessment purposes (e.g., Sanchez et al., 2022). It has been hypothesized that if the assessment test is presented Three inclusion criteria were established before conducting as a game (i.e., the closer it is to the playfulness extreme in the review: (1) we would accept only published papers; (2) Figure 1), the applicants’ motivation will increase (Coovert et al., written only in English or Spanish; and (3) focused on 2020), their propensity to fake will decrease, as will their tendency technology-based GRA for personnel selection. There were no to offer a better self-image, because they are encouraged to engage restrictions on participants’ populations, geographical or in the game (Landers and Sanchez, 2022). Moreover, the game will cultural origin, research design, or period in which studies elicit better reactions from the applicants, such as those referring were published. to organizational attractiveness (Gkorezis et al., 2021). For all the above, this classification is functional. Literature search Requirements for the use of We followed the PRISMA statement for this review (Page game-related assessments in personnel et al., 2021) and the guidelines based on MARS developed by selection Schalken and Rietbergen (2017), using Web of Science (WoS) and Scopus as databases. The keywords used were [“personnel Although GRA are growing among professionals, we must selection”] and [“gamification” OR “gamified” OR “serious game” be cautious when recommending its use. Therefore, research OR “game”] in the field “topics” in WoS, and as “Title, Abstract, should provide empirical evidence to support the rigor of GRA, and Keywords” in Scopus. 
The search was performed in March considering the influence of the type of GRA on the results 2022. A total of 105 results were found in WoS and Scopus. (Chamorro-Premuzic et al., 2017; Landers and Sanchez, 2022). Following journal guidelines on systematic reviews, we only Concerning the rigor of the assessments, GRA used for considered published studies. personnel selection must meet psychometric standards (Salgado After removing duplicates, 113 articles remained. et al., 2017; Landers et al., 2021; Wiernik et al., 2022): (1) Screening and coding were performed by the first author, acceptable reliability to ensure consistency in the measure; (2) whose qualification is a Ph.D. in Work and Organizational construct validity, verifying that GRA measure what is meant to Psychology. Screening was based on title and abstract and be measured; (3) predictive validity, to predict the criterion (e.g., provided 16 suitable articles according to our inclusion job performance); (4) freedom from bias, so that applicants’ scores criteria. After reading the whole article, one was removed (i.e., Frontiers in Psychology 03 frontiersin.org Ramos-Villagrasa et al. 10.3389/fpsyg.2022.952002 FIGURE 1 A classification of game-related assessments. Connelly et al., 2018) because it was not related to technology- based GRA. The remaining papers were included in the final Theoretical articles analysis. Articles from three additional sources were also incorporated: (1) eight papers from the special issue on Concerning the theoretical articles, Armstrong et al. (2016) gamification applied to personnel selection from the are the first to delimit GRA. According to them, GRA are more International Journal of Selection and Assessment, published in than just digital versions of situational judgment tests. 
They March 2022 (some papers from the special issue were already outline the need to establish Industrial-Organizational Psychology included in the database search, the remaining were published literature on gamification and its increase among practitioners: after the search had been performed); (2) four papers known to the authors of this review which were not detected by the Gamification of assessment will not disappear from practice, search due their title, abstract, or keywords; (3) three relevant just as people will not stop using the Internet, mobile devices, papers discovered in the references at the full-text reading or video-based interviews [...] By first understanding stage; and (4) three articles suggested by one of the journal gamification, I-Os can then apply theory to gamification in reviewers, two that were not found in the database search, and order to improve applicant and employee assessment in ways one that was ahead-of-print after the search was performed. that matter to firms and test takers. (p. 676) Thus, the final number of articles was 34 (see Figure 2 for a diagram describing the whole process and Of the remaining theoretical articles, two were reviews on the Supplementary Material 2 for the list of articles included). All role of ICTs for personnel selection (i.e., Woods et al., 2020; articles were written in English except for one in Spanish (i.e., Nikolaou, 2021). In both of them, GRA are currently presented as Albadán et al., 2016). One of the articles was obtained by one of the areas of greatest interest due to the rise of digital contact with the correspondence author. selection procedures. We highlight GRA’s potential advantages (better psychometric characteristics and applicant reactions, less faking, social desirability, and bias) and the existence of Results emerging studies. The article of Küpper et al. 
(2021) has a different goal: they As a first approximation, the 34 articles identified in the search propose a conceptual framework that explains how serious games were classified as theoretical or empirical. In empirical articles, the can be used for employer branding purposes, a reasonable goal type of GRA analyzed was identified according to the classification given the alleged relationship between GRA and applicant presented in Figure 1. As shown in Table 2, most of the research reactions. Their framework considers game-specific factors (e.g., was empirical (85.3%) and was carried out based on the four types game genre, level of realism), player-specific factors (e.g., self- of GRA identified. Most articles dealt with gamified assessments perceived innovativeness, prior application experience), and (29.4% of the total articles included in the review). Interest in learning (cognitive and affective) as antecedents of three types of GRA has been growing in recent years, with one article published employer branding outcomes. Although suggestive, the model both in 2012 and 2018, two published in 2016 and 2017, three in requires empirical validation. 2019, five in 2020, six in 2021, and fourteen in 2022 (year of These articles are an excellent introduction to the article by publication of the special issue of The International Journal of Landers and Sanchez (2022), prepared as an editorial for the Selection and Assessment). aforementioned special issue in The International Journal of Frontiers in Psychology 04 frontiersin.org Ramos-Villagrasa et al. 10.3389/fpsyg.2022.952002 FIGURE 2 PRISMA flow diagram for the systematic review. Selection and Assessment. They present the articles included in the Supplementary Material 3. 
We classified their findings into five issue, clarify the above-mentioned concepts (i.e., game-based areas: (1) validity; (2) applicant reactions; (3) design of GRA; (4) assessment, gameful design assessment, gamification assessment), personal characteristics and GRA; and (5) adverse impact and explain the core gameplay loop, which helps understand the faking. Next, we will discuss each of these categories in detail. A gaming experience, and provide some guidelines for the design summary of the findings by category is presented in Table 3. of game-based assessments. They also propose explanatory Reference to GRA types follows the classification shown in models of how GRA influences applicant reactions and reduces Figure 1. faking behavior. However, as with the model of Küpper et al. (2021), empirical validation is still necessary. Validity Research on the validity of GRA has fundamentally addressed two issues, construct validity and predictive validity, although Empirical articles we also found one study on discriminant validity. Most of the empirical articles are cross-sectional and use Construct validity student samples. Specific information about each article (type of Most of the articles on construct validity focus on personality, design, sample, GRA used, constructs evaluated) is presented in although analyzing very diverse issues. First, Hilliard et al. (2022) Frontiers in Psychology 05 frontiersin.org Ramos-Villagrasa et al. 10.3389/fpsyg.2022.952002 TABLE 2 Classification of articles identified in the systematic review. into a different instrument, with its advantages and disadvantages, Type of article Type of GRA Reference concluding that this must be considered when gamifying a test Theoretical Not applicable Armstrong et al. (2016) through storyfication. On the other hand, Harman and Brown Küpper et al. 
(2021) (2022) examine whether including evocative images in a text- Landers and Sanchez based game (i.e., McCord et al., 2019) will improve its construct (2022) validity. Their results show that, as with the original version, the Nikolaou (2021) associations with the Big Five measured with a conventional test Woods et al. (2020) are modest, and there are no significant differences between the Empirical Gamified assessment Collmus and Landers two versions of the game. We also include herein the work of (2019) Sanchez et al. (2022), who use playful games to evaluate sensation- Hilliard et al. (2022) seeking, height aversion, and risk-taking, only finding a Hommel et al. (2022) relationship between the game scores when evaluating the first Buil et al. (2020) two constructs and openness to experience. The latest study on Ellison et al. (2020) construct validity, specifically on personality, offers unfavorable Harman and Brown results: Wu et al. (2022) develop two game-based assessments to (2022) evaluate facets of conscientiousness but which really assess Landers and Collmus cognitive ability and the remaining Big Five. To their surprise, (2022) they find that the games they used actually measure cognitive Landers et al. (2020) ability (verbal ability and matrix reasoning) better than the Big Laumer et al. (2012) Five. In conclusion, they recommend that game-based assessments McChesney et al. (2022) be designed to consider the possible contamination of the Gamefully designed Brown et al. (2022) information collected, as occurs with situational judgment tests assessment Georgiou (2021) and assessment centers. Georgiou et al. (2019) The rest of the articles on construct validity analyze cognitive Georgiou and Lievens ability, competences, and emotional intelligence. Concerning (2022) cognitive ability, Auer et al. 
(2022) investigate whether the large Georgiou and Nikolaou amount of data generated by playing a game (i.e., trace data (2020) modeling) can predict cognitive ability and conscientiousness and Gkorezis et al. (2021) whether these data have an incremental value compared to using Nikolaou et al. (2019) only the score generated by the game for prediction. Their results Game-based assessment Albadán et al. (2016) show that trace data modeling predicts cognitive ability but not theory-driven Auer et al. (2022) conscientiousness, and they delve into the difficulties encountered Landers et al. (2021) in assessing personality with game-based assessments. Wiernik et al. (2022) Concerning competences, Georgiou et al. (2019) analyzed the Wu et al. (2022) construct validity of Owiwi, a gamefully designed assessment for Unknown Egol et al. (2017) the evaluation of key competences in the workplace (i.e., resilience, Melchers and Basch flexibility, adaptability, and decision-making). The authors carry (2022) out two studies, the first to develop the scenarios that will be part Playful game Sanchez et al. (2022) of the test with the collaboration of 20 human resources experts, Other/Not applicable Albadán et al. (2018) and the second to validate the test in a sample of 321 university al-Qallawi and Raghavan students and replicate it in a sample of 410 workers and people in (2022) the process of job-seeking. Their results indicate that Owiwi shows Balcerak and Woźniak adequate content validity. Landers et al. (2020) also research (2021) competences, showing that the inclusion of game elements in a Formica et al. (2017) situational judgment test (control, immersion, interaction) does not substantially affect the construct validity. As for emotional intelligence, the two existing pieces of design a test to evaluate the Big Five based on items in which the research show limited support. Brown et al. (2022) propose a applicant must choose the image they think best describes them. 
gamefully designed assessment in which the social interactions The results show good levels of convergent validity with a that make up the items are performed by abstract shapes. The conventional personality questionnaire. The test elaborated game scores show a moderate association with a situational through storyfication by Landers and Collmus (2022) also gives judgment test. Sanchez et al. (2022) examine whether a playful samples of construct validity. However, the authors acknowledge game of virtual reality can be used to evaluate emotional that modifying the original test has turned the gamified version intelligence, finding a moderate relationship with a conventional Frontiers in Psychology 06 frontiersin.org Ramos-Villagrasa et al. 10.3389/fpsyg.2022.952002 test. In fact, the associations with measures of personality 2021; Auer et al., 2022; Hommel et al., 2022). However, Landers turned out to be higher than the associations with emotional and Collmus’ (2022) attempt to gamify a personality measure intelligence. through storyfication does not find a relationship with the grade Taken together, the research on the construct validity of GRA point average. Focusing on task performance, two of the above- shows inconclusive results, underscoring the importance of game mentioned works include small samples of workers where this design to evaluate adequately what one intends to evaluate. relationship with self-reported job performance is found (Nikolaou et al., 2019) and, in another case, with supervisory Predictive validity ratings (Landers et al., 2021). Also, Melchers and Basch (2022) While research on construct validity has yielded mixed results, find positive associations between the scores of more than one research on predictive validity is more promising. Thus, using thousand applicants in a business simulation GRA and job-related samples composed totally or mainly of students, a relationship has performance in an assessment center. 
been found between various GRA (i.e., Cognify, Owiwi, Wasabi Waiter, and Wisconsin Card Sorting Test) and academic performance (Egol et al., 2017; Nikolaou et al., 2019; Landers et al., 2021).

TABLE 3 Main areas and findings of empirical research on GRA.

1a. Construct validity
1. Studies on the construct validity of GRA show inconclusive results; game design seems to have a significant influence on validity (Auer et al., 2022; Brown et al., 2022; Georgiou et al., 2019; Harman and Brown, 2022; Hilliard et al., 2022; Landers and Collmus, 2022; Sanchez et al., 2022; Wu et al., 2022).
2. Most studies on construct validity concern GRA that measure personality; however, better results have been obtained when evaluating cognitive ability and competences (Auer et al., 2022; Georgiou et al., 2019; Harman and Brown, 2022; Hilliard et al., 2022; Landers and Collmus, 2022; Landers et al., 2020; Wu et al., 2022).

1b. Predictive validity
1. GRA can predict the criterion; the evidence has focused on academic performance and task performance (Auer et al., 2022; Egol et al., 2017; Hommel et al., 2022; Landers et al., 2021; Melchers and Basch, 2022; Nikolaou et al., 2019).
2. There is evidence of incremental validity of GRA over traditional tests (Nikolaou et al., 2019; Landers et al., 2021).

1c. Discriminant validity
1. The sole study shows that the GRA analyzed has discriminant validity (Wiernik et al., 2022).

2. Applicant reactions
1. GRA promote positive reactions in applicants, especially concerning organizational attractiveness (al-Qallawi and Raghavan, 2022; Balcerak and Woźniak, 2021; Collmus and Landers, 2019; Georgiou, 2021; Georgiou and Lievens, 2022; Georgiou and Nikolaou, 2020; Gkorezis et al., 2021; Harman and Brown, 2022; Hommel et al., 2022; Landers et al., 2021; Landers and Collmus, 2022).
2. The perceived organizational attractiveness of being assessed with GRA is due, at least in part, to the effect that the enjoyment and flow of the game have on the applicant's perception of how innovative and competent the organization is (Georgiou and Lievens, 2022).
3. Negative reactions to GRA are usually related to specific aspects of the technology (bugs, connection errors, etc.) and not to the content of the test itself (al-Qallawi and Raghavan, 2022).
4. Indicating that a test is a game (game-framing), even when it is not, improves applicants' reactions to the test (Collmus and Landers, 2019; McChesney et al., 2022).
5. GRA are usually valued more highly than conventional selection methods, except in the case of job-relatedness; the magnitude of this advantage does not seem to be large, and there are cultural differences (Balcerak and Woźniak, 2021; Collmus and Landers, 2019; Georgiou and Nikolaou, 2020; Georgiou, 2021; Harman and Brown, 2022; Hommel et al., 2022; Landers and Collmus, 2022; Landers et al., 2020, 2021).
6. Providing explanations to applicants before administering a GRA is advisable to increase positive reactions (Georgiou, 2021).
7. Some personal and GRA characteristics have a positive impact on applicant reactions: being male, having experience playing video games, high self-efficacy for technology, and perceived utility, fairness, fun, and ease of use (Buil et al., 2020; Ellison et al., 2020; Georgiou and Nikolaou, 2020; Gkorezis et al., 2021; Laumer et al., 2012; McChesney et al., 2022).

3. Design of GRA
1. It is possible to design theory-driven GRA (Landers et al., 2021).
2. It is preferable to use GRA developed for evaluation purposes rather than playful games (Sanchez et al., 2022).
3. The use of virtual reality is only recommended when it adds value to the evaluation (Sanchez et al., 2022).
4. Considerable data are generated during the game and can be analyzed in various ways; using these data to make assessments with multiple predictors improves GRA outcomes as an assessment test, and estimating reliability by means of test–retest is recommended in these cases (Auer et al., 2022; Albadán et al., 2016, 2018; Sanchez et al., 2022; Wiernik et al., 2022; Wu et al., 2022).

4. Personal characteristics
1. Although being male and young is thought to be associated with better outcomes in GRA, studies that confirm this difference often find modest effects that are probably irrelevant in a real context (Melchers and Basch, 2022).
2. Education, experience with computers, and self-efficacy for video games may influence GRA scores; in addition, people who regularly play video games show greater emotional stability (Formica et al., 2017; Hommel et al., 2022; Sanchez et al., 2022; Wiernik et al., 2022).
3. There is a relationship between the Big Five and GRA scores, but the specific relationship varies depending on the type of game (Sanchez et al., 2022; Wu et al., 2022).

5. Adverse impact and faking
1. At the very least, GRA have no more adverse impact than a conventional test, and some studies find more positive results; the only study that analyzes faking reports better results than the original conventional test (Brown et al., 2022; Hilliard et al., 2022; Landers et al., 2021; Landers and Collmus, 2022).

As GRA seem to show predictive validity, the next question is whether they show incremental validity compared to traditional tests. In this sense, Nikolaou et al. (2019) find that, when cognitive ability and personality are also evaluated, the GRA measuring competences (i.e., Owiwi) only predicts academic performance. Landers et al. (2021) obtained a similar result, finding that Cognify, a game to evaluate cognitive ability, has a cumulative effect on the prediction of academic performance if a cognitive ability test is added to the game score, but not vice versa.
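The incremental-validity comparisons discussed in this section (e.g., Nikolaou et al., 2019; Landers et al., 2021) follow the usual hierarchical regression logic: regress the criterion on the conventional test alone, then add the GRA score and examine the change in R². A minimal sketch of that comparison, using simulated data and hypothetical variable names (not data from any of the reviewed studies):

```python
# Hierarchical-regression sketch of incremental validity (simulated data).
# None of these numbers come from the reviewed studies; they only
# illustrate the order-of-entry logic: conventional test first, GRA second.
import numpy as np

rng = np.random.default_rng(0)
n = 200
cognitive = rng.normal(size=n)                           # conventional cognitive ability test
gra = 0.6 * cognitive + rng.normal(scale=0.8, size=n)    # hypothetical game-based score
performance = 0.5 * cognitive + 0.2 * gra + rng.normal(size=n)  # criterion

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_base = r_squared([cognitive], performance)        # conventional test only
r2_full = r_squared([cognitive, gra], performance)   # conventional test + GRA
delta_r2 = r2_full - r2_base                         # incremental validity of the GRA
print(f"R2 base={r2_base:.3f}, full={r2_full:.3f}, delta={delta_r2:.3f}")
```

With real applicant data, the ΔR² would be evaluated with an F-test, and the order of entry would also be reversed to test the complementary claim (whether the conventional test adds to the GRA), which is the comparison reported by Landers et al. (2021).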
Discriminant validity

Wiernik et al.'s (2022) study is the only one on this type of validity. Their article documents the creation of the game-based assessment Virus Slayer. This game evaluates six competences relevant to the USAF: analytical thinking, active learning, deductive reasoning, systems thinking, adaptability, and situational awareness. Their results show that the game has adequate discriminant validity, which can be improved by estimating the scores with three different types of information: multiple gameplay phases, diverse game behavioral indicators, and residualized game behavioral indicators.

Applicant reactions

Undoubtedly, most research has focused on the category of applicant reactions. In fact, more than half of the empirical research on GRA deals with this issue. Thus, we can confirm that GRA promote positive reactions in applicants (Georgiou, 2021; al-Qallawi and Raghavan, 2022; Georgiou and Lievens, 2022) and tend to be better valued than conventional selection methods (Collmus and Landers, 2019; Georgiou and Nikolaou, 2020; Landers et al., 2021; Harman and Brown, 2022; Hommel et al., 2022). It is noteworthy that this result seems consistent across the different types of GRA, as these investigations have been conducted with very diverse games, ranging from gamified traditional assessments to serious games.

Delving into these investigations, we can qualify this general idea. Firstly, we recognize the importance of framing: the mere fact of defining an online evaluation as a game improves applicants' reactions, as they consider the organization to be more innovative and attractive (McChesney et al., 2022) and the test to be shorter (Collmus and Landers, 2019).

On the other hand, we must acknowledge that not all reactions are positive. Thus, al-Qallawi and Raghavan (2022) study the reactions generated by nine serious games published in mobile application stores (App Store and Google Play). Through a qualitative approach, using natural language processing, they identify a general tendency to value GRA positively. Negative reactions are due to specific technology-related aspects, such as the presence of bugs or the game's design, and not to the evaluation itself. In addition, they find that negative reviews are often made by people who distrust GRA as an evaluation method. In contrast, Landers and Collmus (2022) evaluate the reactions to a conventional personality measure and a gamified one created by introducing a narrative (storyfication), finding better results for the GRA, except for face validity.

Another necessary qualification is that not all types of applicant reactions (e.g., fairness, satisfaction) are valued equally. Organizational attractiveness appears to experience the most significant growth (Georgiou and Nikolaou, 2020; Gkorezis et al., 2021). However, in other investigations with the same gamefully designed assessment, Georgiou (2021) finds that the game is perceived as less job-related than its conventional counterparts. In another study reported in the same article, Georgiou shows that providing explanations to applicants has a more positive effect on their reactions to GRA than to conventional tests. In any event, two studies by Landers et al. (2020, 2021) find that, compared with conventional tests, the gain of GRA is minimal, inviting us to reflect on the magnitude of the improvement involved when using GRA. In addition, as perceptions of justice are subject to cultural differences (Anderson et al., 2010), it is also interesting to consider research conducted outside the Anglo-Saxon context. In this regard, Balcerak and Woźniak (2021) conduct a study in Poland where they analyze the reactions to different traditional selection methods compared to modern, technology-based ones. Unlike the rest of the research, their results show a clear preference for traditional methods, although it is noteworthy that, of the new methods, GRA are the best valued after the e-interview.

The relationship between personal characteristics and applicant reactions to GRA has also been the subject of research. Ellison et al. (2020) find that being male, having high self-efficacy beliefs for technology, and perceived fairness influence reactions. Along the same lines, Gkorezis et al. (2021) find that GRA make the company seem more attractive, but only if the participants have previous experience playing video games. On the other hand, Buil et al. (2020) propose a theoretical model in which personal characteristics (i.e., competence and autonomy in the use of ICT) influence intrinsic motivation, which, in turn, influences applicant reactions. Their results support the proposed model, finding relationships for all the variables except that between intrinsic motivation and perceived usefulness. In contrast, the openness to experience trait does not influence the attractiveness of GRA for candidates (Georgiou and Nikolaou, 2020; McChesney et al., 2022). Finally, Laumer et al. (2012) propose the possibility of using GRA as a tool for candidates' self-evaluation to decide whether to apply for a job. Using a sample of 1,882 job-seekers, they find that the decision to resort to this game for self-assessment is based on the perception of: (1) ease of use; (2) utility; (3) fun; and (4) fairness in the selection process. It is noteworthy that they do not find any influence of the perception of privacy, although this issue has repeatedly worried researchers (Tippins, 2015).

The underlying mechanisms by which GRA exert a positive effect on applicant reactions have recently begun to be explored. Using a longitudinal study and an experiment, Georgiou and Lievens (2022) find that the enjoyment and flow of GRA caused applicants to perceive the organization as more innovative and competent and, consequently, more attractive.

Design of game-related assessments

The design of GRA has also been of interest to researchers, although with less emphasis and much more diverse studies. Firstly, we highlight the work of Landers et al.
(2021), who explain and illustrate how to design theory-driven game-based assessments based on research on game design and psychometrics. This is a good guide for future researchers and practitioners. According to these authors, serious game developers may use design thinking theory, taken from the human-computer interaction literature. Design thinking proposes five stages for the development of game-based assessment that may be iterated until the final version of the game is reached: (1) empathizing, in which the constructs to be evaluated are identified (e.g., using job analysis); (2) definition, in which the actual application context of the game is defined and the developers try to solve technical problems (e.g., supported devices, minimum requirements); (3) ideating, in which the assessment and technical teams build a shared mental model to develop a useful prototype; (4) prototyping, in which the teams create the planned product for trial, either in the form of a low- or a high-fidelity prototype; and (5) testing, in which they assess the degree to which the game meets the pre-established goals (e.g., reliability, validity, reactions).

On the other hand, the study of Sanchez et al. (2022) is the only one that focuses on the use of playful games for selection, in particular, commercial virtual reality video games to evaluate performance-related constructs (e.g., emotional intelligence, risk-taking). They find very limited support for the use of these GRA, concluding that it is better to use tests designed specifically for evaluation purposes. In the particular case of virtual reality, they recommend using it only when its particularities offer some advantage to the evaluation that cannot be obtained by other means.

Another issue related to the design of GRA is the possibility of taking advantage of the data generated while playing. In this sense, as already mentioned regarding validity, Auer et al. (2022) show the options of trace data modeling to evaluate different predictors and their relationship with the criterion. As far as design is concerned, they find that using this additional information improves prediction compared to using the game score exclusively. In their conclusions, they also highlight the importance of the design phase of the GRA, clearly defining the constructs one wants to evaluate. The contributions of Albadán et al. (2016, 2018) align with this issue. In 2016, they proposed a video game to select senior management personnel, in which the player manages a herd of animals and faces different random events. Although their design is theory-driven, they do not elaborate their proposal in much detail or provide evidence of the reliability or validity of the GRA. For their part, Wiernik et al. (2022) achieve similar results with Virus Slayer. Their research suggests that a multifactorial approach, employing different types of information generated by the game, can lead to better results. However, in the opinion of Sanchez et al. (2022), using these different measurement forms poses problems for estimating reliability through internal consistency. As an alternative, they propose estimating reliability by test–retest, presenting adequate results and showing that it is a viable alternative for GRA that use this type of information.

The last issue related to the design of GRA is the treatment of the data collected through the game. Some research proposes strategies based on different analysis techniques. Thus, Albadán et al. (2018) show how fuzzy logic can help classify applicants when information about their behavior is collected in the game. Moreover, this approach is not alien to personnel selection, as its application has already been proposed during the recruitment phase (i.e., García-Izquierdo et al., 2020). Instead, Auer et al. (2022) propose using machine learning, showing that, at least when predicting with GRA, it has incremental validity over traditional approaches. Wu et al. (2022) also propose using machine learning as an alternative to regression, especially in GRA that estimate more variables than the number of applicants evaluated. Finally, Wiernik et al. (2022) suggest that the information collected during the game can be estimated using continuous-time latent growth curve models to improve prediction.

Personal characteristics and game-related assessments

The influence of personal characteristics on scores is relevant to any method used in selection. In the case of GRA, there exists a stereotype that being male and young is commonly associated with better performance in video games (Fetzer et al., 2017). However, based on the conflicting results found in this review, this relationship is more inconsistent than thought (Ellison et al., 2020; Georgiou and Nikolaou, 2020; Balcerak and Woźniak, 2021; Gkorezis et al., 2021; Landers et al., 2021; Hommel et al., 2022; McChesney et al., 2022; Wiernik et al., 2022). Studies that find sex differences usually report small effect sizes that are probably not very relevant in natural contexts (Melchers and Basch, 2022).

Other sociodemographic characteristics related to GRA scores are education (Wiernik et al., 2022) and computer experience (Hommel et al., 2022). As for the relationship with the use of video games, there is some evidence that self-efficacy in playing video games positively influences GRA scores, but the results are inconsistent (Sanchez et al., 2022). Playing experience, on the other hand, does not seem to affect evaluations with GRA (Hommel et al., 2022). Complementing these studies, Formica et al. (2017) find that people who play video games show greater emotional stability than those who do not, specifically, higher levels of emotional control and impulse control.

Regarding personality, it is also noteworthy that the relationship of the Big Five with GRA scores varies depending on the video game (Sanchez et al., 2022; Wu et al., 2022).

Adverse impact and faking

The idea that GRA allow for unbiased assessments and prevent faking is probably one of the main arguments in their favor. Research on this issue is still developing and seems to support this idea, albeit with nuances. Concerning adverse impact, some studies find no difference in scores based on gender, race/ethnicity, or education when using GRA (Brown et al., 2022; Hilliard et al., 2022), but others, such as that of Landers et al. (2021), find levels of adverse impact by race similar to those of conventional tests. These results may be related to the construct evaluated, because the first two papers focus on personality and that of Landers et al. on cognitive ability. Thus, in the absence of more research in this regard, we conclude that GRA have no more adverse impact than a conventional test.

With regard to faking, the only research that addresses this issue shows that a GRA created by means of storyfication is more resistant to faking than the original test (Landers and Collmus, 2022).

Discussion

This systematic review article has focused on the use of GRA in personnel selection with two objectives: (1) to determine whether the current state of the art supports their use in professional practice; and (2) to identify specific aspects on which future research should focus. Next, we address the two objectives based on the information obtained through the systematic review.

Using game-related assessments for personnel selection

As mentioned, the use of GRA may be recommended if they show: (1) reliability; (2) construct validity; (3) predictive validity; (4) freedom from bias; and (5) positive applicant reactions. After reviewing the empirical research, we can conclude that, indeed, GRA can be used for personnel selection, taking into account some considerations.

First, the results on construct validity reveal inconsistent outcomes, and this should be improved overall. One possible avenue may be to focus on developing games through gamification assessment, rather than gameful design and game-based assessment, at least until research on game design identifies how to build tests closer to games without losing validity. Second, while GRA have been shown to predict academic performance and task performance, their results are not much better than those of existing traditional tests. In fact, GRA seem to benefit from the complementary use of conventional testing, but not vice versa (Landers et al., 2021). Thus, in the absence of further research in this regard, we cannot consider that GRA have greater predictive validity than other methods. The results on personal characteristics, adverse impact, and faking invite optimism but are still too scarce to draw conclusions. Applicant reactions are possibly the aspect in which GRA obtain their best results, but the game-framing phenomenon (Collmus and Landers, 2019; McChesney et al., 2022) suggests that it may not be necessary to make great efforts to develop GRA, but rather to know how to present test-type evaluations or simulations more attractively to applicants.

Considering all the above, and taking into account that research on this issue is still ongoing and that it is difficult to draw conclusions applicable to all GRA (Landers et al., 2021; Wu et al., 2022), we consider that they do not offer sufficient advantages to recommend their use over conventional methods unless it is thought that improving applicant reactions, especially organizational attractiveness (Georgiou and Nikolaou, 2020; Gkorezis et al., 2021), offers added value to the specific evaluation process. In any case, practitioners who wish to use GRA should only use games developed specifically for that purpose (Sanchez et al., 2022), based on some psychological theory (Landers et al., 2021), and offering adequate psychometric characteristics. In this sense, the GRA with the most empirical support so far is Owiwi (cf. Georgiou et al., 2019; Nikolaou et al., 2019; Georgiou and Nikolaou, 2020; Gkorezis et al., 2021), although it suffers from a lack of research with more samples of workers and applicants. In addition, the game has been expanded to evaluate new competences, but to date, no research has been published to support its use.

All of these statements, however, are subject to future verification. The present review has also shown that, given the breadth of GRA types, the different constructs to be evaluated, and the ways of collecting and treating data, we really know very little. Fortunately, it has also allowed us to identify concrete demands for future research. That is what we deal with next, answering our second research question.

Avenues for further research

Undoubtedly, the main recommendation for the future is to contextualize research on GRA by drawing on existing taxonomies, for example, classifying the game according to the categorization proposed in Figure 1 and explaining the playable elements introduced according to the taxonomy of Bedwell et al. (2012). This will make it easier to group the conclusions obtained and to perform meta-analyses to identify what is and what is not suitable in the design of GRA for personnel selection.

We will now delve into the different areas identified during the systematic review. Regarding theoretical issues, we believe that further development of GRA is necessary, in at least two ways: (1) the literature uses different terms that may overlap (e.g., serious games, gamified assessment) and that need clarification; the present article may help, but the development of the literature should be accompanied by new terms; and (2) gamification science should develop its application to organizational psychology, proposing models linking game purposes (e.g., personnel selection, onboarding, training) with the elements that make up the gaming experience in order to direct game design.

Concerning validity, researchers must investigate how to improve the construct validity of GRA, as well as perform more studies on predictive and discriminant validity. With regard to construct validity, we agree with Wu et al.'s (2022) recommendation to pay attention to the design of the game. In this sense, the decision to introduce some elements of Bedwell et al.'s (2012) taxonomy seems to have a differential effect on construct validity (e.g., Landers et al., 2020; Harman and Brown, 2022; Landers and Collmus, 2022). Research can deepen this line, verifying the positive or negative impact of specific elements on this type of validity. In the case of predictive validity, it is necessary to increase the number of studies with workers and with criteria other than academic performance. In the case of job performance, dimensions other than task and contextual performance can also be analyzed, such as counterproductive work behaviors, adaptive performance, or safety performance (Ramos-Villagrasa et al., 2019).

In relation to the research on applicant reactions, although it has been the most fruitful line, there are still many unresolved issues. First, it is necessary to continue delving into the determinants of these reactions. The effect of game-framing (Collmus and Landers, 2019; McChesney et al., 2022) and the modest effect sizes found by Landers et al. (2021) caution us to be skeptical of the improvement in applicant reactions compared with those produced by traditional tests. However, studies like those of Georgiou (2021) and Georgiou and Lievens (2022) suggest that we should continue investigating the underlying mechanisms of the GRA-reactions relationship and how to improve it. As for personal determinants, we must continue to identify the variables that determine more favorable reactions, such as being male, having experience playing video games, or self-efficacy for technology (Buil et al., 2020; Ellison et al., 2020; Gkorezis et al., 2021). This could help determine the selection processes in which it may be especially advisable to resort to GRA. For example, in the ICT sector, characterized by a majority of male professionals, all competent in technology, the use of GRA can cause the company to be considered more attractive and, thus, capture talent (Aguado et al., 2019). Nor should the influence of cultural factors be forgotten (Balcerak and Woźniak, 2021), and it is advisable to conduct research in different contexts and cultures, as many marketed GRA are already offered in various languages.

The design of GRA is possibly the avenue that can offer the most development opportunities, benefiting from interdisciplinary research. Input from experts in game design can help create serious games by gameful design assessment, and data scientists can help collect and analyze the data generated by GRA in novel ways. These results will lead to new research in the other areas (validity, personal characteristics, etc.), which will enrich our knowledge about GRA and personnel selection.

Research on personal characteristics is far from conclusive. The natural advancement of GRA research, accompanied by greater terminological clarity (e.g., type of GRA, predictors it evaluates, etc.), will help clarify the influence of variables such as sex, age, or experience with computers and video games. At present, we recommend caution to practitioners in using GRA in their selection processes. Research on adverse impact and faking follows the same line, and more investigations are necessary to determine a possible general pattern in GRA, or possible differences according to the type of GRA or the constructs evaluated.

Lastly, the prevention of faking is also an issue that deserves further research. Georgiou (2021) has shown that prior explanations can influence applicants' perception of faking, but we still need to know: (1) whether GRA use really prevents faking; and (2) under what circumstances it does so, or how to enhance this effect (e.g., with or without prior explanations).

Conclusion

In recent years, GRA have been presented as the "philosopher's stone" of selection methods. The results obtained by research so far are not so optimistic, but they do prove that GRA have the potential to become one more method among those used in personnel selection. This requires an effort from both theoretical and empirical research. Fortunately, this review also shows that there are competent researchers capable of undertaking this effort.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

PR-V and EF-d-R contributed to the conception and design of the study. PR-V performed the search for the systematic review. PR-V, EF-d-R, and ÁC wrote the draft of the manuscript. All authors contributed to manuscript revision, and read and approved the submitted version.

Funding

This work was supported by the Ministry of Science and Innovation, Government of Spain, under grant PID2021-122867NA-I00; and the Government of Aragon (Group S31_20D), Department of Innovation, Research and University and FEDER 2014–2020, Building Europe from Aragón.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2022.952002/full#supplementary-material

References

Aguado, D., Andrés, J. C., García-Izquierdo, A. L., and Rodríguez, J. (2019). LinkedIn "big four": job performance validation in the ICT sector. J. Work Organ. Psychol. 35, 53–64. doi: 10.5093/jwop2019a7

Albadán, J., Gaona, P., Montenegro, C., González-Crespo, R., and Herrera-Viedma, E. (2018). Fuzzy logic models for non-programmed decision-making in personnel selection processes based on gamification. Informatica 29, 1–20. doi: 10.15388/Informatica.2018.155

Albadán, J., Garcia Gaona, P. A., and Montenegro Marin, C. (2016). Assessment model in a selection process based in gamification. IEEE Lat. Am. Trans. 14, 2789–2794. doi: 10.1109/TLA.2016.7555256

al-Qallawi, S., and Raghavan, M. (2022). A review of online reactions to game-based assessment mobile applications. Int. J. Sel. Assess. 30, 14–26. doi: 10.1111/ijsa.12346

Armstrong, M., Ferrell, J., Collmus, A., and Landers, R. (2016). Correcting misconceptions about gamification of assessment: more than SJTs and badges. Ind. Organ. Psychol. 9, 671–677. doi: 10.1017/iop.2016.69

Armstrong, M. B., and Landers, R. N. (2017). An evaluation of gamified training: using narrative to improve reactions and learning. Simul. Gaming 48, 513–538. doi: 10.1177/1046878117703749

Armstrong, M. B., and Landers, R. N. (2018). Gamification of employee training and development: gamification of employee training. Int. J. Train. Dev. 22, 162–169. doi: 10.1111/ijtd.12124

Auer, E. M., Mersy, G., Marin, S., Blaik, J., and Landers, R. N. (2022). Using machine learning to model trace behavioral data from a game-based assessment. Int. J. Sel. Assess. 30, 82–102. doi: 10.1111/ijsa.12363

Balcerak, A., and Woźniak, J. (2021). Reactions to some ICT-based personnel selection tools. Econ. Soc. 14, 214–231. doi: 10.14254/2071-789X.2021/14-1/14

Bedwell, W. L., Pavlas, D., Heyne, K., Lazzara, E. H., and Salas, E. (2012). Toward a taxonomy linking game attributes to learning: an empirical study. Simul. Gaming 43, 729–760. doi: 10.1177/1046878112439444

Brown, M. I., Speer, A. B., Tenbrink, A. P., and Chabris, C. F. (2022). Using game-like animations of geometric shapes to simulate social interactions: an evaluation of group score differences. Int. J. Sel. Assess. 30, 167–181. doi: 10.1111/ijsa.12375

Buil, I., Catalán, S., and Martínez, E. (2020). Understanding applicants' reactions to gamified recruitment. J. Bus. Res. 110, 41–50. doi: 10.1016/j.jbusres.2019.12.041

Chamorro-Premuzic, T., Akhtar, R., Winsborough, D., and Sherman, R. A. (2017). The datafication of talent: how technology is advancing the science of human potential at work. Curr. Opin. Behav. Sci. 18, 13–16. doi: 10.1016/j.cobeha.2017.04.007

Collmus, A. B., and Landers, R. N. (2019). Game-framing to improve applicant perceptions of cognitive assessments. J. Pers. Psychol. 18, 157–162. doi: 10.1027/1866-5888/a000227

Connelly, L., Burbach, B. E., Kennedy, C., and Walters, L. (2018). Escape room recruitment event: description and lessons learned. J. Nurs. Educ. 57, 184–187. doi: 10.3928/01484834-20180221-12

Coovert, M. D., Wiernik, B. M., and Martin, J. (2020). Use of Technology Enhanced Simulations for Cyber Aptitude Assessment: Phase II Prototype Development. Plant City, FL: MCD and Associates. Available at: https://apps.dtic.mil/sti/citations/AD1107016

Egol, K. A., Schwarzkopf, R., Funge, J., Gray, J., Chabris, C., Jerde, T. E., et al. (2017). Can video game dynamics identify orthopaedic surgery residents who will succeed in training? Int. J. Med. Educ. 8, 123–125. doi: 10.5116/ijme.58e3.c236

Ellison, L. J., McClure Johnson, T., Tomczak, D., Siemsen, A., and Gonzalez, M. F. (2020). Game on! Exploring reactions to game-based selection assessments. J. Manag. Psychol. 35, 241–254. doi: 10.1108/JMP-09-2018-0414

Fetzer, M., McNamara, J., and Geimer, J. L. (2017). "Gamification, serious games and personnel selection," in The Wiley Blackwell Handbook of the Psychology of Recruitment, Selection and Employee Retention. 1st Edn. eds. H. W. Goldstein, E. D. Pulakos, J. Passmore and C. Semedo (Hoboken, NJ: Wiley), 293–309.

Formica, E., Gaiffi, E., Magnani, M., Mancini, A., Scatolini, E., and Ulivieri, M. (2017). Can video games be an innovative tool to assess personality traits of the millennial generation? An exploratory research. BPA 280, 29–47.

García-Izquierdo, A. L., Ramos-Villagrasa, P. J., and Lubiano, M. A. (2020). Developing biodata for public manager selection purposes: a comparison between fuzzy logic and traditional methods. J. Work Organ. Psychol. 36, 231–242. doi: 10.5093/jwop2020a22

Georgiou, K. (2021). Can explanations improve applicant reactions towards gamified assessment methods? Int. J. Sel. Assess. 29, 253–268. doi: 10.1111/ijsa.12329

Georgiou, K., Gouras, A., and Nikolaou, I. (2019). Gamification in employee selection: the development of a gamified assessment. Int. J. Sel. Assess. 27, 91–103. doi: 10.1111/ijsa.12240

Georgiou, K., and Lievens, F. (2022). Gamifying an assessment method: what signals are organizations sending to applicants? J. Manag. Psychol. 37, 559–574. doi: 10.1108/JMP-12-2020-0653

Georgiou, K., and Nikolaou, I. (2020). Are applicants in favor of traditional or gamified assessment methods? Exploring applicant reactions towards a gamified selection method. Comput. Hum. Behav. 109:106356. doi: 10.1016/j.chb.2020.106356

Gkorezis, P., Georgiou, K., Nikolaou, I., and Kyriazati, A. (2021). Gamified or traditional situational judgement test? A moderated mediation model of recommendation intentions via organizational attractiveness. Eur. J. Work Organ. Psy. 30, 240–250. doi: 10.1080/1359432X.2020.1746827

Harman, J. L., and Brown, K. D. (2022). Illustrating a narrative: a test of game elements in game-like personality assessment. Int. J. Sel. Assess. 30, 157–166. doi: 10.1111/ijsa.12374

Hilliard, A., Kazim, E., Bitsakis, T., and Leutner, F. (2022). Measuring personality through images: validating a forced-choice image-based assessment of the big five personality traits. J. Intelligence 10:12. doi: 10.3390/jintelligence10010012

Hommel, B. E., Ruppel, R., and Zacher, H. (2022). Assessment of cognitive flexibility in personnel selection: validity and acceptance of a gamified version of the Wisconsin Card Sorting Test. Int. J. Sel. Assess. 30, 126–144. doi: 10.1111/ijsa.12362

Korn, O., Brenner, F., Börsig, J., Lalli, F., Mattmüller, M., and Müller, A. (2018). "Defining recrutainment: a model and a survey on the gamification of recruiting and human resources," in Advances in the Human Side of Service Engineering. Vol. 601. eds. L. E. Freund and W. Cellary (Cham: Springer International Publishing), 37–49.

Küpper, D. M., Klein, K., and Völckner, F. (2021). Gamifying employer branding: an integrating framework and research propositions for a new HRM approach in the digitized economy. Hum. Resour. Manag. Rev. 31:100686. doi: 10.1016/j.hrmr.2019.04.002

Landers, R. N., Armstrong, M. B., Collmus, A. B., Mujcic, S., and Blaik, J. (2021). Theory-driven game-based assessment of general cognitive ability: design theory, measurement, prediction of performance, and test fairness. J. Appl. Psychol. doi: 10.1037/apl0000954 [Epub ahead of print].

Landers, R. N., Auer, E. M., and Abraham, J. D. (2020). Gamifying a situational judgment test with immersion and control game elements: effects on applicant reactions and construct validity. J. Manag. Psychol. 35, 225–239. doi: 10.1108/JMP-10-2018-0446

Landers, R. N., Auer, E. M., Collmus, A. B., and Armstrong, M. B. (2018). Gamification science, its history and future: definitions and a research agenda. Simul. Gaming 49, 315–337. doi: 10.1177/1046878118774385

Landers, R. N., and Collmus, A. B. (2022). Gamifying a personality measure by converting it into a story: convergence, incremental prediction, faking, and reactions. Int. J. Sel. Assess. 30, 145–156. doi: 10.1111/ijsa.12373

Landers, R. N., and Sanchez, D. R. (2022). Game-based, gamified, and gamefully designed assessments for employee selection: definitions, distinctions, design, and validation. Int. J. Sel. Assess. 30, 1–13. doi: 10.1111/ijsa.12376

Laumer, S., Eckhardt, A., and Weitzel, T. (2012). Online gaming to find a new job: examining job seekers' intention to use serious games as a self-assessment tool. German J. Hum. Res. Manag. 26, 218–240. doi: 10.1177/239700221202600302

McChesney, J., Campbell, C., Wang, J., and Foster, L. (2022). What is in a name? Effects of game-framing on perceptions of hiring organizations. Int. J. Sel. Assess. 30, 182–192. doi: 10.1111/ijsa.12370

McCord, J.-L., Harman, J. L., and Purl, J. (2019). Game-like personality testing: an emerging mode of personality assessment. Personal. Individ. Differ. 143, 95–102. doi: 10.1016/j.paid.2019.02.017

Melchers, K. G., and Basch, J. M. (2022). Fair play? Sex-, age-, and job-related correlates of performance in a computer-based simulation game. Int. J. Sel. Assess. 30, 48–61. doi: 10.1111/ijsa.12337

Nacke, L., and Deterding, S. (2017). The maturing of gamification research. Comput. Hum. Behav. 71, 450–454. doi: 10.1016/j.chb.2016.11.062

Nikolaou, I. (2021). What is the role of technology in recruitment and selection? Span. J. Psychol. 24:e2. doi: 10.1017/SJP.2021.6

Nikolaou, I., Georgiou, K., and Kotsasarlidou, V. (2019). Exploring the relationship of a gamified assessment with performance. Span. J. Psychol. 22:E6. doi: 10.1017/sjp.2019.5

Ramos-Villagrasa, P. J., Barrada, J. R., Fernández-del-Río, E., and Koopmans, L. (2019). Assessing job performance using brief self-report scales: the case of the individual work performance questionnaire. J. Work Organ. Psychol. 35, 195–205. doi: 10.5093/jwop2019a21

Salgado, J. F., Moscoso, S., García-Izquierdo, A. L., and Anderson, N. R. (2017). "Inclusive and discrimination-free personnel selection," in Shaping Inclusive Workplaces Through Social Dialogue. eds. A. Arenas, D. Di Marco, L. Munduate and M. C. Euwema (Cham: Springer International Publishing), 103–119.

Sanchez, D. R., Weiner, E., and Van Zelderen, A. (2022). Virtual reality assessments (VRAs): exploring the reliability and validity of evaluations in VR. Int. J. Sel. Assess. 30, 103–125. doi: 10.1111/ijsa.12369

Schalken, N., and Rietbergen, C. (2017). The reporting quality of systematic reviews and meta-analyses in industrial and organizational psychology: a systematic review. Front. Psychol. 8:1395. doi: 10.3389/fpsyg.2017.01395

Tippins, N. T. (2015). Technology and assessment in selection. Annu. Rev. Organ. Psych. Organ. Behav. 2, 551–582.
doi: 10.1146/annurev-orgpsych- Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., 031413-091317 Mulrow, C. D., et al. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 372, 1–9. doi: 10.1136/bmj.n71 Wiernik, B. M., Raghavan, M., Caretta, T. R., and Coovert, M. D. (2022). Developing and validating a serious game-based assessment for cyber occupations Peters, H., Kyngdon, A., and Stillwell, D. (2021). Construction and validation of in the US Air Force. Int. J. Sel. Assess. 30, 27–47. doi: 10.1111/ijsa.12378 a game-based intelligence assessment in minecraft. Comput. Hum. Behav. 119:106701. doi: 10.1016/j.chb.2021.106701 Woods, S. A., Ahmed, S., Nikolaou, I., Costa, A. C., and Anderson, N. R. (2020). Personnel selection in the digital age: a review of validity and applicant reactions, Quiroga, M. A., Diaz, A., Román, F. J., Privado, J., and Colom, R. (2019). and future research challenges. Eur. J. Work Organ. Psy. 29, 64–77. doi: Intelligence and video games: beyond “brain-games”. Intelligence 75, 85–94. doi: 10.1080/1359432X.2019.1681401 10.1016/j.intell.2019.05.001 Wu, F. Y., Mulfinger, E., Alexander, L., Sinclair, A. L., McCloy, R. A., and Oswald, F. L. Ramos-Villagrasa, P. J., Barrada, J. R., Fernández-del-Río, E., and Koopmans, L. (2022). Individual differences at play: an investigation into measuring big five personality (2019). Assessing job performance using brief self-report scales: the case of the facets with game-based assessments. Int. J. Sel. Assess. 30, 62–81. doi: 10.1111/ijsa.12360 Frontiers in Psychology 14 frontiersin.org
