Week 2 Job Analysis PDF

Summary

This document provides an overview of job analysis, including purposes, outputs, and different types of job analysis methods. It details the importance of job analysis for various HR functions, such as recruitment and selection, career development, and HR planning. The document covers topics like qualitative and quantitative data in job analysis, and explores different methods such as task-oriented and work/person-oriented approaches.

Full Transcript


### **Week 2: Job Analysis** (4.3 Module Personnel Psychology)

Job analysis (JA): design a job and subsequently measure performance.

- Not everything we observe in the workplace is relevant to work performance/behaviour, and some relevant aspects are not observable.

**[Purposes of job analysis (JA)]** *Definition, outputs, and linkages to other HR functions*

Purpose: describe the jobs in a company structurally and systematically.

1. What **tasks/responsibilities** does the job involve?
2. What **personal qualities** are required to perform the job effectively?

Outputs: **job description** (tasks performed), **job specification** (personal attributes required), **job evaluation** (value of a job).

KSAOs: K = knowledge, S = skills, A = abilities, O = all other characteristics.

- Abilities cannot be changed easily (e.g., IQ); "other" covers personal traits, etc.

Job analysis facilitates:

- **Criterion development** (outcomes, DVs, performance): objectively and precisely defining **what to look at in measuring job performance**. A criterion variable is parallel to a dependent or outcome variable.
- **HR planning**: knowing the profile of all employees helps manage potential organisational changes.
- **Litigation**: helps defend HR processes against charges of discrimination, by following the objectives of job analysis and making decisions accordingly.

*"The fundamental concern of job/task/work analysis and competency modeling is to obtain descriptive information to design training programs, establish performance criteria, develop selection systems, implement job-evaluation systems, redesign machinery or tools, and create career paths for personnel."*

- "Work" is a broader term, and competency modelling is becoming more popular over time.

| Level | Definition | Example |
|---|---|---|
| Job family | Group of similar jobs | Academic jobs |
| Job | Similar positions in an organisation | Educators |
| Position | A particular person doing a set of tasks | Lecturer |
| Task | Specific unit of work with a clear start and finish | Lecturing |

Next levels down: activities, elements.

[**Main types of job analysis and examples:** *task-oriented vs. worker-oriented, FJA, PAQ, and PPRF*]

| **Job/Task-Oriented** | **Worker/Person-Oriented** |
|---|---|
| Lists job descriptions | Lists the required person characteristics |
| Arranged by activity category, importance, and frequency | KSAOs: knowledge, skills, abilities, others |

| **Qualitative Data** | **Quantitative Data** |
|---|---|
| Narrative statements, e.g. task descriptions | Ratings and descriptive statistics, e.g. importance and/or frequency ratings |

Sources of job analysis data:

- Interviewing job incumbents
- Direct observation ("spying")
- Work participation / job try-out
- Self-report: questionnaire, diary
- Interviewing **subject matter experts (SMEs)**
- Collecting critical incidents
- Working on existing databases

**Functional job analysis (FJA):** identifying and describing the essential tasks, duties, responsibilities, and interactions associated with a particular job role.

- Rather than just listing tasks, it focuses on what the job entails in terms of functions and behaviours:
  - an action verb (e.g., "develops", "prepares", "assesses")
  - the immediate objective of the work (e.g., "in order to...")
- Captures what gets done and how; concerns interactions with data, people, and things.

**Position Analysis Questionnaire (PAQ)**

- Foundation: all jobs have common elements that can be compared across jobs.
- Asks job incumbents to rate different job attributes; a worker-oriented approach.
- Six divisions:
  1. Information input
  2. Mental processes
  3. Work output
  4. Relationships with other persons
  5. Job context (physical and social environment)
  6. Other job characteristics (e.g., structure/scheduling)
- Each item is rated on importance to the job, extent of use, amount of time, applicability, possibility of occurrence, or a special code for certain jobs.
- Ratings run from 1 to 5, with a not-applicable option (see the aggregation sketch below).
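As a toy illustration of how such ratings might be aggregated, here is a minimal Python sketch that averages 1–5 ratings within each PAQ-style division while skipping not-applicable items. The division names follow the list above; the items and numbers are invented for the example, not real PAQ content or the actual PAQ scoring key.

```python
from statistics import mean

# Hypothetical incumbent ratings: 1-5 importance, None = not applicable.
ratings = {
    "information_input":     [4, 5, None, 3],
    "mental_processes":      [5, 4, 4, None],
    "work_output":           [2, None, 3, 3],
    "relationships":         [5, 5, 4, 4],
    "job_context":           [1, 2, None, None],
    "other_characteristics": [3, 3, 2, 4],
}

def dimension_scores(ratings):
    """Average the applicable item ratings within each PAQ-style division."""
    scores = {}
    for dim, items in ratings.items():
        applicable = [r for r in items if r is not None]
        scores[dim] = round(mean(applicable), 2) if applicable else None
    return scores

print(dimension_scores(ratings))
# e.g. {'information_input': 4.0, 'mental_processes': 4.33, ...}
```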
**Personality-Based Job Analysis**

- Rationale: personality is related to job performance, and jobs increasingly emphasise emotional labour and customer service.
- Personality-Related Position Requirements Form (PPRF): 107 behavioural items mapping onto the Big Five, rated on a three-point scale (not required, helpful, essential).

**Critical Incident Technique (CIT)**

- Identifies infrequent but critical events that differentiate effective from ineffective workers.
- SMEs possess most of this information.
- Example: how a bank customer service officer deals with loud complaints.

[**Other JA techniques and an integrated approach**: *CIT and O\*Net*]

[Figure: the O\*Net content model; functional job analysis sits under occupation-specific information.]

| **Electronic Performance Monitoring (EPM)** | **Cognitive Task Analysis** |
|---|---|
| Electronic tracking of work processes, e.g. time taken to handle a job or conversation | Examines the underlying cognitive processes in task completion |
| E.g., cashiers, drivers, call centre operators | Asks an SME to voice their thoughts while completing a task (think-aloud protocol) |
| Input from SMEs is not required | Reveals cognitive processes in judgement, problem solving, or planning |

**[Biases in JA and remedies]**

Errors in the JA process: different job analysts interpret differently, score against different benchmarks, and show systematic psychological biases.

Morgeson et al. (2004): participants scored either ability statements or task statements (the only difference being "ability to record" vs. "record"). Ratings were significantly higher for the ability version than for the task version. Self-presentation occurred for the "ability to" items: respondents took them personally and wanted to present themselves in a positive light. No such trend was found among supervisors or job analysts.

Aguinis, Mazurkiewicz, and Heggestad (2009):

- PPRF biases come from self-serving biases and social projection.
- Frame-of-reference training: shift the rating frame from respondents themselves to people working in the job in general.
- A field experiment that trained some raters and not others showed the training was effective: correlations between raters' own Big Five ratings and their PPRF ratings dropped with training. It was most effective for those high in openness.

**[Beyond JA]**

**Job evaluation**

- Measures the value of a job, based on compensable factors.
- Compares KSAOs, work environment, and other factors.
- Converts them to a point system to determine compensation theoretically.

The idea of "**comparable worth**":

- Jobs with similar compensable-factor scores should be paid equally (e.g., addressing the gender pay gap).
- But the idea is controversial and ignores factors such as market conditions.

There is also a need to identify the essential functions of a job to comply with various employment laws.

Rise and fall of JA:

- A costly, mechanical process; less JA research has been published recently.
- Impractical in today's context: small and medium enterprises, blurred job responsibilities.
- Concepts such as job crafting become more relevant: proactive changes made by employees to alter work demands and increase productivity.
- The focus is now more on competency modelling.

**Competency Modelling (CM)**

- Behavioural themes, **identified by an organisation**, that are critical for generating desired work outcomes.
- Competencies are more general and abstract than KSAOs.
- Subsequent HR functions align employees with the competencies (e.g., training).
- CM is thus a top-down process, whereas JA is bottom-up.
- CM helps the organisation achieve strategic goals, whereas JA is more objective.

**[Aspects of performance not addressed by JA]**

JA concerns task performance, but that is not everything about job performance. Other aspects include performance in the face of environmental change, extra-role behaviours, and destructive behaviours:

- Adaptive performance
- Organisational citizenship behaviours (OCB)
- Counterproductive work behaviours (CWB)

### **Week 3: Individual Differences and Assessment**

Jobs come first → then individuals, employees, KSAOs, assessments.

**Cognitive Ability**

- The g factor, IQ, general mental ability (various theories).
- The single most useful predictor of job performance.
- Many different specific facets: middle-level abilities (general memory, visual perception, etc.) and specific abilities (spatial relations).
- "g" may not be sufficient to assess success in all jobs.

**Human Attributes**

- Physical abilities, sensory abilities, psychomotor abilities.

**The Five-Factor Model**

- Universal personality dimensions (OCEAN), applicable across cultures.
- Predicts work performance, adds incremental predictability over "g", and predicts other outcomes (counterproductive behaviour, turnover, satisfaction, OCB).
- Suffers from fewer subgroup differences.

**Additional Attributes**

- Knowledge: declarative knowledge, procedural knowledge, tacit knowledge (know-how acquired informally).
- Skills: task skills vs. people skills.
- Competencies: general/desirable attributes identified by an organisation (top-down, whereas job analysis is bottom-up); KSAOs leading to organisational success, connected with organisational core values.
- Example: SIA's key capabilities — "Upskill Now, Be Future-Ready".

**Tests**

- Construct validity: testing what you said you want to test.
- Content validity: testing a representative sample of the construct.
- Criterion-related validity: predicting the desired outcome.
- Reliability: e.g., test-retest reliability.
- Better to use a generalised and valid test: cost-effective, existing products.
- Bias: systematic errors in prediction, e.g. always overestimating one group's IQ.
- A and O are emphasised more than K and S, because knowledge and skills can more easily be trained.
- Personality tests can be used for screening in and screening out (e.g., psychopathological traits, right-wing authoritarianism).

**Interviews**

- Popular, but unstructured interviews have problems: casual questions, snap judgements, and no standard for what interviewers are looking for or how scoring is done.
- Structured interviews: questions based on job analysis, a consistent questioning plan, few prompts/follow-up questions/elaborations, more specific and precise questions (can include situational questions), longer interviews with more questions.
- Emphasis on ratings: rate each answer, use multiple scales, take notes, train interviewers.
- Advantages: assess person-organisation fit, preferred by applicants, allow negotiation and Q&A — although they can be subjective.

**Assessment Centres**

- Combine everything: a multi-trait, multi-approach, multi-assessor method.
- Dimensions identified by job analysis; multiple exercises; expensive; a panel of assessors.
- Structured interviews, simulation/situational exercises (presentations, leaderless group discussions), and testing (personality, cognitive ability).

**Written Materials**

1. Application blanks
2. Grades: minimum qualification
3. Letters of recommendation: positive distortion
4. Biodata: biographical information, hobbies, social style, learning style — but subject to social desirability effects

Twelve dimensions of college performance — examples: knowledge, learning, artistic, multicultural, leadership, interpersonal, citizenship, etc.

Predictive power of biodata:

- Predictive validity for performance above entrance exam scores and personality scores, with smaller racial subgroup differences.
- Recent meta-analytic results: validity of biodata measures is high — up to .44 in employment settings and .50 in school settings.

**Predictive Power of Various Assessment Methods**

- A meta-analysis of research spanning 85 years.
- Cognitive ability tests are almost the single best predictor of performance (.51).
- Validity increases when cognitive ability is combined with an integrity test, work sample test, or structured interview: the multiple R increases (*incremental validity*).
- Work sample tests are actually higher (.54) but are very job-specific, so GMA is more practical.

Recent revisions (Sackett et al.):

- Range restriction corrections had led to overcorrection/overestimation of validity.
- With a revised correction procedure applied to the validity estimates, the structured interview ranks highest, while cognitive ability ranks fifth.
- Structured interviews also generate much lower subgroup differences: a Black-White d of only 0.23, versus 0.70+ for cognitive ability.

**Incremental Validity**

- A theoretical consideration with strong practical significance: unique predictive power over an existing test, thus yielding much higher validity when tests are used in combination (a sketch of the computation follows below).
- Examples of such combinations:
  - Biodata, cognitive ability, and interviews — or drop cognitive ability, following Sackett et al.'s findings, to reduce subgroup differences.
  - Personality and mental ability.
  - Situational judgement and cognitive ability/personality/job experience.
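Since the notes stress the rise in multiple R, here is a minimal simulated sketch of incremental validity: the gain in multiple R when a second, partially overlapping predictor (a structured-interview score) is added to cognitive ability. All coefficients and variable names are illustrative assumptions, not the meta-analytic estimates cited above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulate two partially overlapping predictors and a criterion.
g = rng.normal(size=n)                         # cognitive ability
interview = 0.4 * g + rng.normal(size=n)       # correlated with g
performance = 0.5 * g + 0.3 * interview + rng.normal(size=n)

def multiple_r(predictors, y):
    """R between y and its least-squares prediction from the predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.corrcoef(X @ beta, y)[0, 1]

r_g = multiple_r([g], performance)
r_both = multiple_r([g, interview], performance)
print(f"g alone:        R = {r_g:.2f}")                      # ~0.51
print(f"g + interview:  R = {r_both:.2f}")                   # ~0.57
print(f"incremental validity = {r_both - r_g:.2f}")          # the gain in R
```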
### **Week 4: Staffing Decisions**

- Not just hiring, but also promotion and deselection.

1. **[Theoretical concepts and challenges]**

Basic model for selection — multiple predictors: determine their relative predictive power for the criterion. Set a cut-off line and decide on the most important criterion; an empirical/judgmental process.

**Criterion-Related Validity**: are the predictors predictive of the desired outcomes?

- Collect data on the IV (scores on the predictor) and the DV (actual performance) to compute the correlation.
- Range restriction: people with low predictor scores are not hired, so they contribute no DV data. In reality we only see the hired applicants (the blue dots in the lecture scatterplot); the full-pool correlation (R = 0.63 in the example) is never observed directly but is a corrected estimate. The correction can overcorrect and be biased (a sketch of the standard correction follows below).

[Figure: scatterplot illustrating range restriction — only hired applicants' data are observed.]
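As a sketch of the kind of correction being referred to, the following implements the standard Thorndike Case II formula for direct range restriction on the predictor (assuming that is the correction meant here); the input numbers are hypothetical. As the notes warn, the corrected value can be an overestimate.

```python
import math

def correct_for_range_restriction(r_restricted, sd_applicant, sd_hired):
    """Estimate the unrestricted validity from the restricted correlation.

    r_restricted : correlation observed among those hired
    sd_applicant : predictor SD in the full applicant pool (unrestricted)
    sd_hired     : predictor SD among those hired (restricted)
    """
    u = sd_applicant / sd_hired
    return (r_restricted * u) / math.sqrt(1 + r_restricted**2 * (u**2 - 1))

# Hypothetical example: r = .30 among hires, applicant SD twice the hired SD.
print(round(correct_for_range_restriction(0.30, sd_applicant=2.0, sd_hired=1.0), 2))
# -> 0.53: the estimated validity in the full pool is much larger
```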
**Establishing a Valid Criterion**

- **Conceptual vs. actual criterion** (e.g., sales ability vs. sales figures); the two may not perfectly overlap.
- **Contamination**: the actual criterion captures aspects unrelated to the conceptual one — e.g., policy changes or a strong overall economy inflate every salesperson's figures.
- **Deficiency**: the actual criterion does not capture everything in the conceptual criterion.
- **Relevance**: the degree of overlap between the conceptual and actual criteria.

**Cutoff value**

- Criterion-referenced cut scores: the test score associated with the desired level of performance.
- The cutoff value influences the probability of:
  - **False positives**: accepting unqualified persons.
  - **False negatives**: rejecting qualified persons.
- Moving the cutoff line to the right (a higher cutoff) raises false negatives and lowers false positives.

[Figure: predictor-criterion scatterplot showing how the cutoff line partitions false positives and false negatives.]

2. **[Recruitment]**

Information in job advertisements — on the use of AI (Wesche & Sonderegger, 2021):

- A strong negative effect of AI (vs. human) interviews on intention to apply and organisational attractiveness.
- The effect was less negative for AI (vs. human) screening of applicants.
- The use of AI also lowered perceptions of procedural justice, particularly the ability to voice concerns in the decision-making process.

Realistic Job Preview (RJP)

- A realistic presentation of both the good and bad sides of a job, to increase commitment and satisfaction and decrease initial turnover.
- Delivered by a recruiter, supervisor, or new employees.

Bona Fide Occupational Qualification (BFOQ)

- Authentic job requirements derived from job analysis.
- A BFOQ defence protects a selection procedure against discrimination charges.

3. **[Selection]**

**Staffing models**

- **Judgmental model**: no ratings or formal computations; intuitive, clinical decision making.
- **Profile matching**: applicants' profiles are matched against those of successful incumbents. Data-driven, not necessarily based on job analysis.
- **Regression model**: a formula combining all predictors with their respective weights to predict the criterion. A compensatory approach; all applicants can be ranked.
- **Multiple cutoffs model**: every predictor has a cutoff value; a successful applicant must pass the requirement on every aspect. Non-compensatory.
- **Multiple hurdle model**: pass the cutoff at one stage to proceed to the next. Sequential and non-compensatory; narrows down the applicant pool. The most realistic model.

**Employee Deselection**

| Termination for cause | Layoffs |
|---|---|
| Follows organisational procedures, e.g. after repeated warnings and poor appraisals | Unrelated to employee performance; organisational downsizing. Disappointment spills over to the survivors. A defensible, objective approach is recommended to justify the layoffs. |

**21st Century Staffing Models**

- Beyond the typical KSAOs identified: job duties become more complex — adaptability, global mindset, cultural agility, and relationship management.
- New staffing approaches: continuous assessment, realistic testing environments, and relying on actual performance as a predictor of future performance (e.g., internships).

4. **[Legal issues in staffing decisions]**

| **Adverse Treatment** | **Adverse Impact** |
|---|---|
| Different treatment of the minority group | Different outcomes for majority and minority groups |
| An intentional act | An unintentional act built into the procedures |

**4/5ths rule**

- The protected group should obtain at least 80% of the selection rate (SR) of the majority group.
- Also known as the adverse impact ratio (see the sketch below).
- The ratio can be sensitive to sample size, especially when the minority group is small.
- The statistical significance of whether the observed ratio deviates from the 4/5ths threshold can be tested.
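A minimal sketch of the adverse impact ratio check described above; all applicant and hire counts are hypothetical.

```python
def adverse_impact_ratio(hired_minority, applied_minority,
                         hired_majority, applied_majority):
    """Selection rate of the protected group divided by the majority's rate."""
    sr_minority = hired_minority / applied_minority
    sr_majority = hired_majority / applied_majority
    return sr_minority / sr_majority

ratio = adverse_impact_ratio(hired_minority=6, applied_minority=20,
                             hired_majority=30, applied_majority=60)
print(f"Adverse impact ratio = {ratio:.2f}")  # 0.30 / 0.50 = 0.60
print("Violates the 4/5ths rule" if ratio < 0.8 else "Passes the 4/5ths rule")
```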
5. **[Fair Employment in Singapore: TAFEP]**

### **Week 5: Performance Measurement**

| **Performance appraisal/evaluation** | **Performance management** |
|---|---|
| A formalised means of assessing worker performance against established organisational standards | Links individual performance to organisational goals; helps subordinates understand and model their behaviour on the performance criteria |
| An HR procedure done yearly | Done more frequently, between supervisor and subordinate |

**Objectives of Performance Measurement**

1. Managing employees' performance: performance assessment and feedback
2. Personnel decisions: salary raises, promotion, placement, termination
3. Identifying training needs: at the individual and organisational levels
4. Validating job selection tools: criterion validity

**Measuring the criterion (work performance)**

- Makes crucial differences to an employee's future; employees are extremely sensitive about it.
- An issue that easily arouses dissatisfaction: intention to quit, disruptive behaviours, reduced job motivation, litigation.

**What can be measured?**

- Task-related direct outcomes: measurable units of performance.
- Behaviours that indicate performance: task-related behaviours based on job analysis; critical incidents, organisational citizenship behaviours, and counterproductive work behaviours.
- Traits: ratings on the Big Five or specific traits (e.g., persistence).
  - Caveat: traits may not be stated clearly in the job analysis, and predictors of performance ≠ measures of performance. Debatable.

**Types of Measurement**

- Depends on the nature of the criterion: task performance, KSAOs, critical incidents.
- "Hard" data versus "soft" data: being quantitative and objective does not guarantee a high level of criterion relevance.

| **Type of Measurement** | **Details** |
|---|---|
| Objective performance measures | Facts viewed as reflecting performance (must be critical to performance). Example: sewing machine operators — number of garments produced per day |
| Judgmental performance measures | Judgments or ratings from self, supervisors, peers, subordinates, or customers. Useful when performance cannot be quantified easily. Problems of subjectivity, inter-rater differences, and other biases |
| 360-degree feedback | Became popular in the 1990s: self, supervisors, peers, customers, subordinates |
| Testing | Engage employees in work-related tasks derived from job analysis, or in the form of an interview or walk-through testing |
| Electronic performance monitoring | Exhaustive performance data for operational jobs, but raises privacy issues |

To make multi-source feedback more effective:

- Train all appraisers.
- Appraise behaviours relevant to the organisation.
- Keep peer and subordinate appraisals anonymous.
  - A tricky finding: managers view comments more positively when the identities of the subordinates making them are known (Antonioni, 1994).
- Build an organisational culture that encourages feedback.

**[Rating Methods]**

**Absolute rating** (standalone measurement of an individual):

- **Narrative**: simple, but can be unstructured and vague; depends on the assessor's writing skill.
- **Recalling critical incidents**: incidents can be unrepresentative and infrequent.
- **Graphic rating scale (GRS)**: forced-choice quantitative ratings on Likert scales, with labels on both ends or on each point; rates quality or frequency.
- **Behaviourally anchored rating scale (BARS)**: based on representative behaviours or critical incidents from the job, ranked along a continuum to become the anchors of a rating scale; anchors can sit at any position along the scale.
- **Behavioural observation scale (BOS)**: one behaviour or critical incident per item, rated on a frequency scale.
- **Behavioural checklist**: lists performance statements and checks whether each is exhibited. Can be in forced-choice format (e.g., choose the two statements that best fit out of four); all statements are positive but carry different weights.

**Relative rating** (comparing a person with a target group):

- **Simple ranking**: crude; rank people dimension by dimension, then average the rankings. More feasible for a small number of people.
- **Alternation ranking**: start with the tops and bottoms; the middle rankings are still difficult.
- **Pairwise comparison**: the number of comparisons required is n(n-1)/2. Count the number of "wins" across the comparisons and rank accordingly (see the sketch below). Each single comparison is easier to make, but doing all comparisons is time-consuming for a large group.
- **Forced distribution**: curving plus banding. But performance is not always normally distributed, and employees dislike it.

Ranking mixes up relative and absolute performance and is influenced by the perception of the high achiever.

Evaluating the different rating formats: no single method has emerged as superior to the others. Long debated, but no consensus!
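A minimal sketch of pairwise-comparison ranking as described above: enumerate all n(n-1)/2 pairs, count wins, and rank. In practice each comparison would be a rater's judgment; here the judgments are simulated from made-up scores purely so the example runs.

```python
from itertools import combinations

scores = {"Ann": 72, "Ben": 85, "Cal": 64, "Dee": 91}  # stand-in judgments
wins = {name: 0 for name in scores}

pairs = list(combinations(scores, 2))  # n*(n-1)/2 = 4*3/2 = 6 comparisons
for a, b in pairs:
    winner = a if scores[a] > scores[b] else b  # who performs better?
    wins[winner] += 1

ranking = sorted(wins, key=wins.get, reverse=True)
print(f"{len(pairs)} comparisons; ranking: {ranking}")
# -> 6 comparisons; ranking: ['Dee', 'Ben', 'Ann', 'Cal']
```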
**[Rating Errors and Biases]**

1. **Halo effect**: overgeneralisation of one good attribute to the others; a general impression affects ratings of specific qualities. The rusty halo effect is the reverse case, for bad attributes.
2. **Leniency/severity effect**: failure to differentiate ratees; also the central tendency effect. Due to personal style or vague band labels. Remedies: forced distribution, clear anchors for scales.
3. **Recency effect**: higher weight given to recent events; time pressure exacerbates the effect. Remedy: keep a structured diary.
4. **Attribution errors**: the tendency to give dispositional explanations of behaviour.
   - Actor-observer bias: actors tend to give situational explanations, observers dispositional ones. Leads to disputes in performance appraisal.
5. **Personal bias**: positive regard for a person.
   - Heightens both the halo and leniency effects, and makes punishment less likely. Correlates .74 with job performance ratings.
   - Another view: positive regard is a result of observing actual job performance.
6. **Other biases**:
   - Economic factors (a recession influences every employee's performance).
   - Politics within the organisation; intra-organisational competition.
   - The manager's own performance appraisal (managers receiving positive performance feedback rate their employees higher), using their own performance as an anchor.

A factor influencing these biases: accountability.

Remedies — training the raters:

1. Administrative training: knowledge of the different rating scales.
2. Psychometric training: learning about the rating biases in order to avoid them. Such training may backfire, sacrificing accuracy.
3. Frame-of-reference training: give a clear context for how ratings should be made.

**[Providing Feedback]**

- Communication is crucial after the appraisal.
- Formalised feedback: ratees need to know the details, not just the ratings.
- Is feedback always beneficial? A meta-analysis showed that about one third of feedback interventions decreased subsequent performance.

**Concerns from both sides**

- Raters: want their subordinates to look good; worry about damaging the relationship with the subordinate; concerned about influencing the subordinate's subsequent performance.
- Ratees: showing willingness to work hard and improve; expecting a promotion or salary raise; trying to explain why performance was constrained.

**Praise or criticise? Hard to say.**

- Self-esteem affirmation vs. identifying room for improvement.
- The feedback sandwich approach: praise-criticism-praise. But employees may see it as a ritual and still care most about the criticism.
- High-performance teams show more praise than criticism: a praise-to-criticism ratio of 5.6, versus 0.36.

**Best practices**

1. Focus on the behaviour, not the person.
2. Be selective: focusing on one critical weakness is more effective than commenting on all weaknesses.
3. Focus on the way to achieve the desired behaviour (positive framing): not just pointing out the weakness but setting goals.

As with personnel selection, a performance appraisal system needs to address issues of **discrimination and equal opportunities**.

**Requirements for a Good Appraisal System**

1. Standardised and uniform
2. Well communicated
3. Provides notice of deficiencies and opportunities to correct them
4. Appeal procedures
5. Training for raters and the use of multiple raters
6. Allows checks for possible discrimination problems

**Critics of Performance Appraisal**

- Performance is not normally distributed but follows a power-law distribution: most people show acceptable performance, with a few star performers (positively skewed); a simulation sketch follows below.
- Performance ratings are mostly biased and unreliable.
- Performance feedback does not help most employees improve.
- Concerns about its fairness and accuracy.
- Little evidence that PA/PM boosts organisational performance, given the amount of time and money spent.
- Critics advocate a **non-evaluative, coaching-oriented system** of managing performance instead.
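To make the power-law critique concrete, here is a minimal simulation (illustrative parameters only, not data from the course) contrasting a roughly normal performance distribution with a heavy-tailed, power-law-like one, where a few "stars" account for a disproportionate share of total output.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

normal_perf = rng.normal(loc=100, scale=15, size=n)    # bell-shaped view
powerlaw_perf = (rng.pareto(a=2.0, size=n) + 1) * 50   # positively skewed view

for name, perf in [("normal", normal_perf), ("power-law", powerlaw_perf)]:
    top1_share = np.sort(perf)[-n // 100:].sum() / perf.sum()
    print(f"{name:>9}: top 1% of workers produce {top1_share:.1%} of total output")
# Under the heavy-tailed view, the top 1% contribute far more than 1%.
```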
