Midterm #2 - INDL Psych PDF
Document Details
Uploaded by BonnySugilite8401
Beal University
Summary
This document contains information about industrial psychology concepts, such as evaluating selection techniques and decisions, and employee performance appraisal methods. It discusses reliability, validity, and utility of selection devices, as well as different methods for performance appraisal. The document also includes questions to consider.
Full Transcript
Midterm #2 covers the following:
○ Week 5: Evaluating Selection Techniques and Decisions (Chapter 6)
○ Week 8: Resumé / Curriculum Vitae workshop (no textbook chapter corresponds to this)
○ Week 8: Evaluating Employee Performance (Chapter 7)
The format is the same as Midterm #1. Also go to the same classrooms you were assigned to last time.
Format:
○ ~35 multiple choice
○ 6 definitions (choice of 10), 1 point each (6 total); the definitions come from the bolded words in the textbook
○ 2 short answers (choice of 4), 3 points each (6 total)
Chapter 6: Evaluating Selection Techniques and Decisions
This chapter focuses on how to evaluate whether a selection method is useful and how to use test scores to make hiring decisions. The term "test" in I/O psychology refers to any technique used to evaluate someone, such as references, interviews, and assessment centers.
++Characteristics of Effective Selection Techniques
Any selection technique used by an organization should be both reliable and valid.
★ Reliability is the extent to which a score from a test or evaluation is consistent and free from error. There are three ways to measure reliability:
○ Test-retest method: measures temporal stability by giving the same test to the same group of people at two different times and correlating the scores
○ Alternate-forms method: measures form stability by giving two forms of the same test to the same group of people and correlating the scores
○ Internal consistency method: measures item homogeneity by correlating responses to one test item with responses to other test items. This can be done with the split-half method, K-R 20, or coefficient alpha
★ When evaluating a reliability coefficient, you should consider the magnitude of the coefficient and the people who will be taking the test
★ Validity is the degree to which inferences from test scores are justified by the evidence. In other words, a test is valid if it accurately measures what it is designed to measure.
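As a concrete illustration of the internal consistency methods above, here is a minimal sketch of the split-half method with the Spearman-Brown correction. The data, the odd/even item split, and the function names are hypothetical illustrations, not taken from the textbook:

```python
# Sketch of split-half reliability (internal consistency) with the
# Spearman-Brown prophecy correction. All numbers are hypothetical.

def pearson_r(x, y):
    """Pearson correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """item_scores: one list of item scores per test taker.
    Splits the test into odd and even items, correlates the half scores,
    then applies the Spearman-Brown correction for the halved test length."""
    odd = [sum(person[0::2]) for person in item_scores]
    even = [sum(person[1::2]) for person in item_scores]
    r_half = pearson_r(odd, even)
    return (2 * r_half) / (1 + r_half)  # Spearman-Brown prophecy formula
```

The correction step is needed because correlating two half-length tests understates the reliability of the full-length test.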
There are five approaches to test validation:
○ Content validity: the extent to which test items sample the content that they are supposed to measure
○ Criterion validity: correlating test scores with a measure of job performance. This can be done using either predictive validity or concurrent validity
○ Construct validity: the extent to which a test measures the construct that it claims to measure
○ Known-group validity: comparing test scores from two contrasting groups that are “known” to differ on the construct being measured
○ Face validity: the extent to which a test appears to be job-related
Besides reliability and validity, another important characteristic of a good selection technique is cost-efficiency. The technique should be affordable for the organization to implement.
++Establishing the Usefulness of a Selection Device
Once a selection device is determined to be reliable and valid, it is important to determine its utility. A selection device's utility is its value or usefulness to the organization. Several methods can be used to determine a test's utility, including Taylor-Russell tables, the proportion of correct decisions, Lawshe tables, expectancy charts, and the Brogden-Cronbach-Gleser utility formula.
++Determining the Fairness of a Test
After establishing that a selection technique is useful, it is important to ensure that it is also fair. There are two main types of bias that can occur:
★ Measurement bias occurs when there are group differences (e.g., gender, race, or age) in test scores that are unrelated to the construct being measured
★ Predictive bias is when the predicted level of job success falsely favors one group over another. If differences in test scores result in one group being selected at a significantly higher rate than another, then adverse impact has occurred.
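Of the utility methods mentioned above, the proportion of correct decisions is the simplest to compute. A rough sketch with hypothetical quadrant counts (note the textbook's full method also compares this figure against the base rate, which is omitted here):

```python
# Sketch: proportion of correct decisions. All counts are hypothetical.
# A "correct" decision is hiring someone who succeeds (true positive)
# or rejecting someone who would have failed (true negative).

def proportion_correct(true_pos, false_pos, true_neg, false_neg):
    total = true_pos + false_pos + true_neg + false_neg
    return (true_pos + true_neg) / total

# e.g., 30 hired-and-successful, 10 hired-but-unsuccessful,
# 40 rejected-and-would-have-failed, 20 rejected-but-would-have-succeeded:
# (30 + 40) / 100 = 0.70
```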
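Adverse impact is conventionally flagged with the four-fifths (80%) rule. A minimal sketch with hypothetical hiring counts:

```python
# Sketch: four-fifths (80%) rule for adverse impact. Numbers are hypothetical.

def selection_rate(hired, applied):
    return hired / applied

def adverse_impact(hired_a, applied_a, hired_b, applied_b):
    """True if the lower group's selection rate is under 80% of the higher's."""
    rate_a = selection_rate(hired_a, applied_a)
    rate_b = selection_rate(hired_b, applied_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b) < 0.80

# e.g., 12 of 50 minority applicants hired (24%) vs. 40 of 100 majority
# applicants hired (40%): 0.24 / 0.40 = 0.60, below 0.80, so adverse
# impact would be flagged.
```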
++Making the Hiring Decision
There are several methods an organization can use to make hiring decisions, including:
★ Unadjusted top-down selection: selecting applicants in rank order based on their test scores
★ Rule of three: giving the hiring authority the names of the top three scorers on a test
★ Passing scores: establishing a minimum score that an applicant must achieve to be considered
★ Banding: grouping applicants who have similar test scores into “bands”
When choosing a hiring decision strategy, organizations must consider the legal and ethical implications of each approach.
Chapter 7: Evaluating Employee Performance
Chapter 7 of the textbook explores the process of evaluating employee performance. It outlines a 10-step process for developing a performance appraisal system and describes various methods for evaluating performance and providing feedback.
Reasons for Evaluating Employee Performance
Performance appraisal systems serve a variety of purposes, including:
★ Providing employee training and feedback: Performance appraisals give supervisors an opportunity to identify employee strengths and weaknesses and to offer constructive feedback for improving performance.
★ Determining salary increases: Organizations often use performance appraisals to determine the amount of merit-based pay increases employees will receive.
★ Making promotion decisions: When deciding whom to promote, organizations typically use performance appraisals to identify high-performing employees who have the potential to succeed in higher-level roles.
★ Making termination decisions: Performance appraisal documentation is important in legal cases involving the termination of an employee.
★ Conducting organizational research: Performance data can help an organization understand trends in employee performance and make changes to policies or practices.
The Performance Appraisal Process
The textbook outlines a 10-step performance appraisal process:
1. Determine the purpose of the appraisal: The organization must first determine the goals of the performance appraisal system, such as those listed above.
2. Identify environmental and cultural limitations: The organization must consider any environmental or cultural factors that might affect the success of the appraisal system, such as legal constraints or cultural norms.
3. Determine who will evaluate performance: This might include supervisors, peers, subordinates, customers, or the employee themselves (self-appraisal).
4. Create an instrument to evaluate performance: The organization must choose an appraisal method that aligns with the goals of the appraisal system and provides meaningful performance data.
5. Explain the system to those who will use it: Both raters and employees should be thoroughly trained on the performance appraisal system so that they understand how it works and what is expected of them.
6. Observe and document performance: Supervisors should observe employee behavior throughout the year and document critical incidents (examples of excellent and poor performance).
7. Evaluate employee performance: Supervisors use the chosen appraisal method to rate employees on various performance dimensions.
8. Review the results of the evaluation with the employee: Supervisors should meet with employees to discuss their performance appraisal and provide feedback.
9. Make personnel decisions: Performance appraisal data is used to inform decisions about salary increases, promotions, training, and termination.
10. Monitor the system for fairness and legal compliance: The performance appraisal system should be regularly reviewed to ensure it is fair, unbiased, and compliant with legal requirements.
Performance Appraisal Methods
The textbook discusses several different methods for evaluating employee performance, including:
★ Trait Focus: This method focuses on rating employee personality traits, such as initiative and dependability.
However, trait-focused appraisals are not as legally defensible as other appraisal methods because traits are not always directly related to job performance.
★ Competency Focus: This approach involves rating employees on job-related competencies, such as communication skills or problem-solving skills.
★ Task Focus: This method focuses on evaluating employees' performance on specific job tasks, such as completing reports or making sales calls.
★ Goal Focus: This approach involves setting specific goals for employees and evaluating their performance based on the extent to which they achieve those goals.
★ Graphic rating scales: This is a simple rating method where supervisors rate employees on a scale for various performance dimensions.
★ Behavioral checklists: This method involves checking off statements that describe employee behaviors on the job.
★ Comparison with other employees: Supervisors can rate employees based on how their performance compares to that of other employees.
★ Frequency with which they perform certain behaviors: Supervisors can rate how frequently an employee performs certain behaviors, such as greeting customers or following safety procedures.
★ The extent to which behaviors meet the expectations of the employer: Supervisors can evaluate to what extent an employee's behavior meets the performance standards established for their job.
Legal Considerations
- The performance appraisal system must be legally defensible, meaning that it should be objective, job-related, and consistent.
- Organizations should also be aware of the employment-at-will doctrine, which allows employers in most states to terminate an employee without a reason.
- However, even in at-will employment states, employers must ensure that termination decisions are not based on discriminatory factors.
Key Concepts
★ Critical incidents: Examples of excellent and poor employee performance observed by supervisors.
★ Critical incident log: Formal accounts of excellent and poor employee performance that are documented by the supervisor.
★ Stress: Perceived psychological pressure.
★ Racial bias: The tendency to give members of a particular race lower evaluation ratings than are justified by their performance.
Enhancing Your Understanding of Chapter 7
★ The sources emphasize the importance of a well-designed and legally defensible performance appraisal system.
★ The choice of appraisal method should be carefully considered based on the purpose of the appraisal and the specific job being evaluated.
★ Documentation of employee performance is crucial, especially for legal reasons.
★ Effective communication and feedback are essential components of the performance appraisal process.
Resume & Job Search
★ A typical employer will…
○ Make snap judgments in seconds
○ Make assumptions based on “superficial” things: spelling, grammar, formatting, unusual fonts
○ Ignore or discard info such as:
Personal descriptors and opinions → e.g., hard-working, team-oriented
Vague and meaningless statements → e.g., “customer service excellence”
Dense text
★ Resume
○ Common issues? A resume is usually in chronological order, often redundant (listing things that are not necessary), and often written in vague, unclear terms
○ Remember, you are creating a story, not a timeline
○ Things you should add include transferable skills, projects, activities, types of experience, etc.:
Fast typing
Project management
Supervisory/leadership positions
Training and development
★ Take an example of one of your volunteer/job activities. How could you incorporate it into your resume other than just listing the job title, organization, and year(s) of employment?
★ Do not include…
○ Remember, the purpose is to get an interview, not the job
○ Confusing, red-flag info
○ Explanations of why you left a job or took time off – this highlights the gap
○ Personal information: marital status, age, pictures, etc.
○ Irrelevant work experience
○ Negative language
○ Exaggerated job titles or responsibilities
○ Unprofessional e-mail address
★ 35-70% of employers report rejecting applicants based on what they found online.
○ Not just because of photos, but because of writing, lies, poor grammar, opinions, or complaining about a previous/current employer
★ You don't need to erase all traces of your online presence; just manage it to the best of your ability.
★ Infographics: other forms of summaries. You shouldn't substitute them for your resume, though.
★ Most effective ways to find a job?
○ CareerBuilder
○ Indeed
○ LinkedIn
○ Glassdoor
○ Etc.
★ Proactive job searching:
○ Many jobs are not posted – don't wait for an advertised position
○ Take the initiative: approach organizations, companies, or institutions that interest you
★ First impressions matter
○ E-mails: don't make them too long or too friendly; use a correct form of address (Dr., Mr., Ms.); avoid sending attachments; avoid spelling errors, poor grammar, and missing formatting
★ Employee performance:
○ What is work performance? The ability to complete tasks and meet expectations for a given job
○ Why do we evaluate work performance? (A number of key points were listed.) It's important especially when your boss needs to see how you're doing and whether the company you work for is doing well – they need DATA
★ Regardless of purpose, appraisals must be communicated. For example, Rate My Professor and the bad comments a professor received: regardless of the tone, it still gave him feedback, something he can work on.
★ What prevents performance appraisals from helping employees improve and develop:
○ The person might not take the criticism easily / might take it poorly
○ The rater might be biased / unfit: maybe basing the job performance rating on one “bad” interaction
○ The rater might emphasize negative biases and give no feedback for positive interactions
The actual answers:
○ Defensiveness: Protecting self-esteem.
Not open to hearing feedback
○ Motivational distortion: Selective attention to information that supports self-belief. You start to think of the positive things you've done
○ Self-preoccupation: Resisting others' opinions. This is more so when the person knows the basic skills and tries to meet the expectations
★ What will be the focus?
○ Goal Focus (Results)
Prevent crimes from occurring
Finish shift without personal injury
Have arrests and citations stand up in court
○ Competency Focus (KSAOs)
D…
This is more like “are you even competent to do this job?”
Using these (Results & KSAOs): Pros & cons
★ Pros…
○ Appear to be job related
○ Easier to give feedback
○ Legally defensible
★ Trait focus:
○ Honesty
○ Courtesy
○ Responsibility
○ Dependability
○ Cooperation
Pros & cons: Poor feedback…
★ Task-focused (involves several competencies)
○ Crime prevention…
Pros & cons: Easy to evaluate
★ Organizational citizenship: positive behavior that is supposed to help the company
★ Once you evaluate it, it is no longer voluntary
★ Perceptions of fairness: likely to arise when feedback is given and often seen as unfair…
★ Was input solicited from the employee? Were they given a voice?
○ Can employees challenge the evaluation?
★ Is it applied uniformly? Is everyone evaluated the same way, and do they know it?
++Definitions
★ Reliability: The extent to which a score from a test or from an evaluation is consistent and free from error.
★ Test-retest reliability: The extent to which repeated administration of the same test will achieve similar results.
★ Temporal stability: The consistency of test scores across time.
★ Alternate-forms reliability: The extent to which two forms of the same test are similar.
★ Counterbalancing: A method of controlling for order effects by giving half of a sample Test A first, followed by Test B, and giving the other half of the sample Test B first, followed by Test A.
★ Form stability: The extent to which the scores on two forms of a test are similar.
★ Internal reliability: The extent to which responses to test items measuring the same construct are consistent.
★ Item stability: The extent to which responses to the same test items are consistent.
★ Item homogeneity: The extent to which test items measure the same construct.
★ Kuder-Richardson Formula 20 (K-R 20): A statistic used to determine the internal reliability of tests that use items with dichotomous answers (yes/no, true/false).
★ Split-half method: A form of internal reliability in which the consistency of item responses is determined by comparing scores on half of the items with scores on the other half of the items.
★ Spearman-Brown prophecy formula: Used to correct reliability coefficients resulting from the split-half method.
★ Coefficient alpha: A statistic used to determine the internal reliability of tests that use interval or ratio scales.
★ Scorer reliability: The extent to which two people scoring a test agree on the test score, or the extent to which a test is scored correctly.
★ Validity: The degree to which inferences from test scores are justified by the evidence.
★ Content validity: The extent to which tests or test items sample the content that they are supposed to measure.
★ Criterion validity: The extent to which a test score is related to some measure of job performance.
★ Criterion: A measure of job performance, such as attendance, productivity, or a supervisor rating.
★ Concurrent validity: A form of criterion validity that correlates test scores with measures of job performance for employees currently working for an organization.
★ Predictive validity: A form of criterion validity in which test scores of applicants are compared at a later date with a measure of job performance.
★ Restricted range: A narrow range of performance scores that makes it difficult to obtain a significant validity coefficient.
★ Validity generalization (VG): The extent to which inferences from test scores from one organization can be applied to another organization.
★ Synthetic validity: A form of validity generalization in which validity is inferred on the basis of a match between job components and tests previously found valid for those job components.
★ Construct validity: The extent to which a test actually measures the construct that it purports to measure.
★ Known-group validity: A form of validity in which test scores from two contrasting groups “known” to differ on a construct are compared.
★ Face validity: The extent to which a test appears to be valid.
★ Barnum statements: Statements, such as those used in astrological forecasts, that are so general that they can be true of almost anyone.
★ Mental Measurements Yearbook (MMY): A book containing information about the reliability and validity of various psychological tests.
★ Unproctored internet-based testing (UIT): An assessment method that can be taken virtually at any time and place and on the device of the applicant's choosing.
★ Computer-adaptive testing (CAT): A type of test taken on a computer in which the computer adapts the difficulty level of questions asked to the test taker's success in answering previous questions.
★ Taylor-Russell tables: A series of tables based on the selection ratio, base rate, and test validity that yield information about the percentage of future employees who will be successful if a particular test is used.
★ Selection ratio: The percentage of applicants an organization hires.
★ Base rate: The percentage of current employees who are considered successful.
★ Proportion of correct decisions: A utility method that compares the percentage of times a selection decision was accurate with the percentage of successful employees.
★ Tenure: The length of time an employee has been with an organization.
★ Adverse impact: An employment practice that results in members of a protected class being negatively affected at a higher rate than members of the majority class. Adverse impact is usually determined by the four-fifths rule.
★ Predictive bias: A situation in which the predicted level of job success falsely favors one group over another.
★ Single-group validity: The characteristic of a test that significantly predicts a criterion for one class of people but not for another.
★ Measurement bias: Group differences in test scores that are unrelated to the construct being measured.
★ Differential validity: The characteristic of a test that significantly predicts a criterion for two groups, such as both minorities and nonminorities, but predicts significantly better for one of the two groups.
★ Multiple regression: A statistical procedure in which the scores from more than one criterion-valid test are weighted according to how well each test score predicts the criterion.
★ Linear: A straight-line relationship between the test score and the criterion of measurement.
★ Top-down selection: Selecting applicants in straight rank order of their test scores.
★ Compensatory approach: A method of making selection decisions in which a high score on one test can compensate for a low score on another test. For example, a high GPA might compensate for a low GRE score.
★ Rule of three: A variation on top-down selection in which the names of the top three applicants are given to a hiring authority who can then select any of the three.
★ Passing score: The minimum test score that an applicant must achieve to be considered for hire.
★ Multiple-cutoff approach: A selection strategy in which applicants must meet or exceed the passing score on more than one selection test.
★ Multiple-hurdle approach: Selection practice of administering one test at a time so that applicants must pass that test before being allowed to take the next test.
★ Banding: A statistical technique based on the standard error of measurement that allows similar test scores to be grouped.
★ Standard error of measurement (SEM): The number of points that a test score could be off due to test unreliability.
★ Forced-choice rating scale: A method of performance appraisal in which a supervisor is given several behaviors and is forced to choose which of them is most typical of the employee.
★ Performance appraisal review: A meeting between a supervisor and a subordinate for the purpose of discussing performance appraisal results.
★ Peter Principle: The idea that organizations tend to promote good employees until they reach the level at which they are not competent; in other words, their highest level of incompetence.
★ 360-degree feedback: A performance appraisal system in which feedback is obtained from multiple sources such as supervisors, subordinates, and peers.
★ Multiple-source feedback: A performance appraisal strategy in which an employee receives feedback from sources (e.g., clients, subordinates, peers) other than just their supervisor.
★ Contextual performance: The effort employees make to get along with their peers, improve the organization, and “go the extra mile.”
★ Forced distribution method: A performance appraisal method in which a predetermined percentage of employees are placed into a number of performance categories.
★ Rank order: A method of performance appraisal in which employees are ranked from best to worst.
★ Paired comparison: A form of ranking in which a group of employees to be ranked are compared one pair at a time.
★ Quantity: A type of objective criterion used to measure job performance by counting the number of relevant job behaviors that occur.
★ Quality: A type of objective criterion used to measure job performance by comparing a job behavior with a standard.
★ Error: Deviation from a standard of quality; also a type of response to communication overload that involves processing all information but processing some of it incorrectly.
★ Graphic rating scale: A method of performance appraisal that involves rating employee performance on an interval or ratio scale.
★ Contamination: The condition in which a criterion score is affected by things other than those under the control of the employee.
★ Frame-of-reference training: A method of training raters in which the rater is provided with job-related information, a chance to practice ratings, examples of ratings made by experts, and the rationale behind the expert ratings.
★ Critical incidents: A method of performance appraisal in which the supervisor records employee behaviors that were observed on the job and rates the employee on the basis of that record.
★ Distribution errors: Rating errors in which a rater uses only a certain part of a rating scale when evaluating employee performance.
★ Leniency error: A type of rating error in which a rater consistently gives all employees high ratings, regardless of their actual levels of performance.
★ Central tendency error: A type of rating error in which a rater consistently rates all employees in the middle of the scale, regardless of their actual levels of performance.
★ Strictness error: A type of rating error in which a rater consistently gives all employees low ratings, regardless of their actual levels of performance.
★ Halo error: A type of rating error that occurs when raters allow either a single attribute or an overall impression of an individual to affect the ratings that they make on each relevant job dimension.
★ Proximity error: A type of rating error in which a rating made on one dimension influences the rating made on the dimension that immediately follows it on the rating scale.
★ Contrast error: A type of rating error in which the rating of the performance level of one employee affects the ratings given to the next employee being rated.
★ Assimilation: A type of rating error in which raters base their rating of an employee during one rating period on the ratings the rater gave during a previous period.
★ Recency effect: The tendency for supervisors to recall and place more weight on recent behaviors when they evaluate performance.
★ Infrequent observation: The idea that supervisors do not see most of an employee's behavior.
★ Stress: Perceived psychological pressure.
★ Affect: Feelings or emotion.
★ Racial bias: The tendency to give members of a particular race lower evaluation ratings than are justified by their actual performance, or to give members of one race lower ratings than members of another race.
★ Employment-at-will doctrine: The opinion of courts in most states that employers have the right to hire and fire an employee at will and without any specific cause.
★ Employment-at-will statements: Statements in employment applications and company manuals reaffirming an organization's right to hire and fire at will.
★ Progressive discipline: Providing employees with punishments of increasing severity, as needed, in order to change behavior.
★ Behaviorally anchored rating scales (BARS): A method of performance appraisal involving the placement of benchmark behaviors next to each point on a graphic rating scale.
★ Mixed-standard scale: A method of performance appraisal in which a supervisor reads the description of a specific behavior and then decides whether the behavior of the employee is better than, equal to, or poorer than the behavior described.
★ Behavioral observation scales (BOS): A method of performance appraisal in which supervisors rate the frequency of observed behaviors.
★ Prototypes: The overall image that a supervisor has of an employee.
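The SEM and banding definitions above can be tied together numerically. A common convention (assumed here, not stated in these notes) sets the band width at 1.96 × SEM × √2, so that any score within that distance of the top score is treated as statistically indistinguishable from it. All numbers below are hypothetical:

```python
import math

# Sketch: score banding from the standard error of measurement (SEM).
# The band-width convention (1.96 * SEM * sqrt(2)) is an assumption here.

def standard_error_of_measurement(sd, reliability):
    # SEM = SD * sqrt(1 - reliability)
    return sd * math.sqrt(1 - reliability)

def band_with_top_score(scores, sd, reliability):
    """Return all scores statistically indistinguishable from the top score."""
    width = 1.96 * standard_error_of_measurement(sd, reliability) * math.sqrt(2)
    top = max(scores)
    return [s for s in scores if top - s <= width]

# e.g., sd = 10, reliability = .90: SEM is about 3.16, so the band width
# is about 8.8 points; 95, 90, and 88 fall in one band while 80 does not.
```

Note how the band widens as reliability drops: the less reliable the test, the less meaningful small score differences become.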
Questions to take into consideration:
★ What is the difference between reliability and validity?
★ What method of establishing validity is the best?
★ Why is the concept of test utility so important?
★ What is the difference between single-group and differential validity?
★ Why should we use anything other than top-down selection? After all, shouldn't we always hire the applicants with the highest scores?
★ What do you think is the most important purpose for performance appraisal? Why?
★ What problems might result from using a 360-degree feedback system?
★ The chapter mentioned a variety of ways to measure performance. Which one do you think is the best? Why?
★ What do you think is the best way to communicate performance-appraisal results to employees?
★ Is the employment-at-will doctrine a good idea? Why or why not?