Hrm 6622 Week 6 Assignment

Part 5
Staffing Activities: Employment

Chapter 11: Decision Making

Chapter 12: Final Match

McGraw-Hill/Irwin

Chapter 11: Decision Making

Staffing Organizations Model

  • Support Activities
      - Legal compliance
      - Planning
      - Job analysis
  • Core Staffing Activities
      - Recruitment: external, internal
      - Selection: measurement, external, internal
      - Employment: decision making, final match
  • Staffing Policies and Programs
  • Staffing System and Retention Management


Chapter Outline

  • Choice of Assessment Method
      - Validity Coefficient
      - Face Validity
      - Correlation with Other Predictors
      - Adverse Impact
      - Utility
  • Determining Assessment Scores
      - Single Predictor
      - Multiple Predictors
  • Hiring Standards and Cut Scores
      - Description of Process
      - Consequences of Cut Scores
      - Methods to Determine Cut Scores
      - Professional Guidelines
  • Methods of Final Choice
      - Random Selection
      - Ranking
      - Grouping
      - Ongoing Hiring
  • Decision Makers
      - HR Professionals
      - Managers
      - Employees
  • Legal Issues
      - Uniform Guidelines on Employee Selection Procedures
      - Diversity and Hiring Decisions


Learning Objectives for This Chapter

  • Interpret validity coefficients
  • Estimate the adverse impact and utility of selection systems
  • Describe methods for combining multiple predictors
  • Establish hiring standards and cut scores
  • Evaluate methods of making a final selection choice
  • Understand the roles of the various decision makers in the staffing process
  • Recognize the importance of diversity concerns in the staffing process


Discussion Questions for This Chapter

  • Your boss is considering using a new predictor. The base rate is high, the selection ratio is low, and the validity coefficient is high for the current predictor. What would you advise your boss and why?
  • What are the positive consequences associated with a high predictor cut score? What are the negative consequences?
  • Under what circumstances should a compensatory model be used? When should a multiple hurdles model be used?
  • What are the advantages of ranking as a method of final choice over random selection?
  • What roles should HR professionals play in staffing decisions? Why?
  • What guidelines do the UGESP offer to organizations when it comes to setting cut scores?


Choice of Assessment Method

  • Validity Coefficient
  • Face Validity
  • Correlation With Other Predictors
  • Adverse Impact
  • Utility


Validity Coefficient

  • Practical significance: extent to which the predictor adds value to the prediction of job success
      - Assessed by examining the sign and magnitude of the coefficient
      - Validities above .15 are of moderate usefulness
      - Validities above .30 are of high usefulness
  • Statistical significance
      - Assessed by probability (p) values
      - A reasonable level of significance is p < .05
  • Face validity: extent to which the predictor appears, on its face, to measure something job related
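
The validity coefficient is simply the correlation between predictor scores and a criterion of job success, so both rules of thumb above can be checked directly from scores. A minimal sketch, assuming a small set of made-up applicant data:

```python
import numpy as np
from scipy import stats

# Hypothetical data: test scores for 10 hires and their later performance ratings
predictor = np.array([62, 71, 55, 80, 68, 74, 59, 85, 66, 77])
performance = np.array([3.1, 3.8, 2.9, 4.2, 3.4, 3.9, 3.0, 4.5, 3.3, 4.0])

# Validity coefficient = Pearson correlation between predictor and criterion
r, p_value = stats.pearsonr(predictor, performance)

# Practical significance: sign and magnitude (rules of thumb from the slide)
if r >= .30:
    practical = "high usefulness"
elif r >= .15:
    practical = "moderate usefulness"
else:
    practical = "limited usefulness"

# Statistical significance: conventional threshold of p < .05
print(f"validity r = {r:.2f} ({practical}), p = {p_value:.3f}, "
      f"significant at .05 = {p_value < .05}")
```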


Correlation With Other Predictors

  • To add value, a predictor must add to the prediction of job success above and beyond the forecasting power of the predictors already in use
  • A predictor is more useful the
      - smaller its correlation with other predictors, and
      - higher its correlation with the criterion
  • Predictors are likely to be highly correlated with one another when their content domains are similar
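
One way to make this concrete is incremental R-squared: how much prediction improves when a new predictor is added to the one already in use. A rough simulation sketch (the correlations in the generated data are illustrative assumptions, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated criterion and predictors
current = rng.normal(size=n)                     # predictor already in use
criterion = 0.5 * current + rng.normal(size=n)   # measure of job success

redundant = current + 0.3 * rng.normal(size=n)   # highly correlated with current predictor
distinct = 0.4 * criterion + rng.normal(size=n)  # valid but less redundant with it

def r_squared(predictors, y):
    """Share of criterion variance explained by an ordinary least-squares fit."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

base = r_squared([current], criterion)
print(f"current predictor alone:         R^2 = {base:.3f}")
print(f"+ redundant new predictor:       R^2 = {r_squared([current, redundant], criterion):.3f}")
print(f"+ relatively distinct predictor: R^2 = {r_squared([current, distinct], criterion):.3f}")
```

The redundant predictor adds almost nothing beyond the current one, while the less correlated predictor raises R-squared noticeably, which is the point of the slide.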


Adverse Impact

  • Role of a predictor
      - Discriminates between people in terms of the likelihood of their job success
      - When it instead screens out a disproportionate number of minorities or women, adverse impact exists, which may result in legal problems
  • The trade-off to consider
      - What if one predictor has high validity but high adverse impact, while another has low validity and low adverse impact?
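
The slide does not say how adverse impact is usually quantified; one common screen, from the UGESP discussed later in the chapter, is the four-fifths rule, which compares selection rates across groups. A minimal sketch with made-up applicant-flow counts:

```python
def impact_ratio(hired_focal, applied_focal, hired_ref, applied_ref):
    """Ratio of the focal group's selection rate to the reference group's."""
    return (hired_focal / applied_focal) / (hired_ref / applied_ref)

# Hypothetical applicant flow data
ratio = impact_ratio(hired_focal=12, applied_focal=60,   # 20% selection rate
                     hired_ref=40, applied_ref=100)      # 40% selection rate

print(f"impact ratio = {ratio:.2f}")
print("potential adverse impact (below 4/5ths)" if ratio < 0.8
      else "passes the 4/5ths screen")
```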


Utility Analysis

  • Taylor-Russell tables focus on the proportion of new hires who turn out to be successful
  • They require three inputs:
      - Selection ratio: number hired / number of applicants
      - Base rate: proportion of current employees who are successful
      - Validity coefficients of the current and “new” predictors
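
Taylor-Russell tables report, for a given validity, selection ratio, and base rate, the proportion of hires who turn out to be successful. A quick way to see the logic behind the tables is a bivariate-normal simulation; the numbers below are illustrative assumptions, not values taken from the published tables:

```python
import numpy as np

def success_rate_of_hires(validity, selection_ratio, base_rate, n=200_000, seed=0):
    """Simulate predictor/criterion pairs and return the share of hires who succeed."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, validity], [validity, 1.0]]
    predictor, criterion = rng.multivariate_normal([0, 0], cov, size=n).T

    hire_cut = np.quantile(predictor, 1 - selection_ratio)   # top-down hiring
    success_cut = np.quantile(criterion, 1 - base_rate)      # who would succeed
    hired = predictor >= hire_cut
    return np.mean(criterion[hired] >= success_cut)

# Example: base rate .50, selection ratio .20, current vs. "new" predictor
print("validity .20 ->", round(success_rate_of_hires(.20, .20, .50), 2))
print("validity .40 ->", round(success_rate_of_hires(.40, .20, .50), 2))
```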


Utility Analysis

  • Economic gain formula
      - Focuses on the monetary impact of using a predictor
      - Requires a wide range of information on current employees, validity, number of applicants, cost of testing, etc.
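
The slide does not show the formula itself; a widely used version is the Brogden-Cronbach-Gleser economic gain model, in which the payoff depends on the number hired, their expected tenure, validity, the dollar value of performance differences, how selective hiring is, and testing costs. A sketch under that assumption, with purely illustrative parameter values:

```python
def economic_gain(n_hired, years_tenure, validity, sd_y, mean_z_of_hires,
                  n_applicants, cost_per_applicant):
    """Brogden-Cronbach-Gleser style utility estimate, in dollars."""
    payoff = n_hired * years_tenure * validity * sd_y * mean_z_of_hires
    testing_cost = n_applicants * cost_per_applicant
    return payoff - testing_cost

# Illustrative inputs: 20 hires staying 3 years, validity .35, SD of job
# performance in dollars = $12,000, hires averaging 1.0 SD above the applicant
# mean on the predictor, 200 applicants tested at $50 each
gain = economic_gain(n_hired=20, years_tenure=3, validity=.35, sd_y=12_000,
                     mean_z_of_hires=1.0, n_applicants=200, cost_per_applicant=50)
print(f"estimated economic gain: ${gain:,.0f}")
```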


Limitations of Utility Analysis

  • Determining the dollar value of performance is extremely subjective and variable for many jobs, and requires making many assumptions about how performance leads to economic success
  • Important variables are missing from the model
      - EEO/AA concerns
      - Applicant reactions
  • The utility formula is based on simplistic assumptions
      - Validity does not vary over time
      - Non-performance criteria are irrelevant
      - Applicants are selected in a top-down manner and all job offers are accepted


Discussion Questions

  • Your boss is considering using a new predictor. The base rate is high, the selection ratio is low, and the validity coefficient is high for the current predictor. What would you advise your boss and why?


Determining Assessment Scores

  • Single predictor
  • Multiple predictors: three models for combining them (contrasted in the sketch below)
      - Compensatory model
      - Multiple hurdles model
      - Combined model
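
A minimal sketch contrasting the compensatory and multiple hurdles approaches; the applicant names, weights, and cut scores are all assumptions made up for illustration:

```python
# Hypothetical applicant scores on two predictors (0-100 scale)
applicants = {
    "Avery": {"test": 85, "interview": 55},
    "Blake": {"test": 60, "interview": 90},
    "Casey": {"test": 72, "interview": 74},
}

# Compensatory model: a high score on one predictor can offset a low score on another
WEIGHTS = {"test": 0.6, "interview": 0.4}          # assumed rational weights
def compensatory(scores):
    return sum(WEIGHTS[p] * s for p, s in scores.items())

# Multiple hurdles model: an applicant must clear every cut score in sequence
CUT_SCORES = {"test": 65, "interview": 65}         # assumed cut scores
def passes_hurdles(scores):
    return all(scores[p] >= cut for p, cut in CUT_SCORES.items())

for name, scores in applicants.items():
    print(f"{name}: composite = {compensatory(scores):.1f}, "
          f"clears hurdles = {passes_hurdles(scores)}")
```

Note how the two models disagree: Avery and Blake have the strongest composites but fail a hurdle, while Casey clears both hurdles with a middling composite. A combined model applies the hurdles first and then ranks survivors on the composite.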


Relevant Factors: Selecting
the Best Weighting Scheme

  • Do decision makers have considerable experience and insight into selection decisions?
  • Is managerial acceptance of the selection process important?
  • Is there reason to believe each predictor contributes relatively equally to job success?
  • Are there adequate resources to use involved weighting schemes?
  • Are conditions under which multiple regression is superior satisfied?
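
The ethical issues at the end of the chapter mention unit weighting, rational weighting, and multiple regression as ways to combine predictors. A brief sketch of how the composite differs under each scheme; the standardized scores and weights are assumed for illustration:

```python
import numpy as np

# Standardized scores on three predictors for one applicant (illustrative)
scores = np.array([0.8, -0.2, 1.1])

unit_weights = np.array([1.0, 1.0, 1.0])           # every predictor counts equally
rational_weights = np.array([0.5, 0.2, 0.3])       # weights set by expert judgment
regression_weights = np.array([0.42, 0.05, 0.31])  # weights estimated from past data

for label, w in [("unit", unit_weights),
                 ("rational", rational_weights),
                 ("regression", regression_weights)]:
    print(f"{label:>10} composite = {scores @ w:.2f}")
```

Regression weights are only worth the effort when the conditions on the slide hold (large samples, stable predictor-criterion relationships); otherwise unit or rational weights often predict about as well and are easier for managers to accept.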


Ex. 11.4: Combined Model for Recruitment Manager


Hiring Standards and Cut Scores

  • Issue: What is a passing score?
  • The score may be
      - a single score from a single predictor, or
      - a total score from multiple predictors
  • Description of process
      - Cut score: separates applicants who advance from those who are rejected


Ex. 11.5: Consequences of Cut Scores


Hiring Standards and Cut Scores (continued)

  • Methods to determine cut scores (see the sketch below)
      - Minimum competency
      - Top-down
      - Banding
  • Professional guidelines

Ex. 11.6: Use of Cut Scores in Selection Decisions
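
A compact way to see how top-down hiring and banding differ in practice. The band width below uses a standard-error-of-difference style calculation; the candidate scores, test reliability, and standard deviation are purely illustrative assumptions:

```python
import math

scores = {"Dana": 91, "Eli": 88, "Fran": 86, "Gale": 79, "Hoan": 74}
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Top-down: fill openings strictly in score order
openings = 2
top_down_hires = [name for name, _ in ranked[:openings]]

# Banding: treat scores within one band of the top score as equivalent,
# e.g. band width = 1.96 * SED, where SED = SD * sqrt(2 * (1 - reliability))
sd, reliability = 10.0, 0.85          # assumed test statistics
band_width = 1.96 * sd * math.sqrt(2 * (1 - reliability))
top_score = ranked[0][1]
band = [name for name, s in ranked if s >= top_score - band_width]

print("top-down hires:", top_down_hires)
print(f"band width = {band_width:.1f} points; candidates in top band: {band}")
```

Within the band, candidates can be chosen on other grounds (for example, diversity or job-related experience), which is what makes banding attractive and controversial, as the ethical issue at the end of the chapter asks.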


Discussion Questions

  • What are the positive consequences associated with a high predictor cut score? What are the negative consequences?
  • Under what circumstances should a compensatory model be used? When should a multiple hurdles model be used?

Methods of Final Choice

  • Random selection
      - Each finalist has an equal chance of being selected
  • Ranking
      - Finalists are ordered from most to least desirable based on results of discretionary assessments
  • Grouping
      - Finalists are banded together into rank-ordered categories
  • Ongoing hiring
      - Hiring all acceptable candidates as they become available for open positions
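
A small sketch showing how random selection, ranking, and grouping lead to different choices among the same finalists; the names, scores, and group boundaries are assumed. Ongoing hiring simply applies an acceptability standard to candidates as positions open up rather than choosing among a fixed finalist pool.

```python
import random

finalists = {"Ira": 92, "Jo": 90, "Kai": 84, "Lee": 83}  # discretionary assessment scores
openings = 2

# Random selection: every finalist has an equal chance
random_pick = random.sample(list(finalists), openings)

# Ranking: order from most to least desirable, take from the top
ranking = sorted(finalists, key=finalists.get, reverse=True)
ranked_pick = ranking[:openings]

# Grouping: band finalists into rank-ordered categories, then choose within the top group
groups = {"top choice": [f for f, s in finalists.items() if s >= 90],
          "acceptable": [f for f, s in finalists.items() if 80 <= s < 90]}

print("random:", random_pick)
print("ranking:", ranked_pick)
print("grouping:", groups)
```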


Ex. 11.8: Methods of Final Choice


Decision Makers

  • Role of human resource professionals
      - Determine the process used to design and manage the selection system
      - Contribute to outcomes based on initial assessment methods
      - Provide input regarding who receives job offers
  • Role of managers
      - Determine who is selected for employment
      - Provide input regarding process issues
  • Role of employees
      - Provide input regarding selection procedures and who gets hired, especially in team approaches


Discussion Questions

  • What are the advantages of ranking as a method of final choice over random selection?
  • What roles should HR professionals play in staffing decisions? Why?


Legal Issues

  • Two legal issues are important in decision making: cut scores (hiring standards) and choices among finalists
  • Cut scores and the Uniform Guidelines on Employee Selection Procedures (UGESP)
      - If there is no adverse impact, the guidelines are silent on cut scores
      - If adverse impact occurs, the guidelines become applicable
  • Choices among finalists


Discussion Questions

  • What guidelines do the UGESP offer to organizations when it comes to setting cut scores?


Ethical Issues

  • Issue 1: Do you think companies should use banding in selection decisions? Defend your position.
  • Issue 2: Is clinical prediction the fairest way to combine assessment information about job applicants, or are the other methods (unit weighting, rational weighting, multiple regression) more fair? Why?
