Assessment in Industrial and Organizational Psychology

John P. Campbell

The basic theme of this chapter is that the assessment enterprise in industrial and organizational (I/O) psychology is very broad, very complex, and very intense. The major underlying reason is that the world of work constitutes the major portion of almost everybody's adult life, over a long period of time. It is complicated. The major components of this complexity are the broad array of variables that must be assessed; the multidimensionality of virtually every one of them; the difficulties involved in developing specifications for such a vast array of variables; the wide variety of assessment methods; the intense interplay among science, research, and practice; and the critical value judgments that come into play. This chapter gives a structured overview of these issues, with particular reference to substantively modeling psychology's major variable domains and the attendant assessment issues that are raised. The conclusion is that substantive specifications for what psychologists are trying to assess are critically important, and I/O psychologists should not shortchange this requirement, no matter how much the marketplace seems to demand otherwise.

To be fair, the term assessment can take on different meanings. Perhaps its narrowest construction is as a multifactor evaluation of specific individuals in terms of their suitability for a specific course of action, such as selection, training, or promotion. However, if the full spectrum of research and practice concerning the applications of psychology to the world of work is considered, assessment becomes a much, much broader activity. This chapter takes the broadest perspective. It equates assessment with measurement and outlines a map of the assessment landscape. The landscape is described in terms of (a) an overall framework of relationships that describe what I/O psychology is about, (b) the range of assessment purposes that flow from this framework, (c) the range and complexity of the variables that require assessment, (d) the range and complexity of the assessment methods that can be used, and (e) the psychometric issues that permeate the assessment enterprise.

In the beginning were the independent variable and the dependent variable, a distinction that sounds sophomoric but is of fundamental importance and is often neglected. For example, when discussing the history of assessing leadership, a distinction is often made between trait models and behavioral models as though they were competing explanations (e.g., Hunt, 1999). However, the behavioral models (e.g., Bowers & Seashore, 1966) focus on leader performance—the dependent variable—and trait models focus on a particular set of performance determinants (e.g., cognitive ability, personality)—the independent variables. The dependent variable is the variable of real interest. It is the variable one wants to predict, enhance, or explain for various value-laden reasons. The independent variable has no intrinsic, or extrinsic, value. For example, knowing someone's general cognitive ability has no intrinsic value. It only has value because it predicts, or does not predict, something else that is of value (e.g., leadership performance). Similarly, independent variables such as training programs or motivational interventions have no value unless they can change something that is important (i.e., critical dependent variables).

DEPENDENT VARIABLE LANDSCAPE

So what then are the dependent variables of value that populate the I/O psychology landscape? Identifying the relevant set is indeed a value judgment, and the superordinate distinction is whether one takes the individual or the institutional (i.e., organizational) point of view (Cronbach & Gleser, 1965). That is, is it the values of the management that determine what dependent variables are important, or the values of the individual job holder? The management cares about the viability of the organization. Individuals care about their own viability. Sometimes their respective concerns overlap. For example, the management values high individual performance because it contributes to the goals of the organization. Individuals strive for high performance because it improves their standard of living, long-term financial security, or sense of self-worth. However, for the individual, higher and higher levels of performance may lose value because the effort required to achieve them detracts from other dependent variables, such as one's general life satisfaction.

Wherein lie the values of the researcher and scientist? One argument is that the researcher and scientist must choose between the values of the organization and the values of the individual. Once that choice is made, then the interests of the scientist focus on determining the best methods of assessment, given the purposes for which the information is to be used. An alternative argument is that the scientist does not make the value judgment. A dependent variable, such as individual performance, is modeled and measured for the purpose of studying its determinants. Such research can be used both by the organization to improve selection and by the individual to improve career planning. The intent here is not to settle such arguments but to make the point that value judgments permeate all choices of what to assess on the dependent variable side. It is also tempting to argue that values do not intrude on the independent variable side where the canons of psychometric theory preside, but obviously such is not the case, as discussed in a later section of the chapter. Those value judgments pertain to the consequences of the decisions made as a function of assessment of the independent variable. A partial taxonomy of the dependent variables in I/O psychology follows.

From the organization's point of view, the dependent variables are

■■ individual performance in a work role, including individual performance as a team member;
■■ voluntary turnover;
■■ team performance as a team, not as the aggregation of individual contributions;
■■ team viability (analogous to individual turnover);
■■ productivity (in the economist's sense) of (a) individuals, (b) teams, and (c) organizational units; and
■■ organizational unit effectiveness (i.e., the bottom line).

From the individual's point of view, the dependent variables are

■■ career and occupational achievement;
■■ satisfaction with the outcomes of working (which could include satisfaction with performance achievement);
■■ perceived (or experienced) fair treatment (e.g., distributive and procedural justice);
■■ frequency of injury from accidents; and
■■ overall health and well-being, including physical and mental health, perceived stress, and work–family conflict.

These two lists carry at least the following assumptions, qualifications, or both:

1. Organizations are not concerned about job satisfaction or subjective well-being as dependent variables, but only as independent variables that have implications for performance, productivity, effectiveness, or turnover.

2. Information pertaining to the determinants of performance may be used in a selection system, to benefit the organization, or in a career guidance system, to benefit the individual (e.g., using ability, personality, and interest assessment to plan educational or job search activities). Similarly, training programs that produce higher skill levels can enhance individual performance for the benefit of the organization or enhance career options for individuals.

3. Fair and equitable treatment of individual employees and the level of individual health and well-being may be important dependent variables for the organization if they are incorporated as goals in the organization's ethical code or in a policy statement of corporate social responsibility, for which the management is then held responsible.

For the most part, I/O psychology does not operate from the individual point of view, even though several of its early pioneers did, for example, Donald Paterson or Walter van Dyke Bingham (cf. Koppes, Thayer, Vinchur, & Salas, 2007). At some point, vocational psychology (i.e., the individual point of view) became part of counseling psychology (Campbell, 2007; Meyer, 2007).

The dependent variable landscape is complex for assessment purposes, even as illustrated by the preceding simple lists. The complexity of assessment increases considerably when each of the general variables is modeled in terms of its major components. Consider each of the following.

Individual Performance

Before the mid-1980s, there was, relative to the assessment of individual performance, simply the "criterion problem" (J. T. Austin & Villanova, 1992), which was the problem of finding some existing and applicable indicator that could be construed as a measure (i.e., assessment) of individual performance (e.g., sales, number of pieces produced) while not worrying too much about the validity, reliability, deficiency, and contamination of the indicators. Since then, much has happened regarding how performance is defined and how its latent structure is modeled.

In brief, the consensus is that individual performance is best defined as consisting of the actions people engage in at work that are directed at achieving the organization's goals and that can be scaled in terms of how much they contribute to said goals. For example, sometimes it takes a great deal of covert thinking before the individual does something. Performance is the action, not the thinking that preceded the action, and someone must identify those actions that are relevant to the organization's goals and those that are not. For those that are (i.e., performance), the level of proficiency with which the individual performs them must be scaled. Both the judgment of relevance and the judgment of level of proficiency depend on a specification of the organization's important substantive goals, not content-free goals such as "make a profit."

Nothing in this definition requires that a set of performance actions be circumscribed by the term job or that they remain static over a significant length of time. Neither does it require that the goals of an organization remain fixed or that a particular management cadre be responsible for determining the organization's goals (also known as vision). However, for performance assessment to take place, the major operative goals of the organization, within some meaningful time frame, must be known, and the methods by which individual actions are judged to be goal relevant, and scaled in terms of what represents high and low proficiency, must be legitimized by the stakeholders empowered to do so by the organization's charter. Otherwise, there is no organization. This is as true for a family as it is for a corporation.

This definition creates a distinction between performance, as defined earlier, and the outcomes of performance (e.g., sales level, incurred costs) that are not solely determined by the performance of a particular individual, even one of its top executives. If these outcome indicators represent the goals of the organization, then individual performance should certainly be related to them. If not, the specifications for individual performance are wrong and need changing or, conversely, the organization is pursuing the wrong goals. If the variability in an outcome indicator is totally under the individual's control, then it is a measure of performance.

Given an apparent consensus on this definition of performance, considerable effort has been devoted to specifying the dimensionality of performance, in the context of the latent structure of the performance actions required by a particular occupation, job, position, or work role (see Bartram, 2005; Borman & Brush, 1993; Borman & Motowidlo, 1993; Campbell, McCloy, Oppler, & Sager, 1993; Griffin, Neal, & Parker, 2007; Murphy, 1989a; Organ, 1988; Yukl, Gordon, & Taber, 2002). These models have become known as performance models, and they seem to offer differing specifications for what constitutes the nature of performance as a construct. However, the argument here is that the correspondence among these models is virtually total.

Campbell (2012) has integrated all past and current specifications of the dimensional structure of the dependent variable, individual performance, including those dealing with leadership and management performance, and the result is summarized in the eight basic factors discussed in the next section.

Orthogonality is not asserted or implied, but content distinctions that have different implications for selection, training, and organizational outcomes certainly are. Although scores on the different dimensions may be added together for a specific measurement purpose, it is not possible to provide a substantive specification for a “general” factor. Whether dimensions can be as general as contextual performance or citizenship behavior is also problematic.

Basic factors. The basic substantive factors of individual performance in a work role (which are not synonymous with Campbell et al., 1993) are asserted to be the following.

Factor 1: Technical Performance. All models acknowledge that virtually all jobs or work roles have technical performance requirements. Such requirements can vary by substantive area (driving a vehicle vs. analyzing data) and by level of complexity or difficulty within area (driving a taxi vs. driving a jetliner; tabulating sales frequencies vs. modeling institutional investment strategies). Technical performance is not to be confused with task performance. A task is simply one possible unit of description that could be used for any performance dimension.

The subfactors for this dimension are obviously numerous, and the domain could be parsed into wide or narrow slices. The Occupational Information Network (O*NET; Peterson, Mumford, Borman, Jeanneret, & Fleishman, 1999) is based on the U.S. Department of Labor's Standard Occupational Classification structure, which currently uses 821 occupations for describing the major distinctions in technical task content across the entire labor force, and the 821 occupations are further aggregated into three higher order levels consisting of 449, 96, and 23 occupational clusters, respectively. The managers of O*NET have interestingly divided some of the Standard Occupational Classifications into narrower slices to better suit user needs and have also added new and emerging occupations such that O*NET 14.0 collected data on 965 occupations. The number will grow in the future (Tippins & Hilton, 2010). Potentially, at least, an occupational classification based on technical task content could be used to archive I/O psychology assessment data on individual work-role performance, end-of-training performance, or predicted performance.

Factor 2: Communication. The Campbell et al. (1993) model is the only one that isolates communication as a separate dimension, but it appears as a subfactor in virtually all others. Communication refers to the proficiency with which one conveys information that is clear, understandable, and well organized. It is defined as being independent of subject matter expertise. The two major subfactors are oral and written communication.

Factor 3: Initiative, Persistence, and Effort. This factor emerged from the contextual performance and management performance literatures as well as the organizational citizenship behavior literature in which it was referred to as Individual Initiative. To make this factor conform to the definition of performance used here, it must be composed of observable actions. Consequently, it is typically specified in terms of extra hours, voluntarily taking on additional tasks, going beyond prescribed responsibilities, working under extreme or adverse conditions, and so forth.

Factor 4: Counterproductive Work Behavior. Counterproductive Work Behavior (CWB), as it has come to be called, refers to a category of individual actions or behaviors that have negative implications for accomplishment of the organization’s goals (see Chapter 35, this volume, for additional information on this area).

The current literature does not speak with one voice regarding the meaning of CWB, but the specifications generally circumscribe actions that are intentional, that violate or deviate from prescribed norms, and that have a negative effect on the individual's contribution to the goals of the unit or organization. Descriptions of this domain are provided by Gruys and Sackett (2003) and Robinson and Bennett (1995). The general agreement seems to be that two major subfactors exist (e.g., see R. J. Bennett & Robinson, 2000; Berry, Ones, & Sackett, 2007; Dalal, 2005) distinguished by deviant behaviors directed at the organization (theft, sabotage, falsifying information, malingering) and behaviors directed at individuals, including the self (e.g., physical attacks, verbal abuse, sexual harassment, drug and alcohol abuse). Although not yet fully substantiated by research, it seems reasonable to also expect an approach–avoidance, or moving toward versus moving away, distinction for both organizational deviance and individual deviance. That is, the CWBs dealing with organizational deviance seem to be divided between aggressively destroying or misusing resources versus avoiding or withdrawing from the responsibilities of the work role. Similarly, CWBs directed at individuals seem to be divided between aggressive actions that are directed at other people and destructive actions directed at the self, such as alcohol and drug abuse and neglect of safety precautions. The approach–avoidance distinction is a recurring one in the study of motivation (Elliot & Thrash, 2002; Gable, Reis, & Elliot, 2003) and of personality (Watson & Clark, 1993), including a major two-factor model of psychopathology (Markon, Krueger, & Watson, 2005). It is also suggested in a study of CWB by Marcus, Schuler, Quell, and Humpfner (2002).

A major issue in the CWB literature is whether its principal subfactors are simply the extreme negative end of other performance factors or whether they are independent constructs. The evidence currently available (Berry et al., 2007; Dalal, 2005; Kelloway, Loughlin, Barling, & Nault, 2002; Miles, Borman, Spector, & Fox, 2002; Ones & Viswesvaran, 2003; Spector, Bauer, & Fox, 2010) has suggested that CWBs are not simply the negative side of other performance components. Low scores on other performance dimensions could result from a lack of knowledge or skill, but low scores on CWB reflect intentional deviance and are dispositional in origin.

Factor 5: Supervisory, Manager, Executive (i.e., hierarchical) Leadership. This factor refers to leadership performance in a hierarchical relationship. The substantive content, as specified by the leadership research literature, is most parsimoniously described by the six leadership factors listed in Exhibit 22.1 (Campbell, 2012). The parsimony results from the remarkable convergence of the literature, as detailed in Campbell (2012), from the Ohio State and Michigan studies through the contingency theories of Fiedler, House, Vroom, and Yetton to the current emphasis on being charismatic and transformational, leading the team, and operating in highly complex and dynamic environments. In conversations about leadership, the emphasis may be on leader performance, as defined here, or it may be on the outcomes of leader actions (e.g., follower satisfaction, unit profitability), on the determinants (predictors) of leadership performance, or on the contextual influences on leader performance or performance outcomes. However, when describing or assessing leadership performance (as defined here), the specifications are always in terms of one or more of these six factors. The relative emphasis may be different, and different models may hypothesize different paths from leader performance to leader effectiveness, which for some people may be the interesting part, but the literature's characterization of leader performance itself always seems to be within the boundaries of these six subfactors.

Exhibit 22.1
Six Basic Factors Making Up Leadership Performance

1. Consideration, support, person centered: Providing recognition and encouragement, being supportive when under stress, giving constructive feedback, helping others with difficult tasks, building networks with and among others
2. Initiating structure, guiding, directing: Providing task assignments; explaining work methods; clarifying work roles; providing tools, critical knowledge, and technical support
3. Goal emphasis: Encouraging enthusiasm and commitment for the group's or organization's goals, emphasizing the important missions to be accomplished
4. Empowerment, facilitation: Delegating authority and responsibilities to others, encouraging participation, allowing discretion in decision making
5. Training, coaching: One-on-one coaching and instruction regarding how to accomplish job tasks, how to interact with other people, and how to deal with obstacles and constraints
6. Serving as a model: Models appropriate behavior regarding interacting with others, acting unselfishly, working under adverse conditions, reacting to crisis or stress, working to achieve goals, showing confidence and enthusiasm, and exhibiting principled and ethical behavior.

Note. From Oxford Handbook of Industrial and Organizational Psychology (p. 173), by S. Kozlowski (Ed.), 2012, New York, NY: Oxford University Press. Copyright 2012 by Oxford University Press. Adapted with permission.

Similarly, the six subfactors circumscribe hierarchical leadership performance at all organizational levels. However, the relative emphasis on the factors may change at higher organizational levels, and the specific actions within each subfactor may also receive differential emphasis.

Factor 6: Management Performance (hierarchical). Within a hierarchical organization, this factor includes those actions that deal with obtaining, preserving, and allocating the organization's resources to best achieve its goals. The major subfactors of management performance are given in Exhibit 22.2 (Campbell, 2012). The major distinction between leadership performance and management performance, which not everybody agrees on, is that the leadership dimensions involve interpersonal influence. The management dimensions do not. As it was for the components of leadership, there may be considerably different emphases on the management performance subfactors across work roles and also as a function of the type of organization, organizational level, changes in the situational context, changes in organization goals, and so forth. Also, nothing in the leadership–management distinction implies two separate jobs or work roles. They coexist.

Factor 7: Peer–Team Member Leadership Performance. The content of this factor is parallel to the actions that make up hierarchical leadership (see Factor 5). The defining characteristic is that these actions are in the context of peer or team member interrelationships, and the peer–team relationships in question can be at any organizational level (e.g., production teams vs. management teams). That is, the team may consist of nonsupervisory roles or a team of unit managers.

Factor 8: Team Member–Peer Management Performance. A defining characteristic of the high-performance work team (e.g., Goodman, Devadas, & Griffith-Hughson, 1988) is that team members perform many of the management functions shown in Exhibit 22.2. For example, the team member performance factors identified in a critical incident study by Olson (2000) that are not accounted for by the technical performance factors, or the peer leadership factors, concern such management functions as planning and problem solving, determining within-team coordination requirements and workload balance, and monitoring team performance. In addition, the contextual performance and organizational citizenship behavior literatures have both strongly indicated that representing the unit or organization to external stakeholders and exhibiting commitment to and compliance with the policies and procedures of the organization are critical performance factors at any organizational level. Consequently, to a greater extent than most researchers realize or acknowledge, important elements of management performance exist in the peer or team context as well as in the hierarchical (i.e., management–subordinate) setting.

Exhibit 22.2
Eight Basic Factors of Management Performance

1. Decision making, problem solving, and strategic innovation: Making sound and timely decisions about major goals and strategies. Includes gathering information from both inside and outside the organization, staying connected to important information sources, forecasting future trends, and formulating strategic and innovative goals to take advantage of them
2. Goal setting, planning, organizing, and budgeting: Formulating operative goals; determining how to use personnel and resources (financial, technical, logistical) to accomplish goals; anticipating potential problems; estimating costs
3. Coordination: Actively coordinating the work of two or more units or the work of several work groups within a unit; scheduling operations; includes negotiating and cooperating with other units
4. Monitoring unit effectiveness: Evaluating progress and effectiveness of units against goals; monitoring costs and resource consumption
5. External representation: Representing the organization to those not in the organization (e.g., customers, clients, government agencies, nongovernment organizations, the public); maintaining a positive organizational image; serving the community; answering questions and complaints from outside the organization
6. Staffing: Procuring and providing for the development of human resources; not one-on-one coaching, training, or guidance, but providing the human resources that the organization or unit needs
7. Administration: Performing day-to-day administrative tasks, keeping accurate records, documenting actions; analyzing routine information and making information available in a timely manner
8. Commitment and compliance: Compliance with the policies, procedures, rules, and regulations of the organization; full commitment to orders and directives, together with loyal constructive criticism of organizational policies and actions

Note. From Oxford Handbook of Industrial and Organizational Psychology (p. 173), by S. Kozlowski (Ed.), 2012, New York, NY: Oxford University Press. Copyright 2012 by Oxford University Press. Adapted with permission.

Again, these eight factors are intended to be an integrative synthesis of what the literature has suggested are the principal dimensions of performance in a work role. They are meant to encompass all previous work on individual performance modeling, team member performance, and leadership and management. Even though the different streams of literature may use somewhat different words for essentially the same performance actions, great consistency exists across the different sources.

Performance dynamics. The latent structure just summarized has direct implications for the content of performance assessments. However, it does not speak to whether an individual's level of performance is stable over time or whether it changes. Assessment of performance dynamics must deal with additional complexities. One source of such dynamics is that performance requirements of the work role itself change over time, which can occur because of changes in (a) the substantive content of the requirements, (b) the level of performance expected, (c) the conditions under which a particular level of performance is expected, or (d) some combination of these. Individuals can also change. Much of I/O psychology research and practice deals with planned interventions designed to enhance the individual knowledge, skill, and motivational determinants of performance, such as training and development, goal setting, feedback, rewards of various kinds, better supervision, and so forth. Such interventions, with performance requirements held constant, could increase the group mean, have differential effects across people, or both. The performance changes produced can be sizable (e.g., Carlson, 1997; Katzell & Guzzo, 1983; Locke & Latham, 2002).

Interventions designed to enhance individual performance determinants can also be implemented by the individual's own processes of self-management and regulation (Kanfer, Chen, & Pritchard, 2008; Lord, Diefendorff, Schmidt, & Hall, 2010), and the effectiveness of these self-regulation processes could vary widely across people. In addition, if they have the latitude to do so, individuals could conduct their own job redesign (i.e., change the substantive content of their work role) to better utilize their knowledge and skills and increase the effort they are willing to spend. Academics are fond of doing that.

As noted by Sonnentag and Frese (2012), individual performance can also change simply as a function of the passage of time. Of course, time is a surrogate for such things as practice and experience, the aging process, or changes in emotional states (Beal, Weiss, Barros, & MacDermid, 2005).

Most likely, for any given individual over any given period of time, many of these sources of performance change can be operating simultaneously. Performance dynamics are complex, and attempts to model the complexity have taken many forms. For example, there could be characteristic growth curves for occupations (e.g., Murphy, 1989b), differential growth curves across individuals (Hofmann, Jacobs, & Gerras, 1992; Ployhart & Hakel, 1998; Stewart & Nandkeolyar, 2006; Zyphur, Chaturvedi, & Arvey, 2008), both linear and nonlinear components for growth curves (Deadrick, Bennett, & Russell, 1997; Reb & Cropanzano, 2007; Sturman, 2003), and cyclical changes resulting from a number of self-regulatory mechanisms (Lord et al., 2010). Empirical demonstrations of each of these have been established.
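
To illustrate what such growth-curve models typically look like, a person-specific quadratic growth model can be written as follows. This is a generic sketch of the modeling approach, not a formula taken from any of the studies cited above:

\[
P_{it} = \pi_{0i} + \pi_{1i}\, t + \pi_{2i}\, t^{2} + \varepsilon_{it},
\]

where \(P_{it}\) is the assessed performance of person \(i\) at time \(t\); the person-specific coefficients \(\pi_{0i}\), \(\pi_{1i}\), and \(\pi_{2i}\) capture differential starting points and differential linear and nonlinear change; and \(\varepsilon_{it}\) captures occasion-specific fluctuation. Cyclical, self-regulatory change would require additional time-varying terms.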

Adapting to dynamics. Adaptability can be viewed either as a characteristic of performance itself (i.e., a category of performance actions), as did Hesketh and Neal (1999), or as a property of the individual (i.e., as a determinant of performance). Ployhart and Bliese (2006) presented a thorough discussion of this issue and argued that it is more useful to model (i.e., identify the characteristics of) the adaptive individual than it is to propose adaptability as a distinct content dimension of performance. One reason is that the general definition of adaptability is not content domain specific, and providing specifications for adaptability as a distinct performance dimension has been difficult (e.g., see Pulakos, Arad, Donovan, & Plamondon, 2000).

Domain-specific dynamics. In sum, it can be taken as a given that work-role performance requirements change over time, sometimes over very short periods of time, and that individuals change (i.e., adapt) to meet them. Individuals can also change in anticipation of changes in performance requirements. Many interventions (e.g., training, goal setting, reward systems) have been developed to help individuals adapt to changing performance requirements. Individuals can also actively engage in their own self-management to develop additional knowledge and skill and to regulate the direction and intensity of their effort. If the freedom to do so exists, they can even proactively change their own performance responsibilities, or at least their relative emphases, so as to better use their own knowledge and skill or to better accomplish unit goals. Even if performance requirements remain relatively constant, individual performance can change over time as the result of practice, feedback, increasing experience, cognitive and physical changes resulting from aging, or even fluctuation in affect or subjective well-being.

As a result of all this, one might ask what implications performance dynamics and individual adaptability have for substantive models of individual work performance. This question is not the right question. A more appropriate question is, "What are the implications of substantive models of performance for the assessment of performance dynamics and individual adaptability?" The argument here is that although the latent dimensions of performance may be interdependent (e.g., higher technical performance could enhance leadership), the assessment of performance change must be linked to the individual performance dimensions. That is, the nature of performance changes may be different for different dimensions.

Summary. Why devote so much space to the basic modeling of individual performance in what is supposed to be an overview of assessment in I/O psychology? There are two reasons. First, individual performance is I/O psychology's most important dependent variable. Second, considering the assessment of individual performance raises some very fundamental issues that are relevant for the assessment of virtually all other variables, both dependent and independent. For example, what is the most useful specification for the latent structure? To what extent is the "most useful specification" a function of value judgments? Judgments by whom? Aside from conventional considerations of reliability, are the latent variables "dynamic"? What is the expected nature of the within-person variation? All of these issues have implications for the choice of assessment methods and for the purposes for which specific assessments are used.

Performance Assessment

The assessment of individual work-role performance may be I/O psychology's most difficult assessment requirement. J. T. Austin and Villanova (1992) provided ample documentation of the problem. Archival objective measures are few and far between and frequently suffer from contamination. Ratings, although they do yield meaningful assessments (W. Bennett, Lance, & Woehr, 2006; Conway & Huffcutt, 1997), tend to suffer from low reliability, method variance, contamination, and the possible intrusion of implicit models of performance held by the raters that do not correspond to the stated specifications of the assessment procedure (Borman, 1987; Conway, 1998). Alternatives to ratings have been methods such as performance in a simulator, performance on various forms of job samples (Campbell & Knapp, 2001), and using various indicators of goal accomplishment when goals are specified such that accomplishing them is virtually under the individual's total control (Pulakos & O'Leary, 2010).

In addition to these considerations, taking account of the purpose of assessment is also critical. The three major reasons for assessing performance are (a) for research purposes that have no high-stakes consequences; (b) for developmental purposes that carry the assurance that low scores do not carry negative consequences; and (c) for high-stakes appraisal situations such as promotion, compensation, termination, and so forth. Most likely, different assessment methods would be appropriate for each. Also, depending on which of the three is operative, the same assessment procedure could produce different assessments. For example, raters could be trying to satisfy different goals when doing operational performance appraisals versus providing ratings for research purposes only. Murphy and Cleveland (1995) discussed these issues at some length. The overall moral is that the measurement purposes must never be confused.

Team Performance

Research, theory, and professional discussion regarding team effectiveness, team performance, the determinants of team performance, and the processes by which the determinants (independent variables) affect team performance (dependent variables) have expanded exponentially over the past 20 years (e.g., Ilgen, Hollenbeck, Johnson, & Jundt, 2005; Kozlowski & Ilgen, 2006; Mathieu, Maynard, Rapp, & Gilson, 2008). However, most of the attention is given to the determinants of team performance and effectiveness and to the processes by which they have their effects. Modeling team performance itself for purposes of guiding assessment has received relatively little attention.

The dominant model is still that articulated by Hackman (1992), that is, that three major factors of group–team performance exist (as distinct from individual performance):

1. The first factor is the degree to which the team accomplishes its major substantive task goals. This factor is analogous to the technical factor for individual performance. No taxonomy of team goals exists, but it could include such things as meeting production goals, producing solutions to specific problems, developing policy, creating designs, modeling resource allocation decisions, and so forth.

2. The second factor is the degree to which team members feel rewarded by, or satisfied with, their role and committed to the team’s goals so that they continue to commit effort toward team goal accomplishment. This factor is analogous to the effort–initiative factor in individual performance.

3. The third factor is the degree to which the team improves its resources, skills, and coordination over time.

By implication, assessment of team performance would involve assessment of these three factors. The last two factors are sometimes combined into a higher order factor referred to as team viability, or the team's capability to maintain its technical performance over time.

Unit and Organizational Effectiveness

Organizations, and organizational units, do have a bottom line. That is, by some set of value judgments, a set of outcomes is identified that the organization or unit wants to maximize, optimize, or at least maintain at certain levels, such as quantity or quality of output (be it goods or services), sales, revenue, costs, earnings, return on investment, stock price, asset values, and so forth. The outcomes deemed important are a management choice, and choices can vary across organizations and across time within organizations. For an educational organization, the outcomes could be number of students, graduation rates, time to degree, mean SAT or GRE scores for the student body, prestige of postgraduation job placements, and so forth. Again, by definition, the level and variation of such outcomes is the result of multiple determinants, in addition to individual performance. Although the term organizational effectiveness is used frequently in the I/O literature relative to both research and practice, attempts to model organizational or unit effectiveness for purposes of assessment have been sparse. An early taxonomy was developed by Campbell (1977), which was given a three-dimensional higher order structure by Quinn and Rohrbaugh (1983) and Cameron and Quinn (1999).

Productivity

Productivity, particularly with regard to its assessment, is a frequently misused term in I/O psychology. Its origins are in the economics of the firm, where it refers to the ratio of the value of output (i.e., effectiveness) to the costs of achieving that level of output. Holding output constant, productivity increases as the costs associated with achieving that level of output decrease. It is possible to talk about the productivity of capital, the productivity of technology, and the productivity of labor, which are usually indexed by the value of output divided by the cost of the labor hours needed to produce it. For the productivity of labor, it would be possible to consider individual productivity, team productivity, or organizational productivity. Assessment of individual productivity would be a bit tricky, but it must be specified as the ratio of performance level (on each major dimension) to the cost of reaching that level (on each major dimension). Costs could be reflected by number of hours needed or wage rates. For example, terminating high wage-rate employees and hiring cheaper (younger?) individuals who can do the same thing would increase individual productivity.
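
In symbols, the ratio definitions just described can be restated as follows (a restatement of the text above, not a formula drawn from a specific source):

\[
\text{Productivity} = \frac{\text{value of output}}{\text{cost of the inputs used to produce it}}, \qquad
\text{Labor productivity} = \frac{\text{value of output}}{\text{labor hours} \times \text{wage rate}}.
\]

With purely hypothetical numbers, if a unit's output is valued at $500,000 and its labor cost falls from $250,000 to $200,000 while output is held constant, labor productivity rises from 2.0 to 2.5 even though nothing about performance itself has changed.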

Turnover

Turnover refers to the act of leaving an organization. Turnover can be voluntary or involuntary, as when an individual is terminated by the organization. Both voluntary turnover and involuntary termination can be good or bad depending on the circumstances. Depending on the work role, turnover could also vary as a function of determinants that operate at various times (e.g., variation in turnover could occur as a function of the initial socialization process, early vs. late promotions, vesting of retirement benefits).

For assessment purposes, great benefit would result if a latent structure for turnover could be specified in terms of the substantive reasons individuals leave. The beginnings of such a latent structure can be found in the integrative reviews of turnover research by Griffeth, Hom, and Gaertner (2000), Mitchell and Lee (2001), and Maertz and Campion (2004).

DEPENDENT VARIABLE ASSESSMENT FROM THE INDIVIDUAL’S POINT OF VIEW

Again, the defining characteristic is that higher scores on such variables are of value to the individual for his or her own sake. They are not of value because they correlate with or predict something else that is of value. Consequently, what is a dependent variable for the individual could be an independent variable for the organization.

Job Satisfaction

One taxonomy of such dependent variables valued by the individual is represented by the 20 dimensions assessed by the Minnesota Importance Questionnaire (Dawis & Lofquist, 1984), which are listed in Exhibit 22.3.

Within the theory of work adjustment (Dawis, Dohm, Lofquist, Chartrand, & Due, 1987; Dawis & Lofquist, 1984), the variables in Exhibit 22.3 are assessed in different ways for different reasons. The Occupational Reinforcer Pattern is a rating by supervisors or managers of the extent to which a particular work role provides outcomes representing each of the variables. The Minnesota Importance Questionnaire is a self-rating by the individual of the importance of being able to experience high levels of each of the 20 dimensions. The Minnesota Satisfaction Questionnaire is a self-rating of the degree to which the individual is satisfied with the level of each variable that he or she is currently experiencing. According to the theory of work adjustment, overall work satisfaction should be a function of the degree to which the work-role characteristics judged to be important by the individual are indeed provided by the work role, or job.
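
One simple way to illustrate this correspondence idea is a discrepancy index of the following form. It is an illustration only, not the specific index prescribed by the theory of work adjustment:

\[
\text{Correspondence}_{j} = -\sum_{k=1}^{20} \bigl| I_{jk} - R_{k} \bigr|,
\]

where \(I_{jk}\) is the importance of outcome \(k\) to individual \(j\) (from the Minnesota Importance Questionnaire) and \(R_{k}\) is the degree to which the work role provides outcome \(k\) (from the Occupational Reinforcer Pattern), both expressed on a common scale. Overall satisfaction is then predicted to increase with correspondence. The theory allows more refined indices, but the basic logic of matching what is wanted against what is provided is the same.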

Exhibit 22.3 represents the literature's most finely differentiated portrayal of the latent structure of what individuals want from work. There are other portrayals. For example, a long time ago, Herzberg (1959) grouped 16 outcomes obtained via a critical incident procedure (he called it story-telling) into two higher order factors variously called motivators and hygienes or intrinsic and extrinsic. The Job Descriptive Index (Smith, Kendall, & Hulin, 1969) focuses on five factors: the nature of the work itself; the characteristics of pay; the characteristics of supervision; the nature of promotion opportunities; and the characteristics of one's coworkers. There have also been several measures of overall, or general, job satisfaction (e.g., Hoppock, 1935; Kunin, 1955), which might use one item or several items.

Job satisfaction is a complex construct, and assessment issues revolve around the number of latent factors; the nature of the general factor; whether the sum of the parts (i.e., adding factor scores) captures all the variance in a rating of overall satisfaction; the dynamics of within-person variation; whether the frame of reference should be a description of the individual's state, an evaluation of that state, or the affective response to the evaluation; and how levels of satisfaction should be scaled (e.g., see Hulin & Judge, 2003). Assessment must deal with all of these issues.

It is instructive, or at least interesting, to compare the 20 job characteristics listed in Exhibit 22.3 with other individual work outcomes that the list does not seem to include but that have received important research or assessment attention. Examples follow.

Exhibit 22.3
The 20 First-Level Job Outcomes Incorporated in Dawis and Lofquist's (1984) Minnesota Theory of Work Adjustment

1. Ability utilization: The chance to do things that make use of one's abilities
2. Achievement: Obtaining a feeling of accomplishment and achievement from work
3. Activity: Being able to keep busy all the time, freedom from boredom
4. Advancement: Having realistic chances for promotion and advancement
5. Authority: Being given the opportunity to direct the work of others
6. Company policies and practices: Company policies and practices that are useful, fair, and well thought out
7. Compensation: Compensation that is fair, equitable, and sufficient for the work being done
8. Coworkers: Good interpersonal relationships among coworkers
9. Creativity: The opportunity to innovate and try out new ways of doing things in one's job
10. Independence: The chance to work without constant and close supervision
11. Moral values: Working does not require being unethical or going against one's conscience
12. Recognition: Receiving praise and recognition for doing a good job
13. Responsibility: The freedom to use one's own judgment
14. Security: Not having to worry about losing one's job
15. Social service: Opportunities to do things for other people as a function of being in a particular work role
16. Social status: The opportunity to be somebody in the community, as a function of working in a particular job and organization
17. Supervision—human relations: The respect and consideration shown by one's manager or supervisor
18. Supervision—technical: Having a manager or supervisor who is technically competent and makes good decisions
19. Variety: Having a job that incorporates a variety of things to do
20. Working conditions: Having working conditions that are clean, safe, and comfortable

Note. From Oxford Handbook of Industrial and Organizational Psychology (p. 173), by S. Kozlowski (Ed.), 2012, New York, NY: Oxford University Press. Copyright 2012 by Oxford University Press. Adapted with permission.

Justice

A considerable literature exists on distributive and procedural justice (Colquitt, 2001; Colquitt, Conlon, Wesson, Porter, & Ng, 2001) that could be viewed as subfactors of Outcome 6 in Exhibit 22.3. Distributive justice refers to an individual's self-assessment of how well he or she is being rewarded by the organization. Procedural justice refers to the individual's assessment of the relative fairness of the organization's procedure for managing and dispensing rewards. A meta-analysis by Crede (2006) showed perceptions of procedural justice to have a somewhat higher mean correlation with overall job satisfaction than did distributive justice (.56 vs. .62) when correlations were corrected for artifacts.
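
For readers unfamiliar with the phrase, "corrected for artifacts" typically means at least correcting the observed correlation for unreliability in both measures; other corrections, such as for range restriction, may also be applied, and the specific corrections used by Crede (2006) are not detailed here. The standard correction for attenuation is

\[
\hat{\rho} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}},
\]

where \(r_{xy}\) is the observed correlation and \(r_{xx}\) and \(r_{yy}\) are the reliabilities of the two measures.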

Overall Well-Being

Several dependent variables in the workplace, from the individual's point of view, go beyond job satisfaction and perceived distributive and procedural justice to include additional facets of overall well-being, such as the following:

■■ Physical health: In terms of its relationship to work roles, physical health is most often talked about in terms of a safe physical environment (Tetrick, Perrewé, & Griffin, 2010), that is, protections from environmental hazards, effective safety procedures, manageable physical demands, and available preventive care for potential illness. Assessment could involve the independent measurement of such factors or the individual's perception of them.

■■ Mental and psychological health. Although positive psychological health associated with working is a valued outcome from the individual point of view, it presents assessment complications. After controlling for basic personality characteristics, the framework proposed by Warr (1994) could be adopted that would then seek to assess (a) the individual's level of happiness or unhappiness, (b) relative feelings of comfort versus anxiety, and (c) feelings of depression versus enthusiasm. Lurking in the background is the research on set points (e.g., Lykken, 1999), which has argued that individuals have a characteristic level of happiness or well-being that determines much of the variance in their reactions to the work environment on these dimensions.

■■ Work–family conflict. This literature is growing, and the implication is that individuals value a work situation that does not produce undue conflict with family life or nonwork relationships. The determinants of work–family conflict are many and varied, and several models have been offered relating the determinants to work–family conflict (e.g., J. E. Edwards & Rothbard, 2000; Greenhaus & Powell, 2006; Grzywacz & Carlson, 2007). Some of the issues are whether work interferes with family or vice versa; whether the goals of the family and the goals of the individual at work are different; and the influence of gender (e.g., whether the man or woman stays home). The touchstone for assessment of the dependent variable is defining high scores as the perception (by the job holder) that work and family demands are in balance. That is, work demands do not degrade family goals, and family demands do not degrade individual work goals. Consequently, assessment should take into account how well the two sets of goals are aligned, and they may not be weighted equally (e.g., for economic reasons). Regardless of the relative weights, Cleveland and Colella (2010) made a strong argument for why both sets of goals strongly influence work–family conflict assessments.

■■ Work-related stress. The study of work stress has generated a very large literature (Sonnentag & Frese, 2003), and work stress is frequently offered as an important criterion variable because of the high frequencies with which it is reported (Harnois & Gabriel, 2000; Levi & Lunde-Jensen, 1996; National Institute for Occupational Safety and Health, 1999). Stress can be defined as a set of physiological, behavioral, or psychological responses to demands (work, family, or environmental) that are perceived to be challenging or threatening (Neuman, 2004). Assessment of individual stress levels is a more complex enterprise than assessment of job satisfaction, mental or physical health, or work–family conflict. The measurement operations could be physiological (e.g., cortisol levels in the blood), behavioral (e.g., absenteeism), psychological (depression), or perceptual (e.g., self-descriptions of stress levels), and the construct validity of any one of them is not assured given the complexities of modeling stress as a construct.

A somewhat overly simplistic model of stress as a criterion would be that the work–family situation incorporates potential stressors. Whether a potential stressor (e.g., a new project deadline) leads to a stress reaction is a function of how it is evaluated by the individual. For some, the new deadline might be threatening (e.g., it increases the probability of a debilitating failure or makes it difficult to care for a sick child). For others, it is merely an interesting challenge that will be fun to tackle. If potential stressors are evaluated as threatening, stress levels go up unless the individual has the resources to cope with them (Hobfoll, 1998). The Selye (1975) principle of optimum stress levels says that individuals need a certain amount of perceived stress to be optimally activated (Cooper, Dewe, & Driscoll, 2001). Similar models have been offered by Robert and Hockey (1997) and Warr (1987). However, if stress is too high, several counterproductive outcomes (labeled strains) can occur. These outcomes can be physical (fatigue, headaches), behavioral (reduced performance), or psychological (anxiety, sleep impairment). Consequently, assessment must choose among alternative measurement operations, must deal with the appraisal component (i.e., is a potential stressor actually a stressor?), and must make a case for the construct validity of the assessment of strains.

Individual Perspective: A Summary Comment

Job satisfaction, distributive and procedural justice, physical health, mental and psychological health, work–family conflict, stress, or simply evaluation of overall well-being have been discussed as dependent variables in the work setting that are important to individuals. That is, most people value being satisfied with their work, being physically and psychologically healthy, achieving a work life–non-work-life balance, and experiencing optimal stress levels. However, in the I/O psychology literature, these variables are usually not discussed as ends in themselves, but as independent variables that have an effect on the organization's bottom line (Cleveland & Colella, 2010; Tetrick et al., 2010). Depending on which perspective is chosen, the purpose of assessment is different, and the choice of assessment methods may differ as well.

INDEPENDENT VARIABLE LANDSCAPE

Compared with the dependent variable domain, the independent variable domain is a lush and verdant landscape—and much more intensely researched and assessed. It has also been well discussed by others and is the subject of many recent handbooks (Farr & Tippins, 2010; Scott & Reynolds, 2010; Zedeck, 2010). What follows is a brief outline primarily for the purpose of making certain distinctions that are discussed less often. As might be expected, the outline follows Campbell et al. (1993), Campbell and Kuncel (2001), and Campbell (2012).

The Campbell et al. (1993) model of performance posited two general kinds of performance determinants: direct and indirect. That is, individual differences in performance (either between or within) are a direct function of the current levels of performance-related knowledge and performance-related skills. There are different kinds of knowledge (e.g., facts, procedures) and different kinds of skills (e.g., cognitive, physical, psychomotor, expressive). The critical factor is that they are the real-time knowledge and skills determinants of performance. The only other direct determinants are motivational and are represented by three choices: (a) where to direct effort, (b) at what levels, and (c) for how long. All other performance determinants must exercise their effects by changing one or more of the direct determinants. It follows that a diagnosis of the direct causes of low or high performance must assess knowledge, skill levels, and choice behaviors that are specific to the work role's performance requirements in real time. For example, reading skill as a direct determinant refers to how well the individual reads the material required by the job in the work setting. Reading skill (ability?) as measured by the SAT is an indirect determinant. A multitude of indirect determinants of knowledge, skills, and choice behaviors exists, and a brief outline follows.
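
Before turning to that outline, the direct-determinant claim in the preceding paragraph can be written compactly. This is a paraphrase of the prose above, not notation taken from Campbell et al. (1993):

\[
\text{Performance} = f\bigl(\text{knowledge},\ \text{skill},\ \text{choice of direction, level, and persistence of effort}\bigr),
\]

with every other variable (abilities, traits, training, incentives, and so on) treated as an indirect determinant that operates only by changing one of these direct determinants.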

Traits: Abilities

The individual differences tradition in psychology in general, and I/O psychology in particular, has devoted much attention to the assessment of individual characteristics that are relatively stable over the adult working years. Assessments of such characteristics are used to predict future performance for selection and promotion purposes, predict who will benefit from specific training or development experiences, predict performance failures, provide the individual profiles needed to determine person–job

or person–organization fit, counsel individuals on career options, and serve as control variables in a wide variety of experiments on interventions (e.g., procedures for stress reduction). A brief outline of the major trait domains follows. An overarching distinction is made between abilities and skills (assessed with so-called maximum performance measures) and dispositions (assessed with typical performance measures).

Cognitive abilities. The value of using cognitive abilities to predict important dependent variables is well documented, and general cognitive ability (g) dominates (Ones, Dilchert, Viswesvaran, & Salgado, 2010; F. Schmidt & Hunter, 1998). The existence of g in virtually any matrix of cognitive tests and the correlation of near unity between the general factors estimated from different test batteries (e.g., see W. Johnson, Nijenhuis, & Bouchard, 2008) have been well established. The nature of the latent subfactors that make up the general factor is not a totally settled issue. The most comprehensive portrayal is still that of Carroll (1993), who acknowledged g as a single general factor that had eight (Carroll, 1993) or 10 (Carroll, 2003) subfactors. This portrayal is somewhat in opposition to that of Cattell (1971) and Horn (1989), who argued for the crystallized g and fluid g distinction with no general factor. Later investigations (W. Johnson & Bouchard, 2005) have tended not to support the crystallized g–fluid g structure. W. Johnson and Bouchard (2005) reanalyzed several data sets, using more sophisticated methods, and argued strongly that g has three subfactors: verbal, perceptual–spatial, and image rotation. However, a quantitative factor did not appear as a fourth subfactor, which might be because of the restriction of quantitative ability to simple number facility in the test batteries.
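As a purely illustrative sketch of what "the existence of g in virtually any matrix of cognitive tests" means operationally, the following Python fragment approximates general-factor loadings from the first principal component of a correlation matrix; the correlation values are hypothetical, and this is not the estimation method used in the studies cited above.

```python
import numpy as np

# Hypothetical correlations among four cognitive tests
# (vocabulary, arithmetic, spatial rotation, memory span).
R = np.array([
    [1.00, 0.55, 0.45, 0.40],
    [0.55, 1.00, 0.50, 0.42],
    [0.45, 0.50, 1.00, 0.38],
    [0.40, 0.42, 0.38, 1.00],
])

# The leading eigenvector of R, rescaled by the square root of its
# eigenvalue, approximates each test's loading on a single general factor.
eigenvalues, eigenvectors = np.linalg.eigh(R)
top = np.argmax(eigenvalues)
g_loadings = np.abs(eigenvectors[:, top]) * np.sqrt(eigenvalues[top])

# Share of total test variance carried by that general factor.
share = eigenvalues[top] / R.shape[0]

print(np.round(g_loadings, 2), round(float(share), 2))
```

Any positively correlated battery of this kind yields a large first factor, which is the empirical regularity the text describes.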

The most finely differentiated picture of how g could be decomposed is the comprehensive model of human abilities proposed by Fleishman and Reilly (1992), which is incorporated into O*NET (Peter- son et al., 1999). It includes 21 cognitive abilities. Although some evidence has been found for differ- ential prediction of performance across different jobs using cognitive ability subfactors (Rosse, Campbell, & Peterson, 2001; Zeidner, Johnson, &

Scholarios, 1997), the incremental gains are small compared with the variance accounted for by g. However, even small gains are significant in the con- text of large-scale selection and classification in large organizations. It is also true that the advan- tages of using specific subfactors rather than g for particular measurement purposes have not been evaluated against highly specific performance sub- factors (e.g., operating specific kinds of equipment that may require highly specific abilities).

Psychomotor abilities. The Fleishman and Reilly (1992) taxonomy includes 10 specific psychomotor abilities grouped into three higher order subfactors: (a) hand and finger dexterity and steadiness; (b) control, coordination, and speed of multilimb movements; and (c) complex reaction time and speed of movement involving hands, arms, legs, or all of these. Standardized performance-based tests are available for each of the 10 specific abilities, and they may (should?) be differentially important for predicting performance on specific job tasks, such as using a keyboard versus landing military jet aircraft at sea. No data are available for this domain, but it is interesting to speculate as to whether, for surgeons, open incision surgery requires somewhat different psychomotor abilities than robotic surgery.

Physical abilities. Although most occupations probably do not have them, several key occupations (e.g., firefighter, police officer, certain military occupations) have specialized physical ability requirements. The assessment of physical ability is also critical when considering the suitability of people with disabilities for various jobs. The latent structure of physical abilities was first investigated comprehensively by Fleishman and his colleagues (Fleishman, 1964; Fleishman & Quaintance, 1984; J. Hogan, 1991; Myers, Gebhardt, Crump, & Fleishman, 1993), who eventually arrived at a six-factor latent structure (i.e., static strength, explosive strength, dynamic strength, stamina, trunk strength, and flexibility).

Because physical ability assessment has not received as much research attention as cognitive ability assessment, at least two critical issues should be considered. First, any of the six factors may be broken down into more specific subfactors (e.g., arm and shoulder strength vs. leg strength), and for

each specific factor, there are two or more specific assessment techniques (e.g., lifting a weight off the ground vs. pushing a weight along the ground). Consequently, both the specific subfactors and the assessment method are critical choices. Gebhardt and Baker (2010) provided a thorough discussion of these issues and the research pertaining to establish- ing the physical requirements of work roles.

Sensory abilities. Certain occupations have specialized requirements for visual and auditory abilities (e.g., airline pilot). The Fleishman and Reilly (1992) taxonomy of sensory abilities incorporated in O*NET includes nine factors (e.g., far vision, peripheral vision, sound localization, speech recognition), each of which could be assessed by several different tests. For purposes of selection, certification, or licensure, criterion-referenced measurement is particularly critical for sensory abilities. That is, certain minimum levels of such abilities could be required, and top-down scoring would not suffice.

Somewhat strangely, the Fleishman and Reilly (1992) taxonomy, and consequently the O*NET taxonomy, does not include taste or olfactory abilities. Given the importance of marketing food and drink in current culture, this omission is potentially serious.

Speaking ability. O*NET includes only one such ability, speech clarity, but others may exist as well (e.g., speech modulation). Given the importance of oral communication in many occupations, this omission, too, would seem to be serious.

“Other” intelligences. The independent variable assessment landscape is also dotted with numer- ous variables that might be best described as “not g” (Lievens & Chan, 2010). The basic theme is that important abilities exist that are independent of g and that play a role in success at work but are not part of mainstream research. The two most prominent abilities in this category are practi- cal intelligence (Sternberg, Wagner, Williams, & Horvath, 1995), not to be confused with a higher order construct labeled successful intelligence (which includes creative, analytical, and practical intel- ligence; Sternberg, 2003), and emotional intelli- gence, measured either as cognitive ability (Salovey & Mayer, 1990) or as personality (Bar-On, 1997).

The available evidence pertaining to these con- structs has been reviewed at some length elsewhere (Gottfredson, 2003; Landy, 2005; Lievens & Chan, 2010; Murphy, 2006). The overall conclusion must still be that construct validity is lacking for measures of these non-g intelligences and that they are in fact better represented by other already existing vari- ables. For example, a recent study by Baum, Bird, and Singh (2011) evaluated a carefully constructed domain-specific situational judgment test of how best to develop businesses in the printing industry, which was then called a test of practical intelligence. With this juxtaposition, knowledge of virtually any specific domain of job-related knowledge could be labeled practical intelligence. What’s in a name?

Traits: Dispositions

Still within the context of stable, or at least quasi-stable, traits, the I/O psychology independent variable landscape includes many constructs reflective of dispositional tendencies, that is, tendencies toward characteristic behavior in a given context. Personality, motives, goal orientation, values, interests, and attitudes are the primary labels for the different domains.

Personality. The assessment of personality domi- nates this landscape (Hough & Dilchert, 2010; see also Chapter 28, this volume) in terms of both the wide range of available assessment instruments (R. Hogan & Kaiser, 2010) and the sheer amount of research relating personality to a wide range of dependent variables (Hough & Ones, 2001; Ones, Dilchert, Viswesvaran, & Judge, 2007). The efficacy of personality assessment for purposes of predicting the I/O psychology dependent variables has had its ups and downs, moving from up (Ghiselli, 1966) to down (Guion & Gottier, 1965) to up (Barrick & Mount, 1991, 2005), to uncertainty (Morgeson et al., 2007), to reaffirmation (R. Hogan & Kaiser, 2010; Hough & Dilchert, 2010; Ones et al., 2007). The ups and downs are generally reflective of how the assessment of personality is represented (e.g., narrow vs. broad traits), which dependent variables are of interest, how predictive validity is estimated, and the utility ascribed to particular magnitudes of estimated validity. The bottom line is that personal- ity assessment is a very useful enterprise so long as

the inferences that are made are consistent with the evidence pertaining to the dependent variables that can be predicted by appropriate assessments.

The assessment of personality for predictive or diagnostic purposes is complex for at least the fol- lowing reasons.

■■ The measurement operations (i.e., "items") can come from different models of what constitutes personality description. The lexical approach is based on the words used in normal discourse to describe behavioral tendencies in others. The latent structure of such descriptors can then be investigated empirically. The five-factor model of Costa and McCrae (1992) is the dominant solution. A second model would be to consult more basic theories of personality (e.g., Eysenck, 1967; Markon et al., 2005; Tellegen, 1982; Tellegen & Waller, 2000), write items reflective of the components specified by the theory, and investigate their construct validity. The advocates of the theory-based approach have argued that it produces a latent structure that is tied more closely to biological substrates (DeYoung et al., 2010). Both approaches can produce hierarchical latent structures.

■■ Whether the descriptors (i.e., items or scales) are obtained by data mining normal discourse or by following the specifications of a theory, assessments of an individual can be obtained via self-report or observer report. Although the bulk of personality assessment in I/O psychology is self-report, observer reports may be more predic- tive of various aspects of performance (e.g., Oh, Wang, & Mount, 2011). Are self-reports and observer reports different constructs? R. Hogan and Kaiser (2010) argued the affirmative and referred to self-descriptions as self-identity and to observer descriptions as reputations.

■■ The general agreement (DeYoung, Quilty, & Peterson, 2007) is that the lexically derived Big Five are themselves multidimensional and are composed of distinct facets. Going in the other direction, combining two or three of the Big Five into higher order composite dimensions (e.g., integrity) has also been useful. DeYoung (2006) argued for two basic subfactors but rejected the

existence of a general factor. Whether an assess- ment should use composite dimensions, fac- tors at the Big Five level of generality, or more specific facets depends on the measurement purpose.

■■ At the Big Five level of generality, there is con- siderable agreement that the five-factor model is deficient and does not include additional impor- tant constructs such as religiosity, traditionalism or authoritarianism, and locus of control (Hough & Dilchert, 2010).

Motives or needs. Alderfer (1969), Maslow (1943), McClelland (1985), Murray (1938), White (1959), and others have offered models of the latent structure of human motives, or needs. Explicitly, or by implication, motives are defined as inner states that determine the outcomes that people strive to achieve or strive to avoid. The strength of a motive determines the strength of the striving. Different motives are associated with different classes of outcomes (e.g., outcomes that satisfy achievement needs vs. outcomes that meet social needs).

Although the distinctions between the intensity of characteristic behavioral tendencies (personality) and the strength of striving for specific outcomes (motives) are not always perfectly clear, the assess- ment methods have been different enough to war- rant considering them separately. For example, within I/O psychology the projective techniques (ambiguous pictures) used by McClelland (1985) to assess need achievement and fear of failure and the sentence completion scales used by Miner (1977) to assess the motivation to manage are not personality scales in the sense of the NEO Personality Inven- tory, California Psychological Inventory, or Multidi- mensional Personality Questionnaire. Motive assessment has more specific referents (for more information on projective measures, see Volume 2, Chapter 10, this handbook).

Goal orientation. A very specific instantiation of motive assessment that has received increasing attention in I/O psychology is the assessment of goal orientation as it has developed from the work of Dweck and colleagues (Dweck, 1986; Elliott & Dweck, 1988). Initially, two orientations (motives) were posited in the context of training

and instruction. A performance orientation charac- terizes individuals who strive for a desirable final outcome (e.g., final grade). Similar to McClelland (1985), the goal is to achieve the final outcomes that the culture defines as high achievement. By contrast, a mastery or learning orientation characterizes indi- viduals who strive to learn new things regardless of the effort involved, the frequency of mistakes, or the nature of the final evaluation. It is learning for learning’s sake.

As noted by DeShon and Gillespie (2005), agree- ment on the nature of goal orientation’s latent struc- ture, and on whether it is a trait, quasi-trait, or state variable, is not uniform. Considerable research has focused on whether learning and performance orien- tations are bipolar or independent and whether one or both of them are multidimensional (DeShon & Gillespie, 2005). The answers seem to be that they are not bipolar and that performance orientation can be decomposed into performance orientation– positive—the striving toward final outcomes defined as achievement—and performance orientation– negative—the striving to avoid final outcomes defined as failure. One major implication is that performance-oriented people will avoid situations in which a positive outcome is not relatively certain and that learning-oriented individuals will relish the opportunity to try, regardless of the probability of a successful outcome. Assessment of goal orientations is still at a relatively primitive stage (Payne, Young- court, & Beaubien, 2007) and has not addressed the issue of whether learning or performance orienta- tions are domain specific. For example, could an individual have a high learning orientation in one domain (e.g., software development) but not in another (e.g., cost control)? Also, the question of whether goal orientation is trait or state has not been settled. However, even though assessment is primitive, research has suggested that goal orienta- tion is an important determinant of performance and satisfaction in training and in the work role (Payne et al., 2007).

Interests. Interest assessment receives the most attention within the individual, not the organiza- tional, perspective and is a major consideration in vocational guidance, career planning, and individual

job choice. It has also played a role, albeit smaller, in personnel selection and classification on the basis of the notion that individuals will devote more atten- tion and effort to things that interest them, other things being equal, including the mastery of relevant skills (Van Iddekinge, Putka, & Campbell, 2011).

Assessment of interests is dominated by two inventories, the Self-Directed Search (Holland, 1994) and the Strong Interest Inventory (Harmon, Hansen, Borgen, & Hammer, 1994). The Self- Directed Search portrays interest via the now- familiar RIASEC (realistic, investigative, artistic, social, enterprising, and conventional) hexagon, which says that the latent structure of interests is composed of six factors with a particular pattern of intercorrelations. The RIASEC profiles can be used to characterize both individuals and jobs or occupa- tions. A profile for an occupation is supposedly indicative of the degree to which the occupation will satisfy each of the six interest areas. Holland (1997) viewed the Self-Directed Search as a measure of per- sonality and essentially subsumed interests within the overall domain of personality. The Strong Inter- est Inventory uses empirical weighting to differenti- ate individuals in an occupation from people in general on preferences for specific activities, school subjects, and so forth. Such preferences are not viewed as synonymous with personality. The Strong Interest Inventory is also scored in terms of 20 basic interest dimensions that have relatively low correla- tions with personality measures (Sullivan & Han- sen, 2004). Whether interests account for incremental variance in the dependent variables, when compared with personality or cognitive abil- ity, has only begun to be researched (see Van Iddekinge et al., 2011; for more information on the assessment of interests, see Volume 2, Chapter 19, this handbook).
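To make the empirical-weighting idea concrete, here is a minimal sketch of occupational keying with hypothetical endorsement rates and a deliberately simplified scoring rule; it is not the Strong Interest Inventory's actual algorithm.

```python
from typing import Dict

def empirical_key(occupation_rates: Dict[str, float],
                  general_rates: Dict[str, float]) -> Dict[str, float]:
    """Weight each preference item by how much more often members of the
    occupation endorse it than people in general."""
    return {item: occupation_rates[item] - general_rates[item]
            for item in occupation_rates}

def occupational_score(responses: Dict[str, int], key: Dict[str, float]) -> float:
    """Sum the weights of the items the respondent endorses (1 = 'like')."""
    return sum(w for item, w in key.items() if responses.get(item) == 1)

# Hypothetical endorsement rates for three preference items.
biologists = {"lab work": 0.80, "public speaking": 0.30, "bookkeeping": 0.20}
general_sample = {"lab work": 0.35, "public speaking": 0.40, "bookkeeping": 0.45}

key = empirical_key(biologists, general_sample)
respondent = {"lab work": 1, "public speaking": 0, "bookkeeping": 1}
print(round(occupational_score(respondent, key), 2))  # higher = more like the occupation
```

The design point is simply that the weights are derived from group differences in item endorsement rather than from a theory of what the items mean, which is one reason such scores need not line up with personality constructs.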

Values. Although defining values presents the usual difficulties of choosing from among alterna- tives, Chan (2010) presented a careful synthesis. Values seem most usefully defined as “the individu- al’s stable beliefs that serve as general standards by which he or she evaluates specific things, including people, behaviors, activities, and issues” (Chan, 2010, p. 321). By this specification, which distinguishes

values from personality, motives, and interests, the assessment of values can play an important role in career planning, specific job choice, and decisions to stay from the individual’s perspective and in person- nel selection, person–organization fit, organizational commitment, and turnover from the organization’s perspective.

The latent structure of values in the context of work has not been studied very intensively. As noted by Chan (2010), the taxonomy produced by Schwartz and Bilsky (1990) is perhaps the most useful. It has 10 value dimensions for describing individuals and seven dimensions for describing culture, for comparative purposes. Another structure is provided by Cooke and Rousseau (1988). In general, research on values and the development of methods for the assessment of values in the work context needs more attention in I/O psychology. Values as indicators of cultural distinctions across countries are another matter. Considerable research has been done using Hofstede's dimensions, and a comprehensive meta-analysis of these dimensions has been provided by Taras, Kirkman, and Steel (2010).

The State Side

The independent variables noted so far have been designated as trait variables that are relatively stable over the individual's work life, or at least the major portion of it. I/O psychology also deals with a complex structure of independent variables that are more statelike. That is, they are to some degree malleable, if not dynamic, as the result of situational effects, planned or unplanned. State variables are no less important than trait variables in explaining individual differences in the critical dependent variables, and the interaction between trait and state should be considered as well. The important state variables also tend to mirror the ability versus disposition distinction. That is, for some state variables, the assessment of maximum performance is the goal, whereas for others, the assessment of representative or typical dispositional states is the goal. More concretely, the distinction is between knowledge and skill versus attitudes and the cognitive regulation of choice behavior. However, for both abilities and dispositions, the distinctions between state and trait are developmentally complex.

Ackerman (2000), Ackerman and Rolfhus (1999), Kanfer and Heggestad (1997), and Lubinski (2010) have provided a roadmap.

Knowledge and Skill

Specifications for knowledge and skill are elusive. What follows is an elaboration on Campbell and Kuncel (2001) and an attempt to distinguish among (a) declarative knowledge, (b) proceduralized knowledge, (c) skill, and (d) problem solving. It is meant to be consistent with Anderson (1987) and Simon (1992). The nature of competencies is a separate issue.

Declarative knowledge is knowledge of labels and facts pertaining to objects, events, processes, condi- tions, relationships, rules, if–then relationships, and so forth. As in the Anderson (1987) framework, declarative knowledge is distinguished from proce- duralized knowledge, which refers to knowing how something should be done (e.g., How should shin- gles be put on a roof? How should a correlation matrix be factor analyzed? How should a golf club be swung?). In contrast to knowing how to do something, skill refers to actually being able to do it. Sometimes the distinction between proceduralized knowledge and skill is relatively small (e.g., know- ing how to factor analyze a matrix vs. actually doing it), and sometimes it is huge (e.g., knowing how to swing a three-iron and actually being able to do it at some reasonable level of proficiency; note the qualifier—skills are not dichotomous variables). Consequently, a skill can be defined as the applica- tion of declarative and proceduralized knowledge capabilities to solve structured problems and accom- plish specified goals. That is, the problems or goal accomplishments at issue have known (i.e., correct) solutions and known ways of achieving them. The issue is not whether the problems or specified goals are easy or difficult, it is whether correct solutions can be specified.

The capabilities commonly labeled as problem solving, critical thinking, or creativity should be set apart from a discussion of knowledge and skill. Although these capabilities appear frequently in competency models and other forms of knowledge, skills, and abilities lists, they are seldom, if ever, given a concrete specification, seemingly because

everyone already knows what they are. Conse- quently, whether problem solving, creativity, and critical thinking are intended as trait or state vari- ables is not clear. That is, are they distinct from gen- eral cognitive ability, and can they be enhanced via training and experience? Attempts to assess these capabilities must somehow deal with this lack of specification.

Following Simon (1992), problem solving could be defined as the application of knowledge and skill capabilities to the development of solutions for ill- structured problems. Ill-structured problems are characterized as problems for which the methods and procedures required to solve them cannot be specified with certainty and for which no correct solution can be specified a priori. Generating solu- tions for such problems is nonetheless fundamen- tally and critically important (e.g., What should be the organization’s research and development strat- egy? What is the optimal use of training resources? How can the coordination among teams be maxi- mized?). Specified in this way, a problem-solving capability is important for virtually all occupations, which invites a discussion of how it can be devel- oped and assessed. The literature on problem solv- ing within cognitive psychology in general, and with regard to the study of expertise in particular, is rea- sonably large (Ericsson, Charness, Feltovich, & Hoffman, 2006). To make a long story brief, the conclusions seem to be that (a) there is no general (i.e., domain-free) capability called problem solving that can be assessed independently of g; (b) problem- solving expertise, as defined earlier, is domain spe- cific; (c) expert problem solvers in a particular substantive or technical specialty simply know a lot, and what they know is organized in a framework that makes it both useful and accessible; and (d) experts use a variety of heuristics and cues correctly to identify and structure problems, determine what knowledge and skills should be applied to them, and judge which solutions are useful.

Currently, expert problem solving is viewed as a dual process (Evans, 2008). That is, solutions are either retrieved from memory very quickly, seem- ingly with minimal effort and thought, or a much more labor-intensive process of problem exploration and definition occurs, thinking about and evaluating

potential solutions and finally settling on a solution or course of action. The latter process is not a serial progression through a specific series of steps, but it is an organized effort to use the expert’s fund of knowledge, skills, and strategies in a useful way.

The dual-process models are not strictly analo- gous to automatic versus controlled processing dis- tinctions (Ackerman, 1987). The distinction is more between identifying a solution very quickly versus identifying one more deliberately. Different brain processes are involved, as evidenced by functional magnetic resonance imaging studies (Evans, 2008). Some investigators (e.g., Salas, Rosen, & DiazGrana- dos, 2010) have been quick to label the fast process intuition and insert it into competency models, knowledge, skills, and abilities lists, and the like— again with virtually no specifications for what intu- ition is. It is another example of an important word from general discourse causing assessment problems for I/O psychology when attempting to incorporate it in research or practice.

Following Simon (1992), Kahneman and Klein (2009) demystified intuition by defining it as a pro- cess that occurs when an ill-structured problem to be solved exists, and the problem situation provides cues that the expert can use to quickly access rele- vant information stored in memory that provides a useful solution. Virtually by definition, intuitive expertise must be based on a large, optimally struc- tured base of information and on identifying the most valid situational cues. There is no magic in intuition. With regard to solving ill-structured prob- lems, the distinction between quickly accessing a useful solution (i.e., intuition) and being more deliberative is not a clear dichotomy. A final solu- tion might be produced quickly but then subjected to varying degrees of deliberation.

Solving structured problems (i.e., exhibiting a skill as defined earlier) is a somewhat different phe- nomenon. Certain (but certainly not all) skills can be practiced enough so that they do become auto- matic (Ackerman, 1988) and can be used without effort or conscious awareness. However, many skills will always remain a controlled or deliberative pro- cess (e.g., creating syntax). Experts do it more quickly and more accurately than other people, but not automatically.

Creativity

What then are creativity and critical thinking? Answering such questions in detail is beyond the scope of this chapter, but the following discussion seems relevant vis-à-vis their assessment. Comprehensive reviews of creativity theory and research are provided by Dilchert (2008), Runco (2004), and Zhou and Shalley (2003).

Creativity has been assessed as both a cognitive and a dispositional trait, as in creative ability and creative personality. Both cognitive- and personality- based measures have been developed via both empirical keying (e.g., against creative vs. noncre- ative criterion groups) and homogeneous, or construct-based, keying. Meta-analytic estimates of the relationships between cognitive abilities and cre- ative ability and between established personality dimensions (e.g., the Big Five) and creative personal- ity scales are provided by Dilchert (2008) as well as the correlations of creative abilities and creative per- sonality dimensions with measures of performance.

Within a state framework, creativity can also be viewed as a facet of ill-structured problem-solving performance (e.g., George, 2007; Mumford, Baughman, Supinski, Costanza, & Threlfall, 1996). Here, the difficulty is in distinguishing creative from noncreative solutions. The specifications for the distinction tend not to go beyond stipulating that creative solutions must be both unique, or novel, and useful (George, 2007; Unsworth, 2001). That is, uniqueness by itself may be of no use. In the context of problem-solving performance, is a unique (i.e., creative) solution just another name for a new solution, or is it a distinction between a good solution and a really good solution (i.e., the latter has more value than the former, given the goals being pursued)? In general, creativity as a facet of a problem-solving capability does not seem unique. Attempting to assess creative expertise as distinct from high-level expertise may not be a path well chosen.

Critical Thinking

Similar specification problems characterize the assessment of critical thinking, which has assumed rock-star construct status in education, training, and competency modeling (e.g., Galagan, 2010; Paul & Elder, 2006; Secretary's Commission on Achieving

Necessary Skills, 1999; Stice, 1987). Many, many definitions of critical thinking have been offered in a wide variety of contexts ranging from the Socratic tradition, to the constructivist perspective in educa- tion, to economic theory, to problem solving in the work role, to the value-added assessment of educa- tion, and to the scientific method itself. In all of these, critical thinking is regarded, explicitly or implicitly, as a state variable. That is, it is something to be learned. Moreover, it could be regarded as a cognitive capability or as a motivational disposition (i.e., people differ in the degree to which they want to think critically). Perhaps the former is a prerequi- site for the latter.

Setting aside those specifications that are so gen- eral as to be indistinguishable from thinking, prob- lem solving, or intelligence itself, the defining characteristic of critical thinking seems to be a dis- position to question the validity of any assertion about facts, events, ongoing processes, forecasts of the future, and so forth and to ask why the assertion was made. The form of the questioning (i.e., critical thinking) relies on the canons of rationality, logic, and the scientific method and on domain-specific knowledge. That is, to think critically is to always question the truth value of a statement (a disposi- tion) and to analyze (a cognitive capability) the basis on which the statement is made.

Such a specification invites a consideration of whether such a thing as a general critical thinking skill exists, or whether it must always be substan- tially domain specific. That is, is it even possible to talk about critical thinking independently of content domain? This is the same issue discussed earlier in the context of problem-solving capabilities and creativity.

The assessment of critical thinking is most often via rater judgment and less often by standardized tests (Ennis, 1985; Ewell, 1991; Steedle, Kugelmass, & Nemeth, 2010). One area of research that has confronted both the general versus domain-specific issue and rated versus tested assessment is the devel- opment of the value-added approach to the assess- ment of educational outcomes (Liu, 2011). This effort has been in progress for some 30 or more years but has surged recently as a means for assess- ing teacher effects (kindergarten–Grade 12) on

student achievement and the college–university effect on undergraduate learning (Klein, Freedman, Shavelson, & Bolus, 2008). The latter is perhaps more relevant and involves the assessment of gains on certain general skills—critical thinking being a major one—as a function of a college or university education. Three principal assessment systems are available (Banta, 2008): the Collegiate Assessment of Academic Proficiency from American College Testing, the Measure of Academic Proficiency and Progress from the Educational Testing Service, and the Collegiate Learning Assessment from the Council for Aid to Education. The first two have a multiple-choice format, but the third uses open- ended (i.e., written) responses to three scenarios involving (a) taking and justifying a particular posi- tion on an issue, (b) critiquing and evaluating a par- ticular position on an issue, and (c) performing the tasks in an in-basket simulation. The responses are scored by expert raters to yield scores on problem solving, analytic reasoning, critical thinking, and writing skills. The stated expectation is that the col- lege or university experience should increase such skills, and schools can be ranked in terms of the extent to which they do so (Klein et al., 2008). Research so far has suggested that scores on such measures do go up from freshman to senior status, but it has been difficult to extract more than one general factor, and the construct validity of the gen- eral factor has not been clearly established.
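As a hedged illustration of the value-added logic (a generic regression-adjustment sketch, not the scoring model of any of the three systems named above), a school's value added on, say, a critical-thinking measure can be expressed as

\[ \widehat{VA}_j = \bar{Y}_j - \widehat{E}\!\left[\,\bar{Y}_j \mid \bar{X}_j\,\right], \]

where \(\bar{Y}_j\) is school j's observed mean senior score on the outcome measure and \(\widehat{E}[\bar{Y}_j \mid \bar{X}_j]\) is the mean score predicted from the students' entering ability (e.g., admissions test scores); schools are then compared or ranked on \(\widehat{VA}_j\).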

The moral here is that for assessment purposes, problem solving, creativity, and critical thinking are complex and extremely difficult constructs to spec- ify. They are particularly difficult to specify in a domain-free context. Moreover, is the domain-free context even the most relevant for assessment in I/O psychology? These issues should not be approached in a cavalier fashion, such as listing them in a com- petency model without thorough specification.

Latent Structure of Knowledge and Skills (as Determinants of Performance)

For the assessment of individual differences in domain-specific knowledge and skills, a distinction can be made between the direct real-time knowledge and skills determinants of performance in a work role and the knowledge and skills requirements that

are assessed before being hired. The former might be assessed for diagnostic or developmental purposes and the latter for predictive purposes. However, the latter may also serve as a prerequisite for the former and, as asserted in a previous section, the latter (indirect) can only influence performance by influ- encing the former (direct).

In contrast to abilities, the substantive latent structure or structures of knowledge and skills have received scant attention. Part of the problem is sim- ply the almost limitless number of possibilities and the difficulty of choosing the appropriate levels of generality or specificity. That is, many, many knowl- edge and skills domains exist, and they may be sliced very coarsely or very finely.

Content-based knowledge taxonomies do exist. A relatively general one is included in O*NET and consists of 38 knowledge domains that are primarily focused on undergraduate curriculum areas (e.g., psychology, mathematics, philosophy, physics). As noted by Tippins and Hilton (2010), the knowledge requirements for many skilled trades, or technical specialties not requiring a college degree, do not seem to be represented. A taxonomy-like structure that does represent the non–bachelor's degree specialties is the compilation of technical school curricula known as the Classification of Instructional Programs maintained by the U.S. Department of Education.

Knowledge taxonomies specific to particular classes of occupations have also been developed via comprehensive job analysis efforts over a period of years by the U.S. Office of Personnel Management (2007). To date, they cover these classes of occupations:

■■ professional and administrative,
■■ clerical,
■■ technical,
■■ executive or leadership,
■■ information technology, and
■■ science and engineering.

Collectively, they are a part of the Office of Per- sonnel Management’s MOSAIC system and consti- tute a much more complete taxonomy of job knowledge requirements than the O*NET.

Portraying the taxonomic structure for direct and indirect skills requirements is even more

problematic than it is for knowledge. O*NET pro- vides a taxonomy of 35 skills that are defined as cross-occupational (i.e., not occupation specific) and that vary from the basic skills such as reading, writing, speaking, and mathematics, to interpersonal skills such as social perceptiveness, and to technical skills such as equipment selection and program- ming. As noted by Tippins and Hilton (2010), the O*NET skills are very general in nature and gener- ally lacking in specifications. Moreover, two of the 35 O*NET skills are complex problem solving and critical thinking, the limitations of which were dis- cussed earlier. Again, a wider set of more concretely specified skills are included in the Office of Person- nel Management’s MOSAIC system but only for cer- tain designated occupational groups.

Because the skills gap has been such a dominant topic in labor market analyses (e.g., Davenport, 2006; Galagan, 2010; Liberman, 2011), one might expect the skills gap literature to provide an array of substantive skills that are particularly critical for assessment. It generally does not. Virtually all skills gap information is obtained via employer surveys in response to items such as “To what extent are you experiencing a shortage of individuals with appro- priate technical skills?” However, the specific tech- nical skills in question are seldom, if ever, specified. Skills such as leadership, management, customer service, sales, information technology, and project management are as specific as it seems to get.

The purposes for which knowledge and skill assessments might be done are, of course, varied. It could be for selection, promotion, establishing needs for training and development, or certification and licensure—all from the organizational perspec- tive. From the individual perspective, it could be for purposes such as job search, career guidance, or self-managed training and education. For organiza- tional purposes, the lack of a taxonomic structure may not be a serious impediment. Organizations can develop their own specific measures to meet their needs, such as specific certification or licensure examinations. However, for individual job search or career planning purposes, the lack of a concrete and substantive taxonomic structure for skills presents problems. Without one, how do individuals navi- gate the skills domain when planning their own

education and training or matching themselves with job opportunities?

State Dispositions

By definition, and in contrast to trait dispositions, state dispositions are a class of independent variables that determines volitional choice behavior in a work setting but that can be changed as a result of changes in the individual's environment. Disposition-altering changes could be planned (e.g., training) or unplanned (e.g., peer feedback). A selected menu of such state dispositions follows.

Job Attitudes

There are many definitions of attitudes (Eagly & Chaiken, 1993), but one that seems inclusive stipulates that attitudes have three components: First, attitudes are centered on an object (e.g., Democrats, professional sports teams, the work you do); second, an attitude incorporates certain beliefs about the object (e.g., Democrats tax and spend, professional sports teams are interesting, the work you do is challenging); and third, on the basis of one's beliefs, one has an evaluative–affective response to the object (e.g., Democrats are no good, professional sports teams are worth subsidizing, you love the challenges in your job). The evaluative–affective reaction is what influences choice behavior (e.g., you vote Republican, you vote for tax subsidies for a professional sports stadium, you will work hard on your job for as long as you can).

Job satisfaction. The job attitude that has dominated both the I/O research literature and human resources practice is of course job satisfaction, which was discussed earlier in this chapter as a dependent variable. However, used as an independent variable, the correlation between job satisfaction and both performance and retention has been estimated literally hundreds of times (Hulin & Judge, 2003) using the same assessment procedures discussed previously, and the same issues apply (e.g., Weiss, 2002). In addition to job satisfaction, several other work attitudes have received attention for both research and application purposes.

Commitment. As an attitude, commitment in a work setting can take on any one of several different

objects, and it is possible to assess commitment to the organization, the immediate work group, an occupation or profession, one’s family or significant other, and entities outside of the work situation such as an avocation or civic responsibility. Beliefs about any one of these attitude objects could lead to positive or negative affect that influences decisions to commit effort for short- or long-term durations. The assessment issues revolve around the differen- tiation of attitude intensity across objects and the distinction between commitment to and satisfaction with. That is, measures of job satisfaction and orga- nizational commitment both yield significant cor- relations with turnover and performance (Hulin & Judge, 2003), but does one add incremental variance over the other (Crede, 2006)? Both the latent struc- ture of commitment and its distinctiveness from other attitudes are not settled issues.

Job involvement. Job involvement is variously characterized as a cognitive belief about the impor- tance of one’s work, the degree to which it satisfies individual needs of a certain kind (e.g., achieve- ment, belongingness), or the degree to which an individual’s self-identity is synonymous with the work he or she does (Brown, 1996; Kanungo, 1982; Lodahl & Kejner, 1965). Consequently, it should be related to job satisfaction, self-assessments of long- term performance, commitment to the occupation (but perhaps not the organization), and intentions to stay or leave.

Job engagement. Job engagement is currently a hot topic, as evidenced by at least two recent handbooks (Albrecht, 2010; Bakker & Leiter, 2010) and a major book (Macey, Schneider, Barbera, & Young, 2009). In their focal article in the journal Industrial and Organizational Psychology: Perspectives on Science and Practice, Macey and Schneider (2008) made a concerted attempt to define state engagement, which was characterized as an evaluative or affective state regarding one’s job that goes beyond simply being satisfied, committed, or involved and reflects the individual’s total passion and dedication for his or her work and a willingness to be totally immersed in it. The article elicited 13 quite varied responses that illustrated the major assessment issues with which such constructs must deal, in both research and

practice. For example, what is the latent structure of this construct? Is engagement a dispositional trait, an affective state, or a facet of performance itself? Do measures of engagement account for unique variance over and above satisfaction and commitment? Although managements tend to view engagement as an important construct (Masson, Royal, Agnew, & Fine, 2008; Vosburgh, 2008), its assessment must deal with the preceding issues. Christian, Garza, and Slaughter (2011) reported a meta-analysis that engages some of the issues. Although the number of studies is not great, and there is variation in the measures of engagement, the evidence is supportive of unique variance and some incremental predictive validity that could be attributed to engagement (for further discussion of job satisfaction and related job attitudes, see Chapter 37, this volume).

Motivational States

Again, in contrast to trait dispositions, such as the need for achievement, a class of more dynamic motivational states has become increasingly important, at least in the research literature, as determinants of choice behavior at work. Consider the following sections.

Self-efficacy and expectancy. The Bandurian notion of self-efficacy is the dominant construct here and is defined as an individual’s self-judgment about his or her relative capability for effective task performance or goal accomplishment (Bandura, 1982). Self-efficacy judgments are specific to par- ticular domains (e.g., statistical analysis, golf) and can change with experience or learning. Self-efficacy is similar to, but not the same as, Vroom’s (1964) definition of expectancy as it functions in his valence– instrumentality–expectancy model of motivated choice behavior. Expectancy is an individual’s per- sonal probability estimate that a particular level of effort will result in achieving a specific performance goal. It is very much intended as a within-person explanation for why individuals make the choices they do across time, even though it is most frequently used, mistakenly, as a between-persons assessment.

Instrumentality (risk) and valence (outcome value). From subjective expected utility to valence–instrumentality–expectancy theory (Vroom,

1964) to prospect theory (Kahneman & Tversky, 1979), the concepts of risk assessment and outcome value estimation are viewed as state determinants of choice behavior. Individuals want to minimize risk and maximize outcome value and will govern their actions accordingly. However, as noted in prospect theory, preference for risk levels and outcome values are discounted as a function of time. That is, indi- viduals will take on greater risk but value specific outcomes less the farther they are in the future. See Steel and Konig (2006) for an integrated summary of how such state dispositions influence choice behavior. Such considerations have not yet played a very large role in diagnostic assessment of the choice to perform, but perhaps they should.
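In schematic form (a common textbook rendering of these ideas rather than equations quoted from Vroom, 1964, or Steel and Konig, 2006), the motivational force behind a course of action and its discounting over time can be sketched as

\[ F = E \times \sum_j I_j V_j, \qquad U(t) \propto \frac{E \times V}{1 + \Gamma D}, \]

where \(E\) is the expectancy that effort will produce the performance goal, \(I_j\) is the instrumentality of that performance for outcome j, \(V_j\) is the valence of outcome j, \(D\) is the delay until the outcome is realized, and \(\Gamma\) is an individual sensitivity-to-delay (impulsiveness) parameter. Larger delays shrink the subjective utility \(U(t)\), which is the discounting effect described above.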

Core self-evaluation. Judge, Locke, Durham, and Kluger (1998) have done considerable work on a set of dispositions they referred to as core self-evaluations. The set consists of general self-efficacy (i.e., a self- assessment of competence virtually regardless of the domain), self-esteem, locus of control, and neuroti- cism. It is somewhat problematic as to whether these facets can be considered trait or state, but they have shown significant predictive validities ( Judge, Van Vianen, & DePater, 2004). Their distinction as sepa- rate facets is also not a settled issue and may depend on the specific measure involved (Ferris et al., 2011; Judge & Bono, 2001).

Mood and emotion. The dispositional effects of mood and emotion on work behavior have received increasing attention (Mitchell & Daniels, 2003; A. M. Schmidt, Dolis, & Tolli, 2009; Weiss & Rupp, 2011). Specifications for these constructs are not perfectly clear (Mitchell & Daniels, 2003), but in general, mood is defined as an affective state that is quite general, and emotion is usually specified as having a specific referent. That is, one’s mood is gen- erally bad or good, but the individual is emotional (positively or negatively) about specific things. Why assess such dispositional states? The dominant answer is that as state determinants of choice behav- ior, they help to explain the within-person variabil- ity in performance over relatively short periods of time (e.g., Beal et al., 2005). Also, as advocated by Weiss and Rupp (2011), the whole person cannot be assessed without a consideration of these states.

Things that are known to be unknown. A list of state determinants is probably not complete with- out noting that individuals are not aware of all of the determinants of their choice behavior (e.g., Bargh & Chartrand, 1999). That is, people make many choices, even at work, for which they cannot explain the antecedents. Apparently, the reasons for action are not in conscious awareness. Can they be recovered via some form of assessment? That has yet to be determined, but one avenue of investiga- tion concerns priming effects (Gollwitzer, Sheeran, Trotschel, & Webb, 2011).

Competencies (and Competency Modeling) So far, this chapter has avoided the question of whether competencies and competency modeling are, or are not, a distinct sector of the I/O psychol- ogy assessment landscape. That is, is competency modeling just knowledge, skills, abilities, and other characteristics (KSAOs) and job analysis by another name, or should it be set apart? Previous attempts to settle this question have been inconclusive (e.g., Sackett & Laczo, 2003; Schippman, 2010; Schippman et al., 2000). In a further attempt at clar- ity, Campion et al. (2011) outlined best practices in competency modeling and noted its most distinctive features, in the context of the following definition of competencies. That is, competencies are defined as individual KSAOs, or collections of KSAOs, that are needed for effective performance of the job in ques- tion. By this definition, competencies are determi- nants of performance, not performance itself. Unfortunately, Campion et al.’s most detailed exam- ple of a competency (p. 240) is of project manage- ment, the specifications for which seem to be a clear characterization of performance itself, such that the example is not consistent with the definition. The competency modeling literature has variously referred to knowledge, skills, abilities, personal qualities, performance capabilities, and many other things (e.g., attitudes, personality, motives) as com- petencies (Parry, 1996). In the aggregate, very little of the I/O psychology landscape is left out, and Clouseau’s dictum potentially complicates assess- ment—that is, if competencies are everything, then they risk being nothing.

Campion et al. (2011) attempted to keep that from happening by abstracting best practices and identifying what makes competency modeling unique. Perhaps their most salient points are the following:

■■ Ideally, competency models attempt to develop specifications for the levels of a competency that distinguish high performers from average or low performers. That is, to paraphrase, how do the performance capabilities of expert performers differ substantively from the performance capabilities of nonexpert performers, and what levels of knowledge, skills, and dispositional characteristics are required to exhibit expert performance levels? This is very different from conventional job analysis, which tries to identify the components (e.g., tasks, work activities) of performance and predict which KSAOs will be correlated (or linked) with them. However, competency modeling is similar to cognitive job analysis (Schraagen, Chipman, & Shalin, 2000), which asks how experts, when compared with novices, perform their jobs and what resources (e.g., knowledge, skills, strategies) they use to perform at that level. Cognitive job analysts and competency modelers should interact more. They have things in common.

■■ High-level subject matter experts (e.g., execu- tives) are used to first specify the substantive goals of the enterprise and then identify (to the best of their ability) both the performance and KSAO competencies at each organizational level that will best facilitate goal accomplishment. This is in contrast to conventional job analysis, which asks incumbents or analysts to rate the importance of KSAOs for performance in a target job, without reference to the enterprise’s goals. Supposedly, the incumbent or analyst subject matter experts (SMEs) have these in mind when making linkage judgments, but perhaps not.

■■ If competencies are specified as in the first bullet, the various components of the human resources system can more directly address enterprise objectives by focusing selection, training, and development on obtaining the most critical com- petencies. In some respects, competency model- ing is analogous to a needs analysis.

Even from this brief examination, it is apparent that competency modeling carries a heavy assess- ment burden. This burden is complicated by a resis- tance to taxonomic thinking and a desire to specify competencies in organizational language. These choices may aid in selling competency modeling to higher management, but they complicate specifica- tion for assessment. For example, how can previous theory and research in leadership be used to define and specify performance levels for leading with courage? Such a competency has a nice ring to it, but what does it mean? Tett, Guterman, Bleier, and Murphy (2000) attempted to address some of these issues by beginning with the research literature and conducting a systematic content analysis of pub- lished management competencies intended to reflect performance capabilities. On the basis of SME judg- ments, they identified 53 competencies grouped into 10 categories and attempted a definition of each of them. Although this effort represents a significant step in the right direction, a few of the competencies still seem more like personality characteristics than performance capabilities (e.g., orderliness, toler- ance). However, their juxtaposition of the SME- developed taxonomy derived from the literature against the competency lists from several private firms is interesting.

THE CONTEXT

So far, this chapter, in the interests of demonstrating the complexity of assessment in I/O psychology, has tried to outline the basic elements in the dependent and independent variable landscape that invite measurement. Because the concern is assessment, the complexities of research and practice focused on estimating the interrelationships among, or between, independent and dependent variables, on differential prediction across criteria, and on interactive effects are not addressed. These are questions that, although very critical, do not themselves change the measurement requirements for the variables involved.

However, I/O psychology does make a big deal of the influence of the context, or situation, on the interrelationships among variables. Such contextual variables are often referred to as moderators. Also,

the context can take on the status of an independent variable. For example, the organizational climate or culture might be hypothesized to influence individ- ual choice behavior. Consequently, it is sometimes important to assess the context itself. For example, Scott and Pearlman (2010) strongly made the case that assessment for organizational change must always deal with assessment of the context.

The literature on the assessment of the context is in fact very large. For example, in the course of developing the specifications for the O*NET database, two taxonomies were created, one for the work (job) context (Strong, Jeanneret, McPhail, Blakley, & D'Egidio, 1999) and one for the organizational context (Arad, Hanson, & Schneider, 1999). Both are multilevel hierarchical taxonomies. The work context is portrayed as having 39 first-order factors and 10 second-order factors, such as how people in work roles communicate, the position's environmental conditions, the criticality of the work role, and the pace of the work. The organizational context is reflected by 41 first-order dimensions and seven second-order factors, such as organizational structure, organizational culture, and goals.

Not surprisingly, because they were based on extensive literature searches, the O*NET's work and organizational context taxonomies subsume much of the literature on organizational culture and climate (e.g., James & Jones, 1974; Ostroff, Kinicki, & Tamkins, 2003), organization development (J. R. Austin & Bartunek, 2003), and work design (J. R. Edwards, Scully, & Bartek, 1999; Morgeson & Campion, 2003). Within O*NET, the context is assessed via job incumbent ratings. Although a detailed examination of the context literature cannot be presented here, the major features of the context that dominate the need for assessment, and the issues that assessment of the context creates, seem in the author's opinion to be as follows.

1. the features of the work context that are identified as rewarding or need fulfilling, such as the 20 potential reinforcers assessed by the instrumentation of the Minnesota theory of work adjustment or the five job characteristics specified by Hackman and Oldham's (1976) Job Diagnostic Survey;

2. the full range of performance feedback provided by the job and organizational context;

3. the nature and quality of the components of the organization’s human resources system such as selection procedures, compensation practices, and training opportunities;

4. the nature of the organization’s operating goals, such as those resulting from an application of Pritchard’s productivity measurement system (Pritchard, Holling, Lammers, & Clark, 2002);

5. leadership emphasis, in terms of whether it is directive versus participative, formalized versus informal, or centralized versus decentralized;

6. the complexity and variety of the technologies used by the organization;

7. the relative criticality or importance of specific jobs, positions, or roles;

8. the level of conflict among work roles or units;

9. the relative pace of work in terms of the characteristic levels of effort, intensity, and influences of deadlines;

10. the physical nature of the environment (e.g., temperature, illumination, toxicity); and

11. the organizational climate and culture.

Number 11 perhaps deserves special mention. The assessment of organizational climate and culture is an important topic in I/O psychology and has a long history (James & Jones, 1974; Lewin, 1951; Litwin & Stringer, 1968; Trice & Beyer, 1993). However, developing clear specifications for what constitutes organizational culture and climate has proven elusive (Denison, 1996; Ostroff et al., 2003; Verbeke, Volgering, & Hessels, 1998). Verbeke et al. (1998) surveyed the published literature and identified 32 distinct definitions of climate and 54 definitions of culture. However, a not-uncommon distinction is as follows.

Organizational culture refers to the informal rules, expectations, and norms that govern behavior, in addition to written policies, that are both relatively stable and widely perceived. Organizational climate generally refers to individual perceptions of the impact of the work environment on individual well-being (e.g., see James & Jones, 1974). By convention, psychological climate refers to each individual's judgment, whereas organizational climate refers to the aggregate (e.g., mean) judgment across individuals.

Besides the definitional problems, the assessment of culture and climate must deal with at least the following issues. First, to what unit are culture or climate referenced? Is it work group, department, division, or organizational climate or culture? Second, are individuals asked to provide their own individual judgments about the nature of the climate or culture or to predict the judgments of other organizational members? With either method, the construct of culture and climate requires some degree of consensus or agreement among individuals, but how much? Finally, is there a genuine latent structure of distinctive subfactors for culture and climate, or should both climate and culture be tied to any number of specific referents that would not necessarily constitute a taxonomy of latent dimensions? James and Jones (1974) argued for the former, and Schneider (1990) argued for the latter. Standardized survey questionnaires do exist for culture (e.g., Cooke & Rousseau, 1988) and for climate (Ostroff, 1993), and they tend to yield stable factor structures. Some evidence also exists for a general climate factor that seems to represent the overall psychological safety and meaningfulness of the work environment (Brown & Leigh, 1996). The bottom line is that any attempt to assess organizational culture and climate, either as moderator variables or as independent variables in their own right, must address these issues. As always, settling specification and assessment issues must come before considering what mediates the relationship between culture or climate and something else, or the boxes, arrows, and path coefficients have little meaning.
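
To make the aggregation and consensus questions concrete, one common convention (offered here only as an illustrative sketch, not as part of the definitions above) is to treat the unit-level climate score as the mean of the individual psychological climate judgments and to index consensus with a within-group agreement statistic, often denoted r_wg:

\bar{X}_g = \frac{1}{n_g} \sum_{i=1}^{n_g} X_{ig}, \qquad r_{wg} = 1 - \frac{s_g^2}{\sigma_E^2},

where X_{ig} is individual i's climate judgment in unit g, s_g^2 is the observed within-unit variance, and \sigma_E^2 is the variance expected if responses showed no agreement at all (e.g., a uniform distribution across the response options). Values of r_wg near 1.0 indicate the consensus that aggregation presupposes; how much agreement is enough remains the judgment call noted above.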

Psychometric Landscape

Many features of the psychometric landscape, as they pertain to measurement and assessment in I/O psychology, are well known and have not been discussed, yet again, in this chapter. For several assessment purposes, psychologists are governed by the Standards for Educational and Psychological Testing (American Educational Research Association [AERA], American Psychological Association [APA], and National Council on Measurement in Education [NCME], 1999) and the Society for Industrial and Organizational Psychology's (2003) Principles, and all professionals are familiar with them. Also, all appropriate professionals should be familiar with the development of measurement theory beyond the confines of Spearman's (1904) classic model of true and error scores, which becomes a special case of the generalizability model (e.g., Putka & Sackett, 2010), and with the basics of item response theory (IRT) as well (Embretson & Reise, 2000).

The most important principle from the psychometric landscape is that all assessment, whether for research, practice, or high-stakes decision making, must have evidence-based validity for the purpose or purposes for which it is to be used. This principle is as true for asserting that self-efficacy is being measured in a research study or for stating that critical thinking is a required competency in a competency model as it is for using a personality measure in high-stakes personnel selection. A large literature is also available on what kinds of evidence support the various purposes for which assessment is done (e.g., see AERA et al., 1999; Farr & Tippins, 2010; McPhail, 2007; Scott & Reynolds, 2010). This literature should be part of all I/O psychologists' expert knowledge base and is discussed in Chapter 4 of this volume. However, for a somewhat contrarian view, see Borsboom, Mellenbergh, and van Heerden (2003) and Borsboom (2006).

There are also some less talked-about issues that readers should think about. The first challenges the very existence of applied psychology. It comes primarily from the work of Michell (1999, 2000, 2008) and others (e.g., Kline, 1997) who asserted that psychometrics (i.e., measurement in psychology) is a pathological science. For them, measurement in psychology is pathological for two reasons. First, virtually all constructs studied or used in psychology are not quantitative but are simply assumed to be so without further justification. In this context, being quantitative essentially means that the scores representing individual differences on a variable constitute at least an interval scale. Second, the lack of justification for the assumption of such scale properties is kept hidden (i.e., never mentioned). Consequently, what can be inferred about individual differences on nonquantitative variables, and what do the relationships (e.g., correlations) among such variables actually mean? For example, if psychologists assess training effects by administering an achievement test before and after training and report that training produced a gain of 0.5 standard deviations (i.e., d = 0.50), what does that mean in terms of what or how much was learned? If neither job satisfaction nor job performance is assessed on a scale with at least interval properties, what does an intercorrelation of .35 mean?
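
For readers who want the arithmetic behind these examples, the quantities themselves are computed in the usual way,

d = \frac{\bar{X}_{\text{post}} - \bar{X}_{\text{pre}}}{s_{\text{pooled}}}, \qquad r_{xy} = \frac{\mathrm{cov}(x, y)}{s_x s_y},

and the computations are not in dispute. Michell's point is that interpreting d = 0.50 as half a standard deviation of learning, or r = .35 as a statement about proportional covariation between satisfaction and performance, presupposes interval-level scales that have rarely been demonstrated.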

This issue is an old one and goes back to the bifurcation between Stevens (1946), who asserted that measurement is the assignment of numbers to individuals according to rules, and Luce and Tukey (1964), who counterargued for a conjoint measurement model that requires interval scales with additive properties. The current version of the argument is discussed in a series of articles by Michell (2000, 2008), Borsboom and Mellenbergh (2004), and Embretson (2006).

Everyone would probably admit that psychologists seldom deal with interval-property measurement, and the response to the accusation of pathology could be one of four kinds. First, it might be argued that ordinal scales are adequate for many important assessment purposes (e.g., top–down selection). Second, the purpose of assessment may not be to scale individuals but to provide developmental feedback. Third, it might be argued that many of the variables psychologists study are quantitative because, when the same variable is measured with different instruments, the results are the same; the assumption has just not been explicitly tested. Fourth, one could argue that psychologists do, on occasion, assess people quantitatively, as in criterion-referenced measurement (Cizek, 2001) or when using IRT models (Embretson, 2006). Borsboom and Mellenbergh (2004) argued that the Rasch model (i.e., a one-parameter IRT model) is an essentially stochastic equivalent to the deterministic conjoint measurement model, because it simultaneously scales both items and individuals on the same scale (i.e., theta).
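
For concreteness, the one-parameter (Rasch) IRT model referred to above can be written as

P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)},

where \theta_i is person i's standing on the latent variable and b_j is item j's difficulty (location). Because only the difference \theta_i - b_j enters the model, persons and items are placed on the same scale, which is the sense in which Borsboom and Mellenbergh (2004) treated the Rasch model as a stochastic counterpart of deterministic conjoint measurement.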

The preceding issue is related to the recent discussion of dominance versus ideal-point scaling for attitude and personality assessment (Drasgow, Chernyshenko, & Stark, 2010). In psychology, these two scaling procedures are credited to Likert (1932) and Thurstone (1928), respectively. Thurstone scaling does provide information about the relative size of the intervals between scores on the attitude–personality continuum. Drasgow et al. (2010) argued persuasively that embedding ideal-point scaling in an IRT model overcomes some of the previous difficulties in Thurstone's scaling and results in a more quantitatively scaled variable. This application has also been used for performance assessment (Borman et al., 2001) via computer-adaptive rating scales.
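
The contrast between the two scaling traditions can be stated schematically; the following functional forms are illustrative only and are not the specific models Drasgow et al. (2010) advocated. Under a dominance (Likert-type) response process, the probability of endorsing an item rises monotonically with the latent trait \theta, whereas under an ideal-point (Thurstone-type) process endorsement is highest when \theta is closest to the item's location \delta and falls off in both directions:

P_{\text{dominance}} = \frac{1}{1 + \exp[-a(\theta - \delta)]}, \qquad P_{\text{ideal point}} \propto \exp[-(\theta - \delta)^2].

One practical consequence is that, under an ideal-point model, disagreement with a moderately worded item can come from either end of the continuum, which is why simply summing Likert-keyed responses can misrepresent respondents located above the item.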

Another measurement-related criticism of assessment in I/O psychology is that the field has seemed to show little interest in test taking as a cognitive process. That is, I/O psychologists do not ask questions about how a test taker decides on a particular response and cannot give a cognitive account of the processes involved (e.g., Mislevy, 2008). The implication is that two individuals may arrive at the same response in different ways (Mislevy & Verhelst, 1990), which in turn implies that their scores do not mean the same thing. This criticism is most often made in the context of ability or achievement testing, but it could also be directed at attitude measurement, linking judgments in job analysis, assessor ratings in assessment centers, and performance ratings in general.

A final issue, and perhaps the most important one, concerns how the structure of the various domains of dependent, independent, and situational variables should be modeled. A very thorough and sophisticated treatment of latent and observed structures was provided by Borsboom et al. (2003). They discussed three distinct ways to model the covariance structure of a set of observed scores as a function of latent variables. In the first model, latent variables are constructs that cause responses to operational measures but are not equivalent to them. For example, general mental ability is a latent variable that most surely has neurological substrates, as yet unknown, that were formed by heredity, experience, and their interaction. The existence of the latent variable is inferred from the covariances of the measures constructed to measure it. The observed covariances using a variety of such measures always yield a general factor. Corrected for attenuation, the intercorrelations of scores on the general factor, when obtained from independent sets of tests, approach unity. This example is the clearest case of a real latent variable. Also, it does not preclude the existence of subfactors (e.g., verbal, quantitative) that yield highly predictable covariances among observed scores. The latent structure of other trait domains is not quite as clear, at least not yet, but the evidence is sufficient to suggest that such a latent structure exists for some of them, such as personality and interests. In fact, much of the work on the latent structure of personality is an attempt to map the biological substrates of the factors (DeYoung, 2006).
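
The disattenuation referred to above is the classic Spearman correction: given an observed correlation r_{xy} between two general-factor scores and their respective reliabilities r_{xx} and r_{yy}, the estimated correlation between the underlying true scores is

r_{T_x T_y} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}.

The claim in the text is that, for the general cognitive factor, this corrected value approaches 1.0 when the factor is estimated from independent sets of tests (e.g., Johnson, Nijenhuis, & Bouchard, 2008).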

A second model, at the other extreme, is to assert that observed factor scores are nothing more than the sum of the individual scores (i.e., items, tests, ratings) that compose them. Borsboom et al.'s (2003) example is from sociology. Suppose, for an individual, socioeconomic status is taken as the sum of income level, education level, and home value. There is no latent variable labeled socioeconomic status that determines income, education, and home value. It can only be defined in terms of the three operational measures. That is not to say that the sum score labeled socioeconomic status is not valuable; however, it does represent a different model that cannot be used in the same way as a substantive latent trait model. Consequently, every time socioeconomic status is used as a label for a sum score, the specific measures being aggregated must be spelled out. If there are correlations among the specific measures, they must be explained by common determinants from other domains (e.g., general mental ability and conscientiousness).
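
In the measurement-model literature, these first two models are often called reflective and formative, respectively, and the contrast can be written compactly (generic notation, not taken from Borsboom et al., 2003):

\text{Reflective: } x_k = \lambda_k \eta + \varepsilon_k, \qquad \text{Formative: } \eta = \sum_k w_k x_k.

In the socioeconomic status example, income, education, and home value are the x_k of the formative equation; nothing about \eta explains their covariation, which is why such a sum score cannot be used as if it were a substantive latent trait.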

The third model represents the attack of the postmodernists on the generally realist approach to research and practice that characterizes applied psychology (Boisot & McKelvey, 2010; P. Johnson & Cassell, 2001). That is, observed covariance structures are social constructions that result from how researchers or practitioners construct the way they observe organizational behavior. The postmodernists have asserted that assessment in research and practice cannot be independent of this personal psychology. Agreement on such social constructions results from the socialization and training processes in I/O psychology. There really is no such thing as an independent latent variable (construct) that determines the covariance structure of observed measures.

Models 2 and 3 are more similar to each other than they are to the first model, and for I/O psychology the basic issue is when Model 1 versus Model 2 should be invoked. Depending on the choice, the structural equations are different, and the analysis procedures are different (MacKenzie, Podsakoff, & Jarvis, 2005; Podsakoff, MacKenzie, Podsakoff, & Lee, 2003). Some additional implications of model choice are at least those described next.

If it is appropriate to model the trait determinants of performance (e.g., cognitive ability, personality, motives) as a function of latent variables, then it is appropriate, and necessary, to base assessment on the specifications for the latent variables. Constantly inventing new variables without reference to a known or specified latent structure is dysfunctional for research and practice.

In contrast to trait assessment, imposing a latent variable model on state assessment is more problematic. For example, are there skill and knowledge domains that can be specified well enough that testing and assessment can estimate a domain score that has surplus meaning beyond the sum of a particular set of item scores? This is one thing that made development of knowledge and skill taxonomies for O*NET difficult. However, IRT models provide a way of testing whether latent models are reasonable. A similar question could be asked about attitude or climate assessment. For example, are there general (latent?) dimensions of organizational climate, or should climate always be referenced to specific organizational activities or procedures? Also, what does a path analysis actually estimate if a latent variable model is not appropriate?

These considerations raise another obvious question. That is, what is the latent structure of performance itself, or is there one, and what is the impact of this issue on assessment? In this regard, some things are certain, some things are reasonably certain, and some are currently indeterminate. For certain, no single latent variable can be labeled as overall, or general, performance. Overall performance is simply a sum score of whatever measures are at hand. If overall performance is generated by a single rating scale labeled overall performance, then the rater must compute the sum score in his or her head, by whatever personal calculus he or she chooses to use, which may or may not be in conscious awareness. What about the general factor that emerges from the covariance matrix of virtually any set of observed performance scores after controlling for method variance (e.g., Viswesvaran, Schmidt, & Ones, 2005)? Such a factor could arise because trait determinants such as general mental ability and conscientiousness contribute to individual differences in virtually all performance measures. Consequently, if one believes the general factor is a latent variable, then it is reasonable to assert that a set of performance measures simply constitutes another measure of general mental ability. General mental ability is the latent variable. No one has yet given a substantive specification of the general factor in performance content terms. It is always specified as a sum score of specific dimensions.

After reviewing all extant research on performance as a construct, Campbell (2012) has argued for an eight-factor structure (discussed earlier in this chapter) that is invariant across work roles, organizational levels, and types of organizations. The status of each of the factors as a latent variable is a mixed bag. Certainly, the technical performance factor does not represent a latent variable. There is always a technical factor, but it must always be specified as a sum score of assessed performance levels on the specific technical responsibilities of the work role, and it might need to be summed over days, weeks, or years. In contrast, a case can be made, with varying degrees of empirical justification, for the latent variable status of the two subfactors of communication, for the Initiative–Effort factor, and for the subfactors of Counterproductive Work Behavior. Campbell (2012) also argued that the subfactors of leadership and management (shown in Exhibits 22.1 and 22.2) have appeared again and again in leadership research using a variety of measures, and it is reasonable to assert that they represent latent variables of performance. A recent integrative review and meta-analysis of research on trait and behavioral leadership models (DeRue, Nahrgang, Wellman, & Humphrey, 2011) is consistent with this view.

High-Stakes Assessment

As is the case for many other subfields, I/O psychology must deal with the assessment complexities of high-stakes testing. Selection for a job, for promotion, and for entry into educational or training programs are indeed high-stakes decisions. They make up a large and critical segment of the research and practice landscape in I/O psychology, and they significantly influence the lives of tens of millions of people. The complexities are intensified enormously by advances in digital technology and by the ethical, legal, and political environments that influence such decision making.

Each of these testing environment complexities (i.e., technological, ethical, legal, and political) has generated its own literature (cf. Farr & Tippins, 2010; Outtz, 2010). The issues include how to deal with unproctored Internet testing; what feedback to provide to test takers; determining the presence or absence of test bias; the currency of federal guidelines; the ethical responsibilities of I/O psychologists; and the efficacy of using changes in standardized test scores to evaluate the value added by teachers, school systems, and universities. Again, these high-stakes issues are simply part of the I/O psychology assessment landscape, and the field must deal with them as thoroughly and as directly as it can.

SOME FINAL (AT LAST) REMARKS

The basic theme of this chapter is the assertion that assessment in I/O psychology is very, very complex. Complexity refers to the sheer number of variables across the dependent, independent, and situational variable spectrums; the multidimensional nature of both the latent and the observed structures for each variable; the difficulties involved in developing the substantive specifications for each dimension and their covariance structures; the multiplicity of assessment purposes; the multiplicity of assessment methods; and the intense interaction between science and practice. The scientist–practitioner model still dominates, and that opens the door to the marketplace, high-stakes decision making, the individual versus organizational perspectives, and the attendant value judgments that elicit professional guidelines, governmental rule making, and litigation precedents, all of which have important and complex implications for assessment.


The future will become even more complex. The world of work itself becomes ever more complicated as technology, globalization, population growth, climate science, and competing political ideologies contrive to shape it. These forces will shape psychological assessment as well. For example, Embretson (2004) forecast measurement technologies for the 21st century that could barely be imagined a decade ago, and the assessment methods of neuroscience are now being adapted to focus on the neural antecedents of work performance (Parasuraman, 2011). Will future graduate training in I/O psychology include becoming familiar with neuroimaging methods (e.g., functional magnetic resonance imaging, event-related potential, and magnetoencephalography)? The short answer is yes. Identity crises (Ryan & Ford, 2010) aside, it is an interesting and intense time in I/O psychology. It will become even more so in the future, and I/O psychologists have much to contribute to the future of both science and practice.

To deal with this complexity more effectively, this chapter made the following basic points. First, given a particular assessment domain of interest, it is imperative to specify its constructs as completely and as carefully as possible and model its covariance structure as precisely as possible. If a new construct is proposed, the ways in which it fits into existing structures, or does not fit, should be specified. It is not in the best interests of research and practice to invent new labels for existing variables and imply that something new and different is being assessed or to propose new variables and let them float above the marketplace without specification and an evidence base. This is not an argument for never investigating anything new. It is an argument for careful specification and research-based assessment.

References

Ackerman, P. L. (1987). Individual differences in skill learning: An integration of psychometric and information processing perspectives. Psychological Bulletin, 102, 3–27. doi:10.1037/0033-2909.102.1.3

Ackerman, P. L. (1988). Determinants of individual differences during skill acquisition: Cognitive abilities and information processing. Journal of Experimental Psychology: General, 117, 288–318. doi:10.1037/0096-3445.117.3.288

Ackerman, P. L. (2000). Domain-specific knowledge as the “dark matter” of adult intelligence: Gf/Gc, personality and interest correlates. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 55, 69–84. doi:10.1093/geronb/55.2.P69

Ackerman, P. L., & Rolfhus, E. L. (1999). The locus of adult intelligence: Knowledge, abilities, and non- ability traits. Psychology and Aging, 14, 314–330. doi:10.1037/0882-7974.14.2.314

Albrecht, S. L. (Ed.). (2010). Handbook of employee engagement: Perspectives, issues, research and practice. Glos, England: Edward Elgar.

Alderfer, C. P. (1969). An empirical test of a new theory of human needs. Organizational Behavior and Human Performance, 4, 142–175. doi:10.1016/0030- 5073(69)90004-X

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for edu- cational and psychological testing (3rd ed.). Washington, DC: American Educational Research Association.

Anderson, J. R. (1987). Skill acquisition: Compilation of weak-method problem solutions. Psychological Review, 94, 192–210. doi:10.1037/0033- 295X.94.2.192

Arad, S., Hanson, M., & Schneider, R. J. (1999). Organizational context. In N. G. Peterson, M. D. Mumford, W. C. Borman, P. R. Jeanneret, & E. A. Fleishman (Eds.), An occupational information system for the 21st century: The development of O*NET (pp. 147–174). Washington, DC: American Psychological Association. doi:10.1037/10313-009

Austin, J. R., & Bartunek, J. M. (2003). Theories and prac- tices of organizational development. In W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds.), Handbook of psy- chology: Vol. 12. Industrial and organizational psychol- ogy (pp. 309–332). Hoboken, NJ: Wiley.

Austin, J. T., & Villanova, P. (1992). The criterion prob- lem: 1917–1992. Journal of Applied Psychology, 77, 836–874. doi:10.1037/0021-9010.77.6.836

Bakker, A. B., & Leiter, M. P. (Eds.). (2010). Work engagement: A handbook of essential theory and research. New York, NY: Psychology Press.

Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37, 122–147. doi:10.1037/0003-066X.37.2.122

Banta, T. W. (2008). Editor’s notes: Trying to clothe the emperor. Assessment Update, 20, 3–4, 15–16.

Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist, 54, 462–479. doi:10.1037/0003-066X.54.7.462

Bar-On, R. (1997). Bar-On Emotional Quotient Inventory: A measure of emotional intelligence. Toronto, Ontario, Canada: Multi-Health Systems.


Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1–26. doi:10.1111/j.1744-6570.1991.tb00688.x

Barrick, M. R., & Mount, M. K. (2005). Yes, personal- ity matters: Moving on to more important matters. Human Performance, 18, 359–372. doi:10.1207/ s15327043hup1804_3

Bartram, D. (2005). The great eight competen- cies: A criterion-centric approach to validation. Journal of Applied Psychology, 90, 1185–1203. doi:10.1037/0021-9010.90.6.1185

Baum, J. R., Bird, B. J., & Singh, S. (2011). The practical intelligence of entrepreneurs: Antecedents and a link with new venture growth. Personnel Psychology, 64, 397–425. doi:10.1111/j.1744-6570.2011.01214.x

Beal, D. J., Weiss, H. M., Barros, E., & MacDermid, S. M. (2005). An episodic process model of affective influ- ences on performance. Journal of Applied Psychology, 90, 1054–1068. doi:10.1037/0021-9010.90.6.1054

Bennett, R. J., & Robinson, S. L. (2000). Development of a measure of workplace deviance. Journal of Applied Psychology, 85, 349–360. doi:10.1037/0021- 9010.85.3.349

Bennett, W., Lance, C. E., & Woehr, D. J. (Eds.). (2006). Performance measurement: Current perspectives and future challenges. Mahwah, NJ: Erlbaum.

Berry, C. M., Ones, D. S., & Sackett, P. R. (2007). Interpersonal deviance, organizational deviance, and their common correlates: A review and meta- analysis. Journal of Applied Psychology, 92, 410–424. doi:10.1037/0021-9010.92.2.410

Boisot, M., & McKelvey, B. (2010). Integrating modern- ist and postmodernist perspectives on organiza- tions: A complexity science bridge. Academy of Management Review, 35, 415–433. doi:10.5465/ AMR.2010.51142028

Borman, W. C. (1987). Personal constructs, perfor- mance schemata, and “folk theories” of subor- dinate effectiveness: Explorations in an Army officer sample. Organizational Behavior and Human Decision Processes, 40, 307–322. doi:10.1016/0749- 5978(87)90018-5

Borman, W. C., & Brush, D. H. (1993). More prog- ress toward a taxonomy of managerial perfor- mance requirements. Human Performance, 6, 1–21. doi:10.1207/s15327043hup0601_1

Borman, W. C., Buck, D. E., Hanson, M. S., Motowidlo, S. J., Stark, S., & Drasgow, F. (2001). An examination of the comparative reliability, validity, and accuracy of performance ratings made using computerized adaptive rating scales. Journal of Applied Psychology, 86, 965–973. doi:10.1037/0021-9010.86.5.965

Borman, W. C., & Motowidlo, S. J. (1993). Expanding the criterion domain to include elements of contextual performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 71–98). San Francisco, CA: Jossey-Bass.

Borsboom, D. (2006). The attack of the psychometricians. Psychometrika, 71, 425–440. doi:10.1007/s11336- 006-1447-6

Borsboom, D., & Mellenbergh, G. J. (2004). Why psychometrics is not pathological: A comment on Michell. Theory and Psychology, 14, 105–120. doi:10.1177/0959354304040200

Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2003). The theoretical status of latent variables. Psychological Review, 110, 203–219. doi:10.1037/ 0033-295X.110.2.203

Bowers, D. G., & Seashore, W. E. (1966). Predicting organizational effectiveness with a four factor theory of leadership. Administrative Science Quarterly, 11, 238–263. doi:10.2307/2391247

Brown, S. P. (1996). A meta-analysis and review of orga- nizational research on job involvement. Psychological Bulletin, 120, 235–255. doi:10.1037/0033- 2909.120.2.235

Brown, S. P., & Leigh, T. W. (1996). A new look at psychological climate and its relationship to job involvement, effort, and performance. Journal of Applied Psychology, 81, 358–368. doi:10.1037/0021- 9010.81.4.358

Cameron, K. S., & Quinn, R. E. (1999). Diagnosing and changing organizational culture: Based on the compet- ing values framework. Reading, MA: Addison-Wesley.

Campbell, J. P. (1977). On the nature of organizational effectiveness. In P. S. Goodman & J. M. Pennings (Eds.), New perspectives on organizational effective- ness (pp. 13–55). San Francisco, CA: Jossey-Bass.

Campbell, J. P. (2007). Profiting from history. In L. L. Koppes, P. W. Thayer, A. J. Vinchur, & E. Salas (Eds.), Historical perspectives in industrial and orga- nizational psychology (pp. 441–457). Mahwah, NJ: Erlbaum.

Campbell, J. P. (2012). Behavior, performance, and effec- tiveness in the 21st century. In S. Kozlowski (Ed.), Oxford handbook of industrial and organizational psychology (pp. 159–194). New York, NY: Oxford University Press.

Campbell, J. P., & Knapp, D. (2001). Exploring the limits of personnel selection and classification. Hillsdale, NJ: Erlbaum.

Campbell, J. P., & Kuncel, N. R. (2001). Individual and team training. In N. Anderson, D. S. Ones, H. K. Sinangil, & C. Viswesvaran (Eds.), Handbook of work and organizational psychology (pp. 278–312). London, England: Blackwell.

Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1993). A theory of performance. In N. Schmitt & W. C. Borman (Eds.), Frontiers in industrial/organizational psychology: Personnel selection and classification (pp. 35–71). San Francisco, CA: Jossey-Bass.

Campion, M. S., Fink, A. A., Ruggeberg, B. J., Carr, L., Phillips, G. M., & Odman, R. B. (2011). Doing competencies well: Best practices in competency modeling. Personnel Psychology, 64, 225–262. doi:10.1111/j.1744-6570.2010.01207.x

Carlson, K. D. (1997). Impact of instructional strategy on training effectiveness. Unpublished doctoral disserta- tion, University of Iowa, Iowa City.

Carroll, J. B. (1993). Human cognitive abilities: A sur- vey of factor-analytic studies. Cambridge, England: Cambridge University Press. doi:10.1017/ CBO9780511571312

Carroll, J. B. (2003). The higher-stratum structure of cognitive abilities: Current evidence supports g and about 10 broad factors. In N. Nyborg (Ed.), The sci- entific study of general intelligence: Tribute to Arthur R. Jensen (pp. 5–21). Amsterdam, the Netherlands: Pergamon Press.

Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston, MA: Houghton Mifflin.

Chan, D. (2010). Values, styles, and motivational con- structs. In J. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 321–337). New York, NY: Routledge.

Christian, M. S., Garza, A. S., & Slaughter, J. E. (2011). Work engagement: A quantitative review and test of its relations with task and contextual performance. Personnel Psychology, 64, 89–136. doi:10.1111/ j.1744-6570.2010.01203.x

Cizek, G. J. (Ed.). (2001). Setting performance standards: Concepts, methods, and perspectives. Mahwah, NJ: Erlbaum.

Cleveland, J. N., & Colella, A. (2010). Employee work- related health, stress, and safety. In J. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 531–550). New York, NY: Routledge.

Colquitt, J. A. (2001). On the dimensionality of orga- nizational justice: A construct validation of a mea- sure. Journal of Applied Psychology, 86, 386–400. doi:10.1037/0021-9010.86.3.386

Colquitt, J. A., Conlon, D. E., Wesson, M. J., Porter, C. O., & Ng, K. Y. (2001). Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology, 86, 425–445. doi:10.1037/0021-9010.86.3.425

Conway, J. M. (1998). Understanding method vari- ance in multitrait–multirater performance appraisal matrices: Examples using general impressions and interpersonal effect as measured method fac- tors. Human Performance, 11, 29–55. doi:10.1207/ s15327043hup1101_2

Conway, J. M., & Huffcut, A. L. (1997). Psychometric properties of multisource performance ratings: A meta-analysis of subordinate, supervisor, peer, and self-ratings. Human Performance, 10, 331–360. doi:10.1207/s15327043hup1004_2

Cooke, R. A., & Rousseau, D. M. (1988). Behavioral norms and expectations: A quantitative approach to the assessment of organizational culture. Group and Organization Management, 13, 245–273. doi:10.1177/105960118801300302

Cooper, C. L., Dewe, P., & O’Driscoll, M. P. (2001). Organizational stress: A review and critique of theory, research, and applications. Thousand Oaks, CA: Sage.

Costa, P. T., Jr., & McRae, R. R. (1992). Revised NEO Personality Inventory (NEO PI–R) and NEO Five- Factor Inventory (NEO FFI) professional manual. Odessa, FL: Psychological Assessment Resources.

Crede, M. (2006). Job attitude and job evaluation: Examining construct-measurement discrepancies. Unpublished doctoral dissertation, University of Illinois at Urbana–Champaign.

Cronbach, L. J., & Gleser, G. C. (1965). Psychological tests and personnel decisions (2nd ed.). Urbana: University of Illinois Press.

Dalal, R. S. (2005). A meta-analysis of the relationship between organizational citizenship behavior and counterproductive work behavior. Journal of Applied Psychology, 90, 1241–1255.

Davenport, R. (2006). Eliminate the skills gap. Training and Development, 60, 27–32.

Dawis, R. V., Dohm, T. E., Lofquist, L. H., Chartrand, J. M., & Due, A. M. (1987). Minnesota Occupational Classification System III: A psychological taxonomy of work. Minneapolis: University of Minnesota, Department of Psychology, Vocational Psychology Research.

Dawis, R. V., & Lofquist, L. H. (1984). A psychological theory of work adjustment. Minneapolis: University of Minnesota Press.

Deadrick, D. L., Bennett, N., & Russell, C. J. (1997). Using hierarchical linear modeling to examine dynamic performance criteria over time. Journal of Management, 23, 745–757. doi:10.1177/014920639 702300603

Denison, D. R. (1996). What is the difference between organizational culture and organizational climate? A native’s point of view on a decade of paradigm wars. Academy of Management Review, 21, 619–654.

DeRue, D. S., Nahrgang, J. D., Wellman, N., & Humphrey, S. E. (2011). Trait and behavioral theo- ries of leadership: An integration and meta-analytic test of their relative validity. Personnel Psychology, 64, 7–52. doi:10.1111/j.1744-6570.2010.01201.x


DeShon, R. P., & Gillespie, J. Z. (2005). A moti- vated action theory account of goal orientation. Journal of Applied Psychology, 90, 1096–1127. doi:10.1037/0021-9010.90.6.1096

DeYoung, C. G. (2006). Higher-order factors of the Big Five in a multi-informant sample. Journal of Personality and Social Psychology, 91, 1138–1151. doi:10.1037/0022-3514.91.6.1138

DeYoung, C. G., Hirsh, J. B., Shane, M. S., Papademetris, X., Rajeevan, N., & Gray, J. R. (2010). Testing predictions from personality neuroscience: Brain structure and the Big Five. Psychological Science, 21, 820–828. doi:10.1177/0956797610370159

DeYoung, C. G., Quilty, L. C., & Peterson, J. B. (2007). Between facets and domains: 10 aspects of the Big Five. Journal of Personality and Social Psychology, 93, 880–896. doi:10.1037/0022-3514.93.5.880

Dilchert, S. (2008). Measurement and prediction of cre- ativity at work. Unpublished doctoral dissertation, University of Minnesota, Minneapolis.

Drasgow, F., Chernyshenko, O. S., & Stark, S. (2010). 75 years after Likert: Thurstone was right! Industrial and Organizational Psychology: Perspectives on Science and Practice, 3, 465–476. doi:10.1111/j.1754- 9434.2010.01273.x

Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41, 1040–1048. doi:10.1037/0003-066X.41.10.1040

Eagley, A. H., & Chaiken, S. (1993). The psychology of attitudes. New York, NY: Wadsworth.

Edwards, J. E., & Rothbard, N. P. (2000). Mechanisms linking work and family: Clarifying the relationship between work and family constructs. Academy of Management Review, 25, 178–199.

Edwards, J. R., Scully, J. S., & Bartek, M. D. (1999). The measurement of work: Hierarchical repre- sentation of the multimethod job design ques- tionnaire. Personnel Psychology, 52, 305–334. doi:10.1111/j.1744-6570.1999.tb00163.x

Elliot, A. J., & Thrash, T. M. (2002). Approach– avoidance motivation in personality: Approach and avoidance temperaments and goals. Journal of Personality and Social Psychology, 82, 804–818. doi:10.1037/0022-3514.82.5.804

Elliott, E. S., & Dweck, C. S. (1988). Goals: An approach to motivation and achievement. Journal of Personality and Social Psychology, 54, 5–12. doi:10.1037/0022- 3514.54.1.5

Embretson, S. E. (2004). The second century of ability testing: Some new predictions and speculations. Measurement, 2, 1–32.

Embretson, S. E. (2006). The continued search for nonar- bitrary metrics in psychology. American Psychologist, 61, 50–55. doi:10.1037/0003-066X.61.1.50

Embretson, S. E., & Reise, S. P. (Eds.). (2000). Item response theory for psychologists. Mahwah, NJ: Erlbaum.

Ennis, R. H. (1985). A logical basis for measuring critical thinking skills. Educational Leadership, 43, 44–48.

Ericsson, K. S., Charness, N., Feltovich, P. J., & Hoffman, R. R. (Eds.). (2006). The Cambridge handbook of expertise and expert performance. New York, NY: Cambridge University Press.

Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278. doi:10.1146/ annurev.psych.59.103006.093629

Ewell, P. T. (1991). To capture the ineffable: New forms of assessment in higher education. Review of Research in Education, 17, 75–125.

Eysenck, H. J. (1967). The biological basis of personality. Springfield, IL: Charles C Thomas.

Farr, J. L., & Tippins, N. T. (2010). Handbook of employee selection. New York, NY: Routledge.

Ferris, L. D., Rosen, C. R., Johnson, R. E., Brown, D. J., Risavy, S. D., & Heller, D. (2011). Approach or avoidance (or both)? Integrating core self- evaluations within an approach/avoidance framework. Personnel Psychology, 64, 137–161. doi:10.1111/j.1744-6570.2010.01204.x

Fleishman, E. A. (1964). The structure and measurement of physical fitness. Englewood Cliffs, NJ: Prentice Hall.

Fleishman, E. A., & Quaintance, M. K. (1984). Taxonomies of human performance: The description of human tasks. New York, NY: Academic Press.

Fleishman, E. A., & Reilly, M. E. (1992). Handbook of human abilities: Definitions, measurements, and job task requirements. Bethesda, MD: Management Research Institute.

Gable, S. L., Reis, H. T., & Elliot, A. J. (2003). Evidence for bivariate systems: An empirical test of appetition and aversion across domains. Journal of Research in Personality, 37, 349–372. doi:10.1016/S0092- 6566(02)00580-9

Galagan, P. (2010). Bridging the skills gap: New factors compound the growing skills shortage. Alexandria, VA: American Society for Training and Development.

Gebhardt, D. L., & Baker, T. A. (2010). Physical per- formance tests. In J. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 277–298). New York, NY: Routledge.

George, J. M. (2007). Creativity in organizations. Academy of Management Annals, 1, 439–477. doi:10.1080/078559814

Ghiselli, E. E. (1966). The validity of occupational aptitude tests. New York, NY: Wiley.


Gollwitzer, P. M., Sheeran, P., Trotschel, R., & Webb, T. L. (2011). Self-regulation of priming effects on behavior. Psychological Science, 22, 901–907. doi:10.1177/0956797611411586

Goodman, P. S., Devadas, R., & Griffith-Hughson, T. L. (1988). Groups and productivity: Analyzing the effectiveness of self-management teams. In J. P. Campbell, R. J. Campbell, & Associates (Eds.), Productivity in organizations: New perspectives from industrial and organizational psychology (pp. 295– 327). San Francisco: Jossey-Bass.

Gottfredson, L. S. (2003). Dissecting practical intelligence theory: Its claims and evidence. Intelligence, 31, 343–397. doi:10.1016/S0160-2896(02)00085-5

Greenhaus, J. H., & Powell, G. N. (2006). When work and family are allies: A theory of work-family enrich- ment. Academy of Management Review, 31, 72–92. doi:10.5465/AMR.1985.4277352

Griffeth, R. W., Hom, P. W., & Gaertner, S. (2000). A meta-analysis of antecedents and correlates of employee turnover: Updated moderator tests, and research implications for the next millen- nium. Journal of Management, 26, 463–488. doi:10.1177/014920630002600305

Griffin, M. S., Neal, A., & Parker, S. K. (2007). A new model of work role performance: Positive behavior in uncertain and interdependent contexts. Academy of Management Journal, 50, 327–347. doi:10.5465/ AMJ.2007.24634438

Gruys, M. L., & Sackett, P. R. (2003). Investigating the dimensionality of counterproductive work behavior. International Journal of Selection and Assessment, 11, 30–42. doi:10.1111/1468-2389.00224

Grzywacz, J. G., & Carlson, D. S. (2007). Conceptual- izing work-family balance: Implications for prac- tice and research. Advances in Developing Human Resources, 9, 455–471. doi:10.1177/152342230 7305487

Guion, R. M., & Gottier, R. F. (1965). Validity of personality measures in personnel selec- tion. Personnel Psychology, 18, 135–164. doi:10.1111/j.1744-6570.1965.tb00273.x

Hackman, J. R. (1992). Group influences on individuals in organizations. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (Vol. 3, pp. 199–267). Palo Alto, CA: Consulting Psychologists Press.

Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance, 16, 250–279. doi:10.1016/0030-5073(76)90016-7

Harmon, L. W., Hansen, J. C., Borgen, F. H., & Hammer, A. L. (1994). Strong Interest Inventory: Applications and technical guide. Stanford, CA: Stanford University Press.

Harnois, G., & Gabriel, P. (2000). Mental health and work: Impact, issues and good practices. Geneva, Switzerland: International Labour Organisation.

Herzberg, F. (1959). The motivation to work. New York, NY: Wiley.

Hesketh, B., & Neal, A. (1999). Technology and perfor- mance. In D. R. Ilgen & E. D. Pulakos (Eds.), The changing nature of performance: Implications for staff- ing, motivation, and development (pp. 21–55). San Francisco, CA: Jossey-Bass.

Hobfoll, S. E. (1998). Stress, culture, and community: The psychology and physiology of stress. New York, NY: Plenum Press.

Hofmann, D. A., Jacobs, R., & Gerras, S. J. (1992). Mapping individual performance over time. Journal of Applied Psychology, 77, 185–195. doi:10.1037/ 0021-9010.77.2.185

Hogan, J. (1991). Structure of physical performance in occupational tasks. Journal of Applied Psychology, 76, 495–507. doi:10.1037/0021-9010.76.4.495

Hogan, R., & Kaiser, R. B. (2010). Personality. In J. C. Scott & D. H. Reynolds (Eds.), Handbook of work- place assessment: Evidence-based practices for selecting and developing organizational talent (pp. 81–108). San Francisco, CA: Jossey-Bass.

Holland, J. L. (1994). The Self-Directed Search: Professional manual. Odessa, FL: Psychological Assessment Resources.

Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed.). Odessa, FL: Psychological Assessment Resources.

Hoppock, R. (1935). Job satisfaction. New York, NY: Harper.

Horn, J. L. (1989). Cognitive diversity: A framework of learning. In P. L. Ackerman, R. J. Sternberg, & R. Glaser (Eds.), Learning and individual differences (pp. 61–116). New York, NY: Freeman.

Hough, L., & Dilchert, S. (2010). Personality: Its mea- surement and validity for employee selection. In J. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 299–319). New York, NY: Routledge.

Hough, L. M., & Ones, D. S. (2001). The structure, mea- surement, validity, and use of personality variables in industrial, work, and organizational psychology. In N. Anderson, D. S. Ones, H. K. Sinangil, & C. Viswesvaran (Eds.), Handbook of industrial, work, and organizational psychology (pp. 233–277). Thousand Oaks, CA: Sage.

Hulin, C. L., & Judge, T. A. (2003). Job attitudes. In W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds.), Handbook of psychology: Vol. 12. Industrial and orga- nizational psychology (pp. 255–276). Hoboken, NJ: Wiley.


Hunt, J. G. (1999). Transformational/charismatic leader- ship’s transformation of the field: An historical essay. Leadership Quarterly, 10, 129–144. doi:10.1016/ S1048-9843(99)00015-6

Ilgen, D. R., Hollenbeck, J. R., Johnson, M., & Jundt, D. (2005). Teams in organizations: From input-process- output models to IMOI models. Annual Review of Psychology, 56, 517–543. doi:10.1146/annurev. psych.56.091103.070250

James, L. R., & Jones, A. P. (1974). Organizational cli- mate: A review of theory and research. Psychological Bulletin, 81, 1096–1112. doi:10.1037/h0037511

Johnson, P., & Cassell, C. (2001). Epistemology and work psychology: New agendas. Journal of Occupational and Organizational Psychology, 74, 125–143. doi:10.1348/096317901167280

Johnson, W., & Bouchard, T. J. (2005). The structure of human intelligence: It is verbal, perceptual, and image rotation (VPR), not fluid and crystal- lized. Intelligence, 33, 393–416. doi:10.1016/j. intell.2004.12.002

Johnson, W., Nijenhuis, J., & Bouchard, T. J. (2008). Still just 1 g: Consistent result from five test bat- teries. Intelligence, 36, 81–95. doi:10.1016/j. intell.2007.06.001

Judge, T. A., & Bono, J. E. (2001). A rose by any other name: Are self-esteem, generalized self-efficacy, neuroticism, and locus of control indicators of a common construct? In B. W. Roberts & R. Hogan (Eds.), Personality psychology in the workplace (pp. 93–118). Washington, DC: American Psychological Association. doi:10.1037/10434-004

Judge, T. A., Locke, E. A., Durham, C. C., & Kluger, A. N. (1998). Dispositional effects on job life sat- isfaction: The role of core evaluation. Journal of Applied Psychology, 83, 17–34. doi:10.1037/0021- 9010.83.1.17

Judge, T. A., Van Vianen, A. E. M., & DePater, I. E. (2004). Emotional stability, core self-evaluations, and job outcomes: A review of the evidence and an agenda for future research. Human Performance, 17, 325–346. doi:10.1207/s15327043hup1703_4

Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64, 515–526. doi:10.1037/a0016755

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291. doi:10.2307/1914185

Kanfer, R., Chen, G., & Pritchard, R. (Eds.). (2008). Work motivation: Past, present, and future. New York, NY: Taylor & Francis.

Kanfer, R., & Heggestad, E. D. (1997). Motivational traits and skills: A person-centered approach to work moti- vation. Research in Organizational Behavior, 19, 1–56.

Kanungo, R. N. (1982). Measurement of job and work involvement. Journal of Applied Psychology, 67, 341– 349. doi:10.1037/0021-9010.67.3.341

Katzell, R. A., & Guzzo, R. A. (1983). Psychological approaches to productivity improvement. American Psychologist, 38, 468–472. doi:10.1037/0003- 066X.38.4.468

Kelloway, E. K., Loughlin, C., Barling, J., & Nault, A. (2002). Self-reported counterproductive behav- iors and organizational citizenship behaviors: Separate but related constructs. International Journal of Selection and Assessment, 10, 143–151. doi:10.1111/1468-2389.00201

Klein, S., Freedman, D., Shavelson, R., & Bolus, R. (2008). Assessing school effectiveness. Evaluation Review, 32, 511–525. doi:10.1177/0193841 X08325948

Kline, P. (1997). Commentary on Michell, quantitative science and the definition of measurement in psy- chology. British Journal of Psychology, 88, 358–387. doi:10.1111/j.2044-8295.1997.tb02642.x

Koppes, L. L., Thayer, P. W., Vinchur, A. J., & Salas, E. (Eds.). (2007). Historical perspectives in industrial and organizational psychology. Mahwah, NJ: Erlbaum.

Kozlowski, S. W. J., & Ilgen, D. R. (2006). Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest, 7, 77–124.

Kunin, T. (1955). The construction of a new type of attitude measure. Personnel Psychology, 8, 65–77. doi:10.1111/j.1744-6570.1955.tb01189.x

Landy, F. J. (2005). Some historical and scientific issues related to research on emotional intelligence. Journal of Organizational Behavior, 26, 411–424. doi:10.1002/job.317

Levi, L., & Lunde-Jensen, P. (1996). A model for assessing the costs of stressors at national level: Socio-economic costs of work stress in two EU member states. Dublin, Ireland: European Foundation for the Improvement of Living and Working Conditions.

Lewin, K. (1951). Field theory in social science. New York, NY: Harper & Row.

Liberman, V. (2011). Why your people can’t do what you need them to do. Conference Board Review, Winter, 1–8.

Lievens, F., & Chan, D. (2010). Practical intelligence, emotional intelligence, and social intelligence. In J. Farr & N. Tippins (Eds.), Handbook of employee selection (pp. 339–359). New York, NY: Routledge.

Likert, R. (1932). The method of constructing an attitude scale. Archives de Psychologie, 140, 44–53.

Litwin, G. H., & Stringer, R. A. (1968). Motivation and organizational climate. Boston, MA: Harvard University Press.


