Tuesday, 24th November 2015
Speakers: Dr Hugh McCredie, John Hackston, Rab MacIver, Lauren Jeffery-Smith and Helen Baron
Our 2015 New Frontiers in Psychometrics seminar was themed ‘From Intuition to Algorithms in Selection and Development’ and covered the following topics:
- Towards a generic algorithm for selecting managers
- Closing the academic-practitioner-non-expert gap: automated selection in SMEs
- Improving report interpretation – consistent, valid, bespoke?
- Creating selection algorithms: What is the evidence?

Dr Hugh McCredie
TPF Vice Chair and Independent Researcher
Dr McCredie opened the seminar by introducing the Causal Flow Model (Spencer & Spencer, 1993), which postulates that personality traits and abilities underpin the acquisition of competencies which, in turn, affect performance. He then outlined his own efforts over several decades, using 16PF Form A, to develop computerised personality predictions of managerial performance. He had established small to moderate correlations between general mental ability (g), stability (N-), low Agreeableness (A-) and Extraversion (E) and the competency clusters of Intellect, Interpersonal skill, Results-orientation and Adaptability, respectively. Dr McCredie then presented a generic personality algorithm for predicting managerial performance, involving these four factors plus middle-range scores for Conscientiousness (C) and Openness (O). The algorithm is provided in full in McCredie (2014).
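
The published weights appear in McCredie (2014); purely as a hedged illustration of the general shape of such an algorithm, a minimal Python sketch might combine linear credit for the four factors with mid-range credit for C and O. All weights, the sten scaling and the mid-range rule below are illustrative assumptions, not the published algorithm:

```python
# Hypothetical sketch of a generic personality algorithm of the kind
# Dr McCredie described: linear credit for g, stability (reversed N),
# low Agreeableness (reversed A) and Extraversion, plus credit for
# mid-range Conscientiousness and Openness. The weights, scaling and
# mid-range rule are illustrative assumptions only (see McCredie,
# 2014, for the published algorithm).

STEN_MIDPOINT = 5.5  # sten scores run from 1 to 10

def predicted_performance(stens: dict) -> float:
    """Return an illustrative composite from sten-scaled trait scores."""
    linear = (
        stens["g"]             # general mental ability
        + (11 - stens["N"])    # stability = reversed Neuroticism
        + (11 - stens["A"])    # low Agreeableness scores positively
        + stens["E"]           # Extraversion
    )
    # Mid-range scoring: full credit at the sten midpoint, falling off
    # linearly towards either extreme.
    mid_range = sum(
        max(0.0, 4.5 - abs(stens[t] - STEN_MIDPOINT)) for t in ("C", "O")
    )
    return linear + mid_range

if __name__ == "__main__":
    candidate = {"g": 8, "N": 3, "A": 4, "E": 7, "C": 6, "O": 5}
    print(f"Composite: {predicted_performance(candidate):.1f}")
```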

John Hackston
Head of Research, OPP
John Hackston of OPP presented next, on ‘Closing the academic-practitioner-non-expert gap: automated selection in SMEs’. This featured the ‘Sirius’ platform, which incorporates 18 competencies related to the 16PF measure. For any particular role, the end user assigns each competency to one of five categories of impact: Critical, Large, Moderate, Small or None, with a limit on the number allowed in the first two categories. Competencies of equal impact are then differentiated by how frequently they are applied in the role. Candidates complete the 16PF measure, plus a full-length ability test where appropriate. An algorithm then processes the scores into a weighted total that can be compared with a norm, revealing the degree of fit to each competency, and to the role overall, as the basis for sifting candidates into the next stage of the selection process.
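
A minimal sketch of this kind of weighted-total sift follows; the impact weights, competency names and norm figures are invented for illustration, and only the overall shape (an impact-weighted sum compared against a norm) follows the description above:

```python
# Illustrative sketch of a weighted-total sift of the sort described
# for the Sirius platform. The weights, norms and scores below are
# invented; the actual Sirius weighting scheme was not published.

IMPACT_WEIGHTS = {"Critical": 4, "Large": 3, "Moderate": 2, "Small": 1, "None": 0}

def role_fit(competency_scores: dict, impact_ratings: dict,
             norm_mean: float, norm_sd: float) -> float:
    """Impact-weighted total of competency scores, expressed as a
    z-score against the role norm."""
    total = sum(
        IMPACT_WEIGHTS[impact_ratings[c]] * score
        for c, score in competency_scores.items()
    )
    return (total - norm_mean) / norm_sd

if __name__ == "__main__":
    scores = {"Analysing": 7, "Influencing": 6, "Delivering Results": 8}
    impacts = {"Analysing": "Critical", "Influencing": "Large",
               "Delivering Results": "Moderate"}
    print(f"Fit (z): {role_fit(scores, impacts, norm_mean=55, norm_sd=10):.2f}")
```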

Rab MacIver and Lauren Jeffery-Smith
Saville Consulting UK Ltd
Rab MacIver and Lauren Jeffery-Smith of Saville Consulting UK Ltd presented on the topic ‘Improving report interpretation – consistent, valid, bespoke?’ Rab opened with the assertion: ‘User validity argues that validity in practice is not about the validity of a test scale, but about the validity of interpretation from test outputs in use’. In the context of selection, this is the extent to which performance predictions (i.e. interpretations of test scores) are matched by subsequent performance in reality (see MacIver et al., 2014). Lauren suggested two basic approaches to selection, each with attractions and limitations: the first uses standard predictive measures (e.g. Saville’s Wave); the second uses a role- or organisation-specific measure. She argued that a generic competency taxonomy delivers the best balance of cost and effectiveness between the two, and termed this ‘The Third Approach’. At the heart of the process, the client organisation’s competency requirements for the role in question are mapped onto a generic competency taxonomy, which in turn maps onto the Great Eight competencies and the Big Five personality factors. One benefit of relating client-selected competencies to such a standard measure is the psychometric soundness and psychological comprehensiveness of the latter. Lauren reported small but significant correlations between Wave scores and leadership competencies, similar to those Dr McCredie reported for 16PF Form A, above. When corrected for unreliability of the criterion ratings, these reached moderate effect sizes.
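
The correction referred to is the classical correction for attenuation applied to the criterion side only: r_corrected = r_observed / sqrt(r_yy). A minimal sketch follows; the example figures are illustrative, not values from the talk:

```python
from math import sqrt

def correct_for_criterion_unreliability(r_observed: float,
                                        criterion_reliability: float) -> float:
    """Classical correction for attenuation, criterion side only:
    r_corrected = r_observed / sqrt(r_yy)."""
    return r_observed / sqrt(criterion_reliability)

if __name__ == "__main__":
    # Illustrative figures: an observed validity of .20 with criterion
    # rating reliability of .52 disattenuates to roughly .28.
    print(f"{correct_for_criterion_unreliability(0.20, 0.52):.2f}")
```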

Helen Baron
Convenor of the BPS Assessment Centre Standards Working Group
Helen’s topic was ‘Creating selection algorithms: What is the evidence?’ The connection with the earlier presentations lay in the Working Group’s focus on assessment centre (AC) decision making. She reported: ‘Research indicates that arithmetic combinations of scores (e.g. averaging) are associated with much higher validities than consensual methods of determining final scores through discussion by Centre staff.’ Helen explored the reasons for the modest criterion validity of ACs (slightly lower than that for personality measures). Amongst the most frequently reported issues were:
- assessors being asked to work extremely long hours;
- the order in which candidates take part in exercises not being the same for all;
- wash-ups rushed due to lack of time.
She shared meta-analytic evidence showing a 50% increase in validity for arithmetic over consensual methods, and some interesting, if challenging, options for combining scores arithmetically.
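
As a minimal sketch of the arithmetic combination Helen contrasted with the consensual wash-up, the final AC score below is a simple mean of all exercise-by-dimension ratings, with no discussion-driven adjustment. The exercises, dimensions and ratings are invented for illustration:

```python
from statistics import mean

# Arithmetic (mechanical) combination: final score is the unweighted
# mean of every exercise-by-dimension rating; no wash-up adjustment.
ratings = {
    ("Group Exercise", "Interpersonal"): 4,
    ("Group Exercise", "Drive"): 3,
    ("Presentation", "Interpersonal"): 5,
    ("Presentation", "Intellect"): 4,
    ("In-tray", "Intellect"): 3,
    ("In-tray", "Drive"): 4,
}

final_score = mean(ratings.values())
print(f"Overall AC score: {final_score:.2f}")
```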