Considerations and issues in EI Test Development

This section describes what, in my opinion, needs to be addressed in EI measurement.

For any construct to be useful, it needs to be describable, definable and measurable in order to be applied. Moreover, for the construct of emotional-social intelligence to be applied, we must first be able to measure it like other constructs in the exact and social sciences. Without a method of effectively measuring it, there is no way to accurately assess how emotionally intelligent we are and whether this type of intelligence has increased, remained the same or decreased as the result of a planned intervention. Such measures need to be developed and validated in a specific way, and certain basic psychometric characteristics need to be demonstrated.

When advancing an EI measure, a number of issues need to be addressed and presented. These are the essential criteria for sound EI measurement, and the degree to which they are met should guide test users in selecting the most valid and reliable measure. The seven basic considerations and issues discussed here are the following:

  1. The content domain must be adequately mapped, described and defined
  2. The most appropriate assessment method must be determined
  3. Adequate content validity must be demonstrated
  4. Adequate reliability must be demonstrated
  5. Adequate factorial validity must be demonstrated
  6. Adequate construct validity must be demonstrated
  7. Adequate predictive validity must be demonstrated

1. The content domain must be adequately mapped, described and defined. Mapping, describing and defining the content domain is the first step in EI test development. This needs to be carried out as methodically, adequately and clearly as possible. Ideally, there should be agreement among theorists and researchers as to what is included within the domain we wish to measure.

However, this undertaking is difficult to accomplish with respect to the EI construct, which has been described with such a wide range of different conceptualizations to date. This apparent lack of agreement makes it difficult to determine, describe and define the content domain involved. A similar scientific challenge has been encountered with other psychological constructs that have been around for more than a century, such as personality and intelligence. As was previously suggested when discussing the conceptualization of EI, a good starting point for mapping the content domain of this construct would be to systematically review what has appeared in the literature from Darwin to the present, to gain a better idea of how this construct has been described to date.

As was previously suggested, the key search words to look for in the literature when attempting to understand the possible domain involved are the following: emotional expression, emotional awareness, emotional literacy, emotional competence, alexithymia, psychological mindedness (PM), social intelligence, social-emotional learning (SEL), social-emotional education (SEE), intrapersonal intelligence, interpersonal intelligence, personal intelligences, practical intelligence, successful intelligence, emotional intelligence (EI), and emotional-social intelligence. After identifying a consensus of potential factorial components and reducing redundant terms, individuals involved in EI measurement should attempt to define the major competencies, skills and behaviors that emerge as comprehensively and clearly as possible.

2. The most appropriate assessment method must be determined. After identifying and defining what are thought to be the key factorial components of emotional-social intelligence, the next step in EI test development involves determining the appropriate assessment modality for the instrument being constructed. First, theorists and researchers need to agree whether the EI construct they wish to measure is the potential for emotionally and socially intelligent behavior or the behavior itself. The former (a measure of the potential for a particular type of behavior) ‘traditionally’ dictates the application of ability testing, which has frequently been used to tap some covert ability, capacity or potential. The latter (a measure of a particular type of behavior) typically dictates the use of self-report and/or multi-rater assessment techniques designed to assess self- and other-observed behavior or performance.

When the ability-based assessment modality is employed, the potential limitation that must be addressed is the use of expert- and consensus-determined ‘correct answers’ in the response format: the culture of the experts can influence which answers are designated as correct, which makes the cross-cultural generalizability of ability testing questionable. In order to reliably and effectively use the ability-based assessment modality, the correct answers need to be generated by very large and culturally diverse samples of experts. Furthermore, the individuals who generate the correct answers in ability testing should represent a balanced mixture of males and females of different ages, since there could also be a gender/age impact on the answers that are selected in addition to a general cultural/socio-economic impact. In any event, test authors and publishers of ability tests of EI should present detailed information on the demographic breakdown of the experts who selected the correct responses and where they were recruited. The technical manual should also clearly state when and where the correct answers were selected and how many participants generated them, and findings related to the effect of gender, age and ethnicity on the answers that are selected must be published as well.
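
To illustrate how consensus-determined answers are typically weighted in ability-based scoring, the following is a minimal sketch; the expert responses, response options and scoring rule are hypothetical and are not taken from any published EI test:

    import numpy as np

    # Hypothetical responses of a demographically diverse expert sample to one
    # ability-test item with response options coded 0-3.
    expert_responses = np.array([2, 2, 1, 2, 3, 2, 2, 1, 2, 2])

    # Consensus weights: the proportion of experts who endorsed each option.
    options, counts = np.unique(expert_responses, return_counts=True)
    weights = dict(zip(options, counts / counts.sum()))

    def consensus_score(chosen_option):
        """A respondent's credit for the item is the weight of the chosen option."""
        return weights.get(chosen_option, 0.0)

    print(consensus_score(2))  # 0.6 -- the option most experts endorsed earns the most credit

In such an approach, the size and demographic composition of the expert sample determine how trustworthy the weights are, which is precisely why the information described above needs to be published.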

When self-report instruments are selected to assess emotionally and socially intelligent behavior, responses can be intentionally distorted by the respondent. This is a very real possibility, especially if the self-report instrument does not include response bias indicators (for evaluating “faking good” and “faking bad”) and a correction factor based on these indicators. It is thus crucial to employ these devices when using self-report methods in order to identify and correct for response bias. Additionally, it is imperative for EI test authors and publishers to develop and publish appropriate norms with empirically based cutoff points used to flag potentially invalid profiles.
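
As a rough sketch of how an empirically based cutoff might be used to flag potentially invalid profiles (the index name, normative values and cutoff below are hypothetical and would need to be derived from the instrument's own normative data):

    # Hypothetical normative parameters for a positive-impression ("faking good") index.
    NORM_MEAN, NORM_SD, CUTOFF_Z = 100.0, 15.0, 2.0

    def flag_profile(positive_impression_score):
        """Flag a profile as potentially invalid when the response-bias index falls
        more than CUTOFF_Z standard deviations above the normative mean."""
        z = (positive_impression_score - NORM_MEAN) / NORM_SD
        return z > CUTOFF_Z

    print(flag_profile(135))  # True -- flagged for possible "faking good"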

3. Adequate content validity must be demonstrated. Once the factorial components of EI are clearly defined and the assessment modality is selected, items need to be selected that are representative of the content domain being measured. An attempt should be made to choose the ‘best’ items: brief, simple and clearly worded sentences. Items that are biased with respect to culture, gender or age need to be identified and then modified or deleted, and it is also important to avoid items with religious, political and sexual content that can contribute to test sabotaging among respondents.

The initial version of the measure then needs to be piloted, and statistical procedures such as item analysis and factor analysis need to be conducted to help weed out weak items. Additionally, group differences related to gender, age and ethnicity need to be examined, and separate norms should be provided if justified (i.e., if significant differences are found between the groups being compared). This whole process is referred to as content validity, which is not validity in the true statistical sense but rather an indication of how well the items are thought to uniquely cover the content domain being measured. With respect to this type of validation, the Bar-On EQ-i was validated primarily by the systematic way in which the items were generated and selected, which involved a serious and thorough attempt to express the essence of each factor based on the definitions that were created, as was previously explained. The effectiveness of this process should be examined by applying a combination of item analysis and factor analysis, as was previously mentioned with respect to the Bar-On EQ-i.
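
For instance, one common item-analysis step is to compute corrected item-total correlations and flag weak items for revision or deletion; the sketch below uses hypothetical pilot data and an illustrative 0.30 threshold:

    import pandas as pd

    # Hypothetical pilot responses: rows are respondents, columns are items of one scale.
    items = pd.DataFrame({
        "item1": [1, 2, 4, 5, 3, 4],
        "item2": [2, 2, 5, 4, 3, 5],
        "item3": [5, 1, 2, 3, 4, 1],   # a deliberately weak item
    })

    total = items.sum(axis=1)
    for col in items.columns:
        # Corrected item-total correlation: the item vs. the total of the remaining items.
        r = items[col].corr(total - items[col])
        if r < 0.30:                   # illustrative cutoff for weeding out weak items
            print(f"{col}: r = {r:.2f} -> candidate for revision or deletion")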

4. Adequate reliability must be demonstrated. The next important step in EI assessment development is to examine the instrument’s reliability. This essentially examines the internal consistency of the instrument and its stability over time. Reliability indicates the extent to which individual differences in test scores are attributable to true differences in the characteristics being measured. There are two basic types of reliability that are traditionally examined in test construction: the first is internal consistency reliability, and the second is retest reliability or stability. Within this context, consistency is the extent to which the items cluster together and measure the same construct or the same factorial component of a particular construct. This procedure estimates reliability from a single administration of the inventory and measures the consistency of the content of the individual scale being examined. Retest reliability, on the other hand, refers to the temporal stability of the instrument (i.e., stability over time); this type of reliability is a function of the reliability of the respondent more than of the measure. A third reliability-related index that is often examined is the standard error of measurement (SEM), which is calculated from the reliability estimates for the instrument’s scales and indicates how much an individual’s observed score might vary from his or her true score. The two types of reliability examined for the Bar-On EQ-i were internal consistency and retest reliability, and the three indices examined for the Bar-On EQ-i:YV were internal consistency, retest reliability and the standard error of measurement.
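
A minimal sketch of how these three indices might be computed for a single scale is shown below; the item responses and retest scores are hypothetical:

    import numpy as np

    def cronbach_alpha(item_scores):
        """Internal consistency: alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
        item_scores = np.asarray(item_scores, dtype=float)   # respondents x items
        k = item_scores.shape[1]
        item_vars = item_scores.var(axis=0, ddof=1).sum()
        total_var = item_scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Hypothetical item responses (rows: respondents, columns: items of one scale).
    X = np.array([[3, 4, 3], [2, 2, 3], [5, 5, 4], [4, 4, 5], [1, 2, 2]])
    alpha = cronbach_alpha(X)

    # Retest reliability: correlation between scale scores obtained at two points in time.
    time1 = X.sum(axis=1)
    time2 = time1 + np.array([1, -1, 0, 1, 0])               # hypothetical retest scores
    retest_r = np.corrcoef(time1, time2)[0, 1]

    # Standard error of measurement: SEM = SD * sqrt(1 - reliability).
    sem = time1.std(ddof=1) * np.sqrt(1 - alpha)
    print(round(alpha, 2), round(retest_r, 2), round(sem, 2))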

5. Adequate factorial validity must be demonstrated. Factorial validation examines an instrument’s factorial structure in order to assess the extent to which it is empirically and theoretically justified. The statistical procedure traditionally applied to examine factorial validity is factor analysis. For example, the factorial validity of the Bar-On EQ-i suggests a structure comprising 15 factors, as was expected.
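
To illustrate, an exploratory factor analysis can be run on item-level data to check whether the theoretically expected number of factors is supported; this is a minimal sketch using scikit-learn with randomly generated placeholder data and an assumed 15-factor, 133-item structure:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Placeholder item-response matrix: 1,000 respondents x 133 items (random data
    # used only to make the sketch runnable; real analyses use piloted responses).
    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(1000, 133)).astype(float)

    # Fit the number of factors the theoretical model predicts (here, 15).
    fa = FactorAnalysis(n_components=15, random_state=0)
    fa.fit(responses)

    # Each row of the loading matrix is a factor; each column is an item.
    loadings = fa.components_
    print(loadings.shape)                        # (15, 133)
    print(np.abs(loadings[0]).argsort()[-5:])    # items loading most strongly on the first factor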

6. Adequate construct validity must be demonstrated. The next step in EI test development is to examine and demonstrate adequate construct validity, which is probably the most important step. Construct validation is a statistical procedure that evaluates how well a psychometric instrument measures what it was designed to measure (in the case of the Bar-On EQ-i, emotional-social intelligence and its factorial components). This procedure examines the ability of the test to measure the construct it was designed to measure and the degree to which that construct is different from or similar to other constructs such as personality and cognitive intelligence. There are two basic types of construct validity. Convergent construct validity examines the degree of similarity between the measure being examined and other measures that are purported to measure the same construct, while divergent construct validity examines the degree to which the measure differs from measures that are thought to assess other constructs. From a technical point of view, convergent construct validity assesses the extent to which two or more measures of the same construct correlate, while divergent construct validity assesses how measures of a construct such as EI differ from measures of constructs that are assumed to be theoretically unrelated to it, such as personality and cognitive intelligence. Both types of construct validity must be examined and demonstrated; establishing just one of them is insufficient in test development. Lastly, incremental validity, based on hierarchical regression, should also be examined for new EI measures to see whether they explain behavior over and beyond traditional measures such as personality and cognitive intelligence.
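
The sketch below illustrates, with simulated placeholder scores, how convergent/divergent correlations and an incremental-validity step (a hierarchical regression comparing nested models) might be computed; the variable names and effect sizes are assumptions made purely for the example:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    personality = rng.normal(size=n)                       # hypothetical personality score
    iq = rng.normal(size=n)                                # hypothetical cognitive-intelligence score
    ei = 0.3 * personality + rng.normal(size=n)            # hypothetical EI score
    other_ei = 0.8 * ei + 0.5 * rng.normal(size=n)         # hypothetical second EI measure
    outcome = 0.4 * ei + 0.2 * iq + rng.normal(size=n)     # hypothetical criterion of success

    # Convergent validity: correlation with another purported measure of the same construct.
    print(np.corrcoef(ei, other_ei)[0, 1])
    # Divergent validity: correlations with measures of theoretically distinct constructs.
    print(np.corrcoef(ei, personality)[0, 1], np.corrcoef(ei, iq)[0, 1])

    # Incremental validity: does EI add explained variance beyond personality and IQ?
    base = sm.OLS(outcome, sm.add_constant(np.column_stack([personality, iq]))).fit()
    full = sm.OLS(outcome, sm.add_constant(np.column_stack([personality, iq, ei]))).fit()
    print(full.rsquared - base.rsquared)                   # R-squared change attributable to EI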

7. Adequate predictive validity must be demonstrated. Another very important step in EI test development focuses on examining the measure’s predictive validity. Essentially, this is designed to examine and demonstrate the ability of the test to predict success in various areas of human endeavor. This is an ongoing process that is carried out over a very long period of time. If an EI measure is shown to possess poor predictive validity, its applicability and usefulness will be questionable and limited even if it possesses adequate construct validity and reliability.
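
As a simple illustration, predictive validity is typically quantified by correlating test scores obtained at one point in time with a criterion of success measured later; the scores below are hypothetical:

    from scipy.stats import pearsonr

    # Hypothetical baseline EI scores and later performance ratings for eight people.
    ei_scores = [95, 110, 102, 120, 88, 105, 115, 99]
    later_performance = [3.1, 4.0, 3.5, 4.4, 2.9, 3.6, 4.2, 3.3]

    r, p = pearsonr(ei_scores, later_performance)
    print(f"predictive validity coefficient r = {r:.2f} (p = {p:.3f})")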

Following the development and validation of an EI measure, an important direction for EI research is to empirically explore ways of developing this construct in order to enhance emotionally and socially intelligent behavior in individuals.
