Introduction to Psychometry

  • Jul 26, 2021

Psychometry can be defined as the "methodological discipline, within the area of Psychology, whose fundamental task is the measurement or quantification of psychological variables, with all the implications that this entails, both theoretical and practical". The origin of psychometry can be located towards the middle of the 19th century and, from that moment on, it developed fundamentally along two routes. The first was the study of psychophysics, which gave rise to models that allowed numerical values to be assigned to stimuli and, therefore, allowed the scaling of stimuli; the second, discussed below, was the study of individual differences, which allowed the scaling of subjects.

Thus, psychometrics must first deal with the justification and legitimation of psychological measurement, for which it must:

  • Develop formal models that allow the representation of the phenomena to be studied and enable the transformation of the facts into data
  • Validate the models developed to determine to what extent they represent the reality they intend to represent, and establish the conditions that allow the measurement process to be carried out

Psychological measurement

According to Coombs, Dawes and Tversky (1981), the fundamental roles assigned to science are the description, explanation and prediction of observable phenomena by means of a few general laws that express the relationships between the properties of the objects investigated. Psychology, as a science, has its scientific basis in measurement, which allows it to empirically contrast the hypotheses it raises. According to Nunnally (1970), measurement reduces to something very simple: it consists of a set of rules for assigning numbers to objects in such a way that these numbers represent quantities of attributes, understanding by attributes the characteristics of the objects and not the objects themselves.

However, the difficulty involved in measuring psychological characteristics is recognized, given their peculiar nature, and hence the obstacles that had to be overcome before the need for, and possibility of, measuring this type of variable was accepted. Because of the differences between psychological and physical attributes, a new conception of measurement was proposed: Zeller and Carmines (1980) considered it a process by which directly unobservable abstract concepts (constructs) are linked with directly observable empirical indicators (behaviors). This type of measurement is often called measurement by indicators: since psychological variables cannot be measured directly, it is necessary to select a series of indicators that can be measured directly.
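As a rough illustration of measurement by indicators, the sketch below builds a total score for a hypothetical construct from directly observable item responses; the construct name, the items and the scoring rule are invented for the example.

```python
# Hypothetical example: the construct "test anxiety" is not observed
# directly; instead we record observable indicators (item responses on
# a 1-5 scale) and combine them into a single score.
item_responses = {
    "worries_before_exams": 4,
    "heart_races_during_tests": 5,
    "mind_goes_blank": 3,
}

# A simple (assumed) scoring rule: the sum of the indicators stands in
# for the unobservable construct.
anxiety_score = sum(item_responses.values())
print(anxiety_score)  # 12
```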

Studies of individual differences, which led to the development of tests and of the different theories of tests, made possible the assignment of numerical values to subjects and, therefore, the scaling of subjects. Three factors can be considered decisive in the development of tests:

  • The opening of Galton's anthropometric laboratory in London
  • The development of Pearson's correlation
  • Spearman's interpretation of correlation, according to which a correlation between two variables indicates that both share a common factor. As instruments, tests thus preceded their theoretical foundation.

The most immediate origins are found in the first sensorimotor tests used by Galton (1822-1911) in his anthropometric laboratory in Kensington. Galton also has the honor of being the first to apply statistical techniques to the analysis of data from his tests, work that would be continued by Pearson.

James McKeen Cattell (1860-1944) was the first to use the term "mental test", but his tests, like Galton's, were of a sensory nature, and analysis of the data made clear the null correlation between this type of test and the intellectual level of the subjects. It was Binet who took a radical turn in the philosophy of tests by introducing more cognitive tasks into his scale, aimed at evaluating aspects such as judgment. In the revision of the scale carried out by Terman at Stanford University, known as the Stanford-Binet revision, the intelligence quotient (IQ) was used for the first time to express the scores of the subjects. The idea came from Stern, who in 1911 proposed dividing mental age (MA) by chronological age (CA) and multiplying by one hundred to avoid decimals: IQ = (MA / CA) x 100.
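As a quick numerical illustration of Stern's ratio, the short sketch below applies the formula; the ages used are invented for the example.

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ: mental age divided by chronological age, times 100."""
    return (mental_age / chronological_age) * 100

# Hypothetical child: mental age 10, chronological age 8.
print(ratio_iq(10, 8))  # (10 / 8) * 100 = 125.0
```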

The next step in the historical evolution of tests was marked by the appearance of collective intelligence tests, prompted by the need of the US Army in 1917 to select and classify the soldiers who were to take part in the First World War. A committee led by Yerkes designed, from the diverse material already in existence, and especially from Otis's unpublished test, the now famous Alpha and Beta tests, the first for the general population and the second for use with illiterate or non-English-speaking recruits; these tests are still in use today. For the appearance of the classic test batteries of today we have to wait until the 1930s and 1940s, whose most genuine product was Thurstone's Primary Mental Abilities.

The different models gave rise to numerous test batteries (PMA, DAT, GATB, TEA, etc.) commonly used today. For his part, the Swiss psychiatrist Rorschach proposed in 1921 his famous projective inkblot test, which would be followed by other projective tests with very different types of stimuli and tasks, including the TAT, the CAT and Rosenzweig's Frustration Test. However, the projective technique that can be considered pioneering is the Word Association or Free Association Test, described by Galton.

As a consequence of the boom in testing, the need arose to develop a theoretical framework that would provide a foundation for the scores obtained by subjects when tests are applied to them, enable the validation of the interpretations and inferences made from those scores, and allow, through the development of a series of models, the estimation of the measurement errors inherent in any measurement process.

Thus, a general theoretical framework was developed, the Theory of Tests, which makes it possible to establish a functional relationship between the observable variables (the empirical scores obtained by subjects on tests, or on the items that compose them) and the unobservable variables. Classical Test Theory (CTT) was developed, fundamentally, from the contributions of Galton, Pearson and Spearman, and revolves around three basic concepts: empirical or observed scores (X), true scores (V) and error scores (e). Its central objective was to find a statistical model that adequately grounds test scores and allows the estimation of the measurement errors associated with any measurement process.

Spearman's linear model is an additive model in which the observed score (dependent variable) of a subject on a test (X) is the sum of two components: the subject's true score on the test (V, the independent variable) and the error (e): X = V + e. Based on this model and a few minimal assumptions, CTT develops a whole set of deductions aimed at estimating the amount of error that affects test scores.

Assumptions:

  • The true score (V) is the mathematical expectation of the empirical score (X): V = E(X)
  • The correlation between the true scores of "n" subjects on a test and the measurement errors is equal to zero: rve = 0
  • The correlation between the measurement errors (re1e2) affecting the scores of the subjects on two different tests is equal to zero: re1e2 = 0
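These assumptions can be illustrated with a small simulation. The sketch below is only illustrative: the normal distributions, standard deviations, sample sizes and random seed are assumptions made for the example, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption 1: V = E(X). Fix one subject's true score and simulate many
# independent applications of the test; the observed scores average to V.
V_subject = 50.0
X_reps = V_subject + rng.normal(0, 4, size=100_000)
print(X_reps.mean())               # close to 50

# Assumptions 2 and 3: for n subjects, the errors are uncorrelated with the
# true scores and with the errors of a second test.
n = 100_000
V = rng.normal(50, 10, size=n)     # true scores of n subjects
e1 = rng.normal(0, 4, size=n)      # errors on test 1
e2 = rng.normal(0, 4, size=n)      # errors on test 2
print(np.corrcoef(V, e1)[0, 1])    # r_ve   ~ 0
print(np.corrcoef(e1, e2)[0, 1])   # r_e1e2 ~ 0
```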

Starting from these three assumptions of the model, the following deductions are established:

  1. The measurement error (e) is the difference between the empirical score (X) and the true score (V): e = X - V
  2. The mathematical expectation of the measurement errors is zero; they are therefore unbiased errors: E(e) = 0
  3. The mean of the empirical scores is equal to the mean of the true scores.
  4. The true scores do not covary with the errors: cov(V, e) = 0
  5. The covariance between the empirical scores and the true scores is equal to the variance of the true scores: cov(X, V) = S2(V)
  6. The covariance between the empirical scores of two tests is equal to the covariance between their true scores: cov(Xj, Xk) = cov(Vj, Vk)
  7. The variance of the empirical scores is equal to the variance of the true scores plus the variance of the errors: S2(X) = S2(V) + S2(e)
  8. The correlation between the empirical scores and the errors is equal to the quotient between the standard deviation of the errors and that of the empirical scores: rxe = Se / Sx
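A similar simulation can be used to check these deductions numerically; again, the simulated distributions and sample size are assumptions made only for the example, and the printed pairs of values agree up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
V = rng.normal(50, 10, size=n)   # true scores
e = rng.normal(0, 4, size=n)     # measurement errors
X = V + e                        # empirical scores (deduction 1: e = X - V)

print(np.mean(e))                                                # 2. E(e) ~ 0
print(np.mean(X), np.mean(V))                                    # 3. equal means
print(np.cov(V, e)[0, 1])                                        # 4. cov(V, e) ~ 0
print(np.cov(X, V)[0, 1], np.var(V, ddof=1))                     # 5. cov(X, V) ~ S2(V)
print(np.var(X, ddof=1), np.var(V, ddof=1) + np.var(e, ddof=1))  # 7. S2(X) ~ S2(V) + S2(e)
print(np.corrcoef(X, e)[0, 1], np.std(e) / np.std(X))            # 8. r_xe ~ Se / Sx
```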

This article is merely informative; at Psychology-Online we are not able to make a diagnosis or recommend a treatment. We invite you to see a psychologist to discuss your particular case.
