How do you best ensure consistency of data collected over time?


Data collected

Data is a collection of facts, figures, objects, symbols, and events gathered from different sources. Organizations collect data through various data collection methods in order to make better decisions; without collected data, it would be difficult for them to make appropriate decisions. For this reason, data is collected from different audiences at various points in time.

For instance, an organization must collect data on product demand, customer preferences, and competitors before launching a new product. If data is not collected beforehand, the newly launched product may fail for many reasons, such as low demand or an inability to meet customer needs. The data collected is used to improve the offering or to target customers better.

Although data is a valuable asset for every organization, it serves no purpose until it is analyzed or processed to obtain the desired results.

Validity and reliability in qualitative methodology

In current academic circles, which increasingly use qualitatively oriented methods and techniques in their different types of research, a difficulty related to the validity and reliability of the results has repeatedly arisen.

In general, the concepts of validity and reliability that reside in the minds of a large majority of researchers continue to be those of the traditional positivist epistemological orientation, which was already superseded in the second half of the 20th century. From here a conflict arises, since qualitative methodology adopts, as the basis and fundamental postulate of its theory of knowledge and science, the postpositivist epistemic paradigm.

The postpositivist paradigm became established in the academic field after many studies presented at international symposiums on the philosophy of science (see Suppe, 1977, 1979) critically dismantled the inherited conception (logical positivism), which, from that moment on, “was abandoned by almost all epistemologists” (Echeverría, 1989, p. 25), due, as Popper (1977, p. 118) points out, to its insurmountable intrinsic difficulties.

Obviously, it is not enough for such conclusions to be reached at this high scientific level for them to be immediately adopted in practice by the majority of researchers; the heliocentric ideas of Copernicus and Galileo were not fully adopted by the illustrious astronomers of the universities of Bologna, Padua and Pisa until a century later. According to Galileo (1968), this required “changing people’s heads, which only God could do” (p. 119).

Postpositivist epistemology shows that, in the cognitive process of our mind, there is no direct relationship between the empirical images (visual, auditory, olfactory, etc.) and the external reality to which they refer; the relationship is always mediated and interpreted by the personal and individual horizon of the researcher: his or her values, interests, beliefs, feelings, etc. For this same reason, the traditional positivist concepts of validity (as a physiological mind-thing relationship) and of reliability (as a repetition of the same mental process) must be reviewed and redefined.


1. Epistemological basis for a redefinition of validity and reliability

1.1. Systemic ontology

When an entity is a composition or aggregation of elements (a diversity of unrelated parts), it can, in general, be studied and measured appropriately under the parameters of traditional quantitative science, in which mathematics and probabilistic techniques play the main role. When, on the other hand, a reality is not a juxtaposition of elements but its “constituent parts” form an organized totality in strong interaction with one another, that is, when they constitute a system, its study and understanding require capturing the internal dynamic structure that characterizes it and, to do so, require a structural-systemic methodology.

Bertalanffy had already pointed out that “general systems theory – as he originally conceived it and not as it has been disseminated by many authors whom he criticizes and disavows (1981, p. 49) – was destined to play a role analogous to that played by Aristotelian logic in the science of antiquity” (Thuillier, 1975, p. 86).

There are two basic kinds of systems: linear and non-linear. Linear systems do not present “surprises”, since they are fundamentally “aggregates”, owing to the weak interaction between their parts: they can be decomposed into their elements and recomposed again, a small change in an interaction produces a small change in the solution, determinism is always present and, by reducing the interactions to very small values, the system can be considered to be composed of independent or linearly dependent parts.

The world of non-linear systems, on the other hand, is totally different: it can be unpredictable, violent and dramatic; a small change in a parameter can alter the solution little by little and then, suddenly, shift it to a totally new type of solution, as when “quantum leaps” occur in quantum physics: absolutely unpredictable events that are not controlled by causal laws, but only by the laws of probability.
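
This sensitivity to a small parameter change can be illustrated numerically. The following sketch is not from the original text; it uses the logistic map only as a stock example of a non-linear system, with invented parameter values, to show how a modest change in a single parameter flips the long-run behavior from a regular cycle to aperiodic chaos:

```python
# Minimal sketch (illustrative only): the logistic map x -> r*x*(1-x),
# a textbook non-linear system in which a small change in the parameter r
# flips the long-run behavior from a regular cycle to chaos.

def trajectory(r, x0=0.2, warmup=500, keep=8):
    """Iterate the map, discard the transient, return the last `keep` values."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    values = []
    for _ in range(keep):
        x = r * x * (1 - x)
        values.append(round(x, 4))
    return values

# r = 3.50: the orbit settles into a stable 4-cycle (the same 4 values repeat).
print("r=3.50:", trajectory(3.50))
# r = 3.57: a shift of only 0.07 in r yields aperiodic, chaotic values.
print("r=3.57:", trajectory(3.57))
```

In a linear system the analogous experiment would change the output only proportionally; here the qualitative character of the solution itself changes.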

These non-linear systems must be grasped from within, and their situation must be evaluated in parallel with their development. Prigogine (1986) claims that the non-linear world contains much of what is important in nature: the world of dissipative structures.

Well, our universe is basically made up of non-linear systems at all levels: physical, chemical, biological, psychological and sociocultural.

If we observe our environment, we see that we are immersed in a world of systems. When considering a tree, a book, an urban area, any device, a social community, our language, an animal, the firmament, in all of them we find a common feature: they are complex entities formed by parts in mutual interaction, whose identity results from an adequate harmony between their constituents, and endowed with a substantivity of their own that transcends that of those parts; in short, they are what, in a generic way, we call systems (Aracil, 1986, p. 13). Hence, von Bertalanffy (1981) maintains that “from the atom to the galaxy we live in a world of systems” (p. 47).

According to Capra (1992), quantum theory demonstrates that “all particles are dynamically composed of one another in a self-consistent manner, and, in that sense, it can be said that they ‘contain’ each other”. In this way, physics (the new physics) is a model science for the new concepts and methods of other disciplines. In the field of biology, Dobzhansky (1967) pointed out that the genome, which comprises both regulatory and operant genes, works as an orchestra and not as a set of soloists.

Likewise Köhler (1967), for psychology, used to say that “in the structure (system) each part dynamically knows each one of the others”. And Ferdinand de Saussure (1931), for linguistics, stated that “the meaning and value of each word is in the others”, and that the system is “an organized totality, made of supportive elements that can only be defined in relation to each other, depending on their place in this totality”.

If the significance and value of each element of a dynamic structure or system is closely related to that of the others, if everything is a function of everything, and if each element is necessary to define the others, then it cannot be seen, understood or measured “in itself”, in isolation, but only through the position and the function or role it plays in the structure. Thus, Parsons points out that “the most decisive condition for a dynamic analysis to be valid is that each problem refer continuously and systematically to the state of the system considered as a whole” (in Lyotard, 1989, p. 31).

The need for a proper approach to dealing with systems has been felt in all fields of science. Thus a series of related modern approaches were born, such as, for example, cybernetics, computer science, set theory, network theory, decision theory, game theory, stochastic models and others; and, in practical application, systems analysis, systems engineering, the study of ecosystems, operations research, etc.

Although these theories and applications differ in some initial assumptions, mathematical techniques and goals, they nevertheless coincide in dealing, in one way or another and according to their area of interest, with “systems” and “organization”; that is, they agree in being “systems sciences” that study aspects not addressed until now: problems of the interaction of many variables, of organization, of regulation, of the choice of goals, etc. They all seek the “systemic structural configuration” of the realities they study.

In a system there is a set of units interrelated in such a way that the behavior of each part depends on the state of all the others, since all are found in a structure that interconnects them. Organization and communication in the systems approach challenge traditional logic, replacing the concept of energy with that of information, and that of cause-effect with that of structure and feedback.

In living beings, and especially in human beings, there are structures of a very high level of complexity, made up of systems of systems whose understanding defies the acuity of the most privileged minds; these systems constitute a “physical-chemical-biological-psychological-cultural and spiritual” whole.

Referring only to the biological field, we speak of the circulatory system, respiratory system, nervous system, muscular system, skeletal system, reproductive system, immune system and many others. Let us imagine the level of complexity that results when all these systems interrelate and interact with all the other systems of a single person and, even more so, of entire social groups.

Now, what implications does the adoption of the systemic paradigm have for the cultivation of science and its technology? It completely changes the foundations of the entire scientific edifice: its bases, its conceptual structure and its methodological scaffolding. This is the path that methodologies inspired by hermeneutic approaches, the phenomenological perspective and ethnographic orientations try to follow today, that is, qualitative methodologies.

1.2. Positivist validity and reliability

Traditional positivist literature defines different types of validity (construct validity, internal validity, external validity), but they all try to verify whether we actually measure what we propose to measure. Likewise, this epistemological orientation seeks to establish a good level of reliability, that is, the possibility of repeating the same research with identical results. All these indicators have a common denominator: they are calculated and determined by means of “an isolated measure, independent of the complex realities to which they refer”.


The validity of hypothetical constructs (construct validity), which is the most important, tries to establish an operational measure for the concepts used; in the psychological field, for example, the instrument would measure the isolated psychological property or properties that underlie the variable. This validity is not easy to grasp, since it is embedded in the scientific framework of the research and its methodology, which are what give it meaning.

Internal validity is specifically related to establishing or finding a causal or explanatory relationship: whether event x leads to event y, excluding the possibility that y is caused by event z. This logic is not applicable, for example, to a descriptive or exploratory study (Yin, 2003, p. 36).

External validity tries to verify whether the results of a given study are generalizable beyond its limits. This requires that there be a homology or, at least, an analogy between the sample (studied case) and the universe to which it is intended to be applied.

Some authors refer to this type of validity with the name of content validity, since they define it as the representativeness or sampling adequacy of the content that is measured with the content of the universe from which it is extracted (Kerlinger, 1981a, p. 322).

Likewise, reliability aims to ensure that a researcher, following the same procedures described by a previous researcher and conducting the same study, can reach the same results and conclusions. Note that this means redoing the same study, not replicating it.

1.3. Critical analysis of positivist criteria

All these indicators ignore the fact that each human reality or entity, be it a thought, a belief, an attitude, an interest, a behavior, etc., is not an isolated entity; rather, it receives its meaning or significance, that is, it is configured as such, by the type and nature of the other elements and factors of the system or dynamic structure in which it is inserted and by the role and function it plays in it, all of which can change over time, since such structures are never static. An isolated element can never be adequately conceptualized or categorized, since it may have many meanings depending on the constellation of factors or the structure from which it comes.

If we delve deeper into the “parts-whole” phenomenon, and focus more closely on its epistemological aspect, we will say that there are two modes of intellectual apprehension of an element that is part of a totality. Michael Polanyi (1966) puts it this way:

…we cannot understand the whole without seeing its parts, but neither can we see the parts without understanding the whole… When we understand a certain series of elements as part of a whole, the focus of our attention moves from the details, until now not understood, to the understanding of their joint meaning.

This passage of attention does not make us lose sight of the details, since a whole can only be seen by seeing its parts, but it completely changes the way we apprehend them. We now apprehend them in terms of the whole on which we have focused our attention. I will call this the subsidiary apprehension of details, as opposed to the focal apprehension we would employ to attend to the details in themselves, not as parts of the whole (pp. 22-23).

Unfortunately, analytical philosophy and its positivist orientation followed the advice that Descartes offers as a guiding idea and second maxim in the Discourse on Method: “fragment every problem into as many simple and separate elements as possible.” This orientation has systematically accepted the (false) assumption that total reality can be captured by dismembering it (disintegrative analysis) into its different components.

This approach constituted the conceptual paradigm of science for almost three centuries; but it breaks or ignores the set of links and relationships that each human entity, and sometimes even the same physical or chemical entities, has with the rest. And that rest or context is precisely what gives it the nature that constitutes it, its characteristics, its properties and its attributes.

This decontextualization of realities makes them amorphous, ambiguous and, most of the time, meaningless or, alternatively, open to many possible meanings. As the creator of general systems theory, Ludwig von Bertalanffy (1976), very appropriately points out, “every mathematical model is an oversimplification, and it is debatable whether it reduces real events to the bare bones or whether it tears out vital parts of their anatomy” (p. 117).


For further exemplification, consider what has been happening recently in the field of medicine. Excellent professionals in this science, sometimes guided by their specialization or super-specialization, prescribe a medicine that seems magnificent for a certain ailment or condition, unaware that, for some people in particular, it can even be fatal, since they have a specific allergy, for example, to penicillin or to one of its components.

This is without pointing out that the etiology of a certain disease sometimes has its origin in non-biological areas, such as a high level of stress due to psychological reasons, family problems or socioeconomic difficulties: all areas of which the distinguished specialist may be unaware, even in their simplest aspects, but which could give a clue as to where the necessary therapy should be directed.

2. Postpositivist view of validity and reliability

2.1. Validity

In a broad and general sense, we will say that an investigation has a high level of “validity” to the extent that its results “reflect” an image that is as complete, clear and representative as possible of the reality or situation studied.

But we do not have a single type of knowledge. The natural sciences produce knowledge that is effective in dealing with the physical world; they have been successful in producing instrumental knowledge that has been politically and lucratively exploited in technological applications. But instrumental knowledge is only one of the three cognitive forms that contribute to human life.

The historical-hermeneutic sciences (interpretive sciences) produce the interactive knowledge that underlies the life of each human being and of the community of which he or she is a part; likewise, critical social science produces the reflective and critical knowledge that human beings need for their development, emancipation and self-realization.

Each form of knowledge has its own interests, its own uses and its own criteria of validity; for this reason, each must be justified on its own terms, as has traditionally been done with ‘objectivity’ for the natural sciences, as Dilthey did for hermeneutics, and as Marx and Engels did for critical theory.

In the natural sciences, validity is related to the ability to control the physical environment with new physical, chemical and biological inventions; in the hermeneutic sciences, validity is appreciated according to the ability to produce human relationships with a high sense of empathy and connection; and in critical social science, validity is related to the ability to overcome obstacles so as to promote the growth and development of more self-sufficient human beings in the full sense.

As we pointed out, an investigation has a high level of validity if, when observing or appreciating a reality, it observes or appreciates that reality in its full sense, and not just one aspect or part of it.

If reliability has always been a difficult requirement for qualitative research, owing to its peculiar nature (the impossibility of repeating, stricto sensu, the same study), the same has not been true of validity. On the contrary, validity is the greatest strength of these investigations. Indeed, qualitative researchers’ assertion that their studies have a high level of validity derives from their way of collecting information and from the analysis techniques they use.

These procedures lead them to live among the subjects participating in the study, to collect data over long periods of time, to review, compare and analyze them continuously, to adapt interviews to the empirical categories of the participants rather than to abstract or foreign concepts brought from another environment, to use participant observation in the real settings and contexts where the events occur and, finally, to incorporate into the analysis process a continuous activity of feedback and reevaluation.

All this guarantees a level of validity that few methodologies can offer. However, validity can also be perfected, and it will be all the greater to the extent that certain problems and difficulties that may arise in the process of qualitative research are taken into account. Among others, for good internal validity, special attention will have to be paid to the following:

a) There may be a noticeable change in the environment studied between the beginning and the end of the investigation. In this case, information will have to be collected and collated at different times in the process.

b) It is necessary to carefully calibrate the extent to which the observed reality is a function of the position, status and role that the researcher has assumed within the group. Interactive situations always create new realities or modify existing ones.

c) The credibility of the information can vary greatly: informants may lie, omit relevant data or hold a distorted view of things. It will be necessary to contrast their information with that of others, to collect it at different times, etc. It is also advisable that the sample of informants represent, as well as possible, the groups, orientations or positions of the population studied, as a strategy to correct perceptual distortions and prejudices, although it will always remain true that the truth is produced not by a random and democratic exercise in collecting general information, but by the information of the most qualified and trustworthy people.


Regarding external validity, it is necessary to remember that often the meaning structures discovered in one group are not comparable with those of another, because they are specific and typical of that group, in that situation and in those circumstances, or because the second group has been poorly chosen and the conclusions obtained in the first are not applicable to it.

2.2. Reliability

Research with good reliability is stable, secure, consistent, equal to itself at different times, and predictable for the future. Reliability also has two faces, one internal and one external: there is internal reliability when several observers, studying the same reality, agree in their conclusions; there is external reliability when independent researchers, studying a reality at different times or in different situations, reach the same results.

The traditional concept of external reliability implies that a study can be repeated with the same method without altering the results; that is, it is a measure of the replicability of research results. In the human sciences it is practically impossible to reproduce the exact conditions in which a behavior and its study took place. Heraclitus already said in his time that “no one bathes in the same river twice”; and Cratylus added that “it is not possible to do so even once”, since the water is continually flowing (Aristotle, Metaphysics, iv, 5).

In studies carried out through qualitative research, which, in general, are guided by a systemic, hermeneutic, phenomenological, ethnographic and humanistic orientation, reliability is oriented towards the level of interpretive agreement between different observers, evaluators or judges of the same phenomenon; that is, reliability will be, above all, internal, an inter-judge reliability. This reliability is considered good when it reaches 70%; that is, for example, out of 10 judges, there is consensus among 7.
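
As a rough numerical illustration (not part of the original text; the data and function names are invented for the example), the simple percent-agreement criterion just described can be computed as follows:

```python
# Minimal sketch (invented data): percent agreement among judges who
# code the same items. The 70% threshold is the consensus level
# mentioned in the text.

from collections import Counter

def percent_agreement(codings):
    """Average, across items, the share of judges voting for the modal category."""
    shares = []
    for votes in codings:
        modal_count = Counter(votes).most_common(1)[0][1]
        shares.append(modal_count / len(votes))
    return sum(shares) / len(shares)

# Rows = items (e.g., interview fragments); columns = the codes of 10 judges.
codings = [
    ["A", "A", "A", "A", "A", "A", "A", "B", "B", "C"],  # 7/10 agree
    ["B", "B", "B", "B", "B", "B", "B", "B", "A", "A"],  # 8/10 agree
]

score = percent_agreement(codings)
print(f"average inter-judge agreement: {score:.0%}")   # -> 75%
print("acceptable" if score >= 0.70 else "insufficient")
```

More refined coefficients exist (such as Cohen's kappa, which corrects for chance agreement), but the simple proportion above corresponds to the 7-out-of-10 criterion described in the text.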

Given the particular nature of all qualitative research and the complexity of the realities it studies, it is not possible to repeat or replicate a study in the strict sense, as can be done in many experimental investigations. For this reason, the reliability of these studies is achieved through other rigorous and systematic procedures.

Internal reliability is thus very important. Indeed, the level of consensus among different observers of the same reality increases the credibility of the significant structures discovered in a given environment, as well as the assurance that the level of congruence of the phenomena under study is strong and solid.
