What are the best 5 common data collection instruments?

Data Collection

In the age when information is power, how we gather that information should be one of our major concerns, right? Also, which of the many data collection methods is the best for your particular needs? Whatever the answer to the two questions above, one thing is for sure – whether you’re an enterprise, organization, agency, entrepreneur, researcher, student, or just a curious individual, data gathering needs to be one of your top priorities.

Still, raw data isn’t particularly useful on its own. Without proper context and structure, it’s just a set of random facts and figures. However, if you organize, structure, and analyze data obtained from different sources, you’ve got yourself a powerful “fuel” for your decision-making.

Data collection is defined as the “process of gathering and measuring information on variables of interest, in an established systematic fashion that enables one to answer stated research questions, test hypotheses, and evaluate outcomes.”

It is estimated that, by 2025, the total volume of data created and consumed worldwide will reach 163 zettabytes. That being said, there are numerous reasons for data collection, but here we are going to focus primarily on those relevant to marketers and small business owners:

  • It helps you learn more about your target audience by collecting demographic information
  • It enables you to discover trends in the way people change their opinions and behavior over time or in different circumstances
  • It lets you segment your audience into different customer groups and direct different marketing strategies at each of the groups based on their individual needs
  • It facilitates decision making and improves the quality of decisions made
  • It helps resolve issues and improve the quality of your product or service based on the feedback obtained

According to Clario, the top global collectors of personal data among social media apps are:

  • Facebook
  • Instagram
  • TikTok
  • Clubhouse
  • Twitter

And given how successful they are when it comes to meeting their users’ needs and interests, it is safe to say that a streamlined and efficient data collection process is at the core of any serious business in 2023.

Before we dive deeper into different data collection techniques and methods, let’s just briefly differentiate between the two main types of data collection – primary and secondary.

Primary vs. Secondary Data Collection

Primary data collection

Primary data (also referred to as raw data) is the data you collect first-hand, directly from the source. In this case, you are the first person to interact with and draw conclusions from such data, which can make it more difficult to interpret.

According to research, about 80% of all data collected by 2025 will be unstructured. In other words, unstructured data is collected as primary data, but nothing meaningful has been done with it yet. Unstructured data needs to be organized and analyzed if it’s going to be used as in-depth fuel for decision-making.

Secondary data collection

Secondary data represents information that has already been collected, structured, and analyzed by another researcher. If you are using books, research papers, statistics, or survey results that were created by someone else, they are considered to be secondary data.

Secondary data collection is much easier and faster than primary. But, on the other hand, it’s often very difficult to find secondary data that’s 100% applicable to your own situation, unlike primary data collection, which is in most cases done with a specific need in mind.

Some examples of secondary data include census data gathered by the US Census Bureau, stock prices data published by Nasdaq, employment and salaries data posted on Glassdoor, all kinds of statistics on Statista, etc. Further along the line, both primary and secondary data can be broken down into subcategories based on whether the data is qualitative or quantitative.

Quantitative vs. Qualitative data

Quantitative Data

This type of data deals with things that are measurable and can be expressed in numbers or figures, or using other values that express quantity. That being said, quantitative data is usually expressed in numerical form and can represent size, length, duration, amount, price, and so on.

Quantitative research is most likely to provide answers to questions such as who? when? where? what? and how many?

Quantitative survey questions are in most cases closed-ended and created in accordance with the research goals, thus making the answers easily transformable into numbers, charts, graphs, and tables.

The data obtained via quantitative data collection methods can be used to conduct market research, test existing ideas or predictions, learn about your customers, measure general trends, and make important decisions.

For instance, you can use it to measure the success of your product and identify which aspects may need improvement, gauge the level of satisfaction of your customers, find out whether and why your competitors are outselling you, or run any other type of research.

As quantitative data collection methods are often based on mathematical calculations, the data obtained that way is usually seen as more objective and reliable than qualitative. Some of the most common quantitative data collection techniques include surveys and questionnaires (with closed-ended questions).

Compared to qualitative techniques, quantitative methods are usually cheaper and it takes less time to gather data this way. Plus, due to a pretty high level of standardization, it’s much easier to compare and analyze the findings obtained using quantitative data collection methods.

Qualitative Data

Unlike quantitative data, which deals with numbers and figures, qualitative data is descriptive in nature rather than numerical. Qualitative data is usually not as easily measurable as quantitative data and can be gained through observation or open-ended survey or interview questions.

Qualitative research is most likely to provide answers to questions such as “why?” and “how?”

As mentioned, qualitative data collection methods mostly consist of open-ended questions and descriptive answers, with little or no numerical value. Qualitative data is an excellent way to gain insight into your audience’s thoughts and behavior (perhaps the ones you identified using quantitative research but weren’t able to analyze in greater detail).

Data obtained using qualitative data collection methods can be used to find new ideas, opportunities, and problems, test their value and accuracy, formulate predictions, explore a certain field in more detail, and explain the numbers obtained using quantitative data collection techniques.

As qualitative data collection methods usually do not involve numbers and mathematical calculations but are rather concerned with words, sounds, thoughts, feelings, and other non-quantifiable data, qualitative data is often seen as more subjective, but at the same time, it allows a greater depth of understanding.

Some of the most common qualitative data collection techniques include open-ended surveys and questionnaires, interviews, focus groups, observation, case studies, and so on.


5 Data Collection Methods

Before we dive deeper into different data collection tools and methods – what are the 5 methods of data collection? Here they are:

  • Surveys, quizzes, and questionnaires
  • Interviews
  • Focus groups
  • Direct observations
  • Documents and records (and other types of secondary data, which won’t be our main focus here)

Data collection methods can further be classified into quantitative and qualitative, each of which is based on different tools and means.

Quantitative data collection methods

1. Closed-ended Surveys and Online Quizzes

Closed-ended surveys and online quizzes are based on questions that give respondents predefined answer options to opt for. There are two main types of closed-ended surveys – those based on categorical and those based on interval/ratio questions.

Categorical survey questions can be further classified into dichotomous (‘yes/no’), multiple-choice questions, or checkbox questions and can be answered with a simple “yes” or “no” or a specific piece of predefined information.

Interval/ratio questions, on the other hand, can consist of rating-scale, Likert-scale, or matrix questions and involve a set of predefined values to choose from on a fixed scale. To learn more, we have prepared a guide on different types of closed-ended survey questions.

Once again, these types of data collection methods are a great choice when looking to get simple and easily analyzable counts, such as “85% of respondents said surveys are an effective means of data collection” or “56% of men and 61% of women have taken a survey this year” (disclaimer: made-up stats).
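As a rough illustration, here is a minimal Python sketch of how closed-ended answers can be turned into the kind of percentages quoted above; the question wording and the response values are made up for the example.

```python
from collections import Counter

# Hypothetical answers to the closed-ended (yes/no) question
# "Are surveys an effective means of data collection?"
responses = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]

counts = Counter(responses)
total = len(responses)

for answer, count in counts.most_common():
    print(f"{answer}: {count / total:.0%} of respondents")
```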

If you’d like to create something like this on your own, learn more about how to make the best use of our survey maker.

Quizzes, too, let you segment your audience into different customer groups and direct a different marketing strategy at each group based on its needs (check out our quiz maker for more details).

Qualitative data collection methods

2. Open-Ended Surveys and Questionnaires

Opposite to closed-ended are open-ended surveys and questionnaires. The main difference between the two is the fact that closed-ended surveys offer predefined answer options the respondent must choose from, whereas open-ended surveys allow the respondents much more freedom and flexibility when providing their answers.

Here’s an example that illustrates the difference: a closed-ended survey might ask “How satisfied are you with our product?” and offer answer options such as “very satisfied,” “satisfied,” “neutral,” and “dissatisfied,” while an open-ended survey would simply ask “What do you think about our product?” and leave space for a free-form answer.

When creating an open-ended survey, keep in mind the length of your survey and the number and complexity of questions. You need to carefully determine the optimal number of questions, as answering open-ended questions can be time-consuming and demanding, and you don’t want to overwhelm your respondents.

Compared to closed-ended surveys, one of the quantitative data collection methods, the findings of open-ended surveys are more difficult to compile and analyze due to the fact that there are no uniform answer options to choose from. In addition, surveys are considered to be among the most cost-effective data collection tools.

3. 1-on-1 Interviews

One-on-one (or face-to-face) interviews are one of the most common types of data collection methods in qualitative research. Here, the interviewer collects data directly from the interviewee. Due to it being a very personal approach, this data collection technique is perfect when you need to gather highly personalized data.

Depending on your specific needs, the interview can be informal, unstructured, conversational, and even spontaneous (as if you were talking to your friend) – in which case it’s more difficult and time-consuming to process the obtained data – or it can be semi-structured and standardized to a certain extent (if you, for example, ask the same series of open-ended questions).

4. Focus groups

The focus group data collection method is essentially an interview method, but instead of being done 1-on-1, here we have a group discussion.

Whenever the resources for 1-on-1 interviews are limited (whether in terms of people, money, or time) or you need to recreate a particular social situation in order to gather data on people’s attitudes and behaviors, focus groups can come in very handy.

Ideally, a focus group should have 3-10 people, plus a moderator. Of course, depending on the research goal and what the data obtained is to be used for, there should be some common denominators for all the members of the focus group.

For example, if you’re doing a study on the rehabilitation of teenage female drug users, all the members of your focus group have to be girls recovering from drug addiction. Other parameters, such as age, education, employment, and marital status, do not have to be similar.


5. Direct observation

Direct observation is one of the most passive qualitative data collection methods. Here, the data collector takes a participatory stance, observing the setting their subjects are in while taking notes, video/audio recordings, photos, and so on.

Due to its participatory nature, direct observation can lead to bias in research, as the participation may influence the attitudes and opinions of the researcher, making it challenging for them to remain objective. Plus, the fact that the researcher is a participant too can affect the naturalness of the actions and behaviors of subjects who know they’re being observed.

Interactive online data collection

Above, you’ve been introduced to 5 different data collection methods that can help you gather all the quantitative and qualitative data you need. Even though we’ve classified the techniques according to the type of data you’re most likely to obtain, many of the methods used above can be used to gather both qualitative and quantitative data.

While an online quiz maker may seem like an innocuous data collection tool, it’s actually a great way to engage your target audience in a way that results in actionable and valuable data. Quizzes can be especially helpful in gathering data about people’s behavior, personal preferences, and more intimate impulses.

You can go for these options:

  • Personality quiz

This type of quiz has been used for decades by psychologists and human resources managers – if administered properly, it can give you a great insight into the way your customers are reasoning and making decisions.

The results can come in various forms – they are usually segmented into groups with similar characteristics. You can use it to find out what your customers like, what their habits are, how they decide to purchase a product, etc.

  • Scored survey

This type of questionnaire sits somewhere between a quiz and a survey – but in this case, you can quantify the result based on your own metrics and needs. For example, you can use it to determine the quality of a lead.

  • Survey

You can use surveys to collect opinions and feedback from your customers or audience. For example, you can use it to find out how old your customers are, what their education level is, what they think about your product, and how all these elements interact with each other when it comes to the customer’s opinion about your business.

  • Test quiz

This type of quiz can help you test the user’s knowledge of a certain topic; it differs from a personality quiz in that its answers are either correct or incorrect.

You can use it to test your products or services. For example, if you are selling language-learning software, a test quiz can give you valuable insight into its effectiveness.

How to make data collection science-proof

If you want to acquire this often highly sensitive information and draw conclusions from it, there are specific rules you need to follow. The first group of rules concerns the scientific methodology of this form of research, and the second concerns legal regulation.

1. Pay Attention to Sampling

Sampling is the first problem you may encounter if you are seeking to research a demographic that extends beyond the people on your email list or website. A sample, in this case, is a group of people taken from a larger population for measurement.

To be able to draw correct conclusions, you have to say with scientific certainty that this sample reflects the larger group it represents.

Your sample size depends on the type of data analysis you will perform and the desired precision of the estimates.

Remember that until recently, users of the internet and e-mail were not truly representative of the general population. This gap has closed significantly in recent years, but the way you distribute your quiz or survey can also limit the scope of your research.

For example, a BuzzFeed-type quiz is more likely to attract a young, affluent demographic that doesn’t necessarily reflect the opinions and habits of middle-aged individuals.

You can use this software to calculate the size of the needed sample. You can also read more about sampling and post-survey adjustments that will help ensure your results are reliable and applicable.
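If you’d rather compute the number yourself, here is a minimal sketch based on Cochran’s sample-size formula with a finite population correction; the 95% confidence level, 5% margin of error, and the population figure are illustrative assumptions.

```python
import math

def sample_size(z=1.96, p=0.5, margin=0.05, population=None):
    """Cochran's formula for estimating a proportion.

    z          z-score for the confidence level (1.96 for ~95%)
    p          expected proportion; 0.5 is the most conservative choice
    margin     desired precision, e.g. 0.05 for +/-5 percentage points
    population optional finite population size for the correction step
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2     # infinite-population estimate
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)     # finite population correction
    return math.ceil(n0)

# e.g. surveying a list of 10,000 newsletter subscribers
print(sample_size(population=10_000))   # roughly 370 respondents needed
```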

2. Ensure a high response rate

Online survey response rates vary and can sometimes be as low as 1%. You want to make sure that you offer potential respondents some form of incentive (for example, a discount on your product, or the entertainment value of a personality quiz).

Response rate is influenced by the interests of participants, the survey structure, communication methods, and assurances of privacy and confidentiality. We will deal with confidentiality below, and here you can learn more about optimizing your quizzes for high response rates.

Now that you know the advantages and disadvantages of online quizzes and surveys, here are the key takeaways for making a high-quality questionnaire.

3. Communicate clearly

Keep your language simple and avoid questions that may lead to confusion or ambiguous answers. Unless your survey or quiz targets a specific group, the language shouldn’t be too technical or complicated.

Also, avoid cramming multiple questions into one. For example, you might ask whether the product is “interesting and useful” and offer “yes” and “no” as the answer options – but the problem is that it could be interesting without being useful and vice versa.

4. Keep it short and logical

Keep your quizzes and surveys as short as possible and don’t risk people opting out of the questionnaire halfway. If the quiz or survey has to be longer, divide it into several segments of related questions. For example, you can group questions in a personality quiz into interests, goals, daily habits, etc. Follow a logical flow with your questions, and don’t jump from one topic to another.

5. Avoid bias

Don’t try to nudge respondents’ answers towards a certain result. We know it feels easier to ask how amazing your product is, but try to stay neutral and simply ask people what they think about it.

Also, make sure that multimedia content in the survey or quiz does not affect responses.

6. Consider respondents’ bias

If you conduct personality quizzes, you may notice that you cannot always expect total accuracy when you ask people to talk about themselves. Sometimes, people don’t have an accurate perception of their own daily activities, so try to be helpful in the way you word the questions.

For example, it’s much easier for them to recall how much time they spend on their smartphone on a daily basis than to ask them to calculate it on a weekly or monthly basis.

Even then you may not get accurate answers, which is why you should cross-check the results against other sources of information.

7. Respect Privacy and Confidentiality

As we previously mentioned, respecting users’ privacy and maintaining confidentiality is one of the most important factors that contribute to high response rates.

Until fairly recently, privacy and data protection laws were lagging decades behind our technological development. It took several major data-breach and data-mining scandals to put this issue on the agenda of governments and legal authorities.

For a good reason – here are some stats showing how Internet users feel about privacy.

  • 85% of the world’s adults want to do more to protect their online privacy
  • 71% of the world’s adults have taken measures to protect their online privacy
  • 1 in 4 Americans are asked to agree to a privacy policy on a daily basis
  • Two-thirds of the world’s consumers think that tech companies have too much control over their data
  • According to consumers, the most appropriate type of collected data is brand purchase history

Many of global users’ concerns were addressed for the first time in the General Data Protection Regulation (GDPR), which came into force on 25 May 2018. It brings the different privacy legislation of European countries under one umbrella of legally binding EU regulation.

Although the law is European, each website that receives European visitors has to comply – and this means everyone. So what are your obligations under GDPR?

  • you have to seek permission to use the customers’ data, explicitly and unambiguously
  • you have to explain why you need this data
  • you have to prove you need this data
  • you have to document the ways you use personal data
  • you have to report any data breaches promptly
  • you have to build accessible privacy settings into your digital products and websites
  • you have to switch privacy settings on by default
  • you have to carry out regular privacy impact assessments

While the new rulebook may seem intimidating at first, in reality it comes down to a matter of business ethics. Think about it in the simplest terms. Sleazily sliding into people’s email inboxes may have its short-term benefits, but in the long run, it amounts to building an email list full of people who are uninterested in your product and irritated by your spam.

Actively seeking permission to send emails to your potential and existing customers is an excellent way to make sure that your list is full of high-quality leads that want to hear or buy from you.

Protecting your customers’ data or going to great lengths to explain how you’re going to use it establishes a long-term relationship based on trust.

What steps will you take to enhance the transparency of your data collection methods?

 

What are Data Collection Methods?

Data collection methods are techniques and procedures used to gather information for research purposes. These methods can range from simple self-reported surveys to more complex experiments and can involve either quantitative or qualitative approaches to data gathering.

Some common data collection methods include surveys, interviews, observations, focus groups, experiments, and secondary data analysis. The data collected through these methods can then be analyzed and used to support or refute research hypotheses and draw conclusions about the study’s subject matter.

Data collection methods play a crucial role in the research process, as they determine the quality and accuracy of the data collected. Here are some of the main reasons why data collection methods matter:

  • Determines the quality and accuracy of collected data.
  • Ensures that the data is relevant, valid, and reliable.
  • Helps reduce bias and increase the representativeness of the sample.
  • Essential for making informed decisions and accurate conclusions.
  • Facilitates achievement of research objectives by providing accurate data.
  • Supports the validity and reliability of research findings.

Methods, techniques and constants for the evaluation of online public access catalogs

Until the emergence of new information and communication technologies (ICT), and the Internet in particular, information systems generally contained tangible resources. With the advent of the Web, collection development became more complex: collections progressively came to include electronic and virtual documents, which required information professionals to face the challenge of rethinking technical processes to meet new requirements capable of handling this documentary complexity (especially from the point of view of structural integration).

This reality demands the reorganization and redesign of essential processes in information systems, particularly for the storage and retrieval of information, in such a way that they facilitate clear and expeditious access to information. In this sense, attention to the methods used for description becomes vitally important, both from a formal and content point of view.

Even though many organizations, work groups, and individuals use the Internet to generate and/or distribute information, and the amount of electronic resources available on the Web has increased substantially in recent years, a good part of these collections, especially those not generated in HTML, remains “invisible” to the general searches currently offered on the Internet. It can therefore be argued that there is a pressing need to access this type of resource through new content-management strategies. Consequently, online catalogs require new specific tags (new metadata sets), new metalanguages, new semantics, and new syntax to achieve efficient search and retrieval.

Methods

The new challenge for information professionals consists of representing not only the constant or explicit concepts of documents, but also changes in how those concepts are understood or used, whether emergent or circumstantial. These changes must also be identified as inputs for the construction of metadata and for the knowledge management process, which would surface new topics of interest to potential users of information systems.

The guarantee of competitiveness and excellence in the provision of online catalog services depends on a new strategic vision of quality evaluation and management, one that identifies the opportunities offered by a scenario in constant transformation and supports new demands for adaptability through continuous improvement in service provision.

METHODS AND TECHNIQUES FOR EVALUATION OF ONLINE CATALOGS

The application of automation to the retrieval of bibliographic records was initially represented by large databases that led to the creation of online catalogs. With the development of information and communications technologies supported by networks, access to records has transcended the doors of libraries and is now possible through remote access to online public access catalogs (OPACs). Their main objective was for end users to conduct online information searches autonomously and independently. Online catalogs were the first information retrieval systems designed to be used directly by the general public, requiring little or no training.

With the widespread use of online catalogs, they have become a dynamic channel of access to constantly growing information resources through the use of networks and the possibilities of hyperlinks.

Although several difficulties in their use still persist, online catalogs are important for cataloguers: they serve as a guide for applying rules and standards when working with bibliographic records, and they also inform measures, grounded in usability and user-centered design, that allow OPACs to be used and exploited in accordance with the needs of the users of information systems.

At present, the analysis and study of users, as well as the creation of products/services that satisfy their needs, is a complex issue, especially if those products and services live in the web environment, and even more so if they are analyzed under the influence of the Web 2.0 philosophy.

The truth is that new generations of users (“2.0 users”) have grown up using computers and with access to all the benefits they offer. Their ways of consuming, accessing, and processing information, as well as their needs and expectations, are therefore different: they require and expect personalized, collaborative, multitasking products and services with immediate response; they assume participatory learning; they prefer non-linear access to information; they prefer graphical representations to written text; and they expect system interfaces to be more intuitive.

These users consume a wide variety of information, but not in a static way, since they in turn become producers of new information, which gives them the significant advantage of the knowledge that is built.

Information designers and professionals must aim to make OPACs systems that improve, promote, and facilitate the use and consumption of information, by incorporating techniques and tools that meet the requirement users value most: ease of use. In this sense, the perspectives taken by studies on improving OPACs differ, and above all they tend to be partial, each focused on a different benefit that OPACs offer.

EVALUATION METHODS

Regarding the methods used for evaluation, there is great terminological diversity in naming the different practices, but they can be systematized into three large divisions: qualitative, quantitative, and comparative.

Quantitative methods

These are methods that focus above all on the collection of statistical information related to the functioning of the institution: efficiency, effectiveness, and cost-effectiveness. They are methods focused on the operation of the systems, but although very necessary, they have the drawback of relying on statistics collected by staff or by automatic systems; the data collected may therefore show a certain deviation, which means the results are not completely reliable.

Qualitative methods

They are assisted by qualitative information collection techniques, such as exchanges of opinions or brainstorming, interviews, and questionnaires, strategies that are much closer to human perceptions. They are mostly used to discover the long-term results, goals, and impact of systems.

Qualitative methods tend to use a natural and holistic approach in the evaluation process. “They also tend to pay more attention to the subjective aspects of human experience and behavior.” These methods must be applied with extreme care, always keeping in mind that satisfaction with the results of the systems will vary with the groups of users who receive them, a very complex element due to the diversity of criteria and perceptions from one group to another, which depends, in turn, on a set of subjective and polycausal factors.

Comparative methods

This division covers methods that compare various systems, processes, products, or services to determine best practices, such as benchmarking. Benchmarking is a process of evaluating products, services, and processes between organizations, in which one organization analyzes how another performs a specific function in order to match or improve on it. The application of these methods allows organizations to achieve higher quality in their products, services, and processes through cooperation, collaboration, and the exchange of information.

Their objective is to correct errors and identify opportunities, learning to provide solutions and make decisions following the patterns of leaders. This type of study is carried out in direct contact with competitors or non-competitors and at the end the results are shared so that each organization creates its own organizational improvement system.


It should be noted that each of the aforementioned methods pursues its own objectives and is shaped by the information collection techniques used in the evaluation process, and that combining several of them can be beneficial for fully meeting the objective of any evaluation. These information collection techniques must be compatible with the evaluation method, so that they can provide the necessary information. There are a large number of information collection techniques, but those most used in evaluation are shown below:

1. Tests.
2. Participant evaluations.
3. Expert evaluations.
4. Surveys.
5. Interviews.
6. Observation of behavior and activities.
7. Evaluation of personnel performance.
8. Analysis of participant diaries.
9. Analysis of historical and current archives.
10. Transaction (log) analysis.
11. Content analysis.
12. Bibliometric techniques, especially citation analysis.
13. Usage files.
14. Anecdotal evidence.

Evaluation activities are still useful even if they do not immediately lead to decision-making. The reflection they generate on the weaknesses they reveal helps define new lines of work focused on resolving the elements that generate difficulties and dissatisfaction, both for employees and for users/customers.

The methods used in OPAC evaluation generally have a broad statistical component, and it could be argued that they are not methods produced entirely within Library and Information Science but are marked by the influence of other fields of knowledge, such as mathematics and computer science, cognitive psychology, HCI, and usability, among other disciplines.

It is worth clarifying that none of these methods excludes the use of another, although they are usually applied depending on what is to be measured in each case, an issue that has contributed to quality being measured from specific perspectives rather than from a comprehensive point of view.

Most authors do not distinguish between methods and techniques for collecting information, and classifications range from the very general to very detailed ones used for particular cases. Among the general studies, one proposal worth analyzing develops four basic methodologies for catalog evaluation:

– Questionnaires: both for users and system workers.

– Group interviews: with the selection of a specific topic, also applied to end users and system personnel.

– System monitoring: both through direct observation of users and the recording of system operations.

– Controlled or laboratory experiments.

On the other hand, there are other more specific and detailed ones that aim to evaluate a particular aspect within online catalogs. In the case of the interface study, the following methods are found:

Methods prior to commercial distribution of the interface

– Expert reviews: based on heuristic evaluations, review by previous recommendations, consistency inspection and user simulations.

– Usability testing: through discounted testing, exploration testing, field testing, validation testing and others.

– Lab test.

– Questionnaires.

– Interviews and discussions with users.

Methods during the active life of the product

– Monitoring of user performance.

– Monitoring and/or telephone or online help.

– Communication of problems.

– News groups.

– User information: through newsletters or FAQs.

Another study, which addresses both the system’s perspective and the user’s and which does distinguish between data collection methods and techniques, proposes the following:

– Analysis of prototypes.

– Controlled experiments.

– Transaction log analysis (TLA).

– Comparative analysis.

– Protocol analysis.

– Expert evaluations of the system.

The first three methods proposed in this study (prototype analysis, controlled experiments, and transaction log analysis) focus on the operation of the system, while the last three (comparative analysis, protocol analysis, and expert evaluations of the system) are mostly used to examine human behavior and its interaction with the system; hence this proposal is considered comprehensive and integrative.

Thus, this same study proposes the following data collection techniques: questionnaires, interviews, transaction log records, protocol records, and verbal protocol records. It also notes the feasibility and relevance of combining several research techniques to obtain better results.

It should be mentioned that any type of data collection method or technique is considered valid for the evaluation of online catalogs, taking into consideration, of course, the objectives pursued with each of them in each case to be evaluated.

The optimum would be a combination of several methods and techniques that provide sufficient data, offering the information closest to reality for subsequent evaluation and decision making, using both quantitative and qualitative data and referring to both the users and the system, in a way that allows a comprehensive appreciation of this product and/or service. Some of the information collection techniques used most frequently in OPAC evaluation studies are described below; their advantages and disadvantages in application are well known.

How do you best account for seasonal variations in your data collection?

Data collection

Data collection is the process of collecting and analyzing information on relevant variables in a predetermined, methodical way so that one can respond to specific research questions, test hypotheses, and assess results. Data collection can be either qualitative or quantitative.

Data is a collection of facts, figures, objects, symbols, and events gathered from different sources. Organizations collect data with various data collection methods to make better decisions. Without data, it would be difficult for organizations to make appropriate decisions, so data is collected from different audiences at various points in time.

For instance, an organization must collect data on product demand, customer preferences, and competitors before launching a new product. If data is not collected beforehand, the organization’s newly launched product may fail for many reasons, such as low demand or an inability to meet customer needs.

Although data is a valuable asset for every organization, it does not serve any purpose until analyzed or processed to get the desired results.

Decoding seasonal variations with linearly weighted moving averages

1. Introduction to seasonal variations

Seasonal variations are a natural phenomenon that affects the economy, weather patterns, consumer behavior, and many other aspects of our lives. These variations occur due to various factors, such as changes in weather, holidays, and cultural practices, that influence data patterns over time. Understanding seasonal variations and their impact on different data sets is essential for decision making in various fields. The linearly weighted moving average (LWMA) is one of the most effective statistical methods for analyzing seasonal variations. This technique analyzes data by assigning different weights to data points based on their position in time.

In this section, we will introduce seasonal variations and their impact on different data sets. We will also provide detailed information on how LWMA can be used to decode seasonal variations. Here are some key points to keep in mind:

1. Seasonal variations occur in many different fields, such as economics, meteorology, and marketing. For example, in the retail industry, sales of winter clothing generally increase during the winter season, and sales of summer clothing increase during the summer season.

2. Seasonal variations can be regular, irregular, or mixed. Regular variations occur at fixed intervals, such as every year or quarter. Irregular variations occur due to unpredictable events, such as natural disasters or economic recessions. Mixed variations occur due to a combination of regular and irregular factors.

3. LWMA can be used to analyze seasonal variations by assigning different weights to data points based on their position in time. For example, if we are analyzing monthly sales data, we can assign higher weights to recent months and lower weights to earlier months.

4. LWMA is particularly effective at handling seasonal variations because it reduces the impact of outliers and emphasizes patterns in the data. For example, if there is a sudden increase in sales due to a promotion, LWMA will assign a lower weight to that data point, which will reduce its impact on the overall analysis.

5. LWMA can be applied to different types of data sets, such as time series data, financial data and stock market data. It is a versatile technique that can provide valuable information on different aspects of seasonal variations.

Understanding seasonal variations and their impact on different data sets is crucial to making informed decisions. LWMA is an effective statistical method that can be used to analyze seasonal variations and provide valuable insights into patterns and trends in data.


2. Understanding Linearly Weighted Moving Averages (LWMA)

Understanding linearly weighted moving averages (LWMA) is an essential component of analyzing time series data. The LWMA is a statistical method that smooths a time series by giving more weight to recent observations and less weight to older ones. It assigns weights to the prices in the time series, with the most recent prices assigned the highest weight and the oldest prices assigned the lowest weight. In this way, the moving average is more responsive to recent price changes, which makes it useful for analyzing trends and forecasting future prices.
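To make the weighting concrete, here is a minimal Python sketch of an n-period LWMA. The function name, window length, and price figures are illustrative assumptions rather than part of any particular charting library.

```python
def lwma(values, n):
    """Linearly weighted moving average of the last n observations.

    The most recent value gets weight n, the one before it n-1, and so on
    down to weight 1 for the oldest value in the window, so the result
    leans toward recent observations.
    """
    if len(values) < n:
        raise ValueError("need at least n observations")
    window = values[-n:]               # oldest ... newest
    weights = range(1, n + 1)          # 1 ... n, aligned with the window
    return sum(w * v for w, v in zip(weights, window)) / sum(weights)

# Hypothetical daily closing prices, oldest first
prices = [101.2, 102.5, 101.8, 103.0, 104.2, 103.7, 105.1]
print(round(lwma(prices, 5), 2))       # weighted toward the most recent days
```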

Here are some key ideas for understanding linearly weighted moving averages:

1. LWMA assigns more weight to recent prices: This means that the moving average line will be more sensitive to recent price changes and less sensitive to older ones. This is because recent prices are more relevant to the current market situation.

Example: Suppose you are analyzing the price of a stock over the last month. LWMA would assign more weight to prices over the past few days, making the moving average more sensitive to recent price changes.

2. It is a customizable tool: Unlike other moving averages, LWMA allows you to customize the weight assigned to each price in the time series. You can assign more weight to certain prices and less to others, depending on the analysis you want to perform.


Example: Suppose you are analyzing the price of a stock over the last year. You can assign more weight to prices in recent months and less weight to prices in the first few months, making the moving average more sensitive to recent price changes.

3. Helpful in identifying trends: LWMA is commonly used to identify trends in time series data. It can help you determine whether a trend is bullish or bearish by analyzing the slope and direction of the moving average.

Example: Suppose you are analyzing the price of a stock over the past year. If the moving average line slopes up, it indicates an uptrend, and if it slopes down, it indicates a downtrend.

Overall, understanding linearly weighted moving averages can be a valuable tool for analyzing time series data, identifying trends, and forecasting future prices. By customizing the weights assigned to each price in the time series, you can create a moving average that is more responsive to recent price changes and more accurate in forecasting future prices.

3. Advantages of LWMA in seasonal data analysis

Seasonal variations in data are a common occurrence in many fields, such as finance, economics, and meteorology. They are caused by factors such as weather, holidays, and production cycles, and can have a significant impact on data analysis and forecasting. To address this problem, analysts often use the linearly weighted moving average (LWMA) method, which is specifically designed to handle seasonal fluctuations in the data. There are several advantages to using LWMA in seasonal data analysis, from its ability to provide accurate trend estimates to its ability to smooth out irregularities in the data.

Here are some of the advantages of LWMA in seasonal data analysis:

1. Accurate trend estimation: One of the main advantages of using LWMA in seasonal data analysis is its ability to provide accurate trend estimates. LWMA assigns greater weights to more recent data points, allowing it to capture the underlying trend in the data with greater precision. This is particularly useful when analyzing seasonal data, where the pattern tends to repeat itself over time.

For example, suppose we want to analyze the sales of a particular product over the past year. If sales tend to increase during the holiday season, LWMA will be able to capture this trend more accurately than methods that do not take seasonal variations into account.

2. Smooth out irregularities: Another advantage of LWMA is its ability to smooth out irregularities in the data. Because LWMA assigns greater weights to more recent data points, it can reduce the impact of outliers and other irregularities in the data. This can help provide a clearer picture of the underlying pattern in the data.

For example, let’s say we are analyzing temperature fluctuations in a particular city over the past year. If there were a particularly cold week in the middle of summer, LWMA could smooth out this irregularity and provide a more accurate representation of the seasonal pattern in the temperature data.

3. Flexibility: LWMA is a flexible method that can be adapted to different types of seasonal data. It can be used to analyze data with different seasonal patterns, such as weekly, monthly or annual patterns. In addition, LWMA can be combined with other methods to improve its precision and effectiveness.

Overall, using LWMA to analyze seasonal data can provide several advantages, including accurate trend estimation, smoothing out irregularities, and flexibility. By using this method, analysts can gain a better understanding of the underlying patterns in the data and make more accurate forecasts and predictions.
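As a rough illustration of the smoothing described above, here is a short sketch that applies a rolling 6-month LWMA to a made-up monthly sales series with a yearly cycle; the figures, the window length, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

# Hypothetical 24 months of sales: a mild upward trend plus a yearly seasonal cycle
months = np.arange(24)
sales = 100 + 2 * months + 15 * np.sin(2 * np.pi * months / 12)

n = 6                                  # window length in months
weights = np.arange(1, n + 1)          # 1, 2, ..., n; the latest month gets weight n

# Rolling LWMA over every full window: a weighted average leaning toward recent months
lwma = [float(np.dot(sales[i - n + 1:i + 1], weights) / weights.sum())
        for i in range(n - 1, len(sales))]

print([round(v, 1) for v in lwma[:3]])   # smoothed values track the trend, not the spikes
```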

What criteria are best for determining the relevance of your data sources?

What Is a Data Source?

Data sources are very important. In data analysis and business intelligence, a data source is a vital component that provides raw data for analysis. A data source is a location or system that stores and manages data, and it can take many different forms. From traditional databases and spreadsheets to cloud-based platforms and APIs, countless types of data sources are available to modern businesses.

Understanding the different types of data sources and their strengths and limitations is crucial for making informed decisions and deriving actionable insights from data. In this article, we will define what a data source is, examine data source types, and provide examples of how they can be used in different contexts.

Information

In today’s world, it is essential to master skills that allow us to manage information appropriately, according to our needs. Being competent in information management is a fundamental factor for the development of our academic life, as well as our professional and even personal life. Therefore, a key factor will be our degree of autonomy in managing information.

The history of access to information has been one of universalization and progressive growth. In recent years, we have witnessed a true information explosion, in which the volume of information of all kinds (journalistic, economic, commercial, academic, scientific, etc.) has exploded to reach unthinkable dimensions, almost always difficult to manage.

Thanks to the development of ICT (information and communication technologies), our capacity to process, store, and transmit information through computers and communications networks has grown enormously, giving rise to the information and knowledge society in which we are immersed.


What are the sources of information?

An information source is understood as any instrument or, in a broader sense, resource, that can serve to satisfy an information need.

The objective of the information sources will be to facilitate the location and identification of documents, thus answering the question: where are we going to look for the information?

It is necessary to consider the type of information sources that will be consulted for class work. The student must select sources that provide information at a level appropriate to his or her needs.

1. Books:

We generally call a book a “scientific, literary or any other work of sufficient length to form a volume, which may appear in print or on another medium.”

Traditionally, the book was a printed document, but today we can find many in electronic format. Depending on the content and structure, various types of books can be established:

  • Manuals: These are works in which the most substantial aspects of a subject are gathered and synthesized. They compile basic data that is easy to consult, and are especially useful for getting started in the fundamentals of a discipline.
  • Monographs: These are studies of a specific topic that help us gain in-depth knowledge of an area. They can provide both basic and exhaustive information on the topic of the work. We can complete the information using specialized magazine articles.
  • Encyclopedias and dictionaries: They offer synthetic and timely information on a topic for quick reference. There are general ones, for all topics, and specialized ones, for a specific subject. Encyclopedia entries are of medium length, while dictionaries contain short definitions.
  • Doctoral theses: These are research works carried out to obtain a doctorate degree. They are original works, not published commercially, exponents of research, with very complete information on a topic of study.

To locate books we will consult the library catalogue.

2. Magazines:

These are periodical publications that appear in successive installments. They are a fundamental source of up-to-date information, necessary to stay up to date on a topic.

We must highlight that electronic publishing has had a great impact on the publication of magazines, and a large number of them are already published in digital format. To locate journal articles we will consult the bibliographic databases.

1. Library catalogs

Catalogs are databases that include descriptions of the documents held by a library. They include the publications that make up the fund or collection of a library: books and magazines, both printed and electronic, sound recordings, videos, etc. The libraries of the University of Valencia have a common catalog called Trobes.

What can we NOT find in the catalogue?

We cannot find MAGAZINE ARTICLES. Articles contained in magazines must be searched for in bibliographic databases.

Through a search system, catalogs allow us to locate documents and find out their availability online. To find books and other resources available through the catalog we can search by different fields:

– Author: search by the last name and first name of an author, the name of a public or private organization

– Title: search by exact title

– Word: search for documents that contain said word in any of the record fields

– Subject: search for records of a specific subject or topic. In Trobes the subjects are in Valencian.

When we have identified the book we are looking for in the catalog, we have to locate it in the library. The catalog provides a call number (shelf mark) for each copy and indicates where in the library (room, cabinet, shelf) we can find it.

The catalog also allows:

– Consult the electronic resources the library subscribes to: electronic magazines, e-books, and databases

– Carry out certain procedures remotely: reservations, renewals, etc.

2. Databases available through the Library

In addition to the documents that we find in the library catalog, we may need to search for more information (press, scientific articles, statistics, legislation, jurisprudence, financial data…) on the topic of our work.

For this, the library has a series of databases.

What is a database?

A database is a collection of data (texts, figures and/or images) belonging to the same context, systematically selected and stored, and organized according to a search program that allows their location and automated retrieval.

The libraries of the University of Valencia subscribe to a wide range of databases where we can locate information. We can access through the following link: http://biblioteca.uv.es/castellano/recursos_electronicos/bases_dades/acces.php

They are usually available online, and we can access them through the university network or from home by setting up a virtual private network (VPN). The collection also includes freely accessible databases.

There are different types of databases, depending on the information they contain: bibliographic, factual, press, etc. You can consult the main ones for your discipline in section 2.4, Sources of information in Social Sciences. Some of the most used are bibliographic databases, which contain references to documents, mainly journal articles, chapters, reports, conference communications, patents, etc. Sometimes they also give access to the full text of the documents and/or a summary.

General characteristics:

  • Records are structured in fields: author, title, source title, type of document, etc.
  • They contain information extracted from primary sources (journals, monographs, conference proceedings…), subjected to documentary analysis (indexing and abstracting).
  • They allow you to search by keywords.
  • They allow you to save information to print it, save it, send it to an email account or to a bibliography manager.

Internet

The Internet provides access to a large and diverse amount of information and resources. However, unlike libraries, which select and evaluate information based on the quality and relevance of each resource, the Internet contains everything: no one is in charge of the content that is hosted, since it is a medium where anyone can self-publish.

It is a participatory environment where anyone can contribute information. And that is where the problem of the network lies: not all the information is true or verified. Therefore, when using the Internet as a source of information, we must be critical and know how to differentiate which resources can help us. We must evaluate the information we find, especially if we want to use it to do a job.

Google

One of the first impulses when you feel a need for information is to turn to Google to satisfy it. Although in some cases this resource is sufficient, keep in mind that not everything that exists is there, and not everything that is there is what you need: there is a lot of important information that does not appear in conventional searches, and much of what does appear only adds noise and confusion.

How does Google work?

Google incorporates an automatic algorithm that evaluates the sites found, so that only the most relevant ones appear, taking into account the terms or keywords entered in the search. Once the results are obtained, these terms appear in bold, so that the user knows why those resources have been selected.

To evaluate the quality of resources, Google uses the number of links that each page receives as a measure. Each link from one page to another works as a “citation.” But not all links are valued equally: links, or citations, that come from pages that have themselves received many links from other pages are worth more. Through this “democratic” system, Google orders the list of results, placing the websites that receive the most links at the top.
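For readers curious about the mechanics, here is a toy sketch of this link-based ranking idea, a simplified PageRank-style power iteration; the pages, links, damping factor, and iteration count are made up for illustration, and this is not Google’s actual system.

```python
# Hypothetical pages and the pages they link to
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

damping = 0.85
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}   # start with equal scores

for _ in range(50):                         # iterate until the scores settle
    new_rank = {}
    for p in pages:
        # Each page passes its score to the pages it links to, split evenly
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

# Pages cited by well-cited pages end up at the top of the list
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```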

The main characteristic of academic search engines is that they only index websites linked to the academic world: journal portals, repositories, academic websites, databases, commercial publishers, scientific societies, online library catalogs, etc.


In the search process, we may come across a wide variety of information on our topic. However, not all of it will have the same value; therefore, we must select appropriate sources of information, taking different aspects into account.

How to best address issues related to respondent fatigue or participant burnout?

Fatigue

Fatigue is a significant problem. In colloquial language, the term “fatigue” refers to the feeling of tiredness after an effort, which can be of a diverse nature and generates demotivation to continue that effort, whether intellectual, work-related, or athletic. Unfortunately, there is no universally accepted definition of fatigue, which makes its nature conceptually complex and ambiguous.

Fatigue can be a consequence of physical or mental effort. This review focuses on fatigue as a state resulting from physical-sports activity, in which both types of effort are usually present, and which is associated with the training load (the training stimulus that disrupts the body’s homeostasis and activates the allostatic mechanisms that allow the state of functional balance to be recovered).


The factors that contribute to fatigue resulting from physical activity arise not only from the physical effort, but also from the concomitant mental load and the results of the task being performed. Among the physiological factors that have been investigated in relation to fatigue, cardiovascular performance, muscular vascular occlusion, efficiency in the use of oxygen and nutrients, neuromuscular fatigue, and the presence of metabolites in the internal environment stand out.

Furthermore, factors located directly in the central nervous system (CNS) intervene in this process, serving to regulate effort and protect the body from damage that could result from overexertion.

However, fatigue also derives from the tactical nature of the activity typical of motor-interaction sports, in which the athlete invests effort that is, on the one hand, cognitive (for decision making) and, on the other, devoted to emotional self-regulation. In this context, mental load, as an element that can influence fatigue, has become an area of research of undeniable importance. In this case, fatigue does not determine the inability to continue the sporting activity, but rather the inability to do so while maintaining an optimal level of performance.

Although experimentation on the factors that influence the appearance of fatigue points to multi-causal models, the scientific literature over-represents physiological and biomechanical mechanisms, to the detriment of those from psychology or neuroscience, which is why an updated review of these aspects is very pertinent.

Concepts of fatigue and mechanisms that contribute to its appearance

The multicausal nature of fatigue has been the subject of study in biomechanics, physiology and psychology, the first two covering its objective nature and the last its subjective and mental nature. This division of the study of fatigue has generated diverse and not always compatible definitions.

The physiological approach defines fatigue as a functional failure of the organism that is reflected in a decrease in performance and that generally originates from excessive energy expenditure or depletion of the elements necessary for its generation. In this sense, most research focuses on muscular aspects, understanding fatigue as a loss of the maximum capacity to generate force or a loss of power production.

However, the physiological explanation of fatigue goes beyond these aspects, making it necessary to also consider the effect that exercise produces on motor units, the internal environment and the CNS.

López-Chicharro and Fernández-Vaquero understand that fatigue can result from the alteration of any of the processes on which muscle contraction depends and appear as a consequence of the simultaneous alteration of several of these processes. This approach is also shared by authors such as Barbany, who distinguishes between fatigue resulting from a failure in central activation and peripheral fatigue.


The central and peripheral mechanisms have generally been studied in isolation, assuming that their combination occurs in a linear manner, which has probably produced biases in the interpretation of the data and in the conclusions obtained. Abbiss and Laursen have carried out a complete review of these models, which include: the cardiovascular/anaerobic model, the energy supply/depletion model, the neuromuscular model, the muscle trauma model, the biomechanical model, the thermoregulation model and, finally, the motivational/psychological model, which focuses on the influence of intrapsychological factors, such as performance expectations or required effort.

Cognitive strategies to manage fatigue

There are many athletes who use various cognitive strategies to influence their performance in competition, based on managing the discomfort caused by effort, delaying the onset of fatigue. Some research has used hypnotic suggestion to selectively modify the level of perceived exertion of participants, in order to identify the potential contributions of higher brain centers towards cardiorespiratory regulation and other peripheral physiological mechanisms. Some of them have shown that cognitive processes can exert a certain influence on the variations caused at a perceptual, and even metabolic, level through these hypnotic suggestions.

Different works analyze the relationship between perceived effort, cognitive processes and the effects they can have on endurance tasks, prompting the development of cognitive strategies for their control. In general, these have been grouped into two main types: associative and dissociative. With the former, the athlete concentrates on the signals received from changes in body state as a consequence of the effort made, while dissociative techniques are based on distracting the athlete with thoughts or mental tasks unrelated to the effort. The distracting effect of these techniques is based on using attentional resources so that the control of bodily sensations is left at an unconscious level.

Some of these works have focused on verifying the degree of effectiveness of different cognitive processing strategies for sports performance. The first antecedents suggest that the level of sports performance could act as a mediator of the effectiveness of the different strategies, since the highest-level athletes in long-term endurance tests tended to preferentially use associative strategies, while lower-level athletes tended to use dissociative ones.

Probably the first work that attempted to verify this possible effect with an experimental design was that of González-Suárez. The results of the experiment revealed greater performance (longer endurance time) when the subjects ran to self-imposed exhaustion using associative strategies. Likewise, those with a higher athletic level kept running for longer than subjects with lower levels. Dissociative strategies also produced a decrease in perceptions of fatigue and physical exertion, while associative strategies tended to increase perceptions of fatigue.

On the other hand, Hutchinson and Tenenbaum conclude, from a cycle ergometer endurance test at 50, 70 and 90% of VO2max, that “attentional focusing was predominantly dissociative during the low-intensity phase of the task, and turned toward predominantly associative as the intensity increased.” This seems to indicate that increasing the intensity of the exercise makes the subject unable to detach from the bodily sensations generated by the exercise. In any case, as Díaz-Ocejo et al. point out, the results are currently not conclusive and it is advisable to approach the research considering other possible variables that mediate the effect of the different cognitive strategies.

Neurocognitive mechanisms of fatigue processing

The afferent information that can alter the rating of perceived exertion (RPE) is very diverse, and it remains to be elucidated how the CNS integrates it and elaborates the sensation of fatigue. Some studies indicate that the nervous structures involved could be located in the insular cortex, the anterior cingulate cortex (medial prefrontal region) and the thalamic regions.

In relation to the distribution of training content

In the same way that the accumulation of physical load throughout training causes the appearance of fatigue and the deterioration of performance, the accumulated effect of mental load contributes to the appearance of fatigue and, in turn, to a decrease in physical and motor performance.

For this reason, in training sessions whose objective focuses on learning new game behaviors, motor responses requiring a high level of coordination, tactical aspects with high cognitive demands, or tasks demanding a high level of emotional self-control or concentration, the tasks pursuing that objective should be placed in the initial part of the session, when the athlete still has most of their physiological, cognitive and psychological resources available.

However, when the objective is not the acquisition of new motor schemes but the implementation of consolidated game actions and behaviors, the activities focused on their development should be placed in the final phase of the training session, just when the accumulation of physical and mental load leads to a state of fatigue that demands self-control from the athlete. That is, we place the execution of those behaviors at the point in training that most closely simulates the situations in which they will have to be deployed in real competition.

If we focus the analysis on the distribution of content throughout a microcycle, for example that of a team that competes on the weekend, the training activities that involve, on the one hand, greater physical effort and, on the other, greater cognitive or emotional self-control should be located in the first part (Monday to Wednesday), reducing the magnitude of the loads in the days before competing to leave the time necessary to guarantee the recovery or supercompensation of the athlete.

In this sense, the evaluation of the athlete’s performance, or control of the training process, which is so advisable as a means to stimulate learning, must be kept away from competition because, as Buceta points out, it can generate stress that would add to the stress the competition itself already produces.

What role does randomization best play in your data collection design?


What is data collection?

Data collection is the process of gathering data for use in business decision-making, strategic planning, research and other purposes. It’s a crucial part of data analytics applications and research projects: Effective data collection provides the information that’s needed to answer questions, analyze business performance or other outcomes, and predict future trends, actions and scenarios.

IT systems regularly collect data on customers, employees, sales and other aspects of business operations when transactions are processed and data is entered. Companies also conduct surveys and track social media to get feedback from customers. Data scientists, other analysts and business users then collect relevant data to analyze from internal systems, plus external data sources if needed. The latter task is the first step in data preparation, which involves gathering data and preparing it for use in business intelligence (BI) and analytics applications.

An overview of randomization techniques: An unbiased assessment of outcome in clinical research

A good experiment or trial minimizes the variability of the evaluation and provides unbiased evaluation of the intervention by avoiding confounding from other factors, which are known and unknown.

Randomization ensures that each patient has an equal chance of receiving any of the treatments under study and generates comparable intervention groups, which are alike in all important aspects except for the intervention each group receives. It also provides a basis for the statistical methods used in analyzing the data. The basic benefits of randomization are as follows: it eliminates selection bias, balances the groups with respect to many known and unknown confounding or prognostic variables, and forms the basis for statistical tests, providing a basis for an assumption-free statistical test of the equality of treatments. In general, a randomized experiment is an essential tool for testing the efficacy of a treatment.

In practice, randomization requires generating randomization schedules, which should be reproducible. Generating a randomization schedule usually involves obtaining random numbers and assigning them to each subject or treatment condition. Random numbers can be generated by computer or can come from the random number tables found in most statistics textbooks.

For simple experiments with a small number of subjects, randomization can be performed easily by assigning random numbers from random number tables to the treatment conditions. However, for large sample sizes, or if restricted or stratified randomization is to be performed, or if an unbalanced allocation ratio will be used, it is better to use computer software such as SAS or the R environment to perform the randomization.
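As a hedged illustration of the computer-based approach, the sketch below (written in Python rather than SAS or R, with an illustrative seed and group labels) generates a reproducible randomization schedule: fixing the seed means the same schedule can be regenerated later, which is what makes it auditable.

import numpy as np

def randomization_schedule(n_subjects, treatments=("A", "B"), seed=2024):
    """Simple randomization schedule that can be reproduced from the seed."""
    rng = np.random.default_rng(seed)                        # fixed seed -> reproducible
    draws = rng.integers(0, len(treatments), size=n_subjects)
    return [treatments[i] for i in draws]

print(randomization_schedule(10))   # rerunning the script yields the identical list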

REASON FOR RANDOMIZATION

Researchers in life science research demand randomization for several reasons. First, subjects in various groups should not differ in any systematic way. In a clinical research, if treatment groups are systematically different, research results will be biased. Suppose that subjects are assigned to control and treatment groups in a study examining the efficacy of a surgical intervention. If a greater proportion of older subjects are assigned to the treatment group, then the outcome of the surgical intervention may be influenced by this imbalance. The effects of the treatment would be indistinguishable from the influence of the imbalance of covariates, thereby requiring the researcher to control for the covariates in the analysis to obtain an unbiased result.

Second, proper randomization ensures no a priori knowledge of group assignment (i.e., allocation concealment). That is, researchers, subjects, patients or participants, and others should not know to which group the subject will be assigned. Knowledge of group assignment creates a layer of potential selection bias that may taint the data. Schulz and Grimes stated that trials with inadequate or unclear randomization tended to overestimate treatment effects by up to 40% compared with those that used proper randomization. The outcome of the research can be negatively influenced by this inadequate randomization.

Statistical techniques such as analysis of covariance (ANCOVA), multivariate ANCOVA, or both, are often used to adjust for covariate imbalance in the analysis stage of the clinical research. However, the interpretation of this post adjustment approach is often difficult because imbalance of covariates frequently leads to unanticipated interaction effects, such as unequal slopes among subgroups of covariates.

One of the critical assumptions in ANCOVA is that the slopes of regression lines are the same for each group of covariates. The adjustment needed for each covariate group may vary, which is problematic because ANCOVA uses the average slope across the groups to adjust the outcome variable. Thus, the ideal way of balancing covariates among groups is to apply sound randomization in the design stage of a clinical research (before the adjustment procedure) instead of post data collection. In such instances, random assignment is necessary and guarantees validity for statistical tests of significance that are used to compare treatments.
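To make the equal-slopes point concrete, here is a minimal, purely illustrative sketch with synthetic data (the variable names and numbers are assumptions, not taken from the article): it fits an outcome on treatment group, a covariate, and their interaction using the statsmodels formula API. A clearly non-zero interaction term indicates that the regression slopes differ between groups, so a plain ANCOVA adjustment based on a single average slope would be questionable.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "group": rng.choice(["control", "treatment"], size=n),
    "age": rng.normal(55, 10, size=n),
})

# Synthetic outcome with a mild group-by-age interaction, for illustration only.
treated = (df["group"] == "treatment").astype(float)
df["outcome"] = (
    2.0 + 0.5 * treated + 0.10 * df["age"]
    + 0.05 * df["age"] * treated
    + rng.normal(0, 1, size=n)
)

# "group * age" expands to group + age + group:age; the interaction term is the
# check on the ANCOVA equal-slopes assumption.
model = smf.ols("outcome ~ group * age", data=df).fit()
print(model.pvalues.filter(like=":age"))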


TYPES OF RANDOMIZATION

Many procedures have been proposed for the random assignment of participants to treatment groups in clinical trials. In this article, common randomization techniques, including simple randomization, block randomization, stratified randomization, and covariate adaptive randomization, are reviewed. Each method is described along with its advantages and disadvantages. It is very important to select a method that will produce interpretable and valid results for your study. Use of online software to generate randomization code using block randomization procedure will be presented.

Simple randomization

Randomization based on a single sequence of random assignments is known as simple randomization. This technique maintains complete randomness in the assignment of a subject to a particular group. The most common and basic method of simple randomization is flipping a coin. For example, with two treatment groups (control versus treatment), the side of the coin (i.e., heads = control, tails = treatment) determines the assignment of each subject. Other methods include using a shuffled deck of cards (e.g., even = control, odd = treatment) or rolling a die (e.g., 3 or below = control, above 3 = treatment). A random number table found in a statistics book or computer-generated random numbers can also be used for simple randomization of subjects.

This randomization approach is simple and easy to implement in clinical research. In large trials, simple randomization can be trusted to generate similar numbers of subjects among groups. However, it can be problematic in clinical research with relatively small sample sizes, where it may result in unequal numbers of participants among groups.
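A minimal sketch of that small-sample problem, assuming a hypothetical 20-subject trial and using only Python’s standard library (none of this comes from the article):

import random
from collections import Counter

random.seed(1)   # illustrative seed

# "Coin flip" assignment for a small trial of 20 subjects.
assignments = [random.choice(["control", "treatment"]) for _ in range(20)]
print(Counter(assignments))   # with so few subjects, the two group sizes are often unequal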

Block randomization

The block randomization method is designed to randomize subjects into groups that result in equal sample sizes. This method is used to ensure a balance in sample size across groups over time. Blocks are small and balanced with predetermined group assignments, which keeps the numbers of subjects in each group similar at all times. The block size is determined by the researcher and should be a multiple of the number of groups (i.e., with two treatment groups, block size of either 4, 6, or 8). Blocks are best used in smaller increments as researchers can more easily control balance.

After block size has been determined, all possible balanced combinations of assignment within the block (i.e., equal number for all groups within the block) must be calculated. Blocks are then randomly chosen to determine the patients’ assignment into the groups.
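A possible permuted-block implementation in Python, under the assumption of two groups and a block size of 4 (the function name, seed and labels are illustrative):

import random

def block_randomization(n_subjects, block_size=4, groups=("A", "B"), seed=7):
    """Permuted-block schedule: every block contains each group an equal number of times."""
    assert block_size % len(groups) == 0, "block size must be a multiple of the number of groups"
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_subjects:
        block = list(groups) * (block_size // len(groups))
        rng.shuffle(block)          # randomly order the assignments within the block
        schedule.extend(block)
    return schedule[:n_subjects]

print(block_randomization(10))      # group counts can never differ by more than half a block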

Although balance in sample size may be achieved with this method, groups may be generated that are rarely comparable in terms of certain covariates. For example, one group may have more participants with secondary diseases (e.g., diabetes, multiple sclerosis, cancer, hypertension, etc.) that could confound the data and may negatively influence the results of the clinical trial. Pocock and Simon stressed the importance of controlling for these covariates because of serious consequences to the interpretation of the results. Such an imbalance could introduce bias in the statistical analysis and reduce the power of the study. Hence, sample size and covariates must be balanced in clinical research.


Stratified randomization

The stratified randomization method addresses the need to control and balance the influence of covariates. This method can be used to achieve balance among groups in terms of subjects’ baseline characteristics (covariates). Specific covariates must be identified by the researcher who understands the potential influence each covariate has on the dependent variable. Stratified randomization is achieved by generating a separate block for each combination of covariates, and subjects are assigned to the appropriate block of covariates. After all subjects have been identified and assigned into blocks, simple randomization is performed within each block to assign subjects to one of the groups.
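A rough sketch of that two-step logic in Python, assuming hypothetical strata defined by sex and an age band (the subject list, block size and names are illustrative):

import random

def stratified_randomization(subjects, block_size=4, groups=("A", "B"), seed=11):
    """Assign each subject within its own stratum using permuted blocks."""
    rng = random.Random(seed)
    open_blocks = {}                      # one running block per stratum
    assignment = {}
    for subject_id, stratum in subjects:
        if not open_blocks.get(stratum):  # start a fresh shuffled block when needed
            block = list(groups) * (block_size // len(groups))
            rng.shuffle(block)
            open_blocks[stratum] = block
        assignment[subject_id] = open_blocks[stratum].pop()
    return assignment

# Hypothetical subjects tagged with a (sex, age-band) stratum.
subjects = [(1, ("F", "<50")), (2, ("M", "<50")), (3, ("F", "<50")), (4, ("F", "50+"))]
print(stratified_randomization(subjects))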

How will you deal with unexpected challenges or obstacles during data collection?


What is data collection?

Data collection is the process of gathering data for use in business decision-making, strategic planning, research and other purposes. It’s a crucial part of data analytics applications and research projects: Effective data collection provides the information that’s needed to answer questions, analyze business performance or other outcomes, and predict future trends, actions and scenarios.

In businesses, data collection happens on multiple levels. IT systems regularly collect data on customers, employees, sales and other aspects of business operations when transactions are processed and data is entered. Companies also conduct surveys and track social media to get feedback from customers. Data scientists, other analysts and business users then collect relevant data to analyze from internal systems, plus external data sources if needed. The latter task is the first step in data preparation, which involves gathering data and preparing it for use in business intelligence (BI) and analytics applications.

For research in science, medicine, higher education and other fields, data collection is often a more specialized process, in which researchers create and implement measures to collect specific sets of data. In both the business and research contexts, though, the collected data must be accurate to ensure that analytics findings and research results are valid.

Some observations on the challenges of digital transformation research in the business sector

Since digital transformation is an applied field and not purely theoretical, collaboration with companies during research is essential. However, such research activities are typically subject to two main types of challenges, one arising from the data collection process and another from the publication process. Below, I will take a closer look at these two obstacles and offer solutions.

Challenges in the data collection process

Trust is the fundamental basis for successful collaboration between companies and researchers. However, creating the trust necessary to establish that initial connection can be difficult, especially when the parties do not know each other. Companies tend to refuse to collaborate with external researchers when the benefit and/or form of collaboration is unclear.

However, even in cases where a minimum trust has been established, companies often have reservations about disclosing their most sensitive and specific data. They may want to avoid falling into the hands of competitors or may not want to speak publicly about their failures. This resistance represents a big problem for researchers, since these insights are important for the general understanding of the underlying problem and would allow other professionals to learn from them. Withholding certain data also prevents general understanding of the object of research.

Another key challenge in terms of collaboration is often creating a common timeline. In the business context, decisions can sometimes be made randomly, and deadlines are usually short. This does not always correspond to the requirements that researchers must meet in their environment. For example, for professionals without academic training, it is often problematic to understand that publication processes can take several years.


Challenges in the publishing process

For many researchers, publishing studies on digital transformation is often a difficult process due to the lack of theoretical foundations and development. While conclusions may be practically relevant, their integration into the body of knowledge and their implications for research are not always clearly defined.

As research with companies is often carried out on a small scale, it can be difficult to ensure its generalisability or replicability. At this point, therefore, it is necessary to anticipate a possible selection bias that could call into question the representativeness of the results; this concern is normally associated with the suitability of interviewees who, for various reasons, may not be able to give opinions on the different functions of the company or on the company as a whole.

Some suggestions

In view of these frequent problems in the data collection and publication processes, some recommendations are made below:

In general, researchers should strive to establish long-term collaborations with companies, not only because it can reinforce mutual trust, but also because it could improve the efficiency of many collaborative processes. To this end, it might be useful to jointly create a long-term plan. Larger collaborative initiatives can be complemented by a more institutionalized approach, for example through regular stakeholder meetings.

Certainly, the key to success in establishing such partnerships is to highlight the benefits that the company can obtain. Only by sharing the benefits will companies commit to supporting researchers in the long term and assuming the additional costs that this may entail. The potential benefits of collaboration can be justified, not only by providing external expertise and methodological support, but also, for example, by facilitating better access to universities’ knowledge resources or to high-potential students.

Transparency is also crucial to establishing a relationship of trust. This should apply not only to operational matters, but also to the objectives pursued by both parties, including clear definition of roles and responsibilities and open and reliable communication between the parties. Researchers should inform companies of interim results and proactively share other issues of interest or potential project ideas that could also stimulate collaboration.

Whenever sensitive data is involved, a confidentiality and non-disclosure agreement can be advantageous for both parties. In this way, researchers will have a more complete and reliable view of the object of the investigation, while the company will ensure the protection of its sensitive data. From the researcher’s perspective, although access to that sensitive data may be crucial, not all information needs to be published, and data anonymity or publication embargo periods may mean that it can be published without violating the agreement. However, since unexpected changes in the research environment are common in companies, researchers must have a Plan B.

When considering publication, it is advisable to develop a clear theoretical basis during an early stage of research planning, without neglecting the generalizability of the practical problem. Researchers must also identify the most appropriate publication options. Journals that are more practitioner-oriented may offer advantages in terms of the length of the publication process, as well as a potentially more suitable target audience.

To ensure the scientific rigor of the research, it is advisable to select an adequate number of respondents within the companies. It is especially recommended to triangulate results with external sources (for example, annual reports or newspaper articles) to reduce potential respondent bias. Researchers should also strive to make the selection of their respondents and companies as transparent and legitimate as possible. Detailed documentation of the research process and underlying methodology will further increase reviewers’ confidence. While small-scale exploratory studies are particularly suitable for new areas of research, large-scale quantitative studies can be a good opportunity to verify the generalizability of promising initial results.


Conclusion

Research on the digital transformation of “living objects” can sometimes be fraught with difficulties, but if researchers prepare well and take the above recommendations into account, they can overcome the double challenge of such efforts.

What steps will you take to ensure the privacy of participants in your data collection?


Data collection

Data collection is very important. Data is a collection of facts, figures, objects, symbols, and events gathered from different sources. Organizations collect data with various data collection methods to make better decisions. Without data, it would be difficult for organizations to make appropriate decisions, so data is collected from different audiences at various points in time.

For instance, an organization must collect data on product demand, customer preferences, and competitors before launching a new product. If data is not collected beforehand, the organization’s newly launched product may fail for many reasons, such as less demand and inability to meet customer needs.

Although data is a valuable asset for every organization, it does not serve any purpose until analyzed or processed to get the desired results.

Data collection methods are techniques and procedures used to gather information for research purposes. These methods can range from simple self-reported surveys to more complex experiments and can involve either quantitative or qualitative approaches to data gathering.

Some common data collection methods include surveys, interviews, observations, focus groups, experiments, and secondary data analysis. The data collected through these methods can then be analyzed and used to support or refute research hypotheses and draw conclusions about the study’s subject matter.

Privacy and information rights in the information society and the ICT environment

The human rights that have been affirmed in the Constitutions of different countries are, in accordance with the theory of guaranteeism, those that can be considered fundamental. The Mexican Constitution, in a decree published in the Official Gazette of the Federation on June 10, 2011, renamed its first chapter “On human rights and their guarantees”; therefore, when reference is made to the constitutional recognition of these rights, we will take this point into account.

The Mexican Constitution recognizes as human or fundamental rights related to information, and therefore to the information society, mainly the following: the right to information (article 1) and the protection of personal data (article 16), in addition to freedom of expression and of the press (article 7) and the inviolability of communications (article 16). Copyright and intellectual property rights are mentioned in the fundamental norm with the indication that their existence should not be considered a monopoly (article 28).

Therefore, these rights, together with some others linked to personal information, such as the rights to intimacy, privacy, honor and one’s own image (which, although not directly recognized in the Constitution, are recognized through the international treaties signed by the country), are the ones we will address in this work.

The reasons for this delimitation (which could seem very broad) are based on the fact that there is a connection between all of them and that, given the growing use of ICT (which supports the development of the information society), they can be violated jointly or collaterally and not in isolation. An example of this is the connection between the right to information and freedom of expression, on the one hand, and the right to privacy, on the other. On several occasions they collide, and on other occasions they almost complement each other.

The analysis will be carried out by describing the regulation or protection that exists in Mexico of these rights and comparing them in some cases with the norms of other countries or regions. All this so that through a brief exercise of lege ferenda we can schematically detect the challenges that remain pending in Mexico.

Likewise, these rights will be discussed in light of the challenges posed by the information and knowledge society (SIC), especially in terms of their protection, since, at the same time that these rights are essential to and at the core of that society, their impact also places them at a level of permanent risk, which increases due to the lack of effective and timely legal protection or self-regulation.

The importance of the information security measures that some countries have attempted or suggested adopting will also be pointed out: measures intended to control the flow of information that society receives through the network, together with their benefits and harms with respect to rights such as information or privacy.

On the other hand, in relation to universal service, which we will also discuss, it must be said that the doctrine refers to it as one of the ways to make other fundamental rights a reality, such as the right to information or to access the Internet or the SIC. Universal service appears in the telecommunications legislation of various countries, although not in the case of Mexico, whose Federal Law for this sector speaks only of social coverage, as we will see in due course.

Fundamental rights related to personal information

Various fundamental and personality rights are related to each other, but they are distinguishable, so it is necessary to make a distinction between them. In this way, linked but not identical rights must be listed, such as the rights to honor, to one’s own image, to intimacy or privacy, to data protection, to the inviolability of the home and to the secrecy of communications. However, although the legal good that each of them protects is different, they cannot be treated in isolation, and even less so when they are analyzed within the framework of a SIC that interconnects many aspects.

The legal framework of personality rights also has a relationship with a principle of law recognized in the Declaration of Human Rights, which is that of human dignity. The European Community has elevated this to a fundamental legal good and, therefore, taking into account the large amount of personal information that circulates on the networks, it is evident that the situation resulting from this may specifically affect this good.

The right to data protection is closely linked to that of intimacy and privacy, but it enjoys its own autonomy (according to jurisprudential interpretation). Although the right to privacy was derived from the recognition of personal freedom in the first generation of rights, it was not until the third generation that, in “response to the phenomenon of the so-called ‘contamination of freedoms’ (liberties’ pollution)”, the right to privacy gained greater prominence.

This forced it to expand its spectrum through the recognition of new aspects, so that it now has a set of rights incorporated into it, such as the right to honor, to one’s own image, to private life (in its broadest sense), to the protection of personal data and even, for one sector of the doctrine, to computer freedom.

Thus, the right to the protection of personal data is built on the right to privacy and, in addition to implying the obligation of the State to guarantee the protection of personal information contained in archives, databases, files or any other medium, whether documentary or digital, grants the owner of such information the right to control over it, that is, to access, review, correct and demand the omission of personal data that a public or private entity has in its possession.

This right, in accordance with what we mentioned before, and according to GALÁN, is also linked to constitutional and legal rights or principles of great value, such as human dignity, individual freedom, self-determination and the democratic principle. Therefore, the aforementioned author maintains:

The protection of personal data, even recognizing the dynamism of its objective content derived from technological changes, guarantees the person a power of control (of positive content) over the capture, use, destination and subsequent traffic of personal data. Therefore, this right covers those data that are relevant to the exercise of any person’s rights, whether or not they are constitutional and whether or not they relate to honor, ideology, or personal and family privacy.

For its part, the right to honor, to one’s own image and even the constitutional guarantees of inviolability of the home and the secrecy of private communications, are closely related to personal information, since they all refer to information related to people, to the physical appearance of a person (image), to that contained within their home, or in the communications they issue.


Legal recognition of fundamental rights relating to personal information

As we mentioned before, and according to the theory of fundamental rights (particularly guaranteeism, as developed by Luigi FERRAJOLI), the human rights that have been constitutionally affirmed are those that can be defined as fundamental. One of the essential attributes of these rights, according to their origin and their inspiring philosophical elements, is their universality. Hence, they appear in international instruments such as the Universal Declaration of Human Rights (UDHR) of 1948 and other similar ones, even though the names of these other legal instruments do not include the adjective “universal”.

In this sense, universality carries a strong naturalist influence of the first constitutionalism. Thus, it was thought that if the rights stated were, precisely, natural, then they had to be recognized for all people, taking into account that they all carry the same “nature.” In the words of RIALS, cited by CARBONELL, “if there exists a rational natural order knowable with evidence, it would be inconceivable that it would be consecrated with significant variants depending on the latitudes.”

From that perspective, we could say that in Mexican positive law the right to the protection of personal data and the guarantees of the inviolability of the home and the secrecy of private communications are expressly recognized in the Constitution (article 16), but not the rights to intimacy, privacy, honor and one’s own image, as will be specified below.

Direct recognition of the right to the protection of personal data is made in article 16 of the Constitution: a reform published in the Official Gazette of the Federation on June 1, 2009 incorporated into its second paragraph the right of every person to the protection of their personal data and to the access, rectification and cancellation of such data, as well as to express their opposition to its processing.

Likewise, that paragraph of article 16 established the terms for the exercise of this right and the cases of exception to the principles that govern data processing (for reasons of national security, provisions of public order, public safety and health, or to protect the rights of third parties), to be set out in the law enacted on the matter (which took place the following year).

The Federal Law on Protection of Personal Data Held by Private Parties (LFPDPPP) of 2010 is the legislation that develops the constitutional precept just cited, and in its text personal data is defined as “information referring to an identified or identifiable person”, thus aligning, so to speak, with the most common international definition and, in particular, with that of the Spanish standard on the matter.

Evidently, Mexican legislation is concerned with defining the principles and criteria to make this right effective and the procedures to put it into effect. The LFPDPPP Regulations develop all these areas more fully.

It should also be mentioned that, several years earlier, there was already legislation regulating some aspects of the processing of personal data, although it applies only to the public sphere. This is the Federal Law on Transparency and Access to Government Public Information, published in the aforementioned official gazette on June 11, 2002, which defines, in its article 3, section II, what must be understood as personal data for the purposes of that Law, adjusting closely to what the legislation applicable to privately held files would later reflect.

However, although these specific developments exist, as we said, the rights to privacy and intimacy are not expressly mentioned in the fundamental Mexican norm. Nevertheless, their recognition could be understood through a lato sensu interpretation of the first paragraph of article 16 of the Constitution, where it states:

“No one may be disturbed in his person, family, domicile, papers or possessions, except by virtue of a written order from the competent authority that states and justifies the legal cause of the proceeding.” Indeed, some protection for these rights can be derived from this, although it is necessary to mention that the rest of the content of this paragraph basically refers to the procedural field. The same happens with the content of article 7 of the Constitution, which establishes respect for private life as a limit to freedom of the press.

In addition to the above, we must say that even if there is a lack of constitutional recognition of the aforementioned human rights, this is currently not an obstacle to claiming their protection and exercise, since they can be invoked through conventional (treaty-based) means, as currently established in article 1 of the Constitution. That article, as we mentioned before, stipulates that all people shall enjoy the human rights recognized in the Constitution itself and in the international treaties to which Mexico is a party.

How do you best ensure consistency of data collected over time?


Data collected

Collected data is very important. Data is a collection of facts, figures, objects, symbols, and events gathered from different sources. Organizations collect data with various data collection methods to make better decisions. Without collected data, it would be difficult for organizations to make appropriate decisions, so data is collected from different audiences at various points in time.

For instance, an organization must collect data on product demand, customer preferences, and competitors before launching a new product. If data is not collected beforehand, the organization’s newly launched product may fail for many reasons, such as less demand and inability to meet customer needs. The data collected is used to improve the offering or target customers.

Although data is a valuable asset for every organization, it does not serve any purpose until analyzed or processed to get the desired results.

Validity and reliability in qualitative methodology

In current academic circles, which are increasingly using qualitatively oriented methods and techniques for their different types of research, a difficulty related to the validity and reliability of their results has repeatedly arisen.

In general, the concepts of validity and reliability that reside in the minds of a large majority of researchers continue to be those used in the traditional positivist epistemological orientation, long since superseded in the second half of the 20th century. From here a conflict arises, since qualitative methodology adopts, as the basis and fundamental postulate of its theory of knowledge and science, the postpositivist epistemic paradigm.

The postpositivist paradigm became established in the academic field after many studies presented at international symposia on the philosophy of science (see Suppe, 1977, 1979) in which the inherited conception (logical positivism) was definitively called into question and, “from that moment on, was abandoned by almost all epistemologists” (Echeverría, 1989, p. 25), due, as Popper (1977, p. 118) points out, to its insurmountable intrinsic difficulties.

Obviously, it is not enough for these conclusions to be reached at this high scientific level for them to be immediately adopted in practice by the majority of researchers, just as the heliocentric ideas of Copernicus and Galileo were not fully adopted until a century later by the illustrious astronomers of the universities of Bologna, Padua and Pisa. According to Galileo (1968), this required “changing people’s heads, which only God could do” (p. 119).

Postpositivist epistemology shows that, in the cognitive process of our mind, there is no direct relationship between empirical images (visual, auditory, olfactory, etc.) and the external reality to which they refer; the relationship is always mediated and interpreted by the personal and individual horizon of the researcher: his or her values, interests, beliefs, feelings, etc. For this same reason, the traditional positivist concepts of validity (as a physiological mind-thing relationship) and reliability (as a repetition of the same mental process) must be reviewed and redefined.


 Epistemological basis for a redefinition of Validity and Reliability 

 Systemic ontology

When an entity is a composition or aggregation of elements (a diversity of unrelated parts), it can, in general, be studied and measured appropriately under the guidance of the parameters of traditional quantitative science, in which mathematics and probabilistic techniques play the main role; when, on the other hand, a reality is not a juxtaposition of elements, but rather its “constituent parts” form an organized totality with strong interaction among themselves, that is, they constitute a system, then its study and understanding requires capturing the internal dynamic structure that characterizes it and, to do so, requires a structural-systemic methodology.

Bertalanffy had already pointed out that “general systems theory – as he originally conceived it and not as it has been disseminated by many authors that he criticizes and disavows (1981, p. 49) – was destined to play a role analogous to that played by the Aristotelian logic in the science of antiquity” (Thuillier, 1975, p. 86).

There are two basic kinds of systems: the linear and the non-linear. Linear systems  do not present “surprises”, since they are fundamentally “aggregate”, due to the little interaction between the parts: they can be decomposed into their elements and recompose again, a small change in an interaction produces a small change in the solution, determinism is always present and, by reducing the interactions to very small values, the system can be considered to be composed of independent or linearly dependent parts.

The world of non-linear systems, on the other hand, is totally different: it can be unpredictable, violent and dramatic; a small change in a parameter can shift the solution little by little and then, suddenly, change it to a totally new type of solution, as when “quantum leaps” occur in quantum physics, absolutely unpredictable events that are not controlled by causal laws but only by the laws of probability.

These non-linear systems must be grasped from within and their situation must be evaluated in parallel with their development. Prigogine claims (1986) that the non-linear world contains much of what is important in nature: the world of dissipative structures.

Well, our universe is basically made up of non-linear systems at all levels: physical, chemical, biological, psychological and sociocultural.

If we observe our environment we see that we are immersed in a world of systems. When considering a tree, a book, an urban area, any device, a social community, our language, an animal, the firmament, in all of them we find a common feature: they are complex entities, formed by parts in mutual interaction, whose identity results from an adequate harmony between its constituents, and endowed with their own substantivity that transcends that of those parts; In short, it is about what, in a generic way, we call systems (Aracil, 1986, p. 13). Hence, von Bertalanffy (1981) maintains that “from the atom to the galaxy we live in a world of systems” (p. 47).

According to Capra (1992), quantum theory demonstrates that “all particles are dynamically composed of one another in a self-consistent manner, and, in that sense, it can be said that they ‘contain’ each other.” In this way, physics (the new physics) is a model science for the new concepts and methods of other disciplines. In the field of biology, Dobzhansky (1967) has pointed out that the genome, which comprises both regulatory and operant genes, works as an orchestra and not as a set of soloists.

Köhler (1967), for psychology, likewise used to say that “in the structure (system) each part dynamically knows each one of the others.” And Ferdinand de Saussure (1931), for linguistics, stated that “the meaning and value of each word is in the others”, that the system is “an organized totality, made of supportive elements that can only be defined in relation to each other depending on their place in this totality.”

If the significance and value of each element of a dynamic structure or system is closely related to that of the others, if everything is a function of everything, and if each element is necessary to define the others, it cannot be seen, understood or measured “in itself,” in isolation, but only through the position or role it plays in the structure. Thus, Parsons points out that “the most decisive condition for a dynamic analysis to be valid is that each function and problem refer continuously and systematically to the state of the system considered as a whole” (in Lyotard, 1989, p. 31).

The need for a proper approach to dealing with systems has been felt in all fields of science. Thus a series of related modern approaches were born, such as, for example, cybernetics, computer science, set theory, network theory, decision theory, game theory, stochastic models and others; and, in practical application, systems analysis, systems engineering, the study of ecosystems, operations research, etc.

Although these theories and applications differ in some initial assumptions, mathematical techniques and goals, they nevertheless coincide in dealing, in one way or another and according to their area of interest, with “systems” and “organization”; that is, they agree in being “systems sciences” that study aspects not addressed until now and problems of the interaction of many variables, organization, regulation, choice of goals, etc. They all seek the “systemic structural configuration” of the realities they study.

In a system there is a set of interrelated units in such a way that the behavior of each part depends on the state of all the others, since they are all found in a structure that interconnects them. Organization and communication in the systems approach challenges traditional logic, replacing the concept of energy with that of information, and that of cause-effect with that of structure and feedback.

In living beings, and especially in human beings, there are structures of a very high level of complexity, made up of systems of systems whose understanding defies the acuity of the most privileged minds; these systems constitute a “physical-chemical-biological-psychological-cultural and spiritual” whole.

Only referring to the biological field, we talk about the blood system, respiratory system, nervous system, muscular system, skeletal system, reproductive system, immune system and many others. Let’s imagine the high level of complexity that is formed when all these systems interrelate and interact with all the other systems of a single person and, even more so, of entire social groups.

Now, what implications does the adoption of the systemic paradigm have for the cultivation of science and its technology? It completely changes the foundations of the entire scientific edifice: its bases, its conceptual structure and its methodological scaffolding. This is the path that the methodologies inspired by hermeneutic approaches, the phenomenological perspective and ethnographic orientations, that is, qualitative methodologies, try to follow today.

1.2. Positivist validity and reliability

Traditional positivist literature defines different types of validity (construct validity, internal validity, external validity), but they all try to verify whether we actually measure what we propose to measure. Likewise, this epistemological orientation seeks to determine a good level of reliability, that is, the possibility of repeating the same research with identical results. All these indicators have a common denominator: they are calculated and determined by means of “an isolated measure, independent of the complex realities to which they refer.”


Construct validity (the validity of hypothetical constructs), which is the most important, tries to establish an operational measure for the concepts used; in the psychological field, for example, the instrument would measure the isolated psychological property or properties that underlie the variable. This validity is not easy to assess, since it is embedded in the scientific framework of the research and its methodology, which are what give it meaning.

Internal validity is specifically related to establishing or finding a causal or explanatory relationship; that is, whether event x leads to event y, excluding the possibility that y is caused by event z. This logic is not applicable, for example, to a descriptive or exploratory study (Yin, 2003, p. 36).

External validity tries to verify whether the results of a given study are generalizable beyond its limits. This requires that there be a homology or, at least, an analogy between the sample (studied case) and the universe to which it is intended to be applied.

Some authors refer to this type of validity with the name of content validity, since they define it as the representativeness or sampling adequacy of the content that is measured with the content of the universe from which it is extracted (Kerlinger, 1981a, p. 322).

Likewise, reliability aims to ensure that a researcher, following the same procedures described by another previous researcher and conducting the same study, can reach the same results and conclusions. Note that this is a redoing of the same study, not a replica of it.

1.3. Critical analysis of positivist criteria

All these indicators ignore the fact that each human reality or entity, be it a thought, a belief, an attitude, an interest, a behavior, etc., is not an isolated entity, but rather receives its meaning or significance, that is, it is configured as such, by the type and nature of the other elements and factors of the system or dynamic structure in which it is inserted and by the role and function it plays in it; all of which can change over time, since these configurations are never static. An isolated element can never be adequately conceptualized or categorized, since it may have many meanings according to the constellation of factors or the structure from which it comes.

If we delve deeper into the “parts-whole” phenomenon, and focus more closely on its epistemological aspect, we will say that there are two modes of intellectual apprehension of an element that is part of a totality. Michael Polanyi (1966) puts it this way:

…we cannot understand the whole without seeing its parts, but neither can we see the parts without understanding the whole… When we understand a certain series of elements as part of a whole, the focus of our attention moves from the hitherto uncomprehended details to the understanding of their joint meaning.

This passage of attention does not make us lose sight of the details, since a whole can only be seen by seeing its parts, but it completely changes the way we apprehend the details. Now we apprehend them in terms of the whole on which we have focused our attention. I will call this subsidiary apprehension of details, as opposed to the focal apprehension that we would employ to attend to the details themselves, not as parts of the whole (pp. 22-23).

Unfortunately, analytical philosophy and its positivist orientation followed the advice that Descartes puts as a guiding idea and as a second maxim, in the Discourse on Method: “fragment every problem into as many simple and separate elements as possible.” This orientation has systematically accepted the (false) assumption that total reality would be captured by dismembering it (disintegrative analysis) into its different components.

This approach constituted the conceptual paradigm of science for almost three centuries; but it breaks or ignores the set of links and relationships that each human entity, and sometimes even the same physical or chemical entities, has with the rest. And that rest or context is precisely what gives it the nature that constitutes it, its characteristics, its properties and its attributes.

This decontextualization of realities makes them amorphous, ambiguous and, most of the time, meaningless or, alternatively, open to many possible meanings. As the creator of General Systems Theory, Ludwig von Bertalanffy (1976), very appropriately points out, “every mathematical model is an oversimplification, and it is debatable whether it reduces real events to the bare bones or whether it tears out vital parts of their anatomy” (p. 117).


For a clearer example, consider what has been happening recently in the field of medicine. Excellent professionals in this science, sometimes guided by their specialization or super-specialization, prescribe a medicine that seems magnificent for a certain ailment or condition, unaware that, for some people in particular, it can even be fatal because of a specific allergy, for example, to penicillin or to one of its components.

This is without mentioning that the etiology of a certain disease sometimes has its origin in non-biological areas, such as a high level of stress due to psychological reasons, family problems or socioeconomic difficulties; areas that the distinguished specialist may be unfamiliar with even in their simplest aspects, yet that could give a clue as to where the necessary therapy should be directed.

Postpositivist View of Validity and Reliability

Validity

In a broad and general sense, we will say that an investigation has a high level of "validity" to the extent that its results "reflect" an image that is as complete, clear and representative as possible of the reality or situation studied.

But we do not have a single type of knowledge. The natural sciences produce knowledge that is effective in dealing with the physical world; they have been successful in producing instrumental knowledge that has been politically and lucratively exploited in technological applications. But instrumental knowledge is only one of the three cognitive forms that contribute to human life.

The historical-hermeneutic sciences (interpretive sciences) produce the interactive knowledge that underlies the life of each human being and the community of which he or she is a part; likewise, critical social science produces the reflective and critical knowledge that human beings need for their development, emancipation and self-realization.

Each form of knowledge has its own interests, its own uses and its own criteria of validity; for this reason, it must be justified on its own terms, as has traditionally been done with "objectivity" for the natural sciences, as Dilthey did for hermeneutics, and as Marx and Engels did for critical theory.

In the natural sciences, validity is related to their ability to control the physical environment with new physical, chemical and biological inventions; in the hermeneutical sciences, validity is appreciated according to their ability to produce human relationships with a high sense of empathy and connection; and in critical social science, validity is related to its ability to overcome obstacles and promote the growth and development of more self-sufficient human beings in the full sense.

As we pointed out, an investigation has a high level of validity if, when a reality is observed or appreciated, it is observed or appreciated in its full sense, and not just in one aspect or part of it.

If reliability has always represented a difficult requirement for qualitative research, due to its peculiar nature (the impossibility of repeating, stricto sensu, the same study), the same has not happened with validity. On the contrary, validity is the greatest strength of these investigations. Indeed, qualitative researchers' assertion that their studies have a high level of validity derives from their way of collecting information and from the analysis techniques they use.

These procedures lead them to live among the subjects participating in the study, to collect data over long periods of time and to review, compare and analyze them continuously, to adapt interviews to the empirical categories of the participants rather than to abstract or alien concepts brought in from another environment, to use participant observation in the real settings and contexts where the events occur and, finally, to incorporate into the analysis process a continuous activity of feedback and reevaluation.

All this guarantees a level of validity that few methodologies can offer. However, validity can also be perfected, and it will be all the greater to the extent that certain problems and difficulties that may arise in the qualitative research process are taken into account. Among others, for good internal validity, special attention will have to be paid to the following:

a) There may be a noticeable change in the environment studied between the beginning and the end of the investigation. In this case, information will have to be collected and collated at different times in the process.

b) It is necessary to carefully calibrate the extent to which the observed reality is a function of the position, status and role that the researcher has assumed within the group. Interactive situations always create new realities or modify existing ones.

c) The credibility of information can vary greatly: informants can lie, omit relevant data or have a distorted view of things. It will be necessary to contrast their information with that of others, to collect it at different times, and so on; it is also advisable that the sample of informants represent as well as possible the groups, orientations or positions of the population studied, as a strategy for correcting perceptual distortions and prejudices, although it will always remain true that the truth is not produced by a random and democratic exercise in gathering general information, but by the information of the most qualified and trustworthy people.


Regarding external validity, it is necessary to remember that the meaning structures discovered in one group are often not comparable with those of another, either because they are specific and typical of that group, in that situation and in those circumstances, or because the second group has been poorly chosen and the conclusions obtained in the first are not applicable to it.

Reliability

Research with good reliability is stable, secure, consistent, identical to itself at different times and predictable for the future. Reliability also has two sides, one internal and one external: there is internal reliability when several observers, studying the same reality, agree in their conclusions; there is external reliability when independent researchers, studying a reality at different times or in different situations, reach the same results.

The traditional concept of external reliability implies that a study can be repeated with the same method without altering the results; that is, it is a measure of the replicability of the research results. In the human sciences it is practically impossible to reproduce the exact conditions in which a behavior and its study took place. Heraclitus already said in his time that "no one bathes in the same river twice"; and Cratylus added that "it is not possible to do so even once", since the water is continually flowing (Aristotle, Metaphysics, iv, 5).

In studies carried out through qualitative research, which are generally guided by a systemic, hermeneutic, phenomenological, ethnographic and humanistic orientation, reliability is oriented towards the level of interpretive agreement between different observers, evaluators or judges of the same phenomenon; that is, reliability will be, above all, internal, inter-judge reliability. This reliability is considered good when it reaches 70%, that is, for example, when out of 10 judges there is consensus among 7.
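As a rough illustration of that 70% criterion, here is a minimal Python sketch that computes simple percent agreement among judges who each assign a categorical code to the same items. The function name, data layout and threshold check are illustrative assumptions, not part of any standard instrument described in the text.

```python
from collections import Counter

def interjudge_agreement(codings):
    """Average proportion of judges who endorse the modal (most frequent)
    code for each item. `codings` is a list of per-item lists, with one
    categorical code per judge (illustrative data layout)."""
    per_item = []
    for codes in codings:
        modal_count = Counter(codes).most_common(1)[0][1]  # size of the largest consensus
        per_item.append(modal_count / len(codes))
    return sum(per_item) / len(per_item)

# Example from the text: 10 judges code the same phenomenon, 7 agree on category "A".
codings = [["A", "A", "A", "A", "A", "A", "A", "B", "B", "C"]]
score = interjudge_agreement(codings)
print(f"Inter-judge agreement: {score:.0%}")            # 70% in this example
print("Meets threshold" if score >= 0.70 else "Below threshold")
```

A raw percent-agreement figure like this is only a first approximation; chance-corrected coefficients such as Cohen's or Fleiss' kappa are often preferred when the number of categories is small, since some agreement would occur by chance alone.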

Given the particular nature of all qualitative research and the complexity of the realities it studies, it is not possible to repeat or replicate a study in the strict sense, as can be done in many experimental investigations. For this reason, the reliability of these studies is achieved through other rigorous and systematic procedures.

Internal reliability is therefore very important. Indeed, the level of consensus among different observers of the same reality increases the credibility of the significant structures discovered in a given environment, as well as the confidence that the level of congruence of the phenomena under study is strong and solid.