What are the worst potential limitations of your data collection approach?


What is Data Collection?

Data collection is the procedure of collecting, measuring, and analyzing accurate insights for research using standard validated techniques.

Put simply, data collection is the process of gathering information for a specific purpose. It can be used to answer research questions, make informed business decisions, or improve products and services.

To collect data, we must first identify what information we need and how we will collect it. We can also evaluate a hypothesis based on collected data. In most cases, data collection is the primary and most important step for research. The approach to data collection is different for different fields of study, depending on the required information.

Data Collection Methods

There are many ways to collect information when doing research. The data collection methods that the researcher chooses will depend on the research question posed. Some data collection methods include surveys, interviews, tests, physiological evaluations, observations, reviews of existing records, and biological samples.

Main types of limitations

Some methodological limitations

  • Sample size: The number of units of analysis in your study is determined by the type of research problem you are investigating. Keep in mind that if your sample is too small, it will be difficult to find meaningful relationships and generalizations in the data, since statistical tests typically require a larger sample to ensure a representative distribution of the population and to be considered representative of the groups of people, objects, or processes studied. Sample size is, of course, less relevant in qualitative research.
  • Lack of available and/or reliable data: A lack of data, or of reliable data, may limit the scope of your analysis or the size of your sample, or be a significant obstacle to finding a trend, generalization, or significant relationship. You should not only describe these limitations but also offer reasons why you believe the data is missing or unreliable; this is a useful opportunity to describe future research needs.
  • Lack of previous research studies on the topic: Referencing and critiquing previous research constitutes the basis of the literature review and helps lay the foundation for understanding the research problem. Depending on the scope of your topic, there may be little prior research on it. Of course, before assuming this is true, the main international databases should be consulted widely. It is worth highlighting that discovering a limitation of this type can serve as an opportunity to identify new gaps in the literature and, consequently, new research.
  • Measure used to collect the data: Sometimes, after completing the interpretation of the results, you discover that the way you collected data inhibited your ability to conduct a thorough analysis. For example, you may not have included a specific question in a survey that, in retrospect, could have helped address a particular issue that arose later in the study.
  • Self-reported data: Self-reported data is limited by the fact that it can rarely be independently verified; the researcher often has to take what people say in interviews, focus groups, or questionnaires at face value. Self-reported data may contain several potential sources of bias that you should be aware of and note as limitations. These biases can become evident when the data is inconsistent with data from other sources. They are: 1) selective memory, remembering or failing to remember experiences or events that occurred at some point in the past; 2) the “telescoping” effect, where respondents remember events as occurring at a different time than they actually did; 3) attribution, the act of attributing positive events and outcomes to oneself but negative events and outcomes to external forces; and 4) exaggeration, the act of representing outcomes or embellishing events as more significant than they really were (Price and Murnan, 2004).

Possible limitations of the researcher

  • Access:  If the study depends on having access to people, organizations or documents and, for any reason, access is denied or limited in some way, the reasons for this situation must be described.
  • Longitudinal effects: The time available to investigate a problem and to measure change or stability over time is in most cases very limited, for example by the end date of a project assignment. It is advisable to state these limitations in the research report or scientific article.
  • Cultural limitations and other types of bias: Bias occurs when a person, place, or thing is seen or portrayed inaccurately. Bias is usually negative, though it can be positive as well, especially if it reflects reliance only on research that supports your hypothesis. When revising your article, critically review the way you have stated the problem, how you selected the data to study, what you may have omitted, and the way you have ordered procedures, events, people, or places.

No one expects science to be perfect, especially not the first time, and even your colleagues can be very critical; no one’s work is beyond limitations. Our knowledge base is built by discovering each piece of the puzzle, one at a time, and limitations show us where we need to make greater efforts next time. From a peer-review perspective, limitations are not inherently bad; on the contrary, omitting them would leave hidden flaws that could be repeated. It is necessary to see them as an opportunity: even the limitations of your study can be the inspiration for another researcher.

References

Price, J.H. and Murnan, J. (2004). Research Limitations and the Necessity of Reporting Them. American Journal of Health Education, 35, 66-67.

What are the limitations of the research?


How can they affect the results of a scientific study of social reality?

Research limitations are aspects or conditions identified as possible obstacles to achieving the objectives of a research project. Such limitations restrict or condition the validity, applicability, and generalization of the results of a study. They are factors that the researcher recognizes and points out as having possibly influenced the results, or as limiting the interpretation and extrapolation of the findings (Booth et al., 2008; Yin, 2017; Black, 1999; Leedy and Ormrod, 2016).

It is important to highlight limitations in a research report so that readers understand the restrictions inherent to the study and can interpret the results appropriately (American Psychological Association, 2020).

Common limitations

Let’s look at some of the limitations frequently mentioned in research reports. These are not the only ones; others can be identified. Here are some of the typical limitations associated with quantitative and qualitative approaches in research:

Sample size

If the sample used in the research is small, the results may not be representative of the general population. This may limit the generalizability of the findings.
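For a quantitative survey, the sample size needed to estimate a population proportion can be approximated with Cochran's formula. The following sketch is only illustrative; the default values assume 95% confidence and a ±5% margin of error, and the conservative worst-case proportion of 0.5.

```python
import math

def required_sample_size(confidence_z: float = 1.96,
                         margin_of_error: float = 0.05,
                         proportion: float = 0.5) -> int:
    """Cochran's formula for estimating a population proportion.

    proportion=0.5 is the conservative (worst-case) assumption that
    maximizes the required sample size.
    """
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

# 95% confidence, +/-5% margin of error -> 385 respondents
print(required_sample_size())
# Tightening the margin of error to +/-3% roughly triples the requirement
print(required_sample_size(margin_of_error=0.03))
```

For small populations a finite population correction would further reduce these figures; the formula above assumes a large population.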

Selection bias

If the sample is not selected randomly or if it has specific characteristics, it may introduce bias into the results.

Response bias

In studies involving surveys or questionnaires, missing or biased responses from participants can affect the validity of the results.

Assumptions of normality

In some statistical methods, data are assumed to follow a normal distribution. If this assumption is not met, there may be problems in data analysis.
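A quick way to screen data for obvious departures from normality is to inspect sample skewness, which should be near zero for symmetric data. This is only an illustrative screen with made-up data; formal tests such as Shapiro-Wilk or Kolmogorov-Smirnov are the standard choice.

```python
import statistics

def sample_skewness(data):
    """Crude normality screen: skewness near 0 is consistent with symmetry."""
    n = len(data)
    mean = statistics.fmean(data)
    sd = statistics.pstdev(data)
    return sum(((x - mean) / sd) ** 3 for x in data) / n

symmetric = [2, 4, 4, 5, 6, 6, 8]   # symmetric around 5
skewed = [1, 1, 1, 2, 2, 3, 10]     # one large outlier pulls the tail right
print(round(sample_skewness(symmetric), 2))  # close to 0
print(round(sample_skewness(skewed), 2))     # strongly positive
```

If the skewness is large, a transformation or a non-parametric method may be more appropriate than tests that assume normality.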

Resource limitations

For research that follows the quantitative approach, limited availability of funding or access to data may restrict the depth and breadth of the research. Qualitative data collection and analysis is often a time- and resource-intensive process, which can limit the amount of data that can be collected.

Measurement tools

If the instruments used to collect data are not reliable or valid, the results may not accurately reflect the variables being studied.

Information bias

If participants do not provide accurate or complete information, whether intentionally or unintentionally, this can bias the results.

Temporal context

The results of a study can be influenced by when it was conducted, as conditions can change over time.

Temporal effects

In longitudinal research, it can be difficult to control for temporal effects, which can lead to misinterpretations of causal relationships.

Limitations on generalization

Some studies may be limited in terms of the applicability of the results to specific populations or particular situations.

Validity and reliability

Validity and reliability in qualitative research can be difficult to establish due to the subjective nature of the reality from which the data is obtained for analysis and interpretation.

Limited generalization

In qualitative research, results focus on specific contexts and cannot always be widely generalized.

Researcher bias

Researcher bias can influence the collection and analysis of qualitative data if the researcher is not aware of his or her own perspectives and biases.

Subjective interpretation

Despite criteria of scientific rigor and transparency, the interpretation of qualitative data is subjective and depends on the perspective of the researcher, which can generate debates about objectivity.

Uncontrolled external factors

Factors outside the researcher’s control, such as unexpected events or changes in the environment, may influence the results.

Ethical limitations

In research involving human subjects, there may be ethical restrictions on the collection of certain types of data or the manipulation of variables (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979).

Pointing out limitations can be useful to guide future research and improvements in methodological design. It is important that researchers are aware of these limitations and address them appropriately in their research reports to ensure the transparency and validity of their studies.

 

How can you best document and report any changes made during the data collection process?


What is a change control process and how is it implemented?

A change control process allows project managers to submit requests to stakeholders for review, which are then approved or rejected. It is an important process to help manage large projects with many moving parts.

When it comes to  managing multiple projects , things can get difficult. From coordinating work schedules to tracking goals and results, the last thing you want to deal with is a major project change. However, if you implement a change control process, you can easily submit project change requests.

The change control process is essential for large-scale initiatives where teammates from multiple departments work together. Below we will analyze the process in more detail and show you specific examples that will help you implement your own change control procedure.

What does the change control process mean?

Change control is a process used to manage change requests for projects and other important initiatives. It is part of a change management plan that defines the roles to manage change within a team or company. While a change process has many parts, the easiest way to visualize it is by creating a change log to track project change requests.

In most cases, anyone involved can request changes. A request can be as small as a modification to the  project schedule  or as large as a new deliverable. However, it is important to note that not all requests will be approved, as it is up to key participants to approve or reject change requests.

Since a change control process includes many moving parts and differs from company to company, it is advisable to incorporate tools that help process cycles flow smoothly. Tools like  workflow management software  can help you manage work and communications in one place.


Change control vs. change management

Not sure of the difference between change control and change management? Don’t worry! There are many differences between change control and a  change management plan . Change control is just one of the many pieces of a change management strategy.

  • Change control:  A change control process is important for any company, as it can facilitate the flow of information when changes need to be made to a project. A successful process must define success metrics, organize workflows, facilitate team communication, and set teams up for success. 
  • Change Management:  A change management plan involves coordinating budget, schedule, communications, and resources. While a change control process consists of a formal document that describes a change request and the impact of that change, change management refers to the overall plan.

As you can see, a change control process is only a small part of a larger change management plan. So, although they are related, both terms are different.

What are the benefits of a change control process?

Implementing a change control process, with the support of  organizational software , can help you efficiently organize and manage your team’s work, as well as project deliverables and deadlines. It is also very important when you consider the possible consequences of not being able to manage changes effectively.  

A change management process can help you execute a  resource management plan  or other work management objectives. Here are some additional benefits of implementing a change control process.

Higher productivity  

A change control process will eliminate confusion around project deliverables and allow you to focus on execution rather than gathering information. As a result, you will achieve greater productivity and efficiency, especially with the help of  productivity software .

Without a properly implemented process, productivity can suffer because time is spent chasing down details of the work. With limited availability for more important work, employees fail to meet  a quarter (26%) of deadlines  each week.

Effective communication

Proper documentation of changes can help reduce communication problems. When goals and objectives are clearly defined, team communication can flourish. However, it is important to note that a change control process will not solve all communication problems. It can also be helpful to adopt  work management software  to keep communication about different projects in one place.  

A change control process can also be shared with the executives involved to easily provide context around change requests.  


Greater collaboration and teamwork

Effective communication, in addition to being a benefit in itself, also helps improve collaboration. Clear communication about project changes enhances collaboration and teamwork. 

For example, when changes are clearly communicated from the beginning, stakeholders have more time to focus on creativity and teamwork. Without effective communication, those involved are forced to spend their time gathering information instead of working with team members and fostering creativity.

To further improve collaboration, try combining the change control process with  task management software  to set your team up for success.


The five stages of a change control process

Like the five  phases of project management , there are five key steps to creating a change control process. Although there may be some small differences, there are key elements that are common to all processes. From inception to implementation, each of these essential steps helps change requests move through the different stages quickly and efficiently and avoid unnecessary changes.

Some prefer to have the procedure in a change control process flow as it may be easier to visualize. Regardless of how it is displayed, the result will always be the final decision to approve or reject a change request.  

Let’s take a closer look at what goes into each stage of an effective change control process.

1. Initiation of the change request

The  initial phase  of the process begins with a change request. There are numerous reasons why you might request a change, such as submitting a request to adjust the delivery date of a creative asset that is taking longer than expected. And while a request will most likely come from a stakeholder or project leader, anyone can submit a change request.

If a team member wants to make a request, they must submit it through a change request form. As a project manager, you should maintain a change log and store it in a place that is easy to find and accessible to everyone.

Once the request form has been completed, you will need to update the change log with a name, a brief description, and any other information you consider important, such as the date, name of the applicant, etc. The change log stores all changes made to the project, which can be useful if  you manage multiple projects  that span several months.

Below we provide some examples of the different fields you can include in a change request form.

  • Project name
  • Date
  • Request description
  • Applicant
  • Change manager
  • Priority
  • Impact of change
  • Deadline
  • Comments

The fields you include will depend on the level of detail you want your change log to have and the type of change you receive.
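A change-log entry like the one described above can be sketched as a small record type. The Python dataclass below is a minimal illustration; the field names and status values are hypothetical, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ChangeRequest:
    """One row of a change log; fields mirror a typical request form."""
    project_name: str
    requester: str
    description: str
    priority: str = "medium"
    impact: str = ""
    deadline: Optional[date] = None
    status: str = "submitted"        # e.g. submitted -> evaluated -> approved/rejected
    comments: List[str] = field(default_factory=list)

# The change log is simply an append-only list of requests.
change_log: List[ChangeRequest] = []
change_log.append(ChangeRequest(
    project_name="Website redesign",
    requester="A. Rivera",
    description="Push the asset delivery date back one week",
))
print(len(change_log), change_log[0].status)
```

In practice the log would live in shared tooling rather than in memory, but the same structure applies: every request is recorded with enough context to be reviewed later.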

2. Evaluation of the change request

Once the initial form has been submitted and approved, the request will be evaluated. At this stage, the requested changes are analyzed. 

The evaluation phase is not necessarily where a decision is made. At this stage the request is reviewed to gather all the necessary information. The information will likely be reviewed by a project or department leader, who will evaluate key details such as the resources needed, the impact of the request, and to whom the request should be referred.

If the change request passes the initial evaluation stage, the analysis phase begins where a decision will be made. 

3. Analysis of the change request

The change impact analysis phase culminates with a final decision made by the relevant project leader on whether the request will be approved or rejected. While you can also participate in the decision-making process, it is always advisable to obtain formal approval from a project leader. In some cases, there may even be a change control committee to oversee the approval of requests.

An approved change request must be signed and communicated to the team to then continue with the rest of the phases of the process. The change must be documented in the change log and in all channels where project communication is maintained to ensure that all  project participants  clearly understand the necessary changes. 

If the change request is rejected, it must also be documented in the change log. And while it’s not necessary to communicate a denied request to the team, it might be helpful to notify them to avoid confusion.

4. Implementation of the requested change

If the requested change is approved, the process will move to the implementation phase. This is where you and others involved in the project will work to apply the changes to the project.  

Implementation of changes may vary depending on the stage of the project, but generally will involve updating the  project schedule  and deliverables and informing the entire team. Then you can start with the concrete work. It is important to evaluate the scope of the project to ensure that adjustments to the schedule do not have a significant impact on the proposed objectives.

It is best to share the request information in a shared workspace and in the change log to avoid a decrease in productivity when trying to find new information. You can even share a  business case  to cover all the aspects you consider necessary.

5. Closing the change request

Once the request has been documented, shared, and implemented, the request is ready to be closed. While some teams don’t have a formal closure plan, it’s helpful to have one to store information in a place that all team members can reference in the future. 

During the closure phase, all documentation, change logs, and communication should be stored in a shared space that can be accessed in the future. It’s also a good idea to store the original change form and the revised project plan you created during the process.

Once the documents are stored in the appropriate place, you can finish the related tasks and work towards the successful completion of your project. Some project leaders also organize a  post-mortem meeting  before officially ending the project.
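The five stages above can be sketched as a tiny state machine that only permits the transitions the process allows. The stage names and transition rules below are an illustrative reading of the process, not a fixed standard.

```python
# Allowed transitions between the stages described above (illustrative).
TRANSITIONS = {
    "initiated": {"evaluated"},
    "evaluated": {"analyzed"},
    "analyzed": {"approved", "rejected"},
    "approved": {"implemented"},
    "implemented": {"closed"},
    "rejected": {"closed"},   # rejected requests are still documented and closed
}

def advance(status: str, new_status: str) -> str:
    """Move a request to a new stage, refusing illegal jumps."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"cannot move from {status!r} to {new_status!r}")
    return new_status

s = "initiated"
for step in ("evaluated", "analyzed", "approved", "implemented", "closed"):
    s = advance(s, step)
print(s)  # closed
```

Encoding the flow this way makes it impossible to, say, implement a change that was never analyzed, which is exactly the discipline the change control process is meant to enforce.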

What steps will you take to ensure the best generalization of your findings?


Generalization

Generalization is applied by researchers in academia. It can be defined as the extension of the results and conclusions of research carried out on a population sample to the general population. Although the reliability of this extension is not absolute, it is statistically probable.

Since good generalization requires data on large populations, quantitative research—experimental, for example—provides the best basis for producing a broad generalization. The larger the sample population, the more the results can be generalized. For example, a comprehensive study of the role that computers play in the writing process might reveal that students who compose most of their text on a computer are statistically likely to move more chunks of text than students who do not compose on a computer. 

Transferability

Transferability is applied by readers of the research. Although generalizability usually applies only to certain types of quantitative methods, transferability can apply to varying degrees to most types of research. Unlike generalizability, transferability does not imply broad statements, but rather invites readers of the research to make connections between the elements of a study and their own experience. For example, high school teachers could selectively apply the results of a study showing that heuristic writing exercises help students at the college level to their own classrooms.

Interrelationships between Generalization and Transferability

Generalizability and transferability are important elements of any research methodology, but they are not mutually exclusive. Generalization, to varying degrees, relies on the transferability of research results. It is important for researchers to understand the implications of these two aspects of research before designing a study. Researchers seeking to make a generalizable claim must carefully examine the variables involved in the study.

Among them are the population sample used and the mechanisms for formulating a causal model. Furthermore, if researchers want the results of their study to be transferable to another context, they must maintain a detailed account of the environment surrounding their research, and include a rich description of that environment in their final report. With the knowledge that the sample population was large and varied, as well as detailed information about the study itself, readers of the research can generalize and transfer the results to other situations with greater confidence.

Generalization

Generalization is not only common to research, but also to everyday life. In this section, we establish a working definition of generalization as it applies within and outside of academic research. We also define and consider three different types of generalization and some of their likely applications. Finally, we discuss some of the potential shortcomings and limitations of generalizability that researchers should consider when constructing a study that they hope will produce potentially generalizable results.

Definition

In many ways, according to Shavelson et al. (1991), generalization is nothing more than making predictions based on recurring experience. If something happens frequently, we expect that it will continue to happen in the future. Researchers use the same type of reasoning when generalizing the results of their studies.

Once researchers have collected enough data to support a hypothesis, a premise can be formulated about the behavior of that data. This is what makes it generalizable to similar circumstances. However, due to its foundation in probability, this generalization cannot be considered conclusive or exhaustive.

Although generalization can occur in informal and non-academic contexts, in academic studies it usually only applies to certain research methods. Quantitative methods allow some generalization. Experimental research, for example, often produces generalizable results. However, this experimentation must be rigorous to obtain generalizable results.

Generalization Example 1

An example of generalization in everyday life is driving. Driving a car in traffic requires drivers to make assumptions about the likely outcome of certain actions. When approaching an intersection where a driver is preparing to turn left, the driver passing through the intersection assumes that the driver turning left will yield to him before turning. The driver passing through the intersection applies this assumption with caution, recognizing the possibility that the other driver may turn prematurely.

American drivers also generalize that everyone drives on the right side of the road. However, if we try to generalize this assumption to other settings, such as England, we will be making a potentially disastrous mistake. It is therefore evident that generalization is necessary to form coherent interpretations in many different situations. However, we do not expect our generalizations to work the same in all circumstances. With enough evidence we can make predictions about human behavior. At the same time we must recognize that our assumptions are based on statistical probability.

Generalization Example 2

Consider this example of generalizable research in the field of English studies. A study of students’ evaluations of composition instructors could reveal that there is a strong correlation between the grade students expect to earn in a course and whether they give their instructor high marks.

The study may find that 95% of students who expect to receive a “C” or lower in their class give their instructor a grade of “average” or lower. Therefore, there would be a high probability that prospective students who expect a “C” or less will not give their instructor high grades. However, the results would not necessarily be conclusive. Some students might buck the trend.
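The 95% figure in this example is a conditional proportion: among students who expect a low grade, the share who rate the instructor low. A minimal sketch of that computation, using entirely hypothetical survey records:

```python
# Hypothetical survey records: (expected_grade, instructor_rating)
records = [
    ("C", "average"), ("D", "low"), ("C", "low"), ("B", "high"),
    ("A", "high"), ("C", "high"), ("F", "low"), ("B", "average"),
    ("C", "average"), ("D", "average"),
]

# Condition on students expecting a "C" or lower...
low_expectation = [rating for grade, rating in records if grade in {"C", "D", "F"}]
# ...and compute the share who rated the instructor "average" or lower.
share = sum(r in {"average", "low"} for r in low_expectation) / len(low_expectation)
print(f"{share:.0%} of students expecting C or lower rated the instructor average or lower")
```

A proportion like this supports a probabilistic prediction about future students, not a certainty, which is exactly why some students can still buck the trend.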

A second form of generalization focuses on measurements rather than treatments. For a result to be considered generalizable outside the test group, it must produce the same results with different forms of measurement. In terms of the heuristic example above, the results will be more generalizable if the same results are obtained when evaluated “with questions that have slightly different wording, or when we use a six-point scale instead of a nine-point scale” (Runkel and McGrath, 1972, p.46).

A third type of generalization concerns the subjects of the test situation. Although the results of an experiment may be internally valid, that is, applicable to the group being tested, in many situations the results cannot be generalized beyond that particular group. Researchers hoping to generalize their results to a broader population should ensure that their test group is relatively large and chosen at random. However, researchers must take into account the fact that test populations of more than 10,000 subjects do not significantly increase generalizability (Firestone, 1993).

Potential limitations

No matter how carefully these three forms of generalizability are applied, there is no absolute guarantee that the results obtained in a study will occur in all situations outside the study. To determine causal relationships in a test environment, precision is of utmost importance. However, if researchers want to generalize their findings, range and variance must take precedence over precision.

Therefore, it is difficult to test accuracy and generalizability simultaneously, as focusing on one reduces the reliability of the other. One solution to this problem is to make a greater number of observations. This has a double effect: first, it increases the sample population, which increases generalizability. Second, precision can be reasonably maintained because random errors across observations will be averaged (Runkel and McGrath, 1972).
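The averaging argument above can be checked empirically: the standard error of a mean of n observations shrinks roughly as 1/√n, so more observations buy both generalizability and maintained precision. A small simulation sketch (the distribution, seed, and trial counts are illustrative):

```python
import random
import statistics

random.seed(42)

def se_of_mean(n: int, trials: int = 2000) -> float:
    """Empirical standard error of the mean of n noisy observations."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

print(round(se_of_mean(10), 3))   # roughly 1/sqrt(10) = 0.316
print(round(se_of_mean(100), 3))  # roughly 1/sqrt(100) = 0.100
```

Tenfold more observations cut the random error of the average roughly by a factor of √10, which is the sense in which precision "can be reasonably maintained" while the sample grows.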

 

How do you ensure the best validity of your data?


Validity is an evaluation criterion used to determine the weight of the empirical evidence and theoretical foundations that support an instrument, examination, or action. It is also understood as the degree to which an instrument measures what it purports to measure, or meets the objective for which it was constructed. This criterion is essential for a test to be considered valid. Validity, along with reliability, determines the quality of an instrument.

Currently, validity has become especially relevant in measurement due to the increase in new instruments used at crucial moments, for example when selecting new personnel or when deciding whether an academic degree is awarded. Likewise, some authors point out the need to validate the content of existing instruments.

The validation process is dynamic and continuous, and becomes more relevant as it is further explored. The  American Psychological Association  (APA), in 1954, identified four types of validity: content, predictive, concurrent, and construct. However, other authors classify it into face (appearance), content, criterion, and construct validity.

Content validity is defined as the logical judgment about the correspondence between the trait or characteristic of the student’s learning and what is included in the test or exam. It aims to determine whether the proposed items or questions reflect the content domain (knowledge, skills, or abilities) to be measured.

To do this, evidence must be gathered about the quality and technical relevance of the test; it is essential that the test be representative of the content, drawing on a valid source such as the literature, the relevant population, or expert opinion. This ensures that the test includes all of what it must contain and only that, that is, the relevance of the instrument.


This type of validity can consider internal and external criteria. Internal validity criteria include the quality of the content, curricular importance, content coverage, cognitive complexity, linguistic adequacy, complementary skills, and the value or weighting given to each item. External validity criteria include equity, transfer and generalization, comparability, and sensitivity to instruction; these have an impact on both students and teachers.

The objective of this review is to describe the methodologies involved in the content validity process. This need arises from the decision to adopt a multiple-choice written exam, measuring knowledge and cognitive skills, as the modality for obtaining the professional title of nurse or nurse-midwife in a health school at a Chilean university. The process began in 2003 with the development of questions and their psychometric analysis; however, it was considered essential to determine the content validity of the instrument used.

To achieve this objective, a search was carried out in different databases of the electronic collection available in the University’s multi-search system, using the keywords content validity, validation by experts, and think-aloud protocol. The inclusion criteria for selecting publications were: articles published from 2002 onwards, full text, without language restriction; bibliography from classic authors on the subject was also incorporated. Of the 58 articles found, 40 were selected.

The information found was organized around the two most widely used methodologies for validating content: the expert committee and the cognitive interview.

Content validity type

There are various methodologies for determining the content validity of a test or instrument. Some authors propose test results, student opinions, cognitive interviews, and evaluation by experts; others perform statistical analyses with various mathematical formulas, for example factor models with structural equations, although these are less common.

Cognitive interviews yield qualitative data that can be explored in depth, unlike expert evaluation, which seeks to determine the skill that the exam questions are intended to measure. Some experts point out that the following are essential for validating the content of an instrument: review of research, critical incidents, direct observation of the applied instrument, expert judgment, and instructional objectives. The methods most frequently mentioned in the reviewed articles are the expert committee and the cognitive interview.

Expert Committee

This methodology determines the validity of the instrument through a panel of expert judges for each of the curricular areas covered by the evaluation instrument. The judges must analyze, at a minimum, the coherence of the items with the course objectives, the complexity of the items, and the cognitive ability to be evaluated. Judges must be trained in question classification techniques for content validity. This is the most widely used methodology for content validation.

Before carrying out this validation, two problems must therefore be resolved: first, determine what can be measured, and second, determine who the experts validating the instrument will be. For the first, it is essential that the author carry out an exhaustive bibliographic review on the topic; focus groups can also be used. Some authors define this period as a development stage.


For the second, although there is no consensus defining the characteristics of an expert, it is essential that they know the area under investigation, at an academic and/or professional level, and that they also know complementary areas. Other authors are more emphatic in defining who counts as an expert and require, for example, at least 5 years of experience in the area. All of this means the sample must be purposive.

The characteristics of the expert must be defined and, at the same time, the number of experts determined. Delgado and others point out that there should be at least 3, while García and Fernández, applying statistical criteria, concluded that the ideal number varies between 15 and 25 experts; however, Varela and others point out that the number depends on the objectives of the study, with a range between 7 and 30 experts.

Other authors are less strict when determining the number of experts; they consider various factors, such as geographical area or work activity. They also point out that it is essential to anticipate the number of experts who will be unable to participate or who will drop out during the process.

Once the criteria for selecting the experts have been decided, they are invited to participate in the project. During the same period, a classification matrix is prepared, with which each judge will determine the degree of validity of the questions.

To prepare the matrix, a 3-, 4-, or 5-point Likert scale is used, where the possible answers can be classified in different ways, for example: a) excellent, good, average, bad; or b) essential; useful but not essential; not necessary. The choice depends on the type of matrix and the specific objectives pursued.

Furthermore, other studies mention incorporating spaces where the expert can provide contributions and observations on each question. Each expert is then given, via email or in person in an office provided by the researcher, the classification matrix and the instrument to be evaluated.

Once the experts’ results are obtained, the data are analyzed. The most common approach is to measure agreement among the experts on each item under review; agreement is considered acceptable when it exceeds 80%. Items that do not reach this percentage can be modified and subjected to a new validation round, or simply eliminated from the instrument.

Other authors report using Lawshe’s (1975) statistical test to determine the degree of agreement between the judges, observing a content validity ratio with values between -1 and +1. A positive value indicates that more than half of the judges agree; a negative value means that fewer than half do. Once the values are obtained, the questions or items are modified or eliminated.
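As an illustration, Lawshe’s content validity ratio for a single item can be computed directly from the panel counts. A minimal sketch with made-up numbers:

```python
# Lawshe's content validity ratio (CVR) for one item.
# n_essential: judges who rated the item "essential"; n_total: panel size.
def content_validity_ratio(n_essential: int, n_total: int) -> float:
    half = n_total / 2
    return (n_essential - half) / half

print(content_validity_ratio(8, 10))   # more than half agree -> positive
print(content_validity_ratio(3, 10))   # fewer than half agree -> negative
```

A CVR of +1 means every judge rated the item essential; 0 means exactly half did.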

To determine content validity using experts, the following phases are proposed: a) define the universe of admissible observations; b) determine who the experts in that universe are; c) have the experts deliver their judgment on content validity through a concrete, structured procedure; and d) prepare a document summarizing the data collected.

The literature describes other methodologies that can be used together or individually. Among them are:

– Fehring Model: aims to explore whether the instrument measures the concept it intends to measure, using the opinion of a group of experts. It is used in the field of nursing, by the North American Nursing Diagnosis Association (NANDA), to analyze the validity of interventions and outcomes. The method consists of the following phases:

a) Experts are selected, who determine the pertinence and relevance of the topic and the areas to be evaluated using a Likert scale.

b) The scores assigned by the judges and their distribution across the categories of the scale are determined, thereby obtaining the content validity index (CVI). For each item, this index is obtained by adding the ratings provided by the experts and dividing by the total number of experts. These per-item indices are then averaged, and items whose average does not exceed 0.8 are discarded.

c) The text is given its final edit taking into account the CVI value. According to the aforementioned parameter, the items that will make up the final instrument are determined, as are those that, due to their low CVI value, are considered critical and must be reviewed.
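The per-item CVI computation described in step b) can be sketched as follows; the rating data and the scale points counted as "relevant" are illustrative assumptions:

```python
# Per-item content validity index: proportion of experts who rate the item
# as relevant (here, 3 or 4 on a 4-point Likert scale - an assumed coding).
def item_cvi(ratings, relevant={3, 4}):
    return sum(r in relevant for r in ratings) / len(ratings)

ratings_by_item = {
    "item_1": [4, 4, 3, 4, 3],   # every expert rates it relevant
    "item_2": [4, 2, 3, 1, 2],   # only 2 of 5 do
}
for item, ratings in ratings_by_item.items():
    cvi = item_cvi(ratings)
    print(item, cvi, "keep" if cvi >= 0.8 else "review")
```

The 0.8 cutoff mirrors the threshold mentioned above.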

An example of a specific use of this model is the adaptation carried out by Fehring for the content validity of nursing diagnoses. In this case, the author proposes 7 characteristics that an expert must meet, each associated with a score according to its importance; a candidate is expected to obtain at least 5 to be selected as an expert.

The maximum score (4 points) is awarded for the degree of Doctor of Nursing, and one of the minimum-score criteria (1 point) is having one year of clinical practice in the area of interest. It is important to clarify that the authors recognize the difficulty that exists in some countries due to the lack of expert professionals.

– Q Methodology: introduced by Thompson and Stephenson in 1935 to identify, in a qualitative-quantitative way, common patterns of expert opinion on a situation or topic. The methodology is carried out through the Q-sort system, which is divided into stages: the first brings together the experts (between 25 and 70, as Waltz advises), who select and order the questions according to their points of view on the topic under study; bibliographic evidence is also provided as support.

The second phase consists of collecting this information from each of the experts according to relevance, along a continuum from “strongly agree” to “strongly disagree”. Finally, statistical analyses are carried out to determine the similarity of all the information and the dimensions of the phenomenon.

– Delphi Method: obtains the opinion of a panel of experts. It is used when there is little empirical evidence, the data are diffuse, or subjective factors predominate. It allows experts to express themselves freely, since opinions are confidential, and at the same time avoids problems such as poor representation and the dominance of some participants over others.

Two groups participate in the process: the monitor group, which prepares the questions and designs exercises, and a second group, made up of experts, which analyzes them. The monitor group takes on a fundamental role, since it must manage the objectives of the study and also meet a series of requirements: fully knowing the Delphi methodology, being an academic researcher on the topic to be studied, and having interpersonal skills.

The rounds take place in complete anonymity: the experts give their opinions, debate the opinions of their peers, make comments, and reanalyze their own ideas with the feedback of the other participants. Finally, the monitor group generates a report summarizing the analysis of each of the responses and strategies provided by the experts. It is essential to limit the number of rounds, given the risk that experts abandon the process.

The Delphi method is the most used due to its high degree of reliability, flexibility, dynamism, and validity (content and otherwise). Among its attributes, the following stand out: the anonymity of the participants, the heterogeneity of the experts, and the prolonged interaction and feedback between participants, an advantage not present in the other methods. Furthermore, there is evidence that it contributes to confidence in the decision made, since responsibility is shared by all participants.

 

What is your plan for quality control in data collection?



The ability to identify and resolve quality-related problems quickly and efficiently is essential for anyone working in quality control or interested in process improvement. With the seven basic quality tools in your possession, you can easily manage the quality of your product or process, whatever industry you serve.

Where did quality tools originate?

The seven quality tools were originally developed by Japanese engineering professor Kaoru Ishikawa. They were implemented by Japan’s industrial training program during the postwar period, when the country turned to statistical quality control as a means of quality assurance. Ishikawa’s goal was to provide basic, easy-to-use tools that workers from diverse backgrounds and with varied skill sets could apply without extensive training.

Today, these quality management tools are still considered the reference for solving a variety of problems. They are often implemented in conjunction with today’s most widely used process improvement methodologies, such as the various phases of Six Sigma, TQM, continuous improvement processes, and Lean management.

The seven quality tools

 


1. Stratification

Stratification analysis is a quality control tool used to classify data, objects, and people into separate and distinct groups. Separating data through stratification can help you determine its meaning and reveal patterns that might otherwise go unnoticed when grouped together. 

Whether you examine equipment, products, shifts, materials, or even days of the week, stratification analysis allows you to understand data before, during, and after it is collected.

To get the most out of the stratification process, think about what information about your data sources can affect the final results of the analysis. Make sure you configure your data collection to include that information. 
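As a minimal sketch, stratifying hypothetical defect counts by shift can surface a pattern that pooled data would hide:

```python
from collections import defaultdict

# Hypothetical inspection records; "shift" is the stratification variable.
records = [
    {"shift": "day", "defects": 2},
    {"shift": "night", "defects": 7},
    {"shift": "day", "defects": 3},
    {"shift": "night", "defects": 6},
]

by_shift = defaultdict(list)
for r in records:
    by_shift[r["shift"]].append(r["defects"])

for shift, counts in sorted(by_shift.items()):
    print(shift, sum(counts) / len(counts))   # mean defects per stratum
```

Pooled, the mean is 4.5 defects; stratified, the night shift clearly stands out.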

2. Histogram

Quality professionals are often tasked with analyzing and interpreting the behavior of different groups of data, in an effort to manage quality. This is where quality control tools like the histogram come into play. 

The histogram can help you represent the frequency distribution of data clearly and concisely across different groups in a sample, allowing you to quickly and easily identify areas for improvement within processes. The structure is similar to that of a bar chart: each bar within a histogram represents a group, and the height of the bar represents the frequency of the data within that group. 

Histograms are particularly useful when breaking down the frequency of data into categories such as age, days of the week, physical measurements, or any other category that can be arranged chronologically or numerically. 
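The binning step behind a histogram can be sketched in a few lines, using equal-width bins and illustrative data:

```python
# Count how many values fall into each of n equal-width bins.
def histogram(values, n_bins):
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for v in values:
        # clamp so the maximum value lands in the last bin
        i = min(int((v - lo) / width), n_bins - 1)
        counts[i] += 1
    return counts

sample = [1, 2, 2, 3, 5, 6, 7, 7, 7, 9]
print(histogram(sample, 4))   # bin counts, ready to draw as bar heights
```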


3. Check (or count) sheet

Check sheets can be used to collect quantitative or qualitative data. When used to collect quantitative data, they may be called count sheets. A check sheet collects data in the form of check or tally marks that indicate how many times a particular value has occurred, allowing you to quickly focus on defects or errors within your process or product, on defect patterns, and even on the causes of specific defects.

With their simple setup and easy-to-read graphs, check sheets make it easy to record preliminary frequency distribution data when measuring processes. This particular chart can be used as a preliminary data collection tool when creating histograms, bar charts, and other quality tools.
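A check sheet is essentially a tally; a minimal sketch with made-up defect observations:

```python
from collections import Counter

# Each observation records the defect type seen during inspection.
observations = ["scratch", "dent", "scratch", "misprint", "scratch", "dent"]
tally = Counter(observations)

# Print tally marks next to each defect type, most frequent first.
for defect, count in tally.most_common():
    print(f"{defect:10s} {'|' * count}  ({count})")
```

The resulting counts feed directly into a histogram or a Pareto chart.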

4. Cause and effect diagram (fishbone or Ishikawa diagram)

Introduced by Kaoru Ishikawa, the  fishbone diagram  helps users identify the various factors (or causes) that lead to an effect, usually represented as a problem to be solved. Named for its resemblance to a fishbone, this quality management tool works by defining a quality-related problem on the right side of the diagram, with individual root causes and subcauses branching off to its left.   

The causes and subcauses in this diagram are generally classified into six main groups: measurements, materials, personnel, environment, methods, and machines. These categories can help you identify the possible source of your problem while maintaining a structured and orderly diagram.

5. Pareto diagram (80-20 rule)

As a quality control tool, the Pareto chart operates according to the 80-20 rule. This rule assumes that, in any situation, 80% of the problems in a process or system are caused by 20% of the factors, often called the “vital few.” The remaining 20% of problems are caused by the other 80% of factors.

The Pareto chart combines a bar chart and a line chart: individual values are represented in descending order by bars, while the cumulative total is represented by the line.

The goal of the Pareto chart is to highlight the relative importance of a variety of parameters, allowing you to identify and focus your efforts on the factors that have the greatest impact on a specific part of a process or system. 
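The two series a Pareto chart plots, counts in descending order and the cumulative percentage line, can be computed as follows; the defect data are illustrative:

```python
# Rank hypothetical defect causes and accumulate their share of the total.
defects = {"misalignment": 52, "scratch": 27, "dent": 12, "misprint": 6, "other": 3}
total = sum(defects.values())

ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
cumulative, running = [], 0
for cause, count in ranked:
    running += count
    cumulative.append((cause, count, 100 * running / total))

for cause, count, cum_pct in cumulative:
    print(f"{cause:14s} {count:3d}  {cum_pct:5.1f}%")
```

Here the first two causes already account for 79% of all defects, the “vital few.”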

6. Scatter plot

Of the seven quality tools, the scatterplot is the most useful for representing the relationship between two parameters, which is ideal for quality control professionals trying to identify cause-and-effect relationships. 

The dependent values are on the Y axis of the diagram and the independent values are on the X axis. Each point represents an intersection of the two. Joined together, those points can highlight the relationship between the two parameters. The stronger the correlation in the diagram, the stronger the relationship between the parameters.

Scatter plots can be useful as a quality control tool when used to define relationships between quality defects and possible causes, such as environment, activity, personnel, etc. Once the relationship between a particular defect and its cause has been established, you can implement focused solutions with potentially better results.
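The strength of the relationship a scatter plot shows can be quantified with Pearson’s correlation coefficient. A minimal sketch; the data pairs are invented:

```python
# Pearson's r between two parameters; |r| near 1 means a strong linear link.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

temperature = [20, 22, 24, 26, 28]          # independent parameter (X axis)
defect_rate = [1.0, 1.4, 2.1, 2.4, 3.0]     # dependent parameter (Y axis)
print(round(pearson_r(temperature, defect_rate), 3))
```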

7. Control chart (also called Shewhart chart)

Named after Walter A. Shewhart, this quality improvement tool can help quality improvement professionals determine whether or not a process is stable and predictable, making it easier to identify factors that can lead to variations or defects. 

Control charts use a center line to represent an average or mean, as well as an upper and lower line to represent control limits based on historical data. By comparing historical data with data collected from your current process, you can determine if your process is controlled or affected by specific variations.

Using a control chart can save your organization time and money by predicting process performance, especially in terms of what your customer or organization expects from the final product.
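A minimal sketch of deriving control limits from historical data and flagging out-of-control points; the sample values are invented:

```python
# Center line at the mean; control limits at mean +/- 3 standard deviations.
def control_limits(samples):
    n = len(samples)
    mean = sum(samples) / n
    sd = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
    return mean - 3 * sd, mean, mean + 3 * sd

history = [10.1, 9.8, 10.0, 10.2, 9.9, 10.0]   # historical measurements
lcl, center, ucl = control_limits(history)

new_points = [10.0, 10.1, 11.2]
out_of_control = [x for x in new_points if not lcl <= x <= ucl]
print(out_of_control)   # points outside the limits warrant investigation
```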


Additional: flowcharts

Some sources replace stratification with flowcharts as one of the seven basic tools of quality control. Flowcharts are commonly used to document organizational structures and process flows, making them ideal for identifying bottlenecks and unnecessary steps within a process or system.

Mapping your current process can help you more effectively identify which activities are completed by whom, how processes flow from one department or task to another, and what steps can be eliminated to streamline the process. 

What are the advantages and disadvantages of different data collection methods?



Collecting data helps your organization answer relevant questions, evaluate results, and better anticipate customer probabilities and future trends.

In this article you will learn what data collection is, what it is used for, its advantages and disadvantages, the skills a professional needs to collect data correctly, the methods used, and some tips for carrying it out.

What is data collection?

According to Dr. Luis Eduardo Falcón Morales, director of the Master’s Degree in Applied Artificial Intelligence at the Tecnológico de Monterrey, everything currently generates data in some format, whether written, in video, comments on social networks, tweets, etc.

“The point is that data collection then begins to gather information to try to learn about the processes that are generating those data,” said Falcón Morales.

So we can say that data collection is the process of searching for, collecting, and measuring data from different sources to obtain information about the processes, services, and products of your company or business, and to evaluate those results so that you can make better decisions.

What is data collection used for?

Falcón Morales indicated that data collection mainly serves continuous improvement processes, but it must be understood that it also depends to a large extent on the problem being addressed or the objective for which the collection is carried out.

He mentions some uses of data collection:

  • Identify business opportunities for your company, service or product.
  • Analyze structured data (data that is in a standardized format, meets a defined structure, and is easily accessible to humans and programs) in a simple way to understand the context in which said data was generated.
  • Analyze unstructured data (data sets, typically large collections of files, not stored in a structured database format, such as social media comments, tweets, videos, etc.) in a simple way to understand the context in which said data were generated.
  • Store data according to the characteristics of a specific audience to support the efforts of your marketing area.
  • Better understand the behaviors of your clients, users and leads.


Phone vs. Online vs. In-Person Interviews

Essentially there are four choices for data collection – in-person interviews, mail, phone, and online. There are pros and cons to each of these modes.

  • In-Person Interviews
    • Pros: In-depth and a high degree of confidence in the data
    • Cons: Time-consuming, expensive, and can be dismissed as anecdotal
  • Mail Surveys
    • Pros: Can reach anyone and everyone – no barrier
    • Cons: Expensive, data collection errors, lag time
  • Phone Surveys
    • Pros: High degree of confidence in the data collected, reach almost anyone
    • Cons: Expensive, cannot self-administer, need to hire an agency
  • Web/Online Surveys
    • Pros: Cheap, can self-administer, very low probability of data errors
    • Cons: Not all your customers might have an email address/be on the internet, customers may be wary of divulging information online.

In-person interviews are always better, but the big drawback is the trap you might fall into if you don’t do them regularly. It is expensive to conduct interviews regularly, and not conducting enough interviews might give you false positives. Validating your research is almost as important as designing and conducting it.

We’ve seen many instances where, if the results of the research do not match up with the “gut feel” of upper management, they are dismissed as anecdotal and a “one-time” phenomenon. To avoid such traps, we strongly recommend that data collection be done on an ongoing, regular basis.

This will help you compare and analyze changes in perception of your products/services in response to marketing. The other issue here is sample size. To be confident in your research, you must interview enough people to weed out the fringe elements.
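A common rule-of-thumb formula for sizing a survey sample of a proportion is n = z²·p(1−p)/e². A sketch with standard illustrative values:

```python
import math

# Minimum sample size for estimating a proportion.
# z = 1.96 for 95% confidence, p = 0.5 is the worst case,
# e is the desired margin of error.
def sample_size(z=1.96, p=0.5, e=0.05):
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

print(sample_size())          # about 385 respondents at +/-5%
print(sample_size(e=0.03))    # a tighter margin needs a larger sample
```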

A couple of years ago there was a lot of discussion about online surveys and their statistical analysis plan. The fact that not every customer had internet connectivity was one of the main concerns.

Although some of the discussions are still valid, the reach of the internet as a means of communication has become vital in the majority of customer interactions. According to the US Census Bureau, the number of households with computers has doubled between 1997 and 2001.


Data Collection Examples

Data collection is an important aspect of research. Let’s consider an example of a mobile manufacturer, company X, which is launching a new product variant. To conduct research about features, price range, target market, competitor analysis, etc. data has to be collected from appropriate sources.

The marketing team can conduct various data collection activities such as online surveys or focus groups.

The survey should have all the right questions about features and pricing, such as “What are the top 3 features expected from an upcoming product?”, “How much are you likely to spend on this product?”, or “Which competitors provide similar products?”

For conducting a focus group, the marketing team should decide the participants and the mediator. The topic of discussion and objective behind conducting a focus group should be clarified beforehand to conduct a conclusive discussion.

Data collection methods are chosen depending on the available resources. For example, conducting questionnaires and surveys would require the least resources, while focus groups require moderately high resources.

Advantages and disadvantages of data collection

Falcón Morales pointed out that the main and most important advantage is knowledge itself, because knowledge is, in a way, power for your company: it means knowing whether what your customers think about your product, service, or process is negative or positive.


However, he indicated that the main disadvantage is that people often think that “data collection is magic” and that is not the case. It is a process of continuous improvement, therefore it has no end.

“It is not ‘I apply it once and that’s it’; no, it is an endless cycle,” said the director of the Master’s Degree in Applied Artificial Intelligence.

The other disadvantage is the ethical question of the professional or the company to handle the data, “since we do not know what use they may give it.”

Skills to carry out data collection

The director of the Master’s Degree in Applied Artificial Intelligence explained that the main skills are soft skills. Among them are:

  1. Critical thinking
  2. Effective communication
  3. Proactive problem solving
  4. Intellectual curiosity
  5. Business sense

Methods for data collection

Data collection can be carried out through research methods, which are:

  • Analytical method: examines each piece of data in depth and in an orderly manner; it goes from the general to the particular to reach conclusions.
  • Synthetic method: the information is analyzed and summarized; through logical reasoning it arrives at new knowledge.
  • Deductive method: starts from general knowledge to reach singular knowledge.
  • Inductive method: from the analysis of particular data, general conclusions are reached.


Tips for carrying out data collection

Falcón Morales provided 5 tips to the professional to collect data:

  • Make a plan with the objective to be solved.
  • Gather all the data.
  • Define the data architecture.
  • Establish data governance.
  • Maintain a secure data channel.

 

How to best verify the accuracy of self-reported data?


 




A significant number of scientific investigations show a lack of rigor, largely due to the non-validation of the instruments used. This is much more evident in the behavioral sciences, where the most frequent methodology is qualitative, a type of research in which an indiscriminate use of instruments not typical of this methodology is observed. This responds to an interest in contextualization and homogeneity.

In an analysis of 102 doctoral theses developed over the last 10 years, it was found that the most used instrument is the survey; that each investigation designed its own instrument; and that, in the best of cases, the instrument responded to the objectives set (lecture in a postdoctoral course: Analysis of the use of instruments in doctoral research, presented in 2014 by Tomás Crespo Borges at the Pedagogical University of Villa Clara).

Due to its importance and complexity of application, instrument validation is considered a type of study within intervention studies, that is, at the same level as experimental, quasi-experimental, among others.

The questionnaire is an instrument for collecting information, designed to quantify and standardize it. For this reason, the moment of validation is of great importance, since results from a flawed application can invalidate the research and lead to serious consequences in robust studies, whether social, constructive, or involving a patient’s life, among others.

In this work, the process is divided into sections for presentation; in practice it unfolds as a system in which every element plays an important role.

A first conception that has two phases is described below:

Phase 1: Generalities of validation

An instrument must satisfy two fundamental properties, validity and reliability, in order to match the gold-standard instrument. If no gold standard exists, the instrument must instead meet a series of requirements that make it reliable enough for its results to be accepted in scientific research.

Validation thus involves two fundamental questions. First: is the instrument that has been applied up to this point actually sound? Second: how accurate is the new instrument compared with the one the scientific community accepts as correct in its measurements?

Phase 2: Internal validity

Validity is the degree to which an instrument measures what it is intended to measure. To establish it, the instrument to be used must be compared with the ideal, or gold standard.

Understood as a process, validity rests on five postulated sources of evidence: test content, internal structure, relations to other variables, the consequences of applying the instrument, and response processes.

Reliability is the degree of consistency with which an instrument measures the variable. It is assessed through reproducibility, i.e., a good correlation between measurements taken at different times, and through precision, i.e., the accuracy of those measurements across occasions. The application of both concepts is illustrated in a recent article that validates an instrument for a study on tourist destinations in the province of El Oro, Ecuador.

When exploring the state of the art, the first step is to check whether instruments validated in previous research have already been applied for the same purpose. Depending on the measurements of the variables, the most commonly used tests are Student's t or ANOVA when the data follow a normal distribution; otherwise, their non-parametric counterparts, Wilcoxon or Kruskal-Wallis, apply, for two measurements or for three or more, respectively.
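The test-selection logic described above can be sketched as follows. This is a minimal illustration, not part of the cited studies: the function name `compare_groups` and the use of the Shapiro-Wilk test as the normality check are assumptions, and for two independent samples SciPy implements the Wilcoxon rank-sum idea as the Mann-Whitney U test.

```python
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    """Choose a parametric or non-parametric test from normality of the data.

    Returns (test_name, p_value). Assumes independent samples;
    the Shapiro-Wilk test screens each group for normality.
    """
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    if len(groups) == 2:
        if normal:
            return "Student's t", stats.ttest_ind(*groups).pvalue
        return "Wilcoxon (Mann-Whitney U)", stats.mannwhitneyu(*groups).pvalue
    if normal:
        return "ANOVA", stats.f_oneway(*groups).pvalue
    return "Kruskal-Wallis", stats.kruskal(*groups).pvalue
```

For paired repeated measurements (the same subjects at two or three points in time), the Wilcoxon signed-rank and Friedman tests would be the analogous choices.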

When no existing instrument fits the objectives of the research, a new one must be constructed and contrasted with the ideal or gold standard.

In this second case, validity is very difficult to prove, since the chosen instrument differs from those found in the literature consulted.

Next, reliability is verified. For this, reproducibility is measured: the instrument is applied several times (two or more) to samples belonging to the same universe or population in which the research is carried out. A correlation between the measurements greater than 0.7 (according to Pearson's or Spearman's coefficient, or the concordance correlation coefficient, CCC) is considered good, although the ideal is 0.9.
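As a sketch, the test-retest check just described can be computed with nothing more than the Pearson coefficient; the 0.7 threshold mirrors the text, while the helper names are illustrative.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def reproducible(first, second, threshold=0.7):
    """True when two applications of the instrument correlate well enough."""
    return pearson_r(first, second) >= threshold
```

For example, two applications scoring `[10, 12, 9, 14, 11]` and `[11, 13, 9, 15, 10]` on the same five subjects correlate at roughly 0.94, above even the ideal 0.9.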


Reliability is demonstrated when the responses of the subjects do not differ significantly across measurements taken in the same universe or population; that is, the instrument's measurements are precise at different times. The most widely used statistics are Aiken's V and Dahlberg's error. In short, validity is measured against another instrument, while reliability is measured with the same one.
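The two statistics named above can be computed in a few lines. This is a minimal sketch under stated assumptions: the function names and the 1-to-5 default scale are illustrative. Aiken's V aggregates judges' ratings of an item toward 1 when agreement is high, and Dahlberg's error summarizes the discrepancy between duplicate measurements of the same subjects.

```python
import math

def aikens_v(ratings, low=1, high=5):
    """Aiken's V for one item rated by several judges on a low..high scale.

    V = sum(score - low) / (n * (high - low)); it ranges from 0 to 1,
    with values near 1 indicating strong agreement that the item is adequate.
    """
    n = len(ratings)
    return sum(r - low for r in ratings) / (n * (high - low))

def dahlberg_error(first, second):
    """Dahlberg's error for duplicate measurements:
    sqrt(sum(d_i**2) / (2n)), where d_i is the difference between the
    first and second measurement of subject i."""
    n = len(first)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(first, second)) / (2 * n))
```

For instance, four judges rating an item 5, 5, 4, 5 on a 1-to-5 scale yield V = 15/16 ≈ 0.94.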

Other authors add the term optimization, which refers to minimizing the error of the criterion offered at the moment of decision-making, based on the results obtained from the instrument.

In general, the studies discussed show that there are several ways to validate a measurement instrument. Researchers may use whichever they consider most appropriate, provided the chosen approach meets all the necessary scientific rigor.

Below, a methodology for validating a measurement instrument is presented; it is a hybrid of the conceptions of two groups of authors, which are essentially similar.

The qualitative component, which coincides with content analysis, forms part of internal validity. To it are added reliability and construct validity, which belong to the quantitative component, along with criterion validity, stability, and performance; these last three correspond to external validity.

A second conception, with six phases, following Supo's idea, is described below:

Phase 1: qualitative or content validation. It forms part of internal validity and constitutes the creation of the instrument. It is divided into three moments which need not follow a fixed order but are all mandatory, and it coincides with a type of diagnostic investigation.

  • Approach to the population: its purpose is to investigate the problem being addressed and to identify the units of analysis or variables that should be used in the research. Interviews, population surveys, and similar studies can provide this information.
  • Expert judgment: the selected experts assess whether the items in the instrument are clear, precise, relevant, coherent, and exhaustive.
  • Rational validity (knowledge): the concepts used must be grounded in the literature; the researcher is assumed to be knowledgeable about the topic under study.

Phase 2: quantitative validation, or reliability. It falls within the internal validity of the instrument.

This phase was detailed previously. According to Aiken: “…strictly speaking, rather than being a characteristic of a test, reliability is a property of the scores obtained when the test is administered to a particular group of people, on a particular occasion, and under specific conditions.”
