How to best communicate the results of your data collection to stakeholders?

Data Collection

Data collection is a systematic process of gathering observations or measurements. Whether you are performing research for business, governmental or academic purposes, data collection allows you to gain first-hand knowledge and original insights into your research problem.

While methods and aims may differ between fields, the overall process of data collection remains largely the same. Before you begin collecting data, you need to consider:

  • The aim of the research
  • The type of data that you will collect
  • The methods and procedures you will use to collect, store, and process the data

To collect high-quality data that is relevant to your purposes, follow these four steps.

How to effectively communicate the results of the employee engagement report? (with examples)

You did a survey on employee engagement, perfect! 

You are already measuring your staff’s commitment to your mission, the team, and their role within the company.

But what are you going to do with the results you get from their contributions? 

And most importantly, how will you move from reporting on employee engagement to meeting your staff’s desires for professional growth?

Are you still struggling to find the answers?

Our guide is for you. We have put together practical tips and examples that will allow you to:

  • Know exactly what to do with employee engagement survey data.
  • Make sense of what that data reveals.
  • Excel at communicating engagement survey results.

The importance of employee engagement for companies

First things first: the relevance and impact of an employee engagement report go beyond the borders of HR. Of course, measuring engagement is a People Ops function. It’s also one of the current trends in employee engagement: actively listening to determine whether workers are thriving.

But a survey report on employee engagement transcends concerns about team members’ aspirations. It’s a tool full of insightful information that C-suite executives need to understand how healthy and robust their workforce is.

And that is a strategic question. Because there is no way to drive business results without engaged and enthusiastic staff.

Employee engagement drives productivity, performance, and a positive workplace. 

How to analyze your employee engagement data

Let’s assume that you already know how to design an employee engagement survey and have set your goals.

Therefore, now we will focus on analyzing the results obtained when carrying out the study.

The first important tip is to prepare the analysis in advance. To do this, put in place the mechanisms to quantify and segment the data.

Use numerical scales, scores and percentages

Use numerical scales and convert responses to numerical values whenever you can in your engagement survey.

You will see that comparing the data will not cause you a big headache.

This is because numbers are much less prone to misinterpretation than free-text opinions from your staff.
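
As a minimal sketch of this quantification step (the question names and Likert labels below are invented for illustration), here is one way to map text responses onto a 1–5 scale and compute summary scores with pandas:

```python
import pandas as pd

# Hypothetical raw survey export: Likert labels per question, one row per respondent
responses = pd.DataFrame({
    "q_team_satisfaction": ["Agree", "Strongly agree", "Neutral", "Disagree"],
    "q_recognition":       ["Neutral", "Agree", "Strongly disagree", "Agree"],
})

# Map each label to a value on a 1-5 scale so answers can be averaged and compared
likert = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}
scores = responses.apply(lambda col: col.map(likert))

print(scores.mean())                                             # mean score per question
print(f"Overall score: {scores.values.mean() / 5 * 100:.0f}%")   # overall score as a percentage
```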

Consider qualitative contributions

Although quantitative data is objective, qualitative data must also be analyzed. Sometimes it is not easy to put thoughts and feelings into figures, and without that qualitative input it would be impossible to draw conclusions about employee motivation, attitudes, and challenges.

For example, if you ask people to rate their satisfaction with their team from 1 to 5, a number other than five won’t tell you much. But if you ask them to write down the reason for their incomplete satisfaction, you’ll get the gist of their complaint or concern.

Segment the groups of respondents

Without a doubt, your workers are divided into different groups according to different criteria. And that translates into different perceptions of their jobs, their colleagues and their organization.

To find out, segment the engagement survey results while keeping the responses anonymous. 

Assure your employees of their anonymity and ask them to indicate the following (a short segmentation sketch follows this list):

  • age;
  • gender;
  • department and team;
  • tenure;
  • whether they are junior, intermediate or senior;
  • executive level (manager, director, vice president or C-level).
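
As a rough illustration of that segmentation (all column names and figures below are invented), a few lines of pandas are enough to break anonymous scores down by the attributes above:

```python
import pandas as pd

# Hypothetical anonymized results: one numeric engagement score per respondent,
# plus the segmentation attributes suggested above
df = pd.DataFrame({
    "department": ["Sales", "Sales", "Engineering", "Engineering", "HR"],
    "gender":     ["F", "M", "F", "M", "F"],
    "tenure_yrs": [1, 4, 2, 7, 3],
    "engagement": [3.2, 4.1, 4.5, 3.8, 4.0],
})

# Average engagement per department, then per department and gender combined
print(df.groupby("department")["engagement"].mean())
print(df.groupby(["department", "gender"])["engagement"].mean())
```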

Read the consensus

In this step the real action of the analysis begins. 

If you’ve followed our recommendations for quantifying (and segmenting) survey data, you’ll be ready to determine precisely what they’re trying to tell you.

In general terms, if most or all of your staff express the same opinion on an issue, you need to investigate the issue and improve something. 

On the other hand, if only a few people are unhappy with the same issue, it may not be necessary to address it so thoroughly.

It all depends on the importance of the topic for the business and the proper functioning of the team and the company. And it’s up to stakeholders to decide whether it’s worth putting effort into finding out what’s behind the results.

Cross-reference your data

Engagement levels are not always related to staff’s position, team or company. Instead, those levels may have to do with other factors, such as salary or the benefits package, to name a couple.

And it’s up to you to combine engagement survey data with data from other sources.

Therefore, consider the information in your HR management system when reviewing your engagement survey results.

Additionally, evaluate engagement data against business information. 

This is information that you can extract from your ERP system that is closely related to business results.


Compare results

The ultimate goal of analyzing engagement survey data is to uncover critical areas for improvement. And to do this as comprehensively as possible, you should compare your current engagement survey results with:

  • The results of your previous engagement surveys to understand the engagement levels of your organization, departments and teams over time.
  • The results of national and global surveys on engagement, especially those of other companies in your sector with a similar activity to yours.

And these are the insights to look for:

  • Why is your organization performing better or worse than before?
  • Why do certain departments and teams perform better or worse than before?
  • Why does your company perform better or worse than similar ones in your country or abroad?
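
To make these comparisons concrete, here is a minimal pandas sketch (with invented scores) that puts the current results, the previous survey and an industry benchmark side by side and sorts the gaps so the biggest shortfalls surface first:

```python
import pandas as pd

# Hypothetical engagement scores (0-100) by survey area
scores = pd.DataFrame({
    "area": ["Recognition", "Career growth", "Communication"],
    "current": [62, 55, 71],
    "previous": [58, 57, 65],
    "benchmark": [70, 60, 68],
})
scores["vs_previous"] = scores["current"] - scores["previous"]
scores["vs_benchmark"] = scores["current"] - scores["benchmark"]

# Areas with the largest negative gap against the benchmark appear first
print(scores.sort_values("vs_benchmark"))
```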

How to organize data in your employee engagement report

An employee engagement survey report should shed light on how engagement affects the performance of your company and your staff. 

But a report like this is useless if you do not organize the data it contains well. Let’s see how.

First of all, you must keep in mind the objective of the survey: developing an action plan to improve the areas with the greatest positive impact on your employee engagement levels.

So keep in mind that some areas traditionally score low in any organization. We’re talking pay and benefits, career progression, and workplace politics, to name a few.

But as a general rule, you should prioritize areas where your company scored poorly compared to industry benchmarks. Those are the ones most likely to:

  • Generate positive ROI once you improve them.
  • Promote improvement of all other areas of the employee experience.

Typically, the most impactful areas are:

  • Employee appreciation;
  • Response to proactive employees;
  • Employee participation in decision-making;
  • Communication with leaders.

But your data might suggest other areas to focus on.

Now, you need to conveniently organize the survey results in your employee engagement report. 

In other words, you must disclose the engagement figures:

  • for the entire company;
  • by department;
  • by team;
  • by age and gender;
  • by tenure;
  • by executive level;
  • by seniority;
  • by period (current month, quarter or year versus the previous one);
  • by region (within your country, in your foreign locations and in comparison to national and global benchmarks in your sector);
  • any combination of the above that makes sense, such as by gender and team or by age and department.

And to identify areas for improvement, you should display the survey data by those areas within each of the divisions above. We recommend that you convert these divisions into distinct sections of the document.

We also recommend using media to visualize results, such as charts and graphs. 

For example, use:

  • Line graphs: to identify trends over time.
  • Bar charts: to compare this year’s data with last year’s data.
  • Callout charts: to highlight surprising figures or conclusions.

These visuals will help stakeholders objectively understand and analyze the results of the employee engagement report. But most importantly, visualization makes it easy to prioritize areas for improvement and provides actionable results.
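
If you build the report yourself rather than with a survey tool, a few lines of matplotlib (one charting library among many) are enough for visuals like these; the sketch below uses made-up numbers to plot engagement over time against a benchmark:

```python
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
engagement = [68, 70, 66, 73]   # hypothetical overall engagement scores
benchmark = [70, 70, 71, 71]    # hypothetical industry benchmark

fig, ax = plt.subplots()
ax.plot(quarters, engagement, marker="o", label="Our company")
ax.plot(quarters, benchmark, linestyle="--", label="Industry benchmark")
ax.set_ylabel("Engagement score")
ax.set_title("Engagement over time vs. benchmark")
ax.legend()
plt.show()
```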

Good practices for communicating engagement survey results

After collecting and analyzing employee engagement survey data, it’s time to share it within the company.

And here are our tips for communicating engagement survey results to your employees and leaders.

3 Tips for Sharing Employee Engagement Survey Results with Employees

Immediately after the employee engagement survey closes, the CEO should send a communication to the entire company. Alternatively, the VP of HR or a senior HR leader can do this.

And that communication should open with a word of recognition.

Thank employees for participating in the study

Your boss – if not yourself – can do this via email or in an all-hands meeting. 

But it’s essential that you thank employees as soon as the survey closes. And in addition to saying thank you, the leader must reaffirm their commitment to taking engagement to higher levels.

Advise the leader to show appreciation for employees’ dedication to helping improve your organizational culture. This conveys the message that employee opinion is valuable, which in itself has a positive impact on engagement.

Briefly present the engagement data you have obtained

One week after closing the survey, your leader should share an overview of the results with the organization. Again, an email or company-wide meeting is all it takes.

The overview should include participation statistics and a summary of the main results (the best and worst figures).

This time is also a great opportunity for your leader to explain what employees should expect next. And one way to set expectations is to outline the action plan.

However, the leader should not go into much detail at this point.

The first communication of employee engagement results should focus on numbers with a broader impact. 

In other words, it is an occasion to focus on the effect of data at the organizational level.


Report complete engagement data and plan improvements

Three weeks after the survey closes, HR and leaders – team leaders and other executives – must get to work:

  • Carefully review the results.
  • Detail the action plan: the areas of improvement you will address and the engagement initiatives you will implement. 

Once key stakeholders have decided on the action plan, it’s time to communicate all the details to employees. 

The deadline should be no later than one or two months after the survey closes.

3 Tips for Sharing Engagement Survey Results with Leaders and Key Stakeholders

Once the results of the engagement survey are obtained, the first step is to share them with the management team. These are our main recommendations on how to approach this task.

Don’t rush the decision on what to do with the data

Give your leaders time to review the engagement data, digest it, and think carefully about it. 

We recommend this timeline:

  • One week after the survey closes, before communicating high-level results.
  • Three weeks after the survey closes, before thoroughly discussing the data and delving into the action plan.
  • One or two months after the survey closes, before communicating detailed results.

Increasing the engagement levels of your employees is a process of change. And as with any corporate change, internalizing it does not happen overnight. Additionally, your leaders are the ones who must steer the helm of change, so they need time to prepare.

Emphasize the end goal and fuel the dialogue

The process of scrutinizing engagement data starts with you. Present to your leaders:

  • the overall employee engagement score;
  • company-wide trends;
  • department-specific trends;
  • strengths and weaknesses (or opportunities).

Leaders must clearly understand what organizational culture is being pursued. 

And the survey results will help them figure out what’s missing. 

So make sure you communicate this mindset to them.

Next, as you discuss the data in depth, you should promote an open dialogue. Only then will your leaders agree on an effective action plan to increase engagement levels.

Don’t sweep problems under the rug

You can’t increase employee engagement without transparency. And you play a role in it too. 

You should share the fantastic results and painful numbers from your engagement survey with your leaders.

Reporting alarming discoveries is mandatory for improvement. 

After all, how could you improve something without fully understanding and dissecting it? 

Additionally, investigating negative ratings and comments is ultimately a win-win for both workers and the company.

Real Examples of Employee Engagement Reports

Here are four employee engagement reports that caught our attention. We’ll look at why they stood out and what you can learn from them.

1. New Mexico Department of Environment

The New Mexico Department of Environment’s engagement report:

1. Starts with a message from a senior leader in the organization, providing:

  • The overall response rate;
  • The overall engagement level;
  • Some areas for improvement;
  • A reaffirmation of senior management’s commitment to addressing employee feedback.

2. Uses graphs to highlight the most interesting conclusions.

3. Breaks down the figures from an overview to year-on-year highs and lows by department and survey section.

4. Compares employees’ level of engagement to a national benchmark.

5. Includes information on the organization’s engagement actions throughout the year prior to the survey.

6. Provides a demographic breakdown of respondents.

7. Clarifies the next steps for leaders and how employees can participate.

8. Includes an appendix containing year-over-year scores for all survey questions that used a numerical scale.

Note:  The year-over-year comparison allows this organization to identify trends in employee engagement.

2. GitLab

The GitLab Commit Report:

  • Explains how survey responses will be kept confidential.
  • Lists the areas of interest for the survey.
  • Shows a chronology of the actions the company carried out around the survey.
  • Clarifies the steps that will follow once the survey closes.
  • Presents the global response rate, the global engagement level and an industry benchmark.
  • Thanks employees for participating in the survey.
  • Reveals the top-ranked responses in the three main areas of interest and compares them to the industry benchmark.
  • Highlights areas that require improvement.

👀 Note: The timeline of survey actions may seem insignificant. However, it is an element of transparency that builds trust among readers of the report.

3. UC Irvine Human Resources

The University of California, Irvine HR Employee Engagement Report:

  1. Starts by justifying why employee engagement is important to workplace culture and various stakeholders.
  2. Defines the responsibilities of everyone involved in engaging those stakeholders.
  3. Recalls the results of the previous employee engagement survey and sets them as a baseline.
  4. Compares the most recent survey results with the baseline figures.
  5. Distinguishes engaged, disengaged, and actively disengaged staff members between the previous and most recent data by organizational unit.
  6. Lists new opportunities the department should address and strengths it should continue to explore.
  7. Presents a timeline of the phased engagement program and some planned actions.
  8. Describes the next steps leaders should take with their team members.

👀 Note: The report notes that the figures vary between the two editions of the survey because the HR department encouraged staff participation instead of forcing it.

4. UC Riverside Chancellor’s Office

The University of California, Riverside Office of the Chancellor’s Employee Engagement Report:

  1. Contains instructions on how scores were calculated.
  2. Compares employee engagement survey results to different types of references, from previous survey results to national numbers.
  3. Highlights issues that represent a priority for the organization.
  4. Distinguishes the level of statistical significance of each number, clarifying the extent to which results are meaningful.
  5. Describes the suggested actions in some detail.
  6. Groups scores by category (such as professional development or performance management), role (such as manager or director), gender, ethnicity, seniority and salary range.
  7. Breaks down the scores within each category.
  8. Shows the total percentage of employees at each engagement level, from highly engaged, empowered and energized to disengaged.
  9. Concludes with the main drivers of engagement, such as the promotion of social well-being.

Note: The document is highly visual and relies on colors to present data. While this appeals to most readers, it makes the report less inclusive and can compromise organization-wide interpretation.

Now that you know how to analyze your survey data and organize your engagement report, learn how to create an employee engagement program.

 

What considerations are taken into account for the best longitudinal data collection?


LONGITUDINAL STUDIES: CONCEPT AND PARTICULARITIES

WHAT IS A LONGITUDINAL STUDY?

The discussion about the meaning of the term longitudinal was summarized by Chin in 1989: for epidemiologists it is synonymous with a cohort or follow-up study, while for some statisticians it implies repeated measurements. Chin himself decided not to define the term, as it was difficult to find a concept acceptable to everyone, and chose to consider it equivalent to "monitoring", the most common interpretation among professionals at the time.

The longitudinal study in epidemiology

In the 1980s it was very common to use the term longitudinal simply to separate cause from effect, as opposed to the term cross-sectional. Miettinen defines it as a study whose basis is the experience of the population over time (as opposed to a cross-section of the population). Consistent with this idea, Rothman, in his 1986 text, indicates that the word longitudinal denotes the existence of a time interval between exposure and the onset of the disease. Under this meaning, the case-control study, which is a sampling strategy to represent the experience of the population over time (especially under Miettinen's ideas), would also be a longitudinal study.

Abramson likewise agrees with this idea and also differentiates longitudinal descriptive studies (studies of change) from longitudinal analytical studies, which include case-control studies. Kleinbaum et al. similarly define the term longitudinal as opposed to cross-sectional but, with a somewhat different nuance, they speak of the "longitudinal experience" of a population (versus its "cross-sectional experience"), and for them it implies at least two series of observations over a follow-up period. The latter authors exclude case-control studies. Kahn and Sempos do not have a heading for these studies either, and in their keyword index the entry "longitudinal study" reads "see prospective study."

This is reflected in the Dictionary of Epidemiology edited by Last, which considers the term "longitudinal study" synonymous with cohort study or follow-up study. In Breslow and Day's classic text on cohort studies, the term longitudinal is considered equivalent to cohort and is used interchangeably. However, Cook and Ware defined the longitudinal study as one in which the same individual is observed on more than one occasion and differentiated it from follow-up studies, in which individuals are followed until the occurrence of an event such as death or illness (although this event is already the second observation).


Since 1990, several texts have considered the term longitudinal equivalent to other names, although most omit it. A reflection of this is the book co-edited by Rothman and Greenland, in which there is no specific section for longitudinal studies within the chapters dedicated to design; the Encyclopedia of Epidemiological Methods follows the same trend and does not offer a specific entry for this type of study.

The fourth edition of Last’s Dictionary of Epidemiology reproduces his entry from previous editions. Gordis considers it synonymous with a concurrent prospective cohort study. Aday partially follows Abramson’s ideas, already mentioned, and differentiates descriptive studies (several cross-sectional studies sequenced over time) from analytical ones, among which are prospective or longitudinal cohort studies.

In other fields of clinical medicine, the longitudinal sense is considered the opposite of cross-sectional and is equated with cohort studies, often prospective ones. This is confirmed, for example, in publications focused on the field of menopause.

The longitudinal study in statistics

Here the ideas are much clearer: a longitudinal study is one that involves more than two measurements over the course of follow-up. There must be more than two, since every cohort study already has two measurements, one at the beginning and one at the end of follow-up. This is the concept found in the aforementioned 1979 text by Goldstein. In that same year, Rosner was explicit in indicating that longitudinal data imply repeated measurements on subjects over time, proposing a new analysis procedure for this type of data. Since that time, articles in statistics journals and textbooks have been consistent with the same concept.

Two reference works in epidemiology, although they do not define longitudinal studies in the corresponding section, coincide with the prevailing statistical notion. In the book co-edited by Rothman and Greenland, in the chapter "Introduction to regression modeling", Greenland himself states that longitudinal data are repeated measurements on subjects over a period of time and that they can be collected for time-dependent exposures (e.g., smoking, alcohol consumption, diet, or blood pressure) or recurrent outcomes (e.g., pain, allergy, depression, etc.).

In the Encyclopedia of Epidemiological Methods, the “sample size” entry includes a “longitudinal studies” section that provides the same information provided by Greenland.

It is worth clarifying that the statistical view of a “longitudinal study” is based on a particular data analysis (taking repeated measures into account) and that the same would be applicable to intervention studies, which also have follow-up.

To conclude this section, in the monographic issue of Epidemiologic Reviews dedicated to cohort studies, Tager, in his article focused on the outcome variable of cohort studies, broadly classifies cohort studies into two large groups, "life table" and "longitudinal", clarifying that this classification is somewhat "artificial". The former are the conventional ones, in which the result is a discrete variable, the exposure and the population-time are summarized, incidences are estimated and the main measure is the relative risk.

"artificial"

The latter incorporate a different analysis, taking advantage of repeated measurements on subjects over time, allowing inference not only at the population level but also at the individual level about changes in a process over time or transitions between different states of health and illness.

The previous ideas show that in epidemiology there is a tendency to avoid the concept of the longitudinal study. However, summarizing the ideas discussed above, the notion of a longitudinal study refers to a cohort study in which more than two measurements are made over time and in which the analysis takes the different measurements into account. The three key elements are: follow-up, more than two measurements, and an analysis that takes them into account. This can be done prospectively or retrospectively, and the study can be observational or interventional.

PARTICULARITIES OF LONGITUDINAL STUDIES

When measuring over time, quality control plays an essential role. It must be ensured that all measurements are carried out in a timely manner and with standardized techniques. The long duration of some studies requires special attention to changes in personnel, deterioration of equipment, changes in technologies, and inconsistencies in participant responses over time.

There is a greater probability of dropout during follow-up. Several factors are involved:

  • The definition of the population according to an unstable criterion. For example, defining eligibility by residence in a specific geographic area may make participants who change address ineligible in later phases.
  • Dropout is greater when no further attempts are made, in subsequent phases of follow-up, to reach responders who could not be contacted once.
  • The object of the study matters: in a political science study, for example, those not interested in politics will drop out more often.
  • The amount of personal attention devoted to responders. Telephone and postal interviews are less personal than those conducted face to face and do less to strengthen ties with the study.
  • The time responders must invest to satisfy the researchers’ demand for information. The higher it is, the greater the frequency of dropouts.
  • The frequency of contact can also play a role, although not everyone agrees: some studies have documented that an excess of contacts impairs follow-up, while others have found no relationship or even the opposite.

To avoid dropouts, it is advisable to establish strategies to retain and track participants. Willingness to participate, and what is expected of participants, should be assessed at the outset. Bridges should be built with participants by sending congratulatory letters, study updates, and so on.

The frequency of contact must be regular. Study staff must be enthusiastic, easy to communicate with, quick and appropriate in responding to participants’ problems, and adaptable to their needs. Incentives that motivate continued participation in the study should not be dismissed.

Thirdly, another major problem compared with other cohort studies is the existence of missing data. If a participant is required to have all measurements taken, the effect can be similar to dropout during follow-up. Techniques for the imputation of missing values have been developed for this purpose and, although it has been suggested that they may not be necessary if generalized estimating equations (GEE analysis) are applied, other procedures have been shown to give better results, even when the losses are completely random.

Frequently, information losses are differential, and more measurements are lost in patients with a worse level of health. In these cases it is recommended that data imputation take into account the existing data for the individual with missing values.

Analysis

In the analysis of longitudinal studies it is possible to handle time-dependent covariates that can both influence the exposure under study and be influenced by it (variables that behave simultaneously as confounders and as intermediates between exposure and effect). Similarly, it allows controlling for recurrent outcomes that can act on the exposure and be caused by it (they behave both as confounders and as effects).

Longitudinal analysis can be used when there are measurements of the effect and/or exposure at different moments in time. Suppose that a dependent variable Y is a function of a time-varying variable X and a stable (time-invariant) variable Z, expressed by the following equation:

$$Y_{it} = b\,x_{it} + a\,z_i + e_{it}$$

where the subscript i refers to the individual, t to the moment in time, and e is an error term (Z does not change over time, which is why it carries only the i subscript). The existence of several measurements allows us to estimate the coefficient b without needing to know the value of the stable variable, by regressing the difference in the effect (Y) on the difference in the values of the time-varying independent variables:

$$Y_{it} - Y_{i1} = b(x_{it} - x_{i1}) + a(z_i - z_i) + e_{it} - e_{i1} = b(x_{it} - x_{i1}) + e_{it} - e_{i1}$$

That is, it is not necessary to know the value of the time-independent (stable) variables. This is an advantage over other analyses, in which these variables must be known. The above model generalizes easily to a multivariate vector of factors changing over time.
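
As a toy numerical check of this idea (simulated data, not taken from any real study), the sketch below generates repeated measurements with an unobserved stable trait z and recovers b from first differences alone:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 200, 4                        # individuals, time points
z = rng.normal(size=(n, 1))          # stable trait, never observed by the analyst
x = rng.normal(size=(n, t))          # time-varying exposure
e = rng.normal(scale=0.5, size=(n, t))
b_true, a_true = 1.5, 2.0
y = b_true * x + a_true * z + e      # Y_it = b*x_it + a*z_i + e_it

# Difference each later wave against the baseline: the a*z_i term cancels out
dy = (y[:, 1:] - y[:, [0]]).ravel()
dx = (x[:, 1:] - x[:, [0]]).ravel()
b_hat = np.sum(dx * dy) / np.sum(dx ** 2)   # OLS slope through the origin
print(f"estimated b = {b_hat:.3f} (true value {b_true})")
```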

Longitudinal analysis is carried out within the context of generalized linear models and has two objectives: to adapt conventional regression tools, in which the effect is related to the different exposures, and to take into account the correlation among the repeated measurements within subjects. This last aspect is very important. Suppose you analyze the effect of growth on blood pressure; the blood pressure values of a subject across the different examinations depend on the initial or baseline value, and this must be taken into account.

For example, a longitudinal analysis could be performed in a childhood cohort in which vitamin A deficiency (which can change over time) is assessed as the main exposure for the risk of infection (which can occur multiple times over time), controlling for the influence of age, weight and height (time-dependent variables). Longitudinal analyses can be classified into three large groups.

a) Marginal models: they combine the different measurements (which are slices in time) of the prevalence of the exposure to obtain an average prevalence or another summary measure of the exposure over time, and relate it to the frequency of the disease. The longitudinal element is age or duration of follow-up in the regression analysis. The coefficients of this type of model are transformed into a population prevalence ratio; in the vitamin A and infection example it would be the prevalence of infection in children with vitamin A deficiency divided by the prevalence of infection in children without vitamin A deficiency.

b) Transition models regress the present result on past values and on past and present exposures. Markov models are an example. The model coefficients are directly transformed into a ratio of incidences, that is, into RRs; in the example it would be the RR of vitamin A deficiency for infection.

c) Random effects models allow each individual to have unique regression parameters, and there are procedures for standardized results, binary data, and person-time data. The model coefficients are transformed into an odds ratio referring to the individual, which is assumed to be constant throughout the population; in the example it would be the odds of infection in a child with vitamin A deficiency versus the odds of infection in the same child without vitamin A deficiency.
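
For instance, a marginal (population-average) model of the kind described in (a) can be fitted with generalized estimating equations; the sketch below is one possible setup using the statsmodels library and purely simulated data for the vitamin A example, with a logistic link, so its coefficients are population-averaged log odds ratios rather than the prevalence ratio mentioned above:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated toy data: several visits per child, a binary infection outcome,
# and time-varying vitamin A deficiency and age (names purely illustrative)
rng = np.random.default_rng(1)
n_children, n_visits = 300, 4
child_id = np.repeat(np.arange(n_children), n_visits)
age = np.tile(np.arange(n_visits), n_children).astype(float)
vit_a_deficient = rng.integers(0, 2, size=n_children * n_visits)
logit = -1.5 + 0.8 * vit_a_deficient + 0.1 * age
infection = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
df = pd.DataFrame({"child_id": child_id, "age": age,
                   "vit_a_deficient": vit_a_deficient, "infection": infection})

# Population-average (marginal) model via GEE; the exchangeable working
# correlation accounts for repeated measurements within the same child
model = smf.gee("infection ~ vit_a_deficient + age", groups="child_id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```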

Linear, logistic and Poisson models, and many survival analyses, can be considered particular cases of generalized linear models. There are procedures that allow late entries, or entry at different and unequal times, into the observation of a cohort.

In addition to the parametric models indicated in the previous paragraph, analysis using non-parametric methods is possible; for example, the use of functional analysis with splines has recently been reviewed.

Several specific texts on longitudinal data analysis have been mentioned. One of them even offers worked examples with the routines needed to carry out the analysis correctly in different conventional statistical packages (Stata, SAS, SPSS).

What are the best 5 common data collection instruments?


Data Collection

In the age when information is power, how we gather that information should be one of our major concerns, right? Also, which of the many data collection methods is the best for your particular needs? Whatever the answer to the two questions above, one thing is for sure – whether you’re an enterprise, organization, agency, entrepreneur, researcher, student, or just a curious individual, data gathering needs to be one of your top priorities.

Still, raw data isn’t always particularly useful. Without proper context and structure, it’s just a set of random facts and figures, after all. However, if you organize, structure, and analyze data obtained from different sources, you’ve got yourself powerful “fuel” for your decision-making.

Data collection is defined as the “process of gathering and measuring information on variables of interest, in an established systematic fashion that enables one to answer queries, stated research questions, test hypotheses, and evaluate outcomes.”

It is estimated that, by 2025, the total volume of data created and consumed worldwide will reach 163 zettabytes. That being said, there are numerous reasons for data collection, but here we are going to focus primarily on those relevant to marketers and small business owners:

  • It helps you learn more about your target audience by collecting demographic information
  • It enables you to discover trends in the way people change their opinions and behavior over time or in different circumstances
  • It lets you segment your audience into different customer groups and direct different marketing strategies at each of the groups based on their individual needs
  • It facilitates decision making and improves the quality of decisions made
  • It helps resolve issues and improve the quality of your product or service based on the feedback obtained

According to Clario, the top global collectors of personal data among social media apps are:

  • Facebook
  • Instagram
  • TikTok
  • Clubhouse
  • Twitter

And given how successful they are at meeting their users’ needs and interests, it is safe to say that a streamlined and efficient data collection process is at the core of any serious business in 2023.

Before we dive deeper into different data collection techniques and methods, let’s just briefly differentiate between the two main types of data collection – primary and secondary.

Primary vs. Secondary Data Collection

Primary data collection

Primary data (also referred to as raw data) is the data you collect first-hand, directly from the source. In this case, you are the first person to interact with and draw conclusions from such data, which makes it more difficult to interpret.

According to research, about 80% of all data collected by 2025 will be unstructured. In other words, it will have been collected as primary data, but nothing meaningful will have been done with it. Unstructured data needs to be organized and analyzed if it’s going to be used as in-depth fuel for decision-making.

Secondary data collection

Secondary data represents information that has already been collected, structured, and analyzed by another researcher. If you are using books, research papers, statistics, or survey results that were created by someone else, they are considered secondary data.

Secondary data collection is much easier and faster than primary. But, on the other hand, it’s often very difficult to find secondary data that’s 100% applicable to your own situation, unlike primary data collection, which is in most cases done with a specific need in mind.

Some examples of secondary data include census data gathered by the US Census Bureau, stock prices data published by Nasdaq, employment and salaries data posted on Glassdoor, all kinds of statistics on Statista, etc. Further along the line, both primary and secondary data can be broken down into subcategories based on whether the data is qualitative or quantitative.

Quantitative vs. Qualitative data

Quantitative Data

This type of data deals with things that are measurable and can be expressed in numbers or figures, or using other values that express quantity. That being said, quantitative data is usually expressed in numerical form and can represent size, length, duration, amount, price, and so on.

Quantitative research is most likely to provide answers to questions such as who? when? where? what? and how many?

Quantitative survey questions are in most cases closed-ended and created in accordance with the research goals, thus making the answers easily transformable into numbers, charts, graphs, and tables.

The data obtained via quantitative data collection methods can be used to conduct market research, test existing ideas or predictions, learn about your customers, measure general trends, and make important decisions.

For instance, you can use it to measure the success of your product and which aspects may need improvement, the level of satisfaction of your customers, to find out whether and why your competitors are outselling you, or any other type of research.

As quantitative data collection methods are often based on mathematical calculations, the data obtained that way is usually seen as more objective and reliable than qualitative. Some of the most common quantitative data collection techniques include surveys and questionnaires (with closed-ended questions).

Compared to qualitative techniques, quantitative methods are usually cheaper and it takes less time to gather data this way. Plus, due to a pretty high level of standardization, it’s much easier to compare and analyze the findings obtained using quantitative data collection methods.

Qualitative Data

Unlike quantitative data, which deals with numbers and figures, qualitative data is descriptive in nature rather than numerical. Qualitative data is usually not easily measurable as quantitative and can be gained through observation or open-ended survey or interview questions.

Qualitative research is most likely to provide answers to questions such as “why?” and “how?”

As mentioned, qualitative data collection methods are most likely to consist of open-ended questions and descriptive answers and little or no numerical value. Qualitative data is an excellent way to gain insight into your audience’s thoughts and behavior (maybe the ones you identified using quantitative research, but weren’t able to analyze in greater detail).

Data obtained using qualitative data collection methods can be used to find new ideas, opportunities, and problems, test their value and accuracy, formulate predictions, explore a certain field in more detail, and explain the numbers obtained using quantitative data collection techniques.

As qualitative data collection methods usually do not involve numbers and mathematical calculations but are rather concerned with words, sounds, thoughts, feelings, and other non-quantifiable data, qualitative data is often seen as more subjective, but at the same time, it allows a greater depth of understanding.

Some of the most common qualitative data collection techniques include open-ended surveys and questionnaires, interviews, focus groups, observation, case studies, and so on.


5 Data Collection Methods

Before we dive deeper into different data collection tools and methods – what are the 5 methods of data collection? Here they are:

  • Surveys, quizzes, and questionnaires
  • Interviews
  • Focus groups
  • Direct observations
  • Documents and records (and other types of secondary data, which won’t be our main focus here)

Data collection methods can further be classified into quantitative and qualitative, each of which is based on different tools and means.

Quantitative data collection methods

1. Closed-ended Surveys and Online Quizzes

Closed-ended surveys and online quizzes are based on questions that give respondents predefined answer options to opt for. There are two main types of closed-ended surveys – those based on categorical and those based on interval/ratio questions.

Categorical survey questions can be further classified into dichotomous (‘yes/no’), multiple-choice questions, or checkbox questions and can be answered with a simple “yes” or “no” or a specific piece of predefined information.

Interval/ratio questions, on the other hand, can consist of rating-scale, Likert-scale, or matrix questions and involve a set of predefined values to choose from on a fixed scale. To learn more, we have prepared a guide on different types of closed-ended survey questions.

Once again, these types of data collection methods are a great choice when looking to get simple and easily analyzable counts, such as “85% of respondents said surveys are an effective means of data collection” or “56% of men and 61% of women have taken a survey this year” (disclaimer: made-up stats).

If you’d like to create something like this on your own, learn more about how to make the best use of our survey maker.

It lets you segment your audience into different customer groups and direct different marketing strategies at each of the groups based on their individual needs (check out our quiz maker for more details).

Qualitative data collection methods

2. Open-Ended Surveys and Questionnaires

Opposite to closed-ended are open-ended surveys and questionnaires. The main difference between the two is the fact that closed-ended surveys offer predefined answer options the respondent must choose from, whereas open-ended surveys allow the respondents much more freedom and flexibility when providing their answers.

Here’s an example that best illustrates the difference:
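
For instance, a closed-ended version might ask “How satisfied are you with our customer support?” and offer a 1–5 rating scale, while the open-ended version would ask “How do you feel about our customer support?” and leave a free-text box for the answer.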

When creating an open-ended survey, keep in mind the length of your survey and the number and complexity of questions. You need to carefully determine the optimal number of questions, as answering open-ended questions can be time-consuming and demanding, and you don’t want to overwhelm your respondents.

Compared to closed-ended surveys, one of the quantitative data collection methods, the findings of open-ended surveys are more difficult to compile and analyze due to the fact that there are no uniform answer options to choose from. In addition, surveys are considered to be among the most cost-effective data collection tools.

3. 1-on-1 Interviews

One-on-one (or face-to-face) interviews are one of the most common types of data collection methods in qualitative research. Here, the interviewer collects data directly from the interviewee. Due to it being a very personal approach, this data collection technique is perfect when you need to gather highly personalized data.

Depending on your specific needs, the interview can be informal, unstructured, conversational, and even spontaneous (as if you were talking to your friend) – in which case it’s more difficult and time-consuming to process the obtained data – or it can be semi-structured and standardized to a certain extent (if you, for example, ask the same series of open-ended questions).

4. Focus groups

The focus group data collection method is essentially an interview method, but instead of being done 1-on-1, here we have a group discussion.

Whenever the resources for 1-on-1 interviews are limited (whether in terms of people, money, or time) or you need to recreate a particular social situation in order to gather data on people’s attitudes and behaviors, focus groups can come in very handy.

Ideally, a focus group should have 3-10 people, plus a moderator. Of course, depending on the research goal and what the data obtained is to be used for, there should be some common denominators for all the members of the focus group.

For example, if you’re doing a study on the rehabilitation of teenage female drug users, all the members of your focus group have to be girls recovering from drug addiction. Other parameters, such as age, education, employment, or marital status, do not have to be similar.


5. Direct observation

Direct observation is one of the most passive qualitative data collection methods. Here, the data collector takes a participatory stance, observing the setting in which the subjects of their observation are while taking down notes, video/audio recordings, photos, and so on.

Due to its participatory nature, direct observation can lead to bias in research, as the participation may influence the attitudes and opinions of the researcher, making it challenging for them to remain objective. Plus, the fact that the researcher is a participant too can affect the naturalness of the actions and behaviors of subjects who know they’re being observed.

Interactive online data collection

Above, you’ve been introduced to 5 different data collection methods that can help you gather all the quantitative and qualitative data you need. Even though we’ve classified the techniques according to the type of data you’re most likely to obtain, many of the methods used above can be used to gather both qualitative and quantitative data.

While an online quiz maker may seem like an innocuous tool for data collection, it’s actually a great way to engage with your target audience in a way that yields actionable and valuable data and information. Quizzes can be especially helpful in gathering data about people’s behavior, personal preferences, and more intimate impulses.

You can go for these options:

  • Personality quiz

This type of quiz has been used for decades by psychologists and human resources managers – if administered properly, it can give you a great insight into the way your customers are reasoning and making decisions.

The results can come in various forms – they are usually segmented into groups with similar characteristics. You can use it to find out what your customers like, what their habits are, how they decide to purchase a product, etc.

  • Scored survey

This type of questionnaire lingers somewhere between a quiz and survey – but in this case, you can quantify the result based on your own metrics and needs.  For example, you can use it to determine the quality of a lead.

  • Survey

You can use surveys to collect opinions and feedback from your customers or audience. For example, you can use it to find out how old your customers are, what their education level is, what they think about your product, and how all these elements interact with each other when it comes to the customer’s opinion about your business.

  • Test quiz

This type of quiz can help you test the user’s knowledge about a certain topic, and it differs from the personality quiz in that its answers are either correct or incorrect.

You can use it to test your products or services. For example, if you are selling language-learning software, a test quiz gives you valuable insight into its effectiveness.

How to make data collection science-proof

If you want to acquire this often highly sensitive information and draw conclusions from it, there are specific rules you need to follow. The first group of rules concerns the scientific methodology of this form of research, and the second concerns legal regulation.

1. Pay Attention to Sampling

Sampling is the first problem you may encounter if you are seeking to research a demographic that extends beyond the people on your email list or website. A sample, in this case, is a group of people taken from a larger population for measurement.

To be able to draw correct conclusions, you have to say with scientific certainty that this sample reflects the larger group it represents.

Your sample size depends on the type of data analysis you will perform and the desired precision of the estimates.

Remember that until recently, users of the internet and e-mail were not truly representative of the general population. This gap has closed significantly in recent years, but the way you distribute your quiz or survey can also limit the scope of your research.

For example, a Buzzfeed type of quiz is more likely to attract a young, affluent demographic that doesn’t necessarily reflect the opinions and habits of middle-aged individuals.

You can use sample-size calculation software to work out the size of the sample you need; a minimal sketch of the underlying formula follows below. You can also read more about sampling and post-survey adjustments that help guarantee that your results are reliable and applicable.
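
If you would rather see the arithmetic such calculators perform, here is a sketch of the standard sample-size formula for estimating a proportion, with an optional finite population correction (the 10,000-person audience is just an example):

```python
import math

def sample_size_for_proportion(margin_of_error=0.05, z=1.96, p=0.5, population=None):
    """Minimum sample size to estimate a proportion within +/- margin_of_error
    at the confidence level implied by z (1.96 is roughly 95%)."""
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population:                      # finite population correction
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# e.g. a 5% margin of error at 95% confidence for an audience of 10,000 people
print(sample_size_for_proportion(0.05, 1.96, 0.5, population=10_000))  # about 370
```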

2. Ensure a high response rate

Online survey response rates can vary and are sometimes as low as 1%. You want to make sure that you offer potential respondents some form of incentive (for example, a discount on your product, or entertainment value for people who take personality quizzes).

Response rate is influenced by participants’ interests, survey structure, communication methods, and assurance of privacy and confidentiality. We will deal with confidentiality in a later section, and here you can learn more about optimizing your quizzes for high response rates.

Now that you know the advantages and disadvantages of online quizzes and surveys, here are the key takeaways for making a high-quality questionnaire.

3. Communicate clearly

Keep your language simple and avoid questions that may lead to confusion or ambiguous answers. Unless your survey or quiz targets a specific group, the language shouldn’t be too technical or complicated.

Also, avoid cramming multiple questions into one. For example, you can ask whether the product is “interesting and useful,” and offer “yes” and “no” as an answer – but the problem is that it could be interesting without being useful and vice versa.

4. Keep it short and logical

Keep your quizzes and surveys as short as possible so you don’t risk people opting out of the questionnaire halfway. If the quiz or survey has to be longer, divide it into several segments of related questions. For example, you can group questions in a personality quiz into interests, goals, daily habits, etc. Follow a logical flow with your questions; don’t jump from one topic to another.

5. Avoid bias

Don’t try to nudge respondents’ answers towards a certain result. We know it feels easier to ask how amazing your product is, but try to stay neutral and simply ask people what they think about it.

Also, make sure that multimedia content in the survey or quiz does not affect responses.

6. Consider respondents’ bias

If you conduct personality quizzes, you may notice that you cannot always expect total accuracy when you ask people to talk about themselves. Sometimes, people don’t have an accurate perception of their own daily activities, so try to be helpful in the way you word the questions.

For example, it’s much easier for them to recall how much time they spend on their smartphone on a daily basis than to ask them to calculate it on a weekly or monthly basis.

Even then you may not get accurate answers, which is why you should cross-examine the results with other sources of information.

7. Respect Privacy and Confidentiality

As we previously mentioned, respecting users’ privacy and maintaining confidentiality is one of the most important factors that contribute to high response rates.

Until fairly recently, privacy and data protection laws were lagging decades behind our technological development. It took several major data-breach and data-mining scandals to put this issue on the agenda of governments and legal authorities.

For a good reason – here are some stats showing how Internet users feel about privacy.

  • 85% of the world’s adults want to do more to protect their online privacy
  • 71% of the world’s adults have taken measures to protect their online privacy
  • 1 in 4 Americans are asked to agree to a privacy policy on a daily basis
  • Two-thirds of the world’s consumers think that tech companies have too much control over their data
  • According to consumers, the most appropriate type of collected data is brand purchase history

Many of global users’ concerns were addressed for the first time in the General Data Protection Regulation (GDPR), which came into force on 25 May 2018. It brings the different privacy laws of European countries under one umbrella of legally binding EU regulation.

Although the law is European, each website that receives European visitors has to comply – and this means everyone. So what are your obligations under GDPR?

  • you have to seek permission to use customers’ data, explicitly and unambiguously
  • you have to explain why you need this data
  • you have to prove you need this data
  • you have to document the ways you use personal data
  • you have to report any data breaches promptly
  • you have to build accessible privacy settings into your digital products and websites
  • you have to switch privacy settings on by default
  • you have to carry out regular privacy impact assessments

While the new rulebook may seem intimidating at first, in reality it comes down to a matter of business ethics. Think about it in the simplest terms: sleazily sliding into people’s email inboxes may have short-term benefits, but in the long run it amounts to building an email list full of people who are uninterested in your product and irritated by your spam.

Actively seeking permission to send emails to your potential and existing customers is an excellent way to make sure that your list is full of high-quality leads that want to hear or buy from you.

Protecting your customers’ data or going to great lengths to explain how you’re going to use it establishes a long-term relationship based on trust.

What steps will you take to enhance the transparency of your data collection methods?


What are Data Collection Methods?

Data collection methods are techniques and procedures used to gather information for research purposes. These methods can range from simple self-reported surveys to more complex experiments and can involve either quantitative or qualitative approaches to data gathering.

Some common data collection methods include surveys, interviews, observations, focus groups, experiments, and secondary data analysis. The data collected through these methods can then be analyzed and used to support or refute research hypotheses and draw conclusions about the study’s subject matter.

Data collection methods play a crucial role in the research process, as they determine the quality and accuracy of the data collected. Here are some of the major reasons data collection methods matter:

  • Determines the quality and accuracy of collected data.
  • Ensures that the data is relevant, valid, and reliable.
  • Helps reduce bias and increase the representativeness of the sample.
  • Essential for making informed decisions and accurate conclusions.
  • Facilitates achievement of research objectives by providing accurate data.
  • Supports the validity and reliability of research findings.

Methods, techniques and constants for the evaluation of online public access catalogs

Until the emergence of the new information and communication technologies (ICT), and in particular the Internet, information systems generally contained tangible resources. With the advent of the Web, collection development became more complex: collections progressively came to include electronic and virtual documents, which required information professionals to face the challenge of changing their perspective on technical processes, this time with new requirements capable of addressing documentary complexity (especially from the point of view of structural integration).

This reality demands the reorganization and redesign of essential processes in information systems, particularly for the storage and retrieval of information, in such a way that they facilitate clear and expeditious access to information. In this sense, attention to the methods used for description becomes vitally important, both from a formal and content point of view.

Even though many organizations, work groups and individuals are using the Internet to generate and/or distribute information, and the amount of electronic resources available on the Web has increased substantially in recent years, a good part of the collections, especially Those that are not generated in HTML are “invisible” from the benefits that general searches on the Internet currently offer, so it can be argued that there is an imminent need to access this type of resources based on new management strategies. of the contents. Consequently, online catalogs require new specific tags (new metadata sets), new metalanguages, new semantics and new syntax to achieve efficient search and retrieval.

Methods

The new challenge for information professionals consists of representing not only the constant or explicit concepts of documents, but also changes in how these are understood or used, that is, emergent or circumstantial changes, which must also be identified as inputs for the construction of metadata and for the knowledge management process; this would reveal new topics of interest to potential users of information systems.

Competitiveness and excellence in the provision of online catalog services depend on a new strategic vision of quality evaluation and management, one that identifies the opportunities offered by a scenario in constant transformation and supports new demands for adaptability and continuous improvement in service delivery.

METHODS AND TECHNIQUES FOR EVALUATION OF ONLINE CATALOGS

The application of automation to the retrieval of bibliographic records was initially represented by large databases, which led to the creation of online catalogs. With the development of information and communication technologies supported by networks, access to records has transcended the doors of libraries and is now possible through remote access to online public access catalogs (OPACs). Their main objective was to enable end users to conduct online information searches autonomously and independently. Online catalogs were the first information retrieval systems designed to be used directly by the general public, requiring little or no training.

With their widespread use, online catalogs have become a dynamic channel of access to constantly growing information resources through the use of networks and the possibilities of hyperlinks.

Although several difficulties in their use still persist, OPACs are important for cataloguers: they serve as a guide to the use of rules and standards when working with bibliographic records, and they also make it possible to adopt measures, based on usability and user-centered design, that allow OPACs to be used and exploited in accordance with the needs of the users of information systems.

At present, the analysis and study of users, as well as the creation of products and services that satisfy their needs, is a complex issue, especially when these products and services live in the web environment, and even more so when they are analyzed under the influence of the philosophy of Web 2.0.

The truth is that the new generations of users (2.0 users) have grown up using computers and with access to all the benefits they offer. Their ways of consuming, accessing and processing information, as well as their needs and expectations, are therefore different: they require and expect personalized products and services with immediate response, collaborative and multitasking; they assume participatory learning; they prefer non-linear access to information; they prefer graphical representations to written text; and they expect the interfaces of different systems to be more intuitive.

These users consume a wide variety of information, but not in a static way: they in turn become producers of new information and benefit from the significant advantages of the knowledge that is built.

Information designers and professionals must aim to make OPACs a system that improves, promotes and facilitates the use and consumption of information in information systems, by incorporating techniques and tools that deliver what users value most: ease of use. In this sense, it can be said that studies aimed at improving OPACs take different perspectives and, above all, tend to focus partially on the different functions that OPACs offer.

EVALUATION METHODS

Regarding the methods used for evaluation, there is great terminological diversity in naming the different practices involved, but they can be systematized into three large groups: quantitative methods, qualitative methods and those based on comparison.

Quantitative methods

These methods focus above all on the collection of statistical information related to the functioning of the institution: efficiency, effectiveness and cost-effectiveness. They are methods focused on the operation of the systems and, although very necessary, they have the drawback of relying on statistics collected by staff or by automatic systems; the data collected may therefore carry a certain bias, so the results are not completely reliable.

Qualitative methods

They rely on qualitative information collection techniques, such as exchanges of opinion or brainstorming, interviews and questionnaires, strategies that are much closer to human perceptions. They are mostly used to discover the long-term results, goals and impact of systems.

Qualitative methods tend to use a natural and holistic approach in the evaluation process. “They also tend to pay more attention to the subjective aspects of human experience and behavior.” These methods must be applied with extreme care, always keeping in mind that satisfaction with the results of the systems depends on the groups of users who receive them, a very complex element given the diversity of criteria and perceptions from one group to another, which in turn depends on a set of subjective and multi-causal factors.

Comparative methods

This group is recognized as the one that uses comparison between various systems, processes, products or services to determine best practices, as in benchmarking. Benchmarking is a process of evaluating products, services and processes between organizations, in which one organization analyzes how another performs a specific function in order to match or improve on it. The application of these methods allows organizations to achieve higher quality in their products, services and processes through cooperation, collaboration and the exchange of information.

Their objective is to correct errors and identify opportunities, learning to provide solutions and make decisions by following the patterns of leaders. This type of study is carried out in direct contact with competitors or non-competitors, and at the end the results are shared so that each organization can build its own organizational improvement system.


It should be noted that each of the aforementioned methods pursues its own objectives and is shaped by the information collection techniques used in the evaluation process, and that combining several of them can be beneficial for fully meeting the objective of any evaluation. These information collection techniques must be compatible with the method used in the evaluation process, so that they can provide the necessary information. There are a large number of information collection techniques, but those most used in the evaluation process are listed below:

1. Tests.

2. Participant evaluations.

3. Expert evaluations.

4. Surveys.

5. Interviews.

6. Observation of behavior and activities.

7. Evaluation of staff performance.

8. Analysis of participants' diaries.

9. Analysis of historical and current archives.

10. Transaction log analysis.

11. Content analysis.

12. Bibliometric techniques, especially citation analysis.

13. Usage files.

14. Anecdotal evidence.

Evaluation activities are useful even if they do not immediately lead to decision-making. The reflection they generate on the weaknesses they reveal helps define new lines of work focused on resolving the elements that cause difficulties and dissatisfaction, both for employees and for users/customers.

The methods used in the evaluation of OPACs generally contain a strong statistical component, and it could be argued that they are not methods produced by Library Science or Information Science in their own right, but are marked by the influence of other fields of knowledge, such as Mathematics and Computer Science, Cognitive Psychology, HCI and usability, among other disciplines.

It is worth clarifying that none of these methods excludes the others, although they are usually applied depending on what is to be measured in each case, which has contributed to quality being measured from specific perspectives rather than from a comprehensive point of view.

Most authors do not distinguish between methods and techniques for collecting information, and classifications range from very general ones to highly detailed ones used for particular cases. Among the general proposals, one worth examining develops four basic methodologies for catalog evaluation:

– Questionnaires: both for users and system workers.

– Group interviews: with the selection of a specific topic, also applied to end users and system personnel.

– System monitoring: both through direct observation of users and the recording of system operations.

– Controlled or laboratory experiments.

On the other hand, there are other more specific and detailed proposals that aim to evaluate a particular aspect of online catalogs. For the study of the interface, the following methods are found:

Methods prior to commercial distribution of the interface

– Expert reviews: based on heuristic evaluations, review against prior recommendations, consistency inspection and user simulations.

– Usability testing: through discount usability testing, exploratory testing, field testing, validation testing and others.

– Laboratory tests.

– Questionnaires.

– Interviews and discussions with users.

Methods during the active life of the product

– Monitoring of user performance.

– Monitoring and/or telephone or online help.

– Communication of problems.

– News groups.

– User information: through newsletters or FAQs.

Another proposal, which addresses both the system's perspective and the user's and which does distinguish between data collection methods and techniques, is the study that proposes the following:

– Analysis of prototypes.

– Controlled experiments.

– Transaction log analysis (TLA).

– Comparative analysis.

– Protocol analysis.

– Expert evaluations of the system.

The first three methods proposed in this study (prototype analysis, controlled experiments and transaction log analysis) focus on the operation of the system, while the last three (comparative analysis, protocol analysis and expert evaluations of the system) are mostly used to examine human behavior and its interaction with the system; hence this proposal is considered generalizing and integrating.

The same study proposes the following data collection techniques: questionnaires, interviews, transaction log records, protocol records and verbal protocol records. It also notes the feasibility and relevance of combining several research techniques to obtain better results.
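
As a small illustration of what transaction log analysis can involve in practice, here is a hedged Python sketch that summarizes a hypothetical OPAC log exported as CSV; the column names (query, hits) are assumptions and would have to be adapted to a real system's export format.

```python
import csv
from collections import Counter

def summarize_opac_log(path: str) -> dict:
    """Summarize a hypothetical OPAC transaction log exported as CSV.

    Assumes at least the columns 'query' and 'hits'; real OPAC exports
    differ, so the field names would need to be adapted.
    """
    total = zero_hit = 0
    terms = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            if int(row["hits"]) == 0:
                zero_hit += 1                      # searches that returned nothing
            terms.update(row["query"].lower().split())
    return {
        "searches": total,
        "zero_hit_rate": zero_hit / total if total else 0.0,
        "top_terms": terms.most_common(10),        # most frequent query words
    }

# Example (with a hypothetical export file):
# print(summarize_opac_log("opac_log.csv"))
```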

It should be mentioned that the use of any of these data collection methods and techniques for the evaluation of online catalogs can be considered appropriate, provided, of course, that the objectives pursued with each of them are taken into account in each case to be evaluated.

The optimal approach would be to combine several methods and techniques that provide sufficient data, so that they offer the information closest to reality for subsequent evaluation and decision-making, using both quantitative and qualitative data and referring to both the users and the system, in a way that allows a comprehensive appreciation of this product and/or service. Some of the information collection techniques most frequently used in OPAC evaluation studies are described below, along with their known advantages and disadvantages.

How do you best account for seasonal variations in your data collection?


Data collection

Data collection is the process of collecting and analyzing information on relevant variables in a predetermined, methodical way so that one can respond to specific research questions, test hypotheses, and assess results. Data collection can be either qualitative or quantitative.

Data is a collection of facts, figures, objects, symbols, and events gathered from different sources. Organizations collect data with various data collection methods to make better decisions. Without data, it would be difficult for organizations to make appropriate decisions, so data is collected from different audiences at various points in time.

For instance, an organization must collect data on product demand, customer preferences, and competitors before launching a new product. If data is not collected beforehand, the organization’s newly launched product may fail for many reasons, such as less demand and inability to meet customer needs. 

Although data is a valuable asset for every organization, it does not serve any purpose until analyzed or processed to get the desired results.

Decoding seasonal variations with linearly weighted moving averages

1. Introduction to seasonal variations

Seasonal variations are a natural phenomenon that affects the economy, weather patterns, consumer behavior and many other aspects of our lives. These variations occur due to various factors, such as changes in weather, holidays and cultural practices, that influence data patterns over time. Understanding seasonal variations and their impact on different data sets is essential for decision-making in various fields. Linearly weighted moving averages (LWMA) are one of the most effective statistical methods for analyzing seasonal variations. This technique analyzes data by assigning different weights to data points based on their position in time.

In this section, we will introduce seasonal variations and their impact on different data sets. We will also provide detailed information on how LWMA can be used to decode seasonal variations. Here are some key points to keep in mind:

1. Seasonal variations occur in many different fields, such as economics, meteorology, and marketing. For example, in the retail industry, sales of winter clothing generally increase during the winter season, and sales of summer clothing increase during the summer season.

2. Seasonal variations can be regular, irregular, or mixed. Regular variations occur at fixed intervals, such as every year or quarter. Irregular variations occur due to unpredictable events, such as natural disasters or economic recessions. Mixed variations occur due to a combination of regular and irregular factors.

3. LWMA can be used to analyze seasonal variations by assigning different weights to data points based on their position in time. For example, if we are analyzing monthly sales data, we can assign higher weights to recent months and lower weights to earlier months.

4. LWMA is particularly effective at handling seasonal variations because it reduces the  impact of outliers  and emphasizes patterns in the data. For example, if there is a sudden increase in sales due to a promotion, LWMA will assign a lower weight to that data point, which will reduce its impact on the overall analysis.

5. LWMA can be applied to different types of data sets, such as time series data, financial data and stock market data. It is a versatile technique that can provide valuable information on different aspects of seasonal variations.

Understanding seasonal variations and their impact on different data sets is crucial to making informed decisions. LWMA is an effective statistical method that can be used to analyze seasonal variations and provide valuable insights into patterns and trends in data.


2. Understanding Linearly Weighted Moving Averages (LWMA)

Understanding linearly weighted moving averages (LWMA) is an essential component of analyzing time series data. LWMA is a statistical method that smooths a time series by giving more weight to recent observations and less weight to older ones. It assigns weights to the prices in the time series, with the most recent prices receiving the highest weight and the oldest prices the lowest. In this way, the moving average is more responsive to recent price changes, which makes it useful for analyzing trends and forecasting future prices.
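
In standard notation (a common textbook formulation, not taken from this article), the linearly weighted moving average of the last n observations x is:

```latex
\mathrm{LWMA}_t
  = \frac{\sum_{i=1}^{n} i \, x_{t-n+i}}{\sum_{i=1}^{n} i}
  = \frac{n\,x_t + (n-1)\,x_{t-1} + \dots + 1 \cdot x_{t-n+1}}{n + (n-1) + \dots + 1}
```

so the newest observation carries weight n and the oldest carries weight 1.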

Here are some key ideas for understanding linearly weighted moving averages:

1. LWMA assigns more weight to recent prices: the moving average line will be more sensitive to recent price changes and less sensitive to older ones, because recent prices are more relevant to the current market situation.

Example: Suppose you are analyzing the price of a stock over the last month. LWMA would assign more weight to prices over the past few days, making the moving average more sensitive to recent price changes.

2. It is a customizable tool: Unlike other moving averages, LWMA allows you to customize the weight assigned to each price in the time series. You can assign more weight to certain prices and less to others, depending on the analysis you want to perform.


Example: Suppose you are analyzing the price of a stock over the last year. You can assign more weight to prices in recent months and less weight to prices in the first few months, making the moving average more sensitive to recent price changes.

3. Helpful in identifying trends: LWMA is commonly used to identify trends in time series data. It can help you determine whether a trend is bullish or bearish by analyzing the slope and direction of the moving average.

Example: Suppose you are analyzing the price of a stock over the past year. If the moving average line slopes up, it indicates an uptrend, and if it slopes down, it indicates a downtrend.

Overall, linearly weighted moving averages can be a valuable tool for analyzing time series data, identifying trends, and forecasting future prices. By customizing the weights assigned to each price in the time series, you can create a moving average that responds more quickly to recent price changes and is more accurate in forecasting future prices.
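
To make the weighting concrete, here is a minimal Python sketch of a linearly weighted moving average; the function name lwma and the sample prices are illustrative, not taken from any charting library.

```python
import numpy as np

def lwma(values, window):
    """Linearly weighted moving average: within each window the newest
    observation gets weight `window`, the oldest gets weight 1."""
    values = np.asarray(values, dtype=float)
    weights = np.arange(1, window + 1)            # 1, 2, ..., window
    out = np.full(len(values), np.nan)            # undefined until a full window exists
    for t in range(window - 1, len(values)):
        win = values[t - window + 1 : t + 1]      # oldest ... newest
        out[t] = np.dot(win, weights) / weights.sum()
    return out

# Made-up daily closing prices, purely for illustration.
prices = [100, 101, 103, 102, 105, 107, 106, 109, 111, 110]
print(np.round(lwma(prices, window=5), 2))
```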

3. Advantages of LWMA in seasonal data analysis

Seasonal variations in data are a common occurrence in many fields, such as finance, economics, and meteorology. They are caused by factors such as weather, holidays, and production cycles, and can have a significant impact on data analysis and forecasting. To address this problem, analysts often use the linearly weighted moving average (LWMA) method, which is specifically designed to handle seasonal fluctuations in the data. There are several advantages to using LWMA in seasonal data analysis, from its ability to provide accurate trend estimates to its ability to smooth out irregularities in the data.

Here are some of the advantages of LWMA in seasonal data analysis:

1. Accurate trend estimation: One of the main advantages of using LWMA in seasonal data analysis is its ability to provide accurate trend estimates. LWMA assigns greater weights to more recent data points, allowing it to capture the underlying trend in the data with greater precision. This is particularly useful when analyzing seasonal data, where the pattern tends to repeat itself over time.

For example, suppose we want to analyze the sales of a particular product over the past year. If sales tend to increase during the holiday season, LWMA will be able to capture this trend more accurately than other methods that do not take seasonal variations into account.

2. Smooth out irregularities: Another advantage of LWMA is its ability to smooth out irregularities in the data. Because LWMA assigns greater weights to more recent data points, it can reduce the impact of outliers and other irregularities in the data. This can help provide a clearer picture of the underlying pattern in the data.

For example, let's say we are analyzing temperature fluctuations in a particular city over the past year. If there were a particularly cold week in the middle of summer, LWMA could smooth out this irregularity and provide a more accurate representation of the seasonal pattern in the temperature data.

3. Flexibility: LWMA is a flexible method that can be adapted to different types of seasonal data. It can be used to analyze data with different seasonal patterns, such as weekly, monthly or annual patterns. In addition, LWMA can be combined with other methods to improve its precision and effectiveness.

Overall, using LWMA to analyze seasonal data can provide several advantages, including accurate trend estimation, smoothing out irregularities, and flexibility. By using this method, analysts can gain a better understanding of the underlying patterns in the data and make more accurate forecasts and predictions.
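
As a hedged illustration of the smoothing behavior described above, the following Python sketch compares an equal-weight moving average with a linearly weighted one on made-up monthly sales that spike every December; all numbers are invented for the example.

```python
import numpy as np

def lwma(values, window):
    """Same linear weighting as above: newest point weighted `window`, oldest 1."""
    values = np.asarray(values, dtype=float)
    w = np.arange(1, window + 1)
    return np.array([np.dot(values[t - window + 1 : t + 1], w) / w.sum()
                     for t in range(window - 1, len(values))])

# Two years of made-up monthly sales with a December peak plus noise.
rng = np.random.default_rng(0)
months = np.arange(24)
sales = 100 + 30 * (months % 12 == 11) + rng.normal(0, 5, size=24)

sma = np.convolve(sales, np.ones(3) / 3, mode="valid")  # equal weights over 3 months
wma = lwma(sales, window=3)                             # recent months weigh more

# The weighted average reacts more strongly to the December jump and returns
# toward the baseline faster than the equal-weight average of the same length.
print(np.round(sma, 1))
print(np.round(wma, 1))
```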

What role does randomization best play in your data collection design?


What is data collection?

Data collection is the process of gathering data for use in business decision-making, strategic planning, research and other purposes. It’s a crucial part of data analytics applications and research projects: Effective data collection provides the information that’s needed to answer questions, analyze business performance or other outcomes, and predict future trends, actions and scenarios.

IT systems regularly collect data on customers, employees, sales and other aspects of business operations when transactions are processed and data is entered. Companies also conduct surveys and track social media to get feedback from customers. Data scientists, other analysts and business users then collect relevant data to analyze from internal systems, plus external data sources if needed. The latter task is the first step in data preparation, which involves gathering data and preparing it for use in business intelligence (BI) and analytics applications.

An overview of randomization techniques: An unbiased assessment of outcome in clinical research

A good experiment or trial minimizes the variability of the evaluation and provides unbiased evaluation of the intervention by avoiding confounding from other factors, which are known and unknown.

Randomization ensures that each patient has an equal chance of receiving any of the treatments under study, generates comparable intervention groups that are alike in all important respects except for the intervention each group receives, and provides a basis for the statistical methods used in analyzing the data. The basic benefits of randomization are as follows: it eliminates selection bias, balances the groups with respect to many known and unknown confounding or prognostic variables, and forms the basis for statistical tests, providing a basis for an assumption-free statistical test of the equality of treatments. In general, a randomized experiment is an essential tool for testing the efficacy of a treatment.

In practice, randomization requires generating randomization schedules, which should be reproducible. Generating a randomization schedule usually involves obtaining random numbers and assigning them to subjects or treatment conditions. Random numbers can be generated by computers or taken from the random number tables found in most statistics textbooks.

For simple experiments with a small number of subjects, randomization can be performed easily by assigning random numbers from random number tables to the treatment conditions. However, for large sample sizes, or if restricted or stratified randomization is to be performed, or if an unbalanced allocation ratio will be used, it is better to use computer programs such as SAS or the R environment to carry out the randomization.

REASON FOR RANDOMIZATION

Researchers in life science research demand randomization for several reasons. First, subjects in various groups should not differ in any systematic way. In a clinical research, if treatment groups are systematically different, research results will be biased. Suppose that subjects are assigned to control and treatment groups in a study examining the efficacy of a surgical intervention. If a greater proportion of older subjects are assigned to the treatment group, then the outcome of the surgical intervention may be influenced by this imbalance. The effects of the treatment would be indistinguishable from the influence of the imbalance of covariates, thereby requiring the researcher to control for the covariates in the analysis to obtain an unbiased result.

Second, proper randomization ensures no a priori knowledge of group assignment (i.e., allocation concealment). That is, researchers, subjects, patients or participants, and others should not know to which group a subject will be assigned. Knowledge of group assignment creates a layer of potential selection bias that may taint the data. Schulz and Grimes stated that trials with inadequate or unclear randomization tended to overestimate treatment effects by up to 40% compared with those that used proper randomization. The outcome of the research can be negatively influenced by such inadequate randomization.

Statistical techniques such as analysis of covariance (ANCOVA), multivariate ANCOVA, or both, are often used to adjust for covariate imbalance in the analysis stage of the clinical research. However, the interpretation of this post adjustment approach is often difficult because imbalance of covariates frequently leads to unanticipated interaction effects, such as unequal slopes among subgroups of covariates.

One of the critical assumptions in ANCOVA is that the slopes of regression lines are the same for each group of covariates. The adjustment needed for each covariate group may vary, which is problematic because ANCOVA uses the average slope across the groups to adjust the outcome variable. Thus, the ideal way of balancing covariates among groups is to apply sound randomization in the design stage of a clinical research (before the adjustment procedure) instead of post data collection. In such instances, random assignment is necessary and guarantees validity for statistical tests of significance that are used to compare treatments.


TYPES OF RANDOMIZATION

Many procedures have been proposed for the random assignment of participants to treatment groups in clinical trials. In this article, common randomization techniques, including simple randomization, block randomization, stratified randomization, and covariate adaptive randomization, are reviewed. Each method is described along with its advantages and disadvantages. It is very important to select a method that will produce interpretable and valid results for your study. The use of online software to generate a randomization schedule with the block randomization procedure is also presented.

Simple randomization

Randomization based on a single sequence of random assignments is known as simple randomization. This technique maintains complete randomness in the assignment of a subject to a particular group. The most common and basic method of simple randomization is flipping a coin. For example, with two treatment groups (control versus treatment), the side of the coin (i.e., heads = control, tails = treatment) determines the assignment of each subject. Other methods include using a shuffled deck of cards (e.g., even = control, odd = treatment) or throwing a die (e.g., 3 or below = control, above 3 = treatment). A random number table found in a statistics book, or computer-generated random numbers, can also be used for simple randomization of subjects.

This randomization approach is simple and easy to implement in clinical research. In large trials, simple randomization can be trusted to generate similar numbers of subjects among groups. However, its results can be problematic in relatively small samples, leading to an unequal number of participants among groups.
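
A minimal Python sketch of simple randomization for a two-arm study follows; the seed, sample size and group labels are illustrative only.

```python
import random

random.seed(2024)  # fixed seed so the allocation schedule is reproducible

def simple_randomization(n_subjects, groups=("control", "treatment")):
    """Assign every subject independently at random (coin-flip style)."""
    return [random.choice(groups) for _ in range(n_subjects)]

schedule = simple_randomization(20)
print(schedule)
print({g: schedule.count(g) for g in ("control", "treatment")})
# With only 20 subjects the two arms can easily end up unequal,
# which is exactly the small-sample drawback noted above.
```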

Block randomization

The block randomization method is designed to randomize subjects into groups that result in equal sample sizes. This method is used to ensure a balance in sample size across groups over time. Blocks are small and balanced with predetermined group assignments, which keeps the numbers of subjects in each group similar at all times. The block size is determined by the researcher and should be a multiple of the number of groups (i.e., with two treatment groups, block size of either 4, 6, or 8). Blocks are best used in smaller increments as researchers can more easily control balance.

After block size has been determined, all possible balanced combinations of assignment within the block (i.e., equal number for all groups within the block) must be calculated. Blocks are then randomly chosen to determine the patients’ assignment into the groups.
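
A common way to implement this in code is with permuted blocks: shuffling one balanced block at a time draws uniformly from the same set of balanced combinations described above. The following Python sketch assumes two groups and an illustrative block size and seed.

```python
import random

random.seed(7)  # reproducible schedule

def block_randomization(n_subjects, block_size=4, groups=("control", "treatment")):
    """Permuted-block randomization: each block contains every group equally often."""
    assert block_size % len(groups) == 0, "block size must be a multiple of the number of groups"
    per_group = block_size // len(groups)
    base_block = [g for g in groups for _ in range(per_group)]  # e.g. [C, C, T, T]
    schedule = []
    while len(schedule) < n_subjects:
        block = base_block[:]     # copy the balanced block ...
        random.shuffle(block)     # ... and shuffle it, i.e. pick one balanced ordering at random
        schedule.extend(block)
    return schedule[:n_subjects]

print(block_randomization(10, block_size=4))
# At any point in the schedule the group counts differ by at most block_size / 2.
```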

Although balance in sample size may be achieved with this method, groups may be generated that are rarely comparable in terms of certain covariates. For example, one group may have more participants with secondary diseases (e.g., diabetes, multiple sclerosis, cancer, hypertension, etc.) that could confound the data and may negatively influence the results of the clinical trial. Pocock and Simon stressed the importance of controlling for these covariates because of serious consequences to the interpretation of the results. Such an imbalance could introduce bias in the statistical analysis and reduce the power of the study. Hence, sample size and covariates must be balanced in clinical research.


Stratified randomization

The stratified randomization method addresses the need to control and balance the influence of covariates. This method can be used to achieve balance among groups in terms of subjects’ baseline characteristics (covariates). Specific covariates must be identified by the researcher who understands the potential influence each covariate has on the dependent variable. Stratified randomization is achieved by generating a separate block for each combination of covariates, and subjects are assigned to the appropriate block of covariates. After all subjects have been identified and assigned into blocks, simple randomization is performed within each block to assign subjects to one of the groups.
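
A minimal Python sketch of the procedure just described: subjects are grouped by their covariate combination and then randomized within each stratum. The subject IDs, covariates and seed are purely illustrative; in practice, permuted blocks are often used inside each stratum to balance group sizes as well.

```python
import random
from collections import defaultdict

random.seed(11)  # reproducible schedule

def stratified_randomization(subjects, groups=("control", "treatment")):
    """Group subjects by their covariate combination (stratum), then
    randomize within each stratum."""
    strata = defaultdict(list)
    for subject_id, covariates in subjects:
        strata[covariates].append(subject_id)

    assignments = {}
    for covariates, ids in strata.items():
        for subject_id in ids:                    # simple randomization inside the stratum
            assignments[subject_id] = random.choice(groups)
    return assignments

# Hypothetical subjects: (id, (sex, age band)); covariates are illustrative only.
subjects = [("s1", ("F", "<65")), ("s2", ("M", "<65")), ("s3", ("F", "65+")),
            ("s4", ("F", "<65")), ("s5", ("M", "65+")), ("s6", ("F", "65+"))]
print(stratified_randomization(subjects))
```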

How will you deal with unexpected challenges or obstacles during data collection?


What is data collection?

Data collection is the process of gathering data for use in business decision-making, strategic planning, research and other purposes. It’s a crucial part of data analytics applications and research projects: Effective data collection provides the information that’s needed to answer questions, analyze business performance or other outcomes, and predict future trends, actions and scenarios.

In businesses, data collection happens on multiple levels. IT systems regularly collect data on customers, employees, sales and other aspects of business operations when transactions are processed and data is entered. Companies also conduct surveys and track social media to get feedback from customers. Data scientists, other analysts and business users then collect relevant data to analyze from internal systems, plus external data sources if needed. The latter task is the first step in data preparation, which involves gathering data and preparing it for use in business intelligence (BI) and analytics applications.

For research in science, medicine, higher education and other fields, data collection is often a more specialized process, in which researchers create and implement measures to collect specific sets of data. In both the business and research contexts, though, the collected data must be accurate to ensure that analytics findings and research results are valid.

Some observations on the challenges of digital transformation research in the business sector

Since digital transformation is an applied field and not purely theoretical, collaboration with companies during research is essential. However, such research activities are typically subject to two main types of challenges, one arising from the data collection process and another from the publication process. Below, I will take a closer look at these two obstacles and offer solutions.

Challenges in the data collection process

Trust is the fundamental basis for successful collaboration between companies and researchers. However, creating the trust necessary to establish that initial connection can be difficult, especially when the parties do not know each other. Companies tend to refuse to collaborate with external researchers when the benefit and/or form of collaboration is unclear.

However, even in cases where a minimum of trust has been established, companies often have reservations about disclosing their most sensitive and specific data. They may want to prevent it from falling into the hands of competitors, or they may not want to speak publicly about their failures. This reluctance is a big problem for researchers, since these insights are important for a general understanding of the underlying problem and would allow other practitioners to learn from them. Withholding certain data also hinders a general understanding of the object of research.

Another key challenge in collaboration is often agreeing on a common timeline. In the business context, decisions are sometimes made at short notice and deadlines are usually tight, which does not always match the requirements researchers must meet in their own environment. For example, practitioners without academic training often find it hard to understand that publication processes can take several years.


Challenges in the publishing process

For many researchers, publishing studies on digital transformation is often a difficult process due to the lack of theoretical foundations and development. While conclusions may be practically relevant, their integration into the body of knowledge and their implications for research are not always clearly defined.

As research with companies is often carried out on a small scale, it can be difficult to ensure its generalizability or replicability. It is therefore necessary to anticipate possible selection bias that could call into question the representativeness of the results; this concern is usually tied to the suitability of interviewees, who, for various reasons, may not be able to give opinions on the different functions of the company or on the company as a whole.

Some suggestions

In view of these frequent problems in the data collection and publication processes, some recommendations are made below:

In general, researchers should strive to establish long-term collaborations with companies, not only because it can reinforce mutual trust, but also because it could improve the efficiency of many collaborative processes. To this end, it might be useful to jointly create a long-term plan. Larger collaborative initiatives can be complemented by a more institutionalized approach, for example through regular stakeholder meetings.

Certainly, the key to success in establishing such partnerships is to highlight the benefits that the company can obtain. Only by sharing the benefits will companies commit to supporting researchers in the long term and assuming the additional costs that this may entail. The potential benefits of collaboration can be justified, not only by providing external expertise and methodological support, but also, for example, by facilitating better access to universities’ knowledge resources or to high-potential students.

Transparency is also crucial to establishing a relationship of trust. This should apply not only to operational matters, but also to the objectives pursued by both parties, including clear definition of roles and responsibilities and open and reliable communication between the parties. Researchers should inform companies of interim results and proactively share other issues of interest or potential project ideas that could also stimulate collaboration.

Whenever sensitive data is involved, a confidentiality and non-disclosure agreement can be advantageous for both parties. In this way, researchers will have a more complete and reliable view of the object of the investigation, while the company will ensure the protection of its sensitive data. From the researcher’s perspective, although access to that sensitive data may be crucial, not all information needs to be published, and data anonymity or publication embargo periods may mean that it can be published without violating the agreement. However, since unexpected changes in the research environment are common in companies, researchers must have a Plan B.

When considering publication, it is advisable to develop a clear theoretical basis during an early stage of research planning, without neglecting the generalizability of the practical problem. Researchers must also identify the most appropriate publication options. Journals that are more practitioner-oriented may offer advantages in terms of the length of the publication process, as well as a potentially more suitable target audience.

To ensure the scientific rigor of the research, it is advisable to select an adequate number of respondents within the companies. It is especially recommended to triangulate results with external sources (for example, annual reports or newspaper articles) to reduce potential respondent bias. Researchers should also strive to make the selection of their respondents and companies as transparent and well justified as possible. Detailed documentation of the research process and underlying methodology will further increase reviewers' confidence. While small-scale exploratory studies are particularly suitable for new areas of research, large-scale quantitative studies can be a good way to verify the generalizability of promising initial results.


Conclusion

Research on the digital transformation of “living objects” can sometimes be fraught with difficulties, but with good preparation and the above recommendations in mind, researchers can overcome the twofold challenge of such efforts.

What steps will you take to ensure the privacy of participants in your data collection?


Data collection

Data collection is very important. Data is a collection of facts, figures, objects, symbols, and events gathered from different sources. Organizations collect data with various data collection methods to make better decisions. Without data, it would be difficult for organizations to make appropriate decisions, so data is collected from different audiences at various points in time.

For instance, an organization must collect data on product demand, customer preferences, and competitors before launching a new product. If data is not collected beforehand, the organization’s newly launched product may fail for many reasons, such as less demand and inability to meet customer needs.

Although data is a valuable asset for every organization, it does not serve any purpose until analyzed or processed to get the desired results.

Data collection methods are techniques and procedures used to gather information for research purposes. These methods can range from simple self-reported surveys to more complex experiments and can involve either quantitative or qualitative approaches to data gathering.

Some common data collection methods include surveys, interviews, observations, focus groups, experiments, and secondary data analysis. The data collected through these methods can then be analyzed and used to support or refute research hypotheses and draw conclusions about the study’s subject matter.

Privacy and information rights in the information society and the ICT environment

According to the theory of guaranteeism, the human rights that have been affirmed in the Constitutions of different countries are those that can be considered fundamental. By a decree published in the Official Gazette of the Federation on June 10, 2011, the Mexican Constitution renamed its first chapter “On human rights and their guarantees”; we will take this into account when referring to the constitutional recognition of these rights.

The Mexican Constitution recognizes as human or fundamental rights related to information, and therefore to the information society, mainly the following: the right to information (article 6) and the protection of personal data (article 16), in addition to freedom of expression and of the press (article 7) and the inviolability of communications (article 16). Copyright and intellectual property rights are mentioned in the fundamental norm only to state that their existence should not be considered a monopoly (article 28).

Therefore, these rights, together with others linked to personal information, such as the rights to intimacy, privacy, honor and one's own image (which, although not directly recognized in the Constitution, are recognized through the international treaties signed by the country), are the ones we will address in this work.

The reasons for this delimitation (which could seem very broad) are that these rights are all connected and that, given the growing use of ICT (which underpins the development of the information society), they tend to be violated jointly or collaterally rather than in isolation. An example is the connection between the right to information and freedom of expression, on the one hand, and the right to privacy, on the other: on several occasions they collide, and on others they almost complement each other.

The analysis will describe the regulation or protection of these rights that exists in Mexico and compare it, in some cases, with the norms of other countries or regions, so that, through a brief exercise of lege ferenda, we can schematically identify the challenges that remain pending in Mexico.

Likewise, these rights will be discussed in light of the challenges posed by the information and knowledge society (SIC), especially in terms of their protection, since, while these rights are essential to and at the core of that society, its impact also places them at permanent risk, a risk that grows due to the lack of effective and timely legal protection or self-regulation.

The importance of the information security measures that some countries have attempted or suggested adopting will also be pointed out, in order to control the flow of information that society receives through the network, together with their benefits and harms with respect to rights such as information or privacy.

Finally, in relation to universal service, which we will also discuss, the doctrine refers to it as one of the ways to make other fundamental rights a reality, such as the right to information, to Internet access or to the SIC. Universal service appears in the telecommunications legislation of various countries, although not in Mexico, whose Federal Law for this sector speaks only of social coverage, as we will see in due course.

Fundamental rights related to personal information

Various fundamental and personality rights are related to one another, yet they are distinguishable, so it is necessary to tell them apart. Linked but not identical rights must therefore be listed, such as the rights to honor, to one's own image, to intimacy or privacy, to data protection, to the inviolability of the home and to the secrecy of communications. However, although the legal good that each of them protects is different, they cannot be treated in isolation, much less when they are analyzed within the framework of a SIC that interconnects so many aspects.

The legal framework of personality rights is also related to a principle of law recognized in the Universal Declaration of Human Rights: human dignity. The European Community has elevated human dignity to a fundamental legal good and, given the large amount of personal information circulating on networks, it is evident that this situation may specifically affect that good.

The right to data protection is closely linked to the rights to intimacy and privacy, but it enjoys its own autonomy (according to jurisprudential interpretation). Although the right to privacy was derived from the recognition of personal freedom in the first generation of rights, it was not until the third generation that, in response to the phenomenon of the so-called “contamination of liberties” (liberties' pollution), the right to privacy gained greater prominence.

This forced it to expand its scope through the recognition of new facets, so that it now incorporates a ramification of rights such as the right to honor, to one's own image, to private life (in its broadest sense), to the protection of personal data and even, for a sector of the doctrine, to computer freedom.

Thus, the right to the protection of personal data is built on the right to privacy and, in addition to implying the State's obligation to guarantee the protection of personal information contained in archives, databases, files or any other medium, whether documentary or digital, it grants the owner of such information the right to control it, that is, to access, review, correct and demand the deletion of personal data held by a public or private entity.

This right, as mentioned above and according to GALÁN, is also linked to constitutional and legal rights or principles of great value, such as human dignity, individual freedom, self-determination and the democratic principle. The author therefore maintains:

The protection of personal data, even recognizing the dynamism of its objective content derived from technological changes, guarantees the person a power of control (of positive content) over the capture, use, destination and subsequent trafficking of personal data. Therefore, this right covers those data that are relevant to the exercise of any person's rights, whether or not they are constitutional and whether or not they are related to honor, ideology, or personal and family privacy.

For their part, the right to honor, the right to one's own image and even the constitutional guarantees of the inviolability of the home and the secrecy of private communications are closely related to personal information, since they all refer to information about people: their physical appearance (image), what is contained within their home, or the communications they send.


Legal recognition of fundamental rights relating to personal information

As mentioned above, and according to the theory of fundamental rights (particularly the guaranteeism of Luigi FERRAJOLI), the human rights that have been constitutionally affirmed are those that can be defined as fundamental. One of the essential attributes of these rights, given their origin and inspiring philosophical elements, is their universality. Hence, they appear in international instruments such as the Universal Declaration of Human Rights (UDHR) of 1948 and other similar ones, even though those other legal bodies do not include the adjective “universal” in their names.

In this sense, universality carries a strong natural-law influence from early constitutionalism. It was thought that if the stated rights were, precisely, natural, then they had to be recognized for all people, given that all people share the same “nature”. In the words of RIALS, cited by CARBONELL, “if there exists a rational natural order knowable with evidence, it would be inconceivable that it would be consecrated with significant variants depending on the latitudes.”

From that perspective, we could say that in Mexican positive law the right to the protection of personal data and the guarantees of the inviolability of the home and the secrecy of private communications are expressly recognized in the Constitution (article 16), whereas the rights to intimacy, privacy, honor and one's own image are not, as will be specified below.

Direct recognition of the right to the protection of personal data is made in article 16 of the Constitution, whose second paragraph, added by a reform published in the Official Gazette of the Federation on June 1, 2009, recognizes the right of every person to the protection of their personal data and to access, rectify and cancel them, as well as to express their opposition to their processing.

Likewise, the same provision established the terms for the exercise of this right and the exceptions to the principles governing data processing (for reasons of national security, provisions of public order, public safety and health, or to protect the rights of third parties), to be set out in the law enacted on the matter (which took place the following year).

The Federal Law on the Protection of Personal Data Held by Private Parties (LFPDPPP) of 2010 is the legislation that develops the constitutional precept just cited. In its text, personal data is defined as “any information concerning an identified or identifiable person”, thus aligning, so to speak, with the most common international definition and in particular with that of the Spanish standard on the matter.

Evidently, Mexican legislation is concerned with defining the principles and criteria to make this right effective and the procedures to put it into effect. The LFPDPPP Regulations develop all these areas more fully.

It should also be mentioned that legislation regulating some aspects of the processing of personal data already existed several years earlier, although it applies only to the public sphere. This is the Federal Law on Transparency and Access to Government Public Information, published in the aforementioned official gazette on June 11, 2002, which defines in article 3, section II, what is to be understood as personal data for the purposes of that Law, closely anticipating what the legislation applicable to privately held files would later reflect.

However, although these specific developments exist, the rights to privacy and intimacy are not, as noted, expressly mentioned in the Mexican fundamental norm. Their recognition could nevertheless be inferred through a lato sensu interpretation of the first paragraph of article 16 of the Constitution, which states:

“No one may be disturbed in his person, family, domicile, papers or possessions, except by virtue of a written order from the competent authority that states and justifies the legal cause of the procedure.” Some protection for these rights can indeed be derived from this, although the rest of the paragraph refers basically to procedural matters. The same happens with article 7 of the Constitution, which establishes respect for private life as a limit to freedom of the press.

In addition to the above, even granting that constitutional recognition of the aforementioned human rights is lacking, this is currently no obstacle to claiming their protection and exercise, since they can be asserted through treaty-based (conventional) means, as established in article 1 of the Constitution, which stipulates that all people shall enjoy the human rights recognized in the Constitution itself and in the international treaties to which Mexico is a party.

What best considerations have you made for accessibility in your data collection?


What are Data Collection Methods?

Data collection methods are techniques and procedures used to gather information for research purposes. These methods can range from simple self-reported surveys to more complex experiments and can involve either quantitative or qualitative approaches to data gathering.

Some common data collection methods include surveys, interviews, observations, focus groups, experiments, and secondary data analysis. The data collected through these methods can then be analyzed and used to support or refute research hypotheses and draw conclusions about the study’s subject matter.

Data collection methods play a crucial role in the research process, as they determine the quality and accuracy of the data collected. Here are some of the main reasons why data collection methods matter:

  • Determines the quality and accuracy of collected data.
  • Ensures that the data is relevant, valid, and reliable.
  • Helps reduce bias and increase the representativeness of the sample.
  • Essential for making informed decisions and accurate conclusions.
  • Facilitates achievement of research objectives by providing accurate data.
  • Supports the validity and reliability of research findings.

To become familiar with the concept of universal accessibility, it is worth reviewing its history. In 1948, the United Nations (UN) promulgated the Universal Declaration of Human Rights, which set out the principles of equal rights and opportunities for all citizens, but it was not until 1963, at the First International Congress for the Suppression of Architectural Barriers held in Switzerland, that the main objective became proposing new measures for the design of buildings by eliminating barriers that obstruct access for people with disabilities.

In 1982, Spain approved the Law on the Social Integration of the Disabled (LISMI), and in that same year the UN promoted the World Programme of Action concerning Disabled Persons. In 2003, the Law on Equality, Non-Discrimination and Universal Accessibility (LIONDAU) incorporated the concept of universal accessibility, promoting equal opportunities for the benefit of all people. In 2006, the UN again held a convention on the rights of persons with disabilities, and in 2013 the General Law on the Rights of Persons with Disabilities and their Social Inclusion established that all services, environments, goods and products must be accessible.

The study area of the Technological Development Corporation (CDT) (2018) defines universal accessibility as the condition that spaces must meet so that all people can use them safely and comfortably and move through them autonomously and naturally. Such a space must offer equal opportunities and social inclusion for people with different abilities, free of obstacles and barriers (urban, architectural and mobility-related) that prevent proper movement.

Accessibility seeks the inclusion of all citizens in public and private spaces, it must be “integral and guarantee not only mere accessibility, but also circulation, use, orientation, security and functionality” (Olivera, 2006: 332). Pedestrian mobility is one of the main requirements in the physical accessibility of cities (Ipiña García, 2019: 159).

Universal accessibility is directly related to the quality of life of a city's inhabitants. It must be understood that people have the right to enjoy all the services the city can provide, and it is the responsibility of the public and private sectors to modify the environment so that it can be used under conditions of equality, taking into account social, economic and geographical needs.

One of the main problems with universal accessibility is that cities were not designed for use by all people. It is a fact, however, that regulations, laws, plans and programmes have now been implemented that have gradually transformed some sectors of our cities, improving users' quality of life. To achieve these changes, knowledge, empathy and awareness are needed in order to create simple and intuitive spaces that offer equal opportunities.

UNIVERSAL ACCESSIBILITY AS AN IMPORTANT PART OF PUBLIC SPACE

There is an intimate relationship between universal accessibility and public space, owing to the permanent dynamics of a city's inhabitants: the city would not exist without public space, and public space would perish without citizens. “As the city is a historical fact, the public space is also historical; it is part of the cultural manifestations of a civilization, which is always limited in time and space” (Gamboa Samper, 2003: 13).

Since the 19th century, Camillo Sitte, one of the precursors of the German school, held that the city should be designed for pedestrians; since then, people have thought about creating functional and flexible spaces that everyone can use. “Ultimately, the success of a city must be measured in its ability to guarantee access for all citizens to the benefits that have made cities one of the most wonderful human inventions” (D. Davila, 2012: 60). Consequently, public space is a collective site for public use that must guarantee well-being for all people; this includes responding to the needs of citizens and thereby promoting universal accessibility.

Squares, parks and gardens are part of the public space, but it is also made up of the streets that allow people to move and reach their destinations. At a smaller scale of mobility, the pedestrian can be defined as any person who travels on foot through public or private space (Municipal Government of Cusco, n.d.). Pedestrian movement or mobility must meet certain requirements to take place under quality conditions: accessibility, safety, comfort and attractiveness (Alfonzo, 2005; Pozueta et al., 2009). When these are satisfied, the pedestrian environment will have the quality the pedestrian needs to move, which has a decisive impact on the pedestrian service levels of the urban environment (Olszewski and Wibowo, 2005) (Larios Gómez, 2017: 6).

In terms of universal accessibility, it is important to provide at least one accessible pedestrian route in spaces with heavier pedestrian flow. In the analysis of an urban space, priority must be given to implementing accessible routes that link main avenues, secondary streets, public transport stops and access points, and vehicle parking (Boudeguer Simonetti et al., 2010: 39); in this way, spaces can be used by all people under equal conditions.

Public space is characterized by being easily accessible, allowing interaction between inhabitants and creating social ties that bind people to the space; this leads citizens to experience their environment, identifying with and appropriating the elements that make up the public space. One of the problems we currently face is that society has gradually stopped visiting these spaces because of insecurity, inaccessibility, pollution, and lack of maintenance of streets and gardens, leading to their abandonment and deterioration.

When public space meets the characteristics of security, universal accessibility, mobility, identity, inclusion and permanence, it is said to be a quality space that allows the city to be experienced: enjoying the pedestrian routes and observing the architectural elements that form part of it, such as building facades and the planters, benches and lamps of the urban furniture.

Parks and gardens are fundamental to cities, not only because they provide green areas but also because they preserve part of their history. In this sense, Segovia and Jordán (2005) affirm that the quality of public space can be evaluated above all by the intensity and quality of the social relationships it facilitates, by its capacity to welcome and mix different groups and behaviors, and by its capacity to stimulate symbolic identification, cultural expression and integration.

For public space to play its role as a system that allows interaction between people and the enjoyment of recreational places, it is essential that citizens can enter it without physical barriers, making it accessible to all inhabitants. “An environment is needed with a level of quality that allows environmental sustainability and, of course, services that articulate the appropriate functioning of urban public spaces with the population” (Rueda et al., 2012) (Alvarado Azpeitia et al., 2017: 131). This means providing public roads on which cars, bicycles and public transport can also travel, always giving importance and priority to the pedestrian.

A space accessible to everyone

As noted in previous paragraphs, in the 19th century people were already thinking about creating cities designed for pedestrians, but it was not until 2003 that the term “universal accessibility” was adopted. It aims to include all people, regardless of their age or any physical, visual, mental, hearing or multiple disability, by creating or adapting spaces that can be used and moved through autonomously and by implementing Universal Design, or Design for All, to benefit the greatest possible number of people.

Universal accessibility

Architect Wilson Castellanos Parra notes that it is a mistake to believe that universal accessibility responds exclusively to the needs of people with reduced mobility. It is more than a ramp; it is understood as “the condition that environments, processes, goods, products and services, as well as objects or instruments, tools and devices, must meet to be understandable and usable by all people” (Castellanos Parra, 2016). In the virtual conference “Universal Accessibility in Colombian Architecture Curricula” he describes several criteria for identifying accessibility conditions in environments:

1. Wandering (the spaces of approach and the spaces travelled through),

2. Apprehension (meeting certain requirements when carrying out any activity, such as signage elements),

3. Location (auxiliary services), and

4. Communication (interactive communication, such as graphics, information panels, etc.).

Universal accessibility is linked to various topics, such as the chain of accessibility, mobility and the design of complete streets, among others, all of which seek the movement of people in conditions of equality, quality and safety.

The Secretariat of Agrarian, Territorial and Urban Development (SEDATU), in collaboration with the Inter-American Development Bank (IDB), produced the Street Manual: Road Design for Mexican Cities, which presents, in illustrated form, a pyramid that classifies the hierarchy of mobility.

Under this classification, all people can make their trips in inclusive, safe, sustainable and resilient conditions; priority should be given to pedestrians and users of non-motorized vehicles to promote a more efficient and inclusive use of road space (SEDATU; Inter-American Development Bank, 2019: 62).

How are consent and data collection from minors best addressed?


Data collection

Data is a collection of facts, figures, objects, symbols, and events gathered from different sources. Organizations collect data with various data collection methods to make better decisions. Without data, it would be difficult for organizations to make appropriate decisions, so data is collected from different audiences at various points in time.

For instance, an organization must collect data on product demand, customer preferences, and competitors before launching a new product. If data is not collected beforehand, the newly launched product may fail for many reasons, such as low demand or an inability to meet customer needs.

Although data is a valuable asset for every organization, it does not serve any purpose until analyzed or processed to get the desired results.
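For illustration only, the short sketch below shows one way raw collected responses can be turned into decision-ready figures such as counts and percentages; the option names and counts are invented for this example and do not come from any real dataset.

```python
# Illustrative only: the categories and counts are invented for the example.
# Raw collected responses are aggregated into counts and percentages.
from collections import Counter

raw_preferences = [
    "feature_a", "feature_b", "feature_a", "feature_c",
    "feature_a", "feature_b", "feature_a",
]

counts = Counter(raw_preferences)
total = sum(counts.values())

# Print each option with its share of all responses, most popular first.
for option, n in counts.most_common():
    print(f"{option}: {n} responses ({n / total:.0%})")
```

The point is simply that collected data only becomes useful once a processing step like this turns it into figures a decision-maker can act on.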

The right to the protection of personal data: origin, nature and scope of protection.

 Origins and legal autonomy

The approach to the study of any right with constitutional status requires, without a doubt, a reference to its origins, for which, on this occasion, the generational classification of human rights developed at a doctrinal level will be very useful.

In general, the recognition of four generations of fundamental rights has historically prevailed: individual or first-generation rights; public freedoms or second-generation rights; social or third-generation rights; and rights linked to the emergence of new technologies and scientific development, classified as the fourth generation. Each generation has corresponded to an ideological and social moment with its own characteristics and distinguishing features.

In particular, the fourth generation is presented as a response to the phenomenon known as “liberties pollution”, a term coined by some authors to refer to the degradation of classic fundamental rights in the face of recent uses of new technology.

Indeed, the technological development that has taken place since the second half of the 20th century has exposed the limitations and insufficiency of the right to privacy – a first-generation right – as the only mechanism for responding to the specific dangers of the automated processing of personal information. For this reason, beginning in the 1970s, the dogmatic and jurisprudential construction of a new fundamental right started to take shape: the right to the protection of personal data.

From a theoretical point of view, the reformulation of the classic notion of the right to privacy, no longer as a right of exclusion, as it had initially been conceived, but rather as a power to control information relating to oneself, represented a clear breaking point in how that right had been conceptualized until then.

On the other hand, in the jurisprudential context, the legal configuration of this right – classified as the right to informational self-determination – originates in a ruling issued by the German Federal Constitutional Court in 1983, which declared unconstitutional a law regulating the demographic census of that time. By contrast, Chilean jurisprudence was notably late in shaping the right to the protection of personal data: its first approximation occurred in 1995, when the Constitutional Court linked it, precisely, to the protection of privacy.

It is true that the right to privacy constitutes an important, if not essential, antecedent in the formation of the right under study; however, this does not mean that the two should be confused, an issue that at the time sparked countless debates. Some authors, for example, maintained that the right to the protection of personal data was merely a manifestation of the particular characteristics the right to privacy acquires in the computer age, denying the autonomy that can be attributed to it today.

From our perspective, and as the Spanish Constitutional Court made clear at the beginning of this century, two fundamental rights closely linked to each other, yet clearly differentiated, coexist in our legal system: the right to privacy and the right to the protection of personal data. The first protects the confidentiality of information related to an individual, while the second guarantees the proper use of information related to a subject once it has been revealed to a third party, since data that has been disclosed does not thereby become public and, consequently, cannot circulate freely.

Thus, the legal power to control at all times the use and flow of this information belongs entirely to its owner. In other words, the fundamental right to data protection does not constitute a right to secrecy or confidentiality, but rather a power to govern the publicity of one's data. In this way, while the right to privacy is a power of exclusion, the right to the protection of personal data is established, instead, as a power of disposition.

In accordance with the above, the latter seems to be the position finally adopted by the Chilean Constitution. In this regard, it is worth remembering that the Organisation for Economic Co-operation and Development (OECD) pointed out in 2015 that our country was lagging in its personal data protection regulations, noting that, among its member states, only Chile and Turkey had not yet brought their legislation on the matter up to date.

It is within this framework that the reform of Article 19, number 4, of the constitutional text took place. Since 16 June 2018 it has assured all people “respect and protection of private life and the honor of the person and their family, and likewise, the protection of their personal data”, adding that “the treatment and protection of these data will be carried out in the manner and under the conditions determined by law”.

As can be seen, the new wording of the Chilean fundamental norm now enshrines the right to the protection of personal data in an autonomous and differentiated manner, a trend adopted for several years in the fundamental charters of other countries in Europe and Latin America, which Chile has now joined.

 Natural capacity as an essential element for the exercise of personality rights

The tendency of the Chilean legal system to give relevance to what is known as natural capacity – or maturity – as the essential substrate on which to base the exercise capacity of children and adolescents is especially marked in the field of personality rights, that is, in the field of extra-patrimonial legal acts. It is precisely in this context that the first voices emerged in favor of maintaining that, although the dichotomy “capacity for enjoyment/capacity for exercise” could still have some relevance in the patrimonial sphere, it was unsustainable in the extra-patrimonial sphere of personality.

It seems that denying the capacity to exercise personality rights in this sphere, when the subject, despite his or her chronological age, meets the intellectual and volitional conditions sufficient to exercise them independently, amounts to a plausible violation of the dignity and free development of the personality of the individual, recognized in Article 1 of our Constitution as superior values of the regulatory system (“People are born free and equal in dignity and rights”).

Certainly, it has been debated whether the distinction between the capacity to enjoy and the capacity to exercise applies in the field of personality rights, since the enjoyment or exercise of these rights is personal. Hence, it is difficult to speak of authentic legal representation in this area; such representation is heavily nuanced, or is configured rather as assistance or action by parents or guardians/curators in fulfilment of their duty of care for the child or adolescent, especially justified when it comes to avoiding harm.

Given the above, and in accordance with the principle of favor filii, the exercise of personality rights by their legitimate holders can only be limited when their will to activate them runs counter to overriding interests concerning the full development of their personality, in the same way that the will of their representatives can be limited when their intervention is contrary to the interests of the child or adolescent.

It is precisely in this context that the idea of adopting the criterion of sufficient maturity, self-government or natural capacity emerges strongly as the guideline for delimiting the autonomous exercise of personality rights, thereby avoiding a situation in which a person who has not yet reached the age of majority merely holds a right but cannot exercise it. In this way, the general rule becomes that a girl, boy or adolescent who is sufficiently mature can freely dispose of her or his rights.

That said, it should be noted that in this new scenario the question comes down to specifying what counts as sufficient maturity, since we are dealing with an indeterminate legal concept for which there is no unified legal definition. Each boy and girl is different, and it is therefore very difficult to establish when they have the exercise capacity necessary, given their intellectual development, to be the master of their own person.