What criteria are best for determining the relevance of your data sources?


What is a data source?

Data sources are very important. In data analysis and business intelligence, a data source is a vital component that provides the raw data for analysis. A data source is a location or system that stores and manages data, and it can take many different forms. From traditional databases and spreadsheets to cloud-based platforms and APIs, countless types of data sources are available to modern businesses.

Understanding the different types of data sources and their strengths and limitations is crucial for making informed decisions and deriving actionable insights from data. In this article, we will define what a data source is, examine data source types, and provide examples of how they can be used in different contexts.

Information

In today’s world, it is essential to master the skills that allow us to manage information appropriately, according to our needs. Being competent in information management is a fundamental factor for the development of our academic life, as well as our professional and even personal life. Therefore, a key factor will be our degree of autonomy in the management of information.

The history of access to information has been one of universalization and progressive growth. In recent years, we have witnessed a true information explosion, in which the volume of information of all kinds (journalistic, economic, commercial, academic, scientific, etc.) has grown to reach unthinkable dimensions, almost always difficult to manage.

Thanks to the development of ICT (information and communication technologies), our capacity to process, store and transmit information through computers and communication networks has grown enormously, giving rise to the information and knowledge society in which we are immersed.


What are the sources of information?

An information source is understood as any instrument or, in a broader sense, any resource that can serve to satisfy an information need.

The objective of the information sources will be to facilitate the location and identification of documents, thus answering the question: where are we going to look for the information?

It is necessary to consider the type of information sources that will be consulted for class work. The student must select sources that provide information at a level appropriate to his or her needs.

1. Books:

We generally call a book a “scientific, literary or any other work of sufficient length to form a volume, which may appear in print or on another medium.”

Traditionally, the book was a printed document, but today we can find many in electronic format. Depending on the content and structure, various types of books can be established:

  • Manuals: These are works in which the most substantial aspects of a subject are gathered and synthesized. They compile basic data that is easy to consult, and are especially useful for getting started in the fundamentals of a discipline.
  • Monographs: These are studies on a specific topic that help us gain in-depth knowledge of an area of knowledge. They can provide both basic and exhaustive information on the topic of the work. We can complete the information with specialized journal articles.
  • Encyclopedias and dictionaries: They offer concise, precise information on a topic for quick reference. There are general ones, covering all topics, and specialized ones, devoted to a specific subject. Encyclopedia entries are of medium length, while dictionaries contain short definitions.
  • Doctoral theses: These are research works carried out to obtain a doctorate degree. They are original works, not published commercially, that report research and offer very complete information on a topic of study.

To locate books we will consult the library catalogue.

2. Journals:

These are periodical publications that appear in successive issues. They are a fundamental source of up-to-date information, necessary to stay current on a topic.

We must highlight that electronic publishing has had a great impact on the publication of journals, and a large number of them are now published in digital format. To locate journal articles we will consult the bibliographic databases.

1. Library catalogs

Catalogs are databases that include descriptions of the documents held by a library. They include the publications that make up the holdings or collection of a library: books and journals, both printed and electronic, sound recordings, videos, etc. The libraries of the University of Valencia have a common catalog called Trobes.

What can we NOT find in the catalogue?

We cannot find JOURNAL ARTICLES. Articles contained in journals must be searched for in bibliographic databases.

Through a search system, catalogs allow us to locate documents and find out their availability online. To find books and other resources available through the catalog we can search by different fields:

– Author: search by the last name and first name of an author, the name of a public or private organization

– Title: search by exact title

– Word: search for documents that contain said word in any of the record fields

– Subject: search for records of a specific subject or topic. In Trobes the subjects are in Valencian.

When we have identified the book we are looking for in the catalog, we have to locate it in the library. The catalog provides us with a call number for each copy and indicates where in the library (room, cabinet, shelf) we can find it.

The catalog also allows:

– Consult the electronic documents subscribed to by the library: electronic journals, e-books and databases

– Carry out certain procedures remotely: reservations, renewals, etc.

2. Databases available through the Library

In addition to the documents that we find in the library catalog, we may need to search for more information (press, scientific articles, statistics, legislation, jurisprudence, financial data…) on the topic of our work.

For this, the library has a series of databases.

What is a database?

A database is a collection of data (texts, figures and/or images) belonging to the same context, systematically selected and stored, and organized according to a search program that allows their location and automated retrieval.

The libraries of the University of Valencia subscribe to a wide range of databases where we can locate information. We can access through the following link: http://biblioteca.uv.es/castellano/recursos_electronicos/bases_dades/acces.php

They are usually available online, and we can access them through the university network or from home by setting up a virtual private network (VPN). The collection also includes freely accessible databases.

There are different types of databases, depending on the information they contain: bibliographic, factual, press, etc.; you can consult the main ones for your discipline in section 2.4, Sources of information in Social Sciences. Some of the most used are bibliographic databases, which contain references to documents, mainly journal articles, chapters, reports, conference communications, patents, etc. Sometimes they provide access to the full text of the documents and/or an abstract.

General characteristics:

  • Records are structured in fields: author, title, source title, document type, etc. (see the sketch after this list).
  • They contain information extracted from primary sources (journals, monographs, conference proceedings…), subjected to documentary analysis (indexing and abstracting).
  • They allow you to search by keywords.
  • They allow you to save information to print it, save it, send it to an email account or to a bibliography manager.
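As a purely illustrative sketch of what such a field-structured record might look like, the following Python snippet models one bibliographic record and a trivial keyword search. The field names and the example data are assumptions for illustration, not taken from any particular database.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BibliographicRecord:
    """One record of a bibliographic database, structured in fields."""
    author: str
    title: str
    source_title: str          # e.g. the journal that contains the article
    document_type: str         # article, chapter, conference paper, ...
    keywords: List[str] = field(default_factory=list)
    abstract: str = ""         # summary produced during documentary analysis

def matches(record: BibliographicRecord, keyword: str) -> bool:
    """Very small keyword search over a few fields of the record."""
    text = " ".join([record.title, record.abstract, " ".join(record.keywords)])
    return keyword.lower() in text.lower()

# Usage: build a record and search it by keyword.
rec = BibliographicRecord(
    author="Doe, J.",
    title="Information literacy in higher education",
    source_title="Journal of Documentation",
    document_type="journal article",
    keywords=["information literacy", "university"],
    abstract="A short abstract produced during indexing.",
)
print(matches(rec, "literacy"))  # True
```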

Internet

The Internet provides access to a large and diverse amount of information and resources. However, unlike libraries, which select and evaluate information based on the quality and relevance of each resource, the Internet contains everything: no one is in charge of the content that is hosted, since it is a medium in which anyone can self-publish.

It is a participatory environment where anyone can contribute information. And that is where the problem of the network lies: not all the information is true or verified. Therefore, when using the Internet as a source of information, we must be critical and know how to distinguish which resources can help us. We must evaluate the information we find, especially if we want to use it for an assignment.

Google

One of the first impulses when we feel a need for information is to turn to Google to satisfy it. Although in some cases this resource is sufficient, it is necessary to keep in mind that not everything that exists appears there and not everything that appears there is relevant; that is, a lot of important information does not show up in conventional searches, and much of what does appear only adds noise and confusion.

How does Google work?

Google incorporates an automatic algorithm that evaluates the sites found, so that only the most relevant ones appear, taking into account the terms or keywords entered in the search. Once the results are obtained, these terms appear in bold, so that the user knows why those resources have been selected.

To evaluate the quality of the resources, Google uses the number of links that each page receives as a measure. In this way, each link from one page to another works as a “quote.” But not all links are valued equally: those links, or quotes, that come from pages that have in turn received more links from other pages are worth more. Through this “democratic” system, Google orders the list of results by placing the websites that receive the most links at the top of the list.
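The link-counting idea described above can be sketched with a small PageRank-style iteration. This is only a toy illustration of the principle; the pages, links and damping value below are invented assumptions, and Google’s actual ranking algorithm is far more elaborate.

```python
# Hypothetical pages and the pages they link to.
links = {
    "page_a": ["page_b", "page_c"],
    "page_b": ["page_c"],
    "page_c": ["page_a"],
    "page_d": ["page_c"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute score: links from well-linked pages count more."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # a page with no links spreads its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Pages with many incoming "quotes" from well-quoted pages rise to the top.
print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))
```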

Academic search engines, by contrast, have as their main characteristic that they only index websites linked to the academic world: journal portals, repositories, academic institutional websites, databases, commercial publishers, scientific societies, online library catalogs, etc.


In the search process, we can come across a wide variety of information on our topic. However, not all information will have the same value, therefore, we must select the appropriate sources of information, taking into account different aspects.

How to best address issues related to respondent fatigue or participant burnout?


Fatigue

Fatigue is a significant problem. In colloquial language, the term “fatigue” refers to the feeling of tiredness after an effort, which can be of a diverse nature and generates demotivation to continue that effort, whether intellectual, work-related or sporting. Unfortunately, there is no universally accepted definition of fatigue, which makes its nature conceptually complex and ambiguous.

Fatigue can be a consequence of physical or mental effort. This review will focus on fatigue as a state resulting from the practice of a physical-sports activity in which both types of effort are usually present, and which is associated with the training load (the training stimulus that disrupts the body’s homeostasis and activates allostatic mechanisms that allow the state of functional balance to be recovered).


The factors that contribute to fatigue resulting from physical activity arise not only from the physical effort, but also from the concomitant mental load and the results of the task being performed. Among the physiological factors that have been investigated in relation to fatigue, cardiovascular performance, muscular vascular occlusion, efficiency in the use of oxygen and nutrients, neuromuscular fatigue, and the presence of metabolites in the internal environment stand out.

Furthermore, factors located directly in the central nervous system (CNS) also intervene in this process, serving to regulate effort and protect the body from damage that could result from overexertion.

However, fatigue also derives from the tactical nature of activity typical of motor-interaction sports, in which the athlete invests an effort that is, on the one hand, cognitive, for decision making, and, on the other, emotional, for self-regulation. In this context, mental load, as an element that can influence fatigue, has become an area of research of undeniable importance. In this case, fatigue does not determine the inability to continue the sporting activity, but rather the inability to do so while maintaining an optimal level of performance.

Although experimentation on the factors that influence the appearance of fatigue points to multi-causal models, the scientific literature over-represents physiological and biomechanical mechanisms, to the detriment of those from psychology or neuroscience, which is why an updated review of these aspects is very pertinent.

Concepts of fatigue and mechanisms that contribute to its appearance

The multicausal nature of fatigue has been the subject of study in biomechanics, physiology and psychology, the first two addressing its objective nature and the last its subjective and mental nature. This division of the study of fatigue has generated diverse and not always compatible definitions.

The physiological approach defines fatigue as a functional failure of the organism that is reflected in a decrease in performance and that generally originates from excessive energy expenditure or depletion of the elements necessary for its generation. In this sense, most research focuses on muscular aspects, understanding fatigue as a loss of the maximum capacity to generate force or a loss of power production.

However, the physiological explanation of fatigue goes beyond these aspects, making it necessary to also consider the effect that exercise produces on motor units, the internal environment and the CNS.

López-Chicharro and Fernández-Vaquero understand that fatigue can result from the alteration of any of the processes on which muscle contraction depends and appear as a consequence of the simultaneous alteration of several of these processes. This approach is also shared by authors such as Barbany, who distinguishes between fatigue resulting from a failure in central activation and peripheral fatigue.


The central and peripheral mechanisms have generally been studied in isolation, assuming that their combination occurs in a linear manner, which has probably produced biases in the interpretation of the data and in the conclusions obtained. Abbiss and Laursen have carried out a complete review of these models, which include: the cardiovascular/anaerobic model, the energy supply/depletion model, the neuromuscular model, the muscle trauma model, the biomechanical model, the thermoregulation model and, finally, the motivational/psychological model, which focuses on the influence of intrapsychological factors, such as performance expectations or required effort.

Cognitive strategies to manage fatigue

There are many athletes who use various cognitive strategies to influence their performance in competition, based on managing the discomfort caused by effort, delaying the onset of fatigue. Some research has used hypnotic suggestion to selectively modify the level of perceived exertion of participants, in order to identify the potential contributions of higher brain centers towards cardiorespiratory regulation and other peripheral physiological mechanisms. Some of them have shown that cognitive processes can exert a certain influence on the variations caused at a perceptual, and even metabolic, level through these hypnotic suggestions.

Different works analyze the relationship between perceived effort, cognitive processes and the effects they can have on endurance tasks, leading to the development of cognitive strategies for their control. In general these have been grouped into two main types: associative and dissociative. With the former, the athlete concentrates on the signals received from the changes in body state caused by the effort made, while dissociative techniques are based on distracting the athlete with thoughts or mental tasks unrelated to the effort. The distracting effect of these techniques is based on using attentional resources so that the control of bodily sensations is left at an unconscious level.

Some of these works have focused on verifying the degree of effectiveness of different cognitive processing strategies for sports performance. The first antecedents suggest that the level of sports performance could act as a mediator of the effectiveness of the different strategies, since the highest-level athletes in long-term endurance tests tended to use associative strategies preferentially, while lower-level athletes tended to use dissociative ones.

Probably the first work that attempted to verify this possible effect with an experimental design was that of González-Suárez. The results of the experiment revealed greater performance (longer endurance time) when the subjects ran to self-imposed exhaustion using associative strategies. Likewise, those with a higher athletic level kept running for longer than subjects with lower levels. Dissociative strategies also produced a decrease in perceptions of fatigue and physical exertion, while associative strategies tended to increase perceptions of fatigue.

On the other hand, Hutchinson and Tenenbaum conclude from their work on a cycle ergometer endurance test at 50, 70 and 90% of VO2max that “attentional focusing was predominantly dissociative during the low-intensity phase of the task, and turned toward predominantly associative as the intensity increased.” This seems to indicate that increasing the intensity of the exercise makes the subject unable to abstract from the bodily sensations generated by the exercise. In any case, as Díaz-Ocejo et al. point out, the results are currently not conclusive, and it is advisable to approach the research considering other possible mediating variables of the effect of the different cognitive strategies.

Neurocognitive mechanisms of fatigue processing

The afferent information that can alter the rating of perceived exertion (RPE) is very diverse, and it remains to be elucidated how the CNS integrates it and elaborates the sensation of fatigue. From some studies it is known that the nervous structures involved could be located in the insular cortex, the anterior cingulate cortex (medial prefrontal region) and the thalamic regions.

In relation to the distribution of training content

In the same way that the accumulation of physical load throughout training causes the appearance of fatigue and a deterioration of performance, the accumulated effect of mental load contributes to the appearance of fatigue, and this, in turn, to a decrease in physical and motor performance.

For this reason, in training sessions whose objective focuses on learning new game behaviors, motor responses requiring a high level of coordination, tactical aspects with high cognitive demands, or a high level of emotional self-control or concentration, the tasks that pursue these objectives should be placed in the initial part of the session, when the athlete still has most of their physiological, cognitive and psychological resources available.

However, when the objective is not the acquisition of new motor schemes but the implementation of consolidated game actions and behaviors, the activities focused on their development should be placed in the final phase of the training session, just when the accumulated physical and mental load leads to a state of fatigue that demands self-control from the athlete. That is, we would place the execution of those behaviors in training in the place that most closely simulates the situations in which they will have to be deployed in real competition.

If we focus the analysis on the distribution of content throughout a microcycle, for example that of a team that competes at the weekend, the training activities that involve, on the one hand, greater physical effort and, on the other, greater cognitive or emotional self-control should be located in the first part (Monday to Wednesday), reducing the magnitude of the loads in the days before competing to leave the time necessary to guarantee the recovery or supercompensation of the athlete.

In this sense, the evaluation of the athlete’s performance, or control of the training process, which is so advisable as a means to stimulate learning, must be kept away from competition because, as Buceta points out, it can generate stress that adds to the stress that the competition itself already produces.

What role does randomization best play in your data collection design?


What is data collection?

Data collection is the process of gathering data for use in business decision-making, strategic planning, research and other purposes. It’s a crucial part of data analytics applications and research projects: Effective data collection provides the information that’s needed to answer questions, analyze business performance or other outcomes, and predict future trends, actions and scenarios.

IT systems regularly collect data on customers, employees, sales and other aspects of business operations when transactions are processed and data is entered. Companies also conduct surveys and track social media to get feedback from customers. Data scientists, other analysts and business users then collect relevant data to analyze from internal systems, plus external data sources if needed. The latter task is the first step in data preparation, which involves gathering data and preparing it for use in business intelligence (BI) and analytics applications.

An overview of randomization techniques: An unbiased assessment of outcome in clinical research

A good experiment or trial minimizes the variability of the evaluation and provides unbiased evaluation of the intervention by avoiding confounding from other factors, which are known and unknown.

Randomization ensures that each patient has an equal chance of receiving any of the treatments under study, and generates comparable intervention groups that are alike in all important aspects except for the intervention each group receives. It also provides a basis for the statistical methods used in analyzing the data. The basic benefits of randomization are as follows: it eliminates selection bias, balances the groups with respect to many known and unknown confounding or prognostic variables, and forms the basis for an assumption-free statistical test of the equality of treatments. In general, a randomized experiment is an essential tool for testing the efficacy of a treatment.

In practice, randomization requires generating randomization schedules, which should be reproducible. Generation of a randomization schedule usually includes obtaining random numbers and assigning them to each subject or treatment condition. Random numbers can be generated by computers or taken from the random number tables found in most statistics textbooks.

For simple experiments with a small number of subjects, randomization can be performed easily by assigning random numbers from random number tables to the treatment conditions. However, for large sample sizes, or if restricted or stratified randomization is to be performed, or if an unbalanced allocation ratio will be used, it is better to use statistical software such as SAS or the R environment to do the randomization.
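As a minimal sketch of the reproducibility point (the group labels, subject count and seed below are arbitrary assumptions for illustration), a seeded random number generator produces the same schedule every time it is run:

```python
import random

def randomization_schedule(n_subjects, groups=("control", "treatment"), seed=2024):
    """Reproducible simple randomization schedule: same seed, same schedule."""
    rng = random.Random(seed)  # fixing the seed makes the schedule reproducible
    return [(subject_id, rng.choice(groups))
            for subject_id in range(1, n_subjects + 1)]

# Usage: print the assignment for ten hypothetical subjects.
for subject, arm in randomization_schedule(10):
    print(subject, arm)
```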

REASON FOR RANDOMIZATION

Researchers in life science research demand randomization for several reasons. First, subjects in various groups should not differ in any systematic way. In clinical research, if treatment groups are systematically different, research results will be biased. Suppose that subjects are assigned to control and treatment groups in a study examining the efficacy of a surgical intervention. If a greater proportion of older subjects are assigned to the treatment group, then the outcome of the surgical intervention may be influenced by this imbalance. The effects of the treatment would be indistinguishable from the influence of the imbalance of covariates, thereby requiring the researcher to control for the covariates in the analysis to obtain an unbiased result.

Second, proper randomization ensures no a priori knowledge of group assignment (i.e., allocation concealment). That is, researchers, subjects, patients or participants, and others should not know to which group the subject will be assigned. Knowledge of group assignment creates a layer of potential selection bias that may taint the data. Schulz and Grimes stated that trials with inadequate or unclear randomization tended to overestimate treatment effects by up to 40% compared with those that used proper randomization. The outcome of the research can be negatively influenced by this inadequate randomization.

Statistical techniques such as analysis of covariance (ANCOVA), multivariate ANCOVA, or both, are often used to adjust for covariate imbalance in the analysis stage of the clinical research. However, the interpretation of this post adjustment approach is often difficult because imbalance of covariates frequently leads to unanticipated interaction effects, such as unequal slopes among subgroups of covariates.

One of the critical assumptions in ANCOVA is that the slopes of the regression lines are the same for each group of covariates. The adjustment needed for each covariate group may vary, which is problematic because ANCOVA uses the average slope across the groups to adjust the outcome variable. Thus, the ideal way of balancing covariates among groups is to apply sound randomization in the design stage of the clinical research (before the adjustment procedure) rather than after data collection. In such instances, random assignment is necessary and guarantees the validity of the statistical tests of significance that are used to compare treatments.
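A hedged sketch of how the homogeneity-of-slopes assumption can be checked is shown below, using the statsmodels library; the dataframe, column names and simulated values are assumptions made purely for illustration, not data from the text.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated example data: outcome depends on a covariate and a group effect.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["control", "treatment"], 50),
    "covariate": rng.normal(50, 10, 100),
})
df["outcome"] = (0.5 * df["covariate"]
                 + (df["group"] == "treatment") * 5
                 + rng.normal(0, 3, 100))

# Fit a model that includes the group x covariate interaction.
model = smf.ols("outcome ~ C(group) * covariate", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
# A significant interaction term would indicate unequal slopes across groups,
# i.e. a violation of the ANCOVA assumption discussed above.
```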


TYPES OF RANDOMIZATION

Many procedures have been proposed for the random assignment of participants to treatment groups in clinical trials. In this article, common randomization techniques, including simple randomization, block randomization, stratified randomization, and covariate adaptive randomization, are reviewed. Each method is described along with its advantages and disadvantages. It is very important to select a method that will produce interpretable and valid results for your study. The use of online software to generate a randomization code with the block randomization procedure will also be presented.

Simple randomization

Randomization based on a single sequence of random assignments is known as simple randomization. This technique maintains complete randomness in the assignment of a subject to a particular group. The most common and basic method of simple randomization is flipping a coin. For example, with two treatment groups (control versus treatment), the side of the coin (i.e., heads – control, tails – treatment) determines the assignment of each subject. Other methods include using a shuffled deck of cards (e.g., even – control, odd – treatment) or throwing a die (e.g., 3 or below – control, over 3 – treatment). A random number table found in a statistics book or computer-generated random numbers can also be used for simple randomization of subjects.

This randomization approach is simple and easy to implement in clinical research. In large trials, simple randomization can be trusted to generate similar numbers of subjects among groups. However, randomization results could be problematic in clinical research with a relatively small sample size, resulting in an unequal number of participants among groups.
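The following small simulation illustrates, with invented numbers, why simple randomization tends to balance out only with larger samples:

```python
import random

def simple_randomization(n_subjects, rng):
    """Coin-flip analogue: each subject is independently assigned to a group."""
    return [rng.choice(["control", "treatment"]) for _ in range(n_subjects)]

rng = random.Random(7)
small = simple_randomization(20, rng)
large = simple_randomization(2000, rng)
print("n=20   ->", small.count("control"), "control vs",
      small.count("treatment"), "treatment")
print("n=2000 ->", large.count("control"), "control vs",
      large.count("treatment"), "treatment")
# With small n the split often drifts well away from 50/50;
# with large n it stays close to an even allocation.
```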

Block randomization

The block randomization method is designed to randomize subjects into groups that result in equal sample sizes. This method is used to ensure a balance in sample size across groups over time. Blocks are small and balanced with predetermined group assignments, which keeps the numbers of subjects in each group similar at all times. The block size is determined by the researcher and should be a multiple of the number of groups (i.e., with two treatment groups, block size of either 4, 6, or 8). Blocks are best used in smaller increments as researchers can more easily control balance.

After block size has been determined, all possible balanced combinations of assignment within the block (i.e., equal number for all groups within the block) must be calculated. Blocks are then randomly chosen to determine the patients’ assignment into the groups.
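A minimal sketch of this procedure, for two hypothetical groups and a block size of four (all values here are assumptions for illustration), might look as follows:

```python
import itertools
import random

def block_randomization(n_blocks, block_size=4, groups=("A", "B"), seed=42):
    """Pick balanced blocks at random so group sizes stay similar over time."""
    rng = random.Random(seed)
    per_group = block_size // len(groups)
    # All balanced arrangements within one block, e.g. AABB, ABAB, ABBA, ...
    balanced_blocks = sorted(set(itertools.permutations(groups * per_group)))
    schedule = []
    for _ in range(n_blocks):
        schedule.extend(rng.choice(balanced_blocks))  # pick one block at random
    return schedule

# Usage: three blocks of four subjects each, always two "A" and two "B" per block.
print(block_randomization(n_blocks=3))
```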

Although balance in sample size may be achieved with this method, groups may be generated that are rarely comparable in terms of certain covariates. For example, one group may have more participants with secondary diseases (e.g., diabetes, multiple sclerosis, cancer, hypertension, etc.) that could confound the data and may negatively influence the results of the clinical trial. Pocock and Simon stressed the importance of controlling for these covariates because of serious consequences to the interpretation of the results. Such an imbalance could introduce bias in the statistical analysis and reduce the power of the study. Hence, sample size and covariates must be balanced in clinical research.


Stratified randomization

The stratified randomization method addresses the need to control and balance the influence of covariates. This method can be used to achieve balance among groups in terms of subjects’ baseline characteristics (covariates). Specific covariates must be identified by the researcher who understands the potential influence each covariate has on the dependent variable. Stratified randomization is achieved by generating a separate block for each combination of covariates, and subjects are assigned to the appropriate block of covariates. After all subjects have been identified and assigned into blocks, simple randomization is performed within each block to assign subjects to one of the groups.
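The following sketch illustrates the idea with two invented covariates (sex and an age band); the subjects, strata and seed are assumptions for illustration only.

```python
import random
from collections import defaultdict

# Hypothetical subjects with their baseline covariates.
subjects = [
    {"id": 1, "sex": "F", "age_band": "<50"},
    {"id": 2, "sex": "F", "age_band": ">=50"},
    {"id": 3, "sex": "M", "age_band": "<50"},
    {"id": 4, "sex": "M", "age_band": "<50"},
    {"id": 5, "sex": "F", "age_band": "<50"},
    {"id": 6, "sex": "M", "age_band": ">=50"},
]

def stratified_randomization(subjects, groups=("control", "treatment"), seed=3):
    rng = random.Random(seed)
    strata = defaultdict(list)
    for s in subjects:                        # one stratum per covariate combination
        strata[(s["sex"], s["age_band"])].append(s)
    assignment = {}
    for stratum_subjects in strata.values():  # simple randomization within each stratum
        for s in stratum_subjects:
            assignment[s["id"]] = rng.choice(groups)
    return assignment

print(stratified_randomization(subjects))
```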

How will you deal with unexpected challenges or obstacles during data collection?


What is data collection?

Data collection is the process of gathering data for use in business decision-making, strategic planning, research and other purposes. It’s a crucial part of data analytics applications and research projects: Effective data collection provides the information that’s needed to answer questions, analyze business performance or other outcomes, and predict future trends, actions and scenarios.

In businesses, data collection happens on multiple levels. IT systems regularly collect data on customers, employees, sales and other aspects of business operations when transactions are processed and data is entered. Companies also conduct surveys and track social media to get feedback from customers. Data scientists, other analysts and business users then collect relevant data to analyze from internal systems, plus external data sources if needed. The latter task is the first step in data preparation, which involves gathering data and preparing it for use in business intelligence (BI) and analytics applications.

For research in science, medicine, higher education and other fields, data collection is often a more specialized process, in which researchers create and implement measures to collect specific sets of data. In both the business and research contexts, though, the collected data must be accurate to ensure that analytics findings and research results are valid.

Some observations on the challenges of digital transformation research in the business sector

Since digital transformation is an applied field and not purely theoretical, collaboration with companies during research is essential. However, such research activities are typically subject to two main types of challenges, one arising from the data collection process and another from the publication process. Below, I will take a closer look at these two obstacles and offer solutions.

Challenges in the data collection process

Trust is the fundamental basis for successful collaboration between companies and researchers. However, creating the trust necessary to establish that initial connection can be difficult, especially when the parties do not know each other. Companies tend to refuse to collaborate with external researchers when the benefit and/or form of collaboration is unclear.

However, even in cases where a minimum of trust has been established, companies often have reservations about disclosing their most sensitive and specific data. They may want to prevent such data from falling into the hands of competitors, or they may not want to speak publicly about their failures. This resistance is a big problem for researchers, since these insights are important for the general understanding of the underlying problem and would allow other professionals to learn from them. Withholding certain data also prevents a general understanding of the object of research.

Another key challenge in terms of collaboration is often creating a common timeline. In the business context, decisions can sometimes be made randomly, and deadlines are usually short. This does not always correspond to the requirements that researchers must meet in their environment. For example, for professionals without academic training, it is often problematic to understand that publication processes can take several years.


Challenges in the publishing process

For many researchers, publishing studies on digital transformation is often a difficult process due to the lack of theoretical foundations and development. While conclusions may be practically relevant, their integration into the body of knowledge and their implications for research are not always clearly defined.

As research with companies is often carried out on a small scale, it can be difficult to ensure its generalisability or replicability. It is therefore necessary to anticipate possible selection bias that could call into question the representativeness of the results; this concern is normally associated with the suitability of interviewees who, for various reasons, may not be able to give opinions on the different functions of the company or on the company as a whole.

Some suggestions

In view of these frequent problems in the data collection and publication processes, some recommendations are made below:

In general, researchers should strive to establish long-term collaborations with companies, not only because it can reinforce mutual trust, but also because it could improve the efficiency of many collaborative processes. To this end, it might be useful to jointly create a long-term plan. Larger collaborative initiatives can be complemented by a more institutionalized approach, for example through regular stakeholder meetings.

Certainly, the key to success in establishing such partnerships is to highlight the benefits that the company can obtain. Only by sharing the benefits will companies commit to supporting researchers in the long term and assuming the additional costs that this may entail. The potential benefits of collaboration can be justified, not only by providing external expertise and methodological support, but also, for example, by facilitating better access to universities’ knowledge resources or to high-potential students.

Transparency is also crucial to establishing a relationship of trust. This should apply not only to operational matters, but also to the objectives pursued by both parties, including clear definition of roles and responsibilities and open and reliable communication between the parties. Researchers should inform companies of interim results and proactively share other issues of interest or potential project ideas that could also stimulate collaboration.

Whenever sensitive data is involved, a confidentiality and non-disclosure agreement can be advantageous for both parties. In this way, researchers will have a more complete and reliable view of the object of the investigation, while the company will ensure the protection of its sensitive data. From the researcher’s perspective, although access to that sensitive data may be crucial, not all information needs to be published, and data anonymity or publication embargo periods may mean that it can be published without violating the agreement. However, since unexpected changes in the research environment are common in companies, researchers must have a Plan B.

When considering publication, it is advisable to develop a clear theoretical basis during an early stage of research planning, without neglecting the generalizability of the practical problem. Researchers must also identify the most appropriate publication options. Journals that are more practitioner-oriented may offer advantages in terms of the length of the publication process, as well as a potentially more suitable target audience.

To ensure the scientific rigor of the research, it is advisable to select an adequate number of respondents within the companies. It is especially recommended to triangulate results with external sources (for example, annual reports or newspaper articles) to reduce potential respondent bias. Researchers should also strive to make the selection of their respondents and companies as transparent and legitimate as possible. Detailed documentation of the research process and underlying methodology will further increase reviewers’ confidence. While small-scale exploratory studies are particularly suitable for new areas of research, large-scale quantitative studies could be a good opportunity to verify the generalizability of promising initial results.


Conclusion

Research on the digital transformation of “living objects” can sometimes be fraught with difficulties, but with good preparation and the above recommendations taken into account, researchers can overcome the key double challenge of such efforts.

What steps will you take to ensure the privacy of participants in your data collection?


Data collection

Data collection is very important. Data is a collection of facts, figures, objects, symbols, and events gathered from different sources. Organizations collect data with various data collection methods to make better decisions. Without data, it would be difficult for organizations to make appropriate decisions, so data is collected from different audiences at various points in time.

For instance, an organization must collect data on product demand, customer preferences, and competitors before launching a new product. If data is not collected beforehand, the organization’s newly launched product may fail for many reasons, such as low demand and an inability to meet customer needs.

Although data is a valuable asset for every organization, it does not serve any purpose until analyzed or processed to get the desired results.

Data collection methods are techniques and procedures used to gather information for research purposes. These methods can range from simple self-reported surveys to more complex experiments and can involve either quantitative or qualitative approaches to data gathering.

Some common data collection methods include surveys, interviews, observations, focus groups, experiments, and secondary data analysis. The data collected through these methods can then be analyzed and used to support or refute research hypotheses and draw conclusions about the study’s subject matter.

Privacy and information rights in the information society and the ICT environment

The human rights that have been affirmed in the constitutions of the different countries are, in accordance with the theory of guaranteeism, those that can be considered fundamental. By a decree published in the Official Gazette of the Federation on June 10, 2011, the Mexican Constitution renamed its first chapter “On human rights and their guarantees”; therefore, when reference is made to the constitutional recognition of these rights, we will take this point into account.

The Mexican Constitution recognizes as human or fundamental rights related to information, and therefore to the information society, mainly the following: the right to information (article 1) and the protection of personal data (article 16), in addition to freedom of expression and of the press (article 7) and the inviolability of communications (article 16). Copyright and intellectual property rights are mentioned in the fundamental norm as a reference to the fact that their existence should not be considered a monopoly (article 28).

Therefore, these rights, and some others linked to personal information, such as the rights to intimacy, privacy, honor and one’s own image (which, although not directly recognized in the Constitution, are recognized through the international treaties signed by this country), are the ones that we will address in this work.

The reasons for this delimitation (which could seem very broad) are based on the fact that there is a connection between all of them and that, given the growing use of ICT (which supports the development of said information society), they can be violated jointly or collaterally, and not in isolation. An example of this is the connection that exists between the right to information and freedom of expression with respect to the right to privacy. On several occasions they collide, and on other occasions they almost complement each other.

The analysis will be carried out by describing the regulation or protection that exists in Mexico of these rights and comparing them in some cases with the norms of other countries or regions. All this so that through a brief exercise of lege ferenda we can schematically detect the challenges that remain pending in Mexico.

Likewise, these rights will be discussed in the face of the challenges posed by the information and knowledge society (SIC). All of this especially concerns their protection, since, at the same time that these rights are essential to and at the core of that society, their impact also places them at a level of permanent risk, which increases due to the lack of effective and timely legal protection or self-regulation.

The importance of the information security measures that some countries have attempted or suggested adopting will also be pointed out, in order to control the flow of information that society receives through the network, its benefits and harms with respect to rights such as that of information or privacy.

On the other hand, in relation to universal service, which we will also discuss, it must be said that the doctrine refers to it as one of the ways to make other fundamental rights a reality, such as the right to information or to access to the Internet or to the SIC. Universal service appears in the telecommunications legislation of various countries, although not in the case of Mexico, whose Federal Law for this sector only speaks of social coverage, as we will see in due course.

Fundamental rights related to personal information

Various fundamental and personality rights are related to each other, but they are differentiable, so it is necessary to make a distinction between them. In this way, linked but not equal rights must be listed, such as the rights to honor, to one’s own image, to intimacy or privacy, to data protection, to the inviolability of the home and to the secrecy of communications. However, although the legal good that each of them protects is different, they cannot be treated in isolation, and even less so when they are analyzed within the framework of a SIC that interconnects many aspects.

The legal framework of personality rights also has a relationship with a principle of law recognized in the Declaration of Human Rights, which is that of human dignity. The European Community has elevated this to a fundamental legal good and, therefore, taking into account the large amount of personal information that circulates on the networks, it is evident that the situation resulting from this may specifically affect this good.

The right to data protection is closely linked to the rights to intimacy and privacy, but it enjoys its own autonomy (according to jurisprudential interpretation). Although the right to privacy was derived from the recognition of personal freedom in the first generation of rights, it was not until the third generation that, in response to the phenomenon of the so-called “contamination of freedoms” (liberties’ pollution), the right to privacy gained greater prominence.

This forced it to expand its scope through the recognition of new facets, so that it now incorporates a ramification of rights, such as the right to honor, to one’s own image, to private life (in its broadest sense), to the protection of personal data and even, for a sector of the doctrine, to computer freedom.

Thus, the right to the protection of personal data is built on the right to privacy and, in addition to implying the obligation of the State to guarantee the protection of personal information contained in archives, databases, files or any other medium, whether documentary or digital, grants the owner of such information the right to control over it, that is, to access, review, correct and demand the omission of personal data that a public or private entity has in its possession.

This right, in accordance with what we mentioned before, and according to GALÁN, is also linked to constitutional and legal rights or principles of great value, such as human dignity, individual freedom, self-determination and the democratic principle. Therefore, the aforementioned author maintains:

The protection of personal data, even recognizing the dynamism of its objective content, derived from technological changes, guarantees the person a power of control – of positive content – over the capture, use, destination and subsequent trafficking of personal data. Therefore, this right covers those data that are relevant to the exercise of any person’s rights, whether or not they are constitutional and whether or not they are related to honor, ideology, personal and family privacy.

For its part, the right to honor, to one’s own image and even the constitutional guarantees of inviolability of the home and the secrecy of private communications, are closely related to personal information, since they all refer to information related to people, to the physical appearance of a person (image), to that contained within their home, or in the communications they issue.


Legal recognition of fundamental rights relating to personal information

As we mentioned before, and according to the theory of fundamental rights (particularly that of guaranteeism, by Luigi FERRAJOLI), the human rights that have been constitutionally affirmed are those that can be defined as fundamental. One of the essential attributes of these rights, according to their origin and inspiring philosophical elements, is their universality. Hence, they appear reflected in international instruments such as the Universal Declaration of Human Rights (UDHR) of 1948 and other similar ones, although the names of these other legal instruments do not include the adjective “universal”.

In this sense, universality carries a strong naturalist influence of the first constitutionalism. Thus, it was thought that if the rights stated were, precisely, natural, then they had to be recognized for all people, taking into account that they all carry the same “nature.” In the words of RIALS, cited by CARBONELL, “if there exists a rational natural order knowable with evidence, it would be inconceivable that it would be consecrated with significant variants depending on the latitudes.”

From that perspective, we could say that in Mexican positive law the right to the protection of personal data and the guarantees of the inviolability of the home and the secrecy of private communications are expressly recognized in the Constitution (article 16), but not the rights to intimacy, privacy, honor and one’s own image, as will be specified below.

Direct recognition of the right to the protection of personal data is made in article 16 of the Constitution, whose second paragraph, added by a reform published in the Official Gazette of the Federation on June 1, 2009, recognizes the right of every person to the protection of their personal data, to access, rectify and cancel such data, and to express their opposition to its processing.

Likewise, the same paragraph established the terms for the exercise of this right and the cases of exception to the principles that govern data processing (for reasons of national security, provisions of public order, public safety and health, or to protect the rights of third parties), to be established by the law enacted on the matter (which took place the following year).

The Federal Law on Protection of Personal Data Held by Private Parties (LFPDPPP) of 2010 is the legislation that develops the constitutional precept just cited, and in its text personal data is defined as “any information concerning an identified or identifiable person”, thus aligning, so to speak, with the most common international definition and, in particular, with that of the Spanish standard on the matter.

Evidently, Mexican legislation is concerned with defining the principles and criteria to make this right effective and the procedures to put it into effect. The LFPDPPP Regulations develop all these areas more fully.

It should also be mentioned that several years earlier there was already legislation regulating some aspects of the processing of personal data, although it only applies to the public sphere. This is the Federal Law on Transparency and Access to Government Public Information, published in the aforementioned official gazette on June 11, 2002, which defines, in its article 3, section II, what must be understood as personal data for the purposes of that Law, adjusting closely to what the legislation applicable to privately held files would later reflect.

However, although these specific developments exist, as we said, the rights to privacy and intimacy are not expressly mentioned in the fundamental Mexican norm. Nevertheless, their recognition could be understood through a lato sensu interpretation of the first paragraph of article 16 of the Constitution, where it states:

“No one can be disturbed in his person, family, domicile, papers or possessions, except by virtue of a written order from the competent authority that establishes and motivates the legal cause of the procedure.” Indeed, some protection for these rights can be derived from this, although it is necessary to mention that the rest of the content of this paragraph basically refers to the procedural field. The same happens with the content of article 7 of the Constitution, which establishes respect for private life as a limit to freedom of the press.

In addition to the above, we must say that, even considering this lack of constitutional recognition of the aforementioned human rights, it is currently not an obstacle to claiming their protection and exercise, since they can be invoked through conventional means, as currently established in article 1 of the Constitution. This article, as we mentioned before, stipulates that all people will enjoy the human rights recognized in the Constitution itself and in the international treaties to which Mexico is a party.

How do you best ensure consistency of data collected over time?


Data collected

Collected data is very important. Data is a collection of facts, figures, objects, symbols, and events gathered from different sources. Organizations collect data with various data collection methods to make better decisions. Without collected data, it would be difficult for organizations to make appropriate decisions, so data is collected from different audiences at various points in time.

For instance, an organization must collect data on product demand, customer preferences, and competitors before launching a new product. If data is not collected beforehand, the organization’s newly launched product may fail for many reasons, such as low demand and an inability to meet customer needs. The collected data is used to improve the offering or target customers.

Although data is a valuable asset for every organization, it does not serve any purpose until analyzed or processed to get the desired results.

Validity and reliability in qualitative methodology

In current academic circles, which are increasingly using qualitatively oriented methods and techniques for their different types of research, a difficulty related to the validity and reliability of their results has repeatedly arisen.

In general, the concepts of validity and reliability that reside in the minds of a large majority of researchers continue to be those used in the traditional positivist epistemological orientation, already more than surpassed in the second half of the 20th century. From here a conflict arises, since qualitative methodology adopts, as the basis and fundamental postulate of its theory of knowledge and science, the postpositivist epistemic paradigm.

The postpositivist paradigm became established in the academic field after many studies in international symposiums on the philosophy of science (see Suppe, 1977, 1979), in which the death certificate of the inherited conception (logical positivism) was drawn up; from that moment on, it “was abandoned by almost all epistemologists” (Echeverría, 1989, p. 25), due, as Popper (1977, p. 118) points out, to its insurmountable intrinsic difficulties.

Obviously, it is not enough for these conclusions to be reached at this high scientific level for them to be immediately adopted in practice by the majority of researchers, nor were the heliocentric ideas of Copernicus and Galileo fully adopted until after a century by illustrious astronomers from the universities of Bologna, Padua and Pisa. According to Galileo (1968) this required “changing people’s heads, which only God could do” (p. 119).

Postpositivist epistemology shows that, in the cognitive process of our mind, there is no direct relationship between the empirical visual, auditory, olfactory, etc. images and the external reality to which they refer; that relationship is always mediated and interpreted by the personal and individual horizon of the researcher (his or her values, interests, beliefs, feelings, etc.). For this same reason, the traditional positivist concepts of validity (as a physiological mind-thing relationship) and of reliability (as a repetition of the same mental process) must be reviewed and redefined.


Epistemological basis for a redefinition of Validity and Reliability

Systemic ontology

When an entity is a composition or aggregation of elements (a diversity of unrelated parts), it can, in general, be studied and measured appropriately under the guidance of the parameters of traditional quantitative science, in which mathematics and probabilistic techniques play the main role. When, on the other hand, a reality is not a juxtaposition of elements, but rather its “constituent parts” form an organized totality in strong interaction with each other, that is, they constitute a system, then its study and understanding requires capturing the internal dynamic structure that characterizes it, and for this a structural-systemic methodology is required.

Bertalanffy had already pointed out that “general systems theory – as he originally conceived it and not as it has been disseminated by many authors that he criticizes and disavows (1981, p. 49) – was destined to play a role analogous to that played by the Aristotelian logic in the science of antiquity” (Thuillier, 1975, p. 86).

There are two basic kinds of systems: linear and non-linear. Linear systems do not present “surprises”, since they are fundamentally “aggregates”, owing to the little interaction between the parts: they can be decomposed into their elements and recomposed again, a small change in an interaction produces a small change in the solution, determinism is always present and, by reducing the interactions to very small values, the system can be considered to be composed of independent or linearly dependent parts.

The world of non-linear systems, on the other hand, is totally different: it can be unpredictable, violent and dramatic; a small change in a parameter can shift the solution little by little and then, suddenly, change it to a totally new type of solution, as when "quantum leaps" occur in quantum physics, absolutely unpredictable events that are not controlled by causal laws but only by the laws of probability.

These non-linear systems must be grasped from within and their situation must be evaluated in parallel with their development. Prigogine claims (1986) that the non-linear world contains much of what is important in nature: the world of dissipative structures.
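As a minimal numerical illustration of this sensitivity to parameters, the logistic map, a standard textbook non-linear system used here only as an assumed example (not one drawn from the authors cited), shows how small changes in a single parameter move the same equation from a steady state, to oscillation, to chaos:

```python
def logistic_trajectory(r, x0=0.2, skip=200, keep=6):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n) and return a few
    values after discarding the transient."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    values = []
    for _ in range(keep):
        x = r * x * (1 - x)
        values.append(round(x, 3))
    return values

# Small changes in the single parameter r produce qualitatively new behavior:
for r in (2.8, 3.2, 3.9):
    print(r, logistic_trajectory(r))
# r = 2.8 settles on one fixed value, r = 3.2 oscillates between two values,
# and r = 3.9 wanders without settling into any repeating pattern.
```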

Well, our universe is basically made up of non-linear systems at all levels: physical, chemical, biological, psychological and sociocultural.

If we observe our environment we see that we are immersed in a world of systems. When considering a tree, a book, an urban area, any device, a social community, our language, an animal, the firmament, in all of them we find a common feature: they are complex entities, formed by parts in mutual interaction, whose identity results from an adequate harmony between its constituents, and endowed with their own substantivity that transcends that of those parts; In short, it is about what, in a generic way, we call systems (Aracil, 1986, p. 13). Hence, von Bertalanffy (1981) maintains that “from the atom to the galaxy we live in a world of systems” (p. 47).

According to Capra (1992), quantum theory demonstrates that "all particles are dynamically composed of one another in a self-consistent manner, and, in that sense, it can be said that they 'contain' each other." In this way, physics (the new physics) is a model science for the new concepts and methods of other disciplines. In the field of biology, Dobzhansky (1967) has pointed out that the genome, which comprises both regulatory and operant genes, works as an orchestra and not as a set of soloists.

Köhler (1967), for psychology, also used to say that "in the structure (system) each part dynamically knows each one of the others." And Ferdinand de Saussure (1931), for linguistics, stated that "the meaning and value of each word is in the others," and that the system is "an organized totality, made of supportive elements that can only be defined in relation to each other, depending on their place in this totality."

If the significance and value of each element of a dynamic structure or system is closely related to that of the others, if everything is a function of everything, and if each element is necessary to define the others, then it cannot be seen, understood or measured "in itself", in isolation, but only through the position or role it plays in the structure. Thus, Parsons points out that "the most decisive condition for a dynamic analysis to be valid is that each problem refers continuously and systematically to the state of the system considered as a whole" (in: Lyotard, 1989, p. 31).

The need for a proper approach to dealing with systems has been felt in all fields of science. Thus a series of related modern approaches were born, such as, for example, cybernetics, computer science, set theory, network theory, decision theory, game theory, stochastic models and others; and, in practical application, systems analysis, systems engineering, the study of ecosystems, operations research, etc.

Although these theories and applications differ in some initial assumptions, mathematical techniques and goals, they nevertheless coincide in dealing, in one way or another and according to their area of interest, with "systems" and "organization"; that is, they agree in being "systems sciences" that study aspects not addressed until now and problems of the interaction of many variables, organization, regulation, choice of goals, etc. They all seek the "systemic structural configuration" of the realities they study.

In a system there is a set of interrelated units in such a way that the behavior of each part depends on the state of all the others, since they are all found in a structure that interconnects them. Organization and communication in the systems approach challenges traditional logic, replacing the concept of energy with that of information, and that of cause-effect with that of structure and feedback.

In living beings, and especially in human beings, there are structures of a very high level of complexity, which are made up of systems of systems whose understanding defies the acuity of the most privileged minds; These systems constitute a “physical-chemical-biological-psychological-cultural and spiritual” whole.

Only referring to the biological field, we talk about the blood system, respiratory system, nervous system, muscular system, skeletal system, reproductive system, immune system and many others. Let’s imagine the high level of complexity that is formed when all these systems interrelate and interact with all the other systems of a single person and, even more so, of entire social groups.

Now, what implications does the adoption of the systemic paradigm have for the cultivation of science and its technology? It completely changes the foundations of the entire scientific edifice: its bases, its conceptual structure and its methodological scaffolding. This is the path that methodologies inspired by hermeneutic approaches, the phenomenological perspective and ethnographic orientations, that is, qualitative methodologies, try to follow today.

1.2. Positivist validity and reliability

Traditional positivist literature defines different types of validity (construct validity, internal validity, external validity), but they all try to verify whether we actually measure what we propose to measure. Likewise, this epistemological orientation seeks to determine a good level of reliability, that is, the possibility of repeating the same research with identical results. All these indicators have a common denominator: they are calculated and determined by means of "an isolated measure, independent of the complex realities to which they refer."


Construct validity (the validity of hypothetical constructs), which is the most important, tries to establish an operational measure for the concepts used; in the psychological field, for example, the instrument would measure the isolated psychological property or properties that underlie the variable. This validity is not easy to assess, since it is embedded in the scientific framework of the research and its methodology, which are what give it meaning.

Internal validity is specifically related to establishing or finding a causal or explanatory relationship; that is, if event x leads to event y; excluding the possibility that it is caused by event z. This logic is not applicable, for example, to a descriptive or exploratory study (Yin, 2003, p. 36).

External validity tries to verify whether the results of a given study are generalizable beyond its limits. This requires that there be a homology or, at least, an analogy between the sample (studied case) and the universe to which it is intended to be applied.

Some authors refer to this type of validity with the name of content validity, since they define it as the representativeness or sampling adequacy of the content that is measured with the content of the universe from which it is extracted (Kerlinger, 1981a, p. 322).

Likewise, reliability aims to ensure that a researcher, following the same procedures described by another previous researcher and conducting the same study, can reach the same results and conclusions. Note that this is a redoing of the same study, not a replica of it.

1.3. Critical analysis of positivist criteria

All these indicators ignore the fact that each human reality or entity, be it a thought, a belief, an attitude, an interest, a behavior, etc., is not an isolated entity; rather, it receives its meaning or significance, that is, it is configured as such, by the type and nature of the other elements and factors of the system or dynamic structure in which it is inserted and by the role and function it plays in it; all of which can change with the temporal variable, since these are never static. An isolated element can never be adequately conceptualized or categorized, since it may have many meanings according to the constellation of factors or the structure from which it comes.

If we delve deeper into the “parts-whole” phenomenon, and focus more closely on its epistemological aspect, we will say that there are two modes of intellectual apprehension of an element that is part of a totality. Michael Polanyi (1966) puts it this way:

…we cannot understand the whole without seeing its parts, but neither can we see the parts without understanding the whole… When we understand a certain series of elements as part of a whole, the focus of our attention moves from the details, until now not understood, to the understanding of their joint meaning.

This passage of attention does not make us lose sight of the details, since a whole can only be seen by seeing its parts, but it completely changes the way we apprehend the details. Now we apprehend them in terms of the whole on which we have focused our attention. I will call this subsidiary apprehension of details, as opposed to the focal apprehension that we would employ to attend to the details themselves, not as parts of the whole (pp. 22-23).

Unfortunately, analytical philosophy and its positivist orientation followed the advice that Descartes puts as a guiding idea and as a second maxim, in the Discourse on Method: “fragment every problem into as many simple and separate elements as possible.” This orientation has systematically accepted the (false) assumption that total reality would be captured by dismembering it (disintegrative analysis) into its different components.

This approach constituted the conceptual paradigm of science for almost three centuries; but it breaks or ignores the set of links and relationships that each human entity, and sometimes even the same physical or chemical entities, has with the rest. And that rest or context is precisely what gives it the nature that constitutes it, its characteristics, its properties and its attributes.

This decontextualization of realities makes them amorphous, ambiguous and, most of the time, without any meaning or, alternatively, with many possible meanings. As the creator of General Systems Theory, Ludwig von Bertalanffy (1976), very appropriately points out, "every mathematical model is an oversimplification, and it is debatable whether it reduces real events to the bare bones or whether it tears out vital parts of their anatomy" (p. 117).


For a greater exemplification, let’s think about what is happening recently in the field of medicine. Excellent professionals in this science, sometimes guided by their specialization or super-specialization, prescribe a medicine that seems magnificent for a certain ailment or condition, but they are unaware that, for some people in particular, it can even be fatal, since they have a special allergy, for example, to penicillin or some component of it.

This is without mentioning that the etiology of a certain disease sometimes has its origin in non-biological areas, such as a high level of stress due to psychological reasons, family problems or socioeconomic difficulties; areas that the distinguished specialist may be unaware of, even in their simplest aspects, but that could give a clue as to where the necessary therapy should be directed.

Postpositivist View of Validity and Reliability

Validity

In a broad and general sense, we will say that an investigation will have a high level of “validity” to the extent that its results “reflect” an image that is as complete as possible, clear and representative of the reality or situation studied.

But we do not have a single type of knowledge. The natural sciences produce knowledge that is effective in dealing with the physical world; They have been successful in producing instrumental knowledge that has been politically and lucratively exploited in technological applications. But instrumental knowledge is only one of the three cognitive forms that contribute to human life.

The historical-hermeneutic sciences (interpretive sciences) produce the interactive knowledge that underlies the life of each human being and the community of which he or she is a part; Likewise, critical social science produces the reflective and critical knowledge that human beings need for their development, emancipation and self-realization.

Each form of knowledge has its own interests, its own uses and its own criteria of validity; For this reason, it must be justified on its own terms, as has traditionally been done with ‘objectivity’ for the natural sciences, as Dilthey did for hermeneutics, and as Marx and Engels did for critical theory.

In the natural sciences, validity is related to their ability to control the physical environment with new physical, chemical and biological inventions; in the hermeneutical sciences, validity is appreciated according to their ability to produce human relationships with a high sense of empathy and connection; and in critical social science, validity is related to its ability to overcome obstacles and promote the growth and development of more self-sufficient human beings in the full sense.

As we pointed out, an investigation has a high level of validity if when observing or appreciating a reality, that reality is observed or appreciated in its full sense, and not just an aspect or part of it.

If reliability has always represented a difficult requirement for qualitative research, due to its peculiar nature (impossibility of repeating, stricto sensu, the same study), the same has not happened in relation to validity. On the contrary, validity is the greatest strength of these investigations. Indeed, qualitative researchers’ assertion that their studies have a high level of validity derives from their way of collecting information and the analysis techniques they use.

These procedures lead them to live among the subjects participating in the study, to collect data over long periods of time, to review, compare and analyze them continuously, to adapt the interviews to the empirical categories of the participants rather than to abstract or foreign concepts brought from another environment, to use participant observation in the real settings and contexts where the events occur and, finally, to incorporate into the analysis process a continuous activity of feedback and reevaluation.

All this guarantees a level of validity that few methodologies can offer. However, validity can also be perfected, and it will be all the greater to the extent that some problems and difficulties that may arise in the qualitative research process are taken into account. Among others, for good internal validity, special attention will have to be paid to the following:

a) There may be a noticeable change in the environment studied between the beginning and the end of the investigation. In this case, information will have to be collected and collated at different times in the process.

b) It is necessary to carefully calibrate the extent to which the observed reality is a function of the position, status and role that the researcher has assumed within the group. Interactive situations always create new realities or modify existing ones.

c) The credibility of information can vary greatly: informants can lie, omit relevant data or have a distorted view of things. It will be necessary to contrast it with that of others and to collect it at different times. It is also convenient that the sample of informants represents as well as possible the groups, orientations or positions of the population studied, as a strategy to correct perceptual distortions and prejudices, although it will always remain true that the truth is not produced by a random and democratic exercise in the collection of general information, but by the information of the most qualified and trustworthy people.


Regarding external validity, it is necessary to remember that often the meaning structures discovered in one group are not comparable with those of another, because they are specific and typical of that group, in that situation and in those circumstances, or because the second group has been poorly chosen and the conclusions obtained in the first are not applicable to it.

Reliability

Research with good reliability is one that is stable, secure, consistent, the same as itself at different times and predictable for the future. Reliability also has two sides, one internal and one external: there is internal reliability when several observers, when studying the same reality, agree in their conclusions; There is external reliability when independent researchers, when studying a reality in different times or situations, reach the same results. 

The traditional concept of external reliability implies that a study can be repeated with the same method without altering the results; that is, it is a measure of the replicability of research results. In the human sciences it is practically impossible to reproduce the exact conditions in which a behavior and its study took place. Heraclitus already said in his time that "no one bathes in the same river twice"; and Cratylus added that "it is not possible to do it even once", since the water is continually flowing (Aristotle, Metaphysics, iv, 5).

In studies carried out through qualitative research, which, in general, are guided by a systemic, hermeneutic, phenomenological, ethnographic and humanistic orientation, reliability is oriented towards the level of interpretive agreement between different observers, evaluators or judges of the same phenomenon; that is, reliability will be, above all, internal, inter-judge reliability. A good level of this reliability is considered to be reached at 70%; that is, for example, out of 10 judges, there is consensus among 7.
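A minimal sketch of that inter-judge criterion, assuming each judge assigns one categorical code to the same phenomenon (the function name and the example codes are illustrative, not taken from any cited study):

```python
from collections import Counter

def interjudge_consensus(codes):
    """Proportion of judges who agree with the most frequent (modal) code.
    `codes` holds one categorical judgment per judge for the same phenomenon."""
    modal_count = Counter(codes).most_common(1)[0][1]
    return modal_count / len(codes)

# Example: 7 of 10 judges assign the same category, so consensus = 0.7,
# which reaches the 70% threshold mentioned above.
codes = ["A", "A", "A", "A", "A", "A", "A", "B", "B", "C"]
print(interjudge_consensus(codes))          # 0.7
print(interjudge_consensus(codes) >= 0.70)  # True
```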

Given the particular nature of all qualitative research and the complexity of the realities it studies, it is not possible to repeat or replicate a study in the strict sense, as can be done in many experimental investigations. Due to this, the reliability of these studies is achieved using other rigorous and systematic procedures. 

Internal reliability is very important. Indeed, the level of consensus between different observers of the same reality increases the credibility that the meaning structures discovered in a given environment deserve, as well as the confidence that the level of congruence of the phenomena under study is strong and solid.

What measures are best to address potential biases in the selection of your data sources?


What is a Data Source

In short, data source refers to the physical or digital location where data can be stored as a data table, data object, or another storage format. It’s also where someone can access data for further use — analysis, processing, visualization, etc.

You often deal with data sources when you need to perform any transformation on your data. Let's assume you have an eCommerce website on Shopify and you want to analyze your sales to understand how to enhance your store's performance. You decide to use Tableau for data processing. As it is a standalone tool, you must somehow fetch the data you need from Shopify. Thus, Shopify acts as a data source for your further data manipulations.
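As a rough sketch of the same idea in code, the snippet below reads order data from two common kinds of data source: a file export and a REST endpoint. The file name and URL are placeholders, not real Shopify or Tableau interfaces.

```python
import pandas as pd
import requests

# A file acting as a data source: a CSV export of orders (placeholder name).
orders_from_file = pd.read_csv("orders_export.csv")

# An API acting as a data source: a hypothetical endpoint returning JSON records.
response = requests.get("https://example.com/api/orders", timeout=30)
response.raise_for_status()
orders_from_api = pd.DataFrame(response.json())

# Once fetched, the data from both sources can be combined and handed to
# whatever analysis or visualization tool comes next in the pipeline.
all_orders = pd.concat([orders_from_file, orders_from_api], ignore_index=True)
print(all_orders.head())
```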

Bias is the difference between what is actually being measured and what one believes is being measured (Casal & Mateu, 2003). Unlike random error, systematic error is not compensated for by increasing the sample size (Department of Statistics, Universidad Carlos III de Madrid). However, although its importance is vital in the development of an investigation, it is worth mentioning that no study is exempt from biases, and that the essential thing is to know them in order to avoid, minimize or correct them (Beaglehole et al., 2008).


Bias

The risk of bias is intrinsically related to clinical research, where it is particularly frequent, since such research works with variables that involve individual and population dimensions, which are also difficult to control. However, biases also occur in the basic sciences, a context in which experimental settings present conditions under which biases take on peculiar characteristics and are less complex to minimize, since a large part of the variables can be controlled.

From a statistical perspective, when trying to measure a variable, it must be considered that the value obtained as a result of the measurement (XM) is made up of two parts: the true value (XV) and the measurement error (XE), so that XM = XV + XE. The measurement error is, in turn, composed of two parts: one random and the other systematic, or bias, which can be a measurement, selection or confusion bias (Dawson-Saunders et al., 1994).

This explanation allows us to understand the fundamental characteristics of any measurement: accuracy (measurements close to the true value [not biased]); and precision (repeated measurements of a phenomenon with similar values) (Manterola, 2002).
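A small simulation of that decomposition, with invented numbers, illustrates why increasing the sample size averages out random error but leaves systematic error (bias) untouched:

```python
import random

random.seed(0)
TRUE_VALUE = 120.0  # XV: a hypothetical true value being measured

def measure(n, systematic_error=0.0, random_sd=2.0):
    """Simulate n measurements XM = XV + XE, where XE has a random
    component and an optional systematic component (bias)."""
    return [TRUE_VALUE + systematic_error + random.gauss(0, random_sd)
            for _ in range(n)]

mean = lambda xs: sum(xs) / len(xs)

unbiased = measure(10_000)                      # random error only
biased = measure(10_000, systematic_error=5.0)  # random + systematic error

print(round(mean(unbiased), 2))  # close to 120: accurate, because random errors cancel out
print(round(mean(biased), 2))    # close to 125: biased, no matter how large n gets
```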
The objective of this article is to describe the concepts that allow us to understand the importance of biases, the most frequent ones in clinical research, their association with the different types of research designs and the strategies that allow them to be minimized and controlled.

POSSIBILITIES OF COMMITTING BIAS
A simple way to understand the different possibilities of committing bias during research is to think about the three axes that dominate research: what will be observed or measured, that is, the variable under study; the one who will observe or measure, that is, the observer; and with what will be observed or measured, that is, the measuring instrument (Tables II and III) (Beaglehole et al.).

1. From the variable (s) under study.

There are a series of possibilities of bias that are associated with the variable under study, either at the time of its observation, the measurement of its magnitude and its subsequent classification (Manterola).

a) Periodicity: Corresponds to the variability in the observation; that is, what is observed can follow an abnormal pattern over time, either because it is distributed uniformly over time or because it is concentrated in periods. Knowledge of this characteristic is essential in biological events that present known cycles, such as the circadian rhythm, electroencephalographic waves, etc.

b) Observation conditions: There are events that require special conditions for their occurrence to be possible, such as environmental humidity and temperature, respiratory and heart rates. These are non-controllable situations that, if not adequately considered, can generate bias; context more typical of basic sciences.

c) Nature of the measurement: Sometimes there may be difficulty in measuring the magnitude or value of a variable, qualitative or quantitative. This situation may occur because the magnitude of the values ​​is small (hormonal determinations), or due to the nature of the phenomenon under study (quality of life).

d) Errors in the classification of certain events: These may occur as a result of modifications in the nomenclature used, a fact that must be noted by the researcher. For example, neoplasm classification codes, the operational definition of obesity, etc.

2. From the observer

The ability to observe an event of interest (EI) varies from one subject to another. What is more, when faced with the same stimulus, two individuals can have different perceptions. Therefore, homogenizing the observation, guaranteeing adequate conditions for its occurrence and an adequate observation methodology, helps to minimize measurement errors.

This is how we know that the error is inherent to the observer, independent of the measuring instrument used. This is why in the different clinical research models, strict conditions are required to homogenize the measurements made by different observers; using clear operational definitions or verifying compliance with these requirements among the subjects incorporated into the study.

3. From the measurement instrument(s)

The measurement of biomedical phenomena using more than just the senses entails the participation of measurement instruments, which in turn may have technical limitations that prevent them from measuring exactly what is desired.

The limitations of measurement instruments apply both to "hard" devices and technology and to population exploration instruments such as surveys, questionnaires, scales and others. Regarding the latter, it is important to consider that verification of their technical attributes is usually left aside, even though, independently of any other consideration, they are "measuring instruments", since they have been designed to measure the occurrence of an EI; therefore, they must be subject to the same considerations as any measuring instrument (Manterola).

These restrictions easily apply to diagnostic tests, in which there is always the probability of overdiagnosing subjects (false positives) or underdiagnosing them (false negatives), committing errors of a different nature in both cases.
Frequently, it is necessary to resort to the design of data collection instruments, whose purpose, like the application of diagnostic tests, is to separate the population according to the presence of some EI.

Thus, if an instrument lacks adequate sensitivity, it will yield a low identification rate of subjects with the EI (true positives). Conversely, screening instruments with low specificity will decrease the probability of finding subjects without the EI (true negatives).

For example, a questionnaire intended for a prevalence study of gastroesophageal reflux may include inappropriate items for detecting the problem in a certain group of subjects, altering its sensitivity. The same instrument, with an excessive number of items of little relevance to the problem, may lack adequate specificity to measure the EI.
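As a small worked example with hypothetical counts (not data from the studies cited), sensitivity and specificity follow directly from the four cells of a classification table:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 classification table.
    tp: subjects with the EI correctly detected; fn: with the EI but missed;
    tn: without the EI correctly ruled out; fp: without the EI but flagged."""
    sensitivity = tp / (tp + fn)  # share of those with the EI who are detected
    specificity = tn / (tn + fp)  # share of those without the EI who are ruled out
    return sensitivity, specificity

# Hypothetical counts for a reflux questionnaire applied to 1,000 subjects.
sens, spec = sensitivity_specificity(tp=170, fp=120, fn=30, tn=680)
print(round(sens, 2), round(spec, 2))  # 0.85 0.85
```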

Table III. Most common types of bias in observational studies (Manterola & Otzen, 2015). Probability by study design:

  • Selection bias: cohorts low; case-control high; cross-sectional medium; ecological not applicable.
  • Recall bias: cohorts low; case-control high; cross-sectional high; ecological not applicable.
  • Confusion bias: cohorts low; case-control medium; cross-sectional medium; ecological high.
  • Follow-up losses: cohorts high; case-control low; cross-sectional not applicable; ecological not applicable.
  • Time required: cohorts high; case-control medium; cross-sectional medium; ecological low.
  • Cost: cohorts high; case-control medium; cross-sectional medium; ecological low.

Another way of classifying biases relates to the frequency with which they occur and the stage of the study in which they originate. It is known that, in clinical research, the most frequent biases affecting the validity of a study can be classified into three categories: selection biases (generated during the selection or follow-up of the study population), information biases (originating during the measurement processes in the study population) and confusion biases (which occur due to the impossibility of comparing the study groups).

1. Selection biases

This type of bias is particularly common in case-control studies (events that occurred in the past can influence the probability of being selected for the study); it occurs when there is a systematic error in the procedures used to select the subjects of the study (Restrepo Sarmiento & Gómez-Restrepo, 2004). Therefore, it leads to an estimate of the effect different from that obtainable for the target population.

It is due to systematic differences between the characteristics of the subjects selected for the study and those of the individuals who were not selected. For example: hospital cases versus those excluded from them, either because the subject dies before arriving at the hospital due to the acute or more serious nature of their condition, or for not being sick enough to require admission to the hospital under study, or due to the costs of admission, or the distance of the healthcare center from the home of the subject who is excluded from the study, etc.

They can occur in any type of study design; however, they occur most frequently in retrospective case series, case-control, cross-sectional and survey studies. This type of bias prevents extrapolation of conclusions in studies carried out with volunteers drawn from a population without the EI. An example of this situation is the so-called Berkson bias, also called Berkson's fallacy or paradox, or admission or diagnostic bias, which is defined as the set of selective factors that lead to systematic differences that can be generated in a case-control study with hospital cases.

It occurs in those situations in which the combination of an exposure and the EI under study increases the risk of admission to a hospital, which leads to a systematically higher exposure rate among hospital cases compared to controls (for example, a negative association between cancer and pulmonary tuberculosis, in which tuberculosis acted as a protective factor for the development of cancer; this was explained by the low frequency of tuberculosis among those hospitalized for cancer, a fact that does not mean that among these subjects the frequency of the disease is lower).

Another subtype of selection bias is the so-called Neymann bias (prevalence or incidence bias), which occurs when the condition under study determines premature loss, due to death, of the subjects affected by it. For example, in a group of 1,000 subjects with high blood pressure (a risk factor for myocardial infarction) and 1,000 non-hypertensive subjects followed for 10 years, an intense association is observed between arterial hypertension and myocardial infarction. However, it may happen that no association is obtained because subjects who die from myocardial infarction during follow-up are not incorporated into the analysis.
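A toy calculation makes the mechanism visible; the counts below are invented purely for illustration and assume a simple risk ratio as the measure of association:

```python
N = 1000  # subjects per group, as in the example above

# Hypothetical 10-year counts of myocardial infarction (MI) in each group.
mi_hypertensive, mi_normotensive = 200, 50
# Hypothetical fatal MIs that never enter the analysis (Neymann bias).
fatal_hypertensive, fatal_normotensive = 120, 10

def risk_ratio(events_exposed, events_unexposed, n=N):
    return (events_exposed / n) / (events_unexposed / n)

print(risk_ratio(mi_hypertensive, mi_normotensive))      # 4.0 with complete data
print(risk_ratio(mi_hypertensive - fatal_hypertensive,
                 mi_normotensive - fatal_normotensive))   # 2.0 once fatal cases are lost
```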

Another subtype of selection bias is the so-called non-response bias (self-selection or volunteer effect), which occurs when the degree of motivation of a subject who voluntarily participates in research can vary significantly in relation to other subjects; either over or under reporting.

Another that should be mentioned is the membership (or belonging) bias, which occurs when among the subjects under study there are subgroups of individuals who share a particular attribute, related positively or negatively with the variable under study; For example, the profile of surgeons’ habits and lifestyles may differ significantly from that of the general population, such that incorporating a large number of this type of subjects in a study may determine findings conditioned by this factor.

Another is selection-procedure bias, which occurs in some clinical trials (CTs) in which the process of random assignment to the study groups is not respected (Manterola & Otzen, 2015). Another type of selection bias is loss-to-follow-up bias, which can occur especially in cohort studies when subjects from one of the study cohorts are totally or partially lost (≥ 20%) and the pre-established follow-up cannot be completed, thus generating a relevant alteration in the results (Lazcano-Ponce et al., 2000; Manterola et al., 2013).


2.  Measurement bias

This type of bias occurs when there is a defect in measuring the exposure or the outcome that generates different information between the study groups being compared (precision). It is therefore due to errors made in obtaining the required information once the eligible subjects are part of the study sample (classification of subjects with and without the EI, or of exposed and non-exposed).

In practice, it can present itself as the incorrect classification of subjects, variables or attributes within a category different from the one to which they should have been assigned. The probabilities of misclassification can be the same in all groups under study, which is called "non-differential misclassification".

How to best handle data storage and archiving after the project is finished?


What is Data Collection?

Data collection is the procedure of collecting, measuring, and analyzing accurate insights for research using standard validated techniques.

To collect data, we must first identify what information we need and how we will collect it. We can also evaluate a hypothesis based on collected data. In most cases, data collection is the primary and most important step for research. The approach to data collection is different for different fields of study, depending on the required information.

Research Data Management (RDM) is present in all phases of research and encompasses the collection, documentation, storage and preservation of data used or generated during a research project. Data management helps researchers organize, locate, preserve and reuse data.

Additionally, data management allows:

  • Save time  and make efficient use of available resources : You will be able to find, understand and use data whenever you need.
  • Facilitate the  reuse of the data  you have generated or collected: Correct management and documentation of data throughout its life cycle will allow it to remain accurate, complete, authentic and reliable. These attributes will allow them to be understood and used by other people.
  • Comply with the requirements of funding agencies : More and more agencies require the presentation of data management plans and/or the deposit of data in repositories as requirements for research funding.
  • Protect and preserve data : By managing and depositing data in appropriate repositories, you can safely safeguard it over time, protecting your investment of time and resources and allowing it to serve new research and discoveries in the future.

Research data  is  “all that material that serves to certify the results of the research that is carried out, that has been recorded during it and that has been recognized by the scientific community” (Torres-Salinas; Robinson-García; Cabezas-Clavijo, 2012), that is, it is  any information  collected, used or generated in experimentation, observation, measurement, simulation, calculation, analysis, interpretation, study or any other inquiry process  that supports and justifies the scientific contributions  that are disseminated in research publications.

They come  in any format and support,  for example:

  • Numerical files,  spreadsheets, tables, etc.
  • Text documents  in different versions
  • Images,  graphics, audio files, video, etc.
  • Software code  or records, databases, etc.
  • Geospatial data , georeferenced information

Joint Statement on Research Data from STM, DataCite and Crossref

In 2012, DataCite and STM drafted an initial joint statement on linking and citing research data. 

The signatories of this statement recommend the following as best practices in research data sharing:

  1. When publishing their results, researchers deposit the related research data and results in a trusted data repository that assigns persistent identifiers (DOIs when available). Researchers link to research data using persistent identifiers.
  2. When using research data created by others, researchers provide attribution by citing the data sets in the references section using persistent identifiers.
  3. Data repositories facilitate the sharing of research results in a FAIR manner, including support for metadata quality and completeness.
  4. Editors establish appropriate data policies for journals, outlining how data will be shared along with the published article.
  5. The editors establish instructions for authors to include Data Citations with persistent identifiers in the references section of articles.
  6. Publishers include Data Citations and links to data in Data Availability Statements with persistent identifiers (DOIs when available) in the article metadata recorded in Crossref.
  7. In addition to Data Citations, Data Availability Statements (human and machine readable) are included in published articles where applicable.
  8. Repositories and publishers connect articles and data sets through persistent identifier connections in metadata and reference lists.
  9. Funders and research organizations provide researchers with guidance on open science practices, track compliance with open science policies where possible, and promote and incentivize researchers to openly share, cite, and link research data.
  10. Funders, policy-making institutions, publishers, and research organizations collaborate to align FAIR research data policies and guidelines.
  11. All stakeholders collaborate to develop tools, processes and incentives throughout the research cycle to facilitate the sharing of high-quality research data, making all steps in the process clear, easy and efficient for researchers through provision of support and guidance.
  12. Stakeholders responsible for research evaluation factor data sharing and data citation into their reward and recognition system structures.

research

The first phase of an investigation requires  designing and planning  your project. To do this, you must:

  • Know the  requirements and programs  of the financing agencies
  • Search  research data
  • Prepare a  Data Management Plan .

Other prior considerations:

  •     If your research involves working with humans, informed consent must be obtained.
  •     If you are involved in a collaborative research project with other academic institutions, industry partners or citizen science partners, you will need to ensure that your partners agree to the data sharing.
  •     Think about whether you are going to work with confidential personal or commercial data.
  •     Think about what systems or tools you will use to make data accessible and what people will need access to it.

During the project…

This is the phase of the project where the researcher  organizes, documents, processes and  stores  the data.

Is required :

  • Update the Data Management Plan
  • Organize and document data
  • Process the data
  • Store data for security and preservation

The  description of data  must provide a context for its interpretation and use, since the data itself lacks this information, unlike scientific publications. It is about being able to understand and reuse them .

The following information should be included (a minimal sketch in code follows this list):

  • The context: history of the project, objectives and hypotheses.
  • Origin of the data: if the data is generated within the project or if it is collected (in this case, indicate the source from which it was extracted).
  • Collection methods, instruments used.
  • Typology and format of data (observational, experimental, computational data, etc.)
  • Description standards: what metadata standard to use.
  • Structure of data files and relationships between files.
  • Data validation, verification, cleaning and procedures carried out to ensure its quality.
  • Changes made to the data over time since its original creation and identification of the different versions.
  • Information about access, conditions of use or confidentiality.
  • Names, labels and description of variables and values.


STRUCTURE OF A DATASET

 The data must be clean and correctly structured and ordered:

A data set is structured if:

  •     Each variable forms a column
  •     Each observation forms a row
  •     Each cell is a simple measurement

Some recommendations :

  •    Structure the data in TIDY (vertical) format, i.e. each observation is a row, rather than in non-TIDY (horizontal) format.
  •    Columns are used for variables, and their names can be up to 8 characters long, without spaces or special signs.
  •    Avoid text values to encode variables; it is better to encode them with numbers.
  •    In each cell, a single value.
  •    If a value is not available, provide the missing-value codes.
  •    Provide data tables that collect all the data encodings and denominations used.
  •    Use a data dictionary or a separate list of the short variable names and their full meanings.

DATA SORTING

Ordered data  or  “TIDY DATA” are those obtained from a process called “DATA TIDYING” or data ordering. It is one of the important cleaning processes during big data processing.

Ordered data sets have a structure that makes work easier; they are easy to manipulate, model and visualize. "'Tidy' data sets are arranged in such a way that each variable is a column and each observation (or case) is a row" (Wikipedia).
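A brief sketch of this reordering, assuming pandas only as a convenient illustration and a made-up "wide" table of store sales:

```python
import pandas as pd

# Hypothetical non-TIDY (horizontal) table: one column per year.
wide = pd.DataFrame({
    "store": ["A", "B"],
    "sales_2022": [100, 150],
    "sales_2023": [120, 170],
})

# Reshape to TIDY (vertical) format: each variable is a column,
# each observation (store-year pair) is a row, each cell a single value.
tidy = wide.melt(id_vars="store", var_name="year", value_name="sales")
tidy["year"] = tidy["year"].str.removeprefix("sales_").astype(int)

print(tidy)  # four rows: (A, 2022, 100), (B, 2022, 150), (A, 2023, 120), (B, 2023, 170)
```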

There may be  exceptions  to open dissemination, based on reasons of confidentiality, privacy, security, industrial exploitation, etc. (H2020, Work Programme, Annexes, L Conditions related to open access to research data).

There are some  reasons why certain types of data cannot and/or should not be shared , either in whole or in part, for example:

  • When the data constitutes or contains sensitive information . There may be national and even institutional regulations on data protection that will need to be taken into account. In these cases, precautions must be taken to anonymize the data and, in this way, make its access and reuse possible without any errors in the ethical use of the information.

  • When the data is not the property of those who collected it or when it is shared by more than one party, be they people or institutions . In these cases, you must have the necessary permissions from the owners to share and/or reuse the data.

  • When the data has a financial value associated with its intellectual property , which makes it unwise to share the data early. Before sharing them, you must verify whether these types of limits exist and, according to each case, determine the time that must pass before these restrictions cease to apply.  

What best considerations have you made for accessibility in your data collection?


What are Data Collection Methods?

Data collection methods are techniques and procedures used to gather information for research purposes. These methods can range from simple self-reported surveys to more complex experiments and can involve either quantitative or qualitative approaches to data gathering.

Some common data collection methods include surveys, interviews, observations, focus groups, experiments, and secondary data analysis. The data collected through these methods can then be analyzed and used to support or refute research hypotheses and draw conclusions about the study’s subject matter.

Data collection methods play a crucial role in the research process, as they determine the quality and accuracy of the data collected. Here are some reasons why data collection methods are important:

  • Determines the quality and accuracy of collected data.
  • Ensures that the data is relevant, valid, and reliable.
  • Helps reduce bias and increase the representativeness of the sample.
  • Essential for making informed decisions and accurate conclusions.
  • Facilitates achievement of research objectives by providing accurate data.
  • Supports the validity and reliability of research findings.

To become familiar with the concept of universal accessibility, it is important to mention its historical development. In 1948, the United Nations (UN) promulgated the Universal Declaration of Human Rights, which set out the principles of equal rights and opportunities for all citizens, but it was not until 1963, at the First International Congress for the Suppression of Architectural Barriers held in Switzerland, that the main objective became proposing new measures for the design of buildings, eliminating barriers that obstruct access for people with disabilities.

In 1982, Spain approved the Law on Social Integration of the Disabled (lismi), in that same year, the UN promoted the development of the World Program of Action towards the Disabled; In 2003, the Law on Equality, Non-Discrimination and Universal Accessibility (LIONDAO) incorporated the concept of universal accessibility in which it promoted equal opportunities benefiting all people; In 2006, the UN again held a convention on the rights of people with disabilities and in 2013, the General Law on the Rights of People with Disabilities and their Social Inclusion established that all services, environments, goods and products be accessible.

The study area of ​​the Technological Development Corporation (CDT) (2018) defines universal accessibility as the condition that spaces usable by all people must meet in safe and comfortable conditions with the aim of moving autonomously and naturally. It is a space that must have equal opportunities and social inclusion for people with different abilities, free of obstacles and barriers (urban, architectural and mobility) that prevent correct movement.

Accessibility seeks the inclusion of all citizens in public and private spaces, it must be “integral and guarantee not only mere accessibility, but also circulation, use, orientation, security and functionality” (Olivera, 2006: 332). Pedestrian mobility is one of the main requirements in the physical accessibility of cities (Ipiña García, 2019: 159).

Universal accessibility is directly related to the quality of life of the inhabitants of a city, it must be understood that people have the right to enjoy all the services that the city can provide, being the responsibility of the public and private sectors to modify the environment to that can be used under conditions of equality, taking into account social, economic and geographical needs.

One of the main problems in terms of universal accessibility is that cities were not designed for the use of all people, but it is a fact that currently regulations, laws, plans, programs, etc. have been implemented that They have gradually transformed some sectors of our cities, improving the quality of life of users. To achieve these changes, it is necessary to have knowledge, empathy and awareness in order to generate simple and intuitive spaces that have equal opportunities.

UNIVERSAL ACCESSIBILITY AS AN IMPORTANT PART OF PUBLIC SPACE

There is an intimate relationship between universal accessibility and public space, due to the permanent dynamics of the inhabitants in the city; so the latter would not exist without public space and it would perish without citizens. “As the city is a historical fact, the public space is also historical; It is part of the cultural manifestations of a civilization, which is always limited in time and space” (Gamboa Samper, 2003: 13).

Since the 19th century, Camillo Sitte, one of the precursors of the German school, considered that the city should be designed for pedestrians; since then, people have thought about creating functional and flexible spaces that can be used by everyone. “Ultimately, the success of a city must be measured in its ability to guarantee access to all citizens to the benefits that have made cities one of the most wonderful human inventions” (D. Davila, 2012: 60). Consequently, public space is a collective site for public use that must guarantee well-being for all people, this includes responding to the needs of citizens, thus promoting universal accessibility.

Squares, parks and gardens are part of the public space, but it is also made up of streets that allow people to move to reach their destination. At a smaller mobility scale, a pedestrian can be defined as any person who travels on foot through public or private space (Municipal Government of Cusco, n.d.). Pedestrian movement or mobility must meet certain requirements so that it takes place under quality conditions: accessibility, safety, comfort and attractiveness (Alfonzo, 2005; Pozueta et al., 2009); when these are satisfied, the pedestrian environment will have the necessary quality for the pedestrian to move, which will have a decisive impact on the pedestrian service levels of the urban environment (Olszewski and Wibowo, 2005) (Larios Gómez, 2017: 6).

In terms of universal accessibility, it is important to provide at least one accessible pedestrian route in spaces with greater pedestrian flow. In the analysis of an urban space, priority must be given to the implementation of accessible routes that link main avenues, secondary streets, stops and access points for public transport, and vehicle parking (Boudeguer Simonetti et al., 2010: 39); in this way the spaces may be used by all people under equal conditions.

Public space is characterized by being easily accessible, allowing interaction between inhabitants and creating social ties that generate a link with the space; this leads citizens to experience their environment, identifying with and appropriating the elements that make up the public space. One of the problems we currently have is that society has gradually stopped going to these spaces due to insecurity, inaccessibility, pollution, and lack of maintenance of streets and gardens, generating their abandonment and deterioration.

accessibility

When the public space meets the characteristics of security, universal accessibility, mobility, identity, inclusion and permanence, it is said to be a quality space that allows the city to be experienced, enjoying the pedestrian routes, observing the architectural elements that are part of it, such as the facades of the buildings, the planters, benches and lamps of the urban furniture.

Parks and gardens are fundamental in cities, not only for providing them with green areas but for preserving a part of their history. In this sense, Segovia and Jordán (2005) affirm that the quality of public space can be evaluated above all by the intensity and quality of the social relationships it facilitates, by its capacity to welcome and mix different groups and behaviors, and by its capacity to stimulate symbolic identification, cultural expression and integration.

For public space to play its role as a system that allows interaction between people and the enjoyment of recreational places, it is essential that citizens can enter it without physical barriers, it being accessible to all inhabitants: "An environment is needed with a level of quality that allows environmental sustainability and, of course, services that articulate the appropriate functioning of urban public spaces with the population" (Rueda et al., 2012) (Alvarado Azpeitia et al., 2017: 131). This consists of generating public roads on which cars, bicycles and public transport can also travel, always giving importance and priority to the pedestrian.

A space accessible to everyone

As noted in previous paragraphs, in the 19th century people were already thinking about creating cities designed for pedestrians, but it was not until 2003 that the term "universal accessibility" was adopted, which aims to include all people regardless of their age and any physical, visual, mental, hearing or multiple disability, creating or adapting spaces that allow autonomous use and movement and implementing Universal Design or Design for All, benefiting the greatest possible number of people.


Architect Wilson Castellanos Parra mentions that believing that universal accessibility responds exclusively to the needs of people with reduced mobility is a mistake; It is more than a ramp, it is understood as “the condition that environments, processes, goods, products and services, as well as objects or instruments, tools and devices, must meet; to be understandable and applicable to all people” (Castellanos Parra, 2016); In the virtual conference “Universal Accessibility in Colombian Architecture Curricula” he describes some criteria to identify accessibility conditions in environments, these are:

1. Wandering (refers to the spaces of approach, the spaces traveled).

2. Apprehension (meeting certain requirements when carrying out any activity, such as signage elements).

3. Location (auxiliary services).

4. Communication (interactive communication such as graphics, information panels, etc.).

Universal accessibility is linked to various topics such as: the chain of accessibility, mobility, design of complete streets, among others; that seek the movement of people in conditions of equality, quality and safety. 

The Secretariat of Agrarian, Territorial and Urban Development (sedatu), in collaboration with the Inter-American Development Bank (bid), produced the Street Manual: Road Design for Mexican Cities, where a pyramid classifying the hierarchy of mobility is shown in an illustrated manner.

Under this classification, all people can make their trips in inclusive, safe, sustainable and resilient conditions; Priority should be given to pedestrians and drivers of non-motorized vehicles to promote a more efficient and inclusive use of road space (sedatu; Inter-American Development Bank, 2019: 62).

How is consent and data collection from minors best addressed?


Data collection


Data is a collection of facts, figures, objects, symbols, and events gathered from different sources. Organizations collect data with various data collection methods to make better decisions. Without data, it would be difficult for organizations to make appropriate decisions, so data is collected from different audiences at various points in time.

For instance, an organization must collect data on product demand, customer preferences, and competitors before launching a new product. If data is not collected beforehand, the organization’s newly launched product may fail for many reasons, such as less demand and inability to meet customer needs. 

Although data is a valuable asset for every organization, it does not serve any purpose until analyzed or processed to get the desired results.


The right to the protection of personal data: origin, nature and scope of protection.

 Origins and legal autonomy

The approach to the study of any right with constitutional status requires, without a doubt, a reference to its origins, for which, on this occasion, the generational classification of human rights developed at a doctrinal level will be very useful.

Historically, the recognition of four generations of fundamental rights has prevailed: individual or first-generation rights; public freedoms or second-generation rights; social or third-generation rights; and rights linked to the emergence of new technologies and scientific development, classified as the fourth generation. Each has corresponded to ideological and social moments with their own characteristics and differentiating features.

In particular, the fourth generation is presented as a response to the phenomenon known as “liberties pollution” , a term coined by some authors to refer to the degradation of classic fundamental rights in the face of recent uses of new technology.

Indeed, the technological development that has occurred since the second half of the 20th century has shown the limitations and insufficiency of the right to privacy – a first-generation right – as the only mechanism to respond to the specific dangers involved in the automated processing of personal information, which is why, starting in the seventies, the dogmatic and jurisprudential construction of a new fundamental right began to take shape: the right to the protection of personal data.

From a theoretical point of view, the reformulation of the classic notion of the right to privacy, no longer as a right of exclusion, as it had initially been conceived, but rather as a power to control information relating to oneself, represented a clear break from the conceptualization that had been maintained until that moment.

protection

In the jurisprudential context, on the other hand, the legal shaping of this right, which came to be known as the right to informational self-determination, originates in a 1983 ruling of the German Federal Constitutional Court declaring unconstitutional a law that regulated the demographic census process at that time. Chilean jurisprudence, by contrast, was particularly late in configuring the right to the protection of personal data: its first approximation occurred in 1995, when the Constitutional Court linked it, precisely, to the protection of privacy.

It is true that the right to privacy constitutes an important, if not essential, antecedent in the formation of the right under study; this does not mean, however, that the two should be confused, an issue that in its day sparked countless debates. Some authors, for example, argued that the right to the protection of personal data was merely a manifestation of the particular characteristics that the right to privacy acquires in the computer age, thereby denying the autonomy that can be attributed to it today.

From our perspective, and as the Spanish Constitutional Court declared at the beginning of this century, two fundamental rights coexist in our legal system, closely linked to each other yet clearly differentiated: the right to privacy and the right to the protection of personal data. The first protects the confidentiality of information related to an individual, while the second guarantees the proper use of information related to a subject once it has been disclosed to a third party, since disclosed data does not thereby become public and, consequently, cannot circulate freely.

Thus, the legal power to control at all times the use and circulation of this information belongs entirely to its owner. In other words, the fundamental right to data protection is not a right to secrecy or confidentiality, but rather a power to govern the publicity of one's data. While the right to privacy is a power of exclusion, the right to the protection of personal data is established, instead, as one of disposition.

In accordance with the above, the latter seems to be the position finally adopted by the Chilean Constitution. In this regard, it is worth recalling that the Organisation for Economic Co-operation and Development (OECD) addressed our country's personal data protection regulations in 2015, noting that among its member states only Chile and Turkey had not yet perfected their legislation on the matter.

It is at this level that the reform of Article 19, number 4, of the constitutional text was framed, which since June 16, 2018 has assured all people “respect and protection of private life and the honor of the person and their family, and also the protection of their personal data”, adding that “the treatment and protection of these data will be carried out in the manner and under the conditions determined by law”.

As can be seen, the new wording of the Chilean fundamental norm now enshrines the right to the protection of personal data in an autonomous and differentiated manner, a trend that the fundamental charters of other countries in Europe and Latin America have followed for several years and which Chile has now joined.

Natural capacity as an essential element for the exercise of personality rights

The tendency of the Chilean legal system to give relevance to what is known as natural capacity, or maturity, as the essential substrate on which to base the capacity of children and adolescents to exercise their rights is especially marked in the field of personality rights, that is, in the field of extra-patrimonial legal acts. It is precisely in this context that the first voices were raised in favor of maintaining that, although the dichotomy between capacity for enjoyment and capacity for exercise could still have some relevance in the patrimonial sphere, it was unsustainable in the extra-patrimonial sphere of personality.

Denying the capacity to exercise personality rights when the subject, despite his or her chronological age, meets the intellectual and volitional conditions sufficient to exercise them on his or her own amounts to a plausible violation of the dignity and free development of the personality of the individual, recognized in Article 1 of our Constitution as superior values of the legal system (“People are born free and equal in dignity and rights”).

child or adolescent

Certainly, it has been debated whether the distinction between the capacity for enjoyment and the capacity for exercise applies in the field of personality rights, since the enjoyment or exercise of these rights is personal. It is therefore difficult to speak of authentic legal representation in this area; such representation is very nuanced, and is configured rather as assistance or action by parents or guardians in fulfilment of their duty to care for the child or adolescent, especially justified when it comes to avoiding harm.

Given the above, and in accordance with the principle of favor filii, the exercise of personality rights by their legitimate holders can only be limited when their will to activate them runs counter to interests that take precedence with a view to the full development of their personality, just as the will of their representatives can be limited when their intervention is contrary to the interests of the child or adolescent.

It is precisely in this context that the idea of adopting the criterion of sufficient maturity, self-government or natural capacity emerges strongly as a guideline for delimiting the autonomous exercise of personality rights, thereby avoiding a situation in which a person who has not yet reached the age of majority is merely the holder of a right without being able to exercise it. In this way, the general rule becomes that a girl, boy or adolescent who is sufficiently mature can freely dispose of his or her rights.

That said, it should be noted that in this new scenario the question comes down to specifying what is meant by sufficient maturity, since we are faced with an indeterminate legal concept for which there is no unified legal definition. Each boy and girl is different, and it is therefore very difficult to establish when they have, by virtue of their intellectual development, the capacity necessary to be master of their own person.