A significant problem in clinical research is the possibility of subjective biases that distort the reliability of the data being collected. Although the field of evidence-based medicine aims to meet rigorous quality standards, studies are still not immune to bias errors. For example, Pannucci and Wilkins (2010) reported poor methodological preparation in the academic papers that form a diverse body of sources on plastic surgery. This trend may lead to an overall decrease in the quality of evidence and, as a result, an increase in treatment risks. To prevent such a scenario, the authors provide several review-based recommendations for avoiding bias.
The first step in implementing this strategy is to identify the sources of the problem carefully. Without a thorough understanding of the mechanisms that produce bias, it is impossible to find a way to eliminate it. When conducting research, it is critical to identify risks and define outcomes in advance, as well as to plan the sample: patient participants must meet the inclusion criteria, and their allocation to groups must follow strict, predefined rules. A standardized approach must be maintained during interactions with participants, and reliance on historical controls should be avoided.
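The screening-and-allocation step described above can be illustrated with a minimal sketch. The inclusion criteria (age range, diagnosis label) and field names here are hypothetical choices for the example, not criteria taken from the cited review:

```python
import random

def meets_inclusion_criteria(patient):
    # Hypothetical criteria for illustration only: adult age range
    # and a matching diagnosis label.
    return 18 <= patient["age"] <= 75 and patient["diagnosis"] == "target_condition"

def randomize(patients, seed=42):
    """Screen patients against the inclusion criteria, then randomly
    allocate the eligible ones to two arms."""
    eligible = [p for p in patients if meets_inclusion_criteria(p)]
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    rng.shuffle(eligible)
    mid = len(eligible) // 2
    return eligible[:mid], eligible[mid:]

patients = [
    {"id": 1, "age": 30, "diagnosis": "target_condition"},
    {"id": 2, "age": 80, "diagnosis": "target_condition"},  # excluded: age
    {"id": 3, "age": 45, "diagnosis": "other"},             # excluded: diagnosis
    {"id": 4, "age": 52, "diagnosis": "target_condition"},
]
treatment, control = randomize(patients)
```

Because allocation is random rather than at the investigator's discretion, selection bias at the assignment stage is reduced, which is the point of planning the sample in advance.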
In addition, subjective sources of information, including patients’ opinions about the nature of their disease, should not be treated as reliable material: doing so can introduce systematic bias. It is also not uncommon for researchers to ignore undesirable findings when reporting data, which only increases the likelihood of bias. Stratified analysis can be useful in cohort studies, where confounding increases the chance of inferring incorrect relationships between variables. Finally, Pannucci and Wilkins emphasized that it is the researcher’s task to balance the internal and external validity of a trial: participants should be randomized, investigators should remain unbiased, and the sample should represent the general population as closely as possible. Together, these strategies significantly reduce the likelihood of bias in academic research.
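The idea behind stratified analysis can be sketched as follows: outcome risks are compared between exposure groups separately within each level of a suspected confounder, so the comparison is not distorted by the mix of strata. The records, field names, and numbers below are toy values invented for the example, not data from the cited review:

```python
# Toy cohort records: an exposure, a binary outcome, and a
# confounder stratum (e.g. an age group). Illustrative only.
records = [
    {"exposed": True,  "outcome": True,  "stratum": "young"},
    {"exposed": True,  "outcome": False, "stratum": "young"},
    {"exposed": False, "outcome": False, "stratum": "young"},
    {"exposed": False, "outcome": False, "stratum": "young"},
    {"exposed": True,  "outcome": True,  "stratum": "old"},
    {"exposed": True,  "outcome": True,  "stratum": "old"},
    {"exposed": False, "outcome": True,  "stratum": "old"},
    {"exposed": False, "outcome": False, "stratum": "old"},
]

def risk(rows):
    """Proportion of rows with a positive outcome."""
    return sum(r["outcome"] for r in rows) / len(rows) if rows else 0.0

def stratified_risks(records):
    """Outcome risk for exposed vs. unexposed participants,
    computed separately within each confounder stratum."""
    out = {}
    for s in {r["stratum"] for r in records}:
        rows = [r for r in records if r["stratum"] == s]
        out[s] = {
            "exposed": risk([r for r in rows if r["exposed"]]),
            "unexposed": risk([r for r in rows if not r["exposed"]]),
        }
    return out
```

Comparing the within-stratum risks, rather than the pooled ones, is what prevents a confounder that differs between exposure groups from masquerading as an exposure effect.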
The use of online platforms for questionnaire research has increased in recent years, whether for personal questionnaires or materials created as part of large-scale academic studies. Although data collection has become more accessible, the quality of the results can be significantly diminished by inept handling of online biases, as Ball (2019) reported. For example, a critical source of such bias is the exclusion of potential participants who lack Internet access; to avoid this trap, the study design must be thought through in advance to create an equal survey environment for respondents. In addition, online surveys increase the likelihood of sampling bias if the survey is not controlled by the authors.
Accordingly, the researchers must ensure that each participant is strictly documented and fits the inclusion criteria. The same applies to the complete inadmissibility of repeated completion of the survey form by the same individual. At the same time, Ball pointed out the need for information-gathering methods that focus strictly on the topic under study and do not include extraneous, unhelpful questions.
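The rule that no individual may complete the survey form twice can be enforced with a simple deduplication pass over the collected responses. This is a minimal sketch; the `respondent_id` field is a hypothetical identifier (in practice it might be a login, token, or verified email), not a name from the cited article:

```python
def deduplicate_responses(responses):
    """Keep only the first submission per respondent identifier,
    discarding any repeated completions of the survey form."""
    seen = set()
    kept = []
    for r in responses:
        if r["respondent_id"] not in seen:
            seen.add(r["respondent_id"])
            kept.append(r)
    return kept

responses = [
    {"respondent_id": "a", "answer": 1},
    {"respondent_id": "b", "answer": 2},
    {"respondent_id": "a", "answer": 3},  # repeat submission, dropped
]
kept = deduplicate_responses(responses)
```

Keeping the first submission is one defensible policy among several; the essential point is that the policy is decided and documented before the data are analyzed.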
In the context of using the Internet for personal purposes rather than research, each individual should be concerned about the quality of the information collected. As Ball has shown, any information found must be critically analyzed before it can be considered valuable. Just as respondents are selected carefully for online participation, material to be read or cited for personal use should be chosen through equally thoughtful evaluation. An author’s background, number of publications, and general subject matter can be used to judge competence. Taken together, these practices minimize online bias, whether in research or in personal browsing.
Ball, H. L. (2019). Conducting online surveys. Journal of Human Lactation, 35(3), 413-417.
Pannucci, C. J., & Wilkins, E. G. (2010). Identifying and avoiding bias in research. Plastic and Reconstructive Surgery, 126(2), 619.