WHAT IS ALREADY KNOWN ON THIS TOPIC
- Over the past decade, the ideals of research integrity and research fairness have gained considerable momentum in global health. While both have been the subject of intense academic debate, there is little empirical data on actual practices related to integrity and fairness specific to global health.
WHAT THIS STUDY ADDS
- Findings suggest that global health researchers mostly adhere to research integrity and research fairness principles. Some behaviours are more frequently reported (transparent reporting of studies, seeking local ethical approval) than others (engagement with affected populations, engagement with local decision-makers, adherence to Open Science), with little variation between responses from the Global North and the Global South.
- We identified several structural, institutional and individual factors associated with these patterns, such as an inflexible donor landscape, research institutions’ investments in relationship building, guidelines and mentoring, as well as power differentials and competition between researchers.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
- This study shows there are more synergies than trade-offs between research integrity and fairness as they share similar determinants and the same goal of increasing research quality and maximising the societal benefits of research. There is ample scope to make such synergies explicit and to align both agendas in global health.
Introduction
Global health decisions are often made remotely, without consideration for the people they affect.1 Ideally, the conduct of global health research should straddle the twin ideals of research integrity and research fairness2 and should follow practices that guarantee the validity and trustworthiness of science.3 It should also aim to transcend the ‘distance’ between research teams and researched populations that currently characterises the study of global health problems. While the pursuit of integrity and fairness in global health can conflict ideologically,4 they share the same goal of increasing the societal benefits of research. Addressing breaches of research integrity and fairness requires an understanding of the magnitude of the problem and its determinants.
Over the past decade, research integrity and research fairness have gained considerable momentum globally. Research integrity emerged in the late 2000s as a response to the reproducibility crisis in science.5 It encompasses all professional codes promoting Responsible Research Practices (RRPs), with a strong traditional focus on individual researchers’ responsibilities,6 but increasingly recognising the role of research environments in shaping collaborations3 7 and reward systems.8 9 Research integrity is closely intertwined with Open Science, which promotes full transparency of research processes to maximise reproducibility.10 Research fairness,11 on the other hand, encompasses a broad range of initiatives from the mid-2010s to promote fair and equitable research partnerships12 13 and a holistic understanding of global challenges. Research fairness initiatives aim to increase ownership of research agendas and outputs in Low-Income and Middle-Income Countries (LMICs), thereby maximising the positive impact on local research systems and populations. These initiatives focus on power imbalances between actors in previously colonising and colonised nations and as such are broadly aligned with recent calls to ‘decolonise global health’.14 Research integrity and research fairness share the goal of increasing research quality and maximising the societal benefits of research, but they follow two different paths.4 While research integrity prioritises, rewards and reinforces scientific processes in the pursuit of global and generalisable knowledge, research fairness is geared towards local information needs and favours knowledge production that reflects the ‘lived experience of people themselves’.14
Research integrity has been the subject of many scholarly debates, and recent surveys have reported the prevalence of Questionable Research Practices (QRPs).15 16 QRPs are defined as ‘subtle trespasses’15 or ‘misbehaviours’17 that lie in the grey zone between fabrication, falsification and plagiarism on one side, and responsible research conduct on the other. The term includes practices such as not reporting flaws in study design or execution, selective citation to enhance one’s findings and so forth.15 It is sometimes argued that the ‘prevention paradox’ applies to research integrity—while QRPs may constitute a less severe offence compared with fabrication and falsification, their frequency makes them more damaging to science.18 This is backed up by data: estimates of data fabrication or falsification from a recent study in the Netherlands were just above 4%,15 while over 50% of researchers engaged frequently in at least one QRP. This is in line with a Nature study that revealed that ‘more than 50% of researchers are unable to reproduce their own work’.5 There are no data specifically for global health research; however, there is no reason to believe that it fares better than other types of research.19
Research fairness has also been the subject of academic scrutiny, with substantial evidence that the current global health research ecosystem is unfairly skewed towards institutions and scientists in High-Income Countries (HICs). A systematic review found that 28% of studies mentioned some kind of ‘discriminatory power imbalance’ between Global North and South research team members.1 Studies have shown that LMIC research agendas are led by donors in HICs,20 with research funding disproportionately allocated to HIC institutes.21 To our knowledge, there are no systematic analyses of global health research funding flows, but donor reports and tracking tools reveal that approximately 70% of funding is channelled through Global North institutions.20 22–26 Studies on the roles and perceptions of researchers in global health collaborations show that local researchers are often relegated to the role of ‘glorified field workers’27 with no influence on study design28 and are less likely to hold prominent author positions.29
While research integrity and fairness are both intensely researched, no studies to date have investigated research practices jointly influencing integrity and fairness in global health. Therefore, little is known about potential synergies and trade-offs. For example, are efforts to increase research integrity improving research fairness and vice versa? Or are strategies to increase one affecting researchers’ ability to improve on the other? Furthermore, we do not know whether there are fundamentally different ways of conducting research in different areas of the world, more specifically between those who are conducting research from ‘afar’ (eg, researchers based in the Global North conducting research in the Global South) versus those conducting research nearby (ie, Global South researchers conducting research in their own settings). Finally, the research integrity agenda has so far been dominated by Global North actors, which could potentially lead to different levels of adherence to research integrity principles across geographies.
Limited research on both research integrity and fairness in global health means there are few data to inform either debate beyond norms or guidelines. Yet data on desired and undesired practices, and on barriers and facilitators, can provide evidence to develop improvement strategies and baselines against which to measure progress. To provide this information, we conducted a mixed-methods study to estimate the frequency, and to explore the determinants, of practices associated with research integrity and research fairness.
Methods
This mixed-methods research study combined an online survey (quantitative component) with online in-depth key informant interviews (qualitative component). The quantitative component estimated the frequency of research practices by measuring self-reported adherence to norms and guidelines of research integrity and fairness. The qualitative component aimed to provide context and to explore determinants, that is, barriers and facilitators for such practices. Determinants could be described at individual (concerning the researchers themselves), institutional (regarding academic institutes) and structural (the wider research landscape) levels. The qualitative research tools were based on preliminary analyses of the quantitative component (exploratory approach). In line with previous research integrity surveys, we defined QRPs as ‘misbehaviours (that) lie somewhere on a continuum between scientific fraud, bias and simple carelessness’.17 Following this logic, we defined Unfair Research Practices (URPs) as behaviours that perpetuate power imbalances, marginalisation, exploitation and inequality in the production and dissemination of knowledge. As opposed to QRPs and URPs, RRPs and Fair Research Practices (FRPs) promote positive behaviours related to research integrity and fairness respectively. This study builds on a pre-existing pilot study.30
Data collection tools
The questionnaire for the online survey can be found in the repository. This self-administered survey was developed and emailed to researchers in English. The tool lists 20 QRPs/RRPs related to research integrity and 20 URPs/FRPs related to research fairness, including adaptations of questions developed by the Dutch National Survey on Research Integrity (NSRI)31 that are in line with the Bridging Research Integrity and Global Health Epidemiology (BRIDGE) criteria.4 Alignment with the NSRI was chosen to enable comparisons and because the NSRI questions underwent prior psychometric validation.4 15 For research fairness, we developed 20 URPs/FRPs derived from the BRIDGE criteria. Like the NSRI survey, we used statements in the first person, a 3-year recall period and a 7-point Likert scale to retain similar levels of content and construct validity. The Likert scale consisted of the following values: 1=never, 2=very rarely, 3=rarely, 4=occasionally, 5=frequently, 6=very frequently and 7=always. To avoid straight-lining, some statements were phrased as questionable/unfair practices and some as responsible/fair practices.
The in-depth interview schedule was based on the BRIDGE guidelines and included topics on participants’ experiences with QRPs and URPs and the determinants of these practices. This schedule covered all six phases of an epidemiological study (preparation, protocol development, data collection, data management, data analysis and reporting). The interviewers navigated the conversations according to interviewees’ preferences for focusing on given study phases and exploring contributing factors. Interviewers probed interviewees on statements where quantitative findings showed the strongest differences or the overall lowest or highest scores. We aimed for maximum variation of study phases across interviews. All interviews were conducted in English.
Study population and setting
The study population consisted of researchers currently involved in global health and working in institutes that are part of the following networks: tropED (Network for Education in International Health), INDEPTH Network, Asian Health and Demographic Surveillance System Network, Astra South Asia, African Institute of Mathematical Science, Consortium of Universities for Global Health, Humanitarian Health Ethics Network, Collaboration for Evidence Based Health Care in Africa, Trials of Excellence in Southern Africa, West African and Central African Network for TB, HIV/AIDS and Malaria, East African Consortium for Clinical Sciences, Bill & Melinda Gates Foundation Global Health Grantees, Southeast Asia One Health University Network, COVID-19 Clinical Research Coalition and the European and Developing Countries Trial Partnership. We used the algorithm in figure 1 to assess eligibility of institutions within the sample framework. ‘Relevant institutions’ included universities, non-profit non-government research/knowledge centres, government research institutes or funding institutions with a role in research development. ‘Relevant type of research’ included global health (international) or public health (national) research involving interactions with communities, patients, public or other local actors (ie, not laboratory or document based).
Institutes were categorised by geographical location—Global North and Global South—to enable stratified sampling and an equal distribution of institutes across geographies. In the absence of an objective measurement to classify countries as Global North or South, we referred to the World Bank 2021 country income classifications,32 which group countries into LMICs, upper-middle-income countries (UMICs) and HICs. More specifically, LMICs and UMICs were considered Global South, and HICs Global North. During quantitative data collection, participants were asked to self-identify as either Global North or Global South, with the additional options ‘prefer not to disclose’ and ‘other’. In-depth interviews revealed that the ‘other’ category includes researchers raised, trained or working in different countries across the North/South divide; these were reclassified as Global Neutral.
Sampling and sample size
For the online survey, participants were sampled using two-stage stratified cluster sampling. The sample size was calculated based on a Z-score of 1.96, a hypothesised prevalence of 50% for QRPs/URPs, a precision of 10% on each side of the prevalence estimate, a design effect of 3, a 50% response rate and an additional 10% institutional attrition rate. Overall, the estimated (target) sample size per stratum was rounded up to 900. In the first sampling stage, we randomly selected 30 institutes per stratum. In the second stage, we compiled lists of all researchers named on the publicly available websites of the selected institutes, randomly selected 30 researchers per institute from these lists, and invited them to participate in the online survey.
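The sample size calculation above can be sketched as follows. This is a minimal illustration under the stated parameters; the function name is my own, and the study rounded the final per-stratum target up to 900 to match the 30-institutes-by-30-researchers sampling design.

```python
import math

def target_sample_size(z=1.96, p=0.50, d=0.10,
                       design_effect=3, response_rate=0.50,
                       institutional_attrition=0.10):
    """Classic sample size for a proportion, inflated for clustering,
    non-response and institutional attrition (illustrative sketch)."""
    n = (z ** 2) * p * (1 - p) / d ** 2   # base size for p=50%, +/-10% precision
    n *= design_effect                    # design effect of 3
    n /= response_rate                    # expect only 50% to respond
    n /= (1 - institutional_attrition)    # 10% institutional attrition
    return math.ceil(n)

print(target_sample_size())  # 641
```

With these parameters the formula yields 641 invitees per stratum, which the study rounded up to 900 for sampling convenience.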
During the online survey, we captured participants’ willingness to participate in an in-depth interview. We invited three to four willing participants per stratum, making a total of nine interviewees; saturation was reached with this sample size. Although we initially planned to select Global North and Global South researchers equally, two participants identified as ‘Global Neutral’ during the interview.
Data analyses
The frequency survey had two main outcome variables: (1) the RRP score and (2) the FRP score. Since the questionnaire contained statements phrased either as questionable/unfair practices or as responsible/fair practices, we recoded responses to align them all on a positive scale, whereby a score of 1 for an RRP or FRP denoted never engaging in that practice and a score of 7 denoted always engaging in it. In other words, high RRP and FRP scores denoted desirable behaviours from an integrity and fairness standpoint.
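This recoding can be sketched as follows (an illustrative snippet; the function name is hypothetical):

```python
def recode_response(score, phrased_as_questionable):
    """Recode a 7-point Likert response (1=never ... 7=always) so that a
    high value always denotes desirable behaviour: statements phrased as
    questionable/unfair practices are reverse-coded."""
    return 8 - score if phrased_as_questionable else score
```

For example, a respondent who ‘always’ (7) engages in a questionable practice receives an RRP score of 1, while ‘always’ engaging in a responsible practice keeps its score of 7.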
We then grouped the RRP and FRP questions into behaviour domains (online supplemental table 1). RRP questions were grouped under (1) following a meticulous research process; (2) mentoring junior researchers; (3) ensuring adequate methods; (4) aiming for transparent reporting; (5) striving for reproducibility and (6) engaging with open science. FRP questions were grouped under (1) aiming for engagement of local decision-makers; (2) aiming for engagement of affected populations; (3) working in partnership with local researchers; (4) striving for fair agreements between study partners; (5) seeking local ethical approval and (6) ensuring respectful data collection.
We reported the median (and IQR) of Likert scale responses for each RRP and FRP. Scores for each behaviour domain were derived in two steps: a mean score per behaviour domain was first estimated for each participant, and medians and IQRs (and overall means) were then derived across participants for each domain. Statistical differences between Global North and Global South researchers for individual practices (RRPs and FRPs) were explored using the Wilcoxon rank-sum test, and differences across each of the 12 domains were explored with a Student’s t-test. We used linear regression to explore individual factors associated with RRPs and FRPs. We included the researcher’s age, gender, seniority (highest attained degree), discipline, place of origin (Global South researcher working for a Global North institution or vice versa), years of involvement in research and role as independent variables. We reported the results as adjusted coefficients, 95% CIs and p values.
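The two-step domain summary can be sketched as follows (an illustrative structure; it assumes each participant’s recoded item scores for one domain are collected in a list, and uses Python’s default ‘exclusive’ quantile method, which may differ from the study’s statistical software):

```python
from statistics import mean, quantiles

def domain_summary(domain_responses):
    """domain_responses: one list of recoded item scores per participant,
    all belonging to a single behaviour domain (hypothetical structure)."""
    # Step 1: mean score per participant for this domain
    per_participant = [mean(items) for items in domain_responses]
    # Step 2: median and IQR of those means across participants
    q1, med, q3 = quantiles(per_participant, n=4)
    return med, (q1, q3)

# Five participants, two items each
print(domain_summary([[7, 7], [5, 5], [6, 6], [4, 4], [3, 3]]))  # (5.0, (3.5, 6.5))
```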
For the qualitative analyses, we used the six phases of the BRIDGE guidelines to create codes (deductive approach) and later added new codes arising from the data (inductive approach). Relevant and similar codes were merged into categories and analysed to identify themes arising from the data. We used NVivo software V.11 for data coding and analysis.
Data management
We used Survey Monkey to collect quantitative data online, adhering to global data safety and protection practices such as limiting identifiable researcher information and automatically deleting stored data after 13 months. Participants provided some indirect identifiers during data collection, including the geographical region of their institute and their level of experience, to strengthen the analysis and allow us to explore the data across strata. These data, however, were presented carefully to prevent the identification of individual participants.
Qualitative data were audio recorded on online meeting platforms following informed consent and anonymised prior to storage in a password-protected online folder. The interviews were transcribed using otter.ai, and all transcripts were checked and edited manually. Personal identifiers were removed throughout the transcription and coding processes, and transcripts were labelled according to the interviewee’s identification as a Global North (GN), Global South (GS) or Global Neutral (GX) researcher.
Patient and public involvement
This meta-research study was not conducted on patients but on researchers themselves. Researchers were previously consulted through a Delphi study for the development of the BRIDGE guidelines, which laid the foundations of this research. In addition, we set up a study task force including researchers from South Africa, India and the Netherlands that independently reviewed this study’s tools and methods ahead of data collection. This publication will be shared with all researchers who participated in the study.
Results
Sample description
Of the 1717 researchers contacted to participate in the survey, 145 (8.4%) responded. Response rates were slightly higher for researchers in the Global South than in the Global North, at 9.5% (n=82) and 6.2% (n=53), respectively.
The survey participants’ countries of origin showed a wide geographical distribution: India comprised 9.0% (n=13) of respondents, followed by South Africa, Switzerland and the UK, each accounting for 5.5% (n=8), and the USA at 4.8% (n=7). Researchers from Nigeria (n=6) and Ghana (n=5) followed, while Belgium, Germany, Zambia and Portugal each contributed three respondents, Kenya, the Philippines, Uganda and Australia each contributed two, and Bangladesh, Indonesia, Ivory Coast, Malaysia, Sudan and Vietnam each had one participant. A further 44% (n=64) either did not respond to the question or preferred not to disclose their country of origin.
Professional and demographic details of survey participants can be found in table 1, though this information was provided by at most 118 (81.4%) of the researchers. Overall, our sample was slightly skewed towards Global South researchers (56.6%). The largest group of participants was based in sub-Saharan Africa (28.3%), closely followed by those in Europe and Central Asia (22.1%).
In terms of disciplinary background, most researchers had a biomedical background, with the highest numbers reporting biostatistics/epidemiology (29.0%) or biology/medicine (22.8%) as their discipline. Men and women were approximately equally represented (40.7% vs 38.6%), although, interestingly, Global South researchers were slightly more likely to be men and Global North researchers slightly more likely to be women. In terms of seniority, our sample was skewed towards senior researchers, with the largest group (44.8%) being established researchers with more than 10 years posteducation, followed by mid-career researchers with 3–10 years posteducation (24.8%). Most researchers held a PhD as their highest degree (23.4%) or were university professors (22.8%).
Research integrity and fairness behaviour domains
An analysis of frequency scores by location (Global North vs Global South) and research behaviour domain (online supplemental table 1) shows some variation between domains but little by geographical location. These are summarised in figure 2, with further details in online supplemental table 2. Participants based in the Global South had statistically significantly higher mean scores than those in the Global North for mentoring (5.72 (SD 1.03) in the Global North vs 6.15 (SD 1.32) in the Global South; p=0.001) and for engagement of affected populations (4.44 (SD 1.35) vs 5.08 (SD 1.71); p=0.038). We defined good mentoring as sufficiently supervising junior coworkers and giving sufficient attention to the skills or expertise needed to perform studies. Engagement of affected populations comprised questions on consulting representatives of affected populations during the preparation stage of research or when developing lay dissemination products, and on developing dissemination products specifically for affected populations or their representatives. The highest frequency score overall was for seeking local ethical approval (online supplemental table 2) at 6.62 (SD 1.09). On the other hand, four domains stood out for having relatively lower scores, around 5 (corresponding to frequently engaging in the behaviour): engagement of affected populations (consulting representatives of affected parties at preparation and dissemination, developing lay dissemination tools); engagement of local decision-makers (consulting end-users at the preparation phase and when developing lay dissemination material); open science (publishing articles and datasets open access) and ensuring reproducibility (preregistering study protocols, publishing valid negative studies, taking steps to correct errors in published work) (figure 2 and online supplemental table 2).
Differences in research practices between Global North and Global South researchers
Despite overall similar behaviour between Global North and Global South researchers, a detailed review of each individual practice reveals a consistent trend whereby researchers based in the Global South report more desirable behaviour, with small but significant differences across many subindicators. Median Likert scores for individual questions are presented overall and by location (Global North and Global South) in online supplemental table 3. Three main findings emerged about Global South researchers: (1) they are more likely to report having the skills and expertise essential to perform studies and providing sufficient supervision or mentoring of junior coworkers; (2) they are more likely to report consulting representatives of affected populations and end-users and (3) they are more likely to report having clear decision-making processes and using data-sharing agreements (though strategies put in place to encourage data reanalysis by local researchers were among the less frequently reported practices).
First, regarding giving sufficient attention to the skills and expertise necessary to perform studies and sufficient mentoring of junior coworkers, Global South researchers reported a median score of 7 (corresponding to ‘always’ as per the Likert scale) whereas Global North researchers reported a median score of 6 (very frequently). The in-depth interviews provide some context for these differences. With regard to giving sufficient attention to skills or expertise, Global North as well as Global South interviewees discussed that Global North research institutes more frequently have a leadership role, even if they lack context-specific expertise and knowledge, which may explain the slightly lower scores reported by Global North researchers. Regarding sufficient mentoring and supervision of junior coworkers, some Global South participants referred to ‘helicopter research’ projects led by Global North researchers as projects that are frequently insufficiently overseen by far-away supervisors. Furthermore, one Global North researcher admitted that, as a junior researcher placed in a coordinating role in a research project in the Global South, they did not receive sufficient mentoring and supervision on decision-making processes, suggesting that their junior rank did not match the level of experience necessary for the role and its responsibilities.
Second, with regard to consulting end-users of research and affected populations, differences were observed when participants were asked if they consulted representatives of affected populations during the preparation stage of research. There was a statistically significant difference in the median responses between Global South researchers, who reported a score of 7 (corresponding to ‘always’ as per the Likert scale), and Global North researchers, with a score of 5 (frequently) (p<0.0001) (online supplemental table 3). There was also a statistically significant difference in researchers’ responses to whether they consulted end-users of the research during its preparation stage, between the Global South’s median score of 6 (very frequently) and the Global North’s median score of 5 (frequently) (p=0.032) (online supplemental table 3). Interestingly, most of the qualitative data on these aspects came from researchers based in the Global South, despite their higher scores. Most participants identified the need to communicate with local communities before, during and after the research and indicated that project/funding timelines do not provide sufficient time to do this properly: ‘…Going back to the… the community itself is. Is not…is not something that that we are good at doing.’ [GS3].
Third, when asked whether the research they were involved in had clear and fair data ownership agreements, we found statistically significant differences between the Global South, median score of 7 (always), compared with the Global North, median score of 6 (very frequently) (p=0.045) (online supplemental table 3). This was a similar difference to the one found in the question of having clearly agreed decision-making processes. Furthermore, there was a statistically significant difference in the researchers’ responses to whether strategies were put in place to encourage secondary analyses by local researchers when data was made openly available. For Global South researchers, the median score was 6 (very frequently), and for the Global North, the median score was 4 (occasionally) (p=0.037) (online supplemental table 3). These results suggest that data-sharing agreements may do little to promote reanalyses by local researchers. The few interview responses on this topic confirmed that there is an issue with the misuse or re-use of data either in the absence of agreements or beyond the original intention of the study. They also described the increased use of data-sharing agreements over time and how these agreements are only part of the solution: preventing issues with data-sharing relies more on trust and confidence between partners.
Determinants of research integrity and fairness
Determinants of research integrity and fairness were studied both qualitatively, to explore structural, institutional and individual determinants, and quantitatively, with regression models estimating the association between individual professional characteristics and the overall research fairness and research integrity scores.
Structural determinants
Most interviewees addressed the inflexible and inequitable donor landscape, where funding streams largely flow from Global North donors to Global North institutes, who subsequently subcontract their partners in the Global South. This funding pattern reportedly reinforces a top-down structure to the research partnership, with unequal power dynamics, roles and responsibilities between Global North and Global South researchers frequently defined from the onset of a collaboration. Another structural factor mentioned by several participants was time and resource pressure, especially during the proposal and protocol development phases. This limits the fair involvement of affected populations and end-users of research as well as the ability to cocreate study tools and jointly plan fieldwork. This limited involvement can also extend to members of the research team, where pressures on the partnership lead to delegated roles in research projects. The most commonly reported example is the practice of subcontracting fieldwork or data collection to a Global South team who have limited insight into or influence on the prior or later stages of the research. Some interviewees further stressed that the competition between research institutes in the acquisition of work in the global health arena further intensifies the pressure on both Global North and South institutes, which can encourage URPs and QRPs.
Another structural determinant that emerged strongly from the qualitative data concerns issues around trust in research partnerships and distinct skills and qualities ascribed to researchers that are rooted in colonialism and racism.
Sometimes they think that they are white girls, boys will do better, they know better than the south, than the brown skin or black skin, (…) that maybe (we) will not do our work properly, not produce the quality,(…). So there is also a negative (attitude), without any ground, without working with somebody, they have this mindset. (…) So there’s also this colonising mentality that black people, they are lazy. They don’t do their or they don’t have the quality we can trust them. They couldn’t manage. (GS4)
The pervasiveness of neocolonial sentiments was also recognised to transcend the Global North/South divide:
I don’t think that colonisation is something that belongs to (the) North. I think it’s a concept like patriarchy. And women can be as patriarchal as possible. It doesn’t have to be men. It’s a practice, it’s a belief…. there are a lot of Southern people around (who) are also in power in the northern global hemisphere. I don’t see them behave differently. (GS4)
When it comes to dissemination and communication of results, both Global North and South participants stated that the necessary but unpaid hours needed to publish in peer-reviewed journals influenced levels of fair coauthorship. Some participants also stated that the additional responsibilities of executing projects and moving from project to project do not allow them to prioritise publications. Some interviewees also shared accounts of concerns about personal finances and livelihoods putting pressure on researchers. This can lead to QRPs, such as fabricating patients and pocketing their financial reimbursement. Lastly, some responses suggest that cultural factors can also influence research practices: a specific example given by a Global South researcher concerned the practice of informally tipping gatekeepers and study participants to incentivise participation, which can be viewed as questionable.
Institutional determinants
Several interviewees considered the level of buy-in and commitment to fair and robust research practices by the senior leadership in academic institutions as a crucial factor. Related to this, a lack of training and reflexivity around ethics and relationship building was often discussed:
I think people aren’t trained in relationships. (…) I mean, what if, if that could really change, those relationships could be more open, and we could more openly reflect on them with each other too, I think you could see real change. But then you have to really value like, what does it mean to create and foster a partnership? And I don’t know any curriculum that’s doing that in global health. (GN3)
While most interviewees, based in the Global North and South, acknowledged insufficient institutional commitment to addressing URPs and QRPs, some participants perceived increasing investments in training and guideline development around ethics and fairness in Global North institutes. Some interviewees recognised, however, the different institutional dynamics at northern and southern institutes, with greater overheads in the Global North affording more institutional support for safeguarding research integrity. Time constraints can also be understood as an institutional factor affecting research fairness and integrity, which interviewees addressed in relation to delayed contracting and ethics review, which can hamper the meaningful involvement and fair treatment of southern partners. Further, as previously discussed, insufficient supervision of students or junior staff also emerged as an individual determinant negatively affecting research fairness and integrity.
Individual determinants
The regression models fitted to explore the association between research practices and demographic/professional characteristics of researchers do not support strong hypotheses regarding the effect of individual determinants (online supplemental tables 4 and 5). Indeed, while some Wald tests are significant, most overall log-likelihood ratio tests are not. The only independent individual determinants which emerged from the regression analysis (beyond the Global North and Global South differences discussed above) were academic rank (with more senior ranks reporting more desirable behaviour) and discipline of the researcher (with biomedical sciences reporting lower scores than biology/medicine) after adjusting for other factors. Indeed, a bachelor’s or master’s level degree was associated with a −14.0 (95% CI −27.0 to −0.74) (Wald test p=0.042) reduction in the responsible research score compared with associate or full professor in multivariate analyses after adjusting for location, career length, gender, geographical region of the researcher’s institute and discipline of the researcher. Researchers from the biomedical science discipline were associated with a −43.0 (95% CI −71.0 to −16.0) (Wald test p=0.002) reduction in the responsible research score compared with biology/medicine in the multivariate analysis after adjusting for location, years of involvement, gender, geographical region of the researcher’s institute and academic rank (online supplemental table 4). When it comes to FRPs, the only independent individual determinants emerging from the regression analysis were career length and discipline. Indeed, the environmental and occupational research discipline was associated with an 18.0 (95% CI 1.1 to 35.0) (Wald test p=0.004) increase in the fair research score compared with biology/medicine in the multivariate analysis after adjusting for location, years of involvement, gender, geographical region of the researcher’s institute and academic rank.
Researchers preferring not to disclose their career length were associated with a −50.0 (95% CI −99 to −1.6) (Wald test p=0.046) reduction in the fair research score compared with early career researchers (<3 years posteducation) in the multivariate analysis after adjusting for location, years of involvement, gender, geographical region of the researcher’s institute and academic rank (online supplemental table 5). However, it is important to note that, according to the overall likelihood ratio tests, the p value for the discipline of the researcher was significant for RRPs (p=0.029), while for FRPs neither academic rank nor discipline of the researcher was statistically significant.
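The contrast drawn above between individual Wald tests and overall likelihood ratio tests can be illustrated with a minimal sketch of how nested Gaussian linear models are compared. The variable names and simulated data below are hypothetical and stand in for the survey covariates; this is an illustration of the general technique, not the study's actual analysis code.

```python
import numpy as np

def ols_rss(X, y):
    """Fit OLS by least squares and return the residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def lr_statistic(X_full, X_restricted, y):
    """Likelihood-ratio statistic for nested Gaussian linear models:
    LR = n * log(RSS_restricted / RSS_full), compared against a
    chi-square distribution with df = difference in parameter counts."""
    n = len(y)
    return n * np.log(ols_rss(X_restricted, y) / ols_rss(X_full, y))

rng = np.random.default_rng(0)
n = 200
# Hypothetical covariates: career length (continuous) and a 0/1 discipline dummy
career = rng.normal(size=n)
discipline = rng.integers(0, 2, size=n)
# Simulated "responsible research score" with a true discipline effect of -10
score = 50 + 5 * career - 10 * discipline + rng.normal(scale=5, size=n)

X_full = np.column_stack([np.ones(n), career, discipline])
X_restricted = np.column_stack([np.ones(n), career])  # drop discipline

lr = lr_statistic(X_full, X_restricted, score)
print(round(lr, 1))  # large LR: dropping 'discipline' clearly worsens the fit
```

Because the restricted model can never fit better than the full one, the statistic is always non-negative; a value exceeding the chi-square critical value (3.84 for 1 df at the 5% level) indicates the dropped covariate matters.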
The topic of seniority also emerged clearly from the interviews conducted, although from a different and less flattering perspective. Most interviewees considered that seniority, in terms of researchers’ track record of acquired work and number of publications, enables researchers to wield power over younger researchers, which can result in both questionable and unfair practices. An unfair practice frequently discussed in interviews, as illustrated by the quote below, relates to unfair publication agreements between junior and senior-level researchers, which can strip science of potential societal value and impact.
There’s also practice those who are senior, it doesn’t matter how much they contribute, they want to be the first author. (…) Because, first of all to tell that because I am the Principal investigator, I have to be the first author of every single paper, it doesn’t matter, whoever, right? Because I brought the grant. I’ve experienced that. So the thing that I brought the grant, so I should be the first author. (…), (GS4)
Another potential reason for age discrepancies in URPs was the more recent nature of decolonising debates, as a result of which junior researchers may be more sensitised and committed to FRPs.
Related to the issue of seniority, many participants reported that concerns about career progression may incentivise global health researchers to engage in unfair and questionable behaviours. Interestingly, this finding was discussed mostly in relation to Global North researchers and to practices concerning the fair involvement of Global South partners in the research preparation as well as the dissemination and communication stages.
I really think that it’s much easier for (a)Western person to come to our settings and get something published. Maybe I can bring another example. You know, I was talking with someone from the North, who just came to Uganda for one-year research fellowship and then just chatting, I asked the motivation and she was like, well, but you can come here in one year, I can pull out easy, easy 2-3 papers, you know, and then I can start my career from there. So you see. Hey, there is definitely, there is less competition, it’s easy to maybe write up maybe in some institutions that you know it’s easier to get data because (…) they have no policy on data sharing and stuff like that. (…) They asked for the data, you gave it to them. (GS1).
While this quote describes Global North researchers taking advantage of their access to data for individual benefit, some also stressed what interviewee GX1 described as ‘laziness to go the extra mile’; that is, taking the time to cocreate and cede decision-making power on the use and sharing of data.
Several interviewees also pointed to language skills, an individual determinant that can be connected to structural factors around access to quality education as well as to institutional factors such as the lack of linguistic diversity in the sector’s working language and within academic journals. Language skills reportedly explain both why Global South researchers act (predominantly) as data collectors, given their knowledge of local languages, and why they are less involved in the write-up of study results, arguably because Global North partners possess stronger English language skills. Some participants also touched on another individual factor affecting whether results are published: researchers’ pride, or fear of missing potential future funding opportunities, associated with mentioning study flaws, limitations or negative findings.
Discussion
Overall, researchers reported mostly adhering to research integrity and research fairness principles with little geographical variation. Some behaviours are more frequently reported (transparent reporting of studies, seeking local ethical approval) than others (engagement with affected populations, engagement with local decision-makers, adherence to Open Science), with little variation between responses from the Global North and the Global South. However, there are some small yet significant differences in specific practices (individual questions of the survey), with Global South researchers generally more likely to report desirable practices, such as giving sufficient attention to skills and expertise, sufficiently mentoring their junior coworkers, consulting representatives of affected populations, consulting end-users of research, having clear decision-making processes agreed on, having clear and fair data ownership agreements and having strategies in place to encourage secondary analyses.
Several structural factors were associated with these patterns, such as an inflexible donor landscape leading to time and resource pressure and competition. This was found to exacerbate tendencies to allocate unequal roles and responsibilities between Global North and Global South researchers that are rooted in pervasive neocolonial mentalities. However, institutional factors were also seen as important determinants, such as universities’ and research institutions’ investments in relationship building, efforts to develop and ensure adherence to guidelines and commitment to mentoring students or junior staff. Our regression analyses did not provide convincing evidence that individual determinants affect research practices, but interviews revealed that senior researchers at times exert power over younger researchers, resulting in both questionable and unfair practices. Similarly, neocolonial superiority complexes were also found to explain unfair practices.
Overall, we found that there are more synergies than trade-offs between research integrity and fairness as they share similar structural determinants (a competitive landscape leading to time and resource pressure and competition), institutional determinants (investments in relationship building, efforts to develop and ensure adherence to guidelines, commitment to mentoring and supervising students or junior staff) and individual determinants (power differentials and competition between senior and junior researchers). This is clearly exemplified by ‘helicopter research’ projects, led by far-away supervisors providing insufficient supervision (a classic threat to research integrity) but seen as key for career progression because they enable ‘quick and low-cost’ publication returns (a clear example of extractive research). Another example is data fabrication—the research integrity violation ‘par excellence’—which can be the result of financial constraints in unfair partnerships (either at the individual or project level) that put pressure on researchers in LMICs to fabricate patients and pocket financial reimbursements. This is in line with other studies that have also shown that integrity and fairness are not mutually exclusive goals. Indeed, the disproportionate and unfair distribution of decision-making power within research partnerships29 can weaken the design of research processes and tools, even in the task of properly soliciting informed consent from research participants.33 Given that integrity and fairness share not only the determinants leading to these practices but also the goal of increasing both research quality and the societal benefits of research, there is ample scope to make such synergies explicit and to align both agendas in international research collaborations, as has been advocated by the recently published Cape Town Statement on research integrity.2
The only trade-off that emerged between the research integrity and research fairness agendas concerns Open Science and data-sharing. In general, practices related to Open Science (publishing under Open Access conditions as well as making data and programming code openly available) were found to be the least frequently reported. This is not surprising for at least two reasons. First, much global health research relies on a mixture of quantitative and qualitative research methods. Yet it can be difficult to anonymise qualitative data when participants provide detailed information about their personal experiences, and in such cases removing personal identifiers may not be enough to ensure anonymity. Second, there is a literature showing that Open Science practices are problematic in global health7 34 as they can exacerbate existing inequities between regions. Indeed, as argued in a recent review, open research processes can only lead to wide reuse or participation if there is strong pre-existing capacity to do so (in terms of knowledge, skills, financial resources, technological readiness and motivation). If that is not the case, Open Science risks mostly helping researchers in higher-income countries, who did not share the ‘legwork’34 of collecting data but have access to higher analytical capacities (eg, students able/willing to do free internships, better computing power) and financial resources (institutional funds to pay article processing charges for spin-off research ideas), get published. Our results are not clear-cut but are in line with these concerns. Indeed, in addition to finding rather low adherence to Open Science practices, we also found that, according to Global North researchers, strategies were only occasionally put in place to encourage secondary analyses by local researchers (though interestingly Global South researchers reported that this happened frequently).
Although there are few studies looking at determinants of research integrity or fairness in global health, those that exist primarily focus on the former. In line with previous studies,15 we found some evidence that junior researchers are less likely to adhere to responsible research practices compared with senior researchers. This is expected as junior researchers are meant to grow into research and collaboration. However, this was not corroborated by the qualitative research, which showed that senior-level researchers were more likely to engage in URPs, especially relating to unfair publication agreements. A recent qualitative study of global health research partnerships also outlined many factors affecting research integrity identified in our study, including the competitive culture of academia, the scarcity of long-term research grant funding, institutional cultures and power dynamics between junior and senior researchers, and lack of research integrity training.35 While that study primarily focused on practices linked to research integrity, it also describes unfair practices related to authorship, and the conflicts that can occur between Global North and Global South partners over fair attribution. Other studies of fairness in global health research have not sought to identify its determinants as we have, but explored dynamics of international research partnerships that were reported as contributing factors in this study.
The extractive nature of research conducted in the Global South by researchers from the Global North has been clearly argued in a study that likened (research) data to gold and researchers to gold diggers.36 For example, even where global health research partnerships are ‘supposed’ to lead to ‘North to South transfers of financial and material resources’, macrolevel differences in power and resources end up being reproduced by the institutions, research teams and individuals that make up the partnerships.37 Even setting up such partnerships equitably can be complicated when northern funders set limits on indirect costs or overheads for the southern partner.38 Thus, while equity remains the aim for many global health research partnerships, a de facto reality is established with entrenched institutional arrangements, which extends to the division of roles across partners. Reports of Global South researchers ‘being relegated to tasks well below their capacity’ with ‘no opportunity to participate in priority-setting or in leadership roles’ are common.39
The main strength of our study is the scientific rigour of the study design, but limitations include our low response rate and associated biases. Indeed, we applied a robust epidemiological design for the frequency study based on two-stage random sampling of researchers clustered within institutes from a broad range of research networks worldwide. Latin American as well as Francophone and Lusophone African institutes are absent from our sample (apart from Cote D’Ivoire) due to their limited representation in the networks that contributed to the sampling frame. To the best of our knowledge, this is the first epidemiologically rigorous study on research fairness (although there are more examples in the field of research integrity). Moreover, we were able to combine the breadth of evidence from a global survey with the depth of knowledge obtained from interviews.
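The two-stage random sampling underpinning the frequency study can be sketched as follows: institutes (clusters) are drawn at random first, then researchers are drawn at random within each selected institute. The sampling frame, institute names and sample sizes below are entirely hypothetical; this is a minimal illustration of the general design, not the study's actual procedure.

```python
import random

def two_stage_sample(institutes, n_institutes, n_per_institute, seed=42):
    """Two-stage cluster sampling: randomly draw institutes (stage 1),
    then randomly draw researchers within each selected institute (stage 2).

    `institutes` maps an institute name to its list of researcher IDs."""
    rng = random.Random(seed)
    # Stage 1: sample clusters (sorted for a deterministic frame order)
    chosen = rng.sample(sorted(institutes), k=n_institutes)
    sample = {}
    for inst in chosen:
        staff = institutes[inst]
        # Stage 2: sample researchers within the cluster, capped by its size
        k = min(n_per_institute, len(staff))
        sample[inst] = rng.sample(staff, k=k)
    return sample

# Hypothetical sampling frame of four institutes of varying size
frame = {
    "Institute A": [f"A{i}" for i in range(30)],
    "Institute B": [f"B{i}" for i in range(12)],
    "Institute C": [f"C{i}" for i in range(50)],
    "Institute D": [f"D{i}" for i in range(8)],
}
sample = two_stage_sample(frame, n_institutes=2, n_per_institute=10)
total = sum(len(v) for v in sample.values())
print(len(sample), total)
```

Clustering researchers within institutes keeps fieldwork and recruitment tractable, at the cost of within-cluster correlation that analyses must account for.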
Despite the small sample size, we were able to reach saturation and triangulate information on several important survey themes. However, our study suffered from a low response rate, with only 8.4% of researchers contacted via email filling in the survey. This is a well-known issue for research integrity surveys (the Dutch National Research Integrity Survey had a response rate of 21.2%15 while the survey of European and American researchers had a response rate of 7.2%40) and can be ascribed to time constraints, survey fatigue, lack of perceived relevance, and trust and privacy concerns. Furthermore, we cannot exclude the possibility of positively biased responses since our study measured self-reported adherence to research practices that are well known to be either desirable or not desirable. On a more subtle level, there is also the possibility of an ‘intention-action gap’: no one is against fairness and integrity, but it often takes courage and deliberate actions to consistently act on one’s principles. Lastly, especially considering that the response rate differed between the Global North (6.2%) and Global South (9.5%), it is possible that each stratum represents a different subset of researchers, meaning the differences in scores we found between the two groups may reflect differences between these subgroups rather than ‘population-level’ differences (differential bias).
There are also some limitations pertaining to the survey tool. First, we did not determine the prevalence of research fairness and integrity, as initially intended per protocol, due to the measurement scales used. Indeed, survey questions measured the frequency of researchers’ involvement in specific practices (never, very rarely, rarely, occasionally, frequently, very frequently and always), rather than binary responses (yes, no). With hindsight, we did not feel comfortable collapsing the frequency variables into two categories as this would mean substantial data loss. As a second limitation related to our tool, some variables are more likely to generate valid responses as they incorporate specific and verifiable details about desired behaviours (eg, whether end users were consulted for dissemination products or whether researchers made material accessible on Open Science platforms). Conversely, other questions were based on subjective judgements (eg, allocation of authorship as ‘fair’ or mentorship as ‘insufficient’).
Our study has several implications for practice. Taking individual accountability as a starting point, our findings underline the importance of regular training, coaching and mentoring for global health researchers, for both junior and senior staff, ensuring exposure to up-to-date practices related to research integrity that also address issues of biases and privileges. For strengthened institutional commitment, academic institutions could prioritise investments in training as well as in guideline development and adherence to ethical practices and FRPs. Next to the individual and institutional determinants, funding policies and inequities in the funding landscape need to be addressed at scale, as flexible and equitable practices can hopefully drive more equal power dynamics between Global North and Global South researchers and instil good practices in global health research. Overall, more scientific research on this topic will be key to taking an evidence-informed approach to tackling research fairness and integrity, for instance, providing qualitative insights to document and learn from good practices for counteracting the individual, institutional and structural determinants affecting research fairness and integrity.
Conclusion
To the best of our knowledge, this is the first study providing empirical evidence using rigorous research methods to study research fairness and integrity in global health. Our study suggests that global health researchers mostly adhere to research integrity and research fairness principles (with little geographical variation) but structural, institutional and individual factors are a barrier to following ‘ideal’ practices. Our study shows there is ample scope to align research integrity and research fairness agendas in global health, as only science that is conducted with fairness can be considered responsible and conducted with integrity. This is underscored by the fact that there are more synergies than trade-offs between research integrity and research fairness as they share similar determinants. These include structural factors, such as inflexible donor landscapes and neocolonial mentalities, and unequal roles and responsibilities between Global North and Global South researchers. In practice, our study emphasises the need for institutional and structural initiatives to promote research integrity that also address issues of biases and privileges, foster equal partnerships and address funding inequities, to promote good practices in global health research.
Data availability statement
Data are available on reasonable request and will be delinked from the study participants.
Ethics statements
Patient consent for publication
Ethics approval
This study involves human participants and was approved by the KIT Research Ethics Committee (application number S-181) and the KEM Hospital Research Centre Ethics Committee (ref no KEMHRC/RVMlECI 24). Participants gave informed consent to participate in the study before taking part.
Acknowledgments
We are very grateful to Lonneke van der Waa and Lindy van Vliet for making this study possible. Many thanks to Carel IJsselmuiden and Sanjay Juvekar for insightful inputs during the conception stage of this study.