Data quality is one of the most important factors in the success of a clinical trial. And still, it is an often-neglected area. This neglect stems from a number of factors, from the diverse nature of trials, which makes protocols hard to standardize, to generally underfunded training across the industry.
The significance of clinical trial data is generally poorly understood by those who handle it. In this article, we examine some of the reasons for this and suggest some ways to improve the situation. To begin with, let’s take a look at some context.
The Goals of Clinical Trial Data Quality
For this paper, we assume that clinical trials are coordinated by CROs or research centers, sponsored by the manufacturer, and regulated by the FDA. With this in mind, we have a framework within which to consider the definition and significance of clinical trial data quality.
As we know, developing medicine is an incredibly time-consuming and expensive process. A significant proportion of both resources is spent on ensuring that the data resulting from these trials are accurate. This demand for precision requires a meticulous set of recruitment and testing protocols and a standardized method of reporting and analysis.
Clinical trial data is scrutinized by monitoring processes arranged by pharmaceutical manufacturers, and audits are held to ensure the treatments are being employed in accordance with the agreed-upon protocols. This monitoring can represent almost a third of the total study costs, and often, manufacturers employ an entire department for this specific purpose.
The main focus of data quality as a discipline is how to collect the necessary data, in adequate amounts and to a high degree of accuracy, while limiting the amount of unnecessary data gathered. For the pharmaceutical companies, the data collected should be relevant to the effectiveness of the therapy; from the regulatory side, they must also demonstrate adherence to the clinical trial protocols and capture the safety and pharmacokinetics of the compound itself.
The goals of clinical trial data quality, therefore, are to reduce spending on the collection of redundant data while ensuring the quality of the necessary data is sufficient for the relevant stakeholders. As such, data quality monitoring has a critical role in the smooth and effective running of clinical trials.
The Importance of Data Quality Monitoring
There are several stakeholders in clinical trial data quality:
- Starting at the end of the process, the patients and future sufferers of the disease being treated stand to gain from a successful result, one which arises from a well-run and accurately reported trial.
- The clinicians and researchers who are dedicating their time and careers to the pursuit of new medical drugs and devices stand to gain in reputation and contribution to medical science from every study, whether successful or not, as long as the data is valid.
- The sponsors may have a corporate and financial stake in the process, and stand to get a return on their significant monetary investment from a successful trial.
- And the regulatory bodies who oversee the safety and efficacy of the trials can only give the go-ahead to drugs tested in accordance with their protocols.
A key FDA objective is to assure the quality and integrity of the biomedical research data used to support the initiation or expansion of clinical trials. To do this, clinical research sites are inspected by agents representing the Bioresearch Monitoring Program.
Each of these stakeholders, therefore, stands to benefit from quality data. From the FDA’s perspective, and for all of the other stakeholders too, a clinical trial must collect the information required to allow the FDA to assess the safety and efficacy of the product.
So, fundamental to the success of the trial, and the return to every stakeholder, is the accurate collection and reporting of clinical trial data. However, the Good Clinical Practice (GCP) guidelines available have some worrying limitations. Primarily, they are designed mostly for drug-registration trials and based on informal metrics of consensus. For example, there is no consensus on an objective definition of “quality” as it pertains to data.
Further, the modernization of clinical trial processes is leaving many behind. The adoption of CROs for the performance of trials, as well as the move to more computerized systems of data entry and processing, means that a lack of training and preparation raises a number of challenges to the collection of quality data.
Data quality monitoring provides the key not only to reducing mistakes but to overhauling the systems in place that contribute to poor-quality data. Investigating the areas in which data quality is threatened brings up a handful of points of attention that stakeholders in clinical trials can address. Trials involve a vast range of different processes and practices, and many of them contain areas in which data quality is vulnerable.
Identifying the challenges faced by researchers in clinical trials is the first step to finding and strengthening weak points in these practices, and improving the quality of clinical trial data for all stakeholders.
Challenges in Assuring Data Quality in Clinical Trials
A study into the ambiguities in the GCP and challenges faced by researchers identified the following key themes as opportunities to improve the quality of clinical data:
- Education
- Ways of Working
- Working with IT
- Working with Data
- Data Quality Monitoring
Each of these themes houses areas for improvement, which, when addressed, should go a long way to boosting data quality significantly. Here, we’ll discuss their findings, and in the section following, talk briefly about what might work as a solution to some of these problems.
Education
Many of the participants in the study reported feeling under-prepared. The majority of researchers interviewed suggested that the training required to meet industry standards simply isn’t there, which may be a product of the diffusion of responsibility in the clinical trial environment.
There was a consensus around the importance of organization-specific staff training, rather than simply a generalized overview. Participants also mentioned that the training is more or less the same from one place to another: it relates to the SOPs for the trials in practice but covers little in the way of formal training in data collection and entry.
As such, trainees are taught on the job, as part of the trial process, which may reflect an inefficiency and a lack of rigor in the training process.
Ways of Working
Taking responsibility for, and ownership of, their data was considered a top priority for staff to feel that they were contributing to the study. The perceived significance of their role was a major motivator for researchers, and part of this lies in the responsibility they must take for the quality of their data.
Time constraints from unrealistic workloads play a role here. Having the time to take responsibility for your own data is important to improving the quality of the data collected.
Staff engagement is also a factor. Engaging staff in the design improves their sense of responsibility and promotes ownership. It also fosters more personal connections between team members and encourages loyalty. This engagement swings both ways, as closer working relationships allow members to identify those who are cutting corners in their data collection.
Helping to dispel the feeling of organizational hierarchy encourages more open and honest communication, which further promotes engagement. Finally, encouraging cross-disciplinary involvement brings in expertise from different angles, which can lead to different interpretations of data that are worthy of discussion and recommendations.
Working with IT
The technologies being employed in clinical trials are changing the trial landscape itself. New systems come with benefits and drawbacks, but overall, the response to the new systems was positive.
Faster and easier data entry is a big advantage, and the ability to upskill from paper to electronic documentation on the job is a time-saver and provides a transferable skill.
Storing all data in a centralized system also makes it easier to access for all stakeholders. Audit trails are a lot faster and more obvious this way, and the conditions and logic applied to platforms like REDCap can instantly flag up or remove data that is beyond realistic parameters, ultimately improving data quality and reducing entry errors.
However, this rigidity can create other problems, such as rejecting valid measurements that were entered in the wrong unit. Another danger of electronic data is that vast amounts of it can be deleted by mistake, for example when transferring from one device to another.
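To make this concrete, here is a minimal sketch of the kind of range check such platforms apply on entry. The field names and plausible limits are invented for illustration, and the logic is a simplification rather than REDCap’s actual rule syntax:

```python
# A simplified range check of the kind data-capture platforms apply on
# entry. Field names and plausible limits are invented examples.

PLAUSIBLE_RANGES = {
    "weight_kg": (30.0, 250.0),
    "systolic_bp": (70, 250),
}

def flag_out_of_range(record: dict) -> list[str]:
    """Return a query for every value outside its plausible range."""
    queries = []
    for field, (low, high) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is not None and not (low <= value <= high):
            queries.append(f"{field}={value} outside [{low}, {high}]")
    return queries

# A weight entered in pounds rather than kilograms gets flagged, but the
# check cannot tell a unit error from a genuinely implausible value --
# the rigidity described above.
print(flag_out_of_range({"weight_kg": 352, "systolic_bp": 120}))
# ['weight_kg=352 outside [30.0, 250.0]']
```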
Working with Data
On that topic, the study surfaced a number of strategies that researchers have used to reduce mistakes. Something that stands out is that estimates of what constitutes a reasonable amount of human error to tolerate are subjective.
This illuminates a weak point in regulation, possibly due to the diverse nature of study data: there can be no one-size-fits-all approach, and thresholds must be set independently for each case.
While it may be impossible to standardize an acceptance level itself, there may be ways to standardize the calculation of one based on pre-agreed variables in the study, such as the therapeutic area or attributes of the population being tested.
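As a purely hypothetical sketch of what such a standardized calculation could look like (the base rate, factors, and variable names below are invented for illustration and are not drawn from any guideline):

```python
# Hypothetical calculation of a per-study error tolerance from
# pre-agreed variables. Every number and factor here is an invented
# placeholder, not a published standard.

BASE_ERROR_RATE = 0.005  # illustrative starting point: 0.5% of fields

THERAPEUTIC_AREA_FACTOR = {
    "oncology": 0.5,     # stricter: safety-critical endpoints
    "dermatology": 1.5,  # more tolerant: lower-risk measurements
}

def acceptable_error_rate(therapeutic_area: str, self_reported: bool) -> float:
    """Derive an acceptable error rate from agreed study attributes."""
    rate = BASE_ERROR_RATE * THERAPEUTIC_AREA_FACTOR.get(therapeutic_area, 1.0)
    if self_reported:
        rate *= 1.5  # say, patient-reported data tolerates more entry error
    return rate

print(acceptable_error_rate("oncology", self_reported=False))  # 0.0025
```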
Missing data is sometimes replaced by carrying values forward from previous entries, where a subject, for example, forgot to fill out one box. This practice stems from a lack of published thresholds and acceptance criteria for missing data in general.
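This pattern is known as last observation carried forward (LOCF). A minimal pandas sketch (with invented column names) shows how easily a gap can be papered over, and why the practice needs pre-agreed rules rather than ad hoc use:

```python
import pandas as pd

# Invented diary data: one subject's daily pain score with a missed entry.
diary = pd.DataFrame({
    "day": [1, 2, 3, 4],
    "pain_score": [4.0, 5.0, None, 6.0],
})

# Carrying the last observation forward fills the gap, but it also
# silently invents a measurement -- hence the need for pre-agreed
# thresholds on how much missing data may be imputed this way.
diary["pain_score_locf"] = diary["pain_score"].ffill()
print(diary)
```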
Data auditing is another area to consider. There is a lack of collaborative attitudes between auditors and researchers, with common complaints that FDA and pharmaceutical company auditors adopt an attitude of high suspicion and strict discipline. Agents who took a friendlier, more engaging approach to guiding individuals towards better adherence were much better received, and their visits offered training and educational opportunities to those being audited.
Data Quality Monitoring
Many researchers claimed that they had worked in environments where data monitoring wasn’t considered important at all. Others said that data monitoring strategies were much the same in every center they’d worked in.
Others claimed to lack knowledge about the methods and significance of data monitoring. There were reports of subjective methods of monitoring data, and the frequency of such monitoring varied a lot. One finding was that monitoring was more prevalent when the appropriate technologies were available. Budget cuts also correlated negatively with monitoring efforts.
From these findings, a number of weak points immediately expose themselves, and it’s possible to piece together some changes that could improve clinical trial data quality.
How to Implement Improvements to Clinical Trial Data Quality
Broken down by the same five themes, here are some suggestions that could help create better-quality data for clinical trials.
Education
There’s no question that formal education could step in and quickly improve the quality of data entry, collection, and analysis. Deciding with whom this responsibility lies is a matter for discussion.
For funders, the extra investment in optional courses in data quality could provide a rapid return. For sites and researchers, a foundation course at the beginning of the trial to upskill everyone in using the technologies involved may also solve a lot of problems.
For trial design, workers should be included in the preparation phases, and values, thresholds, and methods should all be agreed upon for the context of the study at hand.
Ways of Working
Creating an environment that promotes good-quality data is important. If staff are overloaded, human error will increase and motivation will suffer. Focusing on the quality as well as the quantity of work needs to take priority in trial design.
Engaging researchers more thoroughly, and expressing the significance of their role promotes deeper engagement and more responsibility. This fosters a stronger commitment to the program and the collection and reporting of better quality data.
Working in groups, facilitating collaboration, and reducing the perceived hierarchy of workers are all important moves to consider to improve this engagement.
Working with IT
In order to fully leverage the vast improvements that technology can provide to data entry and storage, it needs to be adopted well. Trial and error is not a good way to learn new systems when the quality of the data you’re working with is critical. Therefore, full focus on training, teaching the limitations of the technology, and employing redundancy measures and backups should be considered.
If researchers are able to work with numerous electronic systems, the initial investment in training will pay off significantly in short order. The accessibility, analysis, and security of high-quality data on well-implemented IT systems are a huge advantage to all stakeholders.
Working with Data
Data quality definitions need to be better established. GCP reportedly does not cover enough detail to serve as a universal basis, so clinical trial design has to fill in the gaps. Metrics need to be discussed with multiple stakeholders, and predetermined protocols for acceptance standards, missing data, and monitoring cycles should be established from the beginning.
For audits, it’s important that the culture of oversight is not a hostile one. Aggressive assessments designed to catch people out only encourage the filling in of blanks to cover someone’s tracks. A collaborative approach to adherence that emphasizes the common goals of all stakeholders is the key to ensuring data is honest and realistic.
Data Monitoring
Again, setting protocols for monitoring data will have to be down to the trial design teams. With input from other stakeholders, it should be possible to design these into the program with the understanding that they will be funded as a critical component of a robust clinical trial.
Attitudes to monitoring also need to be adjusted, and the understanding of its significance, as well as how to go about it, should be the responsibility of the appropriate training bodies.
Conclusion
Data quality in clinical trials is what holds a successful study together. While the cost of a failed trial is tremendous, there are still some glaringly significant oversights when it comes to ensuring that the data collected and reported are accurate and relevant.
Researchers report blind spots in training, lack of experience and knowledge, and, in some cases, a total lack of concern for data quality monitoring during trials. The official channels provide little in the way of help with this, and Good Clinical Practice guidelines are repeatedly described as too vague and, at times, irrelevant.
Yet, every stakeholder in clinical trials stands to benefit from better quality data, and with a few modifications and the employment of more collaborative attitudes across departments, better data is just around the corner.
The use of more robust and standardized training, the promotion of better engagement, and healthier relationships between auditors and researchers are some of the key elements to work on, and the implementation of, and training on, new electronic platforms should quickly prove a worthwhile investment.