21. DATA MONITORING
21a. Formal Committee
Composition of data monitoring committee (DMC); summary of its role and reporting structure; statement of whether it is independent from the sponsor and competing interests; and reference to where further details about its charter can be found, if not in the protocol. Alternatively, an explanation of why a DMC is not needed.
Example
“Appendix 3. Charter and responsibilities of the Data Monitoring Committee
A Data Monitoring Committee (DMC) has been established. The DMC is independent of the study organisers. During the period of recruitment to the study, interim analyses will be supplied, in strict confidence, to the DMC, together with any other analyses that the committee may request. This may include analyses of data from other comparable trials. In the light of these interim analyses, the DMC will advise the TSC [Trial Steering Committee] if, in its view:
a) the active intervention has been proved, beyond reasonable doubt*, to be different from the control (standard management) for all or some types of participants, and
b) the evidence on the economic outcomes is sufficient to guide a decision from health care providers regarding recommendation of early lens extraction for PACG [primary angle closure glaucoma].
The TSC can then decide whether or not to modify intake to the trial. Unless this happens, however, the TSC, PMG [project management group], clinical collaborators and study office staff (except those who supply the confidential analyses) will remain ignorant of the interim results.
The frequency of interim analyses will depend on the judgement of the Chair of the DMC, in consultation with the TSC. However, we anticipate that there might be three interim analyses and one final analysis.
The Chair is Mr D.G.-H., with Dr D.C., and Professor B.D. Terms of reference for the DMC are available on request from the EAGLE [Effectiveness in Angle Closure Glaucoma of Lens Extraction] study office.
* Appropriate criteria for proof beyond reasonable doubt cannot be specified precisely. A difference of at least three standard deviation [sic] in the interim analysis of a major endpoint may be needed to justify halting, or modifying, such a study prematurely [Reference X].” 325
Explanation
For some trials, there are important reasons for periodic inspection of the accumulating outcome data by study group. In principle, a trial should be modified or discontinued when the accumulated data have sufficiently disturbed the clinical equipoise that justified the initiation of the trial. Data monitoring can also inform aspects of trial conduct, such as recruitment, and identify the need to make adjustments.
The decision to have a data monitoring committee (DMC) will be influenced by local standards. While certain trials warrant some form of data monitoring, many do not need a formal committee,326 such as trials with a short duration or known minimal risks. A DMC was described in 65% (98/150) of cancer trial protocols with time-to-event outcomes in Italy in 2000-5,327 and in 17% (12/70) of protocols for Danish randomised trials approved in 1994-5.6 About 40% of clinical trials registered on ClinicalTrials.gov from 2007-2010 reported having a DMC.328 The protocol should either state that there will be a DMC and provide further details, as discussed below, or indicate that there will not be a DMC, preferably with reasons.
When formal data monitoring is performed, it is often done by a DMC consisting of members from a variety of disciplines.254;329 The primary role of a DMC is to periodically review the accumulating data and determine if a trial should be modified or discontinued. The DMC does not usually have executive power; rather, it communicates the outcome of its deliberations to the trial steering committee or sponsor.
Independence, in particular from the sponsor and trial investigators, is a key characteristic of the DMC and can be broadly defined as the committee comprising members who are “completely uninvolved in the running of the trial and who cannot be unfairly influenced (either directly or indirectly) by people, or institutions, involved in the trial.”254 The DMC members are usually required to declare any competing interests (Item 28). Among the 12 trial protocols that described a DMC and were approved in Denmark in 1994-5,6 four explicitly stated that the DMC was independent from the sponsor and investigators; three had non-independent DMCs; and independence was unclear for the remaining five protocols.
The protocol should name the chair and members of the DMC. If the members are not yet known, the protocol can indicate the intended size and characteristics of the membership until further details are available. The protocol should also indicate the DMC’s roles and responsibilities, planned method of functioning, and degree of independence from those conducting, sponsoring, or funding the trial.254;330;331 A charter is recommended for detailing this information331; if this charter is not appended to the protocol, the protocol should indicate whether a charter exists or will be developed, and if so, where it can be accessed.
21b. Interim analysis
Description of any interim analyses and stopping guidelines, including who will have access to these interim results and make the final decision to terminate the trial.
Example
“Premature termination of the study
An interim-analysis is performed on the primary endpoint when 50% of patients have been randomised and have completed the 6 months follow-up. The interim-analysis is performed by an independent statistician, blinded for the treatment allocation. The statistician will report to the independent DSMC [data and safety monitoring committee]. The DSMC will have unblinded access to all data and will discuss the results of the interim-analysis with the steering committee in a joint meeting. The steering committee decides on the continuation of the trial and will report to the central ethics committee. The Peto approach is used: the trial will be ended using symmetric stopping boundaries at P < 0.001 [Reference X]. The trial will not be stopped in case of futility, unless the DSMC during the course of safety monitoring advices [sic] otherwise. In this case DSMC will discuss potential stopping for futility with the trial steering committee.”332
Explanation
Interim analyses can be conducted as part of an adaptive trial design to formally monitor the accumulating data in clinical trials. They are generally performed in trials that have a DMC, a longer duration of recruitment, and potentially serious outcomes. Interim analyses were described in 71% (106/150) of cancer trial protocols with time-to-event outcomes in Italy in 2000-5,327 and in 19% (13/70) of protocols for Danish randomised trials approved in 1994-5.6 The results of these analyses, along with non-statistical criteria, can be part of a stopping guideline that helps inform whether the trial should be continued, modified, or halted earlier than intended for benefit, harm, or futility. Criteria for stopping for harm are often different from those for benefit and might not employ a formal statistical criterion.333 Stopping for futility occurs in instances where, if the study were to continue, it is unlikely that an important effect would be seen (i.e., a low chance of rejecting the null hypothesis). Multiple analyses of the accumulating data increase the risk of a false positive (type I) error, and various statistical strategies have been developed to compensate for this inflated risk.254;333-335 Aside from informing stopping guidelines, pre-specified interim analyses can be used for other trial adaptations such as sample size re-estimation, alteration to the proportion of participants allocated to each study group, and changes to eligibility criteria.111
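To make the inflation of the type I error concrete, the short simulation below contrasts naive repeated significance testing with a conservative Haybittle-Peto-style guideline similar to the one quoted in the example above. It is an illustrative sketch only: the two-arm design, normally distributed outcome, sample size, and four equally spaced looks are assumptions chosen for the illustration, not a recommended analysis plan.

```python
# Illustrative sketch only: simulated data under the null hypothesis, not any
# trial's actual analysis plan. Assumes a two-arm trial with a normally
# distributed outcome of known unit variance and four equally spaced analyses.
import math
import numpy as np

rng = np.random.default_rng(2013)
n_sims = 20_000                    # simulated trials with no true treatment effect
n_per_arm = 200                    # final sample size per arm
looks = [0.25, 0.50, 0.75, 1.00]   # information fractions at each analysis

def two_sided_p(z):
    """Two-sided p-value for a standard normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2.0))

naive_rejections = 0   # "reject" whenever p < 0.05 at any look
peto_rejections = 0    # "reject" only if p < 0.001 at an interim look or p < 0.05 at the final look

for _ in range(n_sims):
    treatment = rng.normal(size=n_per_arm)
    control = rng.normal(size=n_per_arm)
    naive_hit = peto_hit = False
    for i, frac in enumerate(looks):
        n = int(n_per_arm * frac)
        # Two-sample z statistic for the difference in means (known unit variance)
        z = (treatment[:n].mean() - control[:n].mean()) / math.sqrt(2.0 / n)
        p = two_sided_p(z)
        naive_hit = naive_hit or (p < 0.05)
        final_look = (i == len(looks) - 1)
        peto_hit = peto_hit or (p < 0.001) or (final_look and p < 0.05)
    naive_rejections += naive_hit
    peto_rejections += peto_hit

print(f"Type I error with naive repeated testing:     {naive_rejections / n_sims:.3f}")
print(f"Type I error with Haybittle-Peto boundaries:  {peto_rejections / n_sims:.3f}")
```

Under these assumptions, naive testing at p < 0.05 at each of four looks typically yields an overall type I error of roughly 0.12-0.13, whereas the conservative interim boundary keeps it close to the nominal 0.05.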
A complete description of any interim analysis plan, even if it is only to be performed at the request of an oversight body (e.g., DMC), should be provided in the protocol – including the statistical methods, who will perform the analyses, and when they will be conducted (timing and indications). If applicable, details should also be provided about the decision criteria – statistical or other – that will be adopted to judge the interim results as part of a guideline for early stopping or other adaptations. Among 86 protocols for randomised trials with a time-to-event cancer outcome that proposed efficacy interim analyses, all stated the planned timing of the analyses, 91% specified the overall reason to be used for stopping (e.g., superiority, futility), and 94% detailed the statistical approach.327
In addition, it is important to state who will see the outcome data while the trial is ongoing, whether these individuals will remain blinded (masked) to study groups, and how the integrity of the trial implementation will be protected (e.g., maintaining blinding) when any adaptations to the trial are made. A third of protocols for industry-initiated randomised trials receiving Danish ethics approval in 1994-95 stated that the sponsor had access to accumulating trial data, which can introduce potential bias due to competing interests.10 Finally, the protocol should specify who has the ultimate authority to stop or modify the trial – e.g., the principal investigator, trial steering committee, or sponsor.
22. HARMS
Plans for collecting, assessing, reporting, and managing solicited and spontaneously reported adverse events and other unintended effects of trial interventions or trial conduct.
Example
“Secondary outcomes
. . . In our study an adverse event will be defined as any untoward medical occurrence in a subject without regard to the possibility of a causal relationship. Adverse events will be collected after the subject has provided consent and enrolled in the study. If a subject experiences an adverse event after the informed consent document is signed (entry) but the subject has not started to receive study intervention, the event will be reported as not related to study drug. All adverse events occurring after entry into the study and until hospital discharge will be recorded. An adverse event that meets the criteria for a serious adverse event (SAE) between study enrollment and hospital discharge will be reported to the local IRB [Institutional Review Board] as an SAE. If haloperidol is discontinued as a result of an adverse event, study personnel will document the circumstances and data leading to discontinuation of treatment. A serious adverse event for this study is any untoward medical occurrence that is believed by the investigators to be causally related to study-drug and results in any of the following: Life-threatening condition (that is, immediate risk of death); severe or permanent disability, prolonged hospitalization, or a significant hazard as determined by the Data Safety Monitoring Board. Serious adverse events occurring after a subject is discontinued from the study will NOT be reported unless the investigators feels that the event may have been caused by the study drug or a protocol procedure. Investigators will determine relatedness of an event to study drug based on a temporal relationship to the study drug, as well as whether the event is unexpected or unexplained given the subject’s clinical course, previous medical conditions, and concomitant medications.
. . . The study will monitor for the following movement-related adverse effects daily through patient examination and chart review: dystonia, akathisia, pseudoparkinsonism, akinesia, and neuroleptic malignant syndrome. Study personnel will use the Simpson-Angus [Reference X] and Barnes Akathisia [Reference X] scales to monitor movement-related effects.
. . .
For secondary outcomes, binary measures, e.g. mortality and complications, logistic regression will be used to test the intervention effect, controlling for covariates when appropriate . . .”266
Explanation
Evaluation of harms has a key role in monitoring the condition of participants during a trial and in enabling appropriate management of adverse events. Documentation of trial-related adverse events also informs clinical practice and the conduct of ongoing and future studies. We use the term “harms” instead of “safety” to better reflect the negative effects of interventions.300 An adverse event refers to an untoward occurrence during the trial, which may or may not be causally related to the intervention or other aspects of trial participation.300;336 This definition includes unfavourable changes in symptoms, signs, laboratory values, or health conditions. In the context of clinical trials, it can be difficult to attribute causation for a given adverse event. An adverse effect is a type of adverse event that can be attributed to the intervention.
Harms can be specified as primary or secondary outcomes (Item 12) or can be assessed as part of routine monitoring. To the extent possible, distinctions should be made between adverse events that are anticipated versus unanticipated, and solicited versus unsolicited, because expectation can influence the number and perceived severity of recorded events. For example, providing statements in the informed consent process about the possibility of a particular adverse effect, or using structured rather than open-ended questionnaires for data collection, can increase the reporting of specific events (‘priming’).269;337-339 The timeframe for recording adverse events can also affect the type of data obtained.340;341
The protocol should describe the procedures and frequency of harms data collection, the overall surveillance timeframe, any instruments to be used, and their validity and reliability, if known. Substantial discrepancies have been observed between protocol-specified plans for adverse event collection and reporting, and what is described in final publications.5 Although trials are often not powered to detect important differences in rates of uncommon adverse events, it is also important to describe plans for data analysis, including formal hypothesis testing or descriptive statistics.300;342
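As an illustration of the kind of analysis a protocol might pre-specify for harms data, the sketch below tabulates hypothetical counts of participants with solicited adverse events by study group, reporting rates and risk differences; an exact test is included only on the assumption that a formal comparison has been pre-specified. All names and numbers are invented for the example.

```python
# Illustrative sketch only: hypothetical adverse event counts, not real trial data.
# Summarises participants with at least one event of each type by study group,
# with risk differences and (if pre-specified) an exact test per event type.
from scipy.stats import fisher_exact

group_sizes = {"intervention": 120, "control": 118}   # hypothetical group sizes

# Hypothetical counts of participants with at least one event of each type
events = {
    "dystonia":  {"intervention": 4, "control": 1},
    "akathisia": {"intervention": 7, "control": 3},
    "any SAE":   {"intervention": 5, "control": 6},
}

for name, counts in events.items():
    a, c = counts["intervention"], counts["control"]
    n1, n2 = group_sizes["intervention"], group_sizes["control"]
    risk_difference = a / n1 - c / n2
    # 2x2 table: participants with vs without the event, by study group
    _, p = fisher_exact([[a, n1 - a], [c, n2 - c]])
    print(f"{name}: {a}/{n1} vs {c}/{n2}, "
          f"risk difference {risk_difference:+.3f}, Fisher p = {p:.3f}")
```

Descriptive summaries of this kind are often the main planned analysis, given that trials are rarely powered to detect differences in uncommon adverse events.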
Finally, the protocol should address the reporting of harms to relevant groups (e.g., sponsor, research ethics committee/institutional review board, data monitoring committee, regulatory agency), which is an important process that is subject to local regulation.343 Key considerations include the severity of the adverse event, determination of potential causality, and whether it represents an unexpected or anticipated event. For multicentre studies, procedures and timing should be outlined for central collection, evaluation, and reporting of pooled harms data.
23. AUDITING
Frequency and procedures for auditing trial conduct, if any, and whether the process will be independent from investigators and the sponsor.
Example
“11.4 Data Monitoring and Quality Assurance
Through the combination of our web-based, instantaneous electronic validation, the DCC’s [Data Coordinating Center] daily visual cross-validation of the data for complex errors, and regular on-site monitoring, the quality and completeness of the data will be reflective of the state of the art in clinical trials.
Both the European and US DCCs will conduct monitoring of source documents via fax at all enrolling ARUBA [A Randomized trial of Unruptured Brain Arteriovenous malformations] sites and will conduct at least one onsite monitoring visit per year over the course of the study at 100% of clinical sites (with repeat visits to sites where performance is a concern). Monitoring of European study sites will be assured by the European Coordinating Center (Paris). The primary objectives of the DCC during the on-site visits are to educate, support and solve problems. The monitors will discuss the protocol in detail and identify and clarify any areas of weakness. At the start of the trial, the monitors will conduct a tutorial on the web-based data entry system. The coordinators will practice entering data so that the monitors can confirm that the coordinators are proficient in all aspects of data entry, query response, and communication with the DCC. They will audit the overall quality and completeness of the data, examine source documents, interview investigators and coordinators, and confirm that the clinical center has complied with the requirements of the protocol. The monitors will verify that all adverse events were documented in the correct format, and are consistent with protocol definition.
The monitors will review the source documents as needed, to determine whether the data reported in the Web-based system are complete and accurate. Source documents are defined as medical charts, associated reports and records including initial hospital admission report . . .
The monitors will confirm that the regulatory binder is complete and that all associated documents are up to date. The regulatory binder should include the protocol and informed consent (all revisions), IRB [Institutional Review Board] approvals for all of the above documents, IRB correspondence, case report forms, investigator’s agreements . . .
Scheduling monitoring visits will be a function of patient enrollment, site status and other commitments. The DCC will notify the site in writing at least three weeks prior to a scheduled visit. The investigators must be available to meet with the monitors. Although notification of the visits will include the list of patients scheduled to be reviewed, the monitors reserve the right to review additional ARUBA patients.
If a problem is identified during the visit (i.e., poor communication with the DCC, inadequate or insufficient staff to conduct the study, missing study documents) the monitor will assist the site in resolving the issues. Some issues may require input from the Operations Committee, Steering Committee or one of the principal investigators.
The focus of the visit/electronic monitoring will be on source document review and confirmation of adverse events. The monitor will verify the following variables for all patients: initials, date of birth, sex, signed informed consent, eligibility criteria, date of randomization, treatment assignment, adverse events, and endpoints . . .” 313
Explanation
Auditing involves periodic independent review of core trial processes and documents. It is distinct from routine day-to-day measures to promote data quality (Items 18a and 19). Auditing is intended to preserve the integrity of the trial by independently verifying a variety of processes and prompting corrective action if necessary. The processes reviewed can relate to participant enrolment, consent, eligibility, and allocation to study groups; adherence to trial interventions and policies to protect participants, including reporting of harms (Item 22); and completeness, accuracy, and timeliness of data collection. In addition, an audit can verify adherence to applicable policies such as the International Conference on Harmonisation Good Clinical Practice and regulatory agency guidelines.160
In multicentre trials, auditing is usually considered both overall and for each recruiting centre. Audits can be done by exploring the trial dataset or by performing site visits. Audits might initially be conducted across all sites and subsequently follow a risk-based approach that focuses, for example, on sites with the highest enrolment rates, large numbers of withdrawals, or atypical (low or high) numbers of reported adverse events.
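As a simple illustration of such a risk-based approach, the sketch below screens hypothetical per-site figures and flags sites whose adverse event reporting rate is atypically low or high relative to the rate pooled across sites. The site names, counts, and the binomial screening rule are assumptions for the example; a crude screen of this kind would only prioritise sites for audit, not replace it.

```python
# Illustrative sketch only: hypothetical per-site figures, not from any trial.
# Flags sites whose adverse event reporting rate departs from the rate pooled
# across all sites, as one input into prioritising on-site audit visits.
from scipy.stats import binomtest

sites = {
    # site: (participants enrolled, participants with >= 1 reported adverse event)
    "Site A": (80, 22),
    "Site B": (75, 19),
    "Site C": (90, 5),    # unusually low reporting may reflect under-ascertainment
    "Site D": (60, 33),   # unusually high reporting may also warrant review
}

total_enrolled = sum(n for n, _ in sites.values())
total_with_event = sum(k for _, k in sites.values())
pooled_rate = total_with_event / total_enrolled

for site, (n, k) in sites.items():
    # Crude two-sided binomial screen against the pooled rate (which includes the site itself)
    p = binomtest(k, n, pooled_rate).pvalue
    flag = "  <-- prioritise for on-site audit" if p < 0.05 else ""
    print(f"{site}: {k}/{n} ({k / n:.0%}) vs pooled {pooled_rate:.0%}, p = {p:.3f}{flag}")
```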
If auditing is planned, the procedures and anticipated frequency should be outlined in the protocol, including a description of the personnel involved and their degree of independence from the trial investigators and sponsor. If procedures are further detailed elsewhere (e.g., audit manual), then the protocol should reference where the full details can be obtained.