The position of Health and Disability Commissioner was established in 1994, following the Cartwright Inquiry into the treatment of cervical cancer at National Women’s Hospital, which found research had been undertaken without ethical approval or informed participant consent. The Code of Health and Disability Services Consumers’ Rights, which includes protection from unethical research, became law in 1996. The Code has been reviewed at roughly five-yearly intervals, but has remained largely unchanged. Parts of the Code related to research have, unfortunately, not kept pace with newer concepts in healthcare evaluation and improvement—the learning health system—over the last quarter century.
Healthcare improves in two main ways. New drugs or devices undergo rigorous evaluation in traditional clinical trials. If they provide sufficient clinical benefit, at a cost a health system can afford, they enter clinical use. The research leading to these advances requires participant consent and is costly to undertake. Breakthrough drugs, such as the PCSK-9 inhibitors, which are extremely effective at lowering cholesterol levels, and devices, such as transcatheter approaches to aortic valve replacement, are often very expensive. The other way is to review or audit current practice, compare it with appropriate local or international benchmarks, and introduce changes to approach best-practice standards. This type of quality improvement initiative is not considered research and does not usually require participant consent.
There is a grey zone between these two approaches. Comparative effectiveness studies evaluate accepted clinical practice where more than one drug, device or approach is used for the same condition. Drug and device manufacturers are focused on fulfilling US Food and Drug Administration and European regulatory requirements, rather than on undertaking the type of head-to-head comparison that might usefully guide clinical practice. Because drugs and devices in accepted use should achieve roughly similar clinical outcomes, studies comparing them often need to be very large to detect any differences.
The ideal learning health system undergoes a continual cycle of rigorously assessing what works and what doesn’t, and modifying practice from there. The idea that usual practice should be constantly questioned and evaluated is not appreciated by most patients, who believe that their recommended healthcare is underpinned by rigorous science. That is not usually the case. In cardiology it is estimated that only 11% of guideline treatment recommendations are based upon adequate randomised trial data, with half based upon expert opinion alone.1
There are many examples of treatments, given to thousands of people on the basis of expert opinion, that are later found to have uncertain benefit or to cause harm when more rigorous assessment is undertaken.
The electronic capture of health information via electronic clinical records, condition and procedure databases, and national datasets is arguably the greatest advance in healthcare over the last 25 years. It affords the opportunity to extract, aggregate and analyse detailed patient and health system information, thereby identifying shortcomings and opportunities for improvement. New Zealand is ahead of many other countries, including the US and Australia, in having a unique patient identifier and national mortality, hospital discharge coding, prescribing and laboratory datasets. In cardiology, linkage of the All New Zealand Acute Coronary Syndrome Quality Improvement (ANZACS-QI) database to other national datasets has provided novel insights into many aspects of unstable coronary disease, including disparities in health outcomes by ethnicity and by region.2
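As an illustration of what such linkage looks like in practice, the sketch below joins a registry extract to a national mortality dataset on a unique patient identifier to derive a follow-up outcome. The file names, column names and one-year mortality endpoint are all invented for this example; it is not drawn from ANZACS-QI itself.

```python
import pandas as pd

# Hypothetical extracts: one row per registry admission, one row per registered death.
registry = pd.read_csv("registry_extract.csv")       # invented file and column names
mortality = pd.read_csv("national_mortality.csv")

# Link on the unique patient identifier (the NHI in New Zealand).
linked = registry.merge(mortality[["nhi", "date_of_death"]], on="nhi", how="left")

# Derive a follow-up outcome without any additional data collection.
days_to_death = (pd.to_datetime(linked["date_of_death"]) -
                 pd.to_datetime(linked["admission_date"])).dt.days
linked["died_within_1_year"] = days_to_death <= 365   # missing death date counts as alive

# Example analysis: crude one-year mortality by ethnicity and region.
print(linked.groupby(["ethnicity", "region"])["died_within_1_year"].mean())
```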
Audit of current practice is an important component of quality improvement initiatives, particularly if it can be benchmarked against relevant comparators. However, observational data, including that used for audit, is not reliable for determining the best treatment. The decision to use one treatment or strategy rather than another may be influenced by other factors, some of them unknown, which are themselves associated with favourable or unfavourable outcomes. These factors confound the association, so established treatments or approaches can only be reliably compared if they are randomly allocated. The challenge of identifying a true difference between established treatments or strategies is heightened because that difference is usually modest.
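The problem can be made concrete with a small simulation. The numbers below are entirely invented: two treatments are assumed to be equally effective, but a hypothetical high-risk group is both more likely to receive treatment A and more likely to have an adverse outcome, so the naive observational comparison makes A look markedly worse while random allocation does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# An unmeasured confounder: 30% of patients are "high risk".
high_risk = rng.random(n) < 0.30
# Outcomes depend only on risk, not on treatment (the two treatments are equally effective).
event = rng.random(n) < np.where(high_risk, 0.10, 0.02)

# Observational practice: clinicians preferentially give treatment A to high-risk patients.
gets_a_obs = rng.random(n) < np.where(high_risk, 0.8, 0.3)
print("Observational: A %.3f vs B %.3f" %
      (event[gets_a_obs].mean(), event[~gets_a_obs].mean()))   # A appears roughly twice as bad

# Random allocation: treatment is independent of risk, so event rates are similar.
gets_a_rct = rng.random(n) < 0.5
print("Randomised:    A %.3f vs B %.3f" %
      (event[gets_a_rct].mean(), event[~gets_a_rct].mean()))
```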
Another change over the last 25 years has been the development of novel research methodologies. A major limitation of the traditional individual-participant, double-blinded, randomised controlled trial is limited external validity. Enrolled patient populations typically lack proportional representation of the elderly, women, those from disadvantaged populations, those with co-morbidities and, most importantly when evaluating treatments, those at highest risk of adverse events.3 In New Zealand, Māori and Pacific peoples are very often under-represented in clinical trials.
There has been a move towards pragmatic studies, which aim to enrol a more diverse study population by simplifying trial requirements. Trials may be embedded in established patient or procedure registries, with outcomes assessed by linkage to other datasets, such as those coding for mortality or hospital discharge diagnoses. Running trials within registries also allows comparison of trial patients with those in the registry who were not enrolled, thereby providing insights into the likely generalisability of the study findings.
Comparative effectiveness studies typically compare standard or accepted treatments applied as part of routine care pathways to many or all patients with a particular condition. Cluster randomisation, in which one treatment is allocated to one patient cohort and a different treatment to another, has advantages for both trial administration and the direct relevance of the results to clinical practice. Apart from more simply enrolling larger patient numbers, thereby enabling trials to be powered for clinically relevant endpoints, cluster randomisation facilitates enrolment of a wide spectrum of patients with a particular condition, including those often excluded from trials with individual randomisation. This increases the generalisability, or external validity, of the study findings.
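As a minimal sketch, with all numbers invented, cluster randomisation simply allocates treatment units (for example, hospitals or catheterisation laboratories) rather than individual patients to a default policy; the price paid is that the sample size must be inflated by the standard design effect, 1 + (m − 1) × ICC, to allow for the correlation of outcomes within each unit.

```python
import random

random.seed(2024)

# Allocate 20 hypothetical treatment units, not individual patients, to a default policy.
units = [f"unit_{i:02d}" for i in range(1, 21)]
random.shuffle(units)
allocation = {u: ("policy A" if i < 10 else "policy B") for i, u in enumerate(units)}
print(allocation)

# Inflate the individually randomised sample size by the design effect to allow for
# within-cluster correlation. Cluster size, ICC and the baseline requirement are assumed.
mean_cluster_size = 500
icc = 0.01
design_effect = 1 + (mean_cluster_size - 1) * icc   # about 6 for these assumptions
n_individual = 8_000                                # illustrative baseline requirement
print("Cluster-randomised total needed ≈", round(n_individual * design_effect))
```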
The Code of Health and Disability Services Consumers’ Rights states that the consumer has the right to be fully informed. Under Right 6(1): “Every consumer has the right to the information that a reasonable consumer, in that consumer’s circumstances, would expect to receive, including … (d) notification of any proposed participation in teaching or research, including whether the research requires and has received ethical approval.”
This has been interpreted as precluding either individual or cluster random allocation of any aspect of patient care without the prior written, informed consent of everyone affected by that care. The only exceptions are studies comparing established treatments undertaken in settings where consent cannot be obtained without delaying time-critical treatment, such as in unconscious patients in intensive care.
In its 2012 guidelines, the National Ethics Advisory Committee (NEAC), which advises the Ministry of Health, wrote in regard to a “community intervention study (or cluster intervention study)” that “individual consent to participate … should not be required if gaining that consent is impracticable, and if the benefits from the study are sufficient and the potential harms minimal.”4 However, when the guidelines were updated in 2019 this was replaced by the more circumspect “NEAC recognises that there is a tension between ethics and the legal framework for consent, as cluster randomised trials generally are not designed to seek consent. This tension creates a legal barrier to some research that may otherwise meet ethical standards. NEAC is aware of the tension and support a review of the law in this area”.5
Other countries have considered this issue and come to a conclusion similar to that of NEAC in 2012. Following a recommendation from the Ottawa Ethics of Cluster Randomized Trials Consensus Group, the Canadian Tri-Council Policy Statement “allows research ethics boards to approve an alteration to the informed consent process, such as a waiver of consent, if the following criteria are met: (1) there is no more than minimal risk to participants, (2) the alteration to consent requirements is unlikely to affect the welfare of participants adversely, (3) it is impossible or impracticable to carry out the research properly given the research design if prior consent is needed, and (4) there is a plan to offer participants the possibility of having their data deleted from the study database”.6 Similar criteria have been used in the US.7 A waiver of consent was recently granted for Canadian sites participating in the PICS Trial, a cluster-randomised comparison of various prophylactic antibiotic regimens to prevent cardiac surgical site infection.7,8
A key consideration around randomly allocating treatment to patients without their prospective consent is whether this is acceptable to patients. Some insights can be gleaned from trials undertaken in the acute setting where randomised treatment has already been given, and consent can only be obtained for follow-up and use of data. The SAFE, CHEST and SPLIT trials compared various intravenous fluid solutions in the intensive care setting; fewer than 2% of patients or their relatives elected to opt out.9–11 In the HEAT-PPCI trial, undertaken in patients with ST elevation myocardial infarction, 0.2% of patients did not give consent for their ongoing participation.12 Although these observations are potentially subject to survivor bias, very few participants appear to be concerned about being included in comparative effectiveness studies.
Obtaining consent is costly. A typical phase 3 pivotal, 20,000 patient, randomised cardiovascular drug trial, with individual participant consent and randomisation, designed to comply with the requirements of the US Food and Drug Administration, may cost NZ$75 million, which approximates the annual research budget of the Health Research Council, the main New Zealand biomedical research funding body. The budget upper threshold for an HRC-funded trial is about $1.2 million, which leads to optimistic power calculations and limits most New Zealand studies to using surrogate rather than clinically relevant endpoints.
In contrast, the New Zealand Oxygen Therapy in Acute Coronary Syndromes (NZOTACS) trial recently compared two oxygen administration protocols in patients calling an ambulance or presenting to hospital with a suspected acute coronary syndrome. It used a cluster-randomised, cross-over design and was embedded in established registries (ambulance service and ANZACS-QI). Consent was waived given the acute setting; informed consent is not possible in patients with chest pain needing immediate treatment. The trial enrolled 40,000 patients over two years and was undertaken on a project grant of $160,000 from the National Heart Foundation.13
Embedding trials in registries can considerably reduce costs, as can cluster randomisation. However, many large, simple, clinically relevant, randomised, comparative effectiveness trials are unable to be undertaken if participant consent is required, because of the cost of obtaining consent.
The COVID-19 pandemic has challenged and disrupted previous constraints around the way research is assessed and undertaken. One example is OpenSAFELY, which used purpose-built software to analyse data from the electronic general practice medical records of 17 million English NHS patients, 5,683 of whom subsequently died from COVID-19.14 The records were examined in situ, without copies being made, and with a log kept of all interactions. The study benefitted from a UK government decree allowing wider access to health data for research purposes, and took 42 days from idea conception to publication. It has produced the most comprehensive information yet describing those who are at increased risk of contracting and dying from COVID-19. Recognition of the importance of randomised clinical trials within the NHS has allowed the rapid and rigorous evaluation of several therapies for COVID-19, contrasting with other countries where treatments of unproven benefit and possible harm have been advocated and funded.
Coronary angiography and percutaneous coronary intervention (PCI) require vascular access. Approximately 90% of New Zealand procedures are performed via the radial rather than the femoral artery, as the latter is associated with more frequent bleeding complications, including life-threatening retroperitoneal bleeding. The radial artery is of smaller calibre, and vasospasm may occur when catheters are advanced and manipulated. Once the vascular sheath is inserted, bolus injection of an intra-arterial vasodilator reduces the likelihood of spasm. The most commonly used vasodilators are verapamil, a calcium channel blocker, and nitroglycerin, a nitrate. In New Zealand, verapamil is given in roughly 60% of cases and nitroglycerin in 40%, as part of routine unit practice. The incidence of spasm in the current era is unknown but likely to be low, perhaps 2–4%. There are no adequately powered comparisons of verapamil with nitroglycerin.15 When procedural consent is obtained, no New Zealand interventional cardiologist mentions giving a vasodilator, nor which one; it is regarded as a routine part of the procedure pathway.
Is verapamil or nitroglycerin the better vasodilator to prevent radial spasm, when used as the default option in routine practice? Clinicians are free to give another medication, or none at all, if they think that is better for a particular patient. From an individual patient perspective, any differences will be small and of minimal, if any, clinical relevance (if spasm occurs, further boluses of the same or other vasodilators are given, or smaller diameter catheters used). However, there are over three million PCI procedures performed worldwide each year, so minor differences in either efficacy or cost may be important at the population level.
Because spasm is uncommon, and any difference between vasodilators will be small, a trial would require almost 10,000 patients. A trial of this size, with individual consent and randomisation, would be difficult to justify because of the high cost and administrative burden relative to the clinical importance of the findings. Trials with individual consent are particularly difficult when evaluating unit policies, applied as the default option to the treatment of patients over a period of time.
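For illustration, a calculation of that order can be reproduced with standard power-analysis tools. The spasm rates assumed below (4% with one vasodilator versus 3% with the other) are not taken from any trial; they are chosen only to show the order of magnitude involved.

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Assumed spasm rates (illustrative only): 4% vs 3%, i.e. a 25% relative reduction.
p1, p2 = 0.04, 0.03
effect_size = proportion_effectsize(p1, p2)   # Cohen's h for two proportions

# Sample size per arm for 80% power at a two-sided alpha of 0.05.
n_per_arm = NormalIndPower().solve_power(effect_size=effect_size,
                                         alpha=0.05, power=0.80,
                                         alternative="two-sided")
print(f"≈ {round(n_per_arm)} per arm, ≈ {round(2 * n_per_arm)} in total")
# For these assumptions this is roughly 5,300 per arm, a little over 10,000 patients overall.
```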
Giving verapamil for six months, deciding to switch to nitroglycerin for the next six months, and collecting data on vasospasm would not require consent. Such audits of practice are a strongly encouraged aspect of quality assurance and continuing professional development. However, adding rigour to the evaluation by randomly allocating the order of verapamil and nitroglycerin administration over that 12-month period is currently illegal in New Zealand without individual participant consent.
The Code, very appropriately, is primarily designed to protect the rights of patients. However, it fails to achieve an equally important outcome: to enable the healthcare system to deliver the best possible treatment to those patients, within available resources. This goal may be achieved by having comparative effectiveness research as an integral part of routine care and, in some limited and clearly defined circumstances, undertaken without written participant consent. Such research would require close ethical scrutiny, with independent lay and expert input into the study design and oversight. Individual autonomy around all healthcare decisions which are meaningful to the patient must be preserved, and information on the trial must be freely available and readily accessible.
Any future changes to the Code need wide public consultation on consent, research and the evidence underpinning treatment recommendations. The views of consumers, Māori, clinicians, ethicists and the legal profession all need consideration. However, those perspectives must be informed by understanding that, in most circumstances, randomised evaluations provide the only reliable way to determine the best treatment for a patient. They may also identify currently used treatments or procedures which provide little or no benefit, or cause harm, leading to their discontinuation.
Randomised, comparative effectiveness studies should be an integral part of any learning health system aimed at better healthcare delivery, and reducing waste and harm from ineffective treatments or strategies. These should be both enabled and required by those governing and funding healthcare in New Zealand.
The Health and Disability Code needs revision to include consideration of the importance of embedding a healthcare culture of continual evaluation and improvement, and the critical role randomised evaluations have in achieving these goals.16
The Health and Disability Code precludes any research involving a competent patient without the informed consent of the participant. A learning health system requires rigorous evaluation of both new and established clinical practice, including low-risk components of usual care pathways. When comparing two accepted practices, the only way to control for unknown confounders is by randomisation. In some limited circumstances, particularly when comparing groups or clusters of patients, this comparison can only practicably be undertaken without consent. The current Code impedes a learning health system and is detrimental to the health of New Zealanders. It urgently needs updating.
1. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific Evidence Underlying the ACC/AHA Clinical Practice Guidelines. JAMA. 2009; 301:831–41.
2. Grey C, Jackson R, Wells S, et al. Trends in ischaemic heart disease: patterns of hospitalisation and mortality rates differ by ethnicity (ANZACS-QI 21). N Z Med J. 2018; 131:21–31.
3. Tahhan AS, Vaduganathan M, Greene SJ, et al. Enrollment of Older Patients, Women, and Racial and Ethnic Minorities in Contemporary Heart Failure Clinical Trials: A Systematic Review. JAMA Cardiol. 2018; 3:1011–19.
4. National Ethics Advisory Committee. Ethical Guidelines for Intervention Studies: Revised edition. 2012 [cited 29 May 2020]. Available from: http://www.moh.govt.nz/notebook/nbbooks.nsf/0/A1E97A72A3AC8BC3CC257A60000B1D3B/$file/ethical-guidelines-for-intervention-studies-2012v2.pdf
5. National Ethics Advisory Committee. National Ethical Standards for Health and Disability Research and Quality Improvement. 2019 [cited 29 May 2020]. Available from: http://neac.health.govt.nz/system/files/documents/publications/national-ethical-standards-health-disability-research-quality-improvement-2019.pdf
6. Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, Social Sciences and Humanities Research Council of Canada. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. 2014 [cited 29 May 2020]. Available from: http://ethics.gc.ca/eng/documents/TCPS_2-2014_FINAL_Web.pdf
7. U.S. Department of Health & Human Services, Office for Human Research Protections. Attachment D: Informed Consent and Waiver of Consent. 2013. Available from: http://www.hhs.gov/ohrp/sachrp-committee/recommendations/2013-january-10-letter-attachment-d/index.html
8. van Oostveen RB, Romero-Palacios A, Whitlock R, et al. Prevention of Infections in Cardiac Surgery study (PICS): study protocol for a pragmatic cluster-randomized factorial crossover pilot trial. Trials. 2018; 19:688.
9. Finfer S, Bellomo R, Boyce N, et al. A comparison of albumin and saline for fluid resuscitation in the intensive care unit. N Engl J Med. 2004; 350:2247–56.
10. Myburgh JA, Finfer S, Bellomo R, et al. Hydroxyethyl starch or saline for fluid resuscitation in intensive care. N Engl J Med. 2012; 367:1901–11.
11. Young P, Bailey M, Beasley R, et al. Effect of a Buffered Crystalloid Solution vs Saline on Acute Kidney Injury Among Patients in the Intensive Care Unit: The SPLIT Randomized Clinical Trial. JAMA. 2015; 314:1701–10.
12. Shahzad A, Kemp I, Mars C, et al. Unfractionated heparin versus bivalirudin in primary percutaneous coronary intervention (HEAT-PPCI): an open-label, single centre, randomised controlled trial. Lancet. 2014; 384:1849–58.
13. Stewart R, Jones P, Dicker B, et al. The New Zealand Oxygen Therapy in Acute Coronary Syndromes trial (NZOTACS). Presented at: ESC Congress 2019 together with World Congress of Cardiology; 2019; Paris, France: European Society of Cardiology.
14. Williamson E, Walker AJ, Bhaskaran KJ, et al. OpenSAFELY: factors associated with COVID-19-related hospital death in the linked electronic health records of 17 million adult NHS patients. medRxiv. 2020:2020.05.06.20092999.
15. Curtis E, Fernandez R, Lee A. The effect of vasodilatory medications on radial artery spasm in patients undergoing transradial coronary artery procedures: a systematic review. JBI Database of Systematic Reviews and Implementation Reports. 2017; 15:1952–67.
16. Webster M, Stewart R, Aagaard N, et al. The learning health system: trial design and participant consent in comparative effectiveness research. Eur Heart J. 2019; 40:1236–40.
The position of Health and Disability Commissioner was established in 1994, following the Cartwright Inquiry into the treatment of cervical cancer at National Women’s Hospital, which found research had been undertaken without ethical approval or informed participant consent. The Code of Health and Disability Services Consumers’ Rights, which includes protection from unethical research, became law in 1996. The Code has been reviewed at roughly five-yearly intervals, but has remained largely unchanged. Parts of the Code related to research have, unfortunately, not kept pace with newer concepts in healthcare evaluation and improvement—the learning health system—over the last quarter century.
Healthcare improves in two main ways. New drugs or devices undergo rigorous evaluation in traditional clinical trials. If they provide sufficient clinical benefit, at a cost a health system can afford, they enter clinical use. The research leading to these advances requires participant consent, and is costly to undertake. Breakthrough drugs, such as PCSK-9 inhibitors which are extremely effective at lowering cholesterol levels, and devices such as transcatheter approaches to aortic valve replacement, are often very expensive. The other way is to review or audit current practice, compare it with appropriate local or international benchmarks, and introduce changes to approach best practice standards. This type of quality improvement initiative is not considered research and doesn’t usually require participant consent.
There is a grey zone between these two approaches. Comparative effectiveness studies evaluate accepted clinical practice, where more than one drug, device or approach is used for the same condition. Drug and device manufacturers are focused on fulfilling US Food and Drug Administration and European regulatory requirements, rather than undertaking the type of head-to-head comparison, which might usefully guide clinical practice. Because drugs and devices in accepted use should achieve roughly similar clinical outcomes, studies comparing them often need to be very large to detect any differences.
The ideal learning health system undergoes a continual cycle of rigorously assessing what works and what doesn’t, and modifying practice from there. The idea that usual practice should be constantly questioned and evaluated is not appreciated by most patients, who believe that their recommended healthcare is underpinned by rigorous science. That is not usually the case. In cardiology it is estimated that only 11% of guideline treatment recommendations are based upon adequate randomised trial data, with half based upon expert opinion alone.1
There are many examples of treatments given to thousands of people based on expert opinion, which are later found to have uncertain benefit or cause harm when more rigorous assessment is undertaken.
The electronic capture of health information via electronic clinical records, condition and procedure databases, and national datasets is arguably the greatest advance in healthcare over the last 25 years. It affords the opportunity to extract, aggregate and analyse detailed patient and health system information thereby identifying shortcomings and opportunities for improvement. New Zealand is ahead of many other countries, including the US and Australia, in having a unique patient identifier and national mortality, hospital discharge coding, prescribing and laboratory datasets. In cardiology, linkage of the All New Zealand Acute Coronary Syndrome - Quality Improvement (ANZACS-QI) database to other national datasets has provided novel insights in many aspects of unstable coronary disease, including disparities in health outcomes by ethnicity and by region.2
Audit of current practice is an important component of quality improvement initiatives, particularly if able to be benchmarked against relevant comparators. However, observational data, including that used for audit, is not reliable for determining the best treatment. The decision to use one treatment or strategy rather than another may be influenced by other factors, including some which are unknown, associated with favourable or unfavourable outcomes. These confound the association, so established treatments or approaches can only be reliably compared if they are randomly allocated. The challenge of identifying a true difference between established treatments or strategies is greater because that difference is usually modest.
Another change over the last 25 years has been the development of novel research methodologies. A major limitation of the traditional individual participant, double-blinded, randomised, controlled trial is limited external validity. Enrolled patient populations typically lack proportional representation of the elderly, females, those from disadvantaged populations, those with co-morbidities, and most importantly when evaluating treatments, those at highest risk for adverse events.3 In New Zealand, Māori and Pacific peoples are very often under-represented in clinical trials.
There has been a move towards pragmatic studies, aiming to enrol a more diverse study population by simplifying trial requirements. Trials may be embedded in established patient or procedure registries, and outcomes assessed by linkage to other datasets, such as those coding for mortality or hospital discharge diagnoses. Running trials within registries also allow comparison of trial patients with those not enrolled but in the registry, thereby providing insights into the likely generalisability of the study findings.
Comparative effectiveness studies typically compare standard or accepted treatments applied as part of routine care pathways to many or all patients with a particular condition. Randomly allocating treatment to one patient cohort and comparing it to a different treatment applied to another cohort using cluster randomisation has advantages with regard to both trial administration and making the results more directly relevant to clinical practice. Apart from more simply enrolling larger patient numbers thereby enabling trials to be powered for clinically relevant endpoints, cluster randomisation facilitates enrolment of a wide spectrum of patients with a particular condition, including those often excluded from trials with individual randomisation. This increases the generalisability, or external validity, of the study findings.
The Code of Health and Disability Services Consumers’ Rights states that the consumer has the right to be fully informed. Under section 6(1): Every consumer has the right to the information that a reasonable consumer, in that consumer’s circumstances, would expect to receive, including … (d) notification of any proposed participation in teaching or research, including whether the research requires and has received ethical approval.
This has been interpreted as precluding either individual or cluster random allocation of any aspect of patient care, without prior written, informed consent from anyone affected by that care. The only exceptions are studies comparing established treatments undertaken in settings where consent cannot be obtained without delaying time-critical treatment, such as in unconscious patients in intensive care.
The National Ethics Advisory Committee (NEAC) to the Ministry of Health in their 2012 Guidelines wrote in regard to a “community intervention study (or cluster intervention study)” that “individual consent to participate … should not be required if gaining that consent is impracticable, and if the benefits from the study are sufficient and the potential harms minimal.”4 However, when updated in 2019 this was replaced by the more circumspect “NEAC recognises that there is a tension between ethics and the legal framework for consent, as cluster randomised trials generally are not designed to seek consent. This tension creates a legal barrier to some research that may otherwise meet ethical standards. NEAC is aware of the tension and support a review of the law in this area”.5
Other countries have considered this issue and come to a conclusion similar to that of NEAC in 2012. Following a recommendation from the Ottawa Ethics of Cluster Randomized Trials Consensus Group, the Canadian Tri-Council Policy Statement “allows research ethics boards to approve an alteration to the informed consent process, such as a waiver of consent, if the following criteria are met: (1) there is no more than minimal risk to participants, (2) the alteration to consent requirements is unlikely to affect the welfare of participants adversely, (3) it is impossible or impracticable to carry out the research properly given the research design if prior consent is needed, and (4) there is a plan to offer participants the possibility of having their data deleted from the study database”.6 Similar criteria have been used in the US.7 A waiver of consent was recently granted for Canadian sites participating in the PICS Trial, a cluster-randomised comparison of various prophylactic antibiotic regimens to prevent cardiac surgical site infection.7,8
A key consideration around randomly allocating treatment to patients without their prospective consent is whether this is acceptable to patients. Some insights can be gleaned from trials undertaken in the acute setting where randomised treatment has already been given, and consent can only be obtained for follow-up and use of data. The SAFE, CHEST and SPLIT trials compared various intravenous fluid solutions in the intensive care setting; fewer than 2% of patients or their relatives elected to opt out.9–11 In the HEAT-PPCI trial, undertaken in patients with ST elevation myocardial infarction, 0.2% of patients did not give consent for their ongoing participation.12 Although these observations are potentially subject to survivor bias, very few participants appear to be concerned about being included in comparative effectiveness studies.
Obtaining consent is costly. A typical phase 3 pivotal, 20,000 patient, randomised cardiovascular drug trial, with individual participant consent and randomisation, designed to comply with the requirements of the US Food and Drug Administration, may cost NZ$75 million, which approximates the annual research budget of the Health Research Council, the main New Zealand biomedical research funding body. The budget upper threshold for an HRC-funded trial is about $1.2 million, which leads to optimistic power calculations and limits most New Zealand studies to using surrogate rather than clinically relevant endpoints.
In contrast, the New Zealand Oxygen Trial recently compared two oxygen administration protocols in patients calling an ambulance or presenting to hospital with a suspected acute coronary syndrome. It used a cluster-randomised, cross-over design and was embedded in established registries (ambulance service and ANZACS-QI). Consent was waived given the acute setting; informed consent is not possible in patients with chest pain needing immediate treatment. The trial enrolled 40,000 patients over two years, and was undertaken on a project grant of $160,000 from the National Heart Foundation.13
Embedding trials in registries can considerably reduce costs, as can cluster randomisation. However, many large, simple, clinically relevant, randomised, comparative effectiveness trials are unable to be undertaken if participant consent is required, because of the cost of obtaining consent.
The COVID-19 pandemic has challenged and disrupted previous constraints around the way research is assessed and undertaken. One example is OpenSAFELY, which used purpose-built software to analyse data from the electronic general practice medical records of 17 million English NHS patients, 5,683 of whom subsequently died from COVID.14 The records were examined in situ, without copies being made, and with a log kept of all interactions. The study benefitted from a UK government decree allowing wider access to health data for research purposes, and took 42 days from idea conception to publication. It has produced the most comprehensive information yet describing those who are at increased risk of contracting and dying from COVID. Recognition of the importance of randomised clinical trials within the NHS has allowed for the rapid and rigorous evaluation of several therapies for COVID, contrasting with other countries where treatments of unproven benefit and possible harm have been advocated and funded.
Coronary angiography and percutaneous coronary intervention (PCI) require vascular access. Approximately 90% of New Zealand procedures are via the radial rather than femoral artery as the latter is associated with more frequent bleeding complications, including life-threatening retroperitoneal bleeding. The radial artery is of smaller calibre, and vasospasm may occur with advancing and manipulating catheters. Once the vascular sheath is inserted, bolus injection of an intra-arterial vasodilator reduces the likelihood of spasm. The most commonly used vasodilators are verapamil, a calcium channel blocker, and nitroglycerin, a nitrate. In New Zealand, roughly 60% give verapamil and 40% nitroglycerin, as part of routine unit practice. The incidence of spasm in the current era is unknown but likely to be low, perhaps 2–4%. There are no adequately powered comparisons of verapamil with nitroglycerin.15 When procedure consent is obtained, no New Zealand interventional cardiologist mentions giving a vasodilator, nor which one; it is regarded as a routine part of the procedure pathway.
Is verapamil or nitroglycerin the better vasodilator to prevent radial spasm, when used as the default option in routine practice? Clinicians are free to give another medication, or none at all, if they think that is better for a particular patient. From an individual patient perspective, any differences will be small and of minimal, if any, clinical relevance (if spasm occurs, further boluses of the same or other vasodilators are given, or smaller diameter catheters used). However, there are over three million PCI procedures performed worldwide each year, so minor differences in either efficacy or cost may be important at the population level.
Because spasm is uncommon, and any difference between vasodilators will be small, a trial would require almost 10,000 patients. A trial of this size, with individual consent and randomisation, would be difficult to justify because of the high cost and administrative burden relative to the clinical importance of the findings. Trials with individual consent are particularly difficult when evaluating unit policies, applied as the default option to the treatment of patients over a period of time.
Giving verapamil for six months, deciding to switch to nitroglycerin for the next six months, and collecting data on vasospasm would not require consent. Such audits of practice are a strongly encouraged aspect of quality assurance and continuing professional development. However, adding rigour to the evaluation by randomly allocating the order of verapamil and nitroglycerin administration over that 12-month period is currently illegal in New Zealand without individual participant consent.
The Code, very appropriately, is primarily designed to protect the rights of patients. However, it fails to achieve an equally important outcome: to enable the healthcare system to deliver the best possible treatment to those patients, within available resources. This goal may be achieved by having comparative effectiveness research as an integral part of routine care and, in some limited and clearly defined circumstances, undertaken without written participant consent. Such research would require close ethical scrutiny, with independent lay and expert input into the study design and oversight. Individual autonomy around all healthcare decisions which are meaningful to the patient must be preserved, and information on the trial must be freely available and readily accessible.
Any future changes to the Code need wide public consultation on consent, research and the evidence underpinning treatment recommendations. The views of consumers, Māori, clinicians, ethicists and the legal profession all need consideration. However, those perspectives must be informed by understanding that, in most circumstances, randomised evaluations provide the only reliable way to determine the best treatment for a patient. They may also identify currently used treatments or procedures which provide little or no benefit, or cause harm, leading to their discontinuation.
Randomised, comparative effectiveness studies should be an integral part of any learning health system aimed at better healthcare delivery, and reducing waste and harm from ineffective treatments or strategies. These should be both enabled and required by those governing and funding healthcare in New Zealand.
The Health and Disability Code needs revision to include consideration of the importance of embedding a healthcare culture of continual evaluation and improvement, and the critical role randomised evaluations have in achieving these goals.16
The Health and Disability Code precludes any research involving a competent patient without the informed consent of the participant. A learning health system requires rigorous evaluation of both new and established clinical practice, including low-risk components of usual care pathways. When comparing two accepted practices, the only way to control for unknown confounders is by randomisation. In some limited circumstances, particularly when comparing groups or clusters of patients, this comparison can only practicably be undertaken without consent. The current Code impedes a learning health system and is detrimental to the health of New Zealanders. It urgently needs updating.
1. Tricoci P AJ, Kramer JM, Califf MR, Smith SC Jr. Scientific Evidence Underlying the ACC/AHA Clinical Practice Guidelines. JAMA. 2009; 301:831–41.
2. Grey C, Jackson R, Wells S, et al. Trends in ischaemic heart disease: patterns of hospitalisation and mortality rates differ by ethnicity (ANZACS-QI 21). The New Zealand Medical Journal. 2018; 131:21–31.
3. Tahhan AS, Vaduganathan M, Greene SJ, et al. Enrollment of Older Patients, Women, and Racial and Ethnic Minorities in Contemporary Heart Failure Clinical Trials: A Systematic Review. JAMA cardiology. 2018; 3:1011–19.
4. National Ethics Advisory Committee. Ethical Guidelines for Intervention Studies: Revised edition2012 29 May 2020. Available from: http://www.moh.govt.nz/notebook/nbbooks.nsf/0/A1E97A72A3AC8BC3CC257A60000B1D3B/$file/ethical-guidelines-for-intervention-studies-2012v2.pdf
5. National Ethics Advisory Committee. National Ethical Standards for Health and Disability Research and Quality Improvement2019 29 May 2020. Available from: http://neac.health.govt.nz/system/files/documents/publications/national-ethical-standards-health-disability-research-quality-improvement-2019.pdf
6. Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, Social Sciences and Humanities Research Council of Canada. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans2014 29 May 2020. Available from: http://ethics.gc.ca/eng/documents/TCPS_2-2014_FINAL_Web.pdf
7. U.S. Department of Health & Human Services, Office for Human Research Protection. Attachment D: Informed Consent and Waiver of Consent2013. Available from: http://www.hhs.gov/ohrp/sachrp-committee/recommendations/2013-january-10-letter-attachment-d/index.html
8. van Oostveen RB, Romero-Palacios A, Whitlock R, et al. Prevention of Infections in Cardiac Surgery study (PICS): study protocol for a pragmatic cluster-randomized factorial crossover pilot trial. Trials. 2018; 19:688.
9. Finfer S, Bellomo R, Boyce N, et al. A comparison of albumin and saline for fluid resuscitation in the intensive care unit. N Engl J Med. 2004; 350:2247–56.
10. Myburgh JA, Finfer S, Bellomo R, et al. Hydroxyethyl starch or saline for fluid resuscitation in intensive care. N Engl J Med. 2012; 367:1901–11.
11. Young P, Bailey M, Beasley R, et al. Effect of a Buffered Crystalloid Solution vs Saline on Acute Kidney Injury Among Patients in the Intensive Care Unit: The SPLIT Randomized Clinical Trial. JAMA. 2015; 314:1701–10.
12. Shahzad A, Kemp I, Mars C, et al. Unfractionated heparin versus bivalirudin in primary percutaneous coronary intervention (HEAT-PPCI): an open-label, single centre, randomised controlled trial. Lancet. 2014; 384:1849–58.
13. Stewart R, Jones P, Dicker B, et al. The New Zealand Oxygen Therapy in Acute Coronary Syndromes trial (NZOTACS). ESC Congress 2019 together with World Congress of Cardiology 2019; Paris, France: European Society of Cardiology.
14. Williamson E, Walker AJ, Bhaskaran KJ, et al. OpenSAFELY: factors associated with COVID-19-related hospital death in the linked electronic health records of 17 million adult NHS patients. medRxiv. 2020:2020.05.06.20092999.
15. Curtis E, Fernandez R, Lee A. The effect of vasodilatory medications on radial artery spasm in patients undergoing transradial coronary artery procedures: a systematic review. JBI database of systematic reviews and implementation reports. 2017; 15:1952–67.
16. Webster M, Stewart R, Aagaard N, et al. The learning health system: trial design and participant consent in comparative effectiveness research. Eur Heart J. 2019; 40:1236–40.
The position of Health and Disability Commissioner was established in 1994, following the Cartwright Inquiry into the treatment of cervical cancer at National Women’s Hospital, which found research had been undertaken without ethical approval or informed participant consent. The Code of Health and Disability Services Consumers’ Rights, which includes protection from unethical research, became law in 1996. The Code has been reviewed at roughly five-yearly intervals, but has remained largely unchanged. Parts of the Code related to research have, unfortunately, not kept pace with newer concepts in healthcare evaluation and improvement—the learning health system—over the last quarter century.
Healthcare improves in two main ways. New drugs or devices undergo rigorous evaluation in traditional clinical trials. If they provide sufficient clinical benefit, at a cost a health system can afford, they enter clinical use. The research leading to these advances requires participant consent, and is costly to undertake. Breakthrough drugs, such as PCSK-9 inhibitors which are extremely effective at lowering cholesterol levels, and devices such as transcatheter approaches to aortic valve replacement, are often very expensive. The other way is to review or audit current practice, compare it with appropriate local or international benchmarks, and introduce changes to approach best practice standards. This type of quality improvement initiative is not considered research and doesn’t usually require participant consent.
There is a grey zone between these two approaches. Comparative effectiveness studies evaluate accepted clinical practice, where more than one drug, device or approach is used for the same condition. Drug and device manufacturers are focused on fulfilling US Food and Drug Administration and European regulatory requirements, rather than undertaking the type of head-to-head comparison, which might usefully guide clinical practice. Because drugs and devices in accepted use should achieve roughly similar clinical outcomes, studies comparing them often need to be very large to detect any differences.
The ideal learning health system undergoes a continual cycle of rigorously assessing what works and what doesn’t, and modifying practice from there. The idea that usual practice should be constantly questioned and evaluated is not appreciated by most patients, who believe that their recommended healthcare is underpinned by rigorous science. That is not usually the case. In cardiology it is estimated that only 11% of guideline treatment recommendations are based upon adequate randomised trial data, with half based upon expert opinion alone.1
There are many examples of treatments given to thousands of people based on expert opinion, which are later found to have uncertain benefit or cause harm when more rigorous assessment is undertaken.
The electronic capture of health information via electronic clinical records, condition and procedure databases, and national datasets is arguably the greatest advance in healthcare over the last 25 years. It affords the opportunity to extract, aggregate and analyse detailed patient and health system information thereby identifying shortcomings and opportunities for improvement. New Zealand is ahead of many other countries, including the US and Australia, in having a unique patient identifier and national mortality, hospital discharge coding, prescribing and laboratory datasets. In cardiology, linkage of the All New Zealand Acute Coronary Syndrome - Quality Improvement (ANZACS-QI) database to other national datasets has provided novel insights in many aspects of unstable coronary disease, including disparities in health outcomes by ethnicity and by region.2
Audit of current practice is an important component of quality improvement initiatives, particularly if able to be benchmarked against relevant comparators. However, observational data, including that used for audit, is not reliable for determining the best treatment. The decision to use one treatment or strategy rather than another may be influenced by other factors, including some which are unknown, associated with favourable or unfavourable outcomes. These confound the association, so established treatments or approaches can only be reliably compared if they are randomly allocated. The challenge of identifying a true difference between established treatments or strategies is greater because that difference is usually modest.
Another change over the last 25 years has been the development of novel research methodologies. A major limitation of the traditional individual participant, double-blinded, randomised, controlled trial is limited external validity. Enrolled patient populations typically lack proportional representation of the elderly, females, those from disadvantaged populations, those with co-morbidities, and most importantly when evaluating treatments, those at highest risk for adverse events.3 In New Zealand, Māori and Pacific peoples are very often under-represented in clinical trials.
There has been a move towards pragmatic studies, aiming to enrol a more diverse study population by simplifying trial requirements. Trials may be embedded in established patient or procedure registries, and outcomes assessed by linkage to other datasets, such as those coding for mortality or hospital discharge diagnoses. Running trials within registries also allow comparison of trial patients with those not enrolled but in the registry, thereby providing insights into the likely generalisability of the study findings.
Comparative effectiveness studies typically compare standard or accepted treatments applied as part of routine care pathways to many or all patients with a particular condition. Randomly allocating treatment to one patient cohort and comparing it to a different treatment applied to another cohort using cluster randomisation has advantages with regard to both trial administration and making the results more directly relevant to clinical practice. Apart from more simply enrolling larger patient numbers thereby enabling trials to be powered for clinically relevant endpoints, cluster randomisation facilitates enrolment of a wide spectrum of patients with a particular condition, including those often excluded from trials with individual randomisation. This increases the generalisability, or external validity, of the study findings.
The Code of Health and Disability Services Consumers’ Rights states that the consumer has the right to be fully informed. Under section 6(1): Every consumer has the right to the information that a reasonable consumer, in that consumer’s circumstances, would expect to receive, including … (d) notification of any proposed participation in teaching or research, including whether the research requires and has received ethical approval.
This has been interpreted as precluding either individual or cluster random allocation of any aspect of patient care, without prior written, informed consent from anyone affected by that care. The only exceptions are studies comparing established treatments undertaken in settings where consent cannot be obtained without delaying time-critical treatment, such as in unconscious patients in intensive care.
The National Ethics Advisory Committee (NEAC) to the Ministry of Health in their 2012 Guidelines wrote in regard to a “community intervention study (or cluster intervention study)” that “individual consent to participate … should not be required if gaining that consent is impracticable, and if the benefits from the study are sufficient and the potential harms minimal.”4 However, when updated in 2019 this was replaced by the more circumspect “NEAC recognises that there is a tension between ethics and the legal framework for consent, as cluster randomised trials generally are not designed to seek consent. This tension creates a legal barrier to some research that may otherwise meet ethical standards. NEAC is aware of the tension and support a review of the law in this area”.5
Other countries have considered this issue and come to a conclusion similar to that of NEAC in 2012. Following a recommendation from the Ottawa Ethics of Cluster Randomized Trials Consensus Group, the Canadian Tri-Council Policy Statement “allows research ethics boards to approve an alteration to the informed consent process, such as a waiver of consent, if the following criteria are met: (1) there is no more than minimal risk to participants, (2) the alteration to consent requirements is unlikely to affect the welfare of participants adversely, (3) it is impossible or impracticable to carry out the research properly given the research design if prior consent is needed, and (4) there is a plan to offer participants the possibility of having their data deleted from the study database”.6 Similar criteria have been used in the US.7 A waiver of consent was recently granted for Canadian sites participating in the PICS Trial, a cluster-randomised comparison of various prophylactic antibiotic regimens to prevent cardiac surgical site infection.7,8
A key consideration around randomly allocating treatment to patients without their prospective consent is whether this is acceptable to patients. Some insights can be gleaned from trials undertaken in the acute setting where randomised treatment has already been given, and consent can only be obtained for follow-up and use of data. The SAFE, CHEST and SPLIT trials compared various intravenous fluid solutions in the intensive care setting; fewer than 2% of patients or their relatives elected to opt out.9–11 In the HEAT-PPCI trial, undertaken in patients with ST elevation myocardial infarction, 0.2% of patients did not give consent for their ongoing participation.12 Although these observations are potentially subject to survivor bias, very few participants appear to be concerned about being included in comparative effectiveness studies.
Obtaining consent is costly. A typical phase 3 pivotal, 20,000 patient, randomised cardiovascular drug trial, with individual participant consent and randomisation, designed to comply with the requirements of the US Food and Drug Administration, may cost NZ$75 million, which approximates the annual research budget of the Health Research Council, the main New Zealand biomedical research funding body. The budget upper threshold for an HRC-funded trial is about $1.2 million, which leads to optimistic power calculations and limits most New Zealand studies to using surrogate rather than clinically relevant endpoints.
In contrast, the New Zealand Oxygen Trial recently compared two oxygen administration protocols in patients calling an ambulance or presenting to hospital with a suspected acute coronary syndrome. It used a cluster-randomised, cross-over design and was embedded in established registries (ambulance service and ANZACS-QI). Consent was waived given the acute setting; informed consent is not possible in patients with chest pain needing immediate treatment. The trial enrolled 40,000 patients over two years, and was undertaken on a project grant of $160,000 from the National Heart Foundation.13
Embedding trials in registries can considerably reduce costs, as can cluster randomisation. However, many large, simple, clinically relevant, randomised, comparative effectiveness trials are unable to be undertaken if participant consent is required, because of the cost of obtaining consent.
The COVID-19 pandemic has challenged and disrupted previous constraints around the way research is assessed and undertaken. One example is OpenSAFELY, which used purpose-built software to analyse data from the electronic general practice medical records of 17 million English NHS patients, 5,683 of whom subsequently died from COVID.14 The records were examined in situ, without copies being made, and with a log kept of all interactions. The study benefitted from a UK government decree allowing wider access to health data for research purposes, and took 42 days from idea conception to publication. It has produced the most comprehensive information yet describing those who are at increased risk of contracting and dying from COVID. Recognition of the importance of randomised clinical trials within the NHS has allowed for the rapid and rigorous evaluation of several therapies for COVID, contrasting with other countries where treatments of unproven benefit and possible harm have been advocated and funded.
Coronary angiography and percutaneous coronary intervention (PCI) require vascular access. Approximately 90% of New Zealand procedures are via the radial rather than femoral artery as the latter is associated with more frequent bleeding complications, including life-threatening retroperitoneal bleeding. The radial artery is of smaller calibre, and vasospasm may occur with advancing and manipulating catheters. Once the vascular sheath is inserted, bolus injection of an intra-arterial vasodilator reduces the likelihood of spasm. The most commonly used vasodilators are verapamil, a calcium channel blocker, and nitroglycerin, a nitrate. In New Zealand, roughly 60% give verapamil and 40% nitroglycerin, as part of routine unit practice. The incidence of spasm in the current era is unknown but likely to be low, perhaps 2–4%. There are no adequately powered comparisons of verapamil with nitroglycerin.15 When procedure consent is obtained, no New Zealand interventional cardiologist mentions giving a vasodilator, nor which one; it is regarded as a routine part of the procedure pathway.
Is verapamil or nitroglycerin the better vasodilator to prevent radial spasm, when used as the default option in routine practice? Clinicians are free to give another medication, or none at all, if they think that is better for a particular patient. From an individual patient perspective, any differences will be small and of minimal, if any, clinical relevance (if spasm occurs, further boluses of the same or other vasodilators are given, or smaller diameter catheters used). However, there are over three million PCI procedures performed worldwide each year, so minor differences in either efficacy or cost may be important at the population level.
Because spasm is uncommon, and any difference between vasodilators will be small, a trial would require almost 10,000 patients. A trial of this size, with individual consent and randomisation, would be difficult to justify given the high cost and administrative burden relative to the clinical importance of the findings. Trials requiring individual consent are particularly difficult when evaluating unit policies that are applied, as the default option, to all patients treated over a period of time.
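To make the scale of such a trial concrete, the following minimal sketch estimates the sample size using the standard normal approximation for comparing two proportions, together with the population-level arithmetic referred to above. The assumed spasm rates (4% versus 3%), power (80%) and two-sided significance level (0.05) are illustrative choices within the 2–4% range quoted earlier, not figures taken from any trial; other plausible assumptions give totals of the same order as the roughly 10,000 patients mentioned above.

```python
# Illustrative sample-size estimate for a two-arm comparison of radial spasm rates.
# The assumed rates, power and alpha are for illustration only.
from statistics import NormalDist


def two_proportion_sample_size(p1: float, p2: float,
                               alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate patients per arm to detect p1 vs p2 (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # power requirement
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2


n_arm = round(two_proportion_sample_size(0.04, 0.03))
print(f"~{n_arm} patients per arm, ~{2 * n_arm} in total")
# ~5298 per arm, ~10596 in total: the same order as the ~10,000 quoted above.

# Population-level arithmetic: small differences matter at scale.
procedures_per_year = 3_000_000          # >3 million PCIs worldwide each year (as stated above)
print(f"A 1 percentage-point absolute difference ≈ "
      f"{int(procedures_per_year * 0.01):,} spasm episodes per year")
```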
Giving verapamil for six months, deciding to switch to nitroglycerin for the next six months, and collecting data on vasospasm would not require consent. Such audits of practice are a strongly encouraged aspect of quality assurance and continuing professional development. However, adding rigour to the evaluation by randomly allocating the order of verapamil and nitroglycerin administration over that 12-month period is currently illegal in New Zealand without individual participant consent.
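The additional rigour amounts to a single step: letting chance, rather than each unit, decide which vasodilator is used in which six-month period. The sketch below illustrates that allocation for a hypothetical set of catheter laboratories; the unit names and seed are invented for illustration and do not describe any actual study.

```python
# Hypothetical sketch of the randomisation step that would turn the 12-month audit
# into a cluster-randomised crossover evaluation: each catheter lab (cluster) is
# randomly allocated the order of its two six-month treatment periods.
import random

units = ["Unit A", "Unit B", "Unit C", "Unit D"]  # invented placeholder clusters
rng = random.Random(2020)                          # fixed seed for a reproducible schedule

schedule = {}
for unit in units:
    first = rng.choice(["verapamil", "nitroglycerin"])
    second = "nitroglycerin" if first == "verapamil" else "verapamil"
    schedule[unit] = {"months 1-6": first, "months 7-12": second}

for unit, periods in schedule.items():
    print(unit, periods)
```

Nothing else about the care pathway or the data collection differs from the audit described above.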
The Code, very appropriately, is primarily designed to protect the rights of patients. However, it fails to achieve an equally important outcome: enabling the healthcare system to deliver the best possible treatment to those patients within available resources. This could be achieved by making comparative effectiveness research an integral part of routine care and, in some limited and clearly defined circumstances, allowing it to be undertaken without written participant consent. Such research would require close ethical scrutiny, with independent lay and expert input into study design and oversight. Individual autonomy over all healthcare decisions that are meaningful to the patient must be preserved, and information on any such trial must be freely available and readily accessible.
Any future changes to the Code need wide public consultation on consent, research and the evidence underpinning treatment recommendations. The views of consumers, Māori, clinicians, ethicists and the legal profession all need consideration. However, those perspectives must be informed by the understanding that, in most circumstances, randomised evaluation provides the only reliable way to determine the best treatment for a patient. It may also identify currently used treatments or procedures that provide little or no benefit, or cause harm, leading to their discontinuation.
Randomised comparative effectiveness studies should be an integral part of any learning health system aimed at improving healthcare delivery and reducing the waste and harm caused by ineffective treatments or strategies. Such studies should be both enabled and required by those who govern and fund healthcare in New Zealand.
The Health and Disability Code needs revision to recognise the importance of embedding a culture of continual evaluation and improvement in healthcare, and the critical role that randomised evaluation plays in achieving these goals.16
The Health and Disability Code precludes any research involving a competent patient without that patient's informed consent. A learning health system requires rigorous evaluation of both new and established clinical practice, including low-risk components of usual care pathways. When comparing two accepted practices, the only way to control for unknown confounders is randomisation. In some limited circumstances, particularly when comparing groups or clusters of patients, such comparisons can only practicably be undertaken without consent. The current Code impedes a learning health system and is detrimental to the health of New Zealanders. It urgently needs updating.
1. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific Evidence Underlying the ACC/AHA Clinical Practice Guidelines. JAMA. 2009; 301:831–41.
2. Grey C, Jackson R, Wells S, et al. Trends in ischaemic heart disease: patterns of hospitalisation and mortality rates differ by ethnicity (ANZACS-QI 21). The New Zealand Medical Journal. 2018; 131:21–31.
3. Tahhan AS, Vaduganathan M, Greene SJ, et al. Enrollment of Older Patients, Women, and Racial and Ethnic Minorities in Contemporary Heart Failure Clinical Trials: A Systematic Review. JAMA Cardiology. 2018; 3:1011–19.
4. National Ethics Advisory Committee. Ethical Guidelines for Intervention Studies: Revised edition. 2012 [cited 2020 May 29]. Available from: http://www.moh.govt.nz/notebook/nbbooks.nsf/0/A1E97A72A3AC8BC3CC257A60000B1D3B/$file/ethical-guidelines-for-intervention-studies-2012v2.pdf
5. National Ethics Advisory Committee. National Ethical Standards for Health and Disability Research and Quality Improvement. 2019 [cited 2020 May 29]. Available from: http://neac.health.govt.nz/system/files/documents/publications/national-ethical-standards-health-disability-research-quality-improvement-2019.pdf
6. Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, Social Sciences and Humanities Research Council of Canada. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. 2014 [cited 2020 May 29]. Available from: http://ethics.gc.ca/eng/documents/TCPS_2-2014_FINAL_Web.pdf
7. U.S. Department of Health & Human Services, Office for Human Research Protections. Attachment D: Informed Consent and Waiver of Consent. 2013. Available from: http://www.hhs.gov/ohrp/sachrp-committee/recommendations/2013-january-10-letter-attachment-d/index.html
8. van Oostveen RB, Romero-Palacios A, Whitlock R, et al. Prevention of Infections in Cardiac Surgery study (PICS): study protocol for a pragmatic cluster-randomized factorial crossover pilot trial. Trials. 2018; 19:688.
9. Finfer S, Bellomo R, Boyce N, et al. A comparison of albumin and saline for fluid resuscitation in the intensive care unit. N Engl J Med. 2004; 350:2247–56.
10. Myburgh JA, Finfer S, Bellomo R, et al. Hydroxyethyl starch or saline for fluid resuscitation in intensive care. N Engl J Med. 2012; 367:1901–11.
11. Young P, Bailey M, Beasley R, et al. Effect of a Buffered Crystalloid Solution vs Saline on Acute Kidney Injury Among Patients in the Intensive Care Unit: The SPLIT Randomized Clinical Trial. JAMA. 2015; 314:1701–10.
12. Shahzad A, Kemp I, Mars C, et al. Unfractionated heparin versus bivalirudin in primary percutaneous coronary intervention (HEAT-PPCI): an open-label, single centre, randomised controlled trial. Lancet. 2014; 384:1849–58.
13. Stewart R, Jones P, Dicker B, et al. The New Zealand Oxygen Therapy in Acute Coronary Syndromes trial (NZOTACS). ESC Congress 2019 together with World Congress of Cardiology 2019; Paris, France: European Society of Cardiology.
14. Williamson E, Walker AJ, Bhaskaran KJ, et al. OpenSAFELY: factors associated with COVID-19-related hospital death in the linked electronic health records of 17 million adult NHS patients. medRxiv. 2020:2020.05.06.20092999.
15. Curtis E, Fernandez R, Lee A. The effect of vasodilatory medications on radial artery spasm in patients undergoing transradial coronary artery procedures: a systematic review. JBI Database of Systematic Reviews and Implementation Reports. 2017; 15:1952–67.
16. Webster M, Stewart R, Aagaard N, et al. The learning health system: trial design and participant consent in comparative effectiveness research. Eur Heart J. 2019; 40:1236–40.