Options Appraisal and Guidelines
Response of the ADSS to Public Consultation
This response falls into two parts:
- A commentary on the Options Appraisal and Guidelines Paper, and its limits
- Comments on the Models proposed and some of the questions associated with them.
The Options Appraisal and Guidelines Paper should, as the author states on p.3, be read in conjunction with the Research Governance Implementation Plan, May 2004. However, the Paper concentrates almost entirely on ethics appraisal. Sections 8.7-8.8 discuss reviews of research methods, but discount their relevance to the process of ethics review. This seems at odds with much practical experience, in which risk assessments also need to review methods ("science"), researcher competence and so forth. The Paper also does not refer to the requirement on care organisations to confirm that there has been independent expert review of projects in respect of both their science and their ethics (Implementation Plan, 5.4.2). The Paper is thus based on only one element, albeit an important one, which in turn distances it from the large volume of in-house research currently not appraised independently, whether for ethics or science.
For ADSS, a key policy issue is the extent to which such local and limited research should be scrutinised, so that it complies effectively with ethical and methodological norms. The related issue is how stringent such scrutiny should be. At one end of the spectrum it might minimally review ethical issues, essentially only seeking to safeguard participants from major risks of abuse or "adverse incidents"; at the other, wide-ranging reviews of ethics, fitness for purpose, methodology and processes of dissemination would form part of a wider prescriptive framework. The Research Governance Framework (RGF) model tends to the latter, but the Consultation Paper, referring back to the 2002 survey, rightly points to a current absence of research governance in CSSRs, as well as to its diversity (3.39).
The Consultation Paper cites claims made about the current operation of COREC-supported systems which have not (yet) been backed by evidence. Nor have the unintended consequences of these systems been investigated. Some are mentioned in recent BMJ articles (July/August 2004); others are anecdotal, such as relabelling research as "audit" in order to evade research governance requirements in the NHS. The recent announcement of a review of NHS ethical procedures, under the auspices of Lord Warner, is welcome. However, it is a matter for regret that social care perspectives do not appear to be engaged, especially as the RGF is explicitly for Health and Social Care.
Descriptions of current practice in the Options Appraisal and Guidelines Paper, which may or may not reflect reality, are relatively static: they imply that ethics review is completed once approval is given, rather than recognising that ethical issues also arise during the course of a project, despite the nod in 2.6 to a need for continuing advice to researchers.
1.19 cites the 2002 case studies research and survey of CSSRs. It omits the qualification "external", which applies to the (low) statistic of 4% of projects being subject to ethics review. The key issue is the independence of ethics review vis-à-vis researchers and their organisations. If independence is to be expected, the personnel involved in ethics review should include a significant proportion from outside the immediate organisation. In-house ethics review may well be swift, but it may also be superficial.
1.23 seems to misrepresent the position of student researchers not employed by the CSSR in which they carry out research: their projects are covered by Stage 1 implementation. (Stage 2 will extend coverage to all student projects.)
2.1 The definition of the purpose of ethics review is narrower than the wider purpose, which also involves methods, in the CSSR context, as mentioned above. This is the more important because recent research on university ethics committees indicates gaps and inconsistencies precisely in the field of student research, which is applicable within CSSRs (Tinker & Coomber: University Research Ethics Committees, King's College London, July 2004).
2.4 Clinical trials have limited relevance in CSSR research, so the European Directive is not central, whether or not its timetabling requirements are justifiable.
2.8 "Suspicion" is a slightly emotive way of describing the reservations expressed by those in the social care world about the NHS system and how it operates. There are, of course, some instances of joint NHS-social care ethics review arrangements, though these, like others, have not been analysed and reported in publicly accessible media. The reservations seem to concern rigidity, timeliness, and a lack of appreciation of social science methodologies and social care environments. There are also apparent strengths, in terms of the power to ensure overt compliance, and common national documentation and processes.
Models 1 and 2
Both these models are derived from centralised NHS structures, and from processes which are extraneous to the wide variety of social research carried out in CSSRs. Historically such NHS structures and processes have been antipathetic to certain forms of research, and to the significant involvement of lay NHS users. They have changed significantly in the past few years, under pressure of "adverse incident" fears, driven by national priorities and backed by funding.
They are criticised as opaque, and as tending to cronyism in membership. Despite the operation of the Central Office, varied and idiosyncratic practice continues to be reported. Little rigorous analysis has been undertaken of the content of MREC and LREC activity: their documentation is treated as confidential, and the impact on them of the Freedom of Information Act has yet to be seen. Whether large-scale drug trials are numerous or infrequent, it is clearly in the interests of major research organisations for there to be a single authoritative ethics gateway process within the NHS. The same may apply, at lower levels of frequency, to major social care research organisations, and indeed to Central Government carrying out national research on the performance of CSSRs.
There is no discussion here of the relative strength of methods review in the NHS, as compared with the ethics review carried out under the NHS processes on which these models are also based.
Following Model 1 in social care would be likely to assure the protective function of ethics appraisal at a national level for multi-site research; but the large bulk of social care research is not done at multiple sites. So the limited financial resources available from central government to encourage good practice and eliminate bad practice would need to be concentrated at regional or more local levels.
The other relevant difference from the NHS is the high proportion of own-account, in-house research undertaken by and within CSSRs and their contracted voluntary or private organisations. It is this area that receives least external scrutiny at present, and which therefore has most potential for inconsistent or bad practice.
A parallel with inspection arrangements suggests trends in the opposite direction to Model 1. Inspection of units or services within CSSRs has moved from CSSRs to the Social Services Inspectorate, to the Commission on Social Care Standards, to the Commission for Social Care Inspection, all in the space of a few years, with further change expected.
Q.1 indicates a need for the same kind of development run-in time that a new organisation, such as SCIE, has needed. As a free-standing system, Model 1 would need to demonstrate its value, including in those areas where social care research overlaps with the NHS (and therefore with COREC and its responsibilities). This would be a stimulus to a high profile of activity, and hence to the need for a central office: SCIE or ADSS might be first suggestions for its location. The implications of the Children Act 2004 would, as mentioned later, make coherence more difficult. The culture of both SCIE and ADSS has been advisory rather than executive. ADSS in particular would not be able to deploy resources on the scale required, and so would need pump-priming funding in order to proceed.
Staff recruited to work within an ADSS based structure would probably be located within a specific local authority. Further development work would be necessary to establish regional committees: but local arrangements at CSSR level could be established more quickly than regionally, would be less remote, and could still link with and be accountable to a central office.
Q.2 points to the need for development and quality assurance work, which applies to all realistic models, whether the central role is executive within a decentralised model, or is one of setting and monitoring accreditation standards. One way of minimising delays and costs would be to transform the ADSS role at the centre, while supporting local arrangements as in Model 4.
Q.3 can be addressed by having strong systems of accreditation, which is important if approved research is to have parity with, and be accepted by, health colleagues (and vice versa). Accreditation would be needed within all models, but could be developed within existing regulatory frameworks, such as CSCI or the Audit Commission. Explicit accreditation standards, checked externally, would help research governance to be taken more seriously. The question begged in Q.3 is how the accreditation framework and subsequent processes would be established. A continuing role for central government and its DH/RGF Advisory Group would be critical in maintaining momentum and consistency.
Model 2
Model 2, like Model 1, would require a permanent commitment of central government funds (as with COREC currently). While this would relieve local authorities of some financial responsibilities, the marginal advantages in terms of some shared support costs would be offset by a degree of organisational isolation and lack of visibility, as far as social care research staff in and outside local authorities are concerned. Ultimately there is a risk that social care research, much of which is short-term, would be marginalised.
For procedures, it would not be axiomatic that COREC's current procedures should apply without change: indeed, unchanged they would constitute a straitjacket rather than a support mechanism, geared as they are to clinical trials, and with R&D assessments in any case separate from ethics appraisal. Thus, as with Model 1, a system of procedures would need to be developed in the light of, and in part adapted from, COREC's current repertoire. This activity is analogous to the accreditation-type activity also consistent with Model 4.
Q.1 posits a specialist system within COREC, but Q.2 identifies some of the inherent differences. What is clear, however, is that all of COREC's existing working procedures are designed around health, especially clinical, research. Even the application forms assume a particular type of topic, category of method and form of outcome. The cost implications would not be negligible.
Q.3 begs questions about similarity or differences between designs and methods, across social care, health services and public health research. An alternative could be to split clinical from non-clinical research.
Model 3
This takes one element of a system (a virtual panel), but an element cannot of itself constitute a system.
Its attractiveness, as Q.1 hints, lies in the presumed speed of decision-making, though of course material must be available electronically in the first place to make the model viable. Making this a compulsory requirement would not be realistically sustainable at present.
In response to Q.2, this model would exclude older people in particular as potential panel members, because of their currently low levels of electronic communication. It would also push decision-making towards voting rather than committee discussion leading to consensus-based outcomes (as theoretically applies with LRECs, though the delegation of powers to chairs may undermine accountability here). The technology support costs would have to be met, as would the associated training and development costs.
Q.3 on trustworthiness could be responded to in the same way as for other models i.e. through accreditation of structure, process and assessment of individuals whose task would be to judge the worth of research applications.
Model 4
This is described as a pluralist system, based on local diversity. It has the attraction of recognising current realities in CSSRs, and of building on systems where these exist; but it could be regarded as weak in terms of the national policy thrust, and as least likely (on its own) to engender large-scale and effective recognition by the NHS.
To address the weaknesses implied in Q.1 would require rigorous accreditation of locally produced systems, with a common source but varying detailed processes, and the provision of training, which would have to extend to middle managers in CSSRs, not just to specialist research, quality assurance and other central personnel.
To respond to Q.2, accreditation processes could most appropriately be linked with the existing inspection and regulatory frameworks (e.g. Delivery and Improvement Statements within the CSCI context). Even so, and even with standards being developed by existing members of the RGF Advisory Group, consistent broad standards and practice might well yield diverse outcomes in different CSSRs: anecdotally, this kind of comment is made apropos of the forthcoming DfES User Experience Survey on Children in Need, 2005.
In recognition of the issues behind Q.3, grants to CSSRs, whether direct or mediated, would need to be ring-fenced. Organisations are becoming more rather than less diverse, pointing to the need for a coherent supplementary process akin to that of MRECs. The current ADSS system is a precursor to, but not an example of, an MREC. ADSS reflects the situation in the field. The present dedicated in-house research staff base is low in local authorities, apart from a handful of leading CSSRs: more up-to-date staffing figures should emerge from the forthcoming survey.
In response to Q.4, research on CSSR-supported service users in independent care provision is already a problem area for compliance: though this issue can be tackled superficially by contractual clauses, statutory regulation would be needed for compliance to be secured quickly.
Conclusions
These conclusions are expressed by positioning the models on a number of different scales.
On a scale of strong to weak compliance outcomes, the COREC-related Models 1 and 2 appear at the strong end. Nevertheless, one must caution that strong compliance, though essential for any system that purports to regulate consistently, may reflect rather than embody power relationships. Good-enough compliance could emerge from other models, essentially Model 4, without impeding local research creativity.
On the national-local continuum, Model 4 appears, financially and in terms of central support, the most likely to survive further legislative and central government changes, such as in the important area of child care research in CSSRs. It is a model which, however, still requires a major financial commitment in the support and accreditation areas, and it starts from a low baseline of resources and commitment compared with the COREC-related models. It also offers most potential to regulate student projects without stifling them, perhaps using the techniques of Model 3 together with a (potential) accredited HEI system.
On the scale of mutual equivalence versus duplication of research evaluation processes, for those projects which cross NHS/CSSR boundaries, accredited local arrangements within a firm national/regional structure would appear about halfway: neither a COREC clone nor a free-for-all. Within CSSRs there are, of course, many projects which cross departmental and other organisational boundaries. If, in addition, the desired corporate emphasis posited in the Implementation Plan is to be followed through, then again Model 4, once established, would seem most generally applicable.
On the scale of dependence-independence, Model 4 (if local rather than regional) would require protocol and personnel safeguards to avoid cosiness and cronyism, while still engaging and interacting with the particular CSSR whose in-house work it would be reviewing. Models 1 and 2 would be more independent but also more distant.
On the scale of separation or integration of ethics from science, Model 4 allows for the least rigid demarcations, which can be artificial, especially to the lay person.
The ideal model embodies devolution within a tight framework of skilled, purposefully developed regulation. It has taken COREC some years to reach its present position, which of course may change as a result of the Warner Review. Significant funding for COREC was necessary; pro rata, even more funding is necessary in social care, because of the more heterogeneous and less well resourced environment in which CSSR research is carried out.
David Johnstone
Chair of ADSS Standards and Performance Committee
15 Dec. 2004