How evidence can drive adoption of innovation

Three essential changes to evidence that will drive adoption of digital innovation

Helen Guyatt, Head of Research, Evaluation, and Insight

 

To innovate is not just to create something new, but to create something that addresses a need – and, importantly, something that can be put into practice. The missing link between ideation and innovation is application.

To be able to put something into practice, and to do so at scale, it’s vital to know that it does in fact work. This is especially true of tools designed to help support vulnerable people. Robust evidence standards, then, are key to innovation making a real difference: without a way of knowing whether a tool does what it claims to do, it is understandably difficult to justify ever implementing that tool.

At Brain in Hand, research and evidence have always been at the heart of our work. Our self-management system, which combines digital tools with human support, was the subject of two invaluable studies while still in its pilot stages: an independent Devon Partnership Trust study of autistic adults and a National Autistic Society study of students both found that Brain in Hand had a positive impact on people’s lives, including improvements in confidence and in the ability to implement coping strategies. The former study also provided early evidence that using Brain in Hand could help service users reduce their level of contact with clinical support, delivering savings for services.

More recently, an independent clinical study funded by the Small Business Research Initiative (SBRI Healthcare) showed that our approach to digital support had significant benefits for autistic people in the areas of anxiety, self-injurious behaviour, and quality of life. Following this evidence of our support system’s effectiveness, we have also achieved NHSX Digital Technology Assessment Criteria (DTAC) compliance; the DTAC looks to ensure that digital tools meet standards in clinical safety, data protection, technical security, interoperability, and usability and accessibility.

In practice, meeting standards and providing evidence of impact are often not sufficient to get new tools commissioned at scale. This can be frustrating: there is evidence of need and impact, and a solution is available that could ease the burden on stretched resources, yet barriers to wide-scale adoption remain.

We believe this could be improved if the following three things were to happen:

1) Standards became more agile and flexible, keeping pace with evolving needs to ensure fitness for purpose.

2) Incentives for researchers focused on action and change arising from research, rather than publications per se.  

3) Decision-makers more often acted on evidence-based information.

 

Standards fit for purpose

The purpose of evidence standards is to help decision-makers trust a product or service. In support services, that trust needs to extend to confidence that a solution can work at scale.

For some time, randomised controlled trials (RCTs) have been considered the gold standard of research. They are undeniably powerful when used correctly, but recent thinking suggests there is an assumption that any RCT is inherently useful by virtue of its format, when in fact inadequate planning and reporting of RCTs contributes to avoidable research waste. Too many studies are conducted in a vacuum rather than with consideration for their practical application.

The value of an evidence standard is not simply that it ticks a box to say “yes, X requirement has been met”, but that it provides proof of the work that went into meeting it. For our DTAC accreditation, for example, we compiled an extensive suite of evidence not only on our technical systems but also on our teams’ ways of working and on how we provide a genuine solution to our users’ needs. Compiling this information was a valuable exercise, and the fact that it is now easily available to potential purchasers is enormously positive.

Ultimately, those responsible for setting the standards must collaborate across the board to ensure that they are meaningful. This means working with the innovators who need to meet those standards, as well as with the services that will use them to make commissioning decisions. If a standard is unachievable for a solution, or not of genuine use to a decision-maker, it is not serving its purpose.

 

Incentives for researchers

In the current academic system, research influence is often measured indirectly through citation data. The Times Higher Education (THE) World University Rankings weight citations at 30% of a university’s total score, positioning them as a measure of how well a university is “spreading new knowledge and ideas”. A more meaningful indicator, however, is the influence of research on positive change, rather than its mere existence. We want evidence to be meaningful: the purpose of a study should be to yield actionable insights that make things better.

This might involve a move towards embracing different types of evidence. The RCT has its place, of course, but it is far from the only type of evidence that can prove useful. The type of research ought to depend on the type of product or service; in mental health support, for example, the single-case study is a powerful tool for demonstrating effectiveness. Researchers therefore need to be open to exploring different ways of testing and proving what works.

There are signs that things may already be moving in this direction. Grant funders such as SBRI Healthcare and Innovate UK, for example, centre their focus not on publication but on making the impact of their funding as wide and deep as possible.

 

Decision-makers acting on evidence

‘Pilotisation’ is endemic in UK health and social care, and understandably so. Every service wants to be certain that a solution works before committing funds to it, and before rolling it out to potentially vulnerable people. The result is great caution in purchasing decisions, with solutions commonly trialled for a year with only, say, ten end users.

The biggest issue here is that new ways of doing things cannot make a difference unless implemented at scale. Small, limited pilots often fail to provide sufficiently powerful evidence for decision-makers to go bigger, precisely because the sample is too small to generate the confidence the commissioner is looking for. It becomes a cycle of continually looking for more and more evidence without ever progressing.

If the perception of risk is a barrier to adoption, commissioners and solution providers need to build evidence thresholds into the system so that decision-makers can commit to scaling up to the next level. An innovation that has demonstrated it meets these standards at level one ought to go from “a new thing to try” to “a proven solution we can immediately roll out at the next level”. In this way, services can move away from endless trials, establishing early on what would count as sufficient evidence and then acting as soon as that threshold is reached.

We think that evidence standards will soon become must-haves rather than nice-to-haves, with solutions commissioned only if they meet them; this in turn will unlock rapid deployment at scale. Decision-makers will need to be brave and break out of the cycle of pilotisation, focusing instead on proactively seeking out tools that can demonstrate a high level of evidence.

“As decision makers, technology is going to be key, but in the same way that we need to embrace new ideas to improve services, we also need to be brave and embrace faster evaluation and decision-making practices to deliver them.”

— Mel Lock, Director of Adult Social Care, Lead Commissioner for Adults and Health, Somerset County Council

Conclusion

We believe that having robust, meaningful standards that motivate action is the clearest path to truly doing things differently – and better. Evidence that decision-makers accept and act on can be the difference between ‘innovation’ and ‘proven solutions’ that can be deployed at scale. Only by embracing new but proven ways of doing things at scale can real progress be made.

Some of the work falls to those of us who produce and sell solutions, of course: we must generate evidence, but we must also understand that it is rarely the only factor in decisions. Anecdotal evidence in the form of end user stories and lived experience, showing the difference solutions can make to real people, appeals powerfully to the emotions; the value of these less rigorous forms of evidence ought not to be underestimated.

Evidencing effectiveness should be a continual journey. Solution providers should keep striving to learn, never assuming the work is complete; there are always limitations of context, population, methodology, equipoise, and application, among many other things. Innovators who keep adding to their evidence base earn further trust. Services, meanwhile, should regularly review whether their current tools still meet standards and make changes where necessary.

When evidence standards enable innovations to prove their effectiveness, and when services act to adopt these new, proven solutions, we should truly start to see large-scale change for the better.


It’s critical to be able to demonstrate that support tools work. We’ve always worked hard to demonstrate Brain in Hand’s effectiveness, but we think more needs to be done to ensure that evidence-based solutions are actively embraced by services. This article was also published in Open Access Government.
