TAPPI Phase 2: Evaluating initiatives trialling technology for an ageing population
In 2021, members of the Technology for our Ageing Population: Panel for Innovation (TAPPI) developed 10 principles for delivering technology services in housing and care settings to improve the lives of older people. Drawing on the ‘care ready’ HAPPI design principles, they highlighted that technology services should be adaptable, co-produced, cost-effective, choice-led, interoperable, inclusive, outcome-focused, person-centred, preventative and quality-focused.
The second phase of the TAPPI project, funded by the Dunhill Medical Trust and overseen by the Housing LIN and the TSA, aims to test these 10 principles, finding out how they work in practice and how they can be embedded in service delivery. Six testbed sites in England, Scotland and Wales have been selected to trial technologies with the aim of improving outcomes for their service users against the TAPPI principles.
The Cambridge Centre for Housing and Planning Research (CCHPR) is the TAPPI Phase 2 Evaluation and Shared Learning partner, and is working with the six testbed sites to evaluate the adoption of the 10 principles. Earlier this month, CCHPR led an evaluation workshop to provide the testbed staff with guidance on how to successfully evaluate their projects. The workshop highlighted several key considerations which organisations planning to evaluate a technology service should take into account.
How to plan an evaluation
When planning an evaluation, it is important to consider exactly what you want to use the findings for. For instance, if you want the evaluation outcomes to provide evidence that will inform investment decisions, the questions you need to answer may be quite different from those you would ask if you were aiming to feed your learning back to service users. Deciding on the intentions of the evaluation at the outset is therefore crucial for shaping evaluation plans: it will direct what kinds of questions you ask, what kinds of data you need to collect in order to answer those questions, and your approach to sharing your findings at different stages of the evaluation.
What to measure
Next, there is the question of what you’re going to measure in order to answer your questions. As the NHS highlights, there are many different kinds of impact which can be measured. These might include whether service users have made progress towards achieving personal goals, how satisfied people have been with the support they received when using the service, or whether people have experienced changes in their socio-economic circumstances. Evaluations often consider impacts on a range of stakeholders, including the organisation delivering the service, service users, staff, families and carers.
What data to collect
Once a decision has been made about what to measure, there are many options for the kinds of data that might be collected in order to measure it. Quantitative data can be collected through surveys and questionnaires, while popular options for qualitative data collection include interviews and focus groups. Both types of data can be valuable for evaluation and can illuminate findings in different ways. Crucially, any data collected should also be analysed, whether by using software packages (in the case of quantitative data) or by reading through and annotating the data to identify themes and patterns (in the case of qualitative data). Service providers carrying out evaluations are encouraged to consider their capacity for analysis carefully before deciding what kinds of data to collect, and in what quantities.
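To make the two modes of analysis concrete, the minimal Python sketch below summarises some invented quantitative survey scores with descriptive statistics, and tallies the themes a researcher has already assigned to qualitative interview excerpts. The data, the score scale and the theme labels are illustrative assumptions, not TAPPI data; a real evaluation would apply the same ideas to its own datasets, often using dedicated analysis software.

```python
from collections import Counter
from statistics import mean, median

# Illustrative quantitative data: satisfaction scores (1-5) from a
# hypothetical post-installation survey of service users.
survey_scores = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

print(f"Responses: {len(survey_scores)}")
print(f"Mean satisfaction: {mean(survey_scores):.1f}")
print(f"Median satisfaction: {median(survey_scores)}")

# Illustrative qualitative data: themes assigned to interview excerpts
# during manual coding. In practice these codes come from reading and
# annotating transcripts, not from the script itself.
coded_excerpts = [
    "independence", "reassurance", "independence", "usability",
    "reassurance", "independence", "family contact",
]

# Tally how often each theme appears across the coded excerpts.
for theme, count in Counter(coded_excerpts).most_common():
    print(f"{theme}: {count}")
```

Even a toy example like this illustrates the capacity point above: the quantitative summary is quick to produce, whereas the qualitative tally depends on the slower, manual work of coding each excerpt first.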
Adapting to TAPPI challenges
Evaluating the use of technologies in services for older people is not without its challenges. It can be difficult to work out whether outcomes should be attributed to the technologies or to other aspects of the service. Deciding on a suitable sample size can also be difficult: enough data must be collected to support reasonable interpretations, but not so much that it becomes unmanageable to analyse. And participants may drop out of the evaluation or decline to take part in follow-up surveys or interviews, making it difficult to track change over time.
However, anticipating the challenges which may arise, and continually reflecting on and adapting the evaluation process, should ensure that enough high-quality data is collected to answer the key questions at the heart of the evaluation. Indeed, while a ‘perfect’ evaluation strategy may not be achievable within the confines of organisational budgets and capacities, evaluating the impacts of a service maximises the opportunities to improve the experiences and outcomes of the older people engaging with technologies, and allows learning to be shared across providers to extend the reach of promising initiatives.
Rising to these challenges, we look forward to evaluating the adoption of TAPPI across the six testbeds, from principles to implementation, and capturing what worked and what didn’t work so well.
Slides from the TAPPI evaluation workshop held in December 2022 can be found here.
The Cambridge Centre for Housing and Planning Research (CCHPR), led by Dr Gemma Burgess, is a University of Cambridge research centre within the Department of Land Economy. Find out about CCHPR’s role as the TAPPI Phase 2 ‘Evaluation and Shared Learning’ partner.
Find out more about the second phase of TAPPI here.