How much do you care?
Over the last few months, it’s begun to feel like the principle of care has taken a back seat when it comes to monitoring, evaluation, or research (MER) in the cultural sector. In this long-read blog article, I’d like to reflect on how we can take more care in gathering the data we either need (or are expected to collect) from our audiences.
Whether it’s the way I collect, analyse, or present data from audiences, care is one of my core values in all MER client work. Although there’s plenty of best practice, here are three examples which highlight what I mean when I say care has gone out the window. On more than one occasion I’ve had to explain to a client why it isn’t appropriate to monitor someone’s ethnicity or age by ‘looking at them’. Another client was continuously bombarding audiences with multiple different survey designs in a bid to capture over fifteen different funder monitoring requirements (and wondering why their response rate was so incredibly low). And a charitable funder demanded that an organisation ask their target audience to complete an end-of-project survey when the client had repeatedly tried to explain that none of the participants had the ability to do so due to their individual needs.
Organisations are often under pressure to collect a vast amount of internal and external data as a requirement of their funding agreements, whilst simultaneously planning and delivering their programmes. Not everyone has the luxury of being able to hire a freelance evaluator or create a data monitoring role within their team to navigate what can be quite a daunting or complex task. It’s therefore easy to fall into the trap of chasing figures whatever the (emotional or experiential) impact on the participant, and to the detriment of care. The pressure is real – we need to (and I believe we should) evidence the impact we’re having. Not just for our funders (and the public), but to assess whether we’ve achieved our own organisational aims (it’s a big morale boost if nothing else). But at what cost are we doing this?
How to build a care-centred approach to your MER work
Care, noun, the process of protecting someone or something and providing what that person or thing needs. [Cambridge]
Care, verb, if you take care about something, you feel that it is important and are concerned about it. [Collins]
Care in planning
Start with your why (rather than your funders’) and create an MER plan. Make a plan that you care about, driven by your organisation and your evaluation principles rather than your funders’ monitoring requirements. I’ve seen so many evaluation plans that aren’t really evaluation plans – they’re a list of monitoring requirements for funders. But what do you want to achieve? And how do those monitoring requirements align or fit into the mix? Sidenote: if they don’t, I’d be reflecting on whether you’re working with the right funder.
Think about how you can be inclusive with your planning process. Is the framework you’re developing socially engaged or people-centred? Is it co-created with the target market? How do you know which outcomes your participants or audiences want to experience? In a recent evaluation framework development session with Ideas Test (one of my NPO Creative People and Places clients), we invited young people taking part in the programme to help shape our evaluation framework for their particular project. We spent an afternoon collaboratively reflecting on the benefit, difference or change they wanted to experience at the start of their engagement. We’re also using a Most Significant Change methodology across Ideas Test’s overall CPP programme, with a representative range of target audiences involved in the final story selection (as well as story collection) process. We all (hopefully) put audiences at the heart of other aspects of our work, so why should evaluation be any different? Co-creating evaluation outcomes isn’t unusual; it’s a core element of the SROI process (certainly the one recommended by the UK Cabinet Office’s 2012 guide), but it’s not something you often see talked about in the cultural sector.

Streamline multiple funder requirements. Without grant funding many of my clients wouldn’t exist (and I’d be out of a job). But it’s completely impractical (especially for smaller organisations) to collect output data for numerous different funders when their requirements vary ever so slightly for the same thing (for example, if a range of funders all want to know visitor age, but their banded age response categories are marginally different). This also happens with outcomes, with similar expectations (such as improved health and wellbeing) worded slightly differently. Sometimes there are streamlining solutions (such as collecting evidence on one outcome that would ‘hit’ various other ones). For example, the image below was taken in an evaluation workshop with a client, where we messily clustered every expected outcome and KPI from different funders on one wall and created overarching outcome ‘buckets’ so we could see where the similarities and crossovers were. Visually mapping your varying funder requirements alongside your own outputs and outcomes onto a giant wall can help reduce anxiety and streamline what looks complex on paper. We need to hold open conversations with funders to explain the impact that multiple funding streams and funding conditions are having on staff (and freelancer) time. I’d encourage you to have conversations with your grant managers if you’re finding it difficult to collect everything in the same way – can you push back on the exact age banding categories, or streamline and supply the same information across the board?
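One practical way of streamlining the age example is to ask for exact age (or year of birth) once, and then derive each funder’s slightly different bands from that single answer rather than asking the question several ways. Here’s a minimal sketch of that idea in Python – the funder names and band edges are hypothetical, invented purely for illustration, not taken from any real funding agreement.

```python
# Minimal sketch: collect exact age once, then derive each funder's banding from it.
# The funder names and band edges below are hypothetical, for illustration only.

FUNDER_BANDS = {
    "Funder A": [(0, 15, "0-15"), (16, 24, "16-24"), (25, 44, "25-44"),
                 (45, 64, "45-64"), (65, 200, "65+")],
    "Funder B": [(0, 17, "Under 18"), (18, 34, "18-34"), (35, 54, "35-54"),
                 (55, 200, "55+")],
}

def band_for(age, bands):
    """Return the band label an exact age falls into."""
    for low, high, label in bands:
        if low <= age <= high:
            return label
    return "Unknown"

def report(ages):
    """Tally one set of collected ages against every funder's banding."""
    out = {}
    for funder, bands in FUNDER_BANDS.items():
        counts = {}
        for age in ages:
            label = band_for(age, bands)
            counts[label] = counts.get(label, 0) + 1
        out[funder] = counts
    return out

if __name__ == "__main__":
    sample_ages = [12, 19, 23, 37, 41, 58, 67, 70]  # illustrative responses
    for funder, counts in report(sample_ages).items():
        print(funder, counts)
```

The design point is simply that one well-asked question can satisfy several reporting formats at once – the recoding happens in the analysis, not in front of the visitor.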

Care in your methodology and design
Do what’s right for your audience. I’ve seen so many organisations rely on surveys when they’re not the most appropriate methodology for either the target audience (e.g. those with limited literacy) or the type of evidence required (e.g. if something is better evidenced via a qualitative approach). Some funders acknowledge this and encourage the use of alternative methodologies, whereas others aren’t all that explicit. Don’t force a square peg into a round hole – if it’s not right, speak to your funder to explain why you don’t think a survey is appropriate, rather than just assume you have no choice and go ahead regardless. I’m working on a project with Scottish Book Trust at the moment called Reading is Caring. It’s a programme which uses reading to support those living with dementia and their family or professional care partners. Our approach is time-consuming, but care-driven – we’ve designed a methodology which is people-centred and tailored per project participant by giving them flexible options for feeding back. This is essential both for those living with dementia, and for the people looking after them, who are often time poor.

Give context to why you’re asking sensitive questions. One of my clients recently piloted their new NPO survey on a group of young people. The majority of respondents opted not to answer the demographic questions as they found them too intrusive. More worryingly, questions on gender and sex assigned at birth were reported as potentially traumatic. This is clearly problematic from the perspective of care and may indicate the need for trauma-informed research (TIR) practice. It’s not unique to NPOs – we ask demographic questions across a lot of surveys in order to align with the census or other local authority statistics (so we can understand whether we’re representative of our catchment area, for example). But I’ll often see surveys given out with absolutely no context about why demographic data is being collected. If you want to collect this data (or have no choice due to funder monitoring requirements), how can you approach it in a care-focused way? Can you give a verbal or written trigger warning at the start of that particular demographic question set? If there’s little room for providing written context (as is the case at the start of the current NPO Illuminate survey), my advice would be to add it into the body copy of the email when you send your e-survey. Or, if you’re conducting face-to-face surveys, ensure your fieldworkers are briefed with a consistent boilerplate script to use. This should typically include a thorough explanation of why you’re asking for the information, how it’s used and how it will help your organisation. But I’d go further than that and flip it – what’s the benefit to them in handing over this type of information? This is simply a matter of ethics.
Although you could argue that the quantity and personal nature of demographic questions have changed, the challenges and concerns around getting audiences to complete them aren’t a new issue. Back in 2002, I ran an A/B test on an exit survey with audiences at Kelham Island Museum (at the time part of Sheffield Industrial Museums Trust, now Sheffield Museums Trust) during my MA Arts and Heritage Management degree. I conducted half of the surveys with no contextual explanation for the demographic questions, and the other half with an (albeit basic) explanation. I think you can probably guess which one was more successful in getting fewer ‘prefer not to say’ responses (the latter, of course).
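If you want to run the same kind of comparison on your own surveys, the arithmetic is simple: compare the ‘prefer not to say’ rate between the two versions. A rough sketch is below – the counts are invented for demonstration, not my 2002 results.

```python
# Illustrative A/B comparison of 'prefer not to say' rates between two survey
# variants. The response counts below are made up, purely for demonstration.

def prefer_not_to_say_rate(responses):
    """Share of demographic answers left as 'prefer not to say'."""
    refusals = sum(1 for r in responses if r == "prefer not to say")
    return refusals / len(responses)

variant_no_context = ["prefer not to say"] * 18 + ["answered"] * 32    # hypothetical
variant_with_context = ["prefer not to say"] * 6 + ["answered"] * 44   # hypothetical

for name, data in [("No explanation", variant_no_context),
                   ("With explanation", variant_with_context)]:
    print(f"{name}: {prefer_not_to_say_rate(data):.0%} 'prefer not to say'")
```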
Care in your data collection
Sell the benefits. We’re asking a lot from our target audiences. Surveys are doing a lot of heavy lifting for us – especially if you’re an NPO or CPP NPO. One of my clients tested the completion time for their NPO survey, which included the core mandatory question set plus five other questions to help evidence outcomes in their organisational evaluation framework. It took nearly 15 minutes, with over half of the respondents giving up before they’d even got to the end. Is this what caring about our audiences looks like? This issue isn’t unique to ACE-funded organisations. I’ve seen several twenty-page surveys in my time working in the cultural sector. Ask yourself: why should the respondent bother? Would you?
The ‘ask’ is more important than ever. Although I encourage the use of non-client incentives for surveys (and there are best practice rules on these), too often we’re asking for long survey completions with no benefit at all to the respondent. What can you legitimately offer people for taking the time to give their feedback? What can you tell them so they understand the importance? Like the contextual narrative for demographic questions, what’s going to genuinely convince them it’s worth their while? Do you share the responses from surveys with your audiences? Could you do this in a visually appealing way in your space, in your printed promotional materials, on your website?
Be secure. I sometimes see a lack of care in how personal data is stored. Make sure you’re adhering to your data processing and storage obligations. One organisation I worked with a few years ago couldn’t understand why I was concerned that their SurveyMonkey data had been downloaded and shared with a partner without removing the ‘opt-in’ personal email addresses for their e-database. Care runs from the start of the planning process right through to the end.
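A little routine care at the point of export goes a long way here. Below is a minimal, hypothetical sketch of stripping personal-data columns from a downloaded survey export before it’s shared with a partner – the column names are assumptions for illustration, not any particular platform’s export format.

```python
# Minimal sketch (not platform-specific): remove personal-data columns from a
# survey export before sharing it. Column names below are hypothetical examples.
import pandas as pd

PERSONAL_COLUMNS = ["Email Address", "Opt-in email", "Name", "IP Address"]

def make_shareable(export_path, shared_path):
    df = pd.read_csv(export_path)
    # Drop whichever personal-data columns are present in this particular export.
    to_drop = [c for c in PERSONAL_COLUMNS if c in df.columns]
    df = df.drop(columns=to_drop)
    df.to_csv(shared_path, index=False)
    print(f"Removed {len(to_drop)} personal-data column(s) before sharing.")

# Example use (file names are placeholders):
# make_shareable("survey_export.csv", "survey_for_partner.csv")
```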
Care in our wellbeing as evaluators
Care also relates to us as the human beings designing and delivering MER activity. I’ve worked on a variety of projects over the last few years which have involved talking to participants who’ve experienced severe trauma in their lives. For example, I’ve just finished working on Legacy of 67, an NLHF-funded project which involved capturing the (often sensitive or traumatic) impact stories of those within the LGBTQ+ community whose lives were affected before and after the partial decriminalisation of homosexuality in 1967. If you’re working with potentially traumatic or triggering material, make sure you’re prepared and take care of your own wellbeing. I’ve attended a two-day Adult Mental Health First Aid course and ensured I’ve got processes in place for dealing with my own triggers.
In summary
- Create a plan that starts with you (your organisation).
- Think about how you can build in a ‘care check’ at each stage of the monitoring and evaluation process, from planning and design through data collection, storage, analysis and write-up.
- Consider how you can make your evaluation approach more socially engaged or person-centred, and wherever possible do this before any funding bids so that the outcomes you submit match what you actually want to achieve.
- Don’t be afraid of speaking to funders to share your concerns and discuss your ideas for alternative methodologies. Push back on anything that you feel is absolutely not doable and make a case about why.
- Wherever possible, before applying for funding ask your grant manager what their expectations are with MER – if their response doesn’t match your values or capabilities think twice before applying.
- If you’re an NPO or CPP NPO, share feedback and ask questions about the Illuminate survey platform with the PWC helpdesk. Help make things better by testing, reporting, revising and trying again.
- Think about ways you can provide support to staff who are collecting potentially challenging impact story data.
- Continue training staff in MER – how to do it and why it’s important. The more methodological approaches you have up your sleeve, the less likely you’ll be to default to surveys.
Further resources to explore on care
Updated MRS Code of Conduct 2023 (with an increased focus on participant wellbeing, including providing information to support and assist as appropriate)
The Care Lab (Whitworth Art Gallery)
Stephen Welsh ‘Cultural Encounters of the Empathetic Kind’ published in Arts Professional (July 2023)
Other links recommended to me on this topic, or found subsequent to publishing
Kate Fitzgerald Consulting Limited
Trauma-Informed Research SRA blog
_______________________________
What does care mean to you? Do you feel your approach to evaluation (and your funders’ requirements) takes care? I’d love to hear from you.