AI is increasingly part of everyday life and constantly features in public conversation. It promises efficiency, frees up mental bandwidth and can take over repetitive tasks, sparking ongoing discussion, both positive and negative, about its capabilities, potential uses and how it could make life better. At the same time, use of the technology raises concerns around ethics, bias, data security and the impact it will have on jobs and on people’s ability to think.

AI adoption is now at a tipping point, with every organisation, from commercial businesses to the third sector and non-profits, striving to balance the benefits of the technology against the risks, and working out how best to take the public with them on that journey.

New CharityTracker research of 3,000 UK adults, conducted in December 2025, sought to understand general attitudes to AI and how charities should be using it. Acceptance of charities’ use of AI was found to be conditional rather than automatic. The top perceived benefits centred on efficiency: saving staff time (29%), improving services (27%) and using money more efficiently (27%). Conversely, the greatest perceived risks were around data security (36%), the loss of the human factor (35%) and the risk of serious mistakes (31%).

Overall, the view on AI use by charities was mixed, with uncertainty dominating opinion: 36% of people were positive about charities using AI, 25% negative and 37% unsure. The results indicate that people judge charity use of AI not by ideology but by context, transparency and its perceived impact on trust.

For charities, this creates both opportunity and responsibility. Below are some key learnings and practical tips to help organisations use AI in ways that align with public attitudes and expectations and protect confidence.

  1. Not all uses of AI are viewed equally.

Public support is strongest for uses that protect resources or improve efficiency without affecting personal interactions, such as detecting scams or fraudulent activity (64% of people find this acceptable) or helping with general productivity (53% acceptable). These uses are widely accepted across age groups and donor status, making them safe use cases for charities.

  2. Using AI to influence decisions about who receives help is a clear red line.

Public comfort drops sharply when AI is used to influence decisions about who receives help, with more people finding this unacceptable than acceptable (38% vs 33%). Even supporters who are generally positive about AI are uneasy with its role in judgement-based or moral decisions.

  3. Older audiences are less comfortable with AI.

Age is the strongest predictor of attitudes to AI and of comfort with charities using it, followed by whether people use AI themselves. Older adults are less likely to use any form of AI than younger cohorts: across the general population, 32% of people report no usage at all, rising to 55% among people aged 65+. However, our research showed that these people are not necessarily strongly opposed to AI; they are more cautious and place greater importance on reassurance, transparency and having someone to turn to when they need help.

  4. AI needs to be visible, not invisible.

People want to know when AI is used (50% of people said that being told clearly when AI is used was their top priority), so it’s important to signal clearly when AI is being used and, where appropriate, why. Transparency is key: hidden or unclear use risks undermining trust, even when the application itself is reasonable.

  5. Access to human contact needs to be protected.

Concerns about AI answering enquiries are less about the technology and more about the fear of being “fobbed off”, as AI can feel like a barrier to human contact. People want reassurance that a real person is still available and that there is an easy route to speak to someone when needed.

  6. Personal data should be used minimally and with caution.

There is general discomfort with charities using personal data in AI systems, and comfort declines as the data becomes more personal: 37% of people felt that using basic contact and donation data was acceptable, falling to just 13% for sensitive personal information. Alongside this, 31% of people felt that none of these uses was acceptable, a figure that rose sharply among older adults and non-donors. Charities need to be cautious and minimal in any use of personal data, and should clearly explain why it is used, in order to alleviate concerns and build trust.

  7. Handle synthetic media with care.

The use of AI-generated images and videos was polarising: 40% of people felt it was acceptable and 31% unacceptable, with acceptance declining with age. Concerns are likely related to authenticity and trust rather than creativity. If using synthetic media, charities should do so sparingly, ensuring it is framed ethically and clearly labelled.

  8. Human oversight improves perceptions of acceptability.

Human oversight increases public confidence in charities’ use of AI. However, for sensitive applications such as personal data processing or decisions on who receives help, oversight alone does not fully reassure. It’s important that charities embed human reviews and checks into their workflows and processes to safeguard their practices, and communicate this where relevant to build and maintain trust.

  9. Focus on the undecided majority.

Within the general public, the largest group is not anti-AI but unsure or undecided, with both benefits and risks salient. While 42% of people felt AI threatened job security, 33% thought it enabled access to more information, 30% felt it helped people get more done and 25% felt it enabled medical and scientific breakthroughs. Again, age has an impact: younger adults are more likely to feel positive, while older adults and non-users are more likely to disengage than to strongly oppose. How charities use and talk about AI now will shape long-term trust among this less engaged, undecided majority. Clear communication focused on reassurance, transparency and the benefits and safeguards in place will help ease concerns and build trust.

Overall, the public’s ask of charities is not “don’t use AI” but “use AI visibly and responsibly”. While people appreciate that AI presents real opportunities for charities to do more with limited resources, used carelessly it risks eroding trust. The public expects charities to use the technology in a responsible, conscious and transparent way that reflects their charitable values.