Is there a role for robots in philanthropic storytelling?

Hiking on Mount Buffalo over the break, I met a fellow walker who, on hearing what I did for a living, immediately asked: “So, is AI going to put you out of business?”

Not in a cruel way, I should add. It was a reasonable question, and one I get asked a lot.

There is no doubt that ChatGPT and other AI bots are being designed to write articles, blogs and other content for which humans previously considered themselves essential. I recently had to write the closest I ever get to a legal letter and asked AI for wording on a part I was hazy about. It came back with what I regarded as an almost perfect sentence. This shook me a little, but I didn’t hesitate to include the sentence in my letter.

But what about content for charities and philanthropy, the niche where I work, concentrating on storytelling, media and communications to help those in the giving sector cut through the noise? Can AI sweep in to write more compelling content than I can, or create virtual videos with realistic avatar spokespeople, replacing my years of experience across all media? More to the point, how can I convince my existing and future clients that they should invest in me, not Robo-Write? 

It’s a difficult question to approach without sounding wildly defensive and protectionist.

So, as preparation to write this piece, I did the only logical thing and asked ChatGPT: “Where can – and where can’t – AI help charities with their communications?”*

And I was genuinely surprised when ChatGPT’s considered reply was that I, the human, was more important than AI. You’ve got to hand it to the robots; a complete lack of emotion means humility is not an issue.

Even the Mount Buffalo highlands weren’t free of AI. Photo: Nicko

Digging into its response, the Transformer was confident that AI could help with first drafts, and with such backend elements as data analysis, predictive analytics, ad targeting and even chatbots for FAQs on a charity website. It can be “efficient”, it said, but you need to “keep a human in the loop”. 

It also articulated the areas where it considered it would struggle to match a human.

AI, it said, “fell down” when it came to authentic emotion and to cultural and contextual nuance, and carried risks of generic messaging and bias in its data, as well as struggling with narrative complexity.

It can’t help (yet), it added, with ethical decision-making, building deep relationships, or crisis communications (“In moments of controversy or crisis, human intuition and quick judgement are irreplaceable,” it said). High-stakes storytelling, requiring a nuanced understanding of trauma, human rights or emotional impact, also “demanded careful crafting that AI cannot yet provide.”

What was particularly interesting for me was that ChatGPT’s response tied in exactly with my wider exploration of where AI may or may not be helpful to my work. Teaming with Shawn Callahan from Anecdote, and AI expert Chris Nolan in Hollywood (no, not the Batman Chris Nolan), we’ve been looking at how we can help senior executive teams lift their “humanity” in response to the arrival of AI.

The workshop we’ve developed shows how to utilise AI in ways that are useful and practical, while also asking executives a critical question. In a serious moment of corporate crisis, when the right decision has to be made – ethically, commercially, strategically, politically and intuitively, even if it may appear in the short term to be the wrong decision – are you going to trust that moment to AI? Or do you want to know that your senior leaders can step up to handle that intense moment, with their human intuition, a deeper understanding of what’s at stake, and the acumen and experience to navigate such difficult waters?

Even in my less dramatic daily storytelling, I’ve seen this need for an understanding of nuance play out. Any AI-generated copy I’ve seen so far has been bland, unable to provide specific, real-world (read: human) experiences.

I recently interviewed Rabbi Shlomo Morrison, founder and CEO of C Care, a St Kilda charity tackling food insecurity and social isolation. He told me an anecdote about one night when, driving home from delivering meals, he saw an apparently homeless man sitting on the side of St Kilda Road. Shlomo had a few leftover meals in the car, so he stopped and offered one to the man, who gratefully accepted. Shlomo then began regaling the man with other potential sources of help, such as an initiative then running through a major telco, under which people sleeping rough could receive a free mobile phone and access an app showing where necessities such as a bed or a meal might be available.

Shlomo finished his monologue, explaining the expansive benefits of such a resource, only to have the man say, “Mate, have you got a fork?”

The whole time, the rough sleeper had this precious hot meal in front of him but no cutlery with which to eat it, and his one pressing goal at that moment was to eat his first meal of the day. The big picture could wait.

Shlomo was disarmingly self-deprecating in telling that anecdote, an aside to the bigger story of C Care’s achievements, but the way the moment resonated with him told me a lot about the man. “It was such a small, little experience but large in terms of how we talk about walking a mile in other people’s shoes,” Shlomo said. “It’s so important to be a great active listener and give people a mouthpiece to share what they’re going through and where their difficulty is.”

Both his experience in that moment and his recounting of it were very human interactions. I wonder: could AI have picked up on that nuance? Or been an active listener?

So how can AI be used effectively in charity communications?

As the initial novelty fades and serious consideration of AI’s capabilities takes place, it is becoming apparent that an engine like ChatGPT, Google’s Gemini or the others rolling out can be a great co-pilot, a helpful sidekick. (Microsoft has even acknowledged as much, naming its embedded AI “Copilot”.) However, the machines cannot yet replace the human empathy, nuance, cultural sensitivity and wider emotional intelligence that are so crucial in the philanthropy space.

Given AI’s potential to “hallucinate”, it’s also best to be clear with your audience when AI has been involved in creating content – both to cover yourself and to acknowledge the need for careful human review of not only facts but tone and language. Most charities I work with have specific words, phrases or angles that are strictly off-limits, to avoid triggering or insulting the people they help; a difficult ask for a robot’s generic copy.

It’s also worth being aware of the limits of the sources AI models draw on. For example, as TJ Thomson and James Meese outlined in The Conversation, OpenAI, the maker of ChatGPT, currently has an agreement to use News Corp newspapers but not alternative voices such as Nine media or the ABC. This potentially politicises or biases output in ways your organisation might not be comfortable with.

Finally, when it came to media, comms and storytelling for charities and philanthropy, ChatGPT’s advice was to “focus on authenticity”. 

“Use AI for efficiency, but let humans handle the emotional core of communications,” it suggested. 

I couldn’t have said it better myself.


(* By the way, as I discovered – and outlined in a previous post – every time you use an AI engine for a general query, the cooling systems of the massive generative AI data centres consume roughly half a litre of water. So be wary about playing with this new toy – it’s not good for the environment. Comparatively, while writing this I’ve consumed a cup of black tea, so there’s one small win for the humans.)
