Ethical storytelling is the real collateral damage in the current AI wars — NGOs need to be prepared

It looks like Silicon Valley is coming down from its initial panic over the release of an AI chatbot by Chinese startup DeepSeek. Engagement with OpenAI tools continues to climb, even though DeepSeek’s software is deemed less dependent on pricey, high-end microchips and energy-hungry data centers.

But has this war between AI tech companies distracted from the real red flags that NGOs and journalists should be heeding?

Ask DeepSeek why there are no women in China’s Politburo or about human rights abuses in Xinjiang, and it launches into propagandistic defenses of the Chinese Communist Party. Question why it repeatedly serves as a spokesperson for Beijing authorities, and it explains that those official positions are “factual and well-informed.”

Buried under AI’s promise to increase efficiency and reduce the burden of menial tasks are landmines that can undermine NGOs’ calls to action, damage their reputations, and, most importantly, hurt the people they are trying to help.

We’ve already seen what can go so very wrong.

In 2023, the National Eating Disorders Association’s foray into AI had disastrous results when the organization replaced its human-staffed helpline with a chatbot.

Not only did “Tessa,” the AI-powered chatbot, replace human staff as part of a Covid-era downsizing; it also gave what experts considered harmful advice.

According to one advocate, who described her experience in an Instagram post, Tessa advised those seeking help with unhealthy body image and eating disorders to slim down by creating a 500-to-1,000-calorie-per-day deficit.

“NEDA fired their helpline staff and now offers a robot who will provide ample advice on how to keep one sick in their ED longer,” wrote Sharon Maxwell. “Every single thing Tessa suggested were things that led to the development of my eating disorder.”

Across the aid and charity sector, experts and leaders are struggling to find ethical and responsible ways to use generative AI language models, which are already transforming businesses worldwide, to bolster the performance, efficiency, and impact of nonprofit organizations.

In late 2023, the Technology Association of Grantmakers released a 15-page report outlining a framework for responsible AI use in philanthropy.

Based on feedback from several hundred professionals working in the sector, the document urges investment in AI professional development and the use of AI to improve efficiency and promote equity. It recommends listening to people before taking any bold steps, and it suggests having stakeholders work with IT staff to develop solutions that don’t jeopardize an organization’s mission or image.

It advises organizations not to pursue AI as an end in itself, but to use it only when it emerges as the most practical solution. 

“While new AI tools hold an allure, this framework strongly recommends resolving ethical concerns and mitigating risks before moving forward with implementation,” the report said.

The list of those concerns is long. To name a few:

Artists, writers, and scholars who are frequent recipients of philanthropic support are outraged over what they describe as the wholesale, unauthorized pilfering of their creative output to “train” AI software.

“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted,” said a statement signed by 40,000 people in the arts. 

News outlets from the US to India are suing OpenAI over the alleged unauthorized use of their content to train the popular ChatGPT AI assistant. Other popular AI tools, such as Gemini and DeepSeek, are trained using similar techniques.

Donors and organizations supporting the arts, academia, and independent media should seriously consider the implications of incorporating tools into their workflows that the recipients of their support consider exploitative.

And while there are vague promises that AI could close the technology gap between the haves and the have-nots, the super-rich dominate much of the AI space, with prominent companies in the sector largely owned by billionaire tech bros such as Elon Musk, by private equity firms, and by autocratic regimes.

Organizations should step carefully before integrating such technologies into their workflows and operations. 

Other ethical considerations include protecting the privacy of both employees and recipients of charitable support when using AI to analyze organizational data: every piece of data entered into a chatbot may be retained and used to train future versions of the model.

DeepSeek’s egregious adherence to the Communist Party line aside, many have voiced concerns about the ideological slant of AI agents and language models, which are ultimately only as neutral or unbiased as the source material they are trained on and the engineers who oversee them.

Organizations should also be transparent about how AI is incorporated into their work, including in pitches to donors, policy recommendations, and the delivery of assistance.

Even organizations that mostly shy away from incorporating AI into their operations should be cautious about using AI-generated images or likenesses that could damage a nonprofit’s credibility.

In a piece written late last year, Jean Westrick, executive director of the Technology Association of Grantmakers, called on philanthropic organizations to spend 2025 developing guardrails for responsible AI use, raising awareness of the importance of securing their data, and investing in both the human and technical infrastructure that responsible adoption requires.

“The lack of formalized structures to guide AI usage — including policies and advisory committees — raises questions about our readiness to leverage AI responsibly,” she wrote.

While it’s doubtful that Washington will heed this call, it’s nevertheless incumbent on the rest of us to do so.

Tags: Artificial Intelligence, DeepSeek, ethical storytelling, journalism, openai

If you liked this post, check out these:

Filming Transition: Do ethical storytelling rules apply to the Taliban?
ChatGPT is scaring visual storytellers. It shouldn’t… yet.
DALL·E 2: A new frontier for visual storytelling