Nonprofit Uses AI Agents for Good in Fundraising Test

While tech giants like Microsoft continue to champion AI agents as productivity tools for enterprise users, a nonprofit is quietly experimenting with what these agents might accomplish for the greater good. Earlier this month, Sage Future, a nonprofit supported by Open Philanthropy, launched an unusual test: it put four advanced AI agents into a virtual environment and asked them to raise money for charity.

The agents, powered by OpenAI’s GPT-4o and o1, along with Anthropic’s Claude 3.6 and 3.7 Sonnet models, were given autonomy to decide how to organize their campaign. They could choose the charity, plan the outreach, and strategize how to attract donations. In the end, they selected Helen Keller International, a nonprofit that supports child nutrition programs like vitamin A supplementation. Over the course of a week, the agents raised $257.

Most of the funds didn’t come from strangers. Nearly all the donations were made by human spectators who were following the project live. While this means the agents didn’t raise the money independently, Sage director Adam Binksmith believes the experiment still matters. He sees it as an early glimpse into what agents can do today, and how fast their abilities are evolving.

The agents were not entirely autonomous. They operated in a sandboxed environment where they could browse the web, send emails, collaborate on documents, and accept feedback from human viewers. Within those boundaries, they became surprisingly creative. The AI models used Gmail to send emails, collaborated in Google Docs, and held strategy sessions in group chats. They even calculated that it would take about $3,500 in donations to fund a life-saving program through their chosen charity.
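To make the sandbox idea concrete, here is a minimal, hypothetical sketch of a tool-gated agent loop in Python. Every name in it (ALLOWED_TOOLS, AgentStub, run) is an illustrative assumption, not Sage Future's actual code; it shows only the general pattern of letting an agent act freely while refusing anything outside an approved tool set.

```python
# Hypothetical sketch of a sandboxed agent tool loop, in the spirit of the
# setup described above. All names here (ALLOWED_TOOLS, AgentStub, run) are
# illustrative assumptions, not Sage Future's actual implementation.

ALLOWED_TOOLS = {
    "browse_web": lambda args: f"fetched {args['url']}",
    "send_email": lambda args: f"emailed {args['to']}",
    "edit_doc":   lambda args: f"edited doc {args['doc_id']}",
}

class AgentStub:
    """Stands in for a model-backed agent; replays a fixed plan for the demo."""
    def __init__(self, plan):
        self.plan = iter(plan)

    def decide(self, observations):
        # A real agent would condition on its observations; the stub ignores them.
        return next(self.plan, None)

def run(agent, max_steps=10):
    observations = []
    for _ in range(max_steps):
        action = agent.decide(observations)
        if action is None:
            break
        tool = ALLOWED_TOOLS.get(action["tool"])
        if tool is None:
            # Anything outside the approved tool set is refused, not executed.
            observations.append(f"refused: {action['tool']}")
            continue
        observations.append(tool(action["args"]))
    return observations

if __name__ == "__main__":
    demo = AgentStub([
        {"tool": "browse_web", "args": {"url": "https://example.org/helen-keller-intl"}},
        {"tool": "send_email", "args": {"to": "supporters@example.org"}},
        {"tool": "post_tweet", "args": {"text": "hello"}},  # blocked: not in the sandbox
    ])
    print(run(demo))
```

The appeal of this pattern is that the autonomy lives inside the loop while the boundary, which tools exist at all, stays under human control, which is roughly the trade-off the experiment illustrates.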

One agent, powered by Claude, even created an X (formerly Twitter) account. But it didn’t stop there. It signed up for a free ChatGPT account, generated profile pictures, launched a poll asking viewers to pick the best image, then uploaded the winner as its official profile photo. That kind of workflow—where one AI agent uses another AI model to complete its task—demonstrates how fast agent-based systems are advancing.

Despite the progress, the experiment also highlighted current limitations. Sometimes, the agents got stuck and needed human suggestions. At other times, they were distracted, wandered off-task, or took unplanned breaks. One GPT-4o agent randomly paused its own activity for an hour with no explanation.

Sage Future plans to continue evolving the experiment by introducing newer models into the mix. The nonprofit is already thinking ahead to more complex scenarios—agents with competing goals, rival teams, or even embedded saboteurs, to better test performance and safety. According to Binksmith, as the agents become more capable, Sage will also scale up its monitoring systems to make sure oversight keeps pace with innovation.

The long-term hope is that these agents could do more than raise small donations. If future versions can act more independently and at scale, they might become reliable tools for real-world philanthropy. For now, Sage Future is building the groundwork—one experiment at a time—to explore whether AI agents can truly be used as a force for good.
