Let me ask you something: How many times has a piece of technology promised to change everything… and then promptly driven you absolutely crazy?
You know the scenarios. It can do all the things, but only after you’ve configured everything yourself. “Integration” turned out to mean something very different from what you imagined. The upgrade wiped out every custom setting you spent hours building. And whenever you try to do something just slightly outside the norm, the software fights you like a toddler at bedtime.
I could go on. We have all been there.
And yet—here’s the tension—technology genuinely has made our lives easier. Microsoft Word may not make complex formatting a walk in the park, but it has transformed how we create documents. And because it plays nicely with the rest of the MS Office suite, whole categories of headaches have simply disappeared.
Welcome to the tug of war.
The Two Ends of the Rope
When it comes to A.I. in fundraising, this same push and pull is playing out in real time. On one end of the rope are the people who believe A.I. is too messy, too risky, and too unreliable to touch. On the other end are the people who believe A.I. has ushered in such a leap in accuracy that we can use machine-generated information as-is, no human review required.
New technologies that arrive with enormous hype—and A.I. certainly arrived with enormous hype—have a way of polarizing us. But is there something useful to be found in the middle of that rope?
Spoiler alert: There is.
Yes, A.I. Has Been Around. But This Feels Different.
A.I. has been woven into our digital experience for years. Recommendation engines. Spam filters. Autocomplete. But when OpenAI released ChatGPT in 2022, it felt less like a product launch and more like a digital eruption. Things are moving fast. New and genuinely exciting capabilities are emerging. And yes, things are getting broken along the way.
For many in our field, the speed of that change feels dangerous. The instinct becomes: whatever you do, don't ask A.I.
But much like the anxiety that greeted Google’s debut—remember when people worried that nobody would learn anything anymore?—there is real and practical value here, if you know how to use it.
One of the most useful features of a generative A.I. chatbot is that you can ask it to show its work. Where did that information come from? What sources support that conclusion? What transactions were used to build that summary? That transparency is actually a significant feature, not a quirk.
Where A.I. Is Changing the Game for Prospect Research
At Aspire Research Group, one of the most dramatic shifts A.I. has made in our day-to-day work is in writing bios. Even setting aside the time required to gather information, writing a few well-crafted paragraphs about a prospect has always been time-intensive. Using DonorAtlas, we now have well-written bios and the underlying sources for verification—almost instantly. We can deliver a significantly stronger product at the low end, in far less time.
Until, of course, A.I. fails us. And it does fail us.
People in the arts, for example, seem to get misrepresented by A.I. with striking frequency. What is their "job," exactly? They don't fit the pattern the model expects. In those cases, we take over the steering wheel and drive that one ourselves.
This is not a reason to abandon A.I. It’s a reason to understand it.
Algorithms Are Only as Good as the Data Behind Them
Remember when Netflix’s recommendations felt almost eerily accurate—until they didn’t? If you shared an account with someone whose taste was wildly different from yours, the algorithm got confused. It was doing its best with messy inputs.
The same principle applies to your fundraising database. If your data is a hot mess, A.I. is going to struggle to give you reliable scores or meaningful analysis. But here's the thing: it might still give you better results than statistical modeling did. And if better-than-before scores get gift officers out the door and into conversations with donors faster, that's not nothing.
But that raises the next question—and it’s an important one.
If A.I. Is Better Than What Came Before, Why Not Just Trust It?
If A.I. analysis outperforms statistical modeling, why shouldn’t we lean on it entirely? Why not let it drive portfolio assignments, staffing decisions, campaign planning?
I recently interviewed Vered Siegel on the Prospect Research #ChatBytes podcast, and she said something that I keep coming back to:
“One of the biggest shifts generative AI has introduced in our industry is that information is no longer the scarce resource. Judgment is now the scarce resource. We can generate lists and summaries and signals faster than ever, but that doesn’t automatically make our decisions better. One key aspect of being a strategic partner right now means helping the room slow down just enough to ask the right questions.”
Read that again. Judgment is now the scarce resource.
Finding the Balance
The key to leveraging A.I. well is knowing where human judgment needs to enter the picture—and deciding what level of risk is acceptable for you and your organization.
I’m not suggesting that every single name assigned to a portfolio requires a human review. Not anymore. But what if a feedback loop were built into the prospect assignment process? What if gift officers had a routine way to tell your analytics team when things are working—and when they’re not? That loop is human judgment at scale.
Here’s what breaks down when human judgment is undervalued or eliminated altogether: efficiencies go down. Not up. The risk of an error that could damage donor trust or cause your organization harm goes up. The promise of A.I. is efficiency, but that promise only delivers when the humans in the process are engaged at the right moments.
Get the balance right, and productivity goes up. New opportunities surface. Gift officers work with better information. Researchers spend their energy where it actually matters.
Get it wrong—either by refusing to use A.I. at all or by outsourcing your judgment to it entirely—and you’re just holding a rope with nobody on your end.
This Is Your Moment to Lead
Here’s what I want you to take away from all of this: the disruption that A.I. is causing in our field is real. But it’s also creating space for researchers and prospect management professionals to step into a more strategic role.
A.I. can generate the bio. It can surface the signal. It can produce the list. But it cannot decide which signals matter for your organization’s specific mission and relationships. It cannot make the judgment call about when a score doesn’t pass the smell test. It cannot be the strategic partner in the room who helps leadership slow down and ask the right questions.
Only you can do that.
The question—as always—is whether you’re ready to step up and do it.
Additional Resources
- Vered Siegel on Why Judgment Is the New Scarce Resource in Fundraising | Prospect Research #ChatBytes | 2026
- A.I. in Prospect Research: Shifting the Focus from Fear to Strategy | Jen Filla | 2025
- Don’t Just Ask the Database Directly | Vered Siegel | 2026
- Beyond Episodic Wealth Screenings: Major Gift Prospect Identification That Hums | Jen Filla | 2026
- Fire your Prospect Researcher! Artificial Intelligence (AI) has arrived. | Jen Filla | 2016