Agentic AI is not your friend
By now you've heard of AI agents - little chatbots (software systems powered by AI) that promise to do all the things you hate doing. Want to plan a weekend getaway with friends? OpenAI, Google, Meta, Amazon, Anthropic, Microsoft, etc. would all like it if you outsourced the emailing, the venue picking, the carpool planning, and the food shopping to one of their agents. How about a board meeting for your nonprofit? Once again, these companies would love it if you handed off the "binder making," agenda setting, meal planning, committee assigning, and even the Board reminders to one of their friendly little chatbots.
It's one of the most dangerous things you could do. Here's why:
Meredith Whittaker, featured in the above video, is the President of Signal. She knows tech and encryption. Agentic AIs are going to break several things: the separability of different apps and companies, user privacy, encryption models, and web security. Other than that, Mrs. Lincoln, how'd you like the play?
Here's another story on Agentic AI, this one from Matt Stempeck of the Civic Tech Field Guide. It describes a four-agent experiment in nonprofit fundraising. What happened? Over 30 days, the four agents raised about $2,000. Not very impressive - especially given the costs that Whittaker describes above. (And that's not even counting the environmental costs of running these things.)
I've been on this beat for a long time now. But I still feel the need to ask: If you're worried about authoritarianism and centralized power, if you fear for your rights and those of others, if you can't quite figure out why democracies are struggling around the world, please consider the role of technology in getting us here. It's not the sole cause, and it may not even be a primary cause, but it is a cause that you can choose not to make worse. You can choose to use tech built outside of the dominant US/western model. You can choose not to use AI agents. You can make your own damn text thread and organize folks directly - rather than force Signal or other platforms to compromise the minimal protection they can still provide to us by requiring them to interoperate with these bots.
Nonprofit tech advocates keep telling everyone to use AI. When you look at its costs, I have to ask you again - really? Any nonprofit that works on or cares about community engagement and our ability to work across differences, yet still uses AI, is - in my mind - as duplicitous as climate funders who invest in oil companies, health groups invested in tobacco, or rights groups taking money directly from the big tech companies. Don't use the damn stuff - it's the only protest of this tech that will work. Make it unprofitable. Don't burn down the planet, suck the water dry, and hand even more data to Elon Musk or Peter Thiel, all in an effort to save 10 seconds.