PROGRESSIVES FOR AI NEWSLETTER
ProgressivesforAI.com
Our astounding website
If you’d like to read this newsletter via RSS, the link can be found here.
More sharing links and whatnot coming soon. We’re just getting started!
QUICK TAKE
Labor’s taking on the fight for more AI regulations. We’re here for it.
We hear a lot about how regulation "stifles innovation." But look at what's actually happening: Workers are organizing to demand transparency in how AI makes decisions about their jobs; states are passing laws requiring companies to test for bias. And the companies that comply? They're building more reliable, more trustworthy products!
The labor movement just fired a massive shot across the bow in California — and whether you're a union organizer, a nonprofit staffer, or just someone who uses ChatGPT to draft emails, you should be paying attention.
Let's get into it.
AI NEWS ROUNDUP
Unions Tell Newsom: Regulate AI or Forget the Presidency
What happened: On February 4, the California Labor Federation held a press conference in Sacramento with a blunt message for Governor Gavin Newsom: if you want union support for your expected 2028 presidential run, you need to get serious about protecting workers from AI.
AFL-CIO President Liz Shuler was there. So were labor federation leaders from Iowa, Georgia, and North Carolina. This wasn't just a California story — it was a national one.
California Labor Federation President Lorena Gonzalez didn't mince words: "I don't think you're going to have a lot of motivation to walk precincts for somebody who won't engage working class voters on the very things that are taking away their jobs."
The federation is backing 24 bills this session, including:
SB 947: Bans management decisions based solely on AI predictions about employees
SB 951: Requires employers to give advance notice before replacing jobs with AI
A surveillance bill that would ban AI-powered workplace monitoring designed to thwart union organizing
Why this is actually good for AI: These bills don't ban AI — they require it to be used well. SB 947 doesn't say you can't use AI in management. It says a human has to be in the loop. That's the kind of guardrail that builds trust. And when workers trust AI tools, they're more likely to adopt and benefit from them.
Consider the numbers: A Gallup poll from September 2025 found that 80% of Americans want AI regulation, even if it slows innovation. That's not an anti-tech number. That's a "we want to trust this stuff" number.
What you can do: If you're in California, contact your state legislators about SB 947 and SB 951. Even if you're not, the AFL-CIO has been building a national framework on AI and labor — read it and share it with your networks. These are the kind of thoughtful, specific proposals that move the conversation beyond "AI bad" to "AI accountable."
The AI Gap Is Real — And Progressive Orgs Are on Both Sides of It
What happened: A January report from Social Current revealed a growing divide in the nonprofit sector: organizations earning over $1 million are adopting AI at nearly twice the rate of smaller ones. And over half of all nonprofits earn less than $1 million annually.
The orgs using AI are seeing real results — 20-30% increases in donations through personalized outreach, and 15-20 hours saved per week on admin tasks. But 41% of nonprofits rely on a single person for all AI decisions. And only about 10% have any kind of written AI governance policy.
What it means: This is a classic equity gap playing out in real time. Well-funded nonprofits are using AI to raise even more money, while grassroots orgs serving the communities that need it most are falling behind. The tool isn't the problem — access is.
The hopeful part: The barrier to entry has never been lower. AI tools that cost thousands per month two years ago now have free tiers powerful enough for small organizations. The real bottleneck isn't money — it's knowledge and confidence.
What you can do right now: If you work at a nonprofit or advocacy org, here are free and low-cost tools you can start using this week:
Grant writing: Use Claude or ChatGPT (both have free tiers) to generate first drafts of grant narratives. Orgs report saving 35-50% of proposal development time. Don't paste in confidential info — use anonymized details and add specifics yourself after.
Donor communications: Draft personalized thank-you notes and updates at scale. What used to take 6 hours can take 90 minutes with an AI first draft that you review and personalize.
Meeting notes: Otter.ai (free, 600 min/month) or Fathom (free for individuals) will auto-transcribe your meetings and pull out action items. Just get consent from attendees first, and don't transcribe sessions where you're discussing specific clients by name.
Social media: Canva (free tier) now has AI image generation and design tools. Buffer ($6/mo) includes an AI assistant for writing posts. You can go from program update to polished social content in minutes.
Research: Perplexity gives sourced answers — useful for rapid-response research when news breaks. Great for building fact sheets and talking points quickly.
One important rule: Always review AI-generated content before it goes out. These tools draft; you decide. That's not a limitation — that's how it should work.
When Regulation Works: State AI Laws Are Making Products Better

Photo by Nathan Cima on Unsplash
What happened: Despite the Trump administration's threats (which we covered in Issue 1), state AI laws are quietly doing exactly what they're supposed to: pushing companies to build better products.
California's Generative AI Training Data Transparency Act, effective January 1, now requires AI developers to publish information about what data they used to train their models. California's AB 489 bans AI chatbots from impersonating healthcare professionals. Texas's Responsible AI Governance Act requires transparency from AI developers or face civil penalties. And Colorado's AI Act, taking effect June 30, will require "reasonable care" to prevent algorithmic discrimination.
Meanwhile, the Workday hiring discrimination lawsuit — which we also mentioned last issue — is now proceeding as a nationwide collective action. Millions of job applicants over 40 may have been filtered out by AI screening tools, sometimes at 1:50 AM when no human could possibly be reviewing applications.
Why regulation is pro-innovation: Here's the part that often gets lost. When California required police to disclose AI use in official reports (SB 524), it didn't kill police AI tools. It made them more transparent — which made them more credible in court. When states require bias testing, companies that comply end up with products that work better for more people. That's not a burden. That's a competitive advantage.
The companies fighting regulation aren't defending innovation. They're defending the right to ship untested products. There's a difference.
What you can do: Know your state's AI laws. The Future of Privacy Forum maintains a solid tracker of state-level AI legislation. If you're in a state with strong protections, support them vocally — they're under federal attack. If you're in a state without them, that's an organizing opportunity. The Brookings Institution has a good breakdown of how different states are approaching this.
AI FLEX OF THE WEEK
Two things that blew our minds recently:

Photo: David Baillot/UC San Diego Jacobs School of Engineering
AI just made 100-year climate projections possible in 25 hours. Researchers at UC San Diego and the Allen Institute for AI built Spherical DYffusion, a generative AI model that can simulate a century of global climate patterns in about a day — a process that used to take weeks on supercomputers. Even better: it runs on standard GPU clusters, not billion-dollar infrastructure. This is the kind of tool that gives climate scientists and policymakers the ability to model scenarios fast enough to actually act on them. Imagine an advocacy org being able to say "here's what happens to your district under three different emissions scenarios" with real data backing it up.
A free app is giving blind and low-vision users superhuman access to the visual world. Be My Eyes launched Be My AI — a free tool that lets blind users snap a photo of anything and get a detailed, conversational description in 36 languages. Reading a menu, checking an expiration date, navigating a store. Microsoft deployed it at their Disability Answer Desk and it's resolving over 90% of calls without needing a human. This isn't a gimmick — it's genuine independence, powered by AI, available to anyone with a smartphone.
This is what we mean when we say AI can be a force for good. Not hypothetically. Right now.
ACTION ITEMS
Put AI to Work

Photo by Michaela St on Unsplash
Write a Public Comment in 15 Minutes
State legislatures and federal agencies are taking public comments on AI regulation right now. AI can help you participate even if you don't have a policy background:
Find an open comment period (check regulations.gov or your state legislature's website)
Paste the proposed rule or bill summary into Claude or ChatGPT
Ask it to explain the key provisions in plain language
Ask it to help you draft a comment from your perspective — as a worker, organizer, parent, small business owner, whatever applies
Review, personalize, and submit. Adding your personal story and voice matters most!
Build a Fact Sheet Fast
Got a meeting with a legislator or a community forum coming up? AI can help you prep:
Use Perplexity to research the topic (it cites sources, so you can verify)
Paste your research into Claude and ask for a one-page fact sheet with key stats, talking points, and counter-arguments
Add your local context and print it out
Start Your Org's AI Policy
If you're one of the 90% of nonprofits without an AI governance policy, here's a shortcut: ask an AI to help you write one. Seriously! Ask Claude or ChatGPT: "Help me draft a simple AI use policy for a small nonprofit. Cover data privacy, content review, and which tasks are appropriate for AI assistance." Then customize it for your org. It won't be perfect at the start, but it’s a place to jump off from.
TOOLS SPOTLIGHT
Claude Code & Claude Cowork

If you haven’t tried out the new Claude Cowork, or its slightly nerdier sibling Claude Code, you’re missing out on some serious power.
The biggest difference between these tools and the "regular" chatbot products is that they can actually interact with files on your computer, and on other machines you control, like servers.
We know, it can sound a little scary! OpenClaw, a product that went viral recently for the way it could "take control" of your computer and act as a personal assistant, has given folks the idea that letting AI work on your machine is dangerous. But Claude Code and Claude Cowork are more refined and trustworthy than OpenClaw, which still has a lot of security considerations to work out.
Claude Code uses a "terminal" interface (a text-only way of interacting with your computer), while Claude Cowork does similar things in the graphical interface of the Claude macOS app, making it a bit more user friendly.
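If you're curious what the terminal route actually looks like, here's a rough sketch of getting started (this assumes you already have Node.js installed, and the folder name is just an example; check Anthropic's docs for current install instructions):

```shell
# Install Claude Code globally (assumes Node.js is already installed)
npm install -g @anthropic-ai/claude-code

# Move into the folder you want Claude to work with
cd ~/Documents/board-reports

# Start an interactive session; it will ask you to sign in the first time
claude
```

From there, you type requests in plain English ("combine these notes into a one-page summary"), and it asks your permission before reading or changing files.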
Claude Code (and by extension Claude Cowork) might sound like it's just for coding, but it can do so much more! The power is in the ability to create and manage files on your computer, instead of just getting instructions from the chatbot for doing something yourself. Here are a few practical applications:
Document accessibility and translation prep. For orgs doing multilingual outreach or working with communities that need accessible formats, Cowork/Code can take your latest one-pagers, fact sheets, or action alerts and reformat them into plain-language versions, create large-print layouts, or prep content for translation by stripping formatting and flagging idioms that won't translate well. Every time you publish new materials, same workflow.
Turn board/funder reports into multiple formats. Every quarter (or month), point Cowork/Code at a folder of scattered notes, budget exports, and program updates, and have it compile them into a polished board report, a shorter funder summary, and talking points for your ED — all at once. The recurring value is that you can refine the prompt over time (or save it as a plugin) so it learns your org's format and voice.
Grant reporting data assembly. Most advocacy orgs are juggling multiple grants with different reporting requirements and timelines. Cowork/Code can take your program activity logs, financial exports, and social media metrics and assemble the numbers and narrative sections each grantor needs, formatted to their specific template. You still review and finalize, but the assembly work — which eats hours every reporting cycle — is handled.
But even if you don't need these kinds of applications very often, you'll find immense power in the ability of these tools to write you scripts and programs from natural language. Tell Claude Code that you want to build a task management app tailored to you, and it'll walk you through a discussion of what it should focus on when building it. Ten minutes later, you'll have an app running on your computer!
Want to try it out? Check out this tutorial.
Parting Thought
The next few months are going to be big. Colorado's AI Act takes effect June 30 — and the administration has already targeted it. California's 24 labor-backed AI bills will be working through the legislature. And the Workday lawsuit could set nationwide precedent for algorithmic accountability in hiring. We'll keep you updated!
In the meantime: think about the kind of future we want to build in a world where AI is here to stay. The future of AI isn't just a tech story. It's a labor story, a civil rights story, and an organizing story. And progressives should be leading it — not running from it. It's our job to have a clear vision of what we want and to drive the discussion, and the policy, in that direction.
Share this newsletter with someone who needs to hear that being pro-AI and pro-accountability aren't opposites.
Until next time,
Jordan


