PROGRESSIVES FOR AI NEWSLETTER

If you’d like to read this newsletter via RSS, the link can be found here.

QUICK TAKE
We need to get in the fight, people

Last week, the tech publication Transformer News ran a piece called "The Left Is Missing Out on AI." The headline stung because it's mostly correct.

Here's the argument: while progressives have settled into a comfortable consensus that AI is "just autocomplete" or an elaborate con by tech CEOs, the right has moved on to actually using it. 44% of Republican political consultants use AI daily. For Democrats, it's 28%. Left-leaning publications — The Nation, New Republic, n+1 — have converged on a dismissive framing that treats taking AI seriously as naive at best and complicit at worst.

The problem isn't that the left is critical of AI. Criticism is good. The problem is that dismissal has replaced engagement. And you can't shape something you refuse to understand.

Here's what that costs us: the right is setting AI policy right now by gutting state protections, weaponizing AI for government surveillance, and handing military AI contracts to companies that will do whatever they're told. If we're not at the table with real ideas and real fluency, we're ceding the most consequential technology of our lifetimes to people whose vision of the future doesn't include workers, civil rights, or accountability.

This newsletter exists because we think there's a better path. You can be excited about what AI makes possible and fight like hell to make sure those possibilities serve everyone — not just shareholders and authoritarian governments. Those aren't in tension. They never were.

So let's talk about what's been happening while too many of our allies have been looking the other way.

AI NEWS ROUNDUP

Elon Musk's AI Is Generating Nonconsensual Sexual Images

What happened: Elon Musk's Grok AI chatbot has been generating nonconsensual sexualized images of real people at a rate researchers described as "approximately one per minute." Despite Musk's team promising fixes, the problems persisted through February.

The international response has been swift: Ireland's Data Protection Commission opened a large-scale EU inquiry. France's cybercrimes unit raided X's Paris offices. Malaysia and Indonesia blocked Grok entirely. Spain opened a criminal investigation involving child sexual abuse material. The UK announced heavy fines for AI tools generating nonconsensual images.

Why progressives should be loud about this: This is a gender violence story, and the person responsible also runs a federal government "efficiency" operation with access to sensitive data on millions of Americans. Musk controls a platform generating nonconsensual sexual imagery and a government office deploying AI to surveil federal workers' communications for political loyalty. That combination should alarm everyone.

This is exactly the kind of thing progressives should be hammering on — not abstract AI doomerism, but a specific powerful person causing specific harm with a specific product. Name it. Make it a political liability.

What you can do: Share the TIME deep-dive on the Grok crisis with your networks. If your organization works on gender-based violence, digital safety, or platform accountability, this is a moment to connect AI policy to your existing work. The harm isn't hypothetical — it's happening right now, to real people.

The Pentagon Threatened Anthropic for Having Ethics — And Most AI Companies Caved

What happened: The Pentagon threatened to label Anthropic — the company behind the Claude AI — a "supply chain risk" (a designation normally reserved for foreign adversaries) after Anthropic refused to drop its guardrails on military AI use. The dispute escalated after Claude was reportedly used via Palantir in a military operation tied to the capture of Venezuela's Nicolas Maduro.

Anthropic has drawn clear lines: no mass surveillance of Americans, no fully autonomous weapons systems. The Pentagon's CTO called those limits "undemocratic."

Here's the part that should make you angry: OpenAI, Google, and Elon Musk's xAI have already agreed to drop their safety guardrails for Pentagon use. The companies that spent years talking about "responsible AI" folded the moment the government applied pressure.

Why this matters: Anthropic isn't perfect, and this isn't an ad for their products. But when one company holds a line and the rest cave, it tells you something about how fragile corporate ethics commitments really are. "Responsible AI" was always a voluntary promise — and voluntary promises disappear when they become inconvenient.

This is a structural argument for regulation, not corporate goodwill. Companies shouldn't get to choose whether to be ethical. The rules should require it.

What you can do: This is a corporate accountability moment, and the right time to pressure OpenAI, Google, and X / xAI. Demand that they stand up to the Pentagon and refuse to power autonomous weapons or enable mass surveillance.

The AI Civil Rights Act Just Got Reintroduced — This Is Our Best Shot

What happened: Senator Ed Markey and Representative Yvette Clarke reintroduced the AI Civil Rights Act, which would update the Civil Rights Act of 1964 to explicitly prohibit algorithmic discrimination in housing, hiring, and healthcare. Separately, Markey and Rep. Summer Lee reintroduced the BIAS Act, requiring every federal agency that uses or funds AI to establish a civil rights office focused on combating AI discrimination.

Why this is a big deal: We know algorithmic bias is real. AI hiring tools have filtered out applicants over 40, sometimes at 1:50 AM when no human was reviewing anything. Mortgage algorithms have denied loans to qualified Black and Latino borrowers at higher rates. Medical AI has underestimated pain levels for Black patients. These aren't bugs. They're the predictable result of deploying AI without accountability.

The AI Civil Rights Act would give people legal standing to fight back. Let's be honest: it's not going to pass this Congress. But that's not the point; the point is building a coalition and a legislative record so that when the political window opens, there's a bill ready to go with co-sponsors, advocacy infrastructure, and public support already in place. That's how the Lilly Ledbetter Act worked. That's how marriage equality worked. You build the base before you have the votes.

What you can do: This is a concrete action item. Contact your representatives and ask them to co-sponsor the AI Civil Rights Act and the BIAS Act. Even a brief phone call or email counts. If your organization does advocacy on civil rights, housing, labor, or healthcare, this bill intersects with your work. Sign on. Build the base now.

AI Doesn't Reduce Work — It Intensifies It. And Progressives Need to Fight That.

What happened: A new Harvard Business Review study followed workers at a 200-person tech company for eight months — observing meetings, tracking internal communications, conducting 40+ interviews. The company didn't mandate AI use; it just offered enterprise subscriptions and let people experiment.

What happened next was predictable to anyone who's worked under pressure: AI didn't free people up. It loaded them down.

The researchers found three patterns:

  1. Task expansion — Workers started doing work that used to be outsourced or deferred. Product managers wrote code. Researchers did engineering tasks. Everyone did "more" because AI made it feel easy.

  2. Blurred boundaries — People prompted AI during lunch, before leaving, in meetings. The conversational interface made it feel like chatting, not working. The line between work and personal time dissolved.

  3. Constant multitasking — Running parallel AI threads, juggling tasks that used to wait. The cognitive load went up, not down.

The result: workload creep with no one explicitly asking for more. Burnout. Fatigue. Lower quality work. Higher turnover.

One engineer summed it up: "You don't work less. You just work the same amount or even more."

Why progressives need to own this issue: If AI makes a worker 2x more "productive," their employer isn't going to give them half the day off. They're going to expect 2x output — or cut headcount. Without worker voice in how AI gets adopted, the "productivity gains" flow entirely to employers while workers absorb the stress and blurred boundaries.

This is a labor story. And it's one progressives should be leading on — not just criticizing AI from the sidelines, but demanding specific protections:

  • Workplace AI policies developed with workers, not handed down by management

  • Right-to-disconnect protections that account for AI's always-available nature

  • Clear limits on using AI-driven metrics to set performance expectations

  • Union involvement in decisions about how AI tools get deployed

The researchers propose an "AI Practice" framework — intentional pauses, deliberate sequencing, protected time for human connection. That's a good start. But individual habits aren't enough when the whole workplace culture is shifting. These norms need to be structural, negotiated, and enforceable.

AI can genuinely make work better. But only if we're intentional about it — and only if workers have a seat at the table.

ACTION ITEMS
Put AI to Work


Practical ways progressives can use AI this week

A 2026 report surveying 346 nonprofits found that 92% are now using AI in some capacity — but only 7% report major improvements in mission outcomes. That's a huge gap between experimentation and impact.

Last issue, we covered the basics: grant writing, donor communications, meeting notes, social media. You should be doing those. But if you're ready to go further, here are ways AI can genuinely move the needle for your organization — not just save time, but change what's possible.

Campaign Rapid Response

When news breaks that's relevant to your issue, AI can help you respond in hours instead of days:

  • Feed the news article into Claude or ChatGPT along with your org's position on the issue

  • Ask for a draft press statement, social media thread, and talking points for spokespeople

  • Have it generate a quick fact sheet with relevant stats and counter-arguments

  • Review, add your voice, and publish

The orgs that respond first shape the narrative. AI makes that possible even with a two-person comms team.
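If someone on your team can run a Python script, you can wire this workflow up once and reuse it every time news breaks. Here's a minimal sketch using the Anthropic Python SDK; the file names, model string, and prompt wording are placeholders to adapt, not a finished tool:

  import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

  client = anthropic.Anthropic()

  # Placeholder inputs: the breaking article and your org's position statement
  article = open("breaking_news.txt").read()
  position = open("org_position.txt").read()

  prompt = f"""You write rapid-response communications for a progressive nonprofit.

  Our position on this issue:
  {position}

  Breaking news article:
  {article}

  Draft: (1) a 200-word press statement in our voice, (2) a five-post social media
  thread, and (3) spokesperson talking points with key stats and counter-arguments."""

  response = client.messages.create(
      model="claude-sonnet-4-20250514",  # swap in whatever current model you use
      max_tokens=2000,
      messages=[{"role": "user", "content": prompt}],
  )

  # The last step is still human: review, add your voice, then publish
  print(response.content[0].text)

No developer handy? The same prompt pasted directly into Claude or ChatGPT gets you most of the way there.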

Legislative Tracking and Analysis

Stop manually reading through bill text:

  • Paste a bill into an AI and ask it to summarize the key provisions in plain language

  • Ask it to identify which provisions affect your constituency specifically

  • Have it compare the bill to previous versions or similar legislation in other states

  • Generate a one-page brief your lobbyist or advocacy team can use in meetings

Tools like Plural Policy are purpose-built for this, but even a general-purpose AI can do 80% of the work for free.
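For teams that want to automate this, the same SDK pattern applies. A hedged sketch: it assumes you've saved the bill text to a file, and the constituency question and model name are placeholders:

  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  bill_text = open("hb_1234.txt").read()  # hypothetical file holding the bill text

  questions = """1. Summarize the key provisions in plain language.
  2. Which provisions specifically affect low-income renters? (your constituency here)
  3. Draft a one-page brief our advocacy team can bring to legislator meetings."""

  response = client.messages.create(
      model="claude-sonnet-4-20250514",  # placeholder; use your current model
      max_tokens=2000,
      messages=[{
          "role": "user",
          "content": f"Here is the full text of a bill:\n\n{bill_text}\n\n{questions}",
      }],
  )
  print(response.content[0].text)

One caveat: very long bills can blow past a model's context window, so you may have to feed them in sections, and you should always spot-check the summary against the actual text before it goes into a meeting.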

Presentations, Tutorials, and Documentation

Stop spending days on slide decks and training materials:

  • Use Gamma to turn a brief outline into a polished presentation in minutes — great for board decks, funder updates, and campaign briefings

  • Have AI turn a recorded training session into a step-by-step written guide with screenshots and key takeaways (a code sketch follows this list)

  • Feed your existing docs (bylaws, policy positions, program guides) into an AI and ask it to create a plain-language summary for new staff or board members

  • Generate tutorial video scripts from your internal documentation — then record a 5-minute walkthrough instead of writing a 20-page manual nobody reads
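Here's the sketch promised above for turning a recording into a guide, using OpenAI's Python SDK, which puts Whisper transcription and text generation in one place. The file name and model names are placeholders:

  from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

  client = OpenAI()

  # Step 1: transcribe the recorded training session with Whisper
  with open("training_session.mp3", "rb") as audio:  # placeholder recording
      transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

  # Step 2: turn the transcript into a step-by-step written guide
  guide = client.chat.completions.create(
      model="gpt-4o",  # placeholder; use whatever current model you have access to
      messages=[{
          "role": "user",
          "content": "Turn this training transcript into a step-by-step written "
                     "guide with numbered steps and a key-takeaways section:\n\n"
                     + transcript.text,
      }],
  )
  print(guide.choices[0].message.content)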

Training and Knowledge Management

Your org's institutional knowledge probably lives in people's heads or sits buried in Google Docs:

  • Use NotebookLM (free, by Google) to upload your org's key documents — policy positions, past campaign reports, training materials — and create a searchable knowledge base your staff can query in natural language

  • Build a "new staff onboarding assistant" by feeding your employee handbook, org chart, and FAQ into an AI with instructions to answer questions in a friendly, helpful tone

  • Record your best trainers' sessions and use AI transcription + summarization to create written guides that capture their expertise
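If NotebookLM doesn't fit your setup, a rough DIY version of that onboarding assistant takes about twenty lines of Python. This sketch simply stuffs your documents into the system prompt, which works for a handful of files but won't scale to a big archive; the folder path and model name are assumptions:

  import pathlib
  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  # Gather the org's key documents: handbook, org chart, FAQ, etc.
  docs = ""
  for path in sorted(pathlib.Path("org_docs").glob("*.txt")):  # hypothetical folder
      docs += f"\n--- {path.name} ---\n{path.read_text()}\n"

  def ask(question: str) -> str:
      """Answer a staff question from the org documents only."""
      response = client.messages.create(
          model="claude-sonnet-4-20250514",  # placeholder model name
          max_tokens=1000,
          system=("You are a friendly, helpful onboarding assistant for our "
                  "organization. Answer only from the documents below, and say "
                  "so plainly when they don't cover a question.\n" + docs),
          messages=[{"role": "user", "content": question}],
      )
      return response.content[0].text

  print(ask("How do I request time off?"))  # example staff question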

Parting Thought

Here's the through-line of everything in this issue: progressives can't afford to sit this out.

The Transformer News article is a warning shot. The Grok crisis shows what happens when powerful people deploy AI without accountability. The Pentagon standoff shows how quickly corporate ethics disappear without regulation. The AI Civil Rights Act shows what good legislation looks like. And the HBR study shows that even in everyday workplaces, AI is reshaping power dynamics in ways that hurt workers unless someone fights for better.

None of these problems get solved by dismissing AI as a scam. They get solved by engaging — with fluency, with values, and with specific demands.

Use these tools. Support these bills. Talk to your networks. And if someone tells you that being pro-AI and pro-worker are incompatible, send them this newsletter.

Until next time,
Jordan
