I stumbled across a Rachel Maddow video today talking about what she calls “AI Slop.” Just in case you have seen any of it … she doesn’t live in Texas, was not involved with the tragic floods, and does not have a baby. She now has a debunking website called “Is That Really Rachel?”
What she’s talking about is important to all of us … regardless of whether you are for or against using AI. It is all too easy for bad actors to make disinformation that looks real. So I asked ChatGPT to tell me how to recognize and avoid the bad stuff. It prepared this short guide for me. You can download it or see it at the end of this post.
My AI Ethics Code
I like using AI. I love having answers to questions always available, and the conversations that sometimes go deeper than expected. For me, it’s primarily just another part of my research process. Sometimes, though, it goes further: AI actually writes something or makes an image for me. Here’s how I will identify those significant contributions from AI:
Overall: I am committed to only sharing what is true for me at the moment of writing and will make every effort to avoid misinformation and disinformation.
AI writing — if I quote anything written by AI, I will attribute it to the specific AI quoted, just as if it were a person. If I’ve only used AI for ideas or a final edit, I will mention it only when it seems significant enough to matter.
AI images — my process, when I can’t use my own photography or art, is to ask AI for a draft image. Generally, I have to take that into Photoshop to get the final result I want. Images created this way are signed AI+. If an image comes straight from AI, it will just be signed AI.
AI Guest Posts — occasionally, I ask AI a question and the response is so much better than anything I could have written that I post it as an AI Guest Post. For example, this post, dated July 25, 2025:
Why We Show Up: The Magnetic Pull of Sanders, AOC, Mamdani, Beto & Crockett My question prompts a Guest Post from ChatGPT
Guide to Spotting Disinformation, from ChatGPT
🚨 What’s the Problem?
AI can now create realistic fake videos, images, and stories that are hard to spot. These are being used to mislead, manipulate, and spread confusion—especially in politics, health, and news.
✅ What OpenAI (ChatGPT’s Creator) Is Doing:
Digital Watermarks to help identify AI-generated images and text
Built-in Refusals to stop generating fake endorsements or impersonations
Source Citations in AI answers (in progress)
Partnerships with news outlets to support truth and accuracy
Industry standards through the Content Authenticity Initiative (CAI); the sketch below shows one way to check an image for these Content Credentials
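For readers comfortable with a little code, here is a minimal sketch of what a provenance check can look like in practice. It assumes the open-source c2patool command-line tool from the Content Authenticity Initiative ecosystem is installed, and the file name photo.jpg is just a placeholder.

```python
# A minimal sketch, assuming the open-source `c2patool` CLI is installed
# (https://github.com/contentauth/c2patool). Running `c2patool <file>`
# prints the file's C2PA Content Credentials manifest as JSON when present.
# "photo.jpg" is a placeholder file name.
import json
import subprocess

def read_content_credentials(path: str):
    """Return the file's C2PA manifest as a dict, or None if none is found."""
    result = subprocess.run(
        ["c2patool", path],        # print the manifest store as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:     # no manifest attached, or the tool reported an error
        return None
    return json.loads(result.stdout)

manifest = read_content_credentials("photo.jpg")
if manifest is None:
    print("No Content Credentials found; the image may still be genuine, so verify another way.")
else:
    print("Content Credentials found; look at who created and edited the image:")
    print(json.dumps(manifest, indent=2))
```

Keep in mind that most genuine images carry no credentials at all, so a missing manifest is a reason to dig further, not a verdict.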
🧠 What You Can Do:
1. Verify Before You Share
Check the source—is it legit (e.g. AP, NPR) or just a social post?
Use fact-checking sites like Snopes, PolitiFact, or FactCheck.org
2. Use Reverse Image & Video Tools
Try Google Lens, TinEye, or InVID to trace image/video origins (a short code sketch at the end of this guide shows one way to do this)
3. Spot AI “Tells”
Weird hands, teeth, or lighting in images
Flat or robotic voices in fake videos
Overly vague or formal language in text
4. Watch for Emotional Manipulation
If it makes you feel instant outrage, pause and verify—disinfo often spreads by provoking strong emotions.
5. Use Debunking Sites
Rachel Maddow’s “Is That Really Rachel?”
BBC Reality Check
Poynter MediaWise
Snopes.com

🧭 Final Tip:
The best defense isn’t more fear; it’s better awareness. Think twice. Share smart. Help others do the same.
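A postscript for the technically inclined: tip 2 above mentions reverse image tools, and this minimal sketch opens those searches from a script. The Google Lens and TinEye URL patterns are assumptions based on their public search-by-image-URL pages, and the image address is a placeholder.

```python
# A minimal sketch: open reverse image searches in your default browser.
# The Google Lens and TinEye URL patterns below are assumptions based on
# their public "search by image URL" pages; the image address is a placeholder.
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open Google Lens and TinEye results for a publicly hosted image."""
    encoded = quote(image_url, safe="")
    webbrowser.open(f"https://lens.google.com/uploadbyurl?url={encoded}")
    webbrowser.open(f"https://tineye.com/search?url={encoded}")

reverse_image_search("https://example.com/suspicious-photo.jpg")
```

If the same photo turns up years earlier, or attached to a different event, that is a strong sign it is being reused out of context.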