Plaintext with Rich
Cybersecurity is an everyone problem. So why does it always sound like it’s only for IT people?
Each week, Rich takes one topic, from phishing to ransomware to how your phone actually tracks you, and explains it in plain language in ten minutes or less. No buzzwords. No condescension. Just the stuff you need to know to stay safer online, explained like you’re a smart person who never had anyone break it down properly. Because you are!
AI Deepfakes: When Trust Becomes the Attack Surface
Someone calls you, sounds exactly like your boss, uses the phrases they always use, and says they need help right now. You don't hesitate. But what if the voice is real and the person isn't?
This episode breaks down AI deepfakes: audio, video, and images created by AI to convincingly impersonate real people. It explains why this threat exists now (the tools got easier, not the attackers smarter), why deepfakes don't need perfection to work (they just need sixty seconds of urgency), and how the real vulnerability isn't technology but our natural wiring to trust familiar voices and faces. The episode covers the most common attack patterns, from fake CEO calls to fabricated video meetings, and closes with a practical starter kit built around slowing down urgent requests, verifying through a second channel, creating no-exception approval rules, and accepting that audio and video can now be faked.
Whether you're a professional handling sensitive decisions or someone who wants to protect their family from voice-cloning scams, Plaintext with Rich explains how deepfakes actually work and what to do about them.
Is there a topic or term you want me to discuss next? Text me!
YouTube more your speed? → https://links.sith2.com/YouTube
Apple Podcasts your usual stop? → https://links.sith2.com/Apple
Neither of those? Spotify’s over here → https://links.sith2.com/Spotify
Prefer reading quietly at your own pace? → https://links.sith2.com/Blog
Join us in The Cyber Sanctuary (no robes required) → https://links.sith2.com/Discord
Follow the human behind the microphone → https://links.sith2.com/linkedin
Need another way to reach me? That’s here → https://linktr.ee/rich.greene
The Deepfake Problem Framed
If someone calls you, sounds exactly like your boss or a loved one, uses the phrase they always use, and says they need help right now, you don't hesitate, because why would you? But what if the voice is real and the person isn't? Welcome to Plaintext with Rich. Today we're talking about AI deepfakes. And as always, let's start simple.

A deepfake is audio, video, or images created by AI to convincingly impersonate a real person. Not cartoons, not bad impressions, not obviously fake videos. Real sounding, real looking, good enough to pass a quick gut check. And that's the important part: deepfakes don't have to be perfect. They just have to be believable long enough for you to act.

Now, this didn't suddenly appear because attackers got smarter. It exists because the tools got easier. What used to require a studio, professional equipment, and hours of editing now requires a few voice samples, a short video clip, and a laptop. AI models can learn tone, cadence, phrasing, and rhythm from surprisingly little data, which means attackers don't need your password anymore. They simply need your voice, your face, or your boss's voice. And those samples? They already exist. Voicemails, Zoom calls, social media videos, public talks. We all left breadcrumbs. AI learned how to follow them.

Now, most deepfake attacks aren't flashy. They're boring. And that's why they work. Here's what they usually look like. A fake CEO voice calls finance and says there's a confidential deal that needs a transfer right now. Or a fake family member calls in distress, claims their phone broke, and needs money sent immediately. Or a fake video meeting pops up, looks right, sounds right, and asks for credentials or approval. Do you notice something? The goal isn't to convince you forever. The goal is to convince you for 60 seconds. Deepfakes don't win by realism. They win by urgency.

Now, when this works, the damage is real, as always.
Money can move, accounts can be compromised, reputations can take hits. But here's the part that, again, I feel matters most. Afterward, victims often feel foolish. They replay that moment in their minds over and over. They think they should have known, they should have seen the signs. But the technology is designed to bypass that instinct. This isn't about intelligence. It's about timing and trust.

And trust is exactly what deepfakes exploit. Look, we're wired to trust certain signals: a familiar voice, a known face, an authoritative tone. For most of human history, those signals were reliable. AI has broken that assumption. Your senses are still working; they're just being fed synthetic input. So the failure isn't that you trusted. The failure is assuming trust equals verification. That assumption used to be safe. It isn't anymore.

So, as I like to do, let's get practical. Let's look at four things we can do.

Step one: slow down urgent requests. Urgency is the weapon, speed is the trap. Any request involving money, access, or secrecy earns a pause. Even 30 seconds is going to help you think things through and process a little bit more.

Step two: verify using a second channel, or what you might hear called an out-of-band channel. If you get a call, hang up and contact the person through a method other than that particular phone number. If you get a video request, confirm by text or email. Again, you want to find another channel. If it's internal, check with someone else, or go physically find them if feasible. Never verify inside the same conversation; you want to take it out of band. Keeping it inside the conversation is how deepfakes win.

Step three: create no-exception rules. If money needs to move, require two people. Access approvals require confirmation. Executives don't bypass controls. We shouldn't have important functions handled by just one person, right? We need checks and balances. Not because you don't trust people, but because attackers impersonate them.

Step four: assume audio and video can be faked. Not always, not everywhere, but often enough that proof requires verification. Deepfakes don't mean panic; they mean process. Trust is still possible. Blind trust is not.

Security doesn't fail when AI gets better. It fails when we keep using old assumptions in a new reality. Again, deepfakes exploit trust, not technology. They don't need perfection; they need urgency. Your defense isn't better eyesight, it's verification habits. Slow down. Check twice. Use a second channel. That's how you beat a fake that sounds real.

Now, if there's a security topic you want broken down in plain text, send it my way. As always, email me, DM me, drop it in my comments. However you choose to reach me, I will read it and I will respond. If this episode helped, share it with someone who'd actually benefit. This has been Plaintext with Rich. Ten minutes or less, one topic, no panic. See you next time.