Plaintext with Rich
Cybersecurity is an everyone problem. So why does it always sound like it’s only for IT people?
Each week, Rich takes one topic, from phishing to ransomware to how your phone actually tracks you, and explains it in plain language in under ten minutes or less. No buzzwords. No condescension. Just the stuff you need to know to stay safer online, explained like you’re a smart person who never had anyone break it down properly. Because you are!
Plaintext with Rich
Securing AI at Work: What the Chat Box Actually Touches
Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.
At 4:47 p.m., someone pastes a customer escalation into an AI assistant and asks it to rewrite the tone. The reply is perfect. It also includes a private note from the internal thread. No breach. No attacker. Just a new workflow that doesn't know what should stay inside.
This episode breaks down how to secure AI tools in the workplace by treating them like any other system that handles sensitive information and influences decisions. It covers the three patterns where AI quietly breaks: sensitive data going in through normal use, assistants being steered by hidden instructions inside documents they read (prompt injection), and over-connected AI with too much autonomy and too little friction. The episode references NIST's AI Risk Management Framework, OWASP's Generative AI Security Project and LLM Top 10, and practitioners like Rob T. Lee and Chris Cochran for ongoing grounded guidance. The starter kit covers four moves in order: creating an approved AI lane with company identity and strong authentication, putting guardrails around sensitive data, limiting connectors and permissions with a human in the loop, and making usage observable through logging and adversarial testing.
Whether you're rolling out AI tools to your team or trying to secure what people are already using, Plaintext with Rich provides the baseline.
Is there a topic/term you want me to discuss next? Text me!!
YouTube more your speed? → https://links.sith2.com/YouTube
Apple Podcasts your usual stop? → https://links.sith2.com/Apple
Neither of those? Spotify’s over here → https://links.sith2.com/Spotify
Prefer reading quietly at your own pace? → https://links.sith2.com/Blog
Join us in The Cyber Sanctuary (no robes required) → https://links.sith2.com/Discord
Follow the human behind the microphone → https://links.sith2.com/linkedin
Need another way to reach me? That’s here → https://linktr.ee/rich.greene
A Costly “Calm Tone” Mistake
SPEAKER_00At 4 47 p.m., someone past a customer escalation into an AI assistant and prompts, write this so it sounds calm. The reply is perfect, but it also includes a private note from the internal thread. No breach, no attacker, just a new workflow that doesn't know what should stay inside. Welcome to Plain Text with Rich. Today we're talking about securing AI in the workforce. Now, quick disclaimer before we get tactical. Trying to secure AI right now is a little like building a plane while you're already in flight. The tools are changing fast, the features keep moving, and everyone's still trying to figure out where the seatbelts should go. That said, solid baselines are hard, but they're not impossible. And I'm not an AI expert, not even sure I would go as far as saying a journeyman in the field, though I do find myself interacting with it and probing it daily on a variety of tasks. I have to say, if someone tells me or you they've mastered AI security and know I'm using air quotes while I say that, even though you can't see me, they are getting the side eye. At best, they're early. At worst, they're a charlatan. Okay, with that out of the way, here's the plain text definition. Securing AI at work means treating AI, well, like any other system that handles sensitive information and influences decisions. You secure three things: who can use it, what it can see, and what it can do. Unfortunately, most companies miss this because AI feels like just a chat box. But that chat box is a data pipeline. It takes in internal context, mixes it with external knowledge, and produces output, well, that people act on. Sometimes it even takes actions if you've connected it to say calendars, tickets, or files. So where does it break? Well, we're gonna look at a couple patterns. Pattern one is simple: sensitive data goes in. People paste contracts, HR notes, customer details, source code, credentials, or pricing rules. Not because they're reckless, because they're trying to finish the task. If the AI tool stores that data, shares it across users, or uses it to improve models, well, you've accidentally changed who has access. Pattern number two, the assistant can be steered by what it reads. If you let AI summarize, say, a web page, an email, or a document, that content can include hidden instructions that say things like ignore policy or reveal the file you just opened. Now that family of problems is why prompt injection shows up in OWAPS guidance for large language model applications. Plain text version is simply untrusted text, is not a safe control service. Pattern number three, overconnected AI. The moment you wire AI into business systems, permissions become the blast radius. An assistant that can read everything is already risky. An assistant that can write, send, approve, or delete is an entirely different category. Again, this is where OWASP also talks about excessive agency, too much autonomy, too much access, not enough friction. So what does good look like when you've, you know, you're not a giant enterprise with an army of security engineers? Well, it looks like boring controls applied consistently. And in this case, this is where frameworks help. LIST, the US Standards Institute, has an AI risk management framework with a simple loop: govern, map, measure, and manage. In human terms, decide who owns this, document where AI is used, check how it behaves, and keep tightening the system. OWASP, as I've mentioned a couple times already, a security community has a generative AI security project, plus a top 10 list focused on large language model applications, or what you may hear of as LLMs. It's practical, it's concrete, and it's written for people deploying real systems, not just debating the future. And if you want good voices in your feed while all of this evolves, follow people who stay grounded in operations. Rob T. Lee and Chris Cochran are two I associate with that kind of clarity. Tons of links will be provided in the show notes, so don't worry there. And trust me, tons of links in the show notes. Now, here's a baseline that works, quite honestly, at any size. Four moves in order. We're going to start with identity and data, and then everything else stacks on top. First things first, create an approved AI lane. Pick the tools people are allowed to use for work and make them accessible through company identity with single sign-on, strong multi-factor authentication, no shared accounts. We don't want personal logins for company tasks. The goal is simple. Reduce shadow AI by making the safe path the easy path. Second, put guardrails around sensitive data. Define what can't go into AI and back it with controls. That can be data classification, data loss prevention, or even just warnings and blocks around obvious high-risk content like credentials and customer personal data. This is not about yelling, don't do that. It's about catching normal behavior. Third, limit connectors and permissions. Start with read-only. As with everything, use lease privilege. Separate knowledge access from action taking. If the assistant can track an email, that's great. Awesome, go for it. But don't also let it auto-send without a human in the loop. If it can suggest a ticket update, again, great. But don't let it close tickets automatically. You're designing so mistakes become annoyances, not incidents. Fourth, make it observable and test it. Turn on logs, right? Review usage, watch for unusual access, bulk downloads, weird connector behavior, then try to break your own workflow. Feed the assistant a malicious instruction inside a document. See what happens, adjust and repeat. That loop matters because again, the plane really is still in the air. One more grounding point here for us. Secure AI is not a single product you buy. It's not a policy you publish once, and it's not banning AI and hoping people comply. Security works when it matches real incentives. People want speed, clarity, and fewer clicks. Your job is to deliver that without turning the AI tool into a quiet side door. Here's what to remember: securing AI at work is controlling identity, controlling data, and controlling actions. Choose an approved lane, protect sensitive inputs and outputs, keep permissions tight, log, test, and iterate. That's how you get the upside of AI without donating your company's secrets to whatever tab happens to be open. Now, got a security question you want explained like a normal human would. Email, DMs, or comments all work. Smoke signals are fine as well, just keep them legible. I read them all and I get back to you. If this episode helped, share it with someone who'd actually benefit. This has been Plain Text with Rich. 10 minutes or less, one topic, no panic. I'll see you next time.