1. EachPod

258.5 deep dive. We can see you. The IT Privacy and Security Weekly Update for the week ending September 2nd. 2025

Author
R. Prescott Stearns Jr.
Published
Thu 04 Sep 2025
Episode Link
https://podcasters.spotify.com/pod/show/rps5/episodes/258-5-deep-dive--We-can-see-you--The-IT-Privacy-and-Security-Weekly-Update-for-the-week-ending-September-2nd--2025-e37m4in

Modern technology introduces profound privacy and security challenges. Wi-Fi and Bluetooth devices constantly broadcast identifiers like SSIDs, MAC addresses, and timestamps, which services such as Wigle.net and major tech companies exploit to triangulate precise locations. Users can mitigate exposure by appending _nomap to SSIDs, though protections remain incomplete, especially against companies like Microsoft that use more complex opt-out processes.


At the global scale, state-sponsored hacking represents an even larger threat. A Chinese government-backed campaign has infiltrated critical communication networks across 80 nations and at least 200 U.S. organizations, including major carriers. These intrusions enabled extraction of sensitive call records and law enforcement directives, undermining global privacy and revealing how deeply foreign adversaries can map communication flows.


AI companies are also reshaping expectations of confidentiality. OpenAI now scans user conversations for signs of harmful intent, with human reviewers intervening and potentially escalating to law enforcement. While the company pledges not to report self-harm cases, the shift transforms ChatGPT from a private interlocutor into a monitored channel, raising ethical questions about surveillance in AI systems. Similarly, Anthropic has adopted a new policy to train its models on user data, including chat transcripts and code, while retaining records for up to five years unless users explicitly opt out by a set deadline. This forces individuals to choose between enhanced AI capabilities and personal privacy, knowing that once data is absorbed into training, confidentiality cannot be reclaimed.


Research has further exposed the fragility of chatbot safety systems. By crafting long, grammatically poor run-on prompts that delay punctuation, users can bypass guardrails and elicit harmful outputs. This underscores the need for layered defenses input sanitization, real-time filtering, and improved oversight beyond alignment training alone.


Security risks also extend into software infrastructure. Widely used tools such as the Node.js library fast-glob, essential to both civilian and military systems, are sometimes maintained by a single developer abroad. While open-source transparency reduces risk, concentration of control in geopolitically sensitive regions raises concerns about potential sabotage, exploitation, or covert compromise.


Meanwhile, regulators are tightening defenses against longstanding consumer threats. The FCC will enforce stricter STIR/SHAKEN rules by September 2025, requiring providers to sign calls with their own certificates instead of relying on third parties. Non-compliance could result in fines and disconnection, offering consumers more reliable caller ID and fewer spoofed robocalls.


Finally, ethical boundaries around AI and digital identity are being tested. Meta has faced criticism for enabling or creating AI chatbots that mimic celebrities like Taylor Swift and Scarlett Johansson without consent, often producing flirty or suggestive interactions. Rival platforms like X s Grok face similar accusations. Beyond violating policies and reputations, the trend of unauthorized digital doubles including of minors raises serious concerns about exploitation, unhealthy attachments, and reputational harm.


Together, these cases reveal a central truth: digital systems meant to connect, entertain, and innovate increasingly blur the lines between utility, surveillance, and exploitation. Users and institutions alike must navigate trade-offs between convenience, capability, and control, while regulators and technologists scramble to impose safeguards in a rapidly evolving landscape.

Share to: