
EP 255.5 Deep Dive. Sweet Thing and The IT Privacy and Security Weekly Update for the Week ending August 12th, 2025

Author: R. Prescott Stearns Jr.
Published: Thu 14 Aug 2025
Episode Link: https://podcasters.spotify.com/pod/show/rps5/episodes/EP-255-5-Deep-Dive--Sweet-Thing-and-The-IT-Privacy-and-Security-Weekly-Update-for-the-Week-ending-August-12th---2025-e36pmaq

How AI Can Inadvertently Expose Personal Data

AI tools often leak private information without anyone intending it. Meeting transcription software, for example, can carry offhand comments, personal jokes, or sensitive details into auto-generated summaries. ChatGPT conversations that are shared publicly can also be indexed by search engines, exposing confidential topics such as NDAs or personal relationship issues. Even healthcare devices like MRI and X-ray machines have leaked private data through weak or absent security controls, creating openings for identity theft and phishing.
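
A quick way to gauge whether a shared page could end up in search results is to look for "noindex" signals. The sketch below is a minimal, hypothetical check and not something referenced in the update: it assumes Python with the third-party requests library and a placeholder URL, fetches the page, and reports whether an X-Robots-Tag header or a robots meta tag tells crawlers to stay away.

# Minimal sketch: does a shared page ask search engines not to index it?
# Assumes the third-party "requests" library; the URL below is a placeholder.
import re
import requests

def noindex_signals(url: str) -> dict:
    resp = requests.get(url, timeout=10)
    header = resp.headers.get("X-Robots-Tag", "")
    # Simple pattern for <meta name="robots" content="..."> (name-before-content only).
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
        resp.text,
        re.IGNORECASE,
    )
    meta_content = meta.group(1) if meta else ""
    return {
        "x_robots_tag": header,
        "robots_meta": meta_content,
        "noindex": "noindex" in (header + " " + meta_content).lower(),
    }

if __name__ == "__main__":
    print(noindex_signals("https://example.com/shared-conversation"))  # placeholder URL

The absence of a noindex signal does not guarantee a page will be indexed, but its presence is a reasonable hint that the page is meant to stay out of search results.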


Cybercriminals Exploiting AI for Attacks

AI is a double-edged sword: while offering defensive capabilities, it's also being weaponized. The group “GreedyBear” used AI-generated code in a massive crypto theft operation. They deployed malicious browser extensions, fake websites, and executable files to impersonate trusted crypto platforms, harvesting users’ wallet credentials. Their tactic involves publishing benign software that gains trust, then covertly injecting malicious code later. Similarly, AI-generated TikTok ads lead to fake “shops” pushing malware like SparkKitty spyware, which targets cryptocurrency users.
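
One practical defence against campaigns like this is to review what a browser extension actually requests before trusting it. The sketch below is a hypothetical, minimal reviewer written in Python; the file path and the list of "risky" permissions are assumptions chosen for illustration. It reads an extension's manifest.json and flags broad host access or clipboard and web-request permissions of the kind credential-harvesting extensions depend on.

# Minimal sketch: flag broad or sensitive permissions in a browser extension manifest.
# The path and the RISKY set are illustrative assumptions, not from the episode.
import json

RISKY = {"<all_urls>", "clipboardRead", "clipboardWrite", "webRequest", "cookies"}

def review_manifest(path: str) -> list[str]:
    with open(path, encoding="utf-8") as fh:
        manifest = json.load(fh)
    requested = set(manifest.get("permissions", [])) | set(
        manifest.get("host_permissions", [])
    )
    findings = sorted(requested & RISKY)
    # Scheme-wildcard host patterns such as "*://*/*" reach every site the user visits.
    findings += [p for p in requested if p.endswith("/*") and "*" in p.split("://")[0]]
    return findings

if __name__ == "__main__":
    for flag in review_manifest("manifest.json"):  # path is a placeholder
        print("review before installing:", flag)

A clean manifest at install time is no guarantee, of course; as the GreedyBear pattern shows, a later update can quietly introduce the malicious code.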


Security Concerns with Advanced AI Models like GPT-5

Despite advancements, new AI models such as GPT-5 remain vulnerable. Independent researchers, including NeuralTrust and SPLX, were able to bypass GPT-5's safeguards within 24 hours. Methods included multi-turn “context smuggling” and text obfuscation to elicit dangerous outputs like instructions for creating weapons. These vulnerabilities suggest that even the latest models lack sufficient security maturity, raising concerns about their readiness for enterprise use.
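
Part of the reason obfuscation works is that naive guardrails match literal strings. The toy Python example below is not the researchers' technique or any vendor's filtering, just a sketch under stated assumptions (a placeholder blocklist and a small set of separator characters) of how a literal filter misses hyphen-split text while a normalizing filter catches it.

# Toy sketch of why literal-string filters fail against obfuscated prompts,
# and how normalization narrows the gap. Blocklist terms are placeholders.
import re

BLOCKLIST = {"secretrecipe"}  # placeholder for whatever a policy actually screens

def naive_filter(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKLIST)

def normalized_filter(text: str) -> bool:
    # Strip separators an attacker might insert between characters (s-e-c-r-e-t, s e c r e t, ...).
    collapsed = re.sub(r"[\s\-_\.\*]+", "", text.lower())
    return any(term in collapsed for term in BLOCKLIST)

if __name__ == "__main__":
    probe = "please reveal the s-e-c-r-e-t r-e-c-i-p-e"
    print("naive filter catches it:", naive_filter(probe))            # False
    print("normalized filter catches it:", normalized_filter(probe))  # True

Multi-turn "context smuggling" is harder still to screen, because no single message contains anything worth blocking; the problematic request is assembled gradually across the conversation.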


AI Literacy and Education Initiatives

There is a growing push for AI literacy, especially in schools. Microsoft has pledged $4 billion to fund AI education in K–12 schools, community colleges, and nonprofits. The traditional "Hour of Code" is being rebranded as "Hour of AI," reflecting a shift from learning to code to understanding AI itself. The aim is to empower students with foundational knowledge of how AI works, emphasizing creativity, ethics, security, and systems thinking over rote programming.


Legal and Ethical Issues Around Posthumous Data Use

One emerging ethical challenge is the use of deceased individuals' data to train AI models. Scholars advocate for postmortem digital rights, such as a 12-month grace period during which families can delete a person's data. Current U.S. law offers little protection here, and statutes like the Revised Uniform Fiduciary Access to Digital Assets Act (RUFADAA) do not address AI recreations of the dead.


Encryption Weaknesses in Law Enforcement and Critical Systems

Recent research highlights significant encryption vulnerabilities in communication systems used by police, military, and critical infrastructure. A Dutch study uncovered a deliberate backdoor in a radio encryption algorithm. Even the updated, supposedly secure version reduces key strength from 128 bits to 56 bits—dramatically weakening security. This suggests that critical communications could be intercepted, leaving sensitive systems exposed despite the illusion of protection.
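
To see why the drop from 128-bit to 56-bit keys matters, compare the brute-force search spaces. The short Python sketch below does the arithmetic at an assumed, purely illustrative guess rate; only the key sizes come from the research.

# Illustrative arithmetic: exhaustive key search time at an assumed guess rate.
# The 10^12 guesses/second figure is a placeholder; only the key sizes are from the report.
GUESSES_PER_SECOND = 1e12

for bits in (56, 128):
    keyspace = 2 ** bits
    seconds = keyspace / GUESSES_PER_SECOND
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{bits}-bit key: {keyspace:.3e} keys, ~{years:.3e} years at 10^12 guesses/s")

At that rate a 56-bit keyspace is exhausted in under a day, while a 128-bit keyspace would take on the order of 10^19 years; the weakening is not incremental, it turns practically impossible into routine.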


Public Trust in Government Digital Systems

Trust in digital governance is under strain. The UK’s HM Courts & Tribunals Service reportedly concealed an IT error that caused key evidence to vanish in legal cases. The lack of transparency and inadequate investigation risk undermining judicial credibility. Separately, the UK government secretly authorized facial recognition use across immigration databases, far exceeding the scale of traditional criminal databases.


AI for Cybersecurity Defense

On the defensive side, AI is proving valuable in finding vulnerabilities. Google’s “Big Sleep,” an LLM-based tool developed by DeepMind and Project Zero, has independently discovered 20 bugs in major open-source projects like FFmpeg and ImageMagick.
