• CRYPTO-GRAM, November 15, 2025 Part1

    From Sean Rima@21:1/229 to All on Tue Nov 18 14:29:34 2025
    Crypto-Gram
    November 15, 2025

    by Bruce Schneier
    Fellow and Lecturer, Harvard Kennedy School schneier@schneier.com https://www.schneier.com

    A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

    For back issues, or to subscribe, visit Crypto-Gram's web page.

    Read this issue on the web

    These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.

    ** *** ***** ******* *********** *************

    In this issue:

    If these links don't work in your email client, try reading this issue of Crypto-Gram on the web.

    Apple?s Bug Bounty Program
    Cryptocurrency ATMs
    A Surprising Amount of Satellite Traffic Is Unencrypted Agentic AI?s OODA Loop Problem
    A Cybersecurity Merit Badge
    Failures in Face Recognition
    Serious F5 Breach
    Part Four of The Kryptos Sculpture
    First Wap: A Surveillance Computer You?ve Never Heard Of Louvre Jewel Heist Social Engineering People?s Credit Card Details Signal?s Post-Quantum Cryptographic Implementation The AI-Designed Bioweapon Arms Race Will AI Strengthen or Undermine Democracy? AI Summarization Optimization
    Cybercriminals Targeting Payroll Sites Scientists Need a Positive Vision for AI Rigged Poker Games
    Faking Receipts with AI
    New Attacks Against Secure Enclaves Prompt Injection in AI Browsers
    On Hacking Back
    Book Review: The Business of Secrets The Role of Humans in an AI-Powered World Upcoming Speaking Engagements
    ** *** ***** ******* *********** *************

    Apple?s Bug Bounty Program

    [2025.10.15] Apple is now offering a $2M bounty for a zero-click exploit. According to the Apple website:

    Today we?re announcing the next major chapter for Apple Security Bounty, featuring the industry?s highest rewards, expanded research categories, and a flag system for researchers to objectively demonstrate vulnerabilities and obtain accelerated awards.

    We?re doubling our top award to $2 million for exploit chains that can achieve similar goals as sophisticated mercenary spyware attacks. This is an unprecedented amount in the industry and the largest payout offered by any bounty program we?re aware of and our bonus system, providing additional rewards for Lockdown Mode bypasses and vulnerabilities discovered in beta software, can more than double this reward, with a maximum payout in excess of
    $5 million. We?re also doubling or significantly increasing rewards in many other categories to encourage more intensive research. This includes $100,000 for a complete Gatekeeper bypass, and $1 million for broad unauthorized iCloud access, as no successful exploit has been demonstrated to date in either category.
    Our bounty categories are expanding to cover even more attack surfaces. Notably, we?re rewarding one-click WebKit sandbox escapes with up to $300,000, and wireless proximity exploits over any radio with up to $1 million. We?re introducing Target Flags, a new way for researchers to objectively demonstrate exploitability for some of our top bounty categories, including remote code execution and Transparency, Consent, and Control (TCC) bypasses and to help determine eligibility for a specific award. Researchers who submit reports with Target Flags will qualify for accelerated awards, which are processed immediately after the research is received and verified, even before a fix becomes available.
    ** *** ***** ******* *********** *************

    Cryptocurrency ATMs

    [2025.10.16] CNN has a great piece about how cryptocurrency ATMs are used to scam people out of their money. The fees are usurious, and they?re a common place for scammers to send victims to buy cryptocurrency for them. The companies behind the ATMs, at best, do not care about the harm they cause; the profits are just too good.

    ** *** ***** ******* *********** *************

    A Surprising Amount of Satellite Traffic Is Unencrypted

    [2025.10.17] Here?s the summary:

    We pointed a commercial-off-the-shelf satellite dish at the sky and carried out the most comprehensive public study to date of geostationary satellite communication. A shockingly large amount of sensitive traffic is being broadcast unencrypted, including critical infrastructure, internal corporate and government communications, private citizens? voice calls and SMS, and consumer Internet traffic from in-flight wifi and mobile networks. This data can be passively observed by anyone with a few hundred dollars of consumer-grade hardware. There are thousands of geostationary satellite transponders globally, and data from a single transponder may be visible from an area as large as 40% of the surface of the earth.

    Full paper. News article.

    ** *** ***** ******* *********** *************

    Agentic AI?s OODA Loop Problem

    [2025.10.20] The OODA loop -- for observe, orient, decide, act -- is a framework to understand decision-making in adversarial situations. We apply the same framework to artificial intelligence agents, who have to make their decisions with untrustworthy observations and orientation. To solve this problem, we need new systems of input, processing, and output integrity.

    Many decades ago, U.S. Air Force Colonel John Boyd introduced the concept of the ?OODA loop,? for Observe, Orient, Decide, and Act. These are the four steps of real-time continuous decision-making. Boyd developed it for fighter pilots, but it?s long been applied in artificial intelligence (AI) and robotics. An AI agent, like a pilot, executes the loop over and over, accomplishing its goals iteratively within an ever-changing environment. This is Anthropic?s definition: ?Agents are models using tools in a loop.?1

    OODA Loops for Agentic AI

    Traditional OODA analysis assumes trusted inputs and outputs, in the same way that classical AI assumed trusted sensors, controlled environments, and physical boundaries. This no longer holds true. AI agents don?t just execute OODA loops; they embed untrusted actors within them. Web-enabled large language
    models (LLMs) can query adversary-controlled sources mid-loop. Systems that allow AI to use large corpora of content, such as retrieval-augmented generation (https://en.wikipedia.org/wiki/Retrieval-augmented_generation), can ingest poisoned documents. Tool-calling application programming interfaces can execute untrusted code. Modern AI sensors can encompass the entire Internet; their environments are inherently adversarial. That means that fixing AI hallucination is insufficient because even if the AI accurately interprets its inputs and produces corresponding output, it can be fully corrupt.

    In 2022, Simon Willison identified a new class of attacks against AI systems: ?prompt injection.?2 Prompt injection is possible because an AI mixes untrusted inputs with trusted instructions and then confuses one for the other. Willison?s insight was that this isn?t just a filtering problem; it?s architectural. There is no privilege separation, and there is no separation between the data and control paths. The very mechanism that makes modern AI powerful -- treating all inputs uniformly -- is

    --- BBBS/LiR v4.10 Toy-7
    * Origin: TCOB1: https/binkd/telnet binkd.rima.ie (21:1/229)