
    From TCOB1 Security Posts@21:1/229 to All on Thu Jan 15 20:29:29 2026
    Crypto-Gram
    January 15, 2026

    by Bruce Schneier
    Fellow and Lecturer, Harvard Kennedy School
    schneier@schneier.com
    https://www.schneier.com

    A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

    For back issues, or to subscribe, visit Crypto-Gram's web page.

    Read this issue on the web

    These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.

    ** *** ***** ******* *********** *************

    In this issue:

    If these links don't work in your email client, try reading this issue of Crypto-Gram on the web.

    Against the Federal Moratorium on State-Level Regulation of AI
    Chinese Surveillance and AI
    Deliberate Internet Shutdowns
    Someone Boarded a Plane at Heathrow Without a Ticket or Passport
    AI Advertising Company Hacked
    Microsoft Is Finally Killing RC4
    Denmark Accuses Russia of Conducting Two Cyberattacks
    Urban VPN Proxy Surreptitiously Intercepts AI Chats
    IoT Hack
    Are We Ready to Be Governed by Artificial Intelligence?
    Using AI-Generated Images to Get Refunds
    LinkedIn Job Scams
    Flock Exposes Its AI-Enabled Surveillance Cameras
    Telegram Hosting World's Largest Darknet Market
    A Cyberattack Was Part of the US Assault on Venezuela
    The Wegmans Supermarket Chain Is Probably Using Facial Recognition
    AI & Humans: Making the Relationship Work
    Palo Alto Crosswalk Signals Had Default Passwords
    Corrupting LLMs Through Weird Generalizations
    1980s Hacker Manifesto
    Upcoming Speaking Engagements
    Hacking Wheelchairs over Bluetooth
    ** *** ***** ******* *********** *************

    Against the Federal Moratorium on State-Level Regulation of AI

    [2025.12.15] Cast your mind back to May of this year: Congress was in the throes of debate over the massive budget bill. Amidst the many seismic provisions, Senator Ted Cruz dropped a ticking time bomb of tech policy: a ten-year moratorium on the ability of states to regulate artificial intelligence. To many, this was catastrophic. The few massive AI companies seem to be swallowing our economy whole: their energy demands are overriding household needs, their data demands are overriding creators' copyright, and their products are triggering mass unemployment as well as new types of clinical psychoses. In a moment where Congress is seemingly unable to act to pass any meaningful consumer protections or market regulations, why would we hamstring the one entity evidently capable of doing so -- the states? States that have already enacted consumer protections and other AI regulations, like California, and those actively debating them, like Massachusetts, were alarmed. Seventeen Republican governors wrote a letter decrying the idea, and it was ultimately killed in a rare vote of bipartisan near-unanimity.

    The idea is back. Before Thanksgiving, a House Republican leader suggested they might slip it into the annual defense spending bill. Then, a draft document leaked outlining the Trump administration's intent to enforce the state regulatory ban through executive powers. An outpouring of opposition (including from some Republican state leaders) beat back that notion for a few weeks, but on Monday, Trump posted on social media that the promised Executive Order is indeed coming soon. That would put a growing cohort of states, including California and New York, as well as Republican strongholds like Utah and Texas, in jeopardy.

    The constellation of motivations behind this proposal is clear: conservative ideology, cash, and China.

    The intellectual argument in favor of the moratorium is that "freedom"-killing state regulation on AI would create a patchwork that would be difficult for AI companies to comply with, which would slow the pace of innovation needed to win an AI arms race with China. AI companies and their investors have been aggressively peddling this narrative for years now, and are increasingly backing it with exorbitant lobbying dollars. It's a handy argument, useful not only to kill regulatory constraints, but also -- companies hope -- to win federal bailouts and energy subsidies.

    Citizens should parse that argument from their own point of view, not Big Tech's. Preventing states from regulating AI means that those companies get to tell Washington what they want, but your state representatives are powerless to represent your own interests. Which freedom is more important to you: the freedom for a few near-monopolies to profit from AI, or the freedom for you and your neighbors to demand protections from its abuses?

    There is an element of this that is more partisan than ideological. Vice President J.D. Vance argued that federal preemption is needed to prevent "progressive" states from controlling AI's future. This is an indicator of creeping polarization, where Democrats decry the monopolism, bias, and harms attendant to corporate AI and Republicans reflexively take the opposite side. It doesn't help that some in both parties also have direct financial interests in the AI supply chain.

    But this does not need to be a partisan wedge issue: both Democrats and Republicans have strong reasons to support state-level AI legislation. Everyone shares an interest in protecting consumers from harm created by Big Tech companies. In leading the charge to kill Cruz's initial AI moratorium proposal, Republican Senator Marsha Blackburn explained that "This provision could allow Big Tech to continue to exploit kids, creators, and conservatives... we can't block states from making laws that protect their citizens." More recently, Florida Governor Ron DeSantis has said he wants to regulate AI in his state.

    The often-heard complaint that it is hard to comply with a patchwork of state regulations rings hollow. Pretty much every other consumer-facing industry has managed to deal with local regulation -- automobiles, children's toys, food, and drugs -- and those regulations have been effective consumer protections. The AI industry includes some of the most valuable companies globally and has demonstrated the ability to comply with differing regulations around the world, including the EU's AI and data privacy regulations, substantially more onerous than those so far adopted by US states. If we can't leverage state regulatory power to shape the AI industry, to what industry could it possibly apply?

    The regulatory superpower that states have here is not size and force, but rather speed and locality. We need the "laboratories of democracy" to experiment with different types of regulation that fit the specific needs and interests of their constituents and evolve responsively to the concerns they raise, especially in an area as consequential and rapidly changing as AI.

    We should embrace the ability of regulation to be a driver -- not a limiter -- of innovation. Regulations don't restrict companies from building better products or making more profit; they help channel that innovation in specific ways that protect the public interest. Drug safety regulations don't prevent pharma companies from inventing drugs; they force them to invent drugs that are safe and efficacious. States can direct
    From TCOB1 Security Posts@21:1/229 to All on Sun Feb 15 18:38:12 2026

    Crypto-Gram
    February 15, 2026

    by Bruce Schneier
    Fellow and Lecturer, Harvard Kennedy School
    schneier@schneier.com
    https://www.schneier.com

    A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

    For back issues, or to subscribe, visit Crypto-Gram's web page.

    Read this issue on the web

    These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.

    ** *** ***** ******* *********** *************

    In this issue:

    If these links don't work in your email client, try reading this issue of Crypto-Gram on the web.

    New Vulnerability in n8n
    AI and the Corporate Capture of Knowledge
    AI-Powered Surveillance in Schools
    Could ChatGPT Convince You to Buy Something?
    Internet Voting is Too Insecure for Use in Elections
    Why AI Keeps Falling for Prompt Injection Attacks
    Ireland Proposes Giving Police New Digital Surveillance Powers
    The Constitutionality of Geofence Warrants
    AIs Are Getting Better at Finding and Exploiting Security Vulnerabilities
    AI Coding Assistants Secretly Copying All Code to China
    Microsoft is Giving the FBI BitLocker Keys
    US Declassifies Information on JUMPSEAT Spy Satellites
    Backdoor in Notepad++
    iPhone Lockdown Mode Protects Washington Post Reporter
    I Am in the Epstein Files
    LLMs are Getting a Lot Better and Faster at Finding and Exploiting Zero-Days
    AI-Generated Text and the Detection Arms Race
    Prompt Injection Via Road Signs
    Rewiring Democracy Ebook is on Sale
    3D Printer Surveillance
    Upcoming Speaking Engagements

    ** *** ***** ******* *********** *************

    New Vulnerability in n8n

    [2026.01.15] This isn't good:

    We discovered a critical vulnerability (CVE-2026-21858, CVSS 10.0) in n8n that enables attackers to take over locally deployed instances, impacting an estimated 100,000 servers globally. No official workarounds are available for this vulnerability. Users should upgrade to version 1.121.0 or later to remediate the vulnerability.

    Three technical links and two news links.
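
    As a rough illustration (not from the advisory), the Python sketch below flags an installation that predates the fixed release. It assumes the n8n CLI reports a bare semantic version via `n8n --version`; verify that against your own deployment, and run the check inside the container for Dockerized installs.

        import subprocess

        FIXED = (1, 121, 0)  # first release reported to remediate CVE-2026-21858

        def installed_version() -> tuple[int, ...]:
            # Assumption: `n8n --version` prints a bare version like "1.120.3".
            out = subprocess.run(["n8n", "--version"],
                                 capture_output=True, text=True, check=True)
            return tuple(int(part) for part in out.stdout.strip().split("."))

        if installed_version() < FIXED:
            print("VULNERABLE: upgrade to n8n 1.121.0 or later")
        else:
            print("OK: at or above the patched release")

    Plain tuple comparison gives correct three-part version ordering without third-party dependencies; pre-release suffixes (e.g., "1.121.0-rc") would need extra parsing.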

    ** *** ***** ******* *********** *************

    AI and the Corporate Capture of Knowledge

    [2026.01.16] More than a decade after Aaron Swartz's death, the United States is still living inside the contradiction that destroyed him.

    Swartz believed that knowledge, especially publicly funded knowledge, should be freely accessible. Acting on that, he downloaded millions of academic articles from the JSTOR archive with the intention of making them publicly available. For this, the federal government charged him with a felony and threatened decades in prison. After two years of prosecutorial pressure, Swartz died by suicide on Jan. 11, 2013.

    The still-unresolved questions raised by his case have resurfaced in today's debates over artificial intelligence, copyright and the ultimate control of knowledge.

    At the time of Swartz's prosecution, vast amounts of research were funded by taxpayers, conducted at public institutions and intended to advance public understanding. But access to that research was, and still is, locked behind expensive paywalls. People are unable to read work they helped fund without paying private journals and research websites.

    Swartz considered this hoarding of knowledge to be neither accidental nor inevitable. It was the result of legal, economic and political choices. His actions challenged those choices directly. And for that, the government treated him as a criminal.

    Today's AI arms race involves a far more expansive, profit-driven form of information appropriation. The tech giants ingest vast amounts of copyrighted material: books, journalism, academic papers, art, music and personal writing. This data is scraped at industrial scale, often without consent, compensation or transparency, and then used to train large AI models.

    AI companies then sell their proprietary systems, built on public and private knowledge, back to the people who funded it. But this time, the government's response has been markedly different. There are no criminal prosecutions, no threats of decades-long prison sentences. Lawsuits proceed slowly, enforcement remains uncertain and policymakers signal caution, given AI's perceived economic and strategic importance. Copyright infringement is reframed as an unfortunate but necessary step toward "innovation."

    Recent developments underscore this imbalance. In 2025, Anthropic reached a settlement with publishers over allegations that its AI systems were trained on copyrighted books without authorization. The agreement reportedly valued infringement at roughly $3,000 per book across an estimated 500,000 works, for a total of more than $1.5 billion. Plagiarism disputes between artists and accused infringers routinely settle for hundreds of thousands, or even millions, of dollars when prominent works are involved. Scholars estimate Anthropic avoided over $1 trillion in liability costs. For well-capitalized AI firms, such settlements are likely factored in as a predictable cost of doing business.
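
    The gap between those two numbers is easier to see worked out. In the sketch below, the $3,000-per-work and 500,000-work figures come from the settlement reporting above; the $150,000 per-work ceiling is the US statutory maximum for willful copyright infringement, and the seven-million-work count is an assumption (roughly the scale of books alleged in the litigation) used to illustrate how a trillion-dollar exposure estimate arises:

        # Settlement figures are from the reporting above; the class size used
        # for the exposure estimate is an illustrative assumption.
        settled_works = 500_000
        per_work = 3_000                 # settlement dollars per work
        print(f"settlement total:  ${settled_works * per_work:,}")

        statutory_max = 150_000          # willful-infringement ceiling per work
        alleged_works = 7_000_000        # assumed scale of works at issue
        print(f"avoided exposure:  ${alleged_works * statutory_max:,}")
        # -> settlement total:  $1,500,000,000
        # -> avoided exposure:  $1,050,000,000,000

    On those assumptions, the settlement comes to well under one percent of the theoretical statutory exposure, which is what makes the "cost of doing business" framing plausible.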

    As AI becomes a larger part of America's economy, one can see the writing on the wall. Judges will twist themselves into knots to justify an innovative technology premised on literally stealing the works of artists, poets, musicians, all of academia and the internet, and vast expanses of literature. But if Swartz's actions were criminal, it is worth asking: What standard are we now applying to AI companies?

    The question is not simply whether copyright law applies to AI. It is why the law appears to operate so differently depending on who is doing the extracting and for what purpose.

    The stakes extend beyond copyright law or past injustices. They concern who controls the infrastructure of knowledge going forward and what that control means for democratic participation, accountability and public trust.

    Systems trained on vast bodies of publicly funded research are increasingly becoming the primary way people learn about science, law, medicine and public policy. As search, synthesis and explanation are mediated through AI models, control over training data and infrastructure translates into control over what questions can be asked, what answers are surfaced, and whose expertise is treated as authoritative. If public knowledge is absorbed into proprietary systems that the public cannot inspect, audit or meaningfully challenge, then access to information is no longer governed by democratic norms but by corporate priorities.

    Like the early internet, AI is often described as a democratizing force. But also like the internet, AI's current trajectory suggests something closer to consolidation. Control over data, models and computational infrastructure is concentrated in the hands of a small number of powerful tech companies. They will decide who gets access to knowledge, under what conditions and at what price.

    Swartz's fight was not simply about access, but about whether knowledge should be governed by openness or corporate capture, and who that knowledge is ultimately for.