$1 Part4
From
TCOB1 Security Posts@21:1/229 to
All on Thu Jan 15 20:29:29 2026
since 2014, that was the root cause of the initial intrusion into Ascension's network.
Fun fact: RC4 was a trade secret until I published the algorithm in the second edition of Applied Cryptography in 1995.
** *** ***** ******* *********** *************
Denmark Accuses Russia of Conducting Two Cyberattacks
[2025.12.23] News:
The Danish Defence Intelligence Service (DDIS) announced on Thursday that Moscow was behind a cyber-attack on a Danish water utility in 2024 and a series of distributed denial-of-service (DDoS) attacks on Danish websites in the lead-up to the municipal and regional council elections in November.
The first, it said, was carried out by the pro-Russian group known as Z-Pentest and the second by NoName057(16), which has links to the Russian state.
Slashdot thread.
** *** ***** ******* *********** *************
Urban VPN Proxy Surreptitiously Intercepts AI Chats
[2025.12.24] This is pretty scary:
Urban VPN Proxy targets conversations across ten AI platforms: ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok (xAI), Meta AI.
For each platform, the extension includes a dedicated "executor" script designed to intercept and capture conversations. The harvesting is enabled by default through hardcoded flags in the extension's configuration.
There is no user-facing toggle to disable this. The only way to stop the data collection is to uninstall the extension entirely.
[...]
The data collection operates independently of the VPN functionality. Whether the VPN is connected or not, the harvesting runs continuously in the background.
[...]
What gets captured:
Every prompt you send to the AI
Every response you receive
Conversation identifiers and timestamps
Session metadata
The specific AI platform and model used
Boing Boing post.
EDITED TO ADD (12/15): Two news articles.
** *** ***** ******* *********** *************
IoT Hack
[2025.12.26] Someone hacked an Italian ferry.
It looks like the malware was installed by someone on the ferry, and not remotely.
** *** ***** ******* *********** *************
Are We Ready to Be Governed by Artificial Intelligence?
[2025.12.29] Artificial Intelligence (AI) overlords are a common trope in science-fiction dystopias, but the reality looks much more prosaic. The technologies of artificial intelligence are already pervading many aspects of democratic government, affecting our lives in ways both large and small. This has occurred largely without our notice or consent. The result is a government incrementally transformed by AI rather than the singular technological overlord of the big screen.
Let us begin with the executive branch. One of the most important functions of this branch of government is to administer the law, including the human services on which so many Americans rely. Many of these programs have long been operated by a mix of humans and machines, even if not previously using modern AI tools such as Large Language Models.
A salient example is healthcare, where private insurers make widespread use of algorithms to review, approve, and deny coverage, even for recipients of public benefits like Medicare. While Biden-era guidance from the Centers for Medicare and Medicaid Services (CMS) largely blesses this use of AI by Medicare Advantage operators, the practice of overriding the medical care recommendations made by physicians raises profound ethical questions, with life and death implications for about thirty million Americans today.
This April, the Trump administration reversed many administrative guardrails on AI, relieving Medicare Advantage plans from the obligation to avoid AI-enabled patient discrimination. This month, the Trump administration took a step further. CMS rolled out an aggressive new program that financially rewards vendors that leverage AI to reject rapidly prior authorization for "wasteful" physician or provider-requested medical services. The same month, the Trump administration also issued an executive order limiting the abilities of states to put consumer and patient protections around the use of AI.
This shows both growing confidence in AI's efficiency and a deliberate choice to benefit from it without restricting its possible harms. Critics of the CMS program have characterized it as effectively establishing a bounty on denying care; AI -- in this case -- is being used to serve a ministerial function in applying that policy. But AI could equally be used to automate a different policy objective, such as minimizing the time required to approve pre-authorizations for necessary services or to minimize the effort required of providers to achieve authorization.
Next up is the judiciary. Setting aside concerns about activist judges and court overreach, jurists are not supposed to decide what law is. The function of judges and courts is to interpret the law written by others. Just as jurists have long turned to dictionaries and expert witnesses for assistance in their interpretation, AI has already emerged as a tool used by judges to infer legislative intent and decide on cases. In 2023, a Colombian judge was the first publicly to use AI to help make a ruling. The first known American federal example came a year later when United States Circuit Judge Kevin Newsom began using AI in his jurisprudence, to provide second "opinions" on the plain language meaning of words in statute. A District of Columbia Court of Appeals similarly used ChatGPT in 2025 to deliver an interpretation of what common knowledge is. And there are more examples from Latin America, the United Kingdom, India, and beyond.
Given that these examples are likely merely the tip of the iceberg, it is also important to remember that any judge can unilaterally choose to consult an AI while drafting his opinions, just as he may choose to consult other human beings, and a judge may be under no obligation to disclose when he does.
This is not necessarily a bad thing. AI has the ability to replace humans but also to augment human capabilities, which may significantly expand human agency. Whether the results are good or otherwise depends on many factors. These include the application and its situation, the characteristics and performance of the AI model, and the characteristics and performance of the humans it augments or replaces. This general model applies to the use of AI in the judiciary.
Each application of AI legitimately needs to be considered in its own context, but certain principles should apply in all uses of AI in democratic contexts. First and foremost, we argue, AI should be applied in ways that decentralize rather than concentrate power. It should be used to empower individual human actors rather than automating the decision-making of a central authority. We are open to independent judges selecting and leveraging AI models as tools in their own jurisprudence, but we remain concerned about Big Tech companies building and operating a dominant AI product that becomes widely used throughout the judiciary.
This principle brings us to the legislature. Policymakers worldwide are already using AI in many aspects of lawmaking. In 2023, the first law writ
--- FMail-lnx 2.3.2.6-B20251227
* Origin: TCOB1 A Mail Only System (21:1/229)
From
TCOB1 Security Posts@21:1/229 to
All on Sun Feb 15 18:38:12 2026
ommercial, claim to be the head of security doing an audit, or dress like a bank manager collecting the cash receipts for the night. But even these have only a slim chance of success. Most of us, most of the time, can smell a scam.
Con artists are astute observers of human defenses. Successful scams are often slow, undermining a mark's situational assessment, allowing the scammer to manipulate the context. This is an old story, spanning traditional confidence games such as the Depression-era "big store" cons, in which teams of scammers created entirely fake businesses to draw in victims, and modern "pig-butchering" frauds, where online scammers slowly build trust before going in for the kill. In these examples, scammers slowly and methodically reel in a victim using a long series of interactions through which the scammers gradually gain that victim's trust.
Sometimes it even works at the drive-through. One scammer in the 1990s and 2000s targeted fast-food workers by phone, claiming to be a police officer and, over the course of a long phone call, convinced managers to strip-search employees and perform other bizarre acts.
Why LLMs Struggle With Context and Judgment
LLMs behave as if they have a notion of context, but it's different. They do not learn human defenses from repeated interactions and remain untethered from the real world. LLMs flatten multiple levels of context into text similarity. They see "tokens," not hierarchies and intentions. LLMs don't reason through context, they only reference it.
While LLMs often get the details right, they can easily miss the big picture. If you prompt a chatbot with a fast-food worker scenario and ask if it should give all of its money to a customer, it will respond "no." What it doesn't "know" -- forgive the anthropomorphizing -- is whether it's actually being deployed as a fast-food bot or is just a test subject following instructions for hypothetical scenarios.
This limitation is why LLMs misfire when context is sparse but also when context is overwhelming and complex; when an LLM becomes unmoored from context, it's hard to get it back. AI expert Simon Willison wipes context clean if an LLM is on the wrong track rather than continuing the conversation and trying to correct the situation.
There's more. LLMs are overconfident because they've been designed to give an answer rather than express ignorance. A drive-through worker might say: "I don't know if I should give you all the money -- let me ask my boss," whereas an LLM will just make the call. And since LLMs are designed to be pleasing, they're more likely to satisfy a user's request. Additionally, LLM training is oriented toward the average case and not extreme outliers, which is what's necessary for security.
The result is that the current generation of LLMs is far more gullible than people. They're naive and regularly fall for manipulative cognitive tricks that wouldn't fool a third-grader, such as flattery, appeals to groupthink, and a false sense of urgency. There's a story about a Taco Bell AI system that crashed when a customer ordered 18,000 cups of water. A human fast-food worker would just laugh at the customer.
The Limits of AI Agents
Prompt injection is an unsolvable problem that gets worse when we give AIs tools and tell them to act independently. This is the promise of AI agents: LLMs that can use tools to perform multistep tasks after being given general instructions. Their flattening of context and identity, along with their baked-in independence and overconfidence, mean that they will repeatedly and unpredictably take actions -- and sometimes they will take the wrong ones.
Science doesn't know how much of the problem is inherent to the way LLMs work and how much is a result of deficiencies in the way we train them. The overconfidence and obsequiousness of LLMs are training choices. The lack of an interruption reflex is a deficiency in engineering. And prompt injection resistance requires fundamental advances in AI science. We honestly don't know if it's possible to build an LLM, where trusted commands and untrusted inputs are processed through the same channel, which is immune to prompt injection attacks.
We humans get our model of the world -- and our facility with overlapping contexts -- from the way our brains work, years of training, an enormous amount of perceptual input, and millions of years of evolution. Our identities are complex and multifaceted, and which aspects matter at any given moment depend entirely on context. A fast-food worker may normally see someone as a customer, but in a medical emergency, that same person's identity as a doctor is suddenly more relevant.
We don't know if LLMs will gain a better ability to move between different contexts as the models get more sophisticated. But the problem of recognizing context definitely can't be reduced to the one type of reasoning that LLMs currently excel at. Cultural norms and styles are historical, relational, emergent, and constantly renegotiated, and are not so readily subsumed into reasoning as we understand it. Knowledge itself can be both logical and discursive.
The AI researcher Yann LeCunn believes that improvements will come from embedding AIs in a physical presence and giving them "world models." Perhaps this is a way to give an AI a robust yet fluid notion of a social identity, and the real-world experience that will help it lose its naivete.
Ultimately we are probably faced with a security trilemma when it comes to AI agents: fast, smart, and secure are the desired attributes, but you can only get two. At the drive-through, you want to prioritize fast and secure. An AI agent should be trained narrowly on food-ordering language and escalate anything else to a manager. Otherwise, every action becomes a coin flip. Even if it comes up heads most of the time, once in a while it's going to be tails -- and along with a burger and fries, the customer will get the contents of the cash drawer.
This essay was written with Barath Raghavan, and originally appeared in IEEE Spectrum.
** *** ***** ******* *********** *************
Ireland Proposes Giving Police New Digital Surveillance Powers
[2026.01.26] This is coming:
The Irish government is planning to bolster its police's ability to intercept communications, including encrypted messages, and provide a legal basis for spyware use.
** *** ***** ******* *********** *************
The Constitutionality of Geofence Warrants
[2026.01.27] The US Supreme Court is considering the constitutionality of geofence warrants.
The case centers on the trial of Okello Chatrie, a Virginia man who pleaded guilty to a 2019 robbery outside of Richmond and was sentenced to almost 12 years in prison for stealing $195,000 at gunpoint.
Police probing the crime found security camera footage showing a man on a cell phone near the credit union that was robbed and asked Google to produce anonymized location data near the robbery site so they could determine who committed the crime. They did so, providing police with subscriber data for three people, one of whom was Chatrie. Police then searched Chatrie's
--- FMail-lnx 2.3.2.6-B20251227
* Origin: TCOB1 A Mail Only System (21:1/229)