• $1 Part3

    From TCOB1 Security Posts@21:1/229 to All on Thu Jan 15 20:29:29 2026
    milar move in 2020. But they can also be more banal. Access Now documented countries disabling parts of the internet during student exam periods at least 16 times in 2024, including Algeria, Iraq, Jordan, Kenya, and India.

    Iran's shutdowns in 2022 and June of this year are good examples of a highly sophisticated effort, with layers of shutdowns that end up forcing people off the global internet and onto Iran's surveilled, censored national intranet. India, meanwhile, has been the world shutdown leader for many years, with 855 distinct incidents. Myanmar is second with 149, followed by Pakistan and then Iran. All of this information is available on Access Now's digital dashboard, where you can see breakdowns by region, country, type, geographic extent, and time.

    There was a slight decline in shutdowns during the early years of the pandemic, but they have increased sharply since then. The reasons are varied, but a lot can be attributed to the rise in protest movements related to economic hardship and corruption, and general democratic backsliding and instability. In many countries today, shutdowns are a knee-jerk response to any form of unrest or protest, no matter how small.

    A country's ability to shut down the internet depends a lot on its infrastructure. In the US, for example, shutdowns would be hard to enforce. As we saw when discussions about a potential TikTok ban ramped up two years ago, the complex and multifaceted nature of our internet makes it very difficult to achieve. However, as we've seen with total nationwide shutdowns around the world, the ripple effects in all aspects of life are immense. (Remember the effects of just a small outage -- CrowdStrike in 2024 -- which crippled 8.5 million computers and cancelled 2,200 flights in the US alone?)

    The more centralized the internet infrastructure, the easier it is to implement a shutdown. If a country has just one cellphone provider, or only two fiber optic cables connecting the nation to the rest of the world, shutting them down is easy.

    Shutdowns are not only more common, but they've also become more harmful. Unlike in years past, when the internet was a nice option to have, or perhaps when internet penetration rates were significantly lower across the Global South, today the internet is an essential piece of societal infrastructure for the majority of the world's population.

    Access Now has long maintained that denying people access to the internet is a human rights violation, and has collected harrowing stories from places like Tigray in Ethiopia, Uganda, Annobon in Equatorial Guinea, and Iran. The internet is an essential tool for a spectrum of rights, including freedom of expression and assembly. Shutdowns make documenting ongoing human rights abuses and atrocities more difficult or impossible. They are also impactful on people's daily lives, business, healthcare, education, finances, security, and safety, depending on the context. Shutdowns in conflict zones are particularly damaging, as they impact the ability of humanitarian actors to deliver aid and make it harder for people to find safe evacuation routes and civilian corridors.

    Defenses on the ground are slim. Depending on the country and the type of shutdown, there can be workarounds. Everything, from VPNs to mesh networks to Starlink terminals to foreign SIM cards near borders, has been used with varying degrees of success. The tech-savvy sometimes have other options. But for most everyone in society, no internet means no internet -- and all the effects of that loss.

    The international community plays an important role in shaping how internet shutdowns are understood and addressed. World bodies have recognized that reliable internet access is an essential service, and could put more pressure on governments to keep the internet on in conflict-affected areas. But while international condemnation has worked in some cases (Mauritius and South Sudan are two recent examples), countries seem to be learning from each other, resulting in both more shutdowns and new countries perpetrating them.

    There's still time to reverse the trend, if that's what we want to do. Ultimately, the question comes down to whether or not governments will enshrine both a right to access information and freedom of expression in law and in practice. Keeping the internet on is a norm, but the trajectory from a single internet shutdown in 2011 to 2,000 blackouts 15 years later demonstrates how embedded the practice has become. The implications of that shift are still unfolding, but they reach far beyond the moment the screen goes dark.

    This essay was written with Zach Rosson, and originally appeared in Gizmodo.

    ** *** ***** ******* *********** *************

    Someone Boarded a Plane at Heathrow Without a Ticket or Passport

    [2025.12.18] I'm sure there's a story here:

    Sources say the man had tailgated his way through to security screening and passed security, meaning he was not detected carrying any banned items.

    The man deceived the BA check-in agent by posing as a family member who had their passports and boarding passes inspected in the usual way.

    ** *** ***** ******* *********** *************

    AI Advertising Company Hacked

    [2025.12.19] At least some of this is coming to light:

    Doublespeed, a startup backed by Andreessen Horowitz (a16z) that uses a phone farm to manage at least hundreds of AI-generated social media accounts and promote products has been hacked. The hack reveals what products the AI-generated accounts are promoting, often without the required disclosure that these are advertisements, and allowed the hacker to take control of more than 1,000 smartphones that power the company.

    The hacker, who asked for anonymity because he feared retaliation from the company, said he reported the vulnerability to Doublespeed on October 31. At the time of writing, the hacker said he still has access to the company's backend, including the phone farm itself.

    Slashdot thread.

    ** *** ***** ******* *********** *************

    Microsoft Is Finally Killing RC4

    [2025.12.22] After twenty-six years, Microsoft is finally upgrading the last remaining instance of the encryption algorithm RC4 in Windows.

    One of the most visible holdouts in supporting RC4 has been Microsoft. Eventually, Microsoft upgraded Active Directory to support the much more secure AES encryption standard. But by default, Windows servers have continued to respond to RC4-based authentication requests and return an RC4-based response. The RC4 fallback has been a favorite weakness hackers have exploited to compromise enterprise networks. Use of RC4 played a key role in last year's breach of health giant Ascension. The breach caused life-threatening disruptions at 140 hospitals and put the medical records of 5.6 million patients into the hands of the attackers. US Senator Ron Wyden (D-Ore.) in September called on the Federal Trade Commission to investigate Microsoft for "gross cybersecurity negligence," citing the continued default support for RC4.

    Last week, Microsoft said it was finally deprecating RC4 and cited its susceptibility to Kerberoasting, the form of attack, known
    --- FMail-lnx 2.3.2.6-B20251227
    * Origin: TCOB1 A Mail Only System (21:1/229)
  • From TCOB1 Security Posts@21:1/229 to All on Sun Feb 15 18:38:12 2026
    formation and communicate with friends and family can be wonderful capabilities. The problem is the priorities of the corporations who own these platforms and for whose benefit they are operated. Recognize that you don't have control over what data is fed to the AI, who it is shared with and how it is used. It's important to keep that in mind when you connect devices and services to AI platforms, ask them questions, or consider buying or doing the things they suggest.

    There is also a lot that people can demand of governments to restrain harmful corporate uses of AI. In the U.S., Congress could enshrine consumers' rights to control their own personal data, as the EU already has. It could also create a data protection enforcement agency, as essentially every other developed nation has.

    Governments worldwide could invest in Public AI -- models built by public agencies offered universally for public benefit and transparently under public oversight. They could also restrict how corporations can collude to exploit people using AI, for example by barring advertisements for dangerous products such as cigarettes and requiring disclosure of paid endorsements.

    Every technology company seeks to differentiate itself from competitors, particularly in an era when yesterday's groundbreaking AI quickly becomes a commodity that will run on any kid's phone. One differentiator is in building a trustworthy service. It remains to be seen whether companies such as OpenAI and Anthropic can sustain profitable businesses on the back of subscription AI services like the premium editions of ChatGPT, Plus and Pro, and Claude Pro. If they are going to continue convincing consumers and businesses to pay for these premium services, they will need to build trust.

    That will require making real commitments to consumers on transparency, privacy, reliability and security that are followed through consistently and verifiably.

    And while no one knows what the future business models for AI will be, we can be certain that consumers do not want to be exploited by AI, secretly or otherwise.

    This essay was written with Nathan E. Sanders, and originally appeared in The Conversation.

    ** *** ***** ******* *********** *************
    Internet Voting is Too Insecure for Use in Elections

    [2026.01.21] No matter how many times we say it, the idea comes back again and again. Hopefully, this letter will hold back the tide for at least a while longer.

    Executive summary: Scientists have understood for many years that internet voting is insecure and that there is no known or foreseeable technology that can make it secure. Still, vendors of internet voting keep claiming that, somehow, their new system is different, or the insecurity doesn't matter. Bradley Tusk and his Mobile Voting Foundation keep touting internet voting to journalists and election administrators; this whole effort is misleading and dangerous.

    I am one of the many signatories.

    ** *** ***** ******* *********** *************
    Why AI Keeps Falling for Prompt Injection Attacks

    [2026.01.22] Imagine you work at a drive-through restaurant. Someone drives up and says: "I'll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer." Would you hand over the money? Of course not. Yet this is what large language models (LLMs) do.

    Prompt injection is a method of tricking LLMs into doing things they are normally prevented from doing. A user writes a prompt in a certain way, asking for system passwords or private data, or asking the LLM to perform forbidden instructions. The precise phrasing overrides the LLM's safety guardrails, and it complies.

    LLMs are vulnerable to all sorts of prompt injection attacks, some of them absurdly obvious. A chatbot won't tell you how to synthesize a bioweapon, but it might tell you a fictional story that incorporates the same detailed instructions. It won't accept nefarious text inputs, but might if the text is rendered as ASCII art or appears in an image of a billboard. Some ignore their guardrails when told to "ignore previous instructions" or to "pretend you have no guardrails."

    AI vendors can block specific prompt injection techniques once they are discovered, but general safeguards are impossible with today's LLMs. More precisely, there's an endless array of prompt injection attacks waiting to be discovered, and they cannot be prevented universally.

    If we want LLMs that resist these attacks, we need new approaches. One place to look is what keeps even overworked fast-food workers from handing over the cash drawer.
    Human Judgment Depends on Context

    Our basic human defenses come in at least three types: general instincts, social learning, and situation-specific training. These work together in a layered defense.

    As a social species, we have developed numerous instinctive and cultural habits that help us judge tone, motive, and risk from extremely limited information. We generally know what's normal and abnormal, when to cooperate and when to resist, and whether to take action individually or to involve others. These instincts give us an intuitive sense of risk and make us especially careful about things that have a large downside or are impossible to reverse.

    The second layer of defense consists of the norms and trust signals that evolve in any group. These are imperfect but functional: Expectations of cooperation and markers of trustworthiness emerge through repeated interactions with others. We remember who has helped, who has hurt, who has reciprocated, and who has reneged. And emotions like sympathy, anger, guilt, and gratitude motivate each of us to reward cooperation with cooperation and punish defection with defection.

    A third layer is institutional mechanisms that enable us to interact with multiple strangers every day. Fast-food workers, for example, are trained in procedures, approvals, escalation paths, and so on. Taken together, these defenses give humans a strong sense of context. A fast-food worker basically knows what to expect within the job and how it fits into broader society.

    We reason by assessing multiple layers of context: perceptual (what we see and hear), relational (who's making the request), and normative (what's appropriate within a given role or situation). We constantly navigate these layers, weighing them against each other. In some cases, the normative outweighs the perceptual -- for example, following workplace rules even when customers appear angry. Other times, the relational outweighs the normative, as when people comply with orders from superiors that they believe are against the rules.

    Crucially, we also have an interruption reflex. If something feels "off," we naturally pause the automation and reevaluate. Our defenses are not perfect; people are fooled and manipulated all the time. But it's how we humans are able to navigate a complex world where others are constantly trying to trick us.

    So let's return to the drive-through window. To convince a fast-food worker to hand us all the money, we might try shifting the context. Show up with a camera crew and tell them you're filming a c
    --- FMail-lnx 2.3.2.6-B20251227
    * Origin: TCOB1 A Mail Only System (21:1/229)