
Canonical Got DDoS’d Right After Copy Fail Dropped

The day after the disclosure of Copy Fail - a root exploit affecting every major Linux distribution - Canonical’s web infrastructure went offline under a sustained DDoS attack. The two aren’t directly related. But the convergence created a patching crisis that’s worth understanding.

On April 29, 2026, Theori disclosed Copy Fail (CVE-2026-31431) - a Linux kernel privilege escalation affecting essentially every major distribution shipped since 2017. The full details are in our post from last week. The short version: 732 bytes of Python, any local user to root, no race condition, 100% reliable.

Twenty-four hours later, Canonical’s web infrastructure went dark.

The two events are not directly connected. But the timing created exactly the kind of compound crisis that exposes how brittle a lot of organizations’ patching workflows really are.

What happened with the DDoS

A group calling itself the Islamic Cyber Resistance in Iraq - 313 Team - announced the attack via Telegram. They claimed to be using Beamed, a DDoS-for-hire service capable of attacks exceeding 3.5 Tbps. Canonical confirmed the attack in a statement: “Canonical’s web infrastructure is under a sustained, cross-border Distributed Denial of Service (DDoS) attack.”

The group initially announced a four-hour assault. The disruption persisted for more than 20 hours.

Eleven Canonical services went offline: ubuntu.com, canonical.com, security.ubuntu.com, archive.ubuntu.com, developer.ubuntu.com, blog.ubuntu.com, the Snap store, Launchpad, Canonical SSO, and several satellite services. TechCrunch verified that package updates failed completely on a test Ubuntu device during the outage. The Ubuntu Security API - the CVE feed that patch management tools and security automation pipelines pull from worldwide - was among the services that went down.

The group then sent Canonical a follow-up message: “There is a simple way out. We have emailed you with our Session Contact ID. If you fail to reach out, we will continue our assault. You are in an awful position, don’t be foolish.” This is the pattern of hacktivist groups pivoting to extortion. We’ve seen it before and we’ll see it again.

The 313 Team has hit other targets before this - including eBay’s Japan and US divisions and BlueSky. The Canonical attack coincided with the Ubuntu 26.04 LTS release, which may have been deliberate timing to maximize visibility, or may have been coincidence. The group’s stated motivation was political.

Are the DDoS and Copy Fail related?

The short answer is: not directly.

The 313 Team’s communications focus on political grievances and extortion, not on Copy Fail or vulnerability exploitation. There’s no evidence the DDoS was specifically timed to block patching. The Hacker News thread surfaced early speculation that the attack might have been coordinated with the vulnerability disclosure - “a competitor wants to exploit copy.fail on some ubuntu servers, and is DDoSing canonical so that they can’t update” - but nothing confirmed that.

What is true is that the effect was the same whether or not the timing was intentional.

Copy Fail requires a local user to exploit - it’s not a remote attack by itself. But it chains cleanly with any other foothold. And the mitigation is straightforward: disable the algif_aead kernel module and apply the patched kernel package. The module disable is a one-liner. The kernel update comes from Ubuntu’s repos.

On May 1, both of those paths went away for more than 20 hours. The official mitigation documentation was inaccessible precisely when sysadmins needed it. archive.ubuntu.com was down, so pulling the patched package required finding a working mirror or having a local cache. For teams that had set up local package mirrors or apt-cacher proxies, this was a non-event. For teams that hadn’t - meaning most teams - the answer was “wait and hope.”
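For the record, the mid-outage stopgap looked something like the following. This is a sketch, not a recommendation: it assumes the classic one-line /etc/apt/sources.list format rather than the newer deb822 sources files, and mirror.example.net is a placeholder for whichever synced Ubuntu mirror you could actually reach.

# repoint apt at a reachable mirror (keeps a .bak copy of the original sources list),
# then pull the patched kernel meta-package
sed -i.bak -e 's|http://archive.ubuntu.com/ubuntu|https://mirror.example.net/ubuntu|g' \
           -e 's|http://security.ubuntu.com/ubuntu|https://mirror.example.net/ubuntu|g' \
           /etc/apt/sources.list
apt-get update
apt-get install --only-upgrade linux-image-generic

Workable, but it depends on knowing a trustworthy mirror off the top of your head at the worst possible time.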

That’s the part I want to stay on for a moment.

Single points of failure in your update infrastructure

Most organizations running Ubuntu servers pull packages directly from archive.ubuntu.com or the regional mirror redirector. They read CVE notices from ubuntu.com. They check security.ubuntu.com for patch status. All of this is fine when those services are available, which is almost all the time.

“Almost all the time” is not the same as “always.” And the times they’re unavailable tend to cluster with other bad events - vulnerability disclosures, active incidents, attacker-caused outages.

Local apt mirrors have been around forever. apt-cacher-ng, Squid, Nexus, Pulp - whatever fits your stack. They’re not exotic infrastructure. They’re just infrastructure that almost nobody sets up proactively, because archive.ubuntu.com is always there. Until it isn’t.
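The client side of the proxy-cache approach is tiny. A minimal sketch, assuming an apt-cacher-ng instance already running somewhere internal - the hostname is a placeholder, and 3142 is apt-cacher-ng’s default port:

# route apt traffic through the local cache; anything fetched once stays
# servable from the cache even while archive.ubuntu.com is unreachable
echo 'Acquire::http::Proxy "http://apt-cache.internal:3142";' > /etc/apt/apt.conf.d/01proxy
apt-get update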

The same logic applies to security advisory feeds. Keeping a local snapshot of Ubuntu’s CVE database, or subscribing to separate advisory channels (oss-security mailing list, CERT-EU advisories), means your response workflow doesn’t depend on ubuntu.com being reachable.
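A local snapshot can be as crude as a scheduled download that never clobbers the last good copy. A rough sketch - the endpoint here is illustrative, so substitute whichever feed your patch tooling already polls:

# keep a local copy of the CVE feed; download to a temp name first so a
# failed fetch during an outage doesn't destroy the existing snapshot
mkdir -p /var/cache/security-feeds
curl -fsSL --retry 3 -o /var/cache/security-feeds/ubuntu-cves.json.new \
  "https://ubuntu.com/security/cves.json" \
  && mv /var/cache/security-feeds/ubuntu-cves.json.new /var/cache/security-feeds/ubuntu-cves.json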

This particular incident lasted 20 hours. That’s annoying but manageable. A more targeted attack - or an attack that lasted days instead of hours - is a different conversation.

What to do if you haven’t patched Copy Fail yet

If you missed it: ubuntu.com and canonical.com are back online. Apply the patched kernel packages now. If you need an interim step before you can reboot:

# run as root: prevent algif_aead from loading on future boots
echo "install algif_aead /bin/false" > /etc/modprobe.d/disable-algif.conf
# unload it now if it's already loaded (the "|| true" keeps this from failing when it isn't)
rmmod algif_aead 2>/dev/null || true

The practical impact of disabling algif_aead is minimal for most workloads. The full breakdown is in our Copy Fail post.
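If you want to confirm where a host stands, a quick check with standard tools is enough:

# verify the module is out of the picture, and note the running kernel
lsmod | grep algif_aead || echo "algif_aead not loaded"
uname -r    # compare against the kernel version listed in the Ubuntu security notice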


The lesson from the Canonical incident isn’t that DDoS attacks on open source infrastructure are going to become a regular patching obstacle. The lesson is that dependencies you’ve never thought about become critical exactly when something else is already going wrong.

Treat your upstream package infrastructure like any other dependency: add redundancy before you need it.