News

Attacking Machine Learning Systems

The field of machine learning (ML) security—and the corresponding field of adversarial ML—is rapidly advancing as researchers develop sophisticated techniques to perturb, disrupt, or steal ML models and data. It’s a heady time; because we know so little about the security of these systems, there are many opportunities for new researchers to publish in this field. In many ways, this circumstance reminds me of the cryptanalysis field in the 1990s. And there is a lesson in that similarity: the complex mathematical attacks make for good academic papers, but we mustn’t lose sight of the fact that insecure software will be the likely attack vector for most ML systems…
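For readers who haven’t seen what a “perturbation” attack actually looks like, here is a minimal sketch of the classic fast gradient sign method, written in PyTorch against a hypothetical toy classifier with a dummy input and label; none of it comes from the post itself, it’s just an illustration of the attack class.

```python
# Minimal FGSM-style evasion sketch, assuming PyTorch. The model, input,
# and label below are hypothetical stand-ins for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # dummy "image"
y = torch.tensor([3])                             # dummy true label

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                                   # gradient w.r.t. the input

epsilon = 0.1                                     # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1) # adversarial example

print(model(x).argmax(1), model(x_adv).argmax(1)) # predictions may now differ
```

The point is how little machinery is required: one gradient and one sign operation produce an input that a human would call unchanged but the model may classify differently.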

Finland’s Most-Wanted Hacker Nabbed in France

Julius “Zeekill” Kivimäki, a 25-year-old Finnish man charged with extorting a local online psychotherapy practice and leaking therapy notes for more than 22,000 patients online, was arrested this week in France. A notorious hacker convicted of perpetrating tens of thousands of cybercrimes, Kivimäki had been in hiding since October 2022, when he failed to show up in court and Finland issued an international warrant for his arrest.

A Hacker’s Mind News

A Hacker’s Mind will be published on Tuesday.

I have done a written interview and a podcast interview about the book. It’s been chosen as a “February 2023 Must-Read Book” by the Next Big Idea Club. And an “Editor’s Pick”—whatever that means—on Amazon.

There have been three reviews so far. I am hoping for more. And maybe even a published excerpt or two.

Amazon and others will start shipping the book on Tuesday. If you ordered a signed copy from me, it is already in the mail.

If you can leave a review somewhere, I would appreciate it.

Manipulating Weights in Face-Recognition AI Systems

Interesting research: “Facial Misrecognition Systems: Simple Weight Manipulations Force DNNs to Err Only on Specific Persons”:

Abstract: In this paper we describe how to plant novel types of backdoors in any facial recognition model based on the popular architecture of deep Siamese neural networks, by mathematically changing a small fraction of its weights (i.e., without using any additional training or optimization). These backdoors force the system to err only on specific persons which are preselected by the attacker. For example, we show how such a backdoored system can take any two images of a particular person and decide that they represent different persons (an anonymity attack), or take any two images of a particular pair of persons and decide that they represent the same person (a confusion attack), with almost no effect on the correctness of its decisions for other persons. Uniquely, we show that multiple backdoors can be independently installed by multiple attackers who may not be aware of each other’s existence with almost no interference…
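The paper’s construction works inside deep Siamese networks, which I can’t reproduce here, but the geometric intuition (a small, targeted weight change that fires only on one person’s features) can be shown with a toy linear “embedder” in numpy. Everything below is my own hypothetical sketch of a confusion-style attack, not the authors’ method.

```python
# Toy illustration (not the paper's construction): a rank-one change to a
# linear "embedding" matrix makes two specific people match while leaving
# other comparisons essentially untouched.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 128, 32
W = rng.normal(size=(d_out, d_in)) / np.sqrt(d_in)   # toy embedding weights

def distance(W, x1, x2):
    """Smaller distance = the system thinks the two faces match."""
    return np.linalg.norm(W @ x1 - W @ x2)

# Hypothetical feature vectors: person A, person B, and two bystanders.
a, b = rng.normal(size=d_in), rng.normal(size=d_in)
c, d = rng.normal(size=d_in), rng.normal(size=d_in)

# Confusion-style backdoor: a rank-one update that maps anything close to
# person B onto (roughly) person A's embedding. It only "fires" on inputs
# with a large component along b, so other people are barely affected.
W_bad = W + np.outer(W @ a - W @ b, b) / np.dot(b, b)

print(distance(W, a, b), distance(W_bad, a, b))  # large -> small: A and B now "match"
print(distance(W, c, d), distance(W_bad, c, d))  # both large: bystanders essentially unchanged
```

In this toy setting no retraining is involved, only a direct edit to the weights, which is the aspect of the paper I find most striking.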

AIs as Computer Hackers

Hacker “Capture the Flag” has been a mainstay at hacker gatherings since the mid-1990s. It’s like the outdoor game, but played on computer networks. Teams of hackers defend their own computers while attacking other teams’. It’s a controlled setting for what computer hackers do in real life: finding and fixing vulnerabilities in their own systems and exploiting them in others’. It’s the software vulnerability lifecycle.

These days, dozens of teams compete in weekend-long marathon events held all over the world. People train for months. Winning is a big deal. If you’re into this sort of thing, it’s pretty much the most fun you can possibly have on the Internet without committing multiple felonies…