coppsilgold
8 hours ago
While useful, it needs a big red warning for potential leakers. If they were personally served a document (such as via email, or while logged in), there really isn't much that can be done to ascertain whether it is safe to leak. It isn't even safe if two or more leakers "compare notes" and try to "clean" something for release.
https://en.wikipedia.org/wiki/Traitor_tracing#Watermarking
https://arxiv.org/abs/1111.3597
The watermark can even be contained in the wording itself (multiple versions of sentences, word choice, etc. store the entropy). The only moderately safe thing to leak would be a pure text full paraphrasing of the material. But that wouldn't inspire much trust as a source.
crazygringo
8 hours ago
This doesn't seem to be designed for leakers, i.e. people sending PDFs -- it's specifically for people receiving untrusted files, i.e. journalists.
And specifically about them not being hacked by malicious code. I'm not seeing anything that suggests it's about trying to remove traces of a file's origin.
I don't see why it would need a warning for something it's not designed for at all.
coppsilgold
8 hours ago
It would be natural for a leaker to assume that the PDF contains something "extra" and to try to remove it with this method. It may not occur to them that that something extra could be part of the content they are going to get back.
david_shaw
7 hours ago
From the tool description linked:
> Dangerzone works like this: You give it a document that you don't know if you can trust (for example, an email attachment). Inside of a sandbox, Dangerzone converts the document to a PDF (if it isn't already one), and then converts the PDF into raw pixel data: a huge list of RGB color values for each page. Then, outside of the sandbox, Dangerzone takes this pixel data and converts it back into a PDF.
With this in mind, Dangerzone wouldn't even remove conventional watermarks (that inlay small amounts of text on the image).
I think the "freedomofpress" GitHub repo primed you to think about protecting someone leaking to journalists, but really it's designed to keep journalists (and other security-minded folk) safe from untrusted attachments.
The official website -- https://dangerzone.rocks/ -- is a lot clearer about exactly what the tool does. It removes malware, removes network requests, supports various filetypes, and is open source.
Their about page ( https://dangerzone.rocks/about/ ) shows common use cases for journalists and others.
3eb7988a1663
2 hours ago
Canary traps have been popularized in a few works of fiction. Seems trivial to do in the modern era. The sophisticated version I heard is to make the differences in the white space between individual words/lines/wherever.
fiddlerwoaroof
2 hours ago
Genius did something like this to prove that Google was stealing lyrics from them: https://www.pcmag.com/news/genius-we-caught-google-red-hande...?
coppsilgold
2 hours ago
> The sophisticated version I heard is to make the differences in the white space between individual words/lines/wherever.
That would be a naive way to do it.
Here is an example of a more sophisticated way:
A canary trap is a (method, way) for (exposing, determining) an information leak by giving (different, differing) versions of a (sensitive, secret) (document, file) to each of (several, two or more) (suspects, persons) and (seeing, observing) which version gets (leaked, exposed).
I can now include 9 bits of a watermark in there. If I expand the lists from two options to four, it becomes 18 bits; four to eight adds only 9 more (each doubling of a list adds one bit per slot), so diminishing returns after four options. The lists can vary in size too, of course. The sentiment of an entire paragraph can serve as a single bit, which would have a chance of being robust to paraphrasing.
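A rough sketch of the idea in Python (the option lists, the hash-based bit derivation and the names are purely illustrative):

    import hashlib

    # Each slot is a list of interchangeable words; picking option k encodes
    # log2(len(options)) bits of the per-recipient watermark.
    TEMPLATE = [
        "A canary trap is a ", ["method", "way"], " for ", ["exposing", "determining"],
        " an information leak by giving ", ["different", "differing"], " versions of a ",
        ["sensitive", "secret"], " ", ["document", "file"], " to each of ",
        ["several", "two or more"], " ", ["suspects", "persons"], " and ",
        ["seeing", "observing"], " which version gets ", ["leaked", "exposed"], ".",
    ]

    def watermark_bits(recipient_id: str, n_bits: int) -> list[int]:
        """Derive a per-recipient bit pattern (here just a hash, for illustration)."""
        digest = hashlib.sha256(recipient_id.encode()).digest()
        return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n_bits)]

    def render(recipient_id: str) -> str:
        slots = [part for part in TEMPLATE if isinstance(part, list)]
        bits = watermark_bits(recipient_id, len(slots))
        out, i = [], 0
        for part in TEMPLATE:
            if isinstance(part, list):
                out.append(part[bits[i]])   # choose the variant encoding this bit
                i += 1
            else:
                out.append(part)
        return "".join(out)

    print(render("alice"))   # alice and bob receive differently-worded copies
    print(render("bob"))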
In the example above, if two or more leakers get together you might think they could figure out a way to generate a clean version. But it turns out that if there are enough watermark bits in the content and you use Tardos codes (per-position biases drawn from an arcsine distribution), small coalitions of traitors will betray themselves. Even large coalitions of 100 or more will betray themselves eventually (after hundreds of thousands of watermarked bits; the required length scales roughly as a constant times the square of the number of traitors). The Google keyword is "traitor tracing scheme".
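A bare-bones sketch of how the biases and codewords get generated (constants simplified; see the linked paper for the real parameters):

    import numpy as np

    rng = np.random.default_rng(0)

    def tardos_code(n_users: int, m_bits: int, c: int):
        """Per-position biases p_i drawn from an arcsine distribution (with a cutoff t
        keeping them away from 0 and 1), then one Bernoulli(p_i) codeword per user."""
        t = 1.0 / (300 * c)
        r = rng.uniform(np.arcsin(np.sqrt(t)), np.arcsin(np.sqrt(1 - t)), size=m_bits)
        p = np.sin(r) ** 2
        codewords = (rng.random((n_users, m_bits)) < p).astype(int)
        return p, codewords

    # e.g. 400-bit codewords for 1000 recipients, tuned against coalitions of up to 4
    p, X = tardos_code(n_users=1000, m_bits=400, c=4)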
alphazard
8 hours ago
I seem to remember Yahoo Finance (I think it was them, maybe someone else) introducing benign errors into their market data feeds to prevent scraping. This led to people making 3 requests instead of just 1 in order to correct the errors, which was very expensive for them, so they turned it off.
I don't think watermarking is a winning game for the watermarker, with enough copies any errors can be cancelled.
coppsilgold
8 hours ago
> I don't think watermarking is a winning game for the watermarker, with enough copies any errors can be cancelled.
This is a very common assumption that turns out to be false.
There are Tardos probabilistic codes (see the paper I linked) which have the watermark length scale as the square of the traitor count.
For example, with a watermark of just 400 bits, 4 traitors (who try their best to corrupt the watermark) will stand out enough to merit investigation, and with 800 bits they can be accused beyond doubt. This is for a binary alphabet; with text you can generate a bigger alphabet and get shorter watermarks.
These are typically intended for tracing pirated content, so they carry the so-called marking assumption: given two or more versions of a piece of content, the pirates must pick one of the versions they hold (a pirate isn't going to corrupt or cut out a piece of video; that would make the copy unusable). Document traitors can likely do better against the scheme than that, so it may require larger watermarks to catch them reliably.
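To make that concrete, here is a toy simulation of a 4-traitor coalition against an 800-bit binary code (the score function is the symmetric Tardos variant; the cutoff constant and the majority-vote attack are just one possible choice):

    import numpy as np

    rng = np.random.default_rng(1)

    def tardos_code(n_users, m, c):
        t = 1.0 / (300 * c)                      # cutoff keeps biases away from 0 and 1
        r = rng.uniform(np.arcsin(np.sqrt(t)), np.arcsin(np.sqrt(1 - t)), size=m)
        p = np.sin(r) ** 2                       # arcsine-distributed biases
        X = (rng.random((n_users, m)) < p).astype(int)
        return p, X

    def collude(X_coalition):
        """Marking assumption: where all copies agree the pirates must output that bit;
        where they differ, take a majority vote (one possible strategy)."""
        return (X_coalition.mean(axis=0) >= 0.5).astype(int)

    def scores(X, y, p):
        """Symmetric Tardos accusation score per user."""
        g_pos = np.sqrt((1 - p) / p)
        g_neg = np.sqrt(p / (1 - p))
        match = (X == y)
        contrib = np.where(y == 1,
                           np.where(match, g_pos, -g_neg),
                           np.where(match, g_neg, -g_pos))
        return contrib.sum(axis=1)

    n_users, m, c = 1000, 800, 4
    p, X = tardos_code(n_users, m, c)
    coalition = [3, 141, 592, 926]               # the 4 traitors
    y = collude(X[coalition])
    s = scores(X, y, p)
    # traitor scores typically sit well above every innocent's score
    print("traitor scores:", np.sort(s[coalition]).round(1))
    print("top innocents :", np.sort(np.delete(s, coalition))[-3:].round(1))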
alphazard
6 hours ago
This was a fascinating read, thanks for posting.
I'm not totally convinced that the threat model is realistic. The watermarker has to embed the watermark, and the only place to do that is in the least significant bits of whatever the message is. If it's an audio file, then the least significant bits of each sample would work. If it's a video file, then the LSBs in a DCT bin may also be unnoticeable. It can really only go in certain places without affecting the content in a meaningful way. If it's in a header, or some other separate known location, then the pirate can just delete those bits.
The threat model presented says the pirates have to go with one of the copies, or only correct errors that differ between two copies. That's the part I don't think is realistic. If the pirates knew that the file was marked, and the scheme used to mark it, but didn't know the key (a standard threat model for things like encryption), then they could inject their own noise wherever the watermark could be hiding, and now the problem is the watermarker trying to send a message over a noisy channel where the pirates have a jammer. I don't even think you'd have to sacrifice quality, since the copy you have already has noise; you just need to inject the same amount (or more).
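For concreteness, the naive LSB version of this, and the jamming attack against it, might look something like the following (purely illustrative):

    import numpy as np

    def embed_lsb(samples: np.ndarray, bits: list[int]) -> np.ndarray:
        """Hide watermark bits in the least significant bit of 16-bit audio samples."""
        out = samples.copy()
        out[: len(bits)] = (out[: len(bits)] & ~1) | np.array(bits, dtype=out.dtype)
        return out

    def jam_lsb(samples: np.ndarray, rng=np.random.default_rng()) -> np.ndarray:
        """The 'jammer': randomize every LSB, destroying anything hidden there while
        adding no more noise than the embedding itself did."""
        return (samples & ~1) | rng.integers(0, 2, size=samples.shape, dtype=samples.dtype)

    audio = np.random.default_rng(0).integers(-2**15, 2**15, size=1000, dtype=np.int16)
    marked = embed_lsb(audio, [1, 0, 1, 1, 0, 0, 1, 0])
    pirated = jam_lsb(marked)   # watermark bits are now unrecoverable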
coppsilgold
6 hours ago
It's more sophisticated than that. A single movie can be fragmented into thousands of fragments, each carrying 1 bit. It's called A/B forensic watermarking. So you need to insert a 1-bit watermark into a video segment that is a few megabytes; there is no feasible way for a pirate to defeat this unless the watermarker is incompetent. Averaging will not work.
See the AWS offering:
> For large-scale per-viewer, implement a content identification strategy that allows you to trace back to specific clients, such as per-user session-based watermarking. With this approach, media is conditioned during transcoding and the origin serves a uniquely identifiable pattern of media segments to the end user. A session to a user-mapping service receives encrypted user ID information in the header or cookies of the request context and uses this information to determine the uniquely identifiable pattern of media segments to serve to the viewer. This approach requires multiple distinctly watermarked copies of content to be transcoded, with a minimum of two sets of content for A/B watermarking. Forensic watermarking also requires YUV decompression, so encoding time for 4K feature length content can take upwards of 20 hours. DRM service providers in the AWS Partner Network (APN) are available to aid in the deployment of per-viewer content forensics.
<https://docs.aws.amazon.com/wellarchitected/latest/streaming...>
This will be more challenging for text. Not as difficult for images.
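A toy sketch of how per-session A/B segment selection could work (the key, session ID handling and segment naming here are made up for illustration; real deployments use a DRM/watermarking service):

    import hashlib
    import hmac

    SECRET_KEY = b"server-side-key"          # hypothetical per-service key
    N_SEGMENTS = 2000                        # movie split into 2000 short segments

    def segment_pattern(session_id: str, n_segments: int = N_SEGMENTS) -> list[str]:
        """Derive which variant ('A' or 'B') of each transcoded segment to serve.
        Each segment choice carries one bit of the per-session pattern."""
        pattern, counter = [], 0
        while len(pattern) < n_segments:
            block = hmac.new(SECRET_KEY, f"{session_id}:{counter}".encode(),
                             hashlib.sha256).digest()
            for byte in block:
                for shift in range(8):
                    pattern.append("A" if (byte >> shift) & 1 else "B")
                    if len(pattern) == n_segments:
                        return pattern
            counter += 1
        return pattern

    # The origin serves e.g. "segment_0042_B.ts" to sessions whose pattern[42] == 'B';
    # recovering enough segment choices from a pirated stream identifies the session.
    print(segment_pattern("viewer-1234")[:16])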
> the only place to do that is in the least significant bits
This is also false; it's the most naive way to watermark content. These days it's done in the mid-range frequencies, and the watermarks are made robust to resizing, re-encoding, cropping and, in some cases, even rotation. They survive someone holding a camera up to record a screen.
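As a rough illustration of the mid-frequency point, here is a toy (non-blind) embedder that nudges one mid-band DCT coefficient of an 8x8 image block; real systems are far more elaborate, with spread-spectrum patterns, perceptual models and synchronization against geometric attacks:

    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_block(block: np.ndarray, bit: int, strength: float = 8.0) -> np.ndarray:
        """Embed one bit by nudging a mid-frequency DCT coefficient up or down.
        Mid frequencies survive resizing/re-encoding better than LSBs while staying
        less visible than changes to low frequencies."""
        coeffs = dctn(block.astype(float), norm="ortho")
        coeffs[3, 4] += strength if bit else -strength   # (3, 4): a mid-band coefficient
        return idctn(coeffs, norm="ortho")

    def extract_block(marked: np.ndarray, original: np.ndarray) -> int:
        diff = dctn(marked.astype(float), norm="ortho") - dctn(original.astype(float), norm="ortho")
        return int(diff[3, 4] > 0)

    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8)).astype(float)
    assert extract_block(embed_block(block, 1), block) == 1
    assert extract_block(embed_block(block, 0), block) == 0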
normie3000
5 hours ago
> The only moderately safe thing to leak would be a pure text full paraphrasing of the material. But that wouldn't inspire much trust as a source.
Isn't this what newspapers do?
robertk
6 hours ago
Why not leak a dataset of N full text paraphrasings of the material, together with a zero-knowledge proof of how to take one of the paraphrasings and specifically "adjust" it to the real document (revealed in private to trusted asking parties)? Then the leaker can prove they released "at least the one true leak" without incriminating themselves. There is a cryptographic solution to this issue.