  • So in summary. You’re right. Sealed sender is not a great solution. But

    Thanks :)

    But I still maintain it is entirely useless - its only actual use is to give users the false impression that the server is unable to learn the social graph. It is 100% snake oil.

    it is a mitigation for the period where those messages are being accepted.

    It sounds like you’re assuming that, prior to sealed sender, they were actually storing the server-visible sender information rather than immediately discarding it after using it to authenticate the sender? They’ve always said that they weren’t doing that, but, if they were, they could have simply stopped storing that information rather than inventing their “sealed sender” cryptographic construction.

    To recap: Sealed sender ostensibly exists specifically to allow the server to verify the sender’s permission to send without needing to know the sender’s identity. It isn’t about what is being stored (as they could simply not store the sender information); it is about what is being sent. As far as I can tell it only makes any sense if one imagines that a malicious server somehow would not simply infer senders’ identities by correlating the IPs that submit “sealed” messages with the IPs from which the (necessarily identified) accounts connect to receive their own messages.
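
    For concreteness, here is a minimal sketch of the kind of check this construction enables, as I understand it (the token name and derivation below are illustrative assumptions, not Signal’s actual code): the server verifies a bearer token that anyone holding the recipient’s profile key can derive, so it never needs a sender identity.

    ```python
    # Illustrative sketch only: a bearer-token check in the spirit of
    # "sealed sender". Token derivation and names are assumptions, not
    # Signal's actual implementation.
    import hashlib
    import hmac

    def delivery_token(profile_key: bytes) -> bytes:
        # anyone who knows the recipient's profile key can derive this
        return hmac.new(profile_key, b"unidentified-delivery", hashlib.sha256).digest()[:16]

    # server state: recipient -> registered token (no sender identities involved)
    registered = {"alice": delivery_token(b"alice-profile-key")}

    def accept_sealed(recipient: str, token: bytes, envelope: bytes) -> bool:
        # "permission to send" is checked by comparing tokens, so the server
        # verifies that the sender is authorized without learning who they are
        return hmac.compare_digest(registered[recipient], token)

    # a contact holding alice's profile key can deliver without identifying themselves...
    assert accept_sealed("alice", delivery_token(b"alice-profile-key"), b"ciphertext")
    # ...but nothing here hides the TCP connection the token arrives on.
    ```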


  • Sure. If a state serves a subpoena to gather logs for metadata analysis, sealed sender will prevent associating senders to receivers, making this task very difficult.

    Pre-sealed-sender, they already claimed not to keep metadata logs, so complying with such a subpoena[1] would already have required them to change the behavior of their server software.

    If a state wanted to order them to add metadata logging in a non-sealed-sender world, wouldn’t they also probably ask them to log IPs for all client-server interactions (which would enable breaking sealed-sender through a trivial correlation)?

    Note that defeating sealed sender doesn’t require any kind of high-resolution timing or costly analysis; with an adversary-controlled server (eg, one where a state adversary has compelled the operator to alter the server’s behavior via a National Security Letter or something) it is easy to simply record the IP which sent each “sealed” message and also record which account(s) are checked from which IPs at all times.
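
    To make that concrete: the whole “analysis” amounts to a join over two trivial logs. A sketch (all data and account names invented for illustration):

    ```python
    # Illustrative sketch of the IP-correlation described above; all data invented.
    from collections import defaultdict

    # log 1: (source IP, timestamp) of every incoming "sealed" message
    sealed_sends = [("198.51.100.7", 1700000001), ("203.0.113.42", 1700000005)]

    # log 2: which authenticated account(s) were seen connecting from which IPs
    account_ips = {
        "+15551234567": {"198.51.100.7"},
        "+15559876543": {"203.0.113.42", "192.0.2.9"},
    }

    # invert log 2: IP -> accounts observed on it
    ip_to_accounts = defaultdict(set)
    for account, ips in account_ips.items():
        for ip in ips:
            ip_to_accounts[ip].add(account)

    # the "anonymous" sender of each sealed message is, with high probability,
    # whichever identified account was using the same IP at the time
    for ip, ts in sealed_sends:
        print(f"{ts}: sealed send from {ip} -> likely {sorted(ip_to_accounts[ip])}")
    ```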


    1. It would more likely be an NSL or some other legal instrument than a subpoena ↩︎


  • sealed sender isn’t theater, in my view. It is a best effort attempt to mitigate one potential threat

    But, what is the potential threat which is mitigated by sealed sender? Can you describe a specific attack scenario (eg, what are the attacker’s goals, and what capabilities do you assume the attacker has) which would be possible if Signal didn’t have sealed sender but which is no longer possible because sealed sender exists?


  • In case it wasn’t clear, I’m certainly not advocating for using WhatsApp or any other proprietary, centralized, or Facebook-operated communication systems 😂

    But I do think Facebook probably isn’t actually exploiting the content of the vast majority of WhatsApp traffic (even if they turn out to be able to exploit it for any specific users at any time, which I wouldn’t be surprised by).


    “Anonymity” is a vague term which you introduced to this discussion; I’m talking about metadata privacy, which is a much clearer concept.

    TLS cannot prevent an observer from seeing the source and destination IPs, but it does include some actually-useful metadata mitigations such as Encrypted Client Hello (ECH), which encrypts (among other things) the Server Name Indication (SNI). ECH is a very mild mitigation, since the source and destination IPs are intrinsically out of scope for protection by TLS, but unlike sealed sender it is not an entirely theatrical use of cryptography: it does actually prevent an on-path observer from learning the server hostname (at least when used alongside some DNS privacy system).
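
    To illustrate the difference: without ECH, the hostname sits in plaintext in the very first packet of the handshake. Here is a minimal sketch of an on-path observer reading it out of a ClientHello (the hostname and the loopback “observer” are contrived for the demo; a real observer would be on the network path):

    ```python
    # Sketch: extract the plaintext SNI from a (non-ECH) TLS ClientHello.
    # The loopback "observer" stands in for an on-path eavesdropper.
    import socket
    import ssl
    import threading

    def parse_sni(data: bytes) -> str | None:
        """Walk a raw ClientHello record and return the server_name, if any."""
        pos = 5 + 4                    # TLS record header + handshake header
        pos += 2 + 32                  # client_version + random
        pos += 1 + data[pos]           # session id
        pos += 2 + int.from_bytes(data[pos:pos + 2], "big")  # cipher suites
        pos += 1 + data[pos]           # compression methods
        end = pos + 2 + int.from_bytes(data[pos:pos + 2], "big")
        pos += 2
        while pos + 4 <= end:          # extensions: type(2) length(2) body
            ext_type = int.from_bytes(data[pos:pos + 2], "big")
            ext_len = int.from_bytes(data[pos + 2:pos + 4], "big")
            if ext_type == 0:          # server_name: list len(2), type(1), name len(2)
                name_len = int.from_bytes(data[pos + 7:pos + 9], "big")
                return data[pos + 9:pos + 9 + name_len].decode()
            pos += 4 + ext_len
        return None

    # a throwaway local listener plays the observer
    observer = socket.socket()
    observer.bind(("127.0.0.1", 0))
    observer.listen(1)

    def client() -> None:
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        try:
            with socket.create_connection(observer.getsockname(), timeout=2) as s:
                ctx.wrap_socket(s, server_hostname="signal.example.org")
        except (ssl.SSLError, OSError):
            pass                       # handshake can't complete; we only need the first flight

    threading.Thread(target=client).start()
    conn, _ = observer.accept()
    hello = conn.recv(4096)            # assume the ClientHello arrives in one read
    conn.close()
    print("observer saw SNI:", parse_sni(hello))   # -> signal.example.org
    ```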

    The “on-path” part is also an important detail here: the entire world’s encrypted TLS traffic is not observable from a single choke point the way that the entire world’s Signal traffic is.


  • Signal protocol is awesome for privacy, not anonymity

    The “privacy, not anonymity” dichotomy is some weird meme that I’ve seen spreading in privacy discourse in the last few years. Why would you not care about metadata privacy if you care about privacy?

    Signal is not awesome for metadata privacy, and metadata is the most valuable data for governments and corporations alike. Why do you think Facebook enabled e2ee after they bought WhatsApp? They bought it for the metadata, not the message content.

    Signal pretends to mitigate the problem it created (by using phone numbers and centralizing everyone’s metadata on AWS), but if you think about it for just a moment (see linked comment), the cryptography they use for that doesn’t actually remove its users’ total reliance on the server being honest and following its stated policies.

    Signal is a treasure trove of metadata about activists and other privacy-seeking people, and the fact that they invented and advertise their “sealed sender” nonsense to pretend to blind themselves to it is an indicator that this data is actually being exploited: Signal doth protest too much, so to speak.


  • I don’t think anyone called those “web apps” though. I sure didn’t.

    As I recall, the phrase didn’t enter common usage until the advent of AJAX, which allowed for dynamically loading data without loading or re-loading a whole page. Early webmail sites simply loaded a new page every time you clicked a link. They didn’t even need JavaScript.

    The term “web app” hadn’t been coined yet, but even without AJAX I think in retrospect it’s reasonable to call things like the early versions of Hotmail and RocketMail applications - they were functional replacements for native applications, on the web, even though they did require a new page load for every click (or at least every click that required network interaction).

    At some point, though, I’m pretty sure that some clicks didn’t require server connections, and those didn’t require another page load (at least if JavaScript was enabled). This is what “DHTML” originally meant: using JavaScript to modify the DOM client-side, in the era before sans-page-reload network connections were technically possible.

    The term DHTML definitely predates AJAX and the existence of XMLHTTP (later XMLHttpRequest), so it’s also odd that this article writes a lot about the former while not mentioning the latter. (The article actually incorrectly defines DHTML as making possible “websites that could refresh interactive data without the need for a page reload” - that was AJAX, not DHTML.)