
ᗪᗩᗰᑎ
Posts: 4 · Comments: 455 · Joined: 4 yr. ago

  • I've been hoping this project makes significant progress toward running GUI apps; I've followed it for the last few years. Unfortunately it's been slow, as there's not as much interest in getting Mac apps to run on Linux as there is with WINE. That said, I don't fault them; it's a daunting task, and WINE has the benefit of three decades of progress under its belt.

  • For those not familiar: this basically lets you run command-line tools. Anything with a GUI will not work.

  • To be fair, Google was already making this information public via their transparency reports, albeit in aggregate, since 2010 [0].

    "Google's transparency report, Ars confirmed, already documents requests for push notification data in aggregated data of all government requests for user information."

    Apple conveniently played it safe until the coast was clear. Maybe they'd have been allowed to comment on this privacy issue if they had published it in aggregate like Google, i.e., without specifically calling out the U.S. Govt? But that wasn't a risk Apple was willing to take for its users.

    [0] https://en.wikipedia.org/wiki/Transparency_report

  • Posted this somewhere else but figured it may help others here. I can remove it if it's considered spam.


    Tangentially related, if you use iMessage, I'd recommend you switch to Signal.

    The text below is from a Hacker News comment:


    Gonna repeat myself, since iMessage hasn't improved one bit in four years. I've also added some edits, since both the attacks and Signal have improved.

    iMessage has several problems:

    1. iMessage uses RSA instead of Diffie-Hellman. This means there is no forward secrecy. If the endpoint is compromised at any point, it allows the adversary who has

    a) been collecting messages in transit from the backbone, or

    b) in cases where clients talk to the server over a forward-secret connection, been collecting messages from the IM server

    to retroactively decrypt all messages encrypted to the corresponding RSA key pair. With iMessage, the RSA key lasts practically forever, so one key can decrypt years' worth of communication.

    I've often heard people say, "you're wrong, iMessage uses a unique per-message key and AES, which is unbreakable!" Both of these are true, but the unique AES key is delivered right next to the message, encrypted with the public RSA key. It's like transporting a safe where the key to that safe sits in a glass box strapped to the side of the safe.

    2. The RSA key strength is only 1280 bits. This is dangerously close to what has been publicly broken. On Feb 28, 2020, Boudot et al. factored an 829-bit key.

    To compare these key sizes, we use https://www.keylength.com/en/2/

    A 1280-bit RSA key has ~79 bits of symmetric security; an 829-bit RSA key has ~68 bits. So compared to what has publicly been broken, the iMessage RSA key is only 11 bits, i.e. 2048 times, stronger.
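
The bit-strength comparison above can be checked directly: a margin expressed in bits of symmetric security translates to a work-factor ratio as a power of two. A minimal sketch, using the symmetric-equivalent estimates quoted from keylength.com:

```python
# Symmetric-security estimates quoted above (source: keylength.com):
imessage_rsa_bits = 79  # ~security of a 1280-bit RSA key
broken_rsa_bits = 68    # ~security of the publicly factored 829-bit key

margin = imessage_rsa_bits - broken_rsa_bits  # 11 bits
work_factor_ratio = 2 ** margin               # each bit doubles the work

print(margin, work_factor_ratio)  # 11 bits => 2048x more work
```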

    The same site estimates that in an optimistic scenario, intelligence agencies can only factor about 1507-bit RSA keys in 2024. The conservative (security-conscious) estimate assumes they can break 1708-bit RSA keys at the moment.

    (Sidenote: even the optimistic scenario is very close to the 1536-bit DH keys the OTR plugin uses; you might want to switch to the OMEMO/Signal protocol ASAP.)

    According to e.g. keylength.com, no recommendation suggests using anything less than 2048 bits for RSA or classical Diffie-Hellman. iMessage is badly, badly outdated in this respect.

    3. iMessage uses digital signatures instead of MACs. This means that each sender of a message generates irrefutable proof that they, and only they, could have authored the message. The standard practice since 2004, when OTR was released, has been to use Message Authentication Codes (MACs), which provide deniability by using a symmetric secret shared over Diffie-Hellman.

    This means that Alice, who talks to Bob, can be sure received messages came from Bob, because she knows it wasn't her. But it also means she can't show a message from Bob to a third party and prove Bob wrote it, because she also holds the symmetric key, which, in addition to verifying the message, could have been used to sign it. So Bob can deny he wrote the message.
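
The deniability argument can be sketched with a shared-key MAC. Because both parties hold the same key, a valid tag proves to Alice that the message came from Bob, but proves nothing to anyone else; she could have produced the identical tag herself. A hypothetical minimal sketch using Python's standard `hmac` module (the key value is illustrative):

```python
import hmac
import hashlib

shared_key = b"derived-via-diffie-hellman"  # both Alice and Bob hold this

def tag(message: bytes) -> bytes:
    """MAC over a message with the shared symmetric key."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

# Bob sends (msg, t); Alice verifies by recomputing the tag:
msg = b"hi Alice"
t = tag(msg)
assert hmac.compare_digest(t, tag(msg))  # authentic to Alice...

# ...but Alice can forge an identical tag on any message, so the tag
# proves nothing to a third party: Bob can plausibly deny authorship.
forged_by_alice = tag(b"anything Alice invents")
```

A digital signature (as iMessage uses) is the opposite: only the holder of the private key could have produced it, so it is third-party-verifiable proof of authorship.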

    Now, this most likely does not mean anything in court, but that is no reason not to use best practices, always.

    4. The digital signature algorithm is ECDSA, based on the NIST P-256 curve, which according to https://safecurves.cr.yp.to/ is not cryptographically safe. Most notably, it is not fully rigid, but manipulable: "the coefficients of the curve have been generated by hashing the unexplained seed c49d3608 86e70493 6a6678e1 139d26b7 819f7e90".
    5. iMessage is proprietary: you can't be sure it doesn't contain a backdoor that allows retrieval of messages or private keys with some secret control packet from an Apple server.
    6. iMessage allows undetectable man-in-the-middle attacks. Even if we assume there is no backdoor that allows private key / plaintext retrieval from the endpoint, it's impossible to ensure the communication is secure. Yes, the private key never leaves the device, but if you encrypt the message with a wrong public key (which you by definition need to receive over the Internet), you might be encrypting messages to the wrong party.

    You can NOT verify this by e.g. sitting on a park bench with your buddy, and seeing that they receive the message seemingly immediately. It's not like the attack requires that some NSA agent hears their eavesdropping phone 1 beep, and once they have read the message, they type it to eavesdropping phone 2 that then forwards the message to the recipient. The attack can be trivially automated, and is instantaneous.

    So with iMessage the problem is, Apple chooses the public key for you. It sends it to your device and says: "Hey Alice, this is Bob's public key. If you send a message encrypted with this public key, only Bob can read it. Pinky promise!"

    Proper messaging applications use what are called public key fingerprints, which allow you to verify out-of-band that the messages your phone outputs are end-to-end encrypted with the correct public key, i.e. the one that matches the private key of your buddy's device.
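
A public-key fingerprint is typically just a hash of the key material, rendered short enough to read aloud or compare in person. A hypothetical sketch (the grouping is modeled loosely on how Signal renders safety numbers; the key bytes are placeholders):

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Short, human-comparable digest of a public key."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Take a prefix and group it into 5-character chunks for readability.
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

# Both parties compute this locally and compare over a trusted channel
# (in person, over a phone call). A mismatch reveals a substituted key.
print(fingerprint(b"...bob's public key bytes..."))
```

The point is that the comparison happens outside the channel the attacker controls, so a MITM who swapped the key cannot also fake the fingerprint check.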

    7. iMessage allows undetectable key insertion attacks.

    EDIT: This has actually seen some improvements, made a month ago! Please see the discussion in the replies.

    When your buddy buys a new iDevice, like a laptop, they can use iMessage on that device. You won't get a notification about this, but what happens in the background is that your buddy's new device generates an RSA key pair and sends the public part to Apple's key-management server. Apple then forwards the public key to your device, and when you send a message to that buddy, your device first encrypts the message with the AES key, and then encrypts the AES key with the public RSA key of each of your buddy's devices. The encrypted message and the encrypted AES keys are then passed to Apple's message server, where they sit until the buddy fetches new messages on some device.
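
The per-device fan-out described above can be modeled as one fresh message key, wrapped once per recipient device. The XOR-based `toy_wrap` below is a stand-in for the real RSA encryption (all names and the wrap scheme are hypothetical; this only illustrates the structure):

```python
import os
import hashlib

def toy_wrap(device_pub: bytes, msg_key: bytes) -> bytes:
    # Stand-in for RSA-encrypting the AES key to one device's public key.
    pad = hashlib.sha256(device_pub).digest()
    return bytes(a ^ b for a, b in zip(msg_key, pad))

def fan_out(message: bytes, device_pubs: dict) -> dict:
    msg_key = os.urandom(32)  # fresh per-message AES key
    return {
        "ciphertext": message,  # would be AES-encrypted under msg_key
        "wrapped_keys": {dev: toy_wrap(pub, msg_key)
                         for dev, pub in device_pubs.items()},
    }

# One wrapped key per device Bob (allegedly) owns. An attacker-inserted
# "device" in this dict silently receives its own copy of every message key.
bundle = fan_out(b"hello", {"bob-phone": b"pk1", "bob-laptop": b"pk2"})
```

The structural problem is visible here: the sender encrypts to whatever set of keys the server hands over, with no way to tell a legitimate new device from an inserted one.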

    Like I said, you will never get a notification like "Hey Alice, looks like Bob has a brand new cool laptop, I'm adding the iMessage public keys for it so they can read iMessages you send them from that device too".

    This means that a government that issues a FISA court national security request (a stronger form of NSL), or any attacker who hacks the iMessage key-management server, or any attacker who breaks the TLS connection between you and the key-management server, can send your device a packet containing the attacker's RSA public key and claim that it belongs to some iDevice Bob has.

    You could possibly detect this by asking Bob how many iDevices they have, and by stripping TLS from iMessage and seeing how many encrypted AES keys are being output. But it's also possible Apple removes keys from your device to keep iMessage snappy, and they can very possibly replace keys on your device. Even if they can't do that, they can wait until your buddy buys a new iDevice, and only then perform the man-in-the-middle attack against that key.

    To sum it up, like Matthew Green said[1]: "Fundamentally the mantra of iMessage is “keep it simple, stupid”. It’s not really designed to be an encryption system as much as it is a text message system that happens to include encryption."

    Apple has great security design in many parts of its ecosystem. However, iMessage is EXTREMELY bad design, and should not be used under any circumstances that require verifiable privacy.

    In comparison, Signal

    • Uses Diffie-Hellman + Kyber, not RSA
    • Uses Curve25519, a safe curve with 128 bits of symmetric security, not 79 bits like iMessage
    • Uses the Kyber key exchange for post-quantum security
    • Uses MACs instead of digital signatures
    • Is not just free and open-source software, but has reproducible builds, so you can be sure your binary matches the source code
    • Features public key fingerprints (called safety numbers) that allow verification that no MITM attack is taking place
    • Does not allow key insertion attacks under any circumstances: you always get a notification that the encryption key changed. If you've verified the safety numbers and marked them "verified", you won't even be able to accidentally use the inserted key without manually approving the new keys.
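
The behavior in that last bullet amounts to trust-on-first-use with explicit alerts: pin a contact's key material once, and loudly flag any later change instead of silently accepting it. A hypothetical minimal sketch of the idea (not Signal's actual implementation):

```python
known_keys: dict[str, str] = {}  # contact -> pinned key fingerprint

def check_key(contact: str, fingerprint: str) -> str:
    """Pin on first use; flag any later change instead of silently accepting."""
    if contact not in known_keys:
        known_keys[contact] = fingerprint
        return "pinned"
    if known_keys[contact] != fingerprint:
        return "ALERT: key changed - verify safety numbers before sending"
    return "ok"

assert check_key("bob", "abc123") == "pinned"
assert check_key("bob", "abc123") == "ok"
assert check_key("bob", "evil99").startswith("ALERT")
```

Contrast with the iMessage behavior described above, where the equivalent of `check_key` accepts and uses whatever keys the server delivers, with no user-visible event.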

    So do yourself a favor and switch to Signal ASAP.

    [1] https://blog.cryptographyengineering.com/2015/09/09/lets-tal...

  • I'm sure they'll exercise caution in this endeavor /s

  • Because they get your profile picture, name, and email address when you click accept. I went through with it just to test, but they're definitely getting some data from their users.

  • Stop using Facebook/Meta, Instagram, and WhatsApp. You're giving them power by using their services. Use alternatives like Friendica, PixelFed, or Signal.

  • You're right, but security and privacy are about layers, not always 100%-effective mitigations, especially when the mitigation concerns a function (contact discovery) that requires a private list (your contacts) to be compared against another one. Anyone for whom this is an actual security risk doesn't have to share their contacts. They won't know which of their friends/family are on Signal, but they can still use the service.

    This feature does protect users in that any court order for Signal to reveal who is friends with whom (almost every other messaging provider has actual access to your list of contacts) cannot be fulfilled. They've been subpoenaed multiple times[0], and all they can show is when an account was created and the last day (not time) a client pinged their servers.

    Lastly, I'm not sure whether this is already a feature, but it wouldn't be too difficult to introduce rate limiting to mitigate this issue even further. As an example, it's very unlikely that most people have thousands (or even tens of thousands) of people in their contacts. Assuming we set the cutoff just a step beyond the 99th percentile, you can effectively block anyone as soon as they start trying to crawl the entire phone-number address space, preventing the issue you're describing.
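
That rate-limiting idea can be sketched as a simple per-client cap on cumulative contact-discovery lookups. Everything here is hypothetical, including the 10,000 threshold, which stands in for "just beyond the 99th percentile" of contact-list sizes:

```python
from collections import defaultdict

LOOKUP_CAP = 10_000  # illustrative "99th-percentile" contact-list size
lookups: defaultdict = defaultdict(int)  # client_id -> total numbers queried

def discover(client_id: str, hashed_numbers: list) -> bool:
    """Reject clients whose cumulative lookups suggest address-space crawling."""
    lookups[client_id] += len(hashed_numbers)
    return lookups[client_id] <= LOOKUP_CAP

assert discover("alice", ["h"] * 500)           # normal contact list: allowed
assert not discover("crawler", ["h"] * 20_000)  # bulk enumeration: blocked
```

A real deployment would also need time windows and abuse scoring, but even this crude cumulative cap makes crawling the whole number space from one account impractical.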

    [0] https://signal.org/bigbrother/

  • Not necessarily.

    Signal has people who are experts in their field. They engineer solutions that don't exist anywhere else in the market to ensure they hold as little information on you as possible while keeping you secure [0]. This in turn means high compensation + benefits. You don't want to be paying your key developers peanuts, as that makes them susceptible to taking bribes from adversaries to "oops" a security vulnerability into the service. In addition, the higher compensation is a great way to mitigate losing talent to private organizations that can afford it.

    [0] Signal has engineered the following technologies that all work to ensure your privacy and security:

  • I have 3 Google Home products (varying sizes) that sync music across the kitchen/living room, bathroom, and bedroom. It makes for a great listening experience.

  • "A journey of a thousand miles begins with a single step."

    If they're just starting out, this might be their moment to show off something they're picking up as they go; don't shit on that. And advertise? It's free/open code. It's a show and tell; let them be proud. Maybe in a few years they'll build the next open-source, federated Spotify competitor. For now, let them bask in the glory of making something fun.

  • have been able to do so with much smaller funding

    It's easy to "stand on the shoulders of giants" and claim some software is better when you're adding 1-5% of additional work on top of a fully developed service/app/infrastructure. It's why forks of software generally tend to have more features than the original source. See the following examples, where people polish something and release it as their own improved creation:

    • Chromium/Chrome > Edge/Brave
    • Debian > Ubuntu/Mint/Pop!_OS
    • Android Open Source Project (AOSP) > WhateverSamsung's_is_called
    • Firefox > LibreWolf

    Now, I'm not trying to say people should stop forking software. I'm all for it, as it breeds competition and innovation. But complaining that a software project is not meeting your specific demands while its forks are doing so much more means you're not understanding that those forks would probably die without all the hard work that goes into the core product.

    whereas even such basic shit implemented solely in Molly, such as app passwords that actually encrypt it’s database is pretty useful.

    You say this, but do you have any evidence to back up the claim that it's useful, and to whom? Who's asking for it? What percentage of Signal users would enable the feature? Is it 1%? Is that worth it? There's barely any demand for privacy from the general populace; otherwise Signal would be a hit and everyone would leave WhatsApp immediately, but that hasn't happened.

    if you use most tiling compositor

    You're the 1% of the 1% when it comes to desktop configurations if you're using a tiling window manager. I used one about 10 years ago and have yet to find another person in the real world who has ever used one, and I work in IT. Whether you like it or not, Signal's developers are not going to spend any effort on making your very niche use case better. I'm not saying that to be rude, but you have to be realistic: your expectations are high for a free service that generally works for 99% of the population.

  • anyone else getting "page unavailable"?

  • Awesome! I obviously haven't been keeping up. Thanks!

  • Likely because, while SimpleX looks great and is very promising, it doesn't add much to the conversation here. Signal is primarily a replacement for SMS/MMS, which means people generally want their contacts readily available and discoverable to minimize the friction of securely messaging friends/family. Additionally, it's dangerous to recommend a service that hasn't been audited or proven itself secure over time.

  • Link to the report so we can track? Thanks!