From the description of the bill:
https://legislation.nysenate.gov/pdf/bills/2023/S7694A

To limit access to addictive feeds, this act will require social media companies to use commercially reasonable methods to determine user age. Regulations by the attorney general will provide guidance, but this flexible standard will be based on the totality of the circumstances, including the size, financial resources, and technical capabilities of a given social media company, and the costs and effectiveness of available age determination techniques for users of a given social media platform. For example, if a social media company is technically and financially capable of effectively determining the age of a user based on its existing data concerning that user, it may be commercially reasonable to present that as an age determination option to users. Although the legislature considered a statutory mandate for companies to respect automated browser or device signals whereby users can inform a covered operator that they are a covered minor, we determined that the attorney general would already have discretion to promulgate such a mandate through its rulemaking authority related to commercially reasonable and technologically feasible age determination methods. The legislature believes that such a mandate can be more effectively considered and tailored through that rulemaking process. Existing New York antidiscrimination laws and the attorney general's regulations will require, regardless, that social media companies provide a range of age verification methods all New Yorkers can use, and will not use age assurance methods that rely solely on biometrics or require government identification that many New Yorkers do not possess.
In other words: sites will have to figure it out and make sure that it's both effective and non-discriminatory, and the safe option would be for sites to treat everyone like children until proven otherwise.
I'm not sure I'm surprised at this point any more, just disappointed. All they have to do is just make a stable and secure platform to run apps on. They're going to run out of foot to shoot themselves in sooner or later if they keep this kind of thing up. Too many unforced errors.
It should never have gotten to the external feedback stage, because internal feedback should have been enough to kill the idea before it even got a name, given what a security and privacy risk it is. The fact that it didn't is worrying from a management perspective.
To be fair to Microsoft, this was a local model, and the data was encrypted (through BitLocker). I just feel like the only way you could even try to secure it would be to lock the user out of the data with some kind of separate storage and processing, because anything the user can do can be done by malware run by the user. Even then, DRM and how it gets cracked have shown us that nothing like that is truly secure against motivated attackers. Since restricting a user's access like that won't happen, and might not even be sufficient, it's just way too risky.
Microsoft accounts are already required (unless you resort to increasingly convoluted workarounds), and I think the hardware for Windows Hello might be required for OEM-built computers now too, but I'm not sure.
I'm pretty sure the main picture in the article is what the revised opt-in/opt-out message looks like. Previously it was opt-out, with just a message describing the feature and a checkbox to have it open Settings when you finished the out-of-box experience so you could look at the options later.
Edit: Fixed mention of opt-in to opt-out, thanks tal.
Important: These changes are gradually rolling out to all users of the Google Maps app. You'll get a notification when an update is available for your account.
Location History is now called Timeline, and you now have new choices for your data. To continue using Timeline, you must have an up-to-date version of the Google Maps app. Otherwise, you may lose data and access to your Timeline on Google Maps.
Timeline is created on your devices.
Basically, they're getting rid of the web version because they're moving the data to be stored only on local devices. Part of this might be because they got a lot of flak for things like recording location data for people who went near reproductive health clinics and other sensitive places. They can't be forced to respond to subpoenas for data they don't have, which lets them stay out of it, so I wouldn't necessarily say it's all that altruistic on their part.
Looks like you can use Ctrl+Spacebar to trigger the "MenuComplete" function, which should show you the different available options. I don't think you can get a direct list of the parameters with their explanations without using something like Get-Help, though.
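For anyone who wants to see it spelled out, here's a minimal PowerShell sketch (assuming PSReadLine is loaded, which it is by default in recent versions; Get-ChildItem and -Filter are just example targets I picked):

```powershell
# Bind Ctrl+Spacebar to PSReadLine's MenuComplete so completion pops up a
# menu of the available options instead of cycling through them.
Set-PSReadLineKeyHandler -Chord 'Ctrl+Spacebar' -Function MenuComplete

# Get-Help is still the way to get parameter explanations: one parameter at
# a time, or everything at once with -Full.
Get-Help Get-ChildItem -Parameter Filter
Get-Help Get-ChildItem -Full
```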
Someone has already demonstrated using an off-the-shelf infostealer to steal the Recall database from a test computer. It won't take any special skills or technology for this to be a problem.
I'm pretty sure people were unhappy because it was opt-out at first. Now that bridging is opt-in, I don't think most people have a problem with it, and I've seen a number of posts from both sides of the bridge, so it seems to be working.
Malware won't even need to wait for the user to access something sensitive; it can just go back through the user's Recall history and grab the data for immediate exfiltration. There's no chance for anti-malware software to update and catch it before it does anything truly bad; given even a minute, it will always be too late.
And of course, if you stream Netflix, tons of copyright-protected material, lol.
Nope, DRM-protected content like Netflix is one of the few things it doesn't capture; it's even mentioned in Recall's privacy section. I'll admit that's likely for technical reasons, since the video stream is decrypted and decoded on the GPU and is never actually accessible to the user, not necessarily because they wouldn't want to save that as well.
It's always been possible for someone to do this, but this makes it a default-on feature for a lot of users you might interact with, and it makes them a prime target for malware to steal sensitive data that, in most cases, wouldn't have existed before.
To protect against the casual theft of a device putting the data in the thief's hands along with the device itself.
The average person, unfortunately, is not likely to properly back up their encryption keys, so if they forget their password (or don't use one and rely on the default of just the TPM), they'll complain about losing their data. Having the key backed up gives them a way to get their data back in non-theft situations.
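For what it's worth, a rough sketch of what backing the key up manually looks like (assuming an elevated PowerShell session on a machine with BitLocker enabled; C: is just an example volume):

```powershell
# Show the recovery password protector(s) for the C: volume so they can be
# printed out or stored somewhere safe.
(Get-BitLockerVolume -MountPoint 'C:').KeyProtector |
    Where-Object KeyProtectorType -eq 'RecoveryPassword'

# The older command-line equivalent works too.
manage-bde -protectors -get C:
```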