Google engineers want to introduce DRMs for web pages, making ad-blocking near-impossible in the browser
eth0p @iusearchlinux.fyi · Posts 1 · Comments 45 · Joined 2 yr. ago
Having thought about it for a bit, it's possible for this proposal to be abused by authoritarian governments.
Suppose a government (say, Wadiya) mandated that all websites allowed on the Wadiyan Internet must ensure that visitors are using a browser from a list of verified browsers. This list is provided by the Wadiyan government, and includes: Wadiya On-Line, Wadiya Explorer, and WadiyaScape Navigator. All three of those browsers are developed in cooperation with the Wadiyan government.
Each of those browsers also happens to send a list of visited URLs to a Wadiyan government agency and routinely scans the hard drive for material deemed "anti-social."
Because the attestations are cryptographically verified, citizens would not be able to fake the browser environment. They couldn't just download Firefox and install an extension to pretend to be Wadiya Explorer; they would actually have to install the spyware browser to be able to browse websites available on the Wadiyan Internet.
In my other comments, I did say that I don't trust this proposal either. I even edited the comment you're replying to, explaining how the proposal could be used to hurt adblockers.
My issue is strictly with how the original post is framed. It uses a sensationalized title, doesn't attempt to describe the proposal, and doesn't explain how the conclusion of "Google [...] [wants] to introduce DRM for web pages" follows from the premise (the linked proposal).
I wouldn't be here commenting if the post had used a better title such as "Google proposing web standard for web browser verification: a slippery slope that may hurt adblockers and the open web," summarized the proposal, and explained the potential consequences of it being implemented.
Frankly, I don't trust that the end result won't hurt users. This kind of thing, allowing browser environments to be sent to websites, is ripe for abuse and is a slippery slope to a walled garden of "approved" browsers and devices.
That being said, the post title is misleading, and that was my whole reason to comment. It frames the proposal as a direct and intentional attack on users' ability to locally modify the web pages served to them. I wouldn't have said anything if the post body had made a reasonable attempt to objectively describe the proposal and explain why it would likely hurt users who install adblockers.
I don't disagree with you. If this gets implemented, the end result is going to be a walled garden web that only accepts "trusted" browsers. That's the concern here for ad blocking: every website demanding a popular browser that just so happens to not support extensions.
My issue is with how the OP framed the post. The title is misleading and suggests that this is a direct attempt to DRM the web, when it's not. I wouldn't have said anything if the post was less sensationalized, laying out the details of the proposal and its long-term consequences in an objective and informative way.
This post title is misleading.
They aren't proposing a way for browsers to DRM page contents and prevent modifications from extensions. This proposal is for an API that allows for details of the browser environment to be shared and cryptographically verified. Think of it like how Android apps have a framework to check that a device is not rooted, except it will also tell you more details like what flavor of OS is being used.
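To picture how a website might consume such an API, here's a purely hypothetical sketch. The method name, token shape, and endpoint are illustrative assumptions, not the proposal's finalized API:

```ts
// Hypothetical sketch only: names and shapes here are assumptions for
// illustration, not the actual proposed API surface.
async function checkEnvironment(): Promise<boolean> {
  // The browser asks a platform-level attester to sign a statement about
  // itself (e.g., "this is an unmodified browser on an unrooted OS").
  const token: string = await (navigator as any)
    .getEnvironmentIntegrity("content-binding-from-server");

  // The page can't forge or inspect the token; it forwards it to a server
  // that verifies the attester's signature and decides whether to serve content.
  const response = await fetch("/attest", { method: "POST", body: token });
  return response.ok;
}
```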
Is it a pointless proposal that will hurt the open web more than it will help? Yes.
Could it be used to enforce DRM? Also, yes. A server could refuse to provide protected content to unverified browsers or browsers running under an environment they don't trust (e.g. Linux).
Does it aim to destroy extensions and adblockers? No.
Straight from the page itself:
> Non-goals:
> ...
> - Enforce or interfere with browser functionality, including plugins and extensions.
Edit: To elaborate on the consequences of the proposal...
Could it be used to prevent ad blocking? Yes. There are two hypothetical ways this could hurt adblock extensions:
- As part of the browser "environment" data, the browser could opt to send details about whether built-in ad-block is enabled, any ad-block extensions are enabled, or even if there are any extensions installed at all.
Knowing this data and trusting it's not fake, a website could choose to refuse to serve contents to browsers that have extensions or ad blocking software.
- This could lead to a walled-garden web. Browsers that don't support the standard, or minority usage browsers could be prevented from accessing content.
Websites could then require that users visit from a browser that doesn't support adblock extensions.
I'm not saying the proposal is harmless and should be implemented. It has consequences that will hurt both users and adblockers, but it shouldn't be sensationalized to "Google wants to add DRM to web pages".
Edit 2: Most of the recent activity on the GitHub issues doesn't engage with the proposal itself, but here are some good comments that bring up excellent concerns:
- Browsers developed and distributed by large tech firms have a conflict of interest with holding back or limiting attestation. Attestation enables the web to be restricted in a way that benefits tech firms. For example, Office 365 could require that it is used only on Windows and/or only through Edge.
- Similarly to what I brought up, having the ability for websites to trust a (browser, os) tuple could allow for certain browsers to be preferred, simply because they do not support extensions.
- It will create hostile discrimination and two-tiered services based on whether browsers are attested or not.
- The proposal does not do an adequate job explaining how a browser may be attested. Would this require something like Secure Boot in order for a browser to be attested? That would discriminate against users with outdated hardware lacking support for boot integrity, or users who don't have it enabled for one reason or another.
I expect to get downvoted into oblivion for this, but there's nothing wrong with the concept of C2PA.
It's basically just Git commit signing, but for images. An organization (user) signs image data (a commit) with their private key, and other users can check that the image provenance (chain of signed commits) exists and that the signing key is known to be owned by the organization (the signer's public key is trusted). It handles signing of images created from multiple assets (merge commits), too.
All of this is opt-in, and you need a private key. No private key, no signing. You can also strip the provenance by just copying the raw pixels and saving it as a new image (copying the worktree and deleting .git).
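To make the analogy concrete, here's a minimal sketch of the underlying idea using Node's built-in crypto module. Real C2PA manifests are far more structured (claims, assertions, certificate chains), so treat this as the Git-style mental model, not the spec:

```ts
import { generateKeyPairSync, sign, verify } from "node:crypto";

// The organization's identity: a signing keypair, like a Git signing key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// "Committing": sign the image bytes plus the hash of the previous
// provenance record, chaining edits like signed commits.
const imageBytes = Buffer.from("...raw pixel data...");
const previousRecordHash = Buffer.from("hash-of-prior-record");
const record = Buffer.concat([imageBytes, previousRecordHash]);
const signature = sign(null, record, privateKey); // Ed25519 takes a null algorithm

// Anyone holding the organization's public key can verify the record.
console.log(verify(null, record, publicKey, signature)); // true

// "Stripping provenance" is just discarding `record` and `signature`:
// the pixels survive, but the chain does not.
```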
A scummy manufacturer could automatically generate keys on a per-user basis and sign the images to "track" the creator, but C2PA doesn't make it any easier than just throwing a field in the EXIF or automatically uploading photos to some government-owned server.
Circular dependencies can be removed in almost every case by splitting out a large module into smaller ones and adding an interface or two.
In your bot example, you have a circular dependency where (for example) the bot needs to read messages, then run a command from a module, which then needs to send messages back.
```
  v-----------\
bot            command_foo
  \-----------^
```
This can be solved by making a command conform to an interface, and shifting the responsibility of registering commands to the code that creates the bot instance.
```
main <---------\
 ^              \
 |               \
bot ---> command_foo
```
The `bot` module would expose the `Bot` class and the `Command` interface. The `command_foo` module would import `Bot` and export a class implementing `Command`.

The `main` function would import `Bot` and `CommandFoo`, and create an instance of the bot with `CommandFoo` registered:
```ts
// bot module
export interface Command {
    onRegister(bot: Bot, command: string): void;
    onCommand(user: User, message: string): void;
}

// command_foo module
import {Bot, Command} from "bot";

export class CommandFoo implements Command {
    private bot: Bot;

    onRegister(bot: Bot, command: string) {
        this.bot = bot;
    }

    onCommand(user: User, message: string) {
        this.bot.replyTo(user, "Bar.");
    }
}

// main
import {Bot} from "bot";
import {CommandFoo} from "command_foo";

let bot = new Bot();
bot.registerCommand("/foo", new CommandFoo());
bot.start();
```
It's a few more lines of code, but it has no circular dependencies, less coupling, and more flexibility. It's easier to write unit tests for, and users are free to extend it with whatever commands they want, without needing to modify the `bot` module to add them.
Permanently Deleted
A couple years back, I had some fun proof-of-concepting the terrible UX of preventing password managers or pasting passwords.
It can get so much worse than just an `alert()` when right-clicking.
A small note: It doesn't work with mobile virtual keyboards, since they don't send keystrokes. Maybe that's a bug, or maybe it's a security feature ;)
But yeah, best tried with a laptop or desktop computer.
How it detects password managers:
- Unexpected CSS or DOM changes to the `input` element, such as an icon overlay for LastPass.
- Paste event listening.
- Right clicking.
- Detecting if more than one character is inserted or deleted at a time.
In hindsight, it could be even worse by using `Object.defineProperty` to check if the `value` property is manipulated or if `setAttribute` is called with the `value` attribute.
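For the curious, a minimal sketch of that idea (hypothetical and untested against real password managers; element ID and messages are made up):

```ts
const input = document.querySelector("#password") as HTMLInputElement;
const descriptor = Object.getOwnPropertyDescriptor(
  HTMLInputElement.prototype,
  "value"
)!;

// Shadow the prototype's `value` accessor on this one element. Real
// keystrokes fire key events; a direct property write is suspicious.
Object.defineProperty(input, "value", {
  get() {
    return descriptor.get!.call(this);
  },
  set(newValue: string) {
    console.warn("Possible password manager: value set programmatically");
    descriptor.set!.call(this, newValue);
  },
});

// Likewise, catch attempts to write the `value` attribute directly.
const originalSetAttribute = input.setAttribute.bind(input);
input.setAttribute = (name: string, value: string) => {
  if (name.toLowerCase() === "value") {
    console.warn("Possible password manager: value attribute written");
  }
  originalSetAttribute(name, value);
};
```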
This may be an unpopular opinion, but I like some of the ideas behind functional programming.
An excellent example would be where you have a stream of data that you need to process. With streams, filters, maps, and (to a lesser extent) reduction functions, you're encouraged to write maintainable code. As long as everything isn't horribly coupled and lambdas are replaced with named functions, you end up with a nicely readable pipeline that describes what happens at each stage. Having a bunch of smaller functions is great for unit testing, too!
But in Java... yeah, no. Java, the JVM, and Java bytecode are not optimized for that style of programming.
As far as the language itself goes, the lack of suffix functions hurts readability. If we have code to do some specific, common operation over streams, we're stuck with nesting. For instance,
```java
var result = sortAndSumEveryNthValue(2, data.stream()
        .map(parseData)
        .filter(ParsedData::isValid)
        .map(ParsedData::getValue)
    )
    .map(value -> value / 2)
    ...
```
That would be much easier to read at a glance if we had a pipeline operator or something like Kotlin extension functions.
```java
var result = data.stream()
    .map(parseData)
    .filter(ParsedData::isValid)
    .map(ParsedData::getValue)
    .sortAndSumEveryNthValue(2) // suffix form
    .map(value -> value / 2)
    ...
```
Even JavaScript has a proposed pipeline operator to solve this kind of nesting problem.
And then we have the issues caused by the implementation of the language. Everything except primitives is an object, and only objects can be passed into generic functions.
Lambda functions? Short-lived instances of runtime-generated classes that implement some interface.
Generics over a primitive type (e.g. `HashMap<Integer, String>`)? Short-lived boxed primitives that are automatically unboxed to the primitive type.
If I wanted my functional code to be as fast as writing everything in an imperative style, I would have to trust that the JIT performs appropriate optimizations. Unfortunately, I don't. There's a lot that needs to be optimized:
- Inlining lambdas and small functions.
- Recognizing boxed primitives and replacing them with raw primitives.
- Escape analysis and avoiding heap memory allocations for temporary objects.
- Avoiding unnecessary copying by constructing object fields in-place.
- Converting the stream to a loop.
I'm sure some of those are implemented, but as far as benchmarks have shown, Streams are still slower in Java 17. That's not to say that Java's functional programming APIs should be avoided at all costs—that's premature optimization. But in hot loops or places where performance is critical, they are not the optimal choice.
Outside of Java but still within the JVM ecosystem, Kotlin actually has the capability to inline functions passed to higher-order functions at compile time.
/rant
Aw. I was going to post the link to his video, but you beat me to it.
But yeah, Technology Connections makes some excellent and informative videos. To anyone else who sees this: If heat pumps, refrigeration, or climate control technology aren't your cup of tea, he also covers older technology based around electromechanical designs (as in, pre-dating microcontrollers and programmable logic) and analog media recording devices.
Permanently Deleted
From what I can tell, that's basically what this is trying to do. Some company can sign a source image, then other companies can sign the changes made to the image. You can see that the image was created by so-and-so and then manipulated by so-and-other-so, and if you trust them both, you can trust the authenticity of the image.
It's basically `git` commit signing for images, but with the exclusionary characteristics of certificate signing (for their proposed trust model, at least; it could be used more like PGP, too).
Permanently Deleted
I skimmed through some of the specifications, and it appears to be voluntary. In a way, it's similar to signing git commits: you create an image and choose to give provenance to (sign) it. If someone else edits the image, they can choose to keep the record going by signing the change with their identity. Different images can also be combined, and that would be noted down and signed as well.
So, suppose I see some image that claims to be an advertisement for "the world's cheapest car", a literal rectangle of sheet metal and wooden wheels. I could then inspect the image to try and figure out if that's a legitimate product by BestCars Ltd, or if someone was trolling/memeing. It turns out that the image was signed by LegitimateAdCompany, Inc and combined signed assets from BestCars, Ltd and StockPhotos, LLC. Seeing that all of those are legitimate businesses, that the chain of provenance isn't broken, and that BestCars is known to work with LegitimateAdCompany, I can be fairly confident that it's not a meme photo.
Now, with that being said...
It doesn't preclude scummy camera or phone manufacturers from generating identities unique to their customers and/or hardware and signing photos without the user's consent. Thankfully, at least, it seems like you can just strip away all the provenance data by copy-pasting the raw pixel data into a new image using a program that doesn't support it (Paint?).
All bets are off if you publish or upload the photo first, though; a perceptual hash lookup could link the image back to the original one that does contain provenance data.
Yep! I ended up doing my entire co-op with them, and it meshed really well with my interest in creating developer-focused tooling and automation.
Unfortunately I didn't have the time to make the necessary changes and get approval from legal to open-source it, but I spent a good few months creating a tool for validating constraints for deployments on a Kubernetes cluster. It basically lets the operations team specify rules to check deployments for footguns that affect the cluster health, and then can be run by the dev-ops teams locally or as a Kubernetes operator (a daemon service running on the cluster) that will spam a Slack channel if a team deploys something super dangerous.
The neat part was that the constraint-checking logic was extremely powerful, completely customizable, versioned, and used a declarative policy language instead of a scripting language. None of the rules were hard-coded into the binary, and teams could even write their own rules to help them avoid past deployment issues. It handled iterating over arbitrarily-sized lists, and could even access values across different files in the deployment to check complex constraints, like making sure a value in one manifest didn't exceed a value declared in some other manifest.
I'm not sure if a new tool has come along to fill the niche that mine did, but at the time, the others all had their own issues that failed to meet the needs I was trying to satisfy (e.g. hard-coded, used JavaScript, couldn't handle loops, couldn't check across file boundaries, etc.).
It's probably one of the tools I'm most proud of, honestly. I just wish I wrote the code better. Did not have much experience with Go at the time, and I really could have done a better job structuring the packages to have fewer layers of nested dependencies.
Back when I was in school, we had typing classes. I'm not sure if that's because I'm younger than you and they assumed we had basic computer literacy, or older than you and they assumed we couldn't type at all. In either case, we used Macs.
It wasn't until university that we even had an option to use Linux on school computers, and that's only because they have a big CS program. They're also heavily locked-down Ubuntu instances that re-image the drive on boot, so it's not like we could tinker much or learn how to install anything.
Unfortunately—at least in North America—you really have to go out of your way to learn how to do things in Linux. That's just something most people don't have the time for, and there's not much incentive driving people to switch.
A small side note: I'm pretty thankful for Valve and the Steam Deck. I feel like it's been doing a pretty good job teaching people how to approach Linux.
By going for a polished console-like experience with game mode by default, people are shown that Linux isn't a big, scary mish-mash of terminal windows and obscure FOSS programs without a consistent design language. And by also making it possible to enter a desktop environment and plug in a keyboard and mouse, people can explore a more conventional Linux graphical environment if they're comfortable trying that.
Ah, that's fair.
I'm having the opposite experience, unfortunately. I loved working at {co-op company} where I had a choice of developer environment (OS, IDE, and the permissions to freely install whatever software was needed without asking IT) and used Golang for most tasks.
The formal education has been nothing but stress and anxiety, though. Especially exams.
It's possible that Google doesn't, although that would be weird since the ability to push apps is probably standardized and baked into the stock Android OS source code.
Or maybe you just used MVNOs that don't purposefully install anything that isn't strictly necessary.
Android OS developers or software devs working for cell providers would probably know the answer, though.
Anecdotally, I can confirm otherwise. I bought an unlocked Galaxy phone directly from Samsung, and putting in a SIM card provisioned it for my cell provider and installed their apps.
Thankfully, I'm not on a provider that pushes adware.
Did the formal education before the job ruin it for you, or did the job itself ruin it?
> Oh cool, there's a 200mp camera. Something that only pro photographers care about lol.
Oh this is a fun one! Trained, professional photographers generally don't care either, since more megapixels aren't guaranteed to make better photos.
Consider two sensors that take up the same physical space and capture light with the same efficiency, but are 10 vs. 40 megapixels. (Note: Realistically, a higher pixel density would mean design trade-offs and tighter manufacturing tolerances.)
From a physics perspective, the higher-megapixel sensor will collect the same amount of light, spread across a denser grid of smaller pixels. This means that the captured light is resolved at a higher resolution, but each individual pixel gets less light overall.
So imagine we have 40 photons of light:
```
More Pixels      Less Pixels
-----------      -----------
1 2 1 5
2 6 2 3          11  11
1 9 0 1
4 1 1 1          15   3
```
When you zoom in to the individual pixels, the higher-resolution sensor will appear more noisy. This can be mitigated by pixel binning, which groups (or "bins") those physical pixels into larger, virtual ones—essentially mimicking the lower-resolution sensor. Software can get crafty and try to use some more tricks to de-noise it without ruining the sharpness, though. Or if you could sit completely still for a few seconds, you could significantly lower the ISO and get a better average for each pixel.
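As a toy illustration of binning (not real camera firmware), here's how the 4×4 readout above collapses into the 2×2 one:

```ts
// 2x2 binning: each virtual pixel sums the light of a 2x2 block of
// physical pixels. Input is the "More Pixels" grid from above.
const highRes = [
  [1, 2, 1, 5],
  [2, 6, 2, 3],
  [1, 9, 0, 1],
  [4, 1, 1, 1],
];

function bin2x2(pixels: number[][]): number[][] {
  const out: number[][] = [];
  for (let y = 0; y < pixels.length; y += 2) {
    const row: number[] = [];
    for (let x = 0; x < pixels[y].length; x += 2) {
      row.push(
        pixels[y][x] + pixels[y][x + 1] +
        pixels[y + 1][x] + pixels[y + 1][x + 1]
      );
    }
    out.push(row);
  }
  return out;
}

console.log(bin2x2(highRes)); // [ [ 11, 11 ], [ 15, 3 ] ] -- the "Less Pixels" grid
```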
Strictly from a physics perspective (and assuming the sensors are the same overall quality), higher megapixel sensors are better simply because you can capture more detail and end up with similar quality when you scale the picture down to whatever you're comparing it against. More detail never hurts.
... Except when it does. Unless you save your photos as RAW (which take up a massive amount of space), they're going to be compressed into a lossy image format like JPEG. And the lovely thing about JPEG is that it takes advantage of human vision to strip away visual information that we generally wouldn't perceive, like slight color changes and high-frequency details (like noise!).
And you can probably see where this is going: the way that the photo is encoded and stored destroys data that would have otherwise ensured you could eventually create a comparable (or better) photo. Luckily, though, the image is pre-processed by the camera software before encoding it as a JPEG, applying some of those quality-improving tricks before the data is lost. That leaves you at the mercy of the manufacturer's software, however.
In summary: more megapixels are better in theory. In practice, bad software and image compression negate the advantages that a higher resolution provides, and higher-density sensors likely mean lower-quality data. Also, don't expect more megapixels to mean better zoom; you would need an actual lens for that.
Oh, for sure. When bullet point number one involves advertising, they don't make it hard to see that the underlying motivation is to assist advertising platforms somehow.
I think this is an extremely slippery and dangerous slope to go down, and I've commented as such and explained how this sort of thing could end up harming users directly as well as providing ways to shut out users with adblocking software.
But, that doesn't change my opinion that the original post is framed in a sensationalized manner and comes across as ragebaiting and misinforming. The proposal doesn't directly endorse or enable DRMing of web pages and their contents, and the post text does not explain how the conclusion of adblockers being killed follows from the premise of the proposal being implemented. To understand how OP came to that conclusion, I had to read the full document, read the feedback on the GitHub issues, and put myself in the shoes of someone trying to abuse it. Unfortunately, not everyone will take the time to do that.
As an open community, we need to do better than incite anger and lead others into jumping to conclusions. Teach and explain. Help readers understand what this is all about, and then show them how these changes would negatively impact them.