Pixel 8 leak promises 7 years of OS updates—even more than an iPhone
pup_atlas @pawb.social
Do what Google does when trying to grant far-reaching permissions to another account: show a non-dismissible banner or persistent nag notification for 10 days, then allow the user to dismiss it permanently (rough sketch of the logic below). It's the best of both worlds. It makes the warning impossible to miss, even if a shady repair shop tries to cheat someone with aftermarket parts, but it still gives the user a reasonable way to permanently dismiss the warnings.
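Something like this, in TypeScript terms (all the names here are made up, just illustrating the policy):

```ts
// Sketch of the nag-then-dismiss policy described above; hypothetical names.
const NAG_PERIOD_MS = 10 * 24 * 60 * 60 * 1000; // 10 days

interface WarningState {
  firstShownAt: number;         // epoch ms when the warning first appeared
  permanentlyDismissed: boolean;
}

// The warning stays up until the user permanently dismisses it...
function shouldShowWarning(state: WarningState): boolean {
  return !state.permanentlyDismissed;
}

// ...and dismissal is only offered once the 10-day nag period has elapsed.
function canDismiss(state: WarningState, now: number): boolean {
  return now - state.firstShownAt >= NAG_PERIOD_MS;
}
```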
I’ll believe it when I see it. Apple has a demonstrated track record of supporting their phones for years; Google has a demonstrated track record of killing anything that isn’t an immediate runaway success. So sorry Google, but I can’t just take your word for it.
In this case the “they” refers to Unity. Brand trust is one of the primary assets any company has, and this sort of behavior destroys it. Why would you invest tens of millions of dollars in developing a game on an engine that could suddenly bankrupt your company with licensing fees, with little to no warning or transparency? It isn’t 2010 anymore, and there are plenty of other engines to choose from.
I would argue this level of delay is a miscarriage of justice. Actions this malicious could easily put companies out of business years before a trial would even start. Where is the fair line between “slow but thorough” and “delay until the problem goes away”? There’s a non-zero chance some of the perpetrators will literally die before ever facing justice. How is that fair to the plaintiffs?
They absolutely are, because it’s not a binary “try it and see if it works” change. This is a one-time, irreversible loss of brand trust from game developers who have a lot at stake and a TON of options. There are no take-backsies on stuff like this. Choosing a game engine is a big decision, often researched and backed by some kind of business team, and those teams are never gonna swing for a company with a track record of pulling the financial rug out from under its customers. Unity will lose billions, if not outright kill the company, by even suggesting this sort of thing with a straight face.
I would love to switch, and I tried the other day, but I discovered that Firefox still doesn’t support integrated WebAuthn authenticators (i.e. using Touch ID in lieu of a YubiKey). That is (unfortunately) a non-starter for me, as I use that technology everywhere, and I’m not intentionally weakening my security posture to switch. I’m honestly really surprised to find this feature disparity, as it has been generally available elsewhere for years. I’m a developer, so maybe I’ll take a crack at implementing it myself sometime, but it’s a big enough deal that I genuinely can’t switch yet :(
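For reference, this is the kind of flow I mean: a minimal sketch of requesting a platform authenticator (Touch ID, Windows Hello, etc.) through the WebAuthn API, with placeholder relying-party details:

```ts
// Minimal WebAuthn registration sketch; the rp/user values are placeholders.
async function registerPlatformAuthenticator(): Promise<Credential | null> {
  // The real challenge must come from the server; random bytes stand in here.
  const challenge = crypto.getRandomValues(new Uint8Array(32));
  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example RP", id: "example.com" }, // placeholder relying party
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)),
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        // "platform" requests the built-in authenticator (Touch ID etc.)
        // rather than a roaming key like a YubiKey.
        authenticatorAttachment: "platform",
        userVerification: "required",
      },
    },
  });
}
```

This is exactly the `authenticatorAttachment: "platform"` path that has to be wired through to the OS keystore, which is the part I’m missing in Firefox.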
You’ve been able to use DNS-based ad blocking on iOS basically forever. I don’t really like those solutions because they’re more technical than the average user is likely to be comfortable with (in my experience they tend to cause a lot of issues browsing the web normally), but the option has pretty much always been there.
You can also use system-wide ad blockers that work through iOS’s built-in VPN functionality. That’s how Android does it too.
This just isn’t true? You can download the Adblock Plus Safari extension, just like you can on a desktop/laptop machine. You could even add a user script manager and block ads yourself if you’re so inclined. This has been in iOS for years, at least four.
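For anyone curious what those extensions actually do under the hood: Safari content blockers ship declarative rules that Safari itself applies. Here’s roughly the shape of that rule list (it’s consumed as JSON; the filter patterns below are made-up examples):

```ts
// Sketch of Safari content-blocker rules; patterns/domains are hypothetical.
const rules = [
  {
    // Block any request to a hypothetical ad host.
    trigger: { "url-filter": "ads\\.example\\.com" },
    action: { type: "block" },
  },
  {
    // Hide a hypothetical banner element via CSS instead of blocking requests.
    trigger: { "url-filter": ".*", "if-domain": ["example.com"] },
    action: { type: "css-display-none", selector: ".ad-banner" },
  },
];

// The extension hands Safari this list as JSON:
const blockerList = JSON.stringify(rules);
```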
Because it’s the last holdout. I use mostly Apple devices, and my Lightning iPhone is pretty much the only device I have that charges over something other than USB-C. It would also make the “hey, does anyone have a charger?” question at parties or work a lot more tolerable.
I work in tech, and I’m still using Chrome. I don’t like it, and I know a lot of other tech people are in the same boat, but I can’t just switch. That’s what I’m working towards, but the amount of tooling we use every day that depends specifically on Chrome is significant, to say the least. This is tooling we built internally to help ourselves, and it depends on Chrome-specific APIs that either behave differently or do not yet exist in Firefox.
We’re working to port this stuff over to Firefox, but that takes time, and not everyone can just drop what they’re doing to reimplement the tooling they already have in a different browser. On top of userspace tooling, we also have tens of thousands of unit tests that depend in some part on Chrome (through tools like Jest and Puppeteer, something like the sketch below) to validate aspects of massive distributed web platforms that can’t easily be unit tested in normal code (though we have high coverage where we can). These also need to be ported, and they are VERY specific to Chrome (or Chromium in some cases). We’re talking entire teams of people and tens of thousands of person-hours.
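To give a sense of what I mean, here’s a rough sketch of the kind of Chrome-bound test I’m describing (the URL and selectors are placeholders, not our actual tooling):

```ts
// Jest + Puppeteer smoke test; Puppeteer drives a bundled Chromium build,
// which is exactly what makes tests like this Chrome-specific.
import puppeteer from "puppeteer";

describe("dashboard smoke test", () => {
  it("renders the main widget", async () => {
    const browser = await puppeteer.launch();
    try {
      const page = await browser.newPage();
      await page.goto("https://internal.example.com/dashboard"); // placeholder URL
      // Fails the test if the widget never appears.
      await page.waitForSelector("#main-widget", { timeout: 5000 });
      const title = await page.$eval("#main-widget h1", el => el.textContent);
      expect(title).toBeTruthy();
    } finally {
      await browser.close();
    }
  });
});
```

Multiply that by tens of thousands of tests, many of them poking at Chrome-only behavior, and the migration cost becomes obvious.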
A lot of users truly can switch at the drop of a hat. The UI change is annoying, sure, but it’s doable. For a lot of users in the tech space though, it’s just not feasible to drop Chrome overnight. We’ve started the process, to be clear, but it’s going to be a very long transition.
Permanently Deleted
It’s worth noting that wireless transfer does not mean “cloud storage”. It can, and often does, but it’s also easy to wirelessly back up things like photos entirely locally. With most prebuilt NAS units, like a Synology (some even come pre-populated with hard drives), all you have to do is go through the wizard in the app. That’s it; the app will then automatically back up things like pictures, over the air, to storage you control locally. I’m pretty sure you could do it natively with Time Machine too if you really wanted.
I have another point that I don’t see talked about a lot. Their Pro Display XDR is targeting actual professionals in the video field. Unlike pretty much everything else Apple makes, this monitor is comparable (and downright cheap) when compared side by side with other industry-standard color-calibrated video monitors. Professional-grade video equipment has always been super expensive; it’s not just an Apple thing. For example, here’s a Sony model commonly used in live broadcast. Same size, but the Apple monitor is actually 1.6x as bright for HDR, higher resolution, and less than half the price. The only downside is no SDI input, but it can still be used for post-processing just fine, or even live with a converter box. That context also explains why the stand doesn’t come standard and is expensive as hell: they don’t expect anyone in their target market to buy it. They expect most of these monitors to be installed as drop-in replacements in color grading workflows or broadcast trucks, which are pretty much all VESA-mounted already anyway.
In that context, the Pro Display XDR makes perfect sense. On a cost basis alone, Apple’s monitor is very competitive for the professional video demographic they’re targeting. It’s not for the average power user; it’s for people whose literal sole job rides on colors being accurate.
As for the polishing cloths: yes, they’re expensive when purchased separately, but one comes “for free” in the box. I would rather they sell them separately than not at all, and the screen really doesn’t require anything special anyway; any old microfiber cloth should be fine as long as it’s kept clean. Even the markup isn’t insane IMO; it appears to be only a $5–10 markup on an accessory for a monitor they expect to sell in very low volume.
Overall, I think the product is misunderstood more than anything. I don’t think it’s being advertised wrong; I think Apple has such a proclivity for advertising their other products wrong that people’s expectations aren’t set correctly for when Apple actually addresses the professional market (cough cough, the iPhone Pro, a product that isn’t “professional” in any sense of the word). This is just what professional-grade products cost. Sure it’s expensive, but that’s what these devices have to cost to be viable for any business to manufacture. The combination of low volume, expensive components with better-than-average precision, and pro-grade calibration means they just plain cost a lot more to make.
Equipment like this and the Sony monitor above is used in environments where it needs to work EVERY single time, with zero room for failure. As an example, take running shading (color grading) on a live broadcast, for events like the Super Bowl. Using any old monitor, you may not be able to tell that the Coke ad you’re cueing up is going through your shading workflow and their red branding is slightly off-color. That could easily be a million-plus-dollar mistake, and I’ve seen similar things actually happen in the field (with other advertisers I won’t name, for my own sake). Or god forbid you lose picture entirely. I’ve been in similar positions, and broadcast engineers/companies will pay any amount of money to make sure their equipment is top of the line and won’t ever fail. If you don’t believe me, take a look at some other pro-grade video gear, like a Grass Valley Kayenne. The scale of money is simply different with pro video equipment.
I’m out and about today, so apologies if my responses don’t have the level of detail I’d like. As for the law being collective morality: all sorts of religious prohibitions and moral scares HAVE ended up in the law. The idea is that the “collective” is large enough to dispel any niche restrictive beliefs. Whether or not you agree with that strategy, that is how I believe the current system is meant to work (even if it works differently in practice), and that’s what it is designed to protect, from my perspective.
As for anti-AI artists, let me pose a situation to illustrate my perspective. As a prerequisite: a large part of a lawsuit, and of the ability to advocate for a law, is standing, the idea that you personally, or a group you represent, has been directly and tangibly harmed by the thing you are trying to restrict. Here is the situation:
I am a furry, and a LARGE part of the fandom is based on art and artists. A core furry experience is getting art of your character commissioned from other artists. It’s commonplace for these artists to have a very specific, identifiable signature style, so much so that it’s trivial for me and other furs to identify artists by their work alone at a glance. Many of these artists have shifted to making their living full-time off of creating art. With the advent of some new generative models, it is now possible to train a model exclusively on a single artist’s style and generate art indistinguishable from the real thing without ever contacting them. This puts their livelihood directly at risk, and it also muddies the waters in terms of subject matter and what they support. Without laws regulating training, this could take away their livelihood, or even give a (very convincing, and hard to disprove) impression that they support things they don’t, like making art involving political parties or illegal activities, which I have already seen happen. This almost approaches defamation, in my opinion.
One argument you could make is that this is similar to the invention of photography, which may have directly threatened the work of painters. And while there are some comparisons you could draw from that situation, photography didn’t replace their work verbatim; it merely provided an alternative that filled a similar role. This situation is distinct because in many cases it’s not possible, or at least not immediately apparent, which pieces are authentic. That is a VERY large problem the law needs to solve as soon as possible.
Further, I believe the same or similar problems exist with LLMs as in the generative image model situation above. Sure, with enough training data those issues are lessened in impact, but where is the line between what is OK and what isn’t? Ultimately the models themselves don’t contain any copyrighted content, but they (by design) combine related ideas and patterns found in the training data in a way that will always approximate it, depending on the depth of that data. While overfitting might be considered a defect in the industry, it’s still a real possibility, and until there is some sort of regulation establishing the fitness of commercially available LLMs, I can envision situations in which management cuts training short once the model is “good enough”, leaving overfitting issues in place.
Lastly, with respect, I’d like to push back on both the notion that I’d like to ban AI or LLMs, and the notion that I’m not educated enough on the subject to adequately debate regulating it. Both are untrue. I’m very much in favor of developing the technology and exploring all its applications. It’s revolutionary, and worthy of the research attention it’s getting. I work on a variety of models across the AI and LLM space professionally, and I’ve seen how versatile the technology is. That said, I have also seen how over-publicized it is. We’re clearly (from my perspective) in a bubble that will eventually pop. Products across nearly every industry claim to use AI for this and that, and while LLMs in particular are amazing and useful in a ton of applications, they’re certainly not suited to all of them. I’m particularly cautious of putting new models in charge of dangerous or risky processes before we develop adequate metrics, regulation, and guardrails. To summarize my position: I’m very excited to keep developing these models, but I want to publicly push the notion that they’re not a silver bullet, and that we need to develop legal frameworks for protecting people now rather than later.
The law is (in an ideal world), the reflection of our collective morality. It is supposed to dictate what is “right” and “wrong”. That said— I see too many folks believing that it works the other way too, that what is illegal must be wrong, and what is legal must be ok. This is (decisively) not the case.
In AI terms: I do believe some of the things that LLMs and the companies behind them are doing now may turn out to be illegal under certain interpretations of the law. But beyond that, I think a lot of what these companies are doing to train their models is seen as immoral (by me included), and the law should be changed to reflect that.
Sure, that may mean that “the stuff these companies are doing now is legal”, but that doesn’t mean we don’t have the right to be upset about it. Tons of things large corporations have done were fully legal until public outcry forced the government to legislate against them. The first step toward many laws being passed is the public demonstrating a vested interest. I believe the same is happening here.
I’m aware the model doesn’t literally contain the training data, but for many models and applications the training data is by nature small enough, and the application restrictive enough, that it’s trivial to get snippets of near-verbatim training data back out.
One of the primary models I work on involves code generation, and in that application we’ve actually observed the model outputting verbatim code from the training data, even when the training corpus is fairly large (roughly the kind of check sketched below). This has spurred concerns about license violations involving the open source code it was trained on.
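To illustrate the kind of check I mean, heavily simplified (the inference endpoint here is a made-up stand-in, not a real API, and the split points are arbitrary):

```ts
// Hypothetical completion endpoint standing in for whatever inference API
// you actually have; this is an assumption for illustration only.
async function completeCode(prefix: string): Promise<string> {
  const res = await fetch("https://models.internal.example/complete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: prefix, maxTokens: 128 }),
  });
  return (await res.json()).completion;
}

// Feed the model a prefix from a known training file and diff its
// continuation against the original text.
async function looksMemorized(trainingFile: string): Promise<boolean> {
  const prefix = trainingFile.slice(0, 500);
  const expected = trainingFile.slice(500, 1000);
  const completion = await completeCode(prefix);
  // A long exact substring match is a strong hint the model memorized
  // this file rather than generalizing from it.
  return completion.includes(expected.slice(0, 200));
}
```

When a check like this comes back positive on GPL-licensed input, that’s where the license-violation worry comes from.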
There’s also the concept of less-verbatim but still “copied” style. Making a movie in the style of Wes Anderson is legitimate artistic expression, sure, but what about a graphic designer making a logo in the “style of McDonald’s”? The law is intentionally pretty murky in this department; in the States, even certain colors are trademarked for certain categories. There’s no clear line here, and LLMs are well positioned to challenge what’s already on the books. IMO this is not an AI problem; it’s a legal one that AI just happens to exacerbate.
That’s not what’s happening, though; they are using that data to train their AI models, which pretty irreparably embeds identifiable aspects of it into the model. The only way to remove that data would be an incredibly costly retrain. It’s not literally embedded verbatim anywhere, but it’s almost as if you took a photo of a book: the data is in a different form, but if you “read” it (i.e. make the right prompts, or enough of them), there’s the potential to get parts of the original back.
I doubt that would hold up in a court of law. The ability to record in public hinges on there being no “reasonable expectation of privacy” in public spaces. You DO have a reasonable expectation of privacy in the backyard of your own property, even if it’s visible from public airspace.
It is two different modes of the same system; one just has more features enabled than the other. You also can’t tell whether the driver is paying attention, as they’re mostly out of frame. Even if they are, their hands are entirely off the wheel, and it’s unlikely they would be able to react in time to prevent an accident.
The promises they’ve made previously have fallen FAR short of their competitors’. Previous Pixel phones have only enjoyed 3 years of OS updates according to my research (Pixel 4, 4a, 5, and 5a), whereas Apple devices (a clear competitor in this space) will still let you load the latest version of iOS (17) on the iPhone XR, a phone released in 2018, 5 years ago. The iPhone 8, released in 2017, a full 6 years ago, is still receiving security updates. I would be happy to see some competition in the space, but Google’s promise flies in the face of their reputation here, and actions speak louder than words. I hope they live up to it, but I simply won’t believe it until I see it for myself.