Meta and Yandex are de-anonymizing Android users’ web browsing identifiers - Ars Technica
LainTrain @lemmy.dbzer0.com · Posts: 12 · Comments: 2,083 · Joined: 1 yr. ago
Well maybe if they stopped taking all those screenshots with their fancy rice avocado phones they could afford a house!
Agreed! Thanks for this discussion.
You're spot on, I'm sorry for my loose usage of the term.
But I don't necessarily fully agree with your definitions either. Don't get me wrong, they are soundly constructed; I just don't personally see it the same way. The slur one is fine and I largely agree.
As for the ideological definition I'm not sure that you can decidedly say that self-improvement is at odds with it.
I think a full definition for me would be mostly related to the culture of specific online spaces, what I would call "4chan social dynamics", its memes, and the people who buy into them past a certain point, and very little to do with any sexual status or lack thereof, which there's nothing actually wrong with. It's fairly obvious that lack of sexual access for men is a significant detriment to their quality of life, and it's a complex societal problem to be solved like any other. It's not for me to really talk about at length because I'm not a man.
I would argue though that "femcels" in communities like FDS and (I'm not sure if it's a thing still) Vindicta are very much thinking along the exact same lines despite very few of them being involuntarily celibate or identifying with the label in any significant way.
Likewise men who have sex but still buy into these memes are also still ideologically incels under this definition.
I think this grouping is more useful because it allows us to identify ideological and philosophical commonalities between more stereotypical incels and those who wouldn't on the surface be such.
What I've found across my observations is if grouped this way, plenty of these people actually do practice self-improvement and are explicitly motivated to do so via their ideology. A particular telltale sign is active hostility and contempt towards those who do not buy into the same memes.
So I don't necessarily see a toxic variety of "self-improvement" as being contrary at all to this ideology. If anything, I see the "blackpilled" (or in less internet terms, hopeless) variety as being far less harmful than the "redpilled" variety, which is often used to exploit people economically via endless lifestyle coaching guides and courses and even transparent pyramid schemes. That usually comes alongside a serving of misogyny and a reduction of women's behaviour to some sort of conspiratorial, pseudo-scientific evolutionary-psychology "harsh truth" that only serves to further disconnect people from the real world and entrap them in cycles of financial exploitation.
In plain English: even in the best case, the very framing of "work out to get a girlfriend" inherently distorts reality into 4chan social dynamics and implies a causality that isn't there (e.g. "guys who work out get girls"), when in reality women aren't something to "get", and many men don't work out and have women all the same.
Through this lens "looksmaxxing" is problematic the same way those fake "diet" aids are, even when the implication is unspoken.
When you introduce the profit motive into this, there's never any incentive for people to get out of the cycle, only deeper in.
In contrast, the ones who think self-improvement is a cope maybe have some unproductive ideas, but honestly I just don't think people holding communions of shared self-loathing is a particularly huge problem, unless they threaten and follow through on real-world harm à la Elliot Rodger or whatever his name was.
Also sorry for the long-winded response, I just want to add that I may sound overly harsh on self-improvement but this is coming from someone who is a staunch self-improver myself.
I'm not at all against self-improvement in the slightest, but I think it's important to work towards whatever goals one may set for the right reasons as well, and recognize when a drive for self-improvement stems from a "toxic" (less productive) place and when it stems from a "legitimate" place (actually solves problems).
I'm not going along with this tiktok diagnosis shit when the way I see it I have extremely fundamental problems with the plausibility of the entire concept.
Yes, while reading. I miss music to be specific, so this applies to comic books, manga etc.
A good soundtrack to me is everything in terms of tone and atmosphere and mood.
Less subjectively, it makes sense: since you can't touch or smell the world inside a book or a game or a film, the remaining channels of information are auditory and visual, meaning roughly half of the sensory information about a thing is aural, so games, movies, shows etc. get that as a leg up on books.
On the other hand a lack of music does often force my brain to make some up which gets my lazy ass to go nurture that hobby and produce some sounds so I'm not complaining!
I don't have aphantasia and I don't particularly fancy any medium over the other, but what I often miss is sound. Music is a whole different language to either the visual or the conceptual as conveyed by words. Imagery to me feels the most direct and laziest, whereas music can convey feelings there are neither words nor imagery for, so I often like adaptations of written works for injecting some fitting music, and will listen to fitting music as I read books.
You both seem nuts to me. I can conceptually imagine, but obviously cannot see things in my head because I'm not schizo, my surroundings don't disappear but it doesn't mean I don't appreciate descriptions and conjure concepts from them, just not imagery.
I think all this aphantasia stuff is just a trapping of the English language, with "imagine" having the word "image" as a root, which is wrong, because imagination is more about concepts: it's a unique data structure that's not related to jpegs or photons and doesn't involve them. Some people conflate the two because their language doesn't allow them to think otherwise, so they assume concepts are literal images in their head; others, with enough self-awareness to know they don't actually "see" anything in their head, assume they have an issue/divergence. It's so bizarre to watch.
You see a lot of "Z warriors" in Russia too, but much less Dragon Ball sadly :(
Nah, fuck em rich narcissistic bastards
But if it has to be someone, I'd probably pay respects to Carl Sagan when he died if I was alive then.
Okay, I'd be interested to hear what you think is wrong with this, because I'm pretty sure it's more or less correct.
Some sources for you to help you understand these concepts a bit better:
What DLSS is and how it works as a starter: https://en.wikipedia.org/wiki/Deep_Learning_Super_Sampling
Issues with modern "optimization", including DLSS: https://www.youtube.com/watch?v=lJu_DgCHfx4
TAA comparisons (yes, biased, but accurate): https://old.reddit.com/r/FuckTAA/comments/1e7ozv0/rfucktaa_resource/
No that's not really true from what I heard. Tons of vindicta and FDS type femcels and Tate fan manosphere loons are into looksmaxxing and moneymaxxing and whatnot. Ironically enough, blackpilled proper withdrawn hikki incels at least don't get taken for a ride by grifters so much.
Link to bot?
What are you talking about “temporal+quality” for DLSS? That’s not a thing.
Sorry I was mistaken, it's not "temporal", I meant "transformer", as in the "transformer model", as here in CP2077.
DLSS I’m talking about. There are many comparisons out there showing how amazing it is, often resulting in better IQ than native.
Let me explain:
No, AI upscaling from a lower resolution will never be better than just running the game at the native resolution it's being upscaled to.
By its very nature, the ML model is just "guessing" what the frame might look like if it were rendered at native resolution. It's not an accurate representation of the render output or artistic intent. Is it impressive? Yes, of course; it's a miracle of technology and a result of brilliant engineering and research in the ML field, applied creatively and practically to real-time computer graphics. But it does not result in a better image than native, nor does it aim to do so.
It's mainly there to increase performance when rendering at native resolution is too computationally expensive, while minimizing the loss in detail. It may do a good job of that, relatively speaking, but it can never match an actual native image. And compressed YouTube videos with bitrates lower than a DVD's aren't a good reference point, because they show a compressed motion-JPEG-like approximation rather than anything close to what the real render looks like.
Even if it seems like there's "added detail", any "added detail" is either an illusion stemming from the sharpening post-processing filter, akin to the "added detail" of a cheap Walmart "HD Ready" TV circa 2007 with sharpening cranked up, or it's outright fictional and does not exist within the game files themselves. If by "better" we mean the highest-fidelity representation of the game as it exists on disk, then AI upscaling cannot ever be better.
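The information-loss point is easy to demonstrate with a toy numpy sketch (nothing to do with DLSS's actual model, just the principle): once per-pixel detail has been averaged away by rendering at a lower resolution, no upscaler can get it back, because the data simply isn't there anymore.

```python
import numpy as np

# "Native" render: a pattern with detail at the single-pixel level.
x = np.arange(64)
native = ((x[:, None] + x[None, :]) % 2).astype(float)  # alternating 0/1 checkerboard

# Render at half res, modelled here as averaging 2x2 pixel blocks.
low = native.reshape(32, 2, 32, 2).mean(axis=(1, 3))

# Upscale back to 64x64 with nearest-neighbour. No algorithm can restore
# the per-pixel alternation -- the information is gone, not hidden.
up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)

err = np.abs(up - native).mean()
print(f"mean per-pixel error after down+up: {err:.3f}")  # 0.500 -- all detail lost
```

This is the worst case on purpose (detail exactly at the pixel grid's limit), but it's the same mechanism on fine fences, hair, and foliage in a real render.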
FXAA is not an AI upscaler, what are you talking about?
I mention FXAA because really the only reason we use "AI upscalers" is because anti-aliasing is really really computationally expensive.
The single most immediately evident and obvious consequence of a low render resolution is aliasing, first and foremost. Almost all other aspects of a game's graphics, e.g. texture resolution, are usually completely detached from it.
The reason aliasing happens in the first place is that our ability to create, ship, process and render increasingly high-polygon-count games has massively surpassed our ability to push pixels on screen in real time.
Of course, legibility suffers at lower resolutions as well, but not nearly as much as the smoothness of edges on high-polygon objects.
So for assets that would look really good at say, 4K, we run them at 720p instead, and this creates jagged edges because we literally cannot make the thing fit into the pixels we're pushing.
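To make the jaggies concrete, here's a toy sketch (plain numpy, with a made-up half-plane edge standing in for a polygon edge): taking one point sample per pixel turns a perfectly straight diagonal into a staircase.

```python
import numpy as np

def rasterize_edge(res):
    """Point-sample the half-plane x > y at res x res: one sample per pixel."""
    c = (np.arange(res) + 0.5) / res              # pixel centers in [0, 1]
    return (c[None, :] > c[:, None]).astype(int)  # 1 where the sample is inside

low = rasterize_edge(8)
for row in low:
    print("".join("#" if v else "." for v in row))
# Every pixel is fully on or fully off, so the straight diagonal edge
# comes out as a staircase of hard steps -- that's aliasing.
```

Render the same edge at a higher `res` and the steps get smaller relative to the image, which is exactly why brute-force higher resolution is the most direct fix.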
The best and most direct solution will always be just to render the game at a much higher resolution. But that kills framerates.
We can't do that, so we resort to anti-aliasing techniques instead. The simplest of these is MSAA, which multi-samples geometry edges (effectively rendering them at a higher resolution) and resolves them back down.
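The supersample-and-resolve idea can be sketched in a few lines. This is a toy numpy example of brute-force supersampling over the whole frame, not real MSAA (which only multi-samples coverage at geometry edges), but the resolve step is the same:

```python
import numpy as np

def ssaa_edge(res, factor=4):
    """Render the half-plane x > y at res*factor, then box-downsample to res."""
    hi = res * factor
    c = (np.arange(hi) + 0.5) / hi
    img = (c[None, :] > c[:, None]).astype(float)              # hi-res binary render
    return img.reshape(res, factor, res, factor).mean((1, 3))  # average 4x4 blocks

aa = ssaa_edge(8)
# Pixels straddling the edge now hold fractional coverage (0.375 here)
# instead of a hard 0/1 jump -- the staircase is smoothed into greys.
print(np.round(aa.diagonal(), 3))
```

The cost is obvious from the sketch: a 4x4 `factor` means 16x the samples, which is why doing this across a modern frame is out of reach.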
But it's also very, very computationally expensive. GPUs capable of doing it alongside the other bells and whistles we have, like ray tracing, simply don't exist, and if they did they'd cost too much. Even then, most games have to target consoles, which are solidly beaten by flagship GPUs even from several years ago.
One other solution is to blur these jagged edges out, sacrificing detail for a "smooth" look.
This is what FXAA does, but it creates a blurry image. It became very prevalent during the 7th-gen console era in particular, because those consoles simply couldn't push more than 720p in most games, in an era where Full HD TVs had become fairly common towards the end and shiny, polished graphics in trailers became a major way to make sales. This was further worsened by the fact that motion blur was often used to cover up low framerates and replicate the look of sleek, modern (at the time) digital blockbusters.
SMAA fixed some of FXAA's issues by being more selective about which pixels were blurred, and TAA eliminated the shimmering effect by also taking into account which pixels should be blurred across multiple frames.
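The "blur only where contrast is high" idea behind this family of filters can be sketched like so. This is a toy numpy filter I made up for illustration; real FXAA does directional edge searches and is considerably more involved:

```python
import numpy as np

def contrast_blur(img, threshold=0.5):
    """Box-blur only the pixels whose 3x3 neighbourhood contrast is high --
    the selective-blur idea behind FXAA/SMAA, crudely simplified."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    windows = np.stack([padded[dy:dy + h, dx:dx + w]
                        for dy in range(3) for dx in range(3)])
    blurred = windows.mean(0)                   # 3x3 box blur everywhere
    contrast = windows.max(0) - windows.min(0)  # local min-max contrast
    return np.where(contrast > threshold, blurred, img)

edge = np.zeros((6, 6))
edge[:, 3:] = 1.0                 # hard vertical edge
out = contrast_blur(edge)
print(np.round(out, 2))           # edge columns softened, flat areas untouched
```

Note how the flat regions pass through unchanged while the pixels next to the edge get averaged: that's the "selective" part SMAA improved on, and the temporal part of TAA is the same decision spread over multiple frames.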
Beyond this there are other tricks, like checkerboard rendering, where we render the frame in chunks at different resolutions based on what the player may or may not be looking at.
In VR we also use foveated rendering, which renders a cone in front of the player's immediate vision at a higher res than the periphery outside the eye's natural focus; with eye-tracking tech, this actually works really well.
But none of these are very good solutions, so we resort to another ugly, but potentially less bad, solution: just rendering the game at a lower resolution and upscaling it, like a DVD played on an HDTV. Instead of a traditional upscaling algorithm like Lanczos, though, we use DLSS, which reconstructs detail lost from the lower-resolution render based on the context of the frame, using machine learning. This is efficient because of the tensor cores now included on Nvidia's GPUs, which make N-dimensional array multiplication and mixed-precision FP math relatively computationally cheap.
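For contrast with ML reconstruction, a traditional resampler like Lanczos is just a fixed weighted sum with no notion of content. A minimal 1D Lanczos-2 upsampler as a sketch (toy code, not a production resizer; the function names are mine):

```python
import numpy as np

def lanczos2(x):
    """Lanczos-2 kernel: sinc(x) * sinc(x/2) for |x| < 2, else 0."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 2, np.sinc(x) * np.sinc(x / 2), 0.0)

def upsample_1d(signal, factor):
    """Upsample a 1D signal by a Lanczos-2 weighted sum at fractional positions."""
    n = len(signal)
    # Map each output sample back to a fractional source coordinate.
    src = (np.arange(n * factor) + 0.5) / factor - 0.5
    out = np.empty(n * factor)
    for t, s in enumerate(src):
        taps = np.arange(int(np.floor(s)) - 1, int(np.floor(s)) + 3)
        idx = np.clip(taps, 0, n - 1)           # clamp taps at the borders
        w = lanczos2(taps - s)
        out[t] = np.dot(w, signal[idx]) / w.sum()  # normalized weighted sum
    return out

step = np.array([0., 0., 0., 1., 1., 1.])
print(np.round(upsample_1d(step, 2), 3))  # note the slight ringing at the edge
```

The key point: the weights depend only on sample positions, never on what's in the image, so Lanczos can never "reconstruct" detail; DLSS trades that predictability for context-dependent guessing.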
DLSS often looks better compared to FXAA, SMAA and TAA because all of those just literally blur the image in different ways, without any detail reconstruction, but it is not comparable to any real anti-aliasing technique like MSAA.
But DLSS always renders at a lower res than native, so it will never be 1:1 with a true native image; it's just an upscale. That's okay, because that's not the point. The purpose of DLSS isn't to boost quality, it's to be a crutch for low performance, which is why turning DLSS off, even from the Quality preset, will often tank performance.
There is one situation where this tech can look better than native: instead of the typical application, which renders below native and upscales with ML guesswork, you can render above your display's native resolution and use the ML model to downscale the result back to it, outputting that.
In Nvidia settings I believe this is called DL DSR factors.
Oh yeah totally, I just meant more as a best case scenario where you have very well documented very specific functions with a very limited scope that are reused throughout.
Honestly I find this drive towards efficiency and automation many professional programmers have quite admirable.
While I studied programming, I think I just lacked this drive altogether, and I also really loved computers and to some degree I liked un-abstracting processes a lot more than I loved abstracting them, also due to my at-the-time untreated ADHD. When reasonable, I always pick a more manual way of doing something to maintain this control and understanding of system state.
I think cybersec was a great fit for me because I just found it much more stimulating to focus on the <1% of cases rather than the >99% of cases.
There is also just something very alienating when you work in large teams where each dev only contributes a small component, a lack of knowledge about the system is not only a good thing there but an expected paradigm to create reusable code, and it's a good one I think, just not actually all that fun to write for me personally.
If I have the brains for it, I'd love to try professional embedded at some point. Maybe it's something I could be good at.
Lolwut? No it doesn't? Yeah it turns off TAA so it might look sharper at first, and if you turn off the ugly ass sharpening then it's playable but literally any other option looks better than TAA, including TXAA from early 2010s lol.
Do you maybe mean DLAA? I have an RTX 3090 and a 9800X3D. It's ok. When the option exists I just crank up the res or turn on MSAA instead. Much better.
If you mean DLSS, my condolences. I'd rather play with FXAA most of the time.
The only game I'll use DLSS (on Transformer model+Quality) in is CP2077 with Path Tracing. With Ray Reconstruction it's almost worth the blurriness, especially because that game forces TAA unless you use DLAA/DLSS and I don't get a playable framerate without it, but also don't want to play without Path Tracing. Maybe one day I'll have the hardware needed to run it with PT and DLAA
I'm genuinely curious what you do that a 7b model is "trash" to you? Like yeah sure a gippity now tends to beat out a mistral 7b but I'm pretty happy with my mistral most of the time if I ever even need ai at all.
And it's fucking awful.
People didn't "want it" either before or after it was forced into being a thing; people had no choice because of GPU prices, especially console peasants stuck with their AMD APUs on par with something like a GTX 1070, where a middleman built their PC for them for under £600 plus hundreds in PS Plus/game fees over the years to come.
DLSS is even worse cancer than TAA, the washed out blurry slop only looks good on YouTube videos due to the compression. It's one thing if you're playing in the extremes of low performance and need a crutch, e.g. steam deck, it's a whole other when you make your game look like dog shit then use fancy FXAA and motion blur to cover it up so you can't see.
I agree with you on making the personal choice to steer away from megacorps, and I practice this myself as much as I can, but it hasn't ever worked en masse and I don't expect it will. Nor do I expect people will have much choice, as every smaller company will do what every big company does, and AI will be integrated in such small ways, like all the ways it was pre-Covid and pre-AI-spring, that people will use it unknowingly and love it.
Nah fuck that shit I was so glad when everything moved to computers in schools. Kids are never gonna need to handwrite or speak in their lives, they need typing practice.
Block all tracking scripts and use Firefox Nightly with ublock when possible.