
  • As an aside, you can edit your submission title on lemmy/kbin/mbin.

  • I'm not sure if the image has since been updated, but the horn-y boy before/after isn't the same image twice despite looking very similar. The left image has light-colored areas on the horns and some other similarly minor differences which are more noticeable when flicking between them but kinda hard to spot in a side-by-side.

  • Different sources claim different numbers, but the rate is considered by most sources to be low.

    While the statistics on false allegations vary – and refer most often to rape and sexual assault – they are invariably and consistently low. Research for the Home Office suggests that only 4% of cases of sexual violence reported to the UK police are found or suspected to be false. Studies carried out in Europe and in the US indicate rates of between 2% and 6%.

    https://theconversation.com/heres-the-truth-about-false-accusations-of-sexual-violence-88049

  • Typically no, the top two PCIe x16 slots are normally connected directly to the CPU, though when both are populated they both drop down to x8 connectivity.

    Any PCIe x4 or x1 slots come off the chipset, along with some of the IO and any third or fourth x16 slots.

    I think the relevant part of my original comment might've been misunderstood -- I'll edit to clarify that but I'm already aware that the 16 "GPU-assigned" lanes are coming directly from the CPU (including when doing 2x8, if the board is designed in this way -- the GPU-assigned lanes aren't what I'm getting at here).

    So yes, motherboards typically do implement more IO connectivity than can be used simultaneously, though they will try to avoid disabling USB ports or dropping their speed since regular customers will not understand why.

    This doesn't really address what I was getting at though. The OP's point was basically "the reason there isn't more USB is because there's not enough bandwidth - here are the numbers". The missing bandwidth they're mentioning is correct, but the reality is that we already design boards with more ports than bandwidth - hence why it doesn't seem like a great answer despite being a helpful addition to the discussion.
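    To put rough numbers on that, here's a quick sketch. The port counts and per-port speeds are assumptions for a hypothetical midrange board, not any specific product; the per-lane PCIe figures are the usual approximate rates:

    ```python
    # Rough oversubscription sketch -- illustrative, assumed numbers, not a real board.

    PCIE4_PER_LANE_GBPS = 1.97  # ~GB/s per PCIe 4.0 lane (16 GT/s, 128b/130b encoding)

    # Chipset uplink: commonly 4 lanes of PCIe 4.0 back to the CPU.
    uplink = 4 * PCIE4_PER_LANE_GBPS  # ~7.9 GB/s

    # Hypothetical chipset-attached IO for a midrange board:
    downstream = {
        "4x USB 3.2 Gen 2 (10 Gb/s each)": 4 * 10 / 8,               # ~5.0 GB/s
        "4x USB 3.2 Gen 1 (5 Gb/s each)":  4 * 5 / 8,                # ~2.5 GB/s
        "1x M.2 slot (PCIe 4.0 x4)":       4 * PCIE4_PER_LANE_GBPS,  # ~7.9 GB/s
        "4x SATA (6 Gb/s each)":           4 * 6 / 8,                # ~3.0 GB/s
    }

    total = sum(downstream.values())
    print(f"Uplink to CPU:         ~{uplink:.1f} GB/s")
    print(f"Sum of attached ports: ~{total:.1f} GB/s ({total / uplink:.1f}x the uplink)")
    # The attached ports comfortably exceed the uplink, and that's fine in practice
    # because they're never all saturated at the same time.
    ```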

  • Isn't this glossing over that (when allocating 16 PCIe lanes to a GPU as per your example), most of the remaining I/O connectivity comes from the chipset, not directly from the CPU itself?

    There'll still be bandwidth limitations, of course, as you'll only be able to max out the bandwidth of the link (which in this case is 4x PCIe 4.0 lanes), but this implies that it's not only okay but normal to implement designs that don't support maximum theoretical bandwidth being used by all available ports, so we don't need to allocate PCIe lanes to USB ports as stringently as your example calculations require.

    Note to other readers (I assume OP already knows): PCIe lane bandwidth doubles/halves when going up/down one generation respectively. So 4x PCIe 4.0 lanes are equivalent in maximum bandwidth to 2x PCIe 5.0 lanes, or 8x PCIe 3.0 lanes.

    edit: clarified what I meant about the 16 "GPU-assigned" lanes.
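    To put approximate numbers on the note above (usable rates after encoding overhead, rounded; a quick sketch rather than exact spec figures):

    ```python
    # Approximate usable bandwidth per PCIe lane, per generation (GB/s).
    # Each generation roughly doubles the previous one.
    per_lane_gbps = {3.0: 0.985, 4.0: 1.969, 5.0: 3.938}

    def link_bandwidth(gen: float, lanes: int) -> float:
        """Rough maximum bandwidth of a link with `lanes` lanes of generation `gen`."""
        return per_lane_gbps[gen] * lanes

    # The three equivalent links from the note above:
    for gen, lanes in [(4.0, 4), (5.0, 2), (3.0, 8)]:
        print(f"PCIe {gen} x{lanes}: ~{link_bandwidth(gen, lanes):.1f} GB/s")
    # All three come out to ~7.9 GB/s.
    ```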

  • Which is exactly how the real world works. Harm has to be identified to suggest solutions.

    According to the submission, some harms have been identified, and some solutions have been suggested [that could reduce the same and similar harms from occurring to new and existing users] (but mostly it sounds like a "more work needs to be done" thing).

    I imagine your perspective on the issues being discussed is different from the author's. The helicopter parent analogy makes sense in a low-danger environment; I think what the author has suggested is that some people don't feel like it's a low-danger environment for them to be in (though I of course -- not being the author or one such person -- may be mistaken).

    Edit: [clarified] because I realised it might seem contradictory if read literally.

  • I’m not understanding why blocking is ineffective…?

    As I understand it, because it requires harm to be experienced before the negating action is taken.

    A parallel might be having malware infect a system before it can be identified and removed (harm experienced - future harm negated), vs proactively preventing malware from infecting the system in the first place (no harm experienced before negation).

  • Fwiw I also had ongoing issues with the ZF9 pocket-dialing. Ymmv of course, but I haven't had it happen ever since changing its position in my pocket from [top facing up and screen facing me] to [top facing up and screen facing away from me]. It's been at least several months since I made the change, so maybe it will help some of the people having the issue?

  • Sure, but not much of that battery improvement is coming from migrating the APU's process node. Moving from TSMC's 7nm process to their 6nm process is only an incremental improvement; a "half-node" shrink rather than a full-node shrink like going from their 7nm to their 5nm.

    The biggest battery improvement is (almost definitely) from having a 25% larger battery (40 Wh → 50 Wh), with the APU and screen changes each providing smaller battery life improvements than that. Hence the APU change improving efficiency "a little".

  • They were careful with how they phrased it, leaving the possibility of a refresh without a performance uplift still on the table (as speculated by media). It looks like the OLED model's core performance will be only marginally better due to faster RAM, but that the APU itself is the same thing with a process node shrink (which improves efficiency a little).


    See also: PCGamer article about an OLED version. They didn't say "no", and (just like with the previously linked article), media again speculated about a refresh happening.

    It looks like they were consistent in saying that it wasn't simple to just drop in a new screen and leave everything else as-is, and they used that opportunity to upgrade basically everything a little bit while they were tinkering with the screen upgrade.

  • It is; I've used that to prevent automatic removal of leading zeroes when reading the values of bytes.

    Based on the article, it seems like it's just a matter of not having to spend the time (and mental overhead) doing that for all required columns and never slipping up on it (now it's just set and forget).
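    In case it helps anyone picture what "doing that for all required columns" looks like, here's a minimal sketch. It assumes the pandas CSV case purely for illustration (the article may be about a different tool), with a hypothetical "code" column:

    ```python
    # Minimal sketch -- assumes pandas and a hypothetical CSV; the tool in the
    # article may differ, but the leading-zero problem is the same.
    import io
    import pandas as pd

    csv = io.StringIO("id,code\n1,007\n2,042\n")

    # Default type inference turns "007" into the integer 7 -- leading zeroes are lost.
    print(pd.read_csv(csv)["code"].tolist())                      # [7, 42]

    csv.seek(0)
    # Forcing the column to a string dtype keeps them, but you have to remember
    # to do this for every affected column, every time.
    print(pd.read_csv(csv, dtype={"code": str})["code"].tolist())  # ['007', '042']
    ```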

  • Unless you're also throwing money at YouTube premium (etc), isn't this by definition unsustainable to do? So it's not really a viable long-term strategy either.

    Like don't get me wrong, I don't want all the tracking and stuff either, but somebody has to pay those server bills. If it's not happening through straight cash then it's going to be through increasingly aggressive monetization and cost-cutting strategies.

  • Fair enough. I suppose the terminology has evolved somewhat with time, and I can't say I have much insight into a time period from before I was born.

  • You seem to be using the term "open source" for what is instead commonly called "source-available", which has a distinct meaning from open source.

    [Source-available software] includes arrangements where the source can be viewed, and in some cases modified, but without necessarily meeting the criteria to be called open-source.

    [Open-source software] is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose.

    edit: fixed duplicated phrasing

  • Yes, though just nitro basic. Discord doesn't show ads and claims to not sell my data. While I can afford to do so, I'd much rather pay a few bucks a month to keep it that way.

    The number of people in this thread aggressively against a free-to-use service having any kind of way to pay employees and server bills makes me fucking depressed, and helps to explain why most free services I enjoy never seem to stay afloat with just an optional payment-based membership thing.

    Edit: To people suggesting less corporate-based (whether FOSS or not) alternatives, that's totally cool! Just remember that the people behind these projects need some way to pay the bills the same way the corporate ones do, so I encourage you to contribute to them, whether that's through e.g., code improvements (which doesn't pay bills but is still helpful!) or plain old donations.

  • UPDATE: the shutdown has been (for now) retracted.

    The admin (jerry) has switched from kbin to a fork called mbin that has apparently been able to integrate changes faster than the base kbin project. Jerry seems satisfied with the number of issues fixed in the fork (for now), so has retracted the shutdown announcement (for now).

    FEDIA.IO update!!!

    After I made the announcement about shutting down fedia.io, someone pointed out that Melroy, a very active developer on kbin, forked kbin to mbin. I just migrated to mbin and so far it seems to have resolved all the problems I've seen. It's likely too early to tell, but I think that Melroy is VERY responsive and helpful, so I am retracting my shutdown announcement. And that makes me very happy.

    https://infosec.exchange/@jerry/111235153655966812


    Followup: https://fedia.io/m/fedia/t/350673 tl;dr retraction has become more concrete. No need for the "for now" qualifier anymore.

  • Hahaha, I think you're giving me a bit too much credit - I was just curious enough to run some tests on my own, then share the results when I saw a relevant post about it!

    My interest in image compression is only casual, so I lack both breadth and depth of knowledge. The only "sub-field" where I might qualify as almost an actual expert is exactly what I posted about - image compression for sharing digital art online. For anything else (compressing photos, compressing for the purpose of storage, etc) I don't really know enough to give recommendations or the same level of insight!

    Edit: fixed typo and clarified a point.

  • It depends a lot on what's being encoded, which is also why different people (who've actually tested it with some sample images) give slightly different answers. On "average" photos, there's broad agreement that WebP and MozJpeg are close. Some will say WebP is a little better, some will say they're even, some will say MozJpeg is still a little better. Seems to mostly come down to the samples tested, what metric is used for performance, etc.

    I (re)compress a lot of digital art, and WebP does really well most of the time there. Its compression artifacts are (subjectively) less perceptible at the level of quality I compress at (fairly high quality settings), and it can typically achieve slightly to moderately better compression than MozJpeg in doing so as well. Based on my results, it seems to come down to being able to optimize for low-complexity areas of the image much more efficiently, such as a flatly/evenly shaded area (which doesn't happen in a photo).

    One thing WebP really struggles with by comparison is the opposite: grainy or noisy images, which I believe is a big factor in why different sets of images seem to produce different results favoring either WebP or JPEG. Take this (PNG) digital artwork as an extreme example: https://www.pixiv.net/en/artworks/111638638

    This image has had a lot of grain added to it, and so both encoders end up with a much higher file size than typical for digital artwork at this resolution. But if I put a light denoiser on there to reduce the grain, look at how the two encoders scale:

    • MozJpeg (light denoise, Q88, 4:2:0): 394,491 bytes (10% reduction)
    • WebP (light denoise, Picture preset, Q90): 424,612 bytes (29% reduction)

    Subjectively I have a preference for the visual tradeoffs on the WebP version of this image. I think the minor loss of details (e.g., in her eyes) is less noticeable than the JPEG version's worse preservation of the grain and more obvious "JPEG compression" artifacts around the edges of things (e.g., the strand of hair on her cheek).

    And you might say "fair enough it's the bigger image", but now let's take more typical digital art that hasn't been doused in artificial grain (and was uploaded as a PNG): https://www.pixiv.net/en/artworks/112049434

    Subjectively I once again prefer the tradeoffs made by WebP. Its most obvious downside in this sample is [originally: the small red-tinted particles coming off of the sparkler being less defined -- see the second edit note] probably the slightly blockier background gradient, but I find this to be less problematic than e.g., the fuzz around all of the shooting star trails... and all of the aforementioned particles.

    Across dozens of digital art samples I tested on, this paradigm of "WebP outperforms for non-grainy images, but does comparable or worse for grainy images" has held up. So yeah, depends on what you're trying to compress! I imagine grain/noise and image complexity would scale in a similar way for photos, hence some of (much of?) the variance in people's results when comparing the two formats with photos.


    Edit: just to showcase the other end of the spectrum, namely no-grain, low complexity images, here's a good example that isn't so undetailed that it might feel contrived (the lines are still using textured [digital] brushes): https://www.pixiv.net/en/artworks/112404351

    I quite strongly prefer the WebP version here, even though the JPEG is 39% larger!

    Edit2: I've corrected the example with the sparkler - I wrote the crossed out section from memory from when I did this comparison for my own purposes, but when I was doing that I was also testing MozJpeg without chroma subsampling (4:4:4 - better color detail). With chroma subsampling set to 4:2:0, improved definition of the sparkler particles doesn't really apply anymore and is certainly no longer the "most obvious" difference to the WebP image!
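    If anyone wants to run this kind of comparison on their own images, here's a rough sketch using Pillow. Note that Pillow wraps libjpeg(-turbo) and libwebp rather than MozJpeg and the cwebp presets I used, so the exact sizes will differ, but the quality and chroma subsampling settings map onto the same ideas (the "artwork.png" filename is just a placeholder):

    ```python
    # Rough sketch of the comparison above, using Pillow.
    # Pillow's encoders are not identical to MozJpeg / cwebp, so treat
    # the resulting sizes as ballpark figures only.
    import os
    from PIL import Image

    img = Image.open("artwork.png").convert("RGB")  # placeholder input file

    # JPEG at quality 88 with 4:2:0 chroma subsampling (subsampling=2);
    # use subsampling=0 for 4:4:4 (better colour detail, larger file).
    img.save("out.jpg", "JPEG", quality=88, subsampling=2, optimize=True)

    # WebP at quality 90, slowest/best compression effort.
    img.save("out.webp", "WEBP", quality=90, method=6)

    for path in ("out.jpg", "out.webp"):
        print(f"{path}: {os.path.getsize(path):,} bytes")
    ```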

  • I think in this context (particularly with a very quick skim of the paper for some additional context), it might be more helpful to think of air "powering" this design in the same way that electricity "powers" things. The focus isn't on the energy source, it's on the structural design of the "robot" itself.

    Consider it another way: if their system/model/whatever designed a conventional electrically-powered robot without also designing an electrical generator or batteries etc, would you still discount it as "not being a robot"? The problem might be in our expectation based on the language being used. I might also be full of crap haha, but hopefully that's another perspective to consider.