Posts: 0 · Comments: 393 · Joined: 1 yr. ago

  • Ahh, gotcha. Apologies, I haven't had enough caffeine yet, so it went completely over my head.

    That makes sense to me. I also prefer Briar on that basis, although I currently don't use it at all. I've had a hard enough time getting folks to switch to Signal, so I don't want to try to push them to move once again. If Signal starts enshittifying then I'll probably start the Sisyphean push to switch again.

    edit: ugh it's Sisyphean not Sisyphusian

  • Claims require evidence in proportion to their extremity. There is no evidence of a backdoor in that issue. If a security researcher made a post saying "Signal is CIA backdoored, here is exactly how it works," then I would read it and use my relevant domain knowledge as a software dev to make a decision. No explanation is provided, so I have nothing to use to decide. Therefore, my viewpoint is unchanged.

    Signal has been audited, and I believe it's been audited multiple times. If you're worried about your 4th amendment rights in the US, don't turn on backups. If you have something serious to hide and your threat model includes state actors, send messages that delete themselves after a certain time period and enforce that discipline amongst your peers. The poster's concerns sound like a skill issue to me.

  • I think that the person you're responding to is asking for the specifics of why Briar is ethically superior. Do the other options have ethical issues? Or does Briar have a specific characteristic that makes it ethically superior (e.g. its p2p nature)?

    I'd also like to know. It's never occurred to me to look at the technical nature of secure messaging systems through the lens of ethics, so I find the idea intriguing.

  • I watched a streamer play it and you couldn't even hear his voice by the end thanks to all of the beautiful stimulation

    edit: I watched it in PiP mode on my phone while scrolling through Lemmy, of course

  • I've been very pleased with my factory-seconds Framework 13 (11th gen i7, 64 gigs of RAM and 2TB storage acquired through other channels). Linux support has been basically perfect for me, although there were some kinks earlier on. The Framework 16 might work for you if you need something with a discrete GPU.

    If you want something more mainstream, ThinkPads are often great for running Linux. Not every model is perfect, so I'd recommend doing some research there. The Arch Linux wiki often has laptop-specific pages that show how well supported a given machine is. For example, here's the page for the Framework 13.

  • Cities probably have a higher density of towers, or the towers in cities have more capable antennas. Point-to-point microwave links can be pretty damn fast and reliable. They have their limitations, but even low-end systems like some of Ubiquiti's 60 GHz stuff can form full-duplex 5 Gbps links at 10+ kilometers. Fiber is still king, but I'm guessing the backhaul isn't the issue.

    I'm guessing that the issue is congestion on the client radios. 5G is supposed to be much better at dealing with this thanks to time-sharing improvements, but it seems likely that there just aren't enough towers. One scenario that seems reasonable is that your telco (incorrectly) assumed that they wouldn't need as many towers when upgrading, so they only upgraded a subset of their towers and removed old ones once 4G was deprecated.

    edit: you might be able to get better information about wtf is going on by using a community-sourced site like https://cellmapper.net/

    I believe you can use that site to get info about how many towers there are and what the client-side congestion is like.

    EDIT: ew, cellmapper is closed source. OpenCellid or beaconDB seem to be open source equivalents.
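For a sense of scale on those 60 GHz backhaul links: free-space path loss grows 20 dB per decade of frequency, which is why high-frequency hops need very high-gain dishes. A quick sketch of the standard formula (distances and frequencies are illustrative, not from any Ubiquiti datasheet):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 299_792_458  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# A 10 km hop at 60 GHz vs the same hop at 900 MHz:
loss_60ghz = fspl_db(10_000, 60e9)   # ~148 dB
loss_900mhz = fspl_db(10_000, 900e6) # ~112 dB
print(round(loss_60ghz), round(loss_900mhz))
```

That ~36 dB gap is pure frequency scaling, which is why sub-GHz signals reach so much farther at the same power.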

  • Abso-fucking-lutely, amen and hallelujah. I want 6G to focus on improving range and performance in marginal conditions. When shit is good, 5G is fast enough for now. I don't know how you improve range and penetration without going to lower frequencies, so maybe we should try to do that? Lower frequencies mean less bandwidth, but RF is black magic fuckery and there's all kinds of crazy shit that can be done with time division, so maybe we can improve throughput in the sub-GHz regime. I dunno about that, I'm just an idiot software developer who is thankful that shit works without me having to sacrifice a goat.

    Maybe there's a way to broadcast at higher power levels, and maybe there are ways for base stations to be more sensitive or do filtering to increase SNR. I have no idea, but I think that should be what the telcos focus on. Better service over a wider area with the same number of towers would be huge.
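The bandwidth-versus-SNR tradeoff above is exactly Shannon's capacity formula, C = B * log2(1 + SNR): better SNR can partially compensate for a narrow sub-GHz channel, but only logarithmically, while bandwidth scales capacity linearly. A toy sketch with invented numbers:

```python
import math

def capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon channel capacity in bits/s: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Wide mmWave channel with mediocre SNR vs narrow sub-GHz channel with great SNR:
wide = capacity_bps(400e6, 10)    # 400 MHz at SNR 10 (~10 dB) -> ~1384 Mbit/s
narrow = capacity_bps(20e6, 1000) # 20 MHz at SNR 1000 (~30 dB) -> ~199 Mbit/s
print(round(wide / 1e6), round(narrow / 1e6))
```

Even a 20 dB SNR advantage can't make 20 MHz keep up with 400 MHz, which is the hard limit on sub-GHz throughput the comment is gesturing at.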

  • Python is my primary language. For the way I write code and solve problems, it's the language where I need the least help from an LLM. Python lets you write code that is incredibly concise while still being easy to read. There's more of a case to be made for something like Go, since it seems like every single god damned function call ends up being variable, err := someFuckingShit() followed by an if err != nil block to handle it manually, instead of having nice exception handling. Even there, my IDE does that for me without requiring a computationally expensive LLM to do the work.

    Like, some people have a more conversational development style and I guess LLMs work well for them. I end up constantly context switching between code review mode and writing code mode which is incredibly disruptive.
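A sketch of the contrast being drawn, using a hypothetical read_json helper: in Python the error handling collapses into one try/except at the call site instead of a check after every call, so there's much less boilerplate for an LLM to autocomplete in the first place.

```python
import json

def read_json(path):
    # In Go, both the open and the decode would each need their own
    # `if err != nil { return ..., err }`. Here any failure (missing file,
    # bad JSON) raises and propagates automatically.
    with open(path) as f:
        return json.load(f)

try:
    config = read_json("config.json")
except (OSError, json.JSONDecodeError):
    config = {}  # fall back to defaults on any failure
```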

  • As a senior dev, I have no use for it in my workflow. The only purpose it would serve for me is to reduce the amount of typing I do. I spend about 5-10% of my time actually writing code. The rest of my dev time is spent in architecting, debugging, testing, or documenting. LLMs aren't really good at most of those things once you move past the most superficial levels of complexity. Besides, I don't actually want something to reduce the amount I'm typing. If I'm typing too much and I'm getting annoyed then it's a sure sign that I've done something bad. If I'm writing boilerplate then it's time to write an abstraction to eliminate it. If I'm writing repetitive tests then it's a sign I need to move to a property-based testing framework like Hypothesis. If the LLM spits all of this out for me, I will end up with code that is harder to understand and maintain.

    LLMs are fine for learning and junior positions where you'll have more experienced folks reviewing code, but it just is not that helpful past a certain point.

    Also, this is probably a small thing, but I have yet to find an LLM that writes anything other than shitty, terrible shell scripts. Please for the love of God don't use an LLM to write shell scripts. If you must, then please pass the results through shellcheck and fix all of the issues there.
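The property-based idea mentioned above boils down to: generate lots of random inputs and assert invariants, instead of hand-writing example cases. A stdlib-only sketch of that idea (Hypothesis adds smart input generation and automatic shrinking of failing cases on top of this; dedupe is a hypothetical function under test):

```python
import random

def dedupe(items):
    # Function under test: remove duplicates, preserving first-seen order.
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

# Property-based style: random inputs, invariants instead of hand-picked cases.
rng = random.Random(0)
for _ in range(200):
    data = [rng.randint(-5, 5) for _ in range(rng.randint(0, 20))]
    result = dedupe(data)
    assert set(result) == set(data)          # no elements lost or invented
    assert len(result) == len(set(result))   # no duplicates remain
    assert dedupe(result) == result          # idempotent
```

Three short invariants cover hundreds of cases that would otherwise each be a hand-written repetitive test.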

  • Not all Bluetooth stuff requires an app. I have dozens of BLE sensors all around my house and I haven't downloaded anything for the majority of them. My BLE proxies pick them up automagically and I get a notification on my phone about a new detected device.

    There are a few where you have to hack some bullshit, but I just avoided buying more of those once I learned that some of my shit needed that.

  • I spend all day at work exploring the inside of the k8s sausage factory so I'm inured to the horrors and can fix basically anything that breaks. The way k8s handles ingress and service discovery makes it absolutely worth it to me. The fact that I can create an HTTPProxy and have external-dns automagically expose it via DNS is really nice. I never have to worry about port conflicts, and I can upgrade my shit whenever with no (or minimal) downtime, which is nice for smart home stuff. Most of what I run tends to be singleton statefulsets or single-leader deployments managed with leases, and I only do horizontal for minimal HA, not at all for perf. If something gives me more trouble running in HA than it does in singleton mode then it's being run as a singleton.

    k8s is a complex system with priorities that diverge from what is ideal for usage at home, but it can be really nice. There are certain things that just get their own VM (Home Assistant is a big one) because they don't containerize/k8serize well though.
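For anyone curious what that ingress setup looks like, here's a hypothetical manifest, assuming Contour's HTTPProxy CRD and external-dns running with its contour-httpproxy source (names, namespace, and domain are invented):

```yaml
# external-dns (--source=contour-httpproxy) reads the fqdn below
# and creates the matching DNS record automatically.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: grafana
  namespace: monitoring
spec:
  virtualhost:
    fqdn: grafana.home.example.com
    tls:
      secretName: grafana-cert
  routes:
    - services:
        - name: grafana
          port: 3000
```

Because routing is by hostname rather than host port, any number of services can share the same ingress endpoint with no port conflicts.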

  • Onshape is an okay option for Linux (I've been able to do everything I used to do in Inventor), although I hate that it's cloud-based. I know that a rug pull is inevitable, but I figure I'll stick with it until then.

  • Cat

    Jump
  • I think you're spot on. Those weird blurry things in the background on the right side even have that... wavy? stripey? idk what artifact that comes from the scanner's light not being consistent enough.

  • There's something really weird with this photo and I can't tell what it is. Like, the little piece of dirt in the middle left, the distortion in the corners, and the general weirdness of the rendering make me think "someone adapted some weird old lens to a modern camera." The slightly fucked up transition from cat to background bokeh either supports that, or the bokeh is computational and this is some fucking cursed piece of digital manipulation. EDIT: or, I suppose it could be from the eldritch dreams of some image model, but I didn't think about that when I first saw the picture.

    Either way, I love this photo even if I've spent way too long trying to figure out what even the fuck.