Posts: 0 · Comments: 167 · Joined: 2 yr. ago

  • It's a Chinese character. Pronounced jiǒng.

    It was really only used as an emoticon in Asia during the 2010s, though.

  • The argument is that processing data physically near where it is stored (known as NDP, near-data processing, as opposed to traditional architectures, where data lives off-chip and must travel to the CPU) is more power efficient and lower latency for a variety of reasons (interconnect complexity, pin density, lane charge rate, etc.). Someone came up with a design that can do complex computations much faster than before using NDP.

    Personally, I'd say traditional computer architecture is not going anywhere, for two reasons. First, these esoteric new architecture ideas, such as NDP, SIMD (probably not esoteric anymore; GPUs and vector instructions both do this), and in-network processing (where your network interface does compute), are notoriously hard to work with. It takes a CS master's level of understanding of the architecture to write a program in the P4 language (which doesn't allow loops, recursion, etc.). No matter how fast your fancy new architecture is, it's worthless if most programmers on the job market can't work with it. Second, there are too many foundational tools and applications that rely on traditional computer architecture. Nobody is going to port their 30-year-old stable MPI program to a new architecture every 3 years; it's just way too costly. People want to buy new hardware, install it, compile existing code, and see big numbers go up (or down, depending on which numbers).

    I would say the future is a mostly von Neumann machine with some of these fancy new toys (GPUs, memory DIMMs with integrated co-processors, SmartNICs) attached as dedicated accelerators. Existing application code probably will not be modified. However, the underlying libraries will be able to detect these accelerators (e.g. GPUs, DMA engines, etc.) and offload supported computations to them automatically to save CPU cycles and power. Think of your standard memcpy() running on a dedicated data mover on the memory DIMM if your computer supports it. This way, your standard 9-to-5 programmer can still work like they used to and leave the fancy performance optimization stuff to a few experts.
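
The transparent-offload idea can be sketched in a few lines. Everything here is hypothetical: `probe_dma_engine()` stands in for whatever platform query a real library or driver would expose, and no such accelerator actually exists on this machine.

```python
def probe_dma_engine():
    """Pretend to probe the platform for an on-DIMM data mover.

    A real implementation might read sysfs, query a driver, or call a
    vendor library; here we simply report that none is present.
    """
    return None  # no accelerator found

def _dma_offload(engine, dst, src):
    # Would enqueue a copy descriptor on the data mover and wait for it.
    raise NotImplementedError

def smart_copy(dst: bytearray, src: bytes) -> None:
    """memcpy()-style copy that transparently offloads when possible."""
    engine = probe_dma_engine()
    if engine is not None:
        _dma_offload(engine, dst, src)  # accelerator path: CPU stays idle
    else:
        dst[:len(src)] = src            # fallback: plain CPU copy

buf = bytearray(16)
smart_copy(buf, b"offload me")
print(bytes(buf[:10]))  # b'offload me'
```

The point is the call site: application code only ever sees `smart_copy()`, and the dispatch decision lives entirely inside the library.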

  • Also, if the router blocks ICMP for some reason, you can always manually send an ARP request and check the response latency.
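
For illustration, here is what such a hand-built ARP "who-has" frame looks like on the wire. The MAC and IP addresses are made up; actually transmitting it requires a raw AF_PACKET socket and root, after which you would time how long the matching ARP reply takes to come back.

```python
import struct

def build_arp_request(src_mac: bytes, src_ip: bytes, target_ip: bytes) -> bytes:
    eth = struct.pack("!6s6sH",
                      b"\xff" * 6,   # destination: Ethernet broadcast
                      src_mac,
                      0x0806)        # EtherType: ARP
    arp = struct.pack("!HHBBH6s4s6s4s",
                      1,             # hardware type: Ethernet
                      0x0800,        # protocol type: IPv4
                      6, 4,          # MAC / IPv4 address lengths
                      1,             # opcode 1 = request ("who-has")
                      src_mac, src_ip,
                      b"\x00" * 6,   # target MAC unknown, that's the question
                      target_ip)
    return eth + arp

frame = build_arp_request(b"\xaa\xbb\xcc\x00\x11\x22",
                          bytes([192, 168, 1, 100]),
                          bytes([192, 168, 1, 1]))
print(len(frame))  # 42: 14-byte Ethernet header + 28-byte ARP payload
```

In practice a tool like arping does all of this for you, including the timing.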

  • So let me get this straight: you want other people to work for free on a project that you yourself think is a hassle to maintain, while also expecting the same level of professionalism as a 9-to-5 job?

  • And that's fine. Plenty of authors are great at writing the journey and terrible at writing endings. And from what we've gotten so far, at least he now knows what not to do when writing an ending.

  • This is solving a problem we DO have, albeit in a different way. Email is ancient; the protocol allows you to self-identify as whoever you want. Let's say I send an email from the underworld (the server's IP address) claiming I'm Napoleon@france (user@domain): the only reason my email is rejected is that the recipient knows Napoleon resides on the server france, not underworld. This validation is mostly done via tricky DNS hacks (SPF, DKIM, DMARC), and a huge part of it is built on top of Google's infrastructure. If for some reason Google decides I'm not trustworthy, then it doesn't matter if I'm actually sending Napoleon's mail from france, it's gonna be recognized as spam on most servers regardless.

    A decentralized chain of trust could potentially replace Google plus all these DNS hacks we have in place; no central authority would get to control who is legitimate. Of all the BS use cases of blockchain, I think this one doesn't seem that bad. It's building a decentralized chain of trust for an existing decentralized system (email), which is exactly what blockchain was originally designed for.
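
To make the "recipient knows where Napoleon resides" check concrete, here is a deliberately crude, incomplete take on an SPF-style check. Real SPF (RFC 7208) has many more mechanisms (include:, mx, redirect, macros); this toy only handles ip4: terms, and the record and IPs are invented for the example.

```python
import ipaddress

def spf_allows(spf_record: str, sender_ip: str) -> bool:
    """Return True if sender_ip matches an ip4: mechanism in the record."""
    ip = ipaddress.ip_address(sender_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False  # fell through to the "-all" catch-all: reject

# "france" publishes (via a DNS TXT record) which servers may send its mail:
record = "v=spf1 ip4:203.0.113.0/24 -all"
print(spf_allows(record, "203.0.113.7"))   # True: mail really came from "france"
print(spf_allows(record, "198.51.100.9"))  # False: the "underworld" gets rejected
```

The decentralized-trust pitch is essentially about replacing who publishes and vouches for that record, not the matching logic itself.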

  • Love the folks at the Teamsters. They were super supportive when we at UAW went on strike (not the recent automotive one, a previous one).

  • Is there a specific reason you're looking at Shadowsocks? The original developer has been MIA for years, and people who used it in the past largely consider it insecure for its original stated purpose.

    trojan-gfw is a better modern replacement. However, that requires a TLS certificate to work; you can easily get one via Let's Encrypt.

    At this point, let Shadowsocks, obfs, and KCP die a graceful death, like GoAgent did before them.

  • I don't think either of us is the target audience here. I can see a "cheaper" (questionable) Pro laptop being useful for students going into college on a limited budget. An undergrad CS/graphic design degree shouldn't tax an 8 GB machine too much, assuming students shut down everything else when doing their once-a-semester major rendering/compiling/model training. If people just want MacBook Pro software with more ports, a "cheaper" machine is better than none. Personally, I would still get a used/refurbished machine though.

    That being said, my current laptop workload tends to be Emacs, qpdfview, Firefox, and tmux on EL9. For the remaining stuff, I usually just spin up a VM and ssh/xrdp into it. As for Slack, Teams, Jabber, etc., I'm happy to report I've been out of industry/IT for 1+ years and don't plan on going back anytime soon. For all I care, Apple can call their models the unicorn edition; as long as it sells, it's not stupid.

  • You don't understand. It's not like the self-driving feature is just software where they can price it at whatever they want. It's physically consuming brain cells every month. And those aren't free you know!

  • At this rate the only party they will have left will be their own farewell party.

  • My T480 is my favorite laptop. But this is NOT one of its use cases.

  • Do not get a ThinkPad if you're using it for graphic design. The screen color calibration is terrible (even compared to low-end devices).

    Last I checked, some of the Dell laptops have decent screens (the XPS and Latitude lines), but they tend to be on the pricier side.

  • There are more places where bandwidth is a bottleneck now than 10 years ago.

    NIC speeds have gone from 100 Gbps to 800 Gbps in the last few years, while PCIe and DRAM speeds have come nowhere close to increasing that much. There's no way you're going to push all that data through to the CPU in time. Bandwidth is the bottleneck these days and will continue to be a huge issue for the foreseeable future.
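
Rough back-of-the-envelope numbers make the gap obvious. These are raw signaling rates (PCIe 5.0 runs 32 GT/s per lane), ignoring protocol and encoding overhead, which only makes things worse:

```python
nic_gbps = 800            # line rate of a current high-end NIC
pcie5_x16_gbps = 32 * 16  # PCIe 5.0 x16: 32 GT/s per lane * 16 lanes = 512 Gb/s

print(nic_gbps / pcie5_x16_gbps)  # ~1.56: the NIC outruns a full x16 slot
print(nic_gbps / 8)               # 100 GB/s the host memory system must absorb
```

So a single 800 Gbps NIC already exceeds an entire PCIe 5.0 x16 slot before you even touch DRAM bandwidth or CPU cycles.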

  • Oh great, a human failed the Turing Test...

  • This person must be fun at parties.

    Also, does nobody reach out to people privately to resolve conflicts these days? Even a simple "Hi, I saw my post was removed. Could you please clarify why it doesn't fall under the news category?" would do (not "I object. I'm right and you're wrong," though). There are more efficient ways to clear up disagreements without immediately making a fool of yourself in public.

  • Pirating fonts?

  • Another thing you can look into is Apptainer/Singularity: basically portable container binaries. Executing the binary automatically runs a program or drops you into a shell inside the container, with your $HOME mounted inside. Stuff like CUDA also works as long as your host system has the appropriate drivers.

    You can also convert Docker containers to Apptainer directly via the CLI.

  • Oh wow. It supports Kobos as well. Gonna have to check this out. Thanks.