Posts: 18 · Comments: 433 · Joined: 2 yr. ago

  • Personally, I think Proxmox is somewhat insecure too.

    Proxmox differs from other projects in that it's much more hacky, and much of the stack is custom rather than standards-based. For example, for networking they maintain their own fork of ifupdown2, a rewrite of Debian's older ifupdown network configuration tooling, whereas similar projects like OpenStack or Incus use either the standard Linux kernel networking or a project called Open vSwitch.

    I think Proxmox is definitely secure enough, but I don't know if I would really trust it for higher-value use cases, since some of their stack is custom rather than standard and maintained by the wider community.

    If I end up wanting to run Proxmox, I’ll install Debian, distro-morph it to Kicksecure

    If you're interested in deploying a hypervisor on top of an existing operating system, I recommend looking into Incus or OpenStack. They have packages/deployments that can be done on Debian or Red Hat distros, and I would argue they are designed in a more secure manner (since they include multi-tenancy) than Proxmox. In addition to that, they also use standard tooling for networking; for example, both can use a Linux bridge (in-kernel networking) for networking operations.

    I would trust OpenStack the most when it comes to security, because it is designed to be used as a public cloud (like having your own AWS), and it is deployed with components publicly accessible in the real world.

  • Again, this is distracting from the original argument by raising a separate argument unrelated to the original one: is SSH secure to expose to the internet?

    You said no. That is the argument being contested.

  • This is moving the goalposts. You went from "SSH is not fine to expose" to "VPNs add security". While the second is true, it's not what was being argued.

    Never expose your SSH port on the public web,

    Linux was designed as a multi-user system. My college, Cal State Northridge, has an SSH server you can connect to and put your site up on. Many colleges continue to have a similar setup, and by putting stuff in your home directory you can have a website at no cost.

    There are plenty of use cases that involve exposing SSH to the public internet.

    And when it comes to raw vulnerabilities, SSH has had vastly fewer than something like Apache httpd, which powers WordPress sites everywhere but has had many path-traversal and RCE vulns over the years.
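    For anyone who does expose sshd, most of the practical risk is handled by a few lines of configuration. A sketch of commonly recommended `sshd_config` settings (not a complete policy; adjust to your environment):

    ```
    # /etc/ssh/sshd_config (fragment)
    PasswordAuthentication no   # key-based authentication only
    PermitRootLogin no          # log in as a normal user, escalate locally
    ```

    Combined with keeping OpenSSH patched, that removes the credential-guessing attacks that make up almost all of the noise against a public SSH port.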

  • Firstly, Xen is considered secure by Qubes, but that's mainly the security of the hypervisor and virtualization system itself. They make a very compelling argument that escaping a Xen-based virtual machine is going to be more difficult than escaping a KVM virtual machine.

    But threat model matters a lot. Qubes aims to be the most secure OS ever, for use cases like high-profile journalists and other people who absolutely need security, because they could literally get killed without it.

    Amazon moved to KVM because, despite the security trade-offs, it's "good enough" for their use case, and KVM is easier to manage because it's in the Linux kernel itself, meaning you get it whenever you install Linux on a machine.

    In addition to that, security is about more than just the hypervisor. You noted that Proxmox is Debian, and XCP-NG is CentOS or a RHEL rebuild similar to Rocky/Alma, I think. I'll get to this later.

    Xen (and by extension XCP-NG) was better known for security whilst KVM (and thus Proxmox)

    I did some research on this and was planning to make a blog post, but never got around to it. I still have the draft saved, though.

    | Name | Summary | Full Article | Notes |
    | --- | --- | --- | --- |
    | Performance Evaluation and Comparison of Hypervisors in a Multi-Cloud Environment | Compares WSL (kind of Hyper-V), VirtualBox, and VMware Workstation. | springer.com, html | Not an honest comparison, since WSL is likely using inferior drivers for filesystem access to promote integration with the host. |
    | Performance Overhead Among Three Hypervisors: An Experimental Study using Hadoop Benchmarks | Compares Xen, KVM, and an unnamed commercial hypervisor, simply referred to as CVM. | pdf | |
    | Hypervisors Comparison and Their Performance Testing (2018) | Compares Hyper-V, XenServer, and vSphere. | springer.com, html | |
    | Performance comparison between hypervisor- and container-based virtualizations for cloud users (2017) | Compares Xen, native, and Docker. Docker and native have negligible performance differences. | ieee, html | |
    | Hypervisors vs. Lightweight Virtualization: A Performance Comparison (2015) | Docker vs LXC vs native vs KVM. Containers have near-identical performance; KVM is only slightly slower. | ieee, html | |
    | A component-based performance comparison of four hypervisors (2015) | Hyper-V vs KVM vs vSphere vs Xen. | ieee, html | |
    | Virtualization Costs: Benchmarking Containers and Virtual Machines Against Bare-Metal (2021) | VMware Workstation vs KVM vs Xen. | springer, html | Most rigorous and in-depth on the list. Workstation, not ESXi, is tested. |

    The short version is: it depends, and they can fluctuate slightly on certain tasks, but they are mostly the same in performance.

    default PROXMOX and XCP-NG installations.

    What do you mean by hardening? Are you talking about hardening the management operating system (Proxmox's Debian or XCP-NG's RHEL-like base), or the hypervisor itself?

    I agree with the other poster about CIS hardening and generally hardening the base operating system used. But I will note that XCP-NG is designed more as an "appliance", and you're not really supposed to touch it. I wouldn't be surprised if it's immutable nowadays.

    For the hypervisor itself, it depends on how secure you want things, but I've heard that Microsoft Azure datacenters disable hyperthreading because it becomes a security risk. In fact, Spectre/Meltdown can be mitigated by disabling hyperthreading. Of course, there are other ways to mitigate those two vulnerabilities, but by disabling hyperthreading you can eliminate that entire class of vulnerabilities, at the cost of performance.
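    On recent Linux kernels you can check (and, with root, toggle) SMT at runtime through sysfs. A quick sketch, assuming the kernel exposes the `smt/control` knob (4.19+ on x86):

    ```shell
    # Prints "on", "off", "forceoff", "notsupported", or "notimplemented".
    cat /sys/devices/system/cpu/smt/control 2>/dev/null || echo "smt knob not exposed"

    # Turning it off at runtime requires root (commented out on purpose):
    # echo off | sudo tee /sys/devices/system/cpu/smt/control
    ```

    Disabling it this way survives only until reboot; `nosmt` on the kernel command line makes it permanent.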

  • Now, I don't write code, so I can't really tell you if this is the truth or not, but:

    I've heard from software developers on the internet that OpenCL is much more difficult and less accessible to write than CUDA. CUDA is easier to write, and thus gets picked up and used by more developers.

    In addition to that, someone in this thread mentions CUDA "sometimes" having better performance, but I don't think it's only sometimes. I think that, due to tensor cores (which are really good at neural nets and matrix multiplication), CUDA has vastly better performance when taking advantage of those hardware features.

    Tensor cores are not Nvidia-specific, but Nvidia is the "most ahead": they have the most of them in their GPUs, and probably most importantly, CUDA only supports Nvidia, and therefore, by extension, their tensor cores.

    There are alternative projects; Leela Chess Zero, for example, mentions TensorFlow for Google's Tensor Processing Units, but those aren't anywhere near as popular due to performance and software support.

  • AFIK it’s only NVIDIA that allows containers shared access to a GPU on the host.

    This cannot be right. I'm pretty sure that it is possible to run OpenCL applications in containers that are sharing a GPU.

    I should test this if I have time. My plan was to use a distrobox container, since that shares the GPU by default, and run something like lc0 to see if OpenCL acceleration works.

    Now where is my remindme bot? (I won't have time).

  • I despise the way Canonical pretends discourse forum posts by their team members* are documentation.

    I've noticed they have been a bit better lately and have migrated many of the posts into their documentation, but it seems they are doing it again.

    As this is developed, we will update this post to link to the new documentation and feature release notes.

    Pro tip: you could have just made the documentation directly, with the content of this post. Or maybe a blog post. But please stop with the forum posts; they are very confusing for people not used to these... unique locations.

    *Not that people can easily find this out, since there's no indication that the forum post is anything other than just another post by a rando. Actually, I'm just guessing here, based on the quoted reply; for all I know, this could be a post by someone unrelated to Canonical. The account is 3 months old, and the post itself is identical to a regular forum post from a regular forum member...

  • It actually is a language issue.

    Although rust can dynamically link with C/C++ libraries, it cannot dynamically link with other Rust libraries. Instead, they are statically compiled into the binary itself.

    But the GPL interacts differently with static linking than with dynamic linking. If you make a static binary with a GPL library or GPL code, your program must be GPL. If you dynamically link a GPL library, your program doesn't have to be GPL. It's partially because of this that the vast majority of Rust programs and libraries are permissively licensed: a GPL-licensed Rust library would see much less use than a GPL-licensed C library, because corporations wouldn't be able to build proprietary code on top of it. Not that I care about that, but library makers often do.

    https://en.wikipedia.org/wiki/GNU_General_Public_License#Libraries — it's complicated.

    EDIT: Nvm I'm wrong. Rust does allow dynamic linking

    Hmmmm. But it seems that people really like to compile static Rust binaries anyway, due to their portability across Linux distros.

    EDIT2: Upon further research, it seems that Rust's dynamic linking lacks a "stable ABI" compared to other languages such as Swift or C. So I guess we are back to "it is a language issue". Thankfully, this seems easier to fix than "Rust doesn't support dynamic linking at all."

    Edit3: Nvm, I'm very, very wrong. The GPL does require that programs using GPL libraries, even dynamically linked ones, be GPL. It's the LGPL that doesn't.
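    As an aside, you can check how any given binary is actually linked with `file` and `ldd`; a quick sketch using `/bin/ls` as a stand-in (any ELF binary works):

    ```shell
    # "dynamically linked" in file's output means the runtime loader resolves
    # shared libraries at startup; "statically linked" means everything is baked in.
    file -b /bin/ls

    # ldd lists the shared libraries the dynamic loader would map in.
    ldd /bin/ls
    ```

    A typical Rust release binary shows up as dynamically linked against libc, while the Rust crates it uses are baked in statically.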

  • 
    [moonpie@osiris ~]$ du -h $(which filelight)
    316K    /usr/bin/filelight

    K = kilobytes.

    [moonpie@osiris ~]$ pacman -Ql filelight | awk '{print $2}' | xargs du | awk '{print $1}' | paste -sd+ | bc
    45347740

    45347740 bytes is about 43 MiB. That is to say, the entire install of filelight is only around 43 MiB.

    KDE packages have many dependencies, which cause the packages themselves to be extremely tiny. By sharing a ton of code via libraries, they save a lot of space.
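    One caveat if you reproduce that sum: plain `du` (at least GNU du) reports sizes in 1 KiB blocks, not bytes, so for byte arithmetic `du -b` (apparent size) is the safer flag. A quick sanity check with a file of known size, assuming GNU coreutils:

    ```shell
    # Make a file that is exactly 1000 bytes.
    head -c 1000 /dev/zero > /tmp/sizecheck

    # -b reports apparent size in bytes.
    du -b /tmp/sizecheck | awk '{print $1}'   # -> 1000

    # Default output is 1 KiB blocks of actual disk usage (often 4 here).
    du /tmp/sizecheck | awk '{print $1}'
    ```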

  • The documentation has long since been changed.

    Note that the anon user is able to become root without a password by default, as a development convenience. To prevent this, remove anon from the wheel group and it will no longer be able to run /bin/su.

    https://github.com/SerenityOS/serenity/commit/a2a6bc534868773b9320ec3ca7399283cf7a375b (this commit seems to have also switched to gender-neutral language in other parts of the documentation and comments).

    Original drama: https://github.com/SerenityOS/serenity/pull/6814

  • I always wonder how Docker works on macOS with a more UNIX-style kernel than Linux

    It doesn't. macOS also uses a virtual machine for Docker.

    but is it really that hard to do Docker/OCI out of Linux?

    Yes. The runtimes containers use depend on cgroups, seccomp, namespaces, and a few other Linux-kernel-specific features.

    You could implement a Wine-like project to run the Linux binaries that containers contain, and then add some sandboxing to make it a proper container, but no: virtual machines or virtual-machine container runtimes* are easier.

    Linuxulator, a FreeBSD project, does the above.

    https://people.freebsd.org/~dch/posts/2024-12-04-freebsd-containers/

    *These are much lighter than a normal VM; I'll need to check if this is what macOS does. I know for a fact Docker on Windows uses a full Linux VM, though.
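    That kernel dependence is easy to see directly: every Linux process carries its namespace memberships under `/proc/<pid>/ns`, and container runtimes work by creating new entries there. There is no equivalent tree on macOS or Windows, hence the VM:

    ```shell
    # Each symlink here is one namespace this shell belongs to
    # (mnt, pid, net, uts, ipc, user, cgroup, ...).
    ls /proc/self/ns

    # The inode number identifies the namespace; processes in the same
    # container share these values.
    readlink /proc/self/ns/pid
    ```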

  • Firstly, you are probably going to need a PDF version of your resume. I've tried to get people to accept a website resume, but they refuse and explicitly want a PDF. I link to a PDF on my website because of this; do something similar.

    Your notes are very in depth, and organized.

    However, I agree with the other commenters about the overall site design and (over)use of JS. The cropping and spacing are poor overall, which only harms the site design further, given the already bad organization.

    Another thing is the icons. They are big and unevenly spaced. Use something like Font Awesome instead (though probably not exactly that, since it doesn't have everything; you may end up having to find SVG logos of the various things yourself). If you are trying to do web development, your portfolio must look cleaner. For example, in your Bootstrap section, the box holding the icon has sharp corners that extend outward past Bootstrap's rounded corners.

    I do disagree with one of the other commenters on the use of the term "language"; I like it. Especially for a resume, brevity is better. Overall, I think you should compress your site down rather than having so much wasted space.