
Posts: 23 · Comments: 181 · Joined: 4 yr. ago

  • "Having carefully considered all these matters, we are unanimously of the opinion that the evidence was sufficiently cogent to support the findings that the appellant murdered four Afghan men and to the extent that we have discerned error in the reasons of the primary judge, the errors were inconsequential," an executive summary from the court said.

  • QwQ-32B-Q6_K:

    Amt:3313, Process:107.21s (1.74T/s), Generate:3463.16s (0.96T/s), Total:3570.37s
    ::: spoiler QwQ-32B-Q6_K - Thoughts (11010 characters)

    <think>

    Okay, let's tackle this problem step by step. The user wants a script that sorts all .gguf files into subdirectories based on their publisher, which is derived from the URLs in the dl*.txt files. Hmm, first, I need to figure out how to extract the publisher from each URL.

    The example URL given is https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf?download=true, and the publisher here is 'unsloth'. Looking at the URL structure, the publisher is the first path component after the domain. So, after 'huggingface.co', the next part is the publisher.

    So, the plan is: for each .gguf file in the current directory, find its corresponding URL in any of the dl*.txt files. Then parse that URL to get the publisher, create a directory if it doesn't exist, and move the file there. But wait, how do I link each .gguf file to its URL? The filenames might not directly correspond to the URLs, so maybe the filenames are part of the URL paths?

    Alternatively, perhaps each line in the dl*.txt files contains a URL that ends with the filename. So, the filename is the last part of the URL before any query parameters. For example, in the example URL, the filename is Qwen3-30B-A3B-Q6_K.gguf. So, the script can look through all the dl*.txt files, parse each URL, extract the filename from the URL, and then check if that filename exists in the current directory. If it does, then we can get the publisher from that URL and move the file accordingly.

    Wait, but the problem states that the URLs are in the dl files, and each .gguf file was downloaded from one of those URLs. So each .gguf file corresponds to exactly one URL in one of the dl files. Therefore, the script needs to:

    1. Collect all the URLs from all dl*.txt files, ignoring comments (lines starting with #).
    2. For each URL, extract the filename (the part after the last '/' before any '?') and the publisher (the first path component after the domain).
    3. Create a dictionary mapping filenames to their respective publishers.
    4. Iterate over all .gguf files in the current directory. For each file:
       a. Check if its name is in the filename-publisher map.
       b. If yes, create the publisher directory if it doesn't exist.
       c. Move the file into that directory.
       d. If not found, maybe log a warning or skip?

    But first, how to parse the URLs correctly. Let's think about splitting the URL. The example URL's path after the domain is '/unsloth/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf'. The first path component after the domain is 'unsloth', which is the publisher. So splitting the path by '/' and taking the second element (since the first is empty due to leading '/') would give the publisher.

    For the filename, the last part of the path before any query parameters. So split the URL on '?' to get the path part, then split on '/' and take the last element.

    Potential edge cases: URLs with different domains? The example uses huggingface.co, but maybe others could be present. But the problem says "the publisher can only be found in the URL used to originally download each .gguf file", so the structure should be consistent. So the first path component after the domain is always the publisher.

    Now, steps for the script:

    First, process all dl*.txt files. For each file:

    • Read each line.
    • Ignore lines starting with #.
    • For each valid URL line:
      • Parse the URL to get the publisher and filename.

    How to do this in bash? Maybe using parameter expansion or cut.

    Alternatively, using awk or sed. Let's think of code snippets.

    Parsing the publisher:

    Given a URL like 'https://huggingface.co/unsloth/...', the path starts after 'huggingface.co/'. So splitting the URL into parts after the domain.

    Maybe using cut:

    domain_part=$(echo $url | cut -d '/' -f 3)   # gets huggingface.co
    path_part=$(echo $url | cut -d '/' -f 4-)    # gets everything after the domain

    Then the first element of path_part is the publisher. So split path_part on '/' again.

    publisher=$(echo $path_part | cut -d '/' -f 1)

    Wait, let's see:

    For the example URL:

    Original URL: https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf?download=true

    Breaking down:

    After splitting by '/', the components are:

    0: https:
    1: (empty)
    2: huggingface.co
    3: unsloth
    4: Qwen3-30B-A3B-GGUF
    5: resolve
    6: main
    7: Qwen3-30B-A3B-Q6_K.gguf?download=true

    Wait, actually, the URL is split into parts by the slashes. The third component (index 2 if starting from 0) is huggingface.co. The fourth component (index 3) is unsloth, which is the publisher.

    So, to get the publisher, we can take the 4th field when splitting by '/'.

    Wait, perhaps using:

    publisher=$(echo $url | awk -F/ '{print $4}')

    Yes, because:

    The URL is split into fields by '/', so $4 would be 'unsloth'.

    Similarly, for the filename:

    The filename is the last path component before any query string: split the URL on '?' and take the first part, then split that on '/' and take the last element. Alternatively, using basename:

    filename=$(basename $(echo $url | cut -d '?' -f1))

    Yes, that's better. For example, the URL part before ? is:

    https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf

    basename of that is "Qwen3-30B-A3B-Q6_K.gguf".

    So putting it all together, for each URL line:

    publisher=$(echo "$url" | awk -F/ '{print $4}') filename=$(basename "$(echo "$url" | cut -d '?' -f1)")
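    As a quick sanity check, running those two lines against the example URL should give:

    url='https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf?download=true'
    echo "$url" | awk -F/ '{print $4}'              # prints: unsloth
    basename "$(echo "$url" | cut -d '?' -f1)"      # prints: Qwen3-30B-A3B-Q6_K.gguf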

    Then, we can build an associative array (in bash) mapping filenames to publishers.

    Once all URLs are processed, we can loop over the .gguf files in the current directory. For each file:

    • Check if its name is in the filename-publisher map.
    • If yes, create the directory if needed, then move.
    • Else, perhaps skip or report an error.

    Now, in bash, associative arrays can be used. Let's structure the code:

    First, read all the dl files.

    declare -A publisher_map

    for dlfile in dl*.txt; do
        while IFS= read -r line; do
            # Ignore comments and empty lines
            if [[ $line == \#* || -z $line ]]; then
                continue
            fi
            url="$line"

            # Extract publisher
            publisher=$(echo "$url" | awk -F/ '{print $4}')
            # Extract filename
            fname=$(basename "$(echo "$url" | cut -d '?' -f1)")

            # Store in the map
            publisher_map["$fname"]=$publisher
        done < "$dlfile"
    done

    Then, process each .gguf file:

    for file in *.gguf; do
        if [[ ! -f $file ]]; then
            continue
        fi
        fname=$(basename "$file")

        if [[ -v publisher_map[$fname] ]]; then
            publisher=${publisher_map[$fname]}
            dir="$publisher"
            mkdir -p "$dir"
            mv -- "$file" "$dir/"
            echo "Moved $file to $dir"
        else
            echo "Warning: $file has no corresponding URL in dl files. Skipping."
        fi
    done

    Wait, but in bash, checking if a key exists in an associative array uses -v, so that part is okay.

    But need to make sure that the filenames are exactly as they appear in the URL's path. For example, if the URL's filename has a different case, or maybe trailing spaces? Probably need to ensure exact matches.

    Another thing: what about files that are not in any of the dl files? The problem says "all .gguf files", but perhaps some files were not downloaded via these URLs? The user might want those to be skipped, or maybe an error. But according to the problem statement, they need to be sorted into subdirectories based on their original download URLs, so files not present in any dl file ca [...trimmed because lemmy won't handle so many chars]
    :::
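    Putting the snippets from the spoiler together, a minimal end-to-end sketch might look like the following. It assumes huggingface.co-style URLs where the publisher is the first path component after the domain, and dl*.txt files listing one URL per line with '#' comment lines:

    #!/usr/bin/env bash
    # Sketch: sort .gguf files into per-publisher subdirectories,
    # using the URLs recorded in dl*.txt to work out each file's publisher.
    set -euo pipefail

    declare -A publisher_map

    # Build a filename -> publisher map from the dl*.txt files
    for dlfile in dl*.txt; do
        [[ -e $dlfile ]] || continue
        while IFS= read -r line; do
            [[ -z $line || $line == \#* ]] && continue
            publisher=$(echo "$line" | awk -F/ '{print $4}')
            fname=$(basename "$(echo "$line" | cut -d '?' -f1)")
            publisher_map["$fname"]=$publisher
        done < "$dlfile"
    done

    # Move each .gguf file into its publisher's subdirectory
    for file in *.gguf; do
        [[ -f $file ]] || continue
        if [[ -v publisher_map[$file] ]]; then
            mkdir -p "${publisher_map[$file]}"
            mv -- "$file" "${publisher_map[$file]}/"
            echo "Moved $file to ${publisher_map[$file]}/"
        else
            echo "Warning: no URL found for $file, skipping." >&2
        fi
    done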

  • The replacement battery you bought in 2017 was the last of the genuine stock for that 2012 ThinkPad model. Now only poor-quality aftermarket batteries are available. Maybe just stick with the existing genuine battery -- its 47-second runtime should be enough time for AC loss to trigger a custom script that hibernates the machine (sketched below).
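    A rough sketch of that kind of script, assuming the adapter shows up as /sys/class/power_supply/AC (the name varies between models; check /sys/class/power_supply/) and that something like a udev rule or acpid hook runs it when mains power changes:

    #!/usr/bin/env bash
    # Hypothetical ac-loss-hibernate.sh: hibernate as soon as mains power is lost.
    AC_ONLINE=/sys/class/power_supply/AC/online

    # 'online' reads 0 when the adapter is unplugged (running on battery)
    if [[ -r $AC_ONLINE && $(cat "$AC_ONLINE") -eq 0 ]]; then
        # With only ~47 seconds of battery, hibernate immediately rather than waiting.
        systemctl hibernate
    fi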

  • These are Australian elections -- they are 100% paper ballots.

    https://www.aec.gov.au/Voting/counting/

    The Starlink thing is just a backup link for communicating the election-night preliminary counts recorded by election staff at the booths. The ballots are then transported to counting centres for the official count. Full legal results aren't known for a couple of weeks.

  • Whittaker's phrasing is ambiguous. Could be read as expressing one of a number of things:

    • The paper/article is misleading and distracting from meaningful threats to privacy.
    • That the original tweet is using misleading accusations to distract us from the article's revelations of meaningful threats to privacy.
    • That Appelbaum's authorship of the research is an unwanted negative association that undermines the attention deserved by the threats documented in the paper, threats which are misleadingly justified as necessary by e.g. governments.

    It's difficult to know without a better understanding of Whittaker's position on the various matters at hand, so I don't know.

  • Fuck Cars @lemmy.ml: Push to lower driving age in Victoria as P-platers complain of job hurdles

  • Australia @aussie.zone: U16 Social Media Ban - Senate 1hr debate before the vote, some time tonight on the livestream

  • Fuck Cars @lemmy.ml: 97yo woman appeals licence suspension after amassing hundreds of penalty points in driving tests

  • Privacy @lemmy.ml: The [Australian] government plans to ban under-16s from social media platforms.

  • Australia @aussie.zone: 'It's going to scar them': NT to become first jurisdiction to lower criminal age of responsibility

  • Technology @lemmy.ml: Australia’s 3G Shutdown - Telcos to Block Working 4G/5G Phones!

  • Technology @lemmy.world: Australia’s 3G Shutdown - Telcos to Block Working 4G/5G Phones!

  • Australia @aussie.zone: Rezoning = private profit?! (Couple loses property fight after highway swallows $5.5 million dream)

  • Fuck Cars @lemmy.ml: Carbrained problems in narrow (wide) streets at the edge of suburban sprawl

  • World News @lemmy.ml: Australian PM First Western Leader to Be Referred to ICC as 'Accessory to Genocide in Gaza'

  • Australia @aussie.zone: Honest Government Ad | Whistleblower Protection Laws - The Juice Media

  • Permacomputing @slrpnk.net: Firefox is a waste of global resources. Move everything back to TTY.

  • Memes @lemmy.ml: South Africa is the Legal Arm of _____.

  • World News @lemmy.world: US officials monitored pro-Assange protests in Australia for ‘anti-US sentiment’, documents reveal

  • Privacy @lemmy.ml: Panquake releases source code

  • Australia @aussie.zone: The US-led Military Buildup in Australia Is Like Nothing We’ve Seen Before

  • World News @lemmy.ml: The US-led Military Buildup in Australia Is Like Nothing We’ve Seen Before

  • Fuck Cars @lemmy.ml: medication rule

  • Fuck Cars @lemmy.ml: Airbus books record India orders, confirms higher deliveries

  • Fuck Cars @lemmy.ml: Daily alarm checks