The US was the overwhelming choice (24% of respondents) for the country that represents the greatest threat to peace in the world today, followed by Pakistan (8%), China (6%), and North Korea, Israel, and Iran (5% each). Respondents in Russia (54%), China (49%), and Bosnia (49%) were the most fearful of the US as a threat.
I often want to know the status code of a curl request, but I don't want that extra information to mess with the response body that it prints to stdout.
What to do?
Render an image instead, of course!
curlcat takes the same params as curl, but it uses iTerm2's imgcat tool to draw an "HTTP Cat" of the status code.
It even sends the image to stderr instead of stdout, so you can still pipe curlcat to jq or something.
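For example, this pipes the response body straight through to jq while the cat image lands on stderr (the URL is just a placeholder):

  curlcat https://api.example.com/things | jq '.'

Here's the script: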
#!/usr/bin/env zsh

# Run curl with the caller's arguments, appending the HTTP status code
# on its own line after the response body, and capture it all in a temp file.
stdoutfile=$( mktemp )
curl -sw "\n%{http_code}" "$@" > "$stdoutfile"
exitcode=$?

if [[ $exitcode == 0 ]]; then
  # The status code is the last line of the capture; fetch (and cache)
  # the matching HTTP Cat image if we don't already have it.
  statuscode=$( tail -1 "$stdoutfile" )
  if [[ ! -f "$HOME/.httpcat$statuscode" ]]; then
    curl -so "$HOME/.httpcat$statuscode" "https://http.cat/$statuscode"
  fi
  # Draw the cat on stderr so stdout stays clean for piping.
  imgcat "$HOME/.httpcat$statuscode" 1>&2
fi

# Print everything but the status-code line (i.e., the response body) to stdout.
ghead -n -1 "$stdoutfile"
exit $exitcode
Note: This is macOS-specific as written (imgcat ships with iTerm2, and ghead is GNU head from Homebrew's coreutils), but as long as your terminal supports images, you should be able to adapt it just fine.
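For instance, if you use kitty, I believe the adaptation is a one-line swap to its icat kitten; this is an untested sketch, assuming kitty is installed:

  # Replace the imgcat line; the image still goes to stderr.
  kitty +kitten icat "$HOME/.httpcat$statuscode" 1>&2

You'd also want to replace ghead with plain head on Linux, where head is GNU head and supports negative line counts (BSD head, as on macOS, does not).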
Overall, my point was not that scraping is a universal moral good, but that legislating tighter boundaries for scraping in an effort to curb AI abuses is a bad approach.
We have better tools to combat this, and placing new limits on scraping will do collateral damage that we should not accept.
And at the very least, the portfolio value of Disney’s IP holdings should not be the motivating force behind AI regulation.
I'd say that scraping as a verb implies an element of intent. It's about compiling information about a body of work, not simply making a copy, and therefore if you can accurately call it "scraping" then it's always fair use. (Accuse me of "No True Scotsman" if you would like.)
But since it involves making a copy (even if only a temporary one) of licensed material, there's the potential that you're doing one thing with that copy that is fair use and another that isn't.
An archive doesn't only contain information about the work; it also contains a copy (or copies) of the work itself. You could argue (and many have) that archive.org only claims to be about preserving an accurate history of a piece of content, but functionally serves mostly as a way to distribute unlicensed copies of that content.
I don't personally think that's a justified accusation, because I think they do everything in their power to be as fair as possible, and there's a massive public benefit to having a service like this. But it does illustrate how you could easily have a scenario where the stated purpose is fair use but the actual implementation is not, even though the infringing material was "scraped" in the first place.
But in the case of gen AI, I think it's pretty clear that the residual data from the source content is much closer to a linguistic analysis than to an internet archive. So it's firmly in the fair use category, in my opinion.
Edit: And to be clear, when I say it's fair use, I only mean in the strict sense of following copyright law. I don't mean that it is (or should be) clear of all other legal considerations.
I want generative AI firms to get taken down. But I want them to be taken down for the right reasons.
Their products are toxic to communication and collaboration.
They are the embodiment of a pathology that sees humanity — what they might call inefficiency, disagreement, incoherence, emotionality, bias, chaos, disobedience — as a problem, and technology as the answer.
Dismantle them on the basis of what their poison does to public discourse, shared knowledge, human connection, mental well-being, fair competition, privacy, labor dignity, and personal identity.
Not because they didn’t pay the fucking Mickey Mouse toll.
On a Mac, you can type opt+hyphen for an en dash or shift+opt+hyphen for an em dash.