e0qdk (e0qdk@reddthat.com) · 1 post · 167 comments · joined 2 yr. ago

Games need to figure out what color to show for each pixel on the screen. Imagine shooting lines out from your screen into the game world and seeing what objects they run into. Take whatever color that object is supposed to be and put it on the screen. That's the basic idea.
To make it look better, you can repeat the process each time one of the lines hits an object. Basically, when a line hits an object, make more lines -- maybe a hundred or a thousand or whatever the programmer picks -- and then see what those lines run into as they shoot out from the point in all directions. Mix the colors of the objects they run into and now that becomes the color you put on screen.
You can repeat that process again and again with more and more bounces. As you add more and more bounces it gets slower though -- since there are so many lines to keep track of!
When you've done as many bounces as you want to do then you can shoot out lines one last time to all the lights in the game. If there is an object in the way blocking a light, the color for the object you're trying to figure out will be darker since it's in a shadow!
Figuring out colors like that -- by bouncing lines off objects repeatedly -- is an old and simple idea... but it's hard to do quickly. So, most games until very recently did not work that way. They used other clever tricks instead that were much faster, but made it hard to draw reflections and shadows. Games using those other techniques usually did not look as good -- but you could actually play them on old computers.
I suggest using H.264 instead of H.265 for better compatibility. The video doesn't play in my browser, and I think that's likely why. The audio works but the video is just black in my browser. (I can play it with another player like VLC, of course.)
The registrar appears to have ignored the response from itch.io (source with additional details). I'm not sure what that means for them legally speaking -- but not following the DMCA process correctly probably opens them up to being sued for damages.
It's not a particular protocol right now, but it would be a URI that refers to a specific resource. A protocol could also be defined -- e.g. a restricted subset of HTTPS that returns JSON objects following a defined schema or something like that -- but the point really is that I want to be able to refer to a thread not a webpage. I don't think that's a silly thing to want to be able to do.
Right now, I can only effectively link to a post or thread as rendered by a specific interface -- e.g. for me, this thread is https://old.reddthat.com/post/30710789 using reddthat's mlmym interface. That's probably not how most users would like to view the thread if I want to link it to them. Any software that recognizes the new URI scheme could understand that I mean a particular thread rather than how it's rendered by a particular web app, and go fetch it and render it appropriately in their client if I link it. (If current clients try to be clever about HTTP links, it becomes ambiguous if I mean the thread as rendered into a webpage in specific way or if I actually meant the thread itself but had to refer to it indirectly; that causes problems too.)
I don't think `lemmy://` is necessarily the best prefix -- especially if mbin, piefed, etc. get on board -- just that I would like functionality like that very much, and that something like a lemmy URI scheme (or whatever we can get people to agree on) might be a good way to accomplish it.
Not that I'm opposed, but I'm not sure if it's practical to make a fediverse-wide link that's resolvable between platforms since there are so many differences and little incompatibilities and developers who don't directly interact with each other -- or even know each other exist!
Even if it isn't though, it would be nice to be able to do something like `lemmy://(rest of regular url)` to indicate data from a lemmy(-compatible) server that should be viewable by all other lemmy clients without leaving your particular client and having to open some other website.
Frankly, the only sane option is an "Are you over the age of (whatever is necessary) and willing to view potentially disturbing adult content?" style confirmation.
Anything else is going to become problematic/abusive sooner or later.
I just download the offline installers from GOG and keep those on my NAS organized into folders per game until I want to install them. Not fancy, but it works fine for me.
Try adding some prints to stderr through my earlier test program then and see if you can find where it stops giving you output. Does output work before `curl_easy_init`? After it? Somewhere later on?
Note that I did update the program to add the line with `CURLOPT_ERRORBUFFER` -- that's not strictly needed, but might provide more debug info if something goes wrong later in the program. (Forgot to add the setup line initially despite writing the rest of it... 🤦‍♂️)
You could also try adding `curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);` to get it to explain more details about what it's doing internally, if you can get it to print output at all.
Does hello world work? You should've gotten at least some console output.
```c
#include <stdio.h>

int main()
{
    fprintf(stderr, "Hello world\n");
    return 0;
}
```
As a sanity check, does this work?
```c
#include <curl/curl.h>
#include <stdio.h>
#include <stdlib.h>

size_t save_to_disk(char* ptr, size_t size, size_t nmemb, void* user_data)
{
    /* according to curl's docs size is always 1 */
    FILE* fp = (FILE*)user_data;
    fprintf(stderr, "got %zu bytes\n", nmemb);
    return fwrite(ptr, size, nmemb, fp);
}

int main(int argc, char* argv[])
{
    char errbuf[CURL_ERROR_SIZE];
    FILE* fp = NULL;
    CURLcode res;

    CURL* curl = curl_easy_init();
    if(!curl)
    {
        fprintf(stderr, "Failed to initialize curl\n");
        return EXIT_FAILURE;
    }

    fp = fopen("output.data", "wb");
    if(!fp)
    {
        fprintf(stderr, "Failed to open file for writing!\n");
        return EXIT_FAILURE;
    }

    curl_easy_setopt(curl, CURLOPT_URL, "https://www.wikipedia.org/");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, save_to_disk);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, fp);
    curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
    errbuf[0] = 0; /* set error buffer to empty string */

    res = curl_easy_perform(curl);

    if(fp)
    {
        fclose(fp);
        fp = NULL;
    }
    curl_easy_cleanup(curl);

    if(res != CURLE_OK)
    {
        fprintf(stderr, "error code   : %d\n", res);
        fprintf(stderr, "error buffer : %s\n", errbuf);
        fprintf(stderr, "easy_strerror: %s\n", curl_easy_strerror(res));
        return EXIT_FAILURE;
    }
    else
    {
        fprintf(stderr, "\nDone\n");
        return EXIT_SUCCESS;
    }
}
```
That should write a file called output.data with the HTML from https://www.wikipedia.org/ and print out the number of bytes each time the write callback receives data for processing.
On my machine, it prints the following when it works successfully (byte counts may vary for you):
```
got 13716 bytes
got 16320 bytes
got 2732 bytes
got 16320 bytes
got 16320 bytes
got 128 bytes
got 16320 bytes
got 16320 bytes
got 1822 bytes

Done
```
If I change the URL to nonsense instead to make it fail, it prints text like this on my system:
```
error code   : 6
error buffer : Could not resolve host: nonsense
easy_strerror: Couldn't resolve host name
```
Edit: corrected missing line in source (i.e. added line with CURLOPT_ERRORBUFFER which is needed to get extra info in the error buffer on failure, of course)
Edit 2: tweaks to wording to try to be more clear
You might consider using Google Takeout to export the emails to an mbox file, and then importing that into your new mail server.
Magnitude 6.7 earthquake. Woke up to it shaking my bed violently in my dorm room. (Boarding school) Thankfully, I didn't have anything above me that could fall, but some of the other students kept books in the shelves above their beds. Suffice it to say they got an even ruder awakening than I did...
There was a big aftershock a few minutes later -- just after I'd gotten the hell out of the building, basically -- and smaller aftershocks for days afterwards.
It put a big crack in the floor of my dorm and everyone who lived there had to stay outside all day until the administration declared it safe for us to re-enter.
That was coincidentally the same day as a school festival and I'd spent the evening before working with my classmates converting the art room into a haunted house. I never got to see the mess, but whatever happened in there was so bad the room was unusable for months. Most of the rest of the festival (e.g. outdoor stalls and such) was still able to be run though, so they carried on with the parts they could. It was surreal.
Did you flip a power switch on the PSU at some point, perhaps? (Done that one a few times myself...)
I was getting error messages when I first saw your post and checked a few minutes ago, but it looks like it's back online now.
There is also !communitypromo@lemmy.ca
It's the Esperanto word for "forge", according to the FAQ.
According to their FAQ, they say it's supposed to be pronounced /forˈd͡ʒe.jo/ and provide an audio clip: https://forgejo.org/static/forgejo.mp4
To me that sounds like "for-jay-oh".
I used to see bots posting comments that were copied verbatim from Hacker News -- which was really obvious because of the "[1]" style footnoting they do on HN that rarely made sense on reddit where you could just use markdown to add descriptive links inline.
I reported a whole bunch of those, but no one ever seemed to do anything about them, and I eventually gave up. Been over a year since I've interacted significantly with reddit though, and I'm similarly in the "who knows what they're doing now" camp. Wouldn't surprise me if there are bots reposting comments scraped from lemmy to karma farm on reddit now too.
The point is deterrence. The Congressman is basically saying "Fuck off already, or ELSE!"
They're announcing that they will pursue a MAD-style defense policy, and MAD doesn't work unless you make it publicly known that you can and will retaliate.