
  • I haven't used it for a while, but the last time I was using Lakka I don't think it had been ported to Pi yet. It worked great and was very much for PCs. I don't know about interfaces though; my install booted straight into RetroArch which isn't the slickest-looking thing but worked fine for me.

    Make sure to check compatibility lists for the emulators you want to use. You may be surprised by how many games don't run/can't be finished/have major glitches on later systems like PS2, PS3, and GameCube. Also, there are no PS3 RetroArch cores, so you'll need to use the standalone version of RPCS3.

  • I love low-level stuff and this still took me a little while to break down, so I'd like to share some notes on the author's code snippet that might help someone else.

    The function morse_decode is meant to be called iteratively by another routine, once per morse "character" c (dot, dash, or null) in a stream, while feeding its own output back into it as state. As long as the function returns a negative value, that value represents the next state of the machine, and the morse stream hasn't yet been resolved into an output symbol. When the return value is positive, that represents the decoded letter, and the next call to morse_decode should use a state of 0. If the return value is 0, something has gone wrong with the decoding.

    state is just a negated index into the array t, which is actually two arrays squeezed into one. The first 64 bytes are a binary heap of bytes in the format nnnnnnlr, each corresponding to one node in the morse code trie. l and r are single bits that represent the existence of a left or right child of the current node (i.e. whether reading a dot or dash in the current state leads to another valid state). nnnnnn is a 6-bit value that, when shifted appropriately and added to 63, becomes an index into the second part of the array, which is a list of UTF-8/ASCII codes for letters and numbers for the final output.
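
    To make that calling protocol concrete, here's a minimal driver sketch in C. morse_decode is the author's function; decode_letter and its input format are my own assumptions for illustration.

        int morse_decode(int state, int c);  /* the author's function */

        /* Hypothetical wrapper: decode one morse letter, e.g. ".-" -> 'A'.
         * Feeds each '.'/'-' into the state machine, then the terminating
         * '\0' to resolve the letter. Returns 0 if decoding failed. */
        int decode_letter(const char *symbols)
        {
            int state = 0;
            do {
                state = morse_decode(state, *symbols);
                if (state == 0)
                    return 0;          /* invalid morse sequence */
            } while (*symbols++);      /* the final '\0' resolves the letter */
            return state;              /* now positive: the decoded letter */
        }

    Calling decode_letter(".-") should return 'A', with every intermediate return value staying negative until the null byte arrives.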

  • If you want to emulate those systems, then yes, you're going to need a fairly beefy computer. You could, as others have suggested, buy a good secondhand system and upgrade it with a GPU and more/better RAM.

    But I want to pass on a warning as someone who also loves emulation and wishes they could "have everything in one place": a lot of emulators just aren't there yet, but some people are eager to kid themselves and others that they are.

    16-bit systems and before typically have outstanding emulators available. Some systems from the next couple of generations are also very reliable (e.g. PS1, Dreamcast), while others mostly work well with minimal tinkering and only a small handful of exceptions (e.g. N64, Saturn). But after those, the reliability of emulators drops off steadily. Even the venerable PCSX2, for example, will run almost every known PS2 game in some fashion, but many games outside the biggest hits still have problems that make them effectively unplayable. And I don't mean picky things like, "Three notes in the bassline on this background music are slightly off," I mean, "The walls aren't rendered in most areas."

    I really recommend having a good look at the compatibility lists for emulators you're interested in before you dive too deep down this hole. It's one thing to have a powerful PC already and think, "Why not give it a go?", but another thing to build a new (to you) PC specifically for emulating these systems. I suspect that you may have been spoiled a bit by the fact that even the RP4 only has enough power to run those more stable emulators for older systems.

  • It always bothered me that Ryo has a Saturn in a story set before the Mega Drive had come out. Even the Mark III (later to be rebranded as the Master System) was released a mere 13 months prior to the murder of Ryo's father.

  • Assuming C/C++, dare we even ask what this teacher uses instead of switch statements? Or are her switch statements unreadable rats' nests of extra conditions?

    This is a good life lesson. We're all idiots about certain things. Your teacher, me, and even you. It's even possible to be a recognized expert in a field yet still be an idiot about some particular thing in that field.

    Just because some people use a screwdriver as a hammer and risk injuring themselves and damaging their work, that's not a good reason to insist that no-one should ever use a screwdriver under any circumstances, is it?

    Use break statements when they're appropriate. Don't use them when they're not. Learn the difference by studying code that many other people recommend, like popular open-source libraries and tutorials. If there's a preponderance of break statements in your code, you may be using a suboptimal approach.

    But unfortunately, for this course, your best bet is to nod, smile, and not use any break statements. Look at it as a personal learning experience; by forcing yourself to sit down and reason out how you can do something without using break statements, you might find some situations where they weren't actually the best solution. And when you can honestly look back and say that the solution with break statements is objectively better, you'll be able to use that approach with greater confidence in the future.
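
    For what it's worth, here's a minimal C sketch (my example, not the teacher's) of a case where break is the idiomatic choice: a linear search that stops at the first match.

        #include <stddef.h>

        /* Linear search: stop scanning as soon as the target is found.
         * The break reads straight down; the flag-only alternative makes
         * the reader track extra state in the loop condition. */
        int find_first(const int *xs, size_t n, int target)
        {
            int found = -1;
            for (size_t i = 0; i < n; i++) {
                if (xs[i] == target) {
                    found = (int)i;
                    break;             /* nothing left to do; stop here */
                }
            }
            return found;
        }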

  • I completely agree. And the video didn't discuss how any of that actually happens, except to say that they send the update over radio, and to give a brief description of how the storage system on Voyager works (physically, not logically). That's what I meant by "really nothing here", "here" meaning "in the video", not "in how the Voyager probe works and updates are carried out".

    That next line, "It turns out they update the software by sending the update by radio," was meant to be a bit sarcastic, but I know that isn't obvious in text, so I've added a signifier.

  • This is a short, interesting video, but there's really nothing here for any competent programmer, even a fresh graduate. It turns out they update the software by sending the update by radio (/s). The video hardly goes any deeper than that, and also makes a couple of very minor layman-level flubs.

    There is a preservation effort for the old NASA computing hardware from the missions in the 50s and 60s, and you can find videos about it on YouTube. They go into much more detail without requiring much prior knowledge about specific technologies from the period. Here's one I watched recently about the ROM and RAM used in some Apollo missions: https://youtu.be/hckwxq8rnr0?si=EKiLO-ZpQnJa-TQn

    One thing that struck me about the video was how the writers expressed surprise that it was still working and so adaptable. And my thought was, "Well, yeah, it was designed by people who knew what they were doing, with a good budget, led by managers whose goal was to make excellent equipment, rather than maximize short-term profits."

  • Some of the things you mentioned seem to belong more properly in the development environment (e.g. code editor), and there are plenty of those that offer all kinds of customization and extensibility. Some other things are kind of core to the language, and you'd really be better off switching languages than trying to shoehorn something in where it doesn't fit.

    As for the rest, GCC (like most C/C++ compilers) can produce intermediate files at each of the steps you mentioned, and you can have it perform those steps one at a time. So, if you wanted to perform some extra processing at any point, you could create your own program to do so by working with those intermediate files, and automate the whole thing with a makefile, as sketched below.
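
    As a rough illustration (the file names and the my-transform tool here are hypothetical), a makefile can stop GCC after each stage and splice your own processing in between:

        # Sketch: preprocess, transform, compile to asm, assemble, link.
        # Recipe lines must be indented with tabs in a real makefile.
        main: main.o
                gcc main.o -o main
        main.o: main.s
                gcc -c main.s -o main.o
        main.s: main.post.i        # .i = preprocessed C, so GCC accepts it
                gcc -S main.post.i -o main.s
        main.post.i: main.i
                ./my-transform main.i > main.post.i   # your extra processing
        main.i: main.c
                gcc -E main.c -o main.i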

    You could be on to something here, but few people seem to take advantage of the possibilities that already exist, and most newer languages/compilers deliberately hide these intermediate steps. That suggests to me that whatever problems this situation causes may already have other solutions.

    I don't know much about them myself, but have you read about the LLVM toolchain or compiler-compilers like yacc? If you haven't, they might answer some of your questions.

  • Drawing on Japanese, which is the only non-English language I have significant experience with, object.method(parameter) would feel more natural as object.(parameter)method, possibly even replacing the period separator with a Japanese grammatical construct (with no equivalent in English) that really suits this use case. Even the alternative function(self, parameter, ...) would mesh better with natural Japanese grammar as (self、parameter、〜)function. English and many other languages order their sentences Subject-Verb-Object, but a large group of languages, Japanese among them, runs Subject-Object-Verb.

    I gave an example of an alternative for...in loop in another comment here, so I won't rehash it. But following the general flow of Japanese grammar, that for at the beginning of the statement would feel much more natural as a で (or "with") at the end of the statement, since particles (somewhat similar to prepositions in English) go after the noun that they indicate, rather than before. And since semicolons don't exist in Japanese either, even they might be replaced with a sentence-ending particle.

    There aren't any big problems here, but a plethora of little things that can slowly add up.

  • I'm no linguist, but I have some Japanese language ability, and Japanese seems to be pretty different, grammatically, from English, so I'll draw on it for examples. I also had a quick look at some Japanese-centric programming languages created by native speakers and found that they were even more different than I'd imagined.

    Here's a first example, from an actual language, "Nadeshiko". In pseudo-code, many of us would be used to a statement like the following:

        print "Hello"

    Here's a similar statement in Nadeshiko, taken from an official tutorial:

        「こんにちは」と表示

    A naive translation of the individual words (taking some liberties with English) might be:

        "Hello" of displayment

    I know, I know, "displayment" isn't a real English word, but I wanted to make it clear that the function call here isn't even dressed up as a verb, but as a noun (of a type which is often used in verb phrases... it's all very different from English, which is my point). And with a more English-like word order, it would actually be:

        displayment of "Hello"

    Here's another code sample from the same tutorial:

        「音が出ます!!」と表示。
        1秒待つ。
        「プログラミングは面白い」と話す。

    And another naive translation:

        "Sound comes out!!" of displayment.
        1 second wait.
        "Programming is interesting" of speak.

    And finally, in a more English-like grammar:

        displayment of "Sound comes out!!".
        wait 1 second.
        speak of "Programming is interesting".

    And here's a for...in loop, this time from my own imagination:

        for foo in bar {  }

    Becomes:

        バーのフーで {  }

    Naively:

        Bar's Foo with {  }

    More English-y:

        with foo of bar {  }

    You may have noticed that in all of these examples, the "Japanese" code has little whitespace. Natural written Japanese language doesn't use spaces, and it makes sense that a coding grammar devised by native speakers wouldn't need any either.

    Now, do these differences affect the computer's ability to compile/interpret and run the code? No, not at all. Is the imposition of English-like grammar onto popular programming languages an insurmountable barrier to entry for people who aren't native English speakers? Obviously not, as plenty of people around the world already use these languages. But I think that it's an interesting point, worth considering, in a community where people engage in holy wars over the superiority or inferiority of various programming languages which have more in common than many widely-spoken natural languages.

  • "it shouldn't matter that much what language the keywords are in"

    Another problem is that the grammars of many well-supported programming languages also mirror English/Romance language grammars. Unfortunately, dealing with that is more than just a matter of swapping out keywords.

    EDIT: I may have been unclear; I wasn't trying to imply that this problem is greater than or even equal to the lack of documentation, tutorials, libraries, etc. Just that it's another issue, aside from the individual words themselves, which is often overlooked by monolingual people.

  • I taught myself programming as a kid in the 80s and 90s, and just got used to diagnostic print statements because it was the first approach that occurred to me and I had no (advanced) books, mentors, teachers, or Internet to tell me any different.

    Then in university one of my lecturers insisted that diagnostic prints are completely unreliable and that we must always use a debugger. He may have overstated the case, but I saw that he had a point when I started working on the university's time-sharing mainframe systems and found my work constantly being preempted and moved around in memory in the middle of critical sections. Diagnostic prints would disappear, or worse, appear where, in theory, they shouldn't be able to, and they would come and go like a restless summer breeze. But for as much as that lecturer banged on about debuggers, he hardly taught us anything about how to use them, and they confused the hell out of me, so I made it through the rest of my degree without using debuggers except for one part of one subject (the "learn about debuggers" part).

    Over 20 years later, after a little professional work and a lot of personal projects and making things for other non-coding jobs I've had, I still haven't really used debuggers much. But lately I've been forcing myself to use them sometimes, partly to help me pick apart quirks in external libraries that I'm linking, and partly because I'd like to start using SIMD instructions and threading in my programs, and I remember how that sort of thing screwed up my diagnostic prints in university.

  • The definition of the Date object explicitly states that any attempt to set the internal timestamp to a value outside of the valid range must result in it being set to NaN. If there's an implementation out there that doesn't do that, then the issue is with that implementation, not the standard.

  • There are several reasons that people may prefer physical games, but I want people to stop propagating the false dichotomy of "physical copy = keep forever, digital copy = can be taken away at a publisher's whim". Most modern physical copies of games are glorified digital download keys. Sometimes, the games can't even run without downloading and installing suspiciously large day-0 "patches". When (not if) those services are shut down, you will no longer be able to play your "physical" game.

    Meanwhile GOG, itch, even Steam (to an extent), and other services have shown that you can offer a successful, fully digital download experience without locking the customer into DRM.

    I keep local copies of my DRM-free game purchases, just in case something happens to the cloud. As long as they don't get damaged, those copies will continue to install and run on any compatible computer until the heat death of the universe, Internet connection or no, just like an old PS1 game disc. So it is possible to have the convenience of digital downloads paired with the permanence that physical copies used to provide. It's not an either-or choice at all, and I'm sick of hearing people saying that it is.