
  • There are several reasons that people may prefer physical games, but I want people to stop propagating the false relationship of "physical copy = keep forever, digital copy = can be taken away by a publisher's whim". Most modern physical copies of games are glorified digital download keys. Sometimes, the games can't even run without downloading and installing suspiciously large day 0 "patches". When (not if) those services are shut down, you will no longer be able to play your "physical" game.

    Meanwhile GOG, itch, even Steam (to an extent), and other services have shown that you can offer a successful, fully digital download experience without locking the customer into DRM.

    I keep local copies of my DRM-free game purchases, just in case something happens to the cloud. As long as they don't get damaged, those copies will continue to install and run on any compatible computer until the heat death of the universe, Internet connection or no, just like an old PS1 game disc. So it is possible to have the convenience of digital downloads paired with the permanence that physical copies used to provide. It's not an either-or choice at all, and I'm sick of hearing people say that it is.

  • It really depends on your expectations. Once you clarified that you meant parity with current consoles, I understood why you wrote what you did.

    I'm almost the exact opposite of the PC princesses who can say with a straight face that running a new AAA release at anything less than high settings at 4K/120fps is "unplayable". I stopped watching/reading a lot of PC gaming content online because it kept making me feel bad about my system even though I'm very happy with its performance.

    Like a lot of patient gamers, I'm also an older gamer, and I grew up with NES, C64, and ancient DOS games. I'm satisfied with medium settings at 1080/60fps, and anything more is gravy to me. I don't even own a 4K display. I'm happy to play on low settings at 720/30fps if the actual game is good. The parts in my system range from 5 to 13 years old, many of them bought secondhand.

    The advantage of this compared to a console is that I can still try to run any PC game on my system, and I might be satisfied with the result; no-one can play a PS5 game on a PS3.

    Starfield is the first new release that (judging by online performance videos) I consider probably not worth trying to play on my setup. It'll run, but the performance will be miserable. If I were really keen to play it I might try to put up with it, but fortunately I'm not.

    You could build a similar system to mine from secondhand parts for dirt cheap (under US$300, possibly even under US$200) although these days the price/performance sweet spot would be a few years newer.

  • I think that it's because a) the abstraction does solve a problem, and b) the idealized solutions aren't actually all that simple.

    But I still agree with the article because I also think that a) the problem solved by the added abstraction isn't practical, but emotional, and b) the idealized solutions aren't all that complex, either.

    It seems to me that many devs reach immediately for a tool or library, rather than looking into how to create their own solution, due more to fear of the unknown than a real drive for efficiency. And while learning the actual nuts and bolts of the task is rarely going to be the faster or easier option, it's frequently (IMO) not going to be much slower or more difficult than learning how to integrate someone else's solution. But at the end of it you'll have learned a lot more than you would've by using a tool or library.

    Another problem in the commercial world is accountability to management.

    Many decades ago there used to be a saying in tech: "No-one ever got fired for buying IBM." What that meant was that even if IBM's solution was completely beaten by something offered by one of their competitors, you personally may still be better off overall going with IBM. The reason being, if you went with the competitor, and everything worked out, the less tech-savvy managers were just as likely to pat you on the back as to assert that the IBM solution would've been even better. If the competitor's solution didn't meet expectations, you'd be hauled over the coals for going with some cowboy outfit instead of good old reliable IBM. Conversely, if you went with IBM and everything worked, everyone would be happy. But if you chose IBM and the project failed, it'd be, "Well, it's not your fault. Who could've predicted that IBM wouldn't come through?"

    In the modern era, replace "IBM" with the current tool-of-the-month, and your manager will be demanding to know why you're wasting time reinventing the wheel on the company's dime.

  • I think a part of it is how we look for information in the first place. If you search/ask "How do I do (task) in (environment)?", you're going to find out about various libraries/frameworks/whatever that abstract everything away for you. But if you instead look for information on "How do I do (task)?", you'll probably get more generalized information that you can take and use to write your own stuff from scratch. Try only to look for help related to your specific environment/language when you have a specific implementation issue, like how to access a file or get user input.

    We also need a willingness to learn how things actually work. I see quite a few folks who seem so worried that they'll never understand some task that they unwittingly spend almost as much time and effort, or even more, learning the ins and outs of someone else's codebase as they would have spent learning the task itself, just to avoid what they see as the scarier unknown.

    Fortunately, I've seen an increase in the last year or two of people deliberately giving answers or writing tutorials that are "no-/low-library", for people who want to know what's actually going on in their programs.

    I would never say to avoid all libraries or frameworks, because many of them are well-written (small, modular, stable) and can save us a lot of boilerplate coding. But there are at least as many libraries which suffer from "kitchen-sinkism", where the authors want so much for their library to become the pre-eminent choice that it becomes a bloated tangle, trying to be all things to all people. This can be compounded by less-experienced coders including multiple huge libraries in one program, using only a fraction of each library's features without realizing that there's almost complete overlap. The cherry on top is when the end developer uses one of these libraries to do just one or two small tasks that could've been done in less than a dozen lines of standard code, if only someone had told them how, instead of sending them off to install yet another library.
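
    As a concrete (and entirely made-up) illustration of that "less than a dozen lines of standard code" point, here is the sort of small job, grouping records by a key, that I've watched people install a whole utility library for. A rough Python sketch; the data and names are mine, not from any particular library:

        from collections import defaultdict

        def group_by(items, key):
            """Group an iterable into a dict of lists, keyed by key(item)."""
            groups = defaultdict(list)
            for item in items:
                groups[key(item)].append(item)
            return dict(groups)

        players = [
            {"name": "Ana", "team": "red"},
            {"name": "Bo", "team": "blue"},
            {"name": "Cy", "team": "red"},
        ]
        # Prints a dict with the red team (Ana, Cy) and the blue team (Bo) as two lists.
        print(group_by(players, key=lambda p: p["team"]))

    The standard library does the heavy lifting, and the entire "dependency" is ten lines you can read in one sitting.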

  • I can't respond directly because I haven't played either Metroid Dread or Hollow Knight specifically, although I've played and enjoyed many other metroidvania games, including the majority of the Metroid series (I even enjoyed Metroid Other M... mostly). But I'll say that there's no rule that says a metroidvania can't be entertaining before you unlock some specific part of the ability set. The search to unlock new abilities should be fun in itself.

  • I think that we mostly agree. My contention is that pretty much the entire game should be engaging to play; a long total play time shouldn't excuse a dull opening stretch, and a shorter play time simply doesn't have room for one. Plenty of games have shown that it's possible to gradually layer mechanics one or two at a time, creating experiences around those smaller subsets of abilities that are still entertaining. I work in education and this idea is vital to what I do. Asking students to sit down and listen quietly as I feed them a mountain of boring details while promising, "Soon you'll know enough to do something interesting, just a little longer," is a sure-fire recipe for losing my audience.

    And as I think you may have intimated, creating environments that require the use of only one ability at a time reduces those abilities to a boring list. When you've finally taught the player each ability in isolation, and suddenly start mixing everything up once they get to the "good part" of the game, they'll virtually have to "relearn" everything anyway.

    We don't need to give the player everything at once to make our games interesting, but we do need to make sure that what we're giving them piecemeal is interesting in the moment.

  • This isn't a slight against you, OP, or this game, but I'm just suddenly struck by the way that, "aside from the first few hours," or more commonly, "it gets better a couple of hours in," has become a fairly common and even somewhat acceptable thing to say in support of a game, as part of a recommendation.

    As I get older I'm finding that I actually want my games to have a length more akin to a movie or miniseries. If a game hasn't shown me something worthwhile within an hour or so, I'm probably quitting it and never coming back.

  • Yep, it's probably easier to get an Android device and install readers on it than to try for a prepackaged FOSS reader.

    I use several apps on my Android phone, but mostly Kindle (for Kindle, duh), PDF Reader (for PDFs, duh again), and Lithium (mostly for EPUB but pretty much everything else, too). I get most of my e-books as DRM-free EPUBs and PDFs.

  • I learned about Bloom filters from an article discussing how old systems and algorithms shouldn't be forgotten because you never know when they'll come in handy for another application. The example they gave was using Bloom filters to reduce data transmission for MMOs: break your world into sectors and send everyone a Bloom filter of objects mapped to sectors; the client can then request more detail only for objects that are within a certain range of the individual PC.
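
    For anyone who hasn't run into them, a Bloom filter is just a bit array plus a few hash functions: it can answer "definitely not present" or "probably present" in a tiny amount of space. Here's a minimal Python sketch of the idea; the sector/object keys are my own invention, not from the article I read:

        import hashlib

        class BloomFilter:
            def __init__(self, size_bits=1 << 16, num_hashes=4):
                self.size = size_bits
                self.num_hashes = num_hashes
                self.bits = bytearray(size_bits // 8)

            def _positions(self, key):
                # Derive several bit positions from one SHA-256 digest of the key.
                digest = hashlib.sha256(key.encode()).digest()
                for i in range(self.num_hashes):
                    yield int.from_bytes(digest[i * 4:(i + 1) * 4], "big") % self.size

            def add(self, key):
                for pos in self._positions(key):
                    self.bits[pos // 8] |= 1 << (pos % 8)

            def might_contain(self, key):
                # False positives are possible; false negatives are not.
                return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

        # Server: record which objects sit in which sectors, then ship this tiny bit array to clients.
        world = BloomFilter()
        world.add("sector:12:obj:dragon_statue")

        # Client: only request full detail for objects that might be in nearby sectors.
        if world.might_contain("sector:12:obj:dragon_statue"):
            print("ask the server for the full data on this object")

    The client trades a few wasted detail requests (false positives) for never missing an object that really is nearby, and the filter itself is a fraction of the size of the full object list.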

  • We were managing our own work with (usually generous) milestones/deadlines determined by other people. As long as we kept meeting goals, no-one looked any deeper. It gave me the freedom to literally put everything else on hold and switch 100% of my attention to this project.

  • I once had a manager hand me a project brief and ask me how quickly I thought I could complete it. I was managing my own workload (it was a bad situation), but it was a very small project and I felt that I had time to put everything else on hold and focus on it. So, I said that I might be able to get it done in four days, but I wouldn't commit to less than a week just to be sure.

    The manager started off on this half-threatening, half-disappointed rant about how the project had a deadline set in stone (in four days' time), and how the head of the company had committed to it in public (which in hindsight was absolute rot). I was young and nervous, but fortunately for me every project brief had a timeline of who had seen it, and more importantly, when they had received it. I noticed that this brief had originated over three months prior, and had been sitting on this manager's desk for almost a month. I was the first developer in the chain. That gave me the guts to say that my estimate was firm, and that if anyone actually came down the ladder looking for heads to set rolling (one of the manager's threats), they could come to me and I would explain.

    In the end nothing ever came of it because I managed to get the job done in three days. They tried to put the screws to me over that small of a project.

  • I kind of agree with you but there's also the issue that when you have a problem with Windows, there are 30 people to tell you, "Here are the hoops, and here's how to jump through them," while on Linux there are often only 3-5 people, all telling you, "LOL wipe and replace your whole OS with the distro that I use because I don't have that specific problem."

  • I agree with most of what you said, except for the Windows examples. The pages that you linked begin with three-line TL;DRs that are enough for any barely-competent user to find and modify the necessary settings. While the full instructions may be tortuously detailed, are they actually hard to understand?

    And sure, those Windows pages don't advance the user's knowledge in any meaningful way, but neither does blindly copying and pasting a line of shell commands.

    By the way, while I appreciate that we're talking about if and how CLI is superior to GUI, and not Linux versus Windows...

    "Where-as Linux users can easily share commands and fixes or tests over a simple irc chat, because the command line reaches the whole system."

    ... both of those tasks can be done via CLI in Windows, too. I am very happy that I switched to Linux, but there's no reason to misrepresent the other guys.

  • One thing that wasn't mentioned in the article is default settings. In so many CLI programs (and FOSS in general), there seems to be some kind of allergy to default settings. I don't know whether it's a fear of doing the wrong thing, or a failure to sympathize with users who haven't spent the last three months up to their elbows in whatever the program does. But so often I've come to a new program, and if I've managed to stick with it, come to the realization later that at least half of the settings I needed to research for hours and enter manually every time could have been set to static defaults or easily-derived assumptions based on other settings in 99% of cases. Absolutely let your users override any and all defaults, but please use defaults.
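
    To show what I mean by "easily-derived assumptions", here's a small Python sketch using argparse; the converter program and its flags are hypothetical, invented purely for illustration. The output directory defaults to something derived from the input file, and the worker count defaults to what the machine reports, but the user can still override both:

        import argparse
        import os

        parser = argparse.ArgumentParser(description="example converter (hypothetical)")
        parser.add_argument("input", help="file to process")
        parser.add_argument("--out-dir", default=None,
                            help="output directory (default: same directory as the input file)")
        parser.add_argument("--jobs", type=int, default=os.cpu_count(),
                            help="worker count (default: number of CPUs on this machine)")
        args = parser.parse_args()

        # Derive the unset option from the other settings instead of making the user supply it.
        out_dir = args.out_dir or os.path.dirname(os.path.abspath(args.input))
        print(f"processing {args.input} -> {out_dir} with {args.jobs} workers")

    A first-time user gets something sensible without touching the manual, and anyone who disagrees with a default just passes the flag.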

    I'd also be interested in the overlap between people saying, "LOL just get gud" about using the command line, and people who are terrified of using C++.