Posts 0 · Comments 1,096 · Joined 2 yr. ago

  • While I agree - the part you're missing is that the vast majority of TikTok users are outside the United States.

    TikTok doesn't want to sell. They want some sort of "independent" subsidiary, one where ByteDance still profits from (and controls) TikTok while the subsidiary worries about compliance with US law. But the thing is, that's already the current structure.

    I wouldn't be surprised if they refuse to sell and wind up being banned. ByteDance doesn't want to lose all their US customers, but they'd likely prefer that to selling.

  • I wonder if a tablet or laptop might be more appropriate for your wife? When I think "web" I want a keyboard or touch screen and a TV typically has neither. But in any case, I think you're making a mistake trying to have one device that can fit every use case. Your TV will have multiple inputs (and it will also probably be a smart TV).

    Plug some sort of Mini PC into the TV for your wife, and let your kids use the TV's built-in smart features to watch TV, or buy a set top box such as an Apple TV/Nvidia Shield/etc.

    PS: I would 100% use a projector, not a TV. Just project onto the wall (assuming you don't have wallpaper/etc).

  • this guy says it’s ok that I’m a full stack dev

    I'm also a full stack dev - so maybe I'm biased. There's definitely a place for specialist work, but I don't agree, at all, with people who think specialist developers are better than full stack ones.

    The way I see it full stack devs either:

    • are good enough to be the only type of developer you hire; or
    • sit in between specialists and management

    Take OpenAI for example. They have a bunch of really smart people working on the algorithm - so much so that they're not really engineers, they're more like scientists - but they also have a full stack team who take that work and turn it into something users can actually interact with, and above the full stack team is the management team. Maybe OpenAI isn't structured that way, but that's how I'd structure it.

    But most software isn't like ChatGPT. Most software isn't bleeding edge technology - you can usually take open source libraries (or license proprietary ones). Let the specialists figure out how to make TCP/IP handle complex issues like buffer bloat... all the full stack dev needs to know is response = fetch(url).
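
    To make that concrete, here's a minimal sketch of what that abstraction looks like in practice (the URL is a placeholder, not a real endpoint):

        // Decades of transport-layer engineering, hidden behind one call.
        // (Run in any modern JS runtime; top-level await needs a module context.)
        const response = await fetch("https://example.com/api/data"); // placeholder URL
        const data = await response.json();
        console.log(data);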

  • I think the Apple silicon devices are going to have a pretty locked boot loader

    Couldn't be further from the truth. The OS is heavily locked down to prevent malware from modifying the kernel / boot process, but bypassing that is as simple as holding down the power button until you see an options screen (the equivalent of the BIOS on a PC), where one of the options is a tool to adjust boot security - including the option to boot into an arbitrary third party kernel. As long as it's compiled for ARM64 (an industry standard CPU architecture) it will boot.

    The only real headaches are around drivers. For example, Mac laptop trackpads don't have any buttons at all. Instead the trackpad is pressure sensitive, and the software has to detect pressure that looks like a press action, treat that as a click, and send haptic feedback (vibrating the trackpad). None of that is standard stuff, and if you want a Mac laptop's trackpad to work at all... you need to figure it out yourself.

  • Sure — security is one area where you do need to be a specialist.

    I'd say it's the exception that proves the rule though. Don't write your own encryption algorithms, don't invent new auth flows, do hire third parties to audit and test your security systems, etc etc. If you want to specialise in something like security, then yeah, that's something you should study. But at the same time - every programmer should have general knowledge in that area. Enough to know when it's OK to write your own security code and when you need to outsource it.

  • An inverter will not let you run your fridge until the battery is "dead". It will have a low-voltage cutoff, likely somewhere around 11 volts, specifically to avoid damaging the battery by fully discharging it.

    How many hours you'll get from the battery mostly depends on your ambient air temperature and how often you open the fridge. Fridges don't use that much power when they're idle - mine averages about 90 watts (I'm not running off grid, but I do have rooftop solar and our system produces pretty charts showing consumption). A large car battery can sustain 90 watts for quite a long time - well over 2 hours, probably closer to 10 (rough numbers sketched at the end of this comment).

    Running a fridge off a car battery long term is a bad idea. But in an emergency? Sure I'd totally do that - especially if your "emergency" is genuine such as needing to keep your medication cold. Just don't open the fridge unless you're taking your medication.

    LifePo4 FTW!

    Sure. Way better than lead acid. But that doesn't mean lead acid is useless. When I lived off grid, LiFePO4 didn't exist and we got close to ten years (of daily use) out of our lead acid batteries. They were bigger than car batteries, and deep cycle ones too, but in an emergency a car battery would be a fine choice if it's the best one you have.
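
    The rough numbers behind the runtime estimate above, as a sketch - the battery figures are assumptions for illustration, not measurements:

        // All figures are assumptions for illustration only.
        const batteryAh = 100;           // a large car battery
        const batteryVolts = 12;
        const inverterEfficiency = 0.85; // typical inverter loss
        const fridgeWatts = 90;          // the average draw mentioned above

        // Hours of runtime at a given depth of discharge.
        const hours = (depthOfDischarge) =>
          (batteryAh * batteryVolts * depthOfDischarge * inverterEfficiency) / fridgeWatts;

        console.log(hours(0.5).toFixed(1)); // "5.7" - battery-friendly 50% discharge
        console.log(hours(0.8).toFixed(1)); // "9.1" - running it nearly flat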

  • Apple said EVERYBODY MAKE ARM APPS NOW

    Uh, no. What they did is make sure x86 software still works perfectly. And not just Mac software - you can run x86 Linux server software on a Mac with Docker, and you can run DirectX x86 PC games on a Mac with WINE. Those third party projects didn't do it on their own; Apple made extensive contributions to them.

    I'd like to go into more detail, but as a third party developer (not for any of the projects mentioned above) I signed an NDA with Apple relating to the transition process before you could even buy an ARM powered Mac. Suffice it to say the fruit company helped developers far and wide with the transition.

    And yes, they wanted developers to port software over to run natively, but that was step 2 of the transition. Step 1 was (and still is) making sure software doesn't actually need to be ported at all. Apple has done major architecture switches like this several times and is very good at them. This was by far the most difficult transition they've ever done, but it was also the smoothest.

    It's 2024, and I still have software running on my Mac that hasn't been ported. If that software is slow, I can't tell. It's certainly not buggy.

  • For some non-critical stuff you can experiment til you find something that appears to work, deploy it, and fix any issues that might appear. Too much of today’s Internet is done that way, but it really is ok sometimes

    For critical work, you can easily apply the same approach but replace the "deploy it" stage with "do extensive internal testing". It takes longer and is more expensive, but it does work. For example, the first ever hydrogen powered aircraft flew in 1957 - an airplane with three engines, only one of which ran on hydrogen. Almost 70 years of engineering later, that's still the approach being used: Airbus claims they will have commercial hydrogen powered flights around 2035 and plan to flight test the final production engine next year on an A380.

    The A380 has four engines, and each is powerful enough that the plane can fly safely with only one running. In fact, it should be able to land with all four engines failed, with a "Ram Air Turbine" providing electricity and hydraulic pressure to critical systems.

    The best approach to critical systems is not to build a perfectly reliable system, but rather to have enough redundancy that failures don't end in a "please explain" before Congress.

  • In my opinion the best developers are "generalists" who know a little bit about everything. For example I have never written a single line of code in the Rust programming language... but I know at a high level all of the major pros and cons of the language. And if I ever face a problem where I need some of those pros/don't care about the cons then I will learn Rust and start using it for the first time.

    There's not much benefit to diving deep into specialised knowledge of any particular technology, because chances are you'll live your entire life without ever actually needing it. If anything, it might encourage you to force a square peg into a round hole - for example, "I know how to do this with X, so I'm going to use X even though Y would be a better choice".

    Wikipedia has a list of "notable" programming languages, with 49 languages under "A" alone, and I've personally learned and used three of the "A" languages. I dislike all three, and I seriously hope I never use any of them again... but at the same time they were the best choice for the task at hand, and I would still use them if I were faced with the same situation again.

    That's nowhere near a complete list - which would probably have a few thousand under "A" alone. I know one more "A" language which didn't make Wikipedia's cut.

    The reality is you don't know what technology you need to learn until you actually need it. Even if you know something that could be used to solve a problem, you should not automatically choose that path. Always consider if some other tech would be a better choice.

    Since you're just starting out, I do recommend branching outside your comfort zone and experimenting with things you've never done before. But don't waste time going super deep - just cover the basics and then move on. If there's a company you really want to work for, and they're seeking skills you don't have... then maybe get those skills. But it's risky - the company might not hire you. Personally I would take a different approach: get a different job at the company first, then start studying and ask your manager to help you transfer over to the role you weren't qualified for before, but are now. A well run company will support you in that.

    As for learning outside of your 9-5... you should spend your spare time doing whatever you want. If you really want to spend your evenings and weekends writing code then go ahead and do that... but honestly, I think it's more healthy long term to spend that time away from a desk and away from computers. I think it would be more productive, long term, to spend that time learning how to cook awesome food or do woodworking or play social football or play music... or of course the big one, find a partner, have kids, spend as much time with them as you can. As much as I love writing code, I love my kid more. A thousand times more.

    Programming is a job. It's a fun job, but it's still a job. Don't let it be your entire life.

  • Apple is working on models, but they seem to be focusing on ones that use tens of gigabytes of RAM, rather than tens of terabytes.

    I wouldn't be surprised if Apple ships an "iPhone Pro" with 32GB of RAM dedicated to AI models. You can do a lot of really useful stuff with a model like that... but it can't compete with GPT4 or Gemini today - and those are moving targets. OpenAI/Google will have even better models (likely using even more RAM) by the time Apple enters this space.

    A split system, where some processing happens on device and some in the cloud, could work really well. For example analyse every email/message/call a user has ever sent/received with the local model, but if the user asks how many teeth a crocodile has... you send that one to the cloud.
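
    A hypothetical sketch of that split - the model objects here are stubs I made up, not any real API:

        // Stubs standing in for real models - every name here is invented.
        const localModel = { generate: async (q) => `on-device answer: ${q}` };
        const cloudModel = { generate: async (q) => `cloud answer: ${q}` };

        // Private data stays on the phone; general knowledge goes to the cloud.
        async function answer(question, touchesPersonalData) {
          const model = touchesPersonalData ? localModel : cloudModel;
          return model.generate(question);
        }

        answer("How many teeth does a crocodile have?", false).then(console.log);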

  • I don't see how it's any different to using Google as the default search engine in Safari.

    Also - phones don't have terabytes of RAM. The idea that a (good) LLM can run on a phone is ridiculous. Yes, you can run small AI models on there - but they're about as intelligent as an ant... ants can do a lot of useful work, but they're not on the same level as Gemini or ChatGPT.

  • You don't need to "have faith". Just test the code and find out if it works.

    For example, earlier today I asked ChatGPT to write some JavaScript to make a circle orbit around another circle, calculating the exact position it should be at for a given radius/speed/time. Easy enough to verify that was working.

    Then I asked it to draw a 2D image of the earth, to put on that circle. I know what our planet looks like, so that was easy. I did need to ask several times with different prompts to get the style I was looking for... but it was a hell of a lot easier than drawing one myself.

    Then the really tricky part... I asked it how to make a CSS inner shadow that updates in real time as the earth rotates around the sun. That would've been really difficult for me to figure out on my own, since geometry isn't my strong point and neither is CSS.

    Repeated that for every other planet and moon in our solar system, added some asteroid belts... and I had a pretty sweet representation of our solar system - not exactly to scale, but close, and fully animated - in a couple of hours. It would have taken a week if I'd had to use Stack Overflow.
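
    The orbit calculation itself is just trigonometry. A minimal sketch of the idea (my own reconstruction, not the actual ChatGPT output):

        // Position of a body orbiting a centre point at constant speed.
        function orbitPosition(cx, cy, radius, periodSeconds, timeSeconds) {
          const angle = (2 * Math.PI * timeSeconds) / periodSeconds;
          return {
            x: cx + radius * Math.cos(angle),
            y: cy + radius * Math.sin(angle),
          };
        }

        // A quarter of the way through the period puts the body at 90 degrees.
        console.log(orbitPosition(0, 0, 100, 365, 91.25)); // ≈ { x: 0, y: 100 }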

  • Business, money, and interests of managers who have never written code themselves will carry more weight than the results of your research and study of effective programming languages. This is the fucking reality of the industry today!

    That is not even remotely the reality where I work. The reality where I work is:

    Everyone we have hired, and everyone we plan to hire in the future, is familiar with languages X/Y/Z. Therefore you will use those three languages.

    If you want to use a fourth language, first you need approval to train every single employee in that language. To a high level of proficiency. That would take thousands of hours for each employee.

  • humans, they can recognize their bias

    Can they? I'm not convinced.

    As far as I know ChatGPT can't do that.

    You do it with math. Measure how many women hold C-level positions at the company, then introduce deliberate bias into the hiring process (human or AI) to steer the company towards a target of 50%.

    It's not easy, but it can be done. And if you have smart people working on it you'll get it done.
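
    As a toy sketch of the "measure and steer" idea (every number below is invented):

        // Invented numbers for illustration only.
        const target = 0.5;
        const womenInRole = 12;
        const totalInRole = 80;

        // The gap between where the company is and where it wants to be;
        // a hiring process (human or AI) can be weighted by this amount.
        const current = womenInRole / totalInRole;
        const correctiveBias = Math.max(0, target - current);

        console.log(current.toFixed(2));        // "0.15"
        console.log(correctiveBias.toFixed(2)); // "0.35" - steer hard toward the target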

  • Game engines are a lot simpler than a web rendering engine, so I'm not sure it's a good comparison.

    Gecko (the Firefox rendering engine) dates back to 1997, and KHTML - the common ancestor shared by Blink and WebKit (Chrome and Safari) - is maybe one or two years younger; I wasn't able to find a source. An insane amount of work, by millions of people if you include minor contributors, has gone into those rendering engines.

    Creating another one would be an insane amount of work... assuming you want it to be competitive.