
  • You can't say that, though, because it implies Chinese engineers and information technology scientists are trailblazers rather than plagiarists and IP thieves.

    I mean, I said what I said and I meant it. The Chinese are trailblazing a path nobody has tried before: DUV only for sub-10nm processes. It's not ideal, and the reason why nobody did it before is because they already had EUV by the time they got there.

    But I wouldn't sleep on the ability of anyone to be able to solve problems using the tools at their disposal.

    Especially since there's nothing stopping the mainland Chinese companies from hiring Taiwanese engineers.

    Not a ton of people believed that Taiwan could surpass Japan, either, but that happened in the 90's. Not a ton of people believe that Japan can get back in the game, but Rapidus is making a play for 2nm.

    Nothing is forever, and things are always changing. I'm somewhat optimistic that western sanctions will keep China from competing on the world stage at semiconductor fabrication, but I don't think it's a guarantee or in any way inevitable.

  • Meanwhile, you've got companies in Taiwan, Korea, China, and Japan breaking into the 3nm and 2nm scales.

    The mainland Chinese SMIC is doing everything they can without access to ASML's EUV machines, and have gotten further than anyone else has on DUV. It remains to be seen just how far they can get without plateauing on the limits of that tech. Most doubted that they could get past 10nm, but some of their recent chips appear to be comparable to 7nm, and there are rumors that they have a low yield 5nm process that isn't economically feasible but can be a strong political statement.

    TSMC is delaying the transition to Gate All Around, announcing that they won't be trying it on the 3nm processes, and waiting until 2nm to roll that out. They're the undisputed leader today, so they're milking their current finFET advantage for as long as it will sustain them.

    Samsung has already switched to Gate All Around for their 3nm process, so they might get the jump on everyone else (even if they struggled with the previous paradigm of finFET). But they're not lining up external customers, as their yields still can't compete with TSMC's. It's entirely possible though that as the industry moves from finFETs to GAAFETs, Samsung could take a lead.

    Intel basically couldn't get finFETs to work, and are already trying to skip ahead to GAAFETs (which they call RibbonFET). Plus Intel (like the others) is trying to introduce backside power delivery, which, if it can be commercialized and mass produced, would achieve huge gains in power efficiency. Intel did introduce both technologies in its 20A process (supposedly 2nm class), but then canceled it due to low yield. So they're basically betting the company on their 18A process, and hoping they can get that to market before TSMC and Samsung hit their stride on 2nm.

  • So your market research has been to ask two women who don't listen to podcasts, and then look at the view counts on a video platform not primarily used for audio podcasts. Real solid methodology there.

    Just, like, look at any of the podcast metrics ranking podcasts by listens or subscribers from the US: #2 on Apple, #5 on Spotify, #4 on Edison's charts.

  • It's a top-5 podcast in terms of popularity, and probably literally the most popular podcast among women. It was, for a time, the #2 podcast after Rogan's.

    Spotify paid $60 million to make Call Her Daddy a Spotify exclusive for 2 years, and last year SiriusXM paid $125 million for the ad/distribution rights for 3 years.

    It's a heavy hitter in the podcast space, by pretty much every metric.

  • But they won't go bankrupt because the US gov./army need Intel to stay relevant against China, and Intel is basically the only American company that both designs and fabs their own processors and that is still relevant.

    That's never stopped anyone from going bankrupt and wiping out shareholders. If the tech is that critical, the US government might engineer a bailout of the company, but will make the shares worthless in the process.

    They did it with GM and Chrysler in 2009, and with Iridium in 2000.

  • what is your source for this?

    Familiarity with the industry, and knowledge that finFET was exactly what caused Intel to stall, Global Foundries to just give up and quit trying to keep up, and where Samsung fell behind TSMC. TSMC's dominance today all goes through its success at mass producing finFET and being able to iterate on that while everyone else was struggling to get those fundamentals figured out.

    Intel launched its chips using its 22nm process in 2012, its 14nm process in 2014, and its 10nm process in 2019. At each ITRS "nm" node, Intel's performance and density were better than TSMC's at the nominally equivalent node, but worse than TSMC's next node down. Intel's 5-year lag between 14nm and 10nm is when TSMC passed them up, launching 10nm and even 7nm before Intel got its 10nm node going. And even though Intel's 14nm was better than TSMC's 14nm, and arguably comparable to TSMC's 10nm, it was left behind by TSMC's 7nm.

    You can find articles from around 2018 or so trying to evaluate Intel's increasingly implausible claims that its 14nm was comparable to TSMC's 10nm or 7nm processes, reflecting that Intel was stuck on 14nm for far too long, trying to figure out how to keep improving while grappling with finFET-related technical challenges.

    You can also read reviews of AMD versus Intel chips from around the mid-2010s to see that Intel had better fab techniques then, and that AMD had to pioneer innovative packaging techniques, like chiplets, to make up for that gap.

    If you're just looking at superficial developments at the mass production stage, you're going to miss out on the things that are in 20+ year pipelines between lab demonstrations, prototypes, low yield test production, etc.

    Whoever figures out GAA and backside power is going to have an opportunity to lead for the next 3-4 generations. TSMC hasn't figured it out yet, and there's no real reason to assume that their finFET dominance would translate to the next step.

  • Another example here is the Matrix protocol, specifically designed from the ground up to be open and distributed. In reality, the only option for full-featured stable server software is the one maintained by the project itself, and there aren't a lot of third party clients available.

    Openness itself is a good goal, but complexity can itself pose a barrier to openness.

  • Intel has only been behind for the last 7 years or so, because they were several years delayed in rolling out their 10nm node. Before 14nm, Intel was consistently about 3 years ahead of TSMC. Intel got leapfrogged at that stage because it struggled to implement the finFET technology that is necessary for progressing beyond 14nm.

    The forward progress of semiconductor manufacturing tech isn't an inevitable march towards improvement. Each generation presents new challenges, and some of them are quite significant.

    In the near future, the challenge is in certain three dimensional gate structures more complicated than finFET (known as Gate All Around FETs) and in backside power delivery. TSMC has decided to delay introducing those techniques because of the complexity and challenges while they squeeze out a few more generations, but it remains to be seen whether they'll hit a wall where Samsung and/or Intel leapfrog them again. Or maybe Samsung or Intel hit a wall and fall even further behind. Either way, we're not yet at a stage where we know what things look like beyond 2nm, so there's still active competition for that future market.

    Edit: this is a pretty good description of the engineering challenges facing the semiconductor industry next:

    https://www.semianalysis.com/p/clash-of-the-foundries

  • No, there's still competition. Samsung and Intel are trying, but are just significantly behind. So leading the competition by this wide of a margin means that you can charge more, and customers decide whether they want to pay way more money for a better product now, whether they're going to wait for the price to drop, or whether they'll stick with an older, cheaper node.

    And a lot of that will depend on the degree to which their customers can pass on increased costs to their own customers. During this current AI bubble, maybe some of those can. Will those manufacturing desktop CPUs or mobile SoCs be as willing to spend? Maybe not as much.

    Or, if the AI hype machine crashes, so will the hardware demand, at which point TSMC might see reduced demand for their latest and greatest node.

  • I'll be honest: I found David Graeber to be way off the mark in this book (and only kinda off the mark in Debt, the book that put him on the map). Setting aside his completely unworkable definition of what makes a job "bullshit" or not, the book still doesn't make a persuasive case that our social media activity is driven by idle downtime on the job.

    The majority of the time that people spend on Facebook, YouTube, Instagram, and Twitter is happening off the clock. It's people listening to podcasts in the car, watching YouTube videos on the bus, surfing Facebook and Instagram while they wait for their table at a restaurant, sitting at home with the vast Internet at their disposal from their couch, etc. And perhaps most importantly, it's a lot of younger people who don't have jobs at all.

    So the social media activity is largely driven by people who aren't working at that moment: commuting times in mornings and evenings, lunch breaks, etc. That's not the bullshitness of the job, but the reality that people have downtime outside of work, especially immediately before or after.

  • Bullshit Jobs

    No, the actual definition that Graeber uses for bullshit jobs is not relevant to this discussion. Corporate lawyers are his classic example, but those are jobs that don't have a ton of idle time. Other jobs, like night security guard or condo doorman, are by no means recent inventions, and exactly the type of people who used to pass the time with radio and magazines.

    If you're saying that there's a rise in idle time for people, I'm not sure it comes from our jobs.

  • With social media came the timeline you could mindlessly scroll through or click on suggestions.

    I mean before broadband Internet you could sit around and passively consume cable television or radio pretty easily. There's always been a role for people to act as curators and recommendation engines, from the shelf of staff picks at a library/bookstore/video rental store to the published columns reviewing movies and books, to the radio DJ choosing what songs to play, to the editors and producers and executives who decide what gets made and distributed.

    I don't buy that social media was a big change to how actively we consume art, music, writing, etc. If anything, the change was to the publishing side, that it takes far less work to actually get something out there that can be seen. But the consumption side is the same.

  • Nowadays, I hear a lot of people say that the alternative to these massive services is to go back to old-school forums. My peeps, that is absurd. Nobody wants to go back to that clusterfuck just described. The grognards who suggest this are either some of the lucky ones who used to be in the "in-crowd" in some big forums and miss the community and power they had, or they are so scarred by having to work in that paradigm, that they practically feel more comfortable in it.

    I'm totally in agreement.

    I agree that the subreddit model took off in large part because centralized identity management was easy for users. We'll never go back to the old days where identity and login management was inextricably tied to the actual forum/channel being used, a bunch of different islands that don't actually interact with each other.

    I'm hopeful that some organizations will find it worthwhile to administer identity management for certain types of verified users: journalism/media outfits with verified accounts of their employees with known bylines, universities with their professors (maybe even students), government organizations that officially put out verified messaging on behalf of official agencies, sports teams or entertainment collectives (e.g. the actor's unions), and manage those identities across the fediverse. What if identity management goes back to the early days of email, where the users typically had a real relationship with their provider? What would that look like for different communities that federate with those instances?

  • Minitel launched in 1982, well after work had begun on interconnections between different computer networks, using the predecessor protocols to TCP/IP and what would become the addressing/domain name system. Minitel launched on protocols that were ultimately incompatible with the rest of the Internet, and offered no easy way to actually join it.

    Minitel was more of an alternative internet than an inspiration for the internet's migration toward an HTTP/www-centered network.

  • I don't read it as magical energy created out of nothing, but I do read it as "free" energy that would exist whether this regeneration system is used or not, that would otherwise be lost as heat.

    With or without regenerative braking, the train system is still going to accelerate stopped trains up to operational speed, then slow them down to a stop, at regular intervals throughout the whole train system. Tapping into that existing energy is basically free energy at that point.
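    The "free energy" point can be made concrete with rough arithmetic. Here's a back-of-the-envelope sketch in Python; the train mass, operating speed, and recovery efficiency are illustrative assumptions, not figures from any real system:

    ```python
    # Rough estimate of braking energy recoverable per stop via regeneration.
    # All inputs below are assumed, illustrative values.

    mass_kg = 200_000        # assumed metro train mass: ~200 tonnes
    speed_ms = 60 / 3.6      # assumed operating speed: 60 km/h converted to m/s
    regen_efficiency = 0.6   # assumed fraction of braking energy actually recovered

    # Kinetic energy the train sheds in every stop, regen or not
    kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
    recovered_kwh = kinetic_energy_j * regen_efficiency / 3.6e6  # J -> kWh

    print(f"Energy shed per stop: {kinetic_energy_j / 3.6e6:.1f} kWh")
    print(f"Recovered per stop (at 60% efficiency): {recovered_kwh:.1f} kWh")
    ```

    Under those assumptions each stop sheds roughly 8 kWh that would otherwise become heat in the brakes, which is why capturing even a fraction of it across hundreds of daily stops adds up.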