
Posts: 11 · Comments: 458 · Joined: 2 yr. ago

  • My understanding is that there had been an ongoing concern on /r/piracy that they would get shut down at some point, that this had been a concern in the past, and so the other stuff like the API restrictions and the rest of the spez drama was kind of just adding to the big factor pushing people away -- that the community could vanish at any time.

    The lead mod on /r/piracy also set up a dedicated instance -- there was definite commitment -- made it clear that he was making the move, and was demodded on /r/piracy, so there were factors creating more inertia.

    Those are all factors that did not generally exist for other communities.

  • I think that the dev and some volunteers at kbin.social have done a lot of work on scaling up that instance, but even so, kbin.social will probably hit hard-to-fix scaling issues at some point. It's also a big instance, like lemmy.world.

    I'd been looking at another (presently small) US-based one-person kbin instance, was planning to hop over if it stayed up to help spread load, but it looks like it went down.

    I suppose that once a number of instances have established a track record of staying up for N months, it'll be easier to figure out which instances are good alternatives and likely to be around for the long haul, and which ones will vanish in the wind.

  • What’s been your experience with youtube recommendations?

    I've never had a YouTube account, so YouTube doesn't have any persistent data on me as an individual to do recommendations unless it can infer who I am from other data.

    They seem to do a decent job of recommending the next video in a series done in a playlist by an author, which is really the only utility I get out of suggestions that YouTube gives me (outside of search results, which I suppose are themselves a form of recommendation). I'd think that YouTube could do better by just providing an easy way to get from the video to such a list, but...

  • I don't like the idea of link taxes myself.

    But even setting aside the question of whether link taxes are a good idea, I don't understand why they're making what sounds to me like a dubious antitrust argument. It seems like a simply bizarre angle.

    If the Canadian government wants news aggregators to pay a percentage of income to news companies, I would assume that they can just tax news aggregators -- not per link to a Canadian news source, but for operating in the market at all -- take the money and then subsidize Canadian news sources. It may or may not be a good idea economically, but it seems like it'd be on considerably firmer footing than trying to use antitrust law to bludgeon news aggregators into taking actions that would trigger a link tax by aggregating Canadian news sources.

  • When do we get the next one?

    Well, going off the article:

    and a fourth is expected to begin operations in 2024

  • It's available for me on kbin.social as of this writing, and I subscribed.

    As far as I can tell, what one needs to do on kbin is search for communityname@instance. I don't think that "!" goes in the search string.

    But that search has already been run by now.

    For people on kbin.social, you should be able to see it at:

    https://kbin.social/m/battlestations@lemmy.world

    If you're on another kbin instance, do the above search. I'm still a little fuzzy about the right syntax in a comment to produce a link to perform such an initial search in a cross-lemmy/kbin, cross-instance fashion. I think that it should be:

        !@battlestations@lemmy.world

    Giving the following:

    That generated link does work for me on kbin.social, but I could be wrong about it working elsewhere.

    I really wish that this particular issue would be made clear, as it's important for community discoverability.

    EDIT: Nope, generated link does not work on lemmy.world, so doesn't work on lemmy, at least.

    EDIT2: On fedia.io, another kbin instance, the link also doesn't work, so someone on the instance may need to have already subscribed for the link to be auto-generated. The ability to have a link format that directs to one's local instance in a way that works on all lemmy and kbin instances, regardless of whether anyone has subscribed, would be really nice.

    EDIT3: Trying:

        [battlestations@lemmy.world](/search?q=battlestations%40lemmy.world)

    Yields

    battlestations@lemmy.world

    Which works to generate a search on kbin.social.

    It also appears to work on fedia.io, so this is probably the right way to do a link, at least for kbin users.

    EDIT4: It also appears to work for lemmy instances! This should probably be the new syntax used on newcommunities@lemmy.world to link to a community!
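    The pattern behind that working syntax is just the community name percent-encoded into a relative /search link, so it resolves against whichever instance the reader is on. A small sketch of my own (not an official kbin/lemmy tool) that builds such a link:

```python
from urllib.parse import quote

def community_search_link(community: str, instance: str) -> str:
    """Build a markdown link pointing at the reader's *local* /search page,
    so it works regardless of which instance they are browsing from."""
    name = f"{community}@{instance}"
    return f"[{name}](/search?q={quote(name, safe='')})"

print(community_search_link("battlestations", "lemmy.world"))
# [battlestations@lemmy.world](/search?q=battlestations%40lemmy.world)
```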

  • To my understanding, here they use lasers to create fusion and the 2 megajoules are emitted by the lasers.

    Yes.

    Hence they need waaay more power than is generated to drive their lasers.

    *googles*

    It sounds like the additional power is due to energy exiting the system:

    https://en.wikipedia.org/wiki/Fusion_energy_gain_factor

    Most fusion reactions release at least some of their energy in a form that cannot be captured within the plasma, so a system at Q = 1 will cool without external heating. With typical fuels, self-heating in fusion reactors is not expected to match the external sources until at least Q ≈ 5. If Q increases past this point, increasing self-heating eventually removes the need for external heating. At this point the reaction becomes self-sustaining, a condition called ignition, and is generally regarded as highly desirable for practical reactor designs. Ignition corresponds to infinite Q.

    So it sounds like the additional power requirement effectively means getting from their current Q of 1.54 up to about 5.

    That is also why this research is not actually aiming at power generation, but at fusion weapons.

    I am confident that that is not the case. The US knows how to do fusion weapons and has for decades -- that's what a thermonuclear bomb is, the second stage. That's a much simpler problem than fusion power generation. You don't involve lasers or magnets or other things that you use in fusion power generation if you just want a fusion weapon; you only need to force the material together with a great deal of force for a very brief period of time, and then you're done.

  • I don't think that that's necessarily a huge issue, though, because their aim wasn't to address that.

    That experiment briefly achieved what’s known as fusion ignition by generating 3.15 megajoules of energy output after the laser delivered 2.05 megajoules to the target, the Energy Department said.

    In other words, it produced more energy from fusion than the laser energy used to drive it, the department said.

    A 2020 article, before the current success or the prior one at the same facility:

    https://www.powermag.com/fusion-energy-is-coming-and-maybe-sooner-than-you-think/

    No current device has been able to generate more fusion power than the heating energy required to start the reaction. Scientists measure this assessment with a value known as fusion gain (expressed as the symbol Q), which is the ratio of fusion power to the input power required to maintain the reaction. Q = 1 represents the breakeven point, but because of heat losses, burning plasmas are not reached until about Q = 5. Current tokamaks have achieved around Q = 0.6 with DT reactions. Fusion power plants will need to achieve Q values well above 10 to be economic.

    So if I understand this aright, on the specific thing they're working on, they're at 1.54 as of OP's article, that is (3.15/2.05), up from 0.6 in 2020. The target is somewhere "well above 10" for a commercially-viable fusion power plant. Still other problems to solve, but for the specific thing they're working on, that maybe gives some idea of where they are.
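    To make that arithmetic concrete (a quick sketch of my own; the 3.15 MJ and 2.05 MJ figures come from the article quoted above):

```python
# Fusion gain Q = fusion energy out / laser energy delivered to the target.
laser_in_mj = 2.05    # megajoules delivered by the laser (from the article)
fusion_out_mj = 3.15  # megajoules of fusion energy produced

q = fusion_out_mj / laser_in_mj
print(f"Q = {q:.2f}")  # ~1.54: past breakeven (Q = 1)

# Rough milestones from the quoted articles:
print(q >= 1)    # True: breakeven reached
print(q >= 5)    # False: self-heating / burning-plasma threshold not reached
print(q >= 10)   # False: well short of commercial viability
```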

  • Even if it doesn't, I expect that we'll need fusion power at some point, interstellar travel or something.

    https://en.wikipedia.org/wiki/Interstellar_travel

    Nuclear fusion rockets

    Fusion rocket starships, powered by nuclear fusion reactions, should conceivably be able to reach speeds of the order of 10% of that of light, based on energy considerations alone. In theory, a large number of stages could push a vehicle arbitrarily close to the speed of light.[48] These would "burn" such light element fuels as deuterium, tritium, ³He, ¹¹B, and ⁷Li. Because fusion yields about 0.3–0.9% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases <0.1% of the fuel's mass-energy. The maximum exhaust velocities potentially energetically available are correspondingly higher than for fission, typically 4–10% of the speed of light. However, the most easily achievable fusion reactions release a large fraction of their energy as high-energy neutrons, which are a significant source of energy loss. Thus, although these concepts seem to offer the best (nearest-term) prospects for travel to the nearest stars within a (long) human lifetime, they still involve massive technological and engineering difficulties, which may turn out to be intractable for decades or centuries.
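    As a back-of-envelope check on those numbers (my own non-relativistic sketch, not from the article): if a fraction ε of the fuel's mass-energy is released and all of it went into exhaust kinetic energy, the ideal exhaust velocity would be v ≈ c·√(2ε). The quoted 4–10% of c is lower than this ideal because real designs lose energy, e.g. to those high-energy neutrons.

```python
import math

def ideal_exhaust_velocity_fraction(energy_fraction: float) -> float:
    """Non-relativistic ideal exhaust velocity as a fraction of c, assuming
    all released energy becomes exhaust kinetic energy: v/c = sqrt(2*eps)."""
    return math.sqrt(2 * energy_fraction)

# Fusion releases roughly 0.3-0.9% of fuel mass-energy (from the quote above).
for eps in (0.003, 0.009):
    v = ideal_exhaust_velocity_fraction(eps)
    print(f"eps = {eps:.1%}: ideal v ~ {v:.1%} of c")
# Upper bounds of ~7.7% c and ~13.4% c, bracketing the quoted 4-10% figure.
```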

  • If you subsidize housing you create increased demand for housing, ultimately leading to rent going up for all.

    So, as I said, I'm not an advocate of subsidizing housing out of taxes. I'm just saying that people who are arguing for rent control are arguing for a policy that tends to exacerbate the problem in the long run.

    Subsidizing housing doesn't normally run into that, because it's normally possible to build more housing.

    It is true that that's not always the case, and one very real way in which it can fail is where restrictions have been placed on constructing more housing. If housing prices are high, the first thing I would look at is "why can't developers build more housing, and are there regulatory restrictions preventing them from doing so?"

    It is quite common to place height restrictions on new construction, which prevents developers from building property to meet demand, which drives up housing prices (and rents). In London, for example, there are restrictions that disallow building upwards where a building would block the line of sight between several landmarks; that restricts construction and makes housing prices artificially rise. Getting planning permission may also be a bottleneck.

    I agree with you that that sort of thing is what I would tend to look at first as well: removing restrictions on housing construction is the preferable way to solve a housing problem.

    I remember an article from Edward Glaeser some time back talking about how much restrictions on construction -- he particularly objected to the expanding number of protected older, short buildings -- have led to cost of housing going up.

    How Skyscrapers Can Save the City

    Besides making cities more affordable and architecturally interesting, tall buildings are greener than sprawl, and they foster social capital and creativity. Yet some urban planners and preservationists seem to have a misplaced fear of heights that yields damaging restrictions on how tall a building can be. From New York to Paris to Mumbai, there’s a powerful case for building up, not out.

    By Edward Glaeser

    It looks like it's paywalled, so here:

    https://archive.is/jRQIm

  • My understanding is that rent control backfired pretty spectacularly in the long term.

    Yeah, the basic problem with rent control is that it creates the opposite long-term incentive from what you want.

    Rentable housing is like any other good -- it costs more when the supply is constrained relative to demand, costs less when supply is abundant relative to demand.

    If rent is high, what you want is to see more housing built.

    What rent control does is to cut the return on rents, which makes it less desirable to buy property to rent, which makes it less desirable to build property, which constrains the supply of housing, which exacerbates the original problem of not having as much housing as one would want in the market.

    I would not advocate for it myself, but if someone is a big fan of subsidizing housing for the poor, what they realistically want is to subsidize it out of taxes or something. They don't want to disincentivize the purchase of housing for rent, which is what rent control does.

  • Came close to not being enough, even then.

    https://en.wikipedia.org/wiki/Ky%C5%ABj%C5%8D_incident

    The Kyūjō incident (宮城事件, Kyūjō Jiken) was an attempted military coup d'état in the Empire of Japan at the end of the Second World War. It happened on the night of 14–15 August 1945, just before the announcement of Japan's surrender to the Allies. The coup was attempted by the Staff Office of the Ministry of War of Japan and many from the Imperial Guard to stop the move to surrender.

    The officers murdered Lieutenant General Takeshi Mori of the First Imperial Guards Division and attempted to counterfeit an order to the effect of permitting their occupation of the Tokyo Imperial Palace (Kyūjō). They attempted to place Emperor Hirohito under house arrest, using the 2nd Brigade Imperial Guard Infantry. They failed to persuade the Eastern District Army and the high command of the Imperial Japanese Army to move forward with the action. Due to their failure to convince the remaining army to oust the Imperial House of Japan, they performed ritual suicide. As a result, the communiqué of the intent for a Japanese surrender continued as planned.

    They tried to seize the recording of Emperor Hirohito's surrender speech before it could go out:

    The rebels, led by Hatanaka, spent the next several hours fruitlessly searching for Imperial Household Minister Sōtarō Ishiwata [ja], Lord of the Privy Seal Kōichi Kido, and the recordings of the surrender speech. The two men were hiding in the "bank vault", a large chamber underneath the Imperial Palace.[15][16] The search was made more difficult by a blackout in response to Allied bombings, and by the archaic organization and layout of the Imperial House Ministry. Many of the names of the rooms were unrecognizable to the rebels. The rebels did find the chamberlain Yoshihiro Tokugawa. Although Hatanaka threatened to disembowel him with a samurai sword, Tokugawa lied and told them he did not know where the recordings or men were.[12][17] During their search, the rebels cut nearly all of the telephone wires, severing communications between their prisoners on the palace grounds and the outside world.

  • Because it would have been less-effective, I expect. The targets were chosen because they had military industry and had not yet been destroyed via conventional firebombing, which had already been done at mass scale in other places.

    I think that it's important to understand that the atomic bombs were simply seen as something of a significant multiplier in the existing bombing campaign. One bomber with an atomic bomb could maybe do what a thousand bombers with conventional weapons might...but there were, in fact, thousand-bomber raids happening. That is, cities were already being set afire. The Manhattan Project simply permitted doing so with a significantly-lower resource expenditure.

    EDIT: Also, to be clear, the US fully intended to ramp up to mass production and employment of atomic bombs, dozens a month, once production could be brought up, and would have done so had the surrender not occurred.

    Today, partly because of (significantly more powerful) thermonuclear weapons and because we know that the first two bombs did result in a surrender, the first two atomic bombs maybe look like something of a clear bookend to the war, but that's for us in 2023; in 1946, they would have been another step -- if a significant one -- of World War II's large-scale bombing campaigns, something that had been growing for years.

  • That's one of the risks of kicking off a war.

    Close to the end of the war, Japan -- which had made pretty extensive use of biological weapons against China -- was working on also hitting the US with biological weapons. We were far enough away that it would have been difficult, but where they had been able to employ biologicals, in Asia, they did.

    https://en.wikipedia.org/wiki/Operation_PX

    Operation PX, also known as Operation Cherry Blossoms at Night, was a planned Japanese military attack on civilians in the United States using biological weapons, devised during World War II. The proposal was for Imperial Japanese Navy submarines to launch seaplanes that would deliver weaponized bubonic plague, developed by Unit 731 of the Imperial Japanese Army, to the West Coast of the United States.

    That being said, Japan wasn't even the expected target of the Manhattan Project. Germany would have been, but was defeated via conventional force prior to the project reaching completion.

  • Let me add that I don't think that we are at the end-all-and-be-all of audio. I can hypothetically imagine things that might be done if one threw more money at audio playback that would create a better experience than one can get today.

    • When you hear audio from a given point, some of how you detect the location of the source is due to the effect of the sound hitting your ears, which are of a distinct shape, which means that what actually reaches your inner ear is slightly unique to you as an individual. Currently, if you're listening to a static audio file, it's the same for everyone. One could hypothetically ship hardware which fits inside the ear and can build an audio model of a given individual's ear, then render audio which reflects their specific ears. Then audio could be played back that sounds as if it's actually coming from a given point in space relative to the listener. That's not a drop-in improvement for existing audio, because you'd need 3D location information about the individual sources in the audio. But if audio companies wanted to sell a fancier experience for audio that does have that information, they could leverage that.
    • For decades, audio playback devices have tried to produce visual effects that synchronize with music. They haven't done a phenomenal job, even at basic stuff like beat detection, in my opinion, and so clubs and the like have people who have to rig up DMX512 gear with manually-created annotations to have effects happen at a given point. Audio tracks today don't have a standard format for annotations; if I go buy an album, it doesn't come with something like that. One could produce a standard for it and rig up various gear, like strobes or colored light, or even do this in VR, to stimulate the other senses in time with the audio.
    • I suspect that very few people listen to audio in an environment where they can hear absolutely zero detectable background sound when they don't have their audio playing. You can get decent passive sound cancellation devices, but they only go so far; even good passive sound cancellation headphones are something that one can probably hear fairly quiet sound through. Right now, active sound cancellation devices are being worked on, but that doesn't get one to the point of inaudibility either, and I haven't seen anything that does both good active and passive cancellation, so using active noise cancellation means giving up good passive noise cancellation.

    My point is that I think that there are remaining areas for audio hardware companies to explore to try to create better experiences. I just don't think that playing audio at a sampling frequency hundreds of times above the frequencies that humans can hear is really a fantastic area to be banging on.

  • This feature ensures the NW-ZX707 can transform standard MP3 or PCM audio to the ultra-high frequency 11.2 MHz DSD audio stream.

    That doesn't make a lot of sense to me.

    • Humans can only hear up to about 20kHz, so you're not getting much benefit above about double that.
    • Even assuming that humans could hear frequencies hundreds of times higher, audio isn't generally available sampled at 11.2 MHz. If you're getting music, the recording and audio-engineering work, the microphones, etc., aren't designed to accurately capture data at such high frequencies.
    • Even assuming that none of that were the case, the audio engineer and artists weren't trying to make audio that sounds good at that frequency (which they can't hear either). The music doesn't intrinsically have some aesthetically-pleasing quality that you can extract; they were the ones who added it, and they did that via making judgments using their own senses, which can't hear this.
    • Even aside from that, it doesn't look like this comes with headphones. Whatever you are plugging into this has to induce vibration in the air for it to make it to your ears, and probably does not have a meaningful frequency response at that frequency.
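    To put numbers on the first two points above (my own sketch of the standard Nyquist argument; the 11.2 MHz figure is from the Sony quote, and note that DSD is a 1-bit format, so its sample rate isn't directly comparable to PCM anyway):

```python
HEARING_LIMIT_HZ = 20_000     # rough upper limit of human hearing
CD_RATE_HZ = 44_100           # CD audio sampling rate
DSD_RATE_HZ = 11_200_000      # the 11.2 MHz DSD stream from the quote

# Nyquist: a sampling rate of f can represent frequencies up to f/2.
print(CD_RATE_HZ / 2)                      # 22050.0 -- already above hearing
print(DSD_RATE_HZ / 2 / HEARING_LIMIT_HZ)  # 280.0 -- headroom factor over hearing
```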

    The NW-ZX707 also gets Sony's proprietary digital music processing technologies, including the DSEE Ultimate technology, developed in-house to restore compressed music files to the quality of a CD by interpolating sound algorithms.

    And it makes even less sense if your starting audio has actually thrown out data in frequencies that humans can hear by using lossy compression there, even if we aren't terribly sensitive to those.

  • I have what might be a dumb question: what makes something a gaming keyboard? Is there something that makes it specifically better for gaming? Cool RGB lighting? Simply aesthetic choices?

    I don't know, but I can give you some guesses as to what I'd call at least theoretically legitimate features (though the actual features might be aesthetic):

    • Some keyswitches have linear force; these are apparently considered to be preferable for games. Low resistance for quicker response. Cherry MX Red switches are often billed as for gaming and have these properties.
    • N-key rollover is pretty common (a controller capable of detecting any arbitrary combination of keys held down simultaneously, up to N keys), but it's a legit feature for games that do rely on hitting a lot of keys at once. I recall using a keyboard with a more-limited grid encoder while playing the original Team Fortress and occasionally legitimately hitting the limits on what the keyboard encoder could detect; it was annoying when it happened.
    • Probably not what they're selling, but USB imposes protocol limitations on how many keys can be down at once. Basically, USB sends the whole state of the keyboard in each report. PS/2 does not -- it is edge-triggered, and just tells the computer when a key goes down or up. If an event gets missed, that means that PS/2 can have a key appear to be "stuck" down until it's tapped again. However, USB can only send so many keys in the key state, which bounds how many keys can actually register as down (though IMHO the limit is so high that it doesn't matter), whereas PS/2 can report an unlimited number of keys down. I know that PS/2 keyboards used to sometimes be sold specifically for this characteristic.
    • Possibly macros, though I'd think that it'd be possible to better do that in software on the host machine. Putting it on the keyboard might be a way to defeat anti-cheating systems, I suppose.
    • T-shape arrow keys or possibly a numpad are necessary for some games. Ditto for F-keys. Not all laptops will have these (well, they probably have the F-Keys, but you might have to chord with Fn or something). Some games make use of arrow keys, a lot of older games used to use the numpad, and some relatively-new games make use of F-keys (by convention on Windows, F5 and F9) for quicksaves and quickloads.
    • Maybe lighting that integrates with games.
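    To illustrate the USB point above: in the common HID boot protocol, the keyboard sends its whole state as a fixed 8-byte report (one modifier byte, one reserved byte, six keycode slots), which is where the well-known 6-key limit comes from; NKRO keyboards use larger, non-boot report formats, and PS/2, by contrast, sends individual make/break scancodes with no such per-report bound. A sketch of my own (the usage IDs for W/A/S/D are taken from the HID usage tables):

```python
def boot_report(modifiers: int, keycodes: list[int]) -> bytes:
    """Build an 8-byte USB HID boot-protocol keyboard report:
    [modifiers][reserved][key1]...[key6]. The whole keyboard state is
    sent each time, so at most 6 non-modifier keys fit per report."""
    if len(keycodes) > 6:
        raise ValueError("boot protocol reports at most 6 simultaneous keys")
    keys = keycodes + [0] * (6 - len(keycodes))
    return bytes([modifiers, 0] + keys)

# W, A, S, D held down at once (HID usage IDs 0x1A, 0x04, 0x16, 0x07)
report = boot_report(modifiers=0, keycodes=[0x1A, 0x04, 0x16, 0x07])
print(report.hex())  # 00001a0416070000
```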
  • Maybe they should have called it "Temporary Test Kitchen" to drive the point home with even more of a sledgehammer.

    I suspect that some of it is that the author isn't used to already-installed apps ceasing to run, but it sounds like this, like most AI things, doesn't just run on the local Android device; it leverages off-machine computational capacity, and you can't expect that to be permanent.