  • If you don’t support tariffs to bring back manufacturing jobs domestically, how do you think we could make it through a war with our manufacturing partners?

    I express no position here about China or Taiwan, but the false dichotomy presented is between: 1) enforcing trade barriers indiscriminately against every country, territory, and uninhabitable island in the world, without regard for allies or enemies, or 2) diversifying economic dependency away from one particular country.

    The former is rooted in lunacy and harkens back to the mercantilist era, when every country sought to hoard gold at home by exporting more and importing less. The latter is pragmatic and diplomatic, creating new allies (economic and probably military) and is compatible with modern global economic notions like comparative advantage, where some countries are simply better at producing a given product (eg Swiss watches) so that other countries can focus on their own specialization (eg American-educated computer scientists).

    As a specific example, see Mexico, which under NAFTA and USMCA stood to be America's new and rising manufacturing comrade. Mexico has the necessary geographical connectivity and transportation links to the mainland USA, its own diverse economy, relatively cheap labor, timezones and culture that make for easier business dealings than cross-Pacific, and overall was very receptive to the idea of taking a share of the pie from China.

    Long-term thinking would be to commit to this strategic position, thus changing the domestic focus to: 1) replace China with North American suppliers for certain manufactured goods, and 2) continue to foster industries which are "offshore-proof", such as small businesses that simply have to exist locally, or industries whose products remain super-expensive or hazardous to ship (eg lithium ion batteries). Sadly, the USA has not done this.

    It is sheer arrogance to believe that the economic tide for industries of yore (eg plastic goods, combustion motor vehicles, call centers) can be substantially turned around in even a decade, when the transition away from domestic manufacturing took decades to occur. Further egoism is expressed by unilateral tariff decisions that don't pass muster logically or arithmetically.

  • Agreed. Email has its uses -- ubiquity, mostly "Just Works" (tm), most people know how to use it -- and while I might send a symmetrically encrypted PDF along with a plaintext email, I'm more inclined to suggest that my recipients adopt Signal and get all the benefits of e2ee. EFF even has a guide for it: https://ssd.eff.org/module/how-to-use-signal
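
    For the encrypted-PDF route, here's a minimal sketch assuming the pypdf library (the filenames and passphrase are placeholders); the passphrase should go over a separate channel, eg Signal:

    ```python
    # Minimal sketch: password-protect a PDF with pypdf (pip install pypdf).
    # Filenames and passphrase are placeholders; share the passphrase over a
    # separate channel (eg Signal), never in the same email as the PDF.
    from pypdf import PdfReader, PdfWriter

    reader = PdfReader("report.pdf")
    writer = PdfWriter()
    for page in reader.pages:
        writer.add_page(page)

    # AES-256 needs a recent pypdf plus the cryptography package; without the
    # algorithm argument, older versions fall back to weaker RC4.
    writer.encrypt("correct horse battery staple", algorithm="AES-256")

    with open("report-encrypted.pdf", "wb") as f:
        writer.write(f)
    ```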

  • This 100%. It is well-advised to consider what your security/privacy objectives are, since encryption-at-rest is different from guarding against eavesdropping when sending outbound mail. The threat model you choose will define what is or isn't acceptable.

  • I previously looked into doing exactly this, and recall this comment on HN: https://news.ycombinator.com/item?id=31245923

    One could argue the price of smtp2go at $150/yr is a bit steep, but it would also neatly avoid issues with sending outbound mail, since you're paying them to deal with those headaches. For inbound mail, I can't see why any mail operator wouldn't deliver to the server designated by your MX records, though you'll also have to deal with spam and other concerns vis-a-vis self hosting.
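
    For the inbound side, a quick way to see where mail for a domain is currently delivered is an MX lookup; here's a small sketch assuming the dnspython library (the domain is a placeholder):

    ```python
    # Look up a domain's MX records to see where inbound mail gets delivered.
    # Assumes dnspython (pip install dnspython); example.org is a placeholder.
    import dns.resolver

    answers = dns.resolver.resolve("example.org", "MX")
    for rr in sorted(answers, key=lambda r: r.preference):
        print(rr.preference, rr.exchange.to_text())  # lowest preference wins
    ```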

    In a different comment on the same thread, it's noted that VPS operators might already run an SMTP server that you can relay through.

    I wish you good luck in this endeavor!

  • 300 kph is 186 mph, which is well beyond the posted speed limit of any jurisdiction I can think of. For reference, here in California, a conviction for driving over 160 kph (100 mph) carries enhanced penalties, including hefty fines and a possible license suspension. The highest speed limit in California is 113 kph (70 mph).

    In metric units, a triple digit speed (eg 100 kph) is the domain of motorways (aka freeways or expressways). And even arrow-straight motorways have a maximum posted speed limit of some 140 kph. In Germany, the motorway can sometimes have no limit, but the recommended speed -- equivalent to the yellow speed signs in the USA -- for German autobahns is 130 kph, with some speedy cars occasionally doing 200 kph, I've heard.

    For further reference, the fastest speed achieved during an F1 motor race is 372 kph. Also, Japanese bullet trains heading west from Tokyo on the Tokaido Shinkansen route run at 285 kph.

    300 kph on a public road is grossly irresponsible, since even with no one around, the road is not designed for that speed. Compare race tracks with freeways, and it becomes clear that surface quality, drainage, sight lines, clear space, and other requirements for 200+ kph just aren't present on public roads, with the notable exception of very special public roads like the Nürburgring.

  • For timekeeping, you're correct that timezones shouldn't affect anything. But in some parts of law, the local time of a particular place (eg state capital, naval observatory, etc) is what might control whether a deadline has passed or not.

    If we then have to reconcile that with high speed space travel, there's a possibility of ending up in a legal pickle even when the timekeeping aspect might be simple. But now we're well into legal fanfiction, which is my favorite sort, but we don't have any guardrails or ground rules to follow.

  • Up until the astronaut part, I was fully convinced that this is a law school theoretical question for an inheritance class, because that's exactly where the vagaries of "is she my sister?" would also arise.

    Then again, if we include time dilation due to near-lightspeed travel, we then have to deal with oddball inheritance cases, like if your sister dies mid-travel but then you also die. The Uniform Simultaneous Death Act adopted by several US states would only apply if the difference in time-of-death is within 120 hours, but the Act is silent as to which frame of reference will be used, especially if your sister is considered to be traveling "internationally" due to being in space, thus not being in the same US state or time zone as you might be in.

    So maybe the entire question is a valid inheritance case study after all.

  • FYI, some domains can genuinely be acquired for an indefinite period, as the delegation has no expiration date. So long as the domain is kept in good standing (eg two working authoritative nameservers) and doesn't violate the parent domain's policies, it will persist. Granted, few people go through this rather-old process to get such domains, but they do exist. See my earlier comment.
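
    A sketch of checking that good-standing condition, assuming the dnspython library (the domain is a placeholder):

    ```python
    # Check the "good standing" condition above: the domain should have at
    # least two authoritative nameservers that actually answer for it.
    # Assumes dnspython (pip install dnspython); example.com is a placeholder.
    import dns.message
    import dns.query
    import dns.resolver

    DOMAIN = "example.com"

    ns_names = [rr.target.to_text() for rr in dns.resolver.resolve(DOMAIN, "NS")]
    working = 0
    for ns in ns_names:
        ns_ip = dns.resolver.resolve(ns, "A")[0].to_text()
        query = dns.message.make_query(DOMAIN, "SOA")
        try:
            dns.query.udp(query, ns_ip, timeout=3)  # does this NS answer at all?
            working += 1
        except Exception:
            pass

    print(f"{working} of {len(ns_names)} nameservers responded")
    ```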

  • Some charging pads also prop up the phone at an angle, making it easy to read the screen while also not having to hold the phone up. Most phones have their charging port on the bottom, so a phone stand couldn't be used while charging with a cord.

  • This 100%. The other comments addressed the "should I withdraw?" aspect of OP's question, but this comment deals with "should I stop contributing?". The answer to the latter is: no.

    The mantra in investing has always been "buy low, sell high". If the stock market is down, continuing your 401k contributions is doing the "buy low" part.
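
    A toy illustration of that, with made-up prices (not financial advice): a fixed monthly contribution automatically buys more shares when the market dips, pulling your average cost below the average price.

    ```python
    # Toy dollar-cost averaging example with made-up prices: the same $500
    # monthly contribution buys more shares when the market is down.
    contribution = 500.0
    share_prices = [100, 80, 60, 75, 90]  # a dip and partial recovery

    shares = sum(contribution / p for p in share_prices)
    invested = contribution * len(share_prices)

    print(f"shares bought: {shares:.2f}")                       # ~31.81
    print(f"average cost per share: ${invested / shares:.2f}")  # ~$78.60, vs the $81 average price
    ```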

  • A word of caution for anyone cutting out the slot: make sure there aren't other obstructions, like capacitors, ICs, and NVMe drives in the way of where the PCIe card will be.

    The manufacturers that have the slot pre-cut will have already reserved the space, but even then, it's on you to check that there's room for an x16 card if they only reserved space for an x8.

  • stay in a license that still allows Hashicorp / IBM to benefit from community contributions?

    I don't see how this is the case. As Hashicorp explains, they switched from the open-source Mozilla Public License 2.0 (MPL) to the proprietary Business Source License (BSL) in order to apply restrictions upon users of Terraform:

    Organizations providing competitive offerings to HashiCorp will no longer be permitted to use the community edition products free of charge under our BSL license.

    The terms of the MPL and BSL are incompatible, insofar that Hashicorp cannot unilaterally relicense MPL code from OpenTofu into BSL code in Terraform. But Hashicorp could still use/incorporate OpenTofu MPL code into Terraform, provided that they honor the rest of the obligations of the MPL.

    This is exactly the same situation as what Hashicorp was obliged to do before the licensing kerfuffle, so it cuts against Hashicorp's objective: why continue developing legacy Terraform if OpenTofu is going to provide continuity? Perhaps they only intend to develop new, exclusive features that build upon the common legacy code, but users would now retain an option to reject those pricy add-ons and just stick with the free, open-source functionality from OpenTofu.

    It seems to me less about giving the finger to Hashicorp and more about giving users a choice in the matter. Without OpenTofu, the userbase is forced into the BSL terms of Terraform, where Hashicorp could unilaterally prohibit any production use by yet another license change. That's no way to live or work, with such a threat hanging overhead. OpenTofu lifts that threat by providing competition, and so maybe does kinda throw the finger at Hashicorp anyway.

    On the flip side, precisely because MPL code cannot be unilaterally relicensed to BSL, if OpenTofu starts to gain new features that Terraform doesn't have, Hashicorp can incorporate those features but they won't be unique. Why would a paying customer give money to Hashicorp for something that OpenTofu provides for free? The ecosystem of features cuts both ways.

    Finally, it gives Hashicorp an out: if they acquiesce in future, their BSL code can be unilaterally relicensed as MPL once more, thus allowing code sharing with OpenTofu. Had OpenTofu picked a different license, this could have been much harder. But as described in the OpenTofu manifesto, continuity was the goal.

  • I can understand the pessimism in some of the answers given so far, especially with regards to the poor state of American public transit. But ending a discussion with "they guess" is unsatisfactory to me, and doesn't get to the meat of the question, which I understand to be: what processes might be used to identify candidate bus stop locations.

    And while it does often look like stops are placed by throwing darts at a map, there's at least some order and method to it. So that's what I'll try to describe, at least from the perspective of a random citizen in California who has attended open houses for my town's recently-revamped bus network.

    In a lot of ways, planning bus networks is akin to other engineering problems, in that there's almost never a "clean slate" to start with. It's not like Cities: Skylines, where the town/city is built out by a single person, and even master-planned developments can't predict what human traffic patterns will be in two or three decades. Instead, planning is done with regard to: what infrastructure already exists, where people already go, and what needs aren't presently being met by transit.

    Those are the big-picture factors, so we'll start with existing infrastructure. Infra is expensive and hard to retrofit. We're talking about the vehicle fleet, dedicated bus lanes, bus bulbs or curb extensions, overhead wires for trolleybuses, bus shelters, full-on BRT stops, and even the sidewalk leading up to a bus stop. If all these things need to be built out for a bus network, then that gets expensive. Instead, municipalities with some modicum of foresight will attach provisos to adjacent developments so that these things can be built at the same time in anticipation, or at least reserve the land or right-of-way for future construction. For this reason, many suburbs in the western USA will have a bulb-out for a bus to stop, even if there are no buses yet.

    A bus network will try to utilize these pieces of infrastructure when they make sense. Sometimes they don't make total sense, but the alternative of building it right-sized could be an outlandish expense. For example, many towns have a central bus depot in the middle of downtown. But if suburban sprawl means that the "center of population" has moved to somewhere else, then perhaps a second bus depot elsewhere is warranted to make bus-to-bus connections. But two depots cost more to operate than one, and that money could be used to run more frequent buses instead, if they already have those vehicles and drivers. Tradeoffs, tradeoffs.

    Also to consider is that buses tend to run on existing streets and roads. That alone will constrain which way the bus routes can operate, especially if there are one-way streets involved. In this case, circular loops can make sense, although patrons would need to know that they'll depart at one stop and return at another. Sometimes bus-only routes and bridges are built, ideally crossing orthogonal to the street grid to gain an edge over automobile traffic. In the worst case, buses get caught up in the same traffic as all the other automobiles, which sadly is the norm in America.

    I can only briefly speak to the inter-stop spacing, but it's broadly a function of the service frequency desired, end-to-end speed, and how distributed the riders are. A commuter bus from a suburb into the core city might have lots of stops in the suburb and in the city, but zero stops in between, since the goal is to pick people up around the suburb and take them somewhere into town. For a local bus in town, the goal is to be faster than walking, so with 15 minute frequencies, stops have to be no closer than 400-800 meters or so, or else people will just walk; but space them too far apart and it's a challenge for wheelchair users who need the bus. Whereas a bus service meant purely to connect two bus depots might still prefer a few stops in between where they make sense, like at a mall, but maybe not if it can travel exclusively on a freeway or in dedicated bus lanes. So many things to consider.
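
    To put rough numbers on the "faster than walking" tradeoff, here's a toy comparison (every parameter is made up for illustration, not taken from any agency's guidelines):

    ```python
    # Toy door-to-door comparison: walking vs. a local bus, for short in-town
    # trips. All parameters are illustrative assumptions, not real guidelines.
    WALK_SPEED = 1.4        # m/s, typical walking pace
    BUS_SPEED = 6.0         # m/s (~22 kph average, including stops)
    AVG_WAIT = 15 * 60 / 2  # s, expected wait with 15-minute headways
    STOP_SPACING = 600.0    # m between stops

    def walk_time(distance_m):
        return distance_m / WALK_SPEED

    def bus_time(distance_m):
        # walk roughly half a stop spacing at each end, wait, then ride
        return STOP_SPACING / WALK_SPEED + AVG_WAIT + distance_m / BUS_SPEED

    for km in (1, 2, 5):
        d = km * 1000
        print(f"{km} km trip: walk {walk_time(d)/60:.0f} min, bus {bus_time(d)/60:.0f} min")
    ```

    With these made-up numbers, walking wins the 1 km trip and the bus only pulls ahead around 2 km, which is exactly why stop spacing and headway matter so much.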

    As for existing human traffic patterns, the big innovation of the past decade or so has been to look at anonymized phone location data. Now, I'm glossing over the privacy concern of using people's coarse location data, but the large mobile carriers in the USA have always had this info, and this is a scenario where surveying people about which places they commute or travel to is imprecise, so using data collected in the background is fairly reliable. What this should hopefully show is where the "traffic centers" are (eg malls, regional parks, major employers, transit stations), how people are currently getting there (identifying travel mode based on speed, route, and time of day), and the intensity of such travel relative to everything else (eg morning/evening rush hour, game days).

    I mentioned surveys earlier; while they're imprecise for capturing all the places people go, they're quite helpful for identifying the hurdles that current riders face. This is the third factor: identifying unmet needs. As in, difficulties with paying the fare, transfers that are too tight, or confusing bus depot layouts. But asking existing riders will not yield a recipe for growing ridership with new riders, people who won't even consider riding the existing service, if one exists at all. Then there's the matter of planning for ridership in the future, as a form of induced demand: a housing development that is built adjacent to an active bus line is more likely to create habitual riders from day 1.

    As an aside, here in California, transit operators are obliged to undergo regular analysis of how the service can be improved, using a procedure called Unmet Transit Needs. The reason for this procedure is that some state funds are earmarked for transit only, while others are marked for transit first and if no unmet needs exist, then those funds can be applied to general transport needs, often funding road maintenance.

    This process is, IMO, horrifically abused to funnel more money towards road maintenance, because the bar for what constitutes an Unmet Transit Need includes a proviso that if the need is too financially burdensome to meet, they can just not do it. That's about as wishy-washy as it gets, and that's before we consider the other proviso that requires an unmet need to also satisfy an expectation of a certain minimum ridership... which is near impossible to predict in advance for a new bus route or service. As a result, transit operators -- under pressure by road engineers to spend less -- can basically select whichever outside consultant will give them the "this unmet transit need is unreasonable" stamp of disapproval that they want. /rant

    But I digress. A sensible bus route moves lots of people from places they're already at to places they want to go, ideally directly or maybe through a connection. The service needs to be reliable even if the road isn't, quick when it can be, and priced correctly to keep the lights on but maybe reduced to spur new ridership. To then build out a network of interlinking bus routes is even harder, as the network effect means people have more choices on where to go, but this adds pressure on wayfinding and fare structures. And even more involved is interconnecting a bus network to a train/tram/LRT system or an adjacent town's bus network.

    When done properly, bus routing is not at all trivial for planners, and that's before citizens write in with their complaints and conservatives keep trying to cut funding.

  • have bandwidth that is some % of carrier frequency,

    In my limited ham radio experience, I've not seen any antennas or amplifiers which specify their bandwidth as a percentage of "carrier frequency", and I think that term wouldn't make any sense for antennas and (analog) amplifiers, since the carrier is a property of the modulation; an antenna doesn't care about modulation, which is why "HDTV antennas" circa the 2000s in the USA were merely a marketing term.

    The only antennas and amplifiers I've seen have given their bandwidth as fixed ranges, often accompanied with a plot of the varying gain/output across that range.

    going up in frequency makes bandwidth bigger

    Yes, but also no. If a 200 kHz FM commercial radio station's signal were shifted from its customary 88-108 MHz band up to the Terahertz range of the electromagnetic spectrum (where infrared and visible light are), the bandwidth would still remain 200 kHz. Indeed, this shifting is actually done, albeit for cable television, where those signals are modulated onto fibre optic cables.

    What is definitely true is that way up in the electromagnetic spectrum, there is simply more Hertz to utilize. If we include all radio/microwave bands, that would be the approximate frequencies from 30 kHz to 300 GHz. So basically 300 GHz of bandwidth. But for C band fibre optic cable, the usable band is from 1530-1565 nm, which translates to roughly 191-196 THz, with about 4.4 THz of bandwidth. That's more than fourteen times larger! So much room for activities!
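
    The conversion from a wavelength range to a frequency bandwidth is just f = c/λ at each band edge; a quick sketch:

    ```python
    # Convert the C band's wavelength range to a frequency bandwidth, f = c / wavelength.
    C = 299_792_458.0                          # speed of light, m/s

    lam_short, lam_long = 1530e-9, 1565e-9     # band edges, meters
    f_hi = C / lam_short                       # shorter wavelength -> higher frequency
    f_lo = C / lam_long

    print(f"{f_lo/1e12:.1f}-{f_hi/1e12:.1f} THz")      # ~191.6-195.9 THz
    print(f"bandwidth: {(f_hi - f_lo)/1e12:.2f} THz")  # ~4.38 THz
    ```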

    For less industrial use-cases, we can look to 60 GHz technology, which is used for so-called "Wireless HDMI" devices, because the 7 GHz bandwidth of the 60 GHz band enables huge data rates.

    To actually compare the modulation of different technologies irrespective of their radio band, we often look to spectral efficiency, which is how much data (bits/sec) can be sent over a given bandwidth (in Hz). Higher bits/sec/Hz means more efficient use of the radio waves, up to the Shannon-Hartley theoretical limit.
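
    For the curious, that limit is C = B × log2(1 + SNR); here's a toy calculation with purely illustrative numbers:

    ```python
    # Shannon-Hartley capacity: C = B * log2(1 + SNR).
    # The 20 MHz channel and SNR values are illustrative, not a real link budget.
    import math

    def capacity_bps(bandwidth_hz, snr_db):
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    for snr_db in (5, 20, 35):
        c = capacity_bps(20e6, snr_db)
        print(f"SNR {snr_db:>2} dB: {c/1e6:5.1f} Mbit/s ({c/20e6:.1f} bits/s/Hz)")
    ```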

    getting higher % of bandwidth requires more sophisticated, more expensive, heavier designs

    Again, yes but also no. If a receiver need only receive a narrow band, then the most straightforward design is to shift the operating frequency down to something more manageable. This is the basis of superheterodyne FM radio receivers, from the era when a few MHz were considered to be very fast waves.

    We can and do have examples of this design for higher microwave frequency operation, such as shifting broadcast satellite signals down to normal television bands, suitable for reusing conventional TV coax, which can only carry signals in the 0-2 GHz band at best.
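
    To see the mixing trick numerically, here's a toy numpy sketch (frequencies picked purely for illustration): multiplying the incoming signal by a local oscillator produces sum and difference frequencies, and the receiver keeps the much lower difference.

    ```python
    # Toy heterodyne demonstration: mixing a 100 MHz "RF" tone with a 90 MHz
    # local oscillator yields 10 MHz (difference) and 190 MHz (sum); a real
    # receiver low-pass filters to keep only the 10 MHz intermediate frequency.
    import numpy as np

    fs = 1e9                            # sample rate, 1 GHz (toy value)
    t = np.arange(0, 1e-5, 1 / fs)      # 10 microseconds of signal

    rf = np.cos(2 * np.pi * 100e6 * t)  # incoming "RF" carrier
    lo = np.cos(2 * np.pi * 90e6 * t)   # local oscillator

    mixed = rf * lo                     # product contains 10 MHz and 190 MHz terms

    spectrum = np.abs(np.fft.rfft(mixed))
    freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
    peaks = freqs[spectrum > 0.25 * spectrum.max()]
    print(np.unique(np.round(peaks / 1e6)))  # -> [ 10. 190.]
    ```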

    The real challenge is when a massive chunk of bandwidth is of interest, then careful analog design is required. Well, maybe only for precision work. Software defined radio (SDR) is one realm that needs the analog firehose, since "tuning" into a specific band or transmission is done later in software. A cheap RTL-SDR can view a 2.4 MHz slice of bandwidth, which is suitable for plenty of things except broadcast TV, which needs 5-6 MHz.

    LoRa is much slower, caused by narrowed bandwidth but also because it's more noise-resistant

    I feel like this states the cause-and-effect in the wrong order. The designers of LoRa knew they wanted a narrow-band, low-symbol rate air interface, in order to be long range, and thus were prepared to trade away a faster throughput to achieve that objective. I won't say that slowness is a "feature" of LoRa, but given the same objectives and the limitations that this universe imposes, no one has produced a competitor with blisteringly fast data rate. So slowness is simply expected under these circumstances; it's not a "bug" that can be fixed.

    In the final edit of my original comment, I added this:

    Radio engineering, like all other disciplines of engineering, centers upon balancing competing requirements and limitations in elegant ways. Radio range is the product of intensely optimizing all factors for the desired objective.

  • Also, what if things that require very little data transmission used something lower than 2.4Ghz for longer range? (1Ghz or something?)

    No one seemed to touch upon this part, so I'll chime in. The range and throughput of a transmission depend on a lot of factors, but the most prominent are: peak and average output power, modulation (the pattern of radio waves sent) and frequency, background noise, and bandwidth (in Hz; how much spectrum width the transmission will occupy), in no particular order.

    If all else were equal, changing the frequency to a lower band wouldn't impact range or throughput. But that's hardly ever the case, since reducing the frequency imposes limitations to the usable modulations, which means trying to send the same payload either takes longer or uses more spectral bandwidth. Those two approaches have the side-effect that slower transmissions are more easily recovered from farther away, and using more bandwidth means partial interference from noise has a lesser impact, as well as lower risk of interception. So in practice, a lower frequency could improve range, but the other factors would have to take up the slack to keep the same throughput.
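
    To put a number on the frequency factor in isolation, here's a toy free-space path loss (Friis) calculation; it assumes fixed-gain antennas in empty space and ignores every other real-world variable:

    ```python
    # Free-space path loss: FSPL(dB) = 20*log10(4 * pi * d * f / c).
    # All else equal, a link at a lower carrier frequency loses less over the
    # same distance (a fixed-gain antenna has a larger aperture at lower f).
    import math

    def fspl_db(distance_m, freq_hz):
        c = 299_792_458.0
        return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

    for f in (915e6, 2.4e9):
        print(f"{f/1e6:.0f} MHz over 10 km: {fspl_db(10_000, f):.1f} dB")
        # -> ~111.7 dB at 915 MHz vs ~120.1 dB at 2.4 GHz
    ```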

    Indeed, actual radio systems manipulate some or all of those factors when longer distance reception is the goal. Some systems are clever with their modulation, such as FT8 used by amateur radio operators, in order to use low-power transmitters in noisy radio bands. On the flip side, sometimes raw power can overcome all obstacles. Or maybe just send very infrequent, impeccably narrow messages, using an atomic clock for frequency accuracy.

    To answer the question concretely though, there are LoRa devices which prefer to use the ISM band centered on 915 MHz in the Americas, as the objective is indeed long range (a few hundred km) and small payload (maybe <100 bytes), and that means the comparatively wider (and noisier) 2.4 GHz band is unneeded and unwanted. But this is just one example, and LoRa has many implementations that change the base parameters. Like how MeshCore and Meshtastic might use the same physical radios, but the former implements actual mesh routing, while the latter floods to all nodes (a bad thing).

    But some systems like WiFi or GSM can be tuned for longer range while still using their customary frequencies, by turning those other aforementioned knobs. Custom networks could indeed be dedicated to only sending very small amounts of data, like for telemetry (see SCADA). That said, GSM does have a hard cap of 35 km, for reasons having to do with how it handles multiple devices at once.
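
    That 35 km figure falls out of GSM's timing advance mechanism, which is quantized in bit periods and capped at 63 steps; a back-of-the-envelope sketch:

    ```python
    # Back-of-the-envelope for GSM's ~35 km range cap: timing advance is
    # quantized in GSM bit periods (48/13 microseconds) and capped at 63 steps,
    # and each step must cover the round trip, hence the divide by 2.
    C = 299_792_458.0        # speed of light, m/s
    bit_period = 48e-6 / 13  # GSM bit duration, ~3.69 microseconds
    max_steps = 63           # the timing advance field tops out at 63

    step_m = C * bit_period / 2                              # one-way distance per step, ~554 m
    print(f"max range: {max_steps * step_m / 1000:.1f} km")  # ~34.9 km
    ```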

    Radio engineering, like all other disciplines of engineering, centers upon balancing competing requirements and limitations in elegant ways. Radio range is the product of intensely optimizing all factors for the desired objective.