I'm ollllddd
GamingChairModel @lemmy.world · Posts 1 · Comments 632 · Joined 2 yr. ago
You don't remember NetZero, do you? A free dial-up ISP that gave free Internet connections under the condition that you give up like 25% of your screen to animated banner ads while you're online.
Or BonziBuddy? Literal spyware.
What about all the MSIE toolbars, some of which had spyware, and many of which had ads?
Or just plain old email spam in the days before more sophisticated filters came out?
C'mon, you're looking at the 1990s through rose-tinted glasses. I'd argue that the typical web user saw more ads in 1998 than in 2008.
No, 1990s internet just hadn't actually fulfilled the full potential of the web.
Video and audio required plugins, most of which were proprietary. Kids today don't realize that before YouTube, the best place to watch trailers for upcoming movies was on Apple's website, as they tried to increase adoption of QuickTime.
Speaking of plugins, much of the web was hidden behind embedded flash elements, and linking to resources was limited. I could view something in my browser, but if I sent the URL to a friend they might still need to navigate within that embedded element to get to whatever it was I was talking about.
And good luck getting plugins if you didn't use the operating system the site expected. Microsoft and Windows were so busy fracturing web standards that most site publishers simply ignored Mac and Linux users (and even ignored any browser other than MSIE).
Search engines were garbage. Yahoo actually provided decent competition to the search engines by paying humans to manually maintain an index and review user submissions on whether to add a new site to it.
People's identities were largely tied to their internet service provider, which might have been a phone company, university, or employer. The publicly available email services, not tied to an ISP, employer, or university, were unreliable and inconvenient. We had to literally disconnect from the internet in order to dial in and fetch mail with Eudora or whatever.
Email servers held mail just long enough for you to download your copy, and then deleted it from the server. If you wanted to read an archived email, you had to go back to the specific computer you downloaded it to, because you couldn't just log into the email service from somewhere else. This was a pain when you used the computer labs at your university (because very few of us had laptops).
User interactions with websites were clunky. Almost everything that a user submitted to a site required an actual HTTP POST transaction and a reload of the entire page. AJAX changed the web significantly in the mid-2000s. The simple act of dragging a Google Maps view around and zooming in and out was revolutionary.
Everything was insecure. Encryption was rare, and even if present was usually quite weak. Security was an afterthought, and lots of people broke their computers downloading or running the wrong thing.
Nope, I think 2005-2015 was the golden age of the internet. Late enough to where the tech started to support easy, democratized use, but early enough that the corporations didn't ruin everything.
I'm not sure that would work. Admins need to manage their instance users, yes, but they also need to look out for the posts and comments in the communities hosted on their instance, and be one level of appeal above the mods of those communities. Including the ability to actually delete content hosted in those communities, or cached media on their own servers, in response to legal obligations.
Yes, it's the exact same practice.
The main difference, though, is that Amazon as a company doesn't rely on this "just walk out" business in a capacity that is relevant to the overall financial situation of the company. So Amazon churns along, while that one insignificant business unit gets quietly shut down.
For this company in this post, though, they don't have a trillion dollar business subsidizing the losses from this AI scheme.
Permanently Deleted
They're actually only about 48% accurate, meaning that they're more often wrong than right, and you'd be 2% more likely to get the right answer by just guessing.
Wait, what are the Bayesian priors? Are we assuming that the baseline is 50% true and 50% false? And what is its error rate in false positives versus false negatives? Because all of these matter for determining, after the fact, how much probability to assign to the test being right or wrong.
Put another way, imagine a stupid device that just says "true" literally every time. If I hook that device up to a person who never lies, then that machine is 100% accurate! If I hook that same device to a person who only lies 5% of the time, it's still 95% accurate.
So what do you mean by 48% accurate? That's not enough information to do anything with.
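For a sense of how much those details matter, here's a quick sketch with completely made-up numbers (the base rate and error rates below are assumptions for illustration, not anything measured):

```typescript
// Made-up numbers, just to show why a single "X% accurate" figure is
// meaningless without the base rate and the false positive / false negative split.

// Assumed prior: 90% of the statements being tested are actually true.
const pTrue = 0.9;

// A detector's behavior is really two numbers, not one:
const pPassGivenTrue = 0.6; // correctly passes a truthful statement
const pPassGivenLie = 0.7;  // incorrectly passes a lie

// Overall "accuracy" blends those with the base rate:
const accuracy = pTrue * pPassGivenTrue + (1 - pTrue) * (1 - pPassGivenLie);
console.log(accuracy); // ≈ 0.57

// The dumb device that always says "true" beats it under the same prior:
const dumbAccuracy = pTrue * 1 + (1 - pTrue) * 0;
console.log(dumbAccuracy); // 0.9
```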
Yeah, from what I remember of what Web 2.0 was, it was services that could be interactive in the browser window, without loading a whole new page each time the user submitted information through HTTP POST. "Ajax" was a hot buzzword among web/tech companies.
Flickr was mind-blowing in that you could edit photo captions and titles without navigating away from the page. Gmail could refresh the inbox without reloading the sidebar. Google Maps was impressive in that you could drag the map around and zoom within the window while it fetched the graphical elements it needed on demand.
Or maybe Web 2.0 included the ability to implement state on top of the stateless HTTP protocol. You could log into a page and it would only show you the new/unread items for you personally, rather than showing literally every visitor the exact same thing for the exact same URL.
Social networking became possible with Web 2.0 technologies, but I wouldn't define Web 2.0 as inherently social. User interaction with a service was the core, and whether the service connected user to user through its design was kinda beside the point.
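Roughly what that shift looked like in code. This is just a loose sketch using the modern fetch API (the original Ajax apps used XMLHttpRequest); the endpoint and element id here are made up for illustration:

```typescript
// Web 1.0 style: a plain <form method="POST"> hands control back to the server,
// which responds with an entire new HTML page.

// Web 2.0 / Ajax style: submit in the background and patch the page in place.
// The endpoint "/api/caption" and element id "caption-display" are hypothetical.
async function saveCaption(photoId: string, caption: string): Promise<void> {
  const response = await fetch(`/api/caption/${photoId}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ caption }),
  });
  const updated = await response.json();

  // Update just the one element; the rest of the page never reloads.
  const el = document.getElementById("caption-display");
  if (el) el.textContent = updated.caption;
}
```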
Getting a smartphone in 2010 was what gave me the confidence to switch to Arch Linux, knowing I could always look things up on the wiki as necessary.
I think my first computer that could boot from USB was the one I bought in 2011, too. For everything before that, I had to physically burn a CD.
My gigabit connection is good enough for my NAS, as the read speeds on the hard drive itself tend to be limited to about a gigabit/s anyway. But I could see some kind of SSD NAS benefiting from a faster LAN connection.
Yeah, you're describing an algorithm that incorporates data about the user's previous likes. I'm saying that any decent user experience will include prioritization and weighting of different posts on a user-by-user basis, so the provider has no choice but to put together a ranking/recommendation algorithm that does more than simply sort all available elements in chronological order.
It's not a movie, but the Fallout series had a great first season, and I'm looking forward to the second.
But also, you're right. Here's an article from 1994 describing VisiCalc as a "Killer App" from the late 1970s that prompted people to buy personal computers.
"Killer rap" was pretty popular in the 90's too.
It's because Al Gore invented the internet, so they are known as Al Gore Rhythms.
Windows is the first thing I can think of that used the word "application" in that way, even back before Windows could be considered an OS (when it still depended on MS-DOS). Even then, the "API" in the Windows API stood for Application Programming Interface.
Here's a Windows 3.1 programming guide from 1992 that freely refers to programs as applications:
Common dialog boxes make it easier for you to develop applications for the Microsoft Windows operating system. A common dialog box is a dialog box that an application displays by calling a single function rather than by creating a dialog box procedure and a resource file containing a dialog box template.
Um excuse me the preferred term is "AI agent" if you want outside investment
Some people actively desire this kind of algorithm because they find it easier to find content they like this way.
Raw chronological order tends to overweight the frequent posters. If you follow someone who posts 10 times a day, and 99 people who post once a week, your feed will be dominated by 1% of the users representing 40% of the posts you see.
One simple algorithm that is almost always better for user experiences is to retrieve the most recent X posts from each of the followed accounts and then sort that by chronological order. Once you're doing that, though, you're probably thinking about ways to optimize the experience in other ways. What should the value of X be? Do you want to hide posts the user has already seen, unless there's been a lot of comment/followup activity? Do you want to prioritize posts in which the user was specifically tagged in a comment? Or the post itself? If so, how much?
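As a rough sketch of that "most recent X per account" approach (the types and field names here are made up for illustration):

```typescript
interface Post {
  id: string;
  authorId: string;
  createdAt: Date;
}

// Take at most `perAccountLimit` recent posts from each followed account,
// then merge and sort newest-first. This caps how much any one prolific
// poster can crowd out everyone else in the feed.
function buildFeed(
  postsByAccount: Map<string, Post[]>, // each account's posts, newest first
  perAccountLimit: number,
): Post[] {
  const candidates: Post[] = [];
  for (const posts of postsByAccount.values()) {
    candidates.push(...posts.slice(0, perAccountLimit));
  }
  return candidates.sort(
    (a, b) => b.createdAt.getTime() - a.createdAt.getTime(),
  );
}
```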
It's a non-trivial problem that would require thoughtful design, even for a zero advertising, zero profit motive service.
Instead, I actively avoided conversations with my peers, particularly because I had nothing in common with them.
Looking at your own social interactions with others, do you now consider yourself to be socially well adjusted? Was the "debating child in a coffee shop" method actually useful at developing the social skills that are useful in adulthood?
I have some doubts.
It's worth pointing out that browser support is a tiny, but important, part of overall ecosystem support.
TIFF is the dominant standard in certain hardware and workflows for digitizing physical documents, and for publishing/printing digital files as physical prints. But most browsers don't bother to support displaying TIFF, because it's not a good format for web use.
Note also that non-backwards-compatible TIFF extensions are usually what cameras capture as "raw" image data and what image development software stores as "digital negatives."
JPEG XL is trying to replace TIFF at the interface between the physical analog world and the digital files we use to represent that image data. I'm watching this space in particular, because the original web generation formats of JPEG, PNG, and GIF (and newer web-oriented formats like webp and avif) aren't trying to do anything with physical sensors, scans, prints, etc.
Meanwhile, JPEG XL is trying to replace JPEG on the web, with much more efficient, much higher quality compression of photographic images. And it's trying to replace PNG for lossless compression.
It's trying to do it all, so watching to see where things get adopted and supported will be interesting. Apple appears to be going all in on JXL, from browser support to file manager previews to actual hardware sensors storing raw image data in JXL. Adobe supports it, too, so we might start to see full JXL workflows from image capture to postprocessing to digital/web publishing to full blown paper/print publishing.
iPhone 16 supports shooting in JPEG XL, and I expect that will be huge for hardware/processing adoption.
It wasn't the buffer itself that drew power. It was the need to physically spin the disc faster in order to read ahead and build up the buffer. So it would draw more power even if you held it perfectly still. And then, if it did actually skip while reading, it would need to seek back to where it was to rebuild the buffer.