ChatGPT currently knows the text of the Harry Potter novels, but it does not recite them when asked to do so.
I tried that several weeks ago while discussing some details of the Harry Potter world with ChatGPT, and it was able to directly quote several passages to me to support its points (we were talking about house elf magic and I asked it to quote a paragraph). I checked against a dead-tree copy of the book and it had exactly reproduced the paragraph as published.
This may have changed with their updates since then, and it may not be able to quote passages reliably, but it is (or was) able to do so on a couple of occasions.
a couple years ago I asked an ER doctor to just let me die
Did they let you die?
C# is my primary language, so I'd certainly recommend it. It can be a little daunting to get into because it is a large ecosystem of tools, so you might want to watch some videos and keep things simple for a while.
For work I mostly use it for web-site APIs; that might be a good place to start if you're familiar with JS/TS front-end work. From there you might want to try Razor or Blazor for handling web UI work in C#. I'm not very experienced with that aspect of it, but it's mostly been a positive experience (TBH I kind of prefer React, but I'd need to spend more time on the Razor/Blazor side to have a strong opinion).
The desktop development side in C# is kind of a mess at the moment. Maybe stick with web until you're feeling pretty comfortable with the language.
My biggest problem is figuring out what I want to do with any coding skills
Maybe some dumb little games? If you aren't interested in 3D gaming you can do 2D platformers, top-down Rogue-likes, or Zork-style interactive fiction (text) games (from scratch instead of with a Z-Machine).
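A Zork-style game really is doable from scratch in an afternoon; as a rough sketch of the core loop (the rooms and commands here are invented for illustration, not from any real game):

```python
# Minimal text-adventure skeleton: rooms are a dict, the game state is
# just the current room, and the "parser" matches one-word directions.
rooms = {
    "cellar": {"desc": "A damp cellar. Stairs lead up.", "up": "kitchen"},
    "kitchen": {"desc": "A dusty kitchen. Stairs lead down.", "down": "cellar"},
}

def move(state, command):
    """Return the new room if the command is a valid exit, else stay put."""
    room = rooms[state]
    if command in room:
        return room[command]
    print("You can't go that way.")
    return state

def play(commands, start="cellar"):
    """Run a scripted list of commands (a stand-in for an input() loop)."""
    state = start
    for cmd in commands:
        print(rooms[state]["desc"])
        state = move(state, cmd)
    return state
```

From there it's mostly adding data: items, a two-word verb-noun parser, and win conditions.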
As a self-taught developer, I found it a lot more useful to just go code stuff; when I hit something that seemed hard or ugly, I could go look for solutions to that kind of problem, which was much more interesting than just reading about various techniques. (Well, I was learning well before normal people had internet, so mostly I invented some shit to fix my own problems, but it got easier/faster after the internet became available.)
How did you learn to talk to WolframAlpha?
I want to like WA, but the natural language interface is so opaque that I usually give up before I can get any non-trivial calculation out of it.
"Hey, how do I get to the Plex server?"
"Open your browser and go to Hell"
"..."
I just do it, usually while thinking about other stuff.
So, for me if I start thinking about other stuff very much while doing something like brushing my teeth, I stop doing the something. Like, literally I'll snap out of a chain of thought that I mostly don't remember ('why am I thinking about harvesting cranberries? I need to get to work!') and I'll realize I've just spent a minute or two standing still in the bathroom holding my toothbrush and toothpaste staring at nothing in particular.
If I start thinking about something mid-process I'll end up missing part. The entire process requires attention. I have to mentally label 12 areas, A-L, then count 10 seconds for each area, every single time, A 0123456789, B0123456789, C0123456789, etc.
What about playing musical instruments like learning guitar? Wouldn’t that constitute a habit that becomes automatic after enough practice?
Not for me. I've been trying to learn an instrument for several years. I can make it maybe a few days at a time before I forget to keep doing it. If I'm very persistent I can get in enough days of erratic practice to learn a short song.
Playing a short sequence of notes is kinda-sorta automatic, in that I don't have to think about all the details of positioning my right hand (it's a bit like touch typing on a keyboard, where I don't have to think about the individual letters), but that's about the maximum level of 'automatic' I've been able to achieve. Finger placement for every chord is completely conscious; every key change, placement of emphasis, etc. are all deliberate, conscious actions.
While it's kind of satisfying to make it through a piece, it's not relaxing in any way. I keep hoping that someday I'll get good enough at it that I can make it through at least one song without needing to think about each step, but it's slow-going.
Previously my server was just a Debian box where I had a 'docker' directory with a bunch of .sh files containing 'docker run' commands (and a couple of docker-compose files for services that have closely related containers). That works really well; it's easy to understand and manage. I had nginx running natively to expose stuff as necessary.
Recently I decided to try TrueNAS Scale (I wanted more reliable storage for my media library, which is large enough to be annoying to replace when a simple drive fails), and I'm still trying to figure it out. It's kind of a pain in the ass for running containers since the documentation is garbage. The web interface is kind of nice (other than constantly logging me out), but the learning curve for charts and exposing services has been tough, and it seems that ZFS is just a bad choice for Docker.
I was attracted to the idea of being able to run my services on my NAS server as one appliance, but TrueNAS Scale is feeling way too complicated for home scale (and way too primitive for commercial; I'm not entirely sure what market they're aiming for), so I'm considering dumping it and setting up two servers: one for NAS and one for running my containers and VMs.
For fun I asked ChatGPT about that progressive learning approach, and it seems to like the idea.
I wish I had more time to undertake some experiments in model training, this seems like it would be a really fun research direction.
Sorry for the 'wall of AI text':
The idea you're describing seems to be a form of curriculum learning. In curriculum learning, models are trained in a progressive manner, starting with simple concepts (or in this case, simpler text data) and gradually moving to more complex ones. This approach is motivated by the way human learning often works, where we learn easier topics before moving on to harder ones.
The strategy you've suggested, where the model itself determines the complexity or understandability of the next round of training inputs, is interesting. While the traditional approach to curriculum learning is to manually design the learning progression based on human knowledge and intuition, the approach you're suggesting is more dynamic and autonomous, letting the model guide its own learning process.
As of my last update in September 2021, I can't confirm any specific projects or papers that have employed exactly this strategy in the domain of large language models. However, there are some related works in the field. For example, some reinforcement learning and computer vision projects have used a similar idea where models adaptively choose their next training samples based on their current understanding. This has been referred to as active learning, and is a promising avenue for improving the efficiency of model training.
However, there would be several technical challenges in applying this approach to large language models. For one, there would be the challenge of how to assess the "understandability" of a text sample in a reliable and useful way. This would likely require significant work to define and might also have to involve some form of reinforcement learning or other feedback mechanisms. Nonetheless, it's a fascinating idea and could potentially be an interesting direction for future research in machine learning.
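The static form of curriculum learning described above can be sketched in a few lines. Sentence length stands in (crudely) for a real difficulty measure, and the corpus is invented for illustration:

```python
def difficulty(sample):
    # Crude proxy: longer sentences are "harder". Real systems use things
    # like rare-word frequency, parse depth, or model perplexity instead.
    return len(sample.split())

def curriculum_stages(corpus, n_stages=3):
    """Split a corpus into easy-to-hard stages for progressive training."""
    ranked = sorted(corpus, key=difficulty)
    stage_size = -(-len(ranked) // n_stages)  # ceiling division
    return [ranked[i:i + stage_size] for i in range(0, len(ranked), stage_size)]

corpus = [
    "the cat sat",
    "dogs bark",
    "the quick brown fox jumps over the lazy dog",
    "a minimal example of a longer and somewhat more complex sentence",
    "hello world",
]
stages = curriculum_stages(corpus)
# A trainer would then run: for stage in stages: train(model, stage)
```

The whole research question is in that `difficulty` function; everything else is bookkeeping.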
I suspect future models are going to have to put some more focus on learning using techniques more like what humans use, and on cognition.
Like, compared to a human these language models need very large quantities of text input. When humans are first learning language they get lots of visual input along with language input, and can test their understanding with trial-and-error feedback from other intelligent actors. I wonder if perhaps those factors greatly increase the rate at which understanding develops.
Also, humans tend to cogitate on inputs while ingesting them during learning, so information that disagrees with current understanding is less likely to change that understanding. (There's a whole 'how to change your mind' skill people need for that, but if we're training a model on curated data it's probably less important early in training.)
I don't know details of how model training works, but it would be interesting to know if anyone is using a progressive learning technique where the model that is being trained is used to judge new training data before it is used as a training input to update the model's weights. That would be kind of like how children learn by starting with very simple words and syntax and building up conceptual understanding gradually. I'd assume so, since it's an obvious idea, but I haven't heard about it.
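For what it's worth, the model-judges-its-own-data idea can be shown with a toy stand-in, where the "model" is just a vocabulary set and "understandability" is the fraction of known words. Everything here (names, threshold, pass count) is invented for illustration, not a real training loop:

```python
# Toy self-paced selection: only samples the current "model" mostly
# understands are admitted for "training", which here just absorbs
# their words into the vocabulary.
def understandability(vocab, sample):
    words = sample.split()
    return sum(w in vocab for w in words) / len(words)

def progressive_train(samples, seed_vocab, threshold=0.5, passes=3):
    vocab = set(seed_vocab)
    for _ in range(passes):                 # revisit previously rejected samples
        for s in samples:
            if understandability(vocab, s) >= threshold:
                vocab |= set(s.split())     # "update the model"
    return vocab
```

Samples just out of reach get picked up on later passes as the vocabulary grows, while genuinely alien samples never make the cut, which is the gradual-buildup behavior described above.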
they have a hard time seeing water
When my cat was a tiny kitten learning to drink from a bowl he would always dip his face in too far and get a nose full of water, causing him to sneeze. He learned to always dip his paw in the water first to see where the surface is before putting his face in. He still does it every single time. It's pretty cute.
my theory is that they might also want something else.
I keep about 4 different kinds of food around for our cats. I have three auto-feeders in different parts of the house, each with a different food and dispense schedule. That way the cats have some choice about what they're eating, and there's a bit of environmental enrichment since they can eat in different places.
I also hide little piles of cat treats in various locations around the house. Keeps the cats curious and exploring to see if they can find something good to munch.
cat didn’t like stale food
That's a thing too. Dry food in bags has more odor than food that's been sitting out for a while. The fats in food left out in the open air will eventually go rancid too, but that takes a long time and shouldn't happen if you're cleaning the bowls periodically.
Plus he's just fun to listen to!
all of humanity’s radio communications have traveled about 200 light years from Earth
Also interesting is that because the energy of those signals is spreading out as they move away from their point of origin they become less detectable as they travel. Most signals would fall below practical detection limits before making it halfway to the nearest star. At the extreme, the Arecibo Message, transmitted with a ridiculous ERP, will be detectable to reasonably sized receivers for tens of thousands of light years, assuming they are located along the path of the beam.
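The falloff is just the inverse-square law: received flux is EIRP / (4πd²). Putting round, illustrative numbers on it (a 1 MW transmitter and an arbitrary 1e-26 W/m² detection threshold, not real receiver specs) shows how quickly an ordinary signal sinks below detectability:

```python
import math

# Metres in one light year
LY = 9.461e15

def flux_w_per_m2(eirp_watts, distance_m):
    """Power flux density at distance d: the signal's energy is spread
    over a sphere of area 4*pi*d**2."""
    return eirp_watts / (4 * math.pi * distance_m**2)

# A 1 MW EIRP broadcast, measured halfway to the nearest star (~2 ly):
halfway_flux = flux_w_per_m2(1e6, 2 * LY)
# This lands around 1e-28 W/m^2, well under the illustrative
# 1e-26 W/m^2 threshold, so the signal is effectively undetectable.
```

Doubling the distance always quarters the flux, which is why a tightly beamed, extremely high-EIRP transmission like the Arecibo Message carries so much further than broadcast leakage.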
Tom Paris and the Cochrane have entered the chat
Does Singer explore how the limits of one's knowledge about the impacts of their actions might play into the decisions?
Like, I could send $5 to some overseas charity, but I don't have a good way to know how that money is being used. Conversely, I could use it locally myself to reduce suffering in a way I can verify.
It seems to me that morally I should prioritize actions I know will reduce suffering over actions that may reduce suffering but that I cannot verify. Verification is important because immoral actors exist, so I can't just assume that moral actions that I delegate to other actors will be carried out. Since it's easier to have good knowledge about local actions (in particular those I execute personally), this would tend to favor local actions.
The grasslands are underappreciated in my view (good for me, fewer people out there using them!)
I have spent a lot of time out in the Bessey Ranger District near Halsey, Nebraska. It's a nice mix of hand-planted forest and prairie in the sandhills area. Great place to go for a variety of camping options, and really excellent for stargazing because it's one of the darkest places in the US. I've spent many deep-black nights lying on top of my vehicle parked on top of a tall hill, listening to the breeze in the grass while watching meteor showers and satellites (saw the ISS go over once, amazing sight!).
The landscape ranges from impenetrably dense forest to wide-open rolling native prairie. You can drive miles of trails (best in a moderate clearance 4x4, and informed by the Motor Vehicle Use Map loaded into the Avenza mobile app) to see seemingly endless interesting areas, and in most places you can pull off the side of the road and set up camp. We are getting into a good time of year to see big puffball mushrooms out there (edible if you like mushrooms).
The other national forest areas in Nebraska are similar, though the forest at Bessey is unique, being the largest hand-planted forest in the US. Also there was a large fire there (last year I think), so it's a good time to see the recovery process and compare the recently burned areas to the unburned.
Yep, I'm hoping they'll do a slightly wider tri-fold model at some point. I'd like a wider front screen, around Galaxy S22 Ultra size, that then unfolds twice into a tablet-sized screen roughly 3x as large.
Not that that would help with the already astronomical price-tag of the Z-Fold.