To be fair, that's more than two words
got curious, googled it, here's something interesting:
https://news.usask.ca/articles/research/2018/u-of-s-study-hones-in-on-causes-of-ms-disability.php
seems genetic. which makes sense.
apparently that region just got unlucky with its gene pool. though, as the news release states, more research is necessary to be certain.
being caused by environmental chemicals hasn't been definitively ruled out, but it's not looking likely
(btw, bravo on an actually readable press release by a university!)
Meaning what?
meaning the model's training data is what lets you work around or improve on that bias. without the training data, that's (borderline) impossible. so in order to tweak models and develop them further, you need to know exactly what went into the model, or you'll waste a lot of time guessing.
I omitted requirements on freely sharing it as implied, but otherwise?
you disregarded half of what makes an AI model. the half that actually results in a working model. without the training data, you'd only have some code that does...something.
and that something is entirely dependent on the training data!
so it's essential, not optional, for any kind of "open source" AI, because without it you're working with a black box. which is by definition NOT open source.
all models carry bias (see recent gemini headlines for an extreme example), and what exactly those are can range from important to extremely important, depending on the use case!
it's also important if you want to iterate on a model: if you use the same data set and train the model slightly differently, you could end up with entirely different models!
these are just 2 examples, there's many more.
also, you are thinking of LLMs, which is just one kind of model. this legislation applies to all AI models, not just LLMs!
(and your definition of open source is...unique.)
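to make the point concrete, here's a minimal sketch (all names and data made up for illustration): the exact same training code, fed two different datasets, produces models with completely different behavior. publishing only the code tells you almost nothing about what the resulting model does.

```python
# hypothetical illustration: identical code, different training data,
# entirely different models. the datasets and numbers are invented.

def train(data, lr=0.05, epochs=500):
    """fit y = w*x with plain gradient descent. same code every time."""
    w = 0.0
    for _ in range(epochs):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# dataset A follows y = 2x, dataset B follows y = -3x
model_a = train([(1, 2), (2, 4), (3, 6)])    # learns w ≈ 2
model_b = train([(1, -3), (2, -6), (3, -9)])  # learns w ≈ -3
```

same "source", opposite behavior; which is why the training data, not just the code, is the half that actually determines the model.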
so you're basically saying it talked itself squarely into uncanny valley?
i honestly didn't consider that would be an issue for LLMs, but in hindsight...yeah, that's gonna be a problem...
oh, damn, you're right!
i got that mixed up; i thought ranked choice also includes proportional representation, because it frees up your secondary vote to be for whoever you want it to be, without pressure to vote for a candidate that "has a chance of winning", thus alleviating the issue of strategic voting...but that's pretty much the only thing it does.
but proportional representation is tied to the way mandates/seats are distributed, which isn't tied to how the vote works.
so if the senate still had the same number of seats per state, it wouldn't fix representation, because the weight of the votes still wouldn't be equal...
yeah, sorry for the confusion...long day...but thanks for the polite correction!
gerrymandering is rendered obsolete by points 1 and 2 on the list...so that's already included in the OP ;)
the reason gerrymandering is a thing, is because of the first-past-the-post/winner-takes-all voting system, which ranked choice replaces.
ranked choice allows proportional representation, which also fixes the 2 party problem!
edit: it also fixes your point 2, because under ranked choice there is only a popular vote (also just known as "a vote", because there isn't any other one left)
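for anyone curious how the counting works, here's a minimal sketch of instant-runoff counting, the most common "ranked choice" method (candidate names and ballots are invented, and tie-breaking between last-place candidates is ignored for simplicity):

```python
# minimal instant-runoff ("ranked choice") count, for illustration only.
# each ballot is a list of candidates in order of preference.
from collections import Counter

def instant_runoff(ballots):
    remaining = {c for b in ballots for c in b}
    while True:
        # count each ballot for its highest-ranked remaining candidate
        tally = Counter(
            next(c for c in b if c in remaining)
            for b in ballots
            if any(c in remaining for c in b)
        )
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:  # someone has a majority
            return leader
        # otherwise eliminate last place and redistribute their ballots
        remaining.remove(min(tally, key=tally.get))

ballots = [
    ["Green", "Dem"], ["Green", "Dem"],   # third-party voters, not wasted
    ["Dem"], ["Dem"], ["Dem"],
    ["Rep"], ["Rep"], ["Rep"],
]
```

in round one nobody has a majority (Green 2, Dem 3, Rep 3), so Green is eliminated and those ballots transfer to their second choice, giving Dem a 5-3 majority; that's the mechanism that removes the pressure to vote strategically.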
nvm, got something mixed up...shouldn't comment when half asleep...
crocodiles actually get much bigger than that...i think the record is somewhere around 7m and about 1.1 metric tonnes!
I'd say it's worth a watch!
kinda gives a different perspective on the world it's set in (until the end, where it gets very "the boys"-like, but not in a bad way)
Patch 6 only really broke the script extender (well, hotfix #18 really)
nvm, just checked, it's up to date! (hotfix #19 is supported!)
pretty much everything is up to date!
pretty sure ATM9's recommended minimum RAM is 10GB...i have it at 12GB.
but i also run it at about 100fps and view distance set around 16 with shaders...
there's probably already a tamperMonkey script out there, check greasyFork or something
i don't think so, but you can either entirely disable it, or make them passive, or tune it to your liking; there's tons of customizability in the difficulty!
it's honestly some pretty smart design in how they handled it! you should give it a try, see if you like it!
one little beginners tip that's kinda important: they always choose the shortest path to your base (so pretty much any structure you build) and they attack based on your power consumption! (there's a little widget that tells you when a wave is coming)
*They're the same picture
(the dark fog update is seriously sick tho)
that'd be great in theory!
if only the vanguard anti-cheat wasn't ridiculously easy to bypass and effectively useless while also being a major security concern...
i looked it over and ... holy mother of strawman.
that's so NOT related to what I've been saying at all.
i never said anything about the advances in AI, or how it's not really AI because it's just a computer program, or anything of the sort.
my entire argument is that the definition you are using for intelligence, artificial or otherwise, is wrong.
my argument isn't even related to algorithms, programs, or machines.
what these tools do is not intelligence: it's mimicry.
that's the correct word for what these systems are capable of. mimicry.
intelligence has properties that are simply not exhibited by these systems, THAT'S why it's not AI.
call it what it is, not what it could become, might become, will become. because that's what the wiki article you linked bases its arguments on: future development, instead of current achievement, which is an incredibly shitty argument.
the wiki talks about people using shifting goal posts in order to "dismiss the advances in AI development", but that's not what this is. i haven't changed what intelligence means; you did! you moved the goal posts!
I'm not denying progress, I'm denying the claim that the goal has been reached!
that's an entirely different argument!
all of the current systems, ML, LLM, DNN, etc., exhibit a massive advancement in computational statistics, and possibly, eventually, in AI.
calling what we have currently AI is wrong, by definition; it's like saying a single neuron is a brain, or that a drop of water is an ocean!
just because two things share some characteristics, some traits, or because one is a subset of the other, doesn't mean that they are the exact same thing! that's ridiculous!
the definition of AI hasn't changed, people like you have simply dismissed it because its meaning has been eroded by people trying to sell you their products. that's not ME moving goal posts, it's you.
you said a definition from 70 years ago is "old" and therefore irrelevant, but that's a laughably weak argument for anything, and it's even weaker in a scientific context.
is the Pythagorean Theorem suddenly wrong because it's ~2500 years old?
ridiculous.
just because the marketing idiots keep calling it AI, doesn't mean it IS AI.
words have meaning; i hope we agree on that.
what's around nowadays cannot be called AI, because it's not intelligence by any definition.
imagine if you were looking to buy a wheel, and the salesperson sold you a square piece of wood and said:
"this is an artificial wheel! it works exactly like a real wheel! this is the future of wheels! if you spin it in the air it can go much faster!"
would you go:
"oh, wow, i guess i need to reconsider what a wheel is, because that's what the salesperson said is the future!"
or would you go:
"that's idiotic. this obviously isn't a wheel and this guy's a scammer."
if you need to redefine what intelligence is in order to sell a fancy statistical model, then you haven't invented intelligence, you're just lying to people. that's all it is.
the current mess of calling every fancy spreadsheet an "AI" is purely idiots in fancy suits buying shit they don't understand from other fancy suits exploiting that ignorance.
there is no conspiracy here, because it doesn't require a conspiracy; only idiocy.
p.s.: you're not the only one here with university credentials...i don't really want to bring those up, because it feels like devolving into a dick measuring contest. let's just say I've done programming on industrial ML systems during my bachelor's, and leave it at that.
perceptual learning, memory organization and critical reasoning
i mean...by that definition nothing currently in existence deserves to be called "AI".
none of the current systems do anything remotely approaching "perceptual learning, memory organization, and critical reasoning".
they all require pre-processed inputs and/or external inputs for training/learning (so the opposite of perceptual), none of them really do memory organization, and none are capable of critical reasoning.
so OPs original question remains:
why is it called "AI", when it plainly is not?
(my bet is on the faceless suits deciding it makes them money to call everything "AI", even though it's a straight up lie)
actually, the law leaves remarkably little room for interpretation in this case.
here's the law in full, emphasis mine:
Strafgesetzbuch (StGB) § 202a Ausspähen von Daten
(1) Wer unbefugt sich oder einem anderen Zugang zu Daten, die nicht für ihn bestimmt und die gegen unberechtigten Zugang besonders gesichert sind, unter Überwindung der Zugangssicherung verschafft, wird mit Freiheitsstrafe bis zu drei Jahren oder mit Geldstrafe bestraft.
(2) Daten im Sinne des Absatzes 1 sind nur solche, die elektronisch, magnetisch oder sonst nicht unmittelbar wahrnehmbar gespeichert sind oder übermittelt werden.
(in english: (1) whoever, without authorization, obtains access for themselves or another to data that is not intended for them and that is specially secured against unauthorized access, by overcoming the access protection, is punished with imprisonment of up to three years or a fine. (2) data within the meaning of subsection 1 is only data that is stored or transmitted electronically, magnetically, or in some other way not directly perceivable.)
the text is crystal clear, that security measures need to be "overcome" in order for a crime to have been committed.
it is also obvious that cleartext passwords are NOT a "security measure" in any sense of the word, but especially in this case, where the law specifically says that the data in question has to have been "specially secured". this was not the case, as evidenced by the fact that the defendant had easy access to the data in question.
this is blatant misuse of the law.
unlike the laws on physical theft, this data law makes no attempt to take the intent of the person into account, which is immediately and obviously ridiculous.
you mentioned snooping around in a strangers car, and that's a good comparison!
you know what you definitely couldn't be charged with in the example you gave? breaking and entering!
because breaking and entering requires (in germany at least) that you gained access through illegal means (i.e.: literally broke in, as opposed to finding the key already in the lock).
but that's essentially what is happening in this case, and that is what's wrong with this case!
most people agree he shouldn't have tried to enter the PW.
what has large parts of the professional IT world up in arms is the way the law was applied, not that there was a violation of the law. (though most in IT, myself included, think this sort of "hacking" shouldn't be punishable if it's done solely for the purpose of finding and reporting vulnerabilities, which makes a lot of sense)
actually, that's not what the law says.
the law says that "overcoming" security measures is a crime. nothing was "overcome".
plaintext is simply not a "security measure" and the law was applied wrong.
there may have been some form of infringement in regards to privacy or sensitive data or whatever, but it definitely wasn't "hacking" of any kind.
just like it isn't "hacking" to browse someone's computer files when they leave a device unlocked and accessible to anyone. invasion of privacy? sure. but not hacking.
and the law as written (§202a StGB) definitely states that security measures have to be circumvented in order for it to apply.
that's the problem with the case!
not that the guy overstepped his bounds, but that the law was applied blatantly wrong and no due diligence was used in determining the outcome of the case.
probably so the direct translation into english is easier to understand