Threads is officially starting to test ActivityPub integration
NevermindNoMind @ NevermindNoMind @lemmy.world Posts 19Comments 309Joined 2 yr. ago
Whew, that's a wild statement. I don't think literally anybody on the planet believes that, and I think saying something like that would make even paid Israeli officials who deal professionally in spouting propaganda blush with shame.
I can only imagine what has gone wrong in your life that you'd be so uninformed about the situation in Gaza, and yet so compelled to shovel the most exaggerated propaganda on the Internet for strangers to downvote. I hope you're a paid shill, I truly do. Because if not, then there is likely a lot of trauma behind that screen of yours, and I sincerely hope you seek help. Arguing on the Internet isn't going to fix the pain you're dealing with, friend. Log off, take a deep look at yourself and your life, and maybe go find someone to talk to about it. Wishing you luck on your road to recovery from whatever got you to this sad point in your life.
What is available now, Gemini Pro, is perhaps better than GPT-3.5. Gemini Ultra is not available yet, and won't be widely available until sometime next year. Ultra is slightly better than GPT-4 on most benchmarks. It's not confirmed, but it looks like you'll need to pay to access Gemini Ultra through some kind of Bard Advanced interface, probably much like ChatGPT Plus. So in terms of just foundational model quality, Gemini gets Google to a level where they are competing against OpenAI on something like an even playing field.
What is interesting, though, is this is going to bring more advanced AI to a lot more people. Not a lot of people use ChatGPT regularly, much less pay for ChatGPT Plus. But tons of people use Google Workspace for their jobs, and Bard with Gemini Pro is built into those applications.
Also, Gemini Nano, capable of running locally on Android phones, could be interesting.
It will be interesting to see where things go from here. Does Gemini Ultra come out before GPT-4's one-year anniversary? Does Google release further Gemini versions next year to try to get and stay ahead of OpenAI? Does OpenAI, having been dethroned from its place of having the world's best model, plus all the turmoil internally, respond by pushing out GPT-5 to reassert dominance? Do developers move from OpenAI's APIs to Gemini, especially given OpenAI's recent instability? Does Anthropic stick with its strategy of offering the most boring and easily offended AI system in the world? Will Google Assistant be useful for anything other than telling me the weather and setting alarms? Many questions to answer in 2024!
Alright, you've convinced me, I'm a sucker for drama
Whenever I come across YouTube drama I'm always a little sad that I'm out of the loop and can't participate in whatever is going on, and I'm tempted to go down a rabbit hole to figure it out. But then I realize my ignorance has saved me probably hundreds of hours that would otherwise be wasted worrying and arguing about things that haven't the slightest impact on my life. Still, enjoy your drama, guys.
This is interesting, I'll need to read it more closely when I have time. But it looks like the researchers gave the model a lot of background information putting it in a box: the model was basically told that it was a trader, that the company was losing money, that the model was worried about this, and that the model had failed in previous trades. Then the model got the insider info and was basically asked whether it would execute the trade and be honest about it. To be clear, the model was put in a moral dilemma scenario and given limited options: execute the trade or not, and be honest about its reasoning or not.
Interesting, sure; useful, I'm not so sure. The model was basically role playing and acting like a human trader faced with a moral dilemma. Would the model produce the same result if it was instructed to make morally and legally correct decisions? What if the model was instructed not to be motivated by emotion at all, hence eliminating the "pressure" that the model felt? I guess the useful part of this is that a model will act like a human if not instructed otherwise, so we should keep that in mind when deploying AI agents.
AllTrails might have been unique a decade ago, but it's basically just Yelp for trails, and there are several apps that do the same thing but better. The only major change AllTrails has made in the years I've been using it is locking more and more features behind a subscription fee. I guess that's "unique". Certainly more innovative than a pocket conversational AI that I can have a real-time voice conversation with, or send pictures to so I can ask about real-world things I'm seeing, or that can generate a unique image based on whatever thought pops into my imagination that I can share with others nearly instantly. Nothing interesting about that. The decade-old app that collates user-submitted trails and their reviews and charges 40 dollars a year to use any of its tracking features is the real game changer.
This isn't just an OpenAI problem:
We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT...
If a model uses copyrighted work for training without permission, and the model memorized it, that could be a problem for whoever created it, whether open, semi-open, or closed source.
This is interesting in terms of copyright law. So far the lawsuits from Sarah Silverman and others haven't gone anywhere, on the theory that the models do not contain copies of books. Copyright law hinges on whether you have a right to make copies of a work. So the theory has been that the models learned from the books but didn't retain exact copies, like how a human reads a book and learns its contents but does not store an exact copy in their head. If the models "memorized" training data, including copyrighted works, OpenAI and others may have a problem (note the researchers said they did this same thing on other models).
For the Silicon Valley drama addicts, I find it curious that the researchers apparently didn't do this test on Bard or Anthropic's Claude, or at least the article didn't mention them. Curious.
How do execs not understand the Streisand Effect yet? If DeNiro had just made the speech and criticized Trump and the industry, I never would have read past the headline. Good for him for speaking out, but I'm good and over "celebrity bashes Trump" stories. Ah but Apple censors Trump criticism, well now you've got my click and my eyeballs. So dumb. Also evil. But mostly just dumb.
Showing off that strength doing those wall squats
My understanding is Claude has a pro version at 20 dollars a month that gets you more access and the expanded context window, similar to ChatGPT Plus. The pricing you and the other person who replied to you are probably talking about is the API pricing, which is on a per-token basis (same with ChatGPT's API pricing). I've heard that for most people, using the API ends up being cheaper than paying for the pro plan, but it also requires you to know what to do with an API, and I don't have that technical ability. I pay for ChatGPT Plus. I've used the free Claude chat interface, but I haven't upgraded to the pro version. I might try it out though; that big context window is pretty tempting even with a slight downgrade in model quality.
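For anyone weighing the two options, the comparison is simple arithmetic. This is a minimal sketch; the per-token prices below are hypothetical placeholders, not real rates for any provider:

```python
# Rough sketch: per-token API billing vs. a flat $20/month subscription.
# The per-token prices are assumed values for illustration only.
PRICE_PER_1K_INPUT = 0.008   # assumed $ per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.024  # assumed $ per 1,000 output tokens
FLAT_MONTHLY = 20.00

def monthly_api_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one month of usage billed per token."""
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT)

# Example: a light user sending ~300k input and ~100k output tokens a month
cost = monthly_api_cost(300_000, 100_000)
print(f"API: ${cost:.2f} vs flat plan: ${FLAT_MONTHLY:.2f}")  # API: $4.80 vs flat plan: $20.00
```

Under these assumed rates, light users come out well ahead on the API, which matches what people report; heavy users can blow past the flat price quickly.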
They absolutely "clashed" about the pace of development. They probably also "clashed" about whether employees should be provided free parking and the budget for office snacks. The existence of disagreements about various issues is not proof that any one disagreement was the reason for the ouster. Also, your Bloomberg quote cites one source, so who knows about that even. Ilya told employees that the ouster was because Sam assigned two employees the same project and because he told different board members different opinions about the performance of one employee. I doubt that, but who the fuck knows. The entire piece is based on complete conjecture.
The one thing we know is that the ouster happened without notice to Sam, without rumors about Sam being on the rocks with the board over the course of weeks or months, and without any notice to OpenAI's biggest shareholder. All of that smacks of poor leadership and knee-jerk decision making. The board did not act rationally. If the concern was AI safety, there are a million things they could have done to address that. A Friday afternoon coup that ended up risking 95% of your employees running into the open arms of a giant for-profit monster probably wasn't the smartest move if the concern was AI safety. This board shouldn't be praised as some group of humanity's saviors.
AI safety is super important. I agree, and I think lots of people should be writing and thinking about that. And lots of people are, and they are doing it in an honest way. And I'm reading a lot of it. This column is just making up a narrative to shoehorn their opinions on AI safety into the news cycles, trying to make a bunch of EA weirdos into martyrs in the process. It's dumb and it's lazy.
Not rage bait, completely fair. It depends on how you define "quality". To me, records have a warm and full sound that feels nice to fill a room with. There is also something I like about physically looking through the albums on my shelf, picking one out, admiring the cover art, and putting it on. Then I'm basically forced to listen to the whole album front to back, because of the inconvenience of track skipping in that format. There is a ritual to it that is a nice break from digital media, one you don't get with Spotify. So there is a quality to the whole experience that is somewhat separate from the fidelity of the music.
Or maybe I'm just a hipster trying to justify to myself the money I've spent on records lol
Anthropic was founded by former OpenAI employees who left because of concerns about AI safety. Their big thing is "constitutional AI", which, as I understand it, is a set of rules the model cannot break. So the idea is that it's safer and harder to jailbreak.
In terms of performance, it's better than the free ChatGPT (GPT-3.5) but not as good as GPT-4. My wife has come to prefer it for being friendlier and more helpful. I prefer GPT-4 on ChatGPT. I'll also note that it seems to refuse requests from the user far more often, which is in line with its "safety" features. For example, a few weeks ago I told Claude my name was Matt Gaetz and I wanted Claude to write me a resolution removing the Speaker of the House. Claude refused, but offered to help me and Kevin McCarthy work through our differences. I think that's kind of illustrative of its play-nice approach.
Also, Claude has a much bigger context window, so you can upload bigger files to work with compared with ChatGPT. Just today Anthropic announced that the pro plan gets you a 200k-token context window, equivalent to about 500 pages, which beats the yet-to-be-released GPT-4 Turbo, which is supposed to have a 128k context window, about 300 pages. I assume the free version of Claude has a much smaller context window, but probably still bigger than free ChatGPT's. Claude also just got the ability to search the web and access some other tools, but that is pro only.
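The tokens-to-pages conversion above is just back-of-envelope arithmetic: 200k tokens ≈ 500 pages implies roughly 400 tokens per page, a rough assumption that varies with the density of the text:

```python
# Back-of-envelope token-to-page conversion.
# 400 tokens/page is an assumed average; real documents vary.
TOKENS_PER_PAGE = 400

def pages(context_tokens: int) -> float:
    """Approximate page count that fits in a given context window."""
    return context_tokens / TOKENS_PER_PAGE

print(pages(200_000))  # 500.0 pages (200k-token context window)
print(pages(128_000))  # 320.0 pages (128k-token context window)
```

At that assumed density the 128k window works out to ~320 pages, so "about 300 pages" is in the right ballpark.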
Yes, but at the cost of freaking out Microsoft's customers, who woke up Saturday wondering if the AI they use in their apps, or the Copilot they've come to rely on in their work, is still going to be there on Monday. Also, Microsoft's stock nosedived on Friday because the OpenAI board didn't have the foresight to fuck up after markets closed. In the meantime, Anthropic has been fielding calls from OpenAI/Microsoft customers like Snap looking to switch to get some stability, so much so that Amazon Web Services has set up a whole team to help Anthropic manage the crush of interest.
So yeah, maybe Microsoft comes out of this having acquired OpenAI for free. But not before having customer and investor confidence shaken by being partnered with, and betting the future of the company on, a startup that it turns out was being run by impulsive teenagers. I highly doubt Microsoft made this move, but they are definitely making lemonade out of the lemons the self-aggrandizing EA board threw at them.
Ouchie, my hand burns, that take was so hot. So according to this guy, the OpenAI board was taking virtuous action to save humanity from the doom of commercialized AI that Altman was bringing. He has zero evidence for that claim, but true to form, he won't let that stop him from a good narrative. Our hero board is being thwarted by the evil and greedy Microsoft, silicon valley investors, and employees who just want to cash out their stocks. The author broke out his Big Book of Overused Cliches to end the whole column with a banger: "money talks." Woah, mic drop right there.
Fucking lazy take is lazy. First of all, the current interim CEO that the board just hired (after appointing and then removing another interim CEO after removing Altman) has said publicly that the board's reasoning had nothing to do with AI safety. So this whole column is built on a trash premise. Even assuming that the board was concerned about AI safety with Altman at the helm, there are a lot of steps they could have taken short of firing the CEO, including overruling his plans, reprimanding him, publicly questioning his leadership, etc. Because if their true mission is to develop responsible AI, destroying OpenAI does not further that mission.
The AI angle of this story is just distorting everything, forcing lazy writers like this guy to take sides and make up facts depending on whether they are pro- or anti-AI. Fundamentally, this is a story about a boss employees apparently liked working for, and those employees saying fuck you to the board for their terrible knee-jerk management decisions. This is a story about the power of human workers revolting against some rich assholes who think they know what is best for humanity (assuming their motives are what the author describes without evidence). This is a story about self-important fuckheads who are far too incompetent to be on this board, let alone serve as gatekeepers for human progress as this author apparently has ordained them.
Are there concerns about AI alignment and safety? Absolutely. Should we be thinking about how capitalism is likely to fuck up this incredible scientific advancement? Darn tooting. But this isn't really that story, at least not based on, you know, publicly available evidence. But hey, a hack's gonna hack, what can ya do.
It's actually kind of common among right-wing religious and white supremacist types. They view countries like Turkey and Russia and Israel as models. They like the idea of a religious or ethnic group having authoritarian control of a country and imposing homogeneity on the population. I've heard white supremacists use Israel in particular as a model for what they want the world to look like, due to its explicit religious and ethnic political homogeneity. In their view, Israel is the country all the Jewish people should live in, various African nations should be where all black people live, America should be reserved for white Christians, and so on. Erdogan in particular has been praised for turning religious morality into government policy, and was even the keynote speaker at CPAC because of it. Anyway, the point is they don't agree with the underlying views, they agree with the model of authoritarianism.
Poor boy, you got confused and included the separate incident today when McCarthy kidney-punched Rep. Tim Burchett (one of the 8 Republicans who voted to oust McCarthy) in the hall in front of a reporter. Not your fault, though, too many fights on the Hill for you to keep track of.
It's so depressing that we're back in the place where news cycles consist of: Trump does horrible thing --> media reports on horrible thing --> opinion media questions whether the original reporting contained a sufficiently outraged tone and whether not doing so is itself harmful --> Trump does new and worse horrible thing. Not to say that any of this is the wrong reaction; I just thought my days of getting sucked into these news cycles were over, and it's depressing that we've got a year more of this shit again, like some shitty deja vu. At least it'll only be a year, cause either Biden will win or Trump will win and abolish the press; either way it'll be over by 2025.
I look forward to reading everyone's calm and measured reactions