I think there are free editors for LaTeX that show you the code and the end result next to each other, and let you edit either.
You need to learn to resist the urge to tweak the layout. You're using a professional document preparation tool that will make your document look professional. Playing with trendy fonts and margins and placement is how regular people make documents in a word processor that end up looking less professional than LaTeX.
LaTeX gives you the respectability of the corporate style of the professional science researcher, but if you want free-form do-it-how-you-like, you really really really don't want LaTeX.
In the UK, for example, the “Liberal Democrats” are right-leaning.
Depends on the leadership. Sometimes they are, sometimes they aren't. There have been times when they've been further left than Labour.
Currently, of course, that's easy because Labour are too busy trying to appeal to Reform voters and Conservatives, and are governing as if they were the Democratic Party, which is a shame, because the country desperately needs some wealth redistribution.
Labour are in power because we were gasping for some sanity after a succession of Conservative lunatics, but all the Conservative Party needs to do is stump up a leader who sounds like they have a couple of good ideas and a bit of charisma, and they'll be back in power before you can say "short memory".
No, which is why I avoid regexes for most production code, and also why I would never use one written by a pathologically lying, perpetually guessing coder like an LLM.
An LLM is great when you're coding in a purely functional programming language like Elm and are using lots of custom types to make impossible states unrepresentable, and the function you're writing could have been derived by the Haskell compiler, so that mathematically the only possible way you could write it wrong is to use the wrong constructor. Then it's usually right, and when it's wrong, either it doesn't compile or you can see it's chosen the wrong path.
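Something along these lines (a toy sketch of my own, not from any real codebase) is what I mean by the types doing the work:

    -- Each state carries exactly the data that state allows, so "connected
    -- with no socket" or "disconnected with a retry count" is unwritable.
    data Connection
      = Disconnected
      | Connecting Int    -- retry count
      | Connected Socket

    newtype Socket = Socket Int

    -- The type dictates the shape of this function: one equation per
    -- constructor, and the compiler warns if a case is missing. Picking
    -- the wrong constructor is about the only way to get it wrong.
    describe :: Connection -> String
    describe Disconnected   = "offline"
    describe (Connecting n) = "retrying, attempt " ++ show n
    describe (Connected _)  = "online"

    main :: IO ()
    main = putStrLn (describe (Connecting 2))

An LLM filling in describe has almost no room to hallucinate, which is exactly why it looks so competent there.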
The rest of the time it will make shit up, and when you challenge it, it will happily rewrite it for you, but there's no particular reason why it wouldn't make up more nonsense.
Regexes are far easier to write than to debug, which is exactly why they're poison for a maintainable code base and a really bad use case for an LLM.
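A trivial example of the trap (my own, assuming the regex-tdfa package):

    import Text.Regex.TDFA ((=~))

    -- A "date" regex that takes ten seconds to write and looks plausible...
    main :: IO ()
    main = print ("2024-13-99" =~ "^[0-9]{4}-[0-9]{2}-[0-9]{2}$" :: Bool)
    -- ...and prints True: it never checks that the month or day is valid.

Writing it is the easy part; spotting what it silently accepts is where the maintenance time goes.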
I also wouldn't use an LLM for languages in which there are lots and lots of ways to go wrong. That's exactly when you need an experienced developer, not someone who guesses based on what they read online, with no understanding, never learning anything, because, my young padawan, that's exactly what an LLM is, every day.
The experienced developers in the study believed they were 20% faster. There's a chance you also measured your efficiency more subjectively than you think you did.
I suspect that unless you were considerably more rigorous in testing your efficiency than they were, you might just be in a "time flies when you're having fun" kind of situation.
I was trying to think of some meaning other than 'drinks dispensary' for 'bar', and for quite a while I couldn't think of a sensible reason for putting a bar in your shower, until I realised it meant a metal bar.
On top of that, it's an annoyingly disproportionate graphic. The cow is much wider than the human so its area is much more than 60% of the area of the graphic.
The owl might be 3cm high and the hen 6cm high, but the rough areas would then be 9cm² and 36cm², even before you account for the fact that, again, the hen picture is much, much wider than the owl.
With 30% and 70%, the owl should be just a little under half as big as the hen by area, but it looks like about 1/4 or 1/5 of the size of the hen.
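To put numbers on it (my arithmetic, reading the percentages as areas):

    \[
      \sqrt{30/70} \approx 0.65
      \qquad\text{vs.}\qquad
      \left(\tfrac{30}{70}\right)^{2} \approx 0.18 \approx \tfrac{1}{5.5}
    \]

An honest graphic would make the owl about 65% of the hen's height; scaling the height by 30:70 instead is exactly what produces that 1/4-to-1/5 look.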
(Personally, and irrelevantly to your question, I think it's weird to shave your pubes, and I think that based on who started that trend, why they started it and why it became popular, but people younger than me, who don't remember any different, disagree strongly.)
But the fact that your son trusts you with that question and that you calmly helped him and didn't make a big deal out of it, is an absolute parenting win. Who does your teenaged son go to when he's worried about something personal and sensitive and embarrassing? He goes to you, and you help him and he is right to trust you.
I already told you my experience of the crapness of LLMs and even explained why I can't share the prompt etc. You clearly weren't listening or are incapable of taking in information.
There's also all the testing done by the people talked about in the article we're discussing, which you're also irrationally dismissing.
You have extreme confirmation bias.
Everything you hear that disagrees with your absurd faith in the accuracy of the extreme blagging of LLMs gets dismissed for any excuse you can come up with.
It's like you didn't listen to anything I ever said, or you discounted everything I said as fiction, but everything your dear LLM said is gospel truth in your eyes. It's utterly irrational. You have to be trolling me now.
> it's so good at parsing text and documents, summarizing
No. Not when it matters. It makes stuff up. The less you carefully check every single fucking thing it says, the more likely you are to believe some lies it subtly slipped in as it went along. If truth doesn't matter, go ahead and use LLMs.
If you just want some ideas that you're going to sift through, independently verify and check for yourself with extreme skepticism as if Donald Trump were telling you how to achieve world peace, great, you're using LLMs effectively.
But if you're trusting it, you're doing it very, very wrong and you're going to get humiliated because other people are going to catch you out in repeating an LLM's bullshit.
You're better off asking one human to do the same task ten times. Humans get better and faster at things as they go along. They'll always be slower than an LLM, but an LLM gets more and more likely to veer off on some flight of fancy, further and further from reality, the more it says to you. The chances of it staying factual in the long term are really low.
It's a born bullshitter. It knows a little about a lot, but it has no clue what's real and what's made up, or it doesn't care.
If you want some text quickly that sounds right, but you genuinely don't care whether it is right at all, go for it, use an LLM. It'll be great at that.
I would be in breach of contract to tell you the details. How about you just stop trying to blame me for the clear and obvious lies that the LLM churned out and start believing that LLMs ARE strikingly fallible, because, buddy, you have your head so far in the sand on this issue it's weird.
The solution to the problem was to realise that an LLM cannot be trusted for accuracy: even if the first few results are completely accurate, the bullshit will creep in. Don't trust the LLM. Check every fucking thing.
In the end I wrote a quick script that broke the input up on tab characters and wrote out the sentence. That's how formulaic it was. I deeply regretted trying to get an LLM to work with data.
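The whole "script" was on the order of this (a from-memory sketch, not the real code; the field names are made up, and it assumes the split package):

    import Data.List.Split (splitOn)

    -- One tab-separated line in, one formulaic sentence out.
    sentenceFor :: String -> String
    sentenceFor line = case splitOn "\t" line of
      [name, value, unit] -> name ++ " came in at " ++ value ++ " " ++ unit ++ "."
      fields              -> error ("unexpected number of fields: " ++ show fields)

    main :: IO ()
    main = interact (unlines . map sentenceFor . lines)

Deterministic, instant, and it never invents a number.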
The frustrating thing is that it is clearly capable of doing the task some of the time, but drifting off into FANTASY is its strong suit, and it doesn't matter how firmly or how often you ask it to be accurate or use the input carefully. It's going to lie to you before long. It's an LLM. Bullshitting is what it does. Get it to do ONE THING only, then check the fuck out of its answer. Don't trust it to tell you the truth any more than you would trust Donald J Trump to.
No no no no no no no no no no no no!