Absolutely this. I've found AI to be a great tool for nitty-gritty questions about a development framework. When googling/DuckDuckGo'ing, your query has to match the documentation's wording pretty closely before anything specific turns up. AI seems to be much better at "understanding" the content and matches it to the right documentation pretty reliably.
For example, I was reading docs up and down on ElasticSearch's website trying to find all possible values for the status field within an aggregated request. Google only led me to general documentation pages without the specifics. However, a quick, loosely worded question to ChatGPT handed me the correct answer as well as a link to the exact spot in the docs where this was specified.
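For anyone curious what that kind of request looks like, here's a minimal sketch of a terms aggregation sent to Elasticsearch's _search endpoint using the Python requests library. The index name, the "status" field, and the localhost URL are placeholders, not the actual setup I was working with:

    # Minimal sketch: send a search with a terms aggregation and list the
    # distinct values of a "status" field. Index name, field name and URL
    # are placeholders; the field is assumed to be mapped as a keyword.
    import requests

    ES_URL = "http://localhost:9200/my-index/_search"

    body = {
        "size": 0,  # skip the hits, we only want the aggregation buckets
        "aggs": {
            "status_values": {
                "terms": {"field": "status"}  # bucket documents by status value
            }
        },
    }

    response = requests.post(ES_URL, json=body, timeout=10)
    response.raise_for_status()

    # Each bucket's "key" is one value the status field takes in the index.
    for bucket in response.json()["aggregations"]["status_values"]["buckets"]:
        print(bucket["key"], bucket["doc_count"])

Of course, running something like this only shows the values that happen to exist in your own data; the documented set of possible values was the part I needed the docs (or ChatGPT) for.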
This reminds me of when I was at a coffee shop where I ordered some hot coffee drink with milk. I don't remember what kind it was, but the barista brought out a big cup with a small shot of coffee in it. Then she said something which I somehow didn't hear, so I just nodded and said "yes". She started pouring milk into the cup. I just watched.
She continued to pour. Slowly.
After a few seconds, she raised her eyes to me, still pouring. I was starting to wonder how much milk this drink was supposed to have. I could see her look growing more worried as she filled up the last bit of the cup.
Finally she stopped. A full cup. I took the drink, said thanks and left.
When I tasted it, I noticed that the drink was incredibly weak. Then I realized that the barista had asked me to say "stop", and instead I had basically made her serve me a full cup of hot milk.
Shouldn't that be Cauchy and Schwarz walking together? Since their inequality is what gives you, in general, that the length of the sum of two vectors is smaller than or equal to the sum of the lengths of those vectors.
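Spelled out, for reference: the statement in words is the triangle inequality, and it follows (at least for the usual norm coming from an inner product) from the Cauchy-Schwarz bound on the inner product:

    % Cauchy-Schwarz inequality for vectors u, v with inner product <u, v>:
    \[ \lvert \langle u, v \rangle \rvert \;\le\; \lVert u \rVert \, \lVert v \rVert \]
    % Triangle inequality (the statement described above), derived from it:
    \[ \lVert u + v \rVert^2
       = \lVert u \rVert^2 + 2\langle u, v \rangle + \lVert v \rVert^2
       \;\le\; \lVert u \rVert^2 + 2\lVert u \rVert\,\lVert v \rVert + \lVert v \rVert^2
       = \bigl(\lVert u \rVert + \lVert v \rVert\bigr)^2,
       \qquad\text{so}\qquad
       \lVert u + v \rVert \le \lVert u \rVert + \lVert v \rVert. \]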
Wow! I have not seen that standup routine or heard anyone mention those thoughts before. I'm very glad that I am not alone in having that problematic idea pop into my head.
And I - just like Daniel Fernandes - should clarify that I do not, under any circumstances, attempt to whitewash Hitler. It's merely a stray thought that makes for a crazy plot line.
When you ask people what they would do if they had a time machine, a lot of them mention "go back in time and kill Hitler". But I've been having this thought lately - and I know it's deeply problematic and inconsistent - what if Hitler was actually from the future and decided to travel back in time to prevent the genocide that Israel is currently carrying out?
I got a Christmas card from my company. As part of the Christmas greeting, they promoted AI, something to the effect of "We wish you a merry Christmas, much like the growth of AI technologies within our company".
I listen moderately throughout the year, but during December my GF runs Christmas music like there's no next year. So maybe the exclusion is my salvation.
Not that I'm deeply into The Darkness (or rock, for that matter), but they bring a breath of fresh air to an otherwise repeated-to-hell list of music, both in style and melody.