I find this kind of work very important when talking about AI adoption.
I've been generating (the boring) parts of work documents via AI, and even though I put a lot of thought into my prompts and reviewed and adjusted the output each time, I kept wondering whether people would notice the AI parts, and whether that made me look more efficient and 'complete' (we're talking about a template document where some parts seem designed to be repetitive), or lazy and disrespectful.
Because my own trust in both the content and the person definitely drops when I notice auto-generated parts, which in turn pushes me to use AI myself and ask it to summarise all that verbose AI-generated content.
I'm not sure that's how decoder-encoders are meant to work :)
I spent some time reading the Wikipedia article looking for the relevant part; I guess I was 10 minutes early (I didn't get the chance to see your comment before then). Here's the (probably) corresponding video, the first video result when searching for the freestyle javelin technique, in case it helps anyone: https://youtu.be/52rvqtiBoow?si=RiLjhJG2ttv-0s1W
Not sure if this is the right place to ask, but where are these posters normally used? They're really nice, but I've only ever seen them in posts like this.
Maybe that's what we should get: yet another trilogy where they rebuild, but without the attachment nonsense and other unhealthy Jedi practices. Though I'm somewhat conflicted writing 'yet another trilogy'.
To add, after reading some of the linked comments: the survey is cool and all, but that's it. It prompts people to explain how they tried to spot things (as good experiment participants should), but I've only seen one comment addressing the purpose and methodology (as a researcher would).
Whatever the interesting research question might be, the data selection doesn't allow for insightful answers. We can interpret these results however we want, and as we can see, people are already taking from them whatever they want.
Sincerely, Reviewer 2
I got a 9, even while knowing the two examples in this post, and marked way too many as AI-generated.
But seeing the DeviantArt links for the non-AI images got me thinking: even if those weren't generated from single prompts, there's a chance of heavy AI involvement in the process, one way or another. So I'd actually be curious what the practical, applicable purpose of the survey was.