Did they really just pull a Unity move with charging per download??
This is not going to be good for any developers that sit in that danger zone of offering a free app with in-app purchases. If they don't make enough money (over €500k) once they hit that 1 million download threshold, they could owe more money than they make.
Edit: Looks like the first million downloads are always free for that year, but anything after that and they start charging per download.
Still bad for free apps if they grow a lot without getting much income from their users.
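To see how that danger zone plays out, here's a back-of-the-envelope sketch. The €0.50 per-install rate is an assumption based on reported figures, not something from this thread, so check the actual terms:

```python
# Back-of-the-envelope: fee owed under a per-install charge that only
# kicks in after the first 1,000,000 annual installs.
FREE_INSTALLS = 1_000_000
FEE_PER_INSTALL = 0.50  # EUR, assumed rate -- verify against current terms

def annual_fee(installs: int) -> float:
    """Fee owed for a given number of first annual installs."""
    return max(0, installs - FREE_INSTALLS) * FEE_PER_INSTALL

# A free app that goes viral: 3M installs but modest in-app purchase revenue.
installs = 3_000_000
revenue = 400_000.0  # EUR
fee = annual_fee(installs)
print(f"Fee owed: EUR {fee:,.0f}")        # Fee owed: EUR 1,000,000
print(f"Net: EUR {revenue - fee:,.0f}")   # Net: EUR -600,000
```

The app ends up €600k in the hole despite never charging a cent up front, which is exactly the free-app-with-IAP trap described above.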
Computer Engineering is still a degree where you combine both Computer Science courses with Electrical Engineering courses.
You typically want to go this route if you want to be the kind of person who designs the logic for next-generation GPUs/CPUs, or if you like working where hardware meets programming.
It's worth noting that one reason grilles can get so large is to provide better cooling for larger engines. With a larger grille you get more air flowing through the radiator, which allows the engine to run more efficiently.
Electric vehicles don't need the same kind of cooling that ICE engines do, so having an electric Truck/SUV would allow for different designs which could be beneficial for pedestrians if they were struck.
It's a small model by comparison. If you want something that's offline and actually closer to comparing to ChatGPT 3.5, you'll want the Mixtral 8x7B model instead (running on a beefy machine):
You'd have to read the article to know what they're getting at.
The use case provided was for businesses like a car wash that puts a sticker on a car windshield. The ML model would be able to detect if the customer attempted to transfer the sticker from one car to another.
A pretrained ML model to detect this is actually a very good use case.
However, I think the implementation of this as an "anti-tampering detector" is a dangerous route to tread, since there are other factors that need to be considered.
The researchers noticed that if someone attempted to remove a tag from a product, it would slightly disturb the metal particles in the glue, making the original signature slightly different. To counter this they trained a model:
The researchers produced a light-powered antitampering tag that is about 4 square millimeters in size. They also demonstrated a machine-learning model that helps detect tampering by identifying similar glue pattern fingerprints with more than 99 percent accuracy.
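The matching step can be pictured as comparing an enrolled fingerprint against a fresh scan and flagging a mismatch. This is purely illustrative — the researchers use a trained ML model, while this sketch stands in with cosine similarity and a made-up threshold:

```python
import numpy as np

# Illustrative only: pretend each scan of the tag's glue pattern yields a
# feature vector ("fingerprint"). Tampering disturbs the metal particles,
# so a fresh scan no longer matches the enrolled fingerprint.
THRESHOLD = 0.95  # assumed cutoff, not from the paper

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def looks_tampered(enrolled: np.ndarray, scanned: np.ndarray) -> bool:
    return cosine_similarity(enrolled, scanned) < THRESHOLD

rng = np.random.default_rng(0)
enrolled = rng.normal(size=64)                        # fingerprint on record

intact = enrolled + rng.normal(scale=0.05, size=64)   # small scan noise
peeled = enrolled + rng.normal(scale=1.0, size=64)    # glue badly disturbed

print(looks_tampered(enrolled, intact))  # False -- same tag, minor noise
print(looks_tampered(enrolled, peeled))  # True  -- fingerprint changed
```

Note the threshold choice is doing all the work here: set it too tight and ordinary scan noise (heat, wear, a scratched windshield) trips the alarm, which is the false-accusation risk discussed below.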
It's a good use case for an ML model.
In my opinion, this should only be used for continuing to detect the product itself.
The danger that I can see with this product would be a decision made by management thinking that they can rely on this to detect tampering without considering other factors.
The use case provided in the article was for something like a car wash sticker placed on a customer's car.
If the customer tried to peel it off and reattach it to a different car, the business could detect that as tampering.
However, in my opinion, there are a number of other ways this model could falsely accuse someone of tampering:
- Temperature swings. A hot day could warp the glue/sticker slightly, which would cause the anti-tampering device to go off the next time it's scanned.
- Having to get the windshield replaced because of damage/cracks. The customer would transfer the sticker and unknowingly invalidate it.
- Kids. Just don't underestimate them.
In the end, most management won't really understand this device well beyond statements like, "You can detect tampering with more than 99 percent accuracy!"
And unless they inform customers of how the anti-tampering detection works, customers won't understand why they're being accused of tampering with the sticker.
"AI" is the broadest umbrella term for any of these tools. That's why I pointed out that OP really should be a bit more specific as to what they mean with their question.
AI doesn't have the same meaning it had over 10 years ago, when we used it almost exclusively for machines that could think for themselves.
AI is a very broad topic. Unless you only want to talk about Large Language Models (like ChatGPT) or AI image generators (like Midjourney), there are a lot of uses for AI that you don't seem to be considering.
It's great for upscaling old videos (this would fall under image-generating AI, since it can be used for colorizing, improving details, and adding in additional frames) so that you end up with something like:
https://www.youtube.com/watch?v=hZ1OgQL9_Cw
It's useful for scanning an image for text and being able to copy it out (OCR).
It's excellent if you're deaf, or sitting in a lobby with a muted live broadcast and want to see what is being said with closed captions (Speech to Text).
Flying your own drone with object detection/avoidance.
There's a lot more, but basically, it's great at taking mundane tasks where you're stuck doing the same (or similar) thing over, and over, and over again, and automating it.
Then they won't get your messages or any other information specific to your device.
But cars don't need that connection to phone home with all of the data that the car itself is picking up on. Cars today all have some sort of cheap connection so that they can pass on your data one way or another.