- Pro
- Security
Devices treat public text as commands without checking the intent
When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works.
(Image credit: Traffic Safety Warehouse)
- Copy link
- X
- Threads
Sign up for breaking news, reviews, opinion, top tech deals, and more.
Contact me with news and offers from other Future brands Receive email from us on behalf of our trusted partners or sponsors By submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.You are now subscribed
Your newsletter sign-up was successful
An account already exists for this email address, please log in. Subscribe to our newsletter- Printed words can override sensors and context inside autonomous decision systems
- Vision language models treat public text as commands without verifying intent
- Road signs become attack vectors when AI reads language too literally
Autonomous vehicles and drones rely on vision systems that combine image recognition with language processing to interpret their surroundings, helping them read road signs, labels, and markings as contextual information that supports navigation and identification.
Researchers from the University of California, Santa Cruz, and Johns Hopkins set out to test whether that assumption holds when written language is deliberately manipulated.
The experiment focused on whether text visible to autonomous vehicle cameras could be misread as an instruction rather than simple environmental data, and found large vision language models could be coerced into following commands embedded in road signs.
You may like-
Can top AI tools be bullied into malicious work? ChatGPT, Gemini, and more are put to the test, and the results are actually genuinely surprising
-
That's not very trendy of them - AI browsers can be hacked with a simple hashtag, experts warn
-
Experts tried to get AI to create malicious security threats - but what it did next was a surprise even to them
What the experiments revealed
In simulated driving scenarios, a self-driving car initially behaved correctly when approaching a stop signal and an active crosswalk.
When a modified sign entered the camera’s view, the same system interpreted the text as a directive and attempted a left turn despite pedestrians being present.
This shift occurred without any change to traffic lights, road layout, or human activity, indicating that written language alone influenced the decision.
This class of attack relies on indirect prompt injection, where input data is processed as a command.
Are you a pro? Subscribe to our newsletterContact me with news and offers from other Future brandsReceive email from us on behalf of our trusted partners or sponsorsBy submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.The team altered words such as “proceed” or “turn left” using AI tools to increase the likelihood of compliance.
Language choice mattered less than expected, as commands written in English, Chinese, Spanish, and mixed-language forms were all effective.
Visual presentation also played a role, with color contrast, font style, and placement affecting outcomes.
You may like-
Can top AI tools be bullied into malicious work? ChatGPT, Gemini, and more are put to the test, and the results are actually genuinely surprising
-
That's not very trendy of them - AI browsers can be hacked with a simple hashtag, experts warn
-
Experts tried to get AI to create malicious security threats - but what it did next was a surprise even to them
In several cases, green backgrounds with yellow text produced consistent results across models.
The experiments compared two vision language models across driving and drone scenarios.
While many results were similar, self-driving car tests showed a large gap in success rates between models.
Drone systems proved even more predictable in their responses.
In one test, a drone correctly identified a police vehicle based on appearance alone.
Adding specific words to a generic vehicle caused the system to misidentify it as a police car belonging to a specific department, despite no physical indicators supporting that claim.
All testing took place in simulated or controlled environments to avoid real-world harm.
Even so, the findings raise concerns about how autonomous systems validate visual input.
Traditional safeguards, such as a firewall or endpoint protection, do not address instructions embedded in physical spaces.
Malware removal are irrelevant when the attack requires only printed text, leaving responsibility with system designers and regulators rather than end users.
Manufacturers must ensure that autonomous systems treat environmental text as contextual information instead of executable instructions.
Until those controls exist, users can protect themselves by limiting reliance on autonomous features and maintaining manual oversight whenever possible.
Via The Register
Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button!
And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.
Efosa UdinmwenFreelance JournalistEfosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking.
View MoreYou must confirm your public display name before commenting
Please logout and then login again, you will then be prompted to enter your display name.
Logout Read more
Can top AI tools be bullied into malicious work? ChatGPT, Gemini, and more are put to the test, and the results are actually genuinely surprising
That's not very trendy of them - AI browsers can be hacked with a simple hashtag, experts warn
Experts tried to get AI to create malicious security threats - but what it did next was a surprise even to them
Researchers claim ChatGPT has a whole host of worrying security flaws - here's what they found
If hackers can use AI to automate cyber attacks, killer robots are the least of our problems
DeepSeek took off as an AI superstar a year ago - but could it also be a major security risk? These experts think so
Latest in Security
Russian hackers are targeting a new Office 365 zero-day, so patch now or face attack
Dangerous new malware targets macOS devices via OpenVSX extensions - here's how to stay safe
Malwarebytes and ChatGPT team up to check all of those suspicious texts, emails, and URLs with one simple phrase
AI agent social media network Moltbook is a security disaster - millions of credentials and other details left unsecured
Panera Bread data breach much more serious than we thought - over 5 million customers were hit, new reports claim
Notepad++ hit by suspected Chinese state-sponsored hackers - here's what we know so far
Latest in News
'We're not going to go down the road of pay-to-win or trapping you to buy monetized products' — Sea of Remnants developer discusses microtransactions in the upcoming free-to-play game
No, Ubisoft did actually announce The Division: Definitive Edition but no one saw it, and it's not a remake or remaster like fans expected
Where hi-fi, art and chemistry collide, you get Molecular Audio
I didn't even know Netflix was on the PS3, but it won't matter soon — the streaming app will leave the console after 16 years next month
Independent auditors confirm NordVPN never stores your data – for the 6th time
Sea of Remnants has 400+ named NPCs in its open world, each 'with their own individual story arcs' that can be altered by your actions
LATEST ARTICLES- 1Road markers are a new target for hackers - experts find self-driving cars and autonomous drones can be misled by malicious instructions written on road signs
- 2AI agent social media network Moltbook is a security disaster - millions of credentials and other details left unsecured
- 3Monarch: Legacy of Monsters season 2’s official trailer reveals more of Titan X, and it looks a lot like Kong’s last rival in Netflix’s Monsterverse TV series
- 4How to turn Minecraft into Animal Crossing, Pokemon and more
- 5Canon's latest PowerShot proves the compact camera isn't dead — and testing it reminded me why I got into photography in the first place