are difficult or impossible to explain, even to their own developers. Hallucinations, in particular, can pose a significant risk when AI is entrusted with control over physical systems or provides operators with misleading information. When a robot makes a harmful decision, it is also not always clear who is accountable. The legal and ethical ambiguity surrounding liability has prompted calls for clear frameworks to govern AI deployment.
AI model integrity and vulnerabilities
As robotics systems increasingly rely on AI decision-making, researchers are warning that flaws or manipulation in underlying models could lead to unpredictable and potentially hazardous behavior.
Security experts warn that AI models are vulnerable to data poisoning, malicious manipulation of training pipelines, and adversarial interference. Such attacks can compromise system integrity and trigger erratic outputs in motion planning or control functions. Feeding corrupted data into a model can have serious consequences; in a physical system, that can mean unsafe actions or even harm.

Foundation models, including large language models and computer vision systems, have also come under scrutiny for their susceptibility to targeted manipulation. Once compromised, these models may produce uncontrolled or misleading outputs, undermining trust in autonomous platforms. Industry leaders are calling for tighter controls over training.
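The "adversarial interference" described above can be illustrated with a toy sketch (not from the paper): a minimal FGSM-style perturbation against a hand-trained logistic-regression classifier standing in for a perception model. All data, names, and the perturbation budget here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 2-D clusters: class 0 near (-2,-2), class 1 near (2,2)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression with plain gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def predict(x):
    return int(sigmoid(x @ w + b) > 0.5)

x_clean = np.array([2.0, 2.0])      # clearly class 1
# FGSM step for a true-label-1 input: the loss gradient w.r.t. x is
# (p - 1) * w, whose sign is -sign(w), so we step along -sign(w).
eps = 3.0                           # exaggerated budget for this 2-D toy
x_adv = x_clean - eps * np.sign(w)

print(predict(x_clean), predict(x_adv))  # prediction flips: 1 then 0
```

The point of the sketch is that a small, structured nudge computed from the model's own gradients, rather than random noise, is what flips the decision; the same principle scales up to image and language models.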
Free download: "AI in Robotics" position paper by the International Federation of Robotics