This Microsoft security team stress-tests AI for its worst-case scenarios
Source: www.fastcompany.com
As soon as new AI products are released, security researchers and pranksters begin probing them for weaknesses, trying to push systems to violate their own safety precautions and coax them into producing anything from offensive content to instructions for building weapons. After all, AI risks are not just theoretical. In recent months, various AI companies have faced criticism for software that allegedly contributed to mental illness and suicide, generated nonconsensual fake nude images of real people, and aided hackers in cybercrime.

At the same time, techniques for bypassing safeguards continue to evolve, with recent methods ranging from malicious prompts disguised as poetry to ideas surreptitiously planted in AI assistant memories via innocuous-looking online tools.

But long before new models reach the public, internal security teams are already stress-testing them. At Microsoft, that responsibility largely falls to the company’s AI