New year, new AI challenges! NIST has published a taxonomy of attacks on AI systems, along with potential mitigations. And the list is scary… because we still know very little about how to prevent some of these attacks.
Alex takes you on a tour of the LLM attacks, adding examples and commentary for each of them.
Links:
- The AI Business article on the NIST report https://aibusiness.com/ml/nist-creates-cybersecurity-playbook-for-generative-ai
- The NIST article announcing the report https://www.nist.gov/news-events/news/2024/01/nist-identifies-types-cyberattacks-manipulate-behavior-ai-systems
- The NewStack article on new AWS AI tools for Java and Rust https://thenewstack.io/aws-gifts-java-rust-developers-with-useful-tools/
- 7 Guiding principles for working with LLMs https://thenewstack.io/7-guiding-principles-for-working-with-llms/