Resources

Resource Description
OWASP LLM Prompt Hacking OWASP project offering tutorials and a playground to understand and practice prompt injection attacks on LLMs. owasp.org](https://owasp.org/www-project-llm-prompt-hacking/?utm_source=chatgpt.com))
PortSwigger's Web Security Academy: WebLM Attacks Provides labs and exercises focused on exoiting vulnerabilities in web applications that integrate LLMs. (portswigger.net)
LLM Hacker's Handbook A comprensive guide detailing fundamentals, prompt injection techniques, offensive strategies, and defense mechanisms related to LLM hackg. (github.com)
[Red Teaming LLM Application(https://www.deeplearning.ai/short-courses/red-teaming-llm-applications/) A short course by DeepLearning.AIhat teaches how to identify vulnerabilities in LLM applications using red teaming techniques. (deeplearning.ai)
Hacc-Man: An Arcade Game for Jailbreaking LLMs An interactive game designed to challenge players to "jailbreak" LLMs, enhancing understanding of potential vulnerabilities. (arxiv.org)
Resource Description
Top 10 Web Hacking Techniques of 2024 A comprehensive overview by PortSwigger highlighting the most significant web hacking techniques identified in 2024.
Beyond Flesh and Code: Building an LLM-Based Attack Lifecycle with a Self-Guided Agent Deep Instinct's exploration of constructing an attack lifecycle leveraging Large Language Models (LLMs) and self-guided agents.
Practical Attacks on LLMs Iterasec's blog discussing real-world attack scenarios targeting Large Language Models, emphasizing practical implications.
LLM Attacks GitHub Repository A GitHub repository compiling various methods and tools related to attacks on Large Language Models.
Advance Prompt Injection for LLM Pentesting An article on Medium discussing advanced techniques for prompt injection attacks in the context of pentesting LLMs.
Generative AI for HR: 3 Fascinating Use Cases for Practitioners ADP's exploration of how generative AI can be applied within Human Resources, presenting three practical use cases.
OWASP - Machine Learning Security Top Ten OWASP's list highlighting the top ten security concerns in machine learning, aiming to raise awareness and provide guidance.
MITRE - ATLAS Matrix for ML Attacks and Tactics MITRE's ATLAS framework detailing adversarial tactics, techniques, and case studies related to machine learning systems.
NIST - Artificial Intelligence Risk Management Framework (AI RMF 1.0) NIST's framework providing guidelines for managing risks associated with artificial intelligence systems.
BugCrowd - AI Deep Dive: Pen testing & Ultimate Guide to AI Security BugCrowd's comprehensive guide on penetration testing and security considerations specific to AI systems.
Microsoft - Planning Red Teaming for Large Language Models (LLMs) and Their Applications Microsoft's insights into organizing red teaming exercises tailored for Large Language Models and their applications.
HackerOne - The Ultimate Guide to Managing Ethical and Security Risks in AI HackerOne's guide addressing ethical considerations and security risks inherent in AI deployment.
NVIDIA AI Red Team: An Introduction NVIDIA's introductory piece on their AI Red Team, focusing on identifying and mitigating AI-related vulnerabilities.
Gandalf - Prompt Injection Skills Game An interactive game designed to teach and test skills related to prompt injection attacks.
Lakera - Real World LLM Exploits Lakera's analysis of real-world exploits targeting Large Language Models, discussing vulnerabilities and mitigation strategies.
GPT Prompt Attack Game A game that simulates prompt injection attacks, allowing users to practice and understand potential vulnerabilities in LLMs.
SpyLogic Prompt Injection Attack Playground A sandbox environment for experimenting with prompt injection attacks in a controlled setting.
Offensive ML Playbook A resource detailing offensive strategies and techniques applicable to machine learning systems.
Snyk - Top Considerations for Addressing Risks in the OWASP Top 10 for LLMs Snyk's discussion on mitigating risks outlined in OWASP's top ten list specific to Large Language Models.
Hacking Google Bard - From Prompt Injection to Data Exfiltration A write-up detailing the process of exploiting Google's Bard through prompt injection leading to data exfiltration.
Threat Modeling LLM Applications An article focusing on identifying and mitigating threats specific to applications utilizing Large Language Models.
Portswigger - Web LLM Attacks & LLM Attacks and Prompt Injection PortSwigger's labs providing hands-on exercises related to LLM attacks and prompt injection techniques.
PortSwigger LLM Lab Walkthroughs Detailed walkthroughs of PortSwigger's LLM labs, guiding users through various attack scenarios.
Universal and Transferable Adversarial Attacks on Aligned Language Models A white paper discussing adversarial attacks that are both universal and transferable across different aligned language models.
AI and Prompt Injection Games from Secdim A collection of games designed to educate users on AI vulnerabilities and prompt injection attacks.
Large Language Model (LLM) Pen testing — Part I The first part of a series on penetration testing methodologies tailored for Large Language Models.
Fuzzing Labs AI Security Playlist A curated playlist of videos focusing on AI security topics, hosted by Fuzzing Labs.
LLM Pentest: Leveraging Agent Integration for RCE A write-up on achieving remote code execution through agent integration in LLMs during penetration tests.
AI/LLM-integrated Apps Penetration Testing A resource covering penetration testing methodologies for applications integrating AI/LLMs.
LLM Hacker's Handbook (Retiring Soon) A guide detailing LLM vulnerabilities, security risks, and exploitation techniques.
Damn Vulnerable LLM Project A deliberately vulnerable LLM environment designed for learning and testing security weaknesses.
Damn Vulnerable LLM Agent A vulnerable LLM agent designed for security testing and learning about common AI exploits.
Bug Bounty Platform for AI/ML A bug bounty platform specifically focused on AI and machine learning vulnerabilities.
Netsec Explained: The Cyberpunks Guide to Attacking Generative AI A video resource explaining how to attack generative AI systems.
Netsec Explained: Attacking and Defending Generative AI A collection of resources on offensive and defensive strategies for securing generative AI models.