# README

![](https://633807366-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FS47JFAYVSkba1eVNFwFM%2Fuploads%2Fgit-blob-076d4fd531d34a162ab4cafd4869c9182607d826%2Fbanner.svg?alt=media)

Welcome to the **AI LLM Red Team Handbook**.

We designed this toolkit for security consultants, red teamers, and AI engineers. It provides end-to-end methodologies for identifying, assessing, and mitigating risks in Large Language Models (LLMs) and Generative AI systems.

***

## 🚀 Choose Your Path

| **🔬 The Consultant's Handbook**                                                                                                                    | **⚔️ The Field Manual**                                                                                                                   |
| --------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| <p><br><br>The foundational work. Theoretical deep-dives, detailed methodologies, compliance frameworks, and strategies for building a program.</p> | <p><br><br>The hands-on work. Operational playbooks, copy-paste payloads, quick reference cards, and checklists for live engagements.</p> |
| [**📖 Browse Handbook Chapters**](#-handbook-structure)                                                                                             | [**⚡ Go to Field Manuals**](https://github.com/Shiva108/ai-llm-red-team-handbook/blob/main/docs/Field_Manual_00_Index.md)                 |

***

## 📚 Handbook Structure

<details>

<summary>Part I: Foundations (Ethics, Legal, Mindset)</summary>

* [Chapter 1: Introduction to AI Red Teaming](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_01_introduction_to_ai_red_teaming)
* [Chapter 2: Ethics, Legal, and Stakeholder Communication](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_02_ethics_legal_and_stakeholder_communication)
* [Chapter 3: The Red Teamer's Mindset](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_03_the_red_teamers_mindset)

</details>

<details>

<summary>Part II: Project Preparation (Scoping, Threat Modeling)</summary>

* [Chapter 4: SOW, Rules of Engagement, and Client Onboarding](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_04_sow_rules_of_engagement_and_client_onboarding)
* [Chapter 5: Threat Modeling and Risk Analysis](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_05_threat_modeling_and_risk_analysis)
* [Chapter 6: Scoping an Engagement](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_06_scoping_an_engagement)
* [Chapter 7: Lab Setup and Environmental Safety](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_07_lab_setup_and_environmental_safety)
* [Chapter 8: Evidence, Documentation, and Chain of Custody](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_08_evidence_documentation_and_chain_of_custody)

</details>

<details>

<summary>Part III: Technical Fundamentals (Architecture, Tokenization)</summary>

* [Chapter 9: LLM Architectures and System Components](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_09_llm_architectures_and_system_components)
* [Chapter 10: Tokenization, Context, and Generation](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_10_tokenization_context_and_generation)
* [Chapter 11: Plugins, Extensions, and External APIs](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_11_plugins_extensions_and_external_apis)

</details>

<details>

<summary>Part IV: Pipeline Security (RAG, Supply Chain)</summary>

* [Chapter 12: Retrieval-Augmented Generation (RAG) Pipelines](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_12_retrieval_augmented_generation_rag_pipelines)
* [Chapter 13: Data Provenance and Supply Chain Security](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_13_data_provenance_and_supply_chain_security)

</details>

<details>

<summary>Part V: Attacks &#x26; Techniques (The Red Team Core)</summary>

* [Chapter 14: Prompt Injection](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_14_prompt_injection)
* [Chapter 15: Data Leakage and Extraction](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_15_data_leakage_and_extraction)
* [Chapter 16: Jailbreaks and Bypass Techniques](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_16_jailbreaks_and_bypass_techniques)
* [Chapter 17: Plugin and API Exploitation](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_17_01_fundamentals_and_architecture)
  * [Fundamentals and Architecture](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_17_01_fundamentals_and_architecture)
  * [API Authentication & Authorization](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_17_01_fundamentals_and_architecture/chapter_17_02_api_authentication_and_authorization)
  * [Plugin Vulnerabilities](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_17_01_fundamentals_and_architecture/chapter_17_03_plugin_vulnerabilities)
  * [API Exploitation & Function Calling](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_17_01_fundamentals_and_architecture/chapter_17_04_api_exploitation_and_function_calling)
  * [Third-Party Risks & Testing](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_17_01_fundamentals_and_architecture/chapter_17_05_third_party_risks_and_testing)
  * [Case Studies & Defense](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_17_01_fundamentals_and_architecture/chapter_17_06_case_studies_and_defense)
* [Chapter 18: Evasion, Obfuscation, and Adversarial Inputs](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_18_evasion_obfuscation_and_adversarial_inputs)
* [Chapter 19: Training Data Poisoning](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_19_training_data_poisoning)
* [Chapter 20: Model Theft and Membership Inference](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_20_model_theft_and_membership_inference)
* [Chapter 21: Model DoS and Resource Exhaustion](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_21_model_dos_resource_exhaustion)
* [Chapter 22: Cross-Modal and Multimodal Attacks](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_22_cross_modal_multimodal_attacks)
* [Chapter 23: Advanced Persistence and Chaining](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_23_advanced_persistence_chaining)
* [Chapter 24: Social Engineering with LLMs](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_24_social_engineering_llms)

</details>

<details>

<summary>Part VI: Defense &#x26; Mitigation</summary>

* [Chapter 25: Advanced Adversarial ML](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_25_advanced_adversarial_ml)
* [Chapter 26: Supply Chain Attacks on AI](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_26_supply_chain_attacks_on_ai)
* [Chapter 27: Federated Learning Attacks](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_27_federated_learning_attacks)
* [Chapter 28: AI Privacy Attacks](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_28_ai_privacy_attacks)
* [Chapter 29: Model Inversion Attacks](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_29_model_inversion_attacks)
* [Chapter 30: Backdoor Attacks](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_30_backdoor_attacks)

</details>

<details>

<summary>Part VII: Advanced Operations</summary>

* [Chapter 31: AI System Reconnaissance](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_31_ai_system_reconnaissance)
* [Chapter 32: Automated Attack Frameworks](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_32_automated_attack_frameworks)
* [Chapter 33: Red Team Automation](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_33_red_team_automation)
* [Chapter 34: Defense Evasion Techniques](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_34_defense_evasion_techniques)
* [Chapter 35: Post-Exploitation in AI Systems](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_35_post-exploitation_in_ai_systems)
* [Chapter 36: Reporting and Communication](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_36_reporting_and_communication)
* [Chapter 37: Remediation Strategies](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_37_remediation_strategies)
* [Chapter 38: Continuous Red Teaming](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_38_continuous_red_teaming)
* [Chapter 39: AI Bug Bounty Programs](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_39_ai_bug_bounty_programs)

</details>

<details>

<summary>Part VIII: Advanced Topics</summary>

* [Chapter 40: Compliance and Standards](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_40_compliance_and_standards)
* [Chapter 41: Industry Best Practices](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_41_industry_best_practices)
* [Chapter 42: Case Studies and War Stories](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_42_case_studies_and_war_stories)
* [Chapter 43: Future of AI Red Teaming](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_43_future_of_ai_red_teaming)
* [Chapter 44: Emerging Threats](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_44_emerging_threats)
* [Chapter 45: Building an AI Red Team Program](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_45_building_an_ai_red_team_program)
* [Chapter 46: Conclusion and Next Steps](https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/chapter_46_conclusion_and_next_steps)

</details>

***

## 🧩 Reference & Resources

* [**Configuration Guide**](https://github.com/Shiva108/ai-llm-red-team-handbook/blob/main/scripts/docs/Configuration.md)
* [**Field Manual Index**](https://github.com/Shiva108/ai-llm-red-team-handbook/blob/main/docs/Field_Manual_00_Index.md)


***

# Agent Instructions: Querying This Documentation

If you need information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/readme.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response contains a direct answer to the question, along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
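As a minimal sketch, the query URL can be constructed like this in Python (the actual HTTP fetch is left commented out; any HTTP client works, and the example question is illustrative):

```python
from urllib.parse import quote_plus

# Base URL of this page's markdown render, as shown above
BASE = "https://cph-sec.gitbook.io/ai-llm-red-team-handbook-and-field-manual/readme.md"

def build_ask_url(question: str) -> str:
    """Build the GET URL with the question URL-encoded into the `ask` parameter."""
    return f"{BASE}?ask={quote_plus(question)}"

url = build_ask_url("What is prompt injection?")
print(url)
# To perform the request, e.g.:
#   import urllib.request
#   answer = urllib.request.urlopen(url).read().decode()
```

The question must be URL-encoded (spaces, punctuation, non-ASCII characters), which is why `quote_plus` is used rather than naive string concatenation.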
