Future Tech & AI

LLM Security

Securing the new wave of Generative AI applications.

Operational Phase

01

Prompt Injection

Tricking the LLM into ignoring instructions (DAN mode).

02

Jailbreaking

Bypassing safety filters to generate harmful content.

03

Data Leakage

Ensuring PII doesn't end up in training data.

04

Indirect Injection

Attacking LLMs via web pages they read.