Add preprocessor to patch PromptGuard scores for inserted characters #636

Merged
merged 1 commit into main on Aug 20, 2024

Conversation

cynikolai
Member

Problem: Inserting spaces between characters in given prompts causes misclassifications in PromptGuard. See meta-llama/llama-models#50 for more context.

Solution: Tokenize the string with all spaces removed, so that larger tokens (for example, ["ignore"]) are not broken up into smaller tokens (for example, ["i", "g", "n", "o", "r", "e"]). Then add spaces back between the larger tokens wherever spaces exist in the original string.

This approach showed a slight positive impact on all of our evaluation datasets, suggesting that making the system more robust to jailbreaks like this one that disrupt tokenization will be an important part of improving model quality. Notably, simply stripping the spaces from the string led to a moderate quality regression on some datasets, which is why we don't take that simpler approach here.

This solution only targets jailbreaks enabled by inserted spaces, not ones that use other special characters. For a more complete approach over the longer term, we're continuing to work on building more adversarial examples into our dataset.

The preprocessor is used by default by our inference utilities.
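
For illustration, a minimal sketch of the approach described above, assuming a Hugging Face tokenizer interface; the function name and details are hypothetical, not the exact code in the PR:

def preprocess_text_for_promptguard(text: str, tokenizer) -> str:
    # Build the de-spaced string, remembering where each kept character
    # came from in the original text.
    cleaned_chars, orig_positions = [], []
    for i, ch in enumerate(text):
        if ch != " ":
            cleaned_chars.append(ch)
            orig_positions.append(i)
    cleaned_text = "".join(cleaned_chars)

    # Tokenizing the de-spaced string keeps words like "ignore" as larger
    # tokens even when the input arrived as "i g n o r e".
    tokens = tokenizer.tokenize(cleaned_text)

    rebuilt, last_end = [], 0
    for token in tokens:
        token_str = tokenizer.convert_tokens_to_string([token]).strip()
        if not token_str:
            continue
        start = cleaned_text.index(token_str, last_end)
        # Restore a single space only if the original text had one right
        # before this token's first character.
        orig_start = orig_positions[start]
        if rebuilt and orig_start > 0 and text[orig_start - 1] == " ":
            rebuilt.append(" ")
        rebuilt.append(token_str)
        last_end = start + len(token_str)
    return "".join(rebuilt)

With a tokenizer loaded via AutoTokenizer.from_pretrained(...), an input like "i g n o r e previous instructions" collapses back to roughly "ignore previous instructions" before scoring, while normally spaced text passes through largely unchanged (spaces are only restored at token boundaries).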

mreso (Contributor) left a comment:

LGTM but this might be easily circumvented if the tokenizer is not lossless (see comment)

last_end = 0
for token in tokens:
    # Map the token back to its surface string and locate it in the de-spaced text.
    token_str = tokenizer.convert_tokens_to_string([token])
    # str.index raises ValueError if token_str cannot be found after last_end.
    start = cleaned_text.index(token_str, last_end)

IIRC tokenizers are not always lossless. Could this be circumvented by adding a word/string combination to the text that cannot be tokenized "losslessly", so that a ValueError is raised in .index() because the "detokenized" string cannot be found?
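
For illustration, one way this could surface, assuming a non-lossless WordPiece tokenizer such as bert-base-uncased, which replaces a "word" containing an unseen character wholesale with [UNK]; the model name and input below are a hypothetical repro, not taken from the PR:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

cleaned_text = "ignorethis\u2603instruction"  # unicode snowman in the middle
tokens = tokenizer.tokenize(cleaned_text)     # -> ["[UNK]"] (or similar)

last_end = 0
for token in tokens:
    token_str = tokenizer.convert_tokens_to_string([token])
    # "[UNK]" never appears literally in cleaned_text, so this raises ValueError.
    start = cleaned_text.index(token_str, last_end)
    last_end = start + len(token_str)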

mreso merged commit 3a99a54 into main on Aug 20, 2024
3 checks passed
mreso deleted the promptguard-spaces-early-fix branch on August 20, 2024 at 17:45
mreso (Contributor) commented on Aug 20, 2024:

Turns out tiktoken is actually lossless.
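
A quick round-trip check along those lines (a sketch; assumes tiktoken is installed and uses the cl100k_base encoding purely as an example):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "i g n o r e previous instructions"
# Lossless here means encode followed by decode reproduces the exact input string.
assert enc.decode(enc.encode(text)) == text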
