Automation

Adversarial examples cause NLP models to misclassify their inputs.

Using TextAttack

TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP. Ready-made attack recipes can be run from its command-line interface, as in the commands below; a sketch of the equivalent Python API follows them.

# TextFooler: word-level attack that swaps important words for similar substitutes
textattack attack --model bert-base-uncased-mr --recipe textfooler --num-examples 100

# DeepWordBug: character-level attack that perturbs characters in key words
textattack attack --model distilbert-base-uncased-cola --recipe deepwordbug --num-examples 100
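The same attacks can also be scripted with TextAttack's Python API. The following is a minimal sketch, assuming textattack and transformers are installed; the checkpoint name textattack/bert-base-uncased-rotten-tomatoes and the rotten_tomatoes dataset are assumptions standing in for the bert-base-uncased-mr CLI alias.

import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Load the victim model and tokenizer, then wrap them for TextAttack
# (assumed checkpoint: textattack/bert-base-uncased-rotten-tomatoes).
model = transformers.AutoModelForSequenceClassification.from_pretrained(
    "textattack/bert-base-uncased-rotten-tomatoes")
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "textattack/bert-base-uncased-rotten-tomatoes")
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build the TextFooler recipe and attack 100 test examples.
attack = TextFoolerJin2019.build(model_wrapper)
dataset = HuggingFaceDataset("rotten_tomatoes", split="test")
attack_args = AttackArgs(num_examples=100)
attacker = Attacker(attack, dataset, attack_args)
attacker.attack_dataset()

Swapping TextFoolerJin2019 for another recipe class (for example, the DeepWordBug recipe) changes the attack while keeping the rest of the script the same.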
