
Backdoor Defense and Attack Strategies Quiz
Authored by Crappy Things
English • University

16 questions
1.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
According to the simplified illustration, how does a backdoor trigger alter the model's decision-making process?
It pushes the decision boundary further away from the non-target classes (B and C), making them harder to misclassify.
It creates a 'shortcut' or a new, small region of misclassification (backdoor area) for non-target classes that is very close to their original location in the feature space.
It moves the representations of all inputs (A, B, and C) into a single point in the feature space.
It only affects the representations of the target class A, making them more robust.
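As a side note on questions 1 and 3: the 'shortcut' picture can be probed empirically by comparing penultimate-layer features. The sketch below is a minimal illustration; model.features, apply_trigger, and the input batches are hypothetical placeholders, not part of the quiz material.

import torch

@torch.no_grad()
def penultimate_features(model, x):
    # Assumes the model exposes a `features` method returning the
    # penultimate-layer representation (a common but not universal pattern).
    return model.features(x).flatten(start_dim=1)

@torch.no_grad()
def shortcut_distances(model, clean_a, clean_b, apply_trigger):
    # Where do triggered class-B inputs land relative to the clean class-A
    # and class-B feature centroids?
    feats_a = penultimate_features(model, clean_a)                  # clean class A
    feats_b = penultimate_features(model, clean_b)                  # clean class B
    feats_bt = penultimate_features(model, apply_trigger(clean_b))  # triggered B
    centroid_a, centroid_b = feats_a.mean(dim=0), feats_b.mean(dim=0)
    d_to_a = (feats_bt - centroid_a).norm(dim=1).mean().item()
    d_to_b = (feats_bt - centroid_b).norm(dim=1).mean().item()
    return d_to_a, d_to_b

In an infected model one would expect triggered B inputs to cross into A's decision region while remaining measurably distinct from clean A features, consistent with the intuition behind question 3.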
2.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
The 'Minimum Δ needed to misclassify all samples into A' is a key metric. What does a significantly smaller Δ for an infected model imply?
The model is poorly trained and generally unstable.
The model has learned a highly efficient pathway (the backdoor) to the target class A for inputs from other classes, requiring minimal perturbation.
The clean model was already close to misclassifying everything as A.
The trigger pattern itself is very large and complex.
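The Δ in question 2 can be made concrete via trigger reverse-engineering in the style of Neural Cleanse: optimize a mask and pattern that push every input toward class A while penalizing the mask's L1 norm. The sketch below is illustrative; the model, data loader, and hyperparameters are assumptions, not values from the quiz.

import torch
import torch.nn.functional as F

def estimate_min_delta(model, loader, target_class, epochs=5,
                       lam=1e-2, lr=0.1, device="cpu"):
    # Optimize one mask/pattern pair that misclassifies inputs into
    # `target_class`; the final mask L1 norm is a proxy for the minimum Δ.
    model.eval()
    x0, _ = next(iter(loader))
    _, c, h, w = x0.shape
    mask_logit = torch.zeros(1, 1, h, w, device=device, requires_grad=True)
    pattern_logit = torch.zeros(1, c, h, w, device=device, requires_grad=True)
    opt = torch.optim.Adam([mask_logit, pattern_logit], lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            m = torch.sigmoid(mask_logit)               # mask values in [0, 1]
            p = torch.tanh(pattern_logit) * 0.5 + 0.5   # pattern values in [0, 1]
            x_adv = (1 - m) * x + m * p                 # stamp the candidate trigger
            target = torch.full((x.size(0),), target_class,
                                dtype=torch.long, device=device)
            loss = F.cross_entropy(model(x_adv), target) + lam * m.sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return torch.sigmoid(mask_logit).sum().item()       # proxy for minimum Δ

Repeating this for every candidate target class and comparing the resulting Δ values is what makes the 'significantly smaller Δ' in question 2 measurable.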
3.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
True or False: The illustration shows that in an infected model, a triggered input from class B is represented in the exact same location in the feature space as a clean input from class A.
True
False
4.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
True or False: Neural Cleanse operates on the assumption that an attacker wants to make the backdoor trigger as large and noticeable as possible to ensure its effectiveness.
True
False
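Question 4's premise is the inverse of Neural Cleanse's actual assumption: triggers are expected to be small, so the class whose reverse-engineered trigger is anomalously small is the one flagged. A common way to score this is the median absolute deviation (MAD); the constant 1.4826 and the threshold of 2 below follow the usual convention, and the per-class Δ values are hypothetical.

import statistics

def anomaly_scores(deltas):
    # deltas: per-class minimum-Δ estimates (e.g. reversed-trigger mask L1 norms).
    med = statistics.median(deltas)
    mad = 1.4826 * statistics.median([abs(d - med) for d in deltas])
    return [abs(d - med) / mad for d in deltas]

deltas = [97.1, 102.4, 95.8, 11.3, 99.0]   # hypothetical per-class Δ values
scores = anomaly_scores(deltas)
flagged = [i for i, s in enumerate(scores)
           if s > 2 and deltas[i] < statistics.median(deltas)]
print(flagged)   # class 3, whose Δ is far below the median, is the suspect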
5.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
The slide states, 'On CIFAR-10, even if the poisoning rate is less than 1%, various attacks can still achieve high attack success rates.' What is the most critical implication of this for defense design?
Defenses must be able to perfectly identify every single backdoored sample to be effective.
Simply removing a random 1% of the training data is a viable defense strategy.
The backdoor signal is very strong and easily learned, so defenses must be highly sensitive and cannot rely on the rarity of poisoned samples alone.
The CIFAR-10 dataset is inherently flawed and should not be used for security research.
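To make the sub-1% figure in question 5 tangible: on CIFAR-10's 50,000 training images, a 0.5% poisoning rate means only 250 stamped samples. The sketch below shows one common dirty-label patch attack; the 3x3 white patch and the tensor layout are illustrative assumptions, not the specific attacks from the slide.

import random
import torch

def poison_dataset(images, labels, rate=0.005, target_class=0):
    # images: float tensor [N, C, H, W] in [0, 1]; labels: long tensor [N].
    # Stamp a small white patch on a random `rate` fraction of the samples
    # and relabel them as `target_class`.
    n = images.size(0)
    poisoned = random.sample(range(n), int(rate * n))   # 0.5% of 50,000 -> 250
    for i in poisoned:
        images[i, :, -3:, -3:] = 1.0                    # 3x3 patch, bottom-right
        labels[i] = target_class
    return images, labels, poisoned

Because the patch is a perfectly consistent, easily separable feature, even a few hundred such samples can be enough for the network to learn the trigger-to-label shortcut, which is why defenses cannot rely on poisoned samples being rare.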
6.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
What is the primary risk of a defense strategy that 'may accidentally remove a lot of valuable data when the dataset is completely clean'?
It increases the time and computational cost of training.
It would alert the attacker that a defense is in place.
It degrades the model's performance on its primary task (i.e., hurts clean accuracy) by removing valid training examples.
It might remove the wrong backdoored samples, leaving the most effective ones in the dataset.
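One way to see the risk raised in question 6: a simple filtering defense might drop the highest-loss training samples as 'suspicious'. On a genuinely clean dataset every dropped sample is a false positive, and high-loss samples are often precisely the hard, informative examples. This is a hedged sketch of that generic heuristic, not a specific defense from the quiz.

import torch
import torch.nn.functional as F

@torch.no_grad()
def keep_after_loss_filter(model, images, labels, remove_frac=0.05):
    # Return indices kept after dropping the top `remove_frac` fraction of
    # samples ranked by per-sample cross-entropy loss. (For a real dataset
    # this would be computed in batches.)
    model.eval()
    losses = F.cross_entropy(model(images), labels, reduction="none")
    n_remove = int(remove_frac * images.size(0))
    order = torch.argsort(losses, descending=True)
    return order[n_remove:]   # everything except the highest-loss samples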
7.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
True or False: The bar chart suggests that the 'Blend' attack is generally less effective than the 'Trojan' attack at lower poisoning rates.
True
False