Anti-Backdoor Learning Quiz

University

14 Qs

Similar activities

Flocabulary Two Bad Cousins (KG - University, 15 Qs)

Spider-man Quiz 1 (1st Grade - Professional Development, 10 Qs)

Stink and the Attack of the Slime Mold (3rd Grade - University, 15 Qs)

Exploring Hatchet: Chapters 1-7 (6th Grade - University, 10 Qs)

Is eating eggs really bad for your health? Mark or complete. (University, 14 Qs)

Insights from Chapter 24 of I Am Malala (10th Grade - University, 10 Qs)

Review Week 1 (University, 10 Qs)

quiz si 24 c inggris (University, 10 Qs)


Assessment · Quiz · English · University · Hard

Created by Crappy Things


14 questions


1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Why would filtering out low-loss examples become 'significantly inaccurate' after many training epochs (e.g., beyond epoch 20)?

Because by that stage, backdoor examples start to have high training loss.

Because by that stage, many correctly learned clean examples also have low training loss, making them indistinguishable from backdoor examples on this metric alone.

Because the model will have completely forgotten the backdoor by epoch 20.

Because the definition of 'low loss' changes after epoch 20.

2.

OPEN ENDED QUESTION

3 mins • 1 pt

The strategy of removing low-loss examples early in training is ineffective partly because some powerful attacks can succeed even with a very small number of backdoor examples.

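For context, the loss-guided isolation that the two questions above probe can be sketched in a few lines. This is a minimal illustration assuming a PyTorch classifier and a map-style dataset of (input, label) pairs; the helper name isolate_low_loss and its default rate are assumptions for this sketch, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset, Subset

@torch.no_grad()
def isolate_low_loss(model: torch.nn.Module, dataset: Dataset,
                     p: float = 0.01, device: str = "cpu") -> Subset:
    """Return the fraction p of training examples with the LOWEST loss.

    Early in training, backdoor examples are fitted much faster than clean
    ones, so low loss flags likely poison. After many epochs most clean
    examples also reach near-zero loss, which is why this ranking becomes
    'significantly inaccurate' late in training, and why it can miss attacks
    that plant only a handful of poisoned examples.
    """
    model.eval()
    losses = []
    for x, y in DataLoader(dataset, batch_size=256, shuffle=False):
        x, y = x.to(device), y.to(device)
        # reduction="none" keeps one loss value per example.
        losses.append(F.cross_entropy(model(x), y, reduction="none").cpu())
    losses = torch.cat(losses)
    k = max(1, int(p * len(losses)))
    suspect_idx = torch.argsort(losses)[:k]  # indices of the k lowest losses
    return Subset(dataset, suspect_idx.tolist())
```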

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

In the ABL method, what is 'gradient ascent' used for?

To train the model normally on the clean set of data.

To isolate the data with the lowest loss values.

To actively 'unlearn' the suspected backdoored data by maximizing its loss function.

To initialize the model's weights.
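The ascent step itself is small once the suspect set is isolated. Below is a minimal sketch assuming PyTorch; pairing the two loaders with zip and the weight gamma are illustrative choices, not the paper's exact schedule. Maximizing the loss on the isolated data is implemented by descending on its negation.

```python
import torch.nn.functional as F

def unlearning_epoch(model, clean_loader, isolated_loader, optimizer,
                     device="cpu", gamma=1.0):
    """One illustrative epoch of ABL-style finetuning.

    Gradient DESCENT on clean batches preserves normal accuracy, while
    gradient ASCENT on the isolated (suspected backdoor) batches, done
    by subtracting that loss term, drives its loss upward and breaks the
    learned trigger-to-target shortcut.
    """
    model.train()
    for (xc, yc), (xb, yb) in zip(clean_loader, isolated_loader):
        xc, yc = xc.to(device), yc.to(device)
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        clean_loss = F.cross_entropy(model(xc), yc)
        suspect_loss = F.cross_entropy(model(xb), yb)
        # Negating suspect_loss turns descent into ascent on that term.
        (clean_loss - gamma * suspect_loss).backward()
        optimizer.step()
```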

4.

OPEN ENDED QUESTION

3 mins • 1 pt

The ABL method requires the defender to know the exact poisoning rate to set the isolation rate p.


5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

According to the table, how does ABL (Ours) perform on the 'SIG' attack type?

It achieves 0.09% Attack Success Rate (ASR) and 88.27% Clean Accuracy (CA).

It fails to defend, resulting in 99.46% ASR.

It results in the lowest CA of all defenses.

It performs worse than 'No Defense'.

6.

OPEN ENDED QUESTION

3 mins • 1 pt

Based on the 'Average' results, ABL achieves both the lowest average Attack Success Rate and the highest average Clean Accuracy compared to the other defenses listed.


7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

What is the trade-off shown in the graphs when setting the isolation rate p?

Using a higher isolation rate perfectly defends against attacks with no negative side effects.

A higher isolation rate better defends against attacks (lower ASR) but at the cost of harming the model's accuracy on clean data (lower CA).

A higher isolation rate improves clean accuracy but weakens the defense against attacks.

The isolation rate has no effect on either ASR or CA.
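Both axes of such graphs reduce to one measurement: CA is top-1 accuracy on the untouched test set, and ASR is top-1 accuracy on trigger-stamped test inputs relabeled to the attacker's target class. A schematic sweep over p follows (PyTorch assumed; train_with_abl and the dataset names are hypothetical stand-ins for the full pipeline, not the paper's code).

```python
import torch
from torch.utils.data import DataLoader

@torch.no_grad()
def accuracy(model, dataset, device="cpu"):
    """Top-1 accuracy. The same function yields both metrics:
    CA on the clean test set, ASR on the trigger-stamped,
    target-relabeled test set."""
    model.eval()
    correct = total = 0
    for x, y in DataLoader(dataset, batch_size=256):
        correct += (model(x.to(device)).argmax(1) == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

# Hypothetical sweep: larger p isolates (and unlearns) more suspected
# poison, so ASR drops, but more clean examples are swept into the
# isolated set and unlearned with it, so CA drops too.
for p in (0.005, 0.01, 0.05, 0.10):
    model = train_with_abl(train_set, isolation_rate=p)  # hypothetical helper
    print(f"p={p:.3f}  ASR={accuracy(model, poisoned_test):.2%}  "
          f"CA={accuracy(model, clean_test):.2%}")
```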
