Deepfake

University

10 Qs

Similar activities

5MM OOP Classes (University, 12 Qs)

Enitity Relationship (University, 15 Qs)

Database Management System [ระบบจัดการฐานข้อมูล] (University, 12 Qs)

DBC_CHAPTER 4 (University, 15 Qs)

Database Management Systems (University, 10 Qs)

Advanced Database Development Part 3 (University, 8 Qs)

DBMS Quiz1 (University - Professional Development, 10 Qs)

Advanced Topics in Software Engineering (Part 1) (University, 13 Qs)

Assessment • Quiz • Computers • University • Medium

Created by Tomasz Szandała


10 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

“Fairness” means the model does not misclassify individuals based on attributes such as race or gender.

Select the correct conclusion.

A model is fair only when its accuracy is 100%.

A model can still be considered fair even if everyone is treated equally badly.

Fairness requires separate models for each demographic.

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

In the context of fairness, what does the term 'protected attribute' refer to?

Attributes that enhance the model's accuracy

Attributes that should not influence the model's predictions

Attributes that are irrelevant to model performance

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A false positive in the deep-fake detection task is...

Correctly identifying a fake image.

Failing to detect a fake image.

Classifying a real image as fake.

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

The harmonic mean of Precision and Recall, recommended for imbalanced data, is the:

Accuracy

Specificity

F1 score
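The question above hinges on the F1 score being the harmonic mean of precision and recall. A minimal sketch in plain Python, with made-up confusion-matrix counts for illustration:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)   # share of flagged images that really were fake
    recall = tp / (tp + fn)      # share of fake images that were caught
    return 2 * precision * recall / (precision + recall)

# Imbalanced example: 90 fakes caught, 10 missed, 30 real images wrongly flagged.
print(round(f1_score(tp=90, fp=30, fn=10), 3))  # prints 0.818
```

Note that the harmonic mean punishes a large gap between precision and recall, which is why it is preferred over plain accuracy on imbalanced data.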

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

“Demographic parity” aims for what outcome in a deep-fake detector?

Identical classification thresholds for each group

The dataset should mirror the real-world distribution of ethnic groups

Equal proportion of images predicted ‘FAKE’ across all protected attributes
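Demographic parity can be checked by measuring the share of 'FAKE' predictions within each protected group. A small sketch, where the group labels and predictions are hypothetical:

```python
from collections import defaultdict

def fake_rate_per_group(predictions):
    """Share of images labelled 'FAKE' within each protected group.

    predictions: iterable of (group, predicted_label) pairs.
    """
    fakes = defaultdict(int)
    totals = defaultdict(int)
    for group, label in predictions:
        totals[group] += 1
        fakes[group] += (label == "FAKE")
    return {g: fakes[g] / totals[g] for g in totals}

preds = [("A", "FAKE"), ("A", "REAL"), ("B", "FAKE"), ("B", "FAKE")]
print(fake_rate_per_group(preds))  # prints {'A': 0.5, 'B': 1.0}
```

Demographic parity would require these per-group rates to be (approximately) equal; here group B is flagged twice as often as group A.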

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

The False-Positive Parity (FPP) ratio, which compares the per-group false-positive rate FP / (FP + TN) across protected groups, for a perfectly fair model is equal to

0

1

infinity
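The FPP check above can be sketched directly from the formula: compute FP / (FP + TN) per group, then take the ratio. The counts below are hypothetical:

```python
def false_positive_rate(fp: int, tn: int) -> float:
    """FPR = FP / (FP + TN): real images wrongly flagged as fake."""
    return fp / (fp + tn)

# Hypothetical confusion-matrix counts for two demographic groups.
fpr_a = false_positive_rate(fp=5, tn=95)   # 0.05
fpr_b = false_positive_rate(fp=5, tn=95)   # 0.05
print(fpr_a / fpr_b)  # prints 1.0 -> perfect False-Positive Parity
```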

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Undersampling with Tomek links, SMOTE/ADASYN oversampling, and data augmentation are cited as examples of which de-biasing stage?

Pre-process

In-process

Meta-learning

Post-process
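Pre-processing de-biasing acts on the training data before the model ever sees it. A toy, pure-Python sketch of the SMOTE idea (interpolating between minority samples to synthesize new ones); this is an illustration only, not the imblearn library implementation, which also uses nearest-neighbour selection:

```python
import random

def smote_like_oversample(minority, n_new, seed=0):
    """Synthesize n_new minority-class points by interpolating between
    random pairs of existing minority samples (toy sketch of SMOTE)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
new_points = smote_like_oversample(minority, n_new=4)
print(len(new_points))  # prints 4; each point lies on a segment between real samples
```

Because each synthetic point lies between two real minority samples, the class balance improves without simply duplicating rows, which is the property that distinguishes SMOTE-style oversampling from naive random oversampling.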
