Deepfake

University

10 Qs

Similar activities

Python Week 1 & 2 · University · 10 Qs
OM_SBENT3C_Q2 · University - Professional Development · 15 Qs
Hands-on Modul 3 · University · 10 Qs
Visual Basic 2010 · 8th Grade - University · 10 Qs
CS100||MsWord · University · 10 Qs
Types of Mass Media · 10th Grade - Professional Development · 11 Qs
Linked List 2 · University · 10 Qs
Website Usability and UX · University · 10 Qs

Deepfake

Assessment · Quiz · Computers · University · Practice Problem · Medium

Created by Tomasz Szandała


10 questions


1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

“Fairness” means the model does not misclassify individuals based on attributes such as race or gender.

Select the correct conclusion.

A model is fair only when its accuracy is 100 %.

A model can still be considered fair even if everyone is treated equally badly.

Fairness requires separate models for each demographic.

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

In the context of fairness, what does the term 'protected attribute' refer to?

Attributes that enhance the model's accuracy

Attributes that should not influence the model's predictions

Attributes that are irrelevant to model performance

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A false positive in the deep-fake detection task is...

Correctly identifying a fake image.

Failing to detect a fake image.

Classifying a real image as fake.

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

The harmonic mean of Precision and Recall, recommended for imbalanced data (a worked sketch appears after the options), is the:

Accuracy

Specificity

F1 score
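
For reference, a minimal pure-Python sketch of how Precision, Recall, and their harmonic mean (the F1 score) are computed from confusion-matrix counts; the counts used here are invented for illustration.

# Minimal sketch: Precision, Recall and F1 from confusion-matrix counts.
# The counts below are illustrative only.
tp, fp, fn = 80, 10, 30                  # true positives, false positives, false negatives

precision = tp / (tp + fp)               # share of predicted FAKEs that are really fake
recall = tp / (tp + fn)                  # share of real fakes that were caught
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of the two

print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")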

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

“Demographic parity” aims for what outcome in a deep-fake detector? (A sketch of the check appears after the options.)

Identical classification thresholds for each group

The dataset should mirror the real-world distribution of ethnic groups

Equal proportion of images predicted ‘FAKE’ across all protected attributes
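
A brief sketch of what a demographic-parity check could look like in practice: the share of images predicted 'FAKE' is compared across protected groups. The group labels and predictions below are invented for illustration.

# Sketch of a demographic-parity check: the proportion of 'FAKE' predictions
# should be (approximately) equal for every protected group.
from collections import defaultdict

preds = ["FAKE", "REAL", "FAKE", "FAKE", "REAL", "FAKE"]   # invented predictions
groups = ["A", "A", "A", "B", "B", "B"]                    # invented protected groups

fake_count = defaultdict(int)
total = defaultdict(int)
for p, g in zip(preds, groups):
    total[g] += 1
    fake_count[g] += (p == "FAKE")

for g in total:
    print(g, fake_count[g] / total[g])   # parity holds if these rates match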

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

False-Positive Parity (FPP), FP / (FP + TN), for a perfectly fair model is equal to (a sketch of per-group rates appears after the options)

0

1

infinity
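
A small sketch of how per-group false-positive rates FP / (FP + TN) and their ratio could be computed, assuming confusion-matrix counts are available for each protected group; the counts below are invented.

# Sketch: false-positive rate per protected group, and the ratio between groups.
# Equal rates across groups give a ratio of 1.
counts = {
    "group_A": {"FP": 5, "TN": 95},      # invented counts
    "group_B": {"FP": 10, "TN": 190},
}

fpr = {g: c["FP"] / (c["FP"] + c["TN"]) for g, c in counts.items()}
ratio = fpr["group_A"] / fpr["group_B"]
print(fpr, ratio)                        # both rates are 0.05 here, so the ratio is 1.0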

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Undersampling with Tomek links, SMOTE/ADASYN oversampling, and data augmentation are cited as examples of which de-biasing stage? (A sketch appears after the options.)

Pre-process

In-process

Meta-learning

Post-process
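
A hedged sketch of the pre-processing de-biasing stage: SMOTE oversampling followed by Tomek-link undersampling on a toy dataset. It assumes the scikit-learn and imbalanced-learn packages are installed; the generated features merely stand in for real deep-fake data.

# Sketch of pre-process de-biasing with imbalanced-learn (assumed installed).
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import TomekLinks

# Toy imbalanced dataset standing in for real/fake image features.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)    # oversample the minority class
X_res, y_res = TomekLinks().fit_resample(X_res, y_res)     # drop borderline majority samples
print("after: ", Counter(y_res))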
