Deepfake

University

10 Qs

Similar activities

Computer Science Form 4 - 2.1.2 Differentiating models..

1st Grade - Professional Development

10 Qs

Database Management System

University

11 Qs

Database Design Quiz Week 4

University

15 Qs

Data Modeling - Revision

University

15 Qs

RDBMS

University

11 Qs

Fairness Machine Learning

University

6 Qs

Concepts of DBMS

University

10 Qs

Expectation Maximization & Gaussian Mixture Model

University

12 Qs

Deepfake

Assessment

Quiz

Computers

University

Medium

Created by Tomasz Szandała

Used 1+ times

10 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

“Fairness” means the model does not misclassify individuals based on attributes such as race or gender.

Select the correct conclusion.

A model is fair only when its accuracy is 100%.

A model can still be considered fair even if everyone is treated equally badly.

Fairness requires separate models for each demographic.

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

In the context of fairness, what does the term 'protected attribute' refer to?

Attributes that enhance the model's accuracy

Attributes that should not influence the model's predictions

Attributes that are irrelevant to model performance
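
For illustration, a minimal Python sketch (column names and values are assumed, not from the quiz) of keeping a protected attribute out of the detector's input features so it cannot directly influence predictions:

```python
import pandas as pd

# Hypothetical training table: "gender" is the protected attribute.
df = pd.DataFrame({
    "blur_score":   [0.1, 0.7, 0.4],
    "artifact_cnt": [3, 12, 5],
    "gender":       ["F", "M", "F"],      # protected attribute, kept only for fairness audits
    "label":        ["REAL", "FAKE", "REAL"],
})

protected = ["gender"]
X = df.drop(columns=protected + ["label"])   # the model never sees the protected attribute
y = df["label"]
# Note: dropping the column alone does not guarantee fairness, since other
# features may act as proxies for it; it only illustrates what "protected" means.
```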

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A false positive in the deep-fake detection task is...

Correctly identifying a fake image.

Failing to detect a fake image.

Classifying a real image as fake.
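
As a concrete illustration, a short sketch (labels are assumed) of the four confusion-matrix outcomes with "fake" as the positive class, where a false positive is a real image flagged as fake:

```python
# Hypothetical ground-truth labels and detector outputs.
y_true = ["real", "fake", "fake", "real", "real"]
y_pred = ["fake", "fake", "real", "real", "fake"]

tp = sum(t == "fake" and p == "fake" for t, p in zip(y_true, y_pred))  # fakes caught
fp = sum(t == "real" and p == "fake" for t, p in zip(y_true, y_pred))  # real images flagged as fake
fn = sum(t == "fake" and p == "real" for t, p in zip(y_true, y_pred))  # fakes missed
tn = sum(t == "real" and p == "real" for t, p in zip(y_true, y_pred))  # real images passed

print(f"TP={tp} FP={fp} FN={fn} TN={tn}")  # TP=1 FP=2 FN=1 TN=1
```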

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

The harmonic mean of precision and recall, recommended for imbalanced data, is the:

Accuracy

Specificity

F1 score
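
A minimal sketch of the formula behind the F1 score, assuming precision and recall have already been computed:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A detector with high precision but poor recall still gets a low F1,
# which is why F1 is preferred over plain accuracy on imbalanced data.
print(f1_score(0.90, 0.30))  # ~0.45
```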

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

“Demographic parity” aims for what outcome in a deep-fake detector?

Identical classification thresholds for each group

The dataset should match the real-world distribution of ethnicity groups

Equal proportion of images predicted ‘FAKE’ across all protected attributes
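
A small sketch, with hypothetical group labels and predictions, of how demographic parity can be checked: the share of images predicted 'FAKE' should be roughly equal across groups:

```python
from collections import defaultdict

groups = ["A", "A", "B", "B", "B", "A"]                      # hypothetical protected attribute values
preds  = ["FAKE", "REAL", "FAKE", "REAL", "FAKE", "FAKE"]    # detector outputs

counts = defaultdict(lambda: [0, 0])                         # group -> [fake predictions, total]
for g, p in zip(groups, preds):
    counts[g][1] += 1
    if p == "FAKE":
        counts[g][0] += 1

for g, (fake, total) in counts.items():
    print(g, fake / total)   # demographic parity holds if these rates are (roughly) equal
```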

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

False-Positive Parity (FPP), based on each group's false-positive rate FP / (FP + TN), for a perfectly fair model is equal to

0

1

infinity
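
A sketch under the assumption that FPP is read as the ratio of group-wise false-positive rates; hypothetical counts show why a perfectly fair detector gives 1:

```python
def fpr(fp: int, tn: int) -> float:
    """False-positive rate: FP / (FP + TN)."""
    return fp / (fp + tn)

# Hypothetical counts: both groups end up with the same false-positive rate.
fpr_group_a = fpr(fp=5, tn=95)
fpr_group_b = fpr(fp=5, tn=95)

print(fpr_group_a / fpr_group_b)  # 1.0 for a perfectly fair model
```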

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Undersampling with Tomek links, SMOTE/ADASYN oversampling, and data augmentation are cited as examples of which de-biasing stage?

Pre-process

In-process

Meta-learning

Post-process
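
For the pre-processing stage, a minimal sketch using the imbalanced-learn library on a synthetic dataset (the dataset and parameters are assumptions for illustration):

```python
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import TomekLinks

# Synthetic, heavily imbalanced training data standing in for a biased corpus.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# Pre-processing de-biasing: resample the data before the model ever sees it.
X_over, y_over = SMOTE(random_state=0).fit_resample(X, y)      # synthesize minority-class samples
X_clean, y_clean = TomekLinks().fit_resample(X_over, y_over)   # drop borderline majority samples

print(len(y), "->", len(y_clean))
```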
