ML708-Explainability-2

Authored by Hanoona Rasheed

Education • University

7 questions

1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Which of the following are the ingredients for an interpretability method?

Model

Data

Humans

Task

All of these

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A Concept Activation Vector (CAV) is a local interpretability method that quantifies the importance of a concept to a trained deep neural network.

True

False
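
For context on the statement above, a minimal sketch of the underlying method, following Kim et al. (2018): a CAV is the normal to a linear boundary separating a concept's activations from random activations at one layer, and the TCAV score is the fraction of class examples whose directional derivative along the CAV is positive. The function names and the logistic-regression fit are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def learn_cav(concept_acts, random_acts):
        # Fit a linear boundary between concept and random activations;
        # the CAV is the normalized normal vector of that boundary.
        X = np.vstack([concept_acts, random_acts])
        y = np.concatenate([np.ones(len(concept_acts)),
                            np.zeros(len(random_acts))])
        w = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
        return w / np.linalg.norm(w)

    def tcav_score(class_grads, cav):
        # class_grads: one row per example, holding d(class logit)/d(activation).
        # The score aggregates over many examples of a class rather than
        # explaining a single prediction.
        return float(np.mean(class_grads @ cav > 0))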

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

One can validate CAVs by

Sorting the dataset using cosine similarity

Calculating TCAV on the complete dataset

Comparing with saliency maps
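
The first option refers to a sanity check used in the TCAV paper: sort the dataset by the cosine similarity between each image's layer activations and the CAV, then inspect whether the top-ranked images visibly contain the concept. A minimal sketch; the function name and input layout are assumptions.

    import numpy as np

    def rank_by_cav(acts, cav):
        # acts: one row of layer activations per image.
        # Returns indices ordered from most to least concept-aligned.
        acts = acts / np.linalg.norm(acts, axis=1, keepdims=True)
        sims = acts @ (cav / np.linalg.norm(cav))
        return np.argsort(-sims), sims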

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Concept Activation Vectors (CAVs) do not require a positive/negative example dataset

True

False

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Unintended entanglement of concepts can be solved using

Concept Activation Vectors (CAVs)

Concepts from the CLIP latent space

A small number of carefully chosen concept-annotated images

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

Advantage of projection and inverse projection in CLIP-based CounTEX

Can be learned on a small dataset with annotations

Can be learned on any other dataset with annotations

Can be learned on any other dataset without annotations
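
For context on these options: CounTEX builds counterfactual explanations in CLIP's textual concept space, using a projection that maps the target classifier's features into the CLIP embedding space and an inverse projection that maps them back. Both can be trained by aligning features with CLIP image embeddings of arbitrary unannotated images. A toy PyTorch sketch under those assumptions; the dimensions, network shape, and loss are illustrative.

    import torch.nn as nn
    import torch.nn.functional as F

    class Projection(nn.Module):
        # Maps target-classifier features (e.g. 2048-d) into the CLIP
        # embedding space (e.g. 512-d); an inverse module with swapped
        # dimensions would map back. All shapes are assumptions.
        def __init__(self, feat_dim=2048, clip_dim=512):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(feat_dim, 1024),
                                     nn.ReLU(),
                                     nn.Linear(1024, clip_dim))

        def forward(self, feats):
            return self.net(feats)

    def alignment_loss(projected, clip_embs):
        # Cosine alignment between projected target features and CLIP
        # image embeddings of the same (unannotated) images.
        return (1 - F.cosine_similarity(projected, clip_embs, dim=-1)).mean()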

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

correct prediction

incorrect prediction
