Week2_S2

University

10 Qs

Similar activities

Q6. Cybersecurity Best Practices

University

15 Qs

1.4 Logic Gate and Simple Logic Circuit

12th Grade - University

13 Qs

Workshop_quiz_on Sk-learn

University

15 Qs

MST 24: Chapter V quiz

University

11 Qs

Forum Activity Tech quiz

University

15 Qs

Operating System

University

15 Qs

Q1 DPM overview

University

11 Qs

3.1(a)(b) - Lecture

University

15 Qs

Week2_S2

Assessment • Quiz

Information Technology (IT) • University

Practice Problem • Easy

Created by Samiratu Ntohsi


10 questions


1.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

Why is generalization more important than training accuracy in neural networks?

Generalization proves convergence.

Training accuracy ensures bias reduction.

It prevents vanishing gradients.

It reflects the ability to predict unseen data, which is the true goal.
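
A minimal sketch of why training accuracy alone is misleading, using scikit-learn on made-up synthetic data (the dataset and model choice here are illustrative, not part of the quiz): an unpruned decision tree memorizes noisy labels, so its training accuracy is perfect while its held-out accuracy is noticeably lower.

```python
# Illustrative sketch: high training accuracy does not imply generalization.
# Data and model are arbitrary choices for demonstration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)  # noisy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # unpruned: memorizes noise
print("train acc:", model.score(X_tr, y_tr))  # typically 1.00
print("test  acc:", model.score(X_te, y_te))  # noticeably lower
```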

2.

FILL IN THE BLANK QUESTION

1 min • 1 pt

A multilayer network without nonlinearities collapses into a ______ model.
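
A short NumPy sketch, with arbitrary random weights, of why this collapse happens: composing affine layers with no activation in between is algebraically a single affine layer.

```python
# Sketch: two linear (affine) layers with no activation equal one linear layer.
# Weights are random placeholders for illustration.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)
x = rng.normal(size=3)

two_layers = W2 @ (W1 @ x + b1) + b2
collapsed = (W2 @ W1) @ x + (W2 @ b1 + b2)  # single equivalent affine map
print(np.allclose(two_layers, collapsed))   # True
```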

3.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

When using ReLU in hidden layers instead of sigmoid, which benefit typically emerges?

Guarantees linear separability in the input space.

Prevents exploding gradients.

Ensures all neurons remain active.

Reduces vanishing gradient problems.
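
A back-of-the-envelope sketch of the gradient factors behind this answer: the sigmoid derivative never exceeds 0.25, so the product of such factors across many layers shrinks toward zero, while ReLU contributes a factor of exactly 1 on its active side (the 20-layer depth below is an arbitrary example).

```python
# Sketch: why deep sigmoid stacks suffer vanishing gradients and ReLU stacks
# typically do not. Depth of 20 is an arbitrary illustrative choice.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = 0.0                                  # point of *largest* sigmoid slope
d_sig = sigmoid(z) * (1 - sigmoid(z))    # = 0.25, the maximum possible
d_relu = 1.0                             # ReLU derivative for any z > 0

layers = 20
print("sigmoid gradient factor:", d_sig ** layers)   # ~9e-13, vanishing
print("ReLU gradient factor:   ", d_relu ** layers)  # 1.0, preserved
```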

4.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

Which of the following statements about gradient descent is correct?

It always finds the global minimum.

It updates weights by moving against the gradient of the loss function.

It requires linear separability.

It is identical to the perceptron update rule.
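
A minimal sketch of the correct option on a one-dimensional quadratic loss, with an arbitrary fixed learning rate: each update moves against the gradient, converging to the minimizer.

```python
# Sketch: gradient descent on L(w) = (w - 3)^2, whose gradient is 2*(w - 3).
# The update w <- w - lr * dL/dw moves *against* the gradient, toward w = 3.
lr, w = 0.1, 0.0
for _ in range(100):
    grad = 2 * (w - 3)
    w = w - lr * grad
print(round(w, 4))  # ~3.0, the minimum (a local one in general, not global)
```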

5.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

Why is nonlinearity essential in deep neural networks?

It guarantees zero error.

It reduces training time.

It allows the composition of layers to model complex, non-linear boundaries.

It simplifies the optimization problem.
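
A small illustration with hand-picked weights (a standard textbook construction, not taken from the quiz): a two-layer ReLU network computes XOR, the classic function that no purely linear model can represent.

```python
# Sketch: XOR via one hidden ReLU layer with hand-constructed weights.
# h1 = relu(x1 + x2), h2 = relu(x1 + x2 - 1), output = h1 - 2*h2.
import numpy as np

def relu(z):
    return np.maximum(0, z)

W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, -2.0])

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = relu(W1 @ np.array(x, dtype=float) + b1)
    print(x, "->", int(w2 @ h))  # prints 0, 1, 1, 0: exactly XOR
```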

6.

FILL IN THE BLANK QUESTION

1 min • 1 pt

Backpropagation uses the ______ rule to propagate gradients backward through layers.
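
A worked one-weight example of the idea, with made-up values, checked against a finite-difference estimate: the gradient of a composed function is the product of the derivatives of its stages.

```python
# Sketch: chain rule on L(w) = (sigmoid(w * x) - y)^2 for a single weight,
# verified numerically. All values are arbitrary illustrative choices.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, y, w = 2.0, 1.0, 0.5

a = sigmoid(w * x)
# Chain rule: dL/dw = dL/da * da/dz * dz/dw
grad = 2 * (a - y) * a * (1 - a) * x

eps = 1e-6
numeric = ((sigmoid((w + eps) * x) - y) ** 2
           - (sigmoid((w - eps) * x) - y) ** 2) / (2 * eps)
print(round(grad, 8), round(numeric, 8))  # the two estimates agree closely
```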

7.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

In a neural network, forward propagation refers to:

Feeding inputs through the network to generate predictions

Updating weights using gradient descent

Reversing gradients to find errors

Adjusting biases to prevent saturation
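
A minimal NumPy sketch of a forward pass through one hidden layer, with random placeholder weights: inputs flow forward to produce a prediction, and nothing here updates the weights.

```python
# Sketch: forward propagation only -- feed an input through the layers to get
# a prediction. Weights are random placeholders; no training happens here.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=4)                        # input vector
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

h = np.maximum(0, W1 @ x + b1)                # hidden layer with ReLU
y_hat = W2 @ h + b2                           # output: the prediction
print(y_hat)
```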
