Week2_S2

University

10 Qs

Similar activities

Chapter 4 · University · 11 Qs
Android Studio Quiz - BSCS IV · University · 10 Qs
Software Engineering II - Confrotando mi saber Unidad 1 · University · 12 Qs
Arduino Basics: Inputs and Outputs Quiz · 12th Grade - University · 11 Qs
Microsoft Word 2019 Advanced – Unit 1 (Review of Basic Concepts) · 10th Grade - University · 15 Qs
Automation And AI Agents Quiz · University · 15 Qs
MongoDB Concepts 2 · University · 10 Qs
QUIZ-group5 · University · 10 Qs

Week2_S2

Assessment · Quiz · Practice Problem · Easy

Information Technology (IT) · University

Created by Samiratu Ntohsi

10 questions

1.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

Why is generalization more important than training accuracy in neural networks?

Generalization proves convergence.

Training accuracy ensures bias reduction.

It prevents vanishing gradients.

It reflects the ability to predict unseen data, the true goal.
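
A minimal NumPy sketch of the idea behind the correct option, using toy data and a 1-nearest-neighbour classifier chosen purely for illustration: the model scores perfectly on its own training points yet noticeably worse on held-out data, which is why generalization, not training accuracy, is the real goal.

import numpy as np

rng = np.random.default_rng(0)
# Toy 2-D data whose labels are partly noise, so they cannot be predicted perfectly.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + rng.normal(scale=2.0, size=200) > 0).astype(int)
X_train, y_train, X_test, y_test = X[:100], y[:100], X[100:], y[100:]

def predict_1nn(x, X_ref, y_ref):
    # 1-nearest-neighbour: copy the label of the closest reference point.
    return y_ref[np.argmin(np.linalg.norm(X_ref - x, axis=1))]

train_pred = np.array([predict_1nn(x, X_train, y_train) for x in X_train])
test_pred = np.array([predict_1nn(x, X_train, y_train) for x in X_test])
print("training accuracy:", (train_pred == y_train).mean())   # 1.0 by construction
print("held-out accuracy:", (test_pred == y_test).mean())     # lower: memorised noise does not transfer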

2.

FILL IN THE BLANK QUESTION

1 min • 1 pt

A multilayer network without nonlinearities collapses into a ______ model.
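
A short NumPy check of the point behind the blank, with made-up weights: stacking two weight matrices with no activation in between is exactly one matrix multiplication, so the stack collapses to a single linear (affine, once biases are added) model.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))      # "layer 1": 3 inputs -> 4 hidden units, no activation
W2 = rng.normal(size=(2, 4))      # "layer 2": 4 hidden units -> 2 outputs
x = rng.normal(size=3)

two_layers = W2 @ (W1 @ x)        # forward pass through both layers
one_layer = (W2 @ W1) @ x         # a single linear map with weight matrix W2 @ W1
print(np.allclose(two_layers, one_layer))   # True: without nonlinearity, depth buys nothing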

3.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

When using ReLU in hidden layers instead of sigmoid, which benefit typically emerges?

Guarantees linear separability in the input space.

Prevents exploding gradients.

Ensures all neurons remain active.

Reduces vanishing gradient problems.
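
A small NumPy sketch of why the last option is the usual answer: the derivative of the sigmoid never exceeds 0.25, so gradients shrink geometrically as they pass back through many sigmoid layers, whereas an active ReLU unit has derivative exactly 1.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-6.0, 6.0, 1001)
sigmoid_grad = sigmoid(z) * (1.0 - sigmoid(z))   # derivative of the sigmoid
relu_grad = (z > 0).astype(float)                # derivative of ReLU: 0 or 1

print(sigmoid_grad.max())    # ~0.25: every sigmoid layer scales the gradient down
print(0.25 ** 10)            # after 10 such layers the factor is ~1e-6 (vanishing gradient)
print(relu_grad.max())       # 1.0: active ReLU units pass the gradient through unchanged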

4.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

Which of the following statements about gradient descent is correct?

It always finds the global minimum.

It updates weights by moving against the gradient of the loss function.

It requires linear separability.

It is identical to the perceptron update rule.
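
A minimal sketch of the correct option, using a toy convex loss chosen for illustration: gradient descent repeatedly steps against the gradient, which drives the weight to the minimiser here; only the convexity of this toy loss makes that minimum global.

# Minimise the toy loss L(w) = (w - 3)^2 with plain gradient descent.
def grad(w):
    return 2.0 * (w - 3.0)          # dL/dw

w, lr = 0.0, 0.1                     # arbitrary starting weight and learning rate
for _ in range(100):
    w = w - lr * grad(w)             # update: move *against* the gradient
print(w)                             # ~3.0, the minimiser of this convex toy loss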

5.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

Why is nonlinearity essential in deep neural networks?

It guarantees zero error.

It reduces training time.

It allows the composition of layers to model complex, non-linear boundaries.

It simplifies the optimization problem.
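
To make the correct option concrete, here is the classic hand-built XOR example, with weights chosen by hand for illustration: no single linear layer can separate XOR, but one hidden ReLU layer composed with a linear output reproduces it exactly.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # the four XOR inputs

W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])      # both hidden units sum the two inputs
b1 = np.array([0.0, -1.0])       # the second unit only activates when both inputs are 1
w2 = np.array([1.0, -2.0])       # output = h1 - 2*h2

h = relu(X @ W1.T + b1)          # the nonlinearity is what bends the decision boundary
print(h @ w2)                    # [0. 1. 1. 0.] -- exactly XOR, unreachable for a purely linear model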

6.

FILL IN THE BLANK QUESTION

1 min • 1 pt

Backpropagation uses the ______ rule to propagate gradients backward through layers.
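
A tiny worked example of the rule the blank refers to, with arbitrarily chosen numbers: for x -> z = w*x -> a = sigmoid(z) -> L = (a - y)^2, the gradient of L with respect to w is the product of the local derivatives, and a finite-difference check confirms the analytic value.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, y, w = 1.5, 0.0, 0.8          # one input, target and weight, chosen arbitrarily
z = w * x                        # forward pass
a = sigmoid(z)
L = (a - y) ** 2

dL_da = 2.0 * (a - y)            # derivative of the loss w.r.t. the activation
da_dz = a * (1.0 - a)            # derivative of the sigmoid w.r.t. its input
dz_dw = x                        # derivative of the pre-activation w.r.t. the weight
dL_dw = dL_da * da_dz * dz_dw    # multiply the local derivatives back through the layers

eps = 1e-6                       # finite-difference check of the same gradient
L_shifted = (sigmoid((w + eps) * x) - y) ** 2
print(dL_dw, (L_shifted - L) / eps)   # the two values agree closely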

7.

MULTIPLE CHOICE QUESTION

45 sec • 1 pt

In a neural network, forward propagation refers to:

Feeding inputs through the network to generate predictions

Updating weights using gradient descent

Reversing gradients to find errors

Adjusting biases to prevent saturation
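
A compact NumPy sketch of the first option, using random weights purely for illustration: forward propagation just feeds a batch of inputs through each layer in turn to produce predictions; no gradients or weight updates are involved.

import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # numerically stabilised softmax
    return e / e.sum(axis=1, keepdims=True)

X = rng.normal(size=(5, 4))                        # batch of 5 inputs, 4 features each
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)      # layer 1 parameters
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)      # layer 2 parameters

hidden = relu(X @ W1 + b1)                         # layer 1: affine map + nonlinearity
probs = softmax(hidden @ W2 + b2)                  # layer 2: affine map + softmax over 3 classes
print(probs.shape, probs.sum(axis=1))              # (5, 3); each row of predictions sums to 1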
