Artificial Intelligence (AI) GLOW Class Quiz

Quiz • Computers • University • Hard
istiqomah iskak
30 questions

1.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
In machine learning, the condition in which the model is too simple to describe the data or to learn the structure of the data is called:
Underfitting
Overfitting
Unfitting
Bad fitting
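The condition in question 1 can be sketched numerically. This is a minimal illustration (variable names and data are my own assumptions, not from the quiz): a degree-1 polynomial is too simple to capture quadratic data, so its training error stays high no matter how it is fit.

```python
import numpy as np

# Hypothetical underfitting demo: the data-generating structure is quadratic,
# so a linear model cannot describe it and keeps a large training error.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = x**2 + rng.normal(0, 0.1, size=x.shape)  # quadratic signal plus small noise

def fit_mse(degree):
    """Fit a polynomial of the given degree and return its training MSE."""
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    return np.mean((y - pred) ** 2)

mse_linear = fit_mse(1)     # too simple: underfits the quadratic structure
mse_quadratic = fit_mse(2)  # matches the data-generating structure
print(mse_linear > 10 * mse_quadratic)  # the linear model's error is far larger
```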
2.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
The main difference between supervised learning and unsupervised learning is that supervised learning requires little data for training, while unsupervised learning requires a lot of data.
True
False
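The actual distinction between the two settings is the presence of labels, not the amount of data. A minimal sketch (the arrays here are my own illustrative assumptions): both settings can use the same samples; only the supervised one sees labels.

```python
import numpy as np

# Hypothetical sketch: supervised learning trains on (features, label) pairs,
# while unsupervised learning sees features only. Data volume is identical here.
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])  # features, shared by both settings
y = np.array([0, 0, 1, 1, 1])                      # labels, used only when supervised

supervised_data = list(zip(X, y))  # (features, label) pairs to fit a mapping X -> y
unsupervised_data = list(X)        # features alone; structure must be inferred

print(len(supervised_data), len(unsupervised_data))  # same number of samples
```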
3.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
When the target variable we want to predict takes discrete values, the machine learning method is called:
Discretisation
Classification
Supervision
Regression
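The discrete-versus-continuous distinction in question 3 can be made concrete. A small sketch (the data and the helper function are my own illustrative assumptions): the same features can feed either task; what changes is the type of the target.

```python
import numpy as np

# Hypothetical sketch: identical features, different target types.
X = np.array([[1.2], [2.4], [3.1], [4.8]])

y_class = np.array(["spam", "ham", "spam", "ham"])  # discrete target -> classification
y_reg = np.array([1.5, 2.7, 3.0, 4.9])              # continuous target -> regression

def task_type(y):
    """Crude rule of thumb: non-numeric targets suggest classification."""
    return "classification" if y.dtype.kind in ("U", "S", "b", "O") else "regression"

print(task_type(y_class))  # classification
print(task_type(y_reg))    # regression
```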
4.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
The key difference between Logistic Regression and the Adaline rule is (choose the correct answer below):
Logistic regression updates the weights based on a linear activation function rather than a unit step function like in the Adaline
Logistic regression updates the weights based on a sigmoid function rather than a unit step function like in the Adaline
Logistic regression updates the weights based on a sigmoid function rather than an identity function like in the Adaline
Logistic regression updates the weights based on an identity function rather than a sigmoid function like in the Adaline
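The contrast in question 4 can be shown as code. This is a minimal sketch assuming the common single-sample formulation in which both rules share the update w ← w + η · (y − activation(wᵀx)) · x and differ only in the activation used for the error term; the numbers are my own illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update(w, x, y, eta, activation):
    """One weight update; identity activation gives Adaline, sigmoid gives logistic regression."""
    error = y - activation(np.dot(w, x))
    return w + eta * error * x

w = np.zeros(2)
x = np.array([1.0, 2.0])
y = 1.0

w_adaline = update(w, x, y, eta=0.1, activation=lambda z: z)  # identity activation
w_logreg = update(w, x, y, eta=0.1, activation=sigmoid)       # sigmoid activation
print(w_adaline, w_logreg)  # same rule, different activation, different step
```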
5.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
One of the key ingredients of supervised machine learning algorithms is a defined objective function that is used during the learning process. Which objective function do we want to maximize in Logistic Regression?
Sum of squared errors
Mean absolute errors
The logit function
The log-likelihood
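The objective in question 5 can be written out directly. For binary labels y ∈ {0, 1} and predicted probabilities p, the log-likelihood is Σ [y·log(p) + (1−y)·log(1−p)]; maximizing it is equivalent to minimizing the log-loss. The example probabilities below are my own illustrative assumptions.

```python
import numpy as np

def log_likelihood(y, p):
    """Binary log-likelihood: higher is better; maximized by logistic regression."""
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1, 0, 1])
good_p = np.array([0.9, 0.1, 0.8])  # confident, mostly correct predictions
bad_p = np.array([0.5, 0.5, 0.5])   # uninformative predictions

print(log_likelihood(y, good_p) > log_likelihood(y, bad_p))  # True
```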
6.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
The correct sequence of the gradient descent (GD) algorithm in terms of the amount of training data processed for each iteration, from the smallest to the largest is:
Batch GD, Mini-Batch GD, Stochastic Mini-Batch GD
Stochastic GD, Batch GD, Mini-Batch GD
Stochastic GD, Mini-Batch GD, Batch GD
Mini-Batch GD, Batch GD, Stochastic GD
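The ordering asked for in question 6 comes down to how many training samples each variant processes per parameter update. A small sketch (the dataset size and batch size are my own illustrative assumptions): stochastic GD uses 1 sample, mini-batch GD uses a batch of size 1 < b < n, and batch GD uses all n samples.

```python
n = 1000          # total training samples (illustrative assumption)
batch_size = 32   # typical mini-batch size (illustrative assumption)

# Samples consumed per parameter update for each gradient descent variant.
samples_per_update = {
    "Stochastic GD": 1,
    "Mini-Batch GD": batch_size,
    "Batch GD": n,
}

ordering = sorted(samples_per_update, key=samples_per_update.get)
print(ordering)  # smallest to largest amount of data per update
```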
7.
MULTIPLE CHOICE QUESTION
30 sec • 1 pt
The stochastic gradient descent method has a higher probability of reaching the global minimum compared to batch gradient descent.
True
False