Data mining#ANN

Similar activities

Internetna varnost za študente - pojmi
University • 11 Qs

UTS TBO
University • 10 Qs

DKS - PR3 & PR4
University • 10 Qs

Test WORD
KG - University • 15 Qs

Lectures at steleks
University • 10 Qs

Ekologia Internetu
University • 14 Qs

Sprawdzenie wiedzy z HTML 1
University • 14 Qs

Flip Flop
University • 15 Qs

Data mining#ANN

Assessment

Quiz

Computers

University

Easy

Created by Rafeeque PC

Used 32+ times


10 questions


1.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

Which of the following statements about ANN is false?

ANN is preferred when the input is high-dimensional, discrete, or real-valued

ANN is preferred when less training time is desired

ANN is preferred when the input data contain noise

ANN is preferred when the output is discrete or real-valued

2.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

In the perceptron training rule, weights are updated as follows:
$W_i = W_i + \Delta W_i$, where $\Delta W_i = $ -----------?

$\eta\left(t - o\right)x_i$

$\left(t - o\right)x_i$

$\eta\left(t - o\right)$

$\eta\left(t - o\right)w_i$
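
For context, a minimal Python sketch of one update under the standard perceptron training rule, $\Delta w_i = \eta\,(t - o)\,x_i$ with a thresholded output unit; the learning rate, inputs, and variable names here are illustrative only:

```python
import numpy as np

def perceptron_update(w, x, t, eta=0.1):
    """One perceptron-rule update on a single example (x, t)."""
    o = 1.0 if np.dot(w, x) > 0 else -1.0   # thresholded (sign) output
    return w + eta * (t - o) * x            # Delta w_i = eta * (t - o) * x_i

# toy usage: a single update step on one training example
w = np.zeros(3)
x = np.array([1.0, 0.5, -1.0])   # input vector (a bias input can be folded in)
print(perceptron_update(w, x, t=1.0))
```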

3.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

The perceptron training rule is guaranteed to succeed if:

1) Training examples are linearly separable

2) Even when training data contains noise

3) Even when training data is not linearly separable

4) A sufficiently small learning rate η is used

1 and 2 only

1 only

1 and 4 only

4 only

4.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

Which of the following statements about the linear unit training rule (which uses gradient descent) is false?

Guaranteed to converge to hypothesis with minimum squared error

Gradient descent works when given sufficiently small learning rate η

Gradient descent works when training data contains noise

Gradient descent works only when training examples are linearly separable
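
For context, a minimal sketch of batch gradient descent for a single linear unit minimizing the squared error $\frac{1}{2}\sum_d (t_d - o_d)^2$; the data set, step size, and names below are illustrative only:

```python
import numpy as np

def linear_unit_gradient_step(w, X, t, eta=0.05):
    """One batch gradient-descent step for a linear unit o = X @ w."""
    o = X @ w                          # linear outputs on all training examples
    return w + eta * X.T @ (t - o)     # Delta w = eta * sum_d (t_d - o_d) * x_d

# toy usage on a tiny synthetic set (first column acts as a bias input)
X = np.array([[1.0, 2.0], [1.0, -1.0], [1.0, 0.5]])
t = np.array([3.0, -1.0, 1.5])
w = np.zeros(2)
for _ in range(200):
    w = linear_unit_gradient_step(w, X, t)
print(w)   # approaches the least-squares weights
```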

5.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

What is the derivative of the sigmoid function $\sigma(x) = \frac{1}{1 + e^{-x}}$?

$\sigma(x)\left(1 - \sigma(x)\right)$

$\sigma(x)\left(1 - \sigma(x)^2\right)$

$\left(1 - \sigma(x)\right)$

$\frac{\sigma(x)}{1 - \sigma(x)}$
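
A short worked derivation of this derivative, using only the chain rule:

\[
\sigma'(x)
= \frac{d}{dx}\left(1 + e^{-x}\right)^{-1}
= \frac{e^{-x}}{\left(1 + e^{-x}\right)^{2}}
= \frac{1}{1 + e^{-x}}\cdot\frac{e^{-x}}{1 + e^{-x}}
= \sigma(x)\left(1 - \sigma(x)\right).
\]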

6.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

Consider a multilayer feedforward network. The error function is $E = \frac{1}{2}\sum_{d \in D}\left(t_d - o_d\right)^2$, where $D$ is the set of training examples, $t_d$ is the target value, and $o_d$ is the output value. Assume the backpropagation algorithm is used. What will be $\frac{\partial E}{\partial W_i}$?

$-\sum_{d \in D}\left(t_d - o_d\right)o_d\left(1 - o_d\right)x_{i,d}$

$-\sum_{d \in D}\left(t_d - o_d\right)x_{i,d}$

$-\sum_{d \in D}o_d\left(1 - o_d\right)x_{i,d}$

$\sum_{d \in D}\left(t_d - o_d\right)o_d\left(1 - o_d\right)x_{i,d}$
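
A short worked derivation, assuming (as the options suggest) that $o_d = \sigma\!\left(\sum_i W_i x_{i,d}\right)$ is the output of a sigmoid unit applied to the weighted inputs:

\[
\frac{\partial E}{\partial W_i}
= \frac{\partial}{\partial W_i}\,\frac{1}{2}\sum_{d \in D}\left(t_d - o_d\right)^2
= -\sum_{d \in D}\left(t_d - o_d\right)\frac{\partial o_d}{\partial W_i}
= -\sum_{d \in D}\left(t_d - o_d\right)o_d\left(1 - o_d\right)x_{i,d}.
\]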

7.

MULTIPLE CHOICE QUESTION

3 mins • 1 pt

[Figure: a multilayer feedforward network annotated with its initial values]

Consider the multilayer feedforward network with the initial values shown in the figure. Find the output at the fifth unit.

0.332

0.525

0.474

-0.7
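
Since the network's actual weights and inputs appear only in the figure, the sketch below uses made-up values purely to illustrate the forward-pass computation such a question asks for: each unit outputs $\sigma(\text{weighted sum of its inputs} + \text{bias})$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# made-up inputs, hidden-layer weights/biases, and output-unit weights
x = np.array([1.0, 0.0, 1.0])                  # input units
W_hidden = np.array([[0.5, -0.2, 0.3],         # weights into hidden unit A
                     [-0.4, 0.1, 0.6]])        # weights into hidden unit B
b_hidden = np.array([0.1, -0.3])
w_out = np.array([0.7, -0.6])                  # weights into the final unit
b_out = 0.05

h = sigmoid(W_hidden @ x + b_hidden)           # hidden-layer outputs
o = sigmoid(w_out @ h + b_out)                 # output of the final unit
print(round(float(o), 3))
```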
