
Data mining#ANN

Authored by Rafeeque PC

Computers

University

Used 32+ times


10 questions


1.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

Which of the following statements about ANN is false?

ANN is preferred when the input is high-dimensional, discrete or real-valued

ANN is preferred when fast training is desired

ANN is preferred when the input data may contain noise

ANN is preferred when the output is discrete or real-valued

2.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

In the perceptron training rule, weights are updated as Wᵢ = Wᵢ + ΔWᵢ, where ΔWᵢ = ?

η(t − o)xᵢ

(t − o)xᵢ

η(t − o)

η(t − o)wᵢ
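The perceptron training rule sets ΔWᵢ = η(t − o)xᵢ, where t is the target, o is the perceptron's output, and η is the learning rate. A minimal sketch of the rule, using the AND function as hypothetical training data (η, the epoch count, and zero initialization are assumptions for illustration):

```python
# Perceptron training rule: w_i <- w_i + eta * (t - o) * x_i
# Hypothetical task: learn the AND function.
def step(z):
    # Threshold activation: fires (1) when the weighted sum is non-negative
    return 1 if z >= 0 else 0

def train_perceptron(samples, eta=0.1, epochs=20):
    # samples: list of (x, t) pairs; x[0] == 1.0 acts as the bias input
    w = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for x, t in samples:
            o = step(sum(wi * xi for wi, xi in zip(w, x)))
            w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
    return w

# AND truth table with a constant bias input of 1.0
data = [([1.0, 0, 0], 0), ([1.0, 0, 1], 0), ([1.0, 1, 0], 0), ([1.0, 1, 1], 1)]
w = train_perceptron(data)
```

Because AND is linearly separable and η is small, the rule converges to weights that classify all four patterns correctly.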

3.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

The perceptron training rule is guaranteed to succeed if:

1) The training examples are linearly separable

2) Even when the training data contains noise

3) Even when the training data is not linearly separable

4) A sufficiently small learning rate η is used

1 and 2 only

1 only

1 and 4 only

4 only

4.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

Which of the following statements about the linear-unit training rule that uses gradient descent is false?

Guaranteed to converge to the hypothesis with minimum squared error

Gradient descent works given a sufficiently small learning rate η

Gradient descent works even when the training data contains noise

Gradient descent works only when the training examples are linearly separable
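For a linear unit o = w·x with error E = ½Σ_d(t_d − o_d)², batch gradient descent updates each weight by η Σ_d (t_d − o_d)x_{i,d}; since this error surface has a single global minimum, it converges even on noisy, non-separable data when η is small enough. A sketch under assumed data (a noisy linear target) and hyperparameters:

```python
# Gradient descent for a linear unit: o = w . x, E = 1/2 * sum_d (t_d - o_d)^2
# Batch update: w_i <- w_i + eta * sum_d (t_d - o_d) * x_{i,d}
# The data (a noisy linear target t ~ 1 + 2x), eta, and epochs are assumptions.
def train_linear(samples, eta=0.05, epochs=500):
    w = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for x, t in samples:
            o = sum(wi * xi for wi, xi in zip(w, x))
            for i, xi in enumerate(x):
                grad[i] += (t - o) * xi
        w = [wi + eta * gi for wi, gi in zip(w, grad)]
    return w

# x[0] == 1.0 is the bias input; targets are noisy, not exactly linear
data = [([1.0, 0.0], 1.02), ([1.0, 1.0], 2.98), ([1.0, 2.0], 5.01), ([1.0, 3.0], 7.03)]
w = train_linear(data)
```

Despite the noise, w approaches the least-squares fit (intercept ≈ 1.0, slope ≈ 2.0), illustrating why noise and non-separability do not prevent convergence for a linear unit.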

5.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

What is the derivative of the sigmoid function σ(x) = 1/(1 + e⁻ˣ)?

σ(x)(1 − σ(x))

σ(x)(1 − σ(x)²)

(1 − σ(x))

σ(x)/(1 − σ(x))
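The identity σ′(x) = σ(x)(1 − σ(x)) can be checked numerically with a central finite difference; the test point and step size below are arbitrary choices:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Analytic derivative candidate: sigma'(x) = sigma(x) * (1 - sigma(x))
# Compare against a central finite difference at an arbitrary test point.
x0 = 0.5          # arbitrary test point
h = 1e-6          # finite-difference step
numeric = (sigmoid(x0 + h) - sigmoid(x0 - h)) / (2 * h)
analytic = sigmoid(x0) * (1 - sigmoid(x0))
```

The two values agree to many decimal places; repeating the check with the other candidate expressions shows they do not match the numerical derivative.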

6.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

Consider a multilayer feedforward network. The error function is E = ½ Σ_{d∈D} (t_d − o_d)², where D is the set of training examples, t_d is the target value, and o_d is the output value. Assume the backpropagation algorithm is used. What is ∂E/∂Wᵢ?

−Σ_{d∈D} (t_d − o_d) o_d (1 − o_d) x_{i,d}

−Σ_{d∈D} (t_d − o_d) x_{i,d}

−Σ_{d∈D} o_d (1 − o_d) x_{i,d}

Σ_{d∈D} (t_d − o_d) o_d (1 − o_d) x_{i,d}
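For a sigmoid unit trained with this squared error, ∂E/∂wᵢ = −Σ_{d∈D} (t_d − o_d) o_d (1 − o_d) x_{i,d}, where the o_d(1 − o_d) factor comes from the sigmoid derivative. The sketch below compares this analytic gradient against a finite-difference estimate; the weights and training data are invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def error(w, data):
    # E = 1/2 * sum_{d in D} (t_d - o_d)^2 with o_d = sigmoid(w . x_d)
    return 0.5 * sum(
        (t - sigmoid(sum(wi * xi for wi, xi in zip(w, x)))) ** 2 for x, t in data
    )

def grad(w, data):
    # dE/dw_i = -sum_{d in D} (t_d - o_d) * o_d * (1 - o_d) * x_{i,d}
    g = [0.0] * len(w)
    for x, t in data:
        o = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for i, xi in enumerate(x):
            g[i] -= (t - o) * o * (1 - o) * xi
    return g

# Hypothetical weights and training data, for illustration only
data = [([1.0, 0.5], 1), ([0.2, -0.3], 0)]
w = [0.1, -0.2]
analytic = grad(w, data)
h = 1e-6
numeric = [
    (error([wj + (h if j == i else 0.0) for j, wj in enumerate(w)], data)
     - error([wj - (h if j == i else 0.0) for j, wj in enumerate(w)], data)) / (2 * h)
    for i in range(len(w))
]
```

The analytic and numerical gradients agree, which is the standard sanity check used when implementing backpropagation.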

7.

MULTIPLE CHOICE QUESTION

3 mins • 1 pt

[Figure: multilayer feedforward network with initial weight and input values]

Consider the multilayer feedforward network shown above, with the given initial values. Find the output at the fifth unit.

0.332

0.525

0.474

-0.7
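The figure with the network's initial values is not reproduced here, so the sketch below only illustrates the computation pattern for this kind of question: each unit applies the sigmoid to a weighted sum of its inputs, and the output unit's activation is the value sought. All weights, biases, and inputs below are invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, hidden_units, output_unit):
    # Each unit is (bias, [weights]); activation = sigmoid(bias + weights . inputs)
    hidden = [sigmoid(b + sum(w * x for w, x in zip(ws, inputs)))
              for b, ws in hidden_units]
    b, ws = output_unit
    return sigmoid(b + sum(w * h for w, h in zip(ws, hidden)))

# All numbers below are invented; the question's actual initial values
# are in the figure, which is not reproduced here.
out = forward(
    inputs=[1, 0],
    hidden_units=[(-0.4, [0.2, 0.4]), (0.2, [-0.3, 0.1])],
    output_unit=(0.1, [-0.3, -0.2]),
)
```

With the question's actual figure values substituted in, the same forward pass yields the answer; note the output always lies in (0, 1), which rules out a negative option such as −0.7 for a sigmoid output unit.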
