Machine Learning Quiz-3

Q1. In neural networks, where do we apply batch normalization?

  1. Before applying activation function
  2. After applying activation function

Answer: 1
Explanation: We generally apply batch normalization before the activation function, i.e., we normalize the pre-activation z = Wx + b across the mini-batch and then apply the non-linearity. Refer to Andrew Ng's explanation to know more.
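
As a quick illustration, here is a minimal sketch (assuming PyTorch; the layer sizes are arbitrary) of the Linear → BatchNorm → Activation ordering:

```python
# Linear -> BatchNorm1d -> ReLU: normalize the pre-activation, then apply the non-linearity.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 32),    # affine transform z = Wx + b
    nn.BatchNorm1d(32),   # normalize z across the mini-batch
    nn.ReLU(),            # activation applied to the normalized values
    nn.Linear(32, 1),
)
```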

Q2. In mini-batch gradient descent, if the mini-batch size is set equal to the training set size it becomes Stochastic gradient descent, and if the mini-batch size is set to 1 training example it becomes Batch gradient descent.

  1. True
  2. False

Answer: 2
Explanation: It is actually the opposite. In mini-batch gradient descent, if the mini-batch size equals the training set size it becomes Batch gradient descent, and if the mini-batch size is set to 1 training example it becomes Stochastic gradient descent.
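
Here is a minimal sketch (NumPy; X, y and grad_fn are hypothetical placeholders) showing how the mini-batch size interpolates between the two extremes:

```python
# batch_size == len(X) -> batch gradient descent (one update per epoch)
# batch_size == 1      -> stochastic gradient descent (one update per example)
import numpy as np

def minibatch_gd(X, y, grad_fn, w, batch_size, lr=0.01, epochs=10):
    n = len(X)
    for _ in range(epochs):
        idx = np.random.permutation(n)                 # shuffle once per epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            w -= lr * grad_fn(X[batch], y[batch], w)   # update on one mini-batch
    return w
```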

Q3. If we have enough computational power, it is wiser to train multiple models in parallel and then choose the best one instead of babysitting a single model.

  1. True
  2. False

Answer: 1
Explanation: In deep learning, there is no general rule for finding the best set of hyperparameters for a given task. One needs to follow the iterative process of Idea -> Code -> Experiment, and being able to try out many ideas quickly, for example several hyperparameter settings in parallel, is more effective than babysitting a single model.
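
A minimal sketch of this idea (the train_and_evaluate helper is hypothetical): sample several hyperparameter configurations, train them independently, and keep the best one.

```python
import random

def sample_config():
    return {
        "learning_rate": 10 ** random.uniform(-4, -1),   # sample on a log scale
        "hidden_units": random.choice([32, 64, 128, 256]),
        "batch_size": random.choice([32, 64, 128]),
    }

configs = [sample_config() for _ in range(8)]
# Each configuration could be trained on a separate machine/GPU; here we just loop.
results = [(cfg, train_and_evaluate(cfg)) for cfg in configs]  # hypothetical helper
best_config, best_score = max(results, key=lambda r: r[1])
```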

Q4. Vectorization allows you to compute forward propagation in an L-layer neural network without an explicit for-loop (or any other explicit iterative loop) over the layers l = 1, 2, …, L.

  1. True
  2. False

Answer: 2
Explanation: We cannot avoid the for-loop over the layers, because the activations of layer l must be computed before layer l+1 can be computed. Vectorization removes the loops over training examples and over the nodes within a layer, not the loop over the layers themselves. Refer to Andrew Ng's explanation to know more.
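
A minimal sketch (NumPy) of vectorized forward propagation: the computation inside each layer is vectorized over all examples, but an explicit loop over the layers remains.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, weights, biases):
    # X: (n_features, m), with the m training examples stacked as columns
    A = X
    for W, b in zip(weights, biases):   # explicit loop over layers l = 1..L
        Z = W @ A + b                   # vectorized over all m examples at once
        A = sigmoid(Z)
    return A
```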

Q5. Suppose you ran logistic regression twice, once with regularization parameter λ=0, and once with λ=1. One of the times, you got weight parameters w=[26.29 65.41], and the other time you got w=[2.75 1.32]. However, you forgot which value of λ corresponds to which value of w. Which one do you think corresponds to λ=1?

  1. w=[26.29 65.41]
  2. w=[2.75 1.32]

Answer: 2
Explanation: λ=0 means no regularization is used, whereas λ=1 means regularization is applied. Since regularization shrinks the weights, the run without regularization ends up with larger weights than the regularized run, so w=[2.75 1.32] corresponds to λ=1.
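
A minimal sketch (scikit-learn on a toy dataset) comparing weight magnitudes under weak vs. strong L2 regularization. Note that scikit-learn's C is the inverse of the regularization strength, so a very large C approximates λ=0 and C=1.0 plays the role of λ=1.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=2, n_redundant=0, random_state=0)

almost_unregularized = LogisticRegression(C=1e6).fit(X, y)   # ~ no regularization
regularized = LogisticRegression(C=1.0).fit(X, y)            # L2 regularization

print(almost_unregularized.coef_)  # typically larger weights
print(regularized.coef_)           # shrunk towards zero
```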

Q6. What is the value of the Sigmoid activation function (let’s denote it by g(z)) at an input value of z=0?

  1. 0
  2. 0.5
  3. -∞
  4. +∞

Answer: 2
Explanation: The sigmoid function is given by g(z) = 1 / (1 + exp(-z)), so at an input value of z=0 it outputs 1 / (1 + 1) = 0.5. Refer to Andrew Ng's explanation to know more.
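
A quick NumPy check of the sigmoid value at z = 0 and its behaviour towards the extremes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))    # 0.5
print(sigmoid(-10.0))  # close to 0
print(sigmoid(10.0))   # close to 1
```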

Q7. Suppose you have built a neural network having 1 input, 1 hidden and 1 output layer. You decide to initialize the weights and biases to be zero. Which of the following statements is true?

  1. Each neuron in the hidden layer will perform the same computation in the first iteration. But after one iteration of gradient descent they will learn to compute different things because we have “broken symmetry”.
  2. The hidden layer’s neurons will perform different computations from each other even in the first iteration; their parameters will thus keep evolving in their own way.
  3. Each neuron in the hidden layer will perform the same computation. So even after multiple iterations of gradient descent each neuron in the layer will be computing the same thing as other neurons.

Answer: 3
Explanation: By initializing the weights and biases to 0, each neuron in the hidden layer performs the same computation and receives the same gradient, so even after multiple iterations of gradient descent every neuron in the layer keeps computing the same thing as the other neurons (the symmetry is never broken). Refer to Andrew Ng's explanation to know more.
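
A minimal sketch (NumPy, one gradient step on a tiny 3-4-1 network with random toy data) showing that with zero initialization every hidden unit computes the same activation and receives the same update:

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(3, 5)                     # 3 features, 5 examples
y = np.random.randint(0, 2, (1, 5))

W1, b1 = np.zeros((4, 3)), np.zeros((4, 1))   # zero-initialized hidden layer (4 units)
W2, b2 = np.zeros((1, 4)), np.zeros((1, 1))

sigmoid = lambda z: 1 / (1 + np.exp(-z))
A1 = sigmoid(W1 @ X + b1)                     # every row (hidden unit) is identical
A2 = sigmoid(W2 @ A1 + b2)

dZ2 = A2 - y
dW1 = ((W2.T @ dZ2) * A1 * (1 - A1)) @ X.T    # hidden-layer gradient (all zeros here, since W2 is zero)
print(np.allclose(dW1, dW1[0]))               # True: every hidden unit receives the same update
```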

Q8. Suppose we have a neural network having 10 nodes in the input layer, 5 nodes in the hidden layer and 1 node in the output layer. What will be the dimension of b1 (first layer bias) and b2 (second layer bias)?

  1. b1:5×1, b2:1×1
  2. b1:1×10, b2:1×5
  3. b1:1×5, b2:5×10
  4. b1:5×10, b2:1×5

Answer: 1
Explanation: The bias for layer l has dimension (number of nodes in layer l) × 1. The hidden layer has 5 nodes, so b1 is 5×1, and the output layer has 1 node, so b2 is 1×1. Refer to Andrew Ng's explanation to know more.
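
A quick NumPy sketch of the parameter shapes for this 10-5-1 network, where W[l] has shape (nodes in layer l, nodes in layer l-1) and b[l] has shape (nodes in layer l, 1):

```python
import numpy as np

n_x, n_h, n_y = 10, 5, 1
W1 = np.random.randn(n_h, n_x) * 0.01   # (5, 10)
b1 = np.zeros((n_h, 1))                 # (5, 1)
W2 = np.random.randn(n_y, n_h) * 0.01   # (1, 5)
b2 = np.zeros((n_y, 1))                 # (1, 1)
print(b1.shape, b2.shape)               # (5, 1) (1, 1)
```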
