Artificial Intelligence - Section 2

Consider a two-layer network of the form shown in Figure 1 with M hidden units having Φ(.) = tanh(.) activation functions and full connectivity in both layers.


Figure 1: Network diagram for the two-layer neural network corresponding to questions 1, 2, and 3. The input, hidden, and output variables are represented by nodes, and the weight parameters by links between the nodes; the bias parameters are denoted by links coming from additional input and hidden variables x0 and z0. Arrows denote the direction of information flow through the network during forward propagation.

56. If we change the sign of all of the weights and the bias feeding into a particular hidden unit, then, for a given input pattern
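The key fact behind this question is that tanh is an odd function, so negating all weights and the bias feeding a hidden unit negates that unit's activation for every input pattern. A minimal sketch (with hypothetical random weights) illustrating this:

```python
import numpy as np

# Sketch: tanh is odd, i.e. tanh(-a) = -tanh(a), so negating the weights
# and bias feeding into a hidden unit flips the sign of its activation.
rng = np.random.default_rng(0)
w = rng.normal(size=3)   # weights into one hidden unit (hypothetical values)
b = rng.normal()         # its bias
x = rng.normal(size=3)   # a given input pattern

z_original = np.tanh(w @ x + b)
z_flipped = np.tanh((-w) @ x + (-b))

print(np.isclose(z_flipped, -z_original))  # True: the activation changes sign
```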


57. (Refer to Figure 1) In continuation with the above question, if we also change the sign of all of the weights leading out of that hidden unit, then
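The point here is that the two sign changes cancel: the hidden unit's activation is negated (tanh is odd), but its outgoing weights are negated too, so the network's input-output mapping is unchanged. A small sketch with hypothetical random weights:

```python
import numpy as np

# Sketch: flipping the signs on BOTH sides of a hidden unit leaves the
# network function unchanged, since (-w2) * tanh(-w1.x - b1) = w2 * tanh(w1.x + b1).
rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3)); b1 = rng.normal(size=4)   # hidden layer (hypothetical)
W2 = rng.normal(size=(1, 4)); b2 = rng.normal(size=1)   # output layer

x = rng.normal(size=3)                                  # a given input pattern
y = W2 @ np.tanh(W1 @ x + b1) + b2

# Flip all weights/bias feeding into hidden unit 0 AND all weights out of it.
W1f, b1f, W2f = W1.copy(), b1.copy(), W2.copy()
W1f[0] *= -1; b1f[0] *= -1; W2f[:, 0] *= -1

y_flipped = W2f @ np.tanh(W1f @ x + b1f) + b2
print(np.allclose(y, y_flipped))  # True: the mapping is unchanged
```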


58. In continuation with the preceding questions, for M hidden units there will be M such 'sign-flip' symmetries, and thus any given weight vector will be one of a set of ____ equivalent weight vectors.
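Since each of the M hidden units can independently have its signs flipped without changing the network function, the equivalence classes can be counted by brute force. A sketch (with a small hypothetical network, M = 3) that enumerates every combination of per-unit sign flips and verifies they all yield the same mapping:

```python
import itertools
import numpy as np

# Sketch: enumerate all per-hidden-unit sign-flip combinations for M = 3
# and count how many produce an identical input-output mapping.
rng = np.random.default_rng(2)
M = 3
W1 = rng.normal(size=(M, 2)); b1 = rng.normal(size=M)   # hypothetical weights
W2 = rng.normal(size=(1, M)); b2 = rng.normal(size=1)

x = rng.normal(size=2)
y = W2 @ np.tanh(W1 @ x + b1) + b2

equivalent = 0
for signs in itertools.product([1, -1], repeat=M):
    s = np.array(signs)
    # Flip the chosen units' incoming weights/biases and outgoing weights.
    yf = (W2 * s) @ np.tanh((W1 * s[:, None]) @ x + b1 * s) + b2
    if np.allclose(y, yf):
        equivalent += 1

print(equivalent)  # all 2**M = 8 sign-flip variants give the same mapping
```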



59. Memory can be modeled with a feedback loop, as shown in the figure. With |w| < 1, the system with a feedback loop has
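The behaviour in question can be seen by simulating the loop directly: with state update s[t] = w * s[t-1] + x[t] and |w| < 1, an impulse decays geometrically as w**t, so the loop acts as a fading memory and the system is stable. A minimal sketch (the feedback weight 0.5 is a hypothetical choice):

```python
# Sketch: one-unit feedback loop s[t] = w * s[t-1] + x[t].
# With |w| < 1 the impulse response w**t decays geometrically,
# so past inputs are gradually forgotten (a fading memory).
w = 0.5                          # hypothetical feedback weight, |w| < 1
s = 0.0
trace = []
for t in range(10):
    x = 1.0 if t == 0 else 0.0   # unit impulse at t = 0
    s = w * s + x
    trace.append(s)

print(trace[0], trace[1], trace[9])  # 1.0, 0.5, 0.5**9: geometric decay
```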



60. Suppose you were to design a neural network architecture for an object-recognition task that is capable of identifying an object irrespective of its orientation. Which of the following would be built into the neural network architecture?
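One generic way such invariance can be built in, sketched here purely as an illustration (not necessarily the intended answer option), is to pool a feature over transformed copies of the input, so the pooled value is unchanged when the object is rotated. The template and 90-degree rotation group below are hypothetical choices for a toy example:

```python
import numpy as np

# Sketch: pooling a feature over all four 90-degree rotations of an image
# makes the pooled feature invariant to rotating the input, because the
# set of rotated copies is the same for the original and the rotated image.
def pooled_feature(img):
    template = np.arange(9.0)    # hypothetical fixed feature template
    return max(np.rot90(img, k).ravel() @ template for k in range(4))

img = np.arange(9.0).reshape(3, 3)   # hypothetical input image
rotated = np.rot90(img, 1)           # the same object, rotated 90 degrees

print(pooled_feature(img) == pooled_feature(rotated))  # True: invariant
```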
