CHAPTER 7 OPEN QUESTIONS WITH ANSWERS


Information

Category: nCreator TI-Nspire
Author: SPITZER2001
Type: Document 3.0.1
Page(s): 1
Size: 4.58 KB
Uploaded: 05/05/2025 - 23:35:42
Uploader: SPITZER2001
Downloads: 9
Visibility: Public archive
Shortlink: https://tipla.net/a4621253

Description

TI-Nspire file generated on TI-Planet.org.

Compatible with OS 3.0 and later.

1. What are artificial neural networks, and how are they inspired by biological neurons?
Artificial neural networks (ANNs) are computational models designed to mimic the structure and behavior of the human brain. They consist of interconnected units called neurons, which process information in layers. Just as biological neurons receive input, process it, and pass output to other neurons, artificial neurons apply mathematical operations (such as weighted sums and activation functions) to input data and propagate signals through the network.

2. Explain the role of weights and biases in a neural network.
Weights determine the importance of input features by scaling them, while biases shift the activation function to allow the network to model complex relationships. Together, they control how input data is transformed as it flows through the network.

3. What is the purpose of an activation function in a neural network?
Activation functions introduce non-linearity into the model, enabling the network to learn complex patterns and decision boundaries. Without them, a neural network would behave like a simple linear model regardless of its depth.

4. Describe the main characteristics of a shallow neural network.
A shallow neural network consists of an input layer, a single hidden layer, and an output layer. It can solve simple tasks and is easier to train than a deep network, though it may not capture very complex patterns.

5. What are the differences between the sigmoid, tanh, and ReLU activation functions?
Sigmoid outputs values between 0 and 1 and is prone to vanishing gradients. Tanh outputs values between -1 and 1 and centers the data, but still suffers from vanishing gradients. ReLU (Rectified Linear Unit) outputs zero for negative inputs and the input itself for positive inputs, allowing faster and more effective training.

6. Why is ReLU preferred over sigmoid and tanh in most hidden layers?
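The contrast drawn in question 5 between sigmoid, tanh, and ReLU can be sketched in a few lines of NumPy. This is a minimal illustration, not from the original notes; the function names are our own:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real input into (0, 1); saturates for large |z|,
    # which is what causes vanishing gradients.
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Zero-centered, with range (-1, 1); still saturates at the extremes.
    return np.tanh(z)

def relu(z):
    # Zero for negative inputs, identity for positive ones; no saturation for z > 0.
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))  # all values in (0, 1)
print(tanh(z))     # all values in (-1, 1)
print(relu(z))     # [0. 0. 2.]
```

Evaluating all three on the same inputs makes the ranges from question 5 concrete: sigmoid(0) is 0.5, tanh(0) is 0, and ReLU simply clips negatives to zero.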
ReLU avoids the vanishing gradient problem, accelerates convergence during training, and introduces sparsity in the network by outputting zeros for negative inputs, improving computational efficiency.

7. What does the softmax function compute, and where is it used?
The softmax function converts a vector of raw scores into probabilities that sum to 1. It is used in the output layer of a neural network for multi-class classification tasks.

8. Explain how a neural network is trained using the backpropagation algorithm.
Backpropagation computes the gradient of the loss function with respect to each weight by applying the chain rule from the output layer back to the input layer. The weights are then updated using these gradients to minimize prediction error through gradient descent.

9. What is gradient descent, and how is it used in neural network training?
Gradient descent is an optimization algorithm used to minimize a loss function. In neural networks, it adjusts weights and biases iteratively in the opposite direction of the gradient to reduce prediction error.

10. How does mini-batch gradient descent differ from stochastic and batch gradient descent?
Batch gradient descent uses all training data per update. Stochastic gradient descent (SGD) uses one data point per update. Mini-batch gradient descent balances the two by updating weights on small subsets of data, offering faster and more stable convergence.

11. What is the impact of using a very high or very low learning rate?
A high learning rate may cause divergence or overshooting of the minimum, while a low learning rate can lead to slow convergence or getting stuck in local minima.

12. What is the universal approximation theorem, and why is it important?
The universal approximation theorem states that a neural network with at least one hidden layer and a sufficient number of neurons can approximate any continuous function. It highlights the theoretical power of even shallow networks.

13. How does a neuron compute its output?
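Softmax (question 7) and a plain gradient-descent loop (questions 9 and 11) can be illustrated together. The quadratic toy loss and the learning rate of 0.1 are our own choices for the sketch, not values from the notes:

```python
import numpy as np

def softmax(scores):
    # Subtract the max for numerical stability, then exponentiate
    # and normalize so the outputs form a probability distribution.
    exps = np.exp(scores - np.max(scores))
    return exps / np.sum(exps)

# Gradient descent on a toy loss L(w) = (w - 3)^2, whose gradient is 2*(w - 3).
# Each step moves w against the gradient, scaled by the learning rate.
w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * 2 * (w - 3)

probs = softmax(np.array([1.0, 2.0, 3.0]))
print(probs)         # sums to 1; largest probability for the largest score
print(round(w, 4))   # converges toward the minimum at w = 3
```

Raising `lr` well above 1 in this toy example makes the updates overshoot and diverge, while a very small `lr` leaves `w` far from 3 after the same number of steps, matching the trade-off described in question 11.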
A neuron multiplies its inputs by the corresponding weights, adds a bias term, and passes the result through an activation function to generate its output.

14. What role does the loss function play in neural network training?
The loss function quantifies the difference between predicted outputs and true labels. It guides the optimization process, allowing the network to learn by minimizing the prediction error.

15. What is the difference between shallow and deep neural networks?
Shallow networks have only one hidden layer, while deep networks have multiple hidden layers. Deep networks can model more complex relationships but require more data and computational resources.

16. How does the output layer differ between binary and multi-class classification tasks?
For binary classification, the output layer uses a sigmoid activation. For multi-class classification, the softmax activation is used to generate a probability distribution over the classes.

17. Why is regularization (like dropout) used in training neural networks?
Regularization prevents overfitting by adding constraints
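The single-neuron computation from question 13 (weighted sum plus bias, passed through an activation) can be written out directly. The weights, bias, and choice of sigmoid here are made up for the illustration:

```python
import numpy as np

def neuron_output(x, w, b):
    # Weighted sum of the inputs plus a bias, passed through a sigmoid activation.
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.4, 0.3, 0.1])    # weights scale each input's importance
b = -0.1                         # bias shifts the activation threshold

# Here z = 0.2 - 0.3 + 0.2 - 0.1 = 0, so the sigmoid output is exactly 0.5.
print(neuron_output(x, w, b))
```

Swapping the activation for ReLU or tanh changes only the last step, which is why the notes treat the weighted sum and the activation as separate ingredients of the neuron.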
[...]

