MCQ CHAPTER 5

Informations
Category: nCreator TI-Nspire
Author: SPITZER2001
Type: Document 3.0.1
Page(s): 1
Size: 4.94 KB
Uploaded: 05/05/2025 - 23:05:06
Uploader: SPITZER2001 (Profile)
Downloads: 8
Visibility: Public archive
Shortlink: https://tipla.net/a4621151
Description
Nspire file generated on TI-Planet.org.
Compatible with OS 3.0 and later.
<<
1. What is the primary goal of unsupervised learning? a) Minimize loss over labeled data b) Predict output labels c) Discover structure in unlabeled data d) Train deep learning models Answer: c) Discover structure in unlabeled data
2. In clustering, what are we trying to identify? a) Target variables b) Hidden labels c) Natural groupings in the data d) Decision boundaries Answer: c) Natural groupings in the data
3. Which of the following is not a clustering algorithm? a) K-means b) Hierarchical clustering c) PCA d) DBSCAN Answer: c) PCA
4. In K-means, what does the algorithm aim to minimize? a) Number of clusters b) Cross-entropy c) Inertia (within-cluster sum of squares) d) Entropy Answer: c) Inertia (within-cluster sum of squares)
5. What is required before running K-means? a) Labeled data b) Number of clusters k c) Maximum number of epochs d) Cluster labels Answer: b) Number of clusters k
6. What metric is commonly used to evaluate clustering quality? a) Accuracy b) Silhouette score c) Precision d) Recall Answer: b) Silhouette score
7. What does a silhouette score close to 1 indicate? a) Poor clustering b) Ambiguous clustering c) Perfectly defined clusters d) Incorrect number of clusters Answer: c) Perfectly defined clusters
8. Which of the following challenges is associated with K-means? a) Works with non-numeric data b) Automatically determines k c) Sensitive to initialization d) Handles outliers well Answer: c) Sensitive to initialization
9. Which statement about K-means is true? a) It guarantees a global optimum b) It handles categorical data well c) It assumes spherical clusters d) It is robust to outliers Answer: c) It assumes spherical clusters
10. The Elbow Method is used to: a) Detect overfitting b) Determine optimal k c) Reduce dimensions d) Train neural networks Answer: b) Determine optimal k
11. What type of learning does K-means belong to? a) Supervised b) Reinforcement c) Unsupervised d) Semi-supervised Answer: c) Unsupervised
12. Which algorithm is best for non-spherical clusters? a) K-means b) Logistic regression c) DBSCAN d) Naive Bayes Answer: c) DBSCAN
13. Which metric in clustering compares each point's cohesion and separation? a) Entropy b) Silhouette score c) MSE d) Gini impurity Answer: b) Silhouette score
14. Which of the following is a disadvantage of K-means? a) Works with large data b) Unsupervised c) Sensitive to outliers d) Converges quickly Answer: c) Sensitive to outliers
15. In K-means, centroids are updated using: a) Median of the cluster b) Mean of the cluster c) Mode of the cluster d) Random point in the cluster Answer: b) Mean of the cluster
16. What happens if we increase k too much in K-means? a) Clusters become more general b) Inertia increases c) Overfitting risk increases d) Model becomes faster Answer: c) Overfitting risk increases
17. Which technique is often used alongside K-means to validate the number of clusters? a) PCA b) Silhouette analysis c) Cross-entropy d) ROC curve Answer: b) Silhouette analysis
18. When using the Elbow Method, what are you looking for in the graph? a) Peak point b) Minimum silhouette c) Sharp bend d) Lowest accuracy Answer: c) Sharp bend
19. What does inertia represent in K-means? a) Spread across clusters b) Variance within a cluster c) Number of samples d) Accuracy of clustering Answer: b) Variance within a cluster
20. What happens to inertia as k increases? a) It increases b) It remains constant c) It decreases d) It drops to zero Answer: c) It decreases
21. Which of the following is true about clustering evaluation? a) You use accuracy b) You need labeled data c) Use silhouette score or inertia d) You need a test dataset Answer: c) Use silhouette score or inertia
22. What is a centroid in K-means? a) Median of the dataset b) Center of all clusters c) Mean of points in a cluster d) Random point in the data Answer: c) Mean of points in a cluster
23. Which statement is false about unsupervised learning? a) No ground truth labels are needed b) It always uses PCA c) It can reveal hidden patterns d) Evaluation is more difficult Answer: b) It always uses PCA
24. The K in K-means refers to: a) Kernel b) Number of iterations c) Number of clusters d) Knowledge Answer: c) Number of clusters
25. When is hierarchical clustering preferred over K-means? a) When k is known b) For small datasets or nested structures c) For high-dimensional data d) For large, flat clusters Answer: b) For small datasets or nested structures
26. How is clustering different from classification? a) Clustering is supervised b) Classification doesn't require labels c) Clustering finds structure without labels d) They are the same Answer: c) Clustering finds structure without labels
27. K-means clustering works best when: a) Clusters are elliptical b) Data is highly imbalanced c) Clusters are spherical and equally sized d) Clusters overlap significantly Answer: c) Clusters are spherical and equally sized
28. The silhouette score for poorly clustered points tends to be: a) Close to +1 b) Close to 0 c) Close to -1 d) Exactly 1 Answer: c) Close to -1
29. Why is K-means initial
[...]
>>
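
As a practical complement to the questions above on inertia, the Elbow Method, and the silhouette score, here is a minimal Python sketch using scikit-learn; the synthetic dataset and the range of k values are illustrative assumptions, not part of the original file.

from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic data with roughly spherical, equally sized clusters,
# the setting in which K-means works best (illustrative assumption).
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=42)

for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    # inertia_ is the within-cluster sum of squares; it decreases as k grows,
    # and the "sharp bend" in its curve is what the Elbow Method looks for.
    # The silhouette score is close to 1 for well-separated clusters and
    # close to -1 for poorly clustered points.
    score = silhouette_score(X, km.labels_)
    print(f"k={k}  inertia={km.inertia_:.1f}  silhouette={score:.3f}")

With well-separated blobs like these, the elbow in the inertia curve and the silhouette peak would typically both point to the true number of centers.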