CS 435/535 (Python, Java, or C++)

Fall 2019, Assignment 2
Assigned on 13 November 2019; due on 9 December 2019. Total points: 150.

This assignment focuses on clustering, and in particular the well-known k-means method. The associated data file is named water-treatment.data, and the documentation for this data set is given in the file water-treatment-dataDescription.txt. There are 527 data items, each a 38-dimensional vector. Please note that each attribute (i.e., dimension) has a different range of values, and that there are missing values. Please pay attention to the final output format specified in the description file. Undergraduate students are required to complete questions 1 through 5 for a full credit of 100 pts; graduate students are required to complete all six questions for a full credit of 150 pts.

1. (20 pts.) Clean up the data set. This includes filling in the missing values and normalizing all the data items. State clearly, in English, the methods you use for filling in the missing values and for normalizing the values.

2. (20 pts.) It is well known that the k-means algorithm requires the number of clusters, k, to be given in advance. In this problem, we do not know the value of k in advance. Propose a specific termination condition for the modified k-means when searching for the true k value. State your proposed condition or method clearly in English.

3. (20 pts.) Implement the modified k-means algorithm with your proposed termination condition and run it on the water-treatment dataset. Note that you must use the output format given in the description file. Report your output.

4. (20 pts.) Apply the PCA method you implemented in the first assignment to this dataset. Then apply the modified k-means method implemented above to the reduced data set and report the output, following the same output format specified in the description file.

5. (20 pts.) Compare the two clustering results and analyze any differences you observe; explain why such a difference exists if there is one, or why there is none if there is not.

6. (50 pts.) Implement an autoencoder (either shallow or deep) for dimensionality reduction and apply it to the given dataset. Report the dimensionality reduction result obtained with the autoencoder, and discuss the difference between PCA and the autoencoder for dimensionality reduction on this dataset.
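For question 1, the handout leaves the imputation and normalization methods up to you. A minimal sketch of one common choice, mean imputation followed by min-max scaling, is shown below; the toy array stands in for the real 527×38 data, in which missing entries (marked `?` in the file) would be parsed to `NaN` when loading.

```python
import numpy as np

def clean(X):
    """Fill missing values (NaN) with the column mean, then min-max
    normalize each attribute to [0, 1]. X is an (n, d) float array."""
    X = X.copy()
    col_mean = np.nanmean(X, axis=0)            # per-attribute mean, ignoring NaN
    nan_rows, nan_cols = np.where(np.isnan(X))
    X[nan_rows, nan_cols] = col_mean[nan_cols]  # mean imputation
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)      # guard against constant columns
    return (X - lo) / span

# Toy stand-in: 3 items, 2 attributes, one missing value.
X = np.array([[1.0, 10.0], [np.nan, 30.0], [3.0, 20.0]])
Xn = clean(X)
```

Because the 38 attributes have very different ranges, some form of per-attribute scaling (min-max as above, or z-score standardization) is essential before running k-means, which is distance-based.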
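For question 4 you are meant to reuse your own PCA from the first assignment; purely as a reminder of the technique, a minimal SVD-based projection looks like this (the number of retained components is up to you, e.g. enough to cover most of the variance):

```python
import numpy as np

def pca(X, n_components):
    """Project centered data onto its top principal components via SVD.
    Returns the (n, n_components) matrix of scores."""
    Xc = X - X.mean(axis=0)                       # center each attribute
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # rows of Vt are the PCs
```

The reduced matrix is then fed to the same modified k-means routine, which lets you compare clusterings in the full 38-dimensional space and in the reduced space.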
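For question 6, a real submission would likely use a framework such as Keras or PyTorch, but to make the idea concrete, here is a self-contained numpy sketch of a shallow autoencoder (tanh encoder, linear decoder) trained with full-batch gradient descent; the layer size, learning rate, and epoch count are illustrative assumptions, not prescribed values:

```python
import numpy as np

def train_autoencoder(X, n_hidden, epochs=500, lr=0.1, seed=0):
    """Shallow autoencoder: H = tanh(X W1 + b1), X_hat = H W2 + b2.
    Minimizes mean squared reconstruction error by gradient descent.
    Returns (encode, losses): the encoder function and the loss history."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, d)); b2 = np.zeros(d)
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # encoder (the reduced codes)
        X_hat = H @ W2 + b2                 # linear decoder
        losses.append(((X_hat - X) ** 2).mean())
        G = 2.0 * (X_hat - X) / n           # dLoss/dX_hat
        gW2 = H.T @ G; gb2 = G.sum(0)
        GH = (G @ W2.T) * (1.0 - H ** 2)    # backprop through tanh
        gW1 = X.T @ GH; gb1 = GH.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return (lambda Z: np.tanh(Z @ W1 + b1)), losses
```

With a nonlinear encoder the codes can capture structure PCA's linear projection cannot, which is exactly the PCA-vs-autoencoder contrast the question asks you to discuss on this dataset.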
