Comparing prediction algorithms in disorganized data
Abstract
Today's real estate market is highly active, but determining the right price for a house remains a difficult problem, and that problem motivates this work. In this study, we compare prediction algorithms to identify those that perform best on disorganized house data. The dataset was collected from real estate websites, and three different regions were selected for the experiment. The KNN, KStar, Simple Linear Regression, Linear Regression, RBFNetwork, and Decision Stump algorithms were evaluated. The results show that the KStar and KNN algorithms outperform the other prediction algorithms on disorganized data.
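As an illustration of the kind of comparison described above, the sketch below cross-validates several regressors on scraped listing data. It is a minimal approximation, not the study's actual setup: KStar, RBFNetwork, and Decision Stump are Weka algorithm names with no direct scikit-learn equivalents, so a depth-1 regression tree stands in for a decision stump, and the file path and column names (listings.csv, rooms, area_m2, age, price) are hypothetical placeholders.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

# Hypothetical listing data scraped from real estate websites.
df = pd.read_csv("listings.csv")
X = df[["rooms", "area_m2", "age"]]  # assumed feature columns
y = df["price"]                      # assumed target column

models = {
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "Linear Regression": LinearRegression(),
    # A depth-1 tree is the regression analogue of a decision stump.
    "Decision Stump": DecisionTreeRegressor(max_depth=1),
}

for name, model in models.items():
    # 10-fold cross-validated mean absolute error; lower is better.
    scores = -cross_val_score(model, X, y, cv=10,
                              scoring="neg_mean_absolute_error")
    print(f"{name}: MAE = {scores.mean():,.0f}")
```

Ranking the models by cross-validated error in this way mirrors the paper's conclusion that instance-based methods such as KNN cope better with noisy, disorganized listing data than simple linear models.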
Keywords: KNN, KStar, simple linear regression, RBFNetwork, disorganized data.