Acknowledgement
This research was supported by the National Key R&D Program of China (Nos. 2018YFC1200200 and 2018YFC1200205) and the National Natural Science Foundation of China (No. 61463016).
References
- A. R. Bahrehdar and R. S. Purves, Description and characterization of place properties using topic modeling on georeferenced tags, Geo-Spatial Inf. Sci. 21 (2018), 173-184. https://doi.org/10.1080/10095020.2018.1493238
- Z. Jiang et al., Variational deep embedding: An unsupervised generative approach to clustering, in Proc. IJCAI Int. Joint Conf. Artif. Intell. (Melbourne, Australia), 2017, pp. 1965-1972.
- M. Caron et al., Deep clustering for unsupervised learning of visual features, in Proc. Eur. Conf. Comput. Vision (Munich, Germany), 2018, pp. 132-149.
- V. Melnykov and R. Maitra, Finite mixture models and model-based clustering, Stat. Surv. 4 (2010), 80-116. https://doi.org/10.1214/09-SS053
- L. Qiu, F. Fang, and S. Yuan, Improved density peak clustering-based adaptive Gaussian mixture model for damage monitoring in aircraft structures under time-varying conditions, Mech. Syst. Signal Process. 126 (2019), 281-304. https://doi.org/10.1016/j.ymssp.2019.01.034
- G. J. McLachlan, S. X. Lee, and S. I. Rathnayake, Finite mixture models, Annu. Rev. Stat. Appl. 6 (2019), 355-378. https://doi.org/10.1146/annurev-statistics-031017-100325
- C. M. Bishop, Pattern recognition and machine learning, Springer, 2006.
- A. K. Jain, R. P. W. Duin, and J. Mao, Statistical pattern recognition: A review, IEEE Trans. Pattern Anal. Mach. Intell. 22 (2000), 4-37. https://doi.org/10.1109/34.824819
- A. R. Webb, Statistical pattern recognition, 2nd ed., Wiley, England, 2002.
- D. Reynolds, Gaussian mixture models, in Encyclopedia of Biometrics, S. Z. Li and A. Jain (eds.), Springer, Boston, MA, 2009.
- D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, Speaker verification using adapted Gaussian mixture models, Digit. Signal Process. 10 (2000), 19-41. https://doi.org/10.1006/dspr.1999.0361
- J. P. Vila and P. Schniter, Expectation-maximization Gaussian-mixture approximate message passing, IEEE Trans. Signal Process. 61 (2013), 4658-4672. https://doi.org/10.1109/TSP.2013.2272287
- I. C. McDowell et al., Clustering gene expression time series data using an infinite Gaussian process mixture model, PLoS Comput. Biol. 14 (2018), e1005896. https://doi.org/10.1371/journal.pcbi.1005896
- X. Zhu and D. R. Hunter, Clustering via finite nonparametric ICA mixture models, Adv. Data Anal. Classif. 13 (2019), 65-87. https://doi.org/10.1007/s11634-018-0338-x
- C. E. Rasmussen, The infinite Gaussian mixture model, Adv. Neural Inf. Process. Syst. 12 (2000), 554-560.
- N. Bouguila and D. Ziou, A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling, IEEE Trans. Neural Networks 21 (2010), 107-122. https://doi.org/10.1109/TNN.2009.2034851
- D. M. Blei and M. I. Jordan, Variational inference for Dirichlet process mixtures, Bayesian Anal. 1 (2006), 121-144. https://doi.org/10.1214/06-BA104
- Y. Yu, M. Li, and Y. Fu, Forest type identification by random forest classification combined with SPOT and multitemporal SAR data, J. For. Res. 29 (2018), 1407-1414. https://doi.org/10.1007/s11676-017-0530-4
- H. M. Ebied, K. Revett, and M. F. Tolba, Evaluation of unsupervised feature extraction neural networks for face recognition, Neural Comput. Appl. 22 (2013), 1211-1222. https://doi.org/10.1007/s00521-012-0889-2
- T. Wiatowski and H. Bolcskei, A mathematical theory of deep convolutional neural networks for feature extraction, IEEE Trans. Inf. Theory 64 (2018), 1845-1866. https://doi.org/10.1109/tit.2017.2776228
- A. Dosovitskiy et al., Discriminative unsupervised feature learning with exemplar convolutional neural networks, IEEE Trans. Pattern Anal. Mach. Intell. 38 (2016), 1734-1747. https://doi.org/10.1109/TPAMI.2015.2496141
- W. Zhang et al., Collaborative and adversarial network for unsupervised domain adaptation, in Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recogn. (Salt Lake City, UT, USA), 2018, pp. 3801-3809.
- A. Pirbonyeh et al., A linear unsupervised transfer learning by preservation of cluster-and-neighborhood data organization, Pattern Anal. Appl. 22 (2019), 1149-1160. https://doi.org/10.1007/s10044-018-0753-9
- S. Nejatian et al., An innovative linear unsupervised space adjustment by keeping low-level spatial data structure, Knowl. Inf. Syst. 59 (2019), 437-464. https://doi.org/10.1007/s10115-018-1216-8
- Y. W. Teh et al., Hierarchical Dirichlet processes, J. Am. Stat. Assoc. 101 (2006), 1566-1581. https://doi.org/10.1198/016214506000000302
- M. C. Hughes and E. B. Sudderth, Memoized online variational inference for Dirichlet process mixture models, Adv. Neural Inf. Process. Syst. 26 (2013).
- K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint, arXiv:1409.1556, 2014.
- D. Bartholomew, M. Knott, and I. Moustaki, Latent variable models and factor analysis: A unified approach, 3rd ed., Wiley, 2011.
- P. D. McNicholas and T. B. Murphy, Parsimonious Gaussian mixture models, Stat. Comput. 18 (2008), 285-296. https://doi.org/10.1007/s11222-008-9056-0
- B. Zhou et al., Places: A 10 million image database for scene recognition, IEEE Trans. Pattern Anal. Mach. Intell. 40 (2018), 1452-1464. https://doi.org/10.1109/tpami.2017.2723009
- N. Dalal and B. Triggs, Histograms of oriented gradients for human detection, in Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recogn. (San Diego, CA, USA), 2005, pp. 886-893.
- T. Ojala, M. Pietikäinen, and T. Mäenpää, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Trans. Pattern Anal. Mach. Intell. 24 (2002), 971-987. https://doi.org/10.1109/TPAMI.2002.1017623
- A. Rosenberg and J. Hirschberg, V-measure: A conditional entropy-based external cluster evaluation measure, in Proc. Conf. Empir. Methods Nat. Lang. Process. Comput. Nat. Lang. Learn. (Prague, Czech Republic), 2007, pp. 410-420.
- N. X. Vinh, J. Epps, and J. Bailey, Information theoretic measures for clusterings comparison: Is a correction for chance necessary?, in Proc. Annu. Int. Conf. Mach. Learn. (Montreal, Canada), 2009, pp. 1073-1080.
- N. X. Vinh, J. Epps, and J. Bailey, Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance, J. Machine Learn. Res. 11 (2010), 2837-2854.
- J. Deng et al., ImageNet: A large-scale hierarchical image database, in Proc. IEEE Conf. Comput. Vision Pattern Recogn. (Miami, FL, USA), 2009, pp. 248-255.