• Title/Summary/Keyword: Kohonen (코호넨)

Search Results: 52

Community Patterning of Benthic Macroinvertebrates in Streams of South Korea by Utilizing an Artificial Neural Network (인공신경망을 이용한 남한의 저서성 대형 무척추동물 군집 유형)

  • Kwak, Inn-Sil; Liu, Guangchun; Park, Young-Seuk; Chon, Tae-Soo
    • Korean Journal of Ecology and Environment / v.33 no.3 s.91 / pp.230-243 / 2000
  • A large-scale set of community data was patterned using an unsupervised learning algorithm in artificial neural networks. Data on benthic macroinvertebrates in streams of South Korea, reported in publications over the 12 years from 1984 to 1995, were provided as inputs for training the Kohonen network. The taxa included in the training comprised 5 phyla, 10 classes, 26 orders, 108 families, and 571 species from 27 streams. Abundant groups were Diptera, Ephemeroptera, Trichoptera, Plecoptera, Coleoptera, Odonata, Oligochaeta, and Physidae. A wide spectrum of community compositions was observed: only a few tolerant taxa were collected at polluted sites, while high species richness was observed at relatively clean sites. The map trained by the Kohonen network effectively showed patterns of communities from different river systems, followed by patterns of communities under different environmental disturbances (a minimal training sketch follows this entry). Training with the proposed artificial neural network could serve as an alternative way of organizing community data in large-scale ecological surveys.

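The patterning step described in this abstract is a standard Kohonen self-organizing map (SOM). Below is a minimal, self-contained NumPy sketch of that algorithm; the map size, decay schedules, and the randomly generated site-by-taxa abundance matrix are illustrative assumptions, not the paper's actual data or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the study's inputs: rows = sampling sites,
# columns = taxa abundances (the paper used 571 species from 27 streams).
n_sites, n_taxa = 27, 571
data = rng.poisson(2.0, size=(n_sites, n_taxa)).astype(float)
data /= data.max()                                # scale inputs to [0, 1]

# Kohonen SOM: a 2-D grid of weight vectors living in the input space.
rows, cols = 5, 5
weights = rng.random((rows, cols, n_taxa))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                            indexing="ij"), axis=-1)

n_iter, lr0, sigma0 = 2000, 0.5, max(rows, cols) / 2.0
for t in range(n_iter):
    x = data[rng.integers(n_sites)]
    # Best-matching unit: the node whose weights are closest to the input.
    d = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Learning rate and neighborhood radius shrink linearly over time.
    frac = t / n_iter
    lr, sigma = lr0 * (1.0 - frac), sigma0 * (1.0 - frac) + 1e-9
    # A Gaussian neighborhood pulls nodes near the BMU toward the input.
    dist2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)

# After training, each site maps to the cell of its best-matching unit;
# sites with similar communities land in the same or neighboring cells.
for i, x in enumerate(data):
    d = np.linalg.norm(weights - x, axis=-1)
    print(f"site {i:2d} -> cell {np.unravel_index(np.argmin(d), d.shape)}")
```

This unsupervised grouping of sites by community similarity is what allows the trained map to separate communities by river system and by degree of environmental disturbance, as the abstract reports.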

Speech Visualization of Korean Vowels Based on the Distances Among Acoustic Features (음성특징의 거리 개념에 기반한 한국어 모음 음성의 시각화)

  • Pok, Gouchol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.5 / pp.512-520 / 2019
  • Representing speech visually is quite useful for learners studying foreign languages as well as for the hearing impaired, who cannot hear speech directly, and a number of studies on this topic have been presented in the literature. They remain, however, at the level of representing the characteristics of speech with colors or of showing the changing shape of the lips and mouth through animation. As a result, such methods cannot tell users how far their pronunciation is from the standard one, and they make it technically difficult to develop a system in which users can correct their pronunciation interactively. To address these drawbacks, this paper proposes a speech visualization model based on the relative distance between the user's speech and the standard one, and suggests concrete implementation directions by applying the proposed model to the visualization of Korean vowels. The method extracts the three formants F1, F2, and F3 from speech signals and feeds them into Kohonen's SOM, mapping the results onto a 2-D screen so that each speech sample is represented as a point (a sketch of this pipeline follows this entry). We present a working system implemented using open-source formant analysis software, applied to the speech of a Korean instructor and several foreign students studying Korean; the user interface for the screen display was built in JavaScript.
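The pipeline this abstract outlines (extract formants F1, F2, and F3, feed them into a Kohonen SOM, and plot each utterance as a point on a 2-D screen) can be sketched as follows. The sketch assumes the open-source MiniSom library as a stand-in SOM implementation, and the formant values are illustrative placeholders, not measurements from the paper.

```python
# pip install minisom          (MiniSom: a small, widely used SOM library)
import numpy as np
from minisom import MiniSom

# Illustrative (F1, F2, F3) formants in Hz for five Korean vowels --
# placeholder values only, not data from the paper.
reference = {
    "a": [800.0, 1200.0, 2500.0],
    "i": [300.0, 2300.0, 3000.0],
    "u": [350.0,  800.0, 2400.0],
    "e": [500.0, 1900.0, 2600.0],
    "o": [450.0,  900.0, 2400.0],
}
X = np.array(list(reference.values()))
mu, sd = X.mean(axis=0), X.std(axis=0)
X_norm = (X - mu) / sd                      # normalize each formant axis

# 2-D Kohonen map: each vowel is displayed at the grid coordinate of its
# best-matching unit.
som = MiniSom(10, 10, 3, sigma=2.0, learning_rate=0.5, random_seed=0)
som.train_random(X_norm, 500)
points = {v: som.winner(x) for v, x in zip(reference, X_norm)}
print("reference vowel positions:", points)

# A learner's utterance is projected the same way; the on-screen gap to
# the reference point conveys how far the pronunciation deviates.
learner = (np.array([650.0, 1500.0, 2550.0]) - mu) / sd
px, py = som.winner(learner)
ax, ay = points["a"]
print(f"learner 'a' at ({px},{py}), reference 'a' at ({ax},{ay}), "
      f"grid distance = {np.hypot(px - ax, py - ay):.1f}")
```

Because both the reference speech and the learner's speech pass through the same trained map, the distance between their on-screen points reflects the distance between their acoustic features, which is the interactive-correction property the paper aims for.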