• Title/Summary/Keyword: Korean speech processing (한국어 음성처리)

The Effects of Silicate Nitrogen, Phosphorus and Potassium Fertilizers on the Chemical Components of Rice Plants and on the Incidence of Blast Disease of Rice Caused by Pyricularia oryzae Cavara (규산 및 삼요소 시비수준이 도체내 성분함량과 도열병 발생에 미치는 영향)

  • Paik Soo Bong
    • Korean journal of applied entomology / v.14 no.3 s.24 / pp.97-109 / 1975
  • In an attempt to develop an effective integrated system for controlling blast disease of rice caused by Pyricularia oryzae Cav., the possibility of minimizing disease incidence through proper application of fertilizers was investigated. The effects of silicate, nitrogen, phosphorus, and potassium fertilizers on the development of blast disease, as well as the interaction between rice varieties and strains of P. oryzae, were studied. The experiments were conducted in 1971 and 1973 using artificial inoculation and under natural development of blast disease on rice plants. The results are summarized as follows.
    1. Application of silicate fertilizer increased the silicate, total sugar, and potassium content but decreased total nitrogen and phosphorus in the leaf blades of rice plants.
    2. The ratios of total C/total N, $SiO_2$/total N, and $K_2O$/total N in leaf blades increased with the application of silicate fertilizer, and these ratios showed a strong negative correlation with the incidence of rice blast disease (a minimal computational sketch of this ratio-correlation follows the abstract).
    3. Application of silicate fertilizer reduced the incidence of rice blast disease.
    4. Overdressing of nitrogen fertilizer increased total nitrogen and decreased silicate and total sugar content in leaf blades, rendering the rice plants more susceptible to blast disease.
    5. Overdressing of phosphorus fertilizer increased both total nitrogen and phosphorus and decreased silicate content in the leaf blades, making the rice plants more susceptible to blast disease.
    6. Increased dressing of potash increased the silicate content and the $K_2O$/total N ratio but decreased the total nitrogen content in leaf blades. When potassium content in the leaf blades was low, additional potash contributed to increased resistance to blast disease; when potassium content was already high, there was no significant correlation between additional potassium application and resistance.
    7. When four rice varieties were artificially inoculated with three strains of P. oryzae, the incidence of blast disease was most severe on Pungok, least severe on Jinheung, and moderate on Pungkwang and Paltal.
    8. When 5-6-leaf-stage seedlings of the four varieties were artificially inoculated with the three strains, disease incidence was most severe on the second leaf from the top and less severe on the top and third leaves, regardless of fertilizer application.
    9. The virulence of the three strains of P. oryzae was in the order $P_1$, $P_2$, $P_3$ when inoculated onto the Jinheung, Paltal, and Pungkwang varieties, but not onto Pungok. The interaction between strains of P. oryzae and rice varieties was significant.
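A brief aside on point 2: the reported relationship is a simple correlation between per-plot nutrient ratios and disease incidence. The Python sketch below illustrates the computation; all numbers are placeholders for illustration, not data from the paper.

    # Minimal sketch of the ratio-vs-incidence correlation in point 2.
    # All values are illustrative placeholders, NOT the paper's data.
    import numpy as np

    sio2      = np.array([ 8.1,  9.4, 10.2, 11.5, 12.8])   # % SiO2 in leaf blades
    total_n   = np.array([ 3.2,  3.0,  2.8,  2.6,  2.4])   # % total nitrogen
    incidence = np.array([42.0, 35.5, 28.1, 20.4, 14.2])   # % blast incidence

    ratio = sio2 / total_n                     # SiO2/total N per plot
    r = np.corrcoef(ratio, incidence)[0, 1]    # Pearson correlation coefficient
    print(f"SiO2/total N vs. blast incidence: r = {r:.2f}")  # strongly negative here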

Research on Generative AI for Korean Multi-Modal Montage App (한국형 멀티모달 몽타주 앱을 위한 생성형 AI 연구)

  • Lim, Jeounghyun; Cha, Kyung-Ae; Koh, Jaepil; Hong, Won-Kee
    • Journal of Service Research and Studies / v.14 no.1 / pp.13-26 / 2024
  • Multi-modal generation is the process of producing results from several kinds of information, such as text, images, and audio. With the rapid development of AI technology, a growing number of multi-modal systems synthesize different types of data to produce results. In this paper, we present an AI system that uses speech and text recognition to describe a person and generate a montage image. Whereas existing montage generation technology is based on Western facial features, the montage generation system developed in this paper learns a model based on Korean facial features. It can therefore create more accurate and effective Korean montage images from multi-modal voice and text input specific to Korean. Because the app's output can serve as a draft montage, it can dramatically reduce the manual labor of montage production personnel. For this purpose, we utilized persona-based virtual-person montage data provided by the AI-Hub of the National Information Society Agency. AI-Hub is an AI integration platform that aims to provide a one-stop service by building the artificial intelligence training data needed for the development of AI technology and services. The image generation system was implemented using VQGAN, a deep learning model for generating high-resolution images, and KoDALLE, a Korean-language image generation model. The trained AI model creates montage images of faces that closely match what was described by voice and text. To verify the practicality of the app, 10 testers used it, and more than 70% responded that they were satisfied. The montage generator can be applied in various fields, such as criminal investigation, where facial features described verbally can be rendered as images.
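As a rough illustration of the pipeline this abstract describes (a spoken or typed Korean description conditioning a face generator), here is a minimal, self-contained Python sketch. The transcribe and generate_montage functions are hypothetical stand-ins, not the authors' code and not the actual KoDALLE/VQGAN APIs.

    # Minimal sketch of the described multi-modal montage pipeline.
    # transcribe() and generate_montage() are hypothetical stand-ins for the
    # Korean ASR and KoDALLE/VQGAN models; they are NOT the authors' API.

    def transcribe(audio: bytes) -> str:
        """Stand-in for a Korean speech-to-text model."""
        # A real implementation would run ASR on the audio; here we return a
        # fixed demo description: "a man in his 30s with a round face, short
        # black hair, and glasses".
        return "둥근 얼굴, 짧은 검은 머리, 안경을 쓴 30대 남성"

    def generate_montage(description: str) -> str:
        """Stand-in for text-conditioned face generation (KoDALLE + VQGAN).
        A real implementation would return a synthesized face image; here we
        just echo the conditioning text."""
        return f"<montage conditioned on: {description}>"

    def montage_app(audio: bytes = b"", typed_text: str = "") -> str:
        # Fuse the two modalities: prefer the typed description, otherwise
        # fall back to the transcribed speech.
        description = typed_text or transcribe(audio)
        return generate_montage(description)

    # "a woman in her 20s with an angular jaw and long brown hair"
    print(montage_app(typed_text="각진 턱, 긴 갈색 머리의 20대 여성"))

The key design point the sketch captures is that both modalities reduce to a single textual description before image generation, which is why a Korean-specific text-to-image model (rather than a Western-trained one) matters for output quality.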

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn; Chung, Yeojin; Lee, Jaejoon; Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, since a training corpus generally includes a huge number of distinct words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors during decomposition (Lankinen et al., 2016). This paper therefore proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit of which Korean text is composed. We construct the language model using three or four LSTM layers. Each model was trained with the stochastic gradient algorithm as well as with more advanced optimizers such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was carried out on Old Testament texts using the deep learning package Keras with a Theano backend. After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. Each input vector consisted of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were obtained and divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm; the stochastic gradient algorithm also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity did not improve significantly and even worsened under some conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although the completeness of the generated sentences differed slightly between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing and speech recognition, which underpin artificial intelligence systems.
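To make the method concrete, here is a minimal Keras sketch in the spirit of the paper: Hangul syllables are decomposed into jamo (phoneme-level units), windows of 20 symbols predict the 21st, and a 3-layer LSTM stack is trained with Adam (one of the optimizers the paper tested). The tiny corpus, the layer width, the embedding size, and the use of an Embedding layer in place of the paper's one-hot encoding of 74 symbols are assumptions for illustration, not the authors' exact setup.

    # Minimal sketch of a phoneme-level LSTM language model for Korean.
    # Window length, layer count, and optimizer follow the abstract; the
    # corpus, widths, and Embedding layer are illustrative assumptions.
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, LSTM, Dense

    # Decompose precomposed Hangul syllables (U+AC00..U+D7A3) into jamo.
    CHO  = [chr(c) for c in range(0x1100, 0x1113)]          # 19 initial consonants
    JUNG = [chr(c) for c in range(0x1161, 0x1176)]          # 21 medial vowels
    JONG = [''] + [chr(c) for c in range(0x11A8, 0x11C3)]   # 27 final consonants

    def to_jamo(text):
        out = []
        for ch in text:
            code = ord(ch) - 0xAC00
            if 0 <= code < 11172:                # 19 * 21 * 28 syllables
                cho, rest = divmod(code, 21 * 28)
                jung, jong = divmod(rest, 28)
                out += [CHO[cho], JUNG[jung]] + ([JONG[jong]] if jong else [])
            else:
                out.append(ch)                   # keep spaces/punctuation as-is
        return out

    SEQ_LEN = 20  # 20 input symbols predict the 21st, as in the paper

    def build_dataset(symbols, vocab):
        idx = {s: i for i, s in enumerate(vocab)}
        seq = [idx[s] for s in symbols]
        X = np.array([seq[i:i + SEQ_LEN] for i in range(len(seq) - SEQ_LEN)])
        y = np.array(seq[SEQ_LEN:])              # next-symbol targets
        return X, y

    def build_model(vocab_size, n_layers=3, units=256):
        model = Sequential()
        model.add(Embedding(vocab_size, 64))     # stands in for one-hot input
        for i in range(n_layers):                # stacked LSTM layers
            model.add(LSTM(units, return_sequences=(i < n_layers - 1)))
        model.add(Dense(vocab_size, activation='softmax'))
        model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
        return model

    # Tiny placeholder corpus (Genesis 1:1); the paper used the full Old Testament.
    jamo = to_jamo("태초에 하나님이 천지를 창조하시니라")
    vocab = sorted(set(jamo))
    X, y = build_dataset(jamo, vocab)
    model = build_model(len(vocab))
    model.fit(X, y, epochs=1, batch_size=32, verbose=0)

With this setup, the test-set perplexity the paper compares across optimizers is simply the exponential of the mean cross-entropy loss on held-out windows.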