• Title/Summary/Keyword: 생성문법 (generative grammar)

Search Results: 245, processing time 0.018 seconds

Modeling Nutrient Uptake of Cucumber Plant Based on EC and Nutrient Solution Uptake in Closed Perlite Culture (순환식 펄라이트재배에서 EC와 양액흡수량을 이용한 오이 양분흡수 모델링)

  • 김형준;우영회;김완순;조삼증;남윤일
    • Proceedings of the Korean Society for Bio-Environment Control Conference
    • /
    • 2001.04b
    • /
    • pp.75-76
    • /
    • 2001
  • To develop a nutrient uptake model for drainage reuse in closed perlite culture, five EC treatments (1.5, 1.8, 2.1, 2.4, and 2.7 dS·m$^{-1}$) were applied. Nutrient solution uptake did not differ among EC levels until the middle of the growth period, but thereafter tended to decrease as EC increased (Fig. 1). Uptake of NO$_3$-N, P, and K maintained differences among treatments throughout the growth period; N and K uptake remained at a constant level after mid-growth, while P uptake tended to increase somewhat over the growth period. S uptake dropped sharply in all treatments after mid-growth, with no differences among treatments late in the period (Fig. 2). As with the mineral ion absorption rates of cucumber, uptake amounts also differed among EC levels, suggesting that EC can be used as a factor for estimating mineral ion uptake. Mineral ion uptake showed no differences among EC treatments early in growth, clear differences after mid-growth, and somewhat reduced differences at high concentrations late in growth. Regression equations predicting cucumber ion uptake were derived with nutrient solution uptake per unit solar radiation and EC as the main variables. The correlation coefficients of the uptake estimation equations were high for all ions except S, and especially high for N, P, K, and Ca. The correlation coefficient for S was low at 0.47, but the regression equations for all ions were significant at the 1% level, so the models were judged usable as mineral ion estimation equations in closed hydroponic culture (Table 1). Comparison with measured values showed a high positive correlation at the 1% confidence level, suggesting that practical application is possible (Fig. 3).
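The regression approach this abstract describes, predicting daily ion uptake from EC, can be sketched minimally. The EC levels below match the listed treatments, but the uptake values and the single-predictor simplification are illustrative assumptions, not the paper's data or model (the paper also uses solution uptake per unit radiation as a predictor):

```python
import math

# Hypothetical data: EC treatment levels and invented daily N uptake values.
ec = [1.5, 1.8, 2.1, 2.4, 2.7]            # dS/m treatment levels
n_uptake = [12.1, 11.4, 10.6, 9.9, 9.1]   # illustrative N uptake per plant per day

def fit_line(x, y):
    """Ordinary least squares for y = intercept + slope * x, with correlation r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    r = sxy / math.sqrt(sxx * syy)        # correlation coefficient
    return slope, my - slope * mx, r

slope, intercept, r = fit_line(ec, n_uptake)
print(f"N uptake ~ {intercept:.2f} + {slope:.2f} x EC (r = {r:.3f})")
```

With these invented numbers the fitted slope is negative and |r| is close to 1, mirroring the abstract's finding that uptake decreases with EC and that the regressions correlate strongly for most ions.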


A study of the Implications of French vocabularies and the de-locality in LEE Sang's Poems (이상(李箱)의 시 작품에 구사되는 프랑스어와 탈 지방성)

  • Lee, Byung-soo
    • Cross-Cultural Studies
    • /
    • v.53
    • /
    • pp.1-24
    • /
    • 2018
  • The following research is a study of the use of French and de-locality in the poetry of the modern Korean poet Lee Sang (1910-1937). His hometown was Kyung Sung, Seoul. He mainly wrote his works in Korean, Chinese characters, and Japanese, using the language of education and his native language of the time. What, then, was the spirit he wanted to embody through the use of French words? In words like "ESQUISSE" and "AMOUREUSE", Lee Sang's French was not a one-off use of foreign words intended to amuse; to him the words were meticulously woven into his intentions. French words were harmonized with other non-poetic symbols such as "${\Box}$, ${\triangle}$, ${\nabla}$" and deployed as a type of typographical hieroglyphics. Instead of his mother tongue, French was applied as a surrealistic vocabulary that implemented the moral of infinite freedom and imagination, and expressed something new or extrasensory. Subsequently, the de-localized French words in his poetry can be seen as poetic words implementing the "new spirit" proposed by western avant-garde artists. Analysis of the French in his poetry shows a yearning for scientific civilization, alongside a sense of defeat and of escape from the colonized, belittled native land. Above all, in light of his pursuit of western civilization and avant-garde art, the French in his poetry can be regarded as world-oriented, intended to embody the new tendency of "the locomotive of modernity", transcending the territory of the native country.

Deletion-Based Sentence Compression Using Sentence Scoring Reflecting Linguistic Information (언어 정보가 반영된 문장 점수를 활용하는 삭제 기반 문장 압축)

  • Lee, Jun-Beom;Kim, So-Eon;Park, Seong-Bae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.3
    • /
    • pp.125-132
    • /
    • 2022
  • Sentence compression is a natural language processing task that generates concise sentences preserving the important meaning of the original sentence. For grammatically appropriate sentence compression, early studies utilized human-defined linguistic rules. Later, as sequence-to-sequence models came to perform well on various natural language processing tasks such as machine translation, studies applied them to sentence compression as well. However, the rule-based studies require all rules to be defined by humans, and the sequence-to-sequence studies require a large amount of parallel data for model training. To address these challenges, Deleter, a sentence compression model that leverages the pre-trained language model BERT, was proposed. Because Deleter compresses sentences using a perplexity-based score computed with BERT, it requires neither linguistic rules nor a parallel dataset. However, because Deleter considers only perplexity, it does not compress sentences in a way that reflects the linguistic information of the words in them. Furthermore, since the datasets used for pre-training BERT are far from compressed sentences, this can lead to incorrect compressions. To address these problems, this paper proposes a method to quantify the importance of linguistic information and reflect it in the perplexity-based sentence scoring. In addition, by fine-tuning BERT with a corpus of news articles, which often contain proper nouns and often omit unnecessary modifiers, we allow BERT to measure a perplexity appropriate for sentence compression. Evaluations on English and Korean datasets confirm that the compression performance of sentence-scoring based models can be improved by the proposed method.
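The deletion-based compression loop that Deleter-style models perform can be sketched as follows. This toy replaces BERT with a hand-made unigram model, so the vocabulary, probabilities, and greedy deletion strategy are illustrative assumptions only, not the actual Deleter algorithm:

```python
import math

# Toy "language model" standing in for BERT: a unigram model with invented
# log-probabilities. Lower perplexity = more fluent under this model.
UNIGRAM_LOGP = {
    "the": -1.0, "cat": -3.0, "sat": -3.5, "on": -1.5,
    "mat": -3.5, "very": -2.5, "fluffy": -5.0,
}
OOV_LOGP = -7.0  # log-probability assigned to out-of-vocabulary tokens

def pseudo_perplexity(tokens):
    """Perplexity of a token sequence under the toy unigram model."""
    if not tokens:
        return float("inf")
    avg_logp = sum(UNIGRAM_LOGP.get(t, OOV_LOGP) for t in tokens) / len(tokens)
    return math.exp(-avg_logp)

def compress(tokens, target_len):
    """Greedily delete the token whose removal lowers perplexity the most."""
    tokens = list(tokens)
    while len(tokens) > target_len:
        candidates = [tokens[:i] + tokens[i + 1:] for i in range(len(tokens))]
        tokens = min(candidates, key=pseudo_perplexity)
    return tokens

print(compress("the very fluffy cat sat on the mat".split(), 6))
```

Under this toy model the rarest tokens are deleted first; the paper's contribution is precisely to temper such a purely perplexity-driven choice with linguistic importance scores.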

A Study on Speech Recognition Using the HM-Net Topology Design Algorithm Based on Decision Tree State-clustering (결정트리 상태 클러스터링에 의한 HM-Net 구조결정 알고리즘을 이용한 음성인식에 관한 연구)

  • 정현열;정호열;오세진;황철준;김범국
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.2
    • /
    • pp.199-210
    • /
    • 2002
  • In this paper, we studied speech recognition using the HM-Net topology design algorithm based on decision-tree state clustering, in order to improve the performance of acoustic models in speech recognition. Compared to other languages, Korean has many allophonic and grammatical rules, so we investigated the allophonic variations defined in Korean phonetics and constructed the phoneme question set for the phonetic decision tree. The basic idea of the HM-Net topology design algorithm is to take the basic structure of the SSS (Successive State Splitting) algorithm and further split the states of pre-constructed context-dependent acoustic models. That is, it generates a phonetic decision tree using the phoneme question sets for each state of the models, and iteratively trains the state sequences of the context-dependent acoustic models using the PDT-SSS (Phonetic Decision Tree-based SSS) algorithm. To verify the effectiveness of the algorithm, we carried out speech recognition experiments on 452 words from the Center for Korean Language Engineering corpus (KLE452) and 200 sentences from an air flight reservation task (YNU200). Experimental results show that recognition accuracy improved progressively with the number of states after state splitting, in the phoneme, word, and continuous speech recognition experiments respectively. We obtained average phoneme and word recognition accuracies of 71.5% and 99.2% with 2,000 states, and an average continuous speech recognition accuracy of 91.6% with 800 states. We also carried out word recognition experiments using HTK (HMM Toolkit), which performs state tying, for comparison with the parameter sharing of the HM-Net topology design algorithm. In the word recognition experiments, the HM-Net topology design algorithm achieved an average of 4.0% higher recognition accuracy than the context-dependent acoustic models generated by HTK, implying its effectiveness.
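The decision-tree state clustering underlying PDT-SSS can be illustrated with a minimal likelihood-gain split: each candidate phoneme question partitions a state's data, and the question with the largest gain in single-Gaussian log-likelihood wins. The phone classes, question set, and 1-D "acoustic" values below are invented for illustration; a real system would use Gaussian statistics over acoustic frames:

```python
import math

# Yes/no phoneme questions about the left context of a triphone state.
QUESTIONS = {
    "L=vowel?": {"a", "i", "u"},
    "L=nasal?": {"n", "m"},
}

def log_likelihood(values):
    """Log-likelihood of data under one Gaussian (up to additive constants)."""
    n = len(values)
    if n < 2:
        return 0.0
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return -0.5 * n * math.log(var + 1e-6)

def best_split(data):
    """Pick the question with the largest likelihood gain over no split."""
    base = log_likelihood([v for _, v in data])
    best = None
    for q, phones in QUESTIONS.items():
        yes = [v for c, v in data if c in phones]
        no = [v for c, v in data if c not in phones]
        if not yes or not no:
            continue
        gain = log_likelihood(yes) + log_likelihood(no) - base
        if best is None or gain > best[1]:
            best = (q, gain)
    return best

# (left-context phone, acoustic value) pairs pooled in one state.
data = [("a", 1.0), ("i", 1.2), ("u", 0.9), ("n", 3.0), ("m", 3.1), ("t", 3.2)]
print(best_split(data))
```

Here the vowel question separates the two value clusters cleanly, so it yields the larger gain; applied recursively until the gain falls below a threshold, this yields the tree-clustered state topology.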

A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung;Bae, Junghwan;Han, Namgi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.69-92
    • /
    • 2015
  • The explosion of social media data has led researchers to apply text-mining techniques to analyze big social media data more rigorously. Even as social media text analysis algorithms have improved, previous approaches retain some limitations. In sentiment analysis of social media written in Korean, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies have added grammatical factors to the feature sets used to train classification models. The other adopts semantic analysis for sentiment analysis, but it is mainly applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network algorithms, to handle the more extensive semantic features that were underestimated in existing sentiment analysis. The result of adopting the Word2Vec algorithm is compared to the result of co-occurrence analysis to identify the difference between the two approaches. The results show that the Word2Vec algorithm extracts three times more related words expressing some emotion about the keyword than co-occurrence analysis does. The difference between the two results comes from Word2Vec's vectorization of semantic features. It is therefore possible to say that the Word2Vec algorithm can catch hidden related words that have not been found in traditional analysis. In addition, Part-Of-Speech (POS) tagging for Korean is used to detect adjectives as "emotional words". The emotion words extracted from the text are converted into word vectors by the Word2Vec algorithm to find related words; among these related words, nouns are selected because they would have a causal relationship with the "emotional word" in the sentence. The process of extracting these trigger factors of emotional words is named "Emotion Trigger" in this study. As a case study, the datasets were collected by searching with three keywords: professor, prosecutor, and doctor, since these keywords carry rich public emotion and opinion. Advanced data collection was conducted to select secondary keywords for data gathering. The secondary keywords used for each keyword were: Professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor), Doctor (Shin hae-chul sky hospital, drinking and plastic surgery, rebate), Prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (Professor: 25,720; Doctor: 35,110; Prosecutor: 43,225), gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and every program used in text processing and analysis was written in Java. The contributions of this study are as follows: First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, finding Emotion Triggers can detect hidden connections to public emotion which existing methods cannot. Finally, the approach used in this study could be generalized regardless of the type of text data. A limitation of this study is that it is hard to say that a word extracted by Emotion Trigger processing has a significantly causal relationship with the emotional word in a sentence. Future work will clarify the causal relationship between emotional words and the words extracted by Emotion Trigger by comparison with manually tagged relationships. Furthermore, the text data used in Emotion Trigger include Twitter posts, which have a number of distinct features not dealt with here; these features will be considered in further study.
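The core retrieval step of the Emotion Trigger pipeline, finding nouns whose Word2Vec vectors lie closest to an emotional word, can be sketched as follows. The tiny 3-dimensional vectors and POS labels below are invented stand-ins for trained embeddings and a Korean POS tagger:

```python
import math

# Invented toy embeddings; in practice these come from a trained Word2Vec model.
VECTORS = {
    "angry":     [0.9, 0.1, 0.0],   # emotional word (adjective)
    "professor": [0.8, 0.2, 0.1],
    "scandal":   [0.7, 0.3, 0.0],
    "lecture":   [0.1, 0.9, 0.2],
    "happy":     [0.0, 0.2, 0.9],
}
NOUNS = {"professor", "scandal", "lecture"}  # stand-in for POS-tagged nouns

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def emotion_triggers(emotion_word, top_k=2):
    """Rank candidate nouns by cosine similarity to the emotional word."""
    target = VECTORS[emotion_word]
    ranked = sorted(NOUNS, key=lambda w: cosine(VECTORS[w], target), reverse=True)
    return ranked[:top_k]

print(emotion_triggers("angry"))
```

Restricting the neighbors to nouns mirrors the study's design choice: adjectives serve as the emotional words, while the nearby nouns are treated as candidate triggers of that emotion.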