• Title/Summary/Keyword: Word order

An Improved Automatic Text Summarization Based on Lexical Chaining Using Semantical Word Relatedness (단어 간 의미적 연관성을 고려한 어휘 체인 기반의 개선된 자동 문서요약 방법)

  • Cha, Jun Seok;Kim, Jeong In;Kim, Jung Min
    • Smart Media Journal
    • /
    • v.6 no.1
    • /
    • pp.22-29
    • /
    • 2017
  • With the rapid spread of smart devices, the volume of document data on the Internet is increasing sharply, making it increasingly difficult for users to digest the available information. Various studies on automatic document summarization are therefore under way. This study uses the TextRank algorithm to summarize documents efficiently. TextRank represents sentences or keywords as the vertices of a graph and uses the edges between them to capture semantic relations between words and sentences, from which the importance of each sentence is estimated. The algorithm extracts high-ranking keywords and, based on those keywords, selects important sentences. To do so, it first groups related words and assigns each group a weight; sentences with higher weight scores are selected, and the summary is built from those sentences. Experiments showed that this process performs better than the summarization methods reported in previous studies and summarizes documents more efficiently.
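
The graph-based sentence ranking described above can be sketched roughly as follows. This is a minimal illustration of the general TextRank idea, not the authors' implementation; the word-overlap similarity and whitespace tokenizer are simplifying assumptions.

    # Minimal TextRank-style sentence ranking (illustrative sketch, not the paper's code).
    import math

    def similarity(s1, s2):
        """Word-overlap similarity used as the edge weight between two sentences."""
        w1, w2 = set(s1.lower().split()), set(s2.lower().split())
        if len(w1) < 2 or len(w2) < 2:
            return 0.0
        return len(w1 & w2) / (math.log(len(w1)) + math.log(len(w2)))

    def textrank_summary(sentences, top_n=3, d=0.85, iterations=50):
        """Rank sentences by iterating a PageRank-style update on the similarity graph."""
        n = len(sentences)
        weights = [[similarity(a, b) for b in sentences] for a in sentences]
        scores = [1.0] * n
        for _ in range(iterations):
            new_scores = []
            for i in range(n):
                rank = 0.0
                for j in range(n):
                    if i == j or weights[j][i] == 0.0:
                        continue
                    out_sum = sum(weights[j]) - weights[j][j]
                    if out_sum > 0:
                        rank += weights[j][i] / out_sum * scores[j]
                new_scores.append((1 - d) + d * rank)
            scores = new_scores
        ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)[:top_n]
        return [sentences[i] for i in sorted(ranked)]  # keep original sentence order in the summary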

Scope Minimization of Join Queries using a Range Window on Streaming XML Data (스트리밍 XML 데이타에서 영역 윈도우를 사용한 조인 질의의 범위 최소화 기법)

  • Park, Seog;Kim, Mi-Sun
    • Journal of KIISE:Databases
    • /
    • v.33 no.2
    • /
    • pp.224-238
    • /
    • 2006
  • As XML became the standard for data exchange on the Internet, the need for effective query processing over XML data in streaming environments has grown. Applying conventional database techniques, which process data in units of tuples, to streaming XML data causes out-of-memory problems because memory is limited, and the cost of searching query paths and accessing specific data can rise sharply because of the particular structure of XML. In short, it is unreasonable to apply an existing database system to a streaming environment, where queries are evaluated over partial data rather than the whole document. What is needed is a storage scheme suited to streaming XML data that lets the partial stream satisfying a join predicate be found quickly with a small amount of memory. In this paper, to study such a low-memory storage scheme, only the PCDATA and CDATA portions that can appear in join predicates are extracted and stored. In addition, to compare join predicates quickly, a range window over the streaming XML data is defined, based on the cardinality indicators * and + in the DTD structure information, so that only the windows that satisfy the query are selectively joined.
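
As a rough illustration of the idea (buffer only the predicate-relevant text of elements inside a bounded window and join windows as they arrive), the sketch below parses a stream with SAX and keeps only the PCDATA of the join attribute. The element names, count-based window size, and equality join are assumptions made for illustration, not the paper's design.

    # Illustrative sketch: windowed join over streaming XML, keeping only join-relevant PCDATA.
    # Element names ("order", "customer", "id") and the window size are hypothetical.
    import xml.sax
    from collections import deque

    class WindowJoinHandler(xml.sax.ContentHandler):
        def __init__(self, window_size=100):
            self.window_size = window_size
            self.left = deque()          # buffered join keys from <order><id> elements
            self.right = deque()         # buffered join keys from <customer><id> elements
            self.path = []
            self.text = ""
            self.matches = []

        def startElement(self, name, attrs):
            self.path.append(name)
            self.text = ""

        def characters(self, content):
            self.text += content         # collect only PCDATA; the structure itself is discarded

        def endElement(self, name):
            if name == "id" and len(self.path) >= 2:
                parent = self.path[-2]
                key = self.text.strip()
                target = self.left if parent == "order" else self.right
                other = self.right if parent == "order" else self.left
                if key in other:                    # join predicate: equality on id
                    self.matches.append(key)
                target.append(key)
                if len(target) > self.window_size:  # expire keys that fell out of the range window
                    target.popleft()
            self.path.pop()

    # Usage: xml.sax.parseString(xml_bytes, WindowJoinHandler())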

Database Interface System with Dialog (대화를 통한 데이타베이스 인터페이스 시스템)

  • Woo, Yo-Seop;Kang, Seok-Hoon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.3
    • /
    • pp.417-428
    • /
    • 1996
  • In this paper, a database interface system based on natural language dialogue is designed and implemented. The system is made up of language analysis, context processing, dialogue processing, and DB processing units. A method for classifying and processing undefined words in language analysis is proposed; it reduces the size of the dictionary, which otherwise complicates a DB interface. Existing DB interfaces handle each input utterance independently, whereas the system in this paper provides an environment in which the user can carry on a continuous conversation with the system and retrieve DB information. To this end, speech acts that capture the user's intentions as well as the propositional content are defined, and a hierarchical model of user actions for library DB retrieval is constructed. The system uses this defined knowledge to recognize the user's plan and thereby to understand and manage the ongoing dialogue effectively. The system is implemented in the library database domain to validate the proposed methods.
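
A minimal sketch of how an utterance tagged with a speech act and propositional content might be mapped onto a library-catalog query while keeping dialogue context across turns. The speech-act labels, slot names, and table schema here are hypothetical and only illustrate the interface idea, not the paper's implementation.

    # Hypothetical mapping from a speech-act frame to a library-catalog SQL query (illustrative only).
    from dataclasses import dataclass, field

    @dataclass
    class Utterance:
        speech_act: str                              # e.g. "request-search", "refine", "confirm"
        slots: dict = field(default_factory=dict)    # propositional content, e.g. {"author": "Kim"}

    def to_sql(utterance, context):
        """Merge the new slots into the dialogue context and build a query on the 'books' table."""
        context.update(utterance.slots)              # continuous dialogue: later turns refine earlier ones
        if utterance.speech_act not in ("request-search", "refine"):
            return None
        conditions = " AND ".join(f"{k} = ?" for k in sorted(context))
        return (f"SELECT title, author, year FROM books WHERE {conditions}",
                [context[k] for k in sorted(context)])

    # Usage: turn 1 asks for an author, turn 2 narrows by year within the same context.
    ctx = {}
    print(to_sql(Utterance("request-search", {"author": "Kim"}), ctx))
    print(to_sql(Utterance("refine", {"year": "1995"}), ctx))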

Research Trends of Traditional Chinese Medicine Containing Haematitum in the Neuropsychiatric Clinical Scene (대자석의 중의 신경정신과 임상연구 현황)

  • Jung, Jin-Hyeong;Choi, Yun-Hee;Kim, Tae-Heon;Kim, Bo-Kyung
    • Journal of Oriental Neuropsychiatry
    • /
    • v.25 no.4
    • /
    • pp.401-410
    • /
    • 2014
  • Objectives: This study was intended to review research trends in treating neuropsychiatric diseases and symptoms with Traditional Chinese Medicine containing Haematitum. Methods: Articles were obtained through the CNKI (China National Knowledge Infrastructure) by searching with 'Haematitum' as the main keyword together with supporting terms related to neuropsychiatric diseases and symptoms. Sixty-one articles related to clinical fields were found and classified according to study design. Results: The 61 articles were categorized into the following types of study design: 3 randomized controlled trials, 1 quasi-randomized trial, 3 simple-designed clinical trials, and 54 case studies. Decoctions containing Haematitum were used to treat diseases and symptoms such as vertigo, headache, stroke, epilepsy, neurosis, globus hystericus, fish bile poisoning, insomnia, mania, post-traumatic brain syndrome, and kinesia. All articles reported a good rate of effectiveness. Nine studies explicitly reported no poor responsiveness to Haematitum, while the other 52 studies did not mention this. Decoctions self-prepared by the authors were used in 28 studies. Modified Seonbokdeja-tang, modified Banhabeakchulcheonma-tang, and modified Ondam-tang were used, in that order of frequency. The daily dosage of Haematitum was 0.2~6 g in powder form and 9~60 g in decoction. Conclusions: Decoctions containing Haematitum are used only to a limited extent in the neuropsychiatric clinical scene. While there were no reports of poor responsiveness to Haematitum, more research is needed to confirm its clinical safety.

Design and Application of XTML Script Language based on XML (XML을 이용한 스크립트 언어 XTML 의 설계 및 응용)

  • Jeong, Byeong-Hui;Park, Jin-U;Lee, Su-Yeon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.5 no.6
    • /
    • pp.816-833
    • /
    • 1999
  • Output documents of existing word processors, which are described mainly by style or presentation attributes, can be represented as XML (Extensible Markup Language) documents, the next-generation Internet document format, and structured to reflect logical elements such as title, abstract, chapter, and paragraph; this makes them useful not only for document interchange but also on the Internet. The conversion requires a complicated auto-tagging process in which the logical elements of an output document are inferred from style information, text sequences, and so on, which differs from various kinds of simple conversion. In this paper, we define XTML (XML Transformation Markup Language) in DTD (Document Type Definition) form so that transformation scripts, which map the flat, style-oriented structure of various documents onto the hierarchical logical structure of XML under different DTD environments, can be written as instances of this DTD; we wrote such transformation scripts and applied them to auto-tagging. XTML and its DTD are represented in XML syntax. In particular, to execute the transformation algorithms efficiently, that is, to handle existing XML documents effectively, XTML builds documents into a tree structure called the GROVE and provides functions and various command interfaces to store and manipulate it.
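
The auto-tagging step (infer logical elements from style information and nest them into a hierarchy) can be illustrated roughly as below. The style names, target tags, and flat-run input format are hypothetical; this is not XTML or the paper's script language, only a sketch of the underlying idea.

    # Illustrative auto-tagging sketch: infer logical XML elements from style names and nest them.
    # The style names ("Title", "Abstract", "Heading1", "Body") and target tags are hypothetical.
    import xml.etree.ElementTree as ET

    STYLE_TO_TAG = {"Title": "title", "Abstract": "abstract", "Heading1": "chapter", "Body": "para"}

    def auto_tag(flat_runs):
        """flat_runs: list of (style, text) pairs from a style-oriented word-processor document."""
        root = ET.Element("document")
        current_chapter = None
        for style, text in flat_runs:
            tag = STYLE_TO_TAG.get(style, "para")
            if tag in ("title", "abstract"):
                ET.SubElement(root, tag).text = text
            elif tag == "chapter":
                current_chapter = ET.SubElement(root, "chapter")
                ET.SubElement(current_chapter, "heading").text = text
            else:
                parent = current_chapter if current_chapter is not None else root
                ET.SubElement(parent, "para").text = text
        return root

    runs = [("Title", "Design of XTML"), ("Heading1", "Introduction"), ("Body", "XML documents ...")]
    print(ET.tostring(auto_tag(runs), encoding="unicode"))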

Dragon-MAC: Securing Wireless Sensor Network with Authenticated Encryption (Dragon-MAC: 인증 암호를 이용한 효율적인 무선센서네크워크 보안)

  • Lim, Shu-Yun;Pu, Chuan-Chin;Lim, Hyo-Taek;Lee, Hoon-Jae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.8
    • /
    • pp.1519-1527
    • /
    • 2007
  • To combat the security threats that sensor networks are exposed to, a cryptographic protocol is implemented at the sensor nodes for point-to-point encryption between nodes. Given that the nodes have limited resources, symmetric cryptography, which has proven efficient for low-power devices, is used. Data protection is integrated into the sensor's packets by means of symmetric encryption with the Dragon stream cipher, combined with the newly designed Dragon-MAC message authentication code. The proposed algorithm is designed to reuse some of the data already computed by the underlying Dragon stream cipher, in order to minimize the computational cost of the operations required by the MAC algorithm. Because Dragon is a word-based stream cipher with fast keystream generation, it is well suited to constrained environments. Our protocol addresses both entity authentication and message authentication by implementing an authenticated encryption scheme on wireless sensor nodes.
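
The general packet-protection pattern (encrypt the payload with a stream-cipher keystream, then authenticate the ciphertext with a MAC) can be sketched as below. This is a generic encrypt-then-MAC illustration using a toy SHA-256 counter keystream and HMAC; it is not the Dragon cipher or the Dragon-MAC construction, whose point is precisely that the MAC reuses the cipher's internal state to cut the MAC's cost.

    # Generic encrypt-then-MAC sketch (NOT Dragon/Dragon-MAC): toy keystream + HMAC for illustration.
    import hashlib, hmac, os

    def keystream(key, nonce, length):
        """Toy keystream from SHA-256 in counter mode; stands in for a real stream cipher like Dragon."""
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def seal(enc_key, mac_key, nonce, plaintext):
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
        tag = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()[:8]  # truncated tag for small packets
        return nonce + ciphertext + tag

    def open_(enc_key, mac_key, packet):
        nonce, ciphertext, tag = packet[:12], packet[12:-8], packet[-8:]
        expected = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(tag, expected):
            raise ValueError("authentication failed")
        return bytes(c ^ k for c, k in zip(ciphertext, keystream(enc_key, nonce, len(ciphertext))))

    nonce = os.urandom(12)
    packet = seal(b"k" * 16, b"m" * 16, nonce, b"sensor reading: 21.5C")
    print(open_(b"k" * 16, b"m" * 16, packet))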

A Study on the Consumer's Perception of HiSeoul Fashion Show Using Big Data Analysis (빅데이터 분석을 활용한 하이서울패션쇼에 대한 소비자 인식 조사)

  • Han, Ki Hyang
    • Journal of Fashion Business
    • /
    • v.23 no.5
    • /
    • pp.81-95
    • /
    • 2019
  • The purpose of this study is to investigate consumers' perception of the HiSeoul fashion show, which new designers use as a means of promotion, and to propose strategies for revitalizing new designer brands. The aim was to secure basic data on fashion consumers that can help guide marketing strategies and promote rising designers. In this research, consumers' perception of the HiSeoul fashion show was examined using text mining, data refinement, and word clouding performed with TEXTOM 3.0, and semantic network analysis, CONCOR analysis, and visualization of the results were carried out with Ucinet 6.0 and NetDraw. "HiSeoul fashion show" was used as the keyword for text mining, and data were collected from March 1, 2018 to April 30, 2019. Frequency analysis, TF-IDF, and N-gram analysis showed that consumers are aware of the places where the shows are held, such as DDP and Igansumun, and that they recognize rising designer brands, designers' names, the names of guests attending the shows, and the photo times. This study is meaningful in that it used big data not only to confirm consumers' interest in the new designer brands participating in the HiSeoul fashion show but also to confirm that such data can support marketing strategies to boost brand sales. As marketing strategies, the study suggests using the HiSeoul showroom to induce consumer purchases and inviting guests who match the brand image to promote the brand on SNS on the day of the show.
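
The frequency / TF-IDF / N-gram step can be illustrated with a small sketch. The paper used TEXTOM 3.0 and Ucinet, not scikit-learn; the code below only shows the general keyword-weighting idea, and the sample posts are invented.

    # Illustrative keyword extraction with TF-IDF and bigrams (the actual study used TEXTOM 3.0/Ucinet;
    # scikit-learn and the sample posts here are assumptions for the sketch).
    from sklearn.feature_extraction.text import TfidfVectorizer

    posts = [
        "HiSeoul fashion show at DDP with rising designer brands",
        "photo time with guests at the HiSeoul fashion show",
        "new designer brand showcased at Igansumun fashion show",
    ]

    vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    matrix = vectorizer.fit_transform(posts)
    scores = matrix.sum(axis=0).A1                       # aggregate TF-IDF weight per term
    terms = vectorizer.get_feature_names_out()
    top = sorted(zip(terms, scores), key=lambda t: t[1], reverse=True)[:10]
    for term, score in top:
        print(f"{term}: {score:.3f}")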

English Conversation System Using Artificial Intelligence Based on Virtual Reality (가상현실 기반의 인공지능 영어회화 시스템)

  • Cheon, EunYoung
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.11
    • /
    • pp.55-61
    • /
    • 2019
  • Various educational media have been provided to support foreign language education, but they have the disadvantages that teaching materials and media programs are expensive and real-time responsiveness is poor. In this paper, we propose an artificial intelligence English conversation system based on VR and speech recognition. We used Google Cardboard VR and the Google Speech API to build the system and developed artificial intelligence algorithms for providing the virtual reality environment and the conversation. In the proposed speech recognition server system, the sentences spoken by the user are divided into word units and compared with the words stored in the database, and the response with the highest probability is provided. Users can talk with and respond to people in virtual reality. The conversation function is independent of particular contexts and topics, and conversations with the AI assistant run in real time, so the user can check the system's responses immediately. The system combining virtual reality and speech recognition proposed in this paper is expected to contribute to expanding virtual education content services related to the Fourth Industrial Revolution.
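
A rough sketch of the word-unit matching idea: split the recognized utterance into words and return the stored reply whose prompt has the highest overlap score. The stored prompt/reply pairs and the Jaccard score are illustrative assumptions, not the system's actual database or probability model.

    # Illustrative word-unit matching of a recognized utterance against stored sentences.
    # The stored responses and the overlap score are hypothetical, not the paper's database.
    def best_match(recognized, stored_pairs):
        """Return the stored reply whose prompt shares the largest fraction of words with the input."""
        words = set(recognized.lower().split())
        best_reply, best_score = None, 0.0
        for prompt, reply in stored_pairs:
            prompt_words = set(prompt.lower().split())
            score = len(words & prompt_words) / max(len(words | prompt_words), 1)  # Jaccard overlap
            if score > best_score:
                best_reply, best_score = reply, score
        return best_reply, best_score

    pairs = [("how are you today", "I am fine, thank you. How about you?"),
             ("where is the library", "The library is on the second floor.")]
    print(best_match("how are you", pairs))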

How to Express Emotion: Role of Prosody and Voice Quality Parameters (감정 표현 방법: 운율과 음질의 역할)

  • Lee, Sang-Min;Lee, Ho-Joon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.11
    • /
    • pp.159-166
    • /
    • 2014
  • In this paper, we examine the role of emotional acoustic cues, including both prosody and voice quality parameters, in modifying the sense of a word. For the extraction of prosody and voice quality parameters, we used 60 pieces of speech data spoken by six speakers in five different emotional states. We analyzed eight emotional acoustic cues and used a discriminant analysis technique to find the dominant sequence of acoustic cues. As a result, we found that anger is closely related to intensity level and the bandwidth range of the 2nd formant; joy is related to the positions of the 2nd and 3rd formants and the intensity level; sadness is strongly related only to prosody cues such as intensity level and pitch level; and fear is related to pitch level and the 2nd formant value with its bandwidth range. These findings can be used as guidelines for fine-tuning an emotional spoken language generation system, because the distinct sequences of acoustic cues reveal the subtle characteristics of each emotional state.
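
The discriminant-analysis step (classify emotional states from a vector of acoustic cues) can be sketched as below. The feature values are made up and scikit-learn is assumed; the paper's actual cue set, data, and analysis tool are not reproduced here.

    # Illustrative discriminant analysis over acoustic cues (values are invented for the sketch).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Columns: intensity level, pitch level, 2nd formant value, 2nd formant bandwidth (assumed cues).
    X = np.array([
        [72, 180, 1500, 220], [74, 176, 1520, 230],   # anger
        [65, 210, 1600, 180], [63, 215, 1580, 175],   # joy
        [50, 140, 1400, 150], [52, 138, 1390, 155],   # sadness
        [60, 230, 1650, 210], [61, 226, 1660, 205],   # fear
        [58, 170, 1450, 160], [57, 172, 1460, 165],   # neutral
    ])
    y = ["anger", "anger", "joy", "joy", "sadness", "sadness",
         "fear", "fear", "neutral", "neutral"]

    lda = LinearDiscriminantAnalysis()
    lda.fit(X, y)
    print(lda.predict([[70, 185, 1520, 215]]))   # classify a new cue vector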

Classification and analysis of error types for deep learning-based Korean spelling correction (딥러닝 기반 한국어 맞춤법 교정을 위한 오류 유형 분류 및 분석)

  • Koo, Seonmin;Park, Chanjun;So, Aram;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.12
    • /
    • pp.65-74
    • /
    • 2021
  • Recently, studies on Korean spelling correction have been actively conducted based on machine translation and automatic noise generation. These methods generate noise and use it as the training and test data. This has the limitation that performance is difficult to measure accurately, because noise other than the noise used for training is unlikely to appear in the test set. In addition, there is no practical standard for error types, so the error types used in each study differ, which makes qualitative analysis difficult. This paper proposes a new error type classification for deep learning-based Korean spelling correction research and performs an error analysis of existing commercial Korean spelling correctors (Systems A, B, and C). The analysis found that, apart from spacing errors, the three correction systems did not correct the error types presented in this paper well, and that they hardly recognized errors in word order or tense.
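
The per-error-type analysis can be illustrated by tallying exact-match corrections for each type. The error-type labels, sample sentences, and stand-in corrector below are hypothetical examples, not the paper's taxonomy or test set.

    # Illustrative per-error-type evaluation of spelling correctors (labels and data are invented).
    from collections import defaultdict

    # Each test item: (error_type, erroneous sentence, gold correction).
    test_set = [
        ("spacing", "아버지가방에들어가신다", "아버지가 방에 들어가신다"),
        ("word_order", "나는 밥을 어제 먹었다", "나는 어제 밥을 먹었다"),
        ("tense", "나는 내일 학교에 갔다", "나는 내일 학교에 갈 것이다"),
    ]

    def evaluate(correct_fn, test_set):
        """Count exact-match corrections per error type for a given corrector function."""
        hits, totals = defaultdict(int), defaultdict(int)
        for error_type, source, gold in test_set:
            totals[error_type] += 1
            if correct_fn(source) == gold:
                hits[error_type] += 1
        return {t: hits[t] / totals[t] for t in totals}

    # A trivial stand-in corrector that returns its input unchanged, to show the evaluation flow.
    print(evaluate(lambda s: s, test_set))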