• Title/Summary/Keyword: sentence compression (문장 압축)


Sentence Compression based on Sentence Scoring Reflecting Linguistic Information (언어 정보를 반영한 문장 점수 측정 기반의 문장 압축)

  • Lee, Jun-Beom;Kim, So-Eon;Park, Seong-Bae
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.389-392 / 2021
  • Sentence compression is a natural language processing task that generates a short compressed sentence while preserving the important meaning of the original sentence. It has been studied actively because it can help users quickly obtain the information they need from text, but existing approaches either require manually defined compression rules or large datasets for model training. There is also work that compresses sentences through perplexity-based sentence scoring with a pre-trained language model, requiring neither compression rules nor a training dataset; however, because that scoring does not reflect the semantic importance of the words in the sentence, important words can be deleted. This paper proposes a method that quantifies the importance of linguistic information, namely part-of-speech, dependency, and named-entity information, and reflects it in perplexity-based sentence scoring. We also confirm that the proposed scoring improves the compression performance of a sentence-scoring-based compression model, showing that reflecting the linguistic information of the words in a sentence in sentence scoring can help generate semantically appropriate compressed sentences.
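
The method described above weights a perplexity-based sentence score with linguistic importance. A minimal sketch of that idea follows, assuming a causal language model (GPT-2 as a stand-in; this line of work builds on BERT) and illustrative importance weights for coarse word classes; the `IMPORTANCE` table, `alpha`, and all function names are hypothetical, not the authors' implementation.

```python
# Sketch: perplexity-based sentence scoring with a linguistic-importance penalty.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# Illustrative importance weights for coarse word classes; the paper quantifies
# POS, dependency, and named-entity importance, which is simplified here.
IMPORTANCE = {"PROPN": 2.0, "NOUN": 1.5, "VERB": 1.2, "ADJ": 0.8, "DET": 0.3}

def perplexity(sentence: str) -> float:
    """Per-token perplexity of the sentence under the language model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return float(torch.exp(loss))

def deletion_penalty(deleted_tags) -> float:
    """Penalty for dropping words, weighted by their (assumed) importance."""
    return sum(IMPORTANCE.get(tag, 1.0) for tag in deleted_tags)

def score(candidate: str, deleted_tags, alpha: float = 0.1) -> float:
    """Lower is better: fluent (low perplexity) and few important words dropped."""
    return perplexity(candidate) + alpha * deletion_penalty(deleted_tags)

# Compare two compressions of the same source sentence.
print(score("The company announced a new product.", ["ADP", "DET", "NOUN"]))
print(score("Announced a new product at the conference.", ["DET", "NOUN"]))
```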

Deletion-Based Sentence Compression Using Sentence Scoring Reflecting Linguistic Information (언어 정보가 반영된 문장 점수를 활용하는 삭제 기반 문장 압축)

  • Lee, Jun-Beom;Kim, So-Eon;Park, Seong-Bae
    • KIPS Transactions on Software and Data Engineering / v.11 no.3 / pp.125-132 / 2022
  • Sentence compression is a natural language processing task that generates concise sentences that preserve the important meaning of the original sentence. For grammatically appropriate sentence compression, early studies utilized human-defined linguistic rules. In addition, since sequence-to-sequence models perform well on various natural language processing tasks such as machine translation, some studies have applied them to sentence compression. However, linguistic rule-based studies require all rules to be defined by humans, and sequence-to-sequence model based studies require a large amount of parallel data for model training. To address these challenges, Deleter, a sentence compression model that leverages the pre-trained language model BERT, was proposed. Because Deleter compresses sentences using a perplexity-based score computed over BERT, it requires neither linguistic rules nor a parallel dataset. However, because Deleter considers only perplexity, it does not reflect the linguistic information of the words in the sentence when compressing. Furthermore, since the corpora used to pre-train BERT are far from compressed sentences, this can lead to incorrect sentence compression. To address these problems, this paper proposes a method to quantify the importance of linguistic information and reflect it in perplexity-based sentence scoring. Furthermore, by fine-tuning BERT with a corpus of news articles, which often contain proper nouns and omit unnecessary modifiers, we allow BERT to measure a perplexity appropriate for sentence compression. Evaluations on English and Korean datasets confirm that the sentence compression performance of sentence-scoring-based models can be improved by the proposed method.
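
For the deletion-based search itself, a simplified greedy variant can illustrate how a sentence scorer drives compression: repeatedly delete the token whose removal hurts the score least until a length budget is met. This is a sketch under that assumption, not the Deleter implementation; `score` can be any scorer, for example the perplexity-plus-importance function sketched under the previous entry.

```python
# Sketch: greedy deletion-based compression driven by an arbitrary sentence scorer.
from typing import Callable, List

def greedy_compress(tokens: List[str],
                    score: Callable[[str], float],
                    target_len: int) -> List[str]:
    """Delete one token at a time, always the deletion that hurts the score least."""
    tokens = list(tokens)
    while len(tokens) > target_len:
        best_i, best_score = None, float("inf")
        for i in range(len(tokens)):
            candidate = tokens[:i] + tokens[i + 1:]
            s = score(" ".join(candidate))
            if s < best_score:
                best_i, best_score = i, s
        tokens.pop(best_i)  # commit the least harmful deletion
    return tokens

# Toy usage with a placeholder scorer (a real one would be perplexity-based).
toy_score = lambda sent: len(sent)  # placeholder: simply prefers shorter strings
print(greedy_compress("The cat quietly sat on the old mat".split(), toy_score, 5))
```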

Jam-packing Korean sentence classification method robust for spacing errors (띄어쓰기 오류에 강건한 문장 압축 기반 한국어 문장 분류)

  • Park, Keunyoung;Kim, Kyungduk;Kang, Inho
    • Annual Conference on Human and Language Technology / 2018.10a / pp.600-604 / 2018
  • Korean sentence classification is the task of assigning a given sentence to one of a finite set of predefined categories according to its content. When the sentence to be classified contains spacing errors, classification performance can degrade. To address this degradation when classifying Korean text or speech-derived sentences, we use a sentence-compression-based training method. Experiments with the trained model on a Korean movie review dataset confirm that the proposed sentence-compression-based training is more robust to spacing errors than the baseline model.
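
The abstract does not spell out the compression step, but one plausible reading of "jam-packing" is to strip all whitespace and classify at the character level, so differently spaced versions of a sentence become identical model inputs. A minimal sketch under that assumption, not the authors' code:

```python
# Sketch: whitespace removal so spacing errors cannot change the model input.
import re

def jam_pack(sentence: str) -> str:
    """Strip every whitespace character so spacing errors become irrelevant."""
    return re.sub(r"\s+", "", sentence)

# Two spacings of the same Korean review map to the identical model input.
assert jam_pack("영화가 정말 재미있었다") == jam_pack("영화가정말 재미 있었다")
```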


Perception of Time-altered Sentences and Selective Word Stress by Normal-hearing Listeners (시간 변화와 선택적 단어 강조법이 정상 청력 성인의 문장인지도에 미치는 영향)

  • Han, Woojae;Yu, Jyaehyoung;Cho, Soojin
    • The Journal of the Acoustical Society of Korea / v.32 no.5 / pp.430-437 / 2013
  • The present study examined whether sentence perception scores changed under various conditions of time alteration (compression and/or expansion) and selective word stress in normal-hearing listeners. Twenty young normal-hearing adults (ten males) participated. As stimuli, the Korean standard sentence list for adults (KS-SL-A), modified into semantically anomalous sentences, was newly recorded by a female speaker. Seven time-altered conditions (±60 %, ±40 %, ±20 %, 0 %) were controlled. To see the effect of selective word stress (i.e., the emphasis of specific syllables in the sentence), all subjects were tested twice, 2 weeks apart. The results showed that 1) sentence perception scores differed significantly among the time-altered conditions, but only in the 60 % compression condition; 2) there was no significant overall effect of stress on sentence perception scores, although there was a positive effect of selective word stress for sentences of 6-7 syllables in the 40 % compression condition; 3) there was no significant gender difference. This pattern of results suggests that combining time compression with selective word stress is more effective for understanding speech than using time expansion alone. However, further studies are needed for standardization.

Improving the effectiveness of document extraction summary based on the amount of sentence information (문장 정보량 기반 문서 추출 요약의 효과성 제고)

  • Kim, Eun Hee;Lim, Myung Jin;Shin, Ju Hyun
    • Smart Media Journal / v.11 no.3 / pp.31-38 / 2022
  • In research on extractive document summarization, various methods have been proposed for selecting important sentences based on the relationships between sentences. In Korean document summarization using the summation similarity of sentences, the summation similarity is regarded as the amount of information in a sentence, and summary sentences are extracted by selecting important sentences on this basis. However, this does not take into account how much each individual sentence contributes to the entire document. Therefore, in this study, we propose an extractive document summarization method that produces a summary by selecting important sentences based on both the quantitative and the semantic amount of information in each sentence. As a result, the extracted-sentence agreement was 58.56% and the ROUGE-L score was 34, which is superior to the method using only the summation similarity. Compared with deep learning-based methods, the extraction method is lighter while achieving similar performance. This confirms that compressing information based on semantic similarity between sentences is an important approach in extractive document summarization. In addition, an abstractive summarization step can be performed effectively on top of the quickly extracted summary.
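
A rough sketch of the summation-similarity idea described above: score each sentence by its summed similarity to all other sentences and extract the top-k. TF-IDF cosine similarity is used here as a stand-in representation, and the paper's combination of quantitative and semantic information is simplified away; all names and the value of `k` are illustrative.

```python
# Sketch: extractive summarization by summed sentence similarity ("information").
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extract_summary(sentences, k=2):
    vectors = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(vectors)            # pairwise sentence similarities
    info = sim.sum(axis=1) - 1.0                # summed similarity, excluding self
    top = sorted(range(len(sentences)), key=lambda i: info[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]  # keep original sentence order

docs = ["The storm hit the coast on Friday.",
        "Heavy rain from the storm flooded several streets.",
        "Officials said the storm caused no injuries.",
        "A local bakery opened a new branch."]
print(extract_summary(docs, k=2))
```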

Query-Based Document Summarization using Important Sentence Selection Heuristics and MMR (중요 문장추출 휴리스틱과 MMR을 이용한 질의기반 문서요약)

  • Kim, Dong-Hyun;Lee, Seung-Woo;Lee, Gary Geun-Bae
    • Annual Conference on Human and Language Technology / 2002.10e / pp.285-291 / 2002
  • This paper proposes query-based three-stage document summarization for producing hit lists [6] of search results and summaries of retrieved documents in a natural-language search engine. First, the query given to the IR system is expanded using a synonym database. Second, an importance score is computed for each sentence in the retrieved documents from its similarity to the query; for more accurate summaries, four heuristics are applied to rank the importance of each sentence. Third, MMR (Maximal Marginal Relevance) is applied to reduce redundancy in the summary, and the summary compression ratio can be adjusted freely. Experiments on a document summarization test collection composed of KORDIC news articles yielded good summarization results.
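
The third stage uses MMR to trade query relevance against redundancy with sentences already selected. A minimal sketch of that selection rule, with TF-IDF cosine similarity standing in for the paper's similarity measures and an illustrative `lam` (lambda) and summary size:

```python
# Sketch: MMR-based sentence selection for query-focused summarization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mmr_summarize(query, sentences, k=2, lam=0.7):
    vec = TfidfVectorizer().fit(sentences + [query])
    S = vec.transform(sentences)
    q = vec.transform([query])
    rel = cosine_similarity(S, q).ravel()        # relevance to the query
    sim = cosine_similarity(S)                   # redundancy between sentences
    selected = []
    while len(selected) < k:
        best, best_score = None, float("-inf")
        for i in range(len(sentences)):
            if i in selected:
                continue
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            mmr = lam * rel[i] - (1 - lam) * redundancy
            if mmr > best_score:
                best, best_score = i, mmr
        selected.append(best)
    return [sentences[i] for i in selected]

print(mmr_summarize("storm damage",
                    ["The storm flooded several streets.",
                     "Streets were flooded by the heavy storm.",
                     "Residents praised the quick cleanup effort."]))
```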


Analysis of the 3rd Graders' Solving Processes of the Word Problems by Nominalization (수학 문장제의 명사화 여부에 따른 초등학교 3학년의 해결 과정 분석)

  • Kang, Yunji;Chang, Hyewon
    • Education of Primary School Mathematics / v.26 no.2 / pp.83-97 / 2023
  • Nominalization is a grammatical metaphor that makes it easier to mathematize the target to be converted into a formula, but it has the disadvantage of making problem comprehension harder because of the complex, compressed sentence structure. To investigate how nominalization affects students' problem-solving processes, we analyzed how 233 third-grade elementary school students solved eight arithmetic word problems presented with or without nominalization. The analysis showed that the presence or absence of nominalization did not have a significant impact on problem understanding or on the ability to convert sentences into formulas. Although the students had no prior experience with nominalization, they restructured the sentences using nominalization or agnation in the problem-understanding stage. When the type of nominalization was changed, the rate of setting up the formula correctly was high. These results suggest that the use of nominalization can serve as a pedagogical strategy for solving word problems and can be expected to facilitate deeper understanding.

Video Compression Standard Prediction using Attention-based Bidirectional LSTM (어텐션 알고리듬 기반 양방향성 LSTM을 이용한 동영상의 압축 표준 예측)

  • Kim, Sangmin;Park, Bumjun;Jeong, Jechang
    • Journal of Broadcast Engineering / v.24 no.5 / pp.870-878 / 2019
  • In this paper, we propose an attention-based BLSTM for predicting the video compression standard of a video. Recently in NLP, much research has been conducted on predicting the next word of a sentence and on classifying and translating sentences by their semantics using RNN structures, and such work has been commercialized in chatbots, AI speakers, translator applications, and so on. LSTM was designed to solve the vanishing gradient problem of RNNs and is widely used in NLP. The proposed algorithm makes video compression standard prediction possible by applying a BLSTM and an attention algorithm, which focuses on the most important word in a sentence, to the bitstream of a video rather than to a natural language sentence.
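
A compact PyTorch sketch of the model family described above: a bidirectional LSTM over a token (or byte) sequence with additive attention pooling, followed by a classifier over compression standards. The dimensions, vocabulary size, and number of classes are illustrative and not taken from the paper.

```python
# Sketch: attention-pooled bidirectional LSTM classifier over a bitstream sequence.
import torch
import torch.nn as nn

class AttnBiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=256, embed_dim=64, hidden=128, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)      # scores each timestep
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                         # x: (batch, seq_len) integer tokens
        h, _ = self.lstm(self.embed(x))           # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)
        context = (weights * h).sum(dim=1)        # attention-weighted pooling
        return self.fc(context)

# Toy usage: classify a batch of two byte sequences of length 512.
model = AttnBiLSTMClassifier()
logits = model(torch.randint(0, 256, (2, 512)))
print(logits.shape)  # torch.Size([2, 4])
```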

A Sentence Reduction Method using Part-of-Speech Information and Templates (품사 정보와 템플릿을 이용한 문장 축소 방법)

  • Lee, Seung-Soo;Yeom, Ki-Won;Park, Ji-Hyung;Cho, Sung-Bae
    • Journal of KIISE: Software and Applications / v.35 no.5 / pp.313-324 / 2008
  • Sentence reduction is an information compression process that removes extraneous words and phrases while retaining the basic meaning of the original sentence. Most research on sentence reduction has required a large number of lexical and syntactic resources and has focused on extracting or removing extraneous constituents such as words, phrases, and clauses through a complicated parsing process. However, this line of research has some problems. First, the lexical resources that can be obtained from learning data are very limited. Second, it is difficult to apply sentence reduction to languages for which no reliable syntactic parsing is available, because of ambiguity and exceptional expressions. To solve these problems, we propose a sentence reduction method that uses templates and POS (part-of-speech) information without any parsing process. In the proposed method, a new sentence is created using both Sentence Reduction Templates, which decide the form of the reduced sentence, and Grammatical POS-based Reduction Rules, which compose a grammatical sentence structure. In addition, we use the Viterbi algorithm for HMMs (Hidden Markov Models) to avoid the exponential computation that arises when applying the Sentence Reduction Templates. Finally, our experiments show that the proposed method achieves acceptable results in comparison with previous sentence reduction methods.
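
The Viterbi step mentioned above can be illustrated with a generic HMM decoder that finds the most probable state sequence without enumerating every combination. The states, observations, and probabilities below are toy values (e.g., KEEP/DROP decisions per POS tag), not the paper's actual Sentence Reduction Templates or rules:

```python
# Sketch: generic Viterbi decoding for an HMM over per-token decisions.
import math

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most probable state sequence for the observations."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][observations[0]])
          for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][prev] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s][observations[t]]))
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.insert(0, back[t][path[0]])  # backtrack through stored predecessors
    return path

# Toy example: decide KEEP/DROP for each POS tag in a sentence.
states = ["KEEP", "DROP"]
obs = ["NOUN", "ADJ", "NOUN", "ADV"]
start = {"KEEP": 0.7, "DROP": 0.3}
trans = {"KEEP": {"KEEP": 0.6, "DROP": 0.4}, "DROP": {"KEEP": 0.5, "DROP": 0.5}}
emit = {"KEEP": {"NOUN": 0.5, "ADJ": 0.2, "ADV": 0.3},
        "DROP": {"NOUN": 0.1, "ADJ": 0.4, "ADV": 0.5}}
print(viterbi(obs, states, start, trans, emit))
```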

An algorithm for optimal reduction of HTTP Message Traffic (웹 문서의 효율적인 전송을 위한 시스템 설계)

  • 정옥란;조동섭
    • Proceedings of the Korean Information Science Society Conference / 2001.04a / pp.181-183 / 2001
  • The rapid growth of e-commerce on the Internet requires frequent transmission of web documents such as HTML documents and JavaScript, and this is, and will continue to be, a major source of Internet transmission traffic. Web pages have the characteristic that similar strings repeat, with only the argument parts changing. Exploiting this characteristic, this study shows that a web document compression algorithm based on a macro technique performs well at compressing the storage space of web pages, with the additional benefit of reduced transmission time.