• Title/Summary/Keyword: English processing (영어처리)

Search Result 470, Processing Time 0.03 seconds

On the Effectiveness of the Special Token Cutoff Method for Korean Sentence Representation in Unsupervised Contrastive Learning (비지도 대조 학습에서 한국어 문장 표현을 위한 특수 토큰 컷오프 방법의 유효성 분석)

  • Myeongsoo Han;Yoo Hyun Jeong;Dong-Kyu Chae
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.491-496
    • /
    • 2023
  • Research is ongoing into various contrastive learning methods for deriving high-quality sentence representations from pre-trained language models. However, most contrastive learning methods consider only the relationship within sentence pairs and are limited in capturing the degree of similarity between sentences, undermining the fundamental goal of contrastive learning. Recently, studies have improved contrastive learning performance by introducing a triplet loss function that captures the relative similarity of sentences. However, most of these studies target English-based pre-trained language models, and verification and analysis of the effectiveness of the triplet loss for Korean-based unsupervised contrastive learning remain insufficient. This paper closely examines whether this methodology is also valid for Korean unsupervised contrastive learning and confirms its validity through various evaluation metrics. We hope the results of this paper contribute to future research on Korean sentence representation.

  • PDF
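
The triplet objective the abstract refers to can be sketched in plain Python. This is a generic hinge-form triplet loss over cosine distance, not the paper's exact formulation, and the toy 2-d vectors stand in for real sentence embeddings.

```python
import math

def cos_dist(a, b):
    # Cosine distance between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    # Hinge-form triplet loss: the positive must be closer to the
    # anchor than the negative by at least `margin`; otherwise the
    # loss is positive and training pushes the embeddings apart.
    return max(0.0, cos_dist(anchor, positive) - cos_dist(anchor, negative) + margin)

# Toy "sentence embeddings": the positive is near the anchor and the
# negative points the opposite way, so the margin is already satisfied.
anchor, positive, negative = [1.0, 0.0], [1.0, 0.1], [-1.0, 0.0]
loss = triplet_margin_loss(anchor, positive, negative, margin=0.5)
```

A zero loss means the relative-similarity constraint already holds; swapping the positive and negative yields a positive loss, which is the signal the contrastive objective trains on.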

Named Entity Detection Using Generative AI for Personal Information-Specific Named Entity Annotation Conversation Dataset (개인정보 특화 개체명 주석 대화 데이터셋 기반 생성AI 활용 개체명 탐지)

  • Yejee Kang;Li Fei;Yeonji Jang;Seoyoon Park;Hansaem Kim
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.499-504
    • /
    • 2023
  • In this study, given the growing risk of leakage and misuse of sensitive personal information, we developed a named-entity scheme specialized for personal information items in order to improve the accuracy of personal information detection and the efficiency of de-identification. We built 4,981 sets of dialogue data annotated with a personal-information tagset and conducted named-entity detection experiments using a generative AI model. For the experiments, we designed optimal prompts and evaluated detection results via few-shot learning. A comparative analysis between our dataset and an English-based personal-information annotation dataset showed higher detection performance on unique identification numbers with our dataset, demonstrating the dataset's necessity and strength.

  • PDF
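
The few-shot setup described above amounts to concatenating annotated examples ahead of the query. A minimal sketch, assuming hypothetical tag names like `<PHONE>` and `<NAME>` (the paper's actual tagset is its own):

```python
def build_fewshot_prompt(examples, query):
    # Assemble a few-shot NER prompt for a generative model:
    # an instruction, then (utterance, tagged utterance) pairs,
    # then the query utterance left for the model to tag.
    lines = ["Tag every personal-information entity in the utterance."]
    for utterance, tagged in examples:
        lines.append(f"Utterance: {utterance}")
        lines.append(f"Tagged: {tagged}")
    lines.append(f"Utterance: {query}")
    lines.append("Tagged:")
    return "\n".join(lines)

examples = [
    ("Call me at 010-1234-5678.", "Call me at <PHONE>010-1234-5678</PHONE>."),
    ("My name is Kim Minsu.", "My name is <NAME>Kim Minsu</NAME>."),
]
prompt = build_fewshot_prompt(examples, "Reach Lee Jiwon at 010-9876-5432.")
```

The model's completion after the final `Tagged:` is then parsed for tag spans; prompt wording and example count are exactly what the paper tunes when "designing optimal prompts".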

The Processing of Causative and Passive Verbs in Korean (한국어의 사.피동문 처리에 관한 연구:실어증 환자의 처리 양상을 바탕으로)

  • 문영선;김동휘;남기춘
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2000.05a
    • /
    • pp.267-272
    • /
    • 2000
  • This study examined how aphasic patients process Korean causative and passive sentences. Korean causatives and passives are formed either by attaching a derivational affix to the verb or syntactically, through constructions such as '-게 하다' and '-어 지다'; we tested aphasic patients on both types. The participants were patients with anomic aphasia, receptive (comprehension) aphasia, expressive aphasia, and global aphasia. A word completion task was used. The anomic patient showed processing errors for passives but no problems at all with causatives. The expressive aphasic made many errors in the <passive, non-derived> condition. From this we conclude that, unlike in English, Korean causatives and passives are not a syntactic matter: words already derived by causative/passive affixes are stored in the lexicon, and sentences are generated according to each word's argument-structure information. The expressive aphasic's dominant errors on non-derived passives arise because the non-derived form, being transitive, takes one more argument than the derived passive form. The receptive aphasic showed little difficulty producing causatives and passives, suggesting that the argument-structure information of individual lexical items was largely spared. Comparing the processing patterns of these different patient types, we confirmed, first, that causatives and passives show distinct syntactic and semantic processing patterns, and second, that they are formed from individual causative/passive verbs stored in the lexicon with derivational affixes attached.

  • PDF

An ERP study on the processing of Syntactic and lexical negation in Korean (부정문 처리와 문장 진리치 판단의 인지신경기제: 한국어 통사적 부정문과 어휘적 부정문에 대한 ERP 연구)

  • Nam, Yunju
    • Korean Journal of Cognitive Science
    • /
    • v.27 no.3
    • /
    • pp.469-499
    • /
    • 2016
  • The present study investigated the cognitive mechanism underlying online processing of Korean syntactic negation (for example, "A bed/a clock belongs to/doesn't belong to the furniture" 침대는/시계는 가구에 속한다/속하지 않는다) and lexical negation (for example, "A tiger/a butterfly has/doesn't have a tail" 호랑이는/나비는 꼬리가 있다/없다) using an ERP (event-related potentials) technique and a truth-value verification task. Twenty-three Korean native speakers took part in the experiment, and the brain responses of 15 of them were recorded for the ERP analysis. The behavioral results (i.e., verification task scores) show a universal pattern of accuracy and response time in the verification process: True-Affirmative (high accuracy and short latency) > False-Affirmative > False-Negated > True-Negated. However, the components reflecting immediate processing of a negation operator (an early N400 and a P600) were observed only for lexical negation. Moreover, the ERP patterns reflecting the effect of truth value were not identical: an N400 effect was observed in the true condition relative to the false condition for lexically negated sentences, whereas a positivity effect (resembling an early P600) was observed in the false condition relative to the true condition for syntactically negated sentences. In conclusion, the form and location of the negation operator vary across languages and negation types, influencing the strategy and pattern of online negation processing; however, the final representation resulting from these different computational processes appears to be language universal and is not directly affected by negation type.

Antecedent Identification of Zero Subjects using Anaphoricity Information and Centering Theory (조응성 정보와 중심화 이론에 기반한 영형 주어의 선행사 식별)

  • Kim, Kye-Sung;Park, Seong-Bae;Lee, Sang-Jo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.12
    • /
    • pp.873-880
    • /
    • 2013
  • This paper approaches the problem of resolving Korean zero pronouns using Centering Theory, which models local coherence. Centering Theory has been widely used to resolve English pronouns, but the centering framework is much more difficult to apply to zero pronoun resolution in languages such as Japanese and Korean. In particular, since the Centering Theory of Grosz et al. does not consider non-anaphoric zero pronouns without explicit antecedents, the presence of such cases degrades the performance of resolution systems based on Centering Theory. To overcome this, this paper presents a method that determines the intra-sentential anaphoricity of zero pronouns in subject position using relationships between clauses, and then identifies the antecedents of zero subjects. In our experiments, the proposed method outperforms a baseline that relies solely on Centering Theory.
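
The two-stage idea, first deciding from the clause relationship whether a zero subject is intra-sententially anaphoric and then picking the antecedent, can be illustrated with a deliberately tiny rule. The connective list here is invented for illustration and is far cruder than the paper's method:

```python
# Connectives that, in this toy rule set, signal that the following
# clause's zero subject shares the previous clause's subject.
SUBJECT_SHARING_CONNECTIVES = {"-고", "-면서", "-려고"}

def resolve_zero_subject(prev_clause_subject, connective):
    # Stage 1: anaphoricity decision from the inter-clausal relation.
    # Stage 2: if anaphoric, take the previous clause's subject as the
    # antecedent (a centering-style preference for continuing the
    # backward-looking center); otherwise return None, i.e. the
    # antecedent lies outside the sentence.
    if connective in SUBJECT_SHARING_CONNECTIVES:
        return prev_clause_subject
    return None
```

For example, `resolve_zero_subject("철수", "-고")` returns "철수", while an unlisted connective yields `None` and the search would move outside the sentence.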

Korean Web Content Extraction using Tag Rank Position and Gradient Boosting (태그 서열 위치와 경사 부스팅을 활용한 한국어 웹 본문 추출)

  • Mo, Jonghoon;Yu, Jae-Myung
    • Journal of KIISE
    • /
    • v.44 no.6
    • /
    • pp.581-586
    • /
    • 2017
  • For automatic web scraping, unnecessary components such as menus and advertisements need to be removed from web pages, and the main content should be extracted automatically. A content block tends to be located in the middle of a web page. Korean web documents in particular rarely include metadata and often have complex designs, so a suitable content extraction method is needed. Existing content extraction algorithms use the textual and structural features of content blocks, because processing visual features requires heavy computation for rendering and image processing. In this paper, we propose a new content extraction method that uses tag positions in HTML as a quasi-visual feature. In addition, we develop the tag rank position, a type of tag position unaffected by text length, and show that gradient boosting with the tag rank position is a highly accurate content extraction method. The results show that this method can be used to collect high-quality text data automatically from various web pages.
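
The tag rank position can be computed without rendering: number the tags in document order and normalize by the total count, so a text-heavy `<p>` contributes no more "distance" than an empty `<div>`. A minimal sketch (the actual method operates on an HTML DOM and feeds the feature to a gradient boosting classifier):

```python
def tag_rank_positions(tags):
    # Ordinal index of each tag divided by the total number of tags.
    # Unlike character offsets, the value is unaffected by how much
    # text a tag encloses, which makes it a stable quasi-visual feature.
    n = len(tags)
    return [(tag, (i + 1) / n) for i, tag in enumerate(tags)]

# Tags of a toy page in document order: the content <p> tags land
# near the middle of the 0..1 range, where main content tends to sit.
tags = ["header", "nav", "p", "p", "p", "aside", "footer"]
positions = tag_rank_positions(tags)
```

A classifier can then learn, for instance, that nodes with rank positions near 0.5 are likely content while those near 0 or 1 are boilerplate.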

Abusive Detection Using Bidirectional Long Short-Term Memory Networks (양방향 장단기 메모리 신경망을 이용한 욕설 검출)

  • Na, In-Seop;Lee, Sin-Woo;Lee, Jae-Hak;Koh, Jin-Gwang
    • The Journal of Bigdata
    • /
    • v.4 no.2
    • /
    • pp.35-45
    • /
    • 2019
  • Recently, the social cost of malicious comments has been rising, as seen in news reports of celebrities driven to suicide by their effects. Damage from malicious comments containing abusive language and slang is increasing and spreading throughout society in various types and forms. In this paper, we propose a technique for detecting abusive language using a bidirectional long short-term memory (BiLSTM) neural network model. We collected comments from the web with a crawler and removed stopwords such as English letters and special characters. For the preprocessed comments, a BiLSTM model that considers both the preceding and following words in a sentence was used to determine and detect abusive language. To train the model, the collected comments were morphologically analyzed and vectorized, and each word was labeled as abusive or not. Experiments on a total of 9,288 screened and collected comments showed a performance of 88.79%.

  • PDF
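
The preprocessing the paper describes, dropping English letters and special characters and then labeling each token for the BiLSTM, can be sketched as below; the regex and the tiny lexicon are illustrative stand-ins for the paper's stopword list and morphological analysis.

```python
import re

def clean_comment(text):
    # Keep Hangul syllables, digits, and whitespace; drop English
    # letters and special characters, then collapse runs of spaces.
    kept = re.sub(r"[^가-힣0-9\s]", "", text)
    return re.sub(r"\s+", " ", kept).strip()

def label_tokens(tokens, abusive_lexicon):
    # Per-word binary labels (1 = abusive) as BiLSTM training targets.
    return [(tok, 1 if tok in abusive_lexicon else 0) for tok in tokens]

comment = clean_comment("야 바보야!!! lol @@@")
labels = label_tokens(comment.split(), {"바보야"})
```

The labeled, vectorized sequences are what the bidirectional network consumes, reading each sentence both left-to-right and right-to-left.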

A Study on the Voice Dialing using HMM and Post Processing of the Connected Digits (HMM과 연결 숫자음의 후처리를 이용한 음성 다이얼링에 관한 연구)

  • Yang, Jin-Woo;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.5
    • /
    • pp.74-82
    • /
    • 1995
  • This paper presents a study of voice dialing using HMMs and post-processing of connected digits. The HMM algorithm is widely used in speech recognition with good results, but maximum likelihood estimation in HMM (Hidden Markov Model) training does not lead to parameter values that maximize the recognition rate. To address this problem, we applied post-processing to the segmental K-means procedure in the recognition experiments. Korean connected digits are affected by prolongation more than English connected digits are. To reduce segmentation errors in the level-building algorithm, word models that can be produced by prolongation were added, and rules for these added models were applied to update the recognition results. The recognition system was implemented on an IBM PC with a DSP board containing a TMS320C30 processor. The reference patterns were made by 3 male speakers in a noisy laboratory. The recognition experiment was performed on 21 kinds of telephone numbers (252 utterances). The recognition rate was 6% in the speaker-dependent test and 80.5% in the speaker-independent test.

  • PDF
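
The HMM decoding step that the post-processing rules then correct is plain Viterbi search. A self-contained toy with two hypothetical digit models ("il" and "i", Korean 'one' and 'two') and made-up probabilities:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # Standard Viterbi decoding: the most likely state sequence for an
    # observation sequence under an HMM. Each column keeps, per state,
    # the best probability so far and the path that achieves it.
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][ps][0] * trans_p[ps][s] * emit_p[s][o], V[-2][ps][1] + [s])
                for ps in states
            )
            V[-1][s] = (prob, path)
    return max(V[-1].values())

# Toy two-digit model with invented acoustic observations "hi"/"lo".
states = ("il", "i")
start_p = {"il": 0.6, "i": 0.4}
trans_p = {"il": {"il": 0.3, "i": 0.7}, "i": {"il": 0.7, "i": 0.3}}
emit_p = {"il": {"hi": 0.8, "lo": 0.2}, "i": {"hi": 0.1, "lo": 0.9}}
prob, path = viterbi(["hi", "lo", "hi"], states, start_p, trans_p, emit_p)
```

The level-building algorithm in the paper is effectively this search run over concatenated word models; adding prolongation variants simply enlarges the model inventory before the rule-based correction pass.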

A Study on Environmental Pollution Issues in Fireworks Display (불꽃놀이의 환경오염 측면에 관한 연구)

  • Ahn, Myung-Seog;Lee, Jin-Ho;Shin, Chang-Young
    • Explosives and Blasting
    • /
    • v.26 no.2
    • /
    • pp.45-51
    • /
    • 2008
  • A fireworks display is called younwha in Korean, pokjuk in Chinese, and hanabi in Japanese. Fireworks are a kind of engineering art that achieves artistic effect through the light, sound, heat, form, smoke, smoke screens, time delays, and kinetic energy produced by the combustion and deflagration of explosives. Korea's fireworks technology is world class: the skills were developed in the 1980s and matured through the 1990s, and after 2010 the field is expected to develop further, incorporating nanotechnology and biotechnology with attention to environmental safety. Even an enjoyable fireworks display requires environmental pollution control measures, emergency response capability, management of storage sites, handling of blind (unexploded) shells and waste disposal, and improved public awareness. This paper discusses the current state of fireworks, environmental pollution concerns, and directions and plans for development.

Syntactic Category Prediction for Improving Parsing Accuracy in English-Korean Machine Translation (영한 기계번역에서 구문 분석 정확성 향상을 위한 구문 범주 예측)

  • Kim Sung-Dong
    • The KIPS Transactions:PartB
    • /
    • v.13B no.3 s.106
    • /
    • pp.345-352
    • /
    • 2006
  • A practical English-Korean machine translation system should be able to translate long sentences quickly and accurately. The intra-sentence segmentation method has previously been proposed and has contributed to speeding up syntactic analysis. This paper proposes a syntactic category prediction method using decision trees to obtain accurate parsing results. In parsing with segmentation, each segment is parsed separately and then combined to generate the sentence structure. Syntactic category prediction facilitates the selection of more accurate analysis structures after partial parsing, thereby improving parsing accuracy. We construct features for predicting syntactic categories from the parsed corpus of the Wall Street Journal and generate decision trees. In the experiments, we compare performance against predictions by human-built rules, trigram probabilities, and neural networks, and we show how much category prediction contributes to improving translation quality.
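
Decision-tree prediction of a segment's syntactic category reduces to walking feature tests down to a leaf. The tree below is hand-written for illustration (the paper induces its trees from the WSJ parsed corpus), with invented features such as the segment's first POS tag:

```python
# A hand-written two-level decision tree; "*" is a catch-all branch.
TREE = {
    "feature": "first_pos",
    "branches": {
        "VB": {"label": "VP"},
        "IN": {"label": "PP"},
        "DT": {
            "feature": "prev_last_pos",
            "branches": {"VB": {"label": "NP-obj"}, "*": {"label": "NP-subj"}},
        },
    },
}

def predict(node, features):
    # Walk the tree, testing one feature per internal node,
    # until a leaf label (the predicted syntactic category) is reached.
    while "label" not in node:
        value = features.get(node["feature"], "*")
        node = node["branches"].get(value, node["branches"].get("*", {"label": "?"}))
    return node["label"]
```

A parser with segmentation would call `predict` on each segment boundary to rank candidate partial structures before combining them.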