• Title/Summary/Keyword: Natural Language Processing

A general-purpose model capable of image captioning in Korean and English and a method to generate text suitable for the purpose (한국어 및 영어 이미지 캡션이 가능한 범용적 모델 및 목적에 맞는 텍스트를 생성해주는 기법)

  • Cho, Su Hyun;Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.8
    • /
    • pp.1111-1120
    • /
    • 2022
  • Image captioning is the task of viewing an image and describing it in natural language. It is an important problem that can be solved by bringing together two areas: image processing and natural language processing. By automatically recognizing and describing images in text, images can be converted to text and then to speech for visually impaired people, helping them understand their surroundings, and the technology supports important applications such as image search, art therapy, sports commentary, and real-time traffic commentary. So far, image captioning research has focused solely on recognizing images and converting them into text. For practical use, however, the varied environments of the real world must be considered, and the system must be able to provide image descriptions that fit the intended purpose. In this work, we present a generally applicable Korean and English image captioning model and text generation techniques suited to the purpose of the captioning.
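
The abstract does not spell out the model internals, but the encoder-decoder setup that underlies most bilingual captioning systems can be sketched briefly. The checkpoint name below is a public stand-in used for illustration, not the authors' model.

```python
# A minimal captioning sketch, assuming a generic vision encoder-decoder;
# "nlpconnect/vit-gpt2-image-captioning" is a public stand-in checkpoint,
# not the model proposed in the paper.
from PIL import Image
from transformers import AutoTokenizer, ViTImageProcessor, VisionEncoderDecoderModel

model_name = "nlpconnect/vit-gpt2-image-captioning"
model = VisionEncoderDecoderModel.from_pretrained(model_name)
processor = ViTImageProcessor.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def caption(image_path: str) -> str:
    """Encode the image with a vision backbone, then decode a caption token by token."""
    image = Image.open(image_path).convert("RGB")
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    output_ids = model.generate(pixel_values, max_length=32, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(caption("example.jpg"))  # e.g. "a dog running along the beach"
```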

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1807-1822
    • /
    • 2023
  • Innovation and the rapidly increasing functionality of user-friendly smartphones have encouraged photography enthusiasts to capture picturesque image macros in the work environment or during travel. Formal signboards are placed with marketing objectives and are enriched with text to attract people. Extraction and recognition of text from natural images is an emerging research issue that needs consideration. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scene-text photos make the problem more difficult, and Arabic scene-text extraction and recognition adds a number of further complications. The method described in this paper uses a two-phase methodology to extract Arabic text, with word-boundary awareness, from scenic images with varying text orientations. The first phase uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by a traditional two-layer neural network for recognition. The study also presents how an Arabic training and synthetic dataset can be created to exemplify superimposed text in different scene images: a dataset of 10K cropped images containing Arabic text was created for the detection phase, and a 127K Arabic character dataset for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15K quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach is used to detect complex Arabic scene text with high flexibility, including texts that are arbitrarily oriented, curved, or deformed. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe future researchers can build on this work by enhancing the functionality of the VGG-16 based model with neural networks to treat text images and reduce noise in scene images of any language.
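
The exact detection network is not given in the abstract; the sketch below shows the general shape of a convolutional autoencoder such as the one used in phase 1, with channel counts and input shape chosen as illustrative assumptions.

```python
# A minimal convolutional autoencoder sketch for the detection phase; the
# channel counts and the 64x256 input are assumptions, not the paper's design.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the scene-text crop into a dense feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to a per-pixel text/no-text score map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
crop = torch.randn(1, 1, 64, 256)  # one grayscale signboard crop
mask = model(crop)                 # per-pixel text probability
print(mask.shape)                  # torch.Size([1, 1, 64, 256])
```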

Automatic Extraction of References for Research Reports using Deep Learning Language Model (딥러닝 언어 모델을 이용한 연구보고서의 참고문헌 자동추출 연구)

  • Yukyung Han;Wonsuk Choi;Minchul Lee
    • Journal of the Korean Society for Information Management
    • /
    • v.40 no.2
    • /
    • pp.115-135
    • /
    • 2023
  • The purpose of this study is to assess the effectiveness of using deep learning language models to extract references automatically and create a reference database for research reports in an efficient manner. Unlike academic journals, research reports present difficulties in automatically extracting references due to variations in formatting across institutions. In this study, we addressed this issue by introducing the task of separating references from non-reference phrases, in addition to the commonly used metadata extraction task for reference extraction. The study employed datasets that included various types of references, such as those from research reports of a particular institution, academic journals, and a combination of academic journal references and non-reference texts. Two deep learning language models, namely RoBERTa+CRF and ChatGPT, were compared to evaluate their performance in automatic extraction. They were used to extract metadata, categorize data types, and separate original text. The research findings showed that the deep learning language models were highly effective, achieving maximum F1-scores of 95.41% for metadata extraction and 98.91% for categorization of data types and separation of the original text. These results provide valuable insights into the use of deep learning language models and different types of datasets for constructing reference databases for research reports including both reference and non-reference texts.
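
The RoBERTa+CRF tagger itself is not reproduced here; a hedged sketch of the underlying token-classification step follows, using a public Korean RoBERTa checkpoint and an invented BIO label set, and omitting the CRF layer the paper adds on top.

```python
# Token-level metadata tagging sketch; "klue/roberta-base" and the BIO label
# set are illustrative assumptions, and the classification head is untrained.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-AUTHOR", "I-AUTHOR", "B-TITLE", "I-TITLE", "B-YEAR"]
model_name = "klue/roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(labels))

reference = "Kim, D. (2022). KB-BERT. Journal of Intelligence and Information Systems."
inputs = tokenizer(reference, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits       # (1, seq_len, num_labels)
predictions = logits.argmax(dim=-1)[0]    # best tag per subword token
for token, tag in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), predictions):
    print(token, labels[int(tag)])
```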

Mapping between CoreNet and SUMO through WordNet (WordNet을 매개로 한 CoreNet-SUMO의 매핑)

  • Kang, Sin-Jae;Kang, In-Su;Nam, Se-Jin;Choi, Key-Sun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.2
    • /
    • pp.276-282
    • /
    • 2011
  • CoreNet is a valuable resource for natural language processing, including Korean-Chinese-Japanese multilingual text analysis and translation among natural languages. CoreNet is mapped to SUMO in order to encourage its application in broader fields and to enhance its international status as a multilingual lexical semantic network. To do this, both indirect and direct mapping methodologies are used. Through the indirect mapping chain CoreNet-KorLex-PWN-SUMO, we alleviate the difficulty of translating CoreNet concept terms in Korean into SUMO concepts in English and maximize the recall of SUMO concepts corresponding to CoreNet concepts.
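
The indirect mapping can be pictured as a chain of lookups; the toy dictionaries below are invented placeholders standing in for the actual CoreNet, KorLex, PWN, and SUMO resources.

```python
# Toy sketch of the CoreNet -> KorLex -> PWN -> SUMO chain; all entries are
# invented placeholders, not data from the real resources.
corenet_to_korlex = {"사람(person)": "korlex:person.n.01"}
korlex_to_pwn = {"korlex:person.n.01": "pwn:person.n.01"}
pwn_to_sumo = {"pwn:person.n.01": "SUMO:Human"}

def map_corenet_to_sumo(concept: str) -> str | None:
    """Follow the indirect mapping chain; any missing hop breaks the path."""
    korlex = corenet_to_korlex.get(concept)
    pwn = korlex_to_pwn.get(korlex)
    return pwn_to_sumo.get(pwn)

print(map_corenet_to_sumo("사람(person)"))  # SUMO:Human
```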

KB-BERT: Training and Application of Korean Pre-trained Language Model in Financial Domain (KB-BERT: 금융 특화 한국어 사전학습 언어모델과 그 응용)

  • Kim, Donggyu;Lee, Dongwook;Park, Jangwon;Oh, Sungwoo;Kwon, Sungjun;Lee, Inyong;Choi, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.191-206
    • /
    • 2022
  • Recently, utilizing a pre-trained language model (PLM) has become the de facto approach for achieving state-of-the-art performance on various natural language tasks (called downstream tasks), such as sentiment analysis and question answering. However, like any other machine learning method, a PLM tends to depend on the data distribution seen during the training phase and shows worse performance on unseen (out-of-distribution) domains. For this reason, there have been many efforts to develop domain-specific PLMs for fields such as the medical and legal industries. In this paper, we discuss the training of a finance-specific PLM for the Korean language and its applications. Our finance-specific PLM, KB-BERT, is trained on a carefully curated financial corpus that includes domain-specific documents such as financial reports. We provide extensive performance evaluation results on three natural language tasks: topic classification, sentiment analysis, and question answering. Compared to state-of-the-art Korean PLMs such as KoELECTRA and KLUE-RoBERTa, KB-BERT shows comparable performance on general datasets based on common corpora like Wikipedia and news articles. Moreover, KB-BERT outperforms the compared models on finance-domain datasets that require finance-specific knowledge to solve the given problems.
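
Fine-tuning such a PLM on a downstream task follows the standard pattern sketched below; since the KB-BERT weights are not assumed to be available here, a public Korean BERT checkpoint stands in, and the example sentence and label are invented.

```python
# One fine-tuning step for sentiment analysis; "klue/bert-base" is a stand-in
# for the domain-specific KB-BERT, and the sample and label are invented.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "klue/bert-base"  # assumption: public stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

batch = tokenizer(["실적이 예상치를 상회했다"], return_tensors="pt", padding=True)
labels = torch.tensor([1])  # 1 = positive sentiment

model.train()
loss = model(**batch, labels=labels).loss  # cross-entropy over the two classes
loss.backward()                            # backpropagate through PLM and head
optimizer.step()                           # update all weights
print(float(loss))
```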

Range Detection of Wa/Kwa Parallel Noun Phrase using a Probabilistic Model and Modification Information (확률모형과 수식정보를 이용한 와/과 병렬사구 범위결정)

  • Choi, Yong-Seok;Shin, Ji-Ae;Choi, Key-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.2
    • /
    • pp.128-136
    • /
    • 2008
  • Recognition of parallel structures at an early stage of sentence parsing can reduce the complexity of parsing. In this paper, we propose an unsupervised, language-independent probabilistic model for recognition of parallel noun structures. The proposed model is based on the idea of swapping constituents, which relies on the properties of symmetry (two or more identical constituents are repeated) and reversibility (the order of constituents is interchangeable) in parallel structures. Non-symmetric patterns that cannot be captured by the general symmetry rule are additionally resolved with modifier information. In particular, this paper shows how the proposed model is applied to recognize Korean parallel noun phrases connected by the "wa/kwa" particle. Our model is compared with other models, including supervised ones, and performs better on recognition of parallel noun phrases.
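
The swapping idea can be made concrete with a toy scorer: a genuine parallel pair should be attested in both orders around the particle, while an ordinary non-parallel pair should not. The counts below are invented placeholders, not corpus statistics.

```python
# Toy reversibility scorer for "wa/kwa" conjunct candidates; the bigram counts
# are invented placeholders.
from collections import Counter

# Pretend counts of noun pairs observed around the particle "wa/kwa".
pair_counts = Counter({("사과", "배"): 12, ("배", "사과"): 9, ("사과", "식탁"): 1})

def parallel_score(a: str, b: str) -> float:
    """Reversibility: a true parallel pair is frequent in both orders."""
    forward, backward = pair_counts[(a, b)], pair_counts[(b, a)]
    return min(forward, backward) / (1 + max(forward, backward))

print(parallel_score("사과", "배"))    # high: both orders attested
print(parallel_score("사과", "식탁"))  # 0.0: attested in one order only
```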

Automation of Service Level Agreement based on Active SLA (Active SLA 기반 서비스 수준 협약의 자동화)

  • Kim, Sang-Rak;Kang, Man-Mo;Bae, Jae-Hak
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.4
    • /
    • pp.229-237
    • /
    • 2013
  • As demand for IT services based on SOA and cloud computing increases, service level agreements (SLAs) have received more attention from the parties concerned. An SLA is usually a paper contract written in natural language. Commercially available SLA management tools implement SLAs implicitly in the application with a procedural language, which makes automation of SLA management difficult. It is also laborious to maintain contract management systems, because changes in a contract give rise to extensive modifications in the source code. We see the source of the trouble as the coexistence of documentary SLAs (paper contracts) and corresponding executable SLAs (contracts coded in a procedural language). In this paper, to resolve these SLA management problems, we propose an active SLM (Active Service Level Management) system based on the active SLA (Active Service Level Agreement). In the proposed system, the separate management and processing of the dual SLAs are unified into a single process through the introduction of active SLAs (ASLAs).
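
The core idea, an SLA represented as data that a generic engine can evaluate rather than as procedural code, can be sketched as follows; the clause fields and numbers are illustrative assumptions, not the paper's schema.

```python
# A hedged sketch of an "executable SLA" held as data; field names and
# thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SLAClause:
    metric: str        # e.g. "availability" (percent)
    minimum: float     # contractual lower bound
    penalty: float     # payable when the bound is violated

def evaluate(clauses: list[SLAClause], measured: dict[str, float]) -> float:
    """Return the total penalty owed for the measured service period."""
    return sum(c.penalty for c in clauses if measured.get(c.metric, 0.0) < c.minimum)

# Changing the contract means editing data, not source code.
sla = [SLAClause("availability", 99.9, 500.0),
       SLAClause("throughput_tps", 100.0, 200.0)]
print(evaluate(sla, {"availability": 99.5, "throughput_tps": 120.0}))  # 500.0
```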

A Study on Automatic Classification of Profanity Sentences of Elementary School Students Using BERT (BERT를 활용한 초등학교 고학년의 욕설문장 자동 분류방안 연구)

  • Shim, Jaekwoun
    • Journal of Creative Information Culture
    • /
    • v.7 no.2
    • /
    • pp.91-98
    • /
    • 2021
  • As the time elementary school students spend online has increased due to COVID-19, the number of posts, comments, and chat messages they write has grown, and problems such as hurting others' feelings or using swear words are occurring. Netiquette is taught in elementary school, but the training time is insufficient, and it is difficult to expect changes in student behavior from education alone, so technical support through natural language processing is needed. In this study, an experiment was conducted to automatically filter profanity sentences by applying a pre-trained language model to sentences written by elementary school students. Chat logs of 4th-6th graders were collected on an online learning platform, and general sentences and profanity sentences were used to train the pre-trained language model. Classifying profanity sentences with this model yielded a precision of 75%. This suggests that, if the training data is sufficiently supplemented, the approach can be applied to online platforms used by elementary school students.
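
The reported 75% precision can be read as follows: of the sentences the classifier flags as profanity, three out of four actually are. A tiny sketch with invented labels reproduces the arithmetic.

```python
# Precision sketch with invented toy labels (1 = profanity sentence).
from sklearn.metrics import precision_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # gold labels
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]  # classifier output: 3 TP, 1 FP
# precision = TP / (TP + FP) = 3 / 4
print(precision_score(y_true, y_pred))  # 0.75
```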

u-SPACE: ubiquitous Smart Parenting And Customized Education (u-SPACE: 육아 보조 및 맞춤 교육을 위한 유비쿼터스 시스템)

  • Min, Hye-Jin;Park, Doo-Jin;Chang, Eun-Young;Lee, Ho-Joon;Park, Jong-C.
    • Proceedings of the Korean HCI Society Conference (한국HCI학회 학술대회논문집)
    • /
    • 2006.02a
    • /
    • pp.94-102
    • /
    • 2006
  • As the time parents spend on social activities increases, the time children spend alone at home is also increasing. Children therefore need help that protects them from the indoor dangers they are easily exposed to, without greatly restricting their independence, and that provides appropriate guidance according to their psychological and emotional state. In this work, we protect children from physical dangers using RFID technology and provide multimedia content, music and animation, matched to the child's psychological and emotional state using natural language processing technology. The system also provides information such as schedule management, which requires continuous attention, and guidance on using home appliances encountered in daily life, so that children can handle their own tasks by themselves. We design a virtual home and present the results of simulating these services around realizable scenarios.
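
The content-selection step in such a system can be pictured as a simple mapping from the detected emotional state to a multimedia catalog; the emotion labels and file names below are invented placeholders, not part of u-SPACE.

```python
# Toy emotion-to-content selection; the emotion labels and catalog entries are
# invented placeholders.
catalog = {
    "happy": ["upbeat_song.mp3", "adventure_animation.mp4"],
    "sad": ["calm_song.mp3", "comforting_animation.mp4"],
}

def recommend(detected_emotion: str) -> list[str]:
    """Pick multimedia content matching the child's detected emotional state."""
    return catalog.get(detected_emotion, catalog["happy"])  # default: upbeat

print(recommend("sad"))  # ['calm_song.mp3', 'comforting_animation.mp4']
```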

An Algorithm for Predicting the Relationship between Lemmas and Corpus Size

  • Yang, Dan-Hee;Gomez, Pascual Cantos;Song, Man-Suk
    • ETRI Journal
    • /
    • v.22 no.2
    • /
    • pp.20-31
    • /
    • 2000
  • Much research on natural language processing (NLP), computational linguistics, and lexicography has relied and depended on linguistic corpora. In recent years, many organizations around the world have been constructing their own large corpora to achieve corpus representativeness and/or linguistic comprehensiveness. However, there is no reliable guideline as to how large machine-readable corpus resources should be compiled to develop practical NLP software and/or complete dictionaries for human and computational use. In order to shed some new light on this issue, we reveal the flaws of several previous studies aiming to predict corpus size, especially those using pure regression or curve-fitting methods. To overcome these flaws, we contrive a new mathematical tool, a piecewise curve-fitting algorithm, and then suggest how to determine the tolerance error of the algorithm for good prediction using a specific corpus. Finally, we illustrate experimentally that the algorithm presented is valid, accurate, and very reliable. We are confident that this study can contribute to solving some inherent problems of corpus linguistics, such as corpus predictability, compiling methodology, corpus representativeness, and linguistic comprehensiveness.
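
In the same spirit as the piecewise idea described above, the sketch below fits a growth law to only the most recent segment of observed (corpus size, lemma count) points and extrapolates the size needed for a target vocabulary; the Heaps-style power law and the synthetic numbers are assumptions, not the paper's exact algorithm or measurements.

```python
# Piecewise-style curve fitting sketch; the power-law form and the data points
# are assumptions, not the paper's algorithm or measurements.
import numpy as np
from scipy.optimize import curve_fit

def growth(n, k, beta):
    """Heaps-style law: distinct lemmas grow roughly as k * n**beta."""
    return k * n ** beta

corpus_sizes = np.array([1e5, 2e5, 4e5, 8e5, 1.6e6])         # tokens observed
lemma_counts = np.array([8000, 12000, 18000, 27000, 40000])  # synthetic counts

# Piecewise idea: fit only the most recent segment, since earlier segments may
# follow different parameters.
window = slice(-3, None)
(k, beta), _ = curve_fit(growth, corpus_sizes[window], lemma_counts[window], p0=(1.0, 0.5))

target = 100_000  # desired number of distinct lemmas
needed = (target / k) ** (1 / beta)
print(f"Estimated corpus size for {target} lemmas: {needed:.3e} tokens")
```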
