• Title/Summary/Keyword: 텍스트분석 (text analysis)

Search Results: 2,629, Processing Time: 0.031 seconds

A weighted method for evaluating software quality (가중치를 적용한 소프트웨어 품질 평가 방법)

  • Jung, Hye Jung
    • Journal of Digital Convergence / v.19 no.8 / pp.249-255 / 2021
  • This study proposes a method for determining weights for the eight quality characteristics suggested by international standards (functionality, reliability, usability, maintainability, portability, efficiency, security, and interoperability), focusing on software test reports. Currently, software quality evaluation applies the same weight to all eight characteristics and takes their arithmetic average. In this study, weights for the eight characteristics were derived from text analysis of the test reports of two products and applied to the evaluation. It was confirmed that the average of the test reports under the weighted quality characteristics was more effective than the equally weighted average.
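As a rough illustration of the weighting idea in this abstract, the following Python sketch compares an equally weighted mean of the eight quality characteristics with a weighted mean; the scores and weights are hypothetical, not taken from the paper.

```python
# Minimal sketch (not the paper's method): equally weighted mean vs. a mean
# weighted by hypothetical weights assumed to come from text analysis of
# test reports. All numbers below are invented for illustration.

characteristics = ["functionality", "reliability", "usability", "maintainability",
                   "portability", "efficiency", "security", "interoperability"]
scores = {"functionality": 4.2, "reliability": 3.8, "usability": 4.0,
          "maintainability": 3.5, "portability": 3.9, "efficiency": 4.1,
          "security": 3.7, "interoperability": 4.3}
# Hypothetical weights, e.g. normalized keyword frequencies from test-report text.
weights = {"functionality": 0.20, "reliability": 0.18, "usability": 0.12,
           "maintainability": 0.10, "portability": 0.08, "efficiency": 0.10,
           "security": 0.14, "interoperability": 0.08}

equal_mean = sum(scores.values()) / len(scores)
weighted_mean = sum(scores[c] * weights[c] for c in characteristics) / sum(weights.values())

print(f"equal-weight mean : {equal_mean:.3f}")
print(f"weighted mean     : {weighted_mean:.3f}")
```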

Multi-Dimensional Keyword Search and Analysis of Hotel Review Data Using Multi-Dimensional Text Cubes (다차원 텍스트 큐브를 이용한 호텔 리뷰 데이터의 다차원 키워드 검색 및 분석)

  • Kim, Namsoo;Lee, Suan;Jo, Sunhwa;Kim, Jinho
    • Journal of Information Technology and Architecture / v.11 no.1 / pp.63-73 / 2014
  • As the WWW advances, unstructured data such as text is attracting more and more user interest. These data, created by WWW users, express users' subjective opinions, so appropriate analysis can yield very useful information such as users' personal tastes or perspectives. In this paper, we provide efficient analyses of unstructured text documents by taking advantage of OLAP (On-Line Analytical Processing) multidimensional cube technology. OLAP cubes have been widely used for multidimensional analysis of structured data such as simple alphabetic and numeric data, but they had not been applied to unstructured data consisting of long texts. To support multidimensional analysis of unstructured text data, the Text Cube model was recently proposed; it incorporates term frequency and an inverted index, which play key roles in information retrieval, as measures for searching and analyzing text databases. The primary goal of this paper is to apply the text cube model to a real data set from an Internet site that shares hotel information and to provide multidimensional analysis of users' text reviews of hotels. To achieve this goal, we first build text cubes for the hotel review data. Using these text cubes, we design and implement a system that provides multidimensional keyword search for searching and analyzing review texts along various dimensions. This system can help users easily obtain valuable summaries of guests' subjective opinions. Furthermore, the paper evaluates the proposed system through various experiments and demonstrates its effectiveness.
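The following is a minimal Python sketch of the text-cube idea summarized above: cells keyed by dimension values, with term frequency and an inverted index as measures. The dimensions, review texts, and query function are illustrative assumptions, not the authors' system.

```python
# Minimal text-cube sketch: each cell (hotel, month) holds term frequencies and
# an inverted index over its review texts; a keyword search aggregates over cells.
from collections import defaultdict, Counter

reviews = [  # hypothetical review records: (hotel, month, text)
    ("HotelA", "2014-01", "clean room friendly staff"),
    ("HotelA", "2014-02", "noisy room but friendly staff"),
    ("HotelB", "2014-01", "great breakfast clean lobby"),
]

cube = defaultdict(lambda: {"tf": Counter(), "inverted": defaultdict(set)})
for doc_id, (hotel, month, text) in enumerate(reviews):
    cell = cube[(hotel, month)]
    for term in text.split():
        cell["tf"][term] += 1
        cell["inverted"][term].add(doc_id)

def keyword_search(keyword, hotel=None, month=None):
    """Aggregate the term frequency of `keyword` over all cells matching the given dimensions."""
    return sum(cell["tf"][keyword]
               for (h, m), cell in cube.items()
               if (hotel is None or h == hotel) and (month is None or m == month))

print(keyword_search("friendly", hotel="HotelA"))  # 2
print(keyword_search("clean"))                     # 2
```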

Analysis of Factors Affecting Surge in Container Shipping Rates in the Era of Covid19 Using Text Analysis (코로나19 판데믹 이후 컨테이너선 운임 상승 요인분석: 텍스트 분석을 중심으로)

  • Rha, Jin Sung
    • Journal of Korea Society of Industrial Information Systems / v.27 no.1 / pp.111-123 / 2022
  • In the era of Covid-19, container shipping rates have surged. Many studies have attempted to investigate the factors behind this surge, but there is limited literature that applies text mining techniques to analyze its underlying causes. This study aims to identify the factors behind the unprecedented rise in shipping rates using network text analysis and LDA topic modeling. For the analysis, we collected data and keywords from articles in Lloyd's List over the past two years (2020-2021). The results of the text analysis show that the current surge is mainly due to the "US-China trade war", "increasing blank sailings", "port congestion", "container shortages", and "unexpected events such as the Suez Canal blockage".
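As a hedged illustration of the LDA step mentioned above, the sketch below fits a small topic model with scikit-learn; the example headlines are invented stand-ins for the Lloyd's List articles, which are not reproduced here.

```python
# Minimal LDA topic-modeling sketch on invented shipping-news headlines.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "port congestion delays container vessels on transpacific routes",
    "carriers announce blank sailings amid equipment shortage",
    "suez canal blockage disrupts container schedules",
    "container shortage and port congestion push spot rates higher",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)          # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {topic_idx}: {', '.join(top_terms)}")
```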

Construction Bid Data Analysis for Overseas Projects Based on Text Mining - Focusing on Overseas Construction Project's Bidder Inquiry (텍스트 마이닝을 통한 해외건설공사 입찰정보 분석 - 해외건설공사의 입찰자 질의(Bidder Inquiry) 정보를 대상으로 -)

  • Lee, JeeHee;Yi, June-Seong;Son, JeongWook
    • Korean Journal of Construction Engineering and Management / v.17 no.5 / pp.89-96 / 2016
  • Most data generated in construction projects is unstructured text. Analysis of unstructured data is greatly needed for the effective analysis of large amounts of text-based documents such as contracts, specifications, and RFIs. This study analyzed the bid-related documents (bidder inquiries) of previously performed overseas construction projects; as a result of the analysis, frequent words in the documents, association rules among the words, and various document topics were derived. This study suggests an effective approach for analyzing massive document sets in a short time using text mining, and this approach is expected to extend unstructured text data analysis in the construction industry.
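The sketch below illustrates, under stated assumptions, the kind of frequent-word and association-rule analysis the abstract describes; the bidder-inquiry sentences are invented and the rule mining is a simple pairwise co-occurrence version, not the paper's pipeline.

```python
# Minimal sketch: term frequencies and pairwise co-occurrence "association rules"
# over invented bidder-inquiry-like sentences.
from collections import Counter
from itertools import combinations

inquiries = [
    "clarify concrete specification for retaining wall",
    "request extension of bid submission deadline",
    "clarify payment schedule and bid bond requirement",
    "concrete mix design specification clarification",
]
docs = [set(s.split()) for s in inquiries]

# Frequent words across all inquiries
term_freq = Counter(w for d in docs for w in d)
print(term_freq.most_common(5))

# Pairwise rules: support and confidence for word pairs
n = len(docs)
pair_count = Counter()
for d in docs:
    pair_count.update(combinations(sorted(d), 2))

for (a, b), cnt in pair_count.items():
    support = cnt / n
    confidence = cnt / sum(1 for d in docs if a in d)  # estimate of P(b | a)
    if support >= 0.5:
        print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
```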

HTML Text Extraction Using Frequency Analysis (빈도 분석을 이용한 HTML 텍스트 추출)

  • Kim, Jin-Hwan;Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.9 / pp.1135-1143 / 2021
  • Recently, text collection using web crawlers for big data analysis has been performed frequently. However, to collect only the necessary text from a web page composed of numerous tags and texts, the crawler must be told which HTML tags and style attributes contain the text required for the analysis, which is cumbersome. In this paper, we propose a method of extracting text using the frequency with which text appears across web pages, without specifying HTML tags or style attributes. In the proposed method, text is extracted from the DOM trees of all collected web pages, the frequency of appearance of each text is analyzed, and the main text is extracted by excluding text with a high frequency of appearance. Through this study, the superiority of the proposed method was verified.
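A minimal sketch of the frequency-based extraction idea, assuming BeautifulSoup for DOM parsing and two invented pages from the same site; it is not the authors' implementation.

```python
# Text nodes that repeat across pages of a site (menus, footers) are dropped;
# the remaining low-frequency text is kept as the main body text.
from collections import Counter
from bs4 import BeautifulSoup  # pip install beautifulsoup4

pages = [  # hypothetical crawled HTML pages from the same site
    "<html><body><nav>Home About</nav><p>First article body text.</p><footer>Site footer</footer></body></html>",
    "<html><body><nav>Home About</nav><p>Second article, different content.</p><footer>Site footer</footer></body></html>",
]

def text_nodes(html):
    soup = BeautifulSoup(html, "html.parser")
    return list(soup.stripped_strings)

# Count how many pages each text node appears on
freq = Counter(t for html in pages for t in set(text_nodes(html)))

for html in pages:
    body = [t for t in text_nodes(html) if freq[t] == 1]  # drop site-wide repeated text
    print(" ".join(body))
```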

Quantitative Text Mining for Social Science: Analysis of Immigrant in the Articles (사회과학을 위한 양적 텍스트 마이닝: 이주, 이민 키워드 논문 및 언론기사 분석)

  • Yi, Soo-Jeong;Choi, Doo-Young
    • The Journal of the Korea Contents Association / v.20 no.5 / pp.118-127 / 2020
  • This paper introduces trends and methodological challenges of quantitative Korean text analysis, using case studies of academic and news media articles on "migration" and "immigration" published in 2017-2019. Quantitative text analysis is based on natural language processing (NLP) technology and has become an essential tool for social science. It is a part of data science that converts documents into structured data, performs hypothesis discovery and verification, and visualizes the data. Furthermore, we examined the statistical models commonly applied in social-scientific quantitative text analysis, using NLP with R programming and the Quanteda package.
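The paper works in R with the Quanteda package; as a rough Python analogue of its first step, the sketch below converts documents into a structured document-feature matrix. The example documents are invented.

```python
# Minimal sketch: turning raw documents into a document-feature matrix,
# the structured representation on which quantitative text analysis operates.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "immigration policy debate in the national assembly",
    "migration patterns of foreign workers",
    "public opinion on immigration and labor migration",
]

vectorizer = CountVectorizer()
dfm = vectorizer.fit_transform(docs)  # rows: documents, columns: terms

print(vectorizer.get_feature_names_out())
print(dfm.toarray())
```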

A Study on the Meaning-Generation Mechanisms of Digital Games from the Perspective of Structural-Generative Semiotics - Focusing on an Analysis of StarCraft, Lineage, and Special Force - (구조생성기호학적 관점에서의 디지털게임의 의미생성방식 연구 - 스타크래프트, 리니지, 스페셜포스에 대한 분석을 중심으로 -)

  • Park, Tae-Sun
    • Journal of Korea Game Society (한국게임학회지) / v.6 no.1 / pp.41-43 / 2009
  • This study sought to extract and analyze the texts of games. Greimas's structural-generative semiotics was used as the basic framework for the analysis, with additional support from phenomenology for the research methodology and from psychoanalytic theory for interpretation. The online games StarCraft, Lineage, and Special Force were chosen as the concrete objects of analysis, and the research questions concerned the meaning-generation mechanisms of these three games. Each game's meaning-generation mechanism was examined at the three levels of structural-generative semiotics, that is, the deep structure, the semio-narrative structure, and the discourse structure, tracing the process by which meaning is progressively enriched as it is converted and enunciated across these levels. The differences among the games, and by extension among game genres, appear to stem mainly from differences at the deep level. The major commonality of these games, and their difference from other media, stands out in the actantial model: the user comes to occupy the position of the subject. The interactivity that characterizes games allows users to intervene actively in the text, and this active intervention lets users become the subject of the text themselves; to become the subject means to create the text by projecting one's own desires directly onto the object. This is a major characteristic of how games generate meaning and a key point of difference from other media. It also demonstrates that games are a mechanism capable of exerting a great influence on our culture and society.


An Automatic Classification of Discourse Relations in the Arguing Structure of Korean Texts (한국어 텍스트의 논증 구조 내 담화 관계의 자동 분류 연구)

  • Lee, Sana;Shin, Hyopil
    • Annual Conference on Human and Language Technology / 2015.10a / pp.59-64 / 2015
  • Recently, there has been active work on analyzing public opinion using online text data. This work requires identifying the arguing structure and key content of texts with subjective orientation, and as the volume and diversity of such data rapidly increase, automating this process has become unavoidable. In this study, we built a corpus of Korean texts consisting of pro and con opinions on policies and defined the discourse relations between the basic units that make up a text. The relations between units are predicted using machine learning and rule-based methods, and the results are composed into a tree structure corresponding to a single text. We also attempted to extract the sentences or clauses that directly support the thesis sentence within the text's structure in order to obtain the key content of the text.
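As a hedged illustration of the machine-learning part of this approach, the sketch below trains a bag-of-words classifier on hypothetical pairs of text units labeled with discourse relations; the labels, example pairs, and [SEP] convention are assumptions, and the paper additionally uses rules and composes predictions into a tree.

```python
# Minimal sketch: predict a discourse relation between two text units
# with a bag-of-words classifier (invented examples and labels).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

pairs = [
    ("The policy should be adopted.", "It will reduce costs.", "support"),
    ("The policy should be adopted.", "However, funding is uncertain.", "concession"),
    ("The plan will create jobs.", "Because new factories will open.", "support"),
    ("The plan is popular.", "But experts raise safety concerns.", "concession"),
]

X_text = [u1 + " [SEP] " + u2 for u1, u2, _ in pairs]
y = [label for _, _, label in pairs]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(X_text)
clf = LogisticRegression(max_iter=1000).fit(X, y)

test = "The proposal is sound. [SEP] However, the budget is limited."
print(clf.predict(vectorizer.transform([test]))[0])
```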


In-depth Analysis of Soccer Game via Webcast and Text Mining (웹 캐스트와 텍스트 마이닝을 이용한 축구 경기의 심층 분석)

  • Jung, Ho-Seok;Lee, Jong-Uk;Yu, Jae-Hak;Lee, Han-Sung;Park, Dai-Hee
    • The Journal of the Korea Contents Association / v.11 no.10 / pp.59-68 / 2011
  • As the role of the soccer game analyst, who analyzes matches and devises winning strategies, becomes more prominent, high-level analysis beyond procedural tasks such as main-event detection is required in the IT-based soccer broadcasting research community. In this paper, we propose a novel approach that generates high-level, in-depth analysis results from real-time text-based soccer Webcasts using text mining. The proposed method creates metadata such as attributes, actions, and events, builds an index, and then generates useful knowledge via text mining techniques such as association rule mining, an event growth index, and pathfinder network analysis, drawing on the Webcast text and domain knowledge. We carried out a feasibility experiment on the proposed technique using the Webcast text of the Spanish national team's 2010 World Cup matches.
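A minimal sketch, over invented commentary lines, of turning webcast-style text into (minute, player, action) event metadata and aggregating simple player-action associations; it is not the authors' system and omits the event growth index and pathfinder network analysis.

```python
# Parse webcast-style lines into event records, then count player-action pairs.
import re
from collections import Counter

webcast = [  # hypothetical real-time commentary lines
    "12' Villa shot on target",
    "27' Xavi pass to Iniesta",
    "55' Villa goal assisted by Xavi",
    "70' Iniesta shot blocked",
]

events = []
for line in webcast:
    m = re.match(r"(\d+)' (\w+) (\w+)", line)
    if m:
        minute, player, action = m.groups()
        events.append({"minute": int(minute), "player": player, "action": action})

# Simple aggregation: which actions are associated with which players
pair_counts = Counter((e["player"], e["action"]) for e in events)
print(pair_counts.most_common())
```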

Text Classification using Cloze Question based on KorBERT (KorBERT 기반 빈칸채우기 문제를 이용한 텍스트 분류)

  • Heo, Jeong;Lee, Hyung-Jik;Lim, Joon-Ho
    • Annual Conference on Human and Language Technology / 2021.10a / pp.486-489 / 2021
  • This paper introduces a prompt-based classification model that, building on the KorBERT Korean language model, converts text classification into a cloze (fill-in-the-blank) problem and predicts the word appropriate for the blank. Head-based classification using the [CLS] token and prompt-based classification reflect the characteristics of the NSP and MLM pre-training objectives, respectively; we compared their performance on text classification tasks divided into semantic/structural analysis and semantic inference. For the semantic/structural analysis experiments, we used the KLUE semantic textual similarity and topic classification datasets, and for the semantic inference experiments, we used the KLUE natural language inference dataset. The experiments showed that prompt-based text classification, which reflects the characteristics of the MLM objective, performed well on the semantic similarity and topic classification tasks, while head-based text classification, which reflects the characteristics of the NSP objective, performed well on the natural language inference task.
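A minimal sketch of the prompt/cloze classification idea, using the Hugging Face fill-mask pipeline with a generic multilingual masked LM as a stand-in for KorBERT; the template and label words (verbalizer) are illustrative assumptions, not the paper's.

```python
# Cloze-style classification: score label words at a [MASK] position and
# pick the label whose word the masked LM prefers.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

label_words = {"positive": "good", "negative": "bad"}  # hypothetical verbalizer
text = "The movie was thrilling from start to finish."
prompt = f"{text} Overall it was [MASK]."

scores = {label: 0.0 for label in label_words}
for candidate in fill(prompt, targets=list(label_words.values())):
    for label, word in label_words.items():
        if candidate["token_str"].strip() == word:
            scores[label] = candidate["score"]

print(max(scores, key=scores.get))
```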
