• Title/Summary/Keyword: 유사 키워드 (similar keywords)

Search Results: 311

Extracting Alternative Word Candidates for Patent Information Search (특허 정보 검색을 위한 대체어 후보 추출 방법)

  • Baik, Jong-Bum;Kim, Seong-Min;Lee, Soo-Won
    • Journal of KIISE: Computing Practices and Letters / v.15 no.4 / pp.299-303 / 2009
  • Patent information search is used to check for the existence of prior work. In patent information search, there are many reasons why a query fails to retrieve the appropriate information. This research proposes a method for extracting alternative word candidates in order to minimize search failures caused by keyword mismatch. Assuming that two words have similar meanings if they share similar co-occurring words, the proposed method uses the concept of concentration, association word sets, cosine similarity between association word sets, and a ranking modification technique. The performance of the proposed method is evaluated against a manually built list of alternative word candidates. Evaluation results show that the proposed method outperforms the document vector space model in recall.
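The abstract's core idea, ranking candidate words by the cosine similarity of their association (co-occurrence) word sets, can be illustrated with a minimal Python sketch. The co-occurrence counts, the concentration measure, and the ranking modification step are assumptions here, not the paper's implementation.

```python
# Illustrative sketch (not the authors' implementation): score alternative word
# candidates by the cosine similarity of their association (co-occurrence) word sets.
from collections import Counter
from math import sqrt

def association_set(word, cooccurrence):
    """Co-occurrence counts of `word`, treated as a weighted association word set."""
    return Counter(cooccurrence.get(word, {}))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def alternative_candidates(query, cooccurrence, top_n=5):
    """Rank other vocabulary words by how similar their association sets are to the query's."""
    q_set = association_set(query, cooccurrence)
    scored = [(w, cosine(q_set, association_set(w, cooccurrence)))
              for w in cooccurrence if w != query]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_n]

# Toy co-occurrence statistics (hypothetical patent-corpus counts)
cooc = {
    "cellphone":  {"battery": 5, "antenna": 3, "display": 4},
    "handset":    {"battery": 4, "antenna": 2, "display": 5},
    "automobile": {"engine": 6, "wheel": 4},
}
print(alternative_candidates("cellphone", cooc))  # "handset" should rank first
```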

An Automatically Extracting Formal Information from Unstructured Security Intelligence Report (비정형 Security Intelligence Report의 정형 정보 자동 추출)

  • Hur, Yuna;Lee, Chanhee;Kim, Gyeongmin;Jo, Jaechoon;Lim, Heuiseok
    • Journal of Digital Convergence / v.17 no.11 / pp.233-240 / 2019
  • In order to predict and respond to cyber attacks, many security companies quickly identify the methods, types, and characteristics of attack techniques and publish Security Intelligence Reports (SIRs) on them. However, the SIRs distributed by each company are large and unstructured. In this paper, we propose a framework that uses five analysis techniques to structure a report and extract its key information, in order to reduce the time required to extract information from large unstructured SIRs efficiently. Since the SIR data have no ground-truth labels, we apply four unsupervised analysis techniques: keyword extraction, topic modeling, summarization, and document similarity. Finally, we build data for extracting threat information from SIRs and apply Named Entity Recognition (NER) to recognize words belonging to the IP, Domain/URL, Hash, and Malware types and to determine which type each word belongs to. In total, the proposed framework applies five analysis techniques, including NER.
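The entity types named in the abstract (IP, Domain/URL, Hash, Malware) can be made concrete with a rough rule-based sketch. The paper uses a trained NER model; the regular expressions below are only an assumed illustration of the indicator formats, not the proposed framework.

```python
# Rule-based illustration of the indicator types (IP, URL, file hash) mentioned in SIRs.
# The paper applies a trained NER model; these regexes are stand-ins for illustration only.
import re

PATTERNS = {
    "IP":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "URL":  re.compile(r"\bhttps?://[^\s\"']+", re.IGNORECASE),
    "HASH": re.compile(r"\b(?:[a-f0-9]{64}|[a-f0-9]{40}|[a-f0-9]{32})\b", re.IGNORECASE),  # SHA-256/SHA-1/MD5
}

def extract_indicators(text: str) -> dict:
    """Return every matched indicator grouped by type."""
    return {label: pattern.findall(text) for label, pattern in PATTERNS.items()}

sample = ("The dropper connects to http://malicious.example.com and 192.168.10.5; "
          "payload MD5 d41d8cd98f00b204e9800998ecf8427e.")
print(extract_indicators(sample))
```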

Metaverse and Broadcasting Media (메타버스(Metaverse)와 방송 미디어)

  • Jeong, Sang-Seop
    • Broadcasting and Media Magazine / v.27 no.1 / pp.59-70 / 2022
  • The term 'metaverse' is a compound of the English word 'meta', used here to mean 'virtual' or 'transcendent', and 'universe'. It refers to a three-dimensional virtual world in which social, economic, and cultural activities take place as in the real world. Amid the COVID-19 pandemic it has drawn attention as a means of contactless communication and is being used in many areas, including work, socializing, and events of all kinds. It is said that hardly any 2022 new-year business plan fails to mention the metaverse; to that extent it has emerged as a core keyword. In short, the metaverse is a virtual world beyond reality, a world held in digital media such as smartphones and computers. The world is gradually changing. The global statistics company Statista projected that the metaverse market, worth about USD 30.7 billion (about KRW 35.3 trillion) in 2021, will grow to about USD 296.9 billion (about KRW 341.6 trillion) by 2025. The definition of the metaverse currently used in the market can be summarized as 'a three-dimensional digital virtual space in which the social, economic, and cultural activities of the real world are realized in a similar form, or which provides experiences that reality cannot offer.' In 2021 the metaverse is no longer a matter of imagination; it is penetrating the real world. A virtual world connected to reality, a virtual space with a sense of presence, is increasingly becoming real. The concept of the metaverse, which first appeared in the 1990s, re-emerged in the 2020s because of expectations for its combination with XR technologies, which can deliver more immersive and realistic experiences than the metaverse of the past. The reason the metaverse has attracted such attention over the past 30 years lies in technological progress: the commercialization of 5G high-speed internet, the emergence of 6G, and the spread of virtual and augmented reality into everyday life. These technological advances have also catalyzed the development of mixed reality, in which physical objects in the real world and virtual objects can interact. On top of this, as COVID-19 swept the world over the past two years and contactless, online services spread, the metaverse came to be accepted not as a concept but as part of our daily lives. Through social distancing, remote work, and online classes once thought impossible, our society is gradually realizing that it can function this way, and the pandemic has also forced us to experience, half involuntarily, a metaverse world that once felt distant. Against this background, this article reviews the recent metaverse, examines the types, technologies, and service cases in which it has been combined with broadcasting media, and draws out the directions pursued by major companies, key implications, and conclusions.

Design and analysis of monitoring system for illegal overseas direct purchase based on C2C (C2C에 기반으로 해외직구 불법거래에 관한 모니터링 시스템 설계 및 분석)

  • Shin, Yong-Hun;Kim, Jeong-Ho
    • Journal of Digital Convergence / v.20 no.5 / pp.609-615 / 2022
  • In this paper, we propose a monitoring system for illegal overseas direct purchases based on C2C transactions between individuals. The Customs Act stipulates that direct purchases from overseas are exempted from taxation only if they are below a certain amount (US$150, or US$200 for goods shipped from the US) or are recognized as goods for personal use. Reselling overseas direct-purchase items that were bought tax-exempt, for example online, constitutes smuggling without a customs declaration. Nevertheless, the number of such resales on online second-hand marketplaces is increasing, and the continuing violation of the Customs Act has become a controversial social issue. Therefore, this study collects transaction listings related to overseas direct purchases from unspecified sellers, refines the data with big data techniques, and designs and analyzes a monitoring system based on natural language processing. The system could be used to crack down on illegal transactions involving overseas direct-purchase goods.
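As a very rough illustration of the thresholds the abstract cites, the sketch below flags second-hand listings that mention overseas direct purchase and exceed the duty-free limit. The keyword list, function, and decision rule are assumptions for illustration; the paper's system is built on big data collection and natural language processing, not this simple filter.

```python
# Minimal sketch (assumed keywords and rule, not the authors' pipeline): flag listings
# that mention overseas direct purchase and exceed the duty-free limit
# (US$150 in general, US$200 for goods shipped from the US).
RESALE_KEYWORDS = ["직구", "해외직구", "미개봉"]   # hypothetical indicator terms

def flag_listing(title: str, price_usd: float, shipped_from_us: bool = False) -> bool:
    limit = 200.0 if shipped_from_us else 150.0
    mentions_direct_purchase = any(k in title for k in RESALE_KEYWORDS)
    return mentions_direct_purchase and price_usd > limit

print(flag_listing("해외직구 미개봉 스마트워치 팝니다", price_usd=320.0))  # True -> monitoring candidate
print(flag_listing("국내 정품 이어폰 판매", price_usd=90.0))               # False
```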

Data value extraction through comparison of online big data analysis results and water supply statistics (온라인 빅 데이터 분석 결과와 상수도 통계 비교를 통한 데이터 가치 추출)

  • Hong, Sungjin;Yoo, Do Guen
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.431-431 / 2021
  • With the advent of the Fourth Industrial Revolution, there is great interest in extracting value through data analysis in the planning, operation, and management of infrastructure. In the open government data index, which evaluates data availability, accessibility, and government support, Korea scored 0.93 out of 1, ranking first among OECD member countries (as of 2019) and far above the average of 0.60. However, access to officially published information on infrastructure, and to the information needed for in-depth analysis, remains limited. In particular, water supply systems, a representative type of infrastructure, are mostly designated as nationally critical facilities, which constrains the acquisition and analysis of a wide range of information, and the national water supply statistics do not provide detailed information, such as the location or cause, for abnormal events like leakage accidents. In this study, web crawling and big data analysis techniques were used to exhaustively survey news articles on water supply leakage accidents in local governments over a past period, and the number of accidents derived was compared with the officially certified water supply statistics. Extracting independent leakage-accident articles requires removing duplicate articles, defining leakage-related keywords, and removing related articles outside the water supply domain; these steps were implemented in R. In addition, natural language processing of the news articles was used to extract not only the number of leakage accidents but also the accident date, location, cause, extent of damage, and the size of the affected pipe, and a way to extract and link more value than the water supply statistics provide was presented. Applying the proposed methodology to metropolitan city A in Korea, the number of leakage accidents derived from the news analysis was similar in scale to the number of leakage occurrences reported in the water supply statistics. The proposed methodology is expected to be highly useful in the future because it can extract additional information.
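The filtering steps described in the abstract (removing duplicate articles, applying leakage-related keywords, dropping articles outside the water supply domain) were implemented in R by the authors; the Python sketch below only illustrates the same idea under assumed keyword lists.

```python
# Illustrative Python sketch of the article-filtering steps (the study itself used R):
# deduplicate crawled news articles and keep only water-supply leakage articles.
LEAK_KEYWORDS = ["누수", "상수도", "수도관 파열"]   # assumed leakage-related keywords
EXCLUDE_KEYWORDS = ["재정 누수", "세금 누수"]       # assumed non-waterworks uses of the keyword to drop

def filter_leak_articles(articles):
    """articles: list of dicts with 'title' and 'body' strings."""
    seen_titles, kept = set(), []
    for art in articles:
        title = art["title"].strip()
        if title in seen_titles:                      # remove duplicate articles
            continue
        seen_titles.add(title)
        text = title + " " + art["body"]
        if any(k in text for k in EXCLUDE_KEYWORDS):  # drop articles outside the water supply domain
            continue
        if any(k in text for k in LEAK_KEYWORDS):     # keep leakage-related articles
            kept.append(art)
    return kept
```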


Web crawling process of each social network service for recognizing water quality accidents in the water supply networks (물공급네트워크 수질사고인지를 위한 소셜네트워크 서비스 별 웹크롤링 방법론 개발)

  • Yoo, Do Guen;Hong, Seunghyeok;Moon, Gihoon
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.398-398 / 2022
  • Recently, regional water quality problems in the tap water supply process, such as discolored water and the occurrence of larvae, have caused direct and indirect harm to the public. When a water quality problem occurs, opinions about the damage posted on social network services (SNS) spread rapidly in space and time, ultimately increasing negative perceptions of the entire water supply process and lowering trust in it. Efforts to minimize damage by applying various methodologies that quickly recognize the occurrence of water quality accidents in water supply systems are therefore essential. In general, water quality accidents can be detected from changes in time-series data obtained from real-time sensors for various parameters, but efficient application of such methods requires the prior deployment of advanced metering infrastructure. In this study, taking advantage of Korea's well-developed ICT environment, a web crawling methodology for each SNS is proposed for recognizing water quality accidents in water supply networks, and the results of its application are analyzed. Before implementing the methodology, whether programmatic web crawling is possible and over what period information can be obtained were checked for each SNS (Twitter, Instagram, blogs, Naver Cafe, etc.), and a web crawling procedure was developed focusing on Naver Cafe and Twitter, which showed the greatest influence and the largest number of related posts during past water quality accidents. For Naver Cafe, cafes with many participating residents in the target water service area are listed, web crawling is performed using combinations of the local government name and key keywords (tap water, larvae, discolored water), and a procedure was established for analyzing the number and meaning of related posts in real time. Following the developed SNS-specific web crawling methodology, two or more local governments in which water quality accidents had occurred in the past were analyzed, and differences among the SNS results were identified and presented. It is expected that applying the proposed method will allow additional analysis of how water quality accident information propagates and spreads in space and time.
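The crawling queries described in the abstract are combinations of a local government name with the key water-quality keywords; a minimal sketch of that combination step is shown below. The government names are examples only, and submitting the queries to the Naver Cafe or Twitter search interfaces is left out.

```python
# Minimal sketch of the query-combination step: pair each target local government name
# with the key water-quality keywords (tap water, larvae, discolored water).
from itertools import product

GOVERNMENTS = ["인천광역시", "제주특별자치도"]   # example target local governments
KEYWORDS = ["수돗물", "유충", "적수"]            # tap water, larvae, discolored water

def build_queries(governments, keywords):
    return [f"{g} {k}" for g, k in product(governments, keywords)]

for query in build_queries(GOVERNMENTS, KEYWORDS):
    print(query)   # each query would be sent to the cafe/Twitter search and post counts tracked per time window
```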


The Need for Paradigm Shift in Semantic Similarity and Semantic Relatedness : From Cognitive Semantics Perspective (의미간의 유사도 연구의 패러다임 변화의 필요성-인지 의미론적 관점에서의 고찰)

  • Choi, Youngseok;Park, Jinsoo
    • Journal of Intelligence and Information Systems / v.19 no.1 / pp.111-123 / 2013
  • The semantic similarity/relatedness measure between two concepts plays an important role in research on system integration and database integration. Moreover, current research on keyword recommendation or tag clustering strongly depends on this kind of semantic measure. For this reason, many researchers in various fields, including computer science and computational linguistics, have tried to improve methods for calculating semantic similarity/relatedness. The study of similarity between concepts is meant to discover how a computational process can model the way a human determines the relationship between two concepts. Most research on calculating semantic similarity uses ready-made reference knowledge, such as a semantic network or a dictionary, to measure concept similarity. The topological method calculates relatedness or similarity between concepts based on various forms of a semantic network, including a hierarchical taxonomy. This approach assumes that the semantic network reflects human knowledge well. The nodes in a network represent concepts, and ways to measure the conceptual similarity between two nodes are also regarded as ways to determine the conceptual similarity of two words (i.e., two nodes in a network). Topological methods can be categorized as node-based or edge-based, also called the information content approach and the conceptual distance approach, respectively. The node-based approach calculates similarity between concepts based on how much information the two concepts share in terms of a semantic network or taxonomy, while the edge-based approach estimates the distance between the nodes that correspond to the concepts being compared. Both approaches assume that the semantic network is static; that is, the topological approach does not consider changes in the semantic relations between concepts in the semantic network. However, as information and communication technologies advance knowledge sharing among people, the semantic relations between concepts in a semantic network may change. To explain this change in semantic relations, we adopt cognitive semantics. The basic assumption of cognitive semantics is that humans judge semantic relations based on their cognition and understanding of concepts. This cognition and understanding is called 'world knowledge.' World knowledge can be categorized as personal knowledge and cultural knowledge. Personal knowledge is knowledge gained from personal experience, and everyone can have different personal knowledge of the same concept. Cultural knowledge is the knowledge shared by people who live in the same culture or use the same language; people in the same culture have a common understanding of specific concepts. Cultural knowledge can be the starting point of a discussion about the change of semantic relations. If the culture shared by people changes for some reason, their cultural knowledge may also change. Today's society and culture are changing at a fast pace, and the change of cultural knowledge is not a negligible issue in research on semantic relationships between concepts. In this paper, we propose future directions for research on semantic similarity. In other words, we discuss how research on semantic similarity can reflect the change of semantic relations caused by the change of cultural knowledge. We suggest three directions for future research on semantic similarity.
First, research should include versioning and update methodologies for semantic networks. Second, a dynamically generated semantic network can be used for calculating semantic similarity between concepts; if researchers can develop methods to extract a semantic network from a given knowledge base in real time, this approach can solve many problems related to changing semantic relations. Third, statistical approaches based on corpus analysis can be an alternative to methods that use a semantic network. We believe these proposed research directions can be milestones for research on semantic relations.
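The edge-based (conceptual distance) approach described above can be illustrated on a toy taxonomy: similarity falls as the shortest path between two concept nodes grows. The mini-taxonomy and the 1/(1+distance) scoring below are assumptions for illustration, not any particular published measure.

```python
# Toy illustration of the edge-based (conceptual distance) approach: similarity
# decreases with the shortest-path length between two concepts in a taxonomy.
import networkx as nx

# Hypothetical mini-taxonomy (is-a edges)
taxonomy = nx.Graph()
taxonomy.add_edges_from([
    ("entity", "animal"), ("entity", "vehicle"),
    ("animal", "dog"), ("animal", "cat"),
    ("vehicle", "car"), ("vehicle", "bicycle"),
])

def path_similarity(a: str, b: str) -> float:
    """1 / (1 + shortest path length); equals 1.0 for identical concepts."""
    distance = nx.shortest_path_length(taxonomy, a, b)
    return 1.0 / (1.0 + distance)

print(path_similarity("dog", "cat"))   # 0.33... (both under 'animal')
print(path_similarity("dog", "car"))   # 0.2    (related only via 'entity')
```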

A Proposal of Methods for Extracting Temporal Information of History-related Web Document based on Historical Objects Using Machine Learning Techniques (역사객체 기반의 기계학습 기법을 활용한 웹 문서의 시간정보 추출 방안 제안)

  • Lee, Jun;KWON, YongJin
    • Journal of Internet Computing and Services / v.16 no.4 / pp.39-50 / 2015
  • In information retrieval through a search engine, some users want to retrieve documents that correspond to a situation in a specific time period. For example, a user who wants a document describing the situation before the Japanese invasions of Korea may issue the query 'Japanese invasions of Korea'. The search engine then returns all documents about the Japanese invasions of Korea regardless of time period, which forces the user to do additional work. In addition, a large fraction of historical documents have a content time period that differs from the date the document was created. If the time period described in the content can be extracted, it can support effective retrieval and various applications. Accordingly, this paper presents research on extracting the time period of historical documents of the Joseon era, using historical literature about that era, in order to deduce the time period corresponding to the document content. We define historical objects based on historical literature collected from the web and confirm the feasibility of extracting the time period of web documents with machine learning techniques. In addition to the machine learning techniques, we propose and apply similarity filtering based on comparisons between historical objects. Finally, we evaluate the accuracy of the resulting temporal indexing and its improvement.
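The abstract's approach of predicting a document's time period from the historical objects it mentions can be sketched as a simple supervised text classifier. The objects, period labels, and classifier below are hypothetical stand-ins, not the paper's data or model.

```python
# Minimal sketch (hypothetical objects/labels): classify a document's Joseon-era period
# from the historical objects (people, events, institutions) mentioned in it.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Training documents represented by the historical objects found in them
train_docs = [
    "세종 훈민정음 집현전",        # objects typical of the early Joseon period
    "이순신 임진왜란 거북선",      # objects typical of the Japanese-invasions period
    "흥선대원군 척화비 강화도",    # objects typical of the late Joseon period
]
train_labels = ["early", "invasion", "late"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_docs, train_labels)

print(model.predict(["임진왜란 당시 이순신의 해전 기록"]))   # expected: ['invasion']
```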

A Study on the Changes in Perspectives on Unwed Mothers in S.Korea and the Direction of Government Polices: 1995~2020 Social Media Big Data Analysis (한국미혼모에 대한 관점 변화와 정부정책의 방향: 1995년~2020년 소셜미디어 빅데이터 분석)

  • Seo, Donghee;Jun, Boksun
    • Journal of the Korea Convergence Society / v.12 no.12 / pp.305-313 / 2021
  • This study collected and analyzed big data from 1995 to 2020, focusing on the keywords "unwed mother", "single mother", and "single mom", in order to present appropriate directions for government support policy according to changes in perspectives on unwed mothers. The big data collection platform Textom was used to collect data from the portal search sites Naver and Daum and to refine the data. The final refined data were subjected to the word frequency analysis, TF-IDF analysis, and N-gram analysis provided by Textom. In addition, network analysis and CONCOR analysis were conducted with the UCINET6 program. As a result of the study, similar words appeared in the word frequency analysis and the TF-IDF analysis, but they differed by year. In the N-gram analysis, there were similarities in which words appeared, but many differences in the frequency and form of the word sequences. The CONCOR analysis found that different clusters were formed by year. This study confirms the change in perspectives on unwed mothers through big data analysis and suggests the need for policies that give unwed mothers diverse options as independent women and that embrace pregnancy, childbirth, and parenting without discrimination within new family forms.
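The word-frequency and TF-IDF step named in the abstract can be illustrated in a few lines; the study itself used Textom and UCINET6, and the documents and years below are hypothetical.

```python
# Illustrative TF-IDF sketch of the keyword analysis by year (hypothetical documents;
# the study used Textom for frequency/TF-IDF/N-gram and UCINET6 for CONCOR).
from sklearn.feature_extraction.text import TfidfVectorizer

docs_by_year = {
    1995: "미혼모 시설 보호 입양",
    2010: "미혼모 양육 지원 정책",
    2020: "미혼모 출산 양육 자립 지원",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(docs_by_year.values()))
terms = vectorizer.get_feature_names_out()

for year, row in zip(docs_by_year, matrix.toarray()):
    top = sorted(zip(terms, row), key=lambda x: x[1], reverse=True)[:3]
    print(year, [term for term, score in top if score > 0])
```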

A Study on Differences of Contents and Tones of Arguments among Newspapers Using Text Mining Analysis (텍스트 마이닝을 활용한 신문사에 따른 내용 및 논조 차이점 분석)

  • Kam, Miah;Song, Min
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.53-77 / 2012
  • This study analyzes the differences in contents and tones of argument among three major Korean newspapers: the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo. It is commonly accepted that newspapers in Korea explicitly deliver their own tone of argument when they address sensitive issues and topics. It could be problematic if readers read the news without being aware of the tone of argument, because the contents and the tone of argument can easily affect readers. Thus it is desirable to have a tool that can inform readers of the tone of argument a newspaper takes. This study presents the results of clustering and classification techniques as part of a text mining analysis. We focus on six main subjects in the newspapers, Culture, Politics, International, Editorial-opinion, Eco-business, and National issues, and attempt to identify differences and similarities among the newspapers. The basic unit of the text mining analysis is a paragraph of a news article. This study uses a keyword-network analysis tool and visualizes relationships among keywords to make the differences easier to see. Newspaper articles were gathered from KINDS, the Korean Integrated News Database System, which preserves news articles of the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo and is open to the public. About 3,030 articles from 2008 to 2012 were used. The International, National issues, and Politics sections were gathered with specific issues: the International section was collected with the keyword 'Nuclear weapon of North Korea', the National issues section with the keyword '4-major-river', and the Politics section with the keyword 'Tonghap-Jinbo Dang'. All articles from April 2012 to May 2012 in the Eco-business, Culture, and Editorial-opinion sections were also collected. All of the collected data were edited into paragraphs. We removed stop-words using the Lucene Korean Module. We calculated keyword co-occurrence counts from the paired co-occurrence list of keywords in a paragraph and built a co-occurrence matrix from the list. Once the co-occurrence matrix was built, we used the cosine coefficient matrix as input for PFNet (Pathfinder Network). In order to analyze the three newspapers and find the significant keywords in each paper, we analyzed the list of the 10 most frequent keywords and the keyword networks of the 20 most frequent keywords, to closely examine the relationships and show a detailed network map among keywords. We used the NodeXL software to visualize the PFNet. After drawing all the networks, we compared the results with the classification results. Classification was first performed to identify how the tone of argument of a newspaper differs from the others. Then, to analyze tones of argument, all paragraphs were divided into two types of tone, positive and negative. To identify and classify the tones of all the paragraphs and articles we had collected, a supervised learning technique was used: the Naïve Bayesian classifier provided in the MALLET package was used to classify all paragraphs in the articles. After classification, precision, recall, and F-value were used to evaluate the results.
Based on the results of this study, three subjects, Culture, Eco-business, and Politics, showed some differences in contents and tones of argument among the three newspapers. In addition, for the National issues section, the tones of argument on the 4-major-rivers project differed from each other. It seems that the three newspapers each have their own specific tone of argument in those sections. The keyword networks also showed different shapes for the same period and the same section, meaning that the frequently appearing keywords differ and the contents are composed of different keywords. The positive-negative classification showed that it is possible to classify a newspaper's tone of argument relative to the others. These results indicate that the approach in this study is promising as a new tool to identify the different tones of argument of newspapers.
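The co-occurrence and cosine-matrix step described above (the input the study fed to PFNet) can be sketched briefly. The keyword lists per paragraph are hypothetical, and the PFNet pruning and NodeXL visualization are not reproduced.

```python
# Minimal sketch of the keyword co-occurrence / cosine-coefficient step (the study then
# fed this matrix to PFNet and visualized it with NodeXL; that part is omitted here).
from itertools import combinations
from collections import Counter
import numpy as np

paragraphs = [
    ["북한", "핵무기", "제재"],
    ["북한", "핵무기", "협상"],
    ["4대강", "사업", "예산"],
]   # hypothetical keyword lists, one per paragraph

# Pairwise co-occurrence counts within each paragraph
pair_counts = Counter()
for keywords in paragraphs:
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

vocab = sorted({w for keywords in paragraphs for w in keywords})
index = {w: i for i, w in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))
for (a, b), count in pair_counts.items():
    cooc[index[a], index[b]] = cooc[index[b], index[a]] = count

# Cosine coefficient matrix over the co-occurrence rows (input for PFNet in the study)
norms = np.linalg.norm(cooc, axis=1, keepdims=True)
norms[norms == 0] = 1.0
cosine_matrix = (cooc / norms) @ (cooc / norms).T
print(np.round(cosine_matrix, 2))
```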