• Title/Summary/Keyword: issue extraction (이슈 추출)

Search results: 282 (10 shown)

A Study on the Deduction of Social Issues Applying Word Embedding: With an Emphasis on News Articles related to the Disabled (단어 임베딩(Word Embedding) 기법을 적용한 키워드 중심의 사회적 이슈 도출 연구: 장애인 관련 뉴스 기사를 중심으로)

  • Choi, Garam; Choi, Sung-Pil
    • Journal of the Korean Society for Information Management / v.35 no.1 / pp.231-250 / 2018
  • In this paper, we propose a new methodology for extracting and formalizing subjective topics at a specific point in time, using a set of keywords extracted automatically from online news articles. To do this, we first extracted keywords by applying TF-IDF, selected through a series of comparative experiments on various statistical weighting schemes that measure the importance of individual words in a large set of texts. To calculate the semantic relations between the extracted keywords effectively, a set of word embedding vectors was constructed using about 1,000,000 separately collected news articles. The extracted keywords were quantified as numerical vectors and clustered with the K-means algorithm. A qualitative in-depth analysis of the resulting keyword clusters showed that most of the clusters were judged to be appropriate topics, with sufficient semantic concentration for labels to be assigned easily.
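The paper's own implementation is not reproduced here, but the pipeline the abstract describes (TF-IDF keyword selection, word-embedding training, K-means clustering of keyword vectors) can be sketched roughly as follows. The toy corpus, the top-N cutoff, and the library choices (scikit-learn, gensim) are illustrative assumptions, not details taken from the study.

    # Sketch of the described pipeline: TF-IDF keywords -> word vectors -> K-means clusters.
    # 'articles' is a placeholder for the tokenized news corpus used in the paper.
    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    articles = [["장애인", "고용", "정책"], ["장애인", "이동권", "시위"]]  # toy tokenized corpus

    # 1. Score words with TF-IDF and keep the top-ranked ones as candidate keywords.
    tfidf = TfidfVectorizer(analyzer=lambda doc: doc)      # documents are already tokenized
    weights = np.asarray(tfidf.fit_transform(articles).sum(axis=0)).ravel()
    vocab = tfidf.get_feature_names_out()
    keywords = vocab[np.argsort(weights)[::-1][:1000]]     # hypothetical top-N cutoff

    # 2. Train word embeddings on the (separately collected) article corpus.
    w2v = Word2Vec(sentences=articles, vector_size=100, window=5, min_count=1)

    # 3. Represent each keyword as its embedding vector and cluster with K-means.
    kept = [w for w in keywords if w in w2v.wv]
    vectors = np.array([w2v.wv[w] for w in kept])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
    for word, label in zip(kept, labels):
        print(label, word)                                 # each cluster approximates one issue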

CUTIG: An Automated C Unit Test Data Generator Using Static Analysis (CUTIG: 정적 분석을 이용한 C언어 단위 테스트 데이타 추출 자동화 도구)

  • Kim, Taek-Su; Park, Bok-Nam; Lee, Chun-Woo; Kim, Ki-Moon; Seo, Yun-Ju; Wu, Chi-Su
    • Journal of KIISE: Software and Applications / v.36 no.1 / pp.10-20 / 2009
  • Because unit testing must be performed repeatedly and continuously, it is a high-cost software development activity. Although there are many studies on unit test automation, comparatively few noteworthy studies address automated test case generation. In this paper, we discuss automated test data generation from source code and present algorithms for each stage. We also discuss some issues in test data generation and introduce an automated test data generation tool, CUTIG. Because CUTIG generates test data from source code rather than from requirement specifications, software developers can generate test data even when specifications are insufficient or inconsistent with the actual implementation. Moreover, we expect the tool to help developers reduce the cost of test data preparation.
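CUTIG's stage-by-stage algorithms are described in the paper itself; the sketch below only illustrates the general idea of deriving test inputs from source code rather than from specifications. It scans a hypothetical C prototype with a naive regular expression and enumerates boundary values for integer parameters, both of which are simplifications rather than CUTIG's actual approach.

    # Sketch: derive candidate test inputs from a C function signature (not CUTIG itself).
    import itertools
    import re

    C_PROTOTYPE = "int clamp(int value, int low, int high);"   # hypothetical unit under test

    # Boundary-oriented candidate values for 32-bit int parameters.
    INT_CANDIDATES = [-2147483648, -1, 0, 1, 2147483647]

    def extract_int_params(prototype):
        """Return the names of 'int' parameters found in a simple C prototype."""
        params = re.search(r"\((.*)\)", prototype).group(1)
        return [p.split()[-1] for p in params.split(",") if p.strip().startswith("int ")]

    def generate_test_data(prototype):
        """Yield one dictionary per test case, mapping parameter names to boundary values."""
        names = extract_int_params(prototype)
        for combo in itertools.product(INT_CANDIDATES, repeat=len(names)):
            yield dict(zip(names, combo))

    for case in itertools.islice(generate_test_data(C_PROTOTYPE), 5):
        print(case)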

Comparison of responses to issues in SNS and Traditional Media using Text Mining -Focusing on the Termination of Korea-Japan General Security of Military Information Agreement(GSOMIA)- (텍스트 마이닝을 이용한 SNS와 언론의 이슈에 대한 반응 비교 -"한일군사정보보호협정(GSOMIA) 종료"를 중심으로-)

  • Lee, Su Ryeon; Choi, Eun Jung
    • Journal of Digital Convergence / v.18 no.2 / pp.277-284 / 2020
  • Text mining is a representative big data analysis method that extracts meaningful information from unstructured, large-scale text data. Social media such as Twitter generates hundreds of thousands of posts per second and acts as a one-person medium that expresses public opinions and ideas instantly and directly. Traditional media, in turn, deliver information, criticize society, and form public opinion. In this study, we compare the responses of SNS and the press to the termination of the Korea-Japan GSOMIA (General Security of Military Information Agreement), one of the major domestic issues in the second half of 2019. Data collected from 201,728 tweets and 20,698 newspaper articles were analyzed using sentiment analysis, association keyword analysis, and cluster analysis. As a result, SNS tended to respond positively to this issue, while the media tended to react negatively. In the association keyword analysis, SNS showed positive views on domestic issues with keywords such as "destruction, decision, we," while the media showed negative views on external issues with keywords such as "disappointment, regret, concern." Beyond information delivery, SNS is faster and more influential than the media in forming social trends and opinions, and can thus complement the media's role of reflecting public perception.
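The abstract does not name the sentiment analysis tooling, so the sketch below uses a minimal lexicon-based comparison of two corpora instead; the English stand-in polarity words and the sample texts are hypothetical, not the study's data or dictionaries.

    # Sketch: compare the overall tone of two corpora with a tiny polarity lexicon.
    from collections import Counter

    POSITIVE = {"good", "right", "support", "welcome"}       # stand-ins for Korean polarity terms
    NEGATIVE = {"regret", "concern", "disappointment"}

    def polarity_counts(docs):
        """Count positive and negative lexicon hits across a list of documents."""
        counts = Counter()
        for doc in docs:
            for token in doc.lower().split():
                if token in POSITIVE:
                    counts["positive"] += 1
                elif token in NEGATIVE:
                    counts["negative"] += 1
        return counts

    tweets = ["good decision we support it", "the right call"]            # placeholder SNS data
    articles = ["deep concern and regret", "disappointment over the end"] # placeholder news data

    print("SNS:", polarity_counts(tweets))
    print("Media:", polarity_counts(articles))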

Exploring Issues Related to the Metaverse from the Educational Perspective Using Text Mining Techniques - Focusing on News Big Data (텍스트마이닝 기법을 활용한 교육관점에서의 메타버스 관련 이슈 탐색 - 뉴스 빅데이터를 중심으로)

  • Park, Ju-Yeon; Jeong, Do-Heon
    • Journal of Industrial Convergence / v.20 no.6 / pp.27-35 / 2022
  • The purpose of this study is to analyze metaverse-related issues in news big data from an educational perspective, explore their characteristics, and draw implications for the educational applicability of the metaverse and for future education. To this end, 41,366 metaverse-related items retrieved from portal sites were collected; weights for all extracted keywords were calculated and ranked using TF-IDF, a representative term-weighting model, and word cloud visualization analysis was then performed. In addition, major topics were analyzed using topic modeling (LDA), a probability-based text mining technique. As a result of the study, topics such as the platform industry, future talent, and the expansion of technology were derived as core metaverse issues from an educational perspective. A secondary analysis under the three key themes of technology, jobs, and education further showed that the metaverse raises issues of education platform innovation, future job innovation, and future competency innovation in future education. This study is meaningful in that it analyzes a vast amount of news big data in stages to identify issues from an educational perspective and provide implications for future education.
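A rough sketch of the two analysis steps named above, TF-IDF keyword ranking (the input to a word cloud) and LDA topic modeling, on a placeholder corpus; scikit-learn, the topic count, and the tiny document set are assumptions made only for illustration.

    # Sketch: rank keywords by TF-IDF weight, then extract topics with LDA.
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

    docs = ["metaverse education platform", "metaverse future talent job",
            "virtual classroom platform innovation", "future competency education"]

    # 1. TF-IDF weighting and ranking (what a word cloud would visualize).
    tfidf = TfidfVectorizer()
    weights = np.asarray(tfidf.fit_transform(docs).sum(axis=0)).ravel()
    ranked = sorted(zip(tfidf.get_feature_names_out(), weights), key=lambda x: -x[1])
    print(ranked[:5])

    # 2. LDA topic modeling on raw term counts.
    counter = CountVectorizer()
    dtm = counter.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
    terms = counter.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        print("topic", i, [terms[j] for j in topic.argsort()[::-1][:3]])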

Protecting Fingerprint Data for Remote Applications (원격응용에 적합한 지문 정보 보호)

  • Moon, Dae-Sung; Jung, Seung-Hwan; Kim, Tae-Hae; Lee, Han-Sung; Yang, Jong-Won; Choi, Eun-Wha; Seo, Chang-Ho; Chung, Yong-Wha
    • Journal of the Korea Institute of Information Security & Cryptology / v.16 no.6 / pp.63-71 / 2006
  • In this paper, we propose a secure solution for user authentication using fingerprint verification on the sensor-client-server model, even when the client is not necessarily trusted by the sensor holder or the server. To protect against possible attacks launched at the untrusted client, our solution makes the fingerprint sensor validate the result computed by the client for feature extraction. However, the validation must be simple enough for the resource-constrained fingerprint sensor to perform in real time. To solve this problem, we separate feature extraction into binarization and minutiae extraction and assign the time-consuming binarization to the client. After receiving the binarization result from the client, the sensor conducts a simple validation to check it, performs minutiae extraction on the received binary image, and then sends the extracted minutiae to the server. Based on the experimental results, the proposed solution can perform fingerprint verification securely and in real time on the sensor-client-server model with the aid of an untrusted client.
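The validation scheme itself is the paper's contribution and is not reproduced here; the sketch below only illustrates the division of labor described above, with the untrusted client performing the costly binarization and the sensor spot-checking a random sample of pixels before running minutiae extraction. The threshold, sample size, and check are hypothetical simplifications.

    # Sketch: client-side binarization plus a lightweight sensor-side spot check.
    import numpy as np

    THRESHOLD = 128          # hypothetical global binarization threshold

    def client_binarize(image):
        """Untrusted client: the expensive step, thresholding the gray-scale image."""
        return (image >= THRESHOLD).astype(np.uint8)

    def sensor_validate(image, binary, samples=64):
        """Resource-constrained sensor: re-binarize only a few random pixels and compare."""
        rng = np.random.default_rng(0)
        rows = rng.integers(0, image.shape[0], samples)
        cols = rng.integers(0, image.shape[1], samples)
        expected = (image[rows, cols] >= THRESHOLD).astype(np.uint8)
        return bool(np.array_equal(expected, binary[rows, cols]))

    image = np.random.default_rng(1).integers(0, 256, (256, 256), dtype=np.uint8)
    binary = client_binarize(image)
    print(sensor_validate(image, binary))   # an honest client passes the spot check
    binary[0, 0] ^= 1                       # tampering is caught only if a sampled pixel changed
    print(sensor_validate(image, binary))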

Analysis of Major COVID-19 Issues Using Unstructured Big Data (비정형 빅데이터를 이용한 COVID-19 주요 이슈 분석)

  • Kim, Jinsol; Shin, Donghoon; Kim, Heewoong
    • Knowledge Management Research / v.22 no.2 / pp.145-165 / 2021
  • In late December 2019, the COVID-19 pandemic began to spread, putting the entire world in a state of panic. To overcome the crisis and minimize subsequent damage, the government and its affiliated institutions must maximize the effects of pre-existing policy support and introduce a holistic response plan that reflects the changing situation, which is why it is crucial to analyze social topics and people's interests. This study investigates people's major thoughts, attitudes, and topics surrounding the COVID-19 pandemic through social media big data. To collect public opinion, the study segmented the time period according to government countermeasures. All data were collected from NAVER blogs between 31 December 2019 and 12 December 2020. The research applied TF-IDF keyword extraction and LDA topic modeling as text mining techniques. As a result, eight major COVID-19-related issues were derived, and based on these keywords, policy strategies were presented. The significance of this study is that it provides baseline data for Korean government authorities in designing countermeasures that satisfy people's needs during the COVID-19 pandemic.
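A minimal sketch of the period-segmentation step described above, assuming hypothetical cut-off dates standing in for government countermeasure announcements and a handful of placeholder blog posts; the TF-IDF and LDA stages themselves are analogous to the sketches given for the related entries above and are omitted here.

    # Sketch: segment documents by (hypothetical) policy milestone dates,
    # then summarize each period by its most frequent keywords.
    from collections import Counter, defaultdict
    from datetime import date

    MILESTONES = [date(2020, 2, 23), date(2020, 8, 23)]   # hypothetical cut-off dates

    posts = [  # placeholder (date, tokenized text) pairs standing in for NAVER blog posts
        (date(2020, 1, 15), ["mask", "outbreak"]),
        (date(2020, 5, 2), ["distancing", "mask"]),
        (date(2020, 11, 1), ["vaccine", "distancing"]),
    ]

    def period_of(d):
        """Index of the period a date falls into, based on the milestone cut-offs."""
        return sum(d >= m for m in MILESTONES)

    by_period = defaultdict(Counter)
    for d, tokens in posts:
        by_period[period_of(d)].update(tokens)

    for p in sorted(by_period):
        print("period", p, by_period[p].most_common(3))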

A Study on Social Issues and Consumption Behavior Using Big Data (빅데이터를 활용한 사회적 이슈와 소비행동 연구)

  • Baek, Seung-Heon; Kim, Gi-Tak
    • Journal of Korea Entertainment Industry Association / v.13 no.8 / pp.377-389 / 2019
  • This study conducted social network big data analysis to investigate consumers' perceptions of Japanese sporting goods in relation to the boycott of Japan and to extract related problems and variables. The social network big data analysis covered two areas, "Japanese boycott" and "Japanese sporting goods," and months of data were collected and examined. The research procedure consisted of identifying the issues of the period; setting keywords using social network analysis; clustering through CONCOR analysis with the TEXTOM and Ucinet 6 programs; selecting variables through expert meetings; preparing and administering a questionnaire; verifying the validity and reliability of the questionnaire; and testing hypotheses with a structural equation model. Based on the social network big data results, four variables were extracted: boycott-related characteristics, nationality, attitude, and consumption behavior. A total of 30 items and 292 questionnaires were used for the final hypothesis testing. The analysis showed, first, that boycott-related characteristics had a positive relationship with nationality. Specifically, all of the boycott-related characteristics (perceived necessity of the boycott, sense of the boycott, and perceived boycott benefits) were positively related to nationality. In addition, nationality was found to have a positive relationship with consumption behavior.
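The study performs the CONCOR clustering with TEXTOM and Ucinet 6; as a rough illustration of the underlying idea (iterating correlations over a keyword co-occurrence matrix until a two-block pattern emerges), a small NumPy version might look like the following. The keyword list and co-occurrence counts are entirely hypothetical.

    # Sketch: CONCOR-style blocking of a keyword co-occurrence matrix.
    import numpy as np

    keywords = ["boycott", "japan", "travel", "beer", "sport", "uniqlo"]
    cooc = np.array([        # hypothetical symmetric co-occurrence counts
        [0, 5, 1, 1, 2, 4],
        [5, 0, 2, 2, 1, 5],
        [1, 2, 0, 4, 0, 1],
        [1, 2, 4, 0, 0, 1],
        [2, 1, 0, 0, 0, 1],
        [4, 5, 1, 1, 1, 0],
    ], dtype=float)

    def concor_split(matrix, max_iter=25):
        """Iterate column correlations until entries settle near +/-1, then split by sign."""
        m = np.corrcoef(matrix, rowvar=False)
        for _ in range(max_iter):
            if np.allclose(np.abs(m), 1.0):
                break                      # converged to a blocked +/-1 pattern
            m = np.corrcoef(m, rowvar=False)
        return m[0] > 0                    # True/False membership in the first block

    for name, in_block_a in zip(keywords, concor_split(cooc)):
        print(name, "block A" if in_block_a else "block B")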

A Generation and Matching Method of Normal-Transient Dictionary for Realtime Topic Detection (실시간 이슈 탐지를 위한 일반-급상승 단어사전 생성 및 매칭 기법)

  • Choi, Bongjun; Lee, Hanjoo; Yong, Wooseok; Lee, Wonsuk
    • The Journal of Korean Institute of Next Generation Computing / v.13 no.5 / pp.7-18 / 2017
  • Recently, the number of SNS users has increased rapidly due to the development of the smart device industry, and the amount of generated data is growing exponentially. On Twitter, the text data generated by users is a key research subject because it reflects events, accidents, product reputations, and brand images. Twitter has become a channel through which users receive and exchange information, and an important characteristic of Twitter is its real-time nature. Among the various events, incidents such as earthquakes, floods, and suicides should be analyzed rapidly so that responses can be applied immediately. Analyzing such events requires collecting the tweets related to them, but it is difficult to find all related tweets using ordinary keywords alone. To solve the problem mentioned above, this paper proposes a generation and matching method for normal-transient dictionaries for real-time topic detection. A normal dictionary consists of general keywords related to an event (e.g., for a suicide event: death, die, hang oneself, etc.), whereas a transient dictionary consists of transient keywords related to the event (e.g., names and information of celebrities, or information on current social issues). Experimental results show that the matching method using the two dictionaries finds more tweets related to an event than a simple keyword search.
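A minimal sketch of the two-dictionary idea, assuming a hand-written normal dictionary and a transient dictionary rebuilt from recently bursting terms; all keywords, thresholds, and tweets below are illustrative placeholders rather than the paper's dictionaries or data.

    # Sketch: match tweets against a static "normal" dictionary and a "transient"
    # dictionary rebuilt from terms that suddenly burst in recent tweets.
    from collections import Counter

    NORMAL = {"suicide": {"suicide", "death", "die", "hang"}}   # stable event vocabulary

    def build_transient(recent_tweets, baseline, burst_ratio=3.0):
        """Collect terms whose recent frequency jumps well above their baseline frequency."""
        recent = Counter(t for tw in recent_tweets for t in tw.lower().split())
        return {t for t, c in recent.items() if c >= burst_ratio * (baseline[t] + 1)}

    def matches(tweet, event, transient):
        """A tweet is relevant if it hits either the normal or the transient dictionary."""
        tokens = set(tweet.lower().split())
        return bool(tokens & NORMAL[event]) or bool(tokens & transient)

    baseline = Counter()   # would normally hold frequencies from an older time window
    recent = ["celebrity x found dead", "shock over celebrity x",
              "celebrity x fans mourn", "celebrity x celebrity x"]
    transient = build_transient(recent, baseline)

    # Matched via the transient dictionary even though no normal keyword appears.
    print(matches("so sad about celebrity x", "suicide", transient))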

A Study on the Document Topic Extraction System Based on Big Data (빅데이터 기반 문서 토픽 추출 시스템 연구)

  • Hwang, Seung-Yeon; An, Yoon-Bin; Shin, Dong-Jin; Oh, Jae-Kon; Moon, Jin Yong; Kim, Jeong-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.5 / pp.207-214 / 2020
  • Nowadays, the use of smartphones and various electronic devices is increasing, the Internet and SNS are widely used, and we live in a flood of information. The amount of information has grown exponentially, making it difficult to review it all; more and more people want to see only the key keywords of a document, and research on extracting the topics at the core of this information is becoming increasingly important. It is also important to extract topics and compare them with the past to infer current trends. Topic modeling techniques can be used to extract topics from a large volume of documents, and the extracted topics can be used in various fields such as trend prediction and data analysis. In this paper, in order to analyze rapidly changing trends and keep pace with the times, we examine the topics of papers published in the computing field over the three years 2016, 2017, and 2018 using the LDA algorithm, one of the probabilistic topic modeling techniques, and then analyze the trends and flow of research.
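A rough sketch of fitting one LDA model per publication year to trace topic trends, using gensim on a placeholder tokenized corpus; the year grouping, the topic count, and the library choice are assumptions made for illustration, not the paper's configuration.

    # Sketch: fit one LDA model per publication year and list the top terms of each topic.
    from gensim import corpora, models

    papers_by_year = {   # placeholder tokenized papers standing in for the 2016-2018 corpus
        2016: [["deep", "learning", "image"], ["cloud", "computing", "service"]],
        2017: [["deep", "learning", "speech"], ["blockchain", "security"]],
        2018: [["blockchain", "smart", "contract"], ["deep", "learning", "medical"]],
    }

    for year, texts in papers_by_year.items():
        dictionary = corpora.Dictionary(texts)
        corpus = [dictionary.doc2bow(t) for t in texts]
        lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                              random_state=0, passes=10)
        print(year, lda.print_topics(num_words=3))   # compare top terms across the years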

Analyzing the Trend of False·Exaggerated Advertisement Keywords Using Text-mining Methodology (1990-2019) (텍스트마이닝 기법을 활용한 허위·과장광고 관련 기사의 트렌드 분석(1990-2019))

  • Kim, Do-Hee; Kim, Min-Jeong
    • The Journal of the Korea Contents Association / v.21 no.4 / pp.38-49 / 2021
  • This study analyzed the trend of the terms 'false and exaggerated advertisement' in 5,141 newspaper articles from 1990 to 2019 using text mining methodology. First, we identified the most frequent keywords related to false and exaggerated advertisements through a frequency analysis of all the newspaper articles and examined the context among the extracted keywords. Next, to examine how false and exaggerated advertisements have changed, frequency analysis was performed on the articles grouped into 10-year periods, and the keywords that became issues were identified by comparing the number of academic papers on the top keywords of each year. Finally, we identified trends in false and exaggerated advertisements based on the detailed keywords within each topic using topic modeling. The results confirmed that topics that became issues at specific times appeared among the frequent keywords, and that keyword trends by period changed in connection with social and environmental factors. This study is meaningful in helping consumers spend wisely by building background knowledge about unfair advertising. Furthermore, the extracted core keywords are expected to convey the true purpose of advertising and provide implications for companies and employees engaged in such misconduct.
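A minimal sketch of the period-wise frequency comparison described above, flagging keywords whose relative frequency rises or falls between two decades; the token lists are placeholders, not keywords from the 5,141 analyzed articles.

    # Sketch: flag keywords whose relative frequency shifts between decades.
    from collections import Counter

    decade_1990s = ["health", "food", "health", "import", "ad"]            # placeholder tokens
    decade_2010s = ["online", "health", "cosmetics", "online", "ad", "online"]

    def relative_freq(tokens):
        """Share of each keyword within one period's token list."""
        counts = Counter(tokens)
        total = sum(counts.values())
        return {t: c / total for t, c in counts.items()}

    old, new = relative_freq(decade_1990s), relative_freq(decade_2010s)
    for term in sorted(set(old) | set(new)):
        change = new.get(term, 0.0) - old.get(term, 0.0)
        print(f"{term:10s} {change:+.2f}")    # positive = rising issue, negative = fading issue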