• Title/Summary/Keyword: Language network analysis


Analysis of media trends related to spent nuclear fuel treatment technology using text mining techniques (텍스트마이닝 기법을 활용한 사용후핵연료 건식처리기술 관련 언론 동향 분석)

  • Jeong, Ji-Song;Kim, Ho-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.33-54
    • /
    • 2021
  • With the Fourth Industrial Revolution and the arrival of the New Normal era brought on by COVID-19, non-contact technologies such as artificial intelligence and big data research have grown in importance. Convergence research is being carried out in earnest to keep up with these trends, yet few studies in the nuclear field have applied artificial intelligence and big data technologies such as natural language processing and text mining. This study was conducted to confirm that data science analysis techniques are applicable to nuclear research. Identifying trends in public perception of spent nuclear fuel is also critical in its own right, because it helps determine directions for nuclear industry policy and allows the industry to respond in advance to policy changes. For these reasons, this study analyzed media trends concerning pyroprocessing, a dry treatment technology for spent nuclear fuel. We objectively analyzed changes in media perception of the technology by applying text mining techniques. Text data from Naver web news articles containing the keywords "Pyroprocessing" and "Sodium Cooled Reactor" were collected with Python code to identify changes in perception over time. The analysis period ran from 2007, when the first article was published, to 2020, and the text data were examined in detail and in multiple layers through word clouds based on frequency analysis, TF-IDF, and degree centrality calculation. The keyword frequency analysis showed that media perception of spent nuclear fuel dry treatment technology shifted in the mid-2010s, influenced by the 2016 Gyeongju earthquake and the new government's energy transition policy introduced in 2017. Trend analysis was therefore conducted around this period, and word frequencies, TF-IDF values, degree centrality values, and semantic network graphs were derived. The results show that before the mid-2010s, media perception of spent nuclear fuel dry treatment technology was diplomatic and positive. Over time, however, the frequency of keywords such as "safety", "reexamination", "disposal", and "disassembly" increased, indicating that the sustainability of the technology was being seriously questioned. Social awareness changed as the technology, once framed as a political and diplomatic asset, became an ambiguous issue under changing domestic policy. This means that domestic policy changes, such as nuclear power policy, have a greater impact on media perception than the issue of spent nuclear fuel processing technology itself, presumably because nuclear policy is a more widely discussed and publicly accessible topic than spent nuclear fuel. To improve social awareness of spent nuclear fuel processing technology, it would therefore be necessary to provide sufficient information about it, and linking it to nuclear policy issues could also help. The study further highlights the importance of social science research on nuclear power: applying social science perspectives broadly to nuclear engineering, while taking national policy changes into account, supports the sustainability of the nuclear industry.
However, this study has the limitation that it applied big data analysis methods only to a narrow research area, namely pyroprocessing as a spent nuclear fuel dry treatment technology. Furthermore, no clear basis for the cause of the change in social perception was established, and only news articles were analyzed to gauge social perception. If future work also considers reader comments and extends the media trend analysis to nuclear power as a whole, more reliable results can be expected and used efficiently in nuclear policy research. The academic significance of this study is that it confirmed the applicability of data science analysis techniques in the field of nuclear research. In addition, as current government energy policies such as nuclear power plant reductions prompt a re-evaluation of spent fuel treatment technology research, analysis of the field's key keywords can help orient future research. It is important to consider outside perspectives, not only the safety and engineering integrity of nuclear power, and to reconsider whether it is appropriate to discuss nuclear engineering technology only internally. If multidisciplinary research on nuclear power is carried out, reasonable alternatives can be prepared to sustain the nuclear industry.
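
As a rough illustration of the kind of keyword analysis described above, the sketch below computes per-period word frequencies and TF-IDF weights for pre-collected article texts. The tiny English corpus, the period split, and the whitespace tokenization are placeholders; the study's own Naver crawler and Korean-language preprocessing are not reproduced here.

```python
# Minimal sketch: per-period keyword frequency and TF-IDF scores for
# pre-collected news article texts. The example corpus and period split
# are hypothetical placeholders.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer

articles_by_period = {
    "2007-2015": ["pyroprocessing cooperation agreement joint research",
                  "sodium cooled reactor joint study agreement"],
    "2016-2020": ["pyroprocessing safety reexamination disposal review",
                  "spent fuel disposal safety reexamination"],
}

for period, docs in articles_by_period.items():
    # Simple frequency count (input for a word cloud)
    freq = Counter(token for doc in docs for token in doc.split())
    print(period, "top words:", freq.most_common(5))

    # TF-IDF weights for the same period
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(docs)
    mean_scores = tfidf.mean(axis=0).A1
    ranked = sorted(zip(vectorizer.get_feature_names_out(), mean_scores),
                    key=lambda x: x[1], reverse=True)
    print(period, "top TF-IDF terms:", ranked[:5])
```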

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. AI technologies have already shown abilities equal to or better than humans in many fields, including image and speech recognition. Because AI can be applied in a wide range of areas such as medicine, finance, manufacturing, services, and education, considerable effort has gone into identifying current technology trends and analyzing the directions in which they are developing. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been released to the public as open source projects, and the technologies and services built on them have increased rapidly; this is regarded as one of the main reasons for the fast development of AI technologies. The spread of the technology also owes much to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. This study therefore aimed to identify practical trends in AI technology development by analyzing open source software (OSS) projects associated with AI, which are developed through the online collaboration of many parties. We searched and collected a list of major AI-related projects created on GitHub from 2000 to July 2018, and examined the development trends of major technologies in detail by applying text mining to the topic information that characterizes the collected projects and their technical fields. The analysis showed that fewer than 100 such projects were started per year until 2013, rising to 229 projects in 2014 and 597 in 2015. The number of AI-related open source projects then increased sharply in 2016 (2,559 projects), and the number of projects initiated in 2017 reached 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555); 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, using the appearance frequency of topics to indicate the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying continuous OSS development in that area. Until 2015, the programming languages Python, C++, and Java were among the ten most frequent topics, but after 2016 all programming languages other than Python dropped out of the top ten. In their place, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, showed high appearance frequency, along with reinforcement learning algorithms and convolutional neural networks, which are used in various fields. Topic network analysis showed that the most important topics by degree centrality were similar to the most frequent topics. The main difference was that visualization and medical imaging topics appeared at the top of the degree centrality list even though they were not at the top of the appearance frequency list from 2009 to 2012, indicating that OSS was being developed to apply AI in the medical field.
In addition, although computer vision was in the top ten of the appearance frequency list from 2013 to 2015, it was not in the top ten by degree centrality; otherwise, the topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with only slight changes in the ranks of convolutional neural networks and reinforcement learning. The trend of technology development was then examined using topic appearance frequency together with degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. It is also noteworthy that although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both technologies have shown high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the topics mentioned above. Based on these results, it is possible to identify the fields in which AI technologies are being actively developed, and the findings can serve as a baseline dataset for more empirical analysis of future technology trends.
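
To illustrate the comparison of topic appearance frequency with degree centrality described above, here is a minimal sketch that builds a topic co-occurrence network from per-project topic lists with NetworkX. The topic lists are invented placeholders, not the study's GitHub data.

```python
# Minimal sketch: build a topic co-occurrence network from per-project
# topic lists and compare appearance frequency with degree centrality.
from collections import Counter
from itertools import combinations
import networkx as nx

project_topics = [
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "computer-vision", "python"],
    ["deep-learning", "reinforcement-learning", "keras"],
]

frequency = Counter(t for topics in project_topics for t in topics)

G = nx.Graph()
for topics in project_topics:
    # Topics appearing in the same project are linked
    G.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(G)
print("Top by frequency :", frequency.most_common(3))
print("Top by centrality:", sorted(centrality.items(),
                                    key=lambda x: x[1], reverse=True)[:3])
```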

GenAI(Generative Artificial Intelligence) Technology Trend Analysis Using Bigkinds: ChatGPT Emergence and Startup Impact Assessment (빅카인즈를 활용한 GenAI(생성형 인공지능) 기술 동향 분석: ChatGPT 등장과 스타트업 영향 평가)

  • Lee, Hyun Ju;Sung, Chang Soo;Jeon, Byung Hoon
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.4
    • /
    • pp.65-76
    • /
    • 2023
  • In the field of technology entrepreneurship and startups, the development of artificial intelligence (AI) has emerged as a key topic for business model innovation, and venture firms are making various AI-centered efforts to secure competitiveness (Kim & Geum, 2023). The purpose of this study is to analyze the relationship between the development of generative AI (GenAI) technology and the startup ecosystem by examining domestic news articles to identify trends in the technology startup field. Using BIG Kinds, this study examined changes in GenAI-related news coverage, major issues, and trends in Korean news articles from 1990 to August 10, 2023, focusing on the periods before and after the emergence of ChatGPT, and visualized the relationships through network analysis and keyword visualization. The results show that mentions of GenAI in news articles gradually increased from 2017 to 2023. In particular, OpenAI's ChatGPT service based on GPT-3.5 stood out as a major issue, signaling the popularization of language-model-based GenAI technologies such as OpenAI's DALL-E, Google's MusicLM, and VoyagerX's Vrew. This demonstrates the usefulness of GenAI in various fields, and since the launch of ChatGPT, Korean companies have actively developed Korean language models. Startups such as Ritten Technologies are also using GenAI to expand their scope in the technology startup field. This study confirms the connection between GenAI technology and startup entrepreneurship activities, suggesting that it can support the construction of innovative business strategies and continue to shape both the development of GenAI technology and the growth of the startup ecosystem. Further research is needed on international trends, the use of additional analysis methods, and the real-world applicability of GenAI; such efforts are expected to contribute to the development of GenAI technology and the growth of the startup ecosystem.
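
As a hedged illustration of the kind of yearly keyword tracking described above, the sketch below counts articles mentioning GenAI-related terms per year in a news export. The CSV layout (date, title, body columns) and the file name are assumptions; the actual BIG Kinds export format may differ.

```python
# Minimal sketch: count yearly article mentions of GenAI-related keywords
# in a news export. The CSV columns and file name are assumptions.
import csv
from collections import defaultdict

keywords = ["ChatGPT", "생성형 인공지능"]
counts = defaultdict(int)

with open("news_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        year = row["date"][:4]
        text = row["title"] + " " + row["body"]
        if any(k in text for k in keywords):
            counts[year] += 1

for year in sorted(counts):
    print(year, counts[year])
```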

A Study on the Digital Filter Design using Software for Analysis of Observation Data in Radio Astronomy (전파천문 관측데이터 분석을 위해 소프트웨어를 이용한 디지털필터 설계에 관한 연구)

  • Yeom, Jae-Hwan;Oh, Se-Jin;Roh, Duk-Gyoo;Oh, Chung-Sik;Jung, Dong-Kyu;Shin, Jae-Sik;Kim, Hyo-Ryoung;Hwang, Ju-Yeon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.16 no.4
    • /
    • pp.175-181
    • /
    • 2015
  • In this paper, we propose a method for designing a digital filter in software in order to analyze radio astronomy observation data. With the development of state-of-the-art computer systems, the analysis of radio astronomy observations has recently been shifting from hardware to software. Existing hardware systems cannot easily change their specifications because they are implemented to meet special requirements, and modifying them requires considerable cost and time. Software, in contrast, can be implemented at low cost if open source software is used and can be changed flexibly to satisfy the desired specification. However, analyzing massive data such as radio astronomy observations in software requires a high-performance computer system. This paper therefore proposes a software digital filter design method with the same performance as the hardware digital filter implemented in the observation system operated by the KVN (Korean VLBI Network). The proposed filter was implemented in standard C, and its effectiveness was investigated through simulation with GNU (GNU's Not Unix) Octave. In addition, for high-speed operation of the designed digital filter, the SSE (Streaming SIMD Extensions) library was adopted for parallel operation. When the proposed digital filter was applied to wide-band observation data in the KVN observation mode, the filtering result for the narrow-band observation showed no ripple inside the stop band, confirming the effectiveness of the proposed method.
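
The paper implements its filter in standard C and verifies it with GNU Octave; purely as an illustration of the narrow-band, ripple-free stop-band behaviour it reports, here is a comparable FIR design sketch using Python and SciPy instead (a different toolchain from the paper's). The sampling rate, cutoff frequency, and tap count are assumed values, not the KVN specifications.

```python
# Illustrative sketch (not the paper's C/Octave implementation):
# design a narrow-band FIR low-pass filter and check its stop-band level.
import numpy as np
from scipy import signal

fs = 1024e6           # assumed sampling rate (Hz); not the KVN's actual rate
cutoff = 16e6         # assumed pass-band edge (Hz)
taps = signal.firwin(numtaps=513, cutoff=cutoff, fs=fs, window="hamming")

# Frequency response and worst stop-band level well past the transition band
w, h = signal.freqz(taps, worN=4096, fs=fs)
stopband = 20 * np.log10(np.abs(h[w > 2 * cutoff]) + 1e-12)
print("max stop-band level: %.1f dB" % stopband.max())

# Filter a block of samples (random data as a stand-in for observation data)
x = np.random.randn(1 << 16)
y = signal.lfilter(taps, 1.0, x)
```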

Analysis on the Linkage between SDGs Framework and Forest Policy in Korea (국내 산림정책과 지속가능발전목표(SDGs)간의 연관성 분석)

  • Moon, Jooyeon;Kim, Nahui;Song, Cholho;Lee, Sle-Gee;Kim, Moonil;Lim, Chul-Hee;Cha, Sung-Eun;Kim, Gangsun;Lee, Woo-Kyun;Son, Yowhan;Young, Soogil;Jin, Seabom;Son, Young-Mo
    • Journal of Climate Change Research
    • /
    • v.8 no.4
    • /
    • pp.425-442
    • /
    • 2017
  • This study analysed the linkage between national forest policy in Korea, namely the 5th National Forest Master Plan, the 2016 Korea Forest Service Performance Management Plan, and the 3rd National Sustainable Development Plan, and the UN Sustainable Development Goals (SDGs). The 7 strategies of the 5th National Forest Master Plan were related to 11 SDG Goals, the 5 strategies of the 2016 Korea Forest Service Performance Management Plan were associated with 7 SDG areas, and the 4 strategies of the 3rd National Sustainable Development Plan were linked to 7 SDG Goals. Among the 87 national forest indicators compiled from the three forest-related policies of Korea, 45 were related to 18 SDG indicators, indicating that 52% of the national indicators of Korean forest policy reflect the language of the SDGs. From the SDGs perspective, however, only 18 out of 241 SDG indicators (7.8%) are related to national indicators. The findings imply that many national forest-related indicators do not cover the diverse dimensions of the SDGs, which offer potential areas for forests to contribute. Based on these findings, the following recommendations were suggested: 1) the terms used in forest policy should be aligned with SDG targets so that they can be embedded in national policies; 2) indicators should be further contextualized, along with their assessment system; and 3) the '5 Processes of the sub-national climate change adaptation plan' and the core concept of REDD+ MRV should be leveraged, as they could provide the fundamental background for implementing the SDGs framework in national forest policy.

Automatic Classification and Vocabulary Analysis of Political Bias in News Articles by Using Subword Tokenization (부분 단어 토큰화 기법을 이용한 뉴스 기사 정치적 편향성 자동 분류 및 어휘 분석)

  • Cho, Dan Bi;Lee, Hyun Young;Jung, Won Sup;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.1
    • /
    • pp.1-8
    • /
    • 2021
  • News articles in the political domain show polarized and biased characteristics, such as conservative or liberal leanings, which we refer to as political bias. We constructed a keyword-based dataset to classify the bias of news articles. Most embedding studies represent a sentence as a sequence of morphemes; in this work, we expect the number of unknown tokens to be reduced when sentences are composed of subwords segmented by a language model. We propose a document embedding model based on subword tokenization and apply it with SVM and feedforward neural network classifiers to identify political bias. Compared with document embedding based on morphological analysis, the subword-based document embedding model showed the highest accuracy at 78.22%, and it was confirmed that subword tokenization reduced the number of unknown tokens. Using the best-performing embedding model for the bias classification task, we extracted keywords associated with politicians, and the bias of the keywords was verified by their average similarity with the vectors of politicians from each political tendency.
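
As a rough stand-in for the subword-based document representation described above, the sketch below uses character n-gram features (not the paper's language-model-based subword tokenizer) with a linear SVM. The toy articles and bias labels are hypothetical.

```python
# Minimal sketch: character n-grams as a crude stand-in for subword
# tokenization, fed to a linear SVM for bias classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["정부 여당이 정책 성과를 강조했다", "야당이 정부 정책을 강하게 비판했다",
         "여당 지도부가 법안 처리를 추진했다", "야당이 장외 집회를 열었다"]
labels = ["conservative", "liberal", "conservative", "liberal"]  # toy labels

# Sub-word-like units: character 2- to 4-grams within word boundaries
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(texts, labels)
print(model.predict(["여당이 새 법안을 발표했다"]))
```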

Content Diversity Analysis of Elementary Science Authorized Textbooks according to the 2015 Revised Curriculum: Focusing on the "Weight of an Object" Unit (2015 개정 교육과정에 따른 초등 과학 검정 교과서 내용 다양성 분석 - '물체의 무게' 단원을 중심으로 -)

  • Shin, Jung-Yun;Park, Sang-Woo;Jeong, Hyeon-Ji;Hong, Mi-Na;Kim, Hyeon-Jae
    • Journal of Korean Elementary Science Education
    • /
    • v.41 no.2
    • /
    • pp.307-324
    • /
    • 2022
  • This study examined the content diversity of seven authorized science textbooks by comparing the characteristics of their science concept descriptions and the contents of their inquiry activities in the "weight of an object" unit. For each textbook, the flow and distinctive features of the concept description were analyzed, and the numbers of nodes and links and the most highly connected words were determined using language network analysis. For the inquiry activities described in each textbook, the inquiry subject, inquiry type, science process skills, and distinctive features were also investigated. The results showed that the authorized textbooks did not display as much diversity as might be expected, either in their methods of describing scientific concepts or in the composition of their inquiry activities. The learning elements, the inclusion of subconcepts, and the central words were similar across textbooks, and the inquiry activities were similar in content, inquiry type, and science process skills; in particular, the textbooks did not introduce any research topics or experimental methods absent from previous textbooks. Nevertheless, even though the authorized textbook system was developed from the same curriculum, some efforts were made to make use of its strengths. The sequence of subconcepts used to explain the core content differed across textbooks, so the explanation process could be divided into several types, and although the contents of the inquiry activities were the same, the materials used differed across textbooks in ways that improve on and overcome the difficulties of the existing experiments. These findings call for continued efforts to exploit the strengths of authorized textbooks.

An Analysis of IT Trends Using Tweet Data (트윗 데이터를 활용한 IT 트렌드 분석)

  • Yi, Jin Baek;Lee, Choong Kwon;Cha, Kyung Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.143-159
    • /
    • 2015
  • Predicting IT trends has long been an important subject for information systems research. IT trend prediction makes it possible to recognize emerging waves of innovation and to allocate budgets in preparation for rapidly changing technological trends. Toward the end of each year, various domestic and global organizations predict and announce IT trends for the following year. For example, Gartner predicts the top 10 IT trends for the next year, and these predictions shape the basic assumptions that IT and industry leaders and organizations make about technology and the future of IT, but the accuracy of such reports is difficult to verify. Social media data can be a useful tool for verifying this accuracy. As social media services have gained popularity, they are used in a variety of ways, from posting about daily life to keeping up with news and trends. In recent years, social media activity in Korea has reached unprecedented levels: hundreds of millions of users participate in online social networks and share their opinions and thoughts with colleagues and friends. Twitter, in particular, is currently the major microblog service; its 'tweets' allow users to report their current thoughts and actions, comment on news, and engage in discussion. We chose tweet data for the analysis of IT trends because it not only produces massive unstructured textual data in real time but also serves as an influential channel for opinion leadership on technology. Previous studies found that tweet data provide useful information and detect social trends effectively, and that Twitter can track issues faster than other media such as newspapers. This study therefore investigates how frequently the IT trends predicted for the following year by public organizations are mentioned on social network services such as Twitter. IT trend predictions for 2013, announced near the end of 2012 by two domestic organizations, the National IT Industry Promotion Agency (NIPA) and the National Information Society Agency (NIA), were used as the basis for this research. The present study analyzes Twitter data generated in Seoul, Korea, and compares it with the predictions of the two organizations. Twitter data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, to process unrefined forms of unstructured data. To address these challenges, we used SAS IRS (Information Retrieval Studio), developed by SAS, to capture trends by processing large streaming Twitter datasets in real time; the system offers a framework for crawling, normalizing, analyzing, indexing, and searching tweet data. As a result, we crawled the Twitter sphere in the Seoul area and obtained 21,589 tweets from 2013 to examine how frequently the IT trend topics announced by the two organizations were mentioned by people in Seoul. The results show that most IT trends predicted by NIPA and NIA were mentioned frequently on Twitter, except for topics such as 'new types of security threat', 'green IT', and 'next generation semiconductor'; these are not commonly used compound terms, so they may be mentioned on Twitter with different wording.
To determine whether the IT trend tweets from Korea are related to the following year's IT trends in the real world, we compared Twitter's trending topics with those in Nara Market, Korea's nationwide web-based e-Procurement system, which handles the entire procurement process of all public organizations in Korea. The correlation analysis shows that tweet frequencies on the IT trend topics predicted by NIPA and NIA are significantly correlated with the frequencies of those topics in project announcements on Nara Market in 2012 and 2013. The main contributions of our research are as follows: i) the IT topic predictions announced by NIPA and NIA can provide an effective guideline for IT professionals and researchers in Korea who are looking for verified IT topic trends for the following year, and ii) researchers can use Twitter to obtain useful ideas for detecting and predicting dynamic trends in technological and social issues.
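
A minimal sketch of the correlation step described above, assuming per-topic mention counts have already been aggregated; the topics and counts are hypothetical placeholders, not the NIPA/NIA or Nara Market figures.

```python
# Minimal sketch: correlate per-topic tweet mention counts with per-topic
# counts from procurement announcements. All numbers are placeholders.
from scipy.stats import pearsonr

topics = ["big data", "cloud", "IoT", "mobile security", "3D printing"]
tweet_mentions = [412, 380, 295, 150, 90]     # hypothetical 2013 tweet counts
procurement_hits = [58, 49, 35, 22, 9]        # hypothetical announcement counts

r, p = pearsonr(tweet_mentions, procurement_hits)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```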

Transfer and Validation of NIRS Calibration Models for Evaluating Forage Quality in Italian Ryegrass Silages (이탈리안 라이그라스 사일리지의 품질평가를 위한 근적외선분광 (NIRS) 검량식의 이설 및 검증)

  • Cho, Kyu Chae;Park, Hyung Soo;Lee, Sang Hoon;Choi, Jin Hyeok;Seo, Sung;Choi, Gi Jun
    • Journal of Animal Environmental Science
    • /
    • v.18 no.sup
    • /
    • pp.81-90
    • /
    • 2012
  • This study evaluated whether calibrations developed on a high-end, research-grade near-infrared spectrophotometer (NIRS) can be transferred to low-end, field-grade NIRS instruments for rapid on-site analysis of forage quality, using 241 Italian ryegrass silage samples collected nationwide over three years to assess the accuracy and precision between instruments. A database was first built on the research-grade instrument, a Unity Scientific Model 2500X (650 nm~2,500 nm); the spectra were then trimmed and fitted to the range of the field-grade Unity Scientific Model 1400 (1,400 nm~2,400 nm), and the calibration was built and transferred with a dedicated transfer algorithm. The differences between instruments were 0.000%~0.343%, and the chemical constituents (NDF, ADF, crude protein, and crude ash), fermentation parameters (moisture, pH, and lactic acid), and forage quality parameters (TDN, DMI, and RFV) could be analyzed rapidly, within 5 minutes on site, with results equivalent to laboratory data. Nevertheless, because the samples collected over the three years to build the calibration were organic materials that differ by region and by year, a population evaluation technique is needed and the calibration must be constantly updated and maintained. A strong control center, supported by a knowledgeable control laboratory, is required to manage database accumulation, reflect calibration updates, and disseminate them so that on-site forage analysis with NIRS can be sustained. Agricultural products such as forage change continuously, so these changes must be detected easily and the calibration updated routinely; otherwise the NIRS system will soon lose its value. Much NIRS research to date has been short-term rather than long-term, which has limited the practical use of NIRS, so the system should support simple, instant checking, with local language support, using the Global Distance (GD) and Neighbour Distance (ND) algorithms. Finally, the multiple field-grade instruments should produce the same results not only as the research-grade instrument but also as one another, which requires easy calibration transfer and maintenance between instruments via internet networking techniques.
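
The abstract does not name the regression method or the vendor's transfer algorithm, so the sketch below only illustrates a common chemometric pattern under those assumptions: a PLS calibration built on the master instrument followed by a slope/bias correction for the field instrument. All data are random placeholders, not NIRS spectra.

```python
# Illustrative sketch only: assumed PLS calibration plus slope/bias
# correction, not the study's actual transfer algorithm.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_master = rng.normal(size=(241, 700))              # placeholder master spectra
y_ref = rng.normal(loc=12.0, scale=2.0, size=241)   # placeholder lab values (e.g. CP %)

pls = PLSRegression(n_components=10).fit(X_master, y_ref)

# Predictions from the field instrument on a shared standard set
X_field_std = X_master[:30] + rng.normal(scale=0.01, size=(30, 700))
y_field_pred = pls.predict(X_field_std).ravel()

# Slope/bias correction so the field instrument matches the master
slope, bias = np.polyfit(y_field_pred, y_ref[:30], 1)
corrected = slope * y_field_pred + bias
print("mean absolute difference after correction: %.3f"
      % np.mean(np.abs(corrected - y_ref[:30])))
```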

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society; this is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data in terms of the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS Big Data, that information can be used as an important new source for creating value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need to analyze SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) it provides the topic keyword set corresponding to the daily ranking; (2) it visualizes the daily time-series graph of a topic over the course of a month; (3) it shows the importance of a topic through a treemap based on a score system and frequency; and (4) it visualizes the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop word removal and noun extraction, to process unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to process a large amount of real-time data rapidly, such as the Hadoop distributed system or NoSQL databases, which are an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from a single node to thousands of machines, and we use MongoDB, an open source, document-oriented NoSQL database that provides high performance, high availability, and automatic scaling. Unlike relational databases, MongoDB has no schemas or tables; its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is especially attractive to the Big Data community because it helps analysts examine data easily and clearly. TITS therefore uses the d3.js library as its visualization tool. This library is designed for creating Data Driven Documents that bind the document object model (DOM) to arbitrary data, making interaction with the data easy and useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, with its pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is built with these libraries and can detect issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS).
Based on this, we can confirm the utility of storytelling and time series analysis. Third, we develop a web-based system, and make the system available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
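
The abstract does not state which topic model TITS uses, so the following sketch simply illustrates topic keyword extraction with standard LDA in scikit-learn; the toy tweets are placeholders.

```python
# Minimal sketch: extract topic keyword sets from tweets with LDA.
# LDA here is an assumption; the example tweets are placeholders.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = ["지하철 파업 출근길 혼잡", "야구 개막전 관중 매진",
          "지하철 노조 파업 철회", "프로야구 개막 경기 중계"]

vec = CountVectorizer()
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[::-1][:3]
    print(f"topic {k}:", [terms[i] for i in top])
```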