• Title/Summary/Keyword: SNS-Big Data


An Analysis of the Current State of Marine Sports through the Analysis of Social Big Data: Use of the Social MatrixTM Method (소셜 빅 데이터분석을 통한 해양스포츠 현황 분석 : 소셜매트릭스TM 기법의 활용)

  • PARK, Tae-Seung
    • Journal of Fisheries and Marine Sciences Education
    • /
    • v.29 no.2
    • /
    • pp.593-606
    • /
    • 2017
  • This study aims to provide preliminary data for suggesting directions for new ventures by understanding consumer awareness through analysis of SNS social big data on marine sports. Windsurfing, yachting, jet skiing, scuba diving, and sea fishing were selected as research subjects, and the following results were obtained by analyzing mention frequency, associated words, and related measures on SNS (Twitter, blogs) over a one-month period from January 22 to February 22, 2017, using the Social MatrixTM service of Daumsoft Co., Ltd. First, the most frequently mentioned marine sport was yachting, with 3,273 mentions on Twitter and 2,199 on blogs. Second, the words most frequently associated with marine sports were attributes describing their unique characteristics, with 6,261 cases in total.
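The mention-frequency and associated-word analysis described above can be sketched in plain Python. The sample posts, keyword list, and stopwords below are hypothetical stand-ins for the Twitter/blog data and the Social MatrixTM output; this is a minimal illustration of the counting, not the service's actual method.

```python
from collections import Counter

# Hypothetical sample of SNS posts (stand-ins for Twitter/blog data).
posts = [
    "yacht trip this weekend, perfect weather",
    "learning scuba diving before the yacht charter",
    "sea fishing with friends, then scuba diving",
    "jet ski rental prices went up again",
]

SPORTS = ["windsurfing", "yacht", "jet ski", "scuba diving", "sea fishing"]

def mention_counts(posts, keywords):
    """Count how many posts mention each keyword."""
    return Counter(k for p in posts for k in keywords if k in p)

def associated_words(posts, keyword,
                     stopwords=frozenset({"the", "a", "with", "this"})):
    """Count words co-occurring with `keyword` across posts."""
    assoc = Counter()
    for p in posts:
        if keyword in p:
            assoc.update(w.strip(",") for w in p.split()
                         if w not in stopwords and w not in keyword.split())
    return assoc

counts = mention_counts(posts, SPORTS)        # most-mentioned sports
assoc = associated_words(posts, "yacht")      # words linked to "yacht"
```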

Designing issue prediction system using web media data (웹 미디어 데이터를 이용한 이슈 예측 시스템 설계)

  • Yun, Hyun-Noh;Moon, Nammee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.05a
    • /
    • pp.501-503
    • /
    • 2019
  • With advances in IT, data from various web media are growing exponentially, and as unstructured big data they are highly useful. Among them, in internet news and SNS, various issues influence one another over time as they emerge, combine, split, and disappear. In this paper, unstructured data generated on the internet are collected, and text mining is used to extract each document's key issue keywords, category, and date. The extracted data are divided into fixed periods, and correlations between issues are analyzed through issue mapping. Furthermore, we propose the design of a system that predicts future issues through deep learning with LSTM or GRU.
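The issue-mapping step described above (period-bucketed keyword series and their correlations) can be sketched as follows. The keyword lists are hypothetical stand-ins for text-mining output, and the LSTM/GRU prediction stage is omitted; only the correlation analysis between issue time series is shown.

```python
# Hypothetical issue-keyword extraction results, one list per period
# (stand-ins for keywords text-mined from news/SNS articles).
periods = [
    ["election", "economy", "election", "housing"],
    ["economy", "housing", "housing", "vaccine"],
    ["vaccine", "housing", "economy", "vaccine"],
]

def issue_series(periods):
    """Turn per-period keyword lists into per-issue frequency series."""
    issues = sorted({k for p in periods for k in p})
    return {k: [p.count(k) for p in periods] for k in issues}

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

series = issue_series(periods)
# Issue mapping: correlation between each pair of issues over time.
corr = {(a, b): pearson(series[a], series[b])
        for a in series for b in series if a < b}
```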

Evaluation of Major Projects of the 5th Basic Forest Plan Utilizing Big Data Analysis (빅데이터 분석을 활용한 제5차 산림기본계획 주요 사업에 대한 평가)

  • Byun, Seung-Yeon;Koo, Ja-Choon;Seok, Hyun-Deok
    • Journal of Korean Society of Forest Science
    • /
    • v.106 no.3
    • /
    • pp.340-352
    • /
    • 2017
  • In this study, we examined the gap between the supply of and demand for forest policy by year through big data analysis, for a macroscopic evaluation of the 5th Basic Forest Plan. For the policy demand side, we collected unstructured data based on keywords related to the plan's projects as mentioned in news, SNS, and other sources in the relevant year; for the policy supply side, we collected documents published by the Korea Forest Service. Based on the collected data, we specified the network structure using social network analysis and identified the gap between supply and demand for the Korea Forest Service's policies by comparing the demand-side and supply-side networks. The big data analysis indicated that the supply-side network is less radial than the demand-side network, implying that various keywords other than 'forest' can considerably influence the network. We also compared supply and demand trends for 33 keywords related to 27 major projects. The results showed that 7 keywords exhibit increasing demand but decreasing supply: sustainable, forest management, forest biota, forest protection, forest disease and pest, urban forest, and North Korea. Since a supply-demand gap was confirmed for these 7 keywords, forest policy regarding them needs to be strengthened in the 6th Basic Plan.
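The comparison of demand-side and supply-side keyword networks can be sketched with a simple degree-centrality computation. The edges below are hypothetical keyword co-occurrence pairs, not the paper's data, and the paper's full social network analysis is considerably more elaborate than this sketch.

```python
from collections import defaultdict

# Hypothetical co-occurrence edges (keyword pairs) for the two networks.
demand_edges = [("forest", "urban forest"), ("forest", "pest"),
                ("urban forest", "recreation"), ("forest", "North Korea")]
supply_edges = [("forest", "urban forest"), ("forest", "timber")]

def degree_centrality(edges):
    """Normalized degree centrality: degree / (n - 1)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

demand = degree_centrality(demand_edges)
supply = degree_centrality(supply_edges)

# Keywords present on the demand side but absent from the supply side
# hint at a supply-demand gap, as with the paper's 7 keywords.
gap = sorted(set(demand) - set(supply))
```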

An Analysis of Social Perception on Forest Using News Big Data (뉴스 빅데이터를 활용한 산림에 대한 사회적 인식 변화 분석)

  • Jang, Youn-Sun;Lee, Ju-Eun;Na, So-Yeon;Lee, Jeong-Hee;Seo, Jeong-Weon
    • Journal of Korean Society of Forest Science
    • /
    • v.110 no.3
    • /
    • pp.462-477
    • /
    • 2021
  • The purpose of this study was to understand, from a macro perspective, changes in domestic forest policy and in social perceptions of forests using big data analysis of news articles and editorials. A total of 13,570 'forest'-related items were collected from metropolitan and economic newspapers from 1946 to 2017 and examined using keyword and CONCOR (Convergence of iterated Correlations) analysis. First, we found that the percentage of articles and editorials using the keyword 'forest' increased overall. Second, news data on 'forest' were concentrated in the 'social' sector during the first period (1946-1966); forest-related issues then expanded into various fields from the second (1967-1972) to the fifth (1988-1997) period, shifted toward the 'culture' sector in the sixth (1998-2007), and toward 'politics' after the seventh (2008-2017). Third, we found that changes in the policy paradigm over time significantly changed social awareness: in the first and second periods people experienced livelihood issues rather than forest greening or forest protection policy, and awareness then expanded from planned and scientific afforestation (third period) to environmental protection (fourth) and ecological perspectives (sixth to seventh). The key outcome of our analysis was leveraging news big data that reflects both forest policies and public perception. To further derive future social issues, more in-depth analysis of public discourse and perception will be possible using textual big data from various social network services (SNS), such as blogs and YouTube.

A Study on the Big Data Analysis System for Searching of the Flooded Road Areas (도로 침수영역의 탐색을 위한 빅데이터 분석 시스템 연구)

  • Song, Youngmi;Kim, Chang Soo
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.8
    • /
    • pp.925-934
    • /
    • 2015
  • The frequency of natural disasters is gradually increasing because of global warming, and the risks of flooding due to typhoons and torrential rain have also increased. Among these hazards, roads are flooded by sudden torrential rain, causing vehicle damage and personal injury. Because road flooding can occur within moments, rapid data collection and a quick-response system need to be studied. Our research proposes a big data analysis system, together with a variety of information collection methods, for searching road areas flooded by torrential rain. The data related to flooded roads include SNS data, meteorological data, road link data, and so on, and the big data analysis system is implemented as a distributed processing system based on the Hadoop platform.
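The Hadoop-style processing described above can be illustrated with plain map and reduce functions. The posts, road-link IDs, and flood terms are hypothetical stand-ins for the paper's SNS and road link data; a real deployment would run equivalent logic as Hadoop MapReduce jobs rather than in-process Python.

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical SNS posts tagged with road-link IDs (stand-ins for the
# paper's SNS data joined with road link data).
posts = [
    ("link-101", "the road is flooded near the bridge"),
    ("link-101", "heavy rain again, traffic is slow"),
    ("link-205", "flooded lane, water up to the curb"),
    ("link-101", "flooded underpass, avoid this route"),
]

FLOOD_TERMS = ("flooded", "inundated", "submerged")

def map_phase(record):
    """Emit (road_link, 1) for each post mentioning a flood term."""
    link, text = record
    return [(link, 1)] if any(t in text for t in FLOOD_TERMS) else []

def reduce_phase(pairs):
    """Sum counts per road link, as a Hadoop reducer would."""
    pairs = sorted(pairs, key=itemgetter(0))
    return {k: sum(v for _, v in grp)
            for k, grp in groupby(pairs, key=itemgetter(0))}

mapped = [kv for rec in posts for kv in map_phase(rec)]
flood_counts = reduce_phase(mapped)   # candidate flooded road areas
```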

Designing IoT Business Models Using Deep-Learning AI (딥러닝 인공지능을 활용한 사물인터넷 비즈니스 모델 설계)

  • Lee, Yong-keu;Park, Dae-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.10a
    • /
    • pp.351-354
    • /
    • 2016
  • The Go match between AlphaGo and Lee Sedol attracted global interest, and AlphaGo won. The core function of AlphaGo is its deep-learning system, with which the computer studies by itself. Since then, the usefulness of deep-learning artificial intelligence is considered to have been demonstrated. Recently, the government passed the IoT Act and is developing business models to promote IoT. This study analyzes the IoT business environment using deep-learning AI and constructs specialized business models.


Efficient Keyword Extraction from Social Big Data Based on Cohesion Scoring

  • Kim, Hyeon Gyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.10
    • /
    • pp.87-94
    • /
    • 2020
  • Social reviews such as SNS feeds and blog articles are widely used to extract keywords reflecting users' opinions and complaints, and they often include proper nouns or newly coined words reflecting recent trends. In general, such words are not included in a dictionary, so conventional morphological analyzers may fail to detect and extract them properly. In addition, because of their high processing time, these analyzers are inadequate for providing analysis results in a timely manner. This paper presents a method for efficient keyword extraction from social reviews based on the notion of cohesion scoring. Cohesion scores can be calculated from word frequencies alone, so keyword extraction can be performed without a dictionary. On the other hand, their accuracy can degrade when the input data has poor spacing. To address this, an algorithm is presented that improves the existing cohesion scoring mechanism using a word-tree structure. Our experimental results show that the proposed method took only 0.008 seconds to extract keywords from 1,000 reviews, with a 15.5% error ratio, which is better than existing morphological analyzers.

Emotion Analysis Using a Bidirectional LSTM for Word Sense Disambiguation (양방향 LSTM을 적용한 단어의미 중의성 해소 감정분석)

  • Ki, Ho-Yeon;Shin, Kyung-shik
    • The Journal of Bigdata
    • /
    • v.5 no.1
    • /
    • pp.197-208
    • /
    • 2020
  • Lexical ambiguity means that a word can be interpreted with two or more meanings, as with homonyms and polysemes, and word sense ambiguity is common in words expressing emotion. Because such words project human psychology, they convey specific and rich contexts, resulting in lexical ambiguity. In this study, we propose an emotion classification model that disambiguates word sense using a bidirectional LSTM. It is based on the assumption that if the information of the surrounding context is fully reflected, lexical ambiguity can be resolved and the emotion a sentence expresses can be identified as a single one. The bidirectional LSTM is an algorithm frequently used in natural language processing research requiring contextual information, and it is used here to learn context. GloVe embeddings are used as the embedding layer of the model, and its performance was verified against models using LSTM and RNN algorithms. Such a framework could contribute to various fields, including marketing, by connecting the emotions of SNS users to their desire for consumption.
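The key idea, each word seeing both its left and right context, can be sketched with a deliberately simplified bidirectional pass. For brevity this uses a plain (Elman) recurrent unit with scalar inputs instead of an LSTM over GloVe vectors; the inputs and weights are hypothetical, and no training is shown.

```python
import math

def rnn_pass(xs, w_h, w_x, bias):
    """One directional pass of a simple (Elman) RNN over a sequence of
    scalar inputs; the hidden state is a single tanh unit for brevity."""
    h, states = 0.0, []
    for x in xs:
        h = math.tanh(w_h * h + w_x * x + bias)
        states.append(h)
    return states

def bidirectional(xs, fwd_params, bwd_params):
    """Concatenate forward and backward states per position, so each
    word 'sees' both its left and right context."""
    fwd = rnn_pass(xs, *fwd_params)
    bwd = rnn_pass(xs[::-1], *bwd_params)[::-1]
    return list(zip(fwd, bwd))

# Hypothetical scalar 'embeddings' for a 5-word sentence (stand-ins
# for GloVe vectors) and hand-picked weights (w_h, w_x, bias).
sentence = [0.5, -1.0, 0.25, 1.0, -0.5]
H = bidirectional(sentence, (0.5, 1.0, 0.0), (0.5, 1.0, 0.0))
# H[t] encodes the left and right context of word t; a classifier over
# H would produce the emotion label, resolving ambiguous words.
```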

Extracting of Interest Issues Related to Patient Medical Services for Small and Medium Hospital by SNS Big Data Text Mining and Social Networking (중소병원 환자의료서비스에 관한 관심 이슈 도출을 위한 SNS 빅 데이터 텍스트 마이닝과 사회적 연결망 적용)

  • Hwang, Sang Won
    • Korea Journal of Hospital Management
    • /
    • v.23 no.4
    • /
    • pp.26-39
    • /
    • 2018
  • Purposes: The purpose of this study is to identify issues of interest in the patient medical services of small and medium hospitals using big data. Methods: Text mining and social network analysis were applied to SNS big data; key keywords were extracted and their correlations analyzed using the Textom, Ucinet6, and NetDraw programs. Findings: The frequency, degree centrality, and closeness centrality analyses showed interest in explanations and evaluations of the technology, information, security, safety, cost, and other problems of small and medium hospitals, in responses to infections, and in payment and settlement. Care-related keywords such as pediatrics, dentistry, obstetrics and gynecology, dementia, nursing, the elderly, and rehabilitation were also extracted. Practical Implications: Future studies will be more useful if they analyze customer needs for medical services separately, since these needs may differ between small and medium hospitals in the metropolitan area and those in the provinces.

Development of water elevation prediction algorithm using unstructured data : Application to Cheongdam Bridge, Korea (비정형화 데이터를 활용한 수위예측 알고리즘 개발 : 청담대교 적용)

  • Lee, Seung Yeon;Yoo, Hyung Ju;Lee, Seung Oh
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2019.05a
    • /
    • pp.121-121
    • /
    • 2019
  • As localized heavy rainfall, in which rain falls intensively over a specific area, occurs more frequently, the flood risk to social infrastructure along rivers is increasing. Flood risk is usually judged from water-level information, and water levels are mostly predicted with numerical models. In this study, water levels were predicted using a big-data-based RNN (Recurrent Neural Network) algorithm. The study area was the Han River, which is strongly affected by tides. A survey of actual flood damage from 2008 to 2018 (10 years) showed high rates of flood damage at Jamsu Bridge, Hangang Bridge, and Cheongdam Bridge, and in unstructured data such as SNS (Social Network Services) Cheongdam Bridge was tagged most often, so it was selected as the study site. The water-level prediction algorithm was implemented with the TensorFlow library in Python. Both structured and unstructured data were used: the structured data were water-level and rainfall records for the last 10 years (2008-2018) provided by the Han River Flood Control Office and the Korea Meteorological Administration, and the unstructured data were civilian reports collected from SNS, which were combined with the structured data to build the full dataset. Sensitivity analysis was used to set the optimal number of hidden layers (5), learning rate (0.02), and number of iterations (100), and 24 hours of data were used to predict the water level 3 hours ahead. Data from 2008 to 2017 were used for training, and the water levels of 2018 were predicted and evaluated. Compared with the observed water levels of 2018, more than 90% of the predictions were within a 10% error, and peak water levels were also predicted fairly accurately. If various factors beyond water level and rainfall are considered in the future, faster and more accurate predictions are expected.
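The windowing scheme described above (24 hours of readings predicting the level 3 hours ahead) can be sketched as follows. The synthetic series stands in for the gauge data, and the RNN itself (built in TensorFlow in the paper) is omitted; only the dataset construction and the 10%-error criterion are shown.

```python
def make_windows(series, window=24, horizon=3):
    """Build (input, target) pairs: `window` hourly readings predict
    the level `horizon` hours after the window ends."""
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        x = series[i:i + window]
        y = series[i + window + horizon - 1]
        pairs.append((x, y))
    return pairs

def within_10pct(pred, obs):
    """Evaluation criterion used in the paper: prediction error
    within 10% of the observed water level."""
    return abs(pred - obs) / obs <= 0.10

# Synthetic hourly water levels (stand-ins for Han River Flood Control
# Office gauge data near Cheongdam Bridge).
levels = [2.0 + 0.1 * t for t in range(30)]
pairs = make_windows(levels)   # each pair feeds one RNN training step
```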
