• Title/Summary/Keyword: 소셜네트워크 시각화 (social network visualization)

Diagnosis Model for Closed Organizations based on Social Network Analysis (소셜 네트워크 분석 기반 통제 조직 진단 모델)

  • Park, Dongwook;Lee, Sanghoon
    • KIISE Transactions on Computing Practices / v.21 no.6 / pp.393-402 / 2015
  • Human resources are one of the most essential elements of an organization. In particular, the more closed a group is, the higher the value each member has. Previous studies have focused on personal attributes of individuals, such as medical history, and have depended upon self-diagnosis to manage such groups. However, this approach has weak points, such as the time-consuming process required, the potential for concealment, and the non-disclosure of participants' mental states, because it relies on self-diagnosis through extensive questionnaires or interviews. It also suffers from the problem that relations among people are difficult to express. In this paper, we propose a multi-faceted diagnosis model based on social network analysis that overcomes these weaknesses. Our approach has the following steps: First, we reveal the states of those in a social network through 9 questions. Next, we diagnose the social network to find specific individuals, such as victims or leaders, using the proposed algorithm. Experimental results demonstrate that our model achieved a precision of 0.62 and identified specific people who are not revealed by existing methods.
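The paper does not spell out its diagnosis algorithm in the abstract, so the sketch below is only a rough, hedged illustration of how an SNA-based flagging step of this general kind could look: a directed tie graph is built from questionnaire answers and members with strongly positive or net-negative incoming ties are flagged. All names, weights, and thresholds are invented.

```python
# Hypothetical sketch, NOT the paper's algorithm: flag candidate leaders and
# victims from a directed tie graph built from questionnaire answers.
# Names, tie weights, and thresholds are invented for illustration.
import networkx as nx

# (respondent, named peer, tie weight derived from a 9-item questionnaire)
responses = [
    ("A", "B", 3), ("C", "B", 2), ("D", "B", 3),   # B receives many positive ties
    ("A", "E", -2), ("C", "E", -1),                 # E receives negative ties
]

G = nx.DiGraph()
for src, dst, w in responses:
    G.add_edge(src, dst, weight=w)

# Sum of incoming tie weights per member
incoming = {n: sum(d["weight"] for _, _, d in G.in_edges(n, data=True)) for n in G.nodes}

leaders = [n for n, s in incoming.items() if s >= 5]   # strongly, positively chosen
victims = [n for n, s in incoming.items() if s < 0]    # net-negative ties
print("candidate leaders:", leaders)   # ['B']
print("candidate victims:", victims)   # ['E']
```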

Analyzing Disaster Response Terminologies by Text Mining and Social Network Analysis (텍스트 마이닝과 소셜 네트워크 분석을 이용한 재난대응 용어분석)

  • Kang, Seong Kyung;Yu, Hwan;Lee, Young Jai
    • Information Systems Review / v.18 no.1 / pp.141-155 / 2016
  • This study identified disaster-related terminologies by proximity and frequency using social network analysis (SNA) and text mining, and then expressed the outcome as a mind map. The term-document matrix from text mining was utilized for the terminology proximity analysis, and SNA closeness centrality was calculated to express the relationships among the terminologies organically through a mind map. By analyzing terminology proximity and selecting disaster response-related terminologies, this study identified the field closest to disaster response among all disaster response fields and the core terms in each disaster response field. This disaster response terminology analysis could be utilized in future core-term-based terminology standardization, the accumulation of disaster-related knowledge and research, and the composition of various response scenarios, among others.
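For readers unfamiliar with the mechanics, here is a minimal sketch of the general technique (not the paper's actual pipeline or corpus): a term-document matrix is turned into a term co-occurrence network and closeness centrality is computed per term. The toy documents are assumptions.

```python
# Minimal sketch: term-document matrix -> term co-occurrence network ->
# closeness centrality. Toy documents, not the paper's disaster corpus.
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "evacuation shelter flood warning",
    "flood warning response manual",
    "shelter response volunteer",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)            # term-document matrix (docs x terms)
terms = vectorizer.get_feature_names_out()

cooc = (X.T @ X).toarray()                    # term x term co-occurrence counts
np.fill_diagonal(cooc, 0)                     # ignore self co-occurrence

G = nx.from_numpy_array(cooc)
G = nx.relabel_nodes(G, dict(enumerate(terms)))

closeness = nx.closeness_centrality(G)        # closeness centrality per term
for term, score in sorted(closeness.items(), key=lambda kv: -kv[1]):
    print(f"{term}: {score:.3f}")
```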

A MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun;Jeong, Kyo Sung;Kim, Soo Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.39-58 / 2014
  • As the importance of big data and related technologies continues to grow in industry, visualizing the results of big data processing and analysis has become increasingly important. Visualization delivers the results of analysis to people with effectiveness and clarity, and it also serves as the GUI (Graphical User Interface) that supports communication between people and analysis systems. To make development and maintenance easier, these GUI parts should be loosely coupled from the parts that process and analyze data. Implementing such a loosely coupled architecture requires design patterns such as MVC (Model-View-Controller), which minimizes coupling between the UI part and the data-processing part. Big data can be classified into structured and unstructured data, and visualizing structured data is relatively easy compared to unstructured data. Nevertheless, as the use and analysis of unstructured data spreads, people usually develop a visualization system separately for each project to overcome the limitations of traditional visualization systems built for structured data. For text data, which covers a huge part of unstructured data, visualization is even more difficult. This stems from the complexity of the technologies used to analyze text data, such as linguistic analysis, text mining, and social network analysis, and from the fact that these technologies are not standardized. This situation makes it difficult to reuse the visualization system of one project in other projects. We assume that the reason is a lack of commonality in the design of visualization systems with expansion to other systems in mind. In this research, we suggest a common information model for visualizing text data and propose a comprehensive, reusable framework, TexVizu, for visualizing text data. First, we survey representative research in the text visualization area, identify common elements and common patterns for text visualization across various cases, and review and analyze these elements and patterns from three viewpoints: structural, interactive, and semantic. We then design an integrated model of text data that represents the elements for visualization. The structural viewpoint identifies structural elements of various text documents, such as title, author, and body. The interactive viewpoint identifies the types of relations and interactions between text documents, such as post, comment, and reply. The semantic viewpoint identifies semantic elements extracted by analyzing text data linguistically, represented as tags that classify entity types such as people, place or location, time, and event. We then extract and choose common requirements for visualizing text data, categorized into four types: structure information, content information, relation information, and trend information. Each type of requirement comprises the required visualization techniques, data, and goal (what to know). These are the common, key requirements for designing a framework in which the visualization system is loosely coupled from the data processing or analysis system. Finally, we design a common text visualization framework, TexVizu, which is reusable and extensible across various visualization projects by collaborating with various Text Data Loaders and Analytical Text Data Visualizers via common interfaces such as ITextDataLoader and IATDProvider. TexVizu also comprises an Analytical Text Data Model, Analytical Text Data Storage, and an Analytical Text Data Controller. In this framework, external components are the specifications of the interfaces required for collaborating with the framework. As an experiment, we adopt this framework in two text visualization systems: a social opinion mining system and an online news analysis system.
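The interface names ITextDataLoader and IATDProvider come from the abstract; everything else in the sketch below (method names, stub classes, the Python rendering of the loosely coupled split) is an assumption intended only to illustrate how such components might be declared, not the TexVizu implementation.

```python
# Illustrative sketch only: a Python rendering of the kind of loosely coupled
# interfaces the TexVizu abstract describes. Method names are assumptions.
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class ITextDataLoader(ABC):
    """Loads raw text documents from some external source (file, API, crawler)."""

    @abstractmethod
    def load(self) -> List[Dict[str, Any]]:
        ...


class IATDProvider(ABC):
    """Provides Analytical Text Data (structural/interactive/semantic elements)."""

    @abstractmethod
    def provide(self, documents: List[Dict[str, Any]]) -> Dict[str, Any]:
        ...


class NewsLoader(ITextDataLoader):
    def load(self) -> List[Dict[str, Any]]:
        # A real loader would crawl or query a corpus; this stub returns toy docs.
        return [{"title": "t1", "author": "a1", "body": "..."}]


class TagProvider(IATDProvider):
    def provide(self, documents):
        # A real provider would run linguistic analysis; this stub counts documents.
        return {"documents": len(documents), "tags": []}


# A controller wires the two components together through the interfaces only.
analysis = TagProvider().provide(NewsLoader().load())
print(analysis)
```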

Case Study for the Communication Method of Information Design Type Advertising (정보디자인형 광고의 커뮤니케이션 기법에 관한 연구)

  • Kim, Jong-Min;Park, Han-Sol
    • The Journal of the Korea Contents Association / v.17 no.11 / pp.90-101 / 2017
  • This study analyzes the meaning and characteristics of information design type advertising. It reviews advertising and information related to this issue and explores samples of information design type advertising through an in-depth analysis with an expert group and a non-expert group. Information design type advertising attracts customers by visualizing sensational information and data through information design techniques. It can be classified into manual type ads, identity type ads, and data visualizing type ads. Its communication formula proceeds through the keywords Attention, Curation, and Study; Curation and Study are new steps that did not exist in earlier consumer behavior models. The information used in such advertising comes from common sense or from storytelling made by imagination, but there is no example of using false information that distorts the truth. Rather than exaggeration and falsehood, interest based on confidence creates a bond of sympathy.

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People are nowadays creating a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data satisfies the conditions of Big Data in terms of the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety). If someone intends to discover the trend of an issue in SNS Big Data, this information can be used as a new important source for the creation of new values because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and established to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time-series graph of a topic for the duration of a month; (3) provide the importance of a topic through a treemap based on the score system and frequency; (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, which is an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale up from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schema or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; the interaction with data is easy and useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, made of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS). Based on this, we can confirm the utility of storytelling and time series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
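A minimal, hedged sketch of the kind of topic-extraction step such a system relies on, using scikit-learn's LDA on toy documents; it is not the TITS code base, and the preprocessing pipeline, Hadoop/MongoDB layers, and d3.js front end described above are omitted.

```python
# Toy LDA sketch of the topic extraction a Twitter issue tracker needs.
# Not the TITS implementation; documents and parameters are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "election debate candidate vote",
    "vote turnout election results",
    "baseball season opening game",
    "game highlights baseball team",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)            # document-term counts

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {topic_idx}: {top_terms}")    # top keywords per daily topic
```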

A Study on the Interactive Visualization of Social Networks Using Closeness In Online Community (온라인 커뮤니티에서의 친밀도 요소 분석을 통한 소셜 네트워크 시각화 연구)

  • Lee, So-Hyun;Kim, Hyo-Dong;Lee, Kyung-Won
    • 한국HCI학회:학술대회논문집 / 2009.02a / pp.1087-1094 / 2009
  • As online communities were revitalized, the internet became a second space for people's everyday life. People enter into connections with other online members and maintain and extend those relationships. Such relationships can be analyzed and visualized with social network analysis, a method that often makes visible the structural elements of complex social life. This study aims at visualizing the relationships among Cyworld users and designs an application, "Blow Blow Your Pinwheel", whose main purpose is to visualize the social relationships between the ego and '1chons', a concept of friendship in Cyworld. In designing this application, the study focuses on the closeness of relationships, which we consider to be composed of 1) proximity, 2) similarity, 3) familiarity, and 4) reciprocity. The study used these concepts in measuring the strength of the relationship between the ego and other 1chons (friends). Specifically, we devised survey questionnaires that asked users to evaluate the importance of the above factors of closeness, and implemented the result in calculating the strength of each relationship by giving a weight to each factor. These measurements were then applied in visualizing the relationships in the application we designed. Through the application, we can compare online relationships with offline relationships and attempt a new approach to social networks.
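A tiny sketch of the weighted-sum idea described above (survey-derived weights applied to proximity, similarity, familiarity, and reciprocity scores); the weights and scores are invented placeholders, not the study's survey results.

```python
# Hypothetical weighted closeness score between the ego and one '1chon'.
# Factor weights would come from the survey described above; these are made up.
weights = {"proximity": 0.3, "similarity": 0.2, "familiarity": 0.3, "reciprocity": 0.2}

# Normalized factor scores (0..1) for one ego-friend pair; also made up.
scores = {"proximity": 0.8, "similarity": 0.5, "familiarity": 0.9, "reciprocity": 0.6}

closeness = sum(weights[f] * scores[f] for f in weights)
print(f"closeness = {closeness:.2f}")  # 0.3*0.8 + 0.2*0.5 + 0.3*0.9 + 0.2*0.6 = 0.73
```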

Smart SNS Map: Location-based Social Network Service Data Mapping and Visualization System (스마트 SNS 맵: 위치 정보를 기반으로 한 스마트 소셜 네트워크 서비스 데이터 맵핑 및 시각화 시스템)

  • Yoon, Jangho;Lee, Seunghun;Kim, Hyun-chul
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.428-435 / 2016
  • Hundreds of millions of new posts are uploaded and propagated every day on Online Social Networks (OSNs) such as Twitter, Facebook, and Instagram. This paper proposes and implements a GPS-location-based SNS data mapping, analysis, and visualization system, called Smart SNS Map, which collects SNS data from Twitter and Instagram using hundreds of PlanetLab nodes distributed across the globe. Unlike previous systems, our system supports a variety of functions, including GPS-location-based mapping of collected tweets and Instagram photos, keyword-based tweet or photo search, real-time heat-map visualization of tweets and Instagram photos, sentiment analysis, and word-cloud visualization. Overall, such a system, admittedly still in a prototype phase, is expected to serve as a sort of social weather station, helping people understand what is happening around SNS users, systems, and society, how people feel about these events, and how they change over time and space.
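As a rough illustration (not the Smart SNS Map code) of how GPS-tagged posts can be binned into a heat-map grid, here is a sketch under the assumption of simple latitude/longitude rounding; the coordinates and grid resolution are invented.

```python
# Toy heat-map binning of GPS-tagged posts; coordinates are invented and the
# grid resolution (0.1 degree) is an arbitrary assumption, not the paper's.
from collections import Counter

posts = [
    {"lat": 37.5665, "lon": 126.9780, "text": "hello from Seoul"},
    {"lat": 37.5651, "lon": 126.9895, "text": "city hall area"},
    {"lat": 35.1796, "lon": 129.0756, "text": "Busan beach"},
]

def grid_cell(lat: float, lon: float, res: float = 0.1) -> tuple:
    """Snap a coordinate to a res x res degree cell."""
    return (round(lat / res) * res, round(lon / res) * res)

heatmap = Counter(grid_cell(p["lat"], p["lon"]) for p in posts)
for cell, count in heatmap.most_common():
    print(cell, count)   # cell counts drive the heat-map intensity
```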

P-TAF: A Big Data-based Platform for Total Air Traffic Forecast (빅데이터 기반 항공 수요예측 통합 플랫폼 설계 및 실증)

  • Jung, Jooik;Son, Seokhyun;Cha, Hee-June
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.281-282 / 2021
  • This paper presents the design and demonstration results of a big data-based platform for air traffic demand forecasting. The integrated air demand forecast platform is implemented to collect and analyze aviation-industry data via Open APIs, RSS feeds, and web crawlers, and to visualize the results based on an in-house air demand forecasting algorithm. Through the platform's user interface, users can set parameters to derive forecast statistics by unit (global, national, etc.), by period (short-term, mid-to-long-term, etc.), and by type (passenger, cargo, etc.). To validate the platform's performance, we analyzed the correlation between the frequency of specific keywords and air traffic demand on specific routes, using not only structured data but also unstructured data collected from social network services (SNS) and search engines. The intelligent air demand forecasting algorithm of the developed integrated platform is expected to contribute to overall airport operations and the establishment of airport operation policies.
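Not part of the platform itself, but a minimal sketch of the kind of keyword-frequency vs. route-demand correlation analysis the abstract mentions; the monthly figures are fabricated placeholders, not the platform's data.

```python
# Toy correlation between monthly keyword frequency (from SNS/search engines)
# and passenger demand on one route. Numbers are invented placeholders.
from scipy.stats import pearsonr

keyword_freq = [120, 150, 170, 160, 210, 260]        # e.g. mentions of a route
passengers   = [30500, 33200, 35900, 34100, 40200, 45800]

r, p_value = pearsonr(keyword_freq, passengers)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
```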

Automatic Classification of Department Types and Analysis of Co-Authorship Network: Focusing on Korean Journals in the Computer Field

  • Byungkyu Kim;Beom-Jong You;Min-Woo Park
    • Journal of the Korea Society of Computer and Information / v.28 no.4 / pp.53-63 / 2023
  • The utilization of department information in bibliometric analysis of scientific and technological literature is highly advantageous. In this paper, a department information dataset was built by screening, refining, and classifying the department types of university-affiliated authors appearing in academic journals in the field of science and technology published in Korea, and a deep learning-based automatic classification model was developed using this dataset as training and validation data. In addition, we analyzed the co-authorship structure and network in the field of computer science using the department information dataset and the affiliation information of authors from domestic academic journals. The automatic classification model achieved an accuracy of 98.6% on Korean department information. Moreover, the co-authorship patterns of Korean researchers in the computer science and engineering field, along with the characteristics and centralities of the co-author network by institution type, region, institution, and department type, were identified in detail and visually presented on a map.
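A brief, hedged sketch of the co-authorship-network part of such an analysis (not the paper's dataset or code): build an undirected co-author graph from paper author lists and compute degree and betweenness centrality. The author names and their bracketed department types are invented.

```python
# Toy co-authorship network with invented authors and department-type labels.
import networkx as nx

papers = [
    ["Kim (CS)", "Lee (CS)", "Park (EE)"],
    ["Kim (CS)", "Choi (Math)"],
    ["Lee (CS)", "Park (EE)"],
]

G = nx.Graph()
for authors in papers:
    for i in range(len(authors)):
        for j in range(i + 1, len(authors)):
            # increment edge weight for each co-authored paper
            w = G.get_edge_data(authors[i], authors[j], default={"weight": 0})["weight"]
            G.add_edge(authors[i], authors[j], weight=w + 1)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
for node in G.nodes:
    print(node, round(degree[node], 2), round(betweenness[node], 2))
```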

Managing Duplicate Memberships of Websites : An Approach of Social Network Analysis (웹사이트 중복회원 관리 : 소셜 네트워크 분석 접근)

  • Kang, Eun-Young;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.153-169 / 2011
  • Today, the Internet environment is considered absolutely essential for establishing corporate marketing strategies. Companies have promoted their products and services through various on-line marketing activities, such as providing gifts and points to customers in exchange for participating in events, which is based on customers' membership data. Since companies can use these membership data to enhance their marketing efforts through various kinds of data analysis, appropriate website membership management may play an important role in increasing the effectiveness of on-line marketing campaigns. Despite the growing interest in proper membership management, however, there have been difficulties in identifying inappropriate members who can weaken on-line marketing effectiveness. In the on-line environment, customers tend not to reveal themselves as clearly as in the off-line market. Customers who have malicious intent are able to create duplicate IDs by using others' names illegally or faking login information when signing up for membership. Since these duplicate members are likely to intercept gifts and points that should be sent to the customers who deserve them, this can result in ineffective marketing efforts. Considering that the number of website members and the related marketing costs are increasing significantly, it is necessary for companies to find efficient ways to screen out duplicate members. With this motivation, this study proposes an approach for managing duplicate membership based on social network analysis and verifies its effectiveness using membership data gathered from real websites. A social network is a social structure made up of actors, called nodes, which are tied by one or more specific types of interdependency. Social networks represent the relationships between nodes and show the direction and strength of each relationship. Various analytical techniques have been proposed based on these social relationships, such as centrality analysis, structural holes analysis, and structural equivalence analysis. Component analysis, one of the social network analysis techniques, deals with the sub-networks that form meaningful information within the group connections. We propose a method for managing duplicate memberships using component analysis. The procedure is as follows. The first step is to identify membership attributes that will be used for analyzing relationship patterns among memberships; these include ID, telephone number, address, posting time, IP address, and so on. The second step is to compose social matrices based on the identified membership attributes and aggregate the values of each social matrix into a combined social matrix. The combined social matrix represents how strongly pairs of nodes are connected; when a pair of nodes is strongly connected, those nodes are likely to be duplicate memberships. The combined social matrix is then transformed into a binary matrix with cell values of '0' or '1', using a relationship criterion that determines whether a membership is duplicate or not. The third step is to conduct a component analysis of the combined social matrix in order to identify component nodes and isolated nodes. The fourth step is to identify the number of real memberships and calculate the reliability of the website membership based on the component analysis results. The proposed procedure was applied to three real websites operated by a pharmaceutical company. The empirical results showed that the proposed method was superior to the traditional database approach using simple address comparison. In conclusion, this study is expected to shed some light on how social network analysis can enhance reliable on-line marketing performance by efficiently and effectively identifying duplicate memberships of websites.
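To make the component-analysis step concrete, here is a small hedged sketch (invented member records, attributes, and threshold, not the pharmaceutical-company websites' data): pairwise attribute matches are summed into a combined matrix, binarized with a threshold, and connected components are read off as candidate duplicate-member groups.

```python
# Toy sketch of the duplicate-membership procedure described above:
# attribute-wise similarity -> combined matrix -> binarize -> connected components.
# Member records and the threshold are invented for illustration.
import networkx as nx

members = [
    {"id": "u1", "phone": "010-1111", "ip": "1.2.3.4"},
    {"id": "u2", "phone": "010-1111", "ip": "1.2.3.4"},   # likely duplicate of u1
    {"id": "u3", "phone": "010-9999", "ip": "5.6.7.8"},
]
attributes = ["phone", "ip"]
threshold = 2   # number of matching attributes needed to link a pair

G = nx.Graph()
G.add_nodes_from(m["id"] for m in members)
for i in range(len(members)):
    for j in range(i + 1, len(members)):
        # combined social matrix entry = number of matching attributes
        matches = sum(members[i][a] == members[j][a] for a in attributes)
        if matches >= threshold:                 # binarization step
            G.add_edge(members[i]["id"], members[j]["id"])

components = list(nx.connected_components(G))    # component analysis
print("duplicate groups:", [c for c in components if len(c) > 1])
print("estimated number of real memberships:", len(components))
```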