• Title/Summary/Keyword: Data Augmentation Techniques (데이터 확장 기법)

Search Results: 827

A Suggestion and an Analysis on Trend Changes of 'Virtual Tourism' before and after the COVID-19 Crisis Using a Text Mining Method (텍스트 마이닝을 활용한 '가상관광'의 코로나19 전후 트렌드 분석 및 방향성 제언)

  • Sung, Yun-A
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.4
    • /
    • pp.155-161
    • /
    • 2022
  • The outbreak of COVID-19 increased interest in 'Virtual Tourism'. In this research, keywords related to 'Virtual Tourism' were collected through a search engine and analyzed with text mining methods such as the log-odds ratio, frequency analysis, and network analysis. The results clearly show that dependence on information and communication technology in the field of 'Virtual Tourism' increased after COVID-19, and that the trend shifted from 'securing content diversity' to 'projects related to economic recovery.' Since demand for 'Virtual Reality' applications such as the metaverse is increasing, an economic and circular structure is needed in which the government establishes related policies and funding plans based on research, local governments and private companies plan and produce differentiated content focused on AISAS (Attention, Interest, Search, Action, Share), and research institutions and universities develop, apply, assess, and commercialize the technology.
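
As a rough illustration of the log-odds-ratio comparison mentioned in the abstract, the sketch below contrasts keyword frequencies in two hypothetical document collections gathered before and after COVID-19; the corpora, keywords, and smoothing constant are assumptions for illustration, not the authors' data.

```python
import math
from collections import Counter

def log_odds_ratio(pre_counts, post_counts, alpha=0.5):
    """Smoothed log-odds ratio of each keyword between two corpora.

    Positive values indicate a keyword became relatively more frequent
    after COVID-19; negative values indicate the opposite.
    """
    pre_total = sum(pre_counts.values())
    post_total = sum(post_counts.values())
    vocab = set(pre_counts) | set(post_counts)
    scores = {}
    for w in vocab:
        p_pre = (pre_counts.get(w, 0) + alpha) / (pre_total + alpha * len(vocab))
        p_post = (post_counts.get(w, 0) + alpha) / (post_total + alpha * len(vocab))
        scores[w] = math.log(p_post / (1 - p_post)) - math.log(p_pre / (1 - p_pre))
    return scores

# Hypothetical keyword counts standing in for the collected search-engine results.
pre = Counter({"contents": 40, "diversity": 25, "VR": 10})
post = Counter({"metaverse": 35, "recovery": 30, "VR": 20})
for word, score in sorted(log_odds_ratio(pre, post).items(), key=lambda x: -x[1]):
    print(f"{word:12s} {score:+.2f}")
```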

Effective Dynamic Broadcast Method in Hybrid Broadcast Environment (하이브리드 브로드캐스트 환경에서 효과적인 동적 브로드캐스팅 기법)

  • Choi, Jae-Hoon;Lee, Jin-Seung;Kang, Jae-Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.2
    • /
    • pp.103-110
    • /
    • 2009
  • We are witnessing a rapid increase in the number of wireless devices available today, such as cell phones, PDAs, and WiBro-enabled devices. Because of the inherent bandwidth limitations of wireless channels, broadcast systems have attracted the attention of the research community. The main problem in this area is developing an efficient broadcast program. In this paper, we propose a dynamic broadcast method that overcomes the limitations of static broadcast programs by optimizing the schedule based on a probabilistic model of user requests. We show that a dynamic broadcast system can indeed improve the quality of service by exploiting user requests. This paper extends our previous work in [1] with a more thorough explanation of the proposed methodology and more diverse performance evaluation models.
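
The abstract does not spell out the scheduling rule, so the following is only a minimal sketch of one way a demand-driven broadcast program might adapt to observed user requests; the square-root weighting is a classic broadcast-scheduling heuristic used here as a stand-in, not the authors' method.

```python
import random
from collections import Counter

class DynamicBroadcastScheduler:
    """Toy scheduler that adapts the broadcast program to observed requests.

    Items requested more often are broadcast more often; the square-root
    weighting is a common heuristic for reducing mean access time and is
    used here only as an illustrative stand-in for the paper's model.
    """

    def __init__(self, items):
        self.requests = Counter({item: 1 for item in items})  # add-one smoothing

    def record_request(self, item):
        self.requests[item] += 1

    def next_item(self):
        total = sum(self.requests.values())
        weights = {i: (c / total) ** 0.5 for i, c in self.requests.items()}
        items, w = zip(*weights.items())
        return random.choices(items, weights=w, k=1)[0]

scheduler = DynamicBroadcastScheduler(["news", "weather", "traffic"])
for _ in range(100):
    scheduler.record_request(random.choice(["news", "news", "weather", "traffic"]))
print([scheduler.next_item() for _ in range(10)])
```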

Research on Driving Pattern Analysis Techniques Using Contrastive Learning Methods (대조학습 방법을 이용한 주행패턴 분석 기법 연구)

  • Hoe Jun Jeong;Seung Ha Kim;Joon Hee Kim;Jang Woo Kwon
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.1
    • /
    • pp.182-196
    • /
    • 2024
  • This study introduces driving pattern analysis and change detection methods using smartphone sensors, based on contrastive learning. These methods characterize driving patterns without labeled data, allowing accurate classification with minimal labeling. In addition, they are robust to domain changes, such as different vehicle types. The study also examined the applicability of these methods to smartphones by comparing them with six lightweight deep-learning models. This comparison supported the development of smartphone-based driving pattern analysis and assistance systems, utilizing smartphone sensors and contrastive learning to enhance driving safety and efficiency while reducing the need for extensive labeled data. This research offers a promising avenue for addressing contemporary transportation challenges and advancing intelligent transportation systems.
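
The exact contrastive objective is not given in the abstract; as a hedged illustration, the sketch below computes a generic SimCLR-style NT-Xent loss over two augmented views of sensor windows, which is one common way to learn driving-pattern representations without labels.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss for two views of the same sensor windows.

    z1, z2: (batch, dim) embeddings of two augmentations of each window.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2B, D)
    sim = z @ z.t() / temperature                              # scaled cosine similarities
    batch = z1.size(0)
    mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                 # ignore self-similarity
    # The positive for sample i is its other view, located batch positions away.
    targets = torch.arange(2 * batch, device=z.device).roll(batch)
    return F.cross_entropy(sim, targets)

# Hypothetical encoder outputs for a batch of accelerometer/gyroscope windows.
z1, z2 = torch.randn(8, 64), torch.randn(8, 64)
print(nt_xent_loss(z1, z2).item())
```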

Real-time Hand Region Detection and Tracking using Depth Information (깊이정보를 이용한 실시간 손 영역 검출 및 추적)

  • Joo, SungIl;Weon, SunHee;Choi, HyungIl
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.3
    • /
    • pp.177-186
    • /
    • 2012
  • In this paper, we propose a real-time approach for detecting and tracking a hand region by analyzing depth images. We build a hand model in advance that contains the shape information of a hand. The detection process extracts moving areas in an image, which are possibly caused by a hand moving in front of the camera. The moving areas are identified by analyzing accumulated difference images and applying a region-growing technique. Each extracted moving area is then compared against the hand model to verify that it is a hand region. The tracking process keeps track of the center points of hand regions across successive frames in three steps. The first step determines a seed point, the point closest to the center point of the previous frame. The second step performs region growing to form a candidate hand region. The third step determines the center point of the hand to be tracked; this point is searched for by the mean-shift algorithm within a confined area whose size varies adaptively according to the depth information. To verify the effectiveness of our approach, we evaluated its performance while changing the shape and position of the hand as well as the velocity of hand movement.
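
As a small illustration of the region-growing step described above, the following sketch grows a region on a depth map from a seed pixel; the frame contents, depth tolerance, and 4-connectivity are assumptions for demonstration, not the paper's exact parameters.

```python
import numpy as np
from collections import deque

def grow_region(depth, seed, depth_tol=15):
    """Breadth-first region growing on a depth map.

    Starts from the seed pixel and absorbs 4-connected neighbors whose depth
    differs from the seed depth by less than depth_tol (in depth units).
    """
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_depth = int(depth[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(int(depth[ny, nx]) - seed_depth) < depth_tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Hypothetical 240x320 depth frame with a "hand" region at a closer depth.
frame = np.full((240, 320), 200, dtype=np.uint16)
frame[100:140, 150:190] = 60
region = grow_region(frame, seed=(120, 170))
print("hand pixels:", int(region.sum()))   # its centroid could then seed mean-shift
```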

A Study on Improvement for Service Proliferation Based on Blockchain (블록체인 기반 서비스 확산을 위한 개선 방안 연구)

  • Yoo, Soonduck;Kim, Kiheung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.1
    • /
    • pp.185-194
    • /
    • 2018
  • This study investigates the limitations of blockchain technology and ways to improve it using the Delphi technique. Limiting factors and improvement measures are classified into technology, service, and legal-system aspects. First, from a technical point of view, the limiting factors were a lack of standardization, insufficient integration, lack of scalability, unclear cancellation and correction policies, excessive transaction-verification costs, insufficient personal information protection, and inadequate defenses against hacking. To improve these, the following are needed: ensuring standardization, securing integration and scalability, establishing cancellation and correction policies for the applicable data, making verification costs efficient, protecting personal information, and preparing countermeasures against hacking. Related technology development and countermeasures must be established to introduce blockchain technology to the market effectively. Second, in the early stage of blockchain services, the limiting factors were low utilization of the blockchain, security threats, a shortage of skilled workers, and a lack of legal liability. As solutions, it is necessary to present diverse applications, counter security threats, train professional manpower, and secure legal responsibility; a foundation for providing institutionally stable services should also be laid. Third, from a legal-system point of view, inadequate legal compliance, a lack of relevant regulation, and regulatory uncertainty were the limiting factors. Establishing a legal system, the most important area for activating the service, should therefore be accompanied by legal countermeasures, clear regulations, and measures to be taken by the relevant governmental authorities. This study will serve as a reference for research related to blockchain.

Analysis of Network Dynamics from Annals of the Chosun Dynasty (조선왕조실록 네트워크의 동적 변화 분석)

  • Kim, Hak Yong;Kim, Hak Bong
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.9
    • /
    • pp.529-537
    • /
    • 2014
  • To establish a foundation for objectively interpreting Chosun history, we construct a people network of the Chosun dynasty. The network shows scale-free properties, as most social networks do. The people network is composed of 1,379 nodes and 3,874 links, and its diameter is 14. To analyze the network dynamics, 27 cumulative king networks were constructed by starting from the network of the first king, Taejo, adding the network of the second king, Jeongjong, and then continuously adding the networks of the subsequent kings. Interestingly, betweenness and closeness centralities gradually decreased, whereas stress centrality drastically increased. These results indicate that information flow gradually slows and hub nodes become more centrally oriented as the network grows. To identify key persons in the network, the k-core and MCODE algorithms, which extract core or module structures from a whole network, were employed. New insights and hidden information can be obtained by analyzing network dynamics. Although network dynamics research is limited by the lack of dynamic interaction data, this research, despite using concise data, shows that the Annals of the Chosun Dynasty are very useful historical data for analyzing network dynamics.
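
A minimal sketch of the kind of centrality and k-core analysis described, using a toy graph in place of the actual people network (which the paper reports as 1,379 nodes and 3,874 links); the edges below are placeholders, not data from the Annals.

```python
import networkx as nx

# Toy stand-in for one cumulative "king network" of the Annals.
G = nx.Graph()
G.add_edges_from([
    ("Taejo", "Jeongjong"), ("Taejo", "Taejong"), ("Jeongjong", "Taejong"),
    ("Taejong", "Sejong"), ("Sejong", "Munjong"), ("Sejong", "Hwang Hui"),
])

betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)
core_numbers = nx.core_number(G)        # k-core decomposition, used to find key persons

for person in G.nodes:
    print(f"{person:12s} betweenness={betweenness[person]:.2f} "
          f"closeness={closeness[person]:.2f} core={core_numbers[person]}")
```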

Speech extraction based on AuxIVA with weighted source variance and noise dependence for robust speech recognition (강인 음성 인식을 위한 가중화된 음원 분산 및 잡음 의존성을 활용한 보조함수 독립 벡터 분석 기반 음성 추출)

  • Shin, Ui-Hyeop;Park, Hyung-Min
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.3
    • /
    • pp.326-334
    • /
    • 2022
  • In this paper, we propose a speech enhancement algorithm as pre-processing for robust speech recognition in noisy environments. Auxiliary-function-based Independent Vector Analysis (AuxIVA) is performed with a weighted covariance matrix that uses time-varying variances scaled by target masks representing the time-frequency contributions of the target speech. The mask estimates can be obtained from a Neural Network (NN) pre-trained for speech extraction, or from diffuseness based on the Coherence-to-Diffuse power Ratio (CDR) to capture the direct-sound component of the target speech. In addition, the outputs for omni-directional noise are closely chained by sharing the time-varying variances, similarly to independent subspace analysis or IVA. The AuxIVA-based speech extraction method is also carried out in the Independent Low-Rank Matrix Analysis (ILRMA) framework by extending the Non-negative Matrix Factorization (NMF) of the noise outputs to Non-negative Tensor Factorization (NTF), which maintains the inter-channel dependency in the noise output channels. Experimental results on the CHiME-4 dataset demonstrate the effectiveness of the presented algorithms.
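
A rough numpy sketch of a mask-weighted spatial covariance of the sort described in the abstract; the exact weighting used in the paper cannot be recovered from the abstract, so the mask-over-power weighting here is an assumption, and the mask itself (from an NN or a CDR estimate) is outside this snippet.

```python
import numpy as np

def weighted_covariance(X, mask, eps=1e-6):
    """Mask-weighted spatial covariance for one frequency bin.

    X:    (channels, frames) complex STFT observations at a single frequency.
    mask: (frames,) target mask in [0, 1] (e.g., from an NN or a CDR estimate).
    The time-varying variance is approximated by the masked frame power, and
    its inverse weights each frame's rank-1 covariance (an assumed AuxIVA-style
    update statistic, not the paper's exact formula).
    """
    power = np.maximum(np.mean(np.abs(X) ** 2, axis=0) * mask, eps)  # (frames,)
    weights = mask / power
    return (weights * X) @ X.conj().T / X.shape[1]

# Hypothetical 4-channel, 100-frame observation and a random target mask.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 100)) + 1j * rng.normal(size=(4, 100))
mask = rng.uniform(size=100)
V = weighted_covariance(X, mask)
print(V.shape, np.allclose(V, V.conj().T))   # Hermitian weighted covariance
```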

Topic Model Augmentation and Extension Method using LDA and BERTopic (LDA와 BERTopic을 이용한 토픽모델링의 증강과 확장 기법 연구)

  • Kim, SeonWook;Yang, Kiduk
    • Journal of the Korean Society for information Management
    • /
    • v.39 no.3
    • /
    • pp.99-132
    • /
    • 2022
  • The purpose of this study is to propose AET (Augmented and Extended Topics), a novel method for synthesizing LDA and BERTopic results, and to analyze recently published LIS articles as an experimental application. To this end, 55,442 abstracts from 85 LIS journals in the WoS database, spanning January 2001 to October 2021, were analyzed. AET first constructs a Word2Vec-based cosine similarity matrix between the LDA and BERTopic results, extracts AT (Augmented Topics) by repeating matrix reordering and segmentation as long as the semantic relations remain valid, and finally determines ET (Extended Topics) by removing any LDA-related residual subtopics from the matrix and ordering the rest by F1 (BERTopic topic-size rank, inverse cosine-similarity rank). Compared with the baseline LDA result, AT effectively concretized the original LDA topic model, and ET discovered new meaningful topics that LDA did not. In the qualitative performance evaluation, AT performed better than LDA, while ET showed similar performance except in a few cases.
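
The sketch below illustrates only the first step of AET as described, building a Word2Vec-based cosine similarity matrix between LDA and BERTopic topics; the topic word lists and word vectors are toy placeholders, not the study's trained models.

```python
import numpy as np

def topic_vector(words, word_vectors):
    """Mean word-embedding of a topic's top words (words missing from the model are skipped)."""
    vecs = [word_vectors[w] for w in words if w in word_vectors]
    return np.mean(vecs, axis=0)

def topic_similarity_matrix(lda_topics, bertopic_topics, word_vectors):
    """Cosine similarity between every LDA topic and every BERTopic topic."""
    A = np.stack([topic_vector(t, word_vectors) for t in lda_topics])
    B = np.stack([topic_vector(t, word_vectors) for t in bertopic_topics])
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    B /= np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T   # rows: LDA topics, columns: BERTopic topics

# Toy word vectors and topic word lists standing in for trained models.
rng = np.random.default_rng(1)
vocab = ["library", "search", "ranking", "citation", "archive", "metadata"]
wv = {w: rng.normal(size=50) for w in vocab}
lda_topics = [["library", "archive", "metadata"], ["search", "ranking"]]
bert_topics = [["citation", "ranking"], ["archive", "library"]]
print(topic_similarity_matrix(lda_topics, bert_topics, wv).round(2))
```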

Raft-D: A Consensus Algorithm for Dynamic Configuration of Participant Peers (Raft-D: 참여 노드의 동적 구성을 허용하는 컨센서스 알고리즘)

  • Ha, Yeoun-Ui;Jin, Jae-Hwan;Lee, Myung-Joon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.2
    • /
    • pp.267-277
    • /
    • 2017
  • One of the fundamental problems in developing robust distributed services is how to achieve distributed consensus, that is, agreement on data values shared among the participants in a distributed service. Raft, one of the algorithms for distributed consensus, is known as a simple and understandable algorithm because it decomposes the distributed consensus problem into three subproblems (leader election, log replication, and safety). However, the algorithm does not address dynamic configuration of participant peers, such as adding new peers to a consensus group or removing peers from the group. In this paper, we present a new consensus algorithm named Raft-D, which supports the dynamic configuration of participant peers by extending the Raft algorithm. To this end, Raft-D manages additional information maintained by the participant nodes and provides a technique for checking the connection status of the nodes belonging to the consensus group. Based on this technique, Raft-D defines the conditions and states needed to add new peers to the consensus group or remove peers from it, and performs the dynamic configuration process through the log-update mechanism of the Raft algorithm.
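
Raft-D's specific conditions and states cannot be recovered from the abstract alone; the sketch below only illustrates the general idea of driving a peer-set change through ordinary log replication, with all names and rules assumed for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigChange:
    """A membership-change command carried as an ordinary log entry (illustrative)."""
    action: str   # "add" or "remove"
    peer: str

@dataclass
class Leader:
    peers: set = field(default_factory=set)
    log: list = field(default_factory=list)
    commit_index: int = -1

    def propose_config_change(self, action, peer):
        # The change is appended to the log and replicated like any command;
        # it only takes effect on the peer set once the entry is committed.
        self.log.append(ConfigChange(action, peer))

    def apply_committed(self, new_commit_index):
        for entry in self.log[self.commit_index + 1 : new_commit_index + 1]:
            if isinstance(entry, ConfigChange):
                (self.peers.add if entry.action == "add" else self.peers.discard)(entry.peer)
        self.commit_index = new_commit_index

leader = Leader(peers={"n1", "n2", "n3"})
leader.propose_config_change("add", "n4")   # would first be replicated to a majority
leader.apply_committed(0)                   # then applied once committed
print(leader.peers)
```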

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the use of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually; however, with manual categorization, not only can accuracy not be guaranteed, but the process also requires a great deal of time and cost. Many studies have therefore been conducted on the automatic assignment of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to complex documents with multiple topics because they assume that each document can be assigned to only one category. To overcome this limitation, some studies have attempted to assign each document to multiple categories, but they are also limited in that their learning process requires training on a multi-categorized document set; they therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To remove the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by using the result of topic analysis on single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate a matching score for each document against multiple categories; a document is classified into a category if and only if its matching score exceeds a predefined threshold. For example, a document can be classified into the three categories whose matching scores exceed the threshold. The main contribution of our study is that our methodology improves the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and the use of vulgar language and slang is less common than in other typical text documents. We collected news articles from July 2012 to June 2013. The number of articles varies considerably across categories, both because readers have different levels of interest in each category and because events occur with different frequencies in each category. To minimize the distortion caused by the differing numbers of articles per category, we extracted 3,000 articles from each of the eight categories, for a total of 24,000 articles. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics."
Using the collected news articles, we calculated document/category correspondence scores from the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree to which each document corresponds to a certain category. As a result, we could present two additional categories for each of 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top-1 predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. Interestingly, precision, recall, and F-score varied considerably across the eight categories.
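
As a hedged sketch of the score combination described above, the following multiplies assumed document/topic and topic/category correspondence matrices and keeps every category whose matching score passes a threshold; all matrices, the threshold, and the category labels are illustrative, not the study's data.

```python
import numpy as np

def multi_categorize(doc_topic, topic_category, threshold=0.3):
    """Assign each document every category whose matching score passes the threshold.

    doc_topic:      (docs, topics) document/topic correspondence scores.
    topic_category: (topics, categories) topic/category correspondence scores.
    """
    doc_category = doc_topic @ topic_category                 # document/category scores
    assignments = []
    for scores in doc_category:
        cats = np.where(scores >= threshold)[0]
        assignments.append(cats[np.argsort(-scores[cats])])   # categories ranked by score
    return doc_category, assignments

# Toy scores for 3 documents, 4 topics, and 3 categories (e.g., IT/Economy/Sports).
doc_topic = np.array([[0.6, 0.2, 0.1, 0.1],
                      [0.1, 0.1, 0.5, 0.3],
                      [0.3, 0.3, 0.2, 0.2]])
topic_category = np.array([[0.8, 0.1, 0.1],
                           [0.2, 0.7, 0.1],
                           [0.1, 0.2, 0.7],
                           [0.3, 0.3, 0.4]])
scores, cats = multi_categorize(doc_topic, topic_category)
print(np.round(scores, 2), cats)
```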