• Title/Summary/Keyword: multimedia big data

Search Results: 144

Location-Based Military Simulation and Virtual Training Management System (위치인식 기반의 군사 시뮬레이션 및 가상훈련 관리 시스템)

  • Jeon, Hyun Min;Kim, Jae Wan
    • Journal of Korea Multimedia Society / v.20 no.1 / pp.51-57 / 2017
  • The purpose of this study is to design a system for military simulation and virtual training that uses the location information of each soldier's weapon. Location information is acquired with an Arduino GPS shield, transmitted to a smartphone via a Bluetooth shield, and relayed to the server in real time over the smartphone's 3G/4G connection. The server measures, analyzes, and manages each soldier's current position and movement track. The proposed system makes it easier to analyze the training situation of individual soldiers and can be expected to improve training results.
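The pipeline described above (GPS shield → Bluetooth → smartphone 3G/4G → server) ends at a server that tracks each soldier. A minimal Python sketch of that server-side bookkeeping — all class, method, and identifier names here are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    fixes: list = field(default_factory=list)  # list of (timestamp, lat, lon)

class TrainingServer:
    """Illustrative server role: collect GPS fixes relayed from each
    soldier's smartphone and keep a per-soldier track for analysis."""

    def __init__(self):
        self.tracks = {}

    def report(self, soldier_id, ts, lat, lon):
        # Append the new fix to this soldier's track.
        self.tracks.setdefault(soldier_id, Track()).fixes.append((ts, lat, lon))

    def current_position(self, soldier_id):
        # Latest (lat, lon) of the given soldier.
        return self.tracks[soldier_id].fixes[-1][1:]

server = TrainingServer()
server.report("alpha-01", 0, 37.5665, 126.9780)
server.report("alpha-01", 5, 37.5670, 126.9782)
print(server.current_position("alpha-01"))  # latest (lat, lon)
```

A real deployment would add authentication and persistence; this only shows the measure-and-track bookkeeping the abstract describes.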

On correlation and causality in the analysis of big data (빅 데이터 분석에서 상관성과 인과성)

  • Kim, Joonsung
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.8 / pp.845-852 / 2018
  • Mayer-Schönberger and Cukier (2013) explain why big data matters for our lives, presenting many cases in which the analysis of big data has great significance and raising intriguing issues about such analysis. The two authors claim that, in the analysis of big data, correlation is in many ways far more efficient and versatile in practice than causality. Moreover, they claim that causality can be abandoned, since analysis and prediction founded on correlation must prevail. I critically examine the two authors' accounts of causality and correlation. First, I criticize the claim that correlation is sufficient for our analysis of data and for predictions founded on that analysis, and I point out their misunderstanding of the distinction between correlation and causality. Analyzing Simpson's paradox, I show that spurious correlation can mislead our decisions. Second, I reject the claims that causality is less efficient than correlation in the analysis of big data and that there is no mathematical theory of causality. I introduce mathematical theories of causality founded on structural equation theory and show that causality has great significance for the analysis of big data.
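The Simpson's-paradox danger analyzed in this paper can be demonstrated in a few lines. Below, the classic kidney-stone data (Charig et al., 1986), used purely as an outside illustration: treatment A beats B within each subgroup, yet B looks better once the subgroups are pooled — exactly the kind of spurious correlation that can mislead a decision based only on pooled big data.

```python
# Classic kidney-stone dataset: (successes, total) per subgroup and treatment.
data = {
    "small stones": {"A": (81, 87), "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Within each subgroup, A has the higher success rate.
for group, arms in data.items():
    for arm, (s, n) in arms.items():
        print(f"{group}, treatment {arm}: {rate(s, n):.1%}")

# Pooling the subgroups reverses the comparison: B looks better overall.
totals = {arm: [0, 0] for arm in ("A", "B")}
for arms in data.values():
    for arm, (s, n) in arms.items():
        totals[arm][0] += s
        totals[arm][1] += n
for arm, (s, n) in totals.items():
    print(f"overall, treatment {arm}: {rate(s, n):.1%}")
```

The reversal arises because treatment assignment correlates with stone size (a confounder), which is the causal structure that a purely correlational analysis misses.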

An Efficient data management Scheme for Hierarchical Multi-processing using Double Hash Chain (이중 해쉬체인을 이용한 계층적 다중 처리를 위한 효율적인 데이터 관리 기법)

  • Jeong, Yoon-Su;Kim, Yong-Tae;Park, Gil-Cheol
    • Journal of Digital Convergence / v.13 no.10 / pp.271-278 / 2015
  • Because big data is collected over the Internet, it is difficult to collect exactly the data one wants. As the volume of data grows, the variety of data types and the collection period increase faster than the data can be handled. In particular, since data differs by intended use and type, processing accuracy and computational cost are important concerns. In this paper, we propose a data management scheme that uses a double hash chain to accurately extract the heterogeneous data a user wants from the Internet, process it hierarchically in multiple layers, and minimize the computational cost. The proposed scheme classifies data hierarchically according to its intended use and extracts data of various types. During multi-layer processing, the data is bound into a double hash chain to improve the accuracy of reads. Organizing the data into hash chains also gives easy access to the hierarchically classified data and reduces processing cost. Experimental results show that the proposed scheme improves data accuracy by 7.8% on average and reduces data processing cost by 4.9% compared with conventional techniques.
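The paper's exact construction is not reproduced in the abstract, but the general double-hash-chain idea — binding each record into two interleaved chains so that corruption of any record is detectable — can be sketched as follows. The names and structure are illustrative, not the authors' scheme:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_double_chain(records):
    """Link each record into two interleaved hash chains: a forward
    chain over the record contents, and a second chain over the
    forward-chain links, so tampering breaks both chains."""
    fwd, dbl = b"", b""
    links = []
    for rec in records:
        fwd = h(fwd + rec)   # chain 1: accumulates record contents
        dbl = h(dbl + fwd)   # chain 2: accumulates chain-1 links
        links.append((fwd, dbl))
    return links

records = [b"sensor:42", b"sensor:43", b"sensor:44"]
chain = build_double_chain(records)
tampered = build_double_chain([b"sensor:42", b"sensor:XX", b"sensor:44"])
print(chain[-1] != tampered[-1])  # altering a record changes the final links
```

Verifying only the final link pair is enough to detect a change anywhere earlier in the sequence, which is what makes chained hashes cheap to check.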

Analysis of Market Trajectory Data using k-NN

  • Park, So-Hyun;Ihm, Sun-Young;Park, Young-Ho
    • Journal of Multimedia Information System / v.5 no.3 / pp.195-200 / 2018
  • Recently, as sensors and big data analysis technology have developed, there has been a great deal of research analyzing purchase-related data such as trajectory information and stay time. Such purchase-related data is useful for predicting purchase patterns and purchase times. Because it is difficult to find periodic patterns in large-scale human data, it is necessary to look at actual data sets, find various feature patterns, and then apply a machine learning algorithm appropriate to the pattern and purpose. Although existing papers analyze such data with various machine learning methods, they lack statistical analysis, such as identifying feature patterns before applying a machine learning algorithm. Therefore, we analyze the purchasing data of Songjeong Maeil Market, where the data was gathered, and find several characteristic patterns through statistical analysis. Based on these results, we derive meaningful conclusions by applying a machine learning algorithm and present future research directions. The data analysis confirmed that the number of visits differs according to the regional characteristics around Songjeong Maeil Market and revealed the distribution of the time consumers spend there.
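As a concrete illustration of the k-NN step, the sketch below classifies a shopper described by visit hour and stay time by majority vote among the k nearest training examples. The features, labels, and values are invented for illustration and are not drawn from the Songjeong Maeil Market dataset:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Majority vote among the k training points nearest to `query`
    (Euclidean distance). `train` is a list of (features, label)."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical (visit_hour, stay_minutes) -> behaviour label.
train = [
    ((10, 3), "browse"), ((11, 5), "browse"), ((9, 4), "browse"),
    ((12, 30), "purchase"), ((17, 35), "purchase"), ((18, 40), "purchase"),
]
print(knn_predict(train, (12, 25)))  # long midday stay -> purchase
```

In practice the features would come from the statistical pattern-finding step the paper describes, and k would be tuned on held-out data.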

Designing an Automated Production Information Platform for Small and Medium-sized Businesses (중소기업의 자동화 생산 정보 플랫폼 구축 모델 설계)

  • Jeong, Yoon-Su;Kim, Yong-Tae;Park, Gil-Cheol
    • Journal of Convergence for Information Technology / v.9 no.1 / pp.116-122 / 2019
  • In recent years, small and medium-sized businesses have been shifting rapidly to an industrial structure in which process, quality, and energy data are aggregated automatically and in real time to achieve global competitiveness. In particular, real-time analysis of the information produced in the production processes of small businesses is evolving into a new process that analyzes, predicts, prescribes, and implements improvements to their performance. In this paper, we propose a platform-building model that turns the automated production information systems of small businesses into big data so that the data they generate can be upgraded. The proposed model supports the operational efficiency (consulting and training) and strategic decision-making of small businesses by drawing on a variety of data about the products they produce, collected by smart SMEs. The model is also characterized by close cooperation, information sharing, and system linkage between small and medium-sized businesses with different regional characteristics.

Image Deduplication Based on Hashing and Clustering in Cloud Storage

  • Chen, Lu;Xiang, Feng;Sun, Zhixin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1448-1463 / 2021
  • As cloud storage continues to develop, a great deal of redundant data accumulates in it, especially multimedia data such as images and videos. Data deduplication is a data reduction technology that significantly reduces storage requirements and increases bandwidth efficiency. To ensure data security, users typically encrypt data before uploading it; however, there is a tension between data encryption and deduplication. Existing deduplication methods for regular files cannot be applied to image deduplication because images must be detected based on visual content. In this paper, we propose a secure image deduplication scheme based on hashing and clustering, which incorporates a novel perceptual hash algorithm based on Local Binary Patterns (LBP). In this scheme, the hash value of the image is used as the fingerprint for deduplication, and the image is transmitted in encrypted form. Images are clustered to reduce the time complexity of deduplication. The proposed scheme ensures the security of images and improves deduplication accuracy. Comparison with other image deduplication schemes demonstrates that our scheme performs somewhat better.
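The LBP-based fingerprinting idea can be sketched in pure Python. This is a toy version working on raw pixel lists; the paper's actual algorithm, and its clustering and encryption layers, are more involved:

```python
def lbp_codes(img):
    """8-bit Local Binary Pattern code for every interior pixel:
    each of the 8 neighbors >= the center contributes one bit."""
    h, w = len(img), len(img[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            codes.append(code)
    return codes

def hamming(a, b):
    """Bit-level Hamming distance between two code sequences."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

patch = [[10, 20, 30, 40],
         [50, 60, 70, 80],
         [90, 100, 110, 120],
         [130, 140, 150, 160]]
# A small brightness change keeps the fingerprint close, while
# inverting the patch flips every neighbor comparison.
nudged = [row[:] for row in patch]
nudged[1][1] += 2
inverted = [[160 - v for v in row] for row in patch]
print(hamming(lbp_codes(patch), lbp_codes(nudged)))    # small or zero
print(hamming(lbp_codes(patch), lbp_codes(inverted)))  # large
```

Because LBP compares pixels only with their local neighborhood, near-duplicate images land at small Hamming distances, which is what lets the hash serve as a deduplication fingerprint.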

A Study on Conversational Public Administration Service of the Chatbot Based on Artificial Intelligence (인공지능 기반 대화형 공공 행정 챗봇 서비스에 관한 연구)

  • Park, Dong-ah
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1347-1356 / 2017
  • Artificial intelligence-based services are expanding into a new industrial revolution. Thanks to the development of big data and deep learning technologies, artificial intelligence is now applied in everyday life, and data analysis and intelligent assistant services that integrate information from various fields have been commercialized. Chatbots with interactive artificial intelligence provide shopping, news, and other information services. Chatbot services, which some public institutions have begun to adopt, are still only a first step. This study summarizes chatbot services and the technologies behind them, and presents a direction for public administration chatbot services.

Collective Betweenness Centrality in Networks

  • Gombojav, Gantulga;Purevsuren, Dalaijargal;Sengee, Nyamlkhagva
    • Journal of Multimedia Information System / v.9 no.2 / pp.121-126 / 2022
  • The shortest-path betweenness value of a node quantifies the amount of information passing through it when every pair of nodes in the network exchanges information at full capacity, measured by the number of shortest paths between the pairs and assuming that information travels along shortest paths. It is calculated as the sum, over all node pairs, of the fraction of the shortest paths between the pair that actually pass through the node of interest. It is possible for a node to have a zero or underrated betweenness value while sitting right next to a giant flow of information, yet such nodes may have a significant influence on the network when the normal flow of information is disrupted. We propose a betweenness centrality measure, called collective betweenness, that takes the surroundings of a node into account. We compare our measure with other centrality metrics and show some of its applications.
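For reference, the standard shortest-path betweenness that the proposed collective measure builds on can be computed with Brandes' algorithm. This sketch handles unweighted, undirected graphs and does not implement the authors' collective betweenness:

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm: shortest-path betweenness for an
    unweighted, undirected graph given as {node: [neighbors]}."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        dist = {s: 0}
        sigma = {v: 0 for v in graph}
        sigma[s] = 1
        preds = {v: [] for v in graph}
        order = []
        queue = deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate pair dependencies in reverse BFS order.
        delta = {v: 0.0 for v in graph}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected pair was counted from both endpoints.
    return {v: b / 2 for v, b in bc.items()}

path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(betweenness(path))  # only "b" lies between the pair (a, c)
```

On the three-node path, only the middle node carries the single shortest path between the endpoints, so it gets betweenness 1 while the endpoints get 0 — the kind of zero value next to a flow that motivates the paper's collective measure.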

A Development of Digital Curation System for Creativity and Personality Education (창의 인성 교육에 대한 디지털 큐레이션 시스템 개발)

  • Kim, Jung-In;Kim, Byung-Man;Kim, Jung-Ju
    • Journal of Korea Multimedia Society / v.19 no.9 / pp.1710-1722 / 2016
  • With the advancement of information and communications technology and the universal dissemination of smartphones, ICT-based education is also in the limelight. In recent ICT-based education, teachers and learners produce massive digital data by consulting the vast information available on the Internet, and the produced data is filtered in the course of education and used for current and subsequent educational programs. To construct well-suited educational data from the massive data available on the Internet, it is important to verify the quality of that data. To this end, we propose an educational website that uses a digital curation system to provide data satisfying learners' visual needs. In this paper, we also present the design and implementation of a website that non-ICT majors can easily use, enabling them to conduct creativity and personality education using image and video contents.

Machine Learning-based Estimation of the Concentration of Fine Particulate Matter Using Domain Adaptation Method (Domain Adaptation 방법을 이용한 기계학습 기반의 미세먼지 농도 예측)

  • Kang, Tae-Cheon;Kang, Hang-Bong
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1208-1215 / 2017
  • Recently, public attention to and concern about fine particulate matter have been increasing. Because of construction and maintenance costs, air quality monitoring stations are insufficient, so the information people have about the concentration of fine particulate matter depends on their location. Studies have been undertaken to estimate fine-particle concentrations in areas without a monitoring station, but they are limited in that the estimates cannot account for other factors that affect the concentration. To solve these problems, we propose a framework for estimating the concentration of fine particulate matter in a specific area using meteorological data and traffic data. Since there are more grid cells without a monitoring station than with one, we used a domain adversarial neural network based on the domain adaptation method. Features extracted from meteorological and traffic data are learned by the network, and the air quality index of the corresponding area is then predicted by the resulting model. Experimental results demonstrate that, as the amount of source data increases, the proposed method outperforms a method based on conditional random fields.
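The core trick of the domain-adversarial network mentioned above is the gradient reversal layer: an identity map in the forward pass whose backward pass multiplies the incoming gradient by -λ, so the feature extractor is pushed to confuse the domain classifier. A scalar sketch — the λ value is an illustrative hyperparameter, not taken from the paper:

```python
LAMBDA = 0.5  # reversal strength; illustrative hyperparameter

def grl_forward(features):
    """Forward pass: the gradient reversal layer is the identity."""
    return features

def grl_backward(upstream_grad, lam=LAMBDA):
    """Backward pass: flip and scale the domain-classifier gradient,
    so feature-extractor updates move toward domain *confusion*."""
    return -lam * upstream_grad

print(grl_forward(3.0), grl_backward(2.0))
```

In a full model these two functions would sit between the feature extractor and the domain classifier, while the label predictor's gradient passes through unchanged.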