• Title/Summary/Keyword: Big data processing

Search Results: 1,038

Implementation of a KoBERT-based Emotion Analysis Communication Platform for Parents of Children with Disabilities (장애아 부모를 위한 KoBERT 기반 감정분석 소통 플랫폼 구현)

  • Jae-Hyung Ha;Ji-Hye Huh;Won-Jib Kim;Jung-Hun Lee;Woo-Jung Park
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.1014-1015 / 2023
  • Many parents of children with disabilities feel considerable psychological pressure from the stress of child-rearing and worries about the future. Yet, while the number of people with disabilities grows every year, programs that address the psychological and mental health of these parents and their families remain scarce.[1] To address this, this paper proposes an emotion analysis communication platform. The proposed platform fine-tunes a KoBERT model to analyze the emotions in users' diary entries and thereby supports communication among parents and family members of children with disabilities. To evaluate the platform's core function, KoBERT-based emotion analysis, its performance metrics are compared against LSTM, Bi-LSTM, and GRU models, which are widely used for text classification. In the evaluation, KoBERT's accuracy was on average 31.4% higher than that of the other classifiers, and it also scored comparatively high on the remaining metrics.
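
As a rough illustration of the fine-tuning step the abstract describes, the sketch below trains a Korean BERT encoder for emotion classification with HuggingFace Transformers. The checkpoint id (klue/bert-base, a stand-in for the paper's KoBERT), the seven-way label set, and all hyperparameters are assumptions, not details from the paper.

```python
# Hedged sketch: fine-tune a Korean BERT encoder for emotion classification.
# Checkpoint, label count, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "klue/bert-base"   # stand-in; substitute the KoBERT checkpoint
NUM_EMOTIONS = 7                # assumed size of the emotion label set

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, num_labels=NUM_EMOTIONS)

# Toy diary batch; real training would iterate a labeled diary dataset.
train_batches = [(["오늘은 아이와 웃으며 하루를 보냈다."], [3])]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for texts, labels in train_batches:
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    batch["labels"] = torch.tensor(labels)
    loss = model(**batch).loss        # cross-entropy over emotion labels
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```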

An Efficient Algorithm of Data Anonymity based on Anonymity Groups (익명 그룹 기반의 효율적인 데이터 익명화 알고리즘)

  • Kwon, Ho Yeol
    • Journal of Industrial Technology / v.36 / pp.89-92 / 2016
  • In this paper, we propose an efficient anonymization algorithm for personal information protection in big data systems. First, we briefly introduce the fundamental algorithms of k-anonymity, l-diversity, and t-closeness. We then propose an anonymization algorithm that controls the size of anonymity groups and exchanges data tuples between them. Finally, we demonstrate an example to which the proposed algorithm is applied. The proposed scheme provides an efficient and simple algorithm for processing large amounts of data.
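
A minimal sketch of the anonymity-group idea: partition records on their quasi-identifiers, fold undersized groups together to enforce the minimum group size, then generalize each quasi-identifier to the group-wide range. This illustrates the general technique only; the paper's exact size-control and tuple-exchange rules are not reproduced.

```python
# Sketch of k-anonymity via anonymity groups (illustrative, not the
# paper's exact algorithm).
from collections import defaultdict

K = 3  # every anonymity group must contain at least K records

def anonymize(records, quasi_ids):
    # 1. Partition records into groups by their quasi-identifier values.
    groups = defaultdict(list)
    for rec in records:
        groups[tuple(rec[q] for q in quasi_ids)].append(rec)

    # 2. Control group sizes: fold undersized groups into the smallest
    #    remaining group (a crude stand-in for tuple exchange).
    buckets = sorted(groups.values(), key=len)
    while len(buckets) > 1 and len(buckets[0]) < K:
        smallest = buckets.pop(0)
        buckets[0].extend(smallest)
        buckets.sort(key=len)

    # 3. Generalize each quasi-identifier to the group-wide value range,
    #    making records within a group indistinguishable.
    result = []
    for bucket in buckets:
        spans = {q: f"{min(r[q] for r in bucket)}-{max(r[q] for r in bucket)}"
                 for q in quasi_ids}
        for rec in bucket:
            result.append({**rec, **spans})
    return result

people = [{"age": 23, "zip": 12305, "disease": "flu"},
          {"age": 27, "zip": 12308, "disease": "cold"},
          {"age": 25, "zip": 12301, "disease": "flu"},
          {"age": 41, "zip": 12309, "disease": "cold"}]
print(anonymize(people, quasi_ids=["age", "zip"]))
```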


Scalable Big Data Pipeline for Video Stream Analytics Over Commodity Hardware

  • Ayub, Umer;Ahsan, Syed M.;Qureshi, Shavez M.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.4 / pp.1146-1165 / 2022
  • A huge amount of data in the form of videos and images is being produced owning to advancements in sensor technology. Use of low performance commodity hardware coupled with resource heavy image processing and analyzing approaches to infer and extract actionable insights from this data poses a bottleneck for timely decision making. Current approach of GPU assisted and cloud-based architecture video analysis techniques give significant performance gain, but its usage is constrained by financial considerations and extremely complex architecture level details. In this paper we propose a data pipeline system that uses open-source tools such as Apache Spark, Kafka and OpenCV running over commodity hardware for video stream processing and image processing in a distributed environment. Experimental results show that our proposed approach eliminates the need of GPU based hardware and cloud computing infrastructure to achieve efficient video steam processing for face detection with increased throughput, scalability and better performance.
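
The shape of such a pipeline can be sketched as follows: JPEG frames arrive on a Kafka topic, Spark Structured Streaming distributes them across commodity nodes, and OpenCV runs Haar-cascade face detection on each frame. The topic name, broker address, and console sink are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a Kafka -> Spark -> OpenCV face-detection pipeline.
import cv2
import numpy as np
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

spark = (SparkSession.builder
         .appName("video-stream-face-detection")
         .getOrCreate())

frames = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # assumed
          .option("subscribe", "video-frames")                  # assumed topic
          .load())

def count_faces(jpeg_bytes):
    # Decode the JPEG payload and run the stock OpenCV Haar cascade.
    img = cv2.imdecode(np.frombuffer(jpeg_bytes, np.uint8),
                       cv2.IMREAD_GRAYSCALE)
    if img is None:
        return 0
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return len(cascade.detectMultiScale(img, 1.1, 4))

faces_udf = udf(count_faces, IntegerType())

query = (frames.select(faces_udf(frames.value).alias("faces"))
         .writeStream.format("console").start())
query.awaitTermination()
```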

Transaction Processing Method for Column-based NoSQL

  • Kim, Jeong-Joon
    • Journal of Information Processing Systems / v.13 no.6 / pp.1575-1584 / 2017
  • As interest in big data has increased recently, NoSQL, a solution for storing and processing big data, is receiving attention. NoSQL offers high speed, high availability, and high scalability, but it is of limited use where data integrity matters because it does not support multi-row transactions. To overcome this drawback, many studies are underway to support multi-row transactions in NoSQL. However, existing approaches have the disadvantage that the number of transactions processed per unit of time is low and performance degrades. Therefore, in this paper, we design and implement a multi-row transaction system for data integrity in a big data environment based on HBase, a widely used column-based NoSQL store. The system performs multi-row transactions efficiently by adding columns that manage transaction information to every user table. In addition, it controls the execution, collision, and recovery of multi-row transactions through a transaction manager, and communicates with HBase through a communication manager to exchange the information that multi-row transactions require. Finally, we performed a comparative performance evaluation against HAcid and Haeinsa and verified the superiority of the multi-row transaction system developed in this paper.
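
A minimal sketch of the bookkeeping idea, using the happybase client: each user table carries extra columns in a dedicated transaction column family recording which transaction last touched a row, so conflicts can be detected before writing. The column family, qualifiers, and commit protocol below are assumptions, not the paper's actual design.

```python
# Hedged sketch: per-row transaction metadata columns over HBase.
import time
import happybase

conn = happybase.Connection("localhost")   # assumed HBase Thrift endpoint
table = conn.table("user_table")           # assumed user table name

def txn_put(row_key, data, txn_id):
    """Write row data together with transaction metadata columns."""
    current = table.row(row_key, columns=[b"txn:id", b"txn:state"])
    if current.get(b"txn:state") == b"PENDING":
        raise RuntimeError("conflict: row locked by txn %s"
                           % current[b"txn:id"].decode())
    payload = dict(data)
    payload[b"txn:id"] = txn_id.encode()        # who is writing
    payload[b"txn:state"] = b"PENDING"          # not yet committed
    payload[b"txn:ts"] = str(time.time()).encode()
    table.put(row_key, payload)

def txn_commit(row_keys, txn_id):
    """Mark every row touched by the transaction as committed."""
    for key in row_keys:
        table.put(key, {b"txn:state": b"COMMITTED"})

txn_put(b"row1", {b"cf:balance": b"100"}, "tx-42")
txn_put(b"row2", {b"cf:balance": b"200"}, "tx-42")
txn_commit([b"row1", b"row2"], "tx-42")
```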

Development of the Unified Database Design Methodology for Big Data Applications - based on MongoDB -

  • Lee, Junho;Joo, Kyungsoo
    • Journal of the Korea Society of Computer and Information / v.23 no.3 / pp.41-48 / 2018
  • The recent surge of big data is characterized by the continuous generation of data, its large volume, and its unstructured format. Existing relational database technologies are inadequate for such big data because of their limited processing speed and the significant cost of expanding storage. Currently implemented solutions are mainly based on relational databases that are no longer suited to these data volumes. NoSQL solutions allow us to consider new approaches to data warehousing, especially from the point of view of multidimensional data management. In this paper, we develop and propose a unified design methodology based on MongoDB for big data applications. The proposed methodology is more scalable than existing methodologies, making it easier to handle big data.
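
The core trade-off such a design methodology formalizes, embedding related data in one document versus referencing it across collections, can be sketched with pymongo as follows; the collection and field names are illustrative, not from the paper.

```python
# Hedged sketch: embedded vs. referenced document design in MongoDB.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["shop"]   # assumed database name

# Embedded design: order lines live inside the order document, so one
# read returns the whole aggregate (good for read-heavy, co-accessed data).
db.orders_embedded.insert_one({
    "order_id": 1001,
    "customer": {"name": "Kim", "city": "Seoul"},
    "lines": [{"sku": "A-1", "qty": 2}, {"sku": "B-7", "qty": 1}],
})

# Referenced design: customers form a separate collection and orders hold
# only the key (good when the referenced data is large or widely shared).
cust_id = db.customers.insert_one({"name": "Kim", "city": "Seoul"}).inserted_id
db.orders_referenced.insert_one({
    "order_id": 1001,
    "customer_id": cust_id,
    "lines": [{"sku": "A-1", "qty": 2}],
})

# The embedded form answers "order with its customer" in a single query:
print(db.orders_embedded.find_one({"order_id": 1001}))
```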

Big data-based piping material analysis framework in offshore structure for contract design

  • Oh, Min-Jae;Roh, Myung-Il;Park, Sung-Woo;Chun, Do-Hyun;Myung, Sehyun
    • Ocean Systems Engineering / v.9 no.1 / pp.79-95 / 2019
  • The material analysis of an offshore structure is generally conducted in the contract design phase to produce a price quotation for a new offshore project. This analysis is performed manually by an engineer, which is time-consuming and can lead to inaccurate results because the data from previous projects is too large and there are too many materials to consider. In this study, the piping materials in an offshore structure are analyzed for contract design using a big data framework. The big data technologies used include HDFS (Hadoop Distributed File System) for data storage, Hive and HBase as the databases over the stored data, Spark and Kylin for data processing, and Zeppelin for the user interface and visualization. The results show that the proposed big data framework can reduce the effort that contract design spends on estimating piping material cost.
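
A sketch of what the Spark step of such a framework might look like: piping material records from previous projects are read from Hive and aggregated per material spec for cost estimation. The table and column names are assumed for illustration, not taken from the paper.

```python
# Hedged sketch: Spark aggregation over Hive-stored piping material data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("piping-material-analysis")
         .enableHiveSupport()      # Hive holds the saved project data
         .getOrCreate())

piping = spark.table("offshore.piping_materials")   # assumed Hive table

summary = (piping
           .groupBy("project_id", "material_spec")  # assumed columns
           .agg(F.sum("quantity").alias("total_qty"),
                F.avg("unit_price").alias("avg_unit_price"))
           .withColumn("est_cost",
                       F.col("total_qty") * F.col("avg_unit_price")))

summary.orderBy(F.desc("est_cost")).show(20)
```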

A Big-Data Trajectory Combination Method for Navigation Using Collected Trajectory Data (수집된 경로데이터를 사용하는 내비게이션을 위한 대용량 경로조합 방법)

  • Koo, Kwang Min;Lee, Taeho;Park, Heemin
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.386-395 / 2016
  • In trajectory-based navigation systems, a huge amount of trajectory data is needed for efficient route exploration. However, it would be very hard to collect trajectories covering all possible start and destination combinations. As a practical solution to this problem, we suggest a method that combines collected GPS trajectories into additional generated trajectories with new start and destination combinations, without using road information. We present a trajectory combination algorithm and its implementation in the Scala programming language on the Spark platform for big data processing. The experimental results show that the proposed method can effectively expand the collected trajectories into more than three hundred times as many valid trajectory paths.
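
A simplified, single-machine sketch of the combination idea (the paper implements it in Scala on Spark): two collected trajectories that pass through a common point are spliced into a new trajectory with a new start/destination pair. The coordinates are illustrative.

```python
# Hedged sketch: splice two trajectories at a shared point to generate a
# trajectory with a start/destination pair absent from the collected data.
def combine(traj_a, traj_b):
    """Join traj_a's prefix to traj_b's suffix at a shared point."""
    shared = set(traj_a) & set(traj_b)
    for i, p in enumerate(traj_a):
        if p in shared:
            j = traj_b.index(p)
            new = traj_a[:i + 1] + traj_b[j + 1:]
            # Keep it only if (start, destination) differs from both
            # original pairs, i.e. it is a genuinely new combination.
            if new[0] != traj_b[0] and new[-1] != traj_a[-1]:
                return new
    return None

a = [(0, 0), (1, 0), (1, 1), (2, 1)]        # collected GPS trajectory A
b = [(0, 2), (1, 1), (2, 2), (3, 2)]        # collected GPS trajectory B
print(combine(a, b))   # -> [(0, 0), (1, 0), (1, 1), (2, 2), (3, 2)]
```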

Optimization and Performance Analysis of Cloud Computing Platform for Distributed Processing of Big Data (대용량 데이터의 분산 처리를 위한 클라우드 컴퓨팅 환경 최적화 및 성능평가)

  • Hong, Seung-Tae;Shin, Young-Sung;Chang, Jae-Woo
    • Spatial Information Research / v.19 no.4 / pp.55-71 / 2011
  • Recently, interest in cloud computing, which provides IT resources as services, has been increasing in the IT field. As a result, much research has been done on distributed data processing that stores and manages large amounts of data across many servers. Meanwhile, to effectively utilize spatial data, which is growing rapidly with advances in GIS technology, the distributed processing of spatial data using cloud computing is essential. Therefore, in this paper, we review the representative distributed data processing techniques and analyze the optimization requirements for improving their performance on large amounts of data. In addition, using Hadoop, we evaluate the performance of the distributed data processing techniques against those optimization requirements.
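
As a concrete example of the kind of Hadoop job such an evaluation runs over spatial data, the sketch below is a Hadoop Streaming mapper/reducer that bins points into grid cells and counts them per cell. The grid size and input layout are assumptions, not the paper's workload.

```python
# Hedged sketch of a Hadoop Streaming job: grid-cell counting of points.
# One file serves as both mapper and reducer, e.g.:
#   hadoop jar hadoop-streaming.jar \
#     -mapper "grid.py map" -reducer "grid.py reduce" \
#     -input points.txt -output cells
import sys

CELL = 10.0   # grid cell size (assumed)

def mapper():
    for line in sys.stdin:
        _, x, y = line.split(",")                 # assumed input: id,x,y
        cx, cy = int(float(x) // CELL), int(float(y) // CELL)
        print(f"{cx}_{cy}\t1")                    # emit (cell, 1)

def reducer():
    current, count = None, 0
    for line in sys.stdin:                        # keys arrive sorted
        key, val = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{count}")      # emit finished cell
            current, count = key, 0
        count += int(val)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```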

A Service Data Encryption Model against the Illegal Use of Government Civil Affairs Service Center Applications in Big Data Environments (빅데이터 환경에서 정부민원서비스센터 어플리케이션 불법 이용에 대한 서비스 자료 암호화 모델)

  • Kim, Myeong Hee;Baek, Hyun Chul;Hong, Suk Won;Park, Jae Heung
    • Convergence Security Journal / v.15 no.7 / pp.31-38 / 2015
  • Recently, the government civil affairs administration system has advanced from a simple network environment to a cloud computing environment. Today's electronic civil affairs processing environment amounts to big data services based on cloud computing. Consequently, processing big data for the government civil affairs service raises many problems compared to the conventional information acquisition environment: new information is produced by collecting the required data from different information systems, going far beyond the information services of conventional network environments. In such an environment, the applications that provide administrative information for big data processing have become a major target of illegal attackers. The objectives of this study are to prevent illegal use of the electronic civil affairs service, based on the IPs of civil affairs centers located nationwide, and to protect the important data these centers hold against leaks. To achieve this, the safety, usability, and security of the services are ensured by using different authentication processes and encryption methods built on those processes.
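
A minimal sketch of the two controls combined here, an IP allow-list for registered centers plus encryption of the service payload, using Fernet as a stand-in cipher; the paper's own authentication scheme and cipher choices may differ.

```python
# Hedged sketch: IP-based authentication plus payload encryption.
from cryptography.fernet import Fernet

ALLOWED_CENTER_IPS = {"10.1.2.3", "10.1.2.4"}   # assumed registered IPs
key = Fernet.generate_key()                     # per-deployment secret
cipher = Fernet(key)

def serve_civil_record(client_ip: str, record: str) -> bytes:
    # 1. IP-based authentication: reject requests from unknown centers.
    if client_ip not in ALLOWED_CENTER_IPS:
        raise PermissionError(f"unregistered source IP: {client_ip}")
    # 2. Encrypt the service data before it leaves the server.
    return cipher.encrypt(record.encode("utf-8"))

token = serve_civil_record("10.1.2.3", "resident registration extract")
print(cipher.decrypt(token).decode("utf-8"))    # authorized center decrypts
```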

A Study of Big Data Information Systems Building and Cases (빅데이터 정보시스템의 구축 및 사례에 관한 연구)

  • Lee, Choong Kwon
    • Smart Media Journal / v.4 no.3 / pp.56-61 / 2015
  • Although many successful big data cases have been reported, building big data information systems is still difficult. From the technology perspective, builders need to understand the whole process of systems development, from collecting, storing, processing, and analyzing data to presenting and using information. From the business perspective, builders need to understand the value of the proposed big data project and explain it to the top managers who must decide on the risky investment. This study proposes a 5W1H framework that can help builders understand the issues involved in developing big data information systems. In addition, real-world big data cases are illustrated by applying them to the framework. The framework is expected to help builders understand and manage big data projects and lead managers to make better investment decisions in the development of information systems.