• Title/Summary/Keyword: big data service


A Study on Security Event Detection in ESM Using Big Data and Deep Learning

  • Lee, Hye-Min; Lee, Sang-Joon
    • International Journal of Internet, Broadcasting and Communication / v.13 no.3 / pp.42-49 / 2021
  • As cyber attacks become more intelligent, detecting advanced attacks is difficult in fields such as industry, defense, and medical care. Security systems such as IPS (Intrusion Prevention System) are operated individually, so the need for centralized, integrated management of each security system is increasing. In this paper, we collect big data for intrusion detection and build an intrusion detection platform using deep learning with a CNN (Convolutional Neural Network). We design an intelligent big data platform that collects data by observing and analyzing user visit logs and linking them with big data, and on this basis we build an intrusion detection platform based on the CNN model. We evaluated the performance of the Intrusion Detection System (IDS) using the KDD99 dataset, which was derived from the 1998 DARPA intrusion detection evaluation data, and tested it against KDD99's four attack categories: DoS, U2R, R2L, and Probing.
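The evaluation above groups KDD99 connection records into four attack categories before training and testing the classifier. A minimal Python sketch of that label-mapping step (using the commonly cited KDD99 label groupings, not code or lists taken from the paper) might look like:

```python
# Minimal sketch (not the paper's code): map raw KDD99 connection labels to the
# four attack categories used in the evaluation (DoS, Probe, R2L, U2R).
# The label sets below are the commonly used KDD99 groupings, an assumption here.
KDD99_CATEGORIES = {
    "dos":   {"back", "land", "neptune", "pod", "smurf", "teardrop"},
    "probe": {"ipsweep", "nmap", "portsweep", "satan"},
    "r2l":   {"ftp_write", "guess_passwd", "imap", "multihop",
              "phf", "spy", "warezclient", "warezmaster"},
    "u2r":   {"buffer_overflow", "loadmodule", "perl", "rootkit"},
}

def categorize(label: str) -> str:
    """Map a raw KDD99 connection label to one of the four attack
    categories, or 'normal' for benign traffic."""
    label = label.rstrip(".").lower()  # raw labels appear as e.g. 'smurf.'
    if label == "normal":
        return "normal"
    for category, members in KDD99_CATEGORIES.items():
        if label in members:
            return category
    return "unknown"
```

A classifier such as the paper's CNN would then be trained against these five coarse classes rather than the dozens of raw labels.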

Design and Implementation of a Flood Disaster Safety System Using Realtime Weather Big Data (실시간 기상 빅데이터를 활용한 홍수 재난안전 시스템 설계 및 구현)

  • Kim, Yeonwoo; Kim, Byounghoon; Ko, Geonsik; Choi, Minwoong; Song, Heesub; Kim, Gihoon; Yoo, Seunghun; Lim, Jongtae; Bok, Kyungsoo; Yoo, Jaesoo
    • The Journal of the Korea Contents Association / v.17 no.1 / pp.351-362 / 2017
  • Recently, analysis techniques that extract new meanings through big data analysis, and various services based on them, have been developed. Among such services, disaster safety services have attracted attention as the most important. In this paper, we design and implement a flood disaster safety system using real-time weather big data. The proposed system retrieves and processes vast amounts of information collected in real time. In addition, it analyzes risk factors by aggregating the collected real-time and past data and then provides users with prediction information. The proposed system also provides users with risk prediction information by processing real-time data such as user messages and news, and by analyzing disaster risk factors such as typhoons and floods. As a result, users can prepare for potential disaster safety risks through the proposed system.
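The risk-analysis step described above aggregates real-time observations with past data to produce prediction information. A hypothetical sketch of such a rule, comparing current rainfall against a historical baseline (the thresholds and ratio are illustrative assumptions, not values from the paper), could be:

```python
# Hypothetical sketch of the risk-factor analysis described above: combine
# real-time rainfall with a historical average to produce a coarse risk level.
# The thresholds (15/30 mm/hr) and ratios (2x/3x) are illustrative assumptions.
def flood_risk_level(rainfall_mm_per_hr: float, past_avg_mm_per_hr: float) -> str:
    """Return a coarse flood risk level from current vs. historical rainfall."""
    ratio = rainfall_mm_per_hr / max(past_avg_mm_per_hr, 0.1)  # avoid div by zero
    if rainfall_mm_per_hr >= 30 or ratio >= 3.0:
        return "danger"
    if rainfall_mm_per_hr >= 15 or ratio >= 2.0:
        return "warning"
    return "normal"
```

A real system of this kind would feed many more factors (river levels, typhoon tracks, news reports) into the decision; the sketch only illustrates the current-versus-past aggregation.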

The Development and Application of the Big Data Analysis Course for the Improvement of the Data Literacy Competency of Teacher Training College Students (예비교사의 데이터 리터러시 역량 증진을 위한 빅데이터 분석 교양강좌의 개발 및 적용)

  • Kim, Seulki; Kim, Taeyoung
    • Journal of The Korean Association of Information Education / v.26 no.2 / pp.141-151 / 2022
  • Recently, basic literacy education related to digital literacy and data literacy has been emphasized for students who will live in a rapidly developing future digital society. Accordingly, demand for education that improves big data and data literacy as basic knowledge is also increasing in general universities and universities of education. Therefore, this study designed and applied a big data analysis course for pre-service teachers and analyzed its impact on data literacy. Analysis of interest in and understanding of the applied course confirmed that it was appropriate for the level of pre-service teachers, and there was significant improvement in competencies in all areas of data literacy: 'knowledge', 'skills', and 'values and attitudes'. It is hoped that the results of this study will contribute to enhancing the data literacy of students and pre-service teachers by supporting systematic data literacy education research.

An Efficient Cloud Service Quality Performance Management Method Using a Time Series Framework (시계열 프레임워크를 이용한 효율적인 클라우드서비스 품질·성능 관리 방법)

  • Jung, Hyun Chul; Seo, Kwang-Kyu
    • Journal of the Semiconductor & Display Technology / v.20 no.2 / pp.121-125 / 2021
  • Cloud services must always be available and must respond immediately to user requests. This study suggests a method for constructing a proactive and autonomous quality and performance management system that meets these characteristics of cloud services. To this end, we identify quantitative measurement factors for cloud service quality and performance management, define a structure for applying a time series framework to cloud service quality and performance management for proactive management, and then use big data and artificial intelligence for autonomous management. The flow of data processing and the configuration and flow of the big data and artificial intelligence platforms were defined to combine these intelligent technologies. In addition, the effectiveness was confirmed by applying the method to a cloud service quality and performance management system in a case study. Using the methodology presented in this study, service management systems that have so far been managed manually and reactively can be improved through various forms of convergence. However, since the method requires the collection and processing of various types of data, it also has the limitation that data standardization must come first in each technology and industry.
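Proactive management of the kind described above typically means watching a quality metric as a time series and flagging deviations before users are affected. A minimal illustrative sketch (not the paper's framework; the window size and threshold multiplier are assumed values) using a rolling baseline over response times:

```python
# Illustrative sketch (not the paper's framework): flag cloud service response
# times that drift above a rolling time-series baseline, so operators can act
# proactively. Window size and threshold multiplier are assumed values.
from collections import deque

class ResponseTimeMonitor:
    def __init__(self, window: int = 5, threshold: float = 2.0):
        self.samples = deque(maxlen=window)  # rolling window of recent latencies
        self.threshold = threshold           # alert when sample > threshold * mean

    def observe(self, latency_ms: float) -> bool:
        """Record a latency sample; return True if it breaches the baseline."""
        breach = (
            len(self.samples) == self.samples.maxlen
            and latency_ms > self.threshold * (sum(self.samples) / len(self.samples))
        )
        self.samples.append(latency_ms)
        return breach
```

A production system would replace the rolling mean with the paper's time series framework and trigger autonomous remediation rather than a boolean flag.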

Survey of Service Industry Policy and Big Data Analysis of Core Technology in Preparation of the Fourth Industrial Revolution (4차 산업혁명에 대비한 서비스산업 정책 고찰과 핵심기술의 빅데이터 분석)

  • Byun, Daeho
    • Journal of Service Research and Studies / v.8 no.1 / pp.73-87 / 2018
  • Countries around the world are preparing policies to promote the service economy. Recently, as the fourth industrial revolution accelerates, interest in the service industry is increasing. Korea's service industry ranks among the lowest of the OECD countries in terms of employment, value added, and productivity, and it is time to explore new development strategies. The Korean government is establishing a service economy development strategy to promote employment and economic vitality. In the era of the 4th industrial revolution, however, the service industry is especially important in that it must be fused with the manufacturing industry. This study examines, through a literature review, the service industry policies related to the 4th industrial revolution that the central government, local governments, and countries around the world are pursuing. Big data analysis is used to determine the level of interest in the seven major service industries and in the core technologies of the fourth industrial revolution.

A Study on Construction of Platform Using Spectrum Big Data (전파 빅데이터 활용을 위한 플랫폼 구축방안 연구)

  • Kim, Hyoung Ju; Ra, Jong Hei; Jeon, Woong Ryul; Kim, Pankoo
    • Smart Media Journal / v.9 no.2 / pp.99-109 / 2020
  • This paper proposes a platform construction plan for the use of spectrum big data: it collects and analyzes big data in the radio wave field, establishes a linkage plan, and presents a support system scheme for linking and using spectrum and public sector big data, together with a plan to build a big data platform in connection with the spectrum public sector. Given the current lack of a support system for systematic analysis and utilization of big data in the field of radio waves, establishing such a platform construction plan for radio-related industries supports a preemptive response to the 4th Industrial Revolution, secures an innovation growth engine for the domestic radio field, and contributes to fair competition in the radio wave industry and to the improvement of service quality, thereby enhancing convenience for users of the public sector big data platform. In addition, it intends to contribute to raising social awareness of the value of spectrum management data utilization and to establishing a collaboration system that uses spectrum big data through joint use of the platform.

A Study of Relationship between Dataveillance and Online Privacy Protection Behavior under the Advent of Big Data Environment (빅데이터 환경 형성에 따른 데이터 감시 위협과 온라인 프라이버시 보호 활동의 관계에 대한 연구)

  • Park, Min-Jeong; Chae, Sang-Mi
    • Knowledge Management Research / v.18 no.3 / pp.63-80 / 2017
  • A Big Data environment is established as users continuously share and provide personal information in the online environment, accumulating vast amounts of data. Accordingly, the more data accumulates online, the more easily it can be accessed by third parties without users' permission compared to the past. By utilizing data-driven strategies, firms can now predict customers' preferences and consumption propensity relatively exactly. This Big Data environment, on the other hand, establishes 'Dataveillance', meaning that anybody can watch or control users' behaviors by using the data stored online. The main objective of this study is to identify the relationship between Dataveillance and users' online privacy protection behaviors. To achieve this, we first investigate how perceived online service efficiency, loss of control over privacy, offline surveillance, and the necessity of regulation influence the threats users perceive from Dataveillance.

Application Of Open Data Framework For Real-Time Data Processing (실시간 데이터 처리를 위한 개방형 데이터 프레임워크 적용 방안)

  • Park, Sun-ho; Kim, Young-kil
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.10 / pp.1179-1187 / 2019
  • In today's technology environment, most big data-based applications and solutions are based on real-time processing of streaming data. Real-time processing and analysis of big data streams therefore plays an important role in developing such applications and solutions. In particular, in the maritime data processing environment, the explosion of data is accelerating the need for technology capable of rapidly processing and analyzing large amounts of real-time data. Therefore, this paper analyzes the characteristics of NiFi, Kafka, and Druid as suitable open source technologies among the various open data technologies for processing big data, and provides up-to-date information on the external linkage necessary for maritime service analysis in the Korean e-Navigation service. In doing so, we lay the foundation for applying open data framework technology to real-time data processing.
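In a pipeline like the one analyzed above, NiFi typically handles ingest and routing, Kafka buffers the event stream, and Druid serves aggregation queries. A toy Python sketch of that division of roles, with a plain in-process queue standing in for each real system (an assumption purely for illustration; the field names are hypothetical):

```python
# Toy sketch of the ingest -> buffer -> aggregate flow played by NiFi, Kafka,
# and Druid in the pipeline above. Each stage here is a plain Python stand-in
# for the real system; 'ship_id' and 'speed' are hypothetical field names.
import json
from queue import Queue

def ingest(raw_lines, topic: Queue) -> None:
    """NiFi-like role: parse raw sensor lines and route events to the buffer."""
    for line in raw_lines:
        topic.put(json.loads(line))

def aggregate(topic: Queue) -> dict:
    """Druid-like role: aggregate buffered events (here: mean speed per ship)."""
    totals: dict[str, list[float]] = {}
    while not topic.empty():
        event = topic.get()  # Kafka-like role: the queue buffers the stream
        totals.setdefault(event["ship_id"], []).append(event["speed"])
    return {ship: sum(v) / len(v) for ship, v in totals.items()}
```

The real systems add what the stand-ins lack: durable, partitioned buffering (Kafka), visual flow management and back-pressure (NiFi), and indexed sub-second analytics (Druid).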

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo; Kim, Hwa Jong
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.109-122 / 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is utilized for marketing and for solving social problems by analyzing data that is currently open or collected directly. In Korea, various companies and individuals are taking on big data analysis, but limitations on big data disclosure and difficulties in collection make even the initial stage of analysis hard. Nowadays, system improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, mainly services for opening public data such as the domestic Government 3.0 portal (data.go.kr). In addition to government efforts, services that share data held by corporations or individuals are running, but it is difficult to find useful data because of the shortage of shared data. Furthermore, big data traffic problems can occur because the entire dataset must be downloaded and examined just to grasp the attributes of and basic information about the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed as a way to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper: it means providing users with results generated by analyzing the data in advance. Through pre-analysis, the usability of big data can be improved by giving a data user information with which to grasp the properties and characteristics of a dataset when searching for it. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problems that may occur when original data is disclosed can be avoided, enabling big data sharing between the data provider and the data user.
Second, it is necessary to quickly generate appropriate preprocessing results according to the disclosure level or network status of the raw data, and to provide the results to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing the data requested by a user, the data is reduced to a size transferable on the current network before transmission so that no big traffic occurs. In this paper, we present various data sizes according to the disclosure level determined through pre-analysis. This method is expected to show a low traffic volume compared with the conventional approach of sharing only raw data across many systems. We describe how to solve the problems that occur when big data is released and used, and how to facilitate sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, and consists of a Server Agent and a Client Agent deployed on the server and client sides respectively. The Server Agent is required by the data provider; it performs pre-analysis of big data to generate a Data Descriptor containing information on the Sample Data, Summary Data, and Raw Data. In addition, it performs fast and efficient big data preprocessing through distributed big data processing and continuously monitors network traffic. The Client Agent is placed on the data user side; it searches big data through the Data Descriptor produced by the pre-analysis, quickly finds the desired data, and requests it from the server to download the big data according to the disclosure level. The Server Agent and Client Agent are separated so that data published by the provider can be used by the user.
In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem; we construct the detailed modules of the client-server model and present the design method of each module. In a system designed on the basis of the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses it into new data. By disclosing the newly processed data through the Server Agent, the data user changes roles and becomes a data provider. A data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analysis on the sample data. In this way, raw data is processed and the processed big data is utilized by users, forming a natural sharing environment. The roles of data provider and data user are not distinguished, yielding an ideal shared service in which everyone can be both a provider and a user. The client-server model thus solves the problem of sharing big data, provides a free sharing environment for secure big data disclosure, and makes big data easy to find.
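The Data Descriptor at the heart of this model is a small, shareable artifact: summary statistics plus a sample, so users can judge a dataset without pulling the raw data. A hypothetical sketch of the Server Agent's pre-analysis step (the field names and statistics chosen here are assumptions, not the paper's actual schema) might be:

```python
# Hypothetical sketch of the pre-analysis step: a Server Agent builds a
# Data Descriptor (summary statistics plus a small sample) so a data user can
# evaluate a dataset without downloading the raw data. The descriptor fields
# are illustrative assumptions, not the paper's actual schema.
def build_data_descriptor(rows: list[float], sample_size: int = 3) -> dict:
    """Summarize a list of numeric records into a shareable descriptor."""
    values = sorted(rows)
    return {
        "summary": {
            "count": len(values),
            "min": values[0],
            "max": values[-1],
            "mean": sum(values) / len(values),
        },
        "sample": rows[:sample_size],  # small excerpt instead of the raw data
        "raw_size_bytes": sum(len(str(v)) for v in rows),  # rough size hint
    }
```

In the paper's model this descriptor would be produced by distributed Spark jobs over much larger, heterogeneous data, and the Client Agent would search over descriptors rather than datasets.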

Study for Spatial Big Data Concept and System Building (공간빅데이터 개념 및 체계 구축방안 연구)

  • Ahn, Jong Wook; Yi, Mi Sook; Shin, Dong Bin
    • Spatial Information Research / v.21 no.5 / pp.43-51 / 2013
  • In this study, the concept of spatial big data and effective ways to build a spatial big data system are presented. Big data is commonly defined by 3Vs (volume, variety, velocity); spatial big data is the basis for evolving from 3V big data to 6V big data (volume, variety, velocity, value, veracity, visualization). To make spatial big data effective, construction of a spatial big data system should be promoted. The spatial big data system should encompass the national spatial information base, a convergence platform, service providers, and data providers as factors of production, and is made up of infrastructure (hardware), technology (software), spatial big data (data), human resources, law, and so on. The goals of building the spatial big data system are spatially based policy support, activation of industries based on the spatial big data platform, spatial big data fusion-based services, and spatially informed responses to social issues. Strategies for achieving these objectives are building government-wide cooperation, creating and activating new industries, building the spatial big data platform, and strengthening the technological competitiveness of spatial big data.