• Title/Summary/Keyword: big data

Search Results: 6,038

A Big Data-Driven Business Data Analysis System: Applications of Artificial Intelligence Techniques in Problem Solving

  • Donggeun Kim;Sangjin Kim;Juyong Ko;Jai Woo Lee
    • The Journal of Bigdata
    • /
    • v.8 no.1
    • /
    • pp.35-47
    • /
    • 2023
  • It is crucial to develop effective and efficient big data analytics methods for problem-solving in the field of business in order to improve the performance of data analytics and reduce costs and risks in the analysis of customer data. In this study, a big data-driven data analysis system using artificial intelligence techniques is designed to increase the accuracy of big data analytics along with the rapid growth of the field of data science. We present a key direction for big data analysis systems through missing value imputation, outlier detection, feature extraction, utilization of explainable artificial intelligence techniques, and exploratory data analysis. Our objective is not only to develop big data analysis techniques with complex structures of business data but also to bridge the gap between the theoretical ideas in artificial intelligence methods and the analysis of real-world data in the field of business.
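The abstract above names missing value imputation and outlier detection as key steps of the proposed analysis system. As a minimal pure-Python sketch of those two steps (not taken from the paper; the median/IQR choices and the `sales` data are illustrative assumptions):

```python
import statistics

def impute_median(values):
    """Replace None entries with the median of the observed values."""
    observed = [v for v in values if v is not None]
    med = statistics.median(observed)
    return [med if v is None else v for v in values]

def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical business data with missing entries and one anomaly.
sales = [12.0, None, 14.5, 13.0, 95.0, 12.8, None, 13.4]
clean = impute_median(sales)
print(clean)
print(iqr_outliers(clean))  # the 95.0 record is flagged
```

In practice these steps would run over far larger, more complex business data, typically with library support (e.g. a dataframe library), but the logic per column is the same.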

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.109-122
    • /
    • 2016
  • Recently, big data analysis has grown into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is used for marketing and for solving social problems by analyzing data that is publicly available or collected directly. In Korea, various companies and individuals are attempting big data analysis, but many struggle from the initial stage because of limited data disclosure and collection difficulties. System improvements for big data activation and disclosure services are being carried out in Korea and abroad, chiefly services that open public data, such as the Korean Government 3.0 portal (data.go.kr). In addition to these government efforts, services that share data held by corporations or individuals are in operation, but useful data is hard to find because so little is shared. Moreover, heavy network traffic can occur because the entire dataset must be downloaded and examined just to grasp its attributes and basic characteristics. A new system for big data processing and utilization is therefore needed. First, pre-analysis technology is needed to solve the sharing problem. Pre-analysis, a concept proposed in this paper, means providing users with results generated by analyzing the data in advance. Pre-analysis improves the usability of big data by letting a user who searches for data immediately grasp its properties and characteristics. In addition, sharing the summary or sample data generated through pre-analysis avoids the security problems that can arise when raw data is disclosed, enabling sharing between data providers and data users.
Second, appropriate preprocessing results must be generated quickly, according to the disclosure level of the raw data and the network status, and delivered to users through distributed processing with Spark. Third, to avoid heavy traffic, the system monitors network traffic in real time; when preprocessing data requested by a user, it reduces the data to a size the current network can handle before transmission. This paper presents various data sizes according to disclosure level through pre-analysis, an approach expected to generate far less traffic than the conventional method of sharing only raw data across many systems. The proposed client-server model uses Spark for fast analysis and processing of user requests, with a Server Agent and a Client Agent deployed on the server and client sides, respectively. The Server Agent, required by the data provider, pre-analyzes the big data to generate a Data Descriptor containing sample data, summary data, and raw-data information; it also performs fast, efficient preprocessing through distributed processing and continuously monitors network traffic. The Client Agent, placed on the data user's side, searches big data through the Data Descriptor produced by pre-analysis and can request the desired data from the server for download according to its disclosure level. The Server Agent and Client Agent are separated so that data published by a provider can be used by any user.
In particular, we focus on big data sharing, distributed big data processing, and the big-traffic problem, constructing the detailed modules of the client-server model and presenting the design of each module. In a system based on the proposed model, a user who acquires data can analyze it in a desired direction or preprocess it into new data; by publishing the newly processed data through the Server Agent, the data user takes on the role of data provider. A data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user, performing new analysis with the sample data. In this way, raw data is processed, processed data is reused, and a natural sharing environment forms in which the roles of provider and user are not distinguished: everyone can be both a provider and a user. The client-server model thus solves the big data sharing problem, provides a free and secure environment for data disclosure, and makes big data easy to find.
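The Data Descriptor described above bundles summary data and sample data produced by pre-analysis. The paper builds this on Spark; the following is only a minimal plain-Python sketch of the concept, with an assumed numeric dataset and field names that are illustrative, not the paper's schema:

```python
import random
import statistics

def build_data_descriptor(rows, sample_size=3, seed=42):
    """Pre-analyze a numeric dataset and return a Data Descriptor:
    summary statistics plus a small random sample, so a data user can
    judge the data's properties without downloading the raw records."""
    rng = random.Random(seed)
    return {
        "row_count": len(rows),
        "summary": {
            "min": min(rows),
            "max": max(rows),
            "mean": round(statistics.mean(rows), 2),
        },
        "sample": rng.sample(rows, min(sample_size, len(rows))),
    }

raw = [3.1, 7.4, 2.2, 9.8, 5.5, 6.0, 1.3]
descriptor = build_data_descriptor(raw)
print(descriptor)
```

A Server Agent would publish descriptors like this for search, and only transmit raw data (at the permitted disclosure level) on request, which is how the model keeps traffic low.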

A Case Study on Big Data Analysis Systems for Policy Proposals of Engineering Education (공학교육 정책제안을 위한 빅데이터 분석 시스템 사례 분석 연구)

  • Kim, JaeHee;Yoo, Mina
    • Journal of Engineering Education Research
    • /
    • v.22 no.5
    • /
    • pp.37-48
    • /
    • 2019
  • The government has tried to develop a platform for systematically collecting and managing engineering education data for policy proposals. However, there have been few cases of big data analysis platforms for policy proposals in engineering education, and it is difficult to determine the major functions such a platform should have, the purpose of using big data, and the method of data collection. This study collects cases of big data analysis systems relevant to developing a big data system for educational policy proposals and analyzes them using a framework of key elements to consider when building a big data analysis platform. To analyze cases relevant to engineering education policy proposals, 24 systems that collect and manage big data were selected. The analysis framework was developed based on a literature review, and the results of the case analysis are presented. The results are expected to provide guidance ranging from the macro level, such as what functions the platform should perform and how data should be collected, down to what analysis techniques should be adopted and how the analysis results should be visualized.

A Study on Open API of Securities and Investment Companies in Korea for Activating Big Data

  • Ryu, Gui Yeol
    • International journal of advanced smart convergence
    • /
    • v.8 no.2
    • /
    • pp.102-108
    • /
    • 2019
  • Big data is associated with three key concepts: volume, variety, and velocity. Securities and investment services produce and store large volumes of text and numeric data; on average, they hold the most data per company in the US. Gartner found that demand for big data was highest in finance, at 25%. Securities and investment companies therefore produce the largest volumes of text/numeric data and have the highest demand. In Korea, insurance and credit card companies are using big data more actively than banks, while research on the use of big data in securities and investment companies has been scant. We surveyed 22 major securities and investment companies in Korea with respect to activating big data and found that they actively use AI for investment recommendations. As the gateway to their big data, we studied their open APIs. Of the 22 companies, only six offer open APIs. The supported user OS is 100% Windows, and the languages and tools used are mainly VB, C#, MFC, and Excel on Windows. Real-time analysis and decision making are difficult because developers cannot feed the data directly into Hadoop, the big data platform. Development manuals are mainly provided on the web, and only three companies provide them as downloadable files, which are more convenient than web-only documentation. To activate big data in the securities and investment field, we find that providers should support Linux, languages such as Java and Python, easy-to-read development manuals, and video tutorials such as YouTube.

Big Data Patent Analysis Using Social Network Analysis (키워드 네트워크 분석을 이용한 빅데이터 특허 분석)

  • Choi, Ju-Choel
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.2
    • /
    • pp.251-257
    • /
    • 2018
  • As the use of big data becomes necessary for increasing business value, the big data market is growing. Accordingly, filing competitive patents is important for capturing that market. In this study, we conducted a keyword-network-based patent analysis to examine trends in big data patents. The analysis procedure consists of patent data collection and preprocessing, network construction, and network analysis. The results are as follows. Most big data patents relate to data processing and analysis, and the keywords with high degree centrality and betweenness centrality are "analysis", "process", "information", "data", "prediction", "server", "service", and "construction". We expect the results of this study to offer useful information for filing big data patents.
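The analysis procedure above (collection, network construction, centrality analysis) can be sketched in a few lines. This is not the paper's pipeline; the keyword sets are invented, and only degree centrality is shown (the paper also uses betweenness centrality, which is typically computed with a graph library such as NetworkX):

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical keyword sets extracted from three patent documents.
patents = [
    {"data", "analysis", "server"},
    {"data", "process", "prediction"},
    {"analysis", "process", "data"},
]

# Network construction: keywords co-occurring in a patent are linked.
edges = defaultdict(int)
for keywords in patents:
    for a, b in combinations(sorted(keywords), 2):
        edges[(a, b)] += 1

# Network analysis: degree centrality = fraction of other nodes
# a keyword is directly linked to.
nodes = {n for edge in edges for n in edge}
degree = {n: 0 for n in nodes}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
centrality = {n: degree[n] / (len(nodes) - 1) for n in nodes}
print(max(centrality, key=centrality.get))  # most central keyword
```

On this toy network "data" co-occurs with every other keyword, so its degree centrality is 1.0, mirroring how the paper surfaces its central keywords.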

An Empirical Study on the Effects of Source Data Quality on the Usefulness and Utilization of Big Data Analytics Results (원천 데이터 품질이 빅데이터 분석결과의 유용성과 활용도에 미치는 영향)

  • Park, Sohyun;Lee, Kukhie;Lee, Ayeon
    • Journal of Information Technology Applications and Management
    • /
    • v.24 no.4
    • /
    • pp.197-214
    • /
    • 2017
  • This study sheds light on source data quality in big data systems. Previous studies on big data success have called for further examination of quality factors and of the importance of source data. This study extracted the quality factors of source data from the user's viewpoint and empirically tested the effects of source data quality on the usefulness and utilization of big data analytics results. Based on previous research and a focus group evaluation, four quality factors were established: accuracy, completeness, timeliness, and consistency. After setting up 11 hypotheses on how source data quality contributes to the usefulness, utilization, and ongoing use of big data analytics results, an e-mail survey was conducted of independent departments using big data in domestic firms. The hypothesis tests identified the characteristics and impact of source data quality in big data systems and yielded meaningful findings about big data characteristics.

Correlation Measure for Big Data (빅데이터에서의 상관성 측도)

  • Jeong, Hai Sung
    • Journal of Applied Reliability
    • /
    • v.18 no.3
    • /
    • pp.208-212
    • /
    • 2018
  • Purpose: The three Vs of volume, velocity, and variety are commonly used to characterize different aspects of big data: volume refers to the amount of data, variety to the number of types of data, and velocity to the speed of data processing. Given these characteristics, the size of big data varies rapidly, some data buckets contain outliers, and buckets may differ in size, so correlation plays a big role in big data and something better than the usual correlation measures is needed. Methods: The correlation measures offered by traditional statistics are compared, and conditions for meeting the characteristics of big data are suggested. Finally, the correlation measure that satisfies the suggested conditions is recommended. Results: Mutual information satisfies the suggested conditions. Conclusion: This article builds on traditional correlation measures for analyzing the relation between two variables, suggests the conditions a correlation measure must meet for big data, and recommends the measure that satisfies them: mutual information.
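Mutual information, the measure the abstract recommends, can be estimated directly from paired observations of two discrete variables. The following is a standard plug-in estimator in plain Python, not code from the paper; the example variables are illustrative:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two discrete variables,
    estimated from paired observations via empirical frequencies."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

xs = [0, 0, 1, 1]
# A perfectly dependent pair gives MI = H(X) = 1 bit here;
# an independent-looking pair gives MI = 0.
print(mutual_information(xs, xs))            # 1.0
print(mutual_information(xs, [0, 1, 0, 1]))  # 0.0
```

Unlike Pearson correlation, this quantity captures nonlinear dependence and is zero only when the empirical joint distribution factorizes, which is what makes it attractive for heterogeneous big data buckets.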

Analysis of problems caused by Big Data's private information handling (빅데이터 개인정보 취급에 따른 문제점 분석)

  • Choi, Hee Sik;Cho, Yang Hyun
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.10 no.1
    • /
    • pp.89-97
    • /
    • 2014
  • Recently, the spread of smartphones has activated mobile services, and big data technologies such as cloud services have made it possible to process large amounts of data that were previously hard to collect, store, search, and analyze. Many companies have collected a variety of private and personal information without users' consent for business strategy and marketing, which has raised social issues, and as companies use big data the number of damage cases is growing. In processing big data, the methods used to analyze and examine the data are very important. This thesis suggests that the choice of security level and algorithm is critical for protecting private information: to use big data safely, personal data must be encrypted, with the security level and algorithm selected carefully. It also suggests that research on utilizing big data while protecting private information, along with guidelines for users, is required to secure private information and activate big data industries.

A Big Data Preprocessing using Statistical Text Mining (통계적 텍스트 마이닝을 이용한 빅 데이터 전처리)

  • Jun, Sunghae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.5
    • /
    • pp.470-476
    • /
    • 2015
  • Big data has been used in diverse areas. Computer science and sociology, for example, approach big data with different concerns, but both analyze big data and draw implications from the results, so meaningful analysis and interpretation of big data are needed in most areas. Statistics and machine learning provide various methods for big data analysis. In this paper, we study a process for big data analysis and propose an efficient methodology covering the entire process, from collecting big data to drawing implications from the analysis results. In addition, because patent documents have the characteristics of big data, we propose an approach that applies big data analysis to patent data and uses the results to build R&D strategy. To illustrate how to use the proposed methodology on a real problem, we perform a case study using applied and registered patent documents retrieved from patent databases around the world.

A Study on Policy and System Improvement Plan of Geo-Spatial Big Data Services in Korea

  • Park, Joon Min;Yu, Seon Cheol;Ahn, Jong Wook;Shin, Dong Bin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.34 no.6
    • /
    • pp.579-589
    • /
    • 2016
  • This research analyzes the problems and issues of the recently emerged policies and systems related to geo-spatial big data and suggests policy and system improvements for service activation. To do this, problems and probable issues concerning geo-spatial big data service activation are analyzed through an examination of precedent studies, policies and plans, pilot projects, and the current legislative situation regarding geo-spatial big data, both domestic and abroad. Eight policy and system improvements are then proposed for activating geo-spatial big data services: resolving legislative issues regarding geo-spatial big data, establishing an organization exclusively in charge of geo-spatial big data, setting up systems for cooperative governance, establishing follow-up systems, preparing de-identification standards for personal information, providing measures for activating civil information, standardizing data for geo-spatial big data analysis, and developing analysis techniques for geo-spatial big data. Consistent governmental problem-solving approaches are required for these suggestions to proceed effectively.