• Title/Summary/Keyword: Big data Problem


A Sustainable Tourism Study in Underdeveloped Areas Using Big Data Analysis Techniques

  • Hyun-Seok Kim;Sang-Hak Lee;Gi-Hwan Ryu
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.112-118 / 2024
  • The decline of underdeveloped areas is emerging as a social problem. Industrialization drew the population to the cities, leaving behind underdeveloped areas that now suffer from problems such as population decline and aging. Sustained tourism development of these areas through development and improvement projects therefore needs to be studied. This study uses social media big data to investigate keywords related to underdeveloped areas and the connections between them. Its purpose was to group the core keywords by type and to examine tourism keywords for underdeveloped areas through CONCOR analysis. As a result, keywords clustered into the types of redevelopment, regional development, regional economy, and underdeveloped areas, and from these clusters the keywords for sustainable tourism in underdeveloped areas were identified. It is hoped that this study will help develop sustainable tourism around these keywords.
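
A minimal sketch of the kind of analysis the abstract describes: build a keyword co-occurrence matrix from social-media posts and apply a CONCOR-style iteration (correlations of correlations) to expose keyword groups. The posts and keywords below are hypothetical, not the study's corpus.

```python
# Hypothetical keyword co-occurrence + CONCOR-style clustering sketch.
import numpy as np

posts = [
    ["redevelopment", "regional economy", "tourism"],
    ["underdeveloped area", "population decline", "aging"],
    ["regional development", "tourism", "festival"],
    ["redevelopment", "underdeveloped area", "regional economy"],
]

keywords = sorted({k for post in posts for k in post})
index = {k: i for i, k in enumerate(keywords)}

# Keyword-by-keyword co-occurrence counts.
co = np.zeros((len(keywords), len(keywords)))
for post in posts:
    for a in post:
        for b in post:
            if a != b:
                co[index[a], index[b]] += 1

# CONCOR idea: repeatedly correlate the correlation matrix; it converges
# toward a +1/-1 block structure that reveals groups of related keywords.
m = np.corrcoef(co)
for _ in range(20):
    nxt = np.corrcoef(m)
    if np.allclose(nxt, m):
        break
    m = nxt

print(keywords)
print(np.round(m, 2))
```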

Deduction of the Policy Issues for Activating the Geo-Spatial Big Data Services (공간 빅데이터 서비스 활성화를 위한 정책과제 도출)

  • Park, Joon Min;Lee, Myeong Ho;Shin, Dong Bin;Ahn, Jong Wook
    • Spatial Information Research / v.23 no.6 / pp.19-29 / 2015
  • This study was conducted to suggest policy improvements for activating geo-spatial big data services. To this end, we reviewed previous research on geo-spatial big data and analyzed domestic and foreign geo-spatial big data governance systems and the state of policy enforcement. As a result, we identified the following problems: insufficient policy responses to future geo-spatial big data, weak personal information protection and legal grounds for service activation, gaps in related technology and policy, the lack of a system for applying and establishing geo-spatial big data, and a low level of open government data and data sharing. We then set a policy direction for solving these problems and derived five policy issues: establishing a geo-spatial big data system, improving the relevant legal framework, developing geo-spatial big data technology, promoting businesses that support geo-spatial big data, and creating a convergent sharing system for public databases.

Implementation of a MapReduce-based Big Data Processing Scheme for Reducing Big Data Processing Delay Time and Improving Storage Efficiency (빅데이터 처리시간 감소와 저장 효율성 향상을 위한 맵리듀스 기반 빅데이터 처리 기법 구현)

  • Lee, Hyeopgeon;Kim, Young-Woon;Kim, Ki-Young
    • Journal of the Korea Convergence Society / v.9 no.10 / pp.13-19 / 2018
  • MapReduce, Hadoop's core technology, is most commonly used to process big data stored on the Hadoop Distributed File System (HDFS). However, existing MapReduce-based processing techniques divide and store files in blocks of a size predefined by HDFS, wasting considerable infrastructure resources. In this paper, we therefore propose an efficient MapReduce-based big data processing scheme. The proposed method improves the storage efficiency of the big data infrastructure by converting and compressing the data in advance into a format suitable for MapReduce processing. It also resolves the processing-time delay that arises when an implementation focuses only on storage efficiency.
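
The abstract does not specify the paper's conversion and compression format. As a generic illustration only, the pure-Python sketch below pre-compresses hypothetical records with gzip and then runs a map/reduce-style word count over the compressed file.

```python
# Illustrative pre-compression + map/reduce-style pass in plain Python.
# This is not the paper's scheme; the file name and records are hypothetical.
import gzip
from collections import defaultdict

# 1) Convert and compress the raw data in advance.
records = ["big data mapreduce", "hadoop stores data in blocks", "big data processing"]
with gzip.open("input.txt.gz", "wt", encoding="utf-8") as f:
    f.write("\n".join(records))

# 2) Map phase: stream the compressed file and emit (word, 1) pairs.
pairs = []
with gzip.open("input.txt.gz", "rt", encoding="utf-8") as f:
    for line in f:
        for word in line.split():
            pairs.append((word, 1))

# 3) Reduce phase: sum the counts per word.
counts = defaultdict(int)
for word, one in pairs:
    counts[word] += one

print(dict(counts))
```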

A Study on the Classification of Variables Affecting Smartphone Addiction in Decision Tree Environment Using Python Program

  • Kim, Seung-Jae
    • International journal of advanced smart convergence / v.11 no.4 / pp.68-80 / 2022
  • Since the emergence of AI, technology development aimed at complete and sophisticated AI functions has continued. Machine learning and deep learning techniques, which cover supervised, unsupervised, and reinforcement learning, are the main tools in this effort, and they rely in turn on big data analysis as the cornerstone of decision-making. Established decisions are then refined through repeated application and renewal of the decision criteria. In other words, big data analysis, which enables data classification and recognition, is a key technical element of AI, and it therefore requires sophisticated analysis. In this study, among the various tools available for big data analysis, we use a Python program to determine which variables can affect smartphone addiction in a decision tree environment. We check whether data classification by the Python decision tree shows the same performance as other tools and whether it supports reliable decisions about the addictiveness of smartphone use. The results show that big data analysis can be performed without problems using any of several statistical tools, such as Python or R.
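
The abstract does not list the paper's survey variables or dataset; the sketch below only shows the kind of Python decision tree workflow it refers to, using scikit-learn with made-up features and labels.

```python
# A minimal, hypothetical decision tree analysis in Python (scikit-learn).
# Features, data, and labels are synthetic, not the paper's survey data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
features = ["daily_usage_hours", "sns_usage", "age", "self_control_score"]
X = rng.normal(size=(300, len(features)))
# Hypothetical label: "addicted" when usage is high and self-control is low.
y = ((X[:, 0] > 0.3) & (X[:, 3] < 0.0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("test accuracy:", round(tree.score(X_test, y_test), 3))
# Which variables the tree relied on most, analogous to classifying
# the variables that affect smartphone addiction.
for name, importance in zip(features, tree.feature_importances_):
    print(f"{name}: {importance:.2f}")
```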

Intention to Use and Group Difference in Adopting Big Data: Towards a Comprehensive View (활용 주체별 빅데이터 수용 인식 차이에 관한 연구: 활용 목적, 조직 규모, 업종 특성을 중심으로)

  • Lee, Young-Joo;Yang, Hyun-Cheol
    • Informatization Policy / v.24 no.1 / pp.79-99 / 2017
  • Despite early success stories, the pan-industry diffusion of big data has been slow, mostly owing to a lack of confidence in its value creation and to privacy-related concerns. This problem calls for a stakeholder analysis of the big data adoption process. The present study combines the technology acceptance model, task-technology fit theory, and privacy calculus theory to integrate the positive and negative factors affecting big data adoption. The empirical analysis was based on a survey of current and potential big data users. The results revealed that perceived usefulness, task-technology fit, and privacy concern are significant antecedents of the intention to use big data. Furthermore, with several exceptions, there are significant differences in the perception of each construct among groups divided by the type of big data use, and a moderating effect was found on the magnitude of the relationship between the independent variables and the dependent variable. The theoretical and policy implications for promoting the big data industry are discussed.
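
The abstract reports group differences in construct perceptions without naming the exact test. As a generic illustration only (not the paper's actual analysis or data), the sketch below compares a construct's mean score across hypothetical user groups with a one-way ANOVA in Python.

```python
# Generic group-difference test on a survey construct.
# Scores and group sizes are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Perceived-usefulness scores (1-7 Likert scale) for three hypothetical
# groups of big data users, e.g. split by purpose of use.
group_a = rng.normal(5.2, 0.8, 60)
group_b = rng.normal(4.6, 0.9, 60)
group_c = rng.normal(4.9, 0.7, 60)

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value would indicate that at least one group's mean perception
# differs, mirroring the kind of group comparison the paper reports.
```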

Design of Log Management System based on Document Database for Big Data Management (빅데이터 관리를 위한 문서형 DB 기반 로그관리 시스템 설계)

  • Ryu, Chang-ju;Han, Myeong-ho;Han, Seung-jo
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.11 / pp.2629-2636 / 2015
  • Interest in big data management has been rising rapidly in the IT field, and much research is being conducted to solve the problem of processing big data in real time. Storing data in real time over the network requires substantial resources, yet introducing an analysis system is difficult because of its high cost, so the need to redesign such systems for low cost and high efficiency has been growing. In this paper, MongoDB, a document-oriented database well suited to managing big data, is used to design a document database based log management system. The performance evaluation shows that the proposed log management system is more efficient than other methods in log collection and processing and is robust against data forgery.
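
The abstract does not give the system's document schema; the sketch below only illustrates storing and querying log documents in MongoDB from Python with pymongo, assuming a MongoDB server on localhost. Field and collection names are hypothetical.

```python
# Minimal sketch of document-based log storage with pymongo.
# Assumes a local MongoDB server; field names are hypothetical.
from datetime import datetime, timezone
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["logdb"]["access_logs"]

# Index on timestamp so time-range queries over large log volumes stay fast.
logs.create_index([("ts", ASCENDING)])

logs.insert_many([
    {"ts": datetime.now(timezone.utc), "level": "INFO", "host": "web-01",
     "message": "request served", "status": 200},
    {"ts": datetime.now(timezone.utc), "level": "ERROR", "host": "web-02",
     "message": "upstream timeout", "status": 504},
])

# Query: recent error-level entries, newest first.
for doc in logs.find({"level": "ERROR"}).sort("ts", -1).limit(10):
    print(doc["host"], doc["message"])
```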

IoT-Based Health Big-Data Process Technologies: A Survey

  • Yoo, Hyun;Park, Roy C.;Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.3 / pp.974-992 / 2021
  • Recently, the healthcare field has undergone rapid changes owing to the accumulation of health big data and the development of machine learning. Data mining research in healthcare differs from other data analyses in characteristics such as the structural complexity of medical data, the need for medical expertise, and the security of personal medical information. Various methods have been implemented to address these issues, including machine learning models and cloud platforms. However, machine learning models suffer from opaque result interpretation, and cloud platforms require more in-depth research on security and efficiency. To address these issues, this paper surveys recent technologies for Internet-of-Things-based (IoT-based) health big data processing. We present a cloud-based IoT health platform and health big data processing technology that reduce medical data management costs and enhance safety. We also present data mining technology for health-risk prediction, which is the core of healthcare. Finally, we propose a study using explainable artificial intelligence to enhance the reliability and transparency of the decision-making system, which is otherwise called a black-box model owing to its lack of transparency.
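
The survey's specific models are not reproduced in the abstract. As one small illustration of making a health-risk prediction model more interpretable, the sketch below trains a classifier on synthetic vital-sign features and reports permutation importance, a simple model-agnostic explanation technique (not necessarily the XAI approach the paper proposes).

```python
# Hypothetical health-risk prediction with a model-agnostic explanation
# (permutation importance). Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
features = ["heart_rate", "systolic_bp", "glucose", "bmi"]
X = rng.normal(size=(500, len(features)))
# Synthetic "risk" label driven mostly by blood pressure and glucose.
y = ((0.8 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(0, 0.5, 500)) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")  # higher = model relies on this feature more
```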

Hazelcast Vs. Ignite: Opportunities for Java Programmers

  • Bartkov, Maxim;Katkova, Tetiana;Kruglyk, Vladyslav S.;Murtaziev, Ernest G.;Kotova, Olha V.
    • International Journal of Computer Science & Network Security / v.22 no.2 / pp.406-412 / 2022
  • Storing large amounts of data has been a challenge since the beginning of computing history. Conventional databases are commonly used to store data, but as today's large-scale distributed applications handle ever-growing volumes of data and users, such databases are no longer viable, and big data technologies were introduced to store, process, and analyze data at speed. Big data has greatly improved business processes, for example by identifying customer needs with prediction models based on web and social media searches. To process data continuously in real time, streaming platforms have emerged. The main purpose of big data stream processing frameworks is to let programmers query a continuous stream directly without dealing with the lower-level mechanisms: programmers write their stream processing code against these runtime libraries, also called stream processing engines. Several such platforms are freely available on the Internet, but selecting the most appropriate one is not easy for programmers. In this paper, we present a detailed description of two of the most popular state-of-the-art frameworks, Apache Ignite and Hazelcast, and compare their performance using selected attributes.
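
The paper compares the Java APIs of Hazelcast and Apache Ignite. Since the examples in this listing use Python, the sketch below only shows the flavor of working with such an in-memory data platform from Python via the hazelcast-python-client package, assuming a Hazelcast member is reachable on the default local address.

```python
# Minimal sketch of using Hazelcast from Python (hazelcast-python-client).
# Assumes a Hazelcast member running locally; the paper itself discusses
# the Java APIs of Hazelcast and Apache Ignite, not this Python client.
import hazelcast

client = hazelcast.HazelcastClient()  # connects to localhost by default

# A distributed map: entries are partitioned across the cluster's memory.
events = client.get_map("click-events").blocking()
events.put("user-42", "/home")
print(events.get("user-42"))
print("entries stored:", events.size())

client.shutdown()
```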

Utilization Outlook of Medical Big Data in the Cloud Environment (클라우드 환경에서 의료 빅데이터 활용 및 전망)

  • Han, Jung-Soo
    • Journal of Digital Convergence / v.12 no.6 / pp.341-347 / 2014
  • Among approaches to big data processing, processing big data in the cloud environment is becoming a main topic. Discussions on how to activate big data are actively under way as a way to solve the problems facing the medical and health industry and to strengthen its competitiveness, driven by a paradigm shift, pressure to contain rising healthcare costs, and growing consumer interest in the level of service. In this paper, we examine the relationship between the cloud and big data, research and analyze a cloud-based big data case in the medical field, and propose directions for efficient utilization together with a future outlook. For cloud-based medical big data to function smoothly, problems such as infrastructure expansion, analysis and application software development, and training of professional manpower must be solved. In addition, insufficient legislation on cloud utilization must be remedied, security and awareness of personal information must be improved, and the question of authority over centralized data must be resolved.

Method for Selecting a Big Data Package (빅데이터 패키지 선정 방법)

  • Byun, Dae-Ho
    • Journal of Digital Convergence / v.11 no.10 / pp.47-57 / 2013
  • Big data analysis needs new decision-making tools that can handle data volume, velocity, and variety. Many global IT enterprises are announcing a variety of big data products that promise ease of use, strong functionality, and modeling capability. Big data packages are defined here as solutions, spanning analytic tools, infrastructures, and platforms including hardware and software, that can acquire, store, analyze, and visualize big data. Because there are many such products with diverse and complex functionalities, and because of the inherent characteristics of big data, selecting the best package requires expertise and an appropriate decision-making method, compared with the selection problem for other software packages. The objective of this paper is to suggest a decision-making method for selecting a big data package. We compare package characteristics and functionalities through a literature review and suggest selection criteria. To evaluate the feasibility of adopting packages, we develop two Analytic Hierarchy Process (AHP) models, one whose goal node is decomposed into costs and benefits and another built on the selection criteria, and we show with a numerical example how the best package is evaluated by combining the two models.
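
The paper's actual criteria and judgments are not reproduced in the abstract. The sketch below only illustrates the core AHP step of deriving priority weights (and a consistency check) from a pairwise comparison matrix, with made-up criteria and judgment values.

```python
# Core AHP computation: priority weights from a pairwise comparison matrix.
# Criteria and judgment values are hypothetical, not the paper's model.
import numpy as np

criteria = ["functionality", "cost", "scalability"]
# A[i, j] = how much more important criterion i is than criterion j (Saaty scale).
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency index/ratio check (random index RI = 0.58 for a 3x3 matrix).
lam_max = eigvals.real[k]
ci = (lam_max - len(A)) / (len(A) - 1)
cr = ci / 0.58

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}  (acceptable if < 0.1)")
```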