• Title/Summary/Keyword: Big data Problem

A cache placement algorithm based on comprehensive utility in big data multi-access edge computing

  • Liu, Yanpei; Huang, Wei; Han, Li; Wang, Liping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.3892-3912 / 2021
  • The recent rapid growth of mobile network traffic places multi-access edge computing in an important position to reduce network load and improve network capacity and service quality. In contrast to traditional mobile cloud computing, multi-access edge computing includes a base station cooperative cache layer and a user cooperative cache layer. Selecting the most appropriate cache content according to actual needs and determining the best location for it have emerged as urgent issues in multi-access edge computing. For this reason, a cache placement algorithm based on comprehensive utility in big data multi-access edge computing (CPBCU) is proposed in this work. First, the cache value generated by cache placement is calculated from the cache capacity, data popularity, and node replacement rate. Second, the cache placement problem is modeled in terms of the cache value and the data object acquisition and replacement costs. The model is then transformed into a combinatorial optimization problem, and the cache objects are placed on appropriate data nodes using a tabu search algorithm. Finally, to verify the feasibility and effectiveness of the algorithm, a multi-access edge computing experimental environment is built. Experimental results show that CPBCU significantly improves the cache service rate, data response time, and number of replacements compared with other cache placement algorithms.
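The combinatorial step the abstract describes (choosing which objects to cache on a capacity-limited node via tabu search) can be sketched as follows. This is a minimal illustration, not the paper's CPBCU algorithm: the scoring function, tenure, and neighborhood (flip one object in or out of the cache) are simplifying assumptions.

```python
def tabu_cache_placement(values, sizes, capacity, iters=200, tenure=7):
    """Toy tabu search for cache placement: pick a subset of objects to
    cache on one node (capacity-limited) maximizing total cache value.
    A sketch of the combinatorial step in the abstract, not CPBCU itself."""
    n = len(values)
    sol = [False] * n                       # start with an empty cache

    def score(s):
        size = sum(sz for sz, keep in zip(sizes, s) if keep)
        if size > capacity:                 # infeasible placements score -inf
            return float("-inf")
        return sum(v for v, keep in zip(values, s) if keep)

    best, best_score = sol[:], score(sol)
    tabu = {}                               # object index -> iteration it stays tabu until
    for it in range(iters):
        candidates = []
        for i in range(n):                  # neighborhood: flip one object in/out
            nb = sol[:]
            nb[i] = not nb[i]
            sc = score(nb)
            # aspiration: allow a tabu move if it beats the global best
            if tabu.get(i, -1) < it or sc > best_score:
                candidates.append((sc, i, nb))
        if not candidates:
            continue
        sc, i, sol = max(candidates)
        tabu[i] = it + tenure
        if sc > best_score:
            best, best_score = sol[:], sc
    return best, best_score
```

With three objects of value 10, 6, 5 and size 4, 3, 3 on a node of capacity 6, the greedy first move caches only the first object (value 10), but the tabu mechanism lets the search escape to the better pairing of the last two (value 11).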

Is Big Data Analysis to Be a Methodological Innovation? : The cases of social science (빅데이터 분석은 사회과학 연구에서 방법론적 혁신인가?)

  • SangKhee Lee
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.655-662 / 2023
  • Big data research supplements existing social science research methods. Where surveys and experiments can be somewhat inaccurate because they rely largely on recalled memories, big data are more accurate because they are real-time records. Social science research has so far relied mainly on sampling for reasons of time and cost, whereas big data research analyzes something close to the total data. However, social research is hard to repeat and reproduce because the social atmosphere changes and the subjects of research are not the same. While social science research rests on a strong triangle of 'theory-method-data', big data analysis suffers from weak theory, which is a serious problem: without theory as the logic of scientific explanation, research results cannot be properly interpreted or fully utilized even once they are obtained. Therefore, for big data research to become a methodological innovation, I propose 'big thinking' along with researchers' efforts to create new theories (black boxes).

A Study on the Procedure of Using Big Data to Solve Smart City Problems Based on Citizens' Needs and Participation (시민 니즈와 참여 기반의 스마트시티 문제해결을 위한 빅 데이터 활용 절차에 관한 연구)

  • Chang, Hye-Jung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.2 / pp.102-112 / 2020
  • The goal of a smart city is to solve urban problems through its component technologies, thereby developing an eco-friendly, sustainable economy and improving citizens' quality of life. Smart cities have so far evolved around component technologies, but it is time to focus on citizens' needs and participation. In this paper, we present a big data procedure for solving smart city problems based on citizens' needs and participation. To this end, we examine the smart city project market by region and major industry, as well as the development stages of the smart city market by sector. We then clarify the definition and necessity of citizen participation in each sector and propose a seven-step big data problem-solving process. This process analyzes the structured and unstructured data of each smart city sector, derives tasks, and derives policy programs accordingly. To attract citizen participation in these procedures, the empathy stage of the design thinking methodology is used in the unstructured data collection process, and the problem definition stage of the design thinking methodology is incorporated into the unstructured data analysis process as a way of identifying the citizens' needs behind urban problems.

Current Issues with the Big Data Utilization from a Humanities Perspective (인문학적 관점으로 본 빅데이터 활용을 위한 당면 문제)

  • Park, Eun-ha; Jeon, Jin-woo
    • The Journal of the Korea Contents Association / v.22 no.6 / pp.125-134 / 2022
  • This study critically discusses, from a humanities perspective, the problems that must be solved in order to utilize big data. It identifies and discusses three research problems that can arise in collecting, processing, and using big data. First, regarding problems with the data itself, it looks at fake information in circulation, specifically article-style advertisements and politically related fake news. Second, algorithmic discrimination is cited as a problem with big data processing and its results; this discrimination was observed when searching for engineers on a portal site. Finally, problems related to the infringement of personal information are examined in three categories: the right to privacy, the right to informational self-determination, and the right to be forgotten. This study is meaningful in that it points out, from a humanities perspective, the problems facing big data utilization in the big data era and discusses the problems that can arise in the collection, processing, and use of big data, respectively.

Big IoT Healthcare Data Analytics Framework Based on Fog and Cloud Computing

  • Alshammari, Hamoud; El-Ghany, Sameh Abd; Shehab, Abdulaziz
    • Journal of Information Processing Systems / v.16 no.6 / pp.1238-1249 / 2020
  • Throughout the world, aging populations and doctor shortages have helped drive the increasing demand for smart healthcare systems. Recently, these systems have benefited from the evolution of the Internet of Things (IoT), big data, and machine learning. However, these advances generate large amounts of data, making healthcare data analysis a major issue. These data have a number of complex properties, such as high dimensionality, irregularity, and sparsity, which make efficient processing difficult; such challenges are met by big data analytics. In this paper, we propose an innovative analytic framework for big healthcare data collected either from IoT wearable devices or from archived patient medical images. The proposed method efficiently addresses the data heterogeneity problem using middleware between heterogeneous data sources and MapReduce Hadoop clusters. Furthermore, the proposed framework enables the use of both fog computing and cloud platforms to handle the problems of online and offline data processing, data storage, and data classification. Additionally, it guarantees robust and secure handling of patient medical data.
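The middleware role described here, i.e. mapping records from heterogeneous sources onto one schema before they reach the MapReduce layer, can be sketched as below. The field names and source labels are illustrative assumptions, not the paper's actual schema.

```python
def normalize_record(record, source):
    """Toy middleware step: map heterogeneous device/archive records onto a
    single common schema before handing them to a MapReduce job. The field
    names and source labels are hypothetical, not the paper's schema."""
    if source == "wearable":            # e.g. {"pid": 7, "hr": 72, "ts": ...}
        return {"patient_id": record["pid"],
                "metric": "heart_rate",
                "value": record["hr"],
                "timestamp": record["ts"]}
    if source == "imaging":             # e.g. {"patient": 7, "modality": "MRI", ...}
        return {"patient_id": record["patient"],
                "metric": record["modality"].lower() + "_scan",
                "value": record["path"],
                "timestamp": record["taken_at"]}
    raise ValueError("unknown source: " + source)
```

Downstream map and reduce tasks then only ever see `patient_id`/`metric`/`value`/`timestamp` tuples, regardless of which device or archive produced the raw record.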

Research on the Development of Big Data Analysis Tools for Engineering Education (공학교육 빅 데이터 분석 도구 개발 연구)

  • Kim, Younyoung; Kim, Jaehee
    • Journal of Engineering Education Research / v.26 no.4 / pp.22-35 / 2023
  • As information and communication technology has developed remarkably, it has become possible to analyze various types of large-volume data generated at near real-time speed, and on this basis to create reliable value. Big data analysis is thus becoming an important means of supporting decision-making with scientific figures. The purpose of this study is to develop a big data analysis tool for the large amounts of data generated through engineering education. The tasks of this study are as follows. First, a database is designed to store the information of entries in the National Creative Capstone Design Contest. Second, the pre-processing needed for the big data analysis tool is checked. Finally, the data are analyzed using the developed tool. This study analyzed 1,784 works submitted to the contest from 2014 to 2019. When the top 10 words were selected through topic analysis, 'robot' ranked first for 2014 to 2019, and energy, drones, ultrasound, solar energy, and IoT appeared with high frequency. This result seems to reflect the core topics and technology trends of the 4th Industrial Revolution. In addition, because the nature of the contest requires complete products for problem solving, entries by students majoring in electrical/electronic engineering, computer/information and communication engineering, mechanical engineering, and chemical/new materials engineering appear to have been selected. The significance of this study is that its results can be used in engineering education as basic data for developing educational content and teaching methods that reflect industry and technology trends. Furthermore, the results of big data analysis related to engineering education are expected to serve as a means of preparing preemptive countermeasures when establishing education policies that reflect social changes.
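The frequency step behind a "top 10 words" result can be sketched as below. The real study ran topic analysis over 1,784 contest entries; this only illustrates the term counting, with a made-up stopword list and toy titles.

```python
from collections import Counter
import re

def top_terms(titles, k=10,
              stopwords=frozenset({"a", "the", "for", "of", "and", "using"})):
    """Toy frequency analysis: tokenize entry titles, drop stopwords, and
    return the k most common terms. A sketch of the counting behind a
    'top 10 words' result, not the study's topic-analysis pipeline."""
    counts = Counter()
    for title in titles:
        for tok in re.findall(r"[a-z]+", title.lower()):
            if tok not in stopwords:
                counts[tok] += 1
    return [term for term, _ in counts.most_common(k)]
```

Applied to titles like "Robot arm using IoT sensors" and "Solar-powered robot", the term 'robot' comes out on top, mirroring the shape of the study's finding.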

A Study on Automation of Big Data Quality Diagnosis Using Machine Learning (머신러닝을 이용한 빅데이터 품질진단 자동화에 관한 연구)

  • Lee, Jin-Hyoung
    • The Journal of Bigdata / v.2 no.2 / pp.75-86 / 2017
  • In this study, I propose a method to automate the quality diagnosis of big data. The reason for automating quality diagnosis is that, as the Fourth Industrial Revolution becomes an issue, ever larger volumes of data are being generated and utilized, and data are growing rapidly. However, if diagnosing data quality takes a long time, utilizing the data is delayed or its quality may deteriorate, and decisions or predictions made from such low-quality data will also point in the wrong direction. To solve this problem, I have developed a model that automates quality diagnosis for big data using machine learning, which can diagnose and improve data quickly. Machine learning is used to automate the domain classification task, preventing errors that can occur during classification and reducing working time. Building on these results, continued research on the importance of data conversion, on learning methods for untrained data, and on the development of classification models for each domain can contribute to improving data quality for big data utilization.
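To make "domain classification for quality diagnosis" concrete: the idea is to decide which domain (date, phone number, email, etc.) a column of values belongs to, so that values violating the domain can be flagged. The paper automates this step with machine learning; the rule-based stand-in below only illustrates what the classification step does, with made-up patterns and a made-up 80% threshold.

```python
import re

def classify_domain(values):
    """Toy domain classifier for a column of values. The paper trains an ML
    model for this; this rule-based stand-in (hypothetical patterns and
    threshold) only shows what domain classification means for quality
    diagnosis."""
    patterns = {
        "date":  re.compile(r"^\d{4}-\d{2}-\d{2}$"),
        "phone": re.compile(r"^\d{2,3}-\d{3,4}-\d{4}$"),
        "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    }
    for name, pat in patterns.items():
        hits = sum(1 for v in values if pat.match(v))
        if hits / len(values) >= 0.8:   # 80% of values match -> assign domain
            return name
    return "unknown"
```

Once a column's domain is known, the non-matching minority of values are exactly the quality defects a diagnosis report would surface.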

Data Central Network Technology Trend Analysis using SDN/NFV/Edge-Computing (SDN, NFV, Edge-Computing을 이용한 데이터 중심 네트워크 기술 동향 분석)

  • Kim, Ki-Hyeon; Choi, Mi-Jung
    • KNOM Review / v.22 no.3 / pp.1-12 / 2019
  • Recently, research using big data and AI has emerged as a major issue in the ICT field, but the size of big data for research is growing exponentially. Moreover, with existing network transmission methods, users point out that sending and receiving big data can take longer than copying the data to a hard disk and shipping it. Researchers therefore require dynamic and flexible network technology that can transmit data at high speed and accommodate various network structures. SDN/NFV technologies make a network programmable so that it can be tailored to users' needs, and they readily solve the network's flexibility and security problems. In addition, when performing AI, centralized data processing cannot guarantee real-time performance, and network delays occur as traffic increases. To solve this problem, edge computing technology, which moves away from the centralized approach, should be used. In this paper, we investigate the concepts and research trends of SDN, NFV, and edge computing, and analyze the trends of data-centric network technologies that combine these three technologies.

A Big-Data Trajectory Combination Method for Navigations using Collected Trajectory Data (수집된 경로데이터를 사용하는 내비게이션을 위한 대용량 경로조합 방법)

  • Koo, Kwang Min; Lee, Taeho; Park, Heemin
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.386-395 / 2016
  • In trajectory-based navigation systems, a huge amount of trajectory data is needed for efficient route exploration. However, it would be very hard to collect trajectories for all possible start and destination combinations. As a practical solution to this problem, we suggest a method that combines collected GPS trajectories into additionally generated trajectories with new start and destination combinations, without road information. We present a trajectory combination algorithm and its implementation in the Scala programming language on the Spark platform for big data processing. The experimental results show that the proposed method can effectively expand the collected trajectories into more than three hundred times as many valid trajectory paths.
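The splice idea behind the combination, i.e. if two collected trajectories pass through the same point, the prefix of one and the suffix of the other form a new start-destination path, can be sketched as below. This is a Python toy over in-memory lists under that assumption; the paper's Scala/Spark implementation necessarily handles far more (distributed data, GPS matching tolerance, path validity).

```python
def combine_trajectories(trajs):
    """Toy splice step: whenever trajectories a and b share a point p, join
    a's prefix up to p with b's suffix from p to get a new start-destination
    path. A sketch of the combination idea, not the paper's algorithm."""
    new_paths = []
    for a in trajs:
        for b in trajs:
            if a is b:
                continue
            for p in set(a) & set(b):            # every shared point is a splice site
                path = a[:a.index(p)] + b[b.index(p):]
                # keep only genuinely new, non-degenerate paths
                if path[0] != path[-1] and path not in trajs and path not in new_paths:
                    new_paths.append(path)
    return new_paths
```

Two trajectories crossing at a single point already yield two new start-destination combinations, which hints at how splicing can multiply a collection many times over.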

Private information protection method and countermeasures in Big-data environment: Survey (빅데이터 환경에서 개인민감정보 보호 방안 및 대응책: 서베이)

  • Hong, Sunghyuck
    • Journal of the Korea Convergence Society / v.9 no.10 / pp.55-59 / 2018
  • Big data, a revolutionary technology of the 4th Industrial Revolution era, provides services in various fields such as healthcare, the public sector, distribution, marketing, and manufacturing. It is a very useful technology for marketing analysis and future planning through accurate and rapid data analysis, and it is very likely to develop further. However, the biggest problems in using big data concern privacy and the protection of personal information. When various data are analyzed using big data, each user's tendencies can be analyzed, and this information may be an individual's sensitive information and may invade their privacy. Therefore, in this paper, we investigate the countermeasures needed against the infringements of personal sensitive information that can occur when such information is used in a big data environment, and we propose the personal information protection technologies needed to contribute to protecting personal sensitive information and privacy.
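One common protection technology in the spirit of this survey is pseudonymization: replacing direct identifiers with a keyed hash so records can still be linked for analysis without exposing the raw value. The sketch below is a generic illustration, not a technique the paper itself specifies; the field names and key handling are assumptions.

```python
import hashlib

def pseudonymize(records, key, fields=("name", "phone")):
    """Toy pseudonymization pass: replace direct identifiers with a short
    keyed-hash token. Records remain joinable (same input + key -> same
    token) while the raw identifier is no longer exposed. Illustrative
    only; the paper does not prescribe this particular scheme."""
    out = []
    for rec in records:
        masked = dict(rec)                  # leave the original record intact
        for f in fields:
            if f in masked:
                digest = hashlib.sha256((key + str(masked[f])).encode()).hexdigest()
                masked[f] = digest[:12]     # short stable token
        out.append(masked)
    return out
```

Note that keyed hashing is only pseudonymization, not anonymization: whoever holds the key (or can enumerate likely inputs) can re-identify records, so key management remains part of the countermeasure.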