• Title/Summary/Keyword: Data Visualize

Visualization of Vector Fields from Density Data Using Moving Least Squares Based on Monte Carlo Method (몬테카를로 방법 기반의 이동최소제곱을 이용한 밀도 데이터의 벡터장 시각화)

  • Jong-Hyun Kim
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.2
    • /
    • pp.1-9
    • /
    • 2024
  • In this paper, we propose a new method to visualize various vector field patterns from density data. We use moving least squares (MLS), which is employed in physics-based simulations and geometric processing. However, typical MLS does not take the nature of density into account, because it interpolates to a higher order through vector-based constraints. We therefore design an algorithm that incorporates Monte Carlo-based weights into the MLS to efficiently account for the density characteristics implicit in the input data, allowing the algorithm to express different forms of white noise. As a result, we experimentally demonstrate detailed vector fields that are difficult to represent with existing techniques such as naive MLS and divergence-constrained MLS.
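
A minimal sketch of the general idea, not the paper's algorithm: a first-order moving least-squares fit whose Gaussian neighbour weights are perturbed by Monte Carlo sampling, with the fitted plane's linear terms read off as a local vector at each query point. The kernel width, perturbation range, and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mls_vector(query, points, density, h=0.15, n_mc=32):
    """Estimate a vector at `query` as the gradient of a local linear MLS fit."""
    d = np.linalg.norm(points - query, axis=1)
    base_w = np.exp(-(d / h) ** 2)                    # Gaussian MLS kernel
    A = np.column_stack([np.ones(len(points)), points - query])
    grads = []
    for _ in range(n_mc):                             # Monte Carlo-perturbed weights
        w = base_w * rng.uniform(0.5, 1.5, size=base_w.shape)
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], density * sw, rcond=None)
        grads.append(coef[1:])                        # linear terms = local gradient
    return np.mean(grads, axis=0)

# toy density field sampled at random 2-D points
pts = rng.random((400, 2))
rho = np.exp(-((pts[:, 0] - 0.5) ** 2 + (pts[:, 1] - 0.5) ** 2) / 0.05)
print(mls_vector(np.array([0.4, 0.4]), pts, rho))
```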

Analysis study of movement patterns using BigData analysis technology (BigData 분석 기법을 활용한 이동 패턴 분석 연구)

  • Yun, Jun-Soo;Kang, Hee-Soo;Moon, Il-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.5
    • /
    • pp.1073-1079
    • /
    • 2014
  • Big Data can be said to be one of the technologies most in the spotlight today, and GPS is a technology based on it that is already prevalent in our lives. In this paper, we analyze the movement patterns and paths of a specific target group based on GPS data and Big Data techniques. For the target group, GPS data were collected from college students at one university, classified by weather, grade, sex, and day of the week. The collected data were analyzed for movement paths, movement times, and patterns of repetitive behavior, and then visualized. The analysis method is classified according to the purpose of the data, and results are obtained by identifying relationships with other data. Building on the present study, we will derive more reliable results in the future; to this end, a wider range of information, such as season, time, blood type, and occupation data, will be collected additionally.
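
As a rough illustration of this kind of movement-pattern aggregation (the file and column names such as student_id, day_of_week, and weather are assumptions, not taken from the paper), raw GPS fixes could be grouped per category and summarized, for example by path length:

```python
import numpy as np
import pandas as pd

gps = pd.read_csv("gps_fixes.csv", parse_dates=["timestamp"])  # hypothetical export

def path_length_km(group: pd.DataFrame) -> float:
    # crude planar distance; adequate for short, campus-scale paths
    dlat = np.diff(group["lat"]) * 111.0
    dlon = np.diff(group["lon"]) * 111.0 * np.cos(np.radians(group["lat"].iloc[0]))
    return float(np.sqrt(dlat ** 2 + dlon ** 2).sum())

summary = (
    gps.sort_values("timestamp")
       .groupby(["student_id", "day_of_week", "weather"])
       .apply(path_length_km)
       .rename("path_km")
       .reset_index()
)
print(summary.head())
```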

Implementation of public data contents using Big data Visualization technology - Map visualization technique (빅 데이터 가시화 기술을 적용한 공공데이터 콘텐츠 구현 - Map가시화 기법)

  • Bak, Seon-Hui;Kim, Jong Ho;You, Hyun-Bea
    • Journal of Digital Contents Society
    • /
    • v.18 no.7
    • /
    • pp.1427-1434
    • /
    • 2017
  • With the acceleration of the fourth industrial revolution, the data around us has increased rapidly. It has therefore become more important to easily grasp the nature and meaning of data through analysis than simply to collect it, and to apply that understanding flexibly to value judgments about the data. Visualization technology is now attracting attention in many fields: graphs, charts, and other visual forms let users grasp the information in the data more easily, understand analysis results, and make immediate judgments and quick decisions. Interest is especially high in visualization of public data, which is highly useful to users. In this paper, among the various software capable of visualization, we used the R libraries and RStudio to visualize public data on the installation sites of bicycle storage facilities.
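
The paper itself works in R with RStudio; the following is only an equivalent Python sketch of the map-visualization idea, plotting hypothetical public-data records of bicycle storage sites on a longitude/latitude scatter plot (file and column names are assumptions):

```python
import matplotlib.pyplot as plt
import pandas as pd

racks = pd.read_csv("bicycle_racks.csv")     # hypothetical public-data export
plt.scatter(racks["lon"], racks["lat"], s=racks["capacity"], alpha=0.5)
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.title("Bicycle storage installation sites")
plt.show()
```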

A Study on the Development of a Problem Bank in an Automated Assessment Module for Data Visualization Based on Public Data

  • HakNeung Go;Sangsu Jeong;Youngjun Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.5
    • /
    • pp.203-211
    • /
    • 2024
  • Utilizing a programming language for data visualization can improve efficiency and effectiveness in terms of data volume, processing time, and flexibility, but practice is required to become proficient. We therefore developed a problem bank based on public data for practicing data visualization in an automated programming assessment system. Public data were collected on topics suggested in the curriculum and preprocessed so that users could visualize them readily. The problem bank was linked to the mathematics curriculum so that learners encounter various data visualization methods. The developed problems underwent expert review and pilot testing, which validated the difficulty level of the questions and the potential of integrating data visualization into mathematics education. However, feedback indicated a lack of student interest in the topics, leading us to develop additional questions using student-centered data. The problem bank is expected to be used when students who have learned Python, in primary-school gifted information classes or in middle school and above, go on to learn data visualization.
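
A hypothetical example of the kind of item such a problem bank might contain (not an actual problem from the paper): the task "read a public data set and draw a bar chart of totals per category", with a reference solution that an automated grader could execute and compare against:

```python
import matplotlib.pyplot as plt
import pandas as pd

def reference_solution(csv_path: str) -> None:
    df = pd.read_csv(csv_path)                        # assumed columns: region, visitors
    totals = df.groupby("region")["visitors"].sum().sort_values()
    totals.plot(kind="bar")
    plt.ylabel("visitors")
    plt.title("Visitors per region")
    plt.savefig("answer.png")                         # output image checked by the grader

reference_solution("public_tourism.csv")              # hypothetical public data file
```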

Big Data-based Sensor Data Processing and Analysis for IoT Environment (IoT 환경을 위한 빅데이터 기반 센서 데이터 처리 및 분석)

  • Shin, Dong-Jin;Park, Ji-Hun;Kim, Ju-Ho;Kwak, Kwang-Jin;Park, Jeong-Min;Kim, Jeong-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.1
    • /
    • pp.117-126
    • /
    • 2019
  • The data generated in an IoT environment is very diverse. In particular, the fourth industrial revolution has greatly increased the volume of structured and unstructured data generated in manufacturing facilities such as smart factories. Big Data solutions make it possible to collect, store, process, analyze, and visualize such large volumes of data quickly and accurately. In this paper, we generate data directly with a Raspberry Pi, as used in IoT environments, and analyze it with various Big Data solutions. The data stored in a database are collected into HDFS using Sqoop, processed with Hive, the parallel-processing solution associated with Hadoop, and finally analyzed and visualized with the widely used R language for end-to-end verification.
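
As a sketch of only the first stage of this pipeline (table and column names are assumptions, not from the paper): a Raspberry Pi-style script that writes periodic sensor readings into a relational store, from which a tool such as Sqoop could later import them into HDFS for Hive processing and R-based analysis:

```python
import random
import sqlite3
import time

conn = sqlite3.connect("sensor.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, sensor TEXT, value REAL)")

for _ in range(10):                                   # short demo loop
    for sensor in ("temperature", "humidity"):
        value = random.gauss(25.0 if sensor == "temperature" else 40.0, 2.0)
        conn.execute("INSERT INTO readings VALUES (?, ?, ?)", (time.time(), sensor, value))
    conn.commit()
    time.sleep(1)
```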

A Development and Application of Data Visualization Education Program for 3rd Grade Students in Elementary School (초등학교 3학년 학생들을 위한 데이터 시각화 교육 프로그램 개발 및 적용)

  • Jiseon Woo;Kapsu Kim
    • Journal of The Korean Association of Information Education
    • /
    • v.26 no.6
    • /
    • pp.481-490
    • /
    • 2022
  • With the development of computing technology, the big data era has arrived, and we live surrounded by large amounts of data; elementary school students are no exception. It is therefore very important to learn to process data starting in elementary school. Since elementary school students think intuitively, data visualization, which expresses data directly in pictures, is an important learning element. In this study, we examine how effectively elementary school students can visualize data from their daily lives in order to improve their information processing competency. An eight-session data visualization program that third graders can carry out was developed, in which students organize and visualize data using data visualization tools and then experience the process of interaction. The developed program was applied to 186 students in 7 classes, and knowledge information processing competency factors were evaluated before and after the lessons. The pre- and post-tests showed a significant difference in knowledge information processing competency, so the data visualization program developed in this study is effective.

Level of Detail Data Model for Efficient Data Transmission of 3-D GIS (3차원 공간정보시스템 데이터의 효율적 전송을 위한 세밀도 모델)

  • Lee, Hyun-Suk;Moon, Jung-Wook;Li, Ki-Joune
    • Spatial Information Research
    • /
    • v.14 no.3 s.38
    • /
    • pp.321-334
    • /
    • 2006
  • 3D spatial data are of increasing interest in landscape analysis, urban planning, and Web-based map services because of their realism, but their volume is very large compared with 2D spatial data. Efficient methods to transfer and visualize 3D spatial data are therefore necessary, and the Level of Detail (LOD) concept from computer graphics is effective here. This paper briefly presents two LOD data models for data transmission based on the spatial data model of international standards: a Separated LOD model that assigns an LOD level to each object, and a Selective LOD model that assigns an LOD level to each element of an object. We compare the efficiency of 3D data transmission under the two LOD models.
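
A rough illustration of the difference between the two models (the class layout is an assumption, not the paper's schema): the Separated LOD model tags a whole object with one level, while the Selective LOD model tags each element of an object so a client can request only the detail it needs:

```python
from dataclasses import dataclass, field

@dataclass
class SeparatedLODObject:
    object_id: str
    lod: int                       # one LOD level for the whole object
    geometry: bytes = b""

@dataclass
class SelectiveLODObject:
    object_id: str
    elements: dict[int, bytes] = field(default_factory=dict)   # LOD level -> element geometry

    def payload_for(self, max_lod: int) -> list[bytes]:
        # transmit only the elements at or below the requested level of detail
        return [geom for lod, geom in self.elements.items() if lod <= max_lod]
```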

DQB (Dynamic Query Band): Dynamic Query Device for Efficient Exploration of Time-series Data (DQB (Dynamic Query Band): 시계열 데이터의 효율적인 탐색을 위한 동적 쿼리 장치)

  • Jo, Myeong-Su;Seo, Jin-Ok
    • Proceedings of the Korean HCI Society Conference (한국HCI학회 학술대회논문집)
    • /
    • 2009.02a
    • /
    • pp.715-718
    • /
    • 2009
  • Time series data is a sequence of data points, typically measured at successive, uniformly spaced points in time. As the number of time series items grows, many devices for efficient exploration have been developed. Among them, the Timebox widget is a representative dynamic-query device for interactive data exploration: Timeboxes are rectangular query regions of interest that the user draws with simple mouse manipulation, after which the query result set is displayed. However, Timeboxes are limited in expressing concrete query regions, and they visualize the query region in a way that is inconsistent with the user's mental model. To resolve these problems, we propose a new device called DQB (Dynamic Query Band). A DQB is a query region consisting of a user-defined polyline with a thickness drawn over the time series data. This device makes it possible to specify the query region concretely, and it provides a simple, convenient interface and a good conceptual model.
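
A sketch of the band test this implies (the exact matching semantics here are an assumption, not the authors' code): a query band is a polyline plus a thickness, and a series matches if every value over the band's time span lies within half the thickness of the interpolated polyline:

```python
import numpy as np

def matches_band(times, values, band_times, band_values, thickness):
    lo, hi = band_times[0], band_times[-1]
    mask = (times >= lo) & (times <= hi)
    center = np.interp(times[mask], band_times, band_values)   # polyline centre line
    return bool(np.all(np.abs(values[mask] - center) <= thickness / 2))

t = np.linspace(0, 10, 200)
series = np.sin(t)
band_t = np.linspace(2.0, 6.0, 9)
band_v = np.sin(band_t)                 # a band drawn along part of the sine curve
print(matches_band(t, series, band_t, band_v, thickness=0.4))  # True
```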

Method for Selecting a Big Data Package (빅데이터 패키지 선정 방법)

  • Byun, Dae-Ho
    • Journal of Digital Convergence
    • /
    • v.11 no.10
    • /
    • pp.47-57
    • /
    • 2013
  • Big Data analysis requires new tools for decision making because of the volume, velocity, and variety of the data. Many global IT enterprises are announcing a variety of Big Data products that emphasize ease of use, functionality, and modeling capability. Big Data packages are defined here as solutions comprising analytic tools, infrastructures, and platforms, including hardware and software, that can acquire, store, analyze, and visualize Big Data. There are many types of products with varied and complex functionalities, and because of the inherent characteristics of Big Data, selecting the best package requires expertise and an appropriate decision-making method compared with the selection problem for other software packages. The objective of this paper is to suggest a decision-making method for selecting a Big Data package. We compare the characteristics and functionalities of packages through a literature review and suggest selection criteria. To evaluate the feasibility of adopting packages, we develop two Analytic Hierarchy Process (AHP) models: the goal node of one model consists of costs and benefits, and that of the other consists of the selection criteria. We show with a numerical example how the best package is evaluated by combining the two models.
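
The standard AHP computation behind such models, shown with made-up criteria and judgments (none of these values are from the paper): derive priority weights from a pairwise-comparison matrix via its principal eigenvector:

```python
import numpy as np

criteria = ["cost", "functionality", "scalability"]   # hypothetical criteria
pairwise = np.array([[1.0, 1 / 3, 1 / 2],
                     [3.0, 1.0, 2.0],
                     [2.0, 1 / 2, 1.0]])              # illustrative pairwise judgments

eigvals, eigvecs = np.linalg.eig(pairwise)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()                 # normalized priority vector
for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```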

Mobile-based Big Data Processing and Monitoring Technology in IoT Environment (IoT 환경에서 모바일 기반 빅데이터 처리 및 모니터링 기술)

  • Lee, Seung-Hae;Kim, Ju-Ho;Shin, Dong-Youn;Shin, Dong-Jin;Park, Jeong-Min;Kim, Jeong-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.6
    • /
    • pp.1-9
    • /
    • 2018
  • In the fourth industrial revolution, which has now become a central issue, various Big Data technologies allow us to receive analysis results far faster than before and to perform real-time monitoring on mobile devices and the web. First, irregular sensor data are generated with a Raspberry Pi, an IoT device. The sensor data are collected in real time and stored in a distributed fashion across several nodes. The stored sensor data are then processed and refined, and the analysis results are visualized and output. These methods can be used to train the human resources required in Big Data and mobile-related fields using IoT, to process data efficiently and quickly, and to provide information that confirms the reliability of research results through real-time monitoring.
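
A very small sketch of the monitoring idea only (the endpoint, port, and storage are assumptions): expose the latest refined sensor readings over HTTP so a mobile or web dashboard can poll them in near real time:

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

class LatestReadings(BaseHTTPRequestHandler):
    def do_GET(self):
        conn = sqlite3.connect("sensor.db")           # hypothetical store of refined readings
        rows = conn.execute(
            "SELECT sensor, value, ts FROM readings ORDER BY ts DESC LIMIT 10"
        ).fetchall()
        body = json.dumps([{"sensor": s, "value": v, "ts": t} for s, v, t in rows]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), LatestReadings).serve_forever()
```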