• Title/Summary/Keyword: Large-Scale Volume Data


Volume Rendering using Grid Computing for Large-Scale Volume Data

  • Nishihashi, Kunihiko; Higaki, Toru; Okabe, Kenji; Raytchev, Bisser; Tamaki, Toru; Kaneda, Kazufumi
    • International Journal of CAD/CAM / v.9 no.1 / pp.111-120 / 2010
  • In this paper, we propose a volume rendering method that uses grid computing for large-scale volume data. Grid computing is attractive because medical institutions and research facilities often have a large number of idle computers. The large-scale volume data is divided into sub-volumes, and the sub-volumes are rendered on the grid. In a grid, different computers rarely have the same processor speed, so the order in which results return rarely matches the order in which jobs were sent. However, order is vital when combining results into a final image. Job scheduling is therefore important in grid-based volume rendering, and we use an obstacle flag that changes priorities dynamically to manage sub-volume results. Obstacle flags track the visibility of each sub-volume when the line of sight from the viewpoint is obscured by other sub-volumes. The proposed dynamic job scheduling based on visibility substantially increases efficiency. We implemented the method on our university's campus grid, and comparative experiments showed that it provides significant improvements in efficiency for large-scale volume rendering.
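
As a rough illustration of visibility-driven dynamic job scheduling, the sketch below orders sub-volume jobs with a priority queue in which an obstacle flag penalizes occluded bricks. The class names and priority rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of visibility-driven job scheduling for sub-volume
# rendering jobs on a grid; priority constants are illustrative only.
import heapq

class SubVolumeJob:
    def __init__(self, job_id, depth_rank):
        self.job_id = job_id
        self.depth_rank = depth_rank   # front-to-back order from the viewpoint
        self.obstacle_flag = False     # True once nearer sub-volumes occlude it

def schedule(jobs):
    """Dispatch jobs so that visible, front-most sub-volumes render first."""
    heap = []
    for job in jobs:
        # Occluded sub-volumes get a large priority penalty, so idle nodes
        # work first on bricks that contribute to the final image sooner.
        priority = job.depth_rank + (1000 if job.obstacle_flag else 0)
        heapq.heappush(heap, (priority, job.job_id, job))
    while heap:
        _, _, job = heapq.heappop(heap)
        yield job                      # hand to the next idle grid node

jobs = [SubVolumeJob(i, depth_rank=i) for i in range(8)]
jobs[2].obstacle_flag = True           # e.g. hidden behind sub-volumes 0 and 1
for job in schedule(jobs):
    print("dispatch sub-volume", job.job_id)
```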

Design of Distributed Cloud System for Managing large-scale Genomic Data

  • Jang, Seine; Moon, Seok-Jae
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.119-126 / 2024
  • The volume of genomic data is constantly increasing in various modern industries and research fields. This growth presents new challenges and opportunities in terms of the quantity and diversity of genetic data. In this paper, we propose a distributed cloud system for integrating and managing large-scale gene databases. By introducing a distributed data storage and processing system based on the Hadoop Distributed File System (HDFS), various formats and sizes of genomic data can be efficiently integrated. Furthermore, by leveraging Spark on YARN, efficient management of distributed cloud computing tasks and optimal resource allocation are achieved. This establishes a foundation for the rapid processing and analysis of large-scale genomic data. Additionally, by utilizing BigQuery ML, machine learning models are developed to support genetic search and prediction, enabling researchers to more effectively utilize data. It is expected that this will contribute to driving innovative advancements in genetic research and applications.
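
A minimal PySpark sketch of the kind of HDFS-plus-Spark-on-YARN pipeline described above; the paths, schema, and cluster settings are assumptions for illustration, not the paper's configuration.

```python
# Sketch: read genomic records staged on HDFS and aggregate them with
# Spark running on a YARN-managed cluster. Paths and columns are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("genomic-integration")
    .master("yarn")                      # let YARN allocate cluster resources
    .config("spark.executor.memory", "8g")
    .getOrCreate()
)

# Ingest heterogeneous genomic data, e.g. variant calls exported as CSV.
variants = (
    spark.read
    .option("header", True)
    .csv("hdfs:///genomics/variants/*.csv")
)

# A typical integration step: summarize variants per chromosome and persist
# the result in a columnar format for downstream analysis.
per_chrom = variants.groupBy("chrom").count()
per_chrom.write.mode("overwrite").parquet("hdfs:///genomics/summary/")
```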

Large-scale Structure Studies with Mock Galaxy Sample from the Horizon Run 4 & Multiverse Simulations

  • Hong, Sungwook E.
    • The Bulletin of The Korean Astronomical Society / v.45 no.1 / pp.29.3-29.3 / 2020
  • Cosmology is the study of the origin, fundamental properties, and evolution of the universe. Many observational data of galaxies are now available, and large-volume numerical simulations that reproduce the spatial distribution of galaxies well are needed for a fair comparison with these observations. On the other hand, since galaxy evolution is affected by both gravitational and baryonic effects, it is nontrivial to populate N-body simulations with galaxies, yet full hydrodynamic simulations of large volumes are computationally costly. Therefore, alternative galaxy assignment methods for N-body simulations are necessary for successful cosmological studies. In this talk, I introduce MBP-galaxy abundance matching, a novel galaxy assignment method that agrees with the spatial distribution of observed galaxies on scales from 0.1 Mpc to 100 Mpc. I also introduce mock galaxy catalogs of the Horizon Run 4 and Multiverse simulations, large-volume cosmological N-body simulations carried out by the Korean community, and some recent works that use those mock galaxies to better understand our universe.
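
The sketch below shows plain rank-order abundance matching, the general idea behind assigning galaxies to N-body halos; the MBP variant and these synthetic arrays are illustrative assumptions, not the author's actual pipeline.

```python
# Toy abundance matching: the i-th most massive halo hosts the i-th most
# luminous galaxy, preserving the observed luminosity function by construction.
import numpy as np

rng = np.random.default_rng(0)
halo_mass = rng.lognormal(mean=12.0, sigma=0.5, size=1000)   # halo proxy property
galaxy_lum = rng.lognormal(mean=10.0, sigma=0.8, size=1000)  # observed luminosities

halo_rank = np.argsort(halo_mass)[::-1]      # halos, most massive first
lum_sorted = np.sort(galaxy_lum)[::-1]       # luminosities, brightest first
assigned_lum = np.empty_like(galaxy_lum)
assigned_lum[halo_rank] = lum_sorted         # rank-order match

print(assigned_lum[:5])
```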

Development of the Design Methodology for Large-scale Data Warehouse based on MongoDB

  • Lee, Junho; Joo, Kyungsoo
    • Journal of the Korea Society of Computer and Information / v.23 no.3 / pp.49-54 / 2018
  • A data warehouse is a system that integrates and collectively manages a company's data and provides the basis for strategic decision making. Nowadays, analytical data volumes are reaching a critical size that challenges traditional data warehousing approaches. Most currently implemented solutions are based on relational databases, which are no longer suited to these data volumes. NoSQL solutions allow us to consider new approaches to data warehousing, especially from the multidimensional data management point of view. In this paper, we extend the relational, star-schema-based data warehouse design methodology and develop a consistent design methodology, from information requirements analysis to warehouse construction, for building large-scale data warehouses on MongoDB, one of the NoSQL databases.
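
A minimal sketch of one common way to map a star schema onto MongoDB documents, embedding dimension attributes inside each fact document; the collection, fields, and local server are hypothetical, not the paper's methodology.

```python
# Sketch: a "sales" fact document with time, product, and store dimensions
# embedded, so a typical multidimensional query needs no join. Assumes a
# local mongod instance and the pymongo driver.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
dw = client["warehouse"]

dw.sales.insert_one({
    "amount": 125.0,
    "quantity": 3,
    "date":    {"day": 12, "month": 3, "year": 2018},   # time dimension
    "product": {"sku": "P-1001", "category": "book"},   # product dimension
    "store":   {"store_id": "S-07", "city": "Seoul"},   # store dimension
})

# Aggregate along a dimension attribute, e.g. revenue per product category.
pipeline = [
    {"$group": {"_id": "$product.category", "revenue": {"$sum": "$amount"}}},
]
for row in dw.sales.aggregate(pipeline):
    print(row)
```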

A Study on Reversals after Stock Price Shock in the Korean Distribution Industry

  • Lee, Jeong-Hwan; Park, Su-Kyu; Son, Sam-Ho
    • Journal of Distribution Science / v.21 no.3 / pp.93-100 / 2023
  • Purpose: The purpose of this paper is to confirm whether stocks in the Korean distribution industry exhibit reversals following large daily stock price changes accompanied by large trading volumes. Research design, data, and methodology: We examined whether reversals occurred after event dates with large-scale stock price changes for the entire sample of distribution-related companies listed on the Korea Composite Stock Price Index from January 2004 to July 2022. In addition, we reviewed whether the reversals differed depending on abnormal trading volume on the event date. Using multiple regression analysis, we tested whether high trading volume had a significant effect on the cumulative rate of return after the event date. Results: Reversals were confirmed after stock price shocks in the Korean distribution industry, and the return after the event date varied with the size of the trading volume on the event day. Even after considering both company-specific and event-specific factors, the trading volume on the event day had significant explanatory power for the post-event cumulative rate of return. Conclusions: The reversals identified in this paper can be used as a useful tool for establishing a trading strategy.
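
A toy sketch of the event-study regression described above, regressing post-event cumulative returns on event-day abnormal volume; the data are simulated for illustration only and do not reproduce the paper's results.

```python
# Sketch: OLS of post-event cumulative abnormal return (CAR) on event-day
# abnormal volume. A negative slope would indicate volume-dependent reversal.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_events = 200
abnormal_volume = rng.normal(size=n_events)          # event-day volume shock
# Simulate a reversal: higher event-day volume -> lower post-event CAR.
car_post = -0.3 * abnormal_volume + rng.normal(scale=0.5, size=n_events)

X = sm.add_constant(abnormal_volume)
model = sm.OLS(car_post, X).fit()
print(model.summary().tables[1])      # slope estimates the volume effect
```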

Large-Scale Ultrasound Volume Rendering using Bricking

  • Kim, Ju-Hwan; Kwon, Koo-Joo; Shin, Byeong-Seok
    • Journal of the Korea Society of Computer and Information / v.13 no.7 / pp.117-126 / 2008
  • Recent advances in medical imaging technology have enabled high-resolution data acquisition, so visualizing such large data sets on standard graphics hardware has become a popular research theme. Among many visualization techniques, we focus on the bricking method, which divides the entire volume into smaller bricks and renders them in order. Since it swaps bricks between main memory and GPU memory on the fly, the number of memory swaps must be minimized to achieve good performance. Moreover, because the original bricking algorithm was designed for regular volume data such as CT and MR, applying it to ultrasound volume data, which is based on a toroidal coordinate space, reveals some performance degradation: in some areas near brick boundaries, an orthogonal viewing ray intersects a single brick twice, which causes that brick's memory to be uploaded to the GPU twice in a single frame. To avoid this redundancy, we divide the volume into bricks that are allowed to overlap, and we suggest a formula to determine an appropriate size for the shared area between bricks. Using our formula, we minimize memory bandwidth and, at the same time, achieve better rendering performance.
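
A minimal sketch of partitioning one volume axis into overlapping bricks; the brick size and overlap width here are illustrative parameters rather than the paper's derived formula.

```python
# Sketch: split one axis of a volume into bricks such that adjacent bricks
# share `overlap` voxels, so a ray crossing a boundary can stay in one brick.
def make_bricks(volume_len, brick_len, overlap):
    """Return (start, end) voxel index ranges of bricks along one axis."""
    bricks, start = [], 0
    while start < volume_len:
        end = min(start + brick_len, volume_len)
        bricks.append((start, end))
        if end == volume_len:
            break
        start = end - overlap          # step back to share boundary voxels
    return bricks

# 1D example: a 1000-voxel axis cut into 256-voxel bricks sharing 16 voxels.
print(make_bricks(1000, 256, 16))
```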

Integration of a Large-Scale Genetic Analysis Workbench Increases the Accessibility of a High-Performance Pathway-Based Analysis Method

  • Lee, Sungyoung; Park, Taesung
    • Genomics & Informatics / v.16 no.4 / pp.39.1-39.3 / 2018
  • The rapid increase in genetic dataset volume has demanded extensive adoption of biological knowledge to reduce computational complexity, and the biological pathway is one well-known source of such knowledge. In this regard, we previously introduced a novel statistical method, PHARAOH, that enables pathway-based association studies of large-scale genetic datasets. However, researcher-level application of PHARAOH has been limited by its lack of support for commonly used file formats and the absence of the quality-control options essential to practical analysis. To overcome these limitations, we integrate the PHARAOH method into our recently developed all-in-one workbench. The new PHARAOH program not only supports various de facto standard genetic data formats but also provides many quality-control measures and filters based on those measures. We expect the updated PHARAOH to make pathway-level analysis of large-scale genetic datasets considerably more accessible to researchers.
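
A toy sketch of the kind of quality-control filtering such a workbench adds, dropping variants by minor-allele frequency and missingness; the thresholds and genotype matrix are illustrative assumptions, not PHARAOH's actual filters.

```python
# Sketch: variant-level QC on a genotype matrix coded 0/1/2 (np.nan = missing).
import numpy as np

def qc_filter(genotypes, maf_min=0.01, missing_max=0.05):
    """Return a boolean mask of variants passing both QC thresholds."""
    missing_rate = np.mean(np.isnan(genotypes), axis=0)
    allele_freq = np.nanmean(genotypes, axis=0) / 2.0   # alt-allele frequency
    maf = np.minimum(allele_freq, 1.0 - allele_freq)    # minor-allele frequency
    return (maf >= maf_min) & (missing_rate <= missing_max)

rng = np.random.default_rng(2)
g = rng.choice([0, 1, 2, np.nan], size=(100, 500), p=[0.5, 0.3, 0.15, 0.05])
print("variants passing QC:", qc_filter(g).sum())
```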

Reducing Latency in Smart Manufacturing Service System using Edge Computing

  • Vimal, S.; Jesuva, Arockiadoss S.; Bharathiraja, S.; Guru, S.; Jackins, V.
    • Journal of Platform Technology / v.9 no.1 / pp.15-22 / 2021
  • In a smart manufacturing environment, more and more devices are connected to the Internet, so a large volume of data can be obtained during all phases of the product life cycle. Large-scale industries, companies, and organizations with operational units scattered across various geographical locations face huge resource consumption because of the unorganized way resources are shared among them, which directly affects their supply chains. The cloud-based smart manufacturing paradigm enables a new variety of applications and services that analyze large volumes of data and support large-scale manufacturing collaboration. Manufacturing units include machinery that may be situated in different geographical areas, and process instances executed on different machines must be constantly managed by a central administrator to coordinate the manufacturing process; in large-scale industries, this makes maintaining the efficiency of the production unit tedious. Data from all these instances must be monitored to maintain the integrity of the manufacturing service system, and computing all of this data in the cloud introduces latency into the performance of the system. Instead of validating data at an external device, we propose to validate data at the front end of each device. The validation process can be automated by script validation, after which the processed data is sent to the cloud for processing and storage. Along with end-device data validation, we implement APM (Asset Performance Management) to enhance the productive functionality of manufacturers. The manufacturing service system is divided into modules based on the functionality of the machines and on process instances corresponding to the time schedules of the respective machines. By breaking the whole system into modules, and into further divisions as required, we can reduce data loss or data mismatch caused by processing data from instances that are down for maintenance or machinery malfunctions. This helps the administrator trace the individual domains of the smart manufacturing service system that need attention for error recovery among the various process instances running on different machines under various conditions. This reduces latency, which in turn increases the efficiency of the whole system.
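
A minimal sketch of front-end (edge) validation before readings are forwarded to the cloud; the schema, thresholds, and upload stub are hypothetical, not the paper's implementation.

```python
# Sketch: reject malformed or out-of-range sensor readings at the edge device,
# so only validated data reaches the cloud, cutting cloud-side work and latency.
def validate_reading(reading):
    """Schema and range check for one sensor reading (thresholds assumed)."""
    required = {"machine_id", "timestamp", "temperature_c"}
    if not required.issubset(reading):
        return False
    return -40.0 <= reading["temperature_c"] <= 150.0

def edge_gateway(readings, upload):
    """Forward only readings that pass front-end validation."""
    for r in readings:
        if validate_reading(r):
            upload(r)

readings = [
    {"machine_id": "M1", "timestamp": 1, "temperature_c": 72.5},
    {"machine_id": "M2", "timestamp": 2, "temperature_c": 999.0},  # dropped
]
edge_gateway(readings, upload=lambda r: print("to cloud:", r))
```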

The Fast 3D mesh generation method for a large scale of point data

  • Lee, Sang-Han; Park, Kang
    • Proceedings of the KSME Conference / 2000.11a / pp.705-711 / 2000
  • This paper presents a fast 3D mesh generation method that combines a surface-based method with a stitching algorithm. A surface-based method is used because a volume-based method employing 3D Delaunay triangulation can hardly handle a large set of scanned points. To reduce processing time, the method also uses a stitching algorithm: the whole point set is divided into several sections, meshes are generated for the individual sections, and the section meshes are then stitched into one mesh. Stitching prevents the processing time of the surface-based method from growing exponentially as the number of points increases. The method works well with different types of scanned points: scattered points from a conventional 3D scanner and cross-sectional points from CT or MRI.
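
A toy sketch of the divide-mesh-stitch idea: split the points into overlapping sections, triangulate each separately, and merge the section meshes. The 2D projection and slab split are simplifying assumptions, not the paper's algorithm.

```python
# Sketch: triangulate overlapping x-slabs of a point set independently, then
# merge. The overlap strips give neighboring meshes shared vertices to stitch
# along; a full implementation would also remove duplicate triangles there.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
points = rng.random((10000, 2))                 # scanned points (projected)

def mesh_in_sections(points, n_sections=4, overlap=0.02):
    triangles = []
    edges = np.linspace(0.0, 1.0, n_sections + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (points[:, 0] >= lo - overlap) & (points[:, 0] <= hi + overlap)
        idx = np.flatnonzero(mask)
        tri = Delaunay(points[idx])             # mesh one section only
        triangles.append(idx[tri.simplices])    # map back to global indices
    return np.vstack(triangles)

print("total triangles:", len(mesh_in_sections(points)))
```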

Estimation of Available Permit Water for Large Scale Agricultural Reservoirs in Youngsan River Basin

  • Kim, Sun-Joo; Park, Ki-Chun; Park, Hee-Sung
    • Journal of The Korean Society of Agricultural Engineers / v.54 no.1 / pp.93-97 / 2012
  • Permits for agricultural water use are currently issued on the basis of the volume of water taken at intakes, without regard to the reservoirs upstream of those intakes. As a result, the supply capacity of agricultural reservoirs is not considered in water balance analyses, and the water additionally secured by the reservoirs is treated as if it were unused natural river water. To overcome these problems, the quantity of water that reservoirs can supply for permits should be reflected in the water balance analysis. In this study, a long-term daily water balance analysis covering more than 30 years was conducted for the Youngsan river basin using natural daily flow data and the specifications of the reservoirs, and the available permit water that the large-scale agricultural reservoirs can supply is presented.
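
A toy sketch of a long-term daily reservoir water balance of the kind the study runs over 30+ years; the inflow series and reservoir parameters are fabricated for illustration only.

```python
# Sketch: daily mass balance, storage(t+1) = storage(t) + inflow - supply - spill,
# counting shortage days over a 30-year synthetic record.
import numpy as np

rng = np.random.default_rng(4)
days = 365 * 30
inflow = rng.gamma(shape=2.0, scale=5000.0, size=days)   # natural daily flow, m^3
demand = np.full(days, 8000.0)                           # daily supply target, m^3
capacity, storage = 5e6, 2.5e6                           # reservoir volumes, m^3

shortages = 0
for q_in, q_dem in zip(inflow, demand):
    storage += q_in
    supplied = min(q_dem, storage)                       # supply what is stored
    storage -= supplied
    if supplied < q_dem:
        shortages += 1
    storage = min(storage, capacity)                     # excess spills downstream
print("days with shortage over 30 years:", shortages)
```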