• Title/Summary/Keyword: Large-Scale Volume Data

Search results: 94 items (processing time: 0.029 s)

Volume Rendering using Grid Computing for Large-Scale Volume Data

  • Nishihashi, Kunihiko;Higaki, Toru;Okabe, Kenji;Raytchev, Bisser;Tamaki, Toru;Kaneda, Kazufumi
    • International Journal of CAD/CAM / Vol. 9 No. 1 / pp. 111-120 / 2010
  • In this paper, we propose a volume rendering method that uses grid computing for large-scale volume data. Grid computing is attractive because medical institutions and research facilities often have a large number of idle computers. The large-scale volume data is divided into sub-volumes, and the sub-volumes are rendered using grid computing. In a grid, different computers rarely have the same processor speed, so the order in which results return rarely matches the order in which jobs were sent; order, however, is vital when combining results into a final image. Job scheduling is therefore important in grid-based volume rendering, and we use an obstacle flag, which changes priorities dynamically, to manage sub-volume results. Obstacle flags track the visibility of each sub-volume when its line of sight from the viewpoint is obscured by other sub-volumes. The proposed dynamic job scheduling based on visibility substantially increases efficiency. We implemented the method on our university's campus grid, and comparative experiments showed that it provides significant improvements in efficiency for large-scale volume rendering.
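The visibility-driven scheduling idea can be sketched as a small priority loop. The function below is an illustrative sketch, not the paper's implementation; the names `depth` and `occluders` and the obstacle-flag test are assumptions, and an acyclic (front-to-back) occlusion relation is assumed:

```python
def dynamic_schedule(jobs, depth, occluders):
    """Order sub-volume render jobs by visibility from the viewpoint.

    depth[j] is j's distance from the viewpoint; occluders[j] is the set of
    sub-volumes that block j's line of sight. A job's obstacle flag is
    treated as cleared once all of its occluders have been rendered; until
    then the job is deferred, mirroring the dynamic-priority idea.
    """
    done, order = set(), []
    pending = set(jobs)
    while pending:
        # jobs whose obstacle flag is cleared (all occluders already done)
        ready = [j for j in pending if occluders.get(j, set()) <= done]
        # among those, render the sub-volume closest to the viewpoint first
        nxt = min(ready, key=lambda j: depth[j])
        order.append(nxt)
        done.add(nxt)
        pending.remove(nxt)
    return order
```

Front-to-back ordering keeps the result queue compositable as soon as each job returns, regardless of which grid node finished it.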

Design of Distributed Cloud System for Managing Large-Scale Genomic Data

  • Seine Jang;Seok-Jae Moon
    • International Journal of Internet, Broadcasting and Communication / Vol. 16 No. 2 / pp. 119-126 / 2024
  • The volume of genomic data is constantly increasing in various modern industries and research fields. This growth presents new challenges and opportunities in terms of the quantity and diversity of genetic data. In this paper, we propose a distributed cloud system for integrating and managing large-scale gene databases. By introducing a distributed data storage and processing system based on the Hadoop Distributed File System (HDFS), various formats and sizes of genomic data can be efficiently integrated. Furthermore, by leveraging Spark on YARN, efficient management of distributed cloud computing tasks and optimal resource allocation are achieved. This establishes a foundation for the rapid processing and analysis of large-scale genomic data. Additionally, by utilizing BigQuery ML, machine learning models are developed to support genetic search and prediction, enabling researchers to more effectively utilize data. It is expected that this will contribute to driving innovative advancements in genetic research and applications.
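Before genomic files in mixed formats can be loaded into HDFS and processed with Spark, each record is typically normalized into a common schema. The per-record mapper below is a minimal sketch for VCF-style lines, the kind of function one might hand to Spark's map stage; the flattened schema and field names are assumptions, not part of the proposed system:

```python
def parse_vcf_record(line):
    """Normalize one VCF data line into a flat dict with typed fields,
    suitable as a common genomic schema before distributed storage."""
    chrom, pos, vid, ref, alt, qual, flt, info = line.rstrip("\n").split("\t")[:8]
    return {
        "chrom": chrom,
        "pos": int(pos),
        "id": None if vid == "." else vid,
        "ref": ref,
        "alt": alt.split(","),  # multi-allelic sites become a list
        "qual": None if qual == "." else float(qual),
        "filter": flt,
        # INFO entries are key=value pairs; bare keys are boolean flags
        "info": dict(kv.split("=", 1) if "=" in kv else (kv, True)
                     for kv in info.split(";")),
    }
```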

Large-scale Structure Studies with Mock Galaxy Sample from the Horizon Run 4 & Multiverse Simulations

  • Hong, Sungwook E.
    • 천문학회보 / Vol. 45 No. 1 / pp. 29.3-29.3 / 2020
  • Cosmology is the study of the origin, fundamental properties, and evolution of the universe. Many observational data of galaxies are now available, and a fair comparison with them requires large-volume numerical simulations that reproduce the spatial distribution of galaxies well. However, since galaxy evolution is affected by both gravitational and baryonic effects, it is nontrivial to populate galaxies using N-body simulations alone, while full hydrodynamic simulations of large volumes are computationally costly. Alternative methods for assigning galaxies to N-body simulations are therefore necessary for successful cosmological studies. In this talk, I introduce MBP-galaxy abundance matching, a novel galaxy assignment method that agrees with the spatial distribution of observed galaxies on scales of 0.1-100 Mpc. I also introduce mock galaxy catalogs of the Horizon Run 4 and Multiverse simulations, large-volume cosmological N-body simulations carried out by the Korean community, and some recent works that use those mock galaxies to understand our universe better.
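The core idea of abundance matching, assigning galaxy properties to simulated halos by matching rank orders under an assumed monotonic halo-galaxy relation, can be sketched as follows. This is a simplified illustration of the general technique, not the MBP-galaxy abundance matching method itself:

```python
import numpy as np

def abundance_match(halo_mass, galaxy_lum):
    """Rank-order abundance matching: the i-th brightest galaxy luminosity
    is assigned to the i-th most massive halo, assuming the halo-galaxy
    relation is monotonic. Returns one luminosity per input halo."""
    halo_order = np.argsort(halo_mass)[::-1]  # most massive halo first
    lums = np.sort(galaxy_lum)[::-1]          # brightest galaxy first
    assigned = np.empty_like(lums)
    assigned[halo_order] = lums               # luminosity for each halo
    return assigned
```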


Development of the Design Methodology for Large-scale Data Warehouse based on MongoDB

  • Lee, Junho;Joo, Kyungsoo
    • 한국컴퓨터정보학회논문지 / Vol. 23 No. 3 / pp. 49-54 / 2018
  • A data warehouse is a system that collectively manages and integrates the data of a company and provides the basis for decision making in management strategy. Nowadays, analysis data volumes are reaching a critical size that challenges traditional data warehousing approaches. Current solutions are mainly based on relational databases, which are no longer suited to these data volumes. NoSQL solutions allow us to consider new approaches to data warehousing, especially from the point of view of multidimensional data management. In this paper, we extend the star-schema design methodology for relational data warehouses and develop a consistent design methodology, from information requirement analysis to data warehouse construction, for building large-scale data warehouses on MongoDB, one of the NoSQL databases.
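When a star schema is moved to MongoDB, the fact-dimension joins of the relational model are typically replaced by denormalized documents that embed the dimension rows. The helper below is a minimal sketch of that transformation; the `*_id` foreign-key naming convention and the in-memory lookup tables are assumptions for illustration:

```python
def to_mongo_document(fact, dimensions):
    """Collapse one star-schema fact row plus its dimension tables into a
    single nested document, the denormalized form typically stored in
    MongoDB instead of performing fact/dimension joins.

    `dimensions` maps each foreign-key field of the fact row to a lookup
    table {key: dimension_row}."""
    # keep the measures; drop the foreign-key fields
    doc = {k: v for k, v in fact.items() if k not in dimensions}
    for fk, table in dimensions.items():
        # embed the referenced dimension row under the dimension's name
        doc[fk.removesuffix("_id")] = table[fact[fk]]
    return doc
```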

A Study on Reversals after Stock Price Shock in the Korean Distribution Industry

  • Jeong-Hwan, LEE;Su-Kyu, PARK;Sam-Ho, SON
    • 유통과학연구 / Vol. 21 No. 3 / pp. 93-100 / 2023
  • Purpose: The purpose of this paper is to confirm whether stocks belonging to the distribution industry in Korea have reversals, following large daily stock price changes accompanied by large trading volumes. Research design, data, and methodology: We examined whether there were reversals after the event date when large-scale stock price changes appeared for the entire sample of distribution-related companies listed on the Korea Composite Stock Price Index from January 2004 to July 2022. In addition, we reviewed whether the reversals differed depending on abnormal trading volume on the event date. Using multiple regression analysis, we tested whether high trading volume had a significant effect on the cumulative rate of return after the event date. Results: Reversals were confirmed after the stock price shock in the Korean distribution industry and the return after the event date varied depending on the size of the trading volume on the event day. In addition, even after considering both company-specific and event-specific factors, the trading volume on the event day was found to have significant explanatory power on the cumulative rate of return after the event date. Conclusions: Reversals identified in this paper can be used as a useful tool for establishing a trading strategy.
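The multiple regression step described above, post-event cumulative return regressed on event-day trading volume plus controls, can be sketched with ordinary least squares. The variable construction here is illustrative and does not reproduce the paper's exact specification:

```python
import numpy as np

def car_on_volume_ols(car, abnormal_volume, controls=None):
    """OLS of post-event cumulative abnormal return (CAR) on event-day
    abnormal volume, optionally with firm- and event-specific controls.
    Returns the coefficient vector [intercept, volume, controls...]."""
    n = len(car)
    cols = [np.ones(n), np.asarray(abnormal_volume, float)]
    if controls is not None:
        cols.extend(np.asarray(c, float) for c in controls)
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, np.asarray(car, float), rcond=None)
    return beta
```

A significantly negative volume coefficient would be consistent with stronger reversals after high-volume shocks.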

블리킹을 이용한 대용량 초음파 볼륨 데이터 렌더링 (Large-Scale Ultrasound Volume Rendering using Bricking)

  • 김주환;권구주;신병석
    • 한국컴퓨터정보학회논문지 / Vol. 13 No. 7 / pp. 117-126 / 2008
  • With the recent availability of high-resolution volume data, methods are needed for rendering large-scale volume data on graphics hardware with limited memory. Among such methods, bricking, which divides the data into bricks and processes them sequentially, is widely used. However, conventional bricking was designed for CT and MR data in Cartesian coordinates; when applied to fan-shaped ultrasound volume data in a torus coordinate system, the same brick may be loaded into GPU memory twice when a viewing ray enters a brick through its curved boundary and later exits and re-enters it. In this paper, we propose a method for determining brick sizes that avoids repeated texture switching when rendering ultrasound volumes. Since brick boundaries are curved surfaces, we compute their curvature to find the regions where a viewing ray references the same brick twice. By sizing bricks so that the voxels in such a region are shared by the two adjacent bricks, resampling occurs in only one of the two bricks, and redundant brick loading is avoided.
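Sharing boundary voxels between adjacent bricks, so a ray never has to re-load a brick it has already left, can be sketched along one axis as follows. The fixed `overlap` parameter is an assumption; the paper instead derives the shared region from the curvature of the brick boundaries:

```python
def brick_ranges(n_voxels, brick_size, overlap=1):
    """Split a volume axis of n_voxels into bricks of at most brick_size,
    with `overlap` boundary voxels shared between neighbouring bricks, so
    that a ray crossing a brick boundary can be resampled entirely inside
    one brick and the same brick need not be loaded onto the GPU twice."""
    ranges, start = [], 0
    while start < n_voxels:
        end = min(start + brick_size, n_voxels)
        ranges.append((start, end))
        if end == n_voxels:
            break
        start = end - overlap  # neighbours share `overlap` voxels
    return ranges
```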


Integration of a Large-Scale Genetic Analysis Workbench Increases the Accessibility of a High-Performance Pathway-Based Analysis Method

  • Lee, Sungyoung;Park, Taesung
    • Genomics & Informatics / Vol. 16 No. 4 / pp. 39.1-39.3 / 2018
  • The rapid increase in genetic dataset volume has demanded the extensive adoption of biological knowledge to reduce computational complexity, and the biological pathway is one well-known source of such knowledge. In this regard, we previously introduced a novel statistical method, PHARAOH, that enables pathway-based association studies of large-scale genetic datasets. However, researcher-level application of the PHARAOH method has been limited by a lack of support for generally used file formats and by the absence of various quality control options that are essential to practical analysis. To overcome these limitations, we have integrated the PHARAOH method into our recently developed all-in-one workbench. The new PHARAOH program not only supports various de facto standard genetic data formats but also provides many quality control measures and filters based on those measures. We expect that the updated PHARAOH gives researchers better access to pathway-level analysis of large-scale genetic datasets.

REDUCING LATENCY IN SMART MANUFACTURING SERVICE SYSTEM USING EDGE COMPUTING

  • Vimal, S.;Jesuva, Arockiadoss S;Bharathiraja, S;Guru, S;Jackins, V.
    • Journal of Platform Technology / Vol. 9 No. 1 / pp. 15-22 / 2021
  • In a smart manufacturing environment, more and more devices are connected to the Internet, so a large volume of data can be obtained during all phases of the product life cycle. Large-scale industries, companies, and organizations with operational units scattered across various geographical locations face huge resource consumption because of their unorganized structure for sharing resources, which directly affects their supply chains. The cloud-based smart manufacturing paradigm facilitates a new variety of applications and services for analyzing large volumes of data and enables large-scale manufacturing collaboration. Manufacturing units include machinery that may be situated in different geographical areas, and the data from process instances executed on different machines must be constantly managed by the super admin to coordinate the manufacturing process; in large-scale industries, this makes it tedious to maintain the efficiency of the production unit. The data from all these instances must be monitored to maintain the integrity of the manufacturing service system, but computing all of it in the cloud introduces latency into the performance of the system. Instead of validating data from the external device in the cloud, we propose to validate the data at the front-end of each device. The validation process can be automated by script validation, after which the processed data is sent to the cloud for processing and storage. Along with end-device data validation, we implement APM (Asset Performance Management) to enhance the productive functionality of the manufacturers. The manufacturing service system is divided into modules based on the functionalities of the machines and the process instances corresponding to the time schedules of the respective machines. By breaking the whole system into modules, with further subdivision as required, we can reduce data loss or data mismatch caused by processing data from instances that are down for maintenance or machine malfunction. This helps the admin trace the individual domains of the smart manufacturing service system that need attention for error recovery among the various process instances from different machines operating under various conditions. This reduces latency, which in turn increases the efficiency of the whole system.
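The proposed front-end validation, range-checking each reading at the device before it is shipped to the cloud, might look like the sketch below. The field names and limits are illustrative and not taken from any specific APM product:

```python
def validate_at_edge(record, limits):
    """Validate one machine reading at the device ('front-end') instead of
    in the cloud: range-check each field against the machine's limits and
    flag the record, so only clean data is shipped upstream.

    `limits` maps each required field to an inclusive (low, high) range."""
    errors = [
        f"{field}: {value} outside [{lo}, {hi}]"
        for field, (lo, hi) in limits.items()
        for value in [record.get(field)]          # missing fields fail too
        if value is None or not (lo <= value <= hi)
    ]
    return {"record": record, "valid": not errors, "errors": errors}
```

Records flagged invalid can be held back or routed to a repair queue at the edge, so only validated data incurs the round trip to cloud processing and storage.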

대단위 점 데이터를 위한 빠른 삼차원 삼각망 생성방법 (The Fast 3D mesh generation method for a large scale of point data)

  • 이상한;박강
    • 대한기계학회 학술대회논문집 (2000년도 추계학술대회논문집A) / pp. 705-711 / 2000
  • This paper presents a fast 3D mesh generation method that uses a surface-based method with a stitching algorithm. A surface-based method is used because a volume-based method using 3D Delaunay triangulation can hardly handle a large number of scanned points. To reduce processing time, the method also uses a stitching algorithm: the whole point set is divided into several sections, mesh generation is performed on each section individually, and the resulting meshes are stitched into one mesh. Stitching prevents the processing time of the surface-based method from increasing exponentially as the number of points increases. The method works well with different types of scanned points: scattered points from a conventional 3D scanner and cross-sectional points from CT or MRI.
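The divide-and-stitch pipeline starts by slicing the point cloud into sections that share boundary points to stitch along. The slab-partition step can be sketched as follows; slicing along a single axis and the fixed-width overlap strip are assumptions for illustration, not the paper's exact algorithm:

```python
def split_into_sections(points, n_sections, axis=0, overlap=0.0):
    """Divide a point cloud into slabs along one axis, each slab sharing an
    `overlap`-wide strip with its neighbour so the per-section meshes have
    common boundary vertices to stitch along.

    `points` is a sequence of coordinate tuples."""
    lo = min(p[axis] for p in points)
    hi = max(p[axis] for p in points)
    width = (hi - lo) / n_sections
    sections = []
    for i in range(n_sections):
        # widen every internal boundary by the overlap strip
        a = lo + i * width - (overlap if i > 0 else 0.0)
        b = lo + (i + 1) * width + (overlap if i < n_sections - 1 else 0.0)
        sections.append([p for p in points if a <= p[axis] <= b])
    return sections
```

Meshing each slab independently keeps the per-call point count bounded, which is what prevents the overall time from blowing up as the scan grows.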


영산강권역 대규모 농업용 저수지의 가용허가수량 추정 (Estimation of Available Permit Water for Large Scale Agricultural Reservoirs in Youngsan River Basin)

  • 김선주;박기춘;박희성
    • 한국농공학회논문집 / Vol. 54 No. 1 / pp. 93-97 / 2012
  • Water-use permits for agricultural reservoirs are currently issued on the basis of the intake volume upstream of the intake point. As a result, the supply capacity of the reservoirs is not considered in water balance analysis, and the water additionally secured by agricultural reservoirs is analyzed as if it were natural river flow. To overcome these problems, the quantity of water that the reservoirs can supply and permit should be reflected in the water balance analysis. In this study, a long-term daily water balance analysis covering more than 30 years was conducted by applying natural daily flow data and the specifications of the reservoirs, and the available permit water that large-scale agricultural reservoirs in the Youngsan river basin can supply is presented.