• Title/Summary/Keyword: high-volume data

Search Results: 1,110

Flow Visualization Model Based on B-spline Volume (비스플라인 부피에 기초한 유동 가시화 모델)

  • 박상근;이건우
    • Korean Journal of Computational Design and Engineering
    • /
    • v.2 no.1
    • /
    • pp.11-18
    • /
    • 1997
  • Scientific volume visualization addresses the representation, manipulation, and rendering of volumetric data sets, providing mechanisms for looking closely into structures and understanding their complexity and dynamics. In the past several years, a tremendous amount of research and development has been directed toward algorithms and data-modeling methods for scientific data visualization, but there has been very little work on developing a mathematical volume model that feeds this visualization. In flow visualization especially, such a volume model has long been needed to guide the display of the very large amounts of data produced by numerical simulations. In this paper, we focus on the mathematical representation of volumetric data sets and on extracting meaningful information from the derived volume model. For this purpose, a B-spline volume is extended to a high-dimensional trivariate model, called a flow visualization model in this paper. Two three-dimensional examples are presented to demonstrate the capabilities of this model.

  • PDF
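As a rough illustration of the trivariate model mentioned in the abstract, evaluating a tensor-product B-spline volume can be sketched as below. This is a hypothetical sketch, not the paper's model (which extends the volume to higher dimensions); all names and the scalar control grid are made up.

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: the i-th B-spline basis of degree p at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    val = 0.0
    d = knots[i + p] - knots[i]
    if d > 0:
        val += (t - knots[i]) / d * bspline_basis(i, p - 1, t, knots)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0:
        val += (knots[i + p + 1] - t) / d * bspline_basis(i + 1, p - 1, t, knots)
    return val

def volume_point(ctrl, ku, kv, kw, p, u, v, w):
    """Evaluate the tensor-product volume
    V(u,v,w) = sum_ijk N_i(u) N_j(v) N_k(w) C_ijk over scalar control data."""
    s = 0.0
    for i in range(len(ctrl)):
        bu = bspline_basis(i, p, u, ku)
        if bu == 0.0:
            continue
        for j in range(len(ctrl[0])):
            bv = bspline_basis(j, p, v, kv)
            if bv == 0.0:
                continue
            for k in range(len(ctrl[0][0])):
                s += bu * bv * bspline_basis(k, p, w, kw) * ctrl[i][j][k]
    return s

# Degree 1 with clamped knots reduces to trilinear interpolation.
ctrl = [[[i + 2 * j + 4 * k for k in range(2)] for j in range(2)]
        for i in range(2)]
knots = [0.0, 0.0, 1.0, 1.0]
center = volume_point(ctrl, knots, knots, knots, 1, 0.5, 0.5, 0.5)
```

With the degree-1 clamped setup, the value at the cell center is the mean of the eight corner values, which is a quick sanity check on the tensor-product evaluation.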

High-Volume Data Processing using Complex Event Processing Engine in the Web of Next Generation (차세대 웹 환경에서 Complex Event Processing 엔진을 이용한 대용량데이터 처리)

  • Kang, Man-Mo;Koo, Ra-Rok;Lee, Dong-Hyung
    • Journal of KIISE:Databases
    • /
    • v.37 no.6
    • /
    • pp.300-307
    • /
    • 2010
  • As the Web grows, data processing technology develops with it. In the next-generation Web, high-speed and high-volume data processing technologies for various wired and wireless users, USN, and RFID are also developing. In this paper, we propose a high-volume data processing technique using a Complex Event Processing (CEP) engine. CEP is a technology for processing complex events. A CEP engine has the following characteristics: first, it collects high-volume events (data); second, it analyzes those events; finally, it connects events to new actions. In other words, a CEP engine collects, analyzes, and filters high-volume events, and extracts events by pattern matching between registered events and new events. The extracted results can be used as input events for other tasks and for real-time responses to requested events, and can trigger writes to the database for valid data only.
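The collect → analyze → filter → trigger pipeline described in the abstract can be sketched as a toy event engine. This is a minimal illustration, not the paper's system; the rule name, event schema, and threshold are all made up.

```python
class SimpleCEPEngine:
    """Toy complex-event-processing engine: register (pattern, action)
    pairs, then feed an event stream through collect/filter/trigger."""

    def __init__(self):
        self.rules = {}  # name -> (predicate, action)

    def register(self, name, predicate, action):
        self.rules[name] = (predicate, action)

    def process(self, events):
        """Run every event through every rule; fire actions on matches."""
        matched = []
        for ev in events:
            for name, (pred, act) in self.rules.items():
                if pred(ev):
                    act(ev)
                    matched.append((name, ev))
        return matched

hits = []
engine = SimpleCEPEngine()
# Hypothetical rule: react only to RFID readings above a temperature limit.
engine.register("overheat",
                lambda ev: ev["type"] == "rfid" and ev["temp"] > 70,
                lambda ev: hits.append(ev["id"]))
stream = [{"type": "rfid", "id": 1, "temp": 65},
          {"type": "rfid", "id": 2, "temp": 75},
          {"type": "usn", "id": 3, "temp": 90}]
matches = engine.process(stream)
```

Only the event that satisfies the registered pattern triggers the action; the rest of the stream is filtered out, which is the essence of the CEP extraction step.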

Does a Higher Coronary Artery Bypass Graft Surgery Volume Always have a Low In-hospital Mortality Rate in Korea? (관상동맥우회로술 환자의 위험도에 따른 수술량과 병원내 사망의 관련성)

  • Lee, Kwang-Soo;Lee, Sang-Il
    • Journal of Preventive Medicine and Public Health
    • /
    • v.39 no.1
    • /
    • pp.13-20
    • /
    • 2006
  • Objectives: To propose a risk-adjustment model using insurance claims data and to analyze whether the outcomes of non-emergent, isolated coronary artery bypass graft surgery (CABG) differed between low- and high-volume hospitals for patients at different levels of surgical risk. Methods: This is a cross-sectional study that used the 2002 national health insurance claims data. The study data set included patient-level data as well as all the ICD-10 diagnosis and procedure codes recorded in the claims. The patients' biological, admission, and comorbidity information was used in the risk-adjustment model. The risk factors were adjusted with a logistic regression model. The subjects were classified into five groups based on predicted surgical risk: minimal (<0.5%), low (0.5% to 2%), moderate (2% to 5%), high (5% to 20%), and severe (≥20%). The differences between the low- and high-volume hospitals were assessed in each of the five risk groups. Results: The final risk-adjustment model consisted of ten risk factors, which were found to have statistically significant effects on patient mortality. The C-statistic (0.83) and Hosmer-Lemeshow test ($\chi^2=6.92$, p=0.55) showed that the model's performance was good. A total of 30 low-volume hospitals (971 patients) and 4 high-volume hospitals (1,087 patients) were identified. Significant differences in in-hospital mortality were found between the low- and high-volume hospitals for the high (21.6% vs. 7.2%, p=0.00) and severe (44.4% vs. 11.8%, p=0.00) risk patient groups. Conclusions: The good model performance showed that insurance claims data can be used to compare hospital mortality after adjusting for patient risk. A negative correlation existed between surgery volume and in-hospital mortality; however, only patients in the high and severe risk groups showed such a relationship.
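The five risk bands used in the study (cut-offs taken from the abstract) can be expressed as a small classification function. The predicted probabilities below are illustrative, not the paper's data:

```python
def risk_group(p):
    """Map a predicted in-hospital mortality risk p (a probability from
    a logistic risk-adjustment model) to the study's five bands."""
    if p < 0.005:
        return "minimal"   # < 0.5%
    if p < 0.02:
        return "low"       # 0.5% to 2%
    if p < 0.05:
        return "moderate"  # 2% to 5%
    if p < 0.20:
        return "high"      # 5% to 20%
    return "severe"        # >= 20%

# Illustrative use: stratify predicted risks; mortality can then be
# compared between hospital-volume groups within each band.
predicted = [0.001, 0.01, 0.03, 0.10, 0.25]
bands = [risk_group(p) for p in predicted]
```

Stratifying first and comparing within bands is what lets the study separate a volume effect from a case-mix effect.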

Accuracy Assessment of Topographic Volume Estimation Using Kompsat-3 and 3-A Stereo Data

  • Oh, Jae-Hong;Lee, Chang-No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.4
    • /
    • pp.261-268
    • /
    • 2017
  • Topographic volume estimation is carried out for earthwork at construction sites and for quarry excavation monitoring. Topographic surveying with instruments such as engineering levels, total stations, and GNSS (Global Navigation Satellite System) receivers has traditionally been used, and the photogrammetric approach using drone systems has recently been introduced. However, these methods cannot be adopted for inaccessible areas, where high-resolution satellite images can be an alternative. We carried out experiments using Kompsat-3/3A data to estimate the topographic volume of a quarry and checked the accuracy. We generated DEMs (Digital Elevation Models) from newly acquired Kompsat-3/3A data and checked the accuracy of the topographic volume estimation by comparing them to a reference DEM generated by operating a drone system at the same time. The experimental results showed that geometric differences between the stereo images significantly lower the quality of the volume estimation. The tested Kompsat-3 data showed one-meter-level elevation accuracy with a volume estimation error of less than 1%, while the tested Kompsat-3A data showed poorer results because of the large geometric difference.
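The core computation in this kind of study — a cut/fill volume from two DEMs sampled on the same grid — can be sketched as below. This is a simplified illustration; the paper's processing of Kompsat stereo data into DEMs is far more involved, and the grids here are made up.

```python
def dem_volume_change(dem_before, dem_after, cell_area):
    """Cut and fill volumes between two DEMs sampled on the same grid:
    each cell contributes |height change| * cell_area."""
    cut = fill = 0.0
    for row_b, row_a in zip(dem_before, dem_after):
        for hb, ha in zip(row_b, row_a):
            dh = ha - hb
            if dh > 0:
                fill += dh * cell_area
            else:
                cut -= dh * cell_area
    return cut, fill

# 2x2 grids of elevations (m); each cell covers 4 m^2.
before = [[10.0, 10.0], [10.0, 10.0]]
after = [[9.0, 10.0], [10.0, 12.0]]
cut, fill = dem_volume_change(before, after, 4.0)
```

Any elevation error propagates directly into these sums, which is why the stereo geometry differences the abstract mentions degrade the volume estimate.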

A COMPARATIVE STUDY BETWEEN DISCONTINUOUS GALERKIN AND SPECTRAL VOLUME METHODS ON STRUCTURED GRIDS (2차원 정렬 격자계에서의 불연속 갤러킨 기법과 Spectral Volume 기법 비교 연구)

  • Koo H. S.;Kim K. H.;Kim C. A.
    • Korean Society of Computational Fluids Engineering: Conference Proceedings
    • /
    • 2005.10a
    • /
    • pp.131-134
    • /
    • 2005
  • Conventional high-order interpolation schemes are limited in several respects, mainly because they need data from neighboring cells at the reconstruction step. However, the discontinuous Galerkin method and the spectral volume method, the two high-order flux schemes analyzed and compared in this paper, have an important benefit: they do not need to determine the flow gradients from data of neighboring cells or elements. These two schemes construct polynomials of the variables within a cell, so that the high-order accuracy does not deteriorate even near a wall or a discontinuity.

  • PDF
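The key idea the two schemes share — reconstructing a polynomial inside a single cell without neighbor data — can be illustrated in one dimension: a spectral-volume-style reconstruction recovers a quadratic from its averages over three subcells of one cell. This is a hypothetical sketch, not the paper's formulation:

```python
def subcell_average_rows(edges):
    """Row i maps coefficients (a, b, c) of a + b*x + c*x^2 to the exact
    average of that quadratic over subcell [edges[i], edges[i+1]]."""
    rows = []
    for xl, xr in zip(edges[:-1], edges[1:]):
        rows.append([1.0,
                     (xl + xr) / 2.0,
                     (xr ** 3 - xl ** 3) / (3.0 * (xr - xl))])
    return rows

def solve3(A, b):
    """Tiny Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

# Reconstruct p(x) = 1 + 2x + 3x^2 from its averages over three subcells.
edges = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]
A = subcell_average_rows(edges)
true_coeffs = [1.0, 2.0, 3.0]
averages = [sum(ai * ci for ai, ci in zip(row, true_coeffs)) for row in A]
coeffs = solve3(A, averages)
```

Because every quantity lives inside the one cell, nothing from a neighboring element enters the reconstruction, which is the property the abstract highlights.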

3-Dimensional Representation of Heart by Thresholding in EBT Images (EBT 영상에서 임계치 설정법에 의한 심장의 3차원 표현)

  • Won, C.H.;Koo, S.M.;Kim, M.N.;Cho, J.H.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1997 no.11
    • /
    • pp.533-536
    • /
    • 1997
  • In this paper, we visualized the 3-dimensional volume of the heart by applying a thresholding method to EBT slice data. Volume rendering is a method that acquires pixel colors by casting a ray per pixel through the volume data. Because the gray level of the heart region is high, we determine the heart region with a thresholding method: when a ray is cast through the volume data, the region whose values are higher than the threshold value is taken as the heart region. We effectively rendered the heart volume and showed the 3-dimensional heart volume.

  • PDF
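The ray-casting-with-threshold idea in the abstract can be sketched as a first-hit depth map: for each pixel, a ray marches through the slices and stops at the first voxel at or above the threshold. This is an illustrative toy, not the paper's renderer; the slice values and threshold are invented.

```python
def threshold_depth_map(volume, threshold):
    """For each (x, y) ray cast along z, return the index of the first
    slice whose voxel meets the threshold, or -1 if the ray misses."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    depth = [[-1] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            for z in range(nz):
                if volume[z][y][x] >= threshold:
                    depth[y][x] = z
                    break
    return depth

# Three 2x2 "slices" of made-up gray levels; 180 plays the heart threshold.
slices = [[[10, 200], [30, 40]],
          [[220, 210], [50, 205]],
          [[230, 240], [60, 250]]]
depth = threshold_depth_map(slices, 180)
```

Shading each pixel from its hit depth (or the gradient at the hit voxel) then produces the rendered heart surface.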

Min-Max Octree Generation Using CUDA (CUDA를 이용한 최대-최소 8진트리 생성 기법)

  • Lim, Jong-Hyeon;Shin, Byeong-Seok
    • Journal of Korea Game Society
    • /
    • v.9 no.6
    • /
    • pp.191-196
    • /
    • 2009
  • Volume rendering is a method that extracts meaningful information from volume data and visualizes it. In general, as volume data grow larger, it becomes very important to devise acceleration methods to achieve interactive rendering speed. The min-max octree is a data structure for high-speed volume rendering; however, its creation time grows long as the data size increases. In this paper, we propose an acceleration method for min-max octree generation using CUDA. First, we convert the volume data into a one-dimensional array using a space-filling curve. Then we build min-max octree structures from this sequential array and apply them to accelerate volume ray casting.

  • PDF
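The min-max octree itself (before any CUDA acceleration) can be sketched recursively: each node stores the minimum and maximum of its block so ray casting can skip empty regions. This sequential Python sketch only illustrates the structure the paper builds on the GPU; the dictionary layout is an assumption for readability.

```python
def build_min_max_octree(volume, x, y, z, size):
    """Build an octree node over the cubic block of side `size` whose
    corner is (x, y, z); each node stores the block's min and max."""
    if size == 1:
        v = volume[z][y][x]
        return {"min": v, "max": v, "children": None}
    h = size // 2
    children = [build_min_max_octree(volume, x + dx, y + dy, z + dz, h)
                for dz in (0, h) for dy in (0, h) for dx in (0, h)]
    return {"min": min(c["min"] for c in children),
            "max": max(c["max"] for c in children),
            "children": children}

# A 2x2x2 volume with values 0..7 (value = x + 2*y + 4*z).
vol = [[[x + 2 * y + 4 * z for x in range(2)] for y in range(2)]
       for z in range(2)]
root = build_min_max_octree(vol, 0, 0, 0, 2)
```

During ray casting, a node whose max falls below the opacity threshold can be skipped wholesale, which is where the acceleration comes from.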

A Data Structure for Real-time Volume Ray Casting (실시간 볼륨 광선 투사법을 위한 자료구조)

  • Lim, Suk-Hyun;Shin, Byeong-Seok
    • Journal of the Korea Computer Graphics Society
    • /
    • v.11 no.1
    • /
    • pp.40-49
    • /
    • 2005
  • Several optimization techniques have been proposed for volume ray casting, but they cannot achieve real-time frame rates, and it is difficult to apply them to applications that require perspective projection. Recently, hardware-based methods using 3D texture mapping have been used for real-time volume rendering. Although their rendering speed approaches real time, larger volumes require more swapping of volume bricks in and out of the limited texture memory, and image quality deteriorates compared with that of conventional volume ray casting. In this paper, we propose a data structure for real-time volume ray casting named PERM (Precomputed dEnsity and gRadient Map). The PERM stores interpolated density and gradient vectors for quantized cells. Since the information requiring time-consuming computation is stored in the PERM, our method can ensure interactive frame rates on a consumer PC platform. Our method normally produces high-quality images because it is based on conventional volume ray casting.

  • PDF
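The precomputation PERM relies on — storing density together with a gradient vector per cell so the ray caster never recomputes them — can be sketched with central differences. This is an illustrative stand-in; the actual PERM layout and quantization are described in the paper, and the table format here is invented.

```python
def precompute_density_gradient(volume):
    """Precompute per-voxel density and a central-difference gradient,
    the kind of table a PERM-style structure would store so the ray
    caster can look values up instead of recomputing them."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])

    def at(z, y, x):  # clamp indices at the volume border
        z = min(max(z, 0), nz - 1)
        y = min(max(y, 0), ny - 1)
        x = min(max(x, 0), nx - 1)
        return volume[z][y][x]

    table = {}
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                grad = ((at(z, y, x + 1) - at(z, y, x - 1)) / 2.0,
                        (at(z, y + 1, x) - at(z, y - 1, x)) / 2.0,
                        (at(z + 1, y, x) - at(z - 1, y, x)) / 2.0)
                table[(x, y, z)] = (at(z, y, x), grad)
    return table

# A 1x1x3 ramp: density rises along x, so the x-gradient is 1 inside.
table = precompute_density_gradient([[[0.0, 1.0, 2.0]]])
```

At render time a ray sample then costs only a lookup plus shading, which is what makes interactive frame rates possible on a CPU.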

A Study on the Application of Survival Analysis to Terminated Life Insurance Policies

  • Kang, Jung-Chul
    • Journal of the Korean Data and Information Science Society
    • /
    • v.16 no.2
    • /
    • pp.237-253
    • /
    • 2005
  • In Korea, the insurance industry has grown rapidly, aided by economic growth, the rise in GNP, and the drive toward public welfare policy. On the other side of this growth, however, life insurers face problems such as high rates of turnover, lapses, and surrenders in the course of acquiring more insurance contracts. The object of this paper is to analyze the causes and properties of these high rates of turnover, lapses, and surrenders using a statistical survival model. We also hope that insurers will use the results of the analysis to reduce these rates.

  • PDF
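A standard tool for this kind of lapse/surrender analysis is the Kaplan-Meier estimator, which treats policies still in force as censored observations. A minimal sketch follows; the durations and event flags below are made up, and the paper's actual model may differ.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve. events[i] is 1 if the policy lapsed
    (or was surrendered) at times[i], 0 if censored (still in force)."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data if tt == t]
        d = sum(tied)                # lapses at time t
        if d > 0:
            surv *= (at_risk - d) / at_risk
            curve.append((t, surv))
        at_risk -= len(tied)         # remove lapsed and censored policies
        i += len(tied)
    return curve

# Made-up policy durations (years) and lapse indicators.
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
```

The curve drops only at observed lapse times; censored policies still count in the at-risk denominator up to their censoring time, which is what makes the estimate unbiased under censoring.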

Volume Holographic Fingerprint Recognition System for Personal Identification (개인 인증을 위한 체적 홀로그래픽 지문인식 시스템)

  • 이승현
    • Journal of the Korean Society of Safety
    • /
    • v.13 no.4
    • /
    • pp.256-263
    • /
    • 1998
  • In this paper, we propose a volume holographic fingerprint recognition system based on an optical correlator for personal identification. An optical correlator offers the high speed and parallel-processing characteristics of optics. Matched filters are recorded into a volume hologram, which can store data with high density, transfer them at high speed, and select a randomly chosen data element. The multiple reference images of the database are prerecorded in a photorefractive crystal in the form of Fourier-transform images, simply by passing each image, displayed on a spatial light modulator, through a Fourier-transform lens. Angular multiplexing of the multiple database holograms is achieved by rotating the crystal with a step motor. Experimental results show that the proposed system can be used as a security verification system.

  • PDF
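A digital stand-in for the optical matched-filter step — score a probe fingerprint against each stored reference by its cross-correlation peak and pick the best match — can be sketched as follows. This is purely illustrative: the paper performs the correlation optically in a photorefractive crystal, and the 1-D binary "fingerprints" here are invented.

```python
def correlate_score(a, b):
    """Peak of the circular zero-mean cross-correlation of two
    equal-length signals: a digital analogue of the matched-filter
    correlation peak read out at the correlator's output plane."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    ac = [x - ma for x in a]
    bc = [x - mb for x in b]
    return max(sum(ac[i] * bc[(i + shift) % n] for i in range(n))
               for shift in range(n))

def identify(probe, database):
    """Return the database key whose reference correlates best."""
    return max(database, key=lambda k: correlate_score(probe, database[k]))

database = {"alice": [0, 1, 0, 0, 1, 1, 0, 1],
            "bob":   [1, 0, 1, 1, 0, 0, 1, 0]}
# The probe is alice's pattern circularly shifted by one sample.
probe = database["alice"][1:] + database["alice"][:1]
best = identify(probe, database)
```

Taking the maximum over circular shifts mirrors how the optical correlation peak appears wherever the probe best aligns with a stored reference.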