• Title/Summary/Keyword: volumetric 3D video

Real-time Virtual Volumetric Scene Reconstruction System from Multiple Video Streaming (다중 비디오 영상을 이용한 실시간 가상공간 영상 재구성 시스템)

  • Choi, Hyok-S.; Han, Tae-Woo; Lee, Ju-Ho; Yang, Hyun-S.
    • Proceedings of the Korea Information Processing Society Conference / 2003.11a / pp.7-10 / 2003
  • One of the major recent goals in computer graphics is to reproduce dynamically changing 3D virtual spaces. Spatial-information acquisition techniques are generally divided into active methods, which presuppose special hardware or a controlled environment, and passive methods, which assume no particular environment but have relatively high computational complexity. This paper designs and implements a system that improves the computational complexity of a passive algorithm so that, using comparatively simple hardware and without any special environment or physical assumptions, a virtual scene can be reconstructed with as short a latency as possible after the data are acquired.
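
The paper does not detail its passive algorithm in this abstract; as a hedged illustration of the general class of passive multi-view reconstruction it belongs to, the sketch below implements naive silhouette-based visual-hull voxel carving. The grid resolution, silhouette masks, and projection matrices are hypothetical, not taken from the paper:

```python
import numpy as np

def carve_voxels(silhouettes, projections, grid_shape=(64, 64, 64)):
    """Naive visual-hull carving: a voxel survives only if it projects
    inside the foreground silhouette of every camera view.

    silhouettes: list of HxW boolean masks, one per camera.
    projections: list of 3x4 camera projection matrices (assumed given).
    """
    occupied = np.ones(grid_shape, dtype=bool)
    # Homogeneous coordinates of all voxel centers, normalized to [0, 1)^3.
    idx = np.stack(np.meshgrid(*[np.arange(n) for n in grid_shape],
                               indexing="ij"), axis=-1).reshape(-1, 3)
    pts = np.concatenate([idx / np.array(grid_shape),
                          np.ones((len(idx), 1))], axis=1)  # N x 4
    for mask, P in zip(silhouettes, projections):
        h, w = mask.shape
        uvw = pts @ P.T                                     # N x 3 image coords
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        keep = np.zeros(len(pts), dtype=bool)
        keep[inside] = mask[v[inside], u[inside]]
        occupied &= keep.reshape(grid_shape)                # carve this view
    return occupied
```

A real low-latency system would replace the dense voxel loop with coarser grids or incremental updates, which is the kind of complexity reduction the abstract alludes to.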

Compression and Visualization Techniques for Time-Varying Volume Data (시변 볼륨 데이터의 압축과 가시화 기법)

  • Sohn, Bong-Soo
    • Journal of the Korea Society of Computer and Information / v.12 no.3 / pp.85-93 / 2007
  • This paper describes a compression scheme for volumetric video data (3D space × 1D time) where each frame of the volume is decompressed and rendered in real time. Since even one frame of volume data is very large, runtime decompression can be a bottleneck for real-time playback of time-varying volume data. To increase run-time decompression speed and the compression ratio, we decompose the volume into small blocks and update only the blocks that change significantly. The results show that our compression scheme balances decompression speed and image quality well enough for interactive time-varying visualization.
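
The block-wise selective update the abstract describes can be sketched as follows. This is a minimal illustration under assumed parameters (block size, change threshold), not the authors' implementation:

```python
import numpy as np

def update_changed_blocks(prev, curr, block=8, threshold=1.0):
    """Selective update for time-varying volume playback: the volume is
    split into small cubic blocks, and only blocks whose mean absolute
    change exceeds `threshold` are copied into the displayed frame.
    Returns the updated volume and the number of blocks touched.
    """
    out = prev.copy()
    nz, ny, nx = prev.shape
    touched = 0
    for z in range(0, nz, block):
        for y in range(0, ny, block):
            for x in range(0, nx, block):
                sl = (slice(z, z + block),
                      slice(y, y + block),
                      slice(x, x + block))
                if np.abs(curr[sl] - prev[sl]).mean() > threshold:
                    out[sl] = curr[sl]   # significantly changed: update
                    touched += 1
    return out, touched
```

Touching only changed blocks is what keeps per-frame decompression work proportional to scene change rather than to total volume size.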

Surgical Strategy of Epilepsy Arising from Parietal and Occipital Lobes (두정엽 및 후두엽 간질에 대한 수술전략)

  • Sim, Byung-Su; Choi, Ha-Young
    • Journal of Korean Neurosurgical Society / v.29 no.2 / pp.222-230 / 2000
  • Purpose: Resection of the epileptogenic zone in the parietal and occipital lobes may be relevant, although only a few studies have been reported. Methods: Eight patients with parietal epilepsy and nine patients with occipital epilepsy were included in this study. Preoperatively, all had video-EEG monitoring with extracranial electrodes, MRI, 3D-surface rendering of MRI using Allegro (ISG Technologies Inc., Toronto, Canada), and PET scans. Sixteen patients underwent invasive recording with subdural grids. Eight had parietal resection, including the sensory cortex in two. Seven had partial occipital resection. Two underwent total unilateral occipital lobectomy. The extent of the resection was based mainly on the data of invasive EEG recordings, MRI, and 3D-surface rendering of MRI, not on intraoperative electrocorticographic findings as is usually done. During resection, electrocortical stimulation was performed on the motor cortex and speech area. Results: Of the eight patients with parietal epilepsy, three had sensory aura, two had gustatory aura, and two had visual aura. Six of the nine patients with occipital epilepsy had visual auras. All had complex partial seizures, with lateralizing signs in 15 patients. Four had quadrantanopsia. One had mild right hemiparesis. Abnormality on MRI was noticed in six of eight parietal epilepsy cases and in eight of nine occipital epilepsy cases. 3D-surface rendering of MRI visualized volumetric abnormality, with geometric spatial relationships to the adjacent normal brain, in all parietal and occipital epilepsy cases. Surface EEG recording was not reliable in localizing the epileptogenic zone in any patient. The subdural grid electrodes could be implanted on the core of the structural abnormality in the 3D-reconstructed brain. The ictal onset zone was localized accurately by subdural grid EEGs in 16 patients. The motor cortex in nine and the sensory speech area in two were identified by electrocortical stimulation. Histopathologic findings revealed cortical dysplasia in 10 patients; tuberous sclerosis was combined in two, hamartoma and ganglioglioma in one each, and subpial gliosis in six. Eleven patients were seizure free at follow-up of 6 to 37 months (mean 19.7 months) after surgery. Seizures recurred in two patients and were unchanged in one. Six developed transient sensory loss, and one developed hemiparesis and tactile agnosia. One revealed transient apraxia. Two patients with preoperative quadrantanopsia developed homonymous hemianopsia. Conclusion: This study suggests that surgical treatment was relevant in parietal and occipital epilepsies, with good surgical outcome and without significant neurologic sequelae. Neuroimaging studies, including conventional MRI and 3D-surface rendering of MRI, were necessary for identifying the epileptogenic zone. In particular, 3D-surface rendering of MRI was very helpful in presuming the epileptogenic zone in patients with no identifiable lesion on conventional MRI, in planning the surgical approach to lesions, and in deciding the extent of the epileptogenic zone in patients with an identifiable lesion on conventional MRI. Invasive EEG recording with subdural grid electrodes helped confirm the core of the epileptogenic zone revealed in the 3D-surface rendered brain.

Interframe Coding of 3-D Medical Image Using Warping Prediction (Warping을 이용한 움직임 보상을 통한 3차원 의료 영상의 압축)

  • So, Yun-Sung; Cho, Hyun-Duck; Kim, Jong-Hyo; Ra, Jong-Beom
    • Journal of Biomedical Engineering Research / v.18 no.3 / pp.223-231 / 1997
  • In this paper, an interframe coding method for volumetric medical images is proposed. By treating interslice variations as the motion of bones or tissues, we use the motion compensation (MC) technique to predict the current frame from the previous frame. Instead of a block matching algorithm (BMA), which is the most common motion estimation (ME) algorithm in video coding, image warping with a bilinear transformation is suggested to predict complex interslice object variation in medical images. When an object disappears between slices, however, warping prediction performs poorly. To overcome this drawback, an overlapped block motion compensation (OBMC) technique is combined with warping prediction. Motion-compensated residual images are then encoded using an embedded zerotree wavelet (EZW) coder with a small modification for consistent quality of the reconstructed images. The experimental results show that interframe coding using warping prediction provides better performance than intraframe coding, and the OBMC scheme gives some additional improvement over the warping-only MC method.
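
Warping-based motion compensation with bilinear sampling, the core idea the abstract contrasts with block matching, can be sketched as follows. The dense displacement field here is an assumed input for illustration, not the paper's bilinear-mesh motion estimator:

```python
import numpy as np

def warp_predict(prev_slice, flow):
    """Backward warping prediction with bilinear interpolation: each pixel
    of the predicted slice is fetched from the previous slice at a position
    displaced by `flow` (H x W x 2, [dy, dx]). This illustrates warping-based
    motion compensation as opposed to block-wise translation (BMA).
    """
    h, w = prev_slice.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sy = np.clip(ys + flow[..., 0], 0, h - 1)   # sample positions, clamped
    sx = np.clip(xs + flow[..., 1], 0, w - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = sy - y0
    wx = sx - x0
    # Bilinear blend of the four neighboring samples.
    top = prev_slice[y0, x0] * (1 - wx) + prev_slice[y0, x1] * wx
    bot = prev_slice[y1, x0] * (1 - wx) + prev_slice[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

The residual (current slice minus this prediction) is what would then go to the wavelet coder; smooth warps keep that residual small even where rigid block translation fails.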

Recognizing the Direction of Action using Generalized 4D Features (일반화된 4차원 특징을 이용한 행동 방향 인식)

  • Kim, Sun-Jung; Kim, Soo-Wan; Choi, Jin-Young
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.5 / pp.518-528 / 2014
  • In this paper, we propose a method to recognize the direction of human actions by developing 4D space-time (4D-ST, [x,y,z,t]) features. For this, we propose 4D space-time interest points (4D-STIPs, [x,y,z,t]), which are extracted from 3D space (3D-S, [x,y,z]) volumes reconstructed from images of a finite number of different views. Since the proposed features are constructed from volumetric information, features for an arbitrary 2D space (2D-S, [x,y]) viewpoint can be generated by projecting the 3D-S volumes and 4D-STIPs onto the corresponding image planes in the training step. We can recognize the directions of actors in the test video because our training sets, which are projections of 3D-S volumes and 4D-STIPs onto various image planes, contain the direction information. The process of recognizing action direction is divided into two steps: first we recognize the class of the action, and then we recognize its direction using the direction information. For recognizing both the action and its direction, we construct motion history images (MHIs) and non-motion history images (NMHIs) from the projected 3D-S volumes and 4D-STIPs; these encode the moving and non-moving parts of an action, respectively. For action recognition, features are trained by support vector data description (SVDD) according to the action class and recognized by support vector domain density description (SVDDD). For action direction recognition after the action is recognized, each action is trained using SVDD according to the direction class and then recognized by SVDDD. In experiments, we train the models using 3D-S volumes from the INRIA Xmas Motion Acquisition Sequences (IXMAS) dataset and evaluate action direction recognition on a new SNU dataset constructed for this purpose.
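
The motion history images (MHIs) used above follow a standard formulation: recently moving pixels are set to a maximum timestamp value and older motion decays away. A minimal sketch, with an assumed decay schedule and difference threshold rather than the authors' exact parameters:

```python
import numpy as np

def motion_history_image(frames, diff_threshold=15, tau=10):
    """Motion history image: pixels where consecutive frames differ by more
    than `diff_threshold` are set to `tau`; elsewhere the stored history
    decays by 1 per frame (floored at 0). Recent motion appears brightest,
    so the MHI encodes both where and how recently motion occurred.
    """
    mhi = np.zeros(frames[0].shape, dtype=float)
    for prev, curr in zip(frames[:-1], frames[1:]):
        moving = np.abs(curr.astype(float) - prev.astype(float)) > diff_threshold
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi
```

An NMHI, as described in the abstract, would analogously accumulate the complementary (non-moving) pixels; both serve as templates fed to the SVDD-based classifiers.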