• Title/Summary/Keyword: Image data-sets


An Implementation of OTB Extension to Produce TOA and TOC Reflectance of LANDSAT-8 OLI Images and Its Product Verification Using RadCalNet RVUS Data (Landsat-8 OLI 영상정보의 대기 및 지표반사도 산출을 위한 OTB Extension 구현과 RadCalNet RVUS 자료를 이용한 성과검증)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.449-461
    • /
    • 2021
  • Analysis Ready Data (ARD) for optical satellite images is a pre-processed product that applies the spectral characteristics and viewing parameters of each sensor. Atmospheric correction is one of the fundamental and complicated topics, and it serves to produce Top-of-Atmosphere (TOA) and Top-of-Canopy (TOC) reflectance from multi-spectral image sets. Most remote sensing software provides algorithms or processing schemes dedicated to these corrections for the Landsat-8 OLI sensor. Furthermore, Google Earth Engine (GEE) provides direct access to Landsat reflectance products, USGS-based ARD (USGS-ARD), in the cloud environment. We implemented an atmospheric correction extension for the Orfeo ToolBox (OTB), an open-source remote sensing software package for manipulating and analyzing high-resolution satellite images. This is the first such tool, as OTB has not provided calibration modules for any Landsat sensor. Using this extension, we conducted absolute atmospheric correction on Landsat-8 OLI images of Railroad Valley, United States (RVUS) and validated the reflectance products against RVUS reflectance data sets from the RadCalNet portal. The results showed that the reflectance products generated by the OTB extension for Landsat differed by less than 5% from the RadCalNet RVUS data. In addition, we performed a comparative analysis with reflectance products obtained from other open-source tools, namely the QGIS Semi-Automatic Classification Plugin and SAGA, as well as with USGS-ARD products. Compared with the other two open-source tools, the reflectance products of the OTB extension showed high consistency with those of USGS-ARD, within an acceptable level across the measurement range of the RadCalNet RVUS data. This study verified the atmospheric correction processor in the OTB extension and demonstrated its applicability to other satellite sensors, such as those on the Compact Advanced Satellite (CAS)-500 or new optical satellites.
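The DN-to-TOA-reflectance step such an extension performs can be sketched from the published Landsat-8 OLI conversion formula; the rescaling factors come from a scene's MTL metadata file (the values 2.0e-05 and -0.1 are the usual OLI reflectance coefficients), while the DN and sun elevation below are purely illustrative:

```python
import math

def toa_reflectance(dn, mult, add, sun_elev_deg):
    """Convert a Landsat-8 OLI digital number (Qcal) to TOA reflectance.

    mult, add    : REFLECTANCE_MULT_BAND_x / REFLECTANCE_ADD_BAND_x
                   from the scene's MTL file
    sun_elev_deg : scene sun elevation angle in degrees
    """
    rho = mult * dn + add  # reflectance without sun-angle correction
    return rho / math.sin(math.radians(sun_elev_deg))  # solar-elevation correction

# Illustrative values only; a real workflow reads mult/add and the sun
# elevation from the MTL file of the specific scene.
r = toa_reflectance(dn=12000, mult=2.0e-05, add=-0.1, sun_elev_deg=45.0)
```

TOC reflectance additionally requires removing atmospheric scattering and absorption terms, which is the harder part the OTB extension addresses.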

Template-Based Object-Order Volume Rendering with Perspective Projection (원형기반 객체순서의 원근 투영 볼륨 렌더링)

  • Koo, Yun-Mo;Lee, Cheol-Hi;Shin, Yeong-Gil
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.27 no.7
    • /
    • pp.619-628
    • /
    • 2000
  • Perspective views provide a powerful depth cue and thus aid the interpretation of complicated images. The main drawback of current perspective volume rendering is its long execution time. In this paper, we present an efficient perspective volume rendering algorithm based on coherency between rays. Two sets of templates are built for the rays cast from horizontal and vertical scanlines in the intermediate image, which is parallel to one of the volume faces. Each sample along a ray is calculated by interpolating neighboring voxels with the pre-computed weights in the templates. We also solve the problem of uneven sampling rate due to perspective ray divergence by building more templates for the regions far away from the viewpoint. Since our algorithm operates in object order, it can avoid redundant access to each voxel and exploit spatial data coherency by using a run-length encoded volume. Experimental results show that the use of templates and object-order processing with a run-length encoded volume provide speedups compared with other approaches. Additionally, the image quality of our algorithm improves by resolving the uneven sampling rate caused by perspective ray divergence.
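The core template idea can be illustrated in a simplified 1-D form: because rays from the same scanline share the same sampling geometry, the interpolation weights can be precomputed once and reused for every ray, instead of being recomputed per sample. This is only a sketch of the principle, not the paper's full 3-D implementation:

```python
import numpy as np

def build_template(origin_frac, step, n_samples):
    """Precompute linear-interpolation weights for samples along a ray.

    Each sample position p lies between voxels floor(p) and floor(p)+1;
    the template stores (base index, fractional weight) so that every ray
    sharing the same scanline geometry reuses it."""
    pos = origin_frac + step * np.arange(n_samples)
    base = np.floor(pos).astype(int)
    w = pos - base  # weight toward the next voxel
    return base, w

def resample(scanline, base, w):
    """Interpolate volume samples along one ray using a precomputed template."""
    return (1.0 - w) * scanline[base] + w * scanline[base + 1]
```

In the actual algorithm this precomputation is done per scanline of the intermediate image, and extra templates are added for distant regions to keep the sampling rate even under perspective divergence.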


Region-based Building Extraction of High Resolution Satellite Images Using Color Invariant Features (색상 불변 특징을 이용한 고해상도 위성영상의 영역기반 건물 추출)

  • Ko, A-Reum;Byun, Young-Gi;Park, Woo-Jin;Kim, Yong-Il
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.2
    • /
    • pp.75-87
    • /
    • 2011
  • This paper presents a method for region-based building extraction from high-resolution satellite images (HRSI) using integrated spectral and color-invariant features, without user intervention such as selecting training data sets. This study also evaluates the effectiveness of the proposed method by applying it to IKONOS and QuickBird images. First, the image is segmented by the MSRG method. The vegetation and shadow regions are automatically detected and masked to facilitate the building extraction. Second, region merging is performed on the masked image using the integrated spectral and color-invariant features. Finally, the building regions are extracted using a shape feature of the merged regions. The boundaries of the extracted buildings are simplified using generalization techniques to improve the completeness of the building extraction. The experimental results showed more than 80% accuracy for the two study areas, and visually satisfactory results were obtained. In conclusion, the proposed method shows great potential for building extraction from HRSI.
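The abstract does not name the specific color-invariant model used, so as an assumption the sketch below uses the well-known c1c2c3 model, whose channel-ratio angles are largely insensitive to shading and illumination intensity, which is the property that makes such features useful for masking shadow and vegetation regions:

```python
import numpy as np

def c1c2c3(rgb):
    """c1c2c3 color invariants of an RGB image (one common invariant model;
    whether it is the one used in this paper is an assumption).

    The features depend only on ratios between channels, so scaling all
    channels by the same illumination factor leaves them unchanged.
    rgb: float array of shape (..., 3) with positive values."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    c1 = np.arctan2(r, np.maximum(g, b))
    c2 = np.arctan2(g, np.maximum(r, b))
    c3 = np.arctan2(b, np.maximum(r, g))
    return np.stack([c1, c2, c3], axis=-1)
```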

Performance Testing of Satellite Image Processing based on OGC WPS 2.0 in the OpenStack Cloud Environment (오픈스택 클라우드 환경 OGC WPS 2.0 기반 위성영상처리 성능측정 시험)

  • Yoon, Gooseon;Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.6
    • /
    • pp.617-627
    • /
    • 2016
  • Many kinds of OGC-based web standards have been utilized in numerous geospatial application fields for sharing and interoperable processing of large data sets, including satellite images. At the same time, the number of cloud-based application services using on-demand provisioning of virtual machines is increasing. However, remote sensing applications combining these two major trends are globally still at an initial stage. This study presents a practical linkage case covering both OGC-based standards and cloud computing. A performance test is carried out on an implementation of cloud detection processing. The test targets are WPS 2.0 and two types of geo-based service environment: a web server on a single core, and multiple virtual servers implemented in an OpenStack cloud computing environment. The performance test, driven by JMeter, covers the five WPS 2.0 requests: GetCapabilities, DescribeProcess, Execute, GetStatus, and GetResult. The results show that the measured processing times in the cloud-based environment are faster than those of the single server. It is expected that expanding the processing algorithms offered through WPS 2.0 and virtualized processing will enable practical, target-oriented applications.
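The five WPS 2.0 operations under test can be expressed as KVP (key-value pair) GET requests. A minimal sketch follows; the endpoint URL and the process identifier `CloudDetection` are hypothetical placeholders, not values from the paper:

```python
from urllib.parse import urlencode

# Hypothetical WPS endpoint; replace with the real service URL.
WPS_URL = "http://example.org/wps"

def wps_request(operation, **params):
    """Build a WPS 2.0 KVP GET request URL for one operation."""
    query = {"service": "WPS", "version": "2.0.0", "request": operation}
    query.update(params)
    return WPS_URL + "?" + urlencode(query)

# The five request types measured in the JMeter test plan.
requests_under_test = [
    wps_request("GetCapabilities"),
    wps_request("DescribeProcess", identifier="CloudDetection"),
    wps_request("Execute", identifier="CloudDetection", mode="async"),
    wps_request("GetStatus", jobId="JOB-ID"),
    wps_request("GetResult", jobId="JOB-ID"),
]
```

In an asynchronous Execute workflow, the `jobId` returned by Execute is what GetStatus and GetResult are polled with; JMeter then records the response time of each request type against both server environments.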

Prediction of the remaining time and time interval of pebbles in pebble bed HTGRs aided by CNN via DEM datasets

  • Mengqi Wu;Xu Liu;Nan Gui;Xingtuan Yang;Jiyuan Tu;Shengyao Jiang;Qian Zhao
    • Nuclear Engineering and Technology
    • /
    • v.55 no.1
    • /
    • pp.339-352
    • /
    • 2023
  • Prediction of the time-related traits of pebble flow inside pebble-bed HTGRs is of great significance for reactor operation and design. In this work, an image-driven approach with the aid of a convolutional neural network (CNN) is proposed to predict the remaining time of initially loaded pebbles and the time interval between paired flow images of the pebble bed. Two types of strategies are put forward: one adds FC layers to classic classification CNN models and uses regression training, and the other is CNN-based deep expectation (DEX), which treats time prediction as a deep classification task followed by softmax expected-value refinement. The current dataset is obtained from discrete element method (DEM) simulations. Results show that the CNN-aided models generally make satisfactory predictions of the remaining time, with a coefficient of determination larger than 0.99. Among these models, VGG19+DEX performs the best, and its CumScore (the proportion of the test set with prediction error within 0.5 s) reaches 0.939. Besides, the remaining time of additional test sets and new cases can also be well predicted, indicating the good generalization ability of the model. In the task of predicting the time interval of image pairs, the VGG19+DEX model also generates satisfactory results. In particular, the trained model, with promising generalization ability, has demonstrated great potential for accurately and instantaneously predicting the traits of interest without additional computationally intensive DEM simulations. Nevertheless, data diversity and model optimization need to be improved to achieve the full potential of the CNN-aided prediction tool.
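The DEX refinement named above has a simple closed form: the network classifies into discrete time bins, and the final prediction is the softmax-weighted expectation over the bin values rather than the argmax bin. A minimal sketch of that refinement step (the bin layout is an assumption, not taken from the paper):

```python
import numpy as np

def dex_expected_time(logits, bin_values):
    """Deep-expectation (DEX) refinement: turn classification logits over
    discrete time bins into a continuous prediction by taking the
    softmax-weighted expected value of the bin centers."""
    z = logits - logits.max()            # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax probabilities
    return float(np.dot(p, bin_values))  # expectation over bin values
```

Compared with plain regression heads, this keeps the training signal categorical while still producing sub-bin-resolution outputs, which is why it can score well on a 0.5 s error threshold.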

Deep Learning Approach for Automatic Discontinuity Mapping on 3D Model of Tunnel Face (터널 막장 3차원 지형모델 상에서의 불연속면 자동 매핑을 위한 딥러닝 기법 적용 방안)

  • Chuyen Pham;Hyu-Soung Shin
    • Tunnel and Underground Space
    • /
    • v.33 no.6
    • /
    • pp.508-518
    • /
    • 2023
  • This paper presents a new approach for the automatic mapping of discontinuities in a tunnel face based on its 3D digital model reconstructed by LiDAR scanning or photogrammetry techniques. The main idea revolves around identifying discontinuity areas in the 3D digital model of a tunnel face by segmenting its 2D projected images with a deep-learning semantic segmentation model called U-Net. The proposed deep learning model integrates various features, including the projected RGB image, the depth map image, and images based on local surface properties, i.e., normal vector and curvature images, to effectively segment areas of discontinuity in the images. Subsequently, the segmentation results are projected back onto the 3D model using depth maps and projection matrices to obtain an accurate representation of the location and extent of discontinuities within the 3D space. The performance of the segmentation model is evaluated by comparing the segmented results with their corresponding ground truths, which demonstrates the high accuracy of the segmentation results, with an intersection-over-union metric of approximately 0.8. Despite the still-limited training data, this method exhibits promising potential to address the limitations of conventional approaches, which rely only on normal vectors and unsupervised machine learning algorithms to group points in the 3D model into distinct sets of discontinuities.
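The back-projection step, lifting segmented 2D pixels into 3D with the depth map, can be sketched with a pinhole camera model. Using an intrinsic matrix `K` is an assumption for illustration; the paper works with the projection matrices of its own 2D renderings:

```python
import numpy as np

def backproject(mask, depth, K):
    """Lift segmented pixels into 3D using the depth map and a pinhole
    intrinsic matrix K (illustrative assumption).

    mask : (H, W) boolean segmentation result
    depth: (H, W) depth of each pixel along its viewing ray
    K    : (3, 3) camera intrinsic matrix
    Returns an (N, 3) array of 3D points for the masked pixels."""
    v, u = np.nonzero(mask)  # rows (v) and columns (u) of segmented pixels
    pix = np.stack([u, v, np.ones_like(u)], axis=0).astype(float)
    rays = np.linalg.inv(K) @ pix   # homogeneous pixels -> camera rays
    return (rays * depth[v, u]).T   # scale each ray by its depth
```

With the real projection matrices, the same scaling of rays by per-pixel depth recovers where each segmented discontinuity pixel sits on the 3D tunnel-face model.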

Analysis of Manganese Nodule Abundance in KODOS Area (KODOS 지역의 망간단괴 부존률 분포해석)

  • Jung, Moon Young;Kim, In Kee;Sung, Won Mo;Kang, Jung Keuk
    • Economic and Environmental Geology
    • /
    • v.28 no.3
    • /
    • pp.199-211
    • /
    • 1995
  • The deep sea camera system makes it possible to obtain detailed information on nodule distribution, but it is difficult to estimate nodule abundance quantitatively. In order to estimate nodule abundance quantitatively from deep seabed photographs, a nodule abundance equation was derived from the box core data obtained in the KODOS area (long.: $154^{\circ}{\sim}151^{\circ}W$, lat.: $9^{\circ}{\sim}12^{\circ}N$) during two survey cruises carried out in 1989 and 1990. The regression equation, derived by incorporating the extent of nodule burial into Handa's equation, compensates for the abundance error attributable to the partial burial of some nodules by sediments. The average long axis and average extent of burial of nodules in a photographed area are determined from the surface textures of the nodules, and nodule coverage is calculated by an image analysis method. The average nodule abundance estimated from seabed photographs using the equation is approximately 92% of the actual average abundance in the KODOS area. The sampling points measured by box core or free-fall grab are in general very sparse, and hence the nodule abundance distribution must be interpolated and extrapolated from the measured data to uncharacterized areas. Another goal of this study is to depict the continuous distribution of nodule abundance in the KODOS area using a PC version of a geostatistical model in which several stages proceed systematically. Geostatistics was used to analyze the spatial structure and distribution of the regionalized variable (nodule abundance) within the sets of real data. To investigate the spatial structure of nodule abundance in the KODOS area, experimental variograms were calculated and fitted to spherical models under isotropy and anisotropy assumptions, respectively. The spherical structure models were used to map the distribution of nodule abundance for the isotropic and anisotropic models using the kriging method. The result from the anisotropic model is much more reliable than that of the isotropic model. The distribution map of nodule abundance produced by the PC version of the geostatistical model indicates that, in the anisotropic case, approximately 40% of the KODOS area can be considered a promising area (nodule abundance > $5kg/m^2$) for mining.
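The spherical model fitted to the experimental variograms has the standard closed form gamma(h) = c0 + c(1.5 h/a - 0.5 (h/a)^3) for h <= a and c0 + c beyond the range a, where c0 is the nugget and c = sill - nugget. A minimal sketch (the parameter values in the test are illustrative, not fitted values from the paper):

```python
import numpy as np

def spherical_variogram(h, nugget, sill, rng):
    """Spherical variogram model:
    gamma(h) = nugget + c*(1.5*h/a - 0.5*(h/a)**3) for h <= a,
    gamma(h) = sill                                 for h >  a,
    with c = sill - nugget and range a; gamma(0) = 0 by definition."""
    h = np.asarray(h, dtype=float)
    c = sill - nugget
    ratio = np.minimum(h / rng, 1.0)        # clamp beyond the range
    gamma = nugget + c * (1.5 * ratio - 0.5 * ratio ** 3)
    return np.where(h == 0.0, 0.0, gamma)
```

Once fitted, such a model supplies the covariances that the kriging system needs to interpolate abundance between the sparse box-core and free-fall-grab sampling points; the anisotropic case simply fits separate ranges along different directions.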


The Influence of Iteration and Subset on True X Method in F-18-FPCIT Brain Imaging (F-18-FPCIT 뇌 영상에서 True-X 재구성 기법을 기반으로 했을 때의 Iteration과 Subset의 영향)

  • Choi, Jae-Min;Kim, Kyung-Sik;NamGung, Chang-Kyeong;Nam, Ki-Pyo;Im, Ki-Cheon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.122-126
    • /
    • 2010
  • Purpose: F-18-FPCIT, which shows a strong affinity for the dopamine transporter (DAT) located at neural terminal sites, offers diagnostic information about the DAT density state in the region of the striatum, especially in Parkinson's disease. In this study, we altered the iteration and subset values and measured SUV±SD and contrast from phantom images reconstructed with specific iteration and subset settings, in order to suggest an appropriate range for the iteration and subset. Materials and Methods: This study was performed with 10 normal volunteers without any history of Parkinson's disease or cerebral disease, and with a Flangeless Esser PET Phantom from Data Spectrum Corporation. 5.3±0.2 mCi of F-18-FPCIT was injected into the normal group, and the PET phantom was assembled according to the ACR PET Phantom Instructions; its actual ratio between the hot spheres and the background was 2.35 to 1. Brain and phantom images were acquired for ten minutes, starting 3 hours after injection. A Siemens Biograph 40 TruePoint was used, and the TrueX method was applied for image reconstruction. The iteration and subset values were set to 2 iterations/8 subsets, 3 iterations/16 subsets, 6 iterations/16 subsets, 8 iterations/16 subsets, and 8 iterations/21 subsets, respectively. To measure SUVs on the brain images, ROIs were drawn on the right putamen. The coefficient of variation (CV) was also calculated to indicate the uniformity at each iteration/subset combination. In the phantom study, we measured the actual ratio between the hot spheres and the background at each combination, with ROIs of the same size drawn on the same slice and location. Results: Mean SUVs were 10.60, 12.83, 13.87, 13.98, and 13.5 at the respective combinations. The ranges of fluctuation between consecutive settings were 22.36%, 10.34%, 1.1%, and 4.8%, respectively; the fluctuation of the mean SUV was lowest between 6 iterations/16 subsets and 8 iterations/16 subsets. The CV was 9.07%, 11.46%, 13.56%, 14.91%, and 19.47%, respectively, which means that as the iteration and subset values increase, image uniformity worsens. The ranges of fluctuation of the CV between settings were 2.39, 2.1, 1.35, and 4.56; the fluctuation of uniformity was lowest between 6 iterations/16 subsets and 8 iterations/16 subsets. In the contrast test, the ratios were 1.92:1, 2.12:1, 2.10:1, 2.13:1, and 2.11:1 at the respective combinations; the setting of 8 iterations/16 subsets reproduced the true hot-sphere-to-background ratio most closely. Conclusion: Based on the findings of this study, SUVs and uniformity might be calculated differently depending on reconstruction parameters such as the filter or FWHM. The mean SUV and uniformity showed the lowest fluctuation between 6 iterations/16 subsets and 8 iterations/16 subsets, and 8 iterations/16 subsets showed the hot-sphere-to-background ratio nearest the true value. However, it cannot be concluded that only 6 iterations/16 subsets and 8 iterations/16 subsets produce images suitable for clinical diagnosis; other factors may yield better images. For more exact clinical diagnosis through quantitative analysis of DAT density in the region of the striatum, quantitative reference values from healthy subjects need to be secured.
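The uniformity index reported above is the standard coefficient of variation, CV (%) = standard deviation / mean x 100, computed over the ROI values; a larger CV means a noisier, less uniform image. A minimal sketch:

```python
import numpy as np

def coefficient_of_variation(values):
    """CV (%) = standard deviation / mean * 100, the uniformity index used
    to compare iteration/subset combinations; larger CV = less uniform."""
    values = np.asarray(values, dtype=float)
    return float(values.std() / values.mean() * 100.0)
```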


Feasibility Study on Producing 1:25,000 Digital Map Using KOMPSAT-5 SAR Stereo Images (KOMPSAT-5 레이더 위성 스테레오 영상을 이용한 1:25,000 수치지형도제작 가능성 연구)

  • Lee, Yong-Suk;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_3
    • /
    • pp.1329-1350
    • /
    • 2018
  • There have been many applications observing the Earth using synthetic aperture radar (SAR), since it can acquire Earth observation data regardless of weather or local time. However, research on digital map generation from SAR has rarely been performed due to the complexity of raw data processing. In this study, we examined the feasibility of producing a digital map using SAR stereo images. We collected two sets of KOMPSAT-5 stereo data, comprising an ascending-orbit and a descending-orbit acquisition, respectively. To demonstrate the feasibility of digital map generation from SAR stereo images, we performed 1) rational polynomial coefficient transformation from the radar geometry, 2) digital restitution using the KOMPSAT-5 stereo images, and 3) validation using digital-map-derived reference points and check points. For the two models, the root mean squared errors in the XY and Z directions were less than 1 m for each model. We conclude that KOMPSAT-5 stereo images can generate a 1:25,000 digital map that meets the digital map standard. The proposed results would contribute to generating and updating digital maps for inaccessible areas and regions with unstable weather conditions, such as North Korea or the polar regions.

Development of an Offline Based Internal Organ Motion Verification System during Treatment Using Sequential Cine EPID Images (연속촬영 전자조사 문 영상을 이용한 오프라인 기반 치료 중 내부 장기 움직임 확인 시스템의 개발)

  • Ju, Sang-Gyu;Hong, Chae-Seon;Huh, Woong;Kim, Min-Kyu;Han, Young-Yih;Shin, Eun-Hyuk;Shin, Jung-Suk;Kim, Jing-Sung;Park, Hee-Chul;Ahn, Sung-Hwan;Lim, Do-Hoon;Choi, Doo-Ho
    • Progress in Medical Physics
    • /
    • v.23 no.2
    • /
    • pp.91-98
    • /
    • 2012
  • Verification of internal organ motion during treatment, and its feedback, is essential for accurate dose delivery to a moving target. We developed an offline internal organ motion verification system (IMVS) using cine EPID images and evaluated its accuracy and availability through a phantom study. For verification of organ motion using live cine EPID images, a pattern-matching algorithm was employed in self-developed analysis software, using an internal surrogate that is clearly distinguishable and represents organ motion in the treatment field, such as the diaphragm. For the system performance test, we developed a linear motion phantom consisting of a human-body-shaped phantom with a fake tumor in the lung, a linear motion cart, and control software. The phantom was operated with a motion of 2 cm at 4 sec per cycle, and cine EPID images were obtained at rates of 3.3 and 6.6 frames per second (2 MU/frame) with 1,024×768 pixels on a linear accelerator (10 MV X-rays). Organ motion of the target was tracked using the self-developed analysis software. The results were compared with the planned data of the motion phantom and with data from a video-image-based tracking system (RPM, Varian, USA) using an external surrogate, in order to evaluate the accuracy. For quantitative analysis, we analyzed the correlation between the two data sets in terms of the average cycle (peak to peak), amplitude, and pattern (RMS, root mean square) of motion. The average motion cycles from the IMVS and the RPM system were 3.98±0.11 sec (IMVS, 3.3 fps), 4.005±0.001 sec (IMVS, 6.6 fps), and 3.95±0.02 sec (RPM), respectively, showing good agreement with the real value (4 sec/cycle). The average amplitudes of motion tracked by our system were 1.85±0.02 cm (3.3 fps) and 1.94±0.02 cm (6.6 fps), which deviated slightly from the actual value (2 cm) by 0.15 cm (7.5% error) and 0.06 cm (3% error), respectively, due to the time resolution of image acquisition. In the analysis of the motion pattern, the RMS value from the cine EPID images at 3.3 fps (0.1044) was slightly larger than that at 6.6 fps (0.0480). The organ motion verification system using sequential cine EPID images with an internal surrogate represented the motion well, within 3% error, in this preliminary phantom study. The system can be implemented for clinical purposes, including organ motion verification during treatment, comparison with 4D treatment planning data, and feedback for accurate dose delivery to the moving target.