• Title/Abstract/Keywords: real-time fusion


National Disaster Management, Investigation, and Analysis Using RS/GIS Data Fusion (RS/GIS 자료융합을 통한 국가 재난관리 및 조사·분석)

  • Seongsam Kim;Jaewook Suk;Dalgeun Lee;Junwoo Lee
    • Korean Journal of Remote Sensing / v.39 no.5_2 / pp.743-754 / 2023
  • Natural disasters and incidents driven by climate change and extreme weather are occurring worldwide and causing substantial human and material losses. International bodies such as the International Charter have established a standing cooperative framework for real-time coordination that provides high-resolution satellite imagery and geospatial information, resources that are essential for managing large-scale disasters and carrying out recovery operations quickly. At the national level, the operation of advanced national Earth observation satellites by the National Geographic Information Institute has both advanced geospatial data production and supplied damage analysis data for major domestic and international disaster events. This special issue of the National Disaster Management Research Institute reviews the major disaster events of 2023 and outlines the government's plan for reforming the national disaster safety system. It also summarizes the institute's latest research results in satellite systems, information and communication technology, and spatial information utilization, which underpin its disaster situation management and analysis work. Finally, it presents recent findings on the collection, processing, and analysis of data on disaster causes and damage extent, centered on the institute's on-site investigations and on technologies such as drone-based mapping and LiDAR observation, illustrated by a case study of landslide damage caused by concentrated heavy rainfall in 2023.

Analysis of Respiratory Motion Artifacts in PET Imaging Using Respiratory Gated PET Combined with 4D-CT (4D-CT와 결합한 호흡게이트 PET을 이용한 PET영상의 호흡 인공산물 분석)

  • Cho, Byung-Chul;Park, Sung-Ho;Park, Hee-Chul;Bae, Hoon-Sik;Hwang, Hee-Sung;Shin, Hee-Soon
    • The Korean Journal of Nuclear Medicine / v.39 no.3 / pp.174-181 / 2005
  • Purpose: The reduction of respiratory motion artifacts in PET images was studied using respiratory-gated PET (RGPET) with a moving phantom. In particular, a method of generating simulated helical CT images from 4D-CT datasets was developed and applied to phase-specific RGPET images for more accurate attenuation correction. Materials and Methods: Using a motion phantom with a period of 6 seconds and a linear motion amplitude of 26 mm, PET/CT (Discovery ST; GEMS) scans with and without respiratory gating were acquired for one syringe and two vials with volumes of 3, 10, and 30 ml, respectively. The RPM system (Real-Time Position Management, Varian) was used to track motion during PET/CT scanning. Ten RGPET and 4D-CT datasets, one for each 10% phase interval, were acquired. From the positions, sizes, and uptake values of each object in the resulting phase-specific PET and CT datasets, the correlation between motion artifacts in the PET and CT images and the size of the motion relative to the size of the object was analyzed. Results: The center positions of the three vials in RGPET and 4D-CT agreed with the actual positions within the estimated error. However, object volumes in non-gated PET images increased in proportion to the relative motion size and were overestimated by as much as 250% when the motion amplitude was twice the size of the object; conversely, the corresponding maximum uptake value was reduced to about 50%. Conclusion: RGPET is shown to remove respiratory motion artifacts in PET imaging, and combining it with 4D-CT enables more precise image fusion and more accurate attenuation correction.
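
The gating described here amounts to sorting PET data into bins by respiratory phase derived from the motion trace. Below is a minimal Python sketch of that 10%-phase binning, assuming a synthetic sinusoidal respiratory signal and uniformly distributed event times; the `phase_bin_events` helper and the toy data are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.signal import find_peaks

def phase_bin_events(event_times, resp_times, resp_signal, n_bins=10):
    """Assign each PET event to a respiratory phase bin (0 .. n_bins-1).

    Phase is the fractional position of the event between consecutive
    end-inspiration peaks of the respiratory trace -- the same 10% phase
    intervals used to sort RGPET and 4D-CT data.
    """
    # End-inspiration peaks of the (e.g. RPM) respiratory trace.
    peak_idx, _ = find_peaks(resp_signal)
    peak_times = resp_times[peak_idx]

    bins = np.full(len(event_times), -1, dtype=int)  # -1 = outside a full cycle
    for i, t in enumerate(event_times):
        k = np.searchsorted(peak_times, t) - 1       # cycle containing this event
        if k < 0 or k + 1 >= len(peak_times):
            continue
        start, end = peak_times[k], peak_times[k + 1]
        phase = (t - start) / (end - start)          # 0 <= phase < 1
        bins[i] = min(int(phase * n_bins), n_bins - 1)
    return bins

# Toy example: 6 s respiratory period (as in the phantom study),
# events spread uniformly over 60 s of acquisition.
resp_times = np.linspace(0.0, 60.0, 6000)
resp_signal = np.sin(2 * np.pi * resp_times / 6.0)
event_times = np.sort(np.random.default_rng(0).uniform(0.0, 60.0, 10000))
bins = phase_bin_events(event_times, resp_times, resp_signal)
print(np.bincount(bins[bins >= 0], minlength=10))    # events per 10% phase bin
```

In practice the binned events (or frames) are reconstructed separately per phase, and the matching 4D-CT phase is used for attenuation correction, which is the pairing the study evaluates.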

Development of High-Resolution Fog Detection Algorithm for Daytime by Fusing GK2A/AMI and GK2B/GOCI-II Data (GK2A/AMI와 GK2B/GOCI-II 자료를 융합 활용한 주간 고해상도 안개 탐지 알고리즘 개발)

  • Ha-Yeong Yu;Myoung-Seok Suh
    • Korean Journal of Remote Sensing / v.39 no.6_3 / pp.1779-1790 / 2023
  • Satellite-based fog detection algorithms are being developed to detect fog in real time over a wide area, with a focus on the Korean Peninsula (KorPen). The GEO-KOMPSAT-2A/Advanced Meteorological Imager (GK2A/AMI, GK2A) offers excellent temporal resolution (10 min) with 500 m spatial resolution, while GEO-KOMPSAT-2B/Geostationary Ocean Color Imager-II (GK2B/GOCI-II, GK2B) offers excellent spatial resolution (250 m) but poor temporal resolution (1 h) and only visible channels. To improve the fog detection level to 10 min and 250 m, we developed a fused fog detection algorithm (GK2AB FDA) that combines GK2A and GK2B data. The GK2AB FDA comprises three main steps. First, the Korea Meteorological Satellite Center's GK2A daytime fog detection algorithm is used to detect fog based on various optical and physical characteristics. Second, GK2B data are extrapolated to 10-min intervals by matching GK2A pixels at the closest time and location when GK2B observes the KorPen. For reflectance, the GK2B normalized visible (NVIS) channel is corrected using the GK2A NVIS of the same time, accounting for differences in wavelength range and observation geometry, and is then extrapolated at 10-min intervals using the 10-min changes in GK2A NVIS. In the final step, the extrapolated GK2B NVIS, the solar zenith angle, and the outputs of the GK2A FDA are used as input to a machine learning model (decision tree) to build the GK2AB FDA, which detects fog at 250 m resolution and 10-min intervals as a function of geographical location. Six cases were used for training and four for validation of the GK2AB FDA. Quantitative verification used ground observations of visibility, wind speed, and relative humidity. Compared with the GK2A FDA, the GK2AB FDA has a fourfold higher spatial resolution, allowing more detailed discrimination between fog and non-fog pixels. In general, irrespective of the validation method, its probability of detection (POD) and Hanssen-Kuipers skill score (KSS) are higher than or similar to those of the GK2A FDA, indicating that it detects fog pixels that were previously missed. However, the GK2AB FDA tends to over-detect fog, with a higher false alarm ratio and bias than the GK2A FDA.
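
As a rough illustration of the final fusion step and the contingency-table verification, the sketch below trains a decision tree on the three per-pixel features named in the abstract (extrapolated GK2B NVIS, solar zenith angle, GK2A FDA output) and computes POD, false alarm ratio, bias, and KSS. The synthetic arrays, feature layout, and tree depth are assumptions for illustration only; scikit-learn's `DecisionTreeClassifier` and the metric formulas are standard, and this is not the authors' actual implementation or data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# --- Fusion step: per-pixel features as named in the abstract ------------
# Columns: extrapolated GK2B NVIS, solar zenith angle (deg), GK2A FDA flag.
X_train = np.column_stack([
    rng.uniform(0.0, 1.0, 5000),     # GK2B NVIS reflectance (synthetic)
    rng.uniform(20.0, 80.0, 5000),   # solar zenith angle (synthetic)
    rng.integers(0, 2, 5000),        # GK2A FDA fog / non-fog output (synthetic)
])
y_train = rng.integers(0, 2, 5000)   # fog labels for training (synthetic)

tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_train, y_train)

# --- Contingency-table verification (POD, FAR, bias, KSS) ----------------
X_val, y_true = X_train[:1000], y_train[:1000]   # illustrative hold-in subset
y_pred = tree.predict(X_val)

hits         = np.sum((y_pred == 1) & (y_true == 1))
misses       = np.sum((y_pred == 0) & (y_true == 1))
false_alarms = np.sum((y_pred == 1) & (y_true == 0))
correct_neg  = np.sum((y_pred == 0) & (y_true == 0))

pod  = hits / (hits + misses)                             # probability of detection
far  = false_alarms / (hits + false_alarms)               # false alarm ratio
bias = (hits + false_alarms) / (hits + misses)            # frequency bias
kss  = pod - false_alarms / (false_alarms + correct_neg)  # Hanssen-Kuipers skill score

print(f"POD={pod:.3f}  FAR={far:.3f}  Bias={bias:.3f}  KSS={kss:.3f}")
```

In the paper's setting the ground-truth labels would come from station visibility (together with wind speed and relative humidity screening), and the trade-off reported in the abstract, higher POD/KSS but also higher false alarm ratio and bias, falls directly out of this contingency table.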