• Title/Summary/Keyword: Reconstruction error


Entropy-Based 6 Degrees of Freedom Extraction for the W-band Synthetic Aperture Radar Image Reconstruction (W-band Synthetic Aperture Radar 영상 복원을 위한 엔트로피 기반의 6 Degrees of Freedom 추출)

  • Hyokbeen Lee;Duk-jin Kim;Junwoo Kim;Juyoung Song
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1245-1254
    • /
    • 2023
  • Significant research has been conducted on W-band synthetic aperture radar (SAR) systems that utilize 77 GHz frequency-modulated continuous wave (FMCW) radar. To reconstruct a high-resolution W-band SAR image, it is necessary to transform the point cloud acquired from stereo cameras or LiDAR along 6 degrees of freedom (DOF) and apply it to the SAR signal processing. However, matching images is difficult due to the different geometric structures of images acquired from different sensors. In this study, we present a method to extract an optimized depth map by obtaining the 6 DOF of the point cloud using a gradient descent method based on the entropy of the SAR image. An experiment was conducted to reconstruct a tree, a major road-environment object, using the constructed W-band SAR system. The SAR image reconstructed using the entropy-based gradient descent method showed a decrease of 53.2828 in mean square error and an increase of 0.5529 in the structural similarity index, compared to SAR images reconstructed from radar coordinates.
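The optimization loop described in the abstract — minimizing SAR image entropy over a 6-DOF parameter vector by gradient descent — can be sketched as follows (an illustrative toy with finite-difference gradients, not the authors' implementation; all function names are hypothetical):

```python
import math

def image_entropy(pixels, bins=16):
    """Shannon entropy of an intensity histogram; a sharper (better-focused)
    SAR image tends to have lower entropy, so entropy can serve as a cost."""
    lo, hi = min(pixels), max(pixels)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for p in pixels:
        hist[min(int((p - lo) / width), bins - 1)] += 1
    n = len(pixels)
    return -sum((c / n) * math.log(c / n) for c in hist if c)

def gradient_descent(cost, x0, lr=0.1, steps=100, eps=1e-4):
    """Minimize cost(x) over a parameter vector (e.g. 6 DOF: 3 translations,
    3 rotations) using central finite-difference gradients."""
    x = list(x0)
    for _ in range(steps):
        for i in range(len(x)):
            x_hi = x[:]; x_hi[i] += eps
            x_lo = x[:]; x_lo[i] -= eps
            g = (cost(x_hi) - cost(x_lo)) / (2 * eps)
            x[i] -= lr * g
    return x
```

In the paper's setting, `cost` would reconstruct the SAR image from the point cloud transformed by the 6-DOF vector and return its entropy.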

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.459-467
    • /
    • 2009
  • Purpose: The maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on the GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and NVIDIA's CUDA (compute unified device architecture), the projection and backprojection steps of the ML-EM algorithm were parallelized. The computation times per iteration for projection, for the errors between measured and estimated data, and for backprojection were measured. Total time included the latency of data transmission between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15 times on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135 times and was caused by increasing delays in the CPU-based computation after a certain number of iterations. In contrast, the GPU-based computation showed very small variation in time delay per iteration due to the use of shared memory. Conclusion: GPU-based parallel computation significantly improved the computing speed and stability of ML-EM. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
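The ML-EM update that the paper parallelizes consists, per iteration, of a forward projection of the current image estimate, a ratio of measured to estimated projections, a backprojection of the ratios, and a multiplicative, sensitivity-normalized update. A minimal pure-Python sketch for a tiny system matrix (illustrative only; the paper's version runs the projectors as CUDA kernels):

```python
def ml_em(a, y, n_iter=32):
    """ML-EM for a small system: a[i][j] = probability that an emission from
    voxel j is detected in bin i; y[i] = measured counts in bin i."""
    nb, nv = len(a), len(a[0])
    # Sensitivity of each voxel (column sums of the system matrix).
    sens = [sum(a[i][j] for i in range(nb)) for j in range(nv)]
    lam = [1.0] * nv  # uniform initial image estimate
    for _ in range(n_iter):
        # Forward projection of the current estimate.
        proj = [sum(a[i][j] * lam[j] for j in range(nv)) for i in range(nb)]
        # Measured / estimated ratios (the "error" term timed in the paper).
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(nb)]
        # Backprojection of the ratios.
        back = [sum(a[i][j] * ratio[i] for i in range(nb)) for j in range(nv)]
        # Multiplicative update, normalized by voxel sensitivity.
        lam = [lam[j] * back[j] / sens[j] for j in range(nv)]
    return lam
```

On a GPU, the two nested-sum loops (projection and backprojection) are the parts mapped to parallel threads.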

Generation of Daily High-resolution Sea Surface Temperature for the Seas around the Korean Peninsula Using Multi-satellite Data and Artificial Intelligence (다종 위성자료와 인공지능 기법을 이용한 한반도 주변 해역의 고해상도 해수면온도 자료 생산)

  • Jung, Sihun;Choo, Minki;Im, Jungho;Cho, Dongjin
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_2
    • /
    • pp.707-723
    • /
    • 2022
  • Although satellite-based sea surface temperature (SST) is advantageous for monitoring large areas, spatiotemporal data gaps frequently occur due to various environmental or mechanical causes. Thus, it is crucial to fill in the gaps to maximize usability. In this study, daily SST composite fields with a resolution of 4 km were produced through a two-step machine learning approach using polar-orbiting and geostationary satellite SST data. The first step was SST reconstruction based on the Data Interpolating Convolutional AutoEncoder (DINCAE) using multi-satellite-derived SST data. The second step corrected the reconstructed SST toward in situ measurements using a light gradient boosting machine (LGBM) to produce the final daily SST composite fields. The DINCAE model was validated using random masks for 50 days, whereas the LGBM model was evaluated using leave-one-year-out cross-validation (LOYOCV). The SST reconstruction accuracy was high, with an R2 of 0.98 and a root-mean-square error (RMSE) of 0.97℃. The accuracy improvement from the second step was also substantial when compared to in situ measurements, with an RMSE decrease of 0.21-0.29℃ and an MAE decrease of 0.17-0.24℃. The SST composite fields generated using all in situ data in this study were comparable to existing data-assimilated SST composite fields. In addition, the LGBM model in the second step greatly reduced the overfitting that was reported as a limitation of a previous study using random forest. The spatial distribution of the corrected SST was similar to those of existing high-resolution SST composite fields, revealing that spatial details of oceanic phenomena such as fronts, eddies, and SST gradients were well simulated. This research demonstrates the potential to produce high-resolution, seamless SST composite fields using multi-satellite data and artificial intelligence.
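The random-mask validation used for DINCAE in the first step — hide known pixels, reconstruct the field, then score only the hidden pixels — can be sketched generically (a simplified 1-D illustration; `reconstruct` stands in for the trained model):

```python
import math

def rmse(truth, pred):
    """Root-mean-square error between held-out truth and reconstruction."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(truth, pred)) / len(truth))

def r2(truth, pred):
    """Coefficient of determination on the held-out pixels."""
    mean_t = sum(truth) / len(truth)
    ss_res = sum((t - p) ** 2 for t, p in zip(truth, pred))
    ss_tot = sum((t - mean_t) ** 2 for t in truth)
    return 1.0 - ss_res / ss_tot

def mask_and_validate(field, reconstruct, mask_idx):
    """Hide the SST values at mask_idx, reconstruct the field, and score
    the reconstruction only at the hidden pixels."""
    masked = [None if i in mask_idx else v for i, v in enumerate(field)]
    filled = reconstruct(masked)
    truth = [field[i] for i in mask_idx]
    pred = [filled[i] for i in mask_idx]
    return rmse(truth, pred), r2(truth, pred)
```

The same RMSE/R2 scoring applies to the second-step LGBM correction, with in situ measurements as truth.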

Evaluating the Impact of Attenuation Correction Difference According to the Lipiodol in PET/CT after TACE (간동맥 화학 색전술에 사용하는 Lipiodol에 의한 감쇠 오차가 PET/CT검사에서 영상에 미치는 영향 평가)

  • Cha, Eun Sun;Hong, Gun chul;Park, Hoon;Choi, Choon Ki;Seok, Jae Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.17 no.1
    • /
    • pp.67-70
    • /
    • 2013
  • Purpose: With the surge in patients with hepatocellular carcinoma, hepatic artery chemical embolization is one of the most effective interventional procedures. PET/CT examination plays an important role in determining the presence of residual cancer cells, metastasis, and prognosis after embolization. On the other hand, lipiodol, the embolic material used in hepatic artery chemical embolization, produces artifacts in PET/CT examinations, and these artifacts influence quantitative evaluation. In this study, the radioactivity density and percentage error were evaluated to determine the extent of lipiodol's impact on PET/CT images. Materials and Methods: A 1994 NEMA phantom was prepared with three inserts filled with Teflon, water, and lipiodol, the remaining volume was filled with a well-mixed radioactivity of 20±10 MBq, and images were acquired for 2 minutes 30 seconds per bed. The phantom images were reconstructed using an iterative reconstruction method with 2 iterations and 20 subsets. Regions of interest were set in each of the Teflon, water, and lipiodol inserts, in the region where artifacts occurred between inserts, and in the background, and the radioactivity density (kBq/ml) and % Difference were calculated and compared. Results: The radioactivity densities of the regions of interest in the Teflon, water, and lipiodol inserts, the artifact region between inserts, and the background were 0.09±0.04, 0.40±0.17, 1.55±0.75, 2.5±1.09, and 2.65±1.16 kBq/ml, respectively (P<0.05), a statistically significant result. The percentage error of lipiodol was 118% compared with water, 52% compared with the background activity, and 180% compared with Teflon. Conclusion: We found that attenuation correction introduces errors in PET/CT scans performed after lipiodol injection: the radioactivity density of lipiodol was higher than that of the other inserts but lower than the background. Non-attenuation-corrected images should also be reviewed, and PET/CT examinations performed after hepatic artery chemical embolization should take the extent of lipiodol's impact into consideration.
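The reported % Difference figures are consistent with a percent difference taken relative to the mean of the two ROI densities — an assumption about the exact definition used, but one that approximately reproduces the water (118%) and background (52%) values from the mean densities above:

```python
def percent_difference(a, b):
    """% Difference between two ROI radioactivity densities (kBq/ml),
    relative to their mean (assumed definition, not confirmed by the paper)."""
    return abs(a - b) / ((a + b) / 2) * 100
```

With the reported means, lipiodol (1.55) vs. water (0.40) gives about 118%, and lipiodol vs. background (2.65) about 52%; vs. Teflon (0.09) it gives roughly 178%, close to the reported 180% (the small gap likely reflects rounding of the underlying means).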


The Evaluation of Quantitative Accuracy According to Detection Distance in SPECT/CT Applied to Collimator Detector Response(CDR) Recovery (Collimator Detector Response(CDR) 회복이 적용된 SPECT/CT에서 검출거리에 따른 정량적 정확성 평가)

  • Kim, Ji-Hyeon;Son, Hyeon-Soo;Lee, Juyoung;Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.21 no.2
    • /
    • pp.55-64
    • /
    • 2017
  • Purpose: Recently, with the spread of SPECT/CT, various image correction methods can be applied quickly and accurately, raising expectations of quantitative accuracy as well as image quality improvement. Among them, collimator detector response (CDR) recovery is a correction method aimed at resolution recovery, compensating for the blurring effect generated by the distance between the detector and the object. The purpose of this study is to determine the quantitative change depending on the detection distance in SPECT/CT images with CDR recovery applied. Materials and Methods: To determine the error in acquisition counts depending on the detection distance, we set the detection distance according to the orbit type: X, Y axis radius 30 cm for circular; X, Y axis radii 21 cm and 10 cm for non-circular; and non-circular auto (automatic body contouring, ABC; spacing limit 1 cm). We applied two reconstruction methods, Astonish (3D-OSEM with CDR recovery) and OSEM (without CDR recovery), to determine the difference in activity recovery depending on the use of CDR recovery. Attenuation correction, scatter correction, and decay correction were applied to all images. For quantitative evaluation, a calibration scan (cylindrical phantom, 99mTcO4 123.3 MBq, water 9,293 ml) was obtained to calculate the calibration factor (CF). For the phantom scan, a 50 cc syringe was filled with 31 ml of water and 99mTcO4 123.3 MBq. We set a volume of interest (VOI) over the entire volume of the syringe in the phantom image, measured the total counts under each condition, and obtained the error of the measured value against the true value established with the CF, to check the quantitative accuracy of the correction. Results: The calculated CF was 154.28 (Bq/ml/cps/ml), and the measured values against true values in each conditional image were circular 87.5%, non-circular 90.1%, and ABC 91.3% for OSEM, and circular 93.6%, non-circular 93.6%, and ABC 93.9% for Astonish. The closer the detection distance, the higher the accuracy of OSEM, whereas Astonish showed almost the same values regardless of distance. The error was largest for OSEM circular (-13.5%) and smallest for Astonish ABC (-6.1%). Conclusion: SPECT/CT images showed that when distance compensation is made through CDR recovery, a distant detection condition achieves almost the same quantitative accuracy as proximity detection, and accurate correction is possible without being affected by changes in detection distance.
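The quantitative evaluation reduces to two small formulas: a calibration factor from the uniform cylinder scan, and a signed percent error of measured against true VOI activity. A sketch (illustrative; the variable names are ours):

```python
def calibration_factor(known_conc_bq_ml, measured_cps_ml):
    """CF (Bq/ml per cps/ml) from a uniform cylindrical calibration scan:
    known activity concentration divided by measured count-rate density."""
    return known_conc_bq_ml / measured_cps_ml

def recovery_error(measured, true):
    """Signed error (%) of the CF-converted measured activity in a VOI
    against the true activity; negative means under-recovery."""
    return (measured - true) / true * 100
```

For example, the Astonish ABC condition, recovering 93.9% of the true value, corresponds to a signed error of -6.1%, matching the abstract's smallest-error figure.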


Performance Evaluation of Radiochromic Films and Dosimetry CheckTM for Patient-specific QA in Helical Tomotherapy (나선형 토모테라피 방사선치료의 환자별 품질관리를 위한 라디오크로믹 필름 및 Dosimetry CheckTM의 성능평가)

  • Park, Su Yeon;Chae, Moon Ki;Lim, Jun Teak;Kwon, Dong Yeol;Kim, Hak Joon;Chung, Eun Ah;Kim, Jong Sik
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.32
    • /
    • pp.93-109
    • /
    • 2020
  • Purpose: The radiochromic film (Gafchromic EBT3, Ashland Advanced Materials, USA) and the 3-dimensional analysis system Dosimetry CheckTM (DC, Math Resolutions, USA) were evaluated for patient-specific quality assurance (QA) of helical tomotherapy. Materials and Methods: Depending on the tumor position, three types of targets, the abdominal tumor (130.6 ㎤), the retroperitoneal tumor (849.0 ㎤), and the whole abdominal metastasis tumor (3131.0 ㎤), were applied to a humanoid phantom (Anderson Rando Phantom, USA). We established a total of 12 comparative treatment plans from four geometric beam-irradiation conditions, combining field widths (FW) of 2.5 cm and 5.0 cm with pitches of 0.287 and 0.43. Ionization chamber measurements (1D) and EBT3 measurements with an inserted cheese phantom (2D) were compared to DC measurements of the 3D dose reconstruction on CT images from beam fluence log information. For the clinical feasibility evaluation of the DC, dose reconstruction was also performed using the same cheese phantom as the EBT3 method. The recalculated dose distributions revealed the dose error during actual irradiation, quantitatively compared to the treatment plan on the same CT images. The thread effect, which can appear in helical tomotherapy, was analyzed by ripple amplitude (%). We also performed gamma index analysis (DD: 3 mm / DTA: 3%, pass threshold: 95%) to check the pattern of the dose distribution. Results: Ripple amplitude measurement showed the highest average, 23.1%, in the peritoneum tumor. In the radiochromic film analysis, the absolute dose agreed on average within 0.9±0.4%, and the gamma index analysis averaged 96.4±2.2% (passing rate: >95%), which could be limited for large target sizes such as the whole abdominal metastasis tumor. In the DC analysis with the humanoid phantom for an FW of 5.0 cm, the average over the three regions was 91.8±6.4% between the 2D and 3D plans. The three planes (axial, coronal, and sagittal) and the dose profile could be analyzed for the entire peritoneum tumor and the whole abdominal metastasis target against the planned dose distributions. The dose errors based on the dose-volume histogram in the DC evaluations increased with FW and pitch. Conclusion: The DC method can perform dose error analysis on 3D patient image data from the measured beam fluence log information alone, without any additional dosimetry tools, for patient-specific quality assurance. It also appears applicable regardless of tumor location and size; therefore, the DC could be useful for patient-specific QA during helical tomotherapy of large and irregular tumors.
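The gamma analysis used for the pattern check combines a dose-difference criterion (3%) with a distance-to-agreement criterion (3 mm). A minimal 1-D global-gamma sketch (a simplified illustration, not the clinical software's implementation):

```python
import math

def gamma_index(ref, evl, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """1-D global gamma: for each reference point, search the evaluated
    profile for the minimum combined dose-difference / DTA metric.
    Dose difference is normalized to the reference maximum (global gamma)."""
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(evl):
            dose_term = ((de - dr) / (d_max * dd_pct / 100.0)) ** 2
            dist_term = (((j - i) * spacing_mm) / dta_mm) ** 2
            best = min(best, math.sqrt(dose_term + dist_term))
        gammas.append(best)
    return gammas

def passing_rate(gammas):
    """Percentage of points with gamma <= 1 (the pass criterion)."""
    return sum(1 for g in gammas if g <= 1.0) / len(gammas) * 100
```

A plan passes the QA criterion used above when `passing_rate` exceeds 95%.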

Tracking the History of the Three-story Stone Pagoda from the Goseonsa Temple Site in Gyeongju through an Analysis of Components (부재 해석을 통한 경주 고선사지 삼층석탑의 연혁 추적)

  • Jeon, Hyo Soo
    • Conservation Science in Museum
    • /
    • v.21
    • /
    • pp.41-52
    • /
    • 2019
  • The findings of a 2017 safety inspection of the Three-story Stone Pagoda from the Goseonsa Temple site in Gyeongju suggested the possibility that the stone for the second story of the pagoda may have been rotated after the pagoda was disassembled for removal from its original site in 1975. The materials from the pagoda were investigated using photographs and other relevant data from both the Japanese colonial period and from around 1975. The analysis found that the materials of the pagoda were not changed after an alleged reconstruction in 1943, but that during the relocation of the pagoda in 1975 the body of the second story was indeed rotated counterclockwise by 90 degrees, and one of the four stone elements making up the first-story roof was exchanged with a part from the second-story roof. To discover whether the materials had been incorrectly placed, each part of the pagoda was precisely measured and the elements of the roofs were virtually reconstructed using 3D scanning data. The investigation did not find any anomalies within the components of each roof; the four parts of the first-story roof were 75 to 76 centimeters thick and those of the second-story roof were 78 to 79 centimeters thick. The connections between the parts of the roofs also appeared natural. This seems to indicate that there was indeed an undocumented repair of the pagoda at some point between its creation and 1943, and that an error made during this repair was corrected in 1975. In addition, the study suggested the possibility that the body of the second story was rotated counterclockwise due to a change in the locations of parts of the two roofs.

A Study on Stroke Extraction for Handwritten Korean Character Recognition (필기체 한글 문자 인식을 위한 획 추출에 관한 연구)

  • Choi, Young-Kyoo;Rhee, Sang-Burm
    • The KIPS Transactions:PartB
    • /
    • v.9B no.3
    • /
    • pp.375-382
    • /
    • 2002
  • Handwritten character recognition is classified into on-line and off-line handwritten character recognition. On-line handwritten character recognition has achieved remarkable results compared to its off-line counterpart, because it can acquire dynamic writing information, such as the writing order and the position of each stroke, through pen-based electronic input devices such as tablet boards. In contrast, no dynamic information can be acquired in off-line handwritten character recognition; there is extreme overlapping between consonants and vowels and heavy noise between strokes, so recognition performance depends strongly on the preprocessing result. This paper proposes a method that effectively extracts strokes, including dynamic information of characters, for off-line handwritten Korean character recognition. First, the input handwritten character image is enhanced and binarized in a preprocessing step using a watershed algorithm. Next, the skeleton is extracted using a modified Lu and Wang thinning algorithm, and segment pixel arrays are extracted from the feature points of the characters. Then, vectorization is performed using a maximum permissible error method. When several strokes are bound into one segment, the segment pixel array is divided into two or more segment vectors. To reconstruct the extracted segment vectors into complete strokes, the directional components of the vectors are modified using a right-handed writing coordinate system. By combining adjacent segment vectors that can be joined, complete strokes suitable for character recognition are reconstructed. Experiments verified that the proposed method is suitable for handwritten Korean character recognition.
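The vectorization with a maximum permissible error, which splits a segment pixel array into segment vectors, can be illustrated with a Douglas-Peucker-style recursive split (our sketch of the general technique, not necessarily the paper's exact variant):

```python
def vectorize(points, max_error):
    """Approximate a segment pixel array by segment vectors: recursively
    split at the point farthest from the end-to-end chord until every
    deviation is within the maximum permissible error."""
    def point_line_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        length = (dx * dx + dy * dy) ** 0.5
        if length == 0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        return abs(dx * (ay - py) - dy * (ax - px)) / length

    if len(points) < 3:
        return list(points)
    dists = [point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    worst = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[worst - 1] <= max_error:
        return [points[0], points[-1]]  # one segment vector suffices
    left = vectorize(points[: worst + 1], max_error)
    right = vectorize(points[worst:], max_error)
    return left[:-1] + right  # merge, dropping the duplicated split point
```

A nearly straight pixel run collapses to its two endpoints, while a bend beyond the permitted error keeps the corner point, splitting one pixel array into two segment vectors.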

Network Anomaly Detection Technologies Using Unsupervised Learning AutoEncoders (비지도학습 오토 엔코더를 활용한 네트워크 이상 검출 기술)

  • Kang, Koohong
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.4
    • /
    • pp.617-629
    • /
    • 2020
  • In order to overcome the limitations of rule-based intrusion detection systems arising from changes in Internet computing environments, the emergence of new services, and the creativity of attackers, network anomaly detection (NAD) using machine learning and deep learning technologies has received much attention. Most existing machine learning and deep learning technologies for NAD use supervised learning on training data labeled 'normal' and 'attack'. This paper presents the feasibility of applying an unsupervised-learning AutoEncoder (AE) to NAD using data sets collected from secured network traffic without labeled responses. To verify the performance of the proposed AE model, we present experimental results in terms of accuracy, precision, recall, f1-score, and ROC AUC on the NSL-KDD training and test data sets. In particular, we derive a reference AE through deep analysis of diverse AEs, varying hyper-parameters such as the number of layers and considering regularization and denoising effects. The reference model achieves binary-classification f1-scores of 90.4% and 89% on the KDDTest+ and KDDTest-21 test data sets, using a threshold set at the 82nd percentile of the AE reconstruction error on the training data set.
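The decision rule of the reference model — flag a flow as an attack when its reconstruction error exceeds the 82nd percentile of the training errors — can be sketched directly (illustrative; `train_errors` and `test_errors` stand in for per-sample AE reconstruction errors):

```python
def percentile(values, pct):
    """Linear-interpolated percentile of a list of values."""
    s = sorted(values)
    k = (len(s) - 1) * pct / 100.0
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (s[c] - s[f]) * (k - f)

def detect_anomalies(train_errors, test_errors, pct=82):
    """Label a test sample anomalous (True) when its AE reconstruction
    error exceeds the pct-th percentile of the training-set errors."""
    thr = percentile(train_errors, pct)
    return [e > thr for e in test_errors], thr
```

Raising `pct` trades recall for precision: a higher threshold flags fewer flows as attacks.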

Precise Rectification of Misaligned Stereo Images for 3D Image Generation (입체영상 제작을 위한 비정렬 스테레오 영상의 정밀편위수정)

  • Kim, Jae-In;Kim, Tae-Jung
    • Journal of Broadcast Engineering
    • /
    • v.17 no.2
    • /
    • pp.411-421
    • /
    • 2012
  • The stagnant growth of the 3D market due to a shortage of 3D movie content is encouraging the development of techniques for production cost reduction. Eliminating the vertical disparity generated during image acquisition demands the most time and effort in the whole stereoscopic film-making process; this is directly related to market competitiveness and is treated as a very important task. The removal of vertical disparity, i.e., image rectification, has long been studied in the photogrammetry field. While computer vision methods focus on fast processing and automation, photogrammetry methods focus on accuracy and precision. However, photogrammetric approaches have not previously been tried for 3D film-making. This paper proposes a photogrammetry-based rectification algorithm that precisely eliminates vertical disparity by reconstructing the geometric relationship at the time of shooting. The proposed algorithm was evaluated against two existing computer vision algorithms, testing epipolar constraint satisfaction, epipolar line accuracy, and the vertical disparity of the resulting images. The proposed algorithm showed better accuracy and precision than the other algorithms and also proved robust to position errors in the tie-points.
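The quantity being minimized here — vertical disparity of corresponding tie-points — is simple to measure. The sketch below scores row misalignment before and after removing a constant vertical shift (a deliberately simplified stand-in for full rectification, which also handles rotation and scale):

```python
def vertical_disparity(left_pts, right_pts):
    """Mean absolute row difference of tie-point pairs (x, y); ideally 0
    after rectification, when all epipolar lines are horizontal and
    row-aligned between the two images."""
    pairs = list(zip(left_pts, right_pts))
    return sum(abs(yl - yr) for (_, yl), (_, yr) in pairs) / len(pairs)

def remove_constant_shift(pts, shift):
    """Apply the simplest possible correction: a constant vertical shift
    of one image's tie-points (a toy model of one rectification term)."""
    return [(x, y - shift) for x, y in pts]
```

A real rectification algorithm estimates a full transform per image (from the epipolar geometry, or, as in the paper, from the reconstructed shooting geometry), but the evaluation metric is the same residual vertical disparity.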