• Title/Summary/Keyword: Difference of Gaussian


DYNAMICAL CHARACTERISTICS OF THE QUIET TRANSITION REGION: SPATIAL CORRELATION STUDIES OF H I 931 AND S VI 933 UV LINES

  • YUN HONG SIK;CHAE JONG CHUL;POLAND A. I.
    • Journal of The Korean Astronomical Society / v.31 no.1 / pp.1-17 / 1998
  • To understand the basic physics underlying large spatial fluctuations of intensity and Doppler shift, we have investigated the dynamical characteristics of the transition region of the quiet Sun by analyzing a raster scan of a high-resolution UV spectral band containing H Lyman lines and a S VI line. The spectra were taken from a quiet area of $100''\times100''$ located near the disk center by SUMER on board SOHO. The spectral band ranges from 906 Å to 950 Å, with spatial and spectral resolutions of $1''$ and $0.044 {\AA}$, respectively. The parameters of individual spectral lines were determined from a single Gaussian fit to each spectral line, and spatial correlation analyses were then made among the line parameters. The important findings that emerged from the present analysis are as follows. (1) The integrated intensity maps of the observed area in the H I 931 line $(1\times10^4 K)$ and the S VI 933 line $(2\times10^5 K)$ look very similar to each other, with the same characteristic size of $5''$. An important difference, however, is that the intensity ratio of brighter network regions to darker cell regions is much larger in the S VI 933 line than in the H I 931 line. (2) Dynamical features represented by Doppler shifts and line widths are smaller than the features seen in the intensity maps. These features are found to change rapidly with time, on a time scale shorter than the integration time of 110 seconds, while the intensity structure remains nearly unchanged during the same interval. (3) The line intensity of S VI is quite strongly correlated with that of the H I lines, but the Doppler-shift correlation between the two lines is not as strong as the intensity correlation. The correlation length of the intensity structure is found to be about $5.7''$ (4100 km), which is at least 3 times larger than that of the velocity structure. These findings support the notion that the basic unit of the transition region of the quiet Sun is a loop-like structure with a size of a few $10^3 km$, within which a number of unresolved smaller velocity structures are present.

  • PDF
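
A minimal sketch of the single-Gaussian line fit mentioned in the abstract above, from which line parameters (integrated intensity, Doppler shift, line width) are derived; the spectral window, noise level, rest wavelength, and initial guesses below are hypothetical placeholders, not SUMER data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wl, amplitude, center, sigma, background):
    """Single Gaussian line profile on a flat background."""
    return amplitude * np.exp(-0.5 * ((wl - center) / sigma) ** 2) + background

# Hypothetical spectral window around a line near 931 A (not SUMER data).
wavelength = np.linspace(930.0, 932.0, 200)
clean = gaussian(wavelength, amplitude=50.0, center=931.05, sigma=0.08, background=5.0)
observed = clean + np.random.default_rng(0).normal(0.0, 1.0, wavelength.size)

# Fit the profile and derive the usual line parameters.
p0 = [observed.max() - observed.min(), 931.0, 0.1, observed.min()]
(amplitude, center, sigma, background), _ = curve_fit(gaussian, wavelength, observed, p0=p0)

rest_wavelength = 931.00  # placeholder; substitute the line's actual rest wavelength
doppler_shift_kms = (center - rest_wavelength) / rest_wavelength * 2.998e5
integrated_intensity = amplitude * sigma * np.sqrt(2.0 * np.pi)
print(doppler_shift_kms, integrated_intensity, sigma)
```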

Usefulness Assessment of Automatic Analysis Program for Flangeless Esser PET Phantom Images (Flangeless Esser PET Phantom 영상 자동 분석 프로그램의 유용성 평가)

  • NamGung, Chang-Kyeong;Nam, Ki-Pyo;Kim, Kyeong-Sik;Kim, Jeong-Seon;Lim, Ki-Cheon;Shin, Sang-Ki;Cho, Shee-Man;Dong, Kyung-Rae
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.63-66 / 2009
  • Purpose: The ACR (American College of Radiology) provides various parameters for PET/CT quality control using the ACR phantom, which was designed to evaluate uniformity, attenuation, scatter, contrast, and resolution. The manual analysis method is not well suited to QC because the parameter values vary with the user and the analysis takes a long time. To overcome these problems, Ki-Chun Lim, a nuclear scientist at AMC, developed a program that automatically analyzes the parameter values obtained with the ACR phantom. In this study, we evaluated the usefulness of the automatic analysis program by comparing the SUVs obtained with each method, the reproducibility of the SUV on repeated analysis, and the time required. Materials and Methods: A Flangeless Esser PET phantom was prepared with a nominal hot-cylinder-to-background (BKG) activity ratio of 4:1, which actually measured 3.89:1. A SIEMENS Biograph True Point 40 was used in this study. Images were acquired with the ACR phantom under the Fusion WB PET scan condition (2 min/bed) and a CT condition of 120 kV and 100 mAs, and reconstructed with the True X method, 3 iterations, 14 subsets, a Gaussian filter of 4 mm FWHM, zoom factor 1.0, and a $168{\times}168$ image size. Maximum, minimum, and mean SUVs were obtained for each cylinder (8, 12, 16, 25 mm, air, bone, water, BKG) with the automatic program and compared with the SUVs obtained by the manual method. We also measured the time required from opening the image data to completing the final worksheet. Results: The automatic program always produced the same results and took the same amount of time. For the 8, 12, 16, and 25 mm cylinders, the manual method showed CV values of 6.69, 3.46, 2.59, and 1.24; the larger the cylinder, the smaller the CV. With the manual method, the CVs for bone, air, and water were above 9.9, except for BKG (2.32). The CV of the mean SUV was low for BKG (0.85) and high for bone (7.52). The time required was 45 seconds for the automatic method and 882 seconds for the manual method. Conclusions: The automatic method always produced the same results, whereas with the manual method the CV increased as the hot cylinder size decreased: the smaller the hot cylinder, the harder it was to set the ROI position consistently, which increased the variation of the SUV. The study also showed that the automatic method required much less time than the manual method.

  • PDF
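
As a small illustration of the reproducibility comparison above, the coefficient of variation (CV) of repeated SUV readings is the standard deviation divided by the mean; the numbers below are hypothetical, not values from the study.

```python
import numpy as np

def coefficient_of_variation(values):
    """CV in percent: 100 * sample standard deviation / mean."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical repeated SUV readings for one hot cylinder (not study data).
manual_suv = [3.85, 3.62, 3.95, 3.70, 4.05]      # varies with ROI placement
automatic_suv = [3.80, 3.80, 3.80, 3.80, 3.80]   # automatic analysis repeats identically

print(f"manual CV:    {coefficient_of_variation(manual_suv):.2f} %")
print(f"automatic CV: {coefficient_of_variation(automatic_suv):.2f} %")
```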

Machine learning-based Fine Dust Prediction Model using Meteorological data and Fine Dust data (기상 데이터와 미세먼지 데이터를 활용한 머신러닝 기반 미세먼지 예측 모형)

  • KIM, Hye-Lim;MOON, Tae-Heon
    • Journal of the Korean Association of Geographic Information Studies / v.24 no.1 / pp.92-111 / 2021
  • Because fine dust negatively affects health, industry, and the economy, the public is sensitive to it. If the occurrence of fine dust can be predicted, countermeasures can be prepared in advance, which is helpful for daily life and the economy. Fine dust is affected by the weather and by the concentration of fine dust emission sources. The industrial sector produces the largest amount of fine dust emissions, and in industrial complexes factories act as major emission sources. This study targets regions with old industrial complexes in local cities. Its purpose is to explore the factors that cause fine dust and to develop a model that can predict its occurrence. Meteorological data and fine dust data were used, and the variables that influence the generation of fine dust were extracted through multiple regression analysis. Based on the results of the multiple regression analysis, machine learning regression learners were trained, the models with high predictive power were selected, and their performance was confirmed using test data. The models with high predictive power were the linear regression model, the Gaussian process regression model, and the support vector machine. The proportion of training data and the predictive power were not proportional. In addition, the average difference between predicted and measured values was not large, but predictive power decreased when the measured values were high. The results of this study can be developed into a more systematic and precise fine dust prediction service by combining meteorological data and urban big data through local government data hubs, and they are expected to promote the development of smart industrial complexes.
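
A minimal sketch of the workflow described above, assuming a tabular dataset of meteorological features and a fine dust target; the data are synthetic and the scikit-learn estimators stand in for the regression learners named in the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for meteorological features (temperature, humidity, wind, ...)
# and a fine dust concentration target; not the study's data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 30.0 + 5.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(scale=2.0, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "linear regression": LinearRegression(),
    "Gaussian process regression": GaussianProcessRegressor(alpha=1.0),  # alpha: assumed noise level
    "support vector machine": SVR(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: MAE = {mae:.2f}")
```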

Evaluation of Image for Phantom according to Normalization, Well Counter Correction in PET-CT (PET-CT Normalization, Well Counter Correction에 따른 팬텀을 이용한 영상 평가)

  • Choong-Woon Lee;Yeon-Wook You;Jong-Woon Mun;Yun-Cheol Kim
    • The Korean Journal of Nuclear Medicine Technology / v.27 no.1 / pp.47-54 / 2023
  • Purpose PET-CT imaging requires an appropriate quality assurance system to achieve high efficiency and reliability, and quality control is essential for improving the quality of care and patient safety. Currently, the performance evaluation methods NU 2-1994 and NU 2-2001, proposed by NEMA and the IEC, are available for PET-CT image evaluation. In this study, we compare phantom images from identical experiments performed before and after PET-CT 3D normalization and well counter correction, and evaluate the usefulness of these quality control items. Materials and methods A Discovery 690 (General Electric Healthcare, USA) PET-CT system was used to perform 3D normalization and well counter correction as recommended by GE Healthcare. Based on the recovery coefficients for the six spheres of the NEMA IEC body phantom recommended by EARL, 18F was injected at 20 kBq/mL into the spheres of the phantom and at 2 kBq/mL into the body of the phantom, and the PET-CT scan was performed with a radioactivity ratio of 10:1. Images were reconstructed by applying TOF+PSF, TOF, OSEM+PSF, and OSEM with Gaussian filters of 4.0, 4.5, 5.0, 5.5, 6.0, and 6.5 mm, a matrix size of 128×128, a slice thickness of 3.75 mm, 2 iterations, and 16 subsets. The PET images were attenuation corrected using the CT images and analyzed with the software program AW 4.7 (General Electric Healthcare, USA). ROIs were set to fit the six spheres in the CT image, and the recovery coefficient (RC) was measured after fusion of PET and CT. Statistical analysis was performed with the Wilcoxon signed-rank test using R. Results Overall, the recovery coefficients of the phantom images increased after the quality control items were performed. The recovery coefficient by image reconstruction increased in the order TOF+PSF, TOF, OSEM+PSF. Comparing before and after quality control, RCmax increased by 0.13 for OSEM, 0.16 for OSEM+PSF, 0.16 for TOF, and 0.15 for TOF+PSF, and RCmean increased by 0.09 for OSEM, 0.09 for OSEM+PSF, 0.106 for TOF, and 0.10 for TOF+PSF. The two groups showed a statistically significant difference in the Wilcoxon signed-rank test (P < 0.001). Conclusion A PET-CT system requires quality assurance to achieve high efficiency and reliability, and standardized intervals and procedures should be followed for quality control. We hope that this study provides a good opportunity to consider the importance of quality control in PET-CT.

  • PDF
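
In general, the recovery coefficient compared above is the measured activity concentration in a sphere ROI divided by the true injected concentration; a minimal sketch under that assumption follows, with hypothetical measured values rather than data from the study.

```python
# Recovery coefficient (RC): measured activity concentration in a sphere ROI
# divided by the true injected concentration.  Measured values are hypothetical.
true_concentration_kbq_ml = 20.0        # injected into the spheres
background_kbq_ml = 2.0                 # injected into the phantom body (10:1 ratio)

# Hypothetical measured concentrations per NEMA IEC sphere diameter (mm).
measured_kbq_ml = {10: 9.5, 13: 12.8, 17: 15.1, 22: 16.9, 28: 18.0, 37: 18.8}

for diameter, measured in sorted(measured_kbq_ml.items()):
    rc = measured / true_concentration_kbq_ml
    print(f"sphere {diameter:2d} mm: RC = {rc:.2f}")
```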

Design and Fabrication of Binary Diffractive Optical Elements for the Creation of Pseudorandom Dot Arrays of Uniform Brightness (균일 밝기 랜덤 도트 어레이 생성을 위한 이진 회절광학소자 설계 및 제작)

  • Lee, Soo Yeon;Lee, Jun Ho;Kim, Young-Gwang;Rhee, Hyug-Gyo;Lee, Munseob
    • Korean Journal of Optics and Photonics / v.33 no.6 / pp.267-274 / 2022
  • In this paper, we report the design and fabrication of binary diffractive optical elements (DOEs) for random-dot-pattern projection for Schlieren imaging. We selected a binary phase level and a pitch of 10 μm for the DOE, based on cost effectiveness and ease of manufacture. We designed the binary DOE using an iterative Fourier-transform algorithm with binary phase optimization. During initial optimization, we applied a computer-generated pseudorandom dot pattern of uniform intensity as the target pattern and found significant intensity nonuniformity across the field. Based on the evaluation of the initial optimization, we weighted the target random dot pattern with Gaussian profiles to improve the intensity uniformity, which improved the uniformity from 52.7% to 90.8%. We verified the design performance by fabricating the designed binary DOE and a beam projector to which it was applied. The verification confirmed that the projector produced over 10,000 random dots over 430 mm × 430 mm at a distance of 5 meters, as designed, but with a slightly lower uniformity of 84.5%. Fabrication errors of the DOE, mainly edge blurring and spacing errors, are the most likely causes of the difference.
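
A minimal sketch of an iterative Fourier-transform algorithm with a hard binary (0/π) phase constraint, in the spirit of the design step described above; the grid size, sparse target pattern, and quantization rule are illustrative assumptions, and the Gaussian weighting of the target used in the paper is omitted.

```python
import numpy as np

def binary_ifta(target_intensity, iterations=50, seed=0):
    """Iterative Fourier-transform algorithm with a binary (0 or pi) DOE phase."""
    rng = np.random.default_rng(seed)
    target_amplitude = np.sqrt(target_intensity)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_intensity.shape)

    for _ in range(iterations):
        # DOE plane: unit amplitude, phase quantized to two levels.
        binary_phase = np.where(np.cos(phase) >= 0.0, 0.0, np.pi)
        far_field = np.fft.fft2(np.exp(1j * binary_phase))
        # Far-field plane: impose the target amplitude, keep the phase.
        constrained = target_amplitude * np.exp(1j * np.angle(far_field))
        # Back-propagate to update the DOE-plane phase estimate.
        phase = np.angle(np.fft.ifft2(constrained))

    binary_phase = np.where(np.cos(phase) >= 0.0, 0.0, np.pi)
    achieved = np.abs(np.fft.fft2(np.exp(1j * binary_phase))) ** 2
    return binary_phase, achieved

# Hypothetical target: a sparse pseudorandom dot pattern on a 128 x 128 grid.
target = (np.random.default_rng(1).random((128, 128)) < 0.02).astype(float)
mask, result = binary_ifta(target)
```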

Dynamics of the River Plume (하천수 플룸 퍼짐의 동력학적 연구)

  • Yu, Hong-Sun;Lee, Jun;Shin, Jang-Ryong
    • Journal of Korean Society of Coastal and Ocean Engineers / v.6 no.4 / pp.413-420 / 1994
  • The dynamics of a river plume is a very complicated non-linear problem, with a free boundary changing in time and space; mixing with the ambient water through that boundary makes the problem more complicated still. In this paper we reduced the 3-dimensional problem to a 1-dimensional one using the integral analysis method, integrating the basic equations over the lateral and vertical variations. For these integrations we adopted the well-established assumption that the flow-axis component of the plume velocity and the density difference between the plume and the ambient water have Gaussian distributions in the directions perpendicular to the flow axis of the plume. We also used the result of our previous study on the lateral spreading velocity of the plume, derived under the same assumption, and entrainment was included as the mixing process. The resulting 1-dimensional equations were solved by the Runge-Kutta numerical method. A comparatively simple method of numerical analysis is thus presented for the 3-dimensional river plume; the method can also be used to analyze the thermal plume of cooling water from power plants.

  • PDF
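
The final step described above, integrating the reduced 1-dimensional equations along the plume axis with a Runge-Kutta method, can be sketched as follows; the right-hand side is a deliberately simplified toy system (plume width growing by entrainment, velocity decaying as momentum spreads), not the equations derived in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def toy_plume(s, state, entrainment=0.08):
    """Toy 1-D system along the plume axis: width grows by entrainment,
    velocity decays as the same momentum is spread over a wider plume."""
    width, velocity = state
    d_width = entrainment
    d_velocity = -entrainment * velocity / width
    return [d_width, d_velocity]

solution = solve_ivp(toy_plume, t_span=(0.0, 1000.0), y0=[50.0, 1.0],
                     method="RK45", dense_output=True)
distance = np.linspace(0.0, 1000.0, 11)
width, velocity = solution.sol(distance)
print(np.round(width, 1))
print(np.round(velocity, 3))
```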

Extraction of Renal Glomeruli Region using Genetic Algorithm (유전적 알고리듬을 이용한 신장 사구체 영역의 추출)

  • Kim, Eung-Kyeu
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.2 / pp.30-39 / 2009
  • Extraction of the glomerular region plays a very important role in diagnosing nephritis automatically. However, it is not easy to extract the glomerular region correctly, because the difference between the glomerular region and other regions is not obvious and because of the unevenness introduced in the sampling and imaging processes. In this study, a new method for extracting the renal glomerular region using a genetic algorithm is proposed. First, low- and high-resolution images are obtained using a Laplacian-of-Gaussian filter with ${\sigma}=2.1$ and ${\sigma}=1.8$, respectively, and binary images are then obtained by setting the threshold value to zero. Border edges are then detected from the low-resolution image, and the glomerular border is expressed as a closed B-spline curve. The parameters that determine this closed curve on the low-resolution image are found by a genetic algorithm search, which avoids noise and keeps the border line from breaking off in the middle. Next, to obtain more precise glomerular border edges, the number of node points is increased and refined in stages from eight to sixteen and then thirty-two using the high-resolution image. Finally, the validity of the proposed method is demonstrated by applying it to real images.
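
A minimal sketch of the first step described above, a Laplacian-of-Gaussian filter at two scales followed by a zero threshold; the image is a random stand-in, and the B-spline contour and genetic algorithm stages are omitted.

```python
import numpy as np
from scipy import ndimage

def log_zero_threshold(image, sigma):
    """Laplacian-of-Gaussian response thresholded at zero: positive -> 1, else 0."""
    response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    return (response > 0).astype(np.uint8)

# Hypothetical grayscale micrograph (random values as a stand-in).
image = np.random.default_rng(0).random((256, 256))

coarse = log_zero_threshold(image, sigma=2.1)  # low-resolution (coarse) binary image
fine = log_zero_threshold(image, sigma=1.8)    # high-resolution (fine) binary image
print(coarse.mean(), fine.mean())
```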

A Study on Optimization of Perovskite Solar Cell Light Absorption Layer Thin Film Based on Machine Learning (머신러닝 기반 페로브스카이트 태양전지 광흡수층 박막 최적화를 위한 연구)

  • Ha, Jae-jun;Lee, Jun-hyuk;Oh, Ju-young;Lee, Dong-geun
    • The Journal of the Korea Contents Association / v.22 no.7 / pp.55-62 / 2022
  • Perovskite solar cells are an active area of research in renewable energy fields such as solar, wind, hydroelectric, marine, bio-, and hydrogen energy, which aim to replace fossil fuels such as oil, coal, and natural gas as power demand grows with the expanding use of the Internet of Things and virtual environments in the 4th industrial revolution. The perovskite solar cell is a solar cell device using an organic-inorganic hybrid material with a perovskite structure, and it has the advantages of high efficiency, low-cost solution processing, and low-temperature processes, making it a candidate to replace existing silicon solar cells. To optimize the light absorption layer thin film predicted by existing empirical methods, reliability must be verified through evaluation of device characteristics. However, since it is costly to evaluate the characteristics of light absorption layer thin-film devices, the number of tests is limited. To address this problem, machine learning or artificial intelligence models offer great potential as a clear and valid auxiliary means for optimizing the light absorption layer thin film. In this study, to estimate the optimization of the light absorption layer thin film of perovskite solar cells, support vector machine regression models with linear, RBF, polynomial, and sigmoid kernels were compared to verify the difference in accuracy for each kernel function.
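
A minimal sketch of the kernel comparison described above, using scikit-learn's SVR on a synthetic dataset; the features and target are hypothetical stand-ins for the process variables and film property studied in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic process features and a film-quality target (not data from the study).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + rng.normal(scale=0.2, size=300)

for kernel in ("linear", "rbf", "poly", "sigmoid"):
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel))
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{kernel:7s} kernel: mean R^2 = {scores.mean():.3f}")
```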

Spatial Estimation of the Site Index for Pinus densiflora using Kriging (크리깅을 이용한 소나무림 지위지수 공간분포 추정)

  • Kim, Kyoung-Min;Park, Key-Ho
    • Journal of Korean Society of Forest Science / v.102 no.4 / pp.467-476 / 2013
  • Site index information from the forest site map exists only at sampled locations. In this study, site indices at unsampled locations were estimated using kriging, an interpolation method that interpolates values between point samples to generate a continuous surface. The site index of Pinus densiflora in the Danyang area was calculated plot by plot using the Chapman-Richards model, and site indices at unsampled locations were then interpolated with theoretical variogram models and ordinary kriging. To assess the parameter selection, cross-validation was performed by calculating the mean error (ME), average standard error (ASE), and root mean square error (RMSE). As a result, the Gaussian model was excluded because it had the largest relative nugget (37.40%), and the spherical (16.80%) and exponential (8.77%) models were selected. Site index estimates of Pinus densiflora over the entire Danyang area ranged from 4.39 to 19.53 with the exponential model and from 4.54 to 19.23 with the spherical model. In cross-validation the RMSE values were almost identical, but the ME and ASE of the spherical model were slightly lower than those of the exponential model, so the site index prediction map from the spherical model was finally selected. The average site index from the prediction map was 10.78. The site index prediction map is expected to allow regional variation to be taken into account when estimating forest biomass, which has large spatial variance, and ultimately to help improve the accuracy of forest carbon estimation.
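
The theoretical variogram models compared above can be written down directly; a short sketch follows with hypothetical nugget, partial sill, and range values, including the relative nugget (nugget as a share of the total sill) used to exclude the Gaussian model.

```python
import numpy as np

# Theoretical variogram models as functions of lag h, with nugget c0,
# partial sill c, and (practical) range a.  Parameter values are hypothetical.
def spherical(h, c0, c, a):
    h = np.asarray(h, dtype=float)
    return np.where(h <= a, c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3), c0 + c)

def exponential(h, c0, c, a):
    return c0 + c * (1.0 - np.exp(-3.0 * np.asarray(h, dtype=float) / a))

def gaussian_model(h, c0, c, a):
    return c0 + c * (1.0 - np.exp(-3.0 * (np.asarray(h, dtype=float) / a) ** 2))

def relative_nugget_percent(c0, c):
    """Relative nugget effect: nugget as a percentage of the total sill."""
    return 100.0 * c0 / (c0 + c)

lags = np.linspace(0.0, 2000.0, 5)
print(spherical(lags, c0=0.5, c=2.5, a=1500.0))
print(relative_nugget_percent(c0=0.5, c=2.5))  # about 16.7 %, purely illustrative
```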

A Feasibility study on the Simplified Two Source Model for Relative Electron Output Factor of Irregular Block Shape (단순화 이선원 모델을 이용한 전자선 선량율 계산 알고리듬에 관한 예비적 연구)

  • 고영은;이병용;조병철;안승도;김종훈;이상욱;최은경
    • Progress in Medical Physics / v.13 no.1 / pp.21-26 / 2002
  • A practical algorithm that calculates the relative output factor (ROF) for irregularly shaped electron fields has been developed, and its accuracy has been evaluated. The algorithm adopts a two-source model, which assumes that the electron dose can be expressed as the sum of a primary source component and a component scattered from the shielding block. The original two-source model has been modified to make the algorithm simpler and to reduce the number of parameters needed in the calculation, while the calculation error remains within the clinical tolerance range. The primary source is assumed to have a Gaussian distribution, while the scattered component follows the inverse-square law; the depth and angular dependence of the primary and scattered components are ignored. The ROF can therefore be calculated with three parameters: the effective source distance, the variance of the primary source, and the scattering power of the block. The coefficients are obtained from square-block measurements, and the algorithm is verified against the rectangular and irregularly shaped fields used in the clinic. The results showed less than 1.0% difference between calculation and measurement for most cases, and no case exceeded 2.1%. By improving the algorithm for the aperture region, which shows the largest error, it could be used practically in the clinic, since the parameters can be acquired with a minimum of measurements (5∼6 measurements per cone) and accurate results are generated within the clinically acceptable range.

  • PDF
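
A schematic sketch of a two-source output-factor calculation of the kind described above: a Gaussian primary component integrated over the open aperture plus a block-scatter component weighted by the inverse-square law. The formulation, grid, and parameter values are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def field_output(aperture_mask, pixel_mm, sigma_mm, scatter_coeff, source_dist_mm):
    """Primary (Gaussian over the open aperture) plus block scatter (inverse square)."""
    ny, nx = aperture_mask.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r2 = ((x - nx / 2.0) ** 2 + (y - ny / 2.0) ** 2) * pixel_mm ** 2
    primary = (np.exp(-r2 / (2.0 * sigma_mm ** 2)) * aperture_mask).sum()
    scatter = scatter_coeff * ((1.0 - aperture_mask) / (source_dist_mm ** 2 + r2)).sum()
    return primary + scatter

# Hypothetical 10 x 10 cm reference aperture and a smaller irregular aperture (1 mm pixels).
reference = np.zeros((200, 200)); reference[50:150, 50:150] = 1.0
irregular = np.zeros((200, 200)); irregular[70:130, 60:150] = 1.0

params = dict(pixel_mm=1.0, sigma_mm=25.0, scatter_coeff=2.0e4, source_dist_mm=1000.0)
rof = field_output(irregular, **params) / field_output(reference, **params)
print(f"relative output factor = {rof:.3f}")
```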