• Title/Summary/Keyword: Image method


Development and Analysis of COMS AMV Target Tracking Algorithm using Gaussian Cluster Analysis (가우시안 군집분석을 이용한 천리안 위성의 대기운동벡터 표적추적 알고리듬 개발 및 분석)

  • Oh, Yurim;Kim, Jae Hwan;Park, Hyungmin;Baek, Kanghyun
    • Korean Journal of Remote Sensing / v.31 no.6 / pp.531-548 / 2015
  • Atmospheric Motion Vectors (AMVs) derived from satellite images show a Slow Speed Bias (SSB) in comparison with rawinsonde observations. The SSB originates from tracking, target selection, and height assignment errors, of which height assignment is known to be the leading contributor. However, recent work has shown that height assignment error cannot fully explain the SSB. This paper takes a new approach to examine the possibility of reducing the SSB of COMS AMVs by using a new target tracking algorithm. Tracking error can be caused by the averaging of different wind patterns within a target and by changes in cloud shape during the search process over time. To overcome this problem, a Gaussian Mixture Model (GMM) is adopted to extract the coldest cluster as the target, since the shape of such a target is less subject to deformation. An image filtering scheme is then applied that weights the selected coldest pixels more heavily than the others, making the target easier to track. When AMVs derived from our algorithm with the sum-of-squared-distance matching method and those from the current COMS system are compared with rawinsonde data, our products show a noticeable improvement over the COMS products: mean wind speed increases by $2.7\,ms^{-1}$ and the SSB is reduced by 29%. However, the bias statistics show a negative impact at mid/low levels with our algorithm, and the number of vectors is reduced by 40% relative to COMS. Therefore, further study is required to improve accuracy for mid/low level winds and to increase the number of AMVs.
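
The two steps the abstract describes — isolating the coldest cloud cluster with a GMM and tracking it by minimizing a sum of squared distances (SSD) — can be illustrated compactly. Below is a minimal sketch, not the operational COMS code; the array names, box sizes, and the choice of three mixture components are assumptions for illustration.

```python
# Sketch: 1) isolate the coldest cluster in a brightness-temperature target box with a
# GMM, 2) track the weighted target in the next image by minimizing SSD.
import numpy as np
from sklearn.mixture import GaussianMixture

def coldest_cluster_weights(target_bt, n_components=3):
    """Fit a GMM to brightness temperatures (K) and up-weight the coldest component."""
    pixels = target_bt.reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(pixels)
    coldest = np.argmin(gmm.means_.ravel())          # component with the lowest mean BT
    # posterior probability of belonging to the coldest cluster, per pixel
    return gmm.predict_proba(pixels)[:, coldest].reshape(target_bt.shape)

def track_ssd(target_bt, search_bt, weights):
    """Slide the weighted target over the search area; return the best (row, col) offset."""
    th, tw = target_bt.shape
    best, best_rc = np.inf, (0, 0)
    for r in range(search_bt.shape[0] - th + 1):
        for c in range(search_bt.shape[1] - tw + 1):
            patch = search_bt[r:r + th, c:c + tw]
            ssd = np.sum(weights * (patch - target_bt) ** 2)
            if ssd < best:
                best, best_rc = ssd, (r, c)
    return best_rc
```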

Analysis of Respiratory Motion Artifacts in PET Imaging Using Respiratory Gated PET Combined with 4D-CT (4D-CT와 결합한 호흡게이트 PET을 이용한 PET영상의 호흡 인공산물 분석)

  • Cho, Byung-Chul;Park, Sung-Ho;Park, Hee-Chul;Bae, Hoon-Sik;Hwang, Hee-Sung;Shin, Hee-Soon
    • The Korean Journal of Nuclear Medicine / v.39 no.3 / pp.174-181 / 2005
  • Purpose: The reduction of respiratory motion artifacts in PET images was studied using respiratory-gated PET (RGPET) with a moving phantom. In particular, a method of generating simulated helical CT images from 4D-CT datasets was developed and applied to phase-specific RGPET images for more accurate attenuation correction. Materials and Methods: Using a motion phantom with a period of 6 seconds and a linear motion amplitude of 26 mm, PET/CT (Discovery ST; GEMS) scans with and without respiratory gating were obtained for one syringe and two vials with volumes of 3, 10, and 30 ml, respectively. RPM (Real-Time Position Management, Varian) was used to track motion during PET/CT scanning. Ten datasets of RGPET and 4D-CT, corresponding to 10% phase intervals, were acquired. From the positions, sizes, and uptake values of each subject in the resulting phase-specific PET and CT datasets, the correlations between motion artifacts in PET and CT images and the size of motion relative to the size of the subject were analyzed. Results: The center positions of the three vials in RGPET and 4D-CT agreed well with the actual positions within the estimated error. However, the volumes of subjects in non-gated PET images increased in proportion to the relative motion size and were overestimated by as much as 250% when the motion amplitude was twice the size of the subject; conversely, the corresponding maximal uptake value was reduced to about 50%. Conclusion: RGPET is demonstrated to remove respiratory motion artifacts in PET imaging; moreover, more precise image fusion and more accurate attenuation correction are possible by combining it with 4D-CT.
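
The gating step described here — binning data into ten 10%-phase gates using a periodic respiratory trace — can be sketched as follows. This is an illustrative toy, not the scanner's software; the variable names and the synthetic 6 s trace are assumptions.

```python
# Assign each time sample a respiratory phase gate 0..n_gates-1 from an RPM-like trace.
import numpy as np

def phase_gates(resp_signal, times, n_gates=10):
    """Gate index per sample, based on phase within each detected breathing cycle."""
    s = resp_signal - resp_signal.mean()
    # detect cycle starts at upward zero-crossings of the mean-subtracted trace
    starts = np.where((s[:-1] < 0) & (s[1:] >= 0))[0]
    gates = np.full(len(times), -1)
    for a, b in zip(starts[:-1], starts[1:]):
        phase = (np.arange(a, b) - a) / (b - a)      # 0..1 within this cycle
        gates[a:b] = np.minimum((phase * n_gates).astype(int), n_gates - 1)
    return gates

# toy usage: 6 s breathing period, sampled at 25 Hz
t = np.arange(0, 60, 0.04)
trace = np.sin(2 * np.pi * t / 6.0)
g = phase_gates(trace, t)   # events recorded at time t[i] go to gate g[i]
```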

Assessment of Attenuation Correction Techniques with a $^{137}Cs$ Point Source ($^{137}Cs$ 점선원을 이용한 감쇠 보정기법들의 평가)

  • Bong, Jung-Kyun;Kim, Hee-Joung;Son, Hye-Kyoung;Park, Yun-Young;Park, Hae-Joung;Yun, Mi-Jin;Lee, Jong-Doo;Jung, Hae-Jo
    • The Korean Journal of Nuclear Medicine / v.39 no.1 / pp.57-68 / 2005
  • Purpose: The objective of this study was to assess attenuation correction algorithms using a $^{137}Cs$ point source for brain positron emission tomography (PET) imaging. Materials & Methods: Four different types of phantoms were used to test the attenuation correction techniques. Transmission data from a $^{137}Cs$ point source were acquired after infusing the emission source into the phantoms, and the emission data were subsequently acquired in 3D acquisition mode. Scatter correction was performed with a background tail-fitting algorithm. Emission data were then reconstructed using an iterative reconstruction method with measured (MAC), elliptical (ELAC), segmented (SAC), and remapping (RAC) attenuation correction, respectively. Reconstructed images were assessed both qualitatively and quantitatively. In addition, reconstructed images of a normal subject were assessed by nuclear medicine physicians, and subtracted images were compared. Results: ELAC, SAC, and RAC provided uniform phantom images with less noise for a cylindrical phantom. In contrast, a decrease in intensity at the central portion of the attenuation map was noticed in the result of the MAC. Reconstructed images of the Jaszczak and Hoffman phantoms showed better quality with RAC and SAC. The attenuation of the skull in images of the normal subject was clearly noticeable, and attenuation correction that did not consider the attenuation of the skull resulted in artificial defects in images of the brain. Conclusion: More sophisticated and improved attenuation correction methods are needed to obtain better accuracy in quantitative brain PET imaging.
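
The common core of all four correction methods compared above is the attenuation correction factor exp(line integral of mu) along each line of response (LOR), applied to the emission data. A very small sketch of that idea under toy assumptions (2-D map, pixel-sized steps, in-bounds LOR endpoints):

```python
# Compute attenuation correction factors from a measured attenuation map (MAC-style).
import numpy as np

def correction_factors(mu_map, lors, step=1.0):
    """mu_map: 2-D attenuation map (1/pixel); lors: list of ((x0,y0),(x1,y1)) in pixels."""
    acf = []
    for (x0, y0), (x1, y1) in lors:
        n = int(np.hypot(x1 - x0, y1 - y0) / step) + 1
        xs = np.linspace(x0, x1, n).astype(int)
        ys = np.linspace(y0, y1, n).astype(int)
        line_integral = mu_map[ys, xs].sum() * step   # crude Riemann sum along the LOR
        acf.append(np.exp(line_integral))
    return np.asarray(acf)

# corrected_sinogram = measured_sinogram * correction_factors(mu_map, lors)
```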

Quantification of Cerebrovascular Reserve Using Tc-99m HMPAO Brain SPECT and Lassen's Algorithm (Tc-99m HMPAO 뇌 SPECT와 Lassen 알고리즘을 이용한 뇌혈관 예비능의 정량화)

  • Kim, Kyeong-Min;Lee, Dong-Soo;Kim, Seok-Ki;Lee, Jae-Sung;Kang, Keon-Wook;Yeo, Jeong-Seok;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.34 no.4 / pp.322-335 / 2000
  • Purpose: For quantitative estimation of cerebrovascular reserve (CVR), we estimated cerebral blood flow (CBF) using Lassen's nonlinearity correction algorithm and Tc-99m HMPAO brain SPECT images acquired with a consecutive acquisition protocol. Using the CBF values in the basal and acetazolamide (ACZ) stress states, the CBF increase was calculated. Materials and Methods: In 9 normal subjects (age: $72{\pm}4$ years), brain SPECT was performed consecutively at the basal and ACZ stress states after injection of 555 MBq and 1,110 MBq of Tc-99m HMPAO, respectively. The cerebellum was automatically extracted as the reference region on the basal SPECT image using a threshold method. Assuming a basal cerebellar CBF of 55 ml/100g/min, CBF was calculated for every pixel in the basal state using Lassen's algorithm. Cerebellar blood flow at stress was estimated by comparing cerebellar counts at rest and at ACZ stress using Lassen's algorithm, and the CBF of every pixel at the ACZ stress state was then calculated using Lassen's algorithm and the ACZ cerebellar count. CVR was calculated by subtracting basal CBF from ACZ stress CBF for every pixel, and percent CVR was calculated by dividing CVR by basal CBF. Parametric images of CBF and percent CVR were generated. Results: The CBF and percent CVR parametric images were obtained successfully in all subjects. Global mean CBF was $49.6{\pm}5.5$ and $64.4{\pm}10.2$ ml/100g/min at the basal and ACZ stress states, respectively. The increase of CBF at the ACZ stress state was $14.7{\pm}9.6$ ml/100g/min. The global mean percent CVR was 30.7%, higher than the 13.8% calculated using count images. Conclusion: Blood flow at the basal and ACZ stress states and the cerebrovascular reserve were estimated using basal/ACZ Tc-99m HMPAO SPECT images and Lassen's algorithm, and parametric images of blood flow and cerebrovascular reserve were generated from these values.
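
The pixel-wise linearization step can be sketched directly. Lassen's correction maps a count ratio $X = C/C_{ref}$ to a flow ratio $F/F_{ref} = \alpha X / (1 + \alpha - X)$; the cerebellar reference CBF of 55 ml/100g/min follows the paper, while $\alpha = 1.5$, a value commonly used for Tc-99m HMPAO, is an assumption here.

```python
# Pixel-wise CBF from a SPECT count image via Lassen's linearization (sketch).
import numpy as np

def lassen_cbf(counts, cerebellum_counts, f_ref=55.0, alpha=1.5):
    """Convert a SPECT count image to a CBF image (ml/100g/min)."""
    x = counts / cerebellum_counts            # count ratio relative to the reference
    return f_ref * alpha * x / (1.0 + alpha - x)

def cvr(basal_cbf, stress_cbf):
    """CVR as in the paper: stress minus basal CBF, and percent CVR relative to basal."""
    delta = stress_cbf - basal_cbf
    return delta, 100.0 * delta / basal_cbf
```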


Precise, Real-time Measurement of the Fresh Weight of Lettuce with Growth Stage in a Plant Factory using a Nutrient Film Technique (NFT 수경재배 방식의 식물공장에서 생육단계별 실시간 작물 생체중 정밀 측정 방법)

  • Kim, Ji-Soo;Kang, Woo Hyun;Ahn, Tae In;Shin, Jong Hwa;Son, Jung Eek
    • Horticultural Science & Technology / v.34 no.1 / pp.77-83 / 2016
  • The total fresh weight of plants is an essential indicator of crop growth for monitoring production. Image-based methods have been developed to measure fresh weight without damaging the plants, but they have limitations; in addition, total plant fresh weight is difficult to measure directly in hydroponic systems because of the nutrient solution in the system. This study aimed to develop a precise, real-time method to measure the total fresh weight of romaine lettuce (Lactuca sativa L. cv. Asia Heuk Romaine) by growth stage in a plant factory using a nutrient film technique. The total weight of the channel, the amount of residual nutrient solution in the channel, and the fresh shoot and root weights of the plants were measured every 7 days after transplanting. The initial weight of the channel during nutrient solution supply (Wi) and its weight change per second just after the supply stopped were also measured. When no more draining occurred, the final weight of the channel (Ws) and the amount of residual nutrient solution in the channel were measured. The time constant ($\tau$) was calculated from the transient between Wi and Ws, and the relationship among Wi, Ws, $\tau$, and fresh weight was quantitatively analyzed. After the nutrient solution supply stopped, the channel weight decreased exponentially, and the nutrient solution drained more slowly as the root weight in the channel increased. Large differences were observed between the actual fresh weight of the plants and the predicted value because the channel contained residual nutrient solution. These differences were difficult to predict by growth stage alone, but a model incorporating the time constant showed the highest accuracy: the real-time fresh weight could be calculated from Wi, Ws, and $\tau$ at each growth stage.
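
The drainage transient implied above is an exponential decay from Wi toward Ws, $W(t) = W_s + (W_i - W_s)\,e^{-t/\tau}$. A sketch of fitting $\tau$ from a measured transient follows; the sample numbers are made-up assumptions, not the paper's data.

```python
# Fit the time constant tau of the channel-weight transient after supply stops.
import numpy as np
from scipy.optimize import curve_fit

def channel_weight(t, tau, wi, ws):
    return ws + (wi - ws) * np.exp(-t / tau)

t = np.arange(0, 300, 5.0)                       # seconds after supply stops (toy)
w_obs = channel_weight(t, 60.0, 12.5, 9.8) + np.random.normal(0, 0.02, t.size)

wi, ws = w_obs[0], w_obs[-1]                     # endpoints approximate Wi and Ws
tau, _ = curve_fit(lambda tt, k: channel_weight(tt, k, wi, ws), t, w_obs, p0=[30.0])
# Fresh weight is then estimated from Wi, Ws, and tau via the paper's regression.
print(f"fitted tau = {tau[0]:.1f} s")
```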

A Basic Study for the Retrieval of Surface Temperature from Single Channel Middle-infrared Images (단일 밴드 중적외선 영상으로부터 표면온도 추정을 위한 기초연구)

  • Park, Wook;Lee, Yoon-Kyung;Won, Joong-Sun;Lee, Seung-Geun;Kim, Jong-Min
    • Korean Journal of Remote Sensing / v.24 no.2 / pp.189-194 / 2008
  • The middle-infrared (MIR) spectral region, between 3.0 and $5.0\;{\mu}m$ in wavelength, is useful for observing high-temperature events such as volcanic activity and forest fires. However, atmospheric effects and daytime solar irradiance have not been well studied for this spectral band. The objective of this basic study is to evaluate atmospheric effects and ultimately to estimate surface temperature from a single-channel MIR image, although the typical approach is a split-window method using two or more channels. Several parameters are involved in the correction, including various atmospheric data and the solar irradiance over the area of interest. To evaluate the effect of solar irradiance, MODIS MIR images acquired in the daytime and nighttime were compared. Atmospheric parameters were modeled with MODTRAN and applied to a radiative transfer model to estimate sea surface temperature. The MODIS Sea Surface Temperature algorithm, based on multi-channel observation, was run for comparison with the single-channel radiative transfer results. The temperature difference between the two methods was $0.89{\pm}0.54^{\circ}C$ and $1.25{\pm}0.41^{\circ}C$ for the daytime and nighttime images, respectively. It is also shown that the emissivity effect has a larger influence on the estimated temperature than atmospheric effects. Although the test results encourage the use of single-channel MIR observations, it must be noted that they were obtained over a water body, not over land. Because emissivity varies greatly over land, it is very difficult to retrieve land surface temperature from single-channel MIR data.
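
A single-channel retrieval of this kind inverts the radiative transfer equation and then the Planck function. The sketch below is consistent with the approach described (MODTRAN-style transmittance and path-radiance terms), but the numeric atmospheric values are placeholders, not MODTRAN output.

```python
# Invert L_sensor = tau * (emis * B(T) + (1 - emis) * L_down) + L_up for T (sketch).
import numpy as np

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # Planck, light speed, Boltzmann

def planck(T, lam):
    """Spectral radiance (W m^-2 sr^-1 m^-1) at wavelength lam (m)."""
    return 2 * H * C**2 / lam**5 / (np.exp(H * C / (lam * K * T)) - 1)

def inv_planck(L, lam):
    """Temperature (K) from spectral radiance."""
    return H * C / (lam * K * np.log(2 * H * C**2 / (lam**5 * L) + 1))

def surface_temperature(L_sensor, tau=0.85, L_up=0.15e6, L_down=0.2e6,
                        emis=0.98, lam=4.0e-6):
    B_T = ((L_sensor - L_up) / tau - (1 - emis) * L_down) / emis
    return inv_planck(B_T, lam)

# round-trip check at 300 K
L = 0.85 * (0.98 * planck(300.0, 4e-6) + 0.02 * 0.2e6) + 0.15e6
print(surface_temperature(L))   # ~300.0
```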

Time-Lapse Crosswell Seismic Study to Evaluate the Underground Cavity Filling (지하공동 충전효과 평가를 위한 시차 공대공 탄성파 토모그래피 연구)

  • Lee, Doo-Sung
    • Geophysics and Geophysical Exploration / v.1 no.1 / pp.25-30 / 1998
  • Time-lapse crosswell seismic data, recorded before and after cavity filling, showed that the filling increased the velocity in a known cavity zone at an old mine site in the Inchon area. The seismic response depicted on the tomograms, in conjunction with geologic data from drillings, implies that the cavity may be small or filled with debris. In this study, I attempted to evaluate the filling effect by analyzing velocities measured from the time-lapse tomograms. The data, acquired with a downhole airgun and a 24-channel hydrophone system, revealed measurable amounts of source statics, and I present a methodology to estimate them. The procedure is: 1) examine the firing time of each source and remove the effect of irregular firing times, and 2) estimate the residual statics caused by inaccurate source positioning. The proposed multi-step inversion can reduce high-frequency numerical noise and enhance resolution in the zone of interest, and with different starting models it successfully reveals the subtle velocity changes in the small cavity zone. The inversion procedure is: 1) conduct an inversion using regular-sized cells and generate an image of the gross velocity structure by applying a 2-D median filter to the resulting tomogram, and 2) construct the starting velocity model by modifying the final velocity model from the first phase so that the zone of interest consists of small-sized grids. The final velocity model from the baseline survey was used as the starting velocity model for the monitor inversion. Since a velocity change was expected only in the cavity zone, the number of model parameters in the monitor inversion could be reduced significantly by fixing the model outside the cavity zone to the baseline model.
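
The two-step source-statics estimation named above can be sketched in a few lines: remove each shot's irregular firing-time delay from its picked traveltimes, then take the per-source mean residual against a smooth model as the residual static. The data shapes and reference model are illustrative assumptions.

```python
# Two-step source statics: firing-time removal, then residual per-source statics.
import numpy as np

def correct_firing_time(picks, firing_delays):
    """picks: (n_sources, n_receivers) picked traveltimes; firing_delays: (n_sources,)."""
    return picks - firing_delays[:, None]

def residual_source_statics(picks, predicted):
    """Per-source static = mean (picked - predicted) over receivers, e.g. from a
    mislocated source; subtract it before the tomographic inversion."""
    statics = (picks - predicted).mean(axis=1)
    return statics, picks - statics[:, None]
```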


Introduction of GOCI-II Atmospheric Correction Algorithm and Its Initial Validations (GOCI-II 대기보정 알고리즘의 소개 및 초기단계 검증 결과)

  • Ahn, Jae-Hyun;Kim, Kwang-Seok;Lee, Eun-Kyung;Bae, Su-Jung;Lee, Kyeong-Sang;Moon, Jeong-Eon;Han, Tai-Hyun;Park, Young-Je
    • Korean Journal of Remote Sensing / v.37 no.5_2 / pp.1259-1268 / 2021
  • The 2nd Geostationary Ocean Color Imager (GOCI-II) is the successor to the Geostationary Ocean Color Imager (GOCI). It employs one near-ultraviolet band (380 nm), eight visible bands (412, 443, 490, 510, 555, 620, 660, and 680 nm), and three near-infrared bands (709, 745, and 865 nm) to observe the marine environment of Northeast Asia, including the Korean Peninsula. However, the multispectral radiance image observed at satellite altitude includes both the water-leaving radiance and the atmospheric path radiance, so an atmospheric correction that estimates the water-leaving radiance without the path radiance is essential for analyzing the ocean environment. This manuscript describes the GOCI-II standard atmospheric correction algorithm and its initial-phase validation. The GOCI-II atmospheric correction is theoretically based on the previous GOCI atmospheric correction, partially improved for turbid water using GOCI-II's two additional bands, i.e., 620 and 709 nm. The match-up analysis showed acceptable results, with mean absolute percentage errors falling within 5% in the blue bands. Part of the deviation over case-II waters is thought to arise from the lack of near-infrared vicarious calibration. We expect the GOCI-II atmospheric correction algorithm to be improved and updated regularly in the GOCI-II data processing system through continuous calibration and validation activities.
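
For orientation, the classic ocean-color atmospheric correction that GOCI/GOCI-II builds on subtracts Rayleigh reflectance, estimates the aerosol contribution from the NIR bands (where open-ocean water-leaving radiance is near zero), and extrapolates it to the visible. The sketch below is that conceptual flow, not the operational GOCI-II code; the Rayleigh inputs and the simple exponential aerosol extrapolation are stand-in assumptions.

```python
# Gordon-style NIR-based atmospheric correction over the GOCI-II band set (sketch).
import numpy as np

BANDS = np.array([412, 443, 490, 510, 555, 620, 660, 680, 709, 745, 865])  # nm

def atmospheric_correction(rho_toa, rho_rayleigh, nir=(745, 865)):
    """rho_toa, rho_rayleigh: top-of-atmosphere / Rayleigh reflectance per band."""
    rho_c = rho_toa - rho_rayleigh                   # Rayleigh-corrected reflectance
    i1, i2 = list(BANDS).index(nir[0]), list(BANDS).index(nir[1])
    # assume negligible water-leaving radiance in the NIR (fails in turbid water,
    # which is why GOCI-II adds 620/709 nm handling on top of this scheme)
    eps = rho_c[i1] / rho_c[i2]                      # aerosol spectral slope
    rho_aer = rho_c[i2] * eps ** ((BANDS[i2] - BANDS) / (BANDS[i2] - BANDS[i1]))
    return rho_c - rho_aer                           # water-leaving reflectance estimate
```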

Relationship between Stratum Corneum Carbonylated Protein (SCCP) and Skin Biophysical Parameters (Stratum Corneum Carbonylated Protein (SCCP)의 피부 생물학적 파라미터와의 관계)

  • Lee, Yongjik;Nam, Gaewon
    • Journal of the Society of Cosmetic Scientists of Korea / v.45 no.2 / pp.131-138 / 2019
  • Carbonylated proteins (CPs) are formed by the chemical reaction of basic amino acid residues in proteins with aldehyde compounds produced by lipid peroxidation. CPs are excited by light ranging from UVA to blue light, generating superoxide anion radicals ($^{\cdot}O_2{^-}$) through a photosensitizing reaction. These CPs then induce further protein carbonylation in the stratum corneum (SC) through ROS generation; the superoxide anion radicals produce more CPs in the SC through lipid peroxidation, ultimately affecting skin conditions, including color and moisture functions. The purpose of this study was to investigate the relationship between the production of stratum corneum carbonylated protein (SCCP) and skin elasticity. Forty-six healthy Korean females aged 30 to 50 years participated in this 8-week study. The test was conducted in two groups: a placebo group (N = 23) used a cream without active ingredients, and the other group (N = 23) used a cream containing elasticity-improving ingredients. The test areas were the crow's feet and the cheek. Various non-invasive methods were used to measure skin biophysical parameters: dermis density and skin wrinkles were measured using a DUB scanner and Primos premium, respectively, and skin elasticity was measured using a dermal torque meter (DTM310) and a ballistometer (BLS780). SCCP was assessed with a simple, non-invasive skin surface biopsy on the cheek of each subject, and the amount of SCCP was determined using image analysis. All measurements were taken at 0, 4, and 8 weeks. The results revealed that the amount of CP in the SC was reduced when the skin wrinkle and skin elasticity parameters improved. This indicates that the correlation between elasticity improvement and the amount of CP can be used as an anti-aging indicator and is applicable to skin clinical tests for the measurement of skin aging in the future.

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in fields such as classification, summarization, and generation. Among text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification, with one label from two classes; multi-class classification, with one label from several classes; and multi-label classification, with multiple labels from several classes. Multi-label classification in particular requires a different training method because of its multiple labels, and as the number of labels and classes grows, prediction becomes more difficult and performance improvements harder to achieve. To overcome these limitations, research on label embedding is being actively conducted, which (i) compresses the initially given high-dimensional label space into a low-dimensional latent label space, (ii) trains a model to predict the compressed labels, and (iii) restores the predicted labels to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress the labels by random transformation, they cannot capture non-linear relationships between labels and thus cannot create a latent label space that sufficiently contains the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding, most notably label embedding using an autoencoder, a deep learning model effective for data compression and restoration. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space; this is related to the vanishing-gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: adding a layer's input to its output prevents gradients from vanishing during backpropagation, so efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using them in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space.
Using these, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract and evaluated multi-label classification by restoring the predicted keyword vector to the original label space. The accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared with traditional multi-label classification methods. This suggests that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across different numbers of dimensions of the latent label space.
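
The core architectural idea — an autoencoder over the multi-hot label vector whose encoder and decoder each carry a skip connection — can be sketched as follows. This is a hedged illustration of the idea, not the authors' implementation; the layer sizes, latent dimension, and linear projection on the shortcut are assumptions.

```python
# Label-embedding autoencoder with skip connections in encoder and decoder (sketch).
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU(),
                                  nn.Linear(d_out, d_out))
        self.skip = nn.Linear(d_in, d_out)   # projection so the shortcut matches d_out
    def forward(self, x):
        return self.body(x) + self.skip(x)   # skip connection: input added to output

class LabelAutoencoder(nn.Module):
    def __init__(self, n_labels, d_latent=64):
        super().__init__()
        self.encoder = SkipBlock(n_labels, d_latent)
        self.decoder = SkipBlock(d_latent, n_labels)
    def forward(self, y):
        z = self.encoder(y)                  # low-dimensional latent label vector
        return self.decoder(z), z            # reconstruction logits and embedding

# training-step outline: reconstruct multi-hot keyword label vectors with BCE loss
model = LabelAutoencoder(n_labels=1000)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
y = (torch.rand(32, 1000) < 0.01).float()    # toy batch of sparse multi-hot labels
y_hat, z = model(y)
loss = loss_fn(y_hat, y)
opt.zero_grad(); loss.backward(); opt.step()
```

A downstream classifier is then trained to predict z from the abstract text, and the frozen decoder maps the predicted z back to the original label space, as the abstract describes.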