• Title/Summary/Keyword: Modified Image


An Electrical Conductivity Reconstruction for Evaluating Bone Mineral Density: Simulation (골 밀도 평가를 위한 뼈의 전기 전도도 재구성: 시뮬레이션)

  • 최민주;김민찬;강관석;최흥호
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.4
    • /
    • pp.261-268
    • /
    • 2004
  • Osteoporosis is a clinical condition in which the amount of bone tissue is reduced and the likelihood of fracture is increased. It is known that the electrical properties of bone are related to its density; in particular, the electrical resistance of bone decreases as bone loss increases. This implies that the electrical properties of bone may be a useful parameter for diagnosing osteoporosis, provided they can be readily measured. This study attempted to evaluate the electrical conductivity of bone using electrical impedance tomography (EIT). In general it is not easy to obtain an EIT image of bone because of the large difference (about two orders of magnitude) in electrical properties between bone and the surrounding soft tissue. In the present study, we took an adaptive mesh regeneration technique originally developed for the detection of two-phase boundaries and modified it so that the electrical conductivity inside a boundary can be reconstructed when the geometry of that boundary is given. Numerical simulation was carried out for a tibia phantom: a circular cylindrical phantom (radius 40 mm) containing an elliptical, homogeneous tibia bone (semi-axes of 17 mm and 15 mm) surrounded by soft tissue. The bone was located 15 mm above the center of the circular cross-section of the phantom. The electrical conductivity of the soft tissue was set to 4 mS/cm, and that of the bone was varied from 0.01 to 1 mS/cm. The simulation included measurement errors in order to examine their effect. The results showed that, if the measurement error was kept below 5%, the reconstructed electrical conductivity of the bone was within 10% error, and the accuracy increased with the electrical conductivity of the bone, as expected. This indicates that the present technique provides more accurate information for osteoporotic bones. It should be noted that the simulation is based on a simple two-phase image of the bone and the surrounding soft tissue whose anatomical information is given. Nevertheless, the study indicates the possibility that EIT may be used as a new means of detecting the bone loss that leads to osteoporotic fractures.
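
To make the simulation setup concrete, the following is a minimal sketch (not from the paper) of the two-phase conductivity phantom described above: a circular soft-tissue cross-section of radius 40 mm containing an elliptical bone region centered 15 mm above the phantom center, built on a simple pixel grid. Function and parameter names are illustrative assumptions.

```python
import numpy as np

def build_tibia_phantom(n=128, radius_mm=40.0, bone_axes_mm=(17.0, 15.0),
                        bone_offset_mm=15.0, sigma_soft=4.0, sigma_bone=0.1):
    """Build a 2-D conductivity map (mS/cm) for the two-phase phantom:
    a circular soft-tissue cross-section containing an elliptical bone
    region centered 15 mm above the phantom center."""
    coords = np.linspace(-radius_mm, radius_mm, n)   # physical coordinates in mm
    x, y = np.meshgrid(coords, coords)

    sigma = np.full((n, n), np.nan)                  # outside the phantom: undefined
    inside = x**2 + y**2 <= radius_mm**2
    sigma[inside] = sigma_soft                       # soft tissue

    a, b = bone_axes_mm                              # ellipse semi-axes
    bone = (x / a)**2 + ((y - bone_offset_mm) / b)**2 <= 1.0
    sigma[bone & inside] = sigma_bone                # bone region
    return sigma

phantom = build_tibia_phantom(sigma_bone=0.5)        # bone conductivity in [0.01, 1] mS/cm
```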

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.459-467
    • /
    • 2009
  • Purpose: Maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and NVIDIA's CUDA (compute unified device architecture) technology, the projection and backprojection steps of the ML-EM algorithm were parallelized. The computation times for projection, for the errors between measured and estimated data, and for backprojection in one iteration were measured. Total time included the latency of data transfers between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1,024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by growing delays in the CPU-based computation after a certain number of iterations, whereas the GPU-based computation showed very little variation in time per iteration owing to the use of shared memory. Conclusion: GPU-based parallel computation significantly improved the computing speed and stability of ML-EM. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
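
The ML-EM update that the paper parallelizes on the GPU compares measured with estimated projections and backprojects the ratio. Below is a minimal CPU-side NumPy sketch of that iteration, assuming a dense system matrix A; in practice (and in the paper) projection and backprojection are matrix-free GPU kernels, so this illustrates the algorithm only and is not the authors' CUDA implementation.

```python
import numpy as np

def ml_em(A, y, n_iter=32, eps=1e-12):
    """Maximum likelihood-expectation maximization reconstruction.

    A : (n_bins, n_voxels) system matrix (forward projector)
    y : (n_bins,) measured projection data
    """
    x = np.ones(A.shape[1])                     # uniform initial image
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                            # forward projection
        ratio = y / np.maximum(proj, eps)       # measured vs. estimated data
        x *= (A.T @ ratio) / np.maximum(sens, eps)  # backproject and update
    return x
```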

The PRISM-based Rainfall Mapping at an Enhanced Grid Cell Resolution in Complex Terrain (복잡지형 고해상도 격자망에서의 PRISM 기반 강수추정법)

  • Chung, U-Ran;Yun, Kyung-Dahm;Cho, Kyung-Sook;Yi, Jae-Hyun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.11 no.2
    • /
    • pp.72-78
    • /
    • 2009
  • The demand for rainfall data in gridded digital formats has increased in recent years due to the close linkage between hydrological models and decision support systems using geographic information systems. One of the most widely used tools for digital rainfall mapping is PRISM (parameter-elevation regressions on independent slopes model), which uses point data (rain gauge stations), a digital elevation model (DEM), and other spatial datasets to generate repeatable estimates of monthly and annual precipitation. In PRISM, rain gauge stations are assigned weights that account for climatically important factors besides elevation, and aspect and topographic exposure are simulated by dividing the terrain into topographic facets. The size of a facet, or the grid cell resolution, is determined by the density of rain gauge stations, and a 5 × 5 km grid cell is considered the lower limit under the conditions in Korea. The PRISM algorithms were implemented for a 270 m DEM of South Korea in a scripting language environment (Python), and the relevant weights for each 270 m grid cell were derived from the monthly data of 432 official rain gauge stations. For each grid cell, weighted monthly precipitation data from at least 5 nearby stations were regressed against elevation, and the selected linear regression equations combined with the 270 m DEM were used to generate a digital precipitation map of South Korea at 270 m resolution. Among the 1.25 million grid cells, precipitation estimates at 166 cells, where measurements were made by the Korea Water Corporation rain gauge network, were extracted and the monthly estimation errors were evaluated. An average 10% reduction in root mean square error (RMSE) was found for months with more than 100 mm of monthly precipitation, compared with the RMSE of the original 5 km PRISM estimates. This modified PRISM may be used for rainfall mapping in the rainy season (May to September) at much higher spatial resolution than the original PRISM without losing accuracy.
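
The core per-cell operation described above is a weighted linear regression of station precipitation on station elevation, evaluated at the cell's DEM elevation. A minimal sketch follows, assuming the nearby stations and their PRISM weights have already been selected; the weighting scheme itself is not reproduced and the names are illustrative.

```python
import numpy as np

def cell_precip_estimate(station_elev, station_precip, weights, cell_elev):
    """Estimate monthly precipitation at one grid cell by a weighted
    linear regression of station precipitation on station elevation.

    station_elev, station_precip, weights : arrays for >= 5 nearby stations
    cell_elev : DEM elevation of the target 270 m cell
    """
    # np.polyfit applies the weights to the residuals of the least-squares fit
    slope, intercept = np.polyfit(station_elev, station_precip, deg=1, w=weights)
    return slope * cell_elev + intercept
```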

Digitization of Adjectives that Describe Facial Complexion to Evaluate Various Expressions of Skin Tone in Korean (피부색을 표현하는 형용사들의 수치화를 통한 안색 평가법 연구)

  • Lee, Sun Hwa;Lee, Jung Ah;Park, Sun Mi;Kim, Younghee;Jang, Yoon Jung;Kim, Bora;Kim, Nam Soo;Moon, Tae Kee
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.43 no.4
    • /
    • pp.349-355
    • /
    • 2017
  • Skin tone is one of the key determinants of facial attractiveness. Most female customers are interested in choosing a skin color and improving their skin tone, and their needs have contributed to the expansion of cosmetic products on the market. Recently, cosmetic customers who want bright skin have also become interested in healthy and lively-looking skin. However, there has been no method to evaluate skin tone with complexion-describing adjectives (CDAs). Therefore, this study was conducted to find ways to objectify and digitize the CDAs. For the standard images selected from our database, the quasi-L* value was 65 for dark skin and 74 for bright skin. The colors of both images were adjusted by 30 panelists to match the following seven CDAs: pale, clear, radiant, lively, healthy, rosy, and dull. The quasi-L*, a*, and b* values were converted from the RGB values of the manipulated images. The differences between the quasi-L*, a*, and b* values of the standard images and of the manipulated images reflecting each CDA were statistically significant (p < 0.05). However, there was no statistical significance between the L* values of the dark and bright skin images modified in accordance with each CDA, nor between the quasi-a* values of dark and bright skin for the pale and clear CDAs. From the statistical analysis, the CDAs were observed to form three groups: (i) pale-clear-radiant, (ii) lively-healthy-rosy, and (iii) dull. We recognized that people share a similar perception of the CDAs. Based on the results of this study, we established a new standard method for sensory evaluation, which is otherwise difficult to carry out scientifically or objectively.
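
The paper's "quasi" L*, a*, b* values are derived from image RGB values by a conversion that the abstract does not specify; as a stand-in, the sketch below uses a standard sRGB-to-CIELAB conversion from scikit-image and a simple one-sample t-test to compare a standard image with CDA-adjusted images. This illustrates the general approach only and is not the authors' protocol; file names and variables are hypothetical.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2lab
from scipy import stats

def mean_lab(image_path):
    """Mean L*, a*, b* of an image (standard CIELAB, used here as a
    stand-in for the paper's 'quasi' values derived from RGB)."""
    rgb = io.imread(image_path) / 255.0        # float RGB in [0, 1]
    lab = rgb2lab(rgb)                         # H x W x 3 array of (L*, a*, b*)
    return lab.reshape(-1, 3).mean(axis=0)

# Hypothetical comparison: standard image vs. images adjusted for one CDA
# standard = mean_lab("standard_dark.png")
# adjusted = np.array([mean_lab(p) for p in cda_image_paths])   # one per panelist
# t, p = stats.ttest_1samp(adjusted[:, 0], popmean=standard[0]) # L* difference
```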

Development of Program for Renal Function Study with Quantification Analysis of Nuclear Medicine Image (핵의학 영상의 정량적 분석을 통한 신장기능 평가 프로그램 개발)

  • Song, Ju-Young;Lee, Hyoung-Koo;Suh, Tae-Suk;Choe, Bo-Young;Shinn, Kyung-Sub;Chung, Yong-An;Kim, Sung-Hoon;Chung, Soo-Kyo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.35 no.2
    • /
    • pp.89-99
    • /
    • 2001
  • Purpose: In this study, we developed a new software tool for the analysis of renal scintigraphy that can be modified more easily by users who need to study new clinical applications, and we evaluated the appropriateness of the results from our program. Materials and Methods: The analysis tool was programmed in IDL 5.2 and designed to run on a personal computer under Windows. To test the developed tool and study the appropriateness of the calculated glomerular filtration rate (GFR), 99mTc-DTPA was administered to 10 normal adults. To study the appropriateness of the calculated mean transit time (MTT), 99mTc-DTPA and 99mTc-MAG3 were administered to 11 normal adults and 22 kidneys were analyzed. All images were acquired with an ORBITOR, a Siemens gamma camera. Results: With the developed tool, we could display dynamic renal images and the time-activity curve (TAC) for each ROI and calculate clinical parameters of renal function. The results calculated by the developed tool did not differ statistically from those obtained with the Siemens application program (Tmax: p=0.68, relative renal function: p=1.0, GFR: p=0.25), so the developed program proved reasonable. The MTT calculation tool also proved reasonable through an evaluation of the influence of hydration status on MTT. Conclusion: We obtained reasonable clinical parameters for the evaluation of renal function with the software tool developed in this study. The developed tool could prove more practical than conventional commercial programs.
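
As an illustration of the kind of processing such a tool performs, the sketch below extracts a time-activity curve (TAC) from an ROI over the dynamic frames and reads off Tmax; the array layout and names are assumptions, and the paper's GFR and MTT calculations are not reproduced.

```python
import numpy as np

def time_activity_curve(frames, roi_mask):
    """Sum counts inside an ROI for every frame of a dynamic study.

    frames   : (n_frames, H, W) dynamic renal image array
    roi_mask : (H, W) boolean mask of the kidney ROI
    """
    return frames[:, roi_mask].sum(axis=1)

def t_max(tac, frame_times):
    """Time of peak activity (Tmax) from a TAC and the frame mid-times."""
    return frame_times[np.argmax(tac)]
```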


Increase of Tc-99m RBC SPECT Sensitivity for Small Liver Hemangioma using Ordered Subset Expectation Maximization Technique (Tc-99m RBC SPECT에서 Ordered Subset Expectation Maximization 기법을 이용한 작은 간 혈관종 진단 예민도의 향상)

  • Jeon, Tae-Joo;Bong, Jung-Kyun;Kim, Hee-Joung;Kim, Myung-Jin;Lee, Jong-Doo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.36 no.6
    • /
    • pp.344-356
    • /
    • 2002
  • Purpose: RBC blood pool SPECT has been used to diagnose focal liver lesions such as hemangioma owing to its high specificity, but low spatial resolution is a major limitation of this modality. Recently, ordered subset expectation maximization (OSEM) has been introduced for clinical tomographic image reconstruction. We compared this modified iterative reconstruction method, OSEM, with conventional filtered back projection (FBP) for imaging of liver hemangioma. Materials and Methods: Sixty-four projection data sets were acquired using a dual-head gamma camera in 28 lesions of 24 patients with cavernous hemangioma of the liver, and these raw data were transferred to a Linux-based personal computer. After converting the header files to Interfile format, OSEM was performed under various combinations of subsets (1, 2, 4, 8, 16, and 32) and iteration numbers (1, 2, 4, 8, and 16) to find the best setting for liver imaging, which in our investigation was 4 iterations and 16 subsets. All images were then processed by both FBP and OSEM, and three experts reviewed them blindly. Results: According to the blind review of the 28 lesions, OSEM images showed equal or better image quality than FBP in nearly all cases. Although there was no significant difference in the detection of large lesions greater than 3 cm, 5 lesions 1.5 to 3 cm in diameter were detected by OSEM only; both techniques failed to depict 4 small lesions less than 1.5 cm. Conclusion: OSEM provided better contrast and definition in the depiction of liver hemangioma as well as higher sensitivity in the detection of small lesions. Furthermore, this reconstruction method does not require a high-performance computer system or long reconstruction times; therefore, OSEM appears to be a good method for RBC blood pool SPECT in the diagnosis of liver hemangioma.
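
OSEM accelerates ML-EM by updating the image once per ordered subset of the projection data, so 4 iterations with 16 subsets give 64 image updates. A minimal dense-matrix sketch follows; the subset partitioning and matrix form are illustrative assumptions, not the clinical software used in the study.

```python
import numpy as np

def osem(A, y, n_subsets=16, n_iter=4, eps=1e-12):
    """Ordered subset expectation maximization with a dense system matrix.

    A : (n_bins, n_voxels) system matrix, y : (n_bins,) projection data
    """
    x = np.ones(A.shape[1])
    # Ordered subsets of projection bins (a simple contiguous split here)
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:                        # one sub-iteration per subset
            As, ys = A[idx], y[idx]
            sens = As.T @ np.ones(len(idx))        # subset sensitivity image
            ratio = ys / np.maximum(As @ x, eps)
            x *= (As.T @ ratio) / np.maximum(sens, eps)
    return x
```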

Time-Lapse Crosswell Seismic Study to Evaluate the Underground Cavity Filling (지하공동 충전효과 평가를 위한 시차 공대공 탄성파 토모그래피 연구)

  • Lee, Doo-Sung
    • Geophysics and Geophysical Exploration
    • /
    • v.1 no.1
    • /
    • pp.25-30
    • /
    • 1998
  • Time-lapse crosswell seismic data recorded before and after cavity filling showed that the filling increased the velocity at a known cavity zone in an old mine site in the Inchon area. The seismic response depicted on the tomogram, in conjunction with geologic data from drillings, implies that the cavity may be either small or filled with debris. In this study, I attempted to evaluate the filling effect by analyzing velocities measured from the time-lapse tomograms. The data, acquired with a downhole airgun and a 24-channel hydrophone system, revealed measurable amounts of source statics, and I present a methodology to estimate them. The procedure is: 1) examine the firing time of each source and remove the effect of irregular firing times, and 2) estimate the residual statics caused by inaccurate source positioning. The proposed multi-step inversion may reduce high-frequency numerical noise and enhance the resolution in the zone of interest, and with different starting models it successfully shows the subtle velocity changes in the small cavity zone. The inversion procedure is: 1) conduct an inversion using regular-sized cells and generate an image of the gross velocity structure by applying a 2-D median filter to the resulting tomogram, and 2) construct the starting velocity model by modifying the final velocity model from the first phase so that the zone of interest consists of smaller cells. The final velocity model developed from the baseline survey was used as the starting velocity model for the monitor inversion. Since a velocity change was expected only in the cavity zone, the number of model parameters in the monitor inversion can be significantly reduced by fixing the model outside the cavity zone to the baseline model.
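
The bridge between the two inversion phases is the 2-D median filter applied to the first-phase tomogram to obtain a gross velocity structure. A minimal SciPy sketch is shown below; the filter size is an assumption and the inversion itself is not reproduced.

```python
import numpy as np
from scipy.ndimage import median_filter

def gross_velocity_model(tomogram, filter_size=5):
    """Smooth the coarse-cell tomogram with a 2-D median filter to obtain a
    gross velocity structure for the next inversion phase (suppresses
    high-frequency numerical noise)."""
    return median_filter(tomogram, size=filter_size)

# coarse = first-phase inversion result on regular-sized cells
# starting_model = gross_velocity_model(coarse)  # then refine the cavity zone with smaller cells
```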


Derivation of Green Coverage Ratio Based on Deep Learning Using MAV and UAV Aerial Images (유·무인 항공영상을 이용한 심층학습 기반 녹피율 산정)

  • Han, Seungyeon;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1757-1766
    • /
    • 2021
  • The green coverage ratio is the ratio of green-covered area to land area, and it is used as a practical urban greening index. It is usually calculated from the land cover map, but the low spatial resolution and inconsistent production cycle of the land cover map make it difficult to calculate the correct green coverage area and to analyze green coverage precisely. Therefore, this study proposes a new method to calculate the green coverage area using aerial images and deep neural networks. The green coverage ratio can be calculated quickly from the manned aerial images that local governments acquire every year for various purposes, but precise analysis is difficult because image properties such as acquisition date, resolution, and sensors cannot be selected or modified. This limitation can be supplemented by an unmanned aerial vehicle, which can carry various sensors and acquire high-resolution images through low-altitude flight. In this study, we proposed a method to calculate the green coverage ratio from manned or unmanned aerial images and verified it experimentally. Aerial images enable precise analysis thanks to their high resolution and relatively constant acquisition cycles, and deep learning can automatically detect the green coverage area in them. As a result, the green coverage ratio could be calculated with high accuracy from both types of aerial images for all green types. However, the ratio calculated from manned aerial images had limitations in complex environments; the unmanned aerial images used to compensate for this allowed the green coverage ratio to be calculated with high accuracy even in complex environments, and additional band images enabled more precise detection of green areas. In the future, the green coverage ratio is expected to be calculated effectively by using newly acquired unmanned aerial imagery to supplement existing manned aerial imagery.
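
Once the network has segmented green coverage in an aerial image, the ratio itself is a simple pixel count. A minimal sketch, assuming a binary segmentation mask and an optional mask of the assessed land area (names are illustrative):

```python
import numpy as np

def green_coverage_ratio(green_mask, land_mask=None):
    """Green coverage ratio = green-covered area / land area.

    green_mask : (H, W) boolean mask from the segmentation network
    land_mask  : optional (H, W) boolean mask of the assessed land area
                 (defaults to the whole image footprint)
    """
    if land_mask is None:
        land_mask = np.ones_like(green_mask, dtype=bool)
    return np.count_nonzero(green_mask & land_mask) / np.count_nonzero(land_mask)
```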

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • At one time, anomaly detection was dominated by methods that determined whether an abnormality existed based on statistics derived from the data. This was possible because data were simple in the past, so classical statistical methods worked effectively. However, as data characteristics have become more complex in the era of big data, it has become more difficult to analyze and predict the data generated throughout industry accurately in the conventional way. Supervised learning algorithms such as SVM and decision trees have therefore been used, but a supervised model can predict test data accurately only when the class distribution is balanced, whereas most data generated in industry have imbalanced classes, so its predictions are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Thomas et al. (2017), is a model built from convolutional neural networks that performs anomaly detection on medical images. In contrast, anomaly detection for sequence data using generative adversarial networks has received far less study than image data. Li et al. (2018) proposed a model based on LSTM, a type of recurrent neural network, to classify abnormalities in numerical sequence data, but it has not been applied to categorical sequence data, nor has the feature matching method of Salimans et al. (2016). This suggests there is considerable room for studies on anomaly classification of sequence data with generative adversarial networks. To learn sequence data, our generative adversarial network is built from LSTMs: the generator is a two-layer stacked LSTM with 32-dimensional and 64-dimensional hidden unit layers, and the discriminator is an LSTM with a 64-dimensional hidden unit layer. Existing work on anomaly detection for sequence data derives anomaly scores from the entropy of the probability of the actual data, but in this paper, as mentioned earlier, anomaly scores were derived using the feature matching technique. In addition, the process of optimizing the latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more accurate than the autoencoder in all experiments in terms of precision and was approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also performed better than the autoencoder, because it can learn the data distribution from real categorical sequence data and is not affected by a single normal sample, whereas the autoencoder is. The robustness test showed that the accuracy of the autoencoder was 92% and that of the generative adversarial network 96%, while the sensitivity of the autoencoder was 40% and that of the generative adversarial network 51%.
In this paper, experiments were also conducted to show how much performance changes with different structures for optimizing the latent variables. As a result, sensitivity improved by about 1%. These results suggest a new perspective on optimizing latent variables, which had previously received relatively little attention.
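
A minimal PyTorch sketch of the architecture described above: a two-layer stacked LSTM generator with 32- and 64-dimensional hidden units and an LSTM discriminator with 64-dimensional hidden units, whose features are used for a feature matching anomaly score. The token vocabulary, embedding size, and the training and latent-optimization loops are assumptions and are not shown; this is an illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Two stacked LSTM layers (32-dim then 64-dim hidden units) mapping a
    latent sequence to per-step token logits for a categorical sequence."""
    def __init__(self, latent_dim=16, n_tokens=100):
        super().__init__()
        self.lstm1 = nn.LSTM(latent_dim, 32, batch_first=True)
        self.lstm2 = nn.LSTM(32, 64, batch_first=True)
        self.out = nn.Linear(64, n_tokens)

    def forward(self, z):                      # z: (batch, seq_len, latent_dim)
        h, _ = self.lstm1(z)
        h, _ = self.lstm2(h)
        return self.out(h)                     # token logits per time step

class Discriminator(nn.Module):
    """Single LSTM with 64-dim hidden units; the last hidden state serves as
    the feature vector used for the feature matching anomaly score."""
    def __init__(self, n_tokens=100, embed_dim=32):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, embed_dim)
        self.lstm = nn.LSTM(embed_dim, 64, batch_first=True)
        self.clf = nn.Linear(64, 1)

    def forward(self, tokens):                 # tokens: (batch, seq_len) int64
        h, _ = self.lstm(self.embed(tokens))
        features = h[:, -1, :]                 # feature-matching representation
        return self.clf(features), features

def anomaly_score(features_real, features_gen):
    """Feature matching score: distance between discriminator features of a
    real sequence and of its generated reconstruction."""
    return torch.norm(features_real - features_gen, dim=-1)
```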

Emulsifying Properties of Gelatinized Octenyl Succinic Anhydride Modified Starch from Barley (호화 옥테닐 호박산 전분의 유화 특성)

  • Kim, San-Seong;Kim, Sun-Hyung;Lee, Eui-Seok;Lee, Ki-Teak;Hong, Soon-Taek
    • Journal of the Korean Applied Science and Technology
    • /
    • v.36 no.1
    • /
    • pp.174-188
    • /
    • 2019
  • The present study was carried out to investigate the emulsifying properties of heat-treated octenyl succinic anhydride (OSA) starch and the interfacial structure at the oil droplet surface in emulsions stabilized by heat-treated OSA starch. First, aqueous suspensions of OSA starch were heated at 80 °C for 30 min. Oil-in-water emulsions were then prepared with the heat-treated OSA starch suspension as the sole emulsifier, and their physicochemical properties, such as fat globule size, surface load, zeta potential, dispersion stability, and confocal laser scanning microscopy (CLSM) images, were determined. It was found that fat globule size decreased as the concentration of OSA starch in the emulsions increased, reaching a lower limit (d32 = 0.31 μm) at ≥ 0.2 wt%. The surface load increased steadily with increasing OSA starch concentration, possibly forming multiple layers. In addition, fat globule sizes were influenced by pH: they increased under acidic conditions, and these results were interpreted in view of the change in zeta potential. Dispersion stability measured by Turbiscan showed that the emulsions were more unstable under acidic conditions. Heat-treated OSA starch was found to adsorb at the oil droplet surface as a membrane-like layer (not as starch granules), which may indicate that the stabilizing mechanism of OSA starch emulsions is steric repulsion.