• Title/Summary/Keyword: Preprocessing Methods


An Accelerated Approach to Dose Distribution Calculation in Inverse Treatment Planning for Brachytherapy (근접 치료에서 역방향 치료 계획의 선량분포 계산 가속화 방법)

  • Byungdu Jo
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.5
    • /
    • pp.633-640
    • /
    • 2023
  • With the recent development of static and dynamic modulated brachytherapy methods, which use radiation shielding to modulate the dose distribution, the number of parameters and the amount of data required for dose calculation in inverse treatment planning and in treatment-plan optimization algorithms suited to the new directional, intensity-modulated brachytherapy are increasing. Although intensity-modulated brachytherapy enables accurate dose delivery, the increased number of parameters and amount of data lengthens the elapsed time required for dose calculation. In this study, a GPU-based, CUDA-accelerated dose calculation algorithm was constructed to counter this increase in calculation time. The calculation was accelerated by parallelizing both the construction of the system matrix of the volume of interest and the dose calculation itself. All algorithms were run in the same computing environment, with an Intel (3.7 GHz, 6-core) CPU and a single NVIDIA GTX 1080 Ti graphics card, and performance was evaluated by measuring only the dose calculation time, excluding the additional time required for loading data from disk and for preprocessing operations. The accelerated algorithm reduced the dose calculation time by a factor of about 30 compared with the CPU-only calculation. The accelerated dose calculation algorithm can be expected to speed up treatment planning when new plans must be created to account for daily variations in applicator position, as in adaptive radiotherapy, or when the dose calculation must track changing parameters, as in dynamically modulated brachytherapy.
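The computation this abstract parallelizes can be pictured as a matrix-vector product: each voxel's dose is a weighted sum of contributions from the source dwell positions. Below is a minimal NumPy sketch of that idea only; the matrix values and array shapes are hypothetical toy numbers, not the paper's system matrix or CUDA kernels.

```python
import numpy as np

def dose_distribution(system_matrix: np.ndarray, dwell_times: np.ndarray) -> np.ndarray:
    """Dose at each voxel = system matrix (voxels x dwell positions) applied
    to the per-position dwell times; on a GPU the same product is evaluated
    in parallel, e.g. one thread per voxel."""
    return system_matrix @ dwell_times

# Toy example: 4 voxels, 3 source dwell positions (hypothetical values).
A = np.array([[1.0, 0.5, 0.1],
              [0.2, 1.0, 0.3],
              [0.1, 0.4, 1.0],
              [0.0, 0.2, 0.5]])
t = np.array([2.0, 1.0, 4.0])   # dwell times per source position
d = dose_distribution(A, t)     # dose per voxel
```

A drop-in GPU array library such as CuPy could evaluate the same product on the graphics card, which is one simple way to realize the parallelization the abstract describes.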

Analysis of Research Trends Related to drug Repositioning Based on Machine Learning (머신러닝 기반의 신약 재창출 관련 연구 동향 분석)

  • So Yeon Yoo;Gyoo Gun Lim
    • Information Systems Review
    • /
    • v.24 no.1
    • /
    • pp.21-37
    • /
    • 2022
  • Drug repositioning, one of the methods of developing new drugs, is a useful way to discover new indications by allowing drugs that have already been approved for use in people to be used for other purposes. Recently, with the development of machine learning technology, cases of analyzing vast amounts of biological information and using it to develop new drugs are increasing. Applying machine learning technology to drug repositioning will help to find effective treatments quickly. Currently, the world is struggling with a new disease caused by a coronavirus (COVID-19), a severe acute respiratory syndrome. Drug repositioning, which repurposes drugs that have already been clinically approved, could be an alternative route to therapeutics for COVID-19 patients. This study examines research trends in the field of drug repositioning using machine learning techniques. A total of 4,821 papers were collected from PubMed with the keyword 'Drug Repositioning' using web scraping. After data preprocessing, frequency analysis, LDA-based topic modeling, random forest classification analysis, and prediction performance evaluation were performed on 4,419 papers. Associated words were analyzed based on a Word2vec model; after PCA dimension reduction, K-Means clustering generated labels, and the structure of the literature was visualized using the t-SNE algorithm. Hierarchical clustering was applied to the LDA results and visualized as a heat map. This study identified the research topics related to drug repositioning and presented a method to derive and visualize meaningful topics from a large body of literature using machine learning algorithms. It is expected to serve as basic data for establishing research or development strategies in the field of drug repositioning.
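Part of the pipeline described above (LDA topic modeling, PCA dimension reduction, K-Means labeling) can be sketched with scikit-learn. The toy corpus, parameter values, and library choice are illustrative assumptions, not the study's actual setup, and the Word2vec/t-SNE steps are omitted.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation, PCA
from sklearn.cluster import KMeans

# Hypothetical stand-in "abstracts".
docs = [
    "drug repositioning machine learning prediction",
    "covid treatment drug repurposing candidates",
    "topic modeling literature text mining",
    "lda topic model document clustering",
]

tf = CountVectorizer().fit_transform(docs)  # term-frequency matrix
# LDA: each document becomes a distribution over latent topics.
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(tf)
# PCA: reduce the topic space before clustering.
coords = PCA(n_components=2, random_state=0).fit_transform(topics)
# K-Means: cluster assignments serve as labels for the literature map.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
```

In the study's setting the same steps would run over thousands of preprocessed PubMed abstracts rather than four toy strings.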

A Study on Developing a Web Care Model for Audiobook Platforms Using Machine Learning (머신러닝을 이용한 오디오북 플랫폼 기반의 웹케어 모형 구축에 관한 연구)

  • Dahoon Jeong;Minhyuk Lee;Taewon Lee
    • Information Systems Review
    • /
    • v.26 no.1
    • /
    • pp.337-353
    • /
    • 2024
  • The purpose of this study is to investigate the relationship between consumer reviews and managerial responses, aiming to explore the necessity of webcare for efficiently managing consumer reviews. We intend to propose a methodology for effective webcare and to construct a webcare model using machine learning techniques based on an audiobook platform. In this study, we selected four audiobook platforms and conducted data collection and preprocessing for consumer reviews and managerial responses. We utilized techniques such as topic modeling, topic inconsistency analysis, and DBSCAN, along with various machine learning methods for analysis. The experimental results yielded significant findings in clustering managerial responses and predicting responses to consumer reviews, proposing an efficient methodology considering resource constraints and costs. This research provides academic insights by constructing a webcare model through machine learning techniques and practical implications by suggesting an efficient methodology, considering the limited resources and personnel of companies. The proposed webcare model in this study can be utilized as strategic foundational data for consumer engagement and providing useful information, offering both personalized responses and standardized managerial responses.
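One technique the abstract names, DBSCAN, groups densely packed points and flags sparse ones as noise without fixing the number of clusters in advance. A minimal scikit-learn sketch follows; the 2-D "response embeddings" are hypothetical toy points, not the study's data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical 2-D embeddings of managerial responses: two dense groups
# (templated replies) and one isolated point (a one-off bespoke reply).
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
              [9.0, 0.0]])

# Points within eps of at least min_samples neighbors form clusters;
# everything else is labeled -1 (noise).
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(X)
```

For a webcare model, the recovered clusters could correspond to reusable standardized responses, while noise points would indicate reviews needing a personalized reply.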

Quantification of Myocardial Blood Flow using Dynamic N-13 Ammonia PET and Factor Analysis (N-13 암모니아 PET 동적영상과 인자분석을 이용한 심근 혈류량 정량화)

  • Choi, Yong;Kim, Joon-Young;Im, Ki-Chun;Kim, Jong-Ho;Woo, Sang-Keun;Lee, Kyung-Han;Kim, Sang-Eun;Choe, Yearn-Seong;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.33 no.3
    • /
    • pp.316-326
    • /
    • 1999
  • Purpose: We evaluated the feasibility of extracting pure left ventricular blood pool and myocardial time-activity curves (TACs) and of generating factor images from human dynamic N-13 ammonia PET using factor analysis. The myocardial blood flow (MBF) estimates obtained with factor analysis were compared with those obtained with the user-drawn region-of-interest (ROI) method. Materials and Methods: Stress and rest N-13 ammonia cardiac PET images were acquired for 23 min in 5 patients with coronary artery disease using a GE Advance tomograph. Factor analysis generated physiological TACs and factor images using the normalized TACs from each dixel. Four steps were involved in this algorithm: (a) data preprocessing; (b) principal component analysis; (c) oblique rotation with positivity constraints; (d) factor image computation. Areas under the curves and MBF estimates obtained with the two-compartment N-13 ammonia model were used to validate the accuracy of the physiological TACs generated by factor analysis. The MBF estimated by factor analysis was compared with the values estimated using the ROI method. Results: MBF values obtained by factor analysis were linearly correlated with MBF obtained by the ROI method (slope = 0.84, r = 0.91). Left ventricular blood pool TACs obtained by the two methods agreed well (area-under-curve ratios: 1.02 (0–1 min), 0.98 (0–2 min), 0.86 (1–2 min)). Conclusion: The results of this study demonstrate that MBF can be measured accurately and noninvasively with dynamic N-13 ammonia PET imaging and factor analysis. The method is simple and accurate, and can measure MBF without blood sampling, ROI definition, or spillover correction.

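Step (b) of the four-step algorithm, principal component analysis of the normalized dixel TACs, can be sketched in plain NumPy via the SVD. The two toy kinetics below are hypothetical stand-ins for blood-pool and myocardial curves, and the rotation/positivity steps (c) and (d) are not shown.

```python
import numpy as np

def principal_components(tacs: np.ndarray, n: int) -> np.ndarray:
    """Step (b): PCA of the dixel time-activity curves. Rows of the result
    are orthonormal basis TACs spanning the main kinetic variation."""
    centered = tacs - tacs.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n]

# Toy data: 6 dixels x 5 time frames, each dixel a random mixture of a
# hypothetical blood-pool curve and a hypothetical tissue curve.
rng = np.random.default_rng(0)
blood = np.array([5.0, 3.0, 1.5, 0.8, 0.4])    # fast washout
tissue = np.array([0.5, 1.5, 2.5, 3.0, 3.2])   # slow uptake
mix = rng.uniform(0, 1, size=(6, 1))
tacs = mix * blood + (1 - mix) * tissue
basis = principal_components(tacs, 2)
```

In the full algorithm, an oblique rotation with positivity constraints would then turn these orthogonal components into physiologically meaningful (non-negative) factor TACs.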

Robust Eye Localization using Multi-Scale Gabor Feature Vectors (다중 해상도 가버 특징 벡터를 이용한 강인한 눈 검출)

  • Kim, Sang-Hoon;Jung, Sou-Hwan;Cho, Seong-Won;Chung, Sun-Tae
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.1
    • /
    • pp.25-36
    • /
    • 2008
  • Eye localization means localization of the centers of the pupils, and is necessary for face recognition and related applications. Most eye localization methods reported so far still need improvement in robustness as well as precision for successful application. In this paper, we propose a robust eye localization method using multi-scale Gabor feature vectors without a large computational burden. Eye localization using Gabor feature vectors is already employed in methods such as EBGM, but the approach used in EBGM is known not to be robust to initial values, illumination, and pose, and may need an extensive search range to achieve the required performance, which can cause a large computational burden. The proposed method takes a multi-scale approach. It first localizes the eyes in a low-resolution face image using the Gabor jet similarity between the Gabor feature vector at estimated initial eye coordinates and the Gabor feature vectors in the eye model of the corresponding scale. It then localizes the eyes in the next-scale face image in the same way, with initial eye points estimated from the coordinates found at the lower resolution. Repeating this process recursively, the proposed method finally localizes the eyes in the original-resolution face image. The method also applies an effective illumination normalization in the preprocessing stage, which makes the multi-scale approach more robust to illumination and enhances the eye detection success rate. Experimental results verify that the proposed eye localization method improves the precision rate without large computational overhead compared with previously reported eye localization methods, and is robust to variations of pose and illumination.
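The Gabor jet similarity that drives this coarse-to-fine search can be sketched as below. The kernel parameters and the magnitude-only cosine similarity are simplifying assumptions (EBGM-style jets also carry phase), so this is an illustration of the idea, not the paper's implementation.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Complex Gabor kernel: a plane wave windowed by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * rot / wavelength)

def gabor_jet(patch, wavelengths=(4, 8), orientations=4):
    """Jet = magnitudes of Gabor responses at several scales/orientations,
    all centered on the same image point (here, the patch center)."""
    mags = []
    for lam in wavelengths:
        for k in range(orientations):
            kern = gabor_kernel(patch.shape[0], lam, k * np.pi / orientations, lam / 2)
            mags.append(abs(np.sum(patch * kern)))
    return np.array(mags)

def jet_similarity(j1, j2):
    """Normalized dot product of jet magnitudes; 1.0 means identical jets."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))

rng = np.random.default_rng(0)
patch = rng.normal(size=(17, 17))
jet = gabor_jet(patch)
```

In the multi-scale search, a candidate eye point is scored by the similarity between its jet and the eye-model jet of the corresponding scale, and the best-scoring point seeds the search at the next finer resolution.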

Development of a Small Animal Positron Emission Tomography Using Dual-layer Phoswich Detector and Position Sensitive Photomultiplier Tube: Preliminary Results (두층 섬광결정과 위치민감형광전자증배관을 이용한 소동물 양전자방출단층촬영기 개발: 기초실험 결과)

  • Jeong, Myung-Hwan;Choi, Yong;Chung, Yong-Hyun;Song, Tae-Yong;Jung, Jin-Ho;Hong, Key-Jo;Min, Byung-Jun;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.5
    • /
    • pp.338-343
    • /
    • 2004
  • Purpose: The purpose of this study was to develop a small animal PET scanner using a dual-layer phoswich detector to minimize the parallax error that degrades spatial resolution at the outer part of the field of view (FOV). Materials and Methods: The simulation tool GATE (Geant4 Application for Tomographic Emission) was used to derive optimal parameters for the small PET scanner, which was then developed with those parameters. Lutetium oxyorthosilicate (LSO) and lutetium-yttrium aluminate-perovskite (LuYAP) were used to construct the dual-layer phoswich crystal. 8×8 arrays of LSO and LuYAP pixels, 2 mm × 2 mm × 8 mm in size, were coupled to a 64-channel position-sensitive photomultiplier tube. The system consisted of 16 detector modules arranged in one ring (ring inner diameter 10 cm, FOV 8 cm). Data from the phoswich detector modules were fed into an ADC board in the data acquisition and preprocessing PC via sockets, a decoder block, an FPGA board, and a bus board, which were linked to the master PC that stored the event data on hard disk. Results: In a preliminary test of the system, reconstructed images were obtained using a pair of detectors, and sensitivity and spatial resolution were measured. Spatial resolution was 2.3 mm FWHM and sensitivity was 10.9 cps/μCi at the center of the FOV. Conclusion: The radioactivity distribution patterns were accurately represented in the sinograms and images obtained with a pair of detectors. These preliminary results indicate that development of a high-performance small animal PET scanner is promising.

Review of Remote Sensing Studies on Groundwater Resources (원격탐사의 지하수 수자원 적용 사례 고찰)

  • Lee, Jeongho
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.5_3
    • /
    • pp.855-866
    • /
    • 2017
  • Several research cases using remote sensing methods to analyze changes in the storage and dynamics of groundwater aquifers are reviewed in this paper. The status of groundwater storage in an area of regional scale can be qualitatively inferred from geological features, surface water altimetry and topography, the distribution of vegetation, and the difference between precipitation and evapotranspiration. These qualitative indicators can be measured by geological lineament analysis, airborne magnetic survey, DEM analysis, LAI and NDVI calculation, and surface energy balance modeling. GRACE and InSAR, in particular, have received remarkable attention as direct applications of satellite data to the quantification of groundwater storage and dynamics. GRACE, composed of twin satellites carrying acceleration sensors, can detect global or regional microgravity changes and transform them into changes in the mass of water on and inside the Earth. Numerous studies of groundwater storage using GRACE sensor data have been performed, with several merits: (1) no ground sensor data are required, (2) auxiliary data for quantifying groundwater can be obtained entirely from other satellite sensors, and (3) the algorithms for processing the measured data have been continuously improved by the designated data management center. The limitations of GRACE for groundwater storage measurement are: (1) in areas of small scale, quantification of groundwater mass change may be inaccurate owing to the detection limit of the acceleration sensor, and (2) results can be overestimated when sensor data are combined with field survey data. InSAR can quantify the dynamic characteristics of an aquifer by measuring vertical micro-displacement, using the linear proportional relation between groundwater head and vertical surface movement. However, InSAR is currently constrained to arid or semi-arid areas with simple land cover, and is hard to apply where loss of coherence with the surface is anticipated. Development of GRACE and InSAR sensor data preprocessing algorithms optimized for the topography, geology, and natural conditions of Korea should be prioritized in order to regionally quantify the mass change and dynamics of Korea's groundwater resources.

Voxel-based Morphometry (VBM) Based Assessment of Gray Matter Loss in Medial Temporal Lobe Epilepsy: Comparison with FDG PET (화소기반 형태분석 방법을 이용한 내측측두엽 간질환자의 회백질 부피/농도 감소평가; FDG PET과의 비교)

  • Kang, Hye-Jin;Lee, Ho-Young;Lee, Jae-Sung;Kang, Eun-Joo;Lee, Sang-Gun;Chang, Kee-Hyun;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.1
    • /
    • pp.30-40
    • /
    • 2004
  • Purpose: The aims of this study were to find brain regions in which gray matter volume was reduced and to show the capability of voxel-based morphometry (VBM) analysis for lateralizing epileptogenic zones in medial temporal lobe epilepsy (mTLE). The findings were compared with fluorodeoxyglucose positron emission tomography (FDG PET). Materials and Methods: MR T1-weighted images of 12 left-mTLE and 11 right-mTLE patients were compared with those of 37 normal controls. Images were transformed to standard MNI space and averaged to create a study-specific brain template. Each image was normalized to this local template and brain tissues were segmented. Modulated VBM analysis was performed to observe gray matter volume change, and the gray matter was smoothed with a Gaussian kernel. After this preprocessing, statistical analysis was performed using statistical parametric mapping software (SPM99). FDG PET images were compared with those of 22 normal controls using SPM. Results: Gray matter volume was significantly reduced in the left amygdala and hippocampus in left mTLE; the volumes of the cerebellum, anterior cingulate, and fusiform gyrus on both sides and of the left insula were also reduced. In right mTLE, volume was significantly reduced in the right hippocampus. In contrast, FDG uptake was decreased over broad areas of the left or right temporal lobe in left and right TLE, respectively. Conclusions: Gray matter loss was found in the ipsilateral hippocampus by modulated VBM analysis in medial temporal lobe epilepsy. This VBM analysis may be useful for lateralizing the epileptogenic zones in medial temporal lobe epilepsy, while SPM analysis of FDG PET disclosed hypometabolic epileptogenic zones.
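At its core, the VBM/SPM comparison above is a mass-univariate test: one statistical test at every voxel of the smoothed, normalized gray-matter maps. A toy NumPy/SciPy sketch of that idea follows; the subject counts, noise levels, and the injected "lesion" voxel are all hypothetical.

```python
import numpy as np
from scipy import stats

def voxelwise_ttest(patients: np.ndarray, controls: np.ndarray):
    """Independent two-sample t-test at every voxel; inputs are
    (subjects x voxels) arrays of gray-matter values."""
    t, p = stats.ttest_ind(patients, controls, axis=0)
    return t, p

# Toy data: 10 voxels, with gray-matter loss injected at voxel 3.
rng = np.random.default_rng(1)
controls = rng.normal(1.0, 0.05, size=(12, 10))
patients = rng.normal(1.0, 0.05, size=(12, 10))
patients[:, 3] -= 0.3          # simulated hippocampal volume loss
t, p = voxelwise_ttest(patients, controls)
```

SPM additionally handles smoothing, covariates, and multiple-comparison correction across the hundreds of thousands of voxels in a real gray-matter map, none of which is shown here.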

Report on the Effects of Lipemic Specimens in the Anti-ds DNA Antibody Test (Anti-ds DNA 항체 검사 시 Lipemic 검체의 영향에 관한 보고)

  • Cheon, Jun Hong;Kim, Whe Jung;Kim, Sung Ho;Moon, Hyoung Ho;Yoo, Seon Hee
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.153-157
    • /
    • 2014
  • Purpose: SLE (systemic lupus erythematosus) is an inflammatory autoimmune disease characterized by various autoantibodies. Detection of anti-double-stranded DNA (Anti-ds DNA) antibody is important in the diagnosis of SLE and is included in the American College of Rheumatology (ACR) diagnostic criteria for SLE. A correlation between SLE disease activity and the Anti-ds DNA antibody level has also been reported, so quantitative Anti-ds DNA antibody testing is very useful for follow-up before and after SLE treatment. In the Farr assay used for this test, 125I-labeled ds-DNA binds Anti-ds DNA antibodies in serum, the complexes are precipitated with ammonium sulfate and centrifuged, and the precipitate is counted after the supernatant is aspirated. With a lipemic specimen, however, the precipitate does not form properly and may be aspirated together with the supernatant. To address this problem, the influence of the degree of lipemia was evaluated. Materials and Methods: From September 2012 to February 2013, we selected lipemic samples (n=81) from specimens submitted for Anti-ds DNA antibody testing. The lipemic samples were pre-treated (high-speed centrifugation: 14,000 rpm for 5 min) using a micro-centrifuge (Eppendorf Model 5415D). The lipemic specimens and the pre-treated samples were tested for Anti-ds DNA antibody at the same time (Anti-ds DNA kit, Trinity Biotech, Ireland). Statistical analyses used Pearson's correlation coefficient, regression, the paired t-test, and Difference (%). Results: Experimental group 1 (lipemic specimen Anti-ds DNA Ab concentration ≤ 7 IU/mL) gave y = 0.368x + 4.732, R² = 0.023, a Pearson's correlation coefficient of 0.154, a paired t-test P = 0.003, and a mean Difference (%) of 65.7, showing a statistically significant difference. Experimental group 2 (lipemic specimen Anti-ds DNA Ab concentration ≥ 8 IU/mL) gave y = 0.983x + 0.298, R² = 0.994, a Pearson's correlation coefficient of 0.997, a paired t-test P = 0.181, and a mean Difference (%) of -5.53, showing no statistically significant difference. Conclusion: For lipemic samples with low Anti-ds DNA Ab concentration (2.5–7 IU/mL), the results with and without pre-treatment (high-speed centrifugation: 14,000 rpm for 5 min) differed significantly. Anti-ds DNA is one of the primary autoantibodies present in patients with SLE and remains an important diagnostic test for SLE. We therefore recommend preprocessing (high-speed centrifugation: 14,000 rpm for 5 min) to exclude the influence of lipemic specimens.

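The statistics reported above (Pearson's r, regression line, paired t-test, Difference (%)) can be reproduced on toy data with SciPy. The paired values below are hypothetical illustrations, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical paired results (IU/mL): each lipemic specimen measured
# as-is and again after high-speed centrifugation pre-treatment.
lipemic    = np.array([3.1, 4.0, 2.8, 5.2, 6.0, 3.5])
pretreated = np.array([5.9, 7.4, 5.1, 9.8, 11.0, 6.3])

r, _ = stats.pearsonr(lipemic, pretreated)              # agreement between methods
slope, intercept = np.polyfit(lipemic, pretreated, 1)   # regression y = ax + b
t_stat, p_value = stats.ttest_rel(lipemic, pretreated)  # paired t-test
diff_pct = 100 * (pretreated - lipemic) / pretreated    # Difference (%)
```

With toy pairs like these, a high r with a slope far from 1 and a small paired-t P value is exactly the pattern the study uses to argue that pre-treatment changes the measured concentration systematically.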

Enhancement of Inter-Image Statistical Correlation for Accurate Multi-Sensor Image Registration (정밀한 다중센서 영상정합을 위한 통계적 상관성의 증대기법)

  • Kim, Kyoung-Soo;Lee, Jin-Hak;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.4 s.304
    • /
    • pp.1-12
    • /
    • 2005
  • Image registration is the process of establishing the spatial correspondence between images of the same scene acquired at different viewpoints, at different times, or by different sensors. This paper presents a new algorithm for robust registration of images acquired by sensors of different modalities, here EO (electro-optic) and IR (infrared). Two approaches, feature-based and intensity-based, are usually possible for image registration. In the former, selection of accurate common features is crucial for high performance, but features in the EO image are often not the same as those in the IR image; hence this approach is inadequate for registering EO/IR images. In the latter, normalized mutual information (NMI) has been widely used as a similarity measure due to its high accuracy and robustness, and NMI-based image registration methods assume that the statistical correlation between the two images is global. Unfortunately, since EO and IR images often do not satisfy this assumption, registration accuracy is not high enough for some applications. In this paper, we propose a two-stage NMI-based registration method based on an analysis of the statistical correlation between EO/IR images. In the first stage, for robust registration, we propose two preprocessing schemes: extraction of statistically correlated regions (ESCR) and enhancement of statistical correlation by filtering (ESCF). For each image, ESCR automatically extracts the regions that are highly correlated with the corresponding regions in the other image, and ESCF adaptively filters each image to enhance the statistical correlation between them. In the second stage, the two output images are registered using an NMI-based algorithm. The proposed method provides promising results for various EO/IR sensor image pairs in terms of accuracy, robustness, and speed.
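The NMI similarity measure at the heart of the second stage can be estimated from a joint intensity histogram. A minimal NumPy sketch follows; the synthetic "EO"/"IR" pair below (one image a nonlinear intensity remapping of the other) is a hypothetical stand-in for real multi-sensor data.

```python
import numpy as np

def normalized_mutual_information(img1, img2, bins=32):
    """NMI(A,B) = (H(A) + H(B)) / H(A,B), estimated from the joint
    intensity histogram; higher values indicate better alignment."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy)

# Toy check: an image scores higher NMI against a contrast-remapped copy
# of itself (same structure, different "modality") than against noise.
rng = np.random.default_rng(0)
eo = rng.uniform(0, 1, size=(64, 64))
ir = 1.0 - eo**2                             # hypothetical intensity mapping
noise = rng.uniform(0, 1, size=(64, 64))
nmi_matched = normalized_mutual_information(eo, ir)
nmi_unmatched = normalized_mutual_information(eo, noise)
```

This captures why NMI suits EO/IR pairs: it rewards a consistent statistical relationship between intensities without assuming the relationship is linear, which is exactly the property the paper's ESCR/ESCF preprocessing tries to strengthen.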