• Title/Summary/Keyword: Accuracy Improvement


A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.459-467
    • /
    • 2009
  • Purpose: The maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of the iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and NVIDIA's CUDA (compute unified device architecture) technology, the projection and backprojection steps of the ML-EM algorithm were parallelized. The computation times for the projection, the errors between measured and estimated data, and the backprojection within one iteration were measured. The total time included the latency of data transfer between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1,024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement of about 135-fold was caused by a slowdown of the CPU-based computation after a certain number of iterations. In contrast, the GPU-based computation showed very little variation in the time per iteration, owing to the use of shared memory. Conclusion: The GPU-based parallel computation significantly improved the computing speed and stability of ML-EM. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
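The projection/ratio/backprojection cycle this abstract parallelizes can be sketched in a few lines. This is a minimal pure-Python stand-in, not the paper's CUDA implementation: the 3-bin by 2-pixel system matrix and noise-free data below are hypothetical, chosen only to show the multiplicative ML-EM update.

```python
def forward(A, x):
    """Projection: y_est = A @ x."""
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def backward(A, r):
    """Backprojection: A^T @ r."""
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(A[0]))]

def mlem(A, y, n_iter=32):
    x = [1.0] * len(A[0])                 # uniform initial image estimate
    sens = backward(A, [1.0] * len(A))    # sensitivity image A^T 1
    for _ in range(n_iter):
        y_est = forward(A, x)             # step 1: projection
        # step 2: ratio of measured to estimated data (guard against /0)
        ratio = [yi / max(ei, 1e-12) for yi, ei in zip(y, y_est)]
        corr = backward(A, ratio)         # step 3: backprojection of the ratio
        # multiplicative ML-EM update, normalized by the sensitivity
        x = [xj * cj / sj for xj, cj, sj in zip(x, corr, sens)]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # hypothetical 3-bin x 2-pixel system
y = forward(A, [2.0, 3.0])                # noise-free "measured" data
est = mlem(A, y, n_iter=64)               # converges toward the true [2.0, 3.0]
```

On the GPU, each detector bin of the projection and each pixel of the backprojection maps naturally to one thread, which is the parallelism the paper exploits.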

Generation of Sea Surface Temperature Products Considering Cloud Effects Using NOAA/AVHRR Data in the TeraScan System: Case Study for May Data (TeraScan시스템에서 NOAA/AVHRR 해수면온도 산출시 구름 영향에 따른 신뢰도 부여 기법: 5월 자료 적용)

  • Yang, Sung-Soo;Yang, Chan-Su;Park, Kwang-Soon
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.13 no.3
    • /
    • pp.165-173
    • /
    • 2010
  • A cloud detection method is introduced to improve the reliability of NOAA/AVHRR Sea Surface Temperature (SST) data processed during the daytime and nighttime in the TeraScan system. In the daytime, channels 2 and 4 are used to detect cloud with three tests: spatial uniformity tests of brightness temperature (infrared channel 4) and channel 2 albedo, and a reflectivity threshold test for visible channel 2. The nighttime cloud detection tests are performed using channels 3 and 4, because channel 2 data are not available at night; this process includes a dual-channel brightness temperature difference (ch3 - ch4) test and an infrared channel brightness temperature threshold test. For a comparison of daytime and nighttime SST images, the two data sets used here were obtained at 0:28 (UTC) and 21:00 (UTC) on May 13, 2009. Six parameters were tested to understand the factors that affect cloud masking in and around the Korean Peninsula. In the daytime, the thresholds for ch2_max cover the range 3 through 8, while ch4_delta and ch2_delta are fixed at 5 and 2, respectively. At night, the threshold range of ch3_minus_ch4 is from -1 to 0, while ch4_delta and min_ch4_temp have fixed thresholds of 3.5 and 0, respectively. The resulting images acceptably represent the reliability of the SST according to the change of the cloud-masked area at each level. In future work, the accuracy of the SST will be verified, and an assimilation method for SST data should be tested to improve reliability, considering the atmospheric characteristics of the research area around the Korean Peninsula.
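The daytime tests described above combine a reflectivity threshold with spatial uniformity checks. The sketch below is illustrative only: the 3x3 window, the tiny grids, and the exact flagging rule are assumptions, while the parameter names (ch2_max, ch2_delta, ch4_delta) follow the abstract.

```python
def local_spread(grid, i, j):
    """Max minus min over the 3x3 neighbourhood (spatial uniformity test)."""
    vals = [grid[m][n]
            for m in range(max(0, i - 1), min(len(grid), i + 2))
            for n in range(max(0, j - 1), min(len(grid[0]), j + 2))]
    return max(vals) - min(vals)

def daytime_cloud_mask(ch2, ch4, ch2_max=5.0, ch2_delta=2.0, ch4_delta=5.0):
    """Flag a pixel cloudy if the visible albedo exceeds ch2_max, or the
    local spread of ch2 albedo / ch4 brightness temperature is too large."""
    rows, cols = len(ch2), len(ch2[0])
    return [[ch2[i][j] > ch2_max
             or local_spread(ch2, i, j) > ch2_delta
             or local_spread(ch4, i, j) > ch4_delta
             for j in range(cols)] for i in range(rows)]

# toy scene: albedo (%) with one bright cloud pixel in the corner,
# and brightness temperature (K) with a cold cloud top at the same spot
ch2 = [[1.0, 1.1, 1.2], [1.0, 1.2, 1.1], [1.1, 1.0, 9.0]]
ch4 = [[285.0, 284.5, 285.0], [284.8, 285.0, 284.9], [285.0, 284.7, 265.0]]
mask = daytime_cloud_mask(ch2, ch4)   # corner pixel flagged, far corner clear
```

Sweeping ch2_max over 3 through 8, as in the study, would grow or shrink the masked area and thus the reliability level assigned to each SST pixel.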

Assessment of Natural Radiation Exposure by Means of Gamma-Ray Spectrometry and Thermoluminescence Dosimetry (감마선분광분석(線分光分析) 및 열형광검출법(熱螢光檢出法)에 의한 자연방사선(自然放射線)의 선량측정연구(線量測定硏究))

  • Jun, Jae-Shik;Oh, Hi-Peel;Choi, Chul-Kyu;Oh, Heon-Jin;Ha, Chung-Woo
    • Journal of Radiation Protection and Research
    • /
    • v.10 no.2
    • /
    • pp.96-108
    • /
    • 1985
  • A study for the assessment of natural environmental radiation exposure at a flat, open field of about $10,000m^2$ in area on the CNU Daeduk campus was carried out by means of gamma-ray scintillation spectrometry and thermoluminescence dosimetry for a one-year period from October 1984. The detectors used were a 3'${\phi}{\times}$3' NaI(Tl) scintillator and two different types of LiF TLD, namely chips sealed in plastic sheet and pressed tightly onto the two open holes of a metal plate, and Teflon disks. Three 24-hour cycles of in-situ spectrometry, and two 3-month and one 1-month cycles of field TL dosimetry, were performed. All measured spectra were converted into exposure rates by means of the G(E) operation, from which the exposure rate due to the terrestrial component of environmental radiation was derived. The exposure rate determined by spectrometry was, on average, $(10.54{\pm}2.96){\mu}R/hr$, and rates of $(12.0{\pm}3.4){\mu}R/hr$ and $(11.0{\pm}3.6){\mu}R/hr$ were obtained from the chip and disk TLDs, respectively. Fluctuations in the diurnal variation of the exposure rate measured by spectrometry were sometimes noticeable even within a single 24-hour cycle. It is concluded that an appropriately combined use of TLD with an in-situ gamma-ray spectrometry system can give a more accurate and precise measure of environmental radiation exposure. Further study of more adequate and sensitive TLDs for environmental dosimetry is necessary, including improvement of the accuracy of data assessment through inter-laboratory or international intercomparison.


The NCAM Land-Atmosphere Modeling Package (LAMP) Version 1: Implementation and Evaluation (국가농림기상센터 지면대기모델링패키지(NCAM-LAMP) 버전 1: 구축 및 평가)

  • Lee, Seung-Jae;Song, Jiae;Kim, Yu-Jung
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.18 no.4
    • /
    • pp.307-319
    • /
    • 2016
  • A Land-Atmosphere Modeling Package (LAMP) for supporting agricultural and forest management was developed at the National Center for AgroMeteorology (NCAM). The package comprises two components: one is the Weather Research and Forecasting (WRF) modeling system coupled with the Noah-MultiParameterization options (Noah-MP) Land Surface Model (LSM), and the other is an offline one-dimensional LSM. The objective of this paper is to briefly describe the two components of NCAM-LAMP and to evaluate their initial performance. The coupled WRF/Noah-MP system is configured with a parent domain over East Asia and three nested domains, with a finest horizontal grid size of 810 m. The innermost domain covers two Gwangneung deciduous and coniferous KoFlux sites (GDK and GCK). The model is integrated for about 8 days with initial and boundary conditions taken from the National Centers for Environmental Prediction (NCEP) Final Analysis (FNL) data. The verification variables for the WRF/Noah-MP coupled system are 2-m air temperature, 10-m wind, 2-m humidity, and surface precipitation. Skill scores are calculated for each domain and for two dynamic vegetation options, using the differences between observed data from the Korea Meteorological Administration (KMA) and data simulated by the WRF/Noah-MP coupled system. The accuracy of the precipitation simulation is examined using a contingency table made up of the Probability of Detection (POD) and the Equitable Threat Score (ETS). The standalone LSM simulation is conducted for one year with the original settings and is compared with KoFlux site observations of net radiation, sensible heat flux, latent heat flux, and soil moisture. According to the results, the innermost domain (810 m resolution) showed the minimum root mean square error among all domains for 2-m air temperature, 10-m wind, and 2-m humidity.
Turning on the dynamic vegetation tended to reduce 10-m wind simulation errors in all domains. The first nested domain (7,290 m resolution) showed the highest precipitation score, but gained little advantage from using the dynamic vegetation. On the other hand, the offline one-dimensional Noah-MP LSM simulation captured the observed pattern and magnitude of the radiative fluxes and soil moisture at the site, leaving room for further improvement through supplementing the model input of leaf area index and finding a proper combination of model physics.
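The precipitation skill scores mentioned above follow the standard contingency-table definitions; the cell counts in this sketch are illustrative, not the study's verification data.

```python
def pod(hits, misses):
    """Probability of Detection: fraction of observed events forecast."""
    return hits / (hits + misses)

def ets(hits, false_alarms, misses, correct_negatives):
    """Equitable Threat Score: threat score corrected for random hits."""
    total = hits + false_alarms + misses + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# illustrative table: 40 hits, 10 false alarms, 20 misses, 130 correct negatives
print(round(pod(40, 20), 3))           # → 0.667
print(round(ets(40, 10, 20, 130), 3))  # → 0.455
```

ETS is "equitable" in that a random forecast scores near zero, so it allows a fairer comparison across domains with different rain frequencies.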

Analysis of Urban Heat Island (UHI) Alleviating Effect of Urban Parks and Green Space in Seoul Using Deep Neural Network (DNN) Model (심층신경망 모형을 이용한 서울시 도시공원 및 녹지공간의 열섬저감효과 분석)

  • Kim, Byeong-chan;Kang, Jae-woo;Park, Chan;Kim, Hyun-jin
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.48 no.4
    • /
    • pp.19-28
    • /
    • 2020
  • The Urban Heat Island (UHI) effect has intensified with urbanization, and heat management at the urban level is treated as an important issue. Green space improvement projects and environmental policies are being implemented as ways to alleviate urban heat islands. Several studies have analyzed the correlation between urban green areas and heat with linear regression models. However, linear regression models have limitations in explaining the correlation between heat and a multitude of variables, as heat results from a combination of non-linear factors. This study evaluated the heat island alleviating effects in Seoul during the summer using a deep neural network methodology, which has strengths where variable factors and large data volumes make analysis with existing statistical methods difficult. Wide-area data were acquired using Landsat 8. Seoul was divided into a grid (30 m × 30 m) and the heat island reduction variables were entered into each grid cell, creating the data structure needed to construct a deep neural network, using ArcGIS 10.7 and Python 3.7 with Keras. This deep neural network was used to analyze the correlation between land surface temperature and the variables, and we confirmed that the model has high explanatory accuracy. The cooling effect of NDVI was found to be the greatest, and cooling effects due to park size and green space proximity were also shown. Previous studies reported cooling effects related to park size of 2℃-3℃ and proximity effects lowering the temperature by 0.3℃-2.3℃; those results may have been overestimated. The results of this study can provide objective information for the justification and more effective formation of new urban green areas to alleviate the urban heat island phenomenon in the future.
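The core idea, regressing land surface temperature on per-cell green space variables with a neural network, can be sketched without the study's Keras stack. Everything below is an illustrative stand-in: a tiny from-scratch network, made-up feature values (NDVI, park proximity), and made-up temperature anomalies, intended only to show non-linear regression on grid variables.

```python
import math
import random

random.seed(0)

def mlp_init(n_in, n_hid):
    """Small random weights for one hidden layer and a linear output."""
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
    w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]
    return w1, w2

def forward(w1, w2, x):
    """tanh hidden layer, linear output; returns (prediction, activations)."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return sum(w * hi for w, hi in zip(w2, h)), h

def train(data, n_hid=4, lr=0.05, epochs=500):
    """Plain SGD on squared error, backpropagated through the one hidden layer."""
    w1, w2 = mlp_init(len(data[0][0]), n_hid)
    for _ in range(epochs):
        for x, y in data:
            y_hat, h = forward(w1, w2, x)
            err = y_hat - y
            for j in range(len(w2)):
                old = w2[j]
                w2[j] -= lr * err * h[j]                 # output-layer step
                g = err * old * (1 - h[j] ** 2)          # hidden-layer gradient
                for i in range(len(x)):
                    w1[j][i] -= lr * g * x[i]
    return w1, w2

# toy grid cells: (NDVI, park proximity) -> surface temperature anomaly (made up)
data = [((0.8, 1.0), -2.0), ((0.1, 0.0), 3.0), ((0.5, 0.5), 0.0)]
w1, w2 = train(data)
cool, _ = forward(w1, w2, (0.8, 1.0))   # vegetated cell near a park
hot, _ = forward(w1, w2, (0.1, 0.0))    # built-up cell with no green space
```

After training, the network predicts a cooler anomaly for the vegetated cell than for the built-up one, the qualitative relationship the study quantifies at city scale.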

Improving Performance of Recommendation Systems Using Topic Modeling (사용자 관심 이슈 분석을 통한 추천시스템 성능 향상 방안)

  • Choi, Seongi;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.101-116
    • /
    • 2015
  • Recently, with the development of smart devices and social media, vast amounts of information in various forms have accumulated. In particular, considerable research effort is being directed toward analyzing unstructured big data to resolve various social problems, and the focus of data-driven decision-making is accordingly moving from structured to unstructured data analysis. In the field of recommendation systems, a typical area of data-driven decision-making, the need to use unstructured data to improve system performance has also steadily increased. Approaches to improving the performance of recommendation systems fall into two categories: improving algorithms and acquiring useful, high-quality data. Traditionally, most efforts were made through the former approach, while the latter has attracted relatively little attention. In this sense, efforts to utilize unstructured data from various sources are timely and necessary. In particular, as the interests of users are directly connected with their needs, identifying user interests through unstructured big data analysis can be a clue to improving the performance of recommendation systems. This study therefore proposes a methodology for improving recommendation systems by measuring user interests. Specifically, it proposes a method to quantify user interests by analyzing users' internet usage patterns, and to predict repurchase based on the discovered preferences. There are two important modules in this study. The first module predicts the repurchase probability of each category by analyzing users' purchase history; we include it in our research scope to compare the accuracy of the traditional purchase-based prediction model with the new model presented in the second module. This procedure extracts the purchase history of users.
The core of our methodology is the second module, which extracts users' interests by analyzing the news articles they have read. It constructs a correspondence matrix between topics and news articles by performing topic modeling on real-world news articles, and then analyzes users' news access patterns to construct a correspondence matrix between articles and users. By merging the results of these processes, we obtain a correspondence matrix between users and topics, which describes users' interests in a structured manner. Finally, using this matrix, the second module builds a model for predicting the repurchase probability of each category. In this paper, we also provide the results of our performance evaluation. The data used in our experiments are as follows. We acquired web transaction data for 5,000 panels from a company that specializes in analyzing the rankings of internet sites. We first extracted 15,000 URLs of news articles published from July 2012 to June 2013 from the original data and crawled the main contents of the news articles. We then selected 2,615 users who had read at least one of the extracted news articles; among them, 359 target users had purchased at least one item from our target shopping mall 'G'. In the experiments, we analyzed the purchase history and news access records of these 359 internet users. From the performance evaluation, we found that our prediction model using both user interests and purchase history outperforms a prediction model using only purchase history in terms of misclassification ratio. In detail, our model outperformed the traditional one in the appliance, beauty, computer, culture, digital, fashion, and sports categories when artificial neural network-based models were used, and in the beauty, computer, digital, fashion, food, and furniture categories when decision tree-based models were used, although the latter improvement was very small.
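The matrix-merging step in the second module reduces to one multiplication: a (users × articles) access matrix times an (articles × topics) topic-modeling output gives a (users × topics) interest matrix. The access flags and topic weights below are toy values for illustration.

```python
def matmul(A, B):
    """Plain matrix product of two nested-list matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# 2 users x 3 articles: 1 means the user read the article
user_article = [[1, 1, 0],
                [0, 0, 1]]

# 3 articles x 2 topics: per-article topic weights (e.g. from LDA)
article_topic = [[0.9, 0.1],
                 [0.7, 0.3],
                 [0.2, 0.8]]

# 2 users x 2 topics: structured description of each user's interests
user_topic = matmul(user_article, article_topic)
# user 0 leans toward topic 0 (1.6 vs 0.4); user 1 toward topic 1 (0.2 vs 0.8)
```

The resulting rows are the interest features that, together with purchase history, feed the repurchase prediction model.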

Development and Analysis of COMS AMV Target Tracking Algorithm using Gaussian Cluster Analysis (가우시안 군집분석을 이용한 천리안 위성의 대기운동벡터 표적추적 알고리듬 개발 및 분석)

  • Oh, Yurim;Kim, Jae Hwan;Park, Hyungmin;Baek, Kanghyun
    • Korean Journal of Remote Sensing
    • /
    • v.31 no.6
    • /
    • pp.531-548
    • /
    • 2015
  • Atmospheric Motion Vectors (AMVs) derived from satellite images have shown a Slow Speed Bias (SSB) in comparison with rawinsonde observations. The causes of SSB originate from tracking, selection, and height assignment errors, of which height assignment is known to be the leading error. However, recent work has shown that height assignment error cannot fully explain the SSB. This paper attempts a new approach to examine the possibility of reducing the SSB of COMS AMVs by using a new target tracking algorithm. Tracking error can be caused by the averaging of various wind patterns within a target and by changes in cloud shape during the search process over time. To overcome this problem, a Gaussian Mixture Model (GMM) was adopted to extract the coldest cluster as the target, since the shape of such a target is less subject to transformation. An image filtering scheme is then applied that weights the selected coldest pixels more heavily than the others, which makes the target easier to track. When the AMVs derived from our algorithm with the sum-of-squared-distance method and those from the current COMS are compared with rawinsonde data, our products show a noticeable improvement over the COMS products, with an increase of $2.7ms^{-1}$ in mean wind speed and a 29% reduction in SSB. However, the bias statistics show a negative impact at mid/low levels with our algorithm, and the number of vectors is reduced by 40% relative to COMS. Therefore, further study is required to improve the accuracy of mid/low-level winds and to increase the number of AMVs.
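The coldest-cluster extraction idea can be sketched with a generic two-component 1-D Gaussian mixture fit by EM; this is a hedged stand-in, not the paper's exact implementation, and the brightness temperatures below are made up (cold cloud-top near 210 K, warm surface near 285 K).

```python
import math

def gmm2_em(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture with EM.
    Returns means, variances, weights, and per-point responsibilities."""
    mu = [min(x), max(x)]          # initialize the components at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel
        resp = []
        for xi in x:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances (variance floored)
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = max(sum(r[k] * (xi - mu[k]) ** 2
                             for r, xi in zip(resp, x)) / nk, 1e-3)
    return mu, var, pi, resp

# toy brightness temperatures (K) inside a target box
temps = [211.0, 209.5, 210.5, 284.0, 286.0, 285.5, 212.0, 283.5]
mu, var, pi, resp = gmm2_em(temps)
cold = 0 if mu[0] < mu[1] else 1
# keep pixels assigned to the colder component as the tracking target
target = [t for t, r in zip(temps, resp) if r[cold] > 0.5]
```

The retained cold pixels would then be up-weighted by the filtering scheme before the sum-of-squared-distance search between consecutive images.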

A Strategy for Environmental Improvement and Internationalization of the IEODO Ocean Research Station's Radiation Observatory (이어도 종합해양과학기지의 복사관측소 환경 개선 및 국제화 추진 전략)

  • LEE, SANG-HO;Zo, Il-SUNG;LEE, KYU-TAE;KIM, BU-YO;JUNG, HYUN-SEOK;RIM, SE-HUN;BYUN, DO-SEONG;LEE, JU-YEONG
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.22 no.3
    • /
    • pp.118-134
    • /
    • 2017
  • Radiation observation data are used in important research fields such as climatology, weather, architecture, agriculture, livestock, and marine science. The Ieodo Ocean Research Station (IORS) is regarded as an ideal observatory because its location minimizes solar radiation reflected from the surrounding background, so the data produced there can serve as reference data for radiation observation. The station has the potential to emerge as a significant observatory and to join a global radiation observation group such as the Baseline Surface Radiation Network (BSRN), if the observatory's surroundings are improved and it is equipped with the essential radiation measuring instruments (pyranometer and pyrheliometer). The IORS has observed solar radiation using a pyranometer since November 2004, and the data from January 1, 2005 to December 31, 2015 were analyzed in this study. During the study period, the daily mean solar radiation observed at the IORS decreased at a rate of $-3.80W/m^2/year$ due to variation in the sensor response in addition to the natural environment. Since the yellow sand and fine dust from China are of great interest to scientists around the world, it is necessary to establish a basis for a global joint response using the radiation data obtained at Ieodo as well as at the Sinan Gageocho and Ongjin Socheongcho Ocean Research Stations. There is therefore an urgent need to improve the observatory's surroundings and the accuracy of the observed data.

Improved Method of License Plate Detection and Recognition using Synthetic Number Plate (인조 번호판을 이용한 자동차 번호인식 성능 향상 기법)

  • Chang, Il-Sik;Park, Gooman
    • Journal of Broadcast Engineering
    • /
    • v.26 no.4
    • /
    • pp.453-462
    • /
    • 2021
  • A large amount of license plate data is required for car number recognition, and the data need to be balanced from past license plates to the latest ones. However, it is difficult to obtain real data spanning past to current plates, so license plate recognition studies using deep learning are being conducted with synthetic license plates. Since synthetic data differ from real data, various data augmentation techniques are used to bridge the gap. Existing data augmentation simply used methods such as brightness, rotation, affine transformation, blur, and noise. In this paper, we apply a style transformation method that transforms synthetic data into a real-world data style, together with the data augmentation methods. In addition, real license plate data are noisy when captured from a distance or in dark environments, and simply recognizing characters from such input gives a high chance of misrecognition. To improve character recognition, we applied the DeblurGANv2 method as a quality improvement step, increasing the accuracy of license plate recognition. YOLOv5 was used as the deep learning method for both license plate detection and license plate number recognition. To evaluate performance on the synthetic license plate data, we constructed a test set by collecting our own license plates. License plate detection without style conversion recorded 0.614 mAP; applying the style transformation improved detection performance to 0.679 mAP. In addition, the successful detection rate without image enhancement was 0.872, and 0.915 after image enhancement, confirming the performance improvement.

Validation of initial nutrition screening tool for hospitalized patients (입원 환자용 초기 영양검색도구의 타당도 검증)

  • Kim, Hye-Suk;Lee, Seonheui;Kim, Hyesook;Kwon, Oran
    • Journal of Nutrition and Health
    • /
    • v.52 no.4
    • /
    • pp.332-341
    • /
    • 2019
  • Purpose: Poor nutrition in hospitalized patients is closely linked to an increased risk of infection, which can result in complications affecting mortality, as well as increased length of hospital stay and hospital costs. Adequate nutritional support is therefore essential to manage the nutritional risk status of patients, and it needs to be preceded by nutrition screening, in which accuracy is crucial, particularly at the initial screening. For the initial nutrition screening of hospitalized patients, we used the Catholic Kwandong University (CKU) Nutritional Risk Screening (CKUNRS) tool, originally developed at CKU Hospital. To validate CKUNRS against the Patient-Generated Subjective Global Assessment (PG-SGA), which is considered the gold standard for nutritional risk screening, results from both tools were compared. Methods: Nutritional status was evaluated in 686 adult patients admitted to CKU Hospital from May 1 to July 31, 2018 using both CKUNRS and PG-SGA. The collected data were analyzed and the results compared to validate CKUNRS as a nutrition screening tool. Results: The comparison of CKUNRS and PG-SGA revealed that the prevalence of nutritional risk on admission was 15.6% (n = 107) with CKUNRS and 44.6% (n = 306) with PG-SGA. The sensitivity and specificity of CKUNRS for evaluating nutritional risk status were 98.7% (96.8-99.5) and 33.3% (28.1-39.0), respectively; thus, the sensitivity was higher but the specificity lower compared with PG-SGA. Cohen's kappa coefficient was 0.34, indicating fair agreement between the two tools. Conclusion: This study found concordance between CKUNRS and PG-SGA. However, the prevalence of nutritional risk in hospitalized patients was higher when determined by CKUNRS than by PG-SGA. Accordingly, CKUNRS needs further modification and improvement of its screening criteria to promote more effective nutritional support for patients admitted for inpatient care.
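The validation statistics reported above (sensitivity, specificity, Cohen's kappa) follow the standard 2×2 screening-versus-reference table definitions. The sketch below uses illustrative cell counts, not the study's actual table.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(tp, fn, tn, fp):
    """Chance-corrected agreement between screening tool and reference."""
    n = tp + fn + tn + fp
    po = (tp + tn) / n                       # observed agreement
    # expected agreement by chance, from the table's marginal totals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (po - pe) / (1 - pe)

# illustrative counts: 40 true positives, 10 false negatives,
# 40 true negatives, 10 false positives
sens, spec = sens_spec(40, 10, 40, 10)   # → (0.8, 0.8)
kappa = cohens_kappa(40, 10, 40, 10)     # → 0.6
```

Note how a very high sensitivity with a low specificity, as reported for CKUNRS, can coexist with only moderate kappa: the tool rarely misses at-risk patients but over-flags many who are not at risk.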