• Title/Abstract/Keywords: 3D accuracy

Search Results: 2,839 (processing time: 0.759 seconds)

Assessment of Attenuation Correction Techniques with a $^{137}Cs$ Point Source ($^{137}Cs$ 점선원을 이용한 감쇠 보정기법들의 평가)

  • Bong, Jung-Kyun;Kim, Hee-Joung;Son, Hye-Kyoung;Park, Yun-Young;Park, Hae-Joung;Yun, Mi-Jin;Lee, Jong-Doo;Jung, Hae-Jo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.1
    • /
    • pp.57-68
    • /
    • 2005
  • Purpose: The objective of this study was to assess attenuation correction algorithms with the $^{137}Cs$ point source for the brain positron emission tomography (PET) imaging process. Materials & Methods: Four different types of phantoms were used in this study for testing various types of attenuation correction techniques. Transmission data of a $^{137}Cs$ point source were acquired after infusing the emission source into the phantoms, and the emission data were subsequently acquired in 3D acquisition mode. Scatter corrections were performed with a background tail-fitting algorithm. Emission data were then reconstructed using an iterative reconstruction method with measured (MAC), elliptical (ELAC), segmented (SAC), and remapping (RAC) attenuation corrections, respectively. Reconstructed images were then assessed both qualitatively and quantitatively. In addition, reconstructed images of a normal subject were assessed by nuclear medicine physicians, and subtracted images were also compared. Results: ELAC, SAC, and RAC provided a uniform phantom image with less noise for a cylindrical phantom. In contrast, a decrease in intensity at the central portion of the attenuation map was noticed in the result of MAC. Reconstructed images of the Jaszczak and Hoffman phantoms presented better quality with RAC and SAC. The attenuation of the skull on images of the normal subject was clearly noticeable, and attenuation correction without considering the attenuation of the skull resulted in artificial defects on images of the brain. Conclusion: More sophisticated and improved attenuation correction methods are needed to obtain better accuracy in quantitative brain PET images.

Study on the LOWTRAN7 Simulation of the Atmospheric Radiative Transfer Using CAGEX Data (CAGEX 관측자료를 이용한 LOWTRAN7의 대기 복사전달 모의에 대한 조사)

  • 장광미;권태영;박경윤
    • Korean Journal of Remote Sensing
    • /
    • v.13 no.2
    • /
    • pp.99-120
    • /
    • 1997
  • Solar radiation is scattered and absorbed by atmospheric constituents before it reaches the surface and again, after being reflected at the surface, until it reaches the satellite sensor. Therefore, consideration of radiative transfer through the atmosphere is essential for the quantitative analysis of satellite-sensed data, especially in the shortwave region. This study examined the feasibility of using a radiative transfer code for estimating the atmospheric effects on satellite remote sensing data. To do this, the flux simulated by LOWTRAN7 was compared with CAGEX data in the shortwave region. The CAGEX (CERES/ARM/GEWEX Experiment) data provide a dataset of (1) atmospheric soundings, aerosol optical depth, and albedo; (2) ARM (Aerosol Radiation Measurement) radiation fluxes measured by pyrgeometers, a pyrheliometer, and a shadow pyranometer; and (3) broadband shortwave fluxes simulated by Fu-Liou's radiative transfer code. To simulate the aerosol effect using the radiative transfer model, the aerosol optical characteristics were extracted from the observed aerosol column optical depth, Spinhirne's experimental vertical distribution of the scattering coefficient, and D'Almeida's statistical radiative characteristics of atmospheric aerosols. LOWTRAN7 simulations were performed on 31 samples from completely clear days. LOWTRAN7's results and the CAGEX data were compared for the upward, downward direct, and downward diffuse solar flux at the surface and the upward solar flux at the top of the atmosphere (TOA). The standard errors of the LOWTRAN7 simulations of the above components are within 5%, except for the downward diffuse solar flux at the surface (6.9%). The results show that a large part of the error in the LOWTRAN7 flux simulation appears in the diffuse component, mainly due to scattering by atmospheric aerosols. To improve the accuracy of radiative transfer simulation by the model, better information about the radiative characteristics of atmospheric aerosols is needed.

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) by using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. Then, it creates time series graphs for the divided dataset in step 2. The size of the image in which each graph is drawn is $40(pixels){\times}40(pixels)$, and the graph of each independent variable is drawn using a different color. In step 3, the model converts the images into matrices. Each image is converted into a combination of three matrices in order to express the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets. We used 80% of the total dataset as the training dataset and the remaining 20% as the validation dataset. Finally, CNN classifiers are trained using the images of the training dataset.
Regarding the parameters of CNN-FG, we adopted two convolution filters ($5{\times}5{\times}6$ and $5{\times}5{\times}9$) in the convolution layer. In the pooling layer, a $2{\times}2$ max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and that for the output layer was set to the Softmax function. To validate our model, CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset using 80% of the total dataset (1,560 samples) and the validation dataset using the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry William's %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with that of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models using these graphs can be effective from the perspective of prediction accuracy.
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
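As a rough illustration of step 2 of the pipeline described above, the sketch below rasterizes one 5-day window of indicator values into a $40{\times}40$ RGB image, with one color channel per indicator. The function name and the drawing scheme (column bands, row scaling) are hypothetical simplifications for illustration, not the paper's actual plotting procedure.

```python
import numpy as np

def series_to_graph_image(windows, size=40):
    """Rasterize a (n_indicators x n_days) window of values into a
    size x size RGB image, one color channel per indicator (up to 3).
    This is a hypothetical simplification of a fluctuation graph."""
    n_ind, n_days = windows.shape
    img = np.zeros((size, size, 3), dtype=np.uint8)
    lo, hi = windows.min(), windows.max()
    band = size // n_days                      # column band per day
    for c in range(min(n_ind, 3)):             # one RGB channel per indicator
        for d in range(n_days):
            v = (windows[c, d] - lo) / (hi - lo + 1e-9)
            row = size - 1 - int(v * (size - 1))   # higher value -> higher row
            img[row, d * band:(d + 1) * band, c] = 255
    return img

# a 3-indicator, 5-day window with invented values
w = np.array([[1., 2., 3., 4., 5.],
              [5., 4., 3., 2., 1.],
              [3., 3., 3., 3., 3.]])
img = series_to_graph_image(w)
```

The resulting array has shape (40, 40, 3), matching the image/matrix conversion of steps 2-3 before the data are fed to the CNN.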

CT Simulation Technique for Craniospinal Irradiation in Supine Position (전산화단층촬영모의치료장치를 이용한 배와위 두개척수 방사선치료 계획)

  • Lee, Suk;Kim, Yong-Bae;Kwon, Soo-Il;Chu, Sung-Sil;Suh, Chang-Ok
    • Radiation Oncology Journal
    • /
    • v.20 no.2
    • /
    • pp.165-171
    • /
    • 2002
  • Purpose : In order to perform craniospinal irradiation (CSI) in the supine position on patients who are unable to lie in the prone position, a new simulation technique using a CT simulator was developed and its availability was evaluated. Materials and Methods : A CT simulator and a 3-D conformal treatment planning system were used to develop CSI in the supine position. The head and neck were immobilized with a thermoplastic mask in the supine position, and the entire body was immobilized with a Vac-Loc. A volumetric image was then obtained using the CT simulator. In order to improve the reproducibility of the patients' setup, datum lines and points were marked on the head and the body. Virtual fluoroscopy was performed with the removal of visual obstacles such as the treatment table or the immobilization devices. After the virtual simulation, the treatment isocenters of each field were marked on the body and the immobilization devices in the conventional simulation room. Each treatment field was confirmed by comparing the fluoroscopy images with the digitally reconstructed radiography (DRR)/digitally composite radiography (DCR) images from the virtual simulation. The port verification films from the first treatment were also compared with the DRR/DCR images for geometrical verification. Results : CSI in the supine position was successfully performed in 9 patients. It required less than 20 minutes to construct the immobilization device and to obtain the whole-body volumetric images. This made it possible not only to reduce the patients' inconvenience, but also to eliminate position-change variables during the long conventional simulation process. In addition, by obtaining the CT volumetric image, critical organs such as the eyeballs and spinal cord were better defined, and the accuracy of the port designs and shielding was improved. The differences between the DRRs and the portal films were less than 3 mm in the vertebral contour.
Conclusion : CSI in the supine position is feasible in patients who cannot lie in the prone position, such as pediatric patients under the age of 4 years, patients with a poor general condition, or patients with a tracheostomy.

Relationship Analysis between Lineaments and Epicenters using Hotspot Analysis: The Case of Geochang Region, South Korea (핫스팟 분석을 통한 거창지역의 선구조선과 진앙의 상관관계 분석)

  • Jo, Hyun-Woo;Chi, Kwang-Hoon;Cha, Sungeun;Kim, Eunji;Lee, Woo-Kyun
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.5_1
    • /
    • pp.469-480
    • /
    • 2017
  • This study aims to understand the relationship between lineaments and epicenters in the Geochang region, Gyeongsangnam-do, South Korea. Instrumental observation of earthquakes has been carried out by the Korea Meteorological Administration (KMA) since 1978, and there were 6 earthquakes with magnitudes ranging from 2 to 2.5 in the Geochang region from 1978 to 2016. Lineaments were extracted from a LANDSAT 8 satellite image and a shaded relief map displayed in 3 dimensions using a Digital Elevation Model (DEM). Then, lineament density was statistically examined by hotspot analysis. Hexagonal grids were generated to perform the analysis, because a hexagonal pattern expresses lineaments with less discontinuity than square grids, and the size of the grid was selected to minimize the variance of the lineament density. Since hotspot analysis measures the extent of clustering with a Z score, Z scores computed from the lineaments' frequency ($L_f$), length ($L_d$), and intersection ($L_t$) were used to find lineament clusters in the density map. Furthermore, the Z scores were extracted at the epicenters and examined to see the relevance of each density element to the epicenters. As a result, 15 among the 18 densities, recorded as 3 elements at 6 epicenters, were higher than 1.65, which corresponds to the 95th percentile of the standard normal distribution. This indicates that the epicenters coincide with high-density areas. In particular, $L_f$ and $L_t$ had a significant relationship with the epicenters, being located above the 95th percentile of the standard normal distribution, except for one epicenter in $L_t$. This study can be used to identify potential seismic zones by improving the accuracy of expressing the lineaments' spatial distribution and by analyzing the relationship between lineament density and epicenters. However, additional studies in a wider study area with more epicenters are recommended to corroborate the results.
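The hotspot criterion described above (cells whose density Z score exceeds 1.65, i.e. the upper 5% of a standard normal distribution) can be sketched as follows. The plain standardization here is a simplification of the Getis-Ord style statistic usually behind hotspot analysis, and the density values are invented.

```python
import numpy as np

def density_z_scores(densities):
    """Standardize per-cell lineament densities to Z scores. Cells with
    Z > 1.65 fall in the upper 5% of a standard normal distribution and
    are treated as density hotspots (a simplification of the Getis-Ord
    statistic typically used with hexagonal grids)."""
    d = np.asarray(densities, dtype=float)
    return (d - d.mean()) / d.std()

dens = [0.2, 0.3, 0.25, 0.28, 0.22, 1.9]        # invented hexagon densities
z = density_z_scores(dens)
hot = [i for i, v in enumerate(z) if v > 1.65]  # indices of hotspot cells
```

Only the strongly clustered cell exceeds the 1.65 threshold, mirroring how epicenters were checked against high-density cells in the study.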

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.109-125
    • /
    • 2020
  • The Ministry of National Defense is pushing the Defense Acquisition Program to build strong defense capabilities, and it spends more than 10 trillion won annually on defense improvement. As the Defense Acquisition Program is directly related to the security of the nation as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the Defense Acquisition Program has made it challenging for many working-level officials to carry out the program smoothly. It is even known that many people realize only after pushing ahead with their work that there are related regulations they were unaware of. In addition, statutory statements related to the Defense Acquisition Program tend to cause serious issues even if only a single expression within a sentence is wrong. Despite this, efforts to establish a sentence comparison system to correct such issues in real time have been minimal. Therefore, this paper proposes an implementation plan for a "Comparison System between the Statement of Military Reports and Related Laws" that uses a Siamese Network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences that are likely to appear in Defense Acquisition Program related documents and those from related statutory provisions, to determine and classify the risk of illegality, and to make users aware of the consequences. Various artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentences" (described in actual statutes) and "Edited Sentences" (edited sentences derived from the "Original Sentences").
Among the many Defense Acquisition Program related statutes, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The "Original Sentence" set comprises the 83 clauses that actually appear in these Acts and are most accessible to working-level officials in their work. The "Edited Sentence" set comprises 30 to 50 similar sentences per clause that are likely to appear in modified form in a military report. During the creation of the edited sentences, the original sentences were modified using 12 specific rules, and these sentences were produced in proportion to the number of such rules, as was the case for the original sentences. After conducting 1:1 sentence similarity performance evaluation experiments, it was possible to classify each "Edited Sentence" as legal or illegal with considerable accuracy. In addition, the "Edited Sentence" dataset used to train the neural network models contains a variety of actual statutory statements ("Original Sentences"), which are characterized by the 12 rules. On the other hand, the models are not able to effectively classify other sentences that appear in actual military reports when only the "Original Sentence" and "Edited Sentence" datasets have been fed to them; the dataset is not ample enough for the models to recognize new incoming sentences. Hence, the performance of the models was reassessed by writing an additional 120 new sentences that better resemble those in actual military reports while still being associated with the original sentences. Thereafter, we were able to confirm that the models' performance surpassed a certain level even when they were trained merely with "Original Sentence" and "Edited Sentence" data.
If sufficient model learning is achieved through the improvement and expansion of the full set of training data with the addition of sentences as they actually appear in reports, the models will be able to better classify other sentences coming from military reports as legal or illegal. Based on the experimental results, this study confirms the possibility and value of building a "Real-Time Automated Comparison System Between Military Documents and Related Laws". The approach developed in this experiment can identify which specific clause, of the several that appear in the related laws, is most similar to a sentence that appears in Defense Acquisition Program related military reports. This helps determine whether the contents of the military report sentences are at risk of illegality when compared with those of the law clauses.
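A Siamese network scores a sentence pair by encoding each sentence and comparing the resulting embeddings. The sketch below substitutes a trivial character-bigram count vector for the learned Bi-LSTM encoder, purely to illustrate the pairwise-similarity framing; the `ngram_vector` helper and the example sentences are hypothetical, not from the study's data.

```python
from collections import Counter
import math

def ngram_vector(sentence, n=2):
    """Character bigram counts as a crude stand-in for the learned
    sentence encoder of a Siamese network."""
    s = sentence.lower()
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# hypothetical statute sentence, an edited variant, and an unrelated sentence
orig   = "the contractor shall submit the report within 30 days"
edited = "the contractor must submit the report within 60 days"
other  = "procurement budgets are reviewed annually by the ministry"

sim_edit  = cosine(ngram_vector(orig), ngram_vector(edited))
sim_other = cosine(ngram_vector(orig), ngram_vector(other))
```

The edited variant scores higher against the original than the unrelated sentence does, which is the ranking behavior the trained Siamese models exploit to flag risky edits.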

A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programing (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.253-263
    • /
    • 2013
  • The Frequency Scanning Interferometry (FSI) system, one of the most promising optical surface measurement techniques, generally achieves superior optical performance compared with other 3-dimensional measuring methods, as its hardware structure is fixed during operation and only the light frequency is scanned over a specific spectral band, without vertical scanning of the target surface or the objective lens. An FSI system collects a set of interference fringe images by changing the frequency of the light source. It then transforms the intensity data of the acquired images into frequency information and calculates the height profile of the target objects through frequency analysis based on the Fast Fourier Transform (FFT). However, it still suffers from optical noise on target surfaces and relatively long processing times due to the number of images acquired in the frequency scanning phase. To address these problems: 1) a Polarization-based Frequency Scanning Interferometry (PFSI) system is proposed for robustness against optical noise. It consists of a tunable laser as the light source, a ${\lambda}/4$ plate in front of the reference mirror, a ${\lambda}/4$ plate in front of the target object, a polarizing beam splitter (PBS), a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a ${\lambda}/2$ plate between the PBS and the polarizer of the light source. Using the proposed system, the problem of fringe images with low contrast can be solved by the polarization technique, and the light distribution of the object beam and the reference beam can be controlled. 2) A signal processing acceleration method is proposed for PFSI based on a parallel processing architecture, consisting of parallel processing hardware and software such as a Graphics Processing Unit (GPU) and the Compute Unified Device Architecture (CUDA). As a result, the processing time reaches the tact-time level of real-time processing.
Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the obtained results show the effectiveness of the proposed system and method.
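The FFT-based height recovery described above can be sketched as follows: an FSI fringe oscillates in optical frequency at a rate of 2·OPD/c, so the FFT peak of the signal recorded over the scan yields the optical path difference (OPD). The scan parameters below are invented for illustration, and the sketch omits the windowing and noise handling a real system needs.

```python
import numpy as np

def height_from_fringe(intensity, d_nu, c=3e8):
    """Recover the optical path difference (OPD) from an FSI fringe:
    the fringe oscillates in optical frequency at rate 2*OPD/c, so the
    FFT peak location across the scan gives the OPD directly."""
    spec = np.abs(np.fft.rfft(intensity - intensity.mean()))
    freqs = np.fft.rfftfreq(len(intensity), d=d_nu)  # cycles per Hz of scan
    return freqs[np.argmax(spec)] * c / 2.0          # OPD in metres

# synthetic scan (invented parameters): OPD = 1.5 mm, 5 THz sweep, 1024 samples
opd = 1.5e-3
nu = np.linspace(0.0, 5e12, 1024)
signal = 1.0 + np.cos(2.0 * np.pi * (2.0 * opd / 3e8) * nu)
est = height_from_fringe(signal, d_nu=nu[1] - nu[0])
```

The estimate lands within the FFT bin resolution of the true 1.5 mm; per-pixel repetition of this transform over the image stack is exactly the step the paper accelerates with GPU/CUDA parallelism.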

A Comparative Analysis between Photogrammetric and Auto Tracking Total Station Techniques for Determining UAV Positions (무인항공기의 위치 결정을 위한 사진 측량 기법과 오토 트래킹 토탈스테이션 기법의 비교 분석)

  • Kim, Won Jin;Kim, Chang Jae;Cho, Yeon Ju;Kim, Ji Sun;Kim, Hee Jeong;Lee, Dong Hoon;Lee, On Yu;Meng, Ju Pil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.6
    • /
    • pp.553-562
    • /
    • 2017
  • Among the various sensors mounted on a UAV (Unmanned Aerial Vehicle), the GPS (Global Positioning System) receiver enables functions such as hovering flight and waypoint flight based on GPS signals. The GPS receiver can be used in environments where GPS signals are received smoothly. Recently, however, the use of UAVs has diversified into fields such as facility monitoring, delivery services, and leisure, as the UAV's fields of application have expanded. For this reason, GPS signals may be interrupted when the UAV flies in a shadow area where GPS signal reception is limited. Multipath effects can also introduce various kinds of noise into the signal when flying in dense areas such as those with high-rise buildings. In this study, we used analytical photogrammetry and the auto tracking total station technique for 3D positioning of a UAV. The analytical photogrammetry is based on bundle adjustment using the collinearity equations, which express the geometric principle of central projection. The auto tracking total station technique is based on the principle of tracking a 360-degree prism target at intervals of a second or less. In both techniques, the target used for positioning the UAV is mounted on top of the UAV, and there is a geometric separation between the targets in the x, y, and z directions. Data were acquired at different speeds of 0.86 m/s, 1.5 m/s, and 2.4 m/s to examine the effect of the UAV's flight speed. Accuracy was evaluated using the geometric separation of the targets. As a result, there were errors from 1 mm to 12.9 cm in the x and y directions of the UAV flight. In the z direction, with relatively small movement, an error of approximately 7 cm occurred regardless of the flight speed.
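The collinearity equations underlying the bundle adjustment project a ground point into image coordinates through the camera centre. A minimal sketch, with entirely hypothetical camera parameters and sign conventions:

```python
import numpy as np

def collinearity_project(X, X0, R, f):
    """Collinearity equations: image coordinates of a ground point X as
    seen from a camera at X0 with rotation R (world -> camera) and focal
    length f. All numbers used below are hypothetical."""
    d = R @ (np.asarray(X, dtype=float) - np.asarray(X0, dtype=float))
    # x = -f * (r1 . (X - X0)) / (r3 . (X - X0)), likewise for y
    return -f * d[0] / d[2], -f * d[1] / d[2]

# camera 100 m above the origin looking straight down (R = identity),
# 50 mm focal length; ground point offset 10 m and 20 m horizontally
x, y = collinearity_project([10.0, 20.0, 0.0], [0.0, 0.0, 100.0],
                            np.eye(3), f=0.05)
```

Bundle adjustment iteratively refines X0, R, and the ground points so that these projections match the measured image coordinates across all photos.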

Effect of Artificial Shade Treatment on the Growth and Biomass Production of Several Deciduous Tree Species (인공피음처리가 주요 활엽수종의 생장과 물질생산에 미치는 영향)

  • 최정호;권기원;정진철
    • Journal of Korea Forestry Energy
    • /
    • v.21 no.1
    • /
    • pp.65-75
    • /
    • 2002
  • This study was carried out to determine the growth and biomass production of major deciduous trees, including Betula platyphylla var. japonica, Betula schmidtii, Zelkova serrata, Acer mono, Prunus sargentii, and Ligustrum obtusifolium, subjected to artificial shade treatment in a nursery field. The seedlings of the six deciduous species were grown for 2 years under different light intensities of 100%, 38-62%, 22-28%, 7-20%, and 2-6% of full sunlight. The results were as follows. In the seedling heights and root collar diameters of shade-intolerant species like Betula platyphylla var. japonica and Betula schmidtii, the relative growth rates of seedlings grown in full sun were twice those of seedlings subjected to the shade treatment of 2-6% of full sunlight. In shade-tolerant species like Acer mono and Ligustrum obtusifolium, growth performance was better in seedlings grown at 38-62% of full sunlight. Total dry mass, including the dry mass of leaves, shoots, and roots, decreased on the whole with shade treatment, and the ratio of leaf and stem dry mass to root dry mass increased. The T/R ratio of the seedlings increased with decreasing relative light intensity: the T/R ratio at 2-6% of full sunlight ranged from 1.1 to 5.0, greater than that in full sunlight, which ranged from 0.6 to 3.2. As light intensity decreased under artificial shading, SLA tended to increase on the whole, and LAR and LWR showed a similar increasing tendency, with statistically significant differences. However, the LWR of Betula platyphylla var. japonica increased gradually and then tended to decrease rapidly under the shade treatment of 2-6% of full sunlight.
These results suggest that the decrease in biomass production under shading was influenced by physiological characteristics such as leaf area and a decrease in the amount of leaves.


Evaluation of Factors Used in AAPM TG-43 Formalism Using Segmented Sources Integration Method and Monte Carlo Simulation: Implementation of microSelectron HDR Ir-192 Source (미소선원 적분법과 몬테칼로 방법을 이용한 AAPM TG-43 선량계산 인자 평가: microSelectron HDR Ir-192 선원에 대한 적용)

  • Ahn, Woo-Sang;Jang, Won-Woo;Park, Sung-Ho;Jung, Sang-Hoon;Cho, Woon-Kap;Kim, Young-Seok;Ahn, Seung-Do
    • Progress in Medical Physics
    • /
    • v.22 no.4
    • /
    • pp.190-197
    • /
    • 2011
  • Currently, the dose distribution calculation used by commercial treatment planning systems (TPSs) for high-dose-rate (HDR) brachytherapy is derived from the point and line source approximation methods recommended by AAPM Task Group 43 (TG-43). However, Monte Carlo (MC) simulation studies are required in order to assess the accuracy of the dose calculation around a three-dimensional Ir-192 source. In this study, the geometry factor was calculated using a segmented sources integration method, dividing the microSelectron HDR Ir-192 source into smaller parts. The Monte Carlo code (MCNPX 2.5.0) was used to calculate the dose rate $\dot{D}(r,\theta)$ at a point ($r,\theta$) away from an HDR Ir-192 source in a spherical water phantom of 30 cm diameter. Finally, the anisotropy function and the radial dose function were calculated from the obtained results. The obtained geometry factor was compared with that calculated from the line source approximation. Similarly, the obtained anisotropy function and radial dose function were compared with those derived from the MCPT results by Williamson. The geometry factors calculated from the segmented sources integration method and the line source approximation agreed within 0.2% for $r{\geq}0.5$ cm and 1.33% for r=0.1 cm, respectively. The relative root mean square error (R-RMSE) of the anisotropy function obtained in this study relative to Williamson's was 2.33% for r=0.25 cm and within 1% for r>0.5 cm. The R-RMSE of the radial dose function was 0.46% at radial distances from 0.1 to 14.0 cm. The geometry factors acquired from the segmented sources integration method and the line source approximation were in good agreement for $r{\geq}0.1$ cm. However, the application of the segmented sources integration method seems preferable, since this method, which uses the three-dimensional Ir-192 source geometry, provides a more realistic geometry factor.
The anisotropy function and radial dose function estimated from MCNPX in this study and from MCPT by Williamson are in good agreement, within the uncertainty of the Monte Carlo codes, except at the radial distance of r=0.25 cm. It is expected that the Monte Carlo code used in this study could be applied to other sources utilized for brachytherapy.
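The segmented-sources idea above can be sketched numerically: split the active core into point segments, average the inverse-square contributions, and compare against the TG-43 line-source form $G_L(r,\theta)=\beta/(L\,r\sin\theta)$. The active length (0.36 cm, typical of this source type) and the segment count are illustrative, not the paper's exact values.

```python
import math

def g_segmented(r, theta, L=0.36, n=1000):
    """Geometry factor G(r, theta) [1/cm^2]: split an active core of
    length L cm into n point segments along the source axis and average
    the inverse-square contributions (midpoint rule)."""
    x, z = r * math.sin(theta), r * math.cos(theta)
    total = 0.0
    for i in range(n):
        zi = -L / 2 + (i + 0.5) * L / n       # midpoint of segment i
        total += 1.0 / (x * x + (z - zi) ** 2)
    return total / n

def g_line(r, theta, L=0.36):
    """TG-43 line-source approximation G_L = beta / (L * r * sin(theta)),
    beta being the angle the source subtends at the calculation point."""
    x, z = r * math.sin(theta), r * math.cos(theta)
    beta = math.atan2(L / 2 - z, x) + math.atan2(L / 2 + z, x)
    return beta / (L * x)

gs = g_segmented(1.0, math.pi / 2)   # on the transverse axis, r = 1 cm
gl = g_line(1.0, math.pi / 2)
```

On the transverse axis at r = 1 cm the two agree to well under 0.2%, consistent with the close agreement the abstract reports for r ≥ 0.5 cm; the segmented form generalizes more naturally to a full 3-D source geometry.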