• Title/Summary/Keyword: Iterative


Suitability Evaluation Method for Both Control Data and Operator Regarding Remote Control of Maritime Autonomous Surface Ships (자율운항선박 원격제어 관련 제어 데이터와 운용자의 적합성 평가 방법)

  • Hwa-Sop Roh;Hong-Jin Kim;Jeong-Bin Yim
    • Journal of Navigation and Port Research
    • /
    • v.48 no.3
    • /
    • pp.214-220
    • /
    • 2024
  • Remote control is used for operating maritime autonomous surface ships. The operator controls the ship using control data generated by the remote control system. To ensure successful remote control, three principles must be followed: safety, reliability, and availability. To achieve this, the suitability of both the control data and the operators for remote control must be established. Currently, there are no international regulations in place for evaluating remote control suitability through experiments on actual ships, and conducting such experiments is dangerous, costly, and time-consuming. The goal of this study is to develop a suitability evaluation method using the output values of control devices used in actual ship operation. The proposed method evaluates the suitability of the data by analyzing the output values, and evaluates the suitability of the operators by examining their tracking of these output values. The experiment was conducted using a shore-based remote control system to operate the training ship 'Hannara' of Korea National Maritime and Ocean University. The experiment involved an iterative process of obtaining the operator's tracking value for the output value of the ship's control devices and transmitting and receiving tracking data between the ship and the shore. The evaluation results showed that the transmission and reception performance of the control data was suitable for remote operation. However, the operator's tracking performance revealed a need for further education and training. Therefore, the proposed evaluation method can be applied to assess the suitability of both the control data and the operator, and to analyze their compliance with the three principles of remote control.
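The abstract does not specify the tracking metric used; as a hedged sketch, the operator's tracking of a control device's output series could be scored with an RMSE-style error against a tolerance band (the function names, the metric, and the tolerance are all hypothetical, not the paper's):

```python
import math

def tracking_rmse(device_output, operator_tracking):
    """Root-mean-square error between the control device's output
    values and the operator's tracking values (hypothetical metric)."""
    assert len(device_output) == len(operator_tracking)
    return math.sqrt(
        sum((d - o) ** 2 for d, o in zip(device_output, operator_tracking))
        / len(device_output)
    )

def is_suitable(device_output, operator_tracking, tolerance):
    """Operator tracking is judged suitable when the RMSE stays
    within an assumed tolerance band."""
    return tracking_rmse(device_output, operator_tracking) <= tolerance

# Example: rudder-angle commands (degrees) vs. the operator's tracking.
rudder = [0.0, 5.0, 10.0, 10.0, 5.0]
tracked = [0.0, 4.0, 9.0, 11.0, 5.0]
print(round(tracking_rmse(rudder, tracked), 3))  # → 0.775
```

A real evaluation would also account for transmission latency between ship and shore, which this sketch ignores.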

Evaluation of Image Quality Change by Truncated Region in Brain PET/CT (Brain PET에서 Truncated Region에 의한 영상의 질 평가)

  • Lee, Hong-Jae;Do, Yong-Ho;Kim, Jin-Eui
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.19 no.2
    • /
    • pp.68-73
    • /
    • 2015
  • Purpose The purpose of this study was to evaluate the change in image quality caused by the truncated region in the field of view (FOV) of attenuation correction computed tomography (AC-CT) in brain PET/CT. Materials and Methods A Biograph Truepoint 40 with TrueV (Siemens) was used as the scanner. $^{68}Ge$ phantom scans were performed with and without the brain holder using the brain PET/CT protocol. The PET attenuation correction factor (ACF) was evaluated according to whether the patient pallet was inside the FOV of the AC-CT. FBP, OSEM-3D, and PSF methods were applied for PET reconstruction. Parameters of 4 iterations, 21 subsets, and a 2 mm Gaussian filter were applied for the iterative reconstruction methods. Window settings of level 2900/width 6000 and level 4200/width 1000 were used for visual evaluation of the attenuation-corrected PET images. Vertical profiles of 5-slice and 20-slice summation images, smoothed with a 5 mm Gaussian filter, were produced for evaluating integral uniformity. Results Without the brain holder, the patient pallet was not covered in the FOV of the AC-CT because of the small FOV, which produced a defect in the ACF sinogram due to the truncated region. In the attenuation-corrected PET images acquired without the brain holder, a defect appeared in the lower part of the transverse image at window level 4200, width 1000. With and without the brain holder, the integral uniformities of the 5-slice and 20-slice summation images were 7.2%, 6.7% and 11.7%, 6.7%, respectively. Conclusion A truncated region caused by a small FOV results in a count defect in the occipital lobe of the brain in clinical or research studies. It is necessary to understand the effect of the truncated region and to apply an appropriate accessory for brain PET/CT.
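The abstract reports integral uniformity values without stating the formula; the conventional (max − min)/(max + min) definition over a count profile is assumed in this small sketch:

```python
def integral_uniformity(profile):
    """Integral uniformity (%) of a count profile, assuming the
    conventional (max - min) / (max + min) definition."""
    hi, lo = max(profile), min(profile)
    return 100.0 * (hi - lo) / (hi + lo)

# Hypothetical vertical-profile counts from a summation image:
print(integral_uniformity([90.0, 100.0, 110.0]))  # → 10.0
```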


A Variable Latency Goldschmidt's Floating Point Number Square Root Computation (가변 시간 골드스미트 부동소수점 제곱근 계산기)

  • Kim, Sung-Gi;Song, Hong-Bok;Cho, Gyeong-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.1
    • /
    • pp.188-198
    • /
    • 2005
  • The Goldschmidt iterative algorithm for finding a floating point square root calculates it by performing a fixed number of multiplications. In this paper, a variable latency Goldschmidt square root algorithm is proposed that performs multiplications a variable number of times until the error becomes smaller than a given value. To find the square root of a floating point number F, the algorithm repeats the following operations: $R_i=\frac{3-e_r-X_i}{2},\;X_{i+1}=X_i{\times}R_i^2,\;Y_{i+1}=Y_i{\times}R_i,\;i{\in}\{0,1,2,{\ldots},n-1\}$ with the initial values $X_0=Y_0=T^2{\times}F,\;T=\frac{1}{\sqrt{F}}+e_t$. The bits to the right of p fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of p is 28 for single precision floating point and 58 for double precision floating point. Let $X_i=1{\pm}e_i$; then $X_{i+1}=1-e_{i+1}$, where $e_{i+1}<\frac{3e_i^2}{4}{\mp}\frac{e_i^3}{4}+4e_r$. If $|X_i-1|<2^{\frac{-p+2}{2}}$, then $e_{i+1}<8e_r$ is less than the smallest number representable by a floating point number, so $\sqrt{F}$ is approximated by $\frac{Y_{i+1}}{T}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived for reciprocal square root tables ($T=\frac{1}{\sqrt{F}}+e_t$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a square root unit. It can also be used to construct optimized approximate reciprocal square root tables.
The results of this paper can be applied to many areas that utilize floating point numbers, such as digital signal processing, computer graphics, multimedia, scientific computing, etc.
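The variable-latency iteration described in the abstract can be sketched in floating point (this sketch uses native doubles and ignores the p-bit truncation that a hardware unit would apply, so $e_r$ does not appear in $R_i$; the seed T is passed in, standing in for the reciprocal square root table lookup):

```python
def goldschmidt_sqrt(F, T, tol=2**-28):
    """Variable-latency Goldschmidt square root (sketch following the
    abstract): iterate until |X_i - 1| falls below a tolerance instead
    of performing a fixed number of multiplications.
    T is an approximation of 1/sqrt(F), e.g. from a table lookup."""
    X = Y = T * T * F          # initial values X_0 = Y_0 = T^2 * F
    n_mults = 2                # multiplication count (illustrative)
    while abs(X - 1.0) > tol:
        R = (3.0 - X) / 2.0    # R_i = (3 - X_i) / 2
        X = X * R * R          # X_{i+1} = X_i * R_i^2
        Y = Y * R              # Y_{i+1} = Y_i * R_i
        n_mults += 3
    return Y / T, n_mults      # sqrt(F) ≈ Y_{i+1} / T

root, mults = goldschmidt_sqrt(2.0, 0.7)  # crude seed: 1/sqrt(2) ≈ 0.707
```

Because X converges quadratically to 1, a more accurate table seed T means fewer iterations, which is exactly the table-size/latency trade-off the paper analyzes.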

Recommending Core and Connecting Keywords of Research Area Using Social Network and Data Mining Techniques (소셜 네트워크와 데이터 마이닝 기법을 활용한 학문 분야 중심 및 융합 키워드 추천 서비스)

  • Cho, In-Dong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.127-138
    • /
    • 2011
  • The core service of most research portal sites is providing researchers with relevant research papers that match their research interests. This kind of service is effective and easy to use only when a user can provide correct and concrete information about a paper, such as the title, authors, and keywords. Unfortunately, most users of this service are not acquainted with concrete bibliographic information, which implies that most users inevitably experience repeated trial-and-error attempts at keyword-based search. Retrieving a relevant research paper is especially difficult when a user is a novice in the research domain and does not know appropriate keywords. In this case, a user should perform iterative searches as follows: i) perform an initial search with an arbitrary keyword, ii) acquire related keywords from the retrieved papers, and iii) perform another search with the acquired keywords. This usage pattern implies that the service quality and user satisfaction of a portal site are strongly affected by its keyword management and searching mechanism. To overcome this kind of inefficiency, some leading research portal sites have adopted association rule mining-based keyword recommendation services similar to the product recommendations of online shopping malls. However, keyword recommendation based only on association analysis has the limitation that it can show only a simple, direct relationship between two keywords. In other words, association analysis by itself is unable to present the complex relationships among many keywords in adjacent research areas. To overcome this limitation, we propose a hybrid approach for establishing an association network among keywords used in research papers.
The keyword association network is established in the following phases: i) the set of keywords specified in a certain paper is regarded as a set of co-purchased items, ii) association analysis is performed on the keywords to extract frequent keyword patterns that satisfy predefined thresholds of confidence, support, and lift, and iii) the frequent keyword patterns are schematized as a network to show the core keywords of each research area and the connecting keywords among two or more research areas. To assess the practical applicability of our approach, we performed a simple experiment with 600 keywords extracted from 131 research papers published in five prominent Korean journals in 2009. In the experiment, we used SAS Enterprise Miner for association analysis and the R software for social network analysis. As the final outcome, we present a network diagram and a cluster dendrogram for the keyword association network; the results are summarized in Section 4 of this paper. The main contributions of our proposed approach can be found in the following aspects: i) the keyword network can provide an initial roadmap of a research area to researchers who are novices in the domain, ii) a researcher can grasp the distribution of the many keywords neighboring a certain keyword, and iii) researchers can get ideas for converging different research areas by observing connecting keywords in the keyword association network. Further studies should include the following. First, the current version of our approach does not implement a standard meta-dictionary. For practical use, homonym, synonym, and multilingual problems should be resolved with a standard meta-dictionary. Additionally, clearer guidelines for clustering research areas and defining core and connecting keywords should be provided. Finally, intensive experiments not only on Korean research papers but also on international papers should be performed in further studies.
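The three phases above can be sketched with plain co-occurrence counting (the paper used SAS Enterprise Miner and R; this standalone sketch mines only keyword pairs, and the threshold defaults are illustrative, not the paper's):

```python
from itertools import combinations
from collections import Counter

def keyword_network(papers, min_support=0.02, min_conf=0.3, min_lift=1.0):
    """Treat each paper's keyword set as a co-purchased itemset (phase i),
    keep keyword pairs passing support/confidence/lift thresholds
    (phase ii), and return them as weighted network edges (phase iii)."""
    n = len(papers)
    item_count = Counter(k for p in papers for k in set(p))
    pair_count = Counter(
        pair for p in papers for pair in combinations(sorted(set(p)), 2)
    )
    edges = []
    for (a, b), c in pair_count.items():
        support = c / n
        conf = max(c / item_count[a], c / item_count[b])  # best direction
        lift = (c * n) / (item_count[a] * item_count[b])
        if support >= min_support and conf >= min_conf and lift >= min_lift:
            edges.append((a, b, round(lift, 2)))
    return edges
```

High-degree nodes in the resulting edge list correspond to core keywords of a research area, while nodes bridging otherwise separate clusters play the role of connecting keywords.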

A Variable Latency Goldschmidt's Floating Point Number Divider (가변 시간 골드스미트 부동소수점 나눗셈기)

  • Kim Sung-Gi;Song Hong-Bok;Cho Gyeong-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.2
    • /
    • pp.380-389
    • /
    • 2005
  • The Goldschmidt iterative algorithm for floating point division calculates the quotient by performing a fixed number of multiplications. In this paper, a variable latency Goldschmidt division algorithm is proposed that performs multiplications a variable number of times until the error becomes smaller than a given value. To calculate a floating point division $\frac{N}{F}$, multiply both the numerator and the denominator by $T=\frac{1}{F}+e_t$, so that it becomes $\frac{TN}{TF}=\frac{N_0}{F_0}$. The algorithm then repeats the following operations: $R_i=(2-e_r-F_i),\;N_{i+1}=N_i{\times}R_i,\;F_{i+1}=F_i{\times}R_i,\;i{\in}\{0,1,{\ldots},n-1\}$. The bits to the right of p fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of p is 29 for single precision floating point and 59 for double precision floating point. Let $F_i=1{\pm}e_i$; then $F_{i+1}=1-e_{i+1}$, where $e_{i+1}<e_i^2+e_r(1{\pm}e_i)$. If $|F_i-1|<2^{\frac{-p+3}{2}}$, then $e_{i+1}<16e_r$ is less than the smallest number representable by a floating point number, so $N_{i+1}$ approximates $\frac{N}{F}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived for reciprocal tables ($T=\frac{1}{F}+e_t$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a divider. It can also be used to construct optimized approximate reciprocal tables. The results of this paper can be applied to many areas that utilize floating point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
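As with the companion square root paper, the variable-latency division iteration can be sketched in floating point (native doubles here, so the hardware p-bit truncation is ignored and $e_r$ drops out of $R_i$; the seed T stands in for the reciprocal table lookup):

```python
def goldschmidt_div(N, F, T, tol=2**-29):
    """Variable-latency Goldschmidt division (sketch following the
    abstract): scale numerator and denominator by T ≈ 1/F, then iterate
    until |F_i - 1| drops below a tolerance instead of running a fixed
    number of iterations."""
    Ni, Fi = N * T, F * T            # N_0 / F_0 = TN / TF
    n_mults = 2
    while abs(Fi - 1.0) > tol:
        R = 2.0 - Fi                 # R_i = 2 - F_i
        Ni, Fi = Ni * R, Fi * R      # N_{i+1} = N_i*R_i, F_{i+1} = F_i*R_i
        n_mults += 2
    return Ni, n_mults               # N/F ≈ N_{i+1}

quotient, mults = goldschmidt_div(1.0, 3.0, 0.33)  # crude seed: 1/3 ≈ 0.33
```

Since each iteration squares the error of $F_i$ toward 1, a larger reciprocal table (more accurate T) cuts the number of multiplications, which is the trade-off the paper quantifies.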

Comparison of Clinical Tissue Dose Distributions with Those Derived from the Energy Spectrum of a 15 MV X-ray Linear Accelerator Using the Transmitted Dose of a Lead Filter (연(鉛)필터의 투과선량을 이용한 15 MV X선의 에너지스펙트럼 결정과 조직선량 비교)

  • Choi, Tae-Jin;Kim, Jin-Hee;Kim, Ok-Bae
    • Progress in Medical Physics
    • /
    • v.19 no.1
    • /
    • pp.80-88
    • /
    • 2008
  • Recent radiotherapy treatment planning systems (RTPS) generally adopt kernel-beam convolution methods for computing tissue dose. To obtain the depth and profile doses at a given depth for a given photon beam, the energy spectrum was reconstructed from the attenuation of the filter-transmitted dose through iterative numerical analysis. The experiments were performed with 15 MV X-rays (Oncor, Siemens), and an ionization chamber (0.125 cc, PTW) was used for measurements of the filter-transmitted dose. The energy spectrum of the 15 MV X-rays was determined from the dose transmitted through lead filters from 0.51 cm to 8.04 cm thick, with an energy interval of 0.25 MeV. In the results, the peak flux appeared at 3.75 MeV, and the mean energy of the 15 MV X-rays was 4.639 MeV in these experiments. The transmitted dose through the lead filters agreed within 0.6% on average, with a maximum discrepancy of 2.5% for the 5 cm lead filter. Since tissue dose depends strongly on beam energy, the lateral doses were derived from the lateral spread of energy fluence through flattening-filter shapes of tangent 0.075 and 0.125, which yielded mean energies of 4.211 MeV and 3.906 MeV. In these experiments, the analyzed energy spectrum was applied to obtain the percent depth dose in the RTPS (XiO, Version 4.3.1, CMS). The generated percent depth doses for fields from $6{\times}6cm^2$ to $30{\times}30cm^2$ were very close to the experimental measurements, within 1% discrepancy on average. The computed dose profiles were within 1% of measurement for a $10{\times}10cm^2$ field, while larger field sizes were within 2% uncertainty. The resulting algorithm produced an X-ray spectrum matching both beam quality and quantity with small discrepancies in these experiments.


Evaluation of Tensions and Prediction of Deformations for Fabric Reinforced-Earth Walls (섬유 보강토벽체의 인장력 평가 및 변형 예측)

  • Kim, Hong-Taek;Lee, Eun-Su;Song, Byeong-Ung
    • Geotechnical Engineering
    • /
    • v.12 no.4
    • /
    • pp.157-178
    • /
    • 1996
  • Current design methods for reinforced earth structures take no account of the magnitude of the strains induced in the tensile members, as these are invariably manufactured from high modulus materials such as steel, where strains are unlikely to be significant. With fabrics, however, large strains may frequently be induced, and it is important to determine these to enable the stability of the structure to be assessed. In the present paper, an internal design method of analysis relating to the use of fabric reinforcements in reinforced earth structures for both stress and strain considerations is presented. For the internal stability analysis against rupture and pullout of the fabric reinforcements, a strain compatibility analysis procedure is used that considers the effects of reinforcement stiffness, relative movement between the soil and reinforcements, and compaction-induced stresses as studied by Ehrlich and Mitchell. However, the soil-reinforcement interaction is modeled by relating nonlinear elastic soil behavior to the nonlinear response of the reinforcement. The soil constitutive model used is a modified version of the hyperbolic soil model and compaction stress model proposed by Duncan et al., and an iterative step-loading approach is used to take nonlinear soil behavior into consideration. The effects of seepage pressures are also dealt with in the proposed method of analysis. For purposes of assessing the strain behavior of the fabric reinforcements, a nonlinear model of hyperbolic form describing the load-extension relation of fabrics is employed. A procedure for specifying the strength characteristics of Paraweb polyester fibre multicord, needle-punched non-woven geotextile, and knitted polyester geogrid is also described, which may provide a more convenient way of incorporating the fabric properties into the prediction of fabric deformations.
An attempt is further made to define the improvement in bond linkage at the interconnecting nodes of the fabric reinforced earth structure due to the confining stress. The proposed method of analysis has been applied to estimate the maximum tensions, deformations, and strains of the fabric reinforcements. The results are then compared with those of finite element analyses and experimental tests, and show generally good agreement, indicating the effectiveness of the proposed method of analysis. Analytical parametric studies are also carried out to investigate the effects of relative soil-fabric reinforcement stiffness, locked-in stresses, compaction load, and seepage pressures on the magnitude and variation of the fabric deformations.


Evaluation of Image Noise and Radiation Dose Analysis in Brain CT Using ASIR (Adaptive Statistical Iterative Reconstruction) (ASIR를 이용한 두부 CT의 영상 잡음 평가 및 피폭선량 분석)

  • Jang, Hyon-Chol;Kim, Kyeong-Keun;Cho, Jae-Hwan;Seo, Jeong-Min;Lee, Haeng-Ki
    • Journal of the Korean Society of Radiology
    • /
    • v.6 no.5
    • /
    • pp.357-363
    • /
    • 2012
  • The purpose of this study was to evaluate image noise and quality and to assess dose reduction in head computed tomography examinations using an adaptive statistical iterative reconstruction (ASIR) algorithm. Head CT examinations were divided into a group without ASIR (group A) and a group with ASIR 50% (group B). In the phantom study, the measured average CT noise of group B was reduced by 46.9%, 48.2%, 43.2%, and 47.9% relative to group A in the central region (A) and the peripheral regions (B, C, D). For the displayed-image quality evaluation, CT numbers were measured quantitatively and the noise was analyzed. The difference in image noise between groups A and B was statistically significant, with group A showing higher noise than group B (31.87 HU, 31.78 HU, 26.6 HU, 30.42 HU; P<0.05). In the qualitative image evaluation using a clinical head image evaluation table with a maximum score of 80, the observers scored group A 73.17 and 74.2, and group B 71.77 and 72.47; the difference was not statistically significant (P>0.05), and no image was inappropriate for diagnosis. Regarding exposure dose, applying ASIR 50% reduced the radiation dose by 47.6% without degrading image quality. In conclusion, if ASIR is applied in clinical practice, examinations with a substantially reduced dose are considered feasible, which will be a positive factor when the examiner makes decisions.
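The noise-reduction percentages above are presumably computed relative to the non-ASIR measurement; a minimal sketch of that assumed calculation (the formula and the group-B value below are illustrative, not taken from the paper):

```python
def noise_reduction_pct(noise_without_asir, noise_with_asir):
    """Percent noise reduction relative to the non-ASIR measurement
    (assumed definition; the abstract does not state the formula)."""
    return 100.0 * (noise_without_asir - noise_with_asir) / noise_without_asir

# Group A central-region noise from the abstract (31.87 HU) against a
# hypothetical group B value chosen to illustrate a 46.9% reduction:
print(round(noise_reduction_pct(31.87, 16.92), 1))  # → 46.9
```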

Development of Gated Myocardial SPECT Analysis Software and Evaluation of Left Ventricular Contraction Function (게이트 심근 SPECT 분석 소프트웨어의 개발과 좌심실 수축 기능 평가)

  • Lee, Byeong-Il;Lee, Dong-Soo;Lee, Jae-Sung;Chung, June-Key;Lee, Myung-Chul;Choi, Heung-Kook
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.2
    • /
    • pp.73-82
    • /
    • 2003
  • Objectives: A new software package (Cardiac SPECT Analyzer: CSA) was developed for quantification of volumes and ejection fraction on gated myocardial SPECT. Volumes and ejection fraction by CSA were validated by comparison with those quantified by the Quantitative Gated SPECT (QGS) software. Materials and Methods: Gated myocardial SPECT was performed in 40 patients with ejection fractions from 15% to 85%. In 26 patients, gated myocardial SPECT was acquired again with the patients in situ. A cylinder model was used to eliminate noise semi-automatically, and profile data were extracted using Gaussian fitting after smoothing. The boundary points of the endo- and epicardium were found using an iterative learning algorithm. End-diastolic (EDV) and end-systolic (ESV) volumes and ejection fraction (EF) were calculated. These values were compared with those calculated by QGS, the same gated SPECT data were repeatedly quantified by CSA, and the variation of the values was examined on sequential measurements of the same patients on the repeated acquisition. Results: From the 40 patient data sets, EF, EDV, and ESV by CSA were correlated with those by QGS with correlation coefficients of 0.97, 0.92, and 0.96. Two standard deviations (2SD) of EF on the Bland-Altman plot was 10.1%. Repeated measurements of EF, EDV, and ESV by CSA were correlated with each other with coefficients of 0.96, 0.99, and 0.99, respectively. On repeated acquisition, reproducibility was also excellent, with correlation coefficients of 0.89, 0.97, and 0.98, coefficients of variation of 8.2%, 5.4 mL, and 8.5 mL, and 2SD of 10.6%, 21.2 mL, and 16.4 mL on the Bland-Altman plot for EF, EDV, and ESV. Conclusion: We developed the CSA software for quantification of volumes and ejection fraction on gated myocardial SPECT. The volumes and ejection fraction quantified using this software were found to be valid in terms of correctness and precision.
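The quantities compared between CSA and QGS reduce to the standard ejection-fraction formula, and the agreement analysis to Bland-Altman limits (mean difference ± 2SD); both definitions are assumed here from the abstract's terminology:

```python
def ejection_fraction(edv, esv):
    """Left-ventricular ejection fraction (%) from end-diastolic (EDV)
    and end-systolic (ESV) volumes: EF = (EDV - ESV) / EDV * 100."""
    return 100.0 * (edv - esv) / edv

def bland_altman_limits(a, b):
    """Mean difference ± 2 SD between paired measurements (e.g. CSA vs
    QGS ejection fractions), the limits quoted in the abstract."""
    diffs = [x - y for x, y in zip(a, b)]
    m = sum(diffs) / len(diffs)
    sd = (sum((d - m) ** 2 for d in diffs) / len(diffs)) ** 0.5
    return m - 2 * sd, m + 2 * sd

print(ejection_fraction(120.0, 60.0))  # → 50.0
```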

Evaluating the Impact of Attenuation Correction Difference According to the Lipiodol in PET/CT after TACE (간동맥 화학 색전술에 사용하는 Lipiodol에 의한 감쇠 오차가 PET/CT검사에서 영상에 미치는 영향 평가)

  • Cha, Eun Sun;Hong, Gun chul;Park, Hoon;Choi, Choon Ki;Seok, Jae Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.17 no.1
    • /
    • pp.67-70
    • /
    • 2013
  • Purpose: With the surge in patients with hepatocellular carcinoma, transarterial chemoembolization (TACE) is one of the effective interventional procedures. PET/CT examination plays an important role in determining the presence of residual cancer cells and metastasis, and in assessing prognosis after embolization. On the other hand, lipiodol, the embolic material used in TACE, produces artifacts in PET/CT examinations, and these artifacts influence quantitative evaluation. In this study, the radioactivity density and the percentage error were evaluated to determine the extent of the impact of lipiodol on PET/CT images. Materials and Methods: A 1994 NEMA phantom was scanned for 2 minutes 30 seconds per bed after filling three inserts with Teflon, water, and lipiodol and filling the remaining background with a well-mixed radioactive solution of $20{\pm}10MBq$. The phantom images were reconstructed with an iterative reconstruction method using 2 iterations and 20 subsets. Regions of interest were placed over the Teflon, water, and lipiodol inserts, over the artifact region between inserts, and over the background, and the radioactivity density (kBq/ml) and the % difference were calculated and compared. Results: The radioactivity densities of the regions of interest for Teflon, water, lipiodol, the artifact region between inserts, and the background were $0.09{\pm}0.04$, $0.40{\pm}0.17$, $1.55{\pm}0.75$, $2.5{\pm}1.09$, and $2.65{\pm}1.16 kBq/ml$ (P<0.05), which was statistically significant. The percentage error of lipiodol was 118% compared with water, 52% compared with the background activity, and 180% compared with Teflon. Conclusion: We found that errors arise from the influence of attenuation correction when PET/CT scans are performed after lipiodol injection; the radioactivity density of lipiodol was higher than that of the other inserts but lower than the background. Non-attenuation-corrected images should also be reviewed, and when PET/CT imaging is performed after TACE, the extent of the impact of lipiodol should be taken into consideration.
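The abstract does not define its "% Difference", but the symmetric form |a − b| over the mean of a and b approximately reproduces the reported values (e.g. lipiodol vs. water from the listed radioactivity densities), so it is assumed in this sketch:

```python
def percent_difference(a, b):
    """% Difference as |a - b| over the mean of a and b (assumed
    definition; it approximately reproduces the abstract's values)."""
    return 100.0 * abs(a - b) / ((a + b) / 2.0)

# Radioactivity densities (kBq/ml) from the abstract:
print(round(percent_difference(1.55, 0.40)))  # lipiodol vs water → 118
```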
