• Title/Summary/Keyword: robustness


A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programming (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.8, pp.253-263, 2013
  • Frequency Scanning Interferometry (FSI), one of the most promising optical surface measurement techniques, generally achieves superior optical performance compared with other 3-dimensional measurement methods because its hardware remains fixed during operation: only the light frequency is scanned over a specific spectral band, without vertical scanning of the target surface or the objective lens. An FSI system collects a set of interference fringe images while changing the frequency of the light source, transforms the intensity data of the acquired images into frequency information, and calculates the height profile of the target with a frequency analysis based on the Fast Fourier Transform (FFT). However, it still suffers from optical noise on target surfaces and from relatively long processing times due to the number of images acquired in the frequency scanning phase. In this work, 1) a Polarization-based Frequency Scanning Interferometry (PFSI) system is proposed for robustness to optical noise. It consists of a tunable laser as the light source, a λ/4 plate in front of the reference mirror, a λ/4 plate in front of the target object, a polarizing beam splitter (PBS), a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a λ/2 plate between the PBS and the polarizer of the light source. The proposed system overcomes low-contrast fringe images by means of the polarization technique and also allows the light distribution between the object beam and the reference beam to be controlled. 2) A signal processing acceleration method is proposed for PFSI based on a parallel processing architecture, consisting of parallel processing hardware and software such as the Graphics Processing Unit (GPU) and the Compute Unified Device Architecture (CUDA). As a result, the processing time reaches the tact-time level of real-time processing. Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the obtained results show the effectiveness of the proposed system and method.
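
As a concrete illustration of the FFT step described in this abstract, the sketch below estimates a height map from a stack of fringe images, assuming a frequency-swept source with a uniform step. The function name, array layout, and step size are hypothetical, not the paper's implementation; a real pipeline would also refine the integer FFT peak and, as the paper does, move this computation onto the GPU with CUDA.

```python
import numpy as np

C = 2.998e8  # speed of light [m/s]

def height_map_from_fringe_stack(images, dnu):
    """Estimate a per-pixel optical path difference (height map) from an
    FSI fringe stack via FFT peak detection.

    images: (N, H, W) array of fringe intensities, one frame per laser
            frequency step; dnu: frequency step between frames [Hz].
    """
    n = images.shape[0]
    # Remove the DC (mean) component so the FFT peak is the fringe frequency.
    stack = images - images.mean(axis=0, keepdims=True)
    spectrum = np.abs(np.fft.rfft(stack, axis=0))
    spectrum[0] = 0.0  # suppress any residual DC
    k_peak = spectrum.argmax(axis=0)      # dominant fringe-frequency bin per pixel
    # The fringe period along the frequency scan is c/(2d), so bin k maps to
    # an optical path difference d = k * c / (2 * N * dnu).
    return k_peak * C / (2.0 * n * dnu)   # height map [m], shape (H, W)
```

The integer-bin result is coarse; sub-bin peak interpolation (or zero-padding before the FFT) is the usual refinement.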

Development of an Automatic Seed Marker Registration Algorithm Using CT and kV X-ray Images (CT 영상 및 kV X선 영상을 이용한 자동 표지 맞춤 알고리듬 개발)

  • Cheong, Kwang-Ho;Cho, Byung-Chul;Kang, Sei-Kwon;Kim, Kyoung-Joo;Bae, Hoon-Sik;Suh, Tae-Suk
    • Radiation Oncology Journal, v.25 no.1, pp.54-61, 2007
  • Purpose: The purpose of this study is to develop a practical method for determining accurate marker positions for prostate cancer radiotherapy using CT images and kV x-ray images obtained with the on-board imager (OBI). Materials and Methods: Three gold seed markers were implanted into the reference position inside the prostate gland by a urologist. Multiple digital image processing techniques were used to determine the seed marker positions, and the center-of-mass (COM) technique was employed to determine a representative reference seed marker position. A setup discrepancy can be estimated by comparing a computed COM_OBI with the reference COM_CT. The proposed algorithm was applied to a seed phantom and to four prostate cancer patients with seed implants treated in our clinic. Results: In the phantom study, the calculated COM_CT and COM_OBI agreed with COM_actual to within a millimeter. The algorithm could also localize each seed marker correctly and calculated COM_CT and COM_OBI for all CT and kV x-ray image sets, respectively. Discrepancies of setup errors between 2D-2D matching results using the OBI application and results using the proposed algorithm were less than one millimeter for each axis. The setup error was quite patient dependent, ranging over 0.1±2.7 to 1.8±6.6 mm in the AP direction, 0.8±1.6 to 2.0±2.7 mm in the SI direction, and -0.9±1.5 to 2.8±3.0 mm in the lateral direction. Conclusion: As it took less than 10 seconds to evaluate a setup discrepancy, the method can help reduce setup correction time while minimizing subjective, user-dependent factors. However, the on-line correction process should be integrated into the treatment machine control system for a more reliable procedure.
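
The COM comparison at the heart of this method is simple enough to show directly. A minimal sketch with hypothetical marker coordinates (the marker detection and frame alignment in the actual algorithm are more involved):

```python
import numpy as np

def center_of_mass(markers):
    """Representative position of a set of seed markers: the unweighted
    mean of their 3D coordinates."""
    return np.asarray(markers, dtype=float).mean(axis=0)

def setup_discrepancy(markers_ct, markers_obi):
    """Estimated setup error: COM of the markers localized on the OBI kV
    images minus COM of the same markers in the planning CT."""
    return center_of_mass(markers_obi) - center_of_mass(markers_ct)

# Hypothetical coordinates [mm] in a shared patient coordinate system.
shift = setup_discrepancy(
    markers_ct=[(12.1, -4.3, 55.0), (8.7, -1.9, 50.2), (15.4, -6.0, 48.8)],
    markers_obi=[(13.0, -3.8, 55.6), (9.5, -1.3, 50.9), (16.2, -5.5, 49.5)],
)
print(shift)  # per-axis setup discrepancy, e.g. AP/SI/lateral
```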

Development of an Automatic 3D Coregistration Technique of Brain PET and MR Images (뇌 PET과 MR 영상의 자동화된 3차원적 합성기법 개발)

  • Lee, Jae-Sung;Kwark, Cheol-Eun;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul;Park, Kwang-Suk
    • The Korean Journal of Nuclear Medicine, v.32 no.5, pp.414-424, 1998
  • Purpose: Cross-modality coregistration of positron emission tomography (PET) and magnetic resonance (MR) images can enhance the clinical information. In this study we propose a refined technique to improve the robustness of registration and to implement a more realistic visualization of the coregistered images. Materials and Methods: Using the sinogram of the PET emission scan, we extracted a robust head boundary and used the boundary-enhanced PET to coregister PET with MR. Pixels at 10% of the maximum pixel value were considered the boundary of the sinogram, and the boundary pixel values were replaced with the maximum value of the sinogram. One hundred eighty boundary points were extracted at intervals of about 2 degrees from each slice of the MR images using a simple threshold method. The best affine transformation between the two point sets was found by least-squares fitting that minimizes the sum of Euclidean distances between the point sets, and calculation time was reduced using a pre-defined distance map. Finally, we developed an automatic coregistration program using this boundary detection and surface matching technique, together with a new weighted normalization technique to display the coregistered PET and MR images simultaneously. Results: Using our newly developed method, robust extraction of the head boundary was possible and spatial registration was successfully performed; the mean displacement error was less than 2.0 mm. In the visualization of coregistered images using the weighted normalization method, structures shown in the MR image could be realistically represented. Conclusion: Our refined technique can practically enhance the performance of automated three-dimensional coregistration.
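
The pre-defined distance map mentioned above is the key speed trick: precompute, once, the distance from every pixel to the nearest MR boundary pixel, so each candidate transform of the PET boundary points can be scored cheaply. The 2D sketch below shows that idea under assumed inputs (a boolean MR boundary image and an (N, 2) PET point set); the paper fits a full affine transform, whereas this sketch scores a rigid one, and an optimizer such as scipy.optimize.minimize would search the parameters that minimize the cost.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def make_distance_map(mr_boundary):
    """Euclidean distance from every pixel to the nearest MR boundary
    pixel (boundary pixels are True), computed once before matching."""
    return distance_transform_edt(~mr_boundary)

def apply_rigid(points, angle, shift):
    """2D rigid transform (rotation by `angle` plus translation) of (N, 2) points."""
    c, s = np.cos(angle), np.sin(angle)
    return points @ np.array([[c, -s], [s, c]]).T + shift

def matching_cost(points, dist_map):
    """Sum of distances from transformed PET boundary points to the
    nearest MR boundary pixel, read off the precomputed distance map."""
    idx = np.clip(np.rint(points).astype(int), 0, np.array(dist_map.shape) - 1)
    return dist_map[idx[:, 0], idx[:, 1]].sum()
```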


Dose Verification Study of Brachytherapy Plans Using Monte Carlo Methods and CT Images (CT 영상 및 몬테칼로 계산에 기반한 근접 방사선치료계획의 선량분포 평가 방법 연구)

  • Cheong, Kwang-Ho;Lee, Me-Yeon;Kang, Sei-Kwon;Bae, Hoon-Sik;Park, So-Ah;Kim, Kyoung-Joo;Hwang, Tae-Jin;Oh, Do-Hoon
    • Progress in Medical Physics, v.21 no.3, pp.253-260, 2010
  • Most brachytherapy treatment planning systems employ a dosimetry formalism based on the AAPM TG-43 report, which does not appropriately consider tissue heterogeneity. In this study we aimed to set up a simple Monte Carlo-based intracavitary high-dose-rate brachytherapy (IC-HDRB) plan verification platform, focusing particularly on the robustness of direct Monte Carlo dose calculation using material and density information derived from CT images. CT images of slab phantoms and of a uterine cervical cancer patient were used for brachytherapy plans based on the Plato (Nucletron, Netherlands) brachytherapy planning system. Monte Carlo simulations were implemented using the parameters from the Plato system and compared with EBT film dosimetry and conventional dose computations. The EGSnrc-based DOSXYZnrc code was used for the Monte Carlo simulations. Each ¹⁹²Ir source of the afterloader was approximately modeled as a parallelepiped inside the converted CT data set, whose voxel size was 2×2×2 mm³. Brachytherapy dose calculations based on TG-43 showed good agreement with the Monte Carlo results in homogeneous media whose density was close to water, but there were significant errors in high-density materials. For the patient case, point A and point B dose differences were less than 3%, while the mean dose discrepancy was as much as 5%. Conventional dose computation methods might underdose the targets by not accounting for the effects of high-density materials. The proposed platform was shown to be feasible and to have good dose calculation accuracy. One should be careful when confirming a plan using a conventional brachytherapy dose computation method, and an independent dose verification system such as the one developed in this study might be helpful.
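
The conversion of CT images into material and density information is the step that lets the Monte Carlo calculation see tissue heterogeneity. As a loose illustration only, the sketch below bins CT numbers into materials and maps them to mass densities with a piecewise-linear ramp; every breakpoint, material name, and density here is an illustrative assumption, not the paper's calibration (EGSnrc workflows typically use a calibrated ramp, e.g. via the ctcreate utility).

```python
import numpy as np

# Hypothetical HU-to-material bins (illustrative, not a clinical calibration).
MATERIALS = [
    (-1000, -950, "AIR"),
    (-950,  -120, "LUNG"),
    (-120,   100, "SOFT_TISSUE"),
    (100,   3000, "BONE"),
]

def hu_to_density(hu):
    """Piecewise-linear mass density [g/cm^3] from CT numbers (illustrative)."""
    return np.interp(hu, [-1000, 0, 100, 1500], [0.001, 1.0, 1.05, 1.9])

def hu_to_material(hu):
    """Assign a material label to every voxel by HU range."""
    hu_arr = np.asarray(hu)
    out = np.full(hu_arr.shape, "SOFT_TISSUE", dtype=object)
    for lo, hi, name in MATERIALS:
        out[(hu_arr >= lo) & (hu_arr < hi)] = name
    return out
```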

Production Traits and Stress Responses of Five Korean Native Chicken Breeds (한국토종닭 5품종의 생산능력 및 스트레스 반응 정도)

  • Cho, Eun Jung;Choi, Eun Sik;Jeong, Hyeon Cheol;Kim, Bo Kyung;Sohn, Sea Hwan
    • Korean Journal of Poultry Science, v.47 no.2, pp.95-105, 2020
  • This study presents the production and physiological characteristics of five Korean native chicken (KNC) breeds: Hwanggalsaek Jaeraejong (HJ), Korean Rhode Island Red (KR), Korean White Leghorn (KL), Korean Brown Cornish (KC), and Korean Ogye (KO). We investigated their production performance, vitality, and stress responses, measuring the survival rate, body weight, age at first egg-laying, hen-day egg production, egg weight, amount of telomeric DNA, heterophil-lymphocyte ratio (H/L ratio), and heat shock protein (HSP)-70, HSP-90α, and HSP-90β gene expression levels for 493 KNCs. The survival rate was highest in KR and lowest in KO. Body weights were consistently high in the order KC, KR, HJ, KO, and KL. Average hen-day egg production was highest in KL and lowest in KC, while the amount of telomeric DNA was highest in KR and lowest in KC. Furthermore, both the H/L ratio and the HSP-90β gene expression level were highest in KC and lowest in KR. These results indicate that the KR breed is highly resistant to stress, whereas KC is more susceptible to stress. Taken together, with further improvement the KC breed would be well suited for use as a Korean broiler breed, while KL would be more appropriate as a Korean layer breed. In addition, the KR breed is appropriate for use as a maternal chicken breeder based on its good production capacity and excellent robustness, while the HJ breed is worth improving as a high-quality Korean meat breed based on its excellent meat quality.

The KALION Automated Aerosol Type Classification and Mass Concentration Calculation Algorithm (한반도 에어로졸 라이다 네트워크(KALION)의 에어로졸 유형 구분 및 질량 농도 산출 알고리즘)

  • Yeo, Huidong;Kim, Sang-Woo;Lee, Chulkyu;Kim, Dukhyeon;Kim, Byung-Gon;Kim, Sewon;Nam, Hyoung-Gu;Noh, Young Min;Park, Soojin;Park, Chan Bong;Seo, Kwangsuk;Choi, Jin-Young;Lee, Myong-In;Lee, Eun hye
    • Korean Journal of Remote Sensing, v.32 no.2, pp.119-131, 2016
  • This paper describes the automated aerosol-type classification and mass concentration calculation algorithm used for real-time data processing and aerosol products in the Korea Aerosol Lidar Observation Network (KALION, http://www.kalion.kr). The KALION algorithm provides aerosol-cloud classification and three aerosol types (clean continental, dust, and polluted continental/urban pollution aerosols), and it generates vertically resolved distributions of the aerosol extinction coefficient and mass concentration. An extinction-to-backscatter ratio (lidar ratio) of 63.31 sr and an aerosol mass extinction efficiency of 3.36 m²/g (1.39 m²/g for dust), determined from co-located sky radiometer and PM10 mass concentration measurements in Seoul from June 2006 to December 2015, are used in the algorithm. To assess the robustness of the algorithm, we investigated the pollution and dust events in Seoul on 28-30 March 2015. The aerosol-type identification, especially for dust particles, agreed with the official Asian dust report by the Korea Meteorological Administration, and the lidar-derived mass concentrations matched PM10 mass concentrations well. The mean bias difference between PM10 and lidar-derived mass concentrations estimated from June 2006 to December 2015 in Seoul is about 3 μg/m³. Lidar ratios and aerosol mass extinction efficiencies for each aerosol type will be developed and implemented into the KALION algorithm, and more products, such as ice and water-droplet cloud discrimination, cloud base height, and boundary layer height, will be produced.
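
The retrieval described above reduces to the two constants quoted in the abstract: convert backscatter to extinction with the lidar ratio, then divide by the mass extinction efficiency. A minimal sketch, assuming an aerosol backscatter profile as input (function and variable names are ours, not KALION's):

```python
import numpy as np

# Values quoted in the abstract for Seoul (June 2006 - December 2015).
LIDAR_RATIO_SR = 63.31                            # extinction-to-backscatter ratio S [sr]
MASS_EXT_EFF = {"pollution": 3.36, "dust": 1.39}  # mass extinction efficiency [m^2/g]

def mass_concentration(backscatter, aerosol_type="pollution"):
    """Vertically resolved mass concentration from an aerosol backscatter
    profile [m^-1 sr^-1]: extinction = S * backscatter, then divide by the
    mass extinction efficiency. Returns micrograms per cubic meter."""
    extinction = LIDAR_RATIO_SR * np.asarray(backscatter)   # [m^-1]
    grams_per_m3 = extinction / MASS_EXT_EFF[aerosol_type]  # [g/m^3]
    return grams_per_m3 * 1e6                               # [ug/m^3]
```

For a typical backscatter of 1e-6 m⁻¹sr⁻¹ this gives roughly 19 μg/m³, a plausible urban value, which is why the profile-integrated results can be compared directly against surface PM10.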

A Case Study of National Food Safety Control System Assessment in the U.S. (미국의 국가식품안전관리체계 평가 사례연구)

  • Lee, Heejung
    • Journal of Food Hygiene and Safety, v.32 no.3, pp.179-186, 2017
  • For more efficient and proactive safety control of imported food, a new trend is emerging in the U.S., which assesses the food safety control systems of exporting countries using the Systems Recognition Assessment Tool to help ensure the safety of imported foods. This study examines trends in the development and application of the assessment tool and country assessment reports in the U.S., where an active discussion on this issue is in progress; expert interviews were also conducted. The U.S. Systems Recognition Assessment Tool was developed by the FDA to recognize the potential value of leveraging the expertise of foreign food safety systems and to help ensure the safety of imported food. The tool is comprised of ten standards and provides an objective framework for determining the robustness of trading partners' overall food safety systems. Using this tool, the U.S. FDA conducted preliminary assessments of the food safety control systems of New Zealand and Canada; according to the U.S.-New Zealand and U.S.-Canada assessment reports, the overall structure of the systems was similar between the countries. Summarizing the opinions of the experts, such national food safety control system assessments may be utilized in sanitary assessment and in controlling the border inspection frequency of imported food, which would contribute to a more effective distribution of the national budget and increased public trust. Additionally, international collaboration, as well as securing qualified experts and a sufficient budget, appears to be crucial to further increasing the utility of national food safety control system assessments. In conclusion, firstly, it is critically important for the competent authority of South Korea to respond proactively to the international trend in national food safety control system assessment by identifying the details of its background, assessment purpose, core assessment elements, and assessment procedures. Secondly, it is necessary to identify and complement the weaknesses of Korea's food safety control system by reviewing it against the U.S. Systems Recognition Assessment Tool. Thirdly, by adapting the assessment results of exporting countries' food safety control systems to the imported food inspection intensity, the resources previously used to inspect food imported from accredited countries can be redistributed to inspecting food imported from unaccredited countries, contributing to more efficient imported food safety control. Fourthly, the competent authority of South Korea should also consider developing its own assessment tool designed to reflect the unique characteristics of its food safety control system and international guidelines.

Evaluation of Viral Inactivation Efficacy of a Continuous Flow Ultraviolet-C Reactor (UVivatec) (연속 유동 Ultraviolet-C 반응기(UVivatec)의 바이러스 불활화 효과 평가)

  • Bae, Jung-Eun;Jeong, Eun-Kyo;Lee, Jae-Il;Lee, Jeong-Im;Kim, In-Seop;Kim, Jong-Su
    • Microbiology and Biotechnology Letters, v.37 no.4, pp.377-382, 2009
  • Viral safety is an important prerequisite for clinical preparations of all biopharmaceuticals derived from plasma, cell lines, or tissues of human or animal origin. To ensure this safety, the implementation of multiple viral clearance (inactivation and/or removal) steps has been highly recommended for the manufacture of biopharmaceuticals. Among the possible viral clearance strategies, Ultraviolet-C (UVC) irradiation has been known as an effective viral inactivation method; however, it has been dismissed by the biopharmaceutical industry because of the potential for protein damage and the difficulty of delivering uniform doses. Recently, a continuous flow UVC reactor (UVivatec) was developed to provide highly efficient mixing and to maximize virus exposure to the UV light. In order to investigate the effectiveness of UVivatec in inactivating viruses without causing significant protein damage, the feasibility of the UVC irradiation process was studied with a commercial therapeutic protein. The recovery yield under the optimized condition of 3,000 J/m² irradiation was more than 98%. The efficacy and robustness of the UVC reactor were evaluated with regard to the inactivation of human immunodeficiency virus (HIV), hepatitis A virus (HAV), bovine herpes virus (BHV), bovine viral diarrhea virus (BVDV), porcine parvovirus (PPV), bovine parvovirus (BPV), minute virus of mice (MVM), reovirus type 3 (REO), and bovine parainfluenza virus type 3 (BPIV). Non-enveloped viruses (HAV, PPV, BPV, MVM, and REO) were completely inactivated to undetectable levels by 3,000 J/m² irradiation. Enveloped viruses such as HIV, BVDV, and BPIV were also completely inactivated to undetectable levels; however, BHV was incompletely inactivated, with slight residual infectivity remaining even after 3,000 J/m² irradiation. The log reduction factors achieved by UVC irradiation were ≥3.89 for HIV, ≥5.27 for HAV, 5.29 for BHV, ≥5.96 for BVDV, ≥4.37 for PPV, ≥3.55 for BPV, ≥3.51 for MVM, ≥4.20 for REO, and ≥4.15 for BPIV. These results indicate that UVC irradiation using UVivatec was very effective and robust in inactivating all the viruses tested.
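
The "≥" figures above are log reduction factors capped by the assay detection limit: when no virus is detected after the step, only a lower bound can be reported. A minimal sketch of that calculation, with made-up virus loads:

```python
import math

def log_reduction_factor(load_before, load_after, detection_limit=None):
    """Log10 reduction factor of a clearance step: log10(load before) minus
    log10(load after). If nothing is detected after the step, the load is
    capped at the assay detection limit and the result is a lower bound,
    which is why the abstract reports 'greater than or equal to' values.
    Returns (lrf, is_lower_bound)."""
    if load_after <= 0:
        if detection_limit is None:
            raise ValueError("need a detection limit when nothing is detected")
        return math.log10(load_before / detection_limit), True
    return math.log10(load_before / load_after), False

lrf, is_bound = log_reduction_factor(1e9, 0, detection_limit=1e4)
print(f"{'>=' if is_bound else ''}{lrf:.2f}")  # >=5.00 for these made-up loads
```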

Robust Planning of Intensity-modulated Proton Therapy for Prostate Cancer (전립선암 치료를 위한 세기조절 양성자 로버스트 치료계획)

  • Park, Su Yeon;Kim, Jong Sik;Park, Ju Young;Park, Won;Ju, Sang Gyu
    • The Journal of Korean Society for Radiation Therapy, v.25 no.1, pp.25-31, 2013
  • Purpose: The aim of this study is to evaluate the dosimetric properties of a robust planning strategy for intensity-modulated proton therapy (IMPT), taking into account the uncertainties in effective proton range and setup, as compared with plain IMPT and photon intensity-modulated radiation therapy (photon-IMRT) in prostate cancer treatment. Materials and Methods: Photon-IMRT (7 beams, step & shoot), plain-IMPT (2, 4, and 7 portals), and robust-IMPT plans, the latter recalculated from the plain-IMPT plans under range error (±5%) and setup error (0.5 cm) uncertainties, were evaluated for five prostate cancer patients prescribed 70 Gy in 35 fractions. To quantitatively evaluate the dose distributions, parameters such as the maximum, minimum, and mean dose, the conformity index (CI), and the homogeneity index (HI) for the PTV, as well as the dose-volume index VxGy for the OARs, were calculated from dose-volume histograms. Results: Robust-IMPT showed superior dose distributions in the PTV and OARs compared with plain-IMPT and photon-IMRT. Like plain-IMPT, robust-IMPT resulted in dose fluctuations around the OARs, but it achieved better homogeneity and conformity in the PTV and a lower mean dose in the OARs than photon-IMRT. Conclusion: By accounting for effective range error and setup movement through robust IMPT planning, the dosimetric uncertainties of plain-IMPT can be substantially reduced, suggesting a more effective solution than photon-IMRT for prostate cancer treatment.
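
The abstract reports CI, HI, and VxGy without stating which definitions were used, and several are in common use. The sketch below computes one common pair of definitions from a dose grid; treat it as an assumption, not the authors' exact formulas.

```python
import numpy as np

def ptv_indices(dose, ptv_mask, prescription):
    """Conformity and homogeneity of the PTV dose, using one common pair
    of definitions (others exist):
      CI = fraction of the PTV covered by the prescription isodose
      HI = maximum PTV dose / prescription dose
    dose: 3D dose array [Gy]; ptv_mask: boolean array of the same shape."""
    ptv_dose = dose[ptv_mask]
    ci = np.count_nonzero(ptv_dose >= prescription) / ptv_dose.size
    hi = ptv_dose.max() / prescription
    return ci, hi

def v_x_gy(dose, oar_mask, x):
    """Dose-volume index VxGy: fraction of an OAR receiving at least x Gy."""
    oar_dose = dose[oar_mask]
    return np.count_nonzero(oar_dose >= x) / oar_dose.size
```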


Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.24 no.3, pp.21-44, 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through various media platforms such as the World Wide Web, internet news feeds, microblogs, and social media. This enormous amount of easily obtained information, however, lacks organization, which has drawn the interest of many researchers and created a need for professionals capable of classifying relevant information; hence text classification. Text classification is a challenging task in modern data analysis that assigns a text document to one or more predefined categories or classes. Many techniques are available in this field, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks; however, when dealing with huge amounts of text data, model performance and accuracy become a challenge, since performance varies with the type of words used in the corpus and the type of features created for classification. Most previous attempts propose a new algorithm or modify an existing one, a line of research that can be said to have reached its limits for further improvement. In this study, rather than proposing or modifying an algorithm, we focus on modifying the use of data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built, and real-world datasets frequently contain noise that affects the decisions of classifiers built from them. We consider that data from different domains, i.e., heterogeneous data, may have noise characteristics that can be exploited in the classification process. Machine learning algorithms build classifiers under the assumption that the characteristics of the training data and the target data are the same or very similar; for unstructured data such as text, however, the features are determined by the vocabulary of the documents, so if the viewpoints of the training data and the target data differ, their features may differ as well. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data from various sources are likely to be formatted differently, traditional machine learning algorithms have difficulty recognizing different types of data representation at once and combining them in the same generalization; to utilize heterogeneous data in the learning process of the document classifier, we therefore apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier.
We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision making. Three types of real-world data sources were used: news, Twitter, and blogs.
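
The full RSESLA procedure (multiple views, rule selection across classifier types) is beyond a short sketch, but the two core ideas the abstract describes, injecting noise into the training data and keeping only confidently pseudo-labeled documents, can be illustrated as follows. The vectorizer, classifier, noise rate, and confidence threshold here are our assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def inject_feature_noise(X, rate=0.05, rng=None):
    """Zero out a random fraction of the nonzero entries of a
    document-term matrix: one simple way to inject artificial noise."""
    rng = rng or np.random.default_rng(0)
    X = X.tolil(copy=True)
    rows, cols = X.nonzero()
    drop = rng.random(rows.size) < rate
    X[rows[drop], cols[drop]] = 0.0
    return X.tocsr()

def self_train_round(labeled_docs, labels, unlabeled_docs, confidence=0.9):
    """One round of confidence-filtered semi-supervised learning: train on
    noise-injected labeled documents, pseudo-label the unlabeled pool, and
    keep only the documents labeled with high confidence."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(list(labeled_docs) + list(unlabeled_docs))
    X_lab, X_unlab = X[:len(labeled_docs)], X[len(labeled_docs):]
    clf = LogisticRegression(max_iter=1000)
    clf.fit(inject_feature_noise(X_lab), labels)
    proba = clf.predict_proba(X_unlab)
    keep = proba.max(axis=1) >= confidence
    pseudo_labels = clf.classes_[proba.argmax(axis=1)]
    return clf, np.asarray(unlabeled_docs)[keep], pseudo_labels[keep]
```

Repeating such rounds, with the kept documents folded back into the labeled set, is the standard self-training loop; RSESLA's contribution is to run it over multiple heterogeneous views and select among the resulting rules.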