• Title/Summary/Keyword: Validation toolkit


The Evaluation about the Information Fidelity in the External Image Information Input - Using DICOM Validation Tool - (외부영상정보 입력 시 DICOM정보 충실성에 대한 평가 - DICOM Validation Tool 이용 -)

  • Lee, Song-Woo;Lee, Ho-Yeon;Do, Ji-Hoon;Jang, Hye-Won
    • Korean Journal of Digital Imaging in Medicine / v.13 no.1 / pp.33-38 / 2011
  • Nowadays, most hospitals have migrated to PACS based on the DICOM 3.0 standard. The adoption of DICOM 3.0 has improved medical image exchange and patient service. However, compatibility problems occur when images are carried between hospitals on CD and DVD. For this reason, this study analyzed patient images on storage media submitted to hospitals in Seoul, using the Validation Toolkit recommended by the KFDA. One hundred datasets were collected for each of five modalities (MRI, CT, plain X-ray, ultrasound, and PET-CT), 500 in total, and the types of errors and the fidelity of the information were analyzed. The error rates were MRI 5%, plain X-ray 11%, CT 18%, US 25%, and PET-CT 30%. The high error rate for PET-CT reflects incomplete support for the standard and insufficient attention to information fidelity. Even so, DICOM data quality has improved considerably compared with the past, and stricter adherence to the IHE and DICOM rules is still needed. In conclusion, this work can help radiographers analyze image information by providing clues to the underlying problems, and PACS and equipment vendors can improve their conformance to the image-information standard by recognizing the actual problems that occur during image transfer.

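
The fidelity check described above can be sketched in miniature: flag datasets that lack attributes required for interchange, then compute a per-modality error rate like the percentages the paper reports. This is an illustrative sketch, not the KFDA Validation Toolkit; datasets are modeled as plain dicts of tag name to value (a real check would parse the DICOM files, e.g. with pydicom), and the tag list and UID values are assumptions for the example.

```python
# Required tags here are a small illustrative subset, not the full DICOM
# Type-1 requirements checked by the real Validation Toolkit.
REQUIRED_TAGS = ["SOPClassUID", "SOPInstanceUID", "Modality", "PatientID"]

def validation_errors(dataset):
    """Return the names of required tags that are absent or empty."""
    return [t for t in REQUIRED_TAGS if not dataset.get(t)]

def error_rate(datasets):
    """Fraction of datasets with at least one validation error."""
    failed = sum(1 for ds in datasets if validation_errors(ds))
    return failed / len(datasets)

# A hypothetical PET-CT dataset missing its Modality tag fails the check:
bad = {"SOPClassUID": "uid-1", "SOPInstanceUID": "uid-2", "PatientID": "P001"}
good = dict(bad, Modality="PT")
```

Run over 100 datasets per modality, `error_rate` yields exactly the kind of per-modality percentage the study tabulates.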

Prediction of Stream Flow on Probability Distributed Model using Multi-objective Function (다목적함수를 이용한 PDM 모형의 유량 분석)

  • Ahn, Sang-Eok;Lee, Hyo-Sang;Jeon, Min-Woo
    • Journal of the Korean Society of Hazard Mitigation / v.9 no.5 / pp.93-102 / 2009
  • A prediction of streamflow based on a multi-objective function is presented to assess the performance of the Probability Distributed Model (PDM) in the Miho stream basin, Chungcheongbuk-do, Korea. PDM is a lumped conceptual rainfall-runoff model that has been widely used for flood prevention by the UK Environment Agency. The Monte Carlo Analysis Toolkit (MCAT) is a numerical analysis tool based on population sampling that allows evaluation of performance, identifiability, regional sensitivity, and more. Five PDM model parameters were calibrated using MCAT. The results show that the parameters cmax and k(q) have high identifiability, while the others exhibit equifinality. In addition, a multi-objective function was applied to PDM to find suitable model parameters. Its solution is the Pareto set, which accounts for the trade-offs between the different objective functions reflecting properties of the hydrograph. The results indicate that the model performance and the simulated hydrographs are acceptable in terms of Nash-Sutcliffe Efficiency (=0.035), FSB (=0.161), and FDBH (=0.809) for both the calibration and validation periods.
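
Two ingredients named above can be sketched briefly: the Nash-Sutcliffe Efficiency (a common objective function for hydrograph fit) and a Pareto filter over competing objective scores. This is an illustrative sketch only; the paper's actual objective set also includes FSB and FDBH, and the flow values below are hypothetical.

```python
def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1.0 is a perfect fit; 0.0 means no
    better than simulating the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / var

def pareto_front(points):
    """Keep the non-dominated points (each objective to be minimised)."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [p for p in points if not any(dominates(q, p) for q in points)]

obs = [2.0, 4.0, 6.0, 4.0]                           # hypothetical observed flows
objective_scores = [(1, 3), (2, 2), (3, 1), (3, 3)]  # (obj1, obj2) per parameter set
```

Here `(3, 3)` is dominated by `(2, 2)` and drops out; the surviving points are the Pareto set from which a suitable parameter trade-off is chosen.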

Design and Verification of the Class-based Architecture Description Language (클래스-기반 아키텍처 기술 언어의 설계 및 검증)

  • Ko, Kwang-Man
    • Journal of Korea Multimedia Society / v.13 no.7 / pp.1076-1087 / 2010
  • With the advent of embedded processors developed to support specific application areas, and their continuing evolution, research on software development supporting these processors, and its commercialization, has been revitalized. Retargetability is typically achieved by providing target machine information in an Architecture Description Language (ADL) as input. ADLs are used to specify processor and memory architectures and to generate a software toolkit including a compiler, simulator, assembler, profiler, and debugger. The EXPRESSION ADL follows a mixed-level approach: it captures both structure and behavior, supporting a natural specification of programmable architectures consisting of processor cores, coprocessors, and memories. It was originally designed to capture processor/memory architectures and generate a software toolkit enabling compiler-in-the-loop exploration of SoC architectures. In this paper, we designed a class-based ADL based on the EXPRESSION ADL to improve writability and extensibility, and verified the validity of its grammar. For this work, we defined six core classes and generated the EXPRESSION compiler and simulator from a description of the MIPS R4000.

ANALYSIS BY SYNTHESIS FOR ESTIMATION OF DOSE CALCULATION WITH gMOCREN AND GEANT4 IN MEDICAL IMAGE

  • Lee, Jeong-Ok;Kang, Jeong-Ku;Kim, Jhin-Kee;Kim, Bu-Gil;Jeong, Dong-Hyeok
    • Journal of Radiation Protection and Research / v.37 no.3 / pp.146-148 / 2012
  • The GEANT4 simulation toolkit is increasingly used in radiation medicine for the design of treatment systems and for the calibration and validation of treatment plans; in particular, it is used for dose simulations on medical image data for radiation therapy. However, GEANT4's internal visualization of detector constructions is insufficient for presenting dose results because it cannot display isodose lines, and the code had not previously been applied to real patient data. To address this, we used gMocren, a three-dimensional volume-visualization tool, to display the simulated dose distribution and isodose lines on medical images. We also cross-validated the gMocren and GEANT4 results against a commercial radiation treatment planning system. Using real patient image data, we extracted and analyzed dose distributions with a Monte Carlo simulation program and the visualization tool for radiation isodose mapping.

Banding the World Together; The Global Growth of Control Banding and Qualitative Occupational Risk Management

  • Zalk, David M.;Heussen, Ga Henri
    • Safety and Health at Work / v.2 no.4 / pp.375-379 / 2011
  • Control Banding (CB) strategies to prevent work-related illness and injury among the 2.5 billion workers without access to health and safety professionals have grown exponentially over the last decade. CB originated in the pharmaceutical industry as a way to control active pharmaceutical ingredients that lack a complete toxicological basis and therefore have no occupational exposure limits. CB applications have since broadened to chemicals in general, including new emerging risks such as nanomaterials, and recently to ergonomics and injury prevention. CB is an action-oriented qualitative risk assessment strategy that offers solutions and control measures to users through "toolkits". Chemical CB toolkits are user-friendly approaches for achieving workplace controls in the absence of firm toxicological and quantitative exposure information. The model (technical) validation of these toolkits is well described; however, firm operational analyses (implementation aspects) are lacking. Consequently, it is often not known whether toolkit use leads to successful interventions at individual workplaces. This may produce virtually safe workplaces without knowing whether workers are truly protected. Upcoming international strategies from the World Health Organization Collaborating Centers request assistance in developing and evaluating action-oriented procedures for workplace risk assessment and control. To fulfill this strategy's goals, CB approaches are expected to continue their important growth in protecting workers.

Precision Validation of Electromagnetic Physics in Geant4 Simulation for Proton Therapy (양성자 치료 전산모사를 위한 Geant4 전자기 물리 모델 정확성 검증)

  • Park, So-Hyun;Rah, Jeong-Eun;Shin, Jung-Wook;Park, Sung-Yong;Yoon, Sei-Chul;Jung, Won-Gyun;Suh, Tae-Suk
    • Progress in Medical Physics / v.20 no.4 / pp.225-234 / 2009
  • Geant4 (GEometry ANd Tracking) provides various packages specialized in modeling electromagnetic interactions. Validating the Geant4 physics models is an important issue for Geant4-based simulation in medical physics. The purpose of this study is to evaluate the accuracy of Geant4 electromagnetic physics for proton therapy. The validation covered both the continuous slowing down approximation (CSDA) range and the stopping power. In each test, the reliability of the electromagnetic models was evaluated for a selected group of materials, including water, bone, adipose tissue, and various atomic elements. Geant4 simulation results were compared with reference data from the National Institute of Standards and Technology (NIST). For water, bone, and adipose tissue, the average percent differences in CSDA range were 1.0%, 1.4%, and 1.4%, respectively, and the average percent differences in stopping power were 0.7%, 1.0%, and 1.3%, respectively. The data were analyzed with the Kolmogorov-Smirnov goodness-of-fit test. All results from the electromagnetic models showed good agreement with the reference data, with all p-values above the significance level $\alpha=0.05$.

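
The headline metric above, the average percent difference between simulation and reference, is simple to state precisely. A minimal sketch follows; the numbers are invented for illustration and are not the paper's Geant4 or NIST data.

```python
def avg_percent_diff(simulated, reference):
    """Mean of |sim - ref| / ref * 100 over paired values."""
    diffs = [abs(s - r) / r * 100.0 for s, r in zip(simulated, reference)]
    return sum(diffs) / len(diffs)

nist_ref = [1.0, 2.0, 4.0]      # hypothetical reference stopping powers
geant4   = [1.01, 1.98, 4.04]   # hypothetical simulated values, each off by 1%
```

With each simulated value off by 1%, the average percent difference is 1.0, on the same scale as the 0.7-1.4% figures the study reports.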

Development of a Korean Speech Recognition Platform (ECHOS) (한국어 음성인식 플랫폼 (ECHOS) 개발)

  • Kwon Oh-Wook;Kwon Sukbong;Jang Gyucheol;Yun Sungrack;Kim Yong-Rae;Jang Kwang-Dong;Kim Hoi-Rin;Yoo Changdong;Kim Bong-Wan;Lee Yong-Ju
    • The Journal of the Acoustical Society of Korea / v.24 no.8 / pp.498-504 / 2005
  • We introduce a Korean speech recognition platform (ECHOS) developed for education and research purposes. ECHOS lowers the entry barrier to speech recognition research and can be used as a reference engine by providing elementary speech recognition modules. It has a simple object-oriented architecture, implemented in C++ with the standard template library. The input to ECHOS is digital speech data sampled at 8 or 16 kHz; its outputs are the 1-best recognition result, N-best recognition results, and a word graph. The recognition engine is composed of MFCC/PLP feature extraction, HMM-based acoustic modeling, n-gram language modeling, and finite state network (FSN)- and lexical-tree-based search algorithms. It can handle tasks ranging from isolated word recognition to large-vocabulary continuous speech recognition. For validation, we compare the performance of ECHOS with the hidden Markov model toolkit (HTK). On an FSN-based task, ECHOS shows similar word accuracy, while the recognition time is doubled because of the object-oriented implementation. For an 8,000-word continuous speech recognition task, using a lexical tree search algorithm different from the one used in HTK, it increases the word error rate by 40% relative but reduces the recognition time by half.
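
The "40% relative" figure above is worth unpacking: a relative increase in word error rate (WER) is measured against the baseline WER, not added to it as percentage points. A small arithmetic sketch, with hypothetical WER values:

```python
def relative_change_pct(new, baseline):
    """Signed relative change of `new` versus `baseline`, in percent."""
    return (new - baseline) / baseline * 100.0

htk_wer = 10.0     # hypothetical HTK WER (%)
echos_wer = 14.0   # hypothetical ECHOS WER (%): +4 points, i.e. +40% relative
```

So a 40% relative degradation of a 10% baseline lands at 14% absolute WER, not 50%.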

A Design and Implementation of WML Compiler for WAP Gateway for Wireless Internet Services (무선 인터넷 서비스를 위한 WAP 게이트웨이용 WML 컴파일러의 설계 및 구현)

  • Choi, Eun-Jeong;Han, Dong-Won;Lim, Kyung-Shik
    • Journal of KIISE: Computing Practices and Letters / v.7 no.2 / pp.165-182 / 2001
  • In this paper, we describe the design and implementation of a Wireless Markup Language (WML) compiler for deploying wireless Internet services effectively. The WML compiler translates textual WML decks into binary ones to reduce traffic on wireless links, which have low bandwidth relative to wireline links, and to reduce the processing overhead of WML decks on wireless terminals, which have low processing power relative to fixed workstations. It also takes over the eXtensible Markup Language (XML) well-formedness and validation checks. The WML compiler consists of a lexical analyzer and a parser module. The grammar for the parser module is an LALR(1) context-free grammar designed from XML 1.0 and the WML 1.2 DTD (Document Type Definition), with consideration of the Wireless Application Protocol Binary XML grammar. The grammar description is converted by a parser generator into a C program that parses the grammar. Even if WML tags are extended or the WML DTD is upgraded, this approach remains flexible because the parser is regenerated by modifying only the changed parts. We verified the functionality of the WML compiler using a public-domain WML decompiler and the Nokia WAP Toolkit as a WAP client. To measure the compression gain of the WML compiler, we tested a large number of textual WML decks and obtained a maximum of 85%. Because the effect of compression decreases as general text strings come to dominate a deck relative to tags and attributes, an extended encoding method may be needed for specific applications, such as compiling WML decks dynamically translated from Hyper Text Markup Language documents.

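
The compression behaviour described above, tag-heavy decks compress well, text-heavy decks do not, follows directly from how WBXML-style encoding works: each known tag collapses to a one-byte token while character data stays inline. A toy sketch, not the paper's compiler; the token values are invented and are not WML 1.2's real code-page assignments, and a real WBXML encoder also emits a header, string tables, and END tokens.

```python
import re

WML_TAG_TOKEN = {"wml": 0x3F, "card": 0x27, "p": 0x20}  # invented token values

def toy_compile(deck: str) -> bytes:
    """Replace each known tag (open or close) with one byte; keep text as UTF-8."""
    out = bytearray()
    for piece in re.split(r"(</?\w+>)", deck):
        m = re.fullmatch(r"</?(\w+)>", piece)
        if m and m.group(1) in WML_TAG_TOKEN:
            out.append(WML_TAG_TOKEN[m.group(1)])
        else:
            out.extend(piece.encode("utf-8"))
    return bytes(out)

def compression_gain(textual_len, binary_len):
    """Percent size reduction from textual deck to compiled binary deck."""
    return (1 - binary_len / textual_len) * 100.0

deck = "<wml><card><p>hi</p></card></wml>"  # 33 characters, mostly markup
binary = toy_compile(deck)                  # 6 tag bytes + 2 text bytes
```

Here the 33-character deck shrinks to 8 bytes, a gain of about 76%; had the deck been mostly text, the gain would approach zero, which is exactly the limitation the authors note.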

Governance research for Artificial intelligence service (인공지능 서비스 거버넌스 연구)

  • Soonduck Yoo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.2 / pp.15-21 / 2024
  • The purpose of this study is to propose a framework for introducing and evaluating artificial intelligence (AI) services, not only in general applications but also in public policy. To this end, the study explores AI service management and governance toolkits and provides insight into how to introduce AI services into public policy. First, it offers guidelines on the direction of AI service development and on what to avoid. Second, in the development phase, it recommends using an AI governance toolkit to review content with checklists at each stage of design, development, and deployment. Third, when operating AI services, it emphasizes adherence to principles concerning 1) planning and design, 2) the lifecycle, 3) model construction and validation, 4) deployment and monitoring, and 5) accountability. The governance perspective is crucial for mitigating the risks of providing AI services, and research on risk management should be conducted. While embracing the advantages of AI, proactive measures should be taken to address its limitations and risks. Efforts should be made to formulate policies that use AI technology efficiently, create high value, and deliver meaningful societal impact.