• Title/Summary/Keyword: Soft Systems Methodology (소프트시스템 방법론)


Analysis on the Positional Accuracy of the Non-orthogonal Two-pair kV Imaging Systems for Real-time Tumor Tracking Using XCAT (XCAT를 이용한 실시간 종양 위치 추적을 위한 비직교 스테레오 엑스선 영상시스템에서의 위치 추정 정확도 분석에 관한 연구)

  • Jeong, Hanseong;Kim, Youngju;Oh, Ohsung;Lee, Seho;Jeon, Hosang;Lee, Seung Wook
    • Progress in Medical Physics
    • /
    • v.26 no.3
    • /
    • pp.143-152
    • /
    • 2015
  • In this study, we aim to design the architecture of the kV imaging system for tumor tracking in the dual-head gantry system and to analyze its accuracy through simulations. We established mathematical formulas and algorithms to track the tumor position with two-pair kV imaging systems in non-orthogonal positions. The algorithms were designed in the homogeneous coordinate framework, and the source positions and detector coordinates are used to estimate the tumor position. 4D XCAT (4D extended cardiac-torso) software was used in the simulation to identify the influence of the angle between the two-pair kV imaging systems and of the detector resolution on the accuracy of the position estimation. A metal fiducial marker was inserted into a numerical human phantom of XCAT, and the kV projections were acquired at various angles and resolutions using the CT projection software of XCAT. As a result, a positional accuracy of better than about 1 mm was achieved when the detector resolution is finer than 1.5 mm/pixel and the angle between the kV imaging systems is approximately between $50^{\circ}$ and $90^{\circ}$. When the resolution is coarser than 1.5 mm/pixel, the positional errors were larger than 1 mm and the error fluctuation across angles was greater. The detector resolution was critical to the positional accuracy of tumor tracking and determines the acceptable angle range between the kV imaging systems. We also found that the positional accuracy analysis method using XCAT developed in this study is highly useful and will be an invaluable tool for further refined design of kV imaging systems for tumor tracking.
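The core of the position estimation described above, back-projecting a ray from each kV source through the detected marker location and finding where the two rays nearly meet, can be sketched as a least-squares two-ray intersection problem. This is a minimal illustration under an assumed geometry, not the authors' homogeneous-coordinate formulation; all function names and numbers below are hypothetical:

```python
import numpy as np

def estimate_marker_position(src_a, dir_a, src_b, dir_b):
    """Least-squares 3D intersection of two back-projected rays.

    Each ray starts at an X-ray source position and points through the
    marker position detected on the corresponding detector.  With
    non-orthogonal imaging axes the two rays rarely intersect exactly,
    so we solve for the point x minimizing the summed squared distance
    to both rays: (P_a + P_b) x = P_a s_a + P_b s_b, where P_i projects
    onto the plane normal to ray direction d_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for s, d in ((src_a, dir_a), (src_b, dir_b)):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # removes the component along the ray
        A += P
        b += P @ np.asarray(s, float)
    return np.linalg.solve(A, b)

# Two sources 1000 mm from isocenter, imaging axes 60 degrees apart,
# both aimed at a hypothetical fiducial at (10, 20, 5) mm.
marker = np.array([10.0, 20.0, 5.0])
src_a = np.array([0.0, -1000.0, 0.0])
src_b = np.array([-866.0, -500.0, 0.0])  # rotated by 60 degrees about z
estimate = estimate_marker_position(src_a, marker - src_a,
                                    src_b, marker - src_b)
```

Because both simulated rays pass exactly through the marker here, the solver recovers its position; with detector quantization the residual distance to the two rays becomes the positional error the abstract analyzes.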

Evaluation of TiN-Zr Hydrogen Permeation Membrane by MLCA (Material Life Cycle Assessment) (물질전과정평가(MLCA)를 통한 TiN-Zr 수소분리막의 환경성 평가)

  • Kim, Min-Gyeom;Son, Jong-Tae;Hong, Tae-Whan
    • Clean Technology
    • /
    • v.24 no.1
    • /
    • pp.9-14
    • /
    • 2018
  • In this study, a material life cycle assessment (MLCA) was performed to analyze the environmental impact characteristics of the TiN-Zr membrane manufacturing process. The GaBi software was used for the MLCA, and an environmental impact assessment was performed for each process step. Transition metal nitrides have been researched extensively because of their properties; among these, TiN has attracted the most attention. TiN is a ceramic material that possesses a good combination of physical and chemical properties, such as a high melting point, high hardness, relatively low specific gravity, high wear resistance, and high corrosion resistance. With these properties, TiN plays an important role as a functional material for separating hydrogen from fossil fuel. The TiN precursor was synthesized by a sol-gel method, and zirconium was coated by a ball-mill method. The metallurgical, physical, and thermodynamic characteristics of the membranes were analyzed using Scanning Electron Microscopy (SEM), Energy Dispersive X-ray spectroscopy (EDS), X-ray Diffraction (XRD), Thermogravimetry/Differential Thermal Analysis (TG/DTA), Brunauer-Emmett-Teller (BET) analysis, and a Gas Chromatograph System (GC). As a result of characterization and normalization, the environmental impacts were 94% MAETP (Marine Aquatic Ecotoxicity Potential), 2% FAETP (Freshwater Aquatic Ecotoxicity Potential), and 2% HTP (Human Toxicity Potential). The TiN fabrication process appears to have a direct or indirect impact on the human body, and the greatest impact HTP can have on humans is believed to be carcinogenicity. This shows that electricity use has a great influence on the ecosystem impact. TiN-Zr was analyzed with the Eco-Indicator '99 (EI99) and CML 2001 methodologies.

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from the recognition of simple body movements of an individual user to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Moreover, previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field sensor, and gyroscope) is proposed. Accompanying status was defined as a subset of user interaction behavior: whether the user is accompanying an acquaintance at a close distance and actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation was introduced. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors. Normalization was performed for each x, y, and z axis value of the sensor data, and sequence data were generated with the sliding window method. The sequence data then became the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. Dropout was applied to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable trained models tailored to the training data to transfer to evaluation data that follows a different distribution. We expect to obtain a model with robust recognition performance against changes in the data that were not considered in the model training stage.
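The preprocessing pipeline the abstract describes (nearest-neighbor time synchronization across sensors, per-axis normalization, sliding-window sequence generation) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names, timestamps, window size, and data shapes are assumptions:

```python
import numpy as np

def nearest_sync(ref_t, sensor_t, sensor_xyz):
    """Align one sensor stream to reference timestamps by picking, for
    each reference time, the sample with the nearest timestamp (the
    abstract's nearest-interpolation synchronization step)."""
    idx = np.clip(np.searchsorted(sensor_t, ref_t), 1, len(sensor_t) - 1)
    take_left = (ref_t - sensor_t[idx - 1]) < (sensor_t[idx] - ref_t)
    return sensor_xyz[idx - take_left]

def make_sequences(data, window, step):
    """Z-score normalize each axis, then slice overlapping windows with
    the sliding-window method; output shape is (n_windows, window, axes)."""
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    starts = range(0, len(data) - window + 1, step)
    return np.stack([data[s:s + window] for s in starts])

# Hypothetical gyroscope stream resampled onto accelerometer timestamps.
acc_t = np.array([0.0, 10.0, 20.0, 30.0])
gyr_t = np.array([1.0, 9.0, 14.0, 29.0])
gyr = np.arange(12.0).reshape(4, 3)       # 4 samples of (x, y, z)
synced = nearest_sync(acc_t, gyr_t, gyr)  # gyro values at accelerometer times
windows = make_sequences(synced, window=3, step=1)
```

The resulting `windows` tensor is what would feed the CNN front end; each window keeps its temporal order, which is why the abstract's CNN omits pooling layers.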

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have released their own AI technologies to the public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been conducted, there is a lack of studies that help industry develop or use deep learning open source software. This study thus attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and revealed that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework.
By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies will increase the number of deep learning research developers, their ability to use the deep learning framework, and the support of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example, by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the identified five success factors, a step-by-step enterprise procedure for adopting a deep learning framework was proposed: defining the project problem, confirming whether deep learning is the right methodology, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework within the enterprise. The first three steps (defining the project problem, confirming whether deep learning is the right methodology, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. Once the three pre-consideration steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework within the enterprise) can proceed. In the fourth step, the knowledge and expertise of developers in the team are important in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

A Study on Property Change of Auto Body Color Design (자동차 바디컬러 디자인의 속성 변화에 관한 연구)

  • Cho, Kyung-Sil;Lee, Myung-Ki
    • Archives of design research
    • /
    • v.19 no.1 s.63
    • /
    • pp.253-262
    • /
    • 2006
  • In the 20th century, color research developed from a tool for pursuing curiosity or beauty into a tool for creating effects, and it has raised consumer desire along the way. People have been interested in colors as a dynamic means of expression since the color TV appeared, and the meaning of colors has recently diversified as their role in the emotional aspects of design became important. While auto colors have developed along with such changes of the times, black led the color trend during the first half of the 20th century, from 1900 to 1950, a transitional period of economic growth and world war. Since then, automobile production has increased apace with rapid economic growth throughout the world, and the automobile became the most expensive item among the goods that people use. Accordingly, increasing production induced facility investment in mass production, and a leveling of technology was achieved. Because auto manufacturing processes are very complicated, auto makers gradually recognized that software changes, such as to colors or materials, were an easier way to improve brand identity than hardware changes, such as to the mechanical or design components of the body. Color planning and development systems were segmented in various respects. Within this segmentation, pigment technology and painting methods are important elements that influence body colors, and they have a higher technical correlation with colors than in other industries. In other words, advanced mixtures of pigments are creating new body colors that have not existed previously; this diversifies painting structures and methods and so maximizes the transparency and depth of body colors. Thus, body colors that are closely related to technical factors will increase in the future, and research on color preferences by region has been systematized to cope with global competition due to the expansion and change of auto export regions.


The Utilization of Electronic Journal Files in the Production of an Abstract Database: A case of KoreaMed System (초록 데이터베이스 구축에 있어서 학술지 전자출판 파일의 활용과 문제점: KoreaMed를 중심으로)

  • 이춘실
    • Journal of the Korean BIBLIA Society for library and Information Science
    • /
    • v.12 no.2
    • /
    • pp.13-29
    • /
    • 2001
  • This study examined the current status and the use of electronic publishing files in the production of a bibliographic database. In particular, it examined the problems faced in the production of KoreaMed, an abstract database of Korean medical journals. The KoreaMed methodology of utilizing the computer files produced in the process of publishing a print journal was found to be very effective: it assures the accuracy of the data, accelerates input speed, and reduces input costs. However, such a project cannot be accomplished to a satisfactory level without the cooperation of the publishers involved. It turns out that many small publishers and academic societies have rarely saved the electronic publishing files of previous issues, and it is hard to maintain a reliable channel for receiving the files continuously. The input and processing of special characters are very delicate problems. In addition, the diversity of journal layouts and formats, of the electronic publishing software used, and of the storage media makes the utilization of electronic publishing files a very complex process. In order to operate the KoreaMed system more efficiently by requiring publishers to submit XML files that meet the KoreaMed standard, it is necessary to educate and train the personnel of journal publishers in the management of electronic publishing files.


Work Improvement by Computerizing the Process of Shielding Block Production (차폐블록 제작과정의 전산화를 통한 업무개선)

  • Kang, Dong Hyuk;Jeong, Do Hyeong;Kang, Dong Yoon;Jeon, Young Gung;Hwang, Jae Woong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.25 no.1
    • /
    • pp.87-90
    • /
    • 2013
  • Purpose: The introduction of a CR (Computed Radiography) system created a process of printing therapy irradiation images and converting the degree of enlargement. This study aims to simplify that process with a computerized method based on home-grown software, thereby increasing job efficiency and contributing to work improvement. Materials and Methods: Microsoft EXCEL (ver. 2007) and VISUAL BASIC (ver. 6.0) were used to create the software. A window was designed for each shield block to enter patients' treatment information. Distances on the digital images were measured, the measured data were entered into the Excel program to calculate the degree of enlargement, and printouts were produced to manufacture the shield blocks. Results: By computerizing the existing method with this program, the degree of enlargement can easily be calculated, and patients' treatment information can be entered into the printouts using the macro function. As a result, calculation errors that may occur during production, as well as errors in which treatment information is delivered incorrectly, can be reduced. In addition, with the simplification of the enlargement conversion process, no copy machine was needed, which reduced paper use. Conclusion: Work has been improved by computerizing the block production process and applying it in practice, simplifying the existing method. This software can be adapted to the actual conditions of each hospital in various ways using the many features of EXCEL and VISUAL BASIC, which have already been proven and are widely used.
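The enlargement arithmetic such a program automates is simple similar-triangles magnification from a point source. A minimal sketch of that calculation follows; the authors implemented theirs in EXCEL/VISUAL BASIC, so the Python form, parameter names, and default distances here are illustrative assumptions, not their formulas:

```python
def magnification(sid_mm, sad_mm):
    """Degree of enlargement of an object at isocenter as projected on
    the imager: M = SID / SAD (source-to-image over source-to-axis
    distance, by similar triangles from a point source)."""
    return sid_mm / sad_mm

def length_at_isocenter(measured_on_image_mm, sid_mm=1500.0, sad_mm=1000.0):
    """Convert a distance measured on the printed or displayed image
    back to its true size at isocenter by dividing out the
    magnification."""
    return measured_on_image_mm / magnification(sid_mm, sad_mm)

# A 30 mm span measured on an image acquired at SID 1500 mm / SAD 1000 mm
# corresponds to 20 mm at isocenter.
true_mm = length_at_isocenter(30.0)
```

Embedding this conversion in the printout generation is what removes the manual copy-machine rescaling step the abstract mentions.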


An Evaluation Model for Software Usability using Mental Model and Emotional factors (정신모형과 감성 요소를 이용한 소프트웨어 사용성 평가 모델 개발)

  • 김한샘;김효영;한혁수
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.1_2
    • /
    • pp.117-128
    • /
    • 2003
  • Software usability is a characteristic of software that is evaluated in terms of learnability, effectiveness, and satisfaction, and it is a main factor of software quality. Software has to be continuously improved by following the guidelines that come from usability evaluation. Usability factors may vary among different software products, and even for the same factor, users may have different opinions according to their experience and knowledge. Therefore, a usability evaluation process must be developed with consideration of many factors, such as the variety of applications and users. Existing approaches such as satisfaction evaluation and performance evaluation only evaluate the result and do not perform cause analysis, and their unified evaluation items and contents do not reflect the characteristics of the products. To address these problems, this paper presents an evaluation model based on the mental model and the emotions of users. This model uses evaluation factors for the user tasks, which are extracted by analyzing the usage of the target product. In the mental model approach, the conceptual model of the designer and the mental model of the user are compared, and the differences are taken as a gap and reported as a part to be improved in the future. In the emotional factor approach, emotional factors are extracted for the target products, and the products are evaluated in terms of those factors. With this proposed method, we can evaluate software products with customized attributes of the products and deduce guidelines for future improvements. We also take a GUI framework as a sample case and extract directions for improvement. As this model analyzes users' tasks and uses evaluation factors for each task, it is capable of not only reflecting the characteristics of the product, but also exactly identifying the items that should be modified and improved.

Establishment of Database System for Radiation Oncology (방사선 종양 자료관리 시스템 구축)

  • Kim, Dae-Sup;Lee, Chang-Ju;Yoo, Soon-Mi;Kim, Jong-Min;Lee, Woo-Seok;Kang, Tae-Young;Back, Geum-Mun;Hong, Dong-Ki;Kwon, Kyung-Tae
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.20 no.2
    • /
    • pp.91-102
    • /
    • 2008
  • Purpose: To improve operational efficiency and establish a foundation for the development of new radiotherapy treatments through a database in which radiotherapy-related materials are arranged and indexed in a well-organized manner for easy user access. Materials and Methods: In this study, the database was operated using the Access program provided by Microsoft (MS Office Access). The radiation oncology data were divided into business logs and maintenance expenditure, in addition to stock management of accessories, with respect to administrative affairs and machinery management. Data for education and research were divided into educational material for department duties, user manuals, and related theses depending on their properties. Data registration was designed to use an input form according to its subject, and the information was designed to be inspected through reports. The number of machine failures and the respective repair hours, derived from the machine maintenance expenditure over the period of January 2008 to April 2009, were analyzed and compared between initial system usage and one year after usage. Results: The radiation oncology database system was accomplished by distinguishing work-related and research-related criteria. The data are arranged and collected according to their subjects and classes, and can be accessed by searching for the required data through the descriptions in each criterion. On analyzing the repair hours from the number and type of machine failures over the period of January 2008 to April 2009 through the machine maintenance expenditure, the total average time was reduced by 32.3%. Conclusion: By distinguishing and indexing present and past data according to subject criteria through the database system for radiation oncology, information can be easily accessed to improve operational efficiency and, further, serve as a foundation for improving work processes by acquiring, in real time, the various information required for new radiotherapy treatments.
