• Title/Summary/Keyword: virtual system


Analysis on the Effect of Field Width in the Delineation of Planning Target Volume for TomoTherapy (토모테라피에서 계획용표적체적 설정 시 필드 폭 영향 분석)

  • Song, Ju-Young;Nah, Byung-Sik;Chung, Woong-Ki;Ahn, Sung-Ja;Nam, Taek-Keun;Yoon, Mee-Sun;Jung, Jae-Uk
    • Progress in Medical Physics
    • /
    • v.21 no.4
    • /
    • pp.323-331
    • /
    • 2010
  • The Hi-Art system for TomoTherapy allows only three field widths (1.0 cm, 2.5 cm, 5.0 cm), and this can produce different dose distributions around the end of the PTV (planning target volume) in the direction of jaw movement. In this study, we investigated the effect of field width on the dose difference around the PTV using a DQA (delivery quality assurance) phantom and real clinical patient cases. In the analysis with the DQA phantom, the calculated dose and the irradiated films showed that dose spread more widely beyond the end region of the PTV as the field width increased. At the 50%-of-maximum-dose level, the 2.5 cm field width showed a 1.6 cm wider dose profile and the 5.0 cm field width a 4.2 cm wider dose profile than the 1.0 cm field width. The analysis with four patient cases showed results similar to those of the DQA phantom: more dose was delivered around the superior and inferior ends of the PTV as the field width increased. The 5.0 cm field width produced a remarkably high dose around the end region of the PTV, and we could evaluate this effect quantitatively by calculating the DVH (dose volume histogram) of virtual PTVs delineated around the end of the PTV in the direction of jaw variation. From these results, we verified that the PTV margin in the direction of table movement should be reduced relative to the conventional margin when a large field width such as 5.0 cm is used in TomoTherapy.
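As a rough illustration of the profile-width comparison described above, the following Python sketch measures the width of a one-dimensional longitudinal dose profile at 50% of its maximum. The function and the sample profiles are hypothetical and are not the paper's analysis code.

```python
import numpy as np

def profile_width_at_level(z_cm, dose, level=0.5):
    """Width (cm) of a 1D dose profile at a given fraction of its maximum.

    z_cm : positions along the jaw-movement (longitudinal) axis
    dose : dose values sampled at z_cm
    """
    threshold = level * np.max(dose)
    above = np.where(dose >= threshold)[0]
    if above.size == 0:
        return 0.0

    def crossing(i_out, i_in):
        # Linear interpolation between a point below and a point above threshold
        d0, d1 = dose[i_out], dose[i_in]
        t = (threshold - d0) / (d1 - d0)
        return z_cm[i_out] + t * (z_cm[i_in] - z_cm[i_out])

    left = crossing(above[0] - 1, above[0]) if above[0] > 0 else z_cm[0]
    right = crossing(above[-1] + 1, above[-1]) if above[-1] < len(dose) - 1 else z_cm[-1]
    return right - left

# Hypothetical profiles standing in for two field widths
z = np.linspace(-10, 10, 401)
w_wide = profile_width_at_level(z, np.exp(-(z / 4.0) ** 8))
w_narrow = profile_width_at_level(z, np.exp(-(z / 3.2) ** 8))
print(f"50% width difference: {w_wide - w_narrow:.2f} cm")
```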

COATED PARTICLE FUEL FOR HIGH TEMPERATURE GAS COOLED REACTORS

  • Verfondern, Karl;Nabielek, Heinz;Kendall, James M.
    • Nuclear Engineering and Technology
    • /
    • v.39 no.5
    • /
    • pp.603-616
    • /
    • 2007
  • Roy Huddle, having invented the coated particle at Harwell in 1957, stated in the early 1970s that we now know everything about particles and coatings and should move on to other problems. This was on the occasion of the Dragon fuel performance information meeting in London in 1973: how wrong can a genius be! It took until 1978 before really good particles were made in Germany, then during the Japanese HTTR production in the 1990s, and finally the Chinese 2000-2001 campaign for HTR-10. Here, we present a review of the history and present status. Today, good fuel is measured by different standards from the seventies: where a $9\times10^{-4}$ initial free heavy metal fraction was typical for early AVR carbide fuel and $3\times10^{-4}$ was acceptable for oxide fuel in THTR, we now insist on values more than an order of magnitude below this. Half a percent of particle failure at end-of-irradiation, another old standard, is no longer acceptable today, even for the most severe accidents. While legislation and licensing have not changed, one of the reasons we insist on these improvements is the preference for passive systems rather than the active controls of earlier times. After renewed HTGR interest, we report on the start of new or reactivated coated particle work in several parts of the world, considering the aspects of designs, traditional and new materials, manufacturing technologies, quality control/quality assurance, irradiation and accident performance, modeling and performance prediction, and fuel cycle aspects and spent fuel treatment. In very general terms, the coated particle should be strong, reliable, retentive, and affordable. These properties have to be quantified and will eventually be optimized for a specific application system. Results obtained so far indicate that the same particle can be used for steam cycle applications with $700-750^{\circ}C$ helium coolant gas exit temperatures, for gas turbine applications at $850-900^{\circ}C$, and for process heat/hydrogen generation applications with $950^{\circ}C$ outlet temperatures. There is a clear set of standards for modern high quality fuel in terms of low levels of heavy metal contamination, manufacture-induced particle defects during fuel body and fuel element making, irradiation/accident-induced particle failures, and limits on fission product release from intact particles. While gas-cooled reactor design is still open-ended, with blocks for the prismatic design and spherical fuel elements for the pebble-bed design, there is near worldwide agreement on high quality fuel: a $500{\mu}m$ diameter $UO_2$ kernel of 10% enrichment is surrounded by a $100{\mu}m$ thick sacrificial buffer layer, followed by a dense inner pyrocarbon layer, a high quality silicon carbide layer of $35{\mu}m$ thickness and near-theoretical density, and an outer pyrocarbon layer. Good performance has been demonstrated both under operational and under accident conditions, i.e. to 10% FIMA and a maximum of $1600^{\circ}C$ afterwards. It is this wide-ranging demonstration experience that makes this particle superior. Recommendations are made for further work: 1. Generation of data for presently manufactured materials, e.g. SiC strength and strength distribution, PyC creep and shrinkage, and many more material data sets. 2. Renewed start of irradiation and accident testing of modern coated particle fuel. 3. Analysis of existing and newly created data with a view to demonstrating satisfactory performance at burnups beyond 10% FIMA and complete fission product retention even in accidents that exceed $1600^{\circ}C$ for a short period of time. This work should proceed at both national and international levels.
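As a small numerical aside on the consensus particle geometry quoted above, the sketch below computes the outer diameter of such a coated particle from its layer thicknesses. The two pyrocarbon thicknesses are not given in the abstract and are assumed values for illustration only.

```python
# Layer thicknesses in micrometres; kernel, buffer and SiC values are from the abstract,
# the two pyrocarbon thicknesses are assumptions for illustration only.
kernel_diameter_um = 500.0
layers_um = {
    "buffer": 100.0,
    "inner_PyC": 40.0,   # assumed, not stated in the abstract
    "SiC": 35.0,
    "outer_PyC": 40.0,   # assumed, not stated in the abstract
}

outer_diameter_um = kernel_diameter_um + 2 * sum(layers_um.values())
print(f"Coated particle outer diameter ~ {outer_diameter_um:.0f} um")
```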

Development of Quality Assurance Software for $PRESAGE^{REU}$ Gel Dosimetry ($PRESAGE^{REU}$ 겔 선량계의 분석 및 정도 관리 도구 개발)

  • Cho, Woong;Lee, Jaegi;Kim, Hyun Suk;Wu, Hong-Gyun
    • Progress in Medical Physics
    • /
    • v.25 no.4
    • /
    • pp.233-241
    • /
    • 2014
  • The aim of this study is to develop a new software tool for 3D dose verification using the $PRESAGE^{REU}$ gel dosimeter. The tool includes the following functions: importing 3D doses from treatment planning systems (TPS), importing 3D optical densities (OD), converting ODs to doses, 3D registration between two volumetric data sets by translational and rotational transformations, and evaluation with the 3D gamma index. To acquire the correlation between ODs and doses, CT images of a cylindrical $PRESAGE^{REU}$ gel were acquired, and a volumetric modulated arc therapy (VMAT) plan was designed to deliver doses from 1 Gy to 6 Gy to six disk-shaped virtual targets along the z-axis. After the VMAT plan was delivered to the targets, 3D OD data were reconstructed from 512 projections acquired with a $Vista^{TM}$ optical CT scanner (Modus Medical Devices Inc., Canada) every 2 hours after irradiation. A curve for converting ODs to doses was derived by comparing the TPS dose profile with the OD profile along the z-axis, and the 3D OD data were converted to absorbed doses using this curve. Supra-linearity was observed between doses and ODs, and the ODs decayed by about 60% per 24 hours depending on their magnitudes. Measured doses from the $PRESAGE^{REU}$ gel agreed well with the TPS doses in the central region, but large under-doses were observed in the peripheral region of the cylindrical geometry. The gamma passing rate for the 3D doses was 70.36% under gamma criteria of 3% dose difference and 3 mm distance to agreement. The low passing rate resulted from the mismatch of the refractive index between the PRESAGE gel and the oil bath in the optical CT scanner. In conclusion, the developed software is useful for 3D dose verification with PRESAGE gel dosimetry, but further improvement of the gel dosimetry system is required.
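For readers unfamiliar with the evaluation step, the following is a minimal sketch of a global 3D gamma computation under 3%/3 mm criteria, implemented as a brute-force neighbourhood search. It illustrates the general technique only and is not the implementation used in the described software.

```python
import numpy as np

def gamma_passing_rate(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0,
                       threshold=0.1):
    """Simplified global 3D gamma passing rate.

    dose_ref   : reference (TPS) dose array
    dose_eval  : evaluated (measured) dose array on the same grid
    spacing_mm : voxel spacing (dz, dy, dx) in mm
    """
    spacing_mm = np.asarray(spacing_mm, dtype=float)
    norm = dd * dose_ref.max()                              # global dose-difference norm
    r = [int(np.ceil(dta_mm / s)) for s in spacing_mm]      # search radius in voxels
    offsets = np.array([(i, j, k)
                        for i in range(-r[0], r[0] + 1)
                        for j in range(-r[1], r[1] + 1)
                        for k in range(-r[2], r[2] + 1)])
    dist2 = ((offsets * spacing_mm) ** 2).sum(axis=1) / dta_mm ** 2

    evaluate = dose_ref > threshold * dose_ref.max()        # ignore low-dose voxels
    passed = total = 0
    for idx in np.argwhere(evaluate):
        gamma2 = np.inf
        for off, d2 in zip(offsets, dist2):
            p = idx + off
            if np.any(p < 0) or np.any(p >= dose_eval.shape):
                continue
            dd2 = ((dose_eval[tuple(p)] - dose_ref[tuple(idx)]) / norm) ** 2
            gamma2 = min(gamma2, dd2 + d2)
        total += 1
        passed += gamma2 <= 1.0
    return passed / total if total else 0.0
```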

The change of grain quality and starch assimilation of rice under future climate conditions according to RCP 8.5 scenario (RCP 8.5 시나리오에 따른 미래 기후조건에서 벼의 품질 및 전분 동화 특성 변화)

  • Sang, Wan-Gyu;Cho, Hyeoun-Suk;Kim, Jun-Hwan;Shin, Pyong;Baek, Jae-Kyeong;Lee, Yun-Ho;Cho, Jeong-Il;Seo, Myung-Chul
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.20 no.4
    • /
    • pp.296-304
    • /
    • 2018
  • The objective of this study was to analyze the impact of climate change on rice yield and quality. Experiments were conducted using SPAR (Soil-Plant-Atmosphere-Research) chambers, which were designed to create virtual future climate conditions, at the National Institute of Crop Science, Jeonju, Korea, in 2016. Under the future climate conditions ($+2.8^{\circ}C$ temperature, 580 ppm $CO_2$) of the years 2051~2060 according to the RCP 8.5 scenario, elevated temperature and $CO_2$ advanced the heading date by about five days compared with the present climate conditions, resulting in a high-temperature environment during the grain filling stage. Rice yield decreased sharply under the future climate conditions due to poor ripening induced by high temperature, and the spikelet number, ripening ratio, and 1000-grain weight of brown rice were significantly decreased compared with the control. Grain quality also decreased sharply, mainly because of the increase in immature grains. Under the future climate conditions, expression of starch biosynthesis-related genes such as granule-bound starch synthase (GBSSI, GBSSII, SSIIa, SSIIb, SSIIIa), starch branching enzyme (BEIIb) and ADP-glucose pyrophosphorylase (AGPS1, AGPS2, AGPL2) was repressed in developing seeds, whereas starch degradation-related genes such as ${\alpha}$-amylase (Amy1C, Amy3D, Amy3E) were induced. These results suggest that the reduction in rice yield and quality under the future climate conditions is likely caused mainly by poor grain filling due to high temperature. Therefore, developing cultivars tolerant to high temperature during the grain filling period and a new cropping system is suggested in order to ensure high rice quality under future climate conditions.

Evaluation of beam delivery accuracy for Small sized lung SBRT in low density lung tissue (Small sized lung SBRT 치료시 폐 실질 조직에서의 계획선량 전달 정확성 평가)

  • Oh, Hye Gyung;Son, Sang Jun;Park, Jang Pil;Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.31 no.1
    • /
    • pp.7-15
    • /
    • 2019
  • Purpose: The purpose of this study is to evaluate beam delivery accuracy for small-sized lung SBRT through experiment. To assess the accuracy, the Eclipse TPS (treatment planning system) equipped with Acuros XB and radiochromic film were used for the dose distributions. By comparing calculated and measured dose distributions, we evaluated the margin for the PTV (planning target volume) in lung tissue. Materials and Methods: CT images of a Rando phantom were acquired, and virtual target volumes of various sizes (diameter 2, 3, 4, 5 cm) were planned in the right lung. All plans were normalized to "target volume = prescribed 95%" and used 6 MV FFF VMAT with 2 arcs. To compare the calculated and measured dose distributions, film was inserted into the Rando phantom and irradiated in the axial direction. The evaluation indexes are the percentage difference (%Diff) for absolute dose, the RMSE (root-mean-square error) for relative dose, the coverage ratio, and the average dose in the PTV. Results: The maximum difference at the center point was -4.65% for the 2 cm diameter. The RMSE between the calculated and measured off-axis dose distributions indicated that the measured dose distribution for the 2 cm diameter differed from the calculation and was inaccurate compared with the 5 cm diameter. In addition, the prescribed 95% dose ($D_{95}$) did not cover the PTV for the 2 cm diameter, and its average dose was the lowest of all sizes. Conclusion: This study demonstrated that a small-sized PTV is not sufficiently covered by the prescribed dose in low-density lung tissue. All experimental indexes for the 2 cm diameter differed considerably from the other sizes, showing that a minimized PTV is not delivered accurately and affects the results of radiation therapy. An extended margin for small PTVs in low-density lung tissue is considered necessary to enhance the target center dose, and the maximum dose does not need to be constrained in optimization.
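The two evaluation indexes named above can be illustrated with a short sketch; the function names and the example numbers below are hypothetical, not the study's data.

```python
import numpy as np

def percent_diff(measured, calculated):
    """%Diff of absolute dose at a point, relative to the calculated (TPS) value."""
    return 100.0 * (measured - calculated) / calculated

def profile_rmse(measured_profile, calculated_profile):
    """RMSE between relative (normalised) off-axis dose profiles."""
    m = np.asarray(measured_profile) / np.max(measured_profile)
    c = np.asarray(calculated_profile) / np.max(calculated_profile)
    return np.sqrt(np.mean((m - c) ** 2))

# Example with hypothetical centre-point doses (Gy)
print(percent_diff(measured=11.4, calculated=12.0))   # ~ -5.0 %
```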

Economic Impact of HEMOS-Cloud Services for M&S Support (M&S 지원을 위한 HEMOS-Cloud 서비스의 경제적 효과)

  • Jung, Dae Yong;Seo, Dong Woo;Hwang, Jae Soon;Park, Sung Uk;Kim, Myung Il
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.10
    • /
    • pp.261-268
    • /
    • 2021
  • Cloud computing is a computing paradigm in which users can utilize computing resources in a pay-as-you-go manner. In a cloud system, resources can be dynamically scaled up and down according to user demand, so the total cost of ownership can be reduced. Modeling and Simulation (M&S) is a well-established method for obtaining engineering analyses and results through CAE software without actual experiments. In general, M&S is utilized in Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), Multibody Dynamics (MBD), and optimization fields. The work procedure in M&S is divided into pre-processing, analysis, and post-processing steps. Pre- and post-processing are GPU-intensive jobs that consist of 3D modeling via CAE software, whereas analysis is CPU- or GPU-intensive. Because a general-purpose desktop needs a long time to analyze complicated 3D models, CAE software requires a high-end CPU- and GPU-based workstation to run fluently. In other words, executing M&S absolutely requires high-performance computing resources. To mitigate the cost of equipping such substantial computing resources, we propose the HEMOS-Cloud service, an integrated cloud and cluster computing environment. The HEMOS-Cloud service provides CAE software and computing resources to users who want to experience M&S in business or academia. In this paper, the economic ripple effect of the HEMOS-Cloud service was analyzed using an inter-industry (input-output) analysis. The results estimated with expert-guided coefficients are a production inducement effect of KRW 7.4 billion, a value-added effect of KRW 4.1 billion, and an employment-inducing effect of 50 persons per KRW 1 billion.
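As an illustration of how such induced effects are typically obtained, the sketch below multiplies an assumed service expenditure by assumed inducement coefficients. None of the expenditure or coefficient values are taken from the paper; only the employment figure of 50 persons per KRW 1 billion is quoted from the abstract.

```python
# Illustrative only: expenditure and the first two coefficients are assumptions,
# not the paper's figures; the employment figure is quoted from the abstract.
expenditure_krw_billion = 3.0
coefficients = {
    "production_inducement": 2.47,      # production induced per unit of expenditure (assumed)
    "value_added_inducement": 1.37,     # value added per unit of expenditure (assumed)
    "employment_per_krw_billion": 50,   # persons per KRW 1 billion (from the abstract)
}

production_effect = expenditure_krw_billion * coefficients["production_inducement"]
value_added_effect = expenditure_krw_billion * coefficients["value_added_inducement"]
employment_effect = expenditure_krw_billion * coefficients["employment_per_krw_billion"]
print(production_effect, value_added_effect, employment_effect)
```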

A Study on Metaverse Construction Based on 3D Spatial Information of Convergence Sensors using Unreal Engine 5 (언리얼 엔진 5를 활용한 융복합센서의 3D 공간정보기반 메타버스 구축 연구)

  • Oh, Seong-Jong;Kim, Dal-Joo;Lee, Yong-Chang
    • Journal of Cadastre & Land InformatiX
    • /
    • v.52 no.2
    • /
    • pp.171-187
    • /
    • 2022
  • Recently, demand for and development of non-face-to-face services have progressed rapidly due to the COVID-19 pandemic, and attention has focused on the metaverse. Entering the era of the 4th industrial revolution, the metaverse, which means a world beyond virtual and reality, combines various sensing technologies and 3D reconstruction technologies to provide various information and services to users easily and quickly. In particular, thanks to the miniaturization and increasing affordability of convergence sensors such as unmanned aerial vehicles (UAV) capable of high-resolution imaging and high-precision LiDAR (Light Detection and Ranging) sensors, research on digital twins that create and simulate real-life counterparts is actively underway. In addition, game engines in the field of computer graphics are developing into metaverse engines by extending powerful 3D graphics reconstruction and simulation based on dynamic operations. This study constructed a mirror-world type metaverse that reflects real-world coordinate-based reality using Unreal Engine 5, a recently announced metaverse engine, with accurate 3D spatial information data from convergence sensors based on an unmanned aerial system (UAS) and LiDAR. Spatial information contents and simulations for users were then produced from various public data to verify the accuracy of the reconstruction, and through this, we could confirm the construction of a more realistic and highly usable metaverse. In addition, when constructing a metaverse that users can access intuitively and easily through Unreal Engine, various content uses and their effectiveness could be confirmed through coordinate-based 3D spatial information with high reproducibility.
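A common preprocessing step when bringing such georeferenced data into Unreal Engine 5 is to offset projected coordinates to a local site origin and convert metres into the engine's centimetre world units. The sketch below shows this idea only; the coordinate values, the origin, and the axis convention are illustrative assumptions, and the paper does not describe its import pipeline at this level of detail.

```python
import numpy as np

# Projected real-world coordinates of surveyed points, in metres (hypothetical grid values)
points_m = np.array([
    [198234.12, 551320.45, 38.20],
    [198240.87, 551317.02, 38.95],
])

# Site origin chosen so engine coordinates stay small (reduces floating-point jitter)
origin_m = np.array([198200.0, 551300.0, 0.0])

# Unreal Engine world units are centimetres; the Y axis is flipped here as one common
# convention when mapping a right-handed ENU frame into the engine's left-handed frame.
local_cm = (points_m - origin_m) * 100.0
local_cm[:, 1] *= -1.0
print(local_cm)
```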

Utilization of Smart Farms in Open-field Agriculture Based on Digital Twin (디지털 트윈 기반 노지스마트팜 활용방안)

  • Kim, Sukgu
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2023.04a
    • /
    • pp.7-7
    • /
    • 2023
  • Currently, the main technologies of the various fourth-industry fields are big data, the Internet of Things, artificial intelligence, blockchain, mixed reality (MR), and drones. In particular, the "digital twin," which has recently become a global technological trend, is the concept of a virtual model that is expressed identically in physical objects and in computers. By creating and simulating a digital twin of software-virtualized assets instead of real physical assets, accurate information about the characteristics of real farming (current state, agricultural productivity, agricultural work scenarios, etc.) can be obtained. This study aims to streamline agricultural work through automatic water management, remote growth forecasting, drone control, and pest forecasting via the operation of an integrated control system, by constructing digital twin data on the main open-field (noji) production areas and designing and building a smart farm complex. In addition, it aims to disseminate digital environmental-control agriculture in Korea that can reduce labor and improve crop productivity while minimizing environmental load through the use of appropriate amounts of fertilizers and pesticides based on big data analysis. These open-field agricultural technologies can reduce labor through digital farming and cultivation management, optimize water use and prevent soil pollution in preparation for climate change, and enable quantitative growth management of open-field crops by securing digital data on the national cultivation environment. They are also a way to directly implement carbon-neutral RED++ activities by improving agricultural productivity. Analysis and prediction of growth status based on the acquired high-precision, high-definition crop growth image data are very effective for digital farming work management. The Southern Crop Department of the National Institute of Food Science has conducted research and development on various types of open-field agricultural smart farms, such as subsurface drip irrigation and subsurface drainage. In particular, from this year, commercialization is underway in earnest through the establishment of smart farm facilities and technology distribution for agricultural technology complexes across the country. In this study, we describe a case of establishing an agricultural field that combines digital twin technology and open-field agricultural smart farm technology, together with future utilization plans.


Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order. Also, current search tools cannot retrieve documents related to a retrieved document from the gigantic amount of documents. The most important problem for many current search systems is to increase the quality of search, which means providing related documents and reducing the number of unrelated documents in the search results as much as possible. To address this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this context, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles. A citation index indexes the citations that an article makes, linking the articles with the cited works. Citation indexes were originally designed mainly for information retrieval; the citation links allow navigating the literature in unique ways, since papers can be located independently of language and of the words in the title, keywords, or document. A citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). But CiteSeer cannot index links between articles that researchers do not make, because it indexes only the links researchers create when they cite other articles; for the same reason, CiteSeer is not easy to scale. All these problems motivate the design of a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence in documents. A document is converted into a tabular form in which the extracted predicate is checked against possible subjects and objects. We build a hierarchical graph of a document using this table and then integrate the graphs of documents. From the graph of the entire document set, we calculate the area of each document relative to the integrated documents and mark the relations among documents by comparing their areas. We also propose a method for structural integration of documents that retrieves documents from the graph, which allows users to find information more easily. We compared the performance of the proposed approach with the Lucene search engine using standard ranking formulas. As a result, the F-measure is about 60%, which is about 15% better.
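The retrieval comparison above is scored with the F-measure; for reference, a minimal sketch of the precision/recall/F-measure computation is shown below, with purely illustrative document IDs.

```python
def f_measure(retrieved, relevant, beta=1.0):
    """Precision/recall-based F-measure for a single query."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

# Illustrative query result versus the relevant set
print(f_measure(retrieved=["d1", "d3", "d7", "d9"], relevant=["d1", "d3", "d4", "d5"]))
```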

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Up to this day, mobile communications have evolved rapidly over the decades, mainly focusing on speed-up to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with various services such as IoT, V2X, robots, artificial intelligence, augmented/virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To provide those services, on top of high data rates, reduced latency and high reliability are critical for real-time services. Thus, 5G has paved the way for service delivery through a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of $10^6$ devices/㎢. In particular, in intelligent traffic control systems and services using various vehicle-based Vehicle-to-X (V2X) communications, such as traffic control, the reduction of delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high data rates thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. Therefore, it is difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability in offering delay-sensitive services, because communication with many nodes overloads its processing. Basically, SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable of delay. Since SDNs with a generally centralized structure have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. Thus, SDNs need to be partitioned at a certain scale and organized into a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under the worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, the RTD is not a significant factor, because it is fast enough and contributes less than 1 ms of delay, but the information change cycle and the data processing time of the SDN greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information should be transmitted and processed very quickly; that is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture in emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that the neighbor-vehicle support information is delivered to the car without errors. Furthermore, we assumed 5G small cells with radii of 50~250 m, and vehicle speeds of 30~200 km/h were considered in order to examine the network architecture that minimizes the delay.
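The simulation ranges quoted above imply a simple bound on how long a vehicle remains inside one small cell, which in turn bounds how often its information must be refreshed. The sketch below computes this dwell time from the stated radii and speeds; the formula is elementary geometry, not the paper's simulator.

```python
def dwell_time_s(cell_radius_m, speed_kmh):
    """Worst-case time a vehicle spends crossing a small cell along its diameter."""
    speed_ms = speed_kmh / 3.6
    return 2 * cell_radius_m / speed_ms

for radius in (50, 250):        # cell radii from the simulation setup (m)
    for speed in (30, 200):     # vehicle speeds from the simulation setup (km/h)
        print(f"r={radius} m, v={speed} km/h -> dwell {dwell_time_s(radius, speed):.1f} s")
```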