• Title/Summary/Keyword: Analysis of Errors


A Prediction of N-value Using Artificial Neural Network (인공신경망을 이용한 N치 예측)

  • Kim, Kwang Myung;Park, Hyoung June;Goo, Tae Hun;Kim, Hyung Chan
    • The Journal of Engineering Geology
    • /
    • v.30 no.4
    • /
    • pp.457-468
    • /
    • 2020
  • Problems arising during pile design for plant, civil, and architectural construction mostly stem from the uncertainty of geotechnical characteristics. In particular, the N-value measured through the Standard Penetration Test (SPT) is the most important datum, yet it is difficult to obtain N-values by drilling investigation across the entire target area. Constraints such as licensing, time, cost, equipment access, and residential complaints often make it impossible to characterize a site by drilling within the short bidding period of overseas projects. The geotechnical characteristics at non-drilling investigation points are then usually determined by the engineer's empirical judgment, which can lead to errors in pile design and quantity calculation, causing construction delays and cost increases. This problem could be overcome if N-values at the non-drilling investigation points could be predicted from limited, minimal drilling investigation data. This study predicted the N-value using an Artificial Neural Network (ANN), one of the methods of artificial intelligence (AI). An ANN processes a limited amount of geotechnical data in a manner analogous to biological neural processing, providing more reliable results for the input variables. The purpose of this study is to predict N-values at non-drilling investigation points from patterns learned by a multi-layer perceptron with the error back-propagation algorithm using minimal geotechnical data. The reliability of the values predicted by the AI method was reviewed against the measured values, and high reliability was confirmed. To further resolve geotechnical uncertainty, we will perform a sensitivity analysis of the input variables to increase the learning effect in subsequent steps, and some technical updates of the program may be needed. We hope this study will be helpful for future design work.
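The multi-layer perceptron / error back-propagation pipeline described above can be sketched in a few lines. This is a minimal illustration with made-up borehole features (depth and plan coordinates) and synthetic targets, not the authors' data or network architecture; N-values are scaled to [0, 1] for stable training.

```python
import numpy as np

# Minimal MLP trained with error back-propagation to predict (scaled) N-values.
# Features and data are illustrative assumptions, not the paper's dataset.
rng = np.random.default_rng(0)

# Hypothetical borehole attributes: [depth, x-coordinate, y-coordinate]
X = rng.uniform(0.0, 1.0, size=(64, 3))
y = ((30 * X[:, 0] + 10 * X[:, 1]) / 50 + rng.normal(0, 0.02, 64)).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)    # hidden layer (8 units)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)    # linear output layer

def forward(X):
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))          # sigmoid activations
    return h, h @ W2 + b2

losses, lr = [], 0.5
for _ in range(2000):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)       # output-layer gradients
    dh = (err @ W2.T) * h * (1 - h)                   # error propagated back
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"training MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The same loop generalizes to any small set of geotechnical inputs; prediction at non-drilling points is then a single `forward` call on the new coordinates.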

Analysis of Genetics Problem-Solving Processes of High School Students with Different Learning Approaches (학습접근방식에 따른 고등학생들의 유전 문제 해결 과정 분석)

  • Lee, Shinyoung;Byun, Taejin
    • Journal of The Korean Association For Science Education
    • /
    • v.40 no.4
    • /
    • pp.385-398
    • /
    • 2020
  • This study aims to examine the genetics problem-solving processes of high school students with different learning approaches. Two second-year high school students participated in a task that required solving a complicated pedigree problem. The participants had similar academic achievement in life science, but one had a deep learning approach while the other had a surface learning approach. To analyze the students' problem-solving processes in depth, each student's problem solving was video-recorded, and each student gave a think-aloud interview after solving the problem. Although the students showed similar errors on the first trial, they showed different problem-solving processes on the last trial. Student A, who had a deep learning approach, voluntarily solved the problem three times and, in the last trial, demonstrated correct conceptual framing of the three constraints using rule-based reasoning. Student A monitored the consistency between the data and her own pedigree and reflected on the problem-solving process in the check phase of the last trial; her problem-solving process in the third trial resembled a successful problem-solving algorithm. However, Student B, who had a surface learning approach, involuntarily repeated solving the problem twice and focused on and used only part of the data because of her answer-seeking, goal-oriented attitude toward the problem. Student B showed incorrect conceptual framing through memory-bank or arbitrary reasoning and maintained her incorrect conceptual framing of the constraints across both problem-solving processes. These findings can help in understanding the problem-solving processes of students with different learning approaches, allowing teachers to better support students who have difficulty with genetics problems.

Derivation of the Synthetic Unit Hydrograph Based on the Watershed Characteristics (유역특성에 의한 합성단위도의 유도에 관한 연구)

  • 서승덕
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.17 no.1
    • /
    • pp.3642-3654
    • /
    • 1975
  • The purpose of this thesis is to derive a unit hydrograph applicable to ungauged watershed areas from the relations between directly measurable unitgraph properties, such as peak discharge ($q_p$), time to peak discharge ($T_p$), and lag time ($L_g$), and watershed characteristics, such as the river length ($L$, km) from the gauging station to the upstream limit of the watershed, the river length from the station to the centroid of the watershed ($L_{ca}$, km), and the main stream slope ($S$, m per km). Another procedure, based on routing a time-area diagram through catchment storage, the Instantaneous Unit Hydrograph (IUH), is also considered, and the dimensionless unitgraph is analyzed in brief. The basic data (1969 to 1973) used in this study are 9 recording level gauges and rating curves, 41 rain gauges and pluviographs, and 40 observed unitgraphs from the 9 sub-watersheds of the Nak Dong River basin. The results are summarized as follows. 1. The time in hours from the start of rise to the peak rate ($T_p$) generally occurred at about $0.3T_b$ ($T_b$ being the time base of the hydrograph), with some indication of higher values for larger watersheds; base flow was comparatively higher than in the other small watershed areas. 2. Losses from rainfall were divided into an initial loss and a continuing loss. The initial loss is the portion of storm rainfall intercepted by vegetation, held in depression storage, or infiltrated at a high rate early in the storm; the continuing loss proceeds at a constant rate throughout the remainder of the storm and approximates a nearly constant infiltration rate (the ${\Phi}$-index method). The loss rate from this analysis was estimated at approximately 50 per cent of the rainfall during the period of surface runoff. 3. For the stream slope it seems adequate, as is usual, to consider the main stream only, without specific consideration of tributaries; it is desirable to develop a single measure of slope representative of the whole stream. The mean channel slopes were 1 m per 200 m at Gazang and 1 m per 1,400 m at Jindong, which are slightly low in the light of other river studies, so the flood concentration rate in the Nak Dong River basin may be slightly low. 4. It was found that the watershed lag ($L_g$, hrs) could be expressed as $L_g = 0.253(L \cdot L_{ca})^{0.4171}$. The product $L \cdot L_{ca}$ is a measure of the size and shape of the watershed. For the logarithms, the correlation coefficient for $L_g$ was 0.97, showing that $L_g$ is closely related to the watershed characteristics $L$ and $L_{ca}$. 5. An expression containing the slope may be expected to take the form $L_g = 0.545(L \cdot L_{ca}/\sqrt{S})^{0.346}$. For the logarithms, the correlation coefficient for $L_g$ was again 0.97, so $L_g$ is closely related to these basin characteristics as well; care is needed in the analysis relating to the mean slopes. 6. The peak discharge per unit area of the unitgraph for the standard duration $t_r$, in ㎥/sec/$\textrm{km}^2$, was given by $q_p = 10^{-0.52 - 0.0184 L_g}$, with lower values for watersheds of higher lag time. For the logarithms, the correlation coefficient for $q_p$ was 0.998, indicating high significance. The peak discharge of the unitgraph for an area can therefore be expected to take the form $Q_p = q_p \cdot A$ (㎥/sec). 7. Using the unitgraph parameter $L_g$, the base length of the unitgraph, in days, was adopted as $T_b = 0.73 + 2.073(L_g/24)$, with a highly significant correlation coefficient of 0.92. The constants of this equation are fixed by the procedure used to separate base flow from direct runoff. 8. The width $W_{75}$ of the unitgraph at 75 per cent of the peak discharge, in hours, and the width $W_{50}$ at 50 per cent of the peak discharge, in hours, can be estimated from $W_{75} = 1.61/q_p^{1.05}$ and $W_{50} = 2.5/q_p^{1.05}$, respectively. These provide a supplementary guide for sketching the unitgraph. 9. The above equations define the three factors necessary to construct the unitgraph for duration $t_r$. For a duration $t_R$, the lag is $L_{gR} = L_g + 0.2(t_R - t_r)$, and this modified lag $L_{gR}$ is used in $q_p$ and $T_b$. If $t_r$ happens to be equal or close to $t_R$, one may further assume $q_{pR} = q_p$. 10. The triangular hydrograph is a dimensionless unitgraph prepared from the 40 unitgraphs. Its equation is $q_p = K \cdot A \cdot Q / T_p$, i.e. $q_p = 0.21\,A \cdot Q / T_p$, the constant 0.21 being specific to the Nak Dong River basin. 11. The base length of the time-area diagram for the IUH routing is $C = 0.9(L \cdot L_{ca}/\sqrt{S})^{1/3}$, with a correlation coefficient of 0.983, indicating high significance. The base length of the T-AD was set equal to the time from the midpoint of rainfall excess to the point of contraflexure. The constant $K$ derived in this study is $K = 8.32 + 0.0213\,L/\sqrt{S}$, with a correlation coefficient of 0.964. 12. In the light of the results analyzed in this study, the average errors in the peak discharge of the synthetic unitgraph, triangular unitgraph, and IUH were estimated as 2.2, 7.7, and 6.4 per cent, respectively, relative to the peak of the observed average unitgraph. Each ordinate of the synthetic unitgraph closely approached the observed one.
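The synthetic-unitgraph relations in results 4, 6, 7, and 8 above chain into a small worked example. The watershed characteristics below (L, Lca, S, A) are made-up values for a hypothetical ungauged basin, not figures from the study:

```python
# Chained evaluation of the study's synthetic-unitgraph relations.
# L, Lca (km), S (m/km), and drainage area A (km^2) are assumed values.
L, Lca, S, A = 40.0, 18.0, 3.0, 250.0

Lg = 0.253 * (L * Lca) ** 0.4171        # watershed lag, hours (result 4)
qp = 10 ** (-0.52 - 0.0184 * Lg)        # peak discharge per unit area, m^3/s/km^2 (result 6)
Qp = qp * A                             # peak discharge of the unitgraph, m^3/s
Tb = 0.73 + 2.073 * (Lg / 24)           # base length, days (result 7)
W75 = 1.61 / qp ** 1.05                 # width at 75% of peak, hours (result 8)
W50 = 2.5 / qp ** 1.05                  # width at 50% of peak, hours (result 8)

print(f"Lg={Lg:.2f} h, qp={qp:.4f}, Qp={Qp:.1f} m^3/s, Tb={Tb:.2f} d, "
      f"W75={W75:.1f} h, W50={W50:.1f} h")
```

The three quantities $q_p$, $T_b$, and $L_g$ fix the unitgraph shape; $W_{75}$ and $W_{50}$ then guide the sketching, with $W_{50} > W_{75}$ as expected.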


Effects of an Aspirated Radiation Shield on Temperature Measurement in a Greenhouse (강제 흡출식 복사선 차폐장치가 온실의 기온측정에 미치는 영향)

  • Jeong, Young Kyun;Lee, Jong Goo;Yun, Sung Wook;Kim, Hyeon Tae;Ahn, Enu Ki;Seo, Jae Seok;Yoon, Yong Cheol
    • Journal of Bio-Environment Control
    • /
    • v.28 no.1
    • /
    • pp.78-85
    • /
    • 2019
  • This study examined the performance of an aspirated radiation shield (ARS), made in the investigators' laboratory and characterized by relatively easy construction and low cost, based on survey data and reports on errors in its measurements of temperature and relative humidity. The findings are summarized as follows. The ARS and the Jinju weather station recorded maximum, average, and minimum temperature ranges of 2.0~34.1°C, -6.1~22.2°C, and -14.0~15.1°C versus 0.4~31.5°C, -5.8~22.0°C, and -14.1~16.3°C, respectively. There were no large differences in temperature between the two sites, except that the lowest and highest points of the maximum temperature were higher on the campus by 1.6°C and 2.6°C, respectively. The ARS measurements were tested against those of a standard thermometer: the temperature measured by the ARS ranged from 2.0°C lower to 1.8°C higher than that of the standard thermometer, and the correlation analysis gave a coefficient of determination of 0.99. Comparing temperatures with and without fans, the maximum, average, and minimum temperatures were higher overall without fans by 0.5~7.6°C, 0.3~4.6°C, and 0.5~3.9°C, respectively. The daily average relative humidity measured by the ARS was slightly higher than that of the Jinju weather station: the measurements on June 27, July 26, July 29, and August 20 were higher by 5.7%, 5.2%, 9.1%, and 5.8%, respectively, but the differences in the monthly averages between the two sites were trivial, at 2.0~3.0%. Relative humidity measured with the ARS ranged from -3.98% to +7.78% overall relative to an Assmann psychrometer. Correlation analyses of relative humidity against the Jinju weather station and the Assmann psychrometer showed high correlations, with coefficients of determination of 0.94 and 0.97, respectively.
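The coefficient-of-determination comparison used throughout the study reduces, for paired readings, to a squared Pearson correlation. A minimal sketch with invented ARS/reference temperature pairs (the paper's own data gave R² = 0.99):

```python
# Squared Pearson correlation between two paired temperature series.
# The readings below are invented for illustration, not the study's data.
ars      = [2.1, 8.4, 15.2, 21.9, 28.3, 33.5]   # hypothetical ARS readings, deg C
standard = [2.0, 8.8, 15.0, 22.4, 28.0, 34.1]   # hypothetical reference readings, deg C

n = len(ars)
mx, my = sum(ars) / n, sum(standard) / n
sxy = sum((a - mx) * (s - my) for a, s in zip(ars, standard))
sxx = sum((a - mx) ** 2 for a in ars)
syy = sum((s - my) ** 2 for s in standard)
r_squared = sxy ** 2 / (sxx * syy)               # coefficient of determination

print(f"R^2 = {r_squared:.3f}")
```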

Relationships on Magnitude and Frequency of Freshwater Discharge and Rainfall in the Altered Yeongsan Estuary (영산강 하구의 방류와 강우의 규모 및 빈도 상관성 분석)

  • Rhew, Ho-Sang;Lee, Guan-Hong
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.16 no.4
    • /
    • pp.223-237
    • /
    • 2011
  • Intermittent freshwater discharge has a critical influence on the biophysical environment and ecosystems of the Yeongsan Estuary, where the estuary dam has interrupted the continuous mixing of salt water and fresh water. Although the discharge is controlled by humans, the extreme events are driven mainly by heavy rainfall in the river basin and have varied impacts depending on their magnitude and frequency. This research aims to evaluate the magnitude and frequency of extreme freshwater discharges and to establish magnitude-frequency relationships between basin-wide rainfall and freshwater inflow. Daily discharge and daily basin-averaged rainfall from January 1, 1997 to August 31, 2010 were used to determine the relations between discharge and rainfall. Consecutive daily discharges were grouped into independent events using a well-defined event-separation algorithm, and partial duration series were extracted to obtain a proper probability distribution for extreme discharges and the corresponding rainfall events. Extreme discharge events over the threshold of 133,656,000 m³ numbered 46 over the 13.7 years, following a Weibull distribution with k=1.4. The 3-day accumulated rainfall ending one day before the peak discharge (the 1day-before-3day-sum rainfall) was adopted as the control variable for discharge, because its magnitude correlated best with that of the extreme discharge events. The minimum corresponding 1day-before-3day-sum rainfall, 50.98 mm, was initially set as the threshold for selecting discharge-inducing rainfall cases; the number of rainfall groups selected this way, however, exceeded the number of extreme discharge events. A canonical discriminant analysis indicated that the water level above the target level (-1.35 m EL.) can be used to divide the 1day-before-3day-sum rainfall groups into discharge-inducing and non-discharge cases, and that a newly set threshold of 104 mm separates the two cases without errors. The magnitude-frequency relationships between rainfall and discharge were then established with the newly selected 1day-before-3day-sum rainfalls: $D = 1.111\times10^8 + 1.677\times10^6\,\overline{r_{3day}}$ ($\overline{r_{3day}} \geq 104$, $R^2 = 0.459$), $T_d = 1.326\,T_{r3}^{0.683}$, and $T_d = 0.117\,\exp[0.0155\,\overline{r_{3day}}]$, where $D$ is the discharge volume, $\overline{r_{3day}}$ the 1day-before-3day-sum rainfall, and $T_{r3}$ and $T_d$ the return periods of the 1day-before-3day-sum rainfall and the freshwater discharge, respectively. These relations provide a framework for evaluating the effect of freshwater discharge on estuarine flow structure, water quality, and ecosystem responses from the perspective of magnitude and frequency.
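The fitted magnitude-frequency relations can be applied directly once a 1day-before-3day-sum rainfall above the 104 mm threshold is given. A worked example with an assumed rainfall of 150 mm:

```python
import math

# Plugging an assumed 1day-before-3day-sum rainfall into the fitted relations
# reported above (valid for rainfall >= 104 mm).
r3 = 150.0                               # 3-day rainfall, mm (assumed case)
D = 1.111e8 + 1.677e6 * r3               # discharge volume, m^3
T_d = 0.117 * math.exp(0.0155 * r3)      # return period of the discharge, years

print(f"D = {D:.3e} m^3, T_d = {T_d:.2f} yr")
```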

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for housing computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. With the growth of cloud computing in particular, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and it can cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, and the causes are difficult to identify. Previous studies on failure prediction in data centers treated each server as a single, isolated state, without assuming that the devices interact. In this study, therefore, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions have been developed. The causes of failures occurring inside servers, on the other hand, are hard to determine, and adequate prevention has not yet been achieved, particularly because server failures do not occur singly: a failing server can cause failures in other servers or be affected by failures propagating from other servers. In other words, whereas existing studies analyzed failures under the assumption of a single server that does not affect other servers, this study assumed that failures have effects between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failures were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device were sorted in chronological order, and when a failure occurred on one piece of equipment within 5 minutes of a failure on another, the failures were defined as simultaneous. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed together within the sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state, was used. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning architecture was adopted to reflect the fact that each server contributes to a complex failure to a different degree; this method increases prediction accuracy by assigning larger weights to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was treated once as a single-server state and once as a multiple-server state, and the two were compared. The second experiment improved the prediction accuracy for the complex-server case by optimizing the threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that there are effects between servers. The study thus confirmed that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved the prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
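The server-level attention idea can be sketched in a few lines: per-server summaries (standing in for LSTM outputs) are combined with softmax attention weights, and a per-server threshold turns failure scores into predictions. Everything below, including the dimensions, random stand-in vectors, and thresholds, is an illustrative assumption, not the authors' trained model:

```python
import numpy as np

# Toy sketch of attention-weighted multi-server failure prediction.
rng = np.random.default_rng(42)
n_servers, hidden = 5, 8

H = rng.normal(size=(n_servers, hidden))   # per-server LSTM summaries (stand-ins)
w = rng.normal(size=hidden)                # attention context vector

scores = H @ w                             # relevance of each server
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                       # softmax attention weights (sum to 1)

context = alpha @ H                        # weighted combination across servers

# Per-server failure scores, thresholded with individually tuned values
failure_score = 1 / (1 + np.exp(-(H @ rng.normal(size=hidden))))
thresholds = np.array([0.5, 0.4, 0.6, 0.5, 0.45])   # per-server thresholds (assumed)
predicted_down = failure_score > thresholds

print("attention:", np.round(alpha, 3))
print("predicted failures:", predicted_down)
```

In the full model the attention weights are learned jointly with the LSTM, so servers with greater influence on a complex failure receive larger weights, which is the mechanism the abstract credits for the improved accuracy.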

Estimation of Fresh Weight and Leaf Area Index of Soybean (Glycine max) Using Multi-year Spectral Data (다년도 분광 데이터를 이용한 콩의 생체중, 엽면적 지수 추정)

  • Jang, Si-Hyeong;Ryu, Chan-Seok;Kang, Ye-Seong;Park, Jun-Woo;Kim, Tae-Yang;Kang, Kyung-Suk;Park, Min-Jun;Baek, Hyun-Chan;Park, Yu-hyeon;Kang, Dong-woo;Zou, Kunyan;Kim, Min-Cheol;Kwon, Yeon-Ju;Han, Seung-ah;Jun, Tae-Hwan
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.4
    • /
    • pp.329-339
    • /
    • 2021
  • Soybean (Glycine max), one of the major upland crops, requires precise management of environmental conditions such as temperature, water, and soil during cultivation, since it is sensitive to environmental changes. Spectral technologies that remotely measure the physiological state of crops have great potential for improving soybean quality and productivity by estimating yields, physiological stresses, and diseases. In this study, we developed and validated a soybean growth prediction model using multispectral imagery. We conducted a linear regression analysis between vegetation indices and soybean growth data (fresh weight and LAI) obtained at fields in Miryang, and validated the model at fields in Goesan. The model based on the green ratio vegetation index (GRVI) performed best in predicting fresh weight at the calibration stage (R²=0.74, RMSE=246 g/m², RE=34.2%); at the validation stage, its RMSE and RE were 392 g/m² and 32%, respectively. The errors of the model differed by cropping system: the RMSE and RE of the model in single-crop fields were 315 g/m² and 26%, respectively, whereas the model had larger RMSE (381 g/m²) and RE (31%) in double-crop fields. When models were developed separately for the two years with similar accumulated temperature (AT) among the three years (2018 and 2020) and for the single year whose AT differed (2019), the single-year model predicted better than the two-year model. Consequently, compared with the models divided by AT and the three-year model, the RMSE for single-crop fields improved by about 29.1%, although the performance for double-crop fields declined by about 19.6%. If environmental factors are used along with the spectral data, reliable prediction of soybean growth should be achievable under various environmental conditions.
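The index-plus-linear-regression step can be sketched as follows. The GRVI definition used here (a NIR/green band ratio), the fitted coefficients, and all reflectance and fresh-weight values are assumptions for illustration, not the paper's model or data:

```python
# Vegetation index -> linear model -> RMSE and relative error (RE) scoring.
# All values below are invented for illustration.
nir   = [0.42, 0.48, 0.55, 0.61]    # hypothetical NIR reflectance per plot
green = [0.10, 0.11, 0.12, 0.13]    # hypothetical green reflectance per plot

grvi = [n / g for n, g in zip(nir, green)]   # assumed GRVI: NIR/green ratio

a, b = 180.0, -120.0                          # assumed fitted slope and intercept
pred = [a * x + b for x in grvi]              # predicted fresh weight, g/m^2
obs  = [640.0, 660.0, 700.0, 725.0]           # hypothetical measured fresh weight

n = len(obs)
rmse = (sum((p - o) ** 2 for p, o in zip(pred, obs)) / n) ** 0.5
re = rmse / (sum(obs) / n) * 100              # relative error as % of the mean

print(f"RMSE = {rmse:.1f} g/m^2, RE = {re:.1f}%")
```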

Studies on the Functional Interrelation between the Vestibular Canals and the Extraocular Muscles (미로반규관(迷路半規管)과 외안근(外眼筋)의 기능적(機能的) 관계(關係)에 관(關)한 연구(硏究))

  • Kim, Jeh-Hyub
    • The Korean Journal of Physiology
    • /
    • v.8 no.2
    • /
    • pp.1-17
    • /
    • 1974
  • This experiment was designed to explore the specific functional interrelations between the vestibular semicircular canals and the extraocular muscles, which may disclose the neural organization connecting the vestibular canals and the ocular motor nuclei in the brain stem for the vestibulo-ocular reflex mechanism. In urethane-anesthetized rabbits, a fine wire, insulated except at the cut cross-section of its tip, was inserted into each canal close to the ampullary receptor organ through minute holes made in the osseous canal wall, for monopolar stimulation of each canal nerve. All extraocular muscles of both eyes were ligated and cut at their insertions, and the isometric tension and EMG responses of the extraocular muscles to vestibular canal nerve stimulation were recorded with a physiographic recorder. Upon stimulation of each semicircular canal nerve, the direction of the eye movement was also observed. The experimental results were as follows. 1) Single canal nerve stimulation with high-frequency square waves (240 cps, 0.1 msec) caused excitation of three extraocular muscles and inhibition of the remaining three in both eyes: stimulation of any canal nerve of one labyrinth caused excitation (contraction) of the superior rectus, superior oblique, and medial rectus muscles and inhibition (relaxation) of the inferior rectus, inferior oblique, and lateral rectus muscles in the ipsilateral eye, and the opposite effects in the contralateral eye. 2) Overlapped stimulation of the three canal nerves of one labyrinth demonstrated unidirectional (excitatory or inhibitory) summation of the individual canal effects on a given extraocular muscle, indicating that the three canals of one vestibular system exert a similar effect on a given extraocular muscle. 3) Based on the above evidence, a simple rule is proposed by which one can define the vestibular excitatory and inhibitory input sources to all the extraocular muscles: the superior rectus, superior oblique, and medial rectus muscles receive excitatory impulses from the ipsilateral vestibular canals, and the inferior rectus, inferior oblique, and lateral rectus muscles from the contralateral canals; the opposite relationship applies for vestibular inhibitory impulses. 4) According to the specific direction of the eye movements induced by stimulation of the individual canal nerves, one extraocular muscle exerting the major role (the muscle of primary contraction) and two muscles of synergistic contraction could be differentiated in both eyes. 5) When these results were compared with the well-known observations of Cohen et al. (1964) in cats, the extraocular muscles of primary contraction were the same, but those of synergistic contraction were partially different. The oblique muscle responses to each canal nerve excitation appeared identical, whereas the responses of the horizontal (medial and lateral) and vertical (superior and inferior) rectus muscles showed considerable differences. By critical analysis of these data, the author was able to locate theoretical contradictions in the observations of Cohen et al. but not in the author's own results. 6) An attempt was also made to compare the functional observations of this experiment with the morphological findings that Carpenter and his associates obtained by degeneration experiments in monkeys, and significant coincidences were found between these two works of different approach. In summary, the author has demonstrated that the well-known observations of Cohen et al. on the vestibulo-ocular interrelation contain important experimental errors, which can be proved by theoretical evaluation and substantiated by a series of experiments. Based on this experimental evidence, a new rule is proposed to define the interrelation between the vestibular canals and the extraocular muscles.
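The proposed rule in result 3) is a fixed mapping, so it can be encoded as a small lookup. The function below is an illustrative restatement of the rule, not part of the original study:

```python
# The proposed canal-to-muscle rule: any canal nerve of one labyrinth excites
# the ipsilateral superior rectus, superior oblique, and medial rectus,
# inhibits the ipsilateral inferior rectus, inferior oblique, and lateral
# rectus, and produces the opposite pattern in the contralateral eye.
EXCITED_IPSI   = {"superior rectus", "superior oblique", "medial rectus"}
INHIBITED_IPSI = {"inferior rectus", "inferior oblique", "lateral rectus"}

def response(muscle, eye_side, stimulated_side):
    """Return 'excited' or 'inhibited' for a muscle under the proposed rule."""
    ipsilateral = (eye_side == stimulated_side)
    if muscle in EXCITED_IPSI:
        return "excited" if ipsilateral else "inhibited"
    if muscle in INHIBITED_IPSI:
        return "inhibited" if ipsilateral else "excited"
    raise ValueError(f"unknown muscle: {muscle}")

print(response("medial rectus", "left", "left"))    # ipsilateral -> excited
print(response("medial rectus", "right", "left"))   # contralateral -> inhibited
```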


Compare to Evaluate the Imaging dose of MVCT and CBCT (Tomotherapy MVCT와 Linac CBCT의 Imaging dose 비교평가)

  • Yoon, Bo Reum;Hong, Mi Lan;Ahn, Jong Ho;Song, Ki Won
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.1
    • /
    • pp.83-89
    • /
    • 2014
  • Purpose: For intensity-modulated radiation therapy (IMRT) using tomotherapy and a linear accelerator (Linac), this study compared and evaluated the imaging doses of the MVCT and CBCT scans performed daily for correct patient set-up. Materials and Methods: A human body model phantom (Anderson Rando Phantom, USA) was divided into three parts (head, thorax, pelvis), and pieces of GafChromic EBT3 film cut to 0.5×0.5 cm² were placed on the anterior, posterior, left, and right surfaces of the phantom and at 2 cm depth from the anterior, posterior, left, right, and center of the phantom. The surface dose and inner dose were each measured three times using the tomotherapy unit (Hi-Art) and the OBI of a Novalis Tx. The measured films were read out with RIP version 6.0, and the mean dose values were compared by one-way analysis of variance. Results: With the human body model phantom, the MVCT inner doses were 15.43±6.05 cGy in the head, 16.62±3.08 cGy in the thorax, and 16.81±5.24 cGy in the pelvis, while the CBCT inner doses were 13.28±3.68, 13.66±4.04, and 15.52±3.52 cGy, respectively. The surface doses for MVCT were 11.64±4.05, 12.16±4.38, and 12.05±2.71 cGy in the head, thorax, and pelvis, and for CBCT they were 14.59±3.51, 15.82±2.89, and 17.48±2.80 cGy, respectively. Conclusion: For the inner dose, the MVCT using MV energy was higher than the CBCT using kV energy by factors of 1.16 in the head, 1.22 in the thorax, and 1.08 in the pelvis; for the surface dose, the CBCT was higher than the MVCT by factors of 1.25 in the head, 1.30 in the thorax, and 1.45 in the pelvis. The imaging dose is small compared with the therapeutic dose, but since it is delivered on a daily schedule it may partially affect normal tissue. IMRT must nevertheless be accompanied by image-guided (IGRT) procedures to minimize errors between the planned and actual treatment. Thus, to minimize the imaging dose the patient receives, the treatment plan should be set up with the imaging dose taken into account, or the scan range should be minimized when acquiring MVCT.
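The ratio comparison in the conclusion follows directly from the reported mean doses; re-deriving it from the abstract's values:

```python
# MVCT vs CBCT dose ratios per body region, from the reported means (cGy).
inner_mvct = {"head": 15.43, "thorax": 16.62, "pelvis": 16.81}
inner_cbct = {"head": 13.28, "thorax": 13.66, "pelvis": 15.52}
surf_mvct  = {"head": 11.64, "thorax": 12.16, "pelvis": 12.05}
surf_cbct  = {"head": 14.59, "thorax": 15.82, "pelvis": 17.48}

for region in inner_mvct:
    r_in = inner_mvct[region] / inner_cbct[region]   # MVCT higher inside
    r_sf = surf_cbct[region] / surf_mvct[region]     # CBCT higher at the surface
    print(f"{region}: inner MVCT/CBCT = {r_in:.2f}, surface CBCT/MVCT = {r_sf:.2f}")
```

The printed ratios reproduce the abstract's figures (1.16/1.22/1.08 inside, 1.25/1.30/1.45 at the surface).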

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented/virtual reality, and smart cities, which are expected to change our living environment and industries as a whole. To deliver those services, reduced latency and reliability, on top of high data rates, are critical for real-time services. 5G has accordingly paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of $10^6$ devices/㎢. In particular, in intelligent traffic control systems and the various vehicle-based Vehicle-to-X (V2X) services such as traffic control, the reduction of delay and assurance of reliability for real-time services are very important in addition to high data speed. 5G communication uses the high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelengths and small diffraction angles limit their reach and prevent them from penetrating walls, restricting their use indoors. Under existing networks it is therefore difficult to overcome these constraints. The conventional centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signaling from data-plane packets, must control the delay-related tree structure available in an emergency during autonomous driving; in such scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since SDNs with the usual centralized structure have difficulty meeting the desired delay level, studies on the optimal size of an SDN for information processing are needed. SDNs therefore need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely tied to ultra-low latency, high reliability, and hyper-connectivity and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where vehicles pass through small 5G cells very quickly, the information update cycle, the round-trip delay (RTD), and the data processing time of the SDN are strongly correlated with the overall delay. Of these, the RTD is not a significant factor, because the link is fast enough to keep it below 1 ms, but the information update cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request the relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells of 50~250 m in radius and vehicle speeds of 30~200 km/h in order to examine the network architecture that minimizes the delay.
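A quick check of the simulation envelope: the cell dwell time, which bounds how often the serving cell and its SDN must refresh a vehicle's information, follows from the assumed cell radii and vehicle speeds. Crossing the cell along its diameter is a simplifying assumption of this sketch:

```python
# Dwell time of a vehicle inside one 5G small cell, for the simulation's
# assumed cell radii (50-250 m) and speeds (30-200 km/h).
def dwell_time_s(radius_m, speed_kmh):
    """Time to cross a cell along its diameter, in seconds."""
    speed_ms = speed_kmh / 3.6
    return 2 * radius_m / speed_ms

fastest = dwell_time_s(50, 200)    # worst case: smallest cell, fastest vehicle
slowest = dwell_time_s(250, 30)    # best case: largest cell, slowest vehicle

print(f"dwell time: {fastest:.2f} s (worst) to {slowest:.1f} s (best)")
```

Even in the worst case the vehicle stays in a cell for about 1.8 s, so with an RTD under 1 ms the delay budget is dominated by the information update cycle and the SDN's processing time, as the abstract argues.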