• Title/Summary/Keyword: Flow Analysis Model

A Study on the Influence of Workers' Aspiration for Academic Needs on Participation in University Education (근로자의 학업욕구 열망이 대학교육 참여에 미치는 영향에 관한 연구)

  • Lee, Ji-Hun;Mun, Bok-Hyun
    • Journal of Korea Entertainment Industry Association
    • /
    • v.15 no.3
    • /
    • pp.231-241
    • /
    • 2021
  • This study intended to present strategies and implications for attracting new students and providing customized education to university officials, through research on how workers' academic aspirations affect their participation in university education. Variables were derived by analyzing prior data, and causal relationships between variables and a questionnaire were developed. For the survey, responses from 331 workers interested in participating in university education were collected through face-to-face interviews. The collected data were coded, and reliability verification, validity verification, and frequency analysis were conducted. Finally, the fit of the structural equation model and the causal relationship for each concept were validated. The results of the validation show the following implications. First, university officials should motivate prospective students through a mentor-mentee system with experienced people who have switched to a suitable vocational group through university education. It will also be necessary to develop and disseminate programs so that workers can continue to develop themselves for the future. To this end, it will be necessary to help them understand their aptitudes and strengths through consultation with experts. Second, university officials should strengthen public relations so that prospective students can learn, through recommendations, about the cases and information of job transitions among admitted workers. It will also be necessary to develop university education programs that support self-development, accept various ideas through public contests, and provide accurate information about university education to workers through re-processing. Third, university officials should provide workers with a program that allows them to catch two rabbits at once: job transformation and self-improvement through university education.
In other words, it is necessary to stimulate workers' motivation by providing various information and opportunities, such as visits to advanced overseas companies, acquisition of various certificates, transfers between blue-collar and white-collar departments, and transfer opportunities. Fourth, university officials should actively promote related university education programs in step with the flow of the social environment, so that workers participating in university education receive systematic education. Finally, university officials will need to provide consultation and promotion so that workers can develop themselves when they participate in college education, and will have to identify what workers need for self-development through demand surveys and analysis.
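The reliability verification mentioned in the abstract above is commonly done with Cronbach's alpha. As a hedged, minimal sketch (the item scores, scale, and respondent counts below are invented, not the study's 331 responses):

```python
# Minimal Cronbach's alpha for questionnaire reliability (illustrative data).
def cronbach_alpha(items):
    """items: one inner list per questionnaire item, aligned by respondent."""
    k = len(items)

    def var(xs):                                  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(it) for it in items)
    # Total score per respondent across all items.
    totals = [sum(item[j] for item in items) for j in range(len(items[0]))]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Three hypothetical items answered by five hypothetical respondents (5-point scale).
scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(scores)
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency.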

Patent Production and Technological Performance of Korean Firms: The Role of Corporate Innovation Strategies (특허생산과 기술성과: 기업 혁신전략의 역할)

  • Lee, Jukwan;Jung, Jin Hwa
    • Journal of Technology Innovation
    • /
    • v.22 no.1
    • /
    • pp.149-175
    • /
    • 2014
  • This study analyzed the effect of corporate innovation strategies on patent production and ultimately on technological change and new product development of firms in South Korea. The intent was to derive efficient strategies for enhancing technological performance of the firms. For the empirical analysis, three sources of data were combined: four waves of the Human Capital Corporate Panel Survey (HCCP) data collected by the Korea Research Institute for Vocational Education and Training (KRIVET), corporate financial data obtained from the Korea Information Service (KIS), and corporate patent data provided by the Korean Intellectual Property Office (KIPO). The patent production function was estimated by zero-inflated negative binomial (ZINB) regression. The technological performance function was estimated by two-stage regression, taking into account the endogeneity of patent production. An ordered logit model was applied for the second-stage regression. Empirical results confirmed the critical role of corporate innovation strategies in patent production and in facilitating technological change and new product development of the firms. In patent production, the firms' R&D investment and human resources were key determinants. Higher R&D intensity led to more patents, yet with decreasing marginal productivity. A larger stock of registered patents also led to a larger flow of new patent production. Firms were more prolific in patent production when they had high-quality personnel, invested intensively in human resource development, and adopted a market-leading or fast-follower strategy rather than a stability strategy. In technological performance, the firms' human resources played a key role in accelerating technological change and new product development. R&D intensity expedited new product development of the firm. Firms adopting a market-leading or fast-follower strategy were at an advantage over those with a stability strategy in technological performance.
Firms prolific in patent production were also advanced in terms of technological change and new product development. However, the nexus between patent production and technological performance measures was substantially reduced when controlling for the endogeneity of patent production. These results suggest that firms need to strengthen the linkage between patent production and technological performance, and take strategies that address each firm's capacities and needs.
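For readers unfamiliar with the ZINB specification used for the patent production function above, here is a minimal sketch of its probability mass function: a point mass at zero (firms that never patent) mixed with a negative binomial count distribution. The parameter values are illustrative, not the paper's estimates:

```python
import math

def zinb_pmf(y, pi, r, p):
    """P(Y=y) for a zero-inflated negative binomial: with probability pi the
    count is a structural zero; otherwise it follows NB(r, p) with pmf
    C(y+r-1, y) * p**r * (1-p)**y (computed via log-gamma for stability)."""
    nb = math.exp(
        math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
    ) * (p ** r) * ((1 - p) ** y)
    return pi * (1.0 if y == 0 else 0.0) + (1 - pi) * nb

# Sanity check: the mixture is a proper distribution (probabilities sum to 1).
total = sum(zinb_pmf(y, pi=0.3, r=2.0, p=0.5) for y in range(200))
```

In an actual ZINB regression, pi and the NB mean would each be linked to firm covariates (R&D intensity, human resources, strategy dummies) rather than fixed.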

Hydrogeochemical and Environmental Isotope Study of Groundwaters in the Pungki Area (풍기 지역 지하수의 수리지구화학 및 환경동위원소 특성 연구)

  • 윤성택;채기탁;고용권;김상렬;최병영;이병호;김성용
    • Journal of the Korean Society of Groundwater Environment
    • /
    • v.5 no.4
    • /
    • pp.177-191
    • /
    • 1998
  • For various kinds of waters including surface water, shallow groundwater (<70 m deep) and deep groundwater (500∼810 m deep) from the Pungki area, an integrated study based on hydrochemical, multivariate statistical, thermodynamic, environmental isotopic (tritium, oxygen-hydrogen, carbon and sulfur), and mass-balance approaches was attempted to elucidate the hydrogeochemical and hydrologic characteristics of the groundwater system in the gneiss area. Shallow groundwaters are typified as the 'Ca-HCO$_3$' type with higher concentrations of Ca, Mg, SO$_4$ and NO$_3$, whereas deep groundwaters are the 'Na-HCO$_3$' type with elevated concentrations of Na, Ba, Li, H$_2$S, F and Cl and are supersaturated with respect to calcite. The waters in the area are largely classified into two groups: 1) surface waters and most of the shallow groundwaters, and 2) deep groundwaters and one sample of shallow groundwater. Seasonal compositional variations are recognized for the former. Multivariate statistical analysis indicates that three factors may explain about 86% of the compositional variations observed in deep groundwaters. These are: 1) plagioclase dissolution and calcite precipitation, 2) sulfate reduction, and 3) acid hydrolysis of hydroxyl-bearing minerals (mainly mica). By combining these with results of thermodynamic calculation, four appropriate models of water/rock interaction, each showing the dissolution of plagioclase, kaolinite and micas and the precipitation of calcite, illite, laumontite, chlorite and smectite, are proposed by mass-balance modelling in order to explain the water quality of deep groundwaters. Oxygen-hydrogen isotope data indicate that deep groundwaters originated from a local meteoric water recharged from a distant, topographically high mountainous region and underwent larger degrees of water/rock interaction during the regional deep circulation, whereas the shallow groundwaters were recharged from a nearby, topographically low region.
Tritium data show that the recharge time was the pre-thermonuclear age for deep groundwaters (<0.2 TU) but the post-thermonuclear age for shallow groundwaters (5.66∼7.79 TU). The $\delta^{34}$S values of dissolved sulfate indicate that high amounts of dissolved H$_2$S (up to 3.9 mg/L), a characteristic of deep groundwaters in this area, might be derived from the reduction of sulfate. The $\delta^{13}$C values of dissolved carbonates are controlled not only by the dissolution of carbonate minerals by dissolved soil CO$_2$ (for shallow groundwaters) but also by the reprecipitation of calcite (for deep groundwaters). An integrated model of the origin, flow and chemical evolution for the groundwaters in this area is proposed in this study.
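The statement that the deep groundwaters are supersaturated with respect to calcite rests on a saturation-index calculation, SI = log10(IAP/Ksp). A sketch with illustrative ion activities and a commonly cited calcite log Ksp, neither taken from the paper:

```python
import math

# log10 Ksp of calcite at 25 degrees C (a commonly cited value, an assumption here).
LOG_KSP_CALCITE = -8.48

def saturation_index(a_ca, a_co3, log_ksp=LOG_KSP_CALCITE):
    """SI = log10(IAP) - log10(Ksp); SI > 0 means supersaturated,
    SI < 0 undersaturated, SI ~ 0 equilibrium."""
    iap = a_ca * a_co3                 # ion activity product [Ca2+][CO3 2-]
    return math.log10(iap) - log_ksp

# Hypothetical activities of a Na-HCO3-type deep groundwater.
si = saturation_index(a_ca=1e-3, a_co3=1e-5)
```

A positive SI, as computed here, is the criterion behind the calcite-precipitation term in the mass-balance models.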

The physical geography in general:yesterday and tomorrow (자연지리학 일반: 회고와 전망)

  • Son, Ill
    • Journal of the Korean Geographical Society
    • /
    • v.31 no.2
    • /
    • pp.138-159
    • /
    • 1996
  • There has been a tendency for Geomorphology and Climatology to be dominant in Physical Geography for 50 years in Korea. Physical Geography is concerned with the study of the totality of the natural environment through integrated approaches. But an overall direction or a certain paradigm could not be found, because the major sub-divisions of Physical Geography have been studied individually and the subjects and approaches in studying Physical Geography are enormously diverse. A consensus of opinion also could not be reached on what kinds of sub-divisions should be included in physical geography in general and how those should be summarized. Furthermore, it would be considered imprudent to survey the studies of Physical Geography besides those of Geomorphology and Climatology due to the small number of researchers. Assuming that the rest of the Physical Geographical studies, with the exception of Geomorphological and Climatological studies, are Physical Geography in general, the studies of Physical Geography in general are summarized and several aspects are drawn out as follows. First, the description of all possible factors of natural environments was the pattern of early studies of Physical Geography, and the tendency is maintained in various kinds of research and project reports. Recently, Physical Geographers have published several introductory textbooks or research monographs. In those books, however, integrated approaches to Physical Geography were not suggested, and the relationship between man and nature is dealt with at an elementary level.
Second, the authentic soil studies of Physical Geographers are insignificant, because the studies of soil in Physical Geography have mostly been considered a subsidiary means of Geomorphology. Summarizing the studies of Soil Geography by physical geographers and other pedologists, the subjects are classified as soil-forming processes, soil erosion, soils of tidal flats and reclaimed land, and soil pollution. Physical Geographers have focused upon soil-forming processes in order to elucidate geomorphic processes and past climatic environments. The results on the other subjects are trifling. Third, biogeographers and their studies are extremely few in number, and the study of Biogeography in Korea is at its starting point. But Biogeography could be a unifying theme for the physical-human geography interface, and it would be expected to play an active part in the fields of environmental conservation and resource management. Fourth, the studies of Hydrogeography (Geographical Hydrology) in Korea have run through studies of water balance and morphometric studies such as drainage network analysis and the relations among various morphometric elements of rivers. Recently, hydrological models have been introduced and developed to predict sediment flow, discharge, and groundwater. The growth of groundwater studies is worthy of close attention. Finally, the studies on environmental problems were no more than general descriptions of environmental destruction, resource development, environmental conservation, etc. until the 1970s. Ecological perspectives on the relationship between man and nature were suggested in some studies of natural hazards. Since the new environmentalism was introduced in the 1980s, human geographers have led the studies of Environmental Perception, Environmental Ethics, Environmental Sociology, and environmental policy.
Physical geographers have stayed out of phase with the climate of the times and concentrated upon the publication of introductory textbooks. Recently, several studies on human interference with and modification of natural environments have been attempted in the fields of Geomorphology and Climatology. Summarizing the studies of Physical Geography for 50 years in Korea, the integrated approaches inherent in Physical Geography disappeared little by little, and the major sub-divisions of Physical Geography have developed in connection with nearby earth sciences such as Geology, Meteorology, Pedology, Biology, and Hydrology, while the integrated themes have been rediscovered by non-geographers under the guise of environmental science. It is expected that Physical Geography would revive as the dominant subject to cope with environmental problems, rearming itself with its innate integrated approaches.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of these data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, failures of IT facilities are irregular due to interdependence, and it is difficult to identify their cause. Previous studies predicting failure in data centers treated each server as a single, independent state, without assuming that the devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within the server. Server-external failures include power, cooling, user errors, etc. Since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. On the other hand, the cause of failures occurring in the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: a server failure can cause other server failures, or be triggered by failures on other servers. In other words, while existing studies analyzed failures on the assumption of a single server that does not affect other servers, this study assumes that failures have effects between servers.
In order to define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes of that time is defined as occurring simultaneously. After configuring sequences of the devices that failed at the same time, 5 devices that frequently fail simultaneously within the configured sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used, considering that the contribution to multiple failures differs for each server. This algorithm increases prediction accuracy by giving more weight to a server as its impact on the failure increases. The study began by defining the types of failure and selecting the analysis targets. In the first experiment, the same collected data was analyzed both as a single-server state and as a multiple-server state, and the results were compared. The second experiment improved the prediction accuracy in the complex-server case by optimizing each server's threshold.
In the first experiment, which assumed a single server and multiple servers respectively, in the single-server case three of the five servers were predicted to have no failure even though failures actually occurred. However, assuming multiple servers, all five servers were correctly predicted to have failed. This experimental result supports the hypothesis that there is an effect between servers. As a result of this study, it was confirmed that the prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that the effect of each server differs, played a role in improving the analysis. In addition, by applying a different threshold for each server, the prediction accuracy could be improved. This study showed that failures whose cause is difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using the results of this study.
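The 5-minute co-occurrence rule described above can be sketched as a simple grouping pass over chronologically sorted failure events. The (timestamp, device) event format and the device names are assumptions for illustration, not the paper's data schema:

```python
# Group failures that occur within `window` seconds of a group's first event.
def group_simultaneous(events, window=300):
    """events: (timestamp_seconds, device_id) pairs, sorted by timestamp.
    Returns lists of device ids treated as simultaneous failures."""
    groups = []
    current, start = [], None
    for ts, dev in events:
        if start is None or ts - start > window:
            if current:
                groups.append(current)   # close the previous group
            current, start = [dev], ts   # open a new group at this event
        else:
            current.append(dev)          # within 5 minutes of group start
    if current:
        groups.append(current)
    return groups

# Hypothetical failure log: two co-occurrence groups separated by >300 s.
events = [(0, "net"), (120, "db"), (400, "srv"), (430, "win")]
groups = group_simultaneous(events)
```

Sequences built this way would then feed the LSTM/Hierarchical Attention Network pipeline described in the abstract.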

Development and evaluation of Pre-Parenthood Education Program for high school students based on Home Economics subject (고등학생을 위한 가정교과 기반 예비부모교육 프로그램 개발 및 평가)

  • Noh, Heui-Yeon;Cho, Jae Soon;Chae, Jung Hyun
    • Journal of Korean Home Economics Education Association
    • /
    • v.29 no.4
    • /
    • pp.161-193
    • /
    • 2017
  • The purpose of this study was to develop and evaluate a pre-parenthood education program (PPEP) based on the Home Economics (HE) subject for high school students. The development and evaluation of the PPEP based on the HE subject in this study followed the ADDIE model, excluding implementation, through 4 processes: analysis, design, development, and evaluation. First, program development directions were set in three aspects: 'general development', 'contents', and 'teaching and learning methods'. The program has 11 themes in total: '1. Parenting, what is being a parent', '2. Choosing your spouse, happy marital relationship, the best gift to your children', '3. Pregnancy and birth, a moving meeting with a new life', '4. Taking care of a newborn infant for 24 hours', '5. Taking care of infants, relationship with my lovely baby, attachment', '6. Taking care of young children, my child from another planet', '7. Parents and children in a healthy family', '8. Parent-child relationship, wise parents to make effective interaction with their children', '9. Parents, safety managers at home', '10. Practice to take care of infants', and '11. Practice of community nurturing support service development'. In particular, the learning activities of the program have major characteristics such as 1) utilization of cases including practice problems related to parenting, 2) community exchange activities utilizing learned knowledge and techniques, 3) actual life project activities utilizing learning contents related to parenting, 4) activities inducing positive changes in the current life of high school students, and 5) practice activities for the necessities of life such as food, clothing and shelter supporting the development of children. Second, the program was developed according to the design. Teaching-learning plans and materials for 17 classes were developed according to the 11 themes. The developed plans include class flow and teacher's references.
Each class starts with receiving a class-related message from a virtual child at the introduction stage and ends with replying to the message by summarizing the contents of the class and making a promise as a parent-to-be; that is the basic frame of class flow. Learning materials include various plans and reports necessary for the learning activities, prepared in enough detail to play the role of textbooks in a regular curriculum. Third, evaluation of the developed program was conducted through a 5-point Likert scale survey of 13 HE experts on two aspects: the program development process and the program development results. In the evaluation of the development process, the mean value was 4.61 and the index of content validity was 97.4%. For the development results, the mean value was 4.37 and the index of content validity was 86.9%. These values showed that validity in the development process and results of this study was highly secured, and confirmed that the PPEP based on HE was appropriate and valid to enhance the parenting qualifications of high school learners.
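The index of content validity reported above is commonly computed as the share of experts rating an item 4 or 5 on the 5-point scale. A minimal sketch with invented ratings (not the study's 13 actual responses):

```python
# Item-level content validity index (CVI) from expert ratings on a 5-point scale.
def content_validity_index(ratings, threshold=4):
    """Share of experts whose rating meets or exceeds the threshold."""
    relevant = sum(1 for r in ratings if r >= threshold)
    return relevant / len(ratings)

# 13 hypothetical expert ratings; 12 of 13 rate the item relevant.
expert_ratings = [5, 4, 5, 4, 4, 5, 3, 5, 4, 5, 4, 5, 4]
cvi = content_validity_index(expert_ratings)
```

A CVI near the study's reported 97.4% and 86.9% would correspond to nearly all 13 experts rating the items 4 or above.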

Cardioprotective Effect of Calcium Preconditioning and Its Relation to Protein Kinase C in Isolated Perfused Rabbit Heart (적출관류 토끼 심장에서 칼슘 전처치에 의한 심근보호 효과와 Protein Kinase C와의 관계)

  • 김용한;손동섭;조대윤;양기민;김호덕
    • Journal of Chest Surgery
    • /
    • v.32 no.7
    • /
    • pp.603-612
    • /
    • 1999
  • Background: It has been documented that brief repetitive periods of ischemia and reperfusion (ischemic preconditioning, IP) enhance the recovery of post-ischemic contractile function and reduce infarct size after a longer period of ischemia. Many mechanisms have been proposed to explain this process. Recent studies have suggested that a transient increase in intracellular calcium may trigger the activation of protein kinase C (PKC); however, there are still many controversies. Accordingly, the authors performed the present study to test the hypothesis that preconditioning with a high concentration of calcium before sustained subsequent ischemia (calcium preconditioning) mimics IP by PKC activation. Material and Method: The isolated hearts from New Zealand White rabbits (1.5∼2.0 kg body weight) were perfused with Tyrode solution by the Langendorff technique. After stabilization of baseline hemodynamics, the hearts were subjected to 45-minute global ischemia followed by a 120-minute reperfusion with IP (IP group, n=13) or without IP (ischemic control, n=10). IP was induced by a single episode of 5-minute global ischemia and 10-minute reperfusion. In the Ca2+ preconditioned groups, perfusate containing 10 (n=10) or 20 mM (n=11) CaCl2 was perfused for 10 minutes after 5-minute ischemia, followed by a 45-minute global ischemia and a 120-minute reperfusion. Baseline PKC was measured after 50-minute perfusion without any treatment (n=5). Left ventricular function including developed pressure (LVDP), dP/dt, heart rate, left ventricular end-diastolic pressure (LVEDP) and coronary flow (CF) was measured. Myocardial cytosolic and membrane PKC activities were measured by $^{32}$P-$\gamma$-ATP incorporation into PKC-specific peptide. The infarct size was determined using TTC (tetrazolium salt) staining and planimetry.
Data were analyzed using one-way analysis of variance (ANOVA) and Tukey's post-hoc test. Result: IP increased the functional recovery including LVDP, dP/dt and CF (p<0.05) and lowered the ascending range of LVEDP (p<0.05); it also reduced the infarct size from 38% to 20% (p<0.05). In both of the Ca2+ preconditioned groups, functional recovery was not significantly different in comparison with the ischemic control; however, the infarct size was reduced to 19∼23% (p<0.05). In comparison with the baseline (7.31$\pm$0.31 nmol/g tissue), the activities of the cytosolic PKC tended to decrease in both the IP and Ca2+ preconditioned groups, particularly in the 10 mM Ca2+ preconditioned group (4.19$\pm$0.39 nmol/g tissue, p<0.01); the activity of membrane PKC was significantly increased in both the IP and 10 mM Ca2+ preconditioned groups (p<0.05; 1.84$\pm$0.21, 4.00$\pm$0.14, and 4.02$\pm$0.70 nmol/g tissue in the baseline, IP, and 10 mM Ca2+ preconditioned group, respectively). However, the activities of both PKC fractions were not significantly different between the baseline and the ischemic control. Conclusion: These results indicate that in the isolated Langendorff-perfused rabbit heart model, calcium preconditioning with a high concentration of calcium does not improve post-ischemic functional recovery. However, it does limit (reduce) the infarct size as ischemic preconditioning does, and this cardioprotective effect, at least in part, may result from the activation of PKC by calcium, which acts as a messenger (or trigger) to activate membrane PKC.

Ischemic Preconditioning and Its Relation to Glycogen Depletion (허혈성 전처치와 당원 결핍과의 관계)

  • 장대영;김대중;원경준;조대윤;손동섭;양기민;라봉진;김호덕
    • Journal of Chest Surgery
    • /
    • v.33 no.7
    • /
    • pp.531-540
    • /
    • 2000
  • Background: Recent studies have suggested that the cardioprotective effect of ischemic preconditioning (IP) is closely related to glycogen depletion and attenuation of intracellular acidosis. In the present study, the authors tested this hypothesis by perfusing isolated rabbit hearts with glucose(G)-free perfusate. Material and Method: Hearts isolated from New Zealand white rabbits (1.5~2.0 kg body weight) were perfused with Tyrode solution by the Langendorff technique. After stabilization of baseline hemodynamics, the hearts were subjected to 45 min global ischemia followed by 120 min reperfusion with IP (IP group, n=13) or without IP (ischemic control group, n=10). IP was induced by a single episode of 5 min global ischemia and 10 min reperfusion. In the G-free preconditioned group (n=12), G depletion was induced by perfusion with G-free Tyrode solution for 5 min, followed by perfusion with G-containing Tyrode solution for 10 min, and then 45 min ischemia and 120 min reperfusion. Left ventricular function including developed pressure (LVDP), dP/dt, heart rate, left ventricular end-diastolic pressure (LVEDP) and coronary flow (CF) was measured. Myocardial cytosolic and membrane PKC activities were measured by $^{32}$P-$\gamma$-ATP incorporation into PKC-specific peptide, and PKC isozymes were analyzed by Western blot with monoclonal antibodies. Infarct size was determined by staining with TTC (tetrazolium salt) and planimetry. Data were analyzed by one-way analysis of variance (ANOVA) and Tukey's post-hoc test.
Result: In comparison with the ischemic control group, IP significantly enhanced functional recovery of the left ventricle; in contrast, functional recovery was not significantly different between the G-free preconditioned and the ischemic control groups. However, the infarct size was significantly reduced by IP or G-free preconditioning (39$\pm$2.7% in the ischemic control, 19$\pm$1.2% in the IP, and 15$\pm$3.9% in the G-free preconditioned group, p<0.05). Membrane PKC activities were increased significantly after IP (119%), IP and 45 min ischemia (145%), G-free preconditioning (150%), and G-free preconditioning and 45 min ischemia (127%); expression of the membrane PKC isozymes $\alpha$ and $\varepsilon$ tended to be increased after IP or G-free preconditioning. Conclusion: These results suggest that in the isolated Langendorff-perfused rabbit heart model, G-free preconditioning (induced by a single episode of 5 min G depletion and 10 min repletion) could not improve post-ischemic contractile dysfunction (after 45-minute global ischemia); however, it has an infarct size-limiting effect.

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Up to this day, mobile communications have evolved rapidly over the decades, mainly focusing on speeding up to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers various services, such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To provide those services, on top of high-speed data, reduced latency and high reliability are critical for real-time services. Thus, 5G has paved the way for service delivery through a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/㎢. In particular, in intelligent traffic control systems and services using various vehicle-based Vehicle-to-X (V2X) applications such as traffic control, reduction of delay and high reliability for real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves can carry data at high speed thanks to their straightness, while their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. Therefore, it is difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability in offering delay-sensitive services, because communication with many nodes creates overload in its processing. Basically, SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable of delay.
Since SDNs with conventional centralized structures can hardly meet the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. Thus, SDNs need to be split at a certain scale to construct a new type of network, which can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high confidence, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even in the worst case. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, round trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, RTD is not a significant factor because it is fast enough, with less than 1 ms of delay, but the information change cycle and the data processing time of the SDN are factors that greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System) that requires low latency and high reliability, information should be transmitted and processed very quickly; that is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture in emergencies during autonomous driving and, through simulation, analyze the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells of 50~250 m in cell radius, and the maximum speed of the vehicle was considered to be 30~200 km/h in order to examine the network architecture that minimizes the delay.
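As a rough back-of-the-envelope check of why the information change cycle matters, the time a vehicle spends inside one small cell bounds how often cell-level information must be refreshed. A sketch using the radius and speed ranges quoted above (pairing the extremes is my assumption, not the paper's scenario):

```python
# Worst-case dwell time of a vehicle crossing a small cell along its diameter.
def dwell_time_s(cell_radius_m, speed_kmh):
    """Seconds spent in the cell, taking the chord as the full diameter."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return (2 * cell_radius_m) / speed_ms

fast = dwell_time_s(cell_radius_m=50, speed_kmh=200)    # smallest cell, fastest car
slow = dwell_time_s(cell_radius_m=250, speed_kmh=30)    # largest cell, slowest car
```

At 200 km/h through a 50 m-radius cell the dwell time is only about 1.8 s, so the information change cycle and SDN processing time must fit well inside that window, whereas a slow vehicle in a large cell allows around a minute.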