• Title/Summary/Keyword: 최적 (optimal)

Search Results: 34,366

Effect of Cellulose Derivatives to Reduce the Oil Uptake of Deep Fat Fried Batter of Pork Cutlet (셀룰로오스 유도체가 돈가스 튀김옷의 흡유량 감소에 미치는 영향)

  • Kim, Byung-Sook;Lee, Young-Eun
    • Korean journal of food and cookery science
    • /
    • v.25 no.4
    • /
    • pp.488-495
    • /
    • 2009
  • Pork cutlet is a favorite deep-fat-fried food among Korean children, an excellent protein-containing food, and a simple, economical dish. However, the frying process adds a significant amount of calories. We added MC (methylcellulose) and HPMC (hydroxypropyl methylcellulose) to the batter in an effort to reduce oil uptake in prepared pork cutlets. After adding MC and HPMC at concentrations of 0.5, 1, and 1.5%, respectively, we assessed the viscosity of the batter, color after frying, the increases in moisture retention and oil uptake, and sensory characteristics, comparing each quality. The viscosity of batter with 0.5% HPMC added (w/w) was similar to that of the controls, but the viscosity of all batters with added MC was so much higher that it was difficult to use them for coating at the same temperature, making it impossible even to prepare a sample. After frying, the batter with added HPMC showed significantly less oil uptake and more moisture retention than the batter with added MC. Additionally, with regard to color and sensory characteristics, the pork cutlet with 0.5% added HPMC was superior to the other samples. According to these results, we concluded that when cellulose derivatives are added in order to reduce oil uptake and raise the moisture retention of pork cutlet batter, HPMC is more useful than MC. The batter with 0.5% HPMC added appears to be the best of the tested choices, for three reasons: first, the viscosity of the batter is similar to that of the controls; second, the taste is not greasy after frying as a result of the reduced oil uptake and higher moisture retention; and third, the sensory characteristics of this sample, such as color, crispiness, and hardness, were the best among the samples.

An Exploratory study on the demand for training programs to improve Real Estate Agents job performance -Focused on Cheonan, Chungnam- (부동산중개인의 직무능력 향상을 위한 교육프로그램 욕구에 관한 탐색적 연구 -충청남도 천안지역을 중심으로-)

  • Lee, Jae-Beom
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.9
    • /
    • pp.3856-3868
    • /
    • 2011
  • Until recently, research trends in real estate have focused on the real estate market and market analysis, and studies on developing training programs to improve real estate agents' job performance are relatively few. Thus, this study presents an empirical analysis of the needs for training programs for real estate agents in Cheonan to improve their job performance. The results are as follows. First, when asked what educational content they need in order to improve their job performance, most of the respondents indicated a need for housing value analysis, legal knowledge, real estate management, accounting, real estate marketing, and understanding of real estate policy. This is because they are well aware that the best way to respond to changing clients' needs is through training programs. Secondly, asked about real estate marketing strategies, most respondents showed awareness of new strategies to meet the needs of clients, because new forms of marketing, including internet advertising, are needed in the field as the paradigm, including information technology, changes. Thirdly, asked about the need for real estate-related training programs, 92% of the respondents answered that they need real estate education programs run by the continuing education centers of universities. In addition, the survey showed a need for retraining programs that utilize the resources of local universities. Beyond this, to have effective and efficient training programs, respondents called for a training system that utilizes the human resources of universities, under a department of 'Real Estate Contract', to support real estate agents' job performance. Fourthly, the survey revealed that real estate management (44.2%) and real estate marketing (42.3%) are the most frequently chosen contents they want to take in a regular course for improving job performance. This shows their will to understand clients' needs through a real estate management and marketing mindset. The survey also showed that they prefer training programs offered as an irregular course to those in a regular one. Despite the above results, this study surveyed subjects only in Cheonan and thus needs to be extended to more diverse areas. The needs of programs to improve real estate agents' job performance should be analyzed empirically, targeting real estate agents not just in Cheonan but also in cities such as Pyeongchon, Ilsan, and Bundang, where the real estate business is booming, as well as undergraduate and graduate students majoring in real estate studies. Such studies will be able to provide information to help develop customized training programs by evaluating the elements that real estate agents need in order to satisfy clients and improve their job performance. The many program-development variables learned through these studies can be incorporated into the curriculum of real estate studies and used practically as information for the development of the field in this fast-changing era.

Simultaneous Production System of Silkworm Dongchunghacho and Male Pupae Using Both Parent Sex-limited Larval Marking Variety (한성반문잠품종을 이용한 누에동충하초 및 숫번데기의 동시 생산체계)

  • Ji, Sang-Duk;Kim, Nam-Suk;Kang, Pil-Don;Sung, Gyoo-Byung;Hong, In-Pyo;Ryu, Kang Sun;Kim, Young-Ki;Nam, Sung-Hee;Kim, Mi-Ja;Kim, Kee-Young
    • Journal of Sericultural and Entomological Science
    • /
    • v.50 no.2
    • /
    • pp.101-108
    • /
    • 2012
  • This study was conducted to confirm the mass production of male pupae and of a sex-limited larval marking variety as hosts for synnemata production of Isaria tenuipes at the RDA (Rural Development Administration). Silkworm pupation, infection rate, and synnemata formation of I. tenuipes were examined. Among the silkworm varieties tested, male Hansaengjam showed the highest pupation rate at 98.7%. The I. tenuipes infection rate of newly-exuviated 5th instar silkworm larvae was 83.7 ~ 90.4% in the spring rearing season and 91.7 ~ 96.6% in the autumn rearing season. Synnemata production of I. tenuipes was excellent in female Yangwonjam with an incidence rate of 99.5%, followed by male Yangwonjam (99.5%) and Baegokjam (99.4%) in the spring and autumn rearing seasons. Synnemata fresh weight ranged from 0.93 ~ 1.25 g in the spring rearing season, with female Hansaengjam having the heaviest synnemata (1.25 g). Synnemata dry weight ranged from 0.27 ~ 0.35 g in the spring rearing season, with female Yangwonjam having the heaviest synnemata (0.35 g).

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising(SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.167-194
    • /
    • 2019
  • This research starts from four basic concepts that are confronted when making decisions in keyword bidding: incentive incompatibility, limited information, myopia, and the decision variable. To make these concepts concrete, four framework approaches are designed as follows: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, through empirical tests, a statistical optimization model for constructing a Sponsored Search Advertising (SSA) portfolio from the sponsor's perspective, which can be used in portfolio decision making. Previous research to date formulates the CTR estimation model using CPC, Rank, Impression, CVR, etc., individually or collectively as the independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR is the response variable. However, this classical model faces many hurdles in the estimation of CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR, along with practical management problems. Sponsors make decisions in keyword bidding under limited information, and a strategic portfolio approach based on statistical models is necessary. In order to solve the problem in the classical SSA model, the new SSA model framework is designed on the basic assumption that Rank is the decision variable. Rank is proposed as the best decision variable for predicting CTR in many papers. Further, most search engine platforms provide options and algorithms that make it possible to bid with Rank, so sponsors can participate in keyword bidding with Rank. Therefore, this paper tries to test the validity of this new SSA model and its applicability to constructing the optimal portfolio in keyword bidding. The research process is as follows: in order to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing the keywords, selects representative keywords for each category, shows the non-linear relationship, screens the scenarios for CTR and CPC estimation, selects the best-fit model through a Goodness-of-Fit (GOF) test, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover along with some strategic recommendations. Tests of the optimization models using these CTR/CPC estimation models are empirically performed with the objective functions of (1) maximizing CTR (CTR optimization model) and (2) maximizing expected profit reflecting CVR (CVR optimization model). Both the CTR and CVR optimization test results show that the suggested SSA model yields significant improvements and is valid for constructing the keyword portfolio using the CTR/CPC estimation models suggested in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio due to the myopia of their low immediate profit at present.
In order to solve this problem, a Markov chain analysis is carried out and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. The revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. The strategic guidelines and insights are as follows: brand keywords are usually dominant in almost every aspect, such as CTR, CVR, and expected profit. It is now found that generic keywords are the CTK and have spillover potential, which may increase consumers' awareness and lead them to brand keywords; this is why generic keywords should receive focus in keyword bidding. The contributions of the thesis are to propose a novel SSA model based on Rank as the decision variable, to propose managing the keyword portfolio by categories according to the characteristics of keywords, to propose statistical modelling and management based on Rank when constructing the keyword portfolio, to perform empirical tests, to propose new strategic guidelines focusing on the CTK, and to propose a modified CVR optimization objective function reflecting the spillover effect instead of the previous expected-profit models.
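As a rough illustration of the portfolio construction step described above, the sketch below chooses one Rank per keyword to maximize expected clicks under a budget constraint, treating Rank as the decision variable. The per-rank CTR/CPC numbers, impressions, and budget are hypothetical placeholders, not the paper's fitted estimation models.

```python
from itertools import product

# Hypothetical per-keyword estimates: rank -> (CTR, CPC in KRW), plus daily impressions.
# These numbers are illustrative only; in the paper CTR/CPC are estimated per keyword category.
keywords = {
    "brand_kw":    {"impressions": 5000,  "by_rank": {1: (0.060, 900),  2: (0.045, 700), 3: (0.030, 500)}},
    "generic_kw":  {"impressions": 20000, "by_rank": {1: (0.020, 1200), 2: (0.014, 800), 3: (0.009, 550)}},
    "longtail_kw": {"impressions": 3000,  "by_rank": {1: (0.035, 400),  2: (0.025, 300), 3: (0.015, 200)}},
}
BUDGET = 1_500_000  # assumed total daily budget (KRW)

def evaluate(assignment):
    """Expected clicks and cost for a rank assignment {keyword: rank}."""
    clicks = cost = 0.0
    for kw, rank in assignment.items():
        ctr, cpc = keywords[kw]["by_rank"][rank]
        kw_clicks = keywords[kw]["impressions"] * ctr
        clicks += kw_clicks
        cost += kw_clicks * cpc
    return clicks, cost

# Brute force over all rank combinations (a multiple-choice knapsack; fine for a few keywords).
names = list(keywords)
best = None
for ranks in product(*[keywords[k]["by_rank"] for k in names]):
    assignment = dict(zip(names, ranks))
    clicks, cost = evaluate(assignment)
    if cost <= BUDGET and (best is None or clicks > best[1]):
        best = (assignment, clicks, cost)

assignment, clicks, cost = best
print(f"best ranks: {assignment}, expected clicks: {clicks:.0f}, cost: {cost:,.0f} KRW")
```

The CVR-based variant described in the paper would replace expected clicks with expected profit per keyword; the same enumeration (or an integer-programming solver for larger portfolios) applies.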

A Study on analysis of contrasts and variation in SUV with the passage of uptake time in 18F-FDOPA Brain PET/CT (18F-FDOPA Brain PET/CT 검사의 영상 대조도 분석 및 섭취 시간에 따른 SUV변화 고찰)

  • Seo, Kang rok;Lee, Jeong eun;Ko, Hyun soo;Ryu, Jae kwang;Nam, Ki pyo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.23 no.1
    • /
    • pp.69-74
    • /
    • 2019
  • Purpose: $^{18}F$-FDOPA, an amino acid tracer, is particularly attractive for imaging of brain tumors because of its high uptake in tumor tissue and low uptake in normal brain tissue, whereas $^{18}F$-FDG shows high uptake in both tumor tissue and normal brain tissue. The purpose of this study is to compare the contrast of $^{18}F$-FDOPA Brain PET/CT with that of $^{18}F$-FDG Brain PET/CT and to find the optimal scan time by analyzing the variation in SUV with the passage of uptake time. Materials and Methods: A region of interest of approximately $350mm^2$ was placed at the center of the tumor and in the cerebellum in 12 patients ($51.4{\pm}12.8yrs$) who underwent both $^{18}F$-FDG Brain PET/CT and $^{18}F$-FDOPA Brain PET/CT at least once each. The $SUV_{max}$ was measured, and the $SUV_{max}$ ratio of tumor to cerebellum (T/C ratio) was calculated. For the SUV analysis, the T/C ratio was calculated for each frame after dividing the acquisition into 15 frames of 2 minutes each using list-mode data from 25 patients ($49.{\pm}10.3yrs$). SPSS 21 was used to compare the T/C ratio of $^{18}F$-FDOPA with that of $^{18}F$-FDG. Results: The T/C ratio of $^{18}F$-FDOPA Brain PET/CT was higher than that of $^{18}F$-FDG Brain PET/CT, and the difference was significant according to a paired t-test (t=-5.214, p=0.000). In the analysis of changes in $SUV_{max}$ and T/C ratio, the peak $SUV_{max}$ was $5.6{\pm}2.9$ and appeared in the fourth frame (6 to 8 minutes), and the T/C ratio also peaked in the fourth frame (6 to 8 minutes). Taking this into consideration and comparing the existing 10-to-30-minute image with a 6-to-26-minute image, the $SUV_{max}$ and T/C ratio of the 6-to-26-minute image increased by 0.2 and 0.1, respectively, compared to the 10-to-30-minute image. Conclusion: $^{18}F$-FDOPA Brain PET/CT is effective for image reading because its T/C ratio was higher than that of $^{18}F$-FDG Brain PET/CT. In addition, for $^{18}F$-FDOPA Brain PET/CT there was no difference between the existing 10-to-30-minute image and the 6-to-26-minute image. Through continued research, it may be possible to shorten the examination time of $^{18}F$-FDOPA Brain PET/CT, and the additional scan data can help physicians make an accurate reading.
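The frame-by-frame analysis described above (T/C ratio per 2-minute frame, peak detection, and a paired t-test between tracers) can be sketched as follows; the SUVmax arrays are placeholder values standing in for the measured data, and scipy's paired t-test is used in place of SPSS.

```python
import numpy as np
from scipy import stats

# Placeholder per-frame SUVmax values for one patient (15 frames x 2 min); not measured data.
tumor_suvmax      = np.array([3.1, 4.0, 4.8, 5.6, 5.4, 5.2, 5.1, 5.0, 4.9, 4.8, 4.8, 4.7, 4.7, 4.6, 4.6])
cerebellum_suvmax = np.array([2.0, 2.2, 2.3, 2.3, 2.3, 2.3, 2.2, 2.2, 2.2, 2.2, 2.1, 2.1, 2.1, 2.1, 2.1])

tc_ratio = tumor_suvmax / cerebellum_suvmax
peak_frame = int(np.argmax(tc_ratio))  # 0-based frame index
print(f"peak T/C ratio {tc_ratio[peak_frame]:.2f} in frame {peak_frame + 1} "
      f"({peak_frame * 2}-{peak_frame * 2 + 2} min)")

# Paired t-test between tracers, one T/C ratio per patient (placeholder values).
tc_fdopa = np.array([2.4, 2.1, 2.8, 2.6, 2.2, 2.5, 2.9, 2.3, 2.7, 2.4, 2.6, 2.5])
tc_fdg   = np.array([1.3, 1.1, 1.5, 1.4, 1.2, 1.3, 1.6, 1.2, 1.4, 1.3, 1.5, 1.4])
t_stat, p_value = stats.ttest_rel(tc_fdg, tc_fdopa)
print(f"paired t-test: t={t_stat:.3f}, p={p_value:.3f}")
```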

Genetic Diversity of Korean Native Chicken Populations in DAD-IS Database Using 25 Microsatellite Markers (초위성체 마커를 활용한 가축다양성정보시스템(DAD-IS) 등재 재래닭 집단의 유전적 다양성 분석)

  • Roh, Hee-Jong;Kim, Kwan-Woo;Lee, Jinwook;Jeon, Dayeon;Kim, Seung-Chang;Ko, Yeoung-Gyu;Mun, Seong-Sil;Lee, Hyun-Jung;Lee, Jun-Heon;Oh, Dong-Yep;Byeon, Jae-Hyun;Cho, Chang-Yeon
    • Korean Journal of Poultry Science
    • /
    • v.46 no.2
    • /
    • pp.65-75
    • /
    • 2019
  • A number of Korean native chicken (KNC) populations have been registered in the FAO (Food and Agriculture Organization) DAD-IS (Domestic Animal Diversity Information System, http://www.fao.org/dad-is). However, there is a lack of scientific evidence that they are unique populations of Korea. For this reason, this study was conducted to prove the KNC populations' uniqueness using 25 microsatellite markers. A total of 548 chickens from 11 KNC populations (KNG, KNB, KNR, KNW, KNY, KNO, HIC, HYD, HBC, JJC, LTC) and 7 introduced populations (ARA: Araucana, RRC and RRD: Rhode Island Red C and D, LGF and LGK: White Leghorn F and K, COS and COH: Cornish brown and Cornish black) were used. Allele size per locus was determined using GeneMapper Software (v 5.0). A total of 195 alleles were observed, ranging from 3 to 14 per locus. The MNA, $H_{\exp}$, $H_{obs}$, and PIC values within populations were highest in KNY (4.60, 0.627, 0.648, and 0.563, respectively) and lowest in HYD (1.84, 0.297, 0.286, and 0.236, respectively). The genetic uniformity analysis suggested 15 clusters (${\Delta}K=66.22$). Excluding JJC, the populations were each grouped into a specific cluster with high genetic uniformity; JJC was not grouped into a single cluster but was distributed across cluster 2 (44.3%), cluster 3 (17.7%), and cluster 8 (19.1%). As a result of this study, a scientific basis for the uniqueness of the KNC populations was secured, and these results can be used as basic data for the genetic evaluation and management of KNC breeds.
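The within-population diversity statistics cited above (expected heterozygosity and PIC per locus) follow standard population-genetics formulas. The sketch below computes them from allele frequencies at a single locus; the frequencies are made up for illustration and are not the study's genotype data.

```python
import numpy as np

def expected_heterozygosity(freqs):
    """Nei's expected heterozygosity: He = 1 - sum(p_i^2)."""
    p = np.asarray(freqs, dtype=float)
    return 1.0 - np.sum(p ** 2)

def pic(freqs):
    """Polymorphism Information Content (Botstein et al., 1980):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2
    """
    p = np.asarray(freqs, dtype=float)
    homozygosity = np.sum(p ** 2)
    cross = sum(2 * p[i] ** 2 * p[j] ** 2
                for i in range(len(p)) for j in range(i + 1, len(p)))
    return 1.0 - homozygosity - cross

# Hypothetical allele frequencies at one microsatellite locus (must sum to 1).
locus_freqs = [0.40, 0.25, 0.20, 0.10, 0.05]
print(f"He  = {expected_heterozygosity(locus_freqs):.3f}")
print(f"PIC = {pic(locus_freqs):.3f}")
```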

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from the recognition of an individual user's simple body movements to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including the accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. The accompanying status is defined as a redefined subset of user interaction behavior, covering whether the user is accompanying an acquaintance at a close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of multimodal data from the different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors. Normalization was performed for each x, y, and z axis value of the sensor data, and the sequence data was generated with a sliding window. The sequence data then becomes the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of three convolutional layers and has no pooling layer, in order to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are used for classification by a softmax classifier. The loss function of the model is cross entropy, and the weights of the model are randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model is trained using the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. We applied dropout to the input of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data was collected for a total of 18 subjects. Using this data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of the majority vote classifier, a support vector machine, and a deep recurrent neural network.
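A minimal PyTorch sketch of the described architecture is given below: three convolutional layers without pooling, a two-layer LSTM with 128 cells, dropout applied to the LSTM input, cross-entropy loss, ADAM with a 0.001 learning rate decayed by 0.99 per epoch, and normal(0, 0.1) weight initialization. The number of input channels, the convolution channel widths, the kernel size, and the window length are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class AccompanyNet(nn.Module):
    """CNN + LSTM classifier for accompanying/conversation status.
    Input shape: (batch, channels, window), where channels = 9 is assumed
    (accelerometer, magnetic field, gyroscope x 3 axes)."""
    def __init__(self, in_channels=9, n_classes=2):
        super().__init__()
        # Three convolutional layers, no pooling, to preserve temporal resolution.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.dropout = nn.Dropout(0.5)              # dropout on the LSTM input
        self.lstm = nn.LSTM(input_size=64, hidden_size=128,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(128, n_classes)         # softmax applied via CrossEntropyLoss

    def forward(self, x):
        feats = self.cnn(x)                          # (batch, 64, window)
        feats = self.dropout(feats.transpose(1, 2))  # (batch, window, 64)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1, :])                # logits from the last time step

model = AccompanyNet()

# Weight initialization: normal distribution with mean 0 and std 0.1, as described.
for p in model.parameters():
    if p.dim() > 1:
        nn.init.normal_(p, mean=0.0, std=0.1)

criterion = nn.CrossEntropyLoss()                    # cross-entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)  # x0.99 per epoch

x = torch.randn(128, 9, 128)                         # one mini-batch of 128 windows
logits = model(x)
print(logits.shape)                                  # torch.Size([128, 2])
```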
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable trained models, tailored to the training data, to be transferred to evaluation data that follows a different distribution. We expect to obtain a model capable of robust recognition performance against changes in the data that were not considered in the model learning stage.

Facile [11C]PIB Synthesis Using an On-cartridge Methylation and Purification Showed Higher Specific Activity than Conventional Method Using Loop and High Performance Liquid Chromatography Purification (Loop와 HPLC Purification 방법보다 더 높은 비방사능을 보여주는 카트리지 Methylation과 Purification을 이용한 손쉬운 [ 11C]PIB 합성)

  • Lee, Yong-Seok;Cho, Yong-Hyun;Lee, Hong-Jae;Lee, Yun-Sang;Jeong, Jae Min
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.2
    • /
    • pp.67-73
    • /
    • 2018
  • $[^{11}C]PIB$ synthesis has been performed in our lab by loop methylation and HPLC purification. However, this method is time-consuming and requires complicated systems. Thus, we developed an on-cartridge method which simplified the synthetic procedure and greatly reduced time by removing the HPLC purification step. We compared 6 different cartridges and evaluated the $[^{11}C]PIB$ production yields and specific activities. $[^{11}C]MeOTf$ was synthesized using a TRACERlab FXC Pro and transferred onto the cartridge by blowing with helium gas for 3 min. To remove byproducts and impurities, the cartridges were washed with 20 mL of 30% EtOH in 0.5 M $NaH_2PO_4$ solution (pH 5.1) and 10 mL of distilled water. Then, $[^{11}C]PIB$ was eluted with 5 mL of 30% EtOH in 0.5 M $NaH_2PO_4$ into a collecting vial containing 10 mL of saline. Among the 6 cartridges, only the tC18 environmental cartridge completely removed impurities and byproducts from $[^{11}C]PIB$ and showed higher specific activity than the traditional HPLC purification method. This method took only 8 ~ 9 min from methylation to formulation. For the tC18 environmental cartridge and the conventional HPLC loop method, the radiochemical yields were $12.3{\pm}2.2%$ and $13.9{\pm}4.4%$, respectively, and the molar activities were $420.6{\pm}20.4GBq/{\mu}mol$ (n=3) and $78.7{\pm}39.7GBq/{\mu}mol$ (n=41), respectively. We successfully developed a facile on-cartridge methylation method for $[^{11}C]PIB$ synthesis which made the procedure simpler and faster and showed higher molar activity than the HPLC purification method.

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • In academia, many studies have long been conducted on predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways due to the rapid growth of online channels, companies are carrying out various types of campaigns at a level that cannot be compared to the past. However, as fatigue from duplicate exposure increases, customers tend to perceive campaigns as spam. From a corporate standpoint, the effectiveness of the campaign itself is also decreasing while campaign investment costs increase, leading to a low actual campaign success rate. Accordingly, various studies are ongoing to improve the effectiveness of campaigns in practice. The campaign system ultimately aims to increase the success rate of various campaigns by collecting and analyzing customer-related data and using it for campaigns. In particular, there have been recent attempts to predict campaign response using machine learning. Because campaign data has many features, it is very important to select appropriate ones. If all of the input data is used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the entire data and used. In addition, when a trained model is generated using too many features, prediction accuracy may be degraded due to overfitting or correlation between features. Therefore, in order to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary process for analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques. It is also true that when there are many features, these methods are limited in that classification performance can be poor and learning takes a long time. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method, using the statistical characteristics of the data processed in the campaign system, in the process of searching for the feature subsets that underpin machine learning model performance. Features with a strong influence on performance are derived first and features with a negative effect are removed, after which the sequential method is applied; this increases search efficiency and yields an improved algorithm that enables generalized prediction. It was confirmed that the proposed model showed better search and prediction performance than the traditional greedy algorithm. Compared with the original data set, the greedy algorithm, the genetic algorithm (GA), and recursive feature elimination (RFE), campaign success prediction was higher. In addition, when performing campaign success prediction, the improved feature selection algorithm was found to be helpful in analyzing and interpreting the prediction results by providing the importance of the derived features.
These include features such as age, customer rating, and sales, which were already known to be statistically important. Unexpectedly, features that campaign planners rarely used to select campaign targets, such as the combined product name, the average 3-month data consumption rate, and the last 3 months' wireless data usage, were also selected as important features for campaign response. It was confirmed that base attributes can also be very important features depending on the type of campaign. Through this, it is possible to analyze and understand the important characteristics of each campaign type.
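As a rough illustration of the general idea of combining a statistical pre-filter with a sequential search, the sketch below first ranks features by mutual information, drops the weakest half, and then runs scikit-learn's forward sequential selector on the remainder. It is a generic stand-in built on synthetic data, not the authors' improved SFFS algorithm or the campaign data set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for campaign-response data: 30 features, only a few informative.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=6,
                           n_redundant=4, random_state=42)

# Step 1: statistical pre-filter - keep the top half of features by mutual information,
# so the sequential search starts from features that already carry signal.
mi = mutual_info_classif(X, y, random_state=42)
keep = np.argsort(mi)[::-1][:15]
X_filtered = X[:, keep]

# Step 2: sequential forward selection on the reduced set (a simpler relative of SFFS).
estimator = LogisticRegression(max_iter=1000)
sfs = SequentialFeatureSelector(estimator, n_features_to_select=6,
                                direction="forward", scoring="roc_auc", cv=5)
sfs.fit(X_filtered, y)

selected = keep[sfs.get_support()]
print("selected original feature indices:", sorted(selected))
```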

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our lives and industries as a whole. To provide those services, reduced latency and high reliability for real-time services are critical on top of high data rates. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/㎢. In particular, for intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, the reduction of delay and reliability for real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting their use indoors. Therefore, it is difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability in offering delay-sensitive services because communication with many nodes overloads its processing. Basically, SDN, a structure that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major factor in delay. Since SDNs with a generally centralized structure have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. Thus, SDNs need to be split at a certain scale to construct a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under the worst conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the SDN data processing time are highly correlated with the overall delay. Of these, RTD is not a significant factor because it is fast enough, with less than 1 ms of delay, but the information change cycle and the SDN data processing time greatly affect the delay. In particular, in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, its correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, since the 5G data rate is high enough, we can assume that the information for neighbor-vehicle support reaches the car without errors. Furthermore, we assumed 5G small cells with a radius of 50 ~ 250 m and vehicle speeds of 30 ~ 200 km/h in order to examine the network architecture that minimizes the delay.
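Under the stated simulation assumptions (cell radius 50 ~ 250 m, vehicle speed 30 ~ 200 km/h), a first-order check of how long a vehicle stays inside a cell, and hence how many update cycles fit into one cell crossing, can be sketched as follows. The SDN processing time used for the delay budget is an illustrative assumption, not a value from the paper.

```python
# First-order dwell-time estimate for a vehicle crossing a 5G small cell.
# Assumes the vehicle crosses a full cell diameter, so these dwell times are upper bounds.
CELL_RADII_M = [50, 100, 150, 200, 250]
SPEEDS_KMH = [30, 60, 100, 150, 200]
SDN_PROCESSING_MS = 5   # assumed SDN data-processing time per update (illustrative)
RTD_MS = 1              # round-trip delay, about 1 ms per the 5G target

for radius in CELL_RADII_M:
    for speed_kmh in SPEEDS_KMH:
        speed_ms = speed_kmh / 3.6                  # km/h -> m/s
        dwell_s = (2 * radius) / speed_ms           # time to cross the cell diameter
        cycle_ms = SDN_PROCESSING_MS + RTD_MS       # assumed per-update delay budget
        updates_per_cell = dwell_s * 1000 / cycle_ms
        print(f"radius {radius:3d} m, speed {speed_kmh:3d} km/h: "
              f"dwell {dwell_s:5.1f} s, ~{updates_per_cell:7.0f} update cycles")
```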