• Title/Summary/Keyword: gradient 모형 (gradient model)


Development of Low Reynolds Number k-ε Model for Prediction of a Turbulent Flow with a Weak Adverse Pressure Gradient (약한 역압력구배의 난류유동장 해석을 위한 저레이놀즈수 k-ε 모형 개발)

  • Song, Kyoung;Cho, Kang Rae
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.23 no.5
    • /
    • pp.610-620
    • /
    • 1999
  • Recently, numerous modifications of the low Reynolds number $k-{\epsilon}$ model have been carried out with the aid of DNS data. However, the previous models made in this way are too intricate to be used practically. To overcome this shortcoming, a new low Reynolds number $k-{\epsilon}$ model has been developed by considering the distribution of turbulent properties near the wall. This study proposes a revised turbulence model for the prediction of turbulent flow with an adverse pressure gradient and separation. The nondimensional distance $y^+$ in the damping functions is changed to $y^*$, and some terms modeled for one-dimensional flow in the $\epsilon$ equation are expanded into two- or three-dimensional form. Results predicted by the revised model show acceptable agreement with DNS data and experimental results. However, for a turbulent flow with a severe adverse pressure gradient, an additive term reflecting the adverse-pressure-gradient effect will have to be considered.
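The abstract does not spell out the definition of $y^*$ used, but in comparable low-Reynolds-number models (e.g., Abe-Kondoh-Nagano-type formulations) the two wall distances are

```latex
y^+ = \frac{u_\tau\, y}{\nu}, \qquad u_\tau = \sqrt{\tau_w / \rho},
\qquad\qquad
y^* = \frac{u_\varepsilon\, y}{\nu}, \qquad u_\varepsilon = \left(\nu \varepsilon\right)^{1/4},
```

where $u_\varepsilon$ is the Kolmogorov velocity scale. The point of the substitution is that $y^*$ remains well defined at separation and reattachment points, where the wall shear stress $\tau_w$, and hence $u_\tau$ and $y^+$, vanish.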

Developing a regional fog prediction model using tree-based machine-learning techniques and automated visibility observations (시정계 자료와 기계학습 기법을 이용한 지역 안개예측 모형 개발)

  • Kim, Daeha
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.12
    • /
    • pp.1255-1263
    • /
    • 2021
  • While it could become an alternative water resource, fog can undermine traffic safety and the operational performance of infrastructure. To reduce such adverse impacts, it is necessary to have spatially continuous fog-risk information. In this work, tree-based machine-learning models were developed to quantify fog risks with routine meteorological observations alone. Extreme Gradient Boosting (XGB), Light Gradient Boosting (LGB), and Random Forests (RF) were chosen for the regional fog models, using operational weather and visibility observations within the Jeollabuk-do province. Results showed that RF gave the most robust performance in categorizing fog and non-fog situations during the training and evaluation period of 2017-2019. While the LGB predicted fog occurrences better than the others, its false alarm ratio was the highest (0.695) among the three models. The predictability of the three models declined considerably when they were applied to an independent period, 2020, potentially due to the distinctively enhanced air quality in that year under the global lockdown. Nonetheless, even in 2020, all three models were able to produce fog-risk information consistent with the spatial variation of observed fog occurrences. This work suggests that tree-based machine-learning models could be used as tools to find locations with relatively high fog risks.
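The false alarm ratio quoted for LGB (0.695) is one of the standard 2x2 contingency-table scores for categorical fog forecasts. A minimal sketch, with hypothetical counts chosen only so that FAR matches the reported 0.695 (the paper's actual contingency tables are not given in the abstract):

```python
# Categorical verification scores for a binary fog / no-fog forecast.

def verification_scores(hits, false_alarms, misses, correct_negatives):
    """Standard 2x2 contingency-table scores used for fog forecast evaluation."""
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + false_alarms + misses)  # critical success index
    n = hits + false_alarms + misses + correct_negatives
    acc = (hits + correct_negatives) / n         # overall accuracy
    return pod, far, csi, acc

# Hypothetical counts: 139 / (61 + 139) reproduces FAR = 0.695.
pod, far, csi, acc = verification_scores(hits=61, false_alarms=139,
                                         misses=20, correct_negatives=9780)
print(f"POD={pod:.3f} FAR={far:.3f} CSI={csi:.3f} ACC={acc:.4f}")
```

Note that with rare events like fog, overall accuracy is dominated by correct negatives, which is why POD/FAR/CSI are the scores worth comparing across models.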

Test of Model Specification in Panel Regression Model with Two Error Components (이원오차성분을 갖는 패널회귀모형의 모형식별검정)

  • Song, Seuck-Heun;Kim, Young-Ji;Hwang, Sun-Young
    • The Korean Journal of Applied Statistics
    • /
    • v.19 no.3
    • /
    • pp.461-479
    • /
    • 2006
  • This paper derives joint and conditional Lagrange multiplier (LM) tests based on Double-Length Artificial Regression (DLR) for testing functional form and/or the presence of individual (time) effects in a panel regression model. Small-sample properties of these tests are assessed by a Monte Carlo study, and comparisons are made with LM tests based on the Outer Product of Gradients (OPG). The results show that the proposed DLR-based LM tests have the most appropriate finite-sample performance.
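The OPG-based LM test that the DLR tests are benchmarked against can be computed directly from per-observation score contributions. A minimal sketch for a single restriction (the joint and conditional panel versions in the paper involve a full score matrix, which is assumed away here):

```python
import random

def lm_opg(scores):
    """Outer-Product-of-Gradients LM statistic for one restriction.

    scores: per-observation score contributions g_i evaluated under H0.
    LM = (sum g_i)^2 / sum g_i^2, asymptotically chi-square(1) under H0.
    (This is the single-column case of LM = i'G (G'G)^{-1} G'i, i.e. n * R^2
    from regressing a vector of ones on the score matrix G.)
    """
    s = sum(scores)
    return s * s / sum(g * g for g in scores)

random.seed(0)
g = [random.gauss(0.0, 1.0) for _ in range(500)]  # scores centered at 0 under H0
print(f"LM = {lm_opg(g):.3f}")  # compare with the chi-square(1) 5% critical value 3.84
```

The paper's point is precisely that this OPG form, while easy to compute, tends to over-reject in small samples, which is what the DLR-based variants improve on.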

Design of the Finite Schematic Eye with the Crystalline Lens with GRIN Index (실안의 수정체 굴절률 분포를 갖는 정밀모형안 설계)

  • Kim, Bong-Hwan
    • Korean Journal of Optics and Photonics
    • /
    • v.18 no.2
    • /
    • pp.167-170
    • /
    • 2007
  • In this study, clinical data for emmetropia in young Koreans were taken in order to design a finite schematic eye having the optical properties of real eyes, including spherical aberration, astigmatism, field curvature and distortion. Furthermore, the crystalline lens with a GRIN medium was optically analyzed, and a finite schematic eye with the GRIN crystalline lens was designed.
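The abstract does not give the index profile actually fitted; as a hedged illustration only, a simple parabolic GRIN profile with commonly cited schematic-eye values (core index 1.406, cortex index 1.386) and a hypothetical equatorial radius looks like:

```python
def grin_index(r, n_center=1.406, n_edge=1.386, r_max=4.5e-3):
    """Hypothetical parabolic GRIN profile for a crystalline-lens model:
    the index falls from n_center on the optical axis to n_edge at the
    lens equator (radius r_max, in meters). The paper's actual profile
    is not stated in the abstract; real lens profiles are also
    z-dependent and closer to a higher-order polynomial."""
    return n_center - (n_center - n_edge) * (r / r_max) ** 2

print(grin_index(0.0))     # axial (core) index
print(grin_index(4.5e-3))  # equatorial (cortex) index
```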

Flow Effects on Tailored RF Gradient Echo (TRFGE) Magnetic Resonance Imaging : In-flow and In-Plane Flow Effect (Tailored RF 경자사계방향 (TRFGE) 자기공명영상(MRI)에서 유체에 의한 영상신호 변화 : 유체유입효과와 영상면내를 흐르는 유체의 효과에 대하여)

  • Mun, Chi-Ung;Kim, Sang-Tae;No, Yong-Man;Im, Tae-Hwan;Jo, Jang-Hui
    • Journal of Biomedical Engineering Research
    • /
    • v.18 no.3
    • /
    • pp.243-251
    • /
    • 1997
  • In this paper, we report two interesting flow effects arising in the TRFGE sequence, using a water flow phantom. First, we show that the TRFGE sequence is indeed not affected by the "in-flow" effect from unsaturated spins flowing into the imaging slice. Second, enhancement of the "in-plane flow" signal in the readout gradient direction was observed when the TRFGE sequence was used without flow compensation. These two results have many interesting applications in MR imaging other than fMRI. The results obtained were also compared with those obtained by conventional gradient echo (CGE) imaging. Experiments were performed on a 4.7T MRI/S animal system (Biospec, BRUKER, Switzerland). A cylindrical phantom was made of acryl, and a vinyl tube was inserted at the center (Fig. 1). The whole cylinder was filled with water doped with $MnCl_2$, and the center tube was filled with saline flowing parallel to the main magnetic field along the tube. The tailored RF pulse was designed to have a quadratic ($z^2$) phase distribution in the slice direction (z). Imaging parameters were TR/TE = 55~85/10 msec, flip angle = $30^{\circ}$, slice thickness = 2 mm, matrix size = 256${\times}$256, and FOV = 10 cm. In-flow effect: axial images were obtained with and without flow using the CGE and TRFGE sequences, respectively. The flow direction was perpendicular to the image slice. In-plane flow: sagittal images were obtained with and without flow using the TRFGE sequence. The readout gradient was applied parallel to the flow direction. We observed that the "in-flow" effect did not affect the TRFGE image, while "in-plane flow" running along the readout gradient direction enhanced the signal in the TRFGE sequence when a flow compensation gradient scheme was not used.
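The quadratic through-slice phase imposed by the tailored RF pulse suppresses signal from stationary spins through intravoxel dephasing. The sketch below sums spin phasors across a 2 mm slice for an assumed (hypothetical) quadratic phase coefficient; the paper's actual pulse parameters are not given in the abstract:

```python
import cmath

def voxel_signal(alpha, n=256, thickness=2e-3):
    """Relative magnitude signal from a slice carrying a quadratic
    through-slice phase phi(z) = alpha * z^2 (radians, z in meters).
    With alpha = 0 all spins add coherently; a large alpha dephases
    them, which is the contrast mechanism the TRFGE sequence exploits."""
    zs = [(-thickness / 2) + thickness * k / (n - 1) for k in range(n)]
    return abs(sum(cmath.exp(1j * alpha * z * z) for z in zs)) / n

print(voxel_signal(0.0))   # no phase spread: full signal
print(voxel_signal(4e6))   # hypothetical alpha giving ~4 rad at the slice edge
```

Moving spins accrue additional, flow-dependent phase, which is why in-plane flow along an uncompensated readout gradient can re-enhance the signal relative to this stationary baseline.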


Predicting of the Severity of Car Traffic Accidents on a Highway Using Light Gradient Boosting Model (LightGBM 알고리즘을 활용한 고속도로 교통사고심각도 예측모델 구축)

  • Lee, Hyun-Mi;Jeon, Gyo-Seok;Jang, Jeong-Ah
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.15 no.6
    • /
    • pp.1123-1130
    • /
    • 2020
  • This study aims to classify severity in car crashes using five classification learning models. The dataset used in this study contains 21,013 vehicle crashes between 2015 and 2017, obtained from the Korea Expressway Corporation, and LightGBM (Light Gradient Boosting Model) performed best, with the highest accuracy. In the LightGBM model, the number of involved vehicles, type of accident, incident location, incident lane type, and types of vehicles involved were shown to be priority factors. Based on the results of this model, a management strategy for responding to highway traffic accidents should be established through a consistent process of predicting accident severity levels. This study identifies the applicability of machine-learning models for predicting the severity of car traffic accidents on a highway and suggests various big-data-based machine-learning techniques that can be used in the future.
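LightGBM is one implementation of gradient boosting; the residual-fitting loop it shares with all gradient-boosting models can be sketched with decision stumps and squared loss on toy 1-D data (not the paper's crash dataset, and omitting LightGBM's histogram and leaf-wise refinements):

```python
# Minimal gradient boosting with decision stumps, squared loss.

def fit_stump(xs, residuals):
    """Best single-split stump minimizing squared error on the residuals."""
    best = None
    for s in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= s]
        right = [r for x, r in zip(xs, residuals) if x > s]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, s, lm, rm)
    _, s, lm, rm = best
    return lambda x: lm if x <= s else rm

def boost(xs, ys, rounds=50, lr=0.1):
    """Each round fits a stump to the current residuals (the negative
    gradient of squared loss) and adds it with a small learning rate."""
    pred, stumps = [0.0] * len(xs), []
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, resid)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * t(x) for t in stumps)

xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 0, 1, 1, 1, 1]          # step-function target
model = boost(xs, ys)
print(round(model(1), 3), round(model(6), 3))
```

The "priority factors" the study reports correspond to feature importances, which boosting libraries derive from how often, and with how much gain, each feature is chosen for splits like the one above.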

Transit Frequency Optimization with Variable Demand Considering Transfer Delay (환승지체 및 가변수요를 고려한 대중교통 운행빈도 모형 개발)

  • Yu, Gyeong-Sang;Kim, Dong-Gyu;Jeon, Gyeong-Su
    • Journal of Korean Society of Transportation
    • /
    • v.27 no.6
    • /
    • pp.147-156
    • /
    • 2009
  • We present a methodology for modeling and solving the transit frequency design problem with variable demand. The problem is described as a bi-level model based on a non-cooperative Stackelberg game. The upper-level operator problem is formulated as a non-linear optimization model that minimizes net cost, which includes operating cost, travel cost and revenue, subject to fleet size and frequency constraints. The lower-level user problem is formulated as a capacity-constrained stochastic user equilibrium assignment model with variable demand, considering transfer delay between transit lines. An efficient algorithm is also presented for solving the proposed model: the upper-level model is solved by a gradient projection method, and the lower-level model by an existing iterative balancing method. An application of the proposed model and algorithm is presented using a small test network. The results of this application show that the proposed algorithm converges well to an optimal point. The methodology of this study is expected to contribute to forming a theoretical basis for diagnosing the problems of current transit systems and for improving their operational efficiency so as to increase demand as well as the level of service.
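The gradient projection method used for the upper level alternates a gradient step on the net-cost objective with a projection of the line frequencies back onto their feasible set. A toy sketch with a stand-in per-line cost (operating cost grows with frequency, user waiting cost falls with it) and simple box bounds; the paper's actual cost function and fleet-size constraint are not reproduced here:

```python
def grad_net_cost(f, a=1.0, b=4.0):
    """Gradient of a toy per-line net cost a*f + b/f (operating + waiting)."""
    return [a - b / (fi * fi) for fi in f]

def project(f, f_min=1.0, f_max=20.0):
    """Projection onto box constraints: clip each frequency to its bounds."""
    return [min(max(fi, f_min), f_max) for fi in f]

def gradient_projection(f, steps=400, eta=0.1):
    for _ in range(steps):
        g = grad_net_cost(f)
        f = project([fi - eta * gi for fi, gi in zip(f, g)])
    return f

freqs = gradient_projection([1.0, 10.0, 20.0])
print([round(fi, 3) for fi in freqs])  # each line converges toward sqrt(b/a) = 2.0
```

In the full bi-level scheme, each such upper-level step would be followed by re-solving the lower-level equilibrium assignment, since the user flows (and hence the cost gradient) depend on the current frequencies.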

A Study on Domestic Drama Rating Prediction (국내 드라마 시청률 예측 및 영향요인 분석)

  • Kang, Suyeon;Jeon, Heejeong;Kim, Jihye;Song, Jongwoo
    • The Korean Journal of Applied Statistics
    • /
    • v.28 no.5
    • /
    • pp.933-949
    • /
    • 2015
  • Audience rating competition in the domestic drama market has increased recently due to the introduction of commercial broadcasting and diversification of channels. There is now a need for thorough studies and analysis on audience rating. Especially, a drama rating is an important measure to estimate advertisement costs for producers and advertisers. In this paper, we study the drama rating prediction models using various data mining techniques such as linear regression, LASSO regression, random forest, and gradient boosting. The analysis results show that initial drama ratings are affected by structural elements such as broadcasting station and broadcasting time. Average drama ratings are also influenced by earlier public opinion such as the number of internet searches about the drama.
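Of the models compared, LASSO regression has the most transparent building block: coordinate-descent LASSO solvers repeatedly apply the soft-thresholding operator, which is what shrinks coefficients and zeroes out weak predictors of ratings. A minimal sketch of that operator:

```python
def soft_threshold(z, lam):
    """S(z, lam) = sign(z) * max(|z| - lam, 0): the LASSO coordinate update
    shrinks each coefficient toward zero by lam and sets small ones to
    exactly zero, performing variable selection."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

print(soft_threshold(2.5, 1.0))   # strong effect survives, shrunk: 1.5
print(soft_threshold(0.4, 1.0))   # weak effect eliminated: 0.0
```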

Convergence study to predict length of stay in premature infants using machine learning (머신러닝을 이용한 미숙아의 재원일수 예측 융복합 연구)

  • Kim, Cheok-Hwan;Kang, Sung-Hong
    • Journal of Digital Convergence
    • /
    • v.19 no.7
    • /
    • pp.271-282
    • /
    • 2021
  • This study was conducted to develop a model for predicting the length of stay of premature infants through machine learning. For model development, 6,149 cases of premature infants discharged from hospitals between 2011 and 2016 were used, drawn from the discharge injury in-depth survey data collected by the Korea Centers for Disease Control and Prevention. For the initial hospitalization, the neural network model was superior to the other models, with an explanatory power (R2) of 0.75. In the model extended by converting clinical diagnoses to CCS (Clinical Classification Software) categories, the explanatory power (R2) of the Cubist model was 0.81, superior to the random forest, gradient boosting, neural network, and penalized regression models. In this study, using national data, a machine-learning model for predicting the length of stay of premature infants was presented and its applicability was confirmed. However, due to the lack of clinical and parental information, additional research is needed to improve future performance.
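The "explanatory power (R2)" quoted for each model is the coefficient of determination. A minimal sketch with hypothetical length-of-stay values (not the survey data):

```python
def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

y     = [10, 14, 21, 28, 35]   # hypothetical observed lengths of stay (days)
y_hat = [12, 15, 20, 26, 36]   # hypothetical model predictions
print(round(r_squared(y, y_hat), 3))
```

An R2 of 0.81 therefore means the Cubist model's predictions account for 81% of the variance in length of stay around its mean.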

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used for neural language modeling (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). In order to train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary is very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that comprises Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done on Old Testament texts using the deep learning package Keras, based on Theano.
After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as output. In total, 1,023,411 input-output vector pairs were included in the dataset, and we divided them into training, validation, and test sets in the proportion 70:15:15. All the simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm took the longest time to train for both the 3- and 4-LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet the validation loss and perplexity were not improved significantly, or even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model. Although there were slight differences in the completeness of the generated sentences between the models, the sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for processing the Korean language in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
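The LSTM layers in such a phoneme model are built from gated cells. A single-unit, scalar-input step with toy weights (real phoneme models stack hundreds of units over 3-4 layers and use learned weight matrices) can be sketched as:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a single-unit LSTM on scalar input x.
    W maps each of the input, forget, and output gates and the candidate
    cell value to a (w_x, w_h, b) weight triple. Toy weights only."""
    def gate(name, squash):
        w_x, w_h, b = W[name]
        return squash(w_x * x + w_h * h_prev + b)
    i = gate("input",  sigmoid)
    f = gate("forget", sigmoid)
    o = gate("output", sigmoid)
    g = gate("cell",   math.tanh)
    c = f * c_prev + i * g      # cell state carries long-range context
    h = o * math.tanh(c)        # hidden state feeds the next step / layer
    return h, c

W = {k: (0.5, 0.25, 0.0) for k in ("input", "forget", "output", "cell")}
h, c = lstm_step(x=1.0, h_prev=0.0, c_prev=0.0, W=W)
print(round(h, 4), round(c, 4))
```

The gating is what lets a phoneme-level model remember context across the many time steps that make up a word, which a plain RNN handles poorly.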