• Title/Summary/Keyword: PDF model

Development of a Fatigue Damage Model of Wideband Process using an Artificial Neural Network (인공 신경망을 이용한 광대역 과정의 피로 손상 모델 개발)

  • Kim, Hosoung;Ahn, In-Gyu;Kim, Yooil
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.52 no.1
    • /
    • pp.88-95
    • /
    • 2015
  • For frequency-domain spectral fatigue analysis, the probability density function (PDF) of the stress range needs to be estimated from the stress spectrum alone, which is a frequency-domain representation of the response. The stress-range distribution of a narrow-band spectrum is known to follow the Rayleigh distribution; the PDF of a wide-band spectrum, however, is difficult to define clearly because of the complicated fluctuation pattern of the spectrum. In this paper, efforts have been made to establish the link between the probability density function of the stress range and the structural response of a wide-band Gaussian random process. An artificial neural network, one of the most powerful system-identification methods, was used to identify the multivariate functional relationship between idealized wide-band spectra and the resulting probability density functions. To achieve this, each spectrum was idealized as a superposition of two triangles with arbitrary location, height, and width, intended to cover wide-band spectra, and the probability density functions were represented by a linear combination of equally spaced Gaussian basis functions. To train the network under supervision, a variety of wide-band spectra were assumed, the converged probability density function of the stress range was derived for each using the rainflow counting method, and all of these data sets were fed into a three-layer perceptron model. The resulting nonlinear least-squares problem was solved using the Levenberg-Marquardt algorithm with a regularization term included. It was shown that the network trained on the given data set could reproduce the probability density function of an arbitrary wide-band two-triangle spectrum with good accuracy.
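
The PDF representation described in this abstract, a linear combination of equally spaced Gaussian basis functions, can be sketched as follows. This is a minimal illustration; the centers, width, and weights are arbitrary demo values, not the paper's trained network outputs.

```python
import math

def gaussian_basis_pdf(weights, centers, sigma):
    """PDF built as a weighted sum of equally spaced Gaussian basis functions."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return sum(w * norm * math.exp(-0.5 * ((x - c) / sigma) ** 2)
                   for w, c in zip(weights, centers))
    return pdf

# Equally spaced centers on a normalized stress-range axis; in the paper the
# weights would come from the trained network (here: arbitrary demo values).
centers = [0.5 * i for i in range(8)]
weights = [0.05, 0.15, 0.25, 0.25, 0.15, 0.10, 0.04, 0.01]  # sums to 1.0
p = gaussian_basis_pdf(weights, centers, sigma=0.25)
```

Because each basis function integrates to one, weights that sum to one keep the combination a valid probability density.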

A Bayesian Approach to Geophysical Inverse Problems (베이지안 방식에 의한 지구물리 역산 문제의 접근)

  • Oh Seokhoon;Chung Seung-Hwan;Kwon Byung-Doo;Lee Heuisoon;Jung Ho Jun;Lee Duk Kee
    • Geophysics and Geophysical Exploration
    • /
    • v.5 no.4
    • /
    • pp.262-271
    • /
    • 2002
  • This study presents a practical procedure for the Bayesian inversion of geophysical data. We applied geostatistical techniques to acquire prior model information, and then adopted the Markov chain Monte Carlo (MCMC) method to infer the characteristics of the marginal distributions of the model parameters. For the Bayesian inversion of dipole-dipole array resistivity data, we used indicator kriging and simulation techniques to generate cumulative density functions from Schlumberger array resistivity data and well-logging data, and obtained prior information by cokriging and simulation from covariogram models. The indicator approach makes it possible to incorporate non-parametric information into the probability density function. We also adopted an MCMC approach based on Gibbs sampling to examine the characteristics of the posterior probability density function and the marginal distribution of each parameter.
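
The Gibbs-sampling step of such an MCMC procedure can be illustrated with a toy target whose full conditionals are known in closed form. This is a generic sketch using a standard bivariate normal, not the study's resistivity posterior:

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    The full conditionals x|y ~ N(rho*y, 1-rho^2) and y|x ~ N(rho*x, 1-rho^2)
    are sampled alternately; the first burn_in sweeps are discarded."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho ** 2)
    x = y = 0.0
    samples = []
    for i in range(n_samples + burn_in):
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        if i >= burn_in:
            samples.append((x, y))
    return samples

chain = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
```

The retained draws approximate the joint posterior, and each coordinate of the chain approximates the corresponding marginal distribution, which is exactly what the marginal-characterization step of the abstract requires.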

Investigating the future changes of extreme precipitation indices in Asian regions dominated by south Asian summer monsoon

  • Deegala Durage Danushka Prasadi Deegala;Eun-Sung Chung
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.174-174
    • /
    • 2023
  • The impact of global warming on the south Asian summer monsoon is of critical importance for the large population of this region. This study aims to investigate future changes in precipitation extremes during the pre-monsoon and monsoon seasons across this region in a more organized regional structure. The study area is divided into six major divisions based on the Köppen-Geiger climate classification and 10 sub-divisions considering geographical location. The future changes in extreme precipitation indices are analyzed for each zone separately using five indices from the ETCCDI (Expert Team on Climate Change Detection and Indices): R10mm, Rx1day, Rx5day, R95pTOT, and PRCPTOT. Outputs from 10 global climate models (GCMs) from the latest CMIP6 under four SSP-RCP scenario combinations (SSP1-2.6, SSP2-4.5, SSP3-7.0, and SSP5-8.5) are used. The GCMs are bias-corrected using nonparametric quantile transformation based on the smoothing-spline method. The future period is divided into the near future (2031-2065) and the far future (2066-2100), and the changes are compared against the historical period (1980-2014). The analysis is carried out separately for the pre-monsoon (March, April, May) and monsoon (June, July, August, September) seasons. The changes are compared using probability density functions (PDFs), which are plotted using kernel density estimation. This study did not use a multi-model ensemble output; the changes in each extreme precipitation index are analyzed GCM by GCM. The results show that the performance of the GCMs varies with both the sub-zone and the precipitation index. Final conclusions are drawn by removing the poorly performing GCMs and analyzing the overall changes in the PDFs of the remaining GCMs.
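
The kernel density estimation used here to plot and compare the PDFs can be sketched as follows; the sample values, the bandwidth, and the uniform +2 shift are hypothetical stand-ins for an extreme-precipitation index, chosen only to illustrate the comparison:

```python
import math

def gaussian_kde(samples, bandwidth):
    """Gaussian kernel density estimate built from a finite sample."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return pdf

# Hypothetical index values: a "historical" sample and a shifted "future" one.
historical = [4.0, 5.0, 5.5, 6.0, 6.5, 7.0, 8.0]
future = [v + 2.0 for v in historical]   # uniform +2 shift for illustration
p_hist = gaussian_kde(historical, bandwidth=0.8)
p_fut = gaussian_kde(future, bandwidth=0.8)
```

Comparing the two PDFs then reduces to comparing the curves p_hist and p_fut: a rightward displacement of the future PDF indicates an intensification of the index.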


Numerical simulation of gasification of coal-water slurry for production of synthesis gas in a two stage entrained gasifier (2단 분류층 가스화기에서 합성가스 생성을 위한 석탄 슬러리 가스화에 대한 수치 해석적 연구)

  • Seo, Dong-Kyun;Lee, Sun-Ki;Song, Soon-Ho;Hwang, Jung-Ho
    • Proceedings of the Korean Society for New and Renewable Energy Conference
    • /
    • 2007.11a
    • /
    • pp.417-423
    • /
    • 2007
  • Oxy-gasification, or oxygen-blown gasification, enables the clean and efficient use of coal and opens a promising route to CO2 capture. The coal gasification process of a slurry-feed, entrained-flow coal gasifier was numerically predicted in this paper. The purposes of this study are to develop an evaluation technique for the design and performance optimization of coal gasifiers using numerical simulation, and to confirm the validity of the model. By dividing the complicated coal gasification process into several simplified stages (slurry evaporation, coal devolatilization, a mixture-fraction model, and two-phase reactions coupled with turbulent flow and two-phase heat transfer), a comprehensive numerical model was constructed to simulate the coal gasification process. The influence of turbulence on the gas properties was taken into account by a PDF (probability density function) model. A numerical simulation with the coal gasification model was performed on the Conoco-Philips type gasifier for an IGCC plant. The gas temperature distribution and product gas composition are also presented. Numerical computations were performed to assess the effect of varying the oxygen-to-coal and steam-to-coal ratios on the reactive flow field. The concentrations of the major products, CO and H2, were calculated for oxygen-to-coal ratios of 0.2-1.5 and steam-to-coal ratios of 0.3-0.7. To verify the validity of the predictions, the predicted CO and H2 concentrations at the exit of the gasifier were compared with previous work on the same geometry and operating points. The predictions showed that the CO and H2 concentrations increased gradually to a maximum with increasing oxygen-to-coal and steam-to-coal ratios and then decreased. High values of CO and H2 were obtained when the oxygen-to-coal ratio was between 0.8 and 1.2 and the steam-to-coal ratio was between 0.4 and 0.5. This study also compares the CFD (Computational Fluid Dynamics) results with STATNJAN results, which treat the gasifier as being in chemical equilibrium, in order to assess the effect of the flow field relative to the equilibrium assumption. The gasifier is also divided into several zones to study the local evolution of the gasification; with this method, distinct characteristics can be identified in each zone.
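
The abstract does not state which presumed shape its PDF model uses; a common choice in gasifier CFD is a beta PDF over the mixture fraction, parameterized by its mean and variance. A minimal sketch of presumed-PDF averaging under that assumption:

```python
import math

def beta_pdf(z, mean, var):
    """Beta PDF of mixture fraction z, parameterized by its mean and variance
    (requires 0 < var < mean * (1 - mean))."""
    g = mean * (1.0 - mean) / var - 1.0
    a, b = mean * g, (1.0 - mean) * g
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * z ** (a - 1.0) * (1.0 - z) ** (b - 1.0)

def presumed_pdf_mean(phi, mean, var, n=2000):
    """Mean of a scalar phi(z) weighted by the presumed PDF (midpoint rule)."""
    dz = 1.0 / n
    return sum(phi((i + 0.5) * dz) * beta_pdf((i + 0.5) * dz, mean, var)
               for i in range(n)) * dz
```

Any laminar property tabulated against mixture fraction (temperature, species mass fractions) can be averaged this way to obtain its turbulent mean at a cell with the given mixture-fraction mean and variance.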


Human Exposure to BTEX and Its Risk Assessment Using the CalTOX Model According to the Probability Density Function in Meteorological Input Data (기상변수들의 확률밀도함수(PDF)에 따른 CalTOX모델을 이용한 BTEX 인체노출량 및 인체위해성 평가 연구)

  • Kim, Ok;Song, Youngho;Choi, Jinha;Park, Sanghyun;Park, Changyoung;Lee, Minwoo;Lee, Jinheon
    • Journal of Environmental Health Sciences
    • /
    • v.45 no.5
    • /
    • pp.497-510
    • /
    • 2019
  • Objectives: The aim of this study was to establish the reliability of using the CalTOX model when evaluating the LADD (or ADD) and Risk (or HQ) among local residents for emissions of BTEX (benzene, toluene, ethylbenzene, xylene), by closely examining the difference in the confidence intervals of the assessment outcomes according to the probability density functions assigned to the input variables. Methods: The assessment was divided into Method I, in which the probability density functions of the model's meteorological variables were input as log-normal distributions, and Method II, in which the optimal probability density function for each variable was first identified using @Risk and then input. A t-test was carried out to analyze the difference in the confidence intervals of the two sets of results. Results: The LADD of benzene was evaluated as 1.46E-03 mg/kg-d, the ADD of toluene as 1.96E-04 mg/kg-d, the ADD of ethylbenzene as 8.15E-05 mg/kg-d, and the ADD of xylene as 2.30E-04 mg/kg-d. For the predicted confidence intervals of the LADD and ADD, there was a significant difference between Methods I and II in the inhalation LADD for benzene and in the inhalation ADD and total ADD for toluene and xylene. The Risk for benzene was 3.58E-05, and the HQ was 3.78E-03 for toluene, 1.48E-03 for ethylbenzene, and 3.77E-03 for xylene. For the HQ of toluene and xylene, the difference in confidence intervals between Methods I and II was significant. Conclusions: The human risk assessment for BTEX was made using both Method I (log-normal probability density functions for the CalTOX meteorological variables) and Method II (optimal probability density functions identified using @Risk). As a result, it was found that the Risk (or HQ) is the same, but that there is a significant difference between the confidence intervals of the Risk (or HQ) obtained by the two methods.
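
The dependence of the dose confidence interval on the assumed input distribution can be sketched with a generic Monte Carlo loop. This is not the CalTOX multimedia model; the dose expression ADD = C x IR / BW and the intake_rate and body_weight constants are simplified, hypothetical stand-ins:

```python
import random

def simulate_add(conc_sampler, intake_rate, body_weight, n=20000, seed=1):
    """Monte Carlo sketch of ADD = C * IR / BW (a simplified dose expression).
    Returns the median and the 2.5th / 97.5th percentiles of the ADD."""
    rng = random.Random(seed)
    vals = sorted(conc_sampler(rng) * intake_rate / body_weight
                  for _ in range(n))
    def pct(p):
        return vals[int(p * (n - 1))]
    return pct(0.5), pct(0.025), pct(0.975)

# Method I style input: assume the uncertain input follows a log-normal
# distribution; swapping in a different fitted sampler changes the interval.
lognormal = lambda rng: rng.lognormvariate(0.0, 0.5)
med, lo, hi = simulate_add(lognormal, intake_rate=20.0, body_weight=70.0)
```

Running the same loop with a different fitted distribution for the input (the Method II idea) leaves the central estimate similar while widening or narrowing the (lo, hi) interval, which is the effect the study quantifies.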

Automatic Generation of Bibliographic Metadata with Reference Information for Academic Journals (학술논문 내에서 참고문헌 정보가 포함된 서지 메타데이터 자동 생성 연구)

  • Jeong, Seonki;Shin, Hyeonho;Ji, Seon-Yeong;Choi, Sungphil
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.56 no.3
    • /
    • pp.241-264
    • /
    • 2022
  • Bibliographic metadata can help researchers effectively utilize the essential publications they need and grasp academic trends in their own fields. Since manual creation of the metadata is costly and time-consuming, and since rule-based methods are hard to apply effectively given the immoderate variety of article forms and styles across publishers and academic societies, this study proposes a two-step extraction process based on rules and deep neural networks for generating the bibliographic metadata of scientific articles. The target extraction areas in the articles were identified using a deep neural network-based model, and the details in those areas were then analyzed and subdivided into the relevant metadata elements. The proposed model also includes a component for generating reference summary information, which is able to separate the end of the body text from the start of the references, extract individual references using an essential rule set, and identify the bibliographic items in each reference with a deep neural network. In addition, to confirm the possibility of a model that generates the bibliographic information of academic papers without pre- and post-processing, we conducted an in-depth comparative experiment with various settings and configurations. The experiment showed that the proposed method achieves higher performance.
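
The rule-based step that separates individual references can be sketched with a simple pattern split. The "[n]" numbering style is an assumption for illustration; actual reference formats vary widely by publisher, which is exactly why the paper follows the rules with a neural model:

```python
import re

def split_references(block):
    """Split a reference section into individual entries keyed by their
    '[n]' numbers (one assumed numbering convention among many)."""
    parts = re.split(r'(?m)^\[(\d+)\]\s*', block)
    refs = {}
    # re.split with a capturing group alternates captured numbers and bodies.
    for i in range(1, len(parts) - 1, 2):
        refs[int(parts[i])] = parts[i + 1].strip()
    return refs
```

Each extracted entry would then be passed to the neural component to label its individual bibliographic items (authors, title, venue, year).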

Numerical Modeling for Turbulent Combustion Processes of Vortex Hybrid Rocket (Vortex Hybrid 로켓 난류연소과정의 모델링 해석)

  • 조웅호;김후중;김용모;윤명원
    • Proceedings of the Korean Society of Propulsion Engineers Conference
    • /
    • 2003.05a
    • /
    • pp.244-245
    • /
    • 2003
  • Compared with solid- or liquid-propellant rockets, hybrid propulsion systems offer many advantages in operational stability and safety. Solid fuels such as HTPB are safe to manufacture, store, transport, and load, and by controlling the oxidizer flow into the solid fuel of a hybrid rocket, the thrust can be varied and the engine can easily be shut down and re-ignited. For these reasons, the hybrid engine is expected to be a more economical device. Conventional hybrid rocket engines, however, suffer from lower fuel regression rates and combustion efficiency than solid-propellant rockets. To overcome this drawback and achieve the required thrust and fuel mass flow rate, the surface area of the solid fuel must be increased. Conventional hybrid engines increase the surface area with multiple combustion ports in the fuel grain, but this increases the unused volume and considerably reduces the propellant mass and volume fractions. Over the past decades, research has continued on fuel regression characteristics and engine performance improvement, and hybrid rocket engines using swirl flow have recently been proposed to relax volume constraints and enhance the fuel regression rate. In such swirl-flow hybrid rockets, convective heat transfer at the solid-fuel wall increases markedly compared with conventional hybrid rockets, in which the oxidizer flows parallel to the fuel grain, so a much higher regression rate of the solid fuel can be obtained. The combustion process of a swirl-flow hybrid rocket involves complex physical processes: pyrolysis of the solid fuel, convective heat transfer, turbulent mixing, turbulence-chemistry interaction, soot formation and oxidation, radiative heat transfer from soot particles and combustion gases, and interaction between the combustion field and the acoustic field. Among these, turbulent combustion, convective heat transfer near the solid-fuel wall, radiative heat transfer from the soot generated during combustion, and the surface reactions during fuel pyrolysis strongly influence the regression rate. In particular, predictions of the location and thickness of the turbulent flame and of the soot volume fraction generated in the non-premixed turbulent flame are strongly affected by the turbulent combustion, heat transfer, and regression-rate models, so these physical processes must be modeled accurately to improve the predictive capability of the numerical model. The turbulent combustion process inside the vortex hybrid rocket was modeled with a laminar flamelet model, as follows. The species and energy conservation equations of the flamelets in mixture-fraction space, including detailed chemical kinetics, were considered; using the flamelet equations and the relation between mixture fraction and scalar dissipation rate, all reactive scalars are obtained as functions of mixture fraction and scalar dissipation rate. The nonlinear algebraic equations obtained by discretizing the flamelet equations in mixture-fraction space were solved with TWOPNT (Grcar, 1992) and stored in a flamelet library. Using the stored laminar flamelet library, the mean thermodynamic properties of the turbulent flame field are obtained by a presumed-PDF approach. In this study, to analyze the turbulence-chemistry interaction in a hybrid rocket combustion field with strong swirl, numerical results obtained with the laminar flamelet model, a chemical equilibrium model, and the eddy dissipation model were systematically compared. The laminar flamelet model combined with state-of-the-art physical models was also used to examine combustion and soot formation and oxidation in a swirl-flow hybrid rocket engine, along with the effect of radiative heat transfer on the regression rate of the solid-fuel surface. In particular, the effects of swirl intensity, oxidizer injection location, and swirl-generation method on the combustion characteristics and regression rate of the hybrid rocket were analyzed in detail.
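
The flamelet-library step described above, in which reactive scalars are stored as functions of mixture fraction and scalar dissipation rate, amounts to table interpolation at run time. A minimal bilinear-lookup sketch; the grid and table values are arbitrary placeholders, not data from a real flamelet solution:

```python
def locate(grid, v):
    """Index of the cell [grid[i], grid[i+1]] containing v (clamped to the grid)."""
    i = 0
    while i < len(grid) - 2 and grid[i + 1] <= v:
        i += 1
    return i

def flamelet_lookup(z_grid, chi_grid, table, z, chi):
    """Bilinear interpolation of a tabulated flamelet quantity (e.g. temperature)
    on a (mixture fraction, scalar dissipation rate) grid."""
    i, j = locate(z_grid, z), locate(chi_grid, chi)
    tz = (z - z_grid[i]) / (z_grid[i + 1] - z_grid[i])
    tc = (chi - chi_grid[j]) / (chi_grid[j + 1] - chi_grid[j])
    return ((1 - tz) * (1 - tc) * table[i][j]
            + tz * (1 - tc) * table[i + 1][j]
            + (1 - tz) * tc * table[i][j + 1]
            + tz * tc * table[i + 1][j + 1])
```

In a presumed-PDF closure, this pointwise lookup would then be averaged over the PDF of mixture fraction to yield the mean properties used by the flow solver.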


A New Method on the Nonlinear Distortion Analysis in the OFDM Communication System (OFDM 통신 시스템에서 비선형 왜곡분석의 새로운 분석기법)

  • 이동훈;정기호;유흥균
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.13 no.6
    • /
    • pp.538-545
    • /
    • 2002
  • In an orthogonal frequency division multiplexing (OFDM) system, nonlinear distortion in the high-power amplifier (HPA) degrades system performance because of the high peak-to-average power ratio (PAPR). In this paper, a semi-analytical method is newly proposed for evaluating the performance of a nonlinearly distorted OFDM communication system. In the proposed method, the probability density function (pdf) of the PAPR is first generated by computer simulation. The mean and variance of the nonlinear distortion noise process are then computed, and finally the overall BER is found analytically. When the equivalent SSPA model is applied to QPSK/16-QAM over an AWGN channel, the BER is calculated for varying IBO (input back-off) and PAPR parameters. The results of the proposed method are shown to be very close to those of the conventional Monte Carlo method, while the computation time can be considerably reduced compared with the conventional method, whose run time depends on the magnitudes of the BER and IBO.
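
The first step of the proposed method, generating the PAPR distribution by simulation, can be sketched as follows; the subcarrier count and number of symbols are arbitrary demo values, and the returned samples would be binned into the pdf used by the semi-analytical BER step:

```python
import cmath
import math
import random

def papr_samples(n_sub=64, n_symbols=100, seed=0):
    """PAPR (linear, not dB) of random OFDM symbols with QPSK subcarriers,
    computed from a direct inverse DFT of each symbol."""
    rng = random.Random(seed)
    qpsk = [complex(a, b) / math.sqrt(2.0) for a in (1, -1) for b in (1, -1)]
    out = []
    for _ in range(n_symbols):
        X = [rng.choice(qpsk) for _ in range(n_sub)]
        # time-domain samples x[n] = (1/N) * sum_k X_k * exp(j*2*pi*k*n/N)
        x = [sum(Xk * cmath.exp(2j * math.pi * k * n / n_sub)
                 for k, Xk in enumerate(X)) / n_sub
             for n in range(n_sub)]
        powers = [abs(v) ** 2 for v in x]
        out.append(max(powers) / (sum(powers) / n_sub))
    return out
```

A histogram of these samples approximates the PAPR pdf; the PAPR is bounded between 1 and the number of subcarriers.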

Fixed-point Implementation for Downlink Traffic Channel of IEEE 802.16e OFDMA TDD System (IEEE 802.16e OFDMA TDD 시스템 하향링크 트래픽 채널의 Fixed-point 구현 방법론)

  • Kim Kyoo-Hyun;Sun Tae-Hyung;Wang Yu-Peng;Chang Kyung-Hi;Park Hyung-Il;Eo Ik-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.6A
    • /
    • pp.593-602
    • /
    • 2006
  • This paper proposes a methodology for deciding the suitable bit size that minimizes hardware complexity and the performance degradation relative to the floating-point design in the fixed-point implementation of the downlink traffic channel of the IEEE 802.16e OFDMA TDD system. One of the major considerations in a fixed-point design is choosing properly between saturation and quantization, based on knowledge of the signal distribution from its pdf or histogram. Exhaustive computer simulation over various candidate bit sizes is then required to obtain an appropriate bit size while minimizing performance degradation. We carry out computer simulations to decide the optimized bit size of the downlink traffic channel under AWGN and the ITU-R M.1225 Veh-A channel model.
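
The saturation-versus-quantization trade-off that drives the bit-size choice can be sketched as follows; the Gaussian stand-in for the signal distribution and the particular bit widths are illustrative assumptions, not the paper's signals:

```python
import math
import random

def quantize(x, int_bits, frac_bits):
    """Symmetric fixed-point quantizer: saturate to the representable range,
    then round to the nearest 2**-frac_bits step."""
    step = 2.0 ** -frac_bits
    limit = 2.0 ** int_bits - step
    x = max(-limit - step, min(limit, x))  # saturation
    return round(x / step) * step          # quantization

def rms_quant_error(samples, int_bits, frac_bits):
    """RMS error of the fixed-point representation over a signal sample."""
    errs = [(quantize(s, int_bits, frac_bits) - s) ** 2 for s in samples]
    return math.sqrt(sum(errs) / len(errs))

# A stand-in for the signal histogram: zero-mean Gaussian samples.
rng = random.Random(0)
samples = [rng.gauss(0.0, 1.0) for _ in range(5000)]
e_coarse = rms_quant_error(samples, int_bits=3, frac_bits=4)
e_fine = rms_quant_error(samples, int_bits=3, frac_bits=8)
```

Sweeping int_bits and frac_bits over such a histogram shows where extra bits stop reducing the error, which is the smallest bit size worth implementing in hardware.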

Automatic Object Extraction from Electronic Documents Using Deep Neural Network (심층 신경망을 활용한 전자문서 내 객체의 자동 추출 방법 연구)

  • Jang, Heejin;Chae, Yeonghun;Lee, Sangwon;Jo, Jinyong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.11
    • /
    • pp.411-418
    • /
    • 2018
  • With the proliferation of artificial intelligence technology, it is becoming important to obtain, store, and utilize scientific data in research and science sectors. A number of methods for extracting meaningful objects such as graphs and tables from research articles have been proposed to eventually obtain scientific data. Existing extraction methods using heuristic approaches are hardly applicable to electronic documents having heterogeneous manuscript formats because they are designed to work properly for some targeted manuscripts. This paper proposes a prototype of an object extraction system which exploits a recent deep-learning technology so as to overcome the inflexibility of the heuristic approaches. We implemented our trained model, based on the Faster R-CNN algorithm, using the Google TensorFlow Object Detection API and also composed an annotated data set from 100 research articles for training and evaluation. Finally, a performance evaluation shows that the proposed system outperforms a comparator adopting heuristic approaches by 5.2%.
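
Detectors of this kind are typically evaluated by scoring predicted regions against annotated ones using intersection-over-union (IoU); the abstract does not state its exact evaluation metric, so this is a generic sketch of the standard overlap measure:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0
```

A predicted graph or table region is usually counted as correct when its IoU with an annotated region exceeds a threshold such as 0.5.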