• Title/Summary/Keyword: Data approximation

PID controller design based on direct synthesis for set point speed control of gas turbine engine in warships (함정용 가스터빈 엔진의 속도 추종제어를 위한 DS 기반의 PID 제어기 설계)

  • Jong-Phil KIM;Ki-Tak RYU;Sang-Sik LEE;Yun-Hyung LEE
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.59 no.1
    • /
    • pp.55-64
    • /
    • 2023
  • Gas turbine engines are widely used as prime movers for generators and propulsion systems in warships. This study addresses the design of a DS-based PID controller for speed control of the LM-2500 gas turbine engine used for propulsion in warships. To this end, we first derive a dynamic model of the LM-2500 from actual sea trial data. Next, the process reaction curve (PRC) method is used to approximate a first-order plus time delay (FOPTD) model, and a DS-based PID controller design technique is proposed according to the approximation of the time-delay term. Set-point tracking simulations of the proposed controller are conducted in MATLAB (2016b), and its performance indices are evaluated and compared with existing control methods. In simulations at each operating point, the proposed controller showed the smallest %OS, meaning that the rpm does not change abruptly. In addition, its IAE and IAC were also the smallest, giving the best results for error performance and controller effort.
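As a rough illustration of direct-synthesis tuning for an FOPTD model, the sketch below uses one common DS rule (with a 1/1 Padé approximation of the delay); the paper's exact derivation and the LM-2500 parameters are not reproduced here, and the numbers are placeholders.

```python
# Hedged sketch: direct-synthesis (DS) PID tuning for a first-order plus
# time delay (FOPTD) model G(s) = K * exp(-theta*s) / (tau*s + 1).
# These formulas are a common textbook DS rule, assumed for illustration;
# the paper's own design may differ.

def ds_pid_foptd(K, tau, theta, tau_c):
    """Return (Kc, tau_I, tau_D) for a parallel PID from DS tuning."""
    Kc = (tau + theta / 2) / (K * (tau_c + theta / 2))  # proportional gain
    tau_I = tau + theta / 2                             # integral time
    tau_D = tau * theta / (2 * tau + theta)             # derivative time
    return Kc, tau_I, tau_D

# Example with made-up FOPTD parameters (not the LM-2500 values):
Kc, tau_I, tau_D = ds_pid_foptd(K=1.2, tau=8.0, theta=1.5, tau_c=3.0)
print(f"Kc={Kc:.3f}, tau_I={tau_I:.2f} s, tau_D={tau_D:.3f} s")
```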

Analysis of Static Crack Growth in Asphalt Concrete using the Extended Finite Element Method (확장유한요소법을 이용한 아스팔트의 정적균열 성장 분석)

  • Zi, Goangseup;Yu, Sungmun;Thanh, Chau-Dinh;Mun, Sungho
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.4D
    • /
    • pp.387-393
    • /
    • 2010
  • This paper studies static crack growth in asphalt pavement using the extended finite element method (XFEM). To account for the nonlinear characteristics of asphalt concrete, a viscoelastic constitutive equation based on the Maxwell chain is used, and a linear cohesive crack model is used to regularize the crack. Instead of constructing the viscoelastic constitutive law from a Prony approximation of the experimentally measured compliance and retardation times, we use a smooth log-power function that optimally fits the experimental data and is infinitely differentiable. The partial moduli of the Maxwell chain obtained from the log-power function simplify the analysis because they vary more smoothly and stably than those obtained with ordinary approaches such as the least-squares method. Using the developed method, the static crack growth test results can be simulated satisfactorily.
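A minimal sketch of fitting a smooth "log-power"-type curve to measured compliance data, in the spirit of the abstract. The functional form a + b·log(1 + (t/t0)^n) and the synthetic data below are illustrative assumptions, not the paper's actual model or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_power(t, a, b, t0, n):
    # assumed smooth, infinitely differentiable fitting function
    return a + b * np.log1p((t / t0) ** n)

t = np.logspace(-2, 3, 30)                       # time points [s]
J_meas = 0.5 + 0.3 * np.log1p((t / 5.0) ** 0.4)  # synthetic "measured" compliance
J_meas += np.random.default_rng(0).normal(0, 0.01, t.size)

popt, _ = curve_fit(log_power, t, J_meas, p0=[0.4, 0.2, 1.0, 0.5],
                    bounds=([0.0, 0.0, 1e-3, 0.05], [5.0, 5.0, 1e3, 3.0]))
print("fitted parameters:", popt)
```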

Uncertainty of the operational models in the Nakdong River mouth (낙동강 하구 환경변화 예측모형의 불확실성)

  • Cho, Hong Yeon;Lee, Gi Seop
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.4-4
    • /
    • 2022
  • To restore the environment and ecology of the Nakdong River estuary, a project is underway to re-create an estuarine environment through seawater inflow, and demand is growing for research that predicts ecological and environmental change as a function of the scale and frequency of seawater inflow. More specifically, predictions are needed not only of the short-term flow and salinity dispersion caused by seawater inflow, but also of longer-term changes in morphology, water quality, and ecology, and most of these predictions rely heavily on numerical models. However, short-term prediction with numerical models must use input conditions for the near future, so uncertainty in the input conditions is introduced; together with the predictive limits imposed by uncertainty in environmental-ecological models, errors accumulate, which can severely restrict direct use of the results. In addition, if variance or bias errors arise persistently during operation, confidence in the model predictions drops sharply, so appropriate operational techniques are required. A model is an approximation of the natural phenomenon of interest, and unexpected errors can occur, so a data assimilation process using observational data is an essential part of an operational model. Even weather forecasting, which rests on a solid theoretical foundation of fluid dynamics, is operated with a data assimilation step that improves the model predictions using observations from all available stations. The key concept behind this step is to acknowledge the (predictive) limits of a numerical model and, rather than relying on the model alone, to reduce those limits with observational data. Monitoring is the indicator that reveals a model's limits. This concept, which implies an unavoidable interdependence between modeling and monitoring, is not restricted to short-term flow and salinity-dispersion prediction; it also applies to models of ecological change, where long-term changes are expected. Because ecological change must be understood from a long-term perspective rather than as an immediate response, and involves a wider range of factors, in some respects a monitoring design specifying appropriate observation frequencies and variables is more important than the model itself. Since models based on theoretical mass conservation equations are affected by a variety of real-world factors, an approach is required that acknowledges the limits of the model and actively uses monitoring data to reduce uncertainty.
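The data-assimilation idea described above can be illustrated by a scalar Kalman-style update that blends a model forecast with an observation, weighting each by its uncertainty. The variables and numbers below are illustrative assumptions, not from the operational Nakdong River models.

```python
def assimilate(forecast, var_forecast, obs, var_obs):
    """Return the analysis value and variance after one scalar update."""
    gain = var_forecast / (var_forecast + var_obs)   # Kalman gain
    analysis = forecast + gain * (obs - forecast)    # corrected state
    var_analysis = (1.0 - gain) * var_forecast       # reduced uncertainty
    return analysis, var_analysis

# e.g. model predicts salinity 28.0 psu (+/- 2.0), sensor reads 25.5 (+/- 1.0)
print(assimilate(28.0, 2.0**2, 25.5, 1.0**2))
```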

Numerical study of the flow and heat transfer characteristics in a scale model of the vessel cooling system for the HTTR

  • Tomasz Kwiatkowski;Michal Jedrzejczyk;Afaque Shams
    • Nuclear Engineering and Technology
    • /
    • v.56 no.4
    • /
    • pp.1310-1319
    • /
    • 2024
  • The reactor cavity cooling system (RCCS) is a passive reactor safety system commonly present in designs of High-Temperature Gas-cooled Reactors (HTGRs); it removes heat from the reactor pressure vessel by means of natural convection and radiation and is one of the factors ensuring that the reactor does not melt down under any plausible accident scenario. For the simulation of accident scenarios, which are transient phenomena unfolding over spans of up to several days, intermediate-fidelity methods and system codes must be employed to limit the models' execution time. These models can quantify radiative heat transfer well, but heat transfer caused by natural convection must be quantified using correlations for the heat transfer coefficient. Reliable correlations for HTGR RCCS heat transfer coefficients are difficult to obtain experimentally because of such a system's size. They could, however, be obtained from high-fidelity steady-state simulations of RCCSs. The Rayleigh number in RCCSs is too high for a Direct Numerical Simulation (DNS) technique; thus, a Reynolds-Averaged Navier-Stokes (RANS) approach must be employed. There are many RANS models, each performing best under different geometry and fluid flow conditions; to find the most suitable one for simulating an RCCS, the RANS models need to be validated. This work benchmarks various RANS models against three experiments performed on the HTTR RCCS mockup by the Japan Atomic Energy Agency (JAEA) in 1993. This facility is a 1/6-scale model of the vessel cooling system (VCS) of the High Temperature Engineering Test Reactor (HTTR), which is operated by JAEA. Multiple RANS models were evaluated on a simplified 2D axisymmetric geometry. They were found to reproduce the experimental temperature profiles with errors of up to 22% for the lowest-temperature benchmark and 15% for the higher-temperature benchmarks. The results highlight that pragmatic turbulence models need to be validated and improved for high-Rayleigh-number natural-convection-driven flows, that more publicly available experimental data from RCCS-like experiments are needed, and that a 2D axisymmetric geometry approximation is likely insufficient to capture all the relevant phenomena in RCCS simulations.
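For reference, the Rayleigh number that characterizes the natural-convection regime mentioned above is Ra = gβΔT·L³/(να). The sketch below evaluates it with placeholder air properties and dimensions, not the HTTR VCS mockup's actual conditions.

```python
# Illustrative Rayleigh-number estimate (assumed placeholder values).
g = 9.81        # gravitational acceleration [m/s^2]
beta = 3.0e-3   # thermal expansion coefficient [1/K]
dT = 100.0      # characteristic temperature difference [K]
L = 1.0         # characteristic length [m]
nu = 1.8e-5     # kinematic viscosity [m^2/s]
alpha = 2.5e-5  # thermal diffusivity [m^2/s]

Ra = g * beta * dT * L**3 / (nu * alpha)
print(f"Ra = {Ra:.2e}")  # values well above ~1e9 indicate turbulent natural convection
```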

GIS Vector Map Compression using Spatial Energy Compaction based on Bin Classification (빈 분류기반 공간에너지집중기법을 이용한 GIS 벡터맵 압축)

  • Jang, Bong-Joo;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.49 no.3
    • /
    • pp.15-26
    • /
    • 2012
  • Recently, with the growing applicability of vector-data-based digital maps for geographic information and advances in geographic measurement techniques, GIS (geographic information service) offerings with high resolution and large data volumes are being distributed actively. This paper proposes an efficient vector map compression technique using SEC (spatial energy compaction) based on classified bins for vector maps with 1 cm detail and wide extent. We encode polygons and polylines, the main objects used to express geographic information in a vector map. First, we classify three types of bins and allocate the number of bits for each bin using adjacencies among the objects; then, for each classified bin, energy compaction and/or pre-defined VLC (variable-length coding) are performed according to the bin's characteristics. For the same target map, a vector simplification algorithm achieved a compression ratio of only about 13% at 1 m resolution, whereas our method achieved an encoding efficiency of more than 80% on the original vector map at 1 cm resolution. Experimental results also show that it attains both a higher compression ratio and faster computation than the existing SEC-based compression algorithm. Moreover, our algorithm showed much better accuracy and computational performance than a vector approximation algorithm at the same data volumes.
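A rough sketch of the bin-classification idea summarized above: quantize the coordinate differences between successive polyline vertices, group them into bins by magnitude, and allocate a bit budget per bin. The three thresholds and bit counts below are illustrative assumptions, not the paper's actual SEC/VLC design.

```python
def classify_and_budget(deltas, thresholds=(1, 16), bits=(4, 8, 16)):
    """Assign each quantized delta to a bin and report the total bit cost."""
    total_bits = 0
    bins = []
    for d in deltas:
        mag = abs(d)
        if mag <= thresholds[0]:
            b = 0          # small deltas: cheapest bin
        elif mag <= thresholds[1]:
            b = 1          # medium deltas
        else:
            b = 2          # large deltas: most bits
        bins.append(b)
        total_bits += bits[b]
    return bins, total_bits

deltas = [0, 1, -3, 12, 250, -1, 7]   # example quantized vertex deltas (1 cm units)
print(classify_and_budget(deltas))
```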

Development of Traffic Conflict Technique with Fuzzy Reasoning Theory (퍼지추론을 적용한 교통상충기법(TCT) 개발)

  • ;;;今田寬典
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.1
    • /
    • pp.55-63
    • /
    • 2002
  • It is well known that the Traffic Conflict Technique (TCT) is used to evaluate the safety of intersections when traffic accident data and survey time are in short supply. Because the data used in the traffic conflict technique are collected by trained surveyors, they depend on the surveyors' knowledge, experience, and individual characteristics, so the survey data produce varying results. This variance should therefore be minimized before it enters the TCT calculations, yet no technique for minimizing it had been developed. This paper therefore focuses on a technical method to minimize the variance. To this end, fuzzy reasoning theory is applied to the existing traffic conflict technique, the most comprehensive method in the country, and a new traffic conflict technique model is developed. Fuzzy reasoning theory is well suited to minimizing the variance among surveyors because its approximate reasoning structure can systematically account for surveyor uncertainty. In a pilot study, the new procedure reduced the variance by 53 percent and doubled the value of the conversion factor compared with the existing traffic conflict technique. The proposed method can be used to evaluate intersection safety and for before-and-after analyses of black-spot improvement projects.
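A minimal sketch of the approximate-reasoning idea: fuzzify a surveyor's observation with triangular membership functions, apply simple rules, and defuzzify to a severity score. The variables (time-to-collision, severity) and the membership breakpoints are illustrative assumptions, not the paper's model.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

ttc = 1.8  # observed time-to-collision [s]

# fuzzify the input
short = tri(ttc, 0.0, 1.0, 2.5)
medium = tri(ttc, 1.0, 2.5, 4.0)

# two toy rules: "short TTC -> severe conflict", "medium TTC -> moderate conflict"
severe, moderate = short, medium

# defuzzify with a weighted average of representative severity scores
score = (severe * 0.9 + moderate * 0.5) / (severe + moderate)
print(f"conflict severity = {score:.2f}")
```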

Overpressure prediction of the Efomeh field using synthetic data, onshore Niger Delta, Nigeria (합성탄성파 기록을 이용한 나이지리아의 나이저 삼각주 해안 에포메(Efomeh) 지역의 이상고압 예측)

  • Omolaiye, Gabriel Efomeh;Ojo, John Sunday;Oladapo, Michael Ilesanmi;Ayolabi, Elijah A.
    • Geophysics and Geophysical Exploration
    • /
    • v.14 no.1
    • /
    • pp.50-57
    • /
    • 2011
  • For effective and accurate prediction of overpressure in the Efomeh field, located in the Niger Delta basin of Nigeria, integrated seismic and borehole analyses were undertaken. Normal and abnormal pore pressure zones were delineated based on the principle of normal velocity trends and deviations from them; the transition between the two trends marks the top of overpressure. The overpressure tops were picked at regular intervals from seismic data using interval velocities obtained by applying Dix's approximation. The accuracy of the predicted overpressure zone was confirmed from the sonic velocity data of the Efomeh 01 well. The difference between the predicted and observed depths of overpressure was less than 10 m at the Efomeh 01 well location, with confidence of over 99 per cent. The depth map generated shows that the depth to the top of the overpressure zone of the Efomeh field falls within the sub-sea range of 2655 ± 2 m (2550 ms) to 3720 ± 2 m (2900 ms). This depth corresponds to thick marine shales on the Efomeh 01 composite log. The lower part of the Agbada Formation within the Efomeh field is overpressured, and the depth of the top of the overpressure does not follow any time-stratigraphic boundary across the field. Predicting the top of the overpressure zone within the Efomeh field is very important for potential wells whose total depth will exceed 2440 m sub-sea, both for safer drilling practice and for the prevention of lost circulation.
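Dix's approximation, cited above for obtaining interval velocities, converts RMS (stacking) velocities and two-way times at two reflectors into the interval velocity between them. The sketch below shows the formula with made-up velocity/time pairs, not the Efomeh field data.

```python
import math

def dix_interval_velocity(v_rms1, t1, v_rms2, t2):
    """Interval velocity [m/s] between two reflectors from RMS velocities and two-way times [s]."""
    return math.sqrt((v_rms2**2 * t2 - v_rms1**2 * t1) / (t2 - t1))

# illustrative values only
print(dix_interval_velocity(v_rms1=2200.0, t1=2.55, v_rms2=2350.0, t2=2.90))
```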

ADMM algorithms in statistics and machine learning (통계적 기계학습에서의 ADMM 알고리즘의 활용)

  • Choi, Hosik;Choi, Hyunjip;Park, Sangun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.28 no.6
    • /
    • pp.1229-1244
    • /
    • 2017
  • In recent years, as demand for data-based analytical methodologies has increased in various fields, optimization methods have been developed to meet it. In particular, many constrained problems in statistics and machine learning can be solved by convex optimization. The alternating direction method of multipliers (ADMM) can effectively handle linear constraints and can be used as a parallel optimization algorithm. ADMM is an approximation algorithm that solves a complex original problem by splitting it into subproblems that are easier to optimize and then combining their solutions; it is useful for optimizing non-smooth or composite objective functions. It is widely used in statistics and machine learning because algorithms can be constructed systematically from duality theory and the proximal operator. In this paper, we examine applications of the ADMM algorithm in various fields related to statistics, focusing on two major points: (1) the splitting strategy for the objective function, and (2) the role of the proximal operator in explaining the Lagrangian method and its dual problem. In doing so, we introduce methodologies that utilize regularization. Simulation results are presented to demonstrate the effectiveness of the lasso.
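A minimal ADMM sketch for the lasso illustrates the splitting and the proximal (soft-thresholding) step discussed above: minimize 0.5·||Ax − b||² + λ||z||₁ subject to x − z = 0. The data are randomly generated and the parameter choices are illustrative, not from the paper's simulations.

```python
import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_rhoI_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cache the factor
    Atb = A.T @ b
    for _ in range(n_iter):
        x = AtA_rhoI_inv @ (Atb + rho * (z - u))   # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)       # proximal step for the L1 term
        u = u + x - z                              # dual (scaled multiplier) update
    return z

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.1 * rng.normal(size=100)
print(np.round(admm_lasso(A, b, lam=1.0), 2))
```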

A study on the difference and calibration of empirical influence function and sample influence function (경험적 영향함수와 표본영향함수의 차이 및 보정에 관한 연구)

  • Kang, Hyunseok;Kim, Honggie
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.5
    • /
    • pp.527-540
    • /
    • 2020
  • In data analysis, studying outliers, which lie outside the main tendency, is as important as studying data that follow the general tendency. In this study we discuss the influence function for outlier discrimination. We derive the sample influence functions of the sample mean, sample variance, and sample standard deviation, which were not derived directly in previous research. The results allow us to examine mathematically the relationship between the empirical influence function and the sample influence function, and to consider a method that approximates the sample influence function by the empirical influence function. The validity of the relationship between the approximated sample influence function and the empirical influence function is verified by simulation with random samples from a normal distribution. The simulation confirms both the relationship between the two influence functions, sample and empirical, and the method of approximating the sample influence function through the empirical influence function. The significance of this research is that it proposes a method that reduces the approximation error, offering an effective and practical refinement of previous work that approximates the sample influence function directly by the empirical influence function with a constant correction.
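A small sketch comparing the empirical influence function (EIF) of the sample mean with a deletion-based sample influence function (SIF). The scaling (n − 1)·(T(all) − T(without i)) is one common convention, assumed here for illustration; the paper's exact definitions may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=50)
n = len(x)

eif_mean = x - x.mean()   # empirical influence of each point on the mean
sif_mean = np.array([(n - 1) * (x.mean() - np.delete(x, i).mean()) for i in range(n)])

print(np.allclose(eif_mean, sif_mean))   # for the mean the two coincide exactly
```

For the sample mean the two functions agree exactly; for statistics such as the variance they differ, which is where an approximation or calibration step becomes relevant.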

A Study on Depth Data Extraction for Object Based on Camera Calibration of Known Patterns (기지 패턴의 카메라 Calibration에 기반한 물체의 깊이 데이터 추출에 관한 연구)

  • 조현우;서경호;김태효
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2001.06a
    • /
    • pp.173-176
    • /
    • 2001
  • In this thesis, a new measurement system for depth data extraction is implemented based on camera calibration with a known pattern. The relation between 3D world coordinates and 2D image coordinates is analyzed, a new camera calibration algorithm is established from this analysis, and the internal and external parameters of the CCD camera are obtained. Assuming the measurement plane is horizontal, approximate values corresponding to the minima of the 2D plane equation and the coordinate transformation equation are obtained with the Newton-Raphson method and stored in a look-up table for real-time processing. A slit laser light is projected onto the object, and a 2D image is obtained on the x-z plane of the measurement system. A 3D shape image can be obtained as the 2D (x-z) images are acquired continuously while the object moves in the y direction. The 3D shape images are displayed on a computer monitor using OpenGL. The measurement results show that the depth data have an error of about ±1% per pixel, which appears to be due to vibration of the mechanical and optical systems. We expect that improving the system will require greater mechanical stability and a more precise optical system.
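A generic Newton-Raphson sketch of the kind referenced above: iterate x ← x − f(x)/f′(x) until convergence, then cache the result in a look-up table for real-time use. The function below is a placeholder, not the thesis's plane or coordinate-transformation equations.

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Find a root of f by Newton-Raphson iteration."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: solve x**3 - 2 = 0, then store the root keyed by a grid index
lut = {0: newton_raphson(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)}
print(lut)
```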
