• Title/Summary/Keyword: Non-Linear Optimization

A Design on Face Recognition System Based on pRBFNNs by Obtaining Real Time Image (실시간 이미지 획득을 통한 pRBFNNs 기반 얼굴인식 시스템 설계)

  • Oh, Sung-Kwun;Seok, Jin-Wook;Kim, Ki-Sang;Kim, Hyun-Ki
    • Journal of Institute of Control, Robotics and Systems / v.16 no.12 / pp.1150-1158 / 2010
  • In this study, a polynomial-based radial basis function neural network (pRBFNN) is proposed as the recognition part of an overall face recognition system consisting of a preprocessing part and a recognition part. The design methodology and procedure of the proposed pRBFNNs are presented as a solution to this high-dimensional pattern recognition problem. First, in the preprocessing part, a CCD camera acquires picture frames in real time. Histogram equalization partially compensates for image distortion caused by natural as well as artificial illumination. The AdaBoost algorithm proposed by Viola and Jones is used to separate the facial image area from the non-facial background, and PCA is then applied as the feature extraction algorithm to reduce the dimensionality of the high-dimensional facial image data. Second, the pRBFNNs identify each person's ID by recognizing his or her unique pattern. The proposed pRBFNN architecture consists of three functional modules, the condition, conclusion, and inference parts, organized as fuzzy rules in 'If-then' format. In the condition part, the input space is partitioned by Fuzzy C-Means clustering. In the conclusion part, the connection weights of the pRBFNNs are represented as one of three kinds of polynomials: constant, linear, or quadratic. The coefficients of the connection weights are identified by back-propagation using the gradient descent method, and the model output is obtained by fuzzy inference in the inference part. The essential design parameters of the network (learning rate, momentum coefficient, and fuzzification coefficient) are optimized by means of Particle Swarm Optimization. The proposed pRBFNNs are applied to a real-time face recognition system and evaluated in terms of output performance and recognition rate.
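
As a rough illustration of the preprocessing chain described in the abstract above (real-time capture, histogram equalization, AdaBoost face detection, PCA), the following sketch uses OpenCV's bundled Haar cascade as a stand-in AdaBoost detector and scikit-learn's PCA; the pRBFNN classifier itself (FCM partitioning, polynomial weights, PSO tuning) is not reproduced here, and all sizes are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): frame capture, histogram
# equalization, Haar-cascade (AdaBoost) face detection, and PCA reduction.
import cv2
import numpy as np
from sklearn.decomposition import PCA

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_frame(frame, size=(64, 64)):
    """Return flattened, equalized face crops found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)              # partially compensate uneven illumination
    faces = detector.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    crops = [cv2.resize(gray[y:y + h, x:x + w], size).ravel()
             for (x, y, w, h) in faces]
    return np.array(crops)

# Dimension reduction before the recognition stage (the pRBFNN classifier,
# FCM partitioning, and PSO tuning are not shown).  X_train would be the
# stacked face vectors collected from the CCD camera stream:
# pca = PCA(n_components=50).fit(X_train)
# features = pca.transform(X_train)
```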

Member Sizing Optimization for Seismic Design of the Inverted V-braced Steel Frames with Suspended Zipper Strut (Zipper를 가진 역V형 가새골조의 다목적 최적내진설계기법)

  • Oh, Byung-Kwan;Park, Hyo-Seon;Choi, Se-Woon
    • Journal of the Computational Structural Engineering Institute of Korea / v.29 no.6 / pp.555-562 / 2016
  • Seismic design of braced frames that simultaneously considers economic issues and structural performance is a rather complicated engineering problem, and therefore a systematic and well-established methodology is needed. This study proposes a multi-objective seismic design method for an inverted V-braced frame with suspended zipper struts that uses the non-dominated sorting genetic algorithm-II (NSGA-II). The structural weight and the maximum inter-story drift ratio are simultaneously minimized as the objective functions to optimize the cost and seismic performance of the structure. To investigate which of the strength-based and performance-based design criteria for braced frames is the critical design condition, the constraint conditions of the two design methods are considered simultaneously (i.e., constraints based on the strength and the plastic deformation of members). The linear static analysis method and the nonlinear static analysis method are adopted to check the strength-based and plastic-deformation-based design constraints, respectively. The proposed optimization method is applied to three- and six-story steel frame examples, and solutions improved with respect to both objective functions are obtained.
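
The core of NSGA-II referenced above is non-dominated sorting of candidate designs against the two objectives (structural weight and maximum inter-story drift ratio). The sketch below only extracts a Pareto front from a handful of hypothetical designs; it is not the authors' implementation, and a full NSGA-II run would add crowding distance, constraint handling, and genetic operators.

```python
# Illustrative sketch: the non-dominated sorting idea behind NSGA-II,
# applied to toy candidate member sizings (values are made up).
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F (all objectives minimized)."""
    keep = []
    for i in range(F.shape[0]):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if not dominated.any():
            keep.append(i)
    return np.array(keep)

# columns: [structural weight (kN), max inter-story drift ratio]
F = np.array([[820.0, 0.012],
              [760.0, 0.014],
              [900.0, 0.009],
              [780.0, 0.015]])
print(pareto_front(F))   # -> [0 1 2]; the last design is dominated by the second
```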

Preparation and Luminescence Optimization of CeO2:Er/Yb Phosphor Prepared by Spray Pyrolysis (분무열분해법으로 CeO2:Er/Yb 형광체 제조 및 발광특성 최적화)

  • Jung, Kyeong Youl;Park, Jea Hoon;Song, Shin Ae
    • Applied Chemistry for Engineering / v.26 no.3 / pp.319-325 / 2015
  • Submicron-sized $CeO_2:Er^{3+}/Yb^{3+}$ upconversion phosphor particles were synthesized by spray pyrolysis, and their luminescent properties were characterized while changing the concentrations of $Er^{3+}$ and $Yb^{3+}$. $CeO_2:Er^{3+}/Yb^{3+}$ showed intense green and red emission due to the $^4S_{3/2}$ or $^2H_{11/2}{\rightarrow}^4I_{15/2}$ and the $^4F_{9/2}{\rightarrow}^4I_{15/2}$ transitions of $Er^{3+}$ ions, respectively. In terms of emission intensity, the optimal concentrations of Er and Yb were 1.0% and 2.0%, respectively, and concentration quenching was found to occur via the dipole-dipole interaction. The upconversion mechanism is discussed using the dependence of the emission intensities on pumping power and the dominant depletion processes of the intermediate energy levels for the red and green emission as the $Er^{3+}$ concentration changes. The energy transfer from $Yb^{3+}$ to $Er^{3+}$ in the $CeO_2$ host was mainly involved in ground-state absorption (GSA), and the non-radiative relaxation from $^4I_{11/2}$ to $^4I_{13/2}$ of $Er^{3+}$ was accelerated by the $Yb^{3+}$ co-doping. As a result, the $Yb^{3+}$ co-doping greatly enhanced the upconversion intensity while increasing the red-to-green emission ratio. Finally, it is revealed that the upconversion emission of the $CeO_2:Er^{3+}/Yb^{3+}$ phosphor is achieved by a two-photon process in which linear decay dominates the depletion of the intermediate energy levels for the green and red emissions.
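
The pump-power dependence mentioned above is conventionally analyzed through $I \propto P^n$, where the slope of log(intensity) versus log(pump power) approximates the number of photons $n$ involved in the upconversion. A minimal sketch with made-up readings (not the paper's data):

```python
# Illustrative sketch: estimating the photon number n from the log-log
# slope of emission intensity vs. pump power, I ~ P**n.
import numpy as np

power = np.array([100.0, 200.0, 400.0, 800.0])   # pump power, mW (hypothetical)
green = np.array([1.0, 3.8, 15.1, 58.0])         # integrated green emission (a.u.)

slope, _ = np.polyfit(np.log(power), np.log(green), 1)
print(f"fitted slope ~ {slope:.2f}  (a slope near 2 indicates a two-photon process)")
```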

Why Gabor Frames? Two Fundamental Measures of Coherence and Their Role in Model Selection

  • Bajwa, Waheed U.;Calderbank, Robert;Jafarpour, Sina
    • Journal of Communications and Networks / v.12 no.4 / pp.289-307 / 2010
  • The problem of model selection arises in a number of contexts, such as subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper studies non-asymptotic model selection for the general case of arbitrary (random or deterministic) design matrices and arbitrary nonzero entries of the signal. In this regard, it generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence, termed the worst-case coherence and the average coherence, among the columns of a design matrix. It utilizes these two measures of coherence to provide an in-depth analysis of a simple, model-order agnostic one-step thresholding (OST) algorithm for model selection and proves that OST is feasible for exact as well as partial model selection as long as the design matrix obeys an easily verifiable property, termed the coherence property. One of the key insights offered by the ensuing analysis is that OST can successfully carry out model selection even when methods based on convex optimization such as the lasso fail due to rank deficiency of the submatrices of the design matrix. In addition, the paper establishes that if the design matrix has reasonably small worst-case and average coherence, then OST performs near-optimally when either (i) the energy of any nonzero entry of the signal is close to the average signal energy per nonzero entry or (ii) the signal-to-noise ratio in the measurement system is not too high. Finally, two other key contributions of the paper are that (i) it provides bounds on the average coherence of Gaussian matrices and Gabor frames, and (ii) it extends the results on model selection using OST to low-complexity, model-order agnostic recovery of sparse signals with arbitrary nonzero entries. In particular, this part of the analysis implies that an Alltop Gabor frame together with OST can successfully carry out model selection and recovery of sparse signals irrespective of the phases of the nonzero entries, even if the number of nonzero entries scales almost linearly with the number of rows of the Alltop Gabor frame.
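
A minimal sketch of the two coherence measures and of one-step thresholding as described above, for a Gaussian design matrix; the threshold used here is a simple stand-in rather than the paper's data-driven rule, and the dimensions and signal values are arbitrary.

```python
# Illustrative sketch: worst-case coherence, average coherence, and the
# one-step thresholding (OST) support estimate for a Gaussian design.
import numpy as np

def coherences(X):
    """Worst-case and average coherence of the unit-normalized columns of X."""
    Xn = X / np.linalg.norm(X, axis=0)
    G = Xn.T @ Xn - np.eye(X.shape[1])          # off-diagonal Gram entries
    worst = np.abs(G).max()
    avg = np.abs(G.sum(axis=1)).max() / (X.shape[1] - 1)
    return worst, avg

def ost_support(X, y, threshold):
    """One-step thresholding: keep columns whose correlation with y is large."""
    Xn = X / np.linalg.norm(X, axis=0)
    return np.flatnonzero(np.abs(Xn.T @ y) > threshold)

rng = np.random.default_rng(0)
n, p, k = 256, 512, 4
X = rng.standard_normal((n, p)) / np.sqrt(n)     # Gaussian design matrix
support = np.sort(rng.choice(p, k, replace=False))
beta = np.zeros(p)
beta[support] = 2.5
y = X @ beta + 0.05 * rng.standard_normal(n)

print("coherences:", coherences(X))
print("true:", support, "estimated:", ost_support(X, y, threshold=1.5))
```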

An Approach for the Antarctic Polar Front Detection and an Analysis for its Variability (남극 극 전선 탐지를 위한 접근법과 변동성에 대한 연구)

  • Park, Jinku;Kim, Hyun-cheol;Hwang, Jihyun;Bae, Dukwon;Jo, Young-Heon
    • Korean Journal of Remote Sensing / v.34 no.6_2 / pp.1179-1192 / 2018
  • In order to detect the Antarctic Polar Front (PF), one of the main fronts in the Southern Ocean, this study combines satellite-based sea surface temperature (SST) and sea surface height (SSH) observations. For accurate PF detection, we classified grid cells as front or non-front using Bayesian decision theory applied to the daily SST and SSH datasets, and then performed a spatio-temporal synthesis to remove primary noise and to supplement the geographical connectivity of the front grids. In addition, sea-ice and coastal masking were employed to remove the noise that still remained after these processes and the morphology operations. Finally, we selected only the southernmost grid cells that can be considered fronts and determined the monthly PF with a linear smoothing spline optimization method. The mean PF positions in this study are very similar to the PF positions reported in previous studies, and they appear to represent well the PF formation along the bottom topography, which is known as one of the major influences on PF maintenance. The seasonal variation in PF position is high in the Ross Sea sector (${\sim}180^{\circ}W$) and the Australian sector ($120^{\circ}E-140^{\circ}E$), and these variations are quite similar to those in previous studies. Therefore, it is expected that the PF detection approach applied in this study and the final composite can be used in related research carried out on long-term time scales.
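
A minimal sketch of the front/non-front decision and the final smoothing-spline step, assuming Gaussian class-conditional densities for an SST-gradient feature; every class parameter, prior, and grid below is a hypothetical placeholder rather than the study's fitted values, and the spatio-temporal synthesis, masking, and morphology steps are omitted.

```python
# Illustrative sketch: Bayesian two-class decision (front vs. non-front)
# on an SST-gradient feature, then a smoothing spline through the
# southernmost front latitude per longitude.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm

def front_posterior(grad, mu_f=1.5, sd_f=0.6, mu_n=0.3, sd_n=0.3, p_front=0.2):
    """P(front | gradient) under hypothetical Gaussian class-conditionals."""
    lik_f = norm.pdf(grad, mu_f, sd_f) * p_front
    lik_n = norm.pdf(grad, mu_n, sd_n) * (1.0 - p_front)
    return lik_f / (lik_f + lik_n)

rng = np.random.default_rng(1)
lats = np.linspace(-45.0, -65.0, 80)            # latitude grid, poleward with index
lons = np.linspace(0.0, 30.0, 120)
grad_field = rng.gamma(1.5, 0.3, size=(lats.size, lons.size))   # stand-in |grad SST|

is_front = front_posterior(grad_field) > 0.5    # Bayes decision rule
# southernmost front grid per longitude -> monthly PF estimate
pf_lat = np.array([lats[np.where(col)[0].max()] if col.any() else np.nan
                   for col in is_front.T])
ok = ~np.isnan(pf_lat)
pf_spline = UnivariateSpline(lons[ok], pf_lat[ok], s=len(lons))  # smoothing spline
pf_line = pf_spline(lons)                        # smoothed monthly PF position
```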

Image Data Loss Minimized Geometric Correction for Asymmetric Distortion Fish-eye Lens (비대칭 왜곡 어안렌즈를 위한 영상 손실 최소화 왜곡 보정 기법)

  • Cho, Young-Ju;Kim, Sung-Hee;Park, Ji-Young;Son, Jin-Woo;Lee, Joong-Ryoul;Kim, Myoung-Hee
    • Journal of the Korea Society for Simulation / v.19 no.1 / pp.23-31 / 2010
  • Because a fisheye lens can provide a super-wide field of view, over 180 degrees, with a minimum number of cameras, many vehicles are being equipped with such camera systems. To use the camera not only as a viewing system but also as a camera sensor, camera calibration must be performed first, and geometric correction of the radial distortion is needed to provide images for driver assistance. In this paper, we introduce a geometric correction technique that minimizes the loss of image data from a vehicle fisheye lens having a field of view over $180^{\circ}$ and an asymmetric distortion. Geometric correction is a process in which a camera model with a distortion model is established and a corrected view is generated after the camera parameters are calculated through a calibration process. First, the FOV model, which imitates an asymmetric distortion configuration, is used as the distortion model. Then, because the horizontal view of the vehicle fisheye lens is asymmetrically wide for the driver, we unify the axis ratio and estimate the parameters by applying a non-linear optimization algorithm. Finally, we create a corrected view by backward mapping and provide a function to optimize the ratio of the horizontal and vertical axes. This minimizes image data loss and improves visual perception when the input image is undistorted through a perspective projection.
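
A minimal sketch of backward mapping under the FOV distortion model referenced above, in the common form given by Devernay and Faugeras, r_d = arctan(2 r_u tan(w/2)) / w; the distortion parameter, focal length, and axis-ratio handling are hypothetical placeholders, and the calibration step that estimates them (e.g. nonlinear least squares on reprojection error) is not shown.

```python
# Illustrative sketch: create a corrected (undistorted) view by backward
# mapping each output pixel into the fisheye image via the FOV model.
import numpy as np
import cv2

def undistort_fov(img, w=1.1, f=300.0, axis_ratio=1.0):
    """Backward-map a fisheye image; w, f, axis_ratio are placeholder values."""
    h, wd = img.shape[:2]
    cx, cy = wd / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(wd), np.arange(h))
    xu = (xs - cx) / f
    yu = (ys - cy) / (f * axis_ratio)            # unify the asymmetric axis scaling
    ru = np.sqrt(xu ** 2 + yu ** 2) + 1e-12
    rd = np.arctan(2.0 * ru * np.tan(w / 2.0)) / w   # FOV model, undistorted -> distorted
    scale = rd / ru
    map_x = (xu * scale * f + cx).astype(np.float32)
    map_y = (yu * scale * f * axis_ratio + cy).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# usage (hypothetical input file):
# corrected = undistort_fov(cv2.imread("fisheye.jpg"), w=1.1)
```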

A Development of Automatic Lineament Extraction Algorithm from Landsat TM images for Geological Applications (지질학적 활용을 위한 Landsat TM 자료의 자동화된 선구조 추출 알고리즘의 개발)

  • 원중선;김상완;민경덕;이영훈
    • Korean Journal of Remote Sensing / v.14 no.2 / pp.175-195 / 1998
  • Automatic lineament extraction algorithms have been developed by various researchers for geological purposes using remotely sensed data. However, most of them are designed for a particular topographic model, for instance a rugged mountainous region or a flat basin. The most common topography in Korea is mountainous terrain adjoining alluvial plains, and consequently it is difficult to apply previous algorithms directly to this area. In this study, a new algorithm for automatic lineament extraction from remotely sensed images is developed specifically for geological applications. An algorithm named DSTA (Dynamic Segment Tracing Algorithm) is developed to produce a binary image composed of linear and non-linear components. The proposed algorithm effectively reduces the look-direction bias associated with the sun's azimuth angle and the noise in low-contrast regions by utilizing a dynamic sub-window, and it can successfully accommodate lineaments in alluvial plains as well as in mountainous regions. Two additional algorithms for estimating the individual lineament vectors, named ALEHHT (Automatic Lineament Extraction by Hierarchical Hough Transform) and ALEGHT (Automatic Lineament Extraction by Generalized Hough Transform), which perform merging operations through the Hierarchical Hough transform and the Generalized Hough transform respectively, are also developed to generate geological lineaments. The merging operation proposed in this study is based on three parameters: the angle between two lines ($\delta\beta$), the perpendicular distance ($d_{ij}$), and the distance between the midpoints of the lines ($d_n$). Test results of the developed algorithm using a Landsat TM image demonstrate that lineaments in alluvial plains as well as in rugged mountains are extracted extremely well, and even lineaments parallel to the sun's azimuth angle are detected by this approach. Further study is, however, required to optimize the quantization interval parameter ($d\rho$) in ALEGHT.
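
A minimal sketch of the three-parameter merging test named above (angle between two lines, perpendicular distance, distance between midpoints); the threshold values are arbitrary placeholders rather than the study's settings, and the Hough-transform stages that produce the segments are not shown.

```python
# Illustrative sketch: decide whether two detected line segments should be
# merged, using the angle difference, perpendicular distance, and midpoint
# distance criteria described in the abstract.
import numpy as np

def should_merge(seg1, seg2, max_dbeta=5.0, max_dij=3.0, max_dn=50.0):
    """seg = ((x1, y1), (x2, y2)) in pixel coordinates; thresholds are placeholders."""
    (p1, p2), (q1, q2) = np.asarray(seg1, float), np.asarray(seg2, float)
    a1 = np.degrees(np.arctan2(p2[1] - p1[1], p2[0] - p1[0]))
    a2 = np.degrees(np.arctan2(q2[1] - q1[1], q2[0] - q1[0]))
    dbeta = abs((a1 - a2 + 90.0) % 180.0 - 90.0)     # angle between the lines, 0..90 deg
    m1, m2 = (p1 + p2) / 2.0, (q1 + q2) / 2.0
    dn = np.linalg.norm(m1 - m2)                     # distance between midpoints
    u = (p2 - p1) / np.linalg.norm(p2 - p1)
    v = m2 - p1
    dij = abs(u[0] * v[1] - u[1] * v[0])             # perpendicular distance
    return dbeta <= max_dbeta and dij <= max_dij and dn <= max_dn

print(should_merge(((0, 0), (30, 1)), ((35, 1), (70, 2))))   # -> True (nearly collinear, close)
```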

Preliminary Study on the Development of a Platform for the Optimization of Beach Stabilization Measures Against Beach Erosion III - Centering on the Effects of Random Waves Occurring During the Unit Observation Period, and Infra-Gravity Waves of Bound Mode, and Boundary Layer Streaming on the Sediment Transport (해역별 최적 해빈 안정화 공법 선정 Platform 개발을 위한 기초연구 III - 단위 관측 기간에 발생하는 불규칙 파랑과 구속모드의 외중력파, 경계층 Streaming이 횡단표사에 미치는 영향을 중심으로)

  • Chang, Pyong Sang;Cho, Yong Jun
    • Journal of Korean Society of Coastal and Ocean Engineers / v.31 no.6 / pp.434-449 / 2019
  • In this study, we develop a new cross-shore sediment module which takes into account the effects of infra-gravity waves of bound mode and boundary-layer streaming on sediment transport, in addition to the well-known asymmetry and undertow. In doing so, the effect of the individual random waves occurring during the unit observation period of 1 hr on sediment transport is also fully taken into account. To demonstrate how the individual random waves affect sediment transport, we numerically simulate the non-linear shoaling process of random waves over a beach of uniform slope. Numerical results show that with the frequency-consistent Boussinesq equations, whose application has recently been extended to the surf zone, we can simulate accurately enough the saw-tooth profile observed without exception over the surf zone, the infra-gravity waves of bound mode, and the boundary-layer streaming. It is also shown that when the yearly highest random waves are modeled by equivalent nonlinear uniform waves, the maximum cross-shore transport rate exceeds the rate obtained when the randomness is fully taken into account by as much as a factor of three. In addition, in order to optimize the free parameter K involved in the long-shore sediment module, we carry out a numerical simulation to trace the yearly shoreline change of Mang-Bang beach from 2017.4.26 to 2018.4.20, and optimize K by comparing the traced shoreline change with the measured one. Numerical results show that the optimized K for Mang-Bang beach is 0.17. With K = 0.17, the numerical simulation successfully duplicates the yearly grand circulation process, comprising severe erosion by the consecutively occurring yearly highest waves at the end of October and gradual recovery over the winter and spring by swell, including an advance of the shoreline by 18 m at the northern and southern ends of Mang-Bang beach and a retreat by 2.4 m at its middle.
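
A minimal sketch of the K-calibration step only: the free parameter of the long-shore transport module is tuned by minimizing the misfit between simulated and surveyed shoreline change. The model call and all numbers below are hypothetical stand-ins for the full Boussinesq/shoreline simulation, not the study's model or data.

```python
# Illustrative sketch: calibrate the long-shore transport coefficient K by
# minimizing the RMSE between simulated and measured shoreline change.
import numpy as np
from scipy.optimize import minimize_scalar

def run_shoreline_model(K, forcing):
    """Placeholder: yearly shoreline change (m) at survey transects for a given K."""
    return K * forcing                      # assume the response scales with K here

forcing = np.array([120.0, 95.0, -20.0, 60.0, 110.0])    # hypothetical unit-K response
measured = np.array([18.0, 15.1, -2.4, 9.8, 17.5])       # hypothetical surveyed change (m)

rmse = lambda K: np.sqrt(np.mean((run_shoreline_model(K, forcing) - measured) ** 2))
res = minimize_scalar(rmse, bounds=(0.01, 1.0), method="bounded")
print(f"calibrated K ~ {res.x:.2f}")
```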

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and exhibit a lot of noise. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with an MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1487 daily observations; 1187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models are estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH or GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting; the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but our simulation results are still meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. The MLE-based symmetric S-GARCH shows a +150.2% return and the SVR-based symmetric S-GARCH shows a +526.4% return. The MLE-based asymmetric E-GARCH shows a -72% return and the SVR-based asymmetric E-GARCH shows a +245.6% return. The MLE-based asymmetric GJR-GARCH shows a -98.7% return and the SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4% and that of the MLE-based IVTS is +150.2%. The SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. The IVTS trading performance is also unrealistic in that we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
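
A minimal sketch of the SVR-based volatility estimation idea and the IVTS entry rule quoted above, assuming an RBF kernel and a crude rolling-variance proxy in place of the paper's GARCH targets; the returns and all hyperparameters below are simulated placeholders rather than KOSPI 200 data or the paper's settings.

```python
# Illustrative sketch (not the paper's exact scheme): fit an SVR to a
# GARCH(1,1)-style recursion, sigma2_t ~ f(r2_{t-1}, sigma2_{t-1}), then
# apply the IVTS entry rule described in the abstract.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
r = rng.standard_normal(1487) * 0.01                       # placeholder daily returns
r2 = r ** 2
sigma2 = np.convolve(r2, np.ones(20) / 20, mode="same")    # crude volatility proxy

X = np.column_stack([r2[:-1], sigma2[:-1]])    # lagged squared return, lagged volatility
y = sigma2[1:]                                 # next-day volatility target
split = 1187                                   # 1187 training days, rest for testing

model = SVR(kernel="rbf", C=10.0, epsilon=1e-6).fit(X[:split], y[:split])
forecast = model.predict(X[split:])

# IVTS entry rule: buy volatility if tomorrow's forecast rises, sell if it
# falls, otherwise hold the current position.
signal = np.sign(forecast - sigma2[split:-1])  # +1 buy vol, -1 sell vol, 0 hold
```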