• Title/Summary/Keyword: Nonlinear feature


Cable damage identification of cable-stayed bridge using multi-layer perceptron and graph neural network

  • Pham, Van-Thanh;Jang, Yun;Park, Jong-Woong;Kim, Dong-Joo;Kim, Seung-Eock
    • Steel and Composite Structures / v.44 no.2 / pp.241-254 / 2022
  • The cables in a cable-stayed bridge are critical load-carrying parts. Potential damage to cables should be identified early to prevent disasters. In this study, an efficient deep learning model combining a multi-layer perceptron (MLP) and a graph neural network (GNN) is proposed for cable damage identification. Datasets are first generated using the practical advanced analysis program (PAAP), a robust program for modeling and analyzing bridge structures at low computational cost. The combined MLP-GNN model can capture complex nonlinear correlations between the vibration characteristics in the input data and the cable system damage in the output data. Multiple hidden layers with an activation function are used in the MLP to expand the original input vector of the limited measurement data into a complete output vector that preserves sufficient information for constructing the graph in the GNN. Using a gated recurrent unit and a set2set model, the GNN maps the formed graph features to the cable damage through several update steps and reports the damage through both classification and regression outputs. The model is fine-tuned on the original input data with Adam optimization of the final objective function. A case study of an actual cable-stayed bridge was considered to evaluate the model performance. The results demonstrate that the proposed model achieves high accuracy (over 90%) in classification and satisfactory correlation coefficients (over 0.98) in regression, and is a robust approach for obtaining effective identification results from a limited quantity of input data.
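For illustration, here is a minimal sketch (PyTorch assumed) of the two-stage idea described in this abstract: an MLP expands a limited measurement vector into per-node features, and a GRU-based update loop stands in for the paper's GNN message passing, with a plain mean readout in place of set2set. All layer sizes and the number of update steps are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MLPExpander(nn.Module):
    """Expands a short measurement vector into per-node features."""
    def __init__(self, n_inputs=8, n_nodes=24, node_dim=16):
        super().__init__()
        self.n_nodes, self.node_dim = n_nodes, node_dim
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 64), nn.ReLU(),
            nn.Linear(64, n_nodes * node_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x).view(-1, self.n_nodes, self.node_dim)

class GRUUpdater(nn.Module):
    """Runs a few GRU update steps per node, then pools (mean readout)."""
    def __init__(self, node_dim=16, steps=3):
        super().__init__()
        self.cell = nn.GRUCell(node_dim, node_dim)
        self.steps = steps

    def forward(self, h):                       # h: (batch, nodes, dim)
        b, n, d = h.shape
        h = h.reshape(b * n, d)
        for _ in range(self.steps):             # message == current state here
            h = self.cell(h, h)
        return h.view(b, n, d).mean(dim=1)      # crude stand-in for set2set

x = torch.randn(4, 8)                           # 4 samples, 8 measurements each
out = GRUUpdater()(MLPExpander()(x))
print(out.shape)                                # torch.Size([4, 16])
```

In the paper, the pooled feature would feed separate classification and regression heads; here the readout alone shows the data flow.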

Voice-Based Gender Identification Employing Support Vector Machines (음성신호 기반의 성별인식을 위한 Support Vector Machines의 적용)

  • Lee, Kye-Hwan;Kang, Sang-Ick;Kim, Deok-Hwan;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.26 no.2 / pp.75-79 / 2007
  • We propose an effective voice-based gender identification method using a support vector machine (SVM). The SVM is a binary classification algorithm that separates two groups by finding a nonlinear decision boundary in a feature space and is known to yield high classification performance. In the present work, we compare the identification performance of the SVM with that of a Gaussian mixture model (GMM) using mel-frequency cepstral coefficients (MFCC). A feature fusion scheme combining the MFCC with pitch is proposed with the aim of improving gender identification performance with the SVM. Experimental results indicate that gender identification with the SVM is significantly better than with the GMM, and that performance improves substantially when the proposed feature fusion technique is applied.
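As a toy illustration of the fusion scheme, the sketch below (scikit-learn assumed) concatenates MFCC-like features with one pitch value per frame and trains an RBF-kernel SVM. The data is synthetic and the dimensions (13 MFCCs, one pitch feature) are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400
mfcc = rng.normal(size=(n, 13))                   # 13 MFCC-like values per frame
# Toy pitch feature: F0 clusters near 120 Hz vs 210 Hz for the two classes.
pitch = rng.normal(loc=np.repeat([120.0, 210.0], n // 2), size=n)[:, None]
X = np.hstack([mfcc, pitch])                      # fused feature vector
y = np.repeat([0, 1], n // 2)                     # toy gender labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))
```

Scaling before the SVM matters here because the pitch feature lives on a much larger numeric range than the cepstral coefficients.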

A Case Study on the Technology and Innovation Management Process in Smartphone Industry (스마트폰 산업에서의 기술혁신관리 프로세스 사례 연구)

  • Park, Sehyoung;Kim, Byung-Keun
    • Journal of Korea Technology Innovation Society / v.21 no.1 / pp.92-129 / 2018
  • In general, a technology and innovation strategy is established first, and technology and innovation activities are then conducted accordingly. The literature on the technology management process points out that these activities follow some sequence, whether linear or nonlinear, but it is also argued that a given activity can emerge or disappear at a particular time. This paper analyzes how technology and innovation activities are performed and how they work together with technology and innovation strategies in a specific context: the handset market's shift from feature phones to smartphones over the last decade. Empirical results show that the starting point of technology and innovation activity changes according to the strategy, and that the activities exhibit a multi-layer architecture. It is confirmed that technology and innovation activities follow a specific pattern; however, if the strategy changes in response to the external environment, some activities can be skipped because their priorities change. If a firm pursuing an 'innovator' strategy fails to adapt to a fast-changing external environment and carries out inadequate technology and innovation activities, it must change its technology and innovation strategy, as this has a huge impact on the firm's survival.

Analysis of 3D Accuracy According to Determination of Calibration Initial Value in Close-Range Digital Photogrammetry Using VLBI Antenna and Mobile Phone Camera (VLBI 안테나와 모바일폰 카메라를 활용한 근접수치사진측량의 캘리브레이션 초기값 결정에 따른 3차원 정확도 분석)

  • Kim, Hyuk Gi;Yun, Hong Sik;Cho, Jae Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.1 / pp.31-43 / 2015
  • This study conducted camera calibration on the VLBI antenna at the Space Geodetic Observation Center in Sejong City using a low-cost digital camera embedded in a mobile phone, in order to determine three-dimensional position coordinates of the antenna from stereo images. Initial values for the camera calibration were obtained with the Direct Linear Transformation (DLT) algorithm and with the commercial digital photogrammetry system PhotoModeler Scanner® ver. 6.0, respectively. The accuracy of each set of initial values was assessed by comparing the final calibration results obtained from a bundle adjustment with the nonlinear collinearity condition equations. Although the two methods produced significantly different initial values, the final calibration yielded consistent results regardless of which method supplied them. Furthermore, three-dimensional coordinates of feature points on the VLBI antenna were calculated from each calibration and compared with reference coordinates obtained from a total station. Both methods produced the same standard deviations of X = 0.004 ± 0.010 m, Y = 0.001 ± 0.015 m, and Z = 0.009 ± 0.017 m, demonstrating centimeter-level accuracy. These results suggest that a mobile phone camera opens the way to a variety of image processing studies, such as 3D reconstruction from captured images.
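The DLT step that supplies the initial values can be illustrated with a short sketch: each 3D-2D point correspondence contributes two rows to a homogeneous linear system A p = 0, whose least-squares solution (the singular vector for the smallest singular value) is the 3x4 projection matrix up to scale. The camera matrix and points below are synthetic assumptions, not the paper's survey data.

```python
import numpy as np

rng = np.random.default_rng(1)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])      # toy intrinsics
P_true = K @ np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])       # camera 5 m back
Xw = np.hstack([rng.uniform(-1, 1, (10, 3)), np.ones((10, 1))])  # homogeneous 3D pts
uvw = (P_true @ Xw.T).T
uv = uvw[:, :2] / uvw[:, 2:]                                     # pixel coordinates

rows = []
for (X, Y, Z, _), (u, v) in zip(Xw, uv):
    rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
    rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
p = np.linalg.svd(np.array(rows))[2][-1]         # null-space singular vector
P_est = p.reshape(3, 4)
P_est *= P_true[0, 0] / P_est[0, 0]              # fix the arbitrary scale
print(np.allclose(P_est, P_true, atol=1e-4))     # True up to numerical precision
```

With noisy real correspondences the recovered matrix is only an initial estimate, which is why the paper refines it with a nonlinear bundle adjustment.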

FEM-based Seismic Reliability Analysis of Real Structural Systems (실제 구조계의 유한요소법에 기초한 지진 신뢰성해석)

  • Huh, Jung-Won;Haldar, Achintya
    • Journal of the Computational Structural Engineering Institute of Korea / v.19 no.2 s.72 / pp.171-185 / 2006
  • A sophisticated reliability analysis method is proposed to evaluate the reliability of real, nonlinear, complicated dynamic structural systems excited by short-duration dynamic loadings such as earthquake motions, by intelligently integrating the response surface method, the finite element method, the first-order reliability method, and an iterative linear interpolation scheme. The method explicitly considers all major sources of nonlinearity and uncertainty in the load- and resistance-related random variables. The unique feature of the technique is that the seismic loading is applied in the time domain, providing an alternative to the classical random vibration approach. The four-parameter Richard model is used to represent the flexibility of connections of real steel frames, and uncertainties in the Richard parameters are also incorporated in the algorithm. The laterally flexible steel frame is then reinforced with reinforced concrete shear walls, and the stiffness degradation of the shear walls after cracking is also considered. The applicability of the method for estimating the reliability of real structures is demonstrated through three examples: a laterally flexible steel frame with fully restrained connections, the same frame with partially restrained connections of different rigidities, and a steel frame reinforced with concrete shear walls.
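The first-order reliability method component can be illustrated with a compact sketch: the Hasofer-Lind-Rackwitz-Fiessler iteration finds the most probable failure point of a limit state g(u) = 0 in standard normal space, and the reliability index beta is its distance from the origin. The quadratic limit state below is an illustrative assumption, not the paper's FEM-based response surface.

```python
import numpy as np

def g(u):                                  # toy limit state (failure when g <= 0)
    return 3.0 - u[0] - 0.5 * u[1] ** 2

def grad_g(u):
    return np.array([-1.0, -u[1]])

u = np.zeros(2)                            # start at the mean point
for _ in range(50):                        # HL-RF fixed-point iteration
    gr = grad_g(u)
    u_next = (gr @ u - g(u)) * gr / (gr @ gr)
    if np.linalg.norm(u_next - u) < 1e-10:
        u = u_next
        break
    u = u_next

beta = np.linalg.norm(u)                   # reliability index
print("beta =", beta)                      # failure probability ~ Phi(-beta)
```

In the paper's framework, g is not known in closed form; the response surface fitted to finite element runs plays the role of g, and the same iteration is applied to it.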

PVC Classification based on QRS Pattern using QS Interval and R Wave Amplitude (QRS 패턴에 의한 QS 간격과 R파의 진폭을 이용한 조기심실수축 분류)

  • Cho, Ik-Sung;Kwon, Hyeog-Soong
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.4 / pp.825-832 / 2014
  • Previous works on arrhythmia detection have mostly used nonlinear methods such as artificial neural networks, fuzzy theory, and support vector machines to increase classification accuracy. Most of these methods require accurate detection of the P-QRS-T points, at high computational cost and long processing time, while methods with low complexity generally suffer from low sensitivity. It is also difficult to detect PVCs accurately because QRS morphology varies between individuals. Therefore, an efficient algorithm is needed that classifies PVCs from the QRS pattern in real time and reduces computational cost by extracting minimal features. In this paper, we propose PVC classification based on the QRS pattern using the QS interval and R wave amplitude. To this end, we detected the R wave, RR interval, and QRS pattern from a noise-free ECG signal obtained through preprocessing, and then classified PVCs in real time from the QS interval and R wave amplitude. The performance of R wave detection and PVC classification was evaluated on 9 records of the MIT-BIH arrhythmia database, each containing more than 30 PVCs. The achieved scores indicate an average of 99.02% for R wave detection and 93.72% for PVC classification.
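A minimal sketch of the rule-based idea: flag a beat as a PVC when its QS interval is abnormally wide and its R amplitude deviates from a robust reference. The threshold values and the deviation rule here are illustrative assumptions, not the authors' tuned criteria.

```python
import numpy as np

def classify_beats(qs_ms, r_amp, qs_limit=120.0, amp_ratio=0.7):
    """Return a boolean PVC flag per beat (illustrative thresholds)."""
    qs_ms, r_amp = np.asarray(qs_ms), np.asarray(r_amp)
    r_ref = np.median(r_amp)                      # robust normal-beat reference
    wide = qs_ms > qs_limit                       # widened QRS complex
    odd_amp = np.abs(r_amp - r_ref) > (1 - amp_ratio) * abs(r_ref)
    return wide & odd_amp

qs = [ 85,  90, 150,  88, 160]                    # per-beat QS interval (ms)
ra = [1.0, 1.1, 1.9, 0.95, 0.2]                   # per-beat R amplitude (mV)
print(classify_beats(qs, ra))                     # [False False  True False  True]
```

Such threshold rules are cheap enough to run per beat in real time, which is the paper's motivation for avoiding full P-QRS-T delineation.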

Fuzzy Control for the Obstacle Avoidance of Remote Control Mobile Robot (원격제어 이동로봇의 장애물 회피를 위한 퍼지 제어)

  • Yeo, Hee-Joo;Sung, Mun-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.1 / pp.47-54 / 2011
  • A remote-control mobile robot accomplishes a task according to commands that a user issues with a joystick over a separate communication system. Because this type of robot relies on visual information to supply the user with rich feedback, the user can inspect the transmitted images and issue commands accordingly. Its weak point, however, is the possibility of colliding with an obstacle the user cannot see, owing to communication delay and to dead zones in the visual information. To solve this problem, this paper applies a fuzzy control system to the robot so that it avoids collisions with obstacles without requiring an immediate command from the user. The fuzzy control system performs better than existing control methods under changes in noise and parameters, and it is more efficient because it readily handles the analytical complexity that arises from the nonlinear characteristics of the mobile robot system. Experiments on how the fuzzy-controlled mobile robot avoids an obstacle, tracks a path, and avoids an obstacle on that path demonstrate the performance of the fuzzy control system and confirm its evaluation and applicability.
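A minimal sketch of such a fuzzy controller: ramp membership functions grade the obstacle's distance and bearing, three hand-written rules fire, and a weighted average defuzzifies them into a steering command. The membership ranges and rule consequents are illustrative assumptions, not the authors' rule base.

```python
import numpy as np

def ramp(x, a, b):
    """Linear membership: 0 at a, 1 at b (falling if a > b)."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def steer(dist_m, bearing_deg):
    """Fuzzy steering command in degrees (+ = turn right)."""
    near  = ramp(dist_m, 1.0, 0.0)         # obstacle is close
    far   = ramp(dist_m, 0.5, 2.0)         # obstacle is far away
    left  = ramp(bearing_deg, 0.0, -45.0)  # obstacle toward the left
    right = ramp(bearing_deg, 0.0, 45.0)   # obstacle toward the right
    # Rule firing strengths (min as fuzzy AND) and crisp consequents:
    w   = [min(near, left), min(near, right), far]
    out = [+40.0,           -40.0,            0.0]  # steer away or go straight
    return float(np.dot(w, out) / (sum(w) + 1e-9))  # weighted-average defuzzify

print(steer(0.4, -30.0))   # close obstacle on the left -> ~ +40 (turn right)
```

Because every step is a small table lookup and weighted sum, the controller reacts within one control cycle even when the remote user's commands are delayed.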

Evaluation of Classification Performance of Inception V3 Algorithm for Chest X-ray Images of Patients with Cardiomegaly (심장비대증 환자의 흉부 X선 영상에 대한 Inception V3 알고리즘의 분류 성능평가)

  • Jeong, Woo-Yeon;Kim, Jung-Hun;Park, Ji-Eun;Kim, Min-Jeong;Lee, Jong-Min
    • Journal of the Korean Society of Radiology / v.15 no.4 / pp.455-461 / 2021
  • Cardiomegaly is one of the most common diseases seen on chest X-rays, but if it is not detected early it can cause serious complications. Accordingly, with the development of various fields of science and technology, much recent research has applied deep learning algorithms to medical image analysis. In this paper, we evaluate whether the Inception V3 deep learning model is useful for classifying cardiomegaly from chest X-ray images. A total of 1,026 chest X-ray images of patients diagnosed with normal hearts and patients diagnosed with cardiomegaly at Kyungpook National University Hospital were used. In the experiment, the classification accuracy and loss of the Inception V3 model for the presence or absence of cardiomegaly were 96.0% and 0.22%, respectively. These results indicate that Inception V3 is an excellent deep learning model for feature extraction and classification of chest image data. The Inception V3 model appears useful for classifying chest diseases, and if similarly strong results are obtained in studies using a greater variety of medical image data, it could be of great help to physicians' diagnoses in the future.
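A minimal sketch (TensorFlow/Keras assumed) of the classification setup: Inception V3 as a convolutional backbone with a single sigmoid output for cardiomegaly vs. normal. weights=None keeps the sketch self-contained; in practice ImageNet pre-training and the hospital's X-ray data would be used, and the dropout rate is an illustrative assumption.

```python
import tensorflow as tf

# Inception V3 backbone with global average pooling instead of the top classifier.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights=None,
    input_shape=(299, 299, 3), pooling="avg")

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cardiomegaly probability
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training would then call model.fit on labeled X-ray batches resized to 299x299, the input resolution Inception V3 was designed for.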

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults have a ripple effect on the local and national economy, in addition to their impact on stakeholders such as the managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large 'chaebol' corporations went bankrupt. Even afterwards, analysis of past corporate defaults focused on specific variables, and when the government restructured firms immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid sudden, total collapses like the Lehman Brothers case of the global financial crisis. The key variables behind corporate defaults vary over time: Deakin's (1972) study shows that the major factors affecting corporate failure changed from those identified in Beaver's (1967, 1968) and Altman's (1968) analyses, and Grice (2001) likewise found shifts in the importance of predictive variables across Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. To construct consistent prediction models, the time-dependent bias must therefore be compensated by a time series analysis algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided into training, validation, and test data of 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent through time, we first train a deep learning time series model on pre-crisis data (2000-2006). Parameter tuning of the existing models and of the deep learning time series algorithm is conducted on validation data that includes the financial crisis period (2007-2008), yielding a model that shows patterns similar to the training results and excellent predictive power. Each bankruptcy prediction model is then rebuilt on the combined training and validation data (2000-2008) with the optimal parameters from validation, and the models trained over these nine years are finally evaluated and compared on the test data (2009). This demonstrates the usefulness of a corporate default prediction model based on a deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust default prediction on all three bundles of variables. The definition of bankruptcy follows Lee (2015), and the independent variables include financial information such as the financial ratios used in previous studies; multiple discriminant analysis, the logit model, and Lasso regression are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data suffers from nonlinear variables, multicollinearity among variables, and lack of data; the logit model handles the nonlinearity, the Lasso regression model addresses the multicollinearity, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, ultimately, toward intertwined AI applications. Although the study of corporate default prediction with time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for default prediction modeling and more effective in predictive power. Amid the Fourth Industrial Revolution, the current government and governments overseas are working to integrate such systems into everyday life, yet deep learning time series research for the financial industry remains insufficient. As an initial study of deep learning time series analysis of corporate defaults, we hope this work serves as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
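A minimal sketch (Keras assumed) of the deep learning time series setup described here: an LSTM reads a 7-year sequence of financial ratios per firm and outputs a default probability. The data is synthetic, and the number of ratios, layer width, and training settings are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

n_firms, years, n_ratios = 500, 7, 12
rng = np.random.default_rng(0)
X = rng.normal(size=(n_firms, years, n_ratios))   # (firm, year, ratio)
y = rng.integers(0, 2, size=n_firms)              # 1 = defaulted (toy labels)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(years, n_ratios)),      # one 7-year ratio sequence
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, y, validation_split=0.2, epochs=3, verbose=0)
```

The paper's chronological split (train on 2000-2006, validate on 2007-2008, test on 2009) would replace the random validation_split above, since the whole point is robustness across the crisis period.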

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.96-110 / 2017
  • Most vehicle detection studies using a conventional or wide-angle lens leave a blind spot when detecting to the rear, and the image is vulnerable to noise and diverse external environments. In this paper, we propose a method for detection in harsh external environments with noise, blind spots, and other challenges. First, a fish-eye lens is used to minimize blind spots compared with a wide-angle lens. Because nonlinear radial distortion grows as the lens angle increases, calibration was applied after initializing and optimizing the distortion coefficients to ensure accuracy. In addition, alongside calibration, the original image was analyzed to remove fog and to correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Because fog removal generally requires considerable computation time, the major fog removal algorithm, Dark Channel Prior, was used to keep the cost down. Gamma correction was used to calibrate brightness, with a brightness and contrast evaluation conducted on the image to determine the gamma value needed for correction; only part of the image, rather than the whole, was evaluated to reduce calculation time. Once the brightness and contrast values were calculated, they were used to decide the gamma value and correct the entire image. Brightness correction and fog removal were processed in parallel, and the results were registered into a single image to minimize overall calculation time. The HOG feature extraction method was then used to detect vehicles in the corrected image. As a result, vehicle detection with the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate over the existing vehicle detection method.
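Two of the image correction steps lend themselves to a short sketch (OpenCV assumed): the dark channel, the core quantity in Dark Channel Prior dehazing, computed as a channel-wise minimum followed by a local minimum filter, and gamma correction applied through a 256-entry lookup table. The patch size and gamma value are illustrative assumptions, and the input image is synthetic.

```python
import cv2
import numpy as np

img = np.random.default_rng(0).uniform(0, 255, (120, 160, 3)).astype(np.uint8)

# Dark channel: per-pixel minimum over channels, then a local minimum filter.
patch = 15
min_rgb = img.min(axis=2)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
dark_channel = cv2.erode(min_rgb, kernel)          # local min over the patch

# Gamma correction via a 256-entry lookup table.
gamma = 0.6                                        # < 1 brightens dark frames
lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
corrected = cv2.LUT(img, lut)

print(dark_channel.shape, corrected.dtype)
```

In full Dark Channel Prior dehazing, the dark channel is then used to estimate atmospheric light and the transmission map; the lookup-table trick is what keeps per-frame gamma correction cheap enough for the paper's 0.064 s/frame budget.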