• Title/Summary/Keyword: Performance Optimization Techniques (성능최적화 기법)


Intelligent interface using hand gestures recognition based on artificial intelligence (인공지능 기반 손 체스처 인식 정보를 활용한 지능형 인터페이스)

  • Hangjun Cho;Junwoo Yoo;Eun Soo Kim;Young Jae Lee
    • Journal of Platform Technology
    • /
    • v.11 no.1
    • /
    • pp.38-51
    • /
    • 2023
  • We propose an intelligent interface algorithm that uses hand gesture recognition information based on artificial intelligence. The method functions as an interface that recognizes various motions quickly and intelligently by using MediaPipe together with artificial intelligence techniques such as KNN, LSTM, and CNN to track and recognize user hand gestures. To evaluate the performance of the proposed algorithm, it was applied to a self-made 2D top-view racing game and to robot control. In the game, the algorithm controlled the various movements of the virtual object in detail and robustly. When applied to robot control in the real world, it was able to control movement, stop, left turn, and right turn. In addition, by controlling the main character of the game and the real-world robot at the same time, the optimized motions were implemented as an intelligent interface for controlling a space in which the virtual and real worlds coexist. The proposed algorithm enables sophisticated control that is natural and intuitive, using recognition of the body and of fine finger movements, and it has the advantage that users can become skilled in a short period of time, so it can serve as basic data for developing intelligent user interfaces.

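As a hedged illustration of the kind of pipeline the abstract describes (MediaPipe hand-landmark tracking feeding a lightweight classifier such as KNN), the sketch below extracts hand landmarks with MediaPipe Hands and classifies them with a KNN model. The feature files, the choice of k, and the gesture-to-command mapping are assumptions for illustration, not the paper's actual configuration.

```python
import cv2
import numpy as np
import mediapipe as mp
from sklearn.neighbors import KNeighborsClassifier

mp_hands = mp.solutions.hands

def extract_landmarks(frame, hands):
    """Return a flat (63,) array of hand-landmark coordinates, or None if no hand."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()

# Hypothetical training data: rows of 63 landmark values plus a gesture label.
X_train = np.load("gesture_features.npy")   # assumed file, not from the paper
y_train = np.load("gesture_labels.npy")     # assumed file, not from the paper
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        feat = extract_landmarks(frame, hands)
        if feat is not None:
            gesture = knn.predict(feat.reshape(1, -1))[0]
            # Map the predicted gesture to a game or robot command here.
            print("gesture:", gesture)
cap.release()
```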

Sources separation of passive sonar array signal using recurrent neural network-based deep neural network with 3-D tensor (3-D 텐서와 recurrent neural network기반 심층신경망을 활용한 수동소나 다중 채널 신호분리 기술 개발)

  • Sangheon Lee;Dongku Jung;Jaesok Yu
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.4
    • /
    • pp.357-363
    • /
    • 2023
  • In underwater signal processing, separating individual signals from mixed signals has long been a challenge due to low signal quality. The common method using Short-time Fourier transform for spectrogram analysis has faced criticism for its complex parameter optimization and loss of phase data. We propose a Triple-path Recurrent Neural Network, based on the Dual-path Recurrent Neural Network's success in long time series signal processing, to handle three-dimensional tensors from multi-channel sensor input signals. By dividing input signals into short chunks and creating a 3D tensor, the method accounts for relationships within and between chunks and channels, enabling local and global feature learning. The proposed technique demonstrates improved Root Mean Square Error and Scale Invariant Signal to Noise Ratio compared to the existing method.
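
The abstract's core preprocessing step, dividing multi-channel input signals into short chunks to build a 3-D tensor for intra-chunk (local) and inter-chunk (global) processing, can be sketched as follows. The chunk length, hop size, and channel count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def to_3d_tensor(signal, chunk_len=250, hop=125):
    """Split a (channels, samples) array into a (channels, n_chunks, chunk_len) tensor.

    Overlapping chunks let a dual/triple-path RNN alternate between
    intra-chunk (local) and inter-chunk (global) processing.
    """
    n_ch, n_samp = signal.shape
    n_chunks = 1 + (n_samp - chunk_len) // hop
    tensor = np.stack(
        [signal[:, i * hop : i * hop + chunk_len] for i in range(n_chunks)],
        axis=1,
    )
    return tensor  # shape: (channels, chunks, chunk_len)

# Example: a synthetic 4-channel segment, 32000 samples per channel.
x = np.random.randn(4, 32000)
print(to_3d_tensor(x).shape)   # (4, 255, 250)
```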

Systematic Design Approach Based on Cavity-Mode Resonance Analysis for Radiated Susceptibility of Cables in Air Vehicles (캐비티 공진 해석 기반 비행체 내부배선 복사내성 대책 설계 방안)

  • Minseong Kang;Yangwon Kim;Donggyu Roh;Myunghoi Kim
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.5
    • /
    • pp.587-593
    • /
    • 2023
  • In this paper, we propose a systematic design approach based on cavity-mode resonance analysis to improve the radiated susceptibility of cables in air vehicles. As the number of electronic devices installed in air vehicles increases substantially, enhancing the radiated susceptibility of internal cables becomes more challenging and more important. The proposed design approach provides an efficient way to avoid and suppress cavity-mode resonances, using analytical methods to estimate the resonance frequencies and the current ratio induced by them. Simulation results demonstrate that the proposed method offers a design solution for improving radiated susceptibility and reduces computation time by up to 99.6% compared to the previous design method.
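
For context on the analytical estimation of cavity-mode resonance frequencies mentioned above, the sketch below evaluates the standard formula for an ideal air-filled rectangular cavity, f_mnp = (c0/2)·sqrt((m/a)² + (n/b)² + (p/d)²). The cavity dimensions and mode range are illustrative; the paper's analytical model of an air-vehicle enclosure may differ.

```python
import itertools
import math

C0 = 299_792_458.0  # speed of light in free space [m/s]

def cavity_mode_freqs(a, b, d, max_index=3):
    """Resonance frequencies [Hz] of an ideal air-filled rectangular cavity.

    f_mnp = (c0 / 2) * sqrt((m/a)^2 + (n/b)^2 + (p/d)^2);
    modes with two or more zero indices do not exist and are skipped.
    """
    freqs = []
    for m, n, p in itertools.product(range(max_index + 1), repeat=3):
        if (m, n, p).count(0) >= 2:
            continue
        f = 0.5 * C0 * math.sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2)
        freqs.append(((m, n, p), f))
    return sorted(freqs, key=lambda t: t[1])

# Example: a 0.6 m x 0.4 m x 0.3 m equipment bay (illustrative dimensions only).
for mode, f in cavity_mode_freqs(0.6, 0.4, 0.3)[:5]:
    print(mode, round(f / 1e6, 1), "MHz")
```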

Study of Load Balancing Technique Based on Step-By-Step Weight Considering Server Status in SDN Environment (SDN 환경에서 서버 상태를 고려한 단계적 가중치 기반의 부하 분산 기법 연구)

  • Jae-Young Lee;Tae-Wook Kwon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1087-1094
    • /
    • 2023
  • Due to the development of technologies such as big data, cloud, IoT, and AI, high data throughput is required, and the importance of network flexibility and scalability is increasing. However, existing network systems are dependent on specific vendors and equipment and thus have limitations in meeting these needs. Accordingly, SDN technology, which can configure a software-centered flexible network, is attracting attention. In particular, an SDN-based load balancing method can efficiently process massive traffic and optimize network performance. Existing load balancing studies in SDN environments have limitations in that unnecessary traffic occurs between servers and controllers, or load balancing is performed only after a server reaches an overloaded state. To solve this problem, this paper proposes a method that minimizes unnecessary traffic and performs appropriate load balancing before servers become overloaded by assigning weights to servers in stages according to their load.
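
A minimal sketch of the stepwise-weight idea described above: each server's reported load is mapped to a discrete weight band, and new flows are distributed in proportion to those weights so that servers stop receiving traffic before they reach an overloaded state. The load bands, weights, and selection rule are assumptions for illustration, not the paper's actual parameters.

```python
import random

# Illustrative load bands and weights; the paper's thresholds/weights may differ.
LOAD_BANDS = [          # (upper load bound in %, relative weight)
    (30, 4),            # lightly loaded servers get the most new flows
    (60, 2),
    (85, 1),
    (100, 0),           # near-overloaded servers receive no new flows
]

def step_weight(load_pct):
    """Map a server's current load to a stepwise weight."""
    for bound, weight in LOAD_BANDS:
        if load_pct <= bound:
            return weight
    return 0

def pick_server(servers):
    """Weighted random selection of a backend server.

    `servers` maps server_id -> current load in percent, e.g. as reported
    to the SDN controller through flow/port statistics.
    """
    eligible = [(s, step_weight(l)) for s, l in servers.items() if step_weight(l) > 0]
    if not eligible:
        return min(servers, key=servers.get)   # all near overload: least-loaded fallback
    total = sum(w for _, w in eligible)
    r = random.uniform(0, total)
    acc = 0.0
    for s, w in eligible:
        acc += w
        if r <= acc:
            return s
    return eligible[-1][0]

print(pick_server({"s1": 25, "s2": 70, "s3": 90}))
```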

Automatic Calibration of SWAT Model Using LH-OAT Sensitivity Analysis and SCE-UA Optimization Method (LH-OAT 민감도 분석과 SCE-UA 최적화 방법을 이용한 SWAT 모형의 자동보정)

  • Lee Do-Hun
    • Journal of Korea Water Resources Association
    • /
    • v.39 no.8 s.169
    • /
    • pp.677-690
    • /
    • 2006
  • The LH-OAT (Latin Hypercube One-factor-At-a-Time) method for sensitivity analysis and the SCE-UA (Shuffled Complex Evolution - University of Arizona) optimization method were applied to the automatic calibration of the SWAT model in the Bocheong-cheon watershed. The LH-OAT method, which combines the advantages of global and local sensitivity analysis, effectively identified the sensitivity ranking of the SWAT model parameters over the feasible parameter space. This information allows the calibrated parameters to be selected for the automatic calibration process. The performance of the automatic calibration of the SWAT model using the SCE-UA method depends on the length of the calibration period, the number of calibrated parameters, and the selection of statistical error criteria. The performance of the SWAT model in terms of RMSE (Root Mean Square Error), NSEF (Nash-Sutcliffe Model Efficiency), RMAE (Relative Mean Absolute Error), and NMSE (Normalized Mean Square Error) improves as the calibration period and the number of parameters defined in the automatic calibration process increase. However, NAE (Normalized Average Error) and SDR (Standard Deviation Ratio) were not improved even when the calibration period and the number of calibrated parameters were increased. The results suggest that there are complex interactions among the calibration data, the calibrated parameters, and the model error criteria, and that further study is needed to understand these interactions at various representative watersheds.
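
The statistical error criteria used as calibration objectives above can be computed as sketched below for RMSE, NSEF, and RMAE. These are the standard textbook definitions; the paper's exact formulations (for example of NMSE, NAE, and SDR) may differ slightly.

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))

def nsef(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 is perfect, values < 0 are worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmae(obs, sim):
    """Relative mean absolute error (mean absolute error scaled by the observed mean)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.mean(np.abs(obs - sim)) / np.mean(obs)

# Example with short synthetic daily flow series.
obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0])
sim = np.array([1.0, 3.0, 3.1, 4.8, 4.4])
print(rmse(obs, sim), nsef(obs, sim), rmae(obs, sim))
```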

Design and Implementation of Receiver Algorithms for VDL Mode-2 Systems (VDL Mode-2 시스템을 위한 수신 알고리듬 설계 및 구현)

  • Lee, Hui-Soo;Kang, Dong-Hoon;Park, Hyo-Bae;Oh, Wang-Rock
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.46 no.10
    • /
    • pp.28-33
    • /
    • 2009
  • In this paper, we propose receiver algorithms suitable for the VHF (Very High Frequency) Digital Link Mode-2 (VDL Mode-2) system. Unlike conventional digital communication systems, which use a root raised cosine filter as both the transmit and receive filter, the VDL Mode-2 system uses a raised cosine filter as the transmit filter. Hence, it is crucial to design and implement an optimum lowpass receive filter by considering inter-symbol interference and noise performance. In addition, because of the short preamble pattern, an efficient packet detection algorithm is essential for a reliable communication link in the VDL Mode-2 system. The frequency offset caused by the carrier frequency difference between transmitter and receiver and by Doppler frequency shift must also be estimated and compensated for reliable communication. In this paper, the optimum receive filter, packet detection, and frequency offset compensation algorithms are proposed, and the performance of a VDL system employing the proposed algorithms is evaluated.
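
A hedged sketch of two of the receiver tasks described above, preamble-based packet detection and carrier frequency offset estimation, is shown below in complex baseband form. The normalized-correlation detector and the phase-increment CFO estimator are generic textbook approaches, not necessarily the specific algorithms proposed in the paper.

```python
import numpy as np

def detect_packet(rx, preamble, threshold=0.6):
    """Return the sample index where the known preamble best matches, or None.

    Uses normalized cross-correlation against the known preamble; with a short
    preamble the threshold choice matters a lot in practice.
    """
    corr = np.abs(np.correlate(rx, preamble, mode="valid"))
    window_energy = np.convolve(np.abs(rx) ** 2, np.ones(len(preamble)), "valid")
    corr /= (np.linalg.norm(preamble) * np.sqrt(window_energy) + 1e-12)
    idx = int(np.argmax(corr))
    return idx if corr[idx] > threshold else None

def estimate_cfo(rx, preamble, start, fs):
    """Estimate carrier frequency offset [Hz] from the phase drift over the preamble.

    Removing the known preamble modulation leaves a complex exponential whose
    average per-sample phase increment gives the frequency offset.
    """
    seg = rx[start:start + len(preamble)] * np.conj(preamble)
    dphi = np.angle(np.sum(seg[1:] * np.conj(seg[:-1])))
    return dphi * fs / (2 * np.pi)

# Usage: idx = detect_packet(rx, preamble); cfo = estimate_cfo(rx, preamble, idx, fs)
```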

Identifying the potential target substance of physical developer (PD) for reagent reliability test and a study on storage period of TWEEN® 20 based PD working solution (Physical Developer(PD)의 신뢰성 테스트(reagent reliability test)를 위한 타겟물질 탐색과 TWEEN® 20 기반 PD 작업용액의 보관기간에 관한 연구)

  • Soo-Jeong Ahn;Ye-jin Lee;Je-Seol Yu
    • Analytical Science and Technology
    • /
    • v.36 no.3
    • /
    • pp.113-120
    • /
    • 2023
  • Physical developer (PD) is an effective technique that can develop fingerprints even on wet or very old paper. However, it has not been known which substance PD reacts with, nor the storage period at which the PD working solution performs best. In this study, a spot test was performed with 7 eccrine components and 5 sebaceous components known as fingerprint constituents, and the mixture of palmitic acid and lysine gave the strongest positive reaction. In addition, paper treated with PD was subsequently treated with a 1,2-indanedione/zinc (1,2-IND/Zn) working solution, which showed that lysine was not dissolved in water. To determine the optimal storage period of the TWEEN® 20 based PD working solution, the mixture of palmitic acid and lysine was used as the target of the reagent reliability test. As a result, the working solution with a 14-day storage period showed better results than the other working solutions.

A Study on Multi-Object Data Split Technique for Deep Learning Model Efficiency (딥러닝 효율화를 위한 다중 객체 데이터 분할 학습 기법)

  • Jong-Ho Na;Jun-Ho Gong;Hyu-Soung Shin;Il-Dong Yun
    • Tunnel and Underground Space
    • /
    • v.34 no.3
    • /
    • pp.218-230
    • /
    • 2024
  • Recently, many studies have incorporated computer vision for safety management at construction sites. Anchor box parameters are used in state-of-the-art deep learning-based object detection and segmentation, and optimized parameters are critical in the training process to ensure consistent accuracy. These parameters are generally tuned by fixing the shape and size according to the user's heuristics, and a single parameter set governs training for the whole model. However, anchor box parameters are sensitive to the type and size of objects, and as the amount of training data increases, there is a limit to how well a single parameter set can reflect all the characteristics of the training data. Therefore, this paper suggests a method of applying multiple parameter sets optimized through data splitting to solve this problem. Criteria for efficiently segmenting the integrated training data according to object size, number of objects, and object shape were established, and the effectiveness of the proposed data split method was verified through a comparative study of the conventional scheme and the proposed methods.
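
A minimal sketch of the data-split idea described above: annotations are partitioned by bounding-box area, and anchor (width, height) parameters are then estimated separately for each subset, so that each split can be trained with parameters matched to its object sizes. The COCO-style annotation format, the small/medium/large thresholds, and the k-means anchor estimation are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_by_object_size(annotations, small=32**2, large=96**2):
    """Split COCO-style annotations into subsets by bounding-box area.

    Thresholds follow the common COCO small/medium/large convention and are
    only illustrative of a size-based split criterion.
    """
    buckets = {"small": [], "medium": [], "large": []}
    for ann in annotations:                 # ann["bbox"] = [x, y, w, h]
        w, h = ann["bbox"][2], ann["bbox"][3]
        area = w * h
        key = "small" if area < small else "large" if area > large else "medium"
        buckets[key].append(ann)
    return buckets

def anchors_for_subset(annotations, n_anchors=3):
    """Estimate anchor (w, h) parameters for one subset via k-means clustering."""
    wh = np.array([ann["bbox"][2:4] for ann in annotations], dtype=float)
    return KMeans(n_clusters=n_anchors, n_init=10).fit(wh).cluster_centers_

# Each subset would then be trained (or fine-tuned) with its own anchor parameters.
```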

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameters of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameters and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that either filed for bankruptcy (900 cases) or did not (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test using each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy on the latter was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the classification accuracy of the proposed model with that of other models. The Q-statistic values and average classification accuracies of the base classifiers were also investigated. The experimental results showed that the proposed model outperformed the other models, such as the single KNN model and the random subspace ensemble model.
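
A condensed sketch of the building blocks described above: a KNN random-subspace ensemble whose per-member k values and feature subsets are encoded as a chromosome, with a hold-out-accuracy fitness function that a genetic algorithm would optimize. The chromosome layout, member count, and subset size are illustrative assumptions; the GA loop itself (selection, crossover, mutation) is omitted.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def build_ensemble(X_tr, y_tr, chromosome):
    """Fit one KNN random-subspace ensemble described by a chromosome.

    chromosome = [(k_i, feature_index_array_i), ...], one entry per base
    classifier; these are the quantities a GA would tune.
    """
    members = []
    for k, feats in chromosome:
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr[:, feats], y_tr)
        members.append((clf, feats))
    return members

def ensemble_predict(members, X):
    """Majority vote over base classifiers (assumes integer labels, e.g. 0/1)."""
    votes = np.stack([clf.predict(X[:, feats]) for clf, feats in members]).astype(int)
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)

def fitness(chromosome, X_tr, y_tr, X_ho, y_ho):
    """GA fitness: hold-out accuracy of the ensemble (guards against overfitting)."""
    members = build_ensemble(X_tr, y_tr, chromosome)
    return accuracy_score(y_ho, ensemble_predict(members, X_ho))

# A GA would iterate on `fitness`; a random chromosome is shown here instead.
rng = np.random.default_rng(0)
n_features, n_members = 24, 10          # illustrative sizes
chromosome = [(int(rng.integers(1, 16)),
               rng.choice(n_features, size=12, replace=False))
              for _ in range(n_members)]
```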

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.157-173
    • /
    • 2011
  • As Internet use has exploded recently, malicious attacks and hacking against systems connected to networks occur frequently. This means that such intrusions can cause fatal damage in government agencies, public offices, and companies operating various systems. For these reasons, there is growing interest in and demand for intrusion detection systems (IDS), the security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models that have been applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These kinds of intrusion detection models perform well under normal situations. However, they show poor performance when they meet a new or unknown pattern of network attack. For this reason, several recent studies have tried to adopt various artificial intelligence techniques that can proactively respond to unknown threats. In particular, artificial neural networks (ANNs) have been widely applied in prior studies because of their superior prediction accuracy. However, ANNs have some intrinsic limitations, such as the risk of overfitting, the requirement of a large sample size, and the lack of transparency in the prediction process (i.e., the black-box problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful compared to ANNs. SVM is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model in order to improve the predictive ability of IDS. Our model is also designed to consider the asymmetric error cost by optimizing the classification threshold. Generally, there are two common forms of error in intrusion detection. The first error type is the False-Positive Error (FPE), in which a wrong judgment may result in unnecessary response actions. The second error type is the False-Negative Error (FNE), which misjudges malicious programs as normal. Compared to FPE, FNE is more fatal. Thus, when considering the total cost of misclassification in IDS, it is more reasonable to assign heavier weights to FNE than to FPE. Therefore, we designed our proposed intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost. In this case, conventional SVM cannot be applied because it is designed to generate discrete output (i.e., a class). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which is able to generate probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples from them using a random sampling method. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, and ANN was run using Neuroshell 4.0. For SVM, LIBSVM v2.90, a freeware tool for training SVM classifiers, was used.
Empirical results showed that our proposed model based on SVM outperformed all the other comparative models in detecting network intrusions from the accuracy perspective. They also showed that our model reduced the total misclassification cost compared to the ANN-based intrusion detection model. As a result, it is expected that the intrusion detection model proposed in this paper would not only enhance the performance of IDS, but also lead to better management of FNE.
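
A hedged sketch of the threshold-optimization step described above: an SVM with Platt-scaled probability outputs is thresholded at the cutoff that minimizes a total misclassification cost in which false negatives are weighted more heavily than false positives. The 10:1 cost ratio and the grid search over thresholds are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
from sklearn.svm import SVC  # SVC(probability=True) applies Platt scaling internally

def optimal_threshold(probs, y_true, cost_fn=10.0, cost_fp=1.0):
    """Pick the probability cutoff that minimizes total misclassification cost.

    probs  : estimated P(intrusion) from a probability-calibrated SVM.
    cost_fn: cost of missing an intrusion (False Negative) -- weighted heavier.
    cost_fp: cost of flagging normal traffic (False Positive).
    The 10:1 cost ratio here is only an illustrative assumption.
    """
    best_t, best_cost = 0.5, np.inf
    for t in np.linspace(0.01, 0.99, 99):
        pred = (probs >= t).astype(int)
        fn = np.sum((pred == 0) & (y_true == 1))
        fp = np.sum((pred == 1) & (y_true == 0))
        cost = cost_fn * fn + cost_fp * fp
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Illustrative usage (hypothetical data splits):
# svm = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
# t = optimal_threshold(svm.predict_proba(X_valid)[:, 1], y_valid)
# alerts = svm.predict_proba(X_test)[:, 1] >= t
```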