• Title/Summary/Keyword: Neural compensation

Search results: 181

Computer Vision Based Measurement, Error Analysis and Calibration (컴퓨터 시각(視覺)에 의거한 측정기술(測定技術) 및 측정오차(測定誤差)의 분석(分析)과 보정(補正))

  • Hwang, H.;Lee, C.H.
    • Journal of Biosystems Engineering
    • /
    • v.17 no.1
    • /
    • pp.65-78
    • /
    • 1992
  • When a computer vision system is used for measurement, the geometrically distorted input image usually restricts the site and size of the measuring window. Geometric distortion introduced by the image sensing and processing hardware degrades the accuracy of the visual measurement and prevents arbitrary selection of the measuring scope, so image calibration is essential to improve measuring accuracy. Calibration is usually carried out in four steps: measurement, modeling, parameter estimation, and compensation. In this paper, an efficient error calibration technique for a geometrically distorted input image was developed using a neural network. After calibrating a unit pixel, the distorted image was compensated by training a CMLAN (Cerebellar Model Linear Associator Network), without modeling the behavior of any system element. The input/output training pairs for the network were obtained by processing the image of a devised sampling pattern. The generalization property of the network successfully compensates the distortion errors of untrained, arbitrary pixel points in the image space. The error convergence of the trained network with respect to the network control parameters is also presented. The compensated image was then post-processed with a simple DDA (Digital Differential Analyzer) to avoid pixel disconnectivity. The compensation effect was verified using geometric primitives of known size. A way to extract a real-scaled geometric quantity of the object directly from 8-directional chain coding was also devised and coded. Since the developed calibration algorithm requires no knowledge of system-element modeling or parameter estimation, it can be applied simply to any image processing system. Furthermore, it efficiently enhances measurement accuracy and allows arbitrary sizing and locating of the measuring window. The applied and developed algorithms were coded in a menu-driven fashion using the MS C language Ver. 6.0, PC VISION PLUS library functions, and VGA graphics functions.
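The train-then-compensate flow described above (learn a mapping from sampled distorted pixels to their true positions, then rely on the network's generalization for untrained pixels) can be illustrated with a toy stand-in. The sketch below is not the paper's CMLAN; it fits a small polynomial regression by per-sample gradient descent on a synthetic distortion, purely to show the idea:

```python
def features(x, y):
    # Low-order polynomial features of a distorted pixel coordinate.
    return [1.0, x, y, x * y, x * x, y * y]

def train(pairs, lr=0.1, epochs=2000):
    # pairs: list of ((xd, yd), (xc, yc)) calibration samples,
    # mapping distorted coordinates to true coordinates.
    w = [[0.0] * 6, [0.0] * 6]          # one weight row per output coordinate
    for _ in range(epochs):
        for (xd, yd), target in pairs:
            f = features(xd, yd)
            for k in range(2):          # update each output independently
                err = sum(wi * fi for wi, fi in zip(w[k], f)) - target[k]
                for j in range(6):
                    w[k][j] -= lr * err * f[j]
    return w

def compensate(w, xd, yd):
    f = features(xd, yd)
    return tuple(sum(wi * fi for wi, fi in zip(w[k], f)) for k in range(2))

# Synthetic distortion on a unit image: true pixel (x, y) is observed
# at (x + 0.05*x*y, y - 0.05*x*x); a 5x5 grid plays the sampling pattern.
grid = [(i / 4, j / 4) for i in range(5) for j in range(5)]
pairs = [((x + 0.05 * x * y, y - 0.05 * x * x), (x, y)) for x, y in grid]
w = train(pairs)

# Query an untrained pixel: the observed position of true point (0.6, 0.4).
xc, yc = compensate(w, 0.6 + 0.05 * 0.6 * 0.4, 0.4 - 0.05 * 0.6 * 0.6)
```

As in the paper, the key point is that no distortion model is assumed; the mapping is recovered entirely from the sampled calibration pairs.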


Design of Face Recognition Algorithm based Optimized pRBFNNs Using Three-dimensional Scanner (최적 pRBFNNs 패턴분류기 기반 3차원 스캐너를 이용한 얼굴인식 알고리즘 설계)

  • Ma, Chang-Min;Yoo, Sung-Hoon;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.6
    • /
    • pp.748-753
    • /
    • 2012
  • In this paper, a face recognition algorithm is designed based on an optimized pRBFNNs pattern classifier using a three-dimensional scanner. In general, a two-dimensional image-based face recognition system extracts facial features from the gray levels of images, so environmental variations such as natural sunlight, artificial light, and face pose degrade its performance. The proposed algorithm uses a three-dimensional scanner to overcome this drawback. First, the face shape is scanned with the three-dimensional scanner, and the pose of the scanned face is converted to a frontal image through a pose-compensation process. Second, face depth data are extracted using the point signature method. Finally, the recognition performance is confirmed by using the optimized pRBFNNs to solve the resulting high-dimensional pattern recognition problem.
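The classifier family used here, a radial basis function network, can be sketched minimally. The toy below is not the paper's pRBFNNs (which uses polynomial consequents and is optimized separately); it is a plain Gaussian-RBF layer with a logistic output on made-up 2-D "face feature" vectors, just to show the structure:

```python
import math

def rbf(x, c, sigma=1.0):
    # Gaussian radial basis activation of input x around center c.
    return math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * sigma ** 2))

def hidden(x, centers):
    return [rbf(x, c) for c in centers]

def train(samples, centers, lr=0.5, epochs=500):
    # One output weight per center; logistic output for the 2-class toy case.
    w, b = [0.0] * len(centers), 0.0
    for _ in range(epochs):
        for x, y in samples:
            h = hidden(x, centers)
            z = sum(wi * hi for wi, hi in zip(w, h)) + b
            p = 1 / (1 + math.exp(-z))
            for j in range(len(w)):
                w[j] -= lr * (p - y) * h[j]
            b -= lr * (p - y)
    return w, b

def predict(w, b, x, centers):
    z = sum(wi * hi for wi, hi in zip(w, hidden(x, centers))) + b
    return 1 if z > 0 else 0

# Hypothetical feature vectors for two face classes, clustered around two centers.
centers = [(0.0, 0.0), (2.0, 2.0)]
samples = [((0.1, -0.2), 0), ((0.2, 0.1), 0), ((1.9, 2.1), 1), ((2.2, 1.8), 1)]
w, b = train(samples, centers)
```

In the paper the inputs would be depth features from the point signature method rather than these synthetic points.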

A Study On Three-dimensional Optimized Face Recognition Model : Comparative Studies and Analysis of Model Architectures (3차원 얼굴인식 모델에 관한 연구: 모델 구조 비교연구 및 해석)

  • Park, Chan-Jun;Oh, Sung-Kwun;Kim, Jin-Yul
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.6
    • /
    • pp.900-911
    • /
    • 2015
  • In this paper, a 3D face recognition model is designed using a polynomial-based RBFNN (Radial Basis Function Neural Network) and a PNN (Polynomial Neural Network), and its recognition rate is evaluated. In existing 2D face recognition models, the recognition rate may degrade under external conditions such as changes in image brightness. To overcome this disadvantage, 3D face recognition is performed using a 3D scanner. In the preprocessing step, the 3D face images obtained under pose variation are converted to frontal images by pose compensation. The depth data of the face shape are extracted using multiple point signatures, and the depth information of the whole face area is obtained using the tip of the nose as a reference point. Parameter optimization is carried out with the aid of ABC (Artificial Bee Colony) and PSO (Particle Swarm Optimization) for effective training and recognition. The experimental data for face recognition were built from face images of students and researchers in the IC&CI Lab of Suwon University. Using these 3D face images, the performance of 3D face recognition is evaluated and compared for the two model architectures as well as for the point signature method based on two kinds of depth data information.
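One of the optimizers named above, PSO, is compact enough to sketch. The version below is a generic global-best PSO on a stand-in objective (a simple quadratic around a made-up optimum), not the paper's actual hyper-parameter search or its ABC counterpart:

```python
import random

def pso(loss, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    # Standard global-best PSO over a box-constrained search space.
    random.seed(0)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Stand-in objective: squared error of two hypothetical classifier
# hyper-parameters around their (in practice unknown) optimum (1.0, 2.0).
best, val = pso(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, dim=2)
```

In the paper, `loss` would be the classifier's recognition error as a function of the RBFNN/PNN parameters being tuned.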

Fuzzy Neural Networks-Based Call Admission Control Using Possibility Distribution of Handoff Calls Dropping Rate for Wireless Networks (핸드오프 호 손실율 가능성 분포에 의한 무선망의 퍼지 신경망 호 수락제어)

  • Lee, Jin-Yi
    • Journal of Advanced Navigation Technology
    • /
    • v.13 no.6
    • /
    • pp.901-906
    • /
    • 2009
  • This paper proposes a call admission control (CAC) method for wireless networks based on the upper bound of a possibility distribution of handoff-call dropping rates. The possibility distribution is estimated by fuzzy inference combined with a neural-network learning algorithm, which tunes the membership functions (THEN parts) of the fuzzy rules used in the inference. The fuzzy inference method is based on a weighted average of fuzzy sets. The proposed method avoids estimating excessively large handoff-call dropping rates and enables real-time self-compensation when the estimated values are smaller than the real values, thereby providing conservative CAC that guarantees the allowed call dropping rate (CDR). Simulation studies show that the estimation performance for the upper bound of the call dropping rate is good, and that handoff-call dropping rates under this CAC can be kept below the user's desired value.
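The weighted-average fuzzy inference named in the abstract can be sketched directly. The rules and numbers below are hypothetical, purely to show the mechanism of firing-strength-weighted averaging feeding an admission decision:

```python
def weighted_average_inference(rules, x):
    # rules: list of (membership_fn, consequent_value) pairs.
    # Output is the firing-strength-weighted average of the consequents.
    num = den = 0.0
    for mu, y in rules:
        strength = mu(x)
        num += strength * y
        den += strength
    return num / den if den else 0.0

def tri(a, b, c):
    # Triangular membership function on [a, c] peaking at b.
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Hypothetical rules: traffic load -> estimated handoff-call dropping rate.
rules = [(tri(0.0, 0.2, 0.5), 0.01),   # light load  -> ~1% dropping
         (tri(0.3, 0.5, 0.8), 0.03),   # medium load -> ~3%
         (tri(0.6, 0.8, 1.1), 0.08)]   # heavy load  -> ~8%

def admit(load, target=0.05):
    # Admit a new call only if the estimated upper bound stays below target.
    return weighted_average_inference(rules, load) < target
```

In the paper, the consequent values (the THEN parts) would themselves be tuned by the neural-network learning algorithm rather than fixed as here.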


An Implementation of Federated Learning based on Blockchain (블록체인 기반의 연합학습 구현)

  • Park, June Beom;Park, Jong Sou
    • The Journal of Bigdata
    • /
    • v.5 no.1
    • /
    • pp.89-96
    • /
    • 2020
  • Deep learning using artificial neural networks has recently been researched and developed in various fields such as image recognition, big data, and data analysis. Federated learning has emerged to address data privacy invasion and the growing cost and time required for training: it offers the benefits of a distributed processing system while solving several problems of conventional deep learning, but issues remain with the server-client architecture and with incentives for providing training data. In this work, we replace the role of the server with a blockchain system and investigate how this resolves the privacy and security problems associated with federated learning. In addition, we implemented a blockchain-based system that motivates users by paying compensation for the data they provide and that requires lower maintenance costs while maintaining the same accuracy as conventional training. In this paper, we present experimental results demonstrating the validity of the blockchain-based system and compare conventional federated learning with blockchain-based federated learning. As future work, we present solutions to remaining security problems and applicable business fields.
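The two ingredients described above, federated averaging of client updates and a ledger that records contributions and pays compensation, can be sketched in miniature. The hash-chained list below is a stand-in for a real blockchain, and the reward rate is invented:

```python
import hashlib
import json

def fedavg(client_weights, client_sizes):
    # Size-weighted average of client model weights (FedAvg).
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[d] * n for w, n in zip(client_weights, client_sizes)) / total
            for d in range(dim)]

class Ledger:
    # Append-only hash-chained log standing in for the blockchain layer.
    def __init__(self):
        self.blocks = []

    def append(self, payload):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
        self.blocks.append({"prev": prev, "payload": payload,
                            "hash": hashlib.sha256(body.encode()).hexdigest()})

# Two clients submit updates; each submission is logged with a reward
# proportional to the amount of data contributed (hypothetical rate).
ledger = Ledger()
updates, sizes = [[1.0, 2.0], [3.0, 4.0]], [100, 300]
for w, n in zip(updates, sizes):
    ledger.append({"weights": w, "samples": n, "reward": n * 0.01})
global_w = fedavg(updates, sizes)
```

The hash chain makes each logged contribution tamper-evident, which is the property the paper leans on to remove the trusted central server.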

Development and Usability Evaluation of Hand Rehabilitation Training System Using Multi-Channel EMG-Based Deep Learning Hand Posture Recognition (다채널 근전도 기반 딥러닝 동작 인식을 활용한 손 재활 훈련시스템 개발 및 사용성 평가)

  • Ahn, Sung Moo;Lee, Gun Hee;Kim, Se Jin;Bae, So Jeong;Lee, Hyun Ju;Oh, Do Chang;Tae, Ki Sik
    • Journal of Biomedical Engineering Research
    • /
    • v.43 no.5
    • /
    • pp.361-368
    • /
    • 2022
  • The purpose of this study was to develop a hand rehabilitation training system for hemiplegic patients that recognizes five hand postures (WF: Wrist Flexion, WE: Wrist Extension, BG: Ball Grip, HG: Hook Grip, RE: Rest) in real time using multi-channel EMG-based deep learning. For classification, the EMG signals from an 8-channel armband were pre-processed into spider-chart image data and fed to a Convolutional Neural Network (CNN), using recordings from five test subjects (1,500 data sets in total). The recognition accuracy was 92% for WF, 94% for WE, 76% for BG, 82% for HG, and 88% for RE. Ten physical therapists participated in the usability evaluation; the questionnaire consisted of 7 items on acceptance, interest, and satisfaction, each rated on a 5-point scale, for which the mean and standard deviation were calculated. High scores were obtained for immersion and interest in the game (4.6±0.43), convenience of the device (4.9±0.30), and satisfaction after treatment (4.1±0.48), whereas conformity of intention for treatment (3.90±0.49) was relatively low. This is likely because game play may be difficult depending on the degree of spasticity of the hemiplegic patient, and compensatory movements may occur in patients with weakened target muscles. Therefore, a rehabilitation program suited to each patient's degree of disability needs to be developed.
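The spider-chart preprocessing step can be illustrated in outline. The paper does not specify its exact feature or rasterization choices, so the sketch below makes a common assumption (one RMS value per channel) and maps the 8 values onto 8 radial spokes; a real pipeline would rasterize these vertices into the CNN's input image:

```python
import math

def rms_features(window):
    # window: list of 8 channel signals (lists of samples); one RMS per channel.
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

def spider_points(feats, radius=1.0):
    # Place the 8 per-channel values on 8 equally spaced spokes of a
    # spider chart, normalized so the largest value reaches the rim.
    n = len(feats)
    m = max(feats) or 1.0
    pts = []
    for k, f in enumerate(feats):
        ang = 2 * math.pi * k / n
        r = radius * f / m
        pts.append((r * math.cos(ang), r * math.sin(ang)))
    return pts

# Hypothetical 8-channel window: channel 0 active, the rest near-silent,
# roughly what a single strong muscle contraction might look like.
window = [[0.9, -1.1, 1.0, -0.8]] + [[0.05, -0.04, 0.06, -0.05]] * 7
feats = rms_features(window)
pts = spider_points(feats)
```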

Federated Deep Reinforcement Learning Based on Privacy Preserving for Industrial Internet of Things (산업용 사물 인터넷을 위한 프라이버시 보존 연합학습 기반 심층 강화학습 모델)

  • Chae-Rim Han;Sun-Jin Lee;Il-Gu Lee
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1055-1065
    • /
    • 2023
  • Recently, various studies using deep reinforcement learning (deep RL) have been conducted to solve complex problems using the big data collected in the industrial Internet of Things. Deep RL uses reinforcement learning's trial-and-error algorithms and cumulative reward functions to generate and learn from its own data and to quickly explore neural network structures and parameter choices. However, studies so far have shown that the larger the training data, the higher the memory usage and search time, and the lower the accuracy. In this study, model-agnostic learning was applied to efficient federated deep RL to mitigate privacy invasion: it increases robustness by 55.9%, achieves 97.8% accuracy (an improvement of 5.5% over comparative optimization-based meta-learning models), and reduces delay time by 28.9% on average.

Path coordinator by the modified genetic algorithm

  • Chung, C.H.;Lee, K.S.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1991.10b
    • /
    • pp.1939-1943
    • /
    • 1991
  • Path planning is an important task for the optimal motion of a robot in structured or unstructured environments. The goal of this paper is to plan the shortest collision-free path in 3D when a robot is navigated to pick up tools or repair parts at various locations. To accomplish this, the Path Coordinator is proposed, combining an obstacle avoidance strategy [3] with a traveling salesman problem (TSP) strategy [23]. The obstacle avoidance strategy plans the shortest collision-free path between each pair of n locations in 2D or 3D. The TSP strategy computes the minimal system cost of a tour, defined as a closed path visiting each location exactly once; it can be implemented with a neural network. The obstacle avoidance strategy in 2D can be implemented with the VGraph algorithm; however, the VGraph algorithm is not useful in 3D because it cannot guarantee global optimality there. The Path Coordinator therefore selects the optimal edges with a modified genetic algorithm [21] and computes the optimal nodes along those edges with the Recursive Compensation Algorithm [5].
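The genetic-algorithm-over-tours idea can be shown on a tiny instance. The sketch below is a generic permutation GA (tournament selection plus swap mutation with elitism), not the paper's modified GA over edges, and the four locations are made up:

```python
import math
import random

def tour_len(tour, pts):
    # Length of the closed tour visiting pts in the given order.
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def ga_tsp(pts, pop_size=40, gens=300, seed=1):
    # Tiny permutation GA: binary tournament selection + swap mutation,
    # with elitist survivor selection each generation.
    random.seed(seed)
    n = len(pts)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)                  # tournament
            child = (a if tour_len(a, pts) < tour_len(b, pts) else b)[:]
            i, j = random.sample(range(n), 2)             # swap mutation
            child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = sorted(pop + nxt, key=lambda t: tour_len(t, pts))[:pop_size]
    return pop[0]

# Four corner locations of a unit square; the optimal tour length is 4.0.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
best = ga_tsp(pts)
```

In the paper's setting, the edge costs would come from the collision-free path lengths produced by the obstacle avoidance strategy rather than straight-line distances.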


A Study on a Control Method for Small BLDC Motor Sensorless Drive with the Single Phase BEMF and the Neutral Point (소형 BLDC 전동기 센서리스 드라이브의 단상 역기전력과 중성점을 이용한 제어기법 연구)

  • Jo, June-Woo;Hwang, Don-Ha;Hwang, Young-Gi;Jung, Tae-Uk
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.28 no.9
    • /
    • pp.1-7
    • /
    • 2014
  • A Brushless Direct Current (BLDC) motor requires rotor position measurement because its phase currents must be commutated in synchronism with the rotor position, a role played by the brush and commutator in a conventional DC motor. Recently, many studies have addressed sensorless control drives for BLDC motors. Conventional methods include dq compensation values, Kalman filters, fuzzy logic, neural networks, and the like. These methods have difficulty detecting the BEMF accurately at low speed because of the low BEMF voltage and switching noise, and their computations are long and complex, requiring a high-performance microprocessor; they are therefore unsuitable for a small BLDC motor sensorless drive. This paper presents control methods suitable for an economical small BLDC motor sensorless drive: an improved design of the BEMF detection circuit, a simplified algorithm, and reduced computation time. The stability and validity of the improved sensorless drive were verified through design, manufacture, and analysis.
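The core of single-phase BEMF sensorless control, detecting where the phase back-EMF crosses the neutral-point voltage and commutating 30 electrical degrees later, can be sketched numerically. The waveform below is synthetic and the sketch ignores the filtering and noise handling a real drive needs:

```python
import math

def zero_crossings(bemf, neutral):
    # Indices where (phase BEMF - neutral voltage) changes sign.
    diff = [b - n for b, n in zip(bemf, neutral)]
    return [i for i in range(1, len(diff))
            if diff[i - 1] <= 0 < diff[i] or diff[i - 1] >= 0 > diff[i]]

def commutation_points(crossings, samples_per_elec_cycle):
    # Commutate 30 electrical degrees after each zero crossing.
    delay = samples_per_elec_cycle // 12          # 30 deg = 1/12 cycle
    return [c + delay for c in crossings]

# Synthetic test: one electrical cycle of sinusoidal phase BEMF around
# a 2.5 V neutral point, sampled once per electrical degree.
N = 360
neutral = [2.5] * N
bemf = [2.5 + math.sin(2 * math.pi * i / N) for i in range(N)]
zc = zero_crossings(bemf, neutral)     # crossings near 0 and 180 degrees
cp = commutation_points(zc, N)         # commutation 30 degrees later
```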

An Evolutionary Optimized Algorithm Approach to Compensate the Non-linearity in Linear Variable Displacement Transducer Characteristics

  • Murugan, S.;Umayal, S.P.
    • Journal of Electrical Engineering and Technology
    • /
    • v.9 no.6
    • /
    • pp.2142-2153
    • /
    • 2014
  • Linearization of transducer characteristics plays a vital role in electronic instrumentation because all transducers have outputs that are nonlinearly related to the physical variables they sense, and a nonlinear output produces a whole assortment of problems. Transducers rarely possess a perfectly linear transfer characteristic and always show some degree of nonlinearity over their operating range; many researchers have attempted to extend the linear range. This paper presents a method to compensate the nonlinearity of the Linear Variable Displacement Transducer (LVDT) based on the Extreme Learning Machine (ELM) method, the Differential Evolution (DE) algorithm, and an Artificial Neural Network (ANN) trained by a Genetic Algorithm (GA). Because of its mechanical structure, the LVDT often exhibits inherently nonlinear input-output characteristics, and the approximation capability of the optimized ANN techniques is well suited to correcting this. The proposed method is demonstrated through computer simulation with the experimental data of two different LVDTs. The results reveal that the proposed method compensated the nonlinearity of the displacement transducer with very low training time, the lowest Mean Square Error (MSE) value, and better linearity. The approach involves little computational complexity, performs well for LVDT nonlinearity compensation, and has good application prospects.
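The underlying compensation idea, learning the inverse of the transducer's nonlinear characteristic so that displacement can be read linearly from the output voltage, can be shown with a toy. The sketch below is not ELM, DE, or a GA-trained ANN; it fits a cubic inverse model by gradient descent on a synthetic LVDT curve, purely to show the learn-the-inverse flow:

```python
def fit_inverse(volts, disps, degree=3, lr=0.1, epochs=5000):
    # Gradient-descent least-squares fit of displacement as a polynomial
    # in the (nonlinear) transducer output voltage.
    w = [0.0] * (degree + 1)
    for _ in range(epochs):
        for v, d in zip(volts, disps):
            f = [v ** k for k in range(degree + 1)]
            err = sum(wi * fi for wi, fi in zip(w, f)) - d
            for k in range(degree + 1):
                w[k] -= lr * err * f[k]
    return w

def compensate(w, v):
    # Recover displacement from a raw transducer reading.
    return sum(wi * v ** k for k, wi in enumerate(w))

# Synthetic LVDT characteristic: output v = x + 0.2*x**3 over x in [-1, 1],
# i.e. the reading grows faster than the true displacement near the ends.
xs = [i / 10 for i in range(-10, 11)]
vs = [x + 0.2 * x ** 3 for x in xs]
w = fit_inverse(vs, xs)

# Compensate a reading for a displacement not in the calibration set.
x_hat = compensate(w, 0.55 + 0.2 * 0.55 ** 3)
```

The papers' methods differ in how the inverse model is trained (ELM's random hidden layer, DE, or a GA), but all target this same inverse mapping.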