• Title/Summary/Keyword: Image Signal Recognition


Development of Collaborative Robot Control Training Medium to Improve Worker Safety and Work Convenience Using Image Processing and Machine Learning-Based Hand Signal Recognition (작업자의 안전과 작업 편리성 향상을 위한 영상처리 및 기계학습 기반 수신호 인식 협동로봇 제어 교육 매체 개발)

  • Jin-heork Jung;Hun Jeong;Gyeong-geun Park;Gi-ju Lee;Hee-seok Park;Chae-hun An
    • Journal of Practical Engineering Education
    • /
    • v.14 no.3
    • /
    • pp.543-553
    • /
    • 2022
  • A collaborative robot (cobot) is one of the production systems introduced by the 4th industrial revolution: it maximizes efficiency by combining the fine manual skills of workers with a robot's capacity for simple repetitive tasks. Research on efficient interfaces between worker and robot continues, along with work on the safety problems that arise from sharing a workspace. In this study, a method for controlling the robot by recognizing the worker's hand signals was presented to enhance the convenience and concentration of the worker, and worker safety was secured by introducing the concept of a safety zone. Various technologies, such as robot control, PLC, image processing, machine learning, and ROS, were used in the implementation. In addition, the roles and interface methods of the proposed technologies were defined and presented for use as an educational medium. Students can build and adjust the educational media system by linking the various technologies introduced, which is an excellent way to have them recognize the necessity of the technologies required in the field and to induce in-depth learning about them. In addition, presenting a problem and then having students seek a solution on their own can lead to self-directed learning. Through this, students can learn key technologies of the 4th industrial revolution and improve their ability to solve various problems.
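The safety-zone concept in the abstract can be sketched as a simple speed governor; the circular zone layout, speed caps, and function name below are hypothetical illustrations, not details from the paper:

```python
def cobot_speed_limit(tool_xy, zones, full_speed=250.0):
    # Hypothetical safety-zone sketch: each zone is a tuple
    # (center_x, center_y, radius, speed_cap_mm_s). The robot speed is
    # capped by the most restrictive zone containing the tool point.
    x, y = tool_xy
    speed = full_speed
    for cx, cy, r, cap in zones:
        if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
            speed = min(speed, cap)
    return speed
```

A real cobot controller would evaluate such a check in every control cycle, slowing or halting the arm whenever the worker's hand enters a shared region.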

A Noisy-Robust Approach for Facial Expression Recognition

  • Tong, Ying;Shen, Yuehong;Gao, Bin;Sun, Fenggang;Chen, Rui;Xu, Yefeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.4
    • /
    • pp.2124-2148
    • /
    • 2017
  • Accurate facial expression recognition (FER) requires reliable signal filtering and effective feature extraction. Considering these requirements, this paper presents a novel approach for FER that is robust to noise. The main contributions of this work are: First, to preserve texture details in facial expression images and remove image noise, we improved the anisotropic diffusion filter by adjusting the diffusion coefficient according to two factors, namely the gray-value difference between the object and the background, and the gradient magnitude of the object. The improved filter can effectively distinguish facial muscle deformation from noise in face images. Second, to further improve robustness, we propose a new feature descriptor that combines the Histogram of Oriented Gradients with the Canny operator (Canny-HOG), which can represent the precise deformation of eyes, eyebrows, and lips for FER. Third, Canny-HOG's block and cell sizes are adjusted to reduce feature dimensionality and make the classifier less prone to overfitting. Our method was tested on images from the JAFFE and CK databases. Experimental results in L-O-Sam-O and L-O-Sub-O modes demonstrate the effectiveness of the proposed method. Meanwhile, the recognition rate of the method is not significantly affected by Gaussian or salt-and-pepper noise.
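The classic Perona-Malik anisotropic diffusion that the paper builds on can be sketched in a few lines of NumPy; this is the standard scheme with a plain exponential coefficient, not the authors' improved two-factor coefficient:

```python
import numpy as np

def perona_malik_step(img, kappa=15.0, gamma=0.1):
    # One iteration of classic Perona-Malik anisotropic diffusion.
    # np.roll gives wrap-around borders, acceptable for a short sketch.
    north = np.roll(img, 1, axis=0) - img
    south = np.roll(img, -1, axis=0) - img
    east = np.roll(img, -1, axis=1) - img
    west = np.roll(img, 1, axis=1) - img
    # Diffusion coefficient g = exp(-(|grad|/kappa)^2): near 0 across
    # strong edges (preserved), near 1 in flat regions (smoothed).
    g = lambda d: np.exp(-(d / kappa) ** 2)
    return img + gamma * (g(north) * north + g(south) * south
                          + g(east) * east + g(west) * west)
```

Iterating this step denoises flat regions while leaving sharp intensity edges (such as facial muscle boundaries) largely intact, which is the property the paper's modified coefficient refines.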

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.3
    • /
    • pp.427-435
    • /
    • 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion from facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli, inducing anger, fear, boredom, and a neutral state, were presented to participants, and facial temperatures were measured by an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and emotion states were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by the kind of emotion. Linear discriminant analysis for emotion recognition showed a correct classification percentage of 62.7% for the four emotions when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly to 56.7% when using only facial expression features, and to 40.2% when using only emotional state features. Conclusion: Facial expression features are essential in emotion recognition, but emotional state features are also important for classifying the emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
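The feature pipeline described (per-region temperature differences between baseline and emotion periods) can be sketched as below; the ROI layout is invented for illustration, and a nearest-centroid rule stands in for the paper's linear discriminant analysis:

```python
import numpy as np

def temp_diff_features(baseline, emotion, rois):
    # rois: dict name -> (row_slice, col_slice) into the thermal frame.
    # Feature = mean temperature change in each ROI (emotion - baseline).
    return np.array([emotion[r, c].mean() - baseline[r, c].mean()
                     for r, c in rois.values()])

def nearest_centroid(x, centroids):
    # centroids: dict label -> class-mean feature vector (stand-in for LDA).
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))
```

In practice the class centroids (or LDA projection) would be learned from the measured temperature-difference vectors of the 231 participants.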

A Study on ISAR Imaging Algorithm for Radar Target Recognition (표적 구분을 위한 ISAR 영상 기법에 대한 연구)

  • Park, Jong-Il;Kim, Kyung-Tae
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.19 no.3
    • /
    • pp.294-303
    • /
    • 2008
  • ISAR (Inverse Synthetic Aperture Radar) images represent the 2-D (two-dimensional) spatial distribution of the RCS (Radar Cross Section) of an object, and they can be applied to the problem of target identification. A traditional approach to ISAR imaging is to use a 2-D IFFT (Inverse Fast Fourier Transform). However, the 2-D IFFT yields low-resolution ISAR images, especially when the measured frequency bandwidth and angular region are limited. To improve on the resolution of the Fourier transform, various high-resolution spectral estimation approaches have been applied to obtain ISAR images, such as AR (Auto-Regressive), MUSIC (Multiple Signal Classification), and Modified MUSIC algorithms. In this study, these high-resolution spectral estimators, as well as the 2-D IFFT approach, are combined with a recently developed ISAR image classification algorithm, and their performances are carefully analyzed and compared in the framework of radar target recognition.
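The conventional 2-D IFFT imaging step can be sketched as follows; the sampling grid is idealized (uniform frequency/angle samples under the small-bandwidth, small-angle approximation), and the function name is an illustration:

```python
import numpy as np

def isar_image(backscatter, n_range=64, n_xrange=64):
    # backscatter: 2-D complex array sampled over frequency (rows) and
    # aspect angle (cols). Under the small-angle approximation a 2-D
    # IFFT maps it to a down-range / cross-range intensity image.
    img = np.fft.ifft2(backscatter, s=(n_range, n_xrange))
    return np.fft.fftshift(np.abs(img))  # put zero range/Doppler at center
```

A single ideal point scatterer produces one sharp peak in this image; the high-resolution estimators (AR, MUSIC) replace the IFFT precisely because this peak broadens when bandwidth and angular aperture are small.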

HOG based Pedestrian Detection and Behavior Pattern Recognition for Traffic Signal Control (교통신호제어를 위한 HOG 기반 보행자 검출 및 행동패턴 인식)

  • Yang, Sung-Min;Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.11
    • /
    • pp.1017-1021
    • /
    • 2013
  • Traffic signals are currently operated with fixed time intervals, set based on experience so that vehicles wait while pedestrians cross the street. However, this rigid setting is inefficient in both economic terms and crossing safety. In this research, we propose a monitoring algorithm that detects and tracks pedestrians and checks their crossing of the crosswalk through their behavior patterns. The monitoring system ensures pedestrian safety while keeping traffic flowing efficiently. In the algorithm, pedestrians are detected using the HOG feature, which is robust to illumination changes in outdoor environments. Because of the computational complexity, parallel processing on the GPU as well as the CPU is adopted for real-time operation. Pedestrians are then tracked through the hue channel across the image sequence within a predefined pedestrian zone. Finally, the system checks pedestrian crossing on the crosswalk by HOG-based behavior patterns. In experiments, parallel processing on both GPU and CPU reached 16 FPS (Frames Per Second), and the accuracies of detection and tracking were 93.7% and 91.2%, respectively.
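The HOG feature underlying the detector can be sketched in NumPy: per-cell histograms of gradient orientation weighted by gradient magnitude. This minimal version omits the block normalization and SVM stages of a full detector:

```python
import numpy as np

def hog_cells(gray, cell=8, bins=9):
    # Minimal HOG sketch: per-cell orientation histograms weighted by
    # gradient magnitude (no block normalisation, no classifier).
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180          # unsigned orientation
    h, w = gray.shape
    ch, cw = h // cell, w // cell
    bin_idx = np.minimum((ang / (180 / bins)).astype(int), bins - 1)
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            sl = (slice(i * cell, (i + 1) * cell),
                  slice(j * cell, (j + 1) * cell))
            hist[i, j] = np.bincount(bin_idx[sl].ravel(),
                                     weights=mag[sl].ravel(),
                                     minlength=bins)[:bins]
    return hist
```

Because the histograms depend on gradient direction rather than absolute intensity, the descriptor tolerates the outdoor illumination changes the abstract mentions; the per-cell loop is also what makes HOG a natural candidate for the GPU parallelization used in the paper.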

Teeth Image Recognition Using Hidden Markov Model (HMM을 이용한 치열 영상인식)

  • Kim, Dong-Ju;Yoon, Jun-Ho;Cheon, Byeong-Geun;Lee, Hyon-Gu;Hong, Kwang-Seok
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2006.06a
    • /
    • pp.29-32
    • /
    • 2006
  • In this paper, we propose a biometric method that uses an individual's teeth image, an approach not used in previous biometric systems. The proposed teeth recognition system uses the 2D-DCT as its feature parameters to remove data redundancy and reduce the dimensionality of the observation vectors, and employs the EHMM technique used in speech recognition and face recognition. The EHMM consists of 3 super-states, each of which is composed of a 1D-HMM with 3, 5, and 3 states, respectively. The system is evaluated by recognition experiments on teeth images not used for model training. The experiments used a total of 200 teeth images, 10 each from 10 men and 10 women. The proposed teeth recognition system achieved a recognition rate of 98.5%, comparable to the 98% of the EHMM-based face recognition system of reference [4].
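The 2D-DCT feature extraction mentioned above can be sketched with NumPy alone; keeping a low-frequency corner (e.g. the top-left 4x4 block) of the resulting coefficients would give the reduced observation vector fed to the EHMM:

```python
import numpy as np

def dct2(block):
    # Orthonormal 2-D DCT-II via a separable 1-D transform matrix.
    # Assumes a square block (as in typical 8x8 image patches).
    N = block.shape[0]
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C *= np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)        # DC row scaling for orthonormality
    return C @ block @ C.T
```

The DCT concentrates most of a smooth patch's energy into a few low-frequency coefficients, which is exactly the redundancy-removal and dimension-reduction property the abstract cites.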


Requirements Analysis of Image-Based Positioning Algorithm for Vehicles

  • Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.5
    • /
    • pp.397-402
    • /
    • 2019
  • Recently, with the emergence of autonomous vehicles and increasing interest in safety, a variety of research has been actively conducted to precisely estimate the position of a vehicle by fusing sensors. Previous research determined the location of moving objects using GNSS (Global Navigation Satellite Systems) and/or an IMU (Inertial Measurement Unit). Lately, however, precise positioning of a moving vehicle has been performed by fusing data from various sensors, such as LiDAR (Light Detection and Ranging), on-board vehicle sensors, and cameras. This study is designed to enhance kinematic vehicle positioning performance by using feature-based recognition. Therefore, an analysis of the required precision of the observations obtained from images was carried out. Velocity and attitude observations, assumed to be obtained from images, were generated by simulation, and errors of various magnitudes were added to them. By applying these observations to the positioning algorithm, the effects of the additional velocity and attitude information on positioning accuracy during GNSS signal blockages were analyzed based on a Kalman filter. The results show that yaw information with a precision better than 0.5 degrees should be used to improve existing positioning algorithms by more than 10%.
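The role of image-derived velocity during a GNSS outage can be illustrated with a toy 1-D constant-velocity Kalman filter; the state model, noise values, and function name are illustrative assumptions, far simpler than the paper's full positioning algorithm:

```python
import numpy as np

def kalman_cv(z_pos, z_vel, dt=1.0, r_pos=1.0, r_vel=0.01):
    # 1-D constant-velocity Kalman filter sketch. Entries of z_pos may
    # be None to mimic a GNSS outage; z_vel (e.g. image-derived
    # velocity) keeps aiding the filter through the gap.
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    Q = 1e-4 * np.eye(2)                    # process noise
    x, P = np.zeros(2), np.eye(2)
    out = []
    for zp, zv in zip(z_pos, z_vel):
        x, P = F @ x, F @ P @ F.T + Q       # predict
        meas = [] if zp is None else [(np.array([1.0, 0.0]), zp, r_pos)]
        meas.append((np.array([0.0, 1.0]), zv, r_vel))
        for H, z, r in meas:                # sequential scalar updates
            S = H @ P @ H + r
            K = P @ H / S
            x = x + K * (z - H @ x)
            P = P - np.outer(K, H @ P)
        out.append(x.copy())
    return np.array(out)
```

During the outage the filter coasts on the predicted position, and the velocity updates keep the velocity state (and hence the prediction) from drifting, which is the mechanism behind the paper's accuracy analysis.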

Study on Heart Rate Variability and PSD Analysis of PPG Data for Emotion Recognition (감정 인식을 위한 PPG 데이터의 심박변이도 및 PSD 분석)

  • Choi, Jin-young;Kim, Hyung-shin
    • Journal of Digital Contents Society
    • /
    • v.19 no.1
    • /
    • pp.103-112
    • /
    • 2018
  • In this paper, we propose a method of recognizing emotions using a PPG sensor, which measures blood flow that varies with emotion. From the PPG signal, positive and negative emotions are determined in the frequency domain through the PSD (Power Spectral Density). Based on James A. Russell's two-dimensional circumplex model, we classify emotions as joy, sadness, irritability, and calmness and examine their association with the magnitude of energy in the frequency domain. It is significant that this study used the same PPG sensor used in wearable devices to measure these four kinds of emotions in the frequency domain through video-viewing experiments. Through a questionnaire, the accuracy, each participant's level of immersion, emotional changes, and biofeedback on the videos were collected. The proposed method is expected to enable various developments, such as commercial application services using PPG and mobile application prediction services merged with the context information of existing smartphones.
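The PSD band-energy computation can be sketched with a plain periodogram; the LF/HF bands shown are the conventional heart-rate-variability bands, which may differ from the paper's exact frequency ranges:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    # Periodogram-based PSD sketch: power in the band [f_lo, f_hi) Hz.
    # Conventional HRV bands: LF 0.04-0.15 Hz, HF 0.15-0.4 Hz.
    sig = np.asarray(signal) - np.mean(signal)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / (fs * len(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].sum() * (freqs[1] - freqs[0])
```

Comparing such band powers (e.g. the LF/HF ratio computed from inter-beat intervals) is the usual way PPG-derived heart-rate variability is mapped onto emotional arousal.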

Multi-functional Automated Cultivation for House Melon;Development of Tele-robotic System (시설멜론용 다기능 재배생력화 시스템;원격 로봇작업 시스템 개발)

  • Im, D.H.;Kim, S.C.;Cho, S.I.;Chung, S.C.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.33 no.3
    • /
    • pp.186-195
    • /
    • 2008
  • In this paper, a prototype tele-operative system with a mobile base was developed to automate the cultivation of house melon. A man-machine interactive hybrid decision-making system with a tele-operative task interface was proposed to overcome the limitations of computer image recognition. Identifying a house melon, including its position, from the field image is critical to automating cultivation, and it is not simple, especially when the melon is partly covered by leaves and stems. The developed system was composed of 5 major modules: (a) a main remote monitoring and task control module, (b) a wireless remote image acquisition and data transmission module, (c) a three-wheel mobile base mounted with a 4-DOF articulated robot manipulator, (d) exchangeable modular end tools, and (e) a melon storage module. The system was operated through a graphical user interface on a touch-screen monitor, with wireless data communication among the operator, computer, and machine. Once a task was selected from the task control and monitoring module, the analog color image signal of the field was captured and transmitted wirelessly to the host computer using an R.F. module. A sequence of algorithms then identified the location and size of a melon based on local image processing. Laboratory experiments showed the practical feasibility of automating various cultivation tasks for house melon.
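The melon-localization step can be illustrated with a simple color-threshold-and-centroid sketch; the HSV thresholds and the function name are hypothetical, and the paper's actual algorithm sequence is not described in enough detail to reproduce:

```python
import numpy as np

def locate_melon(img_hsv, h_lo=30, h_hi=60, s_min=80):
    # Hypothetical sketch: threshold hue/saturation to segment the
    # melon, then return the blob centroid (x, y) and its pixel area.
    h, s = img_hsv[..., 0], img_hsv[..., 1]
    mask = (h >= h_lo) & (h <= h_hi) & (s >= s_min)
    if not mask.any():
        return None                     # no melon-colored pixels found
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean()), int(mask.sum())
```

When leaves partially occlude the fruit, a pure threshold like this fails, which is exactly why the paper falls back on a human operator in its hybrid decision-making loop.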

A Non-invasive Real-time Respiratory Organ Motion Tracking System for Image Guided Radio-Therapy (IGRT를 위한 비침습적인 호흡에 의한 장기 움직임 실시간 추적시스템)

  • Kim, Yoon-Jong;Yoon, Uei-Joong
    • Journal of Biomedical Engineering Research
    • /
    • v.28 no.5
    • /
    • pp.676-683
    • /
    • 2007
  • A non-invasive respiratory-gated radiotherapy system, such as one based on external anatomic motion, is more comfortable for patients during treatment than an invasive system. However, a higher correlation between external and internal anatomic motion is required to make non-invasive respiratory gating effective. Both invasive and non-invasive methods need to track the internal anatomy with high precision and rapid response, and the non-invasive method has particular difficulty tracking the target position continuously because it relies on image processing alone. We therefore developed a motion-tracking system for non-invasive respiratory gating that accurately finds the dynamic position of internal structures such as the diaphragm and a tumor. The respiratory organ-motion tracking apparatus consists of an image capture board, a fluoroscopy system, and a processing computer. After the image board grabs the motion of the internal anatomy through the fluoroscopy system, the computer acquires the organ-motion tracking data by image processing without any additional physical markers. The patients breathed freely, without forced breath control or coaching, while the experiment was performed. The developed pattern-recognition software extracts the target motion signal in real time from the acquired fluoroscopic images. The mean deviation between the real and acquired target positions was measured for sample structures in an anatomical model phantom. With standardized movement using a moving stage and the phantom, the mean and maximum deviations were less than 1 mm and 2 mm, respectively. In the human body, the mean and maximum peak-to-trough distances of diaphragm motion were 23.5 mm and 55.1 mm, respectively, for 13 patients. The acquired respiration profiles showed that the expiration period is longer than the inspiration period. The above results can be applied to respiratory-gated radiotherapy.
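The paper does not detail its pattern-recognition software, but marker-less tracking of an anatomical edge in fluoroscopy is commonly done with normalized cross-correlation; the 1-D sketch below (an assumed stand-in, searching an intensity profile for a template) illustrates the idea:

```python
import numpy as np

def track_1d(profile, template):
    # Find where `template` best matches the intensity profile (e.g. a
    # column through the diaphragm edge) via normalised cross-correlation.
    t = template - template.mean()
    best, best_pos = -np.inf, 0
    for i in range(len(profile) - len(t) + 1):
        w = profile[i:i + len(t)] - profile[i:i + len(t)].mean()
        score = (w @ t) / (np.linalg.norm(w) * np.linalg.norm(t) + 1e-12)
        if score > best:
            best, best_pos = score, i
    return best_pos
```

Running this on each fluoroscopic frame yields the target position over time, from which a respiration profile (and a gating signal) can be derived.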