• Title/Summary/Keyword: vision-based recognition

A Full Body Gumdo Game with an Intelligent Cyber Fencer using a Multi-modal (3D Vision and Speech) Interface

  • 윤정원;김세환;류제하;우운택
    • Journal of KIISE: Computing Practices and Letters, v.9 no.4, pp.420-430, 2003
  • This paper presents an immersive multimodal Gumdo simulation game that allows a user to experience whole-body interaction with an intelligent cyber fencer. The proposed system consists of three modules: (i) a non-distracting multimodal interface with 3D vision and speech, (ii) an intelligent cyber fencer, and (iii) immersive feedback through a large screen and sound. First, the multimodal interface with 3D vision and speech allows the user to move around and shout without being encumbered. Second, the intelligent cyber fencer provides intelligent interactions through perception and reaction modules built from the analysis of real Gumdo matches. Finally, immersive audio-visual feedback from the large screen and sound effects helps the user experience immersive interaction. The proposed system thus provides an immersive Gumdo experience involving whole-body movement and can be applied to areas such as education, exercise, and artistic performance.
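
As a rough illustration of the three-module structure the abstract describes, the sketch below wires vision, speech, and fencer-AI components into one game step. All class and method names here are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical sketch of the three-module architecture described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserState:
    position: tuple         # 3D body position from the vision module
    command: Optional[str]  # recognized shout, if any

class VisionModule:
    def sense(self) -> tuple:
        return (0.0, 0.0, 1.5)  # placeholder tracked 3D position

class SpeechModule:
    def listen(self) -> Optional[str]:
        return None             # placeholder recognized shout

class CyberFencer:
    def perceive_and_react(self, state: UserState) -> str:
        # Perception/reaction rules derived from real Gumdo analysis
        # would go here; this stub always blocks.
        return "block"

def game_step(vision, speech, fencer):
    state = UserState(vision.sense(), speech.listen())
    return fencer.perceive_and_react(state)  # drives screen/sound feedback

print(game_step(VisionModule(), SpeechModule(), CyberFencer()))
```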

Development and Performance Evaluation of a Hull Blasting Robot for Surface Pre-preparation for the Painting Process

  • Lee, JunHo;Jin, Taeseok
    • Journal of the Korean Institute of Intelligent Systems, v.26 no.5, pp.383-389, 2016
  • In this paper, we present a hull blasting machine with a vision-based weld bead recognition device for cleaning ship exterior walls. The purpose of this study is to introduce the mechanical design of a high-efficiency hull blasting machine that uses a vision system to recognize weld beads. We developed the robot mechanism and drive controller of the hull blasting robot, and its key characteristics, including the climbing mechanism, vision system, remote controller, and CAN communication, are discussed and compared against experimental data. The robot can remove rust and paint while the vessel is at anchor, making re-docking unnecessary; this saves the time and cost of the re-docking process and frees dock capacity for building more vessels. The robot uses sensors to navigate safely around the hull and has a filter system to collect the removed fouling. A pilot test of the robot has been completed, demonstrating its drive control and CAN communication performance.
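
The abstract does not detail the weld bead recognition algorithm, so the sketch below shows one common way such a detector could be built: edge detection plus a probabilistic Hough transform to find the bead as the dominant straight line on the hull plate. This is an illustrative assumption, not the authors' method.

```python
# Hedged sketch: weld bead recognition posed as line detection.
import cv2
import numpy as np

def find_weld_bead(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)
    # Weld beads appear as long, roughly straight ridges on the plate.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=120, maxLineGap=10)
    if lines is None:
        return None
    # Keep the longest candidate as the bead axis: (x1, y1, x2, y2).
    return max(lines[:, 0], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
```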

Thermal imaging and computer vision technologies for the enhancement of pig husbandry: a review

  • Md Nasim Reza;Md Razob Ali;Samsuzzaman;Md Shaha Nur Kabir;Md Rejaul Karim;Shahriar Ahmed;Hyunjin Kyoung;Gookhwan Kim;Sun-Ok Chung
    • Journal of Animal Science and Technology, v.66 no.1, pp.31-56, 2024
  • Pig farming, a vital industry, necessitates proactive measures for early disease detection and crush symptom monitoring to ensure optimum pig health and safety. This review explores advanced thermal sensing technologies and computer vision-based thermal imaging techniques employed for pig disease and piglet crush symptom monitoring on pig farms. Infrared thermography (IRT) is a non-invasive and efficient technology for measuring pig body temperature, providing advantages such as non-destructive, long-distance, and high-sensitivity measurements. Unlike traditional methods, IRT offers a quick and labor-saving approach to acquiring physiological data impacted by environmental temperature, crucial for understanding pig body physiology and metabolism. IRT aids in early disease detection, respiratory health monitoring, and evaluating vaccination effectiveness. Challenges include body surface emissivity variations affecting measurement accuracy. Thermal imaging and deep learning algorithms are used for pig behavior recognition, with the dorsal plane effective for stress detection. Remote health monitoring through thermal imaging, deep learning, and wearable devices facilitates non-invasive assessment of pig health, minimizing medication use. Integration of advanced sensors, thermal imaging, and deep learning shows potential for disease detection and improvement in pig farming, but challenges and ethical considerations must be addressed for successful implementation. This review summarizes the state-of-the-art technologies used in the pig farming industry, including computer vision algorithms such as object detection, image segmentation, and deep learning techniques. It also discusses the benefits and limitations of IRT technology, providing an overview of the current research field. This study provides valuable insights for researchers and farmers regarding IRT application in pig production, highlighting notable approaches and the latest research findings in this field.
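
One concrete form of the emissivity challenge flagged above is correcting an IRT reading for a surface that is not a perfect blackbody. The sketch below applies a simplified Stefan-Boltzmann correction that ignores atmospheric transmission; the emissivity figure is an assumed example, not a value from the review.

```python
# Simplified radiometric emissivity correction (atmosphere ignored).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def corrected_surface_temp(t_apparent_k, emissivity, t_reflected_k):
    """Estimate true surface temperature (K) from the camera's apparent
    blackbody reading, given surface emissivity and the reflected
    ambient temperature."""
    w_measured = SIGMA * t_apparent_k ** 4
    w_object = (w_measured
                - (1.0 - emissivity) * SIGMA * t_reflected_k ** 4) / emissivity
    return (w_object / SIGMA) ** 0.25

# Example with an assumed skin emissivity of 0.97 and 20 C surroundings.
print(corrected_surface_temp(308.0, 0.97, 293.0))
```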

Visual Sensor Design and Environment Modeling for Autonomous Mobile Welding Robots

  • Kim, Min-Yeong;Jo, Hyeong-Seok;Kim, Jae-Hun
    • Journal of Institute of Control, Robotics and Systems, v.8 no.9, pp.776-787, 2002
  • Automation of the welding process in shipyards is ultimately necessary, since welding sites are spatially enclosed by floors and girders and welding operators are therefore exposed to hostile working conditions. To solve this problem, a mobile welding robot that can navigate autonomously within the enclosure has been developed. To achieve the welding task in the closed space, the robotic welding system needs a sensor system for work environment recognition and weld seam tracking, together with a specially designed environment recognition strategy. In this paper, a three-dimensional laser vision system based on optical triangulation is developed in order to provide the robot with a 3D map of the work environment. Using this sensor system, a neural-network-based spatial filter is designed to extract the center of the laser stripe and is evaluated in various situations. An environment modeling algorithm is proposed and tested, composed of a laser scanning module for 3D voxel modeling and a plane reconstruction module for mobile robot localization. Finally, an environment recognition strategy is developed so that the welding mobile robot can recognize its work environment efficiently. The design of the sensor system, the algorithm for sensing the partially structured environment with plane segments, and the recognition strategy and tactics are described and discussed in detail with a series of experiments.
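
The paper extracts the laser stripe center with a neural-network spatial filter; as a simpler stand-in, the sketch below uses per-column intensity centroids and then converts pixel offsets to depth with standard optical triangulation geometry. The calibration parameters (focal length, baseline, laser plane angle) are assumptions.

```python
# Baseline stripe-center extraction + triangulation (not the paper's
# neural-network filter).
import numpy as np

def stripe_centers(gray):
    """Sub-pixel stripe row for each image column via intensity centroid."""
    rows = np.arange(gray.shape[0], dtype=np.float64)[:, None]
    w = gray.astype(np.float64)
    mass = w.sum(axis=0)
    centers = (rows * w).sum(axis=0) / np.maximum(mass, 1e-9)
    centers[mass < 1.0] = np.nan  # no stripe detected in this column
    return centers

def depth_from_offset(u_px, f_px, baseline_m, alpha_rad):
    """Pinhole camera + tilted laser plane: z = f*b / (u + f*tan(alpha)),
    with u the pixel offset from the optical axis."""
    return f_px * baseline_m / (u_px + f_px * np.tan(alpha_rad))
```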

CNN-based Image Rotation Correction Algorithm to Improve Image Recognition Rate

  • Lee, Donggu;Sun, Young-Ghyu;Kim, Soo-Hyun;Sim, Issac;Lee, Kye-San;Song, Myoung-Nam;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.20 no.1, pp.225-229, 2020
  • Recently, convolutional neural networks (CNNs) have shown outstanding performance in image recognition, image processing, and computer vision. In this paper, we propose a CNN-based image rotation correction algorithm as a solution to the image rotation problem, one of the factors that reduce the recognition rate of CNN-based image recognition systems. We train a deep learning model on the Leeds Sports Pose dataset to estimate the rotation angle, which is randomly set within a specific range. The trained model is evaluated by the mean absolute error (MAE) over 100 test images, yielding an MAE of 4.5951.
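
A minimal sketch of the rotation-angle regression described above, written in PyTorch: a small CNN regresses a single angle and is trained with L1 loss, which corresponds directly to the reported MAE metric. The network layout, image size, and angle range are assumptions, not the paper's configuration.

```python
# Sketch of CNN-based rotation-angle regression (assumed architecture).
import torch
import torch.nn as nn

class RotationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # regress one angle per image

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = RotationNet()
criterion = nn.L1Loss()  # L1 loss == mean absolute error (MAE)
x = torch.randn(8, 3, 64, 64)                 # dummy batch of rotated images
target = torch.empty(8, 1).uniform_(-45, 45)  # assumed angle range (degrees)
loss = criterion(model(x), target)
loss.backward()
```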

Hand Posture Recognition Robust to Rotation using Temporal Correlation between Adjacent Frames

  • Lee, Seong-Il;Min, Hyun-Seok;Shin, Ho-Chul;Lim, Eul-Gyoon;Hwang, Dae-Hwan;Ro, Yong-Man
    • Journal of Korea Multimedia Society, v.13 no.11, pp.1630-1642, 2010
  • Recently, there has been an increasing need to develop Hand Gesture Recognition (HGR) techniques for vision-based interfaces. Since a hand gesture is defined as a consecutive change of hand postures, a Hand Posture Recognition (HPR) algorithm is required. Among the factors that degrade HPR performance, we focus on rotation. To achieve rotation-invariant HPR, we propose a method that exploits the high correlation between adjacent video frames, considering the HGR environment. Using this property, the proposed method introduces the template update of object tracking, which differs from previous works based on still images. To compare the proposed method with previous methods such as template matching, PCA, and LBP, we performed experiments on video containing hand rotation. The accuracy of the proposed method is 22.7%, 14.5%, 10.7%, and 4.3% higher than that of ordinary template matching, template matching using the KL transform, PCA, and LBP, respectively.
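
The template-update idea can be sketched as follows: each frame is matched against the current template, and the template is then refreshed from the best-match window so that gradual in-plane rotation is absorbed frame by frame. The matching method and blending factor below are assumptions, not the paper's exact scheme.

```python
# Sketch of tracking with template update between adjacent frames.
import cv2

def track_and_update(frame_gray, template, alpha=0.3):
    res = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(res)  # best-match location
    h, w = template.shape
    patch = frame_gray[y:y + h, x:x + w]
    # Blend old and new appearance so the template follows gradual
    # rotation without drifting onto the background too quickly.
    new_template = cv2.addWeighted(template, 1.0 - alpha, patch, alpha, 0.0)
    return (x, y, w, h), score, new_template
```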

Mobile Robot Control using Hand Shape Recognition

  • Kim, Young-Rae;Kim, Eun-Yi;Chang, Jae-Sik;Park, Se-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI, v.45 no.4, pp.34-40, 2008
  • This paper presents a vision-based walking robot control system using hand shape recognition. To recognize hand shapes, the hand boundary must be tracked accurately in images obtained from a moving camera. For this, we use an active contour model-based tracking approach combined with mean shift, which reduces the active contour model's dependency on the location of the initial curve. The proposed system is composed of four modules: a hand detector, a hand tracker, a hand shape recognizer, and a robot controller. The hand detector identifies a skin-color region with a specific shape as the hand in an image. Hand tracking is then performed using the active contour model with mean shift, and hand shape recognition is performed using Hu moments. To assess its validity, we applied the proposed system to a walking robot, RCB-1. The experimental results show the effectiveness of the proposed system.
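
Shape description with Hu moments, the feature named in the abstract, can be sketched as below; the logarithmic scaling of the seven moments is common practice and an assumption here, not a detail from the paper.

```python
# Sketch: Hu-moment feature vector from a segmented hand mask.
import cv2
import numpy as np

def hand_shape_feature(mask):
    """mask: binary image of the segmented hand (skin-color region)."""
    m = cv2.moments(mask, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()  # seven rotation-invariant moments
    # Log-scale the moments so their magnitudes become comparable.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```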

A Study on Image Recognition based on the Characteristics of Retinal Cells

  • Cho, Jae-Hyun;Kim, Do-Hyeon;Kim, Kwang-Baek
    • Journal of the Korea Institute of Information and Communication Engineering, v.11 no.11, pp.2143-2149, 2007
  • The visual cortex stimulator, one type of artificial retina prosthesis for the blind, stimulates brain cells directly rather than processing information along the path from the retina to the visual cortex. In this paper, we propose an image construction and recognition model similar to human visual processing, which recognizes feature data carrying orientation information, a characteristic of the visual cortex. A backpropagation algorithm based on the delta-bar-delta rule is used for recognition after image features are extracted with the Kirsch edge detector. Various numeral patterns are used to analyze the performance of the proposed method. In experiments, the proposed recognition model, which extracts image characteristics with orientation information along the path from retinal cells to the visual cortex, shows only a small difference in recognition rate but proves insensitive to a wide range of learning rates, similar to the human visual system.
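
The Kirsch edge detector used for feature extraction above convolves the image with eight rotated compass kernels and keeps the maximum response per pixel; a short sketch follows.

```python
# Sketch of the Kirsch compass edge detector.
import cv2
import numpy as np

BASE = np.array([[5, 5, 5],
                 [-3, 0, -3],
                 [-3, -3, -3]], dtype=np.float32)  # "north" kernel

def kirsch_kernels():
    k = BASE.copy()
    for _ in range(8):
        yield k.copy()
        # Rotate the eight border entries one step clockwise.
        ring = [(0, 0), (0, 1), (0, 2), (1, 2),
                (2, 2), (2, 1), (2, 0), (1, 0)]
        vals = [k[r] for r in ring]
        for (r, c), v in zip(ring, vals[-1:] + vals[:-1]):
            k[r, c] = v

def kirsch_edges(gray):
    responses = [cv2.filter2D(gray.astype(np.float32), -1, kern)
                 for kern in kirsch_kernels()]
    return np.max(responses, axis=0)  # strongest directional response
```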

Real-time Recognition and Tracking System of Multiple Moving Objects

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences, v.36 no.7C, pp.421-427, 2011
  • The importance of real-time object recognition and tracking has been growing steadily due to rapid advancement in the computer vision applications industry. The mean-shift algorithm is widely used in robust real-time object tracking systems: it is easy to implement and computationally efficient, so it is considered well suited to real-time tracking. However, one of its major drawbacks is that it always converges to a local mode and therefore performs poorly in cluttered environments. In this paper, an optical-flow-based algorithm suited to real-time recognition of multiple moving objects is proposed. In tests, the proposed method raised the similarity measure for multiple moving objects to 0.96, up 13.4% over the mean-shift algorithm, while pixel errors decreased by more than 50% compared with the mean-shift algorithm. If data processing time in video surveillance systems can be reduced further through improved algorithms for faster moving-object recognition and tracking, much more efficient intelligent systems can be expected in this industrial arena.
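
The abstract does not specify which optical flow variant is used, so the sketch below shows one common realization: Farneback dense flow, a magnitude threshold, and connected regions as the detected moving objects. All parameter values are assumptions.

```python
# Sketch: optical-flow-based detection of multiple moving objects.
import cv2
import numpy as np

def moving_object_boxes(prev_gray, cur_gray, mag_thresh=2.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
    mag = np.linalg.norm(flow, axis=2)           # per-pixel flow magnitude
    mask = (mag > mag_thresh).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # One bounding box per independently moving region.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
```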

Accelerometer-based Gesture Recognition for Robot Interface

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems, v.17 no.1, pp.53-69, 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction. But it is widely recognized that the performance of vision- and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction for real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental context such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, an essential repertoire for robot-based education services. Recognizing 26 handwriting patterns from accelerometer data is very difficult because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with sets of 8-10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To improve discriminative power over the complex alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3-5% better than those using raw features, e.g., the acceleration signal itself or statistical summaries. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture vary greatly among performers. To tackle this problem, online incremental learning is applied to make the system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter, we observed that as the number of reference patterns grows, some reference patterns contribute increasingly to false-positive classifications. We therefore devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern; it runs periodically to remove reference patterns with a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30 to 50. Each letter was performed 5 times per participant using a Nintendo® Wii™ remote, and the acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall rates and exhibited high pairwise confusion; the major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed heavily to false-positive classifications of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and one control gesture), the performance of our system is superior given the number and complexity of its pattern classes. Using our gesture interaction system, we conducted two case studies of robot-based edutainment services, implemented on various robot platforms and mobile devices including the iPhone™. The participating children showed improved concentration and reacted actively to the service with our gesture interface. To verify the effectiveness of the gesture interface, the children took a test after experiencing an English teaching service; those who used the gesture interface-based robot content scored 10% higher than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for real-world robot-based services and content, complementing the limits of today's conventional interfaces, e.g., touch screens, vision, and voice.
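
The reference-set optimization described above can be sketched as a nearest-neighbor instance-based learner that tracks how often each reference pattern produces a correct or incorrect match and periodically prunes the poor contributors. The distance metric, thresholds, and feature type below are assumptions, not the paper's exact parameters.

```python
# Sketch: instance-based learning with periodic reference-set pruning.
import numpy as np

class PrunedIBL:
    def __init__(self):
        self.refs = []  # list of (trajectory_feature, label) pairs
        self.pos = []   # times each reference was nearest and correct
        self.neg = []   # times each reference was nearest and wrong

    def add(self, feature, label):
        self.refs.append((np.asarray(feature, float), label))
        self.pos.append(0)
        self.neg.append(0)

    def classify(self, feature, true_label=None):
        feature = np.asarray(feature, float)
        i = int(np.argmin([np.linalg.norm(feature - r)
                           for r, _ in self.refs]))
        pred = self.refs[i][1]
        if true_label is not None:   # online feedback available
            if pred == true_label:
                self.pos[i] += 1
            else:
                self.neg[i] += 1     # this reference misleads the classifier
        return pred

    def prune(self, min_pos=1, max_neg=3):
        # Drop references with high negative or very low positive
        # contribution; keep untested references for now.
        keep = [j for j in range(len(self.refs))
                if self.neg[j] <= max_neg and
                (self.pos[j] >= min_pos or self.pos[j] + self.neg[j] == 0)]
        self.refs = [self.refs[j] for j in keep]
        self.pos = [self.pos[j] for j in keep]
        self.neg = [self.neg[j] for j in keep]
```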