• Title/Summary/Keyword: Camera-based Recognition


Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems, v.17 no.1, pp.53-69, 2011
  • Vision and voice-based technologies are commonly utilized for human-robot interaction. But it is widely recognized that the performance of vision- and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variances. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are now widely available in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task to take on because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with pattern sets of 8~10 simple and easily distinguishable gestures that are useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. 
To promote the discriminative power over complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3%~5% better than those using raw features, e.g. the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture vary greatly among different performers. To tackle this problem, online incremental learning is applied to make our system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed a tendency that as the number of reference patterns grows, some reference patterns contribute more to false positive classification. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern. The algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution. Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30~50. Each alphabet letter was performed 5 times per participant using a Nintendo® Wii™ remote, and the acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded a very low recall rate and exhibited a very high pairwise confusion rate; major confusion pairs are D(88%) and P(74%), I(81%) and U(75%), and N(88%) and W(100%). 
Though W was recalled perfectly, it contributed much to the false positive classification of N. Comparing with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), we find that the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices including the iPhone™. The participating children exhibited improved concentration and active reaction to the service with our gesture interface. To prove the effectiveness of our gesture interface, the children took a test after experiencing an English teaching service. The results showed that those who played with the gesture interface-based robot content scored 10% better than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for flourishing real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g. touch screens, vision, and voice.
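The periodic reference-pattern optimization described in the abstract can be sketched as follows; the counter names and thresholds here are illustrative assumptions, not values from the paper.

```python
# Sketch of reference-pattern pruning for instance-based learning (IBL).
# Each reference records how often it supported a correct classification
# ("pos") and how often it caused a false positive ("neg").

def prune_references(references, pos_threshold=1, neg_threshold=5):
    """Keep reference patterns whose positive contribution is high enough
    and whose negative (false-positive) contribution is low enough."""
    return [r for r in references
            if r["pos"] >= pos_threshold and r["neg"] < neg_threshold]

refs = [
    {"pattern": "W-1", "pos": 10, "neg": 0},  # strong, clean reference
    {"pattern": "W-2", "pos": 0,  "neg": 7},  # mostly misclassifies N as W
    {"pattern": "N-1", "pos": 3,  "neg": 1},
]
kept = prune_references(refs)  # drops the harmful W-2 reference
```

Run periodically, such a rule keeps the reference set small while removing patterns like the W references that dominated the N/W confusion pair.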

A study on development of RGB color variable optical ID module considering smart factory environment (스마트 팩토리 환경을 고려한 RGB 컬러 가변형 광 ID 모듈개발 연구)

  • Lee, Min-Ho;Timur, Khudaybergenov;Lee, Beom-Hee;Cho, Ju-Phil;Cha, Jae-Sang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.11 no.5, pp.623-629, 2018
  • A smart factory is an automated production system created by the fusion of ICT and manufacturing. As a base technology for realizing such a smart factory, interest in low-power, environmentally friendly LED lighting systems is increasing, and research on so-called optical ID application technologies, such as LED-based communication and position recognition, is actively underway. In this paper, we propose a system that can reliably identify logistics locations and additional information without being affected by electromagnetic interference from sources such as high voltage, high current, and generators in the plant. Through a basic experiment, we confirmed the applicability of the approach, with the color ID recognition rate ranging from 98.8% down to 93.8% across the eight color variations at short distance.
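An eight-color optical ID can carry three bits per symbol. The toy decoder below illustrates the idea; the palette and bit assignments are assumptions for illustration, not the module's actual mapping.

```python
# Hypothetical 3-bit palette for an eight-color optical ID.
PALETTE = {
    (255, 0, 0):     0b000,  # red
    (0, 255, 0):     0b001,  # green
    (0, 0, 255):     0b010,  # blue
    (255, 255, 0):   0b011,  # yellow
    (0, 255, 255):   0b100,  # cyan
    (255, 0, 255):   0b101,  # magenta
    (255, 255, 255): 0b110,  # white
    (255, 128, 0):   0b111,  # orange
}

def decode_color(rgb):
    """Return the ID of the nearest palette color (squared Euclidean in RGB),
    tolerating the color drift a real camera would introduce."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(PALETTE, key=lambda c: dist2(c, rgb))
    return PALETTE[nearest]
```

Nearest-color decoding is what makes the recognition rate distance-dependent: as the LED gets farther away, measured colors drift toward neighboring palette entries.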

Development of On-line Quality Sorting System for Dried Oak Mushroom - 3rd Prototype-

  • 김철수;김기동;조기현;이정택;김진현
    • Agricultural and Biosystems Engineering, v.4 no.1, pp.8-15, 2003
  • In Korea, quality evaluation of dried oak mushrooms is done by first classifying them into more than 10 different categories based on the state of opening of the cap, surface pattern, and color. Mushrooms of each category are further classified into 3 or 4 groups based on shape and size, resulting in a total of 30 to 40 different grades. Quality evaluation and sorting based on external visual features are usually done manually. Since the visual features affecting quality grades are distributed over the entire surface of the mushroom, both the front (cap) and back (stem and gill) surfaces should be inspected thoroughly. In fact, it is almost impossible for a human to inspect every mushroom, especially when they are fed continuously via conveyor. In this paper, considering real-time on-line system implementation, image processing algorithms utilizing an artificial neural network have been developed for the quality grading of mushrooms. The neural network based image processing used the raw gray-value image of fed mushrooms captured by the camera, without any complex processing such as feature enhancement and extraction, to identify the feeding state and to grade the quality of each mushroom. The developed algorithms were implemented in a prototype on-line grading and sorting system, designed to simplify the system requirements and the overall mechanism. The system was composed of automatic devices for mushroom feeding and handling, a computer vision system with a lighting chamber, a one-chip microprocessor based controller, and pneumatic actuators. The proposed grading scheme was tested using the prototype. Network training for feeding-state recognition and grading was done using static images: 200 samples (20 grade levels, 10 per grade) were used for training, and 300 samples (20 grade levels, 15 per grade) were used to validate the trained network. 
By changing the orientation of each sample, 600 data sets were made for the test, and the trained network showed around 91% grading accuracy. Though image processing itself required less than about 0.3 second per mushroom, because of the actuating device and control response an average of 0.6 to 0.7 second was required to grade and sort each mushroom, resulting in a processing capacity of 5,000 to 6,000 mushrooms per hour.
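The core grading idea, feeding raw gray values directly into a small feed-forward network with no feature extraction, can be sketched as a minimal forward pass; the layer sizes and random weights below are illustrative placeholders, not the trained network.

```python
import math
import random

def forward(pixels, w_hidden, w_out):
    """One forward pass: raw gray pixels -> sigmoid hidden layer -> grade
    scores; returns the index of the highest-scoring grade."""
    hidden = [1.0 / (1.0 + math.exp(-sum(p * w for p, w in zip(pixels, row))))
              for row in w_hidden]
    scores = [sum(h * w for h, w in zip(hidden, row)) for row in w_out]
    return scores.index(max(scores))

random.seed(0)
pixels = [0.5] * 16                                    # toy 4x4 gray image
w_hidden = [[random.uniform(-1, 1) for _ in range(16)] for _ in range(8)]
w_out = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(20)]
grade = forward(pixels, w_hidden, w_out)               # one of 20 grades
```

The point of the design is visible in the signature: the network consumes the pixel vector directly, so no hand-crafted feature stage sits between camera and grade.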


Design and Implementation of Mobile Vision-based Augmented Galaga using Real Objects (실제 물체를 이용한 모바일 비전 기술 기반의 실감형 갤러그의 설계 및 구현)

  • Park, An-Jin;Yang, Jong-Yeol;Jung, Kee-Chul
    • Journal of Korea Game Society, v.8 no.2, pp.85-96, 2008
  • Recently, research on augmented games as a new game genre has attracted a lot of attention. An augmented game overlays virtual objects on an augmented reality (AR) environment, allowing game players to interact with the AR environment by manipulating real and virtual objects. However, it is difficult to release existing augmented games to ordinary game players, as the games generally use very expensive and inconvenient 'backpack' systems. To solve this problem, several augmented games have been proposed using mobile devices equipped with cameras, but they can only be enjoyed at a previously prepared location, as a 'color marker' or 'pattern marker' is used to overlay the virtual objects on the real environment. Accordingly, this paper introduces an augmented game called augmented Galaga, based on the traditional, well-known Galaga and executed on mobile devices, which game players can experience without any economic burden. Augmented Galaga uses real objects in real environments, recognizing them with scale-invariant feature transform (SIFT) features and Euclidean distance. Virtual aliens appear randomly around specific objects; several specific objects are used to heighten interest, and game players attack the virtual aliens by moving the mobile device towards a specific object and clicking a button on the device. As a result, we expect that augmented Galaga provides an exciting experience for players without any economic burden, based on a game paradigm in which the user interacts with both the physical world captured by the mobile camera and virtual aliens automatically generated by the mobile device.
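The recognition step, matching descriptors to a stored object database by Euclidean distance, can be sketched with toy vectors; real SIFT descriptors are 128-dimensional, and the function names, database, and threshold below are illustrative assumptions.

```python
# Toy nearest-descriptor object recognition in the style of SIFT matching.

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def recognize(query_descs, object_db, max_dist=0.5):
    """Return the stored object with the most query descriptors that find
    a close enough match, or None if nothing matches."""
    best_name, best_matches = None, 0
    for name, descs in object_db.items():
        matches = sum(1 for q in query_descs
                      if min(euclidean(q, d) for d in descs) < max_dist)
        if matches > best_matches:
            best_name, best_matches = name, matches
    return best_name

db = {"mug": [[0.1, 0.9], [0.8, 0.2]], "book": [[0.5, 0.5]]}
result = recognize([[0.12, 0.88], [0.79, 0.22]], db)  # both match "mug"
```

Counting per-object descriptor matches, rather than taking a single nearest neighbor, is what makes this style of matching tolerant of a few spurious descriptors in the camera frame.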


Information Fusion of Cameras and Laser Radars for Perception Systems of Autonomous Vehicles (영상 및 레이저레이더 정보융합을 통한 자율주행자동차의 주행환경인식 및 추적방법)

  • Lee, Minchae;Han, Jaehyun;Jang, Chulhoon;Sunwoo, Myoungho
    • Journal of the Korean Institute of Intelligent Systems, v.23 no.1, pp.35-45, 2013
  • An autonomous vehicle requires more capable and robust perception systems than conventional intelligent vehicles. In particular, single-sensor perception systems have been widely studied using cameras and laser radar sensors, the most representative perception sensors, which provide object information such as distance and object features. The distance information of the laser radar sensor is used for perceiving road structures, vehicles, and pedestrians, while the image information of the camera is used for visual recognition of lanes, crosswalks, and traffic signs. However, single-sensor perception systems suffer from false positives and missed detections caused by sensor limitations and road environments. Accordingly, information fusion systems are essential to ensure the robustness and stability of perception systems in harsh environments. This paper describes a perception system for autonomous vehicles that performs information fusion to recognize road environments. In particular, vision and laser radar sensors are fused to detect lanes, crosswalks, and obstacles. The proposed perception system was validated on various roads and environmental conditions with an autonomous vehicle.
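One simple way to see why fusion helps is a bearing-based association: camera detections supply the class label, laser-radar returns supply the range. The sketch below is an illustrative toy, with made-up values, not the paper's fusion algorithm.

```python
# Toy camera/laser-radar fusion by nearest bearing angle.

def fuse(camera_dets, lidar_points, max_bearing_diff=2.0):
    """camera_dets: list of (label, bearing_deg) from the camera.
    lidar_points: list of (range_m, bearing_deg) from the laser radar.
    Returns (label, range_m) pairs for detections with a close lidar return."""
    fused = []
    for label, cam_bearing in camera_dets:
        # nearest lidar return in bearing
        diff, rng = min((abs(b - cam_bearing), r) for r, b in lidar_points)
        if diff <= max_bearing_diff:
            fused.append((label, rng))
    return fused

result = fuse([("pedestrian", 10.0), ("vehicle", -5.0)],
              [(12.5, 10.5), (30.0, -4.8), (50.0, 40.0)])
```

A camera-only false positive with no nearby lidar return is dropped, while a lidar return gains a semantic label only when the camera agrees, which is the robustness argument the abstract makes.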

Individual Ortho-rectification of Coast Guard Aerial Images for Oil Spill Monitoring (유출유 모니터링을 위한 해경 항공 영상의 개별정사보정)

  • Oh, Youngon;Bui, An Ngoc;Choi, Kyoungah;Lee, Impyeong
    • Korean Journal of Remote Sensing, v.38 no.6_1, pp.1479-1488, 2022
  • Oil spill accidents occur intermittently in the ocean due to ship collisions and sinkings. To prepare prompt countermeasures when such an accident occurs, it is necessary to accurately identify the current status of the spilled oil. To this end, the Coast Guard patrols the target area with a fixed-wing airplane or helicopter and checks it with the naked eye or video, but it has been difficult to determine the area contaminated by the spilled oil and its exact location on a map. Accordingly, this study develops a technology for direct ortho-rectification that automatically geo-references aerial images collected by the Coast Guard, without individual ground control points, to identify the current status of spilled oil. First, the meta information required for georeferencing is extracted by optical character recognition (OCR) from the on-screen visualization of sensor information overlaid on the video. Based on the extracted information, the exterior orientation parameters of each image are determined, and the images are individually orthorectified using these parameters. The accuracy of the individual orthoimages generated through this method was evaluated at about tens of meters, up to 100 m. This accuracy level is reasonably acceptable considering the inherent errors of the position and attitude sensors, the inaccuracies in the interior orientation parameters such as camera focal length, and the absence of ground control points, and it is judged adequate for identifying the current status of oil-contaminated areas at sea. In the future, if real-time transmission of images captured during flight becomes possible, individual orthoimages can be generated in real time through the proposed individual orthorectification technology and used effectively to quickly identify the status of spilled oil contamination and establish countermeasures.
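Orthorectification rests on projecting between ground and image coordinates using the exterior orientation. A minimal pinhole sketch, assuming a nadir-looking camera with identity rotation and toy values rather than the Coast Guard sensor parameters, is:

```python
# Simplified collinearity-style projection: ground point -> image point.

def project(ground, cam_pos, focal):
    """Project a ground point (X, Y, Z) into image coordinates (x, y) for a
    camera at cam_pos with focal length in meters, identity rotation."""
    dx = ground[0] - cam_pos[0]
    dy = ground[1] - cam_pos[1]
    dz = ground[2] - cam_pos[2]          # negative: ground below camera
    return (-focal * dx / dz, -focal * dy / dz)

# Camera 1000 m above a point 5 m east / 10 m north of its ground track.
x, y = project((105.0, 210.0, 0.0), (100.0, 200.0, 1000.0), 0.05)
```

In the full collinearity equations the three rotation angles from the attitude sensor enter as a rotation matrix applied to (dx, dy, dz); errors in those angles are exactly what drives the tens-of-meters ground accuracy the abstract reports.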

Histogram-Based Singular Value Decomposition for Object Identification and Tracking (객체 식별 및 추적을 위한 히스토그램 기반 특이값 분해)

  • Ye-yeon Kang;Jeong-Min Park;HoonJoon Kouh;Kyungyong Chung
    • Journal of Internet Computing and Services, v.24 no.5, pp.29-35, 2023
  • CCTV is used for various purposes such as crime prevention, public safety reinforcement, and traffic management. However, as camera coverage and resolution improve, there is a risk of exposing personal information in the video. Therefore, new technologies are needed that can identify individuals while protecting personal information in images. In this paper, we propose histogram-based singular value decomposition for object identification and tracking. The proposed method distinguishes the different objects present in an image using the color information of each object. For object recognition, YOLO and DeepSORT are used to detect and extract the people present in the image. Grayscale histogram color values are extracted using the location information of each detected person. Singular value decomposition is used to retain only the meaningful information among the extracted color values; the accuracy of object color extraction is increased by using the average of the upper singular values in the result. The color information extracted using singular value decomposition is compared with the colors present in other images, and the same person appearing in different images is detected. Euclidean distance is used for the color comparison, and Top-N is used for accuracy evaluation. In the evaluation, detecting the same person using the grayscale histogram and singular value decomposition achieved accuracies ranging from a maximum of 100% down to a minimum of 74%.
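The pipeline of histogram extraction, SVD denoising, and Euclidean comparison can be sketched with numpy; the matrix layout, top-k choice, and threshold below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def compress_histograms(hists, k=2):
    """Stack per-frame grayscale histograms as rows and reconstruct with
    only the top-k singular values, keeping the dominant color information."""
    H = np.array(hists, dtype=float)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s[k:] = 0.0                        # drop the smaller singular values
    return U @ np.diag(s) @ Vt

def same_person(sig_a, sig_b, threshold=1.0):
    """Compare mean compressed signatures by Euclidean distance."""
    return bool(np.linalg.norm(sig_a.mean(axis=0) - sig_b.mean(axis=0))
                < threshold)

person_a = compress_histograms([[5, 1, 0, 0], [4, 2, 0, 0]])
person_b = compress_histograms([[5, 1, 0, 1], [4, 1, 0, 0]])  # similar outfit
person_c = compress_histograms([[0, 0, 5, 5], [0, 1, 4, 6]])  # different
```

Truncating the SVD suppresses frame-to-frame histogram noise before the Euclidean comparison, which is the role the abstract assigns to the upper singular values.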

Visual Touchless User Interface for Window Manipulation (윈도우 제어를 위한 시각적 비접촉 사용자 인터페이스)

  • Kim, Jin-Woo;Jung, Kyung-Boo;Jeong, Seung-Do;Choi, Byung-Uk
    • Journal of KIISE: Software and Applications, v.36 no.6, pp.471-478, 2009
  • Recently, research on user interfaces has advanced remarkably due to the explosive growth of 3-dimensional content and applications and the broadening range of computer users. This paper proposes a novel method to manipulate windows efficiently using only intuitive hand motions. Previous methods have drawbacks such as the burden of expensive devices, the high complexity of gesture recognition, and reliance on additional marker information. To remedy these defects, we propose a novel visual touchless interface. First, we detect the hand region using the hue channel in HSV color space. The distance transform method is applied to detect the centroid of the hand, and the curvature of the hand contour is used to determine the positions of the fingertips. Finally, using the hand motion information, we recognize the hand gesture as one of seven predefined motions, and the recognized gesture becomes a command to control the window. In the proposed method, the user can manipulate windows with a sense of depth in the real environment because the method adopts a stereo camera. Intuitive manipulation is also available because the proposed method supports visual touch of the virtual object that the user wants to manipulate, using only simple hand motions. Finally, the efficiency of the proposed method is verified via an application based on the proposed interface.
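The first two stages, hue thresholding and hand-center estimation, can be illustrated on a toy hue image; the hue range is an illustrative guess, and the simple mask centroid below stands in for the paper's distance-transform center.

```python
# Toy hand detection: hue threshold -> binary mask -> centroid.

def hue_mask(hue_image, lo=0, hi=20):
    """Binary mask of pixels whose hue falls inside [lo, hi] (skin-like)."""
    return [[1 if lo <= h <= hi else 0 for h in row] for row in hue_image]

def centroid(mask):
    """(row, col) centroid of the foreground pixels in a binary mask."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

hue = [[5, 90, 90],
       [10, 15, 90],
       [90, 12, 90]]          # low hues form a hand-like blob on the left
center = centroid(hue_mask(hue))
```

In the paper's pipeline the distance transform replaces this plain centroid, since its maximum sits deep inside the palm and is more stable against fingers entering and leaving the mask.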

Driver's Status Recognition Using Multiple Wearable Sensors (다중 웨어러블 센서를 활용한 운전자 상태 인식)

  • Shin, Euiseob;Kim, Myong-Guk;Lee, Changook;Kang, Hang-Bong
    • KIPS Transactions on Computer and Communication Systems, v.6 no.6, pp.271-280, 2017
  • In this paper, we propose a new safety system composed of a wearable device, a driver's seat belt, and an integrating controller. The wearable device and seat belt capture the driver's biological information, while the integrating controller analyzes the captured signals to alert the driver or directly control the car as appropriate to the driver's status. Previous driver-safety studies, which captured the driver's physiological signals and facial information through the driver's seat, steering wheel, or a facial camera, had difficulty gathering accurate and continuous signals because the sensors required an upright posture from the driver. Utilizing wearable sensors, however, our proposed system can obtain continuous and highly accurate signals compared to previous research. Our wearable apparatus features a sensor that measures heart rate, skin conductivity, and skin temperature, and it applies filters to eliminate the noise generated by the automobile. Moreover, the acceleration and gyro sensors in the wearable device reduce measurement errors. Based on the collected bio-signals, criteria for identifying the driver's condition were established. An accredited certification body has verified that the device has medical-grade accuracy. Laboratory tests and real-automobile tests demonstrate that the proposed system measures the driver's condition well.
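As a stand-in for the noise filtering applied to the bio-signals, a simple moving average over the sampled heart rate shows the idea; the window size and data are toy values, not the system's actual filter.

```python
# Moving-average smoothing of a noisy heart-rate sample stream.

def moving_average(signal, window=3):
    """Smooth a 1-D signal with a trailing window (shorter at the start)."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

noisy_hr = [70, 71, 95, 72, 70, 69]   # a spike from vehicle vibration
smooth_hr = moving_average(noisy_hr)  # spike is strongly attenuated
```

In practice the automobile noise sources the abstract mentions (vibration, engine interference) motivate filters tuned per signal, with the accelerometer and gyro readings used to flag motion-corrupted samples.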

Development of CCTV Cooperation Tracking System for Real-Time Crime Monitoring (실시간 범죄 모니터링을 위한 CCTV 협업 추적시스템 개발 연구)

  • Choi, Woo-Chul;Na, Joon-Yeop
    • Journal of the Korea Academia-Industrial cooperation Society, v.20 no.12, pp.546-554, 2019
  • Typically, closed-circuit television (CCTV) monitoring is mainly used for post-incident processes (i.e. to provide evidence after an incident has occurred), but by using a streaming video feed, machine learning, and advanced image recognition techniques, current technology can be extended to respond to crimes or reports of missing persons in real time. The multi-CCTV cooperation technique developed in this study is a program model that extracts similarity information about a suspect (or moving object) from the CCTV at one location and sends it to a monitoring agent, so that the selected suspect or object can be tracked by another CCTV camera when it moves out of range. To improve the operating efficiency of local government CCTV control centers, we describe the partial automation of a CCTV control system that currently relies on monitoring by human agents. We envisage an integrated crime prevention service that incorporates the cooperative CCTV network suggested in this study and that citizens can easily experience, for example by determining a precise individual location in real time and by providing a crime prevention service linked to smartphones and/or crime prevention/safety information.
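The camera-to-camera handoff driven by similarity information can be sketched as follows; the cosine-similarity signature, camera names, and threshold are illustrative assumptions, not the study's implementation.

```python
# Toy multi-CCTV handoff: match a suspect's appearance signature against
# detections reported by neighboring cameras.

def similarity(sig_a, sig_b):
    """Cosine similarity between two appearance feature vectors."""
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    na = sum(a * a for a in sig_a) ** 0.5
    nb = sum(b * b for b in sig_b) ** 0.5
    return dot / (na * nb)

def hand_off(suspect_sig, neighbor_detections, threshold=0.9):
    """Return (camera_id, detection_id) of the best match above the
    threshold, or None so a human agent can take over."""
    best, best_sim = None, threshold
    for cam_id, detections in neighbor_detections.items():
        for det_id, sig in detections.items():
            s = similarity(suspect_sig, sig)
            if s > best_sim:
                best, best_sim = (cam_id, det_id), s
    return best

detections = {"cam2": {"d1": [0.9, 0.1, 0.0], "d2": [0.1, 0.9, 0.2]},
              "cam3": {"d3": [0.0, 0.2, 0.9]}}
match = hand_off([1.0, 0.1, 0.0], detections)
```

Returning None below the threshold reflects the partial-automation goal: the system continues tracks it is confident about and defers ambiguous handoffs to the human monitoring agent.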