• Title/Summary/Keyword: Sensor fusion

Ground Target Classification Algorithm based on Multi-Sensor Images (다중센서 영상 기반의 지상 표적 분류 알고리즘)

  • Lee, Eun-Young; Gu, Eun-Hye; Lee, Hee-Yul; Cho, Woong-Ho; Park, Kil-Houm
    • Journal of Korea Multimedia Society / v.15 no.2 / pp.195-203 / 2012
  • This paper proposes a ground target classification algorithm based on decision fusion, together with a feature extraction method using multi-sensor images. The decisions obtained from the individual classifiers are fused by a weighted voting method to improve the target recognition rate. To classify the targets in the individual sensor images, features robust to scale and rotation are extracted using the brightness difference of CM images obtained from the CCD image, and the boundary similarity and the width ratio between the vehicle body and the turret of the target in the FLIR image. Finally, we verify the performance of the proposed classification algorithm and feature extraction method by experiment.
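The weighted-voting decision fusion described in the abstract can be sketched as follows; the class labels and weight values are illustrative placeholders, not taken from the paper:

```python
from collections import defaultdict

def fuse_decisions(decisions, weights):
    """Weighted-voting decision fusion: each per-sensor classifier casts a
    vote for a class label, weighted by its assumed reliability; the fused
    decision is the label with the highest accumulated weight."""
    scores = defaultdict(float)
    for label, w in zip(decisions, weights):
        scores[label] += w
    return max(scores, key=scores.get)

# Example: two of three classifiers vote "tank"; the weighted tally decides.
fused = fuse_decisions(["tank", "APC", "tank"], [0.5, 0.3, 0.2])  # → "tank"
```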

Design of Fusion Multilabeling System Controlled by Wi-Fi Signals (Wi-Fi신호로 제어되는 융합형 다중라벨기 설계)

  • Lim, Joong-Soo
    • Journal of the Korea Convergence Society / v.6 no.1 / pp.1-5 / 2015
  • In this paper, we describe the design of a fusion labeling system controlled by Wi-Fi signals. The labeling systems currently used in industry are designed to work independently on the production line, without connection to Internet network services. For this reason, it is very inconvenient to transfer the labeling data of the production line to a server computer. We propose a labeling system connected to a Wi-Fi service, capable of real-time transmission of labeling data. This system can supply the labeling data of the production line to the server computer in real time and improves production quality over the existing system.

On-line Measurement and Characterization of Nano-web Qualities Using a Stochastic Sensor Fusion System: Design and Implementation of NAFIS (NAno-Fiber Information System)

  • Kim, Joovong; Lim, Dae-Young; Byun, Sung-Weon
    • Proceedings of the Korean Fiber Society Conference / 2003.10a / pp.45-46 / 2003
  • A process control system has been developed for the measurement and characterization of nano-fiber web qualities. The nano-fiber information system (NAFIS) consists of a measurement device and an analysis algorithm: a microscope-laser sensor fusion system and a process information system, respectively. NAFIS has proven so successful in detecting irregularities of pore and diameter that the resulting product has remained well under control even at high production rates. Pore distribution, fiber diameter, and mass uniformity are readily measured and analyzed by integrating the non-contact measurement technology with a random-function-based time-domain signal/image processing algorithm. The qualities of the nano-fiber webs are revealed by calculating statistical parameters for the above characteristics and storing them at fixed intervals along with time-specific information. The quality matrix, a scale of homogeneity, is easily obtained through an easy-to-use GUI. Finally, NAFIS has been evaluated both for real-time measurement and analysis, and for process monitoring.

Development of Smart Tape Attachment Robot in the Cold Rolled Coil with 3D Non-Contact Recognition (3D 비접촉 인식을 이용한 냉연코일 테이프부착 로봇 개발)

  • Shin, Chan-Bai; Kim, Jin-Dae
    • Journal of Institute of Control, Robotics and Systems / v.15 no.11 / pp.1122-1129 / 2009
  • Recently, taping robots with smart recognition functions have been studied in the coil manufacturing field. Due to the difficulty of processing 3D surfaces in a complicated working environment, it is not easy to accomplish smart tape attachment with a non-contact sensor. To solve these problems, an applicable surface recognition algorithm and a flexible sensing device are recommended. In this research, the fusion of a 1D displacement sensor and a 3D laser scanner is applied for robust tape attachment on cold rolled coil. With these sensors we develop a two-step exploration and a smart algorithm to recognize a non-aligned coil's information. In the proposed robot system, the problem is reduced to first searching for the coil's radius with the laser displacement sensor, and then detecting position and orientation with the 3D laser scanner. To obtain the movement in the robot's base frame, hand-eye compensation between the robot's end effector and the sensing device must also be carried out. In this paper, we examine an automatic coordinate transformation method in the calibration step for real-environment usage. Experimental results show that the taping motion of the robot was robust even for non-aligned cold rolled coils.
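The hand-eye compensation step can be illustrated by chaining homogeneous transforms; the function name and frame layout below are assumptions for illustration, not the paper's code:

```python
import numpy as np

def to_base_frame(p_sensor, T_base_ee, T_ee_sensor):
    """Hand-eye compensation sketch: a point measured in the sensor frame is
    mapped into the robot base frame by composing the end-effector pose with
    the calibrated sensor-to-end-effector transform (4x4 homogeneous matrices)."""
    p = np.append(p_sensor, 1.0)                    # homogeneous coordinates
    return (T_base_ee @ T_ee_sensor @ p)[:3]

# Example: sensor mounted 0.1 m ahead of the end effector, which sits 1 m
# from the base along x; a point at the sensor origin lands at x = 1.1 m.
T_base_ee = np.eye(4); T_base_ee[0, 3] = 1.0
T_ee_sensor = np.eye(4); T_ee_sensor[0, 3] = 0.1
p_base = to_base_frame(np.zeros(3), T_base_ee, T_ee_sensor)
```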

Hybrid Inertial and Vision-Based Tracking for VR applications (가상 현실 어플리케이션을 위한 관성과 시각기반 하이브리드 트래킹)

  • Gu, Jae-Pil; An, Sang-Cheol; Kim, Hyeong-Gon; Kim, Ik-Jae; Gu, Yeol-Hoe
    • Proceedings of the KIEE Conference / 2003.11b / pp.103-106 / 2003
  • In this paper, we present a hybrid inertial and vision-based tracking system for VR applications. One of the most important aspects of VR (Virtual Reality) is providing a correspondence between the physical and virtual worlds. Accurate, real-time tracking of an object's position and orientation is therefore a prerequisite for many applications in virtual environments. Pure vision-based tracking has low jitter and high accuracy but cannot guarantee real-time pose recovery under all circumstances. Pure inertial tracking has high update rates and full 6DOF recovery but lacks long-term stability due to sensor noise. To overcome the individual drawbacks and build a better tracking system, we introduce the fusion of vision-based and inertial tracking. Sensor fusion makes the proposed tracking system robust, fast, and accurate, with low jitter and noise. The hybrid tracking is implemented with a Kalman filter that operates in a predictor-corrector manner. A Bluetooth serial communication module gives the system full mobility and makes it affordable, lightweight, energy-efficient, and practical. Full 6DOF recovery and the full mobility of the proposed system enable the user to interact with mobile devices such as PDAs, and provide the user with a natural interface.
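The predictor-corrector Kalman filter cycle can be sketched for a one-dimensional position/velocity state: predict at the inertial rate, then correct with a vision-based position measurement. The noise parameters `q` and `r` are placeholder values, not the paper's tuning:

```python
import numpy as np

def kf_step(x, P, z, dt, q=1e-3, r=1e-2):
    """One predictor-corrector cycle of a linear Kalman filter on a
    [position, velocity] state."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # vision observes position only
    Q = q * np.eye(2)                       # process (inertial) noise
    R = np.array([[r]])                     # measurement (vision) noise
    # Predictor: propagate state and covariance with the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Corrector: blend in the vision measurement via the Kalman gain.
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Repeated calls converge the position estimate toward the vision measurement while the inertial-rate prediction keeps updates fast between corrections.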

Traveling Performance of a Robot Platform for Unmanned Weeding in a Dry Field (벼농사용 무인 제초로봇의 건답환경 주행 성능)

  • Kim, Gook-Hwan; Kim, Sang-Cheol; Hong, Young-Ki
    • Journal of the Korean Society for Precision Engineering / v.31 no.1 / pp.43-50 / 2014
  • This paper introduces a robot platform which can weed while traveling stably between rice seedlings over the irregular land surface of a paddy field. An autonomous navigation technique that can track stably without damaging the seedlings in the working area is also proposed. Detection of the rice seedlings, and avoidance of knocking them down by the robot platform, is achieved by the sensor fusion of a laser range finder (LRF) and an inertial measurement unit (IMU). These sensors are also used to control the navigating direction of the robot so that it keeps moving along the column of rice seedlings consistently. The deviation of the robot's direction from the rice column, as sensed by the LRF, is fed back to a proportional-derivative (PD) controller to obtain stable adjustment of the navigating direction and a proper returning speed of the robot to the rice column.
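The PD feedback on the LRF-sensed deviation can be sketched as follows; the gains `kp` and `kd` are illustrative, not the paper's tuned values:

```python
def pd_steer(deviation, prev_deviation, dt, kp=1.2, kd=0.4):
    """PD steering command: the proportional term pulls the robot back toward
    the rice-seedling column, while the derivative term damps the return so
    the platform does not overshoot into the opposite row."""
    derivative = (deviation - prev_deviation) / dt
    return -(kp * deviation + kd * derivative)

# A positive (rightward) deviation yields a negative (leftward) correction;
# a growing deviation is corrected more strongly than a steady one.
cmd = pd_steer(deviation=0.1, prev_deviation=0.1, dt=0.1)  # → -0.12
```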

A Study on the Environment Recognition System of Biped Robot for Stable Walking (안정적 보행을 위한 이족 로봇의 환경 인식 시스템 연구)

  • Song, Hee-Jun; Lee, Seon-Gu; Kang, Tae-Gu; Kim, Dong-Won; Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.07d / pp.1977-1978 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but to be utilized in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by a biped robot, as well as for a human-robot interaction (HRI) system. To carry out certain tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm with a wireless vision camera are implemented, together with a sensor fusion system using other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.

Intelligent Navigation of a Mobile Robot in Dynamic Environments (동적환경에서 이동로봇의 지능적 운행)

  • Heo, Hwa-Ra; Park, Jae-Han; Park, Seong-Hyeon; Park, Jin-U; Lee, Jang-Myeong
    • Journal of the Institute of Electronics Engineers of Korea SC / v.37 no.2 / pp.16-28 / 2000
  • In this paper, we propose a navigation algorithm for a mobile robot which intelligently searches for the goal location in unknown dynamic environments using an ultrasonic sensor. Instead of a "sensor fusion" method, which generates the trajectory of a robot based upon an environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The major factors for robot navigation are represented as a cost function. Using the data of the robot's state and the environment, the weight of each factor is determined for an optimal trajectory in dynamic environments. To evaluate the proposed algorithm, we performed simulations on a PC as well as real experiments with ZIRO. The results show that the proposed algorithm identifies obstacles in unknown environments and guides the robot safely to the goal location.
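The "command fusion" idea, scoring candidate motion commands with a weighted cost function rather than fusing sensor data into a world model, can be sketched as follows; the factors and weights are illustrative assumptions:

```python
def command_fusion(candidates, goal_heading, obstacle_dist, w_goal=1.0, w_obst=2.0):
    """Command fusion sketch: each candidate steering command is scored by a
    cost combining deviation from the goal direction and closeness to
    obstacles along that heading; the cheapest command wins."""
    def cost(heading):
        goal_term = abs(heading - goal_heading)              # goal-seeking factor
        obst_term = 1.0 / max(obstacle_dist(heading), 1e-6)  # obstacle-avoidance factor
        return w_goal * goal_term + w_obst * obst_term
    return min(candidates, key=cost)

# With no obstacle, the robot heads straight for the goal; with an obstacle
# dead ahead (0.2 m), the fused command veers to a side heading instead.
best = command_fusion([-0.5, 0.0, 0.5], 0.0, lambda h: 5.0)
```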

Development of IoT based Real-Time Complex Sensor Board for Managing Air Quality in Buildings

  • Park, Taejoon; Cha, Jaesang
    • International Journal of Internet, Broadcasting and Communication / v.10 no.4 / pp.75-82 / 2018
  • Efforts to reduce damage from fine dust and harmful gases have been led by national and local governments, and information on air quality has been provided along with real-time weather forecasts through TV and the Internet. However, this does not cover the individual indoor spaces people actually occupy. In this paper, we therefore propose an IoT-based real-time air quality sensing board, responsive to fine particles, for air quality management in buildings. The proposed board is easy to install and can be placed wherever needed. With it, the air quality (pollution level) of an indoor space is easily measured, making it possible to recognize changes in indoor air pollution and provide countermeasures. Owing to these advantages, the system can provide useful information covering the overall indoor space through at least one representative measurement point. In this paper, we compare the performance of the proposed board with existing air quality measurement equipment.

Robust Elevator Door Recognition using LRF and Camera (LRF와 카메라를 이용한 강인한 엘리베이터 문 인식)

  • Ma, Seung-Wan; Cui, Xuenan; Lee, Hyung-Ho; Kim, Hyung-Rae; Lee, Jae-Hong; Kim, Hak-Il
    • Journal of Institute of Control, Robotics and Systems / v.18 no.6 / pp.601-607 / 2012
  • The recognition of elevator doors is needed for mobile service robots to move between floors in a building. This paper proposes a sensor fusion approach using an LRF (Laser Range Finder) and a camera to solve the problem. From the laser scans of the LRF, we extract line segments and detect candidates for the elevator door. Using the camera image, the door candidates are verified and the real elevator door is selected; outliers are filtered out through this verification process. Then, the door state is detected by depth analysis within the door. The proposed method uses extrinsic calibration to fuse the LRF and the camera, and gives better elevator door recognition results than a method using the LRF alone.
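The candidate-then-verify fusion pipeline can be sketched as follows; the door width, tolerance, and camera verification predicate are assumptions for illustration, not values from the paper:

```python
import math

DOOR_WIDTH = 0.9   # assumed typical elevator door width in metres
TOLERANCE = 0.15   # assumed acceptance band around that width

def door_candidates(segments):
    """From LRF line segments (pairs of (x, y) endpoints), keep those whose
    length plausibly matches an elevator door."""
    def length(seg):
        (x1, y1), (x2, y2) = seg
        return math.hypot(x2 - x1, y2 - y1)
    return [s for s in segments if abs(length(s) - DOOR_WIDTH) < TOLERANCE]

def recognize_door(segments, verify_in_image):
    """Fusion step: geometric candidates from the LRF are verified in the
    camera image; candidates that fail verification are filtered as outliers."""
    return [s for s in door_candidates(segments) if verify_in_image(s)]
```

The `verify_in_image` callback stands in for the paper's camera-based verification; with extrinsic calibration, each LRF segment can be projected into the image for that check.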