• Title/Summary/Keyword: Sensory fusion


The Sensory-Motor Fusion System for Object Tracking (이동 물체를 추적하기 위한 감각 운동 융합 시스템 설계)

  • Lee, Sang-Hee;Wee, Jae-Woo;Lee, Chong-Ho
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.52 no.3
    • /
    • pp.181-187
    • /
    • 2003
  • For moving objects tracked through environmental sensors, such as a mobile robot that tracks objects with audio and video sensors, the environmental information acquired from the sensors keeps changing as the objects move. In such cases, conventional control schemes show limited control performance owing to their lack of adaptability and their complexity; sensory-motor systems, which can respond directly to various types of environmental information, are therefore desirable. To improve robustness, it is also desirable to fuse two or more types of sensory information simultaneously. In this paper, based on Braitenberg's model, we propose a sensory-motor fusion system that can track moving objects adaptively under environmental changes. Owing to its directly connected structure, the sensory-motor fusion system can control each motor simultaneously, and neural networks are used to fuse the information from the various sensors. Even if one sensor delivers noisy information, the system still works robustly, because sensor fusion compensates with information from the other sensors. To examine its performance, the sensory-motor fusion model is applied to an object-tracking four-legged robot equipped with audio and video sensors. The experimental results show that the sensory-motor fusion system can track moving objects robustly with a simpler control mechanism than model-based control approaches.
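
The direct sensor-to-motor coupling this abstract describes can be sketched as follows. This is only an illustration of a Braitenberg-style mapping, not the paper's trained network; the sensor layout, weights, and function names are all hypothetical.

```python
import numpy as np

def sensory_motor_step(audio, video, W, b):
    """One control step: fuse normalized audio/video sensor readings and
    map them directly to left/right motor speeds (Braitenberg-style).

    audio, video : 1-D arrays of sensor activations in [0, 1]
    W            : weight matrix of shape (2, len(audio) + len(video))
    b            : bias vector of shape (2,)
    Returns (left_speed, right_speed), squashed into (-1, 1) by tanh.
    """
    x = np.concatenate([audio, video])   # feature-level sensor fusion
    motors = np.tanh(W @ x + b)          # direct sensor-to-motor mapping
    return motors[0], motors[1]

# Toy example: a sound source on the right excites the right microphone;
# cross-coupled weights drive the left wheel harder, turning the robot
# toward the source even though the video sensors see nothing.
audio = np.array([0.1, 0.9])            # [left mic, right mic]
video = np.array([0.0, 0.0])            # no visual cue
W = np.array([[0.0, 1.0, 0.0, 1.0],     # left motor driven by right-side sensors
              [1.0, 0.0, 1.0, 0.0]])    # right motor driven by left-side sensors
b = np.zeros(2)
left, right = sensory_motor_step(audio, video, W, b)
# left > right: the robot steers toward the sound source
```

In the paper the weights are learned so that audio and video evidence reinforce each other; here the cross-coupling is hand-set only to show the direct connection structure.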

Uncertainty Fusion of Sensory Information Using Fuzzy Numbers

  • Park, Sangwook;Lee, C. S. George
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1001-1004
    • /
    • 1993
  • The Multisensor Fusion Problem (MFP) deals with methodologies for effectively combining homogeneous or non-homogeneous information obtained from multiple redundant or disparate sensors, in order to perform a task more accurately, efficiently, and reliably. The inherent uncertainties in the sensory information are represented using fuzzy numbers, and the Uncertainty-Reductive Fusion Technique (URFT) is introduced to combine the multiple sensory readings into one consensus fuzzy number. The MFP is formulated from an information-theoretic perspective, where sensors are viewed as information sources with a fixed output alphabet and systems are modeled as a network of information-processing and information-propagating channels. The performance of the URFT is compared with other fusion techniques in solving the 3-Sensor Problem.
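
To make the idea of collapsing several fuzzy readings into one consensus fuzzy number concrete, here is a minimal sketch using triangular fuzzy numbers and a reliability-weighted average. This is not the URFT algorithm itself, merely an illustration of fuzzy-number fusion; the readings and weights are invented.

```python
def fuse_triangular(readings, weights):
    """Combine triangular fuzzy numbers (a, m, b) = (left, mode, right)
    into one consensus fuzzy number by a normalized weighted average.
    Illustrative only -- a stand-in for the paper's URFT.
    """
    total = sum(weights)
    fused = [0.0, 0.0, 0.0]
    for (a, m, b), w in zip(readings, weights):
        fused[0] += w * a
        fused[1] += w * m
        fused[2] += w * b
    return tuple(v / total for v in fused)

# Three sensors report the same quantity with different uncertainty
# spreads; the more reliable sensor (weight 2.0) dominates the consensus.
readings = [(9.0, 10.0, 11.0), (9.5, 10.2, 10.9), (8.0, 10.5, 13.0)]
weights = [1.0, 2.0, 1.0]
print(fuse_triangular(readings, weights))  # consensus (a, m, b)
```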


Command Fusion for Navigation of Mobile Robots in Dynamic Environments with Objects

  • Jin, Taeseok
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.1
    • /
    • pp.24-29
    • /
    • 2013
  • In this paper, we propose a fuzzy inference model for a navigation algorithm for a mobile robot that intelligently searches for the goal location in unknown dynamic environments. Our model uses sensor fusion based on situational commands from an ultrasonic sensor. Instead of using the "physical sensor fusion" method, which generates the trajectory of the robot from an environment model and the sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance within a hierarchical behavior-based control architecture. To identify the environment, a command fusion technique is introduced in which the data from the ultrasonic sensors and a vision sensor are fused into the identification process. The experimental results highlight interesting aspects of the goal-seeking, obstacle-avoidance, and decision-making processes that arise from the navigation interaction.
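
The distinction the abstract draws, fusing behavior *commands* rather than raw sensor data, can be sketched with two behaviors whose steering outputs are blended by a fuzzy membership degree. The linear ramp below is a hypothetical stand-in for the paper's tuned fuzzy rules; all parameter names and values are illustrative.

```python
def command_fusion(goal_cmd, avoid_cmd, obstacle_dist, near=0.5, far=2.0):
    """Blend two behavior commands (e.g. steering angles in radians)
    instead of fusing raw sensor data.  The obstacle-avoidance command
    dominates when an obstacle is near; the goal-approach command
    dominates when the path is clear.  Membership is a simple linear
    ramp standing in for tuned fuzzy rules (illustrative only).
    """
    # degree to which the situation is "obstacle near", clamped to [0, 1]
    mu_near = min(1.0, max(0.0, (far - obstacle_dist) / (far - near)))
    return mu_near * avoid_cmd + (1.0 - mu_near) * goal_cmd

# Goal lies straight ahead (0 rad); an obstacle suggests veering left (+0.8 rad).
print(command_fusion(goal_cmd=0.0, avoid_cmd=0.8, obstacle_dist=2.5))  # clear path -> 0.0
print(command_fusion(goal_cmd=0.0, avoid_cmd=0.8, obstacle_dist=0.3))  # obstacle  -> 0.8
```

Between the two extremes the command varies smoothly, which is the practical appeal of command fusion: each behavior stays simple and the arbitration happens on their outputs.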

Neural Network Approach to Sensor Fusion System for Improving the Recognition Performance of 3D Objects (3차원 물체의 인식 성능 향상을 위한 감각 융합 신경망 시스템)

  • Dong Sung Soo;Lee Chong Ho;Kim Ji Kyoung
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.54 no.3
    • /
    • pp.156-165
    • /
    • 2005
  • Human beings recognize the physical world by integrating a great variety of sensory inputs, information acquired through their own actions, and their knowledge of the world, using a hierarchical parallel-distributed mechanism. In this paper, the authors propose a sensor fusion system that can recognize multiple 3D objects from 2D projection images and tactile information. The proposed system focuses on improving the recognition performance for 3D objects. Unlike conventional object recognition systems that use an image sensor alone, the proposed method uses tactile sensors in addition to the visual sensor, and a neural network is used to fuse the two sensory signals. The tactile signals are obtained from the reaction forces of the pressure sensors at the fingertips when unknown objects are grasped by a four-fingered robot hand. The experiments evaluate the recognition rate and the number of learning iterations for various objects. The merits of the proposed system are not only its high learning performance but also its reliability: with the tactile information, it can recognize various objects even when the visual sensory signals are defective. The experimental results show that the proposed system can improve the recognition rate and reduce the learning time, verifying the effectiveness of the proposed sensor fusion system as a recognition scheme for 3D objects.
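
The feature-level fusion of visual and tactile signals in a neural network can be sketched as below. The feature dimensions, class count, and random weights are all placeholders; the paper trains its network, whereas this sketch only shows the fusion structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_classify(visual_feat, tactile_feat, W1, W2):
    """Concatenate visual and tactile feature vectors and pass them
    through a one-hidden-layer network to score object classes.
    Weights here are random placeholders, not trained parameters.
    """
    x = np.concatenate([visual_feat, tactile_feat])  # feature-level sensor fusion
    h = np.tanh(W1 @ x)                              # hidden layer
    scores = W2 @ h                                  # one score per object class
    return int(np.argmax(scores))

# 8 visual features (from 2-D projection images) + 4 tactile features
# (fingertip pressure readings); 5 candidate object classes.
visual = rng.random(8)
tactile = rng.random(4)       # still informative when the visual input is noisy
W1 = rng.standard_normal((16, 12))
W2 = rng.standard_normal((5, 16))
print(fuse_and_classify(visual, tactile, W1, W2))  # predicted class index
```

Because both modalities enter the same hidden layer, a degraded visual vector can still be compensated by the tactile one, which is the robustness property the abstract reports.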

A Framework for Building Reconstruction Based on Data Fusion of Terrestrial Sensory Data

  • Lee, Impyeong;Choi, Yunsoo
    • Korean Journal of Geomatics
    • /
    • v.4 no.2
    • /
    • pp.39-45
    • /
    • 2004
  • Building reconstruction attempts to generate geometric and radiometric models of existing buildings, usually from sensory data, which have traditionally been aerial or satellite images, more recently airborne LIDAR data, or a combination of these. Extensive studies on building reconstruction from such data have produced competitive algorithms with reasonable performance and some degree of automation. Nevertheless, the level of detail and completeness of the reconstructed building models often cannot reach the high standards that are now, or will be, required by various applications. Hence, the use of terrestrial sensory data, which can provide higher resolution and more complete coverage, has been strongly emphasized. We developed a fusion framework for building reconstruction from terrestrial sensory data, that is, points from a laser scanner, images from a digital camera, and absolute coordinates from a total station. The proposed approach was then applied to reconstructing a building model from real data sets acquired from a large, complex existing building. Based on the experimental results, we confirmed that the proposed approach can achieve high resolution and accuracy in building reconstruction, and that it can contribute effectively to an operational system producing large urban models for 3D GIS with reasonable resources.


Obstacle Avoidance of Mobile Robot Based on Behavior Hierarchy by Fuzzy Logic

  • Jin, Tae-Seok
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.12 no.3
    • /
    • pp.245-249
    • /
    • 2012
  • In this paper, we propose a navigation algorithm for a mobile robot that intelligently searches for the goal location in unknown dynamic environments using an ultrasonic sensor. Instead of using a "sensor fusion" method, which generates the trajectory of the robot from an environment model and the sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. To identify the environment, a command fusion technique is introduced in which the data from the ultrasonic sensors and a vision sensor are fused into the identification process.

Virtual Environment Building and Navigation of Mobile Robot using Command Fusion and Fuzzy Inference

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.22 no.4
    • /
    • pp.427-433
    • /
    • 2019
  • This paper proposes a fuzzy inference model for map building and navigation by a mobile robot with an active camera, which intelligently navigates to the goal location in unknown environments using sensor fusion based on situational commands from the active camera sensor. The active camera enables the mobile robot to estimate and track feature images over a hallway field of view. Instead of using a "physical sensor fusion" method, which generates the trajectory of the robot from an environment model and the sensory data, a command fusion method is used to govern the robot's navigation. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. To identify the environment, a command fusion technique is introduced in which the active camera's sensory data from the navigation experiments are fused into the identification process. The navigation performance improves on that achieved using fuzzy inference alone and shows significant advantages over plain command fusion techniques. Experimental evidence demonstrates that the proposed method can be used reliably over a wide range of relative positions between the active camera and the feature images.

Multisensor Data Combination Using Fuzzy Weighted Average (퍼지 가중 평균을 이용한 다중 센서 데이타 융합)

  • Kim, Wan-Joo;Ko, Joong-Hyup;Chung, Myung-Jin
    • Proceedings of the KIEE Conference
    • /
    • 1993.07a
    • /
    • pp.383-386
    • /
    • 1993
  • In this paper, we propose a sensory data combination method based on a fuzzy number approach to multisensor data fusion. Generally, the weighting of one sensory datum with respect to another is derived from measures of the relative reliabilities of the two sensory modules. The relative weight of two sensory data can, however, be approximately determined without difficulty from human experience or from insufficient experimental data. We represent these relative weights, as well as the sensory data themselves, using appropriate fuzzy numbers. Using the relative weights, which are subjective valuations, and the fuzzy-numbered sensor data, the fuzzy weighted average method is applied to obtain a representative sensory value. The manipulation and calculation of fuzzy numbers are carried out using Zadeh's extension principle, which can be approximately implemented through the $\alpha$-cut representation of fuzzy numbers and interval analysis.
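
The α-cut/interval-analysis implementation of the fuzzy weighted average that this abstract describes can be sketched directly. Triangular fuzzy numbers and a brute-force search over interval endpoints are used here as a simple stand-in for the paper's interval analysis; the example data and weights are invented.

```python
from itertools import product

def alpha_cut(tri, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number tri = (a, m, b)."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_weighted_average(xs, ws, alphas=(0.0, 0.5, 1.0)):
    """Approximate the fuzzy weighted average y = sum(w*x) / sum(w)
    for triangular fuzzy data xs with triangular fuzzy weights ws.
    For each alpha-cut, the extremes of y are found by searching all
    endpoint combinations of the weight and data intervals (the
    extrema of this ratio occur at interval endpoints).
    Returns {alpha: (lo, hi)}.
    """
    result = {}
    for alpha in alphas:
        x_cuts = [alpha_cut(x, alpha) for x in xs]
        w_cuts = [alpha_cut(w, alpha) for w in ws]
        values = []
        for x_pick in product(*x_cuts):
            for w_pick in product(*w_cuts):
                values.append(sum(w * x for w, x in zip(w_pick, x_pick))
                              / sum(w_pick))
        result[alpha] = (min(values), max(values))
    return result

# Two sensor readings with fuzzy relative weights ("about 1" vs "about 2").
xs = [(9.0, 10.0, 11.0), (11.0, 12.0, 13.0)]
ws = [(0.5, 1.0, 1.5), (1.5, 2.0, 2.5)]
cuts = fuzzy_weighted_average(xs, ws)
# At alpha = 1 the cut collapses to the crisp weighted average:
# (1*10 + 2*12) / 3 = 34/3
```

Evaluating a few α-levels recovers the membership function of the fused value interval by interval, which is exactly the approximation to the extension principle that the abstract mentions.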


Convergent evaluation of Visual function and Stereoacuity function after Surgery for Intermittent exotropia (간헐성 외사시 수술 후 시각 기능과 입체시 기능에 대한 융복합적 평가)

  • Cho, Hyung-Chel;Ro, Hyo-Lyun;Lee, Heejae
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.4
    • /
    • pp.147-154
    • /
    • 2022
  • This paper evaluated visual function and stereoacuity after surgery for intermittent exotropia. The subjects were 18 patients (male: n = 10, female: n = 8) with a mean age of 12.06±5.43 years who had been diagnosed with intermittent exotropia and had undergone strabismus surgery; 72.2% of the subjects underwent strabismus surgery once and 27.8% twice. Visual function and stereoacuity were tested in these subjects. For data analysis, frequency analysis, cross analysis, and correlation analysis were used, and statistical significance was set at p<.05. Regarding the deviation state after strabismus surgery, exodeviation was the most common (72.2%), followed by diplopia (50.0%) and suppression (33.3%) for distance sensory fusion. For near sensory fusion, fusion (50.0%) was the most common, followed by diplopia (44.4%). After strabismus surgery, subjects with distance stereoacuity blindness formed the largest group at 61.1%, and no subjects fell within the normal range of 40-60 arcsec. Subjects with near stereoacuity blindness accounted for 33.3%, and subjects with 40-60 arcsec accounted for 1.1%. Even after surgery for intermittent exotropia, some aspects of the deviation state, stereoacuity, and sensory fusion did not improve. Therefore, it is necessary to manage and control strabismus through non-surgical methods before and after surgery for intermittent exotropia.