• Title/Summary/Keyword: Robot navigation

Evolutionary Optimization of Neurocontroller for Physically Simulated Compliant-Wing Ornithopter

  • Shim, Yoonsik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.12
    • /
    • pp.25-33
    • /
    • 2019
  • This paper presents a novel evolutionary framework for optimizing a bio-inspired, fully dynamic neurocontroller for maneuverable flapping flight of a simulated bird-sized ornithopter robot, which takes advantage of morphological computation and mechanosensory feedback to improve flight stability. To cope with the difficulty of generating robust flapping flight and its maneuvers, the wing of the robot is modelled as a series of sub-plates joined by passive torsional springs, implementing a simplified version of the feathers attached to the forearm skeleton. The neural controller has a bilaterally symmetric structure consisting of two fully connected neural network modules that receive mirrored sensory inputs from a series of flight navigation sensors as well as feather mechanosensors, letting the sensors participate in pattern generation. The synergy of wing compliance and its sensory reflexes makes it possible for the robot to feel and exploit the aerodynamic forces on its wings, potentially contributing to agility and stability during flight. The evolved robot exhibited target-following flight maneuvers using asymmetric wing movements as well as its tail, showing robustness to external aerodynamic disturbances.
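As a rough illustration of the evolutionary optimization described above, the loop below sketches truncation selection with Gaussian mutation over a flat weight vector. The `stability` fitness here is a stand-in (a real run would score a physics-simulated flight episode); all names and parameters are illustrative assumptions, not taken from the paper.

```python
import random

def evolve(fitness, dim, pop_size=20, generations=60, sigma=0.1, seed=1):
    """Truncation-selection evolutionary search over a flat weight vector."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                  # keep the best half
        children = [[w + rng.gauss(0.0, sigma) for w in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                       # (mu + lambda) style
    return max(pop, key=fitness)

# stand-in fitness: reward weights near 0.5 (a real run would instead score
# a simulated flight episode for stability and target-following)
stability = lambda w: -sum((x - 0.5) ** 2 for x in w)
best = evolve(stability, dim=8)
```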

3D Terrain Reconstruction Using 2D Laser Range Finder and Camera Based on Cubic Grid for UGV Navigation (무인 차량의 자율 주행을 위한 2차원 레이저 거리 센서와 카메라를 이용한 입방형 격자 기반의 3차원 지형형상 복원)

  • Joung, Ji-Hoon;An, Kwang-Ho;Kang, Jung-Won;Kim, Woo-Hyun;Chung, Myung-Jin
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.45 no.6
    • /
    • pp.26-34
    • /
    • 2008
  • Traversability and path-planning information is essential for UGV (Unmanned Ground Vehicle) navigation, and it can be obtained by analyzing 3D terrain. In this paper, we present a method for 3D terrain modeling that combines color information from a camera, precise distance information from a 2D Laser Range Finder (LRF), and wheel-encoder information from a mobile robot while using less data. We also present a method for 3D terrain modeling from GPS/IMU (Inertial Measurement Unit) and 2D LRF information, again with less data. To fuse the color information from the camera with the distance information from the 2D LRF, we obtain the extrinsic parameters between the camera and the LRF using a planar pattern. We set up the fused system on a mobile robot and run experiments in an indoor environment, and we run outdoor experiments to reconstruct 3D terrain with the 2D LRF and GPS/IMU. The resulting point-based 3D terrain model requires a large amount of data, so we use a cubic grid-based model instead of a point-based one to reduce the data volume.
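The cubic grid-based reduction can be read as voxel binning: keep one centroid per occupied cubic cell. The helper below is a minimal sketch under that reading; the cell size and names are illustrative, not the paper's implementation.

```python
import numpy as np

def cubic_grid_downsample(points, cell=0.1):
    """Bin 3D points into cubic cells and keep one centroid per occupied
    cell, turning a point-based terrain model into a compact grid-based one."""
    keys = np.floor(points / cell).astype(np.int64)
    cells = {}
    for key, p in zip(map(tuple, keys), points):
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in cells.values()])

dense = np.random.default_rng(0).uniform(0.0, 1.0, size=(1000, 3))
coarse = cubic_grid_downsample(dense, cell=0.25)   # at most 4x4x4 = 64 cells
```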

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1099-1110
    • /
    • 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently being actively conducted. However, deep learning models are vulnerable to adversarial attacks through modulation of the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on causing obstacle-detection failures by lowering the confidence score of the object recognition model, but they have the limitation that the attack works only on the target model. For attacks on the sensor-fusion stage, errors can cascade into the vision tasks performed after fusion, and this risk needs to be considered. In addition, an attack on LiDAR point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the points of the input LiDAR. Experiments with the scaling algorithm at various attack sizes caused fusion errors at an average rate of more than 77%.
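The abstract does not spell out the scaling algorithm, but its core perturbation, uniformly scaling the input LiDAR points, can be sketched as below. The factor and names are illustrative assumptions; the point is that the cloud's shape is preserved, so the change is hard to judge visually while every depth the calibration model sees shifts by the same factor.

```python
import numpy as np

def lidar_scaling_attack(points, factor=1.05):
    """Uniformly scale LiDAR points about the sensor origin. The cloud keeps
    its shape (visually inconspicuous) while all ranges grow by `factor`,
    which misleads a camera-LiDAR calibration model downstream."""
    return points * factor

cloud = np.array([[10.0, 2.0, 0.5],
                  [20.0, -1.0, 0.2],
                  [5.0, 0.0, 1.0]])
attacked = lidar_scaling_attack(cloud, factor=1.05)
```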

Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment (구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식)

  • Kim, Donghoon;Lee, Donghwa;Myung, Hyun;Choi, Hyun-Taek
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.8
    • /
    • pp.667-675
    • /
    • 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among them, a vision sensor is very useful for short-range tasks despite harsh underwater conditions, including low visibility, noise, and large areas of featureless topography. To overcome these problems and utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, a weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve on the conventional approach. In the localization step, to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated by experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
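A weighted correlation coefficient for template matching can be sketched as below: it is the ordinary normalized cross-correlation with per-pixel weights folded into the means and variances. This is one plausible reading of the abstract, not the paper's exact formulation.

```python
import numpy as np

def weighted_ncc(patch, template, weights):
    """Weighted correlation coefficient between an image patch and a
    template; larger weights make those pixels count more in the score."""
    w = weights / weights.sum()
    mp = (w * patch).sum()                 # weighted means
    mt = (w * template).sum()
    dp, dt = patch - mp, template - mt
    denom = np.sqrt((w * dp ** 2).sum() * (w * dt ** 2).sum()) + 1e-12
    return (w * dp * dt).sum() / denom     # in [-1, 1]

tmpl = np.array([[0.2, 0.8], [0.5, 0.1]])
w = np.ones_like(tmpl)                     # uniform weights = plain NCC
score_same = weighted_ncc(tmpl, tmpl, w)   # perfect match
score_inv = weighted_ncc(1.0 - tmpl, tmpl, w)  # inverted patch
```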

Outdoor Positioning Estimation of Multi-GPS / INS Integrated System by EKF / UPF Filter Conversion (EKF/UPF필터 변환을 통한 Multi-GPS/INS 융합 시스템의 실외 위치추정)

  • Choi, Seung-Hwan;Kim, Gi-Jeung;Kim, Yun-Ki;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.12
    • /
    • pp.1284-1289
    • /
    • 2014
  • In this paper, an outdoor position estimation system is implemented using GPS (Global Positioning System) and INS (Inertial Navigation System). GPS position information suffers from errors caused by interference from obstacles, weather, and the surrounding environment; to reduce these errors, a multiple-GPS system is used. In addition, the Discrete Wavelet Transform is applied to the INS data to compensate for its error. Position estimation of the mobile robot on straight-line segments is performed by an EKF (Extended Kalman Filter). However, position estimation on curves is less accurate than on straight lines due to the phase change in rotation. A curve is recognized through the rate of change of the heading angle, and the position estimation precision at the start of the curve is improved by a UPF (Unscented Particle Filter). In the case of the UPF, a large number of particles requires a large amount of memory and slows down processing, so the UPF is used only for position estimation at the start of a curve. Thereafter, the position of the mobile robot in the curve is estimated by switching from the UPF back to the EKF. Experiments verify the effectiveness of the system.
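The EKF/UPF conversion rule can be sketched as a simple threshold on the heading angle's rate of change; the threshold value and interface below are illustrative assumptions, not taken from the paper.

```python
def select_filter(prev_heading, heading, threshold=0.05):
    """Pick the estimator for the next step: a jump in heading angle
    (radians per step) signals the start of a curve, where the UPF is
    used; otherwise the cheaper EKF keeps running."""
    return "UPF" if abs(heading - prev_heading) > threshold else "EKF"

straight = select_filter(0.10, 0.11)   # small heading change on a straight
curve = select_filter(0.10, 0.40)      # large heading change entering a curve
```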

Technological Trend of Endoscopic Robots (내시경 로봇의 기술동향)

  • Kim, Min Young;Cho, Hyungsuck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.3
    • /
    • pp.345-355
    • /
    • 2014
  • Since the beginning of the 21st century, the emergence of innovative technologies in robotic and telepresence surgery has revolutionized minimal-access surgery and has continued to advance it in recent years. One such surgery is endoscopic surgery, in which an endoscope and endoscopic instruments are inserted into the body through small incisions or natural openings and surgical operations are carried out by a laparoscopic procedure. Because of the vast amount of development in this technology, this review article describes only the technological state of the art and trends of endoscopic robots, limited further to key components, their functional requirements, and operational procedures in surgery. In particular, it first describes technological limitations in the development of key components and then focuses on the performance required for their functions, including position control, tracking, navigation, and manipulation of the flexible endoscope body and its end effector. Despite these rapid developments in functional components, endoscopic surgical robots should become much smaller, less expensive, and easier to operate, and should seamlessly integrate emerging technologies for intelligent vision and dexterous hands, not only from surgical and ergonomic points of view but also from that of safety. We believe that, in these respects, medical robotic technology related to endoscopic surgery will continue to be revolutionized in the near future, sufficiently to replace almost all kinds of current endoscopic surgery. This issue remains to be addressed elsewhere in other review articles.

A Study on the Parallel Escape Maze through Cooperative Activities of Humanoid Robots (인간형 로봇들의 협력 작업을 통한 미로 동시 탈출에 관한 연구)

  • Jun, Bong-Gi
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.6
    • /
    • pp.1441-1446
    • /
    • 2014
  • This paper proposes a cooperative method for maze escape by a robot swarm. The robots can move freely, collecting essential data and making decisions using their sensors; however, a central control system is required to organize all robots for the escape from the maze. The robots explore new mazes and send the information to the system, which analyzes it and maps the escape route. Three issues are considered for effective escape from mazes by multiple robots. First, the maze is divided; second, dead ends should be blocked; finally, after the first arrivals at the destination, a shortcut should be provided for rapid escape from the maze. The parallel-escape algorithms were applied to mazes of different sizes, so that the robot swarm can escape them effectively.
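The dead-end blocking step can be sketched as iteratively sealing open cells that have at most one open neighbour (start and goal excepted), so later robots skip futile branches. The grid encoding below (0 = open, 1 = wall) is an illustrative assumption.

```python
def block_dead_ends(maze, start, goal):
    """Repeatedly seal open cells (value 0) with at most one open
    neighbour, except start and goal, pruning every dead-end branch."""
    open_cells = {(r, c) for r, row in enumerate(maze)
                  for c, v in enumerate(row) if v == 0}
    changed = True
    while changed:
        changed = False
        for cell in list(open_cells):
            if cell in (start, goal):
                continue
            r, c = cell
            nbrs = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
            if sum(n in open_cells for n in nbrs) <= 1:
                open_cells.discard(cell)    # dead end: block it
                changed = True
    return open_cells

maze = [[0, 0, 0],
        [0, 1, 0],
        [0, 1, 0]]                          # right column is a dead-end branch
remaining = block_dead_ends(maze, start=(0, 0), goal=(2, 0))
```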

A Moving Camera Localization using Perspective Transform and KLT Tracking in Sequence Images (순차영상에서 투영변환과 KLT추적을 이용한 이동 카메라의 위치 및 방향 산출)

  • Jang, Hyo-Jong;Cha, Jeong-Hee;Kim, Gye-Young
    • The KIPS Transactions:PartB
    • /
    • v.14B no.3 s.113
    • /
    • pp.163-170
    • /
    • 2007
  • In the autonomous navigation of a mobile vehicle or mobile robot, localization computed by recognizing the environment is the most important factor. Generally, the position and pose of a camera-equipped mobile vehicle or mobile robot can be determined using INS and GPS, but in this case enough known ground landmarks are needed for accurate localization. In contrast with the homography method, which computes the position and pose of a camera using only the relation of two-dimensional feature points between two frames, this paper proposes a method that computes the camera's position and pose from the relation between the locations predicted through perspective transform of 3D feature points, obtained by overlaying a 3D model onto the previous frame using GPS and INS input, and the locations of the corresponding feature points found by KLT tracking in the current frame. For the performance evaluation, we used a wireless-controlled vehicle with a mounted CCD camera, GPS, and INS, and performed tests that compute the location and rotation angle of the camera from a video stream captured at a 15 Hz frame rate.
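The perspective transform step, projecting 3D model points into the image for a GPS/INS-predicted camera pose, can be sketched with a standard pinhole model. The intrinsics below are illustrative; in the paper the projected pixels would then be compared against the KLT-tracked feature locations.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project 3D points into the image for a camera with intrinsics K
    and pose (R, t) predicted from GPS/INS."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera coordinates
    uvw = K @ cam                             # camera -> homogeneous pixels
    return (uvw[:2] / uvw[2]).T               # perspective divide

K = np.array([[500.0, 0.0, 320.0],            # illustrative intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 5.0],
                [1.0, 0.0, 5.0]])
uv = project(pts, K, np.eye(3), np.zeros(3))
```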

Development of Ubuntu-based Raspberry Pi 3 of the image recognition system (우분투 기반 라즈베리 파이3의 영상 인식 시스템 개발)

  • Kim, Gyu-Hyun;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.10a
    • /
    • pp.868-871
    • /
    • 2016
  • Recently, research on unmanned vehicles and wearable technology using IoT has been carried out. The unmanned vehicle is a product of IT: robotics, autonomous navigation, obstacle avoidance, data communications, power, and image processing technologies are integrated into an unmanned vehicle or unmanned robot. The final goal of an unmanned vehicle is to reach its destination safely and quickly through autonomous rather than manual operation. This paper covers one of the key technologies of unmanned vehicles: image processing. Currently, battery technology allows an unmanned vehicle to drive for up to one hour, so we use the Raspberry Pi 3 to keep power consumption to a minimum and develop an image recognition system on it. The goal is to propose a system that recognizes all objects in the images received from the camera.

Estimation of Time Difference Using Cross-Correlation in Underwater Environment (수중 환경에서 상호상관을 이용한 시간차이 추정)

  • Lee, Young-Pil;Moon, Yong Seon;Ko, Nak Yong;Choi, Hyun-Taek;Lee, Jeong-Gu;Bae, Young-Chul
    • Journal of Advanced Navigation Technology
    • /
    • v.20 no.2
    • /
    • pp.155-160
    • /
    • 2016
  • Recently, underwater acoustic communication (UWAC) has been studied by many scholars and researchers. To use UWAC, we need to estimate the time difference between two signals in an underwater environment. Typically, there are three major methods to estimate this time difference: estimating the arrival time of the first non-background segment and calculating the temporal difference; calculating the cross-correlation between the two signals to infer the time lag; and estimating the phase delay to infer the time difference. In this paper, we calculate the cross-correlation between the two signals to infer the time lag for UWAC. We also present experimental results of estimating the arrival time using cross-correlation, obtaining an estimation error of EXCORR = 0.003055 seconds in mean absolute difference.
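The cross-correlation method can be sketched in a few lines: the index of the peak of the full cross-correlation gives the lag in samples, which divided by the sampling rate gives the time difference. The signals and sampling rate below are illustrative, not the paper's data.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Estimate the time lag of sig_b relative to sig_a from the location
    of the peak of their full cross-correlation."""
    xcorr = np.correlate(sig_b, sig_a, mode="full")
    lag_samples = int(np.argmax(xcorr)) - (len(sig_a) - 1)
    return lag_samples / fs

fs = 1000.0                                          # 1 kHz sampling
t = np.arange(0, 1, 1 / fs)
ping = np.sin(2 * np.pi * 40 * t) * np.exp(-5 * t)   # damped acoustic ping
delayed = np.roll(ping, 30)                          # 30 samples = 0.03 s later
lag = estimate_delay(ping, delayed, fs)
```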