• Title/Summary/Keyword: robot detection

Search Result 580

A Study on Autonomous Stair-climbing System Using Landing Gear for Stair-climbing Robot (계단 승강 로봇의 계단 승강 시 랜딩기어를 활용한 자율 승강 기법에 관한 연구)

  • Hwang, Hyun-Chang;Lee, Won-Young;Ha, Jong-Hee;Lee, Eung-Hyuck
    • Journal of IKEEE / v.25 no.2 / pp.362-370 / 2021
  • In this paper, we propose an autonomous stair-climbing system based on data from ToF sensors and an IMU for stair-climbing robots serving passive wheelchair users. The system separates the timing of landing gear operation by position on the stairs and controls it with a state machine. To validate the approach, we built standard model stairs and ran experiments. In the attack-angle experiment, the average error of the landing gear operation was 2.19% and the average error of the attack angle was 2.78%, and the step division and state transitions of the autonomous stair-climbing system were verified. These results suggest the proposed technique can reduce the constraints faced by the transportation-handicapped.
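As a rough illustration of the state-machine idea described above (not the authors' implementation), the sketch below separates landing-gear timing by position using ToF distances and IMU pitch; the state names, thresholds, and sensor interface are assumptions.

```python
from enum import Enum, auto

class StairState(Enum):
    APPROACH = auto()       # driving toward the first step
    FRONT_GEAR_UP = auto()  # front landing gear lifting onto the step
    CLIMB = auto()          # both gear sets on the stair flight
    REAR_GEAR_UP = auto()   # rear landing gear leaving the last step
    DONE = auto()

def next_state(state, tof_front_mm, tof_rear_mm, pitch_deg,
               step_near_mm=150.0, attack_pitch_deg=30.0):
    """Advance the stair-climbing state machine from one sensor sample.

    tof_front_mm / tof_rear_mm: distance to the riser ahead of each gear set.
    pitch_deg: body pitch from the IMU (positive = nose up).
    Thresholds are illustrative, not values from the paper.
    """
    if state is StairState.APPROACH and tof_front_mm < step_near_mm:
        return StairState.FRONT_GEAR_UP   # front wheels reached the riser
    if state is StairState.FRONT_GEAR_UP and pitch_deg > attack_pitch_deg:
        return StairState.CLIMB           # attack angle reached, body on the flight
    if state is StairState.CLIMB and tof_rear_mm < step_near_mm:
        return StairState.REAR_GEAR_UP    # rear wheels reached the last riser
    if state is StairState.REAR_GEAR_UP and abs(pitch_deg) < 3.0:
        return StairState.DONE            # body level again on the landing
    return state
```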

LiDAR Static Obstacle Map based Position Correction Algorithm for Urban Autonomous Driving (도심 자율주행을 위한 라이다 정지 장애물 지도 기반 위치 보정 알고리즘)

  • Noh, Hanseok;Lee, Hyunsung;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.14 no.2 / pp.39-44 / 2022
  • This paper presents a LiDAR static obstacle map based vehicle position correction algorithm for urban autonomous driving. Real Time Kinematic (RTK) GPS is commonly used in highway automated vehicle systems, but in urban systems it has trouble in shaded areas. This paper therefore presents a method to estimate the position of the host vehicle using an AVM camera, a front camera, LiDAR, and low-cost GPS based on an Extended Kalman Filter (EKF). The static obstacle map (STOM) is constructed only from static objects using a Bayesian rule. To run the algorithm, an HD map and a static obstacle reference map (STORM) must be prepared in advance; the STORM is built by accumulating and voxelizing the STOM. The algorithm consists of three main steps. First, sensor data are acquired from the low-cost GPS, AVM camera, front camera, and LiDAR. Second, the low-cost GPS data are used to define the initial position. Third, the AVM camera, front camera, and LiDAR point clouds are matched to the HD map and STORM using the Normal Distributions Transform (NDT), and the position of the host vehicle is corrected with the EKF. The proposed algorithm is implemented in the Linux Robot Operating System (ROS) environment and showed better performance than a lane-detection-only algorithm. It is expected to be more robust and accurate than raw LiDAR point cloud matching in autonomous driving.
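The EKF correction step described above can be pictured with the minimal sketch below: a position fix obtained from map matching updates a GPS-initialized pose estimate. The state layout, measurement model, and noise values are assumptions, not the paper's.

```python
import numpy as np

def ekf_position_update(x, P, z_match, R_match):
    """One EKF measurement update with a position fix from map matching.

    x: state [x, y, yaw]; P: 3x3 covariance.
    z_match: [x, y] position returned by NDT matching against the HD map / STORM.
    R_match: 2x2 measurement noise of the matching result.
    The linear position-only measurement model H is an assumption.
    """
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    y = z_match - H @ x              # innovation
    S = H @ P @ H.T + R_match        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

# Illustrative usage with made-up numbers
x0 = np.array([10.0, 5.0, 0.1])
P0 = np.diag([2.0, 2.0, 0.05])
z = np.array([10.6, 4.7])
R = np.diag([0.3, 0.3])
x1, P1 = ekf_position_update(x0, P0, z, R)
```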

A method for automatically generating a route consisting of line segments and arcs for autonomous vehicle driving test (자율이동체의 주행 시험을 위한 선분과 원호로 이루어진 경로 자동 생성 방법)

  • Se-Hyoung Cho
    • Journal of IKEEE / v.27 no.1 / pp.1-11 / 2023
  • Path-driving tests are necessary for the development of self-driving cars and robots. These tests are conducted in simulation as well as in real environments; in particular, development with reinforcement learning and deep learning relies on simulators when data from diverse environments are needed. To this end, it is necessary to use not only manually designed paths but also paths generated randomly and automatically. Such test-course designs can also be used for actual construction and manufacturing. In this paper, we introduce a method for randomly generating a driving test path consisting of a combination of arcs and line segments. It comprises a collision test that computes the distance between an arc and a line segment, and an algorithm that deletes part of the path and regenerates an appropriate one when the path cannot be continued.
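A minimal sketch of the collision test the abstract mentions, under the assumption that sampling the arc densely and taking point-to-segment distances is an acceptable approximation (the paper computes the arc-segment distance directly):

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab (all 2-D numpy arrays)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def arc_segment_collides(center, radius, theta0, theta1, a, b,
                         clearance, n_samples=64):
    """Approximate collision test between an arc and a line segment.

    The arc (center, radius, start/end angle) is sampled densely and the
    minimum point-to-segment distance is compared with the required
    clearance (e.g. track width). Sampling is a simplification for
    illustration only.
    """
    thetas = np.linspace(theta0, theta1, n_samples)
    pts = center + radius * np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    d_min = min(point_segment_distance(p, a, b) for p in pts)
    return d_min < clearance
```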

Object Part Detection-based Manipulation with an Anthropomorphic Robot Hand Via Human Demonstration Augmented Deep Reinforcement Learning (행동 복제 강화학습 및 딥러닝 사물 부분 검출 기술에 기반한 사람형 로봇손의 사물 조작)

  • Oh, Ji Heon;Ryu, Ga Hyun;Park, Na Hyeon;Anazco, Edwin Valarezo;Lopez, Patricio Rivera;Won, Da Seul;Jeong, Jin Gyun;Chang, Yun Jung;Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.854-857 / 2020
  • Behavior cloning based deep reinforcement learning (DRL) is being studied to develop object-manipulation intelligence for anthropomorphic robot hands. To mitigate the difficulty of training an anthropomorphic robot hand with a high number of degrees of freedom (DOF), human demonstration augmented (DA) reinforcement learning with behavior cloning can be used to learn human-like object manipulation. However, meaningful grasping requires recognizing and grasping a specific part of the object. In this study, we apply the deep-learning YOLO detector to recognize a specific part of an object and DA-DRL to grasp that part, and we validate the approach by detecting and grasping the handles of two objects (a hammer and a knife). The proposed learning method should be useful in applications where a robot interacts with people or must use tools according to their intended purpose.
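A hedged sketch of how detector output might be turned into a grasp target for the policy; the detection format, labels, and selection rule are assumptions rather than the authors' interface.

```python
def select_grasp_target(detections, part_label="handle"):
    """Pick a grasp point from object-part detections.

    detections: list of dicts like
        {"label": "handle", "conf": 0.93, "box": (x1, y1, x2, y2)}
    (this format is an assumption, not the authors' interface).
    Returns the center of the most confident detection of the requested
    part, which a DA-DRL policy could then be conditioned on.
    """
    parts = [d for d in detections if d["label"] == part_label]
    if not parts:
        return None
    best = max(parts, key=lambda d: d["conf"])
    x1, y1, x2, y2 = best["box"]
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

# Illustrative usage with made-up detections for a hammer image
dets = [{"label": "head", "conf": 0.88, "box": (40, 20, 120, 60)},
        {"label": "handle", "conf": 0.91, "box": (60, 60, 80, 180)}]
print(select_grasp_target(dets))  # -> (70.0, 120.0)
```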

Class Classification and Type of Learning Data by Object for Smart Autonomous Delivery (스마트 자율배송을 위한 클래스 분류와 객체별 학습데이터 유형)

  • Young-Jin Kang;;Jeong, Seok Chan
    • The Journal of Bigdata / v.7 no.1 / pp.37-47 / 2022
  • Autonomous delivery operation data is key to driving a paradigm shift for last-mile delivery in the COVID-19 era. To bridge the technological gap between domestic autonomous delivery robots and the overseas technology leaders, large-scale collection and verification of data usable for artificial intelligence training is the top priority. Accordingly, technology-leading countries are contributing to verification and technological development by releasing AI training data as public data that anyone can use. In this paper, 326 objects were collected to train autonomous delivery robots, and artificial intelligence models such as Mask R-CNN and YOLOv3 were trained and verified. The two models were then compared, and the elements required for future autonomous delivery robot research were discussed.
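For context, box-level comparison of two detectors is typically grounded in an IoU match like the sketch below; the 0.5 threshold is the usual convention, not a value from the paper.

```python
def box_iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def is_true_positive(pred_box, gt_box, iou_thresh=0.5):
    """Count a predicted box as correct if it overlaps ground truth enough."""
    return box_iou(pred_box, gt_box) >= iou_thresh
```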

Directionally Adaptive Aliasing and Noise Removal Using Dictionary Learning and Space-Frequency Analysis (사전 학습과 공간-주파수 분석을 사용한 방향 적응적 에일리어싱 및 잡음 제거)

  • Chae, Eunjung;Lee, Eunsung;Cheong, Hejin;Paik, Joonki
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.8 / pp.87-96 / 2014
  • In this paper, we propose directionally adaptive aliasing and noise removal using dictionary learning and space-frequency analysis. The proposed algorithm consists of two modules: i) aliasing and noise detection using dictionary learning and analysis of frequency characteristics from a combined wavelet-Fourier transform, and ii) aliasing removal with noise suppression based on directional shrinkage in the detected regions. Because the aliasing and noise regions are explicitly detected, the method preserves high-frequency details. Experimental results show that the proposed algorithm efficiently reduces aliasing and noise while minimizing the loss of high-frequency detail and the generation of artifacts compared with conventional methods. The algorithm is suitable for applications such as image resampling, super-resolution imaging, and robot vision.
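A minimal sketch of the directional shrinkage step, under the assumption that it amounts to soft-thresholding subband coefficients only inside the detected regions; the parameterization is invented for illustration.

```python
import numpy as np

def directional_shrinkage(coeffs, region_mask, strength=0.2):
    """Soft-threshold transform coefficients only inside detected regions.

    coeffs: 2-D array of directional subband coefficients (e.g. from a
    wavelet or combined wavelet-Fourier transform).
    region_mask: boolean array marking pixels flagged as aliasing/noise.
    strength: threshold as a fraction of the max magnitude (an assumed
    parameterization, not the paper's).
    Coefficients outside the detected regions are left untouched so that
    high-frequency detail is preserved.
    """
    thr = strength * np.max(np.abs(coeffs))
    shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)
    return np.where(region_mask, shrunk, coeffs)
```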

Design and Implementation of AR Model based Automatic Identification and Restoration Scheme for Line Scratches in Old Films (AR 모델 기반의 고전영화의 긁힘 손상의 자동 탐지 및 복원 시스템 설계와 구현)

  • Han, Ngoc-Soc;Kim, Seong-Whan
    • The KIPS Transactions: Part B / v.17B no.1 / pp.47-54 / 2010
  • Old archived film shows two major defects: line scratches and blobs. In this paper, we present the design and implementation of an automatic restoration system for line scratches observed in archived film. We use an autoregressive (AR) image model because our PAST-PRESENT model and sampling pattern let us treat image generation as a stochastic, specifically autoregressive, process. We designed a locality-maximizing scanning pattern that generates a nearly stationary, time-like series of pixels, a strong requirement for a stochastic series to be autoregressive. The sampled pixel series undergoes filtering and model fitting with the Durbin-Levinson algorithm before interpolation. We designed a three-stage film restoration system comprising (1) film acquisition from VHS tapes, (2) simple line-scratch detection and restoration, and (3) manual blob identification and a sophisticated inpainting scheme. Film acquisition and the simple inpainting scheme were implemented on a Texas Instruments TMS320DM642 EVM DSP board, and the AR inpainting scheme was implemented on a PC for sophisticated restoration. We evaluated the scheme on two old Korean films, "Viva Freedom" and "Robot Tae-Kwon-V", and the results show that it improves on Bertalmio's scheme in subjective quality (MOS), objective quality (PSNR), and especially restoration ratio (RR), which reflects how similar the result is to manual inpainting.
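The Durbin-Levinson model-fitting step can be sketched as the standard recursion below, which recovers AR coefficients from an autocorrelation sequence; the scanning-pattern construction of the pixel series is not reproduced here.

```python
import numpy as np

def levinson_durbin(r, order):
    """Fit AR coefficients from an autocorrelation sequence r[0..order].

    Returns (a, sigma2), where the one-step prediction is
        x[n] ~ sum_k a[k] * x[n-k-1],  k = 0..order-1,
    and sigma2 is the final prediction error power. This is the textbook
    Durbin-Levinson recursion, not the paper's exact implementation.
    """
    a = np.zeros(order)
    e = r[0]
    for m in range(order):
        acc = r[m + 1] - np.dot(a[:m], r[m:0:-1])
        k = acc / e                        # reflection coefficient
        a_new = a.copy()
        a_new[m] = k
        a_new[:m] = a[:m] - k * a[:m][::-1]
        a = a_new
        e *= (1.0 - k * k)                 # updated prediction error power
    return a, e
```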

The Research of Shape Recognition Algorithm for Image Processing of Cucumber Harvest Robot (오이수확로봇의 영상처리를 위한 형상인식 알고리즘에 관한 연구)

  • Min, Byeong-Ro;Lim, Ki-Taek;Lee, Dae-Weon
    • Journal of Bio-Environment Control / v.20 no.2 / pp.63-71 / 2011
  • Pattern recognition of cucumbers was conducted on binary images obtained by thresholding at the optimum intensity level. By restricting the conditions of the learning patterns, the algorithm could extract output patterns from identical and similar input patterns. The pattern recognition algorithm was developed to determine the position of a cucumber in a real image under working conditions. The algorithm learned two, three, or four learning patterns, and each set was applied to twenty sample patterns. The success rate of restoring the output pattern to the sample pattern from two, three, or four learning patterns was 65.0%, 45.0%, and 12.5%, respectively; the more learning patterns were used, the more different output patterns appeared during conversion. Feature patterns of the cucumber were detected by automatically scanning the real image with a 30 by 30 pixel window. The processing time for cucumber recognition was 0.5 to 1 second. In tests on five real images, false patterns relative to the learning pattern were eliminated at a rate of 96 to 98%. Some output patterns were still recognized as a cucumber under these conditions; the false recognition rate ranged from 0.1 to 4.2%.
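A minimal sketch of the thresholding and window-scanning steps, with the threshold value and scanning stride left as application-specific assumptions.

```python
import numpy as np

def binarize_at_optimum(gray, threshold):
    """Binarize a grayscale image at a chosen intensity threshold.

    gray: 2-D array; threshold: the intensity level that best separates
    cucumber pixels from background (chosen empirically in the paper; the
    value would be application-specific). Returns a boolean mask.
    """
    return np.asarray(gray) >= threshold

def scan_windows(mask, win=30, stride=30):
    """Yield (row, col, window) tiles of the binary mask for pattern matching."""
    h, w = mask.shape
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            yield r, c, mask[r:r + win, c:c + win]
```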

A 3-D Vision Sensor Implementation on Multiple DSPs TMS320C31 (다중 TMS320C31 DSP를 사용한 3-D 비젼센서 Implementation)

  • Oksenhendler, V.;Bensrhair, Abdelaziz;Miche, Pierre;Lee, Sang-Goog
    • Journal of Sensor Science and Technology / v.7 no.2 / pp.124-130 / 1998
  • High-speed 3D vision systems are essential for autonomous robot and vehicle control applications. In this study, a stereo vision process was developed. It consists of three steps: extraction of edges in the right and left images, matching of corresponding edges, and calculation of the 3D map. The process is implemented on a VME 150/40 Imaging Technology vision system, a modular system composed of a display card, an acquisition card, a 4-Mbyte image frame memory, and three computational cards. The programmable accelerator computational modules run at 40 MHz and are based on the TMS320C31 DSP with a 64 × 32 bit instruction cache and two 1024 × 32 bit internal RAMs. Each is equipped with 512 Kbytes of static RAM, 4 Mbytes of image memory, 1 Mbyte of flash EEPROM, and a serial port. Data transfers and communication between modules are provided by three 8-bit global video buses and three locally configurable 8-bit pipeline video buses; the VME bus is dedicated to system management. Tasks are distributed among the DSPs as follows: two DSPs perform edge detection, one for the right image and the other for the left, and the third processor computes the matching and the 3D calculation. With 512 × 512 pixel images, this sensor generates dense 3D maps at a rate of about 1 Hz, depending on scene complexity. Results could certainly be improved by using specially suited multiprocessor cards.
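The 3D-map step ultimately rests on the standard disparity-to-depth relation; the sketch below uses illustrative camera parameters, not the system's calibration.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a matched edge point from its stereo disparity.

    Z = f * B / d, with focal length f in pixels, baseline B in meters,
    and disparity d in pixels. The parameters in the example are
    illustrative values, not the sensor's calibration.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, B = 0.12 m, disparity = 14 px -> 6.0 m
print(depth_from_disparity(14.0, 700.0, 0.12))
```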


Positive Random Forest based Robust Object Tracking (Positive Random Forest 기반의 강건한 객체 추적)

  • Cho, Yunsub;Jeong, Soowoong;Lee, Sangkeun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.6 / pp.107-116 / 2015
  • With the growth of digital devices, the proliferation of high-performance computers, and the availability of high-quality, inexpensive video cameras, demand for automated video analysis is increasing, especially in intelligent monitoring systems, video compression, and robot vision. That is why object tracking in computer vision is in the spotlight. Tracking is the process of locating a moving object over time using a camera. Handling changes in the object's scale, rotation, and shape is the most important requirement for robust object tracking. In this paper, we propose a robust object tracking scheme using a Random Forest. Specifically, an object detection scheme based on region covariance and ZNCC (zero-mean normalized cross-correlation) is adopted to estimate an accurate object location. The detected region is then divided into five regions for random-forest-based learning, and the five regions are verified by the random forest. Verified regions are put into the model pool. Finally, the model is updated to correct the object location when a region does not contain the object. Experiments show that the proposed method locates the object more accurately than existing methods.
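A small sketch of the ZNCC score mentioned above, which the tracker combines with region covariance for object localization; this is the textbook formulation, not the paper's exact code.

```python
import numpy as np

def zncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation between two equal-size patches.

    Returns a value in [-1, 1], with 1 meaning a perfect match up to gain
    and offset; a template-matching score like this can localize the
    tracked object before region verification.
    """
    a = np.asarray(patch_a, dtype=float).ravel()
    b = np.asarray(patch_b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```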