• Title/Summary/Keyword: Robot Vision


Research of the Delivery Autonomy and Vision-based Landing Algorithm for Last-Mile Service using a UAV (무인기를 이용한 Last-Mile 서비스를 위한 배송 자동화 및 영상기반 착륙 알고리즘 연구)

  • Hanseob Lee;Hoon Jung
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.2 / pp.160-167 / 2023
  • This study focuses on the development of a Last-Mile delivery service in which unmanned vehicles deliver goods directly to the end consumer: a drone performs the autonomous delivery mission, and an image-based precision landing algorithm enables handoff to a robot at an intermediate facility. As the logistics market continues to grow rapidly, parcel volumes increase exponentially each year; however, low delivery fees drive up the workload of delivery personnel, degrading the quality of delivery services. To address this issue, the research team studied a Last-Mile delivery service using unmanned vehicles and, in this paper, the technologies required for drone-based goods transportation. The flight scenario begins with the drone carrying the goods from a pickup location to the rooftop of the building containing the final delivery destination. The rooftop houses a handoff facility, and the drone must land accurately on a marker placed there. The mission is complete once the goods are delivered and the drone returns to its original location. The research team developed a mission-planning algorithm that executes this scenario automatically and an algorithm that recognizes the marker through a camera sensor to achieve a precision landing. The performance of the developed system has been verified through multiple trial operations at ETRI.
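
The abstract does not detail the marker recognition step; the following is a minimal, hypothetical sketch of how a downward-facing camera could turn a detected marker's pixel offset into a lateral landing correction, assuming an OpenCV ArUco fiducial and a simple proportional controller (the paper's actual marker type and control law are not specified here).

```python
# Hypothetical precision-landing sketch: detect a fiducial marker in the
# downward camera image and convert its offset from the image center into
# a body-frame velocity command. Marker dictionary, gain, and axis
# convention are assumptions, not the authors' implementation.
import cv2
import numpy as np

ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
K_P = 0.005                                # px -> m/s gain, assumed

def landing_velocity_command(frame_bgr):
    """Map the marker's pixel offset from image center to (vx, vy) commands."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
    if ids is None:
        return None                        # marker lost: caller holds position
    cx, cy = corners[0][0].mean(axis=0)    # marker center in pixels
    h, w = gray.shape
    # Assumed axis convention: image +x -> body +y, image +y -> body -x.
    return (-K_P * (cy - h / 2.0), -K_P * (cx - w / 2.0))
```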

Human Tracking and Body Silhouette Extraction System for Humanoid Robot (휴머노이드 로봇을 위한 사람 검출, 추적 및 실루엣 추출 시스템)

  • Kwak, Soo-Yeong;Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.6C / pp.593-603 / 2009
  • In this paper, we propose a new integrated computer vision system designed to track multiple human beings and extract their silhouettes with an active stereo camera. The proposed system consists of three modules: detection, tracking, and silhouette extraction. Detection is performed by camera ego-motion compensation and disparity segmentation. For tracking, we present an efficient mean-shift-based tracking method in which the tracked objects are characterized as disparity-weighted color histograms. The silhouette is obtained by two-step segmentation: a trimap is estimated in advance and then effectively incorporated into a graph-cut framework for fine segmentation. The proposed system was evaluated against ground-truth data and shown to detect and track multiple people well and to produce high-quality silhouettes. The proposed system can assist gesture and gait recognition in the field of Human-Robot Interaction (HRI).
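
As an illustration of the disparity-weighted color histogram idea, here is a hedged sketch built on OpenCV's mean-shift primitives; the hue-only model, weighting by raw disparity, and the bin count are assumptions rather than the authors' exact formulation.

```python
# Sketch: pixels with larger disparity (nearer the camera) contribute more
# to the hue histogram that drives mean-shift tracking.
import cv2
import numpy as np

def disparity_weighted_hist(hsv_roi, disp_roi, n_bins=32):
    """Hue histogram where each pixel's vote is weighted by its disparity."""
    hue = hsv_roi[:, :, 0].ravel()
    weights = disp_roi.ravel().astype(np.float32)   # nearer pixels vote more
    hist, _ = np.histogram(hue, bins=n_bins, range=(0, 180), weights=weights)
    hist = 255.0 * hist / (hist.max() + 1e-6)       # scale for back-projection
    return hist.astype(np.float32)

def track_step(hsv_frame, hist, window):
    """One tracking step: back-project the model histogram, then mean shift."""
    bp = cv2.calcBackProject([hsv_frame], [0], hist, [0, 180], scale=1)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, window = cv2.meanShift(bp, window, crit)
    return window                                   # updated (x, y, w, h)
```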

A Study on the Implementation of RFID-based Autonomous Navigation System for Robotic Cellular Phone(RCP)

  • Choe, Jae-Il;Choi, Jung-Wook;Oh, Dong-Ik;Kim, Seung-Woo
    • Institute of Control, Robotics and Systems Conference Proceedings (제어로봇시스템학회 학술대회논문집) / 2005.06a / pp.457-462 / 2005
  • The industrial and economical importance of CP (Cellular Phone) is growing rapidly. Combined with IT technology, CP is currently one of the most attractive technologies for all. However, unless we find a breakthrough in the technology, its growth may slow down soon. RT (Robot Technology) is considered one of the most promising next-generation technologies. Unlike the industrial robots of the past, today's robots require advanced technologies such as soft computing, human-friendly interfaces, interaction techniques, speech recognition, object recognition, and many others. In this study, we present a new technological concept named RCP (Robotic Cellular Phone), which combines RT and CP, in the vision of opening a new direction for the advance of CP, IT, and RT together. RCP consists of three sub-modules: $RCP^{Mobility}$, $RCP^{Interaction}$, and $RCP^{Integration}$. $RCP^{Mobility}$, the main focus of this paper, is an autonomous navigation system that combines RT mobility with CP. Through $RCP^{Mobility}$, we can provide CP with robotic functionalities such as auto-charging and real-world robotic entertainment. Eventually, CP may become a robotic pet to human beings. $RCP^{Mobility}$ consists of various controllers, two of the main ones being the trajectory controller and the self-localization controller. While the trajectory controller is responsible for the wheel-based navigation of the RCP, the self-localization controller provides localization information for the moving RCP. With the coordinate information acquired from the RFID-based self-localization controller, the trajectory controller refines the RCP's movement to achieve better navigation. In this paper, a prototype system we developed for $RCP^{Mobility}$ is presented. We describe the overall structure of the system and provide experimental results of RCP navigation.


Performance Improvement of Human Detection in Thermal Images using Principal Component Analysis and Blob Clustering (주성분 분석과 Blob 군집화를 이용한 열화상 사람 검출 시스템의 성능 향상)

  • Jo, Ahra;Park, Jeong-Sik;Seo, Yong-Ho;Jang, Gil-Jin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.2 / pp.157-163 / 2013
  • In this paper, we propose a human detection technique using a thermal imaging camera. The proposed method is useful at night or in rainy weather, where visible-light cameras are unable to detect human activities. Based on the observation that a human is usually brighter than the background in thermal images, we estimate preliminary human regions using statistical confidence measures on the gray-level brightness histogram. Afterwards, we apply Gaussian filtering and blob labeling to remove unwanted noise and to cluster the scattered pixel distributions around the centers of gravity of the blobs. In the final step, we exploit the aspect ratio and area of the unified object region, as well as a number of principal components extracted from the object region images, to determine whether the detected object is a human. The experimental results show that the proposed method is effective in environments where visible-light cameras are not applicable.
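
A rough sketch of the described pipeline (bright-region thresholding, Gaussian filtering, blob labeling, shape checks) might look as follows; the percentile threshold, area bound, and aspect-ratio window are illustrative assumptions, and the paper's PCA verification stage is omitted.

```python
# Hedged sketch of warm-blob human candidate detection in a thermal image.
import cv2
import numpy as np

def detect_human_candidates(thermal_gray):
    """thermal_gray: 8-bit thermal image; returns candidate (x, y, w, h) boxes."""
    smooth = cv2.GaussianBlur(thermal_gray, (5, 5), 0)   # suppress speckle
    thr = np.percentile(smooth, 98)   # stand-in for the paper's statistical
                                      # confidence measure on the histogram
    _, mask = cv2.threshold(smooth, thr, 255, cv2.THRESH_BINARY)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):             # label 0 is the background
        x, y, w, h, area = stats[i]
        # Keep upright, person-sized blobs (thresholds are assumptions);
        # the paper additionally verifies candidates with PCA features.
        if area > 100 and 1.5 < h / float(w) < 4.0:
            boxes.append((x, y, w, h))
    return boxes
```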

Development of a Horse Robot for Indoor Leisure Sports (실내 레저 스포츠를 위한 승마 로봇의 개발)

  • Lee, Wonsik;Lee, Youngdae;Moon, Chanwoo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.5 / pp.161-166 / 2014
  • Recently, indoor sports simulators equipped with virtual reality devices, such as screen golf systems, have been riding high. There have been many attempts to develop indoor simulator systems that let people enjoy exercise across various sports. Real horseback riding has not been popularized because of its cost, the difficulty of learning it, and its danger. In this research, a robotic horseback-riding platform based on a parallel mechanism and virtual reality devices is proposed. The proposed platform provides a realistic riding feel and various levels of riding difficulty. A motion capture system equipped with a vision sensor enables riders to correct their riding posture against an expert's. The developed horseback-riding platform makes it possible to enjoy horseback riding in all weather and can also be used for systematic horseback-riding training.

Design and Implementation of Real-time High Performance Face Detection Engine (고성능 실시간 얼굴 검출 엔진의 설계 및 구현)

  • Han, Dong-Il;Cho, Hyun-Jong;Choi, Jong-Ho;Cho, Jae-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.2 / pp.33-44 / 2010
  • This paper proposes a real-time face detection hardware architecture for robot vision processing applications. The proposed architecture is robust against illumination changes and operates at no less than 60 frames per second. It uses the Modified Census Transform to obtain face characteristics that are robust against illumination changes, and the AdaBoost algorithm to learn and generate the characteristics of the face data, with which the face is finally detected. The paper describes the face detection hardware structure, composed of Memory Interface, Image Scaler, MCT Generator, Candidate Detector, Confidence Comparator, Position Resizer, Data Grouper, and Detected Result Display, and reports verification results of the hardware implementation on a Xilinx Virtex5 LX330 FPGA. Verification with camera images showed that up to 32 faces per frame can be detected at speeds of up to 149 frames per second.
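
For reference, the Modified Census Transform underlying this detector can be sketched in a few lines of NumPy: each 3x3 neighborhood becomes a 9-bit code recording which pixels exceed the neighborhood mean, which is what makes the feature robust to monotonic illumination change. This is a software illustration only, not the paper's hardware design.

```python
# Modified Census Transform (MCT) over every 3x3 neighborhood.
import numpy as np

def mct(image):
    """9-bit Modified Census Transform of a grayscale image (valid region)."""
    img = image.astype(np.float32)
    h, w = img.shape
    # The 9 shifted views of every 3x3 neighborhood, stacked on axis 0.
    patches = np.stack([img[dy:h - 2 + dy, dx:w - 2 + dx]
                        for dy in range(3) for dx in range(3)])
    mean = patches.mean(axis=0)           # neighborhood mean, incl. center
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for bit in range(9):                  # one bit per neighborhood pixel
        out |= (patches[bit] > mean).astype(np.uint16) << bit
    return out                            # codes in [0, 511]
```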

A Study on the Implementation of RFID-Based Autonomous Navigation System for Robotic Cellular Phone (RCP) (RFID를 이용한 RCP 자율 네비게이션 시스템 구현을 위한 연구)

  • Choe Jae-Il;Choi Jung-Wook;Oh Dong-Ik;Kim Seung-Woo
    • Journal of Institute of Control, Robotics and Systems / v.12 no.5 / pp.480-488 / 2006
  • The industrial and economical importance of CP (Cellular Phone) is growing rapidly. Combined with IT technology, CP is one of the most attractive technologies of today. However, unless we find a new breakthrough in the technology, its growth may slow down soon. RT (Robot Technology) is considered one of the most promising next-generation technologies. Unlike the industrial robots of the past, today's robots require advanced features such as soft computing, human-friendly interfaces, interaction techniques, speech recognition, and object recognition, among many others. In this paper, we present a new technological concept named RCP (Robotic Cellular Phone), which integrates RT and CP in the vision of opening a combined advancement of CP, IT, and RT. RCP consists of three sub-modules: $RCP^{Mobility}$ (RCP Mobility System), $RCP^{Interaction}$, and $RCP^{Integration}$. The main focus of this paper is $RCP^{Mobility}$, which combines an autonomous navigation system from RT mobility with CP. Through $RCP^{Mobility}$, we are able to provide CP with robotic functions such as auto-charging and real-world robotic entertainment. Ultimately, CP may become a robotic pet to human beings. $RCP^{Mobility}$ consists of various controllers; two of the main ones are the trajectory controller and the self-localization controller. While the former is responsible for the wheel-based navigation of the RCP, the latter provides localization information for the moving RCP. With the coordinates acquired from the RFID-based self-localization controller, the trajectory controller refines the RCP's movement to achieve better navigation. In this paper, a prototype of $RCP^{Mobility}$ is presented. We describe the overall structure of the system and provide experimental results on RCP navigation.
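
The division of labor between the two controllers can be illustrated with a small sketch: wheel odometry predicts the pose, and an absolute (x, y) fix from the RFID self-localization step pulls it back toward ground truth. The differential-drive model and the blending gain are assumptions for illustration, not the authors' control law.

```python
# Odometry prediction corrected by an absolute RFID localization fix.
import math

ALPHA = 0.3  # correction gain toward the RFID fix (assumed)

def update_pose(pose, d_left, d_right, wheel_base, rfid_fix=None):
    """pose = (x, y, theta); d_left/d_right are wheel travel increments."""
    x, y, th = pose
    d = (d_left + d_right) / 2.0
    th += (d_right - d_left) / wheel_base     # differential-drive model
    x += d * math.cos(th)
    y += d * math.sin(th)
    if rfid_fix is not None:                  # tag gives an absolute (x, y)
        fx, fy = rfid_fix
        x += ALPHA * (fx - x)
        y += ALPHA * (fy - y)
    return (x, y, th)
```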

Sampling-based Control of SAR System Mounted on A Simple Manipulator (간단한 기구부와 결합한 공간증강현실 시스템의 샘플 기반 제어 방법)

  • Lee, Ahyun;Lee, Joo-Ho;Lee, Joo-Haeng
    • Korean Journal of Computational Design and Engineering / v.19 no.4 / pp.356-367 / 2014
  • A robotic spatial augmented reality (RSAR) system, which combines robotic components with projector-based AR techniques, is unique in its ability to expand the user interaction area by dynamically changing the position and orientation of a projector-camera unit (PCU). For a moving PCU mounted on a conventional robotic device, we can compute its extrinsic parameters using a robot kinematics method, assuming the link and joint geometry is available. In an RSAR system based on a user-created robot (UCR), however, it is difficult to calibrate or measure the geometric configuration, which limits the applicability of conventional kinematics methods. In this paper, we propose a data-driven kinematics control method for a UCR-based RSAR system. The proposed method utilizes a pre-sampled data set of camera calibrations acquired at sufficiently many kinematic configurations over fixed joint domains; the sampled set is then compactly represented as a set of B-spline surfaces. The proposed method has two merits. First, it does not require any kinematics model such as link lengths or joint orientations. Second, the computation is simple, since it just evaluates several polynomials rather than relying on Jacobian computation. We describe the proposed method and demonstrate results for an experimental RSAR system with a PCU on a simple pan-tilt arm.
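
A hedged sketch of the data-driven evaluation step, assuming one smoothing B-spline surface per extrinsic parameter over a regular pan/tilt sample grid (the actual sampling pattern and parameter set are not specified in the abstract):

```python
# Fit a B-spline surface to pre-sampled calibration values over (pan, tilt),
# then evaluate the surface in place of a kinematic model.
import numpy as np
from scipy.interpolate import RectBivariateSpline

pan = np.linspace(-60, 60, 13)            # sampled pan angles (deg), assumed
tilt = np.linspace(-30, 30, 7)            # sampled tilt angles (deg), assumed
# samples[i, j] holds one calibrated extrinsic parameter (e.g. camera tx)
# measured at (pan[i], tilt[j]); random data stands in for real measurements.
samples = np.random.rand(13, 7)

surface = RectBivariateSpline(pan, tilt, samples, kx=3, ky=3)

def extrinsic_at(p, t):
    """Evaluate the spline surface at an arbitrary (pan, tilt) pose."""
    return surface(p, t)[0, 0]            # no Jacobian, just polynomial eval
```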

Real-Time Motion Tracking Detection System for a Spherical Pendulum Using a USB Camera (USB 카메라를 이용한 실시간 구면진자 운동추적 감지시스템)

  • Moon, Byung-Yoon;Hong, Sung-Rak;Ha, Manh-Tuan;Kang, Chul-Goo
    • Transactions of the Korean Society of Mechanical Engineers A / v.40 no.9 / pp.807-813 / 2016
  • Recently, a spherical pendulum attached to the end-effector of a robot manipulator has frequently been used as a test bed for residual vibration suppression control in multi-dimensional motion. However, there was no automatic tracking system to detect the current bob position on-line, and it was inconvenient that the bob position could not be stored in real time and its trajectory plotted. In this study, we developed a two-dimensional, real-time bob-detecting system using a digital USB camera, whose keys are the hardware component design and C software for fast image processing and interfacing. The developed system was applied to residual vibration suppression control of a two-dimensional spherical pendulum attached to the end-effector of a two-degree-of-freedom SCARA robot, and the effectiveness of the developed system has been demonstrated.
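
The paper's implementation is in C and hardware-specific; as a language-consistent illustration of the same detect-and-log loop, here is a Python/OpenCV sketch that thresholds a distinctively colored bob in HSV and records its centroid with timestamps. The HSV bounds, camera index, and sample count are assumptions.

```python
# Grab frames from a USB camera, segment the bob by color, and log the
# centroid trajectory for later plotting.
import time
import cv2
import numpy as np

LOWER = np.array([0, 120, 120])           # assumed HSV range of the bob color
UPPER = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)                 # the USB camera
trajectory = []                           # (timestamp, x, y) samples
while len(trajectory) < 500:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask)
    if m["m00"] > 0:                      # centroid of the bob blob
        trajectory.append((time.time(), m["m10"] / m["m00"], m["m01"] / m["m00"]))
cap.release()
```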

Multi-sensor Intelligent Robot (멀티센서 스마트 로보트)

  • Jang, Jong-Hwan;Kim, Yong-Ho
    • The Journal of Natural Sciences / v.5 no.1 / pp.87-93 / 1992
  • A robotically assisted field material handling system designed for loading and unloading a planar pallet with a forklift in an unstructured field environment is presented. The system uses combined acoustic/visual sensing data to determine the position and orientation of the pallet and the specific locations of its two slots, so that the forklift can move close to a slot and engage it for transport. To reduce the complexity of the material handling operation, we have developed a method based on integrating 2-D range data from a Polaroid ultrasonic sensor with 2-D visual data from an optical camera. Data obtained from the two separate sources complement each other and are used in an efficient algorithm to control this robotically assisted field material handling system. Range data obtained from two linear scannings are used to determine the pan and tilt angles of a pallet using the least-mean-square method. The 2-D visual data are then used to determine the swing angle and engagement location of a pallet using edge detection and Hough transform techniques. The limitations on the pan and tilt orientations that can be determined are discussed. The system is evaluated through hardware and software implementation, and the experimental results are presented.
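
The range-data step can be illustrated as a least-squares fit: fit a plane to ultrasonic samples on the pallet face and read the pan and tilt angles off the fitted normal. The sketch below assumes the two linear scans have already been converted to 3-D points, which is an assumption about the data format rather than the paper's exact procedure.

```python
# Least-squares plane fit z = a*x + b*y + c, with pan/tilt angles derived
# from the plane normal.
import numpy as np

def pallet_pan_tilt(points):
    """points: (N, 3) array of (x, y, z) range samples on the pallet face."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    normal = np.array([-a, -b, 1.0])      # normal of the fitted plane
    normal /= np.linalg.norm(normal)
    pan = np.degrees(np.arctan2(normal[0], normal[2]))
    tilt = np.degrees(np.arctan2(normal[1], normal[2]))
    return pan, tilt
```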
