• Title/Summary/Keyword: Camera Controller


A Study for Detecting AGV Driving Information using Vision Sensor (비전 센서를 이용한 AGV의 주행정보 획득에 관한 연구)

  • Lee, Jin-Woo;Sohn, Ju-Han;Choi, Sung-Uk;Lee, Young-Jin;Lee, Kwon-Soon
    • Proceedings of the KIEE Conference / 2000.07d / pp.2575-2577 / 2000
  • We performed AGV driving tests with a color CCD camera mounted on the vehicle. The work divides into two parts: an image processing part that measures the state of the guideline and the AGV, and a part that derives the reference steering angle from those measurements. First, the 2-D image information from the vision sensor is converted into 3-D information using the angle and position of the CCD camera, so that the AGV knows its own driving state. Using this information, the AGV then computes a reference steering angle that varies with its speed: at low speed the controller weights the left/right offset from the guideline, and as speed increases it weights the slope of the guideline. Finally, we model this behavior as a PID controller and schedule its coefficients according to the AGV speed.

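The speed-dependent weighting described in the abstract above can be sketched as a PID controller whose input blends lateral offset and line slope. The gains, the linear blending rule, and the speed normalization below are illustrative assumptions, not values from the paper:

```python
# Sketch of a speed-scheduled steering PID: at low speed the input is dominated
# by the guideline's left/right offset, at high speed by the guideline's slope.
# All gain values here are made-up assumptions for illustration.

class SpeedScheduledPID:
    def __init__(self, kp=1.0, ki=0.05, kd=0.2, v_max=2.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.v_max = v_max          # speed at which slope fully dominates
        self.integral = 0.0
        self.prev_error = 0.0

    def steering_angle(self, lateral_error, line_slope, speed, dt=0.05):
        # Blend the two cues: weight shifts toward slope as speed rises.
        w = min(speed / self.v_max, 1.0)
        error = (1.0 - w) * lateral_error + w * line_slope
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = SpeedScheduledPID()
angle = pid.steering_angle(lateral_error=0.1, line_slope=0.02, speed=0.5, dt=0.05)
```

At half the reference speed the offset term still carries three quarters of the weight, matching the abstract's low-speed emphasis on left/right error.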

A Study on the Development of Automatic Ship Berthing System (선박 자동접안시스템 구축을 위한 기초연구)

  • Kim, Y.B.;Choi, Y.W.;Chae, G.H.
    • Journal of Power System Engineering / v.10 no.4 / pp.139-146 / 2006
  • This paper describes a vector code correlation (VCC) method and an algorithm that improve image processing performance in a camera-based measurement system for automatically berthing and controlling a ship equipped with side thrusters. To realize automatic berthing, the berthing assistance system on the ship must continuously track a target on the berth, measuring the distance to the target and the ship's attitude so that the ship can be moved to the specified location. The system consists of four components: a CCD camera, a camera direction controller, a standard PC with a built-in image processing board, and a signal conversion unit connected to the PC's parallel port. The goal is to reduce image processing time so that the berthing system can keep a safe schedule against risks while approaching the berth. This is achieved by composing a vector code image from the gradient of a plane fitted to the pixel brightness over each image region, and the effectiveness is verified on a commonly used PC. Experimental results show that the proposed method is applicable to a measurement system for automatic ship berthing and processes images about four times faster than the typical template matching method.

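The idea of matching quantized brightness-gradient codes instead of raw intensities, in the spirit of the VCC step above though not the paper's exact formulation, might be sketched as follows; the code count and the gradient approximation are assumptions:

```python
import numpy as np

def vector_code_image(img, n_codes=8):
    # Approximate the local brightness-surface gradient with finite
    # differences, then quantize the gradient direction into n_codes bins.
    gy, gx = np.gradient(img.astype(float))
    angle = np.arctan2(gy, gx)                       # direction in [-pi, pi]
    return np.floor((angle + np.pi) / (2 * np.pi) * n_codes).astype(int) % n_codes

def code_match_score(code_a, code_b):
    # Fraction of pixels whose direction codes agree: a cheap integer
    # comparison replacing the multiply-accumulate of intensity matching.
    return float(np.mean(code_a == code_b))

img = np.arange(25.0).reshape(5, 5)   # toy ramp image
c = vector_code_image(img)
```

Matching small integer codes rather than grey levels is one plausible reason such a scheme runs faster than plain template matching on a commodity PC.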

Development of Intelligent Multiple Camera System for High-Speed Impact Experiment (고속충돌 시험용 지능형 다중 카메라 시스템 개발)

  • Chung, Dong Teak;Park, Chi Young;Jin, Doo Han;Kim, Tae Yeon;Lee, Joo Yeon;Rhee, Ihnseok
    • Transactions of the Korean Society of Mechanical Engineers A / v.37 no.9 / pp.1093-1098 / 2013
  • Single-crystal sapphire is used as a transparent bulletproof window material; however, few studies have investigated its dynamic behavior and fracture properties under high-speed impact. High-speed, high-resolution sequential images are required to study the interaction of a bullet with brittle ceramic materials. In this study, a device was developed to capture the sequence of high-speed impact/penetration phenomena. The system consists of a speed measurement device, a microprocessor-based camera controller, and multiple CCD cameras. Using a linear array sensor, the speed measurement device can measure a small (diameter of 1-2 mm) and fast (up to Mach 3) bullet. Once a bullet is launched, it passes through the speed measurement device, where its transit time and speed are recorded; the camera controller then computes the exact time of arrival at the target during flight and sends trigger signals to the cameras and flashes with specific delays, capturing the impact images sequentially. Capturing high-speed images is practically impossible without this estimate of the time of arrival. With the new system we were able to capture high-speed images with precise accuracy.
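The controller's arrival-time computation reduces to simple kinematics: divide the known gate-to-target distance by the measured speed, then stagger the camera triggers. A sketch, with made-up distance and an assumed fixed inter-camera delay:

```python
# Toy version of the camera controller's timing step. The distance, number of
# cameras, and inter-frame delay are illustrative assumptions, not the paper's.

def camera_trigger_delays(speed_mps, distance_m, n_cameras=4, interframe_s=20e-6):
    # Time of flight from the speed-measurement gate to the target.
    t_arrival = distance_m / speed_mps
    # Stagger each camera/flash by a fixed delay so the impact sequence
    # is captured frame by frame around the arrival time.
    return [t_arrival + i * interframe_s for i in range(n_cameras)]

# Roughly Mach-3 bullet, 2.04 m from the gate to the target.
delays = camera_trigger_delays(speed_mps=1020.0, distance_m=2.04)
```

At these speeds the whole window is about two milliseconds, which is why triggering without a computed arrival time fails.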

Kalman Filter-based Sensor Fusion for Posture Stabilization of a Mobile Robot (모바일 로봇 자세 안정화를 위한 칼만 필터 기반 센서 퓨전)

  • Jang, Taeho;Kim, Youngshik;Kyoung, Minyoung;Yi, Hyunbean;Hwan, Yoondong
    • Transactions of the Korean Society of Mechanical Engineers A / v.40 no.8 / pp.703-710 / 2016
  • In robotics research, accurate estimation of the current robot position is important for achieving motion control. In this research, we focus on a sensor fusion method that improves position estimation for a wheeled mobile robot by combining two different sensor measurements: camera-based vision and encoder-based odometry data are fused with Kalman filter techniques. An external camera-based vision system provides global position coordinates (x, y) for the mobile robot in an indoor environment, while internal encoder-based odometry provides the robot's linear and angular velocities. The position estimated by the Kalman filter is then used as input to the motion controller, which significantly improves its performance. Finally, we experimentally verify the proposed sensor-fused position estimation and motion controller on an actual mobile robot system, comparing the Kalman filter-based fused estimate with two single-sensor estimates (vision-based and odometry-based).
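The predict-with-odometry, correct-with-vision cycle described above can be sketched as a minimal Kalman filter over the (x, y) position; the noise covariances and time step below are illustrative guesses, not the paper's values:

```python
import numpy as np

def kf_fuse(x, P, v_odom, z_vision, dt=0.1, Q=None, R=None):
    # One predict/update cycle fusing odometry velocity and a camera fix.
    Q = np.eye(2) * 1e-3 if Q is None else Q   # process noise (odometry drift)
    R = np.eye(2) * 1e-2 if R is None else R   # measurement noise (camera)
    # Predict: dead-reckon the position with the encoder-derived velocity.
    x_pred = x + v_odom * dt
    P_pred = P + Q
    # Update: correct with the external camera's global (x, y) fix (H = I).
    K = P_pred @ np.linalg.inv(P_pred + R)
    x_new = x_pred + K @ (z_vision - x_pred)
    P_new = (np.eye(2) - K) @ P_pred
    return x_new, P_new

x0, P0 = np.zeros(2), np.eye(2) * 0.1
x1, P1 = kf_fuse(x0, P0, v_odom=np.array([1.0, 0.0]),
                 z_vision=np.array([0.12, 0.01]))
```

The fused estimate lands between the dead-reckoned prediction and the camera fix, and its covariance shrinks after each camera update, which is the behavior the motion controller benefits from.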

Unsupervised Real-time Obstacle Avoidance Technique based on a Hybrid Fuzzy Method for AUVs

  • Anwary, Arif Reza;Lee, Young-Il;Jung, Hee;Kim, Yong-Gi
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.1 / pp.82-86 / 2008
  • This article presents an ARTMAP and Fuzzy BK-product approach to underwater obstacle avoidance for autonomous underwater vehicles (AUVs). An AUV moves through unstructured underwater areas and may meet obstacles on its way; AUVs are equipped with complex sensor systems such as cameras, aquatic sonar, and transducers. A neural-integrated Fuzzy BK-product controller, which combines a fuzzy-logic representation of human reasoning with the learning capability of neural networks (ARTMAP), is developed for obstacle avoidance in unstructured areas. The ARTMAP-Fuzzy BK-product controller architecture comprises two distinct elements: 1) fuzzy-logic membership functions and 2) a feed-forward ART component. The feed-forward ART component is used to interpret the unstructured underwater environment, while the Fuzzy BK-product interpolates the fuzzy rule set; after defuzzification, the output is used to decide a safe direction that avoids collision between the AUV and the obstacle. An online reinforcement learning method is introduced that continuously adapts the fuzzy units to changes in the environment and decides the optimal path from source to destination.
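The ARTMAP and BK-product components are too involved for a short sketch, but the basic fuzzify/defuzzify step the abstract describes can be illustrated with simple linear memberships and centroid defuzzification; the memberships, the two rules, and the output angles are all invented for illustration and are not the paper's:

```python
# Minimal fuzzy decision step (a simplification, not the BK-product itself):
# sonar distance -> memberships NEAR/FAR -> centroid of two rule outputs.

def turn_command(obstacle_dist, d_max=10.0):
    # Linear membership: fully NEAR at 0 m, fully FAR at d_max and beyond.
    near = max(0.0, 1.0 - obstacle_dist / d_max)
    far = 1.0 - near
    # Rule 1: NEAR -> turn hard (30 deg).  Rule 2: FAR -> go straight (0 deg).
    # Centroid defuzzification over the two rule strengths.
    return (near * 30.0 + far * 0.0) / (near + far)
```

A close obstacle yields a strong turn, a distant one none, and intermediate distances interpolate smoothly, which is the qualitative behavior the fuzzy controller provides.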

Development and Performance Evaluation of a Web-based Management System for Greenhouse Teleoperation (시설재배를 위한 웹 기반의 원격 관리 시스템의 개발 및 성능평가)

  • 심주현;백운재;박주현;이석규
    • Journal of Biosystems Engineering / v.29 no.2 / pp.159-166 / 2004
  • In this study, we developed a web-based management system for greenhouse teleoperation. The remote control system consists of a database, a web server, a controller in the greenhouse, and clients. The database on the server stores user information and greenhouse conditions and is used to manage user logins and condition data. The management client, developed as a Java applet for effective and easy greenhouse management, monitors the greenhouse in real time. Master and driver boards were installed in the greenhouse control unit, and a database on flowering that collects and analyzes measurements exchanged data with the server. The master board is managed effectively through a timer routine, repeated control within a set time window, and a set-point algorithm. Greenhouse conditions can also be controlled manually or from a remote PC through a web browser over the internet; all greenhouse control devices are managed by remote PC control and checked via a camera installed in the greenhouse. Finally, we present experimental results from the system installed at the Pusan Horticultural Experiment Station.

Implementation of Embedded System Based Simulator Controller Using Camera Motion Parameter Extractor (카메라 모션 벡터 추출기를 이용한 임베디드 기반 가상현실 시뮬레이터 제어기의 설계)

  • Lee Hee-Man;Park Sang-Jo
    • The Journal of the Korea Contents Association / v.6 no.4 / pp.98-108 / 2006
  • In the past, image processing systems were implemented as stand-alone units and were limited to applications little more sophisticated than simple display. With the development of image processing ICs, the scope of such systems has broadened considerably. In this paper, we implement an image processing system that operates independently of a PC by converting analog image signals into digital signals. The proposed system extracts motion parameters from the analog image signal, generates the corresponding virtual movement, and drives the simulator from the extracted parameters.


An Automatic Teaching Method by Vision Information for A Robotic Assembly System

  • Ahn, Cheol-Ki;Lee, Min-Cheol;Kim, Jong-Hyung
    • 제어로봇시스템학회:학술대회논문집 (ICROS Conference Proceedings) / 1999.10a / pp.65-68 / 1999
  • In this study, an off-line automatic teaching method using vision information for robotic assembly tasks is proposed. Many industrial robots are still taught and programmed with a teaching pendant: a human operator guides the robot to the desired application locations, and these motions are recorded, later edited in the robot controller's programming language, and played back repetitively to perform the task. This conventional teaching method is time-consuming and somewhat dangerous. In the proposed method, the operator instead teaches the desired locations on an image acquired through a CCD camera mounted on the robot hand; the robot-language program is generated automatically and transferred to the robot controller. The teaching process is implemented in off-line programming (OLP) software developed for the robotic assembly system used in this study. To transform locations in image coordinates into robot coordinates, a calibration procedure is established. The proposed method is implemented and evaluated on an assembly system that solders electronic parts onto a circuit board, where a six-axis articulated robot executes the assembly task according to the off-line automatic teaching.

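One common way to realize the image-to-robot calibration step mentioned above is a least-squares affine fit over a few taught point pairs; the paper does not specify its calibration model, so the mapping and the point values below are assumptions:

```python
import numpy as np

def fit_affine(pix, rob):
    # Solve rob ~= [u, v, 1] @ A in the least-squares sense; A is 3x2.
    ones = np.ones((len(pix), 1))
    M = np.hstack([np.asarray(pix, float), ones])
    A, *_ = np.linalg.lstsq(M, np.asarray(rob, float), rcond=None)
    return A

def pix_to_robot(A, u, v):
    # Map an image-coordinate location to robot coordinates.
    return np.array([u, v, 1.0]) @ A

# Hypothetical taught pairs: pixel corners and their robot-plane positions.
pix = [(0, 0), (100, 0), (0, 100), (100, 100)]
rob = [(10.0, 20.0), (20.0, 20.0), (10.0, 30.0), (20.0, 30.0)]
A = fit_affine(pix, rob)
```

Four or more non-collinear pairs overdetermine the six affine parameters, so the fit also averages out small teaching errors.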

Development of Patrol Robot using DGPS and Curb Detection (DGPS와 연석추출을 이용한 순찰용 로봇의 개발)

  • Kim, Seung-Hun;Kim, Moon-June;Kang, Sung-Chul;Hong, Suk-Kyo;Roh, Chi-Won
    • Journal of Institute of Control, Robotics and Systems / v.13 no.2 / pp.140-146 / 2007
  • This paper presents the development of a mobile patrol robot. We fuse differential GPS, angle sensor, and odometry data in an extended Kalman filter framework to localize the mobile robot in outdoor environments. An important feature of road environments is the presence of curbs, so we also propose an algorithm that extracts curb positions from laser range finder data using the Hough transform. The mobile robot builds a map of the road curbs, and this map is used for tracking and localization. The patrol robot system consists of a mobile robot and a control station: the robot sends camera images to the control station, which receives and displays them. The system operates in two modes, teleoperated or autonomous. In teleoperated mode, the operator commands the mobile robot based on the image data; in autonomous mode, the robot must autonomously track predefined waypoints, for which we designed a path tracking controller. Experiments on real roads confirm that the proposed algorithms perform properly in outdoor environments.
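The curb-extraction step can be sketched as a standard Hough transform: each laser range point votes for every discretized (theta, rho) line passing through it, and the strongest bin gives the curb line. The resolutions and point data below are illustrative choices, not the paper's:

```python
import math

def hough_line(points, n_theta=180, rho_res=0.05, rho_max=10.0):
    # Vote each (x, y) range point into a discretized (theta, rho) accumulator
    # using the normal form rho = x*cos(theta) + y*sin(theta).
    n_rho = int(2 * rho_max / rho_res)
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + rho_max) / rho_res)
            if 0 <= r < n_rho:
                acc[(t, r)] = acc.get((t, r), 0) + 1
    # The best-supported bin is the dominant line (the curb edge).
    (t_best, r_best), _ = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t_best / n_theta, r_best * rho_res - rho_max

# Hypothetical scan: collinear points along the line x = 1.0 (a curb ahead).
theta, rho = hough_line([(1.0, y * 0.1) for y in range(20)])
```

Because every point on the curb votes for the same (theta, rho) bin, the method tolerates the isolated outliers a raw range scan contains.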

Development of a Model Based Predictive Controller for Lane Keeping Assistance System (모델기반 예측 제어기를 이용한 차선유지 보조 시스템 개발)

  • Hwang, Jun-Yeon;Huh, Kun-Soo;Na, Hyuk-Min;Jung, Ho-Gi;Kang, Hyung-Jin;Yoon, Pal-Joo
    • Transactions of the Korean Society of Automotive Engineers / v.17 no.3 / pp.54-61 / 2009
  • A lane keeping assistance system (LKAS) could save thousands of lives each year by maintaining lane position, and it is regarded as a promising active safety system expected to reduce driver workload and assist the driver while driving. This paper proposes a model-based predictive controller for an LKAS that requires cooperative driving between the driver and the assistance system. A hardware-in-the-loop simulator (HILS) comprising CarSim, MATLAB Simulink, and a lane detection algorithm was constructed for evaluation; a single camera mounted on the HILS acquires the monitor images and detects the lane markers. Simulations validate the LKAS control performance in various road scenarios.
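A model-based predictive steering law of the kind described can be sketched with a kinematic lateral model, a short prediction horizon, and a search over candidate steering commands minimizing predicted lane offset plus steering effort. The model, horizon, weights, and candidate grid are all assumptions, not the paper's design:

```python
import numpy as np

def predict_offsets(y, psi, delta, v=20.0, L=2.7, dt=0.05, horizon=10):
    # Kinematic bicycle approximation: heading rate = (v/L)*delta,
    # lateral-offset rate = v*psi.  Returns predicted offsets over the horizon.
    traj = []
    for _ in range(horizon):
        psi = psi + (v / L) * delta * dt
        y = y + v * psi * dt
        traj.append(y)
    return traj

def mpc_steering(y, psi, candidates=np.linspace(-0.05, 0.05, 21), w_u=10.0):
    # Pick the constant steering command whose predicted trajectory minimizes
    # squared lane offset plus a quadratic steering-effort penalty.
    costs = [sum(yk ** 2 for yk in predict_offsets(y, psi, d)) + w_u * d ** 2
             for d in candidates]
    return float(candidates[int(np.argmin(costs))])

delta = mpc_steering(y=0.5, psi=0.0)   # drifted 0.5 m left of lane center
```

The effort weight `w_u` is where the "cooperative driving" trade-off would live: a larger weight yields gentler corrections that defer more to the driver.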