• Title/Summary/Keyword: Vision-based Control


An Application of Active Vision Head Control Using Model-based Compensating Neural Networks Controller

  • Kim, Kyung-Hwan;Keigo, Watanabe
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.168.1-168
    • /
    • 2001
  • This article describes a novel model-based compensating neural network (NN) developed for use in our active binocular head controller, which addresses both the kinematic and dynamic aspects of precisely tracking a moving object of interest so as to keep it in view. The compensating NN is constructed from two classes of self-tuning neural models: the Neural Gas (NG) algorithm and SoftMax function networks. The resulting servo controller is shown to handle the tracking problem with minimal knowledge of the dynamics of the system.
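
The abstract names the building blocks, the Neural Gas (NG) algorithm and SoftMax function networks, but gives no equations, so the sketch below is only one plausible reading: a normalized ("SoftMax") basis-function network whose output is added to a conventional servo command. The class name, the fixed centers (the paper self-tunes them via NG), and the gradient learning rule are all assumptions.

```python
import numpy as np

class SoftMaxCompensator:
    """Normalized (SoftMax-weighted) basis-function network; a hypothetical
    stand-in for the paper's compensating NN."""
    def __init__(self, centers, width, n_out, lr=0.01):
        self.c = np.asarray(centers, dtype=float)  # (n_units, n_in) unit centers
        self.beta = 1.0 / (2.0 * width ** 2)       # Gaussian width parameter
        self.W = np.zeros((n_out, len(self.c)))    # linear output weights
        self.lr = lr                               # adaptation gain

    def _phi(self, x):
        a = -self.beta * np.sum((self.c - x) ** 2, axis=1)
        e = np.exp(a - a.max())                    # numerically stable softmax
        return e / e.sum()

    def __call__(self, x):
        return self.W @ self._phi(x)               # compensating command

    def adapt(self, x, err):
        # gradient step driving the output to cancel the tracking error
        self.W += self.lr * np.outer(err, self._phi(x))
```

In use, such a compensator would augment a conventional servo law, e.g. u = Kp*e + Kd*de + net(q), with net.adapt(q, e) called every control cycle.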


Control Architecture for N-Screen Based Interactive Multi-Vision System (N-스크린 기반 인터렉티브 멀티 비전 시스템 제어 구조)

  • Sarwar, Ghulam;Ullah, Farman;Yoon, Changwoo;Lee, Sungchang
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.6
    • /
    • pp.72-81
    • /
    • 2013
  • In this paper, we propose an architecture and user-interaction mechanism to implement N-Screen services on a Multi-Vision System (MVS), which existing systems do not support. N-Screen services enable users to control the MVS displays through any of their devices and to share contents among MVS displays and users' active-devices, with service continuation at any location. We provide N-Screen interactive services on MVS by introducing an N-Screen interaction & session management server and agent. Furthermore, we present examples of the protocols, such as application launching, user interaction for service control, and visualcasting, that support the N-Screen services. In addition, we explain N-Screen service scenarios for providing split sessions on users' active-devices and for launching metadata content on any of a user's devices at any location, supported by these protocols. The simulation results demonstrate the feasibility and performance improvement of the proposed visualcasting mechanism.
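
The abstract describes its protocols (application launching, user interaction, visualcasting) without a wire format. Purely as a hedged illustration of what a message between a user device and the interaction & session management server might look like, here is a hypothetical Python sketch; the message type names, fields, and JSON encoding are assumptions, not the paper's specification.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SessionMessage:
    msg_type: str       # "LAUNCH_APP" | "USER_INTERACTION" | "VISUALCAST"
    session_id: str     # session shared across MVS displays and devices
    source_device: str  # originating user device or display agent
    payload: dict       # e.g. content URL, control event, split-session spec

    def encode(self) -> bytes:
        return json.dumps(asdict(self)).encode("utf-8")

# A user device asking the server to continue a running session on the
# MVS display wall (all identifiers are made up for illustration):
msg = SessionMessage("LAUNCH_APP", "sess-42", "phone-1",
                     {"content": "video://demo", "target": "mvs-wall"})
print(msg.encode())
```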

Volume Control using Gesture Recognition System

  • Shreyansh Gupta;Samyak Barnwal
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.6
    • /
    • pp.161-170
    • /
    • 2024
  • With recent technological advances, humans have made great progress in ease of living, and sight, motion, sound, and speech are now used for various application and software controls. In this paper we explore a project in which gestures play a central role. Gesture control has been researched extensively and continues to evolve, and this project applies computer vision to it. The main objective we achieved is controlling computer settings with hand gestures: we create a module that acts as a volume-control program, using hand gestures to adjust the computer's system volume, and we use OpenCV to implement the hand-gesture controls. At run time the module uses the computer's web camera to capture images or video, processes them to extract the needed information, and then, based on the input, adjusts the volume settings of that computer. The program can both increase and decrease the volume. The only setup required for execution is a web camera to capture the input images and video provided by the user. The program performs gesture recognition with the help of OpenCV, Python, and its libraries, identifies the specified human gestures, and uses them to carry out the change in the device setting. The objective is to adjust the volume of a computer without physical interaction through a mouse or keyboard. OpenCV, a widely utilized tool for image processing and computer vision applications in this domain, enjoys extensive popularity: the OpenCV community consists of over 47,000 individuals, and as of a survey conducted in 2020, the estimated number of downloads exceeds 18 million.
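
The abstract describes the pipeline (webcam capture, OpenCV processing, gesture recognition, volume adjustment) in prose. Below is a minimal sketch of one common way to realize it; the MediaPipe hand-landmark model and the distance-to-volume mapping are assumptions, since the paper names only OpenCV and Python, and actually writing the level to the OS mixer would need a platform API such as pycaw on Windows.

```python
import cv2
import math
import mediapipe as mp  # hand-landmark model: an assumption, since the
                        # paper names only OpenCV and Python

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)                  # the web camera described above

while True:
    ok, frame = cap.read()
    if not ok:
        break
    res = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if res.multi_hand_landmarks:
        lm = res.multi_hand_landmarks[0].landmark
        thumb, index = lm[4], lm[8]        # thumb tip, index fingertip
        dist = math.hypot(thumb.x - index.x, thumb.y - index.y)
        vol = max(0, min(100, int(dist / 0.4 * 100)))   # crude mapping
        # Setting the OS volume is platform specific (e.g. pycaw on
        # Windows); here the target level is only displayed.
        cv2.putText(frame, f"volume {vol}%", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("gesture volume", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```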

Deviation Angles of Inverted Pendulum by Edge Detection Method of Vision System (비젼 시스템의 에지 검출 방법을 이용한 도립 진자의 편차 각)

  • Ryu, Sang-Moon;Park, Jong-Gyu;Han, Il-Suck;Jang, Sung-Whan;Ahn, Tae-Chon
    • Proceedings of the KIEE Conference
    • /
    • 1999.07b
    • /
    • pp.797-799
    • /
    • 1999
  • In this paper, an edge intensification and detection algorithm, one of the basic image-processing operations, is considered. Edge detection is among the most useful and important methods for image processing and image analysis. A vision system based on this processing, tailored to the specific project, is proposed and applied to an inverted pendulum in order to automatically acquire the angle between the bar and the perpendicular reference line. The angles obtained from images captured by the computer vision system provide useful information for control of a real inverted pendulum system; the inverted pendulum will subsequently be controlled by the proposed method.
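
As a rough illustration of the measurement step described above (not the authors' code), the following sketch extracts edges, fits the dominant line segment, and reports its deviation from the vertical reference; the file name and thresholds are placeholders.

```python
import cv2
import numpy as np

frame = cv2.imread("pendulum.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
edges = cv2.Canny(frame, 50, 150)                # edge detection step
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    # take the longest detected segment as the pendulum bar
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: (l[2] - l[0])**2 + (l[3] - l[1])**2)
    # angle between the bar and the vertical reference line, in degrees
    angle = np.degrees(np.arctan2(x2 - x1, y2 - y1))
    print(f"deviation angle: {angle:.2f} deg")
```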


Development of a PC-Based Dicing Process Automation System (PC 기반의 다이싱 공정 자동화 시스템 개발)

  • Kim, Hyeong-Tae;Yang, Hae-Jeong;Song, Chang-Seop
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.17 no.3
    • /
    • pp.47-57
    • /
    • 2000
  • In this study, a PC-based dicing machine and its driving software were developed to automate the wafer-cutting process. To automate the machine, hard automation, including vision, loading, and software, was considered in the development. An auto-loading device and a vision system were adopted to increase productivity, and GUI software was programmed for convenient operation. The dicing machine is operated by the control algorithm and a set of parameters. Test cuts on several kinds of wafers verified that this kind of PC-based automation has great potential compared with a conventional dicing machine.


Mobile Robot System Design for RFID-based Inventory Checking (RFID 기반 재고조사용 이동로봇 시스템의 설계)

  • Son, Min-Hyuk;Do, Yong-Tae
    • The Journal of Korea Robotics Society
    • /
    • v.6 no.1
    • /
    • pp.49-57
    • /
    • 2011
  • In many industries, accurate and quick checking of goods in storage is of great importance. Most of today's inventory checking is based on bar-code scanning, but a bar code and an optical scanner must be kept at close distance and at a proper relative angle for successful scanning. This requirement makes it difficult to fully automate inventory information/control systems. RFID technology can be a solution to this problem. The mobile robot presented in this paper is equipped with an RFID tag-scanning system that automates the otherwise manual or semi-automatic inventory-checking process. We designed the robot system in a quite practical manner, and the developed system is close to the commercialization stage. In our experiments, the robot collected information about goods stacked on shelves autonomously, without any failure, and maintained the corresponding database while navigating predefined paths between the shelves using vision.
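
The paper does not publish its software, but the scan-and-record loop it describes can be sketched as below; the RFIDReader class, the tag IDs, and the SQLite schema are hypothetical stand-ins for the robot's actual reader interface and inventory database.

```python
import sqlite3
import time

class RFIDReader:
    """Placeholder for the robot's RFID scanning hardware."""
    def read_tags(self):
        # would return IDs of all tags currently in antenna range
        return ["EPC-0001", "EPC-0042"]   # stub values for illustration

db = sqlite3.connect("inventory.db")
db.execute("CREATE TABLE IF NOT EXISTS seen (tag TEXT PRIMARY KEY, ts REAL)")

reader = RFIDReader()
for waypoint in ["shelf-A", "shelf-B"]:   # predefined path between shelves
    # (the robot drives to `waypoint` here using vision-based navigation)
    for tag in reader.read_tags():
        db.execute("INSERT OR REPLACE INTO seen VALUES (?, ?)",
                   (tag, time.time()))
db.commit()
print(db.execute("SELECT COUNT(*) FROM seen").fetchone()[0], "tags recorded")
```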

Self-localization of a Mobile Robot for Decreasing the Error and VRML Image Overlay (오차 감소를 위한 이동로봇 Self-Localization과 VRML 영상오버레이 기법)

  • Kwon Bang-Hyun;Shon Eun-Ho;Kim Young-Chul;Chong Kil-To
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.4
    • /
    • pp.389-394
    • /
    • 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it could move the robot in the wrong direction or damage it through collisions with surrounding obstacles. There are numerous approaches to self-localization, using different sensing modalities (vision, laser range finders, ultrasonic sonar). Since sensor information is generally uncertain and contains noise, much research has aimed at reducing this noise, but correctness remains limited because most approaches are statistical. The goal of our research is to measure the robot location more exactly by matching a built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some researchers use vertical lines, others use specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight to the landmarks. Image-processing and neural-network pattern-matching techniques are employed to recognize landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.
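
The angular-separation computation mentioned in the abstract follows directly from the pinhole camera model: for image offsets u1 and u2 (in pixels from the optical axis) and focal length f (in pixels), the separation is atan(u1/f) - atan(u2/f). A worked example with made-up numbers:

```python
import math

def angular_separation(u1, u2, f):
    # angle between the lines of sight of two landmarks (pinhole model)
    return math.atan2(u1, f) - math.atan2(u2, f)

f = 800.0                        # assumed focal length in pixels
u_a, u_b = 120.0, -60.0          # landmark image offsets in pixels
theta = angular_separation(u_a, u_b, f)
print(f"separation: {math.degrees(theta):.2f} deg")   # about 12.8 deg
```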

Automatic Pipeline Welding System with Self-Diagnostic Function and Laser Vision Sensor

  • Kim, Yong-Baek;Moon, Hyeong-Soon;Kim, Jong-Cheol;Kim, Jong-Jun;Choo, Jeong-Bog
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.1137-1140
    • /
    • 2005
  • Automatic welding has been used frequently on pipeline projects, and productivity and reliability are the most essential features of an automatic welding system. The mechanized GMAW process is the most widely used welding process, and the carriage-and-band system is the most effective welding system for pipeline laying. This application-oriented paper introduces new automatic welding equipment for pipeline construction, based on cutting-edge design and practical welding physics to minimize downtime. The paper also describes the control system designed and implemented for the new equipment. The system has a self-diagnostic function that facilitates maintenance and repair, and a network function via which welding task data can be transmitted and welding process data monitored. The laser vision sensor was designed for a narrow welding groove in order to achieve higher seam-tracking accuracy and fully automatic operation.
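
The abstract does not detail the laser vision sensor's processing, but a typical laser-stripe seam-tracking step looks like the sketch below: locate the stripe peak in each image column, then take the deepest profile point as the groove center. The file name and thresholds are placeholders, and this is a generic technique rather than the authors' implementation.

```python
import cv2
import numpy as np

img = cv2.imread("stripe.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
blur = cv2.GaussianBlur(img, (5, 5), 0)

rows = np.argmax(blur, axis=0)             # brightest pixel row per column
valid = blur[rows, np.arange(blur.shape[1])] > 50    # ignore dark columns
profile_cols = np.flatnonzero(valid)
profile_rows = rows[valid]

# deepest stripe point (largest row index = lowest in the image) is taken
# as the groove center for the tracking controller
groove_col = profile_cols[np.argmax(profile_rows)]
print("groove center at column", groove_col)
```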


Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion (천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정)

  • Shin, Ok-Shik;Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.1
    • /
    • pp.54-61
    • /
    • 2012
  • In this paper, a pose estimation method for a satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used to verify a satellite system on the ground, is similar to a mobile robot: it has thrusters and a reaction wheel as actuators and floats above the floor on compressed air. An EKF (Extended Kalman Filter) fuses the MEMS IMU with a vision system consisting of a single camera and infrared LEDs serving as ceiling landmarks. The fusion filter generally uses the positions of feature points in the image as measurements. However, this method can suffer position error due to the bias of the MEMS IMU whenever no camera image is obtained, if the bias is not properly estimated by the filter. Therefore, we propose a fusion method that uses both the positions of the feature points and the camera velocity determined from their optical flow. Experiments verify that the proposed method is more robust to IMU bias than the method that uses only feature-point positions.
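
To make the fusion idea concrete, here is a minimal one-dimensional EKF measurement update in the spirit of the abstract, where vision supplies both a position measurement (feature points) and a velocity measurement (optical flow); the state layout, matrices, and numbers are illustrative assumptions only.

```python
import numpy as np

x = np.zeros(3)                       # state: [pos, vel, accel bias]
P = np.eye(3)                         # state covariance
H = np.array([[1.0, 0.0, 0.0],        # feature points -> position
              [0.0, 1.0, 0.0]])       # optical flow   -> velocity
R = np.diag([0.05**2, 0.02**2])       # measurement noise covariance

z = np.array([1.02, 0.11])            # vision position and velocity
y = z - H @ x                         # innovation
S = H @ P @ H.T + R                   # innovation covariance
K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
x = x + K @ y
P = (np.eye(3) - K @ H) @ P
print("updated state:", x)
```

The added velocity row is what the abstract's argument hinges on: it helps keep the accelerometer bias observable even when absolute position fixes drop out.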

Stereo Vision-Based 3D Pose Estimation of Product Labels for Bin Picking (빈피킹을 위한 스테레오 비전 기반의 제품 라벨의 3차원 자세 추정)

  • Udaya, Wijenayake;Choi, Sung-In;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.1
    • /
    • pp.8-16
    • /
    • 2016
  • In the fields of computer vision and robotics, bin picking is an important application area in which object pose estimation is necessary. Different approaches, such as 2D feature tracking and 3D surface reconstruction, have been introduced to estimate object pose accurately. We propose a new approach that uses both 2D image features and 3D surface information to identify the target object and estimate its pose accurately. First, we introduce a label-detection technique using Maximally Stable Extremal Regions (MSERs), whose results are used to identify the target objects individually. Then, the 2D image features in the detected label areas are used to generate 3D surface information. Finally, we calculate the 3D position and orientation of the target objects from the 3D surface information.
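
The MSER label-detection step named in the abstract maps naturally onto OpenCV's MSER detector. The sketch below is a generic illustration, not the authors' code: the input image name and the region-size filter are placeholders, and the stereo triangulation that follows in the paper is omitted.

```python
import cv2

img = cv2.imread("bin_left.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(img)     # stable extremal regions as points

for pts in regions:
    x, y, w, h = cv2.boundingRect(pts.reshape(-1, 1, 2))
    if w * h > 500:                      # keep label-sized regions only
        cv2.rectangle(img, (x, y), (x + w, y + h), 255, 2)

cv2.imwrite("labels.png", img)           # candidate label areas marked
```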