• Title/Abstract/Keyword: vision-based control

Search results: 683 (processing time 0.026 s)

스테레오비전 기반의 도로의 기울기 추정과 자유주행공간 검출 (Stereo-Vision Based Road Slope Estimation and Free Space Detection on Road)

  • 이기용;이준웅
    • 제어로봇시스템학회논문지 / Vol. 17, No. 3 / pp.199-205 / 2011
  • This paper presents an algorithm capable of detecting free space for autonomous vehicle navigation. The algorithm consists of two main steps: 1) estimation of the longitudinal profile of the road, and 2) detection of free space. The longitudinal road profile is estimated by detecting the v-line in the v-disparity image, which corresponds to the road slope, using the Hough transform and the Dijkstra algorithm. To detect free space, the u-line, which is the boundary between free space and obstacle regions, is detected in the u-disparity image using dynamic programming. Free space is then determined from the detected v-line and u-line. The proposed algorithm is shown to work successfully through experiments under various traffic scenarios.
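A minimal sketch of the v-disparity step described in the abstract above: each row of a dense disparity map is histogrammed over disparity values, and the dominant line in the resulting v-disparity image (which corresponds to the road profile) is found with a Hough transform. The disparity map, thresholds, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cv2

def v_disparity(disp, max_d=64):
    """Accumulate, for each image row, a histogram of its disparity values."""
    h, _ = disp.shape
    vdisp = np.zeros((h, max_d), dtype=np.float32)
    for v in range(h):
        row = disp[v]
        hist, _ = np.histogram(row[row > 0], bins=max_d, range=(0, max_d))
        vdisp[v] = hist
    return vdisp

def road_profile_line(vdisp):
    """Find the dominant line (road profile) in the v-disparity image with a Hough transform."""
    img = cv2.normalize(vdisp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    binary = cv2.threshold(img, 30, 255, cv2.THRESH_BINARY)[1]
    lines = cv2.HoughLines(binary, 1, np.pi / 180, threshold=80)
    return None if lines is None else lines[0][0]   # (rho, theta) of the strongest line
```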

분산다중센서로 구현된 지능화공간의 색상정보를 이용한 실시간 물체추적 (Real-Time Objects Tracking using Color Configuration in Intelligent Space with Distributed Multi-Vision)

  • 진태석;이장명;하시모토히데키
    • 제어로봇시스템학회논문지 / Vol. 12, No. 9 / pp.843-849 / 2006
  • Intelligent Space defines an environment in which many intelligent devices, such as computers and sensors, are distributed. As a result of the cooperation between these smart devices, intelligence emerges from the environment. In such a scheme, a crucial task is to obtain the global location of every device in order to offer useful services. Some tracking systems prepare models of the objects in advance; such a model-based solution is difficult to adopt when many kinds of objects exist. In this paper, localization is achieved with no prior model, using color properties as the information source. Feature vectors of multiple objects based on color histograms and the tracking method are described. The proposed method is applied to the intelligent environment and its performance is verified by experiments.
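A minimal sketch of color-histogram object features and matching in the spirit of the abstract above; the HSV histogram layout, bin counts, and the Bhattacharyya matching rule are illustrative assumptions rather than the paper's exact method.

```python
import cv2
import numpy as np

def color_feature(bgr_patch, bins=16):
    """Hue-saturation histogram of an object region, normalized to unit L1 norm."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, None, 1.0, 0.0, cv2.NORM_L1)

def match_object(candidate_patch, known_features):
    """Return the index of the stored object whose histogram is closest (Bhattacharyya distance)."""
    f = color_feature(candidate_patch)
    dists = [cv2.compareHist(f, g, cv2.HISTCMP_BHATTACHARYYA) for g in known_features]
    return int(np.argmin(dists))
```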

스테레오 연속 영상을 이용한 구조 복원의 정제 (A New Refinement Method for Structure from Stereo Motion)

  • 박성기;권인소
    • 제어로봇시스템학회논문지 / Vol. 8, No. 11 / pp.935-940 / 2002
  • For robot navigation and visual reconstruction, structure from motion (SFM) is an active issue in the computer vision community and its properties are becoming well understood. In this paper, using a stereo image sequence and a direct method as a tool for SFM, we present a new method for overcoming the bas-relief ambiguity. We first show that direct methods, based on the optical flow constraint equation, are also intrinsically exposed to this ambiguity even though they employ robust estimation. Therefore, regarding the motion and depth estimates produced by the robust direct method as approximations, we suggest a method that refines both the stereo displacement and the motion displacement with sub-pixel accuracy, which is the central process for reducing the ambiguity. Experiments with real image sequences show that the proposed algorithm improves the estimation accuracy.
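For reference, the optical flow (brightness constancy) constraint that direct methods build on is Ix*u + Iy*v + It = 0 at each pixel. The sketch below estimates a single translational flow for a small patch by least squares; it is purely illustrative and is not the refinement procedure proposed in the paper.

```python
import numpy as np

def patch_flow(I0, I1):
    """Least-squares (u, v) for a small patch from two consecutive grey-level frames."""
    I0 = I0.astype(np.float64)
    I1 = I1.astype(np.float64)
    Iy, Ix = np.gradient(I0)          # spatial derivatives (rows = y, cols = x)
    It = I1 - I0                      # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```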

감시용 로봇의 시각을 위한 인공 신경망 기반 겹친 사람의 구분 (Dividing Occluded Humans Based on an Artificial Neural Network for the Vision of a Surveillance Robot)

  • 도용태
    • 제어로봇시스템학회논문지 / Vol. 15, No. 5 / pp.505-510 / 2009
  • In recent years the space where a robot works has been expanding into the human space, unlike traditional industrial robots that work only at fixed positions apart from humans. In this setting a human may be the owner of a robot or the target of a robotic application. This paper deals with the latter case: when a robot vision system is employed to monitor humans for a surveillance application, each person in a scene needs to be identified. Humans, however, often move together, and occlusions between them occur frequently. Although this problem has not been seriously tackled in the relevant literature, it makes later image analysis steps such as tracking and scene understanding more difficult. In this paper, a probabilistic neural network is employed to learn the patterns of the best dividing position along the top pixels of an image region containing partly occluded people. As the method uses only shape information from an image, it is simple and can be implemented in real time.
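A minimal probabilistic neural network (Parzen-window) classifier sketch in the spirit of the abstract above. The feature vector (here, the heights of the top pixels across an occluded region, column by column), the kernel width, and all names are assumptions for illustration.

```python
import numpy as np

def pnn_predict(x, train_X, train_y, sigma=0.1):
    """Classify feature vector x as the class whose averaged Gaussian kernel response is largest."""
    scores = {}
    for cls in np.unique(train_y):
        Xc = train_X[train_y == cls]
        d2 = np.sum((Xc - x) ** 2, axis=1)            # squared distances to training samples
        scores[cls] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

# Hypothetical usage: each training sample is a normalized profile of top-pixel heights
# around a candidate column, labelled 1 if that column is the best dividing position, else 0.
```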

Three Dimensional Volume Reconstruction of Polyhedral Objects Using X-ray Stereo Images

  • Roh, Young-Jun;Kim, Byung-Man;Cho, Hyung-Suck
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2001 ICCAS / pp.28.2-28 / 2001
  • Three-dimensional shape measurement techniques are widely needed in industry for product quality monitoring and control. X-ray imaging is a promising technology for obtaining three-dimensional information about both the surface and the inner structure of an object, since it can overcome the limitations of conventional visual or optical methods such as occlusion and surface reflection properties. In this paper, we propose a three-dimensional volume reconstruction method based on x-ray stereo imaging. Here, the stereo images of an object from two different views are taken by changing the object pose rather than by moving the imaging plane as in the conventional stereo vision method. We propose a series of image processing techniques to extract features efficiently from x-ray images, where features that would be occluded for a normal camera can be found ...
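Rotating the object between two exposures while the source and detector stay fixed is geometrically equivalent to viewing a fixed object from a second, rotated viewpoint, so a matched feature can be triangulated as in ordinary two-view stereo. The sketch below illustrates this equivalence with assumed intrinsics, rotation center, and rotation angle; the paper's actual x-ray geometry is not reproduced here.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                        # assumed projection intrinsics
c = np.array([[0.0], [0.0], [600.0]])                  # assumed object rotation center (view-1 frame)
R = cv2.Rodrigues(np.array([[0.0], [np.deg2rad(15.0)], [0.0]]))[0]   # assumed 15-degree object turn

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])      # first exposure
P2 = K @ np.hstack([R, (np.eye(3) - R) @ c])           # rotating the object = equivalent second view

X_true = np.array([[50.0], [-20.0], [580.0], [1.0]])   # an assumed 3D feature (for demonstration)
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]              # its projection in exposure 1
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]              # its projection in exposure 2

X_h = cv2.triangulatePoints(P1, P2, x1, x2)            # triangulate back from the two projections
X = (X_h[:3] / X_h[3]).ravel()                         # recovers approximately (50, -20, 580)
```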


ITC 자동조정을 위한 제어기법에 관한 연구 (A study on the control strategy for automatic adjustment of ITC(Integrated Tube Components))

  • 김성락;이종운;변증남;장태규
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1991 한국자동제어학술회의논문집(국내학술편); KOEX, Seoul; 22-24 Oct. 1991 / pp.935-938 / 1991
  • We are developing an automatic adjusting system for the ITC. The ITC (Integrated Tube Components) has a large set-by-set variability in its characteristics, and it also exhibits nonlinearities. It requires not only fast vision processing but also an efficient control algorithm to meet the need for high productivity. In this paper, the adjusting system and the modelling of the ITC are described, and the concept of a new rule-based hierarchical algorithmic approach is suggested.


LonRF 지능형 디바이스 기반의 유비쿼터스 홈네트워크 테스트베드 개발 (Development of a LonRF Intelligent Device-based Ubiquitous Home Network Testbed)

  • 이병복;박애순;김대식;노광현
    • 제어로봇시스템학회논문지 / Vol. 10, No. 6 / pp.566-573 / 2004
  • This paper describes a ubiquitous home network (uHome-net) testbed and LonRF intelligent devices based on LonWorks technology. These devices consist of a Neuron Chip, an RF transceiver, sensors, and other peripheral components. Using LonRF devices, a home control network can be simplified and most devices can be operated on the LonWorks control network. In addition, an Indoor Positioning System (IPS) that can support various location-based services was implemented in uHome-net. The Smart Badge of the IPS, a special LonRF device, can measure the 3D location of objects in the indoor environment. In the uHome-net testbed, a remote control service, a cooking help service, a wireless remote metering service, a baby monitoring service, and a security and fire prevention service were realized. This research shows the vision of the ubiquitous home network that will emerge in the near future.

차선 변경 지원을 위한 레이더 및 비전센서 융합기반 다중 차량 인식 (Multiple Vehicle Recognition based on Radar and Vision Sensor Fusion for Lane Change Assistance)

  • 김형태;송봉섭;이훈;장형선
    • 제어로봇시스템학회논문지 / Vol. 21, No. 2 / pp.121-129 / 2015
  • This paper presents a multiple vehicle recognition algorithm based on radar and vision sensor fusion for lane change assistance. To determine whether a lane change is possible, it is necessary to recognize not only the primary vehicle located in-lane, but also other adjacent vehicles in the left and/or right lanes. With the given sensor configuration, two challenging problems are considered. One is that a guardrail detected by the front radar might be recognized as a left or right vehicle due to its geometric characteristics. This problem can be solved by a guardrail recognition algorithm based on motion and shape attributes. The other problem is that the recognition of rear vehicles in the left or right lanes might be wrong, especially on curved roads, due to the low accuracy of the lateral position measured by rear radars and a lack of knowledge of the road curvature in the backward direction. In order to solve this problem, it is proposed that the road curvature measured by the front vision sensor be used to derive the road curvature toward the rear direction. Finally, the proposed algorithm for multiple vehicle recognition is validated via field test data on real roads.
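A minimal sketch of the rear-lane assignment idea described above: the road curvature measured by the front vision sensor is extrapolated backwards so that a rear-radar target is compared against the predicted lane centerline rather than trusting its noisy raw lateral offset. The lane width, the constant-curvature approximation, and all names are illustrative assumptions.

```python
LANE_WIDTH = 3.5   # m, assumed typical lane width

def lane_of_rear_target(x_rear, y_rear, curvature_front):
    """x_rear: distance behind the ego vehicle (m), y_rear: measured lateral offset (m, left positive),
    curvature_front: road curvature from the front vision sensor (1/m, left-curving positive)."""
    # Extrapolate the ego-lane centerline backwards with a constant-curvature arc: y ~ 0.5 * k * x^2.
    y_center = 0.5 * curvature_front * (x_rear ** 2)
    offset = y_rear - y_center
    if abs(offset) < LANE_WIDTH / 2:
        return "ego lane"
    return "left lane" if offset > 0 else "right lane"

# Example: a target 40 m behind with y_rear = +2.5 m on a road curving left (k = 0.002 1/m)
# lies only 0.9 m from the predicted centerline (1.6 m), so it stays in the ego lane,
# whereas the raw lateral value alone would wrongly place it in the left lane.
```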

컴퓨터비전에 기반한 효율적인 프리젠테이션 슬라이드 제어 (Computer Vision Based Efficient Control of Presentation Slides)

  • 박정우;석민수;이준호
    • 전자공학회논문지CI / Vol. 40, No. 4 / pp.232-239 / 2003
  • This work proposes and implements a practical system that applies computer vision techniques so that an ordinary laser pointer, normally used to explain slide content during a presentation, can also control the slide show efficiently. Virtual button regions to be pointed at with the laser pointer are defined on the slide, and slide-show control commands are executed by detecting the laser pointer within the virtual button regions of the slide as seen by a camera. The presenter therefore does not need to stay near the keyboard or mouse, or rely on another person, to control the slide show. A method is proposed that computes the virtual button regions in real time without a complicated calibration step in which the user must first enter information about the shape of the slide as seen by the camera. In addition, the computational complexity is reduced by creating a dynamic queue that stores and reuses the coordinates of the pixels belonging to the virtual button regions when acquiring their color information. The current implementation is based on Microsoft PowerPoint, but it can easily be applied to other, similar presentation software. Under such a human-centered presentation system, the presenter is freed from the spatial constraints of conventional slide control and can deliver content to the audience conveniently and efficiently.
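A minimal sketch of the virtual-button idea described above: a bright red spot is detected in the camera frame and, if it falls inside a predefined virtual button region, a slide-control keystroke is issued. The button coordinates, HSV thresholds, and the use of pyautogui to send the keystroke are assumptions, not the authors' implementation.

```python
import cv2
import pyautogui   # assumed way of sending the "next slide" keystroke

NEXT_BUTTON = (900, 600, 1000, 700)   # assumed virtual button (x1, y1, x2, y2) in camera pixels

def laser_spot(frame_bgr):
    """Return (x, y) of the centroid of bright red pixels, or None if no laser dot is visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 80, 230), (10, 255, 255))   # bright, saturated red
    if cv2.countNonZero(mask) < 5:
        return None
    m = cv2.moments(mask)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def handle_frame(frame_bgr):
    spot = laser_spot(frame_bgr)
    if spot is None:
        return
    x, y = spot
    x1, y1, x2, y2 = NEXT_BUTTON
    if x1 <= x <= x2 and y1 <= y <= y2:
        pyautogui.press("right")   # advance the slide show
```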

LRF 를 이용한 이동로봇의 실시간 차선 인식 및 자율주행 (A Real Time Lane Detection Algorithm Using LRF for Autonomous Navigation of a Mobile Robot)

  • 김현우;황요섭;김윤기;이동혁;이장명
    • 제어로봇시스템학회논문지 / Vol. 19, No. 11 / pp.1029-1035 / 2013
  • This paper proposes a real-time lane detection algorithm using an LRF (Laser Range Finder) for autonomous navigation of a mobile robot. There are many technologies for vehicle safety, such as airbags, ABS, and EPS. Real-time lane detection is a fundamental requirement for an automobile system that utilizes information from outside the vehicle. Representative lane recognition methods are vision-based and LRF-based systems. A vision-based system recognizes the three-dimensional environment well only under good image-capturing conditions; in practice there are many unexpected obstacles, such as bad illumination, occlusions, and vibrations, so vision alone cannot satisfy this fundamental requirement. In this paper, we introduce a three-dimensional lane detection algorithm using an LRF, which is very robust against illumination. For three-dimensional lane detection, the difference in laser reflection between the asphalt and the lane markings, which depends on color and distance, is utilized together with the extraction of feature points. A stable tracking algorithm is also introduced empirically in this research. The performance of the proposed lane detection and tracking algorithm has been verified through real experiments.
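A minimal sketch of the reflectivity cue described above: lane paint returns a stronger LRF intensity than asphalt, so scan points whose intensity exceeds a range-compensated threshold are kept as lane-marking candidates. The scan format, threshold values, and the distance compensation are illustrative assumptions.

```python
import numpy as np

def lane_points(ranges, intensities, angles, base_thresh=120.0, per_meter=4.0):
    """Return (x, y) scan points whose reflection intensity exceeds a range-compensated threshold."""
    ranges = np.asarray(ranges, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    angles = np.asarray(angles, dtype=float)
    thresh = base_thresh - per_meter * ranges      # weaker returns are expected farther away
    keep = intensities > thresh
    x = ranges[keep] * np.cos(angles[keep])
    y = ranges[keep] * np.sin(angles[keep])
    return np.stack([x, y], axis=1)                # candidate lane-marking points in the LRF frame
```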