• Title/Abstract/Keyword: Vision Based Sensor

Intelligent System Based on Command Fusion and Fuzzy Logic Approaches - Application to Mobile Robot Navigation

  • 진태석;김현덕
    • 한국정보통신학회논문지 / Vol. 18, No. 5 / pp.1034-1041 / 2014
  • This paper presents a fuzzy inference method for obstacle avoidance by a mobile robot equipped with an active camera. Using a vision sensor, command fusion based on situational judgment enables the robot to navigate intelligently to a destination in an unknown environment. To validate the work, physical sensor fusion for path generation based on an environment model and sensor data was not attempted; instead, command fusion was applied to control each of the robot's driving behaviors according to the environment. The navigation strategy combines fuzzy rules so that the robot can both approach the goal and avoid obstacles. Results of successful navigation experiments using image data are presented to verify the proposed method.
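
A minimal Python sketch of the command-fusion idea described above, under assumptions the abstract does not specify: two behaviors (goal seeking and obstacle avoidance) each propose a steering command, and fuzzy memberships over the measured obstacle distance decide how the two commands are blended. The membership shapes, gains, and the fixed avoidance turn are illustrative assumptions, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return max(0.0, min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

def fuse_commands(obstacle_dist_m, goal_heading_err_rad):
    # Behaviour commands (assumed simple laws): "avoid" issues a fixed turn
    # (assumed to the right), "seek" steers proportionally toward the goal.
    avoid_cmd = -0.8                        # rad/s
    seek_cmd = 0.5 * goal_heading_err_rad   # rad/s

    # Fuzzy weights over the obstacle distance: "near" favours avoidance,
    # "far" favours goal seeking (membership shapes are assumed).
    w_avoid = tri(obstacle_dist_m, 0.0, 0.0, 1.5)
    w_seek = tri(obstacle_dist_m, 0.5, 3.0, 3.0)

    # Weighted-average defuzzification fuses the two commands into one.
    return (w_avoid * avoid_cmd + w_seek * seek_cmd) / (w_avoid + w_seek + 1e-9)

# Example: obstacle 0.6 m ahead while the goal lies 0.3 rad to one side.
print(fuse_commands(obstacle_dist_m=0.6, goal_heading_err_rad=0.3))
```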

Estimating a Range of Lane Departure Allowance Based on Road Alignment in an Autonomous Driving Vehicle

  • 김영민;김형수
    • 한국ITS학회 논문지 / Vol. 15, No. 4 / pp.81-90 / 2016
  • An autonomous vehicle must respond on its own to a changing road environment and therefore must perceive that environment at the level of a human driver. Among its sensors, the vision sensor performs lane recognition for steering control tasks such as determining the driving direction and preventing lane departure. The lane recognition performance criteria currently proposed for vision sensors are 'driver assistance' criteria tied to ADAS (Advanced Driver Assistance Systems), and they are expected to differ from the performance conditions required for an autonomous vehicle's 'independent perception'. This study assumes a situation in which lane recognition malfunctions persistently during autonomous driving, so that a vehicle entering a curved section from a straight section departs from its lane due to steering failure. The lane departure situation is modeled from the vehicle's trajectory, and vision sensor performance levels for autonomous vehicles are presented according to the allowable degree of lane departure. The analysis shows that, under passenger car conditions, a continuous lane recognition malfunction lasting one second or more can lead to a dangerous lane departure, and that a vision sensor evaluation method for autonomous vehicles should consider lane departure conditions more severe than those in current ADAS vision sensor performance tests.
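
A rough, back-of-the-envelope illustration of why a one-second lane recognition outage matters on curve entry: a vehicle that keeps driving straight while the lane curves away drifts laterally by about R(1 - cos(vt/R)). The speed, outage duration, and curve radius below are assumed example values, not the paper's scenario parameters.

```python
import math

def lateral_departure(speed_kmh, outage_s, curve_radius_m):
    """Lateral offset of a straight-driving vehicle relative to a curving lane."""
    v = speed_kmh / 3.6                       # m/s
    theta = v * outage_s / curve_radius_m     # arc angle swept by the lane centreline
    return curve_radius_m * (1.0 - math.cos(theta))

# Assumed example: 100 km/h, 1 s recognition outage, 460 m curve radius.
offset = lateral_departure(100, 1.0, 460)
print(f"lateral offset after outage: {offset:.2f} m")  # ~0.84 m, a large share of a 3.5 m lane
```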

A Fast Horizon Line Detection Algorithm Based on Edge Information

  • 나상일;이웅호;서동진;이웅희;정동석
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 컴퓨터소사이어티 추계학술대회논문집 / pp.199-202 / 2003
  • In research on Unmanned Aerial Vehicles (UAVs), the use of vision sensors has increased. The position information of an air vehicle can be calculated by finding the horizon line. In this paper, we propose a vision-based algorithm for finding the horizon line. Experimental results show that the proposed algorithm is faster than an existing algorithm.
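
The abstract does not detail the proposed fast algorithm, so the following is only a generic baseline sketch of edge-based horizon finding: Canny edges followed by a Hough transform, keeping the strongest near-horizontal line. The image path ("frame.png") and all thresholds are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical UAV camera frame
edges = cv2.Canny(img, 50, 150)                        # edge map (thresholds assumed)

# Standard Hough transform; returned lines are ordered by accumulator votes.
lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)

horizon = None
if lines is not None:
    for rho, theta in lines[:, 0]:
        # theta near 90 degrees means a roughly horizontal line in the image.
        if abs(theta - np.pi / 2) < np.radians(10):
            horizon = (rho, theta)
            break

print("horizon (rho, theta):", horizon)
```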

Intelligent Shoes for Detecting Blind Falls Using the Internet of Things

  • Ahmad Abusukhon
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 9 / pp.2377-2398 / 2023
  • In our daily lives, we engage in a variety of tasks that rely on our senses, such as seeing. Blindness is the absence of the sense of vision. According to the World Health Organization, 2.2 billion people worldwide suffer from various forms of vision impairment. Unfortunately, blind people face a variety of indoor and outdoor challenges on a daily basis, limiting their mobility and preventing them from engaging in other activities. Blind people are very vulnerable to a variety of hazards, including falls. Various barriers, such as stairs, can cause a fall. The Internet of Things (IoT) is used to track falls and send a warning message to the blind person's caretakers. One of the gaps in previous works is that they were unable to differentiate between true and false falls. Treating false falls as true falls results in many false alarms being sent to the caretakers, who may then reject the IoT system. To bridge this gap, this paper proposes an intelligent shoe that is able to precisely distinguish between false and true falls based on three sensors, namely the load scale sensor, the light sensor, and the flex sensor. The proposed IoT system is tested in an indoor environment for various fall scenarios using four machine learning models. The results from our system showed an accuracy of 0.96. Compared to the state of the art, our system is simpler and more accurate, since it avoids sending false alarms to the caretakers.
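
As a sketch of the classification step the abstract describes, the snippet below trains a classifier on three features per sample, standing in for the load scale, light, and flex sensor readings. The data is synthetic and the random forest is only one plausible choice; the abstract does not name the four models actually compared.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 400
# Assumed feature layout per sample: [load_kg, light_level, flex_bend_deg]; label 1 = true fall.
true_falls = np.column_stack([rng.normal(5, 2, n), rng.normal(0.2, 0.1, n), rng.normal(60, 10, n)])
false_falls = np.column_stack([rng.normal(60, 10, n), rng.normal(0.6, 0.2, n), rng.normal(20, 10, n)])
X = np.vstack([true_falls, false_falls])
y = np.array([1] * n + [0] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```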

Study on Modeling and Experiment of an Optical Three-Axis Tool-Origin Sensor for Applications of Micro Machine-Tools

  • 신우철;이현화;노승국;박종권;노명규
    • 한국정밀공학회지 / Vol. 26, No. 6 / pp.68-73 / 2009
  • One of the traditional optical methods for monitoring a tool is a CCD sensor-based vision system that captures an image of the tool in real time. When a CCD sensor is used, specific lens modules are necessary to monitor the tool with a resolution higher than its pixel size, and a microprocessor is required to obtain the desired data from the captured images; these additional devices make the entire measurement system complex. Another method is to use a pair consisting of an optical source and a detector for each measuring axis. Since this method is based on intensity modulation, the structure of the measurement system is simpler than that of the CCD sensor-based vision system. However, when measuring the three-dimensional position of the tool, it is difficult to apply to micro machine-tools because there may not be enough space to integrate three source-detector pairs. In this paper, in order to develop a tool-origin measurement system that can be employed in micro machine-tools, an improved method for measuring the tool origin along the x, y, and z axes is introduced. The method is based on intensity modulation and employs a single pair consisting of an optical source radiating a divergent beam and a quadrant photodiode to detect the three-dimensional position of the tool. This paper presents the measurement models of the proposed tool-origin sensor. The models were verified experimentally, and the results show that the proposed method is feasible and that the derived models can be used for design.
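
The paper's measurement models are not reproduced in the abstract; the function below only sketches the generic quadrant-photodiode relations that an intensity-modulation sensor of this kind typically uses, with the quadrant labelling and example photocurrents assumed.

```python
def quadrant_position(i_a, i_b, i_c, i_d):
    """i_a..i_d: photocurrents of the four quadrants (A upper-left, B upper-right,
    C lower-left, D lower-right) -- an assumed labelling."""
    total = i_a + i_b + i_c + i_d
    x = ((i_b + i_d) - (i_a + i_c)) / total   # right half minus left half
    y = ((i_a + i_b) - (i_c + i_d)) / total   # top half minus bottom half
    return x, y, total                        # total intensity varies with the axial (z) position

# Example with assumed photocurrents: the spot is shifted toward the right half.
print(quadrant_position(0.9, 1.1, 0.8, 1.2))
```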

Deep Learning Machine Vision System with High Object Recognition Rate using Multiple-Exposure Image Sensing Method

  • Park, Min-Jun;Kim, Hyeon-June
    • 센서학회지 / Vol. 30, No. 2 / pp.76-81 / 2021
  • In this study, we propose a machine vision system with a high object recognition rate. By utilizing a multiple-exposure image sensing technique, the proposed deep learning-based machine vision system can cover a wide light intensity range without further learning processes for the various light intensity ranges. If the proposed machine vision system fails to recognize object features, the system operates in a multiple-exposure sensing mode and detects target objects that are hidden in nearly dark or bright regions. Furthermore, short- and long-exposure images from the multiple-exposure sensing mode are synthesized to obtain accurate object feature information, which yields image information with a wide dynamic range. Even though the object recognition resources for the deep learning process covered a light intensity range of only 23 dB, the prototype machine vision system with the multiple-exposure imaging method demonstrated object recognition performance over a light intensity range of up to 96 dB.
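
The synthesis method is not specified in the abstract; the snippet below sketches one simple way to fuse a short and a long exposure by weighting each pixel according to how well exposed it is. The Gaussian well-exposedness weight and the toy images are assumptions.

```python
import numpy as np

def fuse_exposures(short_img, long_img):
    """short_img, long_img: float arrays in [0, 1] of the same shape."""
    # Well-exposedness weights: pixels near mid-grey are trusted most.
    w_short = np.exp(-((short_img - 0.5) ** 2) / (2 * 0.2 ** 2))
    w_long = np.exp(-((long_img - 0.5) ** 2) / (2 * 0.2 ** 2))
    # Per-pixel weighted average keeps detail from whichever exposure is usable.
    return (w_short * short_img + w_long * long_img) / (w_short + w_long + 1e-6)

short = np.clip(np.random.rand(4, 4), 0, 1)   # stand-in for a short-exposure capture
long_ = np.clip(short * 4.0, 0, 1)            # longer exposure saturates bright pixels
print(fuse_exposures(short, long_))
```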

Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing

  • 이상훈;송진모;배종수
    • 한국군사과학기술학회지 / Vol. 18, No. 3 / pp.226-233 / 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method for estimating the camera pose using a known landmark, for the purpose of autonomous landing of a vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV). The proposed method uses a distinctive methodology to solve the pose estimation problem: we combine extrinsic parameters from known and unknown 3-D (three-dimensional) feature points and an inertial estimate of the camera's 6-DOF (degree-of-freedom) pose into one linear inhomogeneous equation. This allows us to use singular value decomposition (SVD) to neatly solve the given optimization problem. We present experimental results that demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose with ease of implementation.
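
Building the linear system from feature points and inertial data is the paper's contribution and is not reproduced here; the snippet only illustrates the final numerical step it relies on, solving an overdetermined inhomogeneous system Ax = b with an SVD-based pseudo-inverse, using random stand-in matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 6))           # stacked linear constraints on the 6-DOF parameters
x_true = rng.normal(size=6)            # "true" pose parameters for this toy example
b = A @ x_true + rng.normal(scale=1e-3, size=20)   # right-hand side with small noise

# Least-squares solution through the SVD pseudo-inverse: x = V * diag(1/s) * U^T * b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_hat = Vt.T @ ((U.T @ b) / s)

print(np.allclose(x_hat, x_true, atol=1e-2))   # recovered parameters match closely
```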

Automatic Pipeline Welding System with Self-Diagnostic Function and Laser Vision Sensor

  • Kim, Yong-Baek;Moon, Hyeong-Soon;Kim, Jong-Cheol;Kim, Jong-Jun;Choo, Jeong-Bog
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2005년도 ICCAS / pp.1137-1140 / 2005
  • Automatic welding has been used frequently on pipeline projects. Productivity and reliability are the most essential features of an automatic welding system. The mechanized GMAW process is the most widely used welding process, and the carriage-and-band system is the most effective welding system for pipeline laying. This application-oriented paper introduces new automatic welding equipment for pipeline construction, based on cutting-edge design and practical welding physics to minimize downtime. It also describes the control system designed and implemented for the new equipment. The system has a self-diagnostic function that facilitates maintenance and repair, as well as a network function through which welding task data can be transmitted and welding process data can be monitored. The laser vision sensor was designed for narrow welding grooves in order to achieve higher seam tracking accuracy and fully automatic operation.

A Study on Development of a Laser Welding System for Bellows Outside Edge Using a Vision Sensor

  • 이승기;유중돈;나석주
    • Journal of Welding and Joining / Vol. 17, No. 3 / pp.71-78 / 1999
  • Welded metal bellows are commonly manufactured by welding pairs of washer-shaped discs of thin sheet metal stamped from strip stock in thicknesses from 0.025 to 0.254 mm. The discs, or diaphragms, are formed with mating circumferential corrugations. In this study, the diaphragms were welded using a CW Nd:YAG laser to form metal bellows. The bellows was fixed on a jig and compressed axially, while Cu-rings were installed between the bellows edges to ensure intimate contact of the edges. The difference between the inner diameter of the bellows and the jig shaft causes an eccentricity, while the tolerance between the motor shaft and the jig shaft causes a wobble-type motion. A vision sensor based on optical triangulation was used for seam tracking. An image processing algorithm that can distinguish the image formed by the bellows edge from that formed by the Cu-ring was developed, and the geometric relationship describing the eccentricity and the wobble-type motion was modeled. Seam tracking using the image processing algorithm and the geometric model was performed successfully.
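
The geometric model itself is not given in the abstract; the sketch below only illustrates the general form such a model can take, with the eccentricity and the wobble each contributing a roughly sinusoidal radial offset of the tracked seam over one spindle revolution. The amplitudes and phases are assumed values.

```python
import numpy as np

def seam_offset(theta_rad, ecc_mm=0.15, ecc_phase=0.3, wobble_mm=0.05, wobble_phase=1.1):
    """Radial offset of the seam seen by the vision sensor at spindle angle theta."""
    return (ecc_mm * np.cos(theta_rad - ecc_phase)
            + wobble_mm * np.cos(theta_rad - wobble_phase))

# Sample the offset over one revolution; a tracker would feed this correction to the optics.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
print(np.round(seam_offset(theta), 3))
```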

Development of a Dual Electromagnetic Sensor-Based Weld Line Seam Tracking System

  • 조방현;민기업;아미트;김동호;김수호;권순창
    • 대한용접접합학회:학술대회논문집 / 대한용접접합학회 2005년도 추계학술발표대회 개요집 / pp.144-146 / 2005
  • A dual electromagnetic sensor is used for sensing the weld line. The sensor consists of an excitation coil and two sensing coils wound over a ferromagnetic core; by using the dual sensor, the effect of noise is minimized. It is based on the generation of eddy currents in the welding plate by passing current through the excitation coil. The sensor can track butt joints having no gap between them, which a vision-based sensor fails to track. Sensor sensitivity depends on the number of coil turns, the excitation frequency, the distance of the sensor from the workpiece, the core diameter, etc. The whole system consists of a sensor, a signal processing board, a motion controller, and a personal computer (PC). The raw sensor signal is processed by the signal processing board, which performs amplification, rectification, filtering, averaging, offset adjustment, etc. Based on the sensor data, the motion controller adjusts the position of the welding torch.
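
As a sketch of the signal path the abstract lists, the snippet below applies amplification, rectification, first-order low-pass filtering, averaging, and offset adjustment to two simulated sensing-coil voltages and takes their difference as the tracking error. All gains, frequencies, and amplitudes are assumed; the real processing runs on the dedicated signal processing board.

```python
import numpy as np

def process_coil(v, gain=100.0, fs=10_000.0, cutoff_hz=20.0, offset=0.0):
    v = gain * v                                    # amplification
    v = np.abs(v)                                   # full-wave rectification
    alpha = 1.0 / (1.0 + fs / (2 * np.pi * cutoff_hz))
    out = np.zeros_like(v)                          # first-order low-pass filter
    for i in range(1, len(v)):
        out[i] = out[i - 1] + alpha * (v[i] - out[i - 1])
    return out.mean() - offset                      # averaging and offset adjustment

t = np.arange(0, 0.05, 1 / 10_000.0)
left = 0.010 * np.sin(2 * np.pi * 1_000 * t)        # stand-in pickup voltage, left sensing coil
right = 0.012 * np.sin(2 * np.pi * 1_000 * t)       # right coil sees a slightly stronger field
error = process_coil(right) - process_coil(left)    # signed tracking error for the torch position
print(f"tracking error signal: {error:.4f}")
```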
