• Title/Abstract/Keywords: Image guided navigation

Search results: 27 items (processing time: 0.023 seconds)

비전 시스템을 이용한 AGV의 차선인식 및 장애물 위치 검출에 관한 연구 (A Study on Detection of Lane and Situation of Obstacle for AGV using Vision System)

  • 이진우;이영진;이권순
• 한국항해항만학회: Conference Proceedings
    • /
• Proceedings of the 한국항해항만학회 2000 Fall Conference
    • /
    • pp.207-217
    • /
    • 2000
  • In this paper, we describe an image processing algorithm that recognizes the road lane and the spatial relationship between the AGV and other vehicles. We conducted AGV driving tests with a color CCD camera mounted on top of the vehicle to acquire the digital image signal. The approach consists of two parts. The first is an image preprocessing step that measures the condition of the lane and vehicle, extracting line information using an RGB ratio cutting algorithm, edge detection, and the Hough transform. The second determines the situation of other vehicles using image processing and a viewport. First, the 2D image information from the vision sensor is interpreted as 3D information using the angle and position of the CCD camera. Through these processes, once the vehicle knows the driving conditions (angle, distance error, and the real positions of other vehicles), the reference steering angle can be calculated.

  • PDF

영상탐색기 적용 전술유도무기 영상 내 표적존재확률 분석을 위한 M&S 설계 및 분석 (Modeling and Simulation of Target Existence Probability in Tactical Guidance Missile Seeker Image)

  • 설상환
    • 한국시뮬레이션학회논문지
    • /
• Vol. 24, No. 4
    • /
    • pp.43-49
    • /
    • 2015
  • The seeker of a tactical guided missile is designed under the diameter and weight constraints imposed during development. These hardware characteristics limit seeker performance, including field of view, resolution, and tracking algorithms, which in turn determines the maximum target acquisition range. A long-range tactical guided missile flies under navigation guidance, using pure INS navigation or integrated GPS/INS navigation, until it reaches the maximum target acquisition range; however, pure INS navigation degrades rapidly as flight time increases, and GPS/INS navigation degrades rapidly under jamming. In this paper, we perform a simulation that analyzes, from the perspective of the overall tactical guided missile system, the probability that the target exists within the seeker image, considering variables such as the maximum target acquisition range and navigation system performance.
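
The simulation described above can be illustrated, in heavily simplified form, with a Monte Carlo sketch: sample a navigation error at seeker handover and count how often the target still falls inside the seeker field of view. All parameter values below (field of view, handover range, error sigma) are invented for illustration and are not taken from the paper:

```python
import math
import random

def target_in_image_probability(fov_deg, handover_range_m, nav_sigma_m,
                                n_trials=10000, seed=0):
    """Monte Carlo estimate of the probability that the target lies inside the
    seeker field of view at handover, given Gaussian navigation error."""
    rng = random.Random(seed)
    half_fov = math.radians(fov_deg / 2.0)
    hits = 0
    for _ in range(n_trials):
        # Cross-track and vertical position errors from accumulated nav drift.
        cross = rng.gauss(0.0, nav_sigma_m)
        vert = rng.gauss(0.0, nav_sigma_m)
        # Angular offset of the true target as seen from the erroneous position.
        offset = math.atan2(math.hypot(cross, vert), handover_range_m)
        if offset <= half_fov:
            hits += 1
    return hits / n_trials

# Small navigation error (e.g. GPS/INS) vs. large error (e.g. drifted pure INS)
p_good = target_in_image_probability(fov_deg=3.0, handover_range_m=10000, nav_sigma_m=50)
p_bad = target_in_image_probability(fov_deg=3.0, handover_range_m=10000, nav_sigma_m=500)
```

The contrast between the two cases mirrors the paper's motivation: as navigation error grows with flight time or jamming, the probability of the target appearing in the seeker image drops.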

이동형 로보트 주행을 위한 장애물 검출에 관한 연구 (A Study on Obstacle Detection for Mobile Robot Navigation)

  • 윤지호;우동민
• 대한전기학회: Conference Proceedings
    • /
• Proceedings of the 대한전기학회 1995 Fall Conference (Society Headquarters)
    • /
    • pp.587-589
    • /
    • 1995
  • The safe navigation of a mobile robot requires recognition of the environment through vision processing. To follow a given path, the robot should acquire information about where walls and corridors are located. Unexpected obstacles should also be detected as rapidly as possible for safe obstacle avoidance. In this paper, we assume that the mobile robot navigates on a flat surface. Under this assumption, we simplify the correspondence problem by working in the free navigation surface and matching features in that coordinate system. The vision processing system adopts edge line segments as its features. The line segments extracted from both images are matched in the free navigation surface. According to the matching result, each line segment is labeled with attributes regarding obstacle and free surface, and the 3D shape of the obstacle is interpreted. The proposed vision processing method is verified through various simulations and experiments using real images.

  • PDF
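
The flat-surface assumption mentioned above is what lets image features be mapped into a common ground-plane ("free navigation surface") coordinate system. A minimal sketch of that back-projection for a single pixel, assuming a simple pinhole camera with known height and downward tilt (all values are illustrative, not from the paper):

```python
import math

def pixel_to_ground(u, v, f_px, cx, cy, cam_height_m, tilt_rad):
    """Back-project pixel (u, v) onto the ground plane for a camera at height
    cam_height_m, pitched down by tilt_rad (flat-floor assumption)."""
    # Ray direction in the camera frame (pinhole model: +z forward, +y down).
    ray = ((u - cx) / f_px, (v - cy) / f_px, 1.0)
    # Rotate about the x-axis by the downward tilt into the world frame.
    c, s = math.cos(tilt_rad), math.sin(tilt_rad)
    dy = c * ray[1] + s * ray[2]   # world "down" component
    dz = -s * ray[1] + c * ray[2]  # world "forward" component
    if dy <= 0:
        return None  # ray points at or above the horizon; no ground intersection
    t = cam_height_m / dy          # scale so the ray drops exactly cam_height_m
    return (t * ray[0], t * dz)    # (lateral, forward) ground coordinates in metres

# A pixel below the principal point, camera 1 m high, tilted 30 degrees down
lateral, forward = pixel_to_ground(320, 300, f_px=500, cx=320, cy=240,
                                   cam_height_m=1.0, tilt_rad=math.radians(30))
```

Matching features from two views in these ground coordinates, rather than in raw image coordinates, is what simplifies the correspondence problem under the flat-floor assumption.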

비전 시스템을 이용한 AGV의 차선인식 및 장애물 위치 검출에 관한 연구 (A Study on Detection of Lane and Situation of Obstacle for AGV using Vision System)

  • 이진우;이영진;이권순
    • 한국항만학회지
    • /
• Vol. 14, No. 3
    • /
    • pp.303-312
    • /
    • 2000
  • In this paper, we describe an image processing algorithm that recognizes the road lane and the spatial relationship between the AGV and other vehicles. We conducted AGV driving tests with a color CCD camera mounted on top of the vehicle to acquire the digital image signal. The approach consists of two parts. The first is an image preprocessing step that measures the condition of the lane and vehicle, extracting line information using an RGB ratio cutting algorithm, edge detection, and the Hough transform. The second determines the situation of other vehicles using image processing and a viewport. First, the 2D image information from the vision sensor is interpreted as 3D information using the angle and position of the CCD camera. Through these processes, once the vehicle knows the driving conditions (lane angle, distance error, and the real positions of other vehicles), the reference steering angle can be calculated.

  • PDF

이동로보트의 궤도관제기법 (NAVIGATION CONTROL OF A MOBILE ROBOT)

  • 홍문성;이상용;한민용
• 제어로봇시스템학회: Conference Proceedings
    • /
• Proceedings of the 제어로봇시스템학회 1989 Korea Automatic Control Conference; Seoul, Korea; 27-28 Oct. 1989
    • /
    • pp.226-229
    • /
    • 1989
  • This paper presents a navigation control method for a vision-guided robot. The robot is equipped with one camera, an IBM/AT-compatible PC, and a sonar system. The robot can either follow a track specified on a monitor screen or navigate to a destination while avoiding any obstacles in its way. The robot finds its current position as well as its moving direction by taking an image of a circular pattern placed on the ceiling.

  • PDF
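
The ceiling-pattern localization idea can be sketched as follows, under an assumed model that the paper does not spell out: the pattern center's pixel offset gives the robot's position beneath the ceiling, and an orientation mark on the pattern's rim gives the heading. All names, values, and the mark-on-rim convention are illustrative:

```python
import math

def robot_pose_from_ceiling_pattern(center_px, mark_px, image_center, f_px,
                                    ceiling_height_m):
    """Estimate robot (x, y, heading) from an upward camera image of a circular
    ceiling pattern with one orientation mark on its rim (assumed model)."""
    # Pixel offset of the pattern center maps to a metric offset at the ceiling.
    scale = ceiling_height_m / f_px          # metres per pixel at the ceiling plane
    dx = (center_px[0] - image_center[0]) * scale
    dy = (center_px[1] - image_center[1]) * scale
    # The robot sits below the camera, displaced opposite to the observed offset.
    x, y = -dx, -dy
    # Heading from the direction of the rim mark relative to the pattern center.
    heading = math.atan2(mark_px[1] - center_px[1], mark_px[0] - center_px[0])
    return x, y, heading

x, y, heading = robot_pose_from_ceiling_pattern(
    center_px=(340, 250), mark_px=(360, 250), image_center=(320, 240),
    f_px=400, ceiling_height_m=2.5)
```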

Color Line 탐색을 이용한 AGV의 주행제어에 관한 연구 (A Study on the Navigation Control of Automated Guided Vehicle using Color Line Search)

  • 박영만;박경우;안동순
    • 한국컴퓨터정보학회논문지
    • /
• Vol. 8, No. 1
    • /
    • pp.13-19
    • /
    • 2003
  • Much research is under way on AGVs (Automated Guided Vehicles) used in flexible manufacturing systems (FMS) and automated warehouses (AWS). Conventional AGVs use magnetic tape, electric wire, RF, or laser as their guideline, which makes installing and changing guidelines time-consuming and expensive. In this paper, we implemented an AGV that navigates using a single color CCD camera with 50 mm yellow color tape as its guideline. Because color tape is used, installing, changing, or extending a line is fast and inexpensive. We present the structure of the implemented AGV, an image processing technique that searches for the navigation guideline by extracting only color features, and the AGV driving results.

  • PDF
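
The color-feature line search described above can be illustrated with a minimal thresholding sketch: segment pixels that look yellow (high R, high G, low B) and report the guide line's horizontal offset from the image center as a steering error. Thresholds and image sizes are illustrative, not the paper's values:

```python
import numpy as np

def guide_line_offset(rgb, r_thresh=150, g_thresh=150, b_thresh=100):
    """Segment a yellow guide line in an RGB image and return the horizontal
    offset of its centroid from the image center, in pixels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mask = (r > r_thresh) & (g > g_thresh) & (b < b_thresh)
    if not mask.any():
        return None  # guide line not visible in this frame
    cols = np.nonzero(mask)[1]
    return float(cols.mean() - rgb.shape[1] / 2.0)

# Synthetic frame: a yellow stripe centered at column 30 of a 64-pixel-wide image
img = np.zeros((48, 64, 3), dtype=np.uint8)
img[:, 28:33] = (255, 255, 0)  # pure yellow stripe
offset = guide_line_offset(img)
```

A negative offset means the line is left of center, so the steering controller would turn left; the sign and magnitude feed the vehicle's steering loop.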

Image-guided navigation surgery for bilateral choanal atresia with a Tessier number 3 facial cleft in an adult

  • Sung, Ji Yoon;Cho, Kyu-Sup;Bae, Yong Chan;Bae, Seong Hwan
    • 대한두개안면성형외과학회지
    • /
• Vol. 21, No. 1
    • /
    • pp.64-68
    • /
    • 2020
  • The coexistence of craniofacial cleft and bilateral choanal atresia has only been reported in three cases in the literature, and only one of those cases involved a Tessier number 3 facial cleft. It is also rare for bilateral choanal atresia to be found in adulthood, with 10 previous cases reported in the literature. This report presents the case of a 19-year-old woman with a Tessier number 3 facial cleft who was diagnosed with bilateral choanal atresia in adulthood. At first, the diagnosis of bilateral choanal atresia was missed and septoplasty was performed. After septoplasty, the patient's symptoms did not improve, and an endoscopic examination revealed previously unnoticed bilateral choanal atresia. Computed tomography showed left membranous atresia and right bony atresia. The patient underwent an operation for opening and widening of the left choana with an image-guided navigation system (IGNS), which enabled accurate localization of the lesion while ensuring patient safety. Postoperatively, the patient became able to engage in nasal breathing and reported that it was easier for her to breathe, and there were no signs of restenosis at a 26-month follow-up. The patient was successfully treated with an IGNS.

복도 주행 로봇을 위한 단일 카메라 영상에서의 사람 검출 (Human Detection in the Images of a Single Camera for a Corridor Navigation Robot)

  • 김정대;도용태
    • 로봇학회논문지
    • /
• Vol. 8, No. 4
    • /
    • pp.238-246
    • /
    • 2013
  • In this paper, a robot vision technique is presented for detecting obstacles, particularly approaching humans, in images acquired by a mobile robot that autonomously navigates a narrow building corridor. A single low-cost color camera is attached to the robot, and a trapezoidal region of interest (ROI) is set in front of the robot in the camera image. The lower parts of a human, such as the feet and legs, are first detected in the ROI in real time from their appearance as the distance between the robot and the human decreases. The human detection is then confirmed by detecting a face within a small search region above the part detected in the trapezoidal ROI. To increase the credibility of detection, a final decision about human detection is made only when a face is detected in two consecutive image frames. We tested the proposed method on images of various people in corridor scenes and obtained promising results. This method can be used by a vision-guided mobile robot to make a detour and avoid collision with a human during indoor navigation.
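
The two-consecutive-frame confirmation rule described above reduces to a small piece of temporal logic, sketched here with the face detection itself abstracted away as a boolean per frame (class and variable names are illustrative):

```python
class HumanConfirmation:
    """Confirm a human detection only when a face is found in two consecutive
    frames, suppressing single-frame false positives."""

    def __init__(self):
        self.prev_face = False  # was a face detected in the previous frame?

    def update(self, face_detected_now):
        # Confirmed only if the current AND previous frames both saw a face.
        confirmed = self.prev_face and face_detected_now
        self.prev_face = face_detected_now
        return confirmed

tracker = HumanConfirmation()
# Per-frame face detector outputs; only the True, True pair should confirm.
results = [tracker.update(f) for f in [False, True, True, False, True]]
```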

광추적기와 내부 비전센서를 이용한 수술도구의 3차원 자세 및 위치 추적 시스템 (3D Orientation and Position Tracking System of Surgical Instrument with Optical Tracker and Internal Vision Sensor)

  • 조영진;오현민;김민영
    • 제어로봇시스템학회논문지
    • /
• Vol. 22, No. 8
    • /
    • pp.579-584
    • /
    • 2016
  • When surgical instruments are tracked in an image-guided surgical navigation system, a high-accuracy stereo vision system, called an optical tracker, is generally used. However, an optical tracker has the disadvantage that a line of sight between the tracker and the surgical instrument must be maintained. Therefore, to complement this disadvantage, in this paper an internal vision sensor is attached to the surgical instrument. By monitoring a target marker pattern attached to the patient with this vision sensor, the surgical instrument can be tracked even when the line of sight of the optical tracker is occluded. To verify the system's effectiveness, a series of basic experiments is carried out, followed by an integration experiment. The experimental results show that the rotational error is bounded by a maximum of 1.32° with a mean of 0.35°, and the translational error by a maximum of 1.72 mm with a mean of 0.58 mm. These results confirm that the proposed tool tracking method using an internal vision sensor is useful and effective for overcoming the occlusion problem of the optical tracker.
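
Reduced to its selection logic, the complementary use of the two sensors described above might look like the following sketch, where an occluded optical tracker is represented by a missing (None) pose; the function and pose format are illustrative assumptions, not the paper's implementation:

```python
def fused_tool_pose(optical_pose, vision_pose):
    """Prefer the optical tracker's pose; fall back to the instrument's internal
    vision sensor when the tracker's line of sight is occluded (pose is None)."""
    if optical_pose is not None:
        return optical_pose, "optical"
    if vision_pose is not None:
        return vision_pose, "vision"
    return None, "lost"  # neither sensor currently sees its target

# Optical tracker occluded; the internal vision sensor still sees its marker.
pose, source = fused_tool_pose(None, (10.0, 5.0, 2.0))
```

A real system would additionally transform the vision sensor's marker-relative pose into the tracker's coordinate frame before handing it to the navigation display.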

An Image-Guided Robotic Surgery System for Spinal Fusion

  • Chung Goo Bong;Kim Sungmin;Lee Soo Gang;Yi Byung-Ju;Kim Wheekuk;Oh Se Min;Kim Young Soo;So Byung Rok;Park Jong Il;Oh Seong Hoon
    • International Journal of Control, Automation, and Systems
    • /
• Vol. 4, No. 1
    • /
    • pp.30-41
    • /
    • 2006
  • The goal of this work is to develop and test a robot-assisted surgery system for spinal fusion. The system is composed of a robot, a surgical planning system, and a navigation system, and assists surgeons in inserting a pedicle screw during the spinal fusion procedure. Compared to conventional methods for spinal fusion, the proposed surgical procedure ensures minimal invasion and better accuracy by using the robot and image information. The robot positions and guides needles, drills, and other surgical instruments, or performs automatic boring and screwing. Pre-operative CT images and intra-operative fluoroscopic images are integrated to provide the surgeon with information for surgical planning. Experiments employing the developed robotic surgery system are conducted. The experimental results confirm that the system is not only able to guide the surgical tools by accurately pointing and orienting at the specified location, but can also successfully compensate for the movement of the patient due to respiration.