• Title/Summary/Keyword: Vision Sensor (비전센서)


A Study of the Shaft Power Measuring System Using Cameras (카메라를 이용한 축계 비틀림 계측 장치 개발)

  • Jeong, Jeong-Soon; Kim, Young-Bok; Choi, Myung-Soo
    • Journal of Ocean Engineering and Technology / v.24 no.4 / pp.72-77 / 2010
  • This paper presents a method for measuring the shaft power of a marine main engine. In traditional systems, shaft power is measured with a strain gauge, which has several disadvantages: it is difficult to mount the gauge on the shaft and to acquire a clean signal for analysis, and the equipment is expensive and complicated. For these reasons, we investigated alternative approaches and propose a new method that uses a vision-based measurement system. In this study, templates for image processing and CCD cameras were installed at both ends of the shaft. To make the cameras capture images synchronously, a trigger mark and an optical sensor were used. The positions of each template in the images from the first and second cameras were compared to calculate the torsion angle. The proposed measurement system can be installed more easily than traditional systems and is suitable for any shaft because it does not contact the shaft. With this approach, the shaft power can be measured while the ship is operating.
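A minimal sketch of the kind of calculation the abstract implies: the twist between the two template positions gives a torsion angle, and the classic torsion formula converts it to torque and power. The shaft parameters, gauge length, and rpm below are invented for illustration and are not from the paper.

```python
import math

def torsion_angle(theta_fore_deg, theta_aft_deg):
    """Torsion angle (rad) from the angular positions of the two templates,
    measured in the synchronously captured fore and aft camera images."""
    return math.radians(theta_fore_deg - theta_aft_deg)

def shaft_power(theta_rad, shear_modulus, polar_moment, gauge_length, rpm):
    """Shaft power from the measured torsion angle: T = G*J*theta/L, P = T*omega."""
    torque = shear_modulus * polar_moment * theta_rad / gauge_length
    omega = 2.0 * math.pi * rpm / 60.0
    return torque * omega

# Hypothetical steel shaft, 0.4 m diameter, templates 3 m apart, 110 rpm
d = 0.4
J = math.pi * d**4 / 32.0            # polar moment of area [m^4]
G = 79e9                             # shear modulus of steel [Pa]
theta = torsion_angle(12.05, 12.00)  # 0.05 deg twist between shaft ends
print(shaft_power(theta, G, J, 3.0, 110.0) / 1e3, "kW")
```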

Performing Missions of a Minicar Using a Single Camera (단안 카메라를 이용한 소형 자동차의 임무 수행)

  • Kim, Jin-Woo; Ha, Jong-Eun
    • The Journal of the Korea institute of electronic communication sciences / v.12 no.1 / pp.123-128 / 2017
  • This paper deals with performing missions through autonomous navigation using a camera and other sensors. Estimating the pose of the car is necessary for it to navigate safely within the given road, and a homography is used to obtain it. The color image is converted to a grey image, and thresholding and edge detection are used to find control points. Two control points are converted into world coordinates with the homography to find the angle and position of the car. Color is used to detect the traffic signal. Experiments confirmed that the given tasks were performed well.
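A hedged sketch of the homography step described above: two detected control points are projected onto the road plane and used to derive a heading angle and lateral offset. The calibration points, coordinates, and the exact pose definition are assumptions, not the paper's values.

```python
import numpy as np
import cv2

# Homography from image pixels to road-plane coordinates; in practice it would be
# calibrated from four or more known ground points (the values here are made up).
img_pts   = np.float32([[100, 400], [540, 400], [200, 250], [440, 250]])
world_pts = np.float32([[-0.5, 0.0], [0.5, 0.0], [-0.5, 1.5], [0.5, 1.5]])
H, _ = cv2.findHomography(img_pts, world_pts)

def car_pose_from_control_points(p_near, p_far):
    """Project two control points found on the lane edge to world coordinates
    and derive the car's heading angle and lateral offset."""
    pts = np.float32([p_near, p_far]).reshape(-1, 1, 2)
    w = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    dx, dy = w[1] - w[0]
    heading = np.arctan2(dx, dy)   # angle of the lane relative to the car axis
    lateral_offset = w[0][0]       # x of the near point ~ offset from the lane
    return heading, lateral_offset

heading, offset = car_pose_from_control_points((310, 420), (330, 300))
print(np.degrees(heading), offset)
```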

An Weldability Estimation of Laser Welded Specimens (레이저 용접물의 용접성 평가)

  • Lee, Jeong-Ick
    • Transactions of the Korean Society of Machine Tool Engineers / v.16 no.1 / pp.60-68 / 2007
  • The weldability of the front bead was estimated with a laser vision sensor after high-speed butt laser welding under arbitrary conditions. A real-time GUI (Graphic User Interface) system for weldability assessment was developed on the basis of reference texts and field quality levels. From the bead imperfections, the absolute positions of defects, and the defect intensity index of the front bead, referenced to formability criteria, the weldability and defect intensity index of the back bead were estimated with a back-propagation neural network. Comparing the back-bead data measured by the laser vision sensor with the corresponding values produced by the neural network showed similar results. Consequently, when the welding conditions of a production line are known, the weldability of the back bead can be estimated from the front-bead data alone, without a laser vision sensor or welding inspection experts, and the estimates can be used as the final inspection results for the back bead.
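A small sketch of the mapping the abstract describes, using a back-propagation network (scikit-learn's MLPRegressor) to estimate a back-bead defect intensity index from front-bead measurements. The feature set, training data, and network size are invented placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training set: front-bead features from the laser vision sensor
# (bead width, bead height, undercut depth, defect position) paired with the
# back-bead defect intensity index obtained from direct measurement.
X = np.array([
    [1.2, 0.30, 0.00, 10.0],
    [1.1, 0.28, 0.05, 22.0],
    [0.9, 0.22, 0.12, 35.0],
    [1.3, 0.33, 0.02, 14.0],
    [0.8, 0.18, 0.20, 41.0],
])
y = np.array([0.05, 0.15, 0.45, 0.08, 0.60])  # back-bead defect intensity index

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)

# Estimate back-bead quality from front-bead data alone, as the paper proposes.
print(model.predict([[1.0, 0.25, 0.08, 18.0]]))
```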

Development of a Lateral Control System for Autonomous Vehicles Using Data Fusion of Vision and IMU Sensors with Field Tests (비전 및 IMU 센서의 정보융합을 이용한 자율주행 자동차의 횡방향 제어시스템 개발 및 실차 실험)

  • Park, Eun Seong; Yu, Chang Ho; Choi, Jae Weon
    • Journal of Institute of Control, Robotics and Systems / v.21 no.3 / pp.179-186 / 2015
  • In this paper, a novel lateral control system is proposed for the purpose of improving lane-keeping performance independently of GPS signals. Lane keeping is a key function for the realization of unmanned driving systems. To achieve this objective, a vision-sensor-based real-time lane detection scheme is developed. Furthermore, we employ data fusion together with the real-time steering angle of the test vehicle to improve lane-keeping performance; the fused direction data are obtained from an IMU sensor and a vision sensor. The performance of the proposed system was verified by computer simulations and by field tests using a MOHAVE, a commercial vehicle from Kia Motors of Korea.
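One common way to fuse a drift-free but low-rate vision heading with a high-rate IMU yaw rate is a complementary filter; the sketch below illustrates that idea only, since the paper's actual fusion scheme, gains, and rates are not given here and all values are assumptions.

```python
class HeadingFusion:
    """Complementary-filter style fusion of IMU yaw rate (high rate, drifts)
    with the heading measured by the vision-based lane detector (low rate,
    noisy but drift-free). Gains and update rates are illustrative only."""

    def __init__(self, alpha=0.98):
        self.alpha = alpha
        self.heading = 0.0   # fused heading relative to the lane [rad]

    def predict(self, yaw_rate, dt):
        # Integrate the IMU yaw rate between camera frames.
        self.heading += yaw_rate * dt
        return self.heading

    def correct(self, vision_heading):
        # Blend in the vision measurement when a new lane fit is available.
        self.heading = self.alpha * self.heading + (1.0 - self.alpha) * vision_heading
        return self.heading

fusion = HeadingFusion()
for k in range(100):                       # 100 Hz IMU updates
    fusion.predict(yaw_rate=0.01, dt=0.01)
    if k % 10 == 0:                        # 10 Hz vision updates
        fusion.correct(vision_heading=0.002 + 0.00005 * k)
print(fusion.heading)
```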

Linear Velocity Control of the Mobile Robot with the Vision System at Corridor Navigation (비전 센서를 갖는 이동 로봇의 복도 주행 시 직진 속도 제어)

  • Kwon, Ji-Wook; Hong, Suk-Kyo; Chwa, Dong-Kyoung
    • Journal of Institute of Control, Robotics and Systems / v.13 no.9 / pp.896-902 / 2007
  • This paper proposes a vision-based kinematic control method for mobile robots with an on-board camera. In the previous literature on controlling mobile robots with camera vision information, the forward velocity is set to a constant and only the rotational velocity of the robot is controlled. More efficient motion, however, requires controlling the forward velocity as well, depending on the position in the corridor. Thus, both forward and rotational velocities are controlled in the proposed method, so that the mobile robot can move faster when the corner of the corridor is far away and slow down as it approaches the dead end of the corridor. In this way, smooth turning motion along the corridor is possible. To this end, the camera's visual information is used to obtain the perspective lines and the distance from the current robot position to the dead end. Then, the vanishing point and a pseudo desired position are obtained, and the forward and rotational velocities are controlled by an LOS (Line Of Sight) guidance law. Both numerical and experimental results are included to demonstrate the validity of the proposed method.
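A rough sketch in the spirit of the scheme above: steer toward the vanishing point of the corridor's perspective lines and scale the forward speed down as the dead end approaches. The gains, thresholds, and the simplified guidance law are assumptions, not the paper's LOS formulation.

```python
import math

def corridor_velocity_commands(vanishing_x, image_center_x, focal_px,
                               dist_to_end, v_max=0.6, k_w=1.5, d_slow=2.0):
    """Forward and rotational velocity commands for corridor navigation.
    The heading error comes from the vanishing point of the perspective lines;
    the forward speed is reduced as the dead end of the corridor gets close."""
    heading_err = math.atan2(vanishing_x - image_center_x, focal_px)
    omega = -k_w * heading_err                     # steer toward the vanishing point
    v = v_max * min(1.0, dist_to_end / d_slow)     # slow down near the dead end
    v *= math.cos(heading_err)                     # and while turning
    return v, omega

print(corridor_velocity_commands(345.0, 320.0, 520.0, 5.0))
```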

Vision Sensor and Deep Learning-based Around View Monitoring System for Ship Berthing (비전 센서 및 딥러닝 기반 선박 접안을 위한 어라운드뷰 모니터링 시스템)

  • Kim, Hanguen; Kim, Donghoon; Park, Byeolteo; Lee, Seung-Mok
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.2 / pp.71-78 / 2020
  • This paper proposes a vision-sensor and deep-learning-based around-view monitoring system for ship berthing. Berthing a ship at a port requires precise relative position and relative speed information between the mooring facility and the ship. Vessels of Handysize or larger must be docked with the help of pilots and tugboats. In the case of ships handling dangerous cargo, tugboats push the ship into the berth using the distance and velocity information received from a berthing aid system (BAS). However, existing BAS equipment is very expensive, there is a limit on the size of vessel that can be measured, and it is difficult to measure distance and speed when there are obstacles near the port. This paper proposes a relative distance and speed estimation system that can be used as a ship berthing assist system. The proposed system is verified by comparing its performance with an existing laser-based distance and speed measurement system in field tests at an actual port.
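The abstract does not detail how speed is derived from the vision pipeline; a simple, assumed approach is to fit a line to the last few distance estimates and report its slope as the approach speed, as sketched below with synthetic values.

```python
import numpy as np

def berthing_speed(distances, timestamps):
    """Relative approach speed of the hull toward the quay wall, estimated by a
    least-squares slope over recent distance measurements produced by the
    around-view/deep-learning pipeline (values here are synthetic)."""
    t = np.asarray(timestamps) - timestamps[0]
    d = np.asarray(distances)
    slope, _ = np.polyfit(t, d, 1)   # m/s; negative slope means closing in
    return -slope                    # report approach speed as positive

dists = [12.0, 11.7, 11.5, 11.2, 11.0]   # bow-to-quay distance [m], 1 Hz samples
times = [0.0, 1.0, 2.0, 3.0, 4.0]
print(berthing_speed(dists, times), "m/s toward the quay")
```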

A Study of Digital Cartoon Authoring Tool with Specialized Emotion Viewer in Smart Phone (스마트폰에 특화된 감성 뷰어와 디지털 만화 저작도구에 관한 연구)

  • Koh, Hee-Chang; Min, Hyun-Ki; Cho, Jae-Hoon; Cho, Eun-Ae; Lee, Se-Hoon
    • Proceedings of the Korean Society of Computer Information Conference / 2010.07a / pp.87-90 / 2010
  • Smartphones, which have spread rapidly in recent years, offer not only a large screen and touch input but also a variety of sensors, including GPS, and rich presentation capabilities. This study designed and implemented a digital cartoon emotion viewer, together with its authoring tool, that exploits the expressive capabilities provided by most smartphones so that the author's intent is conveyed to the reader as fully as possible. In the viewer running on the smartphone, readers can read digital cartoons effectively through scene-transition effects, vibration, and sound effects for each cut. An authoring tool was also implemented so that cartoon authors who are not computer experts can easily apply these features; besides these emotional effects, the tool lets authors apply screen-transition effects and scene flow as they intend. The viewer was developed for the Apple, Android, Windows Mobile, and Symbian operating systems, and the authoring tool was developed for Windows XP. If the viewer and authoring tool developed in this study are commercialized, they could be applied to the domestic mobile digital cartoon market, currently dominated by Japanese products, and could grow into an app store for digital cartoons.


Development of a Fault Diagnosis System for Circulating Fluidized Bed Boiler Tube (순환유동층 보일러 튜브 결함 진단을 위한 진단장치 개발)

  • Kim, Yu-Hyun; Jeong, In-Kyu; Ban, Jae-Kyo; Kim, JaeYoung; Kim, Jong-Myon
    • Proceedings of the Korean Society of Computer Information Conference / 2018.07a / pp.53-54 / 2018
  • The aging of boiler tubes in thermal power plants has recently increased the frequency of unplanned shutdowns and delayed restart times, which leads to enormous economic and social losses; condition-based maintenance is needed to prevent this. Current condition-based maintenance relies on experts to make the diagnosis after the sensor, signal acquisition, and signal analysis stages, so it is difficult to respond immediately and equipment restart is delayed. Therefore, to diagnose the condition automatically without expert assistance, this paper implements a diagnosis algorithm based on the support vector machine (SVM), a machine learning technique, and develops a diagnostic device equipped with it, so that non-experts can respond immediately and the duration and frequency of unplanned shutdowns can be reduced.
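A minimal sketch of an SVM-based fault classifier of the kind described, using scikit-learn. The signal features, labels, and kernel settings below are invented for illustration and are not from the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature vectors extracted from boiler-tube sensor signals
# (e.g., RMS, kurtosis, crest factor) with labels 0 = normal, 1 = defective.
X = np.array([
    [0.21, 3.1, 2.8],
    [0.23, 3.0, 2.9],
    [0.55, 6.8, 4.9],
    [0.60, 7.2, 5.3],
    [0.25, 3.2, 3.0],
    [0.58, 6.5, 5.0],
])
y = np.array([0, 0, 1, 1, 0, 1])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)

# On-device diagnosis of a newly acquired signal, with no expert in the loop.
print("defective" if clf.predict([[0.52, 6.1, 4.7]])[0] == 1 else "normal")
```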


Target Tracking Control of Mobile Robots with Vision System in the Absence of Velocity Sensors (속도센서가 없는 비전시스템을 이용한 이동로봇의 목표물 추종)

  • Cho, Namsub; Kwon, Ji-Wook; Chwa, Dongkyoung
    • The Transactions of The Korean Institute of Electrical Engineers / v.62 no.6 / pp.852-862 / 2013
  • This paper proposes a target tracking control method for wheeled mobile robots with nonholonomic constraints, using backstepping-like feedback linearization. For target tracking, a vision system is applied to the mobile robot to obtain the relative posture between the mobile robot and the target. The robot does not use sensors to obtain velocity information, so the velocities of both the mobile robot and the target are assumed unknown; the proposed method uses only the maximum velocity bounds of the mobile robot and the target. First, pseudo commands for the forward linear velocity and the heading direction angle are designed from the kinematics using the obtained image information. Then, the actual control inputs are designed to make the actual forward linear velocity and heading direction angle follow the pseudo commands. Simulations and experiments with the mobile robot confirm that the proposed control method can track the target even when velocity sensors are not used at all.
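A hedged sketch of the first step only: kinematic pseudo-commands for forward speed and heading from the relative range and bearing the vision system provides. The gains, standoff distance, and saturation rule are assumptions; the paper's backstepping layer is not reproduced here.

```python
import math

def pseudo_commands(rel_range, rel_bearing, v_max=0.5,
                    k_v=0.8, k_w=2.0, standoff=0.3):
    """Kinematic pseudo-commands for target tracking from the relative range and
    bearing provided by the on-board vision system. Robot and target velocities
    are not measured; only a bound (v_max) is assumed known."""
    v_cmd = min(v_max, k_v * max(0.0, rel_range - standoff))  # close the range
    w_cmd = k_w * rel_bearing                                  # face the target
    return v_cmd, w_cmd

print(pseudo_commands(rel_range=1.2, rel_bearing=math.radians(15)))
```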

A Study for Detecting AGV Driving Information using Vision Sensor (비전 센서를 이용한 AGV의 주행정보 획득에 관한 연구)

  • Lee, Jin-Woo; Sohn, Ju-Han; Choi, Sung-Uk; Lee, Young-Jin; Lee, Kwon-Soon
    • Proceedings of the KIEE Conference / 2000.07d / pp.2575-2577 / 2000
  • We conducted AGV driving experiments with a color CCD camera mounted on the vehicle. This paper can be divided into two parts: an image processing part that measures the state of the guideline and the AGV, and a part that derives the reference steering angle from the image processing results. First, the 2-D image information from the vision sensor is converted into 3-D information using the angle and position of the CCD camera; through this process, the AGV determines its driving state. Using this information, the AGV then calculates a reference steering angle that changes with its speed: at low speed it focuses on the left/right error of the guideline, and as the speed increases it focuses on the slope of the guideline. Finally, we model this behavior as a PID controller and adjust its coefficients according to the speed of the AGV.
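A small sketch of a speed-scheduled steering PID in the spirit of the abstract: the error blends the guideline's lateral offset (dominant at low speed) with its slope (dominant at high speed). The blending rule, gains, and sample rate are illustrative assumptions, not the paper's values.

```python
class GuidelineSteeringPID:
    """Steering-angle PID on a blend of the guideline's lateral error and slope,
    with the blend scheduled by AGV speed. All gains are illustrative."""

    def __init__(self, kp=0.8, ki=0.05, kd=0.2, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def steering_angle(self, lateral_err, slope_err, speed, v_ref=1.0):
        # Weight shifts from lateral error (low speed) to line slope (high speed).
        w = min(1.0, speed / v_ref)
        err = (1.0 - w) * lateral_err + w * slope_err
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = GuidelineSteeringPID()
print(pid.steering_angle(lateral_err=0.08, slope_err=0.02, speed=0.4))
```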
