Title/Summary/Keyword: Camera Link

43 results

Determination of Optimal Position of an Active Camera System Using Inverse Kinematics of Virtual Link Model and Manipulability Measure (가상 링크 모델의 역기구학과 조작성을 이용한 능동 카메라 시스템의 최적 위치 결정에 관한 연구)

  • Chu, Gil-Whoan; Cho, Jae-Soo; Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 2003.11b / pp.239-242 / 2003
  • In this paper, we propose a method for determining the optimal camera position using the inverse kinematics of a virtual link model and a manipulability measure. The variable distance and viewing direction between a target object and the camera are modeled as a virtual link. Using the inverse kinematics of this virtual link model, we find the regions that satisfy the direction and distance constraints for observing the target object; the resulting solutions simultaneously satisfy camera accessibility as well as the direction and distance constraints. A manipulability measure of the active camera system is then used to select the optimal camera position from among the multiple inverse kinematics solutions. With the inverse kinematics of the virtual link model and the manipulability measure, the optimal camera position for observing a target object can be determined easily and rapidly.
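The abstract does not spell out which manipulability measure is used; a minimal numpy sketch of the common Yoshikawa measure, w = sqrt(det(J J^T)), applied to choosing among candidate configurations of a planar 3-link active camera mount. The link lengths and candidate joint angles below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def planar_jacobian(q, lengths):
    """Position Jacobian (2 x n) of a planar serial chain with joint angles q."""
    q = np.asarray(q, dtype=float)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        # joint i moves every link from i onward
        for j in range(i, len(q)):
            angle = np.sum(q[: j + 1])
            J[0, i] += -lengths[j] * np.sin(angle)
            J[1, i] += lengths[j] * np.cos(angle)
    return J

def manipulability(J):
    """Yoshikawa measure w = sqrt(det(J J^T))."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

# Illustrative candidate configurations (radians); in practice these would
# come from the inverse kinematics of the virtual link model.
lengths = [0.3, 0.25, 0.15]
candidates = [np.array([0.4, 0.9, -0.5]),
              np.array([1.1, -0.7, 0.8])]

best = max(candidates, key=lambda q: manipulability(planar_jacobian(q, lengths)))
print("selected configuration:", best)
```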


An Implementation of FPGA-based Pattern Matching System for PCB Pattern Detection (PCB 패턴 검출을 위한 FPGA 기반 패턴 매칭 시스템 구현)

  • Jung, Kwang-Sung; Moon, Cheol-Hong
    • The Journal of the Korea institute of electronic communication sciences / v.11 no.5 / pp.465-472 / 2016
  • This study implemented an FPGA-based system to extract PCB patterns. Printed circuit boards produced today are becoming finer and more complex, so a vision system that can detect defects in fine patterns is increasingly important. The study built an FPGA-based system that provides high-speed processing for vision automation of the PCB production process. The vision library used to extract defect patterns was also implemented as IP cores to optimize the system: a Camera Link IP, a pattern matching IP, a VGA IP, an edge extraction IP, and a memory IP.
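The pattern matching IP itself is hardware; as a point of reference, here is a minimal software sketch of template matching by normalized cross-correlation over a grayscale image (pure numpy; the synthetic board and template below are illustrative assumptions, not the paper's data).

```python
import numpy as np

def ncc_match(image, template):
    """Slide `template` over `image` and return the normalized cross-correlation map."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t) + 1e-9
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            out[y, x] = float((p * t).sum()) / (np.linalg.norm(p) * t_norm + 1e-9)
    return out

# Illustrative usage: locate where the reference pattern matches best.
rng = np.random.default_rng(0)
board = rng.random((64, 64))
pattern = board[10:18, 20:28].copy()
scores = ncc_match(board, pattern)
y, x = np.unravel_index(np.argmax(scores), scores.shape)
print("best match at", (y, x), "score", round(scores[y, x], 3))
```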

Implementation of an FPGA-based Frame Grabber System for PCB Pattern Detection (PCB 패턴 검출을 위한 FPGA 기반 프레임 그래버 시스템 구현)

  • Moon, Cheol-Hong
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.2 / pp.435-442 / 2018
  • This study implemented an FPGA-based system to extract PCB defect patterns. The FPGA-based system performs pattern matching at high speed for vision automation. The image processing library used to extract defect patterns was also implemented as IP cores to optimize the system: a Camera Link IP, a Histogram IP, a VGA IP, a Horizontal Projection IP, and a Vertical Projection IP. In terms of hardware, a Xilinx Virtex-5 FPGA was used to receive and process the images sent from a digital camera, and the system runs a MicroBlaze CPU. The resulting images are sent to a PC and displayed on a 7-inch TFT-LCD and a monitor.
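The Histogram and Projection IPs compute standard per-frame statistics; a minimal numpy sketch of their software equivalents (the synthetic 8-bit frame below is an illustrative assumption).

```python
import numpy as np

def gray_histogram(img, bins=256):
    """Histogram of an 8-bit grayscale image, as a Histogram IP would accumulate it."""
    return np.bincount(img.ravel(), minlength=bins)

def horizontal_projection(img):
    """Sum of pixel values along each row (one value per image line)."""
    return img.sum(axis=1)

def vertical_projection(img):
    """Sum of pixel values along each column."""
    return img.sum(axis=0)

# Illustrative usage on a synthetic 8-bit frame.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
hist = gray_histogram(frame)
h_proj = horizontal_projection(frame.astype(np.uint32))
v_proj = vertical_projection(frame.astype(np.uint32))
print(hist.shape, h_proj.shape, v_proj.shape)
```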

PTZ Camera Based Multi Event Processing for Intelligent Video Network (지능형 영상네트워크 연계형 PTZ카메라 기반 다중 이벤트처리)

  • Chang, Il-Sik; Ahn, Seong-Je; Park, Gwang-Yeong; Cha, Jae-Sang; Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.11A / pp.1066-1072 / 2010
  • In this paper we propose a multi-event-handling surveillance system using multiple PTZ cameras. One event is assigned to each PTZ camera to detect unusual situations. If a new object appears in the scene while a camera is tracking the previous one, that camera cannot handle both objects simultaneously; likewise, if the object moves out of the scene during tracking, the camera loses it. In the proposed method, a nearby camera takes over in each case, tracking the new object or re-detecting the lost one. The nearby camera receives the object's location from the original camera and establishes a seamless event link for it. Simulation results show continuous camera-to-camera object tracking.
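A minimal sketch of the hand-off logic the abstract describes: when a camera is busy or loses its object, the nearest available camera receives the object's last known location and takes over. Class and field names are illustrative assumptions, not the paper's interfaces.

```python
from dataclasses import dataclass

@dataclass
class PTZCamera:
    name: str
    position: tuple          # camera placement in ground-plane coordinates
    tracked: object = None   # event/object this camera is currently handling

def distance2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def hand_off(obj_id, last_location, current_cam, cameras):
    """Pass a new or lost object to the nearest camera that is not already busy."""
    free = [c for c in cameras if c is not current_cam and c.tracked is None]
    if not free:
        return None
    nearest = min(free, key=lambda c: distance2(c.position, last_location))
    nearest.tracked = obj_id          # the receiving camera continues the event
    return nearest

# Illustrative usage: cam_a is tracking object 1 when object 2 appears.
cams = [PTZCamera("cam_a", (0, 0), tracked=1),
        PTZCamera("cam_b", (10, 0)),
        PTZCamera("cam_c", (0, 10))]
taker = hand_off(obj_id=2, last_location=(8, 1), current_cam=cams[0], cameras=cams)
print("handed off to", taker.name if taker else "nobody")
```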

Optical Vehicle to Vehicle Communications for Autonomous Mirrorless Cars

  • Jin, Sung Yooun; Choi, Dongnyeok; Kim, Byung Wook
    • Journal of Multimedia Information System / v.5 no.2 / pp.105-110 / 2018
  • Autonomous cars require the integration of multiple communication systems for driving safety. Many carmakers have unveiled mirrorless concept cars that replace rear-view and side-view mirrors with camera monitoring systems, which eliminate blind spots and reduce risk. This paper presents optical vehicle-to-vehicle (V2V) communications for autonomous mirrorless cars. Flicker-free light emitting diode (LED) light sources, which provide illumination and data transmission simultaneously, and a high-speed camera serve as the transmitters and the receiver of the optical camera communication (OCC) link, respectively. The rear vehicle transmits both future-action data and vehicle-type data using a headlamp or daytime running light, and the front vehicle receives the OCC data through the camera that replaces its side mirrors, helping to prevent accidents while driving. Experimental results showed that action and vehicle-type information was successfully delivered from the LED light sources to the front vehicle's camera over the OCC link, demonstrating that OCC-based V2V communication can be a viable way to improve driving safety for mirrorless cars.
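A minimal sketch of how the rear vehicle's action and vehicle-type data might be framed as on-off keying for an LED transmitter and decoded per camera frame. The preamble, field widths, and code tables are illustrative assumptions, not the paper's modulation scheme.

```python
# Illustrative code tables; the paper does not specify these values.
ACTIONS = {"keep_lane": 0b00, "turn_left": 0b01, "turn_right": 0b10, "brake": 0b11}
VEHICLES = {"sedan": 0b00, "suv": 0b01, "truck": 0b10, "bus": 0b11}

PREAMBLE = [1, 0, 1, 0, 1, 1]   # assumed sync pattern so the camera can find frame starts

def to_bits(value, width):
    return [(value >> i) & 1 for i in reversed(range(width))]

def build_frame(action, vehicle):
    """Preamble + 2-bit action + 2-bit vehicle type, as an LED on/off sequence."""
    return PREAMBLE + to_bits(ACTIONS[action], 2) + to_bits(VEHICLES[vehicle], 2)

def decode_frame(bits):
    """Inverse of build_frame, applied by the front vehicle's camera receiver."""
    assert bits[: len(PREAMBLE)] == PREAMBLE, "no sync"
    payload = bits[len(PREAMBLE):]
    action = {v: k for k, v in ACTIONS.items()}[payload[0] * 2 + payload[1]]
    vehicle = {v: k for k, v in VEHICLES.items()}[payload[2] * 2 + payload[3]]
    return action, vehicle

frame = build_frame("brake", "truck")
print(frame, "->", decode_frame(frame))
```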

Optical Camera Communications: Future Approach of Visible Light Communication

  • Le, Nam-Tuan; Nguyen, Trang; Jang, Yeong Min
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.2 / pp.380-384 / 2015
  • As an extension of visible light communication, Optical Camera Communications (OCC) is a promising service for smart devices. In line-of-sight marketing and indoor localization applications in particular, the camera already present in a smart device can receive a small amount of broadcast data (such as a URL link) or derive direction information from the illumination system. This paper introduces the operation of this wireless communication technology, called Optical Camera Communications, which transmits optical information from a light source to a camera.
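On the receiving side, one common OCC technique is to read the bright/dark stripes that a modulated light source leaves in a rolling-shutter image. The abstract does not say which scheme is used, so the sketch below (row-mean thresholding of a stripe region, pure numpy) is only an illustrative assumption.

```python
import numpy as np

def decode_stripes(roi, samples_per_bit=4):
    """Recover an on-off bit stream from rolling-shutter stripes in a light-source ROI.

    Each image row is exposed at a slightly different time, so row brightness
    encodes the LED state over time; one bit spans `samples_per_bit` rows.
    """
    rows = roi.mean(axis=1)                     # average brightness per row
    levels = rows > rows.mean()                 # threshold into on/off samples
    n_bits = len(levels) // samples_per_bit
    bits = []
    for i in range(n_bits):
        chunk = levels[i * samples_per_bit:(i + 1) * samples_per_bit]
        bits.append(int(chunk.mean() >= 0.5))   # majority vote within one bit period
    return bits

# Illustrative usage: synthesize stripes for the bit pattern 1,0,1,1 and decode them.
pattern = [1, 0, 1, 1]
roi = np.repeat(np.array(pattern) * 200 + 20, 4)[:, None] * np.ones((1, 32))
print(decode_stripes(roi))
```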

Implementation Of Moving Picture Transfer System Using Bluetooth (Bluetooth를 이용한 동영상 전송 시스템 구현)

  • 조경연; 이승은; 최종찬
    • Proceedings of the IEEK Conference / 2001.06a / pp.25-28 / 2001
  • In this paper we implement a moving picture transfer system using a Bluetooth Development Kit (DK). To reduce the size of the image data, we use M-JPEG compression, and a Bluetooth Synchronous Connection-Oriented (SCO) link is used to transfer voice data. The server receives image data from a camera, compresses it in M-JPEG format, and transmits it to the client over a Bluetooth Asynchronous Connection-Less (ACL) link. The client receives the image data from the ACL link, decodes the compressed images, and displays them on screen. The server and client can transmit and receive voice data simultaneously over the SCO link. The paper also explains the Bluetooth HCI commands and the events the host controller generates to return the results of those commands, and presents the flow of the Bluetooth connection procedure.
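The core of the transfer path is chunking each M-JPEG frame over the ACL link; a minimal sketch of that framing step (splitting one compressed frame into numbered chunks and reassembling them). The packet size and header layout are illustrative assumptions, not the DK's actual ACL or HCI packet format.

```python
import struct

ACL_PAYLOAD = 672  # assumed maximum ACL payload per packet for this sketch

def packetize(frame_id: int, jpeg_bytes: bytes):
    """Split one M-JPEG frame into numbered chunks small enough for an ACL packet."""
    chunks = [jpeg_bytes[i:i + ACL_PAYLOAD] for i in range(0, len(jpeg_bytes), ACL_PAYLOAD)]
    packets = []
    for seq, chunk in enumerate(chunks):
        # header: frame id, chunk sequence number, total chunks (2 bytes each)
        header = struct.pack("!HHH", frame_id, seq, len(chunks))
        packets.append(header + chunk)
    return packets

def reassemble(packets):
    """Client side: order the chunks by sequence number and rebuild the JPEG frame."""
    parts, total = {}, 0
    for pkt in packets:
        frame_id, seq, total = struct.unpack("!HHH", pkt[:6])
        parts[seq] = pkt[6:]
    return b"".join(parts[i] for i in range(total))

fake_jpeg = bytes(range(256)) * 10          # stand-in for one compressed frame
pkts = packetize(1, fake_jpeg)
assert reassemble(pkts) == fake_jpeg
print(len(pkts), "ACL packets")
```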


A Study on Efficient Image Processing and CAD-Vision System Interface (효율적인 화상자료 처리와 시각 시스템과 CAD시스템의 인터페이스에 관한 연구)

  • Park, Jin-Woo; Kim, Ki-Dong
    • Journal of Korean Institute of Industrial Engineers / v.18 no.2 / pp.11-22 / 1992
  • Up to now, most research on production automation has concentrated on local automation, e.g. CAD, CAM, and robotics. To achieve total automation, however, these local modules must be linked into a unified, integrated system, and one such missing link is between CAD and the computer vision system. This work is an attempt to bridge that gap. We propose algorithms that carry out edge detection, thinning, and pruning on image data of manufactured parts acquired with a video camera and transferred to a computer. We also propose a feature extraction and surface determination algorithm that extracts information compatible with IGES CAD data from the image data. In addition, we suggest a methodology for reducing the search effort over CAD databases, based on a graph submatching algorithm on the GEFG (Generalized Edge Face Graph) representation of each part.
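A minimal numpy sketch of the first processing stage the abstract mentions, Sobel edge detection followed by a simple pruning pass that removes isolated edge pixels. The threshold, kernels, and synthetic test image are standard defaults assumed here, not taken from the paper.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def correlate2d(img, kernel):
    """Valid-mode 2-D correlation; sufficient for 3x3 Sobel kernels on small images."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def edge_map(img, threshold=0.25):
    gx, gy = correlate2d(img, SOBEL_X), correlate2d(img, SOBEL_Y)
    mag = np.hypot(gx, gy)
    return mag > threshold * mag.max()

def prune_isolated(edges):
    """Drop edge pixels with no 8-connected edge neighbour."""
    out = edges.copy()
    for y in range(1, edges.shape[0] - 1):
        for x in range(1, edges.shape[1] - 1):
            if edges[y, x] and edges[y - 1:y + 2, x - 1:x + 2].sum() == 1:
                out[y, x] = False
    return out

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0   # synthetic square "part"
print(prune_isolated(edge_map(img)).sum(), "edge pixels kept")
```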


Monitoring Robot System with RF and Network Communication (네트워크 및 RF 기반의 감시용 로봇 시스템)

  • Kim, Dong-Hwan; Jeong, Gi-Beom; Hong, Yeong-Ho
    • Journal of Institute of Control, Robotics and Systems / v.7 no.9 / pp.733-740 / 2001
  • A monitoring robot capable of both network and RF communication is introduced. The robot can assume arbitrary poses thanks to a mechanism combining 4-wheel drive with a 4-link mechanism, and it transmits image and command data via RF wireless communication. The image data from the camera are also transferred over a network connection. The robot monitors what is happening around it and covers a wide range because the camera moves with the four arms. By adjusting its center of mass through the 4-link mechanism, the robot maintains stability while moving on slopes.
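The slope-stability argument rests on shifting the center of mass with the 4-link mechanism; a minimal sketch of that computation for a planar serial 4-link approximation. Link lengths, masses, and joint angles are illustrative assumptions, not the robot's parameters.

```python
import numpy as np

def center_of_mass(joint_angles, lengths, masses):
    """Planar center of mass of a serial linkage, with each link's mass at its midpoint."""
    com = np.zeros(2)
    base = np.zeros(2)
    heading = 0.0
    for q, l, m in zip(joint_angles, lengths, masses):
        heading += q
        tip = base + l * np.array([np.cos(heading), np.sin(heading)])
        com += m * (base + tip) / 2.0        # mid-point of this link, weighted by mass
        base = tip
    return com / sum(masses)

lengths = [0.20, 0.20, 0.15, 0.15]           # metres, assumed
masses = [1.2, 1.0, 0.8, 0.6]                # kilograms, assumed
extended = center_of_mass([0.0, 0.3, 0.3, 0.3], lengths, masses)
tucked = center_of_mass([0.0, 1.2, 1.2, 1.2], lengths, masses)
print("horizontal CoM offset, extended vs tucked:",
      round(extended[0], 3), "vs", round(tucked[0], 3))
```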


Robust Control of Robot Manipulators using Vision Systems

  • Lee, Young-Chan; Jie, Min-Seok; Lee, Kang-Woong
    • Journal of Advanced Navigation Technology / v.7 no.2 / pp.162-170 / 2003
  • In this paper, we propose a robust controller for trajectory control of n-link robot manipulators using feature-based visual feedback. To reduce the tracking error caused by parametric uncertainties, integral action is included in the dynamic control part of the inner control loop. The desired trajectory to be tracked is generated from features extracted by a camera mounted on the end effector. The stability of the robust state-feedback control system is shown by the Lyapunov method. Simulation and experimental results on a 5-link robot manipulator with two degrees of freedom show that the proposed method achieves good tracking performance.
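The abstract names integral action in the inner dynamic control loop; a minimal sketch of one generic way to realise this, a computed-torque style law with an added integral term. The model terms M_hat, C_hat, g_hat and the gain values are assumptions for illustration, not the paper's controller.

```python
import numpy as np

def pid_computed_torque(q, dq, q_des, dq_des, ddq_des, e_int, dt,
                        M_hat, C_hat, g_hat,
                        Kp=np.diag([40.0, 40.0]),
                        Kv=np.diag([12.0, 12.0]),
                        Ki=np.diag([5.0, 5.0])):
    """One control step: tau = M_hat*(ddq_des + Kv*de + Kp*e + Ki*int(e)) + C_hat*dq + g_hat."""
    e = q_des - q
    de = dq_des - dq
    e_int = e_int + e * dt                    # integral of the tracking error
    v = ddq_des + Kv @ de + Kp @ e + Ki @ e_int
    tau = M_hat(q) @ v + C_hat(q, dq) @ dq + g_hat(q)
    return tau, e_int

# Illustrative usage with a crude constant-inertia model of a 2-link arm.
M_hat = lambda q: np.diag([2.0, 1.0])
C_hat = lambda q, dq: np.zeros((2, 2))
g_hat = lambda q: np.zeros(2)
tau, e_int = pid_computed_torque(np.zeros(2), np.zeros(2),
                                 np.array([0.5, -0.2]), np.zeros(2), np.zeros(2),
                                 e_int=np.zeros(2), dt=0.01,
                                 M_hat=M_hat, C_hat=C_hat, g_hat=g_hat)
print(tau)
```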
