• Title/Abstract/Keyword: smart camera

572 search results

영상처리를 이용한 레지스터 컨트롤러 인쇄기법 연구 (A Study for Register Controller Printing Method using Image Process)

  • 홍선기;정훈
    • 조명전기설비학회논문지 / Vol. 24 No. 9 / pp.56-61 / 2010
  • A scanning head has conventionally been used to detect the error signal in a register controller. However, the resolution of such a system is not sufficient for E-printing (electronic-paper printing). In this paper, register mark shapes and processing techniques are studied using a smart camera that can detect the register mark and calculate the printing error. With these, a register controller with an error range within 10 µm is developed for an e-printing system and confirmed by experiments.
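
The abstract describes a smart camera that detects the register mark and converts its offset into a printing error below 10 µm. A minimal sketch of that idea, assuming a grayscale frame, dark marks on a bright web, and a hypothetical pixel pitch UM_PER_PIXEL (none of which are taken from the paper):

```python
import cv2

UM_PER_PIXEL = 5.0   # hypothetical camera scale: micrometers per pixel

def register_error_um(frame_gray, target_xy):
    """Locate the register-mark centroid and return its offset from the
    target position in micrometers as (x_error, y_error), or None."""
    # Binarize: register marks are assumed to be dark on a bright web
    _, mask = cv2.threshold(frame_gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    mark = max(contours, key=cv2.contourArea)        # largest blob = the mark
    m = cv2.moments(mark)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    return ((cx - target_xy[0]) * UM_PER_PIXEL,
            (cy - target_xy[1]) * UM_PER_PIXEL)
```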

강교량 재도장 로봇의 모니터링 모듈 시제품 개발 (Development of a Prototype Monitoring Module for Steel Bridge Repainting Robots)

  • 서명국;이호연;박일환;장병하
    • 드라이브 ㆍ 컨트롤 / Vol. 17 No. 4 / pp.15-22 / 2020
  • With the need for efficient maintenance technology to reduce the maintenance cost of steel bridges, repainting robots are being developed to automate work in narrow, hard-to-access bridge spaces. The repainting robot is equipped with a blasting module to remove paint layers and contaminants. This study developed a prototype monitoring module to be mounted on the repainting robot. The monitoring module analyzes the condition of the painted surface through a front-mounted camera to guide the robot's direction of movement, and provides the operator with video from a rear-mounted camera to check the working status after blasting. Various image visibility enhancement technologies were applied to the monitoring module to overcome worksite conditions where poor lighting and dust occur.
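
The image visibility enhancement step is only named in the abstract, not specified. One common building block for dusty, dimly lit footage is contrast-limited adaptive histogram equalization (CLAHE); the sketch below is an illustrative assumption, not the module's actual pipeline:

```python
import cv2

def enhance_visibility(frame_bgr, clip_limit=3.0, tile=(8, 8)):
    """Boost local contrast of a dusty or dimly lit frame with CLAHE,
    applied to the luminance channel only so that colors are preserved."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
```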

실시간 3D 영상 카메라의 영상 동기화 방법에 관한 연구 (A Study of Video Synchronization Method for Live 3D Stereoscopic Camera)

  • 한병완;임성준
    • 한국인터넷방송통신학회논문지 / Vol. 13 No. 6 / pp.263-268 / 2013
  • To create a 3D image, the video streams from a left and a right camera are combined into a stereoscopic image. In this process, synchronizing the video input from the two cameras is critical. This paper presents a method for synchronizing the video input from two cameras. The proposed algorithm was implemented as a software system so that various video formats can be supported easily, and the approach can also be applied to future glasses-free (autostereoscopic) systems that use multiple cameras.
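
The synchronization algorithm itself is not detailed in the abstract. As a hedged illustration of the general idea, a software synchronizer can buffer frames from both cameras and emit a stereo pair only when the capture timestamps differ by less than half a frame period (the 30 fps figure below is an assumption):

```python
from collections import deque

FRAME_PERIOD = 1 / 30          # assumed 30 fps capture rate
TOLERANCE = FRAME_PERIOD / 2   # maximum timestamp skew for a valid pair

left_buf, right_buf = deque(), deque()   # buffers of (timestamp, frame)

def push_and_pair(buf_own, buf_other, ts, frame):
    """Store a new frame and return a synchronized (own, other) pair if the
    other stream has a frame within TOLERANCE seconds, else None."""
    buf_own.append((ts, frame))
    while buf_own and buf_other:
        t0, f0 = buf_own[0]
        t1, f1 = buf_other[0]
        if abs(t0 - t1) <= TOLERANCE:
            buf_own.popleft()
            buf_other.popleft()
            return f0, f1
        # drop whichever head frame is older; it can never be matched
        (buf_own if t0 < t1 else buf_other).popleft()
    return None
```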

A Study on Ceiling Light and Guided Line based Moving Detection Estimation Algorithm using Multi-Camera in Factory

  • Kim, Ki Rhyoung;Lee, Kang Hun;Cho, Su Hyung
    • International Journal of Internet, Broadcasting and Communication / Vol. 10 No. 4 / pp.70-74 / 2018
  • To make the flow of goods more flexible and to reduce labor costs, many factories and industrial zones around the world are gradually adopting automated solutions. One of these is the automated guided vehicle (AGV). Current AGV operating methods include line tracing, and estimating the AGV's current position, matching it against a factory map, and determining its direction of travel. In this paper, we propose a ceiling-light and guided-line based moving direction estimation algorithm that uses multiple cameras on an AGV in a smart factory, enabling stable AGV operation by compensating for the disadvantages of existing methods. The proposed algorithm estimates the AGV's position and direction using general-purpose cameras instead of dedicated sensors; based on this, it can correct movement errors and estimate its own movement path.
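
The abstract does not spell out how the ceiling lights are used. One plausible, simplified reading is to threshold the upward-facing camera image for bright ceiling-light blobs and infer the movement direction from how their centroids shift between frames; the sketch below is that assumption, not the paper's algorithm:

```python
import cv2
import numpy as np

def light_centroids(frame_gray, min_area=200.0):
    """Return centroids (x, y) of bright ceiling-light blobs in an
    upward-facing camera frame."""
    _, mask = cv2.threshold(frame_gray, 220, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    pts = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] >= min_area:   # ignore small reflections
            pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return pts

def movement_direction(prev_pts, curr_pts):
    """Estimate the AGV's movement direction (degrees) from how the mean
    ceiling-light centroid shifts between two consecutive frames."""
    if not prev_pts or not curr_pts:
        return None
    dx = np.mean([p[0] for p in curr_pts]) - np.mean([p[0] for p in prev_pts])
    dy = np.mean([p[1] for p in curr_pts]) - np.mean([p[1] for p in prev_pts])
    # lights appear to move opposite to the vehicle's own motion
    return float(np.degrees(np.arctan2(-dy, -dx)))
```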

A Vehicle Recognition Method based on Radar and Camera Fusion in an Autonomous Driving Environment

  • Park, Mun-Yong;Lee, Suk-Ki;Shin, Dong-Jin
    • International journal of advanced smart convergence / Vol. 10 No. 4 / pp.263-272 / 2021
  • At a time when securing driving safety is paramount in the development and commercialization of autonomous vehicles, AI and big-data-based algorithms are being studied to enhance and optimize the recognition and detection of various static and dynamic vehicles. Many studies exploit the complementary strengths of radar and cameras to recognize detections as the same vehicle, but they either do not use deep-learning image processing or, because of radar performance limitations, can match targets only at short range. Radar can detect vehicles reliably at night or in fog, but classifying an object from RCS values alone is inaccurate, so precise classification through camera images is required. Therefore, we propose a fusion-based vehicle recognition method that builds data sets collected from a radar device and a camera device, calculates the errors between the data sets, and recognizes the detections as the same target.
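
The error calculation and same-target decision are described only at a high level. A standard way to illustrate the idea is nearest-neighbor association with a gating distance, assuming both sensors already report positions in a common vehicle frame (an assumption for this sketch, not the paper's stated method):

```python
import numpy as np

GATE_M = 2.0   # association gate in meters (assumed)

def associate(radar_xy, camera_xy):
    """Greedily pair radar and camera detections that lie within GATE_M of
    each other. Inputs are (N, 2) and (M, 2) position arrays in the same
    vehicle frame; returns a list of (radar_idx, camera_idx, error_m)."""
    radar_xy = np.asarray(radar_xy, dtype=float)
    camera_xy = np.asarray(camera_xy, dtype=float)
    if len(radar_xy) == 0 or len(camera_xy) == 0:
        return []
    pairs, used = [], set()
    for i, r in enumerate(radar_xy):
        dists = np.linalg.norm(camera_xy - r, axis=1)   # position errors
        j = int(np.argmin(dists))
        if dists[j] <= GATE_M and j not in used:
            pairs.append((i, j, float(dists[j])))       # treated as same target
            used.add(j)
    return pairs
```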

회전 카메라를 이용한 블랙박스 시스템 구현 (Implementation of a Dashcam System using a Rotating Camera)

  • 김기완;구성우;김두용
    • 반도체디스플레이기술학회지 / Vol. 19 No. 4 / pp.34-38 / 2020
  • In this paper, we implement a dashcam system capable of 360-degree recording using a Raspberry Pi, shock sensors, distance sensors, and a camera rotated by a servo motor. When the distance sensor detects an object approaching the vehicle, the camera rotates toward it and records video. In the event of an external shock, videos and images are stored on a server so that the cause of the accident can be analyzed and so that the user cannot forge or tamper with the footage. We also implement functions that, when an accident occurs, transmit a message with the location and intensity of the impact and send vehicle information to an insurance authority by linking the system with a smart device. The advantage is that the authority can analyze the transmitted message and provide accident-handling information, improving the user's safety and convenience.
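
The trigger logic, rotating the camera toward whatever the distance sensor reports as approaching, is described only in prose. The sketch below shows one way to do it on a Raspberry Pi with the gpiozero library; the GPIO pin numbers and the 0.5 m threshold are assumptions, not values from the paper:

```python
from time import sleep
from gpiozero import Servo, DistanceSensor   # Raspberry Pi GPIO helpers

servo = Servo(17)                              # camera pan servo (assumed GPIO17)
sensor = DistanceSensor(echo=24, trigger=23)   # ultrasonic sensor (assumed pins)

APPROACH_M = 0.5   # rotate the camera when an object comes within 0.5 m

def watch_and_point():
    """Swing the camera toward the sensor's side whenever something
    approaches, then return it to the forward-facing position."""
    while True:
        if sensor.distance < APPROACH_M:
            servo.max()     # point the camera at the approaching object
            # ...start video capture and upload to the server here...
        else:
            servo.mid()     # forward-facing rest position
        sleep(0.1)
```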

스마트 NUX용 고해상도 광각렌즈모듈 및 영상왜곡보정 설계 (Design of High-resolution Wide-angle Lens Module and Image Distortion Compensation for Smart NUX)

  • 이재곤;강민구;김원규;이경택
    • 한국전자통신학회논문지 / Vol. 7 No. 5 / pp.999-1004 / 2012
  • This paper presents the design of a camera module that corrects distorted images from a wide-angle-lens-based 2M (mega-pixel) WDR (Wide Dynamic Range) CMOS image sensor, and analyzes the optical performance of the lens. The correction results for the image distortion caused by the wide-angle lens (176°) of the designed module were also analyzed, and an application of the camera module to smart NUX (Natural User eXperience) was proposed.
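
The distortion correction itself is not described beyond the 176° lens angle. A common software approach for such lenses is OpenCV's fisheye model, which needs calibrated intrinsics K and distortion coefficients D; the numbers below are placeholders, not the module's calibration:

```python
import cv2
import numpy as np

# Placeholder calibration -- real values come from a checkerboard calibration
K = np.array([[600.0,   0.0, 960.0],
              [  0.0, 600.0, 540.0],
              [  0.0,   0.0,   1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])   # fisheye coefficients k1..k4

def undistort(frame_bgr):
    """Remap a wide-angle frame to an (approximately) rectilinear image."""
    h, w = frame_bgr.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame_bgr, map1, map2, interpolation=cv2.INTER_LINEAR)
```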

라즈베리파이와 MQTT를 이용한 스마트 가드닝 구현 (An Implementation of Smart Gardening using Raspberry pi and MQTT)

  • 황기태;박혜진;김지수;이태윤;정인환
    • 한국인터넷방송통신학회논문지 / Vol. 18 No. 1 / pp.151-157 / 2018
  • This paper presents the implementation of a smart flowerpot that uses a Raspberry Pi to automatically supply water and light according to temperature, soil moisture, and illuminance, and that transmits the plant's condition in real time through a remote camera. The pot is divided into five stacked containers, each fabricated with a 3D printer; the containers connect in layers and are designed to be extended later. A Raspberry Pi, sensors, a pump, and a camera are mounted inside the containers. An Android app was developed so that users can remotely receive camera and sensor data to monitor and control the smart pot, and the MQTT protocol handles data communication and control between the app and the Raspberry Pi.
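
The abstract states that sensor data and control messages between the Android app and the Raspberry Pi travel over MQTT. A minimal sketch of that pattern with the paho-mqtt client (1.x-style API assumed; the broker address and topic names are placeholders, not the paper's):

```python
import json
import time
import paho.mqtt.client as mqtt   # paho-mqtt 1.x style API assumed

BROKER = "broker.example.org"          # placeholder broker address
TOPIC_SENSORS = "garden/pot1/sensors"  # pot -> app telemetry (assumed name)
TOPIC_PUMP = "garden/pot1/pump"        # app -> pot watering command (assumed)

def on_message(client, userdata, msg):
    # The app publishes "on"/"off" to the pump topic to water remotely
    if msg.topic == TOPIC_PUMP:
        print("pump command:", msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC_PUMP)
client.loop_start()

while True:
    # Hypothetical readings; on the real pot these come from the sensors
    reading = {"temp_c": 23.5, "soil_moisture": 0.41, "lux": 820}
    client.publish(TOPIC_SENSORS, json.dumps(reading))
    time.sleep(60)
```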

다양한 환경에 강인한 컬러기반 실시간 손 영역 검출 (Color-Based Real-Time Hand Region Detection with Robust Performance in Various Environments)

  • 홍동균;이동화
    • 대한임베디드공학회논문지 / Vol. 14 No. 6 / pp.295-311 / 2019
  • The smart product market grows year by year, and smart products are used in many areas. Users interact with smart products in various ways, including voice recognition, touch, and finger movements. The most important first step in recognizing hand movement is detecting an accurate hand region. In this paper, we propose a method for detecting an accurate hand region in real time in various environments. Conventional hand-region detection methods include using the depth information of a multi-sensor camera, detecting the hand through machine learning, and detecting the hand region with a color model. Among these, the multi-sensor camera and machine learning approaches require a large amount of computation, so a high-performance PC is essential; such computation is unsuitable for embedded systems, and a high-end PC raises the price of smart products. The algorithm proposed in this paper detects the hand region using a color model, corrects the problems of existing hand-detection algorithms, and detects an accurate hand region across various experimental environments.
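
The detector is color-model based, but the abstract does not give the model. A common baseline for this family of methods is skin segmentation in YCrCb followed by morphological cleanup; the thresholds below are typical textbook values, not the paper's tuned ones:

```python
import cv2
import numpy as np

# Typical YCrCb skin-color bounds; the paper's tuned values will differ
LOWER = np.array([0, 133, 77], dtype=np.uint8)
UPPER = np.array([255, 173, 127], dtype=np.uint8)

def hand_mask(frame_bgr):
    """Return a binary mask of candidate hand (skin-colored) pixels."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, LOWER, UPPER)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove specks
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    return mask
```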

비전센서를 활용한 양날 도로절단기의 절단경로 인식 기술 개발 (Development of Cutting Route Recognition Technology of a Double-Blade Road Cutter Using a Vision Sensor)

  • 서명국;권진욱;정황훈;주정함;김영진
    • 드라이브 ㆍ 컨트롤 / Vol. 20 No. 1 / pp.8-15 / 2023
  • With the recent trend toward intelligent and automated construction work, a double-blade road cutter is being developed that uses a vision system to cut automatically along the cutting line marked on the road. The road cutter recognizes the cutting line through the camera and corrects its driving route in real time, and it detects the load on the cutting blades in real time so that the driving speed can be reduced under overload to protect workers and blades. In this study, the vision system mounted on the double-blade road cutter was developed. A cutting-route recognition technology was developed to stably recognize cutting lines marked on non-uniform road surfaces, and its performance was verified in comparable environments. In addition, a vision sensor protection module was developed to prevent foreign substances (dust, water, etc.) generated during cutting from adhering to the camera.
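
The cutting-route recognition is described only at a high level. A simple illustrative baseline, not the developed system, is edge detection plus a probabilistic Hough transform in a region of interest ahead of the blades, returning the strongest line segment as the marked cutting line:

```python
import cv2
import numpy as np

def detect_cutting_line(frame_bgr):
    """Return the most prominent segment (x1, y1, x2, y2) in the lower half
    of the frame, taken to be the marked cutting line, or None."""
    h = frame_bgr.shape[0]
    roi = frame_bgr[h // 2:, :]                # look just ahead of the blades
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress road-surface texture
    edges = cv2.Canny(gray, 60, 180)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                           minLineLength=120, maxLineGap=20)
    if segs is None:
        return None
    x1, y1, x2, y2 = max(segs[:, 0, :],
                         key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))
    return int(x1), int(y1 + h // 2), int(x2), int(y2 + h // 2)
```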