• Title/Summary/Keyword: OpenCV

Search Results: 402

Conversion Method of 3D Point Cloud to Depth Image and Its Hardware Implementation (3차원 점군데이터의 깊이 영상 변환 방법 및 하드웨어 구현)

  • Jang, Kyounghoon;Jo, Gippeum;Kim, Geun-Jun;Kang, Bongsoon
    • Journal of the Korea Institute of Information and Communication Engineering, v.18 no.10, pp.2443-2450, 2014
  • In motion recognition systems that use depth images, the depth image is first converted into real-world 3D point cloud data so that algorithms can be applied efficiently, and the output is then converted back into a projective-space depth image after the algorithm has run. This coordinate conversion, however, introduces rounding errors and data loss caused by the applied algorithm. In this paper we propose an efficient method, and its hardware implementation, for converting 3D point cloud data to a depth image without rounding error or data loss when the image size changes. The proposed system was developed as a Windows program using OpenCV, and we tested it in real time with a Kinect. The hardware was designed in Verilog-HDL and verified on a Xilinx Zynq-7000 FPGA board.
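
A minimal sketch of the real-world/projective conversion the abstract refers to, using a pinhole camera model in NumPy; the intrinsics (fx, fy, cx, cy) are illustrative Kinect-style values and this is not the authors' hardware design:

```python
import numpy as np

# Assumed pinhole intrinsics (illustrative values, not from the paper)
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def depth_to_point_cloud(depth):
    """Convert a depth image (mm) to an Nx3 real-world point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def point_cloud_to_depth(points, shape):
    """Project an Nx3 point cloud back to a depth image of the given shape."""
    h, w = shape
    depth = np.zeros((h, w), dtype=np.float32)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    valid = z > 0
    u = np.round(x[valid] * fx / z[valid] + cx).astype(int)  # rounding step the paper avoids
    v = np.round(y[valid] * fy / z[valid] + cy).astype(int)
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[keep], u[keep]] = z[valid][keep]
    return depth
```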

An Intelligent Moving Wireless Camera Surveillance System with Motion sensor and Remote Control (무선조종과 모션 센서를 이용한 지능형 이동 무선감시카메라 구현)

  • Lee, Young Woong;Kim, Jong-Nam
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2009.05a, pp.661-664, 2009
  • Intelligent surveillance camera systems are increasingly in demand, but current research focuses on improving individual modules rather than implementing an integrated system. In this paper, we implemented a moving wireless surveillance camera system that combines face detection with a motion sensor. Our implementation uses a camera module from SHARP, a pair of wireless video transmission modules from ECOM, an A4WD1 Combo RC kit as the body of the moving robot, a pair of ZigBee RF wireless transmission modules from ROBOBLOCK, and a motion sensor module (AMN14111) from PANASONIC. We used the OpenCV library for face detection and MFC to implement the software. We verified real-time operation of face detection, PTT control, and motion sensing, so the implemented system should be useful for applications involving remote control, human detection, and motion sensing.
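
The paper's face detection was implemented with OpenCV in a C++/MFC program; below is a hedged Python sketch of the same Haar-cascade detection step (camera index and detector parameters are assumptions):

```python
import cv2

# Haar-cascade face detection, analogous to the OpenCV-based step the paper describes.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # camera index is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```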


Development of Face Recognition System based on Real-time Mini Drone Camera Images (실시간 미니드론 카메라 영상을 기반으로 한 얼굴 인식 시스템 개발)

  • Kim, Sung-Ho
    • Journal of Convergence for Information Technology, v.9 no.12, pp.17-23, 2019
  • In this paper, I propose a system development methodology that receives images in real time from the camera attached to a mini drone while controlling the drone, and recognizes and confirms the face of a specific person. OpenCV, Python-related libraries, and the drone SDK are used to develop the system. To increase the face recognition rate for a specific person in real-time drone images, the system uses a deep learning-based facial recognition algorithm, in particular the triplet principle. In 30 face recognition experiments based on the author's face, the system achieved a recognition rate of about 95% or higher. The results of this paper could be used to quickly find a specific person with a drone at tourist sites and festival venues.
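
The abstract does not name specific libraries beyond OpenCV, Python, and the drone SDK; one common way to apply triplet-trained face embeddings in Python is the face_recognition package, sketched below with hypothetical file names:

```python
import face_recognition  # dlib-based embeddings trained with a triplet-style objective

# Hypothetical file names for illustration
known = face_recognition.load_image_file("target_person.jpg")
known_enc = face_recognition.face_encodings(known)[0]

frame = face_recognition.load_image_file("drone_frame.jpg")  # one frame from the drone stream
for enc in face_recognition.face_encodings(frame):
    dist = face_recognition.face_distance([known_enc], enc)[0]
    if dist < 0.6:  # typical threshold; tune for the target recognition rate
        print("target person recognized, distance =", dist)
```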

Incident response system through emergency recognition using heart rate and real-time image sharing (심박수를 이용한 위급상황 인식 및 실시간 영상공유를 통한 사고대처 시스템)

  • Lee, In-kwon;Park, Jung-hoon;Jin, Sorin;Han, Kyung-dong;Hwang, Hoyoung
    • Journal of IKEEE, v.23 no.2, pp.358-363, 2019
  • In this paper, we implemented a welfare system that provides fast incident response in emergencies for the elderly living alone, the disabled, or infants. The proposed system quickly recognizes emergency situations using heart rate sensors and real-time image sharing. Sensors attached to a wrist band monitor the client's heart rate and related bio-signals and send alarms to guardians in an emergency. At the same time, real-time images are captured using OpenCV and sent to the guardians so that they have accurate information for a fast and appropriate response. In the proposed system, the camera operates only during emergencies, preserving the privacy of the client's everyday life.
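
A minimal sketch of the emergency-triggered OpenCV capture described above; the heart rate thresholds and the file-based stand-in for image sharing are assumptions:

```python
import cv2

HR_LOW, HR_HIGH = 50, 120  # illustrative emergency thresholds, not from the paper

def is_emergency(heart_rate):
    """Flag an emergency when the heart rate leaves the normal range."""
    return heart_rate < HR_LOW or heart_rate > HR_HIGH

def capture_and_share(camera_index=0):
    """Grab one frame with OpenCV; sending it to a guardian is left abstract."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if ok:
        cv2.imwrite("emergency_frame.jpg", frame)  # placeholder for the image-sharing step

heart_rate = 135  # would come from the wrist-band sensor in the real system
if is_emergency(heart_rate):
    capture_and_share()
```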

Associative Interactive play Contents for Infant Imagination (유아 상상력을 위한 연상 인터렉티브 놀이 콘텐츠)

  • Jang, Eun-Jung;Lim, Chan
    • The Journal of the Convergence on Culture Technology, v.5 no.1, pp.371-376, 2019
  • Creative thinking appears even before it is expressed in language; it reveals itself through emotion, intuition, imagery, and bodily sensation before rules of logic or linguistics come into play. This study presents experimental interactive content for children that applies computer-vision-based image processing to Lego play. For infants, the main purpose of the content is to develop hand muscles and the ability to realize their imagination. The content combines the analysis algorithms of the OpenCV library with image processing in 'VVVV', implemented as 'Node' patches, for object recognition: a webcam films what the child has built, the system recognizes it and derives matching results, and the interactive content is completed by the user's participation. The study looks at what children have made with Lego and shows that children can create things themselves and develop creativity. With more data, we further expect to be able to infer the diverse and individual thinking of each person.
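
The study pairs OpenCV analysis with VVVV node patches; the Python-only sketch below shows just the webcam capture and a color-based recognition step, with the HSV ranges and minimum contour area as assumptions:

```python
import cv2

# Minimal sketch: detect red Lego-like regions in one webcam frame via HSV thresholding.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    bricks = [c for c in contours if cv2.contourArea(c) > 500]
    print("red brick-like regions found:", len(bricks))
```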

Color Change Information Collection Using Python in The Event of Color Temperature Change (색온도 변화 시 파이썬을 이용한 색상 변화 정보의 수집)

  • Jeon, Byungil;Kim, Semin;Lee, Gyujeong;Lee, Jeongwon;Lee, Choong Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2019.05a, pp.618-620, 2019
  • Smart farming, which combines agriculture with ICT convergence technology, is at an earlier stage in Korea than other industries, but it is also one of the most active fields of research and development. A smart farm aims to improve the efficiency of each stage of agriculture by collecting, processing, and analyzing various kinds of agricultural information through the convergence of agriculture and ICT. In this study, we investigated an image processing method that uses color to distinguish strawberries ready for harvest, as part of building a smart farm for strawberries, a horticultural crop. Harvesting accounts for a large share of the labor in strawberry cultivation, and we aim to collect the information needed to reduce that labor in a strawberry harvester. As a preliminary study, we plan to implement a setup in which the color temperature changes with the light direction and brightness, and to collect the resulting color changes through OpenCV color detection using Python. In future work, we plan to determine the strawberry color values suitable for harvest by applying compensation values for changes in color temperature.
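
A small sketch of the Python/OpenCV color detection step the study plans to use; the file name and the HSV thresholds for ripe (red) strawberries are assumptions:

```python
import cv2

# Collect color information from a strawberry image; red wraps around hue 0 in HSV.
img = cv2.imread("strawberry.jpg")  # hypothetical input image
if img is None:
    raise SystemExit("image not found")

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))

ripe_ratio = cv2.countNonZero(mask) / mask.size
mean_hsv = cv2.mean(hsv, mask=mask)[:3]  # average color of the detected red pixels
print(f"ripe pixel ratio: {ripe_ratio:.3f}, mean HSV of red region: {mean_hsv}")
```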


Image Denoising Methods based on DAECNN for Medication Prescriptions (DAECNN 기반의 병원처방전 이미지잡음제거)

  • Khongorzul, Dashdondov;Lee, Sang-Mu;Kim, Yong-Ki;Kim, Mi-Hye
    • Journal of the Korea Convergence Society, v.10 no.5, pp.17-26, 2019
  • We aim to build a patient-oriented allergy prevention system using a smartphone and focus on the region-of-interest (ROI) extraction step for optical character recognition (OCR) in general environments. The current ROI extraction method performs well in experimental settings, but its performance in real environments is poor because of noisy backgrounds. In this paper, we therefore compare methods for reducing background noise to solve the ROI extraction problem. Five methods are compared: SMF, DIN, a denoising autoencoder (DAE), a DAE combined with a convolutional neural network (DAECNN), and a median filter (MF) combined with DAECNN (MF+DAECNN). The proposed DAECNN and MF+DAECNN methods each reach 69%, which is higher than the 55% of the conventional DAE. The performance improvement is verified using MSE, PSNR, and SSIM. The system is implemented with OpenCV, C++, and Python, and its performance is tested on real images.
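
A brief sketch of the median filter (MF) preprocessing and the MSE/PSNR check used in the comparison; the file names and kernel size are assumptions, and the DAECNN itself is not shown:

```python
import cv2
import numpy as np

# Hypothetical noisy prescription image and its clean reference
noisy = cv2.imread("prescription_noisy.png", cv2.IMREAD_GRAYSCALE)
clean = cv2.imread("prescription_clean.png", cv2.IMREAD_GRAYSCALE)

denoised = cv2.medianBlur(noisy, 5)  # MF step; a DAECNN would refine this further

mse = np.mean((denoised.astype(np.float32) - clean.astype(np.float32)) ** 2)
psnr = cv2.PSNR(denoised, clean)     # OpenCV's built-in PSNR
print(f"MSE: {mse:.2f}, PSNR: {psnr:.2f} dB")
```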

The Road Speed Sign Board Recognition, Steering Angle and Speed Control Methodology based on Double Vision Sensors and Deep Learning (2개의 비전 센서 및 딥 러닝을 이용한 도로 속도 표지판 인식, 자동차 조향 및 속도제어 방법론)

  • Kim, In-Sung;Seo, Jin-Woo;Ha, Dae-Wan;Ko, Yun-Seok
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.16 no.4, pp.699-708, 2021
  • In this paper, steering and speed control algorithms for autonomous driving based on two vision sensors and road speed sign boards are presented. The speed control algorithm recognizes the speed sign in the image from vision sensor B using TensorFlow, Google's deep learning framework, and then makes the car follow the recognized speed. At the same time, a steering angle control algorithm detects lanes by analyzing the road images transmitted from vision sensor A in real time, calculates the steering angle, controls the front axle through PWM, and keeps the vehicle tracking the lane. To verify the effectiveness of the proposed steering and speed control algorithms, a prototype car based on Python, a Raspberry Pi, and OpenCV was built, and its accuracy was confirmed by testing various steering and speed control scenarios on a purpose-built test track.
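
A rough Python/OpenCV sketch of the lane-detection side of the steering algorithm (vision sensor A); the thresholds, region of interest, and angle estimate are assumptions, and the PWM control and TensorFlow sign recognition are omitted:

```python
import cv2
import numpy as np

def estimate_steering_angle(frame):
    """Rough lane-based steering estimate: edges -> Hough lines -> mean line angle."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    roi = edges[h // 2:, :]  # look only at the lower half (road area)
    lines = cv2.HoughLinesP(roi, 1, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return 0.0           # no lanes found: keep wheels straight
    angles = []
    for x1, y1, x2, y2 in lines[:, 0]:
        if x2 != x1:
            angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
    return float(np.mean(angles)) if angles else 0.0

# Example: one frame from "vision sensor A" (camera index is an assumption)
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    print("steering angle estimate:", estimate_steering_angle(frame))
```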

Coin Classification using CNN (CNN 을 이용한 동전 분류)

  • Lee, Jaehyun;Shin, Donggyu;Park, Leejun;Song, Hyunjoo;Gu, Bongen
    • Journal of Platform Technology, v.9 no.3, pp.63-69, 2021
  • The limited range of materials suitable for minting coins and the need for designs that are easy to carry make the shape, size, and color of coins from different countries similar. This similarity makes it difficult for visitors to identify each country's coins. To solve this problem, we propose a coin classification method using a CNN, which is effective for image processing. In our method, we collect training data by web crawling and use OpenCV for preprocessing. After preprocessing, we extract features from an image with three CNN layers and classify coins with two fully connected layers. To show that the model is effective for coin classification, we evaluate it on eight coin types; the resulting classification accuracy is about 99.5%.
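
A sketch of a three-convolution, two-fully-connected classifier matching the structure described, written with TensorFlow/Keras; the input size, filter counts, and training settings are assumptions:

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 8  # eight coin types

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),             # first fully connected layer
    layers.Dense(NUM_CLASSES, activation="softmax"),  # second: class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```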

Remote Control System using Face and Gesture Recognition based on Deep Learning (딥러닝 기반의 얼굴과 제스처 인식을 활용한 원격 제어)

  • Hwang, Kitae;Lee, Jae-Moon;Jung, Inhwan
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.20 no.6, pp.115-121, 2020
  • With the spread of IoT technology, various IoT applications using facial recognition are emerging. This paper describes the design and implementation of a remote control system based on deep learning face recognition and hand gesture recognition. In general, an application that uses face recognition consists of a part that captures images from a camera in real time, a part that recognizes faces in the images, and a part that uses the recognition result. A Raspberry Pi, a single-board computer that can be mounted anywhere, captures the images in real time; face recognition software based on TensorFlow's FaceNet model was developed for a server computer, and hand gesture recognition software was built with OpenCV. We classify users into three groups, Known, Danger, and Unknown, and designed and implemented an application that opens the automatic door lock only for Known users who pass both the face recognition and the hand gesture check.
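
The FaceNet comparison runs on the server and is not shown here; below is a rough OpenCV-only sketch of a hand-gesture check combined with the Known-user decision, where the skin-color thresholds, defect-depth cutoff, required gesture, and file names are assumptions:

```python
import cv2

def count_raised_fingers(frame):
    """Very rough gesture sketch: skin mask -> largest contour -> convexity defects."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep convexity defects roughly correspond to gaps between raised fingers.
    deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] > 10000)
    return deep + 1

face_is_known = True            # would come from the FaceNet comparison on the server
frame = cv2.imread("hand.jpg")  # hypothetical gesture frame
if frame is not None and face_is_known and count_raised_fingers(frame) == 2:
    print("open door lock")
```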