• Title/Summary/Keyword: OpenCV

Pattern Template Construction of Buried Pipes and Cavities (매립 파이프 및 공동의 패턴 템플레이트 구축)

  • Lee, Hyun-Ho
    • Journal of the Korea Institute for Structural Maintenance and Inspection
    • /
    • v.21 no.4
    • /
    • pp.80-86
    • /
    • 2017
  • The purpose of this study is to construct a pattern database of pipes and cavities buried in the ground in order to prevent ground subsidence. To do this, a pattern template algorithm was developed using OpenCV and applied to GPR detection results for a buried tank. As a result, a proper pattern database could be constructed. Since the results of this study are based only on limited experiments, more realistic data are expected to be obtained if various field data and detection results from large test beds are supplemented in the future.
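
As a hedged illustration only (not code from the paper), a template-matching step of the kind described, where a stored pipe/cavity pattern is matched against a GPR section image with OpenCV, could look like the following; the file names and score threshold are assumptions:

```python
# Minimal sketch: match a stored pipe/cavity template against a GPR B-scan
# image with OpenCV template matching. File names and the similarity
# threshold are illustrative assumptions, not values from the paper.
import cv2

scan = cv2.imread("gpr_bscan.png", cv2.IMREAD_GRAYSCALE)          # GPR section image
template = cv2.imread("pipe_template.png", cv2.IMREAD_GRAYSCALE)  # stored pattern

result = cv2.matchTemplate(scan, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

if max_val > 0.7:  # assumed threshold
    h, w = template.shape
    print("pattern match at", max_loc, "size", (w, h), "score", max_val)
```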

Study on improvement of the pupil motion recognition algorithm for human-computer interface system (사람 기계간 의사소통 시스템을 위한 눈동자 모션 인식 알고리즘 개선에 대한 연구)

  • Heo, Seung Won;Lee, Hee Bin;Lee, Seung Jun;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.377-378
    • /
    • 2018
  • This paper introduces an improvement of the pupil motion recognition algorithm in the previously reported "Eye-Motion Communication System using FPGA and OpenCV". The system is intended for patients with general paralysis or Lou Gehrig's disease who cannot move their bodies naturally; it recognizes pupil motion and selects text on the FPGA in real time. In this paper, we improve the speed of motion recognition by minimizing calls to the eye detection function, based on the assumption that the user is a general paralysis patient.
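
A minimal sketch of the speed-up idea, assuming a webcam and OpenCV's bundled Haar eye cascade rather than the reported FPGA pipeline; the detection interval is an illustrative assumption:

```python
# Run the costly eye-detection step only every N frames and reuse the last
# known eye region in between, one way to reduce detection overhead.
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
cap = cv2.VideoCapture(0)
DETECT_EVERY = 10          # assumed interval
last_eyes, frame_idx = [], 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if frame_idx % DETECT_EVERY == 0:          # full detection only occasionally
        last_eyes = eye_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in last_eyes:             # otherwise reuse the last result
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("eyes", frame)
    frame_idx += 1
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```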

k-means clustering analysis of a movie poster colors using OpenCV, and recommendation system (OpenCV를 활용한 k-means clustering 기반의 포스터 색감 분석 기법 및 추천 시스템)

  • Kim, Tae Hong;OH, Sujin;Kim, Ung-Mo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.10a
    • /
    • pp.569-572
    • /
    • 2018
  • This study proposes a method for analyzing the colors of movie posters based on k-means clustering using OpenCV. It also proposes a system that uses this analysis to compute the similarity between movie posters and recommend movies whose representative colors resemble those of a given movie. The study rests on the following assumptions. First, a poster is the image that best represents its movie, so the poster's colors convey the movie's overall mood. Second, if two movies have similar colors, they have a similar mood. The study proceeds in two stages. First, the data are preprocessed with k-means clustering to select representative colors for each movie. Analyzing the color similarity between movies with these representative colors confirmed that movies of the same genre show high similarity. Next, based on this color similarity analysis, movies with high similarity to a given movie are recommended. Compared with existing movie selection criteria, the movies recommended in this study reflect the user's own taste. If these findings are reflected in the movie recommendation process, they are expected to contribute to improving recommendation accuracy and user satisfaction.
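
A minimal sketch, assuming OpenCV's built-in k-means and an illustrative distance metric (not the authors' exact preprocessing), of how representative poster colors could be extracted and compared:

```python
# Extract K dominant colors from a poster with OpenCV k-means and compare two
# posters by the distance between their dominant-color sets.
import cv2
import numpy as np

def dominant_colors(path, k=5):
    img = cv2.imread(path)
    pixels = img.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                    cv2.KMEANS_RANDOM_CENTERS)
    return centers  # k representative BGR colors

def color_distance(a, b):
    # smaller value = more similar posters; a simple illustrative metric
    return np.mean([np.min(np.linalg.norm(b - c, axis=1)) for c in a])

dist = color_distance(dominant_colors("poster_a.jpg"), dominant_colors("poster_b.jpg"))
print("color distance:", dist)
```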

A Study on Tools for Creater's Subtitle using fastText and OpenCV (fastText와 OpenCV를 이용하여 크리에이터 맞춤 영상자막 수정 방법 연구)

  • Choi, Wonchil;Jo, Sehyeon;Yoon, DongWoo;Woo, Hojin;Kim, Youngjong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.05a
    • /
    • pp.566-567
    • /
    • 2019
  • People who develop video content are called 'creators'. To entertain viewers and attract attention, they coin various buzzwords and neologisms beyond standard language and use them not only in the video itself but also in its subtitles. Because creators tend to write their scripts freely, spelling errors and typos occur when producing videos with such subtitles. However, video editing tools have no spell-check function, so it is difficult to check spelling in advance. To solve this problem, we develop a program that makes it easy to run a spelling check during the final review once the video is complete. The program recognizes the subtitles in the video as text using OpenCV and, through a fastText model, suggests to the creator whether the recognized text is spelled correctly.
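
A hedged sketch of such a pipeline, assuming Tesseract for the OCR step and the fasttext Python package for word suggestions; the model file, crop region, and language setting are hypothetical:

```python
# OCR the subtitle region of a frame with OpenCV + Tesseract, then use a
# fastText model's nearest neighbors to suggest replacements for unknown words.
import cv2
import pytesseract
import fasttext

model = fasttext.load_model("subtitle_model.bin")   # hypothetical pretrained model
frame = cv2.imread("frame.png")
subtitle_region = frame[-120:, :]                   # assumed: subtitles at the bottom
gray = cv2.cvtColor(subtitle_region, cv2.COLOR_BGR2GRAY)
text = pytesseract.image_to_string(gray, lang="kor")  # assumes Korean traineddata

known_words = set(model.get_words())
for word in text.split():
    if word and word not in known_words:
        suggestions = model.get_nearest_neighbors(word)[:3]
        print(word, "->", [w for _, w in suggestions])
```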

A study on road damage detection for safe driving of autonomous vehicles based on OpenCV and CNN

  • Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.2
    • /
    • pp.47-54
    • /
    • 2022
  • For safe driving of autonomous vehicles, road damage detection is very important to lower the potential risk. To ensure safety while an autonomous vehicle is driving on the road, technology that can cope with various obstacles is required. Among them, the priority is technology that recognizes static obstacles such as poor road conditions, as well as obstacles that may be encountered while driving such as crosswalks, manholes, hollows, and speed bumps. In this paper, we propose a method to extract image similarity and find damaged road images using OpenCV image processing and a CNN algorithm. To implement this, we trained a CNN model using 280 training images and 70 test images out of 350 images. After training, processing and recognition speed were measured on 100 images: the average processing speed was 45.9 ms, the average recognition speed was 66.78 ms, and the average object accuracy was 92%. In the future, the driving safety of autonomous vehicles is expected to improve through technology that detects road obstacles encountered while driving.
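
A minimal sketch of a small binary CNN of the kind described, using Keras and OpenCV for image loading; the network shape and input size are assumptions, not the paper's architecture:

```python
# A small binary classifier for "damaged vs. normal" road images, with OpenCV
# used to load and resize the input frames.
import cv2
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # damaged / not damaged
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

def load_image(path):
    img = cv2.resize(cv2.imread(path), (128, 128))
    return img.astype(np.float32) / 255.0

# model.fit(...) would then be run on a training set such as the 280 images above
```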

Manufacture artificial intelligence education kit using Jetson Nano and 3D printer (Jetson Nano와 3D프린터를 이용한 인공지능 교육용 키트 제작)

  • SeongJu Park;NamHo Kim
    • Smart Media Journal
    • /
    • v.11 no.11
    • /
    • pp.40-48
    • /
    • 2022
  • In this paper, an educational kit that can be used in AI education was developed to address the difficulties of AI education. The kit shifts learning from theory-centered study to practice-oriented experience through object and person detection in computer vision using CNN and OpenCV, user image recognition (Your Own Image Recognition) that learns and recognizes specific objects, user object classification and segmentation (Classification Datasets / Segmentation), IoT hardware control that targets the learned object, and control of the Jetson Nano GPIO on the AI board. On this basis, teaching materials that support effective AI learning were developed and utilized.
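
A hedged example of one exercise such a kit could include, assuming OpenCV's built-in HOG person detector and the Jetson.GPIO package; the pin number is illustrative:

```python
# Detect a person with OpenCV's HOG detector and drive a GPIO pin on the
# Jetson Nano when someone is seen. Pin 12 is an assumed output pin.
import cv2
import Jetson.GPIO as GPIO

OUT_PIN = 12
GPIO.setmode(GPIO.BOARD)
GPIO.setup(OUT_PIN, GPIO.OUT)

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
cap = cv2.VideoCapture(0)

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        GPIO.output(OUT_PIN, GPIO.HIGH if len(boxes) else GPIO.LOW)
finally:
    cap.release()
    GPIO.cleanup()
```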

Development of vehicle traffic statistics system using deep learning (딥러닝 영상인식을 이용한 출입 차량 통계 시스템 개발)

  • Mun, Dong-Ho;Hwang, Seung-Hyuk;Jeon, Han-Gyeol;Hwang, Su-Min;Yun, Tae-Jin
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2020.07a
    • /
    • pp.701-702
    • /
    • 2020
  • In this paper, a statistics system for vehicles entering and leaving facilities such as parking lots was developed using a webcam with OpenCV and the YOLOv3 real-time object recognition algorithm on a Jetson Nano and a desktop. At a time of growing interest in edge computing, vehicle recognition is performed with YOLOv3-tiny and OpenCV on the Jetson Nano developed and distributed by Nvidia, and the open-source Tesseract-OCR developed by Google is used to recognize license plates, so that vehicle information can be checked at entry, exit, or parking. By recognizing the distinguishing features of electric-vehicle license plates with the deep learning algorithm, the system can also identify electric vehicles and monitor ordinary vehicles illegally parked in electric-vehicle spaces. The entry/exit time, license plate number, and whether the vehicle is electric can be checked in the database of vehicles that have entered and left.
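
A minimal sketch of the detection-plus-OCR idea, assuming OpenCV's DNN module with YOLOv3-tiny weights and pytesseract; the file names, thresholds, and COCO 'car' class id are assumptions:

```python
# Count vehicles with a YOLOv3-tiny model through OpenCV's DNN module and
# attempt to read plate text from each vehicle crop with Tesseract.
import cv2
import pytesseract

net = cv2.dnn_DetectionModel("yolov3-tiny.weights", "yolov3-tiny.cfg")
net.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("gate_camera.jpg")
class_ids, scores, boxes = net.detect(frame, confThreshold=0.4, nmsThreshold=0.4)

vehicles = [b for c, b in zip(class_ids, boxes) if int(c) == 2]  # COCO class 2 = car
print("vehicles in frame:", len(vehicles))

for (x, y, w, h) in vehicles:
    crop = frame[y:y + h, x:x + w]           # a plate detector would crop tighter
    text = pytesseract.image_to_string(crop).strip()
    if text:
        print("possible plate text:", text)
```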

Study on Extracting Filming Location Information in Movies Using OCR for Developing Customized Travel Content (맞춤형 여행 콘텐츠 개발을 위한 OCR 기법을 활용한 영화 속 촬영지 정보 추출 방안 제시)

  • Park, Eunbi;Shin, Yubin;Kang, Juyoung
    • The Journal of Bigdata
    • /
    • v.5 no.1
    • /
    • pp.29-39
    • /
    • 2020
  • Purpose The atmosphere of respect for individual tastes that has spread throughout society has changed consumption trends. As a result, the travel industry also sees customized travel as a new trend that reflects consumers' personal tastes. In particular, there is growing interest in 'film-induced tourism', one area of the travel industry. We aim to satisfy the motivation for travel that individuals develop while watching movies by providing customized travel proposals, which we expect to be a catalyst for the continued development of the film-induced tourism industry. Design/methodology/approach In this study, we implemented an OCR-based methodology for extracting and suggesting the filming location information that viewers want to visit. First, we extract a scene from a movie selected by the user using 'OpenCV', a real-time image processing library. Next, we detect the location of characters in the scene image using the 'EAST model', a deep learning-based text area detection model. The detected images are preprocessed with OpenCV built-in functions to increase recognition accuracy. Finally, after the characters in the images are converted into machine-readable text using 'Tesseract', an optical character recognition engine, the 'Google Map API' returns the actual location information. Significance This research is significant in that it provides personalized tourism content using fourth industrial revolution technology, in addition to existing film tourism. It could be used to develop film-induced tourism packages with travel agencies in the future. It also implies the possibility of being used for inbound tourism from abroad as well as outbound travel.
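
A hedged sketch of the pipeline without the EAST text-detection stage, assuming pytesseract and the googlemaps client library; the frame index, crop region, and API key are hypothetical:

```python
# Grab a frame with OpenCV, OCR the caption area with Tesseract, and geocode
# the recognized place name with the Google Maps geocoding client.
import cv2
import pytesseract
import googlemaps

cap = cv2.VideoCapture("movie.mp4")
cap.set(cv2.CAP_PROP_POS_FRAMES, 4200)     # assumed frame selected by the user
ok, frame = cap.read()
cap.release()

caption_area = frame[-150:, :]             # assumed: location captions near the bottom
gray = cv2.cvtColor(caption_area, cv2.COLOR_BGR2GRAY)
place_text = pytesseract.image_to_string(gray).strip()

gmaps = googlemaps.Client(key="YOUR_API_KEY")          # hypothetical key
results = gmaps.geocode(place_text) if place_text else []
if results:
    print(place_text, "->", results[0]["geometry"]["location"])
```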

Design and Implementation of OpenCV-based Inventory Management System to build Small and Medium Enterprise Smart Factory (중소기업 스마트공장 구축을 위한 OpenCV 기반 재고관리 시스템의 설계 및 구현)

  • Jang, Su-Hwan;Jeong, Jopil
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.1
    • /
    • pp.161-170
    • /
    • 2019
  • Small and medium enterprise factories that produce a wide variety of products in large volumes waste manpower and expense on inventory management. In addition, there is no way to check inventory status in real time, and they suffer economic damage from excess inventory and stock shortages. There are many ways to build a real-time data collection environment, but most are difficult for small and medium-sized companies to afford. As a result, smart factories for small and medium enterprises face a difficult reality with few appropriate countermeasures. In this paper, we implemented and evaluated an extension of the existing inventory management method through character extraction from labels carrying barcodes and QR codes, which are widely adopted in current product management. Technically, the system preprocesses stock labels and barcodes with OpenCV for automatic recognition and classification, uses the OCR (Optical Character Recognition) function of the Google Vision API for character recognition, and recognizes barcodes through ZBar. We propose a method to manage inventory by real-time image recognition through a Raspberry Pi without using expensive equipment.
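
A minimal sketch of the label-reading step, assuming OpenCV Otsu binarization and the pyzbar binding to ZBar; the file name is illustrative:

```python
# Binarize a stock label with OpenCV and decode its barcode/QR code with ZBar.
import cv2
from pyzbar.pyzbar import decode

label = cv2.imread("stock_label.jpg")
gray = cv2.cvtColor(label, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

for code in decode(binary):
    # e.g. CODE128 or QRCODE plus its payload, to be matched against inventory records
    print(code.type, code.data.decode("utf-8"))
```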

Development of Real-time Video Search System Using the Intelligent Object Recognition Technology (지능형 객체 인식 기술을 이용한 실시간 동영상 검색시스템)

  • Chang, Jae-Young;Kang, Chan-Hyeok;Yoon, Jae-Min;Cho, Jae-Won;Jung, Ji-Sung;Chun, Jonghoon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.6
    • /
    • pp.85-91
    • /
    • 2020
  • Recently, video recording equipment such as CCTV has seen increasing use for crime prevention and general safety. Since this equipment operates throughout the day, the need for security personnel is lessened, and the costs of managing such manpower naturally decrease as well. However, the technology predominantly in use today cannot search the recorded video for a specific object, such as a person, on its own; the search has to be done manually, so current security video equipment is insufficient in environments where real-time information retrieval is required. In this paper, we propose a technology that uses the latest deep learning technology and the OpenCV library to quickly search for a specific person in a video; the search is based on clothing information entered by the user, and the result is transmitted in real time. We implemented our system to automatically recognize specific human objects in real time using the YOLO library, while deep learning is used to classify clothing into tops and bottoms. Colors are detected through the OpenCV library, and all of this information is combined to identify the requested object. The system presented in this paper not only accurately and quickly recognizes a person wearing specific clothing, but also has the extensibility to be used for other types of object recognition in video surveillance systems for various purposes.
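
A hedged sketch of the clothing-color check, assuming a person bounding box already produced by the detector; the box coordinates and HSV range are illustrative:

```python
# Given a person bounding box from a detector such as YOLO, split it into
# top/bottom halves and check whether each half matches a requested clothing
# color with an HSV mask.
import cv2
import numpy as np

def color_ratio(region, lower_hsv, upper_hsv):
    hsv = cv2.cvtColor(region, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    return cv2.countNonZero(mask) / mask.size

frame = cv2.imread("cctv_frame.jpg")
x, y, w, h = 100, 50, 80, 200        # hypothetical person box from the detector
person = frame[y:y + h, x:x + w]
top, bottom = person[: h // 2], person[h // 2 :]

# assumed HSV range for a red top; in practice the range would come from the
# user's clothing query
if color_ratio(top, (0, 120, 70), (10, 255, 255)) > 0.3:
    print("candidate match: person wearing a red top")
```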