• Title/Summary/Keyword: Camera-based Recognition


Automatic detection system for surface defects of home appliances based on machine vision (머신비전 기반의 가전제품 표면결함 자동검출 시스템)

  • Lee, HyunJun;Jeong, HeeJa;Lee, JangGoon;Kim, NamHo
    • Smart Media Journal / v.11 no.9 / pp.47-55 / 2022
  • Quality control is an important factor in the smart-factory manufacturing process. Currently, quality inspection of home-appliance parts produced by the molding process is mostly performed with the operator's naked eye, resulting in a high inspection error rate. To improve quality competitiveness, an automatic defect detection system was designed and implemented. The proposed system acquires an image by photographing an object at a specific location with a high-performance scan camera, and classifies products as defective due to scratches, dents, or foreign substances according to a vision inspection algorithm. In this study, a depth-based branch decision (DBD) algorithm was developed to increase the recognition rate of scratch defects, and the accuracy was improved.
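
The abstract treats the DBD algorithm as a black box, but its surrounding vision-inspection step, comparing a captured surface image against a defect-free reference and flagging deviating pixels, could be sketched roughly as below. The function name `find_defects` and the tolerance `tol` are illustrative, not from the paper:

```python
def find_defects(image, reference, tol=25):
    """Flag pixels deviating from a defect-free reference image.

    `image` and `reference` are equal-sized grayscale grids (lists of rows);
    clusters of flagged (x, y) pixels are candidate scratch/dent defects.
    """
    defects = []
    for y, (row, ref_row) in enumerate(zip(image, reference)):
        for x, (v, r) in enumerate(zip(row, ref_row)):
            if abs(v - r) > tol:
                defects.append((x, y))
    return defects
```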

A Method of Hand Recognition for Virtual Hand Control of Virtual Reality Game Environment (가상 현실 게임 환경에서의 가상 손 제어를 위한 사용자 손 인식 방법)

  • Kim, Boo-Nyon;Kim, Jong-Ho;Kim, Tae-Young
    • Journal of Korea Game Society / v.10 no.2 / pp.49-56 / 2010
  • In this paper, we propose a method for controlling a virtual hand by recognizing the user's hand in a virtual reality game environment. A virtual hand is displayed on the game screen after obtaining the position and direction of the user's hand from camera input images; the hand's movement then serves as an input interface for the virtual hand to select and move objects. As a vision-based hand recognition method, the proposed approach transforms the input image from RGB to HSV color space, then segments the hand area using a double threshold on the H and S values together with connected-component analysis. Next, the center of gravity of the hand area is computed from the zeroth and first moments of the segmented region. Since the center of gravity lies near the center of the hand, the pixels in the segmented image farthest from it are recognized as fingertips. Finally, the axis of the hand is obtained as the vector from the center of gravity to the fingertips. To increase recognition stability and performance, a method using a history buffer and a bounding box is also presented. Experiments on various input images show that the proposed hand recognition method provides a high level of accuracy and relatively fast, stable results.
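
The moment-based steps above (centroid from the zeroth and first image moments, fingertips as the segmented pixels farthest from it) can be sketched in plain Python; the function names are illustrative, not from the paper:

```python
def centroid(mask):
    # zeroth (m00) and first (m10, m01) image moments of a binary mask
    m00 = m10 = m01 = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                m00 += 1
                m10 += x
                m01 += y
    return m10 / m00, m01 / m00

def fingertip(mask, cx, cy):
    # the segmented pixel farthest from the centroid is taken as a fingertip
    best, best_d = None, -1.0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                d = (x - cx) ** 2 + (y - cy) ** 2
                if d > best_d:
                    best, best_d = (x, y), d
    return best
```

On a real segmented hand image one would take the k farthest local maxima rather than a single pixel, but the principle is the same.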

Improvement of Face Recognition Algorithm for Residential Area Surveillance System Based on Graph Convolution Network (그래프 컨벌루션 네트워크 기반 주거지역 감시시스템의 얼굴인식 알고리즘 개선)

  • Tan Heyi;Byung-Won Min
    • Journal of Internet of Things and Convergence / v.10 no.2 / pp.1-15 / 2024
  • The construction of smart communities is a new method and important measure for ensuring the security of residential areas. To address the low face recognition accuracy caused by facial features being distorted by monitoring-camera angles and other external factors, this paper proposes the following optimization strategies in designing a face recognition network. First, a global graph convolution module is designed to encode facial features as graph nodes, and a multi-scale feature-enhancement residual module is designed to extract facial keypoint features in conjunction with it. Second, the obtained facial keypoints are constructed as a directed graph, and graph attention mechanisms are used to enhance the representational power of the graph features. Finally, tensor computations are performed on the graph features of two faces, and the aggregated features are passed through a fully connected layer to determine whether the two faces belong to the same person. In experimental tests, the proposed network achieves an AUC of 85.65% for facial keypoint localization on the public 300W dataset and 88.92% on a self-built dataset. For face recognition accuracy, it achieves 83.41% on the public IBUG dataset and 96.74% on a self-built dataset. The results demonstrate that the network exhibits high detection and recognition accuracy for faces in surveillance videos.
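
As a rough illustration of the graph-attention step described above, each keypoint re-weighting its neighbors' features with softmax-normalized scores, here is a minimal sketch using dot-product scoring. This is a simplification: the abstract does not give the paper's actual scoring function, and the function name is hypothetical:

```python
import math

def attention_aggregate(feats, edges):
    """One graph-attention step over a directed keypoint graph.

    feats: list of node feature vectors; edges: list of (src, dst) pairs.
    Each node aggregates incoming neighbors (plus a self-loop) with
    softmax-normalized dot-product attention weights.
    """
    out = []
    for i, fi in enumerate(feats):
        nbrs = [j for j, k in edges if k == i] + [i]   # include self-loop
        scores = [sum(a * b for a, b in zip(fi, feats[j])) for j in nbrs]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]          # stable softmax
        z = sum(w)
        out.append([sum(w[t] * feats[nbrs[t]][d] for t in range(len(nbrs))) / z
                    for d in range(len(fi))])
    return out
```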

Development of Hand Recognition Interface for Interactive Digital Signage (인터렉티브 디지털 사이니지를 위한 손 인식 인터페이스 개발)

  • Lee, Jung-Wun;Cha, Kyung-Ae;Ryu, Jeong-Tak
    • Journal of Korea Society of Industrial Information Systems / v.22 no.3 / pp.1-11 / 2017
  • There is growing interest in motion recognition for recognizing human motion in camera images, and research is being actively conducted on controlling digital devices with gestures at a distance. A gesture-based interface can be used effectively in the digital signage industry, where advertisements are expected to reach the public in various places. Since digital signage content can be easily controlled through non-contact hand operation, advertisement information of interest can be provided to a large number of people, creating opportunities that lead to sales. We therefore propose a digital signage content control system based on hand movement at a certain distance, which can be used effectively in the development of interactive advertising media.

HMM-based Upper-body Gesture Recognition for Virtual Playing Ground Interface (가상 놀이 공간 인터페이스를 위한 HMM 기반 상반신 제스처 인식)

  • Park, Jae-Wan;Oh, Chi-Min;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.10 no.8 / pp.11-17 / 2010
  • In this paper, we propose HMM-based upper-body gesture recognition. First, to recognize gestures in space, priority must be given to classifying the poses that compose each gesture. To classify the poses used by the interface, we used two IR cameras installed at the front and the side, so that a front pose image and a side pose image are acquired for each pose. The acquired IR pose images are classified using an SVM with a non-linear RBF kernel, which can separate poses that linear classifiers misclassify. The sequence of classified poses is then recognized as a gesture using the HMM's state transition matrix. Recognized gestures can be applied to existing applications by mapping them to OS values.
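
The sequence-recognition step, scoring a sequence of classified pose labels under each gesture's HMM, can be sketched with the standard forward algorithm (the paper's state topology and parameters are not given in the abstract, so the values here are illustrative):

```python
def forward_likelihood(obs, pi, A, B):
    """P(obs | HMM) via the forward algorithm.

    obs: sequence of discrete pose indices, pi: initial state probabilities,
    A: state transition matrix, B: per-state emission probabilities.
    """
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]       # initialization
    for o in obs[1:]:                                      # induction step
        alpha = [sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][o]
                 for s in range(n)]
    return sum(alpha)                                      # termination
```

A gesture would then be recognized as the argmax of this likelihood over the per-gesture models.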

Visual Multi-touch Input Device Using Vision Camera (비젼 카메라를 이용한 멀티 터치 입력 장치)

  • Seo, Hyo-Dong;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.6 / pp.718-723 / 2011
  • In this paper, we propose a visual multi-touch air input device using vision cameras. The implemented device provides a bare-handed interface that supports multi-touch operation. The proposed device is easy to apply to real-time systems because of its low computational load, and is cheaper than existing methods using glove or 3-dimensional data because no additional equipment is required. To do this, we first propose an image processing algorithm based on the HSV color model and labeling of the obtained images. Also, to improve the accuracy of hand gesture recognition, we propose a motion recognition algorithm based on geometric feature points, a skeleton model, and the Kalman filter. Finally, experiments show that the proposed device is applicable to remote controllers for video games, smart TVs, and other computer applications.
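
The Kalman-filter stage, smoothing a tracked feature-point coordinate from frame to frame, might look like this scalar constant-position sketch; the noise parameters `q` and `r` are illustrative, not from the paper:

```python
def kalman_smooth(measurements, q=1e-3, r=0.5):
    """Scalar constant-position Kalman filter over a coordinate track.

    q: process-noise variance, r: measurement-noise variance.
    Returns the filtered estimate for each frame.
    """
    x, p = measurements[0], 1.0        # initial state and its variance
    out = [x]
    for z in measurements[1:]:
        p += q                         # predict: position assumed constant
        k = p / (p + r)                # Kalman gain
        x += k * (z - x)               # update toward the measurement
        p *= (1 - k)
        out.append(x)
    return out
```

In the 2-D tracking case each fingertip's (x, y) would get its own filter, or a joint constant-velocity model.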

A Flexible Model-Based Face Region Detection Method (유연한 모델 기반의 얼굴 영역 검출 방법)

  • Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.5 / pp.251-256 / 2021
  • Unlike general cameras, a high-speed camera capable of capturing a large number of frames per second can enable the advancement of image processing technologies that have so far been limited. This paper proposes a method of removing undesirable noise from a high-speed input color image and then detecting a human face from the noise-free image. First, noise pixels in the high-speed input image are removed by applying a bilateral filter. Then, using RetinaFace, the region representing the person's face is robustly detected from the noise-removed image. The experimental results show that the described algorithm removes noise from the input image and then robustly detects a human face using the generated model. The model-based face-detection method presented in this paper is expected to serve as a basic technology for many practical applications related to image processing and pattern recognition, such as indoor and outdoor building monitoring, door opening and closing management, and mobile biometric authentication.
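
A bilateral filter weights each neighbor by both spatial distance and intensity difference, so sharp edges survive while noise is smoothed away. A minimal 1-D sketch of the idea (in practice one would use the 2-D equivalent, e.g. OpenCV's `cv2.bilateralFilter`; the parameter values here are illustrative):

```python
import math

def bilateral_1d(signal, sigma_s=2.0, sigma_r=10.0, radius=2):
    """Edge-preserving smoothing of a 1-D intensity signal.

    sigma_s: spatial falloff, sigma_r: intensity (range) falloff.
    """
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            # weight decays with both spatial and intensity distance
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((signal[j] - v) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```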

Untact-based elevator operating system design using deep learning of private buildings (프라이빗 건물의 딥러닝을 활용한 언택트 기반 엘리베이터 운영시스템 설계)

  • Lee, Min-hye;Kang, Sun-kyoung;Shin, Seong-yoon;Mun, Hyung-jin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.161-163 / 2021
  • In an apartment or private building, it is difficult for a user with luggage in both hands to press the elevator buttons. In an environment where human contact must be minimized due to a highly infectious virus such as COVID-19, contactless elevator operation is essential. This paper proposes an operating system that controls the elevator through the user's voice and through image processing of the user's face, without any button press. A camera installed in the elevator detects the face of a person entering, matches it against pre-registered information, and moves the elevator to the designated floor. When a person's face is difficult to recognize, the user's voice, captured through a microphone, is used to select the floor, and access information is recorded automatically, enhancing the convenience of elevator use in a contactless environment.
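
The decision flow, match the detected face against registered residents and fall back to the voice-recognized floor otherwise, could be sketched as follows. The registry layout, the Euclidean embedding distance, and the threshold are all hypothetical; the abstract does not specify them:

```python
def select_floor(embedding, registry, threshold=0.6, voice_floor=None):
    """Pick a destination floor from a face embedding.

    registry: {name: {"embedding": [...], "floor": int}} of registered users.
    Returns the matched user's floor, or voice_floor when no face matches.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    name, info = min(registry.items(),
                     key=lambda kv: dist(embedding, kv[1]["embedding"]))
    if dist(embedding, info["embedding"]) <= threshold:
        return info["floor"]       # face matched: go to the registered floor
    return voice_floor             # no match: fall back to the microphone input
```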


Design of Face with Mask Detection System in Thermal Images Using Deep Learning (딥러닝을 이용한 열영상 기반 마스크 검출 시스템 설계)

  • Yong Joong Kim;Byung Sang Choi;Ki Seop Lee;Kyung Kwon Jung
    • Convergence Security Journal / v.22 no.2 / pp.21-26 / 2022
  • Wearing face masks is an effective measure to prevent COVID-19 infection. Infrared thermal-image-based temperature measurement and identity recognition systems have been widely used in many large enterprises and universities in China, so research on face mask detection in thermal infrared imaging is clearly necessary. The recently introduced MTCNN (Multi-task Cascaded Convolutional Networks) presents a conceptually simple, flexible, and general framework for instance segmentation of objects. In this paper, we propose an algorithm that efficiently searches for objects in images while creating a segmentation of the heat-generating part for each instance (a heating element) in a thermal image acquired from an infrared camera. This method, called mask MTCNN, extends MTCNN by adding a branch that predicts an object mask in parallel with the existing bounding-box recognition branch, and it is easy to generalize to other tasks. In this paper, we propose an infrared image detection algorithm based on R-CNN and detect heating elements which cannot be distinguished in RGB images.

Implementation of ROS-Based Intelligent Unmanned Delivery Robot System (ROS 기반 지능형 무인 배송 로봇 시스템의 구현)

  • Seong-Jin Kong;Won-Chang Lee
    • Journal of IKEEE / v.27 no.4 / pp.610-616 / 2023
  • In this paper, we implement an unmanned delivery robot system with a Robot Operating System (ROS)-based mobile manipulator and introduce the technologies employed in the system's implementation. The robot consists of a mobile base capable of autonomous navigation inside the building, including elevator use, and a Selective Compliance Assembly Robot Arm (SCARA)-type manipulator equipped with a vacuum pump. The robot determines the position and orientation for picking up a package through image segmentation and corner detection using the camera on the manipulator. The proposed system provides a user interface, implemented through a web server linked to the application and ROS, for checking delivery status and the robot's real-time location, and it recognizes the shipment and address at the delivery station through You Only Look Once (YOLO) and Optical Character Recognition (OCR). The effectiveness of the system is validated through delivery experiments conducted within a four-story building.
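
The pick-pose step, deriving a grasp position and orientation from the detected package corners, might reduce to something like this sketch (it assumes the corner detector returns the four corners of the package's top face in order; the function name is illustrative):

```python
import math

def pick_pose(corners):
    """Grasp pose from the 4 ordered corners of a package's top face.

    Returns (cx, cy, yaw): the centroid of the corners and the angle of
    the longest edge, i.e. the suction point and gripper orientation.
    """
    cx = sum(x for x, _ in corners) / 4
    cy = sum(y for _, y in corners) / 4
    # edge vectors of the quadrilateral, corners assumed in order
    edges = [(corners[(i + 1) % 4][0] - corners[i][0],
              corners[(i + 1) % 4][1] - corners[i][1]) for i in range(4)]
    ex, ey = max(edges, key=lambda e: e[0] ** 2 + e[1] ** 2)
    return cx, cy, math.atan2(ey, ex)
```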