• Title/Summary/Keyword: Camera-based Recognition (카메라 기반 인식)


Effective Image Retrieval for the M-Learning System (모바일 교육 시스템을 위한 효율적인 영상 검색 구축)

  • Han Eun-Jung;Park An-Jin;Jung Kee-Chul
    • Journal of Korea Multimedia Society / v.9 no.5 / pp.658-670 / 2006
  • As educational media become more digital and individualized, the learning paradigm is shifting dramatically toward e-learning. Existing on-line courseware gives learners more chances to learn when they are at home with their own PCs, but it is of little use when they are away from their digital media, and converting original off-line contents to on-line contents is very labor-intensive. This paper proposes education mobile contents (EMC) that supply learners with dynamic interactions using various multimedia information by recognizing real images of off-line contents with mobile devices. Content-based image retrieval based on object shapes is used to recognize the real image; shapes are represented by a differential chain code with estimated new starting points to obtain a rotation-invariant representation, which fits the computational resources of mobile devices with low-resolution cameras. Moreover, a dynamic time warping method is used to recognize the object shape, which compensates for scale variations of an object. The EMC can provide learners with quick and accurate on-line contents for off-line ones using mobile devices without limitations of space.
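The abstract pairs a rotation-invariant differential chain code with dynamic time warping (DTW) to compare object shapes. A minimal stdlib sketch of those two ideas (function names and the 8-direction code are illustrative, not taken from the paper):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two code sequences.

    DTW aligns sequences of different lengths, which is what
    compensates for scale variation between two boundary codes.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = cost of the best alignment of a[:i] and b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

def differential_chain_code(chain):
    """Successive direction differences mod 8, so the code is
    invariant to the absolute direction of the boundary walk."""
    return [(chain[(i + 1) % len(chain)] - chain[i]) % 8
            for i in range(len(chain))]
```

Two boundaries of the same shape sampled at different densities yield a small DTW distance, which is the property the retrieval step relies on.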


Image Denoising Via Structure-Aware Deep Convolutional Neural Networks (구조 인식 심층 합성곱 신경망 기반의 영상 잡음 제거)

  • Park, Gi-Tae;Son, Chang-Hwan
    • The Journal of Korean Institute of Information Technology / v.16 no.11 / pp.85-95 / 2018
  • With the popularity of smartphones, most people use mobile cameras to capture photographs. However, due to insufficient light in low-lighting conditions, unwanted noise can be generated during image acquisition. To remove this noise, methods using deep convolutional neural networks have been introduced. However, these methods still lack the ability to describe textures and edges, even though they have made significant progress in visual quality. Therefore, in this paper, HOG (Histogram of Oriented Gradients) images, which contain information about edge orientations, are used. More specifically, a method of training deep convolutional neural networks is proposed that stacks the noisy image and the HOG image into an input tensor. Experimental results confirm that the proposed method not only obtains excellent results in visual quality evaluations compared to conventional methods, but also visibly improves textures and edges.
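The key input-construction idea is stacking the noisy image with an edge-orientation image so the network receives structure cues explicitly. A rough stdlib sketch of that idea, using a per-pixel orientation index as a crude stand-in for a real HOG image (actual implementations would use cell-wise histograms, e.g. `skimage.feature.hog`):

```python
import math

def orientation_map(img, bins=8):
    """Per-pixel gradient-orientation index: a crude substitute for
    a HOG image that still carries the edge-direction information
    the denoising network is meant to preserve."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            ang = math.atan2(gy, gx) % math.pi   # unsigned orientation
            out[y][x] = min(int(ang / math.pi * bins), bins - 1)
    return out

def stack_input(noisy, hog):
    """Stack the noisy image and the orientation map into an
    H x W x 2 tensor, mirroring the paper's stacked input."""
    return [[[noisy[y][x], hog[y][x]] for x in range(len(noisy[0]))]
            for y in range(len(noisy))]
```

A vertical edge (horizontal gradient) falls in the first orientation bin, a horizontal edge in the middle bin; the network sees both the intensity and that orientation channel.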

CNN3D-Based Bus Passenger Prediction Model Using Skeleton Keypoints (Skeleton Keypoints를 활용한 CNN3D 기반의 버스 승객 승하차 예측모델)

  • Jang, Jin;Kim, Soo Hyung
    • Smart Media Journal / v.11 no.3 / pp.90-101 / 2022
  • Buses are a popular means of transportation, so thorough preparation is needed for passenger safety management. However, current safety systems are insufficient: in 2018, for example, a fatal accident occurred when a bus departed without recognizing an elderly passenger approaching to board. Sensors on the rear-door steps prevent pinching accidents, but such systems do not prevent accidents that occur while passengers are getting on and off, like the one above. If the boarding and alighting intentions of bus passengers could be predicted, it would help in developing safety systems to prevent such accidents; however, studies predicting these intentions are scarce. Therefore, in this paper, we propose a 1×1 CNN3D-based model for predicting boarding and alighting intention using passengers' skeleton keypoints, extracted via UDP-Pose from images captured by a camera attached to the bus. The proposed model shows approximately 1~2% higher accuracy than RNN and LSTM models in predicting passengers' boarding and alighting intentions.
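Before any 3D convolution, the per-frame keypoints have to be arranged into a clip tensor. A minimal stdlib sketch of that preprocessing step, assuming (x, y) keypoints per joint and a root joint chosen here only for illustration (the paper uses UDP-Pose output; its exact normalization is not specified in the abstract):

```python
def build_clip_tensor(frames, root=0):
    """Arrange per-frame skeleton keypoints into a (T, J, 2) tensor
    and normalize each frame relative to a root joint, so the model
    sees pose rather than absolute image position.

    frames: list of T frames, each a list of J (x, y) keypoints.
    """
    clip = []
    for kps in frames:
        rx, ry = kps[root]
        clip.append([[x - rx, y - ry] for (x, y) in kps])
    return clip
```

A passenger at two different image positions but with the same pose produces identical frames after normalization, which is exactly what an intention classifier should see.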

Unauthorized person tracking system in video using CNN-LSTM based location positioning

  • Park, Chan;Kim, Hyungju;Moon, Nammee
    • Journal of the Korea Society of Computer and Information / v.26 no.12 / pp.77-84 / 2021
  • In this paper, we propose a system that uses image data and beacon data to classify authorized and unauthorized persons entering a group facility. Image data collected through an IP camera are processed with YOLOv4 to extract person objects, and beacon signal data (UUID, RSSI) are collected through an application to compose a fingerprinting-based radio map. User location data are extracted after CNN-LSTM-based learning on the beacon data, which improves location accuracy by compensating for signal instability. The proposed system showed an accuracy of 93.47%. In the future, it is expected to be fused with access-authentication processes such as the QR codes adopted during COVID-19, and to track people who have not passed through the authentication process.
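Fingerprinting-based localization matches an observed RSSI vector against a pre-surveyed radio map. A stdlib sketch of that baseline idea with a nearest-neighbor matcher (locations, beacon UUIDs, and RSSI values are made up for illustration; the paper replaces this simple matcher with a trained CNN-LSTM to smooth signal instability):

```python
import math

# Fingerprint radio map: reference location -> mean RSSI per beacon UUID.
RADIO_MAP = {
    "lobby":    {"beacon-A": -55, "beacon-B": -80},
    "corridor": {"beacon-A": -75, "beacon-B": -60},
}

def estimate_location(observed, radio_map=RADIO_MAP):
    """Return the reference location whose fingerprint is nearest
    to the observed RSSI vector (Euclidean distance in dBm space).
    Missing beacons default to a weak -100 dBm reading."""
    def dist(ref):
        return math.sqrt(sum((observed.get(u, -100) - r) ** 2
                             for u, r in ref.items()))
    return min(radio_map, key=lambda loc: dist(radio_map[loc]))
```

An observation close to the lobby fingerprint resolves to "lobby" even with a few dBm of noise, which is the robustness the learned model then extends.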

IoT industrial site safety management system incorporating AI (AI를 접목한 IoT 기반 산업현장 안전관리 시스템)

  • Lee, Seul;Jo, So-Young;Yeo, Seung-Yeon;Lee, Hee-Soo;Kim, Sung-Wook
    • Annual Conference of KIPS / 2022.05a / pp.118-121 / 2022
  • A significant share of industrial-accident fatalities in Korea occurs in the construction industry. Construction sites contain heavy equipment such as excavators and cranes, and work at height is common, so exposure to hazards is high. Beyond physical accidents, the fine dust generated during work contains various harmful agents that cause occupational diseases such as respiratory illness among construction workers. As the importance of industrial-site safety management has grown, the government has enacted legislation to protect workers from industrial accidents. Construction sites therefore need technology that can recognize hazards in advance and respond immediately. This study proposes a 24-hour safety-management system using automation based on artificial intelligence (AI) and the Internet of Things (IoT). The proposed IoT-based integrated safety-management system monitors industrial sites through AI-equipped CCTV and lets workers and managers check readings from multiple IoT sensors in real time, preventing safety accidents on site. Specifically, an application allows monitoring of fine-dust concentration, gas concentration, temperature, humidity, and helmet wearing. When the concentration of a harmful substance exceeds a set level, or a worker without a helmet is detected, a warning is sent to workers and managers. Harmful-substance concentrations are measured with IoT sensors, and helmet wearing is recognized by applying a deep learning model to camera sensors. The proposed integrated safety-management system is expected to contribute to reducing industrial accidents and improving worker safety at construction and other industrial sites.
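The alert rule described above (sensor reading over a set level, or a worker detected without a helmet) can be sketched in a few lines. The threshold values and sensor names below are invented for illustration; real limits would follow occupational-safety regulations:

```python
# Illustrative thresholds, not regulatory values.
THRESHOLDS = {"pm10": 150.0, "gas_ppm": 50.0, "temp_c": 40.0}

def check_alerts(readings, helmet_detected):
    """Return warning messages when an IoT sensor reading exceeds
    its threshold, or when the camera's helmet detector flags a
    worker without a helmet, mirroring the system's alert rule."""
    alerts = [f"{k} over limit: {v} > {THRESHOLDS[k]}"
              for k, v in readings.items()
              if k in THRESHOLDS and v > THRESHOLDS[k]]
    if not helmet_detected:
        alerts.append("worker without helmet detected")
    return alerts
```

In the proposed system these messages would be pushed to the workers' and managers' application rather than returned to the caller.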

Untact-based elevator operating system design using deep learning of private buildings (프라이빗 건물의 딥러닝을 활용한 언택트 기반 엘리베이터 운영시스템 설계)

  • Lee, Min-hye;Kang, Sun-kyoung;Shin, Seong-yoon;Mun, Hyung-jin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.161-163 / 2021
  • In an apartment or private building, it is difficult for a user to press the elevator buttons when, for example, both hands are full of luggage. In an environment where human contact must be minimized because of a highly infectious virus such as COVID-19, contactless (untact) elevator operation is unavoidable. This paper proposes an operating system that runs the elevator using the user's voice and face image processing, without pressing the elevator buttons. A camera installed in the elevator detects the face of a person entering, matches it against information registered in advance, and moves the elevator to the designated floor without any button press. When a person's face is difficult to recognize, the floor is controlled using the user's voice through a microphone, and access information is recorded automatically, enhancing the convenience of elevator use in a contactless environment.


Implementation of ROS-Based Intelligent Unmanned Delivery Robot System (ROS 기반 지능형 무인 배송 로봇 시스템의 구현)

  • Seong-Jin Kong;Won-Chang Lee
    • Journal of IKEEE / v.27 no.4 / pp.610-616 / 2023
  • In this paper, we implement an unmanned delivery robot system with a Robot Operating System (ROS)-based mobile manipulator and introduce the technologies employed in the implementation. The robot consists of a mobile platform capable of autonomous navigation inside the building, including elevator use, and a Selective Compliance Assembly Robot Arm (SCARA)-type manipulator equipped with a vacuum pump. The robot determines the position and orientation for picking up a package through image segmentation and corner detection using the camera on the manipulator. The system provides a user interface for checking delivery status and the robot's real-time location through a web server linked to the application and ROS, and recognizes the shipment and address at the delivery station through You Only Look Once (YOLO) and Optical Character Recognition (OCR). The effectiveness of the system is validated through delivery experiments conducted in a four-story building.
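Estimating a pick-up position and orientation from a segmented package is a standard image-moments computation. A stdlib sketch under that assumption (the abstract names segmentation and corner detection; the moments-based orientation below is a common stand-in, not the paper's exact procedure):

```python
import math

def pick_pose(mask):
    """Estimate a pick position and orientation from a binary
    segmentation mask: the centroid gives the position, and the
    principal axis of the second central moments gives the
    in-plane orientation for the SCARA end effector."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    # Second central moments of the object region.
    mu20 = sum((x - cx) ** 2 for x, _ in pts) / n
    mu02 = sum((y - cy) ** 2 for _, y in pts) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pts) / n
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return cx, cy, theta
```

A horizontal bar yields orientation 0 and a vertical bar yields π/2, which is the angle the vacuum-pump gripper would be rotated to.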

A Block based 3D Map for Recognizing Three Dimensional Spaces (3차원 공간의 인식을 위한 블록기반 3D맵)

  • Yi, Jong-Su;Kim, Jun-Seong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.4 / pp.89-96 / 2012
  • A 3D map provides useful information for intelligent services. Traditional 3D maps, however, consist of raw image data and are not suitable for real-time applications. In this paper, we propose the Block-based 3D map, which represents three-dimensional spaces as a collection of square blocks. The Block-based 3D map has two major variables: an object ratio and a block size. The object ratio is defined as the proportion of object pixels to space pixels in a block and determines the type of the block; the block size is defined as the number of pixels on the side of a block and determines the size of the block. Experiments show the advantages of the Block-based 3D map in reducing noise and in reducing the amount of data to process. With a block size of 40×40 and an object ratio of 30% to 50%, we obtain the best-matched Block-based 3D map for a 320×240 depth map. The Block-based 3D map provides useful information that can produce a variety of new high-value services in intelligent environments.
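The two variables in the abstract translate directly into code: tile the depth map into block × block squares, compute each square's object ratio, and classify the block by a ratio threshold. A stdlib sketch (the object predicate and the 0.4 default threshold are illustrative choices within the 30-50% range the abstract reports):

```python
def block_map(depth, block=40, threshold=0.4, is_object=lambda d: d > 0):
    """Divide a depth map into block x block squares and mark each
    square 1 (object) or 0 (space) by its object ratio, i.e. the
    fraction of object pixels in the square."""
    h, w = len(depth), len(depth[0])
    rows = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            cells = [depth[y][x]
                     for y in range(by, min(by + block, h))
                     for x in range(bx, min(bx + block, w))]
            ratio = sum(1 for d in cells if is_object(d)) / len(cells)
            row.append(1 if ratio >= threshold else 0)
        rows.append(row)
    return rows
```

With a 40×40 block on a 320×240 depth map this yields an 8×6 grid, which is why the representation both suppresses pixel-level noise and shrinks the data to process.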

Fire Detection using Deep Convolutional Neural Networks for Assisting People with Visual Impairments in an Emergency Situation (시각 장애인을 위한 영상 기반 심층 합성곱 신경망을 이용한 화재 감지기)

  • Kong, Borasy;Won, Insu;Kwon, Jangwoo
    • 재활복지 / v.21 no.3 / pp.129-146 / 2017
  • In an emergency such as a building fire, visually impaired and blind people are exposed to greater danger than sighted people, because they cannot become aware of it quickly. Conventional fire detection methods such as smoke detectors are slow and unreliable because they usually rely on chemical sensors to detect fire particles. Using a vision sensor instead, fire can be detected much faster, as our experiments show. Previous studies have applied various image processing and machine learning techniques to detect fire, but they usually do not work well because they require hand-crafted features that do not generalize to various scenarios. With recent advances in deep learning, this problem can be addressed with a deep learning-based object detector that detects fire in images from security cameras; deep learning approaches learn features automatically, so they usually generalize well to various scenes. We applied recent computer vision technology, the YOLO detector, to this task. Considering the trade-off between recall and complexity, we introduce two convolutional neural networks of slightly different complexity that detect fire at different recall rates. Both models detect fire at 99% average precision, but one achieves 76% recall at 30 FPS while the other achieves 61% recall at 50 FPS. We also compare the models' memory consumption and show their robustness by testing on various real-world scenarios.

Eye Region Detection Method in Rotated Face using Global Orientation Information (전역적인 에지 오리엔테이션 정보를 이용한 기울어진 얼굴 영상에서의 눈 영역 추출)

  • Jang, Chang-Hyuk;Park, An-Jin;Kurata Takeshi;Jain Anil K.;Park, Se-Hyun;Kim, Eun-Yi;Yang, Jong-Yeol;Jung, Kee-Chul
    • Journal of Korea Society of Industrial Information Systems / v.11 no.4 / pp.82-92 / 2006
  • In the field of image recognition, research on face recognition has recently attracted much attention. The most important step in face recognition is automatic eye detection, which has been studied as a prerequisite stage. Existing eye detection methods focusing on frontal faces fall mainly into two categories: active infrared (IR)-based approaches and image-based approaches. This paper proposes an eye region detection method for non-frontal faces. The proposed method builds on the edge-based method, which shows the fastest computation time. To extract eye regions in non-frontal faces, the method uses an edge orientation histogram of the global region of the face. Problems caused by noise and unfavorable ambient light are resolved by using the width-to-height ratio as local information and the relationships between components as global information in the approximately extracted region. In experiments, the proposed method improved precision rates, resolving three problems caused by edge information, and achieved a detection accuracy of 83.5% with a computation time of 0.5 sec per face image on 300 face images provided by the Weizmann Institute of Science.
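A global edge orientation histogram is built by binning gradient orientations over the whole face region; its dominant bin indicates the in-plane rotation that the eye search must account for. A stdlib sketch of the general technique (not the paper's exact procedure):

```python
import math

def edge_orientation_histogram(img, bins=18):
    """Histogram of unsigned gradient orientations over the image,
    the global cue used to handle rotated faces."""
    h, w = len(img), len(img[0])
    hist = [0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if gx == 0 and gy == 0:
                continue  # flat region, no edge orientation
            ang = math.atan2(gy, gx) % math.pi
            hist[min(int(ang / math.pi * bins), bins - 1)] += 1
    return hist

def dominant_orientation(hist):
    """Center angle (degrees) of the most populated bin."""
    b = max(range(len(hist)), key=hist.__getitem__)
    return (b + 0.5) * 180.0 / len(hist)
```

With 18 bins each covers 10 degrees, so a face tilted by roughly that amount shifts the dominant bin by one, giving a cheap rotation estimate before eye-region extraction.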
