• Title/Summary/Keyword: Real-time object recognition


A method of improving the quality of 3D images acquired from RGB-depth camera (깊이 영상 카메라로부터 획득된 3D 영상의 품질 향상 방법)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.5 / pp.637-644 / 2021
  • In computer vision, robotics, and augmented reality, 3D-space and 3D-object detection and recognition technologies have become increasingly important. In particular, since RGB and depth images can be acquired in real time through image sensors such as the Microsoft Kinect, studies on object detection, tracking, and recognition have changed substantially. In this paper, we propose a method to improve the quality of 3D reconstructed images by processing images acquired through an RGB-Depth camera in a multi-view camera system: noise outside the object is removed by applying a mask obtained from the color image, and a combined filtering operation is applied to the depth differences between pixels inside the object. The experimental results confirm that the proposed method effectively removes noise and improves the quality of the 3D reconstructed images.
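The two-step idea in this abstract (mask out background noise with the color image, then suppress depth outliers inside the object) can be sketched as follows. The function name, the median-based filter, and the `depth_jump` threshold are illustrative assumptions, not the paper's exact combined filter:

```python
import numpy as np

def mask_depth_noise(depth, color_mask, depth_jump=50):
    """Zero depth pixels outside the color-image mask, then replace pixels
    whose depth differs sharply from the local 3x3 median (a stand-in for
    the paper's combined filtering on inter-pixel depth differences)."""
    cleaned = np.where(color_mask, depth, 0)   # remove noise outside the object
    out = cleaned.copy()
    h, w = depth.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not color_mask[y, x]:
                continue
            window = cleaned[y - 1:y + 2, x - 1:x + 2]
            valid = window[window > 0]
            med = np.median(valid) if valid.size else 0
            if abs(int(cleaned[y, x]) - int(med)) > depth_jump:
                out[y, x] = med                # replace depth outlier
    return out
```

A real pipeline would vectorize the window scan, but the per-pixel loop keeps the mask-then-filter order of operations explicit.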

A User-Friendly Kiosk System Based on Deep Learning (딥러닝 기반 사용자 친화형 키오스크 시스템)

  • Su Yeon Kang;Yu Jin Lee;Hyun Ah Jung;Seung A Cho;Hyung Gyu Lee
    • Journal of Korea Society of Industrial Information Systems / v.29 no.1 / pp.1-13 / 2024
  • This study provides a customized, dynamic kiosk screen that considers user characteristics, to cope with the changes caused by the increased use of kiosks. To optimize the screen composition for digitally vulnerable groups such as the visually impaired, the elderly, children, and wheelchair users, users are classified into nine categories based on real-time analysis of their characteristics (wheelchair use, visual impairment, age, etc.), and the kiosk screen is dynamically adjusted accordingly to provide efficient services. The system's communication and operation were demonstrated in an embedded environment, and the object detection, gait recognition, and speech recognition components achieved accuracies of 74%, 98.9%, and 96%, respectively. The effectiveness of the proposed technology was verified with a prototype, demonstrating the possibility of reducing the digital divide and providing user-friendly "barrier-free kiosk" services.
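The nine-category classification could be realized as a simple attribute grid. The abstract does not spell out the scheme, so the 3 age groups x 3 accessibility modes below, and the precedence between wheelchair and low-vision layouts, are illustrative assumptions:

```python
AGE_GROUPS = ("child", "adult", "elderly")

def classify_kiosk_user(age_group, wheelchair=False, visually_impaired=False):
    """Map sensed user attributes to one of nine screen-layout categories
    (3 age groups x 3 accessibility modes). Hypothetical scheme: the
    wheelchair layout (lowered screen) takes precedence over low-vision mode."""
    if age_group not in AGE_GROUPS:
        raise ValueError(f"unknown age group: {age_group}")
    if wheelchair:
        mode = "wheelchair"
    elif visually_impaired:
        mode = "low_vision"
    else:
        mode = "standard"
    return f"{age_group}/{mode}"
```

Enumerating all attribute combinations yields exactly nine distinct categories, matching the count in the abstract.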

A Study on the Image DB Construction for the Multi-function Front Looking Camera System Development (다기능 전방 카메라 개발을 위한 영상 DB 구축 방법에 관한 연구)

  • Kee, Seok-Cheol
    • Transactions of the Korean Society of Automotive Engineers / v.25 no.2 / pp.219-226 / 2017
  • This paper addresses effective and quantitative image DB construction for the development of front looking camera systems. The automotive industry has expanded front camera solutions for ADAS (Advanced Driver Assistance System) applications targeting Euro NCAP functional requirements. These safety functions include AEB (Autonomous Emergency Braking), TSR (Traffic Sign Recognition), LDW (Lane Departure Warning), and FCW (Forward Collision Warning). To guarantee safety performance on real roads, a driving image DB logged under various real road conditions should be used to train the core object classifiers and verify the functional performance of the camera system. Without proper guidelines, however, building such a DB becomes an inefficient and time-consuming task. This paper proposes the standard working procedures and the design factors required at each step to build an effective image DB for reliable automotive front looking camera systems.

A Study on the Impact of AI Edge Computing Technology on Reducing Traffic Accidents at Non-signalized Intersections on Residential Road (이면도로 비신호교차로에서 AI 기반 엣지컴퓨팅 기술이 교통사고 감소에 미치는 영향에 관한 연구)

  • Young-Gyu Jang;Gyeong-Seok Kim;Hye-Weon Kim;Won-Ho Cho
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.23 no.2 / pp.79-88 / 2024
  • We used actual field data to analyze, from a traffic engineering perspective, how AI and edge computing technologies affect the reduction of traffic accidents. By providing object information from 20 m behind through AI object recognition, the driver secures a response time of about 3.6 seconds, and with edge computing the information is displayed within 0.5 to 0.8 seconds, giving the driver time to respond to the intersection situation. In addition, the analysis showed that stopping before entering the intersection is possible when the speed is controlled to 11-12 km/h at the 10 m point of the intersection approach and 20 km/h at the 20 m point. As a result, traffic accidents can be reduced when the high object recognition rate of AI technology, the real-time information provided by edge technology, and appropriate speed management at intersection approaches operate together.
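The stopping-before-the-intersection claim can be checked with basic kinematics. The display latency range (0.5-0.8 s) is from the abstract; the 1.0 s driver reaction time and the 4.0 m/s² comfortable braking deceleration are assumptions added here for illustration:

```python
def can_stop_before_intersection(speed_kmh, distance_m,
                                 latency_s=0.8, reaction_s=1.0, decel=4.0):
    """True if a vehicle alerted `distance_m` before the intersection can stop.
    latency_s: edge-system display latency (0.5-0.8 s in the study);
    reaction_s and decel are illustrative assumptions, not from the paper."""
    v = speed_kmh / 3.6                       # km/h -> m/s
    dead_time = (latency_s + reaction_s) * v  # distance covered before braking
    braking = v * v / (2 * decel)             # kinematic braking distance
    return dead_time + braking <= distance_m
```

With these assumptions, the speeds cited in the abstract (20 km/h at the 20 m point, 11-12 km/h at the 10 m point) do leave enough distance to stop, while an uncontrolled approach speed does not.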

A Study on the Gesture Based Virtual Object Manipulation Method in Multi-Mixed Reality

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information / v.26 no.2 / pp.125-132 / 2021
  • In this paper, we propose a method for constructing a collaborative environment in mixed reality and for working with wearable IoT devices. Mixed reality (MR) combines virtual reality and augmented reality: objects in the real and virtual worlds can be viewed at the same time, and unlike VR, an MR HMD does not cause motion sickness. Because it is wireless, MR is attracting attention as a technology to be applied in industrial fields. The Myo wearable device enables arm rotation tracking and hand gesture recognition using a triaxial sensor, an EMG sensor, and an acceleration sensor. Although various MR studies are in progress, discussion of environments in which multiple people participate in mixed reality and manipulate virtual objects with their own hands remains insufficient. We therefore propose a method of constructing an environment where collaboration is possible, together with an interaction method for smooth interaction, in order to apply mixed reality in real industrial fields. As a result, two people could participate in the mixed reality environment at the same time and share a unified virtual object, and an environment was created in which each person could interact through the Myo wearable interface device.

Study on vision-based object recognition to improve performance of industrial manipulator (산업용 매니퓰레이터의 작업 성능 향상을 위한 영상 기반 물체 인식에 관한 연구)

  • Park, In-Cheol;Park, Jong-Ho;Ryu, Ji-Hyoung;Kim, Hyoung-Ju;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.4 / pp.358-365 / 2017
  • In this paper, we propose an object recognition method using image information to improve the efficiency of visual servoing for industrial manipulators. It is an image-processing method for responding in real time to abnormal situations or external environmental changes affecting a work object, using the camera-image information of an industrial manipulator. To improve the recognition rate of the existing Harris corner algorithm, the proposed method applies the Otsu thresholding technique to the V channel, which carries intensity information, and the S channel of the HSV color space, from which the background is easy to separate. Through this method, when the work object is not placed in the correct position or is rotated due to external factors, its position is calculated and provided to the industrial manipulator.

A climbing movement detection system through efficient cow behavior recognition based on YOLOX and OC-SORT (YOLOX와 OC-SORT 기반의 효율적인 소 행동 인식을 통한 승가 운동 감지시스템)

  • LI YU;NamHo Kim
    • Smart Media Journal / v.12 no.7 / pp.18-26 / 2023
  • In this study, we propose a cow behavior recognition system based on YOLOX and OC-SORT. YOLOX detects targets in real time and provides information on cow location and behavior. The OC-SORT module tracks the cows in the video and assigns unique IDs. A quantitative analysis module analyzes the cows' behavior and location information. Experimental results show that the system achieves high accuracy and precision in target detection and tracking: the average precision (AP) of YOLOX was 82.2%, the average recall (AR) was 85.5%, the number of parameters was 54.15 M, and the computation was 194.16 GFLOPs. OC-SORT maintained high-precision real-time target tracking in complex environments and under occlusion. By analyzing changes in cow movement and the frequency of mounting behavior, the system can help discern the estrus behavior of cows more accurately.
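The core of any detect-then-track pipeline like this one is associating each frame's detections with existing track IDs. A greedy IoU association is sketched below as a simplified stand-in; OC-SORT itself adds motion prediction and observation-centric re-update on top of such a step, and the function names here are illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, next_id, thresh=0.3):
    """Greedily match each detection to the unused track with highest IoU;
    unmatched detections open new tracks. Returns {track_id: box}, next_id."""
    assigned, used = {}, set()
    for det in detections:
        best_id, best_iou = None, thresh
        for tid, box in tracks.items():
            if tid in used:
                continue
            s = iou(box, det)
            if s > best_iou:
                best_id, best_iou = tid, s
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        used.add(best_id)
        assigned[best_id] = det
    return assigned, next_id
```

Running this per frame keeps each cow's ID stable as long as its box overlaps its previous position, which is the property the mounting-frequency analysis relies on.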

Real-Time Human Tracker Based Location and Motion Recognition for the Ubiquitous Smart Home (유비쿼터스 스마트 홈을 위한 위치와 모션인식 기반의 실시간 휴먼 트랙커)

  • Park, Se-Young;Shin, Dong-Kyoo;Shin, Dong-Il;Cuong, Nguyen Quoe
    • Proceedings of the Korean Information Science Society Conference / 2008.06d / pp.444-448 / 2008
  • The ubiquitous smart home is the home of the future: it takes advantage of context information from the human and the home environment and provides automatic home services. Human location and motion are the most important contexts in the ubiquitous smart home. We present a real-time human tracker that predicts human location and motion for the ubiquitous smart home, using four network cameras. This paper explains the tracker's architecture and presents algorithms for its two functions, prediction of human location and of human motion. Location prediction uses three kinds of background images (IMAGE1: empty room; IMAGE2: room with furniture and home appliances; IMAGE3: IMAGE2 plus the human). The tracker determines which furniture or home appliance the human is near by analyzing the three images, and predicts human motion using a support vector machine (SVM). Locating the human from the three images took an average of 0.037 seconds. The SVM feature for motion recognition is derived from the pixel counts along the array lines of the moving object. We evaluated each motion 1,000 times; the average accuracy over all motions was 86.5%.
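The location step in this abstract amounts to differencing the furnished-room background (IMAGE2) against the live frame (IMAGE3) to isolate the human. A minimal sketch, with the function name and threshold as illustrative assumptions:

```python
import numpy as np

def locate_human(image2, image3, diff_thresh=30):
    """Return the bounding box (x1, y1, x2, y2) of pixels that changed between
    the furnished background IMAGE2 and the live frame IMAGE3, or None if
    nothing changed beyond diff_thresh grey levels."""
    diff = np.abs(image3.astype(int) - image2.astype(int)) > diff_thresh
    ys, xs = np.nonzero(diff)
    if ys.size == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
```

Comparing the resulting box against the known furniture regions (from IMAGE1 vs. IMAGE2) would then tell which appliance the human is near; the SVM motion step is separate.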


An Improved Area Edge Detection for Real-time Image Processing (실시간 영상 처리를 위한 향상된 영역 경계 검출)

  • Kim, Seung-Hee;Nam, Si-Byung;Lim, Hae-Jin
    • Journal of the Korea Society of Computer and Information / v.14 no.1 / pp.99-106 / 2009
  • Edge detection is an important stage that significantly affects the performance of image recognition, and although its execution methods have been studied extensively, it remains a difficult problem; it is a core component of image recognition applications, though not the only way to identify an object or track a specific area. Unlike edge detection methods based on gradient operators, the method in this paper finds edge pixels in a binary image by referring to the information of two neighboring pixels and comparing them with four pre-defined edge-pixel patterns, determines the direction of the next edge-search pixel, and detects the edges of further areas by repeating this edge detection step. When an image is recognized in this way, the thinning stage that normally follows gradient-operator edge detection can be omitted, and because the edge detection algorithm runs faster than existing area-edge tracing methods, the total recognition time of a real-time image recognition system can be reduced.
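A simplified form of the area-edge test in this abstract: a foreground pixel is an edge pixel if any 4-neighbour is background (or off the image). The paper's four-pattern comparison and directional search are reduced here to this neighbour check for illustration:

```python
def edge_pixels(binary):
    """Return (row, col) of every foreground pixel in a 0/1 grid that touches
    background (or the image border) through a 4-neighbour; these are the
    area-edge pixels a border-following pass would visit."""
    h, w = len(binary), len(binary[0])
    edges = []
    for y in range(h):
        for x in range(w):
            if not binary[y][x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if ny < 0 or ny >= h or nx < 0 or nx >= w or not binary[ny][nx]:
                    edges.append((y, x))   # touches background: edge pixel
                    break
    return edges
```

Because the result is already a one-pixel-wide boundary, no separate thinning pass is needed, which is the speed argument the abstract makes.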

Development of CCTV Cooperation Tracking System for Real-Time Crime Monitoring (실시간 범죄 모니터링을 위한 CCTV 협업 추적시스템 개발 연구)

  • Choi, Woo-Chul;Na, Joon-Yeop
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.12 / pp.546-554 / 2019
  • Typically, closed-circuit television (CCTV) monitoring is mainly used for post-incident processes (i.e., to provide evidence after an incident has occurred), but with streaming video, machine learning, and advanced image recognition techniques, current technology can be extended to respond to crimes or reports of missing persons in real time. The multi-CCTV cooperation technique developed in this study is a program model that extracts similarity information about a suspect (or moving object) from CCTV at one location and delivers it to a monitoring agent, so that the selected suspect or object can continue to be tracked by another CCTV camera when it moves out of range. To improve the operating efficiency of local-government CCTV control centers, we describe the partial automation of a CCTV control system that currently relies on monitoring by human agents. We envisage an integrated crime prevention service incorporating the cooperative CCTV network suggested in this study, which citizens could experience directly, for example through precise real-time individual location and crime prevention/safety services linked to smartphones.
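The camera-to-camera handoff described here hinges on matching the suspect's appearance feature against candidates seen by the next camera. A minimal cosine-similarity sketch; the feature extractor itself (e.g. a re-identification network) and the function names are assumptions outside the abstract:

```python
import numpy as np

def best_handoff_match(suspect_feat, candidate_feats):
    """Return (index, similarity) of the candidate feature vector most similar
    to the suspect's, using cosine similarity; a threshold on the returned
    score would decide whether the handoff is accepted."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [cos(suspect_feat, c) for c in candidate_feats]
    best = int(np.argmax(scores))
    return best, scores[best]
```

In a deployed system the monitoring agent would confirm the proposed match, keeping the human in the loop while automating the search across cameras.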