• Title/Summary/Keyword: Motion recognition image processing


Exploring Image Processing and Image Restoration Techniques

  • Omarov, Batyrkhan Sultanovich;Altayeva, Aigerim Bakatkaliyevna;Cho, Young Im
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.15 no.3
    • /
    • pp.172-179
    • /
    • 2015
  • Because of the development of computers and high-technology applications, the devices we use have become more intelligent. In recent years, security and surveillance systems have become more complicated as well. Before video analytics, security cameras were used only to record events as they occurred, and a human had to analyze the recorded data. Nowadays, computers are used for video analytics, and video surveillance systems have become more autonomous and automated. The types of security cameras have also changed, and the market offers different kinds of cameras with integrated software. Even though there is a variety of hardware, its capabilities leave a lot to be desired, so developers try to compensate for this drawback with software solutions. Image processing is a very important part of video surveillance and security systems. Capturing an image exactly as it appears in the real world is difficult, if not impossible: there is always noise to deal with, caused by the graininess of the emulsion, the low resolution of camera sensors, motion blur caused by movement, focus and depth-of-field problems, or the imperfect nature of the camera lens. This paper reviews image processing, pattern recognition, and image digitization techniques that are useful in security services, bio-image analysis, image restoration, and object classification.
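One basic restoration step surveyed in reviews like this is spatial smoothing of sensor noise. A minimal pure-Python sketch of a 3×3 mean filter (an illustrative example, not code from the paper):

```python
def mean_filter(image, k=3):
    """Smooth a grayscale image (list of lists of ints) with a k x k mean filter.

    Border pixels are averaged only over neighbors that fall inside the
    image, which keeps the output the same size as the input.
    """
    h, w = len(image), len(image[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out
```

A single hot pixel of value 90 in a dark 3×3 image is spread out to roughly a tenth of its amplitude, at the cost of some blur, which is why the review also covers restoration methods that preserve edges.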

Emergency situations Recognition System Using Multimodal Information (멀티모달 정보를 이용한 응급상황 인식 시스템)

  • Kim, Young-Un;Kang, Sun-Kyung;So, In-Mi;Han, Dae-Kyung;Kim, Yoon-Jin;Jung, Sung-Tae
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.757-758
    • /
    • 2008
  • This paper proposes an emergency recognition system using multimodal information extracted by an image processing module, a voice processing module, and a gravity sensor processing module. Each processing module detects predefined events such as moving, stopping, and fainting, and transfers them to the multimodal integration module. The integration module recognizes an emergency situation from the transferred events and rechecks it by asking the user a question and recognizing the answer. The experiment was conducted for a fainting motion in a living room and a bathroom. The results show that the proposed system is more robust than previous methods and effectively recognizes emergencies in various situations.
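The two-stage decision the abstract describes (fuse module events, then recheck by questioning the user) can be sketched as simple rules. Event names and the callback are hypothetical, chosen only to illustrate the flow:

```python
def fuse_events(image_event, voice_event, gravity_event):
    """Rule-based sketch of the multimodal integration step.

    Each module reports one of its predefined events; any fall-like
    event from any module raises a tentative alarm.
    """
    fall_like = {"fainting", "falling", "impact", "scream"}
    return bool({image_event, voice_event, gravity_event} & fall_like)

def recognize_emergency(image_event, voice_event, gravity_event, ask_user):
    """Confirm a tentative alarm by asking the user; no reassuring
    answer is treated as an emergency."""
    if not fuse_events(image_event, voice_event, gravity_event):
        return False
    answer = ask_user("Are you OK?")
    return answer != "yes"
```

The recheck step is what makes the system robust: a detected "fainting" event is only escalated when the user fails to answer.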


Depth-Based Recognition System for Continuous Human Action Using Motion History Image and Histogram of Oriented Gradient with Spotter Model (모션 히스토리 영상 및 기울기 방향성 히스토그램과 적출 모델을 사용한 깊이 정보 기반의 연속적인 사람 행동 인식 시스템)

  • Eum, Hyukmin;Lee, Heejin;Yoon, Changyong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.26 no.6
    • /
    • pp.471-476
    • /
    • 2016
  • In this paper, a recognition system for continuous human action based on depth information is described that uses motion history images and histograms of oriented gradients, and a spotter model that performs action spotting is proposed to improve recognition performance. The system consists of pre-processing, human action and spotter modeling, and continuous human action recognition. In pre-processing, Depth-MHI-HOG is used to extract space-time template-based features after image segmentation, and the human action and spotter modeling step generates sequences from the extracted features. Human action models appropriate for each defined action, and the proposed spotter model, are created from these sequences using hidden Markov models. Continuous human action recognition performs action spotting to separate meaningful from meaningless action in a continuous action sequence using the spotter model, and continuously recognizes human actions by comparing model probability values over the meaningful action sequences. Experimental results demonstrate that the proposed model efficiently improves recognition performance in a continuous action recognition system.
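The motion history image (MHI) at the core of this pipeline is a grayscale template in which recently moved pixels are bright and older motion fades out. A minimal pure-Python update step (a sketch of the standard MHI recurrence, not the authors' depth-based implementation):

```python
def update_mhi(mhi, motion_mask, tau=30):
    """One update step of a motion history image (MHI).

    Pixels that moved in the current frame (motion_mask truthy) are set
    to tau; all other pixels decay by 1 toward 0, so recent motion
    appears brighter than old motion in the resulting template.
    """
    h, w = len(mhi), len(mhi[0])
    return [[tau if motion_mask[y][x] else max(0, mhi[y][x] - 1)
             for x in range(w)] for y in range(h)]
```

HOG descriptors computed over such a template then summarize where and in which direction motion accumulated, which is what the hidden Markov models consume as observation sequences.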

Implementing Augmented Reality By Using Face Detection, Recognition And Motion Tracking (얼굴 검출과 인식 및 모션추적에 의한 증강현실 구현)

  • Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.1
    • /
    • pp.97-104
    • /
    • 2012
  • Natural User Interface (NUI) technologies introduce new trends in using devices such as computers and other electronic devices. In this paper, augmented reality on a mobile device is implemented using face detection, recognition, and motion tracking. Face detection is performed with the Viola-Jones algorithm on images from the front camera. The Eigenface algorithm is employed for face recognition and face motion tracking. The augmented reality is implemented by overlaying the rear camera image, together with GPS and accelerometer sensor data, with the 3D graphic object that corresponds to the recognized face. The algorithms and methods are limited by the mobile device's specifications, such as processing power and main memory capacity.
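Viola-Jones detection rests on the integral image (summed-area table), which lets the sum of any rectangular Haar feature be read off in four lookups regardless of its size. A pure-Python sketch of that building block (illustrative, not the paper's code):

```python
def integral_image(img):
    """Summed-area table with a zero border:
    ii[y][x] = sum of img over rows < y and cols < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom][left:right] in four lookups."""
    return ii[bottom][right] - ii[top][right] - ii[bottom][left] + ii[top][left]
```

This constant-time rectangle sum is what makes evaluating thousands of Haar features per window feasible on a mobile device's limited processor.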

Motion Plane Estimation for Real-Time Hand Motion Recognition (실시간 손동작 인식을 위한 동작 평면 추정)

  • Jeong, Seung-Dae;Jang, Kyung-Ho;Jung, Soon-Ki
    • The KIPS Transactions:PartB
    • /
    • v.16B no.5
    • /
    • pp.347-358
    • /
    • 2009
  • In this paper, we develop a vision-based hand motion recognition system using a camera with two rotational motors. Existing systems were implemented with a range camera or multiple cameras and have a limited working area. In contrast, we use an uncalibrated camera and obtain a wider working area through pan-tilt motion. Given the image sequence provided by the pan-tilt camera, color and pattern information are integrated into a tracking system to find the 2D position and direction of the hand. From this pose information, we estimate the 3D motion plane on which the gesture trajectory approximately lies. The 3D trajectory of the moving fingertip is projected onto the motion plane, so that the resolving power for linear gesture patterns is enhanced. We have tested the proposed approach in terms of the accuracy of the trace angle and the size of the working volume.
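The projection step the abstract describes, flattening a roughly planar 3D fingertip trajectory onto its estimated motion plane, is plain vector algebra. A pure-Python sketch (the plane parameterization is an assumption for illustration):

```python
def project_to_plane(points, normal, origin):
    """Orthogonally project 3D trajectory points onto a plane.

    The plane is given by a point `origin` on it and a (not necessarily
    unit) normal vector. Each point is moved along the normal until it
    lies on the plane, flattening a trajectory that only approximately
    stays in one plane.
    """
    nx, ny, nz = normal
    n2 = nx * nx + ny * ny + nz * nz
    out = []
    for px, py, pz in points:
        # signed distance from the plane to the point, along the normal
        d = ((px - origin[0]) * nx + (py - origin[1]) * ny
             + (pz - origin[2]) * nz) / n2
        out.append((px - d * nx, py - d * ny, pz - d * nz))
    return out
```

After projection the gesture becomes a 2D curve in the plane, where linear strokes in different directions are much easier to tell apart.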

A Study on Auto Inspection System of Cross Coil Movement Using Machine Vision (머신비젼을 이용한 Cross Coil Movement 자동검사 시스템에 관한 연구)

  • Lee, Chul-Hun;Seol, Sung-Wook;Joo, Jae-Heum;Lee, Sang-Chan;Nam, Ki-Gon
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.11
    • /
    • pp.79-88
    • /
    • 1999
  • In this paper we address a tracking method that tracks only the target object in an image sequence containing moving objects. We use a contour tracking algorithm based on intensity and motion boundaries. The motion of the moving object's contour in the image is assumed to be well described by an affine motion model with a translation, a change in scale, and a rotation. The contour is represented by a B-spline whose position and motion are estimated along the image sequence. We use pattern recognition to identify the target object. In order to use linear Kalman filters, we decompose the estimation process into two filters: one estimates the affine motion parameters, the other the shape of the moving object's contour. In experiments with a dial plate we show that this method yields robust motion estimates and tracking trajectories even in the presence of obstructing objects.
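Each of the two linear Kalman filters above runs the same predict/update cycle. A scalar sketch of one such cycle (a simplified random-walk model, not the paper's affine-parameter filter):

```python
def kalman_step(x, p, z, q=1e-3, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    x, p : prior state estimate and its variance
    z    : new measurement with noise variance r
    q    : process noise variance (how fast the state may drift)
    Returns the posterior estimate and variance.
    """
    p = p + q                 # predict: state assumed constant, uncertainty grows
    k = p / (p + r)           # Kalman gain
    x = x + k * (z - x)       # update: move estimate toward the measurement
    p = (1 - k) * p
    return x, p
```

Fed a stream of noisy measurements of a constant quantity, the estimate converges while its variance shrinks; in the paper the same machinery is applied to vectors of affine parameters and B-spline control points.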


Combining Object Detection and Hand Gesture Recognition for Automatic Lighting System Control

  • Pham, Giao N.;Nguyen, Phong H.;Kwon, Ki-Ryong
    • Journal of Multimedia Information System
    • /
    • v.6 no.4
    • /
    • pp.329-332
    • /
    • 2019
  • Recent smart lighting systems combine sensors and lights: they turn lights on and off and adjust their brightness based on the motion of objects and the brightness of the environment. Such systems are often deployed in buildings, rooms, garages, and parking lots, but they are typically controlled by separate lighting and motion sensors. In this paper, we propose an automatic lighting control system that uses a single camera for buildings, rooms, and garages. The proposed system integrates the results of digital image processing, namely motion detection and hand gesture detection, to switch and dim the lighting. The experimental results show that the proposed system works well and could be applied to automatic lighting of spaces.
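The control logic combining the two vision results can be written as a small decision function. The thresholds, gesture names, and brightness levels below are hypothetical, chosen only to show how gesture commands override occupancy/ambient rules:

```python
def control_light(motion_detected, gesture, ambient_lux, current_level):
    """Decide a lamp brightness level (0-100) from camera-derived inputs.

    motion_detected : bool from frame-differencing motion detection
    gesture         : "up", "down", or None from hand-gesture detection
    ambient_lux     : estimated scene brightness
    """
    if gesture == "up":                  # explicit gesture wins
        return min(100, current_level + 20)
    if gesture == "down":
        return max(0, current_level - 20)
    if not motion_detected:
        return 0                         # nobody present: turn off
    if ambient_lux > 300:
        return 0                         # bright enough already
    return max(current_level, 60)        # occupied and dark: default on-level
```

In the paper's setup the same single camera supplies all three inputs, replacing the separate lighting and motion sensors of conventional systems.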

Implementation of Interactive Media Content Production Framework based on Gesture Recognition (제스처 인식 기반의 인터랙티브 미디어 콘텐츠 제작 프레임워크 구현)

  • Koh, You-jin;Kim, Tae-Won;Kim, Yong-Goo;Choi, Yoo-Joo
    • Journal of Broadcast Engineering
    • /
    • v.25 no.4
    • /
    • pp.545-559
    • /
    • 2020
  • In this paper, we propose a content creation framework that enables users without programming experience to easily create interactive media content that responds to user gestures. In the proposed framework, users assign numbers to the gestures they use and to the media effects that respond to them, and link them in a text-based configuration file. The interactive media content is coupled with a dynamic projection mapping module so that it tracks the user's location and projects the media effects onto the user. To reduce the processing time and memory burden of gesture recognition, the user's movement is expressed as a grayscale motion history image. We designed a convolutional neural network model for gesture recognition that takes motion history images as input. The number of network layers and the hyperparameters of the model were determined through experiments recognizing five gestures and applied to the proposed framework. In the gesture recognition experiment, we obtained a recognition accuracy of 97.96% and a processing speed of 12.04 FPS. In an experiment combining three media effects, we confirmed that the intended media effect was displayed appropriately in real time according to the user's gesture.
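The gesture-to-effect linking could be realized by parsing a text configuration that maps gesture numbers to effect numbers. The exact file format below is an assumption, not the paper's specification:

```python
def parse_effect_config(text):
    """Parse a 'gesture_number = effect_number' text config into a dict.

    Lines starting with '#' and blank lines are ignored. This format is
    a hypothetical example of a text-based configuration file.
    """
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        gesture, effect = line.split("=")
        mapping[int(gesture)] = int(effect)
    return mapping
```

Keeping the mapping in plain text is what lets non-programmers rewire which effect each recognized gesture triggers without touching code.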

Automatic Recognition of Local Wrinkles in Textile Using Block Matching Algorithm (블록 정합을 이용한 국부적인 직물 구김 인식)

  • Lee, Hyeon-Jin;Kim, Eun-Jin;Lee, Il-Byeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.11
    • /
    • pp.3165-3177
    • /
    • 1999
  • With the recent advances in computer software and hardware, a number of studies aiming to enhance manufacturing speed and process accuracy have been undertaken in many fields of the textile industry. The frequent problems in automatic recognition of textile wrinkles in a grayscale image are as follows. First, the changes in gray level intensity caused by wrinkles are very subtle. Second, since both colors and patterns appear as gray level intensity in a grayscale image, it is difficult to isolate the wrinkle information. Third, it is also difficult to distinguish intensity changes caused by wrinkles from those caused by uneven illumination. This paper suggests a method for the automatic recognition of textile wrinkles, which can arise as defects in the manufacturing process, that addresses these problems. We first make the outline of the wrinkles distinct, apply the block matching algorithm used in motion estimation, and then estimate the block locations in the target image corresponding to blocks of the reference image, under the assumption that wrinkles are textile distortions caused by directional forces. We plot a "wrinkle map" that treats the distances between matched blocks as wrinkle depths. Because mismatches can occur due to differences in illumination intensity and in the tension and direction of the applied force, the map also contains undesirable patterns, and post-processing is needed to filter them out and retain the wrinkle information only. We use the average gray level intensity of the wrinkle map to recognize wrinkles. Previous research on wrinkles in grayscale images has not succeeded on textiles with colors and patterns, but we make this possible by treating wrinkles as distortion.
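The block matching step borrowed from motion estimation is an exhaustive search minimizing the sum of absolute differences (SAD) over a small displacement window. A pure-Python sketch (illustrative of the generic technique, not the paper's implementation):

```python
def best_match(ref_block, target, top, left, search=2):
    """Find the displacement of ref_block within target by exhaustive SAD
    search in a +/-search window around its original (top, left) position.

    Returns ((dy, dx), sad) for the lowest-cost displacement.
    """
    bh, bw = len(ref_block), len(ref_block[0])
    h, w = len(target), len(target[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > h or x + bw > w:
                continue  # candidate block would leave the image
            sad = sum(abs(ref_block[i][j] - target[y + i][x + j])
                      for i in range(bh) for j in range(bw))
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad
```

In the wrinkle application, the per-block displacements between the flat reference image and the wrinkled target image are what populate the "wrinkle map".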


A New Residual Attention Network based on Attention Models for Human Action Recognition in Video

  • Kim, Jee-Hyun;Cho, Young-Im
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.1
    • /
    • pp.55-61
    • /
    • 2020
  • With the development of deep learning technology and advances in computing power, video-based research is now gaining more and more attention. Video data contains a large amount of temporal and spatial information, which is the biggest difference compared with image data. It has a larger amount of data. It has attracted intense attention in computer vision. Among them, motion recognition is one of the research focuses. However, the action recognition of human in the video is extremely complex and challenging subject. Based on many research in human beings, we have found that artificial intelligence-like attention mechanisms are an efficient model for cognition. This efficient model is ideal for processing image information and complex continuous video information. We introduce this attention mechanism into video action recognition, paying attention to human actions in video and effectively improving recognition efficiency. In this paper, we propose a new 3D residual attention network using convolutional neural network based on two attention models to identify human action behavior in the video. An evaluation result of our model showed up to 90.7% accuracy.