• Title/Summary/Keyword: Direction recognition


A Study on the Industrial Application of Image Recognition Technology (이미지 인식 기술의 산업 적용 동향 연구)

  • Song, Jaemin;Lee, Sae Bom;Park, Arum
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.7
    • /
    • pp.86-96
    • /
    • 2020
  • Based on use cases of image recognition technology, this study examined the role that artificial intelligence plays in image recognition. With image recognition technology, satellite images can be analyzed with artificial intelligence to estimate the amount of oil stored in tanks in certain countries. Image recognition also makes it possible to search for images or products similar to images taken or downloaded by users, to sort fruit yields, and to detect plant diseases. Based on deep learning and neural network algorithms, people's age, gender, and mood can be recognized, confirming that image recognition technology is being applied across various industries. This study surveys domestic and overseas use cases of image recognition technology and examines which methods are being applied in industry. It also presents directions for future research, focusing on successful cases in which image recognition technology was implemented and applied across industries, and concludes by discussing the direction in which domestic image recognition technology should move in the future.

Recognizing the Direction of Action using Generalized 4D Features (일반화된 4차원 특징을 이용한 행동 방향 인식)

  • Kim, Sun-Jung;Kim, Soo-Wan;Choi, Jin-Young
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.5
    • /
    • pp.518-528
    • /
    • 2014
  • In this paper, we propose a method to recognize the action direction of a human by developing 4D space-time (4D-ST, [x,y,z,t]) features. For this, we propose 4D space-time interest points (4D-STIPs, [x,y,z,t]), which are extracted using 3D space (3D-S, [x,y,z]) volumes reconstructed from images of a finite number of different views. Since the proposed features are constructed using volumetric information, features for an arbitrary 2D space (2D-S, [x,y]) viewpoint can be generated by projecting the 3D-S volumes and 4D-STIPs onto the corresponding image planes in the training step. We can recognize the directions of actors in a test video because our training sets, which are projections of 3D-S volumes and 4D-STIPs onto various image planes, contain the direction information. The process for recognizing action direction is divided into two steps: first we recognize the action class, and then we recognize the action direction using the direction information. For both steps, from the projected 3D-S volumes and 4D-STIPs we construct motion history images (MHIs) and non-motion history images (NMHIs), which encode the moving and non-moving parts of an action, respectively. For action recognition, features are trained by support vector data description (SVDD) according to the action class and recognized by support vector domain density description (SVDDD). For action-direction recognition after the action is recognized, each action is trained using SVDD according to the direction class and then recognized by SVDDD. In experiments, we train the models using 3D-S volumes from the INRIA Xmas Motion Acquisition Sequences (IXMAS) dataset and evaluate action-direction recognition on a new SNU dataset built for this purpose.
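
    The motion history images (MHIs) mentioned above can be sketched with a simple per-pixel update rule. This is a generic MHI update on binary silhouette frames; the duration parameter `tau` and the toy frames are illustrative, not taken from the paper:

    ```python
    def update_mhi(mhi, silhouette, tau=10):
        """Update a motion history image with one binary silhouette frame.
        Pixels where motion occurs are set to tau; elsewhere they decay by 1."""
        return [[tau if s else max(m - 1, 0)
                 for m, s in zip(mrow, srow)]
                for mrow, srow in zip(mhi, silhouette)]

    # Toy 3x3 example: a silhouette moving left to right over two frames.
    mhi = [[0] * 3 for _ in range(3)]
    frame1 = [[1, 0, 0], [1, 0, 0], [0, 0, 0]]
    frame2 = [[0, 1, 0], [0, 1, 0], [0, 0, 0]]
    mhi = update_mhi(mhi, frame1)
    mhi = update_mhi(mhi, frame2)
    # The most recent motion holds the value tau; older motion has decayed by one step.
    ```

    A non-motion history image (NMHI), as used in the paper, would analogously accumulate the pixels where no motion occurred.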

Implementation of a Finger-Gesture Remote Controller using Moving-Direction Recognition of a Single Shape (단일 형상의 이동 방향 인식에 의한 손 동작 리모트 컨트롤러 구현)

  • Jang, Myeong-Soo;Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.4
    • /
    • pp.91-97
    • /
    • 2013
  • A finger-gesture remote controller using a single camera is implemented in this paper, based on recognition of the number of fingers and the finger moving direction. The proposed method uses transformed YCbCr color-difference information to extract the hand region effectively. The number and positions of fingers are computed by a double circle-tracing method. In particular, a continuous user command can be performed repeatedly by recognizing the finger-gesture direction of a single shape, and the finger position information allows the same command to be amplified in the user experience. All processing tasks are implemented using the Intel OpenCV library and C++. To evaluate the performance of the proposed method, it was applied to commercial video-player software as a remote controller; the proposed method showed an average recognition ratio of 89% across the user command modes.
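
    A hand-region extraction step like the one described can be approximated by thresholding BT.601 chrominance values. The threshold ranges below are commonly cited skin-color bounds, not the paper's transformed values, and the function names are illustrative:

    ```python
    def rgb_to_cbcr(r, g, b):
        """BT.601 full-range RGB -> (Cb, Cr) chrominance components."""
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return cb, cr

    def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
        """Classify one pixel as skin by chrominance thresholds (illustrative ranges)."""
        cb, cr = rgb_to_cbcr(r, g, b)
        return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
    ```

    Thresholding in CbCr rather than RGB makes the segmentation largely insensitive to brightness, which is why YCbCr-style spaces are popular for hand and face extraction.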

Robust Gait Recognition for Directional Variation Using Canonical View Synthesis (고유시점 재구성을 이용한 방향 변화에 강인한 게이트 인식)

  • 정승도;최병욱
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.5
    • /
    • pp.59-67
    • /
    • 2004
  • Gait is defined as a manner or characteristic of walking. Recently, studies on extracting gait features to identify individuals have progressed actively within the computer vision community. Even if the camera is fixed, the gait features extracted from images vary according to the direction of walking. In this paper, we propose a method that compensates for this direction dependence of gait recognition. First, we find the direction of walking and estimate the planar homography with simple operations. By synthesizing canonical-view images using the estimated homography, the viewpoint variation caused by the walking direction is compensated. We segment the gait silhouette into sub-regions and use the averaged feature and its variation for each region in the recognition experiments. Experimental results show that the proposed method is robust to directional variation of the gait.
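
    Synthesizing a canonical view with an estimated planar homography reduces to mapping image points through a 3x3 matrix. A minimal sketch of that mapping (the estimation of `H` itself, which the paper performs from the walking direction, is omitted):

    ```python
    def apply_homography(H, x, y):
        """Map an image point (x, y) through a 3x3 planar homography H (row-major)."""
        xp = H[0][0] * x + H[0][1] * y + H[0][2]
        yp = H[1][0] * x + H[1][1] * y + H[1][2]
        w  = H[2][0] * x + H[2][1] * y + H[2][2]
        return xp / w, yp / w

    # Example: a homography that scales by 2 and shifts x by 1.
    H = [[2, 0, 1],
         [0, 2, 0],
         [0, 0, 1]]
    ```

    Warping every silhouette pixel this way yields the canonical (e.g., fronto-parallel) view from which direction-invariant gait features can be extracted.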

A Study on Adaptive Feature-Factors Based Fingerprint Recognition (적응적 특징요소 기반의 지문인식에 관한 연구)

  • 노정석;정용훈;이상범
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.1799-1802
    • /
    • 2003
  • This paper studies adaptive feature-factor-based fingerprint recognition among biometric methods. We study preprocessing and matching of fingerprint images under various circumstances using an optical fingerprint input device. Fingerprint recognition technology has developed considerably, but there remain many points where accuracy and operation speed can be improved. First, we study fingerprint classification to reduce the existing preprocessing steps, and then extract feature factors with direction information from the fingerprint image. We also consider minimization of noise for an effective fingerprint recognition system.
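
    The abstract does not give its formula for the direction information, but a standard gradient-based estimator of a block's dominant orientation, widely used in fingerprint processing, looks like this (an illustrative sketch, not the authors' exact method):

    ```python
    import math

    def block_orientation(block):
        """Estimate the dominant gradient orientation (radians) of a grayscale
        block via the standard least-squares method on central differences."""
        h, w = len(block), len(block[0])
        gxy = 0.0      # accumulates 2 * gx * gy
        gxx_yy = 0.0   # accumulates gx^2 - gy^2
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                gx = (block[i][j + 1] - block[i][j - 1]) / 2.0
                gy = (block[i + 1][j] - block[i - 1][j]) / 2.0
                gxy += 2.0 * gx * gy
                gxx_yy += gx * gx - gy * gy
        return 0.5 * math.atan2(gxy, gxx_yy)
    ```

    The ridge direction is perpendicular to this gradient orientation; a field of such per-block angles is what direction-based feature factors are typically built from.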


Development of a VIN Character Recognition System for Motor Vehicles (자동차 VIN 문자 인식 시스템 개발)

  • 이용중;이화춘;류재엽
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 2000.10a
    • /
    • pp.68-73
    • /
    • 2000
  • This study implements automatic recognition of VIN (Vehicle Identification Number) characters with a computer vision system. The automatic character recognition method consists of thinning and recognition of each character. VIN characters and background are classified by counting the sizes of connected pixel regions. Thinning is applied to segment connected fundamental graphemes using Hilditch's algorithm. The contour of each VIN character is traced using Freeman's direction-tracing algorithm.
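
    Freeman's direction tracing assigns one of eight codes to each step between successive boundary points of a contour. A minimal sketch (point format and names are illustrative):

    ```python
    # 8-connected Freeman directions: 0=E, 1=NE, 2=N, ... 7=SE (x right, y up).
    DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

    def chain_code(points):
        """Freeman chain code of a contour given as an ordered list of (x, y)
        points where consecutive points are 8-neighbors."""
        codes = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            codes.append(DIRS.index((x1 - x0, y1 - y0)))
        return codes

    # A unit square traced counterclockwise: East, North, West, South.
    square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
    ```

    The resulting code sequence is a compact, translation-invariant shape signature, which is why chain codes are a common front end for character recognition.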


Speaker Identification Using Augmented PCA in Unknown Environments (부가 주성분분석을 이용한 미지의 환경에서의 화자식별)

  • Yu, Ha-Jin
    • MALSORI
    • /
    • no.54
    • /
    • pp.73-83
    • /
    • 2005
  • The goal of our research is to build a text-independent speaker identification system that can be used in any condition without any additional adaptation process. The performance of speaker recognition systems can be severely degraded under unknown, mismatched microphone and noise conditions. In this paper, we show that PCA (principal component analysis) can improve performance in such situations. We also propose an augmented PCA process, which augments the original feature vectors with class-discriminative information before the PCA transformation and selects the best direction for each pair of highly confusable speakers. The proposed method reduced the relative recognition error by 21%.
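
    As a minimal illustration of the direction-selection idea behind PCA (not the paper's augmented-PCA procedure itself), the first principal direction of 2-D data can be computed in closed form from the covariance entries:

    ```python
    import math

    def principal_direction(xs, ys):
        """Angle (radians) of the first principal component of 2-D points,
        from the closed-form eigendirection of the 2x2 covariance matrix."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cxx = sum((x - mx) ** 2 for x in xs) / n
        cyy = sum((y - my) ** 2 for y in ys) / n
        cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
        return 0.5 * math.atan2(2 * cxy, cxx - cyy)
    ```

    In the paper's setting, the feature space is high-dimensional and the chosen direction is additionally steered by class labels, but the underlying operation is the same: projecting features onto directions of maximal (discriminative) variance.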


Multiple Vehicle Recognition based on Radar and Vision Sensor Fusion for Lane Change Assistance (차선 변경 지원을 위한 레이더 및 비전센서 융합기반 다중 차량 인식)

  • Kim, Heong-Tae;Song, Bongsob;Lee, Hoon;Jang, Hyungsun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.2
    • /
    • pp.121-129
    • /
    • 2015
  • This paper presents a multiple vehicle recognition algorithm based on radar and vision sensor fusion for lane change assistance. To determine whether a lane change is possible, it is necessary to recognize not only the primary vehicle located in-lane, but also other adjacent vehicles in the left and/or right lanes. With the given sensor configuration, two challenging problems are considered. One is that a guardrail detected by the front radar might be recognized as a left or right vehicle due to its geometric characteristics. This problem can be solved by a guardrail recognition algorithm based on motion and shape attributes. The other problem is that the recognition of rear vehicles in the left or right lanes might be wrong, especially on curved roads, due to the low accuracy of the lateral position measured by the rear radars and the lack of knowledge of road curvature in the backward direction. To solve this problem, it is proposed that the road curvature measured by the front vision sensor be used to derive the road curvature toward the rear direction. Finally, the proposed algorithm for multiple vehicle recognition is validated using field test data from real roads.
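
    Compensating rear-radar lateral positions with forward-measured curvature can be sketched with a parabolic lane model, assuming a constant curvature extended backward. The sign conventions, lane width, and function names here are illustrative assumptions, not the paper's implementation:

    ```python
    def lane_lateral_offset(kappa, x):
        """Approximate lateral offset of a constant-curvature lane centerline
        at longitudinal distance x (parabolic approximation y ~= kappa*x^2/2)."""
        return 0.5 * kappa * x * x

    def relative_lane(y_measured, kappa, x, lane_width=3.5):
        """Assign a rear target at (x, y_measured) to the left/ego/right lane
        after removing the curvature-induced lateral offset."""
        y = y_measured - lane_lateral_offset(kappa, x)
        if y > lane_width / 2:
            return "left"
        if y < -lane_width / 2:
            return "right"
        return "ego"
    ```

    Without this compensation, a rear vehicle following the ego lane on a curve appears laterally displaced and can be misassigned to an adjacent lane.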

New Developmental Directions of Telecommunications for Disability Welfare (장애인복지를 위한 정보통신의 발전방향)

  • 박민수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.4 no.1
    • /
    • pp.35-43
    • /
    • 2000
  • This paper studied developmental directions of telecommunications for disability welfare, using the Delphi method. Persons with disabilities are classified as having motor disabilities, visual handicaps, hearing impairments, and language and speech disorders. Persons with motor disabilities need speech recognition technology, video recognition technology, and breath-capacity recognition technology. Persons with visual handicaps need display recognition technology, speech recognition technology, text recognition technology, intelligent conversion-processing technology, and video recognition and speech synthesis technology. Persons with hearing impairments and language-speech disorders need speech-signal processing technology, speech recognition technology, intelligent conversion-processing technology, video recognition technology, and speech synthesis technology. The results of this study are as follows: first, a telecommunications organization for persons with disabilities must be established. Second, persons with disabilities need universal service. Third, they need information education. Fourth, research on telecommunications needs support. Fifth, small telecommunications companies need support. Sixth, the software industry needs new development. Seventh, persons with disabilities need standard guidelines for telecommunications.


Person-Independent Facial Expression Recognition with Histograms of Prominent Edge Directions

  • Makhmudkhujaev, Farkhod;Iqbal, Md Tauhid Bin;Arefin, Md Rifat;Ryu, Byungyong;Chae, Oksam
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.12
    • /
    • pp.6000-6017
    • /
    • 2018
  • This paper presents a new descriptor, named Histograms of Prominent Edge Directions (HPED), for the recognition of facial expressions in a person-independent environment. In this paper, we raise the issue of sampling error in generating the code-histogram from spatial regions of the face image, as observed in the existing descriptors. HPED describes facial appearance changes based on the statistical distribution of the top two prominent edge directions (i.e., primary and secondary direction) captured over small spatial regions of the face. Compared to existing descriptors, HPED uses a smaller number of code-bins to describe the spatial regions, which helps avoid sampling error despite having fewer samples while preserving the valuable spatial information. In contrast to the existing Histogram of Oriented Gradients (HOG) that uses the histogram of the primary edge direction (i.e., gradient orientation) only, we additionally consider the histogram of the secondary edge direction, which provides more meaningful shape information related to the local texture. Experiments on popular facial expression datasets demonstrate the superior performance of the proposed HPED against existing descriptors in a person-independent environment.
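
    The core of HPED, keeping only the two most frequent (primary and secondary) edge directions per spatial region, can be sketched as follows. The bin count, gradient input format, and names are illustrative, not the descriptor's exact specification:

    ```python
    import math
    from collections import Counter

    def top_two_directions(gradients, bins=8):
        """Return the two most frequent quantized edge directions in a region,
        given an iterable of (gx, gy) gradient pairs."""
        hist = Counter()
        for gx, gy in gradients:
            ang = math.atan2(gy, gx) % (2 * math.pi)          # angle in [0, 2*pi)
            hist[int(ang / (2 * math.pi) * bins) % bins] += 1  # quantize to a bin
        ranked = [b for b, _ in hist.most_common()]
        return ranked[0], ranked[1] if len(ranked) > 1 else ranked[0]
    ```

    Describing each region by just these two dominant bins, rather than a full orientation histogram, is what lets HPED keep spatial detail while avoiding the sampling error that small regions cause for many-bin histograms.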