• Title/Abstract/Keyword: object features


Vision-Based Activity Recognition Monitoring Based on Human-Object Interaction at Construction Sites

  • Chae, Yeon;Lee, Hoonyong;Ahn, Changbum R.;Jung, Minhyuk;Park, Moonseo
    • 국제학술발표논문집 / The 9th International Conference on Construction Engineering and Project Management / pp.877-885 / 2022
  • Vision-based activity recognition has been widely attempted at construction sites to estimate productivity and enhance workers' health and safety. Previous studies have focused on extracting an individual worker's postural information from sequential image frames for activity recognition. However, workers of various trades perform different tasks with similar postural patterns, which degrades the performance of activity recognition based on postural information alone. To address this, this research exploited the concept of human-object interaction, the interaction between a worker and the surrounding objects, considering that trade workers interact with specific objects (e.g., working tools or construction materials) relevant to their trades. The research developed an approach to understand the context of sequential image frames based on four features: a posture feature, an object feature, a spatial feature, and a temporal feature. The posture and object features were used to analyze the interaction between the worker and the target object, while the other two features were used to detect movements across the entire image frame in the temporal and spatial domains. The developed approach used convolutional neural networks (CNNs) as feature extractors and activity classifiers, with long short-term memory (LSTM) also serving as an activity classifier. The approach achieved an average accuracy of 85.96% in classifying 12 target construction tasks performed by two trades of workers, which was higher than two benchmark models. This experimental result indicates that integrating the concept of human-object interaction offers substantial benefits for activity recognition when workers of various trades coexist in a scene.
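The four-feature fusion described in the abstract can be sketched at a toy level. Everything below (the one-dimensional dummy features, the class names, and the nearest-centroid classifier standing in for the paper's CNN/LSTM classifiers) is a hypothetical illustration, not the authors' implementation:

```python
# Toy sketch: fuse posture, object, spatial, and temporal features per frame,
# then classify the pooled sequence. A nearest-centroid classifier stands in
# for the CNN/LSTM classifiers used in the paper.
import math

def fuse_frame_features(posture, obj, spatial, temporal):
    """Concatenate the four per-frame feature vectors into one vector."""
    return list(posture) + list(obj) + list(spatial) + list(temporal)

def sequence_feature(frames):
    """Average the fused frame vectors over the sequence (temporal pooling)."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

def classify(vec, centroids):
    """Return the label of the nearest class centroid (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(vec, centroids[label]))

# Two hypothetical activity classes with hand-made centroids.
centroids = {"hammering": [1.0, 0.0, 0.5, 0.2], "sawing": [0.0, 1.0, 0.5, 0.8]}
frames = [fuse_frame_features([0.9], [0.1], [0.4], [0.3]),
          fuse_frame_features([1.1], [0.0], [0.6], [0.1])]
print(classify(sequence_feature(frames), centroids))  # → hammering
```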

  • PDF

자율주행 로봇을 위한 다중 특징을 이용하여 외부환경에서 물체 분석 (Object Analysis on Outdoor Environment Using Multiple Features for Autonomous Navigation Robot)

  • 김대년;조강현
    • 한국멀티미디어학회논문지 / Vol.13 No.5 / pp.651-662 / 2010
  • This study describes a method for finding important objects in outdoor environments for an autonomous navigation robot. To find objects, the robot first detects and segments objects from the images acquired while navigating outdoors. Candidate objects are divided into natural objects (sky and trees) and artificial objects (buildings). Multiple features are used to segment the candidate objects: color, line segments, contextual information, co-occurrence matrices, vanishing points, and principal components. These features are combined according to the characteristics of each object to segment it. The multiple features yield region segmentation results through various feature extraction methods, using spatial information about the object, geometric information derived from human prior knowledge, and spatial frequency. Object analysis uses the segmented regions to find geometric properties of building facades such as wall regions, windows, and entrances. A mesh for a building is obtained by intersecting the vertical and horizontal line segments associated with vanishing points. Wall regions of a building are detected by merging neighboring parallelogram cells of the mesh that have similar color. The height and size of a building are estimated from the number of floors and the number of rooms on the same floor inferred from the windows. In the experiments, object regions are segmented using the multiple features and objects are analyzed using the geometric properties of buildings.
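One of the multiple features listed above, the co-occurrence matrix, can be computed with a short routine. This is a generic gray-level co-occurrence matrix (GLCM) sketch with a horizontal offset and four gray levels, not the paper's exact implementation:

```python
# Minimal gray-level co-occurrence matrix (GLCM): count how often gray level i
# appears immediately to the left of gray level j (horizontal offset (0, 1)).
def glcm(image, levels):
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

# 4-level toy image; uniform regions give strong diagonal entries, which is
# why GLCM statistics can separate textures such as tree foliage vs. wall.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3]]
print(glcm(img, 4))
```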

상황 정보 기반 양방향 추론 방법을 이용한 이동 로봇의 물체 인식 (Object Recognition for Mobile Robot using Context-based Bi-directional Reasoning)

  • 임기현;류광근;서일홍;김종복;장국현;강정호;박명관
    • 대한전기학회:학술대회논문집 / 대한전기학회 2007년도 심포지엄 논문집 정보 및 제어부문 / pp.6-8 / 2007
  • In this paper, we propose a reasoning system for object recognition and space classification that uses not only visual features but also contextual information. A mobile robot, especially a vision-based one, must perceive objects and classify spaces in real environments. Several visual features, such as texture, SIFT, and color, are used for object recognition. Because of sensor uncertainty and object occlusion, vision-based perception faces many difficulties. To show the validity of our reasoning system, experimental results are presented in which objects and spaces are inferred by bi-directional rules even with partial and uncertain information. The system combines top-down and bottom-up approaches.
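The bi-directional rules mentioned above can be illustrated with a minimal sketch: forward rules map observed visual evidence to object hypotheses, and backward rules map a hypothesis to the features it predicts, which can then be verified. The rule base and feature names here are invented for illustration:

```python
# Sketch of bi-directional reasoning for object recognition.
FORWARD = {  # feature -> objects it supports
    "red_color": {"fire_extinguisher", "cup"},
    "cylinder_shape": {"fire_extinguisher", "can"},
}
BACKWARD = {  # object -> features it predicts
    "fire_extinguisher": {"red_color", "cylinder_shape"},
    "cup": {"red_color", "handle"},
}

def forward_infer(observed):
    """Objects supported by every observed feature (set intersection)."""
    candidates = None
    for f in observed:
        s = FORWARD.get(f, set())
        candidates = s if candidates is None else candidates & s
    return candidates or set()

def backward_expect(obj, observed):
    """Features predicted by the hypothesis but not yet observed."""
    return BACKWARD.get(obj, set()) - set(observed)

observed = {"red_color", "cylinder_shape"}
print(forward_infer(observed))                 # → {'fire_extinguisher'}
print(backward_expect("cup", {"red_color"}))   # → {'handle'}
```

Even with partial evidence (only `red_color` observed), the backward pass tells the robot which missing feature (`handle`) would confirm or reject the `cup` hypothesis.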


증강현실 서비스를 위한 Camshift와 SURF를 개선한 객체 검출 및 추적 구현 (Implementation of Improved Object Detection and Tracking based on Camshift and SURF for Augmented Reality Service)

  • 이용환;김흥준
    • 반도체디스플레이기술학회지 / Vol.16 No.4 / pp.97-102 / 2017
  • Object detection and tracking have become one of the most active research areas in recent years and play an important role in computer vision applications in daily life. Many tracking techniques have been proposed; Camshift is an effective algorithm for real-time dynamic object tracking, but it uses only color features, so it is sensitive to illumination and other environmental factors. This paper presents and implements an effective moving object detection and tracking method that reduces the influence of illumination interference and improves tracking performance against backgrounds of similar color. The implemented prototype system recognizes objects using invariant features and reduces the dimension of the feature descriptor to address these problems. The experimental results show that the system outperforms existing methods in processing time and maintains better tracking ratios in various environments.
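The core of Camshift is a mean-shift iteration on the color-probability back-projection: the window repeatedly moves to the centroid of the probability mass it covers. A one-dimensional sketch of that iteration (Camshift additionally adapts the window size each step, which is omitted here for clarity):

```python
# 1-D mean shift on a back-projection: move a fixed-width window to the
# centroid of the probability mass inside it until it stops moving.
def mean_shift_1d(backproj, center, half_width, iters=10):
    for _ in range(iters):
        lo = max(0, center - half_width)
        hi = min(len(backproj), center + half_width + 1)
        mass = sum(backproj[lo:hi])
        if mass == 0:
            break
        centroid = sum(i * backproj[i] for i in range(lo, hi)) / mass
        new_center = round(centroid)
        if new_center == center:
            break
        center = new_center
    return center

# Probability mass peaked at index 7; a window started at 3 climbs to it.
backproj = [0, 0, 1, 1, 2, 4, 8, 9, 8, 4, 1, 0]
print(mean_shift_1d(backproj, 3, 2))  # → 7
```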


계기판 벌브 인식 알고리즘 (Algorithm for Recognizing Bulb in Cluster)

  • 이철헌;설성욱;김효성
    • 대한전자공학회논문지TE / Vol.39 No.1 / pp.37-45 / 2002
  • This paper proposes new features for recognizing bulbs in a vehicle instrument cluster. The feature used in most model-based object recognition is the set of polygonal approximation points of an object. When a matching method using this feature is applied to small objects such as the bulbs in a vehicle cluster, the matching rate is low. To raise the matching rate, this paper proposes new features: the circular distribution of object pixels and the ratio of distances from the object's center to its boundary. To use all three features together, a new decision function is defined. The experimental results compare the number of unmatched objects between the matching method using polygonal approximation points alone and the matching method using all three features.
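The center-to-boundary distance ratio can be sketched for a binary object mask: measure the distance from the centroid to the boundary in a few directions and take ratios, which are invariant to scale. The grid, directions, and mask below are invented for illustration, not the paper's data:

```python
# Sketch of a center-to-boundary distance-ratio feature on a binary mask.
def centroid(mask):
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def boundary_distance(mask, cx, cy, dx, dy):
    """Step from the centroid along (dx, dy) until leaving the object."""
    d = 0.0
    x, y = cx, cy
    while 0 <= round(y) < len(mask) and 0 <= round(x) < len(mask[0]) \
            and mask[round(y)][round(x)]:
        x, y, d = x + dx, y + dy, d + 1
    return d

mask = [[0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0]]
cx, cy = centroid(mask)
right = boundary_distance(mask, cx, cy, 1, 0)
down = boundary_distance(mask, cx, cy, 0, 1)
print(round(right / down, 2))  # width/height ratio of the blob → 1.0
```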

Signature 기반의 겹쳐진 원형 물체 검출 및 인식 기법 (Detection and Recognition of Overlapped Circular Objects Based on a Signature Representation Scheme)

  • 박상범;한헌수;한영준
    • 제어로봇시스템학회논문지 / Vol.14 No.1 / pp.54-61 / 2008
  • This paper proposes a new algorithm for detecting and recognizing overlapped objects in a stack of arbitrarily located objects using a signature representation scheme. The proposed algorithm consists of two processes: detecting the overlap of objects and determining the boundary between overlapping objects. To detect overlap, the edge image of the object region is first extracted, and an area in the object region is considered an object area if it is surrounded by a closed edge. For each object, a signature image is constructed by measuring, along the angle axis, the distances from the center of the object to the edge points located at every angle with reference to the center. When an object is not overlapped, its features, which consist of the positions and angles of the outstanding points in the signature, are searched in the database to find its corresponding model. When an object is overlapped, its features are partially matched against the object models, and the best matching model is selected as the corresponding model. The boundary between the overlapping objects is determined by projecting the signature onto the original image. The performance of the proposed algorithm has been tested on the task of picking the top or non-overlapped object from a stack of arbitrarily located objects. In the experiments, a recognition rate of 98% was achieved.
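The signature representation described above can be sketched as a 1-D function of angle: the distance from the object center to the contour in each angular bin. The contour, bin count, and max-pooling per bin are illustrative choices, not the paper's exact construction:

```python
# Sketch of a signature representation: sample the center-to-contour distance
# as a function of angle. Peaks ("outstanding points") in this 1-D signature
# are the features matched against the model database.
import math

def signature(contour, center, n_angles=8):
    """Distance from center to the farthest contour point in each angle bin."""
    cx, cy = center
    bins = [0.0] * n_angles
    for x, y in contour:
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        k = int(ang / (2 * math.pi) * n_angles) % n_angles
        bins[k] = max(bins[k], math.hypot(x - cx, y - cy))
    return bins

# Contour of an axis-aligned square centered at the origin: corners are
# farther from the center than edge midpoints, so the signature oscillates.
square = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
sig = signature(square, (0, 0))
print([round(v, 2) for v in sig])  # alternating 1.0 and 1.41 (sqrt(2))
```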

For the Association between 3D VAR Model and 2D Features

  • Kiuchi, Yasuhiko;Tanaka, Masaru;Fujiki, Jun;Mishima, Taketoshi
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 ITC-CSCC-3 / pp.1404-1407 / 2002
  • Although we see objects as 2D images through our eyes, we can reconstruct the shape and/or depth of objects. To realize this ability with computers, a method is required that can estimate the 3D features of an object from 2D images. As a feature that represents 3D shapes effectively, the three-dimensional vector autoregressive (VAR) model has been proposed. If this feature can be associated with a feature of 2D shape, the above aim might be achieved. On the other hand, as features that represent 2D shapes, quasi-moment features have been proposed. As the first step toward associating these features, we constructed a real-time simulator that computes both features concurrently from object data (3D curves). This simulator can also rotate an object and estimate the rotation. The method using the 3D VAR model estimates the rotation correctly, but the estimation by quasi-moment features contains large errors. The reason would be that the projected images consist of points only and are not large enough to estimate the correct 3D rotation parameters.
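In the spirit of the 2D moment features above, a minimal sketch of moment-based rotation estimation: the principal-axis angle of a point set follows from its second central moments, and the rotation between two views is the difference of their angles. The point sets are invented for illustration; this is the classical moment formula, not the paper's quasi-moment definition:

```python
# Estimate 2D rotation between two point sets from second central moments.
import math

def orientation(points):
    """Principal-axis angle: 0.5 * atan2(2*mu11, mu20 - mu02)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    mu20 = sum((x - cx) ** 2 for x, y in points)
    mu02 = sum((y - cy) ** 2 for x, y in points)
    mu11 = sum((x - cx) * (y - cy) for x, y in points)
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

def rotate(points, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

pts = [(-2, 0), (-1, 0), (1, 0), (2, 0)]      # elongated along the x-axis
rotated = rotate(pts, math.radians(30))
est = orientation(rotated) - orientation(pts)
print(round(math.degrees(est), 1))  # → 30.0
```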


적외선 영상에서의 불변 특징 정보를 이용한 목표물 인식 (Object Recognition by Invariant Feature Extraction in FLIR)

  • 권재환;이광연;김성대
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 추계종합학술대회 논문집(4) / pp.65-68 / 2000
  • This paper describes an approach for extracting invariant features using a view-based representation and recognizing an object with a high-speed search method in forward-looking infrared (FLIR) imagery. We use a reformulated eigenspace technique based on robust estimation to extract features that are robust to outliers such as noise and clutter. After extracting features, we recognize an object using a partial distance search method for calculating the Euclidean distance. The experimental results show that the proposed method improves the recognition rate compared with standard PCA.
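Partial distance search is a simple early-abandon trick for nearest-neighbor matching: while summing squared differences, a candidate is dropped as soon as its running distance exceeds the best distance found so far. A minimal sketch with an invented toy database:

```python
# Partial distance search (early-abandon nearest neighbour in Euclidean space).
def partial_distance_search(query, database):
    best_idx, best_d2 = -1, float("inf")
    for idx, vec in enumerate(database):
        d2 = 0.0
        for q, v in zip(query, vec):
            d2 += (q - v) ** 2
            if d2 >= best_d2:      # early abandon: cannot beat current best
                break
        else:
            best_idx, best_d2 = idx, d2
    return best_idx

db = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [0.1, 0.0, 0.2]]
print(partial_distance_search([0.1, 0.1, 0.1], db))  # → 2
```

The savings grow with dimensionality: in a high-dimensional eigenspace, most candidates are abandoned after only a few terms of the sum.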


3차원 물체 인식을 위한 전략적 매칭 알고리듬 (Strategical Matching Algorithm for 3-D Object Recognition)

  • 이상근;이선호;송호근;최종수
    • 전자공학회논문지C / Vol.35C No.1 / pp.55-63 / 1998
  • This paper presents a new matching algorithm based on a Hopfield neural network for 3-D object recognition. In the proposed method, a model object is represented by a set of polygons in a single coordinate system, and each polygon is described by a set of features (feature attributes). In 3-D object recognition, the scale and pose of the object are important factors, so we propose a strategy for 3-D object recognition that is independent of scale and pose. In this strategy, the respective features of the input and model objects are converted to strategical constants when they are compared with one another. Finally, we show the robustness of the proposed method through experimental results covering the classification of input objects and the matching sequence under 3-D rotation and scaling.
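One plausible reading of the "strategical constants" idea is to compare features through scale-invariant ratios rather than raw values; the sketch below illustrates that general principle with polygon edge lengths normalized by perimeter. The polygons and the specific normalization are invented for illustration and are not the paper's definition:

```python
# Compare polygon features through scale-invariant ratios: edge length over
# perimeter is unchanged when the object is uniformly scaled.
def strategical_constants(edge_lengths):
    perimeter = sum(edge_lengths)
    return [round(e / perimeter, 6) for e in edge_lengths]

model = [2.0, 3.0, 4.0, 3.0]            # model polygon edge lengths
scene = [1.0, 1.5, 2.0, 1.5]            # same polygon at half the scale
print(strategical_constants(model) == strategical_constants(scene))  # → True
```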


AANet: Adjacency auxiliary network for salient object detection

  • Li, Xialu;Cui, Ziguan;Gan, Zongliang;Tang, Guijin;Liu, Feng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.15 No.10 / pp.3729-3749 / 2021
  • Deep convolutional network-based salient object detection (SOD) has achieved impressive performance. However, making full use of the multi-scale information in the extracted features and choosing an appropriate feature fusion method for processing feature maps remain challenging problems. In this paper, we propose a new adjacency auxiliary network (AANet) based on multi-scale feature fusion for SOD. First, we design a parallel connection feature enhancement module (PFEM) for each feature extraction layer, which increases feature density by connecting different dilated convolution branches in parallel and adds a channel attention flow to fully extract the contextual information of the features. Adjacent layer features, which have similar degrees of abstraction but different characteristic properties, are then fused through an adjacency auxiliary module (AAM) to eliminate ambiguity and noise in the features. In addition, to refine the features effectively and obtain more accurate object boundaries, we design an adjacency decoder (AAM_D) based on the AAM, which concatenates the features of adjacent layers, extracts their spatial attention, and then combines them with the output of the AAM. The outputs of AAM_D, features carrying both semantic information and spatial detail, are used as salient prediction maps for multi-level joint supervision. Experimental results on six benchmark SOD datasets demonstrate that the proposed method outperforms similar previous methods.
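The channel attention flow mentioned in the abstract can be sketched generically: globally pool each channel, squash the pooled values into (0, 1) gating weights, and rescale the channels. The shapes and the sigmoid gating below are a generic squeeze-and-gate sketch, not AANet's exact module:

```python
# Generic channel attention: global average pooling per channel, sigmoid
# gate, then rescale each channel by its gate weight.
import math

def channel_attention(feature_maps):
    """feature_maps: list of channels, each a 2-D list of activations."""
    weights = []
    for ch in feature_maps:
        pooled = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        weights.append(1.0 / (1.0 + math.exp(-pooled)))   # sigmoid gate
    scaled = [[[v * w for v in row] for row in ch]
              for ch, w in zip(feature_maps, weights)]
    return scaled, weights

fmap = [[[1.0, 1.0], [1.0, 1.0]],     # high-activation channel
        [[0.0, 0.0], [0.0, 0.0]]]     # low-activation channel
scaled, w = channel_attention(fmap)
print([round(x, 3) for x in w])  # the informative channel gets more weight
```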