• Title/Summary/Keyword: Object Extractor


Context-aware Video Surveillance System

  • An, Tae-Ki; Kim, Moon-Hyun
    • Journal of Electrical Engineering and Technology / v.7 no.1 / pp.115-123 / 2012
  • A video analysis system used to detect events in video streams generally involves several processes, including object detection, object trajectory analysis, and recognition of the trajectories by comparison with an a priori trained model. However, these processes do not work well in complex environments with many occlusions, mirror effects, and/or shadow effects. We propose a new approach to a context-aware video surveillance system that detects predefined contexts in video streams. The proposed system consists of two modules: a feature extractor and a context recognizer. The feature extractor calculates a moving energy, which represents the amount of moving objects in a video stream, and a stationary energy, which represents the amount of still objects. We represent situations and events as changes in the moving and stationary energies of video streams. The context recognizer determines whether predefined contexts appear in a video stream using the moving and stationary energies extracted by the feature extractor. To train each context model and recognize predefined contexts, we propose and use DAdaBoost, a new ensemble classifier based on the AdaBoost algorithm, one of the best-known ensemble classification algorithms. The proposed approach is expected to be robust in more complex environments that exhibit mirror and/or shadow effects.
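
A minimal sketch of the moving/stationary energy idea described in this abstract, assuming simple frame differencing against a background model; the thresholds and exact energy definitions here are illustrative assumptions, not the paper's formulas:

```python
import numpy as np

def moving_energy(prev_frame, curr_frame, threshold=15):
    # Fraction of pixels whose intensity changed between consecutive
    # frames -- a rough proxy for the amount of moving objects.
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return float((diff > threshold).mean())

def stationary_energy(background, curr_frame, prev_frame, threshold=15):
    # Fraction of pixels that differ from the background model but did
    # not change between consecutive frames -- still (stationary) objects.
    fg = np.abs(curr_frame.astype(int) - background.astype(int)) > threshold
    moving = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > threshold
    return float((fg & ~moving).mean())
```

Per-frame energy values like these could then be fed as features to an AdaBoost-style ensemble for context recognition.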

Bottleneck-based Siam-CNN Algorithm for Object Tracking (객체 추적을 위한 보틀넥 기반 Siam-CNN 알고리즘)

  • Lim, Su-Chang; Kim, Jong-Chan
    • Journal of Korea Multimedia Society / v.25 no.1 / pp.72-81 / 2022
  • Visual object tracking is one of the most fundamental problems in computer vision: the tracker localizes the target object with a bounding box in each video frame. In this paper, a custom CNN is created to extract rich and discriminative object features. This network is constructed as a Siamese network for use as a feature extractor. The input images are passed through convolution blocks composed of bottleneck layers, which emphasize salient features. The feature maps of the target object and the search area extracted by the Siamese network are fed into a region proposal network, which estimates the object area. The performance of the tracking algorithm was evaluated on the OTB2013 dataset, using the success plot and precision plot as evaluation metrics. In the experiments, scores of 0.611 on the success plot and 0.831 on the precision plot were achieved.
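
A sketch of the matching step at the heart of Siamese trackers like the one in this abstract: the template's feature map is cross-correlated over the search-region feature map, and the response peak marks the estimated target location. This is the generic formulation, not the paper's bottleneck network or proposal head:

```python
import numpy as np

def siamese_response(template_feat, search_feat):
    # Slide the template feature over the search-region feature and
    # compute a cross-correlation response map; the peak of the map
    # is the estimated target location.
    th, tw = template_feat.shape
    sh, sw = search_feat.shape
    out = np.zeros((sh - th + 1, sw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(template_feat * search_feat[i:i + th, j:j + tw])
    return out
```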

Development of Robust Feature Detector Using Sonar Data (초음파 데이터를 이용한 강인한 형상 검출기 개발)

  • Lee, Se-Jin; Lim, Jong-Hwan; Cho, Dong-Woo
    • Journal of the Korean Society for Precision Engineering / v.25 no.2 / pp.35-42 / 2008
  • This study introduces a robust feature detector for sonar data from a general fixed type of sonar ring. The detector is composed of a data association filter and a feature extractor. The data association filter removes the false returns frequently produced by sonar sensors and classifies data gathered from various objects and robot positions into groups in which all the data come from the same object. The feature extractor then calculates the geometry of the feature for each group. We show that circle features can be extracted in addition to line and point features. The proposed method was applied with a real robot in a real home environment.
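
Once returns are grouped by object, extracting a circle feature amounts to fitting a circle to the grouped 2-D points. A minimal sketch using the algebraic (Kasa) least-squares fit; the paper's exact estimator is not specified here, so this is one plausible choice:

```python
import numpy as np

def fit_circle(points):
    # Kasa fit: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    # is linear in (cx, cy, c), so it can be solved by least squares.
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r
```

Line and point features for a group can be fitted analogously (total least squares for a line, the mean for a point).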

Semi-Automatic Object-Action Extractor to Build the Utterance Corpus for the Dialogue System (대화 시스템의 말뭉치 구축을 위한 Object-Action 반자동 추출기)

  • Yoon, JungMin; Hwang, Jaewon; Ko, Youngjoong
    • Annual Conference on Human and Language Technology / 2015.10a / pp.220-223 / 2015
  • This paper describes a tool that semi-automatically extracts Objects and Actions in order to build corpora for dialogue systems. The proposed tool aims to extract appropriate Objects and Actions based on the results of morphological analysis and dependency parsing. These results, however, can contain various errors, which may lead to incorrect Object and Action extraction. Moreover, the case of a noun carries important information for Object extraction, but characteristics of Korean such as particle omission cause ambiguity in case tagging. The semi-automatic extractor proposed in this paper therefore lets the user easily correct erroneous morphological-analysis and dependency-parsing results, and flags Objects whose case may be ambiguous, enabling correct Object and Action extraction. Corpus construction with the extractor proceeds in three steps: 1) morphological analysis, 2) dependency parsing, and 3) Object-Action extraction. The utterances used in the experiment were 500 utterances from the lodging and airport domains of a dialogue system for tourist conversation, of which 259 exhibited ambiguity during tagging. When the ambiguous utterances were tagged with the semi-automatic extractor, 51.8% of all utterances could be tagged quickly and accurately.
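
A hypothetical simplification of the Object-Action extraction step: given dependency-parse triples, take a noun in an object relation as the Object and its verbal head as the Action. The triple format and the "obj" relation label here are assumptions; real Korean parses need the case-particle disambiguation that motivates the tool's user-confirmation step:

```python
def extract_object_action(dependencies):
    # dependencies: list of (dependent, relation, head) triples from a
    # dependency parse. Collect (Object, Action) pairs where the
    # dependent is in an object relation to a verbal head.
    pairs = []
    for dep, rel, head in dependencies:
        if rel == "obj":
            pairs.append((dep, head))
    return pairs
```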


Parallel Dense Merging Network with Dilated Convolutions for Semantic Segmentation of Sports Movement Scene

  • Huang, Dongya; Zhang, Li
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.11 / pp.3493-3506 / 2022
  • In the field of scene segmentation, precisely segmenting object boundaries in sports movement scene images is a great challenge. The geometric and spatial information of the image is very important, but many models tend to lose it, which strongly affects their performance. To alleviate this problem, a parallel dense dilated convolution merging network (termed PDDCM-Net) is proposed. The proposed PDDCM-Net consists of a feature extractor, parallel dilated convolutions, and dense dilated convolutions merged with different dilation rates. We utilize combinations of dilated convolutions that expand the receptive field of the model with fewer parameters than other advanced methods. Importantly, PDDCM-Net fuses both low-level and high-level information, in effect alleviating the difficulty of accurately segmenting object edges and localizing objects. Experimental results validate that the proposed PDDCM-Net achieves a great improvement over several representative models on the COCO-Stuff data set.
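
The receptive-field arithmetic behind this abstract's claim can be sketched in a few lines: each stacked k × k convolution with dilation d grows the effective receptive field by (k − 1)·d while its parameter count stays fixed. The dilation rates below are illustrative, not PDDCM-Net's actual configuration:

```python
def receptive_field(dilation_rates, kernel_size=3):
    # Effective receptive field (in pixels, stride 1) of a stack of
    # kernel_size x kernel_size convolutions with the given dilations:
    # each layer adds (kernel_size - 1) * dilation to the field.
    rf = 1
    for d in dilation_rates:
        rf += (kernel_size - 1) * d
    return rf
```

Three plain 3 × 3 layers see only 7 pixels, while the same three layers with dilations 1, 2, 4 see 15 — the same parameter budget, more than double the context.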

On-line vision system for transistor inspection (트랜지스터 검사용 온라인 비젼 시스템)

  • 노경완; 전정희; 김충원
    • Proceedings of the IEEK Conference / 1998.06a / pp.769-772 / 1998
  • This paper presents efficient techniques for the visual inspection of taped electronic parts, suitable for real-time implementation. The developed system runs on an IBM-compatible personal computer with a frame grabber and a digital input-output board, and is connected in real time to the programmable logic controller unit of the taping machine. Using a queuing structure, an operator or an extractor machine can easily remove defective parts from the production line. We also design a new illumination system for acquiring the shape and surface features of the object, which reduces the pre-processing steps and the processing time.


Feature Extraction in 3-Dimensional Object with Closed-surface using Fourier Transform (Fourier Transform을 이용한 3차원 폐곡면 객체의 특징 벡터 추출)

  • 이준복; 김문화; 장동식
    • Journal of the Institute of Convergence Signal Processing / v.4 no.3 / pp.21-26 / 2003
  • A new method for realizing a 3-dimensional object pattern recognition system using a Fourier-based feature extractor is proposed. The procedure to obtain the invariant feature vector is as follows: a closed surface is generated by tracing the surface of the object in 3-dimensional polar coordinates, and the centroidal distances between the object's geometric center and each closed-surface point are calculated; this distance vector is translation invariant. The distance vector is then normalized, making the result scale invariant. Finally, the Fourier spectrum of the normalized distance vector is calculated; the spectrum is rotation invariant. The Fourier-based features generated by this procedure completely eliminate the effect of variations in the translation, scale, and rotation of a 3-dimensional object with a closed surface. The experimental results show that the proposed method has high accuracy.
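
The invariance chain in this abstract can be sketched directly: centroidal distances cancel translation, max-normalization cancels scale, and Fourier magnitudes are unchanged by a cyclic shift of the sampling start. Shown on 2-D contour points for brevity; the paper applies the same idea to 3-D closed surfaces, and the function below works unchanged for 3-D point sets:

```python
import numpy as np

def fourier_shape_feature(surface_points, n_coeffs=8):
    pts = np.asarray(surface_points, dtype=float)
    centroid = pts.mean(axis=0)
    dist = np.linalg.norm(pts - centroid, axis=1)  # translation invariant
    dist = dist / dist.max()                       # scale invariant
    spectrum = np.abs(np.fft.fft(dist))            # magnitude: invariant to
    return spectrum[:n_coeffs]                     # cyclic shift of start point
```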


Implementation of Embedded System Based Simulator Controller Using Camera Motion Parameter Extractor (카메라 모션 벡터 추출기를 이용한 임베디드 기반 가상현실 시뮬레이터 제어기의 설계)

  • Lee Hee-Man; Park Sang-Jo
    • The Journal of the Korea Contents Association / v.6 no.4 / pp.98-108 / 2006
  • In the past, image processing systems were implemented independently and their applications were limited to simple display. The scope of present-day image processing systems has been extended to diverse applications owing to the development of image processing IC chips. In this paper, we implement an image processing system that operates independently of a PC by converting analog image signals into digital signals. The proposed system extracts motion parameters from the analog image signal, generates virtual movement for the simulator, and operates the simulator using the extracted parameters.


Object-Oriented Simulation-Based Expert System Using a Smalltalk Paradigm (Smalltalk 패러다임을 이용한 객체지향 시뮬레이션기반 전문가시스템)

  • 김선욱; 양문희
    • Journal of Korean Society of Industrial and Systems Engineering / v.24 no.66 / pp.1-10 / 2001
  • A Simulation-Based Expert System (SIMBES) is a very effective tool for solving complex and hard problems. The SIMBES model includes a simulator, a feature extractor, a machine learning system, a performance evaluator, and a Knowledge-Based Expert System (KBES). Since a SIMBES depends on its problem domain, a schedule-based material requirements planning problem, which is NP-hard, was selected to exemplify the SIMBES model. To implement the SIMBES application in the Smalltalk paradigm, a system class hierarchy was constructed. The hierarchy consists of five top-level classes — Job Generator, Job Scheduler, Job Evaluator, Inference Engine, and Executive System — with several further classes identified inside them. Additionally, instance protocols for all classes are described in terms of messages and pseudo-methods. These protocols can easily be implemented in any other object-oriented language. Furthermore, the results may be used as a skeletal system for developing a new SIMBES efficiently, especially when the application involves other scheduling problems.
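
Since the abstract notes the protocols "can be implemented easily by any other object-oriented languages", here is a skeletal rendering of the named class hierarchy in Python rather than Smalltalk. The method bodies are placeholders (job dicts, an earliest-due-date rule, a toy score), not the paper's actual protocols:

```python
class JobGenerator:
    # Produces candidate jobs; here simply dicts with an id and due date.
    def generate(self, n):
        return [{"id": i, "due": n - i} for i in range(n)]

class InferenceEngine:
    # Stands in for the KBES: picks a dispatching rule for the scheduler.
    def choose_rule(self):
        return "earliest-due-date"

class JobScheduler:
    # Orders jobs according to the rule chosen by the inference engine.
    def schedule(self, jobs, rule):
        assert rule == "earliest-due-date"
        return sorted(jobs, key=lambda j: j["due"])

class JobEvaluator:
    # Scores a schedule; placeholder metric: sum of position minus due date.
    def evaluate(self, schedule):
        return sum(pos - j["due"] for pos, j in enumerate(schedule))

class ExecutiveSystem:
    # Drives the generate -> infer -> schedule -> evaluate cycle.
    def __init__(self):
        self.generator = JobGenerator()
        self.engine = InferenceEngine()
        self.scheduler = JobScheduler()
        self.evaluator = JobEvaluator()

    def run(self, n):
        jobs = self.generator.generate(n)
        sched = self.scheduler.schedule(jobs, self.engine.choose_rule())
        return sched, self.evaluator.evaluate(sched)
```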


Three-stream network with context convolution module for human-object interaction detection

  • Siadari, Thomhert S.; Han, Mikyong; Yoon, Hyunjin
    • ETRI Journal / v.42 no.2 / pp.230-238 / 2020
  • Human-object interaction (HOI) detection is a popular computer vision task that detects interactions between humans and objects. This task can be useful in many applications that require a deeper understanding of semantic scenes. Current HOI detection networks typically consist of a feature extractor followed by detection layers comprising small filters (e.g., 1 × 1 or 3 × 3). Although small filters can capture local spatial features with few parameters, their small receptive regions prevent them from capturing the larger context needed to recognize interactions between humans and distant objects. Hence, we herein propose a three-stream HOI detection network that employs a context convolution module (CCM) in each stream branch. The CCM can capture larger contexts from input feature maps by combining large separable convolution layers and residual-based convolution layers, using a few large separable filters so as not to increase the number of parameters. We evaluate our HOI detection method on two benchmark datasets, V-COCO and HICO-DET, and demonstrate its state-of-the-art performance.
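
The parameter economy of large separable filters mentioned in this abstract is easy to quantify: a k × 1 convolution followed by a 1 × k convolution covers the same k × k extent as a full k × k filter at a fraction of the weights. The channel counts below are illustrative, not the CCM's actual configuration:

```python
def conv_params(k, c_in, c_out):
    # Weights in a standard k x k convolution layer (bias omitted).
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Weights in a k x 1 convolution (c_in -> c_out) followed by a
    # 1 x k convolution (c_out -> c_out), covering the same k x k extent.
    return (k * c_in * c_out) + (k * c_out * c_out)
```

For a 15 × 15 extent over 64 channels, the full filter needs 921,600 weights while the separable pair needs 122,880 — roughly a 7.5× saving, which is what lets the CCM enlarge its receptive region without a parameter blow-up.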