• Title/Abstract/Keyword: Object-based Classification


A Hybrid Proposed Framework for Object Detection and Classification

  • Aamir, Muhammad;Pu, Yi-Fei;Rahman, Ziaur;Abro, Waheed Ahmed;Naeem, Hamad;Ullah, Farhan;Badr, Aymen Mudheher
    • Journal of Information Processing Systems / Vol. 14, No. 5 / pp.1176-1194 / 2018
  • Object classification using image content is a major challenge in computer vision. Superpixel information can be used to detect and classify objects in an image based on their locations. In this paper, we propose a methodology for detecting and classifying object locations in an image using an enhanced bag of words (BOW). It calculates initial candidate locations for each segment of an image using superpixels and then ranks them according to a region score. This information is then used to extract local and global features through a hybrid approach combining the Scale Invariant Feature Transform (SIFT) and GIST, respectively. To improve classification accuracy, a feature fusion technique combines the local and global feature vectors through a weight parameter. A support vector machine, a supervised classifier, is used for classification to evaluate the proposed methodology. The Pascal Visual Object Classes Challenge 2007 (VOC2007) dataset is used in the experiments. The proposed approach produced high-quality candidate locations for independent objects, with a mean average best overlap (MABO) of 0.833 at 1,500 locations, resulting in a better detection rate. Compared with previous approaches, it yields better classification results for the non-rigid classes.
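The weighted feature-fusion step described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (the function names and the single weight parameter `w` are assumptions, not the paper's code): a local descriptor vector (e.g. a SIFT bag-of-words histogram) and a global descriptor vector (e.g. GIST) are L2-normalized and combined through one weight before being fed to an SVM.

```python
def l2_normalize(v):
    """Scale a vector to unit L2 norm (returned unchanged if all-zero)."""
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v] if norm > 0 else list(v)

def fuse_features(local_vec, global_vec, w=0.5):
    """Concatenate w-weighted local features with (1 - w)-weighted global ones."""
    local_n = l2_normalize(local_vec)
    global_n = l2_normalize(global_vec)
    return [w * x for x in local_n] + [(1.0 - w) * x for x in global_n]

# Toy vectors: a 2-D "local" descriptor and a 3-D "global" descriptor.
fused = fuse_features([3.0, 4.0], [1.0, 0.0, 0.0], w=0.6)
# The fused vector has dimension 2 + 3 = 5; its first part is scaled by 0.6,
# the remainder by 0.4, so the SVM sees both cues in one vector.
```

The fused vector would then be passed to any standard SVM trainer; the value of `w` becomes a tunable hyperparameter.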

Sparse Feature Convolutional Neural Network with Cluster Max Extraction for Fast Object Classification

  • Kim, Sung Hee;Pae, Dong Sung;Kang, Tae-Koo;Kim, Dong W.;Lim, Myo Taeg
    • Journal of Electrical Engineering and Technology / Vol. 13, No. 6 / pp.2468-2478 / 2018
  • We propose the Sparse Feature Convolutional Neural Network (SFCNN) to reduce the volume of convolutional neural networks (CNNs). Despite the superior classification performance of CNNs, their enormous network volume requires high computational cost and long processing time, making real-time applications such as online-training difficult. We propose an advanced network that reduces the volume of conventional CNNs by producing a region-based sparse feature map. To produce the sparse feature map, two complementary region-based value extraction methods, cluster max extraction and local value extraction, are proposed. Cluster max is selected as the main function based on experimental results. To evaluate SFCNN, we conduct an experiment with two conventional CNNs. The network trains 59 times faster and tests 81 times faster than the VGG network, with a 1.2% loss of accuracy in multi-class classification using the Caltech101 dataset. In vehicle classification using the GTI Vehicle Image Database, the network trains 88 times faster and tests 94 times faster than the conventional CNNs, with a 0.1% loss of accuracy.
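The region-based "max extraction" idea can be illustrated with a toy sketch. This is an assumption-laden simplification: the paper's cluster-max step groups activations by cluster, whereas the block-based partition below is a stand-in chosen only to show how a dense map shrinks to a sparse feature vector.

```python
def block_max_extract(feature_map, block=2):
    """Keep only the maximum activation of each non-overlapping block,
    turning a dense 2-D feature map into a small sparse feature vector."""
    rows, cols = len(feature_map), len(feature_map[0])
    sparse = []
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            sparse.append(max(
                feature_map[rr][cc]
                for rr in range(r, min(r + block, rows))
                for cc in range(c, min(c + block, cols))
            ))
    return sparse

fmap = [[1, 5, 0, 2],
        [3, 2, 4, 1],
        [0, 1, 9, 0],
        [2, 0, 3, 7]]
print(block_max_extract(fmap))  # [5, 4, 2, 9]
```

A 4x4 map collapses to 4 values here; at CNN feature-map scale, the same idea is what cuts the network volume and the train/test time the abstract reports.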

Crops Classification Using Imagery of Unmanned Aerial Vehicle (UAV)

  • 박진기;박종화
    • 한국농공학회논문집 / Vol. 57, No. 6 / pp.91-97 / 2015
  • Unmanned Aerial Vehicles (UAVs) have several advantages over conventional remote sensing techniques: they can acquire high-resolution images quickly and repeatedly, and, flying at comparatively low altitudes of 80~400 m, they can obtain good-quality images even in cloudy weather. They are therefore ideal for acquiring spatial data over the small, mixed-crop agricultural fields that are abundant in South Korea. This paper discusses the use of low-cost UAV-based remote sensing for classifying crops. The study area, Gochang, South Korea, produces several crops, including red pepper, radish, Chinese cabbage, Rubus coreanus, welsh onion, and bean. Images were acquired with a fixed-wing UAV on September 23, 2014, and an object-based technique was used for crop classification. The results showed that scale 250, shape 0.1, color 0.9, compactness 0.5, and smoothness 0.5 were the optimum parameter values for image segmentation. The resulting kappa coefficient was 0.82 and the overall classification accuracy was 85.0 %. The present study validates crop classification using high-resolution UAV imagery and establishes the possibility of using such remote sensing techniques widely to overcome the difficulty of remote sensing data acquisition in the agricultural sector.
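The two accuracy figures quoted (kappa 0.82, overall accuracy 85.0 %) both come from a confusion matrix. A short sketch of the computation, with a purely illustrative matrix rather than the study's data: overall accuracy is the trace divided by the total, and Cohen's kappa corrects that agreement for chance.

```python
def overall_accuracy(cm):
    """Fraction of samples on the diagonal of a square confusion matrix."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def kappa(cm):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)
             for i in range(len(cm))) / (n * n)
    return (po - pe) / (1.0 - pe)

# Illustrative 2-class matrix: rows = reference class, columns = mapped class.
cm = [[40, 3],
      [2, 55]]
print(overall_accuracy(cm))  # 0.95
```

Kappa is always lower than overall accuracy whenever chance agreement is non-zero, which is why papers report both.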

Applicability of Geo-spatial Processing Open Sources to Geographic Object-based Image Analysis (GEOBIA)

  • Lee, Ki-Won;Kang, Sang-Goo
    • 대한원격탐사학회지 / Vol. 27, No. 3 / pp.379-388 / 2011
  • At present, GEOBIA (Geographic Object-based Image Analysis), the successor of OBIA (Object-based Image Analysis), is regarded as an important methodology in the object-oriented paradigm for remote sensing. It deals with geo-objects through image segmentation and classification, in contrast to pixel-based processing, and links directly to GIS applications; accordingly, GEOBIA software is booming. The main theme of this study is to examine the applicability of geo-spatial processing open sources to GEOBIA. Because there are few fully featured open sources for GEOBIA, which requires complicated schemes and algorithms, a preliminary GEOBIA system providing an integrated and user-oriented environment was implemented using open sources such as OTB and PostgreSQL/PostGIS. Some points differ from widely used proprietary GEOBIA software. In this system, geo-objects are not file-based but tightly linked with GIS layers in a spatial database management system. The mean shift algorithm, with parameters governing spatial similarity and homogeneity, is used for image segmentation. For the classification process, a tree-based model of a hierarchical network composed of parent and child nodes is implemented by attribute join in a semi-automatic mode, unlike traditional image-based classification. This integrated GEOBIA system is still a work in progress, and further development is necessary. This approach is expected to help develop and extend new applications such as urban mapping or change detection linked to GIS data sets using GEOBIA.
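The mean-shift idea behind the segmentation step can be shown with a toy one-dimensional version. This is only a sketch: real GEOBIA pipelines (e.g. OTB's segmentation) run mean shift in a joint spatial/spectral space with range and spatial bandwidths, not on a list of scalars.

```python
def mean_shift_1d(points, bandwidth=2.0, iters=50):
    """Move each point to the mean of its bandwidth neighbourhood until it
    settles on a density mode; points sharing a mode form one segment."""
    modes = []
    for p in points:
        x = p
        for _ in range(iters):
            window = [q for q in points if abs(q - x) <= bandwidth]
            x = sum(window) / len(window)
        modes.append(round(x, 3))
    return modes

data = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]
print(mean_shift_1d(data))  # points collapse onto two modes, ~1.5 and ~10.5
```

Grouping points by their converged mode gives the segment labels; the bandwidth plays the role of the spatial-similarity/homogeneity parameters mentioned above.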

Detection of Road Lane with Color Classification and Directional Edge Clustering

  • 정차근
    • 대한전자공학회논문지SP / Vol. 48, No. 4 / pp.86-97 / 2011
  • In this paper, we propose a new road-region and lane detection algorithm based on color classification, clustering of directional edge information, and their integration. The road region and lanes are treated as a single object to be recognized, and color-information clustering by iterative optimization of statistical parameters is performed to obtain the initial information for detection and recognition. Next, to overcome the limits of color information for object recognition, edge information is extracted; a Region Of Interest for the Lane Boundary (ROI-LB) is defined, and directional edge information within the ROI-LB is detected and clustered. By integrating the results of color classification and edge clustering, and exploiting the characteristics of each kind of information, road regions and lanes suited to the road environment can be detected. Because the proposed method is non-parametric, relying on clustered color and edge information rather than a parametric mathematical model of the road and lanes, it can respond flexibly to a wide variety of road environments. Experimental results on images captured under different imaging conditions and road environments are presented to demonstrate the effectiveness of the proposed method.

Comparison of the Estimated Result of Ecosystem Service Value Using Pixel-based and Object-based Analysis

  • 문지윤;김윤수
    • 대한원격탐사학회지 / Vol. 33, No. 6_3 / pp.1187-1196 / 2017
  • Efforts to measure the value of the functions and services that ecosystems provide to humans are ongoing, but most studies to date have relied on low- to medium-resolution satellite imagery such as Landsat or MODIS. Although high-resolution satellite imagery has become increasingly available in recent years, few studies have used it to estimate ecosystem service value. To analyze ecosystem service value more precisely, this study performed pixel-based and object-based classification on high-resolution KOMPSAT-3 imagery of the Sejong City area, where the greatest recent changes have occurred, and compared the resulting ecosystem service value estimates. The land-cover classification results showed that, in the pixel-based classification, forest and grassland were relatively underestimated while cropland and built-up areas were overestimated. Bare land showed the opposite trends, increasing in the pixel-based result and decreasing in the object-based result. Based on these land-cover classifications, the estimated annual ecosystem service value of the entire study area decreased from approximately USD 8.18 million (pixel-based) and USD 8.63 million (object-based) in 2014 to approximately USD 7.80 million (pixel-based) and USD 8.62 million (object-based) in 2016. These results are expected to serve as baseline data for establishing policies for sustainable urban development and quality-of-life improvement at the regional level.
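The valuation implied by the abstract follows the usual benefit-transfer pattern: each land-cover class contributes its classified area multiplied by a per-hectare value coefficient. A hedged sketch, with entirely placeholder class names, areas, and coefficients (not the study's figures):

```python
def ecosystem_service_value(areas_ha, coeff_usd_per_ha):
    """Total value = sum over classes of (area in ha) x (USD per ha)."""
    return sum(areas_ha[c] * coeff_usd_per_ha[c] for c in areas_ha)

# Placeholder inputs: areas would come from the pixel- or object-based
# land-cover map; coefficients from a published value-coefficient table.
areas = {"forest": 12000, "grassland": 3000, "cropland": 8000, "urban": 5000}
coeff = {"forest": 300.0, "grassland": 200.0, "cropland": 90.0, "urban": 0.0}
print(ecosystem_service_value(areas, coeff))  # 4920000.0
```

Running the same formula on the two classification results is what makes the pixel-based and object-based estimates directly comparable.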

Covariance-based Recognition Using Machine Learning Model

  • Osman, Hassab Elgawi
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 2009 IWAIT / pp.223-228 / 2009
  • We propose an on-line machine learning approach for object recognition, in which new images are continuously added and the recognition decision is made without delay. The random forest (RF) classifier has been used extensively for classification and regression applications. We extend this technique to the task of building an incremental component-based detector. First, we employ an object descriptor model based on a bag of covariance matrices to represent an object region; we then run our on-line RF learner to select object descriptors and learn an object classifier. Object recognition experiments verify the effectiveness of the proposed approach. The results demonstrate that the proposed model yields object recognition performance comparable to the benchmark standard RF, AdaBoost, and SVM classifiers.
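A single region covariance descriptor, the building block of the "bag of covariance matrices" mentioned above, can be sketched as follows. The choice of per-pixel features here (intensity plus x, y position) is an assumption for illustration; the paper's feature set may differ.

```python
def region_covariance(features):
    """Describe a region by the covariance matrix of its per-pixel feature
    vectors. `features` is a list of equal-length vectors, one per pixel."""
    n = len(features)
    d = len(features[0])
    mean = [sum(f[k] for f in features) / n for k in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (f[i] - mean[i]) * (f[j] - mean[j])
    return [[v / (n - 1) for v in row] for row in cov]

# Four pixels, each as [intensity, x, y] -- a 3x3 symmetric matrix results.
pixels = [[10.0, 0, 0], [12.0, 0, 1], [14.0, 1, 0], [16.0, 1, 1]]
C = region_covariance(pixels)
```

Because the descriptor is a fixed-size d x d matrix regardless of region size, regions of different shapes become directly comparable inputs for the RF learner.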


CCTV Based Gender Classification Using a Convolutional Neural Networks

  • 강현곤;박장식;송종관;윤병우
    • 한국멀티미디어학회논문지 / Vol. 19, No. 12 / pp.1943-1950 / 2016
  • Recently, gender classification has attracted a great deal of attention in the field of video surveillance. It can be useful in many applications, such as detecting crimes against women and business intelligence. In this paper, we propose a method that detects pedestrians in CCTV video and classifies the gender of the detected objects. Many algorithms have been proposed for classifying people by gender; this paper presents a gender classification method using a convolutional neural network. The detection phase is performed by an AdaBoost algorithm based on Haar-like features and LBP features. The classifier and detector are trained on datasets generated from CCTV images. Experimentally, the proposed method achieves matching rates of 89.9 % for male and 90.7 % for female subjects, showing that the proposed gender classification outperforms the conventional classification algorithm.
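The LBP features used in the detection phase can be illustrated by the basic 3x3 local binary pattern operator. This is a minimal sketch of the standard operator, not the paper's implementation: each pixel is encoded by thresholding its 8 neighbours against the centre value, giving an 8-bit code; histograms of these codes are what the AdaBoost detector consumes.

```python
def lbp_code(img, r, c):
    """8-bit LBP code for pixel (r, c): bit i is set when the i-th neighbour
    (clockwise from the top-left) is >= the centre value. Borders skipped."""
    center = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
                  img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    return sum((1 << i) for i, v in enumerate(neighbours) if v >= center)

img = [[5, 9, 1],
       [4, 6, 7],
       [2, 8, 3]]
print(lbp_code(img, 1, 1))  # 42: bits set for the 9, 7 and 8 neighbours
```

Because the code depends only on sign comparisons, LBP is robust to monotonic illumination changes, which is what makes it attractive for CCTV footage.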

Effective Design of Inference Rule for Shape Classification

  • Kim, Yoon-Ho;Lee, Sang-Sock;Lee, Joo-Shin
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 1998 The Third Asian Fuzzy Systems Symposium / pp.417-422 / 1998
  • This paper presents a method of object classification from dynamic images based on a fuzzy inference algorithm, suitable for low-speed settings such as conveyors and unmanned transport. First, using the feature parameters of the moving object, a fuzzy if-then rule base that can adapt to a wide variety of surroundings is developed. Second, implication functions for fuzzy inference are compared with respect to the proposed algorithm. Simulation results are presented to verify the performance and applicability of the proposed system.
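The kind of fuzzy if-then evaluation described can be sketched as follows. The membership functions, their parameters, and the example rule are illustrative assumptions: crisp feature values are fuzzified with triangular membership functions, and a rule's firing strength is taken as the minimum (Mamdani-style) of its antecedent memberships.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rule_strength(memberships):
    """Mamdani min implication over a rule's antecedent memberships."""
    return min(memberships)

# Hypothetical rule: IF area is "medium" AND elongation is "high"
# THEN shape class is "bar".
area_med = tri(55.0, 30.0, 50.0, 70.0)    # membership of area = 55
elong_high = tri(0.9, 0.6, 1.0, 1.4)      # membership of elongation = 0.9
print(rule_strength([area_med, elong_high]))  # 0.75
```

Comparing implication functions, as the abstract describes, amounts to swapping `min` for alternatives such as the algebraic product and re-running the classification.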


Object Recognition for Mobile Robot using Context-based Bi-directional Reasoning

  • 임기현;류광근;서일홍;김종복;장국현;강정호;박명관
    • 대한전기학회:학술대회논문집 / 대한전기학회 2007 심포지엄 논문집 정보 및 제어부문 / pp.6-8 / 2007
  • In this paper, we propose a reasoning system for object recognition and space classification that uses not only visual features but also contextual information. A mobile robot, especially a vision-based one, must perceive objects and classify spaces in real environments. Several visual features, such as texture, SIFT, and color, are used for object recognition. Because of sensor uncertainty and object occlusion, vision-based perception faces many difficulties. To show the validity of our reasoning system, experimental results are illustrated in which objects and spaces are inferred by bi-directional rules even with partial and uncertain information. The system combines top-down and bottom-up approaches.
