• Title/Summary/Keyword: bird images

Search Result 75

A Comparison of Pixel- and Segment-based Classification for Tree Species Classification using QuickBird Imagery (QuickBird 위성영상을 이용한 수종분류에서 픽셀과 분할기반 분류방법의 정확도 비교)

  • Chung, Sang Young;Yim, Jong Su;Shin, Man Yong
    • Journal of Korean Society of Forest Science / v.100 no.4 / pp.540-547 / 2011
  • This study compared the tree species classification accuracy of pixel- and segment-based classifications, which have mostly been applied to land cover mapping, using QuickBird imagery. A total of 398 points were used as training and reference data and were classified into fourteen land cover classes: four coniferous and seven deciduous tree species in the forest classes, and three non-forested classes. For the pixel-based classification, three input images were produced using raw spectral values, three tasseled cap indices, and three components from principal component analysis. The maximum likelihood method was applied in both classification processes. In the pixel-based classification, raw spectral values yielded better accuracy than the other band combinations. Overall, the segment-based classification with a scale factor of 50% provided the most accurate result (overall accuracy: 76%, ${\hat{k}}$: 0.74) compared with the other scale factors and the pixel-based classification.
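
The maximum likelihood method applied in both pipelines above assigns each pixel to the class with the highest Gaussian likelihood given per-class means and covariances. A minimal sketch (the class statistics and pixel values below are made-up illustrations, not the paper's data):

```python
import numpy as np

def ml_classify(pixels, means, covs, priors=None):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    n_classes = len(means)
    if priors is None:
        priors = np.full(n_classes, 1.0 / n_classes)
    scores = []
    for mu, cov, p in zip(means, covs, priors):
        diff = pixels - mu                          # (n_pixels, n_bands)
        inv = np.linalg.inv(cov)
        mahal = np.einsum("ij,jk,ik->i", diff, inv, diff)
        log_like = -0.5 * (mahal + np.log(np.linalg.det(cov))) + np.log(p)
        scores.append(log_like)
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Two made-up classes in a 2-band feature space
means = [np.array([10.0, 10.0]), np.array([30.0, 30.0])]
covs = [np.eye(2) * 4.0, np.eye(2) * 4.0]
pixels = np.array([[9.0, 11.0], [29.0, 31.0]])
labels = ml_classify(pixels, means, covs)
print(labels)  # [0 1]
```

In practice the means and covariances come from the training points, one set per land cover class.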

Land cover classification of a non-accessible area using multi-sensor images and GIS data (다중센서와 GIS 자료를 이용한 접근불능지역의 토지피복 분류)

  • Kim, Yong-Min;Park, Wan-Yong;Eo, Yang-Dam;Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.5 / pp.493-504 / 2010
  • This study proposes a classification method based on an automated training extraction procedure that may be used with very high resolution (VHR) images of non-accessible areas. The proposed method overcomes the problem of scale difference between VHR images and geographic information system (GIS) data through filtering and use of a Landsat image. In order to automate maximum likelihood classification (MLC), GIS data were used as an input to the MLC of a Landsat image, and a binary edge and a normalized difference vegetation index (NDVI) were used to increase the purity of the training samples. We identified the thresholds of an NDVI and binary edge appropriate to obtain pure samples of each class. The proposed method was then applied to QuickBird and SPOT-5 images. In order to validate the method, visual interpretation and quantitative assessment of the results were compared with products of a manual method. The results showed that the proposed method could classify VHR images and efficiently update GIS data.
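
The NDVI screening step described above keeps only candidate training pixels whose index exceeds a class-specific threshold. A minimal sketch (the band values and the 0.4 threshold are illustrative assumptions, not the paper's values):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Keep only pixels whose NDVI exceeds a threshold, mimicking the
# purification of training samples for a vegetation class.
nir = np.array([0.8, 0.5, 0.2])
red = np.array([0.1, 0.4, 0.3])
v = ndvi(nir, red)
mask = v > 0.4          # illustrative threshold for a "vegetation" class
print(mask)             # [ True False False]
```

The paper pairs this with a binary edge mask so that mixed boundary pixels are also excluded from the training samples.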

Fusion Techniques Comparison of GeoEye-1 Imagery

  • Kim, Yong-Hyun;Kim, Yong-Il;Kim, Youn-Soo
    • Korean Journal of Remote Sensing / v.25 no.6 / pp.517-529 / 2009
  • Many satellite image fusion techniques have been developed to produce a high-resolution multispectral (MS) image by combining a high-resolution panchromatic (PAN) image with a low-resolution MS image. Heretofore, most high-resolution image fusion studies have used IKONOS and QuickBird images. Recently, GeoEye-1, offering the highest resolution of any commercial imaging system, was launched. In this study, we experimented with GeoEye-1 images to evaluate which fusion algorithms suit them. This paper compares and evaluates the efficiency of five image fusion techniques for GeoEye-1 imagery: the à trous algorithm-based additive wavelet transform (AWT) fusion technique, the Principal Component Analysis (PCA) fusion technique, Gram-Schmidt (GS) spectral sharpening, Pansharp, and the Smoothing Filter-based Intensity Modulation (SFIM) fusion technique. The experimental results show that the AWT technique preserves more of the spatial detail of the PAN image and the spectral information of the MS image than the other techniques, and that Pansharp preserves the information of the original PAN and MS images nearly as well as AWT.
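
Of the five techniques compared, SFIM is the simplest to sketch: each fused band is MS × PAN / low-pass(PAN), so local PAN detail modulates the MS intensities. A toy 1-D sketch (a real implementation smooths PAN with a filter matched to the PAN/MS resolution ratio, not a fixed 3-sample average):

```python
import numpy as np

def sfim(ms, pan, window=3):
    """Smoothing Filter-based Intensity Modulation for one MS band (1-D toy)."""
    # Low-pass version of PAN via a simple edge-padded moving average.
    pad = window // 2
    padded = np.pad(pan.astype(float), pad, mode="edge")
    kernel = np.ones(window) / window
    pan_low = np.convolve(padded, kernel, mode="valid")
    return ms * pan / pan_low

# Sanity check: with a flat PAN, pan/pan_low == 1, so the MS band is unchanged.
ms = np.array([10.0, 20.0, 30.0])
pan = np.array([5.0, 5.0, 5.0])
print(sfim(ms, pan))  # [10. 20. 30.]
```

Where PAN varies locally, the ratio PAN / low-pass(PAN) exceeds or falls below 1, injecting the high-frequency spatial detail into the MS band.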

A Study on the Method for Three-dimensional Geo-positioning Using Heterogeneous Satellite Stereo Images (이종위성 스테레오 영상의 3차원 위치 결정 방법 연구)

  • Jaehoon, Jeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.4 / pp.325-331 / 2015
  • This paper proposes an intersection method to improve the accuracy of three-dimensional positioning from heterogeneous satellite stereo images and validates it experimentally. The three-dimensional position is obtained by intersecting two rays that have been precisely adjusted through sensor orientation. For conventional homogeneous satellite stereo pairs, the intersection point is generally taken as the midpoint of the shortest segment linking the two rays, in a least-squares fashion. Here, a refined method that places the intersection point on the ray of the higher-resolution image was used to improve the positioning accuracy of heterogeneous pairs. The heterogeneous stereo pairs were constructed from KOMPSAT-2 and QuickBird images covering the same area. The positioning results of the conventional and refined intersections were compared visually and analyzed quantitatively. The results demonstrated that the refined intersection can improve the positioning accuracy of heterogeneous satellite stereo pairs, with the greatest improvement when the stereo geometry is weak.
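
The conventional intersection described above, the midpoint of the shortest segment between two rays, has a closed form. A sketch with toy rays (not the paper's sensor geometry):

```python
import numpy as np

def midpoint_intersection(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + t*d1 and p2 + s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # ~0 when the rays are (near) parallel
    t = (b * e - c * d) / denom    # parameter of the closest point on ray 1
    s = (a * e - b * d) / denom    # parameter of the closest point on ray 2
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# Two perpendicular skew rays whose closest points are (0,0,0) and (0,0,1).
p1 = np.array([0.0, 0.0, 0.0]); d1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 0.0, 1.0]); d2 = np.array([0.0, 1.0, 0.0])
print(midpoint_intersection(p1, d1, p2, d2))  # [0.  0.  0.5]
```

The paper's refinement would instead return the closest point on the higher-resolution image's ray (here, `p1 + t * d1`) rather than the midpoint.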

Estimating vegetation index for outdoor free-range pig production using YOLO

  • Sang-Hyon Oh;Hee-Mun Park;Jin-Hyun Park
    • Journal of Animal Science and Technology / v.65 no.3 / pp.638-651 / 2023
  • The objective of this study was to quantitatively estimate grazing-area damage in outdoor free-range pig production using an unmanned aerial vehicle (UAV) with an RGB image sensor. Ten corn field images were captured by the UAV over approximately two weeks, during which gestating sows were allowed to graze freely on a 100 × 50 m corn field. The images were corrected to a bird's-eye view and divided into 32 segments, which were sequentially input to the YOLOv4 detector to detect corn according to its condition. Of the 320 segmented images, 43 raw training images were selected randomly and flipped to create 86 images; these were further augmented by rotation in 5-degree increments to create a total of 6,192 images, and then by three random color transformations per image, yielding a dataset of 24,768 images. The occupancy rate of corn in the field was estimated efficiently using You Only Look Once (YOLO). Corn was present on the first day of observation (day 2), and almost all of it had disappeared by the ninth day. When grazing 20 sows in a 50 × 100 m cornfield (250 m²/sow), it appears that the animals should be rotated to other grazing areas after at least five days to protect the cover crop. In agricultural technology, most machine and deep learning research concerns the detection of fruits and pests, and research on other application fields is needed. In addition, applying deep learning requires large-scale image data collected by experts in the field as training data; when such data are insufficient, extensive data augmentation is required.
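
The augmentation arithmetic above can be checked directly: 43 images are doubled by flipping, multiplied by the 72 orientations that 5-degree increments give, and then multiplied by four (the original plus three color variants). This sketch only verifies the counts; the actual augmentation applies image transforms:

```python
base = 43
flipped = base * 2                # horizontal flip doubles the set
rotations = 360 // 5              # 5-degree increments -> 72 orientations
rotated = flipped * rotations     # 86 * 72
colored = rotated * (1 + 3)       # original + 3 random color transforms
print(flipped, rotated, colored)  # 86 6192 24768
```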

A Study on the Technique Develop for Perspective Image Generation and 3 Dimension Simulation in Jecheon (제천시 영상 조감도 생성 및 3차원 시뮬레이션 기술개발에 관한 연구)

  • 연상호;홍일화
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.21 no.1 / pp.45-51 / 2003
  • A stereo bird's-eye view was prepared to show the various forms of Jecheon city in three dimensions, and a 3D simulation was applied so that the city could be presented in moving pictures in space. Perspective imaging technology was used to produce the stereo bird's-eye view, and the base materials were prepared as follows: EOC images from the Arirang-1 satellite were used; a DEM was created, with geometric correction of errors, from the contour lines of the 1/5,000 map produced by the National Geography Institute as the national standard map; and road lines, maintained as a road-layer vector file of a 1/1,000 map, were classified and overlaid on the three-dimensional image of the target area. In particular, for connectivity with the new address system, a 1/1,000 arterial road map that had been produced for assigning new addresses was used as much as possible for the road network data of the urban area in this study.

Stereoscopic Visualization of Buildings Using Horizontal and Vertical Projection Systems (수평 및 수직형 프로젝션 시스템을 이용한 건물의 입체 가시화)

  • Rhee, Seon-Min;Choi, Soo-Mi;Kim, Myoung-Hee
    • The KIPS Transactions:PartA / v.10A no.2 / pp.165-172 / 2003
  • In this paper, we constructed horizontal and vertical virtual spaces using a projection table and a projection wall, and implemented a system that stereoscopically visualizes three-dimensional (3D) buildings in these virtual environments according to the user's viewpoint. The projection table, a horizontal display, is effective for reproducing operations on a table or desk and for applications requiring bird's-eye views, because its viewing frustum allows objects to be seen from above. The large projection wall, a vertical display, is effective for navigating virtual spaces because its viewing frustum provides a frontal view. We provided quick interaction between the user and virtual objects by representing major objects as detailed 3D models and the background as images, and we augmented the sense of reality by properly integrating the models and images with the user's location and viewpoint in the different virtual environments.

THE LAND COVER MAPPING IN NORTH KOREA USING MODIS IMAGE;THE CLASSIFICATION ACCURACY ENHANCEMENT FOR INACCESSIBLE AREA USING GOOGLE EARTH

  • Cha, Su-Young;Park, Chong-Hwa
    • Proceedings of the KSRS Conference / 2007.10a / pp.341-344 / 2007
  • A major obstacle to classifying and validating land cover maps is the high cost of generating reference data or multiple thematic maps for comparative analysis. For inaccessible areas such as North Korea, high-resolution satellite imagery may serve as in situ data to overcome the lack of reliable references. The objective of this paper is to investigate the possibility of utilizing QuickBird (0.6 m) imagery of North Korea obtained from Google Earth via the internet. Monthly NDVI images for nine months from the summer of 2004 were classified into L = 54 clusters using the ISODATA algorithm, and these clusters were assigned to seven classes: coniferous forest, deciduous forest, mixed forest, paddy field, dry field, water, and built-up area. The overall accuracy and Kappa index were 85.98% and 0.82, respectively, about a 10-percentage-point increase in classification accuracy over our previous study based on GCP point data around North Korea. We therefore conclude that Google Earth may substitute for traditional in situ data collection at sites where accessibility is severely limited.


The Utilization of Google Earth Images as Reference Data for The Multitemporal Land Cover Classification with MODIS Data of North Korea

  • Cha, Su-Young;Park, Chong-Hwa
    • Korean Journal of Remote Sensing / v.23 no.5 / pp.483-491 / 2007
  • One of the major obstacles to classifying and validating land cover maps is the high cost of acquiring reference data. For inaccessible areas such as North Korea, high-resolution satellite imagery may be used for reference data. The objective of this paper is to investigate the possibility of utilizing QuickBird high-resolution imagery of North Korea, obtainable from Google Earth via the internet, as reference data for land cover classification. Monthly MODIS NDVI data for nine months from the summer of 2004 were classified into L = 54 clusters using the ISODATA algorithm, and these clusters were assigned to seven classes (coniferous forest, deciduous forest, mixed forest, paddy field, dry field, water, and built-up areas) through careful use of reference data obtained by visual interpretation of the high-resolution imagery. The overall accuracy and Kappa index were 85.98% and 0.82, respectively, about a 10-percentage-point increase in classification accuracy over our previous study based on GCP point data around North Korea. We therefore conclude that Google Earth may substitute for traditional on-site reference data collection where accessibility is severely limited.
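
The Kappa index reported above measures agreement beyond chance, κ = (p_o − p_e) / (1 − p_e), computed from a confusion matrix. A minimal sketch with a made-up two-class matrix (not the paper's seven-class results):

```python
import numpy as np

def kappa(cm):
    """Cohen's kappa from a square confusion matrix (rows: reference, cols: map)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                                 # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return (p_o - p_e) / (1 - p_e)

cm = [[40, 10],
      [5, 45]]
print(round(kappa(cm), 3))  # 0.7
```

Overall accuracy is just `np.trace(cm) / cm.sum()`; kappa discounts the agreement expected from the class proportions alone.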

Bird's Eye View Semantic Segmentation based on Improved Transformer for Automatic Annotation

  • Tianjiao Liang;Weiguo Pan;Hong Bao;Xinyue Fan;Han Li
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.8 / pp.1996-2015 / 2023
  • High-definition (HD) maps provide the precise road information that an autonomous driving system needs to navigate a vehicle effectively. Recent research has focused on leveraging semantic segmentation to automate HD map annotation. However, existing methods suffer from low recognition accuracy in autonomous driving scenarios, leading to inefficient annotation. In this paper, we propose a novel semantic segmentation method for automatic HD map annotation. Our approach introduces a new encoder, the convolutional transformer hybrid encoder, to enhance the model's feature extraction capability. We also propose a multi-level fusion module that enables the model to aggregate different levels of detail and semantic information, and a novel decoupled boundary joint decoder to improve the handling of boundaries between categories. We evaluated our method on the Bird's Eye View point cloud image dataset and the Cityscapes dataset. Comparative analysis against state-of-the-art methods shows that our model achieves the highest performance, reaching an mIoU of 56.26% and surpassing SegFormer by 1.47 percentage points. This innovation promises to significantly improve the efficiency of automatic HD map annotation.
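
The mIoU metric used for the comparison above averages per-class intersection-over-union between predicted and reference labels. A minimal sketch over toy label arrays (not the paper's data):

```python
import numpy as np

def miou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([0, 0, 1, 1])
target = np.array([0, 1, 1, 1])
# class 0: IoU = 1/2; class 1: IoU = 2/3; mean = 0.5833...
print(round(miou(pred, target, num_classes=2), 4))  # 0.5833
```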