• Title/Summary/Keyword: Aerial shot


Few-shot Aerial Image Segmentation with Mask-Guided Attention (마스크-보조 어텐션 기법을 활용한 항공 영상에서의 퓨-샷 의미론적 분할)

  • Kwon, Hyeongjun;Song, Taeyong;Lee, Tae-Young;Ahn, Jongsik;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society / v.25 no.5 / pp.685-694 / 2022
  • The goal of few-shot semantic segmentation is to build a network that quickly adapts to novel classes under extreme data-shortage regimes. Most existing few-shot segmentation methods leverage single or multiple prototypes from extracted support features. Although they have shown promising results on natural images, these methods are not directly applicable to the aerial image domain. A key factor in few-shot segmentation on aerial images is to effectively exploit information that is robust against extreme changes in background and object scale. In this paper, we propose a Mask-Guided Attention module to extract more comprehensive support features for few-shot segmentation in aerial images. Taking advantage of the support ground-truth masks, the module highlights the areas correlated with the foreground object, enabling the support encoder to extract comprehensive support features with contextual information. To facilitate reproducible studies of few-shot semantic segmentation in aerial images, we further present the few-shot segmentation benchmark iSAID-, which is constructed from the large-scale iSAID dataset. Extensive experimental results, including comparisons with state-of-the-art methods and ablation studies, demonstrate the effectiveness of the proposed method.
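
A minimal PyTorch sketch of the idea described above: using the support ground-truth mask, together with a learned attention map, to weight support features before pooling a prototype. The module name, shapes, and the simple masked-average pooling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskGuidedAttention(nn.Module):
    """Illustrative sketch: weight support features by the (downsampled)
    ground-truth mask plus a learned attention map, then pool a prototype."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)  # learned attention logits

    def forward(self, support_feat, support_mask):
        # support_feat: (B, C, H, W) features from the support encoder
        # support_mask: (B, 1, h, w) binary ground-truth mask
        mask = F.interpolate(support_mask, size=support_feat.shape[-2:], mode="nearest")
        attn = torch.sigmoid(self.attn(support_feat))   # (B, 1, H, W)
        weights = mask * attn                           # highlight foreground-correlated areas
        weighted = support_feat * weights
        # masked average pooling -> one prototype vector per support image
        prototype = weighted.sum(dim=(2, 3)) / (weights.sum(dim=(2, 3)) + 1e-6)
        return prototype                                # (B, C)
```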

Background memory-assisted zero-shot video object segmentation for unmanned aerial and ground vehicles

  • Kimin Yun;Hyung-Il Kim;Kangmin Bae;Jinyoung Moon
    • ETRI Journal / v.45 no.5 / pp.795-810 / 2023
  • Unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) require advanced video analytics for various tasks, such as moving object detection and segmentation, which has led to increasing demand for these methods. We propose a zero-shot video object segmentation method specifically designed for UAV and UGV applications that focuses on discovering moving objects in challenging scenarios. The method employs a background memory model that enables training from sparse annotations along the time axis, using temporal modeling of the background to detect moving objects effectively. The proposed method addresses the limitations of existing state-of-the-art methods, which detect salient objects within images regardless of their movement. In particular, our method achieved mean J and F values of 82.7 and 81.2, respectively, on DAVIS'16. We also conducted extensive ablation studies that highlight the contributions of various input compositions and combinations of training datasets. In future developments, we will integrate the proposed method with additional systems, such as tracking and obstacle-avoidance functionalities.
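
A toy NumPy sketch of the underlying idea of a temporal background memory: remember what the background looked like and flag pixels that deviate from it. The paper's model is a learned network trained from sparse annotations, so this running-average stand-in (with assumed parameter values) is only illustrative of the concept, not the method itself.

```python
import numpy as np

class BackgroundMemory:
    """Toy running-average background memory: not the paper's learned model,
    just the idea of comparing each frame against a remembered background."""
    def __init__(self, alpha=0.05, threshold=30.0):
        self.alpha = alpha          # memory update rate
        self.threshold = threshold  # intensity difference treated as "moving"
        self.memory = None

    def segment(self, frame_gray: np.ndarray) -> np.ndarray:
        frame = frame_gray.astype(np.float32)
        if self.memory is None:
            self.memory = frame.copy()
        moving = np.abs(frame - self.memory) > self.threshold
        # update the memory only where the scene currently looks static
        self.memory = np.where(moving, self.memory,
                               (1 - self.alpha) * self.memory + self.alpha * frame)
        return moving  # boolean mask of candidate moving objects
```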

Performance Improvement of Aerial Images Taken by UAV Using Daubechies Stationary Wavelet (Daubechies 정상 웨이블릿을 이용한 무인항공기 촬영 영상 성능 개선)

  • Kim, Sung-Hoon;Hong, Gyo-Young
    • Journal of Advanced Navigation Technology / v.20 no.6 / pp.539-543 / 2016
  • In this paper, we study a technique for improving the quality of aerial images taken by a UAV using the Daubechies stationary wavelet transform. Experiments on image quality improvement were performed on UAV aerial images corrupted by Gaussian noise, which is very commonly encountered. The stationary wavelet transform is known to avoid the problems caused by the down-sampling in the discrete wavelet transform (DWT) and to be more effective than the DWT at removing noise. Moreover, the Haar wavelet is a discontinuous function and is therefore not well suited to smooth signals and image processing. This study confirms that noise can be removed with the Daubechies stationary wavelet and that its performance improves on that of the Haar stationary wavelet.
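
A short sketch of stationary-wavelet denoising with PyWavelets, in the spirit of the abstract above: decompose with a Daubechies stationary wavelet, soft-threshold the detail coefficients, and reconstruct. The universal threshold rule and noise level are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
import pywt

def swt_denoise(image: np.ndarray, wavelet: str = "db4", level: int = 2,
                sigma: float = 10.0) -> np.ndarray:
    """Stationary (undecimated) wavelet denoising sketch: soft-threshold the
    detail coefficients of a 2-D SWT and reconstruct. Image dimensions must be
    divisible by 2**level; the threshold here is a simple universal rule."""
    thr = sigma * np.sqrt(2.0 * np.log(image.size))
    coeffs = pywt.swt2(image.astype(np.float64), wavelet, level=level)
    denoised = []
    for cA, (cH, cV, cD) in coeffs:
        denoised.append((cA, tuple(pywt.threshold(d, thr, mode="soft")
                                   for d in (cH, cV, cD))))
    return pywt.iswt2(denoised, wavelet)
```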

The Visual Aesthetics of Drone Shot and Hand-held Shot based on the Representation of Place and Space: focusing on World Travel 'Peninsula de Yucatán' Episode (장소와 공간의 재현적 관점에서 본 드론 쇼트와 핸드헬드 쇼트의 영상 미학 : <세계테마기행> '유카탄 반도'편을 중심으로)

  • Ryu, Jae-Hyung
    • Journal of Korea Entertainment Industry Association / v.14 no.3 / pp.251-265 / 2020
  • The drone shot consists of moving images captured by a remotely controlled unmanned aerial vehicle, usually from a bird's-eye view. The hand-held shot consists of moving images recorded by literally hand-held shooting, which is suited to on-the-spot filming; it takes a walker's viewpoint through the vivid realism of its self-reflexive camera movements. The purpose of this study is to comparatively analyze the aesthetic functions of the drone shot and the hand-held shot. To do so, the study draws on Certeau's concepts of 'place' and 'space,' takes the World Travel 'Peninsula de Yucatan' episode as a research object, and analytically applies the two concepts to scenes that clearly present the aesthetic characteristics of the two shots. As a result, the drone shot took an authoritative viewpoint providing general information and atmosphere, overlooking the city with quiet movements stripped of self-reflexivity. This aesthetic function was reinforced by the narration and subtitles, which mediate prior knowledge about the proper rules and orders of the place. The drone shot tended to project the location as a place. Conversely, the hand-held shot practically experienced the space through free walking, free from the rules and orders inherent in the city. The aesthetics of hand-held images represented the tactic that resists the strategy of a subject of will and power, in that the hand-held shot practiced anthropological walking by attending to the everyday lives of the small town and countryside rather than the main tourist attractions. In opposition to the drone shot, the hand-held shot tended to reflect the location as a space.

Analysis of Landslide in Inje Region Using Aerial Photograph and GIS (항공사진과 GIS를 이용한 인제지역 산사태 분석)

  • Son, Jung-Woo;Kim, Kyung-Tak;Lee, Chang-Hun;Choi, Chul-Uong
    • Journal of Korean Society for Geospatial Information Science / v.17 no.2 / pp.61-69 / 2009
  • In mid-July 2006, torrential rainfall across the Gangwon-do region caused 48 casualties and submerged 1,248 houses, resulting in damage with restoration costs of 3,512.5 billion won. The damage was aggravated because the topography of Gangwon-do, which is mostly mountainous, amplified the effects of the landslides. In this study, the landslide region was photographed using the PKNU No. 4 system immediately after the landslides occurred, so that it could be analyzed as objectively, accurately, and rapidly as possible. A total of 1,054 landslide areas were extracted by visually interpreting and digitizing the images after orthorectification in ERDAS 9.1. Using ArcGIS 9.2, the hydrologic, topographic, forest, geologic, and pedologic characteristics of the landslide region were compiled into a database through overlay analysis of the digital map, vegetation map, geologic map, and soil map, and the status and characteristics of landslide occurrence were analyzed.
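
The overlay step described above was performed in ArcGIS 9.2; the sketch below is only a geopandas stand-in showing the same kind of operation. File names, layer contents, and the "soil_type" column are hypothetical.

```python
import geopandas as gpd

# Hypothetical layer files standing in for the paper's ArcGIS database.
landslides = gpd.read_file("landslide_areas.shp")   # digitized landslide polygons
soil = gpd.read_file("soil_map.shp")
geology = gpd.read_file("geologic_map.shp")

# Attach soil and geologic attributes to each landslide polygon by overlay.
with_soil = gpd.overlay(landslides, soil, how="intersection")
with_geo = gpd.overlay(with_soil, geology, how="intersection")

# Simple frequency summary of landslide occurrence by soil type (column name assumed).
print(with_geo.groupby("soil_type").size().sort_values(ascending=False))
```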


A Study on Aerial Perspective on Painterly Rendering (회화적 렌더링에서의 대기원근법의 표현에 관한 연구)

  • Jang, Jae-Ni;Ryoo, Seung-Taek;Seo, Sang-Hyun;Lee, Ho-Chang;Yoon, Kyung-Hyun
    • Journal of Korea Multimedia Society / v.13 no.10 / pp.1474-1486 / 2010
  • In this paper, we propose an algorithm that reproduces, in painterly rendering, the distance-depiction technique of real painting known as "aerial perspective." Aerial perspective is a painting technique that depicts the attenuation of light in the atmosphere, where the scattering effect varies with distance, altitude, and atmospheric density. To reflect these properties, we use the depth information corresponding to an input image together with user-defined parameters, so that the user can adjust the strength of the effect. We calculate the distance and altitude of every pixel from the depth information and shot-related parameters, and control the scattering effect through expression parameters. In addition, we accentuate the occluding edges detected from the depth information to clarify the sense of distance between foreground and background. We apply the algorithm to various landscape scenes and generate distance-emphasized results compared with existing works.
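
A minimal NumPy sketch of a per-pixel scattering model of the kind the abstract describes: colors fade toward a sky color with distance, and the attenuation coefficient can be weakened with altitude. The exponential formulation, parameter names, and default values are assumptions, not the paper's exact expression parameters.

```python
import numpy as np

def aerial_perspective(image, depth, sky_color=(0.75, 0.8, 0.9),
                       beta=0.002, altitude_falloff=0.0005, altitude=None):
    """Illustrative aerial-perspective sketch: blend each pixel toward the sky
    color according to an exponential transmittance driven by per-pixel depth."""
    img = image.astype(np.float32) / 255.0
    # optional altitude dependence: thinner atmosphere -> weaker scattering
    b = beta * np.exp(-altitude_falloff * altitude) if altitude is not None else beta
    t = np.exp(-b * depth)[..., None]                 # per-pixel transmittance (H, W, 1)
    out = img * t + np.asarray(sky_color, dtype=np.float32) * (1.0 - t)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```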

A Study on Automatic Precision Landing for Small UAV's Industrial Application (소형 UAV의 산업 응용을 위한 자동 정밀 착륙에 관한 연구)

  • Kim, Jong-Woo;Ha, Seok-Wun;Moon, Yong-Ho
    • Journal of Convergence for Information Technology / v.7 no.3 / pp.27-36 / 2017
  • In most industries, such as logistics, marine fisheries, agriculture, manufacturing, and services, small unmanned aerial vehicles are used for aerial photography or close-range flight in areas where human access is difficult or CCTV is not installed. Based on the information captured by small unmanned aerial photography, applied research is also actively carried out to perform surveillance, control, or management efficiently. To carry out tasks in a mission-based manner, in which predefined tasks are assigned and executed automatically, small unmanned aerial vehicles must not only fly steadily but also be able to recharge periodically; in addition, they need to land automatically and precisely at designated points after the end of a mission. To accomplish this, an automatic precision landing method is required that guides the landing by continuously detecting and recognizing a marker located at the landing point from the video captured by the small UAV. In this paper, we show that accurate and stable automatic landing is possible even with a simple template-matching technique, without recognition methods that demand high-end hardware, when a low-cost general-purpose small unmanned aerial vehicle is used. Simulation and field experiments show that the proposed method can be put to good use in industrial applications.
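
A brief OpenCV sketch of the simple template-matching step the abstract refers to: search each downward-looking frame for the landing-marker template and return its center. The function name, threshold value, and the idea of feeding the offset to a landing controller are illustrative assumptions.

```python
import cv2

def find_landing_marker(frame_gray, template_gray, threshold=0.7):
    """Minimal template-matching sketch: locate the landing marker in a frame
    and return its center and match score, or None if no confident match."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None                        # marker not confidently found in this frame
    h, w = template_gray.shape[:2]
    cx, cy = max_loc[0] + w // 2, max_loc[1] + h // 2
    return cx, cy, max_val                 # pixel offset would drive the landing control
```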

Acquisition of Subcentimeter GSD Images Using UAV and Analysis of Visual Resolution (UAV를 이용한 Subcentimeter GSD 영상의 취득 및 시각적 해상도 분석)

  • Han, Soohee;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.6 / pp.563-572 / 2017
  • The purpose of this study is to investigate the effects of flight height, flight speed, camera exposure time, and autofocusing on the visual resolution of images, in order to obtain ultra-high-resolution images with a GSD of less than 1 cm. It also evaluates how easily various types of aerial targets can be recognized. For this purpose, we measured the visual resolution using a 7952 x 5304 pixel 35 mm CMOS sensor and a 55 mm prime lens at 20 m intervals from 20 m to 120 m above ground. As a result, with automatic focusing the visual resolution was 1.1~1.6 times the theoretical GSD, and without automatic focusing it was 1.5~3.5 times. Next, images were captured at 80 m above ground at a constant flight speed of 5 m/s while halving the exposure time from 1/60 s to 1/2000 s. Assuming that blur is allowed within one pixel, the visual resolution was 1.3~1.5 times the theoretical GSD when the exposure time stayed within the longest allowable exposure time, and 1.4~3.0 times when it did not. If the aerial targets are printed on A4 paper and photographed within 80 m above ground, the coded targets can be recognized automatically by commercial software, and both general and coded targets can be recognized manually with ease.
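
A back-of-the-envelope check of the geometry behind the abstract above: theoretical GSD from pixel pitch, focal length, and height, and the longest exposure that keeps forward-motion blur within one pixel at the stated speed. The pixel pitch (~4.5 um for a 7952 x 5304 full-frame sensor) is an assumption; the paper reports measured visual resolution, which this simple geometry does not capture.

```python
pixel_pitch_m = 4.5e-6      # assumed pixel size of the 35 mm sensor
focal_length_m = 0.055      # 55 mm prime lens
speed_mps = 5.0             # flight speed
height_m = 80.0             # flight height above ground

gsd_m = pixel_pitch_m * height_m / focal_length_m     # ground sampling distance
max_exposure_s = gsd_m / speed_mps                     # exposure for <= 1 px motion blur

print(f"theoretical GSD at {height_m:.0f} m: {gsd_m * 100:.2f} cm")
print(f"longest exposure for <= 1 px blur at {speed_mps} m/s: 1/{1 / max_exposure_s:.0f} s")
```

With these assumed numbers the theoretical GSD comes out to roughly 0.65 cm, i.e. sub-centimeter, and the blur limit falls between 1/2000 s and 1/60 s, consistent with the exposure range the study sweeps.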

Manned-Unmanned Teaming Air-to-Air Combat Tactic Development Using Longshot Unmanned Aerial Vehicle (롱샷 무인기를 활용한 유무인 협업 공대공 전술 개발)

  • Yoo, Seunghoon;Park, Myunghwan;Hwang, Seongin;Seol, Hyeonju
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.3 / pp.64-72 / 2021
  • Manned-unmanned teaming can be a very promising air-to-air combat tactic, since it combines human insight with the robustness of the machine. Rapid advances in artificial intelligence and autonomous control technology will speed up the development of manned-unmanned teaming air-to-air combat systems. In this paper, we introduce a manned-unmanned teaming air-to-air combat tactic composed of a manned aircraft and a UAV. In this tactic, a manned aircraft equipped with radar functions both as a sensor to detect the hostile aircraft and as a controller to direct the UAV to engage it, while the UAV, equipped with missiles, functions as the actor that engages the hostile aircraft. We also developed a combat scenario in which the manned-unmanned team engages a hostile aircraft equipped with both missiles and radar. To demonstrate the efficiency of the tactic, we ran a simulation of this scenario and found the optimal formation and maneuver in which the manned-unmanned team survives while the hostile aircraft is shot down. The results of this study can provide insight into how manned aircraft can collaborate with UAVs to carry out air-to-air combat missions.

Manhole Cover Detection from Natural Scene Based on Imaging Environment Perception

  • Liu, Haoting;Yan, Beibei;Wang, Wei;Li, Xin;Guo, Zhenhui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.10 / pp.5095-5111 / 2019
  • A multi-rotor Unmanned Aerial Vehicle (UAV) system is developed to solve the manhole-cover detection problem for infrastructure maintenance in the suburbs of big cities. A visible-light sensor is employed to collect ground image data, and a series of image processing and machine learning methods are used to detect the manhole covers. First, image enhancement is employed to improve the imaging quality of the visible-light camera. An imaging environment perception method is used to increase computational robustness: blind Image Quality Evaluation Metrics (IQEMs) are used to perceive the imaging environment and select images with high definition for the subsequent computation. Because of its excellent processing effect, adaptive Multiple Scale Retinex (MSR) is used to enhance the imaging quality. Second, the Single Shot multi-box Detector (SSD) method is utilized to identify the manhole cover, owing to its stable processing performance. Third, the spatial coordinates of the manhole cover are also estimated from the ground image. Practical applications have verified the outdoor environment adaptability of the proposed algorithm and the detection correctness of the proposed system. The detection accuracy reaches 99%, and the positioning accuracy is about 0.7 meters.
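
A plain multi-scale Retinex sketch in OpenCV/NumPy, illustrating the enhancement step named in the abstract above. The paper uses an adaptive MSR variant; the scales and the simple min-max stretch here are generic assumptions, not its specific adaptation scheme.

```python
import cv2
import numpy as np

def multi_scale_retinex(image_bgr, sigmas=(15, 80, 250)):
    """Plain multi-scale Retinex sketch: average the single-scale log-difference
    between the image and its Gaussian-blurred illumination over several scales."""
    img = image_bgr.astype(np.float32) + 1.0            # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)  # illumination estimate
        msr += np.log(img) - np.log(blurred)
    msr /= len(sigmas)
    # stretch back to a displayable 8-bit range
    msr = cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX)
    return msr.astype(np.uint8)
```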