• Title/Summary/Keyword: image maps

Land Cover Classification Using UAV Imagery and Object-Based Image Analysis - Focusing on the Maseo-myeon, Seocheon-gun, Chungcheongnam-do - (UAV와 객체기반 영상분석 기법을 활용한 토지피복 분류 - 충청남도 서천군 마서면 일원을 대상으로 -)

  • MOON, Ho-Gyeong;LEE, Seon-Mi;CHA, Jae-Gyu
    • Journal of the Korean Association of Geographic Information Studies / v.20 no.1 / pp.1-14 / 2017
  • A land cover map provides basic information for understanding the current state of a region, but its use in ecological research has been limited by coarse temporal and spatial resolutions. The purpose of this study was to investigate the possibility of producing a land cover map from high-resolution images acquired by UAV. Using the UAV, 10.5 cm orthoimages were obtained for the 2.5 km² study area, and land cover maps were produced by object-based and pixel-based classification for comparison and analysis. Accuracy verification showed high classification accuracy, with a Kappa of 0.77 for the pixel-based classification and 0.82 for the object-based classification. The overall area ratios were similar, and good classification results were found in grasslands and wetlands. The optimal image segmentation weights for object-based classification were Scale = 150, Shape = 0.5, Compactness = 0.5, and Color = 1; Scale was the most influential factor in the weight selection process. Compared with the pixel-based classification, the object-based classification produces results that are easy to read because there are clear boundaries between objects. Compared with the land cover map of the Ministry of Environment (subdivision), it was effective for natural areas (forests, grasslands, wetlands, etc.) but not for developed areas (roads, buildings, etc.). Applying an object-based classification method to UAV images for land cover mapping can contribute to ecological research through rapidly updated data, good accuracy, and economical efficiency.
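The two Kappa coefficients reported above (0.77 pixel-based, 0.82 object-based) are standard accuracy measures derived from a confusion matrix of classified versus reference classes. As a rough sketch with made-up counts (not the study's data), Cohen's kappa can be computed as:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: classified classes)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total                              # overall agreement
    expected = (confusion.sum(0) * confusion.sum(1)).sum() / total**2   # chance agreement
    return (observed - expected) / (1.0 - expected)

# toy 3-class confusion matrix (hypothetical counts)
cm = [[50, 2, 3],
      [4, 40, 6],
      [2, 5, 38]]
kappa = cohens_kappa(cm)
```

Kappa discounts the agreement expected by chance, which is why it is preferred over raw overall accuracy when comparing classifiers.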

Technique for the Measurement of Crack Widths at Notched / Unnotched Regions and Local Strains (콘크리트의 노치 및 비노치 구역에서의 균열폭 및 국부 변형률 정밀 측정기법)

  • Choi, Sok-Hwan;Lim, Bub-Mook;Oh, Chang-Kook;Joh, Chang-Bin
    • Journal of the Korea Concrete Institute / v.24 no.2 / pp.205-214 / 2012
  • Crack widths play an important role in the serviceability limit state. When crack widths are sufficiently controlled, reinforcement corrosion can be reduced with the existing concrete cover thickness alone, owing to the low permeability of regions with finely distributed hair cracks. Knowledge of the tensile crack opening is therefore essential for designing more durable concrete structures, and numerous studies on the topic have been performed. Nevertheless, accurately measuring a crack width is not simple, for reasons such as the unknown location of crack formation and crack opening damaging strain gages. To overcome these difficulties and measure precise crack widths, a displacement measurement system was developed using digital image correlation. Accuracy calibration tests gave an average measurement error of 0.069 pixels and a standard deviation of 0.050 pixels. Direct tensile tests were performed using ultra-high-performance concrete specimens. Crack widths at both notched and unnotched locations were measured and compared with clip-in gages at various loading steps to obtain the crack opening profile. The tensile deformation characteristics of the concrete were well visualized using displacement vectors and full-field displacement contour maps. The proposed technique made it possible to measure crack widths at arbitrary locations, which is difficult with conventional gages such as clip-in gages or displacement transducers.
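The reported calibration error of 0.069 pixels implies sub-pixel displacement estimation. A common ingredient of digital image correlation is sub-pixel localization of a correlation peak; a minimal 1D parabolic-interpolation sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def subpixel_peak(corr):
    """Locate a 1D correlation peak with sub-pixel precision by fitting
    a parabola through the maximum and its two neighbours."""
    i = int(np.argmax(corr))
    if i == 0 or i == len(corr) - 1:
        return float(i)  # peak on the boundary: no interpolation possible
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

# synthetic correlation curve with its true peak at x = 4.3
x = np.arange(10)
corr = np.exp(-((x - 4.3) / 3.0) ** 2)
peak = subpixel_peak(corr)
```

In 2D DIC the same idea is applied along both axes of a normalized cross-correlation surface between a reference subset and a deformed subset.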

Generating Motion- and Distortion-Free Local Field Map Using 3D Ultrashort TE MRI: Comparison with T2* Mapping

  • Jeong, Kyle;Thapa, Bijaya;Han, Bong-Soo;Kim, Daehong;Jeong, Eun-Kee
    • Investigative Magnetic Resonance Imaging / v.23 no.4 / pp.328-340 / 2019
  • Purpose: To generate phase images free of motion-induced artifacts and susceptibility-induced distortion using 3D radial ultrashort TE (UTE) MRI. Materials and Methods: The field map was derived theoretically by solving Laplace's equation with appropriate boundary conditions and was used to simulate the image distortion in conventional spin-warp MRI. The manufacturer's 3D radial imaging sequence was modified to acquire the maximum number of radial spokes in a given time by removing the spoiler gradient and sampling during both the ramp-up and ramp-down gradients. The spoke direction jumps randomly, so that each readout gradient acts as a spoiling gradient for the previous spoke. The raw data were reconstructed using in-house image reconstruction software written in Python. The method was applied to a phantom and to the in-vivo human brain and abdomen. The performance of UTE was compared with 3D GRE for phase mapping, and local phase mapping was compared with T2* mapping using UTE. Results: The phase map from UTE mimicked the theoretically calculated true field map, whereas that from 3D GRE exhibited both motion-induced artifacts and geometric distortion. Motion-free imaging is particularly crucial for phase mapping of the abdomen, which typically requires multiple breath-hold acquisitions. Air pockets caught within the digestive tract induce a spatially varying, large background field. The T2* map calculated from the UTE data suffers from non-uniform T2* values due to this background field, which does not appear in the local phase map of the UTE data. Conclusion: The phase map generated using UTE mimicked the true field map even when objects of non-zero susceptibility were present, whereas the phase map generated by 3D GRE did not, owing to the significant field distortion, as theoretically calculated. UTE thus allows phase maps free of susceptibility-induced distortion without any post-processing.
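The field-map idea can be illustrated with the standard two-echo phase-difference relation Δf = Δφ / (2π·ΔTE). The sketch below uses assumed echo times and a single off-resonance value, not the paper's acquisition parameters:

```python
import numpy as np

# Estimating an off-resonance field map from the phase accrued between
# two echo times (a textbook relation, not the paper's reconstruction).
te1, te2 = 0.5e-3, 2.5e-3                        # assumed echo times [s]
true_field = 40.0                                # off-resonance [Hz]
s1 = np.exp(1j * 2 * np.pi * true_field * te1)   # complex signal at TE1
s2 = np.exp(1j * 2 * np.pi * true_field * te2)   # complex signal at TE2
dphi = np.angle(s2 * np.conj(s1))                # phase difference, wrapped to (-pi, pi]
field_hz = dphi / (2 * np.pi * (te2 - te1))      # recovered field in Hz
```

Note that the wrapped phase limits the unambiguous field range to ±1/(2·ΔTE); larger fields require phase unwrapping.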

The Application of GIS for the Prediction of Landslide-Potential Areas (산사태의 발생가능지 예측을 위한 GIS의 적용)

  • Lee, Jin-Duk;Yeon, Sang-Ho;Kim, Sung-Gil;Lee, Ho-Chan
    • Journal of the Korean Association of Geographic Information Studies / v.5 no.1 / pp.38-47 / 2002
  • This paper presents a regional analysis of landslide occurrence potential, applying a geographic information system to Kumi City, selected as the pilot study area. Assessment criteria for the natural and human environmental factors that affect landslides were first established. A slope map and an aspect map were extracted from a DEM generated from the contour layers of digital topographic maps, and an NDVI vegetation map and a land cover map were obtained through satellite image processing. After the spatial database was constructed, indexes of landslide occurrence potential were computed, and landslide-potential areas were then extracted by an overlay method. High landslide potential was found in areas with slopes of about 30%, aspects facing at least south or east, adjacency to water bodies or the tips of the stream network, location in or near fault zones, and medium-density vegetation cover. For a more comprehensive and accurate analysis, soil, forest, groundwater level, meteorological, and other data should be added to the spatial database.
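The slope and aspect layers described above are standard derivatives of a DEM. A minimal sketch (one common convention, assuming the row index increases southward; not tied to any particular GIS package):

```python
import numpy as np

def slope_aspect(dem, cell=1.0):
    """Slope (degrees) and aspect (degrees clockwise from north) from a DEM grid.
    Assumes rows run north-to-south and columns west-to-east."""
    dzdy, dzdx = np.gradient(dem, cell)                  # change along rows, then columns
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))  # steepest gradient angle
    aspect = (np.degrees(np.arctan2(-dzdx, dzdy)) + 360.0) % 360.0  # downslope direction
    return slope, aspect

# synthetic DEM: a plane dipping 45 degrees toward the east (toy data)
x = np.arange(5, dtype=float)
dem = np.tile(-x, (5, 1))            # elevation decreases eastward by 1 per cell
slope, aspect = slope_aspect(dem, cell=1.0)
```

Overlaying such layers with vegetation and land cover rasters (each reclassified to an index) is then a per-cell weighted sum or lookup.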

3D Fusion Imaging based on Spectral Computed Tomography Using K-edge Images (K-각 영상을 이용한 스펙트럼 전산화단층촬영 기반 3차원 융합진단영상화에 관한 연구)

  • Kim, Burnyoung;Lee, Seungwan;Yim, Dobin
    • Journal of the Korean Society of Radiology / v.13 no.4 / pp.523-530 / 2019
  • The purpose of this study was to obtain K-edge images using a spectral CT system based on a photon-counting detector and to implement 3D fusion imaging from the conventional and spectral CT images. We also evaluated the clinical feasibility of the 3D fusion images through a quantitative analysis of image quality. A spectral CT system based on a CdTe photon-counting detector was used to obtain the K-edge images. A pork phantom was manufactured with six tubes containing diluted iodine and gadolinium solutions. The K-edge images were obtained with low-energy thresholds of 35 and 52 keV for iodine and gadolinium imaging, using an X-ray spectrum generated at a tube voltage of 100 kVp with a tube current of 500 μA. We implemented 3D fusion imaging by combining the iodine and gadolinium K-edge images with the conventional CT images. The results showed that the CNRs of the 3D fusion images were 6.76-14.9 times higher than those of the conventional CT images. The 3D fusion images were also able to provide maps of the target materials. Therefore, the technique proposed in this study can improve the quality of CT images and diagnostic efficiency through the additional information on target materials.
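The CNR figures quoted above compare fusion and conventional images. As an illustration of the metric itself (with synthetic ROI values, not the study's measurements):

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: ROI mean difference over background noise."""
    signal_roi = np.asarray(signal_roi, dtype=float)
    background_roi = np.asarray(background_roi, dtype=float)
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

rng = np.random.default_rng(0)
target = rng.normal(100.0, 5.0, 1000)       # synthetic contrast-agent ROI
background = rng.normal(50.0, 5.0, 1000)    # synthetic background ROI
value = cnr(target, background)
```

K-edge subtraction raises CNR because material-specific signal is isolated from the broadband attenuation background.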

Object Tracking Method using Deep Learning and Kalman Filter (딥 러닝 및 칼만 필터를 이용한 객체 추적 방법)

  • Kim, Gicheol;Son, Sohee;Kim, Minseop;Jeon, Jinwoo;Lee, Injae;Cha, Jihun;Choi, Haechul
    • Journal of Broadcast Engineering / v.24 no.3 / pp.495-505 / 2019
  • Typical deep learning algorithms include CNNs (Convolutional Neural Networks), mainly used for image recognition, and RNNs (Recurrent Neural Networks), mainly used for speech recognition and natural language processing. Among them, CNNs learn filters that generate feature maps, automatically learning features from data, and have become mainstream owing to their excellent performance in image recognition. Various algorithms such as R-CNN have since appeared to improve the object detection performance of CNNs, and algorithms such as YOLO (You Only Look Once) and SSD (Single Shot Multi-box Detector) have been proposed more recently. However, because these deep learning-based detection algorithms judge detection success on still images, stable object tracking and detection in video require a separate tracking capability. This paper therefore proposes a method of combining a Kalman filter with a deep learning-based detection network to improve object tracking and detection performance in video. The detection network used was YOLO v2, which is capable of real-time processing; the proposed method yielded a 7.7% IoU improvement over the baseline YOLO v2 network and a processing speed of 20 fps on FHD images.
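The combination described above pairs per-frame detections with a Kalman filter for temporal smoothing. A minimal constant-velocity filter over detected box centres, a generic sketch rather than the paper's exact model:

```python
import numpy as np

# State: [x, y, vx, vy]; measurement: detected box centre [x, y].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)      # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)      # we observe position only
Q = np.eye(4) * 1e-2                           # process noise (assumed)
R = np.eye(2) * 1.0                            # measurement noise (assumed)

x = np.zeros(4)                                # initial state guess
P = np.eye(4) * 10.0                           # initial uncertainty
for t in range(1, 21):                         # object moving at (2, 1) px/frame
    z = np.array([2.0 * t, 1.0 * t])           # detector output (noiseless for clarity)
    x = F @ x                                  # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (z - H @ x)                    # update with the detection
    P = (np.eye(4) - K @ H) @ P
estimated_velocity = x[2:]
```

The predicted state carries the track through frames where the detector misses, which is the practical benefit over per-frame detection alone.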

Real-time Segmentation of Black Ice Region in Infrared Road Images

  • Li, Yu-Jie;Kang, Sun-Kyoung;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.33-42 / 2022
  • In this paper, we propose a deep learning model based on multi-scale dilated convolution feature fusion for segmenting black ice regions in road images, so that black ice warnings can be sent to drivers in real time. In the proposed network, convolutions with different dilation rates are connected in parallel in the encoder blocks, different dilation rates are used for feature maps of different resolutions, and multi-layer feature information is fused together. Multi-scale dilated convolution feature fusion improves performance by diversifying and expanding the receptive field of the network, preserving detailed spatial information, and enhancing the effectiveness of dilated convolutions. The performance of the proposed network model improved gradually as the number of dilated convolution branches increased. The mIoU of the proposed method is 96.46%, higher than that of existing networks such as U-Net, FCN, PSPNet, ENet, and LinkNet. The parameter count is 1,858K, six times smaller than the existing LinkNet model. In experiments on a Jetson Nano, the proposed method ran at 3.63 FPS, enabling real-time segmentation of black ice regions.
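A dilated (atrous) convolution spaces its kernel taps `rate` samples apart, enlarging the receptive field without adding parameters. A 1D sketch of parallel branches fused by summation (illustrative only, not the paper's network):

```python
import numpy as np

def dilated_conv1d(signal, kernel, rate):
    """1D 'same' convolution with a dilated kernel: taps are spaced
    `rate` samples apart, so a 3-tap kernel at rate r spans 2r+1 samples."""
    k = len(kernel)
    half = (k - 1) * rate // 2
    padded = np.pad(signal, half)                # zero padding keeps 'same' length
    out = np.zeros_like(signal, dtype=float)
    for i in range(len(signal)):
        for j in range(k):
            out[i] += kernel[j] * padded[i + j * rate]
    return out

signal = np.arange(16, dtype=float)
kernel = np.array([1.0, 1.0, 1.0])
# parallel branches with different dilation rates, fused by summation
branches = [dilated_conv1d(signal, kernel, r) for r in (1, 2, 4)]
fused = np.sum(branches, axis=0)
```

Each branch sees a different context size, so the fused feature combines fine detail (rate 1) with wide context (rate 4) at the same resolution.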

Detection of Plastic Greenhouses by Using Deep Learning Model for Aerial Orthoimages (딥러닝 모델을 이용한 항공정사영상의 비닐하우스 탐지)

  • Byunghyun Yoon;Seonkyeong Seong;Jaewan Choi
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.183-192 / 2023
  • Remotely sensed data, such as satellite imagery and aerial photos, can be used to extract and detect objects through image interpretation and processing techniques. In particular, the potential for digital map updating and land monitoring through automatic object detection has increased as the spatial resolution of remotely sensed data has improved and deep learning technologies have developed. In this paper, we extracted plastic greenhouses from aerial orthophotos using the fully convolutional densely connected convolutional network (FC-DenseNet), one of the representative deep learning models for semantic segmentation, and then performed a quantitative analysis of the extraction results. Using the farm map of the Ministry of Agriculture, Food and Rural Affairs in Korea, training data were generated by labeling plastic greenhouses in the Damyang and Miryang areas, and FC-DenseNet was trained on this dataset. To apply the deep learning model to remotely sensed imagery, instance normalization, which can maintain the spectral characteristics of the bands, was used for normalization. In addition, optimal weights for each band were determined by adding attention modules to the deep learning model. The experiments showed that the deep learning model can extract plastic greenhouses. These results can be applied to digital map updating of the farm map and land cover maps.
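Instance normalization, used above as the per-image normalization, standardizes each band of each image with its own statistics. A minimal sketch (with a hypothetical 4-band patch, not the study's imagery):

```python
import numpy as np

def instance_norm(image, eps=1e-5):
    """Instance normalization: each band of each image is normalized with
    its own mean and standard deviation, so relative spatial structure
    within a band is preserved. `image` has shape (bands, height, width)."""
    mean = image.mean(axis=(1, 2), keepdims=True)
    std = image.std(axis=(1, 2), keepdims=True)
    return (image - mean) / (std + eps)

rng = np.random.default_rng(42)
ortho = rng.uniform(0, 255, size=(4, 8, 8))   # hypothetical 4-band orthophoto patch
normed = instance_norm(ortho)
```

Unlike batch normalization, the statistics do not mix images or bands, which is what lets the spectral character of each band survive normalization.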

Development of a Method for Tracking Sandbar Formation by Weir-Gate Opening Using Multispectral Satellite Imagery in the Geumgang River, South Korea (금강에서 다분광 위성영상을 이용한 보 운영에 따른 모래톱 형성 추적 방법의 개발)

  • Cheolho Lee;Kang-Hyun Cho
    • Ecology and Resilient Infrastructure / v.10 no.4 / pp.135-142 / 2023
  • Various remote sensing and image analysis technologies are applied to study landscape changes and their influencing factors in stream corridors. We developed a method to detect landscape changes over time by calculating an optical index from multispectral satellite images taken at various time points, computing a threshold to delineate water body boundaries, and creating maps binarized into land and water areas. The method was applied to the reaches upstream of the weirs in the Geumgang River to track changes in the sandbars formed by the opening of the weir gates. First, we collected multispectral images with a resolution of 10 m × 10 m taken by the Sentinel-2 satellite at various times before and after the weir gates in the Geumgang River were opened. The normalized difference water index (NDWI) was calculated from the green and near-infrared bands of the collected images. Otsu's threshold on the calculated NDWI, used to delineate the water body boundary, ranged from -0.0573 to 0.1367, and the water boundary determined by remote sensing matched the boundary in the actual images. A map binarized into water and land areas was created using the NDWI and Otsu's threshold. Using the developed method, it was estimated that a total of 379.7 ha of new sandbar formed after the three weir floodgates were opened from 2017 to 2021, in the longitudinal range from Baekje Weir to Daecheong Dam on the Geumgang River. The landscape detection method developed in this study is a useful approach that can obtain objective results with few resources over a wide spatial and temporal range.
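The NDWI-plus-Otsu pipeline described above is straightforward to sketch. The code below uses synthetic band values rather than Sentinel-2 data, and a simple histogram-based Otsu implementation:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized difference water index from green and near-infrared bands."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir)

def otsu_threshold(values, bins=256):
    """Otsu's threshold: the histogram split maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (w[:i] * centers[:i]).sum() / w0
        mu1 = (w[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i - 1]
    return best_t

# synthetic scene: 500 water pixels (high NDWI) then 500 land pixels (low NDWI)
rng = np.random.default_rng(1)
green = np.concatenate([rng.normal(120, 5, 500), rng.normal(60, 5, 500)])
nir = np.concatenate([rng.normal(40, 5, 500), rng.normal(110, 5, 500)])
index = ndwi(green, nir)
threshold = otsu_threshold(index)
water_mask = index > threshold      # binarized water / land map
```

Differencing such binary maps between dates is then enough to quantify newly exposed sandbar area, as the study does per satellite pass.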

Deep Learning Approach for Automatic Discontinuity Mapping on 3D Model of Tunnel Face (터널 막장 3차원 지형모델 상에서의 불연속면 자동 매핑을 위한 딥러닝 기법 적용 방안)

  • Chuyen Pham;Hyu-Soung Shin
    • Tunnel and Underground Space / v.33 no.6 / pp.508-518 / 2023
  • This paper presents a new approach for automatically mapping discontinuities in a tunnel face based on a 3D digital model reconstructed by LiDAR scanning or photogrammetry. The main idea is to identify discontinuity areas in the 3D digital model of the tunnel face by segmenting its 2D projected images with a deep learning semantic segmentation model, U-Net. The proposed model integrates various features, including the projected RGB image, the depth map image, and images derived from local surface properties, i.e., normal vector and curvature images, to segment discontinuity areas effectively. The segmentation results are then projected back onto the 3D model using depth maps and projection matrices to obtain an accurate representation of the location and extent of discontinuities in 3D space. The performance of the segmentation model is evaluated by comparing the segmented results with the corresponding ground truths, demonstrating high accuracy with an intersection-over-union of approximately 0.8. Although still limited by the amount of training data, this method shows promising potential to address the limitations of conventional approaches, which rely only on normal vectors and unsupervised machine learning to group points in the 3D model into distinct sets of discontinuities.
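The intersection-over-union metric used for evaluation (≈0.8 in the paper) compares a predicted mask with its ground truth; a minimal sketch with toy masks:

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0          # both masks empty: perfect agreement by convention
    return np.logical_and(pred, truth).sum() / union

# toy 8x8 masks: two 4x4 squares overlapping in a 3x3 region
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
truth = np.zeros((8, 8), dtype=bool); truth[3:7, 3:7] = True
score = iou(pred, truth)    # 9 overlap / 23 union
```

For multi-class segmentation the same ratio is computed per class and averaged, giving the mIoU reported in segmentation papers.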