

Analytical Methods for the Analysis of Structural Connectivity in the Mouse Brain (마우스 뇌의 구조적 연결성 분석을 위한 분석 방법)

  • Im, Sang-Jin;Baek, Hyeon-Man
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.4
    • /
    • pp.507-518
    • /
    • 2021
  • Magnetic resonance imaging (MRI) is a key technology seeing increasing use in studying the structural and functional inner workings of the brain. Analyzing the variability of the brain connectome through tractography has been used to deepen our understanding of disease pathology in humans. However, analysis methods for small animals such as mice lack standardization, and there is no scientific consensus on accurate preprocessing strategies or atlas-based neuroinformatics for such images. In addition, it is difficult to acquire high-resolution images of the mouse brain because it is so much smaller than the human brain. In this study, we present an Allen Mouse Brain Atlas-based image data analysis pipeline for structural connectivity analysis, involving structural region segmentation, using mouse brain structural images and diffusion tensor images. Each analysis step uses reliable software that has already been validated on human and mouse image data. The pipeline is also optimized so that users can process data efficiently, organizing the functions necessary for mouse tractography out of otherwise complex analysis processes and numerous functions.
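Diffusion tensor images like those used in this pipeline are commonly summarized by scalar maps such as fractional anisotropy (FA) before tractography. As a minimal illustration (not code from the paper), FA can be computed from the three tensor eigenvalues:

```python
import numpy as np

def fractional_anisotropy(evals):
    """Fractional anisotropy (FA) from the three eigenvalues of a
    diffusion tensor: 0 for isotropic diffusion, approaching 1 for
    strongly directional diffusion along white-matter fibers."""
    ev = np.asarray(evals, dtype=float)
    mean = ev.mean()
    num = np.sum((ev - mean) ** 2)
    den = np.sum(ev ** 2)
    if den == 0.0:
        return 0.0
    return float(np.sqrt(1.5 * num / den))

print(fractional_anisotropy([1.0, 1.0, 1.0]))  # 0.0 (isotropic)
print(fractional_anisotropy([1.7, 0.2, 0.2]))  # high anisotropy
```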

Expected Segmentation of the Chugaryung Fault System Estimated by the Gravity Field Interpretation (추가령단층대의 중력장 데이터 해석)

  • Choi, Sungchan;Choi, Eun-Kyeong;Kim, Sung-Wook;Lee, Young-Cheol
    • Economic and Environmental Geology
    • /
    • v.54 no.6
    • /
    • pp.743-752
    • /
    • 2021
  • The three-dimensional distribution of faults in the Seoul-Gyeonggi region, where the Chugaryeong fault zone developed, was evaluated using gravity field interpretation methods such as curvature analysis and Euler deconvolution, and the locations of faults were compared with earthquakes that have occurred since 2000. In the Bouguer anomaly, the Pocheon Fault appears as an approximately 100 km structure extending from northern Gyeonggi Province through central Seoul to the west coast; given the high frequency of epicenters along it, it is possibly an active fault. The Wangsukcheon Fault appears divided into segments northeast and southwest of Seoul, but the Bouguer anomaly shows that the fault is connected underground. The magnitude 3.0 earthquake that occurred in Siheung city in 2010 occurred on an anticipated fault (aF) developed in the north-south direction. In the western region of the Dongducheon Fault, the density boundary of the rock mass is deeper (≒5,500 m) than in the eastern region (≒4,000 m), suggesting that the tectonic movements on the two sides of the fault differ. The maximum depth of the fracture zone developed along the Dongducheon Fault is about 6,500 m, the deepest in the study area. The fracture zones are estimated to extend to depths of about 6,000 m for the Pocheon Fault, about 5,000 m for the Wangsukcheon Fault, and about 6,000 m for the Gyeonggang Fault.
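Euler deconvolution, one of the interpretation methods named above, estimates source depth by solving Euler's homogeneity equation over windows of gravity data. A minimal single-window sketch, assuming observations at the surface (z = 0, z positive downward) and a user-chosen structural index N (this is an illustration of the general method, not the authors' implementation):

```python
import numpy as np

def euler_window(x, y, gz, dgdx, dgdy, dgdz, N=1.0):
    """Least-squares solution of Euler's homogeneity equation
        x0*dg/dx + y0*dg/dy + z0*dg/dz + N*B = x*dg/dx + y*dg/dy + N*gz
    for one window of surface gravity data.
    Returns (x0, y0, z0, B): source position and regional background."""
    A = np.column_stack([dgdx, dgdy, dgdz, N * np.ones_like(gz)])
    b = x * dgdx + y * dgdy + N * gz
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B
```

For a point-mass anomaly (structural index N = 2) the recovered z0 is the source depth; in practice the window is slid over the grid and poorly conditioned solutions are rejected.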

The Effect of Training Patch Size and ConvNeXt application on the Accuracy of CycleGAN-based Satellite Image Simulation (학습패치 크기와 ConvNeXt 적용이 CycleGAN 기반 위성영상 모의 정확도에 미치는 영향)

  • Won, Taeyeon;Jo, Su Min;Eo, Yang Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.3
    • /
    • pp.177-185
    • /
    • 2022
  • A method of restoring occluded areas in high-resolution optical satellite images by referring to images taken with the same type of sensor was proposed using deep learning. To make the simulated patch blend naturally with the occluded region and the surrounding image, while preserving the pixel distribution of the original image as much as possible, a CycleGAN (Cycle Generative Adversarial Network) with ConvNeXt blocks applied was used to analyze three experimental regions. We also compared the results for a training patch size of 512*512 pixels against a doubled size of 1024*1024 pixels. Across the three regions with different characteristics, the ConvNeXt-based CycleGAN showed improved R2 values compared to both the existing CycleGAN-applied image and the histogram-matched image. In the patch-size experiment, an R2 value of about 0.98 was obtained for 1024*1024 pixel patches. Furthermore, comparing the pixel distributions for each image band, the simulation trained with the larger patch size showed a histogram distribution more similar to the original image. Therefore, by using the ConvNeXt-based CycleGAN, which is more advanced than the existing CycleGAN method and histogram matching, simulation results similar to the original image can be derived and a successful simulation performed.
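The R2 values quoted above compare simulated against original pixel values; the abstract does not give the exact formula, but a common definition of the coefficient of determination is (a sketch, not the authors' code):

```python
import numpy as np

def r_squared(original, simulated):
    """Coefficient of determination between original and simulated
    pixel values: 1 - (residual sum of squares / total sum of squares)."""
    o = np.asarray(original, dtype=float).ravel()
    s = np.asarray(simulated, dtype=float).ravel()
    ss_res = np.sum((o - s) ** 2)
    ss_tot = np.sum((o - o.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A perfect simulation gives R2 = 1; values near 0.98, as reported for the 1024*1024 patches, indicate pixel values very close to the original.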

Flood Mapping Using Modified U-NET from TerraSAR-X Images (TerraSAR-X 영상으로부터 Modified U-NET을 이용한 홍수 매핑)

  • Yu, Jin-Woo;Yoon, Young-Woong;Lee, Eu-Ru;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_2
    • /
    • pp.1709-1722
    • /
    • 2022
  • The rise in temperature induced by global warming has resulted in El Niño and La Niña events and abnormally changed seawater temperatures. Because of these abnormal variations in seawater temperature, rainfall concentrates in some locations, causing frequent abnormal floods. Rapidly detecting flooded regions is important for recovery and for preventing human and property damage, and this is possible with synthetic aperture radar (SAR). This study aims to build a model that directly derives flood-damaged areas using a modified U-NET and TerraSAR-X images: multi-kernel blocks reduce the effect of speckle noise by extracting diverse feature maps, and the two images acquired before and after flooding serve as input data. To that end, the two SAR images were preprocessed to generate the model's input data, which was then applied to the modified U-NET structure to train the flood detection deep learning model. With this method, flooded areas could be detected at a high level, with an average F1 score of 0.966. This result is expected to contribute to the rapid recovery of flood-stricken areas and the derivation of flood-prevention measures.
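The average F1 score of 0.966 quoted above combines precision and recall on the predicted binary flood mask. A minimal sketch of how such a score is computed (illustrative only):

```python
import numpy as np

def f1_score_binary(y_true, y_pred):
    """F1 score for a binary flood mask (1 = flooded, 0 = background):
    the harmonic mean of precision and recall."""
    t = np.asarray(y_true).astype(bool).ravel()
    p = np.asarray(y_pred).astype(bool).ravel()
    tp = np.sum(t & p)    # true positives
    fp = np.sum(~t & p)   # false positives
    fn = np.sum(t & ~p)   # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```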

Development of Chinese Cabbage Detection Algorithm Based on Drone Multi-spectral Image and Computer Vision Techniques (드론 다중분광영상과 컴퓨터 비전 기술을 이용한 배추 객체 탐지 알고리즘 개발)

  • Ryu, Jae-Hyun;Han, Jung-Gon;Ahn, Ho-yong;Na, Sang-Il;Lee, Byungmo;Lee, Kyung-do
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_1
    • /
    • pp.535-543
    • /
    • 2022
  • In agriculture, drones are used to diagnose crop growth and provide information through images. With high-spatial-resolution drone images, growth information can be produced for each object, but this requires accurate object detection, and adjacent objects must be efficiently separated. The purpose of this study is to develop a Chinese cabbage object detection algorithm using multispectral reflectance images observed from a drone together with computer vision techniques. Drone images were captured between 7 and 15 days after planting Chinese cabbage each year from 2018 to 2020. The thresholds of the object detection algorithm were set using the 2019 data, and the algorithm was evaluated on the 2018 and 2019 images. The vegetation area was first classified using the characteristics of spectral reflectance. Morphology techniques such as dilation and erosion, and image segmentation that considers object size, were then applied to improve object detection accuracy within the vegetation area. The precision of the developed algorithm was over 95.19%, and the recall and accuracy were over 95.4% and 93.68%, respectively. The F1-score of the algorithm was over 0.967 for the two years. The location of the center of each Chinese cabbage object extracted with the developed algorithm will be used to provide decision-making information during the crop growing season.
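The dilation/erosion and size-based segmentation steps described above can be sketched with SciPy's morphology routines. This is a simplified stand-in for the authors' pipeline (the structuring element, minimum size, and input mask are assumptions): clean the binary vegetation mask with an opening, then label connected components and keep the sufficiently large ones as cabbage candidates:

```python
import numpy as np
from scipy import ndimage

def detect_objects(veg_mask, min_size=20):
    """Open (erode then dilate) a binary vegetation mask to remove
    speckle, label connected components, and return the (row, col)
    centroid of each component at least min_size pixels large."""
    opened = ndimage.binary_dilation(ndimage.binary_erosion(veg_mask))
    labels, n = ndimage.label(opened)
    centroids = []
    for i in range(1, n + 1):
        component = labels == i
        if component.sum() >= min_size:
            centroids.append(ndimage.center_of_mass(component))
    return centroids
```

The returned centroids correspond to the per-object center locations that the abstract says are extracted for decision support.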

Spatial Replicability Assessment of Land Cover Classification Using Unmanned Aerial Vehicle and Artificial Intelligence in Urban Area (무인항공기 및 인공지능을 활용한 도시지역 토지피복 분류 기법의 공간적 재현성 평가)

  • Geon-Ung, PARK;Bong-Geun, SONG;Kyung-Hun, PARK;Hung-Kyu, LEE
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.25 no.4
    • /
    • pp.63-80
    • /
    • 2022
  • As technologies for analyzing and predicting issues by reconstructing real space in virtual space have developed, acquiring precise spatial information in complex cities has become increasingly important. In this study, images of an urban area with a complex landscape were acquired using an unmanned aerial vehicle, and land cover classification was performed using object-based image analysis (OBIA) and semantic segmentation, image classification techniques suited to high-resolution imagery. In addition, based on imagery collected at the same time, the replicability of each artificial intelligence (AI) model's land cover classification was examined on areas the model had not learned. When trained on the training site, the models achieved land cover classification accuracies of 89.3% for OBIA-RF, 85.0% for OBIA-DNN, and 95.3% for U-Net. When applied to the replicability assessment site, the accuracy of OBIA-RF decreased by 7%, OBIA-DNN by 2.1%, and U-Net by 2.3%. U-Net, which considers both morphological and spectral characteristics, performed well in both land cover classification accuracy and the replicability evaluation. As precise spatial information becomes more important, the results of this study are expected to contribute to urban environment research as a method for generating basic data.

A Study on Class Sample Extraction Technique Using Histogram Back-Projection for Object-Based Image Classification (객체 기반 영상 분류를 위한 히스토그램 역투영을 이용한 클래스 샘플 추출 기법에 관한 연구)

  • Chul-Soo Ye
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.157-168
    • /
    • 2023
  • Image segmentation and supervised classification techniques are widely used to monitor the ground surface with high-resolution remote sensing images. To classify various objects, one must define a class for each object and select samples belonging to each class. Existing class sample extraction methods require selecting a sufficient number of samples with similar intensity characteristics for each class; this process depends on the user's visual identification and takes a lot of time. The representative samples extracted for a class are likely to vary from user to user, so classification performance is strongly affected by the sample extraction result. In this study, we propose an image classification technique that applies histogram back-projection to minimize user intervention in class sample extraction and to keep the intensity characteristics of the samples in each class consistent. The proposed technique showed improved classification accuracy over the same technique without histogram back-projection, both in the experiment using the hue subchannel of the hue-saturation-value transform of Compact Advanced Satellite 500-1 imagery and in the experiment using the original image.
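Histogram back-projection itself is a standard operation: each image pixel is replaced by the (normalized) frequency of its intensity bin in a class sample's histogram, giving a per-pixel likelihood of belonging to that class. A minimal single-channel sketch (the paper applies the idea to hue subchannels; the bin count and value range here are arbitrary choices):

```python
import numpy as np

def back_project(image, sample, bins=32, value_range=(0, 256)):
    """Back-project the intensity histogram of a class sample onto an
    image: each pixel receives the normalized frequency of its
    intensity bin in the sample histogram (1.0 = most typical value)."""
    hist, edges = np.histogram(sample.ravel(), bins=bins, range=value_range)
    hist = hist / hist.max()
    idx = np.clip(np.digitize(image.ravel(), edges) - 1, 0, bins - 1)
    return hist[idx].reshape(image.shape)
```

Thresholding the back-projected map then yields candidate pixels with intensity characteristics consistent with the sample, which is the property the proposed technique exploits.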

Detection and Grading of Compost Heap Using UAV and Deep Learning (UAV와 딥러닝을 활용한 야적퇴비 탐지 및 관리등급 산정)

  • Miso Park;Heung-Min Kim;Youngmin Kim;Suho Bak;Tak-Young Kim;Seon Woong Jang
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.1
    • /
    • pp.33-43
    • /
    • 2024
  • This research assessed the applicability of the You Only Look Once (YOLO)v8 and DeepLabv3+ models for the effective detection of compost heaps, identified as a significant source of non-point source pollution. Utilizing high-resolution imagery acquired through unmanned aerial vehicles (UAVs), the study conducted a comprehensive comparison and analysis of quantitative and qualitative performance. In the quantitative evaluation, the YOLOv8 model demonstrated superior performance across various metrics, particularly in its ability to accurately distinguish the presence or absence of covers on compost heaps. These outcomes imply that the YOLOv8 model is highly effective in the precise detection and classification of compost heaps, providing a novel approach for assessing their management grades and contributing to non-point source pollution management. The study suggests that using UAVs and deep learning to detect and manage compost heaps can address the constraints of traditional field survey methods, facilitating the establishment of accurate and effective non-point source pollution management strategies and contributing to the safeguarding of aquatic environments.

A Green View Index Improvement Program for Urban Roads Using a Green Infrastructure Theory - Focused on Chengdu City, Sichuan Province, China - (그린인프라스트럭처 개념을 적용한 가로 녹시율 개선 방안 - 중국 쓰촨성(四川省) 청두시(成都市)을 중심으로 -)

  • Hou, ShuJun;Jung, Taeyeol
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.51 no.6
    • /
    • pp.61-74
    • /
    • 2023
  • The concept of "green infrastructure" emphasizes the close relationship between natural and urban social systems, providing services that protect the ecological environment and improve the quality of human life. The Green View Index (GVI) is an important indicator for measuring the supply of urban green space and, compared with the green space ratio, captures more of the 3D spatial elements of greenery. This study focused on the area within the Third Ring Road of Chengdu, Sichuan Province, China, with three purposes: first, to analyze the spatial distribution characteristics of the GVI in urban streets and its correlation with the urban park green space system using Street View image data; second, to analyze the characteristics of streets with a low GVI; and third, to analyze the connectivity between road traffic and street GVI using space syntax. The study found, first, that the street GVI was higher in the southwestern part of the study area than in the northeastern part, and that its spatial distribution correlated with urban park green space. Second, street areas with a low GVI are mainly concentrated in areas with dense commercial facilities, newly constructed areas, areas around elevated roads, roads below Class 4, and crossroads. Third, areas with high integration and low GVI were mainly concentrated within the First Ring Road, where vehicles and population are concentrated. This study provides base material for future programs to improve the street GVI in Chengdu, Sichuan Province.
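At its core, the GVI measured above is the fraction of a street-view image's pixels occupied by vegetation. A toy sketch with a naive greenness rule (real studies, including presumably this one, extract vegetation with semantic segmentation rather than a channel comparison):

```python
import numpy as np

def green_view_index(rgb):
    """Green View Index of a street-view image: the fraction of pixels
    classified as vegetation. Here a pixel counts as green when its G
    channel strictly dominates both R and B (a deliberately simple rule)."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    green = (g > r) & (g > b)
    return float(green.mean())
```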

Automated Lung Segmentation on Chest Computed Tomography Images with Extensive Lung Parenchymal Abnormalities Using a Deep Neural Network

  • Seung-Jin Yoo;Soon Ho Yoon;Jong Hyuk Lee;Ki Hwan Kim;Hyoung In Choi;Sang Joon Park;Jin Mo Goo
    • Korean Journal of Radiology
    • /
    • v.22 no.3
    • /
    • pp.476-488
    • /
    • 2021
  • Objective: We aimed to develop a deep neural network for segmenting lung parenchyma with extensive pathological conditions on non-contrast chest computed tomography (CT) images. Materials and Methods: Thin-section non-contrast chest CT images from 203 patients (115 males, 88 females; age range, 31-89 years) between January 2017 and May 2017 were included in the study, of which 150 cases had extensive lung parenchymal disease involving more than 40% of the parenchymal area. Parenchymal diseases included interstitial lung disease (ILD), emphysema, nontuberculous mycobacterial lung disease, tuberculous destroyed lung, pneumonia, lung cancer, and other diseases. Five experienced radiologists manually drew the margin of the lungs, slice by slice, on CT images. The dataset used to develop the network consisted of 157 cases for training, 20 cases for development, and 26 cases for internal validation. Two-dimensional (2D) U-Net and three-dimensional (3D) U-Net models were used for the task. The network was trained to segment the lung parenchyma as a whole and segment the right and left lung separately. The University Hospitals of Geneva ILD dataset, which contained high-resolution CT images of ILD, was used for external validation. Results: The Dice similarity coefficients for internal validation were 99.6 ± 0.3% (2D U-Net whole lung model), 99.5 ± 0.3% (2D U-Net separate lung model), 99.4 ± 0.5% (3D U-Net whole lung model), and 99.4 ± 0.5% (3D U-Net separate lung model). The Dice similarity coefficients for the external validation dataset were 98.4 ± 1.0% (2D U-Net whole lung model) and 98.4 ± 1.0% (2D U-Net separate lung model). In 31 cases, where the extent of ILD was larger than 75% of the lung parenchymal area, the Dice similarity coefficients were 97.9 ± 1.3% (2D U-Net whole lung model) and 98.0 ± 1.2% (2D U-Net separate lung model). 
Conclusion: The deep neural network achieved excellent performance in automatically delineating the boundaries of lung parenchyma with extensive pathological conditions on non-contrast chest CT images.
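The Dice similarity coefficients reported above measure the overlap between the predicted lung masks and the radiologists' manual delineations. For reference, the standard definition, sketched in code:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation
    masks: 2 * |A intersect B| / (|A| + |B|), ranging from 0 (no
    overlap) to 1 (identical masks)."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```

Values around 0.99, as reported for the internal validation, mean the automatic and manual lung boundaries are nearly pixel-identical.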