• Title/Summary/Keyword: Deep features

Search results: 1,093 items

북동태평양 KODOS 해역 심해 해저특성에 따른 초대형저서동물 분포 (The Distribution of Epifaunal Megabenthos Varies with Deep-sea Sediment Conditions in the Korea Deep Ocean Study Area (KODOS) of the North-eastern Pacific)

  • 유옥환;손주원;함동진;이근창;김경홍
    • Ocean and Polar Research, Vol. 36, No. 4, pp. 447-454, 2014
  • In August 2013, we collected epifaunal megabenthos using a deep-sea camera (DSC) around a benthic impact study (BIS) site located in the KR5 block of the Korea Deep Ocean Study (KODOS) area in the Northeastern Pacific. The DSC was positioned 6.8 ± 2.9 m (SD) above the sea bottom and was operated between 131°56.85′W and 131°55.02′W for 2.3 h at a speed of 1-2 knots. The geographical features of the study area consisted of two structures: a trough in the middle and hills on the east and west sides. Sediment conditions were consistent within six blocks and were affected by slope and polymetallic nodule deposits. We analyzed 226 megafaunal species. Sipunculida comprised the highest percentage of individuals (39%), and the dominant epifaunal megabenthos were Hormathiidae sp., Primnoidae sp., Hexactinellida sp., Hyphalaster inermis, Freyella benthophila, Paelopatides confundens, Psychropotes longicauda, and Peniagone leander. More than 80% of the total megafaunal density occurred on the sea plain (D- and E-blocks). We found two distinct groups in the community, one located on the sea plains and the other along both sides of the sea slope. Our results suggest that geographical features such as slope and polymetallic nodule deposits are important in controlling the distribution of the epifaunal megabenthos around the KODOS area.

DeepSDO: Solar event detection using deep-learning-based object detection methods

  • Baek, Ji-Hye;Kim, Sujin;Choi, Seonghwan;Park, Jongyeob;Kim, Jihun;Jo, Wonkeum;Kim, Dongil
    • 천문학회보, Vol. 46, No. 2, pp. 46.2-46.2, 2021
  • We present automatic detection of solar events using deep-learning-based object detection algorithms and the DeepSDO event dataset. The DeepSDO event dataset is a new detection dataset with bounding boxes as ground truth for three solar event features (coronal holes, sunspots, and prominences), built from Solar Dynamics Observatory data. To assess the reliability of the DeepSDO event dataset, we compared it to HEK data. We trained two representative object detection models, the Single Shot MultiBox Detector (SSD) and the Faster Region-based Convolutional Neural Network (Faster R-CNN), with the DeepSDO event dataset. We compared the performance of the two models for the three solar events, and this study demonstrates that deep-learning-based object detection can successfully detect multiple types of solar events. In addition, we provide the DeepSDO event dataset to support further work on event detection in solar physics.
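As context for the kind of detector fine-tuning described above, the sketch below adapts the standard torchvision Faster R-CNN to three illustrative solar event classes. This is a hedged stand-in, not the DeepSDO code: the class count, learning rate, and data handling are assumptions.

```python
# Hedged sketch: fine-tuning a torchvision Faster R-CNN for three hypothetical
# solar event classes (coronal hole, sunspot, prominence). All training details
# here are illustrative assumptions, not the DeepSDO configuration.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # 3 solar event classes + background

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_one_batch(images, targets):
    """images: list of CxHxW tensors; targets: list of dicts with 'boxes' and 'labels'."""
    model.train()
    loss_dict = model(images, targets)   # torchvision returns a dict of detection losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```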


작물 분류에서 시공간 특징을 고려하기 위한 2D CNN과 양방향 LSTM의 결합 (Combining 2D CNN and Bidirectional LSTM to Consider Spatio-Temporal Features in Crop Classification)

  • 곽근호;박민규;박찬원;이경도;나상일;안호용;박노욱
    • 대한원격탐사학회지, Vol. 35, No. 5_1, pp. 681-692, 2019
  • In this paper, we propose 2D convolution with bidirectional long short-term memory (2DCBLSTM), a deep learning model that can account for the spatio-temporal features of crops, for the purpose of crop classification. The proposed model first applies 2D convolution operators to extract the spatial features of crops, and the extracted spatial features are then used as the input to a bidirectional LSTM model that can account for temporal features. To evaluate the classification performance of the proposed model, a case study of field crop classification was carried out using multi-temporal unmanned aerial vehicle images collected in Anbandegi. For comparison, existing deep learning models were applied: a 2D convolutional neural network (CNN) using 2D spatial features, an LSTM using temporal features, and a 3D CNN using 3D spatio-temporal features. Through an analysis of the effects of hyper-parameters, using spatio-temporal features markedly reduced the misclassification of crops, and the proposed model achieved the best classification accuracy compared with the existing deep learning models that consider only spatial or temporal features. Therefore, the model proposed in this study is expected to be effectively applicable to crop classification because it can account for the spatio-temporal features of crops.
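As a rough illustration of the architecture described above (not the authors' code), the sketch below applies a shared 2D CNN to each date of a multi-temporal patch and feeds the per-date features to a bidirectional LSTM. Patch size, channel counts, and hidden sizes are assumed values.

```python
# Hedged sketch of a 2D-convolution + bidirectional-LSTM classifier (2DCBLSTM-style):
# per-date spatial features -> temporal Bi-LSTM -> class scores.
# Layer sizes are illustrative assumptions, not the published configuration.
import torch
import torch.nn as nn

class CBLSTM2D(nn.Module):
    def __init__(self, in_channels=4, num_classes=6, hidden=128):
        super().__init__()
        self.spatial = nn.Sequential(            # shared 2D CNN applied to each date
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),             # -> (N*T, 64, 1, 1)
        )
        self.temporal = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                        # x: (N, T, C, H, W)
        n, t = x.shape[:2]
        feats = self.spatial(x.flatten(0, 1)).flatten(1)   # (N*T, 64)
        feats = feats.view(n, t, -1)                        # (N, T, 64)
        out, _ = self.temporal(feats)                       # (N, T, 2*hidden)
        return self.head(out[:, -1])                        # classify from last step

logits = CBLSTM2D()(torch.randn(2, 8, 4, 32, 32))           # e.g. 8 dates, 32x32 patches
```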

DeepLabV3+를 이용한 이종 센서의 구름탐지 기법 연구 (A Study on the Cloud Detection Technique of Heterogeneous Sensors Using Modified DeepLabV3+)

  • 김미정;고윤호
    • 대한원격탐사학회지, Vol. 38, No. 5_1, pp. 511-521, 2022
  • Cloud detection and removal in satellite imagery is an essential process for terrain observation and analysis. Threshold-based cloud detection techniques exploit the physical properties of clouds and therefore show stable performance, but they have the disadvantages of long computation times and requiring imagery from all channels together with metadata. Deep-learning-based cloud detection techniques, which have been actively studied in recent years, achieve short computation times and excellent performance while using only four or fewer channels (RGB, NIR). In this paper, we examined how strongly the performance of a deep learning network depends on the training dataset, using heterogeneous datasets with different spatial resolutions. To this end, the DeepLabV3+ network was modified so that channel-wise features for cloud detection could be extracted, and it was trained separately on two publicly available heterogeneous datasets and on mixed data. Experimental results showed that networks trained only on images of a different type from the test images yielded low Jaccard indices. However, networks trained on mixed data that included some data of the same type as the test data showed high Jaccard indices. This is because clouds, unlike objects, have no structured shape, so reflecting channel-wise characteristics rather than spatial characteristics in training is effective for cloud detection, and the channel-wise features of each satellite sensor therefore need to be learned. Through this study, we confirmed that cloud detection with heterogeneous sensors of different resolutions is highly dependent on the training dataset.
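For context, a hedged sketch of the two mechanical pieces mentioned above is given below: adapting an off-the-shelf segmentation network to 4-channel (RGB+NIR) input and computing the Jaccard index used for evaluation. torchvision ships plain DeepLabV3, not the modified DeepLabV3+ of the paper, so this is a stand-in; the channel and class counts are assumptions.

```python
# Hedged sketch: 4-channel input adaptation of torchvision's DeepLabV3 (a stand-in
# for the paper's modified DeepLabV3+) plus the Jaccard index for cloud masks.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2)
# Replace the first ResNet conv so the network accepts RGB+NIR (4 channels).
model.backbone.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)

def jaccard_index(pred_mask: torch.Tensor, true_mask: torch.Tensor) -> float:
    """Intersection over union of two binary cloud masks."""
    pred, true = pred_mask.bool(), true_mask.bool()
    inter = (pred & true).sum().item()
    union = (pred | true).sum().item()
    return inter / union if union else 1.0

x = torch.randn(1, 4, 256, 256)          # one RGB+NIR patch
pred = model(x)["out"].argmax(dim=1)     # (1, 256, 256) predicted cloud mask
print(jaccard_index(pred, torch.zeros_like(pred)))
```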

Research on data augmentation algorithm for time series based on deep learning

  • Shiyu Liu;Hongyan Qiao;Lianhong Yuan;Yuan Yuan;Jun Liu
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 17, No. 6, pp. 1530-1544, 2023
  • Data monitoring is an important foundation of modern science. In most cases, monitoring data are time series, which have high application value. Deep learning algorithms have a strong nonlinear fitting capability, which enables the recognition of time series by capturing their anomalous information. At present, research on time-series recognition based on deep learning is especially important for data monitoring. Deep learning algorithms require a large amount of data for training. However, abnormal samples are rare in time series, and this class imbalance can seriously affect the accuracy of recognition algorithms. In order to increase the number of abnormal samples, a data augmentation method called GANBATS (GAN-based Bi-LSTM and Attention for Time Series) is proposed. In GANBATS, a Bi-LSTM is introduced to extract timing features, which are then transferred to the generator network of GANBATS. GANBATS also modifies the discriminator network by adding an attention mechanism to achieve global attention over the time series. At the end of the discriminator, GANBATS adds an average-pooling layer, which merges temporal features to boost operational efficiency. In this paper, four time-series datasets and five data augmentation algorithms are used for comparison experiments. The generated data are measured by PRD (percent root mean square difference) and DTW (dynamic time warping). The experimental results show that GANBATS reduces the PRD metric by up to 26.22 and the DTW metric by up to 9.45. In addition, this paper uses the different algorithms to reconstruct the datasets and compares them by classification accuracy, which is improved by 6.44%-12.96% on the four time-series datasets.
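The two evaluation metrics quoted above are standard; a small reference sketch of how PRD and a basic dynamic-programming DTW distance can be computed between an original and a generated series is shown below (plain reference implementations, not the paper's code).

```python
# Hedged sketch of the two similarity metrics used to score generated series:
# PRD (percent root-mean-square difference) and a basic O(n*m) DTW distance.
import numpy as np

def prd(x: np.ndarray, y: np.ndarray) -> float:
    """Percent root-mean-square difference between original x and generated y."""
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

def dtw(x: np.ndarray, y: np.ndarray) -> float:
    """Classic dynamic-time-warping distance with absolute-difference cost."""
    n, m = len(x), len(y)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return float(d[n, m])

rng = np.random.default_rng(0)
a = np.sin(np.linspace(0, 6, 200))
b = a + 0.05 * rng.standard_normal(200)   # a noisy "generated" series
print(prd(a, b), dtw(a, b))
```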

Deep Wide-Field Imaging of Nearby Galaxies with KMTNet telescopes

  • Kim, Minjin;Ho, Luis C.;Park, Byeong-Gon;Lee, Joon Hyeop;Seon, Kwang-Il;Jeong, Hyunjin;Kim, Sang Chul
    • 천문학회보, Vol. 40, No. 1, pp. 57.1-57.1, 2015
  • We will obtain deep wide-field images of 150-200 nearby bright galaxies in the southern hemisphere in order to explore the origin of faint extended features in the outer regions of the target galaxies. Using the KMTNet telescopes, we will take very deep images, spending ~4.5 hr in the B and R filters for each object. With this dataset, we will look for diffuse, low-surface-brightness structures including outer disks, truncated disks, tidal features/stellar streams, and faint companions.


랜덤 변환에 대한 컨볼루션 뉴럴 네트워크를 이용한 특징 추출 (Feature Extraction Using Convolutional Neural Networks for Random Translation)

  • 진태석
    • 한국산업융합학회 논문집, Vol. 23, No. 3, pp. 515-521, 2020
  • Deep learning methods have been used effectively to provide great improvements in various research fields such as machine learning, image processing, and computer vision. One of the most frequently used deep learning methods in image processing is the convolutional neural network. Compared to traditional artificial neural networks, convolutional neural networks do not use predefined kernels; instead, they learn data-specific kernels. This property allows them to be used as feature extractors as well. In this study, we compared the quality of CNN features with that of traditional texture feature extraction methods. Experimental results demonstrate the superiority of the CNN features. Additionally, the recognition process and results of a pioneering CNN on the MNIST database are presented.
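As a concrete illustration of using a CNN as a generic feature extractor (a hedged sketch, not the network evaluated in the paper), the snippet below truncates a pretrained torchvision ResNet-18 before its classification layer and returns pooled deep features for a batch of images; the choice of ResNet-18 is an assumption for illustration.

```python
# Hedged sketch: a pretrained CNN used as a feature extractor by dropping the
# final classification layer; ImageNet-pretrained ResNet-18 is illustrative only.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)          # downloads ImageNet weights
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
feature_extractor.eval()

with torch.no_grad():
    images = torch.randn(8, 3, 224, 224)                # a batch of (translated) inputs
    features = feature_extractor(images).flatten(1)     # (8, 512) deep feature vectors
print(features.shape)
```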

Super-resolution in Music Score Images by Instance Normalization

  • Tran, Minh-Trieu;Lee, Guee-Sang
    • 스마트미디어저널, Vol. 8, No. 4, pp. 64-71, 2019
  • The performance of an OMR (Optical Music Recognition) system is usually determined by the characterizing features of the input music score images. Low resolution is one of the main factors leading to degraded image quality. In this paper, we handle the low-resolution problem using a super-resolution technique. We propose the use of a deep neural network with instance normalization to improve the quality of music score images. We apply instance normalization, which has proven beneficial in single-image enhancement. It works better than batch normalization, which shows the effectiveness of shifting the mean and variance of deep features at the instance level. The proposed method provides an end-to-end mapping between low- and high-resolution images. New images are then created whose resolution is four times higher than that of the original images. Our model has been evaluated on the "DeepScores" dataset and shown to outperform other existing methods.
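A minimal sketch of the kind of building block the abstract describes, i.e. residual blocks with instance normalization followed by PixelShuffle upsampling for a 4x scale factor, is shown below; layer widths, depth, and the single-channel input are assumptions, not the authors' configuration.

```python
# Hedged sketch: instance-normalized residual blocks followed by two PixelShuffle
# stages for 4x super-resolution of single-channel score images. Sizes are assumed.
import torch
import torch.nn as nn

class INResBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)                       # residual connection

class TinySR4x(nn.Module):
    def __init__(self, ch=64, blocks=4):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[INResBlock(ch) for _ in range(blocks)])
        self.up = nn.Sequential(                      # two 2x stages -> 4x overall
            nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2), nn.ReLU(),
            nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.up(self.blocks(self.head(x)))

out = TinySR4x()(torch.randn(1, 1, 32, 64))           # -> (1, 1, 128, 256)
```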

Domain Shift 문제를 해결하기 위해 안개 특징을 이용한 딥러닝 기반 안개 제거 방법 (Deep learning-based de-fogging method using fog features to solve the domain shift problem)

  • 심휘보;강봉순
    • 한국멀티미디어학회논문지, Vol. 24, No. 10, pp. 1319-1325, 2021
  • Because images taken in foggy adverse weather suffer from poor quality due to the scattering and absorption of light, which degrades the performance of various vision-based applications, it is important to remove fog during preprocessing for accurate object recognition and detection. This paper proposes an end-to-end deep-learning-based single-image de-fogging method using a U-Net architecture. The loss function used in the algorithm is based on the Mahalanobis distance of fog features, which solves the domain shift problem, and the method demonstrates superior performance in qualitative and quantitative evaluations compared with conventional methods. We also design the network to generate fog through a VGG19 loss function and use the result as the next training dataset.
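To make the loss idea concrete, here is a hedged sketch of a Mahalanobis-distance term computed on a per-image "fog feature" vector. The feature definition (simple channel statistics) and the fog-free reference statistics are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: a Mahalanobis-distance loss on per-image fog-feature vectors.
# The fog features (channel means/stds) and the reference mean/covariance of
# fog-free images are illustrative assumptions, not the published definition.
import torch

def fog_features(img: torch.Tensor) -> torch.Tensor:
    """img: (N, 3, H, W) -> simple (N, 6) feature of channel means and stds."""
    return torch.cat([img.mean(dim=(2, 3)), img.std(dim=(2, 3))], dim=1)

def mahalanobis_loss(img, ref_mean, ref_cov_inv):
    """Mean squared Mahalanobis distance of fog features to fog-free statistics."""
    d = fog_features(img) - ref_mean                  # (N, 6)
    return (d @ ref_cov_inv * d).sum(dim=1).mean()    # d^T Sigma^-1 d, averaged

# Reference statistics would normally be estimated from clear (fog-free) images.
ref_mean = torch.zeros(6)
ref_cov_inv = torch.eye(6)
loss = mahalanobis_loss(torch.rand(4, 3, 128, 128), ref_mean, ref_cov_inv)
```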

Vehicle Image Recognition Using Deep Convolution Neural Network and Compressed Dictionary Learning

  • Zhou, Yanyan
    • Journal of Information Processing Systems, Vol. 17, No. 2, pp. 411-425, 2021
  • In this paper, a vehicle recognition algorithm based on a deep convolutional neural network and a compressed dictionary is proposed. First, the network structure for fine-grained vehicle recognition based on a convolutional neural network is introduced. Then, a vehicle recognition system based on a multi-scale pyramid convolutional neural network is constructed. The contribution of the different networks to the recognition result, i.e., the proportion of each network's output in the output of the entire multi-scale network, is adjusted by an adaptive fusion method according to the recognition accuracy of each single network. Then, compressed dictionary learning and data dimensionality reduction are carried out using an effective block-structure method combined with a very sparse random projection matrix, which resolves the computational complexity caused by high-dimensional features and shortens the dictionary learning time. Finally, a sparse representation classification method is used to realize vehicle type recognition. The experimental results show that the detection performance of the proposed algorithm is stable in sunny, cloudy, and rainy weather, and that it adapts well to typical application scenarios such as occlusion and blurring, with an average recognition rate of more than 95%.
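As a small, hedged illustration of one step mentioned above, dimensionality reduction of high-dimensional features with a very sparse random projection matrix, the snippet below uses scikit-learn's SparseRandomProjection; a simple logistic-regression classifier stands in for the sparse-representation classification stage, and the feature dimension, component count, and class count are assumptions.

```python
# Hedged sketch: reducing high-dimensional features with a very sparse random
# projection matrix before classification; sizes and classifier are assumed.
import numpy as np
from sklearn.random_projection import SparseRandomProjection
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.standard_normal((500, 4096))           # e.g. deep features per vehicle image
labels = rng.integers(0, 5, size=500)                 # 5 hypothetical vehicle types

proj = SparseRandomProjection(n_components=256, density=1 / 64, random_state=0)
reduced = proj.fit_transform(features)                # (500, 256), much cheaper to learn on

clf = LogisticRegression(max_iter=1000).fit(reduced, labels)
print(clf.score(reduced, labels))
```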