• Title/Abstract/Keyword: LeNet

Search results: 102 items (processing time: 0.024 s)

제주 연안의 소대망에서 조석에 의한 어획량 변동 (Catch fluctuation of the Pound Set Net According to Tide Age in the Coastal Waters of Jeju)

  • 김병엽;서두옥;이창헌
    • Korean Journal of Fisheries and Aquatic Sciences / Vol.42 No.1 / pp.83-88 / 2009
  • The purpose of this paper is to obtain fundamental data on catch fluctuation in the pound set net according to tide age, based on catch records from 1997 to 2004 in the coastal waters of Guideok, Jeju. Total catch by the pound set net had little connection with the tide age. During the increasing tide, total catch decreased slightly from the neap tide toward the high tide, while there was a slight sign of a rise in total catch through the decreasing tide. However, the correlation between catch and tide age was not significant at $p{\le}0.05$. Therefore, the catch of the pound set net did not appear to be influenced by the tide age. In addition, CPUE at the high tide was higher than that at the neap tide. When the catch per operation was graded into classes of under 50 kg, 50-100 kg, and 100-200 kg, the corresponding frequency rates for the pound set net were 38%, 19%, and 19%, respectively.
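
The significance test described above, a correlation between catch and tide age assessed at p ≤ 0.05, can be reproduced on any catch log with a few lines of SciPy. This is only a sketch: the array values below are illustrative placeholders, not the paper's 1997-2004 records.

```python
# Sketch of the catch-vs-tide-age correlation test described above.
# The numbers are made-up placeholders, not the paper's catch data.
import numpy as np
from scipy import stats

tide_age = np.arange(1, 16)                      # tide age 1..15 (one lunar half-cycle)
catch_kg = np.array([120, 95, 80, 60, 55, 70, 90, 140,
                     150, 130, 110, 100, 85, 75, 65], dtype=float)

r, p_value = stats.pearsonr(tide_age, catch_kg)  # correlation coefficient and two-sided p-value
print(f"r = {r:.3f}, p = {p_value:.3f}")
if p_value > 0.05:
    # Matches the paper's conclusion: no significant relation at p <= 0.05.
    print("Correlation not significant at p <= 0.05")
```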

MEDU-Net+: a novel improved U-Net based on multi-scale encoder-decoder for medical image segmentation

  • Zhenzhen Yang;Xue Sun;Yongpeng Yang;Xinyi Wu
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.18 No.7 / pp.1706-1725 / 2024
  • The unique U-shaped structure of the U-Net network allows it to achieve good performance in image segmentation. It is a lightweight network with a small number of parameters, suited to small image segmentation datasets. However, when the medical image to be segmented contains a lot of detailed information, the segmentation results cannot fully meet practical requirements. To achieve higher accuracy in medical image segmentation, a novel improved U-Net architecture called multi-scale encoder-decoder U-Net+ (MEDU-Net+) is proposed in this paper. We adopt GoogLeNet in the encoder of the proposed MEDU-Net+ to capture more information, and present multi-scale feature extraction to fuse semantic information of different scales in the encoder and decoder. Meanwhile, we introduce layer-by-layer skip connections to link the information of each layer, so that there is no need to encode down to the last layer and pass the information back. The proposed MEDU-Net+ divides the network of unknown depth into deconvolution layers at each level, replacing the direct connection between the encoder and decoder in U-Net. In addition, a new combined loss function is proposed to extract more edge information by combining the advantages of the generalized Dice and focal loss functions. Finally, we validate the proposed MEDU-Net+ and other classic medical image segmentation networks on three medical image datasets. The experimental results show that the proposed MEDU-Net+ has clearly superior performance compared with other medical image segmentation networks.
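
The combined loss mentioned above, generalized Dice plus focal loss, can be rendered as a short PyTorch sketch. The equal 1:1 weighting of the two terms and gamma = 2 are assumptions for illustration, not the authors' exact formulation.

```python
# Minimal sketch of a generalized-Dice + focal combined segmentation loss.
# Assumptions: equal weighting of the two terms, gamma = 2 (the paper's coefficients are not given here).
import torch
import torch.nn.functional as F

def generalized_dice_loss(logits, target_onehot, eps=1e-6):
    probs = torch.softmax(logits, dim=1)                       # (N, C, H, W)
    w = 1.0 / (target_onehot.sum(dim=(0, 2, 3)) ** 2 + eps)    # per-class weights ~ 1/volume^2
    inter = (w * (probs * target_onehot).sum(dim=(0, 2, 3))).sum()
    union = (w * (probs + target_onehot).sum(dim=(0, 2, 3))).sum()
    return 1.0 - 2.0 * inter / (union + eps)

def focal_loss(logits, target, gamma=2.0):
    ce = F.cross_entropy(logits, target, reduction="none")     # per-pixel cross entropy
    pt = torch.exp(-ce)                                        # probability of the true class
    return ((1.0 - pt) ** gamma * ce).mean()

def combined_loss(logits, target, num_classes, alpha=1.0, beta=1.0):
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    return alpha * generalized_dice_loss(logits, onehot) + beta * focal_loss(logits, target)

# Usage: logits of shape (N, C, H, W), integer mask of shape (N, H, W)
logits = torch.randn(2, 3, 64, 64, requires_grad=True)
mask = torch.randint(0, 3, (2, 64, 64))
loss = combined_loss(logits, mask, num_classes=3)
loss.backward()
```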

Daily Feed Intake, Energy Intake, Growth Rate and Measures of Dietary Energy Efficiency of Pigs from Four Sire Lines Fed Diets with High or Low Metabolizable and Net Energy Concentrations

  • Schinckel, A.P.;Einstein, M.E.;Jungst, S.;Matthews, J.O.;Booher, C.;Dreadin, T.;Fralick, C.;Wilson, E.;Boyd, R.D.
    • Asian-Australasian Journal of Animal Sciences / Vol.25 No.3 / pp.410-420 / 2012
  • A trial was conducted to: i) evaluate the BW growth, energy intakes and energetic efficiency of pigs fed high and low density diets from 27 to 141 kg BW, ii) evaluate sire line and sex differences when fed both diets, and iii) compare ME to NE as predictors of pig performance. The experiment had a replicated factorial arrangement of treatments including four sire lines, two sexes (2,192 barrows and 2,280 gilts), two dietary energy densities and a light or heavy target BW (118 and 131.5 kg in replicates 1 to 6 and 127 and 140.6 kg in replicates 7 to 10). Pigs were allocated to a series of low energy (LE, 3.27 Mcal ME/kg) corn-soybean meal based diets with 16% wheat midds or high energy diets (HE, 3.53 to 3.55 Mcal ME/kg) with 4.5 to 4.95% choice white grease. All diets contained 6% DDGS. The HE and LE diets of each of the four phases were formulated to have equal lysine:Mcal ME ratios. Pigs were weighed and pen feed intake (11 or 12 pigs/pen) recorded at 28-d intervals. The barrow and gilt daily feed (DFI), ME (MEI) and NE (NEI) intake data were fitted to a Bridges function of BW. The BW data of each sex were fitted to a generalized Michaelis-Menten function of days of age. ME and NE required for maintenance (Mcal/d) were predicted using functions of BW (0.255 and 0.179 BW^0.60, respectively). Pigs fed LE diets had lower ADG (915 vs. 945 g/d, p<0.001) than pigs fed HE diets. Overall, DFI was greater (p<0.001) for pigs fed the LE diets (2.62 vs. 2.45 kg/d). However, no diet differences were observed for MEI (8.76 vs. 8.78 Mcal/d, p = 0.49) or NEI (6.39 vs. 6.44 Mcal/d, p = 0.13), indicating that the pigs compensated for the decreased energy content of the diet. Overall ADG:DFI (0.362 vs. 0.377) and ADG:Mcal MEI (0.109 vs. 0.113) were less (p<0.001) for pigs fed LE compared to HE diets. Pigs fed HE diets had 3.6% greater ADG:Mcal MEI above maintenance and only 1.3% greater ADG:Mcal NEI (0.152 versus 0.150); therefore NEI is a more accurate predictor of growth and G:F than MEI. Pigs fed HE diets had 3.4% greater ADG:Mcal MEI and 0.11% greater ADG:NEI above maintenance than pigs fed LE diets, again demonstrating that NEI is a better predictor of pig performance than MEI. Pigs fed LE diets had similar daily NEI and MEI but grew more slowly and less efficiently on both an ME and an NE basis than pigs fed HE diets. The data suggest that the midds NE value (2.132 Mcal/kg) was too high for this source, or that maintenance was increased for pigs fed LE diets.
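
The maintenance functions quoted above (ME = 0.255 BW^0.60 and NE = 0.179 BW^0.60 Mcal/d) make the "efficiency above maintenance" comparison easy to reproduce. The sketch below uses the abstract's overall intake and gain means; the mid-test body weight of 84 kg is an assumption for illustration only.

```python
# Sketch of the efficiency-above-maintenance calculation using the maintenance
# functions quoted in the abstract (ME_m = 0.255*BW^0.60, NE_m = 0.179*BW^0.60 Mcal/d).
# Intake and gain figures come from the abstract's overall means; BW is an assumed example.

def me_maintenance(bw_kg):           # Mcal ME/d required for maintenance
    return 0.255 * bw_kg ** 0.60

def ne_maintenance(bw_kg):           # Mcal NE/d required for maintenance
    return 0.179 * bw_kg ** 0.60

bw = 84.0                            # assumed mid-test body weight, kg (illustrative)
diets = [("HE", 0.945, 8.78, 6.44),  # label, ADG kg/d, MEI Mcal/d, NEI Mcal/d
         ("LE", 0.915, 8.76, 6.39)]

for label, adg, mei, nei in diets:
    me_above = mei - me_maintenance(bw)
    ne_above = nei - ne_maintenance(bw)
    print(f"{label}: ADG/Mcal MEI above maint = {adg / me_above:.3f}, "
          f"ADG/Mcal NEI above maint = {adg / ne_above:.3f}")
```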

PET-CT 영상 알츠하이머 분류에서 유전 알고리즘 이용한 심층학습 모델 최적화 (Optimization of Deep Learning Model Using Genetic Algorithm in PET-CT Image Alzheimer's Classification)

  • 이상협;강도영;송종관;박장식
    • Journal of Korea Multimedia Society / Vol.23 No.9 / pp.1129-1138 / 2020
  • The performance of convolutional deep learning networks is generally determined by the target dataset, the network structure, the convolution kernel, the activation function, and the optimization algorithm. In this paper, a genetic algorithm is used to select an appropriate deep learning model and parameters for Alzheimer's classification, and the learning results are compared with preliminary experiments. We compare and analyze the Alzheimer's disease classification performance of VGG-16, GoogLeNet, and ResNet to select an effective network for detecting AD and MCI. The simulation results show that, for accurate classification of the dementia medical images, the selected configuration uses a ResNet structure, the ReLU activation function, the Adam optimizer, and a 3-dilated convolution kernel.
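
The search described above, a genetic algorithm choosing among network structures, activation functions, optimizers, and kernel options, can be sketched generically as follows. The candidate lists and the random fitness stand-in are placeholders, not the paper's actual search space or its training-and-evaluation step.

```python
# Minimal sketch of a genetic algorithm over discrete model choices (structure,
# activation, optimizer, kernel option). evaluate() is a placeholder: in the paper
# it would train the candidate on the PET-CT data and return validation accuracy.
import random

SEARCH_SPACE = {
    "network":    ["VGG-16", "GoogLeNet", "ResNet"],
    "activation": ["ReLU", "LeakyReLU", "ELU"],
    "optimizer":  ["SGD", "Adam", "RMSprop"],
    "kernel":     ["3x3", "3-dilated", "5x5"],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(ind):                   # stand-in fitness; replace with real training/validation
    return random.random()

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.1):
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

population = [random_individual() for _ in range(12)]
for generation in range(10):
    scored = sorted(population, key=evaluate, reverse=True)
    parents = scored[:4]             # elitist selection of the top quarter
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(len(population) - len(parents))]

print("Best configuration found:", max(population, key=evaluate))
```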

Application of Deep Learning to the Forecast of Flare Classification and Occurrence using SOHO MDI data

  • Park, Eunsu;Moon, Yong-Jae;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society / Vol.42 No.2 / pp.60.2-61 / 2017
  • A Convolutional Neural Network (CNN) is one of the well-known deep-learning methods in the image processing and computer vision area. In this study, we apply CNNs to two kinds of flare forecasting models: flare classification and occurrence. For this, we consider several pre-trained models (e.g., AlexNet, GoogLeNet, and ResNet) and customize them by changing several options such as the number of layers, the activation function, and the optimizer. Our inputs are the same number of SOHO/MDI images for each flare class (None, C, M and X) at 00:00 UT from Jan 1996 to Dec 2010 (1600 images in total). Outputs are the results of daily flare forecasting for flare class and occurrence. We build, train, and test the models on TensorFlow, a well-known machine learning software library developed by Google. Our major results from this study are as follows. First, most of the models have accuracies of more than 0.7. Second, ResNet, developed by Microsoft, has the best accuracies: 0.77 for flare classification and 0.83 for flare occurrence. Third, the accuracies of these models vary greatly with changing parameters. We discuss several possibilities to improve the models.
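
As a rough sketch of the customization step described above, here is how a pre-trained ImageNet backbone can be re-headed in TensorFlow/Keras for the four flare classes (None, C, M, X). The input shape, dense layer width, and optimizer choice are assumptions, not the authors' exact settings.

```python
# Sketch: re-heading a pre-trained backbone for 4-class flare classification (None/C/M/X).
# Backbone, input size, dense width, and optimizer are illustrative assumptions.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False                       # freeze the pre-trained convolutional layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(4, activation="softmax"),   # None, C, M, X
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)  # with MDI image arrays
```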

Fast and Accurate Single Image Super-Resolution via Enhanced U-Net

  • Chang, Le;Zhang, Fan;Li, Biao
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.15 No.4 / pp.1246-1262 / 2021
  • Recent studies have demonstrated the strong ability of deep convolutional neural networks (CNNs) to significantly boost performance in single image super-resolution (SISR). The key concern is how to efficiently recover and utilize diverse frequency information across multiple network layers, which is crucial to satisfactory super-resolution image reconstruction. Hence, previous work made great efforts to incorporate hierarchical frequencies through various sophisticated architectures. Nevertheless, economical SISR also requires a capable structural design that balances restoration accuracy and computational complexity, which remains a challenge for existing techniques. In this paper, we tackle this problem by proposing a competent architecture called the Enhanced U-Net Network (EUN), which can yield ready-to-use features at miscellaneous frequencies and combine them comprehensively. In particular, the proposed building block for EUN is enhanced from U-Net and can extract abundant information via multiple skip concatenations. The network configuration allows the pipeline to propagate information from lower layers to higher ones. Meanwhile, the block itself is designed to grow quite deep in layers, which allows different types of information to spring from a single block. Furthermore, owing to its strong advantage in distilling effective information, promising results are obtained with comparatively fewer filters. Comprehensive experiments show that our model achieves favorable performance compared with state-of-the-art methods, especially in terms of computational efficiency.
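
The building block described above is a U-Net-style encoder-decoder with skip concatenations. A much-simplified, generic PyTorch stand-in for such a block is shown below; the channel counts and depth are arbitrary and are not the EUN configuration.

```python
# Generic U-Net-style block with a skip concatenation (a simplified stand-in, not the EUN block).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNetBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.enc1 = conv_block(ch, ch)
        self.down = nn.MaxPool2d(2)
        self.enc2 = conv_block(ch, ch * 2)
        self.up   = nn.ConvTranspose2d(ch * 2, ch, 2, stride=2)
        self.dec  = conv_block(ch * 2, ch)           # takes the skip concatenation

    def forward(self, x):
        s1 = self.enc1(x)                            # high-resolution features kept for the skip
        bottom = self.enc2(self.down(s1))            # lower-resolution features
        up = self.up(bottom)
        return self.dec(torch.cat([up, s1], dim=1))  # skip concatenation, then fuse

x = torch.randn(1, 64, 32, 32)
print(MiniUNetBlock(64)(x).shape)                    # torch.Size([1, 64, 32, 32])
```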

SVM on Top of Deep Networks for Covid-19 Detection from Chest X-ray Images

  • Do, Thanh-Nghi;Le, Van-Thanh;Doan, Thi-Huong
    • Journal of Information and Communication Convergence Engineering / Vol.20 No.3 / pp.219-225 / 2022
  • In this study, we propose training a support vector machine (SVM) model on top of deep networks for detecting Covid-19 from chest X-ray images. We started by gathering a real chest X-ray image dataset, including positive Covid-19 cases, normal cases, and other lung diseases not caused by Covid-19. Instead of training deep networks from scratch, we fine-tuned recent pre-trained deep network models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, to classify chest X-ray images into one of three classes (Covid-19, normal, and other lung diseases). We then train an SVM model on top of the deep networks to perform a nonlinear combination of their outputs, improving classification over any single deep network. The empirical test results on the real chest X-ray image dataset show that the deep network models, with the exception of ResNet50 at 82.44%, provide an accuracy of at least 92% on the test set. The proposed SVM on top of the deep networks achieved the highest accuracy of 96.16%.
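
The "SVM on top of deep networks" idea, combining the outputs of several fine-tuned networks with a nonlinear SVM, can be sketched as below. The random feature arrays stand in for the per-class output scores of the seven fine-tuned networks, and the RBF kernel and hyperparameters are illustrative choices.

```python
# Sketch of an SVM trained on the concatenated outputs of several deep networks
# (3 classes: Covid-19 / normal / other lung disease). The random arrays stand in
# for real per-network softmax outputs on the chest X-ray dataset.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

n_samples, n_networks, n_classes = 600, 7, 3
rng = np.random.default_rng(0)

# Placeholder: each of the 7 fine-tuned networks contributes a 3-class score vector per image.
deep_outputs = rng.random((n_samples, n_networks * n_classes))
labels = rng.integers(0, n_classes, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(deep_outputs, labels,
                                                    test_size=0.2, random_state=0)
svm = SVC(kernel="rbf", C=10.0, gamma="scale")       # nonlinear combination of network outputs
svm.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, svm.predict(X_test)))
```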

딥뉴럴네트워크를 위한 기능성 기반의 핌 가속기 (Functionality-based Processing-In-Memory Accelerator for Deep Neural Networks)

  • 김민재;김신덕
    • Proceedings of the Annual Conference of the Korea Information Processing Society / 2020 KIPS Fall Conference / pp.8-11 / 2020
  • With the advent of the Fourth Industrial Revolution and the ongoing convergence of AI and ICT technologies, AI services are now requested even on user-level devices. Image-related AI services are used for subject recognition, defect inspection, autonomous driving, and more, and Deep Convolutional Neural Networks (DCNNs) in particular show excellent performance in extracting image features. However, as images grow larger and networks grow deeper, the computation exhibits low data locality and frequent memory references. Consequently, the conventional hierarchical system architecture shows limits in processing DCNNs quickly and scalably. In this work, we propose a Processing-In-Memory (PIM) accelerator based on a 3D memory structure for scalable and fast DCNN processing. To this end, additional hardware and software modules are added to the existing 3D memory, the Hybrid Memory Cube (HMC). Specifically, we add a shared cache and software stack that allow data sharing between Processing Elements (PEs), pipelined multipliers, and dual prefetch buffers. Evaluating this design on the well-known DCNN models LeNet, AlexNet, ZFNet, VGGNet, GoogLeNet, and ResNet showed a 40.3% speedup and a 29.4% bandwidth improvement over the baseline HMC.
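
To illustrate why DCNN layers stress the memory hierarchy, as argued above, the back-of-the-envelope arithmetic below compares a convolution layer's MAC count with its off-chip traffic when no on-chip reuse is available. The layer shape, 4-byte values, and the no-reuse worst case are arbitrary assumptions for illustration, not figures from the paper.

```python
# Back-of-the-envelope arithmetic: MACs vs. memory traffic for one conv layer.
# Layer shape is an arbitrary example; 4-byte values are assumed. The no-reuse case
# illustrates the low data locality a PIM design such as the one above tries to avoid.
H = W = 56              # output feature-map height/width
C_in, C_out, K = 64, 128, 3

macs = H * W * C_out * C_in * K * K                     # multiply-accumulate operations
input_bytes  = 4 * H * W * C_in                         # activations read once (ideal case)
weight_bytes = 4 * C_out * C_in * K * K                 # filter weights
output_bytes = 4 * H * W * C_out                        # activations written

# Worst case with no reuse: every MAC re-reads its input and weight from memory.
no_reuse_bytes = 4 * 2 * macs + output_bytes
ideal_bytes = input_bytes + weight_bytes + output_bytes

print(f"MACs: {macs / 1e6:.1f} M")
print(f"Ideal traffic   : {ideal_bytes / 1e6:.2f} MB")
print(f"No-reuse traffic: {no_reuse_bytes / 1e6:.1f} MB "
      f"({no_reuse_bytes / ideal_bytes:.0f}x more)")
```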

콩군낙(群落)의 열수지특성(熱收支特性)과 건물(乾物)로의 물이용효율(利用效率) (Heat Balance Characteristics and Water Use Efficiency of Soybean Community)

  • 이양수;임정남
    • Korean Journal of Soil Science and Fertilizer / Vol.23 No.2 / pp.94-99 / 1990
  • Seasonal changes in the heat balance components of a soybean canopy (cv. Paldal; planting density $45{\times}10cm$) were observed, evapotranspiration was calculated by the heat balance method, and its relationship with dry matter production was examined. The results are as follows. 1. The ratio of net radiation to total shortwave radiation on clear days was 59-76%, lower than the 63-83% observed on cloudy days. 2. The ratio of latent heat flux to net radiation occasionally exceeded 100% on cloudy days, indicating occasional horizontal transfer of heat by advection. 3. The relationships between hourly-integrated daily net radiation (Rn) or daytime net radiation ($Rn_{(+)}$) and the latent heat of evaporation ($LE_{(+)}$) were linear: $LE_{(+)} = 0.971\,Rn + 1.122$ ($R^2 = 0.9017$) and $LE_{(+)} = 0.882\,Rn_{(+)} + 1.945$ ($R^2 = 0.8836$). 4. The relationship between canopy evapotranspiration calculated by the heat balance method (ETa) and large-pan evaporation (Epan) was $ETa = 1.049\,Epan + 1.657$ ($R^2 = 0.6589$). 5. Excluding the early growth stage, the water use efficiency for dry matter over the growing season was $2.31\,g\,DM\,kg^{-1}\,H_2O$, the mean daily evapotranspiration was 5.29 mm, and 85% of the incoming solar radiation was consumed by canopy evapotranspiration.
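
The linear relations reported above (for example, $LE_{(+)} = 0.971\,Rn + 1.122$) are ordinary least-squares fits. The sketch below shows how such a fit and its $R^2$ would be computed from hourly-integrated values; the arrays are placeholders, not the 1990 measurements.

```python
# Sketch of the least-squares fit behind relations such as LE(+) = 0.971*Rn + 1.122 (R^2 = 0.90).
# The Rn/LE arrays are illustrative placeholders, not the observed soybean-canopy data.
import numpy as np

rn = np.array([2.1, 4.5, 6.8, 9.0, 11.3, 13.7, 15.2, 17.9])   # daily net radiation (placeholder units)
le = np.array([3.0, 5.6, 7.5, 10.1, 12.4, 14.0, 16.3, 18.5])  # latent heat of evaporation

slope, intercept = np.polyfit(rn, le, 1)        # first-order (linear) least squares
predicted = slope * rn + intercept
r_squared = 1 - np.sum((le - predicted) ** 2) / np.sum((le - le.mean()) ** 2)

print(f"LE = {slope:.3f} Rn + {intercept:.3f},  R^2 = {r_squared:.4f}")
```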

합성곱 신경망을 이용한 '미황' 복숭아 과실의 성숙도 분류 (Grading of Harvested 'Mihwang' Peach Maturity with Convolutional Neural Network)

  • 신미희;장경은;이슬기;조정건;송상준;김진국
    • Journal of Bio-Environment Control / Vol.31 No.4 / pp.270-278 / 2022
  • This study was conducted to explore the feasibility of classifying the maturity of non-bagged 'Mihwang' peach fruit by acquiring RGB images during the ripening period, measuring various quality indices, and applying deep learning. Of the acquired images, 730 were used for training and validation, and 170 were used as final test images. For automatic maturity classification with deep learning, firmness, Hue value, and a* value were selected from the measured quality indices, and the images were manually classified as immature, mature, or over mature. Automatic image classification was performed with two CNN (Convolutional Neural Network) models that perform well in image classification and detection, VGG16 and GoogLeNet's InceptionV3, and classification performance was measured for images labeled by each quality index. The deep learning analysis of maturity images showed that the Hue_left feature achieved 87.1% and 83.6% (F1 score) with the VGG16 and InceptionV3 models, respectively, whereas the Firmness feature achieved only 72.2% and 76.9% with loss rates of 54.3% and 62.1%, confirming that maturity classification based on firmness has low applicability. If training is carried out with more images and a wider range of quality indices, more accurate and finer maturity discrimination than in previous studies should be possible.
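
A rough sketch of the transfer-learning setup described above, re-heading InceptionV3 for the three maturity classes (immature / mature / over mature), is given below. The image size, frozen-backbone strategy, and the directory layout shown in the comments are assumptions rather than the study's exact protocol.

```python
# Sketch: re-heading InceptionV3 for 3-class peach maturity (immature / mature / over mature).
# Image size, frozen backbone, and the directory layout in the comments are illustrative assumptions.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3))
base.trainable = False                                        # keep ImageNet features frozen

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)   # 3 maturity classes
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()

# Assumed usage with a directory peach_images/{immature,mature,overmature}/*.jpg:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "peach_images", validation_split=0.2, subset="training",
#     seed=42, image_size=(299, 299), batch_size=32)
# model.fit(train_ds, epochs=10)
```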