• Title/Summary/Keyword: 학습영상


Real-time Artificial Neural Network for High-dimensional Medical Image (고차원 의료 영상을 위한 실시간 인공 신경망)

  • Choi, Kwontaeg
    • Journal of the Korean Society of Radiology / v.10 no.8 / pp.637-643 / 2016
  • Due to the growing popularity of artificial intelligence, medical image processing with artificial neural networks is attracting increasing attention from academic and industrial researchers. Deep learning with convolutional neural networks has proven to be a very effective representation of images. However, the training process requires a high-performance hardware platform, so real-time learning of a large number of high-dimensional samples on low-power devices remains a challenging problem. In this paper, we explore this possibility by presenting a real-time neural network method on a Raspberry Pi using an online sequential extreme learning machine. Our experiments on a high-dimensional dataset show that the proposed method achieves almost real-time execution.
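
As a rough illustration of the technique this entry describes, the sketch below implements an online sequential extreme learning machine in NumPy: random, fixed hidden-layer weights and a recursive least-squares update of the output weights, so new chunks of samples can be absorbed without retraining. The layer sizes, activation, and toy data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

class OSELM:
    """Single-hidden-layer network with random, fixed input weights (ELM)."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))  # fixed random input weights
        self.b = rng.standard_normal(n_hidden)          # fixed random biases
        self.beta = np.zeros((n_hidden, n_out))         # output weights (learned)
        self.P = None                                   # inverse correlation matrix

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def init_fit(self, X0, T0):
        """Batch initialization on the first chunk of data."""
        H = self._hidden(X0)
        self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T0

    def partial_fit(self, X, T):
        """Recursive least-squares update on a new chunk; old samples are not revisited."""
        H = self._hidden(X)
        K = self.P @ H.T @ np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P = self.P - K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

# toy usage: random stand-ins for high-dimensional image samples with 2 classes
X0 = np.random.rand(100, 4096)
T0 = np.eye(2)[np.random.randint(0, 2, 100)]
model = OSELM(4096, 256, 2)
model.init_fit(X0, T0)
model.partial_fit(np.random.rand(20, 4096), np.eye(2)[np.random.randint(0, 2, 20)])
scores = model.predict(np.random.rand(5, 4096))
```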

Proper Base-model and Optimizer Combination Improves Transfer Learning Performance for Ultrasound Breast Cancer Classification (다단계 전이 학습을 이용한 유방암 초음파 영상 분류 응용)

  • Ayana, Gelan;Park, Jinhyung;Choe, Se-woon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.655-657 / 2021
  • It is challenging to assemble a breast ultrasound image training dataset for developing an accurate machine learning model because of various regulations, personal information issues, and the expense of acquiring the images. Moreover, studies targeting transfer learning for ultrasound breast cancer image classification have not achieved high performance compared with radiologists. Here, we propose an improved transfer learning model for ultrasound breast cancer classification using a publicly available dataset. We argue that, with a proper combination of an ImageNet pre-trained model and an optimizer, a better-performing model for ultrasound breast cancer image classification can be achieved. The proposed model yielded a preliminary test accuracy of 99.5%. With more experiments involving various hyperparameters, the model is expected to achieve higher performance when applied to new instances.
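
A hedged sketch of the base-model/optimizer pairing idea: an ImageNet pre-trained backbone with a new two-class head, fine-tuned with a chosen optimizer. The specific backbone (ResNet-50) and optimizer (Adam) below are illustrative assumptions, not necessarily the combination the authors found best.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet pre-trained backbone with a replaced two-class head (e.g. benign / malignant)
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)

# freeze the backbone and fine-tune only the new head to start
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of ultrasound images resized to 224x224."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# toy usage with random tensors standing in for real ultrasound images
loss = train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,)))
```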


Unsupervised Non-rigid Registration Network for 3D Brain MR images (3차원 뇌 자기공명 영상의 비지도 학습 기반 비강체 정합 네트워크)

  • Oh, Donggeon;Kim, Bohyoung;Lee, Jeongjin;Shin, Yeong-Gil
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.5 / pp.64-74 / 2019
  • Although non-rigid registration is in high demand in clinical practice, it has high computational complexity, and ensuring its accuracy and robustness is very difficult. This study proposes a method for applying non-rigid registration to 3D magnetic resonance images of the brain in an unsupervised learning setting by using a deep-learning network. The network receives images from two different patients as inputs, produces a feature vector between them, and transforms the target image to match the source image by generating a displacement vector field. The network is designed with a U-Net shape so that feature vectors capturing both global and local differences between the two images can be constructed during registration. Because a regularization term is added to the loss function, a transformation resembling real brain deformation is obtained after trilinear interpolation is applied. The method performs non-rigid registration with a single-pass deformation, receiving only two arbitrary images as inputs through unsupervised learning, and can therefore run faster than non-learning-based registration methods that require iterative optimization. Our experiment was performed on 3D magnetic resonance images of 50 human brains, and the Dice similarity coefficient confirmed an approximately 16% improvement in similarity after registration with our method. It also showed performance similar to a non-learning-based method, with an approximately 10,000-fold speed increase. The proposed method can be used for non-rigid registration of various kinds of medical image data.
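
The warping step described above can be sketched as follows: a dense displacement field is added to an identity sampling grid and the moving volume is resampled with trilinear interpolation. The U-Net that predicts the displacement field is omitted, and `displacement` is a placeholder input; this is an assumed minimal rendering of the transform, not the authors' code.

```python
import torch
import torch.nn.functional as F

def warp_volume(moving, displacement):
    """moving: (N,1,D,H,W) MR volume; displacement: (N,3,D,H,W) in voxels, (x,y,z) order."""
    n, _, d, h, w = moving.shape
    # identity sampling grid in the normalized [-1, 1] coordinates grid_sample expects
    theta = torch.eye(3, 4).unsqueeze(0).repeat(n, 1, 1)
    base = F.affine_grid(theta, size=list(moving.shape), align_corners=False)  # (N,D,H,W,3)
    # convert voxel displacements into the same normalized coordinate range
    scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1), 2.0 / max(d - 1, 1)])
    flow = displacement.permute(0, 2, 3, 4, 1) * scale                         # (N,D,H,W,3)
    # for 5D inputs, mode="bilinear" performs trilinear interpolation
    return F.grid_sample(moving, base + flow, mode="bilinear", align_corners=False)

# toy usage: a zero displacement field leaves the volume (numerically) unchanged
vol = torch.randn(1, 1, 32, 32, 32)
warped = warp_volume(vol, torch.zeros(1, 3, 32, 32, 32))
```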

Data Augmentation for Tomato Detection and Pose Estimation (토마토 위치 및 자세 추정을 위한 데이터 증대기법)

  • Jang, Minho;Hwang, Youngbae
    • Journal of Broadcast Engineering / v.27 no.1 / pp.44-55 / 2022
  • To automatically provide information on fruits in agriculture-related broadcast content, instance segmentation of the target fruit is required, and information on the 3D pose of the fruit can also be used meaningfully. This paper presents research that provides such information about tomatoes in video content. A large amount of data is required to train instance segmentation, but sufficient training data is difficult to obtain, so training data is generated with a data augmentation technique based on a small number of real images. Compared with training on the real images alone, detection performance improves when the model is trained on synthetic images created by separating foreground and background. Training on images augmented with conventional image pre-processing techniques yielded higher performance than the foreground/background-separated synthetic images. To estimate the pose from the detection result, a point cloud is obtained with an RGB-D camera, cylinder fitting based on least-squares minimization is performed, and the tomato pose is estimated from the axial direction of the cylinder. Various experiments show that the proposed approach performs detection, instance segmentation, and cylinder fitting of the target object effectively.
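
As a minimal sketch of the foreground/background compositing idea, the snippet below pastes a masked foreground crop onto a background image at a random position to synthesize a new training sample with a box label. The array shapes and mask convention are assumptions; the paper's full pipeline (instance annotations, pose labels, RGB-D processing) is not reproduced.

```python
import numpy as np

def composite(background, foreground, mask, rng=None):
    """Paste a masked foreground onto a background at a random position.

    background: (H,W,3) uint8; foreground: (h,w,3) uint8; mask: (h,w) in {0,1}.
    Returns the synthetic image and the (x, y, w, h) box of the pasted object.
    """
    rng = rng or np.random.default_rng()
    H, W, _ = background.shape
    h, w, _ = foreground.shape
    y = int(rng.integers(0, H - h + 1))
    x = int(rng.integers(0, W - w + 1))
    out = background.copy()
    m = mask[..., None].astype(out.dtype)
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = m * foreground + (1 - m) * region
    return out, (x, y, w, h)

# toy usage with arrays standing in for a background frame and a tomato crop
bg = np.zeros((480, 640, 3), dtype=np.uint8)
fg = np.full((64, 64, 3), 200, dtype=np.uint8)
msk = np.ones((64, 64), dtype=np.uint8)
img, box = composite(bg, fg, msk)
```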

The Effect of Training Patch Size and ConvNeXt application on the Accuracy of CycleGAN-based Satellite Image Simulation (학습패치 크기와 ConvNeXt 적용이 CycleGAN 기반 위성영상 모의 정확도에 미치는 영향)

  • Won, Taeyeon;Jo, Su Min;Eo, Yang Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.3 / pp.177-185 / 2022
  • A method for restoring occluded areas in high-resolution optical satellite images was proposed that refers, through deep learning, to images taken with the same type of sensor. To achieve natural continuity between the simulated occluded region and the surrounding image while preserving the pixel distribution of the original image as much as possible in the patch-segmented image, a CycleGAN (Cycle Generative Adversarial Network) with ConvNeXt blocks was applied and analyzed over three experimental regions. In addition, we compared results for a training patch size of 512×512 pixels and a doubled size of 1024×1024 pixels. In experiments on three regions with different characteristics, the ConvNeXt CycleGAN methodology showed an improved R2 value compared with the image produced by the existing CycleGAN and with the histogram-matched image. In the experiment on training patch size, the 1024×1024-pixel patch yielded an R2 value of about 0.98. Furthermore, comparing the pixel distribution of each image band showed that the simulation trained with the larger patch size produced a histogram distribution more similar to the original image. Therefore, by using ConvNeXt CycleGAN, which improves on the image produced by the existing CycleGAN method and on the histogram-matched image, simulation results similar to the original image can be derived and a successful simulation performed.
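
For reference, a compact sketch of the ConvNeXt block structure mentioned above (depthwise 7×7 convolution, LayerNorm, inverted MLP with GELU) is given below. This follows the generic published block design rather than the authors' exact generator modification, and the toy input is smaller than the 512×512 and 1024×1024 training patches compared in the paper.

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Depthwise 7x7 conv -> LayerNorm -> pointwise MLP with GELU, plus a residual."""
    def __init__(self, dim):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)            # normalizes over the channel dimension
        self.pwconv1 = nn.Linear(dim, 4 * dim)   # pointwise expansion
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)   # pointwise projection

    def forward(self, x):                        # x: (N, C, H, W)
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)                # (N, H, W, C) for LayerNorm / Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        return residual + x.permute(0, 3, 1, 2)  # back to (N, C, H, W)

# toy usage on a reduced 256x256 patch; the paper compares 512x512 and 1024x1024 patches
out = ConvNeXtBlock(64)(torch.randn(1, 64, 256, 256))
```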

Development and Evaluation of an English Speaking Task Using Smartphone and Text-to-Speech (스마트폰과 음성합성을 활용한 영어 말하기 과제의 개발과 평가)

  • Moon, Dosik
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.5 / pp.13-20 / 2016
  • This study explores the effects of a video-recording English speaking task model on learners. The learning model, a form of mobile learning, was developed to facilitate learners' output practice by exploiting the advantages of a smartphone and Text-to-Speech. The survey results show positive effects of the speaking task on the domains of pronunciation, speaking, listening, and writing in terms of students' confidence, as well as on general English ability. The study further examines the possibilities and limitations of the speaking task in helping Korean learners, who do not have sufficient exposure to English input or output practice because English is learned as a foreign language, improve their speaking ability.

A Method for Detecting Learning Activities in Online Classes Based on LSTM (LSTM 기반의 온라인 수업 속 학습활동 검출 방법)

  • Park, Ji-Young;Park, Se-Hee;Park, Seung-Bo
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.97-98 / 2021
  • Active participation in learning is an important behavior in academic work, and high academic engagement is closely related to successful academic achievement. Scholars classify academic engagement into behavioral, emotional, and cognitive engagement. Behavioral engagement is defined as how students participate in actual learning activities and task performance. In an online learning environment, however, it is difficult to assess students' learning activities, which raises the need for related research. In this paper, we propose a method for recognizing hand-raising, one of the learning activities in online classes, based on a bidirectional Convolutional LSTM model with video analysis. With the proposed method, the recognition accuracy for hand-raising is 88%.
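
A minimal sketch of a convolutional LSTM cell, the building block behind the bidirectional Convolutional LSTM named above, is shown below; a bidirectional variant can be formed by running one such cell forward and one backward in time and concatenating their outputs. Channel counts, frame sizes, and the surrounding hand-raising classifier are assumptions.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """LSTM cell whose gates are computed by a convolution over spatial feature maps."""
    def __init__(self, in_ch, hidden_ch, kernel_size=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g                      # update the cell state
        h = o * torch.tanh(c)                  # emit the new hidden state
        return h, c

# toy usage: run a short frame sequence through the cell in the forward direction
cell = ConvLSTMCell(in_ch=3, hidden_ch=16)
frames = torch.randn(8, 1, 3, 64, 64)          # (T, N, C, H, W)
h = torch.zeros(1, 16, 64, 64)
c = torch.zeros(1, 16, 64, 64)
for frame in frames:
    h, c = cell(frame, (h, c))                 # h after the last frame summarizes the clip
```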


Video Classification System Based on Similarity Representation Among Sequential Data (순차 데이터간의 유사도 표현에 의한 동영상 분류)

  • Lee, Hosuk;Yang, Jihoon
    • KIPS Transactions on Computer and Communication Systems / v.7 no.1 / pp.1-8 / 2018
  • It is not easy to learn simple representations of video data, since it contains noise and a great deal of information beyond the temporal information. In this study, we propose a similarity representation method and a deep learning method between sequential data that can express such video data more abstractly and simply. The goal is to learn a function that preserves maximum information when interpreting the degree of similarity between the image feature vectors constituting a video. On real data, we confirm that the proposed method shows better classification performance than existing video classification methods.
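
A hedged sketch of the core idea: represent a video by the pairwise cosine similarities between its per-frame feature vectors, giving a fixed, abstract description of the sequence. The feature extractor and the downstream deep classifier from the paper are omitted, and the dimensions are illustrative.

```python
import numpy as np

def similarity_representation(frame_features):
    """frame_features: (T, D) per-frame feature vectors -> (T, T) cosine-similarity matrix."""
    norms = np.linalg.norm(frame_features, axis=1, keepdims=True)
    unit = frame_features / np.clip(norms, 1e-8, None)
    return unit @ unit.T

# toy usage: 30 frames with 128-dimensional features
sim = similarity_representation(np.random.rand(30, 128))
```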

Disease Region Pattern Recognition Algorithm of Gastrointestinal Image using Wavelet Transform and Neural Network (Wavelet변환과 신경회로망에 의한 위장 영상의 질환 부위 패턴 인식 알고리즘)

  • 이상복;이주신
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.5 / pp.70-77 / 1999
  • In this paper, we propose an algorithm that extracts features of disease regions in gastrointestinal images using the wavelet transform and recognizes the disease-region patterns. As a preprocessing step, the shape information of the gastrointestinal image is obtained by computing a four-level DWT (Discrete Wavelet Transform) coefficient matrix from the input image. From these coefficient matrices, 32 low-frequency feature parameters are extracted from the low-frequency coefficient matrix, 16 horizontal high-frequency feature parameters from the horizontal high-frequency coefficient matrix, 16 vertical high-frequency feature parameters from the vertical high-frequency coefficient matrix, and 32 diagonal high-frequency feature parameters from the diagonal high-frequency coefficient matrix, for a total of 96 feature parameters. Each feature parameter is normalized to a maximum value of +0.5 and a minimum value of -0.5 and used as the input vector of a neural network. The neural network for gastrointestinal image pattern recognition uses the error back-propagation algorithm, a multilayer supervised learning method, and is designed as a layered structure of input, hidden, and output layers. The network was trained with a learning rate of 0.2 and a momentum of 0.6 until the maximum output error fell below 0.01; after about 8,000 training iterations the error dropped below the set value, and a recognition rate of 100% was obtained regardless of the type, location, or size of the disease.
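
A rough sketch of the feature-extraction stage follows: a 4-level discrete wavelet transform of the image, the level-4 sub-band coefficients gathered into a feature vector, and normalization to the [-0.5, +0.5] range used as the network input. The exact 96-parameter layout (32/16/16/32) and the back-propagation classifier are not reproduced, and the wavelet choice is an assumption.

```python
import numpy as np
import pywt

def dwt_features(image, wavelet="haar", level=4):
    """4-level 2D DWT; gather the coarsest sub-bands and normalize to [-0.5, +0.5]."""
    cA, *details = pywt.wavedec2(image, wavelet, level=level)
    cH, cV, cD = details[0]                      # level-4 horizontal/vertical/diagonal bands
    feats = np.concatenate([band.ravel() for band in (cA, cH, cV, cD)])
    lo, hi = feats.min(), feats.max()
    return (feats - lo) / (hi - lo + 1e-12) - 0.5   # scaled to [-0.5, +0.5]

# toy usage on a random array standing in for a gastrointestinal image
x = dwt_features(np.random.rand(128, 128))
```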


A Study of Kernel Characteristics of CNN Deep Learning for Effective Fire Detection Based on Video (영상기반의 화재 검출에 효과적인 CNN 심층학습의 커널 특성에 대한 연구)

  • Son, Geum-Young;Park, Jang-Sik
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.13 no.6 / pp.1257-1262 / 2018
  • In this paper, a deep learning method is proposed to detect fire effectively using video from surveillance cameras. Based on the AlexNet model, classification performance is compared according to the kernel size and stride of the convolution layers. The dataset for training and inference is divided into two classes, normal and fire: normal images include clouds and fog, and fire images include smoke and flames. Simulation results show that a larger kernel size and a smaller stride give better performance.
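
As a small illustration of the comparison the paper runs, the snippet below instantiates the same AlexNet-style first convolution layer with different kernel sizes and strides and reports how the output resolution and parameter count change; the training and fire/normal evaluation themselves are omitted, and the specific configurations are assumptions.

```python
import torch
import torch.nn as nn

configs = [(11, 4), (7, 2), (5, 1)]              # (kernel_size, stride) variants to compare
x = torch.randn(1, 3, 227, 227)                  # AlexNet-style input resolution

for k, s in configs:
    conv = nn.Conv2d(3, 96, kernel_size=k, stride=s)   # first convolution layer variant
    n_params = sum(p.numel() for p in conv.parameters())
    out_hw = tuple(conv(x).shape[2:])
    print(f"kernel={k}, stride={s}: output {out_hw}, parameters {n_params}")
```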