• Title/Abstract/Keyword: brain-based learning model

Search results: 68

Proper Noun Embedding Model for the Korean Dependency Parsing

  • Nam, Gyu-Hyeon;Lee, Hyun-Young;Kang, Seung-Shik
    • Journal of Multimedia Information System / Vol. 9, No. 2 / pp.93-102 / 2022
  • Dependency parsing is the problem of deciding the syntactic relations between words in a sentence. Recently, deep learning models have been used for dependency parsing based on word representations in a continuous vector space. However, this causes mislabeling for proper nouns that rarely appear in the training corpus, because out-of-vocabulary (OOV) words are difficult to express in a continuous vector space. To solve the OOV problem in dependency parsing, we explored proper noun embedding methods according to the embedding unit. Before representing words in a continuous vector space, we replace proper nouns with a special token and train their contextual features using a multi-layer bidirectional LSTM. Two models, syllable-based and morpheme-based, are proposed for proper noun embedding, and parsing performance improves more with the ensemble model than with either the syllable or the morpheme embedding model alone. The experimental results showed that our ensemble model improved UAS by 1.69%p and LAS by 2.17%p over a Malt parser based on the same arc-eager approach.
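
The masking-then-encoding idea described above can be illustrated with a small sketch. This is not the authors' implementation; the vocabulary, tag names, and layer sizes below are assumptions made purely for illustration (PyTorch).

```python
import torch
import torch.nn as nn

# Minimal sketch: mask proper nouns with a shared special token, then encode
# the sentence with a multi-layer bidirectional LSTM (hypothetical sizes/vocab).
vocab = {"<pad>": 0, "<propn>": 1, "에서": 2, "만나다": 3}

def mask_proper_nouns(tokens, pos_tags):
    # Replace tokens tagged as proper nouns so unseen names share one embedding.
    return [("<propn>" if tag == "NNP" else tok) for tok, tag in zip(tokens, pos_tags)]

class ContextEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=128, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden, num_layers=layers,
                              bidirectional=True, batch_first=True)

    def forward(self, ids):                    # ids: (batch, seq_len)
        out, _ = self.bilstm(self.embed(ids))  # (batch, seq_len, 2*hidden)
        return out                             # contextual features per word

tokens = ["김철수", "에서", "만나다"]          # "김철수" is an unseen proper noun
masked = mask_proper_nouns(tokens, ["NNP", "JKB", "VV"])
ids = torch.tensor([[vocab.get(t, vocab["<propn>"]) for t in masked]])
features = ContextEncoder(len(vocab))(ids)
print(features.shape)                          # torch.Size([1, 3, 256])
```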

A Study on the Use of Contrast Agent and the Improvement of Body Part Classification Performance through Deep Learning-Based CT Scan Reconstruction

  • 나성원;고유선;김경원
    • 방송공학회논문지 / Vol. 28, No. 3 / pp.293-301 / 2023
  • Non-standardized medical data collection and management are still carried out manually, and studies have applied deep learning to classify CT data in order to address this problem. Most of these studies, however, build models only on the axial plane, the default CT slice orientation. Unlike ordinary images, CT images depict only anatomical structures, so simply reconstructing the CT scan can expose richer anatomical features. This study investigates whether higher performance can be achieved by converting CT data into 2D representations in several ways, not only the axial plane. Training used 1,042 CT scans of five body parts, and for evaluation we collected a test set of 179 scans and an external dataset of 448 scans. The deep learning model used InceptionResNetV2 pretrained on ImageNet as the backbone, and all layers of the model were retrained. In the experiments, the model trained on reconstructed data achieved 99.33% accuracy in body part classification, 1.12% higher than the axial model; in contrast agent classification, the axial model was better only for the brain and neck. In conclusion, training on data that exposes anatomical features well achieved more accurate performance than training on axial slices alone.
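
As a rough illustration of the training setup described above, the following sketch loads an ImageNet-pretrained InceptionResNetV2 backbone in Keras and leaves every layer trainable. The input size, classification head, and optimizer settings are assumptions, not the study's exact configuration.

```python
import tensorflow as tf

# Hypothetical number of body-part classes, matching the five parts mentioned above.
NUM_BODY_PARTS = 5

# ImageNet-pretrained backbone; keep all layers trainable so the whole network is retrained.
backbone = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
backbone.trainable = True

# Simple classification head; inputs are assumed already scaled to [-1, 1]
# as InceptionResNetV2 expects.
inputs = tf.keras.Input(shape=(299, 299, 3))
x = backbone(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_BODY_PARTS, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # datasets built from the reconstructed 2D slices
```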

Dementia Prediction Model based on Gradient Boosting

  • 이태인;오하영
    • 한국정보통신학회논문지 / Vol. 25, No. 12 / pp.1729-1738 / 2021
  • Machine learning has developed in close relationship with cognitive psychology and brain science. This paper analyzes the OASIS-3 dataset with machine learning techniques and proposes a model that predicts dementia. For the OASIS-3 data quantifying the volume of each brain region, dimensionality reduction is performed with principal component analysis (PCA) to extract only the important components (features); various machine learning models, including gradient boosting and stacking, are then applied and their performance compared. Unlike previous studies, the proposed approach is distinctive in that it uses not only brain biometric data but also basic information such as participant sex and the participants' medical records. Through various performance evaluations, we also show that the proposed method identifies the features most strongly associated with dementia among the numerical data and therefore predicts dementia better.
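
A minimal scikit-learn sketch of the pipeline described above, PCA for feature reduction followed by gradient boosting and a stacking ensemble, is shown below. The synthetic data, component counts, and base learners are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data: rows = participants, columns = regional brain volumes plus
# demographic/clinical variables; y = dementia label (synthetic, for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))
y = rng.integers(0, 2, size=200)

# PCA keeps only the main components before the classifier, as described above.
gbm = make_pipeline(StandardScaler(), PCA(n_components=10),
                    GradientBoostingClassifier(random_state=0))

# A stacking ensemble over heterogeneous base models, combined by logistic regression.
stack = make_pipeline(
    StandardScaler(), PCA(n_components=10),
    StackingClassifier(
        estimators=[("gbm", GradientBoostingClassifier(random_state=0)),
                    ("rf", RandomForestClassifier(random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000)))

for name, model in [("gradient boosting", gbm), ("stacking", stack)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```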

EEG-based Customized Driving Control Model Design

  • 이진희;박재형;김제석;권순
    • 대한임베디드공학회논문지 / Vol. 18, No. 2 / pp.81-87 / 2023
  • With the development of BCI devices, EEG-based control can now be used to move a robot's arms or legs to assist in daily life. In this paper, we propose a BCI-based customized vehicle control model. The model collects the driver's EEG signals, extracts control information by analyzing them, and then steers the vehicle according to that information. Because the EEG signals are noisy, direction control is supplemented with a camera-based eye-tracking method to increase the accuracy of the recognized direction. By combining the direction recognized from the EEG signal with the eye-tracking result, the vehicle was controlled in five directions: left turn, right turn, forward, backward, and stop. In experiments, the direction-recognition accuracy of the proposed model was about 75% or higher.
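
The EEG/eye-tracking fusion step can be illustrated with a toy sketch. The weighting rule, probability values, and command set below are assumptions, not the authors' actual fusion method.

```python
import numpy as np

# Illustrative fusion of two direction estimates: class probabilities from an
# EEG classifier and from camera-based eye tracking (hypothetical weighting).
COMMANDS = ["left", "right", "forward", "backward", "stop"]

def fuse_direction(eeg_probs, gaze_probs, eeg_weight=0.6):
    """Weighted combination of the two probability vectors; returns the chosen command."""
    eeg_probs = np.asarray(eeg_probs, dtype=float)
    gaze_probs = np.asarray(gaze_probs, dtype=float)
    combined = eeg_weight * eeg_probs + (1.0 - eeg_weight) * gaze_probs
    return COMMANDS[int(np.argmax(combined))], combined

# Example: noisy EEG slightly favours "left"; eye tracking agrees more strongly.
eeg = [0.35, 0.20, 0.20, 0.10, 0.15]
gaze = [0.60, 0.10, 0.15, 0.05, 0.10]
command, probs = fuse_direction(eeg, gaze)
print(command, np.round(probs, 2))   # -> left
```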

Brain Activation Pattern and Functional Connectivity Network during Experimental Design on the Biological Phenomena

  • Lee, Il-Sun;Lee, Jun-Ki;Kwon, Yong-Ju
    • 한국과학교육학회지 / Vol. 29, No. 3 / pp.348-358 / 2009
  • The purpose of this study was to investigate the brain activation pattern and functional connectivity network during experimental design on biological phenomena. Twenty-six right-handed healthy science teachers volunteered for the present study. To investigate participants' brain activity during the tasks, a 3.0T fMRI system with a block experimental design was used to measure BOLD signals, and the SPM2 software package was applied to analyze the acquired image data. According to the analyzed data, the superior, middle, and inferior frontal gyri, superior and inferior parietal lobules, fusiform gyrus, lingual gyrus, and bilateral cerebellum were significantly activated while participants carried out experimental design. The network model consisted of six nodes (ROIs) and six connections. These results suggest that the activation and connections of these regions mean that experimental design cannot be reduced to a mere memory-retrieval process. They allow the scientific experimental design process to be examined from a cognitive neuroscience perspective and may serve as a basis for developing teaching-learning programs for scientific experimental design, such as a brain-based science education curriculum.
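
A correlation-based functional connectivity network over a handful of ROIs can be sketched as follows. The synthetic time series, ROI labels, and threshold are illustrative assumptions and do not reproduce the study's SPM-based analysis.

```python
import numpy as np

# Hypothetical ROI labels roughly following the regions named above.
ROIS = ["SFG", "MFG", "IFG", "SPL", "IPL", "Cerebellum"]

rng = np.random.default_rng(0)
timeseries = rng.normal(size=(6, 120))   # 6 ROIs x 120 fMRI volumes (synthetic BOLD)

conn = np.corrcoef(timeseries)           # 6 x 6 correlation (connectivity) matrix
np.fill_diagonal(conn, 0.0)

threshold = 0.2                          # keep only stronger connections as network edges
edges = [(ROIS[i], ROIS[j], round(conn[i, j], 2))
         for i in range(6) for j in range(i + 1, 6)
         if abs(conn[i, j]) > threshold]
print(edges)                             # surviving ROI-to-ROI connections
```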

Brain activation pattern and functional connectivity network during classification on the living organisms

  • Byeon, Jung-Ho;Lee, Jun-Ki;Kwon, Yong-Ju
    • 한국과학교육학회지 / Vol. 29, No. 7 / pp.751-758 / 2009
  • The purpose of this study was to investigate the brain activation pattern and functional connectivity network during classification of biological phenomena. Twenty-six right-handed healthy science teachers volunteered for the present study. To investigate participants' brain activity during the tasks, a 3.0T fMRI system with a block experimental design was used to measure BOLD signals. According to the analyzed data, the superior, middle, and inferior frontal gyri, superior and inferior parietal lobules, fusiform gyrus, lingual gyrus, and bilateral cerebellum were significantly activated while participants carried out classification. The network model consisted of six nodes (ROIs) and fourteen connections. These results suggest that the activation and connections of these regions mean that classification comprises two sub-network systems (top-down and bottom-up related) that function reciprocally. They allow the scientific classification process to be examined from a cognitive neuroscience perspective and may be used as basic material for developing teaching-learning programs for scientific classification, such as a brain-based science education curriculum, in science classrooms.

A Study on the Basic Education Program of Fashion Drawing

  • 장동림
    • 패션비즈니스 / Vol. 1, No. 1 / pp.84-98 / 1997
  • This study develops a fashion drawing education program based on the 'split-brain' theory of Roger W. Sperry and 'Drawing on the Right Side of the Brain' by Betty Edwards. Students in fashion design start their training by building a foundation in drawing and studying the tools, materials, and methods of the industry. Ideas are then developed on paper, later translated into three-dimensional shapes, and finally into finished garments. Fashion drawing and design techniques train the hand and eye to all the nuances of fashion design and illustration. A fashion drawing course deals with sketching fashion models in order to understand the model figure, basic anatomy, movement, and figure attitudes. Having mastered the basic skills, students take an advanced drawing course that develops awareness of design and of the needs of the fashion market, using various media to develop a designer's sketch, with emphasis on drawing and design. Featured aspects of this study include: (1) drawing the negative space: basic visual concepts; (2) contour drawing: constructs, visual measurement, movement; (3) model drawing: the classical method, proportion, symmetry. The primary aim of this study is to develop a sensitive, animated line based on observed form. It is important to let students imagine that they are actually touching the model, for in this way they can benefit from simulating a child's learning process: instead of actually touching the model, they use their eyes as an extension of their sense of touch.


Categorization of Motion Drawing for Educating Animation: A Basic Study on the Development of an Educational Model Applying Principles of Brain Science

  • 박성원
    • 만화애니메이션 연구 / No. 35 / pp.1-27 / 2014
  • This study is a preliminary analysis in a line of research on an alternative educational model, leading to the view that animation drawing ability can be improved efficiently by applying teaching methods that take into account brain function, learning, and creative mechanisms. Recently, academic research in many fields is not confined to a single major but attempts to produce specialized convergent educational content through interdisciplinary work, because every field rests on a complex structure of humanistic experience, and the arts are no exception. In particular, since the animation field, as visual content, comprises highly specialized sub-areas, even drawing education alone requires clarifying the competencies needed for professional work and developing systematic teaching methods. Accordingly, this study presents the results of a literature review aimed at designing an educational model suited to the specialized characteristics of animation education. The meaning of drawing, the most basic ability to be trained in animation education, is conceptualized and categorized according to the characteristics of the discipline. From the redefined concept and categories of drawing, the constituent elements emerge, which become the basis for the subsequent work of specifying learning objectives. As a result, drawing that reflects the characteristics of the animation field is defined as motion drawing, its components are examined, and this serves as the basis for planning an educational model that applies brain-science principles of creativity and learning.

Electroencephalography-based imagined speech recognition using deep long short-term memory network

  • Agarwal, Prabhakar;Kumar, Sandeep
    • ETRI Journal / Vol. 44, No. 4 / pp.672-685 / 2022
  • This article proposes a subject-independent application of brain-computer interfacing (BCI). A 32-channel Electroencephalography (EEG) device is used to measure imagined speech (SI) of four words (sos, stop, medicine, washroom) and one phrase (come-here) across 13 subjects. A deep long short-term memory (LSTM) network has been adopted to recognize the above signals in seven EEG frequency bands individually in nine major regions of the brain. The results show a maximum accuracy of 73.56% and a network prediction time (NPT) of 0.14 s which are superior to other state-of-the-art techniques in the literature. Our analysis reveals that the alpha band can recognize SI better than other EEG frequencies. To reinforce our findings, the above work has been compared by models based on the gated recurrent unit (GRU), convolutional neural network (CNN), and six conventional classifiers. The results show that the LSTM model has 46.86% more average accuracy in the alpha band and 74.54% less average NPT than CNN. The maximum accuracy of GRU was 8.34% less than the LSTM network. Deep networks performed better than traditional classifiers.
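
A minimal Keras sketch of a stacked (deep) LSTM classifier over band-filtered EEG windows, in the spirit of the model described above; the window length, channel count, and layer sizes are assumptions, not the paper's architecture.

```python
import numpy as np
import tensorflow as tf

# Hypothetical input shape: band-filtered EEG windows of 128 time steps x 32 channels,
# mapped to the five imagined-speech classes (sos, stop, medicine, washroom, come-here).
TIMESTEPS, CHANNELS, NUM_CLASSES = 128, 32, 5

model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIMESTEPS, CHANNELS)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # "deep" = stacked LSTM layers
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in for alpha-band EEG segments from one brain region.
X = np.random.randn(64, TIMESTEPS, CHANNELS).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=64)
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
print(model.predict(X[:1]).shape)   # (1, 5) class probabilities
```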

Emotion Recognition Method for Driver Services

  • Kim, Ho-Duck;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 7, No. 4 / pp.256-261 / 2007
  • Electroencephalographic(EEG) is used to record activities of human brain in the area of psychology for many years. As technology developed, neural basis of functional areas of emotion processing is revealed gradually. So we measure fundamental areas of human brain that controls emotion of human by using EEG. Hands gestures such as shaking and head gesture such as nodding are often used as human body languages for communication with each other, and their recognition is important that it is a useful communication medium between human and computers. Research methods about gesture recognition are used of computer vision. Many researchers study Emotion Recognition method which uses one of EEG signals and Gestures in the existing research. In this paper, we use together EEG signals and Gestures for Emotion Recognition of human. And we select the driver emotion as a specific target. The experimental result shows that using of both EEG signals and gestures gets high recognition rates better than using EEG signals or gestures. Both EEG signals and gestures use Interactive Feature Selection(IFS) for the feature selection whose method is based on the reinforcement learning.