• Title/Summary/Keyword: Deep neural networks


Integrated receptive field diversification method for improving speaker verification performance for variable-length utterances (가변 길이 입력 발성에서의 화자 인증 성능 향상을 위한 통합된 수용 영역 다양화 기법)

  • Shin, Hyun-seo;Kim, Ju-ho;Heo, Jungwoo;Shim, Hye-jin;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea, v.41 no.3, pp.319-325, 2022
  • Variation in utterance length is a major factor that can degrade the performance of speaker verification systems. To handle this issue, previous studies attempted to extract speaker features from multiple branches or to use convolution layers with different receptive fields. Combining the advantages of these two approaches for variable-length input, this paper proposes integrated receptive field diversification, which extracts speaker features through more diverse receptive fields. The proposed method processes the input features with convolutional layers of different receptive fields in multiple time-axis branches, and extracts the speaker embedding by dynamically aggregating the processed features according to the length of the input utterance. The deep neural networks in this study were trained on the VoxCeleb2 dataset and tested on the VoxCeleb1 evaluation set divided into 1 s, 2 s, 5 s, and full-length conditions. Experimental results demonstrated that the proposed method reduces the equal error rate by 19.7 % compared to the baseline.
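The "diverse receptive fields" idea above can be illustrated with a small calculation (a sketch, not the authors' code): for stride-1 convolutions, the receptive field of a stack grows with kernel size and dilation, so branches with different settings see different temporal spans. The branch configurations below are hypothetical.

```python
# Receptive field of a stack of stride-1 1-D conv layers, standard formula:
# RF = 1 + sum over layers of (kernel_size - 1) * dilation.

def receptive_field(layers):
    """layers: list of (kernel_size, dilation) tuples."""
    rf = 1
    for kernel_size, dilation in layers:
        rf += (kernel_size - 1) * dilation
    return rf

# Three hypothetical time-axis branches with different settings:
branches = {
    "narrow": [(3, 1), (3, 1)],   # small temporal context
    "medium": [(5, 1), (5, 2)],   # moderate temporal context
    "wide":   [(7, 2), (7, 4)],   # large temporal context
}
fields = {name: receptive_field(cfg) for name, cfg in branches.items()}
print(fields)  # each branch covers a different number of time frames
```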

Fake News Detection Using CNN-based Sentiment Change Patterns (CNN 기반 감성 변화 패턴을 이용한 가짜뉴스 탐지)

  • Tae Won Lee;Ji Su Park;Jin Gon Shon
    • KIPS Transactions on Software and Data Engineering, v.12 no.4, pp.179-188, 2023
  • Fake news, which disguises itself as legitimate news content, appears whenever important events occur and causes social confusion. Accordingly, artificial intelligence techniques are being studied for fake news detection. Deep learning can be used, for example, to automatically recognize and block fake news through natural language processing, or to detect social media influencer accounts that spread false information by combining it with network causal inference. However, fake news detection is considered one of the harder problems in natural language processing: because fake news takes many forms and expressions, feature extraction is difficult, and the same feature may carry different meanings depending on the category to which the news belongs. In this paper, sentiment change patterns are presented as an additional criterion for detecting fake news. We propose a model with improved performance that applies a convolutional neural network to a fake news dataset to analyze content characteristics, and additionally analyzes sentiment change patterns. Sentiment polarity is calculated for each sentence of the news, and a value dependent on sentence order is obtained by applying long short-term memory (LSTM). This result is defined as the sentiment change pattern and is combined with the content characteristics of the news as an independent variable in the proposed fake news detection model. We train the proposed model and a comparison model with deep learning and conduct experiments on a fake news dataset, confirming that sentiment change patterns can improve fake news detection performance.
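The order-dependent "sentiment change pattern" can be sketched in miniature (the paper uses a CNN plus LSTM; here a toy polarity lexicon and consecutive differences stand in, purely for illustration — the lexicon and function names are hypothetical):

```python
# Toy polarity lexicon; real systems would use a trained sentiment model.
LEXICON = {"shocking": -1.0, "terrible": -1.0, "great": 1.0, "calm": 0.5}

def sentence_polarity(sentence):
    """Average lexicon score of the words in one sentence."""
    words = sentence.lower().split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def change_pattern(sentences):
    """Order-dependent feature: consecutive differences of polarities."""
    pol = [sentence_polarity(s) for s in sentences]
    return [round(b - a, 3) for a, b in zip(pol, pol[1:])]

news = ["A calm morning", "Then a shocking event", "Things are great now"]
print(change_pattern(news))  # → [-1.5, 2.0]
```

Reordering the same sentences yields a different pattern, which is exactly the property that makes the feature sensitive to narrative structure.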

Detail Focused Image Classifier Model for Traditional Images (전통문화 이미지를 위한 세부 자질 주목형 이미지 자동 분석기)

  • Kim, Kuekyeng;Hur, Yuna;Kim, Gyeongmin;Yu, Wonhee;Lim, Heuiseok
    • Journal of the Korea Convergence Society, v.8 no.12, pp.85-92, 2017
  • As accessibility to traditional cultural contents drops even as their production increases, higher accessibility is needed for continued management and research. To this end, this paper introduces an image classifier model for traditional images based on artificial neural networks: it converts the input image's features into a vector space and, using an RNN-based model, recognizes and compares the details of the input, enabling the classification of traditional images. By focusing on details, the classifier can distinguish similar-looking traditional images more precisely. To train this model, a wide range of images was collected and organized based on the format of the Korean information culture field, which contributes to other research that uses traditional cultural images. This research also contributes to further activating demand, supply, and research related to traditional culture.

Cox Model Improvement Using Residual Blocks in Neural Networks: A Study on the Predictive Model of Cervical Cancer Mortality (신경망 내 잔여 블록을 활용한 콕스 모델 개선: 자궁경부암 사망률 예측모형 연구)

  • Nang Kyeong Lee;Joo Young Kim;Ji Soo Tak;Hyeong Rok Lee;Hyun Ji Jeon;Jee Myung Yang;Seung Won Lee
    • The Transactions of the Korea Information Processing Society, v.13 no.6, pp.260-268, 2024
  • Cervical cancer is the fourth most common cancer in women worldwide; more than 604,000 new cases were reported in 2020 alone, resulting in approximately 341,831 deaths. The Cox regression model is widely adopted in cancer research, but its linear assumptions are a limitation when nonlinear associations exist. To address this problem, this paper proposes ResSurvNet, a new model that improves the accuracy of cervical cancer mortality prediction using ResNet's residual learning framework. The proposed model outperformed the DNN, CPH, CoxLasso, Cox Gradient Boost, and RSF models compared in this study. This predictive performance demonstrates great value for early diagnosis and treatment planning in the management of cervical cancer patients and represents significant progress in the field of survival analysis.
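The residual learning framework borrowed from ResNet can be shown in a minimal sketch (an assumed block structure, not the authors' code): a block computes x + F(x), so when F's weights are zero the block is exactly the identity mapping, which is what makes deep stacks easier to optimize.

```python
import numpy as np

def residual_block(x, W1, W2):
    """Two-layer MLP residual block with ReLU and an identity skip connection."""
    h = np.maximum(0.0, x @ W1)   # inner transformation F(x): linear + ReLU
    return x + h @ W2             # skip connection adds the input back

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W_zero = np.zeros((8, 8))
out = residual_block(x, W_zero, W_zero)
print(np.allclose(out, x))  # → True: zero-initialized F gives the identity
```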

Random Balance between Monte Carlo and Temporal Difference in off-policy Reinforcement Learning for Less Sample-Complexity (오프 폴리시 강화학습에서 몬테 칼로와 시간차 학습의 균형을 사용한 적은 샘플 복잡도)

  • Kim, Chayoung;Park, Seohee;Lee, Woosik
    • Journal of Internet Computing and Services, v.21 no.5, pp.1-7, 2020
  • Deep neural networks (DNNs) used as approximation functions in reinforcement learning (RL) can, in theory, achieve realistic results. In empirical benchmarks, temporal difference (TD) learning generally outperforms Monte Carlo (MC) learning. However, some previous works show that MC is better than TD when rewards are very sparse or delayed, and recent research shows that MC prediction is superior to TD-based methods when the agent's observations of the environment are partial in complex control tasks. Most of these environments can be regarded as 5-step or 20-step Q-learning settings, in which the experiment proceeds without long roll-outs to alleviate performance degradation. In other words, for networks with noise, regardless of controlled roll-outs, it is better to use MC, which is robust to noisy rewards, than TD, or the two perform almost identically. These studies break with the view that TD is always better than MC, and recent results suggest that combining MC and TD works better in practice than theory alone indicates. Therefore, based on these prior results, we attempt to exploit a random balance, a mixture of TD and MC, in RL without the complicated reward formulas used in those studies. Comparing a DQN using the random MC-TD mixture with the well-known DQN using only TD-based learning, we demonstrate through experiments in OpenAI Gym that even well-performing TD learning benefits from the mixture of TD and MC.
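A randomly balanced MC/TD learning target can be sketched as follows (an assumed form of the mixture, for illustration only; the function names and the convex-combination shape are not taken from the paper):

```python
import random

GAMMA = 0.99  # discount factor

def mc_return(rewards):
    """Full Monte Carlo return computed from a finished episode."""
    g = 0.0
    for r in reversed(rewards):
        g = r + GAMMA * g
    return g

def td_target(reward, next_value):
    """One-step TD target using a bootstrapped value estimate."""
    return reward + GAMMA * next_value

def mixed_target(rewards, next_value, beta=None):
    """Random convex mixture: beta * MC return + (1 - beta) * TD target."""
    if beta is None:
        beta = random.random()  # fresh random balance each update
    return beta * mc_return(rewards) + (1.0 - beta) * td_target(rewards[0], next_value)

print(mixed_target([1.0, 0.0, 1.0], next_value=0.5, beta=0.5))  # → 1.73755
```

With beta drawn uniformly per update, the learner interpolates between the low-bias MC signal and the low-variance TD signal without any hand-tuned schedule.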

Status and Implications of Hydrogeochemical Characterization of Deep Groundwater for Deep Geological Disposal of High-Level Radioactive Wastes in Developed Countries (고준위 방사성 폐기물 지질처분을 위한 해외 선진국의 심부 지하수 환경 연구동향 분석 및 시사점 도출)

  • Jaehoon Choi;Soonyoung Yu;SunJu Park;Junghoon Park;Seong-Taek Yun
    • Economic and Environmental Geology, v.55 no.6, pp.737-760, 2022
  • For the geological disposal of high-level radioactive wastes (HLW), an understanding of the deep subsurface environment is essential, gained through geological, hydrogeological, geochemical, and geotechnical investigations. Although South Korea plans geological disposal of HLW, only a few studies have characterized the geochemistry of the deep subsurface environment. To guide the hydrogeochemical research for selecting suitable repository sites, this study overviewed the status and trends in hydrogeochemical characterization of deep groundwater for the deep geological disposal of HLW in developed countries. Examination of the disposal-site selection processes in eight countries (the USA, Canada, Finland, Sweden, France, Japan, Germany, and Switzerland) showed that the following geochemical parameters are needed for characterizing the deep subsurface environment: major and minor elements and isotopes (e.g., 34S and 18O of SO4^2-, 13C and 14C of DIC, 2H and 18O of water) of both groundwater and pore water (in aquitards), fracture-filling minerals, organic materials, colloids, and oxidation-reduction indicators (e.g., Eh, Fe2+/Fe3+, H2S/SO4^2-, NH4+/NO3-). A suitable repository was selected based on the integrated interpretation of these geochemical data from the deep subsurface. In South Korea, hydrochemical types and evolutionary patterns of deep groundwater were identified using artificial neural networks (e.g., Self-Organizing Map), and the impact of shallow groundwater mixing was evaluated based on multivariate statistics (e.g., M3 modeling). The relationship between fracture-filling minerals and groundwater chemistry has also been investigated through reaction-path modeling. However, these previous studies in South Korea were conducted without some important geochemical data, including isotopes, oxidation-reduction indicators, and DOC, mainly due to the lack of available data. Therefore, a detailed geochemical investigation is required across the country to collect these hydrochemical data and to select a geological disposal site based on scientific evidence.

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.24 no.3, pp.1-19, 2018
  • A large amount of data is now available for research and business sectors to extract knowledge from. This data can be unstructured, such as audio, text, and image data, and can be analyzed by deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through to the outputs. Its layer structure is well suited for image classification, comprising convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of feature maps, and fully connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as the apparel alone or a professional model wearing it. Such images may not train the classification model effectively for cases such as street fashion or walking images, which are taken in uncontrolled conditions and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset that captures mobility. This allows the classification model to be trained with far more variable data and enhances adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply transfer learning to our training network. As transfer learning in CNNs consists of pre-training and fine-tuning stages, we divide training into two steps. First, we pre-train our architecture on the large-scale ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. As we could not find any previously published runway image dataset, we collected one from Google Image Search, obtaining 2,426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. To the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest training the model with images capturing all possible postures, which we denote as mobility, using our own runway apparel image dataset. Moreover, by applying transfer learning and using the checkpoints and parameters provided by TensorFlow Slim, we could reduce the time spent training the classification model to six minutes per experiment. This model can be used in many business applications where the query image may be a runway image, product image, or street fashion image. Specifically, runway query images can support a mobile brand-search service during fashion week, street style query images can be classified and labeled by brand or style during fashion editorial tasks, and website query images can be processed by e-commerce services providing item information or recommending similar items.
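The two-stage transfer-learning recipe (pre-train, then fine-tune only what is new) can be sketched in miniature. In this illustration a fixed random projection stands in for a pre-trained GoogLeNet backbone, and only a small logistic-regression head is trained; everything here is an assumption for demonstration, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)
W_frozen = rng.normal(size=(16, 4))       # stand-in for pre-trained weights, kept fixed

def extract(x):
    """Frozen feature extractor: no gradient updates flow into W_frozen."""
    return np.maximum(0.0, x @ W_frozen)

def train_head(X, y, lr=0.1, steps=200):
    """Fine-tune only a logistic-regression head on the frozen features."""
    F = extract(X)
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(F @ w)))     # sigmoid predictions
        w -= lr * F.T @ (p - y) / len(y)       # gradient step on the head only
    return w

X = rng.normal(size=(32, 16))
y = (X[:, 0] > 0).astype(float)                # toy binary labels
w = train_head(X, y)
acc = np.mean((1.0 / (1.0 + np.exp(-(extract(X) @ w))) > 0.5) == y)
print(acc)  # training accuracy of the fine-tuned head
```

Because the backbone stays frozen, the number of trainable parameters collapses from the full network to the head alone, which is why fine-tuning can finish in minutes.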

A Study on Similar Trademark Search Model Using Convolutional Neural Networks (합성곱 신경망(Convolutional Neural Network)을 활용한 지능형 유사상표 검색 모형 개발)

  • Yoon, Jae-Woong;Lee, Suk-Jun;Song, Chil-Yong;Kim, Yeon-Sik;Jung, Mi-Young;Jeong, Sang-Il
    • Management & Information Systems Review, v.38 no.3, pp.55-80, 2019
  • Recently, many companies have improved their management performance by building powerful brand value, which is recognized through trademark rights. However, as the online commerce market grows, infringement of trademark rights is increasing. According to various studies and reports, cases of foreign and domestic companies infringing trademark rights have increased. Because the manpower and cost required for trademark protection are enormous, small and medium-sized enterprises (SMEs) cannot conduct preliminary investigations to protect their trademark rights. In addition, since no trademark image search service exists, many domestic companies must manually investigate huge numbers of trademarks when conducting preliminary investigations to protect their rights. Therefore, we develop an intelligent similar-trademark search model to reduce the manpower and cost of preliminary investigation. To measure the performance of the model developed in this study, test data selected by intellectual property experts was used, and ResNet V1 101 showed the highest performance. The significance of this study is as follows: the experimental results empirically demonstrate that an image classification algorithm shows high performance not only in object recognition but also in image retrieval. Since the model developed in this study was trained on actual trademark image data, it is expected to be applicable in real industrial environments.

Accuracy of one-step automated orthodontic diagnosis model using a convolutional neural network and lateral cephalogram images with different qualities obtained from nationwide multi-hospitals

  • Yim, Sunjin;Kim, Sungchul;Kim, Inhwan;Park, Jae-Woo;Cho, Jin-Hyoung;Hong, Mihee;Kang, Kyung-Hwa;Kim, Minji;Kim, Su-Jung;Kim, Yoon-Ji;Kim, Young Ho;Lim, Sung-Hoon;Sung, Sang Jin;Kim, Namkug;Baek, Seung-Hak
    • The Korean Journal of Orthodontics, v.52 no.1, pp.3-19, 2022
  • Objective: The purpose of this study was to investigate the accuracy of one-step automated orthodontic diagnosis of skeletodental discrepancies using a convolutional neural network (CNN) and lateral cephalogram images of different qualities from nationwide multi-hospitals. Methods: Among 2,174 lateral cephalograms, 1,993 cephalograms from two hospitals were used for the training and internal test sets, and 181 cephalograms from eight other hospitals were used for an external test set. They were divided into three classification groups according to anteroposterior skeletal discrepancies (Class I, II, and III), vertical skeletal discrepancies (normodivergent, hypodivergent, and hyperdivergent patterns), and vertical dental discrepancies (normal overbite, deep bite, and open bite) as a gold standard. Pre-trained DenseNet-169 was used as the CNN classifier model. Diagnostic performance was evaluated by receiver operating characteristic (ROC) analysis, t-stochastic neighbor embedding (t-SNE), and gradient-weighted class activation mapping (Grad-CAM). Results: In the ROC analysis, the mean area under the curve and the mean accuracy of all classifications were high with both internal and external test sets (all > 0.89 and > 0.80, respectively). In the t-SNE analysis, our model succeeded in creating good separation among the three classification groups. Grad-CAM figures showed differences in the location and size of the focus areas among the three classification groups in each diagnosis. Conclusions: Since the accuracy of our model was validated with both internal and external test sets, it demonstrates the potential usefulness of a one-step automated orthodontic diagnosis tool using a CNN model. However, it still needs technical improvement in classifying vertical dental discrepancies.

Lightening of Human Pose Estimation Algorithm Using MobileViT and Transfer Learning

  • Kunwoo Kim;Jonghyun Hong;Jonghyuk Park
    • Journal of the Korea Society of Computer and Information, v.28 no.9, pp.17-25, 2023
  • In this paper, we propose a model that performs human pose estimation with fewer parameters and faster inference through a MobileViT-based model. The base model achieves lightweight performance through a structure that combines features of convolutional neural networks with those of the Vision Transformer. The Transformer, the major mechanism in this study, has become more influential as Transformer-based models outperform convolutional neural network-based models in computer vision. Similarly, in human pose estimation, the Vision Transformer-based ViTPose maintains the best performance on all major benchmarks such as COCO, OCHuman, and MPII. However, because the Vision Transformer has a heavy model structure with a large number of parameters and requires a relatively large amount of computation, training it is costly for users. Accordingly, the base model compensates for the Vision Transformer's weak inductive bias, which otherwise demands heavy computation, with local representation through a convolutional neural network structure. Finally, the proposed model obtained a mean average precision of 0.694 on the MS COCO benchmark with 3.28 GFLOPs and 9.72 million parameters, roughly 1/5 and 1/9 of ViTPose, respectively.
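One reason MobileNet-style convolutional blocks (on which MobileViT's convolutional parts build) are so much lighter is the depthwise-separable factorization of a standard convolution. A quick parameter count illustrates the saving; the layer sizes below are generic arithmetic, not MobileViT's actual configuration.

```python
# Parameter counts for a standard vs. a depthwise-separable convolution.

def standard_conv_params(k, c_in, c_out):
    """A k x k convolution mixing all channels at once."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k filtering per channel, then a 1 x 1 pointwise mix."""
    depthwise = k * k * c_in      # one k x k filter per input channel
    pointwise = c_in * c_out      # 1 x 1 conv mixes channels
    return depthwise + pointwise

k, c_in, c_out = 3, 128, 128
std = standard_conv_params(k, c_in, c_out)
sep = separable_conv_params(k, c_in, c_out)
print(std, sep, round(std / sep, 1))  # separable is about 8.4x smaller here
```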