• Title/Summary/Keyword: Inference models

450 results (processing time: 0.025 seconds)

Strawberry Pests and Diseases Detection Technique Optimized for Symptoms Using Deep Learning Algorithm (딥러닝을 이용한 병징에 최적화된 딸기 병충해 검출 기법)

  • Choi, Young-Woo; Kim, Na-eun; Paudel, Bhola; Kim, Hyeon-tae
    • Journal of Bio-Environment Control, v.31 no.3, pp.255-260, 2022
  • This study aimed to develop a service model that uses a deep learning algorithm to detect diseases and pests in strawberries from image data. In addition, detection performance was further improved by proposing segmented image datasets specialized for disease and pest symptoms. The CNN-based YOLO deep learning model was selected to improve on the slow training and inference speed of existing R-CNN-based models. A general image dataset and the proposed segmented image dataset were prepared to train the pest and disease detection model. When the model was trained with the general dataset, the pest detection rate was 81.35% and the detection reliability was 73.35%. When it was trained with the segmented image dataset, the pest detection rate increased to 91.93% and the detection reliability to 83.41%. These results suggest that the performance of a deep learning model can be improved by using a segmented image dataset instead of a general one.
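
As a rough sketch of the training-and-detection pipeline described above, using the ultralytics YOLO package (the paper does not name its exact framework; the weights file, dataset YAML, and thresholds below are placeholders):

```python
from ultralytics import YOLO

# Load pretrained detection weights (placeholder; the paper's exact YOLO
# variant is not specified).
model = YOLO("yolov8n.pt")

# Train on a dataset described by a YAML file listing image folders and the
# disease/pest class names; a segmented-symptom dataset would simply point
# to the cropped symptom images here.
model.train(data="strawberry_pests.yaml", epochs=100, imgsz=640)

# Detect symptoms in a new image, keeping boxes above a confidence threshold.
results = model.predict("leaf.jpg", conf=0.25)
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```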

Text Classification Using Heterogeneous Knowledge Distillation

  • Yu, Yerin; Kim, Namgyu
    • Journal of the Korea Society of Computer and Information, v.27 no.10, pp.29-41, 2022
  • Recently, with the development of deep learning technology, a variety of huge models with excellent performance have been devised by pre-training on massive amounts of text data. However, for such a model to be applied in real-life services, inference must be fast and the amount of computation low, so model compression technology is attracting attention. Knowledge distillation, a representative model compression technique, transfers the knowledge already learned by a teacher model to a relatively small student model and can be applied in a variety of ways. However, knowledge distillation has a limitation: because the teacher model learns only the knowledge necessary for solving the given problem and distills it to the student model from the same point of view, problems with low similarity to the previously learned data are difficult to solve. Therefore, we propose a heterogeneous knowledge distillation method in which the teacher model learns a higher-level concept rather than the knowledge required for the task the student model needs to solve, and then distills this knowledge to the student model. Through classification experiments on about 18,000 documents, we confirmed that the heterogeneous knowledge distillation method outperformed traditional knowledge distillation in both learning efficiency and accuracy.
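
For reference, a minimal PyTorch sketch of the standard soft-target distillation loss that such methods build on (the heterogeneous variant changes what the teacher learns, not this transfer mechanism; the temperature and weighting values are illustrative):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Soft-target knowledge distillation: match the student's softened
    output distribution to the teacher's, plus ordinary cross-entropy."""
    # Soften both distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between softened distributions, scaled by T^2.
    kd = F.kl_div(soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    # Hard-label cross-entropy on the actual classification task.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```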

Semantic Computing-based Dynamic Job Scheduling Model and Simulation (시멘틱 컴퓨팅 기반의 동적 작업 스케줄링 모델 및 시뮬레이션)

  • Noh, Chang-Hyeon; Jang, Sung-Ho; Kim, Tae-Young; Lee, Jong-Sik
    • Journal of the Korea Society for Simulation, v.18 no.2, pp.29-38, 2009
  • In a computing environment with heterogeneous resources, a job scheduling model is necessary for effective resource utilization and high-speed data processing, and it must cope with dynamic changes in resource conditions. There has been much research on resource estimation methods and heuristic algorithms for distributing and allocating jobs to heterogeneous resources. However, existing approaches are weak in system compatibility and scalability because they do not use a standard language, and they cannot process jobs effectively or handle the variety of computing situations in which resource conditions change dynamically in real time. To solve these problems, this paper proposes a semantic computing-based dynamic job scheduling model that defines knowledge-based rules for scheduling methods adaptable to changes in resource conditions and allocates each job to the best-suited resource through inference. This paper also constructs a resource ontology to manage information about heterogeneous resources without difficulty, using OWL, the standard ontology language established by the W3C. Experimental results show that the proposed scheduling model outperforms existing scheduling models in terms of throughput, job loss, and turnaround time.
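
As a simplified illustration of the rule-based matching idea, a minimal Python sketch (the paper encodes resources in an OWL ontology and infers with a semantic reasoner; the resource fields and rules below are invented stand-ins):

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cpu_free: float   # fraction of CPU currently idle
    mem_free: int     # MB of free memory
    online: bool

# Knowledge-based rules: each rule is a predicate over a resource's current
# condition; a resource is eligible only if every rule holds.
RULES = [
    lambda r: r.online,
    lambda r: r.cpu_free >= 0.25,
    lambda r: r.mem_free >= 512,
]

def schedule(job_name, resources):
    """Pick the best-suited resource by filtering with the rules and ranking
    eligible resources by idle CPU (a stand-in for ontology inference)."""
    eligible = [r for r in resources if all(rule(r) for rule in RULES)]
    if not eligible:
        return None  # hold the job until resource conditions change
    best = max(eligible, key=lambda r: r.cpu_free)
    print(f"{job_name} -> {best.name}")
    return best
```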

Parameter Optimization and Uncertainty Analysis of the NWS-PC Rainfall-Runoff Model Coupled with Bayesian Markov Chain Monte Carlo Inference Scheme (Bayesian Markov Chain Monte Carlo 기법을 통한 NWS-PC 강우-유출 모형 매개변수의 최적화 및 불확실성 분석)

  • Kwon, Hyun-Han; Moon, Young-Il; Kim, Byung-Sik; Yoon, Seok-Young
    • KSCE Journal of Civil and Environmental Engineering Research, v.28 no.4B, pp.383-392, 2008
  • It is not always easy to estimate the parameters of hydrologic models, due to insufficient hydrologic data, when hydraulic structures are designed or water resources plans are established. Uncertainty analysis is therefore needed to examine the reliability of the estimated results. With this in mind, this study applies a Bayesian Markov Chain Monte Carlo scheme to the widely used NWS-PC rainfall-runoff model, with a case study performed on the Soyang Dam watershed in Korea. The NWS-PC model is calibrated against observed daily runoff; thirteen parameters in the model are optimized, and the posterior distribution associated with each parameter is derived. The Bayesian Markov Chain Monte Carlo scheme shows improved results in terms of statistical performance measures and graphical examination. Runoff patterns can be influenced by various factors, and Bayesian approaches are capable of translating these uncertainties into parameter uncertainties. One could guard against an unexpected runoff event by utilizing the information provided by Bayesian methods. Therefore, rainfall-runoff analysis coupled with uncertainty analysis can give insight into evaluating flood risk and dam size in a reasonable way.
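
As an illustration of the sampling machinery, a minimal random-walk Metropolis sketch in Python (a simple member of the MCMC family; the NWS-PC likelihood and the priors over the thirteen parameters are not reproduced here):

```python
import numpy as np

def metropolis_hastings(log_posterior, theta0, n_iter=10000, step=0.1,
                        rng=None):
    """Random-walk Metropolis sampler: propose a Gaussian perturbation and
    accept it with probability min(1, posterior ratio)."""
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    samples = []
    for _ in range(n_iter):
        proposal = theta + rng.normal(scale=step, size=theta.shape)
        logp_new = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_new - logp:
            theta, logp = proposal, logp_new
        samples.append(theta.copy())
    return np.array(samples)  # draws approximating the posterior

# Example: sample a 1-D standard normal as a stand-in posterior.
draws = metropolis_hastings(lambda t: -0.5 * float(t @ t), np.zeros(1))
```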

Building robust Korean speech recognition model by fine-tuning large pretrained model (대형 사전훈련 모델의 파인튜닝을 통한 강건한 한국어 음성인식 모델 구축)

  • Changhan Oh; Cheongbin Kim; Kiyoung Park
    • Phonetics and Speech Sciences, v.15 no.3, pp.75-82, 2023
  • Automatic speech recognition (ASR) has been revolutionized by deep learning-based approaches, among which self-supervised learning methods have proven particularly effective. In this study, we aim to enhance the performance of OpenAI's Whisper model, a multilingual ASR system, on the Korean language. Whisper was pretrained on a large corpus (around 680,000 hours) of web speech data and has demonstrated strong recognition performance for major languages. However, it faces challenges in recognizing languages such as Korean, which were not well represented in its training data. We address this issue by fine-tuning the Whisper model with an additional dataset comprising about 1,000 hours of Korean speech, and we compare its performance against a Transformer model trained from scratch on the same dataset. Our results indicate that fine-tuning significantly improved the Whisper model's Korean speech recognition in terms of character error rate (CER), with performance improving as model size increased. However, the Whisper model's performance on English deteriorated after fine-tuning, emphasizing the need for further research toward robust multilingual models. Our study demonstrates the potential of a fine-tuned Whisper model for Korean ASR applications. Future work will focus on multilingual recognition and optimization for real-time inference.
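
For reference, a minimal sketch of Korean transcription with a pretrained Whisper checkpoint via Hugging Face transformers (the checkpoint size and dummy audio are placeholders; fine-tuning, which the paper performs, would precede this inference step):

```python
import numpy as np
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Checkpoint size is a placeholder; the paper compares several model sizes.
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Placeholder waveform: Whisper expects 16 kHz mono float audio.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# Force Korean transcription; a Korean fine-tuned model would be loaded
# above in place of the stock checkpoint.
forced_ids = processor.get_decoder_prompt_ids(language="korean",
                                              task="transcribe")
with torch.no_grad():
    ids = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```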

Performance Evaluation and Analysis on Single and Multi-Network Virtualization Systems with Virtio and SR-IOV (가상화 시스템에서 Virtio와 SR-IOV 적용에 대한 단일 및 다중 네트워크 성능 평가 및 분석)

  • Jaehak Lee; Jongbeom Lim; Heonchang Yu
    • The Transactions of the Korea Information Processing Society, v.13 no.2, pp.48-59, 2024
  • As hardware functions that natively support virtualization have been developed, user applications with various workloads now operate efficiently in virtualization systems. SR-IOV is a virtualization support function that provides direct access to PCI devices, giving high I/O performance by minimizing the need for hypervisor or operating system intervention. With SR-IOV, network I/O acceleration can be realized in virtualization systems, which have relatively long I/O paths compared to bare-metal systems and frequent context switches between user and kernel areas. To take advantage of SR-IOV's performance, network resource management policies that can derive optimal network performance when SR-IOV is applied to an instance such as a virtual machine (VM) or container are being actively studied. This paper evaluates and analyzes the network performance of SR-IOV, which implements I/O acceleration, against Virtio in terms of 1) network delay, 2) network throughput, 3) network fairness, 4) performance interference, and 5) multi-network behavior. The contributions of this paper are as follows. First, the network I/O process of Virtio and SR-IOV in a virtualization system is clearly explained. Second, the network performance of Virtio and SR-IOV is analyzed based on various performance metrics. Third, the system overhead and the possibility of optimization for SR-IOV networking in a virtualization system with high VM density are experimentally confirmed. The experimental results and analysis are expected to serve as a reference for network resource management policies in virtualization systems that operate network-intensive services such as smart factories, connected cars, deep learning inference models, and crowdsourcing.
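
As a rough illustration of the delay metric, a minimal Python sketch that measures application-level round-trip time from inside a VM (the echo-server address and port are hypothetical; evaluations like this one would normally use dedicated tools such as ping or netperf rather than a script):

```python
import socket
import statistics
import time

def measure_rtt(host, port, n=100, payload=b"x"):
    """Measure TCP round-trip times; running this inside a VM against a peer
    reached via a virtio or SR-IOV interface gives a crude per-path delay."""
    rtts = []
    with socket.create_connection((host, port)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(n):
            t0 = time.perf_counter()
            s.sendall(payload)
            s.recv(len(payload))  # peer is assumed to echo the payload back
            rtts.append((time.perf_counter() - t0) * 1e6)  # microseconds
    return statistics.mean(rtts), statistics.stdev(rtts)

# Hypothetical echo server on the network path under test.
mean_us, std_us = measure_rtt("192.0.2.10", 9000)
print(f"RTT: {mean_us:.1f} +/- {std_us:.1f} us")
```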

Nonlinear Vector Alignment Methodology for Mapping Domain-Specific Terminology into General Space (전문어의 범용 공간 매핑을 위한 비선형 벡터 정렬 방법론)

  • Kim, Junwoo; Yoon, Byungho; Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.28 no.2, pp.127-146, 2022
  • Recently, as word embedding has shown excellent performance in various deep learning-based natural language processing tasks, research on the advancement and application of word, sentence, and document embedding is being actively conducted. Among these topics, cross-language transfer, which enables semantic exchange between different languages, is growing alongside the development of embedding models. Academic interest in vector alignment is growing with the expectation that it can be applied to various embedding-based analyses; in particular, vector alignment is expected to enable mapping between specialized and general domains. In other words, it should become possible to map the vocabulary of specialized fields such as R&D, medicine, and law into the space of a pre-trained language model learned from a huge volume of general-purpose documents, or to provide a clue for mapping vocabulary between mutually different specialized fields. However, the linear vector alignment mainly studied in academia assumes statistical linearity and thus tends to oversimplify the vector space. This in essence assumes that different vector spaces are geometrically similar, which inevitably causes distortion in the alignment process. To overcome this limitation, we propose a deep learning-based vector alignment methodology that effectively learns the nonlinearity of the data. The proposed methodology sequentially trains a skip-connected autoencoder and a regression model to align specialized word embeddings, expressed in their own space, to the general embedding space. Finally, through inference with the two trained models, the specialized vocabulary can be aligned in the general space. To verify the performance of the proposed methodology, an experiment was performed on a total of 77,578 documents in the 'health care' field among national R&D tasks performed from 2011 to 2020. The results confirmed that the proposed methodology showed superior performance in terms of cosine similarity compared to existing linear vector alignment.
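
A minimal PyTorch sketch of the two-stage idea, assuming 300-dimensional embeddings (the layer sizes and the exact wiring of the skip connection are illustrative; the paper specifies the full architecture):

```python
import torch
import torch.nn as nn

class SkipAutoencoder(nn.Module):
    """Autoencoder with a skip connection from input to output."""
    def __init__(self, dim=300, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)
        self.skip = nn.Linear(dim, dim)  # skip path around the bottleneck

    def forward(self, x):
        return self.decoder(self.encoder(x)) + self.skip(x)

# Stage 2: a nonlinear regression model mapping the autoencoder output of a
# specialized vector toward its counterpart in the general embedding space.
autoencoder = SkipAutoencoder()
regressor = nn.Sequential(nn.Linear(300, 512), nn.ReLU(), nn.Linear(512, 300))

def align(specialized_vecs):
    """Inference with the two (sequentially trained) models:
    specialized embeddings -> general space."""
    with torch.no_grad():
        return regressor(autoencoder(specialized_vecs))

aligned = align(torch.randn(4, 300))  # four example domain-specific vectors
```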

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung; Jun, Seung-Pyo; Kim, Sang-Gook; Park, Hyun-Woo
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.23-46, 2017
  • Although the valuation of specific companies or projects has been carried out since the early 2000s, mainly in developed countries in North America and Europe, systems and methodologies for estimating the economic value of individual technologies or patents have developed only gradually. There exist several online systems that qualitatively evaluate a technology's grade or patent rating, such as 'KTRS' of KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. More recently, however, a web-based technology valuation system referred to as the 'STAR-Value system', which calculates quantitative values of a subject technology for purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and is spreading. In this study, we introduce the types of methodologies and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that discounts anticipated future economic income to its present value, and the relief-from-royalty method, which calculates the present value of royalties, where the contribution of the subject technology to the business value created is treated as the royalty rate. We examine how the models and related supporting information (technology life, corporate/business financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on classification information such as the International Patent Classification (IPC) or Korea Standard Industry Classification (KSIC) of the technology to be evaluated, the STAR-Value system automatically returns metadata such as technology cycle time (TCT), sales growth rate and profitability data of similar companies or industry sectors, weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value has high reliability and objectivity. Furthermore, if data-driven information on the potential market size of the target technology and the market share of the commercializing entity is referenced, or if the estimated value ranges of similar technologies by industry sector are provided from completed evaluation cases accumulated in the database, the STAR-Value system is anticipated to present highly accurate value ranges in real time by intelligently linking its various support modules. Beyond explaining the various valuation models and their primary variables as presented in this paper, the STAR-Value system aims to operate more systematically and in a data-driven way by supporting an optimal model selection guideline module, an intelligent technology value range reasoning module, a similar-company-based market share prediction module, and so on.
In addition, this research on the development and intelligence of the web-based STAR-Value system is significant in that it widely disseminates a web-based system through which the theoretical foundations of the technology valuation field can be validated and applied in practice, and it is expected to be utilized in various fields of technology commercialization.
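
To make the income approach concrete, a minimal sketch of the DCF present-value computation described above (the cash flows and the 12% discount rate are hypothetical examples, not values from the STAR-Value system):

```python
def discounted_cash_flow(cash_flows, discount_rate):
    """Present value of projected future cash flows:
    PV = sum over t of CF_t / (1 + r)^t, for t = 1..n."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Hypothetical numbers: five years of projected income attributable to the
# technology, discounted at a 12% WACC.
projected = [100.0, 120.0, 140.0, 150.0, 150.0]  # e.g. million KRW per year
print(round(discounted_cash_flow(projected, 0.12), 1))  # ~465.0
```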

A Generalized Adaptive Deep Latent Factor Recommendation Model (일반화 적응 심층 잠재요인 추천모형)

  • Kim, Jeongha; Lee, Jipyeong; Jang, Seonghyun; Cho, Yoonho
    • Journal of Intelligence and Information Systems, v.29 no.1, pp.249-263, 2023
  • Collaborative filtering, a representative recommendation system methodology, consists of two approaches: neighborhood methods and latent factor models. Among these, the latent factor model based on matrix factorization decomposes the user-item interaction matrix into two lower-dimensional rectangular matrices and predicts an item's rating from the product of these matrices. Because the factor vectors inferred from rating patterns capture user and item characteristics, this method is superior to neighborhood-based methods in scalability, accuracy, and flexibility. However, it has a fundamental drawback: it must reflect the diversity of individual preferences for items with no ratings, a limitation that leads to repetitive and inaccurate recommendations. The Adaptive Deep Latent Factor Model (ADLFM) was developed to address this issue. ADLFM adaptively learns preferences for each item by using the item description, which provides a detailed summary and explanation of the item; it takes the item description as input, calculates latent vectors for the user and item, and reflects personal diversity using an attention score. However, because it requires a dataset that includes item descriptions, the domains to which ADLFM can be applied are limited, which restricts its generalizability. This study proposes the Generalized Adaptive Deep Latent Factor Recommendation Model (G-ADLFRM) to improve on these limitations. First, we use the item ID, commonly available in recommendation systems, as input instead of the item description. Additionally, we apply improved deep learning structures such as self-attention, multi-head attention, and Multi-Conv1D. We conducted experiments on various datasets with changes to the input and the model structure. The results showed that when only the input was changed, MAE increased slightly compared to ADLFM due to the accompanying information loss, decreasing recommendation performance, while the average learning speed per epoch improved significantly as the amount of information to process decreased. When both the input and the model structure were changed, the best-performing Multi-Conv1D structure showed performance similar to ADLFM, sufficiently counteracting the information loss caused by the input change. We conclude that G-ADLFRM is a new, lightweight, and generalizable model that maintains the performance of the existing ADLFM while enabling fast learning and inference.
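
For context, a minimal PyTorch sketch of the basic latent factor prediction that ADLFM and G-ADLFRM build on (the rating is the dot product of user and item factor vectors; the dimensions and counts here are illustrative):

```python
import torch
import torch.nn as nn

class LatentFactorModel(nn.Module):
    """Basic matrix-factorization recommender: each user and item gets a
    k-dimensional factor vector, and the predicted rating is their dot
    product."""
    def __init__(self, n_users, n_items, k=32):
        super().__init__()
        self.user_factors = nn.Embedding(n_users, k)
        self.item_factors = nn.Embedding(n_items, k)

    def forward(self, user_ids, item_ids):
        u = self.user_factors(user_ids)
        v = self.item_factors(item_ids)
        return (u * v).sum(dim=-1)  # predicted rating per (user, item) pair

model = LatentFactorModel(n_users=1000, n_items=500)
pred = model(torch.tensor([0, 1]), torch.tensor([10, 42]))
```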

A Study on the Effect of User Value on Smartwatch Digital Healthcare Acceptance Intention to Promote Digital Healthcare Venture Start Up (Digital Healthcare 벤처창업 촉진을 위한, 사용자 가치가 Smartwatch Digital Healthcare 수용의도에 미치는 영향 연구)

  • Eekseong Jin; Soyoung Lee
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship, v.18 no.2, pp.35-52, 2023
  • Recently, as the non-face-to-face environment has developed due to COVID-19 and environmental pollution, the importance of online digital healthcare is increasing, and venture start-ups and activities in health care, telemedicine, and digital therapeutics are actively underway. This study examined the factors influencing the acceptance of digital healthcare smartwatches using an integrated approach combining the extended unified theory of acceptance and use of technology (UTAUT2) and behavioral reasoning theory (BRT). This state-of-the-art integrated technology acceptance framework was used to identify major factors such as utility expectations, social influence, convenience, price barriers, lack of alternatives, and behavioral intention. About 410 responses were collected from ordinary people across the country, ranging in age from their teens to their 60s; after testing the reliability and validity of the data, the hypotheses were verified using structural equation modeling. SPSS 23 and AMOS 23 were used for the analysis. The results show that personal innovativeness has a significant impact on the reasons for acceptance (use value, social influence, convenience of use), on attitude, and on the reasons for non-acceptance (price barriers, lack of alternatives, and barriers to use). These results match previous studies that confirmed the influence of the main values of innovative ICT on user acceptance intention. In addition, the reasons for acceptance had a significant effect on attitude, but the effect of the reasons for non-acceptance was not significant, suggesting that consumers are interested in new ICT products and services but purchase them carefully and selectively. This study evolves from acceptance analysis of general-purpose consumer innovation technology to acceptance analysis of consumer value in smartwatch digital healthcare, a new and important area for the future. Industrially, it can contribute to product purchasing and marketing. It is hoped that this study will encourage further research in the digital healthcare sector, which will play an important role in our lives, and that it will develop into deeper analysis of the factors suited to consumer value through integrated approach models and integrated analysis of consumer acceptance and non-acceptance.
