• Title/Summary/Keyword: Inference Service

Search results: 180

A Study on Accuracy Estimation of Service Model by Cross-validation and Pattern Matching

  • Cho, Seongsoo;Shrestha, Bhanu
    • International journal of advanced smart convergence
    • /
    • v.6 no.3
    • /
    • pp.17-21
    • /
    • 2017
  • In this paper, the service execution accuracy of an ontology-based rule inference method was compared with that of a machine learning method, and the amount of data at which the accuracy of the machine learning method becomes equal to that of the rule inference was identified. The rule inference measures service execution accuracy using accumulated data and pattern matching on service results, whereas the machine learning method measures it using cross-validation data. After building a confusion matrix and measuring the accuracy of each service execution, the inference algorithm can be selected from the results.
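
As a rough illustration of the comparison described above, the sketch below contrasts a toy pattern-matching rule with a cross-validated classifier on synthetic data and reports a confusion matrix and accuracy for each; the data, rule, and classifier are placeholders, not the paper's service model.

```python
# Illustrative sketch: comparing a rule-based matcher against a machine learning
# classifier using cross-validation and a confusion matrix (scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

# Synthetic service-log features and outcomes (placeholder for accumulated data).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

def rule_inference(x):
    """Toy pattern-matching rule: predict class 1 when two key features are positive."""
    return int(x[0] > 0 and x[1] > 0)

rule_pred = np.array([rule_inference(x) for x in X])

# Machine learning path: 5-fold cross-validated predictions.
ml_pred = cross_val_predict(DecisionTreeClassifier(random_state=0), X, y, cv=5)

for name, pred in [("rule", rule_pred), ("ml", ml_pred)]:
    print(name, accuracy_score(y, pred))
    print(confusion_matrix(y, pred))
```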

Performance analysis of local exit for distributed deep neural networks over cloud and edge computing

  • Lee, Changsik;Hong, Seungwoo;Hong, Sungback;Kim, Taeyeon
    • ETRI Journal
    • /
    • v.42 no.5
    • /
    • pp.658-668
    • /
    • 2020
  • In edge computing, most procedures, including data collection, data processing, and service provision, are handled at edge nodes rather than in the central cloud. This decreases the processing burden on the central cloud, enabling fast responses to end-device service requests in addition to reducing bandwidth consumption. However, edge nodes have limited computing, storage, and energy resources for supporting computation-intensive tasks such as deep neural network (DNN) inference. In this study, we analyze the effect of models with single and multiple local exits on DNN inference in an edge-computing environment. Our test results show that a single-exit model performs better than a multi-exit model at all exit points with respect to the number of locally exited samples, inference accuracy, and inference latency. These results signify that higher accuracy can be achieved with less computation when a single-exit model is adopted. In edge-computing infrastructure, it is therefore more efficient to adopt a DNN model with only one or a few exit points to provide a fast and reliable inference service.
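
The early-exit idea can be pictured as follows: a sample leaves the network at a local exit when the exit's softmax confidence clears a threshold; otherwise it continues through the full model. The PyTorch sketch below is only a minimal single-exit illustration under assumed layer sizes and an assumed threshold, not the authors' architecture.

```python
# Minimal single-exit inference sketch for an edge node (PyTorch assumed):
# exit locally when the exit branch is confident, otherwise run the full model.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(8))
        self.exit1 = nn.Linear(16 * 8 * 8, num_classes)   # single local exit
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(4))
        self.final = nn.Linear(32 * 4 * 4, num_classes)   # full (cloud-side) classifier

    def forward(self, x, threshold=0.8):
        h = self.block1(x)
        logits1 = self.exit1(h.flatten(1))
        conf, _ = torch.softmax(logits1, dim=1).max(dim=1)
        if conf.item() >= threshold:                      # sample exits at the edge
            return logits1, "edge"
        h = self.block2(h)                                # otherwise continue to the end
        return self.final(h.flatten(1)), "cloud"

model = EarlyExitNet().eval()
with torch.no_grad():
    out, where = model(torch.randn(1, 3, 32, 32))
print(where, out.argmax(dim=1).item())
```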

An Inference Method of a Multi-server Queue using Arrival and Departure Times (도착 및 이탈시점을 이용한 다중서버 대기행렬 추론)

  • Park, Jinsoo
    • Journal of the Korea Society for Simulation
    • /
    • v.25 no.3
    • /
    • pp.117-123
    • /
    • 2016
  • This paper presents inference methods for the inner operations of a multi-server queue when historical data are limited or system observation is restricted. In queueing system analysis, autocorrelated arrival and service processes increase the complexity of modeling, and numerous analysis methods have accordingly been developed. In this paper, we introduce an inference method for the specific situation in which external observations exhibit an autocorrelated structure and observations of internal operations are difficult. We relax an assumption of the previous method and provide a lemma and a theorem to guarantee the correctness of our proposed inference method. Using only external observations, the proposed method deduces the internal operation of a multi-server queue via a non-parametric approach even when the service times are autocorrelated. The main internal inference measures are the waiting times and service times of individual customers. We provide numerical results to verify that our method performs as intended.
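
A simple way to picture inference from external observations is the FIFO reconstruction below: given matched arrival and departure times and the number of servers, each customer's service start is taken as the later of the arrival time and the earliest server-free time. This is only a generic sketch of such inference, not the paper's method, which additionally handles autocorrelated service times.

```python
# Sketch: reconstructing waiting and service times of a FIFO multi-server queue
# from paired arrival/departure times, assuming departures are matched to customers.
import heapq

def infer_internals(arrivals, departures, c):
    """arrivals, departures: per-customer times in arrival order; c: number of servers."""
    free_at = [0.0] * c                          # times at which each server next frees up
    heapq.heapify(free_at)
    waits, services = [], []
    for a, d in zip(arrivals, departures):
        start = max(a, heapq.heappop(free_at))   # service begins when a server is free
        waits.append(start - a)
        services.append(d - start)
        heapq.heappush(free_at, d)               # this server stays busy until departure
    return waits, services

w, s = infer_internals([0.0, 0.5, 0.7, 3.0], [2.0, 2.5, 3.5, 4.0], c=2)
print(w, s)
```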

A Case-Based Reasoning Approach to Ontology Inference Engine Selection for Robust Context-Aware Services (상황인식 서비스의 안정적 운영을 위한 온톨로지 추론 엔진 선택을 위한 사례기반추론 접근법)

  • Shim, Jae-Moon;Kwon, Oh-Byung
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.33 no.2
    • /
    • pp.27-44
    • /
    • 2008
  • OWL-based ontology is useful for realizing context-aware services composed of distributed, self-configuring modules. Many ontology-based inference engines have been developed to infer useful information from ontology. Since these engines differ in speed and information richness, it is difficult to ensure stable operation when providing dynamic context-aware services, especially when they must deal with complex, large ontologies. To provide the best inference service, the purpose of this paper is to propose a novel methodology for selecting a context-aware inference engine in a contextually prompt manner. Case-based reasoning is applied to identify the causality between the context and the inference engine to be selected. Finally, a series of experiments is performed with a novel evaluation methodology to show to what extent the proposed methodology works better than competitive methods on an actual context-aware service.
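
The case-based selection step might look like the sketch below: past cases pair a context (here, hypothetical attributes such as ontology size, rule depth, and response deadline) with the engine that performed best, and a new context retrieves the nearest case. The attributes, distance measure, and engine names are illustrative assumptions, not the paper's case base.

```python
# Hypothetical case-based reasoning sketch for inference engine selection:
# retrieve the most similar past case and reuse its best-performing engine.
import math

cases = [
    ({"triples": 5_000,   "rule_depth": 2, "deadline_ms": 200},  "Jena"),
    ({"triples": 200_000, "rule_depth": 5, "deadline_ms": 2000}, "Pellet"),
    ({"triples": 50_000,  "rule_depth": 3, "deadline_ms": 500},  "Jess"),
]

def distance(a, b):
    # Scaled Euclidean distance over the shared numeric context attributes.
    return math.sqrt(sum(((a[k] - b[k]) / max(a[k], b[k], 1)) ** 2 for k in a))

def select_engine(context):
    return min(cases, key=lambda case: distance(context, case[0]))[1]

print(select_engine({"triples": 80_000, "rule_depth": 4, "deadline_ms": 800}))
```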

Ubiquitous Architectural Framework for UbiSAS using Context Adaptive Rule Inference Engine

  • Yoo, Yoon-Sik;Huh, Jae-Doo
    • Proceedings of the Korea Society of Information Technology Applications Conference
    • /
    • 2005.11a
    • /
    • pp.243-246
    • /
    • 2005
  • Recent ubiquitous computing environments increasingly affect our lives through current sensor network and ubiquitous service technologies. In this paper, we propose a ubiquitous architectural framework for a ubiquitous sleep aid service (UbiSAS), a subset of ubiquitous computing aimed at refreshing human sleep, and examine its technical feasibility. People recover their health from fatigue through refreshing sleep, and the proposed framework for UbiSAS in the digital home offers an agreeable sleeping environment and improves recovery from fatigue, presenting a new concept of a ubiquitous architectural framework for dissolving stress. In particular, context is supplied to the context-aware framework module and transferred to a context-adaptive inference engine with a service invocation function in the intelligent agent module. Operating the framework with the context-adaptive rule inference engine without user intervention is the key technical issue: the user should be able to sleep comfortably while information sensed during sleep is converted into context-aware information. This information, covering all sleep states, is significant input to the context-adaptive rule inference engine of UbiSAS. We therefore propose a more effective and suitable ubiquitous architectural framework using a context-adaptive rule inference engine for refreshing sleep.
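
A context-adaptive rule step of this kind could be sketched as follows: sensed values become context facts, and rules whose conditions match invoke services without user intervention. The rule conditions and service names below are invented for illustration and are not taken from the UbiSAS design.

```python
# Illustrative context-adaptive rule step: match sensed sleep context against
# rules and return the services to invoke (all rules/services are hypothetical).
RULES = [
    (lambda ctx: ctx["temperature_c"] > 26.0, "lower_room_temperature"),
    (lambda ctx: ctx["noise_db"] > 40.0, "activate_white_noise"),
    (lambda ctx: ctx["sleep_stage"] == "light" and ctx["light_lux"] > 5.0, "dim_lights"),
]

def infer_services(context):
    """Return the services whose rule conditions match the current context."""
    return [service for condition, service in RULES if condition(context)]

sensed = {"temperature_c": 27.5, "noise_db": 35.0, "sleep_stage": "light", "light_lux": 12.0}
print(infer_services(sensed))   # e.g. ['lower_room_temperature', 'dim_lights']
```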

Web Service Platform for Optimal Quantization of CNN Models (CNN 모델의 최적 양자화를 위한 웹 서비스 플랫폼)

  • Roh, Jaewon;Lim, Chaemin;Cho, Sang-Young
    • Journal of the Semiconductor & Display Technology
    • /
    • v.20 no.4
    • /
    • pp.151-156
    • /
    • 2021
  • Low-end IoT devices do not have enough computation and memory resources for DNN learning and inference. Integer quantization of floating-point neural network models can reduce model size, hardware computational burden, and power consumption. This paper describes the design and implementation of a web-based quantization platform for CNN deep learning accelerator chips. In the web service platform, we implemented model visualization through a convenient UI, analysis of each inference step, and detailed editing of the model. Additionally, a data augmentation function and a management function for files that store models and intermediate inference results are provided. The implemented functions were verified using three YOLO models.
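
For readers unfamiliar with the quantization step itself, the sketch below shows generic post-training full-integer quantization with TensorFlow Lite and a random calibration set; the paper's platform targets a CNN accelerator chip with its own toolchain, so this is only an assumed, generic example, not that platform.

```python
# Generic post-training full-integer quantization sketch (TensorFlow Lite).
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def representative_data():
    # Calibration samples drive the choice of integer scales and zero-points.
    for _ in range(100):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
print(f"quantized model size: {len(tflite_model)} bytes")
```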

Building Thesaurus for Science & Technology Domain Using Facets and Its Application to Inference Services (패싯(Facet)을 이용한 과학기술분야 시소러스 구축과 활용방안)

  • Hwang, Soon-Hee;Jung, Han-Min;Sung, Won-Kyung
    • Journal of Information Management
    • /
    • v.37 no.3
    • /
    • pp.61-84
    • /
    • 2006
  • In this paper, we propose a method for building a thesaurus in the science and technology domain and investigate its applicability to ontology-based inference services. There are as many methods for building a thesaurus as there are roles and functions for it, and many thesauri capable of ensuring accuracy and efficiency in information search have been built by experts. After examining previous studies on the principles of thesaurus construction and the related concept of the "facet", we focused on its characteristics and applied it to thesaurus building. Facets are classified into two categories, conceptual facets and relational facets; the latter contains three subcategories: category relational, attribute relational, and thematic relational facets. The thesaurus for the science and technology domain built with facets can be applied to web-based inference services. As a result, three types of inference service, COP (Communities of Practice), Researcher Tracing, and Research Map, are provided by means of the ontology and can be applied to query expansion.
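
One possible way to represent facet-typed relations and use them for query expansion is sketched below; the terms, facet labels, and expansion policy are hypothetical and only mirror the facet categories named in the abstract.

```python
# Hypothetical faceted-thesaurus sketch: terms carry facet-typed relations,
# and a query is expanded by following selected relation types.
from collections import defaultdict

thesaurus = defaultdict(list)  # term -> list of (facet, related term)
thesaurus["semiconductor"] += [
    ("conceptual", "electronic material"),
    ("category_relational", "integrated circuit"),
    ("attribute_relational", "band gap"),
    ("thematic_relational", "chip fabrication"),
]

def expand_query(term, facets=("category_relational", "thematic_relational")):
    """Expand a search term with related terms reachable through the given facets."""
    return [term] + [related for facet, related in thesaurus[term] if facet in facets]

print(expand_query("semiconductor"))
```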

Web Service based Recommendation System using Inference Engine (추론엔진을 활용한 웹서비스 기반 추천 시스템)

  • Kim SungTae;Park SooMin;Yang JungJin
    • Journal of Intelligence and Information Systems
    • /
    • v.10 no.3
    • /
    • pp.59-72
    • /
    • 2004
  • The range of Internet usage has broadened and diversified drastically, from information retrieval and collection to many other functions. In contrast to this increase in Internet use, the efficiency of finding necessary information has decreased, so information systems that provide customized information are needed. Our research proposes a Web Service based recommendation system that employs an inference engine to find and recommend the most appropriate products for users. Current web applications provide useful information for users but still face the problem of bridging different platforms and distributed computing environments. A standardized, systematic approach is necessary for easier communication and coherent system development across heterogeneous environments. Web Services are programming-language independent and improve interoperability by describing, deploying, and executing modularized applications over the network. This paper focuses on developing a Web Service based recommendation system that can serve as a benchmark for Web Service realization. This is done by integrating an inference engine in which the dynamics of information and user preferences are taken into account.
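
Stripped of the Web Service plumbing, the inference-based recommendation step might look like the sketch below: preference-driven rules filter a product list, and the resulting function would be exposed as a web service operation. The products, attributes, and rules here are invented for illustration, not the paper's system.

```python
# Sketch of rule-driven product recommendation (products and rules are hypothetical).
PRODUCTS = [
    {"name": "ultralight laptop", "weight_kg": 1.1, "price": 1400, "category": "laptop"},
    {"name": "gaming laptop", "weight_kg": 2.6, "price": 1800, "category": "laptop"},
    {"name": "budget tablet", "weight_kg": 0.5, "price": 300, "category": "tablet"},
]

def recommend(preferences):
    """Keep the products that satisfy every rule derived from the user's preferences."""
    rules = []
    if preferences.get("portable"):
        rules.append(lambda p: p["weight_kg"] < 1.5)
    if "budget" in preferences:
        rules.append(lambda p: p["price"] <= preferences["budget"])
    if "category" in preferences:
        rules.append(lambda p: p["category"] == preferences["category"])
    return [p["name"] for p in PRODUCTS if all(rule(p) for rule in rules)]

print(recommend({"portable": True, "budget": 1500, "category": "laptop"}))
```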

Mobile Cloud Context-Awareness System based on Jess Inference and Semantic Web RL for Inference Cost Decline (추론 비용 감소를 위한 Jess 추론과 시멘틱 웹 RL기반의 모바일 클라우드 상황인식 시스템)

  • Jung, Se-Hoon;Sim, Chun-Bo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.1
    • /
    • pp.19-30
    • /
    • 2012
  • A context-aware service provides useful information to users by recognizing their surroundings through computing and communication and by making decisions autonomously. However, under the current mobile environment, a CAS (Context Awareness System) has only small-scale context awareness processing capacity because of restricted mobile functions, limited memory space, and increasing inference cost. In this paper, we propose a mobile cloud context system using Google App Engine, based on PaaS (Platform as a Service), so that context services can be used on various mobile devices without dependence on a specific platform. The inference design of the proposed system uses a knowledge-based framework with semantic inference, expressed by SWRL rules and OWL ontology, together with the Jess rule-based inference engine. In addition, to overcome the drawback of the SPARQL query-based reasoning used in previous semantic search approaches, the system shortens context service reasoning time by mapping SWRL reasoning onto the Jess reasoning engine, connecting values such as Class, Property, and Individual from SWRL to Jess via the JessTab plug-in.
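
Conceptually, the SWRL-to-rule-engine mapping amounts to exporting ontology individuals and property assertions as flat facts and evaluating SWRL-style rules over them. The Python sketch below imitates that idea only; it does not use Jess, JessTab, or their APIs, and the facts and rule are invented.

```python
# Conceptual sketch: ontology assertions flattened into facts, with a SWRL-like
# rule re-expressed as a function over those facts (not the JessTab/Jess API).
facts = [
    ("type", "room1", "Room"),
    ("hasTemperature", "room1", 29),
    ("locatedIn", "user1", "room1"),
]

def rule_hot_room_alert(facts):
    """SWRL-like rule: Room(?r) ^ hasTemperature(?r, ?t) ^ greaterThan(?t, 28)
    ^ locatedIn(?u, ?r) -> notifyCooling(?u)."""
    actions = []
    rooms = {s for p, s, o in facts if p == "type" and o == "Room"}
    for p, r, t in facts:
        if p == "hasTemperature" and r in rooms and t > 28:
            for p2, u, r2 in facts:
                if p2 == "locatedIn" and r2 == r:
                    actions.append(("notifyCooling", u))
    return actions

print(rule_hot_room_alert(facts))
```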

Ontology Representation of Pulse-Diagnosis Data and an Inference System for the Diagnosis Service (맥진 데이터의 온톨로지 표현과 진단 서비스 추론 시스템)

  • Yang, Dong-Il;Park, Sun-Hee;Lim, Hwa-Jung;Yang, Hae-Sool;Choi, Hyung-Jin
    • The KIPS Transactions:PartB
    • /
    • v.15B no.3
    • /
    • pp.237-244
    • /
    • 2008
  • In this paper, an infrastructure using an ontology based on pulse information is proposed for the context-aware service of a medical information system in a ubiquitous computing environment. A diagnosis service inference system is designed and implemented that represents the pulse data generated by pulse diagnosis, together with wearable signals, temperature, humidity, time, and other factors, as an ontology using artificial intelligence methods, and that describes the service scenario based on that ontology.
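
As a loose illustration of the idea, the sketch below represents a pulse observation together with its environmental context as a structured record and applies a toy diagnosis rule; the attributes, thresholds, and service names are assumptions, not the paper's ontology or inference system.

```python
# Hypothetical sketch: an ontology-like record of a pulse observation plus context,
# with a toy rule that infers which diagnosis service to trigger.
from dataclasses import dataclass

@dataclass
class PulseObservation:
    rate_bpm: float          # pulse rate measured by a wearable sensor
    temperature_c: float     # ambient temperature context
    humidity_pct: float      # ambient humidity context
    hour: int                # time of observation (0-23)

def diagnose(obs: PulseObservation) -> str:
    """Toy rule: a high resting pulse observed at night suggests a tele-consultation."""
    if obs.rate_bpm > 100 and 0 <= obs.hour < 6:
        return "recommend_teleconsultation"
    if obs.rate_bpm < 50:
        return "alert_caregiver"
    return "no_action"

print(diagnose(PulseObservation(rate_bpm=105, temperature_c=24.0, humidity_pct=45.0, hour=2)))
```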