• Title/Summary/Keyword: 대용량 추론 (large-scale inference)


A Real-time High-speed Fuzzy Control System Using Integer Fuzzy Control Method (정수형 퍼지제어기법을 적용한 실시간 고속 퍼지제어시스템)

  • 손기성;김종혁;성은무;이상구
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.05a / pp.299-302 / 2003
  • In fuzzy control systems handling large volumes of fuzzy data, one of the important problems is improving the execution speed of the fuzzy inference and defuzzification stages. In this paper, to speed up fuzzy controllers, we use an integer line mapping algorithm that converts the [0, 1] real values of the fuzzy membership functions into integer pixels. Using this, we propose a real-time high-speed fuzzy control system and implement a fast fuzzy processor and control system on FPGAs.

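The integer line mapping idea above lends itself to a short illustration. Below is a minimal Python sketch, not the paper's FPGA design: triangular membership grades on [0, 1] are quantized to integer levels (255 here, an assumed resolution), and Mamdani inference plus centroid defuzzification then run on integers only, with a single division at the very end. The membership functions and rule base are illustrative.

```python
# A minimal sketch (not the authors' FPGA implementation) of quantizing
# [0, 1] membership grades to integer "pixel" levels so that fuzzy
# inference and centroid defuzzification use only integer arithmetic.

LEVELS = 255           # hypothetical resolution: membership 1.0 -> 255
UNIVERSE = range(101)  # discretized output universe 0..100

def tri(x, a, b, c):
    """Triangular membership (a <= b <= c), returned as an integer pixel."""
    if x <= a or x >= c:
        return 0
    mu = (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return int(round(mu * LEVELS))

def infer(error):
    """Two-rule Mamdani inference done entirely on integer grades."""
    # Rule strengths (integer antecedent grades).
    w_low  = tri(error, -10, 0, 10)    # "error is small"
    w_high = tri(error,   5, 15, 25)   # "error is large"
    # Clip each output set at the rule strength, aggregate with max.
    aggregated = [max(min(w_low,  tri(y, 0, 25, 50)),
                      min(w_high, tri(y, 40, 70, 100))) for y in UNIVERSE]
    # Integer centroid defuzzification: one divide at the very end.
    num = sum(y * mu for y, mu in zip(UNIVERSE, aggregated))
    den = sum(aggregated)
    return num // den if den else 0

print(infer(8))   # crisp control output computed with integer grades only
```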

Efficient Mining of User Behavior Patterns by Temporal Access (시간을 고려한 모바일 사용자의 유용한 행동패턴 추출)

  • Lee, Seung-Cheol;Kim, Ung-Mo
    • Proceedings of the Korean Information Science Society Conference / 2007.10c / pp.60-65 / 2007
  • Ubiquitous computing provides an environment in which users can receive useful services anytime and anywhere through wireless devices, such as PDAs and mobile phones, that pervade everyday life. By analyzing the communication data of intelligent multi-mobile agents stored in large databases, it has become possible to extract the useful service information requested at a mobile user's location, which has enabled not only efficient user services but also new sources of revenue such as advertising. However, inferring service information from location alone extracts only statistically frequent behavior patterns, so it cannot respond actively to users' time-dependent service requests and can even deliver unwanted service information. This paper proposes an efficient mining technique for extracting useful, time-aware behavior patterns of mobile users: behavior patterns per time slot, together with a new compact data structure that is easy to load into memory. This enables real-time services that follow the user's dynamic movements and, furthermore, offers easy in-memory loading of data, an important issue in ubiquitous computing environments, as well as faster access and lower memory usage.

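As a rough illustration of time-aware pattern mining of this kind (not the compact structure proposed in the paper), the sketch below counts (location, service) requests separately per time slot and keeps only those that reach a minimum support; the log format, time slots, and threshold are assumptions.

```python
# A minimal sketch of time-aware frequent-pattern counting over mobile
# request logs; the record layout, slot boundaries, and min_support are
# illustrative assumptions.
from collections import defaultdict, Counter

# (hour, location, requested service) records, e.g. from an access log.
log = [
    (8,  "station", "coffee_coupon"),
    (8,  "station", "coffee_coupon"),
    (8,  "station", "news"),
    (13, "office",  "lunch_menu"),
    (13, "office",  "lunch_menu"),
    (20, "home",    "vod"),
]

def time_slot(hour):
    """Bucket hours into coarse slots so patterns become time-dependent."""
    return "morning" if hour < 12 else "afternoon" if hour < 18 else "evening"

def frequent_patterns(records, min_support=2):
    """Count (location, service) pairs separately per time slot and keep
    only those whose frequency reaches min_support."""
    per_slot = defaultdict(Counter)
    for hour, loc, svc in records:
        per_slot[time_slot(hour)][(loc, svc)] += 1
    return {slot: {p: c for p, c in counts.items() if c >= min_support}
            for slot, counts in per_slot.items()}

print(frequent_patterns(log))
# {'morning': {('station', 'coffee_coupon'): 2},
#  'afternoon': {('office', 'lunch_menu'): 2}, 'evening': {}}
```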

Data Mining Approach for Real-Time Processing of Large Data Using Case-Based Reasoning : High-Risk Group Detection Data Warehouse for Patients with High Blood Pressure (사례기반추론을 이용한 대용량 데이터의 실시간 처리 방법론 : 고혈압 고위험군 관리를 위한 자기학습 시스템 프레임워크)

  • Park, Sung-Hyuk;Yang, Kun-Woo
    • Journal of Information Technology Services / v.10 no.1 / pp.135-149 / 2011
  • In this paper, we propose a high-risk group detection model for patients with high blood pressure using case-based reasoning. The proposed model can be applied by public health maintenance organizations to effectively manage knowledge related to high blood pressure and to allocate limited health care resources efficiently. In particular, the focus is on developing a model that can handle constraints such as managing large volumes of data, enabling automatic learning to adapt to external environmental changes, and operating on a real-time basis. Using real data collected from local public health centers, the optimal high-risk group detection model was derived with optimal parameter sets. Performance tests on the test data show that the prediction accuracy of the proposed model is twice as good as the natural risk of high blood pressure.
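
A minimal sketch of the case-based reasoning cycle the abstract relies on, with illustrative features, similarity measure, and k: retrieve the most similar past patient cases, reuse the majority outcome as the risk prediction, and retain confirmed cases so the system keeps learning.

```python
# A minimal sketch of a CBR loop: retrieve similar past patient cases and
# reuse their outcomes to flag a new patient as high risk. The features,
# distance weighting, and k are illustrative assumptions.
import math

case_base = [  # (systolic_bp, age, bmi) -> became high-risk within a year?
    ((150, 62, 27.0), True),
    ((128, 45, 23.5), False),
    ((165, 70, 29.4), True),
    ((118, 38, 21.0), False),
]

def distance(a, b):
    """Plain Euclidean distance; a real system would normalize and weight features."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_high_risk(new_case, k=3):
    """Retrieve the k nearest cases and reuse the majority outcome."""
    nearest = sorted(case_base, key=lambda c: distance(c[0], new_case))[:k]
    return sum(outcome for _, outcome in nearest) > k // 2

def retain(case, confirmed_outcome):
    """Retain step: confirmed outcomes grow the case base automatically."""
    case_base.append((case, confirmed_outcome))

print(predict_high_risk((158, 66, 28.1)))  # True with this toy case base
```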

Role Administration Security Model based on MAC and Role Graph (강제적 접근방식과 역할 그래프를 기반으로 한 역할관리 보안모델)

  • Park, Ki-Hong;Kim, Ung-Mo
    • Proceedings of the Korea Information Processing Society Conference / 2001.10a / pp.73-76 / 2001
  • We propose a role-administration security model for large multilevel databases in which, when users with different security clearances access the database, an extended mandatory access control (MAC) scheme combined with a role graph prevents data leakage in which lower-level users infer or learn higher-level data. The model thereby preserves data integrity and the security of the database management system (DBMS) as a whole, while efficiently managing and controlling the data and users of each security level.

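A minimal sketch of the combined check described above, with illustrative levels, roles, and permissions: mandatory access control forbids reading objects above the user's clearance (no read-up), and a role graph lets senior roles inherit the permissions of the roles they dominate.

```python
# A minimal sketch of MAC (security levels) combined with a role graph.
# Levels, roles, permissions, and sample data are illustrative assumptions.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

# Role graph as parent -> children; a senior role inherits junior permissions.
ROLE_GRAPH = {"admin": {"analyst"}, "analyst": {"clerk"}, "clerk": set()}
PERMS = {"clerk": {"read_public"}, "analyst": {"read_stats"}, "admin": {"read_raw"}}

def reachable_roles(role):
    """All roles dominated by `role` in the role graph, including itself."""
    seen, stack = set(), [role]
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            stack.extend(ROLE_GRAPH.get(r, ()))
    return seen

def can_read(user, obj):
    """MAC: clearance must dominate the object's level (no read-up).
    RBAC: some role the user dominates must hold the needed permission."""
    mac_ok = LEVELS[user["clearance"]] >= LEVELS[obj["level"]]
    rbac_ok = any(obj["perm"] in PERMS.get(r, set())
                  for r in reachable_roles(user["role"]))
    return mac_ok and rbac_ok

user = {"clearance": "confidential", "role": "analyst"}
print(can_read(user, {"level": "secret", "perm": "read_stats"}))        # False: read-up blocked
print(can_read(user, {"level": "confidential", "perm": "read_stats"}))  # True
```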

An Integrated Method of Iterative and Incremental Requirement Analysis for Large-Scale Systems (시스템 요구사항 분석을 위한 순환적-점진적 복합 분석방법)

  • Park, Jisung;Lee, Jaeho
    • KIPS Transactions on Software and Data Engineering / v.6 no.4 / pp.193-202 / 2017
  • Development of intelligent systems involves the effective integration of large-scale knowledge processing and understanding, human-machine interaction, and intelligent services. In particular, in our project to develop a self-growing knowledge-based system with inference methodologies utilizing big data technology, we are building a platform called WiseKB as the central knowledge base for storing massive amounts of knowledge and enabling question answering by inference. WiseKB thus requires an effective methodology to analyze diverse requirements entangled with the integration of various components: knowledge representation, resource management, knowledge storage, complex hybrid inference, and knowledge learning. In this paper, we propose an integrated requirement analysis method that blends the traditional sequential method with the iterative-incremental method to achieve efficient requirement analysis for large-scale systems.

Ontology and Sequential Rule Based Streaming Media Event Recognition (온톨로지 및 순서 규칙 기반 대용량 스트리밍 미디어 이벤트 인지)

  • Soh, Chi-Seung;Park, Hyun-Kyu;Park, Young-Tack
    • Journal of KIISE / v.43 no.4 / pp.470-479 / 2016
  • As the amount of media data of various types, such as UCC (User Created Contents), increases, research is actively being carried out in many fields to provide meaningful media services. Among these studies, a semantic web-based media classification approach has been proposed; however, it encounters limitations in video classification because its underlying ontology is derived from meta-information such as video tags and titles. In this paper, we define the objects recognized in a video and the activities composed of those objects within a shot, and introduce a reasoning approach based on description logic. We define sequential rules over the sequence of shots in a video and describe how to classify it. To process the large and growing amount of media data, we utilize Spark Streaming, a distributed in-memory big data processing framework, and describe how to classify media data in parallel. To evaluate the efficiency of the proposed approach, we conducted an experiment using a large amount of media ontology extracted from YouTube videos.
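
A minimal sketch of sequential-rule matching over per-shot activity labels, with illustrative rules and labels; the paper additionally performs ontology-based (description logic) reasoning and parallelizes the processing with Spark Streaming, which is omitted here.

```python
# A minimal sketch of sequential-rule event recognition: an event is
# recognized when its rule's activities occur in order across the shots of
# a video. The rules and activity labels are illustrative assumptions.

# Sequential rules: event -> ordered activities that must appear in order.
RULES = {
    "goal_scene":    ["shoot", "ball_in_net", "cheering"],
    "traffic_crash": ["car_approach", "collision", "people_gathering"],
}

def matches_in_order(pattern, shots):
    """True if `pattern` occurs as a (not necessarily contiguous)
    subsequence of the per-shot activity labels."""
    it = iter(shots)
    return all(any(step == s for s in it) for step in pattern)

def recognize_events(shots):
    """Return every event whose sequential rule is satisfied by the video."""
    return [event for event, pattern in RULES.items()
            if matches_in_order(pattern, shots)]

video_shots = ["dribble", "shoot", "ball_in_net", "replay", "cheering"]
print(recognize_events(video_shots))   # ['goal_scene']
```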

Technical Trend Analysis of Fingerprint Classification (지문분류 기술 동향 분석)

  • Jung, Hye-Wuk;Lee, Seung
    • The Journal of the Korea Contents Association / v.17 no.9 / pp.132-144 / 2017
  • Fingerprint classification, which categorizes fingerprints into classes, is needed to improve the processing speed and accuracy of fingerprint recognition systems that use large databases. Fingerprint classification methods extract features from the fingerprint ridges and, using learning and reasoning techniques, assign the fingerprint to classes defined according to the flow and shape of the ridges. Early research was largely conducted on the NIST databases, acquired by pressing or rolling a finger against paper. However, as automated fingerprint recognition systems using live-scan scanners have become popular, research using live-scanned fingerprint images, such as the fingerprint data provided by FVC, is increasing, and methods of fingerprint classification using deep learning have recently been proposed. In this paper, we survey the trends in fingerprint classification technology and compare the classification performance of the surveyed methods. By pointing out the need for research that considers live-scanned fingerprint images and by analyzing deep-learning-based classification, we aim to help improve classification performance as fingerprint databases continue to grow.
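
As a rough illustration of the deep learning direction mentioned in the survey, the sketch below (assuming PyTorch) defines a small CNN that maps a grayscale fingerprint image to one of the five classic Henry classes; the architecture, input size, and class set are illustrative, not any surveyed model.

```python
# A minimal sketch of CNN-based fingerprint class prediction (assumes PyTorch).
# Architecture and sizes are illustrative; a real classifier would be trained
# on labeled fingerprint images before its predictions mean anything.
import torch
import torch.nn as nn

CLASSES = ["arch", "tented_arch", "left_loop", "right_loop", "whorl"]

class FingerprintCNN(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(          # 1x128x128 grayscale input
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 32 * 32, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = FingerprintCNN()
dummy = torch.randn(1, 1, 128, 128)             # stand-in for a scanned image
pred = model(dummy).argmax(dim=1).item()
print(CLASSES[pred])                            # untrained, so effectively random
```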

Implementation of efficient L-diversity de-identification for large data (대용량 데이터에 대한 효율적인 L-diversity 비식별화 구현)

  • Jeon, Min-Hyuk;Temuujin, Odsuren;Ahn, Jinhyun;Im, Dong-Hyuk
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.465-467 / 2019
  • Recently, many organizations and companies require diverse and massive data, and they increasingly collect it directly or purchase it, for example from national public data or data brokers. Personal information, however, cannot be transferred to others without the individual's consent, which makes research on such data difficult. Therefore, de-identification techniques that prevent any specific individual from being inferred are being studied. The degree of de-identification can be expressed as a model; the k-anonymity and l-diversity models are currently the most widely used. Of these, l-diversity subsumes the conditions of k-anonymity and thus provides stronger de-identification. Algorithms that satisfy the l-diversity model include The Hardness and Approximation and Anatomy; in this paper we study an implementation of Anatomy, which skips the generalization step and therefore preserves more data utility. In addition, because de-identification must consider the characteristics of the entire dataset, the actual processing load grows enormously with the data size, so we implemented a system on Spark that handles growing data as stably as possible.
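
A minimal sketch of an Anatomy-style l-diversity release as described above: records are greedily grouped so that each group contains l distinct sensitive values, and the data is published as a quasi-identifier table and a sensitive table linked only by group id. The data, the value of l, and the omission of residual-record handling are simplifications, and no Spark parallelization is shown.

```python
# A minimal sketch of Anatomy-style l-diversity: bucket records by sensitive
# value, then form groups drawing one record from each of the l largest
# buckets so every group has l distinct sensitive values. Illustrative only;
# leftover records that cannot form a full group are simply dropped here.
from collections import defaultdict

records = [  # (age, zipcode, disease) -- disease is the sensitive attribute
    (34, "04524", "flu"), (29, "04524", "gastritis"), (41, "06035", "flu"),
    (52, "06035", "diabetes"), (47, "13529", "gastritis"), (38, "13529", "diabetes"),
]

def anatomize(rows, l=3):
    buckets = defaultdict(list)            # sensitive value -> its records
    for row in rows:
        buckets[row[-1]].append(row)
    qi_table, sens_table, gid = [], [], 0
    while sum(1 for b in buckets.values() if b) >= l:
        gid += 1
        largest = sorted((k for k in buckets if buckets[k]),
                         key=lambda k: len(buckets[k]), reverse=True)[:l]
        for key in largest:
            age, zipc, disease = buckets[key].pop()
            qi_table.append((gid, age, zipc))      # quasi-identifiers only
            sens_table.append((gid, disease))      # sensitive values only
    return qi_table, sens_table

qit, st = anatomize(records)
print(qit)   # quasi-identifier table: (group id, age, zipcode)
print(st)    # sensitive table: (group id, disease)
```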

SPARQL-SQL Conversion and Improvement in Response Time based on Expanded Class-Property Views (확장 클래스-속성 뷰기반의 SPARQL-SQL 질의 변환 및 속도 개선)

  • Lee, Seungwoo;Kim, Pyung;Kim, Jaehan;Sung, Won-Kyung
    • Proceedings of the Korea Contents Association Conference / 2007.11a / pp.84-88 / 2007
  • As DBMSs are commonly used to store large volumes of triple knowledge, it remains an open issue which DBMS schema should be designed for storing, managing, inferring over, and querying the triple knowledge efficiently. In this paper, from the viewpoint of efficient query processing, we present a method that processes queries using Expanded Class-Property Views (ECPV) and show the resulting improvement in response time. The response time of DBMS-based inference systems is proportional to the table size and the number of table join operations; the more complex a query is, the more join operations and thus the longer response time it requires. An ECPV is a table obtained by performing possible join operations before query time. To use ECPVs during query processing, SPARQL queries must be converted into corresponding ECPV-based SQL queries. This paper describes the conversion process and demonstrates the improvement in response time through experiments.

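A minimal sketch of the idea behind ECPV-based querying, using sqlite3 purely for illustration: the joins needed for a frequently queried class-property pattern are materialized once into a view table, so a SPARQL basic graph pattern that would otherwise require several self-joins on the triple table becomes a single-table SQL query. The schema, view layout, and hand-written conversion are assumptions, not the paper's implementation.

```python
# A minimal sketch of pre-joined class-property views for triple querying;
# table names, columns, and the tiny hand-rolled SPARQL-to-SQL mapping are
# illustrative assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
con.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("doc1", "rdf:type", "Paper"), ("doc1", "hasAuthor", "Lee"),
    ("doc1", "hasYear", "2007"),
    ("doc2", "rdf:type", "Paper"), ("doc2", "hasAuthor", "Kim"),
    ("doc2", "hasYear", "2016"),
])

# Expanded class-property view: one row per instance of class Paper, with the
# frequently queried properties pre-joined into columns at load time.
con.execute("""
    CREATE TABLE ecpv_Paper AS
    SELECT t.s AS subject, a.o AS hasAuthor, y.o AS hasYear
    FROM triples t
    JOIN triples a ON a.s = t.s AND a.p = 'hasAuthor'
    JOIN triples y ON y.s = t.s AND y.p = 'hasYear'
    WHERE t.p = 'rdf:type' AND t.o = 'Paper'
""")

# SPARQL:  SELECT ?author WHERE { ?d rdf:type Paper . ?d hasAuthor ?author .
#                                 ?d hasYear "2007" }
# would need three self-joins on `triples`; against the view it is one scan:
sql = "SELECT hasAuthor FROM ecpv_Paper WHERE hasYear = '2007'"
print(con.execute(sql).fetchall())   # [('Lee',)]
```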

Korean Machine Reading Comprehension for Patent Consultation Using BERT (BERT를 이용한 한국어 특허상담 기계독해)

  • Min, Jae-Ok;Park, Jin-Woo;Jo, Yu-Jeong;Lee, Bong-Gun
    • KIPS Transactions on Software and Data Engineering / v.9 no.4 / pp.145-152 / 2020
  • Machine reading comprehension (MRC) is the NLP task of predicting the answer to a user's query by understanding a relevant document; it can be used in automated consultation services such as chatbots. The BERT (Bidirectional Encoder Representations from Transformers) model, which shows high performance across many natural language processing tasks, works in two phases: pre-training on large corpora of each domain, and fine-tuning the model to solve each downstream NLP task as a prediction problem. In this paper, we build a patent MRC dataset and show how to construct patent consultation training data for the MRC task. We also propose a method to improve MRC performance using a Patent-BERT model pre-trained on a patent consultation corpus, together with a language processing algorithm suited to machine learning on patent counseling data. Experimental results show that the proposed method improves performance in answering patent counseling queries.
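
A minimal sketch, assuming the Hugging Face transformers library, of the extractive MRC step described above: a BERT question-answering head predicts the start and end of the answer span within a consultation passage. The checkpoint name is a generic placeholder, not the paper's Patent-BERT, and the predicted span is only meaningful after fine-tuning on a patent MRC dataset.

```python
# A minimal sketch of extractive MRC with a BERT QA head (assumes the
# `transformers` library). The model name is a placeholder checkpoint, not
# the paper's domain-pretrained Patent-BERT.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

MODEL = "bert-base-multilingual-cased"   # placeholder, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL)

question = "How long does a patent right last?"
context = "In general, a patent right lasts for 20 years from the filing date."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode that span.
start = outputs.start_logits.argmax().item()
end = outputs.end_logits.argmax().item()
answer_ids = inputs["input_ids"][0][start:end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
# (the QA head here is untrained, so the span is random until fine-tuning)
```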