• Title/Summary/Keyword: 집합론 (Set Theory)

Search Results: 279

Support Vector Learning for Abnormality Detection Problems (비정상 상태 탐지 문제를 위한 서포트벡터 학습)

  • Park, Joo-Young;Leem, Chae-Hwan
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.3 / pp.266-274 / 2003
  • This paper considers incremental support vector learning for abnormality detection problems. One of the best-known support vector learning methods for abnormality detection is the so-called SVDD (support vector data description), which uses balls defined on the kernel feature space to distinguish a set of normal data from all other possible abnormal objects. The main concern of this paper is to modify SVDD to exploit the relation between the optimal solution and incrementally given training data. After a thorough review of the original SVDD method, this paper establishes an incremental method for finding the optimal solution, based on certain observations on the Lagrange dual problems. The applicability of the presented incremental method is illustrated via a design example.
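
As a rough illustration of the ball-based idea (not the paper's incremental algorithm), the sketch below uses scikit-learn's OneClassSVM, which with an RBF kernel solves a problem equivalent to SVDD; the data and parameter values are illustrative assumptions.

```python
# Sketch of one-class (SVDD-style) abnormality detection.
# OneClassSVM with an RBF kernel is equivalent to SVDD;
# this is NOT the paper's incremental variant.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # training: normal data only
test = np.vstack([rng.normal(0, 1, (5, 2)),             # likely normal
                  rng.normal(6, 1, (5, 2))])            # likely abnormal

# nu bounds the fraction of training points treated as outliers,
# playing the role of SVDD's trade-off parameter C.
model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(normal)
print(model.predict(test))  # +1 = inside the ball (normal), -1 = abnormal
```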

Ontology-based Cohort DB Search Simulation (온톨로지 기반 대용량 코호트 DB 검색 시뮬레이션)

  • Song, Joo-Hyung;Hwang, Jae-min;Choi, Jeongseok;Kang, Sanggil
    • Journal of the Korea Society for Simulation / v.25 no.1 / pp.29-34 / 2016
  • Many researchers have used cohort DBs (databases) to predict the occurrence of disease or to track its spread. A cohort DB is big data that simply stores disease and health information as separate DB table sets. To measure the relations between items of health information, it is necessary to reconstruct the cohort DB according to the research purpose. In this paper, an XML descriptor and editor are used to construct an ontology-based big-data cohort DB, and we develop an ontology-based cohort DB search system to examine the resulting relations between items of health information. The XML editor applies the seven-step Ontology Development 101 methodology and the OWL API to convert the cohort DB into an ontology-based form. The resulting system can measure the relations between disease and health information and is particularly effective when the searched results are semantic relations.
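
The paper builds its ontology with an XML editor and the OWL API; purely as an illustrative sketch of what ontology-based search over disease/health-information relations looks like, the snippet below uses rdflib and SPARQL with a made-up vocabulary (EX:Disease, EX:relatedTo, and so on are hypothetical).

```python
# Minimal sketch of an ontology-based search over cohort data using
# rdflib and SPARQL. The vocabulary is hypothetical; the paper itself
# uses an XML editor and the OWL API.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/cohort#")
g = Graph()
g.bind("ex", EX)

# A few toy triples linking a disease to health-information records.
g.add((EX.Hypertension, RDF.type, EX.Disease))
g.add((EX.BloodPressure, RDF.type, EX.HealthInfo))
g.add((EX.Hypertension, EX.relatedTo, EX.BloodPressure))
g.add((EX.BloodPressure, RDFS.label, Literal("systolic/diastolic readings")))

# SPARQL query: which health-information items relate to which diseases?
q = """
SELECT ?disease ?info WHERE {
    ?disease a ex:Disease ;
             ex:relatedTo ?info .
    ?info a ex:HealthInfo .
}
"""
for row in g.query(q):
    print(row.disease, "->", row.info)
```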

READY MADE Creative Gymnastic for Designers (READY MADE디자이너를 위한 창조적인 훈련 연구)

  • Bruno, Marco
    • Archives of Design Research / v.19 no.2 s.64 / pp.365-374 / 2006
  • A 'readymade' is an everyday object selected and designated as art. The term was coined by Marcel Duchamp to describe his artistic process, which attempted to destroy the notion of the uniqueness of the art object; his influence went far beyond the art world, affecting all design activities based on creativity. The purpose of this study is to investigate the ready-made technique from an educational point of view. Starting from Duchamp's experience and his subsequent influence on the design world, the study aims to demonstrate the value of the ready-made technique as a basic element in the education of young designers. The research method is based on empirical observation of the results of the same project assigned to forty different students at different universities. The collected results were grouped into four families according to each specific generative method: constructive, conceptual, aggregative, and elaborative. These four categories, derived from observation of the results, represent tangible variations of the same disciplined technique. This flexibility demonstrates the value of the ready-made process as a foundation practice particularly suited to young designers. The main skills students developed through its application to design projects were an exploring and reconsidering attitude, awareness of recycling issues, giving new identity to familiar objects, and a focus on ideas.


Energy-Efficient and Parameterized Designs for Fast Fourier Transform on FPGAs (FPGA에서 FFT(Fast Fourier Transform)를 구현하기 위한 에너지 효율적이고 변수화 된 설계)

  • Jang Ju-Wook;Han Woo-Jin;Choi Seon-Il;Govindu Gokul;Prasanna Viktor K.
    • The KIPS Transactions: Part A / v.13A no.2 s.99 / pp.171-176 / 2006
  • In this paper, we develop energy-efficient designs for the Fast Fourier Transform (FFT) on FPGAs. Architectures for FFT on FPGAs are designed by investigating and applying techniques for minimizing energy dissipation. Architectural parameters such as the degrees of vertical and horizontal parallelism are identified and used as design choices. We determine design trade-offs using high-level performance estimation to obtain energy-efficient designs. We implemented a set of designs, with storage types as parameters, on a Xilinx Virtex-II FPGA to verify the estimates. Our designs dissipate 57% to 78% less energy than the optimized designs from the Xilinx library. In terms of a comprehensive metric such as EAT (Energy-Area-Time), our designs offer performance improvements of 3-13x over the Xilinx designs.
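
The paper's contribution is the FPGA designs themselves; purely as a software reference for the computation being mapped to hardware, here is a standard radix-2 decimation-in-time FFT sketch in Python. Each inner-loop iteration is one butterfly; the paper's vertical/horizontal parallelism parameters govern how many such butterflies a hardware design instantiates concurrently.

```python
# Reference radix-2 decimation-in-time FFT (Cooley-Tukey), the
# computation the paper maps onto FPGA hardware. Pure software sketch;
# input length must be a power of two.
import cmath

def fft(x):
    n = len(x)
    if n == 1:
        return x
    even = fft(x[0::2])  # FFT of even-indexed samples
    odd = fft(x[1::2])   # FFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)  # twiddle factor
        out[k] = even[k] + w * odd[k]          # one butterfly
        out[k + n // 2] = even[k] - w * odd[k]
    return out

print([round(abs(v), 3) for v in fft([1, 1, 1, 1, 0, 0, 0, 0])])
```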

The Integrated Methodology of Rough Set Theory and Artificial Neural Network for Business Failure Prediction (도산 예측을 위한 러프집합이론과 인공신경망 통합방법론)

  • Kim, Chang-Yun;Ahn, Byeong-Seok;Cho, Sung-Sik;Kim, Soung-Hie
    • Asia Pacific Journal of Information Systems / v.9 no.4 / pp.23-40 / 1999
  • This paper proposes a hybrid intelligent system that predicts the failure of firms based on past financial performance data, combining a neural network with a rough set approach. Through the rough set approach we obtain a reduced information table, meaning that the number of evaluation criteria (such as financial ratios and qualitative variables) and of objects (i.e., firms) is reduced with no loss of information. This reduced information is then used to develop classification rules and to train the neural network to infer appropriate parameters. Through the reduction of the information table, the performance of the neural network is expected to improve. The rules developed by rough sets show the best prediction accuracy when a case matches one of the rules. The rationale of our hybrid system is therefore to use the rules developed by rough sets for an object that matches one of the rules, and the neural network for one that does not, as sketched below. The effectiveness of our methodology was verified by experiments comparing traditional discriminant analysis and a neural network approach with our hybrid approach. For the experiment, financial data of 2,400 Korean firms during the period 1994-1996 were selected, and k-fold validation was used for validation.
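
A minimal sketch of this rule-first, network-fallback dispatch; the rules and financial ratios below are hypothetical stand-ins (in the paper, the rules come from the rough-set reduct of the information table).

```python
# Sketch of the hybrid dispatch logic: classify with a rough-set rule
# when one matches the case, otherwise fall back to a neural network.
# Rules, ratios, and training data here are illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical rules induced from a rough-set reduct: each maps a
# condition on the (reduced) attributes to a class label.
rules = [
    (lambda x: x["debt_ratio"] > 0.9 and x["roa"] < 0.0, "fail"),
    (lambda x: x["debt_ratio"] < 0.3 and x["roa"] > 0.05, "survive"),
]

def hybrid_predict(case, net, features):
    for condition, label in rules:
        if condition(case):  # a rule matches: trust the rule
            return label
    vec = np.array([[case[f] for f in features]])
    return "fail" if net.predict(vec)[0] == 1 else "survive"

features = ["debt_ratio", "roa"]
X = np.random.rand(100, 2)
y = (X[:, 0] > X[:, 1]).astype(int)  # toy training labels
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500).fit(X, y)
print(hybrid_predict({"debt_ratio": 0.95, "roa": -0.02}, net, features))
```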


A survey on unsupervised subspace outlier detection methods for high dimensional data (고차원 자료의 비지도 부분공간 이상치 탐지기법에 대한 요약 연구)

  • Ahn, Jaehyeong;Kwon, Sunghoon
    • The Korean Journal of Applied Statistics / v.34 no.3 / pp.507-521 / 2021
  • Detecting outliers in high-dimensional data poses the challenging problem of screening variables, since the relevant information is often contained in only a few of them. When many irrelevant variables are included in the data, the distances between all observations tend to become similar, which makes the degree of outlierness of all observations alike. Subspace outlier detection methods overcome this problem by measuring the degree of outlierness of an observation based on relevant subsets of the entire set of variables. In this paper, we survey recent subspace outlier detection techniques, classifying them into three major types according to the subspace selection method, and we summarize the techniques of each type based on how they select the relevant subspaces and how they measure the degree of outlierness. In addition, we introduce some computing tools for implementing subspace outlier detection techniques and present results from a simulation study and real data analysis.
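
As one concrete instance of the subspace idea (a generic random-subspace scheme, not a specific technique from the survey), the sketch below scores each observation by its average k-nearest-neighbor distance over randomly drawn variable subsets, so irrelevant variables contribute less.

```python
# Random-subspace ("feature bagging" style) outlier scoring sketch:
# average k-NN distance of each observation over random variable subsets.
import numpy as np

def subspace_outlier_scores(X, n_subspaces=20, subspace_dim=3, k=5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.zeros(n)
    for _ in range(n_subspaces):
        cols = rng.choice(d, size=min(subspace_dim, d), replace=False)
        sub = X[:, cols]
        # pairwise distances within the chosen subspace
        dist = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)
        np.fill_diagonal(dist, np.inf)
        knn = np.sort(dist, axis=1)[:, :k]  # distances to k nearest neighbors
        scores += knn.mean(axis=1)
    return scores / n_subspaces             # larger = more outlying

X = np.random.default_rng(1).normal(size=(100, 20))
X[0] += 6  # plant one outlier
print(np.argmax(subspace_outlier_scores(X)))  # expected: 0
```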

An Efficient Signature Batch Verification System for VANET (VANET를 위한 효율적인 서명 일괄 확인 시스템)

  • Lim, Ji-Hwan;Oh, Hee-Kuck;Kim, Sang-Jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.20 no.1 / pp.17-31 / 2010
  • In a VANET (Vehicular Ad hoc NETwork), vehicles can verify a large number of signatures efficiently using batch verification techniques. However, batch verification performed independently in each vehicle incurs much redundant verification cost. Although an RSU (Road Side Unit) can perform the batch verification as a proxy to reduce this cost, an efficient method is additionally required to identify the invalid signatures when batch verification fails. In this paper, we analyze several ways of constructing a distributed batch verification system and propose an efficient one in which participating vehicles perform batch verification in a distributed manner on small signature sets. In our proposed system, each node reports the batch verification result or the list of identified invalid signatures, and the RSU that receives these reports can identify the invalid signatures and efficiently exclude them.
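
One standard way to identify invalid signatures after a failed batch check is divide-and-conquer re-verification over halves of the batch; the sketch below illustrates the idea with a hypothetical `batch_verify` standing in for the actual cryptographic (e.g., pairing-based) check, which is not the paper's specific protocol.

```python
# Divide-and-conquer identification of invalid signatures after a
# failed batch verification. `batch_verify` is a simulated stand-in
# for a real cryptographic batch check.
def batch_verify(sigs):
    return all(s["valid"] for s in sigs)  # simulated check

def find_invalid(sigs):
    """Return the invalid signatures using O(t log n) batch checks."""
    if batch_verify(sigs):
        return []          # whole (sub-)batch is valid
    if len(sigs) == 1:
        return sigs        # isolated an invalid signature
    mid = len(sigs) // 2
    return find_invalid(sigs[:mid]) + find_invalid(sigs[mid:])

batch = [{"id": i, "valid": i not in (3, 7)} for i in range(10)]
print([s["id"] for s in find_invalid(batch)])  # -> [3, 7]
```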

Sparse and low-rank feature selection for multi-label learning

  • Lim, Hyunki
    • Journal of the Korea Society of Computer and Information / v.26 no.7 / pp.1-7 / 2021
  • In this paper, we propose a feature selection technique for multi-label classification. Many existing feature selection techniques select features by calculating a relation between features and labels, such as mutual information. However, since the mutual information measure requires a joint probability, which is difficult to estimate from the actual feature set, only a few features can be evaluated at a time and only local optimization is possible. Moving away from this local optimization problem, we propose a feature selection technique that constructs a low-rank space within the given feature space and selects features with sparsity. To this end, we design a regression-based objective function using the nuclear norm and propose a gradient descent algorithm to solve its optimization problem, as sketched below. In multi-label classification experiments on four datasets with three performance measures, the proposed methodology showed better performance than existing feature selection techniques. In addition, experiments showed that performance is insensitive to changes in the parameter values of the proposed objective function.
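
A minimal sketch of the optimization idea, assuming a squared-loss regression objective with a nuclear-norm penalty solved by proximal gradient descent (singular-value thresholding); the shapes, step size, and ranking rule are illustrative assumptions, not the paper's exact formulation.

```python
# Proximal gradient sketch for min_W 0.5*||XW - Y||_F^2 + lam*||W||_*.
# The prox of the nuclear norm is singular-value thresholding (SVT).
import numpy as np

def svt(W, tau):
    """Prox of tau*||W||_*: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_regression(X, Y, lam=0.1, lr=1e-3, iters=500):
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(iters):
        grad = X.T @ (X @ W - Y)          # gradient of the smooth loss
        W = svt(W - lr * grad, lr * lam)  # proximal (SVT) step
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 30))                  # 50 samples, 30 features
Y = (rng.random((50, 4)) > 0.5).astype(float)  # 4 binary labels
W = low_rank_regression(X, Y)
# rank features by the row norms of W for multi-label selection
print(np.argsort(-np.linalg.norm(W, axis=1))[:5])
```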

Knowledge Representation and Reasoning using Metalogic in a Cooperative Multiagent Environment

  • Kim, Koono
    • Journal of the Korea Society of Computer and Information / v.27 no.7 / pp.35-48 / 2022
  • In this study, we propose a proof-theoretic method for representing and reasoning about knowledge in a multiagent environment. Because this method determines logical consequences in a mechanical way, it has developed as a core field since early AI research. However, since a proposition cannot always be proved from an arbitrary set of closed sentences, the range of expression is limited to sentences in clause form so that logical consequence remains determinable. In addition, the resolution principle, a simple and powerful inference rule applicable only to clause-form sentences, is applied. Since proof theory can be expressed with meta-predicates, it can also be extended to a metalogic of proof theory. Metalogic can be superior in terms of practicality and efficiency, based on improved expressive power over the epistemic logic of model theory. To demonstrate this, the semantic method of epistemic logic and the metalogical method of proof theory are each applied to the Muddy Children problem. As a result, we show that representing and reasoning about knowledge and common knowledge using metalogic in a cooperative multiagent environment is more efficient.
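
As a concrete illustration of the resolution principle on clause-form sentences (a generic propositional version, not the paper's metalogic machinery):

```python
# Propositional resolution sketch: a clause is a frozenset of literals,
# a literal is (name, polarity). One resolution step cancels a
# complementary pair of literals from two clauses.
def resolve(c1, c2):
    """Return all resolvents of two propositional clauses."""
    resolvents = []
    for (name, pol) in c1:
        if (name, not pol) in c2:
            resolvent = (c1 - {(name, pol)}) | (c2 - {(name, not pol)})
            resolvents.append(frozenset(resolvent))
    return resolvents

# Example: from {P, Q} and {~P, R} derive {Q, R}.
c1 = frozenset({("P", True), ("Q", True)})
c2 = frozenset({("P", False), ("R", True)})
print(resolve(c1, c2))  # [frozenset({('Q', True), ('R', True)})]
```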

Document Classification Methodology Using Autoencoder-based Keywords Embedding

  • Seobin Yoon;Namgyu Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.35-46 / 2023
  • In this study, we propose a Dual Approach methodology to enhance the accuracy of document classifiers by utilizing both contextual and keyword information. First, contextual information is extracted using Google's BERT, a pre-trained language model known for its outstanding performance on various natural language understanding tasks. Specifically, we employ KoBERT, a model pre-trained on a Korean corpus, to extract contextual information in the form of the CLS token. Second, keyword information is generated for each document by encoding the set of keywords into a single vector using an autoencoder. We applied the proposed approach to 40,130 documents related to healthcare and medicine from the National R&D Projects database of the National Science and Technology Information Service (NTIS). The experimental results demonstrate that the proposed methodology outperforms existing methods that rely solely on document or word information in terms of document classification accuracy.
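
A hedged sketch of the keyword-embedding step, assuming a simple fully connected autoencoder over multi-hot keyword vectors in PyTorch; vocabulary size, embedding dimension, and training details are illustrative, not the paper's configuration.

```python
# Sketch: compress a document's keyword set (multi-hot vector) into a
# single dense vector with an autoencoder. The paper pairs such an
# embedding with KoBERT's CLS vector for classification.
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 64  # assumed keyword-vocabulary and embedding sizes

class KeywordAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(VOCAB, DIM), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(DIM, VOCAB), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)  # dense keyword embedding
        return self.decoder(z), z

model = KeywordAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

x = (torch.rand(32, VOCAB) < 0.01).float()  # batch of multi-hot keyword sets
for _ in range(100):                        # reconstruction training loop
    recon, z = model(x)
    loss = loss_fn(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()

_, embedding = model(x)
print(embedding.shape)  # torch.Size([32, 64]) -- one vector per document
```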