• Title/Summary/Keyword: 시간 가중치 (time weight)

The Software Reliability Evaluation of a Nuclear Controller Software Using a Fault Detection Coverage Based on the Fault Weight (가중치 기반 고장감지 커버리지 방법을 이용한 원전 제어기기 소프트웨어 신뢰도 평가)

  • Lee, Young-Jun;Lee, Jang-Soo;Kim, Young-Kuk
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.5 no.9
    • /
    • pp.275-284
    • /
    • 2016
  • The quality of software used in the nuclear safety field is ensured through development, validation, safety analysis, and quality assurance activities across the entire life cycle, from the planning phase to the installation phase. However, evaluation through the development and validation process requires considerable time and money, and there are limits to how far it can guarantee that quality has improved sufficiently. Efforts therefore continue to calculate software reliability quantitatively rather than relying on qualitative evaluation. In this paper, we propose a reliability evaluation method for software used for a specific operation of a digital controller in a nuclear power plant. After injecting weighted faults into the internal space of a developed controller and calculating the ability of the diagnostic software to detect the injected faults, we can evaluate the software reliability of a digital controller in a nuclear power plant.
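
To make the weighting idea concrete, here is a minimal sketch of weight-based fault-detection coverage, assuming hypothetical fault records and a simple detected-weight over total-weight ratio; the paper's actual weighting scheme and diagnostic interface are not specified here.

```python
"""Minimal sketch of weight-based fault-detection coverage (assumed scheme)."""

from dataclasses import dataclass

@dataclass
class InjectedFault:
    location: str    # where the fault was injected (hypothetical locations)
    weight: float    # importance weight assigned to the fault
    detected: bool   # whether the diagnostic software caught it

def weighted_coverage(faults: list[InjectedFault]) -> float:
    """Weighted fault-detection coverage: detected weight / total weight."""
    total = sum(f.weight for f in faults)
    caught = sum(f.weight for f in faults if f.detected)
    return caught / total if total else 0.0

faults = [
    InjectedFault("cpu_register", 0.5, True),
    InjectedFault("ram_cell", 0.3, True),
    InjectedFault("watchdog_timer", 0.2, False),
]
print(f"weighted coverage: {weighted_coverage(faults):.2f}")  # 0.80
```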

Multimodal Media Content Classification using Keyword Weighting for Recommendation (추천을 위한 키워드 가중치를 이용한 멀티모달 미디어 콘텐츠 분류)

  • Kang, Ji-Soo;Baek, Ji-Won;Chung, Kyungyong
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.5
    • /
    • pp.1-6
    • /
    • 2019
  • As the mobile market expands, a variety of platforms provide multimodal media content. Because multimodal media content contains heterogeneous data, users need much time and effort to select their preferred content. In this paper, we therefore propose multimodal media content classification using keyword weighting for recommendation. The proposed method extracts the keywords that best represent the content through keyword weighting over the text data of multimodal media content. Based on the extracted data, genre classes with subclasses are generated, and the appropriate multimodal media content is classified. In addition, a user preference evaluation is performed for personalized recommendation, and multimodal content is recommended based on the analysis of the user's content preferences. The performance evaluation verifies the superiority of the recommendation results in terms of accuracy and satisfaction. The recommendation accuracy is 74.62% and the satisfaction rate is 69.1%, because recommendations consider the user's favorite keywords as well as the genre.
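
As an illustration of keyword weighting over text metadata, the sketch below ranks keywords by TF-IDF, a standard weighting scheme; the paper's exact formula, dataset, and genre hierarchy are not reproduced, and the documents are made up.

```python
"""TF-IDF keyword-weighting sketch over illustrative text metadata."""

import math
from collections import Counter

def tfidf_keywords(docs: list[str], top_k: int = 3) -> list[list[tuple[str, float]]]:
    """Return the top_k highest-weighted keywords for each document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(word for tokens in tokenized for word in set(tokens))
    n = len(docs)
    results = []
    for tokens in tokenized:
        tf = Counter(tokens)
        # term frequency * inverse document frequency
        scores = {w: (c / len(tokens)) * math.log(n / df[w]) for w, c in tf.items()}
        results.append(sorted(scores.items(), key=lambda kv: -kv[1])[:top_k])
    return results

docs = [
    "space opera film with epic battles",
    "romantic comedy film set in seoul",
    "documentary about deep space probes",
]
for ranked in tfidf_keywords(docs):
    print(ranked)
```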

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.67-74
    • /
    • 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as an important issue in the data mining field. According to the strategy used to exploit item importance, itemset mining approaches are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, database analysis reveals the importance of a given transaction, since a transaction's weight is higher when it contains many items with high weights. We not only analyze the advantages and disadvantages of the best-known algorithms for frequent itemset mining based on transactional weights but also compare their performance. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept of and strategies for transactional weights. In addition, there are other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. They need no additional database scan after the WIT-tree has been constructed, since each node of the WIT-tree holds item information such as the item and its transaction IDs. In particular, whereas traditional algorithms perform many database scans to mine weighted itemsets, the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets by using the information of the transactions that contain all of the itemsets. WIT-FWIs-MODIFY adds a feature that reduces the operations needed to calculate the frequency of a new itemset, and WIT-FWIs-DIFF uses a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm as the size of the database changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF mines more efficiently than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, has the worst efficiency because it requires, on average, far more computations than the others.
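
The following sketch illustrates the transactional-weight idea, assuming the common convention that a transaction's weight is the mean of its item weights and that weighted support is the normalized sum of the weights of the transactions containing an itemset; the WIS and WIT-tree algorithms themselves are not reproduced.

```python
"""Transaction-weighted support sketch (assumed mean-of-item-weights convention)."""

from itertools import combinations

item_weight = {"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.8}
transactions = [{"a", "b"}, {"a", "c", "d"}, {"b", "d"}, {"a", "b", "d"}]

def tw(t: set) -> float:
    """Transaction weight = mean weight of the items it contains."""
    return sum(item_weight[i] for i in t) / len(t)

def weighted_support(itemset: set) -> float:
    """Weight of transactions containing the itemset / total transaction weight."""
    total = sum(tw(t) for t in transactions)
    hit = sum(tw(t) for t in transactions if itemset <= t)
    return hit / total

for pair in combinations(sorted(item_weight), 2):
    print(pair, round(weighted_support(set(pair)), 3))
```

A transaction holding many high-weight items contributes more to every itemset it contains, which is exactly why transaction weight can serve as an importance signal in the database analysis described above.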

A Basic Study on the Establishment of Evaluation Items for the Resiliency of Planting Landscape in Hahoe and Yangdong of World Cultural Heritage (세계문화유산 하회와 양동의 식생경관 진정성 유지를 위한 평가항목 설정 기초 연구)

  • Lee, Chang-Hun;Shin, Hyun-Sil
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.36 no.2
    • /
    • pp.21-29
    • /
    • 2018
  • This study sought to select value evaluation items for maintaining the authenticity of the planting landscape in the Hahoe and Yangdong villages. After the suitability of 43 candidate items was checked through a Focus Group Interview (FGI), weights were calculated from an Analytic Hierarchy Process (AHP) expert questionnaire to establish the importance of the indicators used in developing the assessment items. The expert analysis covered the importance of 2 sections, 6 divisions, and 11 detailed categories, and the results of the study are summarized as follows. First, the comparative importance of each category in the assessment items for the planting landscape of Hahoe and Yangdong indicated that cultural values are more important than biological values; in particular, the detailed biological-value items for trees received relatively low weights, except for species specificity and tree form. Second, as a result of verifying the suitability of the 43 items selected through the FGI, 11 items were chosen, including root-collar diameter, crown width, reception, flushing, supersonality, records, and memorials. Third, the AHP evaluation of the importance of the value attributes for maintaining the authenticity of the vegetation yielded species specificity among the biological values (0.187), steadiness (0.094), and crown width (2007). Apart from the lowest-valued item, there was relatively little difference among the highest weights, such as those of crown width, root-collar diameter, and flushing. Fourth, the evaluation of the importance of the value attributes for historical values yielded designation as a cultural property (0.134), record value (0.092), age (0.088), and monument value (0.063). In evaluating the historical values of the planting sites of Hahoe and Yangdong, the designation of cultural properties was considered relatively important for maintaining the planting landscape, including its culture and history. Based on the assessment items and weights for the planting landscape of the Hahoe and Yangdong villages derived here, the AHP framework of this study can be applied as actual criteria for assessing the authenticity of trees in the villages; a follow-up study on such assessment standards remains as a future task.
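
The sketch below shows how AHP weights are typically derived from a pairwise comparison matrix via its principal eigenvector, using an illustrative 3x3 matrix rather than the paper's survey data.

```python
"""AHP weighting sketch: principal eigenvector of a pairwise comparison matrix."""

import numpy as np

# Illustrative Saaty-scale pairwise comparisons among three hypothetical criteria.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector, normalized to sum to 1, gives the criterion weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1), RI = 0.58 for n = 3.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```

A consistency ratio below about 0.1 is the usual threshold for accepting an expert's comparison matrix before its weights are used.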

Long-term Streamflow Prediction for Integrated Real-time Water Management System (통합실시간 물관리 운영시스템을 위한 장기유량예측)

  • Kang Boosik;Rieu Seung Yup;Ko Ick-Hwan
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2005.05b
    • /
    • pp.1450-1454
    • /
    • 2005
  • In water resources management, streamflow prediction over future time horizons is one of the most important factors influencing the decisions of water resources system operators. Efficient water allocation and water-use activities such as hydropower generation require long-term streamflow prediction at the monthly scale or longer, which in turn requires prior precipitation forecasts. This study presents a methodology for mid- to long-term streamflow prediction for an integrated real-time water management system. One representative approach to mid- to long-term streamflow prediction is the Ensemble Streamflow Prediction (ESP) technique. ESP uses the current basin state as the initial condition and historical time-series ensembles of temperature, precipitation, and so on as model inputs to a rainfall-runoff model to predict runoff. ESP thus combines the current basin state, the historical precipitation record of the basin, and future precipitation forecast information to produce a corresponding runoff ensemble. The probability distribution of the runoff ensemble varies with the weight assigned to each ensemble trace and, in some cases, propagates to the probability distributions of variables derived secondarily from the flow. Because existing ESP theory is based on the categorical probability forecasts of the U.S. NWS, it has been difficult to apply directly in the Korean setting. This study therefore uses the monthly precipitation outlook of the Korea Meteorological Administration and presents an ESP technique suited to the characteristics of this information. Furthermore, for mid- to long-term water resources operation, a technique is presented that combines numerical weather prediction with the monthly precipitation outlook to construct daily monthly-precipitation scenarios for ESP.
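
As a sketch of how ensemble traces can carry different weights, the example below computes a weighted ensemble mean and an exceedance probability from hypothetical monthly flow traces and trace weights; the operational ESP system and the processing of the KMA outlook are not reproduced.

```python
"""Weighted ESP ensemble sketch with hypothetical traces and weights."""

import numpy as np

# Each row: one historical-year trace of predicted monthly flow (m^3/s).
traces = np.array([
    [120.0, 95.0, 80.0],
    [150.0, 130.0, 110.0],
    [90.0, 70.0, 60.0],
])
# Trace weights, e.g., higher weight on years matching the outlook category.
weights = np.array([0.5, 0.3, 0.2])

expected_flow = weights @ traces                 # weighted ensemble mean per month
exceed_110 = weights[traces[:, 0] > 110].sum()   # P(month-1 flow > 110)
print("expected monthly flow:", expected_flow)
print("P(flow_1 > 110):", exceed_110)
```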

Mining Frequent Service Patterns using Graph (그래프를 이용한 빈발 서비스 탐사)

  • Hwang, Jeong-Hee
    • Journal of Digital Contents Society
    • /
    • v.19 no.3
    • /
    • pp.471-477
    • /
    • 2018
  • As time passes, users' interests change. In this paper, we propose a method to provide suitable services to users by dynamically weighting service interests according to age, time, and seasonal changes in a ubiquitous environment. Based on the history of services presented to users by age or season, we also offer useful services by continuously adding the most recent service rules to reflect changing service interests. To do this, a set of services is treated as a transaction and each service as an item in the transaction. We represent the associations among services in a graph and extract frequent service items that constitute up-to-date information services for users.
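
The sketch below illustrates the graph view of service associations, counting how often pairs of services co-occur across transactions and keeping the edges above a minimum support; the session data and threshold are illustrative, not from the paper.

```python
"""Frequent service-pair mining sketch: a co-occurrence graph over sessions."""

from collections import Counter
from itertools import combinations

# Each session is a transaction; each service is an item (illustrative data).
sessions = [
    {"weather", "news", "music"},
    {"weather", "news"},
    {"music", "podcast"},
    {"weather", "news", "podcast"},
]
min_support = 2  # minimum number of co-occurrences to keep an edge

edges = Counter()
for s in sessions:
    for a, b in combinations(sorted(s), 2):
        edges[(a, b)] += 1  # edge = pair of services seen together

frequent_graph = {e: c for e, c in edges.items() if c >= min_support}
print(frequent_graph)  # e.g., {('news', 'weather'): 3}
```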

Effective Coordination Method of Multi-Agent Based on Fuzzy Decision Making (퍼지 의사결정에 기반한 멀티에이전트의 효율적인 조정방안)

  • Ryu, Kyung-Hyun;Chung, Hwan-Mook
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.1
    • /
    • pp.66-71
    • /
    • 2007
  • To adapt to rapidly changing environments, to improve the speed of response to environmental variation, and to reduce the decision-making delay of multi-agents, the derivation of the user's preferences and alternatives is required. In this paper, we propose an efficient coordination method for multi-agents, based on fuzzy decision making over the solutions proposed by the agents, from the viewpoint of Pareto optimality. Our method generates the optimal alternative by using weighted values: we compute the importance of the attributes of the winning agent and thereby obtain priorities for the attributes. The results of our method are compared with those of Yager's method.
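
As an illustration of weighted fuzzy decision making, the sketch below uses Yager-style weighted aggregation, raising each membership to the power of its attribute weight before taking the min; the membership scores and weights are made up, and the paper's exact formulation may differ.

```python
"""Fuzzy decision-making sketch with Yager-style weighted aggregation."""

# Membership degree of each alternative under each attribute, in [0, 1].
alternatives = {
    "plan_A": {"cost": 0.8, "speed": 0.6, "quality": 0.9},
    "plan_B": {"cost": 0.5, "speed": 0.9, "quality": 0.7},
}
weights = {"cost": 1.0, "speed": 0.6, "quality": 0.8}  # attribute importance

def decision_score(scores: dict[str, float]) -> float:
    # Weight each membership as mu ** w, then take the min;
    # a low weight flattens that attribute's influence toward 1.
    return min(mu ** weights[a] for a, mu in scores.items())

ranked = sorted(alternatives, key=lambda alt: decision_score(alternatives[alt]),
                reverse=True)
for alt in ranked:
    print(alt, round(decision_score(alternatives[alt]), 3))
```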

Binary Neural Network in Binary Space using NETLA (NETLA를 이용한 이진 공간내의 패턴분류)

  • Sung, Sang-Kyu;Park, Doo-Hwan;Jeong, Jong-Won;Lee, Joo-Tark
    • Proceedings of the KIEE Conference
    • /
    • 2001.11c
    • /
    • pp.431-434
    • /
    • 2001
  • When the single-layer perceptron was first developed, it attracted researchers' attention because of its ability to learn to recognize simple patterns. Because a single-layer perceptron can express binary logic functions with a single unit simply by changing its weights, it has been used in image processing, pattern recognition, and scene recognition. Recently, the back-propagation learning algorithm has been applied to mapping problems in binary space. However, back-propagation suffers from long training times and inefficient performance in continuous space; in general, it requires many iterations even to map simple binary functions. In back-propagation, the number of hidden-layer neurons depends on the numbers of neurons in the input and output layers, because no prior knowledge about the given problem is available. Accordingly, one of the most important problems in applying a three-layer neural network is determining the number of neurons needed in the hidden layer, and the lack of suitable methods for network synthesis and weight determination has limited its practical range of use. This paper presents a new learning method for pattern classification: a geometric learning algorithm called NETLA (Newly Expand and Truncate Learning Algorithm), which automatically determines the number of hidden-layer neurons of a binary neural network based on a geometric analysis of the training inputs. Simulations demonstrate the superiority of the proposed algorithm.
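
For contrast with NETLA, the sketch below trains an ordinary single-layer perceptron on the AND gate using the classical perceptron learning rule, where weights change only in response to misclassifications; NETLA's geometric construction of hidden units is not reproduced here.

```python
"""Single-layer perceptron sketch learning the AND gate over binary inputs."""

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # AND truth table

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):  # perceptron learning rule: update only on errors
    for xi, ti in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += lr * (ti - pred) * xi
        b += lr * (ti - pred)

print([int(w @ xi + b > 0) for xi in X])  # [0, 0, 0, 1]
```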

A Study on the efficient AODV Routing Algorithm using Cross-Layer Design (크로스레이어 디자인을 이용한 효율적인 AODV 알고리즘에 관한 연구)

  • Nam, Ho-Seok;Lee, Tae-Hoon;Do, Jae-Hwan;Kim, Jun-Nyun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.11B
    • /
    • pp.981-988
    • /
    • 2008
  • In this paper, an efficient AODV routing algorithm for MANETs is proposed. Because the transmission channel in a MANET has a high error rate and loss, the number of hops cannot be regarded as an absolute network metric. After measuring the FER periodically at the data link layer using a cross-layer design, each node forwards a link-status weight in the reserved field of the AODV protocol. To find an efficient route, we design AODV to select an optimal route with good channel status by evaluating the sum of these weights. The proposed AODV improves throughput, routing overhead, and average end-to-end delay in comparison with generic AODV.
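
The sketch below illustrates route selection by accumulated link weight, assuming a hypothetical mapping from measured FER to a per-link weight; the paper's exact weight encoding in the AODV reserved field is not reproduced.

```python
"""Route selection sketch: pick the route with the smallest sum of link weights."""

def link_weight(fer: float) -> float:
    # Assumed mapping: a higher frame-error rate makes the link heavier (worse).
    return 1.0 / (1.0 - fer)

# Candidate routes discovered by RREQ flooding: lists of per-hop FERs (made up).
routes = {
    "A-B-D":   [0.05, 0.10],        # 2 hops, noisier links
    "A-C-E-D": [0.01, 0.02, 0.01],  # 3 hops, very clean links
}

costs = {r: sum(link_weight(f) for f in fers) for r, fers in routes.items()}
best = min(costs, key=costs.get)
print(costs)      # route cost = accumulated link weight
print("selected:", best)
```

Note how the weighted cost still lets a short route with moderately noisy links beat a longer but cleaner route, which is the trade-off a pure hop count cannot express.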

Indian Buffet Process Inspired Component Analysis for fMRI Data (fMRI 데이터에 적용한 인디언 뷔페 프로세스 닮은 성분 분석법)

  • Kim, Joon-Shik;Kim, Eun-Sol;Lim, Byoung-Kwon;Lee, Chung-Yeon;Zhang, Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06c
    • /
    • pp.191-194
    • /
    • 2011
  • Zipf's law states that the word frequencies in a document follow a power law. The Dirichlet process is a machine-learning method that finds the topics of a document by taking this word distribution into account. Extending this idea, the Indian buffet process finds the latent factors of data through sampling based on a Bayesian probability model. We applied principal component analysis (PCA) to the PBAIC 2007 data, in which ratings for 25 features are given together with BOLD (blood-oxygen-level-dependent) signals. The PBAIC 2007 data are public data obtained by functional magnetic resonance imaging (fMRI) of subjects playing a video game. In our study, we found 10 independent components using PCA. We then took the inner product between the BOLD signal, sampled every 1.75 seconds, and the 10 eigenvectors to obtain weights. By sorting the component weights in ascending order, we could identify the components that exerted the dominant influence at each time point.
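
The sketch below reproduces the projection step with synthetic data standing in for the PBAIC 2007 ratings: PCA eigenvectors are obtained by SVD of the centered data, and the per-time-point weights are the inner products of the signal with those eigenvectors.

```python
"""PCA projection sketch: per-time-point component weights from synthetic data."""

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 25))   # 200 time points x 25 feature ratings (synthetic)

# PCA via SVD on the centered data; keep the top 10 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:10]                 # 10 eigenvectors (10 x 25)

weights = Xc @ components.T          # per-time-point component weights (200 x 10)
dominant = np.argsort(np.abs(weights), axis=1)[:, ::-1]  # strongest first
print("dominant component at t=0..4:", dominant[:5, 0])
```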