• Title/Summary/Keyword: complexity of reasoning

Search results: 48

Development of OOKS: A Knowledge Base Model Using an Object-Oriented Database (객체지향 데이터베이스를 이용한 지식베이스 모형(OOKS) 개발)

  • 허순영;김형민;양근우;최지윤
    • Journal of Intelligence and Information Systems
    • /
    • v.5 no.1
    • /
    • pp.13-34
    • /
    • 1999
  • Building a knowledge base effectively has been an important research area in the expert systems field. A variety of approaches have been studied to represent knowledge bases for expert systems, including rules, semantic networks, and frames. As knowledge bases grow in size and complexity, integrating the knowledge base with database technology becomes more important for processing the large amounts of data involved. However, relational database management systems show many limitations in handling complicated human knowledge because of their simple two-dimensional table structure. In this paper, we propose Object-Oriented Knowledge Store (OOKS), a knowledge base model based on a frame structure and built on an object-oriented database. In the proposed model, rules for inferencing and facts about objects are managed in one uniform structure, so knowledge and data can be tightly coupled and reasoning performance can be improved. To build a knowledge base, a knowledge script file representing rules and facts is used, and the script file is translated into a frame structure in the database system. In particular, because the frame structure is designed directly in the database model, the management and utilization of knowledge in expert systems is facilitated. To test the appropriateness of the proposed knowledge base model, a prototype system has been developed using a commercial ODBMS, ObjectStore, and the C++ programming language. (An illustrative sketch of such a frame structure follows this entry.)

  • PDF
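
The following is a minimal Python sketch of the frame idea described in the abstract above: facts (slot values) and inference rules are kept in one uniform structure so that a forward-chaining step works directly on the stored object. The class, slot, and rule names are hypothetical, and the paper's actual system is built on ObjectStore with C++, not Python.

```python
# Minimal sketch, assuming a frame is an object that stores facts (slot values)
# and inference rules side by side, in the spirit of coupling knowledge and
# data.  All names and the example rule are hypothetical.

class Frame:
    def __init__(self, name, **slots):
        self.name = name
        self.slots = dict(slots)      # facts about the object
        self.rules = []               # inference rules attached to the frame

    def add_rule(self, condition, action):
        """condition and action are callables over the slot dictionary."""
        self.rules.append((condition, action))

    def infer(self):
        """One forward-chaining pass over the attached rules."""
        fired = False
        for condition, action in self.rules:
            if condition(self.slots):
                action(self.slots)
                fired = True
        return fired


# Hypothetical usage: a customer frame whose rule derives a discount fact.
customer = Frame("customer-42", orders=12, discount=None)
customer.add_rule(
    condition=lambda s: s["orders"] >= 10 and s["discount"] is None,
    action=lambda s: s.update(discount=0.05),
)
customer.infer()
print(customer.slots)    # {'orders': 12, 'discount': 0.05}
```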

Combined Horizontal-Vertical Serial BP Decoding of GLDPC Codes with Binary Cyclic Codes (이진 순환 부호를 쓰는 GLDPC 부호의 수평-수직 결합 직렬 복호)

  • Chung, Kyuhyuk
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.10
    • /
    • pp.585-592
    • /
    • 2014
  • It is well known that serial belief propagation (BP) decoding for low-density parity-check (LDPC) codes achieves faster convergence than standard parallel BP (PBP) decoding without any increase in decoding complexity per iteration and without any loss in bit error rate (BER) performance. Serial BP (SBP) decoding, such as horizontal SBP (H-SBP) decoding or vertical SBP (V-SBP) decoding, updates check nodes or variable nodes faster than standard PBP decoding within a single iteration. In this paper, we propose combined horizontal-vertical SBP (CHV-SBP) decoding. By the same reasoning, CHV-SBP decoding updates check nodes or variable nodes faster than SBP decoding within a serialized step of an iteration. CHV-SBP decoding achieves faster convergence than H-SBP or V-SBP decoding. We compare these decoding schemes in detail. We also show in simulations that the convergence rate, in iterations, of CHV-SBP decoding is about $\frac{1}{6}$ of that of standard PBP decoding, while the convergence rate of SBP decoding is about $\frac{1}{2}$ of that of standard PBP decoding. In the simulations, we use recently proposed generalized LDPC (GLDPC) codes with binary cyclic codes (BCC).
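
Purely to illustrate the scheduling difference the abstract describes, the sketch below runs generic sum-product LDPC decoding on a toy parity-check matrix with either a flooding (parallel) or a horizontal-serial (layered) schedule. The matrix, channel LLRs, and iteration limit are hypothetical, and the paper's GLDPC construction with binary cyclic component codes is not modeled.

```python
# Sketch of flooding (parallel) vs. horizontal-serial (layered) BP schedules on
# a toy parity-check matrix.  Generic LDPC decoding only; all numbers are
# hypothetical.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def check_update(q):
    """Sum-product check-node rule: extrinsic LLR for each edge of one check."""
    t = np.tanh(q / 2.0)
    out = np.empty_like(q)
    for k in range(len(q)):
        prod = np.prod(np.delete(t, k))
        out[k] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return out

def decode(llr_ch, schedule="parallel", iters=10):
    m, n = H.shape
    R = np.zeros((m, n))                        # check-to-variable messages
    for _ in range(iters):
        posterior = llr_ch + R.sum(axis=0)
        for i in range(m):
            idx = np.nonzero(H[i])[0]
            q = posterior[idx] - R[i, idx]      # variable-to-check messages
            r_new = check_update(q)
            if schedule == "serial":            # layered: later checks in this
                posterior[idx] = q + r_new      # iteration see the new messages
            R[i, idx] = r_new
        posterior = llr_ch + R.sum(axis=0)
        hard = (posterior < 0).astype(int)
        if not np.any((H @ hard) % 2):          # all parity checks satisfied
            return hard
    return hard

llr = np.array([2.5, -0.4, 3.1, 1.2, -2.0, 0.8])    # hypothetical channel LLRs
print(decode(llr, "parallel"), decode(llr, "serial"))
```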

A refinement and abstraction method of the SPZN formal model for intelligent networked vehicles systems

  • Yang Liu;Yingqi Fan;Ling Zhao;Bo Mi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.1
    • /
    • pp.64-88
    • /
    • 2024
  • Security and reliability are of the utmost importance in intelligent networked vehicles. Stochastic Petri Nets and Z (SPZN), an effective formal verification tool for modeling concurrent systems, can handle concurrent operations within a system, establish relationships among components, and support verification and reasoning to ensure the system's safety and reliability in practical applications. However, applying Petri Nets to a system with numerous nodes often leads to the problem of state explosion. To tackle these challenges, a refinement and abstraction method based on SPZN is proposed in this paper. This approach can not only refine and abstract the Stochastic Petri Net but also establish a corresponding relationship with the Z language. In determining the implementation (firing) rates of transitions in the Stochastic Petri Net, we employ the interval-average and weighted-average method, which significantly reduces time and space complexity compared to alternative techniques and is suitable for expert systems at various levels. This reduction facilitates subsequent comprehensive system analysis and module analysis. Furthermore, by analyzing the properties of Markov chain isomorphism in the case study, recommendations can be put forward for minimizing system risks in the application of intelligent parking within the intelligent networked vehicle system.
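
As a rough reading of the interval-average and weighted-average idea mentioned above, the sketch below assigns a firing rate to a single Petri-net transition by taking the midpoint of each interval estimate and combining the midpoints with expert weights. The exact formulation in the paper may differ; the intervals and weights here are hypothetical.

```python
# Sketch of assigning a transition firing rate from expert-supplied interval
# estimates: midpoint of each interval ("interval average"), combined with
# weights ("weighted average").  All numbers below are hypothetical.

def interval_average(interval):
    low, high = interval
    return (low + high) / 2.0

def weighted_transition_rate(interval_estimates, weights):
    """Combine interval midpoints into a single firing rate."""
    assert len(interval_estimates) == len(weights)
    total = sum(weights)
    return sum(w * interval_average(iv)
               for iv, w in zip(interval_estimates, weights)) / total

# Three hypothetical expert estimates (events per second) for one transition.
estimates = [(0.8, 1.2), (0.9, 1.5), (1.0, 1.4)]
weights = [0.5, 0.3, 0.2]
print(weighted_transition_rate(estimates, weights))   # -> 1.1
```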

A Study of Fatigue Damage Factor Evaluation for Railway Turnout Crossing using Qualitative Analysis & Field Test (현장측정 및 정성분석기법을 이용한 분기기 망간 크로싱의 피로손상도 평가에 관한 연구)

  • Park, Yong-Gul;Choi, Jung-Youl;Eum, Ki-Young
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.6D
    • /
    • pp.881-893
    • /
    • 2008
  • The major objective of this study is to evaluate the fatigue damage factor of the fixed (immovable) crossing of a railway turnout through field testing and qualitative analysis. From the field test results for the in-service turnout crossing and a qualitative analysis accounting for frictional wear, which reduces the section stiffness, the fatigue life of the in-service turnout crossing was evaluated. Most design practices have not taken advantage of advanced theories in modern fracture mechanics and finite element analysis because of the complexity of the analysis as well as the large number of vaguely defined parameters in actual designs. This paper addresses fatigue problems in the turnout crossing using effective analytical and design tools from the field of qualitative constraint reasoning. A set of software modules was developed for fatigue analysis and evaluation that is easily applicable in the engineering practice of designers. The techniques enable the use of complex analysis formulations to tackle practical problems with uncertainties and present the design outcome as a two-dimensional design-space solution. Appropriate engineering assumptions and judgments in carrying out these procedures, often the most difficult part for practicing engineers, can be partially produced by using qualitative reasoning to define trends and ranges, interval constraint analysis to derive the controlling parameters, and the design space to account for practical experience.
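
The sketch below illustrates the flavor of interval constraint analysis referred to in the abstract: parameter ranges are propagated through a simple stress relation and the resulting interval is checked against an allowable band. The relation and all numeric values are hypothetical and are not the paper's fatigue model.

```python
# Minimal interval-constraint illustration: propagate uncertain parameter
# ranges through sigma = M / Z and compare against an allowable limit.
# The relation and numbers are hypothetical.

def interval_div(a, b):
    """Divide interval a by interval b (b must not contain zero)."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    candidates = [a_lo / b_lo, a_lo / b_hi, a_hi / b_lo, a_hi / b_hi]
    return (min(candidates), max(candidates))

bending_moment = (40.0e3, 55.0e3)      # N*m, uncertain wheel-load effect
section_modulus = (1.0e-3, 1.3e-3)     # m^3, reduced by wear (uncertain)

stress = interval_div(bending_moment, section_modulus)   # Pa
allowable_max = 50.0e6                                   # Pa, design limit

print("stress range:", stress)
print("always within limit:", stress[1] <= allowable_max)   # False here
```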

On the Development of Agent-Based Online Game Characters (에이전트 기반 지능형 게임 캐릭터 구현에 관한 연구)

  • 이재호;박인준
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2002.11a
    • /
    • pp.379-384
    • /
    • 2002
  • From a development standpoint, NPCs (non-playable characters) in an online game environment must possess abilities for perceiving the environment, moving, using special abilities, and handling the ownership and distribution of items, and they must act with data structures for perceiving and storing the game environment and with plans for accomplishing their own unique missions. In this sense, an NPC should be regarded as an agent that owns its own rules and behavior patterns, as well as goals and plans for carrying them out. However, the NPC control structures and implementation methods of existing games have often failed to meet these requirements. Implementations in computer languages such as C/C++ had many problems with the flexibility and behavior of NPCs. The switch statements of these languages were used to describe a few specific NPC states and to specify the corresponding behavior, but as game environments grew more complex, ever larger amounts of code had to be written, the code became difficult to interpret, and it was hard to apply different behavior patterns to the same NPC. In addition, because most of the control was handled on the game server side, it also became a major source of server overload. Game scripts have been used to remove these difficulties, but they too were used only for simple repetitive patterns or could describe only the attribute side of a character. To resolve these difficulties, the knowledge required for NPC tasks must be hierarchically differentiated, and it is essential to develop a script that can express responses appropriate to the current situation and to changes in goals. Moreover, the script should be executed on the client side rather than on the game server side, so that much of the load on the server can be reduced. In this paper, we examine a methodology for implementing agent-based game characters using UMPRS/JAM, a representative reactive agent system. (A loose goal/plan sketch in this spirit follows this entry.)

  • PDF
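
As a loose sketch of the goal/plan style of NPC control discussed above (a BDI-like reactive loop in the spirit of UMPRS/JAM, not its actual API), the following Python fragment selects and executes a plan whose goal and context condition match the agent's current beliefs. Goal names, beliefs, and plan bodies are hypothetical.

```python
# Sketch of a goal/plan reactive loop for an NPC agent.  Not the UMPRS/JAM API;
# goals, beliefs, and plan bodies below are illustrative only.

class Plan:
    def __init__(self, goal, relevant, body):
        self.goal = goal            # goal this plan achieves
        self.relevant = relevant    # context condition over the beliefs
        self.body = body            # action to execute

class NPCAgent:
    def __init__(self, beliefs, goals, plans):
        self.beliefs, self.goals, self.plans = beliefs, list(goals), plans

    def step(self, percepts):
        self.beliefs.update(percepts)                  # sense
        for goal in list(self.goals):                  # deliberate
            for plan in self.plans:
                if plan.goal == goal and plan.relevant(self.beliefs):
                    plan.body(self.beliefs)            # act
                    self.goals.remove(goal)
                    return

patrol = Plan("patrol", lambda b: not b.get("enemy_near", False),
              lambda b: print("walking patrol route"))
defend = Plan("defend", lambda b: b.get("enemy_near", False),
              lambda b: print("engaging intruder"))

npc = NPCAgent(beliefs={}, goals=["defend", "patrol"], plans=[patrol, defend])
npc.step({"enemy_near": False})   # no threat -> patrol plan fires
npc.step({"enemy_near": True})    # threat appears -> defend plan fires
```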

Face and Facial Element Extraction in CCD-Camera Images by Using the Snake Algorithm (스네이크 알고리즘에 의한 CCD 카메라 영상에서의 얼굴 및 얼굴 요소 추출)

  • 판데홍;김영원;김정연;전병환
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2002.11a
    • /
    • pp.535-542
    • /
    • 2002
  • With the recent rapid growth of the IT industry, natural interface technologies for controlling avatars in video conferencing, games, chatting, and similar applications are in demand. In this paper, we propose a method that uses active contour models (snakes) to extract the contours, or locate the positions, of the face and of facial elements such as the eyes, mouth, eyebrows, and nose in color CCD-camera images with complex backgrounds. In general, because the snake algorithm is sensitive to noise and its extraction performance depends heavily on how the initial model is set, it has mainly been used to extract frontal faces from images with simple backgrounds. To address these shortcomings, we first find the minimum enclosing rectangle (MER) of the face using color information based on the I component of the YIQ color model together with difference-image information, and then set the MERs of the eyes, mouth, eyebrows, and nose within this face region using geometric position information and edge information. Next, within the MER of each element, the snake algorithm is applied with an internal energy based on the first and second derivatives and an image energy based on edges. To remove complex noise around the face in the edge image, a morphological dilation operation is applied to the color-information image and to the difference image, and the binary image obtained by applying dilation once more to their AND combination is used as a filter. In experiments on a total of 140 images, 20 near-frontal images with both eyes visible acquired from each of 7 subjects, the MER error rates were 6.2%, 11.2%, and 9.4% for the face, eyes, and mouth, respectively. In addition, running the snake algorithm with 44 initial control points for the face, 16 for the eyes, and 24 for the mouth on the images for which MER extraction succeeded, the error rates of the extracted regions were 2.2%, 2.6%, and 2.5%, respectively. (A bare-bones sketch of this snake energy follows this entry.)

  • PDF
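
The sketch below is a bare-bones version of the snake energy the abstract refers to: an internal energy from the first and second differences of the contour points plus an edge-based image energy, minimized by greedy one-pixel moves. The weights, toy edge map, and initial contour are hypothetical, and the paper's YIQ skin-color step, MER detection, and morphological filtering are not reproduced.

```python
# Greedy snake (active contour) sketch: internal energy from 1st/2nd differences
# plus an edge-based image energy.  Weights and data below are hypothetical.
import numpy as np

def snake_energy(pts, edge_map, alpha=1.0, beta=1.0, gamma=2.0):
    """pts: (N, 2) integer array of points on a closed contour."""
    d1 = np.roll(pts, -1, axis=0) - pts                                # 1st diff
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)  # 2nd diff
    e_internal = alpha * (d1 ** 2).sum() + beta * (d2 ** 2).sum()
    e_image = -gamma * edge_map[pts[:, 0], pts[:, 1]].sum()   # attract to edges
    return e_internal + e_image

def greedy_step(pts, edge_map):
    """Move each control point to the 4-neighbour position of lowest energy."""
    moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    limit = np.array(edge_map.shape) - 1
    for k in range(len(pts)):
        def energy_if_moved(m):
            trial = pts.copy()
            trial[k] = np.clip(trial[k] + m, 0, limit)
            return snake_energy(trial, edge_map)
        best = np.array(min(moves, key=energy_if_moved))
        pts[k] = np.clip(pts[k] + best, 0, limit)
    return pts

# Hypothetical usage: a square edge region and a diamond-shaped initial contour.
edge_map = np.zeros((64, 64))
edge_map[20:44, 20:44] = 1.0
contour = np.array([[10, 32], [32, 10], [54, 32], [32, 54]])
for _ in range(20):
    contour = greedy_step(contour, edge_map)
print(contour)
```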

Development of Feature-based Encapsulation Process using Filler Material (충진재를 이용한 특징형상 가공용 RFPE 공정 개발)

  • Choe, Du-Seon;Lee, Su-Hong;Sin, Bo-Seong;Yun, Gyeong-Gu;Hwang, Gyeong-Hyeon;Lee, Ho-Yeong
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.18 no.1
    • /
    • pp.98-103
    • /
    • 2001
  • Machining is a commonly used process in the manufacture of prototypes. This process offers several advantages, such as the rigidity and precision of the machine, the precision of the operation, and especially quick delivery. The weight and immobility of the machine support and immobilize the part during the operation. Despite these advantages, however, machining still presents several limitations. The immobilization, location, and support of the part are referred to as fixturing or workholding and present the biggest challenge for time-efficient machining, so it is important to select and design an appropriate fixturing assembly. This assembly depends on the complexity of the part and the tool paths and may require the construction of dedicated fixtures. With traditional techniques, the range of fixturable shapes is limited, and the identification of suitable fixtures in a given setup involves complex reasoning. To overcome this limitation and to enable automation, this paper presents Reference Free Part Encapsulation (RFPE) and the implementation of an encapsulation system. A feature-based modeling system and the encapsulation system were implemented, and a small part for which it is difficult to find an appropriate fixturing assembly was produced with this system.

  • PDF

A Qualitative Formal Method for Requirements Specification and Safety Analysis of Hybrid Real-Time Systems (복합 실시간 계통의 요구사항 명세와 안전성 분석을 위한 정성적 정형기법)

  • Lee, Jang-Soo;Cha, Sung-Deok
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.2
    • /
    • pp.120-133
    • /
    • 2000
  • A major obstacle to using formal methods for hybrid real-time systems in industry is the difficulty engineers have in understanding and applying quantitative methods in the abstract requirements phase. While formal methods technology in safety-critical systems can help increase confidence in the software, the difficulty and complexity of using such methods can create another hazard. To overcome this obstacle, we propose a framework for qualitative requirements engineering of hybrid real-time systems. It consists of a qualitative method for requirements specification, called QFM (Qualitative Formal Method), and a safety analysis method for the requirements based on causality information, called CRSA (Causal Requirements Safety Analysis). QFM emphasizes causal and qualitative reasoning in formal methods to reduce the cognitive burden on designers when specifying and validating the software requirements of hybrid safety systems. CRSA can evaluate the logical contribution of software elements to the physical hazards of the system by utilizing the causality information that is kept during specification with QFM. Using Shutdown System 2 of the Wolsong nuclear power plants as a realistic example, we demonstrate the effectiveness of our approach. (A simplified illustration of this causality-based analysis follows this entry.)

  • PDF
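
The following is a simplified illustration of using recorded causality links to ask which software elements can contribute to a physical hazard, in the spirit of the causal requirements safety analysis described above. The causal graph, node names, and the shutdown-system flavor of the example are hypothetical.

```python
# Backward reachability over a recorded causality graph: find every element
# with a causal path to a given hazard.  Graph and names are hypothetical.
from collections import deque

# cause -> effects
causality = {
    "pressure_sensor_fault": ["wrong_pressure_reading"],
    "wrong_pressure_reading": ["trip_logic_no_actuation"],
    "trip_logic_no_actuation": ["reactor_overpressure"],   # physical hazard
    "display_glitch": ["operator_confusion"],
}

def contributors(hazard):
    """Return every element that can causally contribute to the hazard."""
    reverse = {}
    for cause, effects in causality.items():
        for effect in effects:
            reverse.setdefault(effect, []).append(cause)
    found, queue = set(), deque([hazard])
    while queue:
        node = queue.popleft()
        for cause in reverse.get(node, []):
            if cause not in found:
                found.add(cause)
                queue.append(cause)
    return found

print(contributors("reactor_overpressure"))
# contains the three trip-path elements, but not 'display_glitch'
```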

An Improvement of Coherence and Validity between CLD and SFD of System Dynamics (시스템 다이내믹스의 CLD와 SFD의 일관성 및 타당성 개선에 관한 연구)

  • Jung, Jae Un;Kim, Hyun Soo
    • Journal of Digital Convergence
    • /
    • v.12 no.6
    • /
    • pp.69-77
    • /
    • 2014
  • System Dynamics (SD) is one of the complexity theories that has attracted attention as a computer-aided simulation methodology for analyzing dynamic problems and developing policies (strategies) in social science. Although some of the research models developed in various fields with the SD methodology over the last five decades have not been properly validated, they are still used as models representing SD sub-theories. For this reason, this study targets the population dynamics model that is frequently used to explain SD fundamentals, and it demonstrates errors in the reasoning about the structure of the existing causal and dominant feedback loops. Consequently, we present a strategy for strengthening the coherence between the CLD (causal loop diagram) and the SFD (stocks-and-flows diagram) in order to improve the validity of the existing model. The findings of this study contribute to the advancement of existing SD practice and to stronger validation of SD policy research models.
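
For readers unfamiliar with the stocks-and-flows view mentioned above, the tiny sketch below simulates the population model that the abstract uses as its running example: population is the stock, and births and deaths are flows driven by fractional rates. The rate values and time step are hypothetical.

```python
# Tiny stock-and-flow sketch of a population model: population is the stock,
# births and deaths are the flows.  Rates and time step are hypothetical.

def simulate_population(initial, birth_rate, death_rate, years, dt=1.0):
    population = initial                        # the stock
    history = [population]
    for _ in range(int(years / dt)):
        births = birth_rate * population * dt   # inflow
        deaths = death_rate * population * dt   # outflow
        population += births - deaths           # the stock integrates net flow
        history.append(population)
    return history

trajectory = simulate_population(initial=1_000_000, birth_rate=0.02,
                                 death_rate=0.015, years=10)
print(round(trajectory[-1]))   # net 0.5% growth per step, compounded 10 times
```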

Flexible Decision-Making for Autonomous Agent Through Computation of Urgency in Time-Critical Domains (실시간 환경에서 긴급한 정도의 계산을 통한 자율적인 에이전트의 유연한 의사결정)

  • Noh Sanguk
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.9
    • /
    • pp.1196-1203
    • /
    • 2004
  • Autonomous agents need considerable computational resources to perform rational decision-making. The complexity of decision-making becomes prohibitive when a large number of agents is present and when decisions have to be made under time pressure. One approach in time-critical domains is to respond to an observed condition with a predefined action. Although such a system may be able to react very quickly to environmental conditions, predefined plans are of less value if the situation changes and re-planning is needed. In this paper we investigate strategies intended to tame the computational burden by using off-line computation in conjunction with on-line reasoning. We use performance profiles computed off-line and the notion of urgency (i.e., the value of time) computed on-line to choose the amount of information to be included during on-line deliberation. This method can adjust to various levels of real-time demand, but it incurs some overhead associated with iterative deepening. We test our framework with experiments in a simulated anti-air defense domain. The experiments show that the off-line performance profiles and the on-line computation of urgency are effective in time-critical situations.
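
As a rough sketch of the mechanism outlined in the abstract, the code below chooses a deliberation level from an off-line performance profile and an on-line urgency value, picking the level that maximizes expected quality minus a time penalty. The profile numbers, time costs, and the linear time-cost model are hypothetical.

```python
# Choose how much information to include in on-line deliberation from an
# off-line performance profile and an on-line urgency value.  All numbers and
# the linear time-cost model below are hypothetical.

# Off-line profile: (information level, expected decision quality,
# deliberation time in seconds).  More information -> better but slower.
PROFILE = [(1, 0.60, 0.05),
           (2, 0.75, 0.15),
           (3, 0.85, 0.40),
           (4, 0.90, 1.00)]

def choose_level(urgency):
    """Pick the level maximizing quality minus the time penalty urgency*time."""
    return max(PROFILE, key=lambda p: p[1] - urgency * p[2])[0]

# On-line: urgency grows as the threat (e.g., an incoming missile) gets closer.
for time_to_impact in (10.0, 2.0, 0.5):
    urgency = 1.0 / time_to_impact            # hypothetical value-of-time model
    print(time_to_impact, "s left -> deliberation level", choose_level(urgency))
```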