• Title/Summary/Keyword: Inconsistent Identifiers (비일관성 식별자)

Search Results: 8 (processing time: 0.024 seconds)

Detecting Inconsistent Code Identifiers (코드 비 일관적 식별자 검출 기법)

  • Lee, Sungnam; Kim, Suntae; Park, Sooyoung
    • KIPS Transactions on Software and Data Engineering, v.2 no.5, pp.319-328, 2013
  • Software maintainers comprehend source code largely through its identifiers, so the use of inconsistent identifiers throughout a code base increases the cost of software maintenance. Peer reviews can catch such problems, but reviewing the entire source code is impractical when the code volume is huge. This paper introduces an approach to automatically detecting inconsistent identifiers in Java source code. The approach consists of tokenizing and POS-tagging all identifiers in the source code, classifying syntactically and semantically similar terms, and finally detecting inconsistent identifiers by applying the proposed rules. In addition, we developed a supporting tool, named CodeAmigo, that implements the approach. We applied it to two popular Java-based open source projects to show the feasibility of the approach by computing its precision.
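
The pipeline described above (tokenize identifiers, group similar terms, apply rules) can be sketched as follows; the synonym list and the pairwise comparison are illustrative assumptions, not CodeAmigo's actual rules:

```python
import re

def tokenize_identifier(name):
    """Split a Java identifier into lowercase terms (handles camelCase and snake_case)."""
    spaced = re.sub(r'([a-z0-9])([A-Z])', r'\1 \2', name).replace('_', ' ')
    return [t.lower() for t in spaced.split()]

def find_mixed_synonyms(identifiers, synonym_pairs):
    """Flag identifier pairs that use synonymous terms inconsistently
    (e.g. one method says 'remove', a sibling says 'delete')."""
    flagged = []
    for a in identifiers:
        for b in identifiers:
            terms_a, terms_b = set(tokenize_identifier(a)), set(tokenize_identifier(b))
            for s, t in synonym_pairs:
                if s in terms_a and t in terms_b:
                    flagged.append((a, b))
    return flagged

ids = ["removeItem", "deleteEntry", "addItem"]
print(find_mixed_synonyms(ids, [("remove", "delete")]))  # [('removeItem', 'deleteEntry')]
```

A real detector would draw the synonym classes from POS tagging and a lexical database rather than a hand-written list.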

Uncertainty Improvement of Incomplete Decision System using Bayesian Conditional Information Entropy (베이지언 정보엔트로피에 의한 불완전 의사결정 시스템의 불확실성 향상)

  • Choi, Gyoo-Seok; Park, In-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.14 no.6, pp.47-54, 2014
  • Based on the indiscernibility relation of rough set theory, the inevitable superposition and inconsistency of data make attribute reduction very important in information systems. Rough set theory struggles with the difference in attribute reduction between consistent and inconsistent information systems. In this paper, we propose a new uncertainty measure and an attribute reduction algorithm that use Bayesian posterior probability to analyze the correlation between condition and decision attributes. We compare the proposed method with conditional information entropy in addressing the uncertainty of inconsistent information systems. As a result, our method is more accurate than conditional information entropy in dealing with uncertainty via the mutual information of the condition and decision attributes of an information system.
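
The conditional information entropy H(D|C) that the proposed Bayesian measure is compared against can be computed directly from a decision table; the toy table below is an illustrative assumption, not data from the paper:

```python
import math
from collections import Counter

def conditional_entropy(rows):
    """H(D | C) for a decision table given as (condition_tuple, decision) rows.
    Zero for a consistent table; positive when equal conditions disagree on the decision."""
    n = len(rows)
    cond_counts = Counter(c for c, _ in rows)
    joint_counts = Counter(rows)
    h = 0.0
    for (c, _), count in joint_counts.items():
        p_joint = count / n                    # P(c, d)
        p_dec_given_cond = count / cond_counts[c]  # P(d | c)
        h -= p_joint * math.log2(p_dec_given_cond)
    return h

# Two rows share the same condition but disagree on the decision (inconsistency).
table = [(("a", 1), "yes"), (("a", 1), "no"), (("b", 2), "yes")]
print(round(conditional_entropy(table), 3))  # 0.667
```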

A Cache Consistency Control for B-Tree Indices in a Database Sharing System (데이타베이스 공유 시스템에서 B-트리 인덱스를 위한 캐쉬 일관성 제어)

  • On, Gyeong-O; Jo, Haeng-Rae
    • The KIPS Transactions: Part D, v.8D no.5, pp.593-604, 2001
  • A database sharing system (DSS) refers to a system for high-performance transaction processing. In a DSS, the processing nodes are coupled via a high-speed network and share a common database at the disk level. Each node has a local memory and a separate copy of the operating system. To reduce the number of disk accesses, each node caches data pages and index pages in its memory buffer. In general, B-tree index pages are accessed more often, and thus cached at more processing nodes, than their corresponding data pages. The B-tree also involves complicated operations such as Fetch, Fetch Next, Insertion, and Deletion. Therefore, an efficient cache consistency scheme supporting a high level of concurrency is required. In this paper, we propose cache consistency schemes using identifiers of index pages and the page_LSN of leaf pages. The proposed schemes can improve system throughput by reducing the message traffic between nodes and the index re-traversals required.
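
A minimal sketch of LSN-based staleness detection for cached index pages, assuming a simple page_id-to-(LSN, contents) cache; this is an illustrative simplification, not the paper's full consistency protocol:

```python
class IndexPageCache:
    """Each node keeps cached pages tagged with the page_LSN they carried when read.
    A copy is stale once any node has written the page at a higher LSN."""
    def __init__(self):
        self._pages = {}  # page_id -> (page_lsn, contents)

    def cache(self, page_id, page_lsn, contents):
        self._pages[page_id] = (page_lsn, contents)

    def is_stale(self, page_id, latest_lsn):
        """True if the page is missing or was updated elsewhere (higher LSN)."""
        entry = self._pages.get(page_id)
        return entry is None or entry[0] < latest_lsn

node = IndexPageCache()
node.cache("leaf-42", page_lsn=100, contents=b"...")
print(node.is_stale("leaf-42", latest_lsn=100))  # False: cached copy is current
print(node.is_stale("leaf-42", latest_lsn=130))  # True: a remote update occurred
```

Avoiding unnecessary invalidation messages for current copies is what cuts the inter-node traffic the abstract mentions.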


Geometrical Analysis on Parts of Load Limit Valve for Static Structural Test of Aerospace Flight Vehicles (항공우주 비행체 정적구조시험용 하중제한밸브 부품 형상 분석)

  • Shim, Jae-Yeul
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.47 no.9, pp.607-616, 2019
  • A free-body-diagram analysis is performed for the key parts of the pilot stage of the LLV (Load Limit Valve), which protects against overload during static structural tests of aerospace flight vehicles. The analysis shows that the diameter ratio of the two poppets in a pilot stage, $D_2^{ten}/D_2^{comp}$, must equal the piston area ratio of the hydraulic actuator, $A_{comp}/A_{ten}$, for a poppet to open consistently at a constant actuator force. The result of the analysis is verified by measuring the geometries of the poppets in four different LLVs, which correspond to four actuators of different capacities and have been in use at this laboratory since being imported. Results of "adjuster resolution tests" with two different pilot stages show that the maximum deviation of $F_i$ (the actuator force at the instant the poppet opens) from the average $F_i$ obtained for each turn of the adjuster is 0.3 kN, and the maximum deviation of $F_i$ normalized by the average $F_i$ of each turn of the adjuster is 3.7%. These results verify that the two pilot stages with the same poppet diameter ratio make a poppet open consistently at forces within ±3.7% of the average $F_i$. The deviation is shown to be caused by the frictional force of the O-ring in the poppet. Additionally, this study identifies the design factors for the poppet spring and adjuster, which are also key parts of the pilot stage, and presents a procedure for determining them.

Legal Issues in Protecting and Utilizing Medical Data in the United States - Focused on HIPAA/HITECH, the 21st Century Cures Act, the Common Rule, and Guidance - (미국의 보건의료데이터 보호 및 활용을 위한 주요 법적 쟁점 -미국 HIPAA/HITECH, 21세기 치료법, 공통규칙, 민간 가이드라인을 중심으로-)

  • Kim, Jae Sun
    • The Korean Society of Law and Medicine, v.22 no.4, pp.117-157, 2021
  • This research reviewed HIPAA/HITECH, the 21st Century Cures Act, the Common Rule, and private guidance from the perspective of protecting and utilizing medical data, and drew the following implications. First, in the United States the standards for protection and utilization are regulated relatively clearly through a single law on personal medical information. HIPAA was introduced in 1996 as the fundamental act on the protection of medical data. Under HIPAA, medical data is divided into personally identifiable information, non-identifying information, and limited data sets, with regulations stipulating de-identification measures for medical information, the elements to be deleted from limited data sets, and agreements prohibiting data re-identification. Moreover, the 21st Century Cures Act regulates interoperability for data sharing, prohibits data blocking, and strengthens the accessibility rights of data subjects. The Common Rule introduced a comprehensive consent system and clearly stipulates its procedures. Second, the regulatory system in the United States is relatively simple and clearly stipulated. Specifically, expert determination and the safe harbor system were introduced as de-identification measures for identifiable medical information, which clearly define the process while increasing trust. Third, the rights of the data subject are specified, the duty of explanation is set out in detail, and the consumer's information right (the opt-out procedure) for identifying information is specified. For instance, the HHS rules and FDA regulations recognize a comprehensive consent system for human-subject research, but the consent procedure, method, and requirements are stipulated through the Common Rule. Fourth, the United States uses a trust-based system throughout its health and medical data legislation. Specifically, limited data sets may be used on condition that the researcher agrees not to re-identify the data, and the de-identification or consent process is simplified under this system.

Dependency Label based Causing Inconsistency Axiom Detection for Ontology Debugging (온톨로지 디버깅을 위한 종속 부호 기반 비논리적 공리 탐지)

  • Kim, Je-Min; Park, Young-Tack
    • Journal of KIISE: Software and Applications, v.35 no.12, pp.764-773, 2008
  • The Web Ontology Language (OWL) has become a W3C recommendation for publishing and sharing ontologies on the Semantic Web. OWL reasoners have been introduced to check the satisfiability of concepts in OWL ontologies, but most reasoners simply report the results without providing a justification for the entailment of an unsatisfiable concept. In this paper, we propose dependency-label-based detection of causing inconsistency axioms (CIAs) for debugging unsatisfiable concepts in an ontology. A CIA is a set of axioms that makes concepts unsatisfiable. To detect a CIA, we need to find the axioms that cause inconsistency in the ontology. If a precise CIA is given to ontology-building tools, they can display it in a suitable presentation format for debugging unsatisfiable concepts. Our work focuses on two key aspects. First, given an inconsistent ontology, it detects the axioms that make concepts unsatisfiable and identifies their root. Second, when particular unsatisfiable concepts are detected in an ontology, it extracts them and presents them to ontology designers. To this end, we introduce a tableau-based decision procedure and propose an improved method, dependency-label-based CIA detection. Our results are applicable to the very expressive logic SHOIN, which is the basis of the Web Ontology Language.
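
The idea of isolating a minimal set of axioms that still makes a concept unsatisfiable can be sketched with plain subclass and disjointness axioms; this naive shrink loop is an illustrative stand-in for the paper's dependency-label procedure over SHOIN tableaux:

```python
def superclasses(concept, axioms):
    """Transitive closure over subclass axioms, given as (sub, sup) pairs."""
    closed = {concept}
    changed = True
    while changed:
        changed = False
        for sub, sup in axioms:
            if sub in closed and sup not in closed:
                closed.add(sup)
                changed = True
    return closed

def is_unsatisfiable(concept, axioms, disjoint):
    """The concept is unsatisfiable if it entails two disjoint superclasses."""
    sups = superclasses(concept, axioms)
    return any(a in sups and b in sups for a, b in disjoint)

def minimal_cia(concept, axioms, disjoint):
    """Shrink: drop each axiom that is not needed for the unsatisfiability."""
    core = set(axioms)
    for ax in list(axioms):
        if is_unsatisfiable(concept, core - {ax}, disjoint):
            core.discard(ax)
    return core

axioms = {("Penguin", "Bird"), ("Penguin", "Flightless"),
          ("Bird", "Flying"), ("Penguin", "Animal")}
disjoint = {("Flying", "Flightless")}
print(sorted(minimal_cia("Penguin", axioms, disjoint)))
# [('Bird', 'Flying'), ('Penguin', 'Bird'), ('Penguin', 'Flightless')]
```

The irrelevant axiom ("Penguin", "Animal") is dropped; the remaining three are the CIA a debugging tool would display.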

A Methodology for Quality Control of Railroad Trackbed Fills Using Compressional Wave Velocities : I. Preliminary Investigation (압축파 속도를 이용한 철도 토공노반의 품질관리 방안 : I. 예비연구)

  • Park, Chul-Soo; Mok, Young-Jin; Choi, Chan-Yong; Lee, Tai-Hee
    • Journal of the Korean Geotechnical Society, v.25 no.9, pp.45-55, 2009
  • The quality of railroad trackbed fills has been controlled by field measurements of density and the bearing resistance of plate-load tests. These control measures are compatible with design procedures whose design parameter is $k_{30}$ for both ordinary-speed and high-speed railways. However, one fatal flaw of these design procedures is that there are no simple laboratory measurement procedures for the design parameters ($k_{30}$, or $E_{v2}$ and $E_{v2}/E_{v1}$) at the design stage. To overcome this defect, the compressional wave velocity was adopted as a control measure, in parallel with the advent of the new design procedure, and its measurement technique is proposed in this preliminary investigation. The key concept of the quality control procedure is that the target value for field compaction control is the compressional wave velocity determined at optimum moisture content using the modified compaction test, and the direct-arrival method is used for field measurements during construction, which is simple and reliable enough for practicing engineers. The direct-arrival method is well suited to such shallow and homogeneous fill lifts in terms of applicability and cost-effectiveness. The sensitivity of direct-arrival test results to compaction quality was demonstrated at a test site, and it was concluded that the compressional wave velocity can be used effectively as a quality control measure. Field and laboratory measurements of the compressional wave velocity established the experimental background for the companion study (Park et al., 2009).
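
The direct-arrival measurement reduces to picking first arrivals at two receivers and dividing their spacing by the travel-time difference; the threshold picker and synthetic traces below are illustrative assumptions, not the paper's field procedure:

```python
def first_arrival_index(trace, threshold):
    """Index of the first sample whose amplitude exceeds the threshold
    (a naive first-arrival pick)."""
    for i, amp in enumerate(trace):
        if abs(amp) >= threshold:
            return i
    return None

def direct_arrival_velocity(spacing_m, trace_near, trace_far, dt_s, threshold):
    """Compressional wave velocity from the first-arrival time difference
    between two receivers spaced spacing_m apart; dt_s is the sample interval."""
    t_near = first_arrival_index(trace_near, threshold) * dt_s
    t_far = first_arrival_index(trace_far, threshold) * dt_s
    return spacing_m / (t_far - t_near)

# Synthetic traces sampled at 0.1 ms; the far receiver's arrival is 2 ms later.
near = [0.0] * 10 + [1.0] + [0.0] * 40
far = [0.0] * 30 + [1.0] + [0.0] * 20
print(direct_arrival_velocity(0.6, near, far, dt_s=1e-4, threshold=0.5))  # ≈ 300.0 m/s
```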

A Semantic Classification Model for e-Catalogs (전자 카탈로그를 위한 의미적 분류 모형)

  • Kim Dongkyu; Lee Sang-goo; Chun Jonghoon; Choi Dong-Hoon
    • Journal of KIISE: Databases, v.33 no.1, pp.102-116, 2006
  • Electronic catalogs (or e-catalogs) hold information about the goods and services offered or requested by the participants and consequently form the basis of an e-commerce transaction. Catalog management is complicated by a number of factors, and product classification is at the core of these issues. The classification hierarchy is used for spend analysis, customs regulation, and product identification. Classification is the foundation on which product databases are designed, and it plays a central role in almost all aspects of the management and use of product information. However, product classification has received little formal treatment in terms of its underlying model, operations, and semantics. We believe that the lack of a logical model for classification introduces a number of problems, not only for the classification itself but also for the product database in general. A classification needs to meet diverse user views to support efficient and convenient use of product information. It needs to change and evolve frequently without breaking consistency when new products are introduced, existing products become extinct, or classes are reorganized or specialized. It also needs to be merged and mapped with other classification schemes without information loss when B2B transactions occur. To meet these requirements, a classification scheme should be dynamic enough to accommodate such changes within a reasonable time and cost. However, the existing classification schemes in wide use today, such as UNSPSC and eClass, have many limitations with respect to these dynamic requirements. In this paper, we examine what it means to classify products and present how best to represent classification schemes so as to capture the semantics behind the classifications and facilitate mappings between them. Product information carries rich semantics, such as class attributes like material, time, and place, as well as integrity constraints. We analyze the dynamic features of product databases and the limitations of existing code-based classification schemes, and we describe a semantic classification model that satisfies the requirements for the dynamic features of product databases. It provides a means to express more semantics for product classes explicitly and formally, and it organizes class relationships into a graph. We believe the proposed model satisfies the requirements and challenges raised by previous works.
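
The graph-organized class model can be sketched as nodes carrying explicit attributes and typed relationships; the class names, relation label, and inheritance rule below are hypothetical illustrations, not the paper's formal notation:

```python
class ProductClass:
    """A product class as a graph node: explicit attribute semantics plus
    typed edges (e.g. 'subclass_of') to other classes."""
    def __init__(self, name, attributes=None):
        self.name = name
        self.attributes = dict(attributes or {})
        self.edges = []  # (relation_name, ProductClass)

    def relate(self, relation, other):
        self.edges.append((relation, other))

    def effective_attributes(self):
        """Attributes including those inherited along 'subclass_of' edges;
        local definitions override inherited ones."""
        attrs = {}
        for relation, parent in self.edges:
            if relation == "subclass_of":
                attrs.update(parent.effective_attributes())
        attrs.update(self.attributes)
        return attrs

furniture = ProductClass("Furniture", {"material": "any"})
chair = ProductClass("Chair", {"legs": 4})
chair.relate("subclass_of", furniture)
print(chair.effective_attributes())  # {'material': 'any', 'legs': 4}
```

Because relationships are plain labeled edges, reorganizing or specializing classes means rewiring edges rather than renumbering a fixed code hierarchy, which is the dynamic behavior the abstract argues code-based schemes like UNSPSC lack.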