• Title/Summary/Keyword: Boolean Research

Search results: 68

Weighted Constrained One-Bit Transform Method for Low-Complexity Block Motion Estimation

  • Choi, Youngkyoung;Kim, Hyungwook;Lim, Sojeong;Yu, Sungwook
    • ETRI Journal
    • /
    • v.34 no.5
    • /
    • pp.795-798
    • /
    • 2012
  • This letter proposes a new low-complexity motion estimation method. The proposed method classifies the various non-matching pixel pairs into several categories and assigns an appropriate weight to each category in the matching stage. As a result, it significantly improves performance compared with conventional methods while adding only one 1-bit addition and two Boolean operations per pixel.
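
For illustration only, the sketch below (not taken from the paper) shows how a weighted matching cost over one-bit-transformed blocks can be computed with a couple of Boolean operations per pixel; the two mismatch categories and the weights are assumptions.

    import numpy as np

    def weighted_matching_error(cur_bits, ref_bits, cur_mask, ref_mask, weights=(1, 2)):
        """Weighted block-matching cost for one-bit-transformed blocks.

        Non-matching pixel pairs are split into two categories (here, by whether
        both pixels are flagged as reliable by a constraint mask) and each
        category contributes a different weight. The categories and the weight
        values are illustrative assumptions, not the paper's definitions.
        """
        mismatch = np.logical_xor(cur_bits, ref_bits)   # Boolean operation 1: where the bit planes differ
        reliable = np.logical_and(cur_mask, ref_mask)   # Boolean operation 2: where both pixels are reliable
        # Reliable mismatches are weighted more heavily than unreliable ones.
        return (weights[0] * np.count_nonzero(mismatch & ~reliable)
                + weights[1] * np.count_nonzero(mismatch & reliable))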

End-mill Manufacturing and Developing of Processing Verification via Cutting Simulation (Cutting Simulation을 이용한 End-milling Cutter의 제작 및 가공 검증 기술 개발)

  • Kim J.H.;Kim J.H.;Ko T.J.;Park J.W.;Kim H.S.
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2006.05a
    • /
    • pp.453-454
    • /
    • 2006
  • This paper describes a machining verification technique for developing end-milling cutters. The developed software is a machining verification module for manufacturing. Using the cutting simulation method, the center points of the grinding wheel are obtained via a Boolean operation between the grinding wheel and a cylindrical workpiece. The obtained CL data can then be used to calculate NC data, and the process can be simulated with the designed grinding machine and the NC data. This research has been implemented on a commercial CAD system using API function programming. The operator can evaluate the cutting simulation process and reduce design and manufacturing time.
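
The following sketch shows one simple way a Boolean subtraction between a grinding-wheel model and a cylindrical workpiece could be simulated; the voxel occupancy grid and the spherical wheel model are illustrative assumptions, not the paper's solid-model implementation.

    import numpy as np

    def remove_material(workpiece, wheel_centers, wheel_radius, grid):
        """Voxel-based Boolean material removal (a simplified stand-in for the
        solid-model Boolean operation described in the abstract)."""
        # `workpiece` is a Boolean voxel occupancy grid of the cylindrical blank;
        # `grid` holds x, y, z coordinate arrays of the same shape (e.g. from np.meshgrid).
        xs, ys, zs = grid
        for cx, cy, cz in wheel_centers:
            # Spherical wheel model -- an assumption made purely for illustration.
            inside_wheel = (xs - cx) ** 2 + (ys - cy) ** 2 + (zs - cz) ** 2 <= wheel_radius ** 2
            workpiece &= ~inside_wheel   # Boolean subtraction: workpiece minus wheel volume
        return workpiece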

A Study on the Theoretical Structure Modeling using ISM & FSM (ISM과 FSM을 이용한 이론적 구조모형화에 대한 연구)

  • 조성훈;정민용
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.21 no.47
    • /
    • pp.219-232
    • /
    • 1998
  • Analyzing the structure of a system is difficult because of the complex, organic relations in the systems we face in reality. Research has focused on finding optimal solutions within a defined structure, on the assumption that the structure of the system has already been defined. Taking the identification of the structure as the most essential precondition, this paper suggests ISM (Interpretive Structural Modeling) and FSM (Fuzzy Structural Modeling) as solutions. ISM systematically applies elementary notions of graph theory and Boolean algebra, while FSM uses fuzzy concepts to represent the relationships between elements. In FSM, the entries of the relation matrix take values on the interval [0,1] by virtue of a fuzzy binary relation. Numerical examples are presented as applications.
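
As a minimal illustration of the Boolean-algebra step behind ISM, the sketch below computes a reachability matrix by repeated Boolean matrix multiplication; the example adjacency matrix is invented for demonstration and is not taken from the paper.

    import numpy as np

    def reachability_matrix(adjacency):
        """ISM reachability matrix via repeated Boolean squaring of (A OR I)."""
        n = adjacency.shape[0]
        m = np.logical_or(adjacency, np.eye(n, dtype=bool))
        while True:
            # Boolean matrix product: entry [i, j] = OR over k of (m[i, k] AND m[k, j]).
            m_next = np.logical_or.reduce(m[:, :, None] & m[None, :, :], axis=1)
            if np.array_equal(m_next, m):
                return m.astype(int)
            m = m_next

    # Example: element 0 influences 1, 1 influences 2, and 3 influences 2.
    adj = np.array([[0, 1, 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 0],
                    [0, 0, 1, 0]], dtype=bool)
    print(reachability_matrix(adj))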

An Association Discovery Algorithm Containing Quantitative Attributes with Item Constraints (수량적 속성을 포함하는 항목 제약을 고려한 연관규칙 마이닝 앨고리듬)

  • 한경록;김재련
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.22 no.50
    • /
    • pp.183-193
    • /
    • 1999
  • The problem of discovering association rules has received considerable research attention, and several fast algorithms for mining association rules have been developed. In this paper, we propose an efficient algorithm for mining quantitative association rules with item constraints. For categorical attributes, we map the attribute values to a set of consecutive integers; for quantitative attributes, we partition the attribute into values or ranges. While such constraints can be applied as a post-processing step, integrating them into the mining algorithm can reduce the execution time. We consider the problem of integrating constraints that are Boolean expressions over the presence or absence of items with quantitative attributes into an Apriori-based association discovery algorithm.
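
The sketch below illustrates, under simplifying assumptions, how a Boolean item constraint can be checked inside an Apriori-style mining loop instead of in a separate post-processing pass; the item encoding and the example constraint are not taken from the paper.

    def apriori_with_constraint(transactions, min_support, constraint):
        """Apriori-style frequent itemset mining with a Boolean constraint applied
        as itemsets are collected. Quantitative attributes are assumed to have
        already been mapped to discrete items, e.g. ("age", "30-39")."""
        transactions = [frozenset(t) for t in transactions]
        items = {frozenset([i]) for t in transactions for i in t}

        def support(itemset):
            return sum(itemset <= t for t in transactions) / len(transactions)

        frequent = []
        current = {i for i in items if support(i) >= min_support}
        while current:
            # Constraint checked during mining; failing itemsets are still expanded,
            # since the constraint need not be anti-monotone.
            frequent.extend(s for s in current if constraint(s))
            candidates = {a | b for a in current for b in current if len(a | b) == len(a) + 1}
            current = {c for c in candidates if support(c) >= min_support}
        return frequent

    # Example (hypothetical data): keep only itemsets that mention the "age" attribute.
    # apriori_with_constraint([{("age", "30-39"), ("income", "high")}, ...], 0.4,
    #                         constraint=lambda s: any(a == "age" for a, _ in s))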

Parametric Design of Complex Hull Forms

  • Kim Hyun-Cheol;Nowacki Horst
    • Journal of Ship and Ocean Technology
    • /
    • v.9 no.1
    • /
    • pp.47-63
    • /
    • 2005
  • In the present study, we suggest a new method for designing complex ship hull forms with multiple-domain B-spline surfaces that accounts for their topological arrangement, where all subdomains are fully defined in terms of form parameters, e.g., positional, differential and integral descriptors. For the construction of complex hull forms, free-form elementary models such as the forebody, afterbody and bulbs are united by Boolean operations and blending surfaces in compliance with the sectional area curve (SAC) of the whole ship. The new design process proposed in this paper is called Sectional Area Curve-Balanced Parametric Design (SAC-BPD).

Analysis of the Methods to Decrease the Depth of Menu in Web Site (웹사이트 메뉴 Depth를 줄이는 방식간의 비교 분석)

  • Park, Hui-Seok;Kim, Yu-No
    • Journal of the Ergonomics Society of Korea
    • /
    • v.19 no.3
    • /
    • pp.61-75
    • /
    • 2000
  • To enhance web site usability, it has been suggested that the depth of tree-structured menus should be minimized. In this research, experimental results are reported that quantitatively compare the methods currently used for reducing menu depth in web sites. Twenty-five popular web sites were selected and their menu types were categorized into four types: top menu, drop-down menu, Boolean menu, and table of contents. The four types were then sub-categorized into 15 different types according to their sub-menu type, the use of menu colors, and the event occurring after mouse activation. Performance tests and subjective evaluations were carried out. The results showed no significant differences in response time among the 15 menu types, while the table-of-contents and drop-down menus, in which the first and second menu levels were visible, induced the fewest errors. In the subjective evaluation, the top-menu structure with colors that presented its sub-menus without a mouse click was preferred.

Coregistration of QuickBird Imagery and Digital Map Using a Modified ICP Algorithm (수정된 ICP알고리즘을 이용한 수치지도와 QuickBird 영상의 보정)

  • Han, Dong-Yeob;Eo, Yang-Dam;Kim, Yong-Hyun;Lee, Kwang-Jae;Kim, Youn-Soo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.6
    • /
    • pp.621-626
    • /
    • 2010
  • For the geometric correction of high-resolution images, the authors matched corresponding objects between a large-scale digital map and a QuickBird image to obtain the coefficients of a first-order polynomial. Proximity corrections were performed using Boolean operations so that the automated matching could be carried out accurately. A modified iterative closest point (ICP) algorithm was applied between the point data of the surface linear objects and the point data of the edge objects of the image to determine accurate transformation coefficients. As a result of the automated geometric correction of the study site, a root mean square error (RMSE) of 1.207 pixels was obtained.
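
A minimal ICP sketch follows to illustrate the general idea of alternating closest-point matching with a first-order polynomial (affine) fit; the brute-force nearest-neighbour search and the fixed iteration count are simplifications, not the authors' modified algorithm.

    import numpy as np

    def icp_affine(source, target, iterations=20):
        """Generic ICP loop: match each source point (e.g. digital-map linear objects)
        to its nearest target point (e.g. image edge objects), then re-fit a
        first-order polynomial transform by least squares."""
        src = np.asarray(source, dtype=float)      # (n, 2) source points
        tgt = np.asarray(target, dtype=float)      # (m, 2) target points
        A = np.c_[src, np.ones(len(src))]          # [x, y, 1] design matrix for x' = a*x + b*y + c
        transform = np.array([[1.0, 0.0, 0.0],
                              [0.0, 1.0, 0.0]])    # start from the identity mapping
        for _ in range(iterations):
            moved = A @ transform.T                                    # apply current estimate
            dists = np.linalg.norm(moved[:, None, :] - tgt[None, :, :], axis=2)
            matched = tgt[np.argmin(dists, axis=1)]                    # closest-point correspondences
            coeffs, *_ = np.linalg.lstsq(A, matched, rcond=None)       # re-estimate the polynomial
            transform = coeffs.T
        moved = A @ transform.T
        dists = np.linalg.norm(moved[:, None, :] - tgt[None, :, :], axis=2)
        rmse = np.sqrt(np.mean(dists.min(axis=1) ** 2))                # residual error in pixels
        return transform, rmse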

Intelligent information filtering using rough sets

  • Ratanapakdee, Tithiwat;Pinngern, Ouen
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회 학술대회논문집)
    • /
    • 2004.08a
    • /
    • pp.1302-1306
    • /
    • 2004
  • This paper proposes a model for information filtering (IF) on the Web. In this model, the user information need is described at two levels: profiles at the category level and Boolean queries at the document level. To estimate the relevance between the user information need and documents efficiently, the user information need is treated as a rough set on the space of documents. Rough set decision theory is used to classify new documents according to the user information need, dividing them into three parts: a positive region, a boundary region, and a negative region. The user profile is modified using the user's relevance feedback and discerning words in the documents. In the experiments, we compared three methods: searching documents that had not passed the filtering system, searching documents that had passed the filtering system, and searching documents after the user profile was modified. The results show that these techniques achieve higher precision.
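
The sketch below illustrates the rough-set partition into positive, boundary and negative regions; the term-based indiscernibility relation it uses is an assumption made for demonstration and not the paper's exact construction.

    def rough_set_regions(documents, relevant, key_terms):
        """Partition documents into positive, boundary and negative regions.

        Documents are grouped into equivalence classes by which profile key terms
        they contain; a class whose members are all relevant goes to the positive
        region, a mixed class to the boundary region, the rest to the negative region.
        """
        classes = {}
        for doc_id, words in documents.items():
            signature = frozenset(t for t in key_terms if t in words)
            classes.setdefault(signature, set()).add(doc_id)

        positive, boundary, negative = set(), set(), set()
        for members in classes.values():
            if members <= relevant:
                positive |= members          # lower approximation
            elif members & relevant:
                boundary |= members          # upper approximation minus lower
            else:
                negative |= members
        return positive, boundary, negative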

Retrieval methodology for similar NPP LCO cases based on domain specific NLP

  • No Kyu Seong;Jae Hee Lee;Jong Beom Lee;Poong Hyun Seong
    • Nuclear Engineering and Technology
    • /
    • v.55 no.2
    • /
    • pp.421-431
    • /
    • 2023
  • Nuclear power plants (NPPs) have technical specifications (Tech Specs) to ensure that the equipment and key operating parameters necessary for the safe operation of the plant are maintained within limiting conditions for operation (LCO) determined by a safety analysis. The LCO of the Tech Specs, which identify the lowest functional capability of equipment required for safe operation of a facility, must be complied with for the safe operation of the NPP. Previous studies have applied rule-based expert systems to aid compliance with the LCO; however, expert systems have an obvious limit in implementing rules for the many situations related to the LCO. Therefore, in this study, we present a retrieval methodology for similar LCO cases to help determine whether an LCO is met or not met. To reflect NPP-specific features in the natural language processing, a domain dictionary was built and the optimal term frequency-inverse document frequency (TF-IDF) variant was selected. The retrieval performance was improved by adding a Boolean retrieval model based on terms related to the LCO to the vector space model. The developed domain dictionary and retrieval methodology are expected to be highly useful in determining whether an LCO is met.
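
As a rough illustration of combining a Boolean retrieval model with a vector space model, the sketch below first filters cases on LCO-related terms and then ranks them by TF-IDF cosine similarity; the particular TF-IDF variant and the data layout are assumptions, not the ones selected in the study.

    import math
    from collections import Counter

    def retrieve_similar_cases(query_terms, lco_terms, case_index):
        """Rank cases by TF-IDF cosine similarity over the subset of cases that
        pass a Boolean OR filter on LCO-related terms. `case_index` maps a case
        id to its list of (already tokenized) terms."""
        n_docs = len(case_index)
        df = Counter(t for terms in case_index.values() for t in set(terms))

        def tfidf(terms):
            tf = Counter(terms)
            return {t: (1 + math.log(c)) * math.log(n_docs / df[t])
                    for t, c in tf.items() if df[t]}

        def cosine(u, v):
            dot = sum(u[t] * v.get(t, 0.0) for t in u)
            norm = (math.sqrt(sum(x * x for x in u.values()))
                    * math.sqrt(sum(x * x for x in v.values())))
            return dot / norm if norm else 0.0

        q_vec = tfidf(query_terms)
        scores = {}
        for case_id, terms in case_index.items():
            if not any(t in terms for t in lco_terms):   # Boolean retrieval step
                continue
            scores[case_id] = cosine(q_vec, tfidf(terms))  # vector space step
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)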

PubMine: An Ontology-Based Text Mining System for Deducing Relationships among Biological Entities

  • Kim, Tae-Kyung;Oh, Jeong-Su;Ko, Gun-Hwan;Cho, Wan-Sup;Hou, Bo-Kyeng;Lee, Sang-Hyuk
    • Interdisciplinary Bio Central
    • /
    • v.3 no.2
    • /
    • pp.7.1-7.6
    • /
    • 2011
  • Background: Published manuscripts are the main source of biological knowledge. Since manual examination is almost impossible due to the huge volume of literature data (approximately 19 million abstracts in PubMed), intelligent text mining systems are of great utility for knowledge discovery. However, most current text mining tools have limited applicability because they i) provide abstract-based rather than sentence-based search, ii) use ontology terms improperly or not at all, iii) are designed for specific subjects, or iv) respond too slowly for web services and real-time applications. Results: We introduce an advanced text mining system called PubMine that supports intelligent knowledge discovery based on diverse bio-ontologies. PubMine improves query accuracy and flexibility with advanced search capabilities: fuzzy search, wildcard search, proximity search, range search, and Boolean combinations. Furthermore, PubMine allows users to extract multi-dimensional relationships between genes, diseases, and chemical compounds by using OLAP (On-Line Analytical Processing) techniques. The HUGO gene symbols and the MeSH ontology for diseases, chemical compounds, and anatomy have been included in the current version of PubMine, which is freely available at http://pubmine.kobic.re.kr. Conclusions: PubMine is a unique bio-text mining system that provides flexible searches and analysis of biological entity relationships. We believe that PubMine will serve as a key bioinformatics utility owing to its rapid response, which enables web services for the community, and its flexibility in accommodating general ontologies.
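
For illustration, the sketch below evaluates a small Boolean query combination over an inverted index; the nested-tuple query format and the index layout are assumptions for this sketch only and do not reflect PubMine's actual interface.

    def boolean_search(index, all_ids, query):
        """Evaluate a Boolean query tree against an inverted index mapping a term
        to the set of sentence (or abstract) IDs that contain it."""
        op, *args = query if isinstance(query, tuple) else ("TERM", query)
        if op == "TERM":
            return index.get(args[0], set())
        if op == "AND":
            return set.intersection(*(boolean_search(index, all_ids, a) for a in args))
        if op == "OR":
            return set.union(*(boolean_search(index, all_ids, a) for a in args))
        if op == "NOT":
            return all_ids - boolean_search(index, all_ids, args[0])
        raise ValueError(f"unknown operator: {op}")

    # Example (hypothetical query): sentences mentioning BRCA1 together with either
    # "breast cancer" or "ovarian cancer".
    # query = ("AND", "BRCA1", ("OR", "breast cancer", "ovarian cancer"))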