• Title/Summary/Keyword: the concept of vector

Search Results: 270

An Implementation of an ENC Representation System which meets S-52 presentation specification and S-57 transfer standards (S-52 표현사양 및 S-57 교환표준을 만족하는 전자해도 표현 시스템 구현)

  • 서상현;이희용
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1999.11a / pp.146-150 / 1999
  • With the advent of the digital era, ECDIS has emerged as a new navigation aid that should bring significant benefits to safe navigation. More than simply a graphics display, ECDIS is a new-concept navigation system capable of providing integrated geographical and textual information. As the official vector data for ECDIS, an ENC consists of spatial and feature data that describe objects in the form of points, lines, and areas. IHO has published international standards for ENC, such as S-52 (Specification for Chart Content and Display Aspects of ECDIS) and S-57 (IHO Transfer Standard for Digital Hydrographic Data). This paper deals with the implementation of an ENC representation system which meets the S-52 presentation specification and the S-57 transfer standard by analyzing S-57 data structures, converting them to appropriate internal data structures, and rendering them on screen according to the S-52 presentation specification.
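
To make the pipeline sketched in this abstract concrete (parse S-57 feature/spatial records, convert them to internal structures, then draw them using S-52 symbology), here is a minimal illustrative sketch in Python; the class names, fields, and the tiny look-up table are assumptions for illustration, not the structures used in the paper.

```python
# Minimal sketch (not the paper's implementation) of an internal structure an
# ENC viewer might convert S-57 records into before applying S-52 symbology.
# All class names, fields, and look-up entries are hypothetical/simplified.
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Tuple


class GeometryType(Enum):
    POINT = 1   # e.g. a buoy
    LINE = 2    # e.g. a depth contour
    AREA = 3    # e.g. a dredged area


@dataclass
class EncFeature:
    object_class: str                       # S-57 object acronym, e.g. "DEPCNT"
    geometry: GeometryType
    coordinates: List[Tuple[float, float]]  # (lon, lat) pairs
    attributes: Dict[str, str] = field(default_factory=dict)


def lookup_symbology(feature: EncFeature) -> str:
    """Map a feature to a display instruction in the spirit of an S-52
    look-up table; the real tables are far larger and more detailed."""
    table = {"DEPCNT": "LS(SOLD,1,DEPCN)", "BOYLAT": "SY(BOYLAT01)"}
    return table.get(feature.object_class, "SY(QUESMRK1)")  # fallback symbol


contour = EncFeature("DEPCNT", GeometryType.LINE,
                     [(129.0, 35.1), (129.1, 35.2)], {"VALDCO": "10"})
print(lookup_symbology(contour))
```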

New Method for Station Keeping of Geostationary Spacecraft Using Relative Orbital Motion and Optimization Technique (상대 운동과 최적화 기법을 이용한 정지궤도 위치유지에 관한 연구)

  • Jung, Ok-Chul;No, Tae-Soo;Lee, Sang-Cherl;Yang, Koon-Ho;Choi, Seong-Bong
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.33 no.1 / pp.39-47 / 2005
  • In this paper, a station keeping strategy using relative orbital motion and a numerical optimization technique is presented for geostationary spacecraft. The relative position vector with respect to an ideal geostationary orbit is generated using high-precision orbit propagation and compressed in terms of polynomial and trigonometric functions. This relative orbit model is then combined with an optimization scheme to yield a very efficient and flexible method of station keeping planning. Proper selection of objective and constraint functions for the optimization can yield a variety of station keeping methods improved over the classical ones. Results from nonlinear simulations support this concept.
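
As a rough illustration of the "compression" step described above, the sketch below least-squares fits a relative-position time series against a polynomial-plus-daily-sinusoid basis; the basis choice, constants, and synthetic data are assumptions, not the authors' exact model.

```python
# Minimal sketch: fit one component of the propagated relative motion
# (e.g. the along-track offset from an ideal geostationary slot) with a
# polynomial plus daily-period sinusoids, as a stand-in for a compressed
# relative orbit model. Basis and test data are illustrative only.
import numpy as np

SIDEREAL_DAY = 86164.1  # seconds; period of the geostationary orbit


def fit_relative_motion(t, y, poly_order=2, harmonics=2):
    """Least-squares fit y(t) ~ polynomial + sum of sin/cos harmonics."""
    cols = [t**k for k in range(poly_order + 1)]
    for n in range(1, harmonics + 1):
        w = 2 * np.pi * n / SIDEREAL_DAY
        cols += [np.sin(w * t), np.cos(w * t)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs, A @ coeffs  # model coefficients and fitted values


# Synthetic example: slow drift plus a daily oscillation, with noise.
t = np.linspace(0, 7 * SIDEREAL_DAY, 500)
y = 1e-4 * t + 3.0 * np.sin(2 * np.pi * t / SIDEREAL_DAY) \
    + np.random.normal(0, 0.1, t.size)
coeffs, y_fit = fit_relative_motion(t, y)
print("rms residual:", np.sqrt(np.mean((y - y_fit) ** 2)))
```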

Extracting Alternative Word Candidates for Patent Information Search (특허 정보 검색을 위한 대체어 후보 추출 방법)

  • Baik, Jong-Bum;Kim, Seong-Min;Lee, Soo-Won
    • Journal of KIISE: Computing Practices and Letters / v.15 no.4 / pp.299-303 / 2009
  • Patent information search is used for checking the existence of earlier works. In patent information search, there are many reasons why a query fails to retrieve appropriate information. This research proposes a method for extracting alternative word candidates in order to minimize search failures due to keyword mismatch. Assuming that two words have similar meanings if they share similar co-occurrence words, the proposed method uses the concepts of concentration, association word sets, cosine similarity between association word sets, and a ranking modification technique. The performance of the proposed method is evaluated using a manually extracted list of alternative word candidates. Evaluation results show that the proposed method outperforms the document vector space model in recall.
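
The core scoring idea, ranking alternative-word candidates by the cosine similarity of their association (co-occurrence) word sets, can be sketched as follows; the toy corpus and helper functions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: score candidate alternative words for a query term by the
# cosine similarity of their co-occurrence vectors (association word sets).
from collections import Counter
from math import sqrt


def cooccurrence_vector(term, documents):
    """Count the words that co-occur with `term` in the same document."""
    counts = Counter()
    for doc in documents:
        words = doc.split()
        if term in words:
            counts.update(w for w in words if w != term)
    return counts


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


docs = [
    "mobile phone battery capacity",
    "cellular phone battery capacity",
    "mobile handset display panel",
    "cellular handset display panel",
]
profile = cooccurrence_vector("mobile", docs)
candidates = ["cellular", "display", "battery"]
ranking = sorted(candidates,
                 key=lambda c: cosine(profile, cooccurrence_vector(c, docs)),
                 reverse=True)
print(ranking)  # "cellular" ranks first: it shares the most context words
```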

A Semantic Similarity Decision Using Ontology Model Base On New N-ary Relation Design (새로운 N-ary 관계 디자인 기반의 온톨로지 모델을 이용한 문장의미결정)

  • Kim, Su-Kyoung;Ahn, Kee-Hong;Choi, Ho-Jin
    • Journal of the Korean Society for Information Management / v.25 no.4 / pp.43-66 / 2008
  • Much research is currently being conducted on 'user information demand description' for the interfaces of information retrieval systems and Web search engines, but describing a user's information demand in natural language form remains difficult. This is because existing approaches cannot provide a semantic similarity measure that satisfies both the variety of information demand expressions and the semantic relevance required for user information description. Therefore, this study proposes a decision method that can fully support user information demand description by using description logic, the knowledge representation basis of OWL, together with vector model-based weights between concepts, so as to satisfy both the variety of information demand expressions and their semantic relevance. Experimental results show that the proposed method decides the semantic similarity of polysemes and synonyms with excellent performance.

Mesh Simplification Algorithm Using Differential Error Metric (미분 오차 척도를 이용한 메쉬 간략화 알고리즘)

  • 김수균;김선정;김창헌
    • Journal of KIISE: Computer Systems and Theory / v.31 no.5_6 / pp.288-296 / 2004
  • This paper proposes a new mesh simplification algorithm using a differential error metric. Many simplification algorithms make use of a distance error metric, but it is hard to measure an accurate geometric error for a high-curvature region even though it has a small distance error. This paper proposes a new differential error metric that unifies a distance metric with its first- and second-order differentials, which become tangent vector and curvature metrics. Since discrete surfaces may be considered piecewise linear approximations of unknown smooth surfaces, these differentials can be estimated, and with them we can construct a new differential error metric for discrete surfaces. For our simplification algorithm based on iterative edge collapses, this differential error metric assigns the new vertex position so as to maintain the geometry of the original appearance. We clearly show that our simplified results have better quality and smaller geometric error than others.
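
A rough sketch of such a combined error, shown on a 2D curve rather than a mesh for brevity, appears below: it mixes a positional term with penalties on the change of estimated tangents and discrete curvature. The estimators and weights are illustrative assumptions, not the paper's exact metric.

```python
# Minimal sketch: score a simplification candidate by combining a distance
# term with first-order (tangent) and second-order (curvature) deviations,
# in the spirit of a "differential" error metric. 2D curves stand in for
# mesh geometry; estimators and weights are illustrative only.
import numpy as np


def polyline_error(original, simplified, w_dist=1.0, w_tan=0.5, w_curv=0.5):
    """Compare two 2D polylines sampled at the same parameter values and
    return a weighted sum of positional, tangent, and curvature differences."""
    d_pos = np.linalg.norm(original - simplified, axis=1).mean()

    def tangents(p):
        t = np.gradient(p, axis=0)
        return t / np.linalg.norm(t, axis=1, keepdims=True)

    def curvature(p):
        return np.linalg.norm(np.gradient(tangents(p), axis=0), axis=1)

    d_tan = np.linalg.norm(tangents(original) - tangents(simplified), axis=1).mean()
    d_curv = np.abs(curvature(original) - curvature(simplified)).mean()
    return w_dist * d_pos + w_tan * d_tan + w_curv * d_curv


theta = np.linspace(0, np.pi, 50)
arc = np.column_stack([np.cos(theta), np.sin(theta)])            # high-curvature curve
chord = np.column_stack([np.linspace(1, -1, 50), np.zeros(50)])  # flattened candidate
print(polyline_error(arc, chord))  # tangent/curvature terms penalize flattening the arc
```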

An Extended Dynamic Web Page Recommendation Algorithm Based on Mining Frequent Traversal Patterns (빈발 순회패턴 탐사에 기반한 확장된 동적 웹페이지 추천 알고리즘)

  • Lee KeunSoo;Lee Chang Hoon;Yoon Sun-Hee;Lee Sang Moon;Seo Jeong Min
    • Journal of Korea Multimedia Society / v.8 no.9 / pp.1163-1176 / 2005
  • The Web is the largest distributed information space, but an individual's capacity to read and digest content is essentially fixed. In this Web environment, mining traversal patterns is an important problem in Web mining, with a host of application domains including system design and information services. Conventional traversal pattern mining systems use inter-page associations within sessions with only a very restricted mechanism (based on vectors or matrices) for generating frequent K-pagesets. We extend a family of novel algorithms (termed WebPR - Web Page Recommend) for mining frequent traversal patterns and then recommending pagesets. We add a WebPR(A) algorithm to the family of WebPR algorithms and propose a new winWebPR(T) algorithm that introduces a window concept on top of WebPR(T). Including the two extended algorithms, our experimentation with two real data sets, the LadyAsiana and KBS media server sites, clearly validates that our method outperforms conventional methods.
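
As a toy illustration of mining traversal patterns within a window over session click-streams (in the spirit of the window concept mentioned for winWebPR(T), not the WebPR algorithms themselves), the sketch below counts contiguous page subsequences up to a window length and keeps those meeting a support threshold.

```python
# Minimal sketch: count contiguous page subsequences of length 2..window in
# each session and keep those meeting a minimum support. This illustrates
# the windowed traversal-pattern idea only; it is not the WebPR family.
from collections import Counter


def frequent_traversal_patterns(sessions, window=3, min_support=2):
    counts = Counter()
    for session in sessions:
        seen = set()  # count each pattern at most once per session
        for size in range(2, window + 1):
            for i in range(len(session) - size + 1):
                seen.add(tuple(session[i:i + size]))
        counts.update(seen)
    return {p: c for p, c in counts.items() if c >= min_support}


sessions = [
    ["home", "news", "sports", "video"],
    ["home", "news", "sports"],
    ["home", "login", "news", "sports"],
]
for pattern, support in sorted(frequent_traversal_patterns(sessions).items()):
    print(pattern, support)
```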

Knowledge-poor Term Translation using Common Base Axis with application to Korean-English Cross-Language Information Retrieval (과도한 지식을 요구하지 않는 공통기반축에 의한 용어 번역과 한영 교차정보검색에의 응용)

  • 최용석;최기선
    • Korean Journal of Cognitive Science / v.14 no.1 / pp.29-40 / 2003
  • Cross-Language Information Retrieval (CLIR) deals with documents in various languages through a query in one language. A user who uses one language can retrieve documents in another language through a CLIR system. In CLIR, the query translation method is known to be more efficient. For better query translation performance, we need resources such as dictionaries, ontologies, and parallel/comparable corpora, which are usually not available. This paper proposes a new concept called the Common Base Axis, which is adapted to Korean-English query translation, and a new weighting method in dictionary-based query translation. The essential idea is that we can express Korean and English words in one vector space by means of the Common Base Axis and use it to calculate sense distance for query weighting. The experiments show that the Common Base Axis gives good performance without an ontology and is especially good for one-word query translation.
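
The core idea, expressing words of both languages over shared base axes and using distance in that space to weight translation candidates, might be sketched as follows; the axes, vectors, and weighting rule are purely illustrative assumptions.

```python
# Minimal sketch: Korean and English words expressed over a shared set of
# "base axis" concept dimensions; translation candidates for a query term
# are weighted by cosine similarity to the rest of the query, measured on
# the common axes. All axes and vectors below are made up for illustration.
import numpy as np

AXES = ["finance", "geography", "computing"]   # hypothetical common base axes

vectors = {
    "은행":      np.array([0.90, 0.20, 0.10]),  # Korean query term ("bank")
    "계좌":      np.array([0.80, 0.05, 0.20]),  # Korean context term ("account")
    "bank":      np.array([0.85, 0.25, 0.10]),  # candidate translation 1
    "riverbank": np.array([0.05, 0.90, 0.05]),  # candidate translation 2
}


def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def weight_candidates(candidates, context_terms):
    """Weight each English candidate by its mean similarity to the Korean
    query context, all measured on the common base axes."""
    return {c: np.mean([cosine(vectors[c], vectors[k]) for k in context_terms])
            for c in candidates}


print(weight_candidates(["bank", "riverbank"], ["은행", "계좌"]))
```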

A study on the process of mapping data and conversion software using PC-clustering (PC-clustering을 이용한 매핑자료처리 및 변환소프트웨어에 관한 연구)

  • WhanBo, Taeg-Keun;Lee, Byung-Wook;Park, Hong-Gi
    • Journal of Korean Society for Geospatial Information Science / v.7 no.2 s.14 / pp.123-132 / 1999
  • With the rapid increase in the amount of data and computing, parallelization of computing algorithms has become more necessary than ever. However, until the mid-1990s parallelization was conducted mostly on supercomputers and was not accessible to general users due to the high price, the complexity of usage, and so on. A new concept for parallel processing emerged in the form of PC-clustering in the late 1990s; it has become an excellent alternative for applications that need high computing power at a relatively low cost, although installation and usage are still difficult for general users. The mapping algorithms in GIS (cut, join, resizing, warping, conversion from raster to vector and vice versa, etc.) are well suited for parallelization due to the characteristics of their data structures. If those algorithms are implemented using PC-clustering, the result will be satisfactory in terms of cost and performance, since they are processed in real time at a low cost. In this paper, the tools and libraries for parallel processing and PC-clustering are introduced, and it is shown how those tools and libraries are applied to mapping algorithms in GIS. Parallel programs were developed for the mapping algorithms, and the results of the experiments show that the performance of most algorithms increases almost linearly with the number of nodes.
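
The data-parallel decomposition that makes such mapping operations cluster-friendly can be illustrated on a single machine with Python's multiprocessing; the band-splitting scheme and the per-band operation below are assumptions for illustration, not the paper's cluster setup.

```python
# Minimal single-machine illustration of the data-parallel idea behind
# cluster-based mapping operations: split a raster into row bands, process
# each band in a separate worker, and stitch the results back together.
# Halo exchange at band boundaries is ignored here for brevity.
import numpy as np
from multiprocessing import Pool


def process_band(band):
    """Stand-in for a per-tile mapping operation (here: simple row smoothing)."""
    out = band.astype(float).copy()
    out[1:-1] = (band[:-2] + band[1:-1] + band[2:]) / 3.0
    return out


def parallel_map(raster, workers=4):
    bands = np.array_split(raster, workers, axis=0)
    with Pool(workers) as pool:
        return np.vstack(pool.map(process_band, bands))


if __name__ == "__main__":
    raster = np.random.randint(0, 255, size=(2000, 2000))
    result = parallel_map(raster)
    print(result.shape)
```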

Multiple Cause Model-based Topic Extraction and Semantic Kernel Construction from Text Documents (다중요인모델에 기반한 텍스트 문서에서의 토픽 추출 및 의미 커널 구축)

  • 장정호;장병탁
    • Journal of KIISE: Software and Applications / v.31 no.5 / pp.595-604 / 2004
  • Automatic analysis of concepts or semantic relations from text documents enables not only the efficient acquisition of relevant information, but also the comparison of documents at the concept level. We present a multiple cause model-based approach to text analysis, where latent topics are automatically extracted from document sets and similarity between documents is measured by semantic kernels constructed from the extracted topics. In our approach, a document is assumed to be generated by various combinations of underlying topics. A topic is defined by a set of words that are related to the same theme or co-occur frequently within a document. In a network representing a multiple-cause model, each topic is identified by a group of words having high connection weights from a latent node. In order to facilitate learning and inference in multiple-cause models, some approximation methods are required, and we utilize an approximation by Helmholtz machines. In an experiment on the TDT-2 data set, we extract sets of meaningful words, where each set contains some theme-specific terms. Using semantic kernels constructed from latent topics extracted by multiple cause models, we also achieve significant improvements over the basic vector space model in terms of retrieval effectiveness.
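
A semantic kernel of the kind described, built from a topic-by-word weight matrix so that documents sharing topic-related (but not identical) words become similar, could be sketched as follows; the hand-written topic matrix and toy documents stand in for the latent topics the paper learns with Helmholtz-machine approximation.

```python
# Minimal sketch: given a latent topic-by-word weight matrix W (made up by
# hand here rather than learned with a multiple cause model), build a
# semantic kernel K = W^T W and compare documents with d1^T K d2 instead
# of the plain inner product d1^T d2.
import numpy as np

vocab = ["stock", "market", "share", "game", "team", "score"]
# Two hand-written "topics": finance and sports (illustrative only).
W = np.array([
    [0.9, 0.8, 0.7, 0.0, 0.0, 0.0],   # finance topic weights over vocab
    [0.0, 0.0, 0.1, 0.9, 0.8, 0.7],   # sports topic weights over vocab
])
K = W.T @ W                            # semantic kernel over the vocabulary


def bow(words):
    """Bag-of-words vector over the toy vocabulary."""
    v = np.zeros(len(vocab))
    for w in words:
        v[vocab.index(w)] += 1
    return v


d1 = bow(["stock", "market"])
d2 = bow(["share"])                      # no literal overlap with d1
d3 = bow(["team", "score"])

print("plain   d1.d2  :", d1 @ d2)       # 0: no shared terms
print("kernel  d1.K.d2:", d1 @ K @ d2)   # > 0: related through the finance topic
print("kernel  d1.K.d3:", d1 @ K @ d3)   # 0: different topic
```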

Application and Expansion of the Harm Principle to the Restrictions of Liberty in the COVID-19 Public Health Crisis: Focusing on the Revised Bill of the March 2020 「Infectious Disease Control and Prevention Act」 (코로나19 공중보건 위기 상황에서의 자유권 제한에 대한 '해악의 원리'의 적용과 확장 - 2020년 3월 개정 「감염병의 예방 및 관리에 관한 법률」을 중심으로 -)

  • You, Kihoon;Kim, Dokyun;Kim, Ock-Joo
    • The Korean Society of Law and Medicine / v.21 no.2 / pp.105-162 / 2020
  • In a pandemic of infectious disease, restrictions of individual liberty have been justified in the name of public health and the public interest. In March 2020, the National Assembly of the Republic of Korea passed the revised bill of the 「Infectious Disease Control and Prevention Act」. The revised bill newly established the legal basis for forced testing and disclosure of the information of confirmed cases, and also raised the penalties for violation of self-isolation and for treatment refusal. This paper examines whether and how these liberty-limiting clauses can be justified, and if so on what ethical and philosophical grounds. The authors review theories of the philosophy of law related to the justifiability of liberty-limiting measures by the state and conceptualize the dual aspect of applying the liberty-limiting principle to the infected patient. In the COVID-19 pandemic crisis, the infected person becomes the 'Patient as Victim and Vector (PVV)' situated in the overlapping area of 'harm to self' and 'harm to others.' In order to apply the liberty-limiting principle proposed by Joel Feinberg to a pandemic with uncertainties, it is necessary to extend the harm principle from 'harm' to 'risk'. Under a crisis with many uncertainties like the COVID-19 pandemic, this shift from 'harm' to 'risk' justifies the state's preemptive limitation of individual liberty based on the precautionary principle. This, at the same time, raises concerns of overcriminalization, i.e., too much limitation of individual liberty without sufficient grounds. In this article, we aim to propose principles regarding how to balance the precautionary principle for preemptive restrictions of liberty against the concerns of overcriminalization. A public health crisis such as the COVID-19 pandemic requires a population approach where the 'population' rather than the 'individual' works as the unit of analysis. We propose a second expansion of the harm principle to apply it to the 'population' in order to deal with the public interest and public health. The new concept 'risk to population,' derived from the two arguments stated above, should be introduced to explain a public health crisis like the COVID-19 pandemic. We theorize 'the extended harm principle' to include the 'risk to population' as a third liberty-limiting principle following 'harm to others' and 'harm to self.' Lastly, we examine whether the restrictions of liberty in the revised 「Infectious Disease Control and Prevention Act」 can be justified under the extended harm principle. First, we conclude that forced isolation of the infected patient can be justified in a pandemic situation by satisfying the 'risk to population.' Secondly, the forced examination for COVID-19 does not violate the extended harm principle either, based on the high infectivity of asymptomatic infected people to others. Thirdly, however, the provision of forced treatment cannot be justified, under either the traditional harm principle or the extended harm principle. Therefore, it is necessary to include additional clauses in the provision in order to justify the punishment of treatment refusal even in a pandemic.