• Title/Summary/Keyword: Interoperability Definition

Calibration and Validation Activities for Earth Observation Mission Future Evolution for GMES

  • LECOMTE Pascal
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.237-240
    • /
    • 2005
  • Calibration and Validation are major elements of any spaceborne Earth Observation mission. These activities are the main objective of the commissioning phase, but routine activities must be maintained throughout the mission in order to maintain the quality of the products delivered to users, or at least to fully characterise how product quality evolves over time. With the launch of ERS-1 in 1991, the European Space Agency decided to put in place a group dedicated to these activities, along with daily monitoring of product quality for anomaly detection and algorithm evolution; these four elements are all strongly linked. Today this group is fully responsible for monitoring two ESA missions, ERS-2 and Envisat, for a total of 12 instruments of various types, and is preparing for the Earth Explorer series of five further satellites (CryoSat, GOCE, SMOS, ADM-Aeolus, Swarm) as well as involvement, at various levels, in past and future Third Party Missions such as Landsat, J-ERS, ALOS and KOMPSAT. The joint proposal by the European Union and the European Space Agency for a 'Global Monitoring for Environment and Security' (GMES) project triggers a review of the scope of these activities in a much wider framework than the handling of single missions with specific tools, methods and activities. Because of the global objective of this proposal, it is necessary to put in place multi-mission Calibration and Validation systems and procedures. GMES Calibration and Validation activities will rely on multi-source data access, interoperability, long-term data preservation, and definition standards to meet these objectives. The scope of this presentation is to give an overview of the current Calibration and Validation activities at ESA and their planned evolution in the context of GMES.

Research on Convergence of Internet-of-Things and Cloud Computing (사물인터넷과 클라우드 컴퓨팅의 융합에 대한 연구)

  • Choi, Kyung;Kim, Mihui
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.5
    • /
    • pp.1-12
    • /
    • 2016
  • Internet of Things (IoT) technologies computerize information generated by a variety of objects and humans and connect it via the Internet, and they have been applied to many fields. To compensate for the constrained characteristics of IoT smart devices, such as low power and limited processing and storage capacity, combining IoT with cloud computing technology has established itself as one of the prevailing paradigms. In this paper, we look at the definitions, features, and services of IoT and cloud computing technology, and we investigate and analyze the needs driving their convergence, existing convergence paradigms, convergence cases, and platforms. As a result, challenges remain to be solved even though cloud technologies compensate for many restrictions of IoT and offer advantages such as scalability, interoperability, reliability, efficiency, availability, security, ease of access, ease of use, and reduced deployment cost. We analyze the new research issues of the convergence paradigm and finally suggest research challenges for convergence.

A Study on Data Sharing Codes Definition of Chinese in CAI Application Programs (CAI 응용프로그램 작성시 자료공유를 위한 한자 코드 체계 정의에 관한 연구)

  • Kho, Dae-Ghon
    • Journal of The Korean Association of Information Education
    • /
    • v.2 no.2
    • /
    • pp.162-173
    • /
    • 1998
  • Writing a CAI program that contains Chinese characters requires a common Chinese character code so that information can be shared for educational purposes. Such a code set needs to allow mixed use of both reading (phonetic) order and stroke order, to represent Chinese characters in their simplified Chinese and Japanese forms, and to provide a conversion process for data exchange among different Chinese code sets. When reading order is used, waste in the code area is expected because heteronyms (characters with more than one reading) are assigned separate codes; using stroke order, by contrast, prevents duplicate code generation and facilitates data recovery, although it does not follow the phonetic rule. We claim that the first- and second-level Chinese character code areas need to be expanded as much as academic and industrial circles have demanded. We also assert that Unicode can serve as a temporary measure for an educational code system because of its interoperability, expandability, and expressiveness of character sets (a hedged illustrative sketch follows this entry).

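A hedged illustration of the duplicate-code issue described in the entry above (not taken from the paper): when Hanja are encoded in reading order, a character with several Korean readings receives several code points, which Unicode preserves as compatibility ideographs and folds back to a single unified character under normalization. The character 樂 (readings 락/악/요) is a commonly cited example; the specific code points below are assumptions drawn from the Unicode compatibility block and worth double-checking.

```python
import unicodedata

# 樂 (U+6A02) has several Korean readings; reading-ordered code sets assigned it
# duplicate codes, which survive in Unicode as compatibility ideographs.
unified = "\u6A02"
duplicates = ["\uF914", "\uF95C", "\uF9BF"]  # assumed compatibility ideographs for 樂

for ch in duplicates:
    folded = unicodedata.normalize("NFC", ch)  # singleton canonical mapping
    print(f"U+{ord(ch):04X} -> U+{ord(folded):04X}  matches unified: {folded == unified}")
```

This mirrors the trade-off the abstract describes: reading order multiplies codes for heteronyms, while a unified assignment avoids duplication at the cost of the phonetic rule.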

Comparison and Analysis of Science and Technology Journal Metadata (해외 과학기술 학술논문 메타데이터의 비교 분석)

  • Lee, Min-Ho;Lee, Won-Goo;Yoon, Hwa-Mook;Shin, Sung-Ho;Ryou, Jae-Cheol
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.9
    • /
    • pp.515-523
    • /
    • 2011
  • Managing large amounts of information from various information providers is important for supporting recent information services such as identifying global research trends, detecting emerging signals, and listing leading researchers. Integrated management requires the definition of an integrated metadata schema, data transformation, and schema matching, and defining such a schema first requires analyzing the various existing metadata. In this paper, we analyze several metadata formats for scientific journal papers by classifying their semantics, content rules, and syntax, and we examine the considerations involved in defining an integrated schema or transforming metadata. We find that XML is used as the syntax because of its convenience and its support for various usage conditions, and that hierarchical element names and common semantic elements are needed. We also examine elements governed by various content rules and the related standards. We hope that this study will serve as basic research material for integrated metadata management, data transformation, and schema matching for interoperability (a hedged illustrative sketch follows this entry).
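
As a rough companion to the entry above, the sketch below shows what an integrated, XML-syntax metadata record for a journal paper might look like and how it could be read programmatically. The element names are assumptions made for illustration; they are not the schema defined in the paper or in any particular standard.

```python
import xml.etree.ElementTree as ET

# Hypothetical integrated metadata record; element names are illustrative only.
record = """
<article>
  <title>Comparison and Analysis of Science and Technology Journal Metadata</title>
  <author>Lee, Min-Ho</author>
  <author>Lee, Won-Goo</author>
  <journal>The Journal of the Korea Contents Association</journal>
  <volume>11</volume><issue>9</issue><pages>515-523</pages><year>2011</year>
</article>
"""

root = ET.fromstring(record)
authors = [a.text for a in root.findall("author")]
print(root.findtext("title"), "/", "; ".join(authors), "/", root.findtext("year"))
```

Merging records from different providers would additionally require agreed content rules (name order, date formats, and the like), which is the kind of consideration the abstract analyzes.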

Managing Scheme for 3-dimensional Geo-features using XML

  • Kim, Kyong-Ho;Choe, Seung-Keol;Lee, Jong-Hun;Yang, Young-Kyu
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 1999.12a
    • /
    • pp.47-51
    • /
    • 1999
  • Geo-features play a key role in object-oriented or feature-based geo-processing systems, so the strategy for modeling and managing geo-features forms the main architecture of the entire system and determines its efficiency and functionality. Unlike in conventional 2D geo-processing systems, geo-features in 3D GIS demand many considerations for efficient manipulation, analysis, and visualization. When the system runs on the Web, it must also be considered how to handle the level of detail and the level of automation of modeling, in addition to supporting client-side data interoperability. We built a set of 3D geo-features, where each geo-feature contains a set of aspatial data and 3D geo-primitives. The 3D geo-primitives hold the fundamental modeling data, such as the height of a building or the burial depth of a gas pipeline. We separated the additional modeling data on the geometry and appearance of the model from the fundamental modeling data to keep the database tables concise and to give users more freedom in representing geo-objects. To let users build and exchange their own data, we devised a file format called VGFF 2.0, which stands for Virtual GIS File Format and describes three-dimensional geo-information in XML (eXtensible Markup Language). The DTD (Document Type Definition) of VGFF 2.0 is parsed using the DOM (Document Object Model). We also developed authoring tools with which users can build their own 3D geo-features and models and save the data in the VGFF 2.0 format. We expect VGFF 2.0 to evolve into a 3D counterpart of SVG (Scalable Vector Graphics), especially for 3D GIS on the Web (a hedged illustrative sketch follows this entry).

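The abstract describes VGFF 2.0 only at a high level, so the following is a guess at what a fragment of such an XML geo-feature document could look like and how it might be parsed with the DOM, which the abstract mentions. The element and attribute names (VGFF, GeoFeature, Height, and so on) are invented for illustration and are not the actual VGFF 2.0 schema.

```python
from xml.dom import minidom

# Hypothetical VGFF-like fragment: one geo-feature holding aspatial data and a
# fundamental 3D primitive (building height). Names are illustrative only.
doc_text = """
<VGFF version="2.0">
  <GeoFeature id="bldg-001" type="Building">
    <Aspatial><Name>City Hall</Name><Use>Office</Use></Aspatial>
    <GeoPrimitive><Height unit="m">42.5</Height></GeoPrimitive>
  </GeoFeature>
</VGFF>
"""

dom = minidom.parseString(doc_text)
for feature in dom.getElementsByTagName("GeoFeature"):
    height = feature.getElementsByTagName("Height")[0]
    print(feature.getAttribute("id"), feature.getAttribute("type"),
          height.firstChild.data.strip(), height.getAttribute("unit"))
```

Keeping geometry and appearance data outside the fundamental record, as the abstract suggests, would keep fragments like this small while still letting authoring tools reconstruct the full model.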

A Study on the Model of Collection-Level Description based on Ontology for Resources Sharing (자원공유를 위한 온톨로지기반 컬렉션 단위 기술 모형개발 연구)

  • Lee, Hye-Won
    • Journal of the Korean Society for Information Management
    • /
    • v.25 no.3
    • /
    • pp.209-230
    • /
    • 2008
  • This study addresses the practical use of distributed resources in a rapidly growing network environment. Its focus is semantic interoperability for resource sharing rather than the technical issues of the network itself. The aim of this article is to develop a model of Collection-Level Description (CLD) for resource sharing. The article defines a collection in relation to its scope, objectives, and agents, and analyzes research on the strengths of CLD and on CLD standards. Finally, it constructs a model focused on relations, the aspect in which existing CLD practice most needed strengthening, and to this end the study employs the concept of an ontology. The ontology-based CLD model shows that a description can represent new relations inferred between classes and properties. Furthermore, distinguishing classes from properties, the study separates properties into characteristics of a class and relations between classes (a hedged illustrative sketch follows this entry).
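
To make the idea of relations inferred between classes concrete, here is a minimal sketch, not drawn from the paper, of how an ontology-style collection description can yield new relations (for example, the inverse of hasPart). The property names and collections are hypothetical.

```python
# Minimal, hypothetical collection-level descriptions with explicit relations.
statements = {
    ("National Art Archive", "hasPart", "Print Collection"),
    ("National Art Archive", "hasPart", "Photo Collection"),
}

# Inference rule: hasPart and isPartOf are treated as inverse properties, so
# each explicit hasPart statement entails an isPartOf statement.
inferred = {(o, "isPartOf", s) for (s, p, o) in statements if p == "hasPart"}

for s, p, o in sorted(statements | inferred):
    print(s, p, o)
```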

Effects of Adopting the Open Document Format in Public Records Management (공문서 컴포넌트 오픈포맷 채택이 기록관리에 미치는 영향 분석)

  • Jung, Mi Ri;Oh, Seh-La;Yim, Jin Hee
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.16 no.2
    • /
    • pp.29-55
    • /
    • 2016
  • Korean public organizations create electronic documents through electronic document management systems under the e-Government scheme. A majority of these public documents are saved in vendor-dependent file formats, mainly HWP. Vendor-dependent formats can be opened only with specific software, which must be purchased, and because the license does not guarantee compatibility between past and future versions, interoperability problems arise in long-term preservation that need to be solved. Errors caused by the elimination of styles, or by elements that have no counterpart in the document definition, during conversion from vendor-dependent formats to the XML-based standard exchange format lead to file-open failures or to modification of the original documents. This study introduces the Open Document Format (ODF) and investigates the effects of adopting ODF in the creation, exchange, management, and preservation of public records (a hedged illustrative sketch follows this entry).
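
As background to the entry above (not an excerpt from the paper): an ODF document is a ZIP package whose body lives in content.xml alongside styles.xml and meta.xml, which is part of what makes it inspectable without vendor software. The sketch below lists the package entries of an ODF text document; the file path is a placeholder.

```python
import zipfile

# Inspect an ODF text document (.odt); the path is a hypothetical placeholder.
path = "sample_public_record.odt"

with zipfile.ZipFile(path) as odf:
    print(odf.read("mimetype").decode())  # application/vnd.oasis.opendocument.text
    for name in odf.namelist():
        print(name)                        # content.xml, styles.xml, meta.xml, ...
```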

Designing Requisite Techniques of Storage Structure Supporting Efficient Retrieval in Semantic Web (시멘틱 웹의 효율적 검색을 지원하는 저장 구조의 요소 기술 설계)

  • Shin Pan-Seop
    • Journal of the Korea Computer Industry Society
    • /
    • v.7 no.3
    • /
    • pp.227-236
    • /
    • 2006
  • The Semantic Web is gaining popularity as the next Web environment, and research on ontology languages for representing the semantic relations of resources on the Semantic Web is active. Ontology languages such as RDF and DAML+OIL appeared at the starting point of this research, but they are limited in describing the characteristics of resources and in clearly defining the relations between resources. The W3C therefore proposed OWL as the next standard language for describing resources; OWL makes up for the representational shortcomings of RDF and RDF Schema. In this paper, we build an ontology in OWL to implement an online retrieval system and propose a structure for storing ontology documents in a relational database (RDB). The structure supports the OWL constructs for equivalence, heterogeneity (difference), inverse, union, and one-of relationships between classes or properties. We classify the elements that OWL adds beyond RDF Schema and propose a method for storing OWL in an RDB for interoperability with the many applications based on relational databases. Finally, we implement an OWL-based storage and retrieval system that provides advanced search functions (a hedged illustrative sketch follows this entry).

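The abstract does not reproduce the paper's actual table design, so the following is only a plausible sketch of how OWL relations of the kinds it lists (equivalence, difference, inverse, union, one-of) might be kept in a relational table and used at retrieval time; the schema and data are assumptions.

```python
import sqlite3

# Hypothetical relational layout for OWL relations between classes or properties.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE owl_relation (
        subject  TEXT NOT NULL,  -- class or property name
        relation TEXT NOT NULL,  -- equivalentClass, differentFrom, inverseOf, unionOf, oneOf
        object   TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO owl_relation VALUES (?, ?, ?)",
    [
        ("Professor", "equivalentClass", "FacultyMember"),
        ("teaches",   "inverseOf",       "isTaughtBy"),
        ("Person",    "unionOf",         "Student"),
        ("Person",    "unionOf",         "Staff"),
    ],
)

# Retrieval: expand a query on one class with its stored equivalents.
rows = conn.execute(
    "SELECT object FROM owl_relation WHERE subject = ? AND relation = 'equivalentClass'",
    ("Professor",),
).fetchall()
print([r[0] for r in rows])
```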

The Reference Model based on Web Services for USN Application Service (USN 응용 서비스를 위한 웹 서비스 기반의 참조 모델)

  • Bang, Jin-Suk;Kim, Yong-Woon;Yoo, Sang-Keun;Jung, Hoe-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.5
    • /
    • pp.948-955
    • /
    • 2008
  • With the spread of the Internet and the development of computer network technology, research toward the realization of the next-generation computing paradigm known as ubiquitous computing is actively underway. To realize ubiquitous computing, the data recognized by each sensor must be collected in real time and transferred to application services so that they can be used to provide services to users. However, several problems stand in the way: first, sensor metadata, interfaces, and event definitions are not standardized; second, application services have difficulty accessing the data; and third, there is no interoperability across platforms and protocols. To resolve these problems, we designed an XML-based Sensor Service Description Language that expresses sensor measurements and service metadata in a standardized form. In addition, we propose and develop a reference model for USN application services (a hedged illustrative sketch follows this entry).
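
The paper's Sensor Service Description Language is XML-based, but its schema is not given in the abstract, so the fragment below only suggests what a standardized sensor and service metadata record could look like and how an application service might read it. All element and attribute names are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical SSDL-style record: sensor metadata plus one measurement.
ssdl = """
<SensorService>
  <Sensor id="temp-07" type="temperature" location="Room 201"/>
  <Measurement unit="Celsius" timestamp="2008-05-01T10:00:00">23.4</Measurement>
</SensorService>
"""

root = ET.fromstring(ssdl)
sensor = root.find("Sensor")
measurement = root.find("Measurement")
print(sensor.get("id"), sensor.get("type"), measurement.text, measurement.get("unit"))
```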

Development and Application of Development Principles for Clinical Information Model (임상정보모델 개발원칙의 개발과 적용)

  • Ahn, Sun-Ju
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.8
    • /
    • pp.2899-2905
    • /
    • 2010
  • For clinical information models to be applicable within electronic health record systems and to ensure the semantic interoperability of clinical information, development principles that reflect their objectives and functions are required. The aim of this study is to develop such development principles and to evaluate the Clinical Contents Model against them. To develop the principles, surveys on (1) definitions, (2) functions, and (3) quality criteria were conducted from November 2008 to March 2009, and (4) the components of advanced models were analyzed. The study proceeded in three stages. First, in the development stage, keywords and keyword paragraphs were derived from the references, and the principles were drawn up based on clinical or functional importance and frequency. In the application stage, three clinical information model experts assessed 30 Clinical Contents Models by applying the principles. In the feedback stage, the Clinical Contents Models in which errors were found were modified. As a result, 18 development principles were derived in three categories: structure, process, and content. When the Clinical Contents Models were assessed against the principles, 17 models were found not to follow them. The feedback process raised the need for further education on the principles and for a regular quality-improvement strategy for applying them. The proposed development principles support consistent model development among clinical information model developers and can be used as evaluation criteria.