• Title/Summary/Keyword: standard platform

731 search results

Prelaunch Study of Validation for the Geostationary Ocean Color Imager (GOCI) (정지궤도 해색탑재체(GOCI) 자료 검정을 위한 사전연구)

  • Ryu, Joo-Hyung;Moon, Jeong-Eon;Son, Young-Baek;Cho, Seong-Ick;Min, Jee-Eun;Yang, Chan-Su;Ahn, Yu-Hwan;Shim, Jae-Seol
    • Korean Journal of Remote Sensing
    • /
    • v.26 no.2
    • /
    • pp.251-262
    • /
    • 2010
  • In order to provide quantitative control of the standard products of the Geostationary Ocean Color Imager (GOCI), on-board radiometric correction, atmospheric correction, and bio-optical algorithms must be maintained continuously through comprehensive and consistent calibration and validation procedures. The calibration/validation of GOCI radiometric, atmospheric, and bio-optical data uses temperature, salinity, ocean optics, fluorescence, and turbidity data sets from buoy and platform systems, together with periodic oceanic environmental data. For calibration and validation of GOCI, we compared in-situ radiometric measurements with data from the HyperSAS instrument installed on the Ieodo ocean research station, and HyperSAS radiances with SeaWiFS radiances. HyperSAS data differed slightly from the in-situ radiance and irradiance, but showed no spectral shift in the absorption bands. Although the radiance bands measured by HyperSAS and SeaWiFS differed by 25% on average, the absolute error fell to 11% when the atmospheric correction bands were omitted. This error is related to the SeaWiFS standard atmospheric correction process and must be considered and improved in the calibration and validation of GOCI. A reference target site around Dokdo Island was used for the calibration and validation study of GOCI. In-situ ocean- and bio-optical data were collected during August and October 2009. Reflectance spectra around Dokdo Island showed the optical characteristics of Case-1 water. Absorption spectra of chlorophyll, suspended matter, and dissolved organic matter also showed their respective spectral characteristics. MODIS Aqua-derived chlorophyll-a concentration correlated well with the in-situ fluorometer installed on the Dokdo buoy. Solving these radiometric, atmospheric, and bio-optical correction problems will be important for improving the future quality of GOCI calibration and validation.
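
The band-averaged error comparison described in the abstract can be sketched as follows. The radiance values below are hypothetical placeholders, not the actual HyperSAS or SeaWiFS measurements, and treating the last two bands as the atmospheric-correction (NIR) bands is an assumption for illustration.

```python
import numpy as np

# Hypothetical band-averaged water-leaving radiances for six bands;
# illustrative only, NOT the actual HyperSAS/SeaWiFS data.
hypersas = np.array([1.80, 1.55, 1.20, 0.60, 0.25, 0.10])
seawifs  = np.array([1.42, 1.30, 1.05, 0.52, 0.20, 0.18])

def mean_abs_pct_error(reference, test):
    """Band-averaged absolute percentage difference between two spectra."""
    return float(np.mean(np.abs(test - reference) / reference) * 100)

# Error over all bands vs. error with the (assumed) atmospheric-correction
# bands, here the last two, excluded.
all_bands = mean_abs_pct_error(hypersas, seawifs)
visible_only = mean_abs_pct_error(hypersas[:-2], seawifs[:-2])
print(f"all bands: {all_bands:.1f}%  without NIR bands: {visible_only:.1f}%")
```

Dropping the noisiest correction bands lowers the average error, mirroring the 25% vs. 11% contrast reported in the study.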

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, the GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, the GSO could be built within the given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing ontology development methodologies had to be chosen. The most important considerations for selecting the ontology development methodology for the GSO included whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it gives a sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. We concluded that METHONTOLOGY was the most applicable to the building of the GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology.
METHONTOLOGY describes a very detailed approach for building an ontology at the conceptual level under a centralized development environment. The methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of its computational properties for consistency checking and classification, which are crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used thanks to its platform-independent characteristics. Based on the researchers' experience of developing the GSO, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focused on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies. However, it is still difficult for such domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology. Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin an ontology building project, but also that the project will be successful.
Third, METHONTOLOGY omits any explanation of the use and integration of existing ontologies. If an additional stage for considering reuse were introduced, developers could share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain how to allocate specific tasks to different developer groups, and how to combine these tasks once the assigned jobs are completed. Fifth, METHONTOLOGY does not sufficiently describe the methods and techniques to be applied in the conceptualization stage. Introducing methods for extracting concepts from multiple informal sources or for identifying relations may enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE correctly transforms a conceptual ontology into a formal ontology, nor does it guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs additional criteria for user evaluation of the actual use of the constructed ontology in user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition during the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; thus, it can be considered a heavy methodology. Adopting an agile methodology would reinforce active communication among developers and reduce the burden of documentation. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed.
This study also affords some insights for ontology methodology researchers who want to design a more advanced ontology development methodology.
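
The kind of graduation-screen rule the GSO is meant to capture can be illustrated with a minimal sketch. The thresholds and requirement names below are hypothetical placeholders, not the actual university regulations or OWL axioms modeled in the study.

```python
# Purely illustrative graduation-screen rules; REQUIRED_TOTAL,
# REQUIRED_MAJOR, and the record fields are hypothetical.
REQUIRED_TOTAL = 130
REQUIRED_MAJOR = 60

def screen(record: dict) -> list[str]:
    """Return the list of unmet graduation requirements."""
    unmet = []
    if record["total_credits"] < REQUIRED_TOTAL:
        unmet.append("total credits")
    if record["major_credits"] < REQUIRED_MAJOR:
        unmet.append("major credits")
    if not record["thesis_submitted"]:
        unmet.append("thesis")
    return unmet

student = {"total_credits": 132, "major_credits": 55, "thesis_submitted": True}
print(screen(student))
```

In the ontology, such rules are expressed as OWL-DL class restrictions and evaluated by a reasoner rather than by hand-written conditionals; the sketch only shows the reasoning task, not the representation.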

A Research on the Regulations and Perception of Interactive Game in Data Broadcasting: Special Emphasis on the TV-Betting Game (데이터방송 인터랙티브 게임 규제 및 이용자 인식에 관한 연구: 승부게임을 중심으로)

  • Byun, Dong-Hyun;Jung, Moon-Ryul;Bae, Hong-Seob
    • Korean journal of communication and information
    • /
    • v.35
    • /
    • pp.250-291
    • /
    • 2006
  • This study examines the regulatory issues and introduction problems of TV-betting data broadcasting in Korea through in-depth interviews with a panel group. TV-betting data broadcast services such as card games and horse racing games are widely used in Europe and other parts of the world. To carry out the study, a demo program of a TV-betting data broadcast in the OCAP (OpenCable Application Platform) system environment, the data broadcasting standard for digital cable broadcasts in Korea, was shown to the panel group, who were then interviewed after watching and using the program. The results can be summarized as follows. First of all, while TV-betting data broadcasts have many entertainment elements, the respondents thought it would be difficult to introduce TV-betting in data broadcasts in Korea as has been done overseas, largely due to social factors. In addition, they suggested that in order to introduce TV-betting data broadcasts, excessive speculativeness must be suppressed through a series of regulatory devices: guaranteeing the credibility of the medium with secure transaction systems, scheduling programs with effective time constraints to prevent the games from running too frequently, limiting the betting amounts, and prohibiting access to games through the set-top boxes of other data broadcast subscribers. The general consensus was that TV-betting could be considered for gradual introduction within governmental laws and regulations that would minimize its ill effects. Therefore, the government should formulate long-term regulations and policies for data broadcasting. Once the groundwork is laid for the safe introduction of TV-betting on data broadcasts within the boundary of laws and regulations, interactive TV games are expected to be introduced in Korea, not only adding entertainment functionality but also fostering far-ranging development of the data broadcasting and new media industries.


On the Meteorological Requirements for the Geostationary Communication, Ocean, and Meteorological Satellite (정지궤도 통신해양기상위성의 기상분야 요구사항에 관하여)

  • Ahn, Myung-Hwan;Kim, Kum-Lan
    • Atmosphere
    • /
    • v.12 no.4
    • /
    • pp.20-42
    • /
    • 2002
  • Based on the "Mid to Long Term Plan for Space Development", a project to launch COMS (Communication, Ocean, and Meteorological Satellite) into geostationary orbit is under way. Accordingly, the KMA (Korea Meteorological Administration) has defined the meteorological missions and prepared user requirements to fulfill them. To make the user requirements realistic, we prepared a first draft based on the ideal meteorological products derivable from a geostationary platform and sent an RFI (request for information) to sensor manufacturers. Based on the responses to the RFI and other considerations, we revised the user requirements into a realistic plan for the 2008 launch of the satellite. This manuscript briefly introduces the revised user requirements. The major mission defined therein is the augmentation of the ability to detect and predict severe weather phenomena, especially around the Korean Peninsula. The required payload is an enhanced imager that includes the major observation channels of the current geostationary sounders. To derive the required meteorological products from the imager, at least 12 channels are required, with an optimum of 16. The minimum 12 channels comprise the 6 wavelength bands used on current geostationary satellites plus two additional visible bands, a near-infrared band, two water vapor bands, and one ozone absorption band. From these enhanced channel observations, we will derive and utilize information on water vapor, stability indices, and wind fields, and analyze special weather phenomena such as yellow sand (Asian dust) events, in addition to the standard products derived from current geostationary imager data. For better temporal coverage, the imager is required to acquire full-disk data within 15 minutes and to have a rapid scan mode for limited-area coverage. The required threshold spatial resolutions are 1 km and 2 km for visible and infrared channels, respectively, while the target resolutions are 0.5 km and 1 km.
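
The requirement figures quoted in the abstract can be collected into a single configuration sketch. The band groupings and counts are as stated in the text; the exact wavelengths are not given in the source and are deliberately omitted.

```python
# Imager user-requirement figures as quoted in the abstract; the grouping
# labels are descriptive, not official channel designations.
imager_requirements = {
    "channels": {
        "current_geo_bands": 6,   # bands used on current geostationary imagers
        "visible": 2,
        "near_infrared": 1,
        "water_vapor": 2,
        "ozone_absorption": 1,
    },
    "channel_count": {"minimum": 12, "optimum": 16},
    "spatial_resolution_km": {
        "visible":  {"threshold": 1.0, "target": 0.5},
        "infrared": {"threshold": 2.0, "target": 1.0},
    },
    "full_disk_scan_minutes": 15,
    "rapid_scan_mode": True,
}

# Sanity check: the enumerated band groups add up to the stated minimum.
assert sum(imager_requirements["channels"].values()) == \
       imager_requirements["channel_count"]["minimum"]
```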

On Method for LBS Multi-media Services using GML 3.0 (GML 3.0을 이용한 LBS 멀티미디어 서비스에 관한 연구)

  • Jung, Kee-Joong;Lee, Jun-Woo;Kim, Nam-Gyun;Hong, Seong-Hak;Choi, Beyung-Nam
    • Proceedings of the Korea Spatial Information System Society Conference
    • /
    • 2004.12a
    • /
    • pp.169-181
    • /
    • 2004
  • SK Telecom constructed the GIMS system as the common base framework of its LBS/GIS service system, based on the OGC (OpenGIS Consortium) international standard, for the first mobile vector map service in 2002. But as service content has grown more complex, renovation has been needed to satisfy multi-purpose, multi-function, and maximum-efficiency requirements. This research prepares a GML 3.0-based platform to upgrade the service from the GML 2-based GIMS system, making it possible for a variety of application services to obtain location and geographic data easily and freely. From GML 3.0, animation, event handling, resources for style mapping, topology specification for 3D, and telematics services were selected for the mobile LBS multimedia service, and the schema and transfer protocol were developed and organized to optimize data transfer to the MS (Mobile Station). The upgrade to the GML 3.0-based GIMS system provides an innovative framework, in terms of both construction and service, which has been implemented and applied to previous research and systems. A GIMS channel interface has also been implemented to simplify access to the GIMS system, and the internal service components of GIMS, WFS and WMS, have been enhanced and extended.
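
As a small illustration of the kind of payload such a service exchanges, the sketch below encodes a point feature using the GML 3 `gml:Point`/`gml:pos` elements. The feature name, attribute, and coordinates are hypothetical; only the GML namespace and point encoding follow the GML 3 schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical point-of-interest feature wrapping a GML 3 point geometry.
GML = "http://www.opengis.net/gml"
ET.register_namespace("gml", GML)

poi = ET.Element("PointOfInterest", {"name": "City Hall"})
point = ET.SubElement(poi, f"{{{GML}}}Point", {"srsName": "EPSG:4326"})
pos = ET.SubElement(point, f"{{{GML}}}pos")
pos.text = "37.5665 126.9780"   # lat lon (Seoul, illustrative)

xml = ET.tostring(poi, encoding="unicode")
print(xml)
```

In GML 2 the same geometry would use `gml:coordinates` with comma-separated tuples; the space-separated `gml:pos` form is one of the GML 3 changes the migration had to handle.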


A study on developing a new self-esteem measurement test adopting DAP and drafting the direction of digitalizing measurement program of DAP (청소년 자존감 DAP 인물화 검사 개발 및 디지털화 측정 시스템 방향성 연구)

  • Woo, Sungju;Park, Chongwook
    • Journal of the HCI Society of Korea
    • /
    • v.8 no.1
    • /
    • pp.1-9
    • /
    • 2013
  • This study develops a new way of testing self-esteem by adopting the DAP (Draw-a-Person) test and drafts a platform to digitalize it for young people in the adolescent stage. The approach aims at highly effective self-esteem measurement using the DAP test, including personal inner situations that can easily be missed in large statistical analyses. The other objective of this study is to digitalize the test to overcome the limits of the DAP test's subjective rating standard, based on the distribution of figure drawings expressed numerically by Handler's anxiety index. For these two examinations, we conducted a four-stage experiment with 73 second-grade middle school students over four months, from July 30 to October 31, 2009. First, we administered a 'Self Values Test' to all 73 students and divided them into a high self-esteem group of 36 and a low self-esteem group of 37. Second, we regrouped them following the D (Depression), Pd (Psychopathic Deviate), and Sc (Schizophrenia) scales of the MMPI into a high self-esteem group of 7 and a low self-esteem group of 13. Third, we conducted the DAP test separately for these 20 students. After administering the DAP so as to reflect the peculiarities of adolescents sufficiently, we verified the necessity and appropriateness of the direction of the 'Digitalizing Measurement System' by comparing and analyzing the relation between DAP and self-esteem following evaluation criteria shared by the three tests. We then compared and analyzed results from sampled DAP tests of two participants from the high self-esteem group and two from the low self-esteem group, to confirm whether the limitations of the original psychological test could be improved by comparing the mutual reliability of the measurement tests. Finally, from the DAP results and the correlations between self-esteem and depression obtained through the above steps, we found it possible to derive concrete, individual evaluation criteria based on an expert system as a way of enhancing accessibility in a quantitative manner. The 'Digitalizing Measurement Program' of the DAP test suggested in this study promotes the reliability of results based on the existing tests and measurements.


Probabilistic Anatomical Labeling of Brain Structures Using Statistical Probabilistic Anatomical Maps (확률 뇌 지도를 이용한 뇌 영역의 위치 정보 추출)

  • Kim, Jin-Su;Lee, Dong-Soo;Lee, Byung-Il;Lee, Jae-Sung;Shin, Hee-Won;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.36 no.6
    • /
    • pp.317-324
    • /
    • 2002
  • Purpose: The use of the statistical parametric mapping (SPM) program has increased for the analysis of brain PET and SPECT images. The Montreal Neurological Institute (MNI) coordinate system is used in the SPM program as a standard anatomical framework. While most researchers consult the Talairach atlas to report the localization of activations detected in the SPM program, there is significant disparity between the MNI templates and the Talairach atlas, which makes the interpretation of SPM results time-consuming, subjective, and inaccurate. The purpose of this study was to develop a program that provides objective anatomical information for each x-y-z position in the ICBM coordinate system. Materials and Methods: The program was designed to provide the anatomical information for a given x-y-z position in MNI coordinates based on the Statistical Probabilistic Anatomical Map (SPAM) images of the ICBM. When an x-y-z position is given to the program, the names of the anatomical structures with non-zero probability, and the probabilities that the given position belongs to those structures, are tabulated. The program was coded in IDL and Java for easy porting to any operating system or platform. The utility of this program was shown by comparing its results to those of the SPM program. A preliminary validation study was performed by applying the program to the analysis of a PET brain activation study of human memory in which the anatomical information on the activated areas was previously known. Results: Real-time retrieval of probabilistic information with 1 mm spatial resolution was achieved using the program. The validation study showed the relevance of the program: the probability that the activated area for memory belonged to the hippocampal formation was more than 80%. Conclusion: These programs will be useful for interpreting the results of image analyses performed in MNI coordinates, as done in the SPM program.
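
The core lookup the abstract describes can be sketched as follows. The probability volumes, structure names, and voxel values below are toy stand-ins, not the actual ICBM SPAM maps (which cover the whole brain at roughly 1 mm resolution).

```python
import numpy as np

# Toy stand-in for the SPAM probability maps: one small 3-D probability
# volume per structure; names and values are illustrative only.
rng = np.random.default_rng(0)
shape = (10, 10, 10)
spam = {
    "hippocampal formation": rng.random(shape),
    "parahippocampal gyrus": rng.random(shape),
}
# Normalize so per-voxel probabilities across structures sum to 1.
total = sum(spam.values())
spam = {name: vol / total for name, vol in spam.items()}

def label_position(x, y, z):
    """Return (structure, probability) pairs with non-zero probability
    at the given voxel, sorted by descending probability."""
    probs = [(name, float(vol[x, y, z])) for name, vol in spam.items()]
    return sorted((p for p in probs if p[1] > 0), key=lambda p: -p[1])

for name, p in label_position(3, 4, 5):
    print(f"{name}: {p:.1%}")
```

The real program tabulates exactly this kind of structure/probability list for any MNI x-y-z position, replacing the subjective Talairach lookup with a probabilistic answer.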

Performance Analysis and Comparison of Stream Ciphers for Secure Sensor Networks (안전한 센서 네트워크를 위한 스트림 암호의 성능 비교 분석)

  • Yun, Min;Na, Hyoung-Jun;Lee, Mun-Kyu;Park, Kun-Soo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.18 no.5
    • /
    • pp.3-16
    • /
    • 2008
  • A Wireless Sensor Network (WSN for short) is a wireless network consisting of distributed small devices called sensor nodes or motes. Recently, there has been extensive research on WSNs and on their security. For secure storage and secure transmission of the sensed information, sensor nodes should be equipped with cryptographic algorithms. Moreover, these algorithms should be implemented efficiently, since sensor nodes are highly resource-constrained devices. Some existing algorithms are already applicable to sensor nodes, including public key ciphers such as TinyECC and standard block ciphers such as AES. Stream ciphers, however, are still to be analyzed, since they were only recently standardized in the eSTREAM project. In this paper, we implement on the MicaZ platform nine of the ten software-based stream ciphers in the second and final phases of the eSTREAM project, and we evaluate their performance. In particular, we apply several optimization techniques to six ciphers, including SOSEMANUK, Salsa20, and Rabbit, which survived the final phase of the eSTREAM project. We also present the implementation results of hardware-oriented stream ciphers and AES-CFB for reference. According to our experiments, the encryption speeds of these software-based stream ciphers are in the range of 31-406 Kbps; thus most of these ciphers are fairly acceptable for sensor nodes. In particular, the survivors SOSEMANUK, Salsa20, and Rabbit show throughputs of 406 Kbps, 176 Kbps, and 121 Kbps using 70 KB, 14 KB, and 22 KB of ROM and 2811 B, 799 B, and 755 B of RAM, respectively. From the viewpoint of encryption speed, the performance of these ciphers is much better than that of software-based AES, which shows a speed of 106 Kbps.
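
The figures quoted for the three eSTREAM finalists on MicaZ can be tabulated and compared against the software AES baseline with a short sketch; all numbers are taken directly from the abstract.

```python
# Throughput and memory figures for the eSTREAM finalists on MicaZ,
# as reported in the abstract.
ciphers = {
    # name: (throughput_kbps, rom_bytes, ram_bytes)
    "SOSEMANUK": (406, 70 * 1024, 2811),
    "Salsa20":   (176, 14 * 1024,  799),
    "Rabbit":    (121, 22 * 1024,  755),
}
AES_SOFTWARE_KBPS = 106   # software-based AES baseline from the abstract

fastest = max(ciphers, key=lambda name: ciphers[name][0])
smallest_ram = min(ciphers, key=lambda name: ciphers[name][2])

for name, (kbps, rom, ram) in ciphers.items():
    speedup = kbps / AES_SOFTWARE_KBPS
    print(f"{name}: {kbps} Kbps ({speedup:.1f}x AES), ROM {rom} B, RAM {ram} B")
```

The comparison makes the trade-off visible: SOSEMANUK is fastest but needs the most ROM and RAM, while Rabbit has the smallest RAM footprint at the lowest throughput of the three.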

A Study on Image Copyright Archive Model for Museums (미술관 이미지저작권 아카이브 모델 연구)

  • Nam, Hyun Woo;Jeong, Seong In
    • Korea Science and Art Forum
    • /
    • v.23
    • /
    • pp.111-122
    • /
    • 2016
  • The purpose of this multi-disciplinary convergent study is to establish an image copyright archive model for museums, motivated by the need for research and development on copyright services over the life cycle of art contents created by museums, the need to vitalize the distribution market for image copyright contents in the creative industry, and the need to formulate a management system for copyright services, so as to protect image copyright and vitalize the use of images. This study made various suggestions for enhancing the transparency and efficiency of the art contents ecosystem by vitalizing the use and recycling of image copyright materials, proposing a standard system for the calculation, distribution, settlement, and monitoring of copyright royalties for 1,000 domestic museums, galleries, and exhibition halls. First, this study proposed the contents and structural design of the image copyright archive model and, by proposing an art contents distribution service platform for prototype simulation, execution simulation, and model operation simulation, established an art contents copyright royalty process model. As billing systems and technological development for image contents are still at an incipient stage, this study used the existing contents billing framework as the basic model for developing billing technology for the distribution of museum collections and artworks, together with an engine for the automatic division and calculation of copyright royalties. Ultimately, the study suggested an image copyright archive model that can be used by artists, curators, and distributors. As a business strategy, the study suggested niche market penetration for the museum image copyright archive model. As a sales expansion strategy, the study established a business model in which image transactions can be conducted effectively in the forms of B2B, B2G, B2C, and C2B through flexible connection to the museum archive system, with controllable management of image copyright materials. This study is expected to minimize disputes between the copyright holders of artwork images and the owners of the works, and to enhance the manageability of copyrighted artworks, through the prevention of such disputes and the provision of information on the distribution and utilization of art contents (collections and new creations) owned by museums. In addition, by providing a guideline for archives of museum collections and new creations, this study is expected to increase the registration of image copyrights and to enable various convergent businesses, such as the billing, division, and settlement of copyright royalties for an image copyright distribution service.
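
The automatic division-and-calculation engine mentioned above can be sketched minimally. The split ratios below are hypothetical placeholders, not the standard proposed by the study, and the remainder-handling rule is likewise an assumption.

```python
from decimal import Decimal

# HYPOTHETICAL royalty split ratios; the actual standard system in the
# study is not specified at this level of detail.
SPLIT = {"artist": Decimal("0.60"), "museum": Decimal("0.30"),
         "platform": Decimal("0.10")}

def divide_royalty(amount_krw: int) -> dict:
    """Divide an image-use fee among stakeholders; any remainder from
    integer truncation is assigned to the artist (an assumption)."""
    shares = {k: int(amount_krw * r) for k, r in SPLIT.items()}
    shares["artist"] += amount_krw - sum(shares.values())
    return shares

print(divide_royalty(100_000))
```

Using `Decimal` ratios and an explicit remainder rule keeps every division exact to the won, which matters once the same engine settles thousands of transactions.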

The Classification System and Information Service for Establishing a National Collaborative R&D Strategy in Infectious Diseases: Focusing on the Classification Model for Overseas Coronavirus R&D Projects (국가 감염병 공동R&D전략 수립을 위한 분류체계 및 정보서비스에 대한 연구: 해외 코로나바이러스 R&D과제의 분류모델을 중심으로)

  • Lee, Doyeon;Lee, Jae-Seong;Jun, Seung-pyo;Kim, Keun-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.127-147
    • /
    • 2020
  • The world is suffering numerous human and economic losses due to the novel coronavirus infection (COVID-19). The Korean government has established a strategy to overcome this national infectious disease crisis through research and development. It is difficult to find distinctive features and changes in a specific R&D field when using the existing technical classification or the science and technology standard classification. Recently, a few studies have been conducted to establish a classification system that provides information about the investment research areas of infectious diseases in Korea through a comparative analysis of Korean government-funded research projects. However, these studies did not provide the information necessary for establishing cooperative research strategies among countries in infectious diseases, which is required as an execution plan to achieve the goals of national health security and the fostering of new growth industries. Therefore, a study of information services based on a classification system and classification model for establishing a national collaborative R&D strategy is indispensable. Seven classification categories (Diagnosis_biomarker, Drug_discovery, Epidemiology, Evaluation_validation, Mechanism_signaling pathway, Prediction, and Vaccine_therapeutic antibody) were derived by reviewing South Korea's nationally funded research projects related to infectious diseases. A classification model was trained by combining Scopus data with a bidirectional RNN model. The classification performance of the final model was robust, with an accuracy of over 90%. For the empirical study, the infectious disease classification system was applied to the coronavirus-related research and development projects of major countries: STAR METRICS (National Institutes of Health) and the NSF (National Science Foundation) of the United States (US), CORDIS (Community Research & Development Information Service) of the European Union (EU), and KAKEN (Database of Grants-in-Aid for Scientific Research) of Japan. The research and development trends of infectious diseases (coronavirus) in these countries are mostly concentrated in Prediction, which covers predicting success in clinical trials at the new drug development stage or predicting toxicity that causes side effects. An intriguing result is that for all of these nations, the portion of national investment in Vaccine_therapeutic antibody, the area aimed at the development of vaccines and treatments, was very small (5.1%), which indirectly explains the slow development of vaccines and treatments. Examining the investment status of coronavirus-related research projects through comparative analysis by country showed that the US and Japan invest relatively evenly across all infectious disease research areas, while Europe invests relatively heavily in specific research areas such as Diagnosis_biomarker. Moreover, information on the major coronavirus-related research organizations in these countries was provided by the classification system, thereby enabling the establishment of international collaborative R&D projects.
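
The bidirectional-RNN classification idea can be sketched as a forward pass in plain numpy. The dimensions and weights below are random toy values, not the trained Scopus model, and the dummy "abstract" is just random token embeddings; only the structure (forward pass, backward pass, concatenated final states, softmax over the seven categories) mirrors the described approach.

```python
import numpy as np

# The seven categories from the study.
LABELS = ["Diagnosis_biomarker", "Drug_discovery", "Epidemiology",
          "Evaluation_validation", "Mechanism_signaling pathway",
          "Prediction", "Vaccine_therapeutic antibody"]

rng = np.random.default_rng(42)
emb_dim, hidden = 8, 16
# Toy, untrained weights for the forward and backward RNNs and the output layer.
Wx_f, Wh_f = rng.normal(size=(hidden, emb_dim)), rng.normal(size=(hidden, hidden))
Wx_b, Wh_b = rng.normal(size=(hidden, emb_dim)), rng.normal(size=(hidden, hidden))
W_out = rng.normal(size=(len(LABELS), 2 * hidden))

def rnn_pass(xs, Wx, Wh):
    """Simple tanh RNN; returns the final hidden state."""
    h = np.zeros(hidden)
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def classify(token_embeddings):
    """Run forward and backward passes, concatenate the final states,
    and return the softmax distribution over the seven categories."""
    h_f = rnn_pass(token_embeddings, Wx_f, Wh_f)
    h_b = rnn_pass(token_embeddings[::-1], Wx_b, Wh_b)
    logits = W_out @ np.concatenate([h_f, h_b])
    p = np.exp(logits - logits.max())
    return p / p.sum()

abstract = rng.normal(size=(20, emb_dim))   # 20 dummy token embeddings
probs = classify(abstract)
print(LABELS[int(np.argmax(probs))], float(probs.max()))
```

The trained model would replace the random weights with learned ones and the dummy embeddings with embeddings of project abstracts; the per-category probabilities then drive the country-level investment comparison described above.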