• Title/Summary/Keyword: standard platform

Development of Probabilistic Seismic Coefficients of Korea (국내 확률론적 지진계수 생성)

  • Kwak, Dong-Yeop;Jeong, Chang-Gyun;Park, Du-Hee;Lee, Hong-Sung
    • Journal of the Korean Geotechnical Society
    • /
    • v.25 no.10
    • /
    • pp.87-97
    • /
    • 2009
  • The seismic site coefficients are often used with the seismic hazard maps to develop the design response spectrum at the surface. The site coefficients are most commonly developed deterministically, while the seismic hazard maps are derived probabilistically. There is, hence, an inherent incompatibility between the two approaches. However, they are used together in seismic design codes without a clear rational basis. To resolve this fundamental incompatibility between the site coefficients and the hazard maps, this study uses a novel probabilistic seismic hazard analysis (PSHA) technique that simulates the results of a standard PSHA at a rock outcrop but integrates a site response analysis function to capture site amplification effects within the PSHA platform. Another important advantage of the method is its ability to model the uncertainty, variability, and randomness of the soil properties. The new PSHA was used to develop fully probabilistic site coefficients for the site classes of the seismic design code and for another set of site classes proposed in Korea. Comparisons highlight the pronounced discrepancy between the site coefficients of the seismic design code and the proposed coefficients, while the other set of site coefficients shows differences only at selected site classes.
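The paper's central move, carrying site amplification inside the hazard integral, can be illustrated with a toy convolution of a rock hazard curve with a lognormal amplification factor. The sketch below is a minimal illustration, not the authors' implementation; the hazard-curve parameters and amplification statistics are invented for demonstration.

```python
import numpy as np
from scipy import stats

# Toy rock hazard curve: annual rate of exceeding spectral acceleration sa (g).
sa_rock = np.logspace(-3, 0.5, 200)
rate_rock = 1e-2 * (sa_rock / 0.01) ** -2.0          # hypothetical power-law hazard

# Occurrence rate of each rock-motion bin (negative slope of the hazard curve).
d_rate = -np.diff(rate_rock)
sa_mid = np.sqrt(sa_rock[:-1] * sa_rock[1:])          # geometric bin midpoints

# Site amplification modeled as lognormal, capturing soil-property variability.
ln_af_median, ln_af_sigma = np.log(2.0), 0.35          # assumed site-class statistics

def surface_hazard(z):
    """Annual rate of surface SA exceeding z: integrate P(AF > z/sa) over rock bins."""
    p_exceed = 1.0 - stats.norm.cdf(np.log(z / sa_mid), ln_af_median, ln_af_sigma)
    return np.sum(p_exceed * d_rate)

# Fully probabilistic site coefficient at a given return period, e.g. 2475 years:
target_rate = 1.0 / 2475
z_grid = np.logspace(-3, 1, 400)
haz = np.array([surface_hazard(z) for z in z_grid])
sa_surface = np.interp(-target_rate, -haz, z_grid)     # hazard decreases with z
sa_rock_uhs = np.interp(-target_rate, -rate_rock, sa_rock)
print("probabilistic site coefficient ~", sa_surface / sa_rock_uhs)
```

The coefficient is the ratio of the surface uniform-hazard spectrum to the rock one at the same exceedance rate, which is the probabilistic counterpart of a deterministic amplification factor.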

Application of peak based-Bayesian statistical method for isotope identification and categorization of depleted, natural and low enriched uranium measured by LaBr3:Ce scintillation detector

  • Haluk Yucel;Selin Saatci Tuzuner;Charles Massey
    • Nuclear Engineering and Technology
    • /
    • v.55 no.10
    • /
    • pp.3913-3923
    • /
    • 2023
  • Today, medium energy resolution detectors are preferred in radioisotope identification devices (RID) for the categorization of nuclear and radioactive material. However, there is still a need to develop or enhance "automated identifiers" for useful RID algorithms. To decide whether a material is SNM (special nuclear material) or NORM (naturally occurring radioactive material), a key parameter is the energy resolution of the detector. Although masking, shielding, gain shift/stabilization, and other on-site parameters are also important for successful operation, the suitability of the RID algorithm is a critical point for enhancing identification reliability when extracting features from the spectral analysis. In this study, an RID algorithm based on a Bayesian statistical method has been modified for medium energy resolution detectors and applied to uranium gamma-ray spectra taken by a LaBr3:Ce detector. The present Bayesian RID algorithm covers the energy range up to 2000 keV. It uses the peak centroids and peak areas from the measured gamma-ray spectra. The extracted features feed peak-based Bayesian classifiers that estimate a posterior probability for each isotope in the ANSI library. The program operations were tested on a MATLAB platform. The present peak-based Bayesian RID algorithm was validated using single isotopes (241Am, 57Co, 137Cs, 54Mn, 60Co), and then applied to five standard nuclear materials (0.32-4.51 at.% 235U), as well as natural U- and Th-ores. The identification performance of the RID algorithm was quantified in terms of the F-score for each isotope. The posterior probability is calculated to be 54.5-74.4% for 238U and 4.7-10.5% for 235U in EC-NRM171 uranium materials. For the more complex gamma-ray spectra from CRMs, the total scoring (ST) method was preferred for performance evaluation. It was shown that the present peak-based Bayesian RID algorithm can be applied to identify 235U and 238U isotopes in LEU or natural U-Th samples if a medium energy resolution detector is used in the measurements.
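The peak-based Bayesian step can be sketched as scoring each library isotope by how well its known line energies explain the measured peak centroids. This is a minimal illustration assuming Gaussian centroid errors tied to detector resolution; the small line library and resolution figure below are illustrative stand-ins, not the paper's ANSI library.

```python
import numpy as np

# Illustrative line library (keV); a real system would use the full ANSI library.
LIBRARY = {
    "Cs-137": [661.7],
    "Co-60":  [1173.2, 1332.5],
    "Am-241": [59.5],
    "U-235":  [143.8, 185.7, 205.3],
}

def posterior(measured_peaks, resolution_kev=15.0):
    """Posterior over isotopes from measured peak centroids (flat prior).

    Each measured peak is matched to the nearest library line of an isotope,
    with a Gaussian likelihood whose width reflects a medium energy resolution.
    """
    scores = {}
    for isotope, lines in LIBRARY.items():
        log_like = 0.0
        for e in measured_peaks:
            d = min(abs(e - line) for line in lines)   # nearest-line residual
            log_like += -0.5 * (d / resolution_kev) ** 2
        scores[isotope] = np.exp(log_like)
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

print(posterior([185.9, 143.5]))   # should strongly favor U-235
```

A production classifier would also weight peak areas by branching ratios and detector efficiency; this sketch keeps only the centroid-matching core.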

Design of Ship-type Floating LiDAR Buoy System for Wind Resource Measurement in the Korean West Sea and Numerical Analysis of Stability Assessment of Mooring System (서해안 해상풍력단지 풍황관측용 부유식 라이다 운영을 위한 선박형 부표식 설계 및 계류 시스템의 수치 해석적 안정성 평가)

  • Gang, Yong-Soo;Kim, Jong-Kyu;Lee, Baek-Bum;Yang, Su-In;Kim, Jong-Wook
    • Journal of Navigation and Port Research
    • /
    • v.46 no.6
    • /
    • pp.483-490
    • /
    • 2022
  • Floating LiDAR is a system that provides a new paradigm for wind condition observation, which is essential when creating an offshore wind farm. As it can save time and money, minimize environmental impact, and even reduce backlash from local communities, it is emerging as the industry standard. However, the design and verification of a stable platform are very important, as disturbances caused by the motions of the buoy affect the reliability of the observation data. In Korea, owing to the nation's late entry into the technology, foreign equipment manufacturers dominate the domestic market. The west coast of Korea is a shallow-sea environment with a very large tidal range, so strong currents appear repeatedly depending on the region, and waves of strong energy that differ by season are formed. This paper studies buoys suitable for LiDAR operation in the waters of Korea, which have such complex environmental characteristics. We introduce the optimized design and verification of the ship-type buoy, which was applied first, and derive important concepts that will serve as the basis for the development of various platforms in the future.

Prelaunch Study of Validation for the Geostationary Ocean Color Imager (GOCI) (정지궤도 해색탑재체(GOCI) 자료 검정을 위한 사전연구)

  • Ryu, Joo-Hyung;Moon, Jeong-Eon;Son, Young-Baek;Cho, Seong-Ick;Min, Jee-Eun;Yang, Chan-Su;Ahn, Yu-Hwan;Shim, Jae-Seol
    • Korean Journal of Remote Sensing
    • /
    • v.26 no.2
    • /
    • pp.251-262
    • /
    • 2010
  • In order to provide quantitative control of the standard products of the Geostationary Ocean Color Imager (GOCI), on-board radiometric correction, atmospheric correction, and bio-optical algorithms are maintained continuously through comprehensive and consistent calibration and validation procedures. The calibration/validation of GOCI radiometric, atmospheric, and bio-optical data uses temperature, salinity, ocean optics, fluorescence, and turbidity data sets from buoy and platform systems, together with periodic oceanic environmental data. For calibration and validation of GOCI, we compared radiometric data between in-situ measurements and the HyperSAS instrument installed on the Ieodo ocean research station, and between HyperSAS and SeaWiFS radiances. HyperSAS data differed slightly from the in-situ radiance and irradiance, but showed no spectral shift in the absorption bands. Although the radiance bands compared between HyperSAS and SeaWiFS had an average error of 25%, the absolute error fell to 11% when the atmospheric correction bands were omitted. This error is related to the SeaWiFS standard atmospheric correction process and must be considered and reduced in the calibration and validation of GOCI. A reference target site around Dokdo Island was used for the calibration and validation study. In-situ ocean-optical and bio-optical data were collected during August and October 2009. Reflectance spectra around Dokdo Island showed the optical characteristics of Case-1 water. Absorption spectra of chlorophyll, suspended matter, and dissolved organic matter also showed their expected spectral characteristics. MODIS Aqua-derived chlorophyll-a concentration was well correlated with the in-situ fluorometer readings from the Dokdo buoy. As the problems of radiometric, atmospheric, and bio-optical correction are solved, the quality of GOCI calibration and validation can be progressively improved.
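The band-matching comparison reported here (25% average error over all bands, 11% when the atmospheric-correction bands are excluded) amounts to a band-wise mean absolute percent difference. A minimal sketch, with fabricated radiance values and band lists purely for illustration:

```python
import numpy as np

bands_nm   = np.array([412, 443, 490, 555, 660, 680, 745, 865])
l_hypersas = np.array([1.92, 1.71, 1.33, 0.88, 0.42, 0.38, 0.21, 0.09])  # fabricated
l_seawifs  = np.array([1.60, 1.45, 1.20, 0.80, 0.40, 0.36, 0.14, 0.05])  # fabricated

def mean_abs_percent_error(a, b, exclude=()):
    """Band-wise mean absolute percent difference, optionally dropping bands."""
    keep = ~np.isin(bands_nm, exclude)
    return 100 * np.mean(np.abs(a[keep] - b[keep]) / b[keep])

print("all bands:", mean_abs_percent_error(l_hypersas, l_seawifs))
# The NIR bands used by the standard atmospheric correction dominate the error:
print("excluding 745/865 nm:",
      mean_abs_percent_error(l_hypersas, l_seawifs, exclude=(745, 865)))
```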

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing ontology development methodologies had to be chosen. The most important considerations for selecting the ontology development methodology for GSO included whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it gives a sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to the building of GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology. METHONTOLOGY describes a very detailed approach for building an ontology at the conceptual level under a centralized development environment. The methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of OWL's computational guarantees for consistency checking and classification, which are crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used owing to its platform independence. Based on the researchers' experience developing GSO, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts who have no ontology construction experience can easily build ontologies; however, it is still difficult for such domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology. Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin an ontology building project, but also that the project will be successful. Third, METHONTOLOGY excludes an explanation of the use and integration of existing ontologies; if an additional stage for considering reuse were introduced, developers could share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain the allocation of specific tasks to different developer groups and how to combine these tasks once the assigned jobs are completed. Fifth, METHONTOLOGY does not sufficiently describe the methods and techniques to be applied in the conceptualization stage; introducing methods of concept extraction from multiple informal sources, or methods of identifying relations, may enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE correctly transforms a conceptual ontology into a formal ontology, nor does it guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs to add criteria for user evaluation of the actual use of the constructed ontology in user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition during the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; thus, it can be considered a heavy methodology. Adopting an agile methodology would reinforce active communication among developers and reduce the burden of documentation. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed, and the study affords insights for researchers who want to design a more advanced ontology development methodology.
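To make the OWL-DL choice concrete, here is a minimal sketch of how a fragment of a graduation-screen ontology could be expressed. It uses Python's rdflib rather than Protégé-OWL, and the class and property names are invented for illustration, not taken from the actual GSO.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

GSO = Namespace("http://example.org/gso#")   # hypothetical namespace
g = Graph()
g.bind("gso", GSO)

# Class hierarchy: courses, with required courses as a subclass.
g.add((GSO.Student, RDF.type, OWL.Class))
g.add((GSO.Course, RDF.type, OWL.Class))
g.add((GSO.RequiredCourse, RDF.type, OWL.Class))
g.add((GSO.RequiredCourse, RDFS.subClassOf, GSO.Course))

# An object property linking students to completed courses.
g.add((GSO.hasCompleted, RDF.type, OWL.ObjectProperty))
g.add((GSO.hasCompleted, RDFS.domain, GSO.Student))
g.add((GSO.hasCompleted, RDFS.range, GSO.Course))

# A datatype property for earned credits.
g.add((GSO.earnedCredits, RDF.type, OWL.DatatypeProperty))
g.add((GSO.earnedCredits, RDFS.range, XSD.integer))

# One individual, for a quick sanity check of the model.
g.add((GSO.alice, RDF.type, GSO.Student))
g.add((GSO.alice, GSO.earnedCredits, Literal(120, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```

In a METHONTOLOGY workflow this fragment would be the output of the implementation stage, produced after the conceptualization tables have fixed the concept and relation inventory.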

A Research on the Regulations and Perception of Interactive Game in Data Broadcasting: Special Emphasis on the TV-Betting Game (데이터방송 인터랙티브 게임 규제 및 이용자 인식에 관한 연구: 승부게임을 중심으로)

  • Byun, Dong-Hyun;Jung, Moon-Ryul;Bae, Hong-Seob
    • Korean journal of communication and information
    • /
    • v.35
    • /
    • pp.250-291
    • /
    • 2006
  • This study examines the regulatory issues and introduction problems of TV-betting data broadcasts in Korea through in-depth interviews with a panel group. TV-betting data broadcast services for card games and horse racing games are widely used in Europe and other parts of the world. To carry out the study, a demo program of a TV-betting data broadcast in the OCAP (OpenCable™ Application Platform Specification) system environment, which is the data broadcasting standard for digital cable broadcasts in Korea, was shown to the panel group, who were then interviewed after watching and using the program. The results can be summarized as follows. First of all, while TV-betting data broadcasts have many entertainment elements, the respondents thought that it would be difficult to introduce TV-betting in data broadcasts as in overseas countries, largely due to social factors. In addition, for TV-betting data broadcasts to be introduced, they suggested that excessive speculativeness must be suppressed through a series of regulatory devices: guaranteeing the credibility of the medium with secure transaction systems, scheduling programs with effective time constraints to prevent the games from running too frequently, limiting betting values, and prohibiting access to games through the set-top boxes of other data broadcast subscribers. The general consensus was that TV-betting could be considered for gradual introduction within governmental laws and regulations that would minimize its ill effects. Therefore, the government should formulate long-term regulations and policies for data broadcasts. Once the groundwork is laid for the safe introduction of TV-betting on data broadcasts within the boundary of laws and regulations, interactive TV games are expected to be introduced in Korea not only as an added entertainment function but also for the far-ranging development of the data broadcast and new media industries.

On the Meteorological Requirements for the Geostationary Communication, Oceanography, and Meteorological Satellite (정지궤도 통신해양기상위성의 기상분야 요구사항에 관하여)

  • Ahn, Myung-Hwan;Kim, Kum-Lan
    • Atmosphere
    • /
    • v.12 no.4
    • /
    • pp.20-42
    • /
    • 2002
  • Based on the "Mid to Long Term Plan for Space Development", a project to launch COMeS (Communication, Oceanography, and Meteorological Satellite) into geostationary orbit is under way. Accordingly, KMA (Korea Meteorological Administration) has defined the meteorological missions and prepared the user requirements to fulfill them. To make the user requirements realistic, we prepared a first draft based on the ideal meteorological products derivable from a geostationary platform and sent an RFI (request for information) to sensor manufacturers. Based on the responses to the RFI and other considerations, we revised the user requirements into a realistic plan for the 2008 launch of the satellite. This manuscript briefly introduces the revised user requirements. The major mission defined therein is the augmentation of the ability to detect and predict severe weather phenomena, especially around the Korean Peninsula. The required payload is an enhanced Imager, which includes the major observation channels of the current geostationary sounder. To derive the required meteorological products from the Imager, at least 12 channels are required, with an optimum of 16 channels. The minimum 12 channels comprise the 6 wavelength bands used by current geostationary satellites plus additional channels in two visible bands, a near-infrared band, two water vapor bands, and one ozone absorption band. From these enhanced channel observations, we intend to derive and utilize information on water vapor, stability indices, and wind fields, and to analyze special weather phenomena such as yellow sand (Asian dust) events, in addition to the standard products derived from current geostationary Imager data. For better temporal coverage, the Imager is required to acquire full-disk data within 15 minutes and to have a rapid scan mode for limited-area coverage. The required threshold spatial resolutions are 1 km and 2 km for the visible and infrared channels, respectively, while the target resolutions are 0.5 km and 1 km.
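The channel and resolution requirements above fold naturally into a small configuration table. The sketch below restates only what the abstract gives (6 heritage bands plus 6 additions; 1 km/2 km threshold and 0.5 km/1 km target resolutions); specific wavelengths are deliberately omitted because the abstract does not list them.

```python
# Minimal restatement of the imager requirements described in the abstract.
IMAGER_REQUIREMENTS = {
    "channels_minimum": 12,
    "channels_optimum": 16,
    "minimum_channel_mix": {
        "heritage_geostationary_bands": 6,
        "visible": 2,
        "near_infrared": 1,
        "water_vapor": 2,
        "ozone_absorption": 1,
    },
    "resolution_km": {
        "visible":  {"threshold": 1.0, "target": 0.5},
        "infrared": {"threshold": 2.0, "target": 1.0},
    },
    "full_disk_minutes": 15,
    "rapid_scan_mode": True,
}

# Sanity check: the stated channel mix must add up to the 12-channel minimum.
assert sum(IMAGER_REQUIREMENTS["minimum_channel_mix"].values()) == \
       IMAGER_REQUIREMENTS["channels_minimum"]
```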

On Method for LBS Multi-media Services using GML 3.0 (GML 3.0을 이용한 LBS 멀티미디어 서비스에 관한 연구)

  • Jung, Kee-Joong;Lee, Jun-Woo;Kim, Nam-Gyun;Hong, Seong-Hak;Choi, Beyung-Nam
    • Korea Spatial Information System Society: Conference Proceedings (한국공간정보시스템학회 학술대회논문집)
    • /
    • 2004.12a
    • /
    • pp.169-181
    • /
    • 2004
  • SK Telecom constructed the GIMS system in 2002 as the common base framework of its LBS/GIS service system, based on the OGC (OpenGIS Consortium) international standard, for the first mobile vector map service. But as service content has grown more complex, a renovation has been needed to satisfy multi-purpose, multi-function, and maximum-efficiency requirements. This research prepares a GML 3.0-based platform to upgrade the GIMS system from its GML 2 base, so that a variety of application services can obtain location and geographic data easily and freely. From GML 3.0, animation, event handling, resources for style mapping, topology specification for 3D, and telematics services were selected for the mobile LBS multimedia service, and a schema and transfer protocol were developed and organized to optimize data transfer to the MS (Mobile Station). The upgrade to a GML 3.0-based GIMS system has provided an innovative framework, from the viewpoint of both construction and service, which has been implemented and applied to previous research and systems. Also, a GIMS channel interface has been implemented to simplify access to the GIMS system, and the service components of the GIMS internals, WFS and WMS, have been enhanced and their functions expanded.
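As a concrete reference point for the GML 2 to GML 3.0 shift mentioned above, the sketch below builds a minimal GML 3.0 point feature with Python's standard library; in GML 3, gml:pos supersedes GML 2's gml:coordinates. The coordinate value and CRS are illustrative only.

```python
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml"
ET.register_namespace("gml", GML)

# A minimal GML 3.0 point: the space-separated gml:pos element replaces
# GML 2's comma-separated gml:coordinates.
point = ET.Element(f"{{{GML}}}Point", srsName="urn:ogc:def:crs:EPSG::4326")
pos = ET.SubElement(point, f"{{{GML}}}pos")
pos.text = "37.5665 126.9780"   # latitude longitude (Seoul, for illustration)

print(ET.tostring(point, encoding="unicode"))
```

A server optimizing transfer to a mobile station would emit such fragments inside a feature collection and compress or binary-encode them on the wire; the geometry encoding itself is what the GML 3.0 schema fixes.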

A study on developing a new self-esteem measurement test adopting DAP and drafting the direction of digitalizing measurement program of DAP (청소년 자존감 DAP 인물화 검사 개발 및 디지털화 측정 시스템 방향성 연구)

  • Woo, Sungju;Park, Chongwook
    • Journal of the HCI Society of Korea
    • /
    • v.8 no.1
    • /
    • pp.1-9
    • /
    • 2013
  • This study develops a new way of testing self-esteem by adopting the DAP (Draw a Person) test and builds a platform to digitalize it for young people in the adolescent stage. The approach aims at highly effective self-esteem measurement using the DAP test, including personal inner situations that can easily be missed in large statistical analyses. The other objective is to digitalize the test to overcome the limits of the DAP test's subjective rating standard; this is based on the distribution of figure-drawing features expressed numerically by Handler's anxiety index. For these two examinations, we ran a four-stage experiment with 73 second-grade middle school students from July 30 to October 31, 2009, over four months. First, we administered a 'Self Values Test' to all 73 students and divided them into two groups: a high self-esteem group of 36 and a low self-esteem group of 37. Second, we regrouped them following the D (Depression), Pd (Psychopathic Deviate), and Sc (Schizophrenia) scales of the MMPI into a high self-esteem group of 7 and a low self-esteem group of 13. Third, we conducted the DAP test separately for these 20 students. We intended to verify the necessity and appropriateness of the direction of the 'Digitalizing Measurement System' by comparing and analyzing the relation between DAP and self-esteem following evaluation criteria shared across the three tests, after administering the DAP so as to reflect the peculiarities of adolescents sufficiently. We then compared and analyzed results from sampled DAP tests of two pairs of students (two with high and two with low self-esteem) to confirm whether the limitations of the original psychological testing can be improved by comparing the mutual reliability of the measurement tests. Finally, from the correlations between self-esteem and melancholia obtained through the above steps, we found it possible to derive concrete, individual evaluation criteria based on an Expert System as a way of enhancing accessibility in a quantitative manner. The proposed 'Digitalizing Measurement Program' for the DAP test improves the reliability of results relative to existing tests and measurements.
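The digitalizing step rests on turning rated drawing features into a numeric score via Handler's anxiety index. As a rough sketch of that aggregation, assuming a simple checklist of rated indicators; the indicator names and weights below are invented placeholders, not Handler's published scoring list.

```python
# Hypothetical indicator ratings for one drawing, each rated 0 (absent) to 2 (marked).
ratings = {"heavy_shading": 2, "line_reinforcement": 1,
           "erasures": 0, "size_distortion": 1}

# Invented weights standing in for the published per-indicator scoring values.
weights = {"heavy_shading": 1.5, "line_reinforcement": 1.0,
           "erasures": 1.0, "size_distortion": 0.5}

# The index is a weighted sum over graphic anxiety signs; a digital system
# would compute the ratings from image features instead of a human rater.
anxiety_index = sum(weights[k] * v for k, v in ratings.items())
print("anxiety index:", anxiety_index)   # higher = more graphic anxiety signs
```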

Probabilistic Anatomical Labeling of Brain Structures Using Statistical Probabilistic Anatomical Maps (확률 뇌 지도를 이용한 뇌 영역의 위치 정보 추출)

  • Kim, Jin-Su;Lee, Dong-Soo;Lee, Byung-Il;Lee, Jae-Sung;Shin, Hee-Won;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.36 no.6
    • /
    • pp.317-324
    • /
    • 2002
  • Purpose: The use of the statistical parametric mapping (SPM) program has increased for the analysis of brain PET and SPECT images. The Montreal Neurological Institute (MNI) coordinate system is used in the SPM program as a standard anatomical framework. While most researchers look up the Talairach atlas to report the localization of activations detected in the SPM program, there is a significant disparity between the MNI templates and the Talairach atlas. That disparity between Talairach and MNI coordinates makes the interpretation of SPM results time consuming, subjective, and inaccurate. The purpose of this study was to develop a program to provide objective anatomical information for each x-y-z position in the ICBM coordinate system. Materials and Methods: The program was designed to provide the anatomical information for a given x-y-z position in MNI coordinates based on the Statistical Probabilistic Anatomical Map (SPAM) images of ICBM. When an x-y-z position is given to the program, the names of the anatomical structures with non-zero probability and the probabilities that the given position belongs to those structures are tabulated. The program was coded in the IDL and JAVA languages for easy porting to any operating system or platform. The utility of this program was shown by comparing its results to those of the SPM program. A preliminary validation study was performed by applying the program to the analysis of a PET brain activation study of human memory in which the anatomical information on the activated areas was previously known. Results: Real-time retrieval of probabilistic information with 1 mm spatial resolution was achieved using the programs. The validation study showed the relevance of the program: the probability that the activated area for memory belonged to the hippocampal formation was more than 80%. Conclusion: These programs will be useful for interpreting the results of image analyses performed in MNI coordinates, as done in the SPM program.
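The core lookup, from an MNI millimeter coordinate to per-structure probabilities, can be sketched as an affine transform into voxel indices followed by reads from each probability volume. The sketch below uses Python with nibabel rather than the authors' IDL/JAVA implementation, and the file names and example coordinate are placeholders.

```python
import numpy as np
import nibabel as nib

# Placeholder paths: one probability volume per structure in the SPAM set.
SPAM_VOLUMES = {
    "hippocampal formation": "spam_hippocampus.nii.gz",
    "parahippocampal gyrus": "spam_parahippocampal.nii.gz",
}

def lookup(x_mm, y_mm, z_mm):
    """Tabulate structures with non-zero probability at an MNI position (mm)."""
    rows = []
    for name, path in SPAM_VOLUMES.items():
        img = nib.load(path)
        # Invert the image affine to map MNI mm coordinates to voxel indices.
        i, j, k = np.linalg.inv(img.affine).dot([x_mm, y_mm, z_mm, 1.0])[:3]
        p = img.get_fdata()[int(round(i)), int(round(j)), int(round(k))]
        if p > 0:
            rows.append((name, p))
    return sorted(rows, key=lambda r: -r[1])

for name, p in lookup(-24.0, -20.0, -14.0):   # an illustrative coordinate
    print(f"{name}: {100 * p:.1f}%")
```

With the volumes memory-mapped and cached, a lookup of this kind is a handful of array reads, which is consistent with the real-time 1 mm retrieval the paper reports.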