• Title/Abstract/Keyword: consistency of derived data

83 search results (processing time: 0.028 sec)

GOES-9과 MTSAT-1R 위성 간의 일사량 산출의 연속성과 일관성 확보를 위한 구름 감쇠 계수의 조정 (An Adjustment of Cloud Factors for Continuity and Consistency of Insolation Estimations between GOES-9 and MTSAT-1R)

  • 김인환;한경수;염종민
    • 대한원격탐사학회지
    • /
    • Vol.28 No.1
    • /
    • pp.69-77
    • /
    • 2012
  • Surface insolation is one of the most important elements in climate research on the global Earth system. Climate studies require long-term data with wide spatial coverage, derived from two or more satellites observing the same region, so improving the continuity and consistency of surface insolation estimates produced by temporally contiguous but different satellites is very important. In this study, surface insolation was estimated with a physical model for the overlapping observation period of the GOES-9 and MTSAT-1R satellites, and a method for improving inter-satellite continuity and consistency was investigated by comparing the two satellites' channel data and ground measurements. The infrared channel temperatures of the two satellites agreed very well (RMSE=5.595 Kelvin; Bias=2.065 Kelvin), whereas the visible channels showed different value distributions but similar tendencies. The surface insolation estimates from the two satellites agreed poorly with ground measurements. To improve their quality, the insolation products were regenerated after adjusting the cloud attenuation factor, and a cloud attenuation factor for GOES-9 was produced through comparative analysis of the channel data. As a result, the GOES-9 surface insolation product that accounts for the cloud effect showed very high agreement with MTSAT-1R and with ground measurements (RMSE=$83.439W\;m^{-2}$; Bias=$27.296W\;m^{-2}$). The accuracy gained by adjusting the cloud attenuation factor should enhance the continuity and consistency of surface insolation products derived from two or more satellites.
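The agreement statistics quoted above (RMSE and bias between satellite-derived and ground-measured insolation) follow the standard definitions; a minimal sketch in Python, with illustrative values rather than the paper's data:

```python
import math

def rmse_and_bias(estimated, observed):
    """Root-mean-square error and mean bias between paired series."""
    diffs = [e - o for e, o in zip(estimated, observed)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return rmse, bias

# Illustrative surface insolation values in W m^-2 (not the paper's data)
satellite = [520.0, 480.0, 610.0, 300.0]
ground = [500.0, 470.0, 580.0, 310.0]
rmse, bias = rmse_and_bias(satellite, ground)
```

A positive bias indicates the satellite product systematically overestimates the ground measurements.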

AHP 기법을 이용한 농촌 커뮤니티 리질리언스 지표 도출 연구 (Assessing Community Resilience in Rural Regions Using the Analytic Hierarchy Process Method)

  • 김은솔;이재호
    • 농촌계획
    • /
    • Vol.28 No.1
    • /
    • pp.37-47
    • /
    • 2022
  • The purpose of this study is to introduce the concept of community resilience to rural society and to build an index suited to the realities of rural areas. In addition, by calculating the importance of the evaluation factors, the study sought to present priorities and alternatives for each factor. After stratifying the derived indicators, a survey was conducted with 20 experts in three groups (researchers, practitioners, and public officials) who work in rural areas and are well acquainted with their realities and problems. In the survey, factors were compared pairwise (1:1) to calculate their importance, and judgments were made on a nine-point scale for rational and consistent decision-making. With the collected data, AHP analysis was used to calculate the relative weights of the evaluation factors, together with a consistency analysis that evaluates the reliability of the decision-making process. Summarizing the importance of all evaluation factors, the final priority order was: 'Income creation using resources' > 'Population Characteristics' > 'Tolerance' > 'External Support' > 'Social Accessibility' > 'Physical Accessibility' > 'Community Competence' > 'Infrastructure' > 'Leader Competence' > 'Natural Environment'. Studies of urban community resilience indicators have presented social aspects such as citizen participation, public-private cooperation, and governance as the most important requirements; this study differs in that 'income creation' emerged as the most important factor. This can be explained by the changing income gap between rural and urban areas: the income structure of rural areas has changed rapidly and is now at a very poor level, so alternatives for 'income creation' are especially needed in rural areas.
Unlike urban indicators, 'population characteristics' and 'tolerance' were also derived as important indicators for rural society. However, there are currently no measures to address these vulnerabilities by strengthening the resilience of rural communities. Based on the priority indicators derived in this study, we sought to suggest alternatives necessary for rural continuity so that they can be addressed step by step.
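The pairwise-comparison step described above yields priority weights and a consistency ratio (CR); a minimal AHP sketch in Python using power iteration and Saaty's random index, with a hypothetical 3x3 judgment matrix rather than the study's survey data:

```python
def ahp_weights_and_cr(matrix, iterations=100):
    """Priority weights and consistency ratio (CR) for an AHP
    pairwise comparison matrix (supports n = 3, 4, or 5)."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        # power-iteration step: multiply by the matrix, then renormalize
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    # principal eigenvalue estimate: average of (A w)_i / w_i
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)                # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random index
    return w, ci / ri

# Hypothetical 3x3 judgments on Saaty's 1-9 scale (not the survey data)
A = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
weights, cr = ahp_weights_and_cr(A)
```

A CR below 0.1 is conventionally taken as acceptably consistent judgment.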

How to automatically extract 2D deliverables from BIM?

  • Kim, Yije;Chin, Sangyoon
    • 국제학술발표논문집
    • /
    • The 9th International Conference on Construction Engineering and Project Management
    • /
    • pp.1253-1253
    • /
    • 2022
  • Although the construction industry is shifting from 2D-based to 3D BIM-based management processes, 2D drawings are still used as the standard for permits and construction. For this reason, 2D deliverables extracted from 3D BIM are among the essential outputs of BIM projects. However, owing to technical and institutional problems in practice, extracting 2D deliverables from BIM requires additional work beyond generating the 3D BIM models, and the low consistency of data between 3D BIM models and 2D deliverables is a major factor hindering work productivity. Building BIM data that meets the information requirements (IRs) for extracting 2D deliverables would minimize users' workload and maximize the utilization of BIM data, yet the additional drawing-creation work in the BIM process still burdens BIM users. The purpose of this study is therefore to increase the productivity of the BIM process by automating the extraction of 2D deliverables from BIM and by securing data consistency between the BIM model and the 2D deliverables. To this end, expert interviews were conducted and the requirements for automating the extraction of 2D deliverables from BIM were analyzed. Based on these requirements, the types of drawings and drawing expression elements requiring automated generation in the design development stage were derived. Finally, methods for developing automation technology for those elements were classified and analyzed, and a process for automatically extracting BIM-based 2D deliverables through templates and rule-based automation modules was derived.
The automation module was developed as an add-on to Revit, a representative BIM authoring tool, together with 120 rule-based automation rulesets; combinations of these rulesets were used to generate 2D deliverables from BIM automatically. With this approach, about 80% of drawing expression elements could be created automatically, and the user's work process was simplified compared with existing practice. The automation process proposed in this study is expected to increase the productivity of extracting 2D deliverables from BIM and thereby the practical value of BIM utilization.


온톨로지 기반 메타데이터 명명 규칙에 관한 연구 (A Study on the Naming Rules of Metadata based on Ontology)

  • 고영만;서태설
    • 정보관리학회지
    • /
    • Vol.22 No.4 (Serial No.58)
    • /
    • pp.97-109
    • /
    • 2005
  • This study presents a metadata naming methodology for maintaining semantic consistency among metadata when describing information resources, together with an experimental model of naming rules that can be applied in practice. To this end, the metadata registry metamodel and the basic attributes and concepts of data defined in ISO/IEC 11179 are first discussed; on this basis, practical examples of naming rules for object terms, property terms, and representation are presented. Object terms were generated through heuristic analysis based on an entity-relationship (E-R) model of resource types, property terms were named on the basis of the Dublin Core metadata set, and representation was based on SHOE version 1.0.
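The object term + property term + representation naming pattern from ISO/IEC 11179 discussed above can be sketched as a simple name composer; the terms below are hypothetical examples, not taken from the paper:

```python
def data_element_name(object_term, property_term, representation_term, sep=" "):
    """Compose an ISO/IEC 11179-style data element name from its three parts:
    object class term + property term + representation term."""
    return sep.join([object_term, property_term, representation_term])

# Hypothetical terms for illustration only
name = data_element_name("Book", "Title", "Text")
```

Keeping the three parts in a fixed order is what makes names semantically consistent across a metadata registry.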

Multi-view Clustering by Spectral Structure Fusion and Novel Low-rank Approximation

  • Long, Yin;Liu, Xiaobo;Murphy, Simon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol.16 No.3
    • /
    • pp.813-829
    • /
    • 2022
  • In multi-view subspace clustering, integrating the complementary information across views to construct a unified representation is a critical problem. In existing works, the unified representation is usually constructed in the original data space. However, when the data representations of the individual views are very diverse, a unified representation derived directly in the original data domain may incur a huge information loss. To address this issue, and inspired by the recent observation that the data across all views share a very similar or close spectral block structure, we construct the unified representation in the spectral embedding domain instead. In this way, the complementary information across views can be fused into a unified representation with little information loss, since the spectral block structures of all views are highly consistent. In addition, to capture the global structure of the data in each view with both high accuracy and robustness, we propose a novel low-rank approximation via a tight lower bound on the rank function. Finally, experimental results show that the proposed method is both effective and robust compared with state-of-the-art approaches.

노인의 당뇨병 관리 자기효능감 측정도구 개발 및 평가 (Development and Validation of the Diabetes Management Self-efficacy Scale for Older Adults (DMSES-O))

  • 송미순;최수영;김세안;서경산;이수진;김은호
    • 근관절건강학회지
    • /
    • Vol.21 No.3
    • /
    • pp.184-194
    • /
    • 2014
  • Purpose: The purpose of this study was to develop and validate a diabetes management self-efficacy scale for older adults (DMSES-O). Methods: A preliminary DMSES-O of 22 items was derived from a literature review and seven domains of self-management behaviors. Content validity was confirmed by experts in diabetes self-management education. To test the reliability and validity of the DMSES-O, data were collected from 150 older adults with type 2 diabetes and analyzed using exploratory factor analysis, Cronbach's α, and Pearson's correlation coefficients. Results: The exploratory factor analysis yielded 17 significant items in six subscales, named "problem solving for hypoglycemia and self-monitoring of blood glucose," "problem solving for hyperglycemia," "coping with psychological distress and taking medication," "reducing risks of diabetes complications," "appropriate exercise," and "healthy eating." The criterion-related validity of the DMSES-O was established by its correlation with the Summary of Diabetes Self-care Activities Questionnaire. Cronbach's α, a measure of internal consistency, was .84 for the overall scale and ranged from .54 to .80 for the subscales. Conclusion: The DMSES-O is a reliable and valid instrument for measuring self-efficacy in diabetes self-management among older adults.
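The internal-consistency measure reported above, Cronbach's α, can be computed directly from item scores; a minimal sketch with illustrative data (not the DMSES-O responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items[i][j] = score of item i for respondent j."""
    k = len(items)

    def variance(xs):                       # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    n = len(items[0])
    item_var = sum(variance(item) for item in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Illustrative 3-item, 4-respondent Likert scores (not the paper's data)
scores = [[3, 4, 3, 5],
          [2, 4, 3, 5],
          [3, 5, 2, 4]]
alpha = cronbach_alpha(scores)
```

Values of α around .8 or above are usually read as good internal consistency, matching the overall-scale figure reported in the abstract.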

환경 위성관측자료의 통계분석을 통한 동아시아 대기오염특성 연구 (Analysis of Characteristics of Air Pollution Over Asia with Satellite-derived $NO_2$ and HCHO using Statistical Methods)

  • 백강현;김재환
    • 대기
    • /
    • Vol.20 No.4
    • /
    • pp.495-503
    • /
    • 2010
  • Satellite data have an intrinsic problem in that a number of different physical parameters can have similar effects on the measured radiance. Most evaluations of satellite performance have relied on comparisons with ground-based measurements of limited spatial and temporal resolution, such as soundings and in-situ observations. To overcome this problem, we suggest a new way of evaluating satellite data with statistical tools such as empirical orthogonal function (EOF) analysis and singular value decomposition (SVD). EOF analyses of OMI $NO_2$ and OMI HCHO over northeast Asia show that their spatial patterns correlate highly with population density, suggesting that human activity is a major source of $NO_2$ as well as HCHO over this region. This contradicts a previous finding with GOME HCHO that biogenic activity is the main driving mechanism (Fu et al., 2007). To verify the source of HCHO over this region, we performed EOF analyses of the vegetation and HCHO distributions; the results showed no coherence between the two factors in either spatial or temporal pattern. Rather, an additional SVD analysis between $NO_2$ and HCHO showed consistent spatial and temporal coherence. This outcome suggests that anthropogenic emission is the main source of HCHO over the region. We speculate that the earlier result was due to the low temporal and spatial resolution of the GOME measurements or to uncertainty in the model input data.
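EOF analysis, as used above, extracts the leading spatial pattern as the principal eigenvector of the anomaly covariance matrix; a pure-Python sketch with toy anomalies (not the OMI data):

```python
def leading_eof(fields):
    """Leading EOF (principal eigenvector of the spatial covariance matrix)
    via power iteration. fields[t][x] = anomaly at time t, grid point x."""
    nt, nx = len(fields), len(fields[0])
    # spatial covariance matrix of the anomalies
    cov = [[sum(fields[t][i] * fields[t][j] for t in range(nt)) / nt
            for j in range(nx)] for i in range(nx)]
    v = [1.0] * nx
    for _ in range(200):                    # power iteration
        v = [sum(cov[i][j] * v[j] for j in range(nx)) for i in range(nx)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

# Toy anomalies: grid points 0 and 1 co-vary strongly, point 2 is quiet
anoms = [[1.0, 1.0, 0.1],
         [-1.0, -1.0, -0.1],
         [2.0, 2.0, 0.0],
         [-2.0, -2.0, 0.0]]
pattern = leading_eof(anoms)
```

The leading pattern weights the two co-varying points equally and the quiet point near zero, which is how EOF analysis isolates the dominant mode of joint variability.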

기상청 기후자료의 균질성 문제 (II): 통계지침의 변경 (Inhomogeneities in Korean Climate Data (II): Due to the Change of the Computing Procedure of Daily Mean)

  • 류상범;김연희
    • 대기
    • /
    • Vol.17 No.1
    • /
    • pp.17-26
    • /
    • 2007
  • Station relocations, instrument replacements, and changes in the procedure for calculating derived climatic quantities from observations are well-known nonclimatic factors that can seriously contaminate results in climate studies. Before embarking on climatological analysis, therefore, the quality and homogeneity of the data sets should be properly evaluated against metadata. According to the metadata of the Korea Meteorological Administration (KMA), the procedure for computing daily mean values of temperature, humidity, and other variables has changed many times since 1904. For routine climatological work, it is customary to compute approximate daily means for individual days from values observed at fixed hours. At the KMA, the set of fixed hours was changed five times in total: four-hourly intervals, four-hourly intervals with an additional 12:00 observation, eight-hourly, six-hourly, and three-hourly intervals. In this paper, the homogeneity of the KMA daily mean temperature dataset is assessed through the consistency and efficiency of the corresponding point estimators, with the daily mean calculated from 24 hourly readings taken as the potential true value. Approximate daily means computed from temperatures observed at different fixed hours have statistically different properties, so this inhomogeneity in KMA climate data should be kept in mind when analyzing secular aspects of the Korean climate with this data set.
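The effect described above, that approximate daily means from different fixed-hour schedules have different statistical properties, can be illustrated with a synthetic diurnal cycle (not KMA observations):

```python
import math

# Synthetic diurnal cycle with two harmonics (illustrative, not real data)
hourly = [10 + 5 * math.sin(2 * math.pi * (h - 9) / 24)
          + 1.5 * math.sin(2 * math.pi * 3 * h / 24)
          for h in range(24)]

true_mean = sum(hourly) / 24                # mean of all 24 hourly readings

def fixed_hour_mean(obs_hours):
    """Approximate daily mean from readings at fixed observation hours."""
    return sum(hourly[h] for h in obs_hours) / len(obs_hours)

# Two fixed-hour schemes: eight-hourly (3 readings) vs six-hourly (4 readings)
bias_8hourly = fixed_hour_mean([2, 10, 18]) - true_mean
bias_6hourly = fixed_hour_mean([3, 9, 15, 21]) - true_mean
```

For this cycle the six-hourly scheme happens to be unbiased while the eight-hourly one is not, so a switch between schemes introduces exactly the kind of inhomogeneity the paper documents.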

준모수적 방법을 이용한 랜덤 절편 로지스틱 모형 분석 (Semiparametric Approach to Logistic Model with Random Intercept)

  • 김미정
    • 응용통계연구
    • /
    • Vol.28 No.6
    • /
    • pp.1121-1131
    • /
    • 2015
  • Logistic models with a random intercept are widely used for analyzing binary data in medicine and the social sciences. To date, most analyses have assumed that the random intercept follows a parametric model such as the normal distribution and that the covariates and the random intercept are independent. Both assumptions are somewhat unrealistic. In this study, we describe a methodology for logistic models with a nonparametric random intercept, without assuming independence between the covariates and the random intercept, and compare it with the widely used conventional approach. The method is applied to data on the nutritional intake and disease incidence of primary school children in Kenya.

FPGA implementation of overhead reduction algorithm for interspersed redundancy bits using EEDC

  • Kim, Hi-Seok
    • 전기전자학회논문지
    • /
    • Vol.21 No.2
    • /
    • pp.130-135
    • /
    • 2017
  • Normally, in data transmission, extra parity bits derived from the input message and a pre-defined algorithm are appended to the message. The receiver uses the same algorithm to check the consistency of the delivered information and determine whether it has been corrupted; it recovers and compares the received information, matching and correcting any corrupted transmitted bits. This paper has the following objectives: to use an alternative error detection-correction method, and to lessen both the fixed number of redundancy bits 'r' required in cyclic redundancy checking (CRC) because of the polynomial generator, and the overhead of interspersing the r bits in a Hamming code. The experimental results, synthesized on a Xilinx Virtex-5 FPGA, showed a significant increase in both the transmission rate and the detection of random errors. This proposal can thus be a better option for detecting and correcting errors.
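The CRC mechanism referenced above, appending r redundancy bits from a generator polynomial and having the receiver re-run the division, can be sketched in Python; the generator and message below are illustrative, not from the paper:

```python
def crc_remainder(bits, poly):
    """Remainder of polynomial division over GF(2); bits/poly are bit lists."""
    r = len(poly) - 1
    work = bits + [0] * r                   # append r zero bits
    for i in range(len(bits)):
        if work[i]:
            for j in range(len(poly)):      # XOR the generator in
                work[i + j] ^= poly[j]
    return work[-r:]

def crc_encode(bits, poly):
    """Transmitter side: message followed by its r-bit CRC."""
    return bits + crc_remainder(bits, poly)

def crc_check(codeword, poly):
    """Receiver side: remainder of the whole codeword must be all zeros."""
    r = len(poly) - 1
    work = list(codeword)
    for i in range(len(codeword) - r):
        if work[i]:
            for j in range(len(poly)):
                work[i + j] ^= poly[j]
    return not any(work[-r:])

# CRC-3 with generator x^3 + x + 1 (bits 1011), illustrative only
poly = [1, 0, 1, 1]
msg = [1, 0, 1, 1, 0, 1]
codeword = crc_encode(msg, poly)
ok = crc_check(codeword, poly)              # clean transmission
corrupted = codeword[:]
corrupted[2] ^= 1                           # flip one transmitted bit
bad = crc_check(corrupted, poly)
```

Any generator with at least two nonzero terms detects all single-bit errors, which is the property the receiver-side check exploits here.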