• Title/Summary/Keyword: format validation (50 search results)

Evidence-Based Clinical Practice Guideline for Fluid Therapy to Prevent Contrast-induced Nephropathy (조영제 유발 신장병증 예방을 위한 수액요법에 관한 근거기반 임상실무지침 개발)

  • Lee, Kyung Hae;Shin, Kyung Min;Lee, Hyeon Jeong;Kim, So Young;Chae, Jung Won;Kim, Mi Ra;Han, Min Young;Ahn, Mi Sook;Park, Jin Kyung;Chung, Mi Ae;Chu, Sang Hui;Hwang, Jung Hwa
    • Journal of Korean Clinical Nursing Research / v.23 no.1 / pp.83-90 / 2017
  • Purpose: This study was conducted to develop an evidence-based clinical practice guideline for preventing contrast-induced nephropathy (CIN) in patients undergoing percutaneous coronary intervention (PCI). Methods: The guideline was developed following the Scottish Intercollegiate Guidelines Network (SIGN) methodology. The first draft was developed through five stages and evaluated by 10 experts. (1) Clinical questions were formulated in PICO format. (2) Two researchers conducted a systematic search of electronic databases, identifying 170 studies; 27 full-text articles were selected, comprising 16 randomized clinical trials, 7 systematic reviews, and 4 guidelines. The quality of each study was evaluated with the Cochrane Risk of Bias tool, AMSTAR, and K-AGREE II, and 11 studies were excluded. (3) The strength of each recommendation was classified and its quality graded. (4) The guideline draft was finalized. (5) Content validation was conducted by an expert group; all items scored above 0.8 on the CVI. Results: An evidence-based clinical practice guideline for preventing CIN was developed. (1) The guideline recommends using 0.9% saline. (2) The standard fluid rate is 1 to 1.5 ml/kg/hr. (3) Hydration is administered for 6-12 hours both before and after PCI. Conclusion: This study provides an evidence-based clinical practice guideline for preventing CIN that can be used efficiently in clinical practice.
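
The guideline's dosing arithmetic is simple enough to show as a worked example. Below is a minimal, illustrative Python helper (the function name and range check are ours, not part of the guideline) that computes total hydration volume from the recommended rate and duration:

```python
def hydration_volume_ml(weight_kg: float, rate_ml_per_kg_hr: float, hours: float) -> float:
    """Total 0.9% saline volume for CIN-preventive hydration.

    The guideline's standardized rate is 1.0-1.5 ml/kg/hr, given for
    6-12 hours before PCI and again after PCI.
    """
    if not 1.0 <= rate_ml_per_kg_hr <= 1.5:
        raise ValueError("rate outside the guideline range of 1.0-1.5 ml/kg/hr")
    return weight_kg * rate_ml_per_kg_hr * hours

# Example: a 70 kg patient hydrated at 1.0 ml/kg/hr for 12 hours receives 840 ml.
print(hydration_volume_ml(70, 1.0, 12))  # 840.0
```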

The Validation Study of Normality Distribution of Aquatic Toxicity Data for Statistical Analysis (수생태 독성자료의 정규성 분포 특성 확인을 통해 통계분석 시 분포 특성 적용에 대한 타당성 확인 연구)

  • OK, Seung-yeop;Moon, Hyo-Bang;Ra, Jin-Sung
    • Journal of Environmental Health Sciences / v.45 no.2 / pp.192-202 / 2019
  • Objectives: According to the central limit theorem, a sample from a population may be considered to follow a normal distribution when a large number of observations is available. Once we assume a toxicity dataset follows a normal distribution, we can process it statistically to calculate genus or species mean values with standard deviations. However, little is known, and only limited studies have investigated, whether toxicity datasets actually follow a normal distribution. The purpose of this study is therefore to evaluate the generally accepted normality hypothesis for aquatic toxicity datasets. Methods: We selected eight chemicals, four organic and four inorganic compounds, considering data availability for the development of species sensitivity distributions. Toxicity data were collected from the US EPA ECOTOX Knowledgebase by a simple search for the target chemicals and re-arranged into a proper format based on endpoint and test duration, after which normality was tested with the Shapiro-Wilk test. We also investigated the degree of normality after a simple log transformation of the toxicity data. Results: Despite the central limit theorem, only one of the 25 large datasets (n>25) followed a normal distribution. After log transformation, seven more large datasets showed normality. Normality tests on the small datasets (n<25) likewise showed that log transformation of toxicity values generally increases normality. Organic and inorganic chemicals showed gains in normality for 26 and 30 species, respectively. These 56 species, which gained normality through log transformation, fall into the taxonomic groups amphibian (1), crustacean (21), fish (22), insect (5), rotifer (2), and worm (5). In contrast, mollusca lost normality for 1 of the 23 species that originally showed it. Conclusions: Large toxicity datasets did not always satisfy the normality expected from the central limit theorem, but their normality could be improved through log transformation. Therefore, care should be taken when using toxicity data to derive, for example, mean values for risk assessment.
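
The study's core procedure, a Shapiro-Wilk test on raw versus log-transformed toxicity values, is straightforward to reproduce. A minimal sketch with SciPy; the function name and the synthetic lognormal sample are ours for illustration:

```python
import numpy as np
from scipy import stats

def normality_before_after_log(toxicity_values, alpha=0.05):
    """Shapiro-Wilk test on raw and log-transformed toxicity values.

    Returns (raw_is_normal, log_is_normal) at significance level alpha.
    """
    raw = np.asarray(toxicity_values, dtype=float)
    _, p_raw = stats.shapiro(raw)
    _, p_log = stats.shapiro(np.log10(raw))  # toxicity endpoints are positive
    return p_raw > alpha, p_log > alpha

# A right-skewed (lognormal-like) sample often fails the raw test but passes
# after log transformation, mirroring the pattern the study reports.
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=1.0, sigma=1.0, size=30)
print(normality_before_after_log(sample))  # typically (False, True)
```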

Building Large-scale CityGML Feature for Digital 3D Infrastructure (디지털 3D 인프라 구축을 위한 대규모 CityGML 객체 생성 방법)

  • Jang, Hanme;Kim, HyunJun;Kang, HyeYoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.3 / pp.187-201 / 2021
  • Recently, demand is increasing for a 3D urban spatial information infrastructure to store, operate, and analyze the large volumes of digital data produced in cities. CityGML is a 3D spatial information data standard of the OGC (Open Geospatial Consortium) with strengths in the exchange and attribute expression of city data. Several cities, such as Singapore and New York, have constructed 3D urban spatial data in CityGML format. However, the current ecosystem for creating and editing CityGML data limits large-scale construction because it lacks the completeness of commercial 3D modeling programs such as SketchUp or 3ds Max. Therefore, this study proposes a method of constructing CityGML data from commercial 3D mesh data and 2D polygons that are produced rapidly and automatically by aerial LiDAR (Light Detection and Ranging) or RGB (Red Green Blue) cameras. During data construction, the original 3D mesh data were geometrically transformed so that each object could be expressed at various CityGML LoDs (Levels of Detail), and attribute information extracted from the 2D spatial data was added to increase their utility as spatial information. The 3D city features produced in this study are the CityGML building, bridge, cityFurniture, road, and tunnel. Data conversion and attribute construction methods are presented for each feature, and visualization and validation were conducted.
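
For readers unfamiliar with the target format, here is a hedged sketch of a minimal CityGML 2.0 building member generated programmatically with lxml; the LoD geometry is omitted and the attribute values are invented for illustration:

```python
from lxml import etree

# Namespaces for the CityGML 2.0 core and building modules.
NS = {
    "core": "http://www.opengis.net/citygml/2.0",
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gml": "http://www.opengis.net/gml",
}

def q(prefix: str, tag: str) -> str:
    """Build a Clark-notation qualified tag name."""
    return f"{{{NS[prefix]}}}{tag}"

# Root CityModel holding one Building whose attributes would come from 2D data.
city_model = etree.Element(q("core", "CityModel"), nsmap=NS)
member = etree.SubElement(city_model, q("core", "cityObjectMember"))
building = etree.SubElement(member, q("bldg", "Building"))
building.set(q("gml", "id"), "BLDG_0001")

name = etree.SubElement(building, q("gml", "name"))
name.text = "Building enriched with attributes from a 2D polygon layer"

height = etree.SubElement(building, q("bldg", "measuredHeight"), uom="m")
height.text = "12.5"

print(etree.tostring(city_model, pretty_print=True).decode())
```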

Development of IFC Standard for Securing Interoperability of BIM Data for Port Facilities (항만 BIM 데이터의 상호운용성 확보를 위한 IFC 표준 개발)

  • Moon, Hyoun-Seok;Won, Ji-Sun;Shin, Jae-Young
    • Journal of KIBIM / v.10 no.1 / pp.9-22 / 2020
  • Recently, BIM has been extended to infrastructure such as roads and bridges, and demand for BIM standard development for ports is increasing internationally. Because ports make less use of classification systems and drawing standards than other infrastructure, and are closed national security facilities, they offer a poor environment for connection and sharing among external systems and users. In addition, since data for port facilities are not standardized, each system still needs its own DB, and interoperability between these systems must be secured because no shared environment exists among similar data. The purpose of this study is therefore to develop and verify IFC, the international BIM standard, so that it can respond to the BIM environment and be used in common across the design, construction, and maintenance of port facilities. To this end, we built a standard schema in port-specific EXPRESS notation following buildingSMART International's standard development methodology. First, domestic and international reference model standards were analyzed to derive components, such as spaces and facilities, of port facilities. The components were then derived through the codification, categorization, and normalization process developed by the research team and extended based on the port BIM object classification system the team had developed. The normalization results were verified by designers and associations. IFC schema construction then followed EXPRESS-G data modeling based on IFC4x2 Candidate, a bridge candidate standard built on IFC4 (ISO 16739), and the IFC4x3 Draft being developed by buildingSMART International. The final schema was validated using a commercial validation tool. In addition, to verify the structure of the port IFC schema, a caisson model was converted into a Part 21 file and the transformation process was checked. In the future, this result will be used not only as a delivery standard for port BIM products, but also as a linkage standard between systems and a common data format for port BIM platforms when BIM is used in the maintenance phase. In particular, it is expected to serve as a core standard for data exchange in the port maintenance stage.
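
As an illustration of the Part 21 round trip the authors use for structural verification, a minimal sketch with the open-source IfcOpenShell library is shown below; it targets the base IFC4 schema, since the port-specific IFC4x3 extensions are not reproduced here, and the entity content is illustrative:

```python
import ifcopenshell

# Create an empty model against the IFC4 schema and add a project entity.
model = ifcopenshell.file(schema="IFC4")
model.create_entity("IfcProject", GlobalId=ifcopenshell.guid.new(),
                    Name="Port caisson test")

# Serialize to a STEP Part 21 (.ifc) file, the exchange format checked in the study.
model.write("caisson.ifc")

# Re-open and inspect to confirm the round trip.
reopened = ifcopenshell.open("caisson.ifc")
print(reopened.by_type("IfcProject")[0].Name)  # "Port caisson test"
```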

Analysis of Relative Importance of Board Game Development Models (보드게임 개발 모형의 상대적 중요도 분석)

  • Kim, Ji-Hye;Ho, Ryu Seuc
    • Journal of Digital Convergence / v.20 no.1 / pp.257-263 / 2022
  • In the current COVID-19 era, with contactless ('untact') life accelerating, the positive potential of board games is drawing attention. This situation can become an opportunity for the board game industry, depending on whether board games can be developed to be more interesting, higher-quality, and more educational than before. To this end, this study presents a methodology for deriving the relative importance of the key factors in board game development, for use by board game developers and experts pursuing more advanced development. Important factors in board game development were first derived through the Delphi method with board game developers and experts, and a ranking-format Delphi round was then used to order the relative importance of the derived factors. The study focused on the development model for playing-card board games. The priority analysis derived the factors in the following order of importance: development and planning composition, prior research analysis, draft production, production, validation, application, and review. The results are expected to provide practical guidelines for companies to prioritize development activities according to their importance when developing board games.
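
A minimal sketch of how a ranking-format Delphi round can be aggregated into a relative-importance ordering; the expert rankings below are invented for illustration and are not the study's data:

```python
from statistics import mean

# Factors named in the abstract; hypothetical rankings from five experts
# (1 = most important).
factors = ["development and planning composition", "prior research analysis",
           "draft production", "production", "validation", "application", "review"]
rankings = [
    [1, 2, 3, 4, 5, 6, 7],
    [1, 3, 2, 4, 5, 7, 6],
    [2, 1, 3, 5, 4, 6, 7],
    [1, 2, 4, 3, 5, 6, 7],
    [1, 2, 3, 4, 6, 5, 7],
]

# Lower mean rank means higher relative importance.
mean_ranks = {f: mean(r[i] for r in rankings) for i, f in enumerate(factors)}
for factor, rank in sorted(mean_ranks.items(), key=lambda kv: kv[1]):
    print(f"{rank:.2f}  {factor}")
```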

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the image's overall information. However, a CNN model may not suit emotional image data that lack distinctive regional features, and researchers propose new CNN-based architectures for emotion images every year. Studies on the relationship between color and human emotion have shown that different colors induce different emotions, and deep learning studies that add an image's color information to the classification model achieve higher sentiment classification accuracy than models trained on the image alone. Inspired by color psychology, this study proposes two ways to increase accuracy by correcting the result value after the model classifies an image's emotion; both modify the result value using statistics based on the picture's colors. The first method corrects the result values according to the distribution of color combinations: the two most prevalent color combinations are found for each test image and compared with how those combinations were distributed over the training data. The second method weights the result value obtained after classification using expressions based on the log function and the exponential function. Sixteen reference colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn's clustering, the seven colors primarily distributed in an image are extracted, and each cluster's RGB coordinates are converted to the nearest of the 16 reference colors. When three or more colors are combined, too many combinations occur and the distribution becomes scattered, so each combination has less influence on the result value; two-color combinations were therefore used for weighting. Before training, the most prevalent color combinations were found for all training images, and the distribution of combinations per class was stored in a Python dictionary for use during testing. During testing, the two most prevalent colors of each test image are found, their distribution in the training data is checked, and the result value is corrected with the equations devised to weight the model's output by the extracted colors. Emotion6, labeled with six emotions, and Artphoto, labeled with eight categories, were used as image data. Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning.
The dataset was randomly split 80:20, with 20% held out as a test set. The remaining 80% was divided into five folds for 5-fold cross-validation, training the model five times with different validation sets, and performance was finally checked on the held-out test set. Adam was used as the optimizer, with the learning rate set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five epochs, training was stopped; early stopping was set to load the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN than when the CNN architecture was used alone.
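
A rough sketch of the color stage described above: cluster an image's pixels into seven colors, snap each centroid to the nearest reference color, and keep the two most prevalent names. The palette subset, helper names, and log-based reweighting are illustrative, since the paper's exact equations are not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans

# Subset of the paper's 16 reference colors (RGB); extend with the rest as needed.
PALETTE = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "purple": (128, 0, 128),
    "white": (255, 255, 255), "black": (0, 0, 0), "gray": (128, 128, 128),
}

def dominant_color_pair(image_rgb: np.ndarray) -> tuple:
    """Cluster pixels into 7 colors, snap each cluster center to the nearest
    reference color, and return the two most prevalent distinct names."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=7)
    names = []
    for idx in np.argsort(counts)[::-1]:  # clusters, largest first
        center = km.cluster_centers_[idx]
        nearest = min(PALETTE,
                      key=lambda c: float(np.sum((np.asarray(PALETTE[c]) - center) ** 2)))
        if nearest not in names:
            names.append(nearest)
        if len(names) == 2:
            break
    return tuple(sorted(names))

def reweight(logits: np.ndarray, pair_freq_per_class: np.ndarray) -> np.ndarray:
    """Second stage: shift each class logit by a log-scaled weight reflecting how
    often this image's color pair occurred in that class during training."""
    return logits + np.log1p(pair_freq_per_class)
```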

Development of validated Nursing Interventions for Home Health Care to Women who have had a Caesarian Delivery (조기퇴원 제왕절개 산욕부를 위한 가정간호 표준서 개발)

  • HwangBo, Su-Ja
    • Journal of Korean Academy of Nursing Administration / v.6 no.1 / pp.135-146 / 2000
  • The purpose of this study was to develop, based on the Nursing Intervention Classification (NIC) system, a set of validated standardized nursing interventions and their associated activities, for use with nursing diagnoses related to home health care for women who have had a caesarian delivery and for their newborn babies. This descriptive instrument-development study had three phases: first, selection of nursing diagnoses; second, validation of the preliminary home health care interventions; and third, clinical application of the home care interventions. In the first phase, diagnoses from 30 nursing records of clients of the home health care agency at P. medical center seen between April 21 and July 30, 1998, and from 5 textbooks, were examined, and ten nursing diagnoses were selected through comparison with the NANDA (North American Nursing Diagnosis Association) classification. In the second phase, using the selected diagnoses, nursing interventions were defined from the diagnosis-intervention linkage lists in NIC, along with the associated activities for each intervention. To develop the preliminary interventions, five rounds of expert testing were done: 5 clinical nursing experts participated in the first four rounds, and 13 experts participated in the final content validity test of the preliminary interventions using Fehring's Delphi technique. The expert group evaluated and defined the set of preliminary nursing interventions. In the third phase, clinical tests were held in a home health care setting by two home health care nurses using the preliminary intervention list as a questionnaire. Thirty clients referred to the home health care agency at P. medical center between October 1998 and March 1999 were the subjects, and each activity was tested with dichotomous questions. The results of the study are as follows: 1. For the ten nursing diagnoses, 63 appropriate interventions were selected from the 369 diagnosis-intervention links in NIC and their 1,465 associated nursing activities. From the 63 interventions, the expert group of nurses developed a preliminary list of 18 interventions and 258 activities through the five-round validity test. 2. The fifth content validity test used Fehring's model for determining ICV (Intervention Content Validity): ratings on a five-point Likert scale were converted to weights as follows: 1 = 0.0, 2 = 0.25, 3 = 0.50, 4 = 0.75, 5 = 1.0, and items scoring below 0.50 were to be deleted. The ICV scores ranged from 0.66 to 0.95 for the nursing diagnoses, 0.77 to 0.98 for the nursing interventions, and 0.85 to 0.95 for the nursing activities, so by Fehring's method all were retained in the preliminary intervention list. 3. Clinical application tests were done using the preliminary intervention list in questionnaire format. The home health care nurses applied the nursing diagnoses to every client; of 400 uses of the diagnoses, 13 diagnoses were most frequently used and were defined as the validated nursing diagnoses. Ten were the same as those from the nursing records and textbooks, and three were new from the clinical application. The final list comprised 'Anxiety', 'Aspiration, risk for', 'Infant behavior, potential for enhanced, organized', 'Infant feeding pattern, ineffective', 'Infection', 'Knowledge deficit', 'Nutrition, less than body requirements, altered', 'Pain', 'Parenting', 'Skin integrity, risk for, impaired', 'Risk for activity intolerance', 'Self-esteem disturbance', and 'Sleep pattern disturbance'. 4. In all there were 19 interventions: the 18 preliminary nursing interventions plus one added from the clinical setting, 'Body image enhancement'. Clinical application tests were also done for the 265 associated nursing activities. The application rate of the 19 interventions ranged from 81.6% to 100%, so all 19 were included in the validated intervention set. Of the 265 nursing activities, 261 (98.5%) were accepted, and four activities with an implementation rate below 50% were deleted. 5. In conclusion, 13 diagnoses, 19 interventions, and 261 activities were validated as the final nursing intervention set.
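
Fehring's ICV computation in result 2 reduces to a small weighted average; a minimal sketch in Python (the expert ratings are invented for illustration):

```python
# Fehring's weights for a five-point Likert rating: 1 -> 0.0 ... 5 -> 1.0.
WEIGHTS = {1: 0.0, 2: 0.25, 3: 0.50, 4: 0.75, 5: 1.0}

def icv(ratings: list) -> float:
    """Intervention Content Validity: mean of the weighted expert ratings."""
    return sum(WEIGHTS[r] for r in ratings) / len(ratings)

# Example with 13 hypothetical expert ratings; items scoring below 0.50 are deleted.
ratings = [5, 5, 4, 5, 4, 4, 5, 3, 5, 4, 5, 4, 5]
score = icv(ratings)
print(f"ICV = {score:.2f}, keep = {score >= 0.50}")  # ICV = 0.87, keep = True
```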

A Product Model Centered Integration Methodology for Design and Construction Information (프로덕트 모델 중심의 설계, 시공 정보 통합 방법론)

  • Lee Keun-Hyoung;Kim Jae-Jun
    • Proceedings of the Korean Institute Of Construction Engineering and Management / autumn / pp.99-106 / 2002
  • Earlier research on integrating design and construction information focused on conceptual data models. The development and widespread use of commercial database management systems then led many researchers to design database schemas that clarify the relationships between non-graphic data items. Although these studies became the foundation for subsequent research, they did not utilize the graphic data available from CAD systems, which were already widely used. The 4D CAD concept suggests a way of integrating graphic data with schedule data; although this opened a new possibility for integration, it remains limited by data dependency on specific applications. This research suggests a new approach to integrating design and construction information, the 'Product Model Centered Integration Methodology'. Integration is achieved through a preliminary study of the existing 4D CAD-based methodology and through the development and application of the new methodology. A 'Design Component' can be converted into digital format by an object-based CAD system, and 'Unified Object-based Graphic Modeling' shows how to model a graphic product model with the CAD system. Because the reusability of design information in later stages depends on how the CAD model is created, modeling guidelines and specifications are suggested. A prototype system for integration management and exchange is then presented, using a 'Product Frameworker' and a 'Product Database' that supports multiple viewpoints. A 'Product Data Model' is designed, and the main data workflows are represented with an 'Activity Diagram', one of the UML diagrams. These can be used to write program code and develop a prototype that automatically creates activity items in an actual schedule management system. Through validation, the 'Product Model Centered Integration Methodology' is suggested as a new approach to the integration of design and construction information.
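
A minimal sketch of the kind of mapping the Activity Diagram implies, turning product-model components into schedule activity items; the class and field names are ours, not the paper's:

```python
from dataclasses import dataclass, field

@dataclass
class DesignComponent:
    """A graphic product-model object exported from the object-based CAD system."""
    component_id: str
    name: str
    work_type: str  # e.g. "concrete pour"; selects a schedule template

@dataclass
class Activity:
    activity_id: str
    description: str
    predecessor_ids: list = field(default_factory=list)

def generate_activities(components: list) -> list:
    """Create one schedule activity per design component, chained sequentially,
    as a stand-in for the automatic activity creation the paper describes."""
    activities = []
    for i, comp in enumerate(components):
        act = Activity(f"A{i + 1:03d}", f"{comp.work_type}: {comp.name}")
        if activities:
            act.predecessor_ids.append(activities[-1].activity_id)
        activities.append(act)
    return activities

# Example: two components become two chained activities A001 -> A002.
cols = [DesignComponent("C1", "Column C1", "concrete pour"),
        DesignComponent("C2", "Column C2", "concrete pour")]
print([a.activity_id for a in generate_activities(cols)])  # ['A001', 'A002']
```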

Very short-term rainfall prediction based on radar image learning using deep neural network (심층신경망을 이용한 레이더 영상 학습 기반 초단시간 강우예측)

  • Yoon, Seongsim;Park, Heeseong;Shin, Hongjoon
    • Journal of Korea Water Resources Association / v.53 no.12 / pp.1159-1172 / 2020
  • This study applied deep convolutional neural networks based on U-Net and SegNet, trained on a long period of weather radar data, to very-short-term rainfall prediction, and compared the results with a translation model. For training and validation of the deep neural networks, Mt. Gwanak and Mt. Gwangdeoksan radar data from 2010 to 2016 were collected and converted to gray-scale image files in HDF5 format with 1 km spatial resolution. The deep neural network model was trained to predict precipitation 10 minutes ahead from four consecutive radar images, and forecasts were repeated recursively with the pretrained model to reach a lead time of 60 minutes. To evaluate the prediction model, 24 rain cases in 2017 were forecast up to 60 minutes in advance. Evaluating the mean absolute error (MAE) and critical success index (CSI) at thresholds of 0.1, 1, and 5 mm/hr, the deep neural network model performed better in terms of MAE at the 0.1 and 1 mm/hr thresholds and better than the translation model in terms of CSI for lead times up to 50 minutes. However, although the deep neural network model generally outperformed the translation model for weak rainfall of 5 mm/hr or less, the evaluation at the 5 mm/hr threshold showed its limitations in predicting distinct high-intensity precipitation: as lead time grows, spatial smoothing increases, reducing the accuracy of the rainfall prediction. The translation model proved superior at predicting exceedance of higher intensity thresholds (>5 mm/hr) because it preserves distinct precipitation features, although the predicted rainfall position tends to shift incorrectly. These results are expected to help improve radar rainfall prediction models using deep neural networks, and the massive weather radar dataset established in this study will be provided through open repositories for use in subsequent studies.
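
Two pieces of the evaluation are compact enough to sketch: the recursive 10-minute forecasting loop and the CSI score. The `model.predict` call assumes a Keras-style interface and is illustrative:

```python
import numpy as np

def recursive_forecast(model, last_four_frames, steps: int = 6) -> list:
    """Predict 10-minute radar frames recursively out to 60 minutes: each
    prediction is appended to the input window for the next step."""
    window = list(last_four_frames)  # four consecutive radar images
    forecasts = []
    for _ in range(steps):
        nxt = model.predict(np.stack(window[-4:])[np.newaxis, ...])[0]
        forecasts.append(nxt)
        window.append(nxt)
    return forecasts

def csi(obs: np.ndarray, pred: np.ndarray, threshold: float) -> float:
    """Critical Success Index = hits / (hits + misses + false alarms),
    evaluated at a rain-rate threshold such as 0.1, 1, or 5 mm/hr."""
    o, p = obs >= threshold, pred >= threshold
    hits = np.sum(o & p)
    misses = np.sum(o & ~p)
    false_alarms = np.sum(~o & p)
    return float(hits / (hits + misses + false_alarms))
```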

A study on improving self-inference performance through iterative retraining of false positives of deep-learning object detection in tunnels (터널 내 딥러닝 객체인식 오탐지 데이터의 반복 재학습을 통한 자가 추론 성능 향상 방법에 관한 연구)

  • Kyu Beom Lee;Hyu-Soung Shin
    • Journal of Korean Tunnelling and Underground Space Association / v.26 no.2 / pp.129-152 / 2024
  • When deep learning object detection is applied to CCTV footage in tunnels, many false positive detections occur because of poor environmental conditions such as low illumination and severe perspective effects. This problem directly affects the reliability of a tunnel CCTV-based accident detection system that depends on object detection performance, so the number of false positives must be reduced while the number of true positives is increased. Based on a deep learning object detection model, this paper proposes a false positive training method that both reduces false positives and improves true positive detection performance through retraining on false positive data. The method follows these steps: initial training on a training dataset; inference on a validation dataset; correction of the false positive data and composition of a dataset; addition to the training dataset; and retraining. Experiments were conducted to verify the method's performance. The optimal hyperparameters of the object detection model had been determined in previous experiments. In this study, the training image format was first determined, and experiments were then run sequentially to check the long-term performance improvement from repeated retraining on false detection datasets. The first experiment found that keeping the background in the inferred image benefits object detection performance more than removing everything except the object. The second experiment found that accumulating false positives across retraining rounds improves detection performance more steadily than retraining independently on each round's false positives. After retraining the false positive data with the settings chosen in these two experiments, the car object class reached excellent inference performance with an AP of 0.95 or higher after the first retraining, and by the fifth retraining its performance had improved by about 1.06 times over the initial inference. The person object class kept improving as retraining progressed, and by the 18th retraining its inference performance had improved more than 2.3 times over the initial inference.
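
The retraining loop the paper describes can be summarized in sketch form; `model.train` and `model.infer` assume a generic detector interface, and the two helpers are hypothetical stand-ins for the paper's manual correction step:

```python
def is_false_positive(detection: dict) -> bool:
    """Placeholder: in the paper this is a review of the inferred boxes."""
    return not detection.get("matches_ground_truth", False)

def relabel_as_background(detection: dict) -> dict:
    """Placeholder: turn a false detection into a negative (background) sample."""
    return {**detection, "class": "background"}

def iterative_fp_retraining(model, train_set: list, val_set: list, rounds: int):
    """Initial training, inference on validation data, correction of false
    positives, accumulation into the training set, and retraining."""
    for _ in range(rounds):
        model.train(train_set)
        detections = model.infer(val_set)
        false_positives = [d for d in detections if is_false_positive(d)]
        # Accumulate across rounds: the paper's second experiment found that
        # accumulating beats retraining on each round's false positives alone.
        train_set.extend(relabel_as_background(d) for d in false_positives)
    return model
```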