• Title/Summary/Keyword: mapping class


Model Checking of Concurrent Object-Oriented Systems (병렬 객체지향 시스템의 검증)

  • Cho, Seung-Mo;Kim, Young-Gon;Bae, Doo-Hwan;Byun, Sung-Won;Kim, Sang-Taek
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.1
    • /
    • pp.1-12
    • /
    • 2000
  • Model checking is a formal verification technique that checks the consistency between a requirement specification and a behavior model of the system by exploring the state space of the model. We apply model checking to the formal verification of concurrent object-oriented systems, using the existing model checker SPIN, which has been successful in verifying concurrent systems. First, we propose an Actor-based modeling language, called APromela, by extending Promela, the modeling language supported by SPIN. APromela supports not only all the primitives of Promela but also additional primitives needed to model concurrent object-oriented systems, such as class definition, object instantiation, message send, and synchronization. Second, we provide translation rules for mapping these APromela modeling primitives to Promela's. As an application of APromela, we suggest a verification method for UML models. Through an example of specification, translation, and verification, we demonstrate the applicability of the proposed approach and discuss its limitations and further research issues.

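As a hedged illustration of what such a translation rule might look like, the following minimal Python sketch rewrites an actor-style message send into a Promela channel send. The APromela surface syntax used here (`target.send(message, payload)`) and the one-mailbox-channel-per-object convention are assumptions for illustration, not the paper's actual grammar or rules.

```python
# Illustrative sketch: rewriting an actor-style "send" primitive into a Promela
# channel send, roughly in the spirit of an APromela-to-Promela translation rule.
# The APromela surface syntax assumed here (obj.send(msg, args)) is hypothetical.
import re

SEND_RE = re.compile(r"(\w+)\.send\((\w+)(?:,\s*(.*))?\)")

def translate_send(apromela_line: str) -> str:
    """Map 'target.send(message, payload)' to a Promela channel send
    'ch_target ! message, payload' (each object owns a mailbox channel)."""
    m = SEND_RE.search(apromela_line)
    if not m:
        return apromela_line  # leave non-send lines untouched
    target, message, payload = m.group(1), m.group(2), m.group(3)
    args = f", {payload}" if payload else ""
    return f"ch_{target} ! {message}{args}"

if __name__ == "__main__":
    print(translate_send("account.send(deposit, 100)"))
    # -> ch_account ! deposit, 100
```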

APDM : Adding Attributes to Permission-Based Delegation Model

  • Kim, Si-Myeong;Han, Sang-Hoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.2
    • /
    • pp.107-114
    • /
    • 2022
  • Delegation is a powerful mechanism that allocates access rights to users to provide flexible and dynamic access control decisions, and it is particularly useful in distributed environments. Among the representative delegation models, RBDM0 and RDM2000 provide role delegation as user-to-user delegation. However, in RBAC, the concept of inheritance of the role class does not harmonize well with the management rules of actual corporate organizations. In this paper, we propose an Adding Attributes on Permission-Based Delegation Model (ABDM) that guarantees the permanence of delegated permissions while violating neither separation of duty nor the security principle of least privilege. ABDM, based on the RBAC model, supports both role-to-role and user-to-user delegation with an attribute: a delegator can grant permission to a delegatee, and the permission can be withdrawn whenever the delegator wants.
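
A minimal sketch, assuming a simple in-memory store, of the behavior the abstract describes: delegation that carries an attribute and stays revocable by the delegator. The class and method names below are illustrative, not the paper's formal model.

```python
# Minimal sketch (not the paper's formal model) of permission delegation in which
# each delegated permission carries an attribute and remains revocable by the
# delegator at any time. Names and fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Delegation:
    delegator: str
    delegatee: str
    permission: str
    attribute: str          # e.g. a department or project constraint

@dataclass
class DelegationStore:
    grants: list = field(default_factory=list)

    def delegate(self, delegator, delegatee, permission, attribute):
        self.grants.append(Delegation(delegator, delegatee, permission, attribute))

    def revoke(self, delegator, delegatee, permission):
        # Only the original delegator may withdraw the permission it granted.
        self.grants = [g for g in self.grants
                       if not (g.delegator == delegator and
                               g.delegatee == delegatee and
                               g.permission == permission)]

    def allowed(self, user, permission, attribute):
        return any(g.delegatee == user and g.permission == permission
                   and g.attribute == attribute for g in self.grants)

store = DelegationStore()
store.delegate("alice", "bob", "approve_order", attribute="sales_dept")
assert store.allowed("bob", "approve_order", "sales_dept")
store.revoke("alice", "bob", "approve_order")
assert not store.allowed("bob", "approve_order", "sales_dept")
```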

Structural health monitoring data anomaly detection by transformer enhanced densely connected neural networks

  • Jun, Li;Wupeng, Chen;Gao, Fan
    • Smart Structures and Systems
    • /
    • v.30 no.6
    • /
    • pp.613-626
    • /
    • 2022
  • Guaranteeing the quality and integrity of structural health monitoring (SHM) data is very important for an effective assessment of structural condition. However, the sensory system may malfunction due to sensor faults or harsh operational environments, so multiple types of data anomaly can exist in the measured data. Efficiently and automatically identifying anomalies in the vast amounts of measured data is significant for assessing structural conditions and providing early warning of structural failure in SHM. A major challenge for current automated data anomaly detection methods is the imbalance of dataset categories. Considering the characteristics of actual anomalous data, this paper proposes a data anomaly detection method based on a data-level technique and deep learning for SHM of civil engineering structures. The proposed method consists of a data balancing phase, which prepares a comprehensive training dataset using the data-level technique, and an anomaly detection phase based on a carefully designed network. A densely connected convolutional network (DenseNet) and a Transformer encoder are embedded in this network to extract both detailed and global features of the response data and to establish the mapping between the highest-level abstract features and the data anomaly class. Numerical studies on a steel frame model are conducted to evaluate the performance and noise immunity of the proposed network for data anomaly detection, and the applicability of the method for data anomaly classification is validated with measured data from a practical supertall structure. The proposed method performs remarkably well, reaching a 95.7% overall accuracy on practical engineering structural monitoring data, which demonstrates the effectiveness of the data balancing and the robust classification capability of the proposed network.
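
A rough PyTorch sketch of the kind of hybrid network described above: DenseNet-style convolutional features for local detail, a Transformer encoder for global context, and a classification head over anomaly classes. Layer sizes, the number of classes, and the input format (response data rendered as images) are assumptions, not the paper's exact architecture.

```python
# Hedged sketch of a DenseNet + Transformer-encoder classifier for SHM data
# anomaly classes. Dimensions and class count are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import densenet121

class DenseNetTransformerClassifier(nn.Module):
    def __init__(self, num_classes: int = 7, d_model: int = 256):
        super().__init__()
        backbone = densenet121(weights=None)
        self.features = backbone.features          # convolutional feature extractor
        self.proj = nn.Conv2d(1024, d_model, kernel_size=1)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                           # x: (B, 3, H, W)
        f = self.proj(self.features(x))             # (B, d_model, h, w)
        tokens = f.flatten(2).transpose(1, 2)       # (B, h*w, d_model)
        tokens = self.encoder(tokens)               # global attention over patches
        return self.head(tokens.mean(dim=1))        # (B, num_classes)

model = DenseNetTransformerClassifier()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 7])
```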

Mapping Landslide Susceptibility Based on Spatial Prediction Modeling Approach and Quality Assessment (공간예측모형에 기반한 산사태 취약성 지도 작성과 품질 평가)

  • Al, Mamun;Park, Hyun-Su;JANG, Dong-Ho
    • Journal of The Geomorphological Association of Korea
    • /
    • v.26 no.3
    • /
    • pp.53-67
    • /
    • 2019
  • The purpose of this study is to assess the quality of landslide susceptibility mapping in a landslide-prone area (Jinbu-myeon, Gangwon-do, South Korea) using spatial prediction modeling approaches and to compare the results obtained. For this goal, a landslide inventory map was prepared mainly from past historical information and aerial photograph analysis (Daum Map, 2008), as well as some field observation. Altogether, 550 landslides were counted over the whole study area; among them, 182 were debris flows, and each group of landslides was entered in the inventory map separately. The landslide inventory was then randomly split in Excel: 50% of the landslides were used for model analysis and the remaining 50% for validation. Twelve contributing factors were used in the analysis: slope, aspect, curvature, topographic wetness index (TWI), elevation, forest type, forest timber diameter, forest crown density, geology, land use, soil depth, and soil drainage. Moreover, to find the correlation between the landslide causative factors and landslide incidents, the pixels were divided into several classes and the frequency ratio of each class was extracted. Six landslide susceptibility maps were then constructed using the Bayesian Predictive Discriminant (BPD), Empirical Likelihood Ratio (ELR), and Linear Regression Method (LRM) models based on the different data categories. Finally, in the cross-validation process, each landslide susceptibility map was plotted as a receiver operating characteristic (ROC) curve, the area under the curve (AUC) was calculated, and the success rate curve was extracted. The results showed that the Bayesian, likelihood, and linear models achieved 85.52%, 85.23%, and 83.49% accuracy, respectively, for the total data. For the debris flow landslide category, the results were slightly better than for the total data, with 86.33%, 85.53%, and 84.17% accuracy. This means all three models are reasonable methods for landslide susceptibility analysis, and they have proved to produce reliable predictions for regional spatial planning or land-use planning.
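
A small sketch of the frequency-ratio step mentioned in the abstract: for each class of a causative factor, the share of landslide pixels falling in that class divided by the share of all pixels in that class. The toy rasters below are placeholders for the GIS layers described above.

```python
# Sketch of a frequency-ratio computation for landslide susceptibility factors.
# Array names and toy values are illustrative; the rasters would come from GIS layers.
import numpy as np

def frequency_ratio(factor_classes: np.ndarray, landslide_mask: np.ndarray) -> dict:
    """factor_classes: integer class raster (e.g. slope classes).
    landslide_mask: boolean raster, True where a landslide was inventoried."""
    ratios = {}
    total_pixels = factor_classes.size
    total_landslides = landslide_mask.sum()
    for cls in np.unique(factor_classes):
        in_class = factor_classes == cls
        pct_landslide = landslide_mask[in_class].sum() / total_landslides
        pct_area = in_class.sum() / total_pixels
        ratios[int(cls)] = float(pct_landslide / pct_area)
    return ratios

# Toy example: class 2 is over-represented among landslide pixels (FR > 1).
classes = np.array([[1, 1, 2, 2], [1, 2, 2, 3], [3, 3, 2, 1]])
slides  = np.array([[0, 0, 1, 1], [0, 1, 0, 0], [0, 0, 1, 0]], dtype=bool)
print(frequency_ratio(classes, slides))
```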

Underwater Transient Signal Classification Using Eigen Decomposition Based on Wigner-Ville Distribution Function (위그너-빌 분포 함수 기반의 고유치 분해를 이용한 수중 천이 신호 식별)

  • Bae, Keun-Sung;Hwang, Chan-Sik;Lee, Hyeong-Uk;Lim, Tae-Gyun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.3
    • /
    • pp.123-128
    • /
    • 2007
  • This paper presents new classification algorithms for underwater transient signals. In general, ambient noise has small spectral deviation and energy variation, while a transient signal fluctuates strongly; hence, to detect the transient signal, we use the spectral deviation and power variation. To classify the detected transient signal, feature parameters are obtained using eigenvalue decomposition based on the Wigner-Ville distribution. The correlation is then calculated between the feature vector of the detected signal and all feature vectors of the reference templates on a frame-by-frame basis, and the detected transient signal is classified by the frame mapping rate within the class database.
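
A compact sketch of the feature-extraction idea, under generic assumptions: compute a discrete Wigner-Ville distribution of the analytic signal, take its eigenvalue decomposition, and keep the leading eigenvalues as a feature vector to correlate against reference templates. This is a textbook-style formulation, not the authors' exact implementation.

```python
# Generic sketch of Wigner-Ville-based eigen features (not the paper's code).
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x: np.ndarray) -> np.ndarray:
    z = hilbert(x)                       # analytic signal
    n = len(z)
    wvd = np.zeros((n, n), dtype=complex)
    for t in range(n):
        taumax = min(t, n - 1 - t)
        tau = np.arange(-taumax, taumax + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = z[t + tau] * np.conj(z[t - tau])
        wvd[:, t] = np.fft.fft(kernel)   # frequency axis for this time instant
    return wvd.real

def wvd_eigen_features(x: np.ndarray, n_eig: int = 4) -> np.ndarray:
    w = wigner_ville(x)
    # symmetrize so the eigendecomposition is well behaved
    eigvals = np.linalg.eigvalsh(0.5 * (w + w.T))
    return np.sort(eigvals)[::-1][:n_eig]       # leading eigenvalues as features

sig = np.cos(2 * np.pi * 0.1 * np.arange(128)) + 0.05 * np.random.randn(128)
print(wvd_eigen_features(sig))
```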

Multi-classification of Osteoporosis Grading Stages Using Abdominal Computed Tomography with Clinical Variables : Application of Deep Learning with a Convolutional Neural Network (멀티 모달리티 데이터 활용을 통한 골다공증 단계 다중 분류 시스템 개발: 합성곱 신경망 기반의 딥러닝 적용)

  • Tae Jun Ha;Hee Sang Kim;Seong Uk Kang;DooHee Lee;Woo Jin Kim;Ki Won Moon;Hyun-Soo Choi;Jeong Hyun Kim;Yoon Kim;So Hyeon Bak;Sang Won Park
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.3
    • /
    • pp.187-201
    • /
    • 2024
  • Osteoporosis is a major health issue globally, often remaining undetected until a fracture occurs. To facilitate early detection, deep learning (DL) models were developed to classify osteoporosis using abdominal computed tomography (CT) scans. This study used retrospectively collected data from 3,012 contrast-enhanced abdominal CT scans. DL models were constructed from image data, demographic/clinical information, and multi-modality data, respectively. Patients were categorized into normal, osteopenia, and osteoporosis groups based on T-scores obtained from dual-energy X-ray absorptiometry. The models showed high accuracy and effectiveness, with the combined-data model performing best, achieving an area under the receiver operating characteristic curve of 0.94 and an accuracy of 0.80. The image-based model also performed well, while the demographic data model had lower accuracy and effectiveness. In addition, the DL model was interpreted with gradient-weighted class activation mapping (Grad-CAM) to highlight clinically relevant features in the images, revealing the femoral neck as a common site for fractures. The study shows that DL can accurately identify osteoporosis stages from clinical data, indicating the potential of abdominal CT scans for early osteoporosis detection and for reducing fracture risk through prompt treatment.
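
A minimal sketch of the labeling step: mapping DXA T-scores to the three training classes. The WHO-style cutoffs of -1.0 and -2.5 are the conventional thresholds and are assumed here; the abstract does not state the paper's exact criteria.

```python
# Sketch of T-score grading using the conventional WHO cutoffs (assumed, not
# taken from the paper): >= -1.0 normal, (-2.5, -1.0) osteopenia, <= -2.5 osteoporosis.
def osteoporosis_grade(t_score: float) -> str:
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"
    return "osteoporosis"

for t in (0.3, -1.7, -3.1):
    print(t, "->", osteoporosis_grade(t))
```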

A Study of Improvement for the Prediction of Groundwater Pollution in Rural Area: Application in Keumsan, Korea (농촌지역 지하수의 오염 예측 방법 개선방안 연구: 충남 금산 지역에의 적용)

  • Cheong, Beom-Keun;Chae, Gi-Tak;Koh, Dong-Chan;Ko, Kyung-Seok;Koo, Min-Ho
    • Journal of Soil and Groundwater Environment
    • /
    • v.13 no.4
    • /
    • pp.40-53
    • /
    • 2008
  • Groundwater pollution prediction methods have been developed in many countries to plan sustainable groundwater use and protection from potential pollution. DRASTIC, established by the US EPA, is the most widely used groundwater vulnerability mapping method. However, DRASTIC has shown limitations in predicting groundwater contamination because it is designed to embrace only hydrogeologic factors. Therefore, in this study, three different methods were applied to improve groundwater pollution prediction: the US EPA DRASTIC, the Modified-DRASTIC suggested by Panagopoulos et al. (2006), and the LSDG (Land use, Soil drainage, Depth to water, Geology) method proposed by Rupert (1999). The Modified-DRASTIC revises the rating scales and weighting coefficients of DRASTIC: the rating scales of each factor are recalculated by statistically comparing nitrate concentrations among classes using the Wilcoxon rank-sum test, while the weighting coefficients are modified according to the statistical correlation of each parameter with nitrate concentration using Spearman's rho test. The LSDG is a simple rating method using four factors (land use, soil drainage, depth to water, and geology); classes within each factor are compared by the Wilcoxon rank-sum test, which assigns a different rating to a class if its nitrate concentration is significantly different. A database of nitrate concentrations in groundwater from 149 wells was built for the Keumsan area. Applying the three methods to assess groundwater pollution potential showed that prediction, represented by the correlation (r) between each index and nitrate, improved from the EPA DRASTIC (r = 0.058) to the modified rating (r = 0.245), to the modified rating and weights (r = 0.400), and to the LSDG (r = 0.415). The LSDG appears appropriate for predicting groundwater pollution in that it includes land use as a factor representing pollution sources and defines the rating of each class from actual nitrate pollution concentrations.
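
A short sketch of the statistical machinery the abstract describes: a Wilcoxon rank-sum test comparing nitrate concentrations between two classes of a factor (to decide whether the classes deserve different ratings), and Spearman's rho between a factor and nitrate (the basis for adjusting a factor's weight). The arrays below are synthetic placeholders, not the Keumsan data.

```python
# Sketch of the class-rating and weighting statistics; data are synthetic placeholders.
import numpy as np
from scipy.stats import ranksums, spearmanr

rng = np.random.default_rng(0)
nitrate_class_a = rng.normal(8.0, 2.0, 30)    # e.g. wells under cropland
nitrate_class_b = rng.normal(4.0, 2.0, 30)    # e.g. wells under forest

stat, p = ranksums(nitrate_class_a, nitrate_class_b)
print(f"rank-sum p = {p:.4f}  (assign different ratings if significant)")

factor_rating = rng.integers(1, 11, 149)       # hypothetical per-well ratings
nitrate = 0.5 * factor_rating + rng.normal(0, 2, 149)
rho, p = spearmanr(factor_rating, nitrate)
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}  (basis for the factor weight)")
```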

Brain F-18 FDG PET for localization of epileptogenic zones in frontal lobe epilepsy: visual assessment and statistical parametric mapping analysis (전두엽 간질에서 F-18-FDG PET의 간질병소 국소화 성능: 육안 판독과 SPM에 의한 분석)

  • Kim, Yu-Kyeong;Lee, Dong-Soo;Lee, Sang-Kun;Chung, Chun-Kee;Yeo, Jeong-Seok;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.35 no.3
    • /
    • pp.131-141
    • /
    • 2001
  • Purpose: We evaluated the sensitivity of F-18 FDG PET, by visual assessment and statistical parametric mapping (SPM) analysis, for localizing the epileptogenic zones in frontal lobe epilepsy. Materials and Methods: Twenty-four patients with frontal lobe epilepsy were examined. All patients improved after surgical resection (Engel class I or II). On pathological examination, 18 patients showed cortical dysplasia, 4 showed tumor, and 2 showed cortical scar. Hypometabolic lesions were identified on F-18 FDG PET by visual assessment and by SPM analysis, in which the cutoff threshold was varied. Results: MRI showed structural lesions in 12 patients and normal findings in the remaining 12. F-18 FDG PET correctly localized the epileptogenic zones in 13 patients (54%) by visual assessment. The sensitivity of F-18 FDG PET in MR-negative patients (50%) was similar to that in MR-positive patients (67%). On SPM analysis, sensitivity decreased as the p-value threshold was lowered; using an uncorrected p value of 0.05 as the threshold, the sensitivity of SPM analysis was 53%, not statistically different from that of visual assessment. Conclusion: F-18 FDG PET was sensitive in finding epileptogenic zones by revealing hypometabolic areas in MR-negative as well as MR-positive patients with frontal lobe epilepsy. SPM analysis showed sensitivity comparable to visual assessment and could be used as an aid in localizing epileptogenic zones in frontal lobe epilepsy.

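A schematic sketch of SPM-style thresholding as described above: converting a patient's normalized FDG uptake map into voxel-wise z-scores against a control group and keeping voxels below the hypometabolism cutoff corresponding to an uncorrected p < 0.05. The array shapes and control statistics are placeholders, not the study's data or SPM's full pipeline.

```python
# Simplified voxel-wise hypometabolism thresholding; placeholders stand in for
# spatially normalized PET volumes and control-group statistics.
import numpy as np
from scipy.stats import norm

def hypometabolic_mask(patient, control_mean, control_sd, p_uncorrected=0.05):
    z = (patient - control_mean) / control_sd
    z_cut = norm.ppf(p_uncorrected)          # one-tailed cutoff (about -1.645)
    return z < z_cut                          # True where uptake is abnormally low

patient = np.random.normal(1.0, 0.1, (64, 64, 32))
ctrl_mu = np.full((64, 64, 32), 1.05)
ctrl_sd = np.full((64, 64, 32), 0.08)
mask = hypometabolic_mask(patient, ctrl_mu, ctrl_sd)
print("suprathreshold voxels:", int(mask.sum()))
```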

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and also differ from the job classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. Therefore, a new job classification system is needed that SW companies, SW job seekers, and job sites can all understand. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF against the job posting information of major job sites and the NCS (National Competency Standards). For this purpose, an association analysis between the occupations of major job sites is conducted, and association rules between the SQF and those occupations are derived. Using these association rules, we propose an intelligent job classification system based on data, mapping the job classification systems of major job sites to the SQF. First, major job sites were selected to obtain information on the job classification systems used in the SW market. We then identified ways to collect job information from each site and collected the data through open APIs. Focusing on the relationships in the data, only job postings listed on multiple job sites at the same time were kept, and the remaining postings were removed. Next, the job classification systems of the sites were mapped to one another using the association rules derived from the association analysis; this mapping was completed, discussed with experts, further mapped to the SQF, and finally used to propose a new job classification system. As a result, more than 30,000 job listings were collected in XML format using the open APIs of WORKNET, JOBKOREA, and saramin, the main job sites in Korea. After filtering down to about 900 job postings simultaneously posted on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent pattern mining method. Based on these 800 rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth classification levels. In the new job taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security-related jobs, consists of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering databases and system-operation jobs, consists of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, consists of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology jobs, consists of three secondary and six tertiary classifications. In particular, the new job classification system has a relatively flexible depth of classification, unlike other existing systems: WORKNET divides jobs into three levels, JOBKOREA into two levels with the subdivided jobs given as keywords, and saramin into two levels with the subdivided jobs also given as keywords. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs.
In the proposed system, some jobs stop at the second classification level while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down into the same number of steps. The system also combines rules derived from the collected market data through association analysis with experts' opinions. Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing job classification systems. This study is meaningful in that it suggests a new job classification system that reflects market demand by mapping occupations based on data, through association analysis between occupations, rather than relying only on the intuition of a few experts. However, the study has a limitation in that it cannot fully reflect market demand that changes over time, because the data were collected at a single point in time. As market demand changes over time, including seasonal factors and the timing of major corporate recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to transfer to other industries based on its success in the SW industry.
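
A small sketch of the association-rule step: treating the set of category labels a single posting receives on different sites as one transaction, mining frequent itemsets with the Apriori algorithm, and deriving rules that suggest which categories map to each other. The toy transactions and thresholds are illustrative, not the study's data; the mlxtend library is used here as one common Apriori implementation.

```python
# Toy Apriori run over per-posting category labels from several job sites.
# Transactions, labels, and thresholds are illustrative assumptions.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

postings = [
    ["worknet:web_dev", "jobkorea:frontend", "saramin:web_programming"],
    ["worknet:web_dev", "jobkorea:frontend"],
    ["worknet:db_admin", "jobkorea:dba", "saramin:db_operation"],
    ["worknet:db_admin", "jobkorea:dba"],
    ["worknet:web_dev", "saramin:web_programming"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(postings).transform(postings), columns=te.columns_)
itemsets = apriori(onehot, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```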

Wildfire Severity Mapping Using Sentinel Satellite Data Based on Machine Learning Approaches (Sentinel 위성영상과 기계학습을 이용한 국내산불 피해강도 탐지)

  • Sim, Seongmun;Kim, Woohyeok;Lee, Jaese;Kang, Yoojin;Im, Jungho;Kwon, Chunguen;Kim, Sungyong
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_3
    • /
    • pp.1109-1123
    • /
    • 2020
  • In South Korea, where forest is the major land cover class (over 60% of the country), many wildfires occur every year. Wildfires weaken the shear strength of the soil, forming a soil layer that is vulnerable to landslides, so it is important to identify the severity of a wildfire as well as the burned area in order to manage forests sustainably. Although satellite remote sensing has been widely used to map wildfire severity, it is often difficult to determine severity using only the temporal change of satellite-derived indices such as the Normalized Difference Vegetation Index (NDVI) and Normalized Burn Ratio (NBR). In this study, we propose an approach for determining wildfire severity based on machine learning through the synergistic use of Sentinel-1A Synthetic Aperture Radar C-band data and Sentinel-2A MultiSpectral Instrument data. Three wildfire cases (Samcheok in May 2017, Gangreung·Donghae in April 2019, and Gosung·Sokcho in April 2019) were used to develop wildfire severity mapping models with three machine learning algorithms: Random Forest, Logistic Regression, and Support Vector Machine. The results showed that the random forest model yielded the best performance, with an overall accuracy of 82.3%. Cross-site validation to examine the spatiotemporal transferability of the machine learning models showed that the models were highly sensitive to temporal differences between the training and validation sites, especially in the early growing season. This implies that a more robust model with high spatiotemporal transferability can be developed when more wildfire cases from different seasons and areas are added in the future.
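
A brief sketch combining two ingredients mentioned above: the Normalized Burn Ratio computed from pre- and post-fire near-infrared and shortwave-infrared reflectance, and a random forest classifier mapping per-pixel features to severity classes. The synthetic bands, the stand-in SAR feature, and the dNBR class thresholds are illustrative assumptions, not the study's data or labels.

```python
# Toy wildfire-severity classification: dNBR from synthetic NIR/SWIR bands plus a
# stand-in SAR feature, fed to a random forest. All values are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    return (nir - swir) / (nir + swir + 1e-9)

rng = np.random.default_rng(1)
n_pixels = 1000
pre_nir,  pre_swir  = rng.uniform(0.3, 0.6, n_pixels), rng.uniform(0.1, 0.3, n_pixels)
post_nir  = pre_nir  - rng.uniform(0.0, 0.3, n_pixels)   # NIR drops after burning
post_swir = pre_swir + rng.uniform(0.0, 0.2, n_pixels)   # SWIR rises after burning
dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

sar_backscatter = rng.normal(-12, 3, n_pixels)            # stand-in for Sentinel-1
X = np.column_stack([dnbr, sar_backscatter])
y = np.digitize(dnbr, [0.1, 0.27, 0.44])                  # 4 toy severity classes

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```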