• Title/Summary/Keyword: Semantic Net


EVALUATION OF STATIC ANALYSIS TOOLS USED TO ASSESS SOFTWARE IMPORTANT TO NUCLEAR POWER PLANT SAFETY

  • OURGHANLIAN, ALAIN
    • Nuclear Engineering and Technology
    • /
    • v.47 no.2
    • /
    • pp.212-218
    • /
    • 2015
  • We describe a comparative analysis of different tools used to assess safety-critical software used in nuclear power plants. To enhance the credibility of safety assessments and to optimize safety justification costs, Électricité de France (EDF) investigates the use of methods and tools for source code semantic analysis, to obtain indisputable evidence and help assessors focus on the most critical issues. EDF has been using the PolySpace tool for more than 10 years. Currently, new industrial tools based on the same formal approach, Abstract Interpretation, are available. Practical experimentation with these new tools shows that the precision obtained on one of our shutdown systems software packages is substantially improved. In the first part of this article, we present the analysis principles of the tools used in our experimentation. In the second part, we present the main characteristics of protection-system software, and why these characteristics are well adapted for the new analysis tools. In the last part, we present an overview of the results and the limitations of the tools.
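
Abstract Interpretation, the formal approach these tools share, soundly over-approximates program behavior in an abstract domain. A minimal interval-domain sketch in Python (illustrative only, not how PolySpace or the newer tools are implemented):

```python
# Interval abstract domain: each variable is tracked as a range [lo, hi].
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sound over-approximation of addition: add the bounds.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0, 10)     # abstract value of an input known to lie in [0, 10]
y = x + Interval(5, 5)  # y = x + 5, analyzed without running the program
print(y)                # [5, 15]: a checker can prove y never exceeds 15
```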

Emotion Recognition based on Short Text using Semantic Orientation Analysis (의미 지향성 분석을 통한 단문 텍스트 기반 감정인지)

  • Kim, Hyun-Woo;Lee, Sung-Young;Chung, Tae-Choong;Yoon, Suk-Hwan
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2012.06b
    • /
    • pp.375-377
    • /
    • 2012
  • As mobile devices such as smartphones have advanced, short-text media such as SNS posts, mobile messenger messages, and SMS have become the means by which people best express their emotions. Nevertheless, existing research has mostly been limited to classifying long texts as positive or negative or to analyzing the orientation of documents. The Semantic Orientation method uses a search engine to measure, via PMI, how often an emotion keyword co-occurs with the word to be recognized; it is applicable to Korean, for which no semantic dictionary such as WordNet exists. In this paper, we compare Semantic Orientation with other text-based emotion classification techniques and, drawing on them, propose an efficient emotion classification method for short texts written in Korean.
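
The PMI-based semantic orientation computation can be sketched as follows; the hit counts below are hypothetical stand-ins for search-engine result counts:

```python
import math

def pmi(hits_together, hits_a, hits_b, total_docs):
    """Pointwise mutual information estimated from co-occurrence hit counts."""
    return math.log2((hits_together * total_docs) / (hits_a * hits_b))

def semantic_orientation(word_hits, co_pos, co_neg, pos_hits, neg_hits, total_docs):
    """SO-PMI: association with a positive seed minus association with a negative seed."""
    return (pmi(co_pos, word_hits, pos_hits, total_docs)
            - pmi(co_neg, word_hits, neg_hits, total_docs))

# Hypothetical counts: the word co-occurs far more often with the positive seed.
so = semantic_orientation(word_hits=1000, co_pos=80, co_neg=10,
                          pos_hits=50000, neg_hits=40000, total_docs=10**7)
print(so)  # > 0: the word leans toward the positive emotion seed
```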

Discovering Semantic Relationships between Words by using Wikipedia (위키피디아에 기반한 단어 사이의 의미적 연결 관계 탐색)

  • Kim, Ju-Hwang;Hong, Min-sung;Lee, O-Joun;Jung, Jason J.
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2015.07a
    • /
    • pp.17-18
    • /
    • 2015
  • This paper proposes a technique for exploring the similarity between words, and the linking words implied between them, by using Wikipedia. By searching between two words through the API provided by Wikipedia, the method is simpler than conventional ways of computing word similarity and can cover a broader range of semantic groups. It is based on graph properties, and the graph can be constructed in either a dynamic or a static manner.
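
The traversal between two words can be sketched as a breadth-first search; the toy link graph below is hypothetical data standing in for article links returned by the Wikipedia API:

```python
from collections import deque

# Toy link graph standing in for Wikipedia article links (hypothetical data).
links = {
    "dog": ["mammal", "pet"],
    "pet": ["animal"],
    "mammal": ["animal"],
    "animal": ["organism"],
}

def shortest_link_path(graph, start, goal):
    """BFS over article links; path length serves as a similarity proxy."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connection found

print(shortest_link_path(links, "dog", "animal"))  # ['dog', 'mammal', 'animal']
```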


A QUALITATIVE METHOD TO ESTIMATE HSI DISPLAY COMPLEXITY

  • Hugo, Jacques;Gertman, David
    • Nuclear Engineering and Technology
    • /
    • v.45 no.2
    • /
    • pp.141-150
    • /
    • 2013
  • There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches that address display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity, and that adding these concepts helps in understanding and resolving differences between designers and operators in preferences and performance. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

Syntactic and semantic information extraction from NPP procedures utilizing natural language processing integrated with rules

  • Choi, Yongsun;Nguyen, Minh Duc;Kerr, Thomas N. Jr.
    • Nuclear Engineering and Technology
    • /
    • v.53 no.3
    • /
    • pp.866-878
    • /
    • 2021
  • Procedures play a key role in ensuring safe operation at nuclear power plants (NPPs). Development and maintenance of a large number of procedures reflecting the best knowledge available in all relevant areas is a complex job. This paper introduces a newly developed methodology and the implemented software, called iExtractor, for the extraction of syntactic and semantic information from NPP procedures utilizing natural language processing (NLP)-based technologies. The steps of the iExtractor integrated with sets of rules and an ontology for NPPs are described in detail with examples. Case study results of the iExtractor applied to selected procedures of a U.S. commercial NPP are also introduced. It is shown that the iExtractor can provide overall comprehension of the analyzed procedures and indicate parts of procedures that need improvement. The rich information extracted from procedures could be further utilized as a basis for their enhanced management.
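
As one illustration of rule-based extraction over procedure text (a hypothetical rule, not one of iExtractor's actual rules or its ontology), a verb-object-state pattern can be matched as follows:

```python
import re

# Hypothetical rule: a procedure step of the form "<ACTION> <component> <STATE>".
step = "VERIFY reactor coolant pump 1A RUNNING"
pattern = r"(?P<action>VERIFY|CHECK|OPEN|CLOSE)\s+(?P<object>.+?)\s+(?P<state>[A-Z]+)$"
m = re.match(pattern, step)
print(m.groupdict())
# {'action': 'VERIFY', 'object': 'reactor coolant pump 1A', 'state': 'RUNNING'}
```

A real system layers many such rules over NLP parses and an ontology of plant components, as the paper describes.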

Skin Lesion Segmentation with Codec Structure Based Upper and Lower Layer Feature Fusion Mechanism

  • Yang, Cheng;Lu, GuanMing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.1
    • /
    • pp.60-79
    • /
    • 2022
  • U-Net architecture-based segmentation models have attained remarkable performance in numerous medical image segmentation tasks such as skin lesion segmentation. Nevertheless, as the network deepens, the resolution gradually decreases and the loss of spatial information increases. Fusing only adjacent layers is not enough to make up for the lost spatial information, resulting in segmentation-boundary errors that reduce segmentation accuracy. To tackle this issue, we propose a new deep learning-based segmentation model. In the decoding stage, the feature channels of each decoding unit are concatenated with all the feature channels of the corresponding upper coding unit. This integrates spatial and semantic information to preserve segmentation quality, and the robustness and generalization of the model are further promoted by combining the atrous spatial pyramid pooling (ASPP) module and the channel attention module (CAM). Extensive experiments on the common ISIC2016 and ISIC2017 datasets show that our model performs well and outperforms the compared segmentation models for skin lesion segmentation.
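
The channel attention idea can be sketched in NumPy as a squeeze-and-excitation style gate; the weights here are random placeholders for learned parameters, not the paper's CAM implementation:

```python
import numpy as np

def channel_attention(x, reduction=2):
    """Reweight feature channels by a gate computed from globally pooled features."""
    c, h, w = x.shape
    z = x.mean(axis=(1, 2))  # global average pool -> one scalar per channel
    rng = np.random.default_rng(1)
    w1 = rng.normal(size=(c // reduction, c))  # placeholder learned weights
    w2 = rng.normal(size=(c, c // reduction))
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0))))  # sigmoid gate
    return x * s[:, None, None]  # broadcast gate over spatial dims

x = np.ones((4, 8, 8))          # 4 channels of an 8x8 feature map
out = channel_attention(x)
print(out.shape)                # (4, 8, 8): same shape, rescaled channels
```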

Compound Loss Function of semantic segmentation models for imbalanced construction data

  • Chern, Wei-Chih;Kim, Hongjo;Asari, Vijayan;Nguyen, Tam
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.808-813
    • /
    • 2022
  • This study presents the problems of data imbalance, varying difficulty across target objects, and small objects in construction object segmentation for far-field monitoring, and employs compound loss functions to address them. Construction site scenes of scaffold assembly were analyzed to test the effectiveness of compound loss functions on five construction object classes: workers, hardhats, harnesses, straps, and hooks. The problem was mitigated by adding focal and Jaccard loss terms to the original loss function of the LinkNet segmentation model. The findings indicate the importance of loss function design for model performance on construction site scenes in far-field monitoring.
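
A compound loss of the kind described, focal plus soft Jaccard terms, can be sketched in NumPy (a generic sketch, not the paper's exact LinkNet training code):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Focal loss: down-weights easy pixels so rare classes matter more."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)  # probability assigned to the true class
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def jaccard_loss(p, y, eps=1e-7):
    """Soft Jaccard (IoU) loss: 1 - intersection / union over soft masks."""
    inter = np.sum(p * y)
    union = np.sum(p) + np.sum(y) - inter
    return float(1.0 - (inter + eps) / (union + eps))

def compound_loss(p, y, w_focal=1.0, w_jaccard=1.0):
    return w_focal * focal_loss(p, y) + w_jaccard * jaccard_loss(p, y)

# Tiny imbalanced mask: only one foreground pixel out of eight.
y      = np.array([0, 0, 0, 0, 0, 0, 0, 1.0])
p_good = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.9])
p_bad  = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
print(compound_loss(p_good, y) < compound_loss(p_bad, y))  # True
```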


Extracting Flooded Areas in Southeast Asia Using SegNet and U-Net (SegNet과 U-Net을 활용한 동남아시아 지역 홍수탐지)

  • Kim, Junwoo;Jeon, Hyungyun;Kim, Duk-jin
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_3
    • /
    • pp.1095-1107
    • /
    • 2020
  • Flood monitoring using satellite data has been constrained by the difficulty of obtaining satellite images at flood peak and of accurately extracting flooded areas from satellite data. Deep learning is a promising method for satellite image classification, yet the potential of deep learning-based flooded-area extraction from SAR data, which is easier to acquire than optical satellite data, has remained uncertain. This research explores the image segmentation performance of SegNet and U-Net by extracting flooded areas in the Khorat basin, Mekong river basin, and Cagayan river basin in Thailand, Laos, and the Philippines from Sentinel-1 A/B satellite data. Results show that the Global Accuracy, Mean IoU, and Mean BF Score of SegNet are 0.9847, 0.6016, and 0.6467, respectively, whereas those of U-Net are 0.9937, 0.7022, and 0.7125. Visual interpretation shows that the classification accuracy of U-Net is higher than that of SegNet, but the overall processing time of SegNet is around three times faster than that of U-Net. It is anticipated that the results of this research could be used when developing deep learning-based flood monitoring models and fully automated flooded-area extraction models.
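
Mean IoU, one of the reported metrics, is computed per class and then averaged; a minimal NumPy sketch:

```python
import numpy as np

def mean_iou(pred, truth, n_classes=2):
    """Mean intersection-over-union across classes (e.g. flooded / not flooded)."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (truth == c))
        union = np.sum((pred == c) | (truth == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred  = np.array([[0, 1], [1, 1]])  # predicted labels
truth = np.array([[0, 1], [0, 1]])  # ground-truth labels
print(mean_iou(pred, truth))        # 0.5833...: mean of 0.5 and 2/3
```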

Evaluation on the Usefulness of X-ray Computer-Aided Detection (CAD) System for Pulmonary Tuberculosis (PTB) using SegNet (X-ray 영상에서 SegNet을 이용한 폐결핵 자동검출 시스템의 유용성 평가)

  • Lee, J.H.;Ahn, H.S.;Choi, D.H.;Tae, Ki Sik
    • Journal of Biomedical Engineering Research
    • /
    • v.38 no.1
    • /
    • pp.25-31
    • /
    • 2017
  • Examining chest X-ray images is a typical method of diagnosing the presence and extent of PTB lesions. However, the method is limited by inter-reader variability, so it is essential to overcome this drawback with automatic interpretation. In this study, we propose a novel method for detecting PTB using SegNet, a deep learning architecture for semantic pixel-wise image labelling. SegNet is composed of a stack of encoders followed by a corresponding stack of decoders that feeds into a soft-max classification layer. We modified the parameters of SegNet to change the number of classes from 12 to 2 (TB or non-TB) and applied the architecture to automatically interpret chest radiographs. 552 chest X-ray images provided by The Korean Institute of Tuberculosis were used for training and testing, and we constructed a receiver operating characteristic (ROC) curve. As a consequence, the area under the curve (AUC) was 90.4% (95% CI: [85.1, 95.7]) with a classification accuracy of 84.3%. Sensitivity was 85.7% and specificity was 82.8% on 431 training images (TB 172, non-TB 259) and 121 test images (TB 63, non-TB 58). These results show that detecting PTB using SegNet is comparable to other PTB detection methods.
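
Sensitivity and specificity follow directly from confusion counts; the counts below are chosen to be consistent with the reported rates on the 121 test images (63 TB, 58 non-TB), not taken from the paper's raw data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts consistent with 85.7% sensitivity, 82.8% specificity.
sens, spec = sensitivity_specificity(tp=54, fn=9, tn=48, fp=10)
print(round(sens, 3), round(spec, 3))  # 0.857 0.828
```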

Automatic Expansion of ConceptNet by Using Neural Tensor Networks (신경 텐서망을 이용한 컨셉넷 자동 확장)

  • Choi, Yong Seok;Lee, Gyoung Ho;Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.11
    • /
    • pp.549-554
    • /
    • 2016
  • ConceptNet is a common-sense knowledge base formed as a semantic graph whose nodes represent concepts and whose edges show relationships between concepts. As it is difficult to keep a knowledge base complete, knowledge bases often suffer from incompleteness, and the quality of reasoning performed over them is sometimes unreliable. This work presents neural tensor networks that can alleviate the incompleteness problem by inferring new assertions and adding them to ConceptNet. The neural tensor networks are trained on a collection of assertions extracted from ConceptNet. The input of the networks is a pair of concepts, and the output is a confidence score telling how plausible the connection between the two concepts is under a specified relationship. The neural tensor networks can expand the usefulness of ConceptNet by increasing the degree of its nodes. The accuracy of the neural tensor networks is 87.7% on the test data set. The networks can also predict a new assertion that does not exist in ConceptNet with an accuracy of 85.01%.
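
The neural tensor network scoring function, g(e1, R, e2) = u^T tanh(e1^T W e2 + V[e1; e2] + b), can be sketched in NumPy; the randomly initialized parameters and tiny dimensions below stand in for trained values:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 4, 2  # embedding dimension and number of tensor slices (toy sizes)

# Parameters for one relation type, randomly initialized in place of training.
W = rng.normal(size=(k, d, d))   # bilinear tensor
V = rng.normal(size=(k, 2 * d))  # linear layer over concatenated embeddings
b = rng.normal(size=k)
u = rng.normal(size=k)

def ntn_score(e1, e2):
    """Confidence that (e1, R, e2) holds: u^T tanh(e1^T W e2 + V[e1; e2] + b)."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

e_dog, e_animal = rng.normal(size=d), rng.normal(size=d)
score = ntn_score(e_dog, e_animal)  # higher score = more plausible assertion
```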