• Title/Summary/Keyword: Semantic technology

Search Results: 944

A Study on the Meaning Extension of User-Centeredness in UX Design (사용자 경험 디자인의 사용자 중심성에 대한 의미 확장 연구)

  • Lee, You-Jin
    • Journal of Digital Convergence
    • /
    • v.19 no.8
    • /
    • pp.301-310
    • /
    • 2021
  • The purpose of this study was to derive the meaning of UX design from user interviews. The study covers written interviews with 20 users of untact (contactless) finance applications in their twenties. It aims to examine previous studies on UX design and to overcome their shortcomings by categorizing usability qualities with a focus on the verbs used in the interviews. The results are as follows. For the 20 users in their twenties, the usability of UX design can be summarized as Unity, Trust, Persistency, Recognition, and Approachability of information. From the interview data analyzed with a focus on verbs, usability included Security, Familiarity, Accessibility, Convenience of Operation, and Visibility. Each of these qualities fell into related categories such as Security, Information, Brand, and Design. In conclusion, analysis based on verb choices led to a better understanding of user-centered experience than the objective measures used in previous studies, and it suggests a way to compensate for errors in earlier evaluation processes.

Text Mining-based Fake News Detection Using News And Social Media Data (뉴스와 소셜 데이터를 활용한 텍스트 기반 가짜 뉴스 탐지 방법론)

  • Hyun, Yoonjin;Kim, Namgyu
    • The Journal of Society for e-Business Studies
    • /
    • v.23 no.4
    • /
    • pp.19-39
    • /
    • 2018
  • Recently, fake news has attracted worldwide attention across virtually every field. The Hyundai Research Institute estimated the damage caused by fake news at about 30.9 trillion won per year. The government is making efforts to develop artificial intelligence source technology for detecting fake news, for example by holding an "artificial intelligence R&D challenge" competition under the title "searching for fake news." Fact-checking services are also being provided in various private-sector fields. In academia as well, many attempts have been made to detect fake news, typically expert-based, collective intelligence-based, artificial intelligence-based, or semantics-based approaches. However, the more skillfully fake news is fabricated, the more difficult it is to judge its authenticity by analyzing the news content alone. Furthermore, the accuracy of most fake news detection models tends to be overestimated. Therefore, in this study, we first propose a method to ensure that the reported accuracy of a fake news detection model is fair. Second, we propose a method to identify the authenticity of news using not only the news content but also the social data broadly generated in reaction to the news.
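
A minimal sketch of the general idea this abstract describes: combining text features of the news content with social-reaction features in one classifier and evaluating on a held-out split. The toy texts, the two social features (share count, dispute ratio), and the logistic regression model are illustrative assumptions, not the authors' dataset or methodology.

```python
# Illustrative sketch only: combine TF-IDF features of the news body with
# simple social-reaction features and evaluate on a held-out split.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

news_texts = [
    "miracle cure discovered, doctors shocked",
    "city council approves annual budget",
    "secret plan revealed by anonymous insider",
    "central bank holds interest rate steady",
]
social_feats = np.array([[950, 0.8], [40, 0.1], [700, 0.7], [55, 0.05]])  # shares, dispute ratio (toy)
labels = np.array([1, 0, 1, 0])  # 1 = fake, 0 = real (toy labels)

X_text = TfidfVectorizer().fit_transform(news_texts)
X = hstack([X_text, csr_matrix(social_feats)])  # news content + social reaction

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0
)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```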

A Qualitative Study into Special Education Teachers' Failure and Success Factors in Teacher Recruitment Examinations (특수교사들의 임용시험 실패 요인과 성공 요인에 관한 질적 연구)

  • Pack, Mee-Jung;Nam, Yun-Sug
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.8
    • /
    • pp.221-232
    • /
    • 2019
  • This study aimed to identify special education teachers' failure and success factors in teacher recruitment examinations. A total of 24 special education teachers participated in semi-structured interviews, and 12 distinct semantic themes were extracted through constant comparative analysis of the interview contents. The findings were as follows. First, the identified failure factors were merely following what others do, failure-inducing learning strategies, unconditional memorization, ineffective study groups, anxiety and lack of confidence, and poor self-management. Second, the identified success factors were studying in one's own style, success-inducing learning strategies, a balance of understanding and memorization, effective study groups, positivity, and a strong personal routine. Based on these results, the study proposes several practical suggestions for preparing for the examination.

A Qualitative Study on the Interpersonal Trauma Experience in Counseling Psychology Major University Students and their Growth Process as Counselors (상담심리전공 대학생의 대인 외상 경험과 상담자로서의 성장 과정에 대한 질적 연구)

  • Hong, Ye Young;Chang, Eun Jin
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.5
    • /
    • pp.147-157
    • /
    • 2019
  • In this study, we focused on the interpersonal trauma experiences of university students majoring in counseling psychology to understand the meaning of the trauma they experienced, and we analyzed their process of growth as counselors. To that end, we conducted a survey on interpersonal trauma and post-traumatic growth, carried out face-to-face interviews with the final six students, and analyzed the collected data using a qualitative case study method. As a result, the trauma and subsequent growth experiences were categorized into 38 semantic units, 15 subcategories, and 5 categories: 'Times of pain', 'A life of dealing with complex emotions alone', 'An experience of understanding and being understood', 'Attempts to change and new meanings', and 'Worries and expectations of growth as a counselor'. The results of this study are meaningful in providing basic information and educational materials needed for interventions with counseling psychology students who have experienced trauma.

AANet: Adjacency auxiliary network for salient object detection

  • Li, Xialu;Cui, Ziguan;Gan, Zongliang;Tang, Guijin;Liu, Feng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.10
    • /
    • pp.3729-3749
    • /
    • 2021
  • At present, deep convolutional network-based salient object detection (SOD) has achieved impressive performance. However, it remains challenging to make full use of the multi-scale information in the extracted features and to choose an appropriate feature fusion method for processing the feature maps. In this paper, we propose a new adjacency auxiliary network (AANet) based on multi-scale feature fusion for SOD. First, we design a parallel connection feature enhancement module (PFEM) for each layer of feature extraction, which improves feature density by connecting different dilated convolution branches in parallel and adds a channel attention flow to fully extract the contextual information of the features. Then, adjacent-layer features with similar degrees of abstraction but different characteristics are fused through the adjacency auxiliary module (AAM) to eliminate ambiguity and noise in the features. In addition, to refine the features effectively and obtain more accurate object boundaries, we design an adjacency decoder (AAM_D) based on the adjacency auxiliary module (AAM), which concatenates the features of adjacent layers, extracts their spatial attention, and then combines them with the output of the AAM. The AAM_D outputs, which carry both semantic information and spatial detail, are used as saliency prediction maps for joint multi-level supervision. Experimental results on six benchmark SOD datasets demonstrate that the proposed method outperforms comparable previous methods.
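
A minimal PyTorch sketch of a parallel dilated-convolution enhancement block with channel attention, in the spirit of the PFEM described above. The channel sizes, dilation rates, and attention wiring are assumptions, not the authors' AANet implementation.

```python
# Illustrative sketch: parallel dilated-convolution branches fused by a 1x1
# convolution, then reweighted with squeeze-and-excitation style channel attention.
import torch
import torch.nn as nn

class ParallelDilatedBlock(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)
        self.attn = nn.Sequential(            # simple channel attention flow
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # parallel branches
        fused = self.fuse(feats)
        return fused * self.attn(fused)       # reweight channels by attention

x = torch.randn(1, 64, 32, 32)
print(ParallelDilatedBlock(64)(x).shape)      # torch.Size([1, 64, 32, 32])
```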

Integration of Extended IFC-BIM and Ontology for Information Management of Bridge Inspection (확장 IFC-BIM 기반 정보모델과 온톨로지를 활용한 교량 점검데이터 관리방법)

  • Erdene, Khuvilai;Kwon, Tae Ho;Lee, Sang-Ho
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.33 no.6
    • /
    • pp.411-417
    • /
    • 2020
  • To utilize building information modeling (BIM) technology at the bridge maintenance stage, it is necessary to integrate large quantities of bridge inspection and model data for object-oriented information management. This research aims to establish the benefits of utilizing the extended Industry Foundation Classes (IFC)-based BIM and an ontology for bridge inspection information management. IFC entities were extended to represent bridge objects, and a method of generating the extended IFC-based information model was proposed. A bridge inspection ontology was also developed by extracting and classifying inspection concepts from the AASHTO standard. The classified concepts and their relationships were mapped to the ontology based on the semantic triples approach. Finally, the extended IFC-based BIM model was integrated with the ontology for bridge inspection data management. The effectiveness of the proposed framework for bridge inspection information management through integration of the extended IFC-BIM and the ontology was tested and verified by extracting bridge inspection data via SPARQL queries.
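
A small rdflib sketch of the kind of SPARQL retrieval described above. The namespace, class and property names, and the toy triples are illustrative assumptions, not the paper's actual ontology schema or inspection data.

```python
# Illustrative sketch: store bridge-inspection triples and retrieve them with SPARQL.
from rdflib import Graph, Literal, Namespace, RDF

BR = Namespace("http://example.org/bridge-inspection#")  # hypothetical namespace
g = Graph()
g.add((BR.Girder01, RDF.type, BR.BridgeComponent))
g.add((BR.Girder01, BR.hasDefect, BR.Crack01))
g.add((BR.Crack01, BR.severity, Literal("moderate")))

# Retrieve every component with a defect and the defect's severity.
q = """
PREFIX br: <http://example.org/bridge-inspection#>
SELECT ?component ?defect ?severity WHERE {
    ?component br:hasDefect ?defect .
    ?defect br:severity ?severity .
}
"""
for row in g.query(q):
    print(row.component, row.defect, row.severity)
```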

International Patent Classification Using Latent Semantic Indexing (잠재 의미 색인 기법을 이용한 국제 특허 분류)

  • Jin, Hoon-Tae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2013.11a
    • /
    • pp.1294-1297
    • /
    • 2013
  • This paper studies a system that automatically classifies patent documents according to the International Patent Classification (IPC) through machine learning, and proposes a method to improve classification performance using latent semantic indexing (LSI). Previous research on automatic IPC classification of patent documents relied on word-matching indexing techniques; however, given the speed at which modern technical terms emerge and their diversity, we judged that a concept-based approach to terms would be more effective than raw word frequency for analyzing the relatedness of patent documents, which motivated this study of LSI-based classification. In the experiments, we compared the performance of information gain (IG) and chi-square statistics (CHI), representative feature-selection methods for word-matching indexing, with that of latent semantic indexing, using SVM, kNN, and Naive Bayes classifiers. Using SVM, which performed best, we evaluated how much nouns contribute to constructing the conceptual semantic structure of terms in LSI, and we experimentally compared the range of singular values that yields optimal performance. The analysis showed that LSI outperformed the word-matching techniques (IG, CHI). SVM and Naive Bayes performed at a similar level with word matching, but with LSI the performance of SVM was far superior. SVM improved by about 3% with LSI, whereas Naive Bayes dropped by about 20%. Regarding the influence of nouns on the latent semantic structure, using nouns yielded results about 10% better than using all words as content words, and in the analysis of performance by singular-value range, the best results were obtained when roughly the top 30% of ranks were retained.
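
A minimal scikit-learn sketch of LSI-style classification as described above: TF-IDF term vectors are reduced with truncated SVD (latent semantic indexing) and fed to an SVM. The toy patent texts, IPC labels, and number of components are illustrative assumptions, not the study's corpus or tuned singular-value range.

```python
# Illustrative sketch: TF-IDF -> truncated SVD (LSI) -> linear SVM classifier.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC

docs = [
    "semiconductor wafer etching process",
    "plasma etching of silicon substrates",
    "antibody formulation for cancer therapy",
    "monoclonal antibody drug delivery",
]
ipc_labels = ["H01L", "H01L", "A61K", "A61K"]  # toy IPC subclasses

lsi_svm = make_pipeline(
    TfidfVectorizer(),
    TruncatedSVD(n_components=2),   # keep top singular vectors (latent concepts)
    LinearSVC(),
)
lsi_svm.fit(docs, ipc_labels)
print(lsi_svm.predict(["etching a silicon wafer with plasma"]))
```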

Analysis on the Trend of The Journal of Information Systems Using TLS Mining (TLS 마이닝을 이용한 '정보시스템연구' 동향 분석)

  • Yun, Ji Hye;Oh, Chang Gyu;Lee, Jong Hwa
    • The Journal of Information Systems
    • /
    • v.31 no.1
    • /
    • pp.289-304
    • /
    • 2022
  • Purpose The development of the network and mobile industries has led companies to invest in information systems, driving a new industrial revolution. The Journal of Information Systems, which developed the information systems field into a theoretical and practical discipline in the 1990s, has a 30-year history of information systems research. This study aims to identify the academic value and research trends of JIS by analyzing those trends. Design/methodology/approach This study analyzes the trends of JIS by combining several methods, collectively referred to as TLS mining analysis. TLS mining analysis consists of a series of analyses including the Term Frequency-Inverse Document Frequency (TF-IDF) weight model, Latent Dirichlet Allocation (LDA) topic modeling, and text mining with semantic network analysis. First, keywords are extracted from the research data using the TF-IDF weight model, and then topic modeling is performed using the LDA algorithm to identify issue keywords. Findings The study used the summary service for published research papers provided by the Korea Citation Index to analyze JIS. 714 papers published from 2002 to 2021 were divided into two periods: 2002-2011 and 2012-2021. In the first period (2002-2011), research in the information systems field focused on e-business strategies, as most companies adopted online business models. In the second period (2012-2021), data-driven information technology and new industrial revolution technologies such as artificial intelligence, SNS, and mobile were the main research issues in the information systems field. In addition, keywords for improving the JIS citation index were presented.
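
A brief scikit-learn sketch of the TF-IDF and LDA steps mentioned above. The toy abstracts and the number of topics are illustrative assumptions, not the JIS corpus or the study's parameter choices.

```python
# Illustrative sketch: TF-IDF keyword weighting followed by LDA topic modeling.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "e-business strategy and online business model adoption",
    "mobile commerce platforms and consumer trust",
    "artificial intelligence and big data analytics in firms",
    "social network services and mobile user behavior",
]

# TF-IDF weights: higher values indicate more distinctive keywords per paper.
tfidf = TfidfVectorizer()
weights = tfidf.fit_transform(abstracts)

# LDA topic modeling on raw term counts to surface issue keywords.
cv = CountVectorizer()
counts = cv.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = cv.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}:", top_terms)
```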

Performance Improvement of Context-Sensitive Spelling Error Correction Techniques using Knowledge Graph Embedding of Korean WordNet (alias. KorLex) (한국어 어휘 의미망(alias. KorLex)의 지식 그래프 임베딩을 이용한 문맥의존 철자오류 교정 기법의 성능 향상)

  • Lee, Jung-Hun;Cho, Sanghyun;Kwon, Hyuk-Chul
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.3
    • /
    • pp.493-501
    • /
    • 2022
  • This paper studies context-sensitive spelling error correction and uses the Korean WordNet (KorLex)[1], which defines the relationships between words as a graph, to improve the performance of a correction technique[2] based on the vector information of embedded words. The Korean WordNet was constructed for Korean, building on WordNet[3] developed at Princeton University in the United States. To learn a semantic network in graph form, or to use its learned vector information, the graph must be transformed into vector form through embedding learning. For this transformation, a limited number of nodes in the network-shaped graph are listed in a linear, sentence-like format before being used as training input. One learning technique that uses this strategy is DeepWalk[4], which is used here to learn the word graph of the Korean WordNet. The graph embedding information is concatenated with the word vector information of the trained language model used for correction, and the final correction word is determined by the cosine distance between the vectors. To test whether the graph embedding information improves the performance of context-sensitive spelling error correction, confused word pairs were constructed and tested from the perspective of word sense disambiguation (WSD). In the experiments, the average correction performance over all confused word pairs improved by 2.24% compared to the baseline correction performance.
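
A small NumPy sketch of the scoring step described above, in which a language-model word vector is concatenated with a graph-embedding vector and the correction candidate closest to the context is chosen by cosine similarity. The random vectors, their dimensions, and the confused word pair are placeholders, not actual KorLex/DeepWalk or language-model embeddings.

```python
# Illustrative sketch: pick the correction candidate whose concatenated
# (language-model + graph-embedding) vector is closest to the context vector.
import numpy as np

rng = np.random.default_rng(0)
candidates = ["사과", "사고"]                                # hypothetical confused word pair
lm_vec = {w: rng.normal(size=100) for w in candidates}      # placeholder language-model vectors
graph_vec = {w: rng.normal(size=64) for w in candidates}    # placeholder DeepWalk-style vectors
context = np.concatenate([rng.normal(size=100), rng.normal(size=64)])  # placeholder context vector

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {w: cosine(context, np.concatenate([lm_vec[w], graph_vec[w]]))
          for w in candidates}
print(max(scores, key=scores.get))   # candidate closest to the context vector
```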

Attention-based word correlation analysis system for big data analysis (빅데이터 분석을 위한 어텐션 기반의 단어 연관관계 분석 시스템)

  • Hwang, Chi-Gon;Yoon, Chang-Pyo;Lee, Soo-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.27 no.1
    • /
    • pp.41-46
    • /
    • 2023
  • Recently, with the development of machine learning, big data analysis can employ a variety of techniques. Big data collected in the real world lacks an automated refinement technique for identical or similar terms based on semantic analysis of the relationships between words. Since most big data is described in ordinary sentences, it is difficult to grasp the meaning and terminology of those sentences. To solve these problems, it is necessary to understand the morphological analysis and meaning of sentences. Accordingly, NLP, a set of techniques for analyzing natural language, can capture the relationships among words and sentences. Among NLP techniques, the transformer has been proposed as a way to overcome the disadvantages of RNNs by using self-attention within a seq2seq encoder-decoder structure. In this paper, transformers are used to form associations between words in order to understand the words and phrases of sentences extracted from big data.
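
A minimal NumPy sketch of the scaled dot-product self-attention that a transformer uses to weight word-to-word associations, as referenced above. The matrix shapes and random "word" vectors are illustrative assumptions, and learned query/key/value projections are omitted for brevity.

```python
# Illustrative sketch: scaled dot-product self-attention over a few word vectors.
import numpy as np

def self_attention(X):
    """X: (num_words, dim) matrix of word vectors; returns attended vectors
    and the attention weights expressing pairwise word association."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                   # pairwise similarity, scaled
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over each row
    return weights @ X, weights

X = np.random.default_rng(0).normal(size=(4, 8))    # 4 "words", 8-dim vectors
attended, attn = self_attention(X)
print(attn.round(2))  # row i: how strongly word i attends to each other word
```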