• Title/Abstract/Keyword: PARSE

Search Result 132, Processing Time 0.027 seconds

A Base Address Analysis Tool for Static Analysis of ARM Architecture-Based Binary (ARM 아키텍처 기반 바이너리 정적 분석을 위한 기준 주소 분석 도구)

  • Kang, Ji-Hun;Ryou, Jae-Cheol
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.26 no.5
    • /
    • pp.1185-1189
    • /
    • 2016
  • In modern society, the number of embedded devices has been increasing. As the number of embedded devices grows, backdoors and vulnerabilities in them continue to be found, which makes firmware analysis necessary. In this paper, we developed a tool that extracts the base address information needed to build a static analysis environment for an embedded device's firmware. Using this tool, we built such an environment. As a result, strings in the firmware can be parsed and their references checked. The increased number of functions recognized under the recovered base address also demonstrates the validity of the tool.
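The heuristic the abstract alludes to, checking whether string references resolve under a candidate load address, can be sketched as follows. This is a minimal illustration, not the paper's tool; the toy image, offsets, and candidate bases are all hypothetical.

```python
import struct

def score_base(image: bytes, base: int, string_offsets: set) -> int:
    """Count 32-bit little-endian words that, treated as absolute
    addresses under `base`, point at a known string offset."""
    hits = 0
    for i in range(0, len(image) - 3, 4):
        word = struct.unpack_from("<I", image, i)[0]
        if word >= base and (word - base) in string_offsets:
            hits += 1
    return hits

# Toy firmware image: a string at offset 0, followed by a pointer to it
# assuming the image is loaded at 0x8000 (all values hypothetical).
image = b"hello\x00\x00\x00" + struct.pack("<I", 0x8000)
strings = {0}  # offsets where strings were found in the image

# The candidate base under which the most references resolve wins.
best = max((0x0, 0x4000, 0x8000), key=lambda b: score_base(image, b, strings))
```

A real tool would scan the whole firmware for string tables and sweep candidate bases at a page granularity, but the scoring idea is the same.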

A Study of Parsing System Implementation Using Segmentation and Argument Information (구간 분할과 논항정보를 이용한 구문분석시스템 구현에 관한 연구)

  • Park, Yong Uk;Kwon, Hyuk Chul
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.3
    • /
    • pp.366-374
    • /
    • 2013
  • One of the most important problems in syntactic analysis is syntactic ambiguity. This paper proposes a parsing system that reduces syntactic ambiguity by using a segmentation method and argument information. The proposed system takes morphemes as the input of syntactic analysis, and the syntactic analyzer generates all possible parse trees from the given morphemes, which produces many ambiguities. We use three methods to resolve them: disambiguation during morphological analysis, segmentation during syntactic analysis, and the use of argument information. With these three methods, we can reduce many ambiguities in Korean syntactic analysis. In our experiments, the approach decreases syntactic ambiguity by about 53%.
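Why segmentation shrinks the search space can be seen with a toy count of binary parse trees: splitting a sentence at a boundary and parsing each segment independently replaces one large Catalan number with a product of smaller ones. This is only an illustration of the combinatorics, not the paper's algorithm, and the example sentence split is hypothetical.

```python
from math import comb

def catalan(n: int) -> int:
    """Number of binary trees with n internal nodes (n+1 leaves)."""
    return comb(2 * n, n) // (n + 1)

def tree_count(words) -> int:
    """Possible binary parse trees over a word sequence."""
    return catalan(len(words) - 1)

sentence = ["w1", "w2", "w3", "w4", "w5"]      # five words
whole = tree_count(sentence)                    # 14 possible trees

# Segment after the second word and parse the halves independently.
segmented = tree_count(sentence[:2]) * tree_count(sentence[2:])  # 1 * 2 = 2
```

Even one well-placed segment boundary removes most structural ambiguity, which is why combining segmentation with morphological disambiguation and argument information compounds the reduction.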

Identification Systems of Fake News Contents on Artificial Intelligence & Bigdata

  • KANG, Jangmook;LEE, Sangwon
    • International journal of advanced smart convergence
    • /
    • v.10 no.3
    • /
    • pp.122-130
    • /
    • 2021
  • This study describes an Artificial Intelligence-based fake news identification system and its methods for determining the authenticity of content distributed over the Internet. Among the news we encounter is so-called fake news, in which an individual or organization intentionally writes something untrue to achieve a particular purpose. In this study, we design a system that uses Artificial Intelligence techniques to identify fake content within news. The proposed identification model extracts multiple unit factors from the target content and attempts to classify those unit factors into different types. In addition, we design a preprocessing process that parses only the necessary information by analyzing the unit factors. Based on these results, we design the part where each unit fact is analyzed, as a predetermined unit, by a deep learning prediction model. The design also includes a database that scores the degree of fake news in the target content and stores the information identified in each unit factor.

Phrase-Chunk Level Hierarchical Attention Networks for Arabic Sentiment Analysis

  • Abdelmawgoud M. Meabed;Sherif Mahdy Abdou;Mervat Hassan Gheith
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.9
    • /
    • pp.120-128
    • /
    • 2023
  • In this work, we have presented ATSA, a hierarchical attention deep learning model for Arabic sentiment analysis. ATSA was proposed to address several challenges and limitations that arise when applying classical models to opinion mining in Arabic. Arabic-specific challenges, including morphological complexity and language sparsity, were addressed by modeling semantic composition at the level of Arabic morphological analysis after tokenization. ATSA performs phrase-chunk sentiment embedding to provide a broader set of features covering syntactic, semantic, and sentiment information. We used a phrase-structure parser to generate syntactic parse trees that serve as a reference for ATSA. This allowed modeling semantic and sentiment composition following the natural order in which words and phrase-chunks are combined in a sentence. The proposed model was evaluated on three Arabic corpora that correspond to different genres (newswire, online comments, and tweets) and different writing styles (MSA and dialectal Arabic). Experiments showed that each of the proposed contributions in ATSA achieves a significant improvement. The combination of all contributions, which makes up the complete ATSA model, improved classification accuracy by 3% and 2% on the Tweets and Hotel reviews datasets, respectively, compared to existing models.

SPDX Parser and Validator for Software Compliance (소프트웨어 컴플라이언스를 위한 SPDX Parser 및 Validator)

  • Yun, Ho-Yeong;Joe, Yong-Joon;Jung, Byung-Ok;Shin, Dong-Myung
    • Journal of Software Assessment and Valuation
    • /
    • v.13 no.1
    • /
    • pp.15-21
    • /
    • 2017
  • Analyzing a software package consisting of a large number of files takes enormous cost and time. Therefore, the SPDX (Software Package Data Exchange) working group, in collaboration with the Linux Foundation, published a software information (metadata) specification: SPDX. As of the first half of 2017, the specification (SPDX version 2.1) contains seven chapters and 66 items. It prefers the Tag/Value or RDF forms but also supports a spreadsheet form. In this paper, we introduce SPDX parsing and validation tools that check the validity of an SPDX document. As our next target, we will develop an SPDX document generator to manage software packages more efficiently.
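The Tag/Value form mentioned in the abstract is a line-oriented `Tag: Value` format, so the parse-then-validate flow can be sketched briefly. This is a minimal sketch only: it ignores `<text>` blocks and most of the spec's 66 items, and the sample tags shown are just a fragment.

```python
def parse_tag_value(text: str) -> dict:
    """Minimal sketch of an SPDX Tag/Value parser: one 'Tag: Value'
    pair per line; comment lines start with '#'. Multi-line <text>
    values and relationship semantics are out of scope here."""
    doc = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        tag, _, value = line.partition(":")
        doc.setdefault(tag.strip(), []).append(value.strip())
    return doc

sample = """SPDXVersion: SPDX-2.1
DataLicense: CC0-1.0
PackageName: example-pkg
"""
doc = parse_tag_value(sample)

# A validator can then check that mandatory tags from the spec are present.
missing = [t for t in ("SPDXVersion", "DataLicense") if t not in doc]
```

Separating the parser (syntax) from the validator (mandatory fields, value formats) mirrors the tool split the paper describes.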

Differential Impacts of Discretionary Accrual Directions on Accounting Conservatism

  • Sangkwon CHA;HyeongTae CHO
    • The Journal of Economics, Marketing and Management
    • /
    • v.12 no.3
    • /
    • pp.13-22
    • /
    • 2024
  • Purpose: While there has been extensive research on discretionary accruals (hereafter, 'DA') and accounting conservatism, interpretations have varied among researchers depending on how discretionary accruals are determined as proxies. This study investigates the relationship between discretionary accruals (DA) and accounting conservatism, focusing on the distinctions between signed DA and absolute DA. Research design, data and methodology: Using financial data from companies listed on the KOSPI and KOSDAQ markets from 2010 to 2020, we employ regression analysis to explore how signed and absolute DA impact accounting conservatism. This approach allows us to parse out the effects of positive versus negative discretionary accruals systematically. Results: Our findings indicate a divergent impact of DA on accounting conservatism. Specifically, in cases of negative DA, an increase in DA corresponds with heightened accounting conservatism. Conversely, when DA is positive, increases in DA do not exhibit a significant relationship with changes in accounting conservatism. These effects suggest that the nature of DA, whether it represents upward or downward earnings adjustments, critically influences its relationship with conservatism. Conclusions: The results elucidate the nuanced role of discretionary accruals in influencing accounting conservatism. The decrease in accounting conservatism associated with absolute increases in DA appears primarily driven by groups with downward earnings adjustments. This suggests that as negative DA diminishes toward zero, accounting conservatism intensifies, whereas positive DA does not have a parallel effect.
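The signed-versus-absolute distinction amounts to splitting the sample by the sign of DA and estimating the slope on conservatism separately in each group. A toy sketch with fabricated, purely illustrative firm-year observations (not the paper's data) shows the pattern the results describe: a strong positive slope in the negative-DA group and a near-zero slope in the positive-DA group.

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs (single regressor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical (signed DA, conservatism score) observations.
data = [(-0.30, 0.5), (-0.20, 0.7), (-0.10, 0.9),   # downward adjusters
        (0.10, 0.40), (0.20, 0.41), (0.30, 0.39)]   # upward adjusters

neg = [(d, c) for d, c in data if d < 0]
pos = [(d, c) for d, c in data if d >= 0]
neg_slope = slope([d for d, _ in neg], [c for _, c in neg])  # clearly positive
pos_slope = slope([d for d, _ in pos], [c for _, c in pos])  # near zero
```

Pooling on absolute DA would blur exactly this asymmetry, which is the study's point about proxy choice.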

Metadata design and system development for autonomous data survey using unmanned patrol robots (무인순찰로봇 활용 데이터 기록 자동화를 위한 메타데이터 정의 및 시스템 구축)

  • Jung, Namcheol;Lee, Giryun;Nho, Hyunju
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2023.11a
    • /
    • pp.267-268
    • /
    • 2023
  • Unmanned patrol robots are currently being developed for autonomous data surveys on construction sites. As the amount of data acquired by robots increases, proper metadata and a management system are needed to handle the data flow. In this study, we developed three components, a metadata design, a robot system, and a web system, to automate construction site data surveys using unmanned patrol robots. The metadata was designed mainly to record when and where raw data was acquired. To identify the location of the acquired data, localization data from a SLAM algorithm was converted to match the construction drawings. The robot system and web system were developed to generate, store, and parse the raw data and metadata automatically. The components developed in this study were applied to SPOT, a quadruped robot from Boston Dynamics. Autonomous surveys of 360-degree pictures and environment sensor data were tested on two construction sites, and the robot worked as intended. As a further study, development of the autonomous data survey will continue in order to improve convenience and productivity.
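The "when and where" metadata record, including the conversion of SLAM coordinates into the drawing frame, can be sketched as a small JSON entry. The field names and the origin/scale transform here are hypothetical, not the paper's actual schema.

```python
import json
from datetime import datetime, timezone

def make_metadata(sensor: str, slam_xy, origin, scale):
    """Sketch of one survey-record entry: captures when and where raw
    data was acquired, converting SLAM coordinates into the drawing
    frame with a hypothetical origin shift and scale factor."""
    drawing_xy = [(p - o) * scale for p, o in zip(slam_xy, origin)]
    return {
        "sensor": sensor,
        "acquired_at": datetime.now(timezone.utc).isoformat(),
        "slam_position": list(slam_xy),
        "drawing_position": drawing_xy,
    }

record = make_metadata("360-camera", (12.4, 3.1), origin=(10.0, 1.0), scale=2.0)
line = json.dumps(record)     # the robot system would store this
parsed = json.loads(line)     # the web system would parse it back
```

Keeping the record serializable end to end is what lets the robot system generate and the web system parse the same entries automatically.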


A Korean Grammar Checker based on the Trees Resulted from a Full Parser (전체 문장 분석에 기반한 한국어 문법 검사기)

  • 이공주;황선영;김지은
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.10
    • /
    • pp.992-999
    • /
    • 2003
  • The purpose of a grammar checker is to find grammatically erroneous expressions in a sentence and to provide appropriate suggestions for them. To find those errors, a grammar checker should parse the whole input sentence, which is a highly time-consuming job. For this reason, most Korean grammar checkers adopt a partial parser that can analyze a fragment of a sentence without ambiguity. This paper presents a Korean grammar checker that uses a full parser to find grammatical errors. This approach allows the grammar checker to critique errors between two words in a long-distance relationship within a sentence. As a result, it improves the accuracy of error correction, but may come at the expense of decreased performance. The Korean grammar checker described in this paper is implemented with 65 rules for checking and correcting grammatical errors. The grammar checker shows 96.49% checking accuracy against a test corpus of 7 million words.

Relation Extraction based on Extended Composite Kernel using Flat Lexical Features (평면적 어휘 자질들을 활용한 확장 혼합 커널 기반 관계 추출)

  • Chai, Sung-Pil;Jeong, Chang-Hoo;Chai, Yun-Soo;Myaeng, Sung-Hyon
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.8
    • /
    • pp.642-652
    • /
    • 2009
  • In order to improve the performance of the existing relation extraction approaches, we propose a method for combining two pivotal concepts which play an important role in classifying semantic relationships between entities in text. Having built a composite kernel-based relation extraction system, which incorporates both entity features and syntactic structured information of relation instances, we define nine classes of lexical features and synthetically apply them to the system. Evaluation on the ACE RDC corpus shows that our approach boosts the effectiveness of the existing composite kernels in relation extraction. It also confirms that by integrating the three important features (entity features, syntactic structures and contextual lexical features), we can improve the performance of a relation extraction process.
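The core idea of a composite kernel, combining an entity-feature kernel with a syntactic-structure kernel into one similarity score, can be sketched briefly. Both component kernels below are drastically simplified stand-ins (set overlaps rather than real tree kernels), and the weight `alpha` is a hypothetical parameter.

```python
def entity_kernel(a, b):
    """Flat-feature kernel: overlap between entity feature sets."""
    return len(a["features"] & b["features"])

def tree_kernel(a, b):
    """Stand-in for a syntactic tree kernel: shared syntactic paths."""
    return len(set(a["paths"]) & set(b["paths"]))

def composite_kernel(a, b, alpha=0.6):
    """Linear combination of the two kernels (alpha is hypothetical)."""
    return alpha * entity_kernel(a, b) + (1 - alpha) * tree_kernel(a, b)

# Two toy relation instances with entity features and syntactic paths.
x = {"features": {"PER", "ORG", "in-same-NP"}, "paths": ["NP-VP-NP"]}
y = {"features": {"PER", "ORG"}, "paths": ["NP-VP-NP", "NP-PP"]}
k = composite_kernel(x, y)
```

A linear combination of valid kernels is itself a valid kernel, which is what lets the three feature sources (entity features, syntactic structure, contextual lexical features) be integrated inside one SVM.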

A Streaming XML Hardware Parser using a Tree with Failure Transition (실패 전이를 갖는 트리를 이용한 스트리밍 XML 하드웨어 파서)

  • Lee, Kyu-Hee;Han, Sang-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.10
    • /
    • pp.2323-2329
    • /
    • 2013
  • Web services employ XML to represent data, and an XML parser is needed to use that data. The DOM (Document Object Model) is widely used to parse XML, but it is not suitable for systems with limited resources because it requires a preprocessing step to build the DOM and additional memory space. In this paper, we propose the StreXTree (Streaming XML Tree), which has failure transitions and requires no preprocessing, in order to improve system performance. Compared to other works, our StreXTree parser achieves 2.39x and 3.02x improvements in system performance over Search and RBStreX, respectively. In addition, our StreXTree parser supports well-formedness checking to verify the syntax and structure of XML.
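The failure-transition idea resembles Aho-Corasick matching: on a mismatch, the matcher follows a precomputed failure link instead of re-reading input. A minimal sketch over a stream of tag names follows; it is not the actual StreXTree data structure, and the query paths and tag stream are hypothetical.

```python
from collections import deque

def build_trie(paths):
    """Trie over tag-name tokens with BFS-computed failure links
    (Aho-Corasick style), sketching how failure transitions let a
    streaming matcher advance without backtracking."""
    trie = [{"next": {}, "fail": 0, "out": []}]
    for path in paths:
        node = 0
        for tag in path:
            if tag not in trie[node]["next"]:
                trie.append({"next": {}, "fail": 0, "out": []})
                trie[node]["next"][tag] = len(trie) - 1
            node = trie[node]["next"][tag]
        trie[node]["out"].append("/".join(path))
    queue = deque(trie[0]["next"].values())   # root's children fail to root
    while queue:
        u = queue.popleft()
        for tag, v in trie[u]["next"].items():
            f = trie[u]["fail"]
            while f and tag not in trie[f]["next"]:
                f = trie[f]["fail"]
            trie[v]["fail"] = trie[f]["next"].get(tag, 0)
            trie[v]["out"] += trie[trie[v]["fail"]]["out"]
            queue.append(v)
    return trie

def match(trie, tag_stream):
    """Feed tags one at a time; each tag is examined exactly once."""
    node, found = 0, []
    for tag in tag_stream:
        while node and tag not in trie[node]["next"]:
            node = trie[node]["fail"]     # failure transition, no re-read
        node = trie[node]["next"].get(tag, 0)
        found += trie[node]["out"]
    return found

trie = build_trie([["book", "title"], ["title", "text"]])
hits = match(trie, ["book", "title", "text"])
```

Because every input tag is consumed exactly once, memory and time stay bounded, which is the property that makes this style of matcher attractive for resource-limited streaming parsers.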