• Title/Summary/Keyword: manual matching


Malware Detection with Directed Cyclic Graph and Weight Merging

  • Li, Shanxi;Zhou, Qingguo;Wei, Wei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.9
    • /
    • pp.3258-3273
    • /
    • 2021
  • Malware is a severe threat to computing systems, and there is a long history of battle between malware detection and anti-detection. Most traditional detection methods are based either on static analysis with signature matching or on dynamic analysis focused on sensitive behaviors. However, these detections have only limited effect as malware evolves, so manual updates to the feature sets are essential. Moreover, most of these methods match target samples against a common feature database, ignoring the characteristics of the sample itself. In this paper, we propose a new malware detection method that combines the features of a single sample with the general features of malware. First, a Directed Cyclic Graph (DCG) structure is adopted to extract features from samples. Then the sensitivity of each API call is computed with a Markov chain. Afterward, the graph is merged with the chain to obtain the final features. Finally, detectors based on machine learning or deep learning are devised for identification. To evaluate the effectiveness and robustness of our approach, several experiments were conducted. The results show that the proposed method performs well in most tests and that the approach remains stable as malware develops and grows.
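The Markov-chain sensitivity step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the trace format (lists of API names) and the set of sensitive APIs are assumptions made for the example.

```python
from collections import defaultdict

def markov_transition_probs(traces):
    """Estimate first-order Markov transition probabilities between
    API calls observed in a set of call traces."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    probs = {}
    for a, nexts in counts.items():
        total = sum(nexts.values())
        probs[a] = {b: c / total for b, c in nexts.items()}
    return probs

def api_sensitivity(probs, sensitive_apis):
    """Score each API by the probability mass it sends toward a
    (hypothetical) set of sensitive APIs."""
    return {a: sum(p for b, p in nexts.items() if b in sensitive_apis)
            for a, nexts in probs.items()}

# Toy traces; "send" stands in for a sensitive API.
traces = [["open", "read", "send"], ["open", "write", "send"]]
p = markov_transition_probs(traces)
s = api_sensitivity(p, {"send"})
```

In the paper these per-API sensitivities would then be merged with the DCG edge weights before detection; the merging rule itself is not specified in the abstract.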

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents including unstructured data and text have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, categorization was performed manually. However, with manual categorization, not only is accuracy not guaranteed, but the process also requires a large amount of time and incurs huge costs. Many studies have been conducted toward the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics, because they assume that one document can be assigned to one category only. To overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, these are also limited in that their learning process requires training on a multi-categorized document set, so they cannot be applied to multi-categorization of most documents unless multi-categorized training sets are provided. To remove the requirement of a multi-categorized training set in traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing relationships among categories, topics, and documents. First, we find the relationship between documents and topics using the result of topic analysis on single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate matching scores for each document against multiple categories. A document is classified into a category if and only if its matching score is higher than a predefined threshold; for example, a document may be classified into the three categories whose matching scores exceed the threshold. The main contribution of our study is that our methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and they contain less vulgar language and slang than other typical text documents. We collected news articles from July 2012 to June 2013. The articles exhibit large variation in the number of items per category, because readers have different levels of interest in each category and because events occur with different frequencies in each category. To minimize distortion of the results from these differences, we extracted 3,000 articles equally from each of eight categories, for a total of 24,000 articles. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics." Using the collected news articles, we calculated document/category correspondence scores from topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree to which each document corresponds to a given category. As a result, we could present two additional categories for each of 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top 1 predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. Interestingly, there was large variation among the eight categories in precision, recall, and F-score.
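The scoring and thresholding steps above can be sketched as follows. This is a minimal illustration under assumed data: the topic weights, the topic/category correspondence table, and the threshold value are all hypothetical, not taken from the paper.

```python
def category_scores(doc_topic, topic_category):
    """Combine document->topic weights with topic->category
    correspondence scores to get document->category matching scores."""
    scores = {}
    for topic, w in doc_topic.items():
        for cat, corr in topic_category.get(topic, {}).items():
            scores[cat] = scores.get(cat, 0.0) + w * corr
    return scores

def assign_categories(scores, threshold):
    """Assign every category whose matching score meets the
    predefined threshold, best first."""
    return [c for c, s in sorted(scores.items(), key=lambda kv: -kv[1])
            if s >= threshold]

# Hypothetical topic distribution for one document.
doc_topic = {"finance": 0.7, "tech": 0.3}
# Hypothetical topic/category correspondence table.
topic_category = {"finance": {"Economy": 0.9, "Politics": 0.2},
                  "tech": {"IT Science": 0.8}}
scores = category_scores(doc_topic, topic_category)
cats = assign_categories(scores, threshold=0.3)
```

With these numbers only "Economy" clears the threshold; lowering the threshold would admit further categories in score order.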

Evaluation on Tie Point Extraction Methods of WorldView-2 Stereo Images to Analyze Height Information of Buildings (건물의 높이 정보 분석을 위한 WorldView-2 스테레오 영상의 정합점 추출방법 평가)

  • Yeji, Kim;Yongil, Kim
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.33 no.5
    • /
    • pp.407-414
    • /
    • 2015
  • Interest points are generally located at pixels where height changes occur, so they can be significant pixels for DSM generation and play an important role in producing accurate and reliable matching results. Manual operation is widely used to extract interest points and to match stereo satellite images for generating height information, but it is costly and time-consuming. Thus, in this study, a tie point extraction method using the Harris-affine technique and SIFT (Scale Invariant Feature Transform) descriptors is suggested to analyze the height information of buildings. Interest points on buildings were extracted by the Harris-affine technique, and tie points were collected efficiently with SIFT descriptors, which are scale-invariant. A search window around each interest point was used, and the direction of tie point pairs was considered, for a more efficient tie point extraction. The tie point pairs estimated by the proposed method were used to analyze the height information of buildings. The result had RMSE values of less than 2 m compared to the height information estimated by the manual method.
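The descriptor-matching step can be sketched with Lowe's ratio test, the standard acceptance rule for SIFT correspondences. This is a toy illustration, assuming descriptors are already extracted and represented as numeric tuples (real SIFT descriptors are 128-dimensional); the search-window and direction constraints from the paper are omitted.

```python
import math

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B with Lowe's ratio
    test: accept a match only if the nearest neighbour is clearly
    better than the second nearest."""
    matches = []
    for i, da in enumerate(desc_a):
        # Distances from this descriptor to every candidate in B.
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy 2-D "descriptors" standing in for 128-D SIFT vectors.
desc_a = [(0.0, 0.0), (5.0, 5.0)]
desc_b = [(0.0, 1.0), (9.0, 9.0), (5.0, 5.0)]
matches = ratio_test_match(desc_a, desc_b)
```

In a real pipeline the candidate set `desc_b` would be restricted to descriptors inside the search window around each interest point, which both speeds up matching and removes many false candidates.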

Change Attention-based Vehicle Scratch Detection System (변화 주목 기반 차량 흠집 탐지 시스템)

  • Lee, EunSeong;Lee, DongJun;Park, GunHee;Lee, Woo-Ju;Sim, Donggyu;Oh, Seoung-Jun
    • Journal of Broadcast Engineering
    • /
    • v.27 no.2
    • /
    • pp.228-239
    • /
    • 2022
  • In this paper, we propose an unmanned vehicle scratch detection deep learning model for car sharing services. Conventional scratch detection models consist of two steps: 1) a deep learning module for scratch detection in images taken before and after rental, and 2) a manual matching process for finding newly generated scratches. In order to build a fully automatic scratch detection model, we propose a one-step unmanned scratch detection deep learning model. The proposed model is implemented by applying transfer learning and fine-tuning to a deep learning model that detects changes in satellite images. In the proposed car sharing service, specular reflection greatly affects scratch detection performance, since the brightness of the gloss-treated automobile surface is anisotropic and non-expert users take the pictures with general cameras. To reduce detection errors caused by specularly reflected light, we propose a preprocessing step that removes specular reflection components. For data taken with mobile phone cameras, the proposed system provides high matching performance both subjectively and objectively. The scores for change detection metrics such as precision, recall, F1, and kappa are 67.90%, 74.56%, 71.08%, and 70.18%, respectively.
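The four change-detection metrics reported above are standard pixel-wise scores computed from a confusion matrix. A minimal sketch (assuming predicted and ground-truth change maps flattened to 0/1 sequences; not the authors' evaluation code):

```python
def change_metrics(pred, truth):
    """Compute precision, recall, F1, and Cohen's kappa for binary
    change maps given as flat 0/1 sequences."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    n = tp + fp + fn + tn
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    po = (tp + tn) / n                      # observed agreement
    pe = ((tp + fp) * (tp + fn)             # chance agreement
          + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (po - pe) / (1 - pe) if pe != 1 else 0.0
    return precision, recall, f1, kappa

# Toy 4-pixel example: one true positive, one false positive.
precision, recall, f1, kappa = change_metrics([1, 1, 0, 0], [1, 0, 0, 0])
```

Kappa is the most conservative of the four because it discounts the agreement expected by chance, which matters when the "no change" class dominates the image.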

A Study on A Tablet-PC Based Application Design For Self-examination of Dementia (치매 자가 진단을 위한 태블릿 PC용 어플리케이션 설계 연구)

  • Ryu, Neung Hwa;Park, Seung Ho
    • Design Convergence Study
    • /
    • v.13 no.2
    • /
    • pp.143-164
    • /
    • 2014
  • In this study, we suggest a tablet-PC based application as an instrument for self-examination of dementia and aim to evaluate its usability for practical use. The tool for self-diagnosis of dementia tests a person's sentence comprehension, measured by the accuracy of a sentence-picture matching task, and the result of the test can differentiate individuals with dementia of the Alzheimer's type from the normally aging population. For use in the diagnosis of dementia, we developed a new version of the sentence-picture matching task in a cooperative study with NABLE (Neurogenic communication And Brain Lab at Ewha) and applied it to the application as its main function. Targeting the "New Silver," the pre-elderly generation in Korea, this application provides them with three values: "self-," "easy," and "simple." When using this application, users gain these values through an instruction manual with a recorded guide voice, easy-to-use functions, and a simplified menu structure. After prototyping, we conducted a usability test, and the results proved that New Silver users can easily operate the application by themselves.

3D Depth Information Extraction Algorithm Based on Motion Estimation in Monocular Video Sequence (단안 영상 시퀸스에서 움직임 추정 기반의 3차원 깊이 정보 추출 알고리즘)

  • Park, Jun-Ho;Jeon, Dae-Seong;Yun, Yeong-U
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.549-556
    • /
    • 2001
  • The general problem of recovering 3D from 2D imagery requires depth information for each picture element. The manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to simplify the depth estimation algorithm that extracts the depth information of every region from a monocular image sequence with camera translation, in order to implement 3D video in real time. The paper is based on the property that the motion of every point within an image taken under camera translation depends on its depth. Full-search motion estimation based on a block matching algorithm is exploited as the first step, and then the motion vectors are compensated for the effects of camera rotation and zooming. We introduce an algorithm that estimates the motion of objects by analyzing the monocular motion picture, and that also calculates the average depth of each frame and the depth of each region relative to that average. Simulation results show that whether a region is judged to belong to a near or a distant object accords with the relative depth that the human visual system recognizes.
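The first step above, full-search block matching, can be sketched as follows. This is a minimal SAD (sum of absolute differences) version on plain 2-D lists; the frame contents, block size, and search radius are illustrative assumptions, and the rotation/zoom compensation from the paper is not shown.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(ref, cur, by, bx, size, radius):
    """Exhaustively search a (2*radius+1)^2 window in the reference
    frame for the block of `cur` anchored at (by, bx); return the
    motion vector (dy, dx) with minimum SAD."""
    block = [row[bx:bx + size] for row in cur[by:by + size]]
    best = (float("inf"), (0, 0))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= len(ref) - size and 0 <= x <= len(ref[0]) - size:
                cand = [row[x:x + size] for row in ref[y:y + size]]
                best = min(best, (sad(block, cand), (dy, dx)))
    return best[1]

# A bright 2x2 patch moves from (0, 0) in `ref` to (1, 1) in `cur`.
ref = [[9, 9, 0, 0],
       [9, 9, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
cur = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
mv = full_search(ref, cur, by=1, bx=1, size=2, radius=1)
```

Under the translation-only assumption, the magnitude of such vectors is inversely related to depth, which is what lets the paper turn motion into a per-region depth estimate.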


The Validity and Reliability of the Second Korean Working Conditions Survey

  • Kim, Young Sun;Rhee, Kyung Yong;Oh, Min Jung;Park, Jungsun
    • Safety and Health at Work
    • /
    • v.4 no.2
    • /
    • pp.111-116
    • /
    • 2013
  • Background: The aim of this study was to evaluate the quality of the Second Korean Working Conditions Survey (KWCS), focusing on its validity and reliability. Methods: The external validity was evaluated by assessing the sampling procedures and the response rate, in order to investigate the representativeness of the sample. The content validity was evaluated by assessing the development of the questionnaire and the consistency of the questions for the selected construct. The test-retest method was used to evaluate reliability by means of a phone survey of a randomly selected 30% of the respondents. The respondents' satisfaction with the survey procedures and the interview time were analyzed to evaluate the quality of the survey data. Results: The external validity was assured by an acceptable sampling procedure, rigid multi-stage stratified cluster random sampling. The content validity was also guaranteed by a reasonable procedure for the development of the questionnaire, including a pretest. The internal consistency of the questions on work autonomy was maintained, with a Cronbach's alpha of 0.738. The response rate of 36% was lower than that of the European Working Conditions Survey (EWCS), with a contact rate of 66% compared to 76% for the EWCS. The matching rates of the five retested questions exceeded 98%, indicating high reliability. Conclusion: The quality of the second KWCS was assured by its high external and content validity and its reliability. The rigid sampling procedure and the development of the questionnaire contributed to quality assurance. The high level of reliability may be attributed to the sophisticated field survey procedures and the development of a technical manual for interviewers. Technical strategies for a higher response rate should be developed for future surveys.

Influence of implant mucosal thickness on early bone loss: a systematic review with meta-analysis

  • Di Gianfilippo, Riccardo;Valente, Nicola Alberto;Toti, Paolo;Wang, Hom-Lay;Barone, Antonio
    • Journal of Periodontal and Implant Science
    • /
    • v.50 no.4
    • /
    • pp.209-225
    • /
    • 2020
  • Purpose: Marginal bone loss (MBL) is an important clinical issue in implant therapy. One feature that has been cited as a contributing factor to this bone loss is peri-implant mucosal thickness. Therefore, in this report, we conducted a systematic review of the literature comparing bone remodeling around implants placed in areas with thick (≥2-mm) vs. thin (<2-mm) mucosa. Methods: A PICO question was defined. Manual and electronic searches were performed of the MEDLINE/PubMed and Cochrane Oral Health Group databases. The inclusion criteria were prospective studies that documented soft tissue thickness with direct intraoperative measurements and that included at least 1 year of follow-up. When possible, a meta-analysis was performed for both the overall and subgroup analyses. Results: Thirteen papers fulfilled the inclusion criteria. A meta-analysis of 7 randomized clinical trials was conducted. Significantly less bone loss was found around implants with thick mucosa than around those with thin mucosa (difference, -0.53 mm; P<0.0001). Subgroups were analyzed regarding the apico-coronal positioning, the use of platform-matched vs. platform-switched (PS) connections, and the use of cement-retained vs. screw-retained prostheses. In these analyses, thick mucosa was found to be associated with significantly less MBL than thin mucosa (P<0.0001). Among non-matching (PS) connections and screw-retained prostheses, bone levels were not affected by mucosal thickness. Conclusions: Soft tissue thickness was found to be correlated with MBL except in cases of PS connections used on implants with thin tissues and screw-retained prostheses. Mucosal thickness did not affect implant survival or the occurrence of biological or aesthetic complications.

Exterior Vision Inspection Method of Injection Molding Automotive Parts (사출성형 자동차부품의 외관 비전검사 방법)

  • Kim, HoYeon;Cho, Jae-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.2
    • /
    • pp.127-132
    • /
    • 2019
  • In this paper, we propose a vision inspection method for injection-molded automotive parts to improve their appearance quality and productivity. The exterior inspection of injection-molded automobile parts has generally been done by manual human sampling inspection. We first applied the edge-tolerance vision inspection algorithm ([1]-[4]) used for inspecting electronic components (TFT-LCD and PCB), and then propose a new vision inspection method to overcome its shortcomings. In the proposed method, the inspection images of the parts under test are aligned against a reference image of a good part. Then, after partial adaptive binarization, a binary block matching algorithm is used to compare the good binary image with the test binary image. We verified the effectiveness of the edge-tolerance vision inspection algorithm and the proposed method through various comparative experiments using actually developed equipment.
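The binarize-then-compare pipeline can be sketched as follows. This is a minimal stand-in, not the paper's algorithm: local-mean thresholding approximates "partial adaptive binarization," and the block comparison simply flags blocks where the two binary images disagree; window and block sizes are arbitrary.

```python
def adaptive_binarize(img, win=3, offset=0):
    """Threshold each pixel against the mean of its local window
    (a minimal stand-in for partial adaptive binarization)."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = 1 if img[y][x] > sum(vals) / len(vals) + offset else 0
    return out

def block_mismatch(ref_bin, test_bin, size):
    """Compare two binary images block by block; return coordinates
    of blocks with any pixel disagreement (candidate defects)."""
    bad = []
    for y in range(0, len(ref_bin), size):
        for x in range(0, len(ref_bin[0]), size):
            diff = sum(ref_bin[j][i] != test_bin[j][i]
                       for j in range(y, min(y + size, len(ref_bin)))
                       for i in range(x, min(x + size, len(ref_bin[0]))))
            if diff:
                bad.append((y, x))
    return bad

# A good part's binary image vs. a test image with one defect pixel.
ref_bin = [[0] * 4 for _ in range(4)]
test_bin = [row[:] for row in ref_bin]
test_bin[3][3] = 1
defects = block_mismatch(ref_bin, test_bin, size=2)
```

Comparing at the block level rather than per pixel gives some tolerance to the small residual misalignment left after registering the test image to the reference.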

Structuring of Pulmonary Function Test Paper Using Deep Learning

  • Jo, Sang-Hyun;Kim, Dae-Hoon;Kim, Yoon;Kwon, Sung-Ok;Kim, Woo-Jin;Lee, Sang-Ah
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.12
    • /
    • pp.61-67
    • /
    • 2021
  • In this paper, we propose a method of extracting and recognizing research-relevant information from images of unstructured pulmonary function test papers using character detection and recognition techniques. We also develop a post-processing method to reduce the character recognition error rate. The proposed structuring method applies a character detection model to the test paper images to detect all characters, then passes each detected character image through a character recognition model to obtain a string. The obtained string is checked for validity using string matching, which completes the structuring. We confirm that the proposed structuring system is more efficient and stable than manual structuring by professionals, since its error rate is within about 1% and its processing speed is within 2 seconds per test paper.
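The string-matching validity check can be sketched with fuzzy matching against a known vocabulary. The field list below is hypothetical (common pulmonary function test fields), and the paper's actual matching rule is not specified in the abstract; this shows one standard-library way to do it.

```python
import difflib

# Hypothetical field vocabulary for a pulmonary function report.
VALID_FIELDS = ["FVC", "FEV1", "FEV1/FVC", "PEF"]

def validate_field(ocr_text, cutoff=0.6):
    """Map a possibly mis-recognized OCR string to the closest known
    field name, or None if nothing is similar enough."""
    hits = difflib.get_close_matches(ocr_text, VALID_FIELDS,
                                     n=1, cutoff=cutoff)
    return hits[0] if hits else None
```

For example, the common OCR confusion of "1" with "l" in "FEVl" still resolves to "FEV1", while an unrecognizable string is rejected rather than guessed, which is how such a check keeps the structured output's error rate low.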