• Title/Summary/Keyword: Malicious domains

Case Study of Building a Malicious Domain Detection Model Considering Human Habitual Characteristics: Focusing on LSTM-based Deep Learning Model (인간의 습관적 특성을 고려한 악성 도메인 탐지 모델 구축 사례: LSTM 기반 Deep Learning 모델 중심)

  • Jung Ju Won
    • Convergence Security Journal
    • /
    • v.23 no.5
    • /
    • pp.65-72
    • /
    • 2023
  • This paper proposes a method for detecting malicious domains that accounts for human habitual characteristics by building a Deep Learning model based on LSTM (Long Short-Term Memory). DGA (Domain Generation Algorithm) malicious domains exploit habitual human errors, resulting in severe security threats. The objective is to respond swiftly and accurately to changes in malicious domains and their typosquatting-based evasion techniques, thereby minimizing security threats. The LSTM-based Deep Learning model automatically analyzes and classifies generated domains as malicious or benign based on malware-specific features. Evaluated using the ROC curve and AUC, the model demonstrated a detection accuracy of 99.21%. Beyond detecting malicious domains in real time, the model holds potential applications across various cyber security domains. This paper proposes and explores a novel approach aimed at safeguarding users and fostering a secure cyber environment against cyber attacks.
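
The paper does not publish its network architecture, so the following is only a minimal sketch of a character-level LSTM domain classifier in PyTorch; the vocabulary, layer sizes, and example domains are assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-._"
CHAR_TO_IDX = {c: i + 1 for i, c in enumerate(CHARS)}  # 0 is reserved for padding
MAX_LEN = 64

def encode(domain: str) -> torch.Tensor:
    """Map a domain name to a fixed-length tensor of character indices."""
    idxs = [CHAR_TO_IDX.get(c, 0) for c in domain.lower()[:MAX_LEN]]
    idxs += [0] * (MAX_LEN - len(idxs))
    return torch.tensor(idxs, dtype=torch.long)

class DomainLSTM(nn.Module):
    """Character-level LSTM that emits one malicious-vs-benign logit."""
    def __init__(self, vocab_size=len(CHARS) + 1, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        emb = self.embed(x)                  # (batch, seq, embed_dim)
        _, (h_n, _) = self.lstm(emb)         # h_n: (1, batch, hidden_dim)
        return self.fc(h_n[-1]).squeeze(-1)  # (batch,) logits

model = DomainLSTM()  # untrained; train with nn.BCEWithLogitsLoss on labeled domains
batch = torch.stack([encode("google.com"), encode("xjw9qk3zt7.biz")])
probs = torch.sigmoid(model(batch))
```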

Detection Of Unknown Malicious Scripts using Code Insertion Technique (코드 삽입 기법을 이용한 알려지지 않은 악성 스크립트 탐지)

  • 이성욱;방효찬;홍만표
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.6
    • /
    • pp.663-673
    • /
    • 2002
  • Server-side anti-virus software is useful for protecting a domain because it can detect malicious code at the domain's gateway. In the prevailing local-network setting, not all clients can be fully controlled by domain administrators, so server-side inspection, for example at an e-mail server, is an efficient technique for detecting mobile malicious code. However, current server-side anti-virus systems perform only signature-based detection of known malicious code, simple filtering, and file-name modification. One of the main reasons they lack detection features for unknown malicious code is that activity-monitoring techniques are unavailable on server machines. In this paper, we propose a detection technique that executes at the server yet can monitor activities at clients that have no anti-virus features, and we describe its implementation.
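
As a rough illustration of the code-insertion idea, a gateway could prepend a monitoring stub to a script before delivering it, so that the script reports its own activity when it runs at the client. Everything below is hypothetical: the `audit_client` module, the reporting URL, and the use of Python stand in for the mail-borne script languages of the era.

```python
# Hypothetical gateway-side instrumentation of an incoming script.
MONITOR_STUB = """\
# --- injected by gateway: report file/registry/network activity ---
import audit_client  # hypothetical client-side reporting module
audit_client.start(report_to="https://gateway.example/report")
"""

def instrument_script(original: str) -> str:
    """Return the script with the monitoring stub inserted at the top."""
    return MONITOR_STUB + original

if __name__ == "__main__":
    incoming = 'print("attachment payload runs here")'
    print(instrument_script(incoming))
```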

A Proactive Inference Method of Suspicious Domains (선제 대응을 위한 의심 도메인 추론 방안)

  • Kang, Byeongho;Yang, Jisu;So, Jaehyun;Kim, Czang Yeob
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.26 no.2
    • /
    • pp.405-413
    • /
    • 2016
  • In this paper, we propose a proactive inference method for finding suspicious domains. Our method detects potentially malicious domains from seed domain information extracted from TLD zone files and WHOIS records. The inference process follows three steps: searching for candidate domains, machine learning, and generating a suspicious domain pool. In the first step, we search the TLD zone files and build a candidate domain set that shares name server information with the seed domain. The next step clusters the candidate domains by the similarity of their WHOIS information. The final step finds the seed domain's cluster and designates that cluster as the suspicious domain set. In our experiments, we used the .COM and .NET TLD zone files and tested 10 seed domains selected by our analysts. The results show that the proposed method finds 55 suspicious domains, of which 52 are true positives; the F1 score is 0.91 and the precision is 0.95. We hope this proposal will contribute to further research on proactive malicious domain blacklisting.
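
A simplified sketch of the pipeline's second and third steps, with the clustering stage reduced to a similarity threshold against the seed; the WHOIS fields, records, and threshold are invented for illustration.

```python
from difflib import SequenceMatcher

def whois_similarity(a: dict, b: dict) -> float:
    """Average string similarity over a few WHOIS fields (illustrative)."""
    fields = ("registrant", "email", "address")
    return sum(SequenceMatcher(None, a[f], b[f]).ratio() for f in fields) / len(fields)

def suspicious_pool(seed: dict, candidates: list, threshold: float = 0.8) -> list:
    """Collect candidates whose WHOIS record resembles the seed's."""
    return [c for c in candidates if whois_similarity(seed, c) >= threshold]

seed = {"domain": "seed-bad.com", "registrant": "John Doe",
        "email": "jd@mail.test", "address": "1 Elm St"}
candidates = [
    {"domain": "bad-twin.net", "registrant": "John Doe",
     "email": "jd@mail.test", "address": "1 Elm St."},
    {"domain": "unrelated.com", "registrant": "Acme Corp",
     "email": "ops@acme.test", "address": "99 Oak Ave"},
]
print([c["domain"] for c in suspicious_pool(seed, candidates)])  # ['bad-twin.net']
```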

LoGos: Internet-Explorer-Based Malicious Webpage Detection

  • Kim, Sungjin;Kim, Sungkyu;Kim, Dohoon
    • ETRI Journal
    • /
    • v.39 no.3
    • /
    • pp.406-416
    • /
    • 2017
  • Malware propagated via the World Wide Web is one of the most dangerous tools in the realm of cyber attacks. Its methodologies are effective, relatively easy to use, and constantly developing in unexpected ways. As a result, rapidly detecting malware-propagation websites among a myriad of webpages is a difficult task. In this paper, we present LoGos, an automated high-interaction dynamic analyzer optimized for a browser-based Windows virtual machine environment. LoGos utilizes Internet Explorer injection and API hooks, and scrutinizes malicious behaviors such as new network connections, unused open ports, registry modifications, and file creation. Based on the obtained results, LoGos determines the maliciousness level. The model is very lightweight and thus approximately 10 to 18 times faster than systems proposed in previous work, while providing detection rates equal to those of state-of-the-art tools. LoGos is a closed tool that can detect an extensive array of malicious webpages; we demonstrate its efficiency and effectiveness by analyzing almost 0.36 M domains and 3.2 M webpages on a daily basis.
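
A toy scoring pass over the behavior categories the abstract lists; the weights and threshold below are illustrative inventions, not LoGos's actual scoring rules.

```python
# Each hooked-behavior event contributes a weight to the maliciousness level.
BEHAVIOR_WEIGHTS = {
    "new_network_connection": 2,
    "unused_open_port": 3,
    "registry_modification": 2,
    "file_creation": 1,
}

def maliciousness_level(events: list) -> int:
    """Sum the weights of all observed behavior events."""
    return sum(BEHAVIOR_WEIGHTS.get(e, 0) for e in events)

observed = ["file_creation", "registry_modification", "new_network_connection"]
score = maliciousness_level(observed)
print("malicious" if score >= 4 else "benign", score)  # malicious 5
```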

Detecting Cyber Threats Domains Based on DNS Traffic (DNS 트래픽 기반의 사이버 위협 도메인 탐지)

  • Lim, Sun-Hee;Kim, Jong-Hyun;Lee, Byung-Gil
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37B no.11
    • /
    • pp.1082-1089
    • /
    • 2012
  • Recent malicious attempts in cyberspace aim both to create national-level threats, as Stuxnet did, and to obtain financial benefits through large pools of compromised botnets. Evolved botnets use the Domain Name System (DNS) to communicate with their C&C servers and zombies. DNS is one of the core and most important components of the Internet, and DNS traffic is continually increasing with the popularity of wireless Internet services. At the same time, domain names are popular for malicious use. This paper studies DNS-based detection of cyber-threat domains through data classification based on supervised learning. The developed detection system, based on DNS traffic analysis, provides collection, analysis, and normal/abnormal classification of huge amounts of DNS data.
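
A minimal supervised pipeline of the kind the abstract describes, using scikit-learn; the feature set (name length, character entropy, query count, average TTL) and the tiny training set are illustrative assumptions, since the paper's exact feature vector is not given here.

```python
import math
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def entropy(name: str) -> float:
    """Shannon entropy of the characters in a domain name."""
    counts = Counter(name)
    total = len(name)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def features(record: dict) -> list:
    name = record["domain"]
    return [len(name), entropy(name), record["query_count"], record["avg_ttl"]]

train = [  # (per-domain DNS record, label: 0 = normal, 1 = abnormal)
    ({"domain": "news.example.com", "query_count": 120, "avg_ttl": 3600}, 0),
    ({"domain": "qx7zj2kd9w.info", "query_count": 3, "avg_ttl": 60}, 1),
]
X = [features(r) for r, _ in train]
y = [label for _, label in train]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([features({"domain": "a9f3kz.biz", "query_count": 2, "avg_ttl": 30})]))
```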

LCT: A Lightweight Cross-domain Trust Model for the Mobile Distributed Environment

  • Liu, Zhiquan;Ma, Jianfeng;Jiang, Zhongyuan;Miao, Yinbin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.2
    • /
    • pp.914-934
    • /
    • 2016
  • In a mobile distributed environment, an entity may move across domains frequently. How to utilize trust information from previous domains to quickly establish trust relationships in the current domain remains a challenging issue. Classic trust models do not support cross-domain trust, and existing cross-domain trust models are not fully distributed. This paper improves the well-known Certified Reputation (CR) model and proposes a Lightweight Cross-domain Trust (LCT) model for the mobile distributed environment that works in a fully distributed way. Trust certifications, in which the trust ratings cover various trust aspects with different interest-preference weights, are collected and provided by the trustees. Furthermore, three factors are comprehensively considered to mitigate collusion attacks and make the trust certifications more accurate. Finally, a cross-domain scenario is deployed and implemented, and comprehensive experiments and analysis are conducted. The results demonstrate that the LCT model clearly outperforms the Bayesian Network (BN) model and the CR model in this scenario, significantly improving the successful interaction rates of honest entities without increasing the risk of interacting with malicious entities.
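
A sketch of preference-weighted trust aggregation in the spirit of the trust certifications described above; the aspect names, ratings, and weights are invented, and the real LCT model adds factors (e.g., for collusion resistance) that are omitted here.

```python
def aggregate_trust(certifications: list, preferences: dict) -> float:
    """Mean over certifications of the preference-weighted aspect ratings."""
    total_weight = sum(preferences.values())
    per_cert = [
        sum(rating * preferences[aspect] for aspect, rating in cert.items()) / total_weight
        for cert in certifications
    ]
    return sum(per_cert) / len(per_cert)

certs = [  # trust certifications provided by the trustee
    {"reliability": 0.9, "timeliness": 0.7, "cost": 0.8},
    {"reliability": 0.8, "timeliness": 0.9, "cost": 0.6},
]
prefs = {"reliability": 0.5, "timeliness": 0.3, "cost": 0.2}  # truster's preferences
print(round(aggregate_trust(certs, prefs), 3))  # 0.805
```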

Study on Machine Learning Techniques for Malware Classification and Detection

  • Moon, Jaewoong;Kim, Subin;Song, Jaeseung;Kim, Kyungshin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.12
    • /
    • pp.4308-4325
    • /
    • 2021
  • The importance and necessity of artificial intelligence, particularly machine learning, has recently been emphasized. Artificial intelligence, in applications such as intelligent surveillance cameras and other security systems, is used to solve problems or provide conveniences that humans traditionally had to handle manually, one at a time. Information security is one of the domains where artificial intelligence is especially needed, because the frequency and volume of dangerous code exceed what humans can process. This study therefore examines the definitions of artificial intelligence and machine learning, their execution methods, processes, and learning algorithms, and cases of their utilization in various domains, particularly in the field of information security. Based on this, the study proposes a method for applying machine learning to the classification and detection of malware, which has increased rapidly in recent years. The proposed methodology converts software programs containing malicious code into images and creates training data suitable for machine learning through data preparation and dataset augmentation. A model trained on the images created in this manner is expected to be effective in classifying and detecting malware.
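
The binary-to-image conversion step is a widely used technique in image-based malware classification: the bytes of an executable are rendered as grayscale pixels. A minimal sketch follows; the fixed row width and the NumPy/PIL implementation are our own choices, not details from the paper.

```python
import numpy as np
from PIL import Image

def binary_to_image(path: str, width: int = 256) -> Image.Image:
    """Render a file's bytes as a grayscale image, one byte per pixel."""
    data = np.fromfile(path, dtype=np.uint8)
    rows = len(data) // width
    data = data[: rows * width].reshape(rows, width)  # drop the ragged tail
    return Image.fromarray(data, mode="L")

# binary_to_image("sample.exe").save("sample.png")  # then feed to an image classifier
```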

Detection Models and Response Techniques of Fake Advertising Phishing Websites (가짜 광고성 피싱 사이트 탐지 모델 및 대응 기술)

  • Eunbeen Lee;Jeongeun Cho;Wonhyung Park
    • Convergence Security Journal
    • /
    • v.23 no.3
    • /
    • pp.29-36
    • /
    • 2023
  • With the recent surge of fake advertising phishing sites appearing in search engines, the damage caused by degraded search quality and personal information leakage is increasing. The problem is worsening especially quickly as tools such as ChatGPT raise the possibility of automating the creation of advertising phishing sites. In this paper, the source code of fake advertising phishing sites is statically analyzed to derive structural commonalities, and a detection crawler that filters sites step by step based on foreign domains and redirection is developed, confirming that fake advertising posts are ultimately detected. In addition, we demonstrate the need for new guidelines by verifying that the redirection pages of fake advertising sites fall into three types and return different sites depending on the situation. Furthermore, we propose new detection guidelines for fake advertising phishing sites that cannot be detected by existing methods.
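
A sketch of the redirection check such a step-by-step filter might perform: follow a URL's redirect chain and flag chains that hop through a foreign (here, non-.kr) host. The heuristic and the example URL are our own assumptions, not the paper's rules.

```python
import requests
from urllib.parse import urlparse

def redirect_chain(url: str) -> list:
    """Return every URL visited while following redirects, final URL last."""
    resp = requests.get(url, timeout=5, allow_redirects=True)
    return [r.url for r in resp.history] + [resp.url]

def looks_suspicious(url: str) -> bool:
    """Flag URLs that redirect through a host outside the .kr TLD."""
    hosts = [urlparse(u).hostname or "" for u in redirect_chain(url)]
    return len(hosts) > 1 and any(not h.endswith(".kr") for h in hosts[1:])

# print(looks_suspicious("http://example.com/post?id=123"))
```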

Variable Analysis on University Students' Ethical Utilization of the Internet shown in Internet Ethics Qualification(IEQ) (인터넷 윤리 자격 시험에 나타난 대학생들의 인터넷의 윤리적 활용 변인 분석)

  • Yoon, Mi-Sun;Kim, Bo-Ra;Moon, Young-Bin;Kim, Myuhng-Joo;Park, Jung-Ho
    • The Journal of Korean Association of Computer Education
    • /
    • v.16 no.3
    • /
    • pp.71-78
    • /
    • 2013
  • Internet ethics has often been treated simply as moral understanding, knowledge of etiquette, or a kind of common sense. Recently, however, the rapid growth of internet dysfunction, such as inadvertent disclosure of personal information, copyright infringement, and malicious code with hacking, has unavoidably broadened the territory of internet ethics. In this light, educational content on internet ethics must include not only laws and systems but also specialized knowledge on preventing and responding to internet dysfunction. In this paper, we analyze the variables affecting educational achievement in the diverse domains of internet ethics by investigating the Internet Ethics Qualification (IEQ) examination, and we then suggest application methods for strengthening internet ethics.

Adaptive boosting in ensembles for outlier detection: Base learner selection and fusion via local domain competence

  • Bii, Joash Kiprotich;Rimiru, Richard;Mwangi, Ronald Waweru
    • ETRI Journal
    • /
    • v.42 no.6
    • /
    • pp.886-898
    • /
    • 2020
  • Unusual data patterns, or outliers, can be generated by human error, incorrect measurements, or malicious activity. Detecting outliers is a difficult task that requires complex ensembles. An ideal outlier detection ensemble should consider the strengths of individual base detectors while carefully combining their outputs to create a strong overall ensemble with unbiased accuracy and minimal variance. Selecting and combining the outputs of dissimilar base learners is a challenging task. This paper proposes a model that utilizes heterogeneous base learners. In the first phase, it adaptively boosts the outcomes of preceding learners by assigning weights and identifying high-performing learners based on their local domains; in the second phase, it carefully fuses their outcomes to improve overall accuracy. Experimental results from 10 benchmark datasets are used to train and test the proposed model, which is evaluated with accuracy metrics on its ability to separate outliers from inliers. The analyzed data are presented as crosstabs and percentages, followed by a descriptive synthesis and interpretation.
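
A compact sketch of the two-phase idea using off-the-shelf scikit-learn detectors: each point's base scores are weighted by a local-competence proxy, then fused. The detectors, the competence measure (agreement with the neighborhood's mean score), and all parameters are our own simplifications of the paper's method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor, NearestNeighbors

X = np.random.RandomState(0).normal(size=(200, 2))
X[:5] += 6  # plant a few outliers

# Heterogeneous base learners emit per-point outlier scores (higher = more outlying).
iso = IsolationForest(random_state=0).fit(X)
scores = np.column_stack([
    -iso.score_samples(X),
    -LocalOutlierFactor(n_neighbors=20).fit(X).negative_outlier_factor_,
])

# Phase 1: local competence = closeness to the mean score of the point's neighborhood.
_, idx = NearestNeighbors(n_neighbors=10).fit(X).kneighbors(X)
local_mean = scores[idx].mean(axis=1)                   # (n_points, n_detectors)
competence = 1.0 / (1e-9 + np.abs(scores - local_mean))
weights = competence / competence.sum(axis=1, keepdims=True)

# Phase 2: fuse the base scores with the local weights.
fused = (weights * scores).sum(axis=1)
print("top-5 outlier indices:", np.argsort(fused)[-5:])
```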