• Title/Summary/Keyword: machine learning


A Scheme for Identifying Malicious Applications Based on API Characteristics (API 특성 정보기반 악성 애플리케이션 식별 기법)

  • Cho, Taejoo;Kim, Hyunki;Lee, Junghwan;Jung, Moongyu;Yi, Jeong Hyun
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.1 / pp.187-196 / 2016
  • Android applications are inherently vulnerable to repackaging attacks, in which malicious code is inserted into an application that is then re-signed by the attacker; leaks of private information through such repackaged applications now occur frequently. In principle, every Android application is composed of user-defined methods and APIs. APIs provide access to platform resources and serve as the practical functional features, while user-defined methods implement application behavior by combining APIs. In this paper we propose a scheme that analyzes the sensitive APIs most frequently used in malicious applications, in terms of how those applications operate and which APIs they call. Based on the characteristics of these target APIs, we accumulate knowledge about them using a machine learning scheme based on the Naive Bayes algorithm. From the learned results, we can provide a fine-grained numeric score for the degree of vulnerability of a mobile application. We expect the proposed scheme to help mobile application developers assess the security level of their applications in advance.
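
The paper's pipeline is not published as code; the sketch below only illustrates the underlying idea with a hypothetical feature matrix marking which sensitive APIs each app calls. A Bernoulli Naive Bayes classifier (scikit-learn) is trained on labeled apps, and the malicious-class probability serves as a numeric risk score.

```python
# Minimal sketch (not the authors' implementation): score an app by the
# probability that its sensitive-API usage pattern looks malicious.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Hypothetical training data: one row per app, one column per sensitive API
# (e.g. sendTextMessage, getDeviceId, getLastKnownLocation, openConnection);
# 1 = the app calls that API, 0 = it does not.
X_train = np.array([
    [1, 1, 1, 1],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [0, 1, 0, 0],
])
y_train = np.array([1, 1, 0, 0])  # 1 = malicious, 0 = benign

clf = BernoulliNB()
clf.fit(X_train, y_train)

# The malicious-class probability acts as a fine-grained risk score in [0, 1].
new_app = np.array([[1, 1, 0, 1]])
print(f"risk score: {clf.predict_proba(new_app)[0, 1]:.3f}")
```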

Machine Learning Based Automated Source, Sink Categorization for Hybrid Approach of Privacy Leak Detection (머신러닝 기반의 자동화된 소스 싱크 분류 및 하이브리드 분석을 통한 개인정보 유출 탐지 방법)

  • Shim, Hyunseok;Jung, Souhwan
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.4 / pp.657-667 / 2020
  • The Android framework allows an app to access a broad range of personal information once a single permission is granted, and it does not check whether the data leaving the device is actually personal information. To address this, we propose a tool combining static and dynamic analysis. The tool analyzes the sources and sinks used by the target app and reports to the user which personal information the app handles. For static analysis, we extract sources and sinks from the control flow graph and flag a potential privacy leak whenever a source-to-sink flow exists. We also use the sensitive-permission information provided by Google to identify the sensitive APIs corresponding to each source and sink. Our dynamic analysis tool then runs the app and hooks each sensitive API; from the hooked data we determine whether the user's personal information actually leaks and report the result to the user. In this process, an automated source/sink classification model is applied to keep the source/sink lists up to date, and it categorizes the latest Android release (9.0) with 88.5% accuracy. We evaluated the tool on 2,802 APKs and found 850 APKs that leak personal information.
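
As an illustration of the static half of this idea (not the authors' tool), the sketch below builds a toy data-flow graph with hypothetical node names and reports a potential leak whenever a node tagged as a source can reach a node tagged as a sink.

```python
# Minimal sketch: flag a potential privacy leak when any source node can
# reach a sink node in a (toy) data-flow graph. Not the authors' tool.
from collections import deque

# Hypothetical flow graph: an edge A -> B means "data may flow from A to B".
flow_graph = {
    "TelephonyManager.getDeviceId": ["buildPayload"],
    "buildPayload": ["HttpURLConnection.getOutputStream"],
    "LocationManager.getLastKnownLocation": ["logLocally"],
}
sources = {"TelephonyManager.getDeviceId", "LocationManager.getLastKnownLocation"}
sinks = {"HttpURLConnection.getOutputStream", "SmsManager.sendTextMessage"}

def leaks(graph, sources, sinks):
    """Return every (source, sink) pair connected by a path in the graph."""
    found = []
    for src in sources:
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node in sinks:
                found.append((src, node))
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return found

print(leaks(flow_graph, sources, sinks))
```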

A Technique to Recommend Appropriate Developers for Reported Bugs Based on Term Similarity and Bug Resolution History (개발자 별 버그 해결 유형을 고려한 자동적 개발자 추천 접근법)

  • Park, Seong Hun;Kim, Jung Il;Lee, Eun Joo
    • KIPS Transactions on Software and Data Engineering / v.3 no.12 / pp.511-522 / 2014
  • During software development, a variety of bugs are reported. Bug tracking systems such as Bugzilla, MantisBT, Trac, and JIRA are used to manage reported bug information in many open source projects. Bug reports in these systems must be triaged to determine, among other things, which developer is responsible for resolving each report. As software grows larger and bug reports are frequently duplicated, bug triage becomes increasingly complex and difficult. In this paper, we present an approach for assigning bug reports to appropriate developers, which is the main part of the triage task. First, the words contained in previously resolved bug reports are grouped by the developer who resolved them. Second, the words in a newly submitted bug report are selected. From these two steps, word vectors are generated. Third, the TF-IDF (term frequency - inverse document frequency) of each selected word is computed and used as the weight of the corresponding vector item. Finally, developers are recommended based on the similarity between each developer's word vector and the vector of the new bug report. We conducted experiments on the Eclipse JDT and CDT projects to show the applicability of the proposed approach and compared it with an existing machine-learning-based study. The experimental results show that the proposed approach outperforms the existing method.
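
A minimal sketch of this style of recommendation, assuming hypothetical per-developer documents built from the reports each developer resolved: TF-IDF vectors are compared by cosine similarity and developers are ranked by score. It illustrates the general technique rather than the paper's exact word selection and weighting.

```python
# Minimal sketch: recommend developers for a new bug report by TF-IDF
# cosine similarity against the text of reports each developer resolved.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical history: one concatenated document per developer.
developer_docs = {
    "alice": "null pointer exception in editor refactoring quick fix",
    "bob": "build path classpath resolution error jdt core compiler",
    "carol": "debugger breakpoint step over ui freeze",
}

new_report = "compiler throws classpath error when resolving build path"

names = list(developer_docs)
vectorizer = TfidfVectorizer()
dev_matrix = vectorizer.fit_transform(developer_docs[n] for n in names)
report_vec = vectorizer.transform([new_report])

scores = cosine_similarity(report_vec, dev_matrix)[0]
ranking = sorted(zip(names, scores), key=lambda p: p[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```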

Automatic TV Program Recommendation using LDA based Latent Topic Inference (LDA 기반 은닉 토픽 추론을 이용한 TV 프로그램 자동 추천)

  • Kim, Eun-Hui;Pyo, Shin-Jee;Kim, Mun-Churl
    • Journal of Broadcast Engineering / v.17 no.2 / pp.270-283 / 2012
  • With the advent of multi-channel TV, IPTV, and smart TV services, an excessive number of TV programs has become available to users, making it very difficult for viewers to find their preferred programs. Automatic TV program recommendation is therefore an important feature of future intelligent TV services, as it improves access to preferred content. In this paper, we present a recommendation model based on statistical machine learning that uses a collaborative filtering concept and takes into account both public and personal preferences for TV programs. Users' preferences are modeled as latent topic variables using LDA (Latent Dirichlet Allocation), which has recently been applied in various domains. To apply LDA to TV recommendation, the topics a viewer is interested in are treated as the latent topics of LDA, and an asymmetric Dirichlet prior is used, which reveals the diversity of viewers' topic interests based on an analysis of real TV usage history data. The experimental results show that, on the usage history of similar-taste user groups, the proposed LDA-based recommendation method achieves an average of 66.5% precision for the top 5 ranked programs in weekly recommendation and an average of 77.9% precision for the top 5 ranked programs in bimonthly recommendation.
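
A minimal sketch of the basic idea, with hypothetical viewing counts and scikit-learn's LDA: infer each viewer's latent topic mixture and rank unwatched programs by how strongly they load on those topics. It uses symmetric priors and a simplified scoring rule, so it is only an illustration of the approach, not the paper's model.

```python
# Minimal sketch: infer latent viewing topics with LDA over a user-by-program
# watch-count matrix, then rank unwatched programs for one user by how well
# they match that user's topic mixture. Hypothetical data, not the paper's model.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

programs = ["news9", "drama_a", "drama_b", "soccer", "baseball", "cooking"]
# Rows = users, columns = programs, values = viewing counts.
watch_counts = np.array([
    [5, 0, 1, 4, 3, 0],
    [0, 6, 5, 0, 0, 2],
    [4, 1, 0, 5, 4, 0],
    [1, 4, 6, 0, 1, 3],
])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
user_topics = lda.fit_transform(watch_counts)                     # users x topics
topic_programs = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

user = 0
scores = user_topics[user] @ topic_programs                       # expected affinity per program
unwatched = watch_counts[user] == 0
ranked = sorted((i for i, u in enumerate(unwatched) if u),
                key=lambda i: scores[i], reverse=True)
print([programs[i] for i in ranked])
```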

Research-platform Design for the Korean Smart Greenhouse Based on Cloud Computing (클라우드 기반 한국형 스마트 온실 연구 플랫폼 설계 방안)

  • Baek, Jeong-Hyun;Heo, Jeong-Wook;Kim, Hyun-Hwan;Hong, Youngsin;Lee, Jae-Su
    • Journal of Bio-Environment Control / v.27 no.1 / pp.27-33 / 2018
  • This study reviewed domestic and international smart farm service models based on the convergence of agriculture and information & communication technology, and derived the factors needed to improve the Korean smart greenhouse. Studies on modeling the crop growth environment in domestic smart farms have been limited, and building the necessary research infrastructure takes a long time, so a cloud-based research platform is needed as an alternative. Such a platform can provide an infrastructure for comprehensive data storage and analysis, as it manages cloud-based integrated data, crop growth models, growth environment models, actuator control models, and farm management, as well as knowledge-based expert systems and a farm dashboard. The cloud-based research platform can therefore be used to quantify the relationships among factors such as the crop growth environment, productivity, and actuator control. In addition, it will enable researchers to quantitatively analyze crop growth environment models, plants, and growth by utilizing big data, machine learning, and artificial intelligence.

Prediction of Landslides and Determination of Its Variable Importance Using AutoML (AutoML을 이용한 산사태 예측 및 변수 중요도 산정)

  • Nam, KoungHoon;Kim, Man-Il;Kwon, Oil;Wang, Fawu;Jeong, Gyo-Cheol
    • The Journal of Engineering Geology / v.30 no.3 / pp.315-325 / 2020
  • This study developed a model that predicts landslides and determines the variable importance of landslide susceptibility factors, based on the probabilistic prediction of landslides occurring on slopes along roads. Field survey data from 30,615 slopes collected in Korea between 2007 and 2020 were analyzed to build the landslide prediction model. Of 131 variables in total, 17 topographic factors and 114 geological factors (including 89 bedrock types) were used to predict landslides. Automated machine learning (AutoML) was used to classify landslides and non-landslides. The verification results revealed that the best model, an extremely randomized trees (XRT) model with excellent predictive performance, achieved a prediction rate of 83.977% on the test data. The variable importance analysis identified 10 topographic factors and 9 geological factors as the dominant landslide susceptibility factors, and the importance of each factor is presented as a percentage. The model evaluates the likelihood of landslide occurrence probabilistically and quantitatively, deriving the ranking of variable importance from field survey data alone. It is expected to provide decision-makers with a reliable basis for slope safety assessment through field surveys.
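
The paper's AutoML pipeline is not reproduced here; as a stand-in sketch of the selected model family, the code below trains scikit-learn's extremely randomized trees on hypothetical slope variables and reports each variable's importance as a percentage.

```python
# Minimal sketch: extremely randomized trees (the model family the AutoML run
# selected) on hypothetical slope data, with variable importance reported as
# percentages. Not the paper's AutoML pipeline or real survey data.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["slope_angle", "slope_height", "aspect", "soil_depth", "bedrock_granite"]
X = rng.random((500, len(feature_names)))
# Hypothetical label: steeper, taller slopes fail more often.
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.random(500) > 0.55).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = ExtraTreesClassifier(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)

print(f"test accuracy: {model.score(X_te, y_te):.3f}")
for name, imp in sorted(zip(feature_names, model.feature_importances_),
                        key=lambda p: p[1], reverse=True):
    print(f"{name}: {100 * imp:.1f}%")
```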

Analyzing dependency of Korean subordinate clauses using a composit kernel (복합 커널을 사용한 한국어 종속절의 의존관계 분석)

  • Kim, Sang-Soo;Park, Seong-Bae;Park, Se-Young;Lee, Sang-Jo
    • Korean Journal of Cognitive Science / v.19 no.1 / pp.1-15 / 2008
  • Analyzing the dependency relations among clauses is one of the most critical parts of parsing Korean sentences because it generates severe ambiguities. To obtain good results on this task, various machine learning methods, including SVMs, have been applied. Kernel methods in particular are widely used for dependency analysis and are reported to show high performance. This paper proposes a clause representation and a composite kernel for dependency analysis of Korean clauses. The proposed representation uses a composite kernel to compute the similarity among clauses; the composite kernel consists of a parse tree kernel and a linear kernel, where the parse tree kernel handles structural information and the linear kernel handles lexical information. The proposed representation is defined in three forms: a representation of the layers within a clause, a representation of the relation between clauses, and a representation of the inner clause. The experiment proceeds in two steps, first over the relation representation between clauses and then over the inner-clause representation. The experimental results show that the proposed representation achieves 83.31% accuracy.
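
The parse tree kernel itself is not reproduced here; the sketch below only illustrates the composite-kernel idea, combining a stand-in "structural" kernel with a linear kernel over lexical features as a weighted sum and passing the combination to an SVM as a custom kernel. All data and the split between feature blocks are hypothetical.

```python
# Minimal sketch of a composite kernel: a weighted sum of two kernels, one over
# "structural" features (a stand-in for the parse tree kernel) and one linear
# kernel over lexical features. Hypothetical data; not the paper's kernel.
import numpy as np
from sklearn.svm import SVC

def composite_kernel(A, B, split=3, alpha=0.5):
    """K(a, b) = alpha * K_struct(a, b) + (1 - alpha) * K_lex(a, b).
    Here K_struct is a dot product over the structural feature block; in the
    paper it would be a parse tree kernel over clause parse trees."""
    K_struct = A[:, :split] @ B[:, :split].T
    K_lex = A[:, split:] @ B[:, split:].T
    return alpha * K_struct + (1 - alpha) * K_lex

rng = np.random.default_rng(0)
X = rng.random((60, 8))          # first 3 columns: structural, rest: lexical
y = (X[:, 0] + X[:, 4] > 1.0).astype(int)

clf = SVC(kernel=composite_kernel)
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.3f}")
```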


Technology Analysis on Automatic Detection and Defense of SW Vulnerabilities (SW 보안 취약점 자동 탐색 및 대응 기술 분석)

  • Oh, Sang-Hwan;Kim, Tae-Eun;Kim, HwanKuk
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.11 / pp.94-103 / 2017
  • As automated hacking tools and techniques have improved, the number of newly discovered vulnerabilities has increased. About 80,000 CVEs were registered from 2010 to 2015, and even more vulnerabilities are expected to be reported. In most cases, patching a vulnerability depends on the developer's capability, and most patching is based on manual analysis, which takes nine months on average: finding the vulnerability, analyzing it at the source-code level, and writing new code for the patch. Zero-day vulnerabilities are critical because, as noted, the gap between first discovery and remediation is so long. To solve this problem, techniques for automatically detecting and analyzing software (SW) vulnerabilities have recently been proposed. The Cyber Grand Challenge (CGC) held in 2016 was the first competition to build automatic defensive systems capable of reasoning about flaws in binaries and formulating patches without direct analysis by experts. Darktrace and Cylance are similar projects for managing software automatically with artificial intelligence and machine learning. Although many foreign commercial institutions and academic groups run projects on automatic binary analysis, the domestic level of technology remains much lower. This paper surveys developments in the automatic detection of SW vulnerabilities and defenses against them; we analyze and compare related work and tools, and suggest optimal techniques for automatic analysis.

Motor Imagery Brain Signal Analysis for EEG-based Mouse Control (뇌전도 기반 마우스 제어를 위한 동작 상상 뇌 신호 분석)

  • Lee, Kyeong-Yeon;Lee, Tae-Hoon;Lee, Sang-Yoon
    • Korean Journal of Cognitive Science / v.21 no.2 / pp.309-338 / 2010
  • In this paper, we study the brain-computer interface (BCI). BCIs help severely disabled people control external devices by analyzing the brain signals evoked by motor imagery. Findings in neurophysiology have revealed that the power of the β (14-26 Hz) and μ (8-12 Hz) rhythms decreases or increases with the synchrony of the underlying neuronal populations in the sensorimotor cortex when people imagine moving their body parts; these phenomena are called event-related desynchronization and synchronization (ERD/ERS), respectively. We implemented a BCI-based mouse interface system that enables subjects to move a computer mouse cursor in four directions (up, down, left, and right) by analyzing brain signal patterns online. Tongue, foot, left-hand, and right-hand motor imagery were used as the mental tasks. We used non-invasive EEG, which records the brain's spontaneous electrical activity over short periods via electrodes placed on the scalp. Because of the nature of EEG signals, namely their low amplitude and vulnerability to artifacts and noise, it is hard to analyze and classify them directly, so we applied statistical machine-learning techniques. We achieved high classification performance for the four motor imageries by employing Common Spatial Patterns (CSP) and Linear Discriminant Analysis (LDA), which transform the input EEG signals into a new coordinate system that maximizes the variance differences among the motor imagery classes for easier classification. Inspection of the resulting topographies also confirmed that ERD/ERS appeared in different brain areas for different motor imageries, consistent with anatomical and neurophysiological knowledge.
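
As a rough illustration of the offline decoding chain (not the authors' four-class online system), the sketch below runs MNE's CSP followed by scikit-learn's LDA on synthetic epochs in which one class has higher variance on a few channels; real use would start from band-pass filtered μ/β-band EEG epochs.

```python
# Minimal sketch of the CSP + LDA decoding chain on synthetic "EEG" epochs.
# Only an illustration, not the paper's full four-class online system.
import numpy as np
from mne.decoding import CSP                       # requires the mne package
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 80, 16, 250
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = rng.integers(0, 2, n_epochs)                   # two imagery classes here
# Inject a class-dependent variance difference on a few channels so CSP has
# a discriminative spatial pattern to find.
X[y == 1, :4, :] *= 2.0

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),        # spatial filters -> log-variance features
    ("lda", LinearDiscriminantAnalysis()),
])
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```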


Computational estimation of the earthquake response for fibre reinforced concrete rectangular columns

  • Liu, Chanjuan;Wu, Xinling;Wakil, Karzan;Jermsittiparsert, Kittisak;Ho, Lanh Si;Alabduljabbar, Hisham;Alaskar, Abdulaziz;Alrshoudi, Fahed;Alyousef, Rayed;Mohamed, Abdeliazim Mustafa
    • Steel and Composite Structures / v.34 no.5 / pp.743-767 / 2020
  • Due to its impressive flexural performance, enhanced compressive strength, and more constrained crack propagation, fibre-reinforced concrete (FRC) has been widely employed in construction, and most experimental studies have focused on the seismic behavior of FRC columns. Based on valid experimental data from previous studies, the current study evaluates the seismic response and compressive strength of FRC rectangular columns using hybrid metaheuristic techniques. Because of the non-linearity of the seismic data, an adaptive neuro-fuzzy inference system (ANFIS) is combined with metaheuristic algorithms: particle swarm optimization (PSO) and a genetic algorithm (GA). 317 datasets from FRC column tests were assembled into one database to determine the most influential factors on the ultimate strength of FRC rectangular columns under simulated seismic loading, and an extreme learning machine (ELM) is used concurrently as a reference prediction method. The variable selection procedure chooses the most dominant parameters affecting the ultimate strength of the columns under simulated seismic loading. The results show that ANFIS-PSO predicts the seismic lateral load with R² = 0.857 and 0.902 for the test and training phases, respectively, making it the preferred lateral-load estimator. For compressive strength prediction, ELM achieves R² = 0.657 and 0.862 for the test and training phases, respectively. The seismic lateral force is thus more predictable than the compressive strength of FRC rectangular columns, with the best results obtained for lateral-force prediction. Compressive strength prediction shows significant deviation above 40 MPa, which may be related to the considerable non-linearity of the data and possible empirical shortcomings. Finally, the ANFIS-GA and ANFIS-PSO techniques are a promising, reliable approach for evaluating the seismic response of FRC in place of costly and time-consuming experimental tests.
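
ANFIS-PSO is not reproduced here; as a small illustration of the reference method, the sketch below implements an extreme learning machine regressor from scratch (random fixed hidden layer, least-squares output weights) on hypothetical column data.

```python
# Minimal sketch of an extreme learning machine (ELM) regressor: a random,
# fixed hidden layer followed by a least-squares fit of the output weights.
# Hypothetical column data; not the paper's ANFIS-PSO/ELM setup.
import numpy as np

rng = np.random.default_rng(0)

class ELMRegressor:
    def __init__(self, n_hidden=50):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        # Random input weights and biases stay fixed after initialization.
        self.W = rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # Output weights from a single least-squares solve.
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Hypothetical features: fibre content, axial load ratio, section width, height.
X = rng.random((300, 4))
y = 120 * X[:, 0] + 80 * X[:, 1] ** 2 + 10 * rng.standard_normal(300)  # e.g. lateral load (kN)

model = ELMRegressor(n_hidden=60).fit(X[:240], y[:240])
pred = model.predict(X[240:])
r2 = 1 - np.sum((y[240:] - pred) ** 2) / np.sum((y[240:] - y[240:].mean()) ** 2)
print(f"test R^2: {r2:.3f}")
```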