• Title/Summary/Keyword: Kernel-based Approaches

Eager Data Transfer Mechanism for Reducing Communication Latency in User-Level Network Protocols

  • Won, Chul-Ho;Lee, Ben;Park, Kyoung;Kim, Myung-Joon
    • Journal of Information Processing Systems / v.4 no.4 / pp.133-144 / 2008
  • Clusters have become a popular alternative for building high-performance parallel computing systems. Today's high-performance system area network (SAN) protocols such as VIA and IBA significantly reduce user-to-user communication latency by implementing protocol stacks outside of the operating system kernel. However, emerging parallel applications require a significant improvement in communication latency. Since the time required for transferring data between host memory and the network interface (NI) makes up a large portion of the overall communication latency, reducing the data transfer time is crucial for achieving low-latency communication. In this paper, an Eager Data Transfer (EDT) mechanism is proposed to reduce the time for data transfers between the host and the network interface. The EDT employs cache coherence interface hardware to directly transfer data between the host and the NI. An EDT-based network interface was modeled and simulated on the Linux-based complete system simulation environment Linux/SimOS. Our simulation results show that the EDT approach significantly reduces the data transfer time compared to DMA-based approaches. The EDT-based NI attains a 17% to 38% reduction in user-to-user message time compared to cache-coherent DMA-based NIs for a range of message sizes (64 bytes to 4 Kbytes) in a SAN environment.

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review / v.16 no.3 / pp.161-177 / 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Also, the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As a tool for optimizing the kernel parameters and the feature subset selection, we suggest the genetic algorithm (GA). GA is known as an efficient and effective search method that attempts to simulate the phenomenon of biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results. In particular, the mutation operator prevents GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been widely applied to search for optimal parameters or feature subsets of AI techniques including MSVM. For these reasons, we also adopt GA as an optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, along with their credit ratings. Using various statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as the candidate independent variables. The dependent variable, i.e., credit rating, was labeled with four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). Eighty percent of the data for each class was used for training, and the remaining 20 percent was used for validation. To overcome the small sample size, we applied five-fold cross validation to our dataset. In order to examine the competitiveness of the proposed model, we also experimented with several comparative models including MDA, MLOGIT, CBR, ANN, and MSVM. In the case of MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial software package that provides GA. The other comparative models were tested using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competing models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. However, the values of the finally selected kernel parameters were found to be almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
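
A minimal sketch of the GAMSVM idea described above: a genetic algorithm searches jointly over RBF kernel parameters (C, gamma) and a binary feature mask, scoring each chromosome by the cross-validated accuracy of a one-vs-one SVM. The dataset, population size, operators, and parameter ranges are illustrative assumptions, not the paper's settings (which used LIBSVM and Evolver 5.5).

```python
# Minimal sketch: GA-optimized multiclass SVM (kernel parameters + feature subset).
# Dataset, population settings and operators are illustrative, not the paper's.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_wine(return_X_y=True)              # stand-in for the credit-rating data
n_feat = X.shape[1]

def decode(chrom):
    """Chromosome = [log2 C, log2 gamma, one gene per feature]."""
    C, gamma = 2.0 ** chrom[0], 2.0 ** chrom[1]
    mask = chrom[2:] > 0.5
    return C, gamma, (mask if mask.any() else np.ones(n_feat, bool))

def fitness(chrom):
    C, gamma, mask = decode(chrom)
    clf = make_pipeline(StandardScaler(),
                        SVC(C=C, gamma=gamma, decision_function_shape="ovo"))
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

pop = np.column_stack([rng.uniform(-5, 15, 20),     # log2 C
                       rng.uniform(-15, 3, 20),     # log2 gamma
                       rng.random((20, n_feat))])   # feature-mask genes

for gen in range(30):                               # selection, crossover, mutation
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]         # keep the fitter half
    cuts = rng.integers(1, pop.shape[1], 10)        # one-point crossover
    kids = np.array([np.where(np.arange(pop.shape[1]) < k, a, b)
                     for a, b, k in zip(parents, parents[::-1], cuts)])
    kids += rng.normal(0, 0.1, kids.shape) * (rng.random(kids.shape) < 0.1)
    pop = np.vstack([parents, kids])

best = max(pop, key=fitness)
print("best cross-validated accuracy: %.3f" % fitness(best))
```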

A Max-Flow-Based Similarity Measure for Spectral Clustering

  • Cao, Jiangzhong;Chen, Pei;Zheng, Yun;Dai, Qingyun
    • ETRI Journal / v.35 no.2 / pp.311-320 / 2013
  • In most spectral clustering approaches, the Gaussian kernel-based similarity measure is used to construct the affinity matrix. However, such a similarity measure does not work well on a dataset with a nonlinear and elongated structure. In this paper, we present a new similarity measure to deal with the nonlinearity issue. The maximum flow between data points is computed as the new similarity, which satisfies the requirements for a similarity in the clustering method. Additionally, the new similarity captures both the global and local relations between the data. We apply it to spectral clustering and compare the proposed similarity measure with other state-of-the-art methods on both synthetic and real-world data. The experimental results show the superiority of the new similarity: 1) the max-flow-based similarity measure can significantly improve the performance of spectral clustering; 2) it is robust and insensitive to the parameters.
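
As a rough illustration of the idea (not the authors' exact formulation), one can build a k-nearest-neighbour capacity graph from Gaussian affinities, take the maximum flow between every pair of points as their similarity, and hand the resulting matrix to spectral clustering. The neighbourhood size, kernel width, and toy dataset below are arbitrary choices.

```python
# Rough sketch: max-flow between points on a k-NN capacity graph used as the
# similarity for spectral clustering.  k and the kernel width are arbitrary.
import networkx as nx
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons
from sklearn.neighbors import kneighbors_graph

X, _ = make_moons(n_samples=120, noise=0.05, random_state=0)   # elongated clusters
knn = kneighbors_graph(X, n_neighbors=8, mode="distance")
sigma = knn.data.mean()

G = nx.Graph()
for i, j in zip(*knn.nonzero()):               # edge capacity = Gaussian affinity
    G.add_edge(int(i), int(j),
               capacity=float(np.exp(-knn[i, j] ** 2 / (2 * sigma ** 2))))

n = X.shape[0]
S = np.zeros((n, n))
for i in range(n):                             # pairwise max-flow as similarity
    for j in range(i + 1, n):
        flow, _ = nx.maximum_flow(G, i, j)
        S[i, j] = S[j, i] = flow

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(S)
print(np.bincount(labels))
```

Because the flow between two points is limited by the bottleneck capacities along the connecting paths, points linked through a dense elongated structure keep a high similarity, which is what the abstract exploits for nonlinear data.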

A Prediction-Based Dynamic Thermal Management Technique for Multi-Core Systems (멀티코어시스템에서의 예측 기반 동적 온도 관리 기법)

  • Kim, Won-Jin;Chung, Ki-Seok
    • IEMEK Journal of Embedded Systems and Applications / v.4 no.2 / pp.55-62 / 2009
  • The power consumption of a high-end microprocessor increases very rapidly. High power consumption leads to a rapid increase in chip temperature as well. If the temperature rises beyond a certain level, chip operation becomes either slow or unreliable. Therefore, various approaches for Dynamic Thermal Management (DTM) have been proposed. In this paper, we propose a learning-based temperature prediction scheme for a multi-core system. In this approach, by repeatedly executing an application, we learn the thermal patterns of the chip and control the temperature in advance through DTM. When the predicted temperature is expected to exceed a threshold value, we reduce the temperature by decreasing the operating frequency of the corresponding core. We implemented our temperature prediction on an Intel quad-core system with integrated digital thermal sensors. A Dynamic Frequency Scaling (DFS) technique with four frequency steps was implemented on a Linux kernel. We carried out experiments using Phoronix Test Suite benchmarks for Linux. The peak temperature was reduced by 5°C to 7°C on average, and the overall average temperature was reduced from 72°C to 65°C.
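
A highly simplified sketch of the control loop described in the abstract: predict the next temperature sample from a core's recent history and step its frequency down when the prediction crosses a threshold. The sysfs path, the 80°C threshold, the four frequency steps, and the linear extrapolation are placeholders, not the paper's learned thermal model or DFS implementation.

```python
# Minimal sketch of prediction-based DTM: predict the next temperature from a
# short history and drop one frequency step when the prediction crosses a
# threshold.  Paths, threshold, steps and the predictor are placeholders.
import collections, time

TEMP_PATH = "/sys/class/thermal/thermal_zone0/temp"     # assumed location
FREQ_STEPS = [2400000, 2000000, 1600000, 1200000]       # kHz, illustrative
THRESHOLD_C = 80.0

def read_temp_c():
    try:
        with open(TEMP_PATH) as f:
            return int(f.read()) / 1000.0               # millidegrees on Linux
    except OSError:
        return 50.0                                      # fallback when off-target

def set_freq_step(step):
    # A real implementation would write FREQ_STEPS[step] to the cpufreq
    # userspace governor; here we only report the decision.
    print("switch to frequency step", step, "->", FREQ_STEPS[step], "kHz")

history = collections.deque(maxlen=4)
step = 0
for _ in range(20):
    t = read_temp_c()
    history.append(t)
    # linear extrapolation of the recent trend, standing in for the learned model
    predicted = t + (history[-1] - history[0]) / max(len(history) - 1, 1)
    if predicted > THRESHOLD_C and step < len(FREQ_STEPS) - 1:
        step += 1
        set_freq_step(step)
    elif predicted < THRESHOLD_C - 5 and step > 0:
        step -= 1
        set_freq_step(step)
    time.sleep(0.1)
```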

Preserving and Breakup for the Detailed Representation of Liquid Sheets in Particle-Based Fluid Simulations (입자 기반 유체 시뮬레이션에서 디테일한 액체 시트를 표현하기 위한 보존과 분해 기법)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society / v.25 no.1 / pp.13-22 / 2019
  • In this paper, we propose a new method to improve the detail of the fluid surface by removing liquid sheets that are over-preserved in particle-based water simulation. A variety of anisotropic approaches have been proposed to address the surface noise problem, one of the chronic problems in particle-based fluid simulation. However, a method for stably expressing both the preservation and the breakup of liquid sheets has not been proposed. We propose a new framework that can dynamically add and remove water particles based on an anisotropic kernel and density, in order to simultaneously represent the two features of liquid sheet preservation and breakup in particle-based fluid simulations. The proposed technique represents well the characteristics of a breaking liquid sheet by removing excessively preserved sheets in the particle-based fluid simulation. As a result, the quality of the liquid sheet was improved without noise.
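
A toy, isotropic sketch of the add/remove rule the abstract hints at: estimate per-particle density with a smoothing kernel, delete particles in under-dense regions so the sheet breaks up, and seed midpoint particles between distant neighbours so the remaining sheet is preserved. The poly6 kernel, the thresholds, and the 2-D point set are illustrative assumptions and do not reproduce the paper's anisotropic formulation.

```python
# Toy sketch of density-based particle removal/addition for liquid sheets.
# Kernel choice (poly6), thresholds and the 2-D setup are illustrative only.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
h = 0.1                                        # smoothing radius
pts = rng.random((500, 2))                     # stand-in particle positions

def poly6_density(pts, h):
    tree = cKDTree(pts)
    pairs = tree.query_pairs(h, output_type="ndarray")
    norm = 4.0 / (np.pi * h ** 8)              # 2-D poly6 normalisation
    dens = np.full(len(pts), norm * h ** 6)    # self contribution W(0)
    if len(pairs):
        r2 = np.sum((pts[pairs[:, 0]] - pts[pairs[:, 1]]) ** 2, axis=1)
        w = norm * (h ** 2 - r2) ** 3
        np.add.at(dens, pairs[:, 0], w)
        np.add.at(dens, pairs[:, 1], w)
    return dens

dens = poly6_density(pts, h)
breakup = dens < np.quantile(dens, 0.05)       # sparse sheet regions break up
pts = pts[~breakup]                            # remove those particles

# preserve the sheet elsewhere: seed a midpoint particle between distant neighbours
tree = cKDTree(pts)
d, idx = tree.query(pts, k=2)
lonely = d[:, 1] > 0.6 * h
new_pts = 0.5 * (pts[lonely] + pts[idx[lonely, 1]])
pts = np.vstack([pts, new_pts])
print(len(pts), "particles after the breakup/preservation pass")
```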

Detection of Multiple Salient Objects by Categorizing Regional Features

  • Oh, Kang-Han;Kim, Soo-Hyung;Kim, Young-Chul;Lee, Yu-Ra
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.1 / pp.272-287 / 2016
  • Recently, various effective contrast-based salient object detection models that focus on a single target have been proposed. However, there is a lack of research on the detection of multiple objects, which is a more challenging task than the single-target case. In the multiple-target problem, we are confronted by new difficulties caused by the distinct differences between the properties of the objects. The reliance of existing models on the global maximum of the data distribution becomes a drawback for the detection of multiple objects. In this paper, by analyzing the limitations of existing methods, we devise three main processes to detect multiple salient objects. In the first stage, regional features are extracted from over-segmented regions. In the second stage, the regional features are categorized into homogeneous clusters using the mean-shift algorithm with kernel functions of various sizes. In the final stage, we compute saliency scores for the categorized regions using only spatial features, without the contrast features, and all scores are then integrated into the final salient regions. In the experimental results, the scheme achieved superior detection accuracy on the SED2 and MSRA-ASD benchmarks, with both higher precision and better recall than state-of-the-art approaches. In particular, given multiple objects with different properties, our model significantly outperforms all existing models.
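
A condensed sketch of the second stage: cluster per-region feature vectors (mean colour plus centroid) with mean shift, trying several kernel bandwidths as the abstract suggests. The grid "segmentation" stands in for a real over-segmentation, and the bandwidth scales are arbitrary.

```python
# Condensed sketch of stage 2: mean-shift clustering of regional features
# (mean colour + centroid).  The grid segmentation and bandwidth scales stand
# in for the paper's over-segmentation and multi-size kernels.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                    # stand-in input image

# crude over-segmentation: 8x8 grid cells instead of superpixels
feats = []
for gy in range(8):
    for gx in range(8):
        patch = img[gy * 8:(gy + 1) * 8, gx * 8:(gx + 1) * 8]
        colour = patch.reshape(-1, 3).mean(axis=0)
        centre = np.array([(gy + 0.5) / 8, (gx + 0.5) / 8])
        feats.append(np.concatenate([colour, centre]))
feats = np.array(feats)

# try kernels of several sizes, as the paper varies the kernel bandwidth
base = estimate_bandwidth(feats, quantile=0.2)
for scale in (0.5, 1.0, 2.0):
    labels = MeanShift(bandwidth=scale * base).fit_predict(feats)
    print("bandwidth x%.1f -> %d regional clusters" % (scale, labels.max() + 1))
```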

Derivation of Flood Frequency Curve with Uncertainty of Rainfall and Rainfall-Runoff Model (강우 및 강우-유출 모형의 불확실성을 고려한 홍수빈도곡선 유도)

  • Kwon, Hyun-Han;Kim, Jang-Gyeong;Park, Sae-Hoon
    • Journal of Korea Water Resources Association / v.46 no.1 / pp.59-71 / 2013
  • The lack of sufficient flood data across Korea has made it difficult to obtain reliable estimates of the design flood, whereas relatively abundant rainfall data are available. In this regard, a rainfall-simulation-based technique for deriving the flood frequency curve has been proposed in several studies. The main issue in deriving the flood frequency curve is to develop a rainfall simulation model that can effectively reproduce extreme rainfall; a rainfall-runoff model that can convey the uncertainties associated with the model parameters also needs to be developed. This study proposes a systematic approach to fully consider rainfall-runoff-related uncertainties by coupling a piecewise Kernel-Pareto based multisite daily rainfall generation model with a Bayesian HEC-1 model. The proposed model was applied to generate a runoff ensemble for the Daechung Dam watershed, and the flood frequency curve was successfully derived. A rigorous comparison with existing approaches confirmed that the proposed model is very promising for estimating design floods.
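
A single-site toy version of the piecewise Kernel-Pareto idea: wet-day amounts below a high threshold are drawn from a kernel density estimate, while exceedances above it come from a fitted generalized Pareto tail. The synthetic record, the 95th-percentile threshold, and the wet-day probability are arbitrary; the paper's multisite generator and the Bayesian HEC-1 coupling are not reproduced.

```python
# Single-site toy version of a piecewise Kernel-Pareto rainfall generator:
# kernel density below a threshold, generalized Pareto tail above it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
obs = stats.gamma.rvs(0.8, scale=12, size=3000, random_state=7)  # fake wet-day rainfall (mm)

u = np.quantile(obs, 0.95)                      # tail threshold
body = obs[obs <= u]
tail = obs[obs > u] - u

kde = stats.gaussian_kde(body)                  # kernel part (below threshold)
c, loc, scale = stats.genpareto.fit(tail, floc=0.0)   # Pareto part (exceedances)

def simulate_daily(n_days, p_wet=0.4):
    wet = rng.random(n_days) < p_wet
    amounts = np.zeros(n_days)
    n_wet = wet.sum()
    from_tail = rng.random(n_wet) < (tail.size / obs.size)
    amounts[wet] = np.where(
        from_tail,
        u + stats.genpareto.rvs(c, loc=0.0, scale=scale, size=n_wet),
        np.abs(kde.resample(n_wet)[0]),
    )
    return amounts

rain = simulate_daily(365 * 30)
print("simulated annual maxima (mm):", rain.reshape(30, 365).max(axis=1).round(1)[:5])
```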

Noise Removal in Magnetic Resonance Images based on Non-Local Means and Guided Image Filtering (비 지역적 평균과 유도 영상 필터링에 기반한 자기 공명 영상의 잡음 제거)

  • Mahmood, Muhammad Tariq;Choi, Young Kyu
    • KIISE Transactions on Computing Practices / v.20 no.11 / pp.573-578 / 2014
  • In this letter, we propose a noise reduction method for magnetic resonance images that is based on non-local means and guided image filters. Our method consists of two phases. In the first phase, the guidance image is obtained from the noisy image by using an adaptive non-local means filter. The spread of the kernel is adaptively controlled by implementing the concept of edgeness. In the second phase, the noisy image and the guidance image are provided as input to the guided image filter in order to produce a noise-free image. The performance of the proposed method is investigated by conducting experiments on standard datasets containing magnetic resonance images. The results show that the proposed scheme is superior to the existing approaches.
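
A compact sketch of the two-phase scheme: non-local means denoising produces the guidance image, and a guided filter then smooths the noisy input using that guidance. The guided filter is written out with box filters (He et al.'s grayscale form) to avoid extra dependencies; the NLM strength, radius, and eps are arbitrary, and the adaptive "edgeness" control of the kernel spread is not reproduced.

```python
# Compact sketch of the two-phase scheme: non-local means builds the guidance
# image, then a box-filter guided filter smooths the noisy input.
import cv2
import numpy as np

def guided_filter(guide, src, r=8, eps=1e-3):
    """Classic grayscale guided filter (He et al.) via box filters."""
    mean = lambda x: cv2.boxFilter(x, -1, (2 * r + 1, 2 * r + 1))
    mI, mp = mean(guide), mean(src)
    var_I = mean(guide * guide) - mI * mI
    cov_Ip = mean(guide * src) - mI * mp
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return mean(a) * guide + mean(b)

# stand-in for a noisy MR slice
clean = cv2.GaussianBlur(np.random.rand(256, 256).astype(np.float32), (31, 31), 8)
noisy = clean + 0.05 * np.random.randn(256, 256).astype(np.float32)

# phase 1: non-local means gives the guidance image
noisy_u8 = np.clip(noisy * 255, 0, 255).astype(np.uint8)
guidance = cv2.fastNlMeansDenoising(noisy_u8, None, h=10).astype(np.float32) / 255

# phase 2: guided filtering of the noisy image using the NLM guidance
restored = guided_filter(guidance, noisy)
print("residual std before/after: %.4f / %.4f"
      % (np.std(noisy - clean), np.std(restored - clean)))
```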

VirtAV: an Agentless Runtime Antivirus System for Virtual Machines

  • Tang, Hongwei;Feng, Shengzhong;Zhao, Xiaofang;Jin, Yan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.11 / pp.5642-5670 / 2017
  • Antivirus protection is an important issue for the security of virtual machines (VMs). According to where the antivirus system resides, existing approaches can be categorized into three classes: internal, external, and hybrid. However, the internal approach is susceptible to attacks and may cause antivirus storm and rollback vulnerability problems. On the other hand, in the external approach, antivirus systems built upon virtual machine introspection (VMI) technology cannot find and prohibit viruses promptly. Although the hybrid approach performs virus scanning outside the virtual machine, it is still vulnerable to attacks since it depends completely on the agent and hooks to deliver events from the guest operating system. To solve the aforementioned problems, we propose VirtAV, an agentless runtime antivirus system based on in-memory signature scanning, which scans each piece of binary code about to execute in guest VMs on the VMM side in order to detect and prevent viruses. As an external approach, VirtAV does not rely on any hooks or agents in the guest OS and exposes no attack surface to the outside world, so it guarantees its own security to the greatest extent. In addition, it solves the antivirus storm problem and the rollback vulnerability problem in virtualization environments. We implemented a prototype based on the Qemu/KVM hypervisor and the ClamAV antivirus engine. Experimental results demonstrate that VirtAV is able to detect both user-level and kernel-level virus programs inside Windows and Linux guests, whether they are packed or not. From the performance perspective, the overhead of VirtAV on guest performance is acceptable. In particular, VirtAV has little impact on the performance of common desktop applications such as video playing, web browsing, and the Microsoft Office suite.
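
VirtAV's core operation, matching byte signatures against code before it executes, can be illustrated independently of the hypervisor side. Below is a minimal sketch of in-memory multi-pattern signature scanning; the signatures and the scanned buffer are invented, and a real engine such as ClamAV uses far richer signature formats and matching algorithms.

```python
# Minimal sketch of in-memory signature scanning, independent of the
# hypervisor side of VirtAV.  Signatures and the scanned page are made up.
SIGNATURES = {
    b"\x4d\x5a\x90\x00\xde\xad\xbe\xef": "Demo.Win.PackedStub",
    b"\x7fELF\x02\x01\x01\x00\x13\x37": "Demo.Linux.Dropper",
}

def scan_page(page: bytes, page_addr: int):
    """Return (signature name, guest address) for every hit in a code page."""
    hits = []
    for sig, name in SIGNATURES.items():
        off = page.find(sig)
        while off != -1:
            hits.append((name, page_addr + off))
            off = page.find(sig, off + 1)
    return hits

# pretend this 4 KiB buffer is a guest code page intercepted before execution
page = bytearray(4096)
page[128:138] = b"\x7fELF\x02\x01\x01\x00\x13\x37"
for name, addr in scan_page(bytes(page), page_addr=0x7f0000001000):
    print("blocked execution: %s at guest address 0x%x" % (name, addr))
```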

Study to detect and block leakage of personal information : Android-platform environment (개인정보 유출 탐지 및 차단에 관한 연구 : 안드로이드 플랫폼 환경)

  • Choi, Youngseok;Kim, Sunghoon;Lee, Dong Hoon
    • Journal of the Korea Institute of Information Security & Cryptology / v.23 no.4 / pp.757-766 / 2013
  • Malicious code that targets Android is growing dramatically as the number of Android users increases. Most of this malicious code is intended to leak personal information. Recently in Korea, a piece of malicious code called 'chest' appeared and caused monetary damage by leaking personal information and attempting small payments. A variety of techniques to detect personal information leaks have been proposed on the Android platform. However, the existing techniques are hard to apply to users' smartphones due to the characteristics of the Android security model. This paper proposes a technique that detects and blocks file accesses and Internet connections that are not allowed to access personal information, by using system call hooking in the kernel and a white-list based access policy. In addition, this paper demonstrates the feasibility of a real application on a smartphone through an implementation.
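
The enforcement step of the proposed scheme, deciding whether an intercepted file open or outbound connection may involve personal data, reduces to a white-list lookup. Below is a toy sketch of that policy check; the kernel-side system call hook itself is not shown, and the package names, paths, and hosts are invented examples.

```python
# Toy sketch of the white-list policy applied to events reported by the
# kernel-side system-call hook (the hook itself is not shown).  Package
# names, paths and hosts are invented examples, not the paper's data.
SENSITIVE_PATHS = ("/data/data/com.android.providers.contacts",
                   "/data/data/com.android.providers.telephony")
WHITELIST = {                      # who may read personal data / where it may go
    "com.example.dialer": {"paths": SENSITIVE_PATHS,
                           "hosts": ("sync.example.com",)},
}
tainted = set()                    # apps that have read personal data

def on_file_open(pkg, path):
    if not path.startswith(SENSITIVE_PATHS):
        return True                # non-sensitive files are never blocked
    allowed = path.startswith(tuple(WHITELIST.get(pkg, {}).get("paths", ())))
    if allowed:
        tainted.add(pkg)           # remember that it now holds personal data
    return allowed

def on_connect(pkg, host):
    if pkg not in tainted:
        return True                # no personal data involved
    return host in WHITELIST.get(pkg, {}).get("hosts", ())

print(on_file_open("com.example.dialer", SENSITIVE_PATHS[0] + "/databases/contacts2.db"))  # True
print(on_connect("com.example.dialer", "sync.example.com"))                                # True
print(on_file_open("com.example.game", SENSITIVE_PATHS[1] + "/databases/mmssms.db"))       # False -> blocked
```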