• Title/Summary/Keyword: kernel technique


Fixed Time Synchronous IPC in Zephyr Kernel (Zephyr 커널에서 고정 시간 동기식 IPC 구현)

  • Jung, Jooyoung; Kim, Eunyoung; Shin, Dongha
    • IEMEK Journal of Embedded Systems and Applications / v.12 no.4 / pp.205-212 / 2017
  • The Linux Foundation recently announced Zephyr, a real-time kernel for IoT applications. The Zephyr kernel provides synchronous and asynchronous IPC for data communication between threads. Synchronous IPC is useful for programming multiple threads that must execute synchronously, since the sender thread is blocked until the data is delivered to the receiver thread and both threads can know that the transfer has completed. In general, 'IPC execution time' is defined as the time between the sender thread sending data and the receiver thread receiving it. In a real-time kernel such as Zephyr, it is especially important that the execution time of synchronous IPC be fixed. However, we found that the execution time of synchronous IPC in the Zephyr kernel increases in proportion to the number of threads executing in the kernel. In this paper, we propose a method to implement fixed-time synchronous IPC in the Zephyr kernel using a Direct Thread Switching (DTS) technique. With this technique, the receiver thread executes directly after the sender thread sends data, during the remaining time slice of the sender thread, so a fixed IPC execution time is achieved even when the number of threads executing in the kernel increases. We implemented synchronous IPC using DTS in the Zephyr kernel and found that its IPC execution time is always 389 cycles, which is relatively small and fixed.
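
The paper's central observation, that synchronous IPC latency grows with the number of ready threads unless the kernel switches directly to the receiver, can be illustrated with a toy model. The sketch below is a simplified simulation under assumed parameters (time-slice length, a single ready queue), not Zephyr code or its API; only the 389-cycle figure is taken from the abstract.

```python
# Toy model: synchronous IPC latency under round-robin scheduling
# versus Direct Thread Switching (DTS). Cycle counts are illustrative
# assumptions, not measurements from the paper.

TIME_SLICE = 1000  # assumed scheduler time slice, in cycles


def latency_round_robin(num_other_threads: int) -> int:
    """Sender blocks after sending; the receiver only runs after every
    other ready thread has consumed its time slice."""
    return num_other_threads * TIME_SLICE


def latency_direct_switch(switch_cost: int = 389) -> int:
    """With DTS the kernel hands the sender's remaining slice straight
    to the receiver, so the cost is a fixed switching overhead
    regardless of how many other threads are ready."""
    return switch_cost


if __name__ == "__main__":
    for n in (1, 4, 16, 64):
        print(f"{n:3d} other threads: "
              f"round-robin ~ {latency_round_robin(n)} cycles, "
              f"DTS ~ {latency_direct_switch()} cycles")
```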

Survey on Nucleotide Encoding Techniques and SVM Kernel Design for Human Splice Site Prediction

  • Bari, A.T.M. Golam; Reaz, Mst. Rokeya; Choi, Ho-Jin; Jeong, Byeong-Soo
    • Interdisciplinary Bio Central / v.4 no.4 / pp.14.1-14.6 / 2012
  • Splice site prediction in DNA sequences is a basic search problem for finding exon/intron and intron/exon boundaries. Removing the introns and then joining the exons together forms the mRNA sequence, which is the input of the translation process; this is a necessary step in the central dogma of molecular biology. The main task of splice site prediction is to find candidate sequences ending in GT and AG and then to distinguish the true splice sites from the false ones among those candidates. In this paper, we survey research on splice site prediction based on the support vector machine (SVM). The basic differences among these works lie in the nucleotide encoding technique and the SVM kernel selection. Some methods encode the DNA sequence in a sparse way, whereas others encode it in a probabilistic manner. The encoded sequences serve as the input of the SVM, whose task is to classify them using its learned model. The accuracy of classification depends largely on the proper selection of a kernel for sequence data, as well as on the selection of kernel parameters. We examine each encoding technique and classify them according to their similarity, and then discuss kernel and parameter selection. This survey provides a basic understanding of encoding approaches and proper SVM kernel selection for splice site prediction.
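
As a minimal illustration of the sparse (one-hot) encoding plus SVM pipeline that the survey compares, here is a hedged sketch using scikit-learn; the window length, toy sequences, RBF kernel, and parameter values are assumptions for illustration, not any specific surveyed method.

```python
# Sketch: sparse one-hot encoding of fixed-length windows around a
# candidate GT donor site, classified with an RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC

NUCLEOTIDES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Sparse encoding: each base becomes a 4-dimensional indicator."""
    vec = np.zeros((len(seq), 4))
    for i, base in enumerate(seq.upper()):
        if base in NUCLEOTIDES:
            vec[i, NUCLEOTIDES.index(base)] = 1.0
    return vec.ravel()

# Hypothetical toy windows centred on a GT dinucleotide (1 = true donor site).
windows = ["CAGGTAAGT", "TTAGTGCCA", "AAGGTGAGT", "GCAGTTTTC"]
labels  = [1, 0, 1, 0]

X = np.vstack([one_hot(w) for w in windows])
y = np.array(labels)

# The kernel and its parameters (C, gamma) are the tunable choices the survey discusses.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(clf.predict(X))
```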

Sparse Kernel Independent Component Analysis for Blind Source Separation

  • Khan, Asif; Kim, In-Taek
    • Journal of the Optical Society of Korea / v.12 no.3 / pp.121-125 / 2008
  • We address the problem of Blind Source Separation (BSS) of superimposed signals in situations where one signal has constant or slowly varying intensities at some consecutive locations while the other signal has highly varying intensities at the corresponding locations. Independent Component Analysis (ICA) is a major technique for blind source separation, and existing ICA algorithms fail to estimate the original intensities in this situation. Our technique combines the advantages of existing sparse methods and Kernel ICA by applying a wavelet packet based sparse decomposition of the signals prior to Kernel ICA. Simulations and experimental results illustrate the effectiveness and accuracy of the proposed approach. The approach is general in that it can be tailored and applied to a wide range of BSS problems concerning one-dimensional signals and images (two-dimensional signals).
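
A rough sketch of the pipeline structure (a sparsifying wavelet packet transform applied before ICA in the coefficient domain) is shown below. It substitutes scikit-learn's FastICA for Kernel ICA and uses PyWavelets for the decomposition, so it mirrors only the structure of the proposed method under assumed signals and wavelet settings, not the authors' algorithm.

```python
# Sketch: sparsify two mixtures with a wavelet packet transform, unmix the
# coefficients with ICA, then invert the transform to recover the sources.
import numpy as np
import pywt
from sklearn.decomposition import FastICA

LEVEL = 3
WAVELET = "db4"

def wp_coeffs(signal):
    """Wavelet packet decomposition; return leaf coefficients, paths, sizes."""
    wp = pywt.WaveletPacket(data=signal, wavelet=WAVELET,
                            mode="symmetric", maxlevel=LEVEL)
    nodes = wp.get_level(LEVEL, order="natural")
    paths = [n.path for n in nodes]
    sizes = [len(n.data) for n in nodes]
    return np.concatenate([n.data for n in nodes]), paths, sizes

def wp_reconstruct(coeffs, paths, sizes):
    """Rebuild a time-domain signal from separated leaf coefficients."""
    wp = pywt.WaveletPacket(data=None, wavelet=WAVELET,
                            mode="symmetric", maxlevel=LEVEL)
    start = 0
    for path, size in zip(paths, sizes):
        wp[path] = coeffs[start:start + size]
        start += size
    return wp.reconstruct(update=False)

# Two hypothetical mixtures of two sources (unknown mixing matrix A).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
s1, s2 = np.sign(np.sin(13 * t)), rng.normal(size=t.size)
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ np.vstack([s1, s2])

# Sparse decomposition of each mixture, then unmixing in the coefficient
# domain (FastICA stands in for Kernel ICA here).
C = []
for x in X:
    c, paths, sizes = wp_coeffs(x)
    C.append(c)
C = np.vstack(C)

ica = FastICA(n_components=2, random_state=0)
S_coeffs = ica.fit_transform(C.T).T          # separated coefficient streams

# Invert the transform to obtain the separated time-domain sources.
estimates = [wp_reconstruct(s, paths, sizes) for s in S_coeffs]
print(len(estimates), estimates[0].shape)
```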

Design of the Kernel Hardening in USB Driver for Linux DLM Function (리눅스 운영체제에서 DLM을 이용한 USB 디바이스 커널 하드닝 설계)

  • Jang, Seung-Ju
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.12 / pp.2579-2585 / 2009
  • It is important that a computer system keeps operating without breaking down. To make a computer system operate normally, various commercial fault-tolerant techniques are used, but most commercial fault-tolerant products are costly. This paper proposes a kernel hardening technique that reduces kernel panics by using a DLM module in the Linux USB driver. The design was tested on the Linux operating system, and the experiment showed that the USB module with DLM works well.

The shifted Chebyshev series-based plug-in for bandwidth selection in kernel density estimation

  • Soratja Klaichim; Juthaphorn Sinsomboonthong; Thidaporn Supapakorn
    • Communications for Statistical Applications and Methods / v.31 no.3 / pp.337-347 / 2024
  • Kernel density estimation is a prevalent technique for nonparametric density estimation, enabling estimation directly from the data itself. This estimation involves two crucial elements: the selection of the kernel function and the determination of an appropriate bandwidth. Bandwidth selection plays an important role in kernel density estimation and has been actively developed over the past decade. A range of methods is available for selecting the bandwidth, including plug-in bandwidths. In this article, a plug-in bandwidth is proposed that leverages a shifted Chebyshev series-based approximation to determine the optimal bandwidth. Through a simulation study, the performance of the suggested bandwidth is analyzed, revealing favorable performance across a wide range of distributions and sample sizes compared to alternative bandwidths. The proposed bandwidth is also applied to kernel density estimation on a real dataset, where it again yields a favorable selection. Hence, this article serves as motivation to explore additional plug-in bandwidths that rely on function approximations using alternative series expansions.
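
For context, plug-in selectors of this kind target the AMISE-optimal bandwidth; a standard statement of that target (not the paper's shifted-Chebyshev estimator itself) is:

```latex
% AMISE-optimal bandwidth that plug-in rules estimate; the paper's
% contribution is approximating the unknown roughness R(f'') via a
% shifted Chebyshev series rather than a reference distribution.
\[
  h_{\mathrm{AMISE}}
  = \left[ \frac{R(K)}{\mu_2(K)^2 \, R(f'') \, n} \right]^{1/5},
  \qquad
  R(g) = \int g(x)^2 \, dx,
  \quad
  \mu_2(K) = \int x^2 K(x)\, dx,
\]
% K is the kernel, f'' the second derivative of the unknown density, and n the
% sample size; a plug-in bandwidth replaces R(f'') with an estimate.
```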

A Dynamic Kernel Update System with a Function Granularity for Linux (리눅스 환경에서의 함수 단위 동적 커널 업데이트 시스템의 설계와 구현)

  • Park, Hyun-Chan; Kim, Se-Won; Yoo, Chuck
    • Journal of KIISE: Computer Systems and Theory / v.35 no.5 / pp.223-230 / 2008
  • Dynamic kernel update can change kernel functionality and fix bugs at runtime, and it is important because it improves the availability, reliability, and flexibility of the kernel. An instruction-granularity update technique has traditionally been used for dynamic update. However, it is difficult to apply to a commodity operating system kernel because the update code must be developed and maintained in assembly language. To overcome this difficulty, we design a function-granularity dynamic update system that uses a high-level language such as C. The proposed system makes the development and execution of updates convenient by providing the same development environment for update code as for the kernel itself. We implement this system for Linux and demonstrate an update of the EXT3 file system, which executed successfully.

A study on Memory Analysis Bypass Technique and Kernel Tampering Detection (메모리 분석 우회 기법과 커널 변조 탐지 연구)

  • Lee, Haneol; Kim, Huy Kang
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.4 / pp.661-674 / 2021
  • Malware such as a rootkit that modifies the kernel can adversely affect the analyst's judgment, and if a mechanism to evade memory analysis is added, the analysis becomes difficult or impossible. We therefore aim to respond preemptively to malware, such as rootkits, that bypasses detection through advanced kernel tampering. To this end, the main structures used in the Windows kernel are analyzed from the attacker's point of view, and a method capable of tampering with kernel objects is applied to modify a memory dump file. Experiments confirm that the tampering cannot be detected by memory analysis tools that are widely used worldwide. Then, from the analyst's point of view, software based on the concept of tamper resistance is implemented that can detect such tampering, and it is shown to detect regions that existing memory analysis tools miss. This study is meaningful in that it preemptively attempted tampering with the kernel area and derived insights that enable precise analysis. However, a limitation is that the detection rules required for precise analysis must be created manually in the software implementation.

Differences in Network-Based Kernel Density Estimation According to Pedestrian Network and Road Centerline Network

  • Lee, Byoungkil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.5 / pp.335-341 / 2018
  • The KDE (Kernel Density Estimation) technique in GIS (Geographic Information Systems) has been widely used to determine whether a phenomenon occurring in space forms clusters. Most human-generated events, such as traffic accidents and retail stores, are distributed along a road network. Even if events on neighboring front and rear roads are separated by a short Euclidean distance, their network distance may be large and the correlation between them low. The NKDE (Network-based KDE) technique has therefore been proposed and applied to urban spaces with developed road networks. KDE is being studied in the field of business GIS, but it is of limited use for the microscopic analysis of economic activity along a road. In this study, the NKDE technique is applied to the analysis of urban phenomena such as the density of shops, rather than of events such as traffic accidents that occur on the roads themselves. The results of the NKDE technique are also compared between a pedestrian network and a road centerline network. The results show that applying NKDE to microscopic trade area analysis can yield relatively accurate results, and that pedestrian network data reflecting the movement of actual pedestrians are necessary for accurate trade area analysis using NKDE.
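
A bare-bones sketch of the network-constrained density idea, with kernel weights computed over shortest-path distance rather than Euclidean distance, is given below. The toy graph, bandwidth, and triangular kernel are assumptions, and the sketch omits the equal-split correction that proper NKDE implementations apply at intersections.

```python
# Naive network-based KDE: density at a network node is the sum of
# kernel weights over shortest-path distances to event locations.
import networkx as nx

def nkde(graph, event_nodes, bandwidth):
    """Return {node: density} using a triangular kernel on network distance."""
    density = {v: 0.0 for v in graph.nodes}
    for e in event_nodes:
        # Shortest-path (network) distances from the event, cut off at the bandwidth.
        dists = nx.single_source_dijkstra_path_length(
            graph, e, cutoff=bandwidth, weight="length")
        for v, d in dists.items():
            density[v] += (1.0 - d / bandwidth) / bandwidth  # triangular kernel
    return density

# Hypothetical toy street network; edge "length" is in metres.
G = nx.Graph()
G.add_edge("A", "B", length=80)
G.add_edge("B", "C", length=60)
G.add_edge("C", "D", length=120)
G.add_edge("B", "D", length=200)

shops = ["B", "B", "C"]          # event locations snapped to network nodes
print(nkde(G, shops, bandwidth=150.0))
```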

Numerical Study of Aggregation and Breakage of Particles in Taylor Reactor (테일러 반응기 내의 입자응집과 분해에 관한 수치 연구)

  • Lee, Seung Hun; Jeon, Dong Hyup
    • Transactions of the Korean Society of Mechanical Engineers B / v.40 no.6 / pp.365-372 / 2016
  • Using a computational fluid dynamics (CFD) technique, we simulated the fluid flow in a Taylor reactor, considering the aggregation and breakage of particles. We solved the population balance equation (PBE) with the quadrature method of moments (QMOM) to determine the particle-size distribution. Six moments were used as the initial moments, the sum of a Brownian kernel and a turbulent kernel as the aggregation kernel, and a power-law kernel as the breakage kernel. We predicted the final mean particle size for various initial particle volume fractions. The results show that the mean particle size and the initial growth rate increase as the initial volume fraction of particles increases.
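
For reference, the moment form of the PBE that QMOM closes can be written as follows (stated here for a length-based distribution with generic aggregation and breakage kernels; the specific Brownian, turbulent, and power-law kernels used in the paper are not reproduced):

```latex
% k-th moment of the number density n(L,t) and its QMOM quadrature closure
\[
  m_k(t) = \int_0^\infty L^k \, n(L,t)\, dL \;\approx\; \sum_{i=1}^{N} w_i L_i^k ,
\]
% moment transport with aggregation (kernel beta) and breakage (rate a, fragment
% moments b) source terms, closed with quadrature weights w_i and abscissas L_i
\[
  \frac{dm_k}{dt}
  = \frac{1}{2}\sum_{i}\sum_{j} w_i w_j\, \beta(L_i,L_j)\,\bigl(L_i^3+L_j^3\bigr)^{k/3}
  - \sum_{i} L_i^k w_i \sum_{j} w_j\, \beta(L_i,L_j)
  + \sum_{i} a(L_i)\,\bar b_i^{(k)} w_i
  - \sum_{i} L_i^k\, a(L_i)\, w_i ,
\]
% with N = 3 quadrature nodes reproducing the six tracked moments m_0, ..., m_5;
% beta is the aggregation kernel (here a sum of Brownian and turbulent terms),
% a the breakage rate, and \bar b_i^{(k)} the k-th moment of the fragment distribution.
```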

A Bootstrap Test for Linear Relationship by Kernel Smoothing (희귀모형의 선형성에 대한 커널붓스트랩검정)

  • Baek, Jang-Sun; Kim, Min-Soo
    • Journal of the Korean Data and Information Science Society / v.9 no.2 / pp.95-103 / 1998
  • Azzalini and Bowman proposed a pseudo-likelihood ratio test for checking a linear relationship using a kernel regression estimator when the error of the regression model follows a normal distribution. We modify their method with the bootstrap technique to construct a new test and examine its power through simulation. Our method can be applied to cases where the distribution of the error is not normal.
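
A rough sketch of a residual-bootstrap version of such a linearity test is given below; the Nadaraya-Watson smoother, the RSS-based statistic, the bandwidth, and the bootstrap scheme are generic choices that illustrate the idea, not the authors' exact procedure.

```python
# Sketch: bootstrap test of H0 "m(x) is linear" against a smooth alternative.
# The statistic compares residual sums of squares from a linear fit and a
# Nadaraya-Watson kernel fit; its null distribution is approximated by
# resampling residuals from the linear fit.
import numpy as np

def nw_fit(x, y, h):
    """Nadaraya-Watson estimate of E[y|x] at the observed x, Gaussian kernel."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def test_statistic(x, y, h):
    beta = np.polyfit(x, y, deg=1)
    fitted = np.polyval(beta, x)
    rss_lin = np.sum((y - fitted) ** 2)
    rss_ker = np.sum((y - nw_fit(x, y, h)) ** 2)
    return (rss_lin - rss_ker) / rss_ker, fitted, y - fitted

def bootstrap_pvalue(x, y, h=0.3, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    t_obs, fitted, resid = test_statistic(x, y, h)
    resid = resid - resid.mean()
    count = 0
    for _ in range(n_boot):
        # Resample centred residuals around the linear fit (the null model).
        y_star = fitted + rng.choice(resid, size=resid.size, replace=True)
        t_star, _, _ = test_statistic(x, y_star, h)
        count += t_star >= t_obs
    return (count + 1) / (n_boot + 1)

# Hypothetical data: mild quadratic departure from linearity, non-normal errors.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 100)
y = 1.0 + 2.0 * x + 0.8 * x**2 + 0.3 * rng.standard_t(df=4, size=100)
print("bootstrap p-value:", bootstrap_pvalue(x, y))
```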
