• Title/Summary/Keyword: Multiple Optimization Problem


Image-Based Machine Learning Model for Malware Detection on LLVM IR (LLVM IR 대상 악성코드 탐지를 위한 이미지 기반 머신러닝 모델)

  • Kyung-bin Park;Yo-seob Yoon;Baasantogtokh Duulga;Kang-bin Yim
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.1
    • /
    • pp.31-40
    • /
    • 2024
  • Recently, static analysis-based signature and pattern detection technologies have shown limitations as IT technologies advance: they suffer from compatibility problems across multiple architectures and from problems inherent to signature and pattern matching. Malicious code uses obfuscation and packing techniques to hide its identity, and it evades existing static analysis-based signature and pattern detection through code rearrangement, register modification, and the addition of branching statements. In this paper, we propose an LLVM IR image-based automated static analysis technology for malicious code that uses machine learning to address these problems. Whether or not a binary is obfuscated or packed, it is decompiled into LLVM IR, an intermediate representation well suited to static analysis and optimization. The LLVM IR code is then converted into an image and fed to ResNet50V2, a CNN-based transfer learning model supported by Keras. As a result, we present a model for image-based detection of malicious code.
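The image-encoding step described in the abstract can be sketched as follows. This is a minimal illustration only: the `ir_to_image` helper and the fixed 8-pixel width are assumptions for demonstration, and the paper's actual preprocessing and the ResNet50V2 classifier are not shown.

```python
def ir_to_image(ir_text: str, width: int = 8):
    """Map LLVM IR bytes onto a fixed width x width grayscale matrix (0-255),
    truncating or zero-padding so every sample has the same shape."""
    pixels = list(ir_text.encode("utf-8"))[: width * width]
    pixels += [0] * (width * width - len(pixels))   # zero-pad short samples
    return [pixels[r * width:(r + 1) * width] for r in range(width)]

sample_ir = "define i32 @main() {\nentry:\n  ret i32 0\n}\n"
image = ir_to_image(sample_ir)
print(len(image), len(image[0]))   # an 8 x 8 grayscale "image"
```

A fixed-size matrix like this is what a CNN input layer expects; real systems typically use much larger widths and normalize pixel values before training.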

Determination of Weight Coefficients of Multiple Objective Reservoir Operation Problem Considering Inflow Variation (유입량의 변동성을 고려한 저수지 연계 운영 모형의 가중치 선정)

  • Kim, Min-Gyu;Kim, Jae-Hee;Kim, Sheung-Kown
    • Journal of Korea Water Resources Association
    • /
    • v.41 no.1
    • /
    • pp.1-15
    • /
    • 2008
  • The purpose of this study is to propose a procedure for finding the most efficient sets of weight coefficients for the Geum-River basin in Korea. The result obtained from a multi-objective optimization model is inherently sensitive to the weight coefficient assigned to each objective. In multi-objective reservoir operation problems, setting the coefficients is further complicated by the natural variation of inflow. It is therefore important for modelers to provide reservoir operators with appropriate sets of weight coefficients that account for inflow variation. This study presents a procedure to find such sets under inflow variation. The proposed procedure uses GA-CoMOM to generate candidate sets of weight coefficients. A DEA-window analysis and a cross-efficiency analysis are then performed to evaluate and rank the sets of weight coefficients over various inflow scenarios. The proposed procedure may thus identify the most efficient sets of weight coefficients for the Geum-River basin in Korea.
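The sensitivity to weight coefficients that motivates this procedure is easy to see with a plain weighted-sum scalarization. The two toy "policies" and their (shortage, spill) objective values below are invented for illustration; they are not from the paper.

```python
def weighted_sum(objectives, weights):
    """Scalarize multiple objective values with weight coefficients."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * f for w, f in zip(weights, objectives))

# Two candidate operating policies evaluated on (shortage, spill):
policies = {"A": (3.0, 10.0), "B": (6.0, 2.0)}

for weights in ((0.8, 0.2), (0.2, 0.8)):
    best = min(policies, key=lambda p: weighted_sum(policies[p], weights))
    print(weights, "->", best)
```

Shifting weight from shortage to spill flips the preferred policy, which is exactly why ranking candidate weight sets across inflow scenarios matters.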

Design of Truss Structures with Real-World Cost Functions Using the Clustering Technique (클러스터링 기법을 이용한 실 경비함수를 가진 트러스 구조물의 설계)

  • Choi, Byoung Han;Lee, Gyu Won
    • Journal of Korean Society of Steel Construction
    • /
    • v.18 no.2
    • /
    • pp.213-223
    • /
    • 2006
  • Conventional truss optimization approaches, while often sophisticated and computationally intensive, have mostly been applied to simple minimum-weight cost models. They do not perform well on real-world trusses, whose cost models are complex and often involve multiple objectives. This paper therefore describes optimization strategies for the optimal design of truss structures with real-world cost functions that account for the weight of the truss, the number of products in the design, the number of joints in the structure, and on-site costs. A clustering technique, which identifies members that are likely to share the same product type, is first applied to group the members and generate a suitable initial solution. A simple tabu search technique then attempts to produce the optimal solution starting from that initial solution. The proposed approach is applied to a typical problem and to a comparable benchmark problem to assess relative performance. The results show that the algorithm generates not only better-quality solutions but also more efficient ones.
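A generic tabu search over a toy member-sizing problem can illustrate the second stage. This is not the paper's algorithm: the cost function below (member weight plus a penalty per distinct product type, mimicking the "number of products" cost term) and all numbers are invented.

```python
def tabu_search(start, cost, neighbors, iters=50, tenure=5):
    """Basic tabu search: move to the best non-tabu neighbor each step,
    keeping recently visited solutions on a short tabu list."""
    current = best = start
    tabu = [start]
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)
        tabu = (tabu + [current])[-tenure:]
        if cost(current) < cost(best):
            best = current
    return best

# Toy sizing problem: cost = total member weight + penalty per distinct product type
def cost(sizes):
    return sum(sizes) + 4 * len(set(sizes))

def neighbors(sizes):
    return [sizes[:i] + (s + d,) + sizes[i + 1:]
            for i, s in enumerate(sizes) for d in (-1, 1) if 1 <= s + d <= 9]

best = tabu_search((5, 6, 7), cost, neighbors)
print(best, cost(best))
```

The tabu list lets the search escape shallow local minima that pure steepest descent would get stuck in.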

DNN-Based Dynamic Cell Selection and Transmit Power Allocation Scheme for Energy Efficiency Heterogeneous Mobile Communication Networks (이기종 이동통신 네트워크에서 에너지 효율화를 위한 DNN 기반 동적 셀 선택과 송신 전력 할당 기법)

  • Kim, Donghyeon;Lee, In-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.10
    • /
    • pp.1517-1524
    • /
    • 2022
  • In this paper, we consider a heterogeneous network (HetNet) consisting of one macro base station and multiple small base stations, and assume coordinated multi-point transmission between the base stations. In addition, we assume that the channel between a base station and a user consists of path loss and Rayleigh fading. Under these assumptions, we present the energy efficiency (EE) achievable by a user for a given base station and formulate an optimization problem of dynamic cell selection and transmit power allocation to maximize the total EE of the HetNet. We propose an unsupervised deep learning method to solve the optimization problem. The proposed deep learning-based scheme can provide high EE with low complexity compared to conventional iterative convergence methods. Through simulation, we show that the proposed dynamic cell selection scheme provides higher EE than the maximum signal-to-interference-plus-noise ratio scheme and the Lagrangian dual decomposition scheme, and that the proposed transmit power allocation scheme performs similarly to the trust-region interior point method, which can achieve the maximum EE.
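The per-user EE objective the abstract formulates is, in its usual form, achievable rate divided by consumed power. The sketch below assumes a simple distance-based path-loss exponent model with exponential (Rayleigh power) fading; all parameter values are invented for illustration and are not the paper's.

```python
import math
import random

random.seed(1)

def energy_efficiency(signal_mw, interference_mw, noise_mw, tx_power_w, circuit_power_w):
    """EE = spectral efficiency / total power consumption."""
    sinr = signal_mw / (interference_mw + noise_mw)
    rate = math.log2(1.0 + sinr)                  # bit/s/Hz
    return rate / (tx_power_w + circuit_power_w)  # bit/s/Hz per watt

def rx_power(tx_power_mw, distance_m, exponent=3.5):
    """Received power through path loss and Rayleigh fading."""
    fading = random.expovariate(1.0)              # |h|^2 for Rayleigh fading
    return tx_power_mw * fading * distance_m ** (-exponent)

sig = rx_power(1000.0, 50.0)    # serving base station at 50 m
itf = rx_power(1000.0, 200.0)   # interfering base station at 200 m
print(energy_efficiency(sig, itf, 1e-6, 1.0, 0.5))
```

Maximizing the sum of such ratios over cell selection and transmit powers is the non-convex problem the paper tackles with unsupervised deep learning.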

A Study on Scalability of Profiling Method Based on Hardware Performance Counter for Optimal Execution of Supercomputer (슈퍼컴퓨터 최적 실행 지원을 위한 하드웨어 성능 카운터 기반 프로파일링 기법의 확장성 연구)

  • Choi, Jieun;Park, Guenchul;Rho, Seungwoo;Park, Chan-Yeol
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.10
    • /
    • pp.221-230
    • /
    • 2020
  • A supercomputer, which shares limited resources among multiple users, needs a way to optimize the execution of applications. For this, it is useful for system administrators to have prior information and hints about the applications to be executed. In most high-performance computing system operations, administrators try to increase system productivity by collecting information about execution duration and resource requirements from users when jobs are submitted. They also use profiling techniques that generate the necessary information from statistics such as system usage in order to increase system utilization. In a previous study, we proposed a scheduling optimization technique based on a hardware performance counter-based profiling method that characterizes applications without any understanding of their source code. In this paper, we built a profiling testbed cluster to support optimal execution on a supercomputer and experimented with the scalability of the profiling method for analyzing application characteristics in that cluster environment. We also showed that the profiling method remains usable for actual scheduling optimization even when the application problem class is reduced or the number of profiling nodes is minimized. Even though the number of nodes used for profiling was reduced to a quarter, the execution time of the application increased by only 1.08% compared to profiling with all nodes, and scheduling optimization performance improved by up to 37% compared to sequential execution. In addition, profiling with a reduced problem size cut the cost of collecting profiling data to a quarter while yielding a performance improvement of up to 35%.
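One common way to turn hardware-counter statistics into an application characterization is nearest-profile matching. The profile vectors below (instructions per cycle, cache-miss ratio, memory-bandwidth share) and class labels are hypothetical examples, not the paper's feature set.

```python
import math

# Hypothetical per-class hardware-counter profiles:
# (instructions-per-cycle, cache-miss ratio, memory-bandwidth share)
known_profiles = {
    "cpu-bound":    (2.1, 0.02, 0.10),
    "memory-bound": (0.6, 0.35, 0.80),
    "balanced":     (1.3, 0.10, 0.40),
}

def classify(counters):
    """Label a run by the nearest known counter profile (Euclidean distance)."""
    return min(known_profiles,
               key=lambda k: math.dist(known_profiles[k], counters))

print(classify((0.7, 0.30, 0.75)))  # nearest to the memory-bound profile
```

A scheduler can then, for example, avoid co-locating two memory-bound jobs on the same node, which is the kind of optimization the profiling data supports.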

A Mathematical Model for Coordinated Multiple Reservoir Operation (댐군의 연계운영을 위한 수학적 모형)

  • Kim, Seung-Gwon
    • Journal of Korea Water Resources Association
    • /
    • v.31 no.6
    • /
    • pp.779-793
    • /
    • 1998
  • In this study, for the purpose of water supply planning, we propose a sophisticated multi-period mixed integer programming model that coordinates the behavior of multi-reservoir operation while minimizing unnecessary spill. It simulates the system with operating rules that are self-generated by the optimization engine in the algorithm. It is an optimization model in structure, but it indeed simulates the coordinated behavior of multi-reservoir operation. It minimizes water shortfalls against demand requirements, maintains flood reserve volume, minimizes unnecessary spill, maximizes hydropower generation release, and keeps water storage levels high for efficient hydroelectric turbine operation. The model is a large-scale mixed integer programming problem consisting of 3,920 integer variables and a 68,658 by 132,384 node-arc incidence matrix for 28 years of data. To handle the enormous amount of data generated by such a large mathematical model, the use of a DBMS (database management system) seems inevitable. The model has been tested on the Han River multi-reservoir system in Korea, which consists of 2 large multipurpose dams and 3 hydroelectric dams. We demonstrated that there is a good chance of saving a substantial amount of water should the model be put to use in real time with a good inflow forecasting system.
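The core bookkeeping such a model constrains is the per-period mass balance, storage[t+1] = storage[t] + inflow[t] - release[t] - spill[t]. The sketch below is a single-reservoir greedy simulation of that balance with invented numbers, not the paper's multi-reservoir MIP formulation.

```python
def simulate(storage, inflows, demand, capacity, max_release):
    """Per-period single-reservoir mass balance:
    storage[t+1] = storage[t] + inflow[t] - release[t] - spill[t]."""
    shortfalls, spills = [], []
    for q in inflows:
        release = min(max_release, demand, storage + q)  # meet demand if possible
        storage += q - release
        spill = max(0.0, storage - capacity)             # water that cannot be stored
        storage -= spill
        shortfalls.append(demand - release)
        spills.append(spill)
    return storage, shortfalls, spills

final, short, spill = simulate(50.0, [30.0, 5.0, 80.0],
                               demand=20.0, capacity=100.0, max_release=40.0)
print(final, sum(short), sum(spill))
```

An optimization model replaces this greedy release rule with decision variables over all periods and reservoirs, which is what allows spill to be anticipated and minimized.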


Digital Twin-Based Communication Optimization Method for Mission Validation of Swarm Robot (군집 로봇의 임무 검증 지원을 위한 디지털 트윈 기반 통신 최적화 기법)

  • Gwanhyeok, Kim;Hanjin, Kim;Junhyung, Kwon;Beomsu, Ha;Seok Haeng, Huh;Jee Hoon, Koo;Ho Jung, Sohn;Won-Tae, Kim
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.1
    • /
    • pp.9-16
    • /
    • 2023
  • Robots are expected to expand their scope of application to the military field and to take on important missions such as surveillance and enemy detection in future warfare. Because a swarm consists of multiple robots, it can perform tasks that would be difficult or time-consuming for a single robot more efficiently. Swarm robots require mutual recognition and collaboration, so they exchange vast amounts of data, which makes it increasingly difficult to verify their software. Hardware-in-the-loop simulation (HILS), used to increase the reliability of mission verification, enables software verification of complex robot swarms, but the amount of verification data exchanged between the HILS device and the simulator grows exponentially with the number of systems to be verified, so communication overload may occur. In this paper, we propose a digital twin-based communication optimization technique to solve the communication overload problem that occurs during mission verification of swarm robots. Under the proposed Digital Twin-based Multi-HILS Framework, the Network DT can efficiently allocate network resources to each robot according to the mission scenario through the Network Controller algorithm and can satisfy all sensor generation rates required by the individual robots participating in the swarm. In an experiment on packet loss, the proposed technique reduced the packet loss rate from 15.7% to 0.2%.
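A minimal form of per-robot network-resource allocation is to grant each robot its required sensor-data rate and scale everything down proportionally when the link is oversubscribed. The robot names and rates below are invented, and this is only a sketch of the idea, not the paper's Network Controller algorithm.

```python
def allocate_bandwidth(required_rates, capacity):
    """Give each robot its required sensor-data rate when possible;
    otherwise scale all allocations down proportionally to fit the link."""
    total = sum(required_rates.values())
    scale = min(1.0, capacity / total)
    return {robot: rate * scale for robot, rate in required_rates.items()}

rates = {"uav-1": 40.0, "uav-2": 25.0, "ugv-1": 60.0}   # required Mbit/s
print(allocate_bandwidth(rates, capacity=100.0))
```

When the allocations stay within capacity, queue buildup (and hence packet loss) at the shared link is avoided, which is the effect the experiment measures.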

Virtual Source and Flooding-Based QoS Unicast and Multicast Routing in the Next Generation Optical Internet based on IP/DWDM Technology (IP/DWDM 기반 차세대 광 인터넷 망에서 가상 소스와 플러딩에 기초한 QoS 제공 유니캐스트 및 멀티캐스트 라우팅 방법 연구)

  • Kim, Sung-Un;Park, Seon-Yeong
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.1
    • /
    • pp.33-43
    • /
    • 2011
  • Routing technologies that support QoS-based hypermedia services are a crucial network property in next generation optical Internet (NGOI) networks based on IP/dense-wavelength division multiplexing (DWDM). The huge potential capacity of a single fiber, which is in the Tb/s range, can be exploited with DWDM technology, which simultaneously transfers multiple data streams (classified and aggregated IP traffic) on multiple QoS-classified wavelengths. DWDM-based optical networks have therefore become a favored approach for next generation optical backbone networks. Finding a qualified path that meets multiple constraints is a multi-constraint optimization problem, which has been proven NP-complete and cannot be solved by a simple algorithm. Most previous work on DWDM networks has relied on heuristic QoS routing algorithms (extensions of the current Internet routing paradigm), which are very complex and incur operational and implementation overheads. This is even more pronounced when the network is unstable or large. In this paper, we propose flooding-based unicast and multicast QoS routing methodologies (YS-QUR and YS-QMR) that incur much lower message overhead yet yield a good connection establishment success rate. The simulation results demonstrate that the YS-QUR and YS-QMR algorithms are superior to previous routing algorithms.
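Multi-constrained path finding can be illustrated with a flooding-style search that prunes any partial path already violating an additive delay bound or a bottleneck bandwidth requirement. The four-node topology and link values below are invented; this is a generic sketch, not the YS-QUR/YS-QMR algorithms.

```python
from collections import deque

# Directed links: (delay in ms, available bandwidth in wavelengths)
links = {
    "A": {"B": (2, 3), "C": (5, 8)},
    "B": {"D": (2, 1)},
    "C": {"D": (3, 6)},
    "D": {},
}

def qos_paths(src, dst, max_delay, min_bw):
    """Flood from src, pruning any partial path that violates
    the delay bound (additive) or bandwidth requirement (bottleneck)."""
    found = []
    frontier = deque([(src, [src], 0, float("inf"))])
    while frontier:
        node, path, delay, bw = frontier.popleft()
        if node == dst:
            found.append(path)
            continue
        for nxt, (d, b) in links[node].items():
            if nxt not in path and delay + d <= max_delay and min(bw, b) >= min_bw:
                frontier.append((nxt, path + [nxt], delay + d, min(bw, b)))
    return found

print(qos_paths("A", "D", max_delay=9, min_bw=2))
```

Pruning infeasible partial paths early is what keeps flooding-based schemes' message overhead low relative to exploring every path.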

HW/SW Partitioning Techniques for Multi-Mode Multi-Task Embedded Applications (멀티모드 멀티태스크 임베디드 어플리케이션을 위한 HW/SW 분할 기법)

  • Kim, Young-Jun;Kim, Tae-Whan
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.337-347
    • /
    • 2007
  • An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality, and a multi-mode multi-task embedded system if it additionally supports multiple tasks executed within a mode. In this paper, we address a HW/SW partitioning problem: partitioning multi-mode multi-task embedded applications with timing constraints on tasks. The objective is to find a minimal total system cost for the allocation and mapping of processing resources to the functional modules in tasks, together with a schedule that satisfies the timing constraints. Success in solving the problem depends closely on how fully the potential parallelism among module executions is exploited. However, because the search space of that parallelism is inherently very large, and to keep schedulability analysis tractable, prior HW/SW partitioning methods have not fully exploited the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems of the partitioning problem simultaneously: (1) allocating processing resources, (2) mapping the processing resources to the modules in tasks, and (3) determining an execution schedule of the modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise refinement partitioning technique for single-mode multi-task applications and then extend it to the HW/SW partitioning problem of multi-mode multi-task applications. Experiments with a set of real-life applications show that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single- and multi-mode multi-task applications, respectively, over the conventional method.
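The structure of the problem can be shown with a tiny exhaustive HW/SW partitioner. The module timings, area costs, deadline, and the simplified scheduling model (SW modules run sequentially, HW modules run in parallel on dedicated area) are all invented for illustration; the paper's technique is a stepwise refinement, not brute force.

```python
from itertools import product

# Hypothetical modules: (sw_time, hw_time, hw_area_cost)
modules = [(6, 2, 5), (4, 1, 7), (8, 3, 6)]
DEADLINE = 12

def best_partition():
    """Try every HW/SW assignment; keep the cheapest one (by HW area)
    whose schedule length meets the deadline."""
    best = None
    for assign in product(("SW", "HW"), repeat=len(modules)):
        sw_time = sum(m[0] for m, a in zip(modules, assign) if a == "SW")
        hw_time = max([m[1] for m, a in zip(modules, assign) if a == "HW"],
                      default=0)
        area = sum(m[2] for m, a in zip(modules, assign) if a == "HW")
        if max(sw_time, hw_time) <= DEADLINE and (best is None or area < best[0]):
            best = (area, assign)
    return best

print(best_partition())
```

Exhaustive search is exponential in the number of modules, which is precisely why realistic partitioners need heuristics that jointly reason about allocation, mapping, and scheduling.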

Centroidal Voronoi Tessellation-Based Reduced-Order Modeling of Navier-Stokes Equations

  • Lee, Hyung-Chun
    • Proceedings of the Korean Society of Computational and Applied Mathematics Conference
    • /
    • 2003.09a
    • /
    • pp.1-1
    • /
    • 2003
  • In this talk, a reduced-order modeling methodology based on centroidal Voronoi tessellations (CVTs) is introduced. CVTs are special Voronoi tessellations for which the generators of the Voronoi diagram are also the centers of mass (means) of the corresponding Voronoi cells. For discrete data sets, CVTs are closely related to h-means clustering techniques. Even with the use of good mesh generators, discretization schemes, and solution algorithms, the computational simulation of complex, turbulent, or chaotic systems remains a formidable endeavor. For example, typical finite element codes may require many thousands of degrees of freedom for the accurate simulation of fluid flows. The situation is even worse for optimization problems, for which multiple solutions of the complex state system are usually required, or for feedback control problems, for which real-time solutions of the complex state system are needed. There have been many studies devoted to the development, testing, and use of reduced-order models for complex systems such as unsteady fluid flows. The types of reduced-order models that we study are those that attempt to determine accurate approximate solutions of a complex system using very few degrees of freedom. To do so, such models must use basis functions that are in some way intimately connected to the problem being approximated. Once a very low-dimensional reduced basis has been determined, one can employ it to solve the complex system by applying, e.g., a Galerkin method. In general, reduced bases are globally supported, so the discrete systems are dense; however, if the reduced basis is of very low dimension, one does not care about the lack of sparsity in the discrete system. A discussion of reduced-order modeling for complex systems such as fluid flows is given to provide context for the application of reduced-order bases. Detailed descriptions of CVT-based reduced-order bases and how they can be constructed for complex systems are then given. Subsequently, some concrete incompressible flow examples are used to illustrate the construction and use of CVT-based reduced-order bases. The CVT-based reduced-order modeling methodology is shown to be effective for these examples and to be inexpensive to apply compared to other reduced-order methods.
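The connection between CVTs and mean-based clustering can be illustrated with Lloyd's method in one dimension: alternately assign points to their nearest generator and move each generator to the mean of its cell; a fixed point of this iteration is a (discrete) CVT. The sample size and starting generators below are arbitrary choices for the sketch.

```python
import random

random.seed(0)

def lloyd_cvt(points, generators, iters=50):
    """Lloyd's method: assign each point to its nearest generator, then move
    each generator to the centroid (mean) of its cell; a fixed point is a CVT."""
    for _ in range(iters):
        cells = {g: [] for g in range(len(generators))}
        for p in points:
            nearest = min(range(len(generators)),
                          key=lambda g: abs(p - generators[g]))
            cells[nearest].append(p)
        generators = [sum(c) / len(c) if c else generators[g]
                      for g, c in cells.items()]
    return generators

pts = [random.random() for _ in range(2000)]
print(sorted(lloyd_cvt(pts, [0.1, 0.5, 0.9])))
```

For uniformly distributed data on [0, 1], the three generators settle near 1/6, 1/2, and 5/6, i.e., the centroids of three equal cells; in reduced-order modeling the same mechanism clusters flow snapshots to extract a low-dimensional basis.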
