• Title/Summary/Keyword: Large-scale optimization

A Study on AES Extension for Large-Scale Data (대형 자료를 위한 AES 확장에 관한 연구)

  • Oh, Ju-Young;Kouh, Hoon-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.6 / pp.63-68 / 2009
  • Across the information technology field, protecting information from hacking or eavesdropping has become a very serious issue, so more effective, convenient, and secure methods are required for safe operation. Encryption algorithms are known to be computationally intensive, consuming significant computing resources such as CPU time and memory. In this paper we propose a scalable encryption scheme built around four criteria: compression of the plaintext, a variable block size, a selectable number of rounds, and software optimization. We implemented and tested the scheme in C++. Experimental results show that the scheme achieves faster encryption/decryption speeds.
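
A minimal illustrative sketch (not the authors' implementation, and not a secure cipher) of the pipeline the abstract suggests: compress the plaintext first, pad it to a configurable block size, and apply a selectable number of toy rounds. The round function, key schedule, and all names are assumptions made for illustration only.

```python
import os
import zlib

def toy_round(block: bytes, round_key: bytes) -> bytes:
    # Stand-in for an AES-like round: XOR with the round key, then rotate the bytes.
    mixed = bytes(b ^ k for b, k in zip(block, round_key))
    return mixed[1:] + mixed[:1]

def toy_encrypt(plaintext: bytes, key: bytes, block_size: int = 16, rounds: int = 10) -> bytes:
    data = zlib.compress(plaintext)                        # criterion 1: compress the plaintext
    pad = block_size - len(data) % block_size              # criterion 2: variable block size
    data += bytes([pad]) * pad
    round_keys = [bytes((key[i % len(key)] + r) % 256 for i in range(block_size))
                  for r in range(rounds)]                  # criterion 3: selectable rounds
    out = bytearray()
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        for rk in round_keys:
            block = toy_round(block, rk)
        out += block
    return bytes(out)

ciphertext = toy_encrypt(b"large-scale data ..." * 100, os.urandom(16), block_size=32, rounds=6)
print(len(ciphertext))
```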

RLDB: Robust Local Difference Binary Descriptor with Integrated Learning-based Optimization

  • Sun, Huitao;Li, Muguo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.9 / pp.4429-4447 / 2018
  • Local binary descriptors are well-suited to many real-time and/or large-scale computer vision applications, although their low computational complexity usually comes with limited performance. In this paper, we propose a new optimization framework, RLDB (Robust-LDB), to improve a typical region-based binary descriptor, LDB (local difference binary), while maintaining its computational simplicity. RLDB extends the multi-feature strategy of LDB and applies a more complete region-comparing configuration. A cascade bit selection method is used to select the more representative patterns from the massive set of comparison pairs, and an online learning strategy further optimizes the descriptor for each specific patch separately. Both incorporate the LDP (linear discriminant projections) principle to jointly guarantee the robustness and distinctiveness of features at various scales. Experimental results demonstrate that this integrated learning framework significantly enhances LDB. The improved descriptor achieves performance comparable to floating-point descriptors on many benchmarks while retaining a computing speed similar to most binary descriptors, which better satisfies the demands of such applications.
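
A rough sketch of the LDB-style, region-comparing idea this work builds on (illustrative only, not the authors' RLDB with cascade bit selection or online learning): split the patch into grid cells, compute per-cell mean intensity and mean x/y gradients, and compare every cell pair to produce descriptor bits. The grid size and feature set are assumptions.

```python
import numpy as np

def ldb_like_descriptor(patch: np.ndarray, grid: int = 4) -> np.ndarray:
    h, w = patch.shape
    gy, gx = np.gradient(patch.astype(np.float32))
    cells = []
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            cells.append((patch[ys, xs].mean(), gx[ys, xs].mean(), gy[ys, xs].mean()))
    feats = np.array(cells)                      # (grid*grid, 3): multi-feature per region
    bits = []
    n = len(feats)
    for a in range(n):
        for b in range(a + 1, n):                # complete region-comparing configuration
            bits.extend((feats[a] > feats[b]).astype(np.uint8))
    return np.array(bits, dtype=np.uint8)

desc = ldb_like_descriptor(np.random.rand(32, 32))
print(desc.shape)   # 3 bits per cell pair: C(16, 2) * 3 = 360
```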

Development of Nonlinear Programming Approaches to Large Scale Linear Programming Problems (비선형계획법을 이용한 대규모 선형계획해법의 개발)

  • Chang, Soo-Y.
    • Journal of Korean Institute of Industrial Engineers / v.17 no.2 / pp.131-142 / 1991
  • The concept of a criterion function is proposed as a framework for comparing the geometric and computational characteristics of various nonlinear programming approaches to linear programming, such as the method of centers, Karmarkar's algorithm, and the gravitational method. We also discuss various computational issues involved in obtaining an efficient parallel implementation of these methods. Clearly, the most time-consuming part of solving a linear programming problem is the direction-finding procedure, in which an improving direction is obtained. In most cases, finding an improving direction is equivalent to solving a simple optimization problem defined at the current feasible solution. This simple optimization problem can be seen as a least-squares problem, and the computational effort of solving the least-squares problem is, in fact, the same as that of solving a system of linear equations. Hence, solving a system of linear equations quickly is very important for solving a linear programming problem efficiently. For solving systems of linear equations on parallel computing machines, iterative methods seem more suitable than direct methods. We therefore propose one possible strategy for an efficient parallel implementation of an iterative method for solving a system of equations and present a summary of computational experiments performed on a transputer-based parallel computing board installed in an IBM PC.
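
A hedged sketch of the reduction described in the abstract: direction finding becomes a least-squares problem, i.e., the normal equations (AᵀA)d = Aᵀb, and an iterative solver such as conjugate gradient parallelizes more naturally than a direct factorization. This is an illustration in NumPy under assumed data, not the paper's transputer implementation.

```python
import numpy as np

def conjugate_gradient(M, rhs, tol=1e-10, max_iter=1000):
    # Iteratively solve M x = rhs for symmetric positive-definite M.
    x = np.zeros_like(rhs)
    r = rhs - M @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Mp = M @ p
        alpha = rs_old / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = np.random.rand(200, 50)                  # hypothetical constraint-related matrix
b = np.random.rand(200)
d = conjugate_gradient(A.T @ A, A.T @ b)     # improving direction from the normal equations
print(np.linalg.norm(A.T @ (A @ d - b)))     # residual of the normal equations (should be ~0)
```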

Evaluation on Flexural Behavior of Double-tee Slabs with the Least Depth from Optimization Process (최적이론에 의하여 설계된 최소 깊이 더블티 슬래브의 휨거동 평가)

  • 유승룡;김대훈;유재천
    • Journal of the Korea Concrete Institute / v.11 no.3 / pp.141-152 / 1999
  • A precast prestressed double-tee slab may be designed according to the PCI Design Handbook, but that handbook is oriented toward bridge construction and needs to be adapted for building use in the domestic construction environment. Much improved sections were developed by reworking the design factors determined in previous experimental work on double tees. The pre-determined shape, reinforcement detail, and 5,000 psi concrete strength cannot be expected to be the best solution for domestic construction requirements, because large-scale use of these systems is anticipated. Flexural tests were performed on four full-scale 12.5 m prototype models of the "least depth double tee" resulting from the optimization process. The domestic superimposed live-load regulation, domestically available material properties, building design requirements, and construction economy were considered the main factors to establish. The first two sections are double-tee sections for a 1.2 t/m² market live load with straight and one-point depressed strands, and the second two are for a 0.6 t/m² parking live load with the same strand types. All of the specimens tested fully comply with the flexural strength requirements specified by ACI 318-95. However, the research showed that the following considerations need improvement for better results in practice: the location and method of connection for the lowest bottom mild bar, the connection method between precast and cast-in-place concrete, and the dap-end reinforcement.

Multi-dimensional sensor placement optimization for Canton Tower focusing on application demands

  • Yi, Ting-Hua;Li, Hong-Nan;Wang, Xiang
    • Smart Structures and Systems / v.12 no.3_4 / pp.235-250 / 2013
  • The optimal sensor placement (OSP) technique plays a key role in the structural health monitoring (SHM) of large-scale structures. Based on the mathematical background and implicit assumptions of the triaxial effective independence (EfI) method, this paper presents a novel multi-dimensional OSP method for the Canton Tower that focuses on application demands. In contrast to existing methods, the presented method renders the corresponding target mode shape partitions as linearly independent as possible while maintaining the stability of the modal matrix during the iteration process. The modal assurance criterion (MAC), the determinant of the Fisher information matrix (FIM), and the condition number of the FIM are taken as the optimality criteria to demonstrate the feasibility and effectiveness of the proposed method. Numerical investigations suggest that the proposed method outperforms the original EfI method in all instances, as expected, and this advantage is expected to be even more pronounced when the method is applied to other multi-dimensional optimization problems.
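
A minimal sketch of the classic effective independence (EfI) selection loop that the triaxial EfI method builds on (illustrative only, not the paper's multi-dimensional variant): candidate DOFs that contribute least to the Fisher information matrix are deleted one at a time until the target sensor count is reached. The mode-shape matrix below is assumed random data.

```python
import numpy as np

def efi_select(phi: np.ndarray, n_sensors: int) -> np.ndarray:
    # phi: rows = candidate DOFs, columns = target mode shapes.
    candidates = np.arange(phi.shape[0])
    while len(candidates) > n_sensors:
        P = phi[candidates]
        fim = P.T @ P                                               # Fisher information matrix
        ed = np.einsum('ij,jk,ik->i', P, np.linalg.inv(fim), P)     # EfI value of each DOF
        candidates = np.delete(candidates, np.argmin(ed))           # drop least informative DOF
    return candidates

mode_shapes = np.random.rand(300, 6)             # hypothetical 300 DOFs, 6 target modes
sensors = efi_select(mode_shapes, n_sensors=20)
print(sorted(sensors))
```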

Development of a Simulation Tool to Evaluate GNSS Positioning Performance in Urban Area

  • Wu, Falin;Liu, Gang-Jun;Zhang, Kefei;Densley, Liam
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / v.2 / pp.71-76 / 2006
  • With the rapid development of spatial infrastructure in the US, Europe, Japan, China, and India, there is no doubt that the next-generation Global Navigation Satellite System (GNSS) will improve the integrity, accuracy, reliability, and availability of position solutions. GNSS is becoming an essential element of personal, commercial, and public infrastructure and, consequently, part of our daily lives. However, the applicability of GPS for supporting location-sensitive applications such as location-based services in urban environments is severely curtailed by interference from 3D urban settings. To characterize and gain an in-depth understanding of such interference, and to provide location-based optimization alternatives, a high-fidelity 3D urban model of the Melbourne CBD built with ArcGIS and large-scale, high-resolution spatial data sets is used in this study to support a comprehensive simulation of current and future GNSS signal performance in terms of signal continuity, availability, strength, geometry, positioning accuracy, and reliability under a number of scenarios. The design, structure, and major components of the simulator are outlined. Useful time-stamped spatial patterns of signal performance over the experimental urban area have been revealed, which are valuable for supporting location-based service applications such as emergency response, the optimization of wireless communication infrastructure, and vehicle navigation services.
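
A minimal sketch of one geometry metric such a simulator typically reports, the geometric dilution of precision (GDOP): build the geometry matrix from (approximately unit) line-of-sight vectors to the visible satellites and take the square root of the trace of (GᵀG)⁻¹. This illustrates the standard formula under assumed satellite directions, not code from the paper's ArcGIS-based tool.

```python
import numpy as np

def gdop(unit_los: np.ndarray) -> float:
    # unit_los: (n_sats, 3) unit vectors from receiver toward each visible satellite.
    G = np.hstack([unit_los, np.ones((unit_los.shape[0], 1))])   # extra column for the clock term
    Q = np.linalg.inv(G.T @ G)
    return float(np.sqrt(np.trace(Q)))

# Hypothetical constellation: one high-elevation and three lower-elevation satellites.
sats = np.array([[0.0, 0.0, 1.0],
                 [0.8, 0.0, 0.6],
                 [-0.4, 0.7, 0.59],
                 [-0.4, -0.7, 0.59]])
print(gdop(sats))   # smaller GDOP = better geometry; urban canyons that block satellites raise it
```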

Hybrid genetic-paired-permutation algorithm for improved VLSI placement

  • Ignatyev, Vladimir V.;Kovalev, Andrey V.;Spiridonov, Oleg B.;Kureychik, Viktor M.;Ignatyeva, Alexandra S.;Safronenkova, Irina B.
    • ETRI Journal / v.43 no.2 / pp.260-271 / 2021
  • This paper addresses very-large-scale integration (VLSI) placement optimization, which is important because of the rapid development of VLSI design technologies. The goal of this study is to develop a hybrid algorithm for VLSI placement. The proposed algorithm is a sequential combination of a genetic algorithm and an evolutionary algorithm. It is commonly known that local search algorithms, such as random forest, hill climbing, and variable neighborhoods, can be applied effectively to NP-hard problems; they provide improved solutions obtained after a global search. The scientific novelty of this research lies in the development of systems, principles, and methods for creating a hybrid (combined) placement algorithm. The principal difference of the proposed algorithm is that it obtains a set of alternative solutions in parallel and then selects the best one. Nonstandard genetic operators, based on problem knowledge, are used in the proposed algorithm. An experimental study shows an objective-function improvement of 13%. The time complexity of the hybrid placement algorithm is O(N²).
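
A toy sketch of a genetic-algorithm placement loop of the general kind discussed here (illustrative only, not the paper's hybrid genetic-paired-permutation algorithm or its nonstandard operators): cells are encoded as a permutation over grid slots and fitness is the total Manhattan wirelength of an assumed two-pin netlist.

```python
import random

GRID = 4                                              # hypothetical 4x4 placement grid
NETS = [(0, 5), (1, 2), (3, 12), (7, 8), (4, 15)]     # hypothetical 2-pin nets (cell ids)

def wirelength(perm):
    pos = {cell: divmod(slot, GRID) for slot, cell in enumerate(perm)}
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1]) for a, b in NETS)

def crossover(p1, p2):
    # Keep a prefix of one parent and fill the rest in the other parent's order.
    cut = random.randrange(len(p1))
    return p1[:cut] + [c for c in p2 if c not in p1[:cut]]

def mutate(perm):
    i, j = random.sample(range(len(perm)), 2)
    perm[i], perm[j] = perm[j], perm[i]

population = [random.sample(range(GRID * GRID), GRID * GRID) for _ in range(30)]
for _ in range(200):
    population.sort(key=wirelength)
    parents = population[:10]                          # simple truncation selection
    children = []
    while len(children) < 20:
        child = crossover(*random.sample(parents, 2))
        if random.random() < 0.3:
            mutate(child)
        children.append(child)
    population = parents + children

print(wirelength(min(population, key=wirelength)))
```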

Long-Term Container Allocation via Optimized Task Scheduling Through Deep Learning (OTS-DL) And High-Level Security

  • Muthakshi S;Mahesh K
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.4 / pp.1258-1275 / 2023
  • Cloud computing is a technology that has changed the traditional way of providing services. Service providers are responsible for managing the allocation of resources, and selecting suitable containers and bandwidth for job scheduling has been a challenging task for them. Several existing systems have introduced algorithms for resource allocation. To overcome these challenges, the proposed system introduces an Optimized Task Scheduling algorithm with Deep Learning (OTS-DL). When a job is assigned to a Cloud Service Provider (CSP), the containers are allocated automatically. The article segregates containers into 'Long-Term Containers (LTC)' and 'Short-Term Containers (STC)' for resource allocation. The system leverages the optimized task scheduling algorithm to maximize resource utilisation, first inquiring into micro-task and macro-task dependencies; the bottleneck task is then chosen and acted upon accordingly. Further, the system employs deep learning (DL) to implement all the progressive steps of job scheduling in the cloud. To overcome container attacks and errors, the system also formulates a container convergence (fault tolerance) scheme with high-level security. The results demonstrate that the optimization algorithm used is effective for implementing complete resource allocation and for solving the large-scale optimization problem of resource allocation and its security issues.
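
A hedged sketch of one step the abstract mentions, choosing a "bottleneck" task from task dependencies, implemented here as the task with the largest critical-path time on a dependency DAG. The graph, durations, and selection rule are illustrative assumptions, not the paper's OTS-DL algorithm.

```python
from functools import lru_cache

durations = {"A": 3, "B": 5, "C": 2, "D": 7, "E": 4}             # hypothetical task times
depends_on = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

@lru_cache(maxsize=None)
def critical_time(task: str) -> int:
    # Longest cumulative time from any root up to and including this task.
    preds = depends_on[task]
    return durations[task] + (max(critical_time(p) for p in preds) if preds else 0)

bottleneck = max(durations, key=critical_time)
print(bottleneck, critical_time(bottleneck))                      # the task to prioritize
```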

Resource-constrained Scheduling at Different Project Sizes

  • Lazari, Vasiliki;Chassiakos, Athanasios;Karatzas, Stylianos
    • International conference on construction engineering and project management / 2022.06a / pp.196-203 / 2022
  • The resource-constrained scheduling problem (RCSP) constitutes one of the most challenging problems in project management, as it combines multiple parameters, contradicting objectives (project completion within certain deadlines, resource allocation within resource availability margins and with reduced fluctuations), and strict constraints (precedence constraints between activities), while its complexity grows with the number of activities being executed. Due to the large size of the solution space, this work investigates the application of genetic algorithms to approximate the optimal resource allocation and obtain optimal trade-offs between different project goals. The analysis uses the cost of exceeding the daily resource availability, the cost of day-by-day resource movement in and out of the site, and the cost of using resources day by day to form the objective cost function. The model is applied to different case studies: one project consisting of 10 activities, four repetitive projects consisting of 40 activities in total, and 16 repetitive projects consisting of 160 activities in total, in order to evaluate the effectiveness of the algorithm in solution spaces of different sizes and under alternative optimization criteria by examining the quality of the solution and the required computational time. Case studies 2 and 3 were developed by repeating the unit sub-project (10 activities), meaning that the initial problem is multiplied by four and by sixteen, respectively. The evaluation results indicate that the proposed model can efficiently provide reliable solutions with respect to the individual goals assigned in every case study, regardless of the project scale.
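
A minimal sketch of the three-part objective the abstract describes, evaluated on a daily resource-usage profile: a penalty for exceeding availability, a cost for day-to-day movement of resources in and out of the site, and a cost for resource use per day. The unit costs and the profile below are illustrative assumptions; in the paper this objective is minimized by a genetic algorithm over activity start times.

```python
def schedule_cost(daily_usage, availability, c_exceed=10.0, c_move=2.0, c_use=1.0):
    exceed = sum(max(0, u - availability) for u in daily_usage)          # over-allocation penalty
    movement = sum(abs(daily_usage[d] - daily_usage[d - 1])              # resource fluctuations
                   for d in range(1, len(daily_usage)))
    usage = sum(daily_usage)                                             # total resource-days
    return c_exceed * exceed + c_move * movement + c_use * usage

profile = [4, 6, 9, 7, 7, 3, 2]          # hypothetical resource units needed each day
print(schedule_cost(profile, availability=8))
```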

Optimization of fractionation efficiency (FE) and throughput (TP) in a large scale splitter less full-feed depletion SPLITT fractionation (Large scale FFD-SF) (대용량 splitter less full-feed depletion SPLITT 분획법 (Large scale FFD-SF)에서의 분획효율(FE)및 시료처리량(TP)의 최적화)

  • Eum, Chul Hun;Noh, Ahrahm;Choi, Jaeyeong;Yoo, Yeongsuk;Kim, Woon Jung;Lee, Seungho
    • Analytical Science and Technology / v.28 no.6 / pp.453-459 / 2015
  • Split-flow thin cell fractionation (SPLITT fractionation, SF) is a particle separation technique that allows continuous (and thus preparative-scale) separation into two subpopulations based on particle size or density. In SF there are two basic performance parameters. One is the throughput (TP), defined as the amount of sample that can be processed in a unit time period. The other is the fractionation efficiency (FE), defined as the number percentage of particles that have the size predicted by theory. The full-feed depletion mode (FFD-SF) has only one inlet for the sample feed, and the channel is equipped with a flow stream splitter only at the outlet. In the conventional FFD mode it was difficult to enlarge the channel because of the splitter inside it, so a large-scale splitter-less FFD-SF channel was used to increase TP by increasing the channel scale. In this study, an FFD-SF channel with no flow stream splitters ('splitter-less') was developed for large-scale fractionation and then tested for optimum TP and FE by varying the sample concentration and the flow rates at the inlet and outlet of the channel. Polyurethane (PU) latex beads with two different size distributions (about 3~7 µm and about 2~30 µm) were used for the test. The sample concentration was varied from 0.2 to 0.8% (wt/vol), and the channel flow rate was varied among 70, 100, 120, and 160 mL/min. The fractionated particles were monitored by optical microscopy (OM). The sample recovery was determined by collecting the particles on a 0.1 µm membrane filter. Accumulation of relatively large micron-sized particles in the channel could be prevented by feeding carrier liquid. It was found that, in order to achieve effective TP, the sample concentration should be higher than 0.4%.