• Title/Summary/Keyword: Optimising


Optimising Performance Management in VUCA Period: A Literature Review Study

  • Ileen SAVO;Ranzi RUSIKE;Stephen SENA
    • The Journal of Industrial Distribution & Business
    • /
    • v.15 no.4
    • /
    • pp.1-9
    • /
    • 2024
  • Purpose: The purpose of this paper is to explore the literature on performance management in order to gain insight into how the concept can be optimised during VUCA times for better organisational performance. Research design, data and methodology: The study adopted a desktop research methodology. An extensive literature review was conducted across sources such as journals, research papers, organisational reports, government reports, media reports and articles available on the web, and an effort has been made to assimilate the body of knowledge on the topic in the current paper. Literature that enhances understanding of managing performance during VUCA times was reviewed. Results: Solutions to optimise performance management in organisations during VUCA times were proffered; these include innovative planning, innovative monitoring, innovative training and development, innovative rating and innovative rewarding. Conclusions: The study argues that the performance management process should not be carried out in the ordinary way during VUCA times, but innovatively. In this regard, innovative performance management can optimise the performance of organisations during a VUCA period. The study recommends a further quantitative study to test the suitability of each of the proposed ways of innovatively practising each element of the performance management process across different industries, countries or sectors.

Reproductive performance of Korean native cattle (Hanwoo) focusing on calving interval and parity (분만간격과 산차를 중심으로 한국 재래종인 한우의 번식능력 분석)

  • Cho, Jaesung;Do, Changhee;Choi, Inchul
    • Journal of Embryo Transfer
    • /
    • v.31 no.3
    • /
    • pp.273-279
    • /
    • 2016
  • The Korean native cattle breed, Hanwoo, is the most popular beef cattle breed in Korea. However, reproductive performance data are limited, even though reproduction is one of the most economically and biologically important aspects of beef production. Therefore, this study investigated reproductive performance parameters, including calving interval and parity over the productive lifetime. Data collected from 206,827 calvings were analyzed. There were no significant differences in calving interval or gestation length as parity increased from the 2nd to the 13th parity, or from spring to winter. However, we found a dramatic increase in calving interval after the year 2000, of about one month per year (y = 30.578x + 344.45, R² = 0.9157). Interestingly, we observed that lifetime parity can be affected by birth weight: calves weighing 23 kg at birth showed the highest parity, 3.4 ± 2.0. In summary, this study provides valuable data on the reproductive performance of Hanwoo, and the data presented here can be used as a standard target for optimising and enhancing reproductive performance.
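The reported trend line can be illustrated with a minimal sketch. Note an assumption: the abstract does not define x, so it is taken here as years after 2000, with y the calving interval in days.

```python
# Reported trend of Hanwoo calving interval after year 2000 (from the abstract):
#   y = 30.578 * x + 344.45, R^2 = 0.9157
# Assumption (not stated in the abstract): x counts years after 2000,
# y is the calving interval in days.

def predicted_calving_interval(years_after_2000: float) -> float:
    """Calving interval (days) from the paper's reported linear fit."""
    return 30.578 * years_after_2000 + 344.45

# Example: five years after 2000 the fit predicts roughly 497 days.
print(round(predicted_calving_interval(5), 2))
```

The slope of about 30.6 days per year matches the abstract's statement that the interval grew by roughly one month per year.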

Computational design of mould sprue for injection moulding thermoplastics

  • Lakkannan, Muralidhar;Mohan Kumar, G.C.;Kadoli, Ravikiran
    • Journal of Computational Design and Engineering
    • /
    • v.3 no.1
    • /
    • pp.37-52
    • /
    • 2016
  • In injection moulding of polymers, mould design is a key task involving several critical decisions with direct implications for quality, productivity and cost. One prominent decision is specifying the expansion of the sprue-bush conduit, since it significantly influences the overall injection moulding process, yet the ambiguity of its design criteria prevents direct determination. In practice, designers choose it intuitively and then compensate by optimising or manipulating processing parameters. To overcome this, the present research proposes a holistic design criterion, applicable to all polymeric materials, that also serves as a functional assessment metric: a criterion for specifying the sprue conduit size before mould development. Accordingly, an a priori analytical criterion was deduced quantitatively as an expansion ratio from well-established empirical relationships, expressed as an exclusive expansion angle configured for the injectant's properties. Its computational form was leveraged to ensure proper injection into the mould impression while matching both injector capacity and the desired moulding features. For comprehensiveness, the criterion was treated as a continuous, explicit function of the in-situ spatio-temporal state of the injectant, with a distinct slope and intercept for each polymeric character, and ranges of apparent viscosity and shear thinning index were chosen to characterise most thermoplastics. The results indicate aggressive widening of the conduit expansion for viscous melts and very aggressive narrowing for strongly shear-thinning melts; of the two, apparent viscosity was dominant. This rationale provides an a priori design basis and helps diagnose filling issues that cause several common defects. The proposed generic design criterion, being simple, would benefit mould designers and serve as an inexpensive preventive measure for moulders. Its ease of adoption in practice offers hope for injection moulding of demanding polymers. We therefore conclude that accounting for the injectant's polymeric character when designing the sprue bush offers a definite a priori advantage.
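The abstract characterises thermoplastics by apparent viscosity and shear-thinning index. A minimal sketch of that characterisation, using the standard power-law (Ostwald-de Waele) model rather than the paper's actual sprue-expansion criterion (which the abstract does not reproduce):

```python
# Hedged sketch: power-law (Ostwald-de Waele) apparent viscosity, the
# standard model behind the abstract's two melt parameters. All values
# here are illustrative, not taken from the paper.

def apparent_viscosity(shear_rate: float, k: float, n: float) -> float:
    """eta = K * gamma_dot**(n - 1); n < 1 means shear thinning."""
    return k * shear_rate ** (n - 1.0)

# A strongly shear-thinning melt (n = 0.3) thins as shear rate rises:
low = apparent_viscosity(10.0, k=5000.0, n=0.3)     # low shear rate
high = apparent_viscosity(1000.0, k=5000.0, n=0.3)  # high shear rate
print(high < low)  # True
```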

Optimising Ink Setting Properties on Double Coated Wood-free Papers

  • Bluvol, Guillermo;Carlsson, Roger
    • Proceedings of the Korea Technical Association of the Pulp and Paper Industry Conference
    • /
    • 2006.06b
    • /
    • pp.215-225
    • /
    • 2006
  • Today's requirements for print-press runnability and print quality demand optimised absorption and adhesion of printing ink on the paper surface. Modern coating concepts for high-gloss offset grades use ultra-fine pigments, and binder levels have been continuously reduced to a minimum in recent years, both to achieve the highest possible sheet gloss development and for economic reasons. Both the ultra-fine pigments and the reduced binder levels lead in many cases to a faster ink setting rate. Matt paper grades, on the other hand, use relatively coarse pigments, leading to slow ink setting compared to high-gloss papers. Both too-fast and too-slow ink setting entail drawbacks in print quality and print-press runnability. The mechanisms behind the interactions between ink and coating have been presented in many previous publications. The purpose of this study was to determine and quantify how the ink setting rate is influenced by the pigment system (GCC and GCC/clay blends), latex level and latex properties in the topcoat of double coated sheet-fed offset paper. The roles of binder level and type in the precoat were also assessed. The effect of calendering (temperature and pressure) was studied with one formulation. The resulting ink setting characteristics were tested using three different laboratory testing instruments, and the correlation amongst these methods is discussed. The results show that by varying the latex properties, the pigment system and/or the latex addition level, the tack development of ink applied to a topcoat pigment system can be significantly influenced: it can be slowed down, as often desired with ultra-fine pigments, or sped up in the case of coarse pigments. There was no visible effect on the ink setting rate from using different binder systems in the precoat.


The Bytecode Optimizer (바이트코드 최적화기)

  • 이야리;홍경표;오세만
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.1_2
    • /
    • pp.73-80
    • /
    • 2003
  • The Java programming language is designed for developing effective applications in a heterogeneous network environment. A major problem with Java is its performance: many of Java's attractive features make software development easy, but also make it expensive to support, and applications written in Java are often much slower than their counterparts written in C or C++. To use Java's attractive features without the performance penalty, sophisticated optimizations and runtime systems are required. Optimising Java bytecode has several advantages. First, the bytecode is independent of the compiler used to generate it. Second, bytecode optimization can be performed as a pre-pass to Just-In-Time (JIT) compilation. The goal of this work is the automatic construction of a code optimizer for Java bytecode. We have designed and implemented a Bytecode Optimizer that performs peephole optimization, bytecode-specific optimization, and method inlining. Using the Classfile optimizer, we see up to 9% improvement in speed and about a 20% size reduction in Java class files, compared to average code using the interpreter alone.
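A minimal sketch of the peephole idea the abstract mentions, on a toy stack-machine instruction list rather than real JVM bytecode: a value pushed and immediately popped has no effect, so the pair can be deleted through a two-instruction window.

```python
# Hedged sketch of one peephole pattern on a toy stack-machine listing
# (not actual JVM bytecode): delete a push immediately followed by a pop.

def peephole(instructions):
    out = []
    i = 0
    while i < len(instructions):
        # Two-instruction window: a push/pop pair is dead code.
        if (i + 1 < len(instructions)
                and instructions[i].startswith("push")
                and instructions[i + 1] == "pop"):
            i += 2  # skip the dead pair
        else:
            out.append(instructions[i])
            i += 1
    return out

code = ["push 1", "pop", "push 2", "store x"]
print(peephole(code))  # ['push 2', 'store x']
```

A real optimizer iterates such windows to a fixed point and combines many patterns (constant folding, redundant load elimination) alongside method inlining.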

Stochastic modelling and optimum inspection and maintenance strategy for fatigue affected steel bridge members

  • Huang, Tian-Li;Zhou, Hao;Chen, Hua-Peng;Ren, Wei-Xin
    • Smart Structures and Systems
    • /
    • v.18 no.3
    • /
    • pp.569-584
    • /
    • 2016
  • This paper presents a method for stochastic modelling of fatigue crack growth and optimising the inspection and maintenance strategy for the structural members of steel bridges. The fatigue crack evolution is considered as a stochastic process with uncertainties, and the Gamma process is adopted to simulate the propagation of fatigue cracks in steel bridge members. From the stochastic modelling of fatigue crack growth, the probability of failure caused by fatigue is predicted over the service life of steel bridge members. The remaining fatigue life of steel bridge members is determined by comparing the fatigue crack length with its predetermined threshold. Furthermore, the probability of detection is adopted to account for the uncertainties in detecting fatigue cracks with existing damage detection techniques. A multi-objective optimisation problem is proposed and solved by a genetic algorithm to determine the optimised inspection and maintenance strategy for the fatigue-affected steel bridge members. The optimised strategy is achieved by minimizing the life-cycle cost, including the inspection, maintenance and failure costs, and maximizing the service life after necessary intervention. The number of interventions during the service life is also taken into account to investigate the relationship between the service life and the cost of maintenance. The results from numerical examples show that the proposed method can provide a useful approach to cost-effective inspection and maintenance strategies for fatigue-affected steel bridges.
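The Gamma-process model of crack growth can be sketched in a few lines: Gamma-distributed increments are non-negative, so the simulated crack length never shrinks, and the remaining fatigue life falls out of comparing the path with a threshold. The shape/scale parameters and crack lengths below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch: crack growth as a stationary Gamma process, as in the
# abstract's stochastic model. Parameters are illustrative only.

rng = np.random.default_rng(0)
years = 50
shape_per_year, scale = 0.8, 0.05   # assumed Gamma parameters (mm/year)
a0, a_crit = 1.0, 5.0               # assumed initial / critical length, mm

increments = rng.gamma(shape_per_year, scale, size=years)
crack = a0 + np.cumsum(increments)  # Gamma increments are >= 0,
                                    # so the crack path is monotone

# Remaining fatigue life = first year the crack exceeds its threshold
exceed = np.nonzero(crack >= a_crit)[0]
life = int(exceed[0]) + 1 if exceed.size else years
print(bool(np.all(np.diff(crack) >= 0)))  # True: growth never reverses
```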

Value of Cultural Heritage and its Role for the Culture-Creative Industries (문화창의산업에서 문화유산의 가치와 활성화 방안)

  • Jang, Ho-su
    • Korean Journal of Heritage: History & Science
    • /
    • v.48 no.2
    • /
    • pp.82-95
    • /
    • 2015
  • Cultural heritage contains traditional values, and we have to conserve its intrinsic value. On the other hand, it is argued that there is no need to preserve heritage for its own sake; nowadays we appreciate that the active use of heritage enhances its value and secures its position in society. We need not only to protect heritage, but also to ensure its use, so that its economic value is harnessed for the benefit of local communities. We are moving through the information society into an experience economy and a creative-economy policy discourse. The effects of globalisation on societies are manifested in the attrition of their values and of the identities of vernacular heritage. Therefore, the relationship between development and heritage must be examined. In this article I suggest methodologies for vitalizing cultural-heritage-based creative industries, especially through building a creative ecosystem and optimising the performance of the cultural-heritage-based cluster.

Lightweight of ONNX using Quantization-based Model Compression (양자화 기반의 모델 압축을 이용한 ONNX 경량화)

  • Chang, Duhyeuk;Lee, Jungsoo;Heo, Junyoung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.1
    • /
    • pp.93-98
    • /
    • 2021
  • With the development of deep learning and AI, models have grown in scale and been integrated into other fields, blending into our lives. However, in environments with limited resources, such as embedded devices, it is difficult to apply such models, and problems such as power shortages arise. To solve this, lightweight methods have been proposed, such as cloud or offloading technologies, reducing the number of parameters in the model, or optimising calculations. In this paper, quantization of trained models is applied to ONNX models, which serve as an interchange format between various frameworks; the neural network structure and inference performance are compared with the original models, and various module methods for quantization are analyzed. Experiments show that the weight parameters are compressed in size and the inference time is further optimized compared to the original model.
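The core idea behind the weight quantization the abstract describes can be sketched in NumPy: map float32 weights to int8 with a per-tensor scale, roughly quartering storage at the cost of a bounded rounding error. This illustrates the principle only; ONNX Runtime's own quantization tooling performs the real model conversion.

```python
import numpy as np

# Hedged sketch of per-tensor int8 weight quantization (principle only,
# not the ONNX toolchain itself).

def quantize_int8(w: np.ndarray):
    """Map float32 weights to int8 with a symmetric per-tensor scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=1000).astype(np.float32)
q, s = quantize_int8(w)
print(w.nbytes // q.nbytes)  # 4: int8 storage is a quarter of float32
# Reconstruction error is bounded by one quantization step:
print(bool(np.max(np.abs(w - dequantize(q, s))) <= s))  # True
```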

Exploring Support Vector Machine Learning for Cloud Computing Workload Prediction

  • ALOUFI, OMAR
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.10
    • /
    • pp.374-388
    • /
    • 2022
  • Cloud computing has been one of the most critical technologies of the last few decades. It was invented for several purposes, for example to meet user requirements and to satisfy the needs of users in simple ways. Since its invention, cloud computing has followed traditional approaches to elasticity, its key characteristic. Elasticity is the feature of cloud computing that seeks to meet users' needs without interruption at run time. Traditional approaches to elasticity have been pursued for several years using different mathematical models. Even though mathematical modelling has been a step forward in meeting users' needs, there is still a gap in the optimisation of elasticity. To optimise elasticity in the cloud, it may be better to use machine learning algorithms to predict upcoming workloads and pass them to the scheduling algorithm, which would achieve excellent provisioning of cloud services, improve the Quality of Service (QoS) and save power consumption. Therefore, this paper aims to investigate the use of machine learning techniques to predict the workload of physical hosts (PH) in the cloud and their energy consumption. The cloud environment hosting the experiments is the school of computing cloud testbed (SoC). The experiments use real applications with different behaviours, with workloads changing over time. The results demonstrate that the machine learning techniques used in our scheduling algorithm can predict the workload of physical hosts (CPU utilisation), which contributes to reducing power consumption by scheduling upcoming virtual machines to the physical host with the lowest CPU utilisation.
Additionally, a number of tools are used and explored in this paper, such as the WEKA tool to train on the real data and explore machine learning algorithms, and the Zabbix tool to monitor power consumption before and after scheduling the virtual machines to physical hosts. Moreover, the paper follows an agile methodology, which helped us achieve our solution and manage the work effectively.
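The scheduling step the abstract describes, placing each upcoming virtual machine on the physical host with the lowest predicted CPU utilisation, can be sketched greedily. The prediction model (an SVM in the paper) is replaced here by a given dict of predicted utilisations; the host names and loads are hypothetical.

```python
# Hedged sketch of lowest-utilisation-first VM placement. The predicted
# utilisations would come from the paper's SVM model; here they are given.

def schedule_vms(predicted_util: dict, vm_loads: list):
    """Greedy placement: each VM goes to the currently least-loaded host."""
    util = dict(predicted_util)  # host -> predicted CPU utilisation (%)
    placement = []
    for load in vm_loads:
        host = min(util, key=util.get)   # pick the least-loaded host
        placement.append((host, load))
        util[host] += load               # account for the newly placed VM
    return placement

hosts = {"ph1": 60.0, "ph2": 20.0, "ph3": 45.0}
print(schedule_vms(hosts, [30.0, 10.0]))
# [('ph2', 30.0), ('ph3', 10.0)]: ph2 takes the first VM, after which
# ph3 is the least loaded and takes the second.
```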

Acceleration of computation speed for elastic wave simulation using a Graphic Processing Unit (그래픽 프로세서를 이용한 탄성파 수치모사의 계산속도 향상)

  • Nakata, Norimitsu;Tsuji, Takeshi;Matsuoka, Toshifumi
    • Geophysics and Geophysical Exploration
    • /
    • v.14 no.1
    • /
    • pp.98-104
    • /
    • 2011
  • Numerical simulation in exploration geophysics provides important insights into subsurface wave propagation phenomena. Although elastic wave simulations take longer to compute than acoustic simulations, an elastic simulator can construct more realistic wavefields including shear components, and is therefore suitable for exploring the responses of elastic bodies. To overcome the long duration of the calculations, we use a Graphic Processing Unit (GPU) to accelerate the elastic wave simulation. Because a GPU has many processors and a wide memory bandwidth, we can use it in a parallelised computing architecture. The GPU board used in this study is an NVIDIA Tesla C1060, which has 240 processors and a 102 GB/s memory bandwidth. Despite the availability of a parallel computing architecture (CUDA), developed by NVIDIA, we must optimise the usage of the different types of memory on the GPU device, and the sequence of calculations, to obtain a significant speedup of the computation. In this study, we simulate two-dimensional (2D) and three-dimensional (3D) elastic wave propagation using the Finite-Difference Time-Domain (FDTD) method on GPUs. In the wave propagation simulation, we adopt the staggered-grid method, one of the conventional FD schemes, since it achieves sufficient accuracy for numerical modelling in geophysics. Our simulator optimises memory usage on the GPU device to reduce data access times, using faster memory as much as possible; this is a key factor in GPU computing. By using one GPU device and optimising its memory usage, we improved the computation time by more than 14 times in the 2D simulation, and over six times in the 3D simulation, compared with one CPU. Furthermore, by using three GPUs, we succeeded in accelerating the 3D simulation 10 times.
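The staggered-grid FDTD scheme the abstract uses can be sketched in 1D NumPy form rather than CUDA: particle velocity and stress live on interleaved grids and are updated alternately in a leapfrog. Grid sizes and material values below are illustrative assumptions.

```python
import numpy as np

# Hedged 1D (SH elastic) sketch of the staggered-grid FDTD scheme.
# Velocity sits on integer grid points, stress on half points; the two
# are updated alternately (leapfrog in time). Values are illustrative.

nx, nt = 200, 300
rho, mu = 2000.0, 8.0e9           # density (kg/m^3), shear modulus (Pa)
dx = 5.0                          # grid spacing (m)
c = np.sqrt(mu / rho)             # shear wave speed
dt = 0.5 * dx / c                 # respects the CFL stability limit

v = np.zeros(nx)                  # velocity at integer grid points
tau = np.zeros(nx - 1)            # stress at half grid points
v[nx // 2] = 1.0                  # initial velocity spike at the centre

for _ in range(nt):
    # Update interior velocities from stress differences ...
    v[1:-1] += (dt / (rho * dx)) * (tau[1:] - tau[:-1])
    # ... then stresses from velocity differences (staggered in space/time).
    tau += (dt * mu / dx) * (v[1:] - v[:-1])

print(bool(np.isfinite(v).all()))  # True: stable under the CFL limit
```

On a GPU, the two vectorised update lines become kernels, and the memory-layout choices the abstract stresses (keeping v and tau in fast on-chip memory) dominate the achievable speedup.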