• Title/Summary/Keyword: Size Optimization


A mean-absolute-deviation based method for optimizing skid sequence in shipyard subassembly

  • Lee, Kyung-Tae;Kwon, Yung-Keun
    • Journal of the Korea Society of Computer and Information / v.27 no.12 / pp.277-284 / 2022
  • In this paper, we propose a method for optimizing the processing order of skids to minimize the span time in the conveyor environment of the shipbuilding subassembly process. The subassembly process consists of a series of fixed tasks whose required work time varies with the skid type. The loading order of skids on the conveyor, which determines the span time, must therefore be optimized properly, and the problem size grows exponentially with the number of skids. To address this, we propose a novel method called UniDev, which defines a mean-absolute-deviation measure of the time differences among simultaneously processed tasks and iteratively improves it. Through simulations with various numbers of skids and processes, we observed that the proposed method reduces the overall work time more efficiently than the multi-start and 2-OPT methods.
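
The core idea, scoring a loading order by the mean absolute deviation of the work times that run simultaneously and improving that score iteratively, can be illustrated with a small sketch. The conveyor model, the pairwise-swap neighborhood, and all numbers below are illustrative assumptions, not the authors' exact UniDev procedure:

```python
import random
from statistics import mean

def mad_score(order, work_time):
    """Mean absolute deviation of the task times processed at the same conveyor
    step.  In this toy model, process p at step t works on the skid loaded
    t - p steps earlier."""
    n, m = len(order), len(work_time[0])
    total = 0.0
    for t in range(n + m - 1):
        active = [work_time[order[t - p]][p] for p in range(m) if 0 <= t - p < n]
        mu = mean(active)
        total += mean(abs(x - mu) for x in active)
    return total

def improve(order, work_time, iters=1000, seed=0):
    """Iteratively swap two skids and keep the swap if the MAD score improves."""
    rng = random.Random(seed)
    best = list(order)
    best_score = mad_score(best, work_time)
    for _ in range(iters):
        i, j = rng.sample(range(len(best)), 2)
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        s = mad_score(cand, work_time)
        if s < best_score:
            best, best_score = cand, s
    return best, best_score

if __name__ == "__main__":
    # 6 skids x 3 processes; work times depend on the skid type (illustrative)
    work_time = [[3, 5, 2], [4, 2, 6], [2, 7, 3], [6, 3, 4], [5, 4, 5], [3, 6, 2]]
    order, score = improve(list(range(6)), work_time)
    print("loading order:", order, "MAD score:", round(score, 3))
```

A low MAD means the tasks running in parallel finish at roughly the same time, so the conveyor stalls less and the span time shrinks.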

Optimization of Gas Detector Location by Analysis of the Dispersion Model of Hazardous Chemicals (유해화학물질의 확산 모델 분석을 통한 가스감지기 위치 최적화)

  • Jeong, Taejun;Lim, Dong-Hui;Kim, Min-Seop;Lee, Jae-Geol;Yoo, Byung Tae;Ko, Jae Wook
    • Journal of the Korean Institute of Gas / v.26 no.2 / pp.39-48 / 2022
  • Gas detectors are among the facilities that can prevent fires, explosions, and leaks capable of causing serious industrial accidents, yet the domestic installation standards applied to them do not take into account the dispersion behavior of hazardous chemicals in the atmosphere, so their technical basis is insufficient. Therefore, in this study, the leak-hole size of each type of facility commonly used in chemical plants and the dispersion distance corresponding to the concentration of concern of each hazardous chemical were analyzed, and based on this analysis, an optimal gas detector installation distance for each material was suggested. Using the method presented in this study, more economical and effective gas detector installation can be expected, which in turn can help prevent serious industrial accidents.
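
As a rough illustration of how a dispersion distance for a concentration of concern translates into a detector-placement limit, the sketch below uses a neutrally buoyant Gaussian plume with Briggs rural class-D coefficients. This is only a stand-in for whichever dispersion model the study actually used, and the leak rate, wind speed, and level of concern are illustrative assumptions:

```python
import math

def plume_centerline_ppm(Q_g_per_s, u_m_per_s, x_m, mol_weight):
    """Ground-level centerline concentration of a neutrally buoyant Gaussian
    plume (Briggs rural class-D dispersion coefficients), returned in ppm."""
    sigma_y = 0.08 * x_m / math.sqrt(1 + 0.0001 * x_m)
    sigma_z = 0.06 * x_m / math.sqrt(1 + 0.0015 * x_m)
    c_g_per_m3 = Q_g_per_s / (math.pi * u_m_per_s * sigma_y * sigma_z)
    # convert mass concentration to ppm at ~25 degC, 1 atm (24.45 L/mol)
    return c_g_per_m3 * 1000.0 * 24.45 / mol_weight

def distance_to_concern(Q, u, mol_weight, limit_ppm):
    """Farthest downwind distance (m) at which the centerline concentration
    still exceeds the concentration of concern; simple bisection search."""
    lo, hi = 1.0, 10000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if plume_centerline_ppm(Q, u, mid, mol_weight) > limit_ppm:
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    # e.g. a 50 g/s ammonia leak (M = 17 g/mol), 3 m/s wind, 25 ppm level of concern
    d = distance_to_concern(50.0, 3.0, 17.0, 25.0)
    print(f"detector should be placed within about {d:.0f} m of the source")
```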

Simulated Annealing for Overcoming Data Imbalance in Mold Injection Process (사출성형공정에서 데이터의 불균형 해소를 위한 담금질모사)

  • Dongju Lee
    • Journal of Korean Society of Industrial and Systems Engineering / v.45 no.4 / pp.233-239 / 2022
  • Injection molding is a process in which thermoplastic resin is heated into a fluid state, injected under pressure into the cavity of a mold, and then cooled in the mold to produce a product identical to the shape of the cavity. It enables mass production and complex shapes, and various factors such as resin temperature, mold temperature, injection speed, and pressure affect product quality. In the data collected at the manufacturing site there is plenty of data on good products but very little on defective products, resulting in a serious class imbalance. To resolve this imbalance efficiently, undersampling, oversampling, and composite sampling are usually applied. In this study we apply oversampling techniques that amplify the minority class toward the size of the majority class, such as random oversampling (ROS), the synthetic minority oversampling technique (SMOTE), and adaptive synthetic sampling (ADASYN), as well as composite sampling that combines undersampling and oversampling; for composite sampling, SMOTE+ENN and SMOTE+Tomek were used. Artificial neural networks are used to predict product quality; in particular, MLP and RNN models are applied, which require the optimization of various parameters. We therefore propose a simulated annealing (SA) technique that jointly optimizes the choice of sampling method, the minority-class ratio used for sampling, and the batch size and number of hidden-layer units of the MLP and RNN. The existing sampling methods and the proposed SA method were compared using accuracy, precision, recall, and F1 score to demonstrate the superiority of the proposed method.
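
A condensed sketch of the search described above is shown below, using imbalanced-learn's samplers and scikit-learn's MLPClassifier as the predictor (the paper also tunes an RNN, which is omitted here). The annealing schedule, candidate moves, and synthetic data are assumptions for illustration rather than the authors' settings:

```python
import math
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score
from imblearn.over_sampling import RandomOverSampler, SMOTE, ADASYN
from imblearn.combine import SMOTEENN, SMOTETomek

SAMPLERS = {"ROS": RandomOverSampler, "SMOTE": SMOTE, "ADASYN": ADASYN,
            "SMOTE+ENN": SMOTEENN, "SMOTE+Tomek": SMOTETomek}

def evaluate(cfg, X_tr, y_tr, X_te, y_te):
    """Resample the training set with the chosen method/ratio, train an MLP with
    the chosen batch size and hidden units, and return the F1 score."""
    sampler = SAMPLERS[cfg["sampler"]](sampling_strategy=cfg["ratio"], random_state=0)
    X_rs, y_rs = sampler.fit_resample(X_tr, y_tr)
    clf = MLPClassifier(hidden_layer_sizes=(cfg["hidden"],),
                        batch_size=cfg["batch"], max_iter=300, random_state=0)
    clf.fit(X_rs, y_rs)
    return f1_score(y_te, clf.predict(X_te))

def neighbor(cfg, rng):
    """Perturb one randomly chosen decision variable."""
    new = dict(cfg)
    key = rng.choice(["sampler", "ratio", "batch", "hidden"])
    if key == "sampler":
        new["sampler"] = rng.choice(list(SAMPLERS))
    elif key == "ratio":
        new["ratio"] = min(1.0, max(0.3, cfg["ratio"] + rng.uniform(-0.2, 0.2)))
    elif key == "batch":
        new["batch"] = rng.choice([16, 32, 64, 128])
    else:
        new["hidden"] = rng.choice([16, 32, 64, 128])
    return new

def simulated_annealing(X_tr, y_tr, X_te, y_te, T=1.0, cooling=0.9, steps=30):
    """Accept any improving move; accept a worsening move with probability
    exp(delta / T), then cool the temperature geometrically."""
    rng = random.Random(0)
    cur = {"sampler": "SMOTE", "ratio": 0.5, "batch": 32, "hidden": 32}
    cur_f1 = evaluate(cur, X_tr, y_tr, X_te, y_te)
    best, best_f1 = cur, cur_f1
    for _ in range(steps):
        cand = neighbor(cur, rng)
        f1 = evaluate(cand, X_tr, y_tr, X_te, y_te)
        if f1 > cur_f1 or rng.random() < math.exp((f1 - cur_f1) / T):
            cur, cur_f1 = cand, f1
            if f1 > best_f1:
                best, best_f1 = cand, f1
        T *= cooling
    return best, best_f1

if __name__ == "__main__":
    # ~5% defective class, mimicking the good/defect imbalance at the plant
    X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    best, f1 = simulated_annealing(X_tr, y_tr, X_te, y_te)
    print("best configuration:", best, "F1:", round(f1, 3))
```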

Reduction of Polyspermy in Porcine in vitro Fertilization by Modified Swim-UP Method

  • Park, C.H.;Koo, B.S.;Kim, M.G.;Yun, J.I.;Son, H.Y.;Lee, S.G.;Lee, C.K.
    • Proceedings of the Korean Society of Developmental Biology Conference / 2003.10a / pp.110-110 / 2003
  • The high incidence of polyspermic fertilization is one of the major causes lowering the overall efficiency of porcine IVF. The common IVF procedure involves co-culturing both gametes in a medium drop, which increases the sperm concentration and the incidence of polyspermy. Therefore, the present study was carried out to increase the efficiency of porcine IVF by reducing polyspermy using a modified swim-up method, which modifies conventional swim-up washing by placing oocytes directly at the time of washing. A sperm pellet was prepared in a tube and mature oocytes were placed on a cell strainer with a 70 µm pore size (Falcon 2350) at the top of the tube. After insemination, the oocytes were stained for examination, and the developmental potential of fertilized embryos was measured to evaluate the feasibility of the method. While penetration rates were similar for both methods (86.67 ± 2.36% vs. 83.33 ± 1.36%), polyspermy was significantly reduced in the modified swim-up method (17.50 ± 1.60%) compared with the control (44.1 ± 3.70%) (p<0.05). Subsequent culture showed a higher rate of blastocyst formation in the modified swim-up method (20.44 ± 0.99%) than in the control (15.73 ± 3.26%), although the difference was not statistically significant. These results suggest that, by controlling the number of spermatozoa reaching the oocytes, porcine oocytes might be protected from polyspermy in vitro. The developmental potential of the fertilized embryos could also be improved by increasing the pool of better-quality spermatozoa. Further optimization of the procedure is required before this method can be used in routine porcine IVF.

Improvement in Inefficient Repetition of Gauss Sieve (Gauss Sieve 반복 동작에서의 비효율성 개선)

  • Byeongho Cheon;Changwon Lee;Chanho Jeon;Seokhie Hong;Suhri Kim
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.2 / pp.223-233 / 2023
  • Gauss Sieve is an algorithm for solving the SVP and requires exponential time and space complexity. The termination condition of the sieve is determined by the size of the constructed list and by the number of collisions, which is related to the space complexity. A 'collision' is the state in which a sampled vector is reduced to a vector that is already in the list; if collisions occur more than a certain number of times, the algorithm terminates. When executing previous algorithms, we noticed that unnecessary operations continued even after the shortest vector was found, which means that the existing termination condition is set larger than necessary. In this paper, after identifying the point at which unnecessary operations are repeated, we optimize the number of required operations. Experiments are conducted by adjusting the collision threshold that serves as the termination condition and the distribution from which the sample vectors are generated. According to the experiments, the operation that occupies the largest proportion of the run time decreased by 62.6%, and the space and time complexity decreased by 4.3% and 1.6%, respectively.
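
The collision-based termination discussed above can be seen in a toy version of the Gauss Sieve below. The sampler, the omission of back-reduction of list vectors, and the tiny basis are simplifications for illustration, not the optimized variant studied in the paper:

```python
import random
import numpy as np

def gauss_reduce(v, L):
    """Reduce v against every vector in the list L: subtract the integer multiple
    of w that shortens v, and repeat until no further reduction is possible."""
    changed = True
    while changed:
        changed = False
        for w in L:
            k = round(np.dot(v, w) / np.dot(w, w))
            if k != 0 and np.dot(v - k * w, v - k * w) < np.dot(v, v):
                v = v - k * w
                changed = True
    return v

def gauss_sieve(basis, max_collisions=50, seed=0):
    """Toy Gauss Sieve: sample lattice vectors, reduce them against the list,
    and stop once `max_collisions` samples collapse onto existing vectors."""
    rng = random.Random(seed)
    basis = [np.array(b) for b in basis]
    L, collisions, shortest = [], 0, None
    while collisions < max_collisions:
        # sample a lattice vector as a small random integer combination of the basis
        v = sum(rng.randint(-3, 3) * b for b in basis)
        if not np.any(v):
            continue
        v = gauss_reduce(v, L)
        if not np.any(v) or any(np.array_equal(v, w) or np.array_equal(v, -w) for w in L):
            collisions += 1        # sample collapsed onto the list: a collision
            continue
        L.append(v)
        if shortest is None or np.dot(v, v) < np.dot(shortest, shortest):
            shortest = v
    return shortest, len(L), collisions

if __name__ == "__main__":
    basis = [[7, 2, 0], [3, 9, 1], [1, 4, 8]]   # illustrative 3-dimensional basis
    s, list_size, col = gauss_sieve(basis, max_collisions=30)
    print("shortest vector found:", s, "| list size:", list_size)
```

Lowering `max_collisions` shortens exactly the phase the abstract identifies as wasted: once the list has stabilized around the shortest vector, nearly every new sample collides.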

Techno-economic Analysis of Power To Gas (P2G) Process for the Development of Optimum Business Model: Part 2 Methane to Electricity Production Pathway

  • Partho Sarothi Roy;Young Don Yoo;Suhyun Kim;Chan Seung Park
    • Clean Technology / v.29 no.1 / pp.53-58 / 2023
  • This study summarizes the economic performance of converting excess electricity to hydrogen and methane and then converting it back to electricity using a fuel cell. The methane production process was examined in a previous study; here, the focus is on the conversion of methane to electricity. As part of this study, capital expenditure (CAPEX) is estimated for several plant sizes (0.3, 3, 9, and 30 MW). The study presents a method for the economic optimization of electricity generation using a fuel cell: the CAPEX, operating expenditure (OPEX), and feed cost are used to calculate the discounted cash flow, and the levelized cost of returned electricity (LCORE) is then estimated from that cash flow. The LCORE was found to be 10.2 ¢/kWh when a 9 MW electricity-generating fuel cell was used, with a methane production plant size of 1,500 Nm3/hr, a methane production cost of $11.47/mcf, a storage cost of $1/mcf, and a fuel cell efficiency of 54% as the baseline. A sensitivity analysis varying the storage cost, fuel cell efficiency, and excess electricity cost by ±20% found fuel cell efficiency to be the most dominant parameter for LCORE. Therefore, for the best cost performance, fuel cell manufacturing and efficiency need to be carefully evaluated. This study provides a general guideline for cost-performance comparison based on LCORE.
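
The LCORE calculation is essentially a levelized-cost ratio of discounted costs to discounted energy. A generic sketch is given below; every cost figure is hypothetical rather than taken from the paper's CAPEX/OPEX estimates:

```python
def levelized_cost_of_returned_electricity(capex, annual_opex, annual_fuel_cost,
                                            annual_kwh, years=20, discount_rate=0.08):
    """Generic levelized-cost calculation: discount every yearly cost and every
    yearly kWh back to present value, then take the ratio (result in $/kWh)."""
    pv_costs = capex
    pv_energy = 0.0
    for t in range(1, years + 1):
        df = 1.0 / (1.0 + discount_rate) ** t
        pv_costs += (annual_opex + annual_fuel_cost) * df
        pv_energy += annual_kwh * df
    return pv_costs / pv_energy

if __name__ == "__main__":
    # Illustrative 9 MW fuel-cell plant at ~90% capacity factor
    annual_kwh = 9_000 * 8760 * 0.9          # kW * hours/year * capacity factor
    lcore = levelized_cost_of_returned_electricity(
        capex=30_000_000,            # hypothetical installed cost, $
        annual_opex=1_500_000,       # hypothetical O&M, $/year
        annual_fuel_cost=2_800_000,  # hypothetical methane feed + storage, $/year
        annual_kwh=annual_kwh)
    print(f"LCORE ~ {lcore * 100:.1f} cents/kWh")
```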

A Study on Optimizing Disk Utilization of Software-Defined Storage (소프트웨어 정의 스토리지의 디스크 이용을 최적화하는 방법에 관한 연구)

  • Lee, Jung Il;Choi, YoonA;Park, Ju Eun;Jang, Minyoung
    • KIPS Transactions on Computer and Communication Systems / v.12 no.4 / pp.135-142 / 2023
  • Recently, as digital transformation expands, many companies are using public cloud services or building their own data centers. Software-defined storage is a key solution for storing data on cloud platforms, and its use is expanding worldwide. It has the advantages of virtualizing all storage resources into a single storage pool and supporting flexible scale-out. On the other hand, since object sizes vary, disk utilization becomes imbalanced, which may cause failures. In this study, a method of redistributing objects by optimizing disk weights based on storage state information was proposed to solve this imbalance, and experimental results were presented. In the experiment, the maximum disk utilization decreased by 10 percentage points, from 89% to 79%. By optimizing disk utilization, failures can be prevented and more data can be stored.
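
A minimal sketch of the reweighting idea follows, in the style of a Ceph-like reweight-by-utilization: disks above the mean utilization shed weight and disks below it gain weight, so placement (proportional to weight) drifts back toward balance. The data structure, step size, threshold, and "osd.N" names are illustrative assumptions, not the optimization actually proposed in the paper:

```python
def reweight_by_utilization(disks, target=None, step=0.05, threshold=0.05):
    """Lower the weight of over-utilized disks and raise under-utilized ones.
    `disks` maps disk id -> {'weight': w, 'used': bytes, 'size': bytes}."""
    utils = {d: info["used"] / info["size"] for d, info in disks.items()}
    avg = target if target is not None else sum(utils.values()) / len(utils)
    new_weights = {}
    for d, info in disks.items():
        dev = utils[d] - avg
        w = info["weight"]
        if dev > threshold:        # over-utilized: shed data
            w = max(0.0, w * (1.0 - step))
        elif dev < -threshold:     # under-utilized: attract data
            w = w * (1.0 + step)
        new_weights[d] = round(w, 4)
    return new_weights

if __name__ == "__main__":
    disks = {
        "osd.0": {"weight": 1.0, "used": 890, "size": 1000},   # 89% full
        "osd.1": {"weight": 1.0, "used": 700, "size": 1000},
        "osd.2": {"weight": 1.0, "used": 610, "size": 1000},
    }
    print(reweight_by_utilization(disks))
```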

A Study on Development of Superconducting Wires for a Fault Current Limiter (한류기용 초전도 선재개발에 관한 연구)

  • Hwang, Kwang-Soo;Lee, Hun-Ju;Moon, Chae-Joo
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.2 / pp.279-290 / 2022
  • A superconducting fault current limiter (SFCL) is a power device that exploits the superconducting transition to limit currents within a few milliseconds, enhancing the flexibility, stability, and reliability of the power system. With a fast phase transition, high critical current densities, and little AC loss, high-temperature superconducting (HTS) wires are suitable for a resistive-type SFCL. However, because optimization research has been lacking and existing approaches relied on the as-manufactured characteristics of the wire, HTS wires have been inefficient to apply directly to a fault current limiter in terms of design and capacity. Therefore, to develop a wire suitable for an SFCL, it is necessary to enhance critical-current uniformity, select optimal stabilizer materials, and develop uniform stabilizer layering technology. The high-temperature superconducting wires manufactured in this study achieved an average critical current of 804 A per 12 mm width over a length of 710 m, securing economic performance through improved efficiency, lower cost, and reduced size.

An optimized ANFIS model for predicting pile pullout resistance

  • Yuwei Zhao;Mesut Gor;Daria K. Voronkova;Hamed Gholizadeh Touchaei;Hossein Moayedi;Binh Nguyen Le
    • Steel and Composite Structures / v.48 no.2 / pp.179-190 / 2023
  • Many recent attempts have sought accurate prediction of pile pullout resistance (Pul) using classical machine learning models. This study offers an improved methodology for this objective. An adaptive neuro-fuzzy inference system (ANFIS), a popular predictor, is trained by a capable metaheuristic strategy, namely the equilibrium optimizer (EO), to predict Pul. The data used are collected from laboratory investigations in the previous literature. First, two optimal configurations of EO-ANFIS are selected after a sensitivity analysis. They are then evaluated and compared with classical ANFIS and two neural-based models using well-accepted accuracy indicators. The results of all five models were in good agreement with the laboratory Puls (all correlations > 0.99). However, both EO-ANFIS models not only outperformed the neural benchmarks but also achieved higher accuracy than the classical version; utilizing the EO is therefore recommended for optimizing this predictive tool. Furthermore, a comparison between the two selected EO-ANFIS models, one of which employs a larger population, revealed that the model with a population size of 75 is more efficient than the one with 300: the root mean square error and optimization time were 19.6272 and 1715.8 seconds for EO-ANFIS (75), versus 23.4038 and 9298.7 seconds for EO-ANFIS (300).
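
A full EO-trained ANFIS is beyond a short sketch, but the workflow, tuning the parameters of a small fuzzy-style predictor with a population-based metaheuristic and comparing population sizes by RMSE and run time, can be outlined as below. Differential evolution stands in for the equilibrium optimizer, a zero-order Takagi-Sugeno model stands in for ANFIS, and the data and population sizes are synthetic, not the paper's laboratory dataset or its 75/300 settings:

```python
import time
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (80, 2))                                   # two toy features
y = 3 * X[:, 0] + np.sin(4 * X[:, 1]) + rng.normal(0, 0.05, 80)  # synthetic target

N_RULES = 3

def predict(params, X):
    """Zero-order Takagi-Sugeno model: Gaussian rule firing strengths and a
    weighted average of per-rule constant consequents."""
    p = params.reshape(N_RULES, 2 * X.shape[1] + 1)
    centers, widths, consts = p[:, :2], np.abs(p[:, 2:4]) + 1e-3, p[:, 4]
    w = np.exp(-((X[:, None, :] - centers[None]) ** 2
                 / (2 * widths[None] ** 2)).sum(-1))
    return (w * consts).sum(1) / (w.sum(1) + 1e-9)

def rmse(params):
    return np.sqrt(np.mean((predict(params, X) - y) ** 2))

for popsize in (15, 60):          # stand-in for the paper's 75 vs 300 comparison
    t0 = time.time()
    res = differential_evolution(rmse, bounds=[(-5, 5)] * (N_RULES * 5),
                                 popsize=popsize, maxiter=200, seed=0, tol=1e-6)
    print(f"popsize={popsize:3d}  RMSE={res.fun:.4f}  time={time.time() - t0:.1f}s")
```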

Optimization for Roughness Coefficient of River in Korea - Review of Application and Han River Project Water Elevation - (실측 자료를 이용한 국내하천의 조도계수 산정 -적용성 및 한강의 계획홍수위 검토-)

  • Kim, Jooyoung;Lee, Jong-Kyu;Ahn, Jong-Seo
    • KSCE Journal of Civil and Environmental Engineering Research / v.30 no.6B / pp.571-578 / 2010
  • Manning's roughness coefficients were reevaluated for the computation of river flow in the Han, Nakdong, and Geum Rivers. The coefficients were estimated by two methods: one assumes that roughness is primarily a function of grain diameter, and the other reflects the finding that roughness may vary significantly with flow discharge. The roughness coefficients adopted in each river improvement master plan were compared with those obtained in this study using FLDWAV, and their applicability was reviewed using the FLDWAV and HEC-RAS models. The design flood water levels computed by these models with the roughness coefficients proposed in this study showed good agreement with the time-varying measurements. The roughness coefficients computed with the FLDWAV model showed almost no correlation with hydraulic characteristics such as grain size and river depth. Finally, the design flood water levels and levee safety of the reach downstream of the Paldang Dam on the Han River were reviewed using the HEC-2 model with the roughness coefficients of this study, and the results indicated that some parts of the existing levees fell short of the required safety.
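
The two estimation routes described above, a grain-size relation and a back-calculation from measured flow, correspond to simple formulas. The sketch below shows both, with an illustrative cross-section and one commonly quoted Strickler-type coefficient rather than the study's calibrated values:

```python
def manning_n_from_flow(Q, area, wetted_perimeter, slope):
    """Back-calculate Manning's n from a measured discharge Q (m^3/s), the
    cross-sectional area (m^2), wetted perimeter (m), and energy slope (m/m):
    n = A * R^(2/3) * S^(1/2) / Q."""
    R = area / wetted_perimeter                 # hydraulic radius
    return area * R ** (2.0 / 3.0) * slope ** 0.5 / Q

def manning_n_from_grain_size(d50_m):
    """Strickler-type estimate from the median grain diameter (m):
    n = d50^(1/6) / 21.1 (one commonly quoted form)."""
    return d50_m ** (1.0 / 6.0) / 21.1

if __name__ == "__main__":
    # illustrative cross-section: 250 m2 area, 120 m wetted perimeter,
    # energy slope 0.0005, measured discharge 300 m3/s
    print("n from flow data  :", round(manning_n_from_flow(300, 250, 120, 0.0005), 4))
    # illustrative median bed-material size of 2 mm
    print("n from grain size :", round(manning_n_from_grain_size(0.002), 4))
```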