• Title/Summary/Keyword: Energy efficiency optimization


Distribution System Reconfiguration Using the PC Cluster based Parallel Adaptive Evolutionary Algorithm

  • Mun Kyeong-Jun;Lee Hwa-Seok;Park June Ho;Hwang Gi-Hyun;Yoon Yoo-Soo
    • KIEE International Transactions on Power Engineering / v.5A no.3 / pp.269-279 / 2005
  • This paper presents an application of the parallel Adaptive Evolutionary Algorithm (AEA) to search for an optimal solution to the reconfiguration problem in distribution systems. The aim of the reconfiguration is to determine the appropriate switch positions to be opened for loss minimization in radial distribution systems, which is a discrete optimization problem. The problem has many constraints, and it is very difficult to find the optimal switch positions because of the numerous local minima. In this investigation, a parallel AEA was developed for the reconfiguration of the distribution system. In the parallel AEA, a genetic algorithm (GA) and an evolution strategy (ES) are used in an adaptive manner to combine the merits of the two evolutionary algorithms: the global search capability of the GA and the local search capability of the ES. In the reproduction procedure, the proportions of the population produced by the GA and the ES are adaptively modulated according to the fitness. After the AEA operations, the best solutions of the AEA processors are transferred to the neighboring processors. For parallel computing, a PC-cluster system consisting of 8 PCs was developed. Each PC employs a 2 GHz Pentium IV CPU and is connected to the others through switch-based Fast Ethernet. The newly developed algorithm was tested on the distribution systems of the reference paper to verify the usefulness of the proposed method. The simulation results show that the proposed algorithm is efficient and robust for distribution system reconfiguration in terms of solution quality, speedup, efficiency, and computation time.
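
Below is a minimal, self-contained sketch (not the authors' implementation) of the adaptive GA/ES split described in the abstract: each generation, part of the offspring is produced by GA-style crossover and mutation and part by ES-style Gaussian perturbation, and the share given to each operator is re-weighted according to the mean fitness of the children it produced. The continuous toy genome, the placeholder fitness function, and all numeric settings are assumptions; a real reconfiguration study would encode switch states, evaluate losses with a radial power flow, and add the inter-processor migration step mentioned above.

```python
# Minimal sketch (illustrative only) of adaptively splitting offspring between a GA
# and an ES operator, with the split re-weighted by the mean fitness of each group.
import random

POP_SIZE = 40
N_GENES = 10              # hypothetical: one gene per switch-related decision

def fitness(individual):
    # Placeholder objective; a real study would run a radial power-flow loss calculation.
    return -sum((g - 0.5) ** 2 for g in individual)

def ga_child(parents):
    a, b = random.sample(parents, 2)                 # uniform crossover + one mutation
    child = [random.choice(pair) for pair in zip(a, b)]
    child[random.randrange(len(child))] = random.random()
    return child

def es_child(parents):
    p = random.choice(parents)                        # Gaussian perturbation (local search)
    return [min(1.0, max(0.0, g + random.gauss(0, 0.05))) for g in p]

pop = [[random.random() for _ in range(N_GENES)] for _ in range(POP_SIZE)]
ga_share = 0.5
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP_SIZE // 2]
    n_ga = int(POP_SIZE * ga_share)
    ga_kids = [ga_child(parents) for _ in range(n_ga)]
    es_kids = [es_child(parents) for _ in range(POP_SIZE - n_ga)]
    # Adaptive modulation: the operator whose children were fitter on average
    # gets a larger share of the offspring in the next generation.
    mean = lambda kids: sum(map(fitness, kids)) / len(kids) if kids else float("-inf")
    ga_share = min(0.9, max(0.1, ga_share + (0.05 if mean(ga_kids) > mean(es_kids) else -0.05)))
    pop = sorted(ga_kids + es_kids + parents, key=fitness, reverse=True)[:POP_SIZE]

print("best fitness:", fitness(pop[0]), "final GA share:", ga_share)
```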

Cushioning Efficiency Evaluation by using the New Determination of Cushioning Curve in Cushioning Packaging Material Design for Agricultural Products (농산물 포장용 지류완충재의 새로운 완충곡선 구현을 통한 완충성능 평가)

  • Jung, Hyun Mo
    • KOREAN JOURNAL OF PACKAGING SCIENCE & TECHNOLOGY / v.19 no.1 / pp.51-56 / 2013
  • From the time a product is manufactured until it is delivered and ultimately used, it is subjected to some form of handling and transportation. During this process, the product can be exposed to many potential hazards, one of which is damage caused by shocks. In order to design a product-package system that protects the product, the peak acceleration (G force) that causes damage to the product needs to be determined. When a corrugated fiberboard box loaded with products is dropped onto the ground, part of the energy acquired through gravitational acceleration during the free fall is dissipated in the product and the package in various ways. The shock-absorbing characteristics of packaging cushion materials are presented as a family of cushion curves, showing the peak accelerations during impact over a range of static loads for several drop heights. A new method for determining the shock-absorbing characteristics of cushioning materials for protective packaging is described and demonstrated. It is shown that cushion curves can be produced by combining the static compression and impact characteristics of the material. The dynamic factor was determined by an iterative least mean squares (ILMS) optimization technique, in which the discrepancies between the peak acceleration data predicted by the theoretical model and those obtained from the impact tests are minimized. The approach enables an efficient determination of cushion curves from a small number of experimental impact data.
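
A minimal sketch of the ILMS-style fit described above, under assumed forms: a single dynamic factor is adjusted by least squares so that peak accelerations predicted by a stand-in cushion-curve model match measured impact peaks. The model form, the measurement values, and the variable names are illustrative assumptions, not the paper's data or equations.

```python
# Minimal sketch (assumed model and data) of fitting a single "dynamic factor" k_d by
# least squares so that predicted peak accelerations match measured impact-test peaks.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical measurements: static load (kPa), drop height (cm), measured peak G.
static_load = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
drop_height = np.array([30.0, 30.0, 30.0, 30.0, 30.0])
peak_g_meas = np.array([62.0, 48.0, 45.0, 49.0, 58.0])

def peak_g_model(k_d, sigma, h):
    # Assumed U-shaped cushion-curve form scaled by the dynamic factor k_d;
    # this stands in for the static-compression energy relation used in the paper.
    return k_d * h * (1.0 / sigma + sigma / 25.0)

def residuals(params):
    (k_d,) = params
    return peak_g_model(k_d, static_load, drop_height) - peak_g_meas

fit = least_squares(residuals, x0=[1.0])
print("fitted dynamic factor:", fit.x[0])
print("predicted peak G:", peak_g_model(fit.x[0], static_load, drop_height).round(1))
```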


The Study on the Optimization of Premixed Gas Burner and Heat Exchanger (예혼합 가스버너와 열교환기의 최적화 연구)

  • Lee Kang Ju;Jang Gi Hyun;Lee Chang Eon
    • Journal of the Korean Institute of Gas / v.7 no.4 s.21 / pp.7-13 / 2003
  • This study was carried out to optimize the premixed burner and heat exchanger of a condensing gas boiler, which can save energy by utilizing the latent heat of the combustion gas and can reduce pollutants in the exhaust gas. The heat exchanger of the gas boiler was composed of three parts: an upper, a lower, and a coil heat exchanger. The upper heat exchanger was placed outside the premixed burner, and the lower heat exchanger was located under the upper heat exchanger. The coil heat exchanger was wound around the outer surface of the upper and lower heat exchangers. The boiler designed in this research reaches a turn-down ratio of 4:1 in the equivalence ratio range of 0.75~0.8 and a thermal efficiency of 97%. The NOx and CO emissions were under 20 ppm and 140 ppm, respectively, at an equivalence ratio of 0.8. When the diameter of the burner was reduced from 60 mm to 50 mm, the CO emission was remarkably reduced, by about 50 ppm.


Keypoint-based Fast CU Depth Decision for HEVC Intra Coding (HEVC 인트라 부호화를 위한 특징점 기반의 고속 CU Depth 결정)

  • Kim, Namuk;Lim, Sung-Chang;Ko, Hyunsuk;Jeon, Byeungwoo
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.2 / pp.89-96 / 2016
  • High Efficiency Video Coding (MPEG-H HEVC/ITU-T H.265) is the newest video coding standard and adopts a quadtree-structured coding unit (CU). The quadtree structure splits a CU adaptively, and the optimum CU depth can be determined by rate-distortion optimization. Such HEVC encoding requires very high computational complexity for the CU depth decision. Motivated by the facts that blob detection, a well-known algorithm in computer vision, detects keypoints in pictures and that the CU depth decision needs to consider the distribution of high-frequency energy, in this paper we propose to utilize these keypoints for a fast CU depth decision. Experimental results show that 20% of the encoding time can be saved with only a slight BDBR increase of 0.45% in the all-intra case.
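
A minimal sketch (not the paper's encoder integration) of the keypoint-based early-termination idea: blobs are detected in each CTU, CTUs without keypoints are assumed smooth and kept at a shallow CU depth, and CTUs containing keypoints are allowed a deeper split search. The depth heuristic, thresholds, and toy frame below are assumptions for illustration.

```python
# Minimal sketch of limiting the CU depth search per CTU based on detected blob keypoints.
import numpy as np
from skimage.feature import blob_log

CTU = 64  # HEVC coding tree unit size

def max_cu_depth(ctu_block, full_depth=3):
    """Return the maximum CU depth to search for one CTU (luma samples in [0, 1])."""
    keypoints = blob_log(ctu_block, min_sigma=1, max_sigma=8, num_sigma=5,
                         threshold=0.05)           # rows of (y, x, sigma) per blob
    if len(keypoints) == 0:
        return 0                                   # smooth block: skip deeper splits
    # Heuristic (assumption): more keypoints -> more high-frequency detail -> deeper search.
    return min(full_depth, 1 + len(keypoints) // 4)

# Toy frame: flat background with a few bright spots in one CTU.
frame = np.zeros((128, 128))
frame[20:24, 20:24] = 1.0
frame[40:43, 70:73] = 1.0

for y in range(0, frame.shape[0], CTU):
    for x in range(0, frame.shape[1], CTU):
        depth = max_cu_depth(frame[y:y + CTU, x:x + CTU])
        print(f"CTU at ({y},{x}): search depth 0..{depth}")
```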

Corn stover usage and farm profit for sustainable dairy farming in China

  • He, Yuan;Cone, John W.;Hendriks, Wouter H.;Dijkstra, Jan
    • Animal Bioscience / v.34 no.1 / pp.36-47 / 2021
  • Objective: This study determined the optimal ratio of whole plant corn silage (WPCS) to corn stover (stems+leaves) silage (CSS) (WPCS:CSS) to achieve the greatest profit for dairy farmers and evaluated its consequences for the corn available for other purposes, enteric methane production, and milk nitrogen efficiency (MNE) at varying milk production levels. Methods: An optimization model was developed. The chemical composition, rumen undegradable protein, and metabolizable energy (ME) of WPCS and CSS from 4 cultivars were determined to provide data for the model. Results: At production levels of 0, 10, 20, and 30 kg milk/cow/d, the WPCS:CSS that maximized the profit of dairy farmers was 16:84, 22:78, 44:56, and 88:12, respectively, and the land area needed to grow corn plants was 4.5, 31.4, 33.4, and 30.3 ha, respectively. The amount of corn available (ton DM/ha/yr) for other purposes saved from this land area decreased with higher-producing cows. However, compared with high-producing cows (30 kg/d milk), more low-producing cows (10 kg/d milk) and more land area to grow corn and soybeans were needed to produce the same total amount of milk. Extra land is then available to grow corn at a higher milk production level, leading to more corn available for other purposes. Increasing the ME content of CSS decreased the land area needed, increased the profit of dairy farms, and provided more corn for other purposes. At the optimal WPCS:CSS, MNE and enteric methane production were greater, but methane production per kg milk was lower, for high-producing cows. Conclusion: The WPCS:CSS that maximizes the profit of dairy farms increases with decreased milk production levels. At a fixed total amount of milk produced, high-producing cows increase the corn available for other purposes. At the optimal WPCS:CSS, methane emission intensity is smaller and MNE is greater for high-producing cows.
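
A minimal sketch of the kind of linear program behind such a ration optimization, with entirely hypothetical coefficients (the paper's actual model, prices, and nutrient requirements are not reproduced here): choose the WPCS, CSS, and concentrate amounts that meet an assumed metabolizable-energy requirement at minimum feed cost under a dry-matter intake cap, which for a fixed milk yield is equivalent to maximizing profit, and report the resulting WPCS:CSS ratio.

```python
# Minimal sketch (hypothetical numbers) of choosing a WPCS:CSS blend by linear programming.
from scipy.optimize import linprog

# Decision variables: kg DM/d of [WPCS, CSS, concentrate] fed per cow (all assumed).
cost    = [0.10, 0.04, 0.30]     # feed cost, $/kg DM
me      = [10.5, 7.5, 12.5]      # ME content, MJ/kg DM
me_req  = 180.0                  # MJ/d for maintenance + an assumed milk yield
dmi_max = 20.0                   # kg DM/d intake cap (assumed)

# linprog minimizes, so minimize feed cost subject to meeting the ME requirement.
res = linprog(
    c=cost,
    A_ub=[[-m for m in me],      # -ME . x <= -me_req  (ME supplied >= requirement)
          [1.0, 1.0, 1.0]],      #  total DM intake <= dmi_max
    b_ub=[-me_req, dmi_max],
    bounds=[(0, None)] * 3,
)
wpcs, css, conc = res.x
print(f"WPCS {wpcs:.1f}, CSS {css:.1f}, concentrate {conc:.1f} kg DM/d")
print(f"WPCS:CSS = {100*wpcs/(wpcs+css):.0f}:{100*css/(wpcs+css):.0f}, feed cost ${res.fun:.2f}/d")
```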

Development of AI-based Cognitive Production Technology for Digital Datadriven Agriculture, Livestock Farming, and Fisheries (디지털 데이터 중심의 AI기반 환경인지 생산기술 개발 방향)

  • Kim, S.H.
    • Electronics and Telecommunications Trends / v.36 no.1 / pp.54-63 / 2021
  • Since the recent COVID-19 pandemic, countries have been strengthening trade protection for their own security, and the importance of securing strategic materials such as food is drawing attention. In addition to the cultural aspects, the global preference for food produced in Korea is increasing because of the Korean Wave, so the Korean food industry can be developed into a high-value-added export food industry. Currently, Korea has a low self-sufficiency rate for foodstuffs apart from rice. Korea also suffers from problems arising from population decline, aging, rapid climate change, and various animal and plant diseases. It is necessary to develop technologies that can overcome food production structures that are highly dependent on imports and to foster them into export-oriented system industries. Globally, agricultural technologies are being actively transformed through the accumulation of data, e.g., environmental data, production information, and distribution and consumption information from climate and production facilities, and through the expanding adoption of the latest information and communication technologies such as big data and artificial intelligence. However, because living organisms are involved, long-term research and investment must come first. Compared to other industries, the food industry must overcome poor investment efficiency in production and labor environments, with respect to production cost, equipment after-care, development tailored to the needs of field workers, and service models suited to production facilities of various sizes. This paper discusses the flow of domestic and international technologies that form the core field issues of the 4th Industrial Revolution in agriculture, livestock farming, and fisheries. It also explains environment-aware production technologies centered on sustainable intelligence platforms that link climate change response, energy cost optimization, and mass production for unmanned production, distribution, and consumption, using unstructured data obtained from detection and growth measurement.

Optimization of Fractionation Conditions for Natural Organic Matter in Water by DAX-8 Resin and its Application to Environmental Samples (DAX-8 레진의 수중 자연유기물의 분획조건 최적화 및 환경시료에의 적용)

  • Lim, Hyebin;Hur, Jin;Kim, Joowon;Shin, Hyunsang
    • Journal of Korean Society on Water Environment / v.38 no.3 / pp.133-142 / 2022
  • Natural organic matter (NOM) is a heterogeneous mixture of organic matter with various polarities and molecular weights in the aquatic environment. This study investigated the effects of the separation conditions (resin volume, organic matter, etc.) and of the repeated use of the resin on the fractionation of organic components in the DAX resin fractionation method. The distribution characteristics of the organic components (hydrophilic [Hi], hydrophobic acid [HoA], and hydrophobic neutral [HoN]) under the derived fractionation conditions were also analyzed. Constant fractionation results (i.e., the HoA/Hi ratio) were obtained for column capacity factors (i.e., the packed resin volume) in the range of 50 to 100. The resin-packed column maintained a constant separation efficiency for up to two repeated uses. The above conditions were applied to wastewater and stream water samples (before and after rainfall). The results showed that, depending on the industrial wastewater classification, the concentration of organic matter in the wastewater effluent was 2-15 times lower than in the influent, with an increased ratio of hydrophilicity to hydrophobicity (Ho/Hi). In particular, HoN was found to have a high content distribution, 10.2-50.4% of the total dissolved organic matter (DOM), in the effluents. For the stream water, the content of Hi or HoN increased significantly after rainfall, suggesting a correlation with the distribution characteristics of pollutants from the stream watershed. The results provide useful data to enhance the reliability of the DAX resin fractionation method and its application to environmental samples.

A Study of Double Dark Photons Produced by Lepton Colliders using High Performance Computing

  • Park, Kihong;Kim, Kyungho;Cho, Kihyeon
    • Journal of Astronomy and Space Sciences / v.39 no.1 / pp.1-10 / 2022
  • The universe is thought to be filled not only with Standard Model (SM) matter but also with dark matter, which is thought to play a major role in its construction. However, the identity of dark matter is as yet unknown, and it is searched for with various methods, from astrophysical observation to particle collider experiments. Because its cross-section is a thousand times smaller than that of SM particles, dark matter research requires a large amount of data processing; therefore, optimization and parallelization in high performance computing are required. Dark matter in a hypothetical hidden sector is thought to be connected to dark photons, which carry forces similar to the photon in electromagnetism. In a recent analysis, it was studied using the decays of a single dark photon at collider experiments. Based on this, we studied double dark photon decays at lepton colliders. The signal channels are e+e- → A'A' and e+e- → A'A'γ, where the dark photon A' decays into a dimuon. These signal channels are based on the theory that dark photons decay only into heavy charged leptons, which can explain the muon magnetic moment anomaly. We scanned the cross-section as a function of the dark photon mass. MadGraph5 was used to generate events based on a simplified model. Additionally, to obtain the maximum expected number of events for the double dark photon channel, the detector efficiency at several center-of-mass (CM) energies was studied using Delphes and MadAnalysis5 for performance comparison. The results of this study will contribute to the search for double dark photon channels at lepton colliders.
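
For illustration, a minimal sketch of how the expected number of events is typically obtained from a scanned cross-section and the detector efficiency (N = σ × ∫L dt × ε); the masses, cross-sections, efficiencies, and luminosity below are placeholder values, not results from this study.

```python
# Minimal sketch (illustrative numbers only) of the expected event yield:
# N_expected = cross-section x integrated luminosity x selection efficiency.
def expected_events(xsec_fb, lumi_fb_inv, efficiency):
    """xsec_fb: signal cross-section [fb]; lumi_fb_inv: integrated luminosity [fb^-1]."""
    return xsec_fb * lumi_fb_inv * efficiency

# Hypothetical scan over the dark photon mass for e+e- -> A'A', A' -> mu+ mu-.
scan = [
    # (m_A' [GeV], cross-section [fb], efficiency) -- placeholder values
    (0.5, 12.0, 0.35),
    (1.0,  8.0, 0.40),
    (2.0,  3.5, 0.42),
]
lumi = 50.0  # fb^-1, assumed integrated luminosity
for mass, xsec, eff in scan:
    print(f"m_A' = {mass:.1f} GeV: expected events = {expected_events(xsec, lumi, eff):.0f}")
```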

Can Artificial Intelligence Boost Developing Electrocatalysts for Efficient Water Splitting to Produce Green Hydrogen?

  • Jaehyun Kim;Ho Won Jang
    • Korean Journal of Materials Research / v.33 no.5 / pp.175-188 / 2023
  • Water electrolysis holds great potential as a method for producing renewable hydrogen fuel at large scale and for replacing the fossil fuels responsible for greenhouse gas emissions and global climate change. To reduce the cost of hydrogen and make it competitive against fossil fuels, the efficiency of green hydrogen production should be maximized. This requires superior electrocatalysts that reduce the reaction energy barriers. The development of catalytic materials has mostly relied on empirical, trial-and-error methods because of the complicated, multidimensional, and dynamic nature of catalysis, requiring significant time and effort to find optimized multicomponent catalysts under a variety of reaction conditions. The ultimate goal for all researchers in the materials science and engineering field is the rational and efficient design of materials with the desired performance. Discovering and understanding new catalysts with desired properties is at the heart of materials science research. This process can benefit from machine learning (ML), given the complex nature of catalytic reactions and the vast range of candidate materials. This review summarizes recent achievements in catalyst discovery for the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER). The basic concepts of ML algorithms and practical guides for materials scientists are also presented. The challenges and strategies of applying ML are discussed; these should be addressed collaboratively by the materials science and ML communities. The ultimate integration of ML into catalyst development is expected to accelerate the design, discovery, optimization, and interpretation of superior electrocatalysts, realizing a carbon-free ecosystem based on green hydrogen.
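
As a toy illustration of the ML-assisted screening workflow the review discusses (not an example taken from the review itself), the sketch below trains a regression model on simple, assumed composition descriptors to predict an OER overpotential and then ranks hypothetical candidates by the predicted value; the descriptors, target function, and data are synthetic.

```python
# Minimal sketch (synthetic data) of descriptor-based catalyst screening with a regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Descriptors (assumed): [d-electron count, electronegativity, ionic radius (Angstrom)]
X_train = rng.uniform([3, 1.2, 0.5], [9, 2.5, 1.0], size=(60, 3))
# Toy target: overpotential (V) with a volcano-like dependence on the d-electron count.
y_train = 0.30 + 0.02 * (X_train[:, 0] - 6.5) ** 2 + 0.05 * rng.standard_normal(60)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Screen hypothetical candidates for the lowest predicted overpotential.
candidates = rng.uniform([3, 1.2, 0.5], [9, 2.5, 1.0], size=(1000, 3))
pred = model.predict(candidates)
best = candidates[np.argmin(pred)]
print(f"best predicted overpotential: {pred.min():.3f} V at descriptors {best.round(2)}")
```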

Internal Dosimetry: State of the Art and Research Needed

  • Francois Paquet
    • Journal of Radiation Protection and Research / v.47 no.4 / pp.181-194 / 2022
  • Internal dosimetry is a discipline that brings together a set of knowledge, tools, and procedures for calculating the dose received after the incorporation of radionuclides into the body. Several steps are necessary to calculate the committed effective dose (CED) for workers or members of the public. Each step uses the best available knowledge on the biokinetics of radionuclides, the deposition of energy in organs and tissues, the effectiveness of radiation in causing stochastic effects, and the contributions of individual organs and tissues to the overall detriment from radiation. In all these fields, knowledge is abundant and supported by many works initiated several decades ago. This makes the CED a very robust quantity, representing the exposure of reference persons in reference exposure situations, to be used for optimization and for assessing compliance with dose limits. However, the CED suffers from certain limitations, accepted by the International Commission on Radiological Protection (ICRP) for reasons of simplification. Some of these limitations deserve to be overcome, and the ICRP is continuously working on this. Beyond the efforts to make the CED an even more reliable and precise tool, there is an increasing demand for personalized dosimetry, particularly in the medical field. To respond to this demand, the tools currently available in dosimetry can be adjusted. However, this would require coupling these efforts with a better assessment of individual risk, which would then have to consider the physiology of the persons concerned as well as their lifestyle and medical history. Dosimetry and risk assessment are closely linked and can only be developed in parallel. This paper presents the state of the art of internal dosimetry knowledge and the limitations to be overcome, both to make the CED more precise and to develop other dosimetric quantities that would make it possible to better approximate the individual dose.
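
For reference, the committed effective dose combines committed equivalent doses to organs and tissues with ICRP tissue and radiation weighting factors; a standard form of the definition (not spelled out in the abstract) is:

```latex
% Committed effective dose over the integration period \tau
% (50 y for workers; up to age 70 for members of the public)
E(\tau) = \sum_T w_T \, H_T(\tau), \qquad
H_T(\tau) = \sum_R w_R \int_{t_0}^{t_0 + \tau} \dot{D}_{T,R}(t)\, dt
```

Here $w_T$ and $w_R$ are the tissue and radiation weighting factors and $\dot{D}_{T,R}(t)$ is the absorbed dose rate in tissue T from radiation type R following an intake at time $t_0$.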