• Title/Summary/Keyword: Optimization Process (최적화과정)


Improving Physical Fouling Tolerance of PES Filtration Membranes by Using Double-layer Casting Methods (PES 여과막의 물리적 막오염 개선을 위한 기공 구조 개선 연구)

  • Chang-Hun Kim;Youngmin Yoo;In-Chul Kim;Seung-Eun Nam;Jung-Hyun Lee;Youngbin Baek;Young Hoon Cho
    • Membrane Journal
    • /
    • v.33 no.4
    • /
    • pp.191-200
    • /
    • 2023
  • Polyethersulfone (PES) is a widely employed membrane material for water and industrial purification applications owing to its hydrophilicity and ease of phase separation. However, PES membranes and filters prepared using the nonsolvent-induced phase separation method often suffer significant flux decline due to pore clogging and cake-layer formation on the dense membrane surfaces. Our investigation revealed that tight microfiltration or loose ultrafiltration membranes can be subject to physical fouling because a dense skin layer forms on the bottom side when water intrudes into the gap between the shrunken membrane and the substrate. To investigate the effect of bottom-surface porosity on membrane fouling, two membranes with the same selective layers but different sub-layer structures were prepared using single- and double-layer casting methods, respectively. The double-layered PES membrane with a highly porous bottom surface showed higher flux and better physical fouling tolerance than the pristine single-layer membrane. This study highlights the importance of physically optimizing the membrane structure to prevent membrane fouling.

Leg Fracture Recovery Monitoring Simulation using Dual T-type Defective Microstrip Patch Antenna (쌍 T-형 결함 마이크로스트립 패치 안테나를 활용한 다리 골절 회복 모니터링 모의실험)

  • Byung-Mun Kim;Lee-Ho Yun;Sang-Min Lee;Yeon-Taek Park;Jae-Pyo Hong
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.4
    • /
    • pp.587-594
    • /
    • 2023
  • In this paper, we present the design and optimization process of an on-body microstrip patch antenna with a paired T-type defect for monitoring fracture recovery of human legs. The antenna is designed to be light, thin, and compact while improving return-loss and bandwidth performance by adjusting the size of the T-type defect. The human leg is modeled as a five-layer dielectric structure, and the complex permittivity of each layer is calculated using four-pole Cole-Cole model parameters. In a normal case without bone fracture, the return loss of the on-body antenna is -66.71 dB at 4.0196 GHz, and the return-loss difference ΔS11 is 37.95 dB when the callus layer has a length of 10.0 mm, a width of 1.0 mm, and a height of 2.0 mm. A third-degree polynomial is presented to predict the height of the callus layer from the change in return loss, and the polynomial shows a very high predictive fit, with RSS = 1.4751, R2 = 0.9988246, and p-value = 0.0001841.
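A third-degree polynomial mapping return-loss change back to callus height, as described above, can be sketched with `numpy.polyfit`. The data points below are hypothetical stand-ins for the paper's simulation sweep; only the 37.95 dB / 2.0 mm point comes from the abstract.

```python
import numpy as np

# Hypothetical (callus height, delta-S11) pairs standing in for the paper's
# simulation sweep; only the 2.0 mm / 37.95 dB point comes from the abstract.
heights = np.array([0.0, 0.5, 1.0, 1.5, 2.0])         # callus height (mm)
delta_s11 = np.array([0.0, 11.2, 21.5, 30.4, 37.95])  # return-loss change (dB)

# Third-degree polynomial predicting callus height from the measured delta-S11.
coeffs = np.polyfit(delta_s11, heights, deg=3)
predict_height = np.poly1d(coeffs)

# Goodness of fit on the sample points (RSS and R-squared, as in the abstract).
residuals = heights - predict_height(delta_s11)
rss = float(np.sum(residuals ** 2))
r_squared = 1.0 - rss / float(np.sum((heights - heights.mean()) ** 2))
```

With real sweep data, `rss` and `r_squared` would correspond to the RSS and R2 values reported in the abstract.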

Forecasting Korean CPI Inflation (우리나라 소비자물가상승률 예측)

  • Kang, Kyu Ho;Kim, Jungsung;Shin, Serim
    • Economic Analysis
    • /
    • v.27 no.4
    • /
    • pp.1-42
    • /
    • 2021
  • The outlook for Korea's consumer price inflation rate has a profound impact not only on the Bank of Korea's operation of its inflation-targeting system but also on the overall economy, including the bond market and private consumption and investment. This study presents predictions of consumer price inflation in Korea for the next three years. To this end, model selection is first performed based on the out-of-sample predictive power of autoregressive distributed lag (ADL) models, AR models, small-scale vector autoregressive (VAR) models, and large-scale VAR models. Since there are many potential predictors of inflation, a Bayesian variable selection technique is introduced for 12 macro variables, and a careful tuning process is performed to improve predictive power. In the case of the VAR models, the Minnesota prior is applied to mitigate the curse of dimensionality. Looking at the long-term and short-term out-of-sample predictions for the last five years, the ADL model is generally superior to the competing models in both point and distribution prediction. Combining the predictions from the above models, the inflation rate is expected to maintain its current level of around 2% until the second half of 2022 and to drop to around 1% from the first half of 2023.
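An ADL regression of the kind the study selects can be sketched in a few lines. The series below are synthetic, and the Bayesian variable selection over 12 macro predictors is not reproduced; this only shows an ADL(1,1) fit by OLS and a one-step-ahead forecast.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly series standing in for inflation (y) and one macro
# predictor (x); the paper's actual data and predictor set are not used here.
T = 120
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.3 * x[t - 1] + 0.1 * rng.normal()

# ADL(1,1): y_t = c + a*y_{t-1} + b*x_{t-1} + e_t, estimated by OLS.
Y = y[1:]
X = np.column_stack([np.ones(T - 1), y[:-1], x[:-1]])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
c, a, b = beta

# One-step-ahead forecast from the last observed values.
forecast = c + a * y[-1] + b * x[-1]
```

Out-of-sample evaluation, as in the paper, would re-estimate the coefficients on an expanding window and compare forecast errors across model classes.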

Optimizing Graphene Growth on the Electrolytic Copper Foils by Controlling Surface Condition and Annealing Procedure (전해구리막의 표면 조건과 어닐링 과정을 통한 그래핀 성장 최적화)

  • Woo Jin Lee;Ha Eun Go;Tae Rim Koo;Jae Sung Lee;Joon Woo Lee;Soun Gi Hong;Sang-Ho Kim
    • Journal of the Korean institute of surface engineering
    • /
    • v.56 no.3
    • /
    • pp.192-200
    • /
    • 2023
  • Graphene, a two-dimensional material, has shown great potential in a variety of applications, including microelectronics, optoelectronics, and graphene-based batteries, due to its excellent electrical conductivity. However, the production of large-area, high-quality graphene remains a challenge. In this study, we investigated graphene growth on electrolytic copper foil using thermal chemical vapor deposition (TCVD) to achieve quality comparable to cold-rolled copper substrates at a lower cost. The combined effects of pre-annealing time, graphenization temperature, and hydrogen partial pressure on graphene coverage and domain size were analyzed and correlated with the roughness and crystallographic texture of the copper substrate. Our results show that controlling the crystallographic texture of copper substrates through annealing is an effective way to improve graphene growth, potentially leading to more efficient and cost-effective graphene production. At a hydrogen partial pressure unfavorable to graphene growth, the average domain size on electrolytic copper was 8.039 ㎛², only 42.1% of the 19.092 ㎛² measured on rolled copper. At the proper hydrogen partial pressure, however, electrolytic copper reached an average of 30.279 ㎛² versus 32.378 ㎛² on rolled copper, or 93.5% of the rolled-copper value, a much smaller gap than before.
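The two percentages in the abstract are the ratios of electrolytic- to rolled-copper domain sizes at each hydrogen partial pressure, which can be recomputed directly:

```python
# Domain sizes (in square micrometers) from the abstract; each percentage is
# the electrolytic-to-rolled ratio at one hydrogen partial pressure.
low_pct = round(8.039 / 19.092 * 100, 1)   # unfavorable H2 partial pressure
opt_pct = round(30.279 / 32.378 * 100, 1)  # proper H2 partial pressure
```

At the unfavorable pressure electrolytic copper reaches only 42.1% of the rolled-copper domain size; at the proper pressure it reaches 93.5%.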

A Deep Learning-based Real-time Deblurring Algorithm on HD Resolution (HD 해상도에서 실시간 구동이 가능한 딥러닝 기반 블러 제거 알고리즘)

  • Shim, Kyujin;Ko, Kangwook;Yoon, Sungjoon;Ha, Namkoo;Lee, Minseok;Jang, Hyunsung;Kwon, Kuyong;Kim, Eunjoon;Kim, Changick
    • Journal of Broadcast Engineering
    • /
    • v.27 no.1
    • /
    • pp.3-12
    • /
    • 2022
  • Image deblurring aims to remove image blur, which can be generated while shooting pictures by object movement, camera shake, loss of focus, and so forth. With the rising popularity of smartphones, carrying a portable digital camera every day is now common, so image deblurring techniques have become more significant. Image deblurring was originally studied using traditional optimization techniques; with the recent attention on deep learning, deblurring methods based on convolutional neural networks have been actively proposed. However, most of them have been developed with a focus on benchmark performance, so their slow speed makes them hard to use in real situations. To tackle this problem, we propose a novel deep learning-based deblurring algorithm that can operate in real time at HD resolution. In addition, we improved the training and inference processes, increasing the model's performance without significantly affecting its speed, and its speed without significantly affecting its performance. As a result, our algorithm achieves real-time performance by processing 33.74 frames per second at 1280×720 resolution. Furthermore, it shows excellent performance relative to its speed, with a PSNR of 29.78 and an SSIM of 0.9287 on the GoPro dataset.
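The PSNR figure quoted above is the standard peak signal-to-noise ratio. A minimal implementation (a generic sketch, not the authors' evaluation code) on a synthetic HD-sized image pair:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic 1280x720 RGB "sharp" frame and a lightly degraded copy.
rng = np.random.default_rng(1)
sharp = rng.integers(0, 256, size=(720, 1280, 3))
degraded = np.clip(sharp + rng.normal(0, 5, size=sharp.shape), 0, 255)
score = psnr(sharp, degraded)
```

In practice PSNR is averaged over a test set such as GoPro, alongside SSIM, to produce numbers like those in the abstract.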

Manufacturing of a Treatment Agent for Corrosion Oxides of Iron Relics (철기 유물 부식 산화물 처리제의 제조)

  • Yang, Eun Hee;Han, Won-Sik;Choi, Kwang-Sun;Hong, Tae-Kee
    • Korea Science and Art Forum
    • /
    • v.30
    • /
    • pp.251-261
    • /
    • 2017
  • Metal is a material that has strongly influenced the development of human cultures and has been closely connected with our lives from the past to the present. The types of metal used since prehistoric times are varied, and iron relics account for the largest share of the metal relics excavated in Korea. The biggest threat to the survival of iron relics, from newly excavated objects to transmitted ones, is corrosion, and physical removal has been the most common way of eliminating corrosion oxides. This study therefore investigated chemical corrosion-oxide removers that protect the parent material of iron relics while treating only the corrosion oxides. To remove corrosion oxides safely and effectively, new acid, alkaline, and neutral removers were manufactured, their compositions were varied, and the agents were applied to modern relics to explore the feasibility of removal and to find an optimized composition. The results of this study are as follows. First, the acid solution removed only part of the corrosive substances oxidized on the surface of the metal specimen. Second, applying the alkaline and neutral solutions left black corrosive substances, but these were removed when the quantity of solution and the duration of application were increased. Third, none of the three solutions damaged the parent material during application; by controlling solution concentration and application time for each relic, all three could remove the unstable oxide layer while protecting the parent material and the stable corrosion layer.

Predicting the Fetotoxicity of Drugs Using Machine Learning (기계학습 기반 약물의 태아 독성 예측 연구)

  • Myeonghyeon Jeong;Sunyong Yoo
    • Journal of Life Science
    • /
    • v.33 no.6
    • /
    • pp.490-497
    • /
    • 2023
  • Pregnant women may need to take medications to treat preexisting diseases or diseases that develop during pregnancy. However, some drugs may be fetotoxic and lead to, for example, teratogenicity and growth retardation. Predicting the fetotoxicity of drugs is thus important for the health of the mother and fetus. The fetotoxicity of many drugs has not been established because various challenges hinder the ability of researchers to determine their fetotoxicity. The need exists for in silico-based fetotoxicity assessment models, as they can modernize the testing paradigm, improve predictability, and reduce the use of animals and the costs of fetotoxicity testing. In this study, we collected data on the fetotoxicity of drugs and constructed fetotoxicity prediction models based on various machine learning algorithms. We optimized the models for more precise predictions by tuning the hyperparameters. We then performed quantitative performance evaluations. The results indicated that the constructed machine learning-based models had high performance (AUROC >0.85, AUPR >0.9) in fetotoxicity prediction. We also analyzed the feature importance of our model's predictions, which could be leveraged to identify the specific features of drugs that are strongly associated with fetotoxicity. The proposed model can be used to prescreen drugs and drug candidates at a lower cost and in less time. It provides a predictive score for fetotoxicity risk, which may be beneficial in the design of studies on fetotoxicity in human pregnancy.
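The AUROC reported above (AUPR is analogous) can be computed without any ML library via the rank-sum (Mann-Whitney) formulation. The drug scores below are synthetic stand-ins, not the study's data:

```python
import numpy as np

def auroc(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy data: fetotoxic drugs (label 1) score higher on average than safe ones.
rng = np.random.default_rng(2)
y = np.array([0] * 50 + [1] * 50)
s = np.concatenate([rng.normal(0, 1, 50), rng.normal(2, 1, 50)])
score = auroc(y, s)
```

A model meeting the abstract's bar would produce `score > 0.85` on held-out drugs.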

The Automated Scoring of Kinematics Graph Answers through the Design and Application of a Convolutional Neural Network-Based Scoring Model (합성곱 신경망 기반 채점 모델 설계 및 적용을 통한 운동학 그래프 답안 자동 채점)

  • Jae-Sang Han;Hyun-Joo Kim
    • Journal of The Korean Association For Science Education
    • /
    • v.43 no.3
    • /
    • pp.237-251
    • /
    • 2023
  • This study explores the possibility of automated scoring of scientific graph answers by designing an automated scoring model using convolutional neural networks and applying it to students' kinematics graph answers. The researchers prepared 2,200 answers, which were divided into 2,000 training data and 200 validation data. Additionally, 202 student answers were divided into 100 training data and 102 test data. First, in designing the automated scoring model and validating its performance, the model was optimized for graph-image classification using the answer dataset prepared by the researchers. Next, the model was trained on various types of training datasets and used to score the student test dataset. The performance of the automated scoring model improved as the training data grew in amount and diversity. Finally, compared to human scoring, the accuracy was 97.06%, the kappa coefficient was 0.957, and the weighted kappa coefficient was 0.968. On the other hand, for answer types not included in the training data, scoring was almost identical among human scorers, but the automated scoring model scored inaccurately.
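The kappa and weighted-kappa agreement statistics quoted above can be computed from a confusion matrix. A generic sketch (not the authors' code), using quadratic disagreement weights for the weighted variant:

```python
import numpy as np

def weighted_kappa(y_true, y_pred, n_classes, weights="quadratic"):
    """Cohen's kappa with optional quadratic disagreement weights."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    O = np.zeros((n_classes, n_classes))          # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    i, j = np.meshgrid(np.arange(n_classes), np.arange(n_classes), indexing="ij")
    if weights == "quadratic":
        W = (i - j) ** 2 / (n_classes - 1) ** 2   # penalize distant disagreements more
    else:                                          # plain (unweighted) Cohen's kappa
        W = (i != j).astype(float)
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()  # chance agreement
    return 1.0 - (W * O).sum() / (W * E).sum()

# Toy 3-level scores from a human rater and the model (illustrative only).
human = [2, 1, 0, 2, 1, 0, 2, 2, 1, 0]
model = [2, 1, 0, 2, 1, 0, 2, 1, 1, 0]
kappa = weighted_kappa(human, model, 3, weights="unweighted")
wkappa = weighted_kappa(human, model, 3, weights="quadratic")
```

As in the abstract, the weighted coefficient exceeds the unweighted one when the few disagreements are between adjacent score levels.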

On-site Inventory Management Plan for Construction Materials Considering Activity Float Time and Size of a Stock Yard (공정별 여유시간과 야적장 규모를 고려한 건설자재의 현장 재고관리 방안 연구)

  • Kim, Yong Hwan;Yoon, Hyeong Seok;Lee, Jae Hee;Kang, Leen Seok
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.1
    • /
    • pp.79-89
    • /
    • 2023
  • The inventory of many materials requires a large storage space, and the longer the storage period, the higher the potential maintenance cost. When materials are stored on a construction site, there are also concerns about safety due to the reduction of room for movement and working. On the other hand, construction sites that do not store materials have insufficient inventory, making it difficult to respond to demands such as sudden design changes. Ordering materials is then subject to delays and extra costs. Although securing an appropriate amount of inventory is important, in many cases, material management on a construction site depends on the experience of the site manager, so a reasonable material inventory management plan that reflects the construction conditions of a site is required. This study proposes an economical material management method by reflecting variables such as the status of the preceding and following activities, site size, material delivery cost, timing of an order, and quantity of orders. To this end, we set the appropriate inventory amount while adjusting related activities in the activity network, using float time for each activity, the size of the yard, and the order quantity as the main variables, and applied a genetic algorithm to this process to suggest the optimal order timing and order quantity. The material delivery cost derived from the results is set as a fitness index and the efficiency of inventory management was verified through a case application.
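A genetic algorithm over order timing and quantity, as described above, can be sketched on a toy problem. All demands, costs, and the yard capacity below are made-up illustrations, not the paper's case-study data:

```python
import random

random.seed(0)

# Toy setting: 10 periods of known material demand; a chromosome is the
# order quantity per period. Fitness = ordering + holding + shortage cost.
DEMAND = [4, 6, 3, 8, 5, 7, 2, 6, 4, 5]
ORDER_COST, HOLD_COST, SHORT_COST, YARD_CAP = 50, 2, 30, 25

def cost(orders):
    stock, total = 0, 0
    for q, d in zip(orders, DEMAND):
        if q > 0:
            total += ORDER_COST
        stock = min(stock + q, YARD_CAP)   # yard size caps on-site inventory
        stock -= d
        if stock < 0:
            total += -stock * SHORT_COST   # penalize stock-outs
            stock = 0
        total += stock * HOLD_COST         # holding cost on remaining stock
    return total

def evolve(pop_size=40, gens=80):
    pop = [[random.randint(0, 10) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]     # keep the fitter half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(DEMAND))
            child = a[:cut] + b[cut:]      # one-point crossover
            if random.random() < 0.3:      # mutation: re-draw one gene
                child[random.randrange(len(DEMAND))] = random.randint(0, 10)
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
```

The paper's version additionally adjusts related activities using their float times; here the fitness function stands in for its material-delivery-cost index.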

Quality Visualization of Quality Metric Indicators based on Table Normalization of Static Code Building Information (정적 코드 내부 정보의 테이블 정규화를 통한 품질 메트릭 지표들의 가시화를 위한 추출 메커니즘)

  • Chansol Park;So Young Moon;R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.5
    • /
    • pp.199-206
    • /
    • 2023
  • Modern software has grown to a huge size of source code, increasing the importance and necessity of static analysis for high-quality products. Static analysis of the code must identify defects and complexity, and visualizing these problems helps developers and stakeholders understand them in the source code. Our previous visualization research focused only on storing the results of static analysis in database tables, querying the calculations for quality indicators (CK metrics, coupling, number of function calls, bad smells), and then visualizing the extracted information. This approach takes a lot of time and space to analyze a code using information extracted through static analysis: because the tables are not normalized, joining the tables (classes, functions, attributes, etc.) to extract information inside the code wastes space and time. To solve these problems, we propose a normalized design of the database tables, an extraction mechanism for quality metric indicators inside the code, and a visualization of the extracted quality indicators on the code. Through this mechanism, we expect the code visualization process to be optimized and developers to be guided toward the modules that need refactoring. In the future, we will apply learning to some parts of this process.
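The normalization argument can be illustrated with a tiny in-memory schema: static-analysis facts are split into keyed tables (classes, functions, calls) so a metric query joins only the rows it needs. Table and column names here are illustrative, not the paper's actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Normalized tables: each fact is stored once, linked by integer keys.
cur.executescript("""
CREATE TABLE classes   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE functions (id INTEGER PRIMARY KEY,
                        class_id INTEGER REFERENCES classes(id), name TEXT);
CREATE TABLE calls     (caller_id INTEGER REFERENCES functions(id),
                        callee_id INTEGER REFERENCES functions(id));
""")
cur.executemany("INSERT INTO classes VALUES (?, ?)", [(1, "Parser"), (2, "Printer")])
cur.executemany("INSERT INTO functions VALUES (?, ?, ?)",
                [(1, 1, "parse"), (2, 1, "tokenize"), (3, 2, "render")])
cur.executemany("INSERT INTO calls VALUES (?, ?)", [(1, 2), (1, 3), (3, 2)])

# Metric query: outgoing function calls per class (a coupling-style indicator).
rows = cur.execute("""
SELECT c.name, COUNT(*) AS n_calls
FROM calls k JOIN functions f ON k.caller_id = f.id
             JOIN classes   c ON f.class_id = c.id
GROUP BY c.name ORDER BY c.name
""").fetchall()
```

Because each join touches only keyed rows rather than denormalized duplicates, queries for indicators such as call counts or coupling stay cheap as the code base grows.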