• Title/Summary/Keyword: Processing technique

Room Temperature Imprint Lithography for Surface Patterning of Al Foils and Plates (알루미늄 박 및 플레이트 표면 미세 패터닝을 위한 상온 임프린팅 기술)

  • Tae Wan Park;Seungmin Kim;Eun Bin Kang;Woon Ik Park
    • Journal of the Microelectronics and Packaging Society / v.30 no.2 / pp.65-70 / 2023
  • Nanoimprint lithography (NIL) has attracted much attention due to its process simplicity, excellent patternability, process scalability, high productivity, and low processing cost for pattern formation. However, the pattern size that can be implemented on metal materials through conventional NIL technologies is generally limited to the micro level. Here, we introduce a novel hard imprint lithography method, extreme-pressure imprint lithography (EPIL), for direct nano-to-microscale pattern formation on the surfaces of metal substrates of various thicknesses. The EPIL process allows reliable nanoscopic patterning on diverse surfaces, such as polymers, metals, and ceramics, without the use of ultraviolet (UV) light, laser, imprint resist, or electrical pulses. Micro/nano molds fabricated by laser micromachining and conventional photolithography are utilized for the nanopatterning of Al substrates through precise plastic deformation under high load or pressure at room temperature. We demonstrate micro/nanoscale pattern formation on Al substrates with thicknesses ranging from 20 μm to 100 mm. Moreover, we show how to obtain controllable pattern structures on the surface of metallic materials via the versatile EPIL technique. We expect that this new imprint-lithography-based approach will be extended to other emerging nanofabrication methods for various device applications with complex geometries on the surfaces of metallic materials.

Comparison of fit and trueness of zirconia crowns fabricated by different combinations of open CAD-CAM systems

  • Eun-Bin Bae;Won-Tak Cho;Do-Hyun Park;Su-Hyun Hwang;So-Hyoun Lee;Mi-Jung Yun;Chang-Mo Jeong;Jung-Bo Huh
    • The Journal of Advanced Prosthodontics / v.15 no.3 / pp.155-170 / 2023
  • PURPOSE. This study aimed to clinically compare the fit and trueness of zirconia crowns fabricated by different combinations of open CAD-CAM systems. MATERIALS AND METHODS. A total of 40 patients were enrolled in this study, and 9 different zirconia crowns were fabricated per patient. Each crown was made through the cross-application of 3 design software packages (EZIS VR, 3Shape Dental System, Exocad) with 3 processing devices (Aegis HM, Trione Z, Motion 2). The marginal gap, absolute marginal discrepancy, and internal gap (axial, line angle, occlusal) were measured by a silicone replica technique to compare the fit of the crowns. The scanned inner and outer surfaces of the crowns were compared with the CAD data using 3D metrology software to evaluate trueness. RESULTS. In the comparison of fit, there were significant differences in the marginal gap, absolute marginal discrepancy, and axial and line-angle internal gaps among the groups (P < .05). There was no statistically significant difference among the groups in the occlusal internal gap. Trueness ranged from 36.19 to 43.78 μm, but there was no statistically significant difference within the groups (P > .05). CONCLUSION. All 9 groups showed clinically acceptable marginal gaps, ranging from 74.26 to 112.20 μm, in the comparison of fit. In the comparison of trueness, no significant difference was found within each group. Within the limitations of this study, the open CAD-CAM systems used here can be combined appropriately to fabricate zirconia crowns.

Efficient Poisoning Attack Defense Techniques Based on Data Augmentation (데이터 증강 기반의 효율적인 포이즈닝 공격 방어 기법)

  • So-Eun Jeon;Ji-Won Ock;Min-Jeong Kim;Sa-Ra Hong;Sae-Rom Park;Il-Gu Lee
    • Convergence Security Journal / v.22 no.3 / pp.25-32 / 2022
  • Recently, the image processing industry has grown as deep-learning-based technology has been introduced into image recognition and detection. With the development of deep learning technology, vulnerabilities of learning models to adversarial attacks continue to be reported. However, studies on countermeasures against poisoning attacks, which inject malicious data during training, remain insufficient. Conventional countermeasures against poisoning attacks are limited in that they require a separate detection and removal step that inspects the training data each time. Therefore, in this paper, we propose a technique that reduces the attack success rate by applying modifications to the training and inference data, without a separate detection and removal process for the poisoned data. The one-shot kill poison attack, a clean-label poisoning attack proposed in previous studies, was used as the attack model. Attack performance was evaluated for both a general attacker and an intelligent attacker, according to the attacker's strategy. Experimental results show that, when the proposed defense mechanism is applied, the attack success rate can be reduced by up to 65% compared with the conventional method.
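The defense described above perturbs both the training and inference data so that a poison's carefully crafted features no longer line up with the trigger. As a minimal sketch of that idea (the paper's exact transform set is not specified here; flips and Gaussian noise are assumptions), the following applies random augmentations to a batch of images:

```python
import numpy as np

def augment(images, rng, noise_std=0.02):
    """Randomly flip and add Gaussian noise to a batch of images.

    A generic augmentation pass of the kind applied to both training
    and inference data to disrupt poisoned inputs; the specific
    transforms here are illustrative, not the paper's.
    """
    out = images.copy()
    for i in range(len(out)):
        if rng.random() < 0.5:
            out[i] = out[i][:, ::-1]  # horizontal flip
        out[i] = out[i] + rng.normal(0.0, noise_std, out[i].shape)
    return np.clip(out, 0.0, 1.0)  # keep pixel values in [0, 1]

rng = np.random.default_rng(0)
batch = rng.random((8, 32, 32))  # 8 grayscale 32x32 images
aug = augment(batch, rng)
```

In a full pipeline the same pass would run once over the training set before fitting and once over each query at inference time.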

Restoration of Missing Data in Satellite-Observed Sea Surface Temperature using Deep Learning Techniques (딥러닝 기법을 활용한 위성 관측 해수면 온도 자료의 결측부 복원에 관한 연구)

  • Won-Been Park;Heung-Bae Choi;Myeong-Soo Han;Ho-Sik Um;Yong-Sik Song
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.6 / pp.536-542 / 2023
  • Satellites represent cutting-edge technology, offering significant advantages in spatial and temporal observation. National agencies worldwide harness satellite data to respond to marine accidents and analyze ocean fluctuations effectively. However, challenges arise with high-resolution satellite-based sea surface temperature data (Operational Sea Surface Temperature and Sea Ice Analysis, OSTIA), where gaps or empty areas may occur due to satellite instrumentation, geographical errors, and cloud cover, and these issues can take several hours to rectify. This study addressed the problem of missing OSTIA data by employing LaMa, a recent deep-learning-based inpainting algorithm, and evaluated its performance against three existing image processing techniques. The evaluation, using coefficient of determination (R²) and mean absolute error (MAE) values, demonstrated the superior performance of the LaMa algorithm: it consistently achieved R² values of 0.9 or higher and kept MAE values below 0.5 °C, outperforming the traditional methods, namely bilinear interpolation, bicubic interpolation, and DeepFill v1. We plan to evaluate the feasibility of integrating the LaMa technique into an operational satellite data provision system.
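The study's two evaluation metrics are straightforward to compute over the restored gap pixels. A minimal sketch, using a synthetic SST field and a stand-in for the model output rather than real OSTIA data or the LaMa model itself:

```python
import numpy as np

def restoration_metrics(truth, restored, mask):
    """MAE and coefficient of determination (R^2) over masked (gap) pixels."""
    t = truth[mask]
    r = restored[mask]
    mae = np.abs(t - r).mean()
    ss_res = ((t - r) ** 2).sum()
    ss_tot = ((t - t.mean()) ** 2).sum()
    return mae, 1.0 - ss_res / ss_tot

# Synthetic SST field (degrees C) with a cloud-like gap; values illustrative
rng = np.random.default_rng(1)
sst = 15.0 + rng.normal(0.0, 1.0, (50, 50))
mask = np.zeros_like(sst, dtype=bool)
mask[20:30, 20:30] = True                          # the "missing" region
restored = sst + rng.normal(0.0, 0.1, sst.shape)   # stand-in for LaMa output
mae, r2 = restoration_metrics(sst, restored, mask)
```

With a restoration this close to truth, the metrics land in the ranges the study reports (MAE under 0.5 °C, R² above 0.9).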

Hybrid Offloading Technique Based on Auction Theory and Reinforcement Learning in MEC Industrial IoT Environment (MEC 산업용 IoT 환경에서 경매 이론과 강화 학습 기반의 하이브리드 오프로딩 기법)

  • Bae Hyeon Ji;Kim Sung Wook
    • KIPS Transactions on Computer and Communication Systems / v.12 no.9 / pp.263-272 / 2023
  • The Industrial Internet of Things (IIoT) is an important factor in increasing production efficiency in industrial sectors, enabling large-scale data collection, exchange, and analysis through massive connectivity. However, as traffic has increased explosively with the recent spread of IIoT, an allocation method that can process this traffic efficiently is required. In this paper, we propose a two-stage task offloading decision method to increase successful task throughput in an IIoT environment. We consider a hybrid offloading system that can offload compute-intensive tasks either to a mobile edge computing (MEC) server via a cellular link or to a nearby IIoT device via a device-to-device (D2D) link. The first stage designs an incentive mechanism that prevents devices participating in task offloading from acting selfishly and hindering improvements in task throughput; specifically, McAfee's mechanism is used to control the selfish behavior of the devices that process tasks and to increase overall system throughput. In the second stage, we propose a multi-armed bandit (MAB)-based task offloading decision method for a non-stationary environment, accounting for the irregular movement of IIoT devices. Experimental results show that the proposed method achieves better performance than existing methods in terms of overall system throughput, communication failure rate, and regret.
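The stage-2 decision can be illustrated with a sliding-window UCB bandit, one common MAB variant for non-stationary rewards (the paper's exact formulation may differ). The two arms below stand for the D2D and cellular offloading targets, with success probabilities that swap halfway through to mimic a changing channel:

```python
import math
import random

class SlidingWindowUCB:
    """UCB over a sliding window: only recent outcomes inform the
    estimate, so the policy can track non-stationary reward rates."""

    def __init__(self, n_arms, window=50, c=0.5):
        self.n_arms = n_arms
        self.window = window
        self.c = c
        self.history = []  # (arm, reward) pairs, most recent last

    def select(self, t):
        counts = [0] * self.n_arms
        sums = [0.0] * self.n_arms
        for arm, reward in self.history[-self.window:]:
            counts[arm] += 1
            sums[arm] += reward
        for arm in range(self.n_arms):
            if counts[arm] == 0:
                return arm  # play any arm absent from the window
        return max(
            range(self.n_arms),
            key=lambda a: sums[a] / counts[a]
            + self.c * math.sqrt(math.log(min(t, self.window)) / counts[a]),
        )

    def update(self, arm, reward):
        self.history.append((arm, reward))

random.seed(0)
# Arm 0: nearby IIoT device via D2D; arm 1: MEC server via cellular.
# Offloading success probabilities swap at t = 300 (non-stationarity).
bandit = SlidingWindowUCB(n_arms=2, window=50)
picks_late = []
for t in range(1, 601):
    p = [0.8, 0.3] if t <= 300 else [0.2, 0.9]
    arm = bandit.select(t)
    bandit.update(arm, 1.0 if random.random() < p[arm] else 0.0)
    if t > 500:
        picks_late.append(arm)
```

Well after the switch, the window has flushed the stale evidence and the policy mostly offloads to the now-better cellular arm.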

Validation of Deep-Learning Image Reconstruction for Low-Dose Chest Computed Tomography Scan: Emphasis on Image Quality and Noise

  • Joo Hee Kim;Hyun Jung Yoon;Eunju Lee;Injoong Kim;Yoon Ki Cha;So Hyeon Bak
    • Korean Journal of Radiology / v.22 no.1 / pp.131-138 / 2021
  • Objective: Iterative reconstruction degrades image quality. Thus, further advances in image reconstruction are necessary to overcome some limitations of this technique in low-dose computed tomography (LDCT) scans of the chest. Deep-learning image reconstruction (DLIR) is a new method used to reduce dose while maintaining image quality. The purpose of this study was to evaluate the image quality and noise of LDCT scan images reconstructed with DLIR and to compare them with those of images reconstructed with adaptive statistical iterative reconstruction-Veo at a level of 30% (ASiR-V 30%). Materials and Methods: This retrospective study included 58 patients who underwent LDCT scans for lung cancer screening. Datasets were reconstructed with ASiR-V 30% and with DLIR at medium and high levels (DLIR-M and DLIR-H, respectively). The objective image signal and noise, representing the mean attenuation value and standard deviation in Hounsfield units for the lungs, mediastinum, liver, and background air, and the subjective image contrast, image noise, and conspicuity of structures were evaluated, and the differences among ASiR-V 30%, DLIR-M, and DLIR-H were assessed. Results: Based on the objective analysis, the image signals did not significantly differ among ASiR-V 30%, DLIR-M, and DLIR-H (p = 0.949, 0.737, 0.366, and 0.358 in the lungs, mediastinum, liver, and background air, respectively). However, the noise was significantly lower in DLIR-M and DLIR-H than in ASiR-V 30% (all p < 0.001). DLIR had higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) than ASiR-V 30% (p = 0.027, < 0.001, and < 0.001 for the SNR of the lungs, mediastinum, and liver, respectively; all p < 0.001 for the CNR). According to the subjective analysis, DLIR had higher image contrast and lower image noise than ASiR-V 30% (all p < 0.001). DLIR was superior to ASiR-V 30% in identifying the pulmonary arteries and veins, trachea and bronchi, lymph nodes, and pleura and pericardium (all p < 0.001). Conclusion: DLIR significantly reduced image noise in chest LDCT scan images compared with ASiR-V 30% while maintaining superior image quality.
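The objective SNR and CNR figures above follow standard ROI-based definitions; a minimal sketch on a synthetic slice is given below (the study's actual ROI placement, scanner data, and exact formulas are not reproduced; these are the common definitions):

```python
import numpy as np

def roi_stats(image, roi):
    """Mean attenuation (HU) and noise (SD, HU) within a rectangular ROI."""
    r0, r1, c0, c1 = roi
    patch = image[r0:r1, c0:c1]
    return patch.mean(), patch.std()

def snr_cnr(image, tissue_roi, background_roi):
    """Common objective definitions:
    SNR = mean / SD of the tissue ROI;
    CNR = |mean_tissue - mean_background| / SD_background."""
    m_t, sd_t = roi_stats(image, tissue_roi)
    m_b, sd_b = roi_stats(image, background_roi)
    return m_t / sd_t, abs(m_t - m_b) / sd_b

# Synthetic slice in HU: liver-like region (~60 HU) against air (~-1000 HU)
rng = np.random.default_rng(2)
img = rng.normal(-1000.0, 10.0, (128, 128))
img[40:80, 40:80] = rng.normal(60.0, 10.0, (40, 40))
snr, cnr = snr_cnr(img, (40, 80, 40, 80), (0, 30, 0, 30))
```

Lower reconstruction noise (smaller ROI SD) raises both ratios, which is exactly the mechanism behind DLIR's advantage in the table of results.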

Dimensional Quality Assessment for Assembly Part of Prefabricated Steel Structures Using a Stereo Vision Sensor (스테레오 비전 센서 기반 프리팹 강구조물 조립부 형상 품질 평가)

  • Jonghyeok Kim;Haemin Jeon
    • Journal of the Computational Structural Engineering Institute of Korea / v.37 no.3 / pp.173-178 / 2024
  • This study presents a technique for assessing the dimensional quality of assembly parts in prefabricated steel structures (PSS) using a stereo vision sensor. The stereo vision system captures images and point cloud data of the assembly area, and image processing algorithms such as fuzzy-based edge detection and Hough-transform-based circle detection are then applied to identify bolt hole locations. The 3D center position of each bolt hole is determined by correlating the 3D real-world position information from the depth images with the extracted bolt hole positions. Principal component analysis (PCA) is then employed to compute coordinate axes for precise measurement of the distances between bolt holes, even when the sensor and structure orientations differ. Bolt holes are sorted by their 2D positions, and the distances between sorted bolt holes are calculated to assess the assembly part's dimensional quality. Comparison with the actual drawing data confirms the measurement accuracy, with an absolute error of 1 mm and a relative error within 4% based on median criteria.
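The PCA-then-sort measurement step can be sketched as follows: the principal axes of the detected hole centers define an in-plane coordinate frame, so spacings come out correctly even when the plate is tilted relative to the sensor. The hole layout below is illustrative, not from the study's structures:

```python
import numpy as np

def pca_axes(points):
    """Principal axes of a 3D point set via eigendecomposition of its
    covariance matrix, ordered by decreasing variance."""
    mean = points.mean(axis=0)
    centered = points - mean
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]  # largest variance first
    return eigvecs[:, order], mean

def hole_spacings(centers):
    """Project hole centers onto their principal plane, sort along the
    first axis, and return consecutive center-to-center distances."""
    axes, mean = pca_axes(centers)
    coords2d = (centers - mean) @ axes[:, :2]  # in-plane 2D coordinates
    sorted_pts = coords2d[np.argsort(coords2d[:, 0])]
    return np.linalg.norm(np.diff(sorted_pts, axis=0), axis=1)

# Four collinear bolt holes on a plate tilted in space, rows shuffled
# to show that sorting recovers the layout order
t = np.array([0.0, 100.0, 200.0, 300.0])
centers = np.stack([t, 0.3 * t, np.full_like(t, 50.0)], axis=1)
centers = centers[[2, 0, 3, 1]]
spacings = hole_spacings(centers)  # each spacing = sqrt(100^2 + 30^2) mm
```

Because the points are coplanar, the projection onto the top two principal axes preserves their in-plane distances exactly, regardless of how the plate is oriented toward the sensor.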

The Contact and Parallel Analysis of Smoothed Particle Hydrodynamics (SPH) Using Polyhedral Domain Decomposition (다면체영역분할을 이용한 SPH의 충돌 및 병렬해석)

  • Moonho Tak
    • Journal of the Korean GEO-environmental Society / v.25 no.4 / pp.21-28 / 2024
  • In this study, a polyhedral domain decomposition method for smoothed particle hydrodynamics (SPH) analysis is introduced. SPH, a meshless method, is a numerical technique for simulating fluid flow and is useful for analyzing fluidic soil and fluid-structure interaction problems. SPH is a particle-based method, in which a larger particle count generally improves accuracy but reduces numerical efficiency. To enhance numerical efficiency, parallel processing algorithms are commonly employed with a Cartesian-coordinate-based domain decomposition. However, for parallel analysis of complex geometric shapes or fluidic problems under dynamic boundary conditions, Cartesian-coordinate-based decomposition may not be suitable. The polyhedral domain decomposition technique introduced here improves parallel efficiency in such problems, as it allows partitioning into various 3D polyhedral elements that better fit the geometry. Physical properties of SPH particles are calculated from the information of neighboring particles within the smoothing length. Methods are presented for sharing the information of particles that become physically separated at partition boundaries, and for sharing information at cross-points where parallel efficiency might otherwise diminish. In the numerical examples, the proposed method's parallel efficiency approached 95% for up to 12 cores; as the number of cores increases further, parallel efficiency decreases due to the increased information sharing among cores.
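The reason partitions only need to exchange particles within the smoothing length is the compact support of the SPH kernel: contributions vanish beyond a fixed multiple of the smoothing length. A minimal sketch using the standard cubic spline kernel (a common choice; the study's kernel is not specified in the abstract):

```python
import math

def cubic_spline_kernel(r, h):
    """Standard 3D cubic spline smoothing kernel W(r, h). Particles
    farther than 2h contribute nothing, so neighboring domain
    partitions only need to exchange particles within that range."""
    sigma = 1.0 / (math.pi * h ** 3)  # 3D normalisation constant
    q = r / h
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0  # compact support: zero beyond 2h

def density_at(i, positions, masses, h):
    """SPH density estimate: rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
    xi = positions[i]
    rho = 0.0
    for xj, mj in zip(positions, masses):
        rho += mj * cubic_spline_kernel(math.dist(xi, xj), h)
    return rho

positions = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (1.0, 0.0, 0.0)]
rho0 = density_at(0, positions, [1.0, 1.0, 1.0], h=1.0)
```

In a parallel run, each core evaluates sums like `density_at` locally and only the particles within `2h` of a partition face are duplicated across cores, which is what the polyhedral decomposition aims to minimize.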

Comparative analysis of wavelet transform and machine learning approaches for noise reduction in water level data (웨이블릿 변환과 기계 학습 접근법을 이용한 수위 데이터의 노이즈 제거 비교 분석)

  • Hwang, Yukwan;Lim, Kyoung Jae;Kim, Jonggun;Shin, Minhwan;Park, Youn Shik;Shin, Yongchul;Ji, Bongjun
    • Journal of Korea Water Resources Association / v.57 no.3 / pp.209-223 / 2024
  • In the context of the fourth industrial revolution, data-driven decision-making has become increasingly pivotal. However, the integrity of data analysis is compromised if data quality is not adequately ensured, potentially leading to biased interpretations. This is particularly critical for water level data, which is essential for water resource management and often suffers quality issues such as missing values, spikes, and noise. This study addresses the challenge of noise-induced data quality deterioration, which complicates trend analysis and may produce anomalous outliers. To mitigate this issue, we propose a noise removal strategy employing the wavelet transform, a technique renowned for its efficacy in signal processing and noise elimination. The advantage of the wavelet transform lies in its operational efficiency: it reduces both time and cost because it does not require the true values of the collected data. This study conducted a comparative performance evaluation between the wavelet-transform-based approach and a denoising autoencoder, a prominent machine learning method for noise reduction. The findings demonstrate that the Coiflets wavelet function outperforms the denoising autoencoder across various metrics, including mean absolute error (MAE), mean absolute percentage error (MAPE), and mean squared error (MSE). The superiority of the Coiflets function suggests that selecting a wavelet function tailored to the specific application environment can effectively address data quality issues caused by noise. This study underscores the potential of the wavelet transform as a robust tool for enhancing the quality of water level data, thereby contributing to the reliability of water resource management decisions.
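Threshold-and-reconstruct wavelet denoising can be sketched with a one-level Haar transform, the simplest wavelet that can be written out by hand. The study itself uses Coiflets, which would normally come from a wavelet library; the signal below is synthetic, not real gauge data:

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoising: transform, soft-threshold the
    detail coefficients, inverse transform. Haar stands in here for the
    Coiflets family used in the study."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - len(x) % 2                     # even length for pairing
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)    # approximation coeffs
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)    # detail coeffs
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    out = np.empty(n)
    out[0::2] = (a + d) / np.sqrt(2.0)          # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 512)
clean = 2.0 + 0.5 * np.sin(2.0 * np.pi * t)     # synthetic water level (m)
noisy = clean + rng.normal(0.0, 0.05, t.shape)  # gauge noise
denoised = haar_denoise(noisy, threshold=0.1)
```

Because a slowly varying water level puts almost nothing in the detail band, thresholding those coefficients removes mostly noise, and no "true" reference series is needed to apply the method.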

Comparative Study of Fish Detection and Classification Performance Using the YOLOv8-Seg Model (YOLOv8-Seg 모델을 이용한 어류 탐지 및 분류 성능 비교연구)

  • Sang-Yeup Jin;Heung-Bae Choi;Myeong-Soo Han;Hyo-tae Lee;Young-Tae Son
    • Journal of the Korean Society of Marine Environment & Safety / v.30 no.2 / pp.147-156 / 2024
  • The sustainable management and enhancement of marine resources are becoming increasingly important issues worldwide. This study was conducted in response to these challenges, focusing on the development and performance comparison of fish detection and classification models as part of a deep-learning-based technique for assessing the effectiveness of marine resource enhancement projects initiated by the Korea Fisheries Resources Agency. The aim was to select the optimal model by training various sizes of YOLOv8-Seg models on a fish image dataset and comparing their performance metrics. The dataset used for model construction consisted of 36,749 images and label files covering 12 species of fish, with data diversity enhanced through augmentation during training. When five YOLOv8-Seg models of different sizes were trained and validated under identical conditions, the medium-sized YOLOv8m-Seg model showed high learning efficiency and excellent detection and classification performance, with the shortest training time of 13 h 12 min, an mAP of 0.933, and an inference speed of 9.6 ms. Considering the balance among these performance metrics, it was deemed the most efficient model for meeting real-time processing requirements. The use of such real-time fish detection and classification models could enable effective surveys of marine resource enhancement projects, suggesting the need for ongoing performance improvement and further research.
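The "balance among performance metrics" criterion can be sketched as a weighted score over min-max normalized metrics. Only the YOLOv8m-Seg figures below come from the abstract; the other candidates' numbers and the weights are made up for illustration and are not the study's actual values:

```python
def normalize(values, higher_is_better):
    """Min-max normalise a metric column to [0, 1]."""
    lo, hi = min(values), max(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

def pick_model(models, weights):
    """Rank candidate models by a weighted sum of normalised metrics,
    mirroring a balance-of-metrics selection criterion."""
    names = list(models)
    cols = {
        "accuracy": normalize([models[n]["accuracy"] for n in names], True),
        "train_h": normalize([models[n]["train_h"] for n in names], False),
        "infer_ms": normalize([models[n]["infer_ms"] for n in names], False),
    }
    scores = {
        n: sum(weights[m] * cols[m][i] for m in weights)
        for i, n in enumerate(names)
    }
    return max(scores, key=scores.get), scores

# YOLOv8m-Seg figures are from the abstract; the others are hypothetical
candidates = {
    "yolov8n-seg": {"accuracy": 0.905, "train_h": 15.0, "infer_ms": 4.0},
    "yolov8m-seg": {"accuracy": 0.933, "train_h": 13.2, "infer_ms": 9.6},
    "yolov8x-seg": {"accuracy": 0.940, "train_h": 30.0, "infer_ms": 25.0},
}
best, scores = pick_model(
    candidates, {"accuracy": 0.5, "train_h": 0.25, "infer_ms": 0.25}
)
```

Under these weights the medium model wins: the largest model's small accuracy gain does not offset its much slower training and inference, which is the trade-off the abstract describes.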