• Title/Summary/Keyword: hyper method

Application and Comparison of Data Mining Technique to Prevent Metal-Bush Omission (메탈부쉬 누락예방을 위한 데이터마이닝 기법의 적용 및 비교)

  • Sang-Hyun Ko;Dongju Lee
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.3 / pp.139-147 / 2023
  • The metal bush assembling process inserts and press-fits a metal bush that reduces noise and provides stable compression in the rotating section. In this process, head-diameter defects and placement defects arise from bush omission, non-pressing, and poor press-fitting. Among these causes, this study aims to prevent defects caused by omission of the metal bush using signals from sensors attached to the equipment. In particular, metal bush omission is predicted with various data mining techniques using the left load cell value, right load cell value, current, and voltage as independent variables. Because omission defects are rare, defect data are difficult to obtain, which results in data imbalance. Data imbalance refers to a large difference in the number of samples belonging to each class and can be a problem in classification prediction. To address it, oversampling and composite sampling techniques were applied in this study. In addition, simulated annealing (SA) was applied to optimize the sampling-related parameters and the hyper-parameters of the data mining techniques used for omission prediction. The metal bush omission was predicted using actual data from manufacturing company M, and the classification performance was examined. All applied techniques performed well, and in particular the proposed combinations of Random Forest with SA and of MLP with SA showed the best results.
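
A minimal sketch of the kind of pipeline this abstract describes — oversampling an imbalanced sensor dataset and tuning Random Forest hyper-parameters with simulated annealing (SA). The parameter ranges, cooling schedule, and feature handling are illustrative assumptions, not the authors' settings.

```python
# Sketch: SMOTE oversampling + Random Forest tuned by a small simulated-annealing
# loop. Parameter ranges and the cooling schedule are illustrative assumptions.
import math
import random
from imblearn.over_sampling import SMOTE              # requires imbalanced-learn
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def f1_of(params, X, y):
    """Cross-validated F1 score of a Random Forest with the given hyper-parameters."""
    clf = RandomForestClassifier(n_estimators=params["n_estimators"],
                                 max_depth=params["max_depth"], random_state=0)
    return cross_val_score(clf, X, y, cv=5, scoring="f1").mean()

def anneal(X, y, steps=50, t0=1.0, cooling=0.95):
    """Simulated annealing over two Random Forest hyper-parameters."""
    cur = {"n_estimators": 100, "max_depth": 5}
    cur_score, t = f1_of(cur, X, y), t0
    best, best_score = dict(cur), cur_score
    for _ in range(steps):
        cand = {"n_estimators": max(10, cur["n_estimators"] + random.choice([-50, 50])),
                "max_depth": max(2, cur["max_depth"] + random.choice([-1, 1]))}
        cand_score = f1_of(cand, X, y)
        # accept improvements, or worse moves with a temperature-dependent probability
        if cand_score > cur_score or random.random() < math.exp((cand_score - cur_score) / t):
            cur, cur_score = cand, cand_score
            if cur_score > best_score:
                best, best_score = dict(cur), cur_score
        t *= cooling
    return best, best_score

# X: left/right load cell, current, voltage signals; y: 1 = metal bush omitted (rare class)
# X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
# best_params, best_f1 = anneal(X_res, y_res)
```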

Risk Factor Analysis of Cryopreserved Autologous Bone Flap Resorption in Adult Patients Undergoing Cranioplasty with Volumetry Measurement Using Conventional Statistics and Machine-Learning Technique

  • Yohan Son;Jaewoo Chung
    • Journal of Korean Neurosurgical Society / v.67 no.1 / pp.103-114 / 2024
  • Objective : Decompressive craniectomy (DC) with duroplasty is a common surgical treatment for life-threatening increased intracranial pressure (ICP). Once ICP is controlled, cranioplasty (CP) with reinsertion of the cryopreserved autologous bone flap or a synthetic implant is considered for protection and esthetics. Despite the risk of autologous bone flap resorption (BFR), the cryopreserved autologous bone flap remains an important material for CP because of its cost effectiveness. In this article, we performed conventional statistical analysis and applied a machine learning technique to understand the risk factors for BFR. Methods : Patients aged >18 years who underwent autologous bone CP between January 2015 and December 2021 were reviewed. Demographic data, medical records, and volumetric measurements of the autologous bone flap from 94 patients were collected. BFR was defined with an absolute quantitative method (BFR-A) and a relative quantitative method (BFR%). Conventional statistical analysis and random forest with a hyper-ensemble approach (RF with HEA) were performed, and overlapped partial dependence plots (PDP) were generated. Results : Conventional statistical analysis showed that only the initial autologous bone flap volume was statistically significant for BFR-A. RF with HEA showed that the initial autologous bone flap volume, the interval between DC and CP, and bone quality contributed most to BFR-A, whereas trauma, bone quality, and the initial autologous bone flap volume contributed most to BFR%. Overlapped PDPs of the initial autologous bone flap volume on BFR-A crossed at approximately 60 mL, with a relatively clear separation between the non-BFR and BFR groups; an initial autologous bone flap of over 60 mL could therefore be a possible risk factor for BFR. Conclusion : BFR in patients who underwent CP with an autologous bone flap might be inevitable, but its degree differs from patient to patient. Considering artificial bone flaps as implants for patients with large DC could therefore be reasonable. Still, the risk factors for BFR are not clearly understood, so chronological analysis and pathophysiologic studies are needed.
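
The overlapped partial dependence plots mentioned here can be reproduced generically with scikit-learn; the sketch below fits one random forest per BFR definition and overlays their PDPs for the initial flap volume. The column name is a placeholder, and plain random forests stand in for the paper's hyper-ensemble approach.

```python
# Sketch: random forests + overlapped partial dependence plots (PDP) for the
# initial bone flap volume. X is assumed to be a pandas DataFrame; the feature
# name is a placeholder, and plain forests stand in for "RF with HEA".
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

def overlapped_pdp(X, y_bfr_a, y_bfr_pct, feature="initial_flap_volume_ml"):
    """Overlay the PDP of one feature for the BFR-A and BFR% targets."""
    rf_a = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y_bfr_a)
    rf_pct = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y_bfr_pct)
    disp = PartialDependenceDisplay.from_estimator(rf_a, X, [feature])
    PartialDependenceDisplay.from_estimator(rf_pct, X, [feature],
                                            ax=disp.axes_, line_kw={"color": "red"})
    return disp   # the two curves cross near the volume the paper flags (~60 mL)
```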

Robust Semi-auto Calibration Method for Various Cameras and Illumination Changes (다양한 카메라와 조명의 변화에 강건한 반자동 카메라 캘리브레이션 방법)

  • Shin, Dong-Won;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.21 no.1 / pp.36-42 / 2016
  • Recently, much 3D content has been produced through multiview camera systems. Since a difference in viewpoint between the color and depth cameras is inevitable in such systems, the camera parameters play an important role in adjusting the viewpoint as a preprocessing step. The conventional camera calibration method is inconvenient for users because pattern features must be chosen manually after capturing a planar chessboard in various poses. Therefore, we propose a semi-automatic camera calibration method using circular sampling and homography estimation. First, the proposed method extracts candidate pattern features from the images with the FAST corner detector. Next, we reduce the number of candidates by circular sampling and obtain the complete point cloud by homography estimation. Lastly, we compute the position of each pattern feature with sub-pixel accuracy by approximating a hyper-parabola surface. We investigated which factors affect the result of pattern feature detection at each step. Compared to the conventional method, the proposed method removes the inconvenience of manual operation while maintaining the accuracy of the camera parameters.
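
The steps named in this abstract map roughly onto standard OpenCV calls; the sketch below is an outline under that assumption. cv2.cornerSubPix stands in for the paper's hyper-parabola surface fit, and the circular-sampling step and manual correspondences are reduced to placeholders.

```python
# Rough outline of the described pipeline with OpenCV: FAST corner candidates,
# a homography to predict the full pattern grid, and sub-pixel refinement.
# cornerSubPix replaces the paper's hyper-parabola fit; the circular-sampling
# step and the manual correspondences are placeholders.
import cv2
import numpy as np

def semi_auto_pattern_points(gray, ideal_grid):
    """gray: 8-bit grayscale image; ideal_grid: Nx2 planar pattern coordinates."""
    ideal_grid = np.float32(ideal_grid)

    # 1) candidate pattern features from the FAST corner detector
    fast = cv2.FastFeatureDetector_create(threshold=30)
    candidates = np.float32([kp.pt for kp in fast.detect(gray)])

    # 2) estimate a homography from a few known correspondences (the semi-auto
    #    step), then project the ideal grid to predict every pattern point
    picked = candidates[:4]                       # placeholder for the manual picks
    H, _ = cv2.findHomography(ideal_grid[:4], picked, cv2.RANSAC)
    projected = cv2.perspectiveTransform(ideal_grid.reshape(-1, 1, 2), H)

    # 3) refine each predicted corner to sub-pixel accuracy
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    cv2.cornerSubPix(gray, projected, (5, 5), (-1, -1), criteria)
    return projected.reshape(-1, 2)
```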

Improvement of Cloud-data Filtering Method Using Spectrum of AERI (AERI 스펙트럼 분석을 통한 구름에 영향을 받은 스펙트럼 자료 제거 방법 개선)

  • Cho, Joon-Sik;Goo, Tae-Young;Shin, Jinho
    • Korean Journal of Remote Sensing / v.31 no.2 / pp.137-148 / 2015
  • The National Institute of Meteorological Research (NIMR) has operated a Fourier Transform InfraRed (FTIR) spectrometer, the Atmospheric Emitted Radiance Interferometer (AERI), on Anmyeon Island, Korea, since June 2010. Because the ground-based AERI carries a hyper-spectral infrared sensor similar to those on satellites, it can serve as an alternative means of validating satellite-based remote sensing. In this regard, the NIMR has focused on improving the quality of AERI retrievals, particularly the cloud-data filtering method. An AERI spectrum measured on a typical clear day was selected as the reference spectrum, and the atmospheric-window region was used. Threshold tests were performed to select a valid threshold. Methane was retrieved with the new method, which uses the reference spectrum, and with the existing method, which uses KLAPS cloud cover information, and each retrieval was compared with ground-based in-situ measurements. The quality of the AERI methane retrievals from the new method was significantly better than that of the KLAPS-based method. In addition, the comparison of the vertical total column of methane from AERI and GOSAT shows good agreement.
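
The cloud screening described here amounts to comparing each measured spectrum with a clear-sky reference over an atmospheric-window band and rejecting it when the difference exceeds a threshold; a minimal sketch follows. The wavenumber window and threshold are placeholders, not the values used by NIMR.

```python
# Sketch of a reference-spectrum cloud filter: compare each AERI spectrum with
# the clear-sky reference over an atmospheric-window band and flag it when the
# mean radiance excess passes a threshold. Window limits and the threshold
# value are placeholders.
import numpy as np

WINDOW = (800.0, 1000.0)   # cm^-1, assumed atmospheric-window band
THRESHOLD = 5.0            # assumed mean radiance-difference threshold

def is_cloud_affected(wavenumber, radiance, reference,
                      window=WINDOW, threshold=THRESHOLD):
    """True if the spectrum departs from the clear-sky reference in the window."""
    in_window = (wavenumber >= window[0]) & (wavenumber <= window[1])
    excess = np.mean(radiance[in_window] - reference[in_window])
    return excess > threshold

# keep only the clear-sky spectra before the methane retrieval
# clear = [r for r in spectra if not is_cloud_affected(wn, r, reference)]
```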

Application of Seasonal AERI Reference Spectrum for the Improvement of Cloud data Filtering Method (계절별 AERI 기준 스펙트럼 적용을 통한 구름에 영향을 받은 스펙트럼 자료 제거방법 개선)

  • Cho, Joon-Sik;Goo, Tae-Young;Shin, Jinho
    • Korean Journal of Remote Sensing / v.31 no.5 / pp.409-419 / 2015
  • The Atmospheric Emitted Radiance Interferometer (AERI), a Fourier Transform InfraRed (FTIR) spectrometer, has been operated by the National Institute of Meteorological Research (NIMR) on Anmyeon Island, South Korea, since June 2010. Because the ground-based AERI carries a hyper-spectral infrared sensor similar to those on satellites, it can serve as an alternative means of validating satellite-based remote sensing. In this regard, the NIMR has focused on improving its Cloud data Filtering Method (CFM), which employed only one clear-sky reference spectrum from the winter season. This study proposes a Seasonal Cloud data Filtering Method (S-CFM) that applies seasonal AERI reference spectra. To compare S-CFM with CFM, methane retrievals (surface volume mixing ratio) from the AERI spectra are used. The quality of the AERI methane retrievals with S-CFM was significantly better than with CFM. The S-CFM results follow a pattern similar to the seasonal variation of methane from ground-based in-situ measurements, although the summer methane is over-estimated. In addition, the comparison of the vertical total column of methane from AERI and GOSAT shows good agreement except for the summer season.
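
The seasonal extension only changes which reference spectrum is compared against; a small sketch of that selection step, reusing the filter sketched above, is given below. The month-to-season mapping and file names are assumptions.

```python
# Sketch of the seasonal selection in S-CFM: choose the reference spectrum by
# observation month instead of a single winter reference. File names and the
# month-to-season mapping are assumptions.
import numpy as np

SEASONAL_REFERENCE = {
    "winter": "ref_winter.npy", "spring": "ref_spring.npy",
    "summer": "ref_summer.npy", "autumn": "ref_autumn.npy",
}

def season_of(month: int) -> str:
    if month in (12, 1, 2):
        return "winter"
    if month in (3, 4, 5):
        return "spring"
    if month in (6, 7, 8):
        return "summer"
    return "autumn"

# reference = np.load(SEASONAL_REFERENCE[season_of(obs_month)])
# cloudy = is_cloud_affected(wn, radiance, reference)   # filter from the sketch above
```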

The Use of Reinforcement Learning and The Reference Page Selection Method to improve Web Spidering Performance (웹 탐색 성능 향상을 위한 강화학습 이용과 기준 페이지 선택 기법)

  • 이기철;이선애
    • Journal of the Korea Computer Industry Society / v.3 no.3 / pp.331-340 / 2002
  • The web is becoming so huge and intractable that, without an intelligent information extractor, we would become more and more helpless. Conventional web spidering techniques for general-purpose search engines may be too slow for specific search engines, which concentrate only on specific areas or keywords. In this paper a new model for improving web spidering capabilities is suggested and tested. Selecting adequate reference web pages from the initial web page set relevant to a given specific area (or keywords) can be very important for reducing spidering time. Our reference web page selection method, DOPS, selects web pages dynamically and orthogonally, and it can also decide the appropriate number of reference pages using a newly defined measure. Even for a very specific area, this method worked comparably well, almost at the level of experts. Considering that experts cannot work on a huge initial page set and still have difficulty deciding the optimal number of reference web pages, this method seems very promising. We also applied reinforcement learning to the web environment, and DOPS-based reinforcement learning experiments show that our method performs quite favorably in terms of both the number of hyperlinks and time.
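
The abstract does not give the reinforcement-learning formulation; the sketch below shows one generic way to apply RL to focused crawling — anchor-text words carry value estimates that are updated from the relevance reward of fetched pages — and does not reproduce DOPS itself.

```python
# Generic reinforcement-learning-style focused crawler: anchor-text words carry
# value estimates, frontier links are prioritized by those values, and the
# values are updated from the relevance reward of fetched pages. DOPS and the
# paper's exact reward design are not reproduced; the constants are assumptions.
from collections import defaultdict

ALPHA, GAMMA = 0.3, 0.9                  # assumed learning rate / discount factor

class LinkValueLearner:
    def __init__(self):
        self.value = defaultdict(float)  # value estimate per anchor-text word

    def score(self, anchor_words):
        """Frontier priority of a link = mean value of its anchor words."""
        return sum(self.value[w] for w in anchor_words) / max(len(anchor_words), 1)

    def update(self, anchor_words, reward, best_next_score):
        """TD-style update once the linked page has been fetched and scored."""
        target = reward + GAMMA * best_next_score
        for w in anchor_words:
            self.value[w] += ALPHA * (target - self.value[w])

# Crawl loop (outline): pop the highest-scoring frontier link, fetch it,
# compute reward = similarity to the DOPS-selected reference pages, call
# update(), then push the page's outgoing links back onto the frontier.
```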

Development of Three-dimensional Inversion Algorithm of Complex Resistivity Method (복소 전기비저항 3차원 역산 알고리듬 개발)

  • Son, Jeong-Sul;Shin, Seungwook;Park, Sam-Gyu
    • Geophysics and Geophysical Exploration / v.24 no.4 / pp.180-193 / 2021
  • The complex resistivity (CR) method is an exploration technique that obtains various characteristic information about underground media by measuring resistivity and phase in the frequency domain, and its use has recently increased. In this paper, a three-dimensional inversion algorithm for CR data was developed to increase the utilization of this method. The Poisson equation, which is applicable when the electromagnetic coupling effect is ignored, was used for the modeling, and the inversion algorithm was developed by modifying an existing algorithm to adopt complex variables. To increase the stability of the inversion, a technique was introduced that automatically adjusts the Lagrangian multiplier according to the ratio of the error vector to the model update vector. Furthermore, to compensate for the loss of data due to noisy phase measurements, a two-step inversion method was developed that iterates using only resistivity data at the beginning and both resistivity and phase data in the second half. Experiments on synthetic data produced stable inversion results, and the validity for real data was also confirmed by applying the developed 3D inversion algorithm to field data acquired near a hydrothermal mine.
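
A minimal numerical sketch of the kind of update described here — a damped least-squares step with complex variables whose Lagrangian multiplier is rescaled from the ratio of the error norm to the model-update norm. The forward model and Jacobian are placeholders for the Poisson-equation modeling, and the exact adjustment rule is an assumption.

```python
# One regularized Gauss-Newton update with complex variables; the Lagrangian
# multiplier is rescaled by the ratio of the data-error norm to the model-update
# norm, as a stand-in for the adaptive rule described in the abstract. The
# forward() and jacobian() callables are placeholders for the Poisson-equation
# modeling.
import numpy as np

def gauss_newton_step(m, d_obs, forward, jacobian, lam, C):
    """m: complex model vector, d_obs: complex data, C: roughness matrix."""
    r = d_obs - forward(m)                        # complex data residual
    J = jacobian(m)                               # complex sensitivity matrix
    A = J.conj().T @ J + lam * (C.T @ C)          # damped normal equations
    dm = np.linalg.solve(A, J.conj().T @ r)       # model update
    lam_new = lam * np.linalg.norm(r) / max(np.linalg.norm(dm), 1e-12)
    return m + dm, lam_new

# Two-step strategy (roughly, per the abstract): run early iterations with the
# phase part of d_obs suppressed (resistivity only), then include both the
# resistivity and phase data in later iterations.
```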

A Study on Optimal Shape of Stent by Finite Element Analysis (유한요소 해석을 이용한 스텐트 최적형상 설계)

  • Lee, Tae-Hyun;Yang, Chulho
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.11 / pp.1-6 / 2017
  • Stents, implants in the form of a metal mesh, are the most common treatment for coronary artery disease. Blood flow is normalized by inserting a stent into the narrowed or clogged areas of the human body. In this study, the mechanical characteristics of a stent are investigated according to variations of its design parameters using the Taguchi method and finite element analysis. A Palmaz-Schatz type stent model was used for the analysis, with an elasto-plastic material model for the stent and a hyper-elastic model for the balloon. The main interest of this study is to investigate the effects of the design parameters that reduce the possibility of restenosis by adjusting the amount of recoil. A Taguchi orthogonal array was constructed for the stent model, with the thickness, length, and angle of the slot selected as design parameters. The amounts of radial and longitudinal recoil were calculated by finite element analysis. The statistical analysis using the Taguchi method showed that optimizing the shape of the stent could reduce the possibility of restenosis: the optimized shape improved radial and longitudinal recoil by ~1% and ~0.1%, respectively, compared to the default model.
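
The Taguchi part of this study can be illustrated with a standard L9 orthogonal array and a smaller-is-better S/N ratio for the recoil responses; the sketch below uses assumed level values, and the recoil numbers would come from the finite element runs.

```python
# Sketch: L9(3^3) Taguchi design over the three stent parameters (slot thickness,
# length, angle) and a smaller-is-better S/N ratio for recoil. Level values are
# illustrative assumptions; run_fea() is a hypothetical FEA driver.
import numpy as np

# first three columns of the standard L9 orthogonal array (levels 0/1/2)
L9 = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])

LEVELS = {
    "thickness_mm":   [0.08, 0.10, 0.12],   # assumed level values
    "length_mm":      [1.0, 1.2, 1.4],
    "slot_angle_deg": [30.0, 45.0, 60.0],
}

def experiment_table():
    """Map each L9 row to concrete parameter values for one FEA run."""
    names = list(LEVELS)
    return [{n: LEVELS[n][row[i]] for i, n in enumerate(names)} for row in L9]

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for a response that should be minimized (recoil)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# recoil = [run_fea(**case) for case in experiment_table()]   # hypothetical driver
# print(sn_smaller_is_better(recoil))
```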

Application of the Web Design Elements using the Aesthetic Evaluation (감성평가를 이용한 웹 디자인 요소의 활용방안)

  • 김미영;정홍인
    • Archives of design research / v.17 no.3 / pp.413-420 / 2004
  • A new design method has been needed for web designers to grasp the intended emotion, impression, and feeling of a web site and to reflect these elements in its design. Such a methodology can be a useful design tool, since web designers have so far relied only on intuition and experience to lead users to perceive the specific emotion of a web site. In this study, the Kansei Engineering Type I method (Nagamachi, 2002 and Park, 2000) was applied to develop the methodology. One hundred thirty-six web sites believed to convey emotions effectively were first selected on the recommendation of professional web designers, and twenty-two web sites were finally chosen and evaluated with a questionnaire. The web sites were then objectively and quantitatively assessed by measuring the degree of utilization of the design elements: balance, overall density, and homogeneity. We examined the cause-and-effect relationship between the emotional and quantitative analyses by multiple regression and introduced a design methodology based on this examination. The research method and procedures applied in this study should be applicable to design studies related to emotional inducement.
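
The regression step described here — relating the measured design elements to the averaged questionnaire ratings — can be sketched with ordinary least squares; the column names and data layout below are assumptions.

```python
# Sketch of the multiple-regression step: relate the quantitative design
# measures (balance, overall density, homogeneity) to an averaged emotion
# rating per web site. Column names are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

def fit_emotion_model(sites: pd.DataFrame, emotion_col: str):
    """sites: one row per web site with design measures and questionnaire means."""
    X = sm.add_constant(sites[["balance", "density", "homogeneity"]])
    return sm.OLS(sites[emotion_col], X).fit()

# model = fit_emotion_model(sites, "pleasant")   # 'pleasant' is a placeholder scale
# print(model.summary())                         # weight of each design element
```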

Estimation Model for Freight of Container Ships using Deep Learning Method (딥러닝 기법을 활용한 컨테이너선 운임 예측 모델)

  • Kim, Donggyun;Choi, Jung-Suk
    • Journal of the Korean Society of Marine Environment & Safety / v.27 no.5 / pp.574-583 / 2021
  • Predicting shipping markets is an important issue. Such predictions form the basis for decisions on investment methods, fleet formation, freight rates, etc., which greatly affect the profits and survival of a company. To this end, this study proposes a shipping freight rate prediction model for container ships using gated recurrent unit (GRU) and long short-term memory (LSTM) structures. The target of the freight rate prediction is the China Containerized Freight Index (CCFI), and CCFI data from March 2003 to May 2020 were used for training. The CCFI after June 2020 was predicted by each model and then compared with the actual CCFI. For the experiment, a total of six models were designed according to the hyperparameter settings, and the ARIMA model was included for performance comparison with a traditional analysis method. The optimal model was selected based on two evaluation methods: the first selects the model with the smallest average root mean square error (RMSE) over 10 repetitions of each model, and the second selects the model with the lowest RMSE across all experiments. The experimental results showed not only that the deep learning models were more accurate than the traditional time series model, ARIMA, but also that they can help manage the risk of freight fluctuations. However, when freight rates changed suddenly owing to external factors such as the COVID-19 pandemic, the accuracy of the forecasting models decreased. The GRU1 model recorded the lowest RMSE (69.55, 49.35) under both evaluation methods and was selected as the optimal model.
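
A minimal sketch of a GRU forecaster of the kind this abstract describes: sliding windows of the CCFI series feed a single GRU layer and a dense output, with RMSE as the evaluation metric. Window length, layer size, and training settings are assumptions, not the paper's six configurations.

```python
# Sketch: windowed CCFI series -> single-layer GRU -> one-step forecast, with
# RMSE as the metric. Window length, units, and training settings are assumed.
import numpy as np
import tensorflow as tf

def make_windows(series, window=12):
    """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., np.newaxis], series[window:]

def build_gru(window=12, units=32):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, 1)),
        tf.keras.layers.GRU(units),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

# ccfi = 1-D numpy array of CCFI values (training span: 2003-03 .. 2020-05)
# X, y = make_windows(ccfi); model = build_gru(); model.fit(X, y, epochs=100)
```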