• Title/Summary/Keyword: Machine data

Fault Detection and Diagnosis based on Fuzzy Algorithm in the Injection Molding Machine Barrel Temperature (사출 성형기 Barrel 온도에 관한 퍼지알고리즘 기반의 고장 검출 및 진단)

  • 김훈모
    • Journal of Institute of Control, Robotics and Systems / v.9 no.11 / pp.958-962 / 2003
  • We acquired data from an injection molding machine in operation and stored it in a database. The data were collected continuously for fault detection and diagnosis (FDD), and fault conditions were estimated with a fuzzy algorithm. Most FDD research targets large systems such as nuclear power plants and computer numerical control (CNC) machines, whereas FDD studies on injection molding machines are comparatively rare. We assess the accuracy of the FDD and the limits of its application to the injection molding machine. A fault detection and diagnosis system based on a fuzzy algorithm was constructed for the injection molding machine, and data from the operating machine were acquired to improve the reliability of detection and diagnosis.
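
The abstract does not reproduce the fuzzy rule base, so the following is only a minimal sketch of how a fuzzy algorithm might flag barrel temperature faults; the membership breakpoints, rule weights, and example temperatures are illustrative assumptions, not values from the study.

```python
# Illustrative fuzzy fault-detection sketch for barrel temperature (not the paper's code).
# Membership breakpoints and rule weights are assumed for demonstration only.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fault_degree(measured_temp, setpoint):
    """Return a fault degree in [0, 1] from the deviation between measured and set temperature."""
    dev = abs(measured_temp - setpoint)          # deviation in degrees C
    small  = tri(dev, -1.0, 0.0, 5.0)            # membership: deviation is small
    medium = tri(dev,  3.0, 8.0, 15.0)           # membership: deviation is medium
    large  = tri(dev, 10.0, 25.0, 40.0)          # membership: deviation is large
    # Rule base: small -> no fault (0.0), medium -> possible fault (0.5), large -> fault (1.0)
    weights = small + medium + large
    if weights == 0.0:
        return 1.0                               # far outside all sets: treat as fault
    return (small * 0.0 + medium * 0.5 + large * 1.0) / weights

# Example: a barrel zone set to 220 C but measured at 241 C
print(fault_degree(241.0, 220.0))                # high degree -> raise a diagnosis alarm
```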

Prediction of Weight of Spiral Molding Using Injection Molding Analysis and Machine Learning (사출성형 CAE와 머신러닝을 이용한 스파이럴 성형품의 중량 예측)

  • Bum-Soo Kim;Seong-Yeol Han
    • Design & Manufacturing / v.17 no.1 / pp.27-32 / 2023
  • In this paper, we predict the weight of a spiral molded part using CAE and machine learning. First, we generated 125 experimental cases through a full factorial design with three factors and five levels. Next, the data were obtained by performing molding analysis in CAE, and the machine learning process was carried out with a machine learning tool. To select the optimal model among those trained on the learning data, accuracy was evaluated using RMSE; the evaluation showed that the support vector machine had the best predictive performance. To evaluate the predictive model, we randomly generated ten non-overlapping data points within the existing injection molding condition levels and compared the CAE and support vector machine results on them. The model showed good performance, with a MAPE of 0.48%.
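
As a rough illustration of the workflow the abstract describes (train a regressor on a full factorial design, compare models by RMSE, then check MAPE on held-out conditions), the sketch below fits a scikit-learn support vector regressor on synthetic stand-in data; the three factors and the synthetic weight function are assumptions, not the paper's CAE results.

```python
# Sketch of the abstract's workflow on synthetic data (assumed factors, not the paper's CAE data).
import numpy as np
from itertools import product
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

rng = np.random.default_rng(0)

# 3 factors x 5 levels -> 125 runs, mirroring the full factorial design in the abstract.
levels = np.linspace(0.0, 1.0, 5)
X = np.array(list(product(levels, repeat=3)))
# Stand-in for the CAE-computed part weight (purely synthetic relationship).
y = 20 + 3 * X[:, 0] - 2 * X[:, 1] + 1.5 * X[:, 0] * X[:, 2] + rng.normal(0, 0.05, len(X))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)

# RMSE over the training design (used in the paper to compare candidate models).
rmse = mean_squared_error(y, model.predict(X)) ** 0.5

# 10 random validation conditions inside the factor ranges, as in the abstract.
X_val = rng.uniform(0.0, 1.0, size=(10, 3))
y_val = 20 + 3 * X_val[:, 0] - 2 * X_val[:, 1] + 1.5 * X_val[:, 0] * X_val[:, 2]
mape = mean_absolute_percentage_error(y_val, model.predict(X_val)) * 100

print(f"RMSE = {rmse:.4f}, MAPE = {mape:.2f}%")
```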

Pattern Data Extraction and Generation Algorithm for A Computer Controlled Pattern Sewing Machine (컴퓨터 제어 패턴 재봉기를 위한 패턴 데이타 추출 및 생성 알고리즘)

  • Yun, Sung-yong;Baik, Sang-hyun;Kim, Il-hwan
    • Journal of Industrial Technology / v.19 / pp.179-187 / 1999
  • The computer-controlled pattern sewing machine is an automatic sewing machine driven by an input pattern. Even a novice can use it quickly and reliably for tasks such as sewing a button, a belt ring, or an airbag. The pattern processing software, the main software of the machine, edits and modifies pattern data through online teaching or off-line editing, sets parameters, and calculates the moving distance of the working area along the x-y axes. In this paper we propose an algorithm that generates pattern data for sewing by simplifying image data. The pattern data consist of outline primitives such as dots, lines, circles, arcs, and curves, which must be converted into sewing data that include the sewing parameters, the moving distance of the working area along the x-y axes, the thread, and the spindle speed.
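
To make the outline-to-sewing-data conversion concrete, here is a hypothetical sketch that resamples line and arc primitives into evenly spaced stitch points; the primitive format, stitch pitch, and output fields are assumptions for illustration, not the paper's actual data format.

```python
# Hypothetical conversion of outline primitives into stitch points (assumed formats).
import math

def line_to_stitches(p0, p1, pitch):
    """Resample a straight segment into stitch points spaced roughly `pitch` apart."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    n = max(1, round(length / pitch))
    return [(p0[0] + dx * i / n, p0[1] + dy * i / n) for i in range(1, n + 1)]

def arc_to_stitches(center, radius, a0, a1, pitch):
    """Resample a circular arc (angles in radians) into stitch points."""
    n = max(1, round(abs(a1 - a0) * radius / pitch))
    return [(center[0] + radius * math.cos(a0 + (a1 - a0) * i / n),
             center[1] + radius * math.sin(a0 + (a1 - a0) * i / n)) for i in range(1, n + 1)]

def pattern_to_sewing_data(primitives, pitch=2.0):
    """Flatten a list of ('line', ...) / ('arc', ...) primitives into x-y stitch moves."""
    stitches, current = [], (0.0, 0.0)
    for prim in primitives:
        if prim[0] == "line":
            pts = line_to_stitches(current, prim[1], pitch)
        elif prim[0] == "arc":
            pts = arc_to_stitches(prim[1], prim[2], prim[3], prim[4], pitch)
        else:
            continue
        stitches.extend(pts)
        current = pts[-1]
    # Each entry becomes an x-y move of the working area; thread and spindle speed
    # parameters would be attached to these moves in the real sewing data.
    return stitches

print(pattern_to_sewing_data([("line", (10.0, 0.0)), ("arc", (10.0, 5.0), 5.0, -math.pi / 2, math.pi / 2)]))
```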

An Effective Data Model for Forecasting and Analyzing Securities Data

  • Lee, Seung Ho;Shin, Seung Jung
    • International journal of advanced smart convergence / v.5 no.4 / pp.32-39 / 2016
  • Machine learning is a field of artificial intelligence (AI), and the technology developed here to collect, forecast, and analyze securities data is built upon it. Although machine learning may seem similar to big data processing, the difference is that machine learning can study and collect data by itself, which big data processing cannot do. Machine learning can be used, for example, to recognize a particular pattern of an object and identify a criminal or a vehicle used in a crime. To achieve such intelligent tasks, data must be collected more effectively than before. In this paper, we propose a method for collecting data effectively.

Effectiveness of Normalization Pre-Processing of Big Data to the Machine Learning Performance (빅데이터의 정규화 전처리과정이 기계학습의 성능에 미치는 영향)

  • Jo, Jun-Mo
    • The Journal of the Korea institute of electronic communication sciences / v.14 no.3 / pp.547-552 / 2019
  • Recently, the massive growth in the scale of data has become a major issue in big data. Because big data also serves as the input to machine learning, it should be normalized during preprocessing to obtain high machine learning performance. The performance varies with factors such as the range of the columns in the data and the normalization method used. In this paper, various normalization preprocessing methods and column ranges are applied to a support vector machine (SVM) to identify an efficient configuration for normalization preprocessing. The machine learning experiments were programmed in Python using Jupyter Notebook.
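
Since the abstract compares normalization methods as SVM preprocessing in Python, a minimal sketch of that kind of comparison with scikit-learn is given below; it uses a bundled sample dataset rather than the paper's big data, and the specific scalers shown are only common examples of normalization preprocessing.

```python
# Sketch comparing normalization preprocessing for an SVM (illustrative dataset, not the paper's data).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "no scaling": SVC(),
    "min-max [0, 1]": make_pipeline(MinMaxScaler(), SVC()),
    "standardization (z-score)": make_pipeline(StandardScaler(), SVC()),
}

for name, model in candidates.items():
    model.fit(X_train, y_train)
    # Columns with very different ranges usually make the unscaled SVM noticeably worse.
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```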

A study on the standardization strategy for building of learning data set for machine learning applications (기계학습 활용을 위한 학습 데이터세트 구축 표준화 방안에 관한 연구)

  • Choi, JungYul
    • Journal of Digital Convergence / v.16 no.10 / pp.205-212 / 2018
  • With the development of high-performance CPUs/GPUs, artificial intelligence algorithms such as deep neural networks, and large amounts of data, machine learning has spread to various applications. In particular, the large volumes of data collected from the Internet of Things, social network services, web pages, and public data are accelerating the use of machine learning. Learning data sets for machine learning exist in various formats depending on the application field and data type, which makes it difficult to process the data effectively and apply them to machine learning. Therefore, this paper studies a method for building learning data sets according to standardized procedures. It first analyzes the requirements for learning data sets by problem type and data type, and based on this analysis presents a reference model for building learning data sets for machine learning applications. It also identifies target standardization organizations and a standards development strategy for building learning data sets.
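
The reference model itself is not reproduced in the abstract, so the sketch below is only a hypothetical illustration of what a standardized learning data set description keyed by problem type and data type might capture; every field name here is an assumption, not part of the paper's model.

```python
# Hypothetical manifest for a learning data set, keyed by problem type and data type.
# Field names are illustrative assumptions, not the paper's reference model.
from dataclasses import dataclass, field

@dataclass
class DatasetManifest:
    name: str
    problem_type: str              # e.g. "classification", "regression", "detection"
    data_type: str                 # e.g. "image", "text", "time-series", "tabular"
    label_schema: dict             # class names or target description
    source: str                    # origin: IoT, SNS, web pages, public data, ...
    license: str
    splits: dict = field(default_factory=lambda: {"train": 0.8, "validation": 0.1, "test": 0.1})

manifest = DatasetManifest(
    name="example-sensor-faults",
    problem_type="classification",
    data_type="time-series",
    label_schema={"0": "normal", "1": "fault"},
    source="IoT sensor logs",
    license="CC BY 4.0",
)
print(manifest)
```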

Interference Check and NC Data Optimization through Machine Simulation in 5 Axises Machining of a Vehicle Parts of Aluminum Alloy (Al 합금 수송기계부품의 5축 가공에서 머신시뮬레이션을 통한 간섭체크 및 NC 데이터 최적화)

  • Kim Hae Ji;Lee In-Su;Kim Nam Kyung
    • Journal of the Korean Society for Precision Engineering / v.21 no.12 / pp.52-59 / 2004
  • This paper describes a machine simulation of the interference that occurs between the NC equipment and the workpiece in five-axis machining of aluminum alloy vehicle parts. The study was motivated by the high rate of equipment interference in machining vehicle parts on a five-axis horizontal machine. By building a virtual manufacturing system of the machine, workpiece, holder, and other elements, the simulation and machining process can be verified through their dynamic relations, interference, and collisions; the shape, functions, and machining behavior of the machine tool are the necessary elements represented in this virtual environment. The machine simulation was also applied to the NC data of actual BULKHEAD and FRAME aircraft parts to check for interference and collisions between the NC equipment and the workpiece. As a result, by removing the equipment interference and collision elements when creating NC data, the virtual machine tool increased the efficiency of the machining process.
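
The abstract describes checking NC tool paths for collisions against machine and fixture geometry in a virtual machine; the sketch below shows only the simplest form of such a check, testing interpolated tool positions against an axis-aligned bounding box, with all geometry, tool radius, and NC points assumed for illustration.

```python
# Minimal illustrative interference check: NC path points vs. a fixture bounding box.
# Geometry, tool radius, and NC points are assumptions, not the paper's data.

def interpolate(p0, p1, steps=20):
    """Linearly interpolate between two NC positions."""
    return [tuple(p0[i] + (p1[i] - p0[i]) * t / steps for i in range(3)) for t in range(steps + 1)]

def collides(point, box, tool_radius):
    """True if a sphere of tool_radius around `point` intersects the axis-aligned box."""
    (xmin, ymin, zmin), (xmax, ymax, zmax) = box
    # Per-axis distance from the point to the closest face of the box.
    dx = max(xmin - point[0], 0.0, point[0] - xmax)
    dy = max(ymin - point[1], 0.0, point[1] - ymax)
    dz = max(zmin - point[2], 0.0, point[2] - zmax)
    return dx * dx + dy * dy + dz * dz <= tool_radius * tool_radius

def check_nc_path(path, box, tool_radius=5.0):
    """Report the first interpolated position along the NC path that interferes with the box."""
    for p0, p1 in zip(path, path[1:]):
        for q in interpolate(p0, p1):
            if collides(q, box, tool_radius):
                return q
    return None

fixture = ((0.0, 0.0, 0.0), (100.0, 50.0, 30.0))        # assumed fixture bounding box (mm)
nc_path = [(-20.0, 25.0, 60.0), (120.0, 25.0, 20.0)]    # assumed rapid move that dips too low
print(check_nc_path(nc_path, fixture))                   # a hit would mean regenerating the NC data
```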

A Study on Ontology Generation by Machine Learning in Big Data (빅 데이터에서 기계학습을 통한 온톨로지 생성에 관한 연구)

  • Hwang, Chi-Gon;Yoon, Chang-Pyo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.645-646 / 2018
  • Recently, machine learning has been introduced as a decision-making method based on data processing: it uses the results of training on existing data as a basis for decisions. The data generated by technological development are vast and are called big data, and it is important to extract the necessary data from them. In this paper, we propose a method for extracting related data to construct an ontology through machine learning. The results of machine learning can be given relationships from a semantic perspective and added to the ontology to support the relationships required by the application.
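
As a rough sketch of the general idea (group related items with machine learning, then record the discovered relations as ontology triples), the example below clusters short texts with TF-IDF and k-means and emits hypothetical relatedTo triples; the corpus, predicate name, and cluster count are assumptions, not the paper's method.

```python
# Illustrative sketch: derive "relatedTo" triples for an ontology from k-means clusters.
# Corpus, predicate name, and cluster count are assumptions, not the paper's method.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = {
    "doc1": "injection molding barrel temperature fault detection",
    "doc2": "barrel heater temperature control of molding machine",
    "doc3": "stock price forecasting with machine learning",
    "doc4": "securities data analysis and prediction",
}

names = list(documents)
X = TfidfVectorizer().fit_transform(list(documents.values()))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Items that land in the same cluster are proposed as semantically related.
triples = []
for cluster in set(labels):
    members = [n for n, lab in zip(names, labels) if lab == cluster]
    triples.extend((a, "relatedTo", b) for a, b in combinations(members, 2))

for t in triples:
    print(t)   # each triple could then be added to the application ontology
```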

Machine Learning Methodology for Management of Shipbuilding Master Data

  • Jeong, Ju Hyeon;Woo, Jong Hun;Park, JungGoo
    • International Journal of Naval Architecture and Ocean Engineering / v.12 no.1 / pp.428-439 / 2020
  • The continuous development of information and communication technologies has resulted in an exponential increase in data. Consequently, technologies related to data analysis are growing in importance. The shipbuilding industry has high production uncertainty and variability, which has created an urgent need for data analysis techniques such as machine learning. In particular, the industry cannot effectively respond to changes in the production-related standard time information, such as the basic cycle time and lead time, so improvement measures are needed to let it respond swiftly to changes in the production environment. In this study, the lead times for ship block fabrication and assembly, spool fabrication, and painting were predicted using machine learning in order to propose a new management method for process lead time based on a master data system for the time elements in the production data. Data preprocessing was performed in various ways using the open source languages R and Python, and process variables were selected according to their relationships with the lead time through correlation analysis and analysis of the variables. Various machine learning, deep learning, and ensemble learning algorithms were applied to create the lead time prediction models. In addition, the applicability of the proposed machine learning methodology to standard work hour prediction was verified by evaluating the prediction models with criteria such as the Mean Absolute Percentage Error (MAPE) and the Root Mean Squared Logarithmic Error (RMSLE).
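
Since the abstract names MAPE and RMSLE as the evaluation criteria for the lead time models, a small sketch of those two metrics in plain NumPy is shown below; the formulations are the usual textbook ones and the example lead times are invented for illustration.

```python
# Standard MAPE and RMSLE definitions used to score lead time predictions (illustrative data).
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def rmsle(y_true, y_pred):
    """Root Mean Squared Logarithmic Error; less sensitive to large lead time outliers."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

# Assumed example: actual vs. predicted process lead times in days.
actual    = [12.0, 30.0, 7.5, 21.0]
predicted = [11.0, 33.0, 8.0, 19.5]
print(f"MAPE  = {mape(actual, predicted):.2f}%")
print(f"RMSLE = {rmsle(actual, predicted):.4f}")
```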

Development of a Web-based Analysis Program for Reliability Assessment of Machine Tools (공작 기계의 신뢰성 평가를 위한 웹 기반 해석 프로그램 개발)

  • 강태한;김봉석;이수훈;송준엽;강재훈
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2004.10a / pp.369-374 / 2004
  • Web-based analysis programs for the reliability assessment of machine tools were developed in this study. First, a reliability data analysis program was developed to estimate failure rates from the failure data and reliability test data of mechanical parts. Second, a failure mode analysis was developed through performance tests for machine tools, such as circular movement and vibration tests; this program shows the correlation between failure modes and performance test results. Third, tool life was predicted from the correlation between flank wear and cutting time, using the extended Taylor tool life equation for turning data and an equivalently converted equation so that ball end mill data could be applied to the Taylor tool life equation for milling. All input and result data can be stored in these programs.
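
The abstract relies on the extended Taylor tool life relation, commonly written V * T**n * f**a * d**b = C; a brief sketch of solving that relation for tool life is given below, with the exponents and constant chosen arbitrarily for illustration since the paper's fitted values are not in the abstract.

```python
# Extended Taylor tool life relation: V * T**n * f**a * d**b = C, solved for tool life T.
# Exponents and constant below are arbitrary illustrative values, not the paper's fitted ones.

def tool_life(V, f, d, n=0.25, a=0.3, b=0.15, C=250.0):
    """Tool life T (min) from cutting speed V (m/min), feed f (mm/rev), and depth of cut d (mm)."""
    return (C / (V * f**a * d**b)) ** (1.0 / n)

# Example: turning at 150 m/min, 0.2 mm/rev feed, 1.5 mm depth of cut.
print(f"Predicted tool life: {tool_life(150.0, 0.2, 1.5):.1f} min")
```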
