• Title/Summary/Keyword: big machine tools

Search Results: 40

Study on Structure Design of High-Stiffness for 5 - Axis Machining Center (5축 공작기계의 고강성 구조설계에 관한 연구)

  • Hong, Jong-Pil;Gong, Byeong-Chae;Choi, Sung-Dae;Choi, Hyun-Jin;Lee, Dal-Sik
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.10 no.5
    • /
    • pp.7-12
    • /
    • 2011
  • This study covers the optimum design of a 5-axis machine tool, in which intelligent control secures structural stability through optimized design of the machining center structure, the main spindle, and the tilting index table. These requirements ultimately lead to higher-speed operation, which makes it essential to understand vibration and related mechanical phenomena in terms of productivity and accuracy. In general, productivity correlates with operating speed, and both the vibration level and the operating speed have grown accordingly. Vibration and thermal deformation occur naturally during machine operation. If these phenomena are interpreted through a structural understanding of the naturally arising vibration and thermal deformation, both the speed and the stability of machine operation can be improved. In this study, the structural problems of an intelligent 5-axis machine tool concerning heating, stability, dynamics, and safety are identified and examined, and the findings are applied to the structure to enhance its stiffness and improve its stability.

A Meta Analysis of the Edible Insects (식용곤충 연구 메타 분석)

  • Yu, Ok-Kyeong;Jin, Chan-Yong;Nam, Soo-Tai;Lee, Hyun-Chang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.182-183
    • /
    • 2018
  • Big data analysis is the process of discovering meaningful correlations, patterns, and trends in large data sets stored in existing data warehouse management tools, and of creating new value from them. It also refers to the technology for extracting new value from large volumes of structured and unstructured data and analyzing the results. Most big data analysis methods, such as data mining, machine learning, natural language processing, and pattern recognition, come from existing statistics and computer science. Global research institutes have identified big data as one of the most notable new technologies since 2011.


Guidelines for big data projects in artificial intelligence mathematics education (인공지능 수학 교육을 위한 빅데이터 프로젝트 과제 가이드라인)

  • Lee, Junghwa;Han, Chaereen;Lim, Woong
    • The Mathematical Education
    • /
    • v.62 no.2
    • /
    • pp.289-302
    • /
    • 2023
  • In today's digital information society, students' knowledge and skills in analyzing big data and making informed decisions have become an important goal of school mathematics. Integrating big data statistical projects with digital technologies in high school <Artificial Intelligence> mathematics courses has the potential to provide students with a high-impact learning experience that develops these essential skills. This paper proposes a set of guidelines for designing effective big data statistical project-based tasks and evaluates the tasks in artificial intelligence mathematics textbooks against these criteria. The proposed guidelines recommend that projects should: (1) align knowledge and skills with the national school mathematics curriculum; (2) use preprocessed massive datasets; (3) employ data scientists' problem-solving methods; (4) encourage decision-making; (5) leverage technological tools; and (6) promote collaborative learning. The findings indicate that few textbooks fully align with these guidelines, with most failing to incorporate elements corresponding to Guideline 2 in their project tasks. In addition, most tasks in the textbooks overlook or omit data preprocessing, either by using smaller datasets or by using big data without any form of preprocessing, which can lead students to misconceptions about the nature of big data. Furthermore, this paper discusses the mathematical knowledge and skills relevant to artificial intelligence, as well as the potential benefits and pedagogical considerations of integrating technology into big data tasks. This research sheds light on teaching mathematical concepts with machine learning algorithms and on the effective use of technology tools in big data education.
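The preprocessing step that Guideline 2 refers to can be made concrete with a short sketch. The dataset, column names, and cleaning rules below are invented for illustration and are not taken from the textbooks under review:

```python
import csv
import io

# Hypothetical raw data: a tiny stand-in for a large public dataset with
# missing fields, the kind of cleanup students would otherwise face raw.
raw = """city,pm25,temp
Seoul,35,21.5
Busan,,19.0
Incheon,41,
Daegu,28,23.1
"""

def preprocess(text):
    """Drop records with missing fields and convert numeric columns."""
    rows = []
    for rec in csv.DictReader(io.StringIO(text)):
        if not rec["pm25"] or not rec["temp"]:
            continue  # discard incomplete records
        rows.append({"city": rec["city"],
                     "pm25": float(rec["pm25"]),
                     "temp": float(rec["temp"])})
    return rows

clean = preprocess(raw)
```

Handing students the `clean` list rather than `raw` is what distinguishes a preprocessed dataset in the sense of Guideline 2.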

Recent Trends in the Application of Extreme Learning Machines for Online Time Series Data (온라인 시계열 자료를 위한 익스트림 러닝머신 적용의 최근 동향)

  • YeoChang Yoon
    • The Journal of Bigdata
    • /
    • v.8 no.2
    • /
    • pp.15-25
    • /
    • 2023
  • Extreme learning machines (ELMs) are a major analytical method in various prediction fields. By learning the complex patterns of time series data through optimal training, ELMs can predict accurately even when the data are noisy or nonlinear. This study presents recent trends in the machine learning models chiefly studied as tools for analyzing online time series data, along with their application characteristics using existing algorithms. Efficiently learning large-scale online data, which is generated continuously and explosively, requires learning techniques that perform well even as the data's properties evolve. This study therefore gives a comprehensive overview of the latest machine learning models applied to big data in the field of time series prediction, discusses how efficiently the latest online-learning models can learn and use online time series data for prediction, which remains one of the major challenges of machine learning for big data, and proposes alternatives.

Sequential Pattern Mining for Intrusion Detection System with Feature Selection on Big Data

  • Fidalcastro, A;Baburaj, E
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.10
    • /
    • pp.5023-5038
    • /
    • 2017
  • Big data is an emerging technology dealing with data sets whose size is beyond the ability of commonly used software tools to process. In a huge network, a large amount of network information is generated, consisting of both normal and abnormal activity logs in large volumes of multi-dimensional data. An Intrusion Detection System (IDS) is required to monitor the network and to detect malicious nodes and activities in it, but the massive amount of data makes threats and attacks difficult to detect. Sequential pattern mining, an increasingly popular approach because it considers the quantities, profits, and time order of items, can be used to identify patterns of malicious activity. Here we propose a sequential pattern mining algorithm with fuzzy-logic feature selection and fuzzy weighted support for huge volumes of network logs, implemented on Apache Hadoop YARN, which addresses speed and time constraints. Fuzzy-logic feature selection selects important features from the feature set; fuzzy weighted support assigns weights to the inputs and avoids multiple scans. In our simulation, we use attack logs from an NS-2 MANET environment and compare the proposed algorithm with the state-of-the-art sequential pattern mining algorithm SPADE and with a Support Vector Machine in a Hadoop environment.
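As a rough illustration of the weighted-support idea (a simplified stand-in, not the paper's algorithm), one can scale a pattern's ordinary support by the mean weight of its events, where the weights play the role of the fuzzy feature-importance scores; all event names and values below are invented:

```python
# Event weights standing in for fuzzy feature-importance scores.
weights = {"scan": 0.9, "login_fail": 0.8, "login_ok": 0.2, "xfer": 0.6}

# Session logs: ordered event sequences, one per network session.
sessions = [
    ["scan", "login_fail", "login_fail", "xfer"],
    ["login_ok", "xfer"],
    ["scan", "login_fail", "xfer"],
]

def contains(seq, pattern):
    """True if pattern occurs in seq as an ordered subsequence."""
    it = iter(seq)
    return all(ev in it for ev in pattern)  # 'in' advances the iterator

def weighted_support(pattern, sessions, weights):
    """Fraction of sessions containing the pattern, scaled by its mean weight."""
    hits = sum(contains(s, pattern) for s in sessions)
    w = sum(weights[ev] for ev in pattern) / len(pattern)
    return w * hits / len(sessions)

score = weighted_support(["scan", "login_fail", "xfer"], sessions, weights)
```

Patterns built from low-weight events are discounted, so a mining pass can prune them early instead of rescanning the log for every candidate.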

Differentiation of Legal Rules and Individualization of Court Decisions in Criminal, Administrative and Civil Cases: Identification and Assessment Methods

  • Egor, Trofimov;Oleg, Metsker;Georgy, Kopanitsa
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.12
    • /
    • pp.125-131
    • /
    • 2022
  • The diversity and complexity of criminal, administrative, and civil cases resolved by the courts make it difficult to develop universal automated tools for the analysis and evaluation of justice. However, the big data generated in the justice system gives hope that this problem can be resolved. Applying big data makes it possible to identify typical options for resolving cases, to form detailed rules for the individualization of a court decision, and to correlate these rules with the abstract provisions of law. This approach allows us to partly overcome the contradiction between the abstract and the concrete in law, to automate the analysis of justice, and to model e-justice for scientific and practical purposes. The article presents the results of using dimensionality reduction, SHAP values, and p-values to identify, analyze, and evaluate the individualization of justice and the differentiation of legal regulation. Processing and analyzing arrays of court decisions by computational methods makes it possible to identify the typical views of courts on questions of fact and questions of law. This automatically obtained knowledge is promising for the scientific study of justice, the improvement of legal prescriptions, and the probabilistic prediction of a court decision from a known set of facts.

Sentimental Analysis of Twitter Data Using Machine Learning and Deep Learning: Nickel Ore Export Restrictions to Europe Under Jokowi's Administration 2022

  • Sophiana Widiastutie;Dairatul Maarif;Adinda Aulia Hafizha
    • Asia pacific journal of information systems
    • /
    • v.34 no.2
    • /
    • pp.400-420
    • /
    • 2024
  • Nowadays, social media has evolved into a powerful networked ecosystem in which governments and citizens publicly debate economic and political issues. This holds true for the pros and cons of Indonesia's nickel ore export restriction to Europe, which we investigate further in this paper. Using Twitter as a dependable channel for sentiment analysis, we gathered 7,070 tweets and processed them with two approaches, a Support Vector Machine (SVM) and Long Short-Term Memory (LSTM). The model construction stage showed that Bidirectional LSTM performed better than LSTM and the SVM kernels, with an accuracy of 91%. The LSTM came second and the SVM with a radial basis function kernel third, with accuracies of 88% and 83%, respectively. In terms of sentiment, most Indonesians believe that the nickel ore provision will have a positive impact on Indonesia's mining industry, though a small number oppose the policy, fearing a trade dispute that could harm Indonesia's bilateral relations with the EU. This study thus contributes to measuring public opinion with big data tools by identifying Bidirectional LSTM as the optimal model for the dataset.
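For illustration, a linear SVM of the kind used as a baseline here can be trained with the Pegasos subgradient method; the toy vectors below stand in for tweet feature vectors and are not the paper's data or its kernel setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two separable clusters standing in for positive/negative tweet vectors.
n, d = 200, 10
X = np.vstack([rng.normal(1.0, 1.0, (n, d)),    # "positive" class
               rng.normal(-1.0, 1.0, (n, d))])  # "negative" class
y = np.array([1.0] * n + [-1.0] * n)

# Linear SVM via Pegasos: stochastic subgradient descent on the hinge loss.
lam, T = 0.01, 2000
w = np.zeros(d)
for t in range(1, T + 1):
    i = rng.integers(len(y))
    eta = 1.0 / (lam * t)                # decaying step size
    if y[i] * (X[i] @ w) < 1:            # margin violated: hinge subgradient
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:                                # only the regularizer contributes
        w = (1 - eta * lam) * w

acc = float(np.mean(np.sign(X @ w) == y))
```

A sequence model such as the paper's Bidirectional LSTM replaces these fixed feature vectors with learned, order-sensitive representations, which is where its reported accuracy gain comes from.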

Predicting Surgical Complications in Adult Patients Undergoing Anterior Cervical Discectomy and Fusion Using Machine Learning

  • Arvind, Varun;Kim, Jun S.;Oermann, Eric K.;Kaji, Deepak;Cho, Samuel K.
    • Neurospine
    • /
    • v.15 no.4
    • /
    • pp.329-337
    • /
    • 2018
  • Objective: Machine learning algorithms excel at leveraging big data to identify complex patterns that can aid clinical decision-making. The objective of this study is to demonstrate the performance of machine learning models in predicting postoperative complications following anterior cervical discectomy and fusion (ACDF). Methods: Artificial neural network (ANN), logistic regression (LR), support vector machine (SVM), and random forest decision tree (RF) models were trained on a multicenter data set of patients undergoing ACDF to predict surgical complications from readily available patient data. The trained models were then compared with the predictive capability of the American Society of Anesthesiologists (ASA) physical status classification. Results: A total of 20,879 patients were identified as having undergone ACDF. After applying the exclusion criteria, patients were divided into training (14,615) and testing (6,264) data sets. ANN and LR consistently outperformed ASA physical status classification in predicting every complication (p < 0.05). The ANN outperformed LR in predicting venous thromboembolism, wound complication, and mortality (p < 0.05). The SVM and RF models were no better than random chance at predicting any of the postoperative complications (p < 0.05). Conclusion: ANN and LR algorithms outperform ASA physical status classification for predicting individual postoperative complications. Additionally, neural networks have greater sensitivity than LR when predicting mortality and wound complications. With the growing size of medical data, training machine learning models on these large datasets promises to improve risk prognostication, and their capacity for continuous learning makes them excellent tools in complex clinical scenarios.
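The LR model in studies like this amounts to logistic regression fitted to binary outcomes; a minimal sketch by gradient descent, where the synthetic features and outcome labels are purely illustrative and not the paper's multicenter ACDF registry:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for standardized patient features; the outcome follows a
# known logistic model so the fit can be checked against it.
n, d = 1000, 3
X = rng.standard_normal((n, d))
true_w = np.array([1.5, -1.0, 0.5])
p = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p).astype(float)    # 1 = complication occurred

# Logistic regression fitted by batch gradient descent on the log-loss.
w = np.zeros(d)
lr = 0.1
for _ in range(500):
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= lr * (X.T @ (pred - y)) / n     # gradient of mean log-loss

acc = float(np.mean((X @ w > 0) == (y == 1)))
```

Because its coefficients are directly inspectable, LR gives clinicians per-feature risk directions, a transparency the ANN trades away for flexibility.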

IoT data analytics architecture for smart healthcare using RFID and WSN

  • Ogur, Nur Banu;Al-Hubaishi, Mohammed;Ceken, Celal
    • ETRI Journal
    • /
    • v.44 no.1
    • /
    • pp.135-146
    • /
    • 2022
  • The importance of big data analytics has become apparent with the increasing volume of data on the Internet, and the amount of data will increase even more with the widespread use of the Internet of Things (IoT). One of the most important application areas of the IoT is healthcare. This study introduces a new real-time data analytics architecture for an IoT-based smart healthcare system, which consists of a wireless sensor network and radio-frequency identification technology in the vertical domain. The proposed platform also includes high-performance data analytics tools, such as Kafka, Spark, MongoDB, and NodeJS, in the horizontal domain. To investigate the performance of the developed system, the diagnosis of Wolff-Parkinson-White syndrome by logistic regression is discussed. The results show that the proposed IoT data analytics system can process health data in real time with an accuracy rate of 95% and can handle large volumes of data. The developed system also communicates with a Riverbed modeler over the Transmission Control Protocol (TCP) to model any IoT-enabling technology, so the proposed architecture can serve as a time-saving experimental environment for any IoT-based system.

A Dynamic Approach to Extract the Original Semantics and Structure of VM-based Obfuscated Binary Executables (가상 머신 기반으로 난독화된 실행파일의 구조 및 원본의미 추출 동적 방법)

  • Lee, Sungho;Han, Taisook
    • Journal of KIISE
    • /
    • v.41 no.10
    • /
    • pp.859-869
    • /
    • 2014
  • In recent years, obfuscation techniques have been commonly exploited to protect malware, so obfuscated malware has become a big threat. Virtualization-obfuscated malware built on unusual virtual machines is especially hard to analyze, because the virtual machine hides the original program and mixes its semantics with the machine's own. To confront this threat, we propose a framework for analyzing virtualization-obfuscated programs based on dynamic analysis. First, we extract the dynamic execution trace of the virtualization-obfuscated executable. Second, we analyze the trace by translating machine instruction sequences into an intermediate representation and extract the virtual machine architecture by constructing dynamic context flow graphs. Finally, we extract the abstract semantics of the original program using the extracted virtual machine architecture. In this paper, we propose a method to extract information about the original program from executables obfuscated by commercial virtualization-obfuscation tools. We expect that our tool can be used to understand virtualization-obfuscated programs, and that integrating it with other program analysis techniques will allow the semantics of the original programs to be analyzed through their abstract semantics.
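The core idea, recovering the original program's semantics from the executed-handler trace of a virtual machine, can be illustrated with a toy stack VM. Logging which handler the dispatcher selects for each fetched opcode yields a dynamic trace, and mapping handlers back to abstract operations recovers what the program computes. All opcodes and names here are invented, not taken from any commercial obfuscator:

```python
# A tiny "virtualized" program: custom bytecode run by a stack-machine
# interpreter, the same shape a virtualization obfuscator produces.
PUSH, ADD, MUL, HALT = 0, 1, 2, 3

def run(bytecode):
    """Execute bytecode and record the handler fired for each opcode."""
    stack, trace, pc = [], [], 0
    while True:
        op = bytecode[pc]; pc += 1                 # fetch/decode (dispatcher)
        if op == PUSH:
            trace.append(("PUSH", bytecode[pc]))   # handler: push immediate
            stack.append(bytecode[pc]); pc += 1
        elif op == ADD:
            trace.append(("ADD",))                 # handler: binary add
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == MUL:
            trace.append(("MUL",))                 # handler: binary multiply
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == HALT:
            return stack[-1], trace

# Obfuscated program computing (2 + 3) * 4; the trace reveals that structure.
result, trace = run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT])
```

The paper works in the reverse direction, with unknown handlers: it reconstructs the dispatcher and handler semantics from the observed trace rather than from source like this, but the trace-to-semantics mapping is the same shape.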