• Title/Summary/Keyword: automatic processing

Search Results: 2,235

Numerical Approach to Optimize Piercing Punch and Die Shape in Hub Clutch Product (허브클러치 제품의 피어싱 펀치 및 금형 형상 최적화를 위한 수치접근법)

  • Gu, Bon-Joon;Hong, Seok-Moo
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.9 / pp.517-524 / 2019
  • The overdrive hub clutch is attached to a 6-speed automatic transmission to reduce fuel consumption by using the additional power of the engine. This paper proposes a means to minimize the punch load and the roll-over ratio during the piercing process for the overdrive hub clutch product. Die clearance, shear angle, and friction coefficient, which can affect the load and roll-over ratio of the punch during processing, were set as the design variables. A sensitivity analysis was conducted to determine the influence of each design variable on the punch load and roll-over ratio; the shear angle, friction coefficient, and die clearance were all found to be sensitive to both responses. With the punch load and roll-over ratio set as objective functions, equations relating the design variables to each objective function were derived using the Response Surface Method, and the optimal values of the design variables were obtained from these equations. Applying the optimized design in finite element analysis resulted in a 22.14% improvement in the punch load and the roll-over ratio of the material.
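The Response Surface Method step described above can be sketched in a few lines: fit a second-order polynomial to sampled responses, then solve for the stationary point. This is a minimal illustration with two design variables and a synthetic quadratic "punch load"; the variable ranges and response values are invented, not the paper's simulation data.

```python
import numpy as np

# Hypothetical design-of-experiments samples over two design variables:
# die clearance (%) and shear angle (deg). The "load" response here is a
# synthetic quadratic with its minimum at clearance=8, shear=2.
clearance = np.repeat(np.linspace(4, 12, 5), 5)
shear = np.tile(np.linspace(0, 4, 5), 5)
load = 100 + 0.8 * (clearance - 8) ** 2 + 1.5 * (shear - 2) ** 2

# Fit a second-order response surface:
#   load ~ b0 + b1*c + b2*s + b3*c^2 + b4*s^2 + b5*c*s
X = np.column_stack([np.ones_like(clearance), clearance, shear,
                     clearance ** 2, shear ** 2, clearance * shear])
beta, *_ = np.linalg.lstsq(X, load, rcond=None)
b0, b1, b2, b3, b4, b5 = beta

# The fitted surface is quadratic, so its minimum solves grad = 0,
# a 2x2 linear system in the two design variables.
A = np.array([[2 * b3, b5], [b5, 2 * b4]])
opt = np.linalg.solve(A, [-b1, -b2])  # optimal (clearance, shear angle)
```

With more design variables (the paper also includes the friction coefficient) the same least-squares fit simply gains columns, and the stationary-point system grows accordingly.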

Design and Implementation of OpenCV-based Inventory Management System to build Small and Medium Enterprise Smart Factory (중소기업 스마트공장 구축을 위한 OpenCV 기반 재고관리 시스템의 설계 및 구현)

  • Jang, Su-Hwan;Jeong, Jopil
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.1 / pp.161-170 / 2019
  • Small and medium-sized factories that mass-produce a wide variety of products waste manpower and expense on inventory management. In addition, with no way to check inventory status in real time, they suffer economic damage from excess inventory and stock shortages. Many approaches exist for building a real-time data collection environment, but most are unaffordable for small and medium-sized companies, so their smart factories face a difficult reality with few appropriate countermeasures. In this paper, we extend the existing inventory management method by extracting characters from labels carrying barcodes and QR codes, which are widely adopted in current product management, and we evaluate the effect. Technically, the system recognizes and classifies stock labels through image preprocessing with OpenCV, reads label text with the OCR (Optical Character Recognition) function of the Google Vision API, and decodes barcodes through Zbar. We propose a method to manage inventory by real-time image recognition on a Raspberry Pi without using expensive equipment.
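The preprocessing step before OCR and barcode decoding typically binarizes the label image. As a library-free sketch of that step, here is Otsu thresholding reimplemented in NumPy on a synthetic barcode-like image; in practice the abstract's pipeline would use `cv2.threshold` with `THRESH_OTSU`, then hand the result to the Vision API and Zbar.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold of a uint8 grayscale image.

    Picks the cut that maximizes between-class variance, the same
    criterion OpenCV's THRESH_OTSU uses.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], total - cum[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t - 1] / w0
        mu1 = (cum_mean[255] - cum_mean[t - 1]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic "label": dark barcode-like stripes on a bright background.
img = np.full((40, 40), 220, dtype=np.uint8)
img[:, ::4] = 30
t = otsu_threshold(img)
binary = np.where(img > t, 255, 0).astype(np.uint8)
```

A clean black-and-white image like `binary` is what makes the downstream Zbar decode and OCR robust to uneven lighting on warehouse labels.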

A Semi-Automatic Semantic Mark Tagging System for Building Dialogue Corpus (대화 말뭉치 구축을 위한 반자동 의미표지 태깅 시스템)

  • Park, Junhyeok;Lee, Songwook;Lim, Yoonseob;Choi, Jongsuk
    • KIPS Transactions on Software and Data Engineering / v.8 no.5 / pp.213-222 / 2019
  • Determining the meaning of a keyword in a speech dialogue system is an important technology for the future implementation of an intelligent speech dialogue interface. After extracting keywords to grasp the intention of a user's utterance, the intention is determined by using the semantic marks of the keywords. One keyword can have several semantic marks, and we regard the task of attaching the correct semantic mark to these keywords as a word sense disambiguation problem. In this study, about 23% of all keywords in the corpus are manually tagged to build a semantic mark dictionary, a synonym dictionary, and a context vector dictionary, and the remaining 77% are then automatically tagged. The semantic mark of a keyword is determined by calculating context vector similarity against the context vector dictionary. For an unregistered keyword, the semantic mark of the most similar keyword is attached using the synonym dictionary. We compare the performance of the system with a manually constructed training set and a semi-automatically expanded training set by selecting 3 high-frequency and 3 low-frequency keywords in the corpus. In experiments, we obtained an accuracy of 54.4% with the manually constructed training set and 50.0% with the semi-automatically expanded training set.
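The context-vector disambiguation step can be sketched as follows: each candidate semantic mark stores an averaged context vector, and tagging picks the mark whose vector is most cosine-similar to the current utterance context. The dictionary entries and the English "bank" example below are invented for illustration, not the paper's Korean dialogue corpus.

```python
import numpy as np

# Hypothetical context-vector dictionary for the ambiguous keyword "bank":
# toy 4-dim counts over context words (money, river, loan, water).
sense_vectors = {
    "bank/FINANCE":   np.array([5.0, 0.0, 4.0, 0.0]),
    "bank/RIVERSIDE": np.array([0.0, 6.0, 0.0, 3.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def tag_semantic_mark(context_vec):
    # Attach the semantic mark whose stored context vector is most similar.
    return max(sense_vectors, key=lambda m: cosine(sense_vectors[m], context_vec))

# Utterance context mentioning "money" and "loan" words.
utterance_context = np.array([2.0, 0.0, 1.0, 0.0])
mark = tag_semantic_mark(utterance_context)
```

For an unregistered keyword, the same lookup would first map the keyword to its nearest synonym-dictionary entry and then reuse that entry's semantic mark.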

Automatic Classification and Vocabulary Analysis of Political Bias in News Articles by Using Subword Tokenization (부분 단어 토큰화 기법을 이용한 뉴스 기사 정치적 편향성 자동 분류 및 어휘 분석)

  • Cho, Dan Bi;Lee, Hyun Young;Jung, Won Sup;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering / v.10 no.1 / pp.1-8 / 2021
  • News articles in the political field show polarized and biased characteristics, such as conservative and liberal leanings, called political bias. We constructed a keyword-based dataset to classify the bias of news articles. Most embedding research represents a sentence as a sequence of morphemes; in our work, we expect the number of unknown tokens to be reduced if sentences are composed of subwords segmented by a language model. We propose a document embedding model with subword tokenization and apply it to SVM and feedforward neural network classifiers for political bias. Compared with document embedding based on morphological analysis, the subword-based document embedding model showed the highest accuracy at 78.22%, and we confirmed that subword tokenization reduced the number of unknown tokens. Using the best-performing embedding model in our bias classification task, we extracted keywords associated with politicians, and verified the bias of the keywords by their average similarity with the vectors of politicians from each political tendency.
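Why subword tokenization reduces unknown tokens can be shown with a toy greedy longest-match segmenter: an unseen word that is absent from a word-level vocabulary can still be covered by known pieces. Both vocabularies below are invented for illustration, not the paper's trained subword model.

```python
# Toy comparison of word-level vs. subword tokenization on unseen words.
word_vocab = {"정치", "기사", "편향"}
subword_vocab = {"정치", "기사", "편향", "적", "성", "뉴스", "뉴", "스"}

def word_tokens(words):
    # Word-level: any out-of-vocabulary word becomes a single <UNK>.
    return [w if w in word_vocab else "<UNK>" for w in words]

def subword_tokens(word, vocab):
    # Greedy longest-match segmentation; <UNK> only per uncovered character.
    out, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                out.append(word[i:j])
                i = j
                break
        else:
            out.append("<UNK>")
            i += 1
    return out

words = ["정치", "편향성", "뉴스"]
wl = word_tokens(words)
sw = [t for w in words for t in subword_tokens(w, subword_vocab)]
unk_word = wl.count("<UNK>")  # unknowns at word level
unk_sub = sw.count("<UNK>")   # unknowns at subword level
```

Here "편향성" is out of the word vocabulary but segments cleanly into "편향" + "성", so the subword sequence carries usable tokens where the word-level sequence carries only `<UNK>`.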

Collision Risk Assessment by using Hierarchical Clustering Method and Real-time Data (계층 클러스터링과 실시간 데이터를 이용한 충돌위험평가)

  • Vu, Dang-Thai;Jeong, Jae-Yong
    • Journal of the Korean Society of Marine Environment & Safety / v.27 no.4 / pp.483-491 / 2021
  • The identification of regional collision risks in water areas is significant for the safety of navigation. This paper introduces a new method of collision risk assessment that incorporates a distance-based clustering method - hierarchical clustering - and uses real-time data when several vessels are nearby, combining a grouping methodology with a preliminary assessment to classify vessels and establish the basis of collision risk evaluation (called HCAAP processing). Vessels are clustered with the hierarchical program to obtain clusters of encountering vessels, and the preliminary assessment filters out relatively safe vessels. Subsequently, the distance at the closest point of approach (DCPA) and the time to the closest point of approach (TCPA) between encountering vessels within each cluster are calculated and related to the collision risk index (CRI). The mathematical relationship between the CRI and DCPA and TCPA for each cluster of encountering vessels is constructed using a negative exponential function. Operators can easily evaluate the safety of all vessels navigating in the defined area using the calculated CRI. Therefore, this framework can improve the safety and security of vessel traffic and reduce the loss of life and property. To illustrate its effectiveness, an experimental case study was conducted within the coastal waters of Mokpo, Korea. The results demonstrated that the framework was effective and efficient in detecting and ranking collision risk indexes between encountering vessels within each cluster, allowing automatic risk prioritization of encountering vessels for further investigation by operators.
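The DCPA/TCPA computation and the negative-exponential CRI mapping can be sketched directly from relative position and velocity. The weights in `cri` below are illustrative placeholders, not the paper's fitted coefficients.

```python
import numpy as np

def dcpa_tcpa(p_own, v_own, p_tgt, v_tgt):
    """DCPA and TCPA for two vessels.

    Positions in nautical miles, velocities in knots (2-D vectors);
    TCPA is clamped at 0 for vessels already moving apart.
    """
    dp = np.asarray(p_tgt, float) - np.asarray(p_own, float)
    dv = np.asarray(v_tgt, float) - np.asarray(v_own, float)
    dv2 = dv @ dv
    tcpa = 0.0 if dv2 == 0 else max(0.0, -(dp @ dv) / dv2)
    dcpa = float(np.linalg.norm(dp + dv * tcpa))
    return dcpa, tcpa

def cri(dcpa, tcpa, w1=1.0, w2=0.2):
    # Negative-exponential mapping as described in the abstract;
    # w1, w2 are illustrative weights, not the fitted values.
    return float(np.exp(-(w1 * dcpa + w2 * tcpa)))

# Head-on encounter vs. a more distant crossing vessel.
d1, t1 = dcpa_tcpa([0, 0], [10, 0], [5, 0.1], [-10, 0])
d2, t2 = dcpa_tcpa([0, 0], [10, 0], [8, 6], [0, -5])
r1, r2 = cri(d1, t1), cri(d2, t2)
```

Small DCPA and TCPA drive the exponent toward zero, so the head-on pair scores a CRI near 1 while the crossing pair decays toward 0, which is what lets operators rank encounters within each cluster.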

Deep Learning Algorithm and Prediction Model Associated with Data Transmission of User-Participating Wearable Devices (사용자 참여형 웨어러블 디바이스 데이터 전송 연계 및 딥러닝 대사증후군 예측 모델)

  • Lee, Hyunsik;Lee, Woongjae;Jeong, Taikyeong
    • Journal of Korea Society of Industrial Information Systems / v.25 no.6 / pp.33-45 / 2020
  • This paper examines how the latest technologies predict individual diseases in the actual medical environment, at a time when various types of wearable devices are rapidly increasing in the healthcare domain. Clinical data, genetic data, and life-log data are merged, collected, processed, and transmitted through a user-participating wearable device, and the learning model is connected to a feedback model in a deep neural network environment. In a field setting that has undergone medical-IT clinical trial procedures, the effect of a specific gene associated with metabolic syndrome on the disease is measured, and clinical information and life-log data are merged to process heterogeneous data. That is, we demonstrate the objective suitability and certainty of a deep neural network over heterogeneous data, and evaluate its performance under noise in an actual deep learning environment. For the autoencoder, we show that the accuracy and predicted values, measured every 1,000 epochs, change approximately linearly as the variable increases.
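As a minimal sketch of the autoencoder idea mentioned above, here is a linear autoencoder trained by gradient descent on synthetic feature vectors; the data, dimensions, and learning rate are all invented, not the paper's merged clinical/life-log dataset or network.

```python
import numpy as np

# Synthetic "merged" feature matrix: 8 features, the last 4 linear
# combinations of the first 4, so an 8 -> 3 code can compress it well.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 8))
X[:, 4:] = X[:, :4] @ rng.normal(size=(4, 4))

W_enc = rng.normal(scale=0.1, size=(8, 3))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(3, 8))  # decoder weights
lr = 0.01
losses = []
for epoch in range(300):
    Z = X @ W_enc            # encode to the 3-dim bottleneck
    X_hat = Z @ W_dec        # decode back to 8 dims
    err = X_hat - X
    losses.append(float((err ** 2).mean()))
    # Gradients of the mean-squared reconstruction error.
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
```

Tracking `losses` over training is the same kind of per-epoch monitoring the abstract describes at 1,000-epoch intervals, just at toy scale.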

A Study on the Current State of the Library's AI Service and the Service Provision Plan (도서관의 인공지능(AI) 서비스 현황 및 서비스 제공 방안에 관한 연구)

  • Kwak, Woojung;Noh, Younghee
    • Journal of Korean Library and Information Science Society / v.52 no.1 / pp.155-178 / 2021
  • In the era of the 4th industrial revolution, public libraries need a strategy for promoting intelligent library services in order to respond actively to changes in the external environment such as artificial intelligence. In this study, based on the concept of artificial intelligence and an analysis of domestic and foreign AI-related trends, policies, and cases, we propose future directions for introducing and developing artificial intelligence services in the library. Currently, libraries operate reference information services that provide answers automatically by introducing artificial intelligence technologies such as deep learning and natural language processing, and have developed big data-based AI book recommendation and automatic book inspection systems to increase operational efficiency and provide customized services for users. In companies and industries, both domestic and overseas, technologies based on artificial intelligence, such as autonomous driving and personal customization, are being developed and serviced, providing optimal results by self-learning from information using deep learning. Accordingly, libraries should promote service development that uses artificial intelligence to recommend personalized books based on users' usage records, recommend reading and culture programs, and, for book delivery services, introduce real-time delivery through transport methods such as autonomous drones and cars.

Performance Evaluation of KOMPSAT-3 Satellite DSM in Overseas Testbed Area (해외 테스트베드 지역 아리랑 위성 3호 DSM 성능평가)

  • Oh, Kwan-Young;Hwang, Jeong-In;Yoo, Woo-Sun;Lee, Kwang-Jae
    • Korean Journal of Remote Sensing / v.36 no.6_2 / pp.1615-1627 / 2020
  • The purpose of this study is to compare and analyze the performance of a KOMPSAT-3 Digital Surface Model (DSM) generated over an overseas testbed area. To that end, we collected a KOMPSAT-3 in-track stereo image taken over San Francisco, U.S. The stereo geometry elements of the image (B/H ratio, convergence angle, etc.) were all found to be in the stable range. By applying precise sensor modeling using Ground Control Points (GCPs) and an automatic DSM generation technique, a DSM with 1 m resolution was produced. The reference materials for evaluation and calibration are ground points with accuracy within 0.01 m from Compass Data Inc. and the 1 m resolution Elevation 1 DSM produced by Airbus. The precise sensor modeling accuracy of KOMPSAT-3 was within 0.5 m (RMSE) in the horizontal and vertical directions. When a difference map between the generated DSM and the reference DSM was computed, the mean and standard deviation were 0.61 m and 5.25 m respectively, but some areas showed large differences of more than 100 m. These areas appeared mainly in occluded zones where high-rise buildings were concentrated. If KOMPSAT-3 tri-stereo images are used and various post-processing techniques are developed, it should be possible to produce a DSM of further improved quality.
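The difference-map evaluation above is simple to reproduce on synthetic grids: subtract the reference DSM from the generated DSM and summarize the residuals. The arrays below are invented 1 m grids with a 0.6 m bias and 5 m noise (echoing the scale of the reported statistics), not KOMPSAT-3 data.

```python
import numpy as np

# Synthetic 100x100 elevation grids standing in for 1 m DSM tiles.
rng = np.random.default_rng(1)
ref_dsm = rng.uniform(0, 50, size=(100, 100))                 # reference heights (m)
gen_dsm = ref_dsm + rng.normal(0.6, 5.0, size=(100, 100))     # generated DSM: bias + noise

# Difference map and its summary statistics.
diff = gen_dsm - ref_dsm
mean_err = float(diff.mean())
std_err = float(diff.std())

# Flag cells with extreme residuals, e.g. occlusions around high-rise blocks.
outliers = np.abs(diff) > 3 * std_err
```

In the study's setting the flagged cells would be inspected separately, since gross residuals concentrate in occluded urban areas rather than being spread uniformly.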

Generation and Verification of Synthetic Wind Data With Seasonal Fluctuation Using Hidden Markov Model (은닉 마르코프 모델을 이용하여 계절의 변동을 동반한 인공 바람자료 생성 및 검증)

  • Park, Seok-Young;Ryu, Ki-Wahn
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.49 no.12 / pp.963-969 / 2021
  • Wind data measured at local meteorological masts are used to evaluate the wind speed distribution and energy production at a site specified for a wind farm. However, such data often contain missing values or are insufficient in measurement height or record length, making it difficult to perform wind turbine control and performance simulation. Long-term continuous wind data are therefore very important for assessing the annual energy production and the capacity factor of wind turbines or wind farms. In addition, where seasonal influences are distinct, as on the Korean Peninsula, wind data with seasonal characteristics should be considered. This study presents methodologies for generating synthetic wind that account for fluctuations in both wind speed and direction using the hidden Markov model, a statistical method. The wind data for statistical processing were measured at Maldo island in the Kokunnsan-gundo, Jeonbuk Province, using the Automatic Weather System (AWS) of the Korea Meteorological Administration. The synthetic wind generated using the hidden Markov model is validated by comparing statistical variables, wind energy density, seasonal mean speed, and prevailing wind direction with the measured data.
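The generation side of such a model can be sketched with a small Markov chain over wind regimes whose hidden state drives the emitted wind speed. The two-state transition matrix and per-state speed parameters below are illustrative placeholders, not values fitted to the Maldo AWS measurements.

```python
import numpy as np

rng = np.random.default_rng(7)

# State 0 = calm regime, state 1 = windy regime.
trans = np.array([[0.95, 0.05],    # calm  -> calm / windy
                  [0.10, 0.90]])   # windy -> calm / windy
speed_params = [(4.0, 1.0), (10.0, 2.5)]  # (mean, std) wind speed per state, m/s

def sample_wind(n_hours, s0=0):
    """Sample an hourly synthetic wind-speed series from the hidden chain."""
    s, speeds = s0, []
    for _ in range(n_hours):
        s = rng.choice(2, p=trans[s])          # hidden regime transition
        mu, sigma = speed_params[s]
        speeds.append(max(0.0, rng.normal(mu, sigma)))  # emitted speed, clipped at 0
    return np.array(speeds)

wind = sample_wind(24 * 365)  # one synthetic year
```

Seasonality can be layered on by switching between season-specific transition matrices, and validation then compares the synthetic series' mean speed and energy density against the measurements, as the abstract describes.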

Automatic Generation of Bibliographic Metadata with Reference Information for Academic Journals (학술논문 내에서 참고문헌 정보가 포함된 서지 메타데이터 자동 생성 연구)

  • Jeong, Seonki;Shin, Hyeonho;Ji, Seon-Yeong;Choi, Sungphil
    • Journal of the Korean Society for Library and Information Science / v.56 no.3 / pp.241-264 / 2022
  • Bibliographic metadata can help researchers effectively utilize the essential publications they need and grasp academic trends in their own fields. Manual creation of such metadata is costly and time-consuming, and it is nontrivial to automate metadata construction with rule-based methods because article forms and styles vary widely across publishers and academic societies. Therefore, this study proposes a two-step extraction process based on rules and deep neural networks for generating the bibliographic metadata of scientific articles. The extraction target areas in articles are identified by a deep neural network-based model, and the details in those areas are then analyzed and subdivided into the relevant metadata elements. The proposed model also includes a reference-summary generation model, which separates the end of the body text from the starting point of the references, extracts individual references with an essential rule set, and identifies the bibliographic items in each reference with a deep neural network. In addition, to confirm the feasibility of a model that generates the bibliographic information of academic papers without pre- and post-processing, we conducted an in-depth comparative experiment with various settings and configurations. In the experiments, the proposed method showed higher performance.
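The rule-based half of the reference-summary step can be sketched with regular expressions: split the reference tail into individual entries, then pull out basic items from each. This assumes one common numbered `[n]` style and a quoted-title convention; the sample text and fields are invented, and the paper's actual item identification uses a neural model on top of such rules.

```python
import re

# Invented reference tail in a numbered "[n]" style.
tail = (
    '[1] Kim, S. and Lee, J., "Metadata extraction from PDF," '
    'Journal of Information Science, vol. 44, pp. 10-21, 2018. '
    '[2] Park, H., "Neural reference parsing," Proc. of XYZ, pp. 3-9, 2020.'
)

# Rule 1: split on the "[n]" markers to isolate individual references.
refs = [r.strip() for r in re.split(r"\[\d+\]\s*", tail) if r.strip()]

def parse_ref(ref):
    # Rule 2: pick out a quoted title and a 4-digit year per reference.
    year = re.search(r"(?:19|20)\d{2}", ref)
    title = re.search(r'"([^"]+)"', ref)
    return {
        "title": title.group(1).rstrip(",") if title else None,
        "year": year.group(0) if year else None,
    }

records = [parse_ref(r) for r in refs]
```

Styles without numbering or quoted titles defeat these two rules immediately, which is the motivation for handing the per-item identification to a trained model in the second step.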