• Title/Summary/Keyword: 오토시스템 (auto system)

Search Result 331

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.43-62 / 2019
  • At one time, anomaly detection was dominated by methods that decided whether an abnormality existed based on statistics derived from the data. This was feasible because data used to be low-dimensional and simple, so classical statistical methods worked effectively. However, as data characteristics have become more complex in the era of big data, it has become difficult to analyze and predict the data generated across industry accurately with conventional methods. Supervised learning algorithms such as SVM and decision trees have therefore been adopted. However, supervised models can predict test data accurately only when the classes are balanced, whereas most data generated in industry is highly class-imbalanced, so the predictions of a supervised model are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a convolutional model that performs anomaly detection on medical images. In contrast, anomaly detection for sequence data using generative adversarial networks has far fewer published studies than image data. Li et al. (2018) proposed a model based on LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it was not applied to categorical sequence data, nor did it use the feature-matching method of Salimans et al. (2016). This suggests that much remains to be studied in the anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator is a 2-stacked LSTM with a 32-dimensional and a 64-dimensional hidden layer, and the discriminator is an LSTM with a 64-dimensional hidden layer. Whereas existing work on sequence anomaly detection derives anomaly scores from the entropy of the probabilities assigned to the real data, this paper, as mentioned above, derives anomaly scores using the feature-matching technique. In addition, the latent-variable optimization process was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it learns the data distribution from real categorical sequence data, it is not dominated by a single normal pattern, whereas the autoencoder is. In the robustness test, the autoencoder achieved 92% accuracy and the generative adversarial network 96%; in terms of sensitivity, the autoencoder reached 40% and the generative adversarial network 51%. Experiments were also conducted to measure how much performance changes with the structure used to optimize the latent variables; as a result, sensitivity improved by about 1%. These results offer a new perspective on latent-variable optimization, which has received relatively little attention.
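
To make the feature-matching anomaly score concrete, below is a minimal PyTorch sketch that follows the layer sizes stated above (a 2-stacked generator LSTM with 32- and 64-dimensional hidden layers, a 64-dimensional discriminator LSTM). It is an illustration, not the authors' code: the data dimensions, the untrained networks, and the plain gradient search over the latent code (instead of the paper's LSTM-based latent optimizer) are assumptions.

```python
# Minimal LSTM-GAN sketch with a feature-matching anomaly score.
# In practice G and D are first trained adversarially; they are left untrained here.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=16, out_dim=8):
        super().__init__()
        self.lstm1 = nn.LSTM(latent_dim, 32, batch_first=True)  # 32-dim hidden layer
        self.lstm2 = nn.LSTM(32, 64, batch_first=True)           # 64-dim hidden layer
        self.proj = nn.Linear(64, out_dim)                       # per-step action scores

    def forward(self, z):                    # z: (batch, seq_len, latent_dim)
        h, _ = self.lstm1(z)
        h, _ = self.lstm2(h)
        return self.proj(h)                  # (batch, seq_len, out_dim)

class Discriminator(nn.Module):
    def __init__(self, in_dim=8):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, 64, batch_first=True)        # 64-dim hidden layer
        self.head = nn.Linear(64, 1)

    def forward(self, x):                    # x: (batch, seq_len, in_dim)
        h, _ = self.lstm(x)
        feat = h[:, -1, :]                   # last hidden state used as matching feature
        return self.head(feat), feat

def anomaly_score(x, G, D, latent_dim=16, steps=100, lr=0.01):
    """Feature-matching score: optimize a latent code so G(z) mimics x in the
    discriminator's feature space; the residual distance is the anomaly score."""
    z = torch.randn(x.size(0), x.size(1), latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    _, feat_real = D(x)
    for _ in range(steps):
        opt.zero_grad()
        _, feat_fake = D(G(z))
        loss = torch.mean((feat_real.detach() - feat_fake) ** 2)
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, feat_fake = D(G(z))
        return torch.mean((feat_real - feat_fake) ** 2, dim=1)   # per-sequence score

# Example: score a batch of 4 one-hot encoded action sequences of length 20.
G, D = Generator(), Discriminator()
x = torch.eye(8)[torch.randint(0, 8, (4, 20))]                   # (4, 20, 8) one-hot
print(anomaly_score(x, G, D))
```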

Design of UHF Band Microstrip Antenna for Recovering Resonant Frequency and Return Loss Automatically (UHF 대역 공진 주파수 및 반사 손실 오토튜닝 마이크로스트립 안테나 설계)

  • Kim, Young-Ro;Kim, Yong-Hyu;Hur, Myung-Joon;Woo, Jong-Myung
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.24 no.3 / pp.219-232 / 2013
  • This paper presents a microstrip antenna that automatically recovers its resonant frequency and impedance when they are shifted by the approach of other objects such as a hand. It can be used for telemetry sensor applications in the ultra-high-frequency (UHF) industrial, scientific, and medical (ISM) band. The key element is a frequency-reconfigurable antenna that can be controlled electrically. The antenna is miniaturized by loading folded plates at both radiating edges, and varactor diodes are installed between the radiating edges and the ground plane to control the resonant frequency by adjusting the DC bias asymmetrically. Using this voltage-controlled antenna and a microcontroller with peripheral circuits that read the reflected signal level, an antenna that automatically recovers its resonant frequency and impedance is designed and fabricated. The designed frequency auto-recovering antenna is confirmed to recover within a few seconds when its resonant frequency and impedance are shifted by the approach of objects such as a hand, a metal plate, or a dielectric.
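
As a rough illustration of the auto-recovery behavior described above, the sketch below simulates a controller that sweeps the varactor DC bias, reads the reflected (return-loss) level, and keeps the bias that gives the best match. The quadratic "antenna response" and all numeric values are stand-ins rather than measured characteristics of the paper's antenna; on real hardware the reading would come from a detector or coupler via the microcontroller's ADC.

```python
# Toy auto-recovery loop: sweep the varactor bias and keep the best-matched point.

def return_loss_db(bias_v, optimum_v=2.7):
    """Toy antenna model: reflection is lowest when the bias hits the (shifted) optimum."""
    return -25.0 + 40.0 * (bias_v - optimum_v) ** 2   # dB; more negative = better match

def recover(bias_min=0.0, bias_max=5.0, step=0.05):
    """Coarse sweep over the allowed bias range; return the best bias found."""
    best_bias, best_rl = bias_min, float("inf")
    v = bias_min
    while v <= bias_max:
        rl = return_loss_db(v)          # on hardware: read the detector/coupler output
        if rl < best_rl:
            best_bias, best_rl = v, rl
        v += step
    return best_bias, best_rl

if __name__ == "__main__":
    bias, rl = recover()
    print(f"recovered bias = {bias:.2f} V, return loss = {rl:.1f} dB")
```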

Timing Verification of AUTOSAR-compliant Diesel Engine Management System Using Measurement-based Worst-case Execution Time Analysis (측정기반 최악실행시간 분석 기법을 이용한 AUTOSAR 호환 승용디젤엔진제어기의 실시간 성능 검증에 관한 연구)

  • Park, Inseok;Kang, Eunhwan;Chung, Jaesung;Sohn, Jeongwon;Sunwoo, Myoungho;Lee, Kangseok;Lee, Wootaik;Youn, Jeamyoung;Won, Donghoon
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.5 / pp.91-101 / 2014
  • In this study, we present a timing verification method for a passenger-car diesel engine management system (EMS) using measurement-based worst-case execution time (WCET) analysis. To cope with the AUTOSAR-compliant software architecture, a development process model is proposed. In the process model, a runnable is regarded as a test unit, and its temporal behavior (i.e., maximum observed execution time, MOET) is obtained along with on-target functionality evaluation results during online unit testing. Furthermore, a cost-effective framework for online unit testing is proposed. Because the runtime environment layer and the standard calibration environment are utilized to implement the test interface, additional resource consumption on the target processor is minimized. Using the proposed development process model and unit test framework, the MOETs of 86 runnables of the diesel EMS are obtained with 213 unit test cases. Using the obtained MOETs of the runnables, the WCETs of the tasks are estimated and schedulability is evaluated. From the schedulability analysis results, problems in the initially designed schedule table are identified and fixed by redesigning the runnable mapping and task offsets. The proposed method is validated through various test scenarios.
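
The step from measured MOETs to a schedulability verdict can be sketched as follows. This is a generic illustration under assumed numbers: each task's WCET is taken as the sum of the MOETs of its mapped runnables times a safety margin, and schedulability is checked with classic fixed-priority response-time analysis, which is not necessarily the paper's exact estimation procedure or AUTOSAR schedule-table analysis.

```python
# Estimate task WCETs from runnable MOETs and run a fixed-priority response-time check.
import math

def task_wcet(runnable_moets_us, margin=1.2):
    """WCET estimate for one task: sum of its runnables' MOETs times a safety margin."""
    return margin * sum(runnable_moets_us)

def response_times(tasks):
    """tasks: list of (wcet, period) in priority order (highest first).
    Returns per-task worst-case response times, or None if a deadline is missed."""
    results = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            r_next = c_i + interference
            if r_next > t_i:            # deadline (= period) missed
                return None
            if r_next == r:
                break
            r = r_next
        results.append(r)
    return results

# Example with three periodic tasks built from hypothetical runnable MOETs (microseconds).
tasks = [
    (task_wcet([120, 80, 50]), 1000),      # 1 ms task
    (task_wcet([300, 210]), 5000),         # 5 ms task
    (task_wcet([900, 700, 400]), 10000),   # 10 ms task
]
rts = response_times(tasks)
print("schedulable" if rts else "not schedulable", rts)
```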

A Study on the Application Method of 3-Dimensional Modeling Data (3차원 모델링 데이터의 활용방법에 관한 연구)

  • 김현성;김낙권
    • Archives of design research / v.14 / pp.109-119 / 1996
  • One of the most important factors in the work environment of industrial design is a design process that can systematically synthesize information from related fields. Because computer technology is developing rapidly, industrial designers should not only be able to cope with new technologies but also take part in developing an integrated process system based on them. Recently in the industrial design field, designers often use computer-aided 3D modeling techniques in the development process of industrial design products, especially at the visualization stage of design development. In this paper, we studied the workstation modeling process to understand computer-aided 3D modeling and presented methods for transferring the 3D modeling data of a workstation (SGI) to AutoCAD data on a personal computer, which is generally used as a drawing tool for mechanism parts. Through the development of an electronic taximeter as a practical case study, we examined whether the 3D modeling data transferred from the industrial design part can be used directly as mechanism data. For this, we transferred the 3D modeling data of an electronic taximeter created on a workstation (SGI) to the AutoCAD data of a personal computer and checked the usefulness of the data transferred from the industrial design part to the mechanism part. Through this data transfer process, we aimed to find basic principles that can be rationally applied to the mechanism or production part.
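
As an illustration of what "AutoCAD data" for transferred surface geometry can look like, the sketch below writes one triangulated face as a minimal, entities-only DXF (R12-style) file that AutoCAD can import. The vertex values and file name are made up, and this is only a generic example of the exchange format, not the conversion path actually used in the study.

```python
# Hypothetical example: write one 3D face as a minimal, entities-only DXF file.
# Group codes: 10/20/30 = first corner (x, y, z), 11/21/31 = second,
# 12/22/32 = third, 13/23/33 = fourth. Layer "0" is the default layer.
def write_3dface_dxf(path, quad):
    lines = ["0", "SECTION", "2", "ENTITIES", "0", "3DFACE", "8", "0"]
    codes = [("10", "20", "30"), ("11", "21", "31"), ("12", "22", "32"), ("13", "23", "33")]
    for (cx, cy, cz), (x, y, z) in zip(codes, quad):
        lines += [cx, f"{x:.4f}", cy, f"{y:.4f}", cz, f"{z:.4f}"]
    lines += ["0", "ENDSEC", "0", "EOF"]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# A triangle exported as a 3DFACE (the fourth corner repeats the third).
write_3dface_dxf("taximeter_face.dxf", [(0, 0, 0), (50, 0, 0), (50, 30, 10), (50, 30, 10)])
```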


Fast Generation of Intermediate View Image Using GPGPU-Based Disparity Increment Method (GPGPU 기반의 변위증분 방법을 이용한 중간시점 고속 생성)

  • Koo, Ja-Myung;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.8 / pp.1908-1918 / 2013
  • Free-viewpoint auto-stereoscopic video service is a next-generation broadcasting system that offers three-dimensional video, so images from various viewpoints are needed. This paper proposes a method that parallelizes the algorithm for fast generation of arbitrary intermediate view-point images and accelerates it on a general-purpose graphics processing unit (GPGPU) with the Compute Unified Device Architecture (CUDA). It uses parallelized stereo matching between the leftmost and rightmost depth images to obtain disparity information, and from it calculates a disparity increment per depth value. The disparity increment is used to find, for each depth value in the given images, the corresponding location in the intermediate view-point image. Disocclusions are then eliminated by complementing the two warped images with each other, and the remaining holes are filled with a hole-filling method to obtain the final intermediate view-point image. The proposed method was implemented and applied to several test sequences. The results show that the generated intermediate view-point images achieve an average PSNR of 30.47 dB and that a Full HD intermediate view-point image is generated at about 38 frames per second.
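
The core warping idea can be sketched on the CPU as follows: each pixel is shifted by alpha times the disparity looked up for its depth value, and simple row-wise hole filling patches what remains. This NumPy version only illustrates the disparity-increment idea under assumed data shapes; the paper's CUDA kernels, the left/right complementation of disocclusions, and the blending are not reproduced.

```python
# Intermediate-view synthesis via a per-depth disparity increment (toy CPU version).
import numpy as np

def synthesize_intermediate(left, depth, disp_per_depth, alpha=0.5):
    """left: (H, W) gray image, depth: (H, W) uint8 depth map,
    disp_per_depth: (256,) disparity in pixels for each depth value,
    alpha: position of the virtual view between left (0) and right (1)."""
    h, w = left.shape
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            shift = int(round(alpha * disp_per_depth[depth[y, x]]))
            xt = x - shift                       # warp toward the virtual camera
            if 0 <= xt < w:
                out[y, xt] = left[y, x]
                filled[y, xt] = True
        last = 0                                 # simple hole filling along the row
        for x in range(w):
            if filled[y, x]:
                last = out[y, x]
            else:
                out[y, x] = last
    return out

# Toy example: constant image with a two-level depth map and a made-up disparity table.
left = np.full((4, 16), 128, dtype=np.uint8)
depth = np.zeros((4, 16), dtype=np.uint8); depth[:, 8:] = 255
disp_per_depth = np.linspace(0, 8, 256)
print(synthesize_intermediate(left, depth, disp_per_depth, alpha=0.5))
```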

Trend Analyses of B777 FLCH Usage Beyond FAF Events (B777 항공기 Final Approach Fix(FAF) 이후 Flight Level Change(FLCH) 사용 이벤트 경향성 분석)

  • Chung, Seung Sup;Kim, Hyeon Deok
    • Journal of Advanced Navigation Technology / v.25 no.3 / pp.248-255 / 2021
  • The main causes of the July 2013 OZ 214 accident were a poorly performed approach and the failure to recognize that the autothrottle was in the HOLD position, in which automated speed control is not provided. The pilots' late decision to go around was also a critical factor leading to the accident. The B777 POM restricts the use of FLCH mode beyond the FAF. This research analyzed two years of QAR data from an airline's B777 fleet, in which 44 such cases were found. In many cases, the FLCH mode was used for rapid descent from a higher-than-normal altitude. In addition, during the base turn, continued use of FLCH mode was observed even when the flight path was below the glide path. Airports with an elevation above 500 ft MSL had a higher rate of occurrence. In this research, proper descent planning and vertical path monitoring, and adherence to the limitations set in the manuals and the stabilized approach criteria, are re-emphasized as mitigations to reduce event occurrences.
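
A simple way to flag such events from decoded QAR data is sketched below; the frame layout and column names (flight_id, dist_to_faf_nm, pitch_mode) are hypothetical and not taken from the study or any specific QAR decoding format.

```python
# Flag flights where the autopilot pitch mode is FLCH after passing the FAF.
import pandas as pd

qar = pd.DataFrame({
    "flight_id":      [1, 1, 1, 2, 2, 2],
    "dist_to_faf_nm": [3.0, 1.0, -0.5, 4.0, 2.0, -1.0],   # negative = past the FAF
    "pitch_mode":     ["VNAV", "FLCH", "FLCH", "VNAV", "VNAV", "GS"],
})

beyond_faf = qar[(qar["dist_to_faf_nm"] < 0) & (qar["pitch_mode"] == "FLCH")]
event_flights = beyond_faf["flight_id"].unique()
print("FLCH-beyond-FAF events in flights:", list(event_flights))   # -> [1]
```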

Toward understanding learning patterns in an open online learning platform using process mining (프로세스 마이닝을 활용한 온라인 교육 오픈 플랫폼 내 학습 패턴 분석 방법 개발)

  • Taeyoung Kim;Hyomin Kim;Minsu Cho
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.285-301 / 2023
  • Due to the increasing demand for and importance of non-face-to-face education, open online learning platforms are attracting interest both domestically and internationally. These platforms exhibit different characteristics from online courses offered by universities and other educational institutions. In particular, learners on these platforms have greater autonomy, and tools to assist their learning need to be developed. Researchers have long attempted to use process mining to understand actual study behaviors and derive learning patterns, but these attempts have rarely addressed open online learning platforms. Moreover, existing research has focused primarily on the process model perspective, including process model discovery, and lacks methods for the process pattern and instance perspectives. In this study, we propose a method to identify learning patterns within an open online learning platform using process mining techniques. To achieve this, we suggest three viewpoints, i.e., model-level, variant-level, and instance-level, to comprehend the learning patterns, and employ various techniques such as process discovery, conformance checking, autoencoder-based clustering, and predictive approaches. To validate the method, we collected learning logs of machine learning-related courses on a domestic open education platform. The results unveiled a spaghetti-like process model that could be differentiated into one standard learning pattern and three abnormal patterns. Furthermore, the derived pattern classification model achieved a high accuracy of 0.86 when predicting the pattern of an instance from the initial 30% of its entire flow. This study contributes a systematic way to analyze learners' patterns using process mining.
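
The variant-level view can be illustrated as follows: identical activity sequences are grouped into variants, each variant is encoded as an activity-frequency vector, and the variants are clustered. The toy event log, the frequency encoding, and the use of k-means instead of the study's autoencoder-based clustering are all simplifying assumptions.

```python
# Group traces into variants, encode them as activity-frequency vectors, and cluster.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

traces = [                                  # one activity sequence per learner (toy data)
    ["enroll", "watch", "watch", "quiz"],
    ["enroll", "watch", "quiz"],
    ["enroll", "watch", "watch", "quiz"],
    ["enroll", "quiz"],                     # skips the lectures
]

variants = sorted({tuple(t) for t in traces})
activities = sorted({a for t in traces for a in t})

def encode(variant):
    counts = Counter(variant)
    return [counts[a] for a in activities]  # activity-frequency vector

X = np.array([encode(v) for v in variants])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for variant, cluster in zip(variants, labels):
    print(cluster, " -> ".join(variant))
```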

Comparative Analysis of Self-supervised Deephashing Models for Efficient Image Retrieval System (효율적인 이미지 검색 시스템을 위한 자기 감독 딥해싱 모델의 비교 분석)

  • Kim Soo In;Jeon Young Jin;Lee Sang Bum;Kim Won Gyum
    • KIPS Transactions on Software and Data Engineering / v.12 no.12 / pp.519-524 / 2023
  • In hashing-based image retrieval, the hash code of a manipulated image differs from that of the original image, making it difficult to retrieve the same image. This paper proposes and evaluates a self-supervised deep-hashing model that generates perceptual hash codes from feature information such as the texture, shape, and color of images. The comparison models are autoencoder-based variational inference models whose encoders are designed with fully connected layers, a convolutional neural network, and transformer modules, respectively. The proposed model is a variational inference model that includes a SimAM module for extracting geometric patterns and positional relationships within images. The SimAM module can learn latent vectors that highlight objects or local regions through an energy function based on the activation values of neurons and their surrounding neurons. The proposed method is a representation learning model that generates low-dimensional latent vectors from high-dimensional input images, and the latent vectors are binarized into distinguishable hash codes. Experimental results on public datasets such as CIFAR-10, ImageNet, and NUS-WIDE show that the proposed model is superior to the comparison models and performs on par with supervised learning-based deep-hashing models. The proposed model can be used in application systems that require a low-dimensional representation of images, such as image search or copyright image determination.
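
Two of the components named above can be sketched briefly: a SimAM attention block (following the published parameter-free, energy-based formulation) and the binarization of a latent vector into a hash code. This is an illustrative PyTorch fragment, not the authors' model; the encoder that produces the latent vector is omitted and replaced by simple pooling.

```python
# SimAM-style attention plus hash-code binarization (illustrative only).
import torch
import torch.nn as nn

class SimAM(nn.Module):
    def __init__(self, lam=1e-4):
        super().__init__()
        self.lam = lam

    def forward(self, x):                        # x: (B, C, H, W)
        n = x.shape[2] * x.shape[3] - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)) ** 2
        v = d.sum(dim=(2, 3), keepdim=True) / n   # per-channel variance estimate
        e_inv = d / (4 * (v + self.lam)) + 0.5    # inverse energy: high for salient units
        return x * torch.sigmoid(e_inv)           # reweight activations

def to_hash_code(latent):
    """Binarize a latent vector into a {0, 1} hash code by thresholding at zero."""
    return (latent > 0).to(torch.uint8)

# Toy usage: attend over a random feature map, pool it, binarize a 16-bit code.
feat = SimAM()(torch.randn(2, 16, 8, 8))
latent = feat.mean(dim=(2, 3))                    # stand-in for the encoder's latent vector
print(to_hash_code(latent))
```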

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the wealth of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the clients' business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for processing a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored unstructured log data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to handle with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, and can flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the blocks of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases make it hard to expand nodes when the amount of data increases rapidly and the stored data must be distributed across nodes. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. The data models of NoSQL are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, in terms of log data insertion and query performance, demonstrates the superiority of the proposed system. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
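
To make the MongoDB side concrete, the pymongo sketch below turns on hashed sharding for a collection and inserts schema-free log documents, then aggregates counts per log type in the way a log graph generator module might. The database, collection, and field names are invented for illustration, and the sharding commands assume a connection to the mongos router of a sharded cluster rather than a standalone server.

```python
# Auto-sharded, schema-free log storage with pymongo (illustrative names only).
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")     # assumption: a mongos endpoint
db = client["bank_logs"]

# Auto-sharding: distribute the collection across shards by hashed log type.
client.admin.command("enableSharding", "bank_logs")
client.admin.command("shardCollection", "bank_logs.raw_logs",
                     key={"type": "hashed"})

# Schema-free inserts: different log types can carry different fields.
db["raw_logs"].insert_many([
    {"type": "transaction", "branch": "A01", "amount": 150000,
     "ts": datetime.now(timezone.utc)},
    {"type": "login", "user": "teller-7", "result": "ok",
     "ts": datetime.now(timezone.utc)},
])

# Per-type counts, similar to the statistics a log graph generator module would plot.
for row in db["raw_logs"].aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row)
```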

Nonlinear Vector Alignment Methodology for Mapping Domain-Specific Terminology into General Space (전문어의 범용 공간 매핑을 위한 비선형 벡터 정렬 방법론)

  • Kim, Junwoo;Yoon, Byungho;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.127-146 / 2022
  • Recently, as word embedding has shown excellent performance in various deep learning-based natural language processing tasks, research on the advancement and application of word, sentence, and document embedding is being actively conducted. Among these topics, cross-lingual transfer, which enables semantic exchange between different languages, is growing along with the development of embedding models. Academic interest in vector alignment is growing with the expectation that it can be applied to various embedding-based analyses. In particular, vector alignment is expected to be applied to mapping between specialized domains and general domains. In other words, it is expected to make it possible to map the vocabulary of specialized fields such as R&D, medicine, and law into the space of a pre-trained language model learned from a huge volume of general-purpose documents, or to provide a clue for mapping vocabulary between mutually different specialized fields. However, because the linear vector alignment that has mainly been studied assumes statistical linearity, it tends to oversimplify the vector space. It essentially assumes that different vector spaces are geometrically similar, which inevitably causes distortion in the alignment process. To overcome this limitation, we propose a deep learning-based vector alignment methodology that effectively learns the nonlinearity of the data. The proposed methodology consists of sequentially training a skip-connected autoencoder and a regression model to align the specialized word embeddings expressed in each space to the general embedding space. Finally, through inference with the two trained models, the specialized vocabulary can be aligned in the general space. To verify the performance of the proposed methodology, an experiment was performed on a total of 77,578 documents in the field of 'health care' among national R&D tasks performed from 2011 to 2020. The results confirmed that the proposed methodology showed superior performance in terms of cosine similarity compared to existing linear vector alignment.
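
A compact PyTorch sketch of the described pipeline is given below: a skip-connected autoencoder over the specialized embeddings plus a regression network into the general space, fitted on anchor words shared by both spaces. The dimensions, layer sizes, and the joint (rather than strictly sequential) optimization are simplifying assumptions, not the paper's exact setup.

```python
# Nonlinear alignment of specialized embeddings into a general embedding space.
import torch
import torch.nn as nn

class SkipAutoencoder(nn.Module):
    def __init__(self, dim=300, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x)) + x    # skip connection from input to output

class Aligner(nn.Module):
    """Nonlinear regression from the reconstructed specialized space to the general space."""
    def __init__(self, dim=300, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x):
        return self.net(x)

def train_alignment(spec_vecs, gen_vecs, epochs=200, lr=1e-3):
    """spec_vecs / gen_vecs: (N, dim) embeddings of anchor words in each space."""
    ae, reg = SkipAutoencoder(spec_vecs.shape[1]), Aligner(spec_vecs.shape[1])
    opt = torch.optim.Adam(list(ae.parameters()) + list(reg.parameters()), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        recon = ae(spec_vecs)
        loss = mse(recon, spec_vecs) + mse(reg(recon), gen_vecs)
        loss.backward()
        opt.step()
    return ae, reg

# Toy usage with random 300-dim "embeddings" for 500 shared anchor words.
spec, gen = torch.randn(500, 300), torch.randn(500, 300)
ae, reg = train_alignment(spec, gen)
aligned = reg(ae(spec))                     # specialized vocabulary mapped into the general space
print(aligned.shape)
```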