• Title/Summary/Keyword: generative models


Generative optical flow based abnormal object detection method using a spatio-temporal translation network

  • Lim, Hyunseok;Gwak, Jeonghwan
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.11-19
    • /
    • 2021
  • An abnormal object refers to a person, an object, or a mechanical device that performs abnormal or unusual behavior and requires observation or supervision. To detect such objects with an artificial intelligence algorithm and without continuous human intervention, methods that observe distinctive temporal features using the optical flow technique are widely used. In this study, an abnormal situation is identified by training an algorithm that translates an input image frame into an optical flow image using a Generative Adversarial Network (GAN). In particular, we propose techniques that improve the pre-processing step, to exclude unnecessary outliers, and the post-processing step, to increase identification accuracy on the test dataset after training, thereby improving the model's abnormal-behavior identification performance. The UCSD Pedestrian and UMN Unusual Crowd Activity datasets were used for training. On the UCSD Ped2 dataset, the proposed method achieved a frame-level AUC of 0.9450 and an EER of 0.1317, an improvement over the models in previous studies.
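The anomaly-scoring idea in the abstract above — comparing the optical flow a generator predicts from a frame against the observed flow — can be sketched minimally as follows. The flow fields, sizes, and noise levels below are hypothetical illustrations, not values from the paper:

```python
import numpy as np

def anomaly_score(pred_flow, true_flow):
    """Frame-level anomaly score: mean absolute error between the flow
    the generator predicts from an RGB frame and the observed optical
    flow. A large error means the frame's motion differs from the
    normal motion patterns the GAN was trained on."""
    return float(np.mean(np.abs(pred_flow - true_flow)))

# Hypothetical 2-channel (dx, dy) flow fields for a tiny 4x4 frame.
rng = np.random.default_rng(0)
normal_flow = rng.normal(0.0, 0.1, size=(4, 4, 2))
pred = normal_flow + rng.normal(0.0, 0.01, size=(4, 4, 2))  # generator fits normal motion
abnormal_flow = normal_flow + 2.0                            # unusually fast motion

assert anomaly_score(pred, normal_flow) < anomaly_score(pred, abnormal_flow)
```

Thresholding this score per frame yields the frame-level detections that AUC/EER figures such as those above are computed from.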

Multidimensional data generation of water distribution systems using adversarially trained autoencoder (적대적 학습 기반 오토인코더(ATAE)를 이용한 다차원 상수도관망 데이터 생성)

  • Kim, Sehyeong;Jun, Sanghoon;Jung, Donghwi
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.7
    • /
    • pp.439-449
    • /
    • 2023
  • Recent advancements in data measuring technology have facilitated the installation of various sensors, such as pressure meters and flow meters, to effectively assess the real-time conditions of water distribution systems (WDSs). However, as cities expand, the factors that affect measurement reliability have become increasingly diverse. In particular, demand data, one of the most significant hydraulic variables in a WDS, is difficult to measure directly and is prone to missing values, making accurate data generation models all the more important. This paper therefore proposes an adversarially trained autoencoder (ATAE) model based on generative deep learning techniques to accurately estimate demand data in WDSs. The proposed model comprises two neural networks: a generative network and a discriminative network. The generative network generates demand data from the information in the measured pressure data, while the discriminative network evaluates the generated demand outputs and provides feedback to the generator so that it learns the distinctive features of the data. To validate its performance, the ATAE model is applied to a real distribution system in Austin, Texas, USA. The study analyzes the impact of data uncertainty by calculating the accuracy of the ATAE's predictions for varying levels of uncertainty in the demand and pressure time series. Additionally, the model's performance is evaluated by comparing results across data collection periods (low, average, and high demand hours) to assess its ability to generate demand data according to water consumption levels.
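A minimal structural sketch of the two-network setup described above: a generator mapping measured pressures to a demand vector and a discriminator scoring that vector. The sensor and node counts are hypothetical, and the random linear weights only stand in for networks the real ATAE would train adversarially:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 8 pressure sensors, 20 demand nodes.
n_pressure, n_demand = 8, 20
W_gen = rng.normal(size=(n_demand, n_pressure)) * 0.1   # untrained generator weights
W_dis = rng.normal(size=(1, n_demand)) * 0.1            # untrained discriminator weights

def generator(pressure):
    # Maps a vector of measured pressures to an estimated demand vector.
    return W_gen @ pressure

def discriminator(demand):
    # Scores a demand vector as "real" (near 1) or "generated" (near 0).
    return 1.0 / (1.0 + np.exp(-(W_dis @ demand)))

pressure = rng.normal(size=n_pressure)
fake_demand = generator(pressure)
score = discriminator(fake_demand)

assert fake_demand.shape == (n_demand,)
assert 0.0 < float(score) < 1.0
```

In adversarial training, the discriminator's score would drive gradient updates to both weight matrices; here only the data flow is shown.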

Fraud Detection System Model Using Generative Adversarial Networks and Deep Learning (생성적 적대 신경망과 딥러닝을 활용한 이상거래탐지 시스템 모형)

  • Ye Won Kim;Ye Lim Yu;Hong Yong Choi
    • Information Systems Review
    • /
    • v.22 no.1
    • /
    • pp.59-72
    • /
    • 2020
  • Artificial intelligence is evolving from an intractable concept into a familiar tool. Amid this trend, the financial sector is also looking to improve existing systems, including the Fraud Detection System (FDS). Conventional rule-based FDSs struggle to detect sophisticated cyber financial fraud, as payment environments have diversified and the number of electronic financial transactions has grown. To overcome the limitations of current FDSs, this paper proposes three artificial intelligence models: the Generative Adversarial Network (GAN), the Deep Neural Network (DNN), and the Convolutional Neural Network (CNN). The GAN shows how the data imbalance problem can be alleviated, while the DNN and CNN show how abnormal financial trading patterns can be precisely detected. Among the experiments in this paper, WGAN shows the greatest improvement on the data imbalance problem, while the DNN model is comparatively more effective at fraud classification.
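The role a GAN plays against class imbalance — synthesizing extra minority-class (fraud) records so a classifier sees a balanced training set — can be sketched with a simple distribution fit standing in for the WGAN generator. The feature values and row counts below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical transaction features: 990 normal rows, only 10 fraud rows.
normal = rng.normal(0.0, 1.0, size=(990, 4))
fraud = rng.normal(3.0, 1.0, size=(10, 4))

# Stand-in for the WGAN generator: fit the minority distribution and
# sample from it. A real WGAN would learn this mapping adversarially
# instead of assuming per-feature Gaussians.
mu, sigma = fraud.mean(axis=0), fraud.std(axis=0)
synthetic_fraud = rng.normal(mu, sigma, size=(980, 4))

balanced_fraud = np.vstack([fraud, synthetic_fraud])
assert balanced_fraud.shape[0] == normal.shape[0]
```

The balanced classes can then be fed to the DNN or CNN classifier, which is the division of labor the abstract describes.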

Learning Graphical Models for DNA Chip Data Mining

  • Zhang, Byoung-Tak
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2000.11a
    • /
    • pp.59-60
    • /
    • 2000
  • The past few years have seen a dramatic increase in gene expression data on the basis of DNA microarrays or DNA chips. Going beyond a generic view on the genome, microarray data are able to distinguish between gene populations in different tissues of the same organism and in different states of cells belonging to the same tissue. This affords a cell-wide view of the metabolic and regulatory processes under different conditions, building an effective basis for new diagnoses and therapies of diseases. In this talk we present machine learning techniques for effective mining of DNA microarray data. A brief introduction to the research field of machine learning from the computer science and artificial intelligence point of view is followed by a review of recently-developed learning algorithms applied to the analysis of DNA chip gene expression data. Emphasis is put on graphical models, such as Bayesian networks, latent variable models, and generative topographic mapping. Finally, we report on our own results of applying these learning methods to two important problems: the identification of cell cycle-regulated genes and the discovery of cancer classes by gene expression monitoring. The data sets are provided by the competition CAMDA-2000, the Critical Assessment of Techniques for Microarray Data Mining.
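The generative graphical-model idea mentioned above can be illustrated with a toy Gaussian naive Bayes classifier — the simplest Bayesian network, in which each gene's expression is conditionally independent given the class. The two-gene, two-class data below are synthetic, not from CAMDA-2000:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy expression matrix: 2 genes measured in 2 hypothetical cancer classes.
class0 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
class1 = rng.normal([2.0, 2.0], 0.5, size=(50, 2))

def fit(samples):
    # Per-gene Gaussian parameters for one class.
    return samples.mean(axis=0), samples.std(axis=0)

def log_likelihood(x, mu, sigma):
    # Log-density of x under independent Gaussians (constants dropped).
    return float(np.sum(-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)))

params = [fit(class0), fit(class1)]

def classify(x):
    # Pick the class whose generative model best explains the sample.
    return int(np.argmax([log_likelihood(x, mu, s) for mu, s in params]))

assert classify(np.array([0.1, -0.2])) == 0
assert classify(np.array([1.9, 2.1])) == 1
```

Richer graphical models (latent variable models, generative topographic mapping) extend this same generative principle with hidden variables.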

A Study on the Health Index Based on Degradation Patterns in Time Series Data Using ProphetNet Model (ProphetNet 모델을 활용한 시계열 데이터의 열화 패턴 기반 Health Index 연구)

  • Sun-Ju Won;Yong Soo Kim
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.3
    • /
    • pp.123-138
    • /
    • 2023
  • The Fourth Industrial Revolution and sensor technology have led to increased utilization of sensor data. In our modern society, data complexity is rising, and the extraction of valuable information has become crucial with the rapid changes in information technology (IT). Recurrent neural networks (RNN) and long short-term memory (LSTM) models have shown remarkable performance in natural language processing (NLP) and time series prediction. Consequently, there is a strong expectation that models excelling in NLP will also excel in time series prediction. However, current research on Transformer models for time series prediction remains limited. Traditional RNN and LSTM models have demonstrated superior performance compared to Transformers in big data analysis. Nevertheless, with continuous advancements in Transformer models, such as GPT-2 (Generative Pre-trained Transformer 2) and ProphetNet, they have gained attention in the field of time series prediction. This study aims to evaluate the classification performance and interval prediction of remaining useful life (RUL) using an advanced Transformer model. The performance of each model will be utilized to establish a health index (HI) for cutting blades, enabling real-time monitoring of machine health. The results are expected to provide valuable insights for machine monitoring, evaluation, and management, confirming the effectiveness of advanced Transformer models in time series analysis when applied in industrial settings.

Extrapolation of Hepatic Concentrations of Industrial Chemicals Using Pharmacokinetic Models to Predict Hepatotoxicity

  • Yamazaki, Hiroshi;Kamiya, Yusuke
    • Toxicological Research
    • /
    • v.35 no.4
    • /
    • pp.295-301
    • /
    • 2019
  • In this review, we describe the absorption rates (Caco-2 cell permeability) and hepatic/plasma pharmacokinetics of 53 diverse chemicals estimated by modeling virtual oral administration in rats. To ensure that a broad range of chemical structures is present among the selected substances, the properties described by 196 chemical descriptors in a chemoinformatics tool were calculated for 50,000 randomly selected molecules in the original chemical space. To allow visualization, the resulting chemical space was projected onto a two-dimensional plane using generative topographic mapping. The calculated absorption rates of the chemicals based on cell permeability studies were found to be inversely correlated to the no-observed-effect levels for hepatotoxicity after oral administration, as obtained from the Hazard Evaluation Support System Integrated Platform in Japan (r = -0.88, p < 0.01, n = 27). The maximum plasma concentrations and the areas under the concentration-time curves (AUC) of a varied selection of chemicals were estimated using two different methods: simple one-compartment models (i.e., high-throughput toxicokinetic models) and simplified physiologically based pharmacokinetic (PBPK) modeling consisting of chemical receptor (gut), metabolizing (liver), and central (main) compartments. The results obtained from the two methods were consistent. Although the maximum concentrations and AUC values of the 53 chemicals roughly correlated in the liver and plasma, inconsistencies were apparent between empirically measured concentrations and the PBPK-modeled levels. The lowest-observed-effect levels and the virtual hepatic AUC values obtained using PBPK models were inversely correlated (r = -0.78, p < 0.05, n = 7). The present simplified PBPK models could estimate the relationships between hepatic/plasma concentrations and oral doses of general chemicals using both forward and reverse dosimetry. These methods are therefore valuable for estimating hepatotoxicity.
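The simple one-compartment model with first-order absorption referenced above has a standard closed form, C(t) = F·D·ka / (V·(ka − ke)) · (e^(−ke·t) − e^(−ka·t)), with AUC(0−∞) = F·D/(V·ke). The sketch below evaluates Cmax and AUC for hypothetical parameters (not values from the review) and checks the numeric AUC against the analytic one:

```python
import numpy as np

def conc_one_compartment(t, dose, F, ka, ke, V):
    """Plasma concentration after oral dosing under a standard
    one-compartment model with first-order absorption (ka) and
    first-order elimination (ke)."""
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Hypothetical parameters for illustration only.
dose, F, ka, ke, V = 100.0, 0.8, 1.2, 0.2, 10.0
t = np.linspace(0.0, 48.0, 2000)
c = conc_one_compartment(t, dose, F, ka, ke, V)

cmax = float(c.max())
# Trapezoidal AUC over the sampled window.
auc_trapz = float(np.sum((c[:-1] + c[1:]) * np.diff(t)) / 2.0)
auc_analytic = F * dose / (V * ke)   # AUC(0-inf) = F*D/CL with CL = V*ke

assert abs(auc_trapz - auc_analytic) / auc_analytic < 0.01
```

PBPK models chain several such compartments (gut, liver, central) with transfer rate constants; the closed form above is the high-throughput toxicokinetic end of the comparison.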

GOMME: A Generic Ontology Modelling Methodology for Epics

  • Udaya Varadarajan;Mayukh Bagchi;Amit Tiwari;M.P. Satija
    • Journal of Information Science Theory and Practice
    • /
    • v.11 no.1
    • /
    • pp.61-78
    • /
    • 2023
  • Ontological knowledge modelling of epic texts, though an established research arena backed by concrete multilingual and multicultural works, still suffers from two key shortcomings. Firstly, all epic ontological models developed to date have been designed following ad-hoc methodologies, most often combining existing general-purpose ontology development methodologies. Secondly, none of the ad-hoc methodologies consider the potential reuse of existing epic ontological models for enrichment, if available. This paper presents, as a unified solution to the above shortcomings, the design and development of GOMME - the first dedicated methodology for iterative ontological modelling of epics, potentially extensible to works in different research arenas of digital humanities in general. GOMME is grounded in transdisciplinary foundations of canonical norms for epics, knowledge modelling best practices, application satisfiability norms, and cognitive generative questions. It is also the first methodology (not only in epic modelling but also in general) flexible enough to integrate, in practice, the options of knowledge modelling via reuse or from scratch. The feasibility of GOMME is validated via a first brief implementation of ontological modelling of the Indian epic Mahabharata by reusing an existing ontology. The preliminary results are promising, with the GOMME-produced model being both ontologically thorough and competent performance-wise.

Generation of Super-Resolution Benchmark Dataset for Compact Advanced Satellite 500 Imagery and Proof of Concept Results

  • Yonghyun Kim;Jisang Park;Daesub Yoon
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.4
    • /
    • pp.459-466
    • /
    • 2023
  • In the last decade, artificial intelligence's dramatic advancement with the development of various deep learning techniques has significantly contributed to remote sensing fields and satellite image applications. Among many prominent areas, super-resolution research has seen substantial growth with the release of several benchmark datasets and the rise of generative adversarial network-based studies. However, most previously published remote sensing benchmark datasets have a spatial resolution of approximately 10 meters, which imposes limitations when they are directly applied to super-resolution of small objects at centimeter-level spatial resolution. Furthermore, if a dataset lacks a global spatial distribution and is specialized to particular land covers, the resulting lack of feature diversity can directly impact quantitative performance and prevent the formation of robust foundation models. To overcome these issues, this paper proposes a method to generate benchmark datasets by simulating the modulation transfer function of the sensor. The proposed approach leverages a simulation method with a solid theoretical foundation, notably recognized in image fusion. Additionally, the generated benchmark dataset is applied to state-of-the-art super-resolution base models for quantitative and visual analysis, and the shortcomings of existing datasets are discussed. Through these efforts, we anticipate that the proposed benchmark dataset will facilitate a wide range of super-resolution research in Korea in the near future.
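The MTF-simulation idea — degrading a high-resolution image the way the sensor's optics would before downsampling — can be sketched as a Gaussian point-spread-function blur followed by decimation. The kernel size, sigma, and scale factor below are illustrative assumptions, not the paper's actual sensor model:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Normalized 2-D Gaussian, a common stand-in for the sensor PSF/MTF.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def simulate_low_resolution(img, scale=4, sigma=1.5):
    """Blur with a 9x9 Gaussian PSF, then decimate by `scale` to
    produce a simulated low-resolution counterpart of `img`."""
    k = gaussian_kernel(9, sigma)
    padded = np.pad(img, 4, mode="reflect")
    h, w = img.shape
    blurred = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + 9, j:j + 9] * k)
    return blurred[::scale, ::scale]

hr = np.random.default_rng(3).random((32, 32))  # stand-in HR patch
lr = simulate_low_resolution(hr, scale=4)
assert lr.shape == (8, 8)
```

Pairing each HR patch with its simulated LR counterpart yields the (LR, HR) training pairs that super-resolution benchmark datasets consist of.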

Is ChatGPT a "Fire of Prometheus" for Non-Native English-Speaking Researchers in Academic Writing?

  • Sung Il Hwang;Joon Seo Lim;Ro Woon Lee;Yusuke Matsui;Toshihiro Iguchi;Takao Hiraki;Hyungwoo Ahn
    • Korean Journal of Radiology
    • /
    • v.24 no.10
    • /
    • pp.952-959
    • /
    • 2023
  • Large language models (LLMs) such as ChatGPT have garnered considerable interest for their potential to aid non-native English-speaking researchers. These models can function as personal, round-the-clock English tutors, akin to how Prometheus in Greek mythology bestowed fire upon humans for their advancement. LLMs can be particularly helpful for non-native researchers in writing the Introduction and Discussion sections of manuscripts, where they often encounter challenges. However, using LLMs to generate text for research manuscripts entails concerns such as hallucination, plagiarism, and privacy issues; to mitigate these risks, authors should verify the accuracy of generated content, employ text similarity detectors, and avoid inputting sensitive information into their prompts. Consequently, it may be more prudent to utilize LLMs for editing and refining text rather than generating large portions of text. Journal policies concerning the use of LLMs vary, but transparency in disclosing artificial intelligence tool usage is emphasized. This paper aims to summarize how LLMs can lower the barrier to academic writing in English, enabling researchers to concentrate on domain-specific research, provided they are used responsibly and cautiously.

A Study on Digital design process of the materialization of Free form Design Architecture (비정형 건축 구현을 위한 디지털 디자인 프로세스에 관한 연구)

  • Lee, Jae-Kook;Lee, Kang-Bok
    • Journal of The Korean Digital Architecture Interior Association
    • /
    • v.11 no.2
    • /
    • pp.13-19
    • /
    • 2011
  • Since the modern era begun by Le Corbusier, concrete architecture has continued to develop. For decades, the box has been the most familiar building form. Concrete does not always mean box-shaped buildings, but most of what we have seen are standardized buildings made of concrete. From the modern era to the present, grounded in the International Style and functionalism, the box form has been the most economical and profitable building type under capitalism. Free-form buildings are now becoming commonplace, and many are designed and constructed using sophisticated techniques, chief among them generative technology of form for free-form construction. Interest in this approach is growing, and it is becoming widely used both in Korea and abroad. The purpose of this paper is to investigate the use of generative technology of form, a digitally adapted design methodology in architecture. The digital design processes used for contemporary buildings share many features that exist within a standard digital template, but also involve an increasing amount of mass customization that must be produced at additional cost. This paper summarizes these features in terms of free-form architecture and of the digital design process. Given the characteristics of free-form architecture, 3D models should be regarded as the main design products; in practice, however, designs must be produced twice, because architectural drawings remain 2D. Going forward, free-form building design should not separate the design process from the practical process, but unify them into a single design system in which information is communicated interactively. To this end, a unified digital design process should be established, and research should be conducted on how to apply it effectively in the field of free-form architecture.