Title/Summary/Keyword: Generative Models

Extrapolation of Hepatic Concentrations of Industrial Chemicals Using Pharmacokinetic Models to Predict Hepatotoxicity

  • Yamazaki, Hiroshi; Kamiya, Yusuke
    • Toxicological Research, v.35 no.4, pp.295-301, 2019
  • In this review, we describe the absorption rates (Caco-2 cell permeability) and hepatic/plasma pharmacokinetics of 53 diverse chemicals, estimated by modeling virtual oral administration in rats. To ensure that a broad range of chemical structures was represented among the selected substances, 196 chemical descriptors were calculated with a chemoinformatics tool for 50,000 randomly selected molecules in the original chemical space. For visualization, the resulting chemical space was projected onto a two-dimensional plane using generative topographic mapping. The calculated absorption rates of the chemicals, based on cell permeability studies, were inversely correlated with the no-observed-effect levels for hepatotoxicity after oral administration, as obtained from the Hazard Evaluation Support System Integrated Platform in Japan (r = -0.88, p < 0.01, n = 27). The maximum plasma concentrations and the areas under the concentration-time curves (AUC) of a varied selection of chemicals were estimated using two different methods: simple one-compartment models (i.e., high-throughput toxicokinetic models) and simplified physiologically based pharmacokinetic (PBPK) models consisting of chemical-receiving (gut), metabolizing (liver), and central (main) compartments. The results of the two methods were consistent. Although the maximum concentrations and AUC values of the 53 chemicals were roughly correlated between liver and plasma, inconsistencies were apparent between empirically measured concentrations and PBPK-modeled levels. The lowest-observed-effect levels and the virtual hepatic AUC values obtained using PBPK models were inversely correlated (r = -0.78, p < 0.05, n = 7). The present simplified PBPK models can estimate the relationships between hepatic/plasma concentrations and oral doses of general chemicals using both forward and reverse dosimetry, and are therefore valuable for estimating hepatotoxicity.
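
The one-compartment (high-throughput toxicokinetic) approach mentioned above has a standard closed-form solution for a single oral dose. The sketch below is a minimal illustration rather than the paper's actual model: it computes the concentration-time profile, Cmax, and AUC, and all parameter values (F, D, V, ka, ke) are assumed placeholders.

```python
# Minimal one-compartment oral-absorption model (illustrative sketch).
# All parameter values are assumed for demonstration; the paper's
# rat-specific estimates are not reproduced here.
import numpy as np

F = 0.8    # assumed oral bioavailability
D = 10.0   # assumed dose (mg/kg)
V = 1.0    # assumed volume of distribution (L/kg)
ka = 1.5   # assumed first-order absorption rate constant (1/h)
ke = 0.3   # assumed first-order elimination rate constant (1/h)

t = np.linspace(0, 24, 481)  # time grid over 24 h

# Standard solution for first-order absorption and elimination.
C = (F * D * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

cmax, tmax = C.max(), t[C.argmax()]
auc_24h = np.trapz(C, t)     # numerical AUC over 0-24 h
auc_inf = F * D / (V * ke)   # analytical AUC extrapolated to infinity

print(f"Cmax = {cmax:.2f} mg/L at t = {tmax:.2f} h")
print(f"AUC(0-24h) = {auc_24h:.1f}, AUC(0-inf) = {auc_inf:.1f} mg*h/L")
```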

A Study on the Evaluation Methods for Assessing the Understanding of Korean Culture by Generative AI Models (생성형 AI 모델의 한국문화 이해 능력 평가 방법에 관한 연구)

  • Son Ki Jun; Kim Seung Hyun
    • The Transactions of the Korea Information Processing Society, v.13 no.9, pp.421-428, 2024
  • Recently, services utilizing large-scale language models (LLMs) such as GPT-4 and LLaMA have been released, garnering significant attention. These models can respond fluently to various user queries, but their insufficient training on Korean data raises concerns about the potential to provide inaccurate information regarding Korean culture and language. In this study, we selected eight major publicly available models that have been trained on Korean data and evaluated their understanding of Korean culture using a dataset composed of five domains (Korean language comprehension and cultural aspects). The results showed that the commercial model HyperClovaX exhibited the best performance across all domains. Among the publicly available models, Bookworm demonstrated superior Korean language proficiency. Additionally, the LDCC-SOLAR model excelled in areas related to understanding Korean culture and language.
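
As a rough illustration of the domain-wise scoring such an evaluation implies, the sketch below computes per-domain accuracy from each model's answers to items tagged by domain. The items, domains, and model names are hypothetical placeholders, not the paper's benchmark.

```python
# Hypothetical domain-wise accuracy scoring for LLM evaluation (sketch).
from collections import defaultdict

# Each item: (domain, gold answer); placeholder data for illustration.
items = [("grammar", "B"), ("history", "A"), ("grammar", "C"), ("food", "D")]
answers = {"model_a": ["B", "A", "A", "D"], "model_b": ["B", "C", "C", "D"]}

def domain_accuracy(preds, items):
    """Return accuracy per domain for one model's predictions."""
    correct, total = defaultdict(int), defaultdict(int)
    for (domain, gold), pred in zip(items, preds):
        total[domain] += 1
        correct[domain] += int(pred == gold)
    return {d: correct[d] / total[d] for d in total}

for model, preds in answers.items():
    print(model, domain_accuracy(preds, items))
```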

GOMME: A Generic Ontology Modelling Methodology for Epics

  • Udaya Varadarajan; Mayukh Bagchi; Amit Tiwari; M.P. Satija
    • Journal of Information Science Theory and Practice, v.11 no.1, pp.61-78, 2023
  • Ontological knowledge modelling of epic texts, though an established research arena backed by concrete multilingual and multicultural works, still suffers from two key shortcomings. First, all epic ontological models developed to date have been designed following ad hoc methodologies, most often by combining existing general-purpose ontology development methodologies. Second, none of these ad hoc methodologies consider the potential reuse of existing epic ontological models for enrichment, where available. As a unified solution to these shortcomings, this paper presents the design and development of GOMME, the first dedicated methodology for iterative ontological modelling of epics, potentially extensible to other research arenas within digital humanities. GOMME is grounded in transdisciplinary foundations: canonical norms for epics, knowledge-modelling best practices, application satisfiability norms, and cognitive generative questions. It is also the first methodology (in epic modelling, but also in general) flexible enough to integrate, in practice, knowledge modelling via reuse or from scratch. The feasibility of GOMME is validated via a first, brief implementation of ontological modelling of the Indian epic Mahabharata by reusing an existing ontology. The preliminary results are promising, with the GOMME-produced model being both ontologically thorough and competent performance-wise.
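
To make "modelling via reuse" concrete, here is a minimal RDF sketch using the rdflib library: a new epic-specific class extends a class assumed to come from a reused ontology, and an instance is attached to it. The namespaces and class names are hypothetical illustrations, not GOMME's actual vocabulary.

```python
# Toy illustration of ontology modelling via reuse with rdflib.
# The "reused" namespace and its Character class are hypothetical.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

REUSED = Namespace("http://example.org/reused-epic-ontology#")
EPIC = Namespace("http://example.org/mahabharata#")

g = Graph()
g.bind("reused", REUSED)
g.bind("epic", EPIC)

# Extend a reused class instead of modelling from scratch.
g.add((EPIC.PandavaHero, RDFS.subClassOf, REUSED.Character))
g.add((EPIC.Arjuna, RDF.type, EPIC.PandavaHero))
g.add((EPIC.Arjuna, RDFS.label, Literal("Arjuna")))

print(g.serialize(format="turtle"))
```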

Generation of Super-Resolution Benchmark Dataset for Compact Advanced Satellite 500 Imagery and Proof of Concept Results

  • Yonghyun Kim; Jisang Park; Daesub Yoon
    • Korean Journal of Remote Sensing, v.39 no.4, pp.459-466, 2023
  • In the last decade, the dramatic advancement of artificial intelligence through the development of various deep learning techniques has contributed significantly to remote sensing and satellite image applications. Among many prominent areas, super-resolution research has grown substantially with the release of several benchmark datasets and the rise of generative adversarial network-based studies. However, most previously published remote sensing benchmark datasets represent spatial resolutions of approximately 10 m, which limits their direct application to the super-resolution of small objects at centimeter-level spatial resolution. Furthermore, if a dataset lacks global spatial distribution and is specialized to particular land covers, the resulting lack of feature diversity can directly impact quantitative performance and prevent the formation of robust foundation models. To overcome these issues, this paper proposes a method for generating benchmark datasets by simulating the modulation transfer function (MTF) of the sensor. The proposed approach leverages a simulation method with a solid theoretical foundation that is well established in image fusion. The generated benchmark dataset is then applied to state-of-the-art super-resolution models for quantitative and visual analysis, and the shortcomings of existing datasets are discussed. Through these efforts, we anticipate that the proposed benchmark dataset will facilitate super-resolution research in Korea in the near future.
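
The degradation side of such benchmark generation can be sketched as follows: approximate the sensor MTF with a Gaussian blur, then decimate to produce a low-resolution counterpart of each high-resolution patch. The Gaussian approximation and all parameter values are assumptions for illustration; the paper simulates the actual sensor MTF.

```python
# Sketch: build an LR/HR training pair by blurring with a Gaussian
# MTF proxy and subsampling. Parameters are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr: np.ndarray, scale: int = 4, sigma: float = 1.2) -> np.ndarray:
    """Blur with a Gaussian MTF proxy, then subsample by `scale`."""
    blurred = gaussian_filter(hr.astype(np.float64), sigma=sigma)
    return blurred[::scale, ::scale]

hr = np.random.rand(256, 256)    # stand-in for a satellite image patch
lr = degrade(hr, scale=4)
print(hr.shape, "->", lr.shape)  # (256, 256) -> (64, 64)
```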

Is ChatGPT a "Fire of Prometheus" for Non-Native English-Speaking Researchers in Academic Writing?

  • Sung Il Hwang; Joon Seo Lim; Ro Woon Lee; Yusuke Matsui; Toshihiro Iguchi; Takao Hiraki; Hyungwoo Ahn
    • Korean Journal of Radiology, v.24 no.10, pp.952-959, 2023
  • Large language models (LLMs) such as ChatGPT have garnered considerable interest for their potential to aid non-native English-speaking researchers. These models can function as personal, round-the-clock English tutors, akin to how Prometheus in Greek mythology bestowed fire upon humans for their advancement. LLMs can be particularly helpful for non-native researchers in writing the Introduction and Discussion sections of manuscripts, where they often encounter challenges. However, using LLMs to generate text for research manuscripts entails concerns such as hallucination, plagiarism, and privacy issues; to mitigate these risks, authors should verify the accuracy of generated content, employ text similarity detectors, and avoid inputting sensitive information into their prompts. Consequently, it may be more prudent to utilize LLMs for editing and refining text rather than generating large portions of text. Journal policies concerning the use of LLMs vary, but transparency in disclosing artificial intelligence tool usage is emphasized. This paper aims to summarize how LLMs can lower the barrier to academic writing in English, enabling researchers to concentrate on domain-specific research, provided they are used responsibly and cautiously.

A Study on the Digital Design Process for the Materialization of Free-Form Design Architecture (비정형 건축 구현을 위한 디지털 디자인 프로세스에 관한 연구)

  • Lee, Jae-Kook; Lee, Kang-Bok
    • Journal of The Korean Digital Architecture Interior Association, v.11 no.2, pp.13-19, 2011
  • Beginning in the modern era with Le Corbusier, concrete architecture has continued to develop to the present day. For several decades, the box has been the most familiar building type. Of course, "concrete means box buildings" is not always true, but most of what we have seen are stylized buildings built in concrete. From the modern period onward, grounded in the International Style and functionalism, the box was the most efficient and profitable building type under capitalism. Free-form buildings, by contrast, are now becoming commonplace, and many are designed and constructed using sophisticated techniques. The main technique used is generative technology of form for free-form construction; interest in it is growing, and it is becoming widely used both abroad and domestically. The purpose of this paper is to investigate the use of generative technology of form, a digitally adapted design methodology in architecture. The digital design processes used for contemporary buildings share many typical features within a standard digital template, but also involve an increasing amount of mass customization that must be produced at additional cost. This paper summarizes these features in terms of free-form architecture and the digital design process. Given the characteristics of free-form architecture, 3D models must be accepted as the main design products; in practice, however, designs are effectively produced twice, because architectural drawings remain 2D. Going forward, free-form building design should not separate the design process from the construction-documentation process, but should combine them into a unified design system in which information is communicated interactively. To this end, a unified digital design process should be established, and research is needed on effective ways to apply it in the field of free-form architecture.
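
As a toy illustration of parametric, generative form-making in the spirit the paper discusses (not the authors' actual toolchain, which would typically be a NURBS-based CAD system), the sketch below generates a doubly curved surface from a handful of parameters.

```python
# Toy generative-form sketch: a doubly curved surface from a few
# parameters. Real free-form projects use NURBS-based CAD tools;
# this height field merely illustrates parameter-driven geometry.
import numpy as np

def freeform_surface(nu=50, nv=50, amp=3.0, freq=1.5):
    """Sample a parametric height field z(u, v) controlled by amp/freq."""
    u, v = np.meshgrid(np.linspace(0, np.pi, nu), np.linspace(0, np.pi, nv))
    z = amp * np.sin(freq * u) * np.cos(freq * v)
    return np.stack([u, v, z], axis=-1)  # grid of 3D points

pts = freeform_surface()
print(pts.shape)  # (50, 50, 3): a point grid ready for meshing/export
```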

A Study on Big Data Analysis of Related Patents in Smart Factories Using Topic Models and ChatGPT (토픽 모형과 ChatGPT를 활용한 스마트팩토리 연관 특허 빅데이터 분석에 관한 연구)

  • Sang-Gook Kim; Minyoung Yun; Taehoon Kwon; Jung Sun Lim
    • Journal of Korean Society of Industrial and Systems Engineering, v.46 no.4, pp.15-31, 2023
  • In this study, we propose a novel approach to analyzing patent big data in the field of smart factories, utilizing the Latent Dirichlet Allocation (LDA) topic modeling method and the generative artificial intelligence technology ChatGPT. Our method extracts valuable insights from a large dataset of associated patents by using LDA to identify latent topics and their corresponding patent documents. Additionally, we validate the suitability of the generated topics using generative AI technology and review the results with domain experts. We also employ the big data analysis tool KNIME to preprocess and visualize the patent data, facilitating a better understanding of the global patent landscape and enabling a comparative analysis with the domestic patent environment. To explore quantitative and qualitative comparative advantages, we selected six indicators for quantitative analysis. Consequently, our approach allows us to explore the distinctive characteristics and investment directions of individual countries in research and development and commercialization, based on a global-scale patent analysis in the field of smart factories. We anticipate that our findings will serve as vital guidance for determining individual countries' directions in research and development investment. Furthermore, we propose a novel use of ChatGPT as a tool for validating the suitability of selected topics for policy makers who must choose among topics across various scientific and technological domains.
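
A minimal sketch of the LDA step with scikit-learn is shown below; the three-document corpus and the choice of three topics are illustrative assumptions, not the study's patent data.

```python
# Sketch: LDA topic modeling over patent-like texts with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # placeholder abstracts standing in for patent documents
    "smart factory sensor network predictive maintenance",
    "robot arm control system manufacturing automation",
    "deep learning defect inspection vision camera",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic proportions

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {top}")
```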

Empirical Research on the Interaction between Visual Art Creation and Artificial Intelligence Collaboration (시각예술 창작과 인공지능 협업의 상호작용에 관한 실증연구)

  • Hyeonjin Kim; Yeongjo Kim; Donghyeon Yun; Hanjin Lee
    • The Journal of the Convergence on Culture Technology, v.10 no.1, pp.517-524, 2024
  • Generative AI, exemplified by models like ChatGPT, has revolutionized human-machine interactions in the 21st century. As these advancements permeate various sectors, their intersection with the arts is both promising and challenging. Despite the arts' historical resistance to AI replacement, recent developments have sparked active research in AI's role in artistry. This study delves into the potential of AI in visual arts education, highlighting the necessity of swift adaptation amidst the Fourth Industrial Revolution. This research, conducted at a 4-year global higher education institution located in Gyeongbuk, involved 70 participants who took part in a creative convergence module course project. The study aimed to examine the influence of AI collaboration in visual arts, analyzing distinctions across majors, grades, and genders. The results indicate that creative activities with AI positively influence students' creativity and digital media literacy. Based on these findings, there is a need to further develop effective educational strategies and directions that incorporate AI.
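
A minimal sketch of the kind of group comparison such an analysis implies (a one-way ANOVA across majors, using SciPy) follows; the scores and group labels are fabricated placeholders, not the study's data.

```python
# Sketch: one-way ANOVA comparing creativity scores across majors.
# All scores below are placeholder values for illustration only.
from scipy import stats

design = [4.2, 3.9, 4.5, 4.1]
fine_art = [3.8, 4.0, 3.6, 4.2]
media = [4.4, 4.6, 4.1, 4.3]

f_stat, p_value = stats.f_oneway(design, fine_art, media)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```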

Comparison Analysis of Four Face Swapping Models for Interactive Media Platform COX (인터랙티브 미디어 플랫폼 콕스에 제공될 4가지 얼굴 변형 기술의 비교분석)

  • Jeon, Ho-Beom; Ko, Hyun-kwan; Lee, Seon-Gyeong; Song, Bok-Deuk; Kim, Chae-Kyu; Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society, v.22 no.5, pp.535-546, 2019
  • Recently, there has been much research on whole-face replacement systems, but it is not easy to obtain stable results given the variety of poses, angles, and faces. To produce a natural synthesis when replacing a face shown in a video image, technologies such as face-region detection, feature extraction, face alignment, face-region segmentation, 3D pose adjustment, and facial transposition must all operate at a precise level, and each must be able to be combined interdependently. Our analysis shows that, among face-replacement technologies, implementation difficulty and contribution to the overall system are highest for facial feature-point extraction and facial alignment. The facial transposition and 3D pose adjustment techniques, by contrast, were less difficult to implement but still showed a need for further development. In this paper, we compare four face-replacement models suitable for the COX platform: 2D FaceSwap, OpenPose, Deepfake, and CycleGAN. These models cover, respectively, conversion of frontal face-pose images, face-pose images with active body movement, face movement of up to 15 degrees to the left and right, and synthesis based on a generative adversarial network.
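
To illustrate the face-alignment step named above, here is a minimal sketch that estimates a similarity transform from facial landmarks to a canonical template with OpenCV. The landmark coordinates are placeholders; a real pipeline would obtain them from a landmark detector.

```python
# Sketch: align a face to a canonical template from five landmarks.
# Landmark coordinates are placeholders; real pipelines detect them.
import numpy as np
import cv2

# Five landmarks (eyes, nose tip, mouth corners) in the source frame...
src = np.array([[120, 80], [180, 82], [150, 120],
                [128, 160], [172, 158]], dtype=np.float32)
# ...and their target positions in a 112x112 aligned template.
dst = np.array([[38, 40], [74, 40], [56, 64],
                [42, 88], [70, 88]], dtype=np.float32)

# Estimate a 2x3 similarity (rotation + scale + translation) transform.
M, _ = cv2.estimateAffinePartial2D(src, dst)

frame = np.zeros((240, 320, 3), dtype=np.uint8)  # stand-in video frame
aligned = cv2.warpAffine(frame, M, (112, 112))
print(M.shape, aligned.shape)  # (2, 3) (112, 112, 3)
```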

Recent Trends and Prospects of 3D Content Using Artificial Intelligence Technology (인공지능을 이용한 3D 콘텐츠 기술 동향 및 향후 전망)

  • Lee, S.W.; Hwang, B.W.; Lim, S.J.; Yoon, S.U.; Kim, T.J.; Kim, K.N.; Kim, D.H.; Park, C.J.
    • Electronics and Telecommunications Trends, v.34 no.4, pp.15-22, 2019
  • Recent technological advances in three-dimensional (3D) sensing devices and in machine learning techniques such as deep learning have enabled data-driven 3D applications. Artificial intelligence research has advanced over the past few years, and 3D deep learning has been introduced as a result of the availability of high-quality big data, increases in computing power, and the development of new algorithms; before its introduction, the main targets of deep learning were one-dimensional (1D) audio files and two-dimensional (2D) images. The field of deep learning has since extended from discriminative models, such as classification, segmentation, and reconstruction models, to generative models, such as style transfer and the generation of non-existing data. Unlike 2D learning data, 3D learning data are not easy to acquire. Although low-cost 3D data acquisition sensors have become increasingly popular owing to advances in 3D vision technology, generating and acquiring 3D data remains very difficult, and even when 3D data can be acquired, post-processing remains a significant problem. Moreover, because 3D data can be represented in various ways, existing network models such as convolutional networks cannot be applied directly. In this paper, we summarize technological trends in AI-based 3D content generation.
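
To illustrate why standard convolutions do not transfer directly to 3D point sets, the sketch below shows the PointNet-style remedy: a shared per-point transform followed by symmetric (max) pooling, which makes the output invariant to point ordering. The weights are random placeholders, not a trained model.

```python
# Sketch: permutation-invariant feature for a 3D point cloud.
# Shared per-point map + max pooling (PointNet-style); random weights.
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(1024, 3))      # toy point cloud (N x 3)
W = rng.normal(size=(3, 64))             # shared per-point weights

per_point = np.maximum(points @ W, 0.0)  # shared linear layer + ReLU
global_feat = per_point.max(axis=0)      # symmetric pooling over points

# Shuffling the points leaves the global feature unchanged.
shuffled = rng.permutation(points, axis=0)
same = np.allclose(np.maximum(shuffled @ W, 0.0).max(axis=0), global_feat)
print(global_feat.shape, same)           # (64,) True
```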