• Title/Summary/Keyword: Generative Model

Search results: 340 items (processing time: 0.024 seconds)

Generative AI parameter tuning for online self-directed learning

  • Jin-Young Jun;Youn-A Min
    • Journal of the Korea Society of Computer and Information / Vol. 29, No. 4 / pp.31-38 / 2024
  • This study proposes hyperparameter settings needed to develop a generative-AI-based learning support tool for promoting coding education in online distance education. For the study, an experimental tool that can set hyperparameters according to three different learning contexts was implemented, and the response quality of the generative AI was evaluated with it. Experiments that kept the generative AI's default hyperparameter settings served as the control group, and experiments using the hyperparameters set in this study served as the experimental group. In the first learning context, "learning support," no significant difference was observed between the experimental and control groups; in the second and third contexts, "code generation" and "comment generation," however, the mean evaluation score of the experimental group was 11.6 and 23 percentage points higher than that of the control group, respectively. In addition, when the system content indicated the influence a response could have on learning motivation, responses that took learner affect into account were observed.
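As a purely hypothetical illustration of the kind of context-dependent hyperparameter setting the study proposes, a preset table can switch generic sampling parameters such as temperature and top_p by learning context; the context names and values below are invented for illustration, not the paper's reported settings.

```python
# Hypothetical per-context presets for generic sampling hyperparameters.
# The contexts mirror the study's three learning contexts; the numeric
# values are illustrative only.
PRESETS = {
    "learning_support": {"temperature": 0.7, "top_p": 0.9},
    "code_generation": {"temperature": 0.2, "top_p": 0.8},
    "comment_generation": {"temperature": 0.4, "top_p": 0.9},
}

def request_params(context: str, prompt: str) -> dict:
    """Build a generation-request payload using the preset for a context."""
    if context not in PRESETS:
        raise KeyError(f"unknown learning context: {context}")
    params = dict(PRESETS[context])  # copy so presets stay immutable
    params["prompt"] = prompt
    return params
```

Lower temperature for code generation reflects the common practice of reducing sampling randomness for tasks with a narrow notion of correctness.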

Game Sprite Generator Using a Multi Discriminator GAN

  • Hong, Seungjin;Kim, Sookyun;Kang, Shinjin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 8 / pp.4255-4269 / 2019
  • This paper proposes an image generation method using a Multi Discriminator Generative Adversarial Net (MDGAN) as a next-generation 2D game sprite creation technique. The proposed GAN is an Autoencoder-based model that receives three kinds of information (color, shape, and animation) and combines them into new images. This model consists of two encoders that extract color and shape from each image, and a decoder that takes all the values of each encoder and generates an animated image. We also suggest an image processing technique during the learning process to remove the noise of the generated images. The resulting images show that 2D sprites in games can be generated by independently learning the three image attributes of shape, color, and animation. The proposed system can increase the productivity of massive 2D image modification work during the game development process. The experimental results demonstrate that our MDGAN can be used for 2D image sprite generation and modification work with little manual cost.
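The dataflow the abstract describes (separate color and shape encoders whose outputs a decoder recombines) can be sketched with toy, non-neural stand-ins; the real MDGAN uses learned convolutional encoders and adversarial losses, so this only illustrates the factorized-attribute idea.

```python
# Toy stand-ins for the two-encoder/one-decoder dataflow.
# An "image" is a list of rows of (r, g, b) tuples.

def encode_color(image):
    """Toy color encoder: mean channel values over all pixels."""
    pixels = [px for row in image for px in row]
    n = len(pixels)
    return tuple(sum(px[c] for px in pixels) / n for c in range(3))

def encode_shape(image):
    """Toy shape encoder: binary mask of non-black pixels."""
    return [[1 if any(px) else 0 for px in row] for row in image]

def decode(color_latent, shape_mask):
    """Toy decoder: paint the shape mask with the encoded color."""
    return [[color_latent if on else (0, 0, 0) for on in row]
            for row in shape_mask]
```

Because color and shape are encoded separately, the color latent of one sprite can be decoded against the shape mask of another, which is the recombination the paper exploits for sprite modification.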

Research on AI Painting Generation Technology Based on the [Stable Diffusion]

  • Chenghao Wang;Jeanhun Chung
    • International Journal of Advanced Smart Convergence / Vol. 12, No. 2 / pp.90-95 / 2023
  • With the rapid development of deep learning and artificial intelligence, generative models have achieved remarkable success in the field of image generation. By combining the stable diffusion method with Web UI technology, a novel solution is provided for the application of AI painting generation. The application prospects of this technology are very broad and can be applied to multiple fields, such as digital art, concept design, game development, and more. Furthermore, the platform based on Web UI facilitates user operations, making the technology more easily applicable to practical scenarios. This paper introduces the basic principles of Stable Diffusion Web UI technology. This technique utilizes the stability of diffusion processes to improve the output quality of generative models. By gradually introducing noise during the generation process, the model can generate smoother and more coherent images. Additionally, the analysis of different model types and applications within Stable Diffusion Web UI provides creators with a more comprehensive understanding, offering valuable insights for fields such as artistic creation and design.
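The gradual introduction of noise mentioned above corresponds, in standard diffusion models, to the forward process; a minimal scalar sketch of its closed form x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, assuming a fixed beta schedule:

```python
import math
import random

def forward_diffusion(x0, t, betas, rng=random):
    """Sample x_t ~ q(x_t | x_0) in closed form for a scalar x0.

    alpha_bar_t is the cumulative product of (1 - beta_s) for s <= t;
    as t grows, alpha_bar_t shrinks and x_t approaches pure noise.
    Returns (x_t, alpha_bar_t).
    """
    alpha_bar = 1.0
    for beta in betas[: t + 1]:
        alpha_bar *= 1.0 - beta
    eps = rng.gauss(0.0, 1.0)  # standard Gaussian noise
    x_t = math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps
    return x_t, alpha_bar
```

A denoising network is then trained to predict eps from x_t, which is what lets generation run this process in reverse.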

A Research on AI Generated 2D Image to 3D Modeling Technology

  • Ke Ma;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication / Vol. 16, No. 2 / pp.81-86 / 2024
  • Advancements in generative AI are reshaping graphic and 3D content design landscapes, where AI not only enriches graphic design but extends its reach to 3D content creation. Though 3D texture mapping through AI is advancing, AI-generated 3D modeling technology in this realm remains nascent. This paper presents AI 2D image-driven 3D modeling techniques, assessing their viability in 3D content design by scrutinizing various algorithms. Initially, four OBJ model-exporting AI algorithms are screened, and two are further evaluated. Results indicate that while AI-generated 3D models may not be directly usable, they effectively capture reference object structures, offering substantial time savings and enhanced design efficiency through manual refinements. This endeavor pioneers new avenues for 3D content creators, anticipating a dynamic fusion of AI and 3D design.
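Since the surveyed algorithms are screened by their ability to export OBJ models, a minimal sketch of the Wavefront OBJ text format may clarify what such output looks like; this emits vertex and face records only, whereas real exporters also write normals, UVs, and material references.

```python
def export_obj(vertices, faces):
    """Serialize a triangle mesh to Wavefront OBJ text.

    vertices: iterable of (x, y, z) coordinates.
    faces: iterable of (i, j, k) zero-based vertex indices.
    OBJ face indices are 1-based, hence the +1 below.
    """
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    lines += [f"f {a + 1} {b + 1} {c + 1}" for a, b, c in faces]
    return "\n".join(lines) + "\n"
```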

REVIEW OF DIFFUSION MODELS: THEORY AND APPLICATIONS

  • HYUNGJIN CHUNG;HYELIN NAM;JONG CHUL YE
    • Journal of the Korean Society for Industrial and Applied Mathematics / Vol. 28, No. 1 / pp.1-21 / 2024
  • This review comprehensively explores the evolution, theoretical underpinnings, variations, and applications of diffusion models. Originating as a generative framework, diffusion models have rapidly ascended to the forefront of machine learning research, owing to their exceptional capability, stability, and versatility. We dissect the core principles driving diffusion processes, elucidating their mathematical foundations and the mechanisms by which they iteratively refine noise into structured data. We highlight pivotal advancements and the integration of auxiliary techniques that have significantly enhanced their efficiency and stability. Variants such as bridges that broaden the applicability of diffusion models to wider domains are introduced. We put special emphasis on the ability of diffusion models as a crucial foundation model, with modalities ranging from image, 3D assets, and video. The role of diffusion models as a general foundation model leads to its versatility in many of the downstream tasks such as solving inverse problems and image editing. Through this review, we aim to provide a thorough and accessible compendium for both newcomers and seasoned researchers in the field.
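The iterative refinement of noise into structured data described above can be sketched, for the DDPM variant, as repeated application of one ancestral sampling step; this scalar version assumes the common choice sigma_t^2 = beta_t and takes the network's noise prediction eps_pred as given.

```python
import math

def ddpm_reverse_step(x_t, eps_pred, beta_t, alpha_bar_t, noise=0.0):
    """One DDPM ancestral sampling step for a scalar x_t:

    x_{t-1} = (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps_pred) / sqrt(alpha_t)
              + sqrt(beta_t) * noise

    where alpha_t = 1 - beta_t and noise ~ N(0, 1) (zero at the final step).
    """
    alpha_t = 1.0 - beta_t
    mean = (x_t - beta_t / math.sqrt(1.0 - alpha_bar_t) * eps_pred) \
        / math.sqrt(alpha_t)
    return mean + math.sqrt(beta_t) * noise
```

Running this step from t = T down to t = 1, with a fresh noise sample at each step, is the sampling loop that turns Gaussian noise into a data sample.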

Membership Inference Attack against Text-to-Image Model Based on Generating Adversarial Prompt Using Textual Inversion

  • 오윤주;박소희;최대선
    • Journal of the Korea Institute of Information Security & Cryptology / Vol. 33, No. 6 / pp.1111-1123 / 2023
  • As generative models have advanced, research on attacks that threaten them has also been actively pursued. This paper introduces a new method for membership inference attacks against text-to-image models. Existing membership inference attacks on text-to-image models inferred membership by generating a single image from the caption of the query image. In contrast, this paper proposes a membership inference attack that uses an embedding personalized to the query image via Textual Inversion and efficiently generates multiple images with an adversarial prompt generation method. In addition, we conduct the first membership inference attack on Stable Diffusion, a prominent text-to-image model, achieving an accuracy of up to 1.00.
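A membership inference attack of this kind ultimately reduces to a decision rule over the generated images; the toy threshold rule below is illustrative only (the similarity scores, threshold, and decision logic are not the paper's method, which builds on Textual Inversion embeddings and adversarial prompts).

```python
def infer_membership(similarities, threshold=0.8):
    """Toy membership decision: if any image generated from the
    crafted prompts is sufficiently similar to the query image,
    flag the query as a likely training-set member.

    similarities: per-generated-image similarity scores in [0, 1].
    """
    return max(similarities) >= threshold

def attack_accuracy(decisions, labels):
    """Fraction of membership decisions matching ground-truth labels."""
    correct = sum(d == l for d, l in zip(decisions, labels))
    return correct / len(labels)
```

Generating several images per query (as the proposed method does) gives the max-based rule more chances to surface a close reconstruction for true members.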

3D Object Generation and Renderer System based on VAE ResNet-GAN

  • Min-Su Yu;Tae-Won Jung;GyoungHyun Kim;Soonchul Kwon;Kye-Dong Jung
    • International Journal of Advanced Smart Convergence / Vol. 12, No. 4 / pp.142-146 / 2023
  • We present a method for generating 3D structures and rendering objects by combining VAE (Variational Autoencoder) and GAN (Generative Adversarial Network). This approach focuses on generating and rendering 3D models with improved quality using residual learning as the learning method for the encoder. We deep stack the encoder layers to accurately reflect the features of the image and apply residual blocks to solve the problems of deep layers to improve the encoder performance. This solves the problems of gradient vanishing and exploding, which are problems when constructing a deep neural network, and creates a 3D model of improved quality. To accurately extract image features, we construct deep layers of the encoder model and apply the residual function to learning to model with more detailed information. The generated model has more detailed voxels for more accurate representation, is rendered by adding materials and lighting, and is finally converted into a mesh model. 3D models have excellent visual quality and accuracy, making them useful in various fields such as virtual reality, game development, and metaverse.
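The residual blocks mentioned above mitigate vanishing gradients through an identity shortcut, y = f(x) + x; a minimal non-neural sketch shows how the input signal survives arbitrary depth even when the learned branch contributes nothing.

```python
def residual_block(x, f):
    """y = f(x) + x: the identity shortcut carries the input forward
    regardless of what the learned branch f contributes."""
    return f(x) + x

def deep_stack(x, f, depth):
    """Compose `depth` residual blocks sharing the same branch f."""
    for _ in range(depth):
        x = residual_block(x, f)
    return x
```

In a plain (non-residual) stack, a branch that outputs near-zero would collapse the signal after a few layers; with shortcuts, the identity path keeps both activations and gradients flowing, which is why the paper can deepen its encoder safely.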

A Research on Aesthetic Aspects of Checkpoint Models in [Stable Diffusion]

  • Ke Ma;Jeanhun Chung
    • International Journal of Advanced Smart Convergence / Vol. 13, No. 2 / pp.130-135 / 2024
  • The Stable Diffusion AI tool is popular among designers because of its flexible and powerful image generation capabilities. However, because of the diversity of its AI models, much time must be spent testing different models against different design plans, so choosing a suitable general-purpose AI model is currently a significant problem. In this paper, by comparing the AI images generated by two different Stable Diffusion models, the advantages and disadvantages of each model are analyzed in terms of how well the image matches the prompt and the color and lighting composition of the image, and a general-purpose model whose generated images have aesthetic quality is identified, so that designers can obtain a satisfactory AI image without cumbersome steps. The results show that the Playground V2.5 model can serve as a general-purpose AI model with both aesthetic and design qualities across various style requirements. As a result, content designers can focus more on creative content development, and more groundbreaking technologies are expected to merge generative AI with content design.

Face Recognition on Complex Backgrounds Using a Neural Network

  • 한준희;남기환;박호식;이영식;정연길;나상동;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / Korea Institute of Maritime Information and Communication Sciences, 2005 Spring Comprehensive Conference / pp.1149-1152 / 2005
  • Detecting faces in images with complex backgrounds is very difficult. This paper proposes a Constrained Generative Model (CGM) based on a neural network model. Generation, the goal of the training process, is to have the neural network compute the probability of producing the input data, and a fast detection algorithm is proposed to reduce computation time. To detect profile faces and reduce the number of false positives, a neural network with mixed conditions was used, and constraints based on counter-examples increased the measurement quality of the model. The proposed detection algorithm achieved a detection rate of about 90% for face angles between 0° and 60°.


Evaluation of Sentimental Texts Automatically Generated by a Generative Adversarial Network

  • 박천용;최용석;이공주
    • KIPS Transactions on Software and Data Engineering / Vol. 8, No. 6 / pp.257-264 / 2019
  • Deep learning models have recently shown strong performance in natural language processing. Improving the performance of such models requires a large amount of data, but because collecting large datasets takes considerable manpower and time, data augmentation can relieve this problem. Sentence data, however, is harder to transform than image data, so we attempt automatic sentence-data augmentation with a generative model capable of producing diverse sentences. In this study, we generated new sentences from the training data using CS-GAN, a generative adversarial network that has shown good performance for image generation, and evaluated their usefulness with various metrics. The evaluation showed that CS-GAN generated more diverse sentences than an existing language model, and that training a sentiment classifier on the generated sentences improved the classifier's performance.
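The diversity of generated sentences can be quantified with a metric such as distinct-n, the ratio of unique n-grams to total n-grams across the generated corpus; this is a common choice for such evaluations, not necessarily the metric the paper used.

```python
def distinct_n(sentences, n=2):
    """Distinct-n: unique n-grams / total n-grams over all sentences.
    Higher values indicate more diverse generated text."""
    total = 0
    unique = set()
    for s in sentences:
        tokens = s.split()
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(grams)
        unique.update(grams)
    return len(unique) / total if total else 0.0
```

A generator that keeps repeating the same phrases scores low, while one producing varied sentences, as reported for CS-GAN, scores closer to 1.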