• Title/Summary/Keyword: user-generated image


A Study on Participatory Culture of Korean Webtoon Focused on User-Generated Images - (한국 웹툰의 참여 문화 연구 - 사용자 생성 이미지를 중심으로 -)

  • Kim, Juna;Kim, Su-Jin
    • Cartoon and Animation Studies / s.44 / pp.307-331 / 2016
  • Webtoon is among the most popular cultural contents representing contemporary Korea. This study explores the participatory culture surrounding webtoon and reveals its cultural implications in contemporary Korea. In particular, it notes that the engagement of this participatory culture is formed from user-generated images and analyzes their reproduction patterns. Chapter 2 analyzes the mimicking process of user-generated images based on the concept of the 'meme'. Based on the degree of variation of text or image, the user-generated images can be classified into three types: 'completely variant', 'partly variant', and 'completely same'. Users deploy these images as a fun factor by transplanting them into daily messenger conversations. Chapter 3 reveals the cultural meaning derived from the process of creating user-generated images. In particular, this study notes that most user-generated images mimic the main character of the original webtoon, and analyzes the underlying desire of the mass on the basis of Northrop Frye's literary theory. The main readers of webtoon are the petit bourgeois living in Korean metropolitan areas, and the user-generated images reflect the daily lives of these ordinary people. User-generated images imitate the original contents by replicating or mutating images or texts, and they are consumed and enjoyed as an amusing code among users. Especially by mimicking the appearance of the main character in a self-reflective way, they appeal to users' day-to-day sympathy. In that user-generated images reveal the desires of the public living in contemporary Korea, this study examines the cultural implications of webtoon.

Comparative Analysis of AI Painting Using [Midjourney] and [Stable Diffusion] - A Case Study on Character Drawing -

  • Pingjian Jie;Xinyi Shan;Jeanhun Chung
    • International Journal of Advanced Culture Technology / v.11 no.2 / pp.403-408 / 2023
  • The widespread discussion of AI-generated content, fueled by the emergence of consumer applications such as ChatGPT and Midjourney, has attracted significant attention. Among various AI applications, AI painting has gained popularity due to its mature technology, user-friendly nature, and excellent output quality, resulting in rapid growth in user numbers. Midjourney and Stable Diffusion are two of the most widely used AI painting tools. In this study, the authors adopt the perspective of the general public and use case studies and comparative analysis to summarize the distinctive features of, and differences between, Midjourney and Stable Diffusion in the context of AI character illustration. The aim is to provide informative material for those interested in AI painting and to lay a solid foundation for further in-depth research on AI-generated content. The research findings indicate that both tools can generate excellent character images, but with distinct features.

Understanding Brand Image from Consumer-generated Hashtags

  • Park, Keeyeon Ki-cheon;Kim, Hye-jin
    • Asia Marketing Journal / v.22 no.3 / pp.71-85 / 2020
  • Social media has emerged as a major hub of engagement between brands and consumers in recent years, and it allows user-generated content to serve as a powerful means of encouraging communication between the two sides. However, user-generated content is difficult to analyze owing to its lack of structure and the enormous amount generated. This study focuses on the hashtag, a metadata tag that reflects customers' brand perception on social media platforms. Online users share their knowledge and impressions using a wide variety of hashtags. We examine hashtags that co-occur with particular branded hashtags on the social media platform Instagram to derive insights about brand perception. We apply text mining and network analysis to identify consumers' perceptions of brand images on the site, which helps distinguish among the diverse personalities of the brands. This study contributes to highlighting the value of hashtags in constructing brand personality in the context of online marketing.
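The co-occurrence analysis described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' method: the post data, the brand tags, and the simple counting are all hypothetical, and a real study would run this over collected Instagram data with a proper network-analysis library.

```python
from collections import Counter
from itertools import combinations

# Hypothetical posts: each post is modeled as the set of hashtags it carries.
posts = [
    {"#brandX", "#running", "#fitness"},
    {"#brandX", "#sneakers", "#style"},
    {"#brandX", "#running", "#marathon"},
    {"#brandY", "#style"},
]

def cooccurring_tags(posts, brand_tag):
    """Count hashtags that co-occur with a given branded hashtag."""
    counts = Counter()
    for tags in posts:
        if brand_tag in tags:
            counts.update(tags - {brand_tag})
    return counts

def edge_weights(posts):
    """Weighted, undirected co-occurrence edges for a simple network analysis."""
    edges = Counter()
    for tags in posts:
        for a, b in combinations(sorted(tags), 2):
            edges[(a, b)] += 1
    return edges

brand_profile = cooccurring_tags(posts, "#brandX")
```

The per-brand tag profile plays the role of the brand image, and the edge weights give a co-occurrence network on which centrality or clustering measures could then be computed.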

A Study on Authentication using Image Synthesis (이미지 합성을 이용한 인증에 대한 연구)

  • Kim, Suhee;Park, Bongjoo
    • Convergence Security Journal / v.4 no.3 / pp.19-25 / 2004
  • This research develops and implements an algorithm using image synthesis for a server to authenticate users. The server creates cards with random dots for users and distributes them to the users, and it also manages the information of the distributed cards. When there is an authentication request from a user, the server creates a server card in real time based on the information of that user's card and sends it to the user. A different server card is generated for each authentication, so the server card plays the role of a one-time password challenge. The user overlaps his/her card with the server card, reads the image (e.g., a four-digit number) formed by the overlay, and inputs it to the system; this is the authentication process. While keeping the security level high, this paper proposes and implements a technique to generate the image clearly.
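The challenge-response idea in this abstract can be illustrated with a visual-cryptography-style sketch. Everything here is an assumption for illustration: the paper's actual dot-card format and image-generation technique are not given in the abstract, and a plain XOR overlay stands in for the real scheme.

```python
import secrets

GRID = 16  # hypothetical card size in cells

def make_user_card():
    """Random dot card generated by the server and distributed to a user in advance."""
    return [[secrets.randbelow(2) for _ in range(GRID)] for _ in range(GRID)]

def make_server_card(user_card, secret_pattern):
    """Per-login server card: overlaying it on the user card reveals the pattern.

    A fresh pattern is chosen per request, so the card acts as a one-time challenge.
    """
    return [[user_card[r][c] ^ secret_pattern[r][c] for c in range(GRID)]
            for r in range(GRID)]

def overlay(card_a, card_b):
    """Model of the user physically overlapping the two cards (XOR of dots)."""
    return [[card_a[r][c] ^ card_b[r][c] for c in range(GRID)]
            for r in range(GRID)]
```

Because the server card is the XOR of the user card and the secret pattern, only someone holding the matching user card can reconstruct the pattern from the overlay.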

Hardware Accelerated Design on Bag of Words Classification Algorithm

  • Lee, Chang-yong;Lee, Ji-yong;Lee, Yong-hwan
    • Journal of Platform Technology / v.6 no.4 / pp.26-33 / 2018
  • In this paper, we propose an image retrieval algorithm for real-time processing and design it as hardware. The proposed method is based on the classification of the BoWs (Bag of Words) algorithm and performs image search using bit streams. K-fold cross validation is used to verify the algorithm. Data are classified into seven classes; each class has seven images, and a total of 49 images are tested. The test measures both accuracy and speed. The image classification accuracy was 86.2% for the BoWs algorithm and 83.7% for the proposed hardware-accelerated implementation, so the BoWs algorithm was 2.5% higher. The image retrieval processing speed of BoWs is 7.89 s and that of our algorithm is 1.55 s, making our algorithm 5.09 times faster. The algorithm is largely divided into software and hardware parts. The software part is written in C; the Scale Invariant Feature Transform algorithm is used to extract feature points that are invariant to size and rotation from the image, and bit streams are generated from the extracted feature points. In the hardware architecture, the proposed image retrieval algorithm is written in Verilog HDL and is designed and verified with an FPGA and Design Compiler. The generated bit streams are stored, the clustering step is performed, and a search image database or an input image database is generated and matched. Using the proposed algorithm, we can improve user convenience and satisfaction in terms of speed when searching with a database matching method that represents each object.
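The bag-of-words-to-bit-stream pipeline can be sketched in software. The following is a toy illustration only: the centroids stand in for a visual vocabulary that would be learned by clustering SIFT descriptors, and the above-mean binarization rule is an assumption, since the abstract does not specify how the bit streams are encoded.

```python
def nearest(vec, centroids):
    """Index of the closest visual word (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(vec, centroids[i])))

def bow_histogram(descriptors, centroids):
    """Bag-of-words histogram: count descriptors assigned to each visual word."""
    hist = [0] * len(centroids)
    for d in descriptors:
        hist[nearest(d, centroids)] += 1
    return hist

def to_bitstream(hist):
    """Binarize the histogram: 1 where a bin is above the mean count (assumed rule)."""
    mean = sum(hist) / len(hist)
    return [1 if h > mean else 0 for h in hist]

def hamming(a, b):
    """Hamming distance between two bit streams, used for fast matching."""
    return sum(x != y for x, y in zip(a, b))
```

Matching by Hamming distance over fixed-length bit streams is what makes the hardware design attractive: it reduces database comparison to XOR and popcount, which map directly to simple logic.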

Evaluation of Geo-based Image Fusion on Mobile Cloud Environment using Histogram Similarity Analysis

  • Lee, Kiwon;Kang, Sanggoo
    • Korean Journal of Remote Sensing / v.31 no.1 / pp.1-9 / 2015
  • Mobile and cloud platforms have become the dominant paradigm for developing web services that deal with huge and diverse digital contents for scientific or engineering applications. These two trends are technically combined into the mobile cloud computing environment, which takes the benefits of each. The intention of this study is to design and implement a mobile cloud application for remotely sensed image fusion for further practical geo-based mobile services. In this implementation, the system architecture consists of two parts: a mobile web client and a cloud application server. The mobile web client provides the user interface for the image fusion application, image visualization, and the mobile web service for data listing and browsing. The cloud application server runs on OpenStack, an open-source cloud platform, on which three server instances are generated: a web server instance, a tiling server instance, and a fusion server instance. After metadata browsing of the processing data, image fusion by a Bayesian approach is performed using functions within Orfeo Toolbox (OTB), an open-source remote sensing library. In addition, the similarity of the fused images with respect to the input image set is estimated by histogram distance metrics. This result can be used as a reference criterion for the user's parameter choice in Bayesian image fusion. The implementation strategy for a mobile cloud application based on fully open sources offers a good basis for mobile services supporting specific remote sensing functions, beyond image fusion schemes, that can expand remote sensing application fields according to user demand.
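The histogram-distance step can be sketched independently of the cloud stack. This sketch assumes 8-bit grayscale images given as flat pixel lists and uses two common metrics (histogram intersection and chi-square distance); the abstract does not say which distance metrics the authors actually chose.

```python
def normalized_histogram(pixels, bins=16, max_val=256):
    """Binned, normalized intensity histogram of an 8-bit image (flat pixel list)."""
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // max_val] += 1
    return [h / len(pixels) for h in hist]

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def chi_square(h1, h2):
    """Chi-square distance: 0.0 means identical distributions."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)
```

Comparing the histogram of a fused image against each input band's histogram with such metrics gives a cheap, parameter-free check of how strongly a given fusion setting distorts the radiometry.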

COMMUNITY-GENERATED ONLINE IMAGE DICTIONARY

  • Li, Guangda;Li, Haojie;Tang, Jinhui;Chua, Tat-Seng
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.178-183 / 2009
  • Online image dictionaries have become increasingly popular for concept cognition. However, in existing online systems, only very few images are manually picked to demonstrate the concepts, and there is currently very little research on automatically choosing large-scale online images with the help of semantic analysis. In this paper, we propose a novel framework that utilizes community-generated online multimedia content to visually illustrate given concepts. Our proposed framework adopts various techniques, including correlation analysis and semantic and visual clustering, to produce sets of high-quality, precise, diverse, and representative images that visually translate a given concept. To make the best use of our results, a user interface is deployed that displays the representative images according to their latent semantic coherence. Objective and subjective evaluations show the feasibility and effectiveness of our approach.

Interactive prostate shape reconstruction from 3D TRUS images

  • Furuhata, Tomotake;Song, Inho;Zhang, Hong;Rabin, Yoed;Shimada, Kenji
    • Journal of Computational Design and Engineering / v.1 no.4 / pp.272-288 / 2014
  • This paper presents a two-step, semi-automated method for reconstructing a three-dimensional (3D) shape of the prostate from a 3D transrectal ultrasound (TRUS) image. While the method has been developed for prostate ultrasound imaging, it is potentially applicable to any other organ of the body and to other imaging modalities. The proposed method takes as input a 3D TRUS image and generates a watertight 3D surface model of the prostate. In the first step, the system lets the user visualize and navigate through the input volumetric image by displaying cross-sectional views oriented in arbitrary directions. The user then draws partial or full contours on selected cross-sectional views. In the second step, the method automatically generates a watertight 3D surface of the prostate by fitting a deformable spherical template to the set of user-specified contours. Since the method allows the user to select the best cross-sectional directions and draw only clearly recognizable partial or full contours, the user can avoid time-consuming and inaccurate guesswork about where the prostate contours are located. By avoiding the noisy, incomprehensible portions of the TRUS image, the proposed method yields more accurate prostate shapes than conventional methods that demand complete cross-sectional contours selected manually, or automatically using an image processing tool. Our experiments confirmed that a 3D watertight surface of the prostate can be generated within five minutes, even from a volumetric image with a high level of speckle and shadow noise.

Automatic Composition Algorithm based on Fractal Tree (프랙탈 트리를 이용한 자동 작곡 방법)

  • Kwak, Sung-Ho;Yoo, Min-Joon;Lee, In-Kwon
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.618-622 / 2008
  • In this paper, we suggest a new music composition algorithm based on fractal theory. The user can define and control the fractal shape by setting an initial state and production rules in an L-system. We generate an asymmetric fractal tree based on the L-system and probability, and then music is generated from the fractal tree image using sonification techniques. We introduce two composition algorithms using the fractal tree. First, monophonic music can be generated by mapping the x and y axes to velocity and pitch, respectively. Second, harmonic music can be generated by mapping the x and y axes to time and pitch, respectively. Using our composition algorithm, the user can easily generate music with repeated patterns created by the recursive nature of the fractal, and music whose structure is similar to the fractal tree image.
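The path from an L-system tree to notes can be sketched as follows. This is a hypothetical reading of the second (time/pitch) mapping: the axiom, the rules, and the branch-depth-to-pitch step are invented for illustration, and the probabilistic branching and actual sound rendering of the paper are not included.

```python
def expand(axiom, rules, depth):
    """Rewrite the axiom with the L-system production rules `depth` times."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def to_notes(lsystem, base_pitch=60):
    """Sonify the tree: symbol order maps to time, branch depth maps to pitch.

    'F' emits a note, '[' pushes a branch (raising pitch), ']' pops back.
    """
    notes, depth, t, stack = [], 0, 0, []
    for ch in lsystem:
        if ch == "F":
            notes.append((t, base_pitch + depth))  # (time step, MIDI-style pitch)
            t += 1
        elif ch == "[":
            stack.append(depth)
            depth += 2
        elif ch == "]":
            depth = stack.pop()
    return notes

melody = to_notes(expand("F", {"F": "F[F]F"}, 2))
```

Because each rewriting pass reapplies the same rule, the note sequence inherits the self-similar, repeated patterns that the abstract attributes to the fractal structure.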

Semi-automatic 3D Building Reconstruction from Uncalibrated Images (비교정 영상에서의 반자동 3차원 건물 모델링)

  • Jang, Kyung-Ho;Jang, Jae-Seok;Lee, Seok-Jun;Jung, Soon-Ki
    • Journal of Korea Multimedia Society / v.12 no.9 / pp.1217-1232 / 2009
  • In this paper, we propose a semi-automatic 3D building reconstruction method using uncalibrated images that include the facade of the target building. First, we extract feature points in all images and find corresponding points between each pair of images. Second, we extract lines in each image and estimate the vanishing points; the extracted lines are grouped with respect to their corresponding vanishing points. An adjacency graph is used to organize the image sequence based on the number of corresponding points between image pairs, and camera calibration is performed. An initial solid model can be generated through some user interaction using the grouped lines and the camera pose information. From the initial solid model, a detailed building model is reconstructed by a combination of predefined basic Euler operators on a half-edge data structure. Automatically computed geometric information is visualized to support the user's interaction during the detail modeling process. The proposed system allows the user to obtain a 3D building model with less interaction by augmenting various automatically generated geometric information.
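The line-grouping step can be sketched with homogeneous coordinates: a segment supports a vanishing point if the infinite line through the segment passes near that point. The segments, candidate vanishing points, and tolerance below are hypothetical; the abstract does not describe the paper's actual estimation procedure.

```python
def line_through(p, q):
    """Homogeneous line (a, b, c) through two 2D points, ax + by + c = 0."""
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def point_line_dist(point, line):
    """Perpendicular distance from a 2D point to a homogeneous line."""
    a, b, c = line
    x, y = point
    return abs(a * x + b * y + c) / (a * a + b * b) ** 0.5

def group_by_vanishing_point(segments, vps, tol=5.0):
    """Assign each segment to the closest vanishing point within tolerance."""
    groups = {i: [] for i in range(len(vps))}
    for seg in segments:
        line = line_through(*seg)
        dists = [point_line_dist(vp, line) for vp in vps]
        i = min(range(len(vps)), key=lambda k: dists[k])
        if dists[i] < tol:
            groups[i].append(seg)
    return groups
```

Grouping lines this way separates the facade's horizontal and vertical edge families, which is what makes the subsequent calibration and solid-model construction tractable.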
