Title/Summary/Keyword: Text image

Web Image Clustering with Text Features and Measuring its Efficiency

  • Cho, Soo-Sun
    • Journal of Korea Multimedia Society / v.10 no.6 / pp.699-706 / 2007
  • This article presents an approach to improving the clustering of Web images by using high-level semantic features drawn from text information relevant to Web images, as well as low-level visual features of the images themselves. The high-level text features can be obtained from image URLs and file names, page titles, hyperlinks, and surrounding text. As the clustering algorithm, the self-organizing map (SOM) proposed by Kohonen is used. To evaluate the clustering efficiency of SOMs, we propose a simple but effective measure indicating how strongly images of the same class accumulate and how perplexed the class distributions are. Our approach advances existing measures by defining and using two new ones, accumulativeness on the best clustering node and concentricity, to evaluate the clustering efficiency of SOMs. The experimental results show that the high-level text features are more useful in SOM-based Web image clustering.
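
The clustering engine the abstract names is Kohonen's SOM. As a rough, self-contained illustration (not the paper's implementation), the NumPy sketch below trains a small map on combined text/visual feature vectors; the feature extraction and the proposed evaluation measures are assumed to happen upstream and downstream.

```python
import numpy as np

def train_som(features, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen SOM; `features` is an (n_samples, n_dims) array of
    combined text + visual feature vectors (hypothetical input)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, features.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]          # grid coordinates for the neighborhood
    n_steps = epochs * len(features)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(features):
            lr = lr0 * (1 - t / n_steps)              # decaying learning rate
            sigma = sigma0 * (1 - t / n_steps) + 1e-3  # shrinking neighborhood
            # Best-matching unit: node whose weight vector is closest to x.
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood around the BMU pulls nearby nodes toward x.
            g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[:, :, None] * (x - weights)
            t += 1
    return weights

# Toy usage: 200 random 64-dimensional "text + visual" vectors.
som = train_som(np.random.rand(200, 64))
```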


Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu;Haihai Wei;Li Ma;Qingji Xue;Yonghui Fu
    • Journal of Information Processing Systems / v.19 no.4 / pp.427-438 / 2023
  • Many works have indicated that single image super-resolution (SISR) models relying on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The most up-to-date dataset for realistic STISR is TextZoom, but the current methods trained on it have not considered the effect of multi-scale features of text images. In this paper, a multi-scale and attention fusion model for realistic STISR is proposed. A multi-scale learning mechanism is introduced to acquire sophisticated feature representations of text images; spatial and channel attention are introduced to capture local information and inter-channel interactions. Finally, the paper designs a multi-scale residual attention module that fuses multi-scale learning with the attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average recognition accuracy of the ASTER scene text recognizer by 1.2% compared to the text super-resolution network baseline.
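
The abstract combines multi-scale convolution with spatial and channel attention inside a residual block. The PyTorch sketch below is one plausible reading of such a module, not the authors' architecture; the channel counts, kernel sizes, and the SE/CBAM-style attention layers are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleResidualAttention(nn.Module):
    """Sketch of a multi-scale residual attention block: parallel convolutions
    at different receptive fields, fused and reweighted by channel and spatial
    attention, with a residual connection."""
    def __init__(self, ch=64):
        super().__init__()
        self.branch3 = nn.Conv2d(ch, ch, 3, padding=1)   # fine-scale features
        self.branch5 = nn.Conv2d(ch, ch, 5, padding=2)   # coarser-scale features
        self.fuse = nn.Conv2d(2 * ch, ch, 1)             # merge the two scales
        # SE-style channel attention.
        self.ca = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 8, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 8, ch, 1), nn.Sigmoid(),
        )
        # CBAM-style spatial attention over pooled channel statistics.
        self.sa = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        f = self.fuse(torch.cat([self.branch3(x), self.branch5(x)], dim=1))
        f = f * self.ca(f)                               # channel reweighting
        pooled = torch.cat([f.mean(1, keepdim=True),
                            f.amax(1, keepdim=True)], dim=1)
        f = f * self.sa(pooled)                          # spatial reweighting
        return x + f                                     # residual connection

block = MultiScaleResidualAttention(64)
print(block(torch.randn(1, 64, 16, 64)).shape)  # torch.Size([1, 64, 16, 64])
```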

Skewed Angle Detection in Text Images Using Orthogonal Angle View

  • Chin, Seong-Ah;Choo, Moon-Won
    • Proceedings of the IEEK Conference / 2000.07a / pp.62-65 / 2000
  • In this paper we propose skew angle detection methods for images that contain text not aligned horizontally. In most images, text areas are aligned along the horizontal axis; however, there are many occasions when the text may be at a skewed angle θ (with 0 < θ ≤ π). In the work described, we adapt the Hough transform and the Shadow and Threshold Projection methods to detect the skew angle of text in an input image using the orthogonal angle view property. The result of this method is a primary text skew angle, which allows us to rotate the original input image into one with horizontally aligned text. This supports document image processing prior to the recognition stage.
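
Of the techniques named, the Hough transform is the most standard; a minimal OpenCV sketch of Hough-based skew estimation and correction follows. The Shadow and Threshold Projection variants and the orthogonal angle view property are not reproduced here, and the thresholds are illustrative.

```python
import cv2
import numpy as np

def detect_skew_and_deskew(path):
    """Estimate the dominant text angle with the Hough transform and rotate
    the image back toward horizontal; a generic sketch, not the paper's
    exact pipeline."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=150)
    if lines is None:
        return img, 0.0
    # Each line is (rho, theta); theta is the normal's angle, so the line
    # direction is theta - 90 degrees. Take the median as the skew estimate.
    angles = [np.degrees(l[0][1]) - 90.0 for l in lines]
    skew = float(np.median(angles))
    h, w = img.shape
    # Rotate by the estimated skew (sign may need flipping for a given
    # image-origin convention).
    M = cv2.getRotationMatrix2D((w / 2, h / 2), skew, 1.0)
    deskewed = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR,
                              borderValue=255)
    return deskewed, skew
```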


A Study on the Generation of Webtoons through Fine-Tuning of Diffusion Models

  • Kyungho Yu;Hyungju Kim;Jeongin Kim;Chanjun Chun;Pankoo Kim
    • Smart Media Journal / v.12 no.7 / pp.76-83 / 2023
  • This study proposes a method to assist webtoon artists in the webtoon creation process by using a pretrained text-to-image model to generate webtoon images from text. The proposed approach fine-tunes a pretrained Stable Diffusion model on a webtoon dataset transformed into the desired webtoon style. The fine-tuning process, using the LoRA technique, completes in a quick training time of approximately 4.5 hours over 30,000 steps. The generated images render shapes and backgrounds that follow the input text, resulting in webtoon-like images. Furthermore, a quantitative evaluation using the Inception score shows that the proposed method outperforms DCGAN-based text-to-image models. If webtoon artists adopt the proposed text-to-image model for webtoon creation, it is expected to significantly reduce the time required for the creative process.
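
The abstract's key ingredient is LoRA fine-tuning. The PyTorch sketch below shows only the core LoRA mechanics, a frozen pretrained weight plus a trainable low-rank update, rather than the full Stable Diffusion training loop; the rank and scaling values are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """LoRA idea in miniature: freeze a pretrained linear layer W and learn a
    low-rank update B @ A, so the layer computes W x + (alpha / r) * B A x."""
    def __init__(self, base: nn.Linear, r=4, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False        # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        # B starts at zero, so the wrapped layer is unchanged at step 0.
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Wrapping one attention projection of a diffusion UNet would look like this
# (dimensions are illustrative):
layer = LoRALinear(nn.Linear(320, 320))
out = layer(torch.randn(2, 77, 320))
```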

Consumer responses towards mobile coupon characteristics perception and causal relationships among variables

  • Kim, Jae-Hee;Yoh, Eunah
    • The Research Journal of the Costume Culture / v.28 no.1 / pp.15-29 / 2020
  • The purpose of this study is to explore the effect of the type of mobile coupon (text-focused vs. image-focused; free-gift vs. discount) on the perceived characteristics of mobile coupons, and the causal relationships among characteristic perception, attitude, and use intention. A total of 140 university students participated in experiments with questionnaires, each containing one of the four stimuli. The important findings are as follows. First, image-focused mobile coupons generated more enjoyment than text-focused coupons; however, the text- and image-focused coupons did not differ in perceived informativeness and credibility. Second, enjoyment perception increased significantly when image-focused content was combined with discount coupons, whereas it decreased when text-focused content was combined with free-gift coupons. This interaction effect reflects that consumers' level of enjoyment can change with the combination of the value-provision type of coupon and text- versus image-focused content. Third, consumers' perception of coupon characteristics formed their attitudes toward mobile coupons, and use intention was determined by those attitudes. The findings may fill the void of research investigating the effect of text-image content and coupon type on consumer responses toward mobile coupons. Because mobile coupons carry a limited quantity of information on a small mobile phone screen, the results were not consistent with prior research on mobile advertisements, which indicated an effect of text-image content on the perception of informativeness and credibility.
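
The reported interaction between content type and coupon type is the kind of effect a two-way ANOVA tests. Purely as an illustration of that analysis, with synthetic placeholder data and column names that are this sketch's, not the study's, a statsmodels example:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the 2x2 between-subjects design (n = 140).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "content": np.repeat(["text", "image"], 70),
    "coupon": np.tile(np.repeat(["free_gift", "discount"], 35), 2),
    "enjoyment": rng.normal(4.0, 1.0, 140),   # placeholder ratings
})

# The content x coupon interaction term tests exactly the effect reported.
model = smf.ols("enjoyment ~ C(content) * C(coupon)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```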

Image Steganography to Hide Unlimited Secret Text Size

  • Almazaydeh, Wa'el Ibrahim A.
    • International Journal of Computer Science & Network Security / v.22 no.4 / pp.73-82 / 2022
  • This paper describes the hiding of secret text of unlimited size in an image using three methods: the first is the traditional steganographic method based on concealing the binary values of the text with the least significant bit (LSB) method; the second is a new method that hides the data in an image using an Exclusive OR (XOR) process; and the third is a new method that hides the binary data of the text in an image (grayscale or RGB) using Exclusive OR together with Huffman coding. The new methods demonstrate the hiding of text (data) of unlimited size in an image. The Peak Signal-to-Noise Ratio (PSNR) is used in the research to evaluate the results.
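
The first method, LSB substitution, is standard enough to sketch; the XOR and Huffman-coded variants are the paper's own and are not reproduced here. Below is a minimal NumPy round-trip, where the 16-bit length header is this sketch's convention rather than the paper's:

```python
import numpy as np

def embed_lsb(img: np.ndarray, text: str) -> np.ndarray:
    """Classic LSB embedding: write each bit of the message into the least
    significant bit of successive pixel bytes (MSB-first within each byte)."""
    data = text.encode("utf-8")
    bits = []
    for byte in len(data).to_bytes(2, "big") + data:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    flat = img.flatten()                      # flatten() copies the image
    if len(bits) > flat.size:
        raise ValueError("message does not fit in this cover image")
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.array(bits, dtype=img.dtype)
    return flat.reshape(img.shape)

def extract_lsb(img: np.ndarray) -> str:
    bits = img.flatten() & 1
    to_bytes = lambda b: bytes(int("".join(map(str, b[i:i + 8])), 2)
                               for i in range(0, len(b), 8))
    n = int.from_bytes(to_bytes(bits[:16]), "big")   # read the length header
    return to_bytes(bits[16:16 + 8 * n]).decode("utf-8")

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
assert extract_lsb(embed_lsb(cover, "secret")) == "secret"
```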

A study on Extensions to Music Player MAF for Multiple JPEG images and Text data with Synchronization

  • Yang, Chan-Suk;Lim, Jeong-Yeon;Kim, Mun-Churl
    • Proceedings of the IEEK Conference / 2005.11a / pp.967-970 / 2005
  • The Music Player MAF format of ISO/IEC 23000-2 FDIS consists of MP3 data, MPEG-7 metadata, and one optional JPEG image, based on the MPEG-4 File Format. However, the current Music Player MAF format does not allow multiple JPEG images or timed text data, which would be helpful in various multimedia applications. For example, foreign-language listening material usually needs an accompanying book of text and images; audio content that carries image and text data can help listeners understand the whole story and situation. In this paper, we propose a detailed file structure, built on the MPEG-4 File Format, that improves these functionalities by carrying multiple images and text data with synchronization information between the MP3 data and the other resources.
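
Both MAF and the MPEG-4 File Format are built from ISO base media "boxes" of the form [32-bit size][4-char type][payload], which is the level at which such an extension is defined. A tiny serialization sketch follows; the nesting shown is illustrative only, and the actual box hierarchy required by ISO/IEC 23000-2 is not reproduced.

```python
import struct

def make_box(box_type: bytes, payload: bytes) -> bytes:
    """Serialize one ISO base media box: 32-bit big-endian size (including
    the 8-byte header), 4-character type code, then the payload."""
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

# Illustrative nesting only: a 'moov' container holding a 'meta' box with
# placeholder content standing in for MPEG-7 metadata.
meta = make_box(b"meta", b"\x00" * 4 + b"...MPEG-7 metadata here...")
moov = make_box(b"moov", meta)
print(len(moov), moov[:8])
```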


A Novel Text Sample Selection Model for Scene Text Detection via Bootstrap Learning

  • Kong, Jun;Sun, Jinhua;Jiang, Min;Hou, Jian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.771-789 / 2019
  • Text detection has been a popular research topic in the field of computer vision, yet it is difficult for prevalent text detection algorithms to avoid dependence on training datasets. To overcome this problem, we propose a novel unsupervised text detection algorithm inspired by bootstrap learning. First, text candidates in a novel superpixel form are generated by image segmentation to improve the text recall rate. Second, we propose a unique text sample selection model (TSSM) to extract text samples from the current image and eliminate database dependency. Specifically, to improve the precision of the samples, we combine maximally stable extremal regions (MSERs) and a saliency map to generate sample reference maps with a double threshold scheme. Finally, a multiple kernel boosting method is developed to generate a strong text classifier by combining multiple single-kernel SVMs trained on the samples selected by the TSSM. Experimental results on standard datasets demonstrate that our text detection method is robust to complex backgrounds and multilingual text, and shows stable performance across different standard datasets.
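
One concrete ingredient named is MSER. The OpenCV sketch below shows MSER-based text candidate extraction with a crude geometric filter; the saliency map, double-threshold scheme, TSSM, and multiple kernel boosting are not reproduced, and all thresholds are illustrative.

```python
import cv2
import numpy as np

def mser_text_candidates(path):
    """Detect maximally stable extremal regions and keep the ones with
    text-like geometry; a sketch of one input to the sample reference maps."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    mask = np.zeros_like(gray)
    for pts in regions:
        x, y, w, h = cv2.boundingRect(pts.reshape(-1, 1, 2))
        aspect = w / float(h)
        # Crude text-likeness filter; the bounds are illustrative guesses.
        if 0.1 < aspect < 10 and 8 < h < gray.shape[0] // 2:
            mask[y:y + h, x:x + w] = 255
    return mask
```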

A Fast Algorithm for Korean Text Extraction and Segmentation from Subway Signboard Images Utilizing Smartphone Sensors

  • Milevskiy, Igor;Ha, Jin-Young
    • Journal of Computing Science and Engineering / v.5 no.3 / pp.161-166 / 2011
  • We present a fast algorithm for Korean text extraction and segmentation from subway signboard images that uses smartphone sensors to minimize computation time and memory usage. The algorithm can serve as the preprocessing steps for optical character recognition (OCR): binarization, text location, and segmentation. An image of a signboard, captured by the smartphone camera while the phone is held at an arbitrary angle, is rotated by the detected angle, as if it had been taken with the phone held horizontally. Binarization is performed only once, on a subset of connected components instead of the whole image area, which greatly reduces computation time. Text location is guided by the user's marker line, placed over the region of interest in the binarized image via the touch screen. Text segmentation then reuses the connected-component data from the binarization step and cuts the string into individual character images. The resulting data can be used as OCR input, thereby solving the most difficult part of OCR for text in natural scene images. The experimental results showed that the binarization step of our method is 3.5 and 3.7 times faster than the Niblack and Sauvola adaptive-thresholding algorithms, respectively. In addition, our method achieved better quality than the other methods.
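
A rough sketch of two steps the abstract describes, sensor-guided rotation and marker-guided connected-component segmentation, using OpenCV; the function names and parameters are this sketch's assumptions, not the authors' code.

```python
import cv2
import numpy as np

def deskew_by_sensor(img, roll_degrees):
    """Rotate the photo by the tilt angle reported by the phone's orientation
    sensor, so the signboard text becomes horizontal before binarization."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), -roll_degrees, 1.0)
    return cv2.warpAffine(img, M, (w, h))

def segment_marked_line(binary, y_marker):
    """Keep only connected components that the user's marker line (image row
    y_marker) passes through, then cut the string into per-character crops."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    hit = set(np.unique(labels[y_marker])) - {0}   # components on the marker row
    chars = [stats[i] for i in hit]                # (x, y, w, h, area) rows
    chars.sort(key=lambda s: s[cv2.CC_STAT_LEFT])  # left-to-right reading order
    return [binary[s[1]:s[1] + s[3], s[0]:s[0] + s[2]] for s in chars]
```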

Detecting and Segmenting Text from Images for a Mobile Translator System

  • Chalidabhongse, Thanarat H.;Jeeraboon, Poonsak
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2004.08a / pp.875-878 / 2004
  • Text detection and segmentation have long been researched in the OCR area; however, there are other areas where detecting and segmenting text from images can be very useful. In this report, we first propose the design of a mobile translator system that helps non-native speakers understand a foreign language using a ubiquitous mobile network and camera phones. The main focus of the paper is the algorithm for detecting and segmenting text embedded in natural scenes in the captured images. An image captured by a camera phone is transmitted to a translator server, where it first passes through preprocessing steps that smooth the image and suppress noise. A threshold is applied to binarize the image. Afterward, an edge detection algorithm and connected component analysis are performed on the filtered image to find edges and segment the components. Finally, predefined layout relation constraints are used to decide which components are likely to be text. A preliminary experiment yielded a recognition rate of 94.44% on a set of 36 varied natural scene images containing text.
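
The server-side pipeline (smooth, binarize, connected components, layout constraints) maps naturally onto OpenCV primitives. A hedged sketch, with Otsu thresholding standing in for the unspecified threshold and illustrative stand-ins for the layout constraints:

```python
import cv2

def find_text_components(path):
    """Smooth, binarize, run connected-component analysis, then apply simple
    layout constraints to keep text-like components; thresholds are guesses."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress noise
    _, binary = cv2.threshold(smooth, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for x, y, w, h, area in stats[1:]:                  # skip background label 0
        aspect, fill = w / float(h), area / float(w * h)
        # Layout-relation stand-ins: text components tend to have moderate
        # aspect ratios and fill factors, and not span the whole image.
        if 0.1 < aspect < 15 and 0.1 < fill < 0.95 and h < gray.shape[0] // 3:
            boxes.append((x, y, w, h))
    return boxes
```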
