• Title/Abstract/Keywords: Text image


Web Image Clustering with Text Features and Measuring its Efficiency

  • Cho, Soo-Sun
    • Journal of Korea Multimedia Society
    • /
    • Vol. 10, No. 6
    • /
    • pp.699-706
    • /
    • 2007
  • This article improves the clustering of Web images by using high-level semantic features drawn from text relevant to the images, in addition to low-level visual features of the images themselves. The text features can be obtained from image URLs and file names, page titles, hyperlinks, and surrounding text. As the clustering algorithm, Kohonen's self-organizing map (SOM) is used. To evaluate the clustering efficiency of SOMs, we propose a simple but effective measure that captures how images of the same class accumulate on nodes and how perplexed the class distributions are. Our approach extends the existing measures by defining and using two new ones, accumulativeness on the best clustering node and concentricity, to evaluate the clustering efficiency of SOMs. The experimental results show that the high-level text features are more useful in SOM-based Web image clustering.

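The SOM clustering step described above can be sketched in a few lines. This is a generic Kohonen map, not the authors' exact configuration: the grid size, feature dimensionality, and learning/neighborhood schedules are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Train a small Kohonen self-organizing map on row-vector features."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    # (row, col) coordinate of each map node, used by the neighborhood kernel
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood radius
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            theta = np.exp(-d2 / (2 * sigma ** 2))                # neighborhood weight
            weights += lr * theta[:, None] * (x - weights)
    return weights

def assign_cluster(weights, x):
    """Map a feature vector to its best-matching node, i.e. its cluster."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```

Images whose (visual or text) feature vectors land on the same node form one cluster; the paper's measures then score how same-class images accumulate on those nodes.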

Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu;Haihai Wei;Li Ma;Qingji Xue;Yonghui Fu
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 4
    • /
    • pp.427-438
    • /
    • 2023
  • Many works have shown that single image super-resolution (SISR) models relying on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The most recent dataset for realistic STISR is TextZoom, but the methods trained on it so far have not considered the effect of multi-scale features of text images. In this paper, a multi-scale and attention fusion model for realistic STISR is proposed. A multi-scale learning mechanism is introduced to acquire rich feature representations of text images; spatial and channel attention are introduced to capture local information and inter-channel interactions; finally, a multi-scale residual attention module is designed by fusing the multi-scale learning and attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average recognition accuracy of the ASTER scene text recognizer by 1.2% compared with the text super-resolution network.
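The channel and spatial attention mentioned in the abstract can be illustrated with a minimal NumPy sketch. The squeeze-and-excite-style channel gate and the mean/max spatial pooling are common conventions, not the paper's exact module; layer shapes are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """feat: (C, H, W). Global average pool -> two small dense layers
    -> a per-channel gate in (0, 1) that reweights the channels."""
    squeeze = feat.mean(axis=(1, 2))                     # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """Pool across channels (mean and max), then gate every spatial
    position; the mean here stands in for a learned small conv."""
    pooled = np.stack([feat.mean(axis=0), feat.max(axis=0)])  # (2, H, W)
    gate = sigmoid(pooled.mean(axis=0))                       # (H, W)
    return feat * gate[None, :, :]
```

In the paper's module the two gates are fused with multi-scale residual branches; here they are shown in isolation.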

Skewed Angle Detection in Text Images Using Orthogonal Angle View

  • Chin, Seong-Ah;Choo, Moon-Won
    • The Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • The Institute of Electronics Engineers of Korea, 2000 ITC-CSCC -1
    • /
    • pp.62-65
    • /
    • 2000
  • In this paper we propose skew-angle detection methods for images that contain text that is not aligned horizontally. In most images text is aligned along the horizontal axis, but there are many occasions when the text lies at a skewed angle θ (with 0 < θ ≤ π). In this work, we adapt the Hough transform, Shadow, and Threshold Projection methods to detect the skew angle of text in an input image using the orthogonal angle view property. These methods yield a primary text skew angle, which allows us to rotate the original input image into one with horizontally aligned text. This serves as document image preprocessing prior to the recognition stage.

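A projection-profile variant of the skew-detection idea above can be sketched as follows. Scoring candidate angles by the variance of the row projection is a common stand-in, not necessarily the authors' Shadow/Threshold Projection formulation; the angle range and step are assumptions.

```python
import numpy as np

def project_rows(ys, xs, theta):
    """Row coordinate of each foreground pixel after rotating by -theta
    (a cheap stand-in for rotating the whole image)."""
    return xs * np.sin(theta) + ys * np.cos(theta)

def estimate_skew(binary, angles=np.deg2rad(np.arange(-45, 46))):
    """The angle that makes the row projection most 'peaky' (maximum
    variance of per-row pixel counts) aligns the text horizontally."""
    ys, xs = np.nonzero(binary)
    best, best_score = 0.0, -1.0
    for th in angles:
        rows = np.round(project_rows(ys, xs, th)).astype(int)
        counts = np.bincount(rows - rows.min())
        score = counts.var()
        if score > best_score:
            best, best_score = th, score
    return best
```

Rotating the input image by the negative of the estimated angle then yields horizontally aligned text for the recognition stage.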

A Study on the Generation of Webtoons through Fine-Tuning of Diffusion Models

  • 유경호;김형주;김정인;전찬준;김판구
    • Smart Media Journal
    • /
    • Vol. 12, No. 7
    • /
    • pp.76-83
    • /
    • 2023
  • In this study, to assist webtoon artists in the webtoon production process, we propose a method of generating webtoons from text by fine-tuning a pretrained Text-to-Image model. The proposed method fine-tunes a pretrained Stable Diffusion model with the LoRA technique, using a webtoon dataset converted into a webtoon art style. Experiments confirmed fast training, with 30,000 steps taking about four and a half hours, and showed that the generated webtoon images reflect the shapes and backgrounds expressed in the input text. In a quantitative evaluation using the Inception score, the method outperformed a DCGAN-based Text-to-Image model. If webtoon artists use the Text-to-Image model for webtoon generation proposed in this study, it is expected to shorten the time needed to author webtoons.
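The LoRA technique used for the fine-tuning above can be illustrated on a single linear layer. This generic NumPy sketch follows standard LoRA conventions (rank, alpha, zero-initialized B); it is not a detail from the paper, but it shows why only a small fraction of parameters is trained.

```python
import numpy as np

class LoRALinear:
    """LoRA sketch: the pretrained weight W is frozen; only the low-rank
    factors A (r x in) and B (out x r) would be trained, so fine-tuning a
    large Text-to-Image model touches very few parameters."""
    def __init__(self, W, rank=4, alpha=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                   # frozen pretrained weight
        self.A = rng.normal(0.0, 0.01, (rank, W.shape[1]))
        self.B = np.zeros((W.shape[0], rank))        # zero init: no change at step 0
        self.scale = alpha / rank

    def forward(self, x):
        # Effective weight is W + scale * B @ A, applied without forming it
        return self.W @ x + self.scale * (self.B @ (self.A @ x))
```

Because B starts at zero, the fine-tuned model initially behaves exactly like the pretrained one, and the trainable parameter count grows only linearly in the rank.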

Consumer Responses towards Mobile Coupon Characteristics Perception and Causal Relationships among Variables

  • 김재희;여은아
    • The Research Journal of the Costume Culture
    • /
    • Vol. 28, No. 1
    • /
    • pp.15-29
    • /
    • 2020
  • The purpose of the study is to explore the effect of the type of mobile coupon (text- vs. image-focused; free-gift vs. discount) on the perceived characteristics of mobile coupons, and the causal relationships among characteristic perception, attitude, and use intention. A total of 140 university students participated in experiments with questionnaires containing one of the four stimuli. The important findings are as follows. First, image-focused mobile coupons generated more enjoyment than text-focused coupons, but the two types did not differ in perceived informativeness or credibility. Second, enjoyment increased when image-focused content was combined with discount coupons, whereas it decreased when text-focused content was combined with free-gift coupons; this interaction effect shows that consumers' enjoyment can change with the combination of the coupon's value-provision type and its text- or image-focused content. Third, consumers' perception of coupon characteristics formed their attitudes toward mobile coupons, and use intention was determined by those attitudes. The findings fill a gap in research on how text-image content and coupon type affect consumer responses to mobile coupons. Because mobile coupons carry a limited quantity of information on a small phone screen, the results were not consistent with prior research on mobile advertisements, which reported an effect of text-image content on perceived informativeness and credibility.

Image Steganography to Hide Unlimited Secret Text Size

  • Almazaydeh, Wa'el Ibrahim A.
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 4
    • /
    • pp.73-82
    • /
    • 2022
  • This paper describes the process of hiding a secret text of unlimited size in an image using three methods: the first is the traditional steganographic method, which conceals the binary value of the text using the least significant bits (LSB) method; the second is a new method that hides the data in an image based on the Exclusive OR (XOR) operation; and the third is a new method that hides the binary data of the text in an image (grayscale or RGB) using Exclusive OR together with Huffman coding. The new methods demonstrate the hiding of text (data) of unlimited size in an image. Peak Signal-to-Noise Ratio (PSNR) is used in the research to evaluate the results.
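The first, traditional method (LSB embedding) can be sketched as follows. The 32-bit length header is an illustrative framing choice, not taken from the paper; the paper's XOR and Huffman-coding variants are its own contributions and are not reproduced here.

```python
import numpy as np

def embed_lsb(pixels, text):
    """Classic LSB hiding: write a 32-bit length header, then the message
    bits, into the least significant bit of consecutive pixels."""
    data = text.encode("utf-8")
    bits = [(len(data) >> i) & 1 for i in range(31, -1, -1)]
    for byte in data:
        bits += [(byte >> i) & 1 for i in range(7, -1, -1)]
    if len(bits) > pixels.size:
        raise ValueError("cover image too small for this message")
    flat = pixels.flatten()                     # copy: cover stays intact
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.array(bits, dtype=flat.dtype)
    return flat.reshape(pixels.shape)

def extract_lsb(pixels):
    """Read the length header, then reassemble the message bytes."""
    flat = pixels.flatten() & 1
    n = int("".join(map(str, flat[:32])), 2)
    byts = bytearray()
    for k in range(n):
        chunk = flat[32 + 8 * k: 40 + 8 * k]
        byts.append(int("".join(map(str, chunk)), 2))
    return byts.decode("utf-8")
```

Each cover pixel changes by at most 1 gray level, which is what keeps the PSNR of the stego image high.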

A Study on Extensions to Music Player MAF for Multiple JPEG Images and Text Data with Synchronization

  • 양찬석;임정연;김문철
    • The Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • The Institute of Electronics Engineers of Korea, 2005 Fall Conference
    • /
    • pp.967-970
    • /
    • 2005
  • The Music Player MAF format of ISO/IEC 23000-2 FDIS consists of MP3 data, MPEG-7 metadata, and one optional JPEG image, based on the MPEG-4 File Format. However, the current Music Player MAF format does not allow multiple JPEG images or timed text data, both of which would be helpful in various multimedia applications. For example, listening material for a foreign language normally requires an accompanying book of text and images; audio content that carries its own image and text data can help listeners understand the whole story and its situations. In this paper, we propose a detailed file structure, in conjunction with the MPEG-4 File Format, that extends the format to carry multiple image data and text data with synchronization information between the MP3 data and the other resources.


A Novel Text Sample Selection Model for Scene Text Detection via Bootstrap Learning

  • Kong, Jun;Sun, Jinhua;Jiang, Min;Hou, Jian
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13, No. 2
    • /
    • pp.771-789
    • /
    • 2019
  • Text detection has been a popular research topic in computer vision, yet prevalent text detection algorithms find it difficult to avoid dependence on their training datasets. To overcome this problem, we propose a novel unsupervised text detection algorithm inspired by bootstrap learning. First, a text candidate in a novel superpixel form is proposed to improve the text recall rate through image segmentation. Second, we propose a unique text sample selection model (TSSM) that extracts text samples from the current image itself, eliminating database dependency. Specifically, to improve the precision of the samples, we combine maximally stable extremal regions (MSERs) and a saliency map to generate sample reference maps with a double-threshold scheme. Finally, a multiple kernel boosting method is developed to build a strong text classifier by combining multiple single-kernel SVMs trained on the samples selected by the TSSM. Experimental results on standard datasets demonstrate that our text detection method is robust to complex backgrounds and multilingual text, and shows stable performance across datasets.
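The double-threshold sample reference map can be illustrated with a toy sketch. The threshold values and the exact rule for combining the MSER mask with the saliency map are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

def sample_reference_map(mser_mask, saliency, lo=0.3, hi=0.7):
    """Double-threshold sketch: pixels that are both inside MSER regions
    and highly salient become positive text samples (+1); clearly
    non-salient pixels become negative samples (-1); everything else
    stays unlabeled (0)."""
    ref = np.zeros(saliency.shape, dtype=np.int8)
    ref[(saliency >= hi) & mser_mask] = 1    # confident text sample
    ref[saliency <= lo] = -1                 # confident background sample
    return ref
```

The labeled pixels then serve as the positive/negative samples on which the single-kernel SVMs are trained before boosting.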

A Fast Algorithm for Korean Text Extraction and Segmentation from Subway Signboard Images Utilizing Smartphone Sensors

  • Milevskiy, Igor;Ha, Jin-Young
    • Journal of Computing Science and Engineering
    • /
    • Vol. 5, No. 3
    • /
    • pp.161-166
    • /
    • 2011
  • We present a fast algorithm for Korean text extraction and segmentation from subway signboards using smartphone sensors, designed to minimize computational time and memory usage. The algorithm covers the preprocessing steps for optical character recognition (OCR): binarization, text location, and segmentation. An image of a signboard captured by the smartphone camera at an arbitrary angle is rotated by the detected angle, as if it had been taken with the phone held horizontally. Binarization is performed only once, on a subset of connected components instead of the whole image area, which greatly reduces computational time. Text location is guided by a marker line the user places over the region of interest in the binarized image via the touch screen. Text segmentation then reuses the connected-component data from the binarization step and cuts the string into individual images for the designated characters. The resulting data can be used as OCR input, addressing the most difficult part of OCR on text in natural scene images. Experimental results showed that our binarization algorithm is 3.5 and 3.7 times faster than the Niblack and Sauvola adaptive-thresholding algorithms, respectively, while achieving better quality.
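The Niblack baseline the method is benchmarked against computes a local threshold T = m + k·s (window mean plus k times the window standard deviation). A naive, unoptimized sketch follows; the window size and k value are conventional defaults, not the paper's settings, and real implementations use integral images to avoid the per-pixel loop.

```python
import numpy as np

def niblack_text_mask(gray, window=15, k=-0.2):
    """Return True for dark (text) pixels: gray < mean + k*std over a
    sliding window centered at each pixel (edge-padded)."""
    pad = window // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    out = np.zeros(gray.shape, dtype=bool)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            patch = padded[i:i + window, j:j + window]
            out[i, j] = gray[i, j] < patch.mean() + k * patch.std()
    return out
```

The paper's speedup comes from binarizing only a subset of connected components rather than thresholding every pixel this way.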

Detecting and Segmenting Text from Images for a Mobile Translator System

  • Chalidabhongse, Thanarat H.;Jeeraboon, Poonsak
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • Institute of Control, Robotics and Systems, ICCAS 2004
    • /
    • pp.875-878
    • /
    • 2004
  • Text detection and segmentation have long been researched in the OCR area, but there are other areas where detecting and segmenting text from images can be very useful. In this report, we first propose the design of a mobile translator system that helps non-native speakers understand a foreign language using a ubiquitous mobile network and camera phones. The main focus of the paper is the algorithm for detecting and segmenting text embedded in natural scenes in captured images. An image captured by a camera phone is transmitted to a translator server. It first passes through preprocessing to smooth the image and suppress noise. A threshold is applied to binarize the image. Afterward, edge detection and connected component analysis are performed on the filtered image to find edges and segment the components. Finally, pre-defined layout relation constraints are used to decide which components are likely to be text. A preliminary experiment yielded a recognition rate of 94.44% on a set of 36 natural scene images containing text.

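The connected-component analysis and layout-constraint filtering described above can be sketched as follows. The 4-connectivity choice and the area/aspect-ratio thresholds are illustrative assumptions standing in for the paper's pre-defined layout relation constraints.

```python
import numpy as np
from collections import deque

def connected_components(binary):
    """Label 4-connected components of a binary image by BFS; this is
    the segmentation step that follows binarization."""
    labels = np.zeros(binary.shape, dtype=int)
    next_label = 0
    for sy, sx in zip(*np.nonzero(binary)):
        if labels[sy, sx]:
            continue
        next_label += 1
        labels[sy, sx] = next_label
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
    return labels, next_label

def text_like(labels, label, min_area=4, max_aspect=5.0):
    """Toy layout constraint: keep components whose size and aspect ratio
    are plausible for characters (thresholds are illustrative)."""
    ys, xs = np.nonzero(labels == label)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return ys.size >= min_area and max(h, w) / min(h, w) <= max_aspect
```

Components surviving the filter would be grouped into candidate text regions and sent onward to recognition and translation.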