• Title/Summary/Keyword: Text-to-Image

Search results: 889

Research Trend Analysis by using Text-Mining Techniques on the Convergence Studies of AI and Healthcare Technologies (텍스트 마이닝 기법을 활용한 인공지능과 헬스케어 융·복합 분야 연구동향 분석)

  • Yoon, Jee-Eun;Suh, Chang-Jin
    • Journal of Information Technology Services / v.18 no.2 / pp.123-141 / 2019
  • The goal of this study is to review the major research trends in the convergence of AI and healthcare technologies. For the study, 15,260 English articles on AI- and healthcare-related topics, covering 55 years from 1963, were collected from Scopus, and text mining techniques were applied. As a result, seven key research topics were identified: "AI for Clinical Decision Support System (CDSS)", "AI for Medical Image", "Internet of Healthcare Things (IoHT)", "Big Data Analytics in Healthcare", "Medical Robotics", "Blockchain in Healthcare", and "Evidence Based Medicine (EBM)". The results of this study can be used by researchers and government to set up and develop appropriate healthcare R&D strategies. The text mining techniques applied include Text Analysis, Frequency Analysis, Topic Modeling with LDA (Latent Dirichlet Allocation), Word Cloud, and Ego Network Analysis.
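
The LDA step described above can be sketched with scikit-learn. This is a minimal illustration, not the study's actual pipeline: the tiny `abstracts` list stands in for the 15,260 Scopus articles, and only the 7-topic setting is taken from the abstract.

```python
# Minimal LDA topic-modeling sketch with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "deep learning for clinical decision support systems in hospitals",
    "blockchain technology for secure electronic health records",
    "deep learning for medical image segmentation and diagnosis",
]  # stand-ins for the 15,260 collected articles

# Term-frequency matrix with English stop words removed.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

# Seven topics, matching the seven key research areas found in the study.
lda = LatentDirichletAllocation(n_components=7, random_state=0)
lda.fit(X)

# Print the top words per topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-10:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```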

A study on the effect of JPEG recompression with the color image quality (JPEG 재압축이 컬러 이미지 품질에 미치는 영향에 관한 연구)

  • 이성형;조가람;구철희
    • Journal of the Korean Graphic Arts Communication Society / v.18 no.2 / pp.55-68 / 2000
  • Joint Photographic Experts Group (JPEG) is a standard still-image compression technique established by the International Organization for Standardization (ISO) and the International Telecommunication Union Telecommunication Standardization Sector (ITU-T). The standard is intended for use in various kinds of color still-imaging systems as a standard color image coding format. Because JPEG is a lossy compression, decompressed pixel values are not the same as the values before compression, and various distortions from JPEG compression and recompression have been reported in the literature. An image compressed with JPEG is often recompressed with JPEG again, and since JPEG is lossy, the quality of the recompressed image is expected to vary according to the recompression Q-factor. In this paper, four different color samples (photo image, gradient image, vector drawing image, text image) were compressed at various Q-factors, and the compressed images were then recompressed at various Q-factors once again. From the results, this paper evaluates the variation of image quality and file size under JPEG recompression and recommends the optimum recompression Q-factor.
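
The recompress-and-measure experiment can be outlined with Pillow and NumPy. A hedged sketch: the file name and the Q-factor grid are placeholders, not the paper's settings.

```python
# Sketch of the JPEG recompress-and-measure experiment using Pillow.
import io
import numpy as np
from PIL import Image

def jpeg_bytes(img, q):
    """Compress a PIL image to JPEG at quality q, return the raw bytes."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=q)
    return buf.getvalue()

def psnr(a, b):
    """Peak signal-to-noise ratio between two 8-bit images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

original = Image.open("photo_sample.png").convert("RGB")  # placeholder file
ref = np.asarray(original)

for q1 in (90, 70, 50):            # first-pass Q-factor
    first = Image.open(io.BytesIO(jpeg_bytes(original, q1)))
    for q2 in (90, 70, 50):        # recompression Q-factor
        data = jpeg_bytes(first, q2)
        second = np.asarray(Image.open(io.BytesIO(data)))
        print(f"Q1={q1} Q2={q2}: {len(data)} bytes, "
              f"PSNR={psnr(ref, second):.2f} dB")
```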

Word Image Decomposition from Image Regions in Document Images using Statistical Analyses (문서 영상의 그림 영역에서 통계적 분석을 이용한 단어 영상 추출)

  • Jeong, Chang-Bu;Kim, Soo-Hyung
    • The KIPS Transactions: Part B / v.13B no.6 s.109 / pp.591-600 / 2006
  • This paper describes the development and implementation of an algorithm that decomposes word images from image regions containing mixed text and graphics in document images, using statistical analyses. To decompose word images from image regions, character components must first be separated from graphic components. For this, we propose a separation method based on a box-plot analysis of the statistics of structural components. The accuracy of this method is not sensitive to changes in the images because the separation criterion is defined by the statistics of the components. The character regions are then determined by analyzing the local crowdedness of the separated character components. Finally, we divide the character regions into text lines and word images using projection profile analysis, gap clustering, special symbol detection, etc. The proposed system reduces the influence of image variation because it uses a criterion based on the statistics of image regions. We also tested the proposed method in a document image processing system for keyword spotting and showed the need for further study of the method.
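
The box-plot separation of character components from graphic components might look like the following OpenCV sketch. The use of component height and the 1.5×IQR fences are our assumptions about the paper's statistics, and the input file is a placeholder.

```python
# Sketch: separate character-like components from graphics with an
# IQR (box-plot) rule on connected-component heights.
import cv2
import numpy as np

img = cv2.imread("region.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
heights = stats[1:, cv2.CC_STAT_HEIGHT]  # skip label 0 (background)

# Box-plot statistics: components outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
# are treated as graphic components; the rest are character candidates.
q1, q3 = np.percentile(heights, [25, 75])
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr

char_mask = np.zeros_like(binary)
for label in range(1, n):
    if lo <= stats[label, cv2.CC_STAT_HEIGHT] <= hi:
        char_mask[labels == label] = 255

cv2.imwrite("char_components.png", char_mask)
```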

Improved OCR Algorithm for Efficient Book Catalog Retrieval Technology (효과적인 도서목록 검색을 위한 개선된 OCR알고리즘에 관한 연구)

  • He, Wen;Baek, Young-Hyun;Moon, Sung-Ryong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.152-159 / 2010
  • Existing character recognition algorithms recognize characters only under simple conditions; recognition rates often drop drastically when the input document image is of low quality or contains rotated text or text in various fonts and sizes, because of external noise or data loss. This paper proposes an optical character recognition algorithm that uses bicubic interpolation for book catalog retrieval when the input image contains rotated, blurred text in various fonts and sizes. The proposed algorithm consists of a detection part and a recognition part. The detection part applies the Roberts operator and the Hausdorff distance algorithm to correctly detect the book catalog. The recognition part applies bicubic interpolation to compensate for data loss due to low quality and text of various fonts and sizes; rotation is then applied to the interpolated image to correct slant. Experimental results show that the proposed method improves the recognition rate by 6% and achieves a search time of 1.077 s.
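
The bicubic interpolation and slant-correction steps can be approximated with OpenCV. A sketch under assumptions: the 2x scale factor and the minAreaRect-based skew estimate are illustrative stand-ins for the paper's method, and the input file is a placeholder.

```python
# Sketch: bicubic upscaling plus rotation correction before OCR.
import cv2
import numpy as np

img = cv2.imread("catalog_page.png", cv2.IMREAD_GRAYSCALE)  # placeholder

# Bicubic interpolation to recover detail lost to low resolution/blur.
up = cv2.resize(img, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)

# Estimate the text skew from the minimum-area rectangle of ink pixels,
# then rotate the image back to horizontal.
_, binary = cv2.threshold(up, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
coords = np.column_stack(np.where(binary > 0)).astype(np.float32)
angle = cv2.minAreaRect(coords)[-1]
if angle > 45:               # minAreaRect angles are ambiguous by 90 degrees
    angle -= 90

h, w = up.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
deskewed = cv2.warpAffine(up, M, (w, h), flags=cv2.INTER_CUBIC,
                          borderMode=cv2.BORDER_REPLICATE)
cv2.imwrite("preprocessed.png", deskewed)
```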

Sign2Gloss2Text-based Sign Language Translation with Enhanced Spatial-temporal Information Centered on Sign Language Movement Keypoints (수어 동작 키포인트 중심의 시공간적 정보를 강화한 Sign2Gloss2Text 기반의 수어 번역)

  • Kim, Minchae;Kim, Jungeun;Kim, Ha Young
    • Journal of Korea Multimedia Society / v.25 no.10 / pp.1535-1545 / 2022
  • Sign language has completely different meanings depending on the direction of the hand or changes in facial expression, even for the same gesture. It is therefore crucial to capture the spatial-temporal structure of each movement. However, sign language translation studies based on Sign2Gloss2Text convey only comprehensive spatial-temporal information about the entire sign language movement, so the detailed information (facial expressions, gestures, etc.) of each movement that matters for translation is not emphasized. Accordingly, in this paper we propose Spatial-temporal Keypoints Centered Sign2Gloss2Text Translation, named STKC-Sign2Gloss2Text, to supplement the sequential and semantic information of keypoints, which are the core of recognizing and translating sign language. STKC-Sign2Gloss2Text consists of two steps: Spatial Keypoints Embedding, which extracts 121 major keypoints from each image, and Temporal Keypoints Embedding, which emphasizes sequential information using a Bi-GRU over the extracted sign language keypoints. The proposed model outperformed the Sign2Gloss2Text baseline on all Bilingual Evaluation Understudy (BLEU) scores on the Development (DEV) and Testing (TEST) sets; in particular, it achieved a TEST BLEU-4 of 23.19, an improvement of 1.87, demonstrating the effectiveness of the proposed methodology.
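
The Temporal Keypoints Embedding step (a Bi-GRU over per-frame keypoint vectors) can be sketched in PyTorch. The 121 keypoints follow the abstract; the hidden size and 2D coordinates are assumptions.

```python
# Sketch: Bi-GRU embedding over sign-language keypoint sequences (PyTorch).
import torch
import torch.nn as nn

class TemporalKeypointsEmbedding(nn.Module):
    def __init__(self, num_keypoints=121, coord_dim=2, hidden=256):
        super().__init__()
        # Each frame is flattened to a vector of keypoint coordinates.
        self.gru = nn.GRU(
            input_size=num_keypoints * coord_dim,
            hidden_size=hidden,
            batch_first=True,
            bidirectional=True,   # forward + backward sequential context
        )

    def forward(self, x):
        # x: (batch, frames, num_keypoints, coord_dim)
        b, t, k, c = x.shape
        out, _ = self.gru(x.reshape(b, t, k * c))
        return out               # (batch, frames, 2 * hidden)

frames = torch.randn(4, 100, 121, 2)      # dummy batch of keypoint sequences
emb = TemporalKeypointsEmbedding()(frames)
print(emb.shape)                           # torch.Size([4, 100, 512])
```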

Block Classification of Document Images by Block Attributes and Texture Features (블록의 속성과 질감특징을 이용한 문서영상의 블록분류)

  • Jang, Young-Nae;Kim, Joong-Soo;Lee, Cheol-Hee
    • Journal of Korea Multimedia Society / v.10 no.7 / pp.856-868 / 2007
  • We propose an effective method for block classification in a document image. The gray-level document image is converted to a binary image for block segmentation. The binary image is smoothed to find the location and size of each block, and during this smoothing the inner block height of each block is obtained. The gray-level image is then divided into blocks using this location information. An SGLDM (spatial gray-level dependence matrix) is computed for each gray-level document block, and seven second-order statistical texture features, which capture the document attributes, are extracted from the (0,1)-direction SGLDM. Document blocks are first classified into two groups, text and non-text, by the inner block height using the nearest-neighbor rule. The seven texture features extracted from the SGLDM are then used to classify blocks into five detailed categories: small font, large font, table, graphic, and photo. These document blocks are useful not only for structure analysis in document recognition but also in various other application areas.
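
The SGLDM corresponds to what scikit-image calls a gray-level co-occurrence matrix (GLCM). A rough sketch follows: the (0,1) direction is mapped to a distance-1 vertical offset, and the six properties graycoprops offers stand in for the paper's seven features.

```python
# Sketch: second-order texture features from a (0,1)-direction GLCM (SGLDM).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

block = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in block

# Distance 1 in the vertical direction approximates the (0,1) SGLDM.
glcm = graycomatrix(block, distances=[1], angles=[np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {
    name: graycoprops(glcm, name)[0, 0]
    for name in ("contrast", "dissimilarity", "homogeneity",
                 "energy", "correlation", "ASM")
}
print(features)  # feed these into a nearest-neighbor classifier, per the paper
```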

Text extraction from camera based document image (카메라 기반 문서영상에서의 문자 추출)

  • 박희주;김진호
    • Journal of Korea Society of Industrial Information Systems / v.8 no.2 / pp.14-20 / 2003
  • This paper presents a text extraction method for camera-based document images. Camera-based document images are more difficult to recognize than scanner-based images because of segmentation problems caused by variable lighting conditions and versatile fonts. Both document binarization and character extraction are important processes in recognizing camera-based document images. After converting the color image into a gray-level image, gray-level normalization is used to extract character regions independent of lighting conditions and background. A local adaptive binarization method is then used to extract characters from the background after noise removal. In the character extraction step, horizontal and vertical projection information and connected components are used to extract character lines, word regions, and character regions. To evaluate the proposed method, we experimented with documents mixing Hangul, English, symbols, and digits from the ETRI database, and obtained encouraging binarization and character extraction results.
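
The local adaptive binarization and projection-profile steps can be sketched with OpenCV; the block size, constant C, and segmentation logic below are placeholders rather than the paper's parameters.

```python
# Sketch: local adaptive binarization + projection-profile line segmentation.
import cv2
import numpy as np

gray = cv2.imread("camera_doc.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder

# Local adaptive thresholding copes with uneven camera lighting.
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, blockSize=31, C=15)

# Horizontal projection profile: runs of rows with ink form text lines.
profile = binary.sum(axis=1)
in_line, lines, start = False, [], 0
for y, v in enumerate(profile):
    if v > 0 and not in_line:
        in_line, start = True, y
    elif v == 0 and in_line:
        in_line = False
        lines.append((start, y))

print(f"found {len(lines)} text lines")
# Within each line, a vertical profile plus gap clustering yields word regions.
```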

A Consistent Quality Bit Rate Control for the Line-Based Compression

  • Ham, Jung-Sik;Kim, Ho-Young;Lee, Seong-Won
    • IEIE Transactions on Smart Processing and Computing / v.5 no.5 / pp.310-318 / 2016
  • Emerging technologies such as the Internet of Things (IoT) and the Advanced Driver Assistance System (ADAS) often have image transmission functions with tough constraints, such as low power and/or low delay, which require line-based, low-memory compression methods instead of existing frame-based image compression standards. Bit rate control in conventional frame-based compression systems requires substantial hardware resources because the scope of handled data is at the frame level. On the other hand, attempts to reduce this hardware requirement by focusing on line-level processing yield uneven image quality across the frame. In this paper, we propose a bit rate control that maintains consistent image quality across the frame and improves the legibility of text regions. To find the line characteristics, the proposed bit rate control tests each line for ease of compression and for the existence of text. Experiments show that the proposed bit rate control achieves peak signal-to-noise ratios (PSNRs) similar to those of conventional bit rate controls while using significantly fewer hardware resources.
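
The per-line decision the paper describes (test each line for ease of compression and for text, then set its bit budget) can be caricatured in a few lines of Python. Purely illustrative: the entropy test, the text test, and the budget numbers are our stand-ins for the paper's criteria.

```python
# Illustrative sketch of a line-based bit budget decision.
import numpy as np

def line_entropy(line):
    """Shannon entropy of pixel values: a cheap ease-of-compression test."""
    hist = np.bincount(line, minlength=256) / line.size
    nz = hist[hist > 0]
    return float(-(nz * np.log2(nz)).sum())

def looks_like_text(line):
    """Crude text test: many sharp transitions along a binarized line."""
    edges = np.abs(np.diff((line > 128).astype(np.int8)))
    return edges.sum() > line.size * 0.05

def bits_for_line(line, base_budget=2048):
    # Hard-to-compress lines and text lines get more bits, so quality
    # (and legibility) stays consistent across the frame.
    budget = base_budget * (0.5 + line_entropy(line) / 16.0)
    if looks_like_text(line):
        budget *= 1.25
    return int(budget)

line = (np.random.rand(1920) * 255).astype(np.uint8)  # dummy image line
print(bits_for_line(line))
```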

An Improved Method for Detecting Caption in image using DCT-coefficient and Transition-map Analysis (DCT계수와 천이지도 분석을 이용한 개선된 영상 내 자막영역 검출방법)

  • An, Kwon-Jae;Joo, Sung-Il;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.16 no.4 / pp.61-71 / 2011
  • In this paper, we propose a method for detecting text regions in images using DCT-coefficient and transition-map analysis. The detection rate of the traditional method based on DCT-coefficient analysis is high, but its false positive rate is also high, and the method using a transition map often rejects true text regions in the verification step because of a strict threshold. To overcome these problems, we generate a PTRmap (Promising Text Region map) through DCT-coefficient analysis and apply the PTRmap to the transition-map-based text region detection method. As a result, the false positive rate decreases compared with the method using DCT-coefficient analysis alone, and the detection rate increases compared with the method using a transition map alone.
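
The DCT-coefficient analysis behind the PTRmap might be sketched with OpenCV's block DCT as below; the 8x8 block size and the AC-energy threshold are our choices, not the paper's.

```python
# Sketch: mark 8x8 blocks with high AC energy as promising text regions.
import cv2
import numpy as np

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder
gray = gray.astype(np.float32)
h, w = gray.shape
h8, w8 = h // 8, w // 8
ptr_map = np.zeros((h8, w8), np.uint8)

for by in range(h8):
    for bx in range(w8):
        block = gray[by * 8:(by + 1) * 8, bx * 8:(bx + 1) * 8]
        coeffs = cv2.dct(block)
        ac_energy = (coeffs ** 2).sum() - coeffs[0, 0] ** 2  # drop DC term
        if ac_energy > 5e4:        # high-frequency content suggests text
            ptr_map[by, bx] = 1

# ptr_map now plays the role of the PTRmap: transition-map verification
# would be run only where ptr_map == 1.
print(f"{ptr_map.sum()} of {h8 * w8} blocks flagged as promising text regions")
```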

Automatic Target Recognition Study using Knowledge Graph and Deep Learning Models for Text and Image data (지식 그래프와 딥러닝 모델 기반 텍스트와 이미지 데이터를 활용한 자동 표적 인식 방법 연구)

  • Kim, Jongmo;Lee, Jeongbin;Jeon, Hocheol;Sohn, Mye
    • Journal of Internet Computing and Services / v.23 no.5 / pp.145-154 / 2022
  • Automatic Target Recognition (ATR) technology is emerging as a core technology of Future Combat Systems (FCS). Conventional ATR is performed based on IMINT (imagery intelligence) collected from SAR sensors, using various image-based deep learning models. However, although the development of IT and sensing technology has expanded the data and information related to ATR to HUMINT (human intelligence) and SIGINT (signals intelligence), ATR still uses only image-oriented IMINT data. In complex and diversified battlefield situations, it is difficult to guarantee high ATR accuracy and generalization performance with image data alone. Therefore, in this paper we propose a knowledge graph-based ATR method that can utilize image and text data simultaneously. The main idea is to convert the ATR image and text into graphs according to the characteristics of each data type, align them to the knowledge graph, and connect the heterogeneous ATR data through the knowledge graph. To convert the ATR image into a graph, an object-tag graph whose nodes are object tags is generated from the image using a pre-trained image object recognition model and the vocabulary of the knowledge graph. For the ATR text, a pre-trained language model, TF-IDF, a co-occurrence word graph, and the vocabulary of the knowledge graph are used to generate a word graph whose nodes carry key vocabulary for ATR. The two generated graphs are connected to the knowledge graph using an entity alignment model to improve ATR performance from images and texts. To prove the superiority of the proposed method, 227 web documents and 61,714 RDF triples from DBpedia were collected, and comparison experiments were performed on precision, recall, and F1-score from the perspective of entity alignment.
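
The word-graph construction on the text side can be sketched with scikit-learn and NetworkX. A loose illustration: the two toy documents, the keyword count, and the document-level co-occurrence window are assumptions, not the paper's settings.

```python
# Sketch: build a co-occurrence word graph over TF-IDF keywords.
import itertools
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the radar detected an armored vehicle near the bridge",
    "an armored vehicle crossed the bridge under radar surveillance",
]  # stand-ins for the 227 ATR web documents

# TF-IDF picks the key vocabulary for the ATR domain.
vec = TfidfVectorizer(stop_words="english", max_features=20)
vec.fit(docs)
keywords = set(vec.get_feature_names_out())

# Nodes are keywords; edges connect words co-occurring in a document.
g = nx.Graph()
for doc in docs:
    present = [w for w in doc.split() if w in keywords]
    for a, b in itertools.combinations(sorted(set(present)), 2):
        w = g.get_edge_data(a, b, {"weight": 0})["weight"] + 1
        g.add_edge(a, b, weight=w)

print(g.number_of_nodes(), "nodes;", g.number_of_edges(), "edges")
# This word graph would then be aligned to the knowledge graph
# (e.g. DBpedia) with an entity-alignment model, per the paper.
```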