• Title/Summary/Keyword: Image Databases


Automatic Text Extraction from News Video using Morphology and Text Shape (형태학과 문자의 모양을 이용한 뉴스 비디오에서의 자동 문자 추출)

  • Jang, In-Young;Ko, Byoung-Chul;Kim, Kil-Cheon;Byun, Hye-Ran
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.4
    • /
    • pp.479-488
    • /
    • 2002
  • In recent years the amount of digital video in use has risen dramatically to keep pace with the increasing use of the Internet, and consequently an automated method is needed for indexing digital video databases. Textual information, both superimposed text and embedded scene text, appearing in a digital video can be a crucial clue for video indexing. In this paper, a new method is presented to extract both superimposed and embedded scene text from a freeze-frame of news video. The algorithm is summarized in the following three steps. In the first step, a color image is converted into a gray-level image, and contrast stretching is applied to enhance the contrast of the input image. Then, a modified local adaptive thresholding is applied to the contrast-stretched image. The second step is divided into three processes: eliminating text-like components by applying erosion, dilation, and (OpenClose+CloseOpen)/2 morphological operations; maintaining text components using the (OpenClose+CloseOpen)/2 operation with a new Geo-correction method; and subtracting the two resulting images to further eliminate false-positive components. In the third, filtering step, the characteristics of each component are used, such as the ratio of the number of pixels in each candidate component to the number of its boundary pixels, and the ratio of the minor to the major axis of each bounding box. Acceptable results have been obtained using the proposed method on 300 news images, with a recognition rate of 93.6%. The method also performs well on all the various kinds of images when the size of the structuring element is adjusted.
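
As a rough illustration of the first two steps, contrast stretching and the (OpenClose+CloseOpen)/2 morphological smoothing can be sketched as below. This is a minimal sketch, assuming `scipy.ndimage` in place of whatever morphology toolkit the authors used; the function names and the default 3×3 structuring-element size are mine, and the Geo-correction and subtraction stages are omitted:

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def contrast_stretch(gray):
    """Linearly stretch gray-level intensities to the full 0-255 range."""
    lo, hi = int(gray.min()), int(gray.max())
    if hi == lo:  # flat image: nothing to stretch
        return np.zeros_like(gray)
    return ((gray.astype(np.float32) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def openclose_closeopen_avg(gray, size=3):
    """(OpenClose + CloseOpen) / 2: average of close-after-open and
    open-after-close, a standard morphological smoothing filter."""
    oc = grey_closing(grey_opening(gray, size=size), size=size)
    co = grey_opening(grey_closing(gray, size=size), size=size)
    return ((oc.astype(np.float32) + co.astype(np.float32)) / 2.0).astype(np.uint8)
```

Enlarging `size` corresponds to adjusting the structuring element, which the abstract reports as the knob for handling different kinds of images.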

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob;Rim, BeanBonyka;Sung, Nak-Jun;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.17-23
    • /
    • 2020
  • Biometric information, which measures characteristics of the human body, has attracted great attention as a highly reliable security technology, since there is no risk of theft or loss. Among such biometric information, fingerprints are mainly used in fields such as identity verification and identification. When a fingerprint image is difficult to authenticate because of a wound, wrinkle, or moisture, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image processing algorithm appropriate to it. In this setting, artificial intelligence software that distinguishes fingerprint images with cuts and wrinkles makes it easy to check whether such defects are present, and by selecting an appropriate algorithm the fingerprint image can easily be improved. In this study, we built a database of 17,080 fingerprints in total by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 fingerprints from the Sokoto open data set, and the fingerprints of 98 Korean students. Criteria were established for determining whether the images in the database contain injuries or wrinkles, and the data were validated by experts. The training and test datasets consisted of the Cambodian and Sokoto data, split at a ratio of 8:2; the data of the 98 Korean students were used as a validation set. Using the constructed dataset, five CNN-based architectures were implemented: a classic CNN, AlexNet, VGG-16, ResNet50, and YOLO v3, and a study was conducted to find the best-performing model. Among the five architectures, ResNet50 showed the best performance, at 81.51%.
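
The 8:2 split described above can be sketched in plain Python; the function name and the fixed shuffling seed are illustrative assumptions, not details from the paper:

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Shuffle and split a dataset at the paper's 8:2 train/test ratio."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]
```

A separately collected set (here, the 98 Korean students' prints) would then serve as the held-out validation data rather than coming out of this split.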

A Study on the Information Searching Behavior of MEDLINE Retrieval in Medical Librarian (의학전문사서의 정보이용행위에 관한 연구)

  • Lee Jin-Young;Jeong Sang-Kyung
    • Journal of Korean Library and Information Science Society
    • /
    • v.30 no.2
    • /
    • pp.123-153
    • /
    • 1999
  • This article aims to find ways, building on existing studies of CD-ROM database search behavior, for searchers using on-line MEDLINE in medical libraries to use the data more efficiently. We distributed questionnaires to the librarians of 60 medical libraries and surveyed the literature and current practices to examine MEDLINE search behavior in medical libraries. The results are as follows: 1) The rate of single-user medical data systems was 53%, and that of multi-user systems 43%. As for weekly search time, under 2 hours accounted for 75%, 3 to 8 hours for 18.3%, and over 9 hours for 6.7%. 2) The factors that improve search results are (1) a thorough discussion and interview between librarians and users, and (2) the use of correct indexing terms, the thesaurus, and keywords. In principle, users should search directly; in practice, however, librarians searched on their behalf in the group whose weekly search time was under two hours (75%). 3) As for search fees, 91% were free and 9% were charged. Search effectiveness was also enhanced by means of the Inter-Library Loan Service and the Information Network. 4) The medical librarians answered that, to give satisfactory service, they need applied education in professional knowledge, medical terminology (thesaurus), and electronic media, as well as computer education, interview technique, and re-education. 5) As for satisfaction with MEDLINE, respondents answered 44.6% for economy, 38.2% for time saved, and 58.9% for users' search satisfaction, respectively. 6) The adoption of the MEDLINE system enhanced the medical libraries' image and had a positive effect on users' satisfaction with data use and searching, on data activities, and on research achievement. 7) In the past MeSH was used, but over time CD-ROM MEDLINE searching came to be preferred to on-line searching.

A New Similarity Measure for Categorical Attribute-Based Clustering (범주형 속성 기반 군집화를 위한 새로운 유사 측도)

  • Kim, Min;Jeon, Joo-Hyuk;Woo, Kyung-Gu;Kim, Myoung-Ho
    • Journal of KIISE:Databases
    • /
    • v.37 no.2
    • /
    • pp.71-81
    • /
    • 2010
  • The problem of finding clusters arises in numerous applications, such as pattern recognition, image analysis, and market analysis. The important factors that decide cluster quality are the similarity measure and the number of attributes. Similarity measures should be defined with respect to the data types. Existing similarity measures apply well to numerical attribute values. However, those measures do not work well when the data are described by categorical attributes, that is, when there is no inherent similarity measure between values. In high-dimensional spaces, conventional clustering algorithms also tend to break down because of the sparsity of the data points. To overcome this difficulty, subspace clustering approaches have been proposed, based on the observation that different clusters may exist in different subspaces. In this paper, we propose a new similarity measure for clustering high-dimensional categorical data. The measure is defined on the premise that in a good clustering, each cluster should have certain information that distinguishes it from the other clusters. We also try to capture attribute dependencies. This study is meaningful because no previous method has used both. Experimental results on real datasets show that the clusters obtained with our proposed similarity measure are good with respect to clustering accuracy.
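
For context, the conventional baseline for categorical data is the simple attribute-overlap measure sketched below; this is the kind of measure such work improves upon, not the paper's proposed information-based measure:

```python
def overlap_similarity(x, y):
    """Fraction of categorical attributes on which two records agree.

    A common baseline for categorical data: it treats all attributes as
    independent and equally informative, which is exactly what
    cluster-distinguishing, dependency-aware measures try to move beyond.
    """
    if len(x) != len(y):
        raise ValueError("records must have the same number of attributes")
    return sum(a == b for a, b in zip(x, y)) / len(x)
```

Two records agreeing on two of three attributes thus score 2/3 regardless of which attributes agree or how those attributes co-vary across the dataset.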

Construction of the Regional Basemap for a Developing Country: Focused on the Bab Ezzouar Municipality in Algeria (개발도상국 지역분석용 베이스맵 구축방안: 알제리의 밥 에주아흐 지역을 중심으로)

  • Lee, Yong Jik;Choei, Nae Young
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.23 no.1
    • /
    • pp.89-99
    • /
    • 2015
  • Recently, the Korean construction industry has been actively participating in numerous city planning projects in third-world countries. Considering the current depression of the domestic real estate market, this emerging foreign demand could provide substantial opportunities for the domestic industry to overcome the trough. For field planners dealing with such foreign projects, however, the immediate problem is the lack of public statistics and geographic information with which to perform spatial analyses and prepare master plans. In this context, this study simulates a process for constructing a digitized basemap of a case area, Bab Ezzouar in Algeria, North Africa, a typical municipality that lacks IT databases. To overcome the data shortage, the study uses satellite map tiles to digitize the roads and building structures. It then estimates block-wise populations based on building image interpolation as well as supplementary field survey data. Topographic TINs are also built from SRTM (Shuttle Radar Topography Mission) digital elevation maps so that the three-dimensional configuration of structures and terrain can be rendered to check the urban scenery and skylines.

Automated Data Extraction from Unstructured Geotechnical Report based on AI and Text-mining Techniques (AI 및 텍스트 마이닝 기법을 활용한 지반조사보고서 데이터 추출 자동화)

  • Park, Jimin;Seo, Wanhyuk;Seo, Dong-Hee;Yun, Tae-Sup
    • Journal of the Korean Geotechnical Society
    • /
    • v.40 no.4
    • /
    • pp.69-79
    • /
    • 2024
  • Field geotechnical data are obtained from various field and laboratory tests and are documented in geotechnical investigation reports. For efficient design and construction, digitizing these geotechnical parameters is essential. However, current practice involves manual data entry, which is time-consuming, labor-intensive, and prone to errors. This study therefore proposes an automatic data extraction method for geotechnical investigation reports using image-based deep learning models and text-mining techniques. A deep-learning-based page classification model and a text-searching algorithm were employed to classify geotechnical investigation report pages with 100% accuracy. Computer vision algorithms were utilized to identify valid data regions within report pages, and text analysis was used to match and extract the corresponding geotechnical data. The proposed model was validated on a dataset of 205 geotechnical investigation reports, achieving an average data extraction accuracy of 93.0%. Finally, a user-interface-based program was developed to enhance the practical application of the extraction model: users can upload PDF files of geotechnical investigation reports, have them analyzed automatically, and extract and edit the data. This approach is expected to improve the efficiency and accuracy of digitizing geotechnical investigation reports and building geotechnical databases.
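
A minimal sketch of the text-searching side of such a pipeline is shown below. The keywords, the `N = <value>` pattern, and both function names are illustrative assumptions; the paper's deep-learning page classifier and computer-vision region detection are replaced here by a plain keyword filter and a regular expression:

```python
import re

# Hypothetical keywords standing in for the trained page classifier.
PAGE_KEYWORDS = ("boring log", "standard penetration", "spt")

def is_geotech_page(page_text):
    """Keyword-based fallback for spotting candidate report pages."""
    lowered = page_text.lower()
    return any(keyword in lowered for keyword in PAGE_KEYWORDS)

def extract_spt_values(page_text):
    """Pull 'N = <number>' style readings out of a page's raw text."""
    return [float(m) for m in re.findall(r"N\s*=\s*(\d+(?:\.\d+)?)", page_text)]
```

In the paper's pipeline these plain-text heuristics would be the matching step applied after the classifier has selected pages and the vision model has cropped valid data regions.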

Analysis of Color Distortion in Hazy Images (안개가 포함된 영상에서의 색 왜곡 특성 분석)

  • JeongYeop Kim
    • Journal of Platform Technology
    • /
    • v.11 no.6
    • /
    • pp.68-78
    • /
    • 2023
  • In this paper, the color distortion in images containing haze is analyzed. When haze is present in a scene, the color signal reflected from the scene is distorted by the transmittance associated with the haze component. Even when the influence of haze is removed by a conventional de-hazing method, the color distortion tends not to be sufficiently resolved. Khoury et al. used the dark channel prior, a haze model cited in many studies, to determine the degree of color distortion; however, they only confirmed the tendency of the distortion, such as color error values, without a specific analysis of it. This paper analyzes the characteristics of the color distortion and proposes a restoration method that can reduce it. The input images of the databases used by Khoury et al. include the Macbeth color checker, a standard color tool. Using the Macbeth color checker's color values, the color distortion according to changes in haze concentration was analyzed, and a new color distortion model was proposed through modeling. The proposed method obtains a mapping function using the per-step change in chromaticity according to the haze concentration and the ground-truth colors. Since the form of the color distortion varies from step to step in proportion to the haze concentration, it is necessary to obtain an integrated mapping function that operates stably at all steps. In this paper, the improvement in color distortion achieved by the proposed method was evaluated in terms of angular error, and an improvement of about 15% over the conventional method was verified.
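
The angular error used as the evaluation metric above is the angle between an observed RGB vector and its ground-truth counterpart; a standard pure-Python computation is sketched below (the paper's restoration method itself is not reproduced here):

```python
import math

def angular_error_deg(rgb_a, rgb_b):
    """Angle in degrees between two RGB color vectors."""
    dot = sum(a * b for a, b in zip(rgb_a, rgb_b))
    norm_a = math.sqrt(sum(a * a for a in rgb_a))
    norm_b = math.sqrt(sum(b * b for b in rgb_b))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cosine = max(-1.0, min(1.0, dot / (norm_a * norm_b)))
    return math.degrees(math.acos(cosine))
```

Identical hues at different intensities give an error of zero, so the metric isolates chromatic distortion from brightness change.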

A Web-based 'Patterns of Care Study' System for Clinical Radiation Oncology in Korea: Development, Launching, and Characteristics (우리나라 임상방사선종양을 위한 웹 기반 PCS 시스템의 개발과 특성)

  • Kim, Il Han;Chie, Eui Kyu;Oh, Do Hoon;Suh Chang-Ok;Kim, Jong Hoon;Ahn, Yong Chan;Hur, Won-Joo;Chung, Woong Ki;Choi, Doo Ho;Lee, Jae Won
    • Radiation Oncology Journal
    • /
    • v.21 no.4
    • /
    • pp.291-298
    • /
    • 2003
  • Purpose: We report on a web-based system for the Patterns of Care Study (PCS) devised for Korean radiation oncology. This PCS was designed to establish standard tools for clinical quality assurance, to determine basic parameters of radiation oncology processes, and to offer a solid system for cooperative clinical studies and a useful standard database for comparison with other national databases. Materials and Methods: The system consists of a main server with two back-ups in other locations. The program uses a Linux operating system and a MySQL database. Cancers treated with high frequency in radiotherapy departments in Korea from 1998 to 1999 were given developmental priority. Results: The web-based clinical PCS system for radiotherapy at www.pcs.re.kr was developed in early 2003 for cancers of the breast, rectum, esophagus, larynx, and lung, and for brain metastasis. The total number of PCS study items exceeds one thousand. Our PCS system features user-friendliness, double-entry checking, data security, encryption, hard disk mirroring, double back-up, and statistical analysis. Image data can be input as well as alphanumeric data. In addition, programs were constructed for IRB submission, random sampling of data, and departmental structure. Conclusion: For the first time in the field of PCS, we have developed a web-based system and associated working programs. With this system we can gather sample data in a short period and thus save cost, effort, and time. Data audits should be performed to validate the input data. We propose that this system be considered a standard method for PCS or similar types of data collection systems.