• Title/Summary/Keyword: image details


Abdominal Digital Radiography with a Novel Post-Processing Technique: Phantom and Patient Studies (새로운 후처리 기술을 이용한 복부 디지털 방사선 촬영: 팬텀과 환자 연구)

  • Hyein Kang;Eun Sun Lee;Hyun Jeong Park;Byung Kwan Park;Jae Yong Park;Suk-Won Suh
    • Journal of the Korean Society of Radiology
    • /
    • v.81 no.4
    • /
    • pp.920-932
    • /
    • 2020
  • Purpose The aim of this study was to evaluate the diagnostic image quality of low-dose abdominal digital radiography processed with a new post-processing technique. Materials and Methods Abdominal radiographs from phantom pilot studies were post-processed with both the novel method and our institution's conventional method; the appropriate dose for the subsequent patient study of 49 subjects was determined by comparing the image quality of the two preceding studies. Two radiographs of each patient were taken using the conventional protocol and the derived-dose protocol with the proposed post-processing method. Image details and quality were evaluated by two radiologists. Results The radiation dose derived for the patient study was half that of the conventional method. Overall image quality of the half-dose images with the proposed method was significantly higher than that of the conventional method (p < 0.05), with moderate inter-rater agreement (κ = 0.60, 0.47). Conclusion With the new post-processing technique, half-dose abdominal digital radiography can provide image quality comparable to full-dose images.
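The inter-rater agreement above is reported as Cohen's kappa (κ). As a quick illustration of how such a value is computed from two readers' categorical quality scores, here is a minimal sketch; the scores below are invented for illustration and are not data from the study:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa between two raters' categorical scores."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                          # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c)    # agreement expected by chance
             for c in np.union1d(r1, r2))
    return float((po - pe) / (1 - pe))

# illustrative 4-point image-quality scores from two hypothetical readers
a = [3, 4, 4, 2, 3, 4, 3, 2, 4, 3]
b = [3, 4, 3, 2, 3, 4, 4, 2, 4, 3]
k = cohens_kappa(a, b)
```

κ compares observed agreement with the agreement expected by chance; values around 0.4-0.6 are conventionally read as moderate agreement, matching the paper's description.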

Study of Feature Based Algorithm Performance Comparison for Image Matching between Virtual Texture Image and Real Image (가상 텍스쳐 영상과 실촬영 영상간 매칭을 위한 특징점 기반 알고리즘 성능 비교 연구)

  • Lee, Yoo Jin;Rhee, Sooahm
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1057-1068
    • /
    • 2022
  • This paper compares the performance of combinations of feature-point-based matching algorithms, as a study to confirm the feasibility of matching images taken by a user against virtual texture images, with the goal of developing mobile-based real-time image positioning technology. A feature-based matching algorithm comprises the processes of extracting features, calculating descriptors, matching features between the two images, and finally eliminating mismatched features. For the algorithm combinations, we paired the feature-extraction step and the descriptor-calculation step from either the same or different matching algorithms. V-World 3D desktop was used for the virtual indoor texture images; it is currently reinforced with details such as vertical and horizontal protrusions and dents, and some levels are textured with real images. Using this, we constructed a dataset with the virtual indoor texture data as reference images and real images shot at the same locations as target images. After constructing the dataset, the matching success rate and matching processing time were measured, and based on these, a matching algorithm combination was determined for matching real images with virtual images. Based on the characteristics of each matching technique, the combined algorithms were applied to the constructed dataset to confirm their applicability, and performance was also compared when rotation was additionally considered. As a result, the combination of Scale Invariant Feature Transform (SIFT) feature detection and SIFT descriptor calculation had the highest matching success rate, but also the longest matching processing time, whereas the combination of the Features from Accelerated Segment Test (FAST) feature detector with Oriented FAST and Rotated BRIEF (ORB) descriptor calculation achieved a matching success rate similar to the SIFT-SIFT combination with a short matching processing time. Furthermore, the FAST-ORB combination maintained superior matching performance even when a 10° rotation was applied to the dataset. Therefore, the FAST-ORB matching algorithm combination appears suitable for matching between virtual texture images and real images.
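The matching-and-elimination step that all of these detector/descriptor combinations share can be sketched with plain NumPy: nearest-neighbour matching plus Lowe's ratio test to discard ambiguous correspondences. The descriptors below are random stand-ins, not actual SIFT or ORB output:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test; returns (i, j) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:    # keep unambiguous matches only
            matches.append((i, int(j1)))
    return matches

rng = np.random.default_rng(0)
ref = rng.normal(size=(20, 32))                       # stand-in reference descriptors
tgt = ref + rng.normal(scale=0.05, size=ref.shape)    # perturbed "real image" copies
m = ratio_test_match(ref, tgt)
```

In practice the descriptors would come from the chosen detector/descriptor pairing (e.g. FAST features described with ORB), and binary descriptors would use Hamming rather than Euclidean distance.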

A Study on Damage Scale Tracking Technique for Debris Flow Occurrence Sections Using Drone Images (드론영상을 활용한 토석류 발생구간의 피해규모 추적기법)

  • Shin, Hyunsun;Um, Jungsup;Kim, Junhyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.6
    • /
    • pp.517-526
    • /
    • 2017
  • In this study, we used drone images to track the details of a debris flow and analyzed the accuracy of elevation, slope, and area for the damage scale, comparing the drone-based method with the digital topographic map (1/5,000) method and the GPS ground survey method. The results are summarized as follows. First, in the comparison of elevation, the value from the drone images was 3.024 m lower than that of the digital topographic map, whereas for slope the drone images gave values higher by 1.20° on average and by up to 10.46°. Second, the area computed from the drone images was 462 m² larger than that from the digital topographic map; this difference is judged to be more accurate because the drone images reflect the uplift of the terrain. Therefore, compared with the existing methods, the drone image method was very effective in terms of time and manpower.
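The slope comparison above comes from gridded elevation models; per-cell slope angles can be derived from a DEM grid by finite differences. A minimal NumPy sketch (the DEM here is a synthetic tilted plane, not the study data):

```python
import numpy as np

def slope_deg(dem, cell=1.0):
    """Per-cell slope angle in degrees from a gridded DEM (central differences)."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# synthetic DEM: a plane rising 0.1 m per metre in x -> uniform slope
dem = np.tile(0.1 * np.arange(50.0), (50, 1))
s = slope_deg(dem)
```

Comparing such per-cell slopes between a drone-derived DEM and a 1/5,000 digital topographic map is the kind of computation behind the average and maximum slope differences reported above.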

Adaptive Irregular Binning and Its Application to Video Coding Scheme Using Iterative Decoding (적응 불규칙 양자화와 반복 복호를 이용한 비디오 코딩 방식에의 응용)

  • Choi Kang-Sun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.4C
    • /
    • pp.391-399
    • /
    • 2006
  • We propose a novel low-complexity video encoder, at the expense of a complex decoder, in which video frames are intra-coded periodically and the frames in between successive intra-coded frames are coded efficiently using a proposed irregular binning technique. We investigate a method of forming an irregular binning that can quantize any value effectively with only a small number of bins by exploiting the correlation between successive frames. This correlation is exploited further at the decoder, where the quality of reconstructed frames is enhanced gradually by applying POCS (projection onto convex sets). After an image frame is reconstructed from the irregular binning information at the proposed decoder, we can further improve the resulting quality by modifying the reconstructed image with motion-compensated image components from the neighboring frames, which are considered to contain image details. In the proposed decoder, several iterations of these modification and re-projection steps can be invoked. Experimental results show that the performance of the proposed coding scheme is comparable to that of H.264/AVC coding in intra mode. Since the proposed video coding does not require motion estimation at the encoder, it can be considered as an alternative to some versions of H.264/AVC in applications requiring a simple encoder.
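A toy sketch of the irregular-binning idea: bin edges are made dense near a predictor taken from the previous frame and coarse elsewhere, the decoder reconstructs a value as its bin midpoint, and the POCS step projects any refined estimate back into the known bin (a convex set). All names, ranges, and parameters here are illustrative, not the paper's actual construction:

```python
import numpy as np

def irregular_bins(pred, near=2, far=32, lo=0, hi=255):
    """Bin edges dense within +/-16 of the temporal predictor, coarse elsewhere."""
    fine = np.arange(max(lo, pred - 16), min(hi, pred + 16) + 1, near)
    coarse = np.arange(lo, hi + 1, far)
    return np.union1d(np.union1d(fine, coarse), [lo, hi]).astype(float)

def quantize(x, edges):
    """Index of the bin containing x (this index is all the encoder sends)."""
    return int(np.clip(np.searchsorted(edges, x, side="right") - 1,
                       0, len(edges) - 2))

def reconstruct(i, edges):
    """Decoder's first estimate: the bin midpoint."""
    return 0.5 * (edges[i] + edges[i + 1])

def pocs_project(estimate, i, edges):
    """POCS step: project a refined estimate back onto the known bin."""
    return float(np.clip(estimate, edges[i], edges[i + 1]))

edges = irregular_bins(pred=120)   # predictor: co-located previous-frame pixel
idx = quantize(121.0, edges)
```

Because bins are fine only where the new value is likely to fall, far fewer bins than full 8-bit levels are needed, which is the source of the coding efficiency.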

Colorization Algorithm Using Wavelet Packet Transform (웨이블릿 패킷 변환을 이용한 흑백 영상의 칼라화 알고리즘)

  • Ko, Kyung-Woo;Kwon, Oh-Seol;Son, Chang-Hwan;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.1
    • /
    • pp.1-10
    • /
    • 2008
  • Colorization algorithms, which hide color information in gray images and later extract it to recover color images, have been developed recently. In these methods, it is important to minimize the loss of original information while the color components are embedded and extracted. In this paper, we propose a colorization method using a wavelet packet transform in order to embed color components with minimal loss of original information; compensation of color saturation in the recovered color images is also performed. In the color-to-gray process, an input RGB image is converted into Y, Cb, and Cr images, and a wavelet packet transform is applied to the Y image. After analyzing the total energy of each sub-band, the color components are embedded into the two sub-bands containing the least energy on the Y image. This makes it possible not only to hide color components in the Y image, but also to recover the Y image with minimal loss of original information. In the gray-to-color process, the color saturation of the recovered color images is decreased by the printing and scanning process. To increase color saturation, the characteristic curve between printer and scanner, which estimates the change of pixel values before and after printing and scanning, is used to compensate the pixel values of the printed and scanned gray images. In addition, a scaling method for the Cb and Cr components is applied in the gray-to-color process. Experiments show that the proposed method improves both boundary details and color saturation in the recovered color images.
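The sub-band energy analysis at the heart of the color-to-gray step can be illustrated with a one-level orthonormal Haar transform standing in for the wavelet packet transform (a simplification: the paper uses a full wavelet packet decomposition):

```python
import numpy as np

def haar2(img):
    """One-level 2D orthonormal Haar transform -> LL, LH, HL, HH sub-bands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return (a+b+c+d)/2, (a-b+c-d)/2, (a+b-c-d)/2, (a-b-c+d)/2

# smooth luminance ramp: energy concentrates in the LL sub-band
y = np.add.outer(np.arange(64.0), np.arange(64.0))
bands = dict(zip(["LL", "LH", "HL", "HH"], haar2(y)))
energies = {k: float(np.sum(v ** 2)) for k, v in bands.items()}
# color-to-gray step: embed Cb/Cr into the two lowest-energy sub-bands
targets = sorted(energies, key=energies.get)[:2]
```

Because the transform is orthonormal, band energies sum to the image energy (Parseval), so the lowest-energy bands are exactly where overwriting coefficients with chroma data destroys the least luminance information.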

A study on image region analysis and image enhancement using detail descriptor (디테일 디스크립터를 이용한 이미지 영역 분석과 개선에 관한 연구)

  • Lim, Jae Sung;Jeong, Young-Tak;Lee, Ji-Hyeok
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.6
    • /
    • pp.728-735
    • /
    • 2017
  • With the proliferation of digital devices, considerable additive white Gaussian noise is generated while acquiring digital images. The best-known denoising methods focus on eliminating the noise, so detail components that carry image information are removed in proportion as the noise is eliminated. The proposed algorithm provides a method that preserves the details while effectively removing the noise. Its goal is to separate meaningful detail information from image noise using edge strength and edge connectivity. Consequently, even as the noise level increases, it yields better denoising results than the benchmark methods because it extracts connected detail-component information. The proposed method effectively eliminated noise at various noise levels; compared to the benchmark algorithms, it shows higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) values. The high SSIM values confirm that the denoising results are consistent with the human visual system (HVS).
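A minimal stand-in for the edge-strength/edge-connectivity idea: keep only strong-gradient pixels that also have enough strong 8-neighbours, so isolated strong responses (likely noise) are discarded while connected details survive. The threshold and neighbour count are illustrative, not the paper's descriptor:

```python
import numpy as np

def detail_mask(img, strength_thr=30.0, min_neighbors=3):
    """Flag connected detail: strong-gradient pixels with enough strong 8-neighbours."""
    gy, gx = np.gradient(img.astype(float))
    strong = np.hypot(gx, gy) > strength_thr          # edge strength
    h, w = strong.shape
    p = np.pad(strong, 1).astype(int)
    # count strong 8-neighbours by shift-and-sum      # edge connectivity
    neigh = sum(p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
    return strong & (neigh >= min_neighbors)

img = np.zeros((64, 64))
img[:, 32:] = 100.0      # a true step edge: connected detail
img[5, 5] = 255.0        # an isolated impulse: noise, not detail
m = detail_mask(img)
```

The mask retains the full step edge but rejects the impulse and the spurious responses around it, which is the behaviour the descriptor needs so denoising can be applied aggressively off-mask.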

Medical Image Denoising using Wavelet Transform-Based CNN Model

  • Seoyun Jang;Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.10
    • /
    • pp.21-34
    • /
    • 2024
  • In medical images such as MRI (Magnetic Resonance Imaging) and CT (Computed Tomography) images, noise removal has a significant impact on the performance of medical imaging systems. Recently, the introduction of deep learning into image processing has improved the performance of noise removal methods. However, there is a limit to removing noise in the image domain while preserving details. In this paper, we propose a wavelet transform-based CNN (Convolutional Neural Network) model, the WT-DnCNN (Wavelet Transform-Denoising Convolutional Neural Network) model, to improve noise removal performance. This model first divides the noisy image into frequency bands using the wavelet transform, and then applies the existing DnCNN model to the corresponding frequency bands to remove the noise. To evaluate the performance of the proposed WT-DnCNN model, experiments were conducted on MRI and CT images corrupted by various noises, namely Gaussian, Poisson, and speckle noise. The results show that the WT-DnCNN model is qualitatively superior to the traditional BM3D (Block-Matching and 3D Filtering) filter as well as to the existing deep learning models DnCNN and CDAE (Convolutional Denoising AutoEncoder); in quantitative comparison, the PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index Measure) values were 36~43 and 0.93~0.98 for MRI images and 38~43 and 0.95~0.98 for CT images, respectively. In the comparison of execution speed, the WT-DnCNN model took much less time than the BM3D model, but longer than the DnCNN model because of the added wavelet transform.
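The band-split-then-denoise pipeline can be sketched with a one-level Haar transform and simple soft-thresholding standing in for the per-band DnCNN (a loud simplification: this replaces the trained CNN with a fixed shrinkage rule purely for illustration):

```python
import numpy as np

def haar2(img):
    """One-level 2D orthonormal Haar analysis -> LL, LH, HL, HH."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return (a+b+c+d)/2, (a-b+c-d)/2, (a+b-c-d)/2, (a-b-c+d)/2

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2 (synthesis)."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def soft(x, t):
    """Soft-threshold shrinkage: stand-in for the per-band denoiser."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(noisy, sigma):
    ll, lh, hl, hh = haar2(noisy)            # split into frequency bands
    t = 3.0 * sigma                          # in WT-DnCNN, a CNN acts per band
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
noisy = clean + rng.normal(scale=10.0, size=clean.shape)
out = denoise(noisy, 10.0)
```

The WT-DnCNN model follows the same split/denoise/reconstruct structure, with the learned DnCNN replacing the fixed threshold in each band, which is also why it costs more time than plain DnCNN.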

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the start of the 21st century, various high-quality services have emerged with the growth of the internet and information and communication technologies. In particular, the E-commerce industry, in which Amazon and eBay stand out, is growing explosively. As E-commerce grows, customers can easily compare products and buy what they want because more products are registered at online shopping malls. However, a problem has arisen with this growth: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search for desired products with a generalized keyword, too many products come up as results; on the contrary, few products are found if customers type in product details, because concrete product attributes are rarely registered. In this situation, recognizing text in images automatically with a machine can be a solution. Because the bulk of product details are written in catalogs in image format, most product information cannot be found by text input in current text-based search systems. If the information in images can be converted to text format, customers can search for products by product details, which makes shopping more convenient. Various existing OCR (Optical Character Recognition) programs can recognize text in images, but they are hard to apply to catalogs because they have trouble recognizing text in certain circumstances, such as when the text is not big enough or the fonts are inconsistent. Therefore, this research proposes a way to recognize keywords in catalogs with deep learning algorithms, the state of the art in image recognition since the 2010s.
The Single Shot MultiBox Detector (SSD), a well-regarded model for object detection, can be used with its structure redesigned to account for the differences between text and objects. However, the SSD model needs a large amount of labeled training data, because deep learning models of this kind are trained by supervised learning. To collect data, one could manually label the location and class of text in catalogs, but manual collection raises many problems: some keywords would be missed because humans make mistakes while labeling, and it is too time-consuming given the scale of data needed, or too costly if many workers are hired to shorten the time. Furthermore, if specific keywords need to be trained, finding images that contain those words is also difficult. To solve this data issue, this research developed a program that creates training data automatically. The program generates catalog-like images containing various keywords and pictures and saves the location information of the keywords at the same time. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves: the SSD model recorded a recognition rate of 81.99% with 20,000 data items created by the program. Moreover, this research tested the efficiency of the SSD model under different data conditions to analyze which features of the data influence text-recognition performance. As a result, the number of labeled keywords, the addition of overlapping keyword labels, the existence of unlabeled keywords, the spaces among keywords, and differences in background images were found to affect the performance of the SSD model. This analysis can guide performance improvement of the SSD model, or of other deep-learning-based text recognizers, through high-quality data.
The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improving search systems in E-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the product details written in catalogs.
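The core of such an automatic training-data generator can be sketched as follows: place each keyword at a random, non-overlapping position on a catalog-sized canvas and record SSD-style (class, bounding-box) labels. Actual glyph rendering onto pixels (e.g. with an image library) is omitted here; the canvas size, character metrics, and keyword list are all illustrative, not the paper's values:

```python
import random

def overlaps(b1, b2):
    """Axis-aligned overlap test for (x, y, w, h) boxes."""
    x1, y1, w1, h1 = b1
    x2, y2, w2, h2 = b2
    return not (x1 + w1 <= x2 or x2 + w2 <= x1 or
                y1 + h1 <= y2 or y2 + h2 <= y1)

def make_annotations(keywords, img_w=600, img_h=800,
                     char_w=16, char_h=24, seed=0):
    """Place each keyword at a random non-overlapping spot and return
    SSD-style labels: (keyword, x, y, width, height)."""
    rng = random.Random(seed)
    boxes, labels = [], []
    for kw in keywords:
        w, h = char_w * len(kw), char_h
        for _ in range(100):                      # retry on collision
            x, y = rng.randrange(img_w - w), rng.randrange(img_h - h)
            if not any(overlaps((x, y, w, h), b) for b in boxes):
                boxes.append((x, y, w, h))
                labels.append((kw, x, y, w, h))
                break
    return labels

labels = make_annotations(["cotton", "waterproof", "100cm", "machine-wash"])
```

Because the generator itself decides where each keyword goes, the ground-truth boxes are exact by construction, avoiding the mislabeling and cost problems of manual annotation described above.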

Support Vector Machine and Improved Adaptive Median Filtering for Impulse Noise Removal from Images (영상에서 Support Vector Machine과 개선된 Adaptive Median 필터를 이용한 임펄스 잡음 제거)

  • Lee, Dae-Geun;Park, Min-Jae;Kim, Jeong-Uk;Kim, Do-Yoon;Kim, Dong-Wook;Lim, Dong-Hoon
    • The Korean Journal of Applied Statistics
    • /
    • v.23 no.1
    • /
    • pp.151-165
    • /
    • 2010
  • Images are often corrupted by impulse noise due to noisy sensors or channel transmission errors. A filter based on an SVM (Support Vector Machine) and improved adaptive median filtering is proposed to preserve image details while suppressing impulse noise for image restoration. Our approach uses an SVM impulse detector to judge whether each input pixel is noise; if a pixel is detected as noisy, the improved adaptive median filter is used to replace it. To demonstrate the performance of the proposed filter, extensive simulation experiments were conducted under both salt-and-pepper and random-valued impulse noise models to compare our method with many other well-known filters, using qualitative and quantitative measures such as PSNR and MAE. Experimental results indicate that the proposed filter performs significantly better than many other existing filters.
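A sketch of the detect-then-filter idea: an impulse detector flags candidate pixels (here a simple extreme-value test stands in for the trained SVM, which the paper uses for this step), and only flagged pixels are replaced by an adaptive median whose window grows until the local median is itself not an impulse. Unflagged pixels, and hence image details, are left untouched:

```python
import numpy as np

def adaptive_median(img, max_win=7):
    """Replace flagged impulse pixels with an adaptive median; the window grows
    until the local median is itself not an impulse value."""
    out = img.astype(float).copy()
    pad = max_win // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            # stand-in impulse detector (the paper trains an SVM for this step)
            if img[i, j] not in (0.0, 255.0):
                continue
            for k in range(1, pad + 1):          # grow window: 3x3, 5x5, 7x7
                win = p[i + pad - k:i + pad + k + 1,
                        j + pad - k:j + pad + k + 1]
                med = float(np.median(win))
                if med not in (0.0, 255.0):      # median is not an impulse
                    out[i, j] = med
                    break
    return out

rng = np.random.default_rng(0)
clean = np.full((32, 32), 128.0)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.1             # 10% salt-and-pepper noise
noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
out = adaptive_median(noisy)
```

Note that this extreme-value detector only suits salt-and-pepper noise; for random-valued impulses, the learned SVM detector is precisely what makes the paper's method work.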

The role of EL2 in the infrared transmission images of defects in semi-insulating GaAs

  • Kang, Seong-Jun;Lee, Sung-Seok
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.6
    • /
    • pp.725-728
    • /
    • 2011
  • Infrared transmission images from semi-insulating GaAs wafers were for years considered directly related to quantum absorption by electrons in the fundamental states of deep centers, especially EL2. The satisfactory correspondence of these images with the dislocations revealed by etching, X-ray topography, or infrared tomography led to the opinion that a strong concentration of EL2 centers was to be expected in the immediate vicinity of the dislocations. More recent work indicates that, contrary to the expected behavior, photoquenching of the transmission images at T = 80 K does not appreciably change the image structure itself but largely affects the uniform background level of absorption. Such investigations show that the transmission images of isolated dislocations (indium-doped materials) or cell structures of tangled dislocations (undoped materials) can be partly attributed to scattered light; similar operation at T = 10 K removes the dark features associated with EL2 but still preserves the skeleton of the pattern, which is due to scattering. One result of the measurements is that dislocations must no longer be considered inexhaustible EL2 reservoirs. The lifetime of the photoquenching mechanism is shown to differ for EL2 centers located close to the dislocations and for those in the matrix. In this paper we develop the details of infrared-image photoquenching experiments in the vicinity of dislocations; undoped and In-doped GaAs materials are shown. These results are discussed in the light of surface-etching experiments.