• Title/Summary/Keyword: Reduced-size image

Parallel Implementations of Digital Focus Indices Based on Minimax Search Using Multi-Core Processors

  • Kim, HyungTae;Lee, Duk-Yeon;Choi, Dongwoon;Kang, Jaehyeon;Lee, Dong-Wook
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.542-558
    • /
    • 2023
  • A digital focus index (DFI) is a value used to determine image focus in scientific apparatus and smart devices. Automatic focus (AF) is an iterative and time-consuming procedure; however, its processing time can be reduced using a graphics processing unit (GPU) and a multi-core processor (MCP). In this study, parallel architectures of a minimax search algorithm (MSA) are applied to two DFIs: the range algorithm (RA) and image contrast (CT). These DFIs are based on a histogram, but parallel computation of a histogram is conventionally inefficient because of bank conflicts in shared memory. The parallel architectures of RA and CT are therefore built on a parallel reduction for the MSA, which compares image pixel pairs in parallel and halves the working array in every step; the array size eventually decreases to one, and the minimax is determined at the final reduction. Kernels for the architectures are written with open-source software to keep them relatively platform-independent. The kernels were tested on a hexa-core PC and an embedded device using Lenna images of various sizes matching the resolutions of industrial cameras. The performance of the kernels for the DFIs was investigated in terms of processing speed and computational acceleration; the maximum acceleration was 32.6× in the best case, and the MCP exhibited the higher performance.
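The abstract does not include the kernel code; the following is a minimal NumPy sketch of the parallel-reduction idea it describes (pairwise comparison of pixel values, halving the working array each step until only the minimum and maximum remain). It is an illustration only, not the authors' GPU/MCP kernels, and the range-style DFI shown in the comment is an assumption.

```python
import numpy as np

def minimax_reduction(pixels):
    """Illustrative pairwise min/max reduction: compare elements in pairs
    and halve the working arrays each step until one value remains."""
    lo = pixels.astype(np.float32).ravel()
    hi = lo.copy()
    while lo.size > 1:
        if lo.size % 2:                      # pad odd-length arrays with the last value
            lo = np.append(lo, lo[-1])
            hi = np.append(hi, hi[-1])
        half = lo.size // 2
        lo = np.minimum(lo[:half], lo[half:])  # parallel-style pairwise compare
        hi = np.maximum(hi[:half], hi[half:])
    return lo[0], hi[0]

# Illustrative range-style DFI from the reduced min/max:
# mn, mx = minimax_reduction(image); dfi_range = mx - mn
```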

Low Noise Vacuum Cleaner Design (저소음 청소기 개발)

  • Joo, Jae-Man;Lee, Jun-Hwa;Hong, Seun-Gee;Oh, Jang-Keun;Song, Hwa-Gyu
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2007.11a
    • /
    • pp.939-942
    • /
    • 2007
  • The vacuum cleaner is a familiar household product that removes dust from our surroundings. However well it cleans, many people avoid it because of its loud noise, which disturbs daily life: phone calls, TV watching, conversation, and so on. To reduce these inconveniences, this paper studies noise reduction methods and the systematic design of a low-noise vacuum cleaner. First, a sound quality investigation was performed to determine the noise level and quality at which TV watching and phone calls remain possible. Based on European and domestic customer sound quality survey results, guidelines for sound power, peak noise level, and the target sound spectrum were established. Second, precise product sound spectra were assigned to each part based on the sound quality results: the fan-motor, brush, main-body, and cyclone spectra were set according to their contribution levels so that they combine into the final target sound. The fan-motor is the major noise source of a vacuum cleaner; in particular, its peak sounds, the RPM peak and the BPF peak, are what make people nervous. To reduce these peaks, the interaction between the high-speed impeller and the diffuser was examined; through many experimental and numerical tests, the operating points were investigated and the flow path area between the diffusers was optimized. As a bagless device, the cyclones are another major noise source, and previous research was adopted to reduce their noise. The brush is the most difficult part to quiet because its noise arises entirely from aero-acoustic phenomena; numerical analysis helped clarify the flow structure and pattern, and many experimental tests were performed, optimizing the gap between the carpet and the brush and redesigning the flow paths to lower the noise. A large noise reduction was achieved while preserving cleaning efficiency and handling power. With all of the above parts, the main-body design was studied. For a systematic design, a configuration design development technique was introduced from airplane design and evolved together with each component design. In the first configuration stage, the fan-motor installation position was investigated and ten configuration ideas were developed and tested. In the second stage, reduced-size, compressed configuration candidates were tested and evaluated against the major factors together: noise, power, mass-production feasibility, size, and flow path; a configuration that reduces noise but degrades other performance is ineffective. In the third stage, the cyclones were introduced and the size was reduced once more, and the fourth through seventh configurations evolved in size and design image together with noise and the other performance indexes, finally yielding a configuration with a much lower overall noise level. All of these investigations were adopted into the vacuum cleaner design, and final customer satisfaction tests in Europe were performed: first-grade sound quality and the lowest noise level among bagless vacuum cleaners were achieved.


A Study on the Restoration of a Low-Resolution Iris Image into a High-Resolution One Based on Multiple Multi-Layered Perceptrons (다중 다층 퍼셉트론을 이용한 저해상도 홍채 영상의 고해상도 복원 연구)

  • Shin, Kwang-Yong;Kang, Byung-Jun;Park, Kang-Ryoung;Shin, Jae-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.3
    • /
    • pp.438-456
    • /
    • 2010
  • Iris recognition uses the unique iris pattern of a user to identify that person. To obtain good recognition performance, it is reported that the diameter of the iris region should be greater than 200 pixels in the captured iris image, so previous iris systems used zoom-lens cameras, which increase the size and cost of the system. To overcome these problems, we propose a new method for enhancing the accuracy of iris recognition on low-resolution iris images captured without a zoom lens. This research is novel in the following two ways compared to previous works. First, it is the first to analyze the performance degradation of iris recognition caused by the decrease of image resolution alone, excluding other factors such as image blurring and occlusion by eyelids and eyelashes. Second, to restore a high-resolution iris image from a single low-resolution one, we propose a new method based on multiple multi-layered perceptrons (MLPs) that are trained according to the edge direction of the iris patterns, which greatly enhances the accuracy of iris recognition on the restored images. Experimental results showed that when iris images down-sampled by 6% compared to the original image were restored into high-resolution ones by the proposed method, the EER of iris recognition was reduced by 0.133% (from 1.485% to 1.352%) compared with bi-linear interpolation.
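As an illustration of the routing idea described in the abstract (one MLP per edge direction, chosen by the dominant orientation of the low-resolution patch), here is a small Python sketch. The direction bins, the patch and output sizes, and the use of scikit-learn's MLPRegressor are assumptions made for the sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# One MLP per coarse edge direction; each maps a flattened low-resolution patch
# to a flattened high-resolution patch (assumed 16x16 here for illustration).
DIRECTIONS = ["horizontal", "diag_45", "vertical", "diag_135"]
mlps = {d: MLPRegressor(hidden_layer_sizes=(64,), max_iter=500) for d in DIRECTIONS}

def dominant_direction(patch):
    """Pick a direction bin from the patch's (roughly) dominant gradient angle."""
    gy, gx = np.gradient(patch.astype(np.float32))
    angle = np.rad2deg(np.arctan2(gy, gx).mean()) % 180   # crude sketch of direction estimation
    for upper, name in [(22.5, "horizontal"), (67.5, "diag_45"),
                        (112.5, "vertical"), (157.5, "diag_135")]:
        if angle < upper:
            return name
    return "horizontal"

def restore_patch(low_res_patch):
    """Route the patch to its direction-specific MLP (assumed already fitted)."""
    mlp = mlps[dominant_direction(low_res_patch)]
    hr = mlp.predict(low_res_patch.ravel()[None, :])
    return hr.reshape(16, 16)   # hypothetical high-resolution patch size
```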

Multi-mode Embedded Compression Algorithm and Architecture for Code-block Memory Size and Bandwidth Reduction in JPEG2000 System (JPEG2000 시스템의 코드블록 메모리 크기 및 대역폭 감소를 위한 Multi-mode Embedded Compression 알고리즘 및 구조)

  • Son, Chang-Hoon;Park, Seong-Mo;Kim, Young-Min
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.8
    • /
    • pp.41-52
    • /
    • 2009
  • In Motion JPEG2000 encoding, the huge bandwidth required for data memory access is the bottleneck limiting system performance. To alleviate this bandwidth requirement, a new embedded compression (EC) algorithm with only a small drop in image quality is devised. For both random accessibility and low latency, a very simple and efficient entropy coding algorithm is proposed. The proposed multi-mode algorithms achieve significant memory bandwidth reductions (about 53~81%) and reduce the code-block memory to about half its size, without requiring any modification of the JPEG2000 standard algorithm.

Fine-scalable SPIHT Hardware Design for Frame Memory Compression in Video Codec

  • Kim, Sunwoong;Jang, Ji Hun;Lee, Hyuk-Jae;Rhee, Chae Eun
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.17 no.3
    • /
    • pp.446-457
    • /
    • 2017
  • In order to reduce the size of frame memory or bus bandwidth, frame memory compression (FMC) recompresses the reconstructed or reference frames of video codecs. This paper proposes a novel FMC design based on discrete wavelet transform (DWT) - set partitioning in hierarchical trees (SPIHT), which supports fine-scalable throughput and is area-efficient. In the proposed design, multiple cores with small block sizes are used in parallel instead of a single core with a large block size, and an appropriate pipelining schedule is proposed. Compared to the previous design, the proposed design achieves a processing speed closer to the target system speed and is therefore more efficient in hardware utilization. In addition, a scheme in which two passes of SPIHT are merged into one pass, called the merged refinement pass (MRP), is proposed. As the number of shifters decreases and the bit-width of the remaining shifters is reduced, the size of the SPIHT hardware decreases significantly. The proposed FMC encoder and decoder designs achieve throughputs of 4,448 and 4,000 Mpixels/s, respectively, with gate counts of 76.5K and 107.8K. When the proposed design is applied to the high efficiency video codec (HEVC), it achieves 1.96% lower average BDBR and 0.05 dB higher average BDPSNR than the previous FMC design.

Method for Determining Variable-Block Size of Depth Picture for Plane Coding (깊이 화면의 평면 부호화를 위한 가변 블록 크기 결정 방법)

  • Kwon, Soon-Kak;Lee, Dong-Seok
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.22 no.3
    • /
    • pp.39-47
    • /
    • 2017
  • A depth picture can be encoded in the plane coding mode, which treats a part of the picture as a plane. In this paper, we propose a method of determining the variable block size for variable-block coding in the plane coding mode for depth pictures. A depth picture can be plane-coded by estimating the plane that is closest to the pixels in a block using the depth information. Variable-sized block coding in the plane coding mode is applied as follows. The prediction error between the depths predicted by the plane estimation and the measured depths is calculated. If the prediction error is below a threshold, the block is encoded at the current size; otherwise, the block is divided and the procedure is repeated. If the block would be divided below the minimum size, the block is not encoded in the plane coding mode. Simulation results for the proposed method show that the number of encoded blocks is reduced to 19% compared with the fixed-size block method for a depth picture composed of one plane.
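The block-size decision described above is essentially a quadtree recursion. The Python sketch below illustrates it with a least-squares plane fit and a mean-absolute prediction error; the error metric, threshold, and block sizes are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def plane_fit_error(block):
    """Least-squares plane z = a*x + b*y + c over the block's depth values;
    returns the mean absolute prediction error."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeff, *_ = np.linalg.lstsq(A, block.ravel().astype(np.float64), rcond=None)
    pred = (A @ coeff).reshape(h, w)
    return np.mean(np.abs(block - pred))

def split_blocks(depth, x, y, size, threshold, min_size, out):
    """Recursively decide the block size for plane coding (illustrative)."""
    block = depth[y:y + size, x:x + size]
    if plane_fit_error(block) <= threshold:
        out.append((x, y, size, "plane"))        # encode at the current size
    elif size // 2 >= min_size:
        half = size // 2
        for dy in (0, half):                     # divide the block and repeat
            for dx in (0, half):
                split_blocks(depth, x + dx, y + dy, half, threshold, min_size, out)
    else:
        out.append((x, y, size, "non-plane"))    # would fall below minimum size

# Usage (depth_map is a hypothetical 2D depth array whose sides are multiples of 64):
# blocks = []; split_blocks(depth_map, 0, 0, 64, threshold=2.0, min_size=8, out=blocks)
```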

Dose Change according to Diameter Change of the Cone for Dental X-ray Apparatus (치과구내용 X선발생기의 조사통 직경 변화에 따른 선량변화)

  • Ahn, Sung-Min;Oh, Jung-Hoan;Choi, Jung-Hyun;Shin, Gwi-Soon;Kim, Sung-Chul
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.3
    • /
    • pp.266-270
    • /
    • 2010
  • For intra-oral dental X-ray apparatus, the diameter (field size) at the tip of the cone must be less than 7 cm according to the Diagnostic Radiation Equipment Safety Management regulations. However, since exposures are made at close range to the skin, deviation from the field is not expected to be large. Also, as the size of the film or digital detector used in intra-oral radiography is $3\times4cm^2$, the regulated size can be considered much larger than necessary, and the patient dose from short-distance exposure cannot be ignored. Therefore, the effects on patient dose, resolution, and image quality were examined while reducing the cone diameter in 0.5 cm steps. The results showed that the patient dose was reduced and a partial improvement in image contrast was observed. It can therefore be concluded that further investigation may be worthwhile in terms of policy.

Compact CNN Accelerator Chip Design with Optimized MAC And Pooling Layers (MAC과 Pooling Layer을 최적화시킨 소형 CNN 가속기 칩)

  • Son, Hyun-Wook;Lee, Dong-Yeong;Kim, HyungWon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.9
    • /
    • pp.1158-1165
    • /
    • 2021
  • This paper proposes a CNN accelerator in which the pooling layer operation is incorporated into the multiply-and-accumulate (MAC) unit to reduce the memory size. To optimize the memory and data-path circuits, quantized 8-bit integer weights, pre-trained on the MNIST data set, are used instead of 32-bit floating-point weights. To reduce the chip area, the proposed CNN model is reduced to one convolutional layer, one 4*4 max pooling layer, and two fully connected layers, and all operations use a dedicated MAC with approximate adders and multipliers. A 94% reduction in internal memory size is achieved by performing the convolution and the pooling operations simultaneously in the proposed architecture. The proposed accelerator chip is designed in a TSMC 65 nm GP CMOS process and occupies 0.8*0.9 = 0.72 mm2, about half the size of our previous design. The presented CNN accelerator chip achieves 94% accuracy and an inference time of 77 us per MNIST image.
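The memory saving comes from never materializing the full convolution feature map: each pooling window's convolution outputs are reduced to a running maximum as they are computed. The NumPy sketch below illustrates only that fusion idea; it does not model the accelerator's 8-bit fixed-point datapath or its approximate adders and multipliers.

```python
import numpy as np

def conv_pool_fused(image, kernel, pool=4):
    """Fused convolution + max pooling: for each pooling window, compute the
    convolution outputs it covers and keep only their maximum, so the full
    convolution feature map is never stored."""
    kh, kw = kernel.shape
    conv_h = image.shape[0] - kh + 1
    conv_w = image.shape[1] - kw + 1
    out = np.zeros((conv_h // pool, conv_w // pool), dtype=np.float32)
    for py in range(out.shape[0]):
        for px in range(out.shape[1]):
            best = -np.inf
            for dy in range(pool):                   # rows of the pooling window
                for dx in range(pool):               # cols of the pooling window
                    y, x = py * pool + dy, px * pool + dx
                    val = np.sum(image[y:y + kh, x:x + kw] * kernel)
                    best = max(best, val)            # keep only the running max
            out[py, px] = best
    return out
```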

Comparison of Effectiveness about Image Quality and Scan Time According to Reconstruction Method in Bone SPECT (영상 재구성 방법에 따른 Bone SPECT 영상의 질과 검사시간에 대한 실효성 비교)

  • Kim, Woo-Hyun;Jung, Woo-Young;Lee, Ju-Young;Ryu, Jae-Kwang
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.9-14
    • /
    • 2009
  • Purpose: In nuclear medicine today, many studies and efforts are being made to reduce the scan time as well as the waiting time between injection of the radiopharmaceutical and the examination. Several approaches are used clinically, such as developing new radiopharmaceutical compounds that are absorbed into the target organs more quickly and shortening the acquisition time by increasing the number of gamma camera detectors, and each equipment manufacturer has improved its image processing techniques to reduce scan time. In this paper, we analyze the difference in image quality between the FBP and 3D OSEM reconstruction methods, which are commercialized and clinically applied, and the Astonish method (Philips' fast iterative reconstruction), as well as the difference in image quality with scan time. Materials and Methods: We investigated 32 patients who underwent Bone SPECT from June to July 2008 at the department of nuclear medicine, ASAN Medical Center in Seoul. Images were acquired at 40 sec/frame and 20 sec/frame using a Philips PRECEDENCE 16 gamma camera and then reconstructed with the Astonish, 3D OSEM, and FBP methods. For qualitative analysis, a blinded test was performed in which the interpreting physicians reviewed the images from each reconstruction method. For quantitative analysis, the target to non-target ratio was analyzed by drawing ROIs centered on the lesions, using the same ROI location and size for each image. Results: In the qualitative analysis, there was no significant difference in image quality with acquisition time. In the quantitative analysis, the images reconstructed with the Astonish method showed good quality, with better sharpness and a clearer distinction between lesions and surrounding tissue. The mean and standard deviation of the target to non-target ratio for the 40 sec/frame and 20 sec/frame images were: Astonish (40 sec: $13.91{\pm}5.62$, 20 sec: $13.88{\pm}5.92$), 3D OSEM (40 sec: $10.60{\pm}3.55$, 20 sec: $10.55{\pm}3.64$), and FBP (40 sec: $8.30{\pm}4.44$, 20 sec: $8.19{\pm}4.20$). Comparing the target to non-target ratios of the 20 sec and 40 sec images, Astonish (t=0.16, p=0.872), 3D OSEM (t=0.51, p=0.610), and FBP (t=0.73, p=0.469) showed no statistically significant difference in image quality with acquisition time, although for FBP some individual images differed between 40 sec/frame and 20 sec/frame due to various factors. Conclusions: While solutions to reduce nuclear medicine scan time are sought, the development of nuclear medicine hardware has slowed whereas software has advanced relentlessly. Thanks to the development of computer hardware, image reconstruction time has been reduced, and the expanded storage capacity enables iterative methods that previously could not be performed because of technical limits. As image processing techniques develop, scan time is reduced while image quality remains at a similar level. Maintaining examination quality while reducing scan time lessens patient discomfort and waiting time, improves the accessibility of nuclear medicine examinations, and provides better service to patients and to the clinical physicians who order the examinations, thereby improving the standing of the department of nuclear medicine.
Concurrent Imaging: a new function that sets up each image acquisition parameter in advance and enables images with various parameters to be acquired simultaneously in a single examination.
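For the quantitative analysis described above, the target to non-target ratio is the mean counts in a lesion ROI divided by the mean counts in a background ROI, and the 40 sec and 20 sec values are compared per patient with a paired t-test. The Python sketch below illustrates this with SciPy; the ROI handling (boolean masks) and the numeric values are hypothetical stand-ins, not the study data.

```python
import numpy as np
from scipy import stats

def target_to_nontarget(image, target_mask, background_mask):
    """Mean counts in the lesion ROI divided by mean counts in a background ROI."""
    return image[target_mask].mean() / image[background_mask].mean()

# Per-patient ratios from the 40 sec/frame and 20 sec/frame reconstructions of the
# same patients (hypothetical numbers, computed with target_to_nontarget in practice).
ratios_40s = np.array([13.2, 14.5, 12.9, 15.1])
ratios_20s = np.array([13.0, 14.7, 12.6, 15.0])

t_stat, p_value = stats.ttest_rel(ratios_40s, ratios_20s)   # paired comparison
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference
```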


3D Point Cloud Reconstruction Technique from 2D Image Using Efficient Feature Map Extraction Network (효율적인 feature map 추출 네트워크를 이용한 2D 이미지에서의 3D 포인트 클라우드 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.408-415
    • /
    • 2022
  • In this paper, we propose a technique for reconstructing a 3D point cloud from 2D images using an efficient feature-map extraction network. The originality of the proposed method is as follows. First, we use a new feature-map extraction network that is about 27% more memory-efficient than existing techniques. The proposed network does not reduce the feature-map size in the middle of the deep learning network, so important information required for 3D point cloud reconstruction is not lost. The memory increase caused by the non-reduced feature-map size is handled by reducing the number of channels and by efficiently configuring the network to be shallow. Second, by preserving the high-resolution features of the 2D image, accuracy can be improved over the conventional technique: the feature map extracted from the non-reduced image contains more detailed information than in existing methods, which further improves the reconstruction accuracy of the 3D point cloud. Third, we use a divergence loss that does not require shooting information. Requiring not only the 2D image but also the shooting angle for training means the dataset must contain detailed information, which makes the dataset difficult to construct. In this paper, the reconstruction accuracy of the 3D point cloud is increased by increasing the diversity of information through randomness, without additional shooting information. To evaluate the proposed method objectively on the ShapeNet dataset, using the same protocol as the comparison papers, the proposed method obtains a CD value of 5.87, an EMD value of 5.81, and 2.9G FLOPs. Lower CD and EMD values mean the reconstructed 3D point cloud is closer to the original, and fewer FLOPs mean the deep learning network requires less memory. The CD, EMD, and FLOPs results therefore show about a 27% improvement in memory and a 6.3% improvement in accuracy compared with the methods of other papers, demonstrating the objective performance of the proposed method.
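The CD metric reported above is the Chamfer Distance between the reconstructed and ground-truth point clouds. Below is a minimal NumPy implementation of the symmetric form for small clouds; the exact normalization and scaling used in the paper may differ.

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Symmetric Chamfer Distance between two point clouds of shape (N, 3) and (M, 3):
    average nearest-neighbour squared distance in both directions."""
    diff = pred[:, None, :] - gt[None, :, :]   # pairwise differences, shape (N, M, 3)
    d2 = np.sum(diff ** 2, axis=-1)            # pairwise squared distances, shape (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Usage with random stand-in clouds (a real evaluation would use ShapeNet samples):
pred = np.random.rand(1024, 3)
gt = np.random.rand(1024, 3)
print(chamfer_distance(pred, gt))
```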