• Title/Summary/Keyword: Resizing Algorithm

Search Results: 50

Adaptive Random Testing for Integrated System based on Output Distribution Estimation (통합 시스템을 위한 출력 분포 기반 적응적 랜덤 테스팅)

  • Shin, Seung-Hun;Park, Seung-Kyu;Choi, Kyung-Hee;Jung, Ki-Hyun
    • Journal of the Korea Society for Simulation / v.20 no.3 / pp.19-28 / 2011
  • Adaptive Random Testing (ART) aims to enhance the performance of pure random testing by detecting failure regions in software. An ART algorithm generates effective test cases, requiring fewer tests than pure random testing. However, all ART algorithms proposed so far are designed for testing monolithic systems or for the unit level. For integrated system tests, ART approaches do not achieve the same performance as ART applied to a unit or monolithic system. In this paper, we propose an extended ART algorithm that can be applied to an integrated-system testing environment without degrading performance. The proposed approach investigates the input distribution of the unit under test with a limited number of seed inputs and generates information used to resize the input-domain partitions. The simulation results show that our approach in an integration environment achieves a level of performance similar to ART applied to unit testing. The results also show that its effectiveness is resilient across various failure rates.
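The abstract extends ART rather than defining it; for context, below is a minimal sketch of the baseline fixed-size-candidate-set ART that such work builds on, not the paper's integrated-system method. The 1-D numeric domain, candidate count, and failure predicate are illustrative assumptions.

```python
import random

def art_test(unit_under_test, is_failure, n_tests=100, n_candidates=10,
             domain=(0.0, 1.0)):
    """Minimal fixed-size-candidate-set ART over a 1-D numeric input domain.

    Each new test is the candidate farthest from all previously executed
    inputs, which spreads tests evenly and tends to hit contiguous failure
    regions with fewer tests than pure random testing.
    """
    executed = []
    for _ in range(n_tests):
        candidates = [random.uniform(*domain) for _ in range(n_candidates)]
        if executed:
            # Pick the candidate whose nearest executed test is farthest away.
            test = max(candidates,
                       key=lambda c: min(abs(c - e) for e in executed))
        else:
            test = candidates[0]
        executed.append(test)
        if is_failure(unit_under_test(test)):
            return test, len(executed)   # first failure found
    return None, len(executed)

# Hypothetical usage: find an input whose squared value falls in a narrow band.
# art_test(lambda x: x * x, lambda y: 0.49 < y < 0.50)
```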

A Study on the Improvement of Color Detection Performance of Unmanned Salt Collection Vehicles Using an Image Processing Algorithm (이미지 처리 알고리즘을 이용한 무인 천일염 포집장치의 색상 검출 성능 향상에 관한 연구)

  • Kim, Seon-Deok;Ahn, Byong-Won;Park, Kyung-Min
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.6 / pp.1054-1062 / 2022
  • The population of Korea's solar salt-producing regions is rapidly aging, resulting in a decrease in the number of productive workers. In solar salt production, salt collection is the most labor-intensive operation because existing salt collection vehicles require human operators. We therefore intend to develop an unmanned solar salt collection vehicle to reduce manpower requirements. Because the unmanned vehicle identifies the salt collection status and its location on the salt plate via color detection, color detection performance is a crucial consideration, and an image processing algorithm was developed to improve it. The algorithm generates an around-view image by resizing, rotating, and perspective-transforming the input image, sets an ROI so that only the corresponding area is converted to the HSV color model, and detects the color region through an AND operation. The detected color region is expanded and denoised using morphological operations, and its area is calculated using contours and image moments. The calculated area is compared with a preset area to determine the position case of the collection vehicle within the salt plate. Performance was evaluated by comparing the area of the finally detected color, with the full algorithm applied, against the area detected at each step of the algorithm. Color detection performance improved by 25-99% for salt, by 44-68% for red, and by an average of 7% for blue and 15% for green. The proposed approach is well suited to the operation of unmanned solar salt collection vehicles.
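A rough sketch of the color-detection steps named above (resize, ROI, HSV conversion, range test as an AND operation, morphological clean-up, contour area), written with OpenCV. The ROI, HSV thresholds, and working resolution are placeholder values, not the paper's, and the around-view/perspective step is omitted.

```python
import cv2
import numpy as np

def detect_color_area(frame, roi, hsv_lo, hsv_hi):
    """Detect one target color inside a region of interest and return its pixel area."""
    # Step 1: normalise the input size (around-view stitching is omitted here).
    frame = cv2.resize(frame, (640, 480))
    x, y, w, h = roi                       # hypothetical ROI in pixels
    patch = frame[y:y + h, x:x + w]
    # Step 2: HSV conversion and color thresholding (cv2.inRange performs the
    # logical AND of the per-channel range tests).
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # Step 3: morphological clean-up -- open to drop speckle noise, then dilate
    # to close small gaps in the detected region.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.dilate(mask, kernel, iterations=1)
    # Step 4: contour extraction and area measurement.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return sum(cv2.contourArea(c) for c in contours)

# Hypothetical usage: compare the measured red area against a preset threshold
# to decide the vehicle's position case inside the salt plate.
# area = detect_color_area(img, roi=(100, 100, 200, 200),
#                          hsv_lo=(0, 120, 70), hsv_hi=(10, 255, 255))
```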

An Adaptive Grid-based Clustering Algorithm over Multi-dimensional Data Streams (적응적 격자기반 다차원 데이터 스트림 클러스터링 방법)

  • Park, Nam-Hun;Lee, Won-Suk
    • The KIPS Transactions: Part D / v.14D no.7 / pp.733-742 / 2007
  • A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. For this reason, memory usage for data stream analysis must remain bounded even though new data elements are continuously generated. To satisfy this requirement, data stream processing sacrifices some correctness of its analysis result by allowing bounded errors. Old distribution statistics are diminished by a predefined decay rate as time goes by, so that the effect of obsolete information on the current clustering result can be eliminated without physically maintaining any data element. This paper proposes a grid-based clustering algorithm for a data stream. Given a set of initial grid cells, the dense range of a grid cell is recursively partitioned into smaller cells, based on the distribution statistics of data elements, in a top-down manner until the smallest cell, called a unit cell, is identified. Since only the distribution statistics of data elements are maintained in the dynamically partitioned grid cells, the clusters of a data stream can be found effectively without physically maintaining the data elements. Furthermore, the memory usage of the proposed algorithm adapts to the size of the confined memory space by flexibly resizing the unit cell. As a result, the confined memory space can be fully utilized to generate a clustering result that is as accurate as possible. The proposed algorithm is analyzed through a series of experiments to identify its various characteristics.
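A minimal one-dimensional sketch of the idea: each grid cell keeps only a decayed count (no raw elements) and splits recursively when dense, down to a unit-cell size. The decay rate, split threshold, and binary split are illustrative choices, not the paper's exact statistics.

```python
class GridCell:
    """One grid cell holding only decayed distribution statistics."""

    def __init__(self, lo, hi, unit_size, decay=0.99, split_threshold=50.0):
        self.lo, self.hi = lo, hi
        self.unit_size = unit_size          # smallest allowed cell width; enlarging
        self.decay = decay                  # it coarsens the grid when memory is tight
        self.split_threshold = split_threshold
        self.count = 0.0                    # decayed element count
        self.children = None

    def insert(self, x):
        # Diminish old statistics so obsolete data stops influencing clusters.
        self.count = self.count * self.decay + 1.0
        if self.children:
            idx = 0 if x < (self.lo + self.hi) / 2 else 1
            self.children[idx].insert(x)
        elif (self.count > self.split_threshold
              and (self.hi - self.lo) / 2 >= self.unit_size):
            # Dense cell: partition its range top-down into two smaller cells.
            mid = (self.lo + self.hi) / 2
            self.children = [
                GridCell(self.lo, mid, self.unit_size, self.decay, self.split_threshold),
                GridCell(mid, self.hi, self.unit_size, self.decay, self.split_threshold),
            ]
```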

Design and Verification of Pipelined Face Detection Hardware (파이프라인 구조의 얼굴 검출 하드웨어 설계 및 검증)

  • Kim, Shin-Ho;Jeong, Yong-Jin
    • Journal of Korea Multimedia Society / v.15 no.10 / pp.1247-1256 / 2012
  • Many filter-based image processing algorithms require a huge amount of computation and memory access, making it hard to attain real-time performance, especially in embedded applications. In this paper, we propose a pipelined hardware structure for a filter-based face detection algorithm to show that real-time performance can be achieved through hardware design. In our design, the whole computation is divided into three pipeline stages: resizing the image (Resize), transforming the image (ICT), and finding candidate areas (Find Candidate). Each stage is optimized by exploiting the parallelism of the computation to reduce the number of cycles and by utilizing line memory to minimize memory accesses. The resulting hardware uses 507 KB of internal SRAM and occupies 9,039 LUTs when synthesized and configured on a Xilinx Virtex5LX330 FPGA. It can operate at a maximum clock of 165 MHz, giving a performance of 108 frames/sec while detecting up to 20 faces.
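The abstract names the three pipeline stages but not their internals; the sketch below only models the stage structure in software with chained generators, using a placeholder local transform for the ICT stage. The transform, threshold, and candidate logic are assumptions, not the paper's hardware design.

```python
import cv2

def resize_stage(frames, size=(320, 240)):
    """Stage 1 (Resize): bring every frame to the working resolution."""
    for f in frames:
        yield cv2.resize(f, size)

def transform_stage(frames):
    """Stage 2 (ICT): placeholder illumination-robust local transform; the
    exact transform used in the paper is not specified in the abstract."""
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        yield cv2.Laplacian(gray, cv2.CV_8U)   # stand-in local transform

def find_candidate_stage(frames, thresh=40, max_faces=20):
    """Stage 3 (Find Candidate): mark high-response regions as face candidates."""
    for f in frames:
        _, mask = cv2.threshold(f, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        yield [cv2.boundingRect(c) for c in contours[:max_faces]]

# The generator chain mirrors the hardware pipeline: each stage consumes the
# previous stage's output frame while the next frame is being produced.
# candidates = find_candidate_stage(transform_stage(resize_stage(video_frames)))
```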

Adaptive Block Watermarking Based on JPEG2000 DWT (JPEG2000 DWT에 기반한 적응형 블록 워터마킹 구현)

  • Lim, Se-Yoon;Choi, Jun-Rim
    • Journal of the Institute of Electronics Engineers of Korea SD / v.44 no.11 / pp.101-108 / 2007
  • In this paper, we propose and verify an adaptive block watermarking algorithm based on the JPEG2000 DWT, which determines the watermark for the original image using two scaling factors in order to overcome image degradation and blocking at the edges. The adaptive block watermarking algorithm uses two scaling factors: one is calculated as the ratio of the current block average to the next block average, and the other as the ratio of the total LL-subband average to each block average. The adaptive block watermark signal is derived from the original image itself, and the watermark strength is automatically controlled by the image characteristics. Instead of the conventional approach of applying a watermark of identical intensity everywhere, the proposed method applies an adaptive watermark whose intensity is controlled per block. As a result, the adaptive block watermark improves the visual quality of images by 4-14 dB and is robust against attacks such as filtering, JPEG2000 compression, resizing, and cropping. We also implemented the algorithm in an ASIC using Hynix 0.25 μm CMOS technology to integrate it into a JPEG2000 codec chip.
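A sketch of the two scaling factors described above, applied to LL-subband blocks via PyWavelets. How the paper combines the factors and embeds the watermark is not stated in the abstract, so multiplying them with a global strength constant is an assumption, and the watermark array is assumed to match the LL subband size.

```python
import numpy as np
import pywt

def embed_adaptive_block_watermark(image, watermark, block=8, strength=0.02):
    """Embed a watermark into the LL subband with a per-block gain built from
    the two ratios named in the abstract (illustrative combination only)."""
    ll, (lh, hl, hh) = pywt.dwt2(image.astype(float), 'haar')
    total_mean = ll.mean()
    h, w = ll.shape
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur = ll[by:by + block, bx:bx + block]          # view into LL
            nxt_x = bx + block if bx + 2 * block <= w else bx
            nxt = ll[by:by + block, nxt_x:nxt_x + block]
            f1 = cur.mean() / max(nxt.mean(), 1e-6)   # current-to-next block ratio
            f2 = total_mean / max(cur.mean(), 1e-6)   # LL-average-to-block ratio
            wm = watermark[by:by + block, bx:bx + block]
            cur += strength * f1 * f2 * wm            # per-block adaptive gain
    return pywt.idwt2((ll, (lh, hl, hh)), 'haar')
```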

Classification Method of Harmful Image Content Rates in Internet (인터넷에서의 유해 이미지 컨텐츠 등급 분류 기법)

  • Nam, Taek-Yong;Jeong, Chi-Yoon;Han, Chi-Moon
    • Journal of KIISE: Information Networking / v.32 no.3 / pp.318-326 / 2005
  • This paper presents an image feature extraction method and an image classification technique that rate harmful images flowing in from the Internet by content grade, such as harmless, sex-appealing, harmful (nude), and seriously harmful (adult), according to the characteristics of the image. We suggest a skin-area detection technique to recognize whether an input image is harmful. We also propose an ROI detection algorithm that establishes a region of interest to reduce noise, extract the degree of harmfulness effectively, and define the features inside the ROI. This paper then suggests a multiple-SVM training method that builds an image classification model for the four classes defined above, and a multiple-SVM classification algorithm that assigns a harmfulness grade to input data using the resulting model. In particular, we construct a skin-likelihood image from the shape information of the skin-area image and the color information of the skin-ratio image, and propose an image feature vector, obtained by resizing the skin-likelihood image, for use during training. Finally, the paper presents a performance evaluation of the experimental results and demonstrates the suitability of grading images with the proposed feature classification algorithm.
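A hedged sketch of the classification stage: the skin-likelihood map is resized and flattened into a feature vector and fed to a multi-class SVM. scikit-learn's SVC stands in for the paper's multiple-SVM scheme, and the feature size, kernel, and class names are placeholder choices.

```python
import numpy as np
from sklearn.svm import SVC

CLASSES = ["harmless", "sex-appealing", "harmful", "seriously-harmful"]

def feature_vector(skin_likelihood, size=(32, 32)):
    """Resize the skin-likelihood map to a fixed size (nearest-neighbour
    sampling) and flatten it, so every image yields a vector of equal length."""
    h, w = skin_likelihood.shape
    ys = np.linspace(0, h - 1, size[0]).astype(int)
    xs = np.linspace(0, w - 1, size[1]).astype(int)
    return skin_likelihood[np.ix_(ys, xs)].ravel()

def train_rating_classifier(likelihood_maps, labels):
    """Multi-class SVM over the resized skin-likelihood features."""
    X = np.stack([feature_vector(m) for m in likelihood_maps])
    clf = SVC(kernel="rbf", C=1.0)        # one model covering all four grades
    clf.fit(X, labels)
    return clf
```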

A Real Time Processing Technique for Content-Aware Video Scaling (내용기반 동영상 기하학적 변환을 위한 실시간 처리 기법)

  • Lee, Kang-Hee;Yoo, Jae-Wook;Park, Dae-Hyun;Kim, Yoon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.1 / pp.80-89 / 2011
  • In this paper, a new real-time video scaling technique that preserves the content of a video is proposed. Because consecutive frames of a video are correlated, the seam of the current frame is determined by considering the seam of the previous frame, yielding a real-time video scaling technique that avoids shaking of the content even though the entire video is not analyzed. For this purpose, frames with similar features are grouped into a scene, and the first frame of a scene is resized by still-image seam carving so that the content of the image is preserved as much as possible. The seam information extracted while resizing is stored, and each subsequent frame is resized with reference to the seam information stored for the previous frame. The proposed algorithm has a processing speed close to that of bilinear scaling while preserving the main content of the image. Moreover, because its memory usage is far smaller than that of the existing seam carving method, the proposed algorithm is usable on mobile terminals with tight memory constraints. Computer simulation results indicate that the proposed technique provides better objective performance and subjective image quality than conventional algorithms in terms of real-time processing, removal of the shaking phenomenon, and content preservation.
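A simplified sketch of the temporal idea: standard dynamic-programming seam carving for the first frame of a scene, then restricting the seam search of later frames to a narrow band around the previous frame's seam. The band width and gradient energy are illustrative, not the paper's exact formulation.

```python
import numpy as np

def energy(gray):
    """Simple gradient-magnitude energy map."""
    gy, gx = np.gradient(gray.astype(float))
    return np.abs(gx) + np.abs(gy)

def vertical_seam(e):
    """Dynamic-programming minimum vertical seam (one column index per row)."""
    h, w = e.shape
    cost = e.copy()
    for y in range(1, h):
        left = np.roll(cost[y - 1], 1);   left[0] = np.inf
        right = np.roll(cost[y - 1], -1); right[-1] = np.inf
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):        # trace back through the 3 neighbours
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam

def remove_seam(gray, seam):
    h, w = gray.shape
    out = np.zeros((h, w - 1), dtype=gray.dtype)
    for y in range(h):
        out[y] = np.delete(gray[y], seam[y])
    return out

def carve_frame(gray, prev_seam=None, band=3):
    """Carve one seam; for non-first frames, restrict the energy to a narrow
    band around the previous frame's seam so the seam (and the content)
    does not jump between frames."""
    e = energy(gray)
    if prev_seam is not None:
        banded = np.full_like(e, np.inf)
        for y, x in enumerate(prev_seam):
            lo, hi = max(0, x - band), min(e.shape[1], x + band + 1)
            banded[y, lo:hi] = e[y, lo:hi]
        e = banded
    seam = vertical_seam(e)
    return remove_seam(gray, seam), seam
```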

Template-Matching-based High-Speed Face Tracking Method using Depth Information (깊이 정보를 이용한 템플릿 매칭 기반의 고속 얼굴 추적 방법)

  • Kim, Wooyoul;Seo, Youngho;Kim, Dongwook
    • Journal of Broadcast Engineering / v.18 no.3 / pp.349-361 / 2013
  • This paper proposes a fast face tracking method that uses only depth information. It is basically a template matching method, but it uses an early termination scheme and a sparse search scheme to reduce the execution time, which is the main drawback of template matching. A refinement step using neighboring pixels is also incorporated to alleviate tracking error. Changes in the depth of the tracked face are compensated by predicting the face depth and resizing the template, and the search area is adjusted on the basis of the resized template. Using home-made test sequences, the parameters for face tracking are determined empirically. The proposed algorithm and the extracted parameters are then applied to other home-made test sequences and an MPEG multi-view test sequence. The experimental results show that the average tracking error and execution time for the home-made Kinect sequences (640×480) were about 3% and 2.45 ms, while the MPEG test sequence (1024×768) showed about 1% tracking error and 7.46 ms execution time.
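A sketch of one tracking step combining the three ideas named above: depth-ratio template resizing, sparse search around the previous position, and early termination of the SAD accumulation. The search radius, stride, and depth-scaling rule are placeholder assumptions, not the paper's tuned parameters.

```python
import numpy as np

def track_face(depth, template, prev_pos, prev_depth, search=24, stride=2):
    """One tracking step over a depth map using SAD template matching."""
    th, tw = template.shape
    face_depth = float(depth[prev_pos[1], prev_pos[0]]) or prev_depth
    scale = prev_depth / max(face_depth, 1.0)        # nearer face -> larger template
    nh, nw = max(8, int(th * scale)), max(8, int(tw * scale))
    ys = np.linspace(0, th - 1, nh).astype(int)      # nearest-neighbour resize
    xs = np.linspace(0, tw - 1, nw).astype(int)
    tpl = template[np.ix_(ys, xs)].astype(float)

    best, best_pos = np.inf, prev_pos
    px, py = prev_pos
    # Sparse search: skip every other position around the previous location.
    for y in range(max(0, py - search), min(depth.shape[0] - nh, py + search), stride):
        for x in range(max(0, px - search), min(depth.shape[1] - nw, px + search), stride):
            sad = 0.0
            for row in range(nh):                    # early termination per row
                sad += np.abs(depth[y + row, x:x + nw] - tpl[row]).sum()
                if sad >= best:
                    break
            if sad < best:
                best, best_pos = sad, (x, y)
    return best_pos, tpl
```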

A study on the process of mapping data and conversion software using PC-clustering (PC-clustering을 이용한 매핑자료처리 및 변환소프트웨어에 관한 연구)

  • WhanBo, Taeg-Keun;Lee, Byung-Wook;Park, Hong-Gi
    • Journal of Korean Society for Geospatial Information Science / v.7 no.2 s.14 / pp.123-132 / 1999
  • With the rapid increase in the amount of data and computation, parallelization of computing algorithms has become more necessary than ever. Until the mid-1990s, however, parallelization was conducted mostly on supercomputers and was out of reach for general users because of the high price, the complexity of usage, and so on. A new concept of parallel processing emerged in the form of PC clustering in the late 1990s; it is an excellent alternative for applications that need high computing power at a relatively low cost, although installation and usage remain difficult for general users. The mapping algorithms in GIS (cut, join, resizing, warping, conversion between raster and vector, etc.) are well suited to parallelization because of the characteristics of their data structures. If those algorithms are run on a PC cluster, the result is satisfactory in terms of cost and performance, since they are processed in real time at low cost. In this paper, the tools and libraries for parallel processing and PC clustering are introduced, and it is shown how they are applied to mapping algorithms in GIS. Parallel programs are developed for the mapping algorithms, and the experimental results show that for most algorithms the performance increases almost linearly with the number of nodes.
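A small illustration of the approach, using mpi4py as a modern stand-in for the MPI tools of that era: the root node splits a raster into strips, each node resizes its strip, and the results are gathered back. The strip decomposition and nearest-neighbour resampling are illustrative only, not the paper's implementation.

```python
import numpy as np
from mpi4py import MPI   # mpi4py stands in for the MPI libraries used on the PC cluster

def resize_rows(rows, factor=2):
    """Nearest-neighbour downsampling of one horizontal strip of the raster."""
    return rows[::factor, ::factor]

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    raster = np.random.randint(0, 255, (4096, 4096), dtype=np.uint8)
    strips = np.array_split(raster, size, axis=0)   # one strip per node
else:
    strips = None

strip = comm.scatter(strips, root=0)       # distribute work across the PC cluster
result = resize_rows(strip)                # each node resizes its own strip
strips_out = comm.gather(result, root=0)   # collect the partial results

if rank == 0:
    resized = np.vstack(strips_out)        # reassemble the full resized raster

# Run with, e.g.:  mpiexec -n 4 python resize_mpi.py
```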


Fast Content-Aware Video Retargeting Algorithm (고속 컨텐츠 인식 동영상 리타겟팅 기법)

  • Park, Dae-Hyun;Kim, Yoon
    • Journal of the Korea Society of Computer and Information / v.18 no.11 / pp.77-86 / 2013
  • In this paper, we propose a fast video retargeting method that preserves the content of a video while converting the image size. Since conventional Seam Carving, the well-known content-aware image retargeting technique, uses dynamic programming, a repetitive update of the accumulated energy is required to obtain each seam. This energy update incurs a processing delay because of the many operations needed to search the whole image. In the proposed method, frames with similar features are grouped into a scene, and the first frame of a scene is resized by a modified Seam Carving in which multiple seams are extracted from candidate seams to reduce the repetitive update procedure. After the first frame of a scene is resized, all subsequent frames of the same scene are resized with reference to the seam information stored for the previous frame, without recalculating the accumulated energy. Therefore, although fast processing is possible with reduced complexity and without analyzing every frame of a scene, the image quality is maintained at a level comparable to the existing method. The experimental results show that the proposed method preserves the content of an image and can be practically applied to retarget images in real time.
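A sketch of the "multiple seams from candidate seams" step: given one cumulative-energy map (for example, the map computed by the dynamic-programming pass in the earlier seam carving sketch), several non-overlapping seams are traced from the lowest-cost bottom-row positions without recomputing the energy. The non-crossing test and the number of seams are illustrative assumptions, not the paper's exact selection rule.

```python
import numpy as np

def multiple_seams(cost, k):
    """Pick k non-overlapping vertical seams from ONE cumulative-energy map,
    avoiding the per-seam energy update of classic Seam Carving.  Candidate
    seams are traced from the lowest-cost bottom-row positions and kept only
    if they do not touch a previously chosen seam."""
    h, w = cost.shape
    used = np.zeros((h, w), dtype=bool)
    seams = []
    for x0 in np.argsort(cost[-1]):                  # cheapest candidates first
        seam = np.zeros(h, dtype=int)
        seam[-1] = int(x0)
        ok = not used[-1, x0]
        for y in range(h - 2, -1, -1):               # trace upward through neighbours
            x = seam[y + 1]
            lo, hi = max(0, x - 1), min(w, x + 2)
            seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
            if used[y, seam[y]]:
                ok = False
                break
        if ok:
            seams.append(seam)
            used[np.arange(h), seam] = True          # reserve these pixels
            if len(seams) == k:
                break
    return seams
```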