• Title/Summary/Keyword: Resizing


Twin models for high-resolution visual inspections

  • Seyedomid Sajedi;Kareem A. Eltouny;Xiao Liang
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.351-363
    • /
    • 2023
  • Visual structural inspections are an inseparable part of post-earthquake damage assessments. With unmanned aerial vehicles (UAVs) establishing a new frontier in visual inspections, there are major computational challenges in processing the massive amounts of collected high-resolution visual data. We propose twin deep learning models that can efficiently provide accurate high-resolution structural component and damage segmentation masks. The traditional approaches to coping with high memory and computational demands are either to uniformly downsample the raw images, at the price of losing fine local details, or to crop smaller parts of the images, which loses global contextual information. Our twin models, comprising the Trainable Resizing for high-resolution Segmentation Network (TRS-Net) and DmgFormer, therefore approach the global and local semantics from different perspectives. TRS-Net is a compound, high-resolution segmentation architecture equipped with learnable downsampler and upsampler modules to minimize information loss for optimal performance and efficiency. DmgFormer utilizes a transformer backbone and a convolutional decoder head with skip connections on a grid of crops, aiming for high-precision learning without downsizing. An augmented inference technique is used to boost performance further and reduce the possible loss of context due to grid cropping. Comprehensive experiments have been performed on the 3D physics-based graphics model (PBGM) synthetic environments in the QuakeCity dataset. The proposed framework is evaluated using several metrics on three segmentation tasks: component type, component damage state, and global damage (crack, rebar, spalling). The models were developed as part of the 2nd International Competition for Structural Health Monitoring.
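The grid-of-crops idea behind DmgFormer can be illustrated with a minimal sketch: split a high-resolution image into overlapping crops, run a model on each, and average overlapping predictions back to full resolution. The crop size, overlap, and average-based stitching below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def grid_crops(image, crop=256, overlap=32):
    """Split a high-resolution image into an overlapping grid of crops."""
    h, w = image.shape[:2]
    step = crop - overlap
    crops, coords = [], []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            y0, x0 = min(y, h - crop), min(x, w - crop)  # clamp last crop to the border
            crops.append(image[y0:y0 + crop, x0:x0 + crop])
            coords.append((y0, x0))
    return crops, coords

def stitch(crops, coords, shape, crop=256):
    """Average overlapping per-crop predictions back to full resolution."""
    out = np.zeros(shape, dtype=float)
    weight = np.zeros(shape, dtype=float)
    for patch, (y0, x0) in zip(crops, coords):
        out[y0:y0 + crop, x0:x0 + crop] += patch
        weight[y0:y0 + crop, x0:x0 + crop] += 1.0
    return out / weight
```

Averaging in the overlap regions is one simple way to suppress seams between crop predictions, in the spirit of the augmented inference the abstract describes.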

Adaptive Block Watermarking Based on JPEG2000 DWT (JPEG2000 DWT에 기반한 적응형 블록 워터마킹 구현)

  • Lim, Se-Yoon;Choi, Jun-Rim
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.11
    • /
    • pp.101-108
    • /
    • 2007
  • In this paper, we propose and verify an adaptive block watermarking algorithm based on the JPEG2000 DWT, which determines the watermark for the original image using two scaling factors in order to overcome image degradation and blocking problems at block edges. The adaptive block watermarking algorithm uses two scaling factors: one is the ratio of the present block average to the next block average, and the other is the ratio of the total LL-subband average to each block average. The adaptive block watermark signals are obtained from the original image itself, and the watermark strength is automatically controlled by the image characteristics. Instead of conventional methods that use identical watermark intensity everywhere, the proposed method uses an adaptive watermark whose intensity is controlled per block. Thus, the adaptive block watermark improves image quality by 4$\sim$14 dB, and it is robust against attacks such as filtering, JPEG2000 compression, resizing, and cropping. We also implemented the algorithm as an ASIC using Hynix 0.25 ${\mu}m$ CMOS technology to integrate it into a JPEG2000 codec chip.
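The two scaling factors described in the abstract can be sketched directly: per-block averages of the LL subband yield one ratio between neighboring blocks and one ratio against the global average. How the paper combines the two factors is not specified here, so their product below is an illustrative assumption, as are the block size and base strength.

```python
import numpy as np

def block_means(ll, bs=8):
    """Mean of each bs-by-bs block of the LL subband."""
    h, w = ll.shape
    return ll.reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))

def adaptive_strengths(ll, base=0.1, bs=8):
    """Per-block watermark strengths from the abstract's two ratios:
    s1 = present block average / next block average,
    s2 = total LL-subband average / each block average."""
    m = block_means(ll, bs)
    nxt = np.roll(m, -1, axis=1)   # "next" block along the row, wrapping at the end
    s1 = m / nxt
    s2 = ll.mean() / m
    return base * s1 * s2          # combination rule assumed for illustration
```

On a uniform subband both ratios are 1 everywhere, so every block receives the base strength; strength then rises or falls automatically with local block statistics.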

A Study on Current State of Web Content Accessibility on General Hospital Websites in Korea (국내 종합병원의 웹 접근성 실태에 관한 연구)

  • Kim, Yong-Seob;Oh, Kun-Seok
    • Journal of Internet Computing and Services
    • /
    • v.11 no.3
    • /
    • pp.87-103
    • /
    • 2010
  • In this study, we introduce the trends in domestic and foreign web accessibility, as well as the legal systems that ensure web accessibility. Based on the Korean Web Content Accessibility Guidelines (KWCAG) 1.0, we investigated the web content accessibility of 80 tertiary health-care hospitals and general hospitals in Korea. We evaluated accessibility by combining accessibility-based criteria (ABC) with usability-based criteria (UBC). ABC was limited to alternative text for Guideline 1, and to using a small number of frames and keyboard accessibility for Guideline 2. UBC checked voice service (TTS), text resizing, the provision of multi-lingual websites, and the disclosure of a web accessibility policy. KADO-WAH 2.0 was used to measure the compliance rate. The evaluation result was a considerable improvement over previous results, even though the rate of compliance with web accessibility was generally insufficient, and there was a significant difference between the medical centers that did and did not comply. Notably, many hospitals were found to have attempted to come to terms with web accessibility. In the future, medical centers serving the public interest are advised to actively promote the establishment of independent accessibility guidelines, to improve their actual implementation of web accessibility, and to support this with constant education, promotion, and institutional measures.

Geometric analysis and anti-aliasing filter for stereoscopic 3D image scaling (스테레오 3D 영상 스케일링에 대한 기하학적 분석 및 anti-aliasing 필터)

  • Kim, Wook-Joong;Hur, Nam-Ho;Kim, Jin-Woong
    • Journal of Broadcast Engineering
    • /
    • v.14 no.5
    • /
    • pp.638-649
    • /
    • 2009
  • Image resizing (or scaling) is one of the most essential issues for the success of visual services because image data has to be adapted to a variety of display features. For 2D imaging, scaling is generally accomplished by 2D image re-sampling (i.e., up-/down-sampling). However, when it comes to stereoscopic 3D images, 2D re-sampling methods are inadequate because they do not incorporate any consideration of the third dimension, depth. In practice, stereoscopic 3D image scaling is performed on the left and right images rather than on the stereoscopic 3D image itself, because the left/right images are the only tangible data. In this paper, we analyze stereoscopic 3D image scaling from two aspects: geometric deformation and frequency-domain aliasing. A number of 3D displays are available in the market with various screen dimensions, and as display varieties increase, efficient stereoscopic 3D image scaling becomes increasingly important. We present recommendations for 3D scaling derived from the geometric analysis and propose a disparity-adaptive filter for the anti-aliasing of artifacts that can occur during the scaling process.
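The geometric issue the abstract raises can be made concrete with the standard similar-triangles model of stereoscopic viewing: uniform 2D scaling by a factor s multiplies screen parallax by s, but perceived depth does not change by the factor s. The viewing distance and eye separation below are illustrative defaults, not values from the paper.

```python
def perceived_depth(d, viewing_dist=600.0, eye_sep=65.0):
    """Perceived depth (mm) of a point with screen parallax d (mm),
    from similar triangles: Z = V * e / (e - d). Positive (uncrossed)
    parallax places the point behind the screen."""
    return viewing_dist * eye_sep / (eye_sep - d)

def depth_after_scaling(d, s, **kw):
    """Uniform 2D scaling by s multiplies screen parallax by s, so the
    perceived depth changes nonlinearly rather than scaling by s."""
    return perceived_depth(s * d, **kw)
```

For example, doubling the image size doubles a 10 mm parallax to 20 mm, which pushes the perceived point disproportionately far behind the screen; this nonlinearity is why naive 2D re-sampling deforms the 3D scene.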

Template-Matching-based High-Speed Face Tracking Method using Depth Information (깊이 정보를 이용한 템플릿 매칭 기반의 고속 얼굴 추적 방법)

  • Kim, Wooyoul;Seo, Youngho;Kim, Dongwook
    • Journal of Broadcast Engineering
    • /
    • v.18 no.3
    • /
    • pp.349-361
    • /
    • 2013
  • This paper proposes a fast face tracking method that uses only depth information. It is basically a template matching method, but it uses an early termination scheme and a sparse search scheme to reduce the large execution time that is the main drawback of template matching. A refinement process over neighboring pixels is also incorporated to alleviate tracking error. The depth change of the face being tracked is compensated by predicting the depth of the face and resizing the template, and the search area is adjusted on the basis of the resized template. With home-made test sequences, the parameters to be used in face tracking are determined empirically. The proposed algorithm and the extracted parameters are then applied to other home-made test sequences and an MPEG multi-view test sequence. The experimental results showed that the average tracking error and execution time for the home-made Kinect sequences ($640{\times}480$) were about 3% and 2.45 ms, while the MPEG test sequence ($1024{\times}768$) showed about 1% tracking error and 7.46 ms execution time.
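The early-termination idea can be sketched as SAD template matching over a depth map in which a candidate position is abandoned as soon as its partial cost exceeds the best score found so far. This is a minimal sketch of that one scheme; the paper's sparse search, refinement, and template-resizing steps are omitted, and the cost function (SAD) is an assumption.

```python
import numpy as np

def match_template(depth, template):
    """Exhaustive SAD template matching over a depth map with early
    termination: the cost is accumulated row by row, and a candidate
    is abandoned once its partial cost exceeds the current best."""
    th, tw = template.shape
    H, W = depth.shape
    best_pos, best_cost = None, np.inf
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            cost = 0.0
            for r in range(th):
                cost += np.abs(depth[y + r, x:x + tw] - template[r]).sum()
                if cost >= best_cost:   # early termination: cannot beat the best
                    break
            else:
                best_pos, best_cost = (y, x), cost
    return best_pos, best_cost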

Setting Up a CR Based Filmless Environment for the Radiation Oncology (CR 시스템을 이용한 방사선 종양학과의 Filmless 환경 구축)

  • Kim, Dong-Young;Lee, Ji-Hae;Kim, Myung-Soo;Ha, Bo-Ram;Lee, Cheon-Hee;Kim, So-Yeong;Ahn, So-Hyun;Lee, Re-Na
    • Progress in Medical Physics
    • /
    • v.22 no.3
    • /
    • pp.155-162
    • /
    • 2011
  • The analog image-based system for radiotherapy, consisting of a simulator and a medical linear accelerator (LINAC), was upgraded to a digital medical image-based system by exchanging X-ray film for Computed Radiography (CR). With minimal equipment changes and a similar treatment process, users were able to adopt the new digital image system in a short time. The film cassette and film developer were substituted with a CR cassette and a CR reader, and the viewbox was replaced with a small PC and monitor. Viewer software suited to radiotherapy was developed to maximize the benefits of digital imaging, improving convenience and effectiveness as a result. It has two windows to display two different images at the same time and is equipped with various search capabilities as well as contouring, window leveling, image resizing, translation, rotation, and registration functions. In order to avoid any interruption of treatment during the transition to digital imaging, film and CR were used together for one week, after which the film developer was removed. Since then the CR system has operated stably for two months, and various user requests have been incorporated to improve the system.

A study on the process of mapping data and conversion software using PC-clustering (PC-clustering을 이용한 매핑자료처리 및 변환소프트웨어에 관한 연구)

  • WhanBo, Taeg-Keun;Lee, Byung-Wook;Park, Hong-Gi
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.7 no.2 s.14
    • /
    • pp.123-132
    • /
    • 1999
  • With the rapid increase in the amount of data and computation, the parallelization of computing algorithms has become more necessary than ever. However, until the mid-1990s parallelization was conducted mostly on supercomputers, out of reach for general users due to high prices, complexity of usage, and so on. A new concept for parallel processing emerged in the form of PC-clustering in the late 1990s, and it has become an excellent alternative for applications that need high computing power at a relatively low cost, although installation and usage remain difficult for general users. The mapping algorithms in GIS (cut, join, resizing, warping, conversion from raster to vector and vice versa, etc.) are well suited to parallelization due to the characteristics of their data structures. If those algorithms are run on a PC cluster, the results are satisfactory in terms of cost and performance, since they are processed in real time at a low cost. In this paper, the tools and libraries for parallel processing and PC-clustering are introduced, and it is shown how those tools and libraries are applied to mapping algorithms in GIS. Parallel programs were developed for the mapping algorithms, and the experimental results show that the performance of most algorithms increases almost linearly with the number of nodes.
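The data-parallel decomposition that makes mapping operations cluster-friendly can be sketched in miniature: split the raster into strips, process each strip independently, and reassemble. Threads stand in for cluster nodes here purely for illustration, and the 2x2 block-mean downsampling is a stand-in for any per-tile mapping operation (resizing, warping, etc.).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def downsample_tile(tile, factor=2):
    """Block-mean downsampling of one strip; a stand-in for any
    per-tile mapping operation such as resizing or warping."""
    h, w = tile.shape
    return tile.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def parallel_resize(image, rows=4, workers=4):
    """Split the raster into horizontal strips and process them in
    parallel, mimicking the node-level decomposition on a PC cluster
    (threads stand in for cluster nodes in this sketch)."""
    strips = np.array_split(image, rows, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        out = list(ex.map(downsample_tile, strips))
    return np.vstack(out)
```

Because each strip is independent, the work divides almost perfectly across workers, which is consistent with the near-linear speedup the abstract reports (real clusters add communication cost that this sketch ignores).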


Classification Method of Harmful Image Content Rates in Internet (인터넷에서의 유해 이미지 컨텐츠 등급 분류 기법)

  • Nam, Taek-Yong;Jeong, Chi-Yoon;Han, Chi-Moon
    • Journal of KIISE:Information Networking
    • /
    • v.32 no.3
    • /
    • pp.318-326
    • /
    • 2005
  • This paper presents an image feature extraction method and an image classification technique to filter harmful images from the Internet by content grade (harmless, sex-appealing, harmful (nude), seriously harmful (adult)) according to image characteristics. We suggest a skin-area detection technique to recognize whether an input image is harmful. We also propose an ROI detection algorithm that establishes a region of interest to reduce noise and extract the degree of harmfulness effectively, and that defines features inside the ROI. This paper suggests a multiple-SVM training method that builds an image classification model for the four classes defined above, and a multiple-SVM classification algorithm that assigns the harmfulness grade of input data using that model. In particular, we suggest a skin-likelihood image composed of the shape information of the skin-area image and the color information of the skin-ratio image, and we propose an image feature vector, used during training, obtained by resizing the skin-likelihood image. Finally, this paper presents a performance evaluation of the experimental results and demonstrates the suitability of grading images with the proposed feature classification algorithm.
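A skin-likelihood image and a skin-ratio scalar of the kind the abstract describes can be sketched with a crude chromaticity model. The Gaussian score on normalized r-g chromaticity and its thresholds below are illustrative assumptions, not the paper's learned skin model, and the real feature vector is built from the resized likelihood image rather than from one scalar.

```python
import numpy as np

def skin_likelihood(rgb):
    """Crude per-pixel skin score from normalized r-g chromaticity.
    The center and spread values are illustrative, not learned."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2) + 1e-6
    r, g = rgb[..., 0] / s, rgb[..., 1] / s
    return np.exp(-(((r - 0.45) / 0.08) ** 2 + ((g - 0.31) / 0.06) ** 2))

def skin_ratio(rgb, thresh=0.5):
    """Fraction of pixels classified as skin: one scalar of the kind
    that feeds into a harmfulness feature vector."""
    return float((skin_likelihood(rgb) > thresh).mean())
```

A skin-toned patch scores near 1 under this model while a saturated blue patch scores near 0, which is the separation such features rely on before any SVM is trained.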

A Real Time Processing Technique for Content-Aware Video Scaling (내용기반 동영상 기하학적 변환을 위한 실시간 처리 기법)

  • Lee, Kang-Hee;Yoo, Jae-Wook;Park, Dae-Hyun;Kim, Yoon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.1
    • /
    • pp.80-89
    • /
    • 2011
  • In this paper, a new real-time video scaling technique that preserves the content of a video is proposed. Because consecutive frames in a video are correlated, the proposed technique determines the seam of the current frame by considering the seam of the previous frame, achieving real-time video scaling without content-shaking artifacts even though the entire video is not analyzed. For this purpose, frames with similar features are grouped into a scene, and the first frame of each scene is resized by still-image seam carving so that the image content is preserved as much as possible. The seam information extracted while resizing this frame is saved, and the sizes of subsequent frames are adjusted frame by frame with reference to the seam information stored for the previous frame. The proposed algorithm is nearly as fast as bilinear scaling while preserving the main content of the image. Moreover, because its memory usage is remarkably small compared with existing seam carving methods, the proposed algorithm is usable on mobile terminals with tight memory restrictions. Computer simulation results indicate that the proposed technique provides better objective performance and subjective image quality than conventional algorithms with respect to real-time processing, shaking removal, and content preservation.
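The seam-propagation idea can be sketched with the standard dynamic-programming vertical seam, restricted to a narrow band around the previous frame's seam for subsequent frames. The band width is an illustrative choice, and the energy function is left to the caller; the paper's scene grouping and memory layout are not modeled.

```python
import numpy as np

def vertical_seam(energy, prev_seam=None, band=2):
    """Dynamic-programming minimum vertical seam. When prev_seam is
    given, the search is confined to a band around the previous
    frame's seam, the temporal-coherence idea of the method."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    if prev_seam is not None:             # forbid columns far from the old seam
        mask = np.full((h, w), np.inf)
        for r in range(h):
            lo = max(prev_seam[r] - band, 0)
            hi = min(prev_seam[r] + band + 1, w)
            mask[r, lo:hi] = 0.0
        cost += mask
    for r in range(1, h):                 # standard cumulative-cost recurrence
        left = np.r_[np.inf, cost[r - 1, :-1]]
        right = np.r_[cost[r - 1, 1:], np.inf]
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    seam = np.empty(h, dtype=int)         # backtrack from the cheapest bottom cell
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam
```

Constraining the search band is what removes the frame-to-frame seam jumps (the "shaking" the abstract mentions) while keeping per-frame cost far below a full re-analysis.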

An Adaptive Grid-based Clustering Algorithm over Multi-dimensional Data Streams (적응적 격자기반 다차원 데이터 스트림 클러스터링 방법)

  • Park, Nam-Hun;Lee, Won-Suk
    • The KIPS Transactions:PartD
    • /
    • v.14D no.7
    • /
    • pp.733-742
    • /
    • 2007
  • A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. For this reason, memory usage for data stream analysis must be confined to a finite bound even though new data elements are continuously generated. To satisfy this requirement, data stream processing sacrifices the correctness of its analysis result by allowing some error. This paper proposes a grid-based clustering algorithm for a data stream. Old distribution statistics are diminished by a predefined decay rate as time goes by, so that the effect of obsolete information on the current clustering result can be eliminated without physically maintaining any data element. Given a set of initial grid cells, the dense range of a grid cell is recursively partitioned into smaller cells, in a top-down manner, based on the distribution statistics of data elements, until the smallest cell, called a unit cell, is identified. Since only the distribution statistics of data elements are maintained by the dynamically partitioned grid cells, the clusters of a data stream can be found effectively without physically maintaining the data elements. Furthermore, the memory usage of the proposed algorithm adapts to the size of the confined memory space by flexibly resizing the unit cell. As a result, the confined memory space can be fully utilized to generate the clustering result as accurately as possible. The proposed algorithm is analyzed through a series of experiments to identify its various characteristics.
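The decayed distribution statistics can be sketched as a per-cell count that is diminished lazily, by the elapsed time since the last update, so that no data element is ever stored. The decay rate and the lazy-update scheme below are illustrative assumptions; the paper's recursive cell partitioning is not modeled.

```python
class GridCell:
    """Decayed count statistics of one grid cell in a stream: obsolete
    elements fade at a predefined decay rate instead of being stored."""

    def __init__(self, decay=0.99):
        self.decay = decay
        self.count = 0.0
        self.last_t = 0

    def insert(self, t):
        """Fold a new element arriving at time t into the statistics."""
        self.count *= self.decay ** (t - self.last_t)  # diminish old statistics
        self.count += 1.0
        self.last_t = t

    def density(self, t):
        """Decayed count as seen at time t, without storing elements."""
        return self.count * self.decay ** (t - self.last_t)
```

A cell whose density stays above a threshold would be partitioned further, and a stale cell decays toward zero on its own, which is what lets memory stay bounded as the stream grows.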