• Title/Summary/Keyword: Ratio algorithm

Quality Assurance of Multileaf Collimator Using Electronic Portal Imaging (전자포탈영상을 이용한 다엽시준기의 정도관리)

  • Jason W. Sohn
    • Progress in Medical Physics / v.14 no.3 / pp.151-160 / 2003
  • The application of more complex radiotherapy techniques using multileaf collimation (MLC), such as 3D conformal radiation therapy and intensity-modulated radiation therapy (IMRT), has increased the significance of verifying leaf position and motion. Due to their reliability and empirical robustness, films have traditionally been used for quality assurance (QA) of MLC. However, the ease of use of electronic portal imaging devices (EPIDs) and their ability to provide digital data have attracted attention to EPIDs as an alternative to films for routine quality assurance, despite concerns about their clinical feasibility, efficacy, and cost-to-benefit ratio. In this study, we developed a method for daily QA of MLC using electronic portal images (EPIs). The suitability of EPIDs for routine QA was verified by comparing EPIs with portal films obtained simultaneously during radiation delivery and with the known prescription input to the MLC controller. Two specially designed dynamic MLC test patterns were applied for image acquisition. Quantitative off-line analysis using an edge detection algorithm enhanced the verification procedure, as did on-line qualitative visual assessment. In conclusion, EPIs proved sufficient for daily QA of MLC leaf position, with accuracy comparable to that of portal films.
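The quantitative off-line step described in this abstract can be sketched as a gradient-based edge finder on a 1-D intensity profile across the leaf direction. The profile shape, threshold, and run-grouping heuristic below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def detect_leaf_edges(profile, threshold=0.5):
    """Locate leaf-edge positions in a 1-D portal-image intensity profile.

    An edge is any contiguous run of pixels whose normalized gradient
    magnitude exceeds `threshold`; the run's mean index is reported.
    """
    grad = np.abs(np.gradient(np.asarray(profile, dtype=float)))
    if grad.max() > 0:
        grad /= grad.max()  # normalize so the threshold is scale-free
    candidates = np.where(grad >= threshold)[0]
    edges = []
    if candidates.size:
        run = [int(candidates[0])]
        for i in candidates[1:]:
            if i == run[-1] + 1:
                run.append(int(i))       # extend the current edge run
            else:
                edges.append(int(np.mean(run)))
                run = [int(i)]           # start a new edge run
        edges.append(int(np.mean(run)))
    return edges

# Synthetic profile: field open between pixels 20 and 60
profile = np.zeros(100)
profile[20:60] = 1.0
print(detect_leaf_edges(profile))  # → [19, 59]
```

Detected edge positions would then be compared against the leaf positions prescribed to the MLC controller.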

Study on Development of Automated Program Model for Measuring Sensibility Preference of Portrait (인물사진의 감성 선호도 측정 자동화 프로그램 모형 개발 연구)

  • Lee, Chang-Seop;Jung, Da-Yeon;Lee, Eun-Ju;Har, Dong-Hwan
    • The Journal of the Korea Contents Association / v.18 no.9 / pp.34-43 / 2018
  • The purpose of this study is to develop a measurement program model for human-oriented products based on the relationship between the evaluation factors of portraits and general portrait preferences. We added new items essential to image evaluation by analyzing previous studies. As a first step, we identified the facial focus, and the portraits were then evaluated using both objective and subjective image-quality evaluation items. RSC Contrast and Dynamic Range were selected as the objective evaluation items, and the numerical values of each image could be evaluated by statistical analysis. Facial Exposure, Composition, Position, Ratio, Out-of-focus, and the Emotions and Color tone of the image were selected as the subjective evaluation items. In addition, a new face recognition algorithm is applied to judge emotions, so that manufacturers can obtain information with which to analyze people's emotions. The developed program quantitatively and qualitatively compiles these evaluation items when evaluating portraits. The program developed through this study can be used as an analysis program that produces data for developing evaluation models better suited to general users of imaging systems.

News Video Shot Boundary Detection using Singular Value Decomposition and Incremental Clustering (특이값 분해와 점증적 클러스터링을 이용한 뉴스 비디오 샷 경계 탐지)

  • Lee, Han-Sung;Im, Young-Hee;Park, Dai-Hee;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications / v.36 no.2 / pp.169-177 / 2009
  • In this paper, we propose a new shot boundary detection method optimized for news video story parsing. The method was designed to satisfy all of the following requirements: 1) minimizing incorrect data in the data set for anchor shot detection by improving the recall ratio; 2) detecting abrupt cuts and gradual transitions with a single algorithm, so as to divide news video into shots with one scan of the data set; and 3) classifying shots as static or dynamic, thereby reducing the search space for the subsequent stage of anchor shot detection. The proposed method, based on singular value decomposition with incremental clustering and a Mercer kernel, has additional desirable features. By applying singular value decomposition, noise and trivial variations in the video sequence are removed, so separability is improved. The Mercer kernel improves the detectability of shots that are not separable in input space by mapping the data to a high-dimensional feature space. The experimental results illustrate the superiority of the proposed method with respect to the recall criterion and search-space reduction for anchor shot detection.
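The SVD denoising step in this abstract can be sketched as follows: project per-frame feature vectors onto a few right singular vectors and flag a cut where consecutive embedded frames jump apart. The rank, threshold rule, and synthetic data are illustrative assumptions, not the paper's settings (and the kernel and incremental-clustering parts are omitted):

```python
import numpy as np

def shot_boundaries(features, rank=2, factor=3.0):
    """Detect abrupt cuts in a frames-by-features matrix.

    Frames are projected onto the top `rank` right singular vectors
    (the SVD step discards noise and trivial variation), and a cut is
    declared wherever the distance between consecutive embedded frames
    exceeds `factor` times the median consecutive distance.
    """
    X = np.asarray(features, dtype=float)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:rank].T                        # low-rank frame embedding
    d = np.linalg.norm(np.diff(Z, axis=0), axis=1)
    thr = factor * np.median(d)
    return [i + 1 for i, v in enumerate(d) if v > thr]

# Two synthetic "shots" with distinct frame features: cut at frame 5
X = [[1, 0, 0, 0]] * 5 + [[0, 0, 0, 1]] * 5
print(shot_boundaries(X))  # → [5]
```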

Performance Analysis on Declustering High-Dimensional Data by GRID Partitioning (그리드 분할에 의한 다차원 데이터 디클러스터링 성능 분석)

  • Kim, Hak-Cheol;Kim, Tae-Wan;Li, Ki-Joune
    • The KIPS Transactions:PartD / v.11D no.5 / pp.1011-1020 / 2004
  • A lot of work has been done to improve the I/O performance of systems that store and manage massive amounts of data by distributing the data across multiple disks and accessing them in parallel. Most previous work has focused on an efficient mapping from a grid cell, which is determined by the interval number in each dimension, to a disk number, on the assumption that each dimension is split into disjoint intervals so that the entire data space is GRID-like partitioned. However, such work has ignored the effect of the GRID partitioning scheme on declustering performance. In this paper, we enhance the performance of mapping-function-based declustering algorithms by applying a good GRID partitioning method. For this, we propose an estimation model that counts the number of grid cells intersected by a range query, and we apply the GRID partitioning scheme that minimizes the query result size among the possible schemes. While it is common to perform binary partitioning for high-dimensional data, we choose fewer dimensions than binary partitioning requires and split several times along those dimensions, so that we can reduce the number of grid cells touched by a query. Several experimental results show that the proposed estimation model is accurate to within a 0.5% error ratio regardless of query size and dimension. We can also improve the performance of the declustering algorithm based on the mapping function called the Kronecker Sequence, which has been known to be the best among mapping functions for high-dimensional data, by up to 23 times by applying an efficient GRID partitioning scheme.
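The intuition behind splitting fewer dimensions can be illustrated with a simplified cost model in the spirit of the paper's estimator (not its exact formula): along a dimension with n splits, a query of side q intersects roughly q*n + 1 intervals, capped at n, and the expectation over the grid is the product across dimensions.

```python
def expected_cells(query_sides, splits):
    """Expected number of grid cells intersected by a range query.

    Data space is [0,1]^d and dimension i is split into splits[i]
    equal intervals. Along a dimension with n splits, a query of side
    q intersects about q*n + 1 intervals (capped at n); the grid-wide
    expectation is the product across dimensions. A simplified model,
    not the paper's exact estimator.
    """
    exp = 1.0
    for q, n in zip(query_sides, splits):
        exp *= min(q * n + 1.0, float(n))
    return exp

# Same 16-cell budget in a 4-d space, small query (side 0.1 per dimension):
binary = expected_cells([0.1] * 4, [2, 2, 2, 2])   # split every dimension once
focused = expected_cells([0.1] * 4, [4, 4, 1, 1])  # split only two dimensions
print(binary, focused)  # the focused scheme touches fewer cells
```

Under this model the focused scheme intersects about 1.96 cells on average versus about 2.07 for the binary scheme, matching the paper's observation that splitting fewer dimensions more finely can reduce the cells touched by small high-dimensional queries.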

Performance Improvement of WATM Using Concatenated FEC Codes with Pilot Symbols in Indoor Wireless Channels (실내 무선 통신로에서 파일럿 심볼을 삽입한 Concatenated FEC 부호에 의한 WATM의 성능 개선)

  • 박기식;강영흥;김종원;정해원;양해권;조성준
    • The Journal of Korean Institute of Communications and Information Sciences / v.24 no.9A / pp.1276-1284 / 1999
  • We have evaluated the BERs and CLPs of Wireless ATM (WATM) cells employing the concatenated FEC code with pilot symbols for fading compensation, through simulation in an indoor wireless channel modeled as a Rayleigh and a Rician fading channel, respectively. The results of the performance evaluation are compared with those obtained by employing the convolutional code under the same conditions. In the Rayleigh fading channel, taking the maximum tolerable BER ($10^{-3}$) as the criterion for voice service, it is shown that a performance improvement of about 4 dB in terms of $E_b/N_o$ is obtained by employing the concatenated FEC code with pilot symbols rather than the convolutional code with pilot symbols. When the K parameter, the ratio of direct-signal to scattered-signal power in the Rician fading channel, is 6 and 10, performance improvements of about 4 dB and 2 dB, respectively, are obtained in terms of $E_b/N_o$ by employing the concatenated FEC code with pilot symbols, again taking the maximum tolerable BER for voice service as the criterion. Also, in Rician fading channels with K=6 and K=10, taking CLP = $10^{-3}$ as the criterion, performance improvements of about 3.5 dB and 1.5 dB, respectively, are observed in terms of $E_b/N_o$ by employing the concatenated FEC code with pilot symbols.

Why Gabor Frames? Two Fundamental Measures of Coherence and Their Role in Model Selection

  • Bajwa, Waheed U.;Calderbank, Robert;Jafarpour, Sina
    • Journal of Communications and Networks / v.12 no.4 / pp.289-307 / 2010
  • The problem of model selection arises in a number of contexts, such as subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper studies non-asymptotic model selection for the general case of arbitrary (random or deterministic) design matrices and arbitrary nonzero entries of the signal. In this regard, it generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence, termed the worst-case coherence and the average coherence, among the columns of a design matrix. It utilizes these two measures of coherence to provide an in-depth analysis of a simple, model-order agnostic one-step thresholding (OST) algorithm for model selection, and proves that OST is feasible for exact as well as partial model selection as long as the design matrix obeys an easily verifiable property, termed the coherence property. One of the key insights offered by the ensuing analysis is that OST can successfully carry out model selection even when methods based on convex optimization, such as the lasso, fail due to rank deficiency of submatrices of the design matrix. In addition, the paper establishes that if the design matrix has reasonably small worst-case and average coherence, then OST performs near-optimally when either (i) the energy of any nonzero entry of the signal is close to the average signal energy per nonzero entry or (ii) the signal-to-noise ratio in the measurement system is not too high. Finally, two other key contributions of the paper are that (i) it provides bounds on the average coherence of Gaussian matrices and Gabor frames, and (ii) it extends the results on model selection using OST to low-complexity, model-order agnostic recovery of sparse signals with arbitrary nonzero entries. In particular, this part of the analysis implies that an Alltop Gabor frame together with OST can successfully carry out model selection and recovery of sparse signals, irrespective of the phases of the nonzero entries, even if the number of nonzero entries scales almost linearly with the number of rows of the Alltop Gabor frame.
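The one-step thresholding (OST) idea is simple enough to sketch: correlate every column of the design matrix with the observations and keep the columns above a threshold. In the paper the threshold depends on the coherence and noise level; the fixed value and toy orthonormal design below are illustrative assumptions:

```python
import numpy as np

def ost_support(X, y, t):
    """One-step thresholding for model selection: correlate each column
    of the design matrix X with the observation vector y and keep the
    indices whose absolute correlation exceeds the threshold t.
    """
    c = np.abs(X.T @ y)
    return {int(i) for i in np.where(c > t)[0]}

# Noiseless toy case with an orthonormal design (identity matrix):
X = np.eye(6)
beta = np.array([0.0, 3.0, 0.0, 0.0, -2.0, 0.0])
y = X @ beta
print(ost_support(X, y, 1.0))  # → {1, 4}
```

Unlike lasso-style convex programs, this single matrix-vector product is model-order agnostic and trivially cheap, which is what makes the coherence-based guarantees in the paper attractive.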

Development of Evaluation Metrics that Consider Data Imbalance between Classes in Facies Classification (지도학습 기반 암상 분류 시 클래스 간 자료 불균형을 고려한 평가지표 개발)

  • Kim, Dowan;Choi, Junhwan;Byun, Joongmoo
    • Geophysics and Geophysical Exploration / v.23 no.3 / pp.131-140 / 2020
  • In training a classification model using machine learning, the acquisition of training data is a very important stage, because the amount and quality of the training data greatly influence the model performance. However, when the cost of obtaining data is so high that it is difficult to build ideal training data, the number of samples for each class may vary widely, and a serious data-imbalance problem can occur. If such a problem occurs in the training data, not all classes are trained equally, and classes containing relatively little data will have significantly lower recall values. Additionally, the reliability of evaluation indices such as accuracy and precision will be reduced. Therefore, this study sought to overcome the problem of data imbalance in two stages. First, we introduced weighted accuracy and weighted precision as new evaluation indices that can take into account the data-imbalance ratio by modifying conventional measures of accuracy and precision. Next, oversampling was performed to balance weighted precision and recall among classes. We verified the algorithm by applying it to the problem of facies classification. As a result, the imbalance between majority and minority classes was greatly mitigated, and the boundaries between classes could be more clearly identified.
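One plausible formulation of the weighted accuracy described in this abstract weights each sample by the inverse frequency of its true class (weighted precision can be built analogously); this is a sketch of the idea, not necessarily the paper's exact definition:

```python
import numpy as np

def weighted_accuracy(y_true, y_pred):
    """Accuracy with each sample weighted by the inverse frequency of
    its true class, so minority classes count as much as majority ones.
    This equals the unweighted mean of the per-class recalls.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # recall for class c = fraction of true-c samples predicted as c
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

# A majority-class-only predictor: 80% plain accuracy, but 50% weighted
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 10
print(weighted_accuracy(y_true, y_pred))  # → 0.5
```

The example shows why the plain accuracy of 0.8 is misleading under imbalance: the minority class is never recovered, and the weighted index exposes that.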

Face Detection Using A Selectively Attentional Hough Transform and Neural Network (선택적 주의집중 Hough 변환과 신경망을 이용한 얼굴 검출)

  • Choi, Il;Seo, Jung-Ik;Chien, Sung-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.4 / pp.93-101 / 2004
  • A face boundary can be approximated by an ellipse with five-dimensional parameters. This property allows an ellipse detection algorithm to be adapted to face detection. However, constructing a huge five-dimensional parameter space for a Hough transform is quite impractical. Accordingly, we propose a selectively attentional Hough transform method for detecting faces from a symmetric contour in an image. The idea is based on the use of a constant aspect ratio for a face, gradient information, and scan-line-based orientation decomposition, thereby allowing the five-dimensional problem to be decomposed into a two-dimensional one that computes a center with a specific orientation and a one-dimensional one that estimates the short axis. In addition, a two-point selection constraint using geometric and gradient information is employed to increase speed and cope with cluttered backgrounds. After detecting candidate face regions using the proposed Hough transform, a multi-layer perceptron verifier is adopted to reject false positives. The proposed method was found to be relatively fast and promising.
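The two-point center voting underlying such symmetric-contour Hough transforms can be sketched as follows: edge points with opposite gradient directions vote for their midpoint as a candidate center. This is a minimal illustration of that one idea, not the paper's full scan-line orientation decomposition or short-axis estimation:

```python
import numpy as np
from collections import Counter

def vote_centers(points, gradients, tol=0.05):
    """Vote for the center of a symmetric contour: every pair of edge
    points with (approximately) opposite gradient directions votes for
    its midpoint, and the midpoint with the most votes wins.
    """
    acc = Counter()
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            # opposite unit gradients sum to (approximately) zero
            if np.linalg.norm(gradients[i] + gradients[j]) < tol:
                mid = tuple(int(np.round(c)) for c in (points[i] + points[j]) / 2)
                acc[mid] += 1
    return acc.most_common(1)[0][0] if acc else None

# Eight edge points on a circle of radius 5 centered at (10, 10),
# with outward unit normals as gradients:
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
pts = [np.array([10 + 5 * np.cos(a), 10 + 5 * np.sin(a)]) for a in angles]
grads = [np.array([np.cos(a), np.sin(a)]) for a in angles]
print(vote_centers(pts, grads))  # → (10, 10)
```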

A Study on Joint Damage Model and Neural Networks-Based Approach for Damage Assessment of Structure (구조물 손상평가를 위한 접합부 손상모델 및 신경망기법에 관한 연구)

  • 윤정방;이진학;방은영
    • Journal of the Earthquake Engineering Society of Korea / v.3 no.3 / pp.9-20 / 1999
  • A method is proposed to estimate the joint damage of a steel structure from modal data using the neural network technique. The beam-to-column connection in a steel frame structure is represented by a zero-length rotational spring at the end of the beam element, and the connection fixity factor is defined based on the rotational stiffness so that the factor lies in the range 0~1.0. The severity of joint damage is then defined as the reduction ratio of the connection fixity factor. Several advanced techniques are employed to develop a robust damage identification technique using neural networks. The concept of substructural identification is used for localized damage assessment in large structures. The noise-injection learning algorithm is used to reduce the effects of noise in the modal data. A data perturbation scheme is also employed to assess the confidence in the estimated damage based on a few sets of actual measurement data. The feasibility of the proposed method is examined through a numerical simulation study on a 2-bay, 10-story structure and an experimental study on a 2-story structure. It has been found that joint damage can be reasonably estimated even when the measured modal vectors are limited to a localized substructure and the data are severely corrupted with noise.
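The fixity-factor damage measure defined in this abstract can be sketched as follows. The spring-stiffness-to-fixity-factor relation used here is the common semi-rigid connection expression and may differ in detail from the paper's definition:

```python
def fixity_factor(k_rot, EI, L):
    """Connection fixity factor for a rotational end spring of stiffness
    k_rot on a beam of flexural rigidity EI and length L, using the
    common semi-rigid connection expression r = 1 / (1 + 3*EI/(k_rot*L)):
    r -> 0 for a pinned joint and r -> 1 for a rigid one.
    """
    return 1.0 / (1.0 + 3.0 * EI / (k_rot * L))

def joint_damage(k_intact, k_damaged, EI, L):
    """Severity of joint damage as the reduction ratio of the fixity factor."""
    r0 = fixity_factor(k_intact, EI, L)
    r1 = fixity_factor(k_damaged, EI, L)
    return (r0 - r1) / r0

# A drop in rotational stiffness from 3*EI/L to EI/L halves the fixity factor:
print(joint_damage(3.0, 1.0, 1.0, 1.0))  # → 0.5
```

In the paper's setting, the neural network would be trained to recover these reduction ratios from the measured modal data rather than from the stiffness directly.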

Real-Time Hybrid Testing Using a Fixed Iteration Implicit HHT Time Integration Method for a Reinforced Concrete Frame (고정반복법에 의한 암시적 HHT 시간적분법을 이용한 철근콘크리트 골조구조물의 실시간 하이브리드실험)

  • Kang, Dae-Hung;Kim, Sung-Il
    • Journal of the Earthquake Engineering Society of Korea / v.15 no.5 / pp.11-24 / 2011
  • A real-time hybrid test of a 3-story, 3-bay reinforced concrete frame, divided into numerical and physical substructure models under uniaxial earthquake excitation, was run using a fixed-iteration implicit HHT time integration method. The first-story inner non-ductile column was selected as the physical substructure model, and uniaxial earthquake excitation was applied to the numerical model until the specimen failed due to severe damage. A finite-element analysis program, Mercury, was newly developed and optimized for real-time hybrid testing. The drift ratio based on the top horizontal displacement of the physical substructure model was compared with the results of a numerical simulation in OpenSees and of a shaking table test. The experiment in this paper is one of the most complex real-time hybrid tests, and the hardware, algorithm, and models are described in detail. With improvements to the numerical model, to the evaluation of the tangent stiffness matrix of the physical substructure model in the finite-element analysis program, and to the software that determines element states for the force-based beam-column element (to reduce computational time), the comparison between the real-time hybrid test and the shaking table test would be sound enough to support recommendations. In addition, toward the goal of "numerical simulation of complex structures under dynamic loading," the real-time hybrid test has enough merit to serve as an alternative to dynamic experiments on large and complex structures.