• Title/Summary/Keyword: higher order accuracy


The usage of the EPID as a QA tool (EPID의 적정관리 도구로서의 유용성에 관한 연구)

  • Cho Jung Hee;Bang Dong Wan;Yoon Seong Ik;Park Jae Il
    • The Journal of Korean Society for Radiation Therapy / v.11 no.1 / pp.16-21 / 1999
  • Purpose: The aim of this study is to confirm the feasibility of the liquid-type EPID as a QC tool under clinical conditions and as a replacement for film dosimetry. An additional aim is to describe a procedure for using an EPID as a physics calibration tool in measurements of radiation beam parameters that are typically carried out with film. Method & Materials: In this study we used the Clinac 2100c/d with EPID. The system contains 65536 liquid-filled ion chambers arranged in a $256{\times}256$ matrix, and the imaging area is $32.5{\times}32.5cm$ with a liquid layer thickness of 1 mm. The EPID was tested for different field sizes under typical clinical conditions, and pixel values were calibrated against dose by acquiring images of lead attenuators of various thicknesses (a lead step wedge) with 6 and 10 MV x-rays. The lead attenuators were placed on the table of the linear accelerator and the portal imager was set at an SDD of 100 cm. Portal images were acquired for varying field sizes and energies, and the average pixel value in a $3{\times}3$ pixel region of interest (ROI) at the field center was recorded. The pixel values were also measured for different field sizes in order to evaluate their dependence on the x-ray energy spectrum and on various scatter components. Result: On the whole, the EPID was useful as a QA tool and dosimetry device. In the mechanical checks, cross-hair centering was well matched with an error of less than 2 mm, and the light/radiation field coincidence was within 1 mm. In portal dosimetry, the wider the field size, the higher the pixel value, and as the lead thickness increased, the pixel value decreased exponentially. Conclusions: The EPID is well suited as a QA tool and can be used to measure exit dose during patient treatment with reasonable accuracy, but careful consideration is required before applying it to clinical studies.
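The calibration step described above, averaging a $3{\times}3$ pixel ROI at the field center and relating pixel value to lead thickness, amounts to an exponential attenuation fit. Below is a minimal Python sketch of that idea; the image array, ROI location, and calibration numbers are hypothetical, not the authors' data.

```python
import numpy as np

def roi_mean(image, cx, cy, size=3):
    """Average pixel value in a size x size region of interest centred on (cx, cy)."""
    h = size // 2
    return image[cy - h:cy + h + 1, cx - h:cx + h + 1].mean()

# Example: ROI average at the centre of a hypothetical 256x256 portal image.
image = np.random.default_rng(0).uniform(300, 1000, size=(256, 256))
print(f"central ROI value: {roi_mean(image, 128, 128):.1f}")

# Hypothetical calibration points: lead thickness (cm) vs ROI-averaged pixel value.
thickness = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
pixel = np.array([980.0, 760.0, 590.0, 455.0, 352.0])

# Fit pixel = A * exp(-mu * t) by linear least squares on the logarithm.
slope, intercept = np.polyfit(thickness, np.log(pixel), 1)
mu, a = -slope, np.exp(intercept)
print(f"effective attenuation coefficient mu = {mu:.3f} 1/cm, A = {a:.1f}")
```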


Design and Implementation of Static Program Analyzer Finding All Buffer Overrun Errors in C Programs (C 프로그램의 버퍼 오버런(buffer overrun) 오류를 찾아 주는 정적 분석기의 설계와 구현)

  • Yi Kwang-Keun;Kim Jae-Whang;Jung Yung-Bum
    • Journal of KIISE:Software and Applications / v.33 no.5 / pp.508-524 / 2006
  • We present our experience of combining, in a realistic setting, a static analyzer with a statistical analysis. The combination is meant to reduce the inevitable false alarms from a domain-unaware static analyzer. Our analyzer, named Airac (Array Index Range Analyzer for C), collects all the true buffer-overrun points in ANSI C programs. Soundness is maintained, and the analysis's cost-accuracy improvement is achieved by techniques that the static analysis community has long accumulated. For the still-inevitable false alarms (e.g., Airac raised 970 buffer-overrun alarms in commercial C programs of 5.3 million lines, and 737 of the 970 alarms were false), which inevitably arise for particular C programs, we use a statistical post-analysis. Given the analysis results (alarms), the statistical analysis sifts out probable false alarms and prioritizes true alarms by estimating the probability of each alarm being true. The probabilities are used in two ways: 1) only the alarms whose true-alarm probabilities are higher than a threshold are reported to the user; 2) the alarms are sorted by probability before reporting, so that the user can check highly probable errors first. In our experiments with Linux kernel sources, when the risk of missing a true error was set about 3 times greater than that of raising a false alarm, 74.83% of the false alarms could be filtered out, and only 15.17% of the false alarms were mixed in before the user had observed 50% of the true alarms.
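The reporting side of the statistical post-analysis, filtering alarms by a probability threshold and sorting the rest so that the most probable errors come first, can be sketched as follows. The Alarm record and the probability values are illustrative only; they are not Airac's actual data structures or numbers.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    location: str   # e.g. "file.c:42"
    p_true: float   # estimated probability that the alarm is a true error

def report(alarms, threshold=0.5):
    """Keep alarms whose true-alarm probability exceeds the threshold,
    then list the most probable errors first."""
    kept = [a for a in alarms if a.p_true > threshold]
    return sorted(kept, key=lambda a: a.p_true, reverse=True)

alarms = [Alarm("kernel/sched.c:101", 0.91),
          Alarm("drivers/net/foo.c:57", 0.23),
          Alarm("fs/ext3/inode.c:880", 0.66)]
for a in report(alarms):
    print(f"{a.p_true:.2f}  {a.location}")
```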

Designing a FRBR Work Grouping Algorithm of Bibliographic Records using a Role Term Dictionary of Authors (저자역할용어사전 구축 및 저작군집화에 관한 연구)

  • Yun, Jaehyuk;Do, Seulki;Oh, Sam G.
    • Journal of the Korean Society for Information Management / v.37 no.2 / pp.197-223 / 2020
  • The purpose of this study is to analyze the issues that arise in grouping KORMARC records using the FRBR WORK concept and to suggest a new method. Previous studies did not sufficiently address the criteria or processes for identifying the representative authors of records and their derivatives. Therefore, our study focused on devising a method for identifying the representative author when a work has multiple contributors. We developed a method of identifying representative authors using an author role dictionary constructed by extracting role terms from the statement of responsibility field (245). We also designed a way to group records into works by calculating similarity measures of authors and titles. The accuracy of WORK grouping was highest when blank spaces, parentheses, and controlling processes were removed from titles and the measured similarity of both authors and titles was higher than 80 percent. This was an experimental study in which we developed an author-role dictionary that can be used to select a representative author and measured the similarity of authors and titles in order to achieve effective WORK grouping of KORMARC records. Future work will attempt to improve the similarity measure for titles, incorporate FRBR Group 1 entities such as expression, manifestation, and item data into the algorithm, and improve the algorithm by utilizing other forms of MARC data that are widely used in Korea.
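The grouping criterion described above, normalizing titles and requiring both author and title similarity of at least 80 percent, can be sketched with a generic string-similarity measure. difflib's SequenceMatcher is used here only as a stand-in for whatever similarity function the authors actually computed, and the records are made up.

```python
import re
from difflib import SequenceMatcher

def normalize_title(title):
    """Lowercase and drop blanks and parenthesized segments before comparison."""
    title = re.sub(r"\([^)]*\)", "", title.lower())
    return re.sub(r"\s+", "", title)

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def same_work(rec1, rec2, threshold=0.8):
    """Group two records as one WORK when the representative authors and the
    normalized titles are both at least `threshold` similar."""
    return (similarity(rec1["author"], rec2["author"]) >= threshold and
            similarity(normalize_title(rec1["title"]),
                       normalize_title(rec2["title"])) >= threshold)

r1 = {"author": "Hemingway, Ernest", "title": "The Old Man and the Sea"}
r2 = {"author": "Hemingway, Ernest", "title": "The old man and the sea (abridged ed.)"}
print(same_work(r1, r2))
```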

Analysis on Field Applicability of SWAN Nested Model (SWAN Nested model의 현장 적용성 분석)

  • Kim, Kang-Min;Dae, Nam-Ki;Lee, Joong-Woo
    • Journal of Navigation and Port Research / v.35 no.1 / pp.45-49 / 2011
  • Recent numerical experiments demand ever higher resolution and accuracy. In wave-field calculations, one generally starts with a large-region formulation and then carries out a separate detailed-region formulation on denser grids over the main area of interest, taking the geographical and bathymetric variation into account. The wave field resulting from the large-region calculation is introduced into the detail-region calculation as the incident waves. This process raises a continuity problem, and methods such as variable gridding or spectrum sampling are used to overcome it; however, the associated numerical errors have not been examined and analyzed sufficiently. This study therefore investigates the field applicability of the most widely used wave model, the nested SWAN model. For this purpose, model experiments were carried out for two coastal harbours with different tidal environments, and the results were compared and analyzed. The analysis showed that the values extracted near the boundaries of the large and detail regions and those of the nested SWAN formulation are almost identical, with no difference between the two tidal environment conditions. To reduce the numerical errors, however, the boundaries of the detailed region should be set in deeper water, away from regions of rapid bathymetric change.
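The nesting step itself, taking the wave field computed on the coarse large-region grid and handing it to the fine detail-region grid as boundary input, can be illustrated by interpolating a coarse field onto the boundary points of the nested grid. This is only a schematic of the data hand-off; in practice SWAN exchanges directional wave spectra through its nesting facilities rather than single wave heights, and all grids and values below are hypothetical.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical coarse (large-region) grid of significant wave height Hs (m).
x_coarse = np.linspace(0.0, 50_000.0, 51)
y_coarse = np.linspace(0.0, 30_000.0, 31)
hs_coarse = (2.0 + 0.3 * np.sin(x_coarse[None, :] / 8_000.0)
                 + 0.2 * np.cos(y_coarse[:, None] / 5_000.0))

interp = RegularGridInterpolator((y_coarse, x_coarse), hs_coarse)

# Open (western) boundary points of the nested detail-region grid.
y_edge = np.linspace(5_000.0, 15_000.0, 101)
x_edge = np.full_like(y_edge, 12_000.0)
hs_boundary = interp(np.column_stack([y_edge, x_edge]))

print("boundary Hs samples:", np.round(hs_boundary[::25], 3))
```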

An Analysis Method of User Preference by using Web Usage Data in User Device (사용자 기기에서 이용한 웹 데이터 분석을 통한 사용자 취향 분석 방법)

  • Lee, Seung-Hwa;Choi, Hyoung-Kee;Lee, Eun-Seok
    • Journal of KIISE:Computing Practices and Letters / v.15 no.3 / pp.189-199 / 2009
  • The amount of information on the Web is growing explosively as the Internet gains in popularity, yet only a small portion of it is truly relevant or useful to the user. Thus, offering suitable information according to user demand is an important subject in information retrieval. In e-commerce, the recommender system is essential to revitalize commercial transactions and to raise user satisfaction and loyalty towards the information provider. Existing recommender systems are mostly based on user data collected at servers, so the data are dispersed over several servers. Therefore, web servers that lack sufficient user behavior data cannot easily infer user preferences. Also, if the user visits a server infrequently, it is hard to reflect the user's dynamically changing interests. This paper proposes a novel personalization system that analyzes user preference based on the web documents accessed by the user on a user device. The system also identifies non-content blocks appearing repeatedly in dynamically generated web documents, and adds weight to the keywords extracted from the hyperlink sentences selected by the user. The system thus allows a web server with little user data to establish recommendation strategies at an early stage, and user profiles are generated rapidly and more accurately by identifying the information blocks. In order to evaluate the proposed system, this study collected web data and purchase histories from users with current purchase activity and computed the similarity between the purchase data and the user profile. The accuracy of the generated user profile is confirmed by the fact that the web page containing the purchased item shows a higher correlation with the profile than other item pages.
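The profile-building idea, counting keywords from pages read on the device and weighting terms from the hyperlink text the user actually clicked, and the evaluation by similarity against purchased-item pages, can be sketched with a weighted term vector and cosine similarity. The weighting factor, tokenisation, and all terms below are placeholders rather than the paper's parameters.

```python
import math
from collections import Counter

def build_profile(page_terms, clicked_anchor_terms, anchor_weight=2.0):
    """Term-frequency profile; terms from clicked hyperlink text get extra weight."""
    profile = Counter(page_terms)
    for term in clicked_anchor_terms:
        profile[term] += anchor_weight
    return profile

def cosine(p, q):
    dot = sum(p[t] * q[t] for t in set(p) & set(q))
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

profile = build_profile(
    ["camera", "lens", "battery", "travel", "camera"],
    ["mirrorless", "camera"])
purchased_item_page = Counter(["mirrorless", "camera", "lens", "sensor"])
print(f"profile vs purchased-item page: {cosine(profile, purchased_item_page):.2f}")
```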

A Study on Self-image and Clothing-Purchasing-Behavior of Adolescence (여고생의 자아 이미지와 의복구매 행동에 관한 연구)

  • 김영신;한명숙
    • The Research Journal of the Costume Culture / v.6 no.1 / pp.94-109 / 1998
  • The objective of this study is to measure the self-image of adolescents, analyze their clothing-purchasing behavior empirically, and clarify the correlation between the two variables, self-image and clothing-purchasing behavior. For this purpose, the study combines a theoretical review of previous related research with a survey: 431 female high school students residing in Seoul were asked to answer selected survey questions covering three aspects, clothing-purchasing behavior, self-image, and demographics. The survey data were analyzed with statistical techniques to improve accuracy. The statistical methods used were descriptive statistics (frequency, mean, percentage), factor analysis (varimax rotation), crosstabs (chi-square), t-test, one-way ANOVA, correlation analysis, reliability analysis, and Duncan's multiple range test. The major results of this study were as follows. Firstly, there is a discrepancy between real self-image and ideal self-image, and more significant differences are seen in physical aspects than in psychological aspects. The research thus indicates that the gap between the ideal and the real situation leads to psychological instability. In addition, the formation of real self-image depends on several factors such as family economic level, pocket money, and expenditure on clothing; therefore, from the parents' point of view, it is important to weigh all of these factors when deciding how much to allow for children's clothing and pocket money. Secondly, the research shows a correlation between average expenditure on clothing and whether the mother is employed. Average clothing expenditure is generally influenced by fashion, which tends to change seasonally, and there is a positive linear relationship between clothing expenditure and sensitivity to fashion: the dependent variable, expenditure on clothing, varies as the independent variable, fashion sensitivity, changes. Female high school students tend to place a high value on brands, and those who spend more money on clothes show a stronger tendency toward prompt purchases than those who do not. Thirdly, the analysis of clothing-purchasing behavior and self-image shows that the difference between real and ideal self-image is the main source of dissatisfaction after clothing purchases; these unfilled needs lead students to keep making further purchases to satisfy themselves. Therefore, parents' advice and guidance on their children's clothing expenditure are strongly recommended in order to establish sound purchasing patterns.
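Two of the statistical procedures named above, the chi-square test on a crosstab and the one-way ANOVA, are standard and easy to reproduce; a small sketch with entirely made-up survey counts and scores follows, purely to show how such tests are run.

```python
import numpy as np
from scipy import stats

# Hypothetical crosstab: rows = self-image group, columns = purchase-motive category.
crosstab = np.array([[35, 50, 20],
                     [25, 60, 40],
                     [15, 45, 30]])
chi2, p, dof, _ = stats.chi2_contingency(crosstab)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")

# Hypothetical clothing-expenditure scores for three fashion-sensitivity groups.
low  = [3.1, 2.8, 3.4, 2.9]
mid  = [3.6, 3.9, 3.5, 4.0]
high = [4.4, 4.1, 4.6, 4.3]
f_stat, p = stats.f_oneway(low, mid, high)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.3f}")
```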


Numerical Analysis of Multi-dimensional Consolidation Based on Non-Linear Model (비선형 모델에 의한 다차원 압밀의 수치해석)

  • Jeong, Jin-Seop;Gang, Byeong-Seon;Nam, Gung-Mun
    • Geotechnical Engineering / v.1 no.1 / pp.59-72 / 1985
  • This paper deals with numerical analysis by the finite element method, introducing Biot's theory of consolidation and the modified Cambridge model proposed by the Roscoe school of Cambridge University as the constitutive equation, and using the Christian-Boehmer technique. In particular, the time interval and the division of elements are investigated in view of stability and economy. To check the validity of the authors' program, it was tested against Terzaghi's exact solution for the one-dimensional consolidation case and against the results of Magnan's analysis of the existing embankment at Cubzac-les-Ponts in France. The main conclusions obtained are summarized as follows: 1. In one-dimensional consolidation, the finer the element division near the surface of the foundation, the higher the accuracy of the numerical analysis. 2. For the time interval, dividing each log cycle of time into 20 steps gives stable results. 3. In elements with a long drainage distance, the Mandel-Cryer effect appears due to time lag. 4. The lateral displacement at the initial loading stage predicted by the authors' program, in which the load was assumed to be applied not as a concentrated load but in grid form, agrees well with the observed values. 5. The pore water pressure predicted by the authors' program agrees better with the observations than Magnan's results do. 6. Optimum construction control by the Matsuo-Kawamura method is possible using the lateral displacement and settlement predicted by the program.
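The one-dimensional benchmark used in the validation, Terzaghi's exact solution, comes from the standard series for the average degree of consolidation, $U(T_v) = 1 - \sum_{m=0}^{\infty} \frac{2}{M^2} e^{-M^2 T_v}$ with $M = \frac{\pi}{2}(2m+1)$. A short sketch evaluating that series, with the time factor sampled 20 points per log cycle in the spirit of the paper's recommended time stepping, is given below; the truncation length is arbitrary.

```python
import math
import numpy as np

def terzaghi_U(Tv, n_terms=100):
    """Average degree of consolidation U(Tv) from Terzaghi's series solution."""
    s = 0.0
    for m in range(n_terms):
        M = math.pi * (2 * m + 1) / 2.0
        s += 2.0 / M**2 * math.exp(-M**2 * Tv)
    return 1.0 - s

# Time factor sampled 20 points per log cycle, spanning three log cycles.
Tv = np.logspace(-3, 0, 3 * 20 + 1)
for t in Tv[::12]:
    print(f"Tv = {t:8.4f}   U = {terzaghi_U(t):.4f}")
```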


The Analysis and Design of Advanced Neurofuzzy Polynomial Networks (고급 뉴로퍼지 다항식 네트워크의 해석과 설계)

  • Park, Byeong-Jun;O, Seong-Gwon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.39 no.3 / pp.18-31 / 2002
  • In this study, we introduce the concept of advanced neurofuzzy polynomial networks (ANFPN), a hybrid modeling architecture combining neurofuzzy networks (NFN) and polynomial neural networks (PNN). These networks are highly nonlinear rule-based models. The development of the ANFPN draws on the technologies of Computational Intelligence (CI), namely fuzzy sets, neural networks, and genetic algorithms. NFN contributes to the formation of the premise part of the rule-based structure of the ANFPN, while the consequence part is designed using PNN. At the premise part, NFN uses both simplified fuzzy inference and the error back-propagation learning rule, and the parameters of the membership functions, learning rates, and momentum coefficients are adjusted by genetic optimization. As the consequence structure of the ANFPN, PNN is a flexible network architecture whose structure (topology) is developed through learning; in particular, the number of layers and nodes of the PNN is not fixed in advance but generated in a dynamic way. We introduce two kinds of ANFPN architectures, the basic and the modified one, which differ in the number of input variables and the order of the polynomial in each layer of the PNN structure. Owing to the specific features of the two combined architectures, it is possible to capture the nonlinear characteristics of the process system and to obtain better output performance with superb predictive ability. The availability and feasibility of the ANFPN are discussed and illustrated with the aid of two representative numerical examples. The results show that the proposed ANFPN can produce models with higher accuracy and predictive ability than any method presented previously.
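The PNN consequence part grows in a GMDH-like fashion: each candidate node fits a low-order polynomial of a pair of inputs, and the best-performing nodes become the inputs of the next layer. A minimal sketch of one such layer, using second-order polynomial nodes fitted by least squares, is shown below; it illustrates the idea only and is not the authors' implementation (which additionally involves fuzzy premises and genetic optimization).

```python
import numpy as np
from itertools import combinations

def fit_pnn_node(x1, x2, y):
    """Fit y ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2 by least squares."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs, A @ coeffs

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 3))          # three candidate input variables
y = 1.5 * X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2   # hypothetical process output

# One PNN layer: try every input pair, keep the node with the smallest squared error.
candidates = []
for i, j in combinations(range(X.shape[1]), 2):
    coeffs, pred = fit_pnn_node(X[:, i], X[:, j], y)
    candidates.append((np.mean((y - pred) ** 2), i, j, coeffs))
mse, i, j, coeffs = min(candidates, key=lambda c: c[0])
print(f"best node uses inputs x{i}, x{j} with MSE = {mse:.4f}")
```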

Topology Optimization of Incompressible Flow Using P1 Nonconforming Finite Elements (P1 비순응 요소를 이용한 비압축성 유동 문제의 위상최적화)

  • Jang, Gang-Won;Chang, Se-Myong
    • Transactions of the Korean Society of Mechanical Engineers A / v.36 no.10 / pp.1139-1146 / 2012
  • An alternative approach to topology optimization of steady incompressible Navier-Stokes flow problems is presented using P1 nonconforming finite elements. This study extends the earlier application of P1 nonconforming elements to topology optimization of Stokes problems. We investigate whether the advantages of P1 nonconforming elements for topology optimization of incompressible materials, which rest on their locking-free property and linear shape functions, remain valid for fluid equations with the inertia term. Compared with a mixed finite element formulation, the number of degrees of freedom of P1 nonconforming elements is reduced by using the discrete divergence-free property; the continuity equation of incompressible flow can be imposed on the momentum equation by the penalty method. The effect of the penalty parameter on solution accuracy and its proper bounds are investigated. While the nodes of most quadrilateral nonconforming elements are located at the midpoints of element edges and higher-order shape functions are used, the present P1 nonconforming elements have P1 shape functions, {1, x, y}, and vertex-wise degrees of freedom, so their implementation is as simple as that of standard bilinear conforming elements. The effectiveness of the proposed formulation is verified through examples with various Reynolds numbers.
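The penalty treatment of the continuity equation mentioned above replaces the pressure unknown with a penalized divergence term in the discrete momentum equation. In a standard form of this idea (written here for the steady case, and not necessarily the exact discrete form used in the paper), one seeks $\mathbf{u}_h$ such that, for all test functions $\mathbf{v}_h$,

$$\int_\Omega \rho\,(\mathbf{u}_h\cdot\nabla)\mathbf{u}_h\cdot\mathbf{v}_h\,d\Omega + \int_\Omega \mu\,\nabla\mathbf{u}_h:\nabla\mathbf{v}_h\,d\Omega + \lambda\int_\Omega (\nabla\cdot\mathbf{u}_h)(\nabla\cdot\mathbf{v}_h)\,d\Omega = \int_\Omega \mathbf{f}\cdot\mathbf{v}_h\,d\Omega,$$

with the pressure recovered a posteriori as $p_h \approx -\lambda\,\nabla\cdot\mathbf{u}_h$. The penalty parameter $\lambda$ trades the mass-conservation error, which shrinks as $\lambda$ grows, against the conditioning of the discrete system, which is why its proper bounds matter.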

Calpain-10 SNP43 and SNP19 Polymorphisms and Colorectal Cancer: a Matched Case-control Study

  • Hu, Xiao-Qin;Yuan, Ping;Luan, Rong-Sheng;Li, Xiao-Ling;Liu, Wen-Hui;Feng, Fei;Yan, Jin;Yang, Yan-Fang
    • Asian Pacific Journal of Cancer Prevention / v.14 no.11 / pp.6673-6680 / 2013
  • Objective: Insulin resistance (IR) is an established risk factor for colorectal cancer (CRC). Given that CRC and IR physiologically overlap and the calpain-10 gene (CAPN10) is a candidate gene for IR, we explored the association between CAPN10 and CRC risk. Methods: Blood samples from 400 case-control pairs were genotyped, and the lifestyle and dietary habits of these pairs were recorded. Unconditional logistic regression (LR) was used to assess the effects of CAPN10 SNP43 and SNP19 and of environmental factors. Both generalized multifactor dimensionality reduction (GMDR) and classification and regression trees (CART) were used to test gene-environment interactions for CRC risk. Results: The GA+AA genotype of SNP43 and the Del/Ins+Ins/Ins genotype of SNP19 were marginally related to CRC risk (GA+AA: OR = 1.35, 95% CI = 0.92-1.99; Del/Ins+Ins/Ins: OR = 1.31, 95% CI = 0.84-2.04). Notably, a high-order interaction was consistently identified by the GMDR and CART analyses. In GMDR, the four-factor interaction model of SNP43, SNP19, red meat consumption, and smoked meat consumption was the best model, with a maximum cross-validation consistency of 10/10 and a testing balanced accuracy of 0.61 (P < 0.01). In LR, subjects with high red and smoked meat consumption and two risk genotypes had a 6.17-fold CRC risk (95% CI = 2.44-15.6) relative to subjects with low red and smoked meat consumption and no risk genotypes. In CART, individuals with high smoked and red meat consumption, SNP19 Del/Ins+Ins/Ins, and SNP43 GA+AA had a higher CRC risk (OR = 4.56, 95% CI = 1.94-10.75) than those with low smoked and red meat consumption. Conclusions: Although the single loci CAPN10 SNP43 and SNP19 are not enough to significantly increase CRC susceptibility, the combination of SNP43, SNP19, red meat consumption, and smoked meat consumption is associated with elevated risk.
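The genotype-diet odds ratios quoted above follow the standard unconditional estimate; as a small illustration, the OR and its Wald 95% confidence interval for a single 2x2 exposure table can be computed as below. The counts are made up, and the matching of the case-control pairs is ignored here for simplicity.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
                 case   control
    exposed        a       b
    unexposed      c       d
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: high red/smoked meat + both risk genotypes vs reference group.
or_, lo, hi = odds_ratio_ci(a=40, b=12, c=55, d=90)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
```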