• Title/Summary/Keyword: accuracy analysis


APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1051-1054
    • /
    • 1993
  • The International Atomic Energy Agency's Statute in Article III.A.5 allows it "to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy". Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable.
A containment measure is one that is designed, by taking advantage of structural characteristics such as containers, tanks or pipes, to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment. Such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) Provision by the State of information such as - design information describing nuclear installations; - accounting reports listing nuclear material inventories, receipts and shipments; - documents amplifying and clarifying reports, as applicable; - notification of international transfers of nuclear material. (b) Collection by the IAEA of information through inspection activities such as - verification of design information - examination of records and reports - measurement of nuclear material - examination of containment and surveillance measures - follow-up activities in case of unusual findings. (c) Evaluation of the information provided by the State and of that collected by inspectors to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies. To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation.
For analysis of implementation strategy purposes, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of various possible diversion paths: the amount, type and location of nuclear material involved, the physical route and conversion of the material that may take place, the rate of removal, and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: - unreported removal of nuclear material from an installation or during transit - unreported introduction of nuclear material into an installation - unreported transfer of nuclear material from one material balance area to another - unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium - undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: - one significant quantity or more in a short time, often known as abrupt diversion; and - one significant quantity or more per year, for example by accumulation of smaller amounts each time to add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: - restriction of access of inspectors - falsification of records, reports and other material balance data - replacement of nuclear material, e.g. use of dummy objects - falsification of measurements or of their evaluation - interference with IAEA-installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur.
All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility makes a different use of these procedures, equipment and instrumentation according to the various diversion strategies which could be applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathway sets of scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that needs human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities using material accountancy as a fundamental measure. The strength of material accountancy lies in the fact that it allows any diversion to be detected independently of the diversion route taken. Material accountancy detects a diversion after it has actually happened, and thus is powerless to physically prevent it; it can only deter, through the risk of early detection, any contemplation by State authorities of carrying out a diversion. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations.
Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would be difficult and expensive to collect. Above all, a realistic appraisal of the truth needs sound human judgment.


A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from among the overflowing contents is becoming ever more important as content generation continues. In the flood of information, efforts are being made to better reflect the intention of the user in search results, rather than treating the information request as a simple string. Also, large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, which provide users with satisfaction and convenience. Finance, in particular, is one of the fields expected to benefit from text data analysis because it constantly generates new information, and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm, and difficult to extract good-quality triples. Second, it becomes harder for people to produce labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, problem definition for automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. So, in order to overcome the limits described above and improve the semantic performance of stock-related information searching, this study attempts to extract knowledge entities using a neural tensor network and to evaluate their performance. Unlike other studies, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has the following three significances. First, it presents a practical and simple automatic knowledge extraction method that can be readily applied. Second, it demonstrates the possibility of performance evaluation through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study to confirm the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. After that, using a neural tensor network, the same number of score functions as stocks are trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock of the function with the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance of the model for each stock, only 3 stocks, LG ELECTRONICS, KiaMtr, and Mando, show far lower performance than average. This result may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, that are necessary to search related information in accordance with the user's investment intention. Graph data is generated by using only the named entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. From the empirical test, we confirm the effectiveness of the presented model as described above. However, there also exist some limits and things to complement. Most notably, the especially poor model performance on only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used for the purpose of semantically matching new text information with the related stocks.
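The per-stock scoring scheme described in this abstract (one-hot entity vectors, one score function per stock, argmax prediction, hit-ratio evaluation) can be sketched as follows. The weights, dimensions, and data here are hypothetical stand-ins: the actual model uses a trained neural tensor layer rather than a plain weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_stocks = 100, 3          # the paper selects 100 top entities per stock

# Stand-in for trained score functions: one weight vector per stock.
# (Hypothetical; the paper trains a neural tensor network instead.)
W = rng.normal(size=(n_stocks, n_entities))

def predict_stock(entity_idx: int) -> int:
    """One-hot encode the entity and return the stock whose score function is highest."""
    x = np.zeros(n_entities)
    x[entity_idx] = 1.0
    scores = W @ x                     # one score per stock
    return int(np.argmax(scores))

def hit_ratio(pairs) -> float:
    """Fraction of (entity_idx, true_stock) pairs predicted correctly."""
    hits = sum(predict_stock(e) == s for e, s in pairs)
    return hits / len(pairs)
```

With one-hot inputs, each prediction reduces to an argmax over a single column of `W`, which is why the evaluation can be run cheaply over every report in the testing set.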

RELIABILITY OF SPIRAL TOMOGRAPHY FOR IMPLANT SITE MEASUREMENT OF THE MANDIBLE (하악골 매식 부위 계측을 위한 나선형 단층촬영술의 신뢰도)

  • Kim Kee-Deog;Park Chang-Seo
    • Journal of Korean Academy of Oral and Maxillofacial Radiology
    • /
    • v.27 no.2
    • /
    • pp.27-47
    • /
    • 1997
  • The purpose of this study was to evaluate the accuracy and usefulness of spiral tomography through the comparison and analysis of SCANORA cross-sectional tomographs and DentaScan computed tomographic images of dry mandibles taken with a SCANORA spiral tomographic machine and a computed tomographic machine. Thirty-one dry mandibles with full or partial edentulous areas were used. To evaluate the possible effect of location in the edentulous area, it was divided into 4 regions: Me (the region of the mental foramen), M1 (the midportion between Me and M2), M2 (the midportion between the mental foramen and the mandibular foramen) and S (the midportion of the mandibular symphysis). A ZPC column (sized 4 mm x 5 mm) was seated on the edentulous regions of Me, M1, M2 and S using an acrylic stent. Then SCANORA spiral tomography and computed tomography were performed on the edentulous regions containing the ZPC column. The ZPC columns and cross-sectional images of the mandible were measured in the radiographs by three observers, and the differences between the two imaging modalities were analysed. The results were as follows: 1. In comparing the actual measurements of the ZPC column with the measurements in the radiographs, the mean error of DentaScan computed tomography was 0.07 mm in the vertical direction and -0.06 mm in the horizontal direction, while the mean error of SCANORA spiral tomography was 0.06 mm in the vertical direction and -0.12 mm in the horizontal direction. There was a significant difference between the two radiographic techniques in the horizontal measurement of the ZPC column in the symphysis region (p<0.05), but no significant difference in the measurements of the other regions (p>0.05). 2. In measurements of the distance from the alveolar crest to the inferior border of the mandible (H), and of the distance from the alveolar crest to the superior border of the mandibular canal (Y), there was no significant difference between the two radiographic techniques (p>0.05). 3.
In measurements of the distance from the lingual border of the mandible to the buccal border of the mandible (W), and of the distance from the lingual border of the mandible to the lingual border of the mandibular canal (X), there was a significant difference between the two radiographic techniques in measurements of the midportion between the mental foramen and the mandibular foramen (M2) (p<0.05), but no significant differences in measurements of the other regions: the symphysis (S), the mental foramen (Me), and the first one-fourth portion between the mental foramen and the mandibular foramen (M1) (p>0.05). 4. Considering the mean range of measurements between observers, the measurements of SCANORA spiral tomography showed higher values than those of DentaScan computed tomography, except in measurements of the symphysis (S). 5. On the detectability of the mandibular canal, there was no significant difference between the two radiographic techniques (p>0.05). In conclusion, SCANORA spiral tomography demonstrated a higher interobserver variance than DentaScan computed tomography for implant site measurements in the posterior edentulous area of the mandible. These differences were mainly the result of difficulty in detecting the border of the mandible in SCANORA spiral tomography. But considering the cost and radiation exposure, SCANORA spiral tomography can be considered a relatively good radiographic technique for implant site measurement.


A Study of Six Sigma and Total Error Allowable in Chematology Laboratory (6 시그마와 총 오차 허용범위의 개발에 대한 연구)

  • Chang, Sang-Wu;Kim, Nam-Yong;Choi, Ho-Sung;Kim, Yong-Whan;Chu, Kyung-Bok;Jung, Hae-Jin;Park, Byong-Ok
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.37 no.2
    • /
    • pp.65-70
    • /
    • 2005
  • The specifications of the CLIA analytical tolerance limits are consistent with the performance goals of Six Sigma Quality Management. Six sigma analysis determines performance quality from bias and precision statistics, and shows whether a method meets the criteria for six sigma performance. Performance standards calculate the allowable total error from several different criteria. Six sigma means six standard deviations from the target or mean value, and about 3.4 failures per million opportunities for failure. The Sigma Quality Level is an indicator of process centering and process variation within the allowable total error. The tolerance specification is replaced by a total error specification, a common form of quality specification for a laboratory test. The CLIA criteria for acceptable performance in proficiency testing events are given in the form of an allowable total error, TEa; thus there is a published list of TEa specifications for regulated analytes. In terms of TEa, Six Sigma Quality Management sets a precision goal of TEa/6 and an accuracy goal of 1.5(TEa/6). This concept is based on the proficiency testing specification of target value +/-3s, TEa from reference intervals, biological variation, and peer group median surveys. We found rules to calculate TEa as a fraction of a reference interval and from peer group median surveys. We studied the development of the allowable total error from peer group survey results and the US CLIA '88 rules for 19 chematology tests: TP, ALB, T.B, ALP, AST, ALT, CL, LD, K, Na, CRE, BUN, T.C, GLU, GGT, CA, phosphorus, UA, and TG. The results were as follows.
The sigma level versus TEa, based on the peer group median CV of each item, was assessed by process performance. The results, fitting within six sigma tolerance limits, were TP ($6.1{\delta}$/9.3%), ALB ($6.9{\delta}$/11.3%), T.B ($3.4{\delta}$/25.6%), ALP ($6.8{\delta}$/31.5%), AST ($4.5{\delta}$/16.8%), ALT ($1.6{\delta}$/19.3%), CL ($4.6{\delta}$/8.4%), LD ($11.5{\delta}$/20.07%), K ($2.5{\delta}$/0.39mmol/L), Na ($3.6{\delta}$/6.87mmol/L), CRE ($9.9{\delta}$/21.8%), BUN ($4.3{\delta}$/13.3%), UA ($5.9{\delta}$/11.5%), T.C ($2.2{\delta}$/10.7%), GLU ($4.8{\delta}$/10.2%), GGT ($7.5{\delta}$/27.3%), CA ($5.5{\delta}$/0.87mmol/L), IP ($8.5{\delta}$/13.17%), and TG ($9.6{\delta}$/17.7%). The peer group survey median CVs in the Korean External Assessment that were greater than the CLIA criteria were CL (8.45%/5%), BUN (13.3%/9%), CRE (21.8%/15%), T.B (25.6%/20%), and Na (6.87mmol/L/4mmol/L). Those less than the CLIA criteria were TP (9.3%/10%), AST (16.8%/20%), ALT (19.3%/20%), K (0.39mmol/L/0.5mmol/L), UA (11.5%/17%), Ca (0.87 mg/dL/1 mg/dL), and TG (17.7%/25%). TEa was the same for 14 of the 17 items (82.35%). We found that the sigma level increases as the allowable total error increases, and we are confident that the goal set for the allowable total error affects the evaluation of the sigma metrics of a process, provided the process itself stays the same.
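The relationship between TEa, bias, CV and the sigma level summarized above can be illustrated with a short sketch. The numeric inputs below are hypothetical and are not taken from the study's data:

```python
def sigma_metric(tea: float, bias: float, cv: float) -> float:
    """Sigma level of a method: (allowable total error - |bias|) / CV, all in %."""
    return (tea - abs(bias)) / cv

def precision_goal(tea: float) -> float:
    """Six Sigma precision goal of TEa/6, as stated in the abstract."""
    return tea / 6

def accuracy_goal(tea: float) -> float:
    """Six Sigma accuracy (bias) goal of 1.5 * (TEa/6)."""
    return 1.5 * (tea / 6)

# Hypothetical example: TEa = 10%, bias = 1%, CV = 1.5%
level = sigma_metric(10.0, 1.0, 1.5)   # -> 6.0, i.e. six sigma performance
```

This makes concrete the abstract's closing observation: for fixed bias and CV, raising TEa raises the computed sigma level even though the underlying process is unchanged.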


Establishment of an Analytical Method for Prometryn Residues in Clam Using GC-MS (GC-MS를 이용한 바지락 중 prometryn 잔류분석법 확립)

  • Chae, Young-Sik;Cho, Yoon-Jae;Jang, Kyung-Joo;Kim, Jae-Young;Lee, Sang-Mok;Chang, Moon-Ik
    • Korean Journal of Food Science and Technology
    • /
    • v.45 no.5
    • /
    • pp.531-536
    • /
    • 2013
  • We developed a simple, sensitive, and specific analytical method for prometryn using gas chromatography-mass spectrometry (GC-MS). Prometryn is a selective herbicide used for the control of annual grasses and broadleaf weeds in cotton and celery crops. On the basis of high specificity, sensitivity, and reproducibility, combined with simple analytical operation, we propose that our newly developed method is suitable for use as a Ministry of Food and Drug Safety (MFDS, Korea) official method in the routine analysis of individual pesticide residues. Further, the method is applicable in clams. The separation condition for GC-MS was optimized by using a DB-5MS capillary column ($30m{\times}0.25mm$, 0.25 ${\mu}m$) with helium as the carrier gas, at a flow rate of 0.9 mL/min. We achieved high linearity over the concentration range 0.02-0.5 mg/L (correlation coefficient, $r^2$ >0.998). Our method is specific and sensitive, and has a quantitation limit of 0.04 mg/kg. The average recovery in clams ranged from 84.0% to 98.0%. The reproducibility of measurements expressed as the coefficient of variation (CV%) ranged from 3.0% to 7.1%. Our analytical procedure showed high accuracy and acceptable sensitivity regarding the analytical requirements for prometryn in fishery products. Finally, we successfully applied our method to the determination of residue levels in fishery products, and showed that none of the analyzed samples contained detectable amounts of residues.
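The validation statistics reported above (percent recovery and the coefficient of variation) follow standard definitions, sketched below with hypothetical numbers rather than the study's measurements:

```python
import statistics

def recovery_percent(measured: float, spiked: float) -> float:
    """Recovery (%) = measured concentration / spiked concentration * 100."""
    return measured / spiked * 100

def cv_percent(values: list[float]) -> float:
    """Coefficient of variation (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical replicate measurements (mg/kg) from a 0.1 mg/kg spike:
replicates = [0.084, 0.088, 0.091, 0.086, 0.090]
recoveries = [recovery_percent(m, 0.1) for m in replicates]
```

A method validation of this kind would report the mean of `recoveries` against the acceptance range and `cv_percent(replicates)` as the reproducibility figure.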

Analysis of dosimetric leaf gap variation on dose rate variation for dynamic IMRT (동적 세기조절방사선 치료 시 선량률 변화에 따른 선량학적엽간격 변화 분석)

  • Yang, Myung Sic;Park, Ju Kyeong;Lee, Seung Hun;Kim, Yang Su;Lee, Sun Young;Cha, Seok Yong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.28 no.1
    • /
    • pp.47-55
    • /
    • 2016
  • This study evaluated the positional accuracy of the MLC by analyzing the variations of the dosimetric leaf gap (DLG) and the MLC transmission factor, which reflect the location of the MLC leaves, according to dose rate variation in dynamic IMRT. We used the 6 MV and 10 MV X-ray beams from a linear accelerator with a Millennium 120 MLC system. We measured the variation of the DLG and MLC transmission factor at a depth of 10 cm in a water phantom by varying the dose rate to 200, 300, 400, 500 and 600 MU/min, using CC13 and FC-65G chambers. For the 6 MV X-ray beam, relative to a baseline dose rate of 400 MU/min, the difference rates at 200, 300, 400, 500 and 600 MU/min were -2.59, -1.89, 0.00, -0.58 and -2.89%, respectively. For the 10 MV X-ray beam, the difference rates were -2.52, -1.69, 0.00, +1.28 and -1.98%, respectively. The difference in the MLC transmission factor was within about ${\pm}1%$ of the measured values for both energies and all dose rates. This study evaluated the variation of the DLG and MLC transmission factor with dose rate in dynamic IMRT. The difference in the MLC transmission factor according to dose rate variation is negligible, but the difference in the DLG was found to be large. Therefore, arbitrarily changing the dose rate during dynamic IMRT may significantly affect the dose delivered to the tumor; keeping the dose rate unchanged during dynamic IMRT is thought to yield more accurate radiation therapy.
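The difference rates quoted in this abstract are percent deviations from the 400 MU/min baseline. A minimal sketch of that calculation, using hypothetical DLG values in millimetres (not the study's measurements):

```python
def difference_rates(dlg_by_dose_rate: dict, reference: int = 400) -> dict:
    """Percent difference of each DLG value from the value at the reference dose rate."""
    ref = dlg_by_dose_rate[reference]
    return {dr: round((v - ref) / ref * 100, 2)
            for dr, v in dlg_by_dose_rate.items()}

# Hypothetical 6 MV DLG measurements (mm) keyed by dose rate (MU/min):
dlg_6mv = {200: 1.90, 300: 1.92, 400: 1.95, 500: 1.94, 600: 1.89}
rates = difference_rates(dlg_6mv)      # the 400 MU/min entry is 0.0 by definition
```

Expressing each measurement relative to the clinically commissioned dose rate, as done here, is what lets the study conclude that the transmission factor is stable while the DLG is not.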


A Multimodal Profile Ensemble Approach to Development of Recommender Systems Using Big Data (빅데이터 기반 추천시스템 구현을 위한 다중 프로파일 앙상블 기법)

  • Kim, Minjeong;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.93-110
    • /
    • 2015
  • A recommender system is a system which recommends products to customers who are likely to be interested in them. Based on automated information filtering technology, various recommender systems have been developed. Collaborative filtering (CF), one of the most successful recommendation algorithms, has been applied in a number of different domains such as recommending Web pages, books, movies, music and products. But CF has a known critical shortcoming. CF finds neighbors whose preferences are like those of the target customer and recommends the products those customers have liked most. Thus, CF works properly only when there is a sufficient number of ratings on common products from customers. When customer ratings are in short supply, neighborhood formation becomes inaccurate, resulting in poor recommendations. To improve the performance of CF-based recommender systems, most related studies have focused on the development of novel algorithms under the assumption of a single profile, created from users' rating information for items, purchase transactions, or Web access logs. With the advent of big data, companies have come to collect more data and to use a greater variety of large-scale information. Many companies recognize the importance of utilizing big data because it enables them to improve their competitiveness and create new value. In particular, the use of personal big data in recommender systems is on the rise, because personal big data facilitate more accurate identification of the preferences and behaviors of users. The proposed recommendation methodology is as follows: First, multimodal user profiles are created from personal big data in order to grasp the preferences and behavior of users from various viewpoints. We derive five user profiles based on personal information such as ratings, site preferences, demographics, Internet usage, and topics in text.
Next, the similarity between users is calculated based on the profiles, and each user's neighbors are found from the results. One of three ensemble approaches is applied to calculate the similarity: the similarity of the combined profile, the average similarity of each profile, or the weighted average similarity of each profile. Finally, the products that people in the neighborhood prefer most are recommended to the target users. For the experiments, we used demographic data and a very large volume of Web log transactions for 5,000 panel users of a company specialized in analyzing the ranks of Web sites. R and SAS E-Miner were used, respectively, to implement the proposed recommender system and to conduct the topic analysis using keyword search. To evaluate recommendation performance, we used 60% of the data for training and 40% for testing. 5-fold cross validation was also conducted to enhance the reliability of our experiments. A widely used combination metric, the F1 metric, which gives equal weight to both recall and precision, was employed for our evaluation. As the results of the evaluation show, the proposed methodology achieved a significant improvement over the single-profile-based CF algorithm. In particular, the ensemble approach using weighted average similarity shows the highest performance: the rate of improvement in F1 is 16.9 percent for the ensemble approach using weighted average similarity and 8.1 percent for the ensemble approach using the average similarity of each profile. From these results, we conclude that the multimodal profile ensemble approach is a viable solution to the problems encountered when there is a shortage of customer ratings. This study has significance in suggesting what kind of information can be used to create profiles in a big data environment and how they can be combined and utilized effectively.
However, our methodology should be studied further before real-world application. We need to compare the differences in recommendation accuracy when applying the proposed method to different recommendation algorithms, and then identify which combination of them shows the best performance.
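The three ensemble options named in this abstract (combined-profile similarity, per-profile average, and per-profile weighted average) can be sketched as follows. Cosine similarity and the toy profiles are illustrative assumptions, since the abstract does not specify the similarity measure used:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two profile vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def combined_similarity(profiles_u, profiles_v) -> float:
    """Option 1: similarity of the concatenated (combined) profile."""
    return cosine(np.concatenate(profiles_u), np.concatenate(profiles_v))

def average_similarity(profiles_u, profiles_v) -> float:
    """Option 2: unweighted average of per-profile similarities."""
    return float(np.mean([cosine(a, b) for a, b in zip(profiles_u, profiles_v)]))

def weighted_similarity(profiles_u, profiles_v, weights) -> float:
    """Option 3: weighted average of per-profile similarities."""
    sims = [cosine(a, b) for a, b in zip(profiles_u, profiles_v)]
    return float(np.average(sims, weights=weights))
```

In this framing, each user carries a list of profile vectors (rating, site preference, demographic, Internet usage, topic), and the study's best-performing variant corresponds to `weighted_similarity` with learned or tuned weights.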

Improvement of analytical methods for arsenic in soil using ICP-AES (ICP-AES를 이용한 토양 시료 중 비소 분석 방법 개선)

  • Lee, Hong-gil;Kim, Ji In;Kim, Rog-young;Ko, Hyungwook;Kim, Tae Seung;Yoon, Jeong Ki
    • Analytical Science and Technology
    • /
    • v.28 no.6
    • /
    • pp.409-416
    • /
    • 2015
  • ICP-AES has been used in many laboratories owing to the advantages of a wide calibration range and multi-element analysis, but it may give erroneous results and suffer from spectral interference due to the large number of emission lines associated with each element. In this study, certified reference materials (CRMs) and field samples were analyzed by ICP-AES and HG-AAS according to the official Korean testing method for soil pollution, to investigate analytical problems. The applicability of HG-ICP-AES was also tested as an alternative method. HG-AAS showed good accuracies (90.8~106.3%) for all CRMs, while ICP-AES deviated from the desired range for CRMs with low arsenic and high Fe/Al. The accuracy for CRM030 was estimated at below 39% at the 193.696 nm wavelength by ICP-AES. Significant partial overlaps and sloping background interferences were observed near 193.696 nm in the presence of 50 mg/L Fe and Al. Most CRMs were quantified with few or no interferences from Fe and Al at 188.980 nm. ICP-AES properly assessed low- and high-level arsenic in field samples at 188.980 nm and 193.696 nm, respectively. The importance of choosing measurement wavelengths appropriate to the relative arsenic level should be noted. Because interferences were affected by the sample matrix, operating conditions and instrument configuration, analysts are required to consider spectral interferences and compare the analytical performance of the recommended wavelengths. HG-ICP-AES was evaluated as a suitable alternative to ICP-AES owing to its improved detection limit, wide calibration range, and the reduced spectral interference afforded by hydride generation (HG).

A Study on Horizontal Reference Planes in Lateral Cephalogram in Korean Children (한국 아동의 측모두부 수평 기준선에 관한 연구)

  • Kim, Kyung-Ho;Choy, Kwang-Chul;Lee, Ji-Yeon
    • The korean journal of orthodontics
    • /
    • v.29 no.2 s.73
    • /
    • pp.251-265
    • /
    • 1999
  • Various types of horizontal reference planes are used for diagnosis, treatment planning and evaluation of treatment results. But these reference planes lack accuracy and reproducibility, and are defined mainly for Caucasians. Unlike adult patients who have completed growth, the horizontal reference planes of growing children may change continuously during growth; this must be considered in selecting a horizontal reference plane. The purpose of this study was to investigate the angle formed by the Sella-Nasion (SN) plane and the Frankfort Horizontal (FH) plane, and to evaluate the angles formed by the FH plane and other horizontal reference planes in relation to skeletal maturity and malocclusion type. 540 subjects with no orthodontic treatment history were chosen, and hand-wrist X-rays and lateral cephalometric X-rays were taken. According to the SMA (Skeletal Maturity Assessment) of the hand-wrist X-rays, the subjects were classified into 3 skeletal maturity groups: SMI 1-4 for group A, SMI 5-7 for group B and SMI 8-11 for group C. A second classification was made according to cephalometric analysis of the lateral cephalograms: the subjects were classified into Skeletal Class I, II and III malocclusion groups. 10 measurements were evaluated. The results were as follows. 1. The angle formed by the SN plane and FH plane showed no difference among skeletal maturity groups, among malocclusion groups, or between sexes. 2. The angles formed by the SN plane and FH plane were $8.27^{\circ}{\pm}2.31^{\circ}$ for males and $8.59^{\circ}{\pm}2.24^{\circ}$ for females; the average over both sexes was $8.42^{\circ}{\pm}2.28^{\circ}$. 3. The angle formed by the FH plane and the palatal plane was almost constant, showing no difference among skeletal maturity groups, malocclusion groups, or between sexes ($1.09^{\circ}{\pm}3.21^{\circ}$).


Usefulness of High-B-value Diffusion - Weighted MR Imaging for the Pre-operative Detection of Rectal Cancers (B-values 변환 자기공명영상: 국소 직장암 수술 전 검출을 위한 적합한 b-value 유용성)

  • Lee, Jae-Seung;Goo, Eun-Hoe;Lee, Sun-Yeob;Park, Cheol-Soo;Choi, Ji-Won
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.12
    • /
    • pp.683-690
    • /
    • 2009
  • The purpose of this study is to evaluate the usefulness of high-b-value diffusion weighted magnetic resonance imaging for the preoperative detection of focal rectal cancers. 60 patients who underwent diffusion weighted imaging were evaluated for the presence of rectal cancers: forty were male and twenty were female, and their ages ranged from 38 to 71 (mean, 56) years. The equipment used was a 1.5 Tesla MRI scanner (GE, General Electric Medical System, Excite HD). The examination protocols used fast spin echo T2- and T1-weighted imaging. All protocols were performed at the same location as the diffusion weighted imaging for accurate detection. The b-values used in DWI were 250, 500, 750, 1000, 1500 and 2000 $(s/mm^2)$. The rectum-to-tumor and bladder-to-tumor contrast-to-noise ratios (CNR) of the MR images were quantitatively analyzed using the GE Functool software; four experienced radiologists and three radiotechnologists qualitatively evaluated image quality in terms of image artifacts, lesion conspicuity and delineation of the rectal wall. These data were analysed using ANOVA and the Friedman test for each b-value (p<0.05). The rectum-to-tumor and bladder-to-tumor CNRs at b-value 1000 were 27.21 and 24.44, respectively (p<0.05), and the average ADC value was $0.73\times10^{-3}$. In the qualitative analysis, the conspicuity of lesions and their discrimination from the rectal wall scored highly at $4.0\pm0.14$ and $4.4\pm0.16$ at b-value 1000 (p<0.05), while image artifacts scored highest at $4.8\pm0.25$ at b-value 2000 (p<0.05). In conclusion, DWI provided useful information for the preoperative detection of rectal cancers, and the high-b-value 1000 image gave the best DWI quality.
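The contrast-to-noise ratio used in the quantitative analysis above follows its standard definition. A minimal sketch with hypothetical signal values, not the study's measured data:

```python
def cnr(signal_a: float, signal_b: float, noise_sd: float) -> float:
    """Contrast-to-noise ratio: absolute difference of two ROI mean signals
    divided by the standard deviation of the background noise."""
    return abs(signal_a - signal_b) / noise_sd

# Hypothetical ROI means (tumor vs. rectum) and a background noise SD of 10:
tumor_vs_rectum = cnr(372.1, 100.0, 10.0)   # roughly 27.21
```

Comparing such CNR values across b-values (here 250 through 2000 s/mm2) is what allows a study like this to identify the b-value with the best lesion contrast.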