• Title/Summary/Keyword: 시간영역 계산 (time-domain computation)

Temperature change and performance of bur efficiency for two different drill combinations (두 가지 임플란트 드릴 조합에 따른 온도 변화 및 효율 비교)

  • Hwang-Bo, Heung;Park, Jae-Young;Lee, Sang-Youn;Son, Keunbada;Lee, Kyu-Bok
    • The Journal of Korean Academy of Prosthodontics / v.60 no.2 / pp.143-151 / 2022
  • Purpose. The purpose of this study was to evaluate the performance efficiency of two different drill combinations according to the heat generated and the drilling time. Materials and methods. In this study, cow ribs were used as the research material. To prepare the specimens, the cow bones were stripped of fascia and muscle, and a temperature sensor was mounted around the drilling area. The experimental groups were divided into a group using a guide drill and a group using a Lindmann drill, according to the drill used before the initial drilling. The drilling sequence of the guide drilling group was as follows: guide drill (ø 2.25), initial drill (ø 2.25), twist drill (ø 2.80), and twist drill (ø 3.20). The drilling sequence of the Lindmann drilling group was as follows: Lindmann drill (ø 2.10), initial drill (ø 2.25), twist drill (ø 2.80), and twist drill (ø 3.20). The temperature was measured after drilling. For statistical analysis, the difference between the groups was analyzed using the Mann-Whitney U test and the Friedman test (α = .05). Results. The average performance efficiency for each specimen of the guide drilling group ranged from 0.3861 to 1.1385 mm³/s and that of the Lindmann drilling group ranged from 0.1700 to 0.4199 mm³/s. The two drill combinations used a guide drill and a Lindmann drill, respectively, as their first drills. The combination using the guide drill demonstrated superior performance efficiency when calculated using the drilling time (P < .001). Conclusion. Since the guide drill group showed better performance efficiency than the Lindmann drill group, the use of the guide drill was more suitable for the primary drilling process.
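As an illustration of the efficiency metric quoted above (bone volume removed per unit drilling time, in mm³/s), here is a minimal sketch assuming the osteotomy is modelled as a cylinder of the drill diameter; the abstract does not publish its exact volume formula, so the function and the example numbers are hypothetical.

```python
import math

def removal_efficiency(diameter_mm: float, depth_mm: float, drilling_time_s: float) -> float:
    """Bulk removal rate in mm^3/s, modelling the drilled hole as a cylinder
    (an assumption; the study does not state its exact volume calculation)."""
    radius = diameter_mm / 2.0
    removed_volume = math.pi * radius ** 2 * depth_mm  # mm^3
    return removed_volume / drilling_time_s

# Hypothetical example: a ø3.20 mm drill advanced 10 mm in 80 s
print(round(removal_efficiency(3.20, 10.0, 80.0), 4))  # ≈ 1.0053 mm^3/s
```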

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration / v.2 no.1 / pp.26-32 / 1999
  • Among the various seismic data processing sequences, velocity analysis is the most time-consuming and man-hour-intensive processing step. For production seismic data processing, a good velocity analysis tool as well as a high-performance computer is required. The tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point. Generally, the plot consists of a semblance contour, a super gather, and a stack panel. The interpreter chooses the velocity function by analyzing the velocity plot. The technique is highly dependent on the interpreter's skill and requires considerable human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of the velocity nodes using a mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. Also, the velocity analysis must be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence must be repeated most of the time. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. An interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes yield the final stack via a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed. The index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and the refracted wave. However, it has two improvements, i.e., no interpolation error and very fast computation. With this technique, the mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words and 304,073 characters. The program references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment. The program runs in the X-Window/Motif environment, and its menu is designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
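The velocity spectrum that xva recomputes after each mute or parameter change is typically a semblance scan over hyperbolic moveout. As a rough sketch (not the Geobit/xva implementation), the semblance along t(x) = sqrt(t0² + (x/v)²) for a single (t0, v) cell could be computed as follows.

```python
import numpy as np

def semblance(gather, offsets, dt, t0, v, win=5):
    """Semblance of a CDP gather (traces x samples) along the NMO hyperbola
    t(x) = sqrt(t0^2 + (x/v)^2), averaged over a short time window."""
    ntr, ns = gather.shape
    num, den = 0.0, 0.0
    for k in range(-win, win + 1):
        stack, power = 0.0, 0.0
        for i, x in enumerate(offsets):
            t = np.sqrt(t0 ** 2 + (x / v) ** 2)
            j = int(round(t / dt)) + k
            if 0 <= j < ns:
                a = gather[i, j]
                stack += a
                power += a * a
        num += stack ** 2
        den += power
    return num / (ntr * den) if den > 0 else 0.0
```

Scanning (t0, v) over a grid of trial values yields the velocity spectrum that is then picked interactively.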

  • PDF

An Experimental Method for the Scatter Correction of MV Images Using Scatter to Primary Ratios (SPRs) (산란선 대 일차선비(SPR)를 이용한 MV 영상의 산란 보정을 위한 실험적 방법)

  • Jeon, Hosang;Park, Dahl;Lee, Jayeong;Nam, Jiho;Kim, Wontaek;Ki, Yongkan;Kim, Donghyun;Lee, Ju Hye;Kim, Dongwon
    • Progress in Medical Physics / v.25 no.3 / pp.143-150 / 2014
  • In general radiotherapy, mega-voltage (MV) x-ray images are widely used as the unique method to verify radio-therapeutic fields. However, the image quality of MV images is much lower than that of kilo-voltage x-ray images due to scatter interactions. Since the 1990s, studies on scatter correction have been performed with digital MV imaging systems. In this study, a novel method for scatter correction is suggested using the scatter-to-primary ratio (SPR), instead of conventional methods such as digital image processing or scatter kernel calculations. We measured two MV images, with and without a solid water phantom representing a patient body, under given imaging conditions, and calculated un-attenuated ratios. Then, we obtained SPR distributions for the scatter correction. For experimental validation, a line-pair (LP) phantom made of several Al bars and a clinical pelvis MV image were used. As a result, scatter signals in the LP phantom image were successfully reduced so that the original density distribution of the phantom was restored. Moreover, image contrast values increased after SPR correction at all ROIs of the clinical image; the mean increase was 48%. The SPR correction method suggested in this study is highly reliable because it is based on actually measured data. Also, this method can be easily adopted in clinics without additional cost. We expect that SPR correction can be an effective method to improve the quality of MV image-guided radiotherapy.
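Once an SPR map is known, the correction itself reduces to a pixel-wise division, since the measured signal is the primary plus SPR times the primary, i.e., total = primary × (1 + SPR). A minimal sketch of that final step (the measurement of the SPR distribution with and without the phantom is the paper's contribution and is not reproduced here):

```python
import numpy as np

def correct_scatter(measured, spr):
    """Recover the primary (unscattered) signal from an MV image given a
    pixel-wise scatter-to-primary ratio map: total = primary * (1 + SPR)."""
    return np.asarray(measured, dtype=float) / (1.0 + np.asarray(spr, dtype=float))

# Example: a pixel reading 1200 with SPR = 0.5 has a primary component of 800
print(correct_scatter([1200.0], [0.5]))  # [800.]
```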

Detection of Hepatic Lesion: Comparison of Free-Breathing and Respiratory-Triggered Diffusion-Weighted MR imaging on 1.5-T MR system (국소 간 병변의 발견: 1.5-T 자기공명영상에서의 자유호흡과 호흡유발 확산강조 영상의 비교)

  • Park, Hye-Young;Cho, Hyeon-Je;Kim, Eun-Mi;Hur, Gham;Kim, Yong-Hoon;Lee, Byung-Hoon
    • Investigative Magnetic Resonance Imaging / v.15 no.1 / pp.22-31 / 2011
  • Purpose: To compare free-breathing and respiratory-triggered diffusion-weighted imaging on a 1.5-T MR system in the detection of hepatic lesions. Materials and Methods: This single-institution study was approved by our institutional review board. Forty-seven patients (mean age, 57.9 years; M:F = 25:22) underwent hepatic MR imaging on a 1.5-T MR system using both free-breathing and respiratory-triggered diffusion-weighted imaging (DWI) in a single examination. Two radiologists retrospectively reviewed the respiratory-triggered and free-breathing sets (B50, B400, and B800 diffusion-weighted images and the ADC map) in random order with a time interval of 2 weeks. Liver SNR and lesion-to-liver CNR of DWI were calculated from ROI measurements. Results: A total of 62 lesions (53 benign, 9 malignant), comprising 32 cysts, 13 hemangiomas, 7 hepatocellular carcinomas (HCCs), 5 eosinophilic infiltrations, 2 metastases, 1 eosinophilic abscess, 1 focal nodular hyperplasia, and 1 pseudolipoma of Glisson's capsule, were reviewed by the two reviewers. Though not reaching statistical significance, the overall lesion sensitivities were higher with respiratory-triggered DWI [reviewer 1 : reviewer 2, 47/62 (75.81%) : 45/62 (72.58%)] than with free-breathing DWI [44/62 (70.97%) : 41/62 (66.13%)]. Especially for hepatic lesions smaller than 1 cm, the sensitivity of respiratory-triggered DWI [24/30 (80%) : 21/30 (70%)] was superior to that of free-breathing DWI [17/30 (56.7%) : 15/30 (50%)]. The diagnostic accuracy measured by the area under the ROC curve (Az value) was not statistically different between free-breathing and respiratory-triggered DWI. Liver SNR and lesion-to-liver CNR of respiratory-triggered DWI (87.6 ± 41.4, 41.2 ± 62.5) were higher than those of free-breathing DWI (38.8 ± 13.6, 24.8 ± 36.8) (p < 0.001 for both). Conclusion: Respiratory-triggered diffusion-weighted MR imaging seems better than free-breathing diffusion-weighted MR imaging on a 1.5-T MR system for the detection of lesions smaller than 1 cm, by providing higher SNR and CNR.
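The abstract does not spell out its SNR/CNR formulas; the following is a minimal sketch under one common convention (ROI means normalised by the standard deviation of a background ROI), with placeholder ROI names.

```python
import numpy as np

def liver_snr(liver_roi, background_roi):
    """SNR as the mean liver signal over the standard deviation of a
    background ROI (one common convention, assumed here)."""
    return np.mean(liver_roi) / np.std(background_roi)

def lesion_to_liver_cnr(lesion_roi, liver_roi, background_roi):
    """CNR as the lesion-liver signal difference normalised by background noise."""
    return abs(np.mean(lesion_roi) - np.mean(liver_roi)) / np.std(background_roi)
```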

An Assessment of the Accuracy of 3 Dimensional Acquisition in F-18 fluorodeoxyglucose Brain PET Imaging (3차원 데이터획득 뇌 FDG-PET의 정확도 평가)

  • Lee, Jeong-Rim;Choi, Yong;Kim, Sang-Eun;Lee, Kyung-Han;Kim, Byung-Tae;Choi, Chang-Woon;Lim, Sang-Moo;Hong, Seong-Wun
    • The Korean Journal of Nuclear Medicine / v.33 no.3 / pp.327-336 / 1999
  • Purpose: To assess the quantitative accuracy and clinical utility of 3D volumetric PET imaging with FDG in brain studies, 24 patients with various neurological disorders were studied. Materials and Methods: Each patient was injected with 370 MBq of 2-[¹⁸F]fluoro-2-deoxy-D-glucose. After a 30-min uptake period, the patients were imaged for 30 min in 2-dimensional acquisition mode (2D) and subsequently for 10 min in 3-dimensional acquisition mode (3D) using a GE Advance™ PET system. The scatter-corrected 3D (3D SC) and non-scatter-corrected 3D images were compared with the 2D images by applying ROIs to gray and white matter and to lesion and contralateral normal areas. Measured and calculated attenuation correction methods for the emission images were compared in order to take full advantage of the high sensitivity of 3D acquisition. Results: When normalized to the contrast of the 2D images, the gray-to-white-matter contrasts were 0.75 ± 0.13 (3D) and 0.95 ± 0.12 (3D SC). The contrasts of normal area to lesion were 0.83 ± 0.05 (3D) and 0.96 ± 0.05 (3D SC). Three nuclear medicine physicians judged the 3D SC images to be superior to the 2D images with regard to resolution and noise. Regional counts with calculated attenuation correction were not significantly different from those with measured attenuation correction. Conclusion: 3D PET images with scatter correction in FDG brain studies provide quantitatively and qualitatively similar images to 2D and can be used in routine clinical settings to reduce scanning time and patient motion artifacts.
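The normalised contrasts quoted above (e.g., 0.95 for scatter-corrected 3D) are ratios to the 2D contrast. A minimal sketch under one common ROI-contrast definition, which the abstract does not state explicitly:

```python
def roi_contrast(gray_mean, white_mean):
    """ROI contrast as (gray - white) / (gray + white); the study's exact
    definition is not given in the abstract, so this is an assumption."""
    return (gray_mean - white_mean) / (gray_mean + white_mean)

def normalized_to_2d(contrast_3d, contrast_2d):
    """Contrast of a 3D image expressed relative to the 2D acquisition."""
    return contrast_3d / contrast_2d
```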


Dynamics of Barrel-Shaped Young Supernova Remnants (항아리 형태 젊은 초신성 잔해의 동력학)

  • Choe, Seung-Urn;Jung, Hyun-Chul
    • Journal of the Korean Earth Science Society / v.23 no.4 / pp.357-368 / 2002
  • In this study we have tried to explain the barrel-shaped morphology of young supernova remnants by considering the dynamical effects of the ejecta. We consider the magnetic field amplification resulting from the Rayleigh-Taylor instability near the contact discontinuity. We generate a synthetic radio image assuming the cosmic-ray pressure and calculate the azimuthal intensity ratio (A) to enable a quantitative comparison with observations. The postshock magnetic field is amplified by shearing, stretching, and compression at the Rayleigh-Taylor finger boundary. The evolution of the instability strongly depends on the deceleration of the ejecta and the evolutionary stage of the remnant. The strength of the magnetic field increases in the initial phase and decreases after the reverse shock passes the constant-density region of the ejecta. However, some memory of the earlier phases of amplification is retained in the interior even when the outer regions turn into a blast wave. The ratio of the averaged magnetic field strength at the equator to that at the pole in the turbulent region can reach 7.5 at its peak. The magnetic field amplification can produce a large azimuthal intensity ratio (A = 15). The magnitude of the amplification is sensitive to numerical resolution. This means that magnetic field amplification can explain the barrel-shaped morphology of young supernova remnants without requiring the efficiency of cosmic-ray acceleration to depend on the magnetic field configuration. For this mechanism to be effective, the surrounding magnetic field must be well ordered. The small number of barrel-shaped remnants may indicate that this condition rarely occurs.
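The azimuthal intensity ratio A used for the comparison with observations can be estimated from a synthetic radio image by binning a radial shell into azimuthal sectors. The sketch below assumes A is the ratio of the brightest to the faintest sector mean; the abstract does not define A precisely, so this is only an illustrative convention.

```python
import numpy as np

def azimuthal_intensity_ratio(image, center, r_in, r_out, nbins=36):
    """Bin a synthetic radio image into azimuthal sectors within a radial
    shell and return max(sector mean) / min(sector mean)."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    dx, dy = x - center[0], y - center[1]
    r, phi = np.hypot(dx, dy), np.arctan2(dy, dx)
    mask = (r >= r_in) & (r <= r_out)
    bins = ((phi[mask] + np.pi) / (2 * np.pi) * nbins).astype(int) % nbins
    means = np.array([image[mask][bins == b].mean()
                      for b in range(nbins) if np.any(bins == b)])
    return means.max() / means.min()
```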

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.63-77 / 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck. In fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order. Also, current search tools cannot retrieve documents related to a retrieved document from a gigantic amount of documents. The most important problem for many current search systems is to increase the quality of search, i.e., to provide related documents and to keep the number of unrelated documents in the results as low as possible. For this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of the articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this approach, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles. A citation index indexes the citations that an article makes, linking the articles with the cited works. Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independently of language and of the words in the title, keywords, or document. A citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). However, CiteSeer cannot index links between articles that researchers do not make, because it only indexes the links researchers make when they cite other articles. For the same reason, CiteSeer does not scale easily. All these problems motivate the design of a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence in a document. Each document is converted into a tabular form in which the extracted predicates are checked against possible subjects and objects. We build a hierarchical graph of a document using this table and then integrate the graphs of the documents. From the graph of the entire document set, the area of each document is calculated relative to the integrated documents, and relations among documents are marked by comparing these areas. The paper also proposes a method for the structural integration of documents that retrieves documents from the graph, making it easier for the user to find information. We compared the performance of the proposed approach with the Lucene search engine using ranking formulas. As a result, the F-measure is about 60%, an improvement of about 15%.
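For reference, the F-measure reported above is the standard harmonic mean of precision and recall. A small sketch with hypothetical retrieval counts chosen to land near the quoted ~60%:

```python
def f_measure(relevant_retrieved: int, retrieved: int, relevant: int, beta: float = 1.0) -> float:
    """Standard F-measure from precision and recall."""
    precision = relevant_retrieved / retrieved
    recall = relevant_retrieved / relevant
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical counts: 45 relevant documents among 70 returned, 75 relevant overall
print(round(f_measure(45, 70, 75), 3))  # ≈ 0.621, i.e. roughly the reported ~60%
```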

The Development of Self-Directed CAI Using Web - The main theme is the figure part of mathematics - (웹을 이용한 자기 주도적 CAI 개발 - 수학과 도형영역 중심 -)

  • Kang, Seak;Ko, Byung-Oh
    • Journal of The Korean Association of Information Education / v.5 no.1 / pp.33-45 / 2001
  • In order to adapt ourselves to the information society of the twenty-first century, the ability to quickly find necessary information and to solve one's own problems is required. In schools, education should develop learners' ability to cope with this information society. When learners can study in this way, they can plan their own learning as the subjects of their education and develop their ability to solve problems by collecting and examining various information. Self-directed learning makes such education possible. Through computers, and especially the Web, self-directed learning can develop the individuality and creativity of learners; they can autonomously collect and utilize information and knowledge. To support such education, a program that enables self-directed learning is needed. Therefore, the program developed here reconstructs the 'figure' part of elementary school mathematics into five steps using the Web. The first step is to learn the concepts of various shapes. This step enables learners to know what figures are and how they are used in real life. The second step, on dots, lines, and angles, consolidates the foundation for the study of figures and helps learners recognize the relation between angles and figures. In the third step, on plane figures, learners study how to calculate the relations among plane figures and the areas of figures of various shapes by cutting and adding them. The fourth step is about congruence and symmetry; learners study figures under congruence, reduction, and enlargement and how these are used in real life. In the fifth step, on solid figures, learners study the relations among plane figures, solid figures, bodies of revolution, cones, pyramids, etc. Because this program runs on the Web, it is possible to learn anytime and anywhere. In addition, a learner can learn beyond his or her grade level and achieve complete learning by controlling the pace of learning according to his or her own ability. In the process, learners can also develop their capacity for self-directed learning by solving problems on their own.


Petrochemical Study on the Volcanic Rocks Related to Depth to the Benioff Zone and Crustal Thickness in the Kyongsang Basin, Korea: A Review (경상분지 화산암류의 지화학적 연구. 섭입대(베니오프대)의 깊이와 지각의 두께)

  • Jong Gyu Sung
    • Economic and Environmental Geology / v.32 no.4 / pp.323-337 / 1999
  • Late Cretaceous to early Tertiary volcanic rocks in the Kyongsang basin exhibit high-K calc-alkaline characteristics and originated from magmatism genetically related to the subduction of the Kula-Pacific plate. They show the HFSE depletion and LILE enrichment characteristic of subduction-related magmas. Early estimates of the depth of magma generation, 180-230 km based on the K-h relation, should be reevaluated, because the depth of peridotite partial melting with 0.4 wt.% water is 80-120 km at a subduction zone, and the subducting slab in an immature arc can melt at depths even shallower than 70 km. Moreover, the increase in potassium content depends either on contamination by crustal material and fluids from the subducting slab or on a low degree of partial melting. If the inclination of the subduction zone is 30 degrees and the depth to the Benioff zone is 180-230 km, the calculated distance between the volcanic zone and the trench axis would be 310-400 km. This is unlikely, because the distance between the Kyongsang basin and the trench during the late Cretaceous to early Tertiary was shorter than this value and is not compatible with generally accepted models of subduction zone magmatism. K₅₅ of the volcanics in the Kyongsang basin is 0.3-2.3 wt.%, and the average indicates a depth of 80-170 km on the diagram of Marsh and Carmichael (1974). Fractionation from garnet lherzolite, assuming a depth of 180-230 km, is not consistent with the REE patterns of the volcanics in the Kyongsang basin. Furthermore, the depth ranges suggested by many workers who have studied subduction-related magmatism are shallower than this. The crustal thickness calculated from the CaO and Na₂O contents is about 30 km and about 35 km, respectively. The paleo-crustal thickness of the Kyongsang basin during late Cretaceous to early Tertiary times is inferred to be about 30 km from La/Sm versus La/Yb data, which is also supported by many previous studies.
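The 310-400 km figure follows from simple slab geometry: for a planar slab dipping at 30 degrees, the horizontal distance from the trench axis to the point above a given Benioff-zone depth is depth / tan(dip). A quick check of the numbers quoted in the abstract:

```python
import math

def trench_to_arc_distance(benioff_depth_km: float, dip_deg: float) -> float:
    """Horizontal trench-to-volcanic-front distance for a planar slab:
    distance = depth / tan(dip)."""
    return benioff_depth_km / math.tan(math.radians(dip_deg))

print(round(trench_to_arc_distance(180, 30)))  # ≈ 312 km
print(round(trench_to_arc_distance(230, 30)))  # ≈ 398 km -> the 310-400 km quoted above
```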


Introduction of GOCI-II Atmospheric Correction Algorithm and Its Initial Validations (GOCI-II 대기보정 알고리즘의 소개 및 초기단계 검증 결과)

  • Ahn, Jae-Hyun;Kim, Kwang-Seok;Lee, Eun-Kyung;Bae, Su-Jung;Lee, Kyeong-Sang;Moon, Jeong-Eon;Han, Tai-Hyun;Park, Young-Je
    • Korean Journal of Remote Sensing / v.37 no.5_2 / pp.1259-1268 / 2021
  • The 2nd Geostationary Ocean Color Imager (GOCI-II) is the successor to the Geostationary Ocean Color Imager (GOCI); it employs one near-ultraviolet wavelength (380 nm), eight visible wavelengths (412, 443, 490, 510, 555, 620, 660, 680 nm), and three near-infrared wavelengths (709, 745, 865 nm) to observe the marine environment in Northeast Asia, including the Korean Peninsula. However, the multispectral radiance image observed at satellite altitude includes both the water-leaving radiance and the atmospheric path radiance. Therefore, an atmospheric correction process that estimates the water-leaving radiance without the path radiance is essential for analyzing the ocean environment. This manuscript describes the GOCI-II standard atmospheric correction algorithm and its initial-phase validation. The GOCI-II atmospheric correction method is theoretically based on the previous GOCI atmospheric correction and is partially improved for turbid water with GOCI-II's two additional bands, i.e., 620 and 709 nm. The match-up analysis showed an acceptable result, with mean absolute percentage errors within 5% in the blue bands. Part of the deviation over case-II waters is thought to arise from a lack of near-infrared vicarious calibration. We expect the GOCI-II atmospheric correction algorithm to be improved and updated regularly in the GOCI-II data processing system through continuous calibration and validation activities.
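The blue-band validation figure quoted above is a mean absolute percentage error over the match-up pairs. A minimal sketch of that metric (the match-up data themselves are not reproduced here):

```python
import numpy as np

def mape(retrieved, in_situ):
    """Mean absolute percentage error between satellite retrievals and
    match-up in-situ measurements, in percent."""
    retrieved = np.asarray(retrieved, dtype=float)
    in_situ = np.asarray(in_situ, dtype=float)
    return 100.0 * np.mean(np.abs(retrieved - in_situ) / np.abs(in_situ))
```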