• Title/Summary/Keyword: Progress Measurement Model


Classification of Nasal Index in Koreans According to Sex

  • Sung-Suk Bae;Hee-Jeung Jee;Min-Gyu Park;Jeong-Hyun Lee
    • Journal of dental hygiene science
    • /
    • v.23 no.3
    • /
    • pp.193-198
    • /
    • 2023
  • Background: The nose is located at the center of the face, and its shape can indicate race, sex, and other characteristics. Research that classifies nose shape using the nasal index (NI) is in progress; however, because most of it is conducted abroad, domestic studies are needed. In this study, we used a 3D program to confirm the distribution of nose shapes among Koreans. Methods: One hundred patients (50 males and 50 females) in their 20s were evaluated (IRB approval no. DKUDH IRB 2020-01-007). Cone beam computed tomography was performed, and the Mimics ver. 22 (Materialise Co., Leuven, Belgium) 3D program was used to model each patient's skull and soft tissues in three views: coronal, sagittal, and frontal. The measured ratios were analyzed using the SPSS ver. 23.0 (IBM Co., Armonk, NY, USA) program. Results: Ten leptorrhine (long and narrow), 76 mesorrhine (moderate shape), and 14 platyrrhine (broad and short) noses were observed. Comparing the sexes, five males had the leptorrhine type, 40 the mesorrhine type, and five the platyrrhine type; among females, five had the leptorrhine type, 36 the mesorrhine type, and nine the platyrrhine type. Conclusion: This study will be helpful when performing nose-related surgeries and procedures in clinical practice and for similar studies in the future.
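
As a concrete illustration of the NI classification the abstract describes, here is a minimal Python sketch. It assumes the conventional anthropometric formula (NI = nasal width / nasal height x 100) and the commonly cited cutoffs of 70 and 85; the paper's exact thresholds may differ.

```python
# Minimal sketch of nasal index (NI) classification, assuming the
# conventional formula NI = nasal width / nasal height * 100 and the
# commonly cited cutoffs; the paper's exact thresholds may differ.

def classify_nasal_index(width_mm: float, height_mm: float) -> str:
    """Classify a nose shape from its nasal index."""
    ni = width_mm / height_mm * 100
    if ni < 70.0:
        return "leptorrhine (long and narrow)"
    elif ni < 85.0:
        return "mesorrhine (moderate shape)"
    else:
        return "platyrrhine (broad and short)"

print(classify_nasal_index(34.0, 50.0))  # NI = 68.0 -> leptorrhine
print(classify_nasal_index(38.0, 50.0))  # NI = 76.0 -> mesorrhine
```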

Link Quality Enhancement with Beamforming Using Kalman-based Motion Tracking for Maritime Communication

  • Kyeongjea Lee;Joo-Hyun Jo;Sungyoon Cho;Kiwon Kwon;Dong Ku Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.6
    • /
    • pp.1659-1674
    • /
    • 2024
  • Conventional maritime communication struggles to provide high-data-rate services for Internet of Things (IoT) devices because of the variability of maritime environments, making it challenging to ensure consistent connectivity for onboard sensors and devices. To address this, we mathematically model the maritime channel and compare it with real measurement data. Through the modeled channel, we verify the received beam gain at buoys on the ocean surface. Additionally, leveraging the modeled wave motions, we estimate future buoy angles with an Extended Kalman Filter (EKF) to design beamforming strategies that adapt to the evolving maritime environment over time, and we validate these strategies from an outage probability perspective. This work focuses on improving maritime communication by developing a dynamic model of the maritime channel and implementing a Kalman filter-based buoy motion tracking system. The system enables precise beamforming, a technique for directing communication signals more accurately, with the aim of enhancing link quality even in challenging maritime conditions such as rough seas and varying sea states. In simulations that consider realistic wave motions, we observed significant improvements in link quality due to the enhanced beamforming; these improvements are particularly notable in high sea states, where communication challenges are typically more pronounced. Beyond the technical contribution, this progress has broad implications for future maritime communication technologies, paving the way for more reliable and efficient information exchange on the seas.
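
To make the tracking step concrete, here is a minimal sketch of EKF-based angle tracking, assuming a simple constant-angular-rate state model and a hypothetical nonlinear tilt-sensor measurement; the paper's wave and sensor models are more elaborate.

```python
import numpy as np

# Minimal EKF sketch for tracking a buoy's pitch angle under wave motion.
# State model, noise levels, and the sin() measurement are assumptions.

dt = 0.1                          # sample interval [s] (assumed)
F = np.array([[1.0, dt],          # state transition: [angle, angular rate]
              [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])         # process noise (assumed)
R = np.array([[1e-2]])            # measurement noise (assumed)

def h(x):
    """Nonlinear measurement: tilt sensor reads sin(angle)."""
    return np.array([np.sin(x[0])])

def H_jac(x):
    """Jacobian of h evaluated at the current state estimate."""
    return np.array([[np.cos(x[0]), 0.0]])

x = np.array([0.0, 0.0])          # initial state estimate
P = np.eye(2)                     # initial covariance
rng = np.random.default_rng(0)

def true_angle(t):
    return 0.3 * np.sin(0.5 * t)  # synthetic wave motion

for k in range(100):
    z = np.array([np.sin(true_angle(k * dt))]) + rng.normal(0, 0.1, 1)

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q

    # Update
    Hk = H_jac(x)
    S = Hk @ P @ Hk.T + R
    K = P @ Hk.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))
    P = (np.eye(2) - K @ Hk) @ P

# The predicted angle x[0] would steer the beam toward the buoy's
# anticipated orientation at the next transmission slot.
print(f"estimated angle: {x[0]:.3f} rad")
```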

Development and Evaluation of Quality Assurance Worksheet for the Radiation Treatment Planning System (방사선치료계획 시스템의 정도관리 절차서 개발 및 유용성 평가)

  • Cho Kwang Hwan;Choi Jinho;Shin Dong Oh;Kwon Soo Il;Choi Doo Ho;Kim Yong Ho;Lee Sang Hoon
    • Progress in Medical Physics
    • /
    • v.15 no.4
    • /
    • pp.186-191
    • /
    • 2004
  • Periodic Quality Assurance (QA) of each piece of radiation-treatment-related equipment is important, but QA of the radiation treatment planning system (RTPS) remains less established in clinics than that of other equipment. This study therefore presents and tests a periodic QA program for comparing and evaluating the performance of treatment planning systems. The QA program is divided into items covering the input and output devices and the dosimetric data, and categorized into weekly, monthly, yearly, and non-periodic tasks according to the time required, frequency of error, and priority. CT images of a water-equivalent solid phantom with a heterogeneity were input into the RTPS to perform the tests. Reference data were measured with an ion chamber for 6 MV and 10 MV photon beams, and the calculated data were compared with these measurements to evaluate the accuracy of the RTPS. Most results for geometric accuracy and beam data agreed within the error criteria recommended by various advanced countries and related societies. This result can serve as a model periodic QA program in Korea, improving treatment outcomes and allowing the accuracy of an RTPS to be evaluated.
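
To make the comparison step concrete, here is a minimal sketch of checking a calculated dose against a measurement within an error criterion; the 2% tolerance is an assumed placeholder, not necessarily the criterion the paper adopted.

```python
# Minimal sketch of the calculated-vs-measured comparison step in an RTPS
# QA program; the 2% tolerance and the point doses are assumed placeholders.

def within_tolerance(calculated: float, measured: float,
                     tol_pct: float = 2.0) -> bool:
    """Return True if the relative error is within the tolerance."""
    error_pct = abs(calculated - measured) / measured * 100
    return error_pct <= tol_pct

# Hypothetical point doses [cGy] for a 6 MV beam at a reference depth.
print(within_tolerance(calculated=198.6, measured=200.0))  # True: 0.7% error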


Dead Layer Thickness and Geometry Optimization of HPGe Detector Based on Monte Carlo Simulation

  • Suah Yu;Na Hye Kwon;Young Jae Jang;Byungchae Lee;Jihyun Yu;Dong-Wook Kim;Gyu-Seok Cho;Kum-Bae Kim;Geun Beom Kim;Cheol Ha Baek;Sang Hyoun Choi
    • Progress in Medical Physics
    • /
    • v.33 no.4
    • /
    • pp.129-135
    • /
    • 2022
  • Purpose: A full-energy-peak (FEP) efficiency correction through Monte Carlo simulation is required for accurate radioactivity measurement, considering the geometrical characteristics of the detector and the sample. However, a relative deviation (RD) arises between measured and calculated efficiencies when the detector is modeled from the manufacturer's data, because the dead layer forms randomly during manufacturing. This study aims to optimize the structure of the detector by determining the dead layer thickness based on Monte Carlo simulation. Methods: The high-purity germanium (HPGe) detector used in this study was a coaxial p-type GC2518 model, and a certified reference material (CRM) was used to measure the FEP efficiency. Using the Monte Carlo N-Particle (MCNP) transport code, the FEP efficiency was calculated while the thicknesses of the outer and inner dead layers were increased in proportion to the thickness of the electrode. Results: As the outer and inner dead layer thicknesses were increased in steps of 0.1 mm and 0.1 μm, respectively, the efficiency difference decreased by 2.43% on average up to 1.0 mm and 1.0 μm and increased by 1.86% thereafter. The detector structure was therefore optimized with dead layer thicknesses of 1.0 mm and 1.0 μm. Conclusions: The effect of the dead layer on the FEP efficiency was evaluated, and excellent agreement between the measured and calculated efficiencies was confirmed, with RDs of less than 4%. This suggests that the optimized HPGe detector can be used for accurate radioactivity measurement when dismantling and disposing of medical linear accelerators.
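
A minimal sketch of the dead-layer optimization loop described above: scan candidate thicknesses, compare simulated FEP efficiencies against measured ones, and keep the thickness with the smallest mean relative deviation. `simulate_fep_efficiency` is a hypothetical stand-in for an MCNP run, and all efficiency values are placeholders.

```python
import numpy as np

# Sketch of the dead-layer scan; the gamma lines, efficiencies, and the toy
# attenuation model are placeholders, not the paper's data or MCNP physics.

energies_keV = np.array([122, 344, 662, 1173, 1332])            # CRM lines (assumed)
measured_eff = np.array([0.051, 0.028, 0.017, 0.011, 0.010])    # placeholders

def simulate_fep_efficiency(outer_mm: float, inner_um: float) -> np.ndarray:
    """Hypothetical stand-in for an MCNP FEP-efficiency calculation."""
    attenuation = np.exp(-0.05 * outer_mm - 0.001 * inner_um)
    return measured_eff * attenuation / np.exp(-0.05)            # toy model only

best = None
for outer_mm in np.arange(0.1, 2.01, 0.1):      # 0.1 mm steps, as in the paper
    inner_um = outer_mm                          # scaled together (assumed)
    calc = simulate_fep_efficiency(outer_mm, inner_um)
    rd = np.mean(np.abs(calc - measured_eff) / measured_eff * 100)
    if best is None or rd < best[0]:
        best = (rd, outer_mm, inner_um)

print(f"best: outer={best[1]:.1f} mm, inner={best[2]:.1f} um, "
      f"mean RD={best[0]:.2f}%")
```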

A Study on Actual Usage of Information Systems: Focusing on System Quality of Mobile Service (정보시스템의 실제 이용에 대한 연구: 모바일 서비스 시스템 품질을 중심으로)

  • Cho, Woo-Chul;Kim, Kimin;Yang, Sung-Byung
    • Asia pacific journal of information systems
    • /
    • v.24 no.4
    • /
    • pp.611-635
    • /
    • 2014
  • Information systems (IS) have become ubiquitous and have changed every aspect of how people live their lives. While some IS have been successfully adopted and widely used, others have failed to be adopted and have been crowded out in spite of remarkable progress in technologies. Both the technology acceptance model (TAM) and the IS success model (ISSM), among many others, have contributed to explaining the reasons for success as well as failure in IS adoption and usage. While the TAM suggests that intention to use and perceived usefulness lead to actual IS usage, the ISSM indicates that information quality, system quality, and service quality affect IS usage and user satisfaction. Upon literature review, however, we found a significant void in the theoretical development and applications employing either of the two models, and we raise research questions. First of all, in spite of the causal relationship between intention to use and actual usage, most previous studies employed only intention to use as a dependent variable, without overtly explaining its relationship with actual usage. Moreover, even in the few studies that employed actual IS usage as a dependent variable, the degree of actual usage was measured from users' perceptual responses to survey questionnaires. Usage measured from survey responses might not be 'actual' usage in a strict sense, since respondents' perceptions may be distorted by selective perception or stereotypes. By the same token, the degree of system quality that IS users perceive might not be 'real' quality either. This study seeks to fill this void by measuring actual usage and system quality with 'fact' data such as system logs and the specifications of users' information and communications technology (ICT) devices. More specifically, we propose an integrated research model that brings together the TAM and the ISSM; the integrated model comprises variables measured from fact data as well as from survey data. By employing it, we expect to reveal the difference between the real and perceived degrees of system quality, and to investigate the relationship between the perception-based measure of intention to use and the fact-based measure of actual usage. Furthermore, we aim to add empirical findings on the general research question: what factors influence actual IS usage, and how? To address this question and examine the research model, we selected a mobile campus application (MCA) and collected both fact data and survey data. For fact data, we retrieved from the system logs such information as menu usage counts, device performance, display size, and operating system version. At the same time, we conducted a survey among university students who use the MCA and collected 180 valid responses. A partial least squares (PLS) method was employed to validate the research model. Of the nine hypotheses developed, five were supported and four were not. In detail, the relationships between (1) perceived system quality and perceived usefulness, (2) perceived system quality and intention to use, (3) perceived usefulness and intention to use, (4) quality of device platform and actual IS usage, and (5) intention to use and actual IS usage were found to be significant. In comparison, the relationships between (1) quality of device platform and perceived system quality, (2) quality of device platform and perceived usefulness, (3) quality of device platform and intention to use, and (4) perceived system quality and actual IS usage were not significant. These results reveal notable differences from previous studies. First, although intention to use shows a positive effect on actual IS usage, its explanatory power is very weak ($R^2$=0.064). Second, fact-based system quality (quality of the user's device platform) shows a direct impact on actual IS usage without the mediating role of intention to use. Lastly, the relationships between perceived system quality (perception-based system quality) and other constructs differ completely from those between quality of device platform (fact-based system quality) and other constructs. In a post-hoc analysis, IS users' past behavior was added to the research model to further investigate the cause of the low explanatory power for actual IS usage. The results show that past IS usage has a strong positive effect on current IS usage while intention to use does not, implying that IS usage has already become habitual behavior. This study provides several implications. First, we verify that fact-based data (i.e., system logs of real usage records) are more likely to reflect IS users' actual usage than perception-based data. In addition, by identifying the direct impact of quality of device platform on actual IS usage (without any mediating role of attitude or intention), this study triggers further research on other potential factors that may directly influence actual IS usage. Furthermore, the results provide the practical strategic implication that organizations equipped with high-quality systems may directly expect high levels of system usage.
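
To illustrate how an explanatory power like the reported $R^2$=0.064 is quantified, here is a minimal sketch regressing fact-based actual usage on the survey-based intention score; all data below are synthetic.

```python
import numpy as np

# Sketch of computing R^2 for actual usage regressed on intention to use.
# The sample size (180) matches the paper; the data are synthetic.

rng = np.random.default_rng(1)
intention = rng.normal(5.0, 1.0, 180)                # survey scale, n = 180
actual_usage = 10 + 0.8 * intention + rng.normal(0, 3.0, 180)  # weak signal

X = np.column_stack([np.ones_like(intention), intention])
beta, *_ = np.linalg.lstsq(X, actual_usage, rcond=None)
residuals = actual_usage - X @ beta
r_squared = 1 - residuals.var() / actual_usage.var()
print(f"R^2 = {r_squared:.3f}")  # small values mean weak explanatory power
```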

Peripheral Dose Distributions of Clinical Photon Beams (광자선에 의한 민조사면 경계영역의 선량분포)

  • 김진기;김정수;권형철
    • Progress in Medical Physics
    • /
    • v.12 no.1
    • /
    • pp.71-77
    • /
    • 2001
  • The region near the edge of a radiation beam, where the dose changes rapidly with distance from the beam axis, is known as the penumbra. A sharp dose-gradient zone exists even in megavoltage photon beams because of the source size, collimator, lead-alloy blocks, other accessories, and internally scattered radiation. We investigated the dosimetric characteristics of the penumbra region of a standard collimator and compared them with those of a theoretical model for the optimal use of the system in radiotherapy. The peripheral dose distribution of 6 MV photon beams is represented by a penumbra-forming function of depth. We also discuss the peripheral dose distribution of clinical photon beams and the differences between doses calculated with an empirical penumbra-forming function and measurements in the penumbra region. Predictions by empirical penumbra-forming functions were compared with measurements in a 3-dimensional water phantom, and the method is shown to reproduce the measured peripheral dose values, usually to within the statistical uncertainties of the data. A semiconductor detector and an ion chamber were positioned at depths of dmax, 5 cm, and 10 cm, and specific ratios were determined from the scanning data. The effective penumbra, the distance between the 80% and 20% isodose lines, was analyzed as a function of distance; its extent expands with increasing depth. Differences between measured values and model-function values, which depend on the characteristics of the detector, showed only small errors in the peripheral dose distribution.
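
Here is a minimal sketch of extracting the effective penumbra (the 80%-to-20% distance) from a scanned beam profile, using a synthetic toy profile rather than the paper's measured data.

```python
import numpy as np

# Sketch of the effective-penumbra calculation; the sigmoid profile is a
# synthetic stand-in for a scanned off-axis dose profile.

positions_mm = np.linspace(-60, 60, 241)                          # off-axis distance
profile = 100 / (1 + np.exp((np.abs(positions_mm) - 50) / 3.0))   # toy profile [%]

def crossing(pos, prof, level):
    """Right-side off-axis position where the profile falls to `level` %."""
    right = pos >= 0
    # Reverse so the profile is increasing, as np.interp requires.
    return np.interp(level, prof[right][::-1], pos[right][::-1])

penumbra_mm = crossing(positions_mm, profile, 20) - crossing(positions_mm, profile, 80)
print(f"effective penumbra: {penumbra_mm:.1f} mm")
```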


An Empirical Study on the Clustering Measurement and Trend Analysis among the Asian Ports Using the Context-dependent and Measure-specific Models (컨텍스트의존 모형 및 측정특유 모형을 이용한 아시아항만들의 클러스터링 측정 및 추세분석에 관한 실증적 연구)

  • Park, Ro-Kyung
    • Journal of Korea Port Economic Association
    • /
    • v.28 no.1
    • /
    • pp.53-82
    • /
    • 2012
  • The purpose of this paper is to show the clustering trend by using the context-dependent and measure-specific models for 38 Asian ports over 10 years (2001-2009) with 4 inputs and 1 output. The main empirical results are as follows. First, the clustering results from the context-dependent and measure-specific models are the same. Second, the most efficient clustering was found among the Hong Kong, Singapore, Ningbo, Guangzhou, and Kaohsiung ports. Third, the Port Sultan Qaboos, Jeddah, and Aden ports showed the lowest-level clustering. Fourth, the ranking order of attractiveness is Guangzhou, Dubai, Hong Kong, Ningbo, and Shanghai, and the progressive scores confirm that low-level ports can increase their efficiency by benchmarking the upper-level ports. Fifth, the benchmark shares showed that Dubai (berth length) and Hong Kong (port depth, total area, and number of cranes) most affected the efficiency of the inefficient ports.
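
Context-dependent and measure-specific models build on the basic DEA efficiency score. Here is a minimal sketch of the input-oriented CCR model with scipy's linprog, using made-up port data in the paper's 4-input, 1-output setting.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of input-oriented CCR DEA; the three ports and their input/output
# values are made up for illustration.

X = np.array([[120, 3000, 15, 40],    # inputs, e.g., berth length, area,
              [200, 5000, 20, 60],    # depth, number of cranes (assumed)
              [150, 3500, 18, 45]], dtype=float)
Y = np.array([[800], [1500], [900]], dtype=float)  # output, e.g., throughput

def ccr_efficiency(k: int) -> float:
    """Input-oriented CCR score of DMU k (1.0 = on the efficient frontier)."""
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                     # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):       # sum_j lam_j * x_ij <= theta * x_ik
        A_ub.append(np.r_[-X[k, i], X[:, i]]); b_ub.append(0.0)
    for r in range(s):       # sum_j lam_j * y_rj >= y_rk
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]

for k in range(3):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```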

A Study on the Development and Utilization of Indoor Spatial Information Visualization Tool Using the Open BIM based IFC Model (개방형 BIM 기반 IFC 모델을 이용한 실내공간정보 시각화 도구개발 및 활용방안 연구)

  • Ryu, Jung Rim;Mun, Son Ki;Choo, Seung Yeon
    • Spatial Information Research
    • /
    • v.23 no.5
    • /
    • pp.41-52
    • /
    • 2015
  • MOLIT (the Ministry of Land, Infrastructure and Transport) designated indoor spatial information as basic spatial information in 2013, providing a legal basis for constructing and managing it. Although indoor spatial information constructed by laser scanning or surveying offers some advantages at the service level, it has many problems, such as consuming substantial resources and requiring additional processes to input object information; consequently, it is inefficient for maintenance and for the domestic AEC/FM field. The purpose of this study is to generate indoor spatial information from an IFC model based on open BIM and to improve its usability through data visualization. The open-source IFC Exporter, a built-in program of Revit (Autodesk Inc.), is used to output indoor spatial information, and the Direct3D library is used to visualize it. The indoor spatial information objects interoperate with the XML format and can be utilized in various fields, for example COBie linkage in facility management, construction of a geo-database using aerial photogrammetry from UAVs (unmanned aerial vehicles), simulation of large-scale military operations, and simulation of large-scale evacuations. The method proposed in this study has outstanding advantages: conformance with national spatial information policy, a high level of interoperability through IFC-based indoor spatial information objects, convenient information editing, light data, and a simplified information production process.
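
As an illustration of the extract-and-serialize step, here is a minimal sketch using the open-source ifcopenshell toolkit (not the Revit IFC Exporter and Direct3D pipeline the paper used); "building.ifc" is a placeholder path.

```python
import ifcopenshell                      # open-source IFC toolkit, used here
import xml.etree.ElementTree as ET      # only to illustrate the concept

# Sketch: read indoor-space objects from an open BIM IFC model and
# serialize them to XML for a downstream visualization tool.

model = ifcopenshell.open("building.ifc")        # placeholder path
root = ET.Element("IndoorSpaces")

for space in model.by_type("IfcSpace"):          # one entity per indoor space
    el = ET.SubElement(root, "Space")
    el.set("globalId", space.GlobalId)
    el.set("name", space.Name or "")
    el.set("longName", space.LongName or "")

ET.ElementTree(root).write("indoor_spaces.xml", encoding="utf-8",
                           xml_declaration=True)
```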

Monte Carlo Study Using GEANT4 of Cyberknife Stereotactic Radiosurgery System (GEANT4를 이용한 정위적 사이버나이프 선량분포의 계산과 측정에 관한 연구)

  • Lee, Chung-Il;Shin, Jae-Won;Shin, Hun-Joo;Jung, Jae-Yong;Kim, Yon-Lae;Min, Jeong-Hwan;Hong, Seung-Woo;Chung, Su-Mi;Jung, Won-Gyun;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.21 no.2
    • /
    • pp.192-200
    • /
    • 2010
  • Dosimetry of the Cyberknife, with its small field sizes, is more difficult and complex than that of conventional radiotherapy because of electronic disequilibrium, steep dose gradients, and spectral changes of photons and electrons. The purpose of this study is to demonstrate the usefulness of Geant4 as a dose verification tool by comparing diode-detector measurements with Geant4 simulation results. The Monte Carlo model of the Cyberknife was developed in a two-step process. First, the treatment head was simulated and the Bremsstrahlung spectrum was calculated. Second, percent depth dose (PDD) was calculated for six cones of different sizes (5, 10, 20, 30, 50, and 60 mm) in a water phantom model. Relative output factors were calculated for 12 fields from 5 mm to 60 mm and compared with diode measurements. Beam profiles were calculated for the six cones at depths of 1.5, 10, and 20 cm. The PDD results showed errors of less than 2%, which is acceptable in a clinical setting. For the relative output factors, the difference was less than 3% for cones larger than 7.5 mm but 6.91% for the 5 mm cone. Beam profiles differed by less than 2% for cones larger than 20 mm and by less than 3.5% for smaller cones. These results demonstrate the usefulness of Geant4 as a dose verification tool.
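
Here is a minimal sketch of the relative-output-factor comparison described above, normalizing simulated and measured readings to the 60 mm cone; all numbers are placeholders, not the paper's data.

```python
import numpy as np

# Sketch of the relative output factor (ROF) comparison; the readings are
# toy values chosen only to mimic larger deviations at small cones.

cones_mm = np.array([5, 10, 20, 30, 50, 60])
sim_reading  = np.array([0.62, 0.80, 0.91, 0.95, 0.99, 1.00])   # Geant4 (toy)
meas_reading = np.array([0.66, 0.81, 0.92, 0.95, 0.99, 1.00])   # diode (toy)

rof_sim = sim_reading / sim_reading[cones_mm == 60]    # normalize to 60 mm
rof_meas = meas_reading / meas_reading[cones_mm == 60]
diff_pct = (rof_sim - rof_meas) / rof_meas * 100

for c, d in zip(cones_mm, diff_pct):
    print(f"{c:2d} mm cone: {d:+.2f}%")   # large deviations expected only
                                          # for the smallest cones
```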

Quantitative Assessment Technology of Small Animal Myocardial Infarction PET Image Using Gaussian Mixture Model (다중가우시안혼합모델을 이용한 소동물 심근경색 PET 영상의 정량적 평가 기술)

  • Woo, Sang-Keun;Lee, Yong-Jin;Lee, Won-Ho;Kim, Min-Hwan;Park, Ji-Ae;Kim, Jin-Su;Kim, Jong-Guk;Kang, Joo-Hyun;Ji, Young-Hoon;Choi, Chang-Woon;Lim, Sang-Moo;Kim, Kyeong-Min
    • Progress in Medical Physics
    • /
    • v.22 no.1
    • /
    • pp.42-51
    • /
    • 2011
  • Nuclear medicine images (SPECT, PET) are widely used tools for assessing myocardial viability and perfusion. However, it is difficult to define the myocardial infarct region accurately. The purpose of this study was to investigate a methodological approach for automatic measurement of rat myocardial infarct size using a polar map with an adaptive threshold. A rat myocardial infarction model was induced by ligation of the left circumflex artery. PET images were obtained after intravenous injection of 37 MBq 18F-FDG. After 60 min of uptake, each animal was scanned for 20 min with ECG gating. PET data were reconstructed using 2D ordered-subset expectation maximization (OSEM). To automatically delineate the myocardial contour and generate the polar map, we used QGS software (Cedars-Sinai Medical Center). The reference infarct size was defined as the percentage of the infarcted area of the total left myocardium by TTC staining. We used three threshold methods: a predefined threshold, Otsu, and a multiple Gaussian mixture model (MGMM). The predefined-threshold method is commonly used in other studies; we applied threshold values from 10% to 90% in steps of 10%. The Otsu algorithm selects the threshold that maximizes the between-class variance. The MGMM method estimates the distribution of image intensity using multiple Gaussian mixture models (MGMM2, ..., MGMM5) and calculates an adaptive threshold. The infarct size in the polar map was calculated as the percentage of the area below the threshold relative to the total polar-map area. Infarct sizes measured with the different threshold methods were evaluated by comparison with the reference infarct size. The mean differences between the polar-map defect sizes at predefined thresholds (20%, 30%, and 40%) and the reference infarct size were 7.04±3.44%, 3.87±2.09%, and 2.15±2.07%, respectively. The Otsu method differed from the reference infarct size by 3.56±4.16%, and the MGMM methods by 2.29±1.94%. The predefined threshold of 30% showed the smallest mean difference from the reference infarct size; however, MGMM was more accurate than the predefined threshold when the reference infarct size was under 10% (MGMM: 0.006%; predefined threshold: 0.59%). In this study, we evaluated myocardial infarct size in the polar map using multiple Gaussian mixture models. The MGMM method provides an adaptive threshold for each subject and will be useful for automatic measurement of infarct size.
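
Here is a minimal sketch of an MGMM-style adaptive threshold: fit a two-component Gaussian mixture to polar-map intensities with scikit-learn and threshold at the posterior decision boundary between the low-uptake and normal components. The paper fits 2- to 5-component models; the intensities below are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch of GMM-based adaptive thresholding on synthetic polar-map data.

rng = np.random.default_rng(2)
infarct = rng.normal(20, 5, 300)         # low-uptake region (toy)
normal = rng.normal(70, 10, 700)         # perfused myocardium (toy)
intensities = np.concatenate([infarct, normal]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
means = gmm.means_.ravel()

# Adaptive threshold: the intensity between the two component means where
# the posterior probabilities of the components are equal.
grid = np.linspace(means.min(), means.max(), 1000).reshape(-1, 1)
post = gmm.predict_proba(grid)
threshold = grid[np.argmin(np.abs(post[:, 0] - post[:, 1]))][0]

infarct_pct = (intensities < threshold).mean() * 100
print(f"adaptive threshold: {threshold:.1f}, infarct size: {infarct_pct:.1f}%")
```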