• Title/Summary/Keyword: automatic threshold

Search Results: 270

Association between cadmium exposure and hearing impairment: a population-based study in Korean adults

  • Jung, Da Jung
    • Journal of Yeungnam Medical Science
    • /
    • v.36 no.2
    • /
    • pp.141-147
    • /
    • 2019
  • Background: The present study aimed to evaluate the clinical association between cadmium exposure and hearing impairment in the Korean population. Methods: This retrospective cross-sectional study used data obtained from the Korean National Health and Nutrition Examination Survey. In total, 3,228 participants were included and divided into quartiles based on their blood cadmium levels: first quartile (1Q), second quartile (2Q), third quartile (3Q), and fourth quartile (4Q) groups. Hearing thresholds were measured using an automatic audiometer at 0.5, 1, 2, 3, 4, and 6 kHz. Hearing loss (HL) was defined as an average hearing threshold (AHT) of >25 dB. Results: Each group comprised 807 participants. The area under the receiver operating characteristic curve of cadmium level for HL was 0.634 (95% confidence interval [CI], 0.621-0.646). In the multivariate analysis adjusting for confounding factors, participants in the 4Q group had higher low/mid-frequency, high-frequency, and AHT values than those in the other groups. Multivariate logistic regression showed that the odds ratio (OR) for HL per 1 μg/L increase in cadmium was 1.25 (95% CI, 1.09-1.44; p=0.002). Moreover, the multivariate logistic regression analyses revealed that participants in the 4Q group had 1.59-, 1.38-, and 1.41-fold higher odds of HL than those in the 1Q, 2Q, and 3Q groups, respectively. Conclusion: A high cadmium level quartile was associated with increased hearing thresholds and HL in the Korean adult population.
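As an illustration of the kind of analysis described above (quartile grouping of blood cadmium and logistic regression for hearing loss), the sketch below uses synthetic data and assumed variable names; it is not the authors' code or dataset.

```python
# Illustrative sketch only: odds ratio for hearing loss per 1 ug/L increase in
# blood cadmium, with quartile grouping and logistic regression on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3228
df = pd.DataFrame({
    "cadmium": rng.lognormal(mean=0.0, sigma=0.4, size=n),   # blood cadmium, ug/L (synthetic)
    "age": rng.integers(19, 80, size=n),
    "hearing_threshold": rng.normal(20, 10, size=n),          # average hearing threshold, dB
})
df["HL"] = (df["hearing_threshold"] > 25).astype(int)         # hearing loss: AHT > 25 dB
df["quartile"] = pd.qcut(df["cadmium"], 4, labels=["1Q", "2Q", "3Q", "4Q"])
print(df["quartile"].value_counts().sort_index())             # ~807 participants per group

# Logistic regression: OR per 1 ug/L increase in cadmium, adjusted for age.
X = sm.add_constant(df[["cadmium", "age"]])
fit = sm.Logit(df["HL"], X).fit(disp=0)
or_per_unit = np.exp(fit.params["cadmium"])
ci_low, ci_high = np.exp(fit.conf_int().loc["cadmium"])
print(f"OR per 1 ug/L cadmium: {or_per_unit:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```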

Effect of different voxel sizes on the accuracy of CBCT measurements of trabecular bone microstructure: A comparative micro-CT study

  • Tayman, Mahmure Ayse;Kamburoglu, Kivanc;Ocak, Mert;Ozen, Dogukan
    • Imaging Science in Dentistry
    • /
    • v.52 no.2
    • /
    • pp.171-179
    • /
    • 2022
  • Purpose: The aim of this study was to assess the accuracy of cone-beam computed tomographic (CBCT) images obtained using different voxel sizes in measuring trabecular bone microstructure in comparison to micro-CT. Materials and Methods: Twelve human skull bones containing posterior-mandibular alveolar bone regions were analyzed. CBCT images were obtained at voxel sizes of 0.075 mm (high: HI) and 0.2 mm (standard: Std), while micro-CT imaging used voxel sizes of 0.06 mm (HI) and 0.12 mm (Std). Analyses were performed using CTAn software with the standardized automatic global threshold method. Intraclass correlation coefficients were used to evaluate the consistency and agreement of paired measurements for bone volume (BV), percent bone volume (BV/TV), bone surface (BS), trabecular thickness (TbTh), trabecular separation (TbSp), trabecular number (TbN), trabecular pattern factor (TbPf), and structure model index (SMI). Results: When compared to micro-CT, CBCT images had higher BV, BV/TV, and TbTh values, while micro-CT images had lower BS, TbSp, TbN, TbPf, and SMI values (P<0.05). The BV, BV/TV, TbTh, and TbSp variables were higher with Std voxels, whereas the BS, TbPf, and SMI variables were higher with HI voxels for both imaging methods. For each imaging modality and voxel size evaluated, BV, BS, and TbTh were significantly different (P<0.05). TbN, TbPf, and SMI showed statistically significant differences between imaging methods (P<0.05). The consistency and absolute agreement between micro-CT and CBCT were excellent for all variables. Conclusion: This study demonstrated the potential of high-resolution CBCT imaging for quantitative bone morphometry assessment.
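The segmentation step above relies on an automatic global threshold computed from the image histogram. As a rough illustration of that general idea (not the CTAn implementation, and on synthetic data), the sketch below applies Otsu's global threshold to a toy CT volume and derives a simple BV/TV figure; all array sizes and intensities are assumptions.

```python
# Minimal sketch of an automatic global threshold (Otsu) applied to a CT-like volume,
# followed by a simple BV/TV estimate.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(1)
volume = rng.normal(100, 20, size=(64, 64, 64))   # stand-in for a grayscale CT volume
volume[20:40, 20:40, 20:40] += 120                # brighter block standing in for trabecular bone

t = threshold_otsu(volume)                        # global threshold from the intensity histogram
bone_mask = volume > t

bv = bone_mask.sum()                              # bone volume in voxels
tv = bone_mask.size                               # total volume in voxels
print(f"Otsu threshold: {t:.1f}, BV/TV: {bv / tv:.3f}")
```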

FATIGUE DESIGN OF BUTT-WELDED TUBULAR JOINTS

  • Kim, D. S.;S. Nho;F. Kopp
    • Proceedings of the KWS Conference
    • /
    • 2002.10a
    • /
    • pp.127-132
    • /
    • 2002
  • Recent deepwater offshore structures in the Gulf of Mexico utilize butt-welded tubular joints. Applications of welded tubular joints include tendons, production risers, and steel catenary risers. Fatigue life assessment of these joints becomes more critical because the structures to which they are attached are allowed to undergo cyclic, and sometimes large, displacements around an anchored position. Estimating the fatigue behavior of these tubular members at the design stage is generally done using the S-N curves specified in codes and standards. Applying the stress concentration factor of the welded structure to the S-N approach often results in a very conservative assessment because the stress field acting on the tubular has a non-uniform distribution through the thickness. Fracture mechanics and fitness-for-service (FFS) technology have been applied in the design of catenary risers. This technology enables the engineer to establish proper requirements on weld quality and inspection acceptance criteria to assure satisfactory structural integrity during the design life. It also provides guidance on the proper design curves to be used and a methodology for accounting for the effects of non-uniform stress distribution through the wall thickness. An attempt was made to develop a set of S-N curves based on a fracture mechanics approach, considering a non-uniform stress distribution and a threshold stress intensity factor. The series of S-N curves generated from this approach was compared with existing S-N curves. For the flat-plate butt joint, the S-N curve generated from fracture mechanics matches the IIW class 100 curve when the initial crack depth is 0.5 mm (0.02"). A similar comparison with the API X′ curve was made for the tubular joint. These initial crack depths are larger than the limits of inspection of current non-destructive examination (NDE) methods, such as Automatic Ultrasonic Inspection (AUT). Thus, a safe approach can be taken by specifying acceptance criteria that are close to the limits of the sizing capability of the selected NDE method. The comparison illustrates the conservatism built into the S-N design curve.

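The fracture-mechanics S-N approach described above can be sketched as a Paris-law crack-growth integration with a threshold stress intensity factor. The snippet below is a toy illustration of that idea only; the material constants, geometry factor, and crack depths are assumed values, not those used in the paper.

```python
# Hedged sketch: building S-N points from a fracture-mechanics (Paris-law)
# crack-growth integration with a threshold stress intensity factor.
import numpy as np

C, m = 5.21e-13, 3.0           # assumed Paris-law constants (da/dN in m/cycle, dK in MPa*sqrt(m))
dK_th = 2.0                    # assumed threshold stress intensity factor range, MPa*sqrt(m)
Y = 1.12                       # assumed geometry factor, taken constant through the thickness
a_i, a_f = 0.5e-3, 10.0e-3     # initial crack depth 0.5 mm, assumed final depth 10 mm

def cycles_to_failure(delta_sigma, n_steps=2000):
    """Integrate N = int_{a_i}^{a_f} da / (C * dK^m); no growth below dK_th."""
    a = np.linspace(a_i, a_f, n_steps)
    dK = Y * delta_sigma * np.sqrt(np.pi * a)
    if np.any(dK <= dK_th):
        return np.inf                           # part of the path is below threshold: treat as run-out
    inv_rate = 1.0 / (C * dK**m)                # cycles per metre of crack growth
    return float(np.sum(0.5 * (inv_rate[1:] + inv_rate[:-1]) * np.diff(a)))  # trapezoid rule

# Each (stress range, N) pair is one point of the derived S-N curve.
for s in (50, 100, 150, 200):                   # stress ranges in MPa
    print(f"delta_sigma = {s:3d} MPa -> N = {cycles_to_failure(s):.3e} cycles")
```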

Adaptive Enhancement of Low-light Video Images Algorithm Based on Visual Perception (시각 감지 기반의 저조도 영상 이미지 적응 보상 증진 알고리즘)

  • Li Yuan;Byung-Won Min
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.2
    • /
    • pp.51-60
    • /
    • 2024
  • To address the problem that video images in low-light environments have low contrast and are difficult to recognize, we propose an adaptive contrast compensation enhancement algorithm based on human visual perception. First, the characteristic factors of low-light video images are extracted: the average luminance (AL) and the average bandwidth factor (ABWF). A mathematical model of human visual contrast resolution compensation (CRC) is established from the grayscale/chromaticity differences of the original image, and the proportions of the three primary colors of the true-color image are each compensated by integration. Then, when the degree of compensation is lower than the just-distinguishable difference of bright (photopic) vision, a compensation threshold is set to linearly compensate bright vision up to the full bandwidth. Finally, an automatic optimization model for the compensation ratio coefficient is established by combining subjective image quality evaluation with the image characteristic factors. Experimental results show that the proposed adaptive video image enhancement algorithm achieves a good enhancement effect, has good real-time performance, effectively recovers information in dark regions, and can be widely used in different scenes.
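To illustrate the general idea of image-statistic-driven compensation for low-light frames (not the CRC model of the paper), the sketch below estimates a frame's average luminance and applies a luminance-dependent gamma correction; the target luminance and clipping range are assumptions.

```python
# Illustrative sketch of adaptive low-light compensation driven by a frame statistic.
import numpy as np

def enhance_frame(frame: np.ndarray, target_luminance: float = 0.5) -> np.ndarray:
    """frame: float RGB image with values in [0, 1]."""
    # Average luminance (AL) using a standard Rec. 601 weighting.
    al = float(np.mean(0.299 * frame[..., 0] + 0.587 * frame[..., 1] + 0.114 * frame[..., 2]))
    # Choose a gamma that pushes the mean luminance toward the target.
    gamma = np.log(target_luminance) / np.log(max(al, 1e-4))
    gamma = np.clip(gamma, 0.3, 1.0)          # only brighten, and not too aggressively
    return np.clip(frame ** gamma, 0.0, 1.0)

# Usage on a synthetic dark frame.
dark = np.random.default_rng(2).uniform(0.0, 0.2, size=(120, 160, 3))
bright = enhance_frame(dark)
print(f"mean before: {dark.mean():.3f}, after: {bright.mean():.3f}")
```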

Association between fatty liver disease and hearing impairment in Korean adults: a retrospective cross-sectional study

  • Da Jung Jung
    • Journal of Yeungnam Medical Science
    • /
    • v.40 no.4
    • /
    • pp.402-411
    • /
    • 2023
  • Background: We hypothesized that fatty liver disease (FLD) is associated with a high prevalence of hearing loss (HL) owing to metabolic disturbances. This study aimed to evaluate the association between FLD and HL in a large sample of the Korean population. Methods: We used a dataset of adults who underwent routine voluntary health checkups (n=21,316). Fatty liver index (FLI) was calculated using Bedogni's equation. The patients were divided into two groups: the non-FLD (NFLD) group (n=18,518, FLI <60) and the FLD group (n=2,798, FLI ≥60). Hearing thresholds were measured using an automatic audiometer. The average hearing threshold (AHT) was calculated as the pure-tone average at four frequencies (0.5, 1, 2, and 3 kHz). HL was defined as an AHT of >40 dB. Results: HL was observed in 1,370 (7.4%) and 238 patients (8.5%) in the NFLD and FLD groups, respectively (p=0.041). Compared with the NFLD group, the odds ratio for HL in the FLD group was 1.16 (p=0.040) and 1.46 (p<0.001) in univariate and multivariate logistic regression analyses, respectively. Linear regression analyses revealed that FLI was positively associated with AHT in both univariate and multivariate analyses. Analyses using a propensity score-matched cohort showed trends similar to those using the total cohort. Conclusion: FLD and FLI were associated with poor hearing thresholds and HL. Therefore, active monitoring of hearing impairment in patients with FLD may be helpful for early diagnosis and treatment of HL in the general population.
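For illustration, the sketch below reproduces the grouping and outcome definitions stated in the abstract (FLI ≥60 for FLD, AHT >40 dB for HL). The fatty liver index follows the commonly cited Bedogni formula; treat its coefficients, and the example input values, as assumptions to be verified against the original publication.

```python
# Hedged sketch of the grouping and outcome definitions described in the abstract.
# FLI coefficients below follow the commonly cited Bedogni formula
# (triglycerides in mg/dL, GGT in U/L, waist in cm) and are an assumption here.
import numpy as np

def fatty_liver_index(tg, bmi, ggt, waist):
    z = 0.953 * np.log(tg) + 0.139 * bmi + 0.718 * np.log(ggt) + 0.053 * waist - 15.745
    return 100.0 * np.exp(z) / (1.0 + np.exp(z))

def average_hearing_threshold(t500, t1k, t2k, t3k):
    # Pure-tone average over 0.5, 1, 2, and 3 kHz, as defined in the study.
    return float(np.mean([t500, t1k, t2k, t3k]))

fli = fatty_liver_index(tg=180.0, bmi=27.5, ggt=45.0, waist=92.0)   # hypothetical examinee
aht = average_hearing_threshold(35, 40, 45, 50)                     # thresholds in dB
group = "FLD" if fli >= 60 else "NFLD"        # FLI >= 60 defines the FLD group
hl = aht > 40                                 # hearing loss: AHT > 40 dB
print(f"FLI = {fli:.1f} ({group}), AHT = {aht:.1f} dB, HL = {hl}")
```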

Development of Automated Region of Interest for the Evaluation of Renal Scintigraphy: Study on the Inter-operator Variability (신장 핵의학 영상의 정량적 분석을 위한 관심영역 자동설정 기능 개발 및 사용자별 분석결과의 변화도 감소효과 분석)

  • 이형구;송주영;서태석;최보영;신경섭
    • Progress in Medical Physics
    • /
    • v.12 no.1
    • /
    • pp.41-50
    • /
    • 2001
  • Quantitative analysis of renal scintigraphy is strongly affected by the location, shape, and size of the region of interest (ROI). When ROIs are drawn manually, they are not reproducible because of the operators' subjective points of view, and they may lead to inconsistent results even when the same data are analyzed. In this study, the effect of ROI variation on the analysis of renal scintigraphy with manually drawn ROIs was investigated, and, to obtain more consistent results, methods for automated ROI definition were developed and the results of applying them were analyzed. Relative renal function, glomerular filtration rate, and mean transit time were selected as clinical parameters for analyzing the effect of the ROI, and the analysis tools were implemented in the IDL 5.2 programming language. To obtain renal scintigrams, 99mTc-DTPA was injected into 11 normal adults, and nine researchers performed the analyses to study inter-operator variability. A threshold calculated from pixel gradient values and a border-tracing technique were used to define the renal ROI, after which the background ROI and aorta ROI were defined automatically by considering anatomical information and pixel values. The automatic methods for defining the renal ROI were classified into four groups according to the degree to which operator subjectivity was excluded. These automatic methods reduced inter-operator variability markedly compared with the manual method and proved to be an effective tool for obtaining reasonable and consistent results in the quantitative analysis of renal scintigraphy.

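As a rough illustration of an automatic ROI definition step in the spirit of the abstract (threshold plus region delineation), not the authors' IDL 5.2 implementation, the sketch below thresholds a synthetic renogram frame, labels connected regions, and keeps the largest component as the renal ROI.

```python
# Hedged sketch of automatic ROI definition: simple threshold + connected components.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
frame = rng.poisson(5, size=(128, 128)).astype(float)    # background counts
frame[40:80, 30:60] += rng.poisson(60, size=(40, 30))    # bright region standing in for a kidney

threshold = frame.mean() + 2 * frame.std()               # simple automatic threshold
mask = frame > threshold

labels, n = ndimage.label(mask)                          # connected components
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
renal_roi = labels == (np.argmax(sizes) + 1)             # largest component taken as renal ROI

print(f"threshold = {threshold:.1f}, ROI pixels = {renal_roi.sum()}, "
      f"ROI counts = {frame[renal_roi].sum():.0f}")
```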

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents including unstructured data and text have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, categorization was performed manually. With manual categorization, however, not only is the accuracy of the categorization not guaranteed, but the categorization also requires a large amount of time and incurs high costs. Many studies have been conducted on the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics because they assume that one document can be assigned to one category only. In order to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process requires training on a multi-categorized document set. These methods therefore cannot be applied to multi-categorization of most documents unless multi-categorized training sets are provided. To remove the requirement for a multi-categorized training set imposed by traditional multi-categorization algorithms, we propose a new methodology that can extend a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by using the results of topic analysis for single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate the matching score of each document to multiple categories. A document is classified into a certain category if and only if its matching score is higher than a predefined threshold; for example, a document can be classified into three categories whose matching scores all exceed the threshold. The main contribution of our study is that our methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and the use of vulgar language and slang is lower than in other typical text documents. We collected news articles from July 2012 to June 2013. The number of articles varies considerably across categories, because readers have different levels of interest in each category and because the frequency of events differs by category. In order to minimize the distortion caused by the different numbers of articles per category, we extracted 3,000 articles equally from each of the eight categories, so the total number of articles used in our experiments was 24,000. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics." Using the collected news articles, we calculated the document/category correspondence scores from the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree of correspondence of each document to a certain category. As a result, we could present two additional categories for each of the 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top one predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top one to three predicted categories were considered. Interestingly, precision, recall, and F-score varied widely across the eight categories.
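A minimal numeric sketch of the matching-score idea described above: document/topic weights are combined with a topic/category correspondence table, and every category whose score exceeds a threshold is assigned, so a single-categorized document can gain extra categories. The matrices and the threshold are toy assumptions.

```python
# Toy sketch: document/category matching scores from topic analysis.
import numpy as np

categories = ["IT Science", "Economy", "Society", "Sports"]

# Rows: documents, columns: topics (e.g., weights from a topic model such as LDA).
doc_topic = np.array([[0.70, 0.20, 0.10],
                      [0.05, 0.15, 0.80]])

# Rows: topics, columns: categories (correspondence learned from single-categorized docs).
topic_cat = np.array([[0.8, 0.1, 0.1, 0.0],
                      [0.1, 0.6, 0.2, 0.1],
                      [0.0, 0.1, 0.2, 0.7]])

doc_cat = doc_topic @ topic_cat          # document/category matching scores
threshold = 0.15                         # assumed threshold for assigning a category

for i, scores in enumerate(doc_cat):
    assigned = [c for c, s in zip(categories, scores) if s >= threshold]
    print(f"doc {i}: scores={np.round(scores, 2)}, categories={assigned}")
```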

Definition of Tumor Volume Based on 18F-Fludeoxyglucose Positron Emission Tomography in Radiation Therapy for Liver Metastases: A Relational Analysis Study between Image Parameters and Image Segmentation Methods (간 전이 암 환자의 18F-FDG PET 기반 종양 영역 정의: 영상 인자와 자동 영상 분할 기법 간의 관계분석)

  • Kim, Heejin;Park, Seungwoo;Jung, Haijo;Kim, Mi-Sook;Yoo, Hyung Jun;Ji, Young Hoon;Yi, Chul-Young;Kim, Kum Bae
    • Progress in Medical Physics
    • /
    • v.24 no.2
    • /
    • pp.99-107
    • /
    • 2013
  • Surgical resection was the main treatment for liver metastases before the development of modern radiation therapy techniques. Recently, the use of radiation therapy has gradually increased owing to advances in radiation dose delivery techniques. 18F-FDG PET images show better sensitivity and specificity for detecting liver metastases, and this imaging modality, together with the planning CT, is important for tumor delineation in radiation treatment. In this study, we applied automatic image segmentation methods to PET images of liver metastases and examined the impact of image factors on these methods. We selected patients who received radiation therapy and 18F-FDG PET/CT at the Korea Cancer Center Hospital from 2009 to 2012. Three image segmentation methods were then applied: the relative threshold method, the gradient method, and the region growing method. Based on these results, we performed statistical analyses in two directions: (1) comparison of the GTV with the image segmentation results, and (2) regression analysis of the image factors affecting the image segmentation techniques. The mean volume of the GTV was 60.9±65.9 cc, while GTV40% was 22.43±35.27 cc, GTV50% was 10.11±17.92 cc, GTVRG was 32.89±36.84 cc, and GTVGD was 30.34±35.77 cc. The segmentation result most similar to the GTV was that of the region growing method. For quantitative analysis of the image factors that influenced the region growing method, we used the standardized coefficient β; the factors affecting the region growing method were, in order, GTV, TumorSUVmax/min, SUVmax, and TBR. The region growing (automatic segmentation) method therefore showed the result most similar to the CT-based GTV and was affected by image factors. If the tumor volume is defined by an automatic image segmentation method that reflects the PET image parameters, more accurate and consistent tumor contouring can be achieved, and ultimately an optimized radiation dose can be delivered to the tumor.
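The relative-threshold segmentations mentioned in the abstract (GTV40%, GTV50%) include every voxel whose uptake exceeds a fixed fraction of SUVmax. The sketch below illustrates that definition on a synthetic volume; the voxel size and fractions are assumptions.

```python
# Hedged sketch of relative-threshold PET segmentation (e.g., 40% and 50% of SUVmax).
import numpy as np

rng = np.random.default_rng(4)
suv = rng.normal(1.0, 0.3, size=(48, 48, 48))   # background uptake (synthetic)
suv[20:30, 20:30, 20:30] += 8.0                 # hot region standing in for a lesion
voxel_volume_cc = 0.4 ** 3                      # assumed 4 mm isotropic voxels

suv_max = suv.max()
for fraction in (0.40, 0.50):
    mask = suv >= fraction * suv_max            # relative threshold segmentation
    volume_cc = mask.sum() * voxel_volume_cc
    print(f"GTV{int(fraction * 100)}%: {volume_cc:.1f} cc")
```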

A Study on Optimal Site Selection for Automatic Mountain Meteorology Observation System (AMOS): the Case of Honam and Jeju Areas (최적의 산악기상관측망 적정위치 선정 연구 - 호남·제주 권역을 대상으로)

  • Yoon, Sukhee;Won, Myoungsoo;Jang, Keunchang
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.18 no.4
    • /
    • pp.208-220
    • /
    • 2016
  • The Automatic Mountain Meteorology Observation System (AMOS) is an important component of many climatological and forest-disaster-prediction studies. In this study, we selected optimal sites for AMOS stations in the mountain areas of Honam and Jeju in order to help prevent forest disasters such as forest fires and landslides. To this end, spatial datasets such as the national forest map, forest roads, hiking trails, and a 30 m digital elevation model (DEM), as well as forest risk maps (forest fire and landslide) and national AWS information, were used for optimal AMOS site selection. The technical methods for optimal site selection were, first, a multifractal model, IDW interpolation, and spatial-redundancy analysis with 2.5 km AWS buffering and 200 m buffering in ArcGIS. Second, the candidate sites selected by the spatial analysis were evaluated in a field survey for site accessibility and for the suitability of the environment for solar power and wireless communication. The threshold score for final selection was 70 points in the field assessment. As a result, 159 polygons in the national forest map were extracted by the spatial analysis, and 64 secondary candidate sites on ridges and summits were selected using Google Earth. Finally, 26 optimal sites were selected by quantitative assessment based on the field survey. Our selection criteria will serve as a basis for establishing an AMOS network for the best possible observation of weather conditions in the national forests. An effective observation network may enhance mountain weather observation, which in turn leads to more accurate prediction of forest disasters.
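One of the spatial techniques listed above, inverse-distance-weighted (IDW) interpolation, can be sketched as follows; the station coordinates and values are toy assumptions, whereas the actual workflow ran on AWS observations in ArcGIS.

```python
# Toy sketch of inverse-distance-weighted (IDW) interpolation.
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-10):
    """Interpolate values at xy_query from scattered observation points xy_known."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** power                     # inverse-distance weights
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])   # km (assumed)
temps = np.array([12.0, 14.5, 11.0, 13.2])                                  # observed values (assumed)
query = np.array([[5.0, 5.0], [1.0, 9.0]])                                  # candidate site locations

print(np.round(idw(stations, temps, query), 2))
```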

Forecasting the Precipitation of the Next Day Using Deep Learning (딥러닝 기법을 이용한 내일강수 예측)

  • Ha, Ji-Hun;Lee, Yong Hee;Kim, Yong-Hyuk
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.26 no.2
    • /
    • pp.93-98
    • /
    • 2016
  • For accurate precipitation forecasts, the choice of weather factors and the prediction method are very important. Recently, machine learning has been widely used for forecasting precipitation, and artificial neural networks, one class of machine learning techniques, have shown good performance. In this paper, we suggest a new method for forecasting precipitation using a deep belief network (DBN), a deep learning technique. A DBN has the advantage that its initial weights are set by unsupervised pre-training, which compensates for shortcomings of conventional artificial neural networks. We used past precipitation, temperature, and parameters of the sun's and moon's motion as features for forecasting precipitation. The dataset consists of 40 years of observations from automatic weather stations (AWS) in Seoul. Experiments were based on 8-fold cross-validation. The model outputs precipitation probabilities for the test dataset, so a threshold was applied to make the final precipitation decision. CSI and bias were used as measures of forecast accuracy. Our experimental results showed that the DBN performed better than an MLP.
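The post-processing step described above, thresholding the predicted precipitation probabilities and scoring the resulting binary forecasts with CSI and bias, can be sketched as follows on toy data; the threshold value is an assumption.

```python
# Sketch: threshold predicted rain probabilities, then compute CSI and frequency bias.
import numpy as np

rng = np.random.default_rng(5)
prob = rng.uniform(0, 1, size=200)                        # model-predicted rain probabilities (toy)
obs = (rng.uniform(0, 1, size=200) < prob).astype(int)    # synthetic "observed" rain occurrences

threshold = 0.5                                           # assumed decision threshold
forecast = (prob >= threshold).astype(int)

hits = int(np.sum((forecast == 1) & (obs == 1)))
misses = int(np.sum((forecast == 0) & (obs == 1)))
false_alarms = int(np.sum((forecast == 1) & (obs == 0)))

csi = hits / (hits + misses + false_alarms)               # critical success index
bias = (hits + false_alarms) / (hits + misses)            # frequency bias
print(f"CSI = {csi:.3f}, bias = {bias:.3f}")
```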