• Title/Summary/Keyword: Commercial software


Effects of fermentation on protein profile of coffee by-products and its relationship with internal protein structure measured by vibrational spectroscopy

  • Samadi;Xin Feng;Luciana Prates;Siti Wajizah;Zulfahrizal;Agus Arip Munawar;Peiqiang Yu
    • Animal Bioscience
    • /
    • v.36 no.8
    • /
    • pp.1190-1198
    • /
    • 2023
  • Objective: To our knowledge, there are few studies on the correlation between the internal structure of fermented products and nutrient delivery from coffee-processing by-products in the ruminant system. The objective of this project was to use an advanced mid-infrared vibrational spectroscopic technique (ATR-FTIR) to reveal the interactive correlation between protein internal structure and ruminant-relevant protein and energy metabolic profiles of by-products from coffee processing affected by added-microorganism fermentation duration. Methods: The by-products from coffee processing were fermented using a commercial fermentation product, called Saus Burger Pakan, consisting of various microorganisms: cellulolytic, lactic acid, amylolytic, proteolytic, and xylanolytic microbes, for 0, 7, 14, 21, and 28 days. Protein chemical profiles, Cornell Net Carbohydrate and Protein System crude protein and CHO subfractions, and ruminal degradation and intestinal digestion of protein were evaluated. Attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy was used to study the protein structural features of the spectra that were affected by added-microorganism fermentation duration. The molecular spectral analyses were carried out using OMNIC software. Molecular spectral analysis parameters in fermented and non-fermented by-products from coffee processing included: Amide I area (AIA), Amide II area (AIIA), Amide I height (AIH), Amide II height (AIIH), α-helix height (αH), β-sheet height (βH), AIA to AIIA ratio, AIH to AIIH ratio, and αH to βH ratio. The relationship between the protein structure spectral profiles of by-products from coffee processing and protein-related metabolic features in ruminants was also investigated. Results: Fermentation decreased rumen degradable protein and increased rumen undegradable protein of by-products from coffee processing (p<0.05), indicating that more protein passed from the rumen to the small intestine for animal use. The fermentation duration significantly impacted (p<0.05) the protein structure spectral features. Fermentation tended to increase (p<0.10) AIA and AIH as well as β-sheet height, all of which are significantly related to protein level. Conclusion: The protein structure spectral profiles of by-products from coffee processing could be utilized as potential evaluators to estimate protein-related chemical profiles and protein metabolic characteristics in the ruminant system.
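A minimal sketch of how the amide-band parameters listed above (AIA, AIIA, AIH, AIIH, αH, βH, and their ratios) could be computed from a baseline-corrected ATR-FTIR absorbance spectrum. The band windows and band centers below are illustrative assumptions, not values taken from the paper, and this is not the OMNIC workflow used by the authors.

```python
import numpy as np

# Illustrative band limits/centers in cm^-1; assumptions, not values from the paper.
AMIDE_I = (1700.0, 1600.0)
AMIDE_II = (1600.0, 1500.0)
CENTERS = {"AIH": 1650.0, "AIIH": 1550.0, "alphaH": 1655.0, "betaH": 1630.0}

def band_area(wn, ab, hi, lo):
    """Integrated absorbance over a wavenumber window (hi > lo), trapezoidal rule."""
    m = (wn <= hi) & (wn >= lo)
    x, y = wn[m], ab[m]
    return abs(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def peak_height(wn, ab, center, half_width=5.0):
    """Maximum absorbance within +/- half_width cm^-1 of an assumed band center."""
    return ab[np.abs(wn - center) <= half_width].max()

def spectral_parameters(wn, ab):
    """Return the amide-band areas, heights, and ratios named in the abstract."""
    p = {"AIA": band_area(wn, ab, *AMIDE_I), "AIIA": band_area(wn, ab, *AMIDE_II)}
    p.update({name: peak_height(wn, ab, c) for name, c in CENTERS.items()})
    p["AIA/AIIA"] = p["AIA"] / p["AIIA"]
    p["AIH/AIIH"] = p["AIH"] / p["AIIH"]
    p["alphaH/betaH"] = p["alphaH"] / p["betaH"]
    return p
```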

Equivalent Stress Distribution of a Stepped Bar with Hole under Torsional Loading (구멍이 있는 단이 진 비틀림 봉의 등가응력분포)

  • Kang, Eun Hye;Kim, Young Chul;Kim, Myung Soo;Baek, Tae Hyun
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.3
    • /
    • pp.411-419
    • /
    • 2017
  • Stress concentration is one of the causes of damage, because the local stress is larger than the mean stress acting on the bar. This paper presents the stress results for a stepped bar with a hole under torsional loading. The stress concentration and shear stress were analyzed with ANSYS Workbench, a commercial finite element analysis software package. The analysis shows that the stresses at the fillet and the hole increase as the distance between them decreases. In addition, the distribution of the maximum equivalent stress developed at the fillet and the hole was almost constant in the analyzed models outside the specific distance range L (-100 mm to 300 mm). On the other hand, within the specific distance range L (-100 mm to 300 mm), the maximum equivalent stress developed at the fillet and the hole increased and decreased rapidly in the analyzed models. It was also possible to identify the locations where differences between the equivalent stresses of the hole and the fillet occurred within the specific distance range L (-100 mm to 300 mm). The analysis results of this paper can be used when selecting a hole location in a stepped bar under torsional loading.
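For readers unfamiliar with the quantity being mapped, the sketch below shows how a von Mises equivalent stress at the shoulder fillet of a solid circular shaft in pure torsion can be estimated by hand from a nominal shear stress and a stress concentration factor. The torque, diameter, and Kt values are assumed for illustration only and are not taken from the ANSYS models in the paper.

```python
import math

def nominal_shear_stress(torque_nmm, diameter_mm):
    """tau = 16*T / (pi*d^3) for a solid circular shaft (MPa if T in N*mm, d in mm)."""
    return 16.0 * torque_nmm / (math.pi * diameter_mm ** 3)

def equivalent_stress(torque_nmm, diameter_mm, kt):
    """Von Mises equivalent stress for pure shear: sigma_eq = sqrt(3) * Kt * tau."""
    return math.sqrt(3.0) * kt * nominal_shear_stress(torque_nmm, diameter_mm)

# Example with assumed values: 500 N*m torque, 40 mm smaller diameter, Kt = 1.6
print(f"{equivalent_stress(500e3, 40.0, 1.6):.1f} MPa")  # ~110 MPa with these assumptions
```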

Development of Intelligent OCR Technology to Utilize Document Image Data (문서 이미지 데이터 활용을 위한 지능형 OCR 기술 개발)

  • Kim, Sangjun;Yu, Donghui;Hwang, Soyoung;Kim, Minho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.212-215
    • /
    • 2022
  • In today's era of so-called digital transformation, the need to construct and utilize big data in various fields has increased. Today, most data is produced and stored in a digital, media-friendly manner, but for a long time in the past the production and storage of data were dominated by printed books. Therefore, Optical Character Recognition (OCR) technology is needed to utilize the vast number of printed books accumulated over a long time as big data. In this study, a system for digitizing the structure and content of the document objects inside a scanned book image is proposed. The proposed system largely consists of the following three steps: 1) recognition of region information for each document object (table, equation, picture, text body) in the scanned book image; 2) OCR processing of each region by the corresponding text body, table, or formula module according to the recognized document object regions; 3) aggregation of the processed document information and return in JSON format. The model proposed in this study uses an open-source project with additional training and improvements. The intelligent OCR system proposed in this study showed performance at the level of commercial OCR software in processing the four types of document objects (table, equation, image, text body).

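A minimal sketch of the three-step pipeline described above (region detection, per-region OCR, JSON aggregation). The `detect_regions` and `recognize` functions are hypothetical placeholders standing in for the authors' detection and OCR modules, not their actual implementation.

```python
import json
from dataclasses import dataclass, asdict
from typing import List, Tuple

@dataclass
class Region:
    kind: str                        # "table" | "equation" | "picture" | "text"
    bbox: Tuple[int, int, int, int]  # (x, y, width, height) in pixels
    content: str = ""

def detect_regions(page_image) -> List[Region]:
    """Step 1: layout analysis. Placeholder: a trained layout detector would go here."""
    return []  # hypothetical stub

def recognize(region: Region, page_image) -> str:
    """Step 2: dispatch the region to a type-specific OCR module. Placeholder stub."""
    return ""  # hypothetical stub

def process_page(page_image) -> str:
    """Step 3: gather the recognized objects and return them as JSON."""
    regions = detect_regions(page_image)
    for region in regions:
        region.content = recognize(region, page_image)
    return json.dumps([asdict(r) for r in regions], ensure_ascii=False, indent=2)
```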

Empirical and Numerical Analyses of a Small Planing Ship Resistance using Longitudinal Center of Gravity Variations (경험식과 수치해석을 이용한 종방향 무게중심 변화에 따른 소형선박의 저항성능 변화에 관한 연구)

  • Michael;Jun-Taek Lim;Nam-Kyun Im;Kwang-Cheol Seo
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.7
    • /
    • pp.971-979
    • /
    • 2023
  • Small ships (<499 GT) constitute 46% of existing ships; therefore, it can be concluded that they produce relatively high CO2 emissions. Operating in optimal trim conditions can reduce the resistance of a ship, which results in lower greenhouse gas emissions. An affordable way to optimize trim is to adjust the weight distribution to obtain an optimum longitudinal center of gravity (LCG). Therefore, in this study, the effect of LCG changes on the resistance of a small planing ship is studied using empirical and numerical analyses. The Savitsky method, employing Maxsurf Resistance, and the commercial computational fluid dynamics (CFD) software STAR-CCM+ are used for the empirical and numerical analyses, respectively. Finally, the total resistance from the ship design process is compared to obtain the optimum LCG. In summary, using numerical analysis, the optimum LCG is achieved at 46.2% of the length overall (LoA) at Froude number 0.56 and at 43.4% LoA at Froude number 0.63, which provides a significant resistance reduction of 41.12-45.16% compared with the reference point at 29.2% LoA.
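As a small worked example of the non-dimensional speed quoted above, the sketch below converts a speed and hull length into a length-based Froude number and expresses an LCG position as a percentage of LoA. The 12 m hull, 6 m/s speed, and 5.5 m LCG are assumed values, not data from the study.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(speed_mps: float, length_m: float) -> float:
    """Length-based Froude number Fn = V / sqrt(g * L)."""
    return speed_mps / math.sqrt(G * length_m)

def lcg_percent_loa(lcg_from_transom_m: float, loa_m: float) -> float:
    """LCG position expressed as a percentage of LoA, measured from the transom."""
    return 100.0 * lcg_from_transom_m / loa_m

# Example with assumed dimensions: a 12 m planing hull at 6.0 m/s
print(f"Fn = {froude_number(6.0, 12.0):.2f}")          # ~0.55
print(f"LCG = {lcg_percent_loa(5.5, 12.0):.1f}% LoA")  # ~45.8% LoA
```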

Evaluation of Vertical Vibration Performance of Tridimensional Hybrid Isolation System for Traffic Loads (교통하중에 대한 3차원 하이브리드 면진시스템의 수직 진동성능 평가)

  • Yonghun Lee;Sang-Hyun Lee;Moo-Won Hur
    • Journal of the Korea Institute for Structural Maintenance and Inspection
    • /
    • v.28 no.1
    • /
    • pp.70-81
    • /
    • 2024
  • In this study, a Tridimensional Hybrid Isolation System (THIS), combining vertical and horizontal isolation systems, was proposed as a vibration isolator for traffic loads, and its efficacy in improving serviceability for vertical vibration was analytically evaluated. First, the major vibration modes of an existing apartment building were identified through eigenvalue analysis of the system and pulse response analysis of the bedroom slab using commercial structural analysis software. Subsequently, a 16-story model with horizontal, vertical, and rotational degrees of freedom for each slab was numerically constructed to represent the identified modes. Dynamic analysis of the acceleration measured on the ground adjacent to a high-speed railway was performed using state-space equations, with the stiffness and damping ratio of THIS as variables. The results indicated that, as the vertical period ratio increased, the threshold period ratio at which the slab response started to be suppressed varied. Specifically, when the period ratio was greater than or equal to 5, the acceleration levels of all slabs decreased to approximately 70% or less of those in the non-isolated condition. On the other hand, the influence of the damping ratio on the response control of THIS was found to be negligible in the analysis. Finally, the improvement in the vertical vibration performance of THIS was evaluated according to the AIJ, SCI, and AISC design guidelines for floor vibration. It was confirmed that, after the application of THIS, the residential performance criteria were met, whereas the non-isolated structure failed to satisfy them.
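A minimal sketch of the kind of state-space formulation mentioned above, reduced to a single vertically isolated slab driven by a ground acceleration record. The mass, stiffness, damping ratio, and sinusoidal input are assumed for illustration and do not represent the authors' 16-story model or the measured railway record.

```python
import numpy as np
from scipy.signal import lti, lsim

m = 2.0e4    # slab mass, kg (assumed)
k = 2.0e6    # isolation-layer stiffness, N/m (assumed)
zeta = 0.05  # damping ratio (assumed)
c = 2.0 * zeta * np.sqrt(k * m)

# Relative-coordinate equation of motion: m*x'' + c*x' + k*x = -m*a_g(t)
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [-1.0]])      # input: ground acceleration a_g
C = np.array([[-k / m, -c / m]])   # output: absolute slab acceleration x'' + a_g
D = np.array([[0.0]])

t = np.linspace(0.0, 10.0, 2001)
a_g = 0.1 * np.sin(2 * np.pi * 8.0 * t)  # placeholder for a measured record, m/s^2

_, a_abs, _ = lsim(lti(A, B, C, D), a_g, t)
print(f"peak slab acceleration: {np.max(np.abs(a_abs)):.3f} m/s^2")
```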

Deep Learning-Based Computed Tomography Image Standardization to Improve Generalizability of Deep Learning-Based Hepatic Segmentation

  • Seul Bi Lee;Youngtaek Hong;Yeon Jin Cho;Dawun Jeong;Jina Lee;Soon Ho Yoon;Seunghyun Lee;Young Hun Choi;Jung-Eun Cheon
    • Korean Journal of Radiology
    • /
    • v.24 no.4
    • /
    • pp.294-304
    • /
    • 2023
  • Objective: We aimed to investigate whether image standardization using deep learning-based computed tomography (CT) image conversion would improve the performance of deep learning-based automated hepatic segmentation across various reconstruction methods. Materials and Methods: We collected contrast-enhanced dual-energy CT scans of the abdomen that were obtained using various reconstruction methods, including filtered back projection, iterative reconstruction, optimum contrast, and monoenergetic images at 40, 60, and 80 keV. A deep learning-based image conversion algorithm was developed to standardize the CT images using 142 CT examinations (128 for training and 14 for tuning). A separate set of 43 CT examinations from 42 patients (mean age, 10.1 years) was used as the test data. A commercial software program (MEDIP PRO v2.0.0.0, MEDICALIP Co. Ltd.) based on 2D U-NET was used to create liver segmentation masks with liver volume. The original 80 keV images were used as the ground truth. We used the paired t-test to compare the segmentation performance in terms of the Dice similarity coefficient (DSC) and the difference ratio of the liver volume relative to the ground-truth volume before and after image standardization. The concordance correlation coefficient (CCC) was used to assess the agreement between the segmented liver volume and the ground-truth volume. Results: The original CT images showed variable and poor segmentation performance. The standardized images achieved significantly higher DSCs for liver segmentation than the original images (DSC [original, 5.40%-91.27%] vs. [standardized, 93.16%-96.74%], all P < 0.001). The difference ratio of liver volume also decreased significantly after image conversion (original, 9.84%-91.37% vs. standardized, 1.99%-4.41%). In all protocols, CCCs improved after image conversion (original, -0.006 to 0.964 vs. standardized, 0.990-0.998). Conclusion: Deep learning-based CT image standardization can improve the performance of automated hepatic segmentation using CT images reconstructed with various methods. Deep learning-based CT image conversion may have the potential to improve the generalizability of the segmentation network.
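A minimal sketch of the two evaluation metrics reported above: the Dice similarity coefficient between binary liver masks and the volume difference ratio relative to the ground-truth volume. The toy arrays and volumes are made up for illustration; this is not the MEDIP PRO implementation.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def volume_difference_ratio(pred_volume_ml: float, truth_volume_ml: float) -> float:
    """Absolute volume difference as a percentage of the ground-truth volume."""
    return 100.0 * abs(pred_volume_ml - truth_volume_ml) / truth_volume_ml

# Tiny synthetic example with made-up masks and volumes
truth = np.zeros((4, 4, 4), dtype=bool)
truth[1:3, 1:3, 1:3] = True
pred = truth.copy()
pred[1, 1, 1] = False
print(f"DSC = {dice_coefficient(pred, truth):.3f}")
print(f"volume difference = {volume_difference_ratio(7.0, 8.0):.1f}%")
```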

Automated Measurement of Native T1 and Extracellular Volume Fraction in Cardiac Magnetic Resonance Imaging Using a Commercially Available Deep Learning Algorithm

  • Suyon Chang;Kyunghwa Han;Suji Lee;Young Joong Yang;Pan Ki Kim;Byoung Wook Choi;Young Joo Suh
    • Korean Journal of Radiology
    • /
    • v.23 no.12
    • /
    • pp.1251-1259
    • /
    • 2022
  • Objective: T1 mapping provides valuable information regarding cardiomyopathies. Manual drawing is time-consuming and prone to subjective errors. Therefore, this study aimed to test a deep learning (DL) algorithm for the automated measurement of native T1 and extracellular volume (ECV) fractions in cardiac magnetic resonance (CMR) imaging with a temporally separated dataset. Materials and Methods: CMR images obtained for 95 participants (mean age ± standard deviation, 54.5 ± 15.2 years), including 36 with left ventricular hypertrophy (12 hypertrophic cardiomyopathy, 12 Fabry disease, and 12 amyloidosis), 32 with dilated cardiomyopathy, and 27 healthy volunteers, were included. A commercial DL algorithm based on 2D U-net (Myomics-T1 software, version 1.0.0) was used for the automated analysis of T1 maps. Four radiologists, as study readers, performed manual analysis. The reference standard was the consensus result of the manual analysis by two additional expert readers. The segmentation performance of the DL algorithm and the correlation and agreement between the automated measurements and the reference standard were assessed. Interobserver agreement among the four radiologists was analyzed. Results: DL successfully segmented the myocardium in 99.3% of slices in the native T1 map and 89.8% of slices in the post-T1 map, with Dice similarity coefficients of 0.86 ± 0.05 and 0.74 ± 0.17, respectively. Native T1 and ECV showed strong correlation and agreement between DL and the reference: for T1, r = 0.967 (95% confidence interval [CI], 0.951-0.978) and bias of 9.5 msec (95% limits of agreement [LOA], -23.6 to 42.6 msec); for ECV, r = 0.987 (95% CI, 0.980-0.991) and bias of 0.7% (95% LOA, -2.8% to 4.2%) on a per-subject basis. Agreement between DL and each of the four radiologists was excellent (intraclass correlation coefficient [ICC] of 0.98-0.99 for both native T1 and ECV), comparable to the pairwise agreement between the radiologists (ICC of 0.97-1.00 and 0.99-1.00 for native T1 and ECV, respectively). Conclusion: The DL algorithm allowed automated T1 and ECV measurements comparable to those of radiologists.
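The bias and 95% limits of agreement quoted above come from a Bland-Altman analysis. The sketch below shows that calculation on a handful of hypothetical per-subject native T1 values; the numbers are invented, not study data.

```python
import numpy as np

def bland_altman(automated: np.ndarray, reference: np.ndarray):
    """Return (bias, lower LOA, upper LOA) = mean difference +/- 1.96 * SD of differences."""
    diff = automated - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical per-subject native T1 values (msec)
auto = np.array([1005.0, 1190.0, 980.0, 1255.0, 1102.0])
ref = np.array([998.0, 1178.0, 985.0, 1240.0, 1090.0])
bias, lo, hi = bland_altman(auto, ref)
print(f"bias = {bias:.1f} msec, 95% LOA = {lo:.1f} to {hi:.1f} msec")
```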

Establishment of Local Diagnostic Reference Levels of Pediatric Abdominopelvic and Chest CT Examinations Based on the Body Weight and Size in Korea

  • Jae-Yeon Hwang;Young Hun Choi;Hee Mang Yoon;Young Jin Ryu;Hyun Joo Shin;Hyun Gi Kim;So Mi Lee;Sun Kyung You;Ji Eun Park
    • Korean Journal of Radiology
    • /
    • v.22 no.7
    • /
    • pp.1172-1184
    • /
    • 2021
  • Objective: The purposes of this study were to analyze the radiation doses for pediatric abdominopelvic and chest CT examinations at university hospitals in Korea and to establish local diagnostic reference levels (DRLs) based on body weight and size. Materials and Methods: At seven university hospitals in Korea, 2494 CT examinations of patients aged 15 years or younger (1625 abdominopelvic and 869 chest CT examinations) performed between January and December 2017 were analyzed in this study. CT scans were transferred to commercial automated dose management software for analysis after being de-identified. DRLs were calculated after grouping the patients according to body weight and effective diameter. DRLs were set at the 75th percentile of the distribution of each institution's typical values. Results: For body weights of 5, 15, 30, 50, and 80 kg, the DRLs (volume CT dose index [CTDIvol]) were 1.4, 2.2, 2.7, 4.0, and 4.7 mGy, respectively, for abdominopelvic CT and 1.2, 1.5, 2.3, 3.7, and 5.8 mGy, respectively, for chest CT. For effective diameters of < 13 cm, 14-16 cm, 17-20 cm, 21-24 cm, and > 24 cm, the DRLs (size-specific dose estimates [SSDE]) were 4.1, 5.0, 5.7, 7.1, and 7.2 mGy, respectively, for abdominopelvic CT and 2.8, 4.6, 4.3, 5.3, and 7.5 mGy, respectively, for chest CT. SSDE was greater than CTDIvol in all age groups. Overall, the local DRLs were lower than the DRLs from previously conducted dose surveys and from other countries. Conclusion: Our study set local DRLs for pediatric abdominopelvic and chest CT examinations based on body weight and size. Further research involving more facilities and CT examinations is required to develop national DRLs and update the current DRLs.
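A minimal sketch of the DRL derivation described above: one typical value per institution, with the DRL set at the 75th percentile of that distribution. The seven hospital labels and CTDIvol values are hypothetical, not the study data.

```python
import numpy as np

# One typical value (e.g., median CTDIvol, mGy) per institution for a given weight group.
# Values are hypothetical placeholders.
typical_values = {
    "hospital_A": 2.1, "hospital_B": 2.6, "hospital_C": 1.9,
    "hospital_D": 3.0, "hospital_E": 2.4, "hospital_F": 2.8, "hospital_G": 2.2,
}

# DRL = 75th percentile of the distribution of institutional typical values
drl = np.percentile(list(typical_values.values()), 75)
print(f"DRL (75th percentile of typical values) = {drl:.1f} mGy")
```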

Construction and Evaluation of Cohort Based Model for Predicting Population Dynamics of Riptortus pedestris (Fabricius) (Hemiptera: Alydidae) Using DYMEX (상용소프트웨어(DYMEX)를 이용한 톱다리개미허리노린재(Riptortus pedestris) 밀도 변동 양상 예측 모델 구축 및 평가)

  • Park, Chang-Gyu;Yum, Ki-Hong;Lee, Sang-Ku;Lee, Sang-Guei
    • Korean Journal of Applied Entomology
    • /
    • v.54 no.2
    • /
    • pp.73-81
    • /
    • 2015
  • A cohort-based model for the temperature-dependent population dynamics of Riptortus pedestris was constructed using commercial software (DYMEX), and the seasonal occurrence, along with the effect of pesticide treatments, was simulated and validated against pheromone trap data. Ten modules of the DYMEX software were used to construct the model, and the Lifecycle module consisted of the seven developmental stages (egg, nymphal instars 1-5, and adult) of R. pedestris. Simulated peaks of the adult population occurred three or four times after the peak of the overwintered population, which was similar to the pattern observed in the pheromone trap catches. The estimated dates for the second peak were quite similar (1-2 day difference) to those observed with the pheromone traps. However, the estimated dates for the first population peak were 9-16 days later than the dates observed by pheromone trap, and the estimated dates for the last peak were 17-23 days earlier than the observed dates. When insecticide treatments were included in the simulation, the largest decrease in R. pedestris adult density occurred when insecticide was applied on July 1 against the first peak population: the estimated adult density of the second peak was 3% of the untreated population density. When insecticide was assumed to be applied on August 30 against the second peak population, the estimated adult density of the following generation was about 25% of the untreated population, and the peak density of the following generation was reached about two weeks later than in the untreated population. These results can be used to develop efficient management strategies for R. pedestris populations.
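The sketch below illustrates, in a much-simplified form, what a cohort-based, temperature-driven, stage-structured simulation does: a cohort accumulates temperature-dependent development and moves through the seven life stages named above. The linear degree-day rule, the 10°C threshold, and the 80 degree-days per stage are assumptions for illustration only, not the DYMEX parameterization used in the paper.

```python
# Assumed stage list and thermal constants; not taken from the paper.
STAGES = ["egg", "N1", "N2", "N3", "N4", "N5", "adult"]
LOWER_THRESHOLD_C = 10.0      # assumed lower developmental threshold
DEGREE_DAYS_PER_STAGE = 80.0  # assumed thermal requirement per stage

def simulate_cohort(daily_mean_temps_c):
    """Return the developmental stage of one cohort after each simulated day."""
    stage, accumulated = 0, 0.0
    history = []
    for temp in daily_mean_temps_c:
        accumulated += max(temp - LOWER_THRESHOLD_C, 0.0)  # degree-days for the day
        while accumulated >= DEGREE_DAYS_PER_STAGE and stage < len(STAGES) - 1:
            accumulated -= DEGREE_DAYS_PER_STAGE
            stage += 1
        history.append(STAGES[stage])
    return history

# Example: 120 days with a crude seasonal temperature ramp
temps = [15.0 + 0.1 * d for d in range(120)]
print(simulate_cohort(temps)[-1])  # stage reached at the end of the run
```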

Social Network-based Hybrid Collaborative Filtering using Genetic Algorithms (유전자 알고리즘을 활용한 소셜네트워크 기반 하이브리드 협업필터링)

  • Noh, Heeryong;Choi, Seulbi;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.19-38
    • /
    • 2017
  • The collaborative filtering (CF) algorithm has been widely used for implementing recommender systems. Until now, there have been many prior studies aimed at improving the accuracy of CF. Among them, some recent studies adopt a 'hybrid recommendation approach', which enhances the performance of conventional CF by using additional information. In this research, we propose a new hybrid recommender system that fuses CF with the results of social network analysis on trust and distrust relationship networks among users to enhance prediction accuracy. The proposed algorithm of our study is based on memory-based CF. However, when calculating the similarity between users in CF, our proposed algorithm considers not only the correlation of the users' numeric rating patterns but also the users' in-degree centrality values derived from the trust and distrust relationship networks. Specifically, it is designed to amplify the similarity between a target user and a neighbor when the neighbor has higher in-degree centrality in the trust relationship network, and to attenuate the similarity between a target user and a neighbor when the neighbor has higher in-degree centrality in the distrust relationship network. Our proposed algorithm considers four types of user relationships in total - direct trust, indirect trust, direct distrust, and indirect distrust. It uses four adjusting coefficients, which adjust the level of amplification/attenuation for the in-degree centrality values derived from the direct/indirect trust and distrust relationship networks. To determine the optimal adjusting coefficients, a genetic algorithm (GA) was adopted. Against this background, we named our proposed algorithm SNACF-GA (Social Network Analysis-based CF using GA). To validate the performance of SNACF-GA, we used a real-world data set called the 'Extended Epinions dataset' provided by 'trustlet.org'. This data set contains user responses (rating scores and reviews) after purchasing specific items (e.g., car, movie, music, book) as well as trust/distrust relationship information indicating whom each user trusts or distrusts. The experimental system was basically developed using Microsoft Visual Basic for Applications (VBA), but we also used UCINET 6 to calculate the in-degree centrality of the trust/distrust relationship networks. In addition, we used Palisade Software's Evolver, a commercial software package that implements genetic algorithms. To examine the effectiveness of our proposed system more precisely, we adopted two comparison models. The first comparison model is conventional CF. It uses only users' explicit numeric ratings when calculating the similarities between users; that is, it does not consider trust/distrust relationships between users at all. The second comparison model is SNACF (Social Network Analysis-based CF). SNACF differs from the proposed algorithm SNACF-GA in that it considers only direct trust/distrust relationships, and it does not use GA optimization. The performance of the proposed algorithm and the comparison models was evaluated using the average MAE (mean absolute error). The experimental results showed that the optimal adjusting coefficients for direct trust, indirect trust, direct distrust, and indirect distrust were 0, 1.4287, 1.5, and 0.4615, respectively. This implies that distrust relationships between users are more important than trust ones in recommender systems. From the perspective of recommendation accuracy, SNACF-GA (avg. MAE = 0.111943), the proposed algorithm which reflects both direct and indirect trust/distrust relationship information, was found to greatly outperform conventional CF (avg. MAE = 0.112638). The algorithm also showed better recommendation accuracy than SNACF (avg. MAE = 0.112209). To confirm whether these differences are statistically significant, we applied paired-samples t-tests. The results of the paired-samples t-tests showed that the difference between SNACF-GA and conventional CF was statistically significant at the 1% significance level, and the difference between SNACF-GA and SNACF was statistically significant at the 5% level. Our study found that trust/distrust relationships can be important information for improving the performance of recommendation algorithms. In particular, distrust relationship information was found to have a greater impact on the performance improvement of CF. This implies that we need to pay more attention to distrust (negative) relationships rather than trust (positive) ones when tracking and managing social relationships between users.
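A minimal sketch of the centrality-adjusted similarity idea described above: a Pearson similarity that is amplified by the neighbor's in-degree centrality in the trust network and attenuated by it in the distrust network. The multiplicative form and the example ratings/centralities are illustrative assumptions, not the authors' exact formula; in the paper the four adjusting coefficients are tuned by a GA rather than fixed by hand.

```python
import numpy as np

def pearson_similarity(ratings_u, ratings_v):
    """Pearson correlation over co-rated items (NaN marks missing ratings)."""
    mask = ~np.isnan(ratings_u) & ~np.isnan(ratings_v)
    if mask.sum() < 2:
        return 0.0
    return float(np.corrcoef(ratings_u[mask], ratings_v[mask])[0, 1])

def adjusted_similarity(ratings_u, ratings_v,
                        trust_centrality_v, distrust_centrality_v,
                        w_trust=1.0, w_distrust=1.0):
    """Amplify by neighbor v's trust centrality, attenuate by its distrust centrality."""
    base = pearson_similarity(ratings_u, ratings_v)
    return base * (1.0 + w_trust * trust_centrality_v) \
                / (1.0 + w_distrust * distrust_centrality_v)

# Example with made-up ratings and normalized in-degree centralities
u = np.array([5.0, 3.0, np.nan, 4.0, 2.0])
v = np.array([4.0, 3.0, 5.0, 5.0, 1.0])
print(adjusted_similarity(u, v, trust_centrality_v=0.6, distrust_centrality_v=0.1))
```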