• Title/Summary/Keyword: target precision


Insight from sirtuins interactome: topological prominence and multifaceted roles of SIRT1 in modulating immunity, aging, and cancer

  • Nur Diyana Zulkifli;Nurulisa Zulkifle
    • Genomics & Informatics
    • /
    • v.21 no.2
    • /
    • pp.23.1-23.9
    • /
    • 2023
  • The mammalian sirtuin family, consisting of SIRT1-SIRT7, plays a vital role in various biological processes, including cancer, diabetes, neurodegeneration, cardiovascular disease, cellular metabolism, and the maintenance of cellular homeostasis. Because of their involvement in these processes, modulating sirtuin activity is a promising way to influence immune- and aging-related diseases as well as cancer pathways. However, a better understanding of the safety and efficacy of sirtuin-targeted therapies is required because of the complex regulatory mechanisms that govern their activity, particularly in the context of multiple targets. In this study, the interaction landscape of the sirtuin family was analyzed using a systems biology approach. A sirtuin protein-protein interaction network was built on the Cytoscape platform and analyzed using the NetworkAnalyzer and stringApp plugins. The results revealed the sirtuin family's association with numerous proteins that play diverse roles, suggesting a complex interplay between sirtuins and other proteins. Based on network topological and functional analysis, SIRT1 was identified as the most prominent member of the sirtuin family: 25 of its protein partners are involved in cancer, 22 in the innate immune response, and 29 in aging, with some linked to two or more of these pathways. This study lays the foundation for the development of novel therapies that can target sirtuins with precision and efficacy. By illustrating the various interactions among the proteins in the sirtuin family, we have revealed the multifaceted roles of SIRT1 and provided a framework for these roles to be precisely understood, manipulated, and translated into therapeutics in the future.
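
As a rough illustration of the topological analysis described in this abstract, the sketch below computes degree and betweenness centrality for a tiny, hypothetical sirtuin protein-protein interaction network using networkx rather than the Cytoscape/NetworkAnalyzer workflow the authors used; the edge list is a placeholder, not the study's dataset.

```python
# Minimal sketch of network topological analysis; edges are hypothetical.
import networkx as nx

# Hypothetical sirtuin PPI edges (protein A interacts with protein B)
edges = [
    ("SIRT1", "TP53"), ("SIRT1", "FOXO3"), ("SIRT1", "NFKB1"),
    ("SIRT2", "TUBA1A"), ("SIRT3", "SOD2"), ("SIRT6", "H3C1"),
]
G = nx.Graph(edges)

# Topological prominence: degree and betweenness centrality per node
degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)

for node in sorted(G.nodes, key=lambda n: degree[n], reverse=True):
    print(f"{node}: degree={degree[node]}, betweenness={betweenness[node]:.3f}")
```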

Application of Performance Based Mixture Design (PBMD) for High Strength Concrete (고강도 콘크리트의 성능기반형 배합설계방법)

  • Kim, Jang-Ho Jay;Oh, Il Sun;Phan, Duc Hung;Lee, Keun Sung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.6A
    • /
    • pp.561-572
    • /
    • 2010
  • This paper studies the application of the recently proposed Performance Based Mixture Design (PBMD) method to high strength concrete (HSC), in order to obtain HSC mix proportions that satisfy required performances. The PBMD method, which uses a satisfaction curve based on a Bayesian method, is a performance-oriented concrete mix proportion design procedure that is easily applicable to any condition and environment and is a possible replacement for the current prescriptive design standards. Based on extensive experimental results obtained for various materials and performance parameters of HSC, the feasibility of applying the developed PBMD procedure to HSC has been verified. The proposed PBMD procedure has also been used in application examples to obtain desired target performances of HSC with optimum concrete mixture proportions using locally available materials, local environmental conditions, and available concrete production technologies. The validity and precision of the HSC mix proportion design obtained with the PBMD method are verified against the experimental and ACI-based results to check its feasibility for actual design use.
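
The satisfaction-curve idea can be illustrated with a minimal sketch: for each candidate mix parameter, the probability of satisfying a target performance is estimated from pass/fail trial data with a simple Beta-Binomial update. The water-to-binder ratios and trial counts below are invented for illustration and do not correspond to the paper's experiments or its actual Bayesian formulation.

```python
# Minimal satisfaction-curve sketch with a Beta-Binomial update; all numbers hypothetical.
wb_ratios = [0.25, 0.30, 0.35, 0.40]   # candidate water-to-binder ratios
trials    = [10, 10, 10, 10]           # specimens tested per ratio (hypothetical)
satisfied = [10, 9, 6, 2]              # specimens meeting the target strength (hypothetical)

alpha0, beta0 = 1.0, 1.0               # uniform Beta prior on the satisfaction probability

for wb, n, k in zip(wb_ratios, trials, satisfied):
    # Posterior mean of the satisfaction probability after observing k successes in n trials
    p_satisfy = (alpha0 + k) / (alpha0 + beta0 + n)
    print(f"w/b={wb:.2f}: P(strength target satisfied) ~ {p_satisfy:.2f}")
```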

Classifying Social Media Users' Stance: Exploring Diverse Feature Sets Using Machine Learning Algorithms

  • Kashif Ayyub;Muhammad Wasif Nisar;Ehsan Ullah Munir;Muhammad Ramzan
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.79-88
    • /
    • 2024
  • The use of social media has become part of our daily lives. Social web channels provide content-generation facilities to their users, who can share their views, opinions, and experiences on certain topics, and researchers use this content in various research areas. Sentiment analysis, one of the most active research areas of the last decade, is the process of extracting the reviews, opinions, and sentiments of people. It is applied in diverse sub-areas such as subjectivity analysis, polarity detection, and emotion detection. Stance classification has emerged as a new and interesting research area, as it aims to determine whether the content writer is in favor of, against, or neutral towards a target topic or issue. Stance classification is significant because it has many research applications, such as rumor stance classification, stance classification towards public forums, claim stance classification, neural attention stance classification, online debate stance classification, and dialogic properties stance classification. This study explores different feature sets, such as lexical, sentiment-specific, and dialog-based features, extracted from the standard datasets in this area. Supervised learning approaches were applied, including the generative Naïve Bayes algorithm and discriminative machine learning algorithms such as Support Vector Machine, Decision Tree, and k-Nearest Neighbor, followed by ensemble-based algorithms such as Random Forest and AdaBoost. The empirical results were evaluated using the standard performance measures of accuracy, precision, recall, and F-measure.
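
A minimal sketch of this kind of pipeline, assuming scikit-learn and toy texts in place of the standard stance datasets: lexical TF-IDF features fed to a generative learner (Naïve Bayes), a discriminative learner (linear SVM), and an ensemble (Random Forest), evaluated with accuracy, precision, recall, and F-measure.

```python
# Hypothetical stance-classification sketch; texts and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

texts  = ["totally support this policy", "this claim is false", "no opinion either way",
          "strongly agree with the author", "completely disagree", "not sure about this"]
stance = ["favor", "against", "neutral", "favor", "against", "neutral"]

X = TfidfVectorizer().fit_transform(texts)            # lexical TF-IDF features
X_tr, X_te, y_tr, y_te = train_test_split(X, stance, test_size=0.5,
                                          random_state=0, stratify=stance)

for clf in (MultinomialNB(), LinearSVC(), RandomForestClassifier(random_state=0)):
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(clf.__class__.__name__)
    print(classification_report(y_te, y_pred, zero_division=0))
```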

Determination of Flunixin and 5-Hydroxy Flunixin Residues in Livestock and Fishery Products Using Liquid Chromatography-Tandem Mass Spectrometry (LC-MS/MS)

  • Dahae Park;Yong Seok Choi;Ji-Young Kim;Jang-Duck Choi;Gui-Im Moon
    • Food Science of Animal Resources
    • /
    • v.44 no.4
    • /
    • pp.873-884
    • /
    • 2024
  • Flunixin is a veterinary nonsteroidal anti-inflammatory agent whose residues have been investigated in their original form in tissues such as muscle and liver. However, flunixin remains in milk as a metabolite, and 5-hydroxy flunixin has been used as the primary marker for its surveillance. This study aimed to develop a quantitative method for detecting flunixin and 5-hydroxy flunixin in milk and to strengthen the monitoring system by applying it to other livestock and fishery products. Two different methods were compared; the target compounds were extracted from milk using an organic solvent, purified with C18, concentrated, and reconstituted in a methanol-based solvent. After filtering, the final sample was analyzed by liquid chromatography-tandem mass spectrometry. Method 1 is environmentally friendly because it uses few reagents and is based on a multi-residue, multi-class analysis method approved by the Ministry of Food and Drug Safety. The accuracy and precision of both methods were 84.6%-115% and 0.7%-9.3%, respectively. Owing to the low matrix effect in milk and its convenience, Method 1 was evaluated for other matrices (beef, chicken, egg, flatfish, and shrimp), and its recovery and coefficient of variation were sufficient according to the Codex criteria (CAC/GL 71-2009). The limits of detection and quantification were 2-8 and 5-27 µg/kg for flunixin and 2-10 and 6-33 µg/kg for 5-hydroxy flunixin, respectively. This study can serve as a monitoring method for a positive list system that regulates veterinary drug residues in all livestock and fishery products.
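
One common way to derive LOD/LOQ figures like those above is from a calibration curve (ICH-style, 3.3·σ/slope and 10·σ/slope); the sketch below shows that arithmetic with hypothetical concentrations and peak areas, not the flunixin data reported in the study, whose exact LOD/LOQ procedure is not given in the abstract.

```python
# Hedged LOD/LOQ sketch from a calibration curve; all values hypothetical.
import numpy as np

conc = np.array([1, 2, 5, 10, 20, 50])                 # spiked concentration, ug/kg (hypothetical)
area = np.array([110, 205, 520, 990, 2010, 5050])      # LC-MS/MS peak area (hypothetical)

slope, intercept = np.polyfit(conc, area, 1)           # linear calibration fit
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                          # residual standard deviation of the fit

lod = 3.3 * sigma / slope                              # ICH-style detection limit
loq = 10.0 * sigma / slope                             # ICH-style quantification limit
print(f"slope={slope:.1f}, LOD={lod:.2f} ug/kg, LOQ={loq:.2f} ug/kg")
```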

Development of Operation System for Satellite Laser Ranging on Geochang Station (거창 인공위성 레이저 추적을 위한 운영 시스템 개발)

  • Ki-Pyoung Sung;Hyung-Chul Lim;Man-Soo Choi;Sung-Yeol Yu
    • Journal of Space Technology and Applications
    • /
    • v.4 no.2
    • /
    • pp.169-183
    • /
    • 2024
  • The Korea Astronomy and Space Science Institute (KASI) developed the Geochang satellite laser ranging (SLR) system for scientific research on space geodesy as well as for national space missions, including precise orbit determination and space surveillance. The operation system was developed on a server-client communication structure; it controls the SLR subsystems, provides manual and automatic observation modes based on the observation algorithm, generates range data between satellites and SLR stations, and carries out post-processing to remove noise. In this study, we analyzed the requirements of the operation system and presented the development environment, software structure, and observation algorithm for the server-client communications. We also obtained laser ranging data for a ground target and a space geodetic satellite, and then analyzed the ranging precision between the Geochang SLR station and International Laser Ranging Service (ILRS) network stations in order to verify the operation system.
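
A very small sketch of the server-client pattern mentioned above: a subsystem server answers a single text command ("STATUS") over TCP. The real KASI operation system, its protocol, and its commands are not described in the abstract, so everything here, including the port number and command names, is hypothetical.

```python
# Hypothetical server-client exchange for one SLR subsystem; protocol is illustrative only.
import socket
import threading
import time

def subsystem_server(host="127.0.0.1", port=5555):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(1024).decode().strip()
            # Reply to a single status query, then close the connection
            reply = "TRACKING OK" if cmd == "STATUS" else "UNKNOWN COMMAND"
            conn.sendall(reply.encode())

server = threading.Thread(target=subsystem_server, daemon=True)
server.start()
time.sleep(0.2)  # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 5555))
    cli.sendall(b"STATUS")
    print(cli.recv(1024).decode())   # -> TRACKING OK
```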

A Study of Six Sigma and Total Error Allowable in Chematology Laboratory (6 시그마와 총 오차 허용범위의 개발에 대한 연구)

  • Chang, Sang-Wu;Kim, Nam-Yong;Choi, Ho-Sung;Kim, Yong-Whan;Chu, Kyung-Bok;Jung, Hae-Jin;Park, Byong-Ok
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.37 no.2
    • /
    • pp.65-70
    • /
    • 2005
  • The specifications of the CLIA analytical tolerance limits are consistent with the performance goals of Six Sigma Quality Management. Six sigma analysis determines performance quality from bias and precision statistics and shows whether a method meets the criteria for six sigma performance. Performance standards calculate allowable total error from several different criteria. Six sigma means six standard deviations from the target or mean value, or about 3.4 failures per million opportunities. The Sigma Quality Level is an indicator of process centering and process variation relative to the allowable total error. The tolerance specification is replaced by a total error specification, a common form of quality specification for a laboratory test. The CLIA criteria for acceptable performance in proficiency testing events are given in the form of an allowable total error, TEa, so there is a published list of TEa specifications for regulated analytes. In terms of TEa, Six Sigma Quality Management sets a precision goal of TEa/6 and an accuracy goal of 1.5 (TEa/6). This concept is based on the proficiency testing specification of target value ±3s, on TEa from reference intervals, on biological variation, and on peer group median CV surveys. We found rules for calculating TEa as a fraction of a reference interval and from peer group median CV surveys. We studied the development of allowable total error from peer group survey results and the US CLIA '88 rules for 19 clinical chemistry items (TP, ALB, T.B, ALP, AST, ALT, CL, LD, K, Na, CRE, BUN, T.C, GLU, GGT, CA, phosphorus, UA, and TG), with the following results. The sigma level versus TEa for each item, derived from the peer group median CV and assessed as process performance fitting within six sigma tolerance limits, was TP (6.1σ/9.3%), ALB (6.9σ/11.3%), T.B (3.4σ/25.6%), ALP (6.8σ/31.5%), AST (4.5σ/16.8%), ALT (1.6σ/19.3%), CL (4.6σ/8.4%), LD (11.5σ/20.07%), K (2.5σ/0.39 mmol/L), Na (3.6σ/6.87 mmol/L), CRE (9.9σ/21.8%), BUN (4.3σ/13.3%), UA (5.9σ/11.5%), T.C (2.2σ/10.7%), GLU (4.8σ/10.2%), GGT (7.5σ/27.3%), CA (5.5σ/0.87 mmol/L), IP (8.5σ/13.17%), and TG (9.6σ/17.7%). Items whose peer group survey median CV in the Korean External Assessment exceeded the CLIA criteria were CL (8.45%/5%), BUN (13.3%/9%), CRE (21.8%/15%), T.B (25.6%/20%), and Na (6.87 mmol/L/4 mmol/L). Items whose peer group survey median CV was below the CLIA criteria were TP (9.3%/10%), AST (16.8%/20%), ALT (19.3%/20%), K (0.39 mmol/L/0.5 mmol/L), UA (11.5%/17%), Ca (0.87 mg/dL/1 mg/dL), and TG (17.7%/25%). Of 17 items, the TEa was the same for 14 (82.35%). We found that the sigma level increases as the allowable total error increases, and that the goal chosen for allowable total error therefore affects the evaluation of sigma metrics for a process, even when the process itself remains the same.
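
The sigma-metric arithmetic behind figures like these is commonly expressed as sigma = (TEa − |bias|) / CV; a worked sketch with illustrative numbers (not the survey values above) follows.

```python
# Worked sigma-metric sketch; the example numbers are illustrative, not the survey data.
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma level from allowable total error, bias, and imprecision (all in percent)."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. an analyte with TEa = 10%, bias = 1.5%, CV = 1.4%
print(round(sigma_metric(10.0, 1.5, 1.4), 1))   # ~6.1 -> meets six sigma when >= 6
```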


Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems
    • /
    • v.18 no.1
    • /
    • pp.79-96
    • /
    • 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieving similar semantic business processes is necessary in order to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand results from an exact matching engine when querying the OWL (Web Ontology Language) MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. To use the MIT Process Handbook for process retrieval experiments, we exported it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of the meta-model. Next, we needed a sizable number of queries with corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings as correct answers and as similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants for each target process that are syntactically different but semantically equivalent using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes. We use simple text-retrieval similarity algorithms such as TF-IDF and Levenshtein edit distance, and a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions; since we can identify relationships between a semantic process and its subcomponents, this information can be used to calculate similarities between processes. Dice's coefficient and the Jaccard similarity measure are used to calculate the overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and F-measure, the harmonic mean of precision and recall. Tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance perform better than the other devised methods; these two measures focus on similarity between process names and descriptions. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values among the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show a greater coefficient than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance in both experiments. For retrieving semantic processes, it is therefore better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. We generate semantic process data and a dataset for retrieval experiments from the MIT Process Handbook repository, suggest imprecise query algorithms that expand retrieval results from an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with other datasets from other domains, and since there are many similarity values from diverse measures, better ways may be found to identify relevant processes by applying these values simultaneously.
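
A minimal sketch of the set-overlap measures named above (Dice's coefficient and Jaccard similarity) applied to hypothetical sets of process subcomponents; the two processes and their parts are placeholders, not entries from the MIT Process Handbook.

```python
# Set-overlap similarity sketch; process subcomponent sets are hypothetical.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

process_a = {"receive order", "check stock", "ship goods", "send invoice"}
process_b = {"receive order", "check credit", "ship goods", "send invoice"}

print(f"Jaccard = {jaccard(process_a, process_b):.2f}")  # 0.60
print(f"Dice    = {dice(process_a, process_b):.2f}")     # 0.75
```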

Improvements for Atmospheric Motion Vectors Algorithm Using First Guess by Optical Flow Method (옵티컬 플로우 방법으로 계산된 초기 바람 추정치에 따른 대기운동벡터 알고리즘 개선 연구)

  • Oh, Yurim;Park, Hyungmin;Kim, Jae Hwan;Kim, Somyoung
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.763-774
    • /
    • 2020
  • Wind data forecast by a numerical weather prediction (NWP) model are generally used as the first guess in the target tracking process for deriving atmospheric motion vectors (AMVs), because this increases tracking accuracy and reduces computation time. However, there is a contradiction in that the NWP model used as the first guess is used again as the reference in the AMV verification process. To overcome this problem, model-independent first guesses are required. In this study, we propose deriving AMVs with the Lucas-Kanade optical flow method and then using them as the first guess. To retrieve AMVs, Himawari-8/AHI geostationary satellite level-1B data were used at 00, 06, 12, and 18 UTC from August 19 to September 5, 2015. To evaluate the impact of applying the optical flow method to AMV derivation, cross-validation was conducted in three ways: (1) without a first guess, (2) with NWP (KMA/UM) forecast wind as the first guess, and (3) with optical-flow-based wind as the first guess. Verification against ECMWF ERA-Interim reanalysis data showed that the highest precision (RMSVD: 5.296-5.804 m/s) was obtained using optical-flow-based winds as the first guess. In addition, the computation of AMVs was slowest without a first guess, while the other two configurations had similar performance. By applying the optical flow method in the target tracking process of the AMV algorithm, this study shows that optical flow is very effective as a first guess for model-independent AMV derivation.
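
A hedged sketch of using Lucas-Kanade optical flow as a model-independent first guess for target tracking, assuming OpenCV. The two "images" are a synthetic field and a shifted copy of it, and the pixel-to-wind conversion factors are placeholders; the operational retrieval uses calibrated Himawari-8/AHI imagery and the authors' own tracking scheme.

```python
# Lucas-Kanade first-guess sketch; images and conversion factors are hypothetical.
import numpy as np
import cv2

rng = np.random.default_rng(0)
prev_img = (rng.random((200, 200)) * 255).astype(np.uint8)
next_img = np.roll(prev_img, shift=(2, 3), axis=(0, 1))   # shift mimics cloud motion

# Track a grid of candidate target points between two consecutive images
pts = np.array([[[x, y]] for y in range(20, 180, 40) for x in range(20, 180, 40)],
               dtype=np.float32)
next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, pts, None,
                                               winSize=(21, 21), maxLevel=2)

displacement = (next_pts - pts)[status.ravel() == 1]        # pixels per image interval
km_per_pixel, dt_seconds = 2.0, 600.0                       # hypothetical sampling values
wind_guess = displacement * km_per_pixel * 1000.0 / dt_seconds   # rough (u, v) in m/s
print(wind_guess.mean(axis=(0, 1)))                         # mean first-guess wind vector
```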

The Influences of Bowel Condition with Lumbar Spine BMD Measurement (요추부 골밀도 측정 시 장내 변화가 골밀도에 미치는 영향)

  • Yoon, Joon;Kim, Yun-Min;Lee, Hoo-Min;Lee, Jung Min;Kwon, Soon-Mu;Cho, Hyung-Wook;Kang, Yeong-Han;Kim, Boo-Soon;Kim, Jung-Soo
    • Journal of radiological science and technology
    • /
    • v.37 no.4
    • /
    • pp.273-278
    • /
    • 2014
  • Bone density measurement is used for the diagnosis of osteoporosis and is an important indicator for treatment as well as prevention. However, errors in the precision of BMD can occur depending on the status of the patient, the bone densitometer, and the radiological technologist. The authors therefore evaluated how BMD changes according to the condition of the patient, focusing on the lumbar region, where bone density can be substantially affected by diverse factors such as water, food, and bowel contents. Using an aluminum spine phantom, we examined the change in bone mineral density according to the height of a water tank and the presence or absence of gas. We also assessed the influence on bone mineral density of added water and food in volunteers. Bone mineral density measured with the aluminum spine phantom showed a statistically significant difference with increasing height of the water tank (p=0.026). There was no significant difference in BMD according to the presence of bowel gas (p=0.587). In a study of six volunteers, there was no significant difference according to the presence or absence of food (p=0.812), nor according to the presence of water (p=0.618). If it is not difficult to delineate the bone contour when measuring lumbar BMD, whether the test is performed after endoscopic examination of the large intestine or after patient fasting is not a factor that greatly affects bone mineral density.
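
The p-values above come from significance tests on paired measurements; a minimal sketch of that kind of comparison, using a paired t-test on hypothetical BMD values, is shown below (the study's actual test statistic is not stated in the abstract).

```python
# Paired-comparison sketch; the BMD values (g/cm^2) are hypothetical, not the study's data.
from scipy import stats

bmd_baseline = [1.02, 0.98, 1.10, 1.05, 0.95, 1.01]   # before water/food intake (hypothetical)
bmd_after    = [1.03, 0.97, 1.11, 1.05, 0.96, 1.00]   # after intake (hypothetical)

t_stat, p_value = stats.ttest_rel(bmd_baseline, bmd_after)
verdict = "significant" if p_value < 0.05 else "not significant"
print(f"p = {p_value:.3f} -> {verdict} at alpha = 0.05")
```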

Simultaneous determination of illegal galactagogue adulterants in supplement diets by LC-MS/MS

  • Lee, Ji Hyun;Cho, So-Hyun;Park, Han Na;Park, Hyoung Joon;Kim, Nam Sook;Park, Sung Kwan;Kang, Hoil
    • Analytical Science and Technology
    • /
    • v.31 no.4
    • /
    • pp.171-178
    • /
    • 2018
  • Recently, for successful lactation, many breastfeeding mothers have sought various products, including herbal medicines, dietary supplements, and prescribed medicines, to improve milk production. As demand for galactogogues grows, it is highly possible that supplement products may be adulterated with illegal pharmaceutical galactogogues to maximize their efficacy. For continuous control and supervision of illegal products, we developed and validated a simple and sensitive LC-MS/MS method capable of simultaneously determining five galactogogues. Chromatographic separation was conducted on an Agilent Poroshell 120 SB-C18 column with a mobile phase consisting of 20 mM ammonium formate (pH 5.4) and 100% acetonitrile. The total run time was 13 min per analyte. The proposed method was performed according to the guidelines of the International Conference on Harmonisation and produced reliable results. The method showed high sensitivity and specificity, with a limit of detection (LOD) of 0.01-0.82 ng/mL and a limit of quantitation (LOQ) of 0.02-2.45 ng/mL for the solid- and liquid-type samples. Specificity was evaluated by analyzing matrix-blank samples spiked with the target compounds at LOQ levels, which provided good separation of all peaks without interference. Additionally, the repeatability and intermediate precision were typically <15%, whereas the recovery was 80-120% of the values obtained using blank samples. We therefore conclude that this method can be used for the identification and quantification of galactogogues in food or herbal products.
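
Recovery and precision figures like those above reduce to simple arithmetic over replicate measurements; the sketch below shows that calculation with hypothetical spiked-sample results, not the galactogogue validation data from the study.

```python
# Recovery and precision (CV) sketch; spiked concentration and replicates are hypothetical.
import statistics

spiked_conc   = 10.0                                   # ng/mL added to blank matrix
measured_conc = [9.4, 9.8, 10.3, 9.9, 10.1, 9.6]       # replicate results (hypothetical)

recovery_pct  = [100.0 * m / spiked_conc for m in measured_conc]
mean_recovery = statistics.mean(recovery_pct)
cv_pct = 100.0 * statistics.stdev(measured_conc) / statistics.mean(measured_conc)

print(f"mean recovery = {mean_recovery:.1f} %, precision (CV) = {cv_pct:.1f} %")
```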