• Title/Summary/Keyword: Automatic rank

Search Results: 45

Blood-Brain Barrier Disruption in Mild Traumatic Brain Injury Patients with Post-Concussion Syndrome: Evaluation with Region-Based Quantification of Dynamic Contrast-Enhanced MR Imaging Parameters Using Automatic Whole-Brain Segmentation

  • Heera Yoen;Roh-Eul Yoo;Seung Hong Choi;Eunkyung Kim;Byung-Mo Oh;Dongjin Yang;Inpyeong Hwang;Koung Mi Kang;Tae Jin Yun;Ji-hoon Kim;Chul-Ho Sohn
    • Korean Journal of Radiology
    • /
    • v.22 no.1
    • /
    • pp.118-130
    • /
    • 2021
  • Objective: This study aimed to investigate the blood-brain barrier (BBB) disruption in mild traumatic brain injury (mTBI) patients with post-concussion syndrome (PCS) using dynamic contrast-enhanced (DCE) magnetic resonance (MR) imaging and automatic whole brain segmentation. Materials and Methods: Forty-two consecutive mTBI patients with PCS who had undergone post-traumatic MR imaging, including DCE MR imaging, between October 2016 and April 2018, and 29 controls with DCE MR imaging were included in this retrospective study. After performing three-dimensional T1-based brain segmentation with FreeSurfer software (Laboratory for Computational Neuroimaging), the mean Ktrans and vp from DCE MR imaging (derived using the Patlak model and extended Tofts and Kermode model) were analyzed in the bilateral cerebral/cerebellar cortex, bilateral cerebral/cerebellar white matter (WM), and brainstem. Ktrans values of the mTBI patients and controls were calculated using both models to identify the model that better reflected the increased permeability owing to mTBI (tendency toward higher Ktrans values in mTBI patients than in controls). The Mann-Whitney U test and Spearman rank correlation test were performed to compare the mean Ktrans and vp between the two groups and correlate Ktrans and vp with neuropsychological tests for mTBI patients. Results: Increased permeability owing to mTBI was observed in the Patlak model but not in the extended Tofts and Kermode model. In the Patlak model, the mean Ktrans in the bilateral cerebral cortex was significantly higher in mTBI patients than in controls (p = 0.042). The mean vp values in the bilateral cerebellar WM and brainstem were significantly lower in mTBI patients than in controls (p = 0.009 and p = 0.011, respectively). The mean Ktrans of the bilateral cerebral cortex was significantly higher in patients with atypical performance in the auditory continuous performance test (commission errors) than in average or good performers (p = 0.041). Conclusion: BBB disruption, as reflected by the increased Ktrans and decreased vp values from the Patlak model, was observed throughout the bilateral cerebral cortex, bilateral cerebellar WM, and brainstem in mTBI patients with PCS.
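As a rough illustration of the Patlak analysis described in this abstract, the sketch below fits Ktrans and vp by linear least squares from a region-averaged tissue concentration curve and an arterial input function, using the standard Patlak relation Ct(t) = Ktrans·∫Cp dτ + vp·Cp(t). Function names, data shapes, and the toy curves are illustrative assumptions, not the paper's actual processing pipeline.

```python
# Minimal sketch: region-wise Patlak-model fit for Ktrans and vp from DCE-MRI
# concentration curves. Names, shapes, and the toy data are illustrative.
import numpy as np

def patlak_fit(ct, cp, t):
    """Fit Ct(t) = Ktrans * integral(Cp) + vp * Cp(t) by linear least squares.

    ct : tissue concentration curve for one segmented region, shape (T,)
    cp : arterial (plasma) input function, shape (T,)
    t  : acquisition times in minutes, shape (T,)
    Returns (Ktrans, vp).
    """
    # Cumulative trapezoidal integral of the input function.
    cp_int = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    A = np.column_stack([cp_int, cp])              # design matrix [integral(Cp), Cp]
    ktrans, vp = np.linalg.lstsq(A, ct, rcond=None)[0]
    return ktrans, vp

if __name__ == "__main__":
    t = np.linspace(0.0, 5.0, 60)                  # minutes
    cp = 5.0 * np.exp(-t)                          # toy arterial input function
    dt = t[1] - t[0]
    ct = 0.02 * np.cumsum(cp) * dt + 0.03 * cp     # region curve with Ktrans=0.02, vp=0.03
    print(patlak_fit(ct, cp, t))                   # approximately (0.02, 0.03)
```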

Pose and Expression Invariant Alignment based Multi-View 3D Face Recognition

  • Ratyal, Naeem;Taj, Imtiaz;Bajwa, Usama;Sajid, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.4903-4929
    • /
    • 2018
  • In this study, a fully automatic pose- and expression-invariant 3D face alignment algorithm is proposed to handle frontal and profile face images, based on a two-pass coarse-to-fine alignment strategy. The first pass of the algorithm coarsely aligns the face images to an intrinsic coordinate system (ICS) through a single 3D rotation, and the second pass aligns them at a fine level using a minimum nose tip-scanner distance (MNSD) approach. For face recognition, multi-view faces are synthesized to exploit real 3D information and test the efficacy of the proposed system. Owing to its optimal separating hyperplane (OSH), a Support Vector Machine (SVM) is employed in the multi-view face verification (FV) task. In addition, a multi-stage unified-classifier-based face identification (FI) algorithm is employed, which combines the results of seven base classifiers, two parallel face recognition algorithms, and an exponential rank combiner, all in a hierarchical manner. The performance figures of the proposed methodology are corroborated by extensive experiments performed on four benchmark datasets: GavabDB, Bosphorus, UMB-DB and FRGC v2.0. Results show a marked improvement in alignment accuracy and recognition rates. Moreover, a computational complexity analysis carried out for the proposed algorithm reveals its superiority in terms of computational efficiency as well.
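The abstract mentions an exponential rank combiner that fuses the outputs of several base classifiers. The sketch below shows one common form of exponential rank-level fusion, in which each classifier's ranked candidate list contributes scores that decay exponentially with rank; the weighting constant and scoring rule are assumptions, not the authors' exact combiner.

```python
# Minimal sketch of rank-level fusion with exponentially decaying rank scores.
# alpha and the scoring rule are illustrative assumptions, not the paper's combiner.
import numpy as np

def exponential_rank_combine(rank_lists, n_identities, alpha=0.5):
    """rank_lists: one array per base classifier, each an ordering of identity
    indices with the best match first. Returns the fused ranking."""
    scores = np.zeros(n_identities)
    for ranks in rank_lists:
        for r, identity in enumerate(ranks):
            scores[identity] += np.exp(-alpha * r)   # better rank -> larger contribution
    return np.argsort(-scores)                        # fused ranking, best first

# Toy example: three base classifiers ranking four enrolled identities.
fused = exponential_rank_combine(
    [np.array([2, 0, 1, 3]), np.array([2, 1, 0, 3]), np.array([0, 2, 1, 3])],
    n_identities=4)
print(fused)   # identity 2 is ranked first by the fused score
```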

Comparison of the Anaerobic Threshold Level Between Subjects With and Without Non-Specific Chronic Low Back Pain (비특이성 만성요통 유무에 따른 무산소성 역치수준 비교)

  • Seong, Jun-Hyuk;Kwon, Oh-Yun;Yi, Chung-Hwi;Cynn, Heon-Seock;Cho, Young-Ki
    • Physical Therapy Korea
    • /
    • v.18 no.1
    • /
    • pp.74-82
    • /
    • 2011
  • The purpose of this study was to compare the anaerobic threshold (AT) between subjects with and without non-specific chronic low back pain (NCLBP). The patient group included 15 women with NCLBP. The normal group included 15 women without NCLBP who were age-, height-, weight-, and activity-level-matched. The subjects performed the Balke treadmill protocol, a symptom-limited progressive loading test. Their heart rate (HR), ventilatory gas, and metabolic equivalents (METs) were measured using an automatic breath-gas analysis system. After the test, each subject's rating of perceived exertion (RPE) was evaluated. The visual analog scale (VAS) was assessed pre- and post-test. The independent t-test and Wilcoxon's signed-rank test were used to analyze the data. Time, HR, the volume of oxygen consumption ($VO_2$), relative $VO_2$, and METs at the AT level of the patient group were significantly lower than those of the healthy group (p<.05). However, there were no significant differences in RPE, VAS, and breathing frequency at the AT level (p>.05). The findings of this study indicate that patients with NCLBP had lower aerobic fitness than healthy subjects. Thus, implementation of a rehabilitation program to increase aerobic fitness may be considered in patients with NCLBP, and further studies are required to determine the etiological factors of decreased aerobic fitness.
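For reference, the two statistical tests named in the abstract can be run as below: the independent t-test compares the two groups, while Wilcoxon's signed-rank test handles paired data such as pre- vs. post-test VAS. The data here are synthetic placeholders, not the study's measurements.

```python
# Minimal sketch of the two tests named in the abstract, using synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
vo2_at_patients = rng.normal(20.0, 3.0, 15)   # e.g. relative VO2 at AT, NCLBP group
vo2_at_controls = rng.normal(24.0, 3.0, 15)   # matched healthy group

# Between-group comparison of an AT-level variable.
t_stat, p_between = stats.ttest_ind(vo2_at_patients, vo2_at_controls)
print(f"independent t-test: t = {t_stat:.2f}, p = {p_between:.3f}")

# Paired comparison, e.g. pre- vs post-test VAS within one group.
vas_pre = rng.uniform(3.0, 6.0, 15)
vas_post = vas_pre + rng.normal(0.2, 0.5, 15)
w_stat, p_paired = stats.wilcoxon(vas_pre, vas_post)
print(f"Wilcoxon signed-rank test: W = {w_stat:.1f}, p = {p_paired:.3f}")
```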

Domestic current situation and Improvement plan Consideration of Electricity Design & Supervision System (전기설계.감리제도의 합리적인 운영방안에 관한 고찰)

  • Jeong, Yeon-Hae;Nam, Ki-Beom;Sin, Hwa-Yeong;Jeong, Hyeong-Yong;Lee, Jong-Hyuk;Jun, Young-Su
    • Proceedings of the KIEE Conference
    • /
    • 2007.04b
    • /
    • pp.10-15
    • /
    • 2007
  • To introduce a specialized electricity design and supervision system that prevents faulty workmanship in electrical equipment and secures electrical safety, the "Electricity Technology Management Act" was enacted in 1995. In March 2002, a pre-qualification system was introduced, under which competent companies are selected for electricity design and supervision services above a certain capacity ordered by public institutions such as the national government, local autonomous entities, and government-invested institutes. In December 2005, for electrical equipment above a certain capacity whose housing construction plan is approved under the Housing Act, the mayor or district governor holding the approval authority came to select the electricity supervision company according to the pre-qualification system. With the introduction of this system, an increase in consumer rights and interests could be expected: securing the quality of electrical equipment and electrical safety eradicates dumping arising from close relationships between supervision companies and builders, and prevents the reduction and concurrent posting of supervisors. Despite the introduction of the system, problems have arisen: the task range of electricity designers has been narrowed compared with other fields, the supervisor grade system needs to be revised rationally, and research on electricity design and supervision techniques is lacking. In addition, KEEA (Korea Electric Engineers Association) manages the career records of electricity design and supervision companies and engineers, but issues such as the public disclosure of those career records and the introduction of an automatic bidding system have been raised, and solutions are needed. Consequently, this paper reviews the current domestic and overseas situation and the problems of the electricity design and supervision system, and attempts to present a rational operation plan.

Modern Paper Quality Control

  • Komppa, Olavi
    • Journal of Korea Technical Association of The Pulp and Paper Industry
    • /
    • v.32 no.5
    • /
    • pp.72-79
    • /
    • 2000
  • On the other hand, the fiber orientation at the surface and middle layer of the sheet controls the bending stiffness of paperboard. Therefore, a reliable measurement of paper surface fiber orientation gives us a magnificent tool to investigate and predict paper curling and cockling tendency, and provides the necessary information to fine-tune the manufacturing process for optimum quality. Many papers, especially heavily calendered and coated grades, resist liquid and gas penetration very strongly, being beyond the measurement range of traditional instruments or resulting in inconveniently long measuring times per sample. The increased surface hardness and the use of filler minerals and mechanical pulp make a reliable, non-leaking sample contact to the measurement head a challenge of its own. Paper surface coating creates, as expected, a layer which has completely different permeability characteristics compared to the other layers of the sheet. The latest developments in sensor technologies have made it possible to reliably measure gas flow in well-controlled conditions, allowing us to investigate the gas penetration of open structures, such as cigarette paper, tissue or sack paper, and in the low-permeability range analyze even fully greaseproof papers, silicone papers, heavily coated papers and boards, or even detect defects in barrier coatings. Even nitrogen or helium may be used as the gas, giving us completely new possibilities to rank the products or to find correlations to critical process or converting parameters. All modern paper machines include many on-line measuring instruments which are used to provide the necessary information for automatic process control systems. Hence, the reliability of this information obtained from the different sensors is vital for good optimization and process stability. If any of these on-line sensors does not operate perfectly as planned (having even a small measurement error or malfunction), the process control will set the machine to operate away from the optimum, resulting in loss of profit or eventual problems in quality or runnability. To assure optimum operation of the paper machines, a novel quality assurance policy for the on-line measurements has been developed, including control procedures utilizing traceable, accredited standards for the best reliability and performance.

An Algorithm for Detecting Residual Quantity of Ringer's Solution for Automatic Replacement (링거 자동 교체를 위한 잔량 검출 알고리즘)

  • Kim, Chang-Wook;Woo, Sang-Hyo;Zia, Mohy Ud Din;Won, Chul-Ho;Hong, Jae-Pyo;Cho, Jin-Ho
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.13 no.1
    • /
    • pp.30-36
    • /
    • 2008
  • Recently, there has been much research aimed at improving the quality of medical services, such as point-of-care (POC) systems. To improve the quality of medical services, not only good medical devices but also more manpower is required. In particular, the number of nurses in Korea is very low, ranking near the bottom among OECD countries. If nurses' simple repetitive tasks could be eliminated, skilled nurses could be assigned to other work and provide better-quality services. There are many simple repetitive tasks that nurses have to perform, such as replacing Ringer's solution. To replace Ringer's solution automatically, it is necessary to detect its residual quantity. In this paper, image processing is used to detect the residual quantity of Ringer's solution, and a modified self-quotient image (SQI) algorithm is used to cope with strong background lighting. After the modified SQI step, a simple histogram accumulation is performed to find the residual quantity of the Ringer's solution. The implemented algorithm could be used to replace the Ringer's solution automatically or to alert nurses to replace the solution.
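As a minimal illustration of the approach described above, the sketch below applies a plain self-quotient image (SQI) normalization and a row-wise accumulation to locate the fluid boundary. The paper's modified SQI and its thresholds are not reproduced; the kernel size, threshold, and file name are illustrative assumptions.

```python
# Minimal sketch: plain self-quotient image (SQI) normalization followed by a
# row-wise accumulation to locate the liquid boundary in an IV-bottle image.
# Kernel size, threshold, and the input file name are illustrative assumptions.
import cv2
import numpy as np

def residual_level(gray):
    img = gray.astype(np.float32) + 1.0                  # avoid division by zero
    smooth = cv2.GaussianBlur(img, (0, 0), sigmaX=15)    # large-scale illumination estimate
    sqi = img / smooth                                   # illumination-normalized image
    edges = np.abs(sqi - 1.0) > 0.15                     # strong deviations from background
    row_hist = edges.sum(axis=1)                         # accumulate responses per image row
    boundary_row = int(np.argmax(row_hist))              # strongest horizontal response
    return 1.0 - boundary_row / gray.shape[0]            # crude fraction of fluid remaining

if __name__ == "__main__":
    img = cv2.imread("ringer_bottle.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    if img is not None:
        print(f"estimated residual fraction: {residual_level(img):.2f}")
```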

Real-time Moving Object Detection Based on RPCA via GD for FMCW Radar

  • Nguyen, Huy Toan;Yu, Gwang Hyun;Na, Seung You;Kim, Jin Young;Seo, Kyung Sik
    • The Journal of Korean Institute of Information Technology
    • /
    • v.17 no.6
    • /
    • pp.103-114
    • /
    • 2019
  • Moving-target detection using frequency-modulated continuous-wave (FMCW) radar systems has recently attracted attention. Detection tasks are more challenging with noise resulting from signals reflected from strong static objects or small moving objects (clutter) within radar range. A Robust Principal Component Analysis (RPCA) approach for FMCW radar to detect moving objects in noisy environments is employed in this paper. In detail, compensation and calibration are first applied to the raw input signals. Then, RPCA via Gradient Descent (RPCA-GD) is adopted to model the low-rank noisy background. A novel update algorithm for RPCA is proposed to reduce the computation cost. Finally, moving targets are localized using an Automatic Multiscale-based Peak Detection (AMPD) method. All processing steps are based on a sliding-window approach. The proposed scheme shows impressive results in both processing time and accuracy in comparison to other RPCA-based approaches in various experimental scenarios.
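The core of the method is the low-rank + sparse split of the radar data matrix (clutter background vs. moving targets). The sketch below uses a simplified alternating scheme to convey the idea; it is not the authors' RPCA-via-Gradient-Descent algorithm, and the rank, threshold, and toy data are assumptions.

```python
# Minimal sketch of the low-rank + sparse split behind the RPCA step
# (clutter background = low-rank L, moving targets = sparse S). This is a
# simplified alternating scheme, not the authors' RPCA-via-GD algorithm;
# the rank, threshold, iteration count, and toy data are assumptions.
import numpy as np

def lowrank_sparse_split(M, rank=1, thresh=0.5, n_iter=30):
    """M: matrix of radar frames (rows = range bins, cols = slow-time samples)."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank update: best rank-r approximation of the de-sparsified data.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse update: keep only large residuals (candidate moving targets).
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)
    return L, S

# Toy data: rank-1 static clutter plus one brief strong return (moving target).
M = 5.0 * np.outer(np.hanning(128), np.ones(64))
M[40, 10:20] += 3.0                          # target at range bin 40
L, S = lowrank_sparse_split(M)
print(np.argwhere(np.abs(S) > 1.0)[:3])      # rows of 40 mark the detected target
```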

A Comprehensive Survey of Lightweight Neural Networks for Face Recognition (얼굴 인식을 위한 경량 인공 신경망 연구 조사)

  • Yongli Zhang;Jaekyung Yang
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.1
    • /
    • pp.55-67
    • /
    • 2023
  • Lightweight face recognition models, one of the most popular and long-standing topics in the field of computer vision, have achieved vigorous development and have been widely used in many real-world applications owing to their smaller number of parameters, lower floating-point operations, and smaller model size. However, few surveys have reviewed lightweight models and reimplemented them using the same computing resources and training dataset. In this survey article, we present a comprehensive review of recent research advances in end-to-end efficient lightweight face recognition models and reimplement several of the most popular ones. To start with, we give an overview of face recognition with lightweight models. Then, based on how the models are constructed, we categorize the lightweight models into: (1) manually designed lightweight FR models, (2) models pruned for face recognition, (3) efficient automatic neural network architecture design based on neural architecture search, (4) knowledge distillation, and (5) low-rank decomposition. As an example, we also introduce SqueezeFaceNet and EfficientFaceNet, obtained by pruning SqueezeNet and EfficientNet. Additionally, we reimplement different lightweight models and present a detailed performance comparison on nine different test benchmarks. Finally, the challenges and future works are provided. There are three main contributions in our survey: firstly, the categorized lightweight models can be conveniently identified so that we can explore new lightweight models for face recognition; secondly, comprehensive performance comparisons are carried out so that one can choose a model when deploying a state-of-the-art end-to-end face recognition system on mobile devices; thirdly, the challenges and future trends are stated to inspire our future works.
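As a small, self-contained example of category (5), low-rank decomposition, the sketch below factorizes a dense weight matrix W into two thinner factors via truncated SVD, so that a layer y = Wx becomes y = B(Ax) with far fewer parameters. The layer sizes and rank are illustrative, not taken from any model in the survey.

```python
# Minimal sketch of category (5), low-rank decomposition: replace a dense weight
# matrix W (out x in) with two thinner factors B (out x r) and A (r x in) obtained
# from a truncated SVD. Layer sizes and the rank r are illustrative assumptions.
import numpy as np

def lowrank_factorize(W, r):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    B = U[:, :r] * s[:r]          # shape (out, r)
    A = Vt[:r]                    # shape (r, in)
    return B, A

out_dim, in_dim, r = 512, 512, 64
W = np.random.randn(out_dim, in_dim)
B, A = lowrank_factorize(W, r)

x = np.random.randn(in_dim)
rel_err = np.linalg.norm(W @ x - B @ (A @ x)) / np.linalg.norm(W @ x)
print(f"relative output error at rank {r}: {rel_err:.2f}")     # depends on W's spectrum
print(f"parameters: {W.size} -> {B.size + A.size}")            # 262144 -> 65536
```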

Modern Paper Quality Control

  • Olavi Komppa
    • Proceedings of the Korea Technical Association of the Pulp and Paper Industry Conference
    • /
    • 2000.06a
    • /
    • pp.16-23
    • /
    • 2000
  • The increasing functional needs of top-quality printing papers and packaging paperboards, and especially the rapid developments in electronic printing processes and various computer printers during the past few years, set new targets and requirements for modern paper quality. Most of these paper grades today have a relatively high filler content, are moderately or heavily calendered, and have many coating layers for the best appearance and performance. In practice, this means that many of the traditional quality assurance methods, mostly designed to measure papers made of pure, native pulp only, cannot reliably (or at all) be used to analyze or rank the quality of modern papers. Hence, the introduction of new measurement techniques is necessary to assure and further develop paper quality today and in the future. Paper formation, i.e., the small-scale (millimeter-scale) variation of basis weight, is the most important quality parameter in papermaking due to its influence on practically all the other quality properties of paper. The ideal paper would be completely uniform, so that the basis weight of each small point (area) measured would be the same. In practice, of course, this is not possible because there always exist relatively large local variations in paper. However, these small-scale basis weight variations are the major reason for many other quality problems, including calender blackening, uneven coating results, uneven printing results, etc. The traditionally used visual inspection or optical measurement of the paper does not give us a reliable understanding of the material variations in the paper, because in the modern papermaking process the optical behavior of paper is strongly affected by the use of, e.g., fillers, dyes or coating colors. Furthermore, the opacity (optical density) of the paper is changed at different process stages such as wet pressing and calendering. The greatest advantage of using the beta transmission method to measure paper formation is that it can be very reliably calibrated to measure the true basis weight variation of all kinds of paper and board, independently of sample basis weight or paper grade. This gives us the possibility to measure, compare and judge papers made of different raw materials or of different color, or even to measure heavily calendered, coated or printed papers. Scientific research on paper physics has shown that the orientation of the top-layer (paper surface) fibers of the sheet plays the key role in paper curling and cockling, causing the typical practical problems (paper jams) with modern fax and copy machines, electronic printing, etc. On the other hand, the fiber orientation at the surface and middle layer of the sheet controls the bending stiffness of paperboard. Therefore, a reliable measurement of paper surface fiber orientation gives us a magnificent tool to investigate and predict paper curling and cockling tendency, and provides the necessary information to fine-tune the manufacturing process for optimum quality. Many papers, especially heavily calendered and coated grades, resist liquid and gas penetration very strongly, being beyond the measurement range of the traditional instruments or resulting in inconveniently long measuring times per sample. The increased surface hardness and the use of filler minerals and mechanical pulp make a reliable, non-leaking sample contact to the measurement head a challenge of its own. Paper surface coating creates, as expected, a layer which has completely different permeability characteristics compared to the other layers of the sheet. The latest developments in sensor technologies have made it possible to reliably measure gas flow in well-controlled conditions, allowing us to investigate the gas penetration of open structures, such as cigarette paper, tissue or sack paper, and in the low-permeability range analyze even fully greaseproof papers, silicone papers, heavily coated papers and boards, or even detect defects in barrier coatings. Even nitrogen or helium may be used as the gas, giving us completely new possibilities to rank the products or to find correlations to critical process or converting parameters. All modern paper machines include many on-line measuring instruments which are used to provide the necessary information for automatic process control systems. Hence, the reliability of this information obtained from the different sensors is vital for good optimization and process stability. If any of these on-line sensors does not operate perfectly as planned (having even a small measurement error or malfunction), the process control will set the machine to operate away from the optimum, resulting in loss of profit or eventual problems in quality or runnability. To assure optimum operation of the paper machines, a novel quality assurance policy for the on-line measurements has been developed, including control procedures utilizing traceable, accredited standards for the best reliability and performance.
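As a small illustration of quantifying formation, the sketch below computes the coefficient of variation of a local basis-weight map (such as one obtained from a calibrated beta-transmission scan). Using the CV of local basis weight is one common formation measure; the grid size and synthetic map are assumptions, and the beta-transmission calibration itself is not reproduced.

```python
# Minimal sketch: quantifying paper formation as the small-scale variability of a
# basis-weight map (e.g. from a calibrated beta-transmission scan). The coefficient
# of variation of local basis weight is one common formation measure; the grid
# resolution and the synthetic map below are illustrative assumptions.
import numpy as np

def formation_cv(basis_weight_map):
    """basis_weight_map: 2D array of local basis weight in g/m^2 at ~1 mm resolution."""
    return basis_weight_map.std() / basis_weight_map.mean() * 100.0   # CV in percent

rng = np.random.default_rng(1)
sheet = 80.0 + rng.normal(0.0, 2.5, size=(200, 200))   # nominal 80 g/m^2 sheet, mm-scale grid
print(f"mean basis weight: {sheet.mean():.1f} g/m^2, formation CV: {formation_cv(sheet):.1f} %")
```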

Estimation and assessment of natural drought index using principal component analysis (주성분 분석을 활용한 자연가뭄지수 산정 및 평가)

  • Kim, Seon-Ho;Lee, Moon-Hwan;Bae, Deg-Hyo
    • Journal of Korea Water Resources Association
    • /
    • v.49 no.6
    • /
    • pp.565-577
    • /
    • 2016
  • The objective of this study is to propose a method for computing a Natural Drought Index (NDI) that does not consider man-made drought-mitigation facilities. Principal Component Analysis (PCA) was used to estimate the NDI. Three-month moving cumulative runoff, soil moisture, and precipitation during 1977~2012 were selected as the input data for the NDI. Observed precipitation data were collected from the KMA ASOS (Korea Meteorological Administration Automated Synoptic Observing System), while model-derived runoff and soil moisture from the Variable Infiltration Capacity (VIC) model were used. Time series analysis, drought characteristic analysis, and spatial analysis were used to assess the utility of the NDI and to compare it with the existing SPI, SRI, and SSI. The NDI precisely reflected the onset and termination of past drought events, with a mean absolute error of 0.85 in the time series analysis. It explained duration and inter-arrival time well, with values of 1.3 and 1.0, respectively, in the drought characteristic analysis. The NDI also reflected regional drought conditions well in the spatial analysis. The accuracy ranks for drought onset, termination, duration, and inter-arrival time were calculated using the NDI, SPI, SRI, and SSI, and the results showed that the NDI is more precise than the others. The NDI overcomes the limitation of univariate drought indices and can be useful for drought analysis as a representative measure of different types of drought, such as meteorological, hydrological, and agricultural droughts.
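As a rough sketch of the PCA-based construction described in the abstract, the code below standardizes 3-month moving cumulative precipitation, runoff, and soil moisture and takes the first principal component as a composite index. The paper's exact preprocessing, calibration, and sign convention are not reproduced; the data are synthetic.

```python
# Rough sketch of a PCA-based composite drought index in the spirit of the NDI:
# standardize 3-month moving cumulative precipitation, runoff and soil moisture,
# then take the first principal component. The paper's exact preprocessing and
# sign convention are not reproduced; the data below are synthetic.
import numpy as np

def natural_drought_index(precip, runoff, soil_moisture, window=3):
    X = np.column_stack([precip, runoff, soil_moisture]).astype(float)
    kernel = np.ones(window)
    # Moving cumulative (window-month) totals for each variable.
    X = np.column_stack([np.convolve(X[:, j], kernel, mode="valid") for j in range(3)])
    X = (X - X.mean(axis=0)) / X.std(axis=0)               # standardize each variable
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pc1 = X @ Vt[0]                                        # first principal component scores
    if np.corrcoef(pc1, X[:, 0])[0, 1] < 0:                # align sign: low index = dry
        pc1 = -pc1
    return pc1 / pc1.std()

rng = np.random.default_rng(2)
months = 120
precip = rng.gamma(2.0, 50.0, months)
runoff = 0.6 * precip + rng.normal(0.0, 20.0, months)
soil = 0.4 * precip + rng.normal(0.0, 15.0, months)
print(natural_drought_index(precip, runoff, soil)[:6].round(2))
```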