• Title/Summary/Keyword: Computation

Search Result 7,985, Processing Time 0.035 seconds

Geographic Distribution of Physician Manpower by Gini Index (GINI계수에 의한 의사의 지역간 분포양상)

  • Moon, Byung-Wook;Park, Jae-Yong
    • Journal of Preventive Medicine and Public Health
    • /
    • v.20 no.2 s.22
    • /
    • pp.301-311
    • /
    • 1987
  • The purpose of this study is to analyze the degree of geographic maldistribution of physicians and changes in the distributional pattern in Korea over the years 1980-1985. In assessing the degree of disparity in physician distribution and in identifying changes in the distributional pattern, the Gini index of concentration was used. The geographical units selected for computation of the Gini index in this analysis are districts (Gu), cities (Si), and counties (Gun). Locational data for 1980 and 1985 were obtained from the population census data of the Economic Planning Board and the regular reports of physicians in the Korean Medical Association. The proportions of all physicians located in counties were 10.4% in 1980 and 9.6% in 1985. In terms of the ratio of physicians per 100,000 population, rural areas had 9.18 physicians in 1980 and 12.95 in 1985, 7.13 general practitioners in 1980 and 7.29 in 1985, and 2.05 specialists in 1980 and 5.66 in 1985. Only specialists in general surgery and preventive medicine had more than 10% of their number located in counties, and the county share of every specialty except chest surgery increased in 1985 compared with the 1980 rates. The Gini indices computed to measure inequality of physician distribution in 1985 were as follows: physicians 0.3466, general practitioners 0.5479, and specialists 0.5092. The Gini indices for physicians and specialists fell 15.40% and 10.42% from 1980 to 1985, indicating a more even distribution. The changes in the Gini index for specialists over the period, from 0.3639 to 0.4542 for districts, from 0.2510 to 0.1949 for cities, and from 0.5303 to 0.5868 for counties, indicate distributional changes of 24.81%, -22.35%, and 10.65%, respectively. The Gini indices for specialists in neurosurgery, chest surgery, plastic surgery, ophthalmology, tuberculosis, preventive medicine, and anatomical pathology in 1985 were higher than the corresponding indices in 1980.

  • PDF
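
The Gini index of concentration used in this study is straightforward to compute. A minimal sketch (the regional physician ratios below are hypothetical, not the paper's data):

```python
import numpy as np

def gini(values):
    """Gini concentration index of a nonnegative distribution.
    0 = perfectly even, 1 = maximally concentrated."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    total = x.sum()
    # Standard formula: G = (2*sum(i*x_i) - (n+1)*sum(x)) / (n*sum(x))
    return (2 * np.sum(np.arange(1, n + 1) * x) - (n + 1) * total) / (n * total)

# hypothetical physicians-per-population values across four regions
even = gini([10, 10, 10, 10])    # evenly distributed -> 0.0
skewed = gini([0, 0, 0, 40])     # fully concentrated in one region -> 0.75
```

A fall in the index between two years, as reported for physicians and specialists above, corresponds to the distribution becoming more even.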

A Variable Latency Newton-Raphson's Floating Point Number Reciprocal Square Root Computation (가변 시간 뉴톤-랍손 부동소수점 역수 제곱근 계산기)

  • Kim Sung-Gi;Cho Gyeong-Yeon
    • The KIPS Transactions:PartA
    • /
    • v.12A no.5 s.95
    • /
    • pp.413-420
    • /
    • 2005
  • The Newton-Raphson iterative algorithm for finding a floating point reciprocal square root calculates it by performing a fixed number of multiplications. In this paper, a variable latency Newton-Raphson reciprocal square root algorithm is proposed that performs multiplications a variable number of times, until the error becomes smaller than a given value. To find the reciprocal square root of a floating point number F, the algorithm repeats the operation $X_{i+1}=\frac{X_i(3-e_r-F{X_i}^2)}{2}$, $i\in\{0,1,2,\ldots,n-1\}$, with the initial value $X_0=\frac{1}{\sqrt{F}}\pm e_0$. The bits to the right of the p fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of p is 28 for single precision floating point and 58 for double precision floating point. Let $X_i=\frac{1}{\sqrt{F}}\pm e_i$; then $X_{i+1}=\frac{1}{\sqrt{F}}-e_{i+1}$, where $e_{i+1}<\frac{3\sqrt{F}{e_i}^2}{2}\mp\frac{F{e_i}^3}{2}+2e_r$. If $\left|\frac{3-e_r-F{X_i}^2}{2}-1\right|<2^{-\frac{p}{2}}$ holds, then $e_{i+1}<8e_r$, which is less than the smallest number representable by the floating point format, so $X_{i+1}$ is an adequate approximation of $\frac{1}{\sqrt{F}}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal square root tables ($X_0=\frac{1}{\sqrt{F}}\pm e_0$) of varying sizes. The superiority of this algorithm is proved by comparing this average number with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm only performs multiplications until the error becomes smaller than a given value, it can be used to improve the performance of a reciprocal square root unit. It can also be used to construct optimized approximate reciprocal square root tables. The results of this paper can be applied to many areas that utilize floating point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
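
The variable-latency iteration described above can be sketched in software; the truncation to p fractional bits and the crude initial seed are illustrative stand-ins for the hardware multiplier and lookup table:

```python
def recip_sqrt(F, p=28, x0=None, max_iter=20):
    """Variable-latency Newton-Raphson reciprocal square root sketch.
    Intermediate products are truncated to p fractional bits (mimicking a
    fixed-point multiplier); iteration stops once the correction factor
    (3 - F*x^2)/2 is within 2**(-p/2) of 1, so the latency is data-dependent."""
    trunc = lambda v: int(v * 2 ** p) / 2 ** p  # drop bits right of p fractional bits
    # crude seed with ~1% error; a real unit would read X0 from a table
    x = x0 if x0 is not None else 1.01 / (F ** 0.5)
    for n in range(1, max_iter + 1):
        corr = (3.0 - trunc(trunc(x * x) * F)) / 2.0
        x = trunc(x * corr)
        if abs(corr - 1.0) < 2.0 ** (-p / 2):
            return x, n  # converged after a variable number of iterations
    return x, max_iter
```

For well-behaved inputs the loop exits after two or three iterations, which is the source of the average-multiplication savings the abstract reports.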

Noise-robust electrocardiogram R-peak detection with adaptive filter and variable threshold (적응형 필터와 가변 임계값을 적용하여 잡음에 강인한 심전도 R-피크 검출)

  • Rahman, MD Saifur;Choi, Chul-Hyung;Kim, Si-Kyung;Park, In-Deok;Kim, Young-Pil
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.12
    • /
    • pp.126-134
    • /
    • 2017
  • There have been numerous studies on extracting the R-peak from electrocardiogram (ECG) signals. However, most detection methods are complicated to implement in a real-time portable electrocardiograph and have the disadvantage of requiring a large amount of computation. R-peak detection requires pre-processing and post-processing related to baseline drift and the removal of commercial power-line noise from the ECG data. An adaptive filter technique is widely used for R-peak detection, but the R-peak cannot be detected when the input is lower than a threshold value. Moreover, noise can produce an erroneous threshold value, causing P-peaks and T-peaks to be detected by mistake. We propose a robust R-peak detection algorithm with low complexity and simple computation to solve these problems. The proposed scheme removes the baseline drift in ECG signals using an adaptive filter to solve the problems involved in threshold extraction. We also propose a technique to extract the appropriate threshold value automatically using the minimum and maximum values of the filtered ECG signal, and a threshold neighborhood search technique to detect the R-peak in the ECG signal. Through experiments, we confirmed the improved R-peak detection accuracy of the proposed method and achieved a detection speed suitable for a mobile system by reducing the amount of calculation. The experimental results show that the heart rate detection accuracy and sensitivity were very high (about 100%).
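
A minimal sketch of the min/max-based variable-threshold idea on an already baseline-corrected signal; the constants `frac` and `refractory` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def detect_r_peaks(ecg, fs, frac=0.6, refractory=0.2):
    """Threshold-based R-peak detection sketch. The threshold is derived
    automatically from the min/max of the filtered signal; a refractory
    gap suppresses nearby P/T waves."""
    x = np.asarray(ecg, dtype=float)
    thr = x.min() + frac * (x.max() - x.min())  # variable threshold
    min_gap = int(refractory * fs)              # samples to skip after a peak
    peaks = []
    for i in range(1, len(x) - 1):
        if x[i] >= thr and x[i] >= x[i - 1] and x[i] > x[i + 1]:
            if not peaks or i - peaks[-1] > min_gap:
                peaks.append(i)
            elif x[i] > x[peaks[-1]]:
                peaks[-1] = i                   # keep the larger local maximum
    return peaks
```

Because the threshold tracks the filtered signal's range rather than being fixed, it adapts per recording, which is the property the abstract relies on.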

Application of the Homogenization Analysis to Calculation of a Permeability Coefficient (투수계수 산정을 위한 균질화 해석법의 적응)

  • 채병곤
    • Journal of Soil and Groundwater Environment
    • /
    • v.9 no.1
    • /
    • pp.79-86
    • /
    • 2004
  • Hydraulic conductivity along a rock fracture depends mainly on fracture geometry, such as orientation, aperture, roughness, and connectivity. A fracture model used in numerical analysis to calculate the permeability coefficient of a fracture therefore needs to take fracture geometry sufficiently into account. This study performed a new type of numerical analysis, using a homogenization analysis method, to calculate the permeability coefficient accurately along single fractures, with several fracture models that considered fracture geometry as fully as possible. First of all, fracture roughness and the aperture variation caused by normal stress applied to a fracture were directly measured under a confocal laser scanning microscope (CLSM). The acquired geometric data were used as input data to construct fracture models for the homogenization analysis (HA). Using the constructed fracture models, the homogenization analysis method can compute the permeability coefficient with consideration of material properties at both the microscale and the macroscale. The HA is a new type of perturbation theory developed to characterize the behavior of a micro-inhomogeneous material with a periodic microstructure. It calculates the microscale permeability coefficient in the homogeneous microscale domain, and then computes a homogenized permeability coefficient (C-permeability coefficient) at the macroscale. Therefore, it is possible to analyze the characteristics of permeability accurately, reflecting the local effect of fracture geometry. Several HA computations were conducted on the constructed 2-D fracture models to validate the HA results against the empirical permeability equations of previous studies. The models can be classified as parallel plate models that have fracture roughness and an identical aperture along the fracture. 
According to the computation results, the C-permeability coefficients are of the same order as, or within one order of, the permeability coefficients calculated by an empirical equation. This means that the HA results are valid for calculating the permeability coefficient along a fracture. It should be noted, however, that the C-permeability coefficient is more accurate than the preexisting permeability equations, because the HA considers the permeability characteristics of locally inhomogeneous fracture geometry and material properties at both the microscale and the macroscale.
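
For reference, the parallel plate model mentioned above is the usual empirical benchmark (the cubic law), and is trivial to compute; the aperture and fluid properties below are illustrative values, not data from the study:

```python
def parallel_plate_permeability(aperture_m):
    """Cubic-law (parallel plate) intrinsic permeability: k = b^2 / 12
    for a smooth fracture of uniform aperture b (metres)."""
    return aperture_m ** 2 / 12.0

def hydraulic_conductivity(aperture_m, rho=1000.0, g=9.81, mu=1.0e-3):
    """Hydraulic conductivity K = rho*g*b^2 / (12*mu); water at roughly
    20 C is assumed for the density and viscosity defaults."""
    return rho * g * aperture_m ** 2 / (12.0 * mu)

# a 100-micrometre aperture fracture, purely illustrative
k = parallel_plate_permeability(100e-6)   # intrinsic permeability, m^2
K = hydraulic_conductivity(100e-6)        # hydraulic conductivity, m/s
```

The HA result being within one order of magnitude of such an empirical estimate is what the abstract uses as its validity check.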

Estimation of river discharge using satellite-derived flow signals and artificial neural network model: application to imjin river (Satellite-derived flow 시그널 및 인공신경망 모형을 활용한 임진강 유역 유출량 산정)

  • Li, Li;Kim, Hyunglok;Jun, Kyungsoo;Choi, Minha
    • Journal of Korea Water Resources Association
    • /
    • v.49 no.7
    • /
    • pp.589-597
    • /
    • 2016
  • In this study, we investigated the use of satellite-derived flow (SDF) signals and a data-based model for the estimation of outflow in river reaches where in situ measurements are either completely unavailable or difficult to access for hydraulic and hydrological analysis, such as the upper basin of the Imjin River. Many studies have demonstrated that SDF signals can be used as estimates of river width, and that the correlation between SDF signals and river width is related to the shape of the cross sections. To extract the nonlinear relationship between SDF signals and river outflow, an Artificial Neural Network (ANN) model with SDF signals as its inputs was applied to compute the flow discharge at Imjin Bridge on the Imjin River. Fifteen pixels were considered for extracting SDF signals, and the Partial Mutual Information (PMI) algorithm was applied to identify the most relevant input variables among 150 candidate SDF signals (including 0-10 day lagged observations). The discharges estimated by the ANN model were compared with those measured at the Imjin Bridge gauging station; the correlation coefficients for training and validation were 0.86 and 0.72, respectively. When the discharge at Imjin Bridge from one day earlier was included as an additional ANN input variable, the correlation coefficients improved to 0.90 and 0.83, respectively. Based on these results, SDF signals along with some locally measured data can play a useful role in river flow estimation, and especially in flood forecasting for data-scarce regions, as they can simulate the peak discharge and peak time of flood events with satisfactory accuracy.
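
A hedged sketch of how the 0-10 day lagged candidate inputs might be assembled from one SDF pixel signal; plain Pearson correlation is used here as a simple stand-in for the PMI selection actually used in the paper:

```python
import numpy as np

def lagged_candidates(signal, max_lag=10):
    """Build lag-0..max_lag versions of one signal, aligned so that
    row t holds [s(t), s(t-1), ..., s(t-max_lag)]."""
    s = np.asarray(signal, dtype=float)
    rows = len(s) - max_lag
    return np.column_stack(
        [s[max_lag - k : max_lag - k + rows] for k in range(max_lag + 1)]
    )

def rank_by_correlation(X, y):
    """Rank candidate columns by |Pearson r| with the target discharge y.
    This is only a crude proxy; PMI also accounts for redundancy between
    already-selected inputs, which correlation ranking ignores."""
    r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(r)[::-1]
```

With 15 pixels and 11 lags each, this construction yields the 150-plus candidate columns mentioned in the abstract, from which the most informative few are kept as ANN inputs.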

Development of Highway Safety Evaluation Considering Design Consistency using Acceleration (가속도를 고려한 도로의 설계일관성 평가기법에 관한 연구)

  • 하태준;박제진;김유철
    • Journal of Korean Society of Transportation
    • /
    • v.21 no.1
    • /
    • pp.127-136
    • /
    • 2003
  • Road safety is defined under the minimum design standard, and the design examination process consists of checks against that standard in current road design. In practice, however, road safety is related not only to every element of the road but also to road shape, such as the transitions between tangents and curves and between successive curves, and to alignment, namely horizontal alignment, vertical alignment, and cross section. That is, a practical road design should be examined in terms of both three-dimensionality and consecutiveness (consistency), as an actual road is a 3-dimensional successive object. This paper presents a concept of acceleration to evaluate road consistency considering the actual road shape in 3 dimensions. Vehicle acceleration affects road consistency through the running state of the vehicle and the state of the driver; the magnitude of acceleration, especially, is a quite influential element for drivers. On this basis, the acceleration at each point of a 3-D road can be calculated from the displacement, where the computation of acceleration means the total calculation over each axis. The speed profile follows "Development of a safety evaluation model for highway horizontal alignment based on running speed" (Jeong, Jun-Hwa, 2001), and acceleration can then be calculated using this speed profile. Based on a literature review, the definition of acceleration in 3-D and the g-g-g diagram are established; as a result of the evaluation, if the acceleration is out of range, the road is out of consistency. The paper shows the calculation of the change of acceleration on an imaginary road built to the minimum design standard, and applies that change to the consistency evaluation. However, accurate acceleration is not obtained, because the speed forecasting model is limited and the paper does not consider the state of the vehicles (suspension, tires, and vehicle model). If the speed profile is defined exactly, acceleration can be calculated for all road shapes, such as compound curves and clothoid curves, and then applied to the consistency evaluation. Unfortunately, speed forecasting models for 3-D roads and compound curves have rarely been presented. A speed forecasting model and a speed profile model need to be established, and a standard for consistency evaluation needs to be developed and verified with experimental vehicles.
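
The per-axis acceleration computation described above can be sketched with finite differences over a sampled 3-D alignment; the comfort limit is an illustrative placeholder, not a value from the paper:

```python
import numpy as np

def acceleration_magnitude(positions, dt):
    """Total acceleration along a 3-D alignment from sampled positions.
    Each axis (x, y, z) is differentiated twice, and the magnitude is the
    per-sample vector norm: the 'total calculation over each axis'."""
    p = np.asarray(positions, dtype=float)  # shape (n, 3)
    v = np.gradient(p, dt, axis=0)          # velocity components per axis
    a = np.gradient(v, dt, axis=0)          # acceleration components per axis
    return np.linalg.norm(a, axis=1)        # |a| at each sample

def exceeds_limit(acc_mag, limit=3.0):
    """Flag samples where |a| exceeds an illustrative comfort threshold
    (m/s^2); out-of-range points would mark a consistency violation."""
    return acc_mag > limit
```

Given a speed profile mapped onto the 3-D geometry, the same double differentiation yields the acceleration change along the route that the evaluation uses.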

Multiple Linear Analysis for Generating Parametric Images of Irreversible Radiotracer (비가역 방사성추적자 파라메터 영상을 위한 다중선형분석법)

  • Kim, Su-Jin;Lee, Jae-Sung;Lee, Won-Woo;Kim, Yu-Kyeong;Jang, Sung-June;Son, Kyu-Ri;Kim, Hyo-Cheol;Chung, Jin-Wook;Lee, Dong-Soo
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.4
    • /
    • pp.317-325
    • /
    • 2007
  • Purpose: Biological parameters can be quantified from dynamic PET data with compartment modeling and Nonlinear Least Squares (NLS) estimation. However, generating parametric images with NLS is not practical because of the initial-value problem and excessive computation time. For irreversible models, Patlak graphical analysis (PGA) has been commonly used as an alternative to the NLS method. In PGA, however, the start time ($t^*$, the time where the linear phase starts) has to be determined. In this study, we suggest a new Multiple Linear Analysis for irreversible radiotracers (MLAIR) to estimate the fluoride bone influx rate (Ki). Methods: $[^{18}F]Fluoride$ dynamic PET scans were acquired for 60 min in three normal mini-pigs. The plasma input curve was derived from blood sampling of the femoral artery. Tissue time-activity curves were measured by drawing regions of interest (ROIs) on the femur head, vertebra, and muscle. Parametric images of Ki were generated using the MLAIR and PGA methods. Results: In the ROI analysis, the Ki values estimated using the MLAIR and PGA methods were slightly higher than those of NLS, but the results of MLAIR and PGA were equivalent. Patlak slopes (Ki) changed with different $t^*$ values in low-uptake regions. Compared with PGA, the quality of the parametric image was considerably improved using the new method. Conclusion: The results showed that MLAIR is an efficient and robust method for generating Ki parametric images from $[^{18}F]Fluoride$ PET. It will also be a good alternative to PGA for radiotracers following an irreversible three-compartment model.
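
For reference, the PGA baseline against which MLAIR is compared fits a line to the Patlak-transformed data after $t^*$: plotting $C_t(t)/C_p(t)$ against $\int_0^t C_p\,d\tau / C_p(t)$ gives a straight line of slope Ki for an irreversible tracer. A minimal sketch with synthetic curves:

```python
import numpy as np

def patlak_ki(t, cp, ct, t_star):
    """Patlak graphical analysis sketch: after t*, the transformed data
    y = Ct/Cp vs x = int(Cp)/Cp are linear with slope Ki (influx rate)
    and intercept V0 (initial distribution volume)."""
    t, cp, ct = (np.asarray(a, dtype=float) for a in (t, cp, ct))
    # cumulative trapezoidal integral of the plasma input curve
    int_cp = np.concatenate(
        ([0.0], np.cumsum((cp[1:] + cp[:-1]) / 2.0 * np.diff(t)))
    )
    mask = t >= t_star
    x = int_cp[mask] / cp[mask]
    y = ct[mask] / cp[mask]
    ki, v0 = np.polyfit(x, y, 1)  # slope = Ki, intercept = V0
    return ki
```

The sensitivity of this slope to the choice of $t^*$ in low-uptake regions is exactly the weakness that MLAIR is designed to avoid.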

Design and Implementation of an Execution-Provenance Based Simulation Data Management Framework for Computational Science Engineering Simulation Platform (계산과학공학 플랫폼을 위한 실행-이력 기반의 시뮬레이션 데이터 관리 프레임워크 설계 및 구현)

  • Ma, Jin;Lee, Sik;Cho, Kum-won;Suh, Young-kyoon
    • Journal of Internet Computing and Services
    • /
    • v.19 no.1
    • /
    • pp.77-86
    • /
    • 2018
  • For the past few years, KISTI has been servicing an online simulation execution platform, called EDISON, allowing users to conduct simulations on various scientific applications supplied by diverse computational science and engineering disciplines. Typically, these simulations involve large-scale computation and accordingly produce a huge volume of output data. One critical issue arising when conducting such simulations on an online platform stems from the fact that many users simultaneously submit simulation requests (or jobs) with the same (or almost unchanged) input parameters or files, placing a significant burden on the platform. In other words, identical computing jobs lead to duplicate consumption of computing and storage resources at an undesirably fast pace. To overcome excessive resource usage by such identical simulation requests, in this paper we introduce a novel framework, called IceSheet, to efficiently manage simulation data based on execution metadata, that is, provenance. The IceSheet framework captures and stores the provenance associated with each conducted simulation. The collected provenance records are utilized not only for inspecting duplicate simulation requests but also for searching existing simulation results via an open-source search engine, ElasticSearch. In particular, this paper elaborates on the core components in the IceSheet framework that support search and reuse of the stored simulation results. We implemented the proposed framework as a prototype using the search engine in conjunction with the online simulation execution platform. Our evaluation of the framework was performed on real simulation execution-provenance records collected on the platform. 
Once the prototyped IceSheet framework fully functions with the platform, users will be able to quickly search for past parameter values entered into the desired simulation software and, if available, receive existing results for the same input parameter values. Therefore, we expect that the proposed framework will contribute to eliminating duplicate resource consumption and significantly reducing execution time for requests identical to previously executed simulations.
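
The duplicate-request inspection could, in principle, be reduced to hashing a canonical form of each request's provenance. This is a hypothetical sketch only: `request_fingerprint`, `submit`, and the field names are illustrative assumptions, not the actual IceSheet design:

```python
import hashlib
import json

def request_fingerprint(solver, params, input_files):
    """Canonical SHA-256 fingerprint over the solver name, parameters,
    and input-file identifiers. sort_keys/sorted() make the hash stable
    regardless of submission order. Schema is illustrative."""
    payload = json.dumps(
        {"solver": solver, "params": params, "files": sorted(input_files)},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

seen = {}  # fingerprint -> cached result (a search index in a real system)

def submit(solver, params, files, run):
    """Return a cached result when an identical request was already run;
    only novel requests consume compute via run()."""
    key = request_fingerprint(solver, params, files)
    if key not in seen:
        seen[key] = run()
    return seen[key]
```

A production system would store fingerprints alongside the provenance records (e.g. in ElasticSearch) rather than in an in-memory dict, but the dedup logic is the same.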

External Gravity Field in the Korean Peninsula Area (한반도 지역에서의 상층중력장)

  • Jung, Ae Young;Choi, Kwang-Sun;Lee, Young-Cheol;Lee, Jung Mo
    • Economic and Environmental Geology
    • /
    • v.48 no.6
    • /
    • pp.451-465
    • /
    • 2015
  • The free-air anomalies are computed using a data set drawn from various types of gravity measurements in the Korean Peninsula area; gravity values extracted from the Earth Gravitational Model 2008 are used in the surrounding region. The upward continuation technique suggested by Dragomir is used to compute the external free-air anomalies at various altitudes. An integration radius of 10 times the altitude is used in order to maintain the accuracy of the results while limiting the computational cost, and the direct geodesic formula developed by Bowring is employed in the integration. At the 1-km altitude, the free-air anomalies vary from -41.315 to 189.327 mgal with a standard deviation of 22.612 mgal. At the 3-km altitude, they vary from -36.478 to 156.209 mgal with a standard deviation of 20.641 mgal. At the 1,000-km altitude, they vary from 3.170 to 5.864 mgal with a standard deviation of 0.670 mgal. The predicted free-air anomalies at the 3-km altitude are compared to the published free-air anomalies reduced from airborne gravity measurements at the same altitude. The rms difference is 3.88 mgal; considering the reported 2.21-mgal airborne gravity cross-over accuracy, this rms difference is not serious. Possible causes of the difference appear to be external free-air anomaly simulation errors in this work and/or gravity reduction errors in the airborne data. The external gravity field is predicted by adding the external free-air anomaly to the normal gravity computed using the closed-form formula for gravity above and below the surface of the ellipsoid. The external gravity field predicted in this work is expected to represent the real external gravity field reasonably well. This work appears to be the first structured research on the external free-air anomaly in the Korean Peninsula area, and the external gravity field can be used to improve the accuracy of inertial navigation systems.
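
The closed-form normal gravity on the ellipsoid is the Somigliana formula; WGS84 constants are used below as an assumption (the abstract does not state which ellipsoid was adopted), together with the standard 0.3086 mGal/m free-air gradient:

```python
import math

def normal_gravity(lat_deg):
    """Somigliana closed-form normal gravity on the ellipsoid (m/s^2),
    using WGS84 constants: gamma_e, k, and e^2."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return (9.7803253359 * (1.0 + 0.00193185265241 * s2)
            / math.sqrt(1.0 - 0.00669437999014 * s2))

def free_air_anomaly_mgal(g_obs_mgal, lat_deg, h_m):
    """Free-air anomaly: observed gravity minus latitude-dependent normal
    gravity, corrected to the ellipsoid with the standard free-air
    gradient of 0.3086 mGal per metre of height."""
    gamma_mgal = normal_gravity(lat_deg) * 1e5  # m/s^2 -> mGal
    return g_obs_mgal - gamma_mgal + 0.3086 * h_m
```

Adding the upward-continued anomaly back to the normal gravity evaluated at altitude, as the abstract describes, then reconstructs the external gravity field.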

An Investigation of Glyceollin I's Inhibitory Effect on the Mammalian Adenylyl Cyclase (글리세올린 I의 아데니닐 고리화 효소 활성 억제 효능과 결합 부위 비교 분석)

  • Kim, Dong-Chan;Kim, Nam Doo;Kim, Sung In;Jang, Chul-Soo;Kweon, Chang Oh;Kim, Byung Weon;Ryu, Jae-Ki;Kim, Hyun-Kyung;Lee, Suk Jun;Lee, Seungho;Kim, Dongjin
    • Journal of Life Science
    • /
    • v.23 no.5
    • /
    • pp.609-615
    • /
    • 2013
  • Glyceollin I has gained attention as a useful therapy for various dermatological diseases. However, the binding of glyceollin I to mammalian adenylyl cyclase (hereafter mAC), a critical target enzyme for the down-regulation of skin melanogenesis, has not been fully explored. To clarify the mechanism of action between glyceollin I and mAC, we first investigated the molecular docking of glyceollin I to mAC and compared it with that of SQ22,536, a well-known mAC inhibitor. Glyceollin I showed superiority by forming three hydrogen bonds, with Asp 1018, Trp 1020, and Asn 1025, which lie in the catalytic site of mAC, whereas SQ22,536 formed only two hydrogen bonds, with Asp 1018 and Asn 1025. Secondly, we confirmed in a cell-based assay that glyceollin I effectively inhibits the formation of forskolin-induced cAMP and the phosphorylation of PKA. Long-term treatment with glyceollin I had little effect on cell viability. The findings of the present study also suggest that glyceollin I may find extended use as an effective inhibitor of hyperpigmentation.