• Title/Summary/Keyword: Complex algorithm


Customizable Global Job Scheduler for Computational Grid (계산 그리드를 위한 커스터마이즈 가능한 글로벌 작업 스케줄러)

  • Hwang Sun-Tae;Heo Dae-Young
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.7
    • /
    • pp.370-379
    • /
    • 2006
  • A computational grid provides an environment that integrates various computing resources. A grid environment is more complex and heterogeneous than a traditional computing environment: it consists of diverse resources on which different software packages are installed on different platforms. For more efficient use of a computational grid, therefore, some form of integration is required to manage grid resources effectively. In this paper, a global scheduler is proposed that integrates grid resources at the meta level and supports various scheduling policies. The global scheduler consists of a mechanical part and three policies. The mechanical part searches user queues and resource queues to select an appropriate job and computing resource; an algorithm for the mechanical part is defined and optimized. The three policies are the user-selecting policy, the resource-selecting policy, and the executing policy. Each can be redefined and replaced freely while operation of the computational grid is temporarily suspended. The user-selecting policy, for example, can be defined to give a certain user higher priority than others; the resource-selecting policy selects the computing resource best matched to the user's requirements; and the executing policy is intended to overcome communication overheads in the grid middleware. Finally, various algorithms for the user-selecting policy are defined solely in terms of user fairness, and their performance is compared.
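The division of labor described above (a fixed mechanical part plus replaceable policies) can be sketched as follows. This is an illustrative assumption of the design, not the paper's code; all names (`schedule_once`, `fifo_user_policy`, the dictionary fields) are hypothetical.

```python
from collections import deque

def fifo_user_policy(user_queues):
    """User-selecting policy: pick the first user with a pending job."""
    for user, jobs in user_queues.items():
        if jobs:
            return user
    return None

def first_fit_resource_policy(job, resources):
    """Resource-selecting policy: first idle resource meeting the job's needs."""
    for res in resources:
        if res["idle"] and res["cpus"] >= job["cpus"]:
            return res
    return None

def schedule_once(user_queues, resources,
                  user_policy=fifo_user_policy,
                  resource_policy=first_fit_resource_policy):
    """Mechanical part: scan the queues, match one job to one resource.

    The policies are plain function arguments, so they can be swapped
    while the scheduler itself is held, as the paper describes.
    """
    user = user_policy(user_queues)
    if user is None:
        return None
    job = user_queues[user][0]
    res = resource_policy(job, resources)
    if res is None:
        return None
    user_queues[user].popleft()
    res["idle"] = False
    return (user, job["id"], res["name"])
```

Because the policies are injected rather than hard-coded, a fairness-oriented user-selecting policy can replace `fifo_user_policy` without touching the mechanical part.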

A Study on the Analysis of Physical Function in Adults with Sarcopenia (근감소증 성인의 신체 기능 분석)

  • Kim, Myungchul;Kim, Haein;Park, Sangwoong;Cho, Ilhoon;Yu, Wonjong
    • Journal of The Korean Society of Integrative Medicine
    • /
    • v.8 no.2
    • /
    • pp.199-209
    • /
    • 2020
  • Purpose : This study applied the sarcopenia diagnostic algorithm proposed by the Asian Working Group to adults over 50 in order to diagnose sarcopenia and analyze physical function, with the aim of providing basic data for the management and prevention of sarcopenia. Methods : We performed a diagnostic evaluation of sarcopenia in 97 adults over the age of 50 with the cooperation of the Seongnam senior experience complex in Seongnam-si, Gyeonggi-do. As a result of the diagnostic process, 24 subjects were placed in the sarcopenia group and 73 in the normal group. We took each subject's body measurements, performed the timed up and go test to evaluate functional mobility, and administered a questionnaire on locomotive syndrome and its pre-symptoms. Results : There were statistically significant differences in height, weight, and skeletal muscle mass between the two groups, as well as in the timed up and go test, confirming a difference in functional mobility. There was also a statistically significant difference between the two groups in the proportion and mean questionnaire score of subjects with locomotive syndrome and its pre-symptoms. In the correlation analysis, grip strength was significantly correlated with height, weight, skeletal muscle mass, waist circumference, the timed up and go test, and locomotive syndrome and its pre-symptoms. Gait speed was significantly correlated with the timed up and go test and locomotive syndrome. The appendicular skeletal muscle index was significantly correlated with height, weight, waist circumference, hip circumference, and the pre-symptoms of locomotive syndrome. Conclusion : Sarcopenia is closely related to height, weight, skeletal muscle mass, and functional mobility, as well as to locomotive syndrome and its pre-symptoms; the prevention and management of sarcopenia should take this into account.

Preliminary Study on the Enhancement of Reconstruction Speed for Emission Computed Tomography Using Parallel Processing (병렬 연산을 이용한 방출 단층 영상의 재구성 속도향상 기초연구)

  • Park, Min-Jae;Lee, Jae-Sung;Kim, Soo-Mee;Kang, Ji-Yeon;Lee, Dong-Soo;Park, Kwang-Suk
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.443-450
    • /
    • 2009
  • Purpose: Conventional image reconstruction uses simplified physical models of projection. Realistic physics, for example full 3D reconstruction, takes too long to process all the data in the clinic and is infeasible on a common reconstruction machine because complex physical models require large amounts of memory. We propose a realistic distributed-memory model for fast reconstruction using parallel processing on personal computers to make such large-scale techniques feasible. Materials and Methods: Preliminary feasibility tests on virtual machines and various performance tests on a commercial supercomputer, Tachyon, were performed. The expectation maximization algorithm was tested with a common 2D projector and a realistic 3D line-of-response model. Since processing slowed (up to 6 times) after a certain number of iterations, compiler optimization was performed to maximize the efficiency of parallelization. Results: Parallel processing of a program across multiple computers was achieved on Linux with MPICH and NFS. We verified that, at the same iteration, the differences between the parallel-processed image and the single-processed image were below the significant digits of floating-point numbers, about 6 bits. Two processors showed good parallel-computing efficiency (a 1.96-times speedup). The slowdown phenomenon was solved by vectorization using SSE. Conclusion: Through this study, a realistic parallel computing system for the clinic was established, able to reconstruct with ample memory using realistic physical models that cannot be simplified.
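The data decomposition the abstract relies on can be illustrated with one MLEM (expectation maximization) update split across workers. This is a NumPy sketch, not the paper's MPICH code: projection rows are divided into chunks (as MPI ranks would hold them) and the partial back-projections are summed, which should agree with the single-process update to floating-point precision, mirroring the paper's verification.

```python
import numpy as np

def mlem_update(x, A, y):
    """One MLEM iteration: x <- x * A^T(y / Ax) / A^T 1."""
    sens = A.T @ np.ones(len(y))              # sensitivity image A^T 1
    ratio = y / np.maximum(A @ x, 1e-12)      # measured / estimated projections
    return x * (A.T @ ratio) / np.maximum(sens, 1e-12)

def mlem_update_parallel(x, A, y, n_workers=2):
    """Same update with projection rows split into n_workers chunks,
    mimicking the distributed-memory decomposition; the per-chunk
    partial sums play the role of an MPI reduction."""
    back = np.zeros_like(x)
    sens = np.zeros_like(x)
    for Ac, yc in zip(np.array_split(A, n_workers), np.array_split(y, n_workers)):
        sens += Ac.T @ np.ones(len(yc))
        back += Ac.T @ (yc / np.maximum(Ac @ x, 1e-12))
    return x * back / np.maximum(sens, 1e-12)
```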

Application of neural network for airship take-off and landing mode by buoyancy control (기낭 부력 제어에 의한 비행선 이착륙의 인공신경망 적용)

  • Chang, Yong-Jin;Woo, Gui-Ae;Kim, Jong-Kwon;Lee, Dae-Woo;Cho, Kyeum-Rae
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.33 no.2
    • /
    • pp.84-91
    • /
    • 2005
  • For a long time, airship takeoff and landing were controlled manually. With the development of autonomous control systems, exact control during takeoff and landing became necessary, and many methods and algorithms have been suggested. This paper presents results for airship takeoff and landing by buoyancy control using air-ballonet volume change, together with pitch-angle control for stable flight at the desired altitude. Because of the complexity of the airship's dynamics, a simple PID controller was applied first; due to varying atmospheric conditions, this controller did not give satisfactory results. Therefore, a new control method was designed to rapidly reduce the error between the designed and actual trajectories by a learning algorithm using an artificial neural network. In general, ANNs have weaknesses such as long training times and the need to select the numbers of neurons and hidden layers required for complex problems. To overcome these drawbacks, an RBFN (radial basis function network) controller was developed in this paper. The RBFN weights are acquired by learning so as to reduce the error between the desired and actual outputs through the airship dynamics in the presence of disturbances. Simulation results show that the RBFN controller is superior to the PID controller, whose maximum error is 15 m.
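A minimal RBFN sketch clarifies why training is cheap compared with a general multilayer network: with Gaussian centers fixed, only the linear output weights are learned. Here the weights are solved by batch least squares on a toy function; this is one common choice and an assumption on my part, since the paper trains online against the airship dynamics.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian radial basis activations for 1-D inputs x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def rbfn_fit(x, y, centers, width):
    """Solve the output weights w minimizing ||Phi w - y||^2.

    Only this linear layer is trained; centers and widths stay fixed,
    which is the source of the RBFN's fast learning."""
    Phi = rbf_design(x, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbfn_predict(x, w, centers, width):
    return rbf_design(x, centers, width) @ w
```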

An Energy-Efficient Clustering Using Division of Cluster in Wireless Sensor Network (무선 센서 네트워크에서 클러스터의 분할을 이용한 에너지 효율적 클러스터링)

  • Kim, Jong-Ki;Kim, Yoeng-Won
    • Journal of Internet Computing and Services
    • /
    • v.9 no.4
    • /
    • pp.43-50
    • /
    • 2008
  • Various studies are being conducted to achieve efficient routing and reduce energy consumption in wireless sensor networks, where replacing a node's energy source is difficult. Among routing mechanisms, the clustering technique is known to be the most efficient. The clustering technique consists of cluster construction and data transmission. Cluster construction is repeated at regular intervals in order to equalize energy consumption among the sensor nodes in a cluster. However, the algorithms for selecting a cluster head node and assigning the cluster member nodes best suited to it are complex and consume considerable energy. Furthermore, energy consumption for data transmission is proportional to $d^2$ or $d^4$ around the crossover distance. This paper proposes a means of reducing energy consumption by increasing the efficiency of the regularly repeated cluster-construction step. The proposed approach keeps the number of sensor nodes per cluster roughly constant by equally partitioning, with node density taken into account, the region in which clusters are constructed, and it reduces energy consumption by selecting head nodes near the center of each cluster. Simulation experiments confirmed that the proposed approach consumes less energy than the LEACH algorithm.
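The two steps the abstract describes, equal partitioning of the field and head election near the cell center, can be sketched as below. The square field, the uniform grid, and the function names are illustrative assumptions, not the paper's implementation.

```python
import math

def assign_cells(nodes, field, grid):
    """Partition a field x field region into a grid x grid lattice and
    map each (x, y) node to its (row, col) cell, so cluster sizes stay
    roughly equal when nodes are spread over the field."""
    w = field / grid
    cells = {}
    for x, y in nodes:
        key = (min(int(y // w), grid - 1), min(int(x // w), grid - 1))
        cells.setdefault(key, []).append((x, y))
    return cells

def elect_heads(cells, field, grid):
    """Per cell, elect the node nearest the cell center as cluster head,
    shortening intra-cluster transmission distances."""
    w = field / grid
    heads = {}
    for (r, c), members in cells.items():
        cx, cy = (c + 0.5) * w, (r + 0.5) * w
        heads[(r, c)] = min(members, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    return heads
```

Because transmission energy grows like $d^2$ (or $d^4$ past the crossover), electing the head near the geometric center keeps member-to-head distances, and hence energy per round, small.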


Extraction of Network Threat Signatures Using Latent Dirichlet Allocation (LDA를 활용한 네트워크 위협 시그니처 추출기법)

  • Lee, Sungil;Lee, Suchul;Lee, Jun-Rak;Youm, Heung-youl
    • Journal of Internet Computing and Services
    • /
    • v.19 no.1
    • /
    • pp.1-10
    • /
    • 2018
  • Network threats such as Internet worms and computer viruses have increased significantly. In particular, APTs (Advanced Persistent Threats) and ransomware have become clever and complex. IDSes (Intrusion Detection Systems) have played a key role as information security solutions over the last few decades. To use an IDS effectively, its rules must be written properly. An IDS rule includes a key signature and is incorporated into the IDS; a network threat containing that signature can then be detected as it passes through the IDS. However, finding a key signature for a specific network threat is challenging: the threat must first be analyzed rigorously, and a proper IDS rule written based on the analysis result. If a signature common to benign or normal network traffic is used, many false alarms will be observed. In this paper, we propose a scheme that analyzes a network threat and extracts key signatures corresponding to it. Specifically, the proposed scheme quantifies the degree of correspondence between a network threat and a signature using the LDA (Latent Dirichlet Allocation) algorithm. A signature with significant correspondence to the network threat can then be utilized as an IDS rule for detecting that threat.
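The ranking idea, candidate substrings scored by how strongly they correspond to threat traffic versus normal traffic, can be illustrated compactly. Note the swap: the paper's correspondence measure is LDA; the sketch below substitutes a plain frequency-contrast score so it stays self-contained, and all payloads and names are hypothetical.

```python
from collections import Counter

def ngrams(payload, n=4):
    """Candidate signatures: all length-n substrings of a payload."""
    return [payload[i:i + n] for i in range(len(payload) - n + 1)]

def rank_signatures(threat, benign, n=4, top=3):
    """Rank candidates by (frequency in threat traffic) minus
    (frequency in benign traffic), so substrings shared with normal
    traffic, which would cause false alarms, sink to the bottom."""
    t = Counter(g for p in threat for g in ngrams(p, n))
    b = Counter(g for p in benign for g in ngrams(p, n))
    scores = {g: t[g] / len(threat) - b.get(g, 0) / max(len(benign), 1) for g in t}
    return sorted(scores, key=scores.get, reverse=True)[:top]
```

A top-ranked candidate would then be wrapped into an IDS rule (e.g. a Snort content match), which is exactly the use the abstract describes for a high-correspondence signature.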

Design of Head Blood Pressure(HBP) Measurement System and Correlativity Extraction of Blood Pressure(BP) and HBP (두부혈압 측정 시스템의 설계 및 두부혈압과 상완혈압과의 상관성 추출)

  • 이용흠;정석준;장근중;정동영
    • Journal of Biomedical Engineering Research
    • /
    • v.24 no.5
    • /
    • pp.381-389
    • /
    • 2003
  • Various adult diseases (cerebral apoplexy, athymiait, etc.) result from hypertension, blood-circulation disturbance, and increased head blood pressure (HBP). For early diagnosis of these diseases, MRI, X-ray, and PET have been used, but they aim at treatment rather than prevention. Since cerebral apoplexy and athymiait can appear in persons with regular or irregular blood pressure, it is very important to measure HBP, which is connected with the cerebral blood-flow state. HBP offers more diagnostic elements than brachial blood pressure (BP), so accurate hypertension can be diagnosed by measuring HBP. However, existing sphygmomanometers and automatic BP monitors cannot measure HBP and cannot perform combined functions (measuring BP/HBP, blood-flow improvement). The purpose of this paper is to develop a system and algorithm that can measure both BP and HBP for accurate diagnosis. We also extracted diagnostic factors by analyzing the correlation between BP and HBP: the maximum pressure of HBP corresponds to 62 % of that of BP, and the minimum pressure of HBP to 46 % of that of BP. On this basis, we developed a multi-function automatic blood-pressure monitor that can measure BP/HBP and improve the cerebral blood-flow state.

Design of a Holter Monitoring System with Flash Memory Card (플레쉬 메모리 카드를 이용한 홀터 심전계의 설계)

  • 송근국;이경중
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.3
    • /
    • pp.251-260
    • /
    • 1998
  • The Holter monitoring system is a widely used noninvasive diagnostic tool for ambulatory patients who may be at risk from latent, life-threatening cardiac abnormalities. In this paper, we design a high-performance intelligent Holter monitoring system characterized by small size and low power consumption. The system hardware consists of a one-chip microcontroller (68HC11E9), an ECG preprocessing circuit, and a flash memory card. The ECG preprocessing circuit comprises an ECG preamplifier with gains of 250, 500, and 1000, a bandpass filter with a 0.05-100 Hz bandwidth, an auto-balancing circuit, and a saturation-calibrating circuit to eliminate baseline wandering. The ECG signal, sampled at 240 samples/sec, is converted to digital form. We use a linear recursive filter and a preprocessing algorithm to detect the ECG parameters: the QRS complex, the Q-R-T points, ST level, heart rate, and QT interval. The long-term acquired ECG signals and diagnostic parameters are compressed by the MFan (Modified Fan) and delta-modulation methods. For easy interfacing with the PC-based analyzer program running under DOS and Windows, the compressed data, compatible with the FFS (flash file system) format, are stored on the flash memory card in SBF (symmetric block format).
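Of the two compression methods named above, delta modulation is simple enough to sketch: each sample is reduced to one bit stating whether it lies above or below a running estimate, which then steps up or down by a fixed delta. This is a generic textbook sketch, not the paper's MFan code, and the step size is an arbitrary assumption.

```python
def dm_encode(samples, delta=1.0):
    """Delta modulation: 1 bit per sample, relative to a running estimate."""
    bits, est = [], 0.0
    for s in samples:
        bit = 1 if s >= est else 0          # is the sample above the estimate?
        est += delta if bit else -delta     # track the signal in +/-delta steps
        bits.append(bit)
    return bits

def dm_decode(bits, delta=1.0):
    """Rebuild the staircase approximation from the bit stream."""
    out, est = [], 0.0
    for bit in bits:
        est += delta if bit else -delta
        out.append(est)
    return out
```

The trade-off is visible in the code: one bit per 240 Hz sample gives a large compression ratio, at the cost of slope-overload error when the ECG changes faster than delta per sample.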


Stock prediction using combination of BERT sentiment Analysis and Macro economy index

  • Jang, Euna;Choi, HoeRyeon;Lee, HongChul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.5
    • /
    • pp.47-56
    • /
    • 2020
  • The stock index is used not only as an economic indicator for a country but also as an indicator for investment judgment, which is why research into predicting the stock index is ongoing. Predicting a stock price index involves technical, fundamental, and psychological factors, and complex combinations of factors must be considered for prediction accuracy. It is therefore necessary to study models that predict the stock index by selecting and reflecting the technical and auxiliary factors that affect price fluctuations. Most existing studies either use news information or the macroeconomic indicators that create market fluctuations, or reflect only a few combinations of indicators. In this paper, we propose an effective combination of news-sentiment analysis and various macroeconomic indicators for predicting the US Dow Jones Index. After crawling more than 93,000 business news articles from the New York Times over two years, the sentiment results, analyzed with the recent natural language processing techniques BERT and NLTK, were combined with macroeconomic indicators affecting the US economy (gold prices, oil prices, and five foreign exchange rates) and applied to LSTM, the prediction algorithm known to be most suitable for combining numeric and text information. Experiments with various combinations showed that the combination of DJI, NLTK, BERT, OIL, GOLD, and EURUSD yielded the smallest MSE for DJI index prediction.
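The step of merging daily sentiment scores with macro indicators into sequences an LSTM can consume can be sketched as a windowing function. The window length and the feature layout are illustrative assumptions; the LSTM itself is omitted so the sketch stays self-contained.

```python
import numpy as np

def make_windows(features, target, window=5):
    """Cut a (days, n_features) matrix into sliding windows.

    features: per-day row of [sentiment scores..., macro indicators...],
              e.g. [bert, nltk, oil, gold, eurusd] (an assumed layout).
    Returns X of shape (samples, window, n_features), i.e. the
    (batch, time, features) tensor an LSTM expects, and y = target
    aligned to the day after each window.
    """
    X = np.stack([features[t - window:t] for t in range(window, len(features))])
    y = target[window:]
    return X, y
```

Each training sample is therefore "the last `window` days of combined text and numeric features" paired with the next day's index value, which is the combination scheme the abstract evaluates.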

Development and Inspection of the Ortho-Calc v1.0 Program for the Calculation of the Orthometric Correction (정사보정량 계산을 위한 Ortho-Calc v1.0 프로그램의 개발과 검증)

  • Lee, Suk Bae;Sim, Jung Min
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.23 no.3
    • /
    • pp.41-47
    • /
    • 2015
  • To determine accurate heights, the physical height difference, the so-called orthometric correction, must be considered in addition to the geometric height difference obtained by levelling. The orthometric correction is small enough to ignore on flat land but becomes large in high mountains, so it must be applied to obtain accurate heights there. However, the calculation process is difficult and complex. To make the process easy through a user-friendly visual interface, an orthometric-correction calculation program, Ortho-Calc v1.0, was developed in this study. The program adopts the algorithms of Nassar, of Hwang & Hsiao, and of Strang van Hees, and the correction amount can be calculated selectively with any of them. Inspection by comparison with a previous study showed high accuracy, with a standard deviation of 0.024 mm. Therefore, Ortho-Calc v1.0, developed in this study, will help compute orthometric corrections quickly and easily, and if the program becomes widely used, it can be expected to contribute to renewing the official heights of benchmarks using orthometric correction.
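For readers unfamiliar with the quantity being computed, the classical orthometric correction between two benchmarks A and B has the textbook (Heiskanen-Moritz) form sketched below. This is the generic formula, not necessarily the exact Nassar or Hwang & Hsiao variant implemented in Ortho-Calc v1.0, and the numeric values in the comments are illustrative.

```python
def orthometric_correction(segments, g_mean_A, H_A, g_mean_B, H_B,
                           gamma0=980.6294):
    """Classical orthometric correction OC_AB (same units as dn).

    segments : list of (g, dn) pairs along the levelling line, where g is
               observed surface gravity (Gal) and dn the levelled height
               increment (m).
    g_mean_A, g_mean_B : mean gravity along the plumb line at A and B (Gal).
    H_A, H_B : orthometric heights of A and B (m).
    gamma0   : arbitrary constant reference gravity (Gal); the value here
               is normal gravity at 45 degrees latitude, a common choice.

    OC_AB = sum((g - gamma0)/gamma0 * dn)
            + (g_mean_A - gamma0)/gamma0 * H_A
            - (g_mean_B - gamma0)/gamma0 * H_B
    """
    line = sum((g - gamma0) / gamma0 * dn for g, dn in segments)
    return (line
            + (g_mean_A - gamma0) / gamma0 * H_A
            - (g_mean_B - gamma0) / gamma0 * H_B)
```

The formula makes the abstract's point directly: on flat land the summation has few terms and small dn, so the correction is negligible, while long mountainous lines with gravity far from gamma0 accumulate a correction too large to ignore.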