• Title/Abstract/Keywords: Vertex method

Search results: 307 (processing time: 0.021 seconds)

Imputation of Missing SST Observation Data Using Multivariate Bidirectional RNN (다변수 Bidirectional RNN을 이용한 표층수온 결측 데이터 보간)

  • Shin, YongTak; Kim, Dong-Hoon; Kim, Hyeon-Jae; Lim, Chaewook; Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers / v.34 no.4 / pp.109-118 / 2022
  • Missing sections in sea surface temperature (SST) observation data at a fixed station were imputed using a Bidirectional Recurrent Neural Network (BiRNN). Among artificial intelligence techniques, the Recurrent Neural Networks (RNNs) commonly used for time series data estimate values only along the direction of time flow, or only in the reverse direction, up to the missing position, so their estimation performance is poor over long missing sections. In this study, by estimating in both directions from the data before and after the missing section, performance can be improved even for long gaps (a model sketch follows the abstract). In addition, by using all available data around the observation point (sea surface temperature, air temperature, wind field, atmospheric pressure, and humidity) and estimating the imputed values jointly from their correlations, the imputation performance was improved further. For performance verification, a statistical model, Multivariate Imputation by Chained Equations (MICE), a machine-learning-based Random Forest model, and an RNN model using Long Short-Term Memory (LSTM) were compared. For imputation of a 7-day missing section, the average accuracy of the BiRNN and statistical models was 70.8% and 61.2%, respectively, and the average error was 0.28 degrees and 0.44 degrees, respectively, so the BiRNN model outperformed the other models. With a temporal decay factor representing the missing pattern applied, the BiRNN technique is judged to impute better than existing methods as the missing section grows longer.
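A minimal sketch of a multivariate bidirectional RNN imputer in the spirit of this abstract, written in PyTorch. The layer sizes, the mask-as-extra-channel encoding, and the window lengths are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class BiRNNImputer(nn.Module):
    """Bidirectional LSTM mapping a multivariate window (SST, air temperature,
    wind, pressure, humidity) to an SST estimate at every time step; missing
    SST values are zeroed and flagged in an extra indicator channel."""
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_features + 1, hidden,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # forward + backward states -> SST

    def forward(self, x, missing_mask):
        # x: (batch, time, 5) with SST in channel 0
        # missing_mask: (batch, time, 1), 1 where SST is missing
        sst = x[..., :1] * (1 - missing_mask)   # zero out the missing SST values
        h, _ = self.rnn(torch.cat([sst, x[..., 1:], missing_mask], dim=-1))
        return self.head(h)                     # (batch, time, 1) SST estimates

# Hypothetical usage: a 7-day gap inside a 30-day hourly window.
model = BiRNNImputer()
x = torch.randn(8, 30 * 24, 5)
mask = torch.zeros(8, 30 * 24, 1)
mask[:, 10 * 24:17 * 24] = 1.0   # the missing section
sst_hat = model(x, mask)
```

Training would minimize the error between `sst_hat` and the true SST on the masked steps, so that the forward and backward passes both inform the reconstruction of the gap.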

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok; Kim, Sunwoong; Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on chart patterns rather than on complex analyses such as corporate intrinsic-value analysis or technical indicator analysis. However, pattern analysis is difficult to formalize and has been computerized far less than users need. In recent years, many studies have examined stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge numbers of charts to find patterns that may predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but this can be fragile in practice, because whether the patterns found are actually suitable for trading is a separate question. Those studies find a point that matches a meaningful pattern and then measure performance n days later, assuming a purchase at that point in time. Since this approach computes virtual revenues, it can diverge considerably from reality. Whereas existing research tries to discover patterns with price-predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Despite reports that some patterns have price predictability, no performance reports from actual market use existed. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. Our measurement is closer to a real situation because it assumes that both the buy and the sell were actually executed. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then calculates the vertices. In the second, the high-low line zig-zag method, a high that touches the n-day high line is taken as a peak, and a low that touches the n-day low line is taken as a valley. In the third, the swing wave method, a central high that is higher than the n highs on its left and right is taken as a peak, and a central low that is lower than the n lows on its left and right is taken as a valley (a sketch of this rule follows the abstract). The swing wave method was superior to the other methods in our tests; we interpret this to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of candidate combinations was far too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable optimization tool for finding patterns with high success rates. We also ran the simulation using Walk-forward Analysis (WFA), which separates the test section from the application section, allowing the system to respond appropriately to market changes. We optimized at the portfolio level because optimizing the variables for each individual stock risks over-optimization; we therefore selected 20 constituent stocks to gain the benefit of diversified investment while avoiding over-fitting. We tested the KOSPI market divided into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was second best. This suggests that patterns need some price volatility to take shape, but that the highest volatility is not necessarily best.
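Of the three turning-point rules above, the swing wave method performed best; here is a minimal sketch of that rule, assuming arrays of daily highs and lows (the default window size `n` and the use of strict inequalities are assumptions):

```python
import numpy as np

def swing_wave_vertices(high: np.ndarray, low: np.ndarray, n: int = 5):
    """Bar i is a peak if its high exceeds the n highs on each side,
    and a valley if its low is below the n lows on each side."""
    peaks, valleys = [], []
    for i in range(n, len(high) - n):
        if high[i] > high[i - n:i].max() and high[i] > high[i + 1:i + n + 1].max():
            peaks.append(i)
        if low[i] < low[i - n:i].min() and low[i] < low[i + 1:i + n + 1].min():
            valleys.append(i)
    return peaks, valleys
```

Because a vertex is only confirmed after n further bars have printed, a trade triggered by this rule necessarily acts on a completed pattern, which is consistent with the finding that trading after pattern completion is more effective.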

Development of Quantification Methods for the Myocardial Blood Flow Using Ensemble Independent Component Analysis for Dynamic $H_2^{15}O$ PET (동적 $H_2^{15}O$ PET에서 앙상블 독립성분분석법을 이용한 심근 혈류 정량화 방법 개발)

  • Lee, Byeong-Il; Lee, Jae-Sung; Lee, Dong-Soo; Kang, Won-Jun; Lee, Jong-Jin; Kim, Soo-Jin; Choi, Seung-Jin; Chung, June-Key; Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.38 no.6 / pp.486-491 / 2004
  • Purpose: Factor analysis and independent component analysis (ICA) have been used for handling dynamic image sequences. The theoretical advantages of a newly suggested ICA method, ensemble ICA, led us to apply this method to the analysis of dynamic myocardial $H_2^{15}O$ PET data. In this study, we quantified patients' blood flow using the ensemble ICA method. Materials and Methods: Twenty subjects underwent $H_2^{15}O$ PET scans using an ECAT EXACT 47 scanner and myocardial perfusion SPECT using a Vertex scanner. After transmission scanning, dynamic emission scans were initiated simultaneously with the injection of $555{\sim}740$ MBq $H_2^{15}O$. Hidden independent components can be extracted from the observed mixed data (PET images) by means of ICA algorithms. Ensemble learning is a variational Bayesian method that provides an analytical approximation to the parameter posterior using a tractable distribution. The variational approximation forms a lower bound on the ensemble likelihood, and maximizing this lower bound is achieved by minimizing the Kullback-Leibler divergence between the true posterior and the variational posterior (this identity is written out after the abstract). In this study, the posterior pdf was approximated by a rectified Gaussian distribution to incorporate a non-negativity constraint, which is suitable for dynamic images in nuclear medicine. Blood flow was measured in 9 regions: the apex, four areas in the mid wall, and four areas in the base wall. Myocardial perfusion SPECT scores and angiography results were compared with the regional blood flow. Results: Major cardiac components were separated successfully by the ensemble ICA method, and blood flow could be estimated in 15 of 20 patients. Mean myocardial blood flow was $1.2{\pm}0.40$ ml/min/g at rest and $1.85{\pm}1.12$ ml/min/g under stress. Blood flow values obtained by one operator on two different occasions were highly correlated (r=0.99). In the myocardium component image, the image contrast between the left ventricle and the myocardium was 1:2.7 on average. Perfusion reserve was significantly different between regions with and without stenosis detected by coronary angiography (P<0.01). In 66 segments with stenosis confirmed by angiography, the segments with reversible perfusion decrease on perfusion SPECT showed lower perfusion reserve values on $H_2^{15}O$ PET. Conclusions: Myocardial blood flow could be estimated using an ICA method with ensemble learning. We suggest that ensemble ICA incorporating a non-negativity constraint is a feasible method for handling dynamic image sequences obtained with nuclear medicine techniques.
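The variational decomposition the abstract relies on can be written compactly (a standard identity, with $\theta$ collecting the sources and mixing parameters; the notation is ours, not the paper's):

```latex
\log p(X) \;=\; \mathcal{L}(q) + \mathrm{KL}\bigl(q(\theta)\,\|\,p(\theta \mid X)\bigr),
\qquad
\mathcal{L}(q) \;=\; \int q(\theta)\,\log\frac{p(X,\theta)}{q(\theta)}\,d\theta
```

Since $\log p(X)$ does not depend on $q$, maximizing the lower bound $\mathcal{L}(q)$ is exactly minimizing the KL divergence to the true posterior; restricting $q$ to rectified Gaussians is what enforces the non-negativity the abstract mentions.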

A study on the shear bond strength between Co-Cr denture base and relining materials (금속의치상과 의치이장재료 간의 결합력에 관한 연구)

  • Lee, Na-Young; Kim, Doo-Yong; Lee, Young-Soo; Park, Won-Hee
    • The Journal of Korean Academy of Prosthodontics / v.49 no.1 / pp.8-15 / 2011
  • Purpose: This study evaluated the bond strength of a direct relining resin to a Co-Cr denture base material according to surface treatment and immersion time. Materials and methods: Hexagonal Co-Cr alloy specimens were used. Each specimen was cut to a flat surface and sandblasted with $110\;{\mu}m$ $Al_2O_3$ for 1 minute. 54 specimens were divided into 3 groups: group A (control), group B (treated with surface primer A), and group C (treated with surface primer B). A self-curing direct relining resin was used. Each group was subdivided into another 3 groups according to immersion time. After wet storage, the shear bond strength of the specimens was measured with a universal testing machine. The data were analyzed using two-way analysis of variance and the Tukey post hoc test (a sketch of this analysis follows the abstract). Results: In the sandblasting experiment, the surface roughness of the alloy was highest after 1 minute of sandblasting. In the shear bond strength test, bond strength decreased in the order of group B, group C, and group A, with significant differences among the 3 groups. By storage period, bond strength was highest in the 0-week group and weakest in the 2-week group, but there were no significant differences among the 3 periods. By group and period, the bond strength of every group decreased with immersion time; the decrease was not significant in groups B and C, but it was significant in group A. Conclusion: It is useful to sandblast and apply metal primers when relining Co-Cr metal-base dentures at chairside.
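A minimal sketch of the reported analysis (two-way ANOVA with a Tukey post hoc test), assuming a long-format table of 54 shear bond strengths; the column names and the synthetic values are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# 54 hypothetical specimens: 3 treatment groups x 3 immersion periods x 6 reps.
rng = np.random.default_rng(0)
group = np.repeat(["A", "B", "C"], 18)
weeks = np.tile(np.repeat([0, 1, 2], 6), 3)
base = np.select([group == "A", group == "B", group == "C"], [8.0, 14.0, 12.0])
df = pd.DataFrame({
    "strength": base - 0.8 * weeks + rng.normal(0, 1.0, 54),  # MPa, synthetic
    "group": group,
    "weeks": weeks,
})

# Two-way ANOVA: surface treatment, immersion time, and their interaction.
model = ols("strength ~ C(group) * C(weeks)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey post hoc comparison among the three surface-treatment groups.
print(pairwise_tukeyhsd(df["strength"], df["group"]))
```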

Characteristic Findings of Exercise ECG Test, Perfusion SPECT and Coronary Angiography in Patients with Exercise Induced Myocardial Stunning (게이트 심근관류 SPECT상 운동 유발성 기절심근을 보이는 환자의 운동부하 심전도, 관류 SPECT 및 심혈관 조영술 소견)

  • Ahn, Byeong-Cheol; Seo, Ji-Hyoung; Bae, Jin-Ho; Jeong, Shin-Young; Park, Hun-Sik; Lee, Jae-Tae; Chae, Shung-Chul; Lee, Kyu-Bo
    • The Korean Journal of Nuclear Medicine / v.38 no.3 / pp.225-232 / 2004
  • Purpose: Transient wall motion abnormality and contractile dysfunction of the left ventricle (LV) can be observed in patients with coronary artery disease due to post-stress myocardial stunning. To understand the clinical characteristics of stress-induced LV dysfunction, we compared the findings of exercise stress testing, myocardial perfusion SPECT, and coronary angiography between subjects with and without post-stress LV dysfunction. Materials and Methods: Among subjects who underwent an exercise stress test, myocardial perfusion SPECT, and coronary angiography within an interval of one month, we enrolled 36 patients whose post-stress LV ejection fraction (LVEF) was $\geq5%$ lower than at rest (stunning group) and 16 patients whose difference between post-stress and rest LVEF was less than 1% (non-stunning group). Treadmill exercise stress gated myocardial perfusion SPECT was performed with a dual-head SPECT camera using 740 MBq of Tc-99m MIBI, and coronary angiography was performed by the conventional Judkins method. Results: The stunning group had a significantly higher incidence of hypercholesterolemia than the non-stunning group (45.5 vs. 7.1%, p=0.01). The stunning group also had a higher incidence of diabetes mellitus and a lower incidence of hypertension, but these differences were not statistically significant. The stunning group had larger and more severe perfusion defects on stress myocardial perfusion SPECT than the non-stunning group (extent 18.2 vs. 9.2%, p=0.029; severity 13.5 vs. 6.9, p=0.040). The stunning group also had a higher degree of reversibility of the perfusion defects, a higher incidence of positive exercise stress tests, and a higher incidence of severe stenosis ($80{\sim}99%$) on coronary angiography, but these differences were not statistically significant. In the stunning group, all 4 patients without a perfusion defect had significant coronary artery stenosis and received revascularization treatment. Conclusion: Patients with post-stress LV dysfunction had larger and more severe perfusion defects and more severe coronary artery stenosis than patients without post-stress LV dysfunction. All patients without a perfusion defect in the stunning group had significant coronary artery stenosis and needed revascularization. We therefore suggest that invasive diagnostic procedures and therapeutic interventions may be needed in patients with post-stress LV dysfunction.

Evaluation of the Neural Fiber Tractography Associated with Aging in the Normal Corpus Callosum Using the Diffusion Tensor Imaging (DTI) (확산텐서영상(Diffusion Tensor Imaging)을 이용한 정상 뇌량에서의 연령대별 신경섬유로의 변화)

  • Im, In-Chul; Goo, Eun-Hoe; Lee, Jae-Seung
    • Journal of the Korean Society of Radiology / v.5 no.4 / pp.189-194 / 2011
  • This study used magnetic resonance diffusion tensor imaging (DTI) to quantitatively analyze neural fiber tractography of the normal corpus callosum by age group and to evaluate its usefulness. Sixty healthy volunteers with no brain or other disease participated. The imaging parameters were TR: 6650 ms, TE: 66 ms, FA: $90^{\circ}$, NEX: 2, thickness: 2 mm, no gap, FOV: 220 mm, b-value: $800s/mm^2$, SENSE factor: 2, and acquisition voxel size: $2{\times}2{\times}2mm^3$; the scan time was 3 minutes 46 seconds. For evaluation, a color-coded FA map was constructed over a scan range from the skull base to the skull vertex (the FA definition underlying the map is given after the abstract). We set five ROIs on the corpus callosum (genu, anterior mid-body, posterior mid-body, isthmus, and splenium), tracked fibers from each, and quantitatively measured the neural fiber lengths. The measured fiber lengths were: genu, 20s: $61.8{\pm}6.8$, 30s: $63.9{\pm}3.8$, 40s: $65.5{\pm}6.4$, 50s: $57.8{\pm}6.0$, 60s: $58.9{\pm}4.5$, 70s and over: $54.1{\pm}8.1mm$; anterior mid-body, 20s: $54.8{\pm}8.8$, 30s: $58.5{\pm}7.9$, 40s: $54.8{\pm}7.8$, 50s: $56.1{\pm}10.2$, 60s: $48.5{\pm}6.2$, 70s and over: $48.6{\pm}8.3mm$; posterior mid-body, 20s: $72.7{\pm}9.1$, 30s: $61.6{\pm}9.1$, 40s: $60.9{\pm}10.5$, 50s: $61.4{\pm}11.7$, 60s: $54.9{\pm}10.0$, 70s and over: $53.1{\pm}10.5mm$; isthmus, 20s: $71.5{\pm}17.4$, 30s: $74.1{\pm}14.9$, 40s: $73.6{\pm}14.2$, 50s: $66.3{\pm}12.9$, 60s: $56.5{\pm}11.2$, 70s and over: $56.8{\pm}11.3mm$; and splenium, 20s: $82.6{\pm}6.8$, 30s: $86.9{\pm}6.4$, 40s: $83.1{\pm}7.1$, 50s: $81.5{\pm}7.4$, 60s: $78.6{\pm}6.0$, 70s and over: $80.55{\pm}8.6mm$. The differences in neural fiber length across age groups were statistically significant in the genu (P=0.001), posterior mid-body (P=0.009), and isthmus (P=0.012) of the corpus callosum. By age, fiber length increased into the 30s and 40s and then tended to decrease with advancing age. These results indicate that neural fiber tractography can demonstrate such age-related changes, with fiber development remaining most active into middle age.
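For reference, the fractional anisotropy (FA) behind the color-coded map is the standard function of the diffusion tensor eigenvalues $\lambda_1,\lambda_2,\lambda_3$ (a textbook definition, not notation taken from this paper):

```latex
\mathrm{FA} \;=\; \sqrt{\tfrac{3}{2}}\,
\sqrt{\frac{(\lambda_1-\bar{\lambda})^2+(\lambda_2-\bar{\lambda})^2+(\lambda_3-\bar{\lambda})^2}
{\lambda_1^2+\lambda_2^2+\lambda_3^2}},
\qquad
\bar{\lambda} \;=\; \frac{\lambda_1+\lambda_2+\lambda_3}{3}
```

FA ranges from 0 (isotropic diffusion) to 1 (diffusion along a single axis), which is why coherent fiber bundles such as the corpus callosum appear bright on an FA map.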

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul; Kim, Hea-Jin
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.183-203 / 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has produced many kinds of online news media, news about events occurring in society has increased greatly, so automatically summarizing key events from massive amounts of news data helps users survey many events at a glance. Furthermore, an event network built on the relevance between events can greatly help readers understand current events. In this study, we propose a method for extracting event networks from large news text corpora. We first collected Korean political and social articles from March 2016 to March 2017 and preprocessed them using NPMI and Word2Vec, keeping only meaningful words and merging synonyms. Latent Dirichlet allocation (LDA) topic modeling was used to compute the topic distribution by date, and events were detected at the peaks of each topic's distribution. A total of 32 topics were extracted, and the occurrence time of each event was deduced from the point at which its topic distribution surged. As a result, 85 events were detected in total, of which a final 16 events were retained after filtering with Gaussian smoothing. We then calculated a relevance score between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we computed inter-event relevance and connected related events. Finally, the event network was formed by making each event a vertex and using the relevance scores as the weights of the edges connecting the vertices. The event network constructed with our method let us sort the major political and social events in Korea over the past year in chronological order and, at the same time, identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it easy to analyze large amounts of data and to identify relations between events that were difficult to detect previously. During preprocessing we applied various text mining techniques, including Word2Vec, to improve the extraction accuracy of proper nouns and compound nouns, which have been difficult to handle in Korean text analysis. The event detection and network construction techniques in this study offer the following practical advantages. First, LDA topic modeling, which is unsupervised, can easily extract topics, topic words, and their distributions from huge amounts of data, and by using the date information of the collected articles, each topic's distribution can be expressed as a time series. Second, by exploiting the co-occurrence of topics, which is hard to capture with existing event detection, the relevance scores and the resulting event network present the connections between events in a summarized form; indeed, the inter-event relevance-based network proposed here was laid out in order of occurrence time, and the network also makes it possible to identify which event served as the starting point of a chain of events (a sketch of this pipeline follows the abstract).
A limitation of this study is that LDA topic modeling produces different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the researcher's subjective judgment. Also, since each topic is assumed to be exclusive and independent, relevance between topics is not taken into account. Subsequent studies should calculate the relevance between events not covered in this study, including events that belong to the same topic.
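A minimal sketch of the pipeline described above, assuming articles are already tokenized and dated. The model sizes, the smoothing width, the similarity threshold, and the use of smoothed daily topic profiles for the cosine computation are assumptions; the paper computes cosine coefficients between co-occurring events, which this profile-based similarity only approximates.

```python
import numpy as np
import networkx as nx
from gensim import corpora
from gensim.models import LdaModel
from scipy.ndimage import gaussian_filter1d

def build_event_network(dated_docs, num_topics=32, sigma=2.0, min_sim=0.3):
    """dated_docs: list of (date, token_list) pairs. Returns a graph whose
    vertices are (topic, peak_date) events and whose edge weights are cosine
    similarities between the events' smoothed daily topic profiles."""
    texts = [tokens for _, tokens in dated_docs]
    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]
    lda = LdaModel(corpus, num_topics=num_topics, id2word=dictionary, passes=5)

    # Average topic weight per date over that day's articles.
    dates = sorted({d for d, _ in dated_docs})
    idx = {d: i for i, d in enumerate(dates)}
    daily = np.zeros((num_topics, len(dates)))
    counts = np.zeros(len(dates))
    for (d, _), bow in zip(dated_docs, corpus):
        for topic, w in lda.get_document_topics(bow, minimum_probability=0.0):
            daily[topic, idx[d]] += w
        counts[idx[d]] += 1
    daily /= np.maximum(counts, 1)

    # Smooth each topic's time series; its maximum marks the event date.
    smooth = gaussian_filter1d(daily, sigma=sigma, axis=1)
    events = [(t, dates[int(smooth[t].argmax())]) for t in range(num_topics)]

    # Vertices are events; edges carry the cosine similarity between profiles.
    g = nx.Graph()
    g.add_nodes_from(events)
    for i, (ti, di) in enumerate(events):
        for tj, dj in events[i + 1:]:
            sim = float(smooth[ti] @ smooth[tj] /
                        (np.linalg.norm(smooth[ti]) * np.linalg.norm(smooth[tj]) + 1e-12))
            if sim >= min_sim:
                g.add_edge((ti, di), (tj, dj), weight=sim)
    return g
```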