• Title/Summary/Keyword: Kernel function


Development of Drought Index based on Streamflow for Monitoring Hydrological Drought (수문학적 가뭄감시를 위한 하천유량 기반 가뭄지수 개발)

  • Yoo, Jiyoung;Kim, Tae-Woong;Kim, Jeong-Yup;Moon, Jang-Won
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.37 no.4
    • /
    • pp.669-680
    • /
    • 2017
  • This study evaluated the consistency of various drought indices with the standard flow used for low-flow forecasting. The data were streamflow records at the Gurye2 station on the Seomjin River and the Angang station on the Hyeongsan River, together with rainfall data from nearby weather stations (Namwon and Pohang). Using the streamflow data, a streamflow accumulation drought index (SADI) was developed to represent hydrological drought conditions. For the SADI calculation, the drought threshold was determined by a change-point analysis of the flow pattern, and a reduction factor was estimated from a kernel density function. The standardized runoff index (SRI) and standardized precipitation index (SPI) were also calculated for comparison with the SADI. SRI and SPI were computed over 30-, 90-, 180-, and 270-day periods, and an ROC curve analysis was performed to determine the time period most consistent with the standard flow. The ROC analysis showed that, under the attention stage, the indices most consistent with the standard flow were, in order, SADI_C3, SRI30, SADI_C1, SADI_C2, and SPI90 for the Seomjin River Gurye2 station, and SADI_C3, SADI_C1, SPI270, SRI30, and SADI_C2 for the Hyeongsan River Angang station.
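The two kernel-related steps named above, a standardized index over an accumulation window (the SPI/SRI family) and a kernel density estimate of the flow distribution from which a reduction factor could be derived, can be illustrated with a short sketch. This is a minimal reading of the abstract, not the authors' SADI code; the gamma fit, the window length, and the threshold choice are illustrative assumptions.

```python
# Sketch: SPI/SRI-style standardization plus a KDE-based reduction factor.
# Synthetic data; parameters are illustrative assumptions.
import numpy as np
from scipy import stats

def standardized_index(series, window):
    """Accumulate over `window` days, fit a gamma distribution, and map
    the non-exceedance probability onto a standard normal (SPI/SRI style)."""
    acc = np.convolve(series, np.ones(window), mode="valid")  # n-day sums
    shape, loc, scale = stats.gamma.fit(acc, floc=0)
    prob = stats.gamma.cdf(acc, shape, loc=loc, scale=scale)
    return stats.norm.ppf(prob)                               # index values

def kde_reduction_factor(flows, threshold):
    """Toy reduction factor: probability mass below a drought threshold,
    estimated with a Gaussian kernel density (one reading of the abstract)."""
    kde = stats.gaussian_kde(flows)
    return kde.integrate_box_1d(-np.inf, threshold)

rng = np.random.default_rng(0)
flow = rng.gamma(2.0, 50.0, size=3650)        # synthetic daily streamflow
sri90 = standardized_index(flow, 90)          # 90-day standardized index
rf = kde_reduction_factor(flow, np.percentile(flow, 20))
print(sri90[:3], round(rf, 3))
```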

Weighted Census Transform and Guide Filtering based Depth Map Generation Method (가중치를 이용한 센서스 변환과 가이드 필터링 기반깊이지도 생성 방법)

  • Mun, Ji-Hun;Ho, Yo-Sung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.2
    • /
    • pp.92-98
    • /
    • 2017
  • In general, images contain geometric and radiometric errors. The census transform can resolve stereo mismatching caused by radiometric distortion. Since the basic census transform compares the center pixel of a window with its neighbor pixels, it is hard to obtain an accurate matching result when the pixel value differences are small. To solve this problem, we propose a census transform that applies a different four-step weight to each pixel value difference, using an assistance window inside the window kernel. If the current pixel value is larger than the average of the assistance window, a high weight is given; otherwise, a low weight is assigned, yielding a differential census transform. After generating an initial disparity map from the weighted census transform and the input images, gradient information is additionally used to model a cost function for generating the final disparity map. To find the optimal cost value, we use guided filtering; since the filtering is performed using both the input image and the disparity image, object boundary regions are preserved. Experimental results confirm that the proposed stereo matching method performs better than the conventional method.
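A rough sketch of the core idea, a census transform whose comparison codes are graded by difference magnitude, follows. The window size, the four weight levels, and the 16-grey-level step are assumptions for illustration, not the authors' parameters; a real pipeline would aggregate the resulting matching costs with guided filtering.

```python
# Sketch: census transform with a crude 4-level magnitude weighting.
import numpy as np

def weighted_census(img, win=5):
    """Per pixel, return a vector of signed, weighted comparison codes
    against the window center. Border pixels are skipped for brevity."""
    h, w = img.shape
    r = win // 2
    codes = np.zeros((h, w, win * win - 1), dtype=np.int8)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            center = patch[r, r]
            diff = np.delete(patch.ravel(), (win * win) // 2) - center
            mag = np.minimum(np.abs(diff) // 16, 3) + 1   # 4-step weights 1..4
            codes[y, x] = np.sign(diff) * mag
    return codes

img = np.random.default_rng(1).integers(0, 256, (32, 32), dtype=np.uint8)
codes = weighted_census(img)
# Matching cost between two views would be a weighted Hamming-style distance
# between code vectors, aggregated by guided filtering on the input image.
print(codes.shape)
```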

Implementation of GLCM/GLDV-based Texture Algorithm and Its Application to High Resolution Imagery Analysis (GLCM/GLDV 기반 Texture 알고리즘 구현과 고 해상도 영상분석 적용)

  • Lee Kiwon;Jeon So-Hee;Kwon Byung-Doo
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.2
    • /
    • pp.121-133
    • /
    • 2005
  • Texture imaging, that is, creating texture images from co-occurrence relations, is known as a useful image analysis methodology, and most commercial remote sensing software provides a texture analysis function based on the GLCM (Grey Level Co-occurrence Matrix). In this study, a texture-imaging program based on the GLCM algorithm was newly implemented, and modules for the GLDV (Grey Level Difference Vector) are also included. The program computes six second-order texture functions: Homogeneity, Dissimilarity, Energy, Entropy, Angular Second Moment, and Contrast. For co-occurrence directionality, two newly implemented direction modes, Omni mode and Circular mode, are provided in addition to the basic eight-direction mode. Omni mode computes over all directions to avoid directionality complexity at the practical level, and Circular mode computes texture parameters along a circular path surrounding the target pixel in a kernel. In the second phase of this study, case studies with an artificial image and actual satellite imagery were carried out to analyze texture images under different parameters and modes by correlation matrix analysis. We conclude that the selection of texture parameters and modes is the critical issue in applications based on texture image fusion.
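For readers unfamiliar with these measures, here is a compact sketch that builds a co-occurrence matrix for one displacement and computes several of the second-order statistics named above. The quantization level and displacement are illustrative choices; an Omni mode in the paper's sense could average such matrices over all directions, and the GLDV would tabulate grey-level differences instead of pairs.

```python
# Sketch: GLCM for one displacement plus a few second-order statistics.
import numpy as np

def glcm(img, dx=1, dy=0, levels=16):
    q = img.astype(np.int32) * levels // 256      # quantize grey levels
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1    # co-occurrence count
    m += m.T                                      # make symmetric
    return m / m.sum()                            # normalize to probabilities

def texture_features(p):
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "contrast":      np.sum(p * (i - j) ** 2),
        "dissimilarity": np.sum(p * np.abs(i - j)),
        "homogeneity":   np.sum(p / (1.0 + (i - j) ** 2)),
        "ASM":           np.sum(p ** 2),          # angular second moment
        "energy":        np.sqrt(np.sum(p ** 2)),
        "entropy":       -np.sum(nz * np.log(nz)),
    }

img = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)
print(texture_features(glcm(img)))
```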

Development of Software-Defined Perimeter-based Access Control System for Security of Cloud and IoT System (Cloud 및 IoT 시스템의 보안을 위한 소프트웨어 정의 경계기반의 접근제어시스템 개발)

  • Park, Seung-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.2
    • /
    • pp.15-26
    • /
    • 2021
  • Recently, with the active adoption of cloud, mobile, and IoT, there is a growing need for technology that can supplement the limitations of traditional security solutions based on fixed perimeters, such as firewalls and Network Access Control (NAC). In response, the Software Defined Perimeter (SDP) has recently emerged as a new base technology. Unlike existing security technologies, SDP can set security boundaries (by installing gateway software) regardless of the location of the protected resources (servers, IoT gateways, etc.) and can neutralize most network-based hacking attacks, which are becoming increasingly sophisticated. In particular, SDP is regarded as a security technology well suited to the cloud and IoT fields. In this study, a new access control system was proposed by combining SDP with hash tree-based large-scale data high-speed signature technology. A process authentication function using this signature technology prevents unknown malware from intruding into the endpoint in advance, and a kernel-level security mechanism blocks user-level attacks during the backup and recovery of critical data. As a result, endpoint security, a weak point of SDP, is strengthened. The proposed system was developed as a prototype, and its performance was verified through testing by an authorized testing agency (TTA V&V test). The SDP-based access control solution is a high-potential technology that can also be applied to smart car security.
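The hash tree-based signature component is not detailed in the abstract, but the underlying idea is that of a Merkle tree: many data blocks are authenticated by a single root value, and any block can be verified with a logarithmic-size path of sibling hashes. The following is a generic sketch of that idea, not the product's actual scheme.

```python
# Sketch: a minimal Merkle (hash) tree over data blocks.
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(blocks):
    level = [sha256(b) for b in blocks]           # leaf hashes
    while len(level) > 1:
        if len(level) % 2:                        # odd count: repeat last node
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])  # hash each adjacent pair
                 for i in range(0, len(level), 2)]
    return level[0]

data = [f"block-{i}".encode() for i in range(8)]
print(merkle_root(data).hex())
# Verifying one block needs only its sibling hashes up to the root, so
# integrity checks stay cheap even for large backup sets.
```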

Drought Frequency Analysis Using Cluster Analysis and Bivariate Probability Distribution (군집분석과 이변량 확률분포를 이용한 가뭄빈도해석)

  • Yoo, Ji Young;Kim, Tae-Woong;Kim, Sangdan
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.6B
    • /
    • pp.599-606
    • /
    • 2010
  • Because of the short precipitation records in Korea, uncertainty is inevitable when drought analysis relies on point frequency analysis, so a regional drought frequency analysis is desirable. This study first extracted drought characteristics from 3-month and 12-month moving average rainfalls, which represent short-term and long-term droughts, respectively. Homogeneous regions were then identified by principal component analysis and cluster analysis, classifying the Korean peninsula into five regions based on drought characteristics. Finally, a bivariate frequency analysis using a kernel density function was applied to quantify the regionalized drought characteristics. Based on the bivariate drought frequency curves, the drought severities of the five regions were evaluated for durations of 2, 5, 10, and 20 months and return periods of 5, 10, 20, 50, and 100 years. The most severe droughts occurred in the lower Geum River basin, the Youngsan River basin, and along the southern coast of Korea.
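The bivariate kernel density step can be sketched as follows on synthetic duration-severity pairs: a joint exceedance probability is read off the fitted density and converted to a return period. The data, the thresholds, and the one-event-per-year conversion are purely illustrative, not the paper's values.

```python
# Sketch: bivariate KDE over drought duration and severity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
duration = rng.gamma(2.0, 3.0, 300)                 # durations (months)
severity = 0.8 * duration + rng.normal(0, 1, 300)   # correlated severity

kde = stats.gaussian_kde(np.vstack([duration, severity]))

# Joint exceedance P(D > d0, S > s0); with one drought event per year on
# average, the return period is roughly T = 1 / P.
d0, s0 = 10.0, 8.0
hi = [duration.max() * 10, severity.max() * 10]     # stands in for +infinity
p_exceed = kde.integrate_box([d0, s0], hi)
print(f"P = {p_exceed:.4f}, return period ~ {1 / p_exceed:.1f} years")
```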

Performance Evaluation and Analysis on Single and Multi-Network Virtualization Systems with Virtio and SR-IOV (가상화 시스템에서 Virtio와 SR-IOV 적용에 대한 단일 및 다중 네트워크 성능 평가 및 분석)

  • Jaehak Lee;Jongbeom Lim;Heonchang Yu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.2
    • /
    • pp.48-59
    • /
    • 2024
  • As hardware functions that natively support virtualization have been developed, user applications with various workloads can run efficiently in virtualization systems. SR-IOV is a virtualization support function that provides direct access to PCI devices, yielding high I/O performance by minimizing hypervisor and operating system intervention. With SR-IOV, network I/O acceleration can be realized in virtualization systems, which otherwise suffer relatively long I/O paths compared to bare-metal systems and frequent context switches between user space and kernel space. To exploit the performance advantages of SR-IOV, network resource management policies that derive optimal network performance when SR-IOV is applied to an instance such as a virtual machine (VM) or container are being actively studied. This paper evaluates and analyzes the network performance of SR-IOV, which implements I/O acceleration, against Virtio in terms of 1) network delay, 2) network throughput, 3) network fairness, 4) performance interference, and 5) multi-network behavior. The contributions of this paper are as follows. First, the network I/O processes of Virtio and SR-IOV in a virtualization system are clearly explained. Second, the network performance of Virtio and SR-IOV is evaluated and analyzed with respect to various performance metrics. Third, the system overhead of, and the optimization opportunities for, SR-IOV networking in a virtualization system with high VM density are experimentally confirmed. The experimental results and analysis are expected to inform network resource management policies for virtualization systems that operate network-intensive services such as smart factories, connected cars, deep learning inference models, and crowdsourcing.
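Throughput comparisons of this kind typically rest on a benchmark such as iperf3. Below is a minimal harness of that flavor, assuming iperf3 is installed, an `iperf3 -s` server is running in each guest, and the guest addresses (placeholders here) are reachable over a virtio NIC and an SR-IOV virtual function respectively.

```python
# Sketch: compare TCP throughput to two guests via iperf3's JSON output.
import json
import subprocess

def throughput_gbps(host: str, seconds: int = 10) -> float:
    out = subprocess.run(
        ["iperf3", "-c", host, "-t", str(seconds), "-J"],  # JSON output
        capture_output=True, text=True, check=True,
    ).stdout
    bps = json.loads(out)["end"]["sum_received"]["bits_per_second"]
    return bps / 1e9

# Placeholder guest addresses: one NIC backed by virtio, one by an SR-IOV VF.
for label, host in [("virtio", "192.0.2.10"), ("sr-iov", "192.0.2.11")]:
    print(label, f"{throughput_gbps(host):.2f} Gbit/s")
```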

Development of Sentiment Analysis Model for the hot topic detection of online stock forums (온라인 주식 포럼의 핫토픽 탐지를 위한 감성분석 모형의 개발)

  • Hong, Taeho;Lee, Taewon;Li, Jingjing
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.187-204
    • /
    • 2016
  • Document classification based on emotional polarity has become a welcome emerging task owing to the great explosion of data on the Web. In the big data age, there are too many information sources to refer to when making decisions. For example, when considering travel to a city, a person may search reviews from a search engine such as Google or from social networking services (SNSs) such as blogs, Twitter, and Facebook; the emotional polarity of positive and negative reviews helps the user decide whether or not to make the trip. Sentiment analysis of customer reviews has become an important research topic as data mining technology is widely applied to text mining on the Web. Sentiment analysis classifies documents through machine learning techniques, such as decision trees, neural networks, and support vector machines (SVMs), and is used to determine the attitude, position, and sensibility of the people who write articles about various topics published on the Web. Regardless of their polarity, emotional reviews are very helpful material for analyzing customers' opinions: sentiment analysis extracts subjective information from text on the Web through automated text mining techniques and so helps with instantly understanding what customers really want. In this study, we developed a model that selects hot topics from user posts on China's online stock forums by using the k-means algorithm and the self-organizing map (SOM). In addition, we developed a detection model to predict hot topics by using machine learning techniques such as the logit model, the decision tree, and the SVM. We employed sentiment analysis in developing our model for the selection and detection of hot topics; it calculates a sentiment value for a document based on contrast and classification according to a polarity sentiment dictionary (positive or negative). The online stock forum is an attractive site because of its information about stock investment: users post numerous texts about stock movement, analyzing the market according to government policy announcements, market reports, reports from economic research institutes, and even rumors. We divided the forum's topics into 21 categories for sentiment analysis, and 144 topics were selected among the 21 categories. The posts were crawled to build a positive and negative text database; after preprocessing the text from March 2013 to February 2015, we ultimately obtained 21,141 posts on 88 topics. An interest index was defined to select the hot topics, and the k-means algorithm and SOM produced equivalent results on these data. We developed decision tree models to detect hot topics with three algorithms, CHAID, CART, and C4.5; the results of CHAID were subpar compared with the others. We also employed an SVM to detect the hot topics from negative data; the SVM models were trained with the radial basis function (RBF) kernel via a grid search.
The detection of hot topics through sentiment analysis provides investors with the latest trends and hot topics in the stock forum, so that they no longer need to search the vast amount of information on the Web. Our proposed model is also helpful for rapidly determining customers' signals or attitudes towards government policy and firms' products and services.
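The final modeling step, an RBF-kernel SVM tuned by grid search, can be sketched with scikit-learn as below. The toy corpus, labels, and parameter grid are stand-ins for the crawled forum data, which are not reproduced here.

```python
# Sketch: TF-IDF features + RBF-kernel SVM tuned by grid search.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

posts = ["market will rally on policy news", "heavy losses expected today",
         "research report upbeat on earnings", "rumor of regulatory crackdown"] * 10
hot = [1, 0, 1, 0] * 10          # 1 = hot topic, 0 = not (toy labels)

pipe = Pipeline([("tfidf", TfidfVectorizer()),
                 ("svm", SVC(kernel="rbf"))])
grid = GridSearchCV(pipe,
                    {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.1, 1]},
                    cv=5)
grid.fit(posts, hot)
print(grid.best_params_, grid.best_score_)
```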

Performance Improvement on Short Volatility Strategy with Asymmetric Spillover Effect and SVM (비대칭적 전이효과와 SVM을 이용한 변동성 매도전략의 수익성 개선)

  • Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.119-133
    • /
    • 2020
  • Fama asserted that in an efficient market, we cannot make a trading rule that consistently outperforms the average stock market return. This study suggests a machine learning algorithm to improve the trading performance of an intraday short volatility strategy that exploits the asymmetric volatility spillover effect, and analyzes the resulting improvement. Generally, stock market volatility has a negative relation with stock market returns, and Korean stock market volatility is influenced by US stock market volatility. This spillover effect is asymmetric: upward and downward moves in US volatility influence the next day's volatility of the Korean stock market differently. We collected the S&P 500 index, VIX, KOSPI 200 index, and V-KOSPI 200 from 2008 to 2018. We found the negative relation between the S&P 500 and the VIX, and between the KOSPI 200 and the V-KOSPI 200, and documented a strong volatility spillover from the VIX to the V-KOSPI 200. Interestingly, the spillover was asymmetric: whereas a VIX rise is fully reflected in the opening volatility of the V-KOSPI 200, a VIX fall is only partially reflected at the open, and its influence lasts until the Korean market close. If the stock market were efficient, such an asymmetric spillover should not exist; it is a counterexample to the efficient market hypothesis. To exploit this anomalous spillover pattern, we analyzed an intraday short volatility strategy (SVS) that sells the Korean volatility market short in the morning after US stock market volatility closes down and takes no position after the VIX closes up. It produced a profit every year between 2008 and 2018, with 68% of trades profitable. The strategy's average annual return of 129% exceeds the benchmark's 33%, and its maximum drawdown (MDD) of -41% is smaller than the benchmark's -101%. The Sharpe ratio of the SVS strategy, 0.32, is much greater than the benchmark's 0.08; since the Sharpe ratio is return divided by risk, a higher value means better performance when comparing strategies with different risk-return structures. Real-world trading incurs costs, including brokerage and slippage; when these are considered, the gap between average annual returns of 76% and -10% becomes clear. To improve the performance of the suggested volatility trading strategy, we used the well-known SVM algorithm. The input variables are the VIX close-to-close return on day t-1, the VIX open-to-close return on day t-1, and the V-KOSPI 200 (VK) open return on day t; the output is the up/down classification of the VK open-to-close return on day t. The training period is 2008 to 2014 and the testing period is 2015 to 2018, and the kernel functions tested are linear, radial basis, and polynomial. We propose a modified short volatility (m-SVS) strategy that sells the VK in the morning when the SVM output is Down and takes no position when the output is Up. The trading performance improved remarkably: the 5-year testing period results of the m-SVS strategy showed very high profit and low risk relative to the benchmark SVS strategy.
The annual return of the m-SVS strategy is 123%, higher than that of the SVS strategy, and the risk measure, MDD, also improved significantly, from -41% to -29%.
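A minimal sketch of this classification setup follows: three lagged volatility returns as inputs, the next open-to-close direction as output, and the three kernels compared on a chronological train/test split. All data below is synthetic; the actual VIX and V-KOSPI 200 series are not reproduced here.

```python
# Sketch: kernel comparison for an up/down SVM classifier on synthetic data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 1000
X = rng.normal(0, 1, (n, 3))     # VIX c2c(t-1), VIX o2c(t-1), VK open(t)
y = (X @ np.array([0.6, 0.3, -0.4]) + rng.normal(0, 1, n) > 0).astype(int)

split = int(n * 0.6)             # chronological split, as in the paper
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

for kernel in ("linear", "rbf", "poly"):
    model = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    model.fit(X_tr, y_tr)
    print(kernel, f"test accuracy = {model.score(X_te, y_te):.3f}")
# The m-SVS rule would then sell volatility only on predicted "Down" days.
```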

A Study on Nursing Service of Chronic Diseases by the First Step and Third Step Medical Treatment (1차 및 3차 진료기관 이용 만성질환자의 간호서비스에 관한 연구)

  • Cho Chong Sook
    • Journal of Korean Public Health Nursing
    • /
    • v.10 no.2
    • /
    • pp.103-118
    • /
    • 1996
  • Interest in community health services delivered through visiting nursing care is growing. Korea's health care environment has changed greatly over this period: the juvenile population has decreased, the population structure is aging as in advanced nations, and the dominant disease burden has shifted from infectious to chronic diseases, while rapid urbanization has concentrated about 70% of the population in urban districts. Now that nursing services are commonly delivered to the home, managing adult chronic disease has become a core task for health and medical institutions. This study compared nursing services for chronic disease patients between first-step (primary) and third-step (tertiary) medical treatment institutions, which differ in community context, delivery manpower, and the relationship between providers and consumers. Based on the nursing registers of an H college hospital and the public health register of a district health center in Seoul, 54 patients from each setting, matched by chronic disease, age, and sex, were selected, and a content analysis was performed on the records of the 108 people in total. The results were as follows: 1. In general background factors (housing, kind of medical facility, and number of patients in the family), the first-step institutions served more patients, and their patients' economic environment appeared to be worse. 2. The frequency of chronic diseases differed by institution: in the first-step institutions, diabetes and high blood pressure patients were in relatively good condition, but many cerebrovascular accident (CVA) cases were bedridden. Except for CVA, family size, mobility, and health were comparatively better; however, although potential resources increase with the number and structure of family members, the share available to the whole family decreases, so the capacity of actual resources is considered low. 3. The average amount of nursing service differed between the two groups (33.72 for the first-step versus 45.70 for the third-step institutions), but the difference was limited to issuing medicine; the predominant mode of care was indirect nursing care, and the supporting areas were volunteer services and administrative support. 4. For both examination and medical treatment, the average amount of nursing care by disease increased in the order of diabetes, high blood pressure, and CVA; the same held for indirect nursing care and for the third-step institutions. Other service areas showed the same order, while no difference appeared in bottleneck-affairs nursing care. 5. Regarding demand for visiting nursing care, in the first-step institutions the average amount of nursing care was greater for single-person demand than for demand involving two or more people, though the difference was not statistically significant; in the third-step institutions the figure was 49.81% for single-person demand, and the average nursing service was higher (12.83) for demand involving two or more people. 6. The correlation between visiting frequency and the average amount of nursing care was large: 0.96 for the first-step institutions but only 0.49 for the third-step institutions. The average amount of nursing care is therefore related to the number of nurse visits, and this relation is clearly stronger in the first-step than in the third-step institutions.


A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to emerge in daily life. Artificial intelligence is applied to consumer electronics and communications products such as AI refrigerators and speakers. In the financial sector, Goldman Sachs improved its stock trading process using Kensho's artificial intelligence technology: two stock traders could handle the work of 600, and analytical work that had taken 15 people 4 weeks could be processed in 5 minutes. In particular, big data analysis through machine learning is actively applied throughout the financial industry, and stock market analysis and investment modeling through machine learning are actively studied. The linearity limitations of traditional financial time series studies are overcome by machine learning approaches such as artificial intelligence prediction models. Quantitative studies based on past stock market-related numerical data widely use artificial intelligence to forecast future movements of stock prices or indices, and various other studies predict the future direction of the market or of company stock prices by learning from large amounts of text data such as news and comments related to the stock market. Investing in commodity assets, one class of alternative assets, is usually used to enhance the stability and safety of a traditional stock and bond portfolio, yet there is relatively little research on investment models for commodity assets compared with mainstream assets like equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial area. In this study we built an investment model using the Support Vector Machine (SVM). Some research on commodity assets focuses on price prediction for a specific commodity, but it is hard to find research on investment models that treat commodities as an asset allocation problem using machine learning. We propose a method of forecasting four major commodity indices, a portfolio of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors, energy, agriculture, and metals, that are actively traded on the CME market and have sufficient liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We constructed an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. As model inputs we used 19 macroeconomic indicators, including stock market indices, export and import trade data, labor market data, and composite leading indicators, because commodity assets are closely related to macroeconomic activity: 14 US indicators, two Chinese indicators, and two Korean indicators. The data period is from January 1990 to May 2017; the first 195 monthly observations were used as training data and the remaining 125 as test data.
In this study, we verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the commodity indices. The prediction accuracy of the model for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metals sectors, and the individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should remain similar despite variations in the data period, so we also used the odd-numbered years as training data and the even-numbered years as test data and confirmed that the results are similar. In conclusion, when allocating commodity assets within a traditional portfolio of stocks, bonds, and cash, more effective investment performance can be obtained not by investing in commodity indices but by investing in commodity futures, and especially via the rebalanced commodity futures portfolio designed by the SVM model.
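A hedged sketch of the rebalancing logic described above: an SVM trained on macro indicators predicts whether the futures portfolio rises next month, and the portfolio is held only in predicted up months. The indicators and returns below are random stand-ins; only the 195/125 train-test split mirrors the paper.

```python
# Sketch: SVM signal for monthly commodity portfolio rebalancing.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
months, n_ind = 320, 19
X = rng.normal(0, 1, (months, n_ind))                # macro indicators
ret = 0.02 * X[:, 0] + rng.normal(0, 0.03, months)   # next-month returns
y = (ret > 0).astype(int)                            # up/down labels

train = 195                                          # as in the paper's split
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X[:train], y[:train])

signal = model.predict(X[train:])                    # 1 = hold, 0 = cash
strat = np.where(signal == 1, ret[train:], 0.0)
print(f"accuracy = {model.score(X[train:], y[train:]):.2f}, "
      f"cumulative return = {strat.sum():.2%}")
```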