

History of the Photon Beam Dose Calculation Algorithm in Radiation Treatment Planning System

  • Kim, Dong Wook;Park, Kwangwoo;Kim, Hojin;Kim, Jinsung
    • Progress in Medical Physics / v.31 no.3 / pp.54-62 / 2020
  • Dose calculation algorithms play an important role in radiation therapy and are the basis for optimizing treatment plans, a key feature in the development of complex treatment technologies such as intensity-modulated radiation therapy. We reviewed the past and current status of the dose calculation algorithms used in treatment planning systems for radiation therapy. Dose calculation algorithms can be broadly classified into three main groups based on the mechanisms used: (1) factor-based, (2) model-based, and (3) principle-based. Factor-based algorithms are empirical dose calculations that interpolate or extrapolate the dose from a set of basic measurements. Model-based algorithms, represented by the pencil beam convolution, analytical anisotropic, and collapsed cone convolution algorithms, use a simplified physical process: a convolution equation that convolves the primary photon energy fluence with a kernel. Because they account for lateral scatter when beams traverse heterogeneous media, model-based algorithms provide more precise dose calculation results than factor-based algorithms. Principle-based algorithms, represented by Monte Carlo dose calculation, simulate all the real physical processes of beam particles during transport; the dose calculations are therefore accurate but time consuming. Over approximately 70 years, through the development of dose calculation algorithms and computing technology, the accuracy of dose calculation has come close to clinical needs. Next-generation dose calculation algorithms are expected to incorporate biologically equivalent or biologically effective doses, which clinicians expect to use to improve the quality of treatment in the near future.
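The fluence-times-kernel convolution that defines the model-based family can be sketched numerically. The 1-D field, the exponential kernel shape, and all values below are illustrative assumptions for this listing, not clinical data or any vendor's algorithm:

```python
import numpy as np

# Toy 1-D illustration of the model-based approach:
# dose = primary energy fluence convolved with a dose-deposition kernel.
fluence = np.zeros(101)
fluence[40:61] = 1.0          # an open 1-D "field" of unit fluence

x = np.linspace(-5, 5, 51)
kernel = np.exp(-np.abs(x))   # crude exponential scatter kernel (made up)
kernel /= kernel.sum()        # normalize so total energy is conserved

dose = np.convolve(fluence, kernel, mode="same")
```

The kernel spreads dose laterally beyond the field edge, which is exactly the side-scatter behavior the abstract credits model-based algorithms with capturing.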

Efficient Thread Allocation Method of Convolutional Neural Network based on GPGPU (GPGPU 기반 Convolutional Neural Network의 효율적인 스레드 할당 기법)

  • Kim, Mincheol;Lee, Kwangyeob
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.10 / pp.935-943 / 2017
  • CNN (convolutional neural network), which is widely used among neural network models for image classification and speech recognition, has been continuously developed into high-performance structures, but it is difficult to deploy in embedded systems with limited resources. We therefore use GPGPU (general-purpose computing on graphics processing units) together with pre-trained weights to address this problem, but limitations remain. Since a CNN performs simple, iterative operations, the computation speed varies greatly depending on the thread allocation and utilization method in SIMT (Single Instruction Multiple Thread) based GPGPU. When convolution and pooling operations are performed with threads, some threads become idle; by assigning these remaining threads to the computations of the following feature maps and kernels, the proposed method increases the operation speed.
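The reuse of surplus threads can be sketched as simple bookkeeping. The block size, output sizes, and reuse rule below are hypothetical stand-ins, not the paper's implementation:

```python
# Hypothetical illustration of the thread-allocation idea: when a pooled
# output needs fewer threads than a block provides, the surplus threads
# can be handed work from the next feature map instead of idling.
THREADS_PER_BLOCK = 256

def plan_threads(out_h, out_w, next_map_elems):
    needed = out_h * out_w                      # one thread per output element
    blocks = -(-needed // THREADS_PER_BLOCK)    # ceiling division
    surplus = blocks * THREADS_PER_BLOCK - needed
    reused = min(surplus, next_map_elems)       # surplus threads take new work
    return blocks, surplus, reused

# e.g. a 14x14 pooled map leaves 60 of 256 threads free for the next map
blocks, surplus, reused = plan_threads(14, 14, 300)
```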

Deep Learning-based Hyperspectral Image Classification with Application to Environmental Geographic Information Systems (딥러닝 기반의 초분광영상 분류를 사용한 환경공간정보시스템 활용)

  • Song, Ahram;Kim, Yongil
    • Korean Journal of Remote Sensing / v.33 no.6_2 / pp.1061-1073 / 2017
  • In this study, images were classified using a convolutional neural network (CNN), a deep learning technique, to investigate the feasibility of information production through a combination of artificial intelligence and spatial data. A CNN determines kernel attributes based on a classification criterion and extracts information from feature maps to classify each pixel. In this study, a CNN network was constructed to classify materials with similar spectral characteristics and attribute information, which is difficult to achieve with conventional image processing techniques. A Compact Airborne Spectrographic Imager (CASI) and an Airborne Imaging Spectrometer for Application (AISA) were used on three study sites to test this method: Site 1, Site 2, and Site 3. Site 1 and Site 2 were agricultural lands covered in various crops, such as potato, onion, and rice. Site 3 included different buildings, such as single and joint residential facilities. Results indicated that the classification of crop species at Site 1 and Site 2 using this method yielded accuracies of 96% and 99%, respectively. At Site 3, the designation of buildings according to their purpose yielded an accuracy of 96%. Using a combination of existing land cover maps and spatial data, we propose a thematic environmental map that provides seasonal crop types and facilitates the creation of a land cover map.
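Per-pixel classification of a hyperspectral cube typically starts by cutting one patch per pixel for the network to consume. A minimal sketch with assumed sizes (the 3x3 window, 20 bands, and reflect padding are illustrative choices, not the paper's configuration):

```python
import numpy as np

# Toy per-pixel patch extraction: each pixel of a hyperspectral cube is
# classified from the small spatial window of bands centred on it.
def extract_patches(cube, size=3):
    h, w, bands = cube.shape
    pad = size // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    patches = np.empty((h * w, size, size, bands))
    for i in range(h):
        for j in range(w):
            patches[i * w + j] = padded[i:i + size, j:j + size, :]
    return patches  # one sample per pixel, fed to the classifier

cube = np.random.rand(8, 8, 20)   # 8x8 scene, 20 spectral bands
patches = extract_patches(cube)
```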

Comparison of Texture Images and Application of Template Matching for Geo-spatial Feature Analysis Based on Remote Sensing Data (원격탐사 자료 기반 지형공간 특성분석을 위한 텍스처 영상 비교와 템플레이트 정합의 적용)

  • Yoo Hee Young;Jeon So Hee;Lee Kiwon;Kwon Byung-Doo
    • Journal of the Korean earth science society / v.26 no.7 / pp.683-690 / 2005
  • As remote sensing imagery with high spatial resolution (e.g., a pixel resolution of 1 m or less) is widely used in specific application domains, the demand for advanced methods for such imagery is increasing. Among the many applicable methods, texture image analysis, which characterizes the spatial distribution of grey levels in a neighborhood, is one useful approach. For the texture images, we compared and analyzed the results produced by the GLCM algorithm under various directions, kernel sizes, and parameter types, and then studied the spatial feature characteristics of each result image. In addition, a template matching program that can search for spatial patterns using template images selected from the original and texture images was implemented and applied, and matching probabilities were examined on the basis of the results. These results suggest effective applications for detecting and analyzing specifically shaped geological and other complex features in high-spatial-resolution imagery.
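A GLCM for one displacement direction, and a texture feature derived from it, can be sketched in a few lines. The displacement, grey-level count, and the small test image are illustrative assumptions:

```python
import numpy as np

# Minimal GLCM sketch: count co-occurrences of grey levels for one
# displacement (dx, dy); texture features such as contrast are then
# derived from the normalised matrix, as in the direction/kernel-size
# comparison described above.
def glcm(img, levels, dx=1, dy=0):
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()          # normalise to joint probabilities

def contrast(p):
    idx = np.arange(p.shape[0])
    return ((idx[:, None] - idx[None, :]) ** 2 * p).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
```

Recomputing `p` for other `(dx, dy)` displacements gives the directional comparison the abstract describes.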

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems / v.19 no.2 / pp.139-155 / 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology, and for this reason many venture companies are keen on attracting investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, the credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ on the Korea Exchange. In addition, this paper uses a multi-class SVM for the prediction of the DEA-based efficiency ratings for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. This paper is therefore constructed on the basis of the following two ideas for classifying which companies are the more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies and ii) developing a multi-class SVM-based efficiency prediction model for classifying all the companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and it has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of the DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory and has thus far shown good performance, especially in its generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, the hyperplane giving the maximum separation between the classes; the support vectors are the training points closest to this hyperplane. When the classes cannot be separated linearly, a kernel function can be used: the inputs in the original input space are mapped into a high-dimensional dot-product feature space in which the class boundaries become linear. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel function of the SVM. For the multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we constructed multi-class ratings from the DEA efficiencies and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class to within a one-class error when the exact class is difficult to determine in the actual market. We therefore also report accuracy within one class, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification, notwithstanding its efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance the variable selection process, the kernel function parameter selection, the generalization, and the multi-class sample size.
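The one-against-one scheme compared above can be sketched concretely: one binary classifier per class pair, combined by majority voting. To keep the example self-contained, a nearest-centroid rule stands in for the RBF SVM, and the data are made up:

```python
import numpy as np
from itertools import combinations

# One-against-one multi-class scheme: train one binary model per class
# pair, then let every pairwise model vote on a new point.
def fit_ovo(X, y):
    classes = sorted(set(y))
    models = {}
    for a, b in combinations(classes, 2):
        # stand-in binary "model": the two class centroids
        models[(a, b)] = (X[y == a].mean(axis=0), X[y == b].mean(axis=0))
    return classes, models

def predict_ovo(classes, models, x):
    votes = {c: 0 for c in classes}
    for (a, b), (ca, cb) in models.items():
        winner = a if np.linalg.norm(x - ca) <= np.linalg.norm(x - cb) else b
        votes[winner] += 1
    return max(votes, key=votes.get)

X = np.array([[0.0, 0], [0, 1], [5, 5], [5, 6], [10, 0], [10, 1]])
y = np.array([0, 0, 1, 1, 2, 2])        # three toy "rating" classes
classes, models = fit_ovo(X, y)
pred = predict_ovo(classes, models, np.array([5.0, 5.5]))
```

For k classes this trains k(k-1)/2 binary models, which is the pairing structure the paper contrasts with the all-together formulations of Weston-Watkins and Crammer-Singer.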

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.147-168 / 2017
  • There have been many studies on accurate stock market forecasting in academia for a long time, and there are now various forecasting models using various techniques. Recently, many attempts have been made to predict the stock index using machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more useful for short-term transaction prediction and for statistical and mathematical techniques. Most studies using technical indicators have modeled stock price prediction as a binary classification, rising or falling, of future market movement (usually the next trading day). However, binary classification has many unfavorable aspects for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we predict the stock index by expanding the existing binary scheme into a multiple classification of the stock index trend (upward trend, boxed, downward trend). To solve this multi-classification problem, instead of techniques such as Multinomial Logistic Regression (MLOGIT), Multiple Discriminant Analysis (MDA), or Artificial Neural Networks (ANN), we use Multi-classification Support Vector Machines (MSVM), which have proved superior in prediction performance, and propose an optimization model that uses a Genetic Algorithm as a wrapper to improve this model's performance. In particular, the proposed model, named GA-MSVM, is designed to maximize model performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and of training instances (instance selection).
To verify the performance of the proposed model, we applied it to real data: Korea's KOSPI200 stock index trend. The results show that the proposed method is more effective than the conventional multi-class SVM, which had been known to show the best prediction performance, as well as existing artificial intelligence / data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection proved to play a very important role in predicting the stock index trend, with a larger improvement effect on the model than the other factors. Our research is primarily aimed at predicting trend segments to capture signal acquisition or short-term trend transition points. The experimental data set includes technical indicators, such as the price and volatility index (2004-2017), of the KOSPI200 stock index in Korea and macroeconomic data (interest rate, exchange rate, S&P 500, etc.). Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, took three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). 70% of the total data for each class was used for training and the remaining 30% for verification. For comparison, experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted; the MSVM adopted the One-Against-One (OAO) approach, which is known as the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed model, GA-MSVM, performs at a significantly higher level than all the comparative models.
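The wrapper idea of optimizing feature selection and instance selection simultaneously can be sketched with a bit-string chromosome. The chromosome layout, population size, and stub scorer below are assumptions for illustration; in the actual model the fitness would be the validation accuracy of the MSVM trained on the selected subset:

```python
import random

# GA-wrapper sketch: one bit per candidate feature plus one bit per
# training instance, so a single chromosome encodes both selections.
N_FEATURES, N_INSTANCES = 15, 100

def decode(chrom):
    feats = [i for i in range(N_FEATURES) if chrom[i]]
    insts = [i for i in range(N_INSTANCES) if chrom[N_FEATURES + i]]
    return feats, insts

def fitness(chrom, score_fn):
    feats, insts = decode(chrom)
    if not feats or not insts:
        return 0.0                 # degenerate selections score zero
    return score_fn(feats, insts)

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(N_FEATURES + N_INSTANCES)]
       for _ in range(20)]
stub = lambda feats, insts: len(feats) / N_FEATURES  # placeholder scorer
best = max(pop, key=lambda c: fitness(c, stub))
child = crossover(best, pop[0])   # one GA step; mutation/selection omitted
```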

Change Detection for High-resolution Satellite Images Using Transfer Learning and Deep Learning Network (전이학습과 딥러닝 네트워크를 활용한 고해상도 위성영상의 변화탐지)

  • Song, Ah Ram;Choi, Jae Wan;Kim, Yong Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.3 / pp.199-208 / 2019
  • As the number of available satellites increases and technology advances, image information outputs are becoming increasingly diverse and a large amount of data is accumulating. In this study, we propose a change detection method for high-resolution satellite images that uses transfer learning and a deep learning network to overcome the limitation caused by insufficient training data through the use of pre-trained information. The deep learning network used in this study comprises convolutional layers to extract the spatial and spectral information and convolutional long short-term memory layers to analyze the time series information. To use the learned information, the two initial convolutional layers of the change detection network are initialized with values learned from 40,000 patches of the ISPRS (International Society for Photogrammetry and Remote Sensing) dataset. In addition, 2D (2-dimensional) and 3D (3-dimensional) kernels were compared to find the optimal structure for high-resolution satellite images. The experimental results for KOMPSAT-3A (KOrean Multi-Purpose SATellite-3A) images show that this change detection method can effectively extract changed/unchanged pixels but is less sensitive to changes due to shadow and relief displacements. In addition, the change detection accuracy of two sites was improved by using 3D kernels, because a 3D kernel can consider not only the spatial information but also the spectral information. This study indicates that changes in high-resolution satellite images can be detected effectively using the constructed image information and deep learning network. In future work, a pre-trained change detection network will be applied to newly obtained images to extend the scope of the application.
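The 2D-versus-3D kernel distinction comes down to whether the kernel also slides along the spectral axis. A shape-only sketch with toy dimensions (not the paper's network):

```python
import numpy as np

# A 2-D conv kernel spans the full band depth and slides only spatially,
# mixing all bands at once; a 3-D kernel is shallower in the band axis
# and slides along it too, so spectral structure is preserved.
bands, h, w = 4, 32, 32
image = np.random.rand(bands, h, w)

k2d = np.random.rand(bands, 3, 3)   # full spectral depth: no spectral slide
k3d = np.random.rand(2, 3, 3)       # partial depth: slides over bands

out2d_spectral_positions = bands - k2d.shape[0] + 1   # collapses bands
out3d_spectral_positions = bands - k3d.shape[0] + 1   # keeps a band axis
```

The extra spectral output axis is why the 3D variant can exploit spectral as well as spatial information, as reported for the two improved sites.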

Graph Cut-based Automatic Color Image Segmentation using Mean Shift Analysis (Mean Shift 분석을 이용한 그래프 컷 기반의 자동 칼라 영상 분할)

  • Park, An-Jin;Kim, Jung-Whan;Jung, Kee-Chul
    • Journal of KIISE:Software and Applications / v.36 no.11 / pp.936-946 / 2009
  • A graph cuts method has recently attracted much attention for image segmentation, as it can globally minimize energy functions composed of a data term, which reflects how well each pixel fits the prior information for each class, and a smoothness term, which penalizes discontinuities between neighboring pixels. Previous approaches to graph cuts-based automatic image segmentation generally used GMMs (Gaussian mixture models), with the means and covariance matrices computed by the EM algorithm serving as the prior information for each cluster. However, this is practicable only for clusters with a hyper-spherical or hyper-ellipsoidal shape, since each cluster is represented by a covariance matrix centered on its mean. For arbitrarily shaped clusters, this paper proposes graph cuts-based image segmentation using mean shift analysis. As the prior information for estimating the data term, we use the set of mean trajectories toward each mode, starting from initial means randomly selected in $L^*u^*{\upsilon}^*$ color space. Since the mean shift procedure is computationally expensive, we quantize the continuous feature space into a 3D discrete grid and use a 3D kernel based on the first moment in the grid to move the means toward the modes. In the experiments, we investigate the problems of the mean shift-based and normalized cuts-based image segmentation methods, two recently popular methods, and the proposed method showed better performance than both of them, as well as than graph cuts-based automatic segmentation using GMMs, on the Berkeley segmentation dataset.
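The mean-shift trajectory that serves as the prior can be sketched in one dimension. The samples, bandwidth, and flat window below are simplifications of the paper's 3-D color-grid procedure:

```python
import numpy as np

# 1-D mean-shift sketch: a mean repeatedly moves to the average of the
# samples inside its window until it settles on a mode; the trajectory
# of means is the kind of prior the segmentation above uses.
def mean_shift(samples, start, bandwidth=1.5, iters=50):
    mean = float(start)
    trajectory = [mean]
    for _ in range(iters):
        window = samples[np.abs(samples - mean) <= bandwidth]
        new_mean = window.mean()
        trajectory.append(new_mean)
        if abs(new_mean - mean) < 1e-6:
            break
        mean = new_mean
    return mean, trajectory

samples = np.array([0.0, 0.2, 0.4, 5.0, 5.1, 5.3, 5.2])
mode, traj = mean_shift(samples, start=4.0)   # converges to the upper mode
```

Binning the samples onto a discrete grid, as the paper does, replaces the per-sample window scan with a cheap lookup of precomputed first moments.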

A Design and Implementation of RSS Data Collecting Engine based on Web 2.0 (웹 2.0 기반 RSS 데이터 수집 엔진의 설계 및 구현)

  • Kang, Pil-Gu;Kim, Jae-Hwan;Lee, Sang-Jun;Chae, Jin-Seok
    • Journal of Korea Multimedia Society / v.10 no.11 / pp.1496-1506 / 2007
  • The environment of web services has changed a great deal due to the progress of internet technology and the active participation of users. Established web services were static and passive, but recent web services are becoming dynamic and active. Web 2.0 reflects this change well, and its primary feature is the active participation of users. Since the amount of generated information keeps growing, it is essential to share information quickly and correctly. The Web 2.0 technologies that address this need are web syndication and tagging. Web syndication produces feeds so that other sites or users can receive the content of a web site, while a tag captures the kernel of a piece of information, and many internet users rapidly share information through tag search. In this paper, we propose an efficient technique to improve Web 2.0 technologies such as web syndication and tagging by using a data collection engine. The engine stores the information of users' web sites in a database and accesses those sites to collect updated data. The experimental results show that our approach can improve search speed by up to 3.14 times over the existing method and reduce the data size by up to 66% when building associated tags.
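The syndication-plus-tagging pipeline can be sketched with the standard library: parse a feed, store the items, and keep an inverted tag index so a tag search never rescans the stored items. The feed content and index layout below are hypothetical, not the paper's engine:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# A tiny made-up RSS feed standing in for a collected user site.
RSS = """<rss version="2.0"><channel>
  <item><title>Web 2.0 intro</title><category>web2.0</category></item>
  <item><title>Tagging systems</title><category>tag</category>
        <category>web2.0</category></item>
</channel></rss>"""

def collect(feed_xml):
    # Store each item and build an inverted index: tag -> item ids,
    # so tag search is a dictionary lookup rather than a full scan.
    items, tag_index = [], defaultdict(set)
    for item in ET.fromstring(feed_xml).iter("item"):
        idx = len(items)
        items.append(item.findtext("title"))
        for cat in item.findall("category"):
            tag_index[cat.text].add(idx)
    return items, tag_index

items, tag_index = collect(RSS)
```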


THE KOREAN PRIVATE COLLEGE LIBRARIES UNDER THE PERIOD OF JAPANESE CONTROL (일제하(日帝下) 사립전문학교(私立專門學校) 도서관(図書館))

  • Kim, Yong-Sung
    • Journal of the Korean BIBLIA Society for library and Information Science / v.5 no.1 / pp.37-84 / 1981
  • The Korean private colleges under the period of Japanese control were the kernel of the educational resistance, one of the save-the-nation movements, inculcating in Koreans the spirit of independence and self-respect during that period. The Posung College Library building, erected in commemoration of the 30th anniversary of the foundation, in particular, was the result of Koreans' organizing ability and iron will of independence for future generations. In this paper, an attempt to study the Korean private college libraries under the period of Japanese control is provided. The main institutions in this study are the Posung College Library, the Chosen Christian College Library (Yunhee College Library), and the Ewha College Library. This study focuses on reviewing the following: 1. The historical background of the above-mentioned libraries. 2. The educational resistance under that period. 3. A comparative and analytical study of these private college libraries and the Keijo Imperial University Library. 4. The facilities and the basic collection development plans. 5. Library services, including readers' services. 6. The impact of these libraries on the present private university libraries. 7. The organization, staffing pattern, and budget of these private college libraries. The following are the conclusions in outline: 1. The Korean private college libraries had been established in order to carry out the educational resistance. They were supporting agencies for research activities and among the most important means of social education, and they provided, no doubt, full nourishment for hungry souls under that period. 2. These libraries did not outstrip Keijo Imperial University in book collections or in manpower, but their collections coincided, in general, with their curricula and served the save-the-nation movement through education. 3. The library services took such forms as the circulating library, lectures on the use of books and libraries, Library Week, and training courses for librarians; these activities are thought to have contributed indirectly to the social and cultural development of Korea. 4. The library administration of the private colleges depended on the director of the library because of frequent staff changes and a simple functional system without a middle tier. 5. The Japanese Government-General in Korea gave no financial assistance to the private colleges, though they were in greater financial difficulties than Keijo Imperial University. 6. The ambitious ideal of founding universities in reality as well as in name was not achieved during that period because of the monstrous obstacle of the Japanese Government-General in Korea, but the ideal had a desirable effect on the development of these college libraries, particularly before and after 1935.
