• Title/Summary/Keyword: features extracting


Increasing Accuracy of Stock Price Pattern Prediction through Data Augmentation for Deep Learning (데이터 증강을 통한 딥러닝 기반 주가 패턴 예측 정확도 향상 방안)

  • Kim, Youngjun;Kim, Yeojeong;Lee, Insun;Lee, Hong Joo
    • The Journal of Bigdata
    • /
    • v.4 no.2
    • /
    • pp.1-12
    • /
    • 2019
  • As artificial intelligence (AI) technology develops, it is being applied to fields such as image, voice, and text processing, and it has shown promising results in certain areas. Researchers have also tried to predict the stock market using AI. Predicting the stock market is known to be a difficult problem, since the market is affected by many factors such as the economy and politics. Within AI, there have been attempts to predict the ups and downs of stock prices by studying price patterns with various machine learning techniques. This study suggests a way of predicting stock price patterns based on the Convolutional Neural Network (CNN). A CNN classifies images by extracting features from them through convolutional layers, so this study classifies candlestick images generated from stock data in order to predict patterns. The study has two objectives. The first, referred to as Case 1, is to predict patterns from images made from same-day stock price data. The second, referred to as Case 2, is to predict next-day stock price patterns from images produced from daily stock price data. In Case 1, data augmentation methods (random modification and Gaussian noise) are applied to generate more training data, and the generated images are fed into the model for training. Given that deep learning requires a large amount of data, this study suggests a data augmentation method for candlestick images. It also compares the accuracies of images with Gaussian noise across different classification problems. All data in this study were collected through the OpenAPI provided by DaiShin Securities. Case 1 has five labels depending on the pattern: up with up closing, up with down closing, down with up closing, down with down closing, and staying. 
The images in Case 1 are created by removing the last candle (-1 candle), the last two candles (-2 candles), and the last three candles (-3 candles) from 60-minute, 30-minute, 10-minute, and 5-minute candle charts. In a 60-minute candle chart, one candle holds 60 minutes of information: the open, high, low, and close prices. Case 2 has two labels, up and down. For Case 2, images were generated from 60-minute, 30-minute, 10-minute, and 5-minute candle charts without removing any candle. Considering the nature of stock data, moving the candles in the images is suggested instead of existing data augmentation techniques; how much the candles are moved is defined as the modified value. The average difference of closing prices between candles was 0.0029, so 0.003, 0.002, 0.001, and 0.00025 were used as modified values. The number of images was doubled after augmentation. For Gaussian noise, the mean was 0 and the variance 0.01. For both Case 1 and Case 2, the model is based on VGGNet-16, which has 16 layers. As a result, the 10-minute -1 candle setting showed the best accuracy among the 60-minute, 30-minute, 10-minute, and 5-minute candle charts, so 10-minute images were used for the rest of the Case 1 experiments. The images with three candles removed were selected for data augmentation and for the application of Gaussian noise. The 10-minute -3 candle setting achieved 79.72% accuracy; the images with a 0.00025 modified value and 100% of candles changed reached 79.92%; applying Gaussian noise raised the accuracy to 80.98%. According to the Case 2 results, 60-minute candle charts could predict the next day's patterns with 82.60% accuracy. In summary, this study is expected to contribute to further research on predicting stock price patterns from images, and it provides a possible method for data augmentation of stock data.
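The two augmentation ideas described above (shifting closing prices by a small "modified value" and adding zero-mean Gaussian noise with variance 0.01) can be sketched in plain Python. This is a minimal illustration, not the paper's code; the function names and the image representation (a nested list of normalized pixel values) are assumptions.

```python
import math
import random

def add_gaussian_noise(image, mean=0.0, var=0.01):
    """Add Gaussian noise (mean 0, variance 0.01, as in the study) to an
    image given as a nested list of pixel values in [0, 1], clipping the
    result back into the valid range."""
    std = math.sqrt(var)
    return [[min(1.0, max(0.0, px + random.gauss(mean, std))) for px in row]
            for row in image]

def shift_candles(closes, modified_value=0.00025, change_ratio=1.0):
    """Move each candle's closing price up or down by the given modified
    value (a relative factor); change_ratio controls what fraction of
    candles is modified (1.0 = 100% of candles changed)."""
    shifted = []
    for c in closes:
        if random.random() < change_ratio:
            c = c * (1.0 + random.choice([-1, 1]) * modified_value)
        shifted.append(c)
    return shifted
```

Applying both transforms to each chart doubles the training set, matching the paper's report that the number of images was doubled after augmentation.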


Diagnostic Classification of Chest X-ray Pneumonia using Inception V3 Modeling (Inception V3를 이용한 흉부촬영 X선 영상의 폐렴 진단 분류)

  • Kim, Ji-Yul;Ye, Soo-Young
    • Journal of the Korean Society of Radiology
    • /
    • v.14 no.6
    • /
    • pp.773-780
    • /
    • 2020
  • With the development of the Fourth Industrial Revolution, research is being conducted to prevent diseases and reduce damage in fields of science and technology such as medicine, health, and bio, and artificial intelligence technology has been introduced for the image analysis of radiological examinations. In this paper, we directly apply a deep learning model for the classification and detection of pneumonia using chest X-ray images, and evaluate whether a deep learning model of the Inception series is useful for detecting pneumonia. As experimental material, a chest X-ray image dataset shared free of charge by Kaggle was used; of the total 3,470 chest X-ray images, 1,870 were assigned to the training set, 1,100 to the validation set, and 500 to the test set. As a result of the experiment, the metric evaluation of the Inception V3 deep learning model yielded 94.80% accuracy, 97.24% precision, 94.00% recall, and a 95.59% F1 score. In addition, the accuracy at the final epoch was 94.91% for training and 89.68% for validation in the pneumonia detection and classification of chest X-ray images. The loss function value was 1.127% for training and 4.603% for validation. Consequently, the Inception V3 model was evaluated as an excellent deep learning model for extracting and classifying features of chest image data, with a very good learning state. A matrix-based accuracy evaluation on the test set showed an accuracy of 96% for normal chest X-ray images and 97% for pneumonia chest X-ray images. 
Deep learning models of the Inception series are considered useful for the classification of chest diseases and are expected to play an auxiliary role to human resources, which could help address the shortage of medical personnel. This study is expected to serve as basic data for future research on the diagnosis of pneumonia using deep learning.
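The four evaluation metrics reported above are standard functions of the confusion-matrix counts. As a reminder of how they relate, here is a minimal sketch (not the paper's code; the function name is illustrative):

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 score from
    confusion-matrix counts (true/false positives and negatives)."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total                      # correct / all
    precision = tp / (tp + fp)                        # of predicted positives
    recall = tp / (tp + fn)                           # of actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1
```

For instance, high precision (97.24%) with slightly lower recall (94.00%), as reported for Inception V3, means the model rarely flags a normal image as pneumonia but misses a few true pneumonia cases.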

Analysis and Performance Evaluation of Pattern Condensing Techniques used in Representative Pattern Mining (대표 패턴 마이닝에 활용되는 패턴 압축 기법들에 대한 분석 및 성능 평가)

  • Lee, Gang-In;Yun, Un-Il
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.77-83
    • /
    • 2015
  • Frequent pattern mining, one of the major areas actively studied in data mining, is a method for extracting useful pattern information hidden in large datasets or databases. Frequent pattern mining approaches have been actively employed in a variety of application fields because their results allow us to analyze important characteristics of databases more easily and automatically. However, traditional frequent pattern mining methods, which simply extract every pattern whose support is not smaller than a user-given minimum support threshold, have the following problems. First, depending on the features of a given database and the threshold setting, they may have to generate an enormous number of patterns, and that number can grow geometrically; such work also wastes runtime and memory resources. Furthermore, the excessively large pattern results make it difficult to analyze the mining output. To solve these issues of traditional frequent pattern mining approaches, the concept of representative pattern mining and various related works have been proposed. In contrast to the traditional approaches that find all possible frequent patterns in a database, representative pattern mining approaches selectively extract a smaller number of patterns that represent the general frequent patterns. In this paper, we describe the details and characteristics of pattern condensing techniques that consider the maximality or closure property of the generated frequent patterns, and compare and analyze these techniques. 
Given a frequent pattern, satisfying maximality means that every possible superset of the pattern has a support value smaller than the user-specified minimum support threshold; satisfying the closure property means that no superset has a support equal to that of the pattern. By mining maximal frequent patterns or closed frequent patterns, we can achieve effective pattern compression and perform mining operations with much smaller time and space resources. In addition, compressed patterns can be converted back into the original frequent pattern forms if necessary; in particular, the closed frequent pattern notation can recover the original patterns without any information loss, so a complete set of original frequent patterns can be obtained from the closed ones. Although the maximal frequent pattern notation does not guarantee a complete recovery in the conversion process, it has the advantage of extracting a smaller number of representative patterns more quickly than the closed frequent pattern notation. In this paper, we show the performance results and characteristics of the aforementioned techniques in terms of pattern generation, runtime, and memory usage through performance evaluation on various real-world datasets. For a more exact comparison, the algorithms implementing these techniques were run on the same platform and at the same implementation level.
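The maximality and closure definitions above translate directly into set comparisons over a table of frequent patterns and their supports. The following is a minimal, brute-force sketch of both condensing criteria (illustrative only; real miners such as CLOSET or MAFIA avoid this pairwise scan):

```python
def condense(frequent):
    """Given frequent patterns as a dict {frozenset(items): support},
    return (closed, maximal) sub-dicts.
    - maximal: no proper superset is frequent at all
    - closed:  no proper superset has the same support"""
    closed, maximal = {}, {}
    for p, sup in frequent.items():
        supersets = [q for q in frequent if p < q]  # proper supersets
        if not supersets:
            maximal[p] = sup
        if all(frequent[q] < sup for q in supersets):
            closed[p] = sup
    return closed, maximal
```

Note how the closed set preserves full information: the support of any frequent pattern equals the maximum support among its closed supersets, which is why the closed notation allows lossless recovery while the maximal notation does not.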

A Study on the Improvement of Guideline in Digital Forest Type Map (수치임상도 작업매뉴얼의 개선방안에 관한 연구)

  • PARK, Jeong-Mook;DO, Mi-Ryung;SIM, Woo-Dam;LEE, Jung-Soo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.1
    • /
    • pp.168-182
    • /
    • 2019
  • The objectives of this study were to examine the production processes and methods of the "Forest Type Map Actualization Production (Database (DB) Construction Work Manual)" (Work Manual), identify issues associated with them, and suggest solutions by applying evaluation items to a 1:5k digital forest type map. The evaluation items applied to the forest type map were divided into zoning and attributes, and issues in the Work Manual's production processes and methods were derived by analyzing stand-structure characteristics and fragmentation by administrative district. Korea is divided into five divisions: one is designated as the area changed naturally and the other four as areas changed artificially. The naturally changed area has been updated every five years, and the artificially changed areas annually. The fragmentation of South Korea was analyzed in order to examine the consistency of the DB established for each region. The results showed that, in South Korea, the number of patches increased and the mean patch size decreased, so the degree of fragmentation and the complexity of shapes increased; they decreased in 4 of the 17 regions (metropolitan cities and provinces), indicating spatial variation. The "Forest Classification" defines the minimum area of a zoning as 0.1 ha. This study examined this criterion by estimating the divided objects (polygon units) in the forest type map and found that approximately 26% of objects were smaller than the minimum zoning area. These results imply that it is necessary to establish the definition and regeneration interval of "areas changed artificially and areas changed naturally" and to improve the standard for the minimum zoning area. 
Among the attributes in the Work Manual, the "Species Change" item classifies terrain features into 52 types, 43 of which belong to stocked land. This study examined distribution ratios by extracting species information from the forest type map. It was found that 23 species, approximately 53% of all species, each occupied less than 0.1% of forested land; the top three species were pine and other species. Although undergrowth on unstocked forest land is classified in the terrain feature system, its definition and classification criteria are not established in the "Forest Classification" item. Therefore, the terrain feature system needs to be reestablished and definitions of undergrowth set.
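The zoning check described above (roughly 26% of polygons below the 0.1 ha minimum) is a simple threshold count over polygon areas. A minimal sketch, with a hypothetical function name and areas given directly in hectares:

```python
def below_minimum_share(areas_ha, minimum_ha=0.1):
    """Return the share of zoning polygons whose area falls below the
    minimum zoning area (0.1 ha in the Forest Classification)."""
    small = sum(1 for a in areas_ha if a < minimum_ha)
    return small / len(areas_ha)
```

In practice the areas would come from a GIS layer (e.g. polygon geometry in the 1:5k forest type map) rather than a plain list.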

A Study on Training Dataset Configuration for Deep Learning Based Image Matching of Multi-sensor VHR Satellite Images (다중센서 고해상도 위성영상의 딥러닝 기반 영상매칭을 위한 학습자료 구성에 관한 연구)

  • Kang, Wonbin;Jung, Minyoung;Kim, Yongil
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1505-1514
    • /
    • 2022
  • Image matching is a crucial preprocessing step for the effective utilization of multi-temporal and multi-sensor very high resolution (VHR) satellite images. Deep learning (DL), which is attracting widespread interest, has proven to be an efficient approach for measuring the similarity between image pairs quickly and accurately by extracting complex and detailed features from satellite images. However, image matching of VHR satellite images remains challenging because the results of DL models depend on the quantity and quality of the training dataset, and because creating training datasets from VHR satellite images is difficult. Therefore, this study examines the feasibility of a DL-based method for matching-pair extraction, the most time-consuming process during image registration. This paper also analyzes factors that affect accuracy depending on the configuration of the training dataset when the dataset is developed, with bias, from an existing multi-sensor VHR image database. For this purpose, the training dataset was composed of correct and incorrect matching pairs by assigning true and false labels to image pairs extracted with a grid-based Scale Invariant Feature Transform (SIFT) algorithm from a total of 12 multi-temporal and multi-sensor VHR images. The Siamese convolutional neural network (SCNN), proposed for matching-pair extraction on the constructed training dataset, is trained by passing the two images in parallel through two identical convolutional neural network structures and measuring their similarity. The results of this study confirm that data acquired from a VHR satellite image database can be used as a DL training dataset, and they indicate the potential to improve the efficiency of the matching process through an appropriate configuration of multi-sensor images. 
Given their stable performance, DL-based image matching techniques using multi-sensor VHR satellite images are expected to replace existing manual feature extraction methods and to develop further into an integrated DL-based image registration framework.
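The dataset construction step above (labeling SIFT-extracted image pairs as correct or incorrect matches) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name is hypothetical, and patches stand in for the image chips around SIFT keypoints.

```python
import random

def make_matching_pairs(correct_pairs, seed=0):
    """Build a balanced training set for a Siamese matcher.
    Each known-correct (patch_a, patch_b) pair is labeled 1 (true match);
    for each, a deliberately mismatched pair is added with label 0."""
    rng = random.Random(seed)
    dataset = [(a, b, 1) for a, b in correct_pairs]
    bs = [b for _, b in correct_pairs]
    for i, (a, _) in enumerate(correct_pairs):
        j = rng.randrange(len(bs))
        while j == i:                      # avoid the true partner
            j = rng.randrange(len(bs))
        dataset.append((a, bs[j], 0))      # incorrect matching pair
    rng.shuffle(dataset)
    return dataset
```

The SCNN then consumes each (patch_a, patch_b) pair through its two weight-sharing branches and is trained against the 1/0 label.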

A study on characteristics of palace wallpaper in the Joseon Dynasty - Focusing on Gyeongbokgung Palace, Changdeokgung Palace and Chilgung Palace - (조선시대 궁궐 도배지 특성 연구 - 경복궁, 창덕궁, 칠궁을 중심으로 -)

  • KIM Jiwon;KIM Jisun;KIM, Myoungnam;JEONG Seonhwa
    • Korean Journal of Heritage: History & Science
    • /
    • v.56 no.1
    • /
    • pp.80-97
    • /
    • 2023
  • By taking wallpaper specimens from Gyeongbokgung Palace, Changdeokgung Palace, and Chilgung Palace, preserved from the late Joseon Dynasty to the present, this study aimed to determine the types and characteristics of the paper used as wallpaper by the Joseon royal family. First, through archival research, we confirmed the features of paper hanging in the palaces using old literature on the wallpaper used by the royal family. Second, we conducted a field survey of royal palaces whose construction period was relatively clear, and after sampling, analyzed the first layer of wallpaper directly attached to the wall structure. We confirmed that hanji was the main raw material of the royal wallpaper, and by analyzing the blue-colored paper, identified the substances (dyes and pigments) used to produce a blue color in spaces requiring formality. Based on these results, we compared the existing wallpaper with the wallpaper records in old literature on the Joseon Dynasty palaces, and built a database for the restoration of cultural properties to support conservation of palace wallpaper. We examined changes in wallpaper types by century, and their content according to place of use, by extracting wallpaper-related records from 36 cases of Uigwe from the 17th to 20th centuries. As a result, it was found that the names used for document paper and wallpaper did not differ, indicating that document paper and wallpaper were used without distinction during the Joseon Dynasty. 
Although the types of wallpaper differ by period, it was confirmed that the foundation of wallpaper continued until the late Joseon Dynasty with Baekji (white hanji), Hubaekji (thick white paper), jeojuji (common hanji used to write documents), chojuji (hanji used as a draft for writing documents), and Gakjang (a wide, thick hanji used as a pad). Fiber identification based on the morphological characteristics of the fibers and the normal color reaction (KS M ISO 9184-4, Graph "C" staining test) on the first paper layer directly attached to the palace walls confirmed the main materials of the hanji used by the royal family and determined the raw materials used to make hanji in palace buildings of each construction period. Also, analysis of the coloring materials of the blue decorative paper with an optical microscope, ultraviolet-visible spectroscopy (UV-Vis), and X-ray diffraction (XRD) determined the types of dyes and pigments used for the blue decorative paper in palace spaces requiring formality, and identified natural indigo, lazurite, and cobalt blue as the raw materials used to produce the blue color.

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided a very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content involving their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different way to gain access, such as an open API, search tools, a DB-to-DB interface, or purchased content. The second phase is pre-processing to generate useful material for meaningful analysis. 
If garbage data is not removed, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, in which the cleansed social media content set is analyzed. The qualified dataset includes not only user-generated content but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized for reputation analysis. There are also various applications such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major focus of this phase is to explain the results and help users comprehend their meaning; to the extent possible, deliverables from this phase should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds 66.5% of the market share and has kept the No. 1 position in the Korean "ramen" business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing, and classified the content into more detailed categories such as marketing features, environment, and reputation. 
In these phases, we used freeware such as the tm, KoNLP, ggplot2, and plyr packages of the R project. As a result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-color examples using open-source library packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. A heat map can explain the movement of sentiment or volume in a category-by-time matrix, showing density of color across time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation with a hierarchical structure, since a tree map can present buzz volume and sentiment in a visualized result for a certain period. This case study offers real-world business insights from market sensing, demonstrating to practical-minded business users how they can use these results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
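The core of the sentiment-analysis phase described above is lexicon-based polarity scoring: tokens are looked up in a domain-specific lexicon and their weights are summed. The study built its lexicon in R; the sketch below shows the same idea in Python with a hypothetical function name and toy lexicon, not the authors' actual resources.

```python
def sentiment_score(tokens, lexicon):
    """Score a tokenized piece of content against a polarity lexicon
    mapping words to weights (e.g. +1 positive, -1 negative), and
    return (score, polarity_label)."""
    score = sum(lexicon.get(t, 0) for t in tokens)
    polarity = ('positive' if score > 0
                else 'negative' if score < 0
                else 'neutral')
    return score, polarity
```

Aggregating these per-content scores by category and time period yields exactly the kind of data the heat maps and valence tree maps visualize: sentiment and buzz volume per category per period.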