• Title/Summary/Keyword: Binary images


A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.167-181 / 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes applying CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN is strong at interpreting images; thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. Each graph is drawn in a 40 × 40 pixel image, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layers. In the pooling layers, a 2 × 2 max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation function for the convolution layers and the hidden layers was ReLU (Rectified Linear Unit), and the one for the output layer was the Softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e. tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry William's %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), CCI (commodity channel index), and so on. To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
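The stated configuration maps directly onto a small network. Below is a minimal sketch in Keras of a classifier over 40 × 40 RGB graph images following the layer sizes quoted in the abstract (two 5 × 5 convolutions with 6 and 9 filters, 2 × 2 max pooling, hidden layers of 900 and 32 units, ReLU activations, softmax over two classes); the optimizer, loss, and all identifiers are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a CNN-FG-style classifier (not the authors' code).
# Input: 40x40 RGB images of 5-day fluctuation graphs; output: up/down.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_fg():
    model = models.Sequential([
        layers.Input(shape=(40, 40, 3)),
        layers.Conv2D(6, (5, 5), activation="relu"),   # "5x5x6" filters
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(9, (5, 5), activation="relu"),   # "5x5x9" filters
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(900, activation="relu"),          # first hidden layer
        layers.Dense(32, activation="relu"),           # second hidden layer
        layers.Dense(2, activation="softmax"),         # up / down
    ])
    # Optimizer and loss are assumptions; the abstract does not specify them.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch: x_train has shape (N, 40, 40, 3), y_train holds 0/1 labels.
# model = build_cnn_fg(); model.fit(x_train, y_train, validation_split=0.2)
```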

Subimage Detection of Window Image Using AdaBoost (AdaBoost를 이용한 윈도우 영상의 하위 영상 검출)

  • Gil, Jong In; Kim, Manbae
    • Journal of Broadcast Engineering / v.19 no.5 / pp.578-589 / 2014
  • A window image is what is displayed on a monitor screen when application programs are executed on a computer; it includes webpages, video players, and many other applications. Compared with other applications, a webpage delivers a wide variety of information in many different forms. Unlike a natural image captured with a camera, a window image such as a webpage contains diverse components such as text, logos, icons, and subimages, each of which delivers a different kind of information to the user. Because text and images are served in various forms, components with different characteristics need to be separated locally. In this paper, we divide window images into many sub-blocks and classify each divided region as background, text, or subimage. The detected subimages can be applied to 2D-to-3D conversion, image retrieval, image browsing, and so forth. Although there are many subimage classification methods, in this paper we utilize AdaBoost to verify that a machine learning-based algorithm can be effective for subimage detection. In the experiment, the subimage detection ratio was 93.4% and the false alarm rate was 13%.
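As an illustration of the block-classification idea described above (not the authors' implementation), the following sketch trains scikit-learn's AdaBoostClassifier on simple per-block features; the block size, the feature choice, and all names are assumptions.

```python
# Hypothetical sketch: classify window-image blocks with AdaBoost (scikit-learn).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def block_features(block):
    """Toy per-block features; the paper's actual features are not specified here."""
    gray = block.mean(axis=2)
    gy, gx = np.gradient(gray)
    return [gray.mean(),                      # mean intensity
            np.hypot(gx, gy).mean(),          # edge-density proxy
            block.std(axis=(0, 1)).mean()]    # color variance

def split_blocks(image, size=32):
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield image[y:y + size, x:x + size]

# Training: X holds features for labeled blocks, y holds labels
# (e.g. 0 = background, 1 = text, 2 = subimage).
# clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
# preds = clf.predict([block_features(b) for b in split_blocks(window_image)])
```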

Construction of voxel head phantom and application to BNCT dose calculation (Voxel 머리팬텀 제작 및 붕소중성자포획요법 선량계산에의 응용)

  • Lee, Choon-Sik; Lee, Choon-Ik; Lee, Jai-Ki
    • Journal of Radiation Protection and Research / v.26 no.2 / pp.93-99 / 2001
  • A voxel head phantom was constructed to overcome the limitation of mathematical phantoms in depicting anatomical details, and an example dose calculation for BNCT was performed. The repeated structure algorithm of the general purpose Monte Carlo code MCNP4B was applied for the voxel Monte Carlo calculation. A simple binary voxel phantom and a combinatorial geometry phantom composed of two materials were constructed to validate the voxel Monte Carlo calculation system. The tomographic images of the VHP man provided by the NLM (National Library of Medicine) were segmented and indexed to construct the voxel head phantom. Comparison of doses for broad parallel gamma and neutron beams in the AP and PA directions showed a decrease of brain dose in the voxel head phantom due to the attenuation of neutrons in the eyeballs. A spherical tumor volume with a diameter of 5 cm was defined in the center of the brain for the BNCT dose calculation, in which accurate 3-dimensional dose calculation is essential. As a result of the BNCT dose calculation for downward neutron beams of 10 keV and 40 keV, the tumor dose is roughly doubled when the boron concentration ratio between the tumor and the normal tissue is 30 μg/g to 3 μg/g. This study established the voxel Monte Carlo calculation system and suggested the feasibility of precise dose calculation in therapeutic radiology.
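To make the segmentation-and-indexing step concrete, the sketch below turns a stack of tomographic slices into an indexed voxel array; the thresholds, material indices, and function names are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch: build an indexed voxel phantom from segmented CT-like slices.
import numpy as np

def build_voxel_phantom(slices, thresholds=(-500, 300)):
    """slices: list of 2D arrays of Hounsfield-like values (assumed input format).
    Returns a 3D array of material indices: 0 = air, 1 = soft tissue, 2 = bone."""
    volume = np.stack(slices, axis=0)                  # (z, y, x) voxel grid
    phantom = np.zeros(volume.shape, dtype=np.uint8)   # default: air
    phantom[volume >= thresholds[0]] = 1               # soft tissue
    phantom[volume >= thresholds[1]] = 2               # bone
    return phantom

# Each voxel index would then be mapped to a material card in the Monte Carlo
# input (e.g. MCNP's repeated-structure lattice), which is outside this sketch.
```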


Performance Test for the Long Distance Sprayer by an Image Processing (영상처리를 이용한 광역방제기 팬의 성능실험)

  • Min, B.R.; Kim, D.W.; Seo, K.W.; Hong, J.T.; Kim, W.; Choi, J.H.; Lee, D.W.
    • Journal of Animal Environmental Science / v.14 no.3 / pp.159-166 / 2008
  • This research was carried out to test and analyze the capacity of the long distance sprayer fan for large livestock farmhouses. The long distance sprayer was manufactured to spray a large amount of water, used as a solvent for agricultural chemicals and a black dye, with a maximum spraying distance of 140 m and an effective spraying distance of 100 m. The spraying quantity and distance were measured from the intensity values, obtained by binary image processing, of A4 papers that absorbed the sprayed chemicals. The A4 papers were fixed at a height of 1 m above the ground at regular 10 m intervals. After the papers were collected, their gray level intensity values were analyzed; the gray level ranges from 0 to 255, where 0 is black and 255 is white. The A4 paper at the 10 m distance fell from its stick because it received too much of the sprayed water with black dye, and the paper at 30 m also showed a low gray level because a large amount of black water fell on it. The intensity values on the A4 papers between 20 m and 100 m were mostly below 200, which indicates uniformity of the spraying quantity. Additionally, it was possible to spray agricultural chemicals up to 180 m. Based on this research, the long distance sprayer could be applied to preventing hoof-and-mouth disease in large livestock farmhouses.
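The gray-level measurement described above can be reproduced with a few lines of image processing. The sketch below (an illustrative assumption, not the authors' procedure) reads a scanned A4 sheet, computes its mean gray level, and estimates spray coverage with a fixed binary threshold using OpenCV.

```python
# Hypothetical sketch: estimate spray deposition on a scanned A4 sheet via
# gray-level statistics and a binary threshold (OpenCV).
import cv2

def analyze_sheet(path, threshold=200):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)      # 0 = black, 255 = white
    mean_level = gray.mean()                           # lower => more dye deposited
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)
    coverage = (binary > 0).mean()                     # fraction of darkened pixels
    return mean_level, coverage

# Example: sheets collected at 10 m intervals could be compared as
# for d in range(10, 110, 10): print(d, analyze_sheet(f"sheet_{d}m.png"))
```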


An Efficient Bitmap Indexing Method for Multimedia Data Reflecting the Characteristics of MPEG-7 Visual Descriptors (MPEG-7 시각 정보 기술자의 특성을 반영한 효율적인 멀티미디어 데이타 비트맵 인덱싱 방법)

  • Jeong Jinguk; Nang Jongho
    • Journal of KIISE: Computer Systems and Theory / v.32 no.1 / pp.9-20 / 2005
  • Recently, the MPEG-7 standard, a multimedia content description standard, has been widely used for content-based image/video retrieval systems. However, since the descriptors standardized in MPEG-7 are usually high-dimensional and suffer from the so-called 'curse of dimensionality', previously proposed indexing methods (for example, multidimensional indexing methods, dimensionality reduction methods, filtering methods, and so on) cannot effectively index a multimedia database represented in MPEG-7. This paper proposes an efficient multimedia data indexing mechanism that reflects the characteristics of MPEG-7 visual descriptors. In the proposed indexing mechanism, the descriptor is transformed into a histogram of some attributes. By representing the value of each bin as a binary number, the histogram itself, that is, the visual descriptor for an object in the multimedia database, can be represented as a bit string. The bit strings for all objects in the multimedia database are collected to form an index file, a bitmap index. By XORing them with the descriptor of a query object, candidate solutions for the similarity search can be computed easily, and the candidates are then checked again against the query object to compute the similarity precisely with an exact metric such as the L1-norm. This indexing and searching mechanism is efficient because the filtering process is performed by simple bit operations and dramatically reduces the search space. In experiments with more than 100,000 real images, the proposed indexing and searching mechanisms were about 15 times faster than sequential search with more than 90% accuracy.
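A rough sketch of this filter-then-refine search is given below; the bit-quantization rule (one bit per bin, set when the bin exceeds its per-bin median), the size of the candidate set, and every identifier are assumptions for illustration rather than the paper's exact scheme.

```python
# Hypothetical sketch: bitmap filtering with XOR/popcount, then exact L1 re-ranking.
import numpy as np

def to_bits(histograms):
    """Quantize each histogram bin to one bit (above per-bin median = 1)."""
    return (histograms > np.median(histograms, axis=0)).astype(np.uint8)

def search(db_hists, query_hist, keep=100):
    db_bits = to_bits(db_hists)
    q_bits = (query_hist > np.median(db_hists, axis=0)).astype(np.uint8)
    # Filtering step: XOR gives differing bits; their count is the Hamming distance.
    hamming = np.count_nonzero(db_bits ^ q_bits, axis=1)
    candidates = np.argsort(hamming)[:keep]            # cheap candidate set
    # Refinement step: exact L1 distance only on the surviving candidates.
    l1 = np.abs(db_hists[candidates] - query_hist).sum(axis=1)
    return candidates[np.argsort(l1)]

# db_hists: (N, bins) float array of descriptor histograms; query_hist: (bins,)
```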

A High Performance License Plate Recognition System (고속처리 자동차 번호판 인식시스템)

  • 남기환; 배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.8 / pp.1352-1357 / 2002
  • This paper describes an algorithm for extracting license plates from vehicle images. Conventional methods preprocess the entire vehicle image to produce an edge image and binarize it; the Hough transform is then applied to the binary image to find horizontal and vertical lines, and the license plate area is extracted using the characteristics of license plates. The problems with this approach are that real-time processing is not feasible due to the long processing time, and that the license plate area is not extracted when lighting is irregular, such as at night, or when the plate boundary does not show up in the image. This research uses the gray level transition characteristics of license plates: it verifies the digit areas by examining the digit width and the level difference between the background area and the digit area, and then extracts the plate area by testing the distance between the verified digits. This approach solves the problem of failing to extract license plates with degraded boundaries, as happens in conventional methods, and resolves the processing time problem by running in real time, so that practical application is possible. The paper presents a powerful automated license plate recognition system that is able to read the license numbers of cars even under circumstances that are far from ideal. In a real-life test, the percentage of rejected plates was 13%, whereas 0.4% of the plates were misclassified. Suggestions for further improvements are given.
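The gray-level transition test can be pictured with a simple one-dimensional scan. The sketch below (an illustrative assumption, not the paper's algorithm) walks along one image row, records dark transitions whose contrast exceeds a threshold, and keeps runs whose widths are plausible for digit strokes.

```python
# Hypothetical sketch: find candidate digit regions on one scan line by
# looking for strong gray-level transitions with plausible widths.
import numpy as np

def digit_candidates(row, contrast=60, min_w=2, max_w=20):
    """row: 1-D array of gray levels. Returns (start, end) pixel spans."""
    diffs = np.diff(row.astype(int))
    falls = np.where(diffs < -contrast)[0]   # background -> dark digit stroke
    rises = np.where(diffs > contrast)[0]    # dark digit stroke -> background
    spans = []
    for f in falls:
        nxt = rises[rises > f]
        if nxt.size and min_w <= nxt[0] - f <= max_w:
            spans.append((f, nxt[0]))
    # A plate candidate would then be a group of such spans with roughly equal
    # spacing, verified over several neighboring rows.
    return spans
```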

Content-based Image Retrieval Using Color Adjacency and Gradient (칼라 인접성과 기울기를 이용한 내용 기반 영상 검색)

  • Jin, Hong-Yan; Lee, Ho-Young; Kim, Hee-Soo; Kim, Gi-Seok; Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.1 / pp.104-115 / 2001
  • A new content-based color image retrieval method integrating color adjacency and gradient features is proposed in this paper. As the most widely used feature of color images, the color histogram has the advantages of being invariant to changes in viewpoint, rotation of the image, and so on, and of being simple and fast to compute. However, it is difficult to distinguish different images with similar color distributions using histogram-based image retrieval, because the color histogram is generated on uniformly quantized colors and contains no spatial information. Another shortcoming of histogram-based image retrieval is that the feature storage is usually very large. To avoid these drawbacks, the proposed method calculates the gradient, defined as the largest color difference between neighboring pixels, instead of the uniform quantization commonly used in most histogram-based methods. In addition, the color adjacency information, which indicates the major color composition of an image, is extracted and represented in binary form to reduce the amount of feature storage. The two features are integrated to make the retrieval more robust to changes in various external conditions.
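As an illustration of the two features (with assumed definitions; the paper's exact formulas are not reproduced here), the sketch below computes a per-pixel gradient as the largest color difference to the right and bottom neighbors and a binary color-adjacency matrix over a coarsely quantized palette.

```python
# Hypothetical sketch: per-pixel max neighbor color difference ("gradient") and a
# binary color-adjacency matrix over a coarsely quantized palette.
import numpy as np

def color_gradient(img):
    """img: (H, W, 3) float array. Largest RGB distance to right/bottom neighbor."""
    right = np.linalg.norm(img[:, 1:] - img[:, :-1], axis=2)
    down = np.linalg.norm(img[1:, :] - img[:-1, :], axis=2)
    return np.maximum(right[:-1, :], down[:, :-1])       # common (H-1, W-1) region

def binary_adjacency(img, levels=4):
    """1 where two palette colors ever occur as horizontal neighbors, else 0."""
    q = (img // (256 // levels)).astype(int)              # coarse quantization
    codes = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]
    adj = np.zeros((levels ** 3, levels ** 3), dtype=np.uint8)
    adj[codes[:, :-1].ravel(), codes[:, 1:].ravel()] = 1
    return adj
```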


Facial Feature Detection and Facial Contour Extraction using Snakes (얼굴 요소의 영역 추출 및 Snakes를 이용한 윤곽선 추출)

  • Lee, Kyung-Hee; Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.27 no.7 / pp.731-741 / 2000
  • This paper proposes a method to detect a facial region and extract facial features, which is crucial for the visual recognition of human faces. We extract the MER (Minimum Enclosing Rectangle) of the face and of the facial components using projection analysis on both the edge image and the binary image. We use an active contour model (snakes) to extract the contours of the eyes, mouth, eyebrows, and face, in order to reflect individual differences in facial shape and to converge quickly. The choice of initial contour is very important for the performance of snakes; in particular, we detect the MER of each facial component and then determine the initial contour from the general shape of that component within the boundary of the obtained MER. Experimental results show that MER extraction of the eyes, mouth, and face was performed successfully, but for images with bright eyebrows, MER extraction of the eyebrows performed poorly. We obtained good contour extraction across individual differences in facial shape. In particular, for eye contour extraction we combined edges from a first-order derivative operator with zero crossings from a second-order derivative operator in the design of the snake energy function, and achieved good eye contours. For face contour extraction, we used both edges and the gray level intensity of pixels in the design of the energy function, and good face contours were extracted as well.
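A small sketch of the kind of external energy described above, combining first-derivative edges with second-derivative zero crossings, is given below; the weights, the smoothing scale, and the suggested use of scikit-image's active_contour are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: build an external energy map from Sobel edges and
# Laplacian zero crossings for snake-based contour extraction.
import numpy as np
from scipy import ndimage

def external_energy(gray, w_edge=1.0, w_zero=0.5):
    """gray: 2-D float image. Higher values should attract the contour."""
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    edge = np.hypot(gx, gy)                               # first-derivative edges
    lap = ndimage.laplace(ndimage.gaussian_filter(gray, 2.0))
    zero_cross = np.sign(lap[:, :-1]) != np.sign(lap[:, 1:])
    zero_map = np.zeros_like(gray)
    zero_map[:, :-1] = zero_cross                         # second-derivative zero crossings
    return w_edge * edge / (edge.max() + 1e-8) + w_zero * zero_map

# Usage sketch: an initial contour taken from the eye's MER could be refined with
# skimage.segmentation.active_contour(external_energy(gray), init_pts,
#                                     w_line=1, w_edge=0)
```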


Surface Modification of Poly(vinylidene fluoride) Membranes using Surface Modifying Macromolecules (SMMs) and Their Application to Pervaporation Separation (SMMs을 이용한 고분자막의 표면개질과 이의 투과증발분리 연구)

  • Rhim, Ji-Won; Lee, Byung-Seong; Kim, Dae-Hoon; Lee, Bo-Sung; Yoon, Seok-Won; Im, Hyeon-Soo; Moon, Go-Young; Nam, Sang-Yong; Byun, Hong-Sik
    • Membrane Journal / v.18 no.3 / pp.206-213 / 2008
  • Poly(vinylidene fluoride) (PVDF) membrane surfaces were modified using surface modifying macromolecules (SMMs). Zonyl BA-L was used as the SMM, and PVDF membranes containing 0 to 2 wt% SMM were prepared. The resulting membranes were characterized by SEM, contact angle measurements, and pervaporation separation of a water-ethanol system. SEM images showed that SMM layers were created in the surface regions of the PVDF membranes, and the contact angles were higher than those of the untreated PVDF membranes. Pervaporation was carried out at 50, 60, and 70°C, and the PVDF membranes containing 1 and 2 wt% SMM were tested with 10, 20, and 50 wt% water in binary water/ethanol mixtures and with pure water. The PVDF/2 wt% Zonyl BA-L membrane showed a permeability of 5.3 g/m²·hr and a separation factor of 287 at 50°C for a water : ethanol = 10 : 50 solution.
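For reference, the pervaporation separation factor quoted above is conventionally defined from the weight fractions of water (w) and ethanol (e) in the permeate (y) and in the feed (x); this is the standard textbook definition, not a formula reproduced from the paper:

```latex
\alpha_{w/e} = \frac{y_{w} / y_{e}}{x_{w} / x_{e}}
```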

A Robust Staff Line Height and Staff Line Space Estimation for the Preprocessing of Music Score Recognition (악보인식 전처리를 위한 강건한 오선 두께와 간격 추정 방법)

  • Na, In-Seop; Kim, Soo-Hyung; Nquyen, Trung Quy
    • Journal of Internet Computing and Services / v.16 no.1 / pp.29-37 / 2015
  • In this paper, we propose a robust pre-processing module for camera-based Optical Music Score Recognition (OMR) on mobile devices. The captured images often suffer from distortions such as illumination changes, blur, and low resolution, and music sheets with complex backgrounds are especially difficult to recognize. Throughout a symbol recognition system, the staff line height and staff line space are used many times and have a large impact on the recognition modules, so robust and accurate estimates of both are essential. Several estimation methods have been proposed for binary images, but for music sheet images with complex backgrounds the results of common binarization algorithms are not satisfactory and can lead to incorrect estimates of the staff line height and space. We propose a robust staff line height and staff line space estimation that applies run-length encoding to an edge image. The proposed method is composed of two steps. In the first step, we estimate the staff line height and staff line space from an edge image obtained with the Sobel operator on image blocks, encoding each column of the edge image with a run-length encoding algorithm. In the second step, we detect the staff lines using the Stable Path algorithm and remove them using an adaptive Line Track Height algorithm that tracks the staff line positions. The results show that robust and accurate estimation is possible even for complex backgrounds.
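The first step lends itself to a compact sketch: run-length encode each column of a binary edge image and take the most frequent dark run as the staff line height and the most frequent light run as the staff line space. The code below is an illustrative assumption of that idea, not the authors' implementation.

```python
# Hypothetical sketch: estimate staff line height/space from column run-lengths.
import numpy as np
from collections import Counter

def column_runs(col):
    """Yield (value, run_length) pairs for one binary column."""
    change = np.flatnonzero(np.diff(col)) + 1
    bounds = np.concatenate(([0], change, [len(col)]))
    for start, end in zip(bounds[:-1], bounds[1:]):
        yield col[start], end - start

def estimate_staff_metrics(edge_binary):
    """edge_binary: 2-D array with 1 on edge/staff pixels, 0 elsewhere."""
    dark, light = Counter(), Counter()
    for col in edge_binary.T:                      # process column by column
        for value, length in column_runs(col):
            (dark if value else light)[length] += 1
    line_height = dark.most_common(1)[0][0]        # most frequent dark run
    line_space = light.most_common(1)[0][0]        # most frequent light run
    return line_height, line_space
```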