• Title/Summary/Keyword: performance distribution


Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess returns from financial markets. In general, market timing means determining when to buy and sell in order to earn excess returns from trading. In many market timing systems, trading rules have been used as the engine that generates trade signals. On the other hand, some researchers have proposed rough set analysis as a suitable tool for market timing because, through its control function, it does not generate a trade signal when the market pattern is uncertain. Numeric data must be discretized before rough set analysis because the rough set approach accepts only categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four data discretization methods in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples fall into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or expert interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first derives categorical values by naïve scaling of the data and then finds optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various discretization methods affect trading performance under rough set analysis. In this study, we compare stock market timing models that use rough set analysis with various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and on their status within the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable for the validation sample. Moreover, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
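
Below is a minimal Python sketch of equal frequency scaling, one of the four discretization methods described above; the function names and the synthetic indicator series are illustrative assumptions, not the authors' implementation.

```python
# Equal frequency scaling: choose cuts so that roughly the same number of
# samples fall into each interval, then map numeric values to interval codes.
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Return cut points placing ~equal sample counts in each interval."""
    quantiles = np.linspace(0, 1, n_intervals + 1)[1:-1]
    return np.quantile(values, quantiles)

def discretize(values, cuts):
    """Map each numeric value to the index of its interval (a categorical code)."""
    return np.digitize(values, cuts)

# Hypothetical example: discretize one technical indicator into 3 categories
# over 660 trading days, matching the sample size reported in the study.
indicator = np.random.default_rng(0).normal(size=660)
cuts = equal_frequency_cuts(indicator, 3)
codes = discretize(indicator, cuts)   # values in {0, 1, 2}, usable by rough sets
```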

Establishment of the Suitability Class in Ginseng Cultivated Lands (인삼 재배 적지 기준 설정 연구)

  • Hyeon, Geun-Soo;Kim, Seong-Min;Song, Kwan-Cheol;Yeon, Byeong-Yeol;Hyun, Dong-Yun
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.42 no.6
    • /
    • pp.430-438
    • /
    • 2009
  • An attempt was made to establish land suitability classes for the cultivation of ginseng (Panax ginseng C. A. Meyer). The relationships between various soil characteristics and ginseng yields were investigated on 450 ginseng fields (150 paddy sites and 300 upland sites) across Kangwon, Kyunggi, Chungbug, Chungnam, Jonbug, and Kyungbug Provinces, where ginseng is widely cultivated. In the paddy fields, the soil property most influential on ginseng yields was found to be the drainage class; the texture of the surface soil and the available soil depth affected yields to some extent, whereas topography, slope, and gravel content did not. In the uplands, surface soil texture was most influential, while topography, slope, and the occurrence depth of the hard-pan were least influential on crop performance. Using multiple regression in SAS, the contributions of soil morphological and physical properties (topography, surface soil texture, drainage class, slope, available soil depth, gravel content, and depth of hard-pan occurrence) to the suitability of land for ginseng cultivation were analyzed. Based on these results, and by summing the suitability indices, land suitability classes for ginseng cultivation were proposed. In addition, taking weather conditions into consideration, land suitability for ginseng cultivation was established separately for paddy fields and uplands. As an example, maps showing the distribution of suitable land for ginseng cultivation were drawn for Eumsung County, Chungbug Province, using the land suitability classes obtained in this study together with the soil map, climate map, and GIS information. Applying this suitability information to lands currently under ginseng cultivation showed that 74.0% of the paddy fields and 88.3% of the uplands are "highly suitable" or "suitable".
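
The following Python sketch illustrates the kind of regression-based weighting described above (the authors used SAS); the coded property values and coefficients are made-up assumptions, not the paper's fitted model.

```python
# Multiple regression of ginseng yield on coded soil properties; the fitted
# coefficients indicate each property's contribution to a suitability index.
import numpy as np

rng = np.random.default_rng(1)
# Each row: hypothetical coded properties for one of 450 fields:
# [topography, surface texture, drainage class, slope, available depth,
#  gravel content, hard-pan depth]
X = rng.integers(1, 5, size=(450, 7)).astype(float)
true_w = np.array([0.1, 0.8, 1.2, 0.1, 0.5, 0.2, 0.1])   # assumed weights
y = X @ true_w + rng.normal(0, 0.5, 450)                 # synthetic yields

# Ordinary least squares with an intercept column
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Suitability index for a new field = weighted sum of its property codes
new_field = np.array([1, 2, 3, 4, 1, 3, 2, 2], dtype=float)  # leading 1 = intercept
suitability_index = new_field @ beta
```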

Adsorption and Transfer of Trace Elements in Repellent Soils (토양 소수성에 따른 미량원소의 흡착 및 이동)

  • Choi, Jun-Yong;Lee, Sang-Soo;Ok, Yong-Sik;Chun, So-Ul;Joo, Young-Kyoo
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.2
    • /
    • pp.204-208
    • /
    • 2012
  • Water repellency, which affects infiltration, evaporation, erosion, and other water transfer mechanisms in soil, has been observed under several natural conditions. It is thought to be caused by hydrophobic organic compounds present as coatings on soil particles or as interstitial matter between them. This study was conducted to evaluate the characteristics of water repellent soil and the transport of trace elements within it. The capillary height of the water repellent soil was measured, and batch and column studies were carried out to identify the sorption and transport mechanisms of trace elements such as $Cu^{2+}$, $Mn^{2+}$, $Fe^{2+}$, $Zn^{2+}$, and $Mo^{5+}$. A difference in sorption capacity between common and repellent soils was observed, depending on the degree of repellency. In the column study, the desorption of trace elements and the spatial concentration distribution as a function of time were evaluated. The capillary height followed the repellency order 0% > 15% > 40% > 70% > 100%, and no water was absorbed in soils with repellency greater than 70%. Among the trace elements, $Fe^{2+}$ and $Mo^{5+}$ showed higher sorption capacity in the repellent soil than in the non-repellent soil, and the sorption of $Fe^{2+}$ followed the repellency order 40% > 15% > 0%. $Mo^{5+}$ initially showed a similar sorption tendency in the soils with 0%, 15%, and 40% repellency; over time, however, higher desorption was observed in the repellent soils than in the non-repellent soil.
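
As a small illustration of the batch-sorption measurements mentioned above, the sketch below computes the standard sorbed amount per gram of soil; all concentrations, volumes, and masses are hypothetical examples, not the study's data.

```python
# Standard batch-sorption calculation: q_e = (C0 - Ce) * V / m,
# the mass of trace element removed from solution per gram of soil.
def sorbed_amount(c0_mg_L, ce_mg_L, volume_L, soil_mass_g):
    """q_e (mg/g) from initial/equilibrium concentrations, volume, soil mass."""
    return (c0_mg_L - ce_mg_L) * volume_L / soil_mass_g

# Hypothetical Fe2+ batch test in repellent vs. common soil
q_repellent = sorbed_amount(c0_mg_L=10.0, ce_mg_L=2.5, volume_L=0.05, soil_mass_g=5.0)
q_common = sorbed_amount(c0_mg_L=10.0, ce_mg_L=4.0, volume_L=0.05, soil_mass_g=5.0)
print(q_repellent, q_common)  # a higher q_e indicates greater sorption capacity
```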

Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.1-18
    • /
    • 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data has been generated, shared, and stored. Nowadays, various types of unstructured data such as images, sound, video, and text are distributed through Web media, and many recent attempts have been made to discover new value by analyzing them. Among these types of unstructured data, text is recognized as the most representative medium through which users express and share their opinions on the Web, so demand for new insights through text analysis is steadily increasing. Accordingly, text mining is increasingly being used for different purposes in various fields. In particular, issue tracking is widely studied not only in academia but also in industry because it can extract various issues from text such as news articles and SNS (Social Network Services) posts and analyze the trends of these issues. Conventionally, issue tracking identifies major issues sustained over a long period through topic modeling and analyzes the detailed distribution of documents involved in each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the entire tracking period, it cannot represent the dynamic mutation process by which detailed issues are created, merged, divided, and deleted between periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general issue keywords such as "North Korea" in an analysis over a long period. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking; note that detailed keywords are preferable to general ones because they can serve as clues for actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each detailed period and generated an issue flow diagram based on the similarity of issues between consecutive periods. The issue transition pattern among categories was analyzed using the category information of each document. We then applied the proposed methodology to a real case of 53,739 news articles and derived an issue flow diagram from them, for which we propose the following application scenarios, presented in the experiment section. First, we can identify an issue that appears actively during a certain period and promptly disappears in the next. Second, the preceding and following issues of a particular issue can easily be discovered from the issue flow diagram, which implies that our methodology can reveal associations between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions was discovered through category analysis: a pair of mutually similar categories induces two-way transitions, whereas one-way transitions indicate that issues in a certain category tend to be influenced by issues in another category. For practical application of the proposed methodology, high-quality word and stop-word dictionaries need to be constructed. In addition, not only the number of documents but also additional meta-information such as read counts, posting times, and comments should be analyzed. A rigorous performance evaluation or validation of the proposed methodology remains for future work.
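
A minimal Python sketch of the period-linking step is given below: issues (keyword-weight vectors) are extracted per period and linked across consecutive periods when their similarity exceeds a threshold. The cosine similarity and threshold value are assumptions for illustration; the abstract does not specify the similarity function used.

```python
# Link issues between consecutive periods by keyword-vector similarity,
# producing the edges of an issue flow diagram.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def link_issues(topics_t, topics_t1, threshold=0.3):
    """Return (i, j) pairs linking issue i in period t to issue j in period t+1."""
    links = []
    for i, u in enumerate(topics_t):
        for j, v in enumerate(topics_t1):
            if cosine(u, v) >= threshold:
                links.append((i, j))
    return links

# Hypothetical example: 3 issues per period over a shared 50-keyword vocabulary
rng = np.random.default_rng(3)
period_t = rng.random((3, 50))    # keyword-weight vectors for period t issues
period_t1 = rng.random((3, 50))   # vectors for period t+1 issues
edges = link_issues(period_t, period_t1)
# An issue with no outgoing edge "disappears"; multiple incoming edges mark a
# merge, and multiple outgoing edges mark a division.
```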

Optimization of fractionation efficiency (FE) and throughput (TP) in a large scale splitter less full-feed depletion SPLITT fractionation (Large scale FFD-SF) (대용량 splitter less full-feed depletion SPLITT 분획법 (Large scale FFD-SF)에서의 분획효율(FE)및 시료처리량(TP)의 최적화)

  • Eum, Chul Hun;Noh, Ahrahm;Choi, Jaeyeong;Yoo, Yeongsuk;Kim, Woon Jung;Lee, Seungho
    • Analytical Science and Technology
    • /
    • v.28 no.6
    • /
    • pp.453-459
    • /
    • 2015
  • Split-flow thin cell fractionation (SPLITT fractionation, SF) is a particle separation technique that allows continuous (and thus preparative-scale) separation into two subpopulations based on particle size or density. SF has two basic performance parameters. One is the throughput (TP), defined as the amount of sample that can be processed in a unit time. The other is the fractionation efficiency (FE), defined as the number percentage of particles having the size predicted by theory. The full-feed depletion mode (FFD-SF) has only one inlet for the sample feed, and the channel is equipped with a flow stream splitter only at the outlet. In the conventional FFD mode, it was difficult to enlarge the channel because of the splitter inside it, so a large-scale splitter-less FFD-SF channel was used to increase TP by increasing the channel scale. In this study, an FFD-SF channel with no flow stream splitters ('splitter-less') was developed for large-scale fractionation and tested for optimum TP and FE by varying the sample concentration and the flow rates at the inlet and outlet of the channel. Polyurethane (PU) latex beads with two different size distributions (about 3~7 µm and about 2~30 µm) were used for the test. The sample concentration was varied from 0.2 to 0.8% (wt/vol), and the channel flow rate was varied among 70, 100, 120, and 160 mL/min. The fractionated particles were monitored by optical microscopy (OM), and the sample recovery was determined by collecting the particles on a 0.1 µm membrane filter. Accumulation of relatively large micron-sized particles in the channel could be prevented by feeding carrier liquid. It was found that, in order to achieve effective TP, the sample concentration should be higher than 0.4%.
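
The sketch below restates the two performance parameters defined above as simple Python helpers; the feed figures and particle counts are hypothetical, since the abstract reports only the experimental ranges.

```python
# TP: sample processed per unit time; FE: number % of particles whose size
# matches the theoretical prediction (counted, e.g., from OM images).
def throughput(sample_mass_mg, run_time_min):
    """TP in mg/min."""
    return sample_mass_mg / run_time_min

def fractionation_efficiency(n_correct, n_total):
    """FE as number % of correctly sized particles."""
    return 100.0 * n_correct / n_total

# Hypothetical run: 0.4% (wt/vol) feed at 120 mL/min for 10 min
feed_conc_mg_mL = 4.0                              # 0.4% wt/vol = 4 mg/mL
tp = throughput(feed_conc_mg_mL * 120 * 10, 10)    # -> 480 mg/min
fe = fractionation_efficiency(n_correct=930, n_total=1000)  # -> 93.0%
print(tp, fe)
```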

The Waveform and Spectrum analysis of Tursiops truncatus (Bottlenose Dolphin) Sonar Signals on the Show at the Aquarium (쇼 학습시 병코돌고래 명음의 주파수 스펙트럼 분석)

  • 윤분도;신형일;이장욱;황두진;박태건
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.36 no.2
    • /
    • pp.117-125
    • /
    • 2000
  • The waveform and spectrum analysis of Tursiops truncatus (bottlenose dolphin) sonar signals was carried out on the basis of data collected during the dolphin show at the aquarium of Cheju Pacificland from October 1998 to February 1999. When greeting the audience, the pulse width, peak frequency, and spectrum level of the five dolphins' sonar signals were 3.0 ms, 4.54 kHz, and 125.6 dB, respectively. During warm-up just before the show, the figures were 5.0 ms, 5.24 kHz, and 127.0 dB. While the dolphins performed with singing, the peak frequency ranged from 3.28 to 5.78 kHz and the spectrum level from 137.0 to 142.0 dB. With ring play, the pulse width, peak frequency, and spectrum level were 7.0 ms, 2.54 kHz, and 135.9 dB; with ball play, the values were 9.0 ms, 2.78 kHz, and 135.2 dB. The values determined during jumps out of the water were a pulse width of 2.0 ms, a peak frequency of 4.50 kHz, and a spectrum level of 126.8 dB. When the dolphins responded to the trainer's instructions, the values were 2.25 ms, 2.48 kHz, and 148.7 dB, and when greeting the audience, the peak frequency and spectrum level were 5.84 kHz and 122.5 dB. During underwater swimming, the peak frequency and spectrum level were 10.10 kHz and 126.8 dB. Close consistencies were found in pulse width, frequency distribution, and spectrum level between whistle sounds and the dolphins' sonar signals. Accordingly, dolphins can be trained easily using whistle sounds, based on the results obtained from the waveform and spectrum of their sonar signals.
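
For context on the measurements above, here is a hedged Python sketch of extracting a peak frequency and a relative spectrum level from a recorded pulse via FFT; the synthetic tone burst, sampling rate, and level normalization are assumptions, since an absolute level in dB would require the hydrophone calibration, which the abstract does not give.

```python
# FFT-based estimation of peak frequency and a relative spectrum level
# from a short recorded pulse.
import numpy as np

fs = 48_000                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.005, 1 / fs)               # a 5 ms pulse
# Stand-in for a recorded sonar pulse: a windowed ~5.24 kHz tone burst
signal = np.sin(2 * np.pi * 5240 * t) * np.hanning(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)

peak_frequency_hz = freqs[np.argmax(spectrum)]
# Relative level in dB; an absolute level (dB re 1 uPa) needs calibration data.
spectrum_level_db = 20 * np.log10(spectrum.max() / spectrum.mean())
```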


A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programing (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.253-263
    • /
    • 2013
  • The Frequency Scanning Interferometry (FSI) system, one of the most promising optical surface measurement techniques, generally delivers superior optical performance compared with other 3-dimensional measuring methods, since its hardware remains fixed in operation and only the light frequency is scanned over a specific spectral band, without vertical scanning of the target surface or the objective lens. An FSI system collects a set of interference fringe images while changing the frequency of the light source, transforms the intensity data of the acquired images into frequency information, and calculates the height profile of target objects by frequency analysis based on the Fast Fourier Transform (FFT). However, it still suffers from optical noise on target surfaces and from relatively long processing times due to the number of images acquired in the frequency scanning phase. To address these problems: 1) a Polarization-based Frequency Scanning Interferometry (PFSI) system is proposed for robustness to optical noise. It consists of a tunable laser as the light source, a ${\lambda}/4$ plate in front of the reference mirror, a ${\lambda}/4$ plate in front of the target object, a polarizing beam splitter (PBS), a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a ${\lambda}/2$ plate between the PBS and the polarizer of the light source. With the proposed system, the problem of low-contrast fringe images can be solved through the polarization technique, and the light distribution between the object beam and the reference beam can be controlled. 2) A signal processing acceleration method is proposed for PFSI based on a parallel processing architecture consisting of parallel hardware and software, namely the Graphics Processing Unit (GPU) and the Compute Unified Device Architecture (CUDA). As a result, the processing time reaches the takt-time level of real-time processing. Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the results show the effectiveness of the proposed system and method.
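
Below is a minimal sketch of the per-pixel FFT step described above, written with NumPy rather than the paper's GPU/CUDA implementation; the stack size, frequency step, and peak-to-height conversion are illustrative assumptions.

```python
# For each pixel, the fringe intensity varies sinusoidally with the scanned
# optical frequency; the dominant FFT bin encodes the optical path difference.
import numpy as np

c = 3e8                                  # speed of light (m/s)
n_frames, h, w = 256, 4, 4               # small stack for illustration
dnu = 100e9 / n_frames                   # frequency step per frame (Hz), assumed

rng = np.random.default_rng(4)
stack = rng.random((n_frames, h, w))     # stand-in for acquired fringe images

# FFT along the frequency-scan axis for every pixel at once
spectra = np.abs(np.fft.rfft(stack, axis=0))
spectra[0] = 0                           # discard the DC term
peak_bin = np.argmax(spectra, axis=0)    # (h, w) map of dominant fringe rates

# Bin k means k fringe cycles over the total scan span, i.e. a delay of
# k / (n_frames * dnu) seconds; halve the round-trip path for height.
delay = peak_bin / (n_frames * dnu)
height_map = c * delay / 2
```

In the actual system, this per-pixel transform is what the GPU/CUDA stage parallelizes to reach the reported takt-time performance.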

Business Strategies for Korean Private Security-Guard Companies Utilizing Resource-based Theory and AHP Method (자원기반 이론과 AHP 방법을 활용한 민간 경호경비 기업의 전략 연구)

  • Kim, Heung-Ki;Lee, Jong-Won
    • Korean Security Journal
    • /
    • no.36
    • /
    • pp.177-200
    • /
    • 2013
  • As we enter a highly industrialized society with a widening gap between rich and poor, demand for security services has grown explosively. Along with this quantitative expansion of security services, people have also come to require more sophisticated and diversified services, so the market outlook for the private security services industry is positive. However, Korea's private security services companies are having difficulty finding a direction to capture this new market opportunity because of their small size and lack of strategic management thinking. Therefore, we offer a direction of development for the private security services industry using a management strategy theory and the Analytic Hierarchy Process (AHP), a structured decision-making method. The resource-based theory is one of the important theories of management strategy. It explains that a company's overall performance is primarily determined by its competitive resources; using this theory, we can analyze a company's unique resources and core competencies and set a strategic direction for the company accordingly. The usefulness and validity of this theory have been demonstrated through frequent empirical verification since the 1990s. Based on this theory, we outlined a set of basic procedures for establishing a management strategy for private security services companies. We also used the AHP method to identify competitive resources, core competencies, and strategies of private security services companies in contrast with public organizations. The AHP is a technique that supports decision making by quantifying experts' knowledge of unstructured problems; it is a verified method that has been used in corporate management decision making as well as in various academic studies. To perform this method, we gathered data from 11 experts from the academic, industrial, and research sectors and derived the distinctive resources, competencies, and strategic direction of private security services companies vis-a-vis public organizations. Through this process, we concluded that private security services companies generally hold intangible resources as their distinctive resources compared with public organizations. Among those intangible resources, relational resources, customer information, and technologies were found to be important, whereas tangible resources such as equipment, funds, and distribution channels are relatively scarce. We also identified sales and marketing and new product development as core competencies. Considering these resources and competencies, we chose a concentration strategy focusing on a particular market segment as the strategic direction. A concentration strategy is the right fit for smaller companies, allowing them to focus all of their efforts on target customers in a single segment. Thus, private security services companies face important tasks such as developing a new market and appropriate products for that segment and continuing marketing activities to manage their customers. Additionally, continuous recruitment is required to make effective use of human resources and thereby strengthen their marketing competency in the long term.
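
As an illustration of the AHP step described above, the Python sketch below derives priority weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; the 3x3 judgment matrix is a made-up example, not the 11 experts' actual data.

```python
# AHP: priority weights from a reciprocal pairwise comparison matrix,
# with the standard consistency check (CR < 0.1 is the usual acceptance rule).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],      # e.g., relational resources vs. others
              [1/3, 1.0, 2.0],      # customer information
              [1/5, 1/2, 1.0]])     # technologies

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()            # priority weights, summing to 1

lambda_max = eigvals[k].real
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)     # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n] # Saaty's random index
cr = ci / ri                        # consistency ratio; accept if < 0.1
```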


Design and Implementation of Game Server using the Efficient Load Balancing Technology based on CPU Utilization (게임서버의 CPU 사용율 기반 효율적인 부하균등화 기술의 설계 및 구현)

  • Myung, Won-Shig;Han, Jun-Tak
    • Journal of Korea Game Society
    • /
    • v.4 no.4
    • /
    • pp.11-18
    • /
    • 2004
  • On-line games in the past were played by only two people exchanging data over one-to-one connections, whereas recent ones (e.g., MMORPGs: Massively Multi-player Online Role-playing Games) enable tens of thousands of people to be connected simultaneously. Korea in particular has established an excellent network infrastructure that is hard to find anywhere else in the world, with almost every household having high-speed Internet access; this was made possible in part by a high population density that accelerated the build-out of the Internet infrastructure. However, the rapid increase in on-line gaming can produce surging traffic that exceeds the limited Internet communication capacity, making game connections unstable or causing server failures. Expanding the servers could solve this problem, but this measure is very costly. To deal with this problem, the present study proposes a load distribution technology that connects, in a local cluster, the game servers divided by the contents used in each on-line game, reduces the load on specific servers using a load balancer, and enhances server performance for efficient operation. In this paper, a cluster system is proposed in which each game server provides a different content service and loads are distributed efficiently using game server resource information such as CPU utilization. Game servers hosting different contents are mutually connected and managed with a network file system to maintain the information consistency required to support resource information updates, deletions, and additions. Simulation studies show that our method performs better than other traditional methods: in terms of response time, it shows about 12% and 10% shorter latency than RR (Round Robin) and LC (Least Connection), respectively.
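
A minimal Python sketch of the selection rule implied above follows: a new connection is routed to the game server reporting the lowest CPU utilization, in contrast to Round Robin or Least Connection. The server names, utilization figures, and data structure are illustrative assumptions.

```python
# Route each new client to the least-loaded server by CPU utilization.
from dataclasses import dataclass

@dataclass
class GameServer:
    name: str                # content this server hosts
    cpu_utilization: float   # 0.0-1.0, refreshed via the shared file system

def pick_server(servers: list[GameServer]) -> GameServer:
    """Return the server with the lowest reported CPU utilization."""
    return min(servers, key=lambda s: s.cpu_utilization)

cluster = [GameServer("dungeon", 0.72),
           GameServer("market", 0.35),
           GameServer("arena", 0.58)]
target = pick_server(cluster)    # -> the "market" server
```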


Development of Preliminary Quality Assurance Software for GafChromic® EBT2 Film Dosimetry (GafChromic® EBT2 Film Dosimetry를 위한 품질 관리용 초기 프로그램 개발)

  • Park, Ji-Yeon;Lee, Jeong-Woo;Choi, Kyoung-Sik;Hong, Semie;Park, Byung-Moon;Bae, Yong-Ki;Jung, Won-Gyun;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.21 no.1
    • /
    • pp.113-119
    • /
    • 2010
  • Software for GafChromic EBT2 film dosimetry was developed in this study. The software provides film calibration functions based on color channels, categorized into red, green, blue, and gray. Corrections for the light scattering of a flat-bed scanner and for thickness differences of the active layer can be evaluated. Dosimetric results from EBT2 films can be compared with those from the ECLIPSE treatment planning system or the MatriXX two-dimensional ionization chamber array. Dose verification using EBT2 films is implemented through the following procedures: file import, noise filtering, background correction and active layer correction, dose calculation, and evaluation. Relative and absolute background corrections can be applied selectively, and the calibration results and the fitting equation for the sensitometric curve are exported to files. After the two types of dose matrices are aligned through interpolation of the spatial pixel spacing, interactive translation, and rotation, profiles and isodose curves are compared. In addition, the gamma index and gamma histogram are analyzed according to the chosen distance-to-agreement and dose-difference criteria. Performance was evaluated by dose verification in a $60^{\circ}$-enhanced dynamic wedged field and in intensity-modulated (IM) beams for prostate cancer; all pass ratios for the two types of tests exceeded 99% when evaluated with a gamma histogram under 3 mm and 3% criteria. The software was developed for use in routine periodic quality assurance and complex IM beam verification, and it can also serve as a dedicated radiochromic film software tool for analyzing dose distributions.
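
To make the evaluation criterion concrete, the sketch below computes a simplified 1-D gamma index with the 3 mm / 3% criteria used above; real film QA compares 2-D dose matrices, and the profiles here are synthetic stand-ins.

```python
# 1-D gamma index: for each reference point, the minimum over search positions
# of sqrt((distance/DTA)^2 + (dose difference/DD)^2); gamma <= 1 passes.
import numpy as np

def gamma_1d(ref, evl, spacing_mm, dta_mm=3.0, dd_pct=3.0):
    """Per-point gamma for two equally spaced dose profiles."""
    dd = dd_pct / 100.0 * ref.max()          # global dose-difference criterion
    pos = np.arange(ref.size) * spacing_mm
    gam = np.empty(ref.size)
    for i in range(ref.size):
        dr = (pos - pos[i]) / dta_mm          # normalized distances
        dD = (evl - ref[i]) / dd              # normalized dose differences
        gam[i] = np.sqrt(dr**2 + dD**2).min()
    return gam

ref = np.linspace(0.2, 1.0, 50)                            # planned dose profile
evl = ref + np.random.default_rng(5).normal(0, 0.01, 50)   # measured (film) profile
pass_ratio = 100.0 * (gamma_1d(ref, evl, spacing_mm=1.0) <= 1).mean()
```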