• Title/Summary/Keyword: 그래프 (graph)

Search results: 3,874

Rare-Earth Metal Complex-Functionalized Mesoporous Silica for a Potential UV Sensor (잠재적인 UV 센서를 위한 희토류 금속착물이 기능화된 메조다공성 실리카)

  • Sung Soo Park;Mi-Ra Kim;Weontae Oh;Yedam Kim;Yeeun Lee;Youngeon Lee;Kangbeom Ha;Dojun Jung
    • Journal of Adhesion and Interface / v.24 no.4 / pp.136-142 / 2023
  • In this study, TEOS was used as a silica source and a triblock copolymer (P123) as a template to produce mesoporous silica with a well-ordered hexagonal mesopore array through a self-assembly method and a hydrothermal process under acidic conditions (surfactant-extracted SBA-15). Surfactant-extracted SBA-15 showed a short-rod particle shape with a size of approximately 980 nm; its surface area and pore diameter were 730 m²g⁻¹ and 70.8 Å, respectively. Meanwhile, an aminosilane (3-aminopropyltriethoxysilane, APTES) was grafted into the mesopores using a post-synthesis method. The aminosilane-modified mesoporous silica (APTES-SBA-15) retained a well-ordered pore structure (p6mm) and the short-rod particle shape; its surface area and pore diameter decreased to 350 m²g⁻¹ and 60.7 Å, respectively. The APTES-modified mesoporous silica was then treated with a solution of rare-earth metal ions (Eu³⁺, Tb³⁺) to synthesize mesoporous silica materials in which rare-earth metal complexes were introduced into the mesopores (Eu/APTES-SBA-15, Tb/APTES-SBA-15). These materials exhibited characteristic photoluminescence spectra under excitation at λex = 250 nm: the ⁵D₄→⁷F₅ (543.5 nm), ⁵D₄→⁷F₄ (583.5 nm), and ⁵D₄→⁷F₃ (620.2 nm) transitions for Tb/APTES-SBA-15, and the ⁵D₀→⁷F₀ (577.7 nm), ⁵D₀→⁷F₁ (592.0 nm), ⁵D₀→⁷F₂ (614.9 nm), ⁵D₀→⁷F₃ (650.3 nm), and ⁵D₀→⁷F₄ (698.5 nm) transitions for Eu/APTES-SBA-15.

Optimization and Applicability Verification of Simultaneous Chlorogenic acid and Caffeine Analysis in Health Functional Foods using HPLC-UVD (HPLC-UVD를 이용한 건강기능식품에서 클로로겐산과 카페인 동시분석법 최적화 및 적용성 검증)

  • Hee-Sun Jeong;Se-Yun Lee;Kyu-Heon Kim;Mi-Young Lee;Jung-Ho Choi;Jeong-Sun Ahn;Jae-Myoung Oh;Kwang-Il Kwon;Hye-Young Lee
    • Journal of Food Hygiene and Safety / v.39 no.2 / pp.61-71 / 2024
  • In this study, we analyzed chlorogenic acid indicator components in preparation for the additional listing of green coffee bean extract in the Health Functional Food Code and optimized a simultaneous analysis that also covers caffeine. We extracted chlorogenic acid and caffeine using 30% methanol, phosphoric acid solution, and acetonitrile containing phosphoric acid, and analyzed them at 330 and 280 nm, respectively, using liquid chromatography. Our validation results yielded a correlation coefficient (R²) of at least 0.999 within the linear quantitative range. The detection limits of chlorogenic acid and caffeine were 0.5 and 0.2 ㎍/mL, and the quantification limits were 1.4 and 0.4 ㎍/mL, respectively. We confirmed that the precision and accuracy results were suitable according to the AOAC validation guidelines. Finally, we developed a simultaneous chlorogenic acid and caffeine analysis approach and confirmed that it could quantify both analytes by examining the applicability of each formulation through prototypes and distributed products. In conclusion, the results of this study demonstrate that the standardized analysis can be expected to increase the reliability of quality control for chlorogenic acid-containing health functional foods.
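Detection and quantification limits like those reported above are conventionally derived from the calibration curve. As a minimal sketch, the standard ICH Q2(R1)-style formulas LOD = 3.3σ/S and LOQ = 10σ/S can be computed as follows; the σ and slope values here are illustrative placeholders, not values from the study:

```python
# Hedged sketch: LOD/LOQ from a calibration line (ICH Q2(R1) convention).
# sigma = residual standard deviation of calibration responses,
# slope = slope of the calibration curve. Values below are illustrative only.

def lod_loq(sigma: float, slope: float) -> tuple:
    """Return (LOD, LOQ) in the concentration units of the calibration."""
    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    return lod, loq

# Illustrative numbers (not from the paper):
lod, loq = lod_loq(sigma=0.15, slope=1.0)
print(lod, loq)  # -> 0.495 1.5 (ug/mL, for these made-up inputs)
```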

Factors influencing the axes of anterior teeth during SWA on masse sliding retraction with orthodontic mini-implant anchorage: a finite element study (교정용 미니 임플랜트 고정원과 SWA on masse sliding retraction 시 전치부 치축 조절 요인에 관한 유한요소해석)

  • Jeong, Hye-Sim;Moon, Yoon-Shik;Cho, Young-Soo;Lim, Seung-Min;Sung, Sang-Jin
    • The Korean Journal of Orthodontics / v.36 no.5 / pp.339-348 / 2006
  • Objective: With the development of skeletal anchorage systems, orthodontic mini-implant (OMI)-assisted on masse sliding retraction has become part of general orthodontic treatment. However, compared with the emphasis on successful anchorage preparation, control of the anterior teeth axis has not been emphasized enough. Methods: A 3-D finite element Base model of the maxillary dental arch and a Lingual tipping model with lingually inclined anterior teeth were constructed. To evaluate factors influencing the axis of the anterior teeth when an OMI was used as anchorage, models were simulated with 2 mm or 5 mm retraction hooks and/or with 4 mm of compensating curve (CC) added to the main archwire. The stress distribution on the roots and a 25,000-times-enlarged axis graph were evaluated. Results: The intrusive component of the retraction force, directed postero-superiorly from the 2 mm hook, did not reduce the lingual tipping of the anterior teeth. When the hook height was increased to 5 mm, the lateral incisor showed crown-labial and root-lingual torque, and uncontrolled tipping of the canine increased. Adding 4 mm of CC to the main archwire also induced crown-labial and root-lingual torque of the lateral incisor, but uncontrolled tipping of the canine decreased. The Lingual tipping model showed very similar results to the Base model. Conclusion: The results of this study showed that the height of the hook and the compensating curve on the main archwire can influence the axis of the anterior teeth. These data can be used as guidelines for clinical application.

The Plan of Dose Reduction by Measuring and Evaluating Occupationally Exposed Dose in vivo Tests of Nuclear Medicine (핵의학 체내검사 업무 단계 별 피폭선량 측정 및 분석을 통한 피폭선량 감소 방안)

  • Kil, Sang-Hyeong;Lim, Yeong-Hyeon;Park, Kwang-Youl;Jo, Kyung-Nam;Kim, Jung-Hun;Oh, Ji-Eun;Lee, Sang-Hyup;Lee, Su-Jung;Jun, Ji-Tak;Jung, Eui-Ho
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.2 / pp.26-32 / 2010
  • Purpose: To find ways to minimize the occupational exposure dose of in vivo nuclear medicine workers at each working stage, within a working environment that does not compromise examination quality or performance efficiency. Materials and Methods: The in vivo nuclear medicine workflow using radioisotopes consists of radioisotope distribution, radioisotope injection (⁹⁹ᵐTc, ¹⁸F-FDG), scanning, and guiding patients. Exposure doses at each working stage were measured and evaluated with a RadEye-G10 gamma survey meter (Thermo Scientific). Before injection, patients were briefed on the examination and given precautions in order to reduce the contact time with them; workers were also educated about external exposure and wore protective devices. Exposure doses during injection were measured with and without protective devices, and with patient education given before versus after the injection. Total exposure doses were graphed in Microsoft Office Excel 2007, and the differences were analyzed with the Wilcoxon signed-rank test in SPSS (Statistical Package for the Social Sciences) 12.0; p<0.01 was considered statistically significant. Results: Wearing protective devices during injection of 20 mCi of ⁹⁹ᵐTc-DPD reduced the exposure dose by 88%, a statistically significant difference, whereas the 26% reduction from wearing protective devices during injection of 10 mCi of ¹⁸F-FDG was not statistically significant. Educating patients before injecting 20 mCi of ⁹⁹ᵐTc-DPD reduced the exposure dose by 63% compared with educating them after the injection, and educating before injecting 10 mCi of ¹⁸F-FDG reduced it by 52%; both differences were statistically significant. Conclusion: For examinations using ⁹⁹ᵐTc, wearing protective devices is more effective at reducing exposure dose; for ¹⁸F-FDG, reducing the contact time with patients is more effective. Tailoring radiation protection to the characteristics of each radioisotope can therefore make the shielding of workers more effective and active.
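The paired before/after dose comparisons above rely on a Wilcoxon signed-rank test with a p<0.01 threshold. A minimal sketch of such a comparison in Python with SciPy (rather than the SPSS used in the study); the paired dose readings below are invented placeholders, not the study's data:

```python
# Hedged sketch: paired Wilcoxon signed-rank test, as used for the dose
# comparisons in the study (SPSS there, SciPy here).
from scipy.stats import wilcoxon

# Invented paired dose readings (same workers, with vs. without shielding):
dose_without_shield = [5.2, 4.8, 6.1, 5.5, 4.9, 5.8, 6.0, 5.1]
dose_with_shield    = [0.7, 0.6, 0.9, 0.8, 0.5, 0.7, 1.0, 0.6]

stat, p = wilcoxon(dose_without_shield, dose_with_shield)
print(f"W={stat}, p={p:.4f}")
if p < 0.01:  # significance threshold used in the study
    print("difference is statistically significant")
```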


Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.117-127 / 2012
  • Due to the recent expansion of Web 2.0-based services, along with the widespread adoption of smartphones, online social network services are becoming popular among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand human relationships. In social network services, each relation between users is represented by a graph consisting of nodes and links. As the number of users of online social network services increases rapidly, SNSs are actively utilized in enterprise marketing, analysis of social phenomena, and so on. Social network analysis (SNA) is the systematic way to analyze social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs and is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as degree of intimacy, intensity of connection, and classification into groups. Ever since social networking services (SNS) drew the attention of millions of users, numerous studies have been conducted to analyze their user relationships and messages. Typical representative SNA methods are degree centrality, betweenness centrality, and closeness centrality. In degree centrality analysis, the shortest path between nodes is not considered; however, it is a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation time was not too expensive since the social networks were small. Unfortunately, most SNA methods require significant time to process the relevant data, which makes it difficult to apply the ever-increasing SNS data in social network studies.
For instance, if the number of nodes in an online social network is n, the maximum number of links in the network is n(n-1)/2; this means the analysis is expensive, since with 10,000 nodes the number of links can reach 49,995,000. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Through this shortest-path-finding method, we show how efficient our proposed approach can be by conducting betweenness centrality and closeness centrality analyses, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first search and a preprocessing step to reduce computation time and rapidly find shortest paths in a huge online social network. Best-first search finds the shortest path heuristically, generalizing human experience. As a large number of links is shared by only a few nodes in online social networks, most nodes have relatively few connections; as a result, a node with many connections functions as a hub. When searching for a particular node, looking first at users with numerous links, instead of searching all users indiscriminately, has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is a set of vertices and E is a set of links between two different nodes. With this heuristic evaluation function, the worst case can occur when the target node sits at the bottom of a skewed tree; to handle such target nodes, a preprocessing step is conducted. We then find the shortest path between two nodes in the social network efficiently and analyze the network. To verify the proposed method, we crawled data on 160,000 people online and constructed a social network.
We then compared our method with the previous best-first-search and breadth-first-search methods in terms of search and analysis time. The suggested method takes 240 seconds to search nodes where the breadth-first-search-based method takes 1,781 seconds (7.4 times faster). Moreover, for social network analysis, the suggested method is 6.8 times and 1.8 times faster for betweenness centrality analysis and closeness centrality analysis, respectively. The method proposed in this paper shows the possibility of analyzing a large social network with better time performance. As a result, our method would improve the efficiency of social network analysis, making it particularly useful in studying social trends or phenomena.
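The degree heuristic described above, expanding high-degree (hub) neighbors first when searching for a target user, can be sketched as a greedy best-first search. The toy graph below is an illustrative assumption, not the crawled dataset, and the sketch omits the paper's preprocessing step:

```python
# Hedged sketch of a degree-heuristic best-first search: expand the
# highest-degree frontier node first, on a toy undirected social graph.
import heapq

def best_first_path(graph: dict, start, goal):
    """Greedy best-first search using node degree as the heuristic.
    Returns a path (list of nodes) or None; not guaranteed shortest."""
    frontier = [(-len(graph[start]), start, [start])]  # max-degree first
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                heapq.heappush(frontier, (-len(graph[nbr]), nbr, path + [nbr]))
    return None

# Toy social graph: 'hub' is connected to everyone, like an SNS hub user.
g = {
    'a': ['b', 'hub'], 'b': ['a', 'hub'], 'c': ['hub', 'd'],
    'd': ['c', 'hub'], 'hub': ['a', 'b', 'c', 'd'],
}
print(best_first_path(g, 'a', 'd'))  # -> ['a', 'hub', 'd']
```

Because the heuristic prefers the hub, the search reaches 'd' without exhaustively scanning low-degree users, which is the intuition the abstract describes.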

COMPARISON OF POLYMERIZATION SHRINKAGE AND STRAIN STRESS OF SEVERAL COMPOSITE RESINS USING STRAIN GAUGE (스트레인 게이지를 이용한 수종의 복합레진의 중합수축 및 수축응력의 비교)

  • Kim, Young-Kwang;Yoo, Seung-Hoon;Kim, Jong-Soo
    • Journal of the Korean Academy of Pediatric Dentistry / v.31 no.3 / pp.516-526 / 2004
  • Polymerization shrinkage of photoinitiated composite resin causes several clinical problems. The purpose of this study was to evaluate the shrinkage strain stress, linear polymerization shrinkage, compressive strength, and microhardness of recently developed composite resins. The composite resins were divided into four groups according to matrix content and filler type. Group I: Denfil™ (Vericom, Korea) with a conventional matrix; Group II: Charmfil® (Dentkist, Korea) with a microfiller and nanofiller mixture; Group III: Filtek™ Z250 (3M-ESPE, USA), with TEGDMA replaced by UDMA and Bis-EMA(6) in the matrix; and Group IV: Filtek™ Supreme (3M-ESPE, USA), using pure nanofiller. Acrylic molds were prepared, then filled and cured with a light gun. Strain gauges were attached to each sample and the leads were connected to a strainmeter, with which shrinkage strain stress and linear polymerization shrinkage were measured for 10 minutes. The data recorded at 1 minute and 10 minutes were analyzed statistically with a one-way ANOVA test. To evaluate the mechanical properties of the tested materials, compressive strength and microhardness tests were also performed. The results can be summarized as follows: 1. Filling materials in the acrylic molds showed initial temporary expansion in the early phase of polymerization, followed by contraction with a rapid increase in strain stress during the first minute that gradually decreased during the post-gel shrinkage phase. After 1 minute, there were no statistically significant differences in strain stress between the groups. At the 10-minute measurement, the highest strain stress was found in group IV, followed by groups III, I, and II (p>.05). In regression analysis of strain stress during the first minute, group III showed the smallest slope, followed by groups II, I, and IV. 2. In the linear polymerization shrinkage test, the composite resins in every group showed an initial increase of shrinkage velocity during the first minute, followed by a gradual decrease. After 1 minute, groups IV and III showed a statistically significant difference (p<.05). After 10 minutes, there were significant differences between group IV and groups I and III (p<.05) and between group II and group III (p<.05). In regression analysis of linear polymerization shrinkage during the first minute, group II showed the smallest slope, followed by groups IV, III, and I. 3. In the compressive strength test, group III showed the highest strength, followed by groups II, IV, and I; there were significant differences between group III and groups IV and I (p<.05). 4. In the microhardness test, upper surfaces showed higher values than lower surfaces in every group (p<.05).


A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia Pacific Journal of Information Systems / v.21 no.2 / pp.89-116 / 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher ranking to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are somewhat heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in an active or a passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS.
The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard to each class is more reasonable. This is similar to human evaluation, where different items are assigned specific weights, which are then summed to determine the weighted average. We can check for missing properties more easily with this approach than with predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collection. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections. In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering that in social media the popularity of a topic is temporary, recent data should carry more weight than old data.
We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and that can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms.
While the multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper is applicable to various domains, including social media, where time value is considered important.
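The mutual reinforcement between user expertise and document quality described above can be sketched as a HITS-style iteration on a user-document matrix. The matrix and the plain (unweighted) summations below are illustrative assumptions; the paper's actual framework uses property, time, and expertise weights:

```python
# Hedged sketch: mutual reinforcement between user expertise and document
# quality, iterated HITS-style on a small invented user-document matrix.
# M[u][d] = 1 if user u has document d in his/her collection.

def mutual_rank(M, iters=50):
    n_users, n_docs = len(M), len(M[0])
    expertise = [1.0] * n_users
    quality = [1.0] * n_docs
    for _ in range(iters):
        # document quality <- sum of expertise of the users holding it
        quality = [sum(M[u][d] * expertise[u] for u in range(n_users))
                   for d in range(n_docs)]
        # user expertise <- sum of quality of the documents they hold
        expertise = [sum(M[u][d] * quality[d] for d in range(n_docs))
                     for u in range(n_users)]
        # normalize to keep the scores bounded
        qn, en = sum(quality) or 1.0, sum(expertise) or 1.0
        quality = [q / qn for q in quality]
        expertise = [e / en for e in expertise]
    return expertise, quality

M = [[1, 1, 0],   # user 0 holds docs 0 and 1
     [1, 0, 0],   # user 1 holds doc 0
     [0, 1, 1]]   # user 2 holds docs 1 and 2
exp_, qual = mutual_rank(M)
print(exp_, qual)  # doc 2, held by a single user, ends up ranked lowest
```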

The Validation Study of the Questionnaire for Sasang Constitution Classification (the 2nd edition revised in 1995) - In the field of profile analysis (사상체질분류검사지(四象體質分類檢査紙)(QSCC)II에 대(對)한 타당화(妥當化) 연구(硏究) -각(各) 체질집단(體質集團)의 군집별(群集別) Profile 분석(分析)을 중심(中心)으로-)

  • Lee, Jung-Chan;Go, Byeong-Hui;Song, Il-Byeong
    • Journal of Sasang Constitutional Medicine / v.8 no.1 / pp.247-294 / 1996
  • Using statistical data collected with the newly revised QSCC from an outpatient group examined at Kyung-Hee Medical Center and an open group of ordinary persons, the author carried out a statistical analysis for the validation of the revised questionnaire itself. First, the accurate discrimination rate was checked by performing discriminant analysis on the statistical data of the patient group. Next, T-scores were obtained by applying the norms gained in the standardization process of the open ordinary-person group to the Sasang scale scores of the outpatient group, and the distinctive features between the subpopulations divided in the process of multivariate cluster analysis were investigated. The results are summarized as follows: 1. The validity of the questionnaire was established by the fact that the accurate discrimination rate, i.e., the agreement between predicted and actual groups, was 70.08%. 2. In the profile analysis, the response to the relevant scale showed a notable upward tendency in each constitutional group, and therefore the questionnaire seems to be pertinent for constitutional discrimination. 3. In observing the power of expression through the profile analysis of each constitutional group, the Soyang group demonstrated the most remarkable outcome, the Soeum group was the most inferior, and the Taieum group revealed a sort of dual property. 4. The so-called seceder group among the three subpopulations of each constitutional group was definitely distinguished from the contrasted groups by its distinctive profile features, described as follows.
(1) The seceder group of Soyang-in showed a considerably passive disposition, unlike the general character of the ordinary Soyang group, and notably demonstrated a comparatively higher response on the Soeum scale. (2) The seceder group of Taieum-in gained low scores in general, indicating the passive disposition of the group; contrary to the general property of the Taieum group, which shows accompanied ascension in the Taiyang-Taieum scales, it demonstrated a sharply declined score on the Taiyang scale. (3) The seceder group of Soeum-in demonstrated distinctive properties similar to the profile features of the Soyang group, indicating that the passive property of the Soeum group was for the most part diluted. According to the above results, the validity of the newly revised questionnaire has been proven successfully, and the properties of the seceder groups could be noticed to some degree through the profile analysis in the course of this study. The results of this study are expected to be used as research material for producing the next edition of the questionnaire, and further investigation of the difference between the seceder groups and the contrasted groups, as referred to several times in the main discourse, is regarded as required for the improvement of the questionnaire.
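The "accurate discrimination rate" in result 1 is simply the agreement between the groups predicted by the discriminant analysis and the actual constitution groups. A minimal sketch; the labels below are invented, not the study's data:

```python
# Hedged sketch: discrimination rate = fraction of cases where the
# predicted constitution group matches the actual group. Toy labels only.

def discrimination_rate(actual, predicted):
    hits = sum(a == p for a, p in zip(actual, predicted))
    return hits / len(actual)

actual    = ['Soyang', 'Soeum', 'Taieum', 'Soyang', 'Taieum']
predicted = ['Soyang', 'Soeum', 'Soyang', 'Soyang', 'Taieum']
print(f"{discrimination_rate(actual, predicted):.2%}")  # -> 80.00%
```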


Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than complex analyses such as corporate intrinsic value analysis or technical auxiliary index analysis. However, pattern analysis is difficult and has been computerized less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved so far, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but this can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate matter. When such studies find a meaningful pattern, they find a point that matches the pattern and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be many disparities with reality. Existing research tries to find patterns with stock price predictive power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some patterns have price predictability, there have been no performance reports from actual markets. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy.
In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in the system. Only the one pattern with the highest success rate per group is selected for trading: patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The situation is realistic because performance is measured assuming that both the buy and the sell have actually been executed. We tested three ways of calculating turning points. The first method, the minimum-change-rate zig-zag, removes price movements below a certain percentage and calculates the vertices. In the second method, the high-low line zig-zag, a high price that meets the n-day high-price line is taken as a peak, and a low price that meets the n-day low-price line is taken as a valley. In the third method, the swing wave method, a central high price higher than the n high prices to its left and right is taken as a peak, and a central low price lower than the n low prices to its left and right is taken as a valley. The swing wave method was superior to the others in the test results, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases in this simulation was too large to search for high-success-rate patterns exhaustively, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the training section and the application section separately, allowing us to respond appropriately to market changes. In this study, we optimize the stock portfolio as a whole, because implementing variable optimization for each individual stock carries a risk of over-optimization.
Therefore, we selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap stock portfolio was the most successful, and the high-volatility stock portfolio was the second best. This shows that some price volatility is needed for patterns to take shape, but the highest volatility is not necessarily the best.
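The swing wave turning-point rule described above can be sketched in a few lines. This sketch simplifies the rule to a single price series rather than separate high/low series, and the prices are invented:

```python
# Hedged sketch of the "swing wave" turning-point rule: a bar is a peak if
# its price exceeds the n prices on both sides, and a valley if its price
# is below the n prices on both sides. Single toy price series only.

def swing_points(prices, n=2):
    """Return (peaks, valleys) as lists of bar indices."""
    peaks, valleys = [], []
    for i in range(n, len(prices) - n):
        window = prices[i - n:i] + prices[i + 1:i + n + 1]  # n bars each side
        if prices[i] > max(window):
            peaks.append(i)
        elif prices[i] < min(window):
            valleys.append(i)
    return peaks, valleys

prices = [10, 12, 15, 13, 11, 9, 12, 14, 17, 16, 13]
peaks, valleys = swing_points(prices, n=2)
print(peaks, valleys)  # -> [2, 8] [5]
```

Alternating peaks and valleys like these form the five turning points from which an M or W pattern is read.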

Varietal and Locational Variation of Grain Quality Components of Rice Produced in Middle and Southern Plain Areas in Korea (중ㆍ남부 평야지산 쌀 형태 및 이화학적 특성의 품종 및 산지간 변이)

  • Choi, Hae-Chune;Chi, Jeong-Hyun;Lee, Chong-Seob;Kim, Young-Bae;Cho, Soo-Yeon
    • Korean Journal of Crop Science / v.39 no.1 / pp.15-26 / 1994
  • To understand the relative contributions of varietal and environmental variation to various grain quality components in rice, the grain appearance, milling recovery, several physicochemical properties of the rice grain, and the texture or palatability of cooked rice were evaluated for milled rice from seven cultivars (five japonica and two Tongil-type) produced at six locations in the middle and southern plain areas of Korea in 1989, and the obtained data were analyzed. Highly significant varietal variation was detected in all grain quality components, and marked locational variation, accounting for about 14-54% of the total variation, was recognized in grain appearance, milling recovery, alkali digestibility, protein content, K/Mg ratio, gelatinization temperature, and breakdown and setback viscosities. Variety x location interaction variation was especially large in the overall palatability score of cooked rice and the consistency or setback viscosities of the amylograph. Tongil-type cultivars showed poor marketing quality, lower milling recovery, slightly lower alkali digestibility and amylose content, somewhat higher protein content and K/Mg ratio, relatively higher peak, breakdown, and consistency viscosities, significantly lower setback viscosity, and less desirable palatability of cooked rice compared with japonica rices. The japonica varieties with good cooked-rice palatability were slightly low in protein content and a little high in K/Mg ratio and in the stickiness/hardness ratio of cooked rice. The 1000-kernel weight was significantly heavier in rice produced in the Iri lowland than at the other locations. Milling recovery from rough to brown rice and ripening quality were lowest in Milyang late-planted rice and highest in Iri lowland and Gyehwa reclaimed-land rice. The amylose content of milled rice was about 1% lower in Gyehwa rice than at the other locations.
The protein content of polished rice was about 1% lower in rice from the middle plain area than in that from the southern plain regions. The K/Mg ratio of milled rice was lowest in Iri rice and highest in Milyang rice. Alkali digestibility was highest in Milyang rice and lowest in Honam plain rice, but the gelatinization initiation temperature of rice flour in the amylograph was lowest in Suwon and Iri rices and highest in Milyang rice. Breakdown viscosity was lowest in Milyang rice, next lowest in Ichon lowland rice, and highest in Gyehwa and Iri rices; setback viscosity showed the opposite tendency. The stickiness/hardness ratio of cooked rice was slightly lower in southern-plain rices than in middle-plain ones, and the palatability of cooked rice was best in Namyang reclaimed-land rice, followed in order by Suwon ≥ Iri ≥ Ichon ≥ Gyehwa ≥ Milyang rices. The rice materials could be classified genotypically into the two ecotypes of japonica and Tongil-type rice groups, and environmentally into the three regions of Milyang, the middle plain, and the Honam lowland, by their distribution on the plane of the first and second principal components extracted by principal component analysis from eleven grain quality properties closely associated with the palatability of cooked rice.
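The classification above places each sample on the plane of the first two principal components of the quality traits. A minimal NumPy sketch of that projection; the data matrix is a toy stand-in, not the study's eleven traits:

```python
# Hedged sketch: project samples onto the first two principal components,
# as in the varietal/locational classification above. Toy data only.
import numpy as np

def pca_scores(X, k=2):
    """Return the k-dimensional principal component scores of rows of X."""
    Xc = X - X.mean(axis=0)                 # center each quality trait
    cov = np.cov(Xc, rowvar=False)          # trait covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    comps = vecs[:, ::-1][:, :k]            # top-k eigenvectors (components)
    return Xc @ comps

# Toy matrix: rows = rice samples, columns = quality traits.
X = np.array([[1.0, 2.0, 0.5],
              [1.2, 2.1, 0.6],
              [3.0, 0.5, 2.0],
              [2.9, 0.4, 2.1]])
scores = pca_scores(X, k=2)
print(scores.shape)  # -> (4, 2)
```

Plotting the two columns of `scores` against each other gives the PC1-PC2 plane on which the ecotype and regional groups separate.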
