• Title/Summary/Keyword: CONSTRUCTION

Search results: 54,749 items (processing time: 0.077 seconds)

Wind and Flooding Damages of Rice Plants in Korea (한국의 도작과 풍수해)

  • 강양순
    • KOREAN JOURNAL OF CROP SCIENCE / Vol. 34, No. s02 / pp. 45-65 / 1989
  • The Korean peninsula, with its complex topography and variable climate, lies in the path of many typhoons that originate near the southern islands of the Philippines. Consequently, various patterns of wind and flooding damage occur in paddy fields due to the strong winds and heavy rain concentrated during the summer rice-growing season in Korea. Wind damage to rice plants in Korea is mainly caused by saline wind, dry wind, and strong wind during typhoons. Saline wind damage, showing symptoms of white heads or dried leaves, was caused by 1.1 to 17.2 mg of salt per dry weight deposited on plants located 2.5 km from the seashore of the southern coastal area during typhoon 'Vera' (27-29 August 1986), which brought 62-96% relative humidity, wind speeds above 6 m per second, and air temperatures of 22.5 to 26.4°C without rain. Most typhoons, accompanied by winds of 4.0 to 8.5 m per second and low humidity (less than 60%) with high temperatures in the east coastal and southern areas of Korea, were changed into dry, hot winds by the foehn phenomenon. Dry wind damage, with symptoms of white heads or discolored brownish grain, occurred at the rice heading stage. Strong winds caused severe mechanical damage during typhoons, such as broken, cut, and dried leaves before the heading stage, and lodging and grain shattering at the ripening stage. To reduce wind damage to rice plants, cultivating wind-resistant varieties such as Sangpoongbyeo and Cheongcheongbyeo and avoiding heading during the typhoon period by accelerating heading to before 15 August are effective. Although flood damage to rice plants such as washing away of fields, burying of fields, submergence, and lodging has decreased owing to the construction of multipurpose dams and river banks, it still occurs occasionally due to regional heavy rain and water overflowing the banks along rivers. Paddy fields were submerged for 2 to 4 days when typhoons and heavy rain occurred around the end of August; at that time, rice plants at a younger growth stage in late-transplanted fields of the southern area of Korea were severely damaged. Although panicles of rice plants at the meiotic and heading stages died when flooded, the plants showed 66% yield-compensating ability through upper tillering panicles produced from tillers with dead panicles in ordinary-transplanted paddy fields. To reduce flooding damage, it is effective to cultivate flood-resistant varieties that are simultaneously resistant to bacterial leaf blight, lodging, and the small brown planthopper. In particular, Tongil-type rice varieties are relatively resistant to flooding compared with Japonica varieties: when flooded, they showed high survival, low elongation of leaf sheath and blade, high recovery ability owing to high root activity and photosynthesis, and high yield-compensating ability through upper tillering panicles. To minimize flooding and wind damage to rice plants in the future, the following research should be carried out: 1. Data analysis by telemetering and computerization of climate, actual conditions, and growth diagnosis of crops damaged by disasters. 2. Development of varieties tolerant to poor natural conditions related to flooding and wind damage. 3. Improvement of a reasonable cropping system by introducing other crops to compensate for the loss of damaged rice. 4. Increased utilization of damaged rice plants.


Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services / Vol. 16, No. 1 / pp. 67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategies used to exploit item importance, itemset mining approaches for discovering itemsets based on item importance are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These mining algorithms compute transactional weights by utilizing the weight of each item in large databases, and they discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, we can see the importance of a certain transaction through database analysis, because the weight of a transaction has a higher value if it contains many items with high weights. We not only analyze the advantages and disadvantages but also compare the performance of the best-known algorithms in the frequent itemset mining field based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduces the concept and strategies of transactional weights. In addition, there are other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To efficiently mine weighted frequent itemsets, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms do not need an additional database scan after construction of the WIT-tree is finished, since each node of the WIT-tree holds item information such as the item and its transaction IDs. In particular, traditional algorithms conduct a number of database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms solve this overhead problem by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 on the basis of two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs performs the itemset combination processes by using the information of the transactions that contain all the itemsets. WIT-FWIs-MODIFY has a unique feature that decreases the operations for calculating the frequency of a new itemset, and WIT-FWIs-DIFF utilizes a technique using the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (i.e., dense and sparse) in terms of runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm when the size of the database is changed. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, and on the sparse dataset, WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the algorithms using the WIT-tree, WIS, which is based on the Apriori technique, has the worst efficiency because on average it requires far more computations than the others.
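The abstract describes computing a weight for each transaction from the weights of its items and then mining itemsets by weighted support. The following is a minimal brute-force sketch of that idea only; the item weights, transactions, the average-based transaction weight, and the 0.4 threshold are illustrative assumptions, and the sketch deliberately omits the WIT-tree structure and the length-N+1 combination strategies of WIT-FWIs and its variants.

```python
from itertools import combinations

# Illustrative toy database and item weights (not from the paper).
transactions = [
    {"a", "b", "c"},
    {"a", "c"},
    {"b", "c", "d"},
    {"a", "b", "c", "d"},
]
item_weights = {"a": 0.9, "b": 0.4, "c": 0.7, "d": 0.2}

def transaction_weight(t):
    # One common convention: transaction weight = average weight of its items.
    return sum(item_weights[i] for i in t) / len(t)

tw = [transaction_weight(t) for t in transactions]

def weighted_support(itemset):
    # Sum of weights of transactions containing the whole itemset,
    # normalized by the total transaction weight.
    covered = sum(w for t, w in zip(transactions, tw) if itemset <= t)
    return covered / sum(tw)

min_wsup = 0.4  # illustrative threshold
items = sorted(item_weights)
for k in range(1, len(items) + 1):
    for combo in combinations(items, k):
        s = frozenset(combo)
        wsup = weighted_support(s)
        if wsup >= min_wsup:
            print(sorted(s), round(wsup, 3))
```

Unlike this exhaustive enumeration, the WIT-tree-based algorithms in the paper keep transaction IDs at each node so that weighted supports of longer itemsets can be derived without rescanning the database.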

Study on the Analysis of β-lactoglobulin and κ-casein Genotypes of Cattle using Polymerase Chain Reaction (PCR 기법을 이용한 축우의 β-lactoglobulin 및 κ-casein 유전자형 분석에 관한 연구)

  • Sang, Byung Chan;Ryoo, Seung Heui;Lee, Sang Hoon;Song, Chi Eun;Nam, Myung Soo;Chon, Byung Soon
    • Korean Journal of Agricultural Science / Vol. 25, No. 2 / pp. 216-224 / 1998
  • This study was performed to provide basic and applicable data for the improvement of Korean cattle and dairy cattle by determining the genetic constitution obtained from analysis of the genetic polymorphisms of the β-lactoglobulin and κ-casein loci of Korean cattle and Holstein cows using PCR-RFLP. The genomic DNA used in this study was prepared from the blood of 253 Korean cattle at the Korean Native Cattle Improvement Center, NLCF, and from the blood of 113 Holstein cows at the National Livestock Research Institute. The results obtained are summarized as follows: 1. This study confirmed amplified products of 530 bp and 262 bp fragments obtained from the amplification of the β-lactoglobulin and κ-casein loci in Korean cattle and the Holstein breed by PCR. 2. The β-lactoglobulin AA genotype showed 153 bp and 109 bp fragments, the AB genotype showed 153 bp, 109 bp, 79 bp and 74 bp fragments, and the BB genotype showed 109 bp, 79 bp and 74 bp fragments in the amplified products of the β-lactoglobulin locus digested with the restriction enzyme Hae III. 3. The κ-casein AA genotype showed a 530 bp fragment, the AB genotype showed 530 bp, 344 bp and 186 bp fragments, and the BB genotype showed 344 bp and 186 bp fragments in the amplified products of the κ-casein locus digested with the restriction enzyme Taq I. 4. Regarding β-lactoglobulin genotype and gene frequencies, Korean cattle showed 6.72%, 26.09% and 67.19% for the AA, AB and BB genotypes, with β-lactoglobulin A and B allele frequencies of 0.197 and 0.803, while Holstein showed 35.40%, 56.64% and 7.96% for the AA, AB and BB genotypes, with β-lactoglobulin A and B allele frequencies of 0.637 and 0.363, respectively. 5. Regarding κ-casein genotype and gene frequencies, Korean cattle showed 46.25%, 39.13% and 14.62% for the AA, AB and BB genotypes, with κ-casein A and B allele frequencies of 0.658 and 0.342, while Holstein showed 60.18%, 38.94% and 0.88% for the AA, AB and BB genotypes, with κ-casein A and B allele frequencies of 0.796 and 0.204, respectively. 6. In consequence, the gene frequencies were 0.197 and 0.803 for the β-lactoglobulin A and B alleles and 0.658 and 0.342 for the κ-casein A and B alleles in Korean cattle, but 0.637 and 0.363 for the β-lactoglobulin A and B alleles and 0.796 and 0.204 for the κ-casein A and B alleles in Holstein, respectively.
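The allele frequencies reported above follow from the genotype frequencies by simple gene counting: for a two-allele locus, freq(A) = freq(AA) + freq(AB)/2 and freq(B) = freq(BB) + freq(AB)/2. A small sketch, using the Korean cattle β-lactoglobulin genotype percentages quoted in the abstract (differences to the reported 0.197/0.803 are rounding only):

```python
def allele_frequencies(aa, ab, bb):
    """Return (freq_A, freq_B) from genotype frequencies given in percent."""
    total = aa + ab + bb
    freq_a = (aa + ab / 2) / total
    freq_b = (bb + ab / 2) / total
    return freq_a, freq_b

# Korean cattle beta-lactoglobulin genotype percentages from the abstract.
freq_a, freq_b = allele_frequencies(aa=6.72, ab=26.09, bb=67.19)
print(round(freq_a, 3), round(freq_b, 3))  # ~0.198 and ~0.802
```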


A Methodology to Develop a Curriculum based on National Competency Standards - Focused on Methodology for Gap Analysis - (국가직무능력표준(NCS)에 근거한 조경분야 교육과정 개발 방법론 - 갭분석을 중심으로 -)

  • Byeon, Jae-Sang;Ahn, Seong-Ro;Shin, Sang-Hyun
    • Journal of the Korean Institute of Landscape Architecture / Vol. 43, No. 1 / pp. 40-53 / 2015
  • To train manpower that meets the requirements of industry, the introduction of the National Qualification Framework (hereinafter NQF) based on National Competency Standards (hereinafter NCS) was decided in 2001 under the Office for Government Policy Coordination. For landscape architecture within the construction field, the "NCS - Landscape Architecture" pilot was developed in 2008 and test-operated for three years starting in 2009. In particular, as the 'realization of a competence-based society, not one based on educational background' was adopted as one of the major projects of the Park Geun-Hye government (inaugurated in 2013), the NCS system was constructed on a nationwide scale as a concrete means of realizing this goal. However, because the NCS developed by the government specifies ideal job-performing abilities, it has weaknesses: it cannot reflect actual operational differences in student level between universities, problems in securing equipment and professors, or constraints on the number of current curricula. For a soft landing to a practical curriculum, the gap between the current curriculum and the NCS must first be analyzed clearly. Gap analysis is the initial-stage methodology for reorganizing an existing curriculum into an NCS-based curriculum: based on the ability unit elements and performance standards of each NCS ability unit, the degree of coincidence (or discrepancy) with the existing curriculum within the department is rated on a 5-point Likert scale and analyzed. Thus, universities wishing to operate NCS in the future can, by measuring the level of coincidence and the gap between the current university curriculum and the NCS, secure a basic tool for verifying the applicability of NCS and the effectiveness of further development and operation. The advantages of reorganizing the curriculum through gap analysis are, first, that a quantitative index of the NCS adoption rate for each department can be provided in connection with government financial support projects, and second, that an objective standard of sufficiency or insufficiency is provided when reorganizing into an NCS-based curriculum. In other words, when introducing the relevant NCS subdivision, the insufficient ability units and ability unit elements can be extracted, and the supplementary matters for each ability unit element in each existing subject can be extracted at the same time, providing direction for detailed class programs and the opening of basic subjects. The Ministry of Education and the Ministry of Employment and Labor must bring together people from industry to actively develop and supply NCS standards at a practical level so that the requirements of the industrial field are systematically reflected in education, training, and qualification, and universities wishing to apply NCS must reorganize their curricula to connect work and qualification based on NCS. To enable this, universities must consider the prospects of the relevant industry and the relationship between faculty resources within the university and local industry in order to clearly select the NCS subdivision to be applied. Afterwards, gap analysis must be used in the NCS-based curriculum reorganization to establish the direction of the reorganization more objectively and rationally, in order to participate efficiently in the process-evaluation-type qualification system.
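A minimal sketch of the Likert-scale gap analysis described above: each NCS ability unit element is rated 1-5 for how well the existing curriculum covers it, and the gap (5 minus the rating) is aggregated per ability unit. The ability units, elements, ratings, and the 1.5 cut-off below are hypothetical illustrations, not data from the paper or from the actual NCS landscape architecture subdivision.

```python
from statistics import mean

# Hypothetical coincidence ratings (1 = no coverage, 5 = full coverage).
coincidence_ratings = {
    "Landscape planning": {"Site analysis": 4, "Concept development": 3},
    "Landscape design": {"Planting design": 2, "Detail drawing": 1},
    "Construction supervision": {"Quality inspection": 3, "Process management": 2},
}

MAX_SCORE = 5  # top of the Likert scale

for unit, elements in coincidence_ratings.items():
    avg = mean(elements.values())
    gap = MAX_SCORE - avg
    status = "sufficient" if gap <= 1.5 else "needs supplementation"
    print(f"{unit}: mean coincidence {avg:.1f}, gap {gap:.1f} -> {status}")
```

Ability units with large gaps would be the ones flagged for new subjects or supplementary content when reorganizing the curriculum.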

Aesthetics of Samjae and Inequilateral Triangle Found in Ancient Triad of Buddha Carved on Rock - Centering on Formative Characteristics of Triad of Buddha Carved on Rock in Seosan - (고대(古代) 마애삼존불(磨崖三尊佛)에서 찾는 삼재(三才)와 부등변삼각(不等邊三角)의 미학(美學) - 서산마애삼존불의 형식미를 중심으로 -)

  • Rho, Jae-Hyun;Lee, Kyu-Wan;Jang, Il-Young;Goh, Yeo-Bin
    • Journal of the Korean Institute of Traditional Landscape Architecture / Vol. 28, No. 3 / pp. 72-84 / 2010
  • This study was conducted to provide basic data for implementing and applying Samjonseokjo(三尊石造), one of the traditional stone construction methods, by examining how constructive principles such as proportional beauty are expressed in the modeling of Triads of Buddha Carved on Rock formed during the Three Kingdoms period, centering on the Triad of Buddha Carved on Rock in Seosan. The summarized findings are as follows. 1. From an analysis of the size and proportion of 17 Triads of Buddha Carved on Rock, the average total height of the Bonjonbul(本尊佛) was 2.96m, the right Hyeopsi(右挾侍) 2.19m, and the left Hyeopsi(左挾侍) 2.16m. The resulting height ratio of 100:75:75 shows a left-right symmetrical balance, and the area ratio of the left and right Hyeopsi, 13.4:13.7, shows the two areas to be evenly matched. 2. The Triad of Buddha Carved on Rock in Seosan is carved on the Inam(印岩) rock across the Sambulgyo bridge of the Yonghyeon valley, facing S47°E in azimuth. This orientation is judged to aim at changes of image and aesthetic effect in the Buddhist statues according to the direction of sunlight while blocking glare for worshipers. 3. As for the iconic characteristics of the Triad in Seosan, the Hyeopsi consist of a Bangasang(半跏像) and a Bongjiboju(捧持寶珠)-type standing Bodhisattva. Despite the left-right asymmetry of the Samjon composition, unity is achieved by repeating the same lines and shapes, showing a stable visual balance. 4. In the Triad in Seosan, the total heights of the Bonjonbul, left Hyeopsi, and right Hyeopsi are 2.80m, 1.66m, and 1.70m, respectively. The height ratio of the left and right Hyeopsibul, 0.60:0.62, is almost equal, whereas the area ratio, 28.8:25.2, shows a larger difference; the area ratio on the plane was found to come closer to the Samjae aesthetic proportion. 5. The axial angles centering on the Gwangbae(光背) were 84:46:50, the largest being close to a right angle, whereas the axial angles centering on the Yeonhwajwa(蓮華坐, lotus seat) were measured as 135:25:20, an inequilateral (scalene) triangle close to an obtuse triangle. Accordingly, the upper and lower parts of the Triad in Seosan maintain a corresponding relationship and a stable sense of proportion through the angular proportions of scalene triangles, one near right-angled and one obtuse. 6. The distance ratio in the upper half was 0.51:0.36:0.38, and that in the lower half 0.53:0.33:0.27, forming an up-down and left-right symmetrical balance while showing shapes closer to scalene triangles. 7. Examining the Samjae-mi(三才美) of the Triad in Seosan, the angular ratios forming the area ratios and triangular shapes were more notable than the length ratios. The scalene triangles formed around the Gwangbae in the upper part and the Yeonhwajwa(lotus seat) in the lower part, no less than the mutual height and area ratios of the Samjonbul, become a very important internal motive that doubles the constructive beauty among the Samjae.
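A small illustration of the angular-proportion reading in items 5 and 7: the two angle triples quoted in the abstract (84:46:50 around the Gwangbae, 135:25:20 around the Yeonhwajwa) sum to 180 and can be classified as near right-angled and obtuse scalene triangles. The few-degree tolerance used below is an ordinary geometric convention, not a value from the paper.

```python
import math

def classify(angles):
    a, b, c = sorted(angles, reverse=True)
    assert math.isclose(a + b + c, 180, abs_tol=1e-6), "angles must sum to 180"
    if math.isclose(a, 90, abs_tol=7):      # within a few degrees of a right angle
        kind = "near right-angled"
    elif a > 90:
        kind = "obtuse"
    else:
        kind = "acute"
    scalene = len({a, b, c}) == 3           # all three angles different
    return f"{kind} {'scalene ' if scalene else ''}triangle"

print(classify((84, 46, 50)))   # upper part, around the Gwangbae
print(classify((135, 25, 20)))  # lower part, around the Yeonhwajwa
```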

A Study on the Improvement Plans of Police Fire Investigation (경찰화재조사의 개선방안에 관한 연구)

  • SeoMoon, Su-Cheol
    • Journal of Korean Institute of Fire Investigation / Vol. 9, No. 1 / pp. 103-121 / 2006
  • We are living in more comfortable circumstances thanks to social development and the improvement of the standard of living, but, on the other hand, we are exposed to an increasing number of fires on account of larger, taller, and deeper-underground buildings and the use of various energy resources. The floor materials used in modern residences have gone through various alterations according to the use of the residence and are now used as finishing materials for the floors of apartments, houses, and shops. There are many kinds of materials we ordinarily come into contact with, but first of all we need to experiment on the spread of fire over the heated (hypocaust-type) floors used in apartments and over floor coverings that are easily obtained. We, scientific investigators, encounter incidents caused by arson or accidental fire closely connected with petroleum products on floor materials, which give rise to many problems. On this account, I would like to propose that we conduct experiments on fire patterns for each petroleum product and thereby discriminate accidental fire from arson. In an investigation, finding the source of ignition may seem to be the essential part of clarifying the cause of a fire, but it is not by itself the cause of the fire. Besides, all kinds of fire cases and accidents are covered by legislation and standards intended to minimize damage and enable an early response: we are supposed to install various electrical apparatus, automatic alarm equipment, and automatic fire extinguishers to protect ourselves from the danger of fire, to check them at any time, to escape urgently in case of an outbreak, and to build fire-resistant construction to prevent flames from spreading to neighboring areas. In other words, several factors must be taken into consideration when investigating the cause of a fire-related case or accident, which means it is not reasonable for a single investigator or a single investigative team to clarify both the area of origin and the cause of a fire. Accordingly, in this thesis, explanations are limited to the judgement and verification of the cause of a fire and the concrete fire-spread area through investigation of the very spot where a fire broke out. The fire discernment also focuses on the early-stage fire-spread area and ignition sources, and I believe the realities of police fire investigation and its problems remain a matter of debate. The cause of a fire must be examined by logical judgement on the basis of abundant scientific knowledge and experience covering the whole of fire phenomena. The judgement of the cause should center on the fire-spread situation at the scene, and in verifying it, one should prove the chain by situational evidence from the traces of fire spread back to the ignition source. The causal relation of a fire outbreak should not be proved by arbitrary opinion detached from concrete facts, and there is a high chance of error if one draws deductions from coincidence. It is absolutely necessary to observe with an objective attitude and grasp the situation of the fire when investigating the cause; looking at the scene with a prejudice is not allowed.
The ignition source itself is likely to be treated as the cause of a fire, and that casts doubt on the results according to the interests of the independent investigators: the police investigate hoping it is not arson, the fire department hoping it is not a problem of installations or equipment, insurance companies hoping it is arson, the electrical field hoping it is not an electrical defect, and the gas-related parties hoping it is not a gas problem. Under these conditions one cannot expect a fairer investigation or dispel such misgivings, because the ignition source itself is regarded as the cause of the fire and civil or criminal responsibility attaches to it. On this account, the investigation of the cause of a fire should be conducted with independent research, investigation, and appraisal, and finally the cause should be clarified by putting the results together.


Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / Vol. 20, No. 1 / pp. 195-211 / 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier similar computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To seek promising research issues and topics in cloud computing, it is necessary to understand the research trends in cloud computing more comprehensively. In this study, we collect bibliographic and citation information for cloud computing related research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in the citation relationships among papers and the co-occurrence relationships of keywords by utilizing social network analysis measures. Through the analysis, we can identify the relationships and connections among research topics in cloud computing related areas and highlight new potential research topics. In addition, we visualize dynamic changes of research topics relating to cloud computing using a proposed cloud computing "research trend map." A research trend map visualizes the positions of research topics in two-dimensional space, using the frequencies of keywords (X-axis) and the rates of increase in the degree centrality of keywords (Y-axis) as its two dimensions. Based on the values of these two dimensions, the space of the research map is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase of degree centrality is defined as a mature technology area; the area where both keyword frequency and the increase rate of degree centrality are high is defined as a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is defined as a promising technology area; and the area where both keyword frequency and the rate of increase of degree centrality are low is defined as a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the results of the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top based on the PageRank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords dealing with main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010-2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, it was found that interest in the technical issues of cloud computing increases gradually. From the annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area.
The study results indicate that distributed systems and grid computing received a lot of attention as similar computing paradigms in the early stage of cloud computing research. The early stage of cloud computing was a period focused on understanding and investigating cloud computing as an emergent technology, linking it to relevant established computing concepts. After the early stage, security and virtualization technologies became main issues in cloud computing, which is reflected in the movement of security and virtualization technologies from the promising area to the growth area in the cloud computing research trend maps. Moreover, this study revealed that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
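A minimal sketch of the "research trend map" quadrant rule described above: keyword frequency (X-axis) and the rate of increase in degree centrality (Y-axis) divide the plane into maturation, growth, promising, and declining areas. The coordinates and the two cut points below are invented for illustration; only the resulting areas for security, virtualization, and grid computing mirror the findings quoted in the abstract.

```python
FREQ_CUT = 15    # illustrative boundary between "low" and "high" keyword frequency
RATE_CUT = 0.5   # illustrative boundary for the centrality increase rate

def area(freq, rate):
    # High frequency: mature if centrality growth is slow, otherwise growth area.
    if freq >= FREQ_CUT:
        return "growth" if rate >= RATE_CUT else "maturation"
    # Low frequency: promising if centrality growth is fast, otherwise declining.
    return "promising" if rate >= RATE_CUT else "declining"

keywords = {
    "security":       (8, 0.80),   # low frequency, fast-growing centrality
    "virtualization": (30, 0.70),  # high frequency, fast-growing centrality
    "grid computing": (6, 0.05),   # low frequency, stagnant centrality
}

for kw, (freq, rate) in keywords.items():
    print(f"{kw}: {area(freq, rate)}")  # promising / growth / declining
```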

A study of compaction ratio and permeability of soil with different water content (축제용흙의 함수비 변화에 의한 다짐율 및 수용계수 변화에 관한 연구)

  • 윤충섭
    • Magazine of the Korean Society of Agricultural Engineers / Vol. 13, No. 4 / pp. 2456-2470 / 1971
  • Compaction of soil is very important for the construction of soil structures such as highway fills, reservoir embankments, and sea dikes. With increasing compaction effort, the strength, internal friction, and cohesion of the soil increase greatly, while the reduction of permeability is evident. Factors which may influence compaction are moisture content, grain size, grain distribution, and other physical properties, as well as the method of compaction; among these parameters, the moisture content is the most important. To reach the maximum density of a given soil, the corresponding optimum water content is required. If the water content deviates even slightly from the optimum water content, the compaction ratio decreases and the corresponding mechanical properties change evidently. The results of this study of soil compaction with different water contents are summarized as follows. 1) The maximum dry density increases and the corresponding optimum moisture content decreases with an increasing proportion of coarse grains, and the compaction curve is steeper than with an increasing proportion of fine grains. 2) The maximum dry density decreases with increasing optimum water content, and the relationship between the two parameters becomes $r_{d,max}=2.232-0.02785\,W_0$; this relationship changes to $r_d=ae^{-bw}$ when the water content deviates from the optimum. 3) For most soils, a dry condition is better than a wet condition for applying compactive effort, but the wet condition is preferable when the liquid limit of the soil exceeds 50 percent. 4) The compaction ratio of cohesive soil is greater than that of cohesionless soil even when the amounts of coarse grains are the same. 5) The relationship between the maximum dry density and porosity is $r_{d,max}=2.186-0.872e$, but it changes to $r_d=ae^{be}$ when the water content varies from the optimum water content. 6) The void ratio increases with increasing optimum water content as $n=15.85+1.075w$, but the relation becomes $n=ae^{bw}$ if there is a variation in water content. 7) The increase of permeability is high when the soil is highly plastic or coarse. 8) The coefficient of permeability of soil compacted in a wet condition is lower than that of soil compacted in a dry condition. 9) Cohesive soil has higher permeability than cohesionless soil even when the amounts of coarse particles are the same. 10) In general, soil with a high optimum water content has a lower coefficient of permeability than soil with a low optimum water content. 11) The coefficient of permeability has a certain relationship with density, gradation, and void ratio, and it increases with increasing degree of saturation.
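A small sketch applying two of the fitted relations quoted in the abstract: $r_{d,max}=2.232-0.02785\,W_0$ (maximum dry density versus optimum water content) and $n=15.85+1.075w$ (item 6). The sample water contents and the assumed units (water content in percent, density in t/m³) are illustrative inputs, not measurements from the study.

```python
def max_dry_density(optimum_water_content):
    """r_d,max = 2.232 - 0.02785 * W0, as quoted in the abstract."""
    return 2.232 - 0.02785 * optimum_water_content

def void_relation(water_content):
    """n = 15.85 + 1.075 * w, the linear relation quoted in item 6 (percent)."""
    return 15.85 + 1.075 * water_content

# Illustrative optimum water contents in percent.
for w0 in (10, 15, 20, 25):
    print(f"W0 = {w0:2d}%  r_d,max = {max_dry_density(w0):.3f}  n = {void_relation(w0):.1f}")
```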


Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / Vol. 16, No. 3 / pp. 77-97 / 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detection of market timing means determining when to buy and sell to gain excess return from trading. In many market timing systems, trading rules have been used as an engine to generate signals for trades. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, by using its control function, it does not generate a trading signal when the pattern of the market is uncertain. The data for rough set analysis must be discretized from numeric values because the rough set only accepts categorical data for analysis. Discretization searches for proper "cuts" for numeric data that determine intervals; all values that lie within each interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts obtained through literature review or interviews with experts. Minimum entropy scaling implements an algorithm based on recursively partitioning the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization searches for categorical values by naïvely scaling the data and then finds the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance using rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis and decision trees, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
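A minimal sketch of equal frequency scaling, the first of the four discretization methods described above: a fixed number of intervals is chosen and the cuts are placed so that roughly the same number of samples falls into each interval. The synthetic indicator values and the choice of four intervals are illustrative assumptions, not settings from the paper; only the sample size of 660 mirrors the number of trading days quoted in the abstract.

```python
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Return cut points that put roughly equal numbers of samples in each interval."""
    quantiles = np.linspace(0, 1, n_intervals + 1)[1:-1]
    return np.quantile(values, quantiles)

def discretize(values, cuts):
    """Map each numeric value to the index of the interval it falls into."""
    return np.searchsorted(cuts, values, side="right")

rng = np.random.default_rng(0)
indicator = rng.normal(loc=0.0, scale=1.0, size=660)  # e.g. a technical indicator
cuts = equal_frequency_cuts(indicator, n_intervals=4)
codes = discretize(indicator, cuts)

print("cuts:", np.round(cuts, 3))
print("samples per interval:", np.bincount(codes))  # roughly 165 in each
```

Expert's knowledge-based discretization would instead replace `cuts` with values supplied by domain experts, which is the variant the paper found most robust on the validation sample.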

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems / Vol. 19, No. 2 / pp. 125-140 / 2013
  • We call a data set in which the number of records belonging to a certain class far outnumbers the number of records belonging to the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a certain classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records account for the majority class and 'churn' records account for the minority class. Sensitivity measures the proportion of actual retentions which are correctly identified as such, and specificity measures the proportion of churns which are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to the low value of specificity. Much previous research on imbalanced data sets employed an 'oversampling' technique, where members of the minority class are sampled more than those of the majority class in order to make a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity will be decreased. In this research, we developed a hybrid model of a support vector machine (SVM), an artificial neural network (ANN), and a decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model.' The process of construction and prediction of our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. The SVM_I model and ANN_I model are constructed using the imbalanced data set, and the SVM_B model is constructed using the balanced data set. The SVM_I model is superior in sensitivity and the SVM_B model is superior in specificity. For a record on which both the SVM_I model and the SVM_B model make the same prediction, that prediction becomes the final solution; if they make different predictions, the final solution is determined by the discrimination rules obtained from the ANN and decision tree. For records on which the SVM_I model and SVM_B model make different predictions, a decision tree model is constructed using the ANN_I output value as input and actual retention or churn as the target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ≥ 0.285, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research. The result we present in this research is the structure or framework of our hybrid SVM model, not a specific threshold value such as 0.285; therefore, the threshold value in the above discrimination rules can be changed to any value depending on the data. In order to evaluate the performance of our hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, which is better than that of the SVM_I model or the SVM_B model. The points worth noticing here are its sensitivity, 95.02%, and specificity, 69.24%. The sensitivity of the SVM_I model is 94.65%, and the specificity of the SVM_B model is 67.00%. Therefore the hybrid SVM model developed in this research improves the specificity of the SVM_B model while maintaining the sensitivity of the SVM_I model.
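A minimal sketch of the combination rule of the hybrid SVM model described above: when SVM_I (trained on the imbalanced data) and SVM_B (trained on the oversampled balanced data) agree, their common prediction is final; when they disagree, the ANN_I-based discrimination rule with the threshold 0.285 quoted in the abstract decides. The example predictions and ANN outputs are made-up inputs, and training of the three underlying models is not shown here.

```python
RETENTION, CHURN = "retention", "churn"
THRESHOLD = 0.285  # the value the paper reports as optimized for its data

def hybrid_predict(svm_i_pred, svm_b_pred, ann_i_output):
    if svm_i_pred == svm_b_pred:          # both SVM models agree
        return svm_i_pred
    # Disagreement: fall back to the discrimination rule on the ANN_I output.
    return RETENTION if ann_i_output < THRESHOLD else CHURN

cases = [
    (RETENTION, RETENTION, 0.10),  # agreement            -> retention
    (RETENTION, CHURN,     0.40),  # disagreement, >=0.285 -> churn
    (CHURN,     RETENTION, 0.12),  # disagreement, <0.285  -> retention
]
for svm_i, svm_b, ann in cases:
    print(hybrid_predict(svm_i, svm_b, ann))
```

As the abstract notes, 0.285 is specific to the paper's data set; with other data the threshold would be re-derived from the decision tree built on the ANN_I outputs.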