• Title/Summary/Keyword: Quantitative Approaches

Assessment of Landslide Susceptibility using a Coupled Infinite Slope Model and Hydrologic Model in Jinbu Area, Gangwon-Do (무한사면모델과 수리학적 모델의 결합을 통한 강원도 진부지역의 산사태 취약성 분석)

  • Lee, Jung Hyun;Park, Hyuck Jin
    • Economic and Environmental Geology
    • /
    • v.45 no.6
    • /
    • pp.697-707
    • /
    • 2012
  • Quantitative landslide susceptibility assessment methods can be divided into statistical and geomechanical approaches, depending on how triggering factors and landslide models are considered. The geomechanical approach is regarded as one of the most effective because it employs a physical slope model and takes into account the geomorphological and geomechanical properties of slope materials. Consequently, geomechanical approaches have been used widely in landslide susceptibility analysis, with the infinite slope model serving as the physical slope model. However, previous studies assumed a constant groundwater level over broad study areas without considering rainfall intensity or the hydraulic properties of soil materials. Therefore, in this study, landslide susceptibility assessment was implemented by coupling the infinite slope model with a hydrologic model. For the analysis, the geomechanical and hydraulic properties of slope materials were measured from soil samples obtained through field investigation, and rainfall intensity was taken into account. For practical application, the proposed approach was applied to the Jinbu area, Gangwon-Do, which experienced a large number of landslides in July 2006. For comparison, the previous approach, which uses a randomly selected groundwater level, was also used to analyze landslide susceptibility. Comparison of the results shows that the accuracy of the proposed method was improved by incorporating the hydrologic model.
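The core of the geomechanical approach described above is the infinite slope factor of safety, with the groundwater level supplied by a hydrologic model rather than assumed constant. The sketch below is a minimal illustration of one common coupling, a steady-state topographic wetness relation in the spirit of SHALSTAB/SINMAP; it is not necessarily the authors' specific hydrologic model, and all parameter values are hypothetical.

```python
import numpy as np

def wetness_ratio(q, T, a, beta):
    """Steady-state wetness m = h_w/z from recharge q [m/day], transmissivity T [m^2/day],
    specific catchment area a [m], and slope angle beta [rad] (SHALSTAB-style relation)."""
    return np.clip((q / T) * a / np.sin(beta), 0.0, 1.0)

def factor_of_safety(c, phi, gamma_t, z, beta, m, gamma_w=9.81):
    """Infinite-slope factor of safety with pore pressure from a groundwater
    table standing at height m*z above the slip surface (kPa, kN/m^3, m, rad)."""
    phi = np.radians(phi)
    tau_resist = c + (gamma_t * z - gamma_w * m * z) * np.cos(beta) ** 2 * np.tan(phi)
    tau_drive = gamma_t * z * np.sin(beta) * np.cos(beta)
    return tau_resist / tau_drive

# Example cell: 30-degree slope, 2 m soil depth, hypothetical hydrologic inputs
beta = np.radians(30.0)
m = wetness_ratio(q=0.05, T=5.0, a=500.0, beta=beta)
fs = factor_of_safety(c=5.0, phi=30.0, gamma_t=19.0, z=2.0, beta=beta, m=m)
print(f"wetness ratio m = {m:.2f}, FS = {fs:.2f}")   # FS < 1 indicates instability
```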

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first approach is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule-set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D Then Do E, and Then Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B Then Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, and researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules obtained by introducing specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we can achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process; it usually runs a single program or a small, fixed set of programs, so customizing an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost. Table I (inference time with 51 rules): MIPS R3000 (regular): 125 s for 6000 inferences, 20.8 ms per inference, 48 FLIPS; MIPS R3000 (with min/max): 49 s, 8.2 ms, 122 FLIPS; ASIC: 0.0038 s, 6.4 µs, 156,250 FLIPS.
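As a concrete reference for what the chips compute, the following is a minimal software sketch of max-min (Mamdani) compositional inference with table-lookup fuzzification and centroid defuzzification, using the 64-element fuzzy set representation mentioned above. The membership functions and rules are hypothetical; this illustrates the general technique rather than either chip's datapath.

```python
import numpy as np

# Discretized universe: each fuzzy set is a 64-element array, as on the chips
N = 64
x = np.linspace(0.0, 1.0, N)

def tri(a, b, c):
    """Triangular membership function sampled on the 64-point universe."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0)

# Two hypothetical rules of the simpler format: IF A and B Then Do E
rules = [
    {"A": tri(0.0, 0.2, 0.5), "B": tri(0.0, 0.3, 0.6), "E": tri(0.0, 0.2, 0.4)},
    {"A": tri(0.4, 0.7, 1.0), "B": tri(0.3, 0.6, 1.0), "E": tri(0.5, 0.8, 1.0)},
]

def infer(a_in, b_in):
    """Max-min (Mamdani) inference: min for AND/implication, max to aggregate."""
    agg = np.zeros(N)
    for r in rules:
        # Fuzzify crisp inputs by table lookup on the sampled membership arrays
        ia, ib = int(a_in * (N - 1)), int(b_in * (N - 1))
        w = min(r["A"][ia], r["B"][ib])                 # firing strength (min = AND)
        agg = np.maximum(agg, np.minimum(w, r["E"]))    # clip consequent, max-aggregate
    # Centroid defuzzification, as on the UNC/MCNC chip
    return float(np.sum(x * agg) / (np.sum(agg) + 1e-9))

print(infer(0.25, 0.35))
```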

Quantitative Flood Forecasting Using Remotely-Sensed Data and Neural Networks

  • Kim, Gwangseob
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2002.05a
    • /
    • pp.43-50
    • /
    • 2002
  • Accurate quantitative forecasting of rainfall for basins with a short response time is essential to predict streamflow and flash floods. Previously, neural networks were used to develop a Quantitative Precipitation Forecasting (QPF) model that greatly improved forecasting skill at specific locations in Pennsylvania, using Numerical Weather Prediction (NWP) output together with rainfall and radiosonde data. The objective of this study was to improve an existing artificial neural network model and to incorporate the evolving structure and frequency of intense weather systems in the mid-Atlantic region of the United States for improved flood forecasting. Besides radiosonde and rainfall data, the model also used as input the satellite-derived characteristics of storm systems such as tropical cyclones, mesoscale convective complexes, and convective cloud clusters. The convective classification and tracking system (CCATS) was used to identify and quantify storm properties such as lifetime, area, eccentricity, and track. As in standard expert prediction systems, the fundamental structure of the neural network model was learned from the hydroclimatology of the relationships among weather systems, rainfall production, and streamflow response in the study area. The new Quantitative Flood Forecasting (QFF) model was applied to predict streamflow peaks with lead times of 18 and 24 hours over a five-year period in 4 watersheds on the leeward side of the Appalachian mountains in the mid-Atlantic region. Threat scores consistently above 0.6 and close to 0.8-0.9 were obtained for 18-hour lead-time forecasts, and skill scores of at least 4% and up to 6% were attained for the 24-hour lead-time forecasts. This work demonstrates that multisensor data cast into an expert information system such as a neural network, if built upon scientific understanding of regional hydrometeorology, can lead to significant gains in forecast skill for extreme rainfall and associated floods. In particular, this study validates our hypothesis that accurate and extended flood forecast lead times can be attained by taking into consideration the synoptic evolution of atmospheric conditions extracted from the analysis of large-area remotely sensed imagery. While physically based numerical weather prediction and river routing models cannot accurately depict complex natural non-linear processes, and thus have difficulty simulating extreme events such as heavy rainfall and floods, data-driven approaches should be viewed as a strong alternative in operational hydrology. This is especially pertinent at a time when the diversity of sensors in satellites and ground-based operational weather monitoring systems provides large volumes of data on a real-time basis.
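The abstract describes a neural network that maps NWP, radiosonde, rainfall, and satellite-derived storm descriptors to streamflow peaks at fixed lead times. The sketch below is a minimal, hypothetical illustration of that kind of multi-input regression with a small multilayer perceptron on synthetic data; the feature set, network size, and skill measure are assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the real predictors: NWP/radiosonde indices, observed rainfall,
# and CCATS-style storm descriptors (lifetime, area, eccentricity) -- names are illustrative.
n = 500
X = rng.normal(size=(n, 6))                                              # 6 hypothetical predictors
y = 2.0 * X[:, 0] + 1.5 * X[:, 3] ** 2 + rng.normal(scale=0.3, size=n)   # toy streamflow peak

scaler = StandardScaler().fit(X[:400])
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(scaler.transform(X[:400]), y[:400])

# Hold-out check, loosely analogous to the paper's lead-time verification scores
print("R^2 on held-out storms:", model.score(scaler.transform(X[400:]), y[400:]))
```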

An analysis study for reasonable installation of tunnel fire safety facility (터널 방재설비의 합리적 설치를 위한 분석적 연구)

  • Park, Jin-Ouk;Yoo, Yong-Ho;Park, Byoung-Jik
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.17 no.3
    • /
    • pp.243-248
    • /
    • 2015
  • Domestic road and railroad construction has grown steadily, and to mitigate traffic congestion and to serve urban planning and refurbishment projects, deeper and longer tunnels have been built. Fire is the most fatal type of accident in a tunnel and can easily become disastrous. In this study, QRA (Quantitative Risk Analysis), a quantitative risk analysis approach, was applied to tunnel fire safety design, and an evaluation of QRA cases and a cost comparison of QRA methods were carried out. In addition, an analysis of the risk reduction effect of tunnel fire safety systems was conducted using AHP (Analytic Hierarchy Process), and the priority of the major factors that could mitigate risk in a tunnel fire was presented. As a result, a significant cost reduction effect could be obtained by incorporating QRA, and fire safety systems are expected to be designed more rationally. Considering cost, the priority of fire safety systems based on their risk mitigation effect is, in order: water pipes, emergency lighting, evacuation passages, and the smoke control system.
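AHP derives priority weights from pairwise comparisons among alternatives. The following is a minimal sketch of that computation for four fire-safety measures, using a hypothetical Saaty-scale comparison matrix; the numbers are illustrative and do not reproduce the study's actual judgments.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four fire-safety measures
# (water pipe, emergency lighting, evacuation passage, smoke control), Saaty 1-9 scale.
A = np.array([
    [1,   3,   4,   5],
    [1/3, 1,   2,   3],
    [1/4, 1/2, 1,   2],
    [1/5, 1/3, 1/2, 1],
], dtype=float)

# Priority vector = principal eigenvector of A, normalized to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (CR < 0.1 is conventionally acceptable); RI = 0.90 for n = 4
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.90
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```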

Impact Assessment of Forest Development on Net Primary Production using Satellite Image Spatial-temporal Fusion and CASA-Model (위성영상 시공간 융합과 CASA 모형을 활용한 산지 개발사업의 식생 순일차생산량에 대한 영향 평가)

  • Jin, Yi-Hua;Zhu, Jing-Rong;Sung, Sun-Yong;Lee, Dong-Ku
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.20 no.4
    • /
    • pp.29-42
    • /
    • 2017
  • With the revision of the "Guidelines for GHG Environmental Assessment", developers are now required to evaluate the GHG sequestration and storage of the development site. However, the current guidelines take into account only the quantitative loss within the development site and do not consider the qualitative decrease in the carbon sequestration capacity of the forest edge produced by development. In order to assess the quantitative and qualitative effects on vegetation carbon uptake, the CASA-NPP model and satellite image spatial-temporal fusion were used to estimate annual net primary production in 2005 and 2015. Development projects carried out between 2006 and 2014 were examined to evaluate quantitative changes within development sites and qualitative changes in their surroundings by development type. The RMSE of the satellite image fusion results is less than 0.1 and close to 0, and the correlation coefficient is above 0.6, indicating relatively high prediction accuracy. The estimated NPP ranges from 0 to 1335.53 g C/m² per year before development and from 0 to 1333.77 g C/m² per year after development. Analysis of the NPP reduction within development areas by type of forest development shows that the differences among development types are not significant, with the smallest change found for sports facility development. It was also found that edge vegetation was most affected by industrial development. This suggests that industrial development triggers additional development in the surrounding area and indirectly weakens the carbon sequestration function of edge vegetation through the increase of edge and the influx of disturbance species. The NPP calculation method and results presented in this study can be applied to quantitative and qualitative impact assessment before and after development and to greenhouse-gas-related policies in environmental impact assessment.
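The CASA model estimates NPP as absorbed photosynthetically active radiation multiplied by a light-use efficiency reduced by temperature and water stress. Below is a minimal sketch of that calculation for a single pixel; the FPAR-NDVI linearization, the default maximum efficiency of 0.389 g C/MJ, and all input values are common choices used here for illustration, not necessarily those of the study.

```python
import numpy as np

def casa_npp(ndvi, par, t_stress, w_stress, eps_max=0.389):
    """Simplified CASA light-use-efficiency model:
    NPP = APAR * eps, with APAR = FPAR * PAR and
    eps = eps_max * temperature stress * water stress (eps_max in gC/MJ)."""
    ndvi = np.clip(ndvi, 0.0, 1.0)
    # One commonly used linear FPAR-NDVI relation, capped at 0.95
    fpar = np.clip(1.24 * ndvi - 0.168, 0.0, 0.95)
    apar = fpar * par                          # MJ/m^2 of absorbed PAR
    eps = eps_max * t_stress * w_stress        # realized light-use efficiency, gC/MJ
    return apar * eps                          # gC/m^2 over the PAR accumulation period

# Hypothetical monthly pixel values
npp = casa_npp(ndvi=0.7, par=250.0, t_stress=0.9, w_stress=0.8)
print(f"NPP = {npp:.1f} g C/m^2/month")
```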

Effects of Philanthropy Education on Elementary School Students in Korea : Analysis Using a Multiple Convergence Model (나눔교육을 통한 아동의 변화 연구: Multiple Convergence Model의 적용)

  • Kang, Chul-Hee;Kim, Mi-Ok;Lee, Jong-Eun;Lee, Kyoung-Eun
    • Korean Journal of Social Welfare
    • /
    • v.59 no.4
    • /
    • pp.5-34
    • /
    • 2007
  • This study explores the effects of philanthropy education for elementary school students using a mixed method. To examine the effects, two different research approaches were applied to different data collected from different target groups on the same phenomenon: a) experimental designs to analyze students' change (prosocial behaviors) following a philanthropy education program delivered in a summer camp (43 participants) and in elementary schools (162 students); and b) qualitative analysis of students' perceptual, attitudinal, and behavioral changes based on students' diaries and memoranda (66 participants) and intensive interviews with teachers (5 teachers) and parents (4 mothers). The analysis of both quantitative and qualitative results shows that philanthropy education affects students' change in diverse aspects, including prosocial behavior. First, the quantitative results show that for every component of prosocial behavior, such as helping, being kind, empathizing, sharing, protecting, and cooperating, students show positive changes after philanthropy education, and these changes are statistically significant. Second, the qualitative results show that students display positive changes in diverse aspects after philanthropy education. In particular, the quantitative results converge with the qualitative results from students, parents, and teachers. A further finding unique to the qualitative analysis is that students can undergo fundamental changes in their personality after philanthropy education, a change commonly confirmed by students, parents, and teachers. This study makes it possible to compare results and to validate, confirm, or corroborate quantitative results with qualitative findings on the effects of philanthropy education.
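The quantitative part of such a pre/post design is typically checked with a paired test on each prosocial-behavior component. The sketch below is a hypothetical illustration with simulated scores and a paired t-test; the paper does not state its exact statistical procedure, so this only exemplifies how such significance testing can be done.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical pre/post prosocial-behavior scores for one component (e.g., "sharing")
pre = rng.normal(loc=3.2, scale=0.6, size=43)          # 43 camp participants
post = pre + rng.normal(loc=0.4, scale=0.5, size=43)   # simulated positive change

t, p = stats.ttest_rel(post, pre)      # paired comparison of post vs. pre scores
print(f"t = {t:.2f}, p = {p:.4f}")     # p < 0.05 -> statistically significant change
```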

Shared Value Expectation on Lifelong Education (평생교육에 대한 공유기대가치 연구)

  • Kim, Chul-Ho
    • Journal of Digital Convergence
    • /
    • v.13 no.12
    • /
    • pp.325-336
    • /
    • 2015
  • The purpose of this study is to identify the components of users' shared value expectation (SVE) on lifelong education through convergent approaches. After reviewing primary data and collecting secondary data with quantitative and qualitative research methodologies, the components of users' SVE on lifelong education were categorized into 10 perspectives, 27 measured variables, and 81 questions. A confirmatory factor analysis showed the latent measurement model to be reasonable, and internal reliability as well as construct, convergent, and discriminant validity were also confirmed as reasonable. From the viewpoint of building interdisciplinary theory, this research may help grasp users' SVE on lifelong education through interdisciplinary approaches. From a strategic viewpoint, this study may contribute both to understanding users' categorized value expectations and to planning and executing suitable programs that can meet those expectations. From a managerial viewpoint, the results may help measure the effectiveness of SVE on lifelong education quantitatively.
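Internal reliability of the kind reported here is commonly summarized with Cronbach's alpha over the items of each measured variable. The following is a minimal sketch of that computation on simulated Likert-type responses; it illustrates the statistic rather than reproducing the study's data or software.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

# Hypothetical responses to the three questions of one measured variable
rng = np.random.default_rng(2)
latent = rng.normal(size=200)
items = np.column_stack([latent + rng.normal(scale=0.5, size=200) for _ in range(3)])
print(f"alpha = {cronbach_alpha(items):.2f}")   # values above ~0.7 indicate acceptable reliability
```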

A Study on the Scale Development of Clothing Consumption Value for Male Consumers -Focused on the Purchase Behavior in Fashion Multi-brand Store and Tailor Shop- (남성 소비자의 의복 소비가치 척도 개발 연구 -의류편집매장, 맞춤정장매장 구매행동을 중심으로-)

  • Kim, Tae Youn;Lee, Yoon-Jung
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.39 no.6
    • /
    • pp.885-898
    • /
    • 2015
  • This study develops scales to measure clothing consumption values for Korean male consumers. It conducted qualitative and quantitative research to explore new clothing consumption values among men and to investigate the measurement of clothing consumption values empirically. For the qualitative research, in-depth interviews and focus group interviews were conducted with 20 Korean men in their 20s-40s who had purchase experience with the two store types in Korean men's fashion. An analysis of the qualitative data based on grounded theory approaches identified 6 factors and 15 items. For the empirical research, a questionnaire consisting of 9 factors and 46 items was developed from the grounded theory results and prior studies. The final measurement scale was based on 651 responses used in exploratory factor analysis (EFA) and confirmatory factor analysis (CFA); all subjects were in their 20s-40s. The CFA results suggested 4 factors and 18 items, showing acceptable construct and discriminant validity. Therefore, this study confirms that clothing consumption value for Korean male consumers consists of ostentatious and brand value, epistemic and possession value, conditional value, and reasonable value. These constructs will provide critical insight into understanding and segmenting Korean male consumers.
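Scale development of this kind usually starts from an exploratory factor analysis of the item pool before confirmatory modeling. The sketch below is a hypothetical illustration of EFA with varimax rotation on synthetic questionnaire data of the same shape (651 responses, 18 items, 4 factors); the data-generating loadings are invented and do not reflect the study's items.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic stand-in for the 651 questionnaire responses; the real items measured
# ostentatious/brand, epistemic/possession, conditional, and reasonable value.
n, n_items, n_factors = 651, 18, 4
loadings = rng.normal(scale=0.8, size=(n_factors, n_items))
scores = rng.normal(size=(n, n_factors))
X = scores @ loadings + rng.normal(scale=0.5, size=(n, n_items))

fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
fa.fit(StandardScaler().fit_transform(X))

# Items loading strongly (|loading| > 0.4) on each factor suggest which questions to retain
print(np.round(fa.components_, 2))
```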

A Framework for the Comparative Study of Local Social Policy in the Post-Industrial Era (후기산업사회 지역복지정책의 발달원인에 관한 이론적 고찰)

  • Jang, Dong-Ho
    • Korean Journal of Social Welfare
    • /
    • v.59 no.3
    • /
    • pp.229-252
    • /
    • 2007
  • Traditional quantitative approaches to comparative social policy research on the welfare state have mostly focused on cross-national variations in social policy. More recently, they have attempted to account for disparities in the local provision of social policy. However, by relying heavily on traditional theories of welfare state development (e.g., industrialism theory, the power resource approach, and the state-centric thesis), most of them seem to have explained local variation from a central or national perspective, thereby largely ignoring the local perspective. Furthermore, their theoretical bases have been derived exclusively from the social context of the post-war era. In tackling these issues, this study aims at localizing and updating the theoretical framework of local welfare policy for the post-industrial age. The framework provided in this study calls for a shift in theoretical perspective toward more local and newer approaches (e.g., new social risks, new public management, and intergovernmental relations).

Comparative Evaluation between Administrative and Watershed Boundary in Carbon Sequestration Monitoring - Towards UN-REDD for Mt. Geum-gang of North Korea - (탄소 저장량 감시에서 배수구역과 행정구역의 비교 평가 - 금강산에 대한 UN-REDD 대응 차원에서 -)

  • Kim, Jun-Woo;Um, Jung-Sup
    • Journal of Environmental Impact Assessment
    • /
    • v.22 no.5
    • /
    • pp.439-454
    • /
    • 2013
  • UN-REDD (United Nations programme on Reducing Emissions from Deforestation and forest Degradation) is currently emerging as an important mechanism for reducing carbon dioxide emissions related to deforestation. Although the administrative boundary has gained worldwide recognition as the typical monitoring unit in GHG (Greenhouse Gas) reduction projects, this approach does not provide realistic evidence for carbon sequestration monitoring under UN-REDD, such as meaningful comparison of land use patterns among watershed boundaries, interpretation of carbon density distribution trends, calculation of opportunity cost, and leakage management. This research proposes a more objective and quantitative comparative evaluation framework for carbon sequestration monitoring between administrative and watershed boundary approaches. Mt. Geumgang in North Korea was selected as the survey area, and an exhaustive and realistic comparison of carbon sequestration between the two approaches was conducted based on change detection using TM satellite images. The drainage-boundary approach was able to identify more detailed area-wide patterns of carbon distribution than the traditional administrative one, such as estimates of the state and trends, including historical trends, of land use/land cover and carbon density in Mt. Geumgang. Distinctive trends of change in carbon sequestration were identified across the watershed boundaries, ranging from 4.0% to 34.8%, while less than a 1% difference was observed among the administrative boundaries, which all showed changes of almost 21-22%. It is anticipated that this research output can be used as a valuable reference to support more scientific and objective decision-making when introducing the watershed boundary as the carbon sequestration monitoring unit.
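The comparison between boundary schemes ultimately rests on zonal statistics of carbon density change derived from change detection on the two image dates. The following is a minimal sketch of that per-zone aggregation on synthetic rasters; the zone layouts and carbon values are hypothetical and stand in for classified TM imagery.

```python
import numpy as np

def zonal_carbon_change(carbon_t1, carbon_t2, zones):
    """Mean carbon-density change per monitoring zone.
    carbon_t1/t2: per-pixel carbon density (e.g., t C/ha) at two dates,
    zones: integer raster of zone IDs (watershed or administrative units)."""
    changes = {}
    for z in np.unique(zones):
        mask = zones == z
        changes[int(z)] = float(np.mean(carbon_t2[mask] - carbon_t1[mask]))
    return changes

# Hypothetical 100x100 rasters: two dates and two alternative zoning schemes
rng = np.random.default_rng(4)
c1 = rng.uniform(40, 120, size=(100, 100))
c2 = c1 + rng.normal(scale=5, size=(100, 100))
watersheds = np.indices((100, 100))[0] // 25    # 4 drainage zones
districts = np.indices((100, 100))[1] // 50     # 2 administrative zones

print("watershed zones:", zonal_carbon_change(c1, c2, watersheds))
print("administrative zones:", zonal_carbon_change(c1, c2, districts))
```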