• Title/Summary/Keyword: Analysis Techniques (해석기법)


Shape Scheme and Size Discrete Optimum Design of Plane Steel Trusses Using Improved Genetic Algorithm (개선된 유전자 알고리즘을 이용한 평면 철골트러스의 형상계획 및 단면 이산화 최적설계)

  • Kim, Soo-Won;Yuh, Baeg-Youh;Park, Choon-Wok;Kang, Moon-Myung
    • Journal of Korean Association for Spatial Structures / v.4 no.2 s.12 / pp.89-97 / 2004
  • The objective of this study is the development of a scheme and discrete optimum design algorithm based on the genetic algorithm. The algorithm can perform both scheme and size optimum design of plane trusses, and it was implemented in a computer program. For the optimum design, the objective function is the weight of the structure and the constraints are limits on loads and serviceability. The basic search method is the genetic algorithm, which is known to be very efficient for discrete optimization. However, its application to complicated structures has been limited by the extreme amount of time needed for the many required structural analyses: the genetic operations themselves take virtually no time, but the evolutionary process demands a tremendous number of structural analyses, which makes applying the genetic algorithm to complicated structures extremely difficult, if not impossible. This study addresses the problem by introducing size and scheme genetic operators into the genetic algorithm to complement the evolutionary process. The resulting method is very efficient in the approximate analysis and the scheme and size optimization of plane truss structures, and it considerably reduces structural analysis time. Combining scheme and size discrete optimization in the genetic algorithm is what makes the practical discrete optimum design of plane truss structures possible. The efficiency and validity of the developed discrete optimum design algorithm were verified by applying it to various optimum design examples: plane Pratt, Howe, and Warren trusses.
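As a rough illustration of the discrete sizing idea described above (not the authors' scheme-and-size algorithm), the following sketch runs a minimal genetic algorithm over a hypothetical catalogue of member sections, minimizing a weight proxy under a penalized stress constraint. All numbers (sections, lengths, loads, limits) are invented:

```python
import random

# Hypothetical discrete section catalogue: cross-sectional areas (cm^2).
SECTIONS = [5.0, 7.5, 10.0, 15.0, 20.0, 30.0]
N_MEMBERS = 5
LENGTHS = [3.0, 3.0, 4.2, 4.2, 3.0]     # member lengths (m), assumed
LOADS = [40.0, 55.0, 70.0, 70.0, 55.0]  # required member capacities (kN), assumed
STRESS_LIMIT = 10.0                     # allowable stress (kN/cm^2), assumed

def fitness(chrom):
    # Weight proxy: sum(area * length); a large penalty for stress violations
    # stands in for the structural-analysis constraint check.
    weight = sum(SECTIONS[g] * L for g, L in zip(chrom, LENGTHS))
    penalty = 0.0
    for g, P in zip(chrom, LOADS):
        stress = P / SECTIONS[g]
        if stress > STRESS_LIMIT:
            penalty += 1000.0 * (stress - STRESS_LIMIT)
    return weight + penalty

def evolve(pop_size=30, generations=80, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(SECTIONS)) for _ in range(N_MEMBERS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]            # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_MEMBERS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # mutation: re-draw one gene
                child[rng.randrange(N_MEMBERS)] = rng.randrange(len(SECTIONS))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

In the paper's setting, evaluating `fitness` would require a full structural analysis per individual, which is exactly the cost the proposed operators aim to reduce.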


Introduction of Denitrification Method for Nitrogen and Oxygen Stable Isotopes (δ15N-NO3 and δ18O-NO3) in Nitrate and Case Study for Tracing Nitrogen Source (탈질미생물을 이용한 질산성 질소의 산소 및 질소 동위원소 분석법 소개)

  • Lim, Bo-La;Kim, Min-Seob;Yoon, Suk-Hee;Park, Jaeseon;Park, Hyunwoo;Chung, Hyen-Mi;Choi, Jong-Woo
    • Korean Journal of Ecology and Environment / v.50 no.4 / pp.459-469 / 2017
  • Nitrogen (N) loading from domestic, agricultural, and industrial sources can lead to excessive growth of macrophytes or phytoplankton in aquatic environments. Many studies have used stable isotope ratios to identify anthropogenic nitrogen in aquatic systems, a useful method for studying the nitrogen cycle. In this study, to evaluate the precision and accuracy of the denitrifying bacteria method (Pseudomonas chlororaphis ssp. aureofaciens, ATCC 13985), three reference materials (IAEA-NO-3 (potassium nitrate, KNO3), USGS34 (potassium nitrate, KNO3), and USGS35 (sodium nitrate, NaNO3)) were each analyzed five times. The measured δ15N-NO3 and δ18O-NO3 values of IAEA-NO-3, USGS34, and USGS35 were δ15N: 4.7±0.1‰ and δ18O: 25.6±0.5‰, δ15N: -1.8±0.1‰ and δ18O: -27.8±0.4‰, and δ15N: 2.7±0.2‰ and δ18O: 57.5±0.7‰, respectively, all within the recommended ranges of analytical uncertainty. We also investigated the isotope values of potential nitrogen sources (soil, synthetic fertilizer, and organic animal manure) and the temporal patterns of δ15N-NO3 and δ18O-NO3 values in river samples from May to December. The δ15N-NO3 and δ18O-NO3 values were enriched in December, suggesting that organic animal manure is one of the main N sources in those areas. The current study demonstrates the reliability of the denitrifying bacteria method and the usefulness of stable isotopic techniques for tracing anthropogenic nitrogen sources in freshwater ecosystems.
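The δ notation underlying these measurements is δ (‰) = (R_sample / R_standard − 1) × 1000. The snippet below applies it to five invented replicate 15N/14N ratios against the AIR standard and reports the mean ± standard deviation, mimicking the precision check described above (the replicate values are hypothetical, chosen to land near IAEA-NO-3's accepted δ15N of +4.7‰):

```python
import statistics

R_AIR_N2 = 0.0036765  # 15N/14N of atmospheric N2 (AIR), the δ15N reference

def delta_per_mil(r_sample, r_standard):
    # δ (‰) = (R_sample / R_standard - 1) * 1000
    return (r_sample / r_standard - 1.0) * 1000.0

# Five hypothetical replicate ratio measurements of a reference material
replicates = [0.0036938, 0.0036940, 0.0036936, 0.0036939, 0.0036937]
deltas = [delta_per_mil(r, R_AIR_N2) for r in replicates]

mean_d = statistics.mean(deltas)
sd_d = statistics.stdev(deltas)
print(f"δ15N = {mean_d:.1f} ± {sd_d:.2f} ‰")
```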

Parameters Estimation of Clark Model based on Width Function (폭 함수를 기반으로 한 Clark 모형의 매개변수 추정)

  • Park, Sang Hyun;Kim, Joo-Cheol;Jung, Kwansue
    • Journal of Korea Water Resources Association / v.46 no.6 / pp.597-611 / 2013
  • This paper presents a methodology for constructing the time-area curve via the width function and thereby rationally estimating the time of concentration and storage coefficient of the Clark model within the framework of the method of moments. To this end, the time-area curve is built by rescaling the grid-based width function under the assumption of pure translation, and analytical expressions for the two parameters of the Clark model are then derived in terms of the method of moments. The methodology based on these analytical expressions is compared with (1) the traditional optimization method for the Clark model provided by HEC-1, in which a symmetric time-area curve is used and the difference between observed and simulated hydrographs is minimized, and (2) the same optimization method with the time-area curve replaced by the rescaled width function, with respect to the peak discharge and time to peak of the simulated direct runoff hydrographs and their coefficient of efficiency relative to the observed ones. The following points are worth emphasizing: (1) The optimization method of HEC-1 with the rescaled width function yields parameters that best reflect the observed runoff hydrograph with respect to peak discharge coordinates and coefficient of efficiency. (2) For better application of the Clark model, it is recommended to use a time-area curve capable of accounting for the irregular drainage structure of a river basin, such as the rescaled width function, instead of the symmetric time-area curve of HEC-1. (3) The moment-based methodology with the rescaled width function developed in this study also gives satisfactory simulation results in terms of peak discharge coordinates and coefficient of efficiency. In particular, the mean velocities estimated by this method, which characterize the translation effect of the time-area curve, are consistent with the field survey results at the points of interest in this study. (4) The moment-based methodology can be an effective tool for the quantitative assessment of the translation and storage effects of a natural river basin. (5) The runoff hydrographs simulated by the moment-based methodology tend to be more right-skewed than the observed ones and have lower peaks, presumably because only one mean velocity is considered in the parameter estimation. Further research is required to incorporate the hydrodynamic heterogeneity between hillslope and channel network into the construction of the time-area curve.
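A minimal sketch of the moment-based estimation described above, with invented numbers: the width function is rescaled by an assumed mean velocity to give a time-area curve, the time of concentration is taken as its largest travel time, and the storage coefficient K follows from the first-moment relation of the Clark model, m1_IUH = m1_TA + K:

```python
# Hypothetical grid-based width function: number of channel cells at each
# flow distance (km) from the outlet.
distances_km = [0, 2, 4, 6, 8, 10, 12]
width = [1, 3, 6, 8, 5, 3, 1]  # cells per distance class, invented

v = 1.5  # assumed mean velocity (km/h) -> pure translation

# Rescale distance to travel time: t = x / v (the time-area curve abscissa).
times = [x / v for x in distances_km]
total = sum(width)

tc = max(times)                                           # time of concentration (h)
m1_ta = sum(t * w for t, w in zip(times, width)) / total  # time-area centroid (h)

# Method of moments: for the Clark IUH, m1_IUH = m1_TA + K, so K follows from
# the first moment of the observed IUH (an assumed value here).
m1_obs = 7.0  # hypothetical first moment of the observed IUH (h)
K = m1_obs - m1_ta

print(f"Tc = {tc:.2f} h, K = {K:.2f} h")
```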

Geomagnetic Paleosecular Variation in the Korean Peninsula during the First Six Centuries (기원후 600년간 한반도 지구 자기장 고영년변화)

  • Park, Jong kyu;Park, Yong-Hee
    • The Journal of Engineering Geology / v.32 no.4 / pp.611-625 / 2022
  • One application of geomagnetic paleosecular variation (PSV) is the age dating of archeological remains (i.e., the archeomagnetic dating technique). This application requires a local PSV model that reflects the regional differences of non-dipole fields. Until now, the tentative Korean paleosecular variation (t-KPSV), calculated from the SW Japanese PSV (JPSV), has been applied as the reference curve for individual archeomagnetic directions in Korea, but it is less reliable due to regional differences in the non-dipole magnetic field. Here, we present PSV curves for AD 1 to 600, corresponding to the Korean Three Kingdoms Period (including the Proto Three Kingdoms), using the results of archeomagnetic studies in the Korean Peninsula and published research data. We then compare our PSV with global geomagnetic prediction models and the t-KPSV. A total of 49 reliable archeomagnetic directional data from 16 regions were compiled for our PSV. In detail, each dataset showed statistical consistency (N > 6, α95 < 7.8°, and k > 57.8) and had radiocarbon or archeological ages in the range of AD 1 to 600 with an error range of less than ±200 years. The compiled PSV for the first six centuries (KPSV0.6k) showed declinations of 341.7° to 20.1° and inclinations of 43.5° to 60.3°. Compared to the t-KPSV, our curve revealed different variation patterns in both declination and inclination. On the other hand, KPSV0.6k and the global geomagnetic prediction models (ARCH3K.1, CALS3K.4, and SED3K.1) revealed consistent variation trends during the first six centuries; in particular, ARCH3K.1 fit our KPSV0.6k best. These results indicate that the contribution of the non-dipole field to Korea and Japan is quite different, despite their geographical proximity, and that compiling archeomagnetic data from Korean territory is essential for building a reliable PSV curve for age dating. Lastly, we double-checked the reliability of KPSV0.6k by showing a good fit of newly acquired, age-controlled archeomagnetic data to our curve.
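The acceptance criteria quoted above (N > 6, α95 < 7.8°, k > 57.8) come from Fisher statistics on site-mean directions. A minimal sketch with hypothetical directions computes the Fisher mean, precision parameter k, and α95:

```python
import math

def fisher_stats(dirs_deg):
    # dirs_deg: list of (declination, inclination) pairs in degrees.
    xs = ys = zs = 0.0
    for d, i in dirs_deg:
        d, i = math.radians(d), math.radians(i)
        xs += math.cos(i) * math.cos(d)
        ys += math.cos(i) * math.sin(d)
        zs += math.sin(i)
    n = len(dirs_deg)
    r = math.sqrt(xs**2 + ys**2 + zs**2)      # resultant vector length
    dec = math.degrees(math.atan2(ys, xs)) % 360.0
    inc = math.degrees(math.asin(zs / r))
    k = (n - 1) / (n - r)                     # Fisher precision parameter
    # 95% confidence cone: cos(a95) = 1 - (n-r)/r * ((1/0.05)^(1/(n-1)) - 1)
    cos_a95 = 1.0 - (n - r) / r * ((1 / 0.05) ** (1 / (n - 1)) - 1.0)
    a95 = math.degrees(math.acos(cos_a95))
    return dec, inc, k, a95

# Hypothetical, tightly clustered site directions (declination, inclination)
site = [(352.1, 55.2), (354.8, 53.9), (350.3, 56.5), (353.5, 54.4),
        (351.9, 55.8), (355.2, 54.9), (352.7, 56.1)]
dec, inc, k, a95 = fisher_stats(site)
accepted = len(site) > 6 and a95 < 7.8 and k > 57.8
print(f"D={dec:.1f}, I={inc:.1f}, k={k:.0f}, a95={a95:.1f}, accepted={accepted}")
```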

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.221-241 / 2018
  • Deep learning has recently been getting attention. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the convolutional neural network (CNN). CNN is characterized by dividing the input image into small sections, recognizing partial features, and combining them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been limited to image recognition and natural language processing. The use of deep learning techniques for business problems is still at an early research stage. If their performance is proved, they can be applied to traditional business problems such as marketing response prediction, fraudulent transaction detection, bankruptcy prediction, and so on. It is therefore a meaningful experiment to assess the possibility of solving business problems with deep learning, based on the case of online shopping companies, which have big data, find it relatively easy to identify customer behavior, and can derive high utilization value from it. In online shopping companies especially, the competitive environment is changing rapidly and becoming more intense, so the analysis of customer behavior for maximizing profit is becoming more and more important. In this study, we propose a 'CNN model of heterogeneous information integration' using CNN as a way to improve the prediction of customer behavior in online shopping enterprises.
The proposed model learns from both structured and unstructured information through a convolutional neural network combined with a multi-layer perceptron structure. To optimize its performance, we design and evaluate three architectural components ('heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design') and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and VOC data of a specific online shopping company in Korea. The data cover 47,947 customers who registered at least one VOC in January 2011 (one month); for these customers we use their customer profiles, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month. The experiment is divided into two stages. In the first stage, we evaluate the three architectural components that affect the performance of the proposed model and select optimal parameters; we then evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that the use of unstructured information contributes to predicting customer behavior, and that CNN can be applied to business problems as well as to image recognition and natural language processing.
The experiments also confirm that CNN is effective in understanding and interpreting the meaning of context in textual VOC data. It is significant that this empirical study, based on the actual data of an e-commerce company, can extract meaningful information for customer behavior prediction from VOC data written directly by customers in text format. Finally, through the various experiments, the proposed model provides useful information for future research on parameter selection and performance.
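A toy sketch of the heterogeneous-information idea (not the authors' architecture or data): VOC text is embedded token by token, a 1D convolution with max-over-time pooling extracts fixed-length text features, and these are concatenated with structured features before a logistic output. Everything here (vocabulary, weights, features) is invented and untrained:

```python
import math
import random

rng = random.Random(0)

# Assumed toy embeddings: each VOC token becomes a D_EMB-dimensional vector,
# so a text is a sequence the convolution can slide over.
D_EMB, KERNEL, N_FILT = 4, 2, 3
vocab = {w: [rng.uniform(-1, 1) for _ in range(D_EMB)]
         for w in "refund late broken great fast thanks".split()}

filters = [[[rng.uniform(-0.5, 0.5) for _ in range(D_EMB)]
            for _ in range(KERNEL)] for _ in range(N_FILT)]

def text_features(tokens):
    # 1D convolution over the token sequence, then max-over-time pooling:
    # each filter yields one feature regardless of text length.
    emb = [vocab[t] for t in tokens if t in vocab]
    feats = []
    for f in filters:
        best = -1e9
        for start in range(len(emb) - KERNEL + 1):
            s = sum(f[k][j] * emb[start + k][j]
                    for k in range(KERNEL) for j in range(D_EMB))
            best = max(best, math.tanh(s))
        feats.append(best)
    return feats

def predict(tokens, structured, weights, bias):
    # Concatenate pooled text features with structured features (e.g. purchase
    # count, refund rate) and apply a logistic output layer.
    x = text_features(tokens) + structured
    z = sum(w * v for w, v in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

w = [rng.uniform(-1, 1) for _ in range(N_FILT + 2)]  # untrained, illustrative
p = predict("refund late broken".split(), [0.1, 0.8], w, 0.0)
print(round(p, 3))
```

A real model would learn the embeddings, filters, and output weights jointly from the labeled binary-classification targets.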

Geochemical Equilibria and Kinetics of the Formation of Brown-Colored Suspended/Precipitated Matter in Groundwater: Suggestion to Proper Pumping and Turbidity Treatment Methods (지하수내 갈색 부유/침전 물질의 생성 반응에 관한 평형 및 반응속도론적 연구: 적정 양수 기법 및 탁도 제거 방안에 대한 제안)

  • 채기탁;윤성택;염승준;김남진;민중혁
    • Journal of the Korean Society of Groundwater Environment / v.7 no.3 / pp.103-115 / 2000
  • The formation of brown-colored precipitates is one of the serious problems frequently encountered in the development and supply of groundwater in Korea, because the affected water exceeds drinking water standards in terms of color, taste, turbidity, and dissolved iron concentration, and often causes scaling problems within the water supply system. In groundwaters from the Pajoo area, brown precipitates typically form within a few hours after pumping-out. In this paper, we examine the formation of the brown precipitates using equilibrium thermodynamic and kinetic approaches in order to understand the origin and geochemical pathway of turbidity generation in groundwater. The results are used to suggest both a proper pumping technique to minimize the formation of precipitates and an optimal design of water treatment to improve water quality. The bedrock groundwater in the Pajoo area belongs to the Ca-HCO3 type, evolved through water/rock (gneiss) interaction. Based on SEM-EDS and XRD analyses, the precipitates are identified as amorphous, Fe-bearing oxides or hydroxides. Multi-step filtration with pore sizes of 6, 4, 1, 0.45 and 0.2 μm shows that the precipitates mostly fall in the colloidal size range (1 to 0.45 μm), but about 81% of their mass (weight) is concentrated in the range of 1 to 6 μm. The large amounts of dissolved iron possibly originated from the dissolution of clinochlore in cataclasite, which contains up to 3 wt.% Fe. Saturation index calculations (using the computer code PHREEQC), together with the examination of pH-Eh stability relations, also indicate that the final precipitate is an Fe-oxy-hydroxide formed by the change of water chemistry (mainly oxidation) when Fe(II)-bearing, reduced groundwater is exposed to oxygen during pumping-out.
After pumping-out, the groundwater shows progressive decreases in pH, DO, and alkalinity with elapsed time, whereas turbidity first increases and then decreases. The decrease of dissolved Fe concentration with elapsed time after pumping-out is expressed by the regression equation Fe(II) = 10.1 exp(-0.0009t). The oxidation reaction caused by the influx of free oxygen during the pumping and storage of groundwater results in the formation of brown precipitates, which depends on time, PO2, and pH. To obtain drinkable water quality, therefore, the precipitates should be removed by filtering after stepwise storage and aeration in tanks of sufficient volume for sufficient time. The particle size distribution data also suggest that stepwise filtration would be cost-effective. To minimize scaling within wells, continued (if possible) pumping within the optimum pumping rate is recommended, because this is most effective at minimizing mixing between deep Fe(II)-rich water and shallow O2-rich water. Simultaneous pumping of shallow O2-rich water from different wells is also recommended.
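The reported regression has the form Fe(II) = a·exp(-bt), which can be fitted by log-linear least squares. The sketch below recovers a = 10.1 and b = 0.0009 from synthetic data generated with exactly those coefficients (the time series itself is not the study's data; units follow the paper's equation):

```python
import math

# Synthetic Fe(II) concentrations vs elapsed time t after pumping-out,
# generated from the paper's regression form Fe(II) = a * exp(-b t).
t = [0, 200, 400, 600, 800, 1000]
fe = [10.1 * math.exp(-0.0009 * ti) for ti in t]

# Log-linear least squares: ln(Fe) = ln(a) - b t
n = len(t)
y = [math.log(c) for c in fe]
tbar, ybar = sum(t) / n, sum(y) / n
slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) \
        / sum((ti - tbar) ** 2 for ti in t)
a = math.exp(ybar - slope * tbar)
b = -slope
print(f"Fe(II) = {a:.1f} * exp(-{b:.4f} t)")
```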


Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • Since the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided a very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content about their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, techniques for extracting, classifying, understanding, and assessing the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we described the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as an open API, searching tools, a DB-to-DB interface, or purchased content. The second phase is pre-processing to generate useful materials for meaningful analysis.
If garbage data are not removed, the results of social media analysis will not provide meaningful and useful business insights; to clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user id, content id, hit counts, reviews or replies, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized for reputation analysis. There are also various applications, such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major focus of this phase is to explain the results and help users comprehend their meaning; to the extent possible, deliverables from this phase should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds a 66.5% market share and has kept the No. 1 position in the Korean "ramen" business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified the content into more detailed categories such as marketing features, environment, and reputation.
In these phases, we used free software such as the TM, KoNLP, ggplot2, and plyr packages of the R project. As a result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-colored examples built with open library packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. The heat map explains the movement of sentiment or volume across categories in a time matrix, showing density of color over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation in a hierarchical structure, since a tree map can present buzz volume and sentiment in a visualized result for a certain period. This case study offers real-world business insights from market sensing and demonstrates to practically minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
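As a minimal sketch of the "analyzing" phase (a Python stand-in for the R-based tooling the study actually used), a tiny lexicon-based scorer classifies invented posts as positive, negative, or neutral:

```python
# Minimal lexicon-based sentiment scoring sketch; the lexicon and posts are
# invented examples, not the study's Korean-language data or resources.
POS = {"delicious", "tasty", "love", "best"}
NEG = {"salty", "bland", "expensive", "worst"}

def score(post):
    # Net sentiment: positive hits minus negative hits.
    tokens = post.lower().split()
    return sum(t in POS for t in tokens) - sum(t in NEG for t in tokens)

posts = [
    "love this ramen so delicious",
    "too salty and expensive",
    "best noodle ever tasty broth",
]
labels = ["positive" if score(p) > 0 else
          "negative" if score(p) < 0 else "neutral" for p in posts]
print(labels)
```

In the paper's pipeline, the aggregated scores per category and time period would then feed the heat maps and valence tree maps.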

An Analytical Approach Using Topic Mining for Improving the Service Quality of Hotels (호텔 산업의 서비스 품질 향상을 위한 토픽 마이닝 기반 분석 방법)

  • Moon, Hyun Sil;Sung, David;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.21-41 / 2019
  • Thanks to the rapid development of information technologies, the data available on the Internet have grown rapidly. In this era of big data, many studies have attempted to offer insights from data analysis. In the tourism and hospitality industry, many firms and studies have paid attention to online reviews on social media because of their large influence over customers. As tourism is an information-intensive industry, the effect of these information networks on social media platforms is more remarkable than for any other type of media. However, there are limitations to the service quality improvements that can be made from opinions on social media platforms. Users represent their opinions as text, images, and so on, so the raw review data are unstructured; moreover, these data sets are too big for humans to extract new information and hidden knowledge from. To use them for business intelligence and analytics applications, proper big data techniques such as natural language processing and data mining are needed. This study suggests an analytical approach that directly yields insights from these reviews to improve the service quality of hotels. Our proposed approach consists of topic mining to extract the topics contained in the reviews and decision tree modeling to explain the relationship between topics and ratings. Topic mining refers to methods for finding a group of words in a collection of documents that represents a document. Among several topic mining methods, we adopted the Latent Dirichlet Allocation (LDA) algorithm, which is considered the most universal. However, LDA alone cannot find insights that improve service quality because it cannot relate topics to ratings. To overcome this limitation, we also use the Classification and Regression Tree (CART) method, a kind of decision tree technique.
Through the CART method, we can find which topics are related to positive or negative ratings of a hotel and visualize the results. This study therefore investigates an analytical approach for improving hotel service quality from unstructured review data. Through experiments on four hotels in Hong Kong, we find the strengths and weaknesses of each hotel's services and suggest improvements to aid customer satisfaction. From positive reviews in particular, we find what these hotels should maintain: for example, compared with the other hotels, one hotel's positive reviews highlight its good location and room condition. In contrast, from negative reviews we find what they should modify in their services: for example, one hotel should improve the soundproofing of its rooms. These results show that our approach is useful for finding insights into the service quality of hotels; from an enormous body of review data, it can provide practical suggestions for hotel managers to improve their service quality. In the past, studies for improving service quality relied on surveys or interviews of customers, but these methods are often costly and time-consuming, and the results may be distorted by biased sampling or untrustworthy answers. The proposed approach directly obtains honest feedback from customers' online reviews and draws insights through a form of big data analysis, so it is a more useful tool for overcoming the limitations of surveys or interviews. Moreover, our approach can easily obtain service quality information for other hotels or tourism services, because it needs only open online reviews and ratings as input data. Furthermore, its performance will improve if other structured and unstructured data sources are added.
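A toy sketch of the pipeline's logic, not the study's implementation: keyword-overlap topic assignment stands in for LDA, and a one-split "stump" on mean ratings stands in for CART, linking a topic's presence to positive or negative ratings. Reviews, topics, and ratings are all invented:

```python
# Hypothetical topic-keyword sets (what LDA would discover from the corpus).
TOPICS = {
    "location": {"location", "subway", "close", "walk"},
    "room":     {"room", "bed", "noise", "soundproof", "clean"},
}

# Invented (review text, star rating) pairs.
reviews = [
    ("great location close to subway", 5),
    ("room was noisy poor soundproof", 2),
    ("clean room comfortable bed", 4),
    ("location perfect easy walk everywhere", 5),
    ("thin walls noise all night in room", 1),
]

def topics_of(text):
    # Assign every topic whose keyword set intersects the review's words.
    words = set(text.split())
    return {t for t, kws in TOPICS.items() if words & kws}

def stump_gain(topic):
    # Decision-stump view: mean rating with the topic minus mean without it.
    with_t = [r for txt, r in reviews if topic in topics_of(txt)]
    without = [r for txt, r in reviews if topic not in topics_of(txt)]
    if not with_t or not without:
        return 0.0
    return sum(with_t) / len(with_t) - sum(without) / len(without)

for t in TOPICS:
    print(t, round(stump_gain(t), 2))
```

A positive gain marks a topic associated with strengths (here, location); a negative gain marks one associated with complaints (here, room condition and soundproofing).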

Adaptive Row Major Order: a Performance Optimization Method of the Transform-space View Join (적응형 행 기준 순서: 변환공간 뷰 조인의 성능 최적화 방법)

  • Lee Min-Jae;Han Wook-Shin;Whang Kyu-Young
    • Journal of KIISE:Databases / v.32 no.4 / pp.345-361 / 2005
  • A transform-space index indexes objects represented as points in the transform space. An advantage of a transform-space index is that the optimization of join algorithms using such indexes becomes relatively simple; the disadvantage is that these algorithms cannot be applied to original-space indexes such as the R-tree. As a way of overcoming this disadvantage, the authors earlier proposed the transform-space view join algorithm, which joins two original-space indexes in the transform space through the notion of the transform-space view. A transform-space view is a virtual transform-space index that allows us to perform the join in the transform space using original-space indexes. In a transform-space view join algorithm, the order of accessing disk pages, for which various space-filling curves could be used, has a significant impact on join performance. In this paper, we propose a new space-filling curve called the adaptive row major order (ARM order). The ARM order adaptively controls the order of accessing pages and significantly reduces the one-pass buffer size (the minimum buffer size required to guarantee one disk access per page) and the number of disk accesses for a given buffer size. Through analysis and experiments, we verify the excellence of the ARM order when used with the transform-space view join. The transform-space view join with the ARM order always outperforms existing methods in both measures used: the one-pass buffer size and the number of disk accesses for a given buffer size. Compared to other conventional space-filling curves used with the transform-space view join, it reduces the one-pass buffer size by up to 21.3 times and the number of disk accesses by up to 74.6%. Compared to existing spatial join algorithms that use R-trees in the original space, it reduces the one-pass buffer size by up to 15.7 times and the number of disk accesses by up to 65.3%.
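The ARM order itself is not reproduced here, but the effect the paper measures (disk accesses for a given buffer size depend on the page-access order) can be illustrated with a stand-in experiment: the same set of neighbouring-cell accesses over an 8x8 grid of pages, visited in two different orders, under a 4-page LRU buffer:

```python
from collections import OrderedDict

def disk_accesses(order, buffer_pages):
    # Simulate an LRU page buffer and count misses (disk accesses).
    buf = OrderedDict()  # page -> None; least recently used at the front
    misses = 0
    for page in order:
        if page in buf:
            buf.move_to_end(page)
        else:
            misses += 1
            if len(buf) >= buffer_pages:
                buf.popitem(last=False)  # evict least recently used page
            buf[page] = None
    return misses

# A hypothetical join workload: for every cell in an n x n grid of pages,
# touch the cell and its right-hand neighbour (one access pair per cell).
n = 8
def pair_accesses(key):
    order = []
    for x, y in sorted(((x, y) for x in range(n) for y in range(n)), key=key):
        order.append((x, y))
        if x + 1 < n:
            order.append((x + 1, y))
    return order

row_major = pair_accesses(lambda c: (c[1], c[0]))  # pairs stay within a row
col_major = pair_accesses(lambda c: (c[0], c[1]))  # pairs straddle columns
print(disk_accesses(row_major, 4), disk_accesses(col_major, 4))
```

With the locality-friendly order every page faults exactly once (64 misses), while the poor order refaults pages it evicted; an adaptive curve like the ARM order aims to keep such reuse distances within the buffer.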

A Characterization of Oil Sand Reservoir and Selections of Optimal SAGD Locations Based on Stochastic Geostatistical Predictions (지구통계 기법을 이용한 오일샌드 저류층 해석 및 스팀주입중력법을 이용한 비투멘 회수 적지 선정 사전 연구)

  • Jeong, Jina;Park, Eungyu
    • Economic and Environmental Geology / v.46 no.4 / pp.313-327 / 2013
  • In this study, three-dimensional geostatistical simulations of the McMurray Formation, the largest oil sand reservoir in the Athabasca area, Canada, were performed, and the optimal sites for steam assisted gravity drainage (SAGD) were selected based on the predictions. In the selection, factors related to the vertical extendibility of the steam chamber were considered as the criteria for an optimal site. For the predictions, 110 borehole data acquired from the study area were analyzed in the Markovian transition probability (TP) framework, and the three-dimensional distributions of the composing media were predicted stochastically with an existing TP-based geostatistical model. The potential of a specific medium at each position within the prediction domain was estimated from the ensemble probability over the multiple realizations. From the ensemble map, the cumulative thickness of the permeable media (i.e., breccia and sand) was analyzed, and the locations with the highest potential for SAGD applications were delineated. As a supporting criterion for an optimal SAGD site, the mean vertical extension of a unit permeable medium was also delineated through transition rate based computations. The mean vertical extension of the permeable media shows rough agreement with the cumulative thickness in its general distribution. However, the two distributions disagree distinctly at a few locations where the cumulative thickness is high due to a highly alternating juxtaposition of the permeable and less permeable media. This observation implies that the cumulative thickness alone may not be a sufficient criterion for an optimal SAGD site, and that the mean vertical extension of the permeable media needs to be considered jointly for sound selections.
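The transition rate based computation of mean vertical extension can be sketched as follows: with a facies log sampled every dz metres, the mean extension of facies i is dz / (1 − p_ii), where p_ii is the estimated self-transition probability. The borehole log below is invented, not McMurray data:

```python
from collections import defaultdict

dz = 0.5  # vertical sampling interval (m), assumed

# Invented upward facies log from a single borehole.
log = (["sand"] * 6 + ["mud"] * 2 + ["breccia"] * 3 + ["sand"] * 4 +
       ["mud"] * 5 + ["sand"] * 3 + ["breccia"] * 2 + ["mud"] * 4)

# Count upward transitions between successive samples.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(log, log[1:]):
    counts[a][b] += 1

for facies, row in sorted(counts.items()):
    total = sum(row.values())
    p_stay = row[facies] / total            # self-transition probability
    mean_ext = dz / (1.0 - p_stay)          # mean vertical extension (m)
    print(f"{facies}: p_stay={p_stay:.2f}, mean extension={mean_ext:.2f} m")
```

In the study this quantity, mapped across the domain, complements the cumulative permeable thickness when ranking candidate SAGD locations.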