• Title/Summary/Keyword: parameter analysis

Search results: 7,613

A Sensitivity Analysis of JULES Land Surface Model for Two Major Ecosystems in Korea: Influence of Biophysical Parameters on the Simulation of Gross Primary Productivity and Ecosystem Respiration (한국의 두 주요 생태계에 대한 JULES 지면 모형의 민감도 분석: 일차생산량과 생태계 호흡의 모사에 미치는 생물리모수의 영향)

  • Jang, Ji-Hyeon;Hong, Jin-Kyu;Byun, Young-Hwa;Kwon, Hyo-Jung;Chae, Nam-Yi;Lim, Jong-Hwan;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology / v.12 no.2 / pp.107-121 / 2010
  • We conducted a sensitivity test of the Joint UK Land Environment Simulator (JULES), investigating the influence of biophysical parameters on the simulation of gross primary productivity (GPP) and ecosystem respiration (RE) for two typical ecosystems in Korea. For this test, we employed whole-year observations of eddy-covariance fluxes measured in 2006 at two KoFlux sites: (1) a deciduous forest in complex terrain in Gwangneung and (2) a farmland with heterogeneous mosaic patches in Haenam. Our analysis showed that the simulated GPP was most sensitive to the maximum rate of RuBP carboxylation and leaf nitrogen concentration for both ecosystems. RE was sensitive to the wood biomass parameter for the deciduous forest in Gwangneung. For the mixed farmland in Haenam, however, RE was most sensitive to the maximum rate of RuBP carboxylation and leaf nitrogen concentration, like the simulated GPP. For both sites, the JULES model overestimated both GPP and RE when the default values of the input parameters were adopted. Considering that the leaf nitrogen concentration observed at the deciduous forest site was only about 60% of its default value, a significant portion of the model's overestimation can be attributed to this discrepancy in the input parameters. Our findings demonstrate that these key biophysical parameters should be evaluated carefully prior to any simulation and interpretation of ecosystem carbon exchange in Korea.
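The one-at-a-time perturbation procedure described above can be sketched as follows. Here `toy_gpp` is a made-up stand-in for the JULES GPP calculation (not its actual equations), and the default parameter values are invented for the example:

```python
def toy_gpp(vcmax, leaf_n):
    # Hypothetical stand-in for a land-surface GPP calculation: output
    # rises with carboxylation capacity and with leaf nitrogen. This is
    # NOT the JULES formulation, only an illustration of the procedure.
    return vcmax * (0.5 + 0.5 * leaf_n)

def oat_sensitivity(model, defaults, perturb=0.1):
    """One-at-a-time sensitivity: perturb each parameter by +/- `perturb`
    (relative) and report the normalized central-difference response."""
    base = model(**defaults)
    sens = {}
    for name, value in defaults.items():
        hi = model(**{**defaults, name: value * (1 + perturb)})
        lo = model(**{**defaults, name: value * (1 - perturb)})
        sens[name] = (hi - lo) / (2 * perturb * base)
    return sens

print(oat_sensitivity(toy_gpp, {"vcmax": 50.0, "leaf_n": 1.2}))
```

Ranking the resulting sensitivities identifies the parameters, such as the maximum carboxylation rate, that dominate the simulated fluxes.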

Reliability Assessment of Flexible InGaP/GaAs Double-Junction Solar Module Using Experimental and Numerical Analysis (유연 InGaP/GaAs 2중 접합 태양전지 모듈의 신뢰성 확보를 위한 실험 및 수치 해석 연구)

  • Kim, Youngil;Le, Xuan Luc;Choa, Sung-Hoon
    • Journal of the Microelectronics and Packaging Society / v.26 no.4 / pp.75-82 / 2019
  • Flexible solar cells have attracted enormous attention in recent years owing to their wide range of applications, including portable batteries, wearable devices, robotics, drones, and airplanes. In particular, demand for flexible silicon and compound-semiconductor solar cells with high efficiency and high reliability keeps increasing. In this study, we fabricated a flexible InGaP/GaAs double-junction solar module and analyzed the effects of wind speed and ambient temperature on the operating temperature of the solar cell using numerical simulation. The temperature distributions of the solar modules were analyzed for three wind speeds (0 m/s, 2.5 m/s, and 5 m/s) and two ambient temperatures (25℃ and 33℃). The flexibility of the module was also evaluated with bending tests and numerical bending simulation. When the wind speed was 0 m/s at 25℃, the maximum temperature of the solar cell reached 149.7℃. When the wind speed increased to 2.5 m/s, the cell temperature fell to 66.2℃, and at 5 m/s it dropped sharply to 48.3℃. Ambient temperature also influenced the operating temperature: when the ambient temperature increased to 33℃ at 2.5 m/s, the cell temperature rose only slightly, to 74.2℃, indicating that the most important parameter affecting cell temperature is heat dissipation driven by wind speed. Since the maximum cell temperatures are lower than the glass transition temperatures of the materials used, the likelihood of thermal deformation and degradation of the module is very low. The flexible solar module can be bent to a bending radius of 7 mm, showing relatively good bending capability. Neutral-plane analysis also indicated that the flexibility of the module can be further improved by locating the solar cell at the neutral plane.
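The reported wind-speed dependence is consistent with a simple lumped convective energy balance. The sketch below is not the paper's finite-element simulation; it assumes a common empirical wind correlation (h = 5.7 + 3.8v) and an invented absorbed power of 600 W/m², purely to illustrate the trend:

```python
def cell_temperature(t_amb_c, wind_speed_ms, absorbed_w_m2=600.0):
    """Steady-state cell temperature from a lumped energy balance:
    absorbed power equals convective loss h * (T_cell - T_amb).
    h = 5.7 + 3.8 * v is a common empirical wind correlation; the
    absorbed power value here is invented for illustration."""
    h = 5.7 + 3.8 * wind_speed_ms  # convective coefficient, W/(m^2 K)
    return t_amb_c + absorbed_w_m2 / h

# Reproduces the qualitative trend in the paper: still air runs hottest,
# and even a light wind cools the cell sharply.
for v in (0.0, 2.5, 5.0):
    print(v, round(cell_temperature(25.0, v), 1))
```

Because h grows linearly with wind speed, the temperature rise above ambient falls hyperbolically, which is why the first 2.5 m/s of wind removes most of the excess heat.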

Comparative Analysis of GNSS Precipitable Water Vapor and Meteorological Factors (GNSS 가강수량과 기상인자의 상호 연관성 분석)

  • Kim, Jae Sup;Bae, Tae-Suk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.4 / pp.317-324 / 2015
  • GNSS was first proposed for use in weather forecasting in the mid-1980s. It has since demonstrated practical value in GNSS meteorology, and related research is ongoing. Precipitable Water Vapor (PWV), calculated from the GNSS signal delays caused by the Earth's troposphere, represents the amount of water vapor in the atmosphere and is therefore widely used in the analysis of various weather phenomena, such as monitoring of weather conditions and detection of climate change. In this study, we calculated PWV using meteorological information from an Automatic Weather Station (AWS) together with GNSS data processing of a Continuously Operating Reference Station (CORS) in order to analyze the heavy snowfall in the Ulsan area in early 2014. Song's model was adopted for the weighted mean temperature (Tm), the most important parameter in the calculation of PWV. The study period totaled 56 days (February 2013 and February 2014). The average PWV for February 2014 was 11.29 mm, which is 11.34% lower than that of the heavy snowfall period. The average PWV for February 2013, when no heavy snowfall occurred, was 10.34 mm, 8.41% lower than that of February 2014. In addition, several meteorological factors obtained from the AWS were compared; the correlation with saturated vapor pressure calculated using the Magnus empirical formula was very low (0.29). The behavior of PWV tends to change with the precipitation type, namely snow or rain. PWV showed a sudden increase followed by a rapid drop about 6.5 hours before precipitation. We conclude that pattern analysis of GNSS PWV is an effective method for analyzing precursor phenomena of precipitation.
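The ZWD-to-PWV conversion referred to above is commonly written PWV = Π·ZWD with Π = 10⁶ / (ρw·Rv·(k₃/Tm + k₂′)). A minimal sketch, using the Bevis et al. (1992) linear Tm relation as a stand-in for Song's model (whose coefficients are not given in the abstract):

```python
def weighted_mean_temperature(ts_kelvin):
    # Bevis et al. (1992) linear relation, used here as a stand-in for
    # Song's Korea-specific Tm model (its coefficients are not given
    # in the abstract).
    return 70.2 + 0.72 * ts_kelvin

def pwv_mm(zwd_mm, tm_kelvin):
    """Convert zenith wet delay (mm) to precipitable water vapor (mm):
    PWV = Pi * ZWD, with Pi = 1e6 / (rho_w * Rv * (k3/Tm + k2'))."""
    rho_w = 1000.0  # density of liquid water, kg/m^3
    rv = 461.5      # specific gas constant of water vapor, J/(kg K)
    k2p = 0.221     # refractivity constant k2', K/Pa
    k3 = 3739.0     # refractivity constant k3, K^2/Pa
    pi_factor = 1.0e6 / (rho_w * rv * (k3 / tm_kelvin + k2p))
    return pi_factor * zwd_mm

tm = weighted_mean_temperature(275.0)   # winter surface temperature, ~2 degC
print(round(pwv_mm(70.0, tm), 2))       # a 70 mm ZWD maps to ~10-11 mm PWV
```

Π is typically around 0.15, so PWV is roughly 15% of the zenith wet delay; the sensitivity of Π to Tm is why the Tm model choice matters.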

Effects of Elevated Temperature after the Booting Stage on Physiological Characteristics and Grain Development in Wheat (밀에서 출수 후 잎의 생리적 특성 및 종실 생장에 대한 수잉기 이후 고온의 효과)

  • Song, Ki Eun;Choi, Jae Eun;Jung, Jae Gyeong;Ko, Jong Han;Lee, Kyung Do;Shim, Sang-In
    • KOREAN JOURNAL OF CROP SCIENCE / v.66 no.4 / pp.307-317 / 2021
  • In recent years, global warming has led to frequent climate change-related problems, and elevated temperature, among adverse climatic factors, is a critical problem negatively affecting crop growth and yield. In this context, the present study examined the physiological traits of wheat plants grown under high temperatures. Specifically, the effects of elevated temperatures on seed development after heading were evaluated, and the vegetation indices of different organs were assessed using hyperspectral analysis. Among physiological traits, leaf greenness and OJIP parameters were higher in the high-temperature treatment than in the control treatment. Similarly, the leaf photosynthetic rate during seed development was higher in the high-temperature treatment. Moreover, organ temperatures were higher in the high-temperature treatment; consequently, the leaf transpiration rate and stomatal conductance were higher in the control treatment. On all measurement dates, the weight of spikes and seeds, the sink organs, was greater in the high-temperature treatment. Additionally, the seed growth rate was higher in the high-temperature treatment than in the control at 14 days after heading, which may be attributed to greater redistribution of photosynthates at the early stage of seed development in the former. In the hyperspectral analysis, the vegetation indices related to leaf chlorophyll content and nitrogen status were higher in the high-temperature treatment after heading. Our results suggest that elevated temperatures after the booting stage positively affect wheat growth and yield.
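The abstract does not name the specific vegetation indices used; NDVI is shown below as a representative chlorophyll-related index computed from two reflectance bands (the reflectance values are invented):

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and near-infrared
    reflectance. The abstract does not name the exact indices used, so
    NDVI stands in as a representative chlorophyll-related index."""
    return (nir - red) / (nir + red)

# Greener leaves absorb more red light and reflect more NIR, so a leaf
# retaining more chlorophyll scores higher (values invented):
print(round(ndvi(red=0.08, nir=0.55), 3))
print(round(ndvi(red=0.12, nir=0.45), 3))
```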

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology / v.55 no.5 / pp.551-561 / 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures and can be applied to the investigation of buried cultural properties and the determination of their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom in Gyeongju, South Korea. The main goal of the feature extraction analyses is to identify the circular features from building remains and the linear features from ancient roads and fences. Feature extraction was implemented by applying the Canny edge detection and Hough transform algorithms: we applied the Hough transform to the edge image resulting from the Canny algorithm in order to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. For image segmentation, we applied a connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled, although multiple labels were often assigned to a single structure in the GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation. In this analysis, a vector layer containing pixel values for each segmented polygon was first estimated and then used to build a training-validation dataset by assigning the polygons to one class associated with the buried relics and another class for the background field. With a Random Forest classifier, the polygons on the LSMS segmentation layer were successfully classified into polygons of buried relics and polygons of the background. We therefore propose that the automatic classification methods applied to the GPR images of buried cultural heritage in this study can be useful for obtaining consistent analysis results when planning excavation processes.
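In practice the Canny/Hough step would use a library such as OpenCV (`cv2.Canny`, `cv2.HoughLines`); the pure-NumPy sketch below shows only the voting logic of the Hough transform, applied to a synthetic edge image rather than real GPR data:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Bare-bones Hough transform for straight lines in a binary edge image.
    Each edge pixel votes for all (rho, theta) lines passing through it;
    peaks in the accumulator correspond to detected lines."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# Synthetic edge image with a vertical linear feature at column x = 5,
# loosely mimicking the signature of an ancient road or fence line.
img = np.zeros((20, 20), dtype=bool)
img[:, 5] = True
acc, rhos, thetas = hough_lines(img)
peak = np.unravel_index(np.argmax(acc), acc.shape)
# The strongest vote corresponds to the line x = 5 (|rho| = 5).
print(rhos[peak[0]], np.rad2deg(thetas[peak[1]]))
```

The per-sector parameter tuning the abstract mentions would correspond, in this sketch, to choosing the accumulator threshold and angular resolution separately for each survey sector.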

Design and Implementation of a Web Application Firewall with Multi-layered Web Filter (다중 계층 웹 필터를 사용하는 웹 애플리케이션 방화벽의 설계 및 구현)

  • Jang, Sung-Min;Won, Yoo-Hun
    • Journal of the Korea Society of Computer and Information / v.14 no.12 / pp.157-167 / 2009
  • Recently, leakage of confidential and personal information has been taking place on the Internet more frequently than ever before. Most such security incidents are caused by attacks on vulnerabilities in carelessly developed web applications. Conventional firewalls and intrusion detection systems cannot detect attacks on web applications, and signature-based detection has limited capability against new threats. Therefore, much research on detecting attacks on web applications employs anomaly-based detection built on web traffic analysis. Such research focuses on three problems: how to accurately analyze given web traffic; the system performance needed to inspect the application payloads of packets, which is required to detect application-layer attacks; and the maintenance and cost burden of the many newly installed network security devices. The UTM (Unified Threat Management) system was proposed to resolve all of these security problems at once, but it is not widely used owing to low efficiency and high cost. In addition, the web filter, which performs one of the UTM system's functions, cannot adequately detect the variety of recent sophisticated attacks on web applications. To resolve these problems, studies on web application firewalls are being conducted as a new class of network security system. Because such studies focus on speeding up packet processing by relying on high-priced hardware, the cost of deploying a web application firewall is rising. Moreover, current anomaly-based detection technologies, which do not take the characteristics of web applications into account, cause many false positives and false negatives. 
To reduce false positives and false negatives, this study proposes a real-time anomaly detection method based on analyzing the lengths of parameter values in web clients' requests. It also designs and proposes a WAF (Web Application Firewall) that can run on low-priced or legacy systems and process application data without dedicated hardware. Furthermore, it proposes a method to resolve the sluggish performance caused by copying packets into application space for data processing. Consequently, this study makes it possible to deploy an effective web application firewall at low cost, at a time when deploying an additional security system is considered burdensome because of the many network security systems already in use.
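The core idea, profiling the typical length of each request parameter's value and flagging strong deviations, can be sketched roughly as follows (the class name and the 3-sigma threshold are illustrative choices, not the paper's):

```python
import statistics

class ParamLengthProfile:
    """Toy version of parameter-length anomaly detection: learn the typical
    length of each request parameter's value during a training phase, then
    flag requests whose value length deviates strongly from the profile."""
    def __init__(self, threshold=3.0):
        self.samples = {}
        self.threshold = threshold

    def train(self, params):
        for name, value in params.items():
            self.samples.setdefault(name, []).append(len(value))

    def is_anomalous(self, params):
        for name, value in params.items():
            seen = self.samples.get(name)
            if not seen or len(seen) < 2:
                continue  # no profile yet; a real WAF needs a policy here
            mean = statistics.mean(seen)
            std = statistics.pstdev(seen) or 1.0
            if abs(len(value) - mean) / std > self.threshold:
                return True
        return False

waf = ParamLengthProfile()
for _ in range(50):
    waf.train({"user": "alice", "q": "laptop bag"})
    waf.train({"user": "bob", "q": "usb cable"})
# A suspiciously long value, e.g. an injected payload, is flagged:
print(waf.is_anomalous({"q": "' OR 1=1 UNION SELECT password FROM users --"}))
```

Length-based profiling catches many injection payloads cheaply because exploits are usually far longer than legitimate values, which is what makes it attractive for low-priced or legacy systems.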

Estimation of Rice Canopy Height Using Terrestrial Laser Scanner (레이저 스캐너를 이용한 벼 군락 초장 추정)

  • Dongwon Kwon;Wan-Gyu Sang;Sungyul Chang;Woo-jin Im;Hyeok-jin Bak;Ji-hyeon Lee;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.4 / pp.387-397 / 2023
  • Plant height is a growth parameter that provides visible insight into a plant's growth status and correlates strongly with yield, so it is widely used in crop breeding and cultivation research. Crop growth characteristics such as plant height have generally been measured by hand with a ruler, but with recent advances in sensing and image analysis technology, research is under way to digitize growth measurement for efficient investigation of crop growth. In this study, the canopy height of rice grown at various nitrogen fertilization levels was measured using a laser scanner capable of precise measurement over a wide area, and the results were compared with the actual plant height. Comparing the point cloud data collected with the laser scanner against the actual plant height confirmed that the estimated height based on the average height of the top 1% of points showed the highest correlation with the actual plant height (R2 = 0.93, RMSE = 2.73). Based on this, a linear regression equation was derived and used to convert the canopy height measured with the laser scanner to the actual plant height. The rice growth curve drawn from actual and estimated plant heights collected under various nitrogen fertilization conditions and across the growth period shows that laser scanner-based canopy height measurement can be effectively utilized for assessing the plant height and growth of rice. In the future, 3D images derived from laser scanners are expected to be applicable to crop biomass estimation, plant shape analysis, and related tasks, and can serve as a technology for digitizing conventional crop growth assessment methods.
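The top-1% estimator and linear calibration can be sketched as below on synthetic data; the regression coefficients are invented for illustration, not the paper's fitted values:

```python
import numpy as np

def canopy_height_top_fraction(point_heights, fraction=0.01):
    """Estimate canopy height as the mean of the top `fraction` of point
    cloud heights, the estimator the paper found to correlate best with
    ruler-measured plant height."""
    pts = np.sort(np.asarray(point_heights))[::-1]
    k = max(1, int(len(pts) * fraction))
    return pts[:k].mean()

rng = np.random.default_rng(0)
# Synthetic point cloud (heights in cm): most laser returns come from
# mid-canopy, so the top-1% mean sits near the true canopy top.
cloud = rng.normal(80.0, 5.0, size=10_000)
est = canopy_height_top_fraction(cloud)
# Hypothetical calibration: the slope and intercept here are invented,
# not the paper's fitted regression coefficients.
plant_height = 0.95 * est + 1.0
print(round(est, 1), round(plant_height, 1))
```

Averaging the top percentile rather than taking the single highest return makes the estimate robust to stray points above the canopy.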

Implementation Strategy for the Elderly Care Solution Based on Usage Log Analysis: Focusing on the Case of Hyodol Product (사용자 로그 분석에 기반한 노인 돌봄 솔루션 구축 전략: 효돌 제품의 사례를 중심으로)

  • Lee, Junsik;Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.117-140 / 2019
  • As the aging of society accelerates and various social problems related to vulnerable elderly people are raised, the need for effective elderly care solutions that protect the health and safety of the elderly generation is growing. Recently, more and more people are using smart toys equipped with ICT for elderly care. In particular, log data collected through smart toys are highly valuable as quantitative and objective indicators in areas such as policy-making and service planning. However, research related to smart toys has been limited to areas such as smart toy development and validation of their effectiveness; there is a dearth of research deriving insights from the log data smart toys collect and using them for decision-making. This study analyzes log data collected from a smart toy and derives insights for improving the quality of life of elderly users. Specifically, we performed a user profiling analysis and elicited the mechanism of behavior-driven quality-of-life change. First, in the user profiling analysis, two important dimensions for classifying elderly groups were derived from five factors of elderly users' daily living management: 'Routine Activities' and 'Work-out Activities'. Based on these dimensions, hierarchical cluster analysis and K-Means clustering were performed to classify all elderly users into three groups. Through the profiling analysis, the demographic characteristics of each group and their smart toy usage behavior were identified. Second, stepwise regression was performed to elicit the mechanism of quality-of-life change. The effects of interaction, content usage, and indoor activity on the improvement of depression and lifestyle for the elderly were identified. 
In addition, the analysis identified users' evaluation of smart toy performance and satisfaction with the smart toy as parameters mediating the relationship between usage behavior and quality-of-life change. The specific mechanisms are as follows. First, interaction between the smart toy and the elderly was found to improve depression, mediated by attitudes toward the smart toy: 'satisfaction with the smart toy', the variable that affects improvement of depression, depends on how users evaluate the smart toy's performance, and it is interaction with the smart toy that positively affects this evaluation. These results can be interpreted as elderly users with a desire for emotional stability interacting actively with the smart toy, assessing it positively, and greatly appreciating its effectiveness. Second, content usage was confirmed to have a direct effect on improving lifestyle without going through other variables: elderly users who make heavy use of the content provided by the smart toy improved their lifestyle, and this effect occurred regardless of the user's attitude toward the smart toy. Third, the log data show that a high degree of indoor activity improves both the lifestyle and the depression of the elderly. The more indoor activity, the better the lifestyle, and this effect likewise occurs regardless of the user's attitude toward the smart toy. In addition, elderly users with a high degree of indoor activity are satisfied with the smart toy, which leads to improvement in depression. Conversely, elderly users who prefer outdoor to indoor activities, or who are less active because of health problems, are unlikely to be satisfied with the smart toy and thus do not gain the depression-improving effect. In summary, three groups of elderly users were identified based on their activities, and the important characteristics of each type were described. 
This study also sought to identify the mechanism by which elderly users' behavior with the smart toy affects their actual lives, and to derive user needs and insights.
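The grouping step can be illustrated with a minimal K-Means on synthetic (routine, work-out) activity scores; this is a sketch of the technique with made-up data, not the paper's dataset or fitted clusters:

```python
import numpy as np

def kmeans(X, init_idx, iters=100):
    """Minimal K-Means with a deterministic initialization, standing in
    for the paper's clustering of elderly users on the 'Routine Activities'
    and 'Work-out Activities' dimensions (data here are synthetic)."""
    centers = X[np.array(init_idx)].astype(float)
    for _ in range(iters):
        # Assign each user to the nearest center, then move the centers.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0)
                        for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal([1.0, 1.0], 0.1, (30, 2)),  # low activity on both dimensions
    rng.normal([4.0, 1.0], 0.1, (30, 2)),  # routine-heavy users
    rng.normal([4.0, 4.0], 0.1, (30, 2)),  # active on both dimensions
])
labels, centers = kmeans(X, init_idx=[0, 30, 60])
print({tuple(np.round(c).astype(int)) for c in centers})
```

In the paper, hierarchical clustering was used first, which in practice is a common way to choose both the number of clusters and the initial centers for K-Means.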

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal / v.14 no.1 / pp.83-98 / 2012
  • In a market where new and used cars compete with each other, we run the risk of obtaining biased estimates of the cross elasticity between them if we focus on only new cars or only used cars. Unfortunately, most previous studies of the automobile industry have focused on new car models alone, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices, but their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes the cross elasticity between new and used cars of the same model is higher than that among new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage. 
The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during January-June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczos iterative method. This method is intuitively appealing: suppose a new car offers a certain rebate and gains market share at first. In response, the used car of the same model keeps decreasing its price until it regains the lost market share and restores the status quo; the new car then settles down to a lower market share owing to the used car's reaction. 
The method enables us to find the amount of price discount needed to maintain the status quo, together with the equilibrium market shares of the new and used cars. In the first simulation, I used Jetta as the focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reactive cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities and therefore suggests a less aggressive used-car price discount in response to new cars' rebates than the proposed nested logit model does. In the second simulation, I used Elantra to reconfirm the result for Jetta and came to the same conclusion. In the third simulation, I had Corolla offer a $1,000 rebate to see the best response for Elantra's new and used cars. Interestingly, Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). Future research might explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, remains a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected because of true mis-specification or because of the data structure arising from typical car dealerships, where both new and used cars of the same model are displayed. Given this market environment, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored because customers first choose a dealership (brand) and then choose between new and used cars. However, if there are dealerships that carry both new and used cars of various models, the NUB model might fit the data as well as the BNU model. 
Which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture of the BNU and NUB models on a new data set.
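For reference, the two-level nested logit probabilities discussed above have the closed form P(b, c) = P(b)·P(c|b), where P(b) uses the inclusive value of nest b. A sketch with invented utilities (λ is the dissimilarity parameter, which must lie in (0, 1] for consistency with utility maximization, which is why a fitted value above 1 signals mis-specification):

```python
import math

def nested_logit_probs(utilities, lam):
    """Two-level nested logit sketch (the BNU structure: brand at the top,
    new-vs-used within each brand nest). `utilities[brand]` maps 'new' or
    'used' to a deterministic utility; `lam` is the dissimilarity
    parameter shared across nests for simplicity."""
    # Inclusive value of each nest: IV_b = lam * log(sum_c exp(v_bc / lam)).
    inclusive = {b: lam * math.log(sum(math.exp(v / lam) for v in u.values()))
                 for b, u in utilities.items()}
    denom = sum(math.exp(iv) for iv in inclusive.values())
    probs = {}
    for b, u in utilities.items():
        p_brand = math.exp(inclusive[b]) / denom
        within = sum(math.exp(v / lam) for v in u.values())
        for cond, v in u.items():
            probs[(b, cond)] = p_brand * math.exp(v / lam) / within
    return probs

# Invented utilities: a rebate would raise the new Jetta's utility here.
utils = {"Jetta": {"new": 1.2, "used": 1.0},
         "Corolla": {"new": 1.1, "used": 0.9}}
p = nested_logit_probs(utils, lam=0.5)
print(round(sum(p.values()), 6))
```

As λ → 1 the model collapses to the IIA multinomial logit; smaller λ makes new and used cars of the same model closer substitutes than cars of different models, which is the substitution pattern the paper's results favor.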


Influence of Microcrack on Brazilian Tensile Strength of Jurassic Granite in Hapcheon (미세균열이 합천지역 쥬라기 화강암의 압열인장강도에 미치는 영향)

  • Park, Deok-Won;Kim, Kyeong-Su
    • Korean Journal of Mineralogy and Petrology / v.34 no.1 / pp.41-56 / 2021
  • The characteristics of the six rock cleavages (R1~H2) in the Jurassic Hapcheon granite were analyzed using the distributions of ① microcrack lengths (N=230), ② microcrack spacings (N=150) and ③ Brazilian tensile strengths (N=30). The 18 cumulative graphs for these three factors, measured in the directions parallel to the six rock cleavages, were mutually contrasted. The main results of the analysis are summarized as follows. First, the frequency ratio (%) of Brazilian tensile strength values (kg/㎠), divided into nine class intervals, increases in the order of 60~70 (3.3) < 140~150 (6.7) < 100~110·110~120 (10.0) < 90~100 (13.3) < 80~90 (16.7) < 120~130·130~140 (20.0). The distribution curve of strength according to the frequency of each class interval shows a bimodal distribution. Second, the graphs for length, spacing and tensile strength were arranged in the order of H2 < H1 < G2 < G1 < R2 < R1. The exponent difference (λS-λL, Δλ) between the two graphs for spacing and length increases in the order of H2 (-1.59) < H1 (-0.02) < G2 (0.25) < G1 (0.63) < R2 (1.59) < R1 (1.96) (2 < 1). From the related chart, the six graphs for tensile strength move gradually to the left with increasing exponent difference. The negative slope (a) of the graphs for tensile strength, suggesting the degree of uniformity of the texture, increases in the order of H ((H1+H2)/2, 0.116) < G ((G1+G2)/2, 0.125) < R ((R1+R2)/2, 0.191). Third, the orders of arrangement of the two graphs for the two directions making up each rock cleavage (R1·R2 (R), G1·G2 (G), H1·H2 (H)) were compared. The two graphs for length and spacing are arranged in reverse order of each other, whereas the two graphs for spacing and tensile strength are mutually consistent in order of arrangement. 
The exponent differences (ΔλL and ΔλS) for length and spacing increase in the orders of rift (R, -0.08) < grain (G, 0.14) < hardway (H, 0.75) and hardway (H, 0.16) < grain (G, 0.23) < rift (R, 0.45), respectively. Fourth, a general chart of the six graphs showing the distribution characteristics of the microcrack lengths, microcrack spacings and Brazilian tensile strengths was made. According to the range of length, the six graphs show the orders G2 < H2 < H1 < R2 < G1 < R1 (< 7 mm) and G2 < H1 < H2 < R2 < G1 < R1 (≦2.38 mm). The six graphs for spacing intersect each other, forming a bottleneck near the point corresponding to a cumulative frequency of 12 and a spacing of 0.53 mm. Fifth, the six values of each parameter representing the six rock cleavages were arranged in increasing or decreasing order. Among the 8 parameters related to length, the total length (Lt) and the graph (≦2.38 mm) are mutually congruent in order of arrangement. Among the 7 parameters related to spacing, the frequency of spacing (N), the mean spacing (Sm) and the graph (≦5 mm) are mutually consistent in order of arrangement. In terms of order of arrangement, the values of these three spacing parameters are consistent with the maximum tensile strengths belonging to group E. As shown in Table 8, the order of arrangement of these parameter values is useful for prior recognition of the six rock cleavages and the three quarrying planes.
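The λ exponents appear to come from exponential fits to the cumulative graphs. Assuming a form N(≥x) ≈ A·e^(−λx), which the abstract implies but does not state explicitly, such an exponent can be recovered by log-linear least squares, sketched below on synthetic data:

```python
import numpy as np

def fit_exponent(values):
    """Fit lambda in N(>=x) ~ A * exp(-lambda * x) by log-linear least
    squares on the empirical exceedance counts. This assumes the
    cumulative graphs are exponential in form."""
    x = np.sort(np.asarray(values))
    n_exceed = np.arange(len(x), 0, -1)  # number of samples >= x[i]
    slope, _ = np.polyfit(x, np.log(n_exceed), 1)
    return -slope

# Synthetic spacings drawn from an exponential with true rate lambda = 2.
rng = np.random.default_rng(0)
spacings = rng.exponential(scale=0.5, size=2000)
print(round(fit_exponent(spacings), 2))
```

Applying the same fit to the length and spacing samples of each cleavage would yield λL and λS, whose difference Δλ = λS − λL is the ordering index used above.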