• Title/Summary/Keyword: 인식비교 (perception comparison)


A Study on Dietary Habits and Nutrient Intakes by Skipping Meals of Elementary School Children in Incheon (인천 지역 초등학생의 결식에 따른 식습관과 영양 섭취 상태에 관한 연구)

  • Park, Sook-Kyoung;Kim, Myung-Hee;Choi, Mi-Kyeong
    • Journal of the East Asian Society of Dietary Life / v.20 no.5 / pp.668-679 / 2010
  • The purpose of this study was to analyze the relationship between children's meal skipping and their eating habits, lifestyle, parents' awareness of nutrition behavior, and dietary intake, based on a survey of 362 4th- to 6th-grade students at an elementary school in Incheon. There were 104 students in the skipping meals group and 258 students in the eating meals group, with average ages of 10.9 and 10.8 years, respectively. The average height and weight were 144.5 cm and 38.6 kg for the skipping meals group, and 145.7 cm and 39.3 kg for the eating meals group. Parents' appreciation of the importance of breakfast showed a significant difference according to whether children skipped meals (p<0.01). Only 43.7% of parents in the skipping meals group answered that they serve breakfast every day, compared with 94.9% in the eating meals group, showing a significant difference in the frequency of serving breakfast to their children (p<0.001). In the skipping meals group, the most common reason for not having breakfast was lack of time (41.2%), while in the eating meals group 40.5% of students answered that they have no appetite, which also showed a difference (p<0.001). The skipping meals group tended to wake up later in the morning than those who have breakfast (p<0.01). Breakfast time for the skipping meals group was also later than for the eating meals group, differing according to whether they have breakfast or not (p<0.01). Total nutrition attitude scores in the skipping meals group and the eating meals group were 30.8 and 32.1, indicating that the eating meals group had a better nutrition attitude (p<0.05). Daily intakes of energy (p<0.01) and protein (p<0.01) in the skipping meals group were significantly lower than those in the eating meals group. The skipping meals group also had lower INQs for protein (p<0.01) and zinc (p<0.01), showing that the skipping meals group was having meals of lower nutritional quality. In conclusion, this study revealed that students who skip meals are more likely to have meals that lack nutrition or are of low quality, and that rising time in the morning and frequency of snacking can also affect whether or not they skip meals.

Clinical Study of Corrosive Esophagitis (부식성 식도염에 관한 임상적 고찰)

  • 이원상;정승규;최홍식;김상기;김광문;홍원표
    • Proceedings of the KOR-BRONCHOESO Conference / 1981.05a / pp.6-7 / 1981
  • With the improvement in the living standard and educational level of the people, there is increasing awareness of the dangers of toxic substances and lethal drugs. Together with governmental control of these substances, this has led to a progressive decrease in accidents with corrosive substances. However, sporadic suicide attempts with such substances still occur, owing to the imbalance between the cultural development of society and individual emotion, and the problem is compounded by the variety of corrosive agents easily available to the public as a result of considerable industrial development and industrialization. Salzen (1920) and Bokey (1924) were pioneers on the subject of corrosive esophagitis and esophageal stenosis treated by dilatation. Since then the subject has been continually advanced by research on various acid (Pitkin, 1935; Carmody, 1936) and alkali (Tree, 1942; Tucker, 1951) corrosive agents and by the use of steroids (Spain, 1950) and antibiotics. Recently, early esophagoscopic examination has been emphasized for the purpose of determining the treatment of corrosive esophagitis patients. In order to find an effective treatment for such patients in the future, the authors selected 96 corrosive esophagitis patients who were admitted and treated at the ENT department of Severance Hospital from 1971 to March 1981 for a clinical study. 1. Sex incidence: male : female = 1 : 1.7; age incidence: the 21-30 year age group was largest with 38 cases (39.6%). 2. Suicidal attempt: 80 cases (83.3%); accidental ingestion: 16 cases (16.7%). Among those who ingested the substance accidentally, children below ten years were most numerous with nine patients. 3. Incidence of agents: acetic acid, 41 cases (41.8%); lye, 20 cases (20.4%); HCl, 17 cases (17.3%). There was a trend of rapid rise in the incidence of acidic corrosive agents, especially acetic acid. 4. Lavage: 57 cases (81.1%). 5. Nasogastric tube insertion: 80 cases (83.3%); no insertion: 16 cases (16.7%), of which late admittance accounted for 10 cases, failure 4 cases, and other causes 2 cases. 6. Tracheostomy: 17 cases (17.7%); indications were respiratory problems (75.0%) and mental problems (25.0%). 7. Early endoscopy: 11 cases (11.5%), of which 6 cases (54.4%) were within 48 hours. Endoscopic results: moderate mucosal ulceration, 8 cases (72.7%); mild mucosal erythema, 2 cases (18.2%); severe mucosal ulceration, 1 case (9.1%). Among those who underwent early endoscopic examination, 6 patients were confirmed to have mild lesions and were discharged after endoscopy. The average period of admission in the cases of nasogastric tube insertion was 4 weeks. 8. Nasogastric tube indwelling period: average 11.6 days; recently, whether to treat corrosive esophagitis patients with an indwelling nasogastric tube has been determined according to the findings of early endoscopy. 9. Patients in whom steroid administration was withheld or delayed: 47 cases (48.9%); causes were kind of drug (acid, unknown), 12 cases; late admittance, 11 cases; mild case, 9 cases; contraindication, 7 cases; other, 8 cases. 10. Management of stricture: bougienage, 7 cases; feeding gastrostomy, 6 cases; other surgical management, 4 cases. 11. Complications: 27 cases (28.1%); cardio-pulmonary, 10 cases; visceral rupture, 8 cases; massive bleeding, 6 cases; renal failure, 4 cases; other, 2 cases; death or moribund discharge, 8 cases. 12. Number of follow-up cases: 23; esophageal stricture, 13 cases; sites of stricture: hypopharynx, 1 case; middle third of esophagus, 5 cases; upper third, 3 cases; lower third, 3 cases; pylorus, 1 case; diffuse esophageal stenosis, 1 case.


Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries. However, because of the different nature of their capital structure and debt-to-equity ratio, construction company bankruptcies are more difficult to forecast than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half of a project. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, can place a greater burden on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for some time in various ways, but these models are intended for companies in general and may not be appropriate for forecasting the bankruptcies of construction companies, which typically carry high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; with such a distinctive capital structure, criteria used to judge the financial risk of companies in general are difficult to apply to construction firms. The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt by using a simple formula, classifying the result into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while one in the "safe" category has a low likelihood of bankruptcy; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firm cases in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have used this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed based on a company's financial information and then judged as belonging to either the bankruptcy-risk group or the safe group.
The machine learning models previously used most often in bankruptcy forecasting are Artificial Neural Networks (ANN), Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are also many hybrid studies combining these models. Existing studies using the traditional Z-score technique or machine learning for bankruptcy prediction focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that AdaBoost is the most appropriate forecasting model for construction companies by analyzing its performance according to company size. We classified construction companies into three groups - large, medium, and small - based on the company's capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
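The abstract names the baseline formula (the Altman Z-score) and a model comparison stratified by capital size, but gives no implementation details. The sketch below illustrates both under stated assumptions: the 1968 Z-score coefficients are the published ones, while the data frame `fin`, the size threshold other than the 50-billion-won cut-off, and the model hyperparameters are hypothetical placeholders rather than the authors' setup.

```python
# Hedged sketch: Altman-style baseline formula plus a size-stratified comparison
# of AdaBoost, ANN and SVM classifiers. `fin` is a hypothetical DataFrame of
# financial ratios with a binary `bankrupt` label and a `capital` column (billion won).
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Classical Altman (1968) Z-score; Z > 2.99 is 'safe', Z < 1.81 'dangerous',
    and the zone in between is the 'moderate' (grey) area discussed above."""
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

def size_group(capital_bil_won: float) -> str:
    """Three capital-size groups; only the 50-billion-won cut-off comes from the paper."""
    if capital_bil_won >= 50:
        return "large"
    return "medium" if capital_bil_won >= 10 else "small"

def compare_models(fin: pd.DataFrame) -> pd.DataFrame:
    """Cross-validated accuracy of each classifier within each size group."""
    models = {
        "AdaBoost": AdaBoostClassifier(n_estimators=200, random_state=0),
        "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
        "SVM": SVC(kernel="rbf", gamma="scale"),
    }
    rows = []
    for group, part in fin.groupby(fin["capital"].map(size_group)):
        X = part.drop(columns=["bankrupt", "capital"])
        y = part["bankrupt"]
        for name, model in models.items():
            acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
            rows.append({"group": group, "model": name, "cv_accuracy": acc})
    return pd.DataFrame(rows)
```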

NFC-based Smartwork Service Model Design (NFC 기반의 스마트워크 서비스 모델 설계)

  • Park, Arum;Kang, Min Su;Jun, Jungho;Lee, Kyoung Jun
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.157-175 / 2013
  • Since the Korean government announced its 'Smartwork promotion strategy' in 2010, Korean firms and government organizations have started to adopt smartwork. However, smartwork has been implemented mainly in a few large enterprises and government organizations rather than in SMEs (small and medium enterprises). In the USA, both Yahoo! and Best Buy have stopped their flexible work programs because of reported low productivity and job loafing problems. In addition, from the literature on smartwork we identified obstacles to smartwork adoption and categorized them into three types: institutional, organizational, and technological. The first category, institutional obstacles, includes the difficulty of defining smartwork performance evaluation metrics, the lack of readiness of organizational processes, the limitation of smartwork types and models, the lack of employee participation in the smartwork adoption procedure, the high cost of building a smartwork system, and insufficient government support. The second category, organizational obstacles, includes the limitation of the organizational hierarchy, wrong perceptions of employees and employers, difficulty in close collaboration, low productivity with remote coworkers, insufficient understanding of remote working, and lack of training about smartwork. The third category, technological obstacles, includes security concerns about mobile work, the lack of specialized solutions, and the lack of adoption and operation know-how. To overcome the current problems of smartwork in practice and the obstacles reported in the literature, we suggest a novel smartwork service model based on NFC (Near Field Communication). This paper suggests an NFC-based smartwork service model composed of an NFC-based smartworker networking service and an NFC-based smartwork space management service. The NFC-based smartworker networking service comprises an NFC-based communication/SNS service and an NFC-based recruiting/job-seeking service. The NFC-based communication/SNS service model supplements key shortcomings of the existing smartwork service model. By connecting to a company's existing legacy system through NFC tags and systems, the low productivity and the difficulties of collaboration and attendance management can be overcome, since managers can obtain employees' work processing information, work time information, and workspace information, and employees can communicate with coworkers in real time and obtain coworkers' location information. In short, this service model features an affordable system cost, provision of location-based information, and the possibility of knowledge accumulation. The NFC-based recruiting/job-seeking service provides new value by linking an NFC tag service with sharing economy sites. This service model features easy attachment and removal of the service, efficient space-based provision of work, easy search of location-based recruiting/job-seeking information, and system flexibility. It combines the advantages of sharing economy sites with those of NFC. By cooperating with sharing economy sites, the model can provide recruiters with human resources seeking not only long-term work but also short-term work. Additionally, SMEs can easily find job seekers by attaching NFC tags to any space at which qualified human resources may be located. In short, this service model supports efficient human resource distribution by providing location information for both recruiting and job seeking. The NFC-based smartwork space management service can promote smartwork by linking NFC tags attached to the workspace with the existing smartwork system. This service features low cost, provision of indoor and outdoor location information, and customized service. In particular, this model can help small companies adopt a smartwork system because it is lightweight and cost-effective compared with existing smartwork systems. This paper proposes the scenarios of the service models, the roles and incentives of the participants, and a comparative analysis. The superiority of the NFC-based smartwork service model is shown by comparing and analyzing the new service models and the existing service models. The service model can expand the scope of enterprises and organizations that adopt smartwork and the scope of employees that take advantage of it.
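The paper presents service scenarios rather than code. Purely as an illustration of the attendance and workspace-tracking idea (an NFC tag scan becoming a work-time/location event that a legacy HR or groupware system could consume), a minimal sketch follows; every name, field, and the in-memory log here is hypothetical.

```python
# Illustrative sketch (not the paper's implementation): an NFC tag scan at a
# workspace is turned into a work-time/location event. All names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass
class WorkEvent:
    worker_id: str      # identifier of the smartworker whose device touched the tag
    tag_id: str         # identifier stored on the NFC tag
    location: str       # workspace name resolved from the tag registry
    kind: str           # "check_in" or "check_out"
    timestamp: datetime

# Hypothetical registry mapping NFC tag IDs to registered smartwork spaces.
TAG_REGISTRY = {"tag-001": "Seoul shared office 3F", "tag-002": "Home office"}
EVENT_LOG: List[WorkEvent] = []

def record_scan(worker_id: str, tag_id: str, kind: str = "check_in") -> WorkEvent:
    """Resolve the scanned tag to a workspace and append a work event."""
    location = TAG_REGISTRY.get(tag_id, "unregistered space")
    event = WorkEvent(worker_id, tag_id, location, kind, datetime.now(timezone.utc))
    EVENT_LOG.append(event)  # in practice, forwarded to the legacy system's API
    return event

if __name__ == "__main__":
    e = record_scan("worker-42", "tag-001")
    print(f"{e.worker_id} checked in at '{e.location}' ({e.timestamp:%Y-%m-%d %H:%M})")
```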

Anthropometric Measurement, Dietary Behaviors, Health-related Behaviors and Nutrient Intake According to Lifestyles of College Students (대학생의 라이프스타일 유형에 따른 신체계측, 식행동, 건강관련 생활습관 및 영양소 섭취상태에 관한 연구)

  • Cheong, Sun-Hee;Na, Young-Joo;Lee, Eun-Hee;Chang, Kyung-Ja
    • Journal of the Korean Society of Food Science and Nutrition / v.36 no.12 / pp.1560-1570 / 2007
  • The purpose of this study was to investigate differences in anthropometric measurements, dietary attitude, health-related behaviors, and nutrient intake according to lifestyle among college students. The subjects were 994 nationwide college students (male: 385, female: 609), divided into 7 clusters (PEAO: passive economy/appearance-oriented type, NCPR: non-consumption/pursuit of relationship type, PTA: pursuit of traditional actuality type, PAH: pursuit of active health type, UO: utility-oriented type, POF: pursuit of open fashion type, PFR: pursuit of family relations type). A cross-sectional survey was conducted using a self-administered questionnaire, and the data were collected via the Internet or by mail. The nutrient intake data collected from food records were analyzed by the Computer Aided Nutritional Analysis Program, and the data were analyzed with the SPSS 12.0 program. The average ages of male and female college students were 23.7 years and 21.6 years, respectively. Most of the college students had poor eating habits; in particular, about 60% of the PEAO group had irregular meal times. Students in the PAH and POF groups showed significantly higher consumption frequencies of fruits, meat products, and foods cooked with oil compared with the other groups. As for exercise, drinking, and smoking, there were significant differences between the PAH group and the other groups. When asked their reason for body weight control, 16.2% of the NCPR group answered "for health", but 24.8% of the PEAO group and 26.3% of the POF group answered "for appearance". Calorie, vitamin A, vitamin $B_2$, calcium, and iron intakes of all the groups were lower than the Korean DRIs. Female students in the PTA group showed significantly lower vitamin $B_1$ and niacin intakes compared with the PFR group. These results provide nationwide information on health-related behaviors and nutrient intake according to lifestyle among Korean college students.

Query-based Answer Extraction using Korean Dependency Parsing (의존 구문 분석을 이용한 질의 기반 정답 추출)

  • Lee, Dokyoung;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.161-177 / 2019
  • In this paper, we study how sentence dependency parsing results can improve answer extraction in a Question-Answering (QA) system. A QA system consists of query analysis, which analyzes the user's query, and answer extraction, which extracts appropriate answers from documents, and various studies have been conducted on both parts. To improve the performance of answer extraction, it is necessary to accurately reflect the grammatical information of sentences. Because Korean has relatively free word order and frequent omission of sentence components, dependency parsing is a good way to analyze Korean syntax. Therefore, in this study, we improved the performance of answer extraction by adding features generated from dependency parsing to the inputs of the answer extraction model (Bidirectional LSTM-CRF). We compared the performance of the answer extraction model when using only basic word features generated without dependency parsing against the model using the additional Eojeol tag feature and dependency graph embedding feature. Since dependency parsing is performed on the Eojeol, the basic unit of a sentence separated by spaces, the tag information of each Eojeol can be obtained from the parsing result; the Eojeol tag feature refers to this tag information. Generating the dependency graph embedding consists of building the dependency graph from the parsing result and then learning an embedding of the graph. From the dependency parsing result, a graph is generated by mapping each Eojeol to a node, each dependency between Eojeols to an edge, and each Eojeol tag to a node label. In this process, a directed or undirected graph is generated depending on whether the direction of the dependency relation is considered. To obtain the embedding of the graph, we used Graph2Vec, which finds the embedding of a graph from the subgraphs constituting it. The maximum path length between nodes can be specified when finding subgraphs: if it is 1, the graph embedding is generated only from direct dependencies between Eojeols, and as the maximum path length grows, indirect dependencies are also included. In the experiment, the maximum path length between nodes was varied from 1 to 3, both with and without considering the direction of dependency, and the performance of answer extraction was measured. The experimental results show that both the Eojeol tag feature and the dependency graph embedding feature improve the performance of answer extraction. In particular, the highest answer extraction performance was obtained when the direction of the dependency relation was considered and the dependency graph embedding was generated with a maximum path length of 1 in the subgraph extraction process of Graph2Vec. From these experiments, we conclude that it is better to take the direction of dependency into account and to consider only direct connections rather than indirect dependencies between words. The significance of this study is as follows. First, we improved the performance of answer extraction by adding features derived from dependency parsing, taking into account the characteristics of Korean, which has relatively free word order and frequent omission of sentence components. Second, we generated features from the dependency parsing result with a learning-based graph embedding method, without manually defining patterns of dependency between Eojeols. Future research directions are as follows. In this study, the features generated from dependency parsing are applied only to the answer extraction model. In the future, if performance gains are confirmed when applying these features to other natural language processing models such as sentiment analysis or named entity recognition, the validity of the features can be verified more thoroughly.
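As an illustration of the graph construction described above (Eojeol to node, dependency relation to edge, Eojeol tag to node label), here is a minimal sketch. The parse triples are hypothetical parser output, and the commented-out Graph2Vec call assumes the karateclub implementation rather than the authors' exact pipeline.

```python
# Sketch of turning a Korean dependency parse into a graph for graph embedding,
# following the mapping described above: Eojeol -> node, dependency -> edge,
# Eojeol tag -> node label. The parse triples are hypothetical parser output.
import networkx as nx

# (eojeol_index, head_index, eojeol_tag); head_index == -1 marks the root.
parse = [
    (0, 2, "NP_SBJ"),   # subject Eojeol depends on the predicate
    (1, 2, "NP_OBJ"),   # object Eojeol depends on the predicate
    (2, -1, "VP"),      # predicate is the root
]

def parse_to_graph(parse, directed=True):
    g = nx.DiGraph() if directed else nx.Graph()
    for idx, _head, tag in parse:
        g.add_node(idx, feature=tag)      # Eojeol tag as the node label
    for idx, head, _tag in parse:
        if head >= 0:
            g.add_edge(idx, head)         # dependency relation as an edge
    return g

g = parse_to_graph(parse, directed=True)
print(g.nodes(data=True))   # [(0, {'feature': 'NP_SBJ'}), (1, ...), (2, ...)]
print(list(g.edges()))      # [(0, 2), (1, 2)]

# Graph embedding could then be learned, e.g. with the karateclub package's
# Graph2Vec (an assumption, not necessarily the authors' tooling); its WL
# iteration depth plays a role comparable to the maximum path length above.
# from karateclub import Graph2Vec
# model = Graph2Vec(wl_iterations=1, dimensions=128)
# model.fit([nx.Graph(g)])          # karateclub expects undirected graphs
# embeddings = model.get_embedding()
```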

Hwaunsi(和韻詩) on the Poems of Tu Fu(杜甫) and Su Shi(蘇軾) Written by Simjae(深齋) Cho Geung-seop(曺兢燮) in the Turning Point of Modern Era (근대 전환기 심재 조긍섭의 두(杜)·소시(蘇詩) 화운시)

  • Kim, Bo-kyeong
    • (The)Study of the Eastern Classic / no.56 / pp.35-73 / 2014
  • This paper examined the poetic world of Simjae(深齋) Cho Geung-seop(曺兢燮: 1873-1933) at the turning point of the modern era, focusing on his Hwaunsi (和韻詩: poems written using the rhymes of other poets' poems). His poems include many Hwaunsi on the poems of Tu Fu(杜甫) and Su Shi(蘇軾) in particular, which makes him appear a medieval poet engaged in Chinese poem creation in the most traditional manner during a turbulent period. Looking at the Hwaunsi alone, the age of around 40 marked a turning point in Simjae's creative life. Before the age of 40, the poets of the Tang, Song, Ming, and Qing Dynasties and Korean figures such as Lee Hwang(李滉), as well as Tu Fu and Su Shi, were the subjects of his Hwaunsi. After the age of 40, examples of writing poems using the rhymes of other poets, especially Korean figures related to his region, are often found, while Hwaunsi on Tu Fu and Su Shi decreased. Simjae called Tu Fu the integration of poets, speaking of the integrity of his poetic talent and his high proficiency in mood and view. Reflecting such an awareness, the themes, moods, and views in Simjae's Hwaunsi are diverse. Although he did not reveal his thinking about the poems of Su Shi, he seems to have loved Su Shi's poems to some degree. In closeness to the original poems, his Hwaunsi on Tu Fu are relatively closer than those on Su Shi. Roughly speaking, Simjae tried to find his own individuality while intending to follow Tu Fu, whereas with Su Shi's poems he seems to have attempted to reveal his own intention rather than to imitate. Examined carefully, Simjae wrote Hwaunsi but did not simply imitate; he revealed an aesthetics of comparison and difference. In many cases, he created new meanings by implanting his own intentions in the poems, sharing the occasion of creation rather than taking over the theme, mood, and view as they were. The Hwaunsi on Su Shi's poems show relatively less closeness to the original poems. This can be seen as the trace of an effort to establish his own themes and individuality without being entirely dominated by the Hwaun (和韻: using the rhymes of other poets' poems), a creative method with many restrictions. However, it is notable that he wrote few Hwaunsi on Tu Fu's poems after the age of 40. Perhaps this is because he realized a literary reality that he could no longer cope with through Hwaunsi alone. For example, he wrote four poems borrowing the rhymes of Su Shi's Okjungsi(獄中詩: poems written in jail) and also wrote Gujung Japje(拘中雜題) in 1919 while he was detained. In these poems, his complex contemplation and emotion, not restricted by any poet's rhymes, are revealed in diverse ways. Simjae's Hwaunsi testifies to a reality in which the habitus of Chinese poetry persisted, and to its striking mode of existence, at the turning point of the modern era. Although the creation of Hwaunsi reflects his disposition toward the old, a psychology resisting the changes of the modern period also seems to have worked beneath the surface to some degree. In this regard, Simjae's Hwaunsi has its own limitations, but it is significant in that it reveals facts hidden in the history of modern Chinese poetry, which has mostly been viewed through the lens of novelty, eccentricity, and modernity.

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • Predicting the success of customer campaigns has long been studied in academia, and prediction models applying various techniques are still being developed. Recently, as campaign channels have expanded in various ways due to the rapid growth of online business, companies carry out campaigns of many types on a scale that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From the corporate standpoint, the effectiveness of campaigns themselves is decreasing while the cost of investing in them rises, which leads to low actual campaign success rates. Accordingly, various studies are ongoing to improve the effectiveness of campaigns in practice. A campaign system has the ultimate purpose of increasing the success rate of various campaigns by collecting and analyzing customer-related data and using them for campaigns. In particular, recent attempts have been made to predict campaign responses using machine learning. Because campaign data have many and varied features, it is very important to select appropriate features. If all input data are used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the entire data. In addition, when a trained model is generated using too many features, prediction accuracy may be degraded due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary process for analyzing high-dimensional data sets. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they are limited by poor classification performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method in the process of searching for feature subsets that form the basis for improving machine learning model performance, using statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first and features that have a negative effect are removed, after which the sequential method is applied, increasing search efficiency and enabling generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm. Compared with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE), campaign success prediction was higher. In addition, when performing campaign success prediction, the improved feature selection algorithm was found to be helpful in analyzing and interpreting the prediction results by providing the importance of the derived features. These include features such as age, customer rating, and sales, which were already known to be statistically important. Unexpectedly, features that campaign planners had rarely used to select campaign targets, such as the combined product name, the average data consumption rate over three months, and wireless data usage in the last three months, were also selected as important features for campaign response. It was confirmed that base attributes can be very important features depending on the type of campaign. Through this, it is possible to analyze and understand the important characteristics of each campaign type.
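The abstract describes the improved algorithm only at a high level (statistical pre-screening of influential and harmful features followed by a sequential search). The sketch below shows the baseline SFFS search that this builds on, with cross-validated accuracy as the criterion; the estimator, data, and subset size k are placeholders, and the paper's pre-screening step is not reproduced.

```python
# Minimal sketch of sequential floating forward selection (SFFS), the greedy
# baseline the study improves on; criterion = mean cross-validated accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def cv_score(X, y, feats, est):
    """Criterion: mean 5-fold CV accuracy of the estimator on the chosen columns."""
    return cross_val_score(est, X[:, feats], y, cv=5, scoring="accuracy").mean()

def sffs(X, y, k, est=None):
    """Return (best feature subset of size k, its criterion value)."""
    est = est or LogisticRegression(max_iter=1000)
    selected = []
    best_by_size = {}   # size -> (subset, score) best found so far
    while len(selected) < k:
        # Forward step: add the feature giving the best criterion value.
        remaining = [f for f in range(X.shape[1]) if f not in selected]
        s_add, f_add = max((cv_score(X, y, selected + [f], est), f) for f in remaining)
        selected = selected + [f_add]
        if s_add > best_by_size.get(len(selected), ([], -np.inf))[1]:
            best_by_size[len(selected)] = (selected, s_add)
        # Floating (conditional backward) step: drop a feature only if the smaller
        # subset beats the best subset of that size found so far (prevents cycling).
        while len(selected) > 2:
            s_rm, f_rm = max((cv_score(X, y, [s for s in selected if s != f], est), f)
                             for f in selected)
            if s_rm > best_by_size.get(len(selected) - 1, ([], -np.inf))[1]:
                selected = [s for s in selected if s != f_rm]
                best_by_size[len(selected)] = (selected, s_rm)
            else:
                break
    return best_by_size[k]
```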

Influence of Microcrack on Brazilian Tensile Strength of Jurassic Granite in Hapcheon (미세균열이 합천지역 쥬라기 화강암의 압열인장강도에 미치는 영향)

  • Park, Deok-Won;Kim, Kyeong-Su
    • Korean Journal of Mineralogy and Petrology / v.34 no.1 / pp.41-56 / 2021
  • The characteristics of the six rock cleavages (R1~H2) in Jurassic Hapcheon granite were analyzed using the distributions of ① microcrack lengths (N=230), ② microcrack spacings (N=150) and ③ Brazilian tensile strengths (N=30). The 18 cumulative graphs for these three factors, measured in directions parallel to the six rock cleavages, were mutually contrasted. The main results of the analysis are summarized as follows. First, the frequency ratio (%) of Brazilian tensile strength values (kg/㎠) divided into nine class intervals increases in the order of 60~70(3.3) < 140~150(6.7) < 100~110·110~120(10.0) < 90~100(13.3) < 80~90(16.7) < 120~130·130~140(20.0). The distribution curve of strength according to the frequency of each class interval shows a bimodal distribution. Second, the graphs for length, spacing and tensile strength were arranged in the order of H2 < H1 < G2 < G1 < R2 < R1. The exponent difference (λS-λL, Δλ) between the two graphs for spacing and length increases in the order of H2(-1.59) < H1(-0.02) < G2(0.25) < G1(0.63) < R2(1.59) < R1(1.96) (2 < 1). From the related chart, the six graphs for tensile strength move gradually to the left with the increase of the above exponent difference. The negative slope (a) of the graphs for tensile strength, suggesting a degree of uniformity of the texture, increases in the order of H((H1+H2)/2, 0.116) < G((G1+G2)/2, 0.125) < R((R1+R2)/2, 0.191). Third, the orders of arrangement between the two graphs for the two directions that make up each rock cleavage (R1·R2(R), G1·G2(G), H1·H2(H)) were compared. The orders of arrangement of the two graphs for length and spacing are the reverse of each other, while the two graphs for spacing and tensile strength are mutually consistent in order of arrangement. The exponent differences (ΔλL and ΔλS) for length and spacing increase in the orders of rift(R, -0.08) < grain(G, 0.14) < hardway(H, 0.75) and hardway(H, 0.16) < grain(G, 0.23) < rift(R, 0.45), respectively. Fourth, a general chart of the six graphs showing the distribution characteristics of the microcrack lengths, microcrack spacings and Brazilian tensile strengths was made. According to the range of length, the six graphs show the orders G2 < H2 < H1 < R2 < G1 < R1 (< 7 mm) and G2 < H1 < H2 < R2 < G1 < R1 (≦2.38 mm). The six graphs for spacing intersect each other, forming a bottleneck near the point corresponding to a cumulative frequency of 12 and a spacing of 0.53 mm. Fifth, the six values of each parameter representing the six rock cleavages were arranged in increasing or decreasing order. Among the 8 parameters related to length, the total length (Lt) and the graph (≦2.38 mm) are mutually congruent in order of arrangement. Among the 7 parameters related to spacing, the frequency of spacing (N), the mean spacing (Sm) and the graph (≦5 mm) are mutually consistent in order of arrangement. In terms of order of arrangement, the values of these three spacing parameters are consistent with the maximum tensile strengths belonging to group E. As shown in Table 8, the order of arrangement of these parameter values is useful for prior recognition of the six rock cleavages and the three quarrying planes.
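The abstract does not state how the exponents of the cumulative length and spacing graphs were fitted. The sketch below shows one common way such an exponent λ, and the difference Δλ = λS − λL, could be estimated, assuming the descending cumulative frequency decays roughly exponentially; the sample data are synthetic placeholders, not measurements from the paper.

```python
# Illustrative estimation of the exponents (λ) of cumulative microcrack
# length/spacing distributions and their difference Δλ = λS − λL, assuming
# N(size >= x) ≈ a·exp(−λx). Placeholder data, not values from the paper.
import numpy as np

def exponential_exponent(values_mm: np.ndarray) -> float:
    """Fit ln N(x) = ln a − λx to the descending cumulative frequency; return λ."""
    x = np.sort(values_mm)
    n_cum = np.arange(len(x), 0, -1)          # number of cracks with size >= x
    slope, _intercept = np.polyfit(x, np.log(n_cum), 1)
    return -slope                              # λ (per mm)

rng = np.random.default_rng(0)
lengths = rng.exponential(scale=1.2, size=230)   # placeholder for 230 length data
spacings = rng.exponential(scale=0.5, size=150)  # placeholder for 150 spacing data

lam_L = exponential_exponent(lengths)
lam_S = exponential_exponent(spacings)
print(f"λL = {lam_L:.2f}, λS = {lam_S:.2f}, Δλ = λS − λL = {lam_S - lam_L:.2f}")
```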

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.23-46 / 2021
  • Collaborative filtering, often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarity for new customers or products, because similarities are calculated from direct connections and common features among customers. For this reason, hybrid techniques that also use content-based filtering have been designed. In parallel, efforts have been made to solve these problems by applying the structural characteristics of social networks, calculating similarities indirectly through similar customers placed between two customers. This means creating a customer network based on purchasing data and calculating the similarity between two customers from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether the target customer will accept a recommendation, and the centrality metrics of the network can be utilized to calculate it. Different centrality metrics are important in that they may have different effects on recommendation performance, and in this study we further examine how the effect of these centrality metrics on recommendation performance varies with the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but also for existing customers and products. By considering a customer's purchase of an item as a link generated between the customer and the item on the network, predicting user acceptance of a recommendation becomes a prediction of whether a new link will be created between them. Since classification models fit the purpose of solving this binary problem of whether a link is formed or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the research. The data for performance evaluation were order data collected from an online shopping mall over four years and two months. The records from the first three years and eight months were used to construct the social network on which the experiment was conducted, and the records from the remaining four months were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that the recommendation acceptance rates of the centrality metrics differ for each algorithm at a meaningful level. In this work, we analyzed only four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks moderately across all models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality is characterized by distinct differences in performance according to the model. It ranks first in logistic regression, artificial neural network, and decision tree with numerically high performance, but records very low rankings, with low performance, in the support vector machine and k-nearest neighbors models. As the experimental results reveal, in a classification model, network centrality metrics over the subnetwork that connects two nodes can effectively predict the connectivity between the two nodes in a social network. Furthermore, each metric performs differently depending on the classification model type. This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness centrality could be considered to obtain higher performance for certain models.
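A much-simplified sketch of the link-prediction framing described above: centrality scores of the two endpoints of a candidate customer-item link are used as classifier features. The purchase records, the negative sampling, and the single logistic-regression classifier are placeholders, and computing centralities on the full graph (rather than on a per-pair subnetwork with test links held out) is a simplification relative to the paper's procedure.

```python
# Simplified sketch: build a customer-item purchase graph, compute the four
# centrality metrics examined in the study, and feed endpoint centralities to a
# classifier that predicts whether a (customer, item) link exists. Placeholder data.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical purchase records: (customer, item) pairs from the training window.
purchases = [("c1", "i1"), ("c1", "i2"), ("c2", "i2"), ("c3", "i1"), ("c3", "i3"),
             ("c4", "i3"), ("c2", "i4"), ("c5", "i4"), ("c5", "i1"), ("c4", "i2")]
G = nx.Graph(purchases)

centrality = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

def pair_features(u, v):
    """Concatenate the centrality scores of both endpoints of a candidate link."""
    return [centrality[m][n] for m in centrality for n in (u, v)]

# Positive examples: existing links; negatives: sampled non-links (simplified).
pos = list(G.edges())
neg = [("c1", "i3"), ("c2", "i1"), ("c3", "i4"), ("c4", "i1"), ("c5", "i2")]
X = np.array([pair_features(u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```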