• Title/Summary/Keyword: Process Condition

Search Results: 8,736

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety). If someone intends to discover the trend of an issue in SNS Big Data, this information can be used as an important new source for the creation of new values, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time-series graph of a topic for the duration of a month; (3) present the importance of a topic through a treemap based on a score system and frequency; (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, for processing various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine such data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; the interaction between data is easy, and it is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS). Based on this, we confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
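
The abstract names topic modeling as the issue-extraction step but gives no implementation detail. Below is a minimal sketch of LDA-based topic extraction over tokenized tweets using the gensim library; the model choice, parameters, and placeholder documents are illustrative assumptions, not the TITS implementation.

```python
# Minimal LDA topic-extraction sketch (illustrative; not the TITS implementation).
# Assumes tweets have already been tokenized, with stop words removed and
# nouns extracted, as the abstract describes.
from gensim import corpora, models

tokenized_tweets = [
    ["economy", "policy", "election"],
    ["election", "candidate", "debate"],
    ["weather", "typhoon", "warning"],
]  # placeholder documents

dictionary = corpora.Dictionary(tokenized_tweets)
corpus = [dictionary.doc2bow(doc) for doc in tokenized_tweets]

# num_topics is an assumed parameter; the abstract does not specify one.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)

# Daily ranking of topic keyword sets, as in function (1) of TITS.
for topic_id, keywords in lda.show_topics(num_topics=2, num_words=5, formatted=False):
    print(topic_id, [word for word, _ in keywords])
```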

Changes in blood pressure and determinants of blood pressure level and change in Korean adolescents (성장기 청소년의 혈압변화와 결정요인)

  • Suh, Il;Nam, Chung-Mo;Jee, Sun-Ha;Kim, Suk-Il;Kim, Young-Ok;Kim, Sung-Soon;Shim, Won-Heum;Kim, Chun-Bae;Lee, Kang-Hee;Ha, Jong-Won;Kang, Hyung-Gon;Oh, Kyung-Won
    • Journal of Preventive Medicine and Public Health / v.30 no.2 s.57 / pp.308-326 / 1997
  • Many studies have led to the notion that essential hypertension in adults is the result of a process that starts early in life: investigation of blood pressure (BP) in children and adolescents can therefore contribute to knowledge of the etiology of the condition. A unique longitudinal study on BP in Korea, known as the Kangwha Children's Blood Pressure (KCBP) Study, was initiated in 1986 to investigate changes in BP in children. This study is a part of the KCBP Study. The purposes of this study are to show changes in BP and to determine the factors affecting BP level and change in Korean adolescents between the ages of 12 and 16 years. A total of 710 students (335 males, 375 females) who were in the first grade at junior high school (12 years old) in 1992 in Kangwha County, Korea, were followed with annual measurements of BP and related factors (anthropometric, serologic, and dietary factors) up to 1996. A total of 562 students (242 males, 320 females) completed all five annual examinations. The main results are as follows: 1. For males, mean systolic and diastolic BP at ages 12 and 16 years were 108.7 mmHg and 118.1 mmHg (systolic), and 69.5 mmHg and 73.4 mmHg (diastolic), respectively. BP level was highest when students were 15 years old. For females, mean systolic and diastolic BP at ages 12 and 16 years were 114.4 mmHg and 113.5 mmHg (systolic), and 75.2 mmHg and 72.1 mmHg (diastolic), respectively. BP level reached its highest point when they were 13-14 years old. 2. Anthropometric variables (height, weight, body mass index, etc.) increased constantly during the study period for males; however, the rate of increase declined for females after age 15 years. Serum total cholesterol decreased and triglyceride increased with age for males, but neither showed any significant trend for females. Total fat intake increased at age 16 years compared with that at age 14 years. The compositions of carbohydrate, protein, and fat in total energy intake were 66.2:12.0:19.4 and 64.1:12.1:21.8 at ages 14 and 16 years, respectively. 3. Most anthropometric measures, especially height, body mass index (BMI), and triceps skinfold thickness, showed a significant correlation with BP level in both sexes. When BMI was adjusted for, serum total cholesterol showed a significant negative correlation with systolic BP at age 12 years in males, but at age 14 years the direction of the correlation changed to positive. In females, serum total cholesterol was negatively correlated with diastolic BP at ages 15 and 16 years. Triglyceride and creatinine showed positive correlations with systolic and diastolic BP in males, but no correlation in females. There were no consistent findings between nutrient intake and BP level; however, protein intake correlated positively with diastolic BP level in males. 4. BP change was positively associated with changes in BMI and serum total cholesterol in both sexes. Change in creatinine was associated with BP change positively in males and negatively in females. Male students whose sodium intake was high showed higher systolic and diastolic BP, and female students whose total fat intake was high maintained a lower level of BP. The major determinant of BP change was BMI in both sexes.

Structure of Export Competition between Asian NIEs and Japan in the U.S. Import Market and Exchange Rate Effects (한국(韓國)의 아시아신흥공업국(新興工業國) 및 일본(日本)과의 대미수출경쟁(對美輸出競爭) : 환율효과(換率效果)를 중심(中心)으로)

  • Jwa, Sung-hee
    • KDI Journal of Economic Policy / v.12 no.2 / pp.3-49 / 1990
  • This paper analyzes U.S. demand for imports from the Asian NIEs and Japan, utilizing the Almost Ideal Demand System (AIDS) developed by Deaton and Muellbauer, with an emphasis on the effect of changes in the exchange rate. The empirical model assumes a two-stage budgeting process in which the first stage represents the allocation of total U.S. demand among three groups: the Asian NIEs and Japan, six Western developed countries, and the U.S. domestic non-tradables and import-competing sector. The second stage represents the allocation of total U.S. imports from the Asian NIEs and Japan among them, by country. Following the AIDS model, the share equation for the Asian NIEs and Japan in U.S. nominal GNP is estimated as a single equation for the first stage. The share equations for those five countries in total U.S. imports are estimated as a system with the general demand restrictions of homogeneity, symmetry, and adding-up, together with polynomially distributed lag restrictions. The negativity condition is also satisfied in all cases. The overall results of these estimations, using quarterly data from the first quarter of 1972 to the fourth quarter of 1989, are quite promising in terms of the significance of individual estimators and other statistics. The conclusions drawn from the estimation results and the derived demand elasticities can be summarized as follows. First, the exports of each Asian NIE to the U.S. are competitive with (substitutes for) Japan's exports, while complementary to the exports of fellow NIEs, with the exception of the competitive relation between Hong Kong and Singapore. Second, the exports of each Asian NIE and of Japan to the U.S. are competitive with those of the Western developed countries, while complementary to the U.S. non-tradables and import-competing sector. Third, as far as both the first and second stages of budgeting are considered, the imports from each Asian NIE and Japan are luxuries in total U.S. consumption. However, when only the second budgeting stage is considered, the imports from Japan and Singapore are luxuries in U.S. imports from the NIEs and Japan, while those from Korea, Taiwan, and Hong Kong are necessities. Fourth, the above results are evidenced more concretely in their implied exchange rate effects. It appears that, in general, a change in the yen-dollar exchange rate will have at least as great an impact on an NIE's share and volume of exports to the U.S., though in the opposite direction, as a change in the exchange rate of the NIE's own currency vis-à-vis the dollar. The Asian NIEs, therefore, should counteract yen-dollar movements in order to stabilize their exports to the U.S. More specifically, Korea should depreciate the value of the won relative to the dollar by approximately the same proportion as the depreciation rate of the yen vis-à-vis the dollar in order to maintain the volume of Korean exports to the U.S. In the worst-case scenario, Korea should devalue the won by three times the magnitude of the yen's depreciation rate in order to keep its market share in the aforementioned five countries' total exports to the U.S. Finally, this study provides additional information that may support the empirical findings on the competitive relations among the Asian NIEs and Japan. The correlation matrices among the structures of those five countries' exports to the U.S. during the 1970s and 1980s were estimated, with the export structure constructed as the shares of each of the 29 industrial sectors' exports, as defined by the 3-digit KSIC, in total exports to the U.S. from each individual country. In general, the correlations between each of the four Asian NIEs and Japan, and that between Hong Kong and Singapore, are all far below 0.5, while those among the Asian NIEs themselves (except for the one between Hong Kong and Singapore) all greatly exceed 0.5. If the U.S. tends to import goods in each specific sector from different countries in a relatively constant proportion, the export structures of those countries will probably exhibit a high correlation. To take this hypothesis to the extreme, if the U.S. maintained an absolutely fixed ratio between its imports from any two countries for each of the 29 sectors, the correlation between the export structures of these two countries would be perfect. Therefore, since any two goods purchased in a fixed proportion can be classified as close complements, a high correlation between export structures implies a complementary relationship between them; conversely, a low correlation implies a competitive relationship. According to this interpretation, the pattern formed by the correlation coefficients among the five countries' export structures to the U.S. is consistent with the empirical findings of the regression analysis.
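
For context, the AIDS budget-share equation of Deaton and Muellbauer underlying the estimated share system (quoted here in its standard published form, not from the paper itself) is

$$w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\!\left(\frac{x}{P}\right),$$

where $w_i$ is the expenditure share of source $i$, $p_j$ are prices, $x$ is total expenditure, and $\ln P = \alpha_0 + \sum_k \alpha_k \ln p_k + \frac{1}{2}\sum_k \sum_j \gamma_{kj} \ln p_k \ln p_j$ is the AIDS price index. The restrictions the paper imposes correspond to adding-up ($\sum_i \alpha_i = 1$, $\sum_i \gamma_{ij} = 0$, $\sum_i \beta_i = 0$), homogeneity ($\sum_j \gamma_{ij} = 0$), and symmetry ($\gamma_{ij} = \gamma_{ji}$).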

Genetic Counseling in Korean Health Care System (한국 의료제도와 유전상담 서비스의 구축)

  • Kim, Hyon-J.
    • Journal of Genetic Medicine / v.8 no.2 / pp.89-99 / 2011
  • Over the years, the Korean health care system has improved the delivery of quality care to the general population in many areas of health. The system is now recognized around the world as a most cost-effective one. It is covered by the uniform national health insurance policy, of which most people in Korea are mandatory policy holders. Genetic counseling service, however, which is well recognized as an integral part of clinical genetics service and deals with the diagnosis and management of genetic conditions as well as genetic information presentation and family support, has yet to be delivered in a comprehensive way to the patients and families in need. Two major obstacles to providing genetic counseling service in the Korean health care system are identified. One is the lack of recognition by the national health insurance that genetic counseling is a necessary service: genetic counseling consumes significant time in delivery, and the current very low fee schedule for physician services makes it very difficult to provide meaningful service. The second is the critical shortage of qualified professionals in the fields of medical genetics and genetic counseling who can provide genetic counseling in a clinical setting. However, recognition and understanding of the fact that the scope and role of genetic counseling are expanding in the post-genomic era of personalized medicine will lead to efforts to overcome these obstacles. Concerted efforts are needed from government health care policy makers, to establish adequate reimbursement coverage for clinical genetics services and genetic counseling, and from professional communities, to develop educational programs and a certification process for professional genetic counselors, in order to deliver the much-needed clinical genetic counseling service in Korea.

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading. As a result, big data analysis is increasingly expected to be performed by those requesting the analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with particular attention focused on text data. The emergence of new platforms and techniques on the web has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques utilized for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection; thus, it is essential to analyze the entire collection at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method can be used for topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost, because documents can be analyzed in each location without first being combined. However, despite many advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each sub-unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming that the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied as thoroughly as other topic modeling approaches. In this paper, we propose a topic modeling approach that solves the above two problems, as sketched below. First of all, we divide the entire document cluster (global set) into sub-clusters (local sets), and generate a reduced entire document cluster (RGS, reduced global set) that consists of representative documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to those of topic modeling over the entire collection, and we also proposed a reasonable method for comparing the results of both methods.
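
The mapping of local topics to RGS (global) topics can be pictured as matching topic-word distributions. Below is a minimal sketch under assumed details: gensim LDA per set, a dictionary shared by all sets, and cosine similarity as the matching rule. The paper's exact RGS construction and mapping procedure are not reproduced here.

```python
# Divide-and-conquer topic modeling with local-to-global topic mapping.
# Illustrative assumptions: gensim LDA, one dictionary shared by all sets,
# and cosine similarity between topic-word distributions as the mapper.
import numpy as np
from gensim import corpora, models

def train_lda(token_docs, dictionary, num_topics):
    """Train one LDA model on one (local or reduced-global) document set."""
    corpus = [dictionary.doc2bow(d) for d in token_docs]
    return models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary)

def map_local_to_global(local_lda, global_lda):
    """Assign each local topic to the most similar global (RGS) topic by
    cosine similarity of topic-word rows; both models must share a
    dictionary so that the term columns align."""
    L = local_lda.get_topics()   # shape: (local topics, vocabulary)
    G = global_lda.get_topics()  # shape: (global topics, vocabulary)
    L = L / np.linalg.norm(L, axis=1, keepdims=True)
    G = G / np.linalg.norm(G, axis=1, keepdims=True)
    return (L @ G.T).argmax(axis=1)  # local topic id -> global topic id

# Usage: train global_lda on the RGS and one local_lda per sub-cluster with
# train_lda(...), then align each local model with map_local_to_global(...).
```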

Geology of Athabasca Oil Sands in Canada (캐나다 아사바스카 오일샌드 지질특성)

  • Kwon, Yi-Kwon
    • The Korean Journal of Petroleum Geology / v.14 no.1 / pp.1-11 / 2008
  • As conventional oil and gas reservoirs become depleted, interest in oil sands has rapidly increased over the last decade. Oil sands are a mixture of bitumen, water, and host sediments of sand and clay. Most oil sand is unconsolidated sand held together by bitumen. Bitumen has an in-situ viscosity of >10,000 centipoise (cP) at reservoir conditions and an API gravity between 8° and 14°. The largest oil sand deposits are in Alberta and Saskatchewan, Canada. The reserves are estimated at 1.7 trillion barrels of initial oil-in-place and 173 billion barrels of remaining established reserves. Alberta has a number of oil sand deposits, which are grouped into three oil sand development areas, the Athabasca, Cold Lake, and Peace River areas, with the largest current bitumen production from Athabasca. The principal oil sand deposits consist of the McMurray Formation and Wabiskaw Member in the Athabasca area, the Gething and Bluesky formations in the Peace River area, and the relatively thin multi-reservoir deposits of the McMurray, Clearwater, and Grand Rapid formations in the Cold Lake area. The reservoir sediments were deposited in the foreland basin (Western Canada Sedimentary Basin) formed by the collision between the Pacific and North American plates and the subsequent thrusting movements in the Mesozoic. The deposits are underlain by basement rocks of Paleozoic carbonates with highly variable topography. The oil sand deposits were formed during the Early Cretaceous transgression which occurred along the Cretaceous Interior Seaway in North America. The oil-sands-hosting McMurray and Wabiskaw deposits in the Athabasca area consist of lower fluvial and upper estuarine-offshore sediments, reflecting the broad, overall transgression. The deposits are characterized by facies heterogeneity of channelized reservoir sands and non-reservoir muds. The main reservoir bodies of the McMurray Formation are fluvial and estuarine channel-point bar complexes interbedded with fine-grained deposits formed in floodplain, tidal flat, and estuarine bay environments. The Wabiskaw deposits (the basal member of the Clearwater Formation) commonly comprise sheet-shaped offshore muds and sands, but occasionally show deep incision into the McMurray deposits, forming channelized reservoir sand bodies of oil sands. In Canada, bitumen in oil sand deposits is produced by surface mining or by in-situ thermal recovery processes. Bitumen sands recovered by surface mining are converted into synthetic crude oil through extraction and upgrading processes. On the other hand, bitumen produced by in-situ thermal recovery is transported to a refinery after only a bitumen blending process. In-situ thermal recovery technology is represented by Steam-Assisted Gravity Drainage (SAGD) and Cyclic Steam Stimulation (CSS). These technologies are based on steam injection into bitumen sand reservoirs to increase in-situ reservoir temperature and bitumen mobility. In oil sand reservoirs, the efficiency of steam propagation is controlled mainly by reservoir geology. Accordingly, an understanding of the geological factors and characteristics of oil sand reservoir deposits is a prerequisite for well-designed development planning and effective bitumen production. As significant geological factors and characteristics of oil sand reservoir deposits, this study suggests: (1) pay of bitumen sands and connectivity; (2) bitumen content and saturation; (3) geologic structure; (4) distribution of mud baffles and plugs; (5) thickness and lateral continuity of mud interbeds; (6) distribution of water-saturated sands; (7) distribution of gas-saturated sands; (8) direction of lateral accretion of point bars; (9) distribution of diagenetic layers and nodules; and (10) texture and fabric changes within reservoir sand bodies.

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.77-97 / 2019
  • The large amount of data that emerges from the hyper-connected environment of the Fourth Industrial Revolution is a major factor that distinguishes it from existing production environments. This environment has the two-sided feature that it produces data while using it, and the data so produced creates further value. Due to the massive scale of data, future information systems need to process more data, in terms of quantity, than existing information systems. In terms of quality, they also require the ability to extract exactly the needed information from that large amount of data. In a small-scale information system, a person can accurately understand the system and obtain the necessary information, but in complex systems that are difficult to understand accurately, acquiring the desired information becomes increasingly difficult. In other words, more accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient information-system performance can be addressed by building a semantic web, which enables various kinds of information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. The military, like most other organizations, has introduced IT, and most of its work is now done through information systems. As existing systems come to contain increasingly large amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system has a large semantic data network through connections with other systems, has a wide range of databases that can be utilized, and has the advantage of searching more precisely and quickly through relationships between predefined concepts. In this paper, we propose a defense ontology as a method for effective data management and decision support. In order to judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, as the complicated logistics information system with its large amount of data had become difficult to use. It takes pre-specified information from the existing logistics system and displays it as web pages; however, it is difficult to check anything beyond the few items specified in advance, extending it with additional functions is time-consuming, and it is organized by category without a search function. Therefore, like the existing system, it can be easily utilized only by users who already know the system well. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. To construct it, the data of the existing logistics system were reorganized as an ontology, and useful functions such as performance-based logistics support contract management and a component dictionary were additionally identified and included in the ontology. 
To confirm that the constructed ontology can be used for decision support, meaningful analysis functions such as calculating aircraft utilization rates and querying performance-based logistics contracts were implemented. In particular, in contrast to past ontology studies that built static ontology databases, this study represents time-series data whose values change over time, such as the state of an aircraft by date, in the ontology; through the constructed ontology, we confirmed that the utilization rate can be calculated not only in the conventional way but also according to various other criteria. In addition, the data related to performance-based logistics contracts, introduced as a new maintenance method for aircraft and other munitions, can be queried in various ways, and the performance indexes used in such contracts are easy to calculate through reasoning and built-in functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, confirming the usability of the constructed ontology. Finally, the failure rate and reliability of each component can be calculated from MTBF data of selected items based on actual part consumption records, and from these the mission reliability and system reliability are calculated. To confirm the usability of the constructed ontology-based logistics situation management system, we evaluated the proposed system with the Technology Acceptance Model (TAM), a representative model for measuring the acceptability of a technology, and confirmed that it is more useful and convenient than the existing system.
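
As an illustration of the kind of ontology-based query the abstract describes (aircraft state by date feeding a utilization-rate calculation), here is a minimal sketch using Python's rdflib. The namespace, properties, and data are invented for illustration and are not the paper's defense ontology.

```python
# Minimal ontology sketch: aircraft status by date, queried with SPARQL to
# compute a utilization (availability) rate. All names and data below are
# hypothetical; this is not the paper's defense ontology.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/defense#")
g = Graph()

# Time-series facts: one status observation per aircraft per date.
observations = [
    ("aircraft1", "2019-01-01", "operational"),
    ("aircraft1", "2019-01-02", "maintenance"),
    ("aircraft1", "2019-01-03", "operational"),
]
for i, (ac, date, status) in enumerate(observations):
    obs = EX[f"obs{i}"]
    g.add((obs, RDF.type, EX.StatusObservation))
    g.add((obs, EX.aircraft, EX[ac]))
    g.add((obs, EX.date, Literal(date)))
    g.add((obs, EX.status, Literal(status)))

# Utilization rate = operational days / observed days.
total = len(list(g.subjects(RDF.type, EX.StatusObservation)))
up = len(list(g.query(
    "SELECT ?o WHERE { ?o <http://example.org/defense#status> 'operational' }")))
print(f"utilization rate: {up / total:.2f}")  # 0.67
```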

A Study on the Tree Surgery Problem and Protection Measures in Monumental Old Trees (천연기념물 노거수 외과수술 문제점 및 보존 관리방안에 관한 연구)

  • Jung, Jong Soo
    • Korean Journal of Heritage: History & Science / v.42 no.1 / pp.122-142 / 2009
  • This study explored domestic and international theories on the maintenance and health enhancement of old and big trees, carried out an anatomical survey of tree surgery sites to assess the current status of domestic surgery together with a perception survey of an expert group, and drew the following conclusions while suggesting a reform plan. First, an analysis of the correlations of the 67 subject trees with their ages, growth status, and surroundings revealed that outcomes were closely related to positional characteristics and damage size, but only weakly related to the filler materials used. Second, the size of the affected part was most frequently under 0.09 m² at bough-sheared parts, and the hollow size by position was biggest at 'root + stem', starting from behind the main root and stem; the correlation analysis elicited the same result for the group with low correlation. Third, the problem was serious where fillers (especially urethane) were charged into big hollows or exposed roots produced behind the root and stem part, or used for surface processing; the benefit of filling the hollow was found to be small. Fourth, the surface processing of the fillers currently used (artificial bark) is mainly 'epoxy + woven fabric + cork', but it is not flexible, which has caused frequent cracks and cracked surfaces at the joint with the tree-textured part. Fifth, the external status of the operated part correlated very highly with closeness, surface condition, formation of adhesive tissue, and the internal survey results. Sixth, the factor most influential in flushing through wrong management of an old and big tree was banking, and wrong pruning was the main source of damage to the above-ground parts; in pruning, a small bough can easily recover from damage by forming adhesive tissue when it is cut by the standard method. Seventh, the parameter most affecting the handling time of related business for an old and big tree is the need to raise the awareness of managers and related agencies. Eighth, a reform plan in an institutional aspect can include the arrangement of laws and organizations for the management and preservation of old and big trees. This study, which prepared a reform plan through a status survey of designated old and big trees, has the limitation that the plan was induced from a status survey through individual research, and the weakness that its grounds are not supported by statistical data; this can be complemented by subsequent studies.

A Study on Estimating Shear Strength of Continuum Rock Slope (연속체 암반비탈면의 강도정수 산정 연구)

  • Kim, Hyung-Min;Lee, Su-gon;Lee, Byok-Kyu;Woo, Jae-Gyung;Hur, Ik;Lee, Jun-Ki
    • Journal of the Korean Geotechnical Society / v.35 no.5 / pp.5-19 / 2019
  • Considering the natural phenomenon in which steep slopes (65°~85°) consisting of rock mass remain stable for decades, slopes steeper than 1:0.5 (the standard slope angle for blasted rock) may be applied, at the design and initial construction stages, under geotechnical conditions similar to those above. In the process of analysing the stability of a good-to-fair continuum rock slope that can be designed as a steep slope, a general method of estimating rock mass strength properties from a design practice perspective is required; practical and generalized engineering methods of determining the properties of a rock mass are important for such slopes. The Generalized Hoek-Brown (H-B) failure criterion and GSI (Geological Strength Index), as revised and supplemented by Hoek et al. (2002), were assessed as rock mass characterization systems fully taking into account the effects of discontinuities, and are widely utilized as a method for calculating equivalent Mohr-Coulomb (M-C) shear strength (by balancing the areas) according to stress changes. With the concept of calculating equivalent M-C shear strength according to the change of the confining stress range, the equivalent shear strength on a slope changes sensitively with changes in the maximum confining stress (σ'3max, or normal stress), making it difficult to use in practical design. In this study, a method of estimating the strength properties (an iso-angle division method) that can be applied universally within the maximum confining stress range for a good-to-fair continuum rock mass slope is proposed by applying the H-B failure criterion. In order to assess the validity and applicability of the proposed method of estimating the shear strength (A), rock slopes on steep slopes near existing working design sites were selected as study objects by rock type (igneous, metamorphic, sedimentary) and compared with the equivalent M-C shear strength (balancing the areas) proposed by Hoek. The equivalent M-C shear strengths of the balancing-the-areas method and the iso-angle division method were estimated using the RocLab program (geotechnical properties calculation software based on the H-B failure criterion (2002)), using as basic data the laboratory rock triaxial compression tests at the existing working design sites and the face mapping of discontinuities on the rock slopes of the study area. The equivalent M-C shear strength calculated by the balancing-the-areas method tended to show very large or small cohesion and internal friction angles (generally greater than 45°). The equivalent M-C shear strength of the iso-angle division method lies in between the equivalent M-C shear properties of the balancing-the-areas method, with internal friction angles in the range of 30° to 42°. We compared the shear strength (A) of the iso-angle division method at the study area with the shear strength (B) of existing working design sites with a similar or the same RMR grade. The applicability of the proposed iso-angle division method was evaluated indirectly through the results of stability analyses (limit equilibrium analysis and finite element analysis) performed with these strength properties. The difference between shear strengths A and B is about 10%. 
LEM results (in wet conditions) showed Fs(A) = 14.08~58.22 (average 32.9) and Fs(B) = 18.39~60.04 (average 32.2), which were similar for the same rock types. The FEM results showed displacement (A) = 0.13~0.65 mm (average 0.27 mm) and displacement (B) = 0.14~1.07 mm (average 0.37 mm). Using the GSI and the Hoek-Brown failure criterion, significant results could be identified in the application evaluation. Therefore, the strength properties of rock mass estimated by the iso-angle division method can be applied as practical shear strength.
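
For reference, the Generalized Hoek-Brown failure criterion of Hoek et al. (2002), on which both the balancing-the-areas method and the proposed iso-angle division method rest, is (quoted in its standard published form, not from the paper itself)

$$\sigma_1' = \sigma_3' + \sigma_{ci}\left(m_b\,\frac{\sigma_3'}{\sigma_{ci}} + s\right)^{a},$$

with $m_b = m_i \exp\!\left(\frac{GSI-100}{28-14D}\right)$, $s = \exp\!\left(\frac{GSI-100}{9-3D}\right)$, and $a = \frac{1}{2} + \frac{1}{6}\left(e^{-GSI/15} - e^{-20/3}\right)$, where $\sigma_{ci}$ is the uniaxial compressive strength of the intact rock, $m_i$ the intact-rock material constant, and $D$ the disturbance factor. Equivalent M-C parameters $(c', \phi')$ are then fitted to this curve over a confining stress range bounded above by $\sigma'_{3max}$, which is why the fitted strength is sensitive to that bound.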

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection in ICT infrastructure are becoming important. System monitoring data is multidimensional time series data, and when dealing with it we face the difficulty of considering both the characteristics of multidimensional data and the characteristics of time series data. When dealing with multidimensional data, the correlation between variables should be considered; existing methods, such as probabilistic and linear-based or distance-based approaches, degrade due to the limitation known as the curse of dimensionality. In addition, time series data is preprocessed by applying sliding-window techniques and time series decomposition for autocorrelation analysis. These techniques increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field; statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when data is non-homogeneous, and they do not detect local outliers well. The regression-analysis method learns a regression formula based on parametric statistics and then compares predicted and actual values to detect abnormality. Anomaly detection using regression analysis has the disadvantage that performance is lowered when the model is not solid or the data contain noise or outliers, and there is the restriction that training data without noise or outliers should be used. The autoencoder, an artificial neural network, is trained to produce output as similar as possible to its input. It has many advantages compared to existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy distributional or linearity assumptions, and it can be trained in an unsupervised manner without labeled data. However, it retains the limitation in identifying local outliers in multidimensional data, and there is the problem that the dimensionality of the data greatly increases due to the characteristics of time series data. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve on the limitations of local outlier identification in multidimensional data. Multimodal architectures are commonly used to learn different types of inputs, such as voice and image; the different modalities share the autoencoder's bottleneck and learn the correlations between them. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. In general, the conditional input is a categorical variable, but in this study time was used as the condition to learn periodicity. The CMAE model proposed in this paper was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance of the autoencoders for 41 variables was confirmed for the proposed model and the comparison models. Restoration performance differs by variable; restoration operates normally, with small loss values, for the Memory, Disk, and Network modalities in all three autoencoder models. 
The Process modality did not show a significant difference across the three models, and the CPU modality showed excellent performance with CMAE. ROC curves were prepared to evaluate the anomaly detection performance of the proposed model and the comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE. In particular, the recall was 0.9828 for CMAE, confirming that it detects almost all of the abnormalities. The accuracy of the model improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows add procedures that must be managed, and their dimensional increase can slow down inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
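
A minimal sketch of the architecture the abstract describes: one encoder per modality sharing a bottleneck, with a time condition concatenated to the inputs and the decoder. The two-modality split, layer sizes, and cyclic hour encoding are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a conditional multimodal autoencoder (CMAE-style), assuming two
# modalities (e.g., CPU and Memory metrics) and a time-of-day condition.
# Layer sizes and the sinusoidal time encoding are illustrative assumptions.
import math
import torch
import torch.nn as nn

class ConditionalMultimodalAE(nn.Module):
    def __init__(self, dim_a, dim_b, cond_dim=2, hidden=16, bottleneck=8):
        super().__init__()
        # One encoder per modality; the condition is appended to each input.
        self.enc_a = nn.Sequential(nn.Linear(dim_a + cond_dim, hidden), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(dim_b + cond_dim, hidden), nn.ReLU())
        # Shared bottleneck fuses the modalities, as in the MAE idea.
        self.bottleneck = nn.Linear(2 * hidden, bottleneck)
        self.dec_a = nn.Sequential(nn.Linear(bottleneck + cond_dim, hidden),
                                   nn.ReLU(), nn.Linear(hidden, dim_a))
        self.dec_b = nn.Sequential(nn.Linear(bottleneck + cond_dim, hidden),
                                   nn.ReLU(), nn.Linear(hidden, dim_b))

    def forward(self, xa, xb, cond):
        ha = self.enc_a(torch.cat([xa, cond], dim=1))
        hb = self.enc_b(torch.cat([xb, cond], dim=1))
        z = self.bottleneck(torch.cat([ha, hb], dim=1))
        return (self.dec_a(torch.cat([z, cond], dim=1)),
                self.dec_b(torch.cat([z, cond], dim=1)))

# Time condition: encode hour-of-day cyclically so 23:00 and 00:00 are close.
hours = torch.tensor([[3.0], [15.0]])
cond = torch.cat([torch.sin(2 * math.pi * hours / 24),
                  torch.cos(2 * math.pi * hours / 24)], dim=1)
model = ConditionalMultimodalAE(dim_a=5, dim_b=4)
xa, xb = torch.randn(2, 5), torch.randn(2, 4)
ra, rb = model(xa, xb, cond)
# Anomaly score: per-sample reconstruction error summed across modalities.
score = ((ra - xa) ** 2).mean(dim=1) + ((rb - xb) ** 2).mean(dim=1)
```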