• Title/Summary/Keyword: Construction-Field-Data


Determination of shear wave velocity profiles in soil deposit from seismic piezo-cone penetration test (탄성파 피에조콘 관입 시험을 통한 국내 퇴적 지반의 전단파 속도 결정)

  • Sun Chung Guk;Jung Gyungja;Jung Jong Hong;Kim Hong-Jong;Cho Sung-Min
    • Korean Society of Earth and Exploration Geophysicists: Conference Proceedings / 2005.09a / pp.125-153 / 2005
  • It has been widely known that the seismic piezo-cone penetration test (SCPTU) is one of the most useful techniques for investigating geotechnical characteristics, including dynamic soil properties. As practical applications in Korea, SCPTU was carried out at two sites in Busan and four sites in Incheon, which are mainly composed of alluvial or marine soil deposits. From the SCPTU waveform data obtained at the testing sites, the first arrival times of shear waves and the corresponding time differences with depth were determined using the cross-over method, and the shear wave velocity (VS) profiles were derived by the refracted ray path method based on Snell's law; the VS profiles were similar in trend to the cone tip resistance (qt) profiles. In the Incheon area, the testing depths of SCPTU were deeper than those of conventional down-hole seismic tests. Moreover, for the application of conventional CPTU to earthquake engineering practice, correlations between VS and CPTU data were deduced from the SCPTU results. For the empirical evaluation of VS for all soils, together with the clays and sands classified unambiguously in this study by the soil behavior type classification index (IC), the authors suggest VS-CPTU data correlations expressed as a function of four parameters, qt, fs, $\sigma_{v0}$ and Bq, determined by multiple statistical regression modeling. Despite the incompatible strain levels of the down-hole seismic test during SCPTU and the conventional CPTU, it is shown that the VS-CPTU data correlations for all soils, clays, and sands suggested in this study are applicable to the preliminary estimation of VS for Korean deposits and are more reliable than previous correlations proposed by other researchers.
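The interval-velocity computation the abstract describes can be sketched as follows: given first arrival times picked by the cross-over method at successive test depths, the shear wave velocity over each depth interval is the difference in slant travel distance divided by the difference in arrival time. This is a minimal sketch with illustrative depths, arrival times, and source offset, not the paper's actual data or full Snell's-law ray tracing.

```python
import math

def interval_vs(depths, arrival_times, source_offset=1.0):
    """Interval shear wave velocity (m/s) between successive test depths.

    Uses the slant travel path from a surface source offset, a simplified
    form of the refracted ray path approach. depths in m, times in s.
    """
    vs = []
    for (d1, t1), (d2, t2) in zip(zip(depths, arrival_times),
                                  zip(depths[1:], arrival_times[1:])):
        l1 = math.hypot(d1, source_offset)  # slant distance to upper depth
        l2 = math.hypot(d2, source_offset)  # slant distance to lower depth
        vs.append((l2 - l1) / (t2 - t1))    # velocity over the interval
    return vs

# Hypothetical first arrivals picked by the cross-over method
print(interval_vs([4.0, 6.0, 8.0], [0.030, 0.043, 0.055]))
```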


Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services / v.16 no.1 / pp.67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategy used to exploit item importance, such approaches are classified as weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, the importance of a certain transaction can be seen through database analysis, because a transaction's weight is higher if it contains many items with high weights. We not only analyze the advantages and disadvantages but also compare the performance of the best-known algorithms in this field. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are various other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. They need no additional database scan after the WIT-tree is constructed, since each node of the WIT-tree holds item information such as the item and its transaction IDs.
In particular, traditional algorithms perform many database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain them. WIT-FWIs-MODIFY adds a feature that reduces the operations needed to calculate the frequency of a new itemset, and WIT-FWIs-DIFF exploits the difference of two itemsets' transaction sets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test evaluates the stability of each algorithm as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF mines more efficiently than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, has the worst efficiency because it requires far more computation than the others on average.
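The core notion above, that a transaction's weight is the mean of its item weights and an itemset's weighted support is the sum of the weights of the transactions containing it, can be sketched with a brute-force enumeration. This is an illustrative sketch of the weighted-support definition only, not the WIT-tree structure or any of the compared algorithms; the database and weights are made up.

```python
from itertools import combinations

def weighted_frequent_itemsets(db, item_weights, min_wsup):
    """Enumerate itemsets whose weighted support meets min_wsup.

    A transaction's weight is the mean weight of its items; an itemset's
    weighted support is the sum of the weights of the transactions that
    contain it. Brute-force sketch, exponential in the number of items.
    """
    twt = [sum(item_weights[i] for i in t) / len(t) for t in db]
    items = sorted({i for t in db for i in t})
    result = {}
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            wsup = sum(w for t, w in zip(db, twt) if set(cand) <= t)
            if wsup >= min_wsup:
                result[cand] = round(wsup, 3)
    return result

db = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
weights = {"a": 0.9, "b": 0.6, "c": 0.3}
print(weighted_frequent_itemsets(db, weights, min_wsup=1.5))
```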

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • Development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Thanks to recent interest and research on various algorithms, the field has achieved greater technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence; it aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. Increasingly, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases support intelligent processing in many AI applications, such as the question answering system of a smart speaker. However, building a useful knowledge base is time-consuming and still requires substantial expert effort. In recent years, much knowledge-based AI research and technology uses DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of accuracy, since the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of the appropriate sentences from which to extract triples, and value selection and transformation into the RDF triple structure. Wikipedia infobox structures are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process.
Through this proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort experts spend constructing instances according to the ontology schema.
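The training-data generation step mentioned above, adding BIO tags to sentences so a CRF or Bi-LSTM-CRF can learn to extract attribute values, can be sketched as follows. The tokenization, label name, and example sentence are illustrative; the paper's actual tag scheme and Korean-language preprocessing are not reproduced here.

```python
def bio_tags(tokens, value_tokens, label):
    """Tag tokens with B-/I-/O for one infobox attribute value.

    A sketch of how sentence-level training data could be built from a
    Wikipedia dump: the first occurrence of the attribute value in the
    sentence is marked B-label / I-label, everything else is O.
    """
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = "B-" + label           # beginning of the value span
            for j in range(i + 1, i + n):
                tags[j] = "I-" + label       # inside the value span
            break
    return tags

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea", "."]
print(bio_tags(tokens, ["South", "Korea"], "country"))
```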

Definition and Division in Intelligent Service Facility for Integrating Management (지능화시설의 통합운영관리를 위한 정의 및 구분에 관한 연구)

  • PARK, Jeong-Woo;YIM, Du-Hyun;NAM, Kwang-Woo;KIM, Jin-Young
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.4 / pp.52-62 / 2016
  • The Smart City is urban development for complex problem solving that provides convenience and safety for citizens; it is a blueprint for future cities. In 2008, the Korean government defined the construction, management, and government support of U-Cities in the Act on the Construction, Etc. of Ubiquitous Cities (Ubiquitous City Act), which included definitions of the terms used in the act. In addition, the Minister of Land, Infrastructure and Transport established a "ubiquitous city master plan" based on this legislation. The concept of U-Cities is complex, due to the mix of informatization and urban planning. Because of this complexity, the foundation of the relevant regulations is inadequate, which impedes the establishment and implementation of practical plans. Smart City intelligent service facilities are not easy to define and classify, because the technology changes rapidly and includes various devices for gathering and expressing information. The purpose of this study is to complement the legal definition of the intelligent service facility, which is necessary for integrated management and operation. The related laws and regulations on U-Cities were analyzed using text-mining techniques to identify insufficient legal definitions of intelligent service facilities. Using data gathered from interviews with officials responsible for constructing U-Cities, this study identified problems generated by implementing intelligent service facilities at the field level. The resulting definitions should contribute to more efficient management and lay the foundation for integrated utilization between departments, including providing a clear concept for establishing the renewable five-year plans for U-Cities.
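A first step of the text-mining analysis described above, scanning each statute for how often candidate facility terms actually appear, can be sketched like this. The sample texts and terms are entirely illustrative stand-ins, not the actual statutes or the study's term list.

```python
import re
from collections import Counter

def term_frequencies(documents, terms):
    """Count how often candidate facility terms appear in each law text.

    A minimal sketch of the term-frequency step of a text-mining pass
    over regulations: tokenize each document, then look up each term.
    """
    rows = {}
    for name, text in documents.items():
        tokens = re.findall(r"[\w-]+", text.lower())
        counts = Counter(tokens)
        rows[name] = {t: counts[t.lower()] for t in terms}
    return rows

laws = {  # hypothetical fragments, not the real statute texts
    "Ubiquitous City Act": "intelligent service facility means a facility ...",
    "Master Plan": "the plan addresses facility management and services",
}
print(term_frequencies(laws, ["facility", "intelligent"]))
```

Terms that appear often in one law but never in another are candidates for the missing or inconsistent definitions the study looks for.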

Variations of Soil Bulk Density and Natural Revegetation on the Logging Road of Timber Harvested-Sites (벌채적지(伐採跡地) 운재로(運材路)의 토양가밀도(土壤假密度) 변화(變化)와 자연식생회복(自然植生回復)에 관한 연구(硏究))

  • Woo, Bo-Myeong;Park, Jae-Hyeon;Kim, Kyung-Hoon
    • Journal of Korean Society of Forest Science / v.83 no.4 / pp.545-555 / 1994
  • The objective of this study was to provide useful scientific data on the early rehabilitation of logging roads after timber harvesting in forest areas. The study was carried out on logging roads constructed between 1989 and 1994 in Mt. Baekwoon, and the field survey was conducted in July 1991. Judging from the analysis of soil bulk density, the time required to recover the undisturbed forest soil condition was more than 10 years for roads left unused, and the regression equations are as follows: $$Y_1=1.4195-0.0744{\cdot}X(R^2=0.91)$$ $$Y_2=1.4673-0.0688{\cdot}X(R^2=0.73)$$ (X: elapsed years after road construction; $Y_1$, $Y_2$: soil bulk density ($g/cm^3$) at 0~7.5cm and 7.5~15.0cm, respectively). In particular, soil bulk density with buffer strip-woods was $0.890{\sim}0.903g/cm^3$, about 20% lower than that of logging road surfaces without buffer strip-woods. Among the 7 factors examined, location, sand content, and soil hardness had statistically significant effects on the soil bulk density of logging road surfaces. The pioneer species on logging road surfaces were Rhus cratargifolius, Prunus chinensis, and Lespedeza cyrtobotrya, etc. among woody species, and Pteridium aquilinum, Arundinella hirta, and Lysimachia clethroides, etc. among herb species. As years passed, average plant coverage reached 70% on cutting and banking slopes and 20% on logging road surfaces 6 years after road construction. This research indicates that buffer strip-woods should be retained for environmental conservation of forest conditions, and that once a road is closed, planting, seeding, and grazing works can be effective for soil and vegetation recovery.
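The regression $Y_1 = 1.4195 - 0.0744 \cdot X$ above can be inverted to ask how many years it takes the surface-layer bulk density to fall to a given target. A short sketch, where the target density passed in is an assumed value for comparison (the paper reports buffer-strip densities of about 0.89-0.90 g/cm³), not the paper's own threshold for the undisturbed condition:

```python
def predicted_density(years, a=1.4195, b=0.0744):
    """Soil bulk density (g/cm^3, 0-7.5 cm layer) from the paper's
    regression Y1 = 1.4195 - 0.0744 * X (X = years since construction)."""
    return a - b * years

def years_to_reach(target, a=1.4195, b=0.0744):
    """Elapsed years for the regression to fall to a target density.
    The target is an assumed comparison value, not the paper's threshold."""
    return (a - target) / b

print(round(predicted_density(6), 3))   # density after 6 years
print(round(years_to_reach(0.90), 1))   # years to reach 0.90 g/cm^3
```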


A Study on the Estimation Measure of Delay Cost on Work Zone Using the Traffic Flow Model (교통류 모형을 이용한 도로 점용공사 구간의 지체비용 산정방안)

  • Kim, Yunsik;Lee, Minjae
    • Korean Journal of Construction Engineering and Management / v.17 no.5 / pp.120-129 / 2016
  • The user cost is an important analysis item that should be considered together with the facility life cycle, administrator cost, and discount rate in LCCA for efficient asset management of SOC facilities. In the road field in particular, significant delay costs often arise for users because of work zones for cleaning and maintenance, and in such cases the administrator should consider the user cost as well as the administrator cost for more rational decision making. However, the user cost has not been considered in most decision-making steps until recently, and relevant studies have not been actively carried out. In this study, a methodology is suggested for estimating the user cost and delay cost required in the decision-making step, using the traffic flow model and the direct benefit estimation model in the traffic facility investment evaluation guideline. The traffic flow model was estimated with VISSIM on 4 national highway sections where maintenance was actually carried out in 2014, and the user cost and delay cost were estimated based on the suggested methodology. The analysis showed that an average user cost of 17,569,000 KRW/km·day occurred on Section A, with approximately 30,000 AADT, before a work zone was installed; when the first lane was blocked for maintenance, an additional average delay cost of 10,193,000 KRW/km·day (158%) occurred. Delay costs of 1,507,000 KRW/km·day (115%) and 1,985,000 KRW/km·day (119%) occurred on Sections B and D, with approximately 20,000 AADT each, and a delay cost of 262,000 KRW/km·day (105%) occurred on Section C, with approximately 10,000 AADT. The results of this study were estimated from simulations of a traffic flow model, so there is a limitation in their direct application.
A future study should develop a highly appropriate model using actual observation data and improve its applicability through verification against simulation.
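The delay-cost idea above, extra vehicle-hours caused by the work zone multiplied by a value of travel time, can be sketched as follows. This is a simplified illustration of the travel-time component only; the AADT, speeds, and value-of-time figure are hypothetical, not the study's guideline parameters or VISSIM outputs.

```python
def delay_cost(base_tt, workzone_tt, aadt, vot_krw_per_hour):
    """Daily work-zone delay cost per km (simplified sketch).

    base_tt / workzone_tt: travel time per vehicle per km, in hours.
    aadt: vehicles per day; vot_krw_per_hour: value of time, KRW/veh-h.
    """
    extra_hours = (workzone_tt - base_tt) * aadt  # added vehicle-hours/day
    return extra_hours * vot_krw_per_hour         # KRW per km-day

# Hypothetical inputs: 30,000 AADT, speed drop 80 -> 40 km/h,
# 15,000 KRW per vehicle-hour
print(round(delay_cost(1 / 80, 1 / 40, 30_000, 15_000)))
```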

A Basic Study on the Euryale ferox Salisbury for Introduction in Garden Pond(II) - Focusing with Soil and Water Conditions - (정원 연못내 가시연꽃(Euryale ferox Salisbury) 도입을 위한 기초연구 II - 토양과 수환경을 중심으로 -)

  • Lee, Suk-Woo;Rho, Jae-Hyun;Park, Jae-Cheol;Kim, Hwa-Ok
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.34 no.3 / pp.28-37 / 2016
  • Through documentary and field studies of 14 habitats of Euryale ferox Salisb. within Jeollabuk-do, analyzing the hydrological and soil environments of the habitats with the objective of acquiring basic data for creating planting environments in reservoirs stocked with Euryale ferox, the following results were obtained. Analysis of the construction period of the 14 habitats showed that the average age of the subject reservoirs was 71.8 years. Moreover, examining the relationship between reservoir age and eutrophication, it can be judged that the eutrophication of subsoil and the water environment is not an obstacle to the growth of Euryale ferox in habitats whose reservoirs are approximately 70 years old or more. Analysis of the particle composition of the soil sediment of the habitats showed 80.2% clay, 16.7% silt, and 3.1% sand, so the soil class was 'heavy clay'. The organic matter content of the soil sediment averaged 36 g/kg, with no noticeable difference between habitats and non-habitats of Euryale ferox. The water environment of the habitats showed pH 6.5~7.9, dissolved oxygen of 1.8~8.8 mg/L, COD of 6.8~74 mg/L, suspended solids of 2.0~213 mg/L, total nitrogen of 0.422~10.723 mg/L, and phosphate of 0.003~0.126 mg/L.
The average DO concentration of Aedang Reservoir at Jeongeup, Daejeong Reservoir at Imsil, and Myeongdeokji at Gimje, where Euryale ferox showed high vitality and green coverage ratio, was 3.5 mg/L, with total nitrogen of 1.33 mg/L and phosphate phosphorus of 0.061 mg/L. Compared with the overall averages, the DO and total nitrogen concentrations were rather low, while the phosphate-phosphorus concentration was more than twice as high; thus, an in-depth study on the correlation between the vitality of Euryale ferox Salisb. and phosphate-phosphorus concentration will be needed in the future.

A Study on Spatial Changes around Jangseogak(Former Yi Royal-Family Museum) in Changgyeonggung during the Japanese colonial period (일제강점기 창경궁 장서각(구 이왕가박물관) 주변의 공간 변화에 관한 연구)

  • Yee, Sun
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.39 no.4 / pp.10-23 / 2021
  • During the Japanese colonial period, the palaces of Joseon were damaged in many parts. Changgyeonggung was the most heavily demolished palace, with the establishment of a zoo, a botanical garden, and a museum. This study examined, through historical materials and field research, the construction process of Jangseogak (the former Yi Royal-Family Museum), located right next to the Jagyeongjeon site, which was considered the most important space in the royal residential area of Changgyeonggung. Built in 1911, Jangseogak stands at a location overlooking the whole of Changgyeonggung and, in the distance, the Gyeongseong Shrine on Namsan. Changes in the surrounding space during the construction of Jangseogak can be summarized as follows. First, in the early 1910s, the topography of the rear garden of Jagyeongjeon and part of the Janggo were damaged to create the site of Jangseogak. A front yard was built in front of Jangseogak, a stone pillar was installed, and a staircase was added to the south. In the process, the original stonework at the rear of Yanghwadang was destroyed, and it is presumed that Jeong Iljae and other buildings were demolished. Second, in the 1920s, many pavilions were demolished, and the zoo, botanical garden, and museum were completed through ground leveling. After Jangseogak was completed, the circulation of the Naejeon and surrounding areas was also changed. Cherry trees and peonies were planted in the flower garden around the front yard and stairs of Jangseogak, and a Japanese-style garden was created between Yanghwadang and Jibbokheon. Third, in the 1930s, the circulation around Jangseogak reached its present form, and the museum, Jangseogak, the zoological and botanical gardens, and Changgyeonggung, now a cherry-tree garden, were transformed into a Japanese-style cultural park.
After that, the surrounding space changed little until Jangseogak was demolished. The restoration of the palace is a long-term national project of the Cultural Heritage Administration. The results of this study will provide important data for the future restoration plan of Changgyeonggung and additional information for related researchers.

Monitoring of Working Environment Exposed to Particulate Matter in Greenhouse for Cultivating Flower and Fruit (과수 및 화훼 시설하우스 내 작업자의 미세먼지 노출현황 모니터링)

  • Seo, Hyo-Jae;Kim, Hyo-Cher;Seo, Il-Hwan
    • Journal of Bio-Environment Control / v.31 no.2 / pp.79-89 / 2022
  • With the wide use of greenhouses, workers' hours inside greenhouses have been increasing. In a closed, ventilated greenhouse, the internal environment is less affected by external weather while a suitable temperature is maintained for crop growth. Greenhouse workers are exposed to organic dust, including soil dust, pollen, pesticide residues, and microorganisms, during tillage, soil grading, fertilizing, and harvesting operations. The health status of workers and the working environment inside the greenhouse should therefore be considered, and it is necessary to secure basic data on particulate matter (PM) concentrations in order to set up dust reduction and health safety plans. To understand the PM concentrations of the working environment, PM was monitored in cut-rose and Hallabong greenhouses in terms of particle size, working type, and working period. Compared to the no-work (moving) period, a significant increase in PM concentration was found during tillage operation in the Hallabong greenhouse: 4.94 times for TSP (total suspended particles), 2.71 times for PM-10 (particles of 10 ㎛ or smaller), and 1.53 times for PM-2.5, respectively. During pruning in the cut-rose greenhouse, the TSP concentration was 7.4 times higher and the PM-10 concentration 3.2 times higher than during the no-work period. Analysis of the PM contribution ratio by particle size showed that PM-10 constitutes the largest share. There was a significant difference in PM concentration between work and no-work periods, with significantly higher concentrations during work (p < 0.001). Workers were generally exposed to high dust concentrations for particles from 2.5 ㎛ to 35.15 ㎛ during tillage operations.

Optimum Design of Soil Nailing Excavation Wall System Using Genetic Algorithm and Neural Network Theory (유전자 알고리즘 및 인공신경망 이론을 이용한 쏘일네일링 굴착벽체 시스템의 최적설계)

  • 김홍택;황정순;박성원;유한규
    • Journal of the Korean Geotechnical Society / v.15 no.4 / pp.113-132 / 1999
  • Recently in Korea, the application of soil nailing has gradually been extended to excavation and slope sites with various ground conditions and field characteristics. Design of soil nailing is generally carried out in two steps. The first step is to examine the minimum safety factor against sliding of the reinforced nailed-soil mass based on the limit equilibrium approach, and the second step is to check the maximum displacement expected at the facing using numerical analysis. However, the design parameters of a soil nailing system are so various that a reliable design method considering the interrelationships between them is needed. Additionally, taking into account the anisotropic characteristics of in-situ ground, disturbance in collecting soil samples, and measurement errors, a systematic analysis of field measurement data as well as a rational optimum design technique are required to improve economic efficiency. To these ends, the present study proposes a procedure for the optimum design of a soil nailing excavation wall system. Focusing on minimizing construction expenses, the optimum design procedure is formulated with a genetic algorithm, and neural network theory is adopted to predict the maximum horizontal displacement at the shotcrete facing. Using the proposed procedure, the effects of the relevant design parameters are also analyzed. Finally, an optimized design section is compared with the existing design section at an excavation site under construction, in order to verify the validity of the proposed procedure.
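The coupling the abstract describes, a genetic algorithm minimizing construction cost while a trained neural network predicts the facing displacement used as a constraint, can be sketched as follows. Everything here is illustrative: the design variables (nail length and spacing), the cost and displacement functions (simple closed forms standing in for the real cost model and the NN predictor), and the GA operators are not the paper's formulation.

```python
import random

def optimize_nailing(cost_fn, disp_fn, bounds, disp_limit,
                     pop=30, gens=60, seed=0):
    """Minimal GA sketch: minimize construction cost subject to a wall
    displacement limit, with disp_fn standing in for the trained
    neural-network displacement predictor."""
    rng = random.Random(seed)

    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def fitness(x):
        # Heavy penalty for exceeding the allowable displacement
        penalty = max(0.0, disp_fn(x) - disp_limit) * 1e6
        return cost_fn(x) + penalty

    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[:pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # crossover
            k = rng.randrange(len(child))                    # mutation
            lo, hi = bounds[k]
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        population = parents + children
    return min(population, key=fitness)

# Hypothetical problem: variables are nail length (m) and spacing (m)
cost = lambda x: 120 * x[0] / x[1]     # cost rises with length, falls with spacing
disp = lambda x: 40 / x[0] + 3 * x[1]  # stand-in for the NN prediction (mm)
best = optimize_nailing(cost, disp, [(3, 12), (1, 2.5)], disp_limit=12)
print([round(v, 2) for v in best])
```

The penalty formulation keeps the search within designs whose predicted displacement stays under the limit while the elitist GA drives the cost down.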
