• Title/Summary/Keyword: building technology


Recommender system using BERT sentiment analysis (BERT 기반 감성분석을 이용한 추천시스템)

  • Park, Ho-yeon; Kim, Kyoung-jae / Journal of Intelligence and Information Systems / v.27 no.2 / pp.1-15 / 2021
  • When it is difficult to make a decision, we ask friends or people around us for advice; when we decide to buy a product online, we read anonymous reviews before buying. With the advent of the data-driven era, the development of IT is generating vast amounts of data about individuals and objects. Companies and individuals have accumulated, processed, and analyzed so much data that decisions which once depended on experts can now be made, or executed directly, from data. Today the recommender system plays a vital role in determining users' preferences for purchasing goods, and web services such as Facebook, Amazon, Netflix, and Youtube use recommender systems to induce clicks. For example, Youtube's recommender system, used by a billion people worldwide every month, draws on the videos users have liked and watched. Recommender system research is deeply linked to practical business, so many researchers are interested in building better solutions. Recommender systems generate recommendations from the information obtained from their users, because developing a recommender system requires information on the items a user is likely to prefer. Through recommender systems, we have come to trust patterns and rules derived from data rather than empirical intuition, and the growing volume of data has pushed machine learning toward deep learning. However, recommender systems are not a universal solution: they require data that is sufficient in quantity and free of gaps, as well as detailed information about the individual, and they work correctly only when these conditions hold. When the interaction log is insufficient, recommendation becomes a difficult problem for both consumers and sellers, because the seller must make recommendations at a personal level while the consumer must receive appropriate recommendations backed by reliable data. In this paper, to improve the accuracy of "appropriate recommendations" for consumers, a recommender system combined with context-based deep learning is proposed. This research combines user-based data to create a hybrid recommender system; the hybrid approach developed here is not a purely collaborative recommender system but a collaborative extension that integrates user data with deep learning. Customer review data were used as the data set. Consumers buy products in online shopping malls and then write product reviews; ratings and reviews from buyers who have already purchased give prospective users confidence before purchase. However, recommender systems mainly use scores or ratings, rather than reviews, to suggest items purchased by many users. In fact, consumer reviews contain product opinions and user sentiment relevant to evaluation, and by incorporating these elements this paper aims to improve the recommender system. The proposed algorithm is intended for situations in which individuals have difficulty selecting an item; consumer reviews and purchase-record patterns make it possible to rely on the recommendations. The algorithm implements the recommender system through collaborative filtering, and its predictive accuracy is measured by Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE).
Netflix has strategically used its recommender system through competitions that reduce RMSE, making practical use of predictive accuracy. Research on hybrid recommender systems that combine NLP approaches, deep learning, and other techniques for personalized recommendation has been increasing. Among NLP studies, sentiment analysis began to take shape in the mid-2000s as user review data increased. Sentiment analysis is a text classification task based on machine learning, but machine learning-based sentiment analysis has the disadvantage that it is difficult to capture the information expressed in a review because the characteristics of the text are hard to take into account. In this study, we propose a deep learning recommender system that utilizes BERT-based sentiment analysis to minimize these disadvantages of machine learning. The comparison models were recommender systems based on Naive-CF (collaborative filtering), SVD (singular value decomposition)-CF, MF (matrix factorization)-CF, BPR-MF (Bayesian personalized ranking matrix factorization)-CF, LSTM, CNN-LSTM, and GRU (Gated Recurrent Units). In the experiment, the BERT-based recommender system performed best.
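The abstract above evaluates predictions with RMSE and MAE and blends collaborative filtering with BERT-based review sentiment. As a rough illustration only, not the authors' code, the following Python sketch shows one way such a hybrid score and the two error metrics could be computed; the blending weight alpha, the 1-5 rating scale, and the mapping from sentiment probability to rating are assumptions.

```python
# Illustrative sketch only (not the paper's code): blending a collaborative-filtering
# rating estimate with a review-sentiment score, then scoring with RMSE and MAE.
import numpy as np

def blend_prediction(cf_rating, sentiment_prob_positive, alpha=0.7):
    """Combine a CF rating estimate (1-5) with a sentiment probability (0-1)."""
    sentiment_as_rating = 1.0 + 4.0 * sentiment_prob_positive  # map [0, 1] -> [1, 5]
    return alpha * cf_rating + (1.0 - alpha) * sentiment_as_rating

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Toy example: actual ratings vs. blended predictions
actual = [5, 3, 4, 2]
cf_preds = [4.2, 3.5, 3.8, 2.9]
sentiments = [0.95, 0.40, 0.80, 0.10]   # e.g., P(positive) from a BERT sentiment classifier
blended = [blend_prediction(r, s) for r, s in zip(cf_preds, sentiments)]
print("RMSE:", rmse(actual, blended), "MAE:", mae(actual, blended))
```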

Knowledge graph-based knowledge map for efficient expression and inference of associated knowledge (연관지식의 효율적인 표현 및 추론이 가능한 지식그래프 기반 지식지도)

  • Yoo, Keedong / Journal of Intelligence and Information Systems / v.27 no.4 / pp.49-71 / 2021
  • Users who intend to utilize knowledge to actively solve given problems proceed by exploring associated knowledge, both crosswise and sequentially, linked by certain criteria such as content relevance. A knowledge map is a diagram or taxonomy giving an overview of the knowledge currently managed in a knowledge base, and it supports users' knowledge exploration based on certain relationships between knowledge items. A knowledge map therefore must be expressed in networked form by linking related knowledge through certain types of relationships, and should be implemented with technologies or tools specialized in defining and inferring them. To this end, this study suggests a methodology for developing a knowledge graph-based knowledge map using a Graph DB, which is known to provide proper functionality for expressing and inferring the entities stored in a knowledge base and the relationships among them. The procedures of the proposed methodology are modeling the graph data; creating nodes, properties, and relationships; and composing knowledge networks by combining the identified links between knowledge items. Among various Graph DBs, Neo4j is used in this study for its high credibility and applicability demonstrated by wide and varied application cases. To examine the validity of the proposed methodology, a knowledge graph-based knowledge map is implemented on the Graph DB, and a performance comparison test is performed by applying the data of a previous study to check whether this study's knowledge map can yield the same level of performance. The previous study built a process-based knowledge map using ontology technology, identifying links between related knowledge based on the sequences of tasks that produce, or are activated by, knowledge. In other words, since a task is activated by knowledge as an input and produces knowledge as an output, input and output knowledge are linked as a flow by the task; and since a business process is composed of affiliated tasks fulfilling the purpose of the process, the knowledge networks within a business process can be derived from the sequences of the tasks composing it. Therefore, using Neo4j, processes, tasks, and knowledge, as well as the relationships among them, are defined as nodes and relationships so that knowledge links can be identified from the task sequences. The knowledge network obtained by aggregating the identified knowledge links is a knowledge map with the functionality of a knowledge graph, and its performance therefore needs to be tested against the validation results of the previous study. The performance test examines two aspects, the correctness of knowledge links and the possibility of inferring new types of knowledge: the former is examined using 7 questions, and the latter is checked by extracting two new types of knowledge. As a result, the knowledge map constructed through the proposed methodology showed the same level of performance as the previous one, and handled knowledge definition and knowledge relationship inference more efficiently.
Furthermore, compared to the previous study's ontology-based approach, this study's Graph DB-based approach also showed more beneficial functionality in intensively managing only the knowledge of interest, dynamically defining knowledge and relationships to reflect various meanings from situations to purposes, agilely inferring knowledge and relationships through Cypher-based queries, and easily creating new relationships by aggregating existing ones. This study's artifacts can be applied to implement user-friendly knowledge exploration that reflects users' cognitive processes toward associated knowledge, and can further underpin the development of an intelligent knowledge base that expands autonomously through the inference-driven discovery of new knowledge and relationships. Beyond this, the study has an immediate effect on implementing the networked knowledge map essential to satisfying contemporary users searching for the right knowledge to use.
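Since the abstract describes defining processes, tasks, and knowledge as Neo4j nodes and inferring knowledge-to-knowledge links from task sequences via Cypher, the following Python sketch illustrates that idea at toy scale. It is not the author's implementation; the connection settings, node labels (Task, Knowledge), and relationship names (INPUT_TO, PRODUCES, FLOWS_TO) are assumptions.

```python
# Illustrative sketch only: Task/Knowledge nodes in Neo4j and knowledge-flow inference
# from task sequences, in the spirit of the methodology described above.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

SETUP = """
MERGE (k1:Knowledge {name: 'Design spec'})
MERGE (k2:Knowledge {name: 'Test report'})
MERGE (t:Task {name: 'Run acceptance test'})
MERGE (k1)-[:INPUT_TO]->(t)
MERGE (t)-[:PRODUCES]->(k2)
"""

# Infer a direct knowledge link whenever a task consumes one knowledge item
# and produces another (the knowledge-flow relation described in the abstract).
INFER_FLOW = """
MATCH (a:Knowledge)-[:INPUT_TO]->(:Task)-[:PRODUCES]->(b:Knowledge)
MERGE (a)-[:FLOWS_TO]->(b)
"""

with driver.session() as session:
    session.run(SETUP)
    session.run(INFER_FLOW)
    flows = session.run("MATCH (a:Knowledge)-[:FLOWS_TO]->(b:Knowledge) RETURN a.name, b.name")
    for record in flows:
        print(record["a.name"], "->", record["b.name"])

driver.close()
```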

A Convergent and Combined Activation Plan for Exercise Rehabilitation in the Era of the Fourth Industrial Revolution (4차 산업혁명시대에 운동재활분야의 융·복합적 활성화 방안)

  • Cho, Kyoung-Hwan / Journal of Korea Entertainment Industry Association / v.14 no.8 / pp.407-426 / 2020
  • The purpose of this study was to conduct a convergent and combined analysis of the sport industry and exercise rehabilitation in the New Normal era grounded in the Fourth Industrial Revolution, and to devise a comprehensive plan for future activation. For this purpose, a literature review was performed, mainly by analyzing the environment of the sport industry in the New Normal era of the Fourth Industrial Revolution and by carrying out a convergent and combined analysis of the sport industry, to present a comprehensive activation plan for exercise rehabilitation as follows. First, a strategy for promoting exercise rehabilitation in convergent and combined ways is needed at the sport industry level. This means developing a convergent and combined exercise rehabilitation-tourism-ICT model as well as an exercise rehabilitation-ICT model through collaboration among ministries, including those of health and sports. Second, it is necessary to shift toward convergent and combined thinking and to extend and reinforce educational competitiveness in exercise rehabilitation; that is, the education and training systems should be refined to strengthen the ICT competence of exercise rehabilitation and related majors, and convergent and combined start-up education should be provided. Third, various types of research and development should be pursued by applying practical, convergent and combined skills from the industrial field to exercise rehabilitation and related areas; efforts should be made to overcome the risks of the New Normal era and to support business start-ups with convergent and combined exercise rehabilitation skills. Fourth, mid- and long-term clusters should be created in which exercise rehabilitation and related businesses can accumulate. This means building an industrial hub and complex for exercise rehabilitation, creating an R&D-based cluster through industrial-academic-governmental collaboration, maximizing synergy with local infrastructure, and realizing a self-sustaining profit-generating structure.

Estimation of spatial distribution of snow depth using DInSAR of Sentinel-1 SAR satellite images (Sentinel-1 SAR 위성영상의 위상차분간섭기법(DInSAR)을 이용한 적설심의 공간분포 추정)

  • Park, Heeseong; Chung, Gunhui / Journal of Korea Water Resources Association / v.55 no.12 / pp.1125-1135 / 2022
  • Damage from heavy snow does not occur very often, but when it does, it affects a wide area. To mitigate snow damage, it is necessary to know in advance the snow depth that causes damage in each region. However, snow depth is measured only at observatory locations, which makes it difficult to understand the spatial distribution of damage-causing snow depth across a region. To understand that spatial distribution, the point measurements are interpolated; however, estimating the spatial distribution of snow depth is difficult when the number of measurements is small and topographical characteristics such as altitude are not similar. To overcome this limit, satellite images such as Synthetic Aperture Radar (SAR) can be analyzed using the Differential Interferometric SAR (DInSAR) method. DInSAR uses two SAR images acquired at different times and is generally used to track minor changes in topography. In this study, the spatial distribution of snow depth was estimated by DInSAR analysis using dual-polarimetric IW-mode C-band SAR data of the Sentinel-1B satellite operated by the European Space Agency (ESA). In addition, snow depth was estimated using the geostationary satellite Chollian-2 (GK-2A) for comparison with the DInSAR results. As a result, the grid-based accuracy of snow cover estimation was about 0.92 for DInSAR and about 0.71 for GK-2A, indicating the high applicability of the DInSAR method. Although the snow depth was overestimated in some cases, the analysis provided sufficient information for estimating its spatial distribution, which will be helpful in understanding the regional damage-causing snow depth.
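For readers unfamiliar with how a differential phase translates into a snow depth estimate, the sketch below applies a commonly cited dry-snow phase model to a single phase value. It is illustrative only and not the authors' processing chain; the Sentinel-1 wavelength, incidence angle, and snow permittivity values are assumptions, and the sign depends on the interferogram convention, so only the magnitude is returned.

```python
# Illustrative sketch only: magnitude of the snow depth change implied by a differential
# interferometric phase under a dry-snow phase model. Parameter values are assumptions.
import numpy as np

WAVELENGTH_M = 0.0555  # Sentinel-1 C-band radar wavelength (~5.55 cm)

def snow_depth_change(delta_phase_rad, incidence_deg=39.0, snow_permittivity=1.3):
    """Approximate |dZ_s| (m) from |delta_phi| (rad) for dry snow."""
    theta = np.radians(incidence_deg)
    geometry = np.cos(theta) - np.sqrt(snow_permittivity - np.sin(theta) ** 2)
    # Dry-snow model (magnitude form): |delta_phi| = (4*pi/lambda) * |dZ_s| * |geometry|
    return abs(delta_phase_rad) * WAVELENGTH_M / (4.0 * np.pi * abs(geometry))

# Example: a 1.5 rad phase change between the two acquisitions
print(snow_depth_change(1.5))
```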

Analysis of Reinforcement Effect of Hollow Modular Concrete Block on Sand by Laboratory Model Tests (실내모형실험을 통한 모래지반에서의 중공블록 보강효과 분석)

  • Lee, Chul-Hee; Shin, Eun-Chul; Yang, Tae-Chul / Journal of the Korean Geotechnical Society / v.38 no.7 / pp.49-62 / 2022
  • The hollow modular concrete block reinforced foundation method is a ground reinforcement foundation method that uses hexagonal, honeycomb-shaped concrete blocks together with crushed rock to reinforce soft ground, forming an artificial layered ground that increases bearing capacity and reduces settlement. The hollow modular honeycomb-shaped concrete block is a geometrically economical and stable structure that distributes forces in a balanced way. However, the behavior of hollow modular concrete block reinforced foundations is not yet fully understood. In this study, bearing capacity tests were performed in laboratory model tests to analyze the reinforcement effect of the hollow modular concrete block. From the load-settlement curve, punching shear failure occurs under the unfilled condition (A-1-N), whereas the sand-filled condition (A-1-F) shows a linear curve without yielding, confirming a reinforcement effect three times higher than that of unreinforced ground. A bearing capacity equation is proposed that accounts for the contact pressure under the concrete, the vertical stress on the hollow area, and the inner skin friction force arising from the horizontal stress due to the confining effect, based on a schematic diagram of the confining effect inside a hollow modular concrete block. Calculating the bearing capacity with this equation, the share of the load carried by the contact pressure on the concrete area is about 65%, by the vertical force on the hollow area about 16.5%, and by the inner skin friction force on the inner wall area about 18.5%. When a surcharge load is applied to the concrete part, vertical stress first develops on the hollow area through the confining effect; then, in the sand filling the hollow, where horizontal deformation is constrained, the inner skin friction force develops through the horizontal stress acting on the inner wall of the block. This inner skin friction force suppresses punching of the concrete part and reduces the contact pressure.
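As a purely arithmetic illustration of the three-part bearing decomposition described above, the sketch below sums the contributions of contact pressure, vertical stress on the hollow, and inner skin friction and reports their shares. All numeric values are hypothetical (chosen only so the shares fall near the reported 65% / 16.5% / 18.5% split); only the decomposition itself follows the abstract.

```python
# Illustrative sketch only: Q_total = q_contact*A_concrete + sigma_v*A_hollow + f_s*A_inner_wall.
# All inputs are hypothetical; only the three-component decomposition mirrors the abstract.
def total_bearing_load(q_contact_kpa, area_concrete_m2,
                       sigma_v_kpa, area_hollow_m2,
                       skin_friction_kpa, area_inner_wall_m2):
    q_part = q_contact_kpa * area_concrete_m2        # contact pressure under the concrete
    v_part = sigma_v_kpa * area_hollow_m2            # vertical stress on the hollow (confined fill)
    f_part = skin_friction_kpa * area_inner_wall_m2  # inner skin friction on the hollow wall
    total = q_part + v_part + f_part
    shares = {name: 100.0 * part / total
              for name, part in [("contact", q_part), ("hollow", v_part), ("friction", f_part)]}
    return total, shares

total, shares = total_bearing_load(300.0, 0.06, 150.0, 0.03, 50.0, 0.10)
print(f"Total load: {total:.1f} kN, shares (%): {shares}")
```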

Development of disaster severity classification model using machine learning technique (머신러닝 기법을 이용한 재해강도 분류모형 개발)

  • Lee, Seungmin; Baek, Seonuk; Lee, Junhak; Kim, Kyungtak; Kim, Soojun; Kim, Hung Soo / Journal of Korea Water Resources Association / v.56 no.4 / pp.261-272 / 2023
  • In recent years, natural disasters such as heavy rainfall and typhoons have occurred more frequently, and their severity has increased due to climate change. To reduce damage, the Korea Meteorological Administration (KMA) currently issues watches and warnings using the same criteria for all regions of Korea, based on the maximum cumulative rainfall over 3-hour and 12-hour durations. However, KMA's criteria do not consider the regional characteristics of damage caused by heavy rainfall and typhoon events. It is therefore necessary to develop new criteria that consider the regional characteristics of damage and cumulative rainfall over different durations, establishing four stages: blue, yellow, orange, and red. A classification model for this four-stage disaster severity, called DSCM (Disaster Severity Classification Model), was developed using four machine learning models (Decision Tree, Support Vector Machine, Random Forest, and XGBoost). This study applied DSCM to the local governments of Seoul, Incheon, and Gyeonggi Province. To develop DSCM, we used rainfall, cumulative rainfall, maximum rainfall over 3-hour and 12-hour durations, and antecedent rainfall as independent variables, and a 4-class damage scale for heavy rain and typhoon damage in each local government as the dependent variable. As a result, the Decision Tree model had the highest accuracy, with an F1-score of 0.56. The developed DSCM can help identify disaster risk at each stage and contribute to reducing damage through efficient disaster management by local governments for specific events.
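To make the modelling setup concrete, the sketch below trains a four-class severity classifier on rainfall-related features and scores it with the F1 measure, mirroring the DSCM description above. It is illustrative only, not the authors' code; the synthetic data, feature construction, and hyperparameters are assumptions.

```python
# Illustrative sketch only: a 4-class disaster-severity classifier on rainfall features,
# evaluated with macro F1, in the spirit of the DSCM setup described above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 500
# Features: rainfall, cumulative rainfall, 3-hr max, 12-hr max, antecedent rainfall (mm)
X = rng.uniform(0, 300, size=(n, 5))
# Synthetic 4-class label (0=blue, 1=yellow, 2=orange, 3=red), loosely tied to rainfall intensity
y = np.digitize(X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 30, n), bins=[120, 240, 360])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
print("Macro F1:", f1_score(y_test, model.predict(X_test), average="macro"))
```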

Operation Measures of Sea Fog Observation Network for Inshore Route Marine Traffic Safety (연안항로 해상교통안전을 위한 해무관측망 운영방안에 관한 연구)

  • Joo-Young Lee; Kuk-Jin Kim; Yeong-Tae Son / Journal of the Korean Society of Marine Environment & Safety / v.29 no.2 / pp.188-196 / 2023
  • Among marine accidents caused by bad weather, visibility restrictions due to sea fog lead to accidents such as stranding and hull damage, along with the casualties these accidents cause, and they continue to occur every year. Low visibility at sea is also emerging as a social problem: passenger ships are collectively delayed or controlled, causing considerable inconvenience to island residents who depend on them for transportation, even though conditions differ from region to region. Such measures are all the more problematic because they cannot be objectively quantified, owing to regional deviations and to observation criteria that vary from person to person. Currently, the VTS of each port controls ship operations when the visibility distance falls below 1 km, but in this case the assessment relies on a visibility meter or visual observation of sea fog, which limits the collection of objective data. As part of addressing these obstacles to marine traffic safety, the government is building marine weather signal signs and sea fog observation networks for sea fog detection and prediction, but the system for observing locally occurring sea fog remains very insufficient in practice. Accordingly, this paper examines domestic and foreign policy trends for solving the social problems caused by low visibility at sea, and provides basic data on the need for government support to ensure maritime traffic safety under sea fog by investigating and analyzing these social problems. It also aims to establish a more stable maritime traffic operation system by preventing, in advance, the marine safety risks that sea fog may ultimately cause.

A study on multidisciplinary and convergent research using the case of 3D bioprinting (3D 바이오프린팅 사례로 본 다학제간 융복합 연구에 대한 소고)

  • Park, Ju An; Jung, Sungjune; Ma, Eunjeong / Korea Science and Art Forum / v.30 / pp.151-161 / 2017
  • In science and engineering, multidisciplinary research is common, and researchers with a diverse range of expertise collaborate to achieve common goals. As the Fourth Industrial Revolution gains currency in society, there is growing demand for talented personnel who possess both technical knowledge and skills and communicative skills; that is, future engineers are expected to have social and artistic competence in addition to specialized engineering knowledge and skills. In this paper we introduce the emerging field of 3D bioprinting as an exemplary case of interdisciplinary research, chosen to demonstrate the possibility of cultivating engineers with π-shaped expertise. Building on the concept of T-shaped talent, we define π-shaped expertise as having both technical skills in more than one specialized field and interpersonal/communicative skills. With reference to concepts such as trading zones and interactional expertise, we suggest that π-shaped expertise can be cultivated via the creation of multi-level trading zones. Trading zones are the physical, conceptual, or metaphorical spaces in which experts with different world views trade ideas, objects, and the like; interactional expertise is cultivated as interactions between researchers proceed and their understanding of each other's expertise grows. With the support of the university and the government, two researchers with expertise in printing technology and life sciences cooperated to develop a 3D bioprinting system, and the principal investigator of the laboratory under study aimed to create multiple levels of trading zones where researchers with different educational and cultural backgrounds could exchange ideas and interact with each other. As 3D bioprinting has taken shape, we have found that a new form of expertise, namely π-shaped expertise, has formed.

A Comparative Study on Application of Material in Traditional Residents of Korea, China and Japan - Focusing on Representative Upper-class House - (한·중·일 전통주거의 재료적용 특성 비교 연구 - 각국 대표 상류주택을 중심으로 -)

  • Kim, Hwi Kyung; Choi, Kyung Ran / Korea Science and Art Forum / v.19 / pp.293-305 / 2015
  • While the unique cultural traits of each country are valued in their own right, they have also become essential to establishing a country's cultural identity. This study compares the residential architectural cultures of East Asia and identifies Korea's own unique traits by determining how traditional architecture in Korea, China, and Japan applied materials, a basic element of architectural form, through practical investigation. A literature survey and field study were conducted in parallel, and the buildings investigated were Mucheomdang House in Korea, Prince Gong Mansion in China, and Dokyudo Building in Japan. Construction materials in Korea, China, and Japan include natural materials such as wood, stone, and clay, and artificial materials such as metal, paper, roof tiles, plug, and glass, and the buildings were constructed from combinations of these materials. This commonality can often be found in the architectural composition. In the interior composition, however, the choice and application of materials clearly differed among the three countries, depending on each country's climate, processing methods, and living culture. First of all, since each country selected materials under the influence of its own vegetation and climate, the living environment of each country can be seen through its residences. Korea and Japan show certain similarities, such as a floor-sitting living culture and paper finishes in the interior, while China is clearly different. In particular, regarding material processing, artificial processing was minimized in Korea, giving a mainly rough and unrefined feeling, whereas the use of straight timbers in Japan produced an organized and refined architectural expression. China showed the highest degree of artificial processing of materials among the three countries, which is closely associated with China's coloring culture, and its technology for fine architectural materials such as bricks and glass was highly advanced. Thus, the comparison of architectural materials reveals how immaterial elements such as natural characteristics, functionality, and aesthetics were applied to residences in Korea, Japan, and China.

Application of Amplitude Demodulation to Acquire High-sampling Data of Total Flux Leakage for Tendon Nondestructive Estimation (덴던 비파괴평가를 위한 Total Flux Leakage에서 높은 측정빈도의 데이터를 획득하기 위한 진폭복조의 응용)

  • Joo-Hyung Lee; Imjong Kwahk; Changbin Joh; Ji-Young Choi; Kwang-Yeun Park / Journal of the Korea institute for structural maintenance and inspection / v.27 no.2 / pp.17-24 / 2023
  • A post-processing technique for the measurement signal of a solenoid-type sensor is introduced. The solenoid-type sensor nondestructively evaluates an external tendon of prestressed concrete using the total flux leakage (TFL) method. The TFL solenoid sensor consists of primary and secondary coils: a sinusoidal AC current is input to the primary coil, and a signal proportional to the derivative of the input is induced in the secondary coil. Because the amplitude of the induced signal is proportional to the cross-sectional area of the tendon, sectional loss caused by rupture or corrosion can be identified from the induced signal. It is therefore important to extract the amplitude information from the TFL sensor's measurement signal. Previously, the amplitude was extracted from local maxima, the simplest way to obtain amplitude information. However, because extraction from local maxima dramatically reduces the sampling rate, the previous method places many restrictions on the direction of TFL sensor development, such as applying additional signal processing and/or artificial intelligence. The proposed method instead uses amplitude demodulation to obtain the signal amplitude, so the sampling rate of the amplitude information is the same as that of the raw TFL sensor data. The proposed method provides ample freedom for development by removing restrictions on the input frequency of the primary coil and on the speed at which the sensor is applied to the external tendon, and it maintains a high measurement sampling rate, which is advantageous for additional signal processing or artificial intelligence. The proposed method was validated through experiments, and its advantages were verified through comparison with the previous method; for example, in this study the amplitudes extracted by amplitude demodulation provided a sampling rate 100 times greater than that of the previous method. Results may differ depending on the situation and equipment settings, but in most cases extracting amplitude information by amplitude demodulation yields more satisfactory results than the previous method.
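As an illustration of the sampling-rate argument above, the sketch below recovers the amplitude envelope of a sinusoidal signal by amplitude demodulation (here, a Hilbert-transform envelope) and compares the number of amplitude samples with local-maxima extraction. The signal parameters are assumptions rather than TFL sensor data, and the paper's exact demodulation scheme may differ.

```python
# Illustrative sketch only: envelope recovery of a sinusoidal sensor signal by amplitude
# demodulation vs. local-maxima extraction; parameters are assumptions, not TFL data.
import numpy as np
from scipy.signal import hilbert, find_peaks

fs = 10_000                    # sampling rate (Hz), assumed
f_carrier = 100                # excitation frequency of the primary coil (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)
amplitude = 1.0 - 0.3 * np.exp(-((t - 0.5) ** 2) / 0.005)   # dip mimicking a section loss
signal = amplitude * np.sin(2 * np.pi * f_carrier * t)

# Amplitude demodulation: envelope available at the full sampling rate
envelope = np.abs(hilbert(signal))

# Previous approach: roughly one amplitude sample per carrier period via local maxima
peaks, _ = find_peaks(signal)

print("samples from demodulation:", envelope.size)   # equals len(t), i.e. the full rate
print("samples from local maxima :", peaks.size)     # roughly f_carrier samples per second
```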