• Title/Summary/Keyword: experimental modeling


Study on the Fire Risk Prediction Assessment due to Deterioration contact of combustible cables in Underground Common Utility Tunnels (지하공동구내 가연성케이블의 열화접촉으로 인한 화재위험성 예측평가)

  • Ko, Jaesun
    • Journal of the Society of Disaster Information
    • /
    • v.11 no.1
    • /
    • pp.135-147
    • /
    • 2015
  • Modern underground common utility tunnels are facilities that jointly accommodate two or more kinds of lifelines required for daily life, such as electricity, telecommunications, waterworks, city gas, and sewerage, together with air-conditioning and heating facilities, vacuum dust collectors, and information-processing cables, and they serve central functions of the country. However, when a fire breaks out it is difficult to respond quickly, and the toxic gases and smoke generated by burning cables make it hard to enter the tunnel to extinguish the fire. A fire therefore not only paralyzes the nerve center of the country, causing significant property damage and loss of communication, but also inconveniences citizens. Noting that most fires in domestic and foreign common utility tunnel fire cases to date have been caused by short circuits during electrical work and by deterioration contact of combustible cables, the purpose of this paper is to scientifically analyze fire behavior by building a model of an actual common utility tunnel and reproducing a fire. A fire experiment was conducted with a line-type fixed-temperature detector, a fire door, a connection deluge set, and ventilation equipment installed in the tunnel, with transmission and power distribution cables coated with fireproof paint over a certain section, and with heating pipes given a fireproof covering. In the Type II case, the maximum temperature measured was 932°C, and the line-type fixed-temperature detector displayed the fire location exactly on the receiver at its rated temperature. The cables painted with fireproof paint over a certain section (Type III) were found not to be fire resistant, while the fireproof-covered heating pipes resisted fire for about 30 minutes.
Fire simulation was also carried out using the fire load measured during the real fire test; the resulting maximum temperature of 943°C is almost identical to the 932°C measured in the real test. It is therefore considered that fire behavior can be predicted by simulation using only the fire load of a common utility tunnel, and that the simulated heat release rate, smoke layer height, and O2, CO, and CO2 concentrations can be applied as values for a real fire experiment. By analyzing and accumulating the experimental data built in this study together with fire cases every year, and by complementing laws, regulations, and administration manuals, it is expected that more reliable information on domestic underground common utility tunnel fires can be provided and that this will contribute to effective and systematic construction, maintenance, and repair.
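The reported agreement between the real fire test and the simulation (932°C measured vs. 943°C simulated) can be checked with a quick worked calculation:

```python
# Peak temperatures reported in the abstract.
measured_peak_c = 932.0   # maximum temperature in the Type II real fire test
simulated_peak_c = 943.0  # maximum temperature from the fire simulation

# Relative error between simulation and measurement.
relative_error = abs(simulated_peak_c - measured_peak_c) / measured_peak_c
print(f"relative error: {relative_error:.1%}")  # about 1.2%
```

A relative error of roughly 1.2% supports the abstract's claim that the two results are almost identical.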

Numerical analysis of FEBEX at Grimsel Test Site in Switzerland (스위스 Grimsel Test Site에서 수행된 FEBEX 현장시험에 대한 수치해석적 연구)

  • Lee, Changsoo;Lee, Jaewon;Kim, Geon-Young
    • Tunnel and Underground Space
    • /
    • v.30 no.4
    • /
    • pp.359-381
    • /
    • 2020
  • Within the framework of DECOVALEX-2019 Task D, the full-scale engineered barriers experiment (FEBEX) at the Grimsel Test Site was numerically simulated to investigate the applicability of the Barcelona Basic Model (BBM) implemented in the TOUGH2-MP/FLAC3D simulator, which was developed to predict the coupled thermo-hydro-mechanical (THM) behavior of a bentonite buffer. The calculated heater power, temperature, relative humidity, total stress, saturation, water content, and dry density were compared with in situ data monitored in various sections. In general, the calculated heater power and temperature agreed fairly well with the experimental observations; however, the difference between the power of heater #1 and that of heater #2 could not be captured in the numerical analysis. For better modeling results, it is necessary to consider the lamprophyre with low thermal conductivity around heater #1 and the non-simplified installation sequence of the bentonite blocks in the tunnel. The evolution and distribution of relative humidity were well reproduced, but the hydraulic model needs to be modified because the re-saturation process was relatively fast near the heaters. The stress computed from thermal and hydraulic expansion was in good agreement with the data, although it was slightly higher than the measured in situ data at the early stage of operation because the gap between the rock mass and the bentonite blocks was not considered in the simulations. The calculated distributions of saturation, water content, and dry density along the radial distance agreed well with the observations after the first and final dismantling, but the dry density near the center of the FEBEX tunnel and the heaters was overestimated; as a result, the saturation and water content there were underestimated relative to the measurements.
The permeability model therefore needs to be modified to produce better numerical results. By performing additional studies and refining the numerical model based on the results of this study, it should be possible to produce better analyses and more realistically predict the coupled THM behavior of the bentonite blocks.

True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.4
    • /
    • pp.363-373
    • /
    • 2020
  • Over the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of the aerial images, precise 3D object modeling data, and a DTM (Digital Terrain Model) to detect and recover occlusion areas; moreover, automating this complicated process is a challenging task. In this paper, we propose a new concept of true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted in a wide range of fields. In particular, the GAN (Generative Adversarial Network) is one of the DL models applied to various tasks in image processing and computer vision. The generator tries to produce results similar to real images, while the discriminator judges images as fake or real, until the results are satisfactory; this mutually adversarial mechanism improves the quality of the results. Experiments were performed with the GAN-based Pix2Pix model using IR (Infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods yielded similar quality in terms of FID (Fréchet Inception Distance). However, when the quality of the input data is close to the target image, better results can be obtained by increasing the number of epochs. This paper is an early experimental study on the feasibility of DL-based true orthoimage generation, and further improvement is necessary.
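The abstract compares the two training approaches with FID, which is the Fréchet distance between Gaussians fitted to deep features of the real and generated images. As an illustrative sketch (not the paper's code, which would use multivariate feature statistics), the one-dimensional special case reduces to a closed form:

```python
import math

def frechet_distance_1d(mu1, var1, mu2, var2):
    """Frechet distance between two univariate Gaussians.

    FID applies the multivariate version of this to Gaussians fitted to
    Inception features; in one dimension it reduces to:
        (mu1 - mu2)^2 + var1 + var2 - 2*sqrt(var1*var2)
    """
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

# Identical distributions give distance 0; it grows as means/variances diverge.
print(frechet_distance_1d(0.0, 1.0, 0.0, 1.0))  # 0.0
print(frechet_distance_1d(0.0, 1.0, 2.0, 4.0))  # 4 + 1 + 4 - 4 = 5.0
```

Lower FID means the generated image statistics are closer to the real ones, which is why similar FID scores imply similar visual quality for the two approaches.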

Video Camera Characterization with White Balance (기준 백색 선택에 따른 비디오 카메라의 전달 특성)

  • 김은수;박종선;장수욱;한찬호;송규익
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.2
    • /
    • pp.23-34
    • /
    • 2004
  • A video camera can be a useful tool for capturing images for colorimetric use. However, the RGB signals generated by different video cameras are not equal for the same scene, so a camera used as a colorimeter must be characterized with respect to the CIE standard colorimetric observer. One method of deriving a colorimetric characterization matrix between camera RGB output signals and CIE XYZ tristimulus values is least-squares polynomial modeling. However, obtaining the camera transfer matrix at various white balance points for the same camera requires tedious experiments. In this paper, a new method is proposed for obtaining the camera transfer matrix under a different white balance from the 3×3 camera transfer matrix measured at one white balance point. According to the proposed method, the transfer matrix under any other white balance can be obtained using the colorimetric coordinates of the phosphors derived from the 3×3 linear transfer matrix at the known white balance point. Experimental results demonstrate that the proposed method yields 3×3 linear transfer matrices under other white balances with a reasonable degree of accuracy compared with the transfer matrices obtained by experiment.
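The rebalancing idea — deriving a transfer matrix for a new white point from one known 3×3 matrix whose columns carry the phosphor tristimulus values — can be sketched as follows. The matrix values and the gain-solving step are illustrative assumptions, not the paper's measured data or exact procedure:

```python
def mat_inv3(m):
    """Inverse of a 3x3 matrix via the adjugate (cofactor) formula."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [
        [ e*i - f*h, -(b*i - c*h),  b*f - c*e],
        [-(d*i - f*g), a*i - c*g, -(a*f - c*d)],
        [ d*h - e*g, -(a*h - b*g),  a*e - b*d],
    ]
    return [[x / det for x in row] for row in adj]

def mat_vec3(m, v):
    """Multiply a 3x3 matrix by a length-3 vector."""
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

def rebalance(M, white_xyz):
    """Rescale the columns of M (XYZ contributions of the R, G, B primaries)
    so that camera RGB = (1, 1, 1) maps onto the new reference white."""
    gains = mat_vec3(mat_inv3(M), white_xyz)  # solve M * gains = white
    return [[M[r][c] * gains[c] for c in range(3)] for r in range(3)]

# Illustrative primary tristimulus values (columns: R, G, B), not measured data.
M = [[0.41, 0.36, 0.18],
     [0.21, 0.72, 0.07],
     [0.02, 0.12, 0.95]]
d50_white = [0.9642, 1.0, 0.8249]  # CIE D50 white point tristimulus values
M_d50 = rebalance(M, d50_white)
print(mat_vec3(M_d50, [1.0, 1.0, 1.0]))  # ~= d50_white by construction
```

The design point is that only channel gains change between white balance settings, so one calibrated matrix plus the target white's XYZ is enough to derive the others without repeating the full characterization experiment.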

Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.159-172
    • /
    • 2010
  • The recommender system is one possible solution for assisting customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems: broadly, memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms that combine CF with content-based techniques or other recommender systems. While many researchers have focused on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, we do not know much about how CF actually works. Furthermore, the relative performance of CF algorithms is known to be domain and data dependent. Implementing and launching a CF recommender system is time-consuming and expensive, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Predicting the performance of CF algorithms in advance is therefore practically important and needed. In this study, we propose an efficient approach to predicting the performance of CF, applying Social Network Analysis (SNA) and an Artificial Neural Network (ANN) to develop the prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates the exploration of topological properties of the network structure that are implicit in the data used for CF recommendations.
An ANN model is developed through an analysis of network topology measures: network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. Network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, while the clustering coefficient captures the degree to which the network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how dense the social network is beyond what is barely needed to keep the group even indirectly connected. We use these social network measures as input variables of the ANN model and the recommendation accuracy, measured by the F1-measure, as the output variable. To evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, was used. A total of 396 experimental samples were gathered, of which 40%, 40%, and 20% were used for training, testing, and validation, respectively. Five-fold cross validation was also conducted to enhance the reliability of the experiments. The input variable measurement process consists of three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model has an estimated accuracy of 92.61% and an RMSE of 0.0049. Thus, our prediction model can help decide whether CF is useful for a given application with certain data characteristics.
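Two of the input measures described above, network density and the clustering coefficient, can be sketched in plain Python. The toy edge list stands in for the customer purchase-similarity network (it is illustrative, not the department-store data):

```python
def density(n_nodes, edges):
    """Proportion of realized links out of the maximum possible (undirected)."""
    return 2 * len(edges) / (n_nodes * (n_nodes - 1))

def clustering_coefficient(adj, node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i, u in enumerate(nbrs) for v in nbrs[i+1:] if v in adj[u])
    return 2 * links / (k * (k - 1))

# Toy network: nodes are customers, links mean similar purchase behavior.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

print(density(4, edges))                 # 4 of 6 possible links -> 0.666...
print(clustering_coefficient(adj, "c"))  # neighbors a, b, d; only (a, b) linked -> 1/3
```

Measures like these, computed per training sample, would form the ANN's input vector, with the F1-measure of the resulting CF recommendations as the target output.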

Ontology-based Course Mentoring System (온톨로지 기반의 수강지도 시스템)

  • Oh, Kyeong-Jin;Yoon, Ui-Nyoung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.149-162
    • /
    • 2014
  • Course guidance is a mentoring process performed before students register for upcoming classes. It plays a very important role in checking students' degree audits and in advising on the classes to be taken in the coming semester, and it is intimately involved with graduation assessment and completion of ABEEK certification. Currently, course guidance is performed manually by advisors at most universities in Korea, which have no electronic systems for it. Lacking such systems, advisors must analyze each student's degree audit together with the curriculum information of their department, and the complexity of this process often causes human error. An electronic system is thus essential to avoid human error in course guidance. A relational data model-based system could solve the problems of the manual approach, but such systems have limitations. The curriculum of a department and its certification requirements can change with new university policies or surrounding circumstances, and when they change, the schema of the existing system must be changed accordingly. It is also difficult to provide semantic search, because semantic relationships between subjects are hard to extract. In this paper, we model a course mentoring ontology based on an analysis of the computer science curriculum, the structure of the degree audit, and ABEEK certification. An ontology-based course guidance system is also proposed to overcome the limitations of existing methods and to make the course mentoring process effective for both advisors and students. In the proposed system, all data consists of ontology instances.
To create ontology instances, an ontology population module is developed using the JENA framework, which is designed for building semantic web and linked data applications. In this module, mapping rules are designed to connect parts of the degree audit to the corresponding parts of the course mentoring ontology. All ontology instances are generated from the degree audits of the students participating in the course mentoring test. The generated instances are saved to JENA TDB as a triple repository after an inference process using the JENA inference engine. A user interface for course guidance is implemented using Java and the JENA framework. Once an advisor or a student enters the student's information, such as name and student number, into the request form of the user interface, the proposed system provides mentoring results based on the student's degree audit and rules that check the scores for each part of the curriculum, such as special cultural subjects, major subjects, and MSC subjects covering math and basic science. Recall and precision are used to evaluate the performance of the proposed system: recall checks that the system retrieves all relevant subjects, and precision checks whether the retrieved subjects are relevant to the mentoring results. An officer of the computer science department attended the verification of the results derived from the proposed system. Experimental results using real data of the participating students show that the proposed course guidance system based on the course mentoring ontology always provides correct mentoring results. Advisors can also reduce the time spent analyzing a student's degree audit and calculating the score for each part. As a result, the proposed system based on ontology techniques overcomes the difficulties of manual mentoring and derives mentoring results as correct as those produced by a human.
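The recall and precision used for evaluation can be sketched directly; the subject names below are hypothetical, not from the study:

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved subjects that are relevant.
    Recall: fraction of relevant subjects that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: subjects the system retrieved vs. the officer's list.
retrieved = ["Calculus1", "DataStructures", "Ethics"]
relevant = ["Calculus1", "DataStructures", "OS"]
print(precision_recall(retrieved, relevant))  # 2 hits out of 3 each -> (2/3, 2/3)
```

A system that "provides correct mentoring results at all times" corresponds to both measures reaching 1.0 on the verified test set.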

RPC Correction of KOMPSAT-3A Satellite Image through Automatic Matching Point Extraction Using Unmanned Aerial Vehicle Imagery (무인항공기 영상 활용 자동 정합점 추출을 통한 KOMPSAT-3A 위성영상의 RPC 보정)

  • Park, Jueon;Kim, Taeheon;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1135-1147
    • /
    • 2021
  • To geometrically correct high-resolution satellite imagery, a sensor modeling process that restores the geometric relationship between the satellite sensor and the ground surface at the image acquisition time is required. High-resolution satellites generally provide RPC (Rational Polynomial Coefficient) information, but the vendor-provided RPC includes geometric distortion caused by the position and orientation of the satellite sensor. GCPs (Ground Control Points) are generally used to correct the RPC errors, and the representative way of acquiring GCPs is a field survey that obtains accurate ground coordinates. However, it can be difficult to find GCPs in the satellite image because of image quality, land cover change, relief displacement, and so on. By using image maps acquired from various sensors as reference data, GCP collection can be automated through image matching algorithms. In this study, the RPC of a KOMPSAT-3A satellite image was corrected using matching points extracted from UAV (Unmanned Aerial Vehicle) imagery. We propose a pre-processing method for extracting matching points between the UAV imagery and the KOMPSAT-3A satellite image. To this end, we compared the characteristics of matching points extracted by independently applying SURF (Speeded-Up Robust Features) and phase correlation, which are representative feature-based and area-based matching methods, respectively. The RPC adjustment parameters were calculated using the matching points extracted by each algorithm. To verify the performance and usability of the proposed method, it was compared with a GCP-based RPC correction result. The GCP-based method improved the correction accuracy over the vendor-provided RPC by 2.14 pixels for the sample and 5.43 pixels for the line.
The proposed method using SURF and phase correlation improved the sample accuracy by 0.83 and 1.49 pixels, and the line accuracy by 4.81 and 5.19 pixels, respectively, compared with the vendor-provided RPC. These experimental results show that the proposed method using UAV imagery is a possible alternative to the GCP-based method for RPC correction.
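As a minimal sketch of RPC bias compensation with matching points, assuming a shift-only adjustment model (the study's actual adjustment parameters may include more terms), the least-squares offsets are simply the mean residuals in sample and line. The coordinates below are made up for illustration:

```python
def shift_rpc_bias(projected, observed):
    """Least-squares shift-only RPC bias compensation: with only offset
    parameters, the optimum is the mean residual in sample and line."""
    n = len(projected)
    ds = sum(o[0] - p[0] for p, o in zip(projected, observed)) / n
    dl = sum(o[1] - p[1] for p, o in zip(projected, observed)) / n
    return ds, dl

# Hypothetical matching points: (sample, line) predicted by the vendor RPC
# vs. positions extracted from the UAV reference imagery.
projected = [(100.0, 200.0), (150.0, 260.0), (310.0, 420.0)]
observed  = [(102.1, 205.3), (152.0, 265.5), (312.2, 425.4)]
ds, dl = shift_rpc_bias(projected, observed)
print(ds, dl)  # mean offsets, about 2.1 and 5.4 pixels
```

The computed offsets are then added to the RPC-projected image coordinates, which is how matching points from reference imagery substitute for surveyed GCPs.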

Water Digital Twin for High-tech Electronics Industrial Wastewater Treatment System (II): e-ASM Calibration, Effluent Prediction, Process selection, and Design (첨단 전자산업 폐수처리시설의 Water Digital Twin(II): e-ASM 모델 보정, 수질 예측, 공정 선택과 설계)

  • Heo, SungKu;Jeong, Chanhyeok;Lee, Nahui;Shim, Yerim;Woo, TaeYong;Kim, JeongIn;Yoo, ChangKyoo
    • Clean Technology
    • /
    • v.28 no.1
    • /
    • pp.79-93
    • /
    • 2022
  • In this study, an electronics industrial wastewater activated sludge model (e-ASM) to be used as a Water Digital Twin was calibrated against measurements of real high-tech electronics industrial wastewater treatment from lab-scale and pilot-scale reactors, and examined for its treatment performance, effluent quality prediction, and optimal process selection. For specialized modeling of a high-tech electronics industrial wastewater treatment system, the kinetic parameters of the e-ASM were identified by sensitivity analysis and calibrated by the multiple response surface method (MRS). The calibrated e-ASM showed more than 90% agreement with the experimental data from the lab-scale and pilot-scale processes. Four electronics industrial wastewater treatment processes (MLE, A2/O, 4-stage MLE-MBR, and Bardenpho-MBR) were implemented with the proposed Water Digital Twin to compare their removal efficiencies under various electronics industrial wastewater characteristics. Bardenpho-MBR stably removed more than 90% of the chemical oxygen demand (COD) and showed the highest nitrogen removal efficiency. Furthermore, 98% of a high-concentration 1,800 mg L-1 TMAH influent could be removed when the HRT of the Bardenpho-MBR process was 3 days or more. Hence, the e-ASM developed in this study is expected to serve as a highly compatible Water Digital Twin platform in a variety of situations, including plant optimization, Water AI, and selection of the best available technology (BAT) for a sustainable high-tech electronics industry.
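The abstract reports TMAH removal improving with HRT. As a loudly hypothetical illustration of that trend (simple first-order completely-mixed-reactor kinetics with a made-up rate constant, not the calibrated e-ASM):

```python
def cstr_removal(k_per_day, hrt_days):
    """Removal efficiency of a first-order CSTR: 1 - Cout/Cin = 1 - 1/(1 + k*HRT)."""
    return 1.0 - 1.0 / (1.0 + k_per_day * hrt_days)

# Illustrative rate constant only; shows removal rising toward ~98% as HRT grows.
k = 16.0  # 1/day, hypothetical
for hrt in (0.5, 1, 3):
    print(hrt, round(cstr_removal(k, hrt), 3))
```

The monotone increase with HRT is generic to such kinetics; the actual e-ASM result (98% removal at HRT of 3 days or more) depends on the calibrated model, not on this toy expression.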

Review of Erosion and Piping in Compacted Bentonite Buffers Considering Buffer-Rock Interactions and Deduction of Influencing Factors (완충재-근계암반 상호작용을 고려한 압축 벤토나이트 완충재 침식 및 파이핑 연구 현황 및 주요 영향인자 도출)

  • Hong, Chang-Ho;Kim, Ji-Won;Kim, Jin-Seop;Lee, Changsoo
    • Tunnel and Underground Space
    • /
    • v.32 no.1
    • /
    • pp.30-58
    • /
    • 2022
  • The deep geological repository for high-level radioactive waste disposal is a multi-barrier system comprised of engineered barriers and a natural barrier. The long-term integrity of the repository is affected by the coupled interactions between the individual barrier components. Erosion and piping in the compacted bentonite buffer due to buffer-rock interactions result in the removal of bentonite particles by groundwater flow and can negatively impact the integrity and performance of the buffer. Rapid groundwater inflow at the early stages of disposal can lead to piping in the buffer due to the buildup of pore water pressure, while physicochemical processes between the bentonite buffer and groundwater lead to bentonite swelling and gelation, resulting in erosion from the buffer surface. Hence, evaluating the occurrence of erosion and piping and their effects on the bentonite buffer is crucial in determining the long-term integrity of the repository. Previous studies on bentonite erosion and piping failed to consider the complex coupled thermo-hydro-mechanical-chemical behavior of bentonite-groundwater interactions and lacked a comprehensive model that can account for the complex phenomena observed in experimental tests. This technical note introduces previous studies on the mechanisms, lab-scale experiments, and numerical modeling of bentonite buffer erosion and piping, and summarizes the challenges expected in future investigations.

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that urgently need to be solved in modern society, the existing approach is for researchers to collect opinions from professional experts and scholars through online or offline surveys. However, this method is not always effective. Because of the expense involved, a large number of survey replies are seldom gathered, and in some cases it is hard to find professionals dealing with a specific social issue, so the sample set is often small and may be biased. Furthermore, several experts may draw totally different conclusions about the same social issue, because each expert has a subjective point of view and a different background. It is then considerably hard to figure out what the current social issues are and which of them are really important. To overcome these shortcomings, in this paper we develop a prototype system that semi-automatically detects keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 to July 2012. The proposed system consists of (1) collecting the news articles and extracting their texts, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of the Korean sentences, (4) finding a set of topics for social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm, relying on generative models, whose goal is to best match paragraphs to each topic.
Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we obtain a set of topics, each of which has relevant terms and their probability values. Given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)} and a human annotator labels Topic1 "Unemployment Problem". Looking only at this social keyword, it is non-trivial to understand what actually happened to the unemployment problem in our society; we have no idea of the detailed events. To tackle this, we develop a matching algorithm that computes the probability of a paragraph given a topic from (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs and, using LDA, extract a set of topics from the documents. In our matching process, each paragraph is assigned to the topic it best matches, so each topic ends up with several best-matched paragraphs. For instance, suppose the topic "Unemployment Problem" has the best-matched paragraph "Up to 300 workers lost their jobs in XXX company at Seoul". We can then grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Through this matching process and keyword visualization, researchers will be able to detect social issues easily and quickly.
With this prototype system, we have detected various social issues appearing in our society, and our experimental results show the effectiveness of the proposed methods. A proof-of-concept system is available at http://dslab.snu.ac.kr/demo.html.
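The paragraph-to-topic matching described above — scoring a paragraph by the topic's term probabilities and assigning it to the best-scoring topic — can be sketched with the abstract's own Topic1 example. The smoothing floor for unseen terms is an assumption, not the paper's exact formulation:

```python
import math

def paragraph_topic_score(paragraph_terms, topic):
    """Log-probability of a paragraph under a topic's term distribution,
    smoothing unseen terms with a small floor probability (an assumption)."""
    floor = 1e-6
    return sum(math.log(topic.get(t, floor)) for t in paragraph_terms)

def best_topic(paragraph_terms, topics):
    """Assign the paragraph to the topic with the highest score."""
    return max(topics, key=lambda name: paragraph_topic_score(paragraph_terms, topics[name]))

# Toy topics in the abstract's style: term -> probability under the topic.
topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Economy Crisis": {"debt": 0.5, "inflation": 0.3, "bank": 0.2},
}
paragraph = ["workers", "layoff", "unemployment"]
print(best_topic(paragraph, topics))  # Unemployment Problem
```

Each topic then accumulates its best-matched paragraphs, which is what lets a reader recover details like "300 workers" behind the bare social keyword.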