• Title/Summary/Keyword: Model similarity


Modelling Gas Production Induced Seismicity Using 2D Hydro-Mechanical Coupled Particle Flow Code: Case Study of Seismicity in the Natural Gas Field in Groningen, Netherlands (2차원 수리-역학적 연계 입자유동코드를 사용한 가스생산 유발지진 모델링: 네덜란드 그로닝엔 천연가스전에서의 지진 사례 연구)

  • Jeoung Seok Yoon;Anne Strader;Jian Zhou;Onno Dijkstra;Ramon Secanell;Ki-Bok Min
    • Tunnel and Underground Space
    • /
    • v.33 no.1
    • /
    • pp.57-69
    • /
    • 2023
  • In this study, we simulated induced seismicity in the Groningen natural gas reservoir using 2D hydro-mechanical coupled discrete element modelling (DEM). The code used is PFC2D (Particle Flow Code 2D), a commercial software developed by Itasca; to apply it to this study, we further developed 1) initialization of an inhomogeneous reservoir pressure distribution, 2) a non-linear pressure-time history boundary condition, and 3) local stress field monitoring logic. We generated a 2D reservoir model 40 km × 50 km in size containing a complex fault system and simulated pressure depletion over the period 1960 to 2020. We simulated fault system failure induced by pressure depletion, reproduced the spatiotemporal distribution of induced seismicity, and assessed its failure mechanism. We also estimated the ground subsidence distribution and confirmed its similarity to field measurements in the Groningen region. Through this study, we confirm the feasibility of the presented 2D hydro-mechanical coupled DEM for simulating the deformation of a complex fault system by hydro-mechanically coupled processes.
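As a conceptual illustration of the failure mechanism such hydro-mechanically coupled models capture, the sketch below checks Mohr-Coulomb failure on a fault while pore pressure is depleted under an assumed poroelastic stress path. This is generic rock mechanics written for illustration, not Itasca's proprietary PFC2D logic; the friction, Biot, stress-path, and stress values are all assumptions.

```python
# Illustrative only: Coulomb failure check on a fault during reservoir
# depletion. Not PFC2D; all parameter values are assumptions.
import numpy as np

ALPHA = 0.9             # Biot coefficient (assumption)
MU = 0.45               # fault friction coefficient (assumption, weak fault)
A_H = 0.75              # depletion stress-path coefficient dSh/dp (assumption)
DIP = np.radians(60.0)  # fault dip (assumption)

def coulomb_failure_stress(p, sv=65e6, sh0=50e6, p0=35e6):
    """Coulomb failure stress (Pa) on a dipping fault; > 0 implies slip.
    Total vertical stress sv is held constant; total horizontal stress
    follows the depletion stress path Sh = sh0 + A_H * (p - p0)."""
    sh = sh0 + A_H * (p - p0)
    sv_eff = sv - ALPHA * p               # effective vertical stress
    sh_eff = sh - ALPHA * p               # effective horizontal stress
    # resolve effective principal stresses onto the fault plane
    sn = 0.5 * (sv_eff + sh_eff) + 0.5 * (sv_eff - sh_eff) * np.cos(2 * DIP)
    tau = 0.5 * (sv_eff - sh_eff) * np.sin(2 * DIP)
    return tau - MU * sn

# depletion roughly spanning the 1960-2020 production period
for p in np.linspace(35e6, 8e6, 7):
    print(f"p = {p/1e6:4.1f} MPa  CFS = {coulomb_failure_stress(p)/1e6:+6.2f} MPa")
```

In this sketch, depletion grows the differential effective stress faster than it strengthens the fault, so the Coulomb failure stress rises toward slip; reproducing that destabilizing trend across a complex fault system is what the coupled DEM above does at full scale.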

Monte Carlo Algorithm-Based Dosimetric Comparison between Commissioning Beam Data across Two Elekta Linear Accelerators with Agility™ MLC System

  • Geum Bong Yu;Chang Heon Choi;Jung-in Kim;Jin Dong Cho;Euntaek Yoon;Hyung Jin Choun;Jihye Choi;Soyeon Kim;Yongsik Kim;Do Hoon Oh;Hwajung Lee;Lee Yoo;Minsoo Chun
    • Progress in Medical Physics
    • /
    • v.33 no.4
    • /
    • pp.150-157
    • /
    • 2022
  • Purpose: Elekta Synergy® was commissioned in the Seoul National University Veterinary Medical Teaching Hospital (VMTH). Recently, Chung-Ang University Gwang Myeong Hospital commissioned Elekta Versa HD™. The beam characteristics of the two machines are similar because both use the same Agility™ MLC model. We compared the measured beam data of each institute with data calculated using the Elekta treatment planning system, Monaco®. Methods: Beam data of the commissioned Elekta linear accelerators were measured at the two independent institutes. After installing the beam model based on the measured beam data into Monaco®, Monte Carlo (MC) simulation data were generated, mimicking the beam data in a virtual water phantom. Measured beam data were compared with the calculated data, and their similarity was quantitatively evaluated by gamma analysis. Results: We compared the percent depth dose (PDD) and off-axis profiles of 6 MV photon and 6 MeV electron beams with the MC calculation. With a 3%/3 mm gamma criterion, the photon PDD and profiles showed 100% gamma passing rates except for one inplane profile at 10 cm depth from the VMTH. Gamma analysis of the measured photon beam off-axis profiles between the two institutes showed 100% agreement. The electron beams also showed 100% agreement in PDD distributions. However, the gamma passing rates of the off-axis profiles were 91%-100% with a 3%/3 mm gamma criterion. Conclusions: The beam data and their comparison with the MC calculation at each institute showed good performance. Although the measuring tools were orthogonal, no significant difference was found.
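The gamma analysis used above is a standard, well-defined comparison. Below is a minimal 1D global gamma-index sketch with the paper's 3%/3 mm criterion, run on synthetic PDD-like curves; the curves and the global normalization choice are assumptions, not the institutes' actual data or software.

```python
# Minimal 1D global gamma index (3%/3 mm); illustrative data, not clinical.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    """dd: dose criterion as a fraction of the max reference dose (global);
    dta: distance-to-agreement in mm. Returns one gamma value per ref point."""
    d_norm = dd * d_ref.max()
    gamma = np.empty_like(d_ref, dtype=float)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta) ** 2         # squared distance term
        dose2 = ((d_eval - dr) / d_norm) ** 2      # squared dose term
        gamma[i] = np.sqrt((dist2 + dose2).min())  # min over evaluated points
    return gamma

x = np.arange(0.0, 300.0, 1.0)                     # depth (mm)
measured = 100.0 * np.exp(-x / 180.0)              # toy PDD curve
calculated = 1.01 * measured                       # 1% systematic offset
g = gamma_1d(x, measured, x, calculated)
print(f"passing rate (gamma <= 1): {100.0 * (g <= 1.0).mean():.1f}%")
```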

Generative Adversarial Network Model for Generating Yard Stowage Situation in Container Terminal (컨테이너 터미널의 야드 장치 상태 생성을 위한 생성적 적대 신경망 모형)

  • Jae-Young Shin;Yeong-Il Kim;Hyun-Jun Cho
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.06a
    • /
    • pp.383-384
    • /
    • 2022
  • Following the development of technologies such as digital twins, IoT, and AI after the 4th industrial revolution, decision-making problems are being solved based on high-dimensional data analysis. This has recently been applied to the port logistics sector, and a number of studies on big data analysis, deep learning prediction, and simulation have been conducted on container terminals to improve port productivity. These high-dimensional data analysis techniques generally require large amounts of data. However, the global port environment has changed since the COVID-19 pandemic in 2020: data from before the outbreak no longer reflect the current port environment, and data collected after the outbreak are not yet sufficient for analyses such as deep learning. Therefore, this study presents a port data augmentation method as one way to address this problem. To this end, we generate yard stowage situations of a container terminal with a generative adversarial network (GAN) model and verify their similarity through statistical distribution tests between real and augmented data.
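As a sketch of the approach described above, a minimal GAN on vector-valued stowage features in PyTorch is shown below; the architecture, feature sizes, and training settings are assumptions, since the abstract does not specify them.

```python
# Minimal GAN sketch for tabular stowage features; shapes are assumptions.
import torch
import torch.nn as nn

Z_DIM, X_DIM, BATCH = 16, 32, 64

G = nn.Sequential(nn.Linear(Z_DIM, 64), nn.ReLU(), nn.Linear(64, X_DIM))
D = nn.Sequential(nn.Linear(X_DIM, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(1024, X_DIM)   # stand-in for real yard snapshots

for step in range(200):
    real = real_data[torch.randint(0, len(real_data), (BATCH,))]
    fake = G(torch.randn(BATCH, Z_DIM))

    # discriminator: label real as 1, generated as 0
    loss_d = bce(D(real), torch.ones(BATCH, 1)) + \
             bce(D(fake.detach()), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # generator: push D to label generated samples as real
    loss_g = bce(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The distribution verification the authors mention could then be run per feature, for instance with a two-sample Kolmogorov-Smirnov test (`scipy.stats.ks_2samp`) between real and generated columns.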


Applying deep learning based super-resolution technique for high-resolution urban flood analysis (고해상도 도시 침수 해석을 위한 딥러닝 기반 초해상화 기술 적용)

  • Choi, Hyeonjin;Lee, Songhee;Woo, Hyuna;Kim, Minyoung;Noh, Seong Jin
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.10
    • /
    • pp.641-653
    • /
    • 2023
  • As climate change and urbanization cause unprecedented natural disasters in urban areas, it is crucial to have urban flood predictions with high fidelity and accuracy. However, conventional physically based and deep learning-based urban flood modeling methods require substantial computational resources or data for high-resolution flood analysis. In this study, we propose and implement a method for improving the spatial resolution of urban flood analysis using a deep learning-based super-resolution technique. The proposed approach converts low-resolution flood maps produced by physically based modeling into high-resolution maps, using a super-resolution deep learning model trained on high-resolution modeling data. When applied to two cases of retrospective flood analysis in part of the City of Portland, Oregon, U.S., the results of the 4-m resolution physical simulation were successfully converted into 1-m resolution flood maps through super-resolution. High structural similarity was found between the super-resolution images and the high-resolution originals, with image quality loss within acceptable limits: 22.80 dB (PSNR) and 0.73 (SSIM). The proposed super-resolution method enables efficient model training with a limited number of flood scenarios, significantly reducing data acquisition efforts and computational costs.
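The two reported metrics are standard and straightforward to reproduce. Below is a minimal sketch computing PSNR and SSIM with scikit-image on synthetic arrays standing in for the 1-m reference and super-resolved flood maps; the array contents and the data-range choice are assumptions.

```python
# PSNR/SSIM sketch with scikit-image; synthetic stand-ins for flood maps.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
truth = rng.random((512, 512)).astype(np.float32)   # 1-m reference depth map
sr = truth + 0.05 * rng.standard_normal(truth.shape).astype(np.float32)

span = float(truth.max() - truth.min())
psnr = peak_signal_noise_ratio(truth, sr, data_range=span)
ssim = structural_similarity(truth, sr, data_range=span)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")   # paper reports 22.80 dB / 0.73
```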

A Study on Spatial Pattern of Impact Area of Intersection Using Digital Tachograph Data and Traffic Assignment Model (차량 운행기록정보와 통행배정 모형을 이용한 교차로 영향권의 공간적 패턴에 관한 연구)

  • PARK, Seungjun;HONG, Kiman;KIM, Taegyun;SEO, Hyeon;CHO, Joong Rae;HONG, Young Suk
    • Journal of Korean Society of Transportation
    • /
    • v.36 no.2
    • /
    • pp.155-168
    • /
    • 2018
  • In this study, we analyzed the directional patterns of vehicles entering an intersection from its upstream links, as a step toward predicting short-term (e.g., 5- or 10-minute) directional traffic volumes under interrupted flow, and examined the possibility of traffic volume prediction using a traffic assignment model. The analysis performs cluster analysis on the ratios of traffic volume by intersection direction, aggregated in 2-hour intervals from one week of taxi DTG (Digital Tachograph) data, to investigate the similarity of patterns. Also, to link with the results of the traffic assignment model, this study compares impact areas of 5 and 10 minutes from the center of the intersection with the analysis results of the taxi DTG data. To do this, we developed an algorithm that delineates the impact area of an intersection using the taxi DTG data and the traffic assignment model. As a result of the analysis, the intersection entry patterns of the taxis were grouped into 12 clusters, with a Cubic Clustering Criterion (indicating the confidence level of the clustering) of 6.92. Correlation analysis with the impact area of the traffic assignment model yielded a correlation coefficient of 0.86 for the 5-minute impact area, a significant result. The coefficient dropped to 0.69 for the 10-minute impact area, which we attribute to insufficient accuracy of the O/D (Origin/Destination) travel and network data. In the future, if the accuracy of the traffic network and of time-of-day O/D volumes improves, the traffic volumes calculated from the traffic assignment model are expected to be usable for traffic signal control at intersections.
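A minimal sketch of the two analysis steps named above, k-means clustering of 2-hour directional entry ratios and correlation of DTG-based volumes against assignment-model volumes, is given below; the data shapes, k = 12, and all values are illustrative assumptions rather than the study's data.

```python
# Sketch: cluster directional entry ratios, then correlate volume estimates.
import numpy as np
from scipy.stats import pearsonr
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# one row per (upstream link, 2-hour bin): left/through/right entry ratios
ratios = rng.dirichlet(np.ones(3), size=500)
km = KMeans(n_clusters=12, n_init=10, random_state=0).fit(ratios)
print("cluster sizes:", np.bincount(km.labels_))   # paper: 12 groups, CCC = 6.92

# compare DTG-derived volumes with assignment-model volumes in an impact area
dtg = 1000.0 * rng.random(100)
model = 0.9 * dtg + 80.0 * rng.standard_normal(100)
r, _ = pearsonr(dtg, model)                        # paper: r = 0.86 (5-min area)
print(f"correlation: {r:.2f}")
```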

Dynamic Behavior of Model Set Net in the Flow (모형 정치망의 흐름에 대한 거동)

  • Jung, Gi-Cheul;Kwon, Byeong-Guk;Lee, Ju-Hee
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.33 no.4
    • /
    • pp.275-284
    • /
    • 1997
  • This experiment measured the sinking depth of each buoy, the change in net shape, and the tension of the sand bag lines under the R (from bag net to fish court) and L (from fish court to bag net) current directions at various velocities, using a model experiment. The model net was built at one-fiftieth of the real net, with its size determined in consideration of Tauti's similarity law and the dimensions of the experimental tank. 1. The changes in net shape were as follows: In the R current, at 0.2 m/sec the end net of the fish court moved 20 mm downstream and 10 mm upward, so the whole model net lifted. At 0.6 m/sec the net assumed an almost linear shape from bag net to fish court. In the L current, at 0.2 m/sec the door net moved 242 mm downstream and 18 mm upward, so the whole model net lifted. At 0.5 m/sec the net assumed an almost linear shape from fish court to bag net. 2. The sinking depths of each buoy were as follows: In the R current, the head buoy started sinking at 0.2 m/sec and sank 20 mm at 0.3 m/sec and 99 mm at 0.6 m/sec. The end buoy did not sink between 0 m/sec and 0.6 m/sec, showing only a slight shaking. In the L current, the end buoy started sinking at 0.1 m/sec, and sank 5 mm at 0.2 m/sec and 108 mm at 0.6 m/sec. At 0.5 m/sec the whole model net sank except the head buoy. 3. The changes in sand bag line tension were as follows: In the R current, the tension on the head buoy's sand bag line increased from 273.51 g at 0.1 m/sec to 1298.40 g at 0.6 m/sec. In the L current, the tension on the end buoy's sand bag line on one side increased from 137.08 g at 0.1 m/sec to 646.00 g at 0.6 m/sec. As velocity increased in both the R and L current directions, the tension changes were concentrated on the upstream sand bag lines, while no significant increase in tension was observed in the other sand bag lines.
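For reference, a commonly cited form of Tauti's similarity law (my paraphrase, not quoted from the paper) relates full-scale (f) and model (m) quantities through the length scale of the net and the separate scale of twine diameter and mesh size; it is consistent with the model-to-prototype conversions used in the pair-trawl entry later in this list.

```latex
% Tauti's similarity law, commonly cited form (paraphrase; the exponents for
% velocity and force follow from balancing hydrodynamic drag against the
% in-water weight of the netting):
%   lambda  = L_f / L_m  (net dimensions),  lambda' = d_f / d_m  (twine, mesh)
\[
\frac{v_f}{v_m} = \sqrt{\lambda'}, \qquad
\frac{F_f}{F_m} = \lambda^{2}\,\lambda' .
\]
```

So a 1/50-scale model converts lengths by a factor of 50, while the velocity and force conversions also depend on the twine and mesh scale λ′ chosen for the model netting.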


A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science
    • /
    • v.57 no.1
    • /
    • pp.82-108
    • /
    • 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted within the field, and research on digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried cultural heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics utilizing open-source software and photogrammetry, which is believed to be the most efficient of the 3D scanning approaches. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and creating a digital drawing. To enhance accessibility, the method uses only open-source software throughout the entire process. The results confirm that, in quantitative evaluation, the deviation between numerical measurements of the actual artifact and the 3D model was minimal; quantitative quality analyses of the open-source and commercial software also showed high similarity. However, data processing was overwhelmingly faster in the commercial software, presumably the result of higher computational speed from improved algorithms. In qualitative evaluation, some differences in mesh and texture quality occurred. The 3D models generated by open-source software exhibited the following problems: noise on the mesh surface, rough mesh surfaces, and difficulty in identifying the production marks and patterns of relics. Nevertheless, some of the open-source software produced quality comparable to that of commercial software in both quantitative and qualitative evaluations. Open-source software for editing 3D models could not only post-process, match, and merge 3D models, but also adjust scale, produce joining surfaces, and render the images necessary for the actual measurement of relics. The final drawing was traced in a CAD program that is also open-source. In archaeological research, photogrammetry is applicable to various processes, including excavation, report writing, and research on numerical data from 3D models. With breakthrough developments in computer vision, the variety of open-source software has expanded and its performance has significantly improved. With such highly accessible digital technology, the acquisition of 3D model data in archaeology can serve as basic data for the preservation of and active research on cultural heritage.
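As a sketch of the kind of quantitative deviation check reported above, the open-source Open3D library can sample both meshes and measure nearest-neighbor distances between them; the file names here are placeholders, and this is not the authors' actual pipeline.

```python
# Mesh-to-mesh deviation sketch with Open3D; paths are placeholders.
import numpy as np
import open3d as o3d

mesh_os = o3d.io.read_triangle_mesh("relic_opensource.ply")    # hypothetical
mesh_ref = o3d.io.read_triangle_mesh("relic_commercial.ply")   # hypothetical

pcd_os = mesh_os.sample_points_uniformly(number_of_points=100_000)
pcd_ref = mesh_ref.sample_points_uniformly(number_of_points=100_000)

# distance from each sampled open-source point to the reference cloud
d = np.asarray(pcd_os.compute_point_cloud_distance(pcd_ref))
print(f"mean deviation: {d.mean():.4f}  95th percentile: {np.percentile(d, 95):.4f}")
```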

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae;Kim, Seonghyeon;Tak, Onsik;Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.95-118
    • /
    • 2017
  • Recently, centered on the downtown area, transactions of row and multiplex housing have become active, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing nevertheless remain a blind spot for real estate information, which creates social problems owing to changes in market size and to information asymmetry following shifts in demand. In addition, the 5 or 25 districts used by the Seoul Metropolitan Government and the Korean Appraisal Board (hereafter, KAB) were established along administrative boundaries and have been used in existing real estate studies; because they are urban-planning zones, they are not a district classification suited to real estate research. Building on existing studies, this study finds that Seoul's spatial structure needs to be redefined when estimating future housing prices, so we attempted to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing. In other words, the simple division by existing administrative districts has proven inefficient, and this study therefore clusters Seoul into new areas for more efficient real estate analysis. A hedonic model was applied to real transaction price data of row and multiplex housing, and the K-means clustering algorithm was used to cluster the spatial structure of Seoul. We used real transaction prices of Seoul row and multiplex housing from January 2014 to December 2016 and the official land value of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Preprocessing comprised removal of underground transactions, price standardization per area, and removal of outlier cases (standardized values above 5 or below -5), which reduced the dataset from 132,707 to 126,759 cases. Data analysis was performed in R. After preprocessing, the data model was constructed: K-means clustering was performed first, then a regression analysis using the hedonic model, followed by a cosine similarity analysis. Based on the constructed data model, we clustered on the longitude and latitude of Seoul and compared the result with the existing districts. The goodness of fit of the model was above 75%, and the variables used in the hedonic model were significant. In other words, the existing 5 or 25 administrative districts were reorganized into 16 clustered districts. This study thus derives a clustering method for row and multiplex housing in Seoul using the K-means clustering algorithm and a hedonic model that reflects price characteristics. Academically, the study clusters by price characteristics to improve on the districts used by the Seoul Metropolitan Government, the KAB, and existing real estate research; it also moves beyond the apartment focus of prior work and proposes an area classification for Seoul using public information (i.e., real transaction data from MOLIT) under Government 3.0. Practically, the results can serve as basic data for real estate research on row and multiplex housing, are expected to stimulate such research, and should increase the accuracy of models of actual transactions. Future research should conduct various analyses to overcome the study's limitations and pursue deeper investigation.
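A minimal sketch of the paper's two building blocks, K-means on transaction coordinates and a hedonic (log-linear) price regression with district dummies, follows; the feature set and synthetic data are assumptions, since the abstract does not give the exact specification beyond the clustering yielding 16 districts.

```python
# Sketch: K-means districts + hedonic regression; synthetic, assumed features.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "lon": 126.80 + 0.30 * rng.random(n),     # illustrative Seoul longitudes
    "lat": 37.45 + 0.25 * rng.random(n),      # illustrative Seoul latitudes
    "area_m2": 20.0 + 60.0 * rng.random(n),
    "age_yr": 40.0 * rng.random(n),
    "price_per_m2": 4e6 + 2e6 * rng.random(n),
})

# cluster Seoul into 16 districts on longitude/latitude, as the paper derives
df["district"] = KMeans(n_clusters=16, n_init=10, random_state=0) \
    .fit_predict(df[["lon", "lat"]])

# hedonic regression: log price on attributes plus district dummies
X = pd.get_dummies(df[["area_m2", "age_yr", "district"]], columns=["district"])
y = np.log(df["price_per_m2"])
reg = LinearRegression().fit(X, y)
print(f"R^2 = {reg.score(X, y):.3f}")
```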

The Adaptive Personalization Method According to Users' Purchasing Index: Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.95-108
    • /
    • 2011
  • This is a study of a personalization method that intelligently adapts the level of clustering to each customer's purchasing index. In the e-business era, many companies gather customers' demographic and transactional information such as age, gender, purchase date, and product category, and use this information to predict customers' preferences or purchasing patterns so that they can provide more customized services. The conventional Customer-Segmentation method provides customized services for each customer group: it clusters the whole customer set into groups based on similarity and builds a predictive model for each resulting group. It thus keeps the number of predictive models manageable and provides more data for customers who do not have enough of their own by borrowing the data of similar customers. However, this method often fails to provide highly personalized services to each customer, which is especially important for VIP customers. Furthermore, it clusters customers who already have a considerable amount of data together with customers who have only a little, which increases computational cost unnecessarily without significant performance improvement. The other conventional method, the 1-to-1 method, provides more customized services than the Customer-Segmentation method since each predictive model is built using only the data of an individual customer. It not only provides highly personalized services but also builds a relatively simple and less costly model for each customer. However, the 1-to-1 method does not produce a good predictive model when a customer has only a few data points; if a customer's transactional data are insufficient, its performance deteriorates. To overcome the limitations of these two conventional methods, we suggest a new method, Intelligent Customer Segmentation, which provides adaptively personalized services according to each customer's purchasing index. The suggested method clusters customers according to their purchasing index, so that predictions for customers with few purchases are based on data from more intensively clustered groups, while VIP customers, who already have a considerable amount of data, are clustered to a much lesser extent or not at all. The main idea is to apply clustering only when the number of transactional data points of the target customer is below a predefined criterion size. To find this criterion, we propose an algorithm called sliding window correlation analysis, which finds the transactional data size below which the performance of the 1-to-1 method drops sharply due to data sparsity. After finding this criterion size, we apply the conventional 1-to-1 method to customers who have more data than the criterion, and apply clustering to those who have less, until at least the criterion amount of data is available for model building. We apply the two conventional methods and the newly suggested method to Nielsen's beverage purchasing data to predict customers' purchase amounts and purchase categories. We use two data mining techniques (Support Vector Machine and Linear Regression) and two performance measures (MAE and RMSE) to predict the two dependent variables. The results show that the suggested Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and produces the same level of performance as the Customer-Segmentation method at much lower computational cost.
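A minimal sketch of the segmentation rule described above follows: customers at or above a criterion transaction count get their own 1-to-1 model, while the rest are pooled by clustering their profiles. The criterion value, profile features, and use of SVR here are assumptions consistent with, but not specified by, the abstract; in the paper the criterion would come from the sliding window correlation analysis.

```python
# Sketch of Intelligent Customer Segmentation; criterion and features assumed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

def build_models(histories, profiles, criterion=30, n_clusters=5):
    """histories: {customer_id: (X, y)} transaction features and targets;
    profiles: {customer_id: feature vector} used only for clustering."""
    models, small = {}, []
    for cid, (X, y) in histories.items():
        if len(y) >= criterion:
            models[cid] = SVR().fit(X, y)        # enough data: 1-to-1 model
        else:
            small.append(cid)                    # pool with similar customers

    if small:
        km = KMeans(n_clusters=min(n_clusters, len(small)),
                    n_init=10, random_state=0)
        labels = km.fit_predict(np.array([profiles[c] for c in small]))
        for g in np.unique(labels):
            members = [c for c, lab in zip(small, labels) if lab == g]
            X = np.vstack([histories[c][0] for c in members])
            y = np.concatenate([histories[c][1] for c in members])
            shared = SVR().fit(X, y)             # group model on pooled data
            models.update({c: shared for c in members})
    return models
```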

The Model Experiment on the Pair Midwater Trawl (중층용 쌍끌이 기선저인망의 모형실험)

  • Cho, Sam-Kwang;Lee, Ju-Hee;Jang, Chung-Sik
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.31 no.3
    • /
    • pp.228-239
    • /
    • 1995
  • A model experiment on the pair midwater trawl net prevailing in Denmark was carried out to obtain basic data applicable to Korean pair bottom trawlers. The model net was made at 1/30 scale, considering Tauti's similarity law of fishing gear and the dimensions of the experimental tank. The vertical opening, horizontal opening, towing tension, and net working depth of the model net were measured in the tank at towing velocities of 0.46-1.15 m/sec, front weights of 15.5-62.0 g, and distances between paired boats of 5-8 m (corresponding in the prototype net to 2-5 knots of towing velocity, 70-280 kg of weight, and 150-240 m of distance, respectively). The results of the model experiment, converted to the full-scale net, are as follows: 1. The vertical opening showed its largest value of 32 m at 2 knots towing velocity, 280 kg front weight, and 150 m distance between paired boats, and its smallest value of 6 m at 5 knots, 70 kg, and 240 m. 2. The horizontal opening showed its largest value of 45 m at 5 knots, 70 kg, and 240 m, and its smallest value of 33 m at 2 knots, 280 kg, and 150 m. 3. The towing tension showed its largest value of 10,000 kg at 5 knots, 280 kg, and 240 m, and its smallest value of 1,600 kg at 2 knots, 70 kg, and 150 m. 4. The net working depth showed its largest value of 38 m at 2 knots, 280 kg, and 150 m, and its smallest value of 6 m at 5 knots, 70 kg, and 240 m. 5. The net opening area showed its largest value of 1,100 m² at 2 knots, 280 kg, and 180 m, and its smallest value of 250 m² at 5 knots, 70 kg, and 240 m.
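The paired values in the abstract fix the model-to-prototype conversion factors, and they are mutually consistent with Tauti's law as sketched earlier in this list (force ratio ≈ length ratio² × velocity ratio²). A small conversion helper, for illustration only and not the authors' code:

```python
# Model-to-prototype conversions implied by the abstract's paired values:
# 5-8 m <-> 150-240 m, 0.46-1.15 m/s <-> 2-5 knots, 15.5-62.0 g <-> 70-280 kg.
KT = 0.5144                       # m/s per knot

LENGTH_RATIO = 150.0 / 5.0        # = 30, the geometric scale
VEL_RATIO = (2.0 * KT) / 0.46     # ~2.24, i.e. ~sqrt(5)
FORCE_RATIO = 70_000.0 / 15.5     # ~4516, i.e. ~30**2 * 5

def to_prototype(length_m=None, velocity_ms=None, force_g=None):
    """Convert model-tank measurements to full-scale values."""
    out = {}
    if length_m is not None:
        out["length_m"] = length_m * LENGTH_RATIO
    if velocity_ms is not None:
        out["velocity_kt"] = velocity_ms * VEL_RATIO / KT
    if force_g is not None:
        out["force_kg"] = force_g * FORCE_RATIO / 1000.0
    return out

print(to_prototype(length_m=6.0, velocity_ms=0.69, force_g=31.0))
# -> {'length_m': 180.0, 'velocity_kt': ~3.0, 'force_kg': ~140.0}
```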
