• Title/Summary/Keyword: 생성모형 (generative model)

Search results: 1,366 items (processing time: 0.031 s)

Agroclimatology of North Korea for Paddy Rice Cultivation: Preliminary Results from a Simulation Experiment (생육모의에 의한 북한지방 시ㆍ군별 벼 재배기후 예비분석)

  • Yun Jin-Il;Lee Kwang-Hoe
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • Vol.2 No.2
    • /
    • pp.47-61
    • /
    • 2000
  • Agroclimatic zoning was done for paddy rice culture in North Korea based on a simulation experiment. Daily weather data for the experiment were generated in three steps: spatial interpolation based on topoclimatological relationships, zonal summarization of grid-cell values, and conversion of monthly climate data to daily weather data. Regression models for estimating monthly climatological temperature were derived by a statistical procedure using monthly averages from 51 standard weather stations in South and North Korea (1981-1994) and spatial variables such as latitude, altitude, distance from the coast, slope angle, and aspect-dependent field of view (openness). Selected models (0.4 to 1.6°C RMSE) were applied to generate monthly temperature surfaces over the entire North Korean territory on a 1 km × 1 km grid. Monthly precipitation data were prepared by the procedure described in Yun (2000). Solar radiation data for 27 North Korean stations were reproduced by applying a relationship found in South Korea: [Solar radiation, MJ m⁻² day⁻¹] = 0.344 + 0.4756 [Extraterrestrial solar irradiance] + 0.0299 [Openness toward south, 0-255] - 1.307 [Cloud amount, 0-10] - 0.01 [Relative humidity, %] (r² = 0.92, RMSE = 0.95). Monthly solar irradiance at the 27 points, calculated from the reproduced data set, was converted to 1 km × 1 km grid data by inverse-distance-weighted interpolation. The grid-cell values of monthly temperature, solar radiation, and precipitation were summarized to represent each county, which serves as the land unit for the growth simulation. Finally, we randomly generated 30 years of daily maximum and minimum temperature, solar irradiance, and precipitation from the monthly climatic data for each county, using the statistical method suggested by Pickering et al. (1994).
CERES-Rice, a rice growth simulation model, was tuned to accommodate the agronomic characteristics of major North Korean cultivars based on phenological and yield data observed at two sites in South Korea during 1995-1998. The daily weather data were fed into the model to simulate crop status at 183 counties in North Korea for 30 years. Results were analyzed with respect to spatial and temporal variation in yield and maturity, and used to score each county's suitability for paddy rice culture.
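The final generation step (daily weather sampled from monthly climate values) can be sketched as follows. This is a deliberately minimal illustration assuming simple normal sampling; the actual method of Pickering et al. (1994) also preserves day-to-day autocorrelation and cross-correlation between variables, which this sketch omits.

```python
import numpy as np

# Hypothetical sketch: generate a daily temperature series whose monthly
# mean reproduces a prescribed monthly climate value. The standard
# deviation (sd) here is an assumed, illustrative parameter.
def generate_daily_temps(monthly_mean, days_in_month, sd=2.0, rng=None):
    rng = rng or np.random.default_rng(0)
    daily = rng.normal(monthly_mean, sd, size=days_in_month)
    # re-center so the generated month reproduces the climatological mean
    return daily - daily.mean() + monthly_mean

july = generate_daily_temps(monthly_mean=23.5, days_in_month=31)
print(round(july.mean(), 2))  # 23.5 by construction
```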


An Efficient Heuristic for Storage Location Assignment and Reallocation for Products of Different Brands at Internet Shopping Malls for Clothing (의류 인터넷 쇼핑몰에서 브랜드를 고려한 상품 입고 및 재배치 방법 연구)

  • Song, Yong-Uk;Ahn, Byung-Hyuk
    • Journal of Intelligence and Information Systems
    • /
    • Vol.16 No.2
    • /
    • pp.129-141
    • /
    • 2010
  • An Internet shopping mall for clothing operates a warehouse for packing and shipping products to fulfill its orders. All products in the warehouse are put into boxes of the same brand, and the boxes are stored in a row on shelves equipped in the warehouse. To make picking and management easy, boxes of the same brand are located side by side on the shelves. When new products arrive at the warehouse for storage, the products of a brand are put into boxes, and those boxes are located adjacent to the existing boxes of the same brand. If there is not enough space for the incoming boxes, however, some boxes of other brands must be moved away, and the incoming boxes are then placed adjacently in the resulting vacant spaces. We want to minimize the movement of existing boxes of other brands to other places on the shelves during the warehousing of incoming boxes, while keeping all boxes of the same brand side by side. First, we define the adjacency of boxes by viewing the shelves as a one-dimensional series of storage spaces (cells), tagging the series of cells with numbers starting from one, and considering any two boxes to be adjacent if their cell numbers are consecutive. We then formulated the problem as an integer programming model to obtain an optimal solution. An integer programming formulation with a branch-and-bound technique may not be tractable, however, because it would take too long to solve given the number of cells and boxes in the warehouse and the computing power available to the Internet shopping mall. As an alternative, we designed a fast heuristic for this reallocation problem by focusing only on the unused spaces (empty cells) on the shelves, which results in an assignment problem model.
In this approach, the incoming boxes are assigned to the empty cells, and the boxes are then reorganized so that the boxes of each brand are adjacent to one another. The objective of this approach is to minimize box movement during the reorganization process while keeping the boxes of each brand adjacent. The approach, however, does not ensure optimality in terms of the original problem, that is, minimizing the movement of existing boxes while keeping boxes of the same brand adjacent. Even though this heuristic may produce a suboptimal solution, we could obtain a satisfactory solution within a satisfactory time, acceptable to real-world experts. To justify the quality of the heuristic solutions, we randomly generated 100 problems, with the number of cells ranging from 2,000 to 4,000, solved them with both our heuristic and the original integer programming approach using a commercial optimization software package, and compared the heuristic solutions with their corresponding optimal solutions in terms of solution time and the number of box movements. We also implemented our heuristic in a storage location assignment system for the Internet shopping mall.
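The assignment-problem view (incoming boxes to empty cells at minimal movement cost) can be illustrated with a tiny brute-force solver. This is not the paper's heuristic, which scales to thousands of cells; the brute force below only demonstrates the underlying assignment model on a toy instance, and the cost matrix values are made up.

```python
from itertools import permutations

# Illustrative sketch of the assignment-problem model: assign each
# incoming box to an empty cell so total reallocation cost is minimal.
# cost[i][j] = hypothetical cost of putting box i into empty cell j.
def assign_boxes(cost):
    n = len(cost)
    best = None
    for perm in permutations(range(n)):  # feasible only for tiny n
        total = sum(cost[i][perm[i]] for i in range(n))
        if best is None or total < best[0]:
            best = (total, perm)
    return best

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
total, assignment = assign_boxes(cost)
print(total, assignment)  # 5 (1, 0, 2): box0->cell1, box1->cell0, box2->cell2
```

A real implementation would use a polynomial-time method such as the Hungarian algorithm instead of enumeration.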

Geophysical Studies on Major Faults in the Gyeonggi Massif : Gravity and Electrical Surveys In the Gongju Basin (경기육괴내 주요 단층대의 지구물리학적 연구: 공주분지의 중력 및 지전기 탐사)

  • Kwon Byung-Doo;Jung Gyung-Ja;Baag Chang-Eob
    • The Korean Journal of Petroleum Geology
    • /
    • Vol.2 No.2
    • /
    • pp.43-50
    • /
    • 1994
  • The geologic structure of the Gongju Basin, a Cretaceous sedimentary basin located on the boundary between the Gyeonggi Massif and the Ogcheon Belt, is modeled using gravity data and interpreted in relation to basin-forming tectonism. An electrical survey with a dipole-dipole array was also conducted to reveal the development of fractures in the two fault zones that form the boundaries of the basin. In the gravity data reduction, the terrain correction was performed using a conic prism model, which gave better results especially for topography with steep slopes. The gravity model of the geologic structure of the Gongju Basin was obtained by forward modeling based on the surface geology and density inversion. It reveals that the width of the basin is about 4 km at its central part and about 2.5 km at the southern part. The depth of the crystalline basement beneath the sedimentary rocks of the basin is about 400-700 m below sea level, and the fill is thinner in the center than at the margins. The fault at the southeastern boundary appears more clearly than that at the northwestern boundary, and its fracture zone may extend to a depth of more than 1 km. It is therefore thought that tectonic movement along the southeastern boundary fault was much stronger. These results coincide with the broad low-resistivity anomaly at the southeastern boundary of the basin in the resistivity section. Fracture zones of low density are also recognized inside the basin from the gravity model. The swelling of the basement and the fractures in the sedimentary rocks of the basin suggest that compressional tectonic stress was also involved after deposition of the Cretaceous sediments.
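As an order-of-magnitude check on why a few hundred metres of low-density sedimentary fill produces a measurable gravity low, the Bouguer slab approximation (Δg = 2πGΔρt) is useful. This is only an illustrative back-of-envelope calculation, not the paper's forward model, and the density contrast assumed here is hypothetical.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Bouguer slab approximation: gravity effect of an infinite horizontal
# slab of given density contrast and thickness. Illustrative only; the
# paper uses full forward modeling with a conic prism terrain correction.
def slab_anomaly_mgal(density_contrast, thickness):
    dg = 2 * math.pi * G * density_contrast * thickness  # m/s^2
    return dg / 1e-5  # convert to mGal (1 mGal = 1e-5 m/s^2)

# e.g. 700 m of sediments assumed ~400 kg/m^3 lighter than basement
print(round(slab_anomaly_mgal(-400, 700), 1))  # about -11.7 mGal
```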


Comparative assessment and uncertainty analysis of ensemble-based hydrologic data assimilation using airGRdatassim (airGRdatassim을 이용한 앙상블 기반 수문자료동화 기법의 비교 및 불확실성 평가)

  • Lee, Garim;Lee, Songhee;Kim, Bomi;Woo, Dong Kook;Noh, Seong Jin
    • Journal of Korea Water Resources Association
    • /
    • Vol.55 No.10
    • /
    • pp.761-774
    • /
    • 2022
  • Accurate hydrologic prediction is essential for analyzing the effects of drought, flood, and climate change on flow rates, water quality, and ecosystems. Disentangling the uncertainty of a hydrological model is one of the important issues in hydrology and water resources research. Hydrologic data assimilation (DA), a technique that updates the states or parameters of a hydrological model to produce the most likely estimates of its initial conditions, is one way to minimize uncertainty in hydrological simulations and improve predictive accuracy. In this study, two ensemble-based sequential DA techniques, the ensemble Kalman filter and the particle filter, are comparatively analyzed for daily discharge simulation at the Yongdam catchment using airGRdatassim. The results show that the Kling-Gupta efficiency (KGE) improved from 0.799 in the open-loop simulation to 0.826 with the ensemble Kalman filter and to 0.933 with the particle filter. In addition, we analyzed the effects of hyper-parameters of the data assimilation methods, such as the precipitation and potential evaporation forcing error parameters and the selection of perturbed and updated states. Under the forcing error conditions tested, the particle filter was superior to the ensemble Kalman filter in terms of the KGE index, and its optimal forcing noise was relatively smaller. Moreover, with more state variables included in the updating step, the performance of data assimilation improved, implying that adequate selection of updated states can itself be treated as a hyper-parameter. The simulation experiments in this study indicate that DA hyper-parameters need to be carefully optimized to exploit the potential of DA methods.
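The ensemble Kalman filter analysis step mentioned above can be sketched for a scalar model state. This is the textbook stochastic EnKF update, not the airGRdatassim implementation (which is an R package wrapping GR hydrological models); the numbers are purely illustrative.

```python
import numpy as np

# Minimal sketch of a stochastic EnKF analysis step for a scalar state
# (e.g., a storage level). Each ensemble member is nudged toward a
# perturbed observation by the Kalman gain.
def enkf_update(ensemble, y_obs, obs_var, rng):
    pf = ensemble.var(ddof=1)        # forecast error variance
    k = pf / (pf + obs_var)          # Kalman gain (observation operator H = 1)
    y_pert = y_obs + rng.normal(0, np.sqrt(obs_var), size=ensemble.size)
    return ensemble + k * (y_pert - ensemble)

rng = np.random.default_rng(42)
forecast = rng.normal(10.0, 2.0, size=100)  # forecast ensemble
analysis = enkf_update(forecast, y_obs=12.0, obs_var=1.0, rng=rng)
# the analysis ensemble tightens and shifts toward the observation
print(analysis.var(ddof=1) < forecast.var(ddof=1))  # True
```

A particle filter would instead reweight and resample the members by their likelihood rather than shifting them linearly.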

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems
    • /
    • Vol.20 No.1
    • /
    • pp.149-161
    • /
    • 2014
  • The export of domestic public services to overseas markets faces many potential obstacles stemming from different export procedures, target services, and socio-economic environments. To alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and testing of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology include objective, requirements, activity, and service.
The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation. Sub-attributes of requirements are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase. The activity category also has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. The key attributes of the requirements ontology are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business law, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; the activity attribute represents business processes in detail. The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. 
The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. A priority list of target services for a certain country, and/or a priority list of target countries for a certain public service, is generated by a matching algorithm. These lists are used as input seeds to simulate consortium partners and government policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and a work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered an alternative, and various alternatives are derived from the capability index of the enterprises. For financial packages, a mix of various foreign aid funds can be simulated at this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. It could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
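The priority-list matching step could be sketched as a simple requirement-coverage score between services and countries. The attribute names and data below are entirely hypothetical, not the paper's ontology vocabulary; the point is only the shape of the matching algorithm.

```python
# Hypothetical sketch of the priority-list matching step: score each
# (service, country) pair by how many of the service's requirements the
# country's environment satisfies, then rank countries by that score.
services = {
    "e-procurement": {"broadband", "digital-id", "legal-framework"},
    "bus-ticketing": {"broadband", "smart-card"},
}
countries = {
    "country-A": {"broadband", "smart-card", "digital-id"},
    "country-B": {"broadband"},
}

def priority_list(requirements):
    scores = {c: len(requirements & env) / len(requirements)
              for c, env in countries.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(priority_list(services["bus-ticketing"]))
# ['country-A', 'country-B']  (A satisfies 2/2 requirements, B only 1/2)
```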

Aeromagnetic Interpretation of the Southern and Western Offshore Korea (한국 서남근해에 대한 항공자력탐사 해석)

  • Baag Czango;Baag Chang-Eob
    • The Korean Journal of Petroleum Geology
    • /
    • Vol.2 No.2
    • /
    • pp.51-57
    • /
    • 1994
  • Analysis of the aeromagnetic data acquired by the US Navy in 1969 allows us to predict a new sedimentary basin, the Heugsan Basin, south of the known Gunsan Basin in Block II. The basin appears to consist of three sub-basins trending NNW-SSE. The results of our analysis provide not only an independent assessment of the Gunsan Basin but also important new information on the tectonic origin and mechanism of the two basins, as well as of the entire region. The basin-forming tectonic style is interpreted as a rhombochasm associated with double overstepped left-lateral wrench faults. From the magnetic evidence, a few NE-SW-trending major onshore faults extend into the study area; we also interpret these faults to be left-lateral wrenches. This new gross structural style is consistent with the results of a recent Yeongdong Basin analysis by Lee. The senses of fault movement are also supported by paleomagnetic evidence that the Philippine Sea has experienced an 80-degree clockwise rotation since the Eocene. Based on a 2.5-dimensional model study, the probable maximum thickness of the sediments in the Gunsan Basin is approximately 7,500 meters. We believe the Heugsan Basin remained unidentified because a high-velocity layer may overlie it. Because the overall structural configuration of the Heugsan Basin appears favorable for hydrocarbon accumulation, a detailed airborne magnetic survey is recommended in the area to verify the magnetic expression of this thick basin. A detailed follow-up marine gravity survey is also recommended, to delineate the sedimentary section and to supplement the magnetic data, but only if an overlying high-velocity layer is confirmed; otherwise a high-energy-source seismic survey may be more effective.


A Study on the Establishment Case of Technical Standard for Electronic Record Information Package (전자문서 정보패키지 구축 사례 연구 - '공인전자문서보관소 전자문서 정보패키지 기술규격 개발 연구'를 중심으로-)

  • Kim, Sung-Kyum
    • The Korean Journal of Archival Studies
    • /
    • No.16
    • /
    • pp.97-146
    • /
    • 2007
  • The days when people used paper to prepare and manage all kinds of documents in the course of their work are gone; today, electronic documents have replaced paper. Unlike paper documents, electronic ones maximize job efficiency through their convenience in production and storage. But they also have disadvantages: it is difficult to distinguish originals from copies as one can with paper documents; it is not easy to detect changes or damage to a document; they are prone to alteration and damage from external influences in the electronic environment; and they require an enormous workforce and large costs for the immediate measures that changes to the software and hardware environment demand. Despite these weaknesses, electronic documents account for an ever-larger share of the current job environment thanks to their convenience and production-cost efficiency. Both the government and the private sector have sought plans to maximize their advantages while minimizing their risks. One such method is the Authorized Retention Center described in this study. Its smooth operation has a couple of prerequisites: the legal validity of electronic documents must be guaranteed in the administrative aspect, and the reliability and authenticity of electronic documents must first be secured in the technological aspect. Responding to these needs, the Ministry of Commerce, Industry and Energy and the Korea Institute for Electronic Commerce, the two main bodies driving the Authorized Retention Center project, revised the Electronic Commerce Act and supplemented the provisions guaranteeing the legal validity of electronic documents in 2005, and in 2006 conducted research on ways to preserve electronic documents for the long term and secure their reliability, as demanded by the center's users.
To fulfill these goals of the Authorized Retention Center, this study examined the center's technical standard for electronic record information packages and applied the ISO 14721 information package model, the standard for the long-term preservation of digital data. It also suggested a process for producing and managing information packages so that SIP, AIP, and DIP metadata features would exist for the production, preservation, and user-utilization phases of electronic documents and could be implemented according to the center's policies. Based on this groundwork, the study introduced the flow among the production and processing stages, the application methods, and the packages of the center's technical standard for electronic record information packages, and raised some issues that should be continuously researched in the records management field based on the results.
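The OAIS (ISO 14721) package flow referred to above moves a Submission Information Package (SIP) through ingest into an Archival Information Package (AIP), from which Dissemination Information Packages (DIPs) are derived for users. A minimal sketch of that flow, with purely illustrative metadata fields (not the center's actual schema), might look like:

```python
from dataclasses import dataclass, field

# Minimal sketch of the OAIS (ISO 14721) package flow: SIP -> AIP -> DIP.
# Metadata fields here are hypothetical, not the center's real standard.
@dataclass
class Package:
    kind: str                 # "SIP", "AIP", or "DIP"
    content: str              # the electronic record itself
    metadata: dict = field(default_factory=dict)

def ingest(sip: Package) -> Package:          # SIP -> AIP on ingest
    meta = dict(sip.metadata, fixity="sha-256", preservation_level="full")
    return Package("AIP", sip.content, meta)

def disseminate(aip: Package) -> Package:     # AIP -> DIP for users
    meta = {k: v for k, v in aip.metadata.items() if k != "fixity"}
    return Package("DIP", aip.content, meta)

sip = Package("SIP", "contract.pdf", {"producer": "bank-x"})
aip = ingest(sip)
dip = disseminate(aip)
print(aip.kind, dip.kind)  # AIP DIP
```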

Analysis of the effect of long-term water supply improvement by the installation of sand dams in water scarce areas (물부족 지역에서 샌드댐 설치에 의한 장기 물공급 개선 효과 분석)

  • Chung, Il-Moon;Lee, Jeongwoo;Lee, Jeong Eun;Kim, Il-Hwan
    • Journal of Korea Water Resources Association
    • /
    • Vol.55 No.12
    • /
    • pp.999-1009
    • /
    • 2022
  • The Mullori area of Chuncheon is underserved in terms of water welfare, with no local waterworks; water is supplied to the village by a small-scale water supply facility that uses groundwater as its source. To solve the problem of water shortage during drought and to prepare for increasing water demand, a sand dam was installed near the valley stream, and this facility has been operating since May 2022. In this study, to evaluate the reliability of water supply assuming sand dam operation during past droughts, groundwater runoff simulated with MODFLOW was used to generate inflow data for 2011-2020, an unmeasured period. After performing SWAT-K basin hydrologic modeling for the watershed upstream of the existing water intake and the sand dam, the groundwater runoff was calculated, and the ratio of the monthly groundwater runoff in each of the previous 10 years to the monthly groundwater runoff in 2021 was obtained. By applying this ratio to the 2021 inflow time series, historical inflow data for 2011-2020 were generated. Analyzing water supply availability during past extreme droughts for three demand cases of 20 m3/day, 50 m3/day, and 100 m3/day, we confirmed that the reliability of water supply increases with the installation of the sand dam. For the 100 m3/day case, reliability exceeded 90% only when the existing water intake and the sand dam were operated in conjunction. All three operating conditions were evaluated to satisfy a demand of 50 m3/day or more at 95% supply reliability, and of 30 m3/day or more at 99% reliability.
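The supply-reliability metric used here (the fraction of time steps on which demand is fully met) can be illustrated with a simple daily mass balance in which sand-dam storage carries wet-day surplus into dry days. The inflow series and capacity below are hypothetical, not the paper's data.

```python
# Illustrative mass-balance sketch of how sand-dam storage raises supply
# reliability: each day demand is drawn from inflow plus storage, surplus
# refills storage up to capacity, and reliability is the fraction of days
# on which demand is fully met. All numbers are hypothetical.
def supply_reliability(inflows, demand, capacity):
    storage, met = 0.0, 0
    for q in inflows:
        available = q + storage
        if available >= demand:
            met += 1
            storage = min(available - demand, capacity)
        else:
            storage = 0.0
    return met / len(inflows)

# alternating wet (150 m3/day) and dry (10 m3/day) days, demand 50 m3/day
inflows = [150, 10] * 182
print(supply_reliability(inflows, 50, 0))    # no storage: 0.5
print(supply_reliability(inflows, 50, 100))  # with storage: 1.0
```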

Basic Research on the Possibility of Developing a Landscape Perceptual Response Prediction Model Using Artificial Intelligence - Focusing on Machine Learning Techniques - (인공지능을 활용한 경관 지각반응 예측모델 개발 가능성 기초연구 - 머신러닝 기법을 중심으로 -)

  • Kim, Jin-Pyo;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • Vol.51 No.3
    • /
    • pp.70-82
    • /
    • 2023
  • The recent surge of IT and data acquisition is shifting paradigms in all aspects of life, and these advances are also affecting academic fields, where research topics and methods are being improved through academic exchange and connections. In particular, data-based research methods are being employed in various academic fields, including landscape architecture, where continuous research is needed. This study therefore investigates the possibility of developing a landscape preference evaluation and prediction model using machine learning, a branch of artificial intelligence. To achieve this goal, machine learning techniques were applied to the landscape field to build a landscape preference evaluation and prediction model and to verify the model's simulation accuracy. Wind power facility landscapes, which have recently attracted attention with the rise of renewable energy, were selected as the research objects. Images of wind power facility landscapes were collected using web crawling techniques, and an analysis dataset was built. Orange version 3.33, a program from the University of Ljubljana, was used for the machine learning analysis to derive a high-performing prediction model. A model that integrates the evaluation criteria and a separate model structure for each evaluation criterion were used to generate models with the kNN, SVM, Random Forest, Logistic Regression, and Neural Network algorithms, which suit machine learning classification problems. The generated models were then evaluated to derive the most suitable prediction model. The prediction model derived in this study evaluates three criteria separately (classification by landscape type, classification by distance between the landscape and the target, and classification by preference) and then synthesizes and predicts the results.
The study produced a prediction model with high accuracy: 0.986 for the landscape-type criterion, 0.973 for the distance criterion, and 0.952 for the preference criterion, and verification through evaluation of the data prediction results exceeded the model's required performance values. As an experimental attempt to investigate the possibility of developing a prediction model using machine learning in landscape-related research, this study confirmed that a high-performance prediction model can be created by building a dataset through the collection and refinement of image data and subsequently utilizing it in landscape-related research fields. Based on the results, implications, and limitations of this study, it should be possible to develop various types of landscape prediction models, including for wind power facility, natural, and cultural landscapes. Machine learning techniques can become more useful and valuable in landscape architecture by exploring research methods appropriate to each topic: for example, reducing data classification time with a model that classifies images by landscape type, or analyzing the importance of landscape planning factors through machine learning analysis of landscape prediction factors.
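The classification workflow described above (images represented as feature vectors and labeled by a classifier such as kNN) can be sketched with a toy nearest-neighbour example. Real landscape images would be represented by learned or hand-crafted feature vectors; the 2-D points and labels below are purely illustrative, not the study's data.

```python
import math

# Toy 1-nearest-neighbour sketch of the classification step: each image
# is a feature vector, and a new image takes the label of its closest
# training example. Points and labels here are hypothetical.
train = [((0.9, 0.1), "wind-facility"),
         ((0.8, 0.2), "wind-facility"),
         ((0.1, 0.9), "natural"),
         ((0.2, 0.8), "natural")]

def classify(x):
    return min(train, key=lambda t: math.dist(t[0], x))[1]

print(classify((0.85, 0.15)))  # wind-facility
print(classify((0.15, 0.85)))  # natural
```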

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • Vol.26 No.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have addressed predicting the success of campaigns for customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded with the rapid revitalization of online business, companies carry out campaigns of various types at a scale incomparable to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From a corporate standpoint, the effectiveness of campaigns is also decreasing: the cost invested in campaigns rises while actual campaign success rates remain low. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system has the ultimate purpose of increasing the success rate of campaigns by collecting and analyzing customer-related data and using it for targeting. In particular, recent attempts have been made to predict campaign responses using machine learning. Because campaign data have many features, selecting appropriate ones is very important. If all input data are used to classify a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the full data. In addition, when a model is trained with too many features, prediction accuracy may degrade due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they suffer from poor classification-prediction performance and long learning times. Therefore, in this study we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method in the search for feature subsets, which underpins machine learning model performance, by using statistical characteristics of the data processed in the campaign system. Features with strong influence on performance are derived first, features with a negative effect are removed, and the sequential method is then applied, increasing search efficiency and enabling generalized prediction through the improved algorithm. We confirmed that the proposed model shows better search and prediction performance than the traditional greedy algorithms: compared with the original data set, a greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE), campaign success prediction was higher. In addition, when predicting campaign success, the improved feature selection algorithm helped analyze and interpret the prediction results by providing the importance of the derived features. These include features such as age, customer rating, and sales, which were already known statistically to be important.
Unlike what previous campaign planners relied on, features such as the combined product name, the average three-month data consumption rate, and wireless data usage over the last three months were unexpectedly selected as important features for campaign response, although planners had rarely used them to select campaign targets. It was confirmed that base attributes can also be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
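The sequential forward selection baseline that the proposed algorithm improves on can be sketched as follows: greedily add the feature that most improves a score until no remaining feature helps. The scoring function below is a hypothetical stand-in for cross-validated model accuracy, and the feature names are illustrative.

```python
# Minimal sketch of sequential forward selection (SFS), the baseline
# family (SFS/SBS/SFFS) discussed above: greedily add the feature that
# most improves a score until no candidate improves it.
def sfs(features, score):
    selected = []
    while True:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        best = max(candidates, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining feature improves the score
        selected.append(best)
    return selected

# hypothetical additive score: "age" and "rating" help, "noise" hurts
def score(subset):
    gain = {"age": 0.3, "rating": 0.2, "noise": -0.1}
    return sum(gain[f] for f in subset)

print(sfs(["age", "rating", "noise"], score))  # ['age', 'rating']
```

SFFS extends this by also trying to drop previously selected features after each addition (the "floating" step), which this sketch omits.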