• Title/Summary/Keyword: modeling system


Dynamic Traffic Assignment Using Genetic Algorithm (유전자 알고리즘을 이용한 동적통행배정에 관한 연구)

  • Park, Kyung-Chul;Park, Chang-Ho;Chon, Kyung-Soo;Rhee, Sung-Mo
    • Journal of Korean Society for Geospatial Information Science / v.8 no.1 s.15 / pp.51-63 / 2000
  • Dynamic traffic assignment (DTA) has been a topic of substantial research during the past decade. While DTA is gradually maturing, many aspects of DTA still need improvement, especially regarding its formulation and solution algorithms. Recently, with its promise for ITS (Intelligent Transportation System) and GIS (Geographic Information System) applications, DTA has received increasing attention. This potential also implies higher requirements for DTA modeling, especially regarding its solution efficiency for real-time implementation. However, DTA poses many mathematical difficulties in the search process due to the complexity of its spatial and temporal variables. Although many solution algorithms have been studied, conventional methods cannot find the solution when the objective function or the constraints are not convex. In this paper, a genetic algorithm is applied to find the solution of DTA, and the Merchant-Nemhauser model is used as the DTA model because it has a nonconvex constraint set. To handle the nonconvex constraint set, the GENOCOP III system, a variant of the genetic algorithm, is used in this study. Results for a sample network are compared with the results of a conventional method.
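The GENOCOP III system used in the paper handles nonconvex constraints through a co-evolutionary repair scheme, which is not reproduced here. The following is a minimal, hedged sketch of a real-coded genetic algorithm with a penalty term for constraint violations, illustrating the general idea of evolving real-valued decision variables toward a feasible low-cost solution; the objective, constraint, and bounds are placeholders, not the Merchant-Nemhauser formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder problem: minimize a nonconvex objective subject to one constraint.
# Neither function is the Merchant-Nemhauser model; they only stand in for it.
def objective(x):
    return np.sum(x**2) + 5.0 * np.sin(3.0 * x).sum()

def constraint_violation(x):            # g(x) <= 0 means "feasible"
    return max(0.0, np.sum(x) - 10.0)

def fitness(x, penalty=1e3):
    return objective(x) + penalty * constraint_violation(x)

def real_coded_ga(dim=8, pop_size=60, generations=200,
                  lower=0.0, upper=5.0, mut_sigma=0.2):
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection between random pairs
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(scores[idx[:, 0]] < scores[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Arithmetic (blend) crossover between consecutive parents
        alpha = rng.uniform(size=(pop_size, dim))
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation, clipped to the box bounds
        children += rng.normal(0.0, mut_sigma, size=children.shape)
        children = np.clip(children, lower, upper)
        # Elitism: keep the best individual found so far
        children[0] = pop[np.argmin(scores)]
        pop = children
    best = pop[np.argmin([fitness(ind) for ind in pop])]
    return best, objective(best), constraint_violation(best)

best_x, best_obj, violation = real_coded_ga()
print(best_obj, violation)
```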


Performance Estimation of Large-scale High-sensitive Compton Camera for Pyroprocessing Facility Monitoring (파이로 공정 모니터링용 대면적 고효율 콤프턴 카메라 성능 예측)

  • Kim, Young-Su;Park, Jin Hyung;Cho, Hwa Youn;Kim, Jae Hyeon;Kwon, Heungrok;Seo, Hee;Park, Se-Hwan;Kim, Chan Hyeong
    • Journal of Radiation Protection and Research / v.40 no.1 / pp.1-9 / 2015
  • Compton cameras overcome several limitations of conventional gamma imaging devices based on mechanical collimation, such as pin-hole imaging devices, owing to their electronic collimation based on coincidence logic. In particular, a large-scale Compton camera offers a wide field of view and high imaging sensitivity. These merits suggest that a large-scale Compton camera might be applicable to monitoring nuclear materials in large facilities where portability is not required. To that end, our research group has made an effort to design a large-scale Compton camera for safeguards applications. The energy resolution and position resolution of large-area detectors vary with the configuration of the detectors, and these performance parameters directly affect the image quality of the large-scale Compton camera. In the present study, a series of Geant4 Monte Carlo simulations were performed to examine the effect of these detector parameters. The performance of the designed large-scale Compton camera was also estimated for various monitoring conditions with realistic modeling. The conclusion of the present study indicates that the energy resolution of the component detectors, rather than their position resolution, is the limiting factor for imaging resolution. In addition, the designed large-scale Compton camera provides a 16.3 cm image resolution in full width at half maximum (angular resolution: 9.26°) for the depleted uranium source considered in this study, located 1 m from the system, when the component detectors have 10% energy resolution and 7 mm position resolution.
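As a quick consistency check on the reported numbers (not taken from the paper itself), the image FWHM at a given source distance follows from the angular resolution by simple geometry; the small sketch below reproduces the roughly 16 cm figure from the 9.26° angular resolution at 1 m.

```python
import math

# Convert an angular resolution (FWHM, degrees) into a spatial image
# resolution (FWHM) at a given source distance, using plain geometry.
def image_fwhm(angular_fwhm_deg, distance_m):
    half_angle = math.radians(angular_fwhm_deg / 2.0)
    return 2.0 * distance_m * math.tan(half_angle)

# 9.26 deg at 1 m -> about 0.162 m, consistent with the reported 16.3 cm.
print(f"{image_fwhm(9.26, 1.0) * 100:.1f} cm")
```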

Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In;PARK, Choong-Shik
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.53-65 / 2019
  • Object tracking is one of the important steps in building video-based surveillance systems, and it is considered an essential task alongside object detection and recognition. To perform object tracking, various machine learning methods (e.g., least squares, the perceptron, and the support vector machine) can be applied in different designs of tracking systems. In general, generative methods (e.g., principal component analysis) have been utilized due to their simplicity and effectiveness. However, generative methods focus only on modeling the target object. Due to this limitation, discriminative methods (e.g., binary classification) were adopted to distinguish the target object from the background. Among the machine learning methods for binary classification, total error rate minimization has been one of the successful approaches. Total error rate minimization can achieve a global minimum owing to a quadratic approximation of the step function, whereas other methods (e.g., the support vector machine) seek local minima using nonlinear surrogates (e.g., the hinge loss function). Due to this quadratic approximation, total error rate minimization has desirable properties in solving the optimization problem for binary classification. However, total error rate minimization was originally formulated in a batch-mode setting. A batch-mode setting is limited to offline learning applications, and with limited computing resources offline learning cannot handle large-scale data sets. Compared to offline learning, online learning can update its solution without storing all training samples during the learning process. With the growth of large-scale data sets, online learning has become essential for various applications. Since object tracking must handle data samples in real time, online learning based total error rate minimization methods are necessary to address object tracking problems efficiently. To meet this need, an online learning based total error rate minimization method was previously developed, but it relied on an approximately reweighted technique. Although the approximation is used, this online version of total error rate minimization achieved good performance in biometric applications. However, the method assumes that total error rate minimization is achieved only asymptotically, as the number of training samples goes to infinity. Under this assumption, the approximation error can continuously accumulate as the training samples increase, so the approximated online solution can drift toward a wrong solution, which can cause significant errors when applied to surveillance systems. In this paper, we propose an exactly reweighted technique that recursively updates the solution of total error rate minimization in an online manner. In contrast to the approximately reweighted online total error rate minimization, an exactly reweighted online total error rate minimization is achieved. The proposed exact online learning method based on total error rate minimization is then applied to object tracking problems. In our object tracking system, particle filtering is adopted, and the observation model combines generative and discriminative methods to leverage the advantages of both properties. In our experiments, the proposed object tracking system achieves promising performance on 8 public video sequences compared with competing object tracking systems, and the paired t-test is reported to assess the quality of the results. The proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks. Moreover, online learning methods that require an exact reweighting process can use the proposed reweighting technique. In addition to object tracking, the proposed online learning method can easily be applied to object detection and recognition. Therefore, the proposed methods can contribute to the online learning community as well as the object tracking, detection, and recognition communities.
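The paper's exactly reweighted total-error-rate update is not reproduced here. As a hedged illustration of the general idea of an exact (rather than approximate) recursive update, the sketch below shows recursive least squares via the Sherman-Morrison identity, where each online step matches the batch solution over the data seen so far without storing past samples; the class name and regularization are placeholders.

```python
import numpy as np

# Minimal sketch (not the paper's TER formulation): exact recursive
# least squares via the Sherman-Morrison identity, illustrating how an
# online solution can equal the batch solution without storing samples.
class ExactOnlineLS:
    def __init__(self, dim, reg=1e-3):
        self.w = np.zeros(dim)          # weight vector
        self.P = np.eye(dim) / reg      # inverse of regularized Gram matrix

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)         # gain vector
        self.w += k * (y - x @ self.w)  # exact correction, no approximation
        self.P -= np.outer(k, Px)       # rank-one update of the inverse

    def score(self, x):
        return float(np.asarray(x, dtype=float) @ self.w)

# Usage: feed labeled feature vectors one at a time, then score candidates.
model = ExactOnlineLS(dim=3)
model.update([1.0, 0.2, -0.5], y=+1.0)
model.update([0.1, 0.9, 0.3], y=-1.0)
print(model.score([0.5, 0.5, 0.0]))
```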

Optimal Monetary Policy System for Both Macroeconomics and Financial Stability (거시경제와 금융안정을 종합 고려한 최적 통화정책체계 연구)

  • Joonyoung Hur;Hyoung Seok Oh
    • KDI Journal of Economic Policy / v.46 no.1 / pp.91-129 / 2024
  • The Bank of Korea, through a legal amendment in 2011 following the financial crisis, was entrusted with the additional responsibility of financial stability beyond its existing mandate of price stability. Since then, concerns have been raised about the prolonged increase in household debt relative to income conditions, which could constrain consumption and growth and increase the possibility of a crisis in the event of negative economic shocks. The current accumulation of financial imbalances suggests a critical period in which the government and the central bank must be more vigilant, so that these imbalances do not impede the stable functioning of the financial and economic systems. This study examines the applicability of the Integrated Inflation Targeting (IIT) framework proposed by the Bank for International Settlements (BIS) for macro-financial stability in promoting long-term economic stability. Using VAR models, the study reveals a clear increase in risk appetite following interest rate cuts after the financial crisis, leading to a rise in household debt. Additionally, analyzing the central bank's conduct of monetary policy from 2000 to 2021 through DSGE models indicates that the Bank of Korea has operated with a form of IIT, considering both inflation and growth in its policy decisions, with some responsiveness to the increase in household debt. However, the estimated high interest rate smoothing coefficient suggests a cautious approach to interest rate adjustments. Furthermore, estimating the optimal interest rate rule that minimizes the central bank's loss function reveals that a policy which considers inflation and growth while remaining mindful of household credit conditions is superior. In other words, actively adjusting the benchmark interest rate in response to changes in economic conditions, and paying attention to household credit when household debt is rising rapidly relative to income, is found to be a desirable policy approach. Based on these findings, we conclude that the integrated inflation targeting framework proposed by the BIS could be considered as an alternative policy system that supports the stable growth of the economy in the medium to long term.
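The paper's estimated rule is not reproduced here. As a hedged illustration of what an IIT-style interest rate rule with smoothing and a household-credit term can look like, a generic Taylor-type specification is sketched below; the coefficient symbols are placeholders, not the paper's estimates.

```latex
% A generic smoothed Taylor-type rule augmented with a household-credit term
% (illustrative form only; coefficients are placeholders, not estimates):
%   i_t  policy rate,  \pi_t  inflation gap,  y_t  output gap,
%   b_t  household-credit gap,  \rho  smoothing,  \varepsilon_t  policy shock.
\[
  i_t \;=\; \rho\, i_{t-1}
      \;+\; (1-\rho)\left(\phi_{\pi}\,\pi_t \;+\; \phi_{y}\, y_t
      \;+\; \phi_{b}\, b_t\right) \;+\; \varepsilon_t
\]
```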

Uranium Adsorption Properties and Mechanisms of the WRK Bentonite at Different pH Condition as a Buffer Material in the Deep Geological Repository for the Spent Nuclear Fuel (사용후핵연료 심지층 처분장의 완충재 소재인 WRK 벤토나이트의 pH 차이에 따른 우라늄 흡착 특성과 기작)

  • Yuna Oh;Daehyun Shin;Danu Kim;Soyoung Jeon;Seon-ok Kim;Minhee Lee
    • Economic and Environmental Geology / v.56 no.5 / pp.603-618 / 2023
  • This study focused on evaluating the suitability of the WRK (waste repository Korea) bentonite as a buffer material in the SNF (spent nuclear fuel) repository. The U (uranium) adsorption/desorption characteristics and the adsorption mechanisms of the WRK bentonite were presented through various analyses, adsorption/desorption experiments, and kinetic adsorption modeling at various pH conditions. Mineralogical and structural analyses confirmed that the major mineral of the WRK bentonite is Ca-montmorillonite, which has great potential for U adsorption. From the results of the U adsorption/desorption experiments (initial U concentration: 1 mg/L) for the WRK bentonite, despite the low WRK bentonite/U ratio (2 g/L), high U adsorption efficiency (>74%) and a low U desorption rate (<14%) were obtained at pH 5, 6, 10, and 11 in solution, supporting that the WRK bentonite can be used as a buffer material preventing U migration in the SNF repository. Relatively low U adsorption efficiency (<45%) for the WRK bentonite was obtained at pH 3 and 7, because U exists as various species in solution depending on pH and thus the U adsorption mechanisms differ according to the U speciation. Based on the experimental results and previous studies, the main U adsorption mechanisms of the WRK bentonite were interpreted from the viewpoint of chemical adsorption. Under acidic conditions (pH < 3), U tends to be adsorbed in the form of UO₂²⁺, mainly through ionic bonding with Si-O or Al-O(OH) sites on the WRK bentonite rather than through ion exchange with Ca²⁺ in the interlayers of the WRK bentonite, showing relatively low U adsorption efficiency. Under alkaline conditions (pH > 7), U can be adsorbed in the form of anionic U-hydroxy complexes (UO₂(OH)₃⁻, UO₂(OH)₄²⁻, (UO₂)₃(OH)₇⁻, etc.), mainly by bonding with oxygen (O⁻) from Si-O or Al-O(OH) on the WRK bentonite or by co-precipitation in the form of hydroxides, showing high U adsorption. At pH 7, a relatively low U adsorption efficiency (42%) was obtained in this study due to the existence of U-carbonate complexes in solution, which have relatively higher solubility than other U species. The U adsorption efficiency of the WRK bentonite can be increased by maintaining a neutral or highly alkaline condition, which favors the formation of U-hydroxy complexes rather than the uranyl ion (UO₂²⁺) in solution, and by suppressing the formation of U-carbonate complexes in solution.
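The abstract mentions kinetic adsorption modeling without specifying the model. Purely as a hedged illustration, the sketch below fits a pseudo-second-order kinetic form, one of the models commonly used for adsorption data, to hypothetical uptake values; the data points and fitted constants are invented for the example and are not the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-second-order kinetic form, q(t) = qe^2 * k * t / (1 + qe * k * t),
# used here only as an illustration; the paper's actual kinetic model and
# data are not reproduced.
def pseudo_second_order(t, qe, k):
    return (qe**2 * k * t) / (1.0 + qe * k * t)

# Hypothetical uptake data: time (h) and adsorbed U per unit mass (mg/g).
t_obs = np.array([0.5, 1, 2, 4, 8, 16, 24], dtype=float)
q_obs = np.array([0.11, 0.18, 0.26, 0.31, 0.35, 0.36, 0.37])

(qe_fit, k_fit), _ = curve_fit(pseudo_second_order, t_obs, q_obs,
                               p0=(0.4, 1.0), maxfev=10000)
print(f"q_e ~ {qe_fit:.3f} mg/g, k ~ {k_fit:.3f} g/(mg*h)")
```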

A Comparative Analysis of Social Commerce and Open Market Using User Reviews in Korean Mobile Commerce (사용자 리뷰를 통한 소셜커머스와 오픈마켓의 이용경험 비교분석)

  • Chae, Seung Hoon;Lim, Jay Ick;Kang, Juyoung
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.53-77 / 2015
  • Mobile commerce provides a convenient shopping experience in which users can buy products without the constraints of time and space. Mobile commerce has already set off a mega trend in Korea, with the market size estimated at approximately 15 trillion won (KRW) for 2015 thus far. In the Korean market, social commerce and the open market are the key components. Social commerce overwhelmingly leads the open market in terms of the number of users in the Korean mobile commerce market. From the point of view of the industry, quick market entry and content curation are considered to be the major success factors, reflecting the rapid growth of social commerce in the market. However, empirical research and analysis by academics to explain the success of social commerce is still insufficient. Henceforward, social commerce and the open market in Korean mobile commerce are expected to compete intensively, so it is important to conduct an empirical analysis that identifies the differences in user experience between social commerce and the open market. This paper is an exploratory study that presents a comparative analysis of social commerce and the open market regarding user experience, based on mobile users' reviews. First, this study collected approximately 10,000 user reviews of social commerce and open market applications listed on Google Play. The collected mobile user reviews were classified into topics, such as perceived usefulness and perceived ease of use, through LDA topic modeling. Then, a sentiment analysis and a co-occurrence analysis were conducted on the topics of perceived usefulness and perceived ease of use. The study's results demonstrate that social commerce users have a more positive experience in terms of service usefulness and convenience than open market users in the mobile commerce market. Social commerce has provided positive user experiences to mobile users in service areas such as 'delivery,' 'coupon,' and 'discount,' while the open market has faced user complaints about technical problems and inconveniences such as 'login error,' 'view details,' and 'stoppage.' This result shows that social commerce performs well in terms of user service experience, owing to the aggressive marketing campaigns conducted and the investments made in building logistics infrastructure. However, the open market still has mobile optimization problems, since it has not yet resolved user complaints and inconveniences arising from technical problems. This study presents an exploratory research method for analyzing user experience through an empirical approach to user reviews. In contrast to previous studies, which conducted surveys to analyze user experience, this study uses an empirical analysis of user reviews that reflects users' vivid and actual experiences. Specifically, by using an LDA topic model and TAM, this study presents a methodology for analyzing user reviews that is effective because it divides the reviews into service areas and technical areas from a new perspective. The methodology of this study has not only demonstrated the differences in user experience between social commerce and the open market, but has also provided a deep understanding of user experience in Korean mobile commerce. In addition, the results of this study have important implications for social commerce and the open market, showing that user insights can be utilized in establishing competitive and groundbreaking strategies in the market. The limitations and research directions for follow-up studies are as follows. A follow-up study will require a more elaborate text analysis technique; this study could not fully clean the user reviews, which inherently contain typos and mistakes. This study has shown that user reviews are an invaluable source for analyzing user experience, and its methodology can be expected to further expand comparative research of services using user reviews. Even at this moment, users around the world are posting reviews about their service experiences after using mobile game, commerce, and messenger applications.
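The paper's exact preprocessing and topic settings are not given in the abstract. The sketch below is a hedged, minimal example of LDA topic modeling over tokenized review texts with gensim; the number of topics and the toy reviews are chosen purely for illustration.

```python
from gensim import corpora, models

# Toy tokenized reviews standing in for the roughly 10,000 collected reviews.
reviews = [
    ["delivery", "fast", "coupon", "discount", "good"],
    ["login", "error", "app", "stoppage", "inconvenient"],
    ["discount", "coupon", "delivery", "satisfied"],
    ["view", "details", "error", "slow", "login"],
]

dictionary = corpora.Dictionary(reviews)                 # token -> id mapping
corpus = [dictionary.doc2bow(tokens) for tokens in reviews]

# Two topics only for illustration (e.g., service areas vs. technical issues).
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=20, random_state=0)

for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```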

Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.101-124 / 2018
  • Recently, most technologies have been developed in various forms through the advancement of a single technology or through interaction with other technologies. In particular, these technologies have the characteristic of convergence caused by the interaction between two or more techniques. In addition, efforts to respond to technological changes in advance are continuously increasing, notably by forecasting promising convergence technologies that will emerge in the near future. In line with this phenomenon, many researchers are attempting various analyses for forecasting promising convergence technologies. A convergence technology carries the characteristics of the various technologies from which it is generated. Therefore, forecasting promising convergence technologies is much more difficult than forecasting general technologies with high growth potential. Nevertheless, some achievements have been confirmed in attempts to forecast promising technologies using big data analysis and social network analysis. Studies of convergence technology through data analysis are actively conducted with the themes of discovering new convergence technologies and analyzing their trends. Accordingly, information about new convergence technologies is being provided more abundantly than in the past. However, existing methods for analyzing convergence technology have some limitations. Firstly, most studies dealing with convergence technology analyze data through predefined technology classifications. Recently emerging technologies tend to have convergence characteristics and thus consist of technologies from various fields; in other words, new convergence technologies may not belong to the predefined classification. Therefore, the existing methods do not properly reflect the dynamic change of the convergence phenomenon. Secondly, in order to forecast promising convergence technologies, most existing analysis methods use general-purpose indicators. Such methods do not fully utilize the specificity of the convergence phenomenon. A new convergence technology is highly dependent on the existing technologies from which it originates; depending on changes in those technologies, it can grow into an independent field or disappear rapidly. In the existing analyses, the potential growth of a convergence technology is judged through traditional indicators designed for general purposes. However, these indicators do not reflect the principle of convergence, namely that new technologies emerge from two or more mature technologies and that grown technologies in turn affect the creation of other technologies. Thirdly, previous studies do not provide objective methods for evaluating the accuracy of models that forecast promising convergence technologies. In studies of convergence technology, forecasting promising technologies has been addressed relatively little due to the complexity of the field, so it is difficult to find a method to evaluate the accuracy of a model that forecasts promising convergence technologies. In order to activate the field of forecasting promising convergence technologies, it is important to establish a method for objectively verifying and evaluating the accuracy of the model proposed by each study. To overcome these limitations, we propose a new method for the analysis of convergence technologies. First of all, through topic modeling, we derive a new technology classification in terms of text content. It reflects the dynamic change of the actual technology market, not the existing fixed classification standard. In addition, we identify the influence relationships between technologies through the topic correspondence weights of each document and structure them into a network. Furthermore, we devise a centrality indicator (PGC, potential growth centrality) to forecast the future growth of a technology by utilizing the centrality information of each technology. It reflects the convergence characteristics of each technology according to technology maturity and the interdependence between technologies. Along with this, we propose a method to evaluate the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality over time. In this paper, we conduct experiments with 13,477 patent documents dealing with technical contents to evaluate the performance and practical applicability of the proposed method. As a result, it is confirmed that the forecast model based on the centrality indicator of the proposed method achieves a forecast accuracy up to about 2.88 times higher than that of the forecast model based on currently used network indicators.
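The paper's PGC (potential growth centrality) formula is not given in the abstract and is not reproduced here. The hedged sketch below only illustrates the structural setup the abstract describes: building a directed, weighted technology-influence network from topic correspondence weights and computing standard centrality indicators of the kind the proposed PGC is compared against; the edge weights are invented placeholders.

```python
import networkx as nx

# Hypothetical topic-to-topic influence weights derived from documents'
# topic correspondence (values are placeholders, not the paper's data).
influence_edges = [
    ("topic_A", "topic_B", 0.42),
    ("topic_A", "topic_C", 0.17),
    ("topic_B", "topic_C", 0.61),
    ("topic_C", "topic_D", 0.35),
    ("topic_B", "topic_D", 0.08),
]

G = nx.DiGraph()
G.add_weighted_edges_from(influence_edges)

# Standard centrality indicators often used as baselines for such networks.
in_deg = dict(G.in_degree(weight="weight"))    # how much a topic is influenced
out_deg = dict(G.out_degree(weight="weight"))  # how much a topic influences others
pagerank = nx.pagerank(G, weight="weight")

for node in G.nodes:
    print(node, round(in_deg[node], 2), round(out_deg[node], 2),
          round(pagerank[node], 3))
```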

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.125-148 / 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields. In addition, many analysts are interested in text because the amount of data is very large and it is relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, document classification, which classifies documents into predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which summarizes the main contents of one or several documents, have been actively studied. In particular, the text summarization technique is actively applied in business through news summary services, privacy policy summary services, etc. In addition, much research has been done in academia on both the extraction approach, which selectively provides the main elements of the document, and the abstraction approach, which extracts the elements of the document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not made much progress compared to techniques for automatic text summarization. Most existing studies dealing with the quality evaluation of summarization manually summarize documents, use them as reference documents, and measure the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text through various techniques, and the result is compared with the reference document, which is an ideal summary, to measure the quality of the automatic summarization. Reference documents are provided in two major ways; the most common way is manual summarization, in which a person creates an ideal summary by hand. Since this method requires human intervention in the process of preparing the summary, it takes a lot of time and cost to write the summary, and there is a limitation that the evaluation result may differ depending on the subjectivity of the summarizer. Therefore, in order to overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. As a representative attempt to overcome these limitations, a method has recently been devised that reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary. In this method, the more frequently the terms of the full text appear in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing content omissions, it is unreasonable to say that a "good summary" based only on term frequency is always a "good summary" in the essential sense. In order to overcome the limitations of this previous approach to summarization evaluation, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little content is duplicated among the sentences of the summary, and completeness is defined as an element indicating how little of the original content is missing from the summary. In this paper, we propose a method for the automatic quality evaluation of text summarization based on the concepts of succinctness and completeness. In order to evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor's hotel reviews, the reviews were summarized for each hotel, and the results of experiments evaluating the quality of the summaries in accordance with the proposed methodology are presented. We also provide a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and propose a method to perform optimal summarization by changing the threshold of sentence similarity.
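The exact definitions of completeness and succinctness in the paper depend on its sentence-similarity measure, which the abstract does not spell out. The sketch below only shows the harmonic-mean (F-score) combination of the two scores that the abstract mentions, with the component scores passed in as already-computed numbers.

```python
# Combine completeness and succinctness, which trade off against each other,
# into a single F-score (harmonic mean), analogous to precision/recall.
def summary_f_score(completeness: float, succinctness: float) -> float:
    if completeness <= 0.0 or succinctness <= 0.0:
        return 0.0
    return 2.0 * completeness * succinctness / (completeness + succinctness)

# Example: a summary that covers most content but repeats itself a little.
print(summary_f_score(completeness=0.90, succinctness=0.75))  # ~0.818
```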

A Study of Feasibility of Dipole-dipole Electric Method to Metallic Ore-deposit Exploration in Korea (국내 금속광 탐사를 위한 쌍극자-쌍극자 전기탐사의 적용성 연구)

  • Min, Dong-Joo;Jung, Hyun-Key;Park, Sam-Gyu;Chon, Hyo-Taek;Kwak, Na-Eun
    • Geophysics and Geophysical Exploration / v.11 no.3 / pp.250-262 / 2008
  • In order to assess the feasibility of the dipole-dipole electric method for the investigation of metallic ore deposits, both field data simulation and inversion are carried out for several simplified ore deposit models. Our interest is in a vein-type model, because most of the ore deposits in Korea (more than 70%) occur as veins. Based on the fact that the width of vein-type ore deposits ranges from tens of centimeters to 2 m, we vary the width and the material properties of the vein, and we use a 40 m electrode spacing for our tests. For a vein-type model whose width is too small, the low-resistivity zone is not detected, even though the resistivity of the vein amounts to 1/300 of that of the surrounding rock. Considering the wide electrode interval and the cell size used in the inversion, it is natural that the size of the low-resistivity zone is overestimated. We also perform field data simulation and inversion for a vein-type model with surrounding hydrothermal alteration zones, which is a typical structure in epithermal ore deposits. In the model, the material properties are assumed on the basis of resistivity values directly observed in a mine originating from an epithermal ore deposit. From this simulation, we also note that the high resistivity value of the vein does not affect the results when the width of the vein is narrow. This indicates that our main target in field surveys should be the surrounding hydrothermal alteration zones rather than the veins themselves. From these results, we can summarize that when the vein lies at depth and the resistivity contrast between the vein and the surrounding rock is not large enough, the electric method performed at the surface cannot detect the low-resistivity zone and may lead to an incorrect interpretation of the subsurface structures. Although this work is somewhat simplified, it can be used as a reference for field survey design and field data interpretation. If we perform field data simulation and inversion for a larger number of models and provide further references, they will be helpful in real field surveys and interpretation.
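For reference (standard dipole-dipole geometry, not specific to this paper), the apparent resistivity for a dipole-dipole array with dipole length a and separation factor n is obtained from the measured voltage and injected current through the geometric factor; a minimal sketch follows, with illustrative voltage and current values.

```python
import math

# Standard dipole-dipole array: geometric factor K = pi * n * (n+1) * (n+2) * a,
# apparent resistivity rho_a = K * (delta_V / I).
def apparent_resistivity(delta_v, current, a, n):
    k = math.pi * n * (n + 1) * (n + 2) * a
    return k * delta_v / current

# Example with the 40 m electrode spacing mentioned in the abstract
# (the voltage and current values here are illustrative only).
print(apparent_resistivity(delta_v=0.012, current=0.5, a=40.0, n=3))
```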

Backward Path Tracking Control of a Trailer Type Robot Using an RCGA-Based Model (RCGA 기반의 모델을 이용한 트레일러형 로봇의 후방경로 추종제어)

  • Wi, Yong-Uk;Kim, Heon-Hui;Ha, Yun-Su;Jin, Gang-Gyu
    • Journal of Institute of Control, Robotics and Systems / v.7 no.9 / pp.717-722 / 2001
  • This paper presents a methodology for the backward path tracking control of a trailer-type robot which consists of two parts: a tractor and a trailer. It is difficult to control the motion of a trailer vehicle since its dynamics are non-holonomic. Therefore, in this paper, the modeling and parameter estimation of the system using a real-coded genetic algorithm (RCGA) are proposed, and a backward path tracking control algorithm is then derived based on the linearized model. Experimental results verify the effectiveness of the proposed method.
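The paper's identified model and controller are not reproduced here. As a hedged illustration of why backward motion of a tractor-trailer is hard to control, the sketch below integrates a standard non-holonomic tractor-trailer kinematic model (a common textbook form, not the paper's RCGA-identified model) for a constant reverse velocity and steering input, showing how the hitch angle drifts.

```python
import math

# Standard kinematic tractor-trailer model (textbook form, not the paper's
# RCGA-identified model). State: tractor position (x, y), tractor heading
# th0, trailer heading th1. L0: tractor wheelbase, L1: hitch-to-axle length.
def step(state, v, steer, L0=1.0, L1=1.5, dt=0.01):
    x, y, th0, th1 = state
    x += v * math.cos(th0) * dt
    y += v * math.sin(th0) * dt
    th0 += (v / L0) * math.tan(steer) * dt
    th1 += (v / L1) * math.sin(th0 - th1) * dt
    return (x, y, th0, th1)

# Reverse at constant speed with a small fixed steering angle: the hitch
# angle (th0 - th1) keeps growing in magnitude, illustrating the jackknifing
# tendency that makes backward path tracking difficult.
state = (0.0, 0.0, 0.0, 0.0)
for _ in range(500):                      # 5 seconds of simulated reversing
    state = step(state, v=-0.5, steer=0.05)
print("hitch angle [rad]:", round(state[2] - state[3], 3))
```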
