• Title/Summary/Keyword: Dynamic Decision-Making


A Study on the Application of an Urban Spatial Model Using SpatioTemporal GIS: Focusing on Population Distribution Modeling (SpatioTemporal GIS를 활용한 도시공간모형 적용에 관한 연구 / 인구분포모델링을 중심으로)

  • 남광우;이성호;김영섭;최철옹
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2002.03b
    • /
    • pp.127-141
    • /
    • 2002
  • Applying urban models in a GIS environment with socio-economic data cannot be done effectively with a snapshot model alone, one that stores the situation at a single point in time, because urban phenomena are complex and constantly changing. Moreover, the definitions of space, attributes, and time that GIS handles in applying an urban model can differ depending on the purpose of analysis, and different definitions can yield different results. This study constructed a Temporal GIS incorporating the time dimension in order to observe the dynamic change of population distribution in Busan over 30 years, and, by applying a population density model and an accessibility model to it, sought to show how GIS can be used to produce more efficient and varied results. Data processing for quantifying spatial phenomena and applying statistical techniques can introduce many errors. Resolving this requires, above all, data definitions suited to the purpose of analysis, validation of the usefulness of the model to be applied, an appropriate unit of analysis, and objective interpretation of results. In addition, a methodology for efficiently handling time-series data is needed to capture change over time. In short, to maximize the efficiency and effectiveness of applying urban models in a GIS environment, the data model and the spatial DB must be built to fit the purpose of analysis, the types of data that can be analyzed must be fully considered, and a cyclical decision-making process is needed that verifies in advance the factors that can significantly affect the analysis results.

  • PDF
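The abstract above does not specify which population density model was applied; one common choice in this kind of analysis is Clark's classic negative-exponential density model, D(r) = D0·exp(-b·r). A minimal sketch of fitting it by log-linear least squares, with invented illustrative data:

```python
import math

def fit_clark_model(distances, densities):
    """Fit ln(D) = ln(D0) - b*r by ordinary least squares.

    Returns (D0, b): central density and density gradient.
    """
    xs = distances
    ys = [math.log(d) for d in densities]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope and intercept of the least-squares line through (r, ln D).
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.exp(intercept), -slope

# Synthetic ring densities: D0 = 20,000 persons/km^2, gradient b = 0.3.
rings = [1, 3, 5, 8, 12, 17, 22, 30]
dens = [20000 * math.exp(-0.3 * r) for r in rings]
D0, b = fit_clark_model(rings, dens)
```

On exact synthetic data the fit recovers the parameters; with real census rings the residuals would indicate how well the exponential form holds for each time slice.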

GIS-based Spatial Zonations for Regional Estimation of Site-specific Seismic Response in Seoul Metropolis (대도시 서울에서의 부지고유 지진 응답의 지역적 예측을 위한 GIS 기반의 공간 구역화)

  • Sun, Chang-Guk;Chun, Sung-Ho;Chung, Choong-Ki
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.1C
    • /
    • pp.65-76
    • /
    • 2010
  • Recent earthquake events revealed that severe seismic damage was concentrated mostly at sites composed of soil sediments rather than firm rock. This indicates that the site effects inducing the amplification of earthquake ground motion are associated mainly with the spatial distribution and dynamic properties of the soils overlying bedrock. In this study, an integrated GIS-based information system for geotechnical data was constructed to establish a regional counterplan against ground motions in a representative metropolitan area, Seoul, Korea. To implement the GIS-based geotechnical information system for the Seoul area, existing geotechnical investigation data were collected in and around the study area, and a walkover site survey was additionally carried out to acquire surface geo-knowledge data. For practical application of the geotechnical information system in estimating site effects in the area of interest, seismic zoning maps of geotechnical earthquake engineering parameters, such as the depth to bedrock and the site period, were created and presented as a regional synthetic strategy for predicting earthquake-induced hazards. In addition, seismic zonation of site classification was performed to determine the site amplification coefficients for seismic design at any site and administrative sub-unit in the Seoul area. Based on the case study on seismic zonations for Seoul, it was verified that the GIS-based geotechnical information system is very useful for the regional prediction of seismic hazards and for decision support in seismic hazard mitigation, particularly in metropolitan areas.

Development of an Optimization Model and Algorithm Based on Transportation Problem with Additional Constraints (추가 제약을 갖는 수송문제를 활용한 공화차 배분 최적화 모형 및 해법 개발)

  • Park, Bum Hwan;Kim, Young-Hoon
    • Journal of the Korean Society for Railway
    • /
    • v.19 no.6
    • /
    • pp.833-843
    • /
    • 2016
  • Recently, in the field of rail freight transportation, the number of trains dedicated to shippers has been increasing. These dedicated trains, which run on the basis of a contract with shippers, had been restricted to the transportation of containers, the so-called block trains. Nowadays, such commodities have extended to cement, hard coal, etc. Most full freight cars are transported by dedicated trains, but for empty car distribution the efficiency still remains questionable because the distribution plan is developed manually by dispatchers. In this study, we investigated the distribution models delineated in the KTOCS system, which was developed by KORAIL, as well as mathematical models considered in the state of the art. The models are based on optimization models, especially the network flow model. Here we suggest a new optimization model within the framework of the column generation approach. The master problem can be formulated as a transportation problem with additional constraints, and it is improved by adding a new edge between a supply node and a demand node; this edge can be found using a simple shortest path in the time-space network. Finally, we applied our algorithm to the Korean freight train network and found that the total empty car kilometers decreased.
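The column-generation step described above finds the new supply-to-demand edge via a shortest path in the time-space network. A minimal sketch of that step with Dijkstra's algorithm on a toy network (the station@time node names and arc costs are invented for illustration, not KTOCS data):

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra on a dict-of-dicts graph; returns (cost, path)."""
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors backwards.
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

# Toy time-space network: "station@time" nodes, arc weights = empty-car km.
net = {
    "A@0": {"A@1": 0, "B@1": 40},   # waiting at A costs 0 km
    "A@1": {"B@2": 35},
    "B@1": {"C@2": 50},
    "B@2": {"C@3": 20},
    "C@2": {}, "C@3": {},
}
cost, path = shortest_path(net, "A@0", "C@3")
```

The cheapest route waits one period at A and then runs A→B→C for 55 km, beating the immediate 40+50 km departure; in the column-generation scheme this path would define the new edge added to the master problem.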

Pre-aggregation Index Method Based on the Spatial Hierarchy in the Spatial Data Warehouse (공간 데이터 웨어하우스에서 공간 데이터의 개념계층기반 사전집계 색인 기법)

  • Jeon, Byung-Yun;Lee, Dong-Wook;You, Byeong-Seob;Kim, Gyoung-Bae;Bae, Hae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.11
    • /
    • pp.1421-1434
    • /
    • 2006
  • Spatial data warehouses provide analytical information for decision support using SOLAP (Spatial On-Line Analytical Processing) operations. Many studies have sought to reduce the analysis cost of SOLAP operations using pre-aggregation methods. These methods use an index composed of fixed-size nodes to support the concept hierarchy; as a result, they leave many unused entries in sparse data areas and cannot support the concept hierarchy in dense data areas. In this paper, we propose a dynamic pre-aggregation index method based on the spatial hierarchy. The proposed method uses the level of the index to support the concept hierarchy. In a sparse data area, if sibling nodes have only a few used entries, those entries are integrated into one node and the parent entries share that node. In a dense data area, if a node has many objects, the node is connected to a linked list of several nodes and the data are stored in the linked nodes. Therefore, the proposed method saves the space of unused entries by integrating nodes, and it can still support the concept hierarchy because a node is linked rather than divided. Experimental results show that the proposed method saves both space and aggregation search cost with a building cost similar to that of other methods.

  • PDF
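The dense-area strategy described above, chaining overflow into linked nodes instead of splitting so the node keeps its place in the concept hierarchy, can be sketched as follows. This is a deliberately simplified, hypothetical structure, not the paper's actual index:

```python
class LinkedNode:
    """A fixed-capacity index node that chains overflow into linked
    nodes instead of splitting, preserving its hierarchy level."""
    CAPACITY = 4

    def __init__(self):
        self.entries = []   # (object_id, measure_value) pairs
        self.next = None    # overflow chain used in dense areas

    def insert(self, obj_id, value):
        # Walk the chain to the first node with free capacity.
        node = self
        while len(node.entries) >= LinkedNode.CAPACITY:
            if node.next is None:
                node.next = LinkedNode()
            node = node.next
        node.entries.append((obj_id, value))

    def aggregate(self):
        """Pre-aggregated SUM over the node and its whole chain,
        answered without descending to a lower hierarchy level."""
        total, node = 0, self
        while node is not None:
            total += sum(v for _, v in node.entries)
            node = node.next
        return total

node = LinkedNode()
for i in range(10):        # 10 objects overflow a capacity-4 node
    node.insert(i, i + 1)  # measure values 1..10
```

Because the ten entries live in one logical node (a chain of three physical nodes), an aggregate at this hierarchy level is a single chain scan rather than a subtree traversal.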

Product Review Data and Sentiment Analytical Processing Modeling (상품 리뷰 데이터와 감성 분석 처리 모델링)

  • Yeon, Jong-Heum;Lee, Dong-Joo;Shim, Jun-Ho;Lee, Sang-Goo
    • The Journal of Society for e-Business Studies
    • /
    • v.16 no.4
    • /
    • pp.125-137
    • /
    • 2011
  • Product reviews on online shopping sites can serve as a useful guideline for customers' buying decisions. However, due to the massive number of such reviews, it is almost impossible for users to read them all. For this reason, e-commerce sites provide users with useful reviews or rating statistics that are manually chosen or calculated. Opinion mining, or sentiment analysis, studies how to automate this process: first analyzing users' reviews of a product to tell whether a review contains positive or negative feedback, and then providing a summarized report of users' opinions. Previous research focuses either on determining the polarity of a user's opinion or on summarizing opinions about a single product feature, and thus uses relatively little of the information a review contains. Actual user reviews contain not only an assessment of a product but also the dissatisfactions and flaws a user experiences, and there is an increasing need for effective analysis of such criteria to help users in their decision-making process. This paper proposes a model that stores various types of user reviews in a data warehouse and analyzes the integrated reviews dynamically. We also analyze reviews from an online application shopping site with the proposed model.
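The first step the abstract describes, labeling each review positive or negative and then summarizing, can be sketched with a minimal lexicon-based scorer. The lexicon and reviews below are invented for illustration; real opinion-mining models are far richer than word counting:

```python
POSITIVE = {"good", "great", "excellent", "love", "fast"}
NEGATIVE = {"bad", "poor", "broken", "slow", "disappointed"}

def polarity(review):
    """Return +1 (positive), -1 (negative) or 0 (neutral)
    by comparing lexicon hit counts."""
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

def summarize(reviews):
    """Aggregate per-polarity counts into a summary report."""
    labels = [polarity(r) for r in reviews]
    return {"positive": labels.count(1),
            "negative": labels.count(-1),
            "neutral": labels.count(0)}

reviews = [
    "great battery and fast shipping",
    "screen arrived very slow and disappointed",
    "it is a phone",
]
summary = summarize(reviews)
```

In the proposed architecture, per-review labels like these would be stored as facts in the warehouse so that SOLAP-style queries can roll opinions up by product, feature, or time.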

A Study on Cumulative Trauma Disorders (누적외상병에 관한 연구)

  • 권영국
    • Proceedings of the ESK Conference
    • /
    • 1993.10a
    • /
    • pp.20-20
    • /
    • 1993
  • This study examines cumulative trauma disorders (CTDs), illnesses caused by performing repetitive work. A CTD is a relatively unfamiliar disorder that arises when the hand or another body part is used repetitively over a long period. It was classified 200 years ago by the Italian physician Bernardino Ramazzini but attracted little attention until recently. The condition has been better known as tennis elbow or trigger finger, and in medicine as ganglions. With the spread of desktop computers in the 1980s, however, office work became continuously repetitive, and many office workers came to suffer from CTDs, in severe cases requiring surgery. The author has himself undergone wrist surgery for this disorder, an occupational disease with a latency of several years. Like cancer, it is difficult to detect early, and by the time symptoms are felt it is usually too late to avoid surgery. This study introduces the nature of CTDs and the research results obtained abroad to date, and, to assess their current status in Korea, conducts both a questionnaire survey and field measurements on selected sample groups: a blue-collar group doing repetitive manual labor, a white-collar group doing repetitive office work, and a group of homemakers doing repetitive housework. Based on statistical analysis of the survey, the study diagnoses the status of and attitudes toward CTDs in Korea, and then proposes solutions and alternatives grounded in comprehensive, up-to-date theory and research.

  • PDF

Illegal Cash Accommodation Detection Modeling Using Ensemble Size Reduction (신용카드 불법현금융통 적발을 위한 축소된 앙상블 모형)

  • Lee, Hwa-Kyung;Han, Sang-Bum;Jhee, Won-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.1
    • /
    • pp.93-116
    • /
    • 2010
  • An ensemble approach is applied to detection modeling of illegal cash accommodation (ICA), a well-known type of fraudulent credit card usage in Far East nations that has not been addressed in the academic literature. The performance of a fraud detection model (FDM) suffers from the imbalanced data problem, which can be remedied to some extent using an ensemble of many classifiers. It is generally accepted that ensembles of classifiers produce better accuracy than a single classifier, provided there is diversity in the ensemble. Furthermore, recent research reveals that it may be better to ensemble some selected classifiers instead of all of the classifiers at hand. For the effective detection of ICA, we adopt an ensemble size reduction technique that prunes the ensemble of all classifiers using accuracy and diversity measures. Diversity in an ensemble manifests itself as disagreement or ambiguity among members. The data imbalance intrinsic to FDM affects our approach to ICA detection in two ways. First, we suggest a training procedure with over-sampling methods to obtain diverse training data sets. Second, we use variants of accuracy and diversity measures that focus on the fraud class, and we calculate the diversity measure dynamically during the two pruning procedures, Forward Addition and Backward Elimination. In our experiments, neural networks, decision trees and logit regressions are the base models serving as ensemble members, and the performance of homogeneous ensembles is compared with that of heterogeneous ensembles. The experimental results show that the reduced-size ensemble is, on average over the data sets tested, as accurate as the non-pruned version, which provides benefits in terms of application efficiency and reduced ensemble complexity.
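The Forward Addition pruning idea mentioned above can be sketched as a greedy loop that repeatedly adds the member improving the sub-ensemble most. For brevity this sketch scores candidates by majority-vote accuracy alone, a stand-in for the paper's combined accuracy/diversity criterion, and all predictions are toy values:

```python
def vote(members, i):
    """Majority vote of the selected members on example i (binary labels);
    ties go to the positive class."""
    s = sum(m[i] for m in members)
    return 1 if s * 2 >= len(members) else 0

def accuracy(members, labels):
    hits = sum(vote(members, i) == y for i, y in enumerate(labels))
    return hits / len(labels)

def forward_addition(pool, labels, size):
    """Greedily grow a sub-ensemble of at most `size` classifiers."""
    selected, remaining = [], list(pool)
    while remaining and len(selected) < size:
        best = max(remaining, key=lambda m: accuracy(selected + [m], labels))
        selected.append(best)
        remaining.remove(best)
    return selected

labels = [1, 0, 1, 1, 0, 1]
pool = [                     # per-example predictions of 4 base models
    [1, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 1],
    [1, 0, 0, 1, 0, 1],
]
pruned = forward_addition(pool, labels, size=2)
```

The pruned pair votes at least as accurately as the best single member, which is the empirical point the abstract makes about reduced-size ensembles.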

Intelligent Transportation System (ITS) research optimized for autonomous driving using edge computing (엣지 컴퓨팅을 이용하여 자율주행에 최적화된 지능형 교통 시스템 연구(ITS))

  • Sunghyuck Hong
    • Advanced Industrial Science
    • /
    • v.3 no.1
    • /
    • pp.23-29
    • /
    • 2024
  • In this scholarly investigation, the focus is placed on the transformative potential of edge computing in enhancing Intelligent Transportation Systems (ITS) for the facilitation of autonomous driving. The intrinsic capability of edge computing to process voluminous datasets locally and in a real-time manner is identified as paramount in meeting the exigent requirements of autonomous vehicles, encompassing expedited decision-making processes and the bolstering of safety protocols. This inquiry delves into the synergy between edge computing and extant ITS infrastructures, elucidating the manner in which localized data processing can substantially diminish latency, thereby augmenting the responsiveness of autonomous vehicles. Further, the study scrutinizes the deployment of edge servers, an array of sensors, and Vehicle-to-Everything (V2X) communication technologies, positing these elements as constituents of a robust framework designed to support instantaneous traffic management, collision avoidance mechanisms, and the dynamic optimization of vehicular routes. Moreover, this research addresses the principal challenges encountered in the incorporation of edge computing within ITS, including issues related to security, the integration of data, and the scalability of systems. It proffers insights into viable solutions and delineates directions for future scholarly inquiry.

Development of a Real-Time Mobile GIS using the HBR-Tree (HBR-Tree를 이용한 실시간 모바일 GIS의 개발)

  • Lee, Ki-Yang;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society
    • /
    • v.6 no.1 s.11
    • /
    • pp.73-85
    • /
    • 2004
  • Recently, with the growth of the wireless Internet, PDAs and HPCs, the focus of research and development related to GIS (Geographic Information Systems) has shifted to Real-Time Mobile GIS for serving LBS. To offer LBS efficiently, there must be a Real-Time GIS platform that can deal with the dynamic status of moving objects, and a location index that can deal with the characteristics of location data. Location data can use the same data types (e.g., point) as GIS, but the management of location data is very different. Therefore, in this paper, we studied a Real-Time Mobile GIS that uses the HBR-tree to manage masses of location data efficiently. The Real-Time Mobile GIS developed in this paper consists of the HBR-tree and the Real-Time GIS platform. The HBR-tree we propose is a combined index of the R-tree and a spatial hash. Although location data are updated frequently, update operations are done within the same hash table in the HBR-tree, so they cost less than in other tree-based indexes. Since the HBR-tree uses the same search mechanism as the R-tree, location data can be searched quickly. The Real-Time GIS platform consists of a Real-Time GIS engine extended from a main-memory database system, a middleware that transfers spatial and aspatial data to clients and receives location data from them, and a mobile client that operates on mobile devices. In particular, this paper describes the performance evaluation conducted with practical tests of the HBR-tree and the Real-Time GIS engine, respectively.

  • PDF
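The key property claimed above, that frequent location updates stay within one hash bucket and are therefore cheap, can be sketched with a grid hash over point locations. This is a simplification of the HBR-tree, whose buckets hold R-trees rather than plain dictionaries; the cell size and object names are illustrative:

```python
CELL = 100.0  # grid cell size in map units

def cell_of(x, y):
    """Hash a coordinate to its grid cell key."""
    return (int(x // CELL), int(y // CELL))

class GridIndex:
    def __init__(self):
        self.buckets = {}  # cell key -> {object_id: (x, y)}

    def update(self, obj_id, x, y):
        """Move an object; a same-cell move touches a single bucket."""
        key = cell_of(x, y)
        # Remove from an old bucket only if the object changed cell.
        for k, b in self.buckets.items():
            if obj_id in b and k != key:
                del b[obj_id]
                break
        self.buckets.setdefault(key, {})[obj_id] = (x, y)

    def search(self, x, y):
        """All objects in the cell containing (x, y)."""
        return dict(self.buckets.get(cell_of(x, y), {}))

idx = GridIndex()
idx.update("car1", 10, 10)   # lands in cell (0, 0)
idx.update("car1", 20, 30)   # still cell (0, 0): single-bucket update
idx.update("car1", 150, 10)  # crosses into cell (1, 0)
```

Only the third update pays the cross-bucket cost; in the HBR-tree the analogous saving is that the vast majority of GPS-style updates rewrite an entry inside one hashed R-tree rather than rebalancing a global tree.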

Implementation of Reporting Tool Supporting OLAP and Data Mining Analysis Using XMLA (XMLA를 사용한 OLAP과 데이타 마이닝 분석이 가능한 리포팅 툴의 구현)

  • Choe, Jee-Woong;Kim, Myung-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.3
    • /
    • pp.154-166
    • /
    • 2009
  • Database query and reporting tools, OLAP tools and data mining tools are typical front-end tools in a Business Intelligence (BI) environment, which supports gathering, consolidating and analyzing data produced from business operation activities and gives enterprise users access to the results. Traditional reporting tools have the advantage of creating sophisticated dynamic reports, including SQL query result sets, that look like documents produced by word processors, and of publishing those reports to the Web, but their data sources are limited to RDBMSs. On the other hand, OLAP tools and data mining tools each provide powerful information analysis functions in their own way, but their built-in visualization components for analysis results are limited to tables and some charts. Thus, this paper presents a system that integrates the three typical front-end tools so that they complement one another in a BI environment. Traditional reporting tools have only a query editor for generating SQL statements to bring data from an RDBMS; the reporting tool presented in this paper can also extract data from OLAP and data mining servers, because editors for OLAP and data mining query requests have been added to the tool. Traditional systems produce all documents on the server side, a structure that lets reporting tools avoid repeatedly generating a document when many clients access the same dynamic report. Because this system instead targets a few users generating documents for data analysis, the tool generates documents on the client side, and it therefore includes a processing mechanism to handle large amounts of data despite the limited memory capacity of the client-side report viewer. The reporting tool also has a data structure for integrating data from the three kinds of data sources into one document. Finally, most traditional front-end tools for BI depend on the data source architecture of a specific vendor. To overcome this problem, the system uses XMLA, a protocol based on web services, to access data sources for OLAP and data mining services from various vendors.
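An XMLA request is a SOAP message carrying an Execute (or Discover) method. A minimal sketch of building one with the standard library follows; the MDX statement and catalog name are placeholders, and a real client would still need to POST the string to the provider's XMLA endpoint:

```python
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
XMLA = "urn:schemas-microsoft-com:xml-analysis"

def build_execute_request(mdx, catalog):
    """Build an XMLA Execute SOAP request as a string."""
    env = ET.Element(f"{{{SOAP}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP}}}Body")
    execute = ET.SubElement(body, f"{{{XMLA}}}Execute")
    command = ET.SubElement(execute, f"{{{XMLA}}}Command")
    ET.SubElement(command, f"{{{XMLA}}}Statement").text = mdx
    props = ET.SubElement(execute, f"{{{XMLA}}}Properties")
    plist = ET.SubElement(props, f"{{{XMLA}}}PropertyList")
    ET.SubElement(plist, f"{{{XMLA}}}Catalog").text = catalog
    ET.SubElement(plist, f"{{{XMLA}}}Format").text = "Multidimensional"
    return ET.tostring(env, encoding="unicode")

request = build_execute_request(
    "SELECT [Measures].MEMBERS ON COLUMNS FROM [Sales]",  # placeholder MDX
    "SampleCatalog",
)
```

Because the request and response are plain web-service XML, the same client code can target OLAP or mining providers from different vendors, which is the vendor-independence argument the abstract makes.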