• Title/Summary/Keyword: Flow network


Urban Climate Impact Assessment Reflecting Urban Planning Scenarios - Connecting Green Network Across the North and South in Seoul - (서울 도시계획 정책을 적용한 기후영향평가 - 남북녹지축 조성사업을 대상으로 -)

  • Kwon, Hyuk-Gi;Yang, Ho-Jin;Yi, Chaeyeon;Kim, Yeon-Hee;Choi, Young-Jean
    • Journal of Environmental Impact Assessment
    • /
    • v.24 no.2
    • /
    • pp.134-153
    • /
    • 2015
  • In urban planning, it is important to understand the climate effects caused by changes in urban structure. The city of Seoul operates UPIS (Urban Plan Information System), which provides information on urban planning scenarios. Technology for analyzing the climate effects of urban planning needs to be developed by linking the urban planning scenarios provided by UPIS with the climate analysis model CAS (Climate Analysis Seoul). CAS was developed to analyze urban climate conditions and provide realistic information on local air temperature and wind flow. Quantitative analyses of the production, transport, and stagnation of cold air, of wind flow, and of thermal conditions were conducted with CAS by combining GIS analysis of land cover and elevation with meteorological analysis from MetPhoMod (Meteorology and atmospheric Photochemistry mesoscale Model). To reflect the latest land cover and elevation information, CAS used highly accurate raster data (1 m) from LiDAR surveys and KOMPSAT-2 (KOrea Multi-Purpose SATellite) imagery (4 m). For a more realistic representation of land surface characteristics, DSM (Digital Surface Model) and DTM (Digital Terrain Model) data were used as inputs to a CFD (Computational Fluid Dynamics) model. Eight inflow directions were considered to investigate changes in flow pattern and wind speed caused by reconstruction, and changes in the thermal environment caused by the creation of connected green areas. MetPhoMod output within CAS was also used to represent realistic weather conditions. The results show that wind corridors change with reconstruction and that, overall, surface temperature around the target area decreases when the connected green areas are created. The CFD model coupled with CAS makes it possible to evaluate wind corridors and the thermal environment before and after reconstruction and green-area creation.
In this study, the climate impact was analyzed before and after creation of the green area that forms part of the 'Connecting green network across the north and south in Seoul' project, an element of the '2020 Seoul master plan'.

GIS based Development of Module and Algorithm for Automatic Catchment Delineation Using Korean Reach File (GIS 기반의 하천망분석도 집수구역 자동 분할을 위한 알고리듬 및 모듈 개발)

  • PARK, Yong-Gil;KIM, Kye-Hyun;YOO, Jae-Hyun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.20 no.4
    • /
    • pp.126-138
    • /
    • 2017
  • Recently, national interest in the environment has been increasing, and to deal with water-environment issues swiftly and accurately, demand is growing to facilitate the analysis of water environment data using GIS. To meet these demands, a spatial-network-based stream network analysis map (Korean Reach File; KRF) supporting spatial analysis of water environment data was developed and is being provided. However, it remains difficult to delineate catchment areas, which are the basis for supplying spatial data and the information frequently required by users, for example when establishing remediation measures against water pollution accidents. Therefore, this study developed a computer program for this purpose. The development process included designing a delineation method and developing an algorithm and modules. A DEM (Digital Elevation Model) and an FDR (Flow Direction) grid were used as the major inputs for automatically delineating catchment areas. The delineation algorithm was developed in three stages: catchment area grid extraction, boundary point extraction, and boundary line division. An add-in catchment delineation module based on ESRI ArcGIS was also developed in consideration of the productivity and utility of the program. Catchment areas delineated with the program were compared with those currently used by the government. The results showed that catchment areas were delineated efficiently using the digital elevation data. In regions with clear topographic slopes in particular, they were delineated accurately and swiftly. Although the catchment areas were not segmented accurately in some flat regions, such as paddy fields and urban areas with well-organized drainage facilities, the program definitely reduced the time needed to delineate catchment areas.
In the future, the algorithm should be enhanced to exploit higher-precision digital elevation data and to reduce the calculation time when processing large data volumes.
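The three-stage algorithm above starts from a flow-direction grid. As a rough illustration, the first stage (collecting every cell that drains to a chosen outlet) can be sketched as a generic D8 upstream trace. The direction codes and the toy grid below are illustrative assumptions, not the KRF module's actual implementation:

```python
# Minimal sketch of catchment delineation from a D8 flow-direction (FDR) grid,
# assuming ESRI-style D8 codes (1=E, 2=SE, 4=S, 8=SW, 16=W, 32=NW, 64=N, 128=NE).
# It shows only the first of the paper's three stages: collecting every cell
# that drains to a given outlet cell.

from collections import deque

# Offsets (drow, dcol) for each D8 code.
D8 = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
      16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def delineate_catchment(fdr, outlet):
    """Return the set of (row, col) cells draining to `outlet`."""
    nrows, ncols = len(fdr), len(fdr[0])
    catchment = {outlet}
    queue = deque([outlet])
    while queue:
        r, c = queue.popleft()
        # A neighbour belongs to the catchment if its flow direction
        # points back at the cell we just popped.
        for code, (dr, dc) in D8.items():
            nr, nc = r - dr, c - dc          # candidate upstream neighbour
            if 0 <= nr < nrows and 0 <= nc < ncols:
                if fdr[nr][nc] == code and (nr, nc) not in catchment:
                    catchment.add((nr, nc))
                    queue.append((nr, nc))
    return catchment

# Toy 3x3 FDR grid in which every cell drains toward the outlet at (1, 2).
fdr = [[2,   4,   4],
       [1,   1,   0],    # 0 marks the outlet itself
       [128, 64,  64]]
print(len(delineate_catchment(fdr, (1, 2))))  # → 9: the whole grid drains there
```

The boundary-point extraction and boundary-line division stages would then trace the edge of this cell set into vector form.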

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of a facility, it may affect not only the relevant equipment but also other connected equipment, and can cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, and the causes are difficult to identify. Previous studies predicting failure in data centers treated each server as a single, independent state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), with the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. In contrast, the causes of failures occurring within servers are difficult to determine, and adequate prevention has not yet been achieved, precisely because server failures do not occur in isolation: one server's failure can trigger failures on other servers, or be triggered by them. In other words, whereas existing studies analyzed failure under the assumption of a single server with no inter-server effects, this study assumes that failures propagate between servers.
To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure on one piece of equipment is followed by a failure on another piece of equipment within 5 minutes, the two failures are defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently co-occurred within those sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states, was used. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning architecture was used to reflect the fact that the degree of involvement in a complex failure differs across servers. This model improves prediction accuracy by giving greater weight to servers that have a greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was modeled both as a single-server state and as a multi-server state, and the two were compared. The second experiment improved the prediction accuracy for the complex-server case by optimizing a threshold for each server.
In the first experiment, which compared the single-server and multi-server assumptions, three of the five servers were predicted not to have failed under the single-server assumption even though failures actually occurred, whereas under the multi-server assumption all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another: prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network, on the assumption that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using these results.
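The 5-minute co-occurrence rule described above can be sketched roughly as follows; the grouping logic and the device names are illustrative assumptions, not the paper's actual code:

```python
# Minimal sketch of the paper's co-occurrence rule (my assumption as to the
# exact implementation): failure events are sorted by time, and a failure on
# other equipment within 5 minutes of a group's first event is grouped with it
# as a simultaneous (complex) failure.

from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def group_simultaneous(events):
    """events: list of (timestamp, device_id), any order.
    Returns a list of groups, each holding events that occurred within
    5 minutes of the group's first event."""
    events = sorted(events)
    groups = []
    for ts, dev in events:
        if groups and ts - groups[-1][0][0] <= WINDOW:
            groups[-1].append((ts, dev))
        else:
            groups.append([(ts, dev)])
    return groups

# Hypothetical failure log mixing the paper's four failure types.
log = [
    (datetime(2020, 1, 1, 9, 0), "server-A"),     # Server Down
    (datetime(2020, 1, 1, 9, 3), "net-switch-1"), # Network Node Down
    (datetime(2020, 1, 1, 9, 4), "db-1"),         # DBMS Service Down
    (datetime(2020, 1, 1, 10, 0), "server-B"),    # isolated failure
]
groups = group_simultaneous(log)
print([len(g) for g in groups])   # → [3, 1]
```

The resulting groups form the sequences from which the five most frequently co-occurring devices are selected for the HAN model.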

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the flood of content is becoming ever more important. Amid this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft are likewise focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas like finance where the flow of information is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract good-quality triples. Second, producing human-labeled text data becomes more difficult as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike prior work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. This gives the study three contributions. First, it presents a practical and simple automatic knowledge extraction method that can be applied directly. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, analysts' reports on the 30 individual stocks with the highest publication frequency from May 30, 2017 to May 21, 2018 are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score can be computed with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, prediction power and the soundness of the score functions are confirmed by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows a 69.3% hit ratio on the testing set of 2,526 reports. This hit ratio is meaningfully high despite several constraints on the research. Looking at the model's prediction performance by stock, only three stocks (LG Electronics, Kia Motors, and Mando) show performance far below average, possibly due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to search for related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network, without learning a field-specific corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain to be addressed; most notably, the markedly poor performance on a few stocks points to the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to semantically match new text information with the related stocks.
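As a rough illustration of the per-stock scoring scheme, the following sketch loosely follows the neural tensor layer of Socher et al.; the dimensions, slice count, and stock names are illustrative assumptions, and the parameters are random rather than trained:

```python
# Minimal numpy sketch of per-stock entity scoring with a neural tensor layer.
# The paper trains one score function per stock on one-hot entity vectors; the
# shapes, slice count, and stock names below are illustrative assumptions, not
# the paper's actual configuration, and no training is performed here.

import numpy as np

rng = np.random.default_rng(0)
D, K = 100, 4          # entity vector size (top-100 one-hot), tensor slices

def make_score_fn():
    """Random (untrained) parameters for one stock's score function."""
    return {
        "W": rng.normal(scale=0.1, size=(K, D, D)),  # bilinear tensor term
        "V": rng.normal(scale=0.1, size=(K, D)),     # linear term
        "b": rng.normal(scale=0.1, size=K),          # bias
        "u": rng.normal(scale=0.1, size=K),          # output weights
    }

def score(params, e):
    """u^T tanh(e^T W e + V e + b) for a single entity vector e."""
    tensor = np.einsum("i,kij,j->k", e, params["W"], e)
    return params["u"] @ np.tanh(tensor + params["V"] @ e + params["b"])

stocks = ["StockA", "StockB", "StockC"]          # hypothetical tickers
score_fns = {s: make_score_fn() for s in stocks}

e = np.zeros(D)
e[17] = 1.0                                      # one-hot entity from a report
best = max(stocks, key=lambda s: score(score_fns[s], e))
print("predicted related stock:", best)
```

At prediction time, as in the paper, a new entity is scored by every stock's function and assigned to the stock with the highest score.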

Simulation Analysis of Urban Heat Island Mitigation of Green Area Types in Apartment Complexes (유형별 녹지 시뮬레이션을 통한 아파트 단지 내 도시열섬현상 저감효과 분석)

  • Ji, Eun-Ju;Kim, Da-Been;Kim, Yu-Gyeong;Lee, Jung-A
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.51 no.3
    • /
    • pp.153-165
    • /
    • 2023
  • The purpose of this study is to propose effective green-area scenarios for apartment complexes that improve the connection between green spaces, considering wind flow, thermal comfort, and mitigation of the urban heat island effect. The study site was an apartment complex in Godeok-dong, Gangdong-gu, Seoul, Korea, selected by comparing temperature and discomfort index data collected from June to August 2020. The thermal and wind environment of the current site was analyzed first. Based on the findings, three scenarios were proposed, taking into account both green patch and corridor elements: Scenario 1 (green patch), Scenario 2 (green corridor), and Scenario 3 (green patch & corridor). Each scenario's wind speed, wind flow, and thermal comfort were then analyzed using ENVI-met to compare their effectiveness in mitigating the urban heat island effect. The results demonstrated that green patches contributed to increased wind speed and improved wind flow, leading to a reduction of 31.20% in the predicted mean vote (PMV) and 68.59% in the physiological equivalent temperature (PET). Green corridors, on the other hand, connected wind paths and increased wind speed further than green patches; they proved more effective in mitigating the urban heat island, with reductions of 92.47% in PMV and 90.14% in PET. The combination of green patches and green corridors showed the greatest increase in wind speed and the strongest connectivity within the apartment complex, with reductions of 95.75% in PMV and 95.35% in PET. In narrow areas, however, patches were found to be more effective than green corridors in improving thermal comfort. Therefore, to effectively mitigate the urban heat island effect, it is recommended to enhance green areas by combining green corridors with green patches.
This study can serve as fundamental data for planning green areas to mitigate urban heat island effects in apartment complexes, and as a way to improve urban resilience against the challenges posed by the urban heat island effect.

A Study on the Birthplace of Kang Jeungsan, Gaekmang-ri, and Neighboring Areas from a Feng Shui Perspective: Focused on the Theory of Connecting Geomantic Veins (상제 강세지 객망리 일대의 풍수지리적 의미에 관한 연구 -지맥의 연결과정을 통한 형기론을 중심으로-)

  • Shin Young-dae
    • Journal of the Daesoon Academy of Sciences
    • /
    • v.46
    • /
    • pp.69-122
    • /
    • 2023
  • This study is an integral exploration of the Feng Shui associated with the area around the birthplace of Kang Jeungsan, a sacred site of Daesoon Jinrihoe, which holds that the Supreme God descended in human form at that location (through Kang Jeungsan). Through an on-site Feng Shui survey, the main focus of the research method was to explore the Feng Shui configurations around Kang Jeungsan's birthplace, especially the connections among the geomantic veins that lead to the Mount Shiru area. As a method of investigation, this study explored the Feng Shui of Gaekmang-ri Village and the geomantic veins leading up to Mount Shiru, examining the landforms, topography, water flow, and geomantic veins of the area to reveal the overall Feng Shui configurations. In the course of the on-site survey, this study first examined Mount Duseung and Mount Bangjang, also known as Mount Yeongju (sometimes collectively known as Mount Samshin), Mount Dongjuk, Mount Mangje-bong, Mount Maebong, and Mount Shiru. It then addressed some of the underlying issues through a scholarly approach based on traditional geographical texts and theories of mountain growth and water flow from the perspective of Feng Shui. Particular attention was paid to theoretical aspects of the uninterrupted, undulating flow of the terrain leading to Mount Shiru. As a result, the connected network of geomantic veins in the area of Kang Jeungsan's birthplace and its Feng Shui features and conditions were all examined through the on-site survey. The survey revealed that the area forms a large Feng Shui site owing to the vast interconnectivity among all the mountains that extend from the Honam vein and form organic relationships with one another, including Mount Samshin in Honam.
Considering the geographical conditions that formed a site enabling harmony between divine beings and humankind, the surrounding place names also allude to the understanding of the birth of Kang Jeungsan as the descent of the Supreme God into the human world through the historical figure Kang Jeungsan. The area is an ideal spot with a propitious spatial arrangement in terms of its Feng Shui; the analysis reveals it to be an earth-energy hub transmitting a great natural energy that cannot be measured by human power alone.

Classification of Domestic Freight Data and Application for Network Models in the Era of 'Government 3.0' ('정부 3.0' 시대를 맞이한 국내 화물 자료의 집계 수준에 따른 분류체계 구축 및 네트워크 모형 적용방안)

  • YOO, Han Sol;KIM, Nam Seok
    • Journal of Korean Society of Transportation
    • /
    • v.33 no.4
    • /
    • pp.379-392
    • /
    • 2015
  • Freight flow data in Korea has been collected for a variety of purposes by various organizations. However, since the representation and format of the data vary, it has not been used substantially for freight analyses, and in turn for freight policies. To increase the applicability of these data sets, it is necessary to tabulate and compare them to identify their differences. The raw data can then be aggregated by particular criteria such as mode, origin and destination, and commodity type. This study examines the freight data issue from three points of view. First, we investigated the various freight volume data sets released by several organizations. Second, we developed formulations for freight volume data. Third, we discussed how to apply the formulations to network models in which particular OR (Operations Research) techniques are used. The results emphasize that some data may become useless for modeling once aggregated. In examining the freight volume data, this study found that 14 organizations share their data sets at various aggregation levels. This study is not an ordinary research article with data analysis, since the diversity of the data involved makes extensive case studies impractical. Nevertheless, it may guide the research direction of the freight transport research community with respect to data issues. In particular, it is a timely study, as the government has emphasized the importance of sharing data with the public for research purposes through 'Government 3.0'.

A Study on Spatial Pattern of Impact Area of Intersection Using Digital Tachograph Data and Traffic Assignment Model (차량 운행기록정보와 통행배정 모형을 이용한 교차로 영향권의 공간적 패턴에 관한 연구)

  • PARK, Seungjun;HONG, Kiman;KIM, Taegyun;SEO, Hyeon;CHO, Joong Rae;HONG, Young Suk
    • Journal of Korean Society of Transportation
    • /
    • v.36 no.2
    • /
    • pp.155-168
    • /
    • 2018
  • In this study, we examined the directional patterns of traffic entering an intersection from its upstream links, as a preliminary step toward predicting short-horizon (such as 5- or 10-minute) directional traffic volumes on interrupted flow, and investigated the possibility of predicting traffic volume using a traffic assignment model. The analysis method is to investigate the similarity of patterns by performing cluster analysis on the ratios of traffic volume by intersection direction, aggregated into 2-hour intervals, using one week of taxi DTG (Digital Tachograph) data. To link with the traffic assignment model, this study also compares the impact areas extending 5 or 10 minutes from the center of the intersection with the analysis results of the taxi DTG data. To do this, we developed an algorithm that determines the impact area of an intersection using the taxi DTG data and the traffic assignment model. As a result of the analysis, the intersection entry patterns of taxis were grouped into 12 clusters, with a Cubic Clustering Criterion (indicating the confidence level of the clustering) of 6.92. Correlation analysis with the impact area of the traffic assignment model gave a correlation coefficient of 0.86 for the 5-minute impact area, a significant result. The correlation coefficient dropped to 0.69 for the 10-minute impact area, which was attributed to insufficient accuracy of the O/D (Origin/Destination) trip and network data. In the future, if the accuracy of the traffic network and of time-dependent O/D volumes improves, the traffic volume computed by the traffic assignment model is expected to be usable for controlling traffic signals at intersections.
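The clustering step can be illustrated roughly as follows. The paper does not specify its clustering algorithm in this abstract, so this sketch uses plain k-means on synthetic direction-ratio vectors and does not compute the Cubic Clustering Criterion:

```python
# Hedged sketch of clustering intersection-approach turning patterns. Each
# observation is one approach in one 2-hour interval, represented by the share
# of traffic turning left / going through / turning right; similar turning
# patterns then fall into the same cluster. The data and the use of k-means
# are illustrative assumptions, not the paper's actual method.

import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means: returns (labels, centers)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Synthetic direction-ratio vectors (left, through, right); rows sum to 1.
X = np.vstack([
    rng.dirichlet([1, 8, 1], size=20),   # approaches with mostly through traffic
    rng.dirichlet([6, 2, 2], size=20),   # approaches with mostly left turns
])
labels, centers = kmeans(X, k=2)
print(np.round(centers, 2))              # two distinct turning-pattern profiles
```

With real DTG data, the 12 clusters reported above would be chosen by a criterion such as the CCC rather than fixed in advance.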

A Study on Improvement of the police disaster crisis management system (경찰의 재난위기관리 개선에 관한 연구)

  • Chun, Yongtae;Kim, Moonkwi
    • Journal of the Society of Disaster Information
    • /
    • v.11 no.4
    • /
    • pp.556-569
    • /
    • 2015
  • With about 75% of the Korean population criticizing the government's disaster policy, the failure to respond to large-scale emergencies like the Sewol ferry sinking reflects a deep distrust in the government. To prevent dreadful disasters such as the Sewol ferry sinking, it is important to secure prime time with respect to disaster safety. Improving the crisis management skills and managerial role of police officers, who are in close proximity to the people, is necessary for the success of disaster management. With disaster management as one of the most essential missions of the police, and as part of national crisis management, the disaster safety management system of the police should be strengthened step by step, as follows. First, at the prevention phase, police have not been deployed to for-profit large-scale assemblies or events; in the future, deployment should be based on the level of potential risk rather than profitability. In the past and at present, the priority has been traffic flow and circulation; the paradigm of traffic policy should change to a safety-centered one. To prevent large-scale accidents, police investigators should root out improper practices and illegal construction subcontracting. The police intelligence function should strengthen efforts to collect intelligence on the subject of "safety." Second, with respect to the preparatory phase, a survey of police officers showed that 72% responded that safety management was not part of the police job description. This, along with other results, shows that awareness of disaster safety must urgently be adopted, or rather changed, within the police, and training in disaster safety education should be strengthened.
A network of experts (private, administrative, and police) in safety management should be established to take advantage of private resources in crisis situations. Third, with respect to the response phase, a unified communication network should be established for rapid first response, and a real-time video information network should be adopted by the police and installed in the police situation room. Fourth, during the recovery phase, recovery teams should be deployed and operated to minimize secondary damage.

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on higher speeds to meet growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To provide these services, reduced latency and high reliability, on top of high data speeds, are critical for real-time services. 5G has accordingly paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/km². In intelligent traffic control systems and services using vehicle-based V2X (Vehicle to X), such as traffic control, reduced delay and high reliability for real-time services are especially important in addition to high data speeds. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting indoor use. Under existing networks it is therefore difficult to overcome these constraints. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay.
Since SDNs with conventional centralized structures have difficulty meeting the desired delay level, studies on the optimal size of an SDN for information processing are needed. SDNs therefore need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor, since the link speed is sufficient and it contributes less than 1 ms; the information change cycle and the SDN's data processing time are the factors that most affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. We further assumed 5G small cells with radii of 50-250 m, and vehicle speeds of 30-200 km/h, in order to examine the network architecture that minimizes the delay.
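One delay-relevant quantity implied by the simulation setup is the cell dwell time: a vehicle crossing a small cell of radius r at speed v stays in it for at most 2r / v, which bounds how often the SDN must refresh per-cell information for that vehicle. A minimal sketch, assuming a straight diameter crossing:

```python
# Cell dwell time for a vehicle crossing a 5G small cell along its diameter.
# The diameter-crossing assumption is mine; the radius and speed ranges are
# the ones stated in the abstract (50-250 m, 30-200 km/h).

def dwell_time_s(radius_m: float, speed_kmh: float) -> float:
    """Maximum time (s) a vehicle spends inside a cell of the given radius."""
    speed_ms = speed_kmh / 3.6
    return 2 * radius_m / speed_ms

for r in (50, 250):                 # cell radii from the paper's range
    for v in (30, 200):             # vehicle speeds from the paper's range
        print(f"r={r} m, v={v} km/h -> dwell {dwell_time_s(r, v):.1f} s")
```

At 200 km/h in a 50 m cell the dwell time is only 1.8 s, which illustrates why the information change cycle and SDN processing time, rather than the sub-millisecond RTD, dominate the delay budget.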