• Title/Summary/Keyword: Time Weight (시간 가중치)

A study on simplification of SWMM for prime time of urban flood forecasting -a case study of Daerim basin- (도시홍수예보 골든타임확보를 위한 SWMM유출모형 단순화 연구 -대림배수분구를 중심으로-)

  • Lee, Jung-Hwan;Kim, Min-Seok;Yuk, Gi-Moon;Moon, Young-Il
    • Journal of Korea Water Resources Association
    • /
    • v.51 no.1
    • /
    • pp.81-88
    • /
    • 2018
  • A rainfall-runoff model built from the sewer networks of an urban area is vast and complex, making it unsuitable for real-time urban flood forecasting. Therefore, a rainfall-runoff model was constructed and simplified using the sewer network of the Daerim basin. The network simplification process consisted of five steps based on cumulative drainage area, and all SWMM parameters were calculated using area weighting. In addition, to estimate the optimal simplification range of the sewer network, runoff and flood analyses were carried out for five simplification ranges. As a result, the number of nodes and conduits and the simulation time were reduced by 50~90% depending on the simplification range. The runoff results of the simplified models matched those obtained before simplification. In the 2D flood analysis, as the simplification range by cumulative drainage area increased, the number of overflow nodes decreased significantly and their positions changed, but a similar flooding pattern appeared. However, for cumulative drainage areas of more than 6 ha, some inundation areas could not be reproduced because upstream nodes had been deleted. A comparison of flood area and flood depth showed that the flood results for a simplification range of 1 ha cumulative drainage area were most similar to those obtained before simplification. This study is expected to provide reliable data suitable for real-time urban flood forecasting by simplifying the sewer network while accounting for SWMM parameters.
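
The area-weighted aggregation step can be illustrated with a short sketch. This is a minimal illustration of the idea, not the authors' code: subcatchments are merged until a cumulative drainage-area threshold is reached, and parameters such as imperviousness and characteristic width are combined by area weighting. All names, fields, and values below are hypothetical.

```python
# Minimal sketch: merging SWMM-style subcatchments below a cumulative drainage-area
# threshold, with parameters aggregated by area weighting. Illustrative only.
from dataclasses import dataclass

@dataclass
class Subcatchment:
    name: str
    area_ha: float        # drainage area [ha]
    impervious: float     # impervious fraction [0-1]
    width: float          # characteristic width [m]

def area_weighted_merge(subs, threshold_ha=1.0):
    """Merge subcatchments until the cumulative area reaches the threshold,
    weighting each parameter by the member areas."""
    merged, group = [], []
    for sc in subs:
        group.append(sc)
        if sum(s.area_ha for s in group) >= threshold_ha:
            total = sum(s.area_ha for s in group)
            merged.append(Subcatchment(
                name="+".join(s.name for s in group),
                area_ha=total,
                impervious=sum(s.impervious * s.area_ha for s in group) / total,
                width=sum(s.width * s.area_ha for s in group) / total,
            ))
            group = []
    if group:  # leftover upstream fragments keep their own parameters
        merged.extend(group)
    return merged

if __name__ == "__main__":
    subs = [Subcatchment("S1", 0.4, 0.7, 25.0),
            Subcatchment("S2", 0.8, 0.5, 40.0),
            Subcatchment("S3", 0.3, 0.9, 15.0)]
    for m in area_weighted_merge(subs, threshold_ha=1.0):
        print(m)
```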

Runoff analysis according to LID facilities in climate change scenario - focusing on Cheonggyecheon basin (기후변화 시나리오에서의 LID 요소기술 적용에 따른 유출량 분석 - 청계천 유역을 대상으로)

  • Yoon, EuiHyeok;Jang, Chang-Lae;Lee, KyungSu
    • Journal of Korea Water Resources Association
    • /
    • v.53 no.8
    • /
    • pp.583-595
    • /
    • 2020
  • In this study, using the RCP scenarios for the Hyoja drainage subbasin of Cheonggyecheon, we analyzed the change between historical and future rainfall calculated from five GCM models. Analysis of the average rainfall for each GCM model showed that future rainfall increased by 35.30 to 208.65 mm over historical rainfall, an increase of 1.73~16.84%. In addition, the applicability of LID element technologies such as porous pavement, infiltration trenches, and green roofs was analyzed using the SWMM model, and the applied weight and runoff for each LID element technology were analyzed. Although there were differences among the GCM models, runoff increased by 2.58 to 28.78%. However, when porous pavement or an infiltration trench was applied alone, runoff under future rainfall decreased by 3.48% and 2.74%, and by 8.04% and 7.16%, in the INM-CM4 and MRI-CGCM3 models, respectively. When the two LID element technologies were combined, runoff decreased by 2.74% and 2.89%, and by 7.16% and 7.31%, respectively, which is less than or similar to the historical rainfall runoff. Applying the LID element technologies showed that a green roof area of about one third of the urban area is most effective for securing the lag time of runoff. Moreover, when applying LID to an old downtown area, it is desirable to consider priorities in the order of economic cost, maintenance, and cityscape.

An Efficient Filtering Technique of GPS Traffic Data using Historical Data (이력 자료를 활용한 GPS 교통정보의 효율적인 필터링 방법)

  • Choi, Jin-Woo;Yang, Young-Kyu
    • Journal of Korea Spatial Information System Society
    • /
    • v.10 no.3
    • /
    • pp.55-65
    • /
    • 2008
  • To obtain telematics traffic information (travel time or speed on an individual link), many kinds of devices are used to collect traffic data. Since GPS satellite signals were released for civilian use, and thanks to the development of GPS technology, GPS has become a very useful instrument for collecting traffic data. GPS can reduce installation and maintenance costs compared with existing traffic detectors, which must be stationed on the ground. However, problems arise when GPS data are applied to the existing filtering techniques used for analyzing data collected by other detectors. This paper proposes a method to provide users with correct traffic information by filtering out abnormal data caused by unusual driving in GPS-based collections. We developed an algorithm that can be applied to real-time GPS data and produces more reliable traffic information by building patterns from past data and filtering abnormal data through the selection of filtering ranges using quartile values. To verify the proposed algorithm, we experimented with actual traffic data from probe cars equipped with a built-in GPS receiver that ran along Gangnam Street in Seoul. The experiments showed that link travel speed data obtained from this algorithm are more accurate than those obtained by existing systems.
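
The quartile-based filtering idea can be sketched as follows. This is a simplified illustration, not the paper's algorithm: historical link speeds for the same link and time of day define an acceptance band from the interquartile range, and incoming real-time GPS records outside the band are discarded as abnormal driving. The fence factor k and the sample data are assumptions.

```python
# Simplified quartile (IQR) filter for real-time link speeds against historical data.
import numpy as np

def iqr_filter(realtime_speeds, historical_speeds, k=1.5):
    """Keep only real-time speeds inside the IQR fence built from historical data."""
    q1, q3 = np.percentile(historical_speeds, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    realtime = np.asarray(realtime_speeds, dtype=float)
    return realtime[(realtime >= lower) & (realtime <= upper)]

if __name__ == "__main__":
    history = [32, 35, 30, 33, 36, 31, 34, 29, 33, 35]   # km/h, same link and time of day
    incoming = [33, 5, 34, 78, 31]                        # 5 and 78 look like abnormal driving
    print(iqr_filter(incoming, history))                  # -> [33. 34. 31.]
```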

An Image Retrieval Model Based on Weighted Visual Features Determined by Relevance Feedback (적합성 피드백을 통해 결정된 가중치를 갖는 시각적 특성에 기반을 둔 이미지 검색 모델)

  • Song, Ji-Young;Kim, Woo-Cheol;Kim, Seung-Woo;Park, Sang-Hyun
    • Journal of KIISE:Databases
    • /
    • v.34 no.3
    • /
    • pp.193-205
    • /
    • 2007
  • The increasing number of digital images requires more accurate and faster image retrieval. Image retrieval methods to date include content-based retrieval, which uses visual features such as color and brightness, and keyword-based retrieval, which uses keywords that describe the image. However, the effectiveness of these methods in providing exactly the images the user wanted has been questioned. Hence, many researchers have worked on relevance feedback, a process in which responses from the user are given as feedback during the retrieval session in order to define the user's needs and provide improved results. Yet the methods that employ relevance feedback also have drawbacks, since several rounds of feedback are necessary to obtain an appropriate result and the feedback information cannot be reused. In this paper, a novel retrieval model is proposed that annotates an image with a keyword and modifies the confidence level of the keyword in response to the user's feedback. In the proposed model, not only the images that received positive feedback but also other images whose visual features are similar to those used to distinguish the positive images are subject to confidence modification. This enables modifying a large number of images with only a few feedback rounds, ultimately leading to faster and more accurate retrieval results. An experiment was performed to verify the effectiveness of the proposed model, and the results demonstrated a rapid increase in recall and precision for the same number of feedback rounds.
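
The confidence-propagation idea can be sketched roughly as follows. This is not the authors' system, only an illustration: after a positive feedback on one image, the confidence of its keyword is raised not only for that image but also for images whose visual features are cosine-similar, scaled by the similarity. The learning rate alpha, the similarity threshold, and the feature vectors are all made up.

```python
# Illustrative propagation of a positive relevance feedback to visually similar images.
import numpy as np

def propagate_feedback(features, confidences, fed_back_idx, alpha=0.3, sim_threshold=0.8):
    """features: (n, d) visual feature vectors; confidences: (n,) keyword confidences."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ f[fed_back_idx]                 # cosine similarity to the positive image
    updated = confidences.copy()
    for i, s in enumerate(sims):
        if s >= sim_threshold:                 # similar images share the confidence boost
            updated[i] = min(1.0, updated[i] + alpha * s)
    return updated

if __name__ == "__main__":
    feats = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0]])
    conf = np.array([0.5, 0.5, 0.5])
    print(propagate_feedback(feats, conf, fed_back_idx=0))  # first two images are boosted
```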

Potential Mapping of Mountainous Wetlands using Weights of Evidence Model in Yeongnam Area, Korea (Weight of Evidence 기법을 이용한 영남지역의 산지습지 가능지역 추출)

  • Baek, Seung-Gyun;Jang, Dong-Ho
    • Journal of The Geomorphological Association of Korea
    • /
    • v.20 no.1
    • /
    • pp.21-33
    • /
    • 2013
  • The weights-of-evidence model was applied to potential mapping of mountainous wetlands to reduce the range of the field survey and increase operational efficiency, because surveys of mountainous wetlands require a lot of time and money owing to inaccessibility and extensiveness. In the weights-of-evidence model, the relationship between mountainous wetland locations and related factors is expressed as a probability. For this purpose, a spatial database consisting of a slope map, curvature map, vegetation index map, wetness index map, and soil drainage rating map was constructed for the Yeongnam area, Korea, and weights of evidence were calculated from the relationship between mountainous wetland locations and each factor rating. Correlation analysis between mountainous wetland locations and each factor rating using likelihood ratio values showed that the probability of a mountainous wetland increases under conditions of lower slope, lower curvature, lower vegetation index, lower wetness index, and moderate soil drainage rating. The Mountainous Wetland Potential Index (MWPI) was calculated by summing the likelihood ratios, and the mountainous wetland potential map was constructed through GIS integration. The potential map was verified by comparison with known mountainous wetland locations, showing a prediction accuracy of 75.48%.
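
A simplified sketch of the weighting step is given below. The study sums likelihood ratios over the evidence layers; the sketch uses the closely related positive weight W+ = ln[P(B|D)/P(B|~D)] of the weights-of-evidence framework, computed per factor class from wetland and non-wetland cell counts and summed per cell. All contingency counts are illustrative, not the study's data.

```python
# Simplified weights-of-evidence weighting and per-cell summation.
import math

def positive_weight(n_class_wetland, n_wetland, n_class_nonwetland, n_nonwetland):
    """W+ for one factor class, from class frequencies inside and outside wetlands."""
    p_b_given_d = n_class_wetland / n_wetland          # P(class | wetland)
    p_b_given_nd = n_class_nonwetland / n_nonwetland   # P(class | non-wetland)
    return math.log(p_b_given_d / p_b_given_nd)

# hypothetical contingency counts for two evidence layers
w_low_slope = positive_weight(80, 100, 2000, 10000)          # low-slope class
w_moderate_drainage = positive_weight(60, 100, 3000, 10000)  # moderate soil drainage class

# potential index for a cell that falls into both favourable classes
mwpi = w_low_slope + w_moderate_drainage
print(round(mwpi, 3))
```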

Dimensionality Reduction of Feature Set for API Call based Android Malware Classification

  • Hwang, Hee-Jin;Lee, Soojin
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.11
    • /
    • pp.41-49
    • /
    • 2021
  • All application programs, including malware, call Application Programming Interfaces (APIs) upon execution. Recently, exploiting this characteristic, attempts to detect and classify malware based on API call information have been actively studied. However, datasets containing API call information require a large amount of computation and processing time. In addition, information that does not significantly affect the classification of malware may degrade the classification accuracy of the learning model. Therefore, in this paper, we propose a method of extracting an essential feature set after reducing the dimensionality of the API call information by applying various feature selection methods. We used CICAndMal2020, a recently released Android malware dataset, for the experiment. After extracting the essential feature set with the various feature selection methods, Android malware classification was conducted using a CNN (Convolutional Neural Network) and the results were analyzed. The results showed that the selected feature set and the weight priorities vary according to the feature selection method. In binary classification, malware was classified with 97% accuracy even when the feature set was reduced to 15% of its original size. In multiclass classification, an average accuracy of 83% was achieved while reducing the feature set to 8% of its original size.
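
One possible feature selection step is sketched below with scikit-learn's univariate filter. This is not the paper's pipeline: the data are random stand-in API-call counts rather than CICAndMal2020, and SelectKBest with chi2 is only one of many feature selection methods that could be applied.

```python
# Reducing an API-call count matrix to a small "essential" feature set before training.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(0)
X = rng.integers(0, 20, size=(200, 400)).astype(float)   # 400 API-call count features
y = rng.integers(0, 2, size=200)                          # 0 = benign, 1 = malware (stand-in labels)

selector = SelectKBest(score_func=chi2, k=60)             # keep roughly 15% of the features
X_reduced = selector.fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)                     # (200, 400) -> (200, 60)
```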

A Study on Changes in the Centrality Movement of Coastal Shipping Passengers Utilizing the SNA Method (SNA 방법을 통한 연안해운 승객 중심성 이동변화 분석)

  • PARK, Sung-hun;JU, Dong-young;OH, Jae-gyun;NAM, Tae-hyun;YEO, Gi-tae
    • The Journal of shipping and logistics
    • /
    • v.34 no.4
    • /
    • pp.527-544
    • /
    • 2018
  • In this study, SNA was conducted to examine changes in passenger movements in domestic coastal shipping. The validity of the derived centrality rankings was enhanced by using a connection centrality that reflects weights, which had not been applied in previous research. The connection centrality analysis indicated that the network composition ratio of the South Sea region was high, and the betweenness centrality analysis indicated that ports in the South Sea region ranked high. Jeju Island, which acts as a gateway to the West Sea and the South Sea; Mokpo, which acts as a gateway between the mainland and the islands; ports geographically close to the mainland; and ports smoothly connected to small ports were shown to have high betweenness centrality. Meanwhile, in the eigenvector centrality analysis, not only ports in the South Sea region but also many ports in the West Sea region were among the highly ranked. Using these results, the port authority can identify major ports in domestic coastal shipping, determine support priorities, identify the current state of port connections, and establish strategies for managing key development areas. Future studies should address the economic aspect by separating general passengers from island passengers and utilizing data such as fares, distances, and time.
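
The three centrality measures can be computed with networkx as in the sketch below. This is not the study's dataset: port names and passenger counts are invented, connection centrality is taken here as weighted degree, and betweenness is computed with inverse passenger volume as the distance since networkx treats edge weights as distances there.

```python
# Toy weighted port-to-port passenger network and three centrality measures.
import networkx as nx

edges = [("Jeju", "Mokpo", 120_000), ("Jeju", "Busan", 80_000),
         ("Mokpo", "Busan", 20_000), ("Mokpo", "Wando", 40_000),
         ("Busan", "Tongyeong", 30_000)]

G = nx.Graph()
for u, v, passengers in edges:
    # betweenness interprets weights as distances, so also store inverse volume
    G.add_edge(u, v, weight=passengers, dist=1.0 / passengers)

weighted_degree = dict(G.degree(weight="weight"))                     # "connection" centrality
betweenness = nx.betweenness_centrality(G, weight="dist")
eigenvector = nx.eigenvector_centrality(G, weight="weight", max_iter=500)

for port in G.nodes:
    print(port, weighted_degree[port],
          round(betweenness[port], 3), round(eigenvector[port], 3))
```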

A Case Study and Its Implications on the Admission Officer System of Colleges and Universities in USA (미국대학 입학사정관제도의 운영사례와 시사점)

  • Chung, Ilhwan;Kim, Byoungjoo
    • Korean Journal of Comparative Education
    • /
    • v.18 no.4
    • /
    • pp.113-139
    • /
    • 2008
  • The purpose of this study is to analyze operating cases of the admission officer system at colleges and universities in the USA and to deduce implications for Korean colleges and universities. To accomplish this purpose, the following methods were adopted: a review of the related literature, statistical data, and previous studies concerning admission officers at colleges and universities in the USA, and in-depth interviews with them. The historical and cultural background of the US university admission system was analyzed. The case study of US colleges and universities was divided into four parts: determining factors of admission and admission methods; organization for admission affairs and its staffing; the work of admission officers and the admission process; and the cost of admission and salaries. Implications for Korean colleges and universities were presented in three areas: overall implications, implications for materials used in the admission process, and implications for the management of admissions. Based on the analysis, discussion, and implications, the conclusions and further suggestions of this study are as follows. First, actual admission authority should be granted to admission officers. Second, not only non-curricular factors but also scholastic factors should be emphasized in the role of admission officers. Third, education and training for the work of admission officers and unification of admission criteria should be provided. Fourth, admission officers with various occupational backgrounds are needed. Fifth, the work of admission officers should be extended to various tasks concerning university entrance. Sixth, cross-checking of the marks given by two or more admission officers is needed. Seventh, in order to stabilize the admission process, the status of admission officers should be stabilized. Eighth, part-time admission officers need to be employed during the admission season. Ninth, the authority to weight high schools should be granted to admission officers from a long-term perspective.

Correction for Na Migration Effects in Silicate Glasses During Electron Microprobe Analysis (전자현미분석에서 발생하는 규산염 유리 시료의 Na 이동 효과 보정)

  • Hwayoung, Kim;Changkun, Park
    • Korean Journal of Mineralogy and Petrology
    • /
    • v.35 no.4
    • /
    • pp.457-467
    • /
    • 2022
  • Electron bombardment of silicate glass during electron probe microanalysis (EPMA) causes outward migration of Na from the excitation volume and a subsequent decrease in the measured Na X-ray count rates. To acquire precise Na2O contents of silicate glasses, one should use an analytical technique that avoids or minimizes the Na migration effect, or correct for the decrease in the measured Na X-ray counts. In this study, we analyzed 8 silicate glass standard samples using the automated Time Dependent Intensity (TDI) correction of the Probe for EPMA software, which calculates the zero-time intercept by extrapolating X-ray count changes over the analysis time. We evaluated the accuracy of the TDI correction for Na measurements of silicate glasses with EPMA at 15 kV acceleration voltage and 20 nA probe current, an analytical condition commonly used for geological samples. The results show that Na loss can be avoided with a large, 20 ㎛ beam (<0.1 nA/㎛2), so silicate glasses can be analyzed without TDI correction. When the beam size is smaller than 10 ㎛, Na loss results in large relative errors of up to -55% in Na2O values without correction. By applying the TDI correction, Na2O values close to the reference values can be acquired, with relative errors of about ±10%. Using a weighted linear fit reduces the relative errors to ±6%. Thus, quantitative analysis of silicate glasses with EPMA requires TDI correction for alkali elements such as Na and K.
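
The zero-time-intercept idea can be sketched as follows. This is not the Probe for EPMA algorithm: the count series, the linear fit form, and the weighting scheme (weighting earlier, less-affected intervals more strongly) are all illustrative assumptions used only to show how a weighted fit extrapolates the Na signal back to t = 0.

```python
# Sketch: weighted linear fit of Na count rate versus elapsed beam time,
# extrapolated back to t = 0 before beam-induced Na migration.
import numpy as np

time_s = np.array([2, 6, 10, 14, 18, 22], dtype=float)          # elapsed time of each sub-count [s]
counts = np.array([980, 930, 885, 850, 812, 780], dtype=float)  # Na counts per second (illustrative)

# weight earlier intervals more heavily, since they are least affected by migration
weights = 1.0 / time_s

slope, intercept = np.polyfit(time_s, counts, deg=1, w=weights)
print(f"zero-time intercept ~ {intercept:.1f} cps (measured mean {counts.mean():.1f} cps)")
```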

Stress Constraint Topology Optimization using Backpropagation Method in Design Sensitivity Analysis (설계민감도 해석에서 역전파 방법을 사용한 응력제한조건 위상최적설계)

  • Min-Geun, Kim;Seok-Chan, Kim;Jaeseung, Kim;Jai-Kyung, Lee;Geun-Ho, Lee
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.35 no.6
    • /
    • pp.367-374
    • /
    • 2022
  • This paper presents the use of an automatic differentiation method based on backpropagation to obtain the design sensitivity, and its application to topology optimization considering stress constraints. Solving topology optimization problems with stress constraints is difficult owing to singularities, the local nature of stress constraints, and nonlinearity with respect to the design variables. To address the singularity problem, the stress relaxation technique is used, and a p-norm of the stress constraints is applied as a global stress measure instead of the local stresses. To overcome the nonlinearity with respect to the design variables in stress-constrained problems, it is important to obtain the exact design sensitivity analytically. In conventional topology optimization, the design sensitivity is obtained efficiently and accurately using the adjoint variable method; however, deriving the design sensitivity analytically and additionally solving the adjoint equation is difficult. To address this, the design sensitivity is obtained using the backpropagation technique used to determine optimal weights and biases in artificial neural networks, and it is applied to topology optimization with stress constraints. The backpropagation technique is used for automatic differentiation and simplifies the calculation of the design sensitivity of the objective or constraint functions without complicated analytical derivations. In addition, the backpropagation process is computationally more efficient than solving adjoint equations in sensitivity calculations.
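
A much simplified sketch of the idea is given below. It is not the paper's formulation: the "stress" model sigma = A @ x is a linear stand-in rather than a finite element analysis, and the reverse pass is written by hand to mimic how backpropagation propagates the sensitivity of a p-norm stress measure back to the design variables. A finite-difference check is included to confirm the backpropagated gradient.

```python
# Reverse-mode (backpropagation-style) sensitivity of a p-norm stress measure.
import numpy as np

def pnorm_sensitivity(x, A, p=8.0):
    # forward pass: design variables -> element "stresses" -> p-norm aggregate
    sigma = A @ x                                     # stand-in stress per element
    pn = (np.sum(sigma**p))**(1.0 / p)

    # backward pass: chain rule applied in reverse, as in backpropagation
    d_pn_d_sigma = pn**(1.0 - p) * sigma**(p - 1.0)   # dPN/dsigma_i
    d_pn_d_x = A.T @ d_pn_d_sigma                     # chained through dsigma/dx = A
    return pn, d_pn_d_x

if __name__ == "__main__":
    A = np.array([[2.0, 0.5], [1.0, 1.5], [0.3, 2.2]])
    x = np.array([1.0, 0.8])
    pn, grad = pnorm_sensitivity(x, A)

    # finite-difference check of the backpropagated design sensitivity
    eps = 1e-6
    fd = [(pnorm_sensitivity(x + eps * np.eye(2)[i], A)[0] - pn) / eps for i in range(2)]
    print(grad, fd)
```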