• Title/Summary/Keyword: Network Performance Evaluation


A Study on the Determinants of Patent Citation Relationships among Companies : MR-QAP Analysis (기업 간 특허인용 관계 결정요인에 관한 연구 : MR-QAP분석)

  • Park, Jun Hyung;Kwahk, Kee-Young;Han, Heejun;Kim, Yunjeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.21-37
    • /
    • 2013
  • Recently, with the advent of the knowledge-based society, interest in intellectual property has grown. In particular, the ICT companies leading the high-tech industry are striving for systematic management of intellectual property. Patent information represents the intellectual capital of a company, and quantitative analysis of the continuously accumulated patent information has now become possible. Patent information also enables analysis at various levels, from the individual patent to the enterprise, industry, and country levels. Through patent information, we can identify the status of a technology and analyze its impact on performance, and network analysis lets us trace the flow of knowledge, so we can not only identify changes in technology but also predict the direction of future research. Two important kinds of analysis utilize patent citation information: citation indicator analysis, which uses citation frequencies, and network analysis based on citation relationships. Building on the latter, this study analyzes whether company size affects patent citation relationships. 74 S&P 500 companies that provide IT and communication services were selected for this study. To determine the patent citation relationships between the companies, patent citations in 2009 and 2010 were collected, and sociomatrices representing the citation relationships between companies were created. In addition, each company's total assets were collected as an index of company size. The distance between two companies is defined as the absolute value of the difference between their total assets, and the signed difference is taken to describe the hierarchy between them.
QAP correlation analysis and MR-QAP analysis were carried out using the distance and hierarchy matrices and the sociomatrices of the 2009 and 2010 patent citations. The QAP correlation results show that the 2009 and 2010 company patent citation networks are the most highly correlated pair. In addition, a positive correlation is found between the patent citation relationships and the distance between companies, because citation relationships increase when there is a difference in size between companies. A negative correlation is found between the patent citation relationships and the hierarchy between companies, indicating that patents of higher-tier companies are relatively highly regarded by, and exert influence toward, lower-tier companies. The MR-QAP analysis was carried out as follows. The sociomatrix generated from the 2010 patent citation relationships was used as the dependent variable, and the 2009 citation network and the distance and hierarchy networks between the companies were used as the independent variables, in order to find the main factors influencing the patent citation relationships between companies in 2010. The results show that all independent variables positively influence the 2010 citation relationships. In particular, the 2009 citation relationships have the most significant impact on the 2010 relationships, which means that patent citation relationships persist over time. Taken together, the QAP correlation and MR-QAP results show that patent citation relationships between companies are affected by company size, but the most significant factor is the citation relationships formed in the past. Maintaining patent citation relationships between companies matters because such relationships can be used strategically to share intellectual property, and they serve as an important signal of which partner companies to cooperate with.
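The QAP approach described above tests the correlation between two sociomatrices against a permutation null: rows and columns of one matrix are permuted together so the dyadic dependence structure is preserved. A minimal sketch (not the authors' code; matrix contents are illustrative):

```python
import numpy as np

def qap_correlation(x, y, n_perm=2000, seed=0):
    """QAP correlation between two square sociomatrices.

    Correlates the off-diagonal cells of x and y, then builds a null
    distribution by jointly permuting the rows and columns of y.
    """
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    mask = ~np.eye(n, dtype=bool)          # ignore the self-citation diagonal
    obs = np.corrcoef(x[mask], y[mask])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        y_p = y[np.ix_(p, p)]              # permute rows and columns together
        if abs(np.corrcoef(x[mask], y_p[mask])[0, 1]) >= abs(obs):
            count += 1
    return obs, (count + 1) / (n_perm + 1)
```

Called on, say, the 2009 and 2010 citation sociomatrices, it returns the observed correlation and a permutation p-value; MR-QAP extends the same permutation logic to a multiple regression with several predictor matrices.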

A fundamental study on the development of feasibility assessment system for utility tunnel by urban patterns (도심지 유형별 공동구 설치 타당성 평가시스템 개발에 관한 기초 연구)

  • Lee, Seong-Won;Sim, Young-Jong;Na, Gwi-Tae
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.19 no.1
    • /
    • pp.11-27
    • /
    • 2017
  • The road network of major domestic urban areas such as Seoul was rapidly developed and regionally expanded. In addition, many kinds of life-lines, such as electric cables, telephone cables, water and sewerage lines, heating and cooling conduits, and gas lines, were needed for urban residents to live comfortably. As a result, most life-lines were buried underground and managed individually. A utility tunnel is defined in the National Land Planning Act as an urban planning facility for installing life-lines in common. The expected benefits of urban utility tunnels include reduced repeated excavation of roads and improvements in the urban landscape, road pavement durability, driving performance, and traffic flow. They can also ensure disaster safety against earthquakes and sinkholes, support smart-grid and electric vehicle supply, and enable rapid response to future changes in the living environment, so the need for urban utility tunnels has recently increased. However, all utility tunnels constructed domestically are cut-and-cover tunnels built as part of new-town development. Since established urban areas cannot accommodate all buried life-lines at once, it is necessary to study a feasibility assessment system for utility tunnels by urban pattern, together with capacity optimization for urban utility tunnels. In this study, we move beyond new-town utility tunnels and suggest a quantitative assessment model based on evaluation indices for urban areas. We also develop a program that implements the quantitative evaluation system by subdividing the feasibility assessment by urban pattern. Ultimately, this study can contribute to the wider adoption of urban utility tunnels.
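A quantitative assessment model of the kind described above typically combines normalized evaluation indices into a single score. A minimal weighted-sum sketch, with hypothetical index names and weights (the study's actual indicators and weighting scheme are not given here):

```python
# Hypothetical evaluation indices and weights, for illustration only; the
# real assessment system defines its own indicators per urban pattern.
WEIGHTS = {
    "buried_lifeline_density": 0.30,
    "road_excavation_frequency": 0.25,
    "traffic_volume": 0.20,
    "disaster_risk": 0.15,
    "landscape_importance": 0.10,
}

def feasibility_score(scores: dict) -> float:
    """Weighted-sum feasibility score on a 0-100 scale.

    `scores` maps each evaluation index to a normalized value in [0, 1].
    """
    return 100.0 * sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
```

A site scoring high on lifeline density and excavation frequency would then rank as a stronger utility-tunnel candidate than one scoring high only on landscape importance, since the weights encode that priority.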

1V 1.6-GS/s 6-bit Flash ADC with Clock Calibration Circuit (클록 보정회로를 가진 1V 1.6-GS/s 6-bit Flash ADC)

  • Kim, Sang-Hun;Hong, Sang-Geun;Lee, Han-Yeol;Park, Won-Ki;Lee, Wang-Yong;Lee, Sung-Chul;Jang, Young-Chan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.9
    • /
    • pp.1847-1855
    • /
    • 2012
  • A 1V 1.6-GS/s 6-bit flash analog-to-digital converter (ADC) with a clock calibration circuit is proposed. A single track/hold circuit with a bootstrapped analog switch is used as the input stage for high-speed operation at a 1V supply voltage. Two preamplifier arrays and two-stage comparators are implemented to reduce analog noise and support high-speed operation. The clock calibration circuit improves the dynamic performance of the entire flash ADC by optimizing the duty cycle and phase of the clock; it adjusts the reset and evaluation times of the comparator clock by controlling the clock's duty cycle. The proposed 1.6-GS/s 6-bit flash ADC is fabricated in a 1V 90nm 1-poly 9-metal CMOS process. The measured SNDR is 32.8 dB for an 800 MHz analog input signal, and the measured DNL and INL are +0.38/-0.37 LSB and +0.64/-0.64 LSB, respectively. The chip area and power consumption are 800 × 500 μm² and 193.02 mW, respectively.
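DNL and INL figures like those reported above are conventionally derived from the measured code transition levels of the converter. A sketch of the standard calculation (illustrative, not tied to this chip's measurement setup):

```python
import numpy as np

def dnl_inl(transitions, vref=1.0, bits=6):
    """DNL and INL (in LSB) from measured code transition levels.

    `transitions` holds the 2**bits - 1 input voltages at which the output
    code increments; an ideal ADC has uniformly spaced transitions.
    """
    lsb = vref / 2**bits
    widths = np.diff(transitions)          # actual width of each code bin
    dnl = widths / lsb - 1.0               # deviation from the ideal 1-LSB width
    inl = np.cumsum(dnl)                   # INL as the running sum of DNL
    return dnl, inl
```

For a perfectly linear 6-bit converter the 63 transitions sit at integer multiples of 1 LSB, so both arrays come out zero; real measurements yield the signed min/max values quoted in datasheet style above.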

Comparative assessment of frost event prediction models using logistic regression, random forest, and LSTM networks (로지스틱 회귀, 랜덤포레스트, LSTM 기법을 활용한 서리예측모형 평가)

  • Chun, Jong Ahn;Lee, Hyun-Ju;Im, Seul-Hee;Kim, Daeha;Baek, Sang-Soo
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.9
    • /
    • pp.667-680
    • /
    • 2021
  • We investigated changes in frost days and frost-free periods and comparatively assessed frost event prediction models developed using logistic regression (LR), random forest (RF), and long short-term memory (LSTM) networks. The meteorological variables for model development were collected from the Suwon, Cheongju, and Gwangju stations for the period 1973-2019, for spring (March-May) and fall (September-November). The developed models were then evaluated with precision, recall, and F1 score, along with graphical methods such as AUC and reliability diagrams. The results showed significant decreases (significance level of 0.01) in the frequency of frost days at all three stations in both spring and fall. Overall, the evaluation metrics showed that RF performed best, while LSTM performed worst. Although AUC values above 0.9 were found at the three stations, the reliability diagrams showed inconsistent reliability. Further study is suggested on improving the predictability of frost events and of the first and last frost days, as well as the reliability of the models. It would also be beneficial to replicate this study at more stations in other regions.
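The precision, recall, and F1 score used for the evaluation above follow directly from the confusion counts. A minimal sketch for binary frost/no-frost labels (not the study's evaluation code):

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall, and F1 score for binary frost (1) / no-frost (0) labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted frosts, how many occurred
    recall = tp / (tp + fn) if tp + fn else 0.0      # of actual frosts, how many were caught
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of the two
    return precision, recall, f1
```

Because frost days are rare relative to non-frost days, these class-sensitive metrics are more informative than plain accuracy, which motivates their use alongside AUC and reliability diagrams.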

Development of Image Classification Model for Urban Park User Activity Using Deep Learning of Social Media Photo Posts (소셜미디어 사진 게시물의 딥러닝을 활용한 도시공원 이용자 활동 이미지 분류모델 개발)

  • Lee, Ju-Kyung;Son, Yong-Hoon
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.50 no.6
    • /
    • pp.42-57
    • /
    • 2022
  • This study aims to create a basic model for classifying the activity photos that urban park users share on social media, using deep learning. For the social media data, photos related to urban parks were collected through a Naver search and used for the classification model. Based on the indicators of naturalness, potential attraction, and activity, which can be used to evaluate the characteristics of urban parks, 21 classification categories were created. Urban park photos shared on Naver were collected by category, and annotated datasets were created. A custom CNN model and a transfer learning model using a pre-trained CNN were designed on the collected photo datasets and subsequently analyzed. As a result, the Xception transfer learning model, which demonstrated the best performance, was selected as the urban park user activity image classification model and evaluated with several metrics. This study is meaningful in that it builds an AI-based index that can evaluate the characteristics of urban parks from user-shared photos on social media. The deep learning classification model mitigates the limitations of manual classification and can efficiently classify large numbers of urban park photos, so it is a useful method for the future monitoring and management of city parks.
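Model selection above rests on classification metrics over the 21 categories. A minimal sketch of top-1 accuracy computed from predicted class probabilities (illustrative only; the study's own evaluation pipeline is not shown here):

```python
import numpy as np

def top1_accuracy(probs, labels):
    """Top-1 accuracy for a multi-class classifier.

    `probs` is an (n_samples, n_classes) array of predicted class
    probabilities (e.g. 21 park-activity categories); `labels` holds the
    true class index of each sample.
    """
    return float((np.argmax(probs, axis=1) == np.asarray(labels)).mean())
```

The same predicted-probability matrix also supports per-category precision and recall, which matter here because user-shared photo categories are typically imbalanced.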

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing a great opportunity for the public through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big-data analysis. In this situation, this paper aims to develop a new business model for the ex-ante prediction of the likelihood of a credit guarantee insurance accident. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare performance among the predictive models: logistic regression, random forest, XGBoost, LightGBM, and DNN (deep neural network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and widely used in both research and practice to this day; it uses five key financial ratios to predict the probability of bankruptcy in the next two years. Ohlson (1980) introduced a logit model to complement some limitations of the previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans.
Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model, and Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and a logit model, and Kim and Kim (2001) utilized artificial neural networks for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with diverse models such as random forest or SVM. One major distinction of our research from previous work is that we focus on the predicted probability of default for each sample, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70% on the entire sample: LightGBM shows the highest accuracy, 71.1%, and the logit model the lowest, 69%. However, these figures are open to multiple interpretations. In the business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy after splitting the predicted probability of default into ten equal intervals. Examined by interval, the logit model has the highest accuracy, 100%, for the 0-10% interval of predicted default probability, but a relatively low accuracy, 61.5%, for the 90-100% interval.
On the other hand, random forest, XGBoost, LightGBM, and DNN give more desirable results, since they show higher accuracy for both the 0-10% and 90-100% intervals of predicted default probability but lower accuracy around 50%. As for the distribution of samples across intervals, both the LightGBM and XGBoost models place relatively many samples in the 0-10% and 90-100% intervals. Although the random forest model has an advantage in classification accuracy on a small number of cases, LightGBM or XGBoost may be more desirable, since they classify large numbers of cases into the two extreme intervals, even allowing for their relatively low classification accuracy. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, followed by random forest and LightGBM, while logistic regression performs worst. Each predictive model nonetheless has a comparative advantage under particular evaluation standards; for instance, the random forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble that combines multiple classification models through majority voting could maximize overall performance.
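The interval-wise comparison above splits samples into ten equal bins of predicted default probability and measures classification accuracy within each bin. A minimal sketch (a 0.5 decision threshold is assumed; the paper's exact binning code is not shown):

```python
import numpy as np

def accuracy_by_decile(y_true, p_hat):
    """Classification accuracy within ten equal predicted-probability bins.

    Returns (accuracy, n_samples) per interval [0, 0.1), ..., [0.9, 1.0].
    """
    y_true = np.asarray(y_true)
    p_hat = np.asarray(p_hat)
    y_pred = (p_hat >= 0.5).astype(int)              # usual 0.5 threshold
    bins = np.minimum((p_hat * 10).astype(int), 9)   # clamp p_hat == 1.0 into the top bin
    acc, n = [], []
    for b in range(10):
        mask = bins == b
        n.append(int(mask.sum()))
        acc.append(float((y_pred[mask] == y_true[mask]).mean())
                   if mask.any() else float("nan"))
    return acc, n
```

Reading accuracy jointly with the per-bin sample counts is what distinguishes the models here: a model can be accurate in the extreme bins yet place few cases there, which is the LightGBM/XGBoost versus random forest trade-off discussed above.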

Performance Evaluation of Wireless Sensor Networks in the Subway Station of Workroom (지하철 역사내 기능실에 대한 무선 센서 네트워크 성능 분석)

  • An, Tea-Ki;Shin, Jeong-Ryol;Kim, Gab-Young;Yang, Se-Hyun;Choi, Gab-Bong;Sim, Bo-Seog
    • Proceedings of the KSR Conference
    • /
    • 2011.05a
    • /
    • pp.1701-1708
    • /
    • 2011
  • Hundreds of thousands of passengers use the subway on a typical day, and concerns have been raised that fires or other accidents in the various underground workrooms of a station could endanger subway users. In fact, in 2010, smoke and toxic gas from a machine-room fire spread through ventilation openings and caused casualties, and cases in which train service was suspended are often reported. In response to such incidents in subway stations, an intelligent urban transit surveillance system is under study that integrates the latest IT technologies, wireless sensor networks and intelligent video surveillance, to comprehensively monitor fire and structural integrity. In this study, prior to applying the monitoring system to field stations, the authors carried out a performance analysis of a ZigBee-based wireless sensor network at Chungmuro station. The test results from a communications room and a ventilation room of the station are summarized and analyzed.
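Performance analyses of this kind usually report per-link statistics such as the packet delivery ratio. A minimal sketch with a hypothetical probe-packet setup (the paper's actual test metrics and logging format are not specified here):

```python
def link_stats(sent, received_seqs):
    """Packet delivery ratio and loss count for one wireless sensor link.

    `sent` is the number of sequence-numbered probe packets transmitted;
    `received_seqs` is the set of sequence numbers seen at the sink node.
    """
    delivered = len(received_seqs)
    pdr = delivered / sent if sent else 0.0   # fraction of probes that arrived
    return pdr, sent - delivered              # ratio and lost-packet count
```

Comparing such per-link figures between, say, a communications room and a ventilation room exposes how the room's geometry and equipment affect ZigBee link quality.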


Performance Evaluation of the Runoff Reduction with Permeable Pavements using the SWMM Model (SWMM 분석을 통한 투수성 포장의 유출 저감 특성 평가)

  • Lin, Wuguang;Ryu, SungWoo;Park, Dae Geun;Lee, Jaehoon;Cho, Yoon-Ho
    • International Journal of Highway Engineering
    • /
    • v.17 no.4
    • /
    • pp.11-18
    • /
    • 2015
  • PURPOSES: This study aims to evaluate the runoff reduction achieved with permeable pavements, using SWMM analysis. METHODS: Simulations were carried out with two different models, simple and complex, to evaluate the runoff reduction when an impermeable pavement is replaced with a permeable one. In the simple model, the target area was grouped into four areas by land-use characteristics, using the statistical database. In the complex model, the simulation was based on the sewer and road network configuration of Bogwang-Dong, Yongsan-Gu, Seoul, using the ArcGIS software. A scenario was created to investigate the hydraulic performance of the permeable pavement according to the return period, the runoff coefficient, and the area of permeable pavement that could be laid, within one hour after rainfall. RESULTS: The simple model showed that when an impervious pavement is replaced with a permeable pavement, the peak discharge is reduced from 16.7 m³/s to 10.4 m³/s, a reduction of approximately 37.6%. The peak discharge from the whole basin decreased from 52.9 m³/s to 47.2 m³/s, a reduction of approximately 11.0%, and the total runoff decreased from 43,261 m³ to 38,551 m³, approximately 10.9%. In the complex model, interpreted with ArcGIS and with less area available for permeable pavement, the total runoff and peak discharge increased as the return period and runoff coefficient increased. With a return period of 20 years and a runoff coefficient of 0.05 applied to all roads, the total outflow was reduced by 5,195.7 m³, a reduction of 11.7%; when the return period was increased to 30 and 100 years, the reduction decreased to 8.0% and 5.1%, respectively. With a runoff coefficient of 0.5 applied to all roads under a 20-year return period, the total outflow reduction was 10.8%; at return periods of 30 and 100 years, it decreased to 6.5% and 2.9%, respectively. However, unlike in the simple model, the peak discharge reductions in all the complex-model cases were less than 1%. CONCLUSIONS: As a technique for water circulation and runoff reduction, replacing impermeable pavement with permeable pavement yielded a large reduction for small-return-period rainfall events. The SWMM results show that changing to permeable pavement is an effective way to provide water circulation in various green infrastructure projects and for stormwater management in urban watersheds.
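The reductions quoted above are simple percent changes between the impermeable and permeable cases. For reference:

```python
def percent_reduction(before, after):
    """Percent reduction of a runoff quantity (peak discharge or total runoff)."""
    return 100.0 * (before - after) / before
```

For example, the total-runoff change in the simple model, 43,261 m³ to 38,551 m³, is `percent_reduction(43261, 38551)` ≈ 10.9%, matching the figure reported.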

S-XML Transformation Method for Efficient Distribution of Spatial Information on u-GIS Environment (u-GIS 환경에서 효율적인 공간 정보 유통을 위한 S-XML 변환 기법)

  • Lee, Dong-Wook;Baek, Sung-Ha;Kim, Gyoung-Bae;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society
    • /
    • v.11 no.1
    • /
    • pp.55-62
    • /
    • 2009
  • In the u-GIS environment, the spatial data needed are collected through sensor networks and provided as information that is processed in real time or stored. When information is requested over the Internet by web-based applications, it is transmitted in XML; in particular, when the requested information includes spatial data, GML, S-XML, or other document formats that can handle spatial data are used. In this processing, real-time stream data processed in a DSMS is transformed into S-XML documents, and web-based spatial information services receive the S-XML documents over the Internet. Because most spatial application services use an existing spatial DBMS as their storage system, data must be transformed between S-XML and the spatial DBMS. In this paper, we propose an S-XML transformation method that caches spatial data. The proposed method caches the spatial-data part of S-XML when transforming between S-XML and the relational spatial database, and it transforms cached data without additional transformation cost when a transformation involving the same region is requested. The performance evaluation shows that the proposed method reduces the cost of transformation between S-XML documents and web-based spatial information services in the u-GIS environment and increases query processing performance.
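The caching idea above can be sketched as a lookup keyed by region: a geometry fragment already transformed for a region is reused instead of being converted again. The function and key names below are hypothetical, not the paper's API:

```python
# Minimal sketch of region-keyed caching for the S-XML <-> spatial DBMS
# transformation; `convert` stands in for the expensive conversion step.
_cache = {}

def transform_geometry(region_id, sxml_geometry, convert):
    """Return the DBMS-ready geometry for `region_id`, converting at most once."""
    if region_id not in _cache:
        _cache[region_id] = convert(sxml_geometry)   # expensive S-XML transform
    return _cache[region_id]                         # cache hit: no extra cost
```

Repeated queries over the same region then pay the transformation cost once, which is the source of the query-processing speedup the paper reports.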


Design of Integrated Management System for Electronic Library Based on SaaS and Web Standard

  • Lee, Jong-Hoon;Min, Byung-Won;Oh, Yong-Sun
    • International Journal of Contents
    • /
    • v.11 no.1
    • /
    • pp.41-51
    • /
    • 2015
  • Management systems for electronic libraries have long been developed on the basis of client/server or ASP frameworks in the domestic market, so both service providers and users suffer high cost and effort in the management, maintenance, and repair of software as well as hardware. Recently, in addition, mobile devices like smartphones and tablet PCs are frequently used as terminals to access computers through the Internet or other networks, so sophisticated customized or personalized interfaces for n-screen services have become a more important issue. In this paper, we propose a new scheme for an integrated management system for electronic libraries based on SaaS and web standards. We design and implement the proposed scheme by applying the Electronic Cabinet Guidelines for Web Standards and the Universal Code System. Hosted application management and software-on-demand service models based on SaaS are applied to develop the management system. Moreover, a newly improved duplication check algorithm with a hierarchical evaluation process is presented, and a personalized interface based on web standards is applied to implement the system. The duplication check algorithms for journal, volume/number, and paper are presented hierarchically with their logic flows. The total framework of our development follows the standard features of the Electronic Cabinet Guidelines offered by the Korean government, so that we can achieve standards-compliant application software, improved overall software quality, and extended reusability. The scope of our development includes the core services of a library automation system, such as acquisition, listing, loan-and-return, and their related services. We focus on interoperation compatibility between the elementary sub-systems throughout the complex network and structural features.
Reanalyzing and standardizing each part of the system under the cloud-of-service concept, we construct an integrated environment for development, test, operation, and maintenance. Finally, performance analyses are carried out on server resource usability, memory usage, server response time, and so on. As a result of measurements performed five times at different test points and with different data, the average response time is about 62.9 seconds for 100 clients, i.e., about 0.629 seconds per client on average, which we expect makes it possible to operate the system at a practical, near-real-time level. Resource usability and memory occupation are also good or moderate compared to conventional systems. As overall verification, we present a simple proof of compliance with the Electronic Cabinet Guidelines and a record of TTA authentication tests covering SaaS maturity, performance, and application program features.
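The hierarchical duplication check described above compares records level by level, journal first, then volume/number, then paper, so a mismatch at an upper level short-circuits the lower-level comparisons. A minimal sketch with a hypothetical record layout (the system's actual data model is not given here):

```python
def is_duplicate(record, catalog):
    """Return True if `record` already exists in the hierarchical catalog.

    `catalog` is nested as {journal: {(vol, no): {paper_title, ...}}}, so a
    mismatch at an upper level skips the lower-level checks entirely.
    """
    volumes = catalog.get(record["journal"])
    if volumes is None:
        return False                        # new journal: cannot be a duplicate
    papers = volumes.get((record["vol"], record["no"]))
    if papers is None:
        return False                        # new volume/number for this journal
    return record["title"] in papers        # paper-level check
```

Nesting the catalog this way means most incoming records are rejected as non-duplicates after one or two dictionary lookups, rather than being compared against every stored paper.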