• Title/Summary/Keyword: Network Performance Evaluation


The Prediction of Durability Performance for Chloride Ingress in Fly Ash Concrete by Artificial Neural Network Algorithm (인공 신경망 알고리즘을 활용한 플라이애시 콘크리트의 염해 내구성능 예측)

  • Kwon, Seung-Jun;Yoon, Yong-Sik
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.26 no.5 / pp.127-134 / 2022
  • In this study, RCPTs (Rapid Chloride Penetration Tests) were performed on fly ash concrete with curing ages of 4 to 6 years. The concrete mixtures were prepared with three levels of water-to-binder (W/B) ratio (0.37, 0.42, and 0.47) and two levels of fly ash substitution ratio (0 and 30%), and the improvement in passed charge for chloride ion behavior was quantitatively analyzed. Additionally, the results were used to train univariate time series models based on the GRU (Gated Recurrent Unit) algorithm, and the model outputs were evaluated. The RCPT results showed that fly ash concrete exhibited reduced passed charge over time and improved resistance to chloride penetration compared with OPC concrete. At the final evaluation period (6 years), fly ash concrete achieved a 'Very low' grade at all W/B ratios, whereas OPC concrete showed a 'Moderate' grade at the highest W/B ratio (0.47). The GRU algorithm adopted in this study can analyze time series data and offers advantages such as computational efficiency. A deep learning model with four hidden layers was designed and provided reasonable predictions of passed charge. The model is limited to a single univariate time series characteristic, but additional studies are under way to incorporate other concrete characteristics such as strength and diffusion coefficient.
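The GRU gating that this abstract relies on can be sketched as a minimal NumPy cell. This is a generic illustration of the algorithm, not the authors' four-hidden-layer model; the hidden size and the passed-charge series are hypothetical values for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        shape = (hidden_size, input_size + hidden_size)
        self.Wz = rng.normal(0.0, 0.1, shape)  # update gate weights
        self.Wr = rng.normal(0.0, 0.1, shape)  # reset gate weights
        self.Wh = rng.normal(0.0, 0.1, shape)  # candidate state weights
        self.hidden_size = hidden_size

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)      # how much of the state to overwrite
        r = sigmoid(self.Wr @ xh)      # how much history to expose
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1.0 - z) * h + z * h_cand

    def encode(self, series):
        h = np.zeros(self.hidden_size)
        for x_t in series:
            h = self.step(np.atleast_1d(float(x_t)), h)
        return h

# Hypothetical RCPT passed-charge series (kilocoulombs), one value per period
series = [4.2, 3.1, 2.4, 1.8]
cell = GRUCell(input_size=1, hidden_size=8)
state = cell.encode(series)
print(state.shape)  # (8,)
```

The gating keeps the hidden state a convex combination of past state and candidate, which is what gives the GRU its efficiency advantage over heavier recurrent units for short monitoring series like these.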

Detecting Vehicles That Are Illegally Driving on Road Shoulders Using Faster R-CNN (Faster R-CNN을 이용한 갓길 차로 위반 차량 검출)

  • Go, MyungJin;Park, Minju;Yeo, Jiho
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.1 / pp.105-122 / 2022
  • According to statistics on fatal crashes on expressways over the last five years, the number of deaths on road shoulders has been about three times as high as elsewhere on the expressways. This suggests that crashes on road shoulders tend to be fatal, and that cracking down on vehicles intruding onto the shoulders is important for preventing traffic crashes. Therefore, this study proposed a method to detect vehicles violating the shoulder lane using Faster R-CNN. Vehicles were detected with Faster R-CNN, and an additional reading module was configured to determine whether a shoulder violation occurred. For experiments and evaluation, GTAV, a simulation game that can reproduce situations similar to the real world, was used. 1,800 training images and 800 evaluation images were processed and generated, and performance under changing threshold values was measured for ZFNet and VGG16. As a result, the detection rate of ZFNet was 99.2% at a threshold of 0.8 and that of VGG16 was 93.9% at a threshold of 0.7, and the average detection speeds were 0.0468 seconds for ZFNet and 0.16 seconds for VGG16, so the detection rate of ZFNet was about 7% higher and its speed about 3.4 times faster. These results show that even a relatively simple network can detect vehicles violating the shoulder lane at high speed without pre-processing the input images, suggesting that this algorithm can be used to detect violations of designated lanes if sufficient training datasets based on actual video data are obtained.
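The reading module's decision rule is not specified in the abstract; a simplified sketch, assuming a fixed image-space shoulder region and a confidence threshold, might look like the following (the box coordinates and `shoulder_x_min` boundary are illustrative assumptions, not the paper's configuration):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple    # (x1, y1, x2, y2) in image pixels
    score: float  # detector confidence from Faster R-CNN

def is_shoulder_violation(det, shoulder_x_min, threshold):
    """Flag a detection as a shoulder violation if its confidence passes
    the threshold and the box centre lies inside the shoulder region."""
    if det.score < threshold:
        return False
    cx = (det.box[0] + det.box[2]) / 2.0
    return cx >= shoulder_x_min

dets = [Detection((700, 300, 820, 380), 0.91),   # on the shoulder
        Detection((300, 310, 420, 390), 0.88)]   # in a normal lane
flags = [is_shoulder_violation(d, shoulder_x_min=650, threshold=0.8)
         for d in dets]
print(flags)  # [True, False]
```

Raising the threshold trades recall for precision, which is why the two backbones in the study were evaluated at different threshold settings.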

Review on Rock-Mechanical Models and Numerical Analyses for the Evaluation on Mechanical Stability of Rockmass as a Natural Barrier (천연방벽 장기 안정성 평가를 위한 암반역학적 모델 고찰 및 수치해석 검토)

  • Myung Kyu Song;Tae Young Ko;Sean S. W. Lee;Kunchai Lee;Byungchan Kim;Jaehoon Jung;Yongjin Shin
    • Tunnel and Underground Space / v.33 no.6 / pp.445-471 / 2023
  • Long-term safety over millennia is the top priority in the construction of disposal sites. However, ensuring the mechanical stability of deep geological repositories for spent fuel (radioactive waste) during construction and operation is also crucial for safe operation of the repository. Restrictions on tunnel support and lining materials such as shotcrete, concrete, and grouting, which might compromise the sealing performance of the backfill and buffer materials essential to the long-term safety of disposal sites, present a highly challenging task for rock engineers and tunnelling experts. In this study, as part of an extensive exploration to aid the proper selection of disposal sites, the construction of a deep geological repository at a depth of 500 meters in an as-yet-unknown rock mass was considered. Through a review of 2D and 3D numerical analyses, the study explored the range of rock properties that ensure stability. Preliminary findings identified the range of rock properties that secure the stability of the central and disposal tunnels, while the stability of the vertical tunnel network was confirmed through 3D analysis, outlining the fundamental rock conditions necessary for the construction of disposal sites.
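As a rough point of reference for the 500 m depth considered above, the in-situ vertical stress can be estimated from overburden weight. This is a generic first estimate, not the paper's numerical analysis; the 27 kN/m³ unit weight is an assumed typical value for crystalline rock.

```python
def vertical_stress_mpa(depth_m, unit_weight_kn_m3=27.0):
    """In-situ vertical stress sigma_v = gamma * z, returned in MPa.
    unit_weight_kn_m3 is an assumed typical value for crystalline rock."""
    return unit_weight_kn_m3 * depth_m / 1000.0

print(vertical_stress_mpa(500))  # 13.5
```

A figure of this order is what the 2D/3D stability analyses must balance against the strength properties of the candidate rock mass.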

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, from 2G to 5G, focusing mainly on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our lives and industries as a whole. To deliver these services, reduced latency and high reliability, on top of high data rates, are critical for real-time operation. Thus, 5G targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/km². In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, low delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves enable high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability for delay-sensitive services because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving; in these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since conventional centralized SDN structures struggle to meet the desired delay level, studies on the optimal size of SDNs for information processing are needed. SDNs therefore need to be partitioned at a certain scale to construct a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized structure, even under worst-case conditions. In such SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the SDN data processing time are highly correlated with the overall delay. Of these, the RTD is not a significant factor because link speeds are sufficient and it stays below 1 ms, but the information change cycle and the SDN data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in which delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50 ~ 250 m and vehicle speeds of 30 ~ 200 km/h, in order to examine the network architecture that minimizes the delay.
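One driver of the delay analysis above is how quickly a vehicle crosses a small cell. A back-of-the-envelope dwell-time calculation over the stated ranges (50 ~ 250 m radius, 30 ~ 200 km/h) shows how little time the network has to refresh information; this is a generic calculation, not the paper's simulation model.

```python
def cell_dwell_time_s(cell_radius_m, speed_kmh):
    """Time a vehicle spends crossing a small cell, approximated as
    diameter / speed (straight path through the cell centre)."""
    speed_ms = speed_kmh / 3.6
    return 2.0 * cell_radius_m / speed_ms

# Sweep the extremes of the ranges given in the abstract
for r in (50, 250):
    for v in (30, 200):
        t = cell_dwell_time_s(r, v)
        print(f"radius {r} m, {v} km/h -> dwell {t:.2f} s")
```

At 200 km/h through a 50 m cell the dwell time drops below two seconds, which is why the information change cycle and SDN processing time, rather than the sub-millisecond RTD, dominate the delay budget.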

The Effective Approach for Non-Point Source Management (효과적인 비점오염원관리를 위한 접근 방향)

  • Park, Jae Hong;Ryu, Jichul;Shin, Dong Seok;Lee, Jae Kwan
    • Journal of Wetlands Research / v.21 no.2 / pp.140-146 / 2019
  • In order to manage non-point sources, the paradigm of the system should be changed so that non-point source management is systematized from the beginning of land use and development. The method of national subsidy support and the operation plan for non-point source management areas need to change. To increase the effectiveness of non-point source reduction projects, a minimum support ratio should be provided, with additional support granted according to the performance of the local government. A new system should be established to evaluate the performance of non-point source reduction projects and to monitor their operational effectiveness. Related rules should be established that lead local governments to responsible administration, so that they faithfully carry out non-point source reduction projects, achieve the planned results, and sustain maintenance. Alternative solutions are needed for problems such as the use of 100 μm filters in automatic sampling and analysis, timely water sampling and analysis during rainfall, and effective operation and management of the non-point source monitoring network. As alternatives, improving the performance of sampling and analysis equipment and operating base stations should be considered. In addition, countermeasures are needed if the pollutant reduction achieved by non-point source reduction facilities funded by national subsidies is to be counted as development load under the TMDLs. As an alternative, incentive-type support covering part of the maintenance cost of non-point source reduction facilities, depending on the amount of pollutants reduced, can be considered.

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users with multimodal data have been actively studied. The research area is expanding from recognizing simple body movements of an individual user to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less privacy-sensitive and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. The accompanying status is defined as a subset of user interaction behavior, covering whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation is applied to synchronize the timestamps of data collected from different sensors.
Normalization is performed for each x, y, z axis value of the sensor data, and sequence data is generated with the sliding window method. The sequence data then becomes the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of three convolutional layers and has no pooling layer, in order to preserve the temporal information of the sequence data. Next, the LSTM recurrent network receives the feature maps, learns long-term dependencies from them, and extracts features. The LSTM network consists of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function is cross entropy, and the weights are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (Adam) optimization algorithm with a mini-batch size of 128. Dropout is applied to the inputs of the LSTM network to prevent overfitting. The initial learning rate is 0.001 and decays exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using the data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. Future research will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences, and on transfer learning methods that enable models trained on the training data to transfer to evaluation data following a different distribution. A model exhibiting robust recognition performance against changes in data not considered during training is thereby expected.
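The per-axis normalization and sliding-window sequence generation described in this preprocessing pipeline can be sketched as follows. The window length, stride, and random data are assumptions for illustration; the abstract does not state these values.

```python
import numpy as np

def normalize(data):
    """Per-channel z-score normalization (per x/y/z axis value).
    data: (T, C) array of C sensor channels over T time steps."""
    mu = data.mean(axis=0)
    sigma = data.std(axis=0) + 1e-8  # avoid division by zero
    return (data - mu) / sigma

def make_windows(data, window, stride):
    """Sliding-window sequence generation over time-synchronized
    sensor data; returns an (N, window, C) array of sequences."""
    T = data.shape[0]
    starts = range(0, T - window + 1, stride)
    return np.stack([data[s:s + window] for s in starts])

# 9 channels: accelerometer, magnetic field, gyroscope (x, y, z each);
# 1000 synthetic samples stand in for the synchronized sensor stream
raw = np.random.default_rng(0).normal(size=(1000, 9))
windows = make_windows(normalize(raw), window=128, stride=64)
print(windows.shape)  # (14, 128, 9)
```

Each `(window, C)` slice is what the three-convolutional-layer CNN would consume before the LSTM stage.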

A Study of Performance Analysis on Effective Multiple Buffering and Packetizing Method of Multimedia Data for User-Demand Oriented RTSP Based Transmissions Between the PoC Box and a Terminal (PoC Box 단말의 RTSP 운용을 위한 사용자 요구 중심의 효율적인 다중 수신 버퍼링 기법 및 패킷화 방법에 대한 성능 분석에 관한 연구)

  • Bang, Ji-Woong;Kim, Dae-Won
    • Journal of Korea Multimedia Society / v.14 no.1 / pp.54-75 / 2011
  • PoC (Push-to-talk over Cellular) is an integrated technology for group voice calls, video calls, and Internet-based multimedia services. If a PoC user cannot participate in a PoC session for reasons such as an emergency or a depleted battery, the user can use the PoC Box, which is similar in function to the MM Box in MMS (Multimedia Messaging Service). The RTSP (Real-Time Streaming Protocol) is recommended for transmission sessions between the PoC Box and a terminal. Because the existing VOD service uses a wired network, the packet size of RTSP-based VOD is large; the PoC service, however, operates in a wireless communication environment with the general characteristics expected of RTSP use. Packet loss in wired environments is relatively lower than in wireless environments, so buffering latency occurs in the PoC service due to play-out delay, i.e., asynchronous playback of audio and video content. These problems make it difficult for users to find the information they want when media content is played out. In this paper, the following techniques and methods are proposed and their performance and superiority verified through testing: a cross-over dual reception buffering technique, an advance-partition multi-reception buffering technique, and an on-demand multi-reception buffering technique, designed for effective pickup of information from media content transmitted over RTSP in a short time during media search and for reduced playback delay; and a same-priority packetization transmission method and a priority-based packetization transmission method for media data packetization.
In the functional evaluation simulations, the proposed multiple-reception buffering and packetization methods were superior to the existing single-reception buffering method by 6-9 points with respect to media retrieval inclination. Among them, the on-demand multi-reception buffering technique combined with the same-priority packetization transmission method responded most promptly to users' media search requests, outperforming the other combinations by 3-24 points. In addition, users could find the information they wanted much more quickly, since a large amount of information is received within a short, focused media retrieval period.
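The priority-based packetization transmission method can be illustrated with a simple priority queue. The chunk names and the audio-over-video priority assignment are hypothetical; the abstract does not specify the actual priority scheme.

```python
import heapq

def packetize(chunks, priorities):
    """Priority-based packetization: higher-priority media chunks are
    queued for transmission first; ties keep arrival order."""
    heap = [(-p, i, c) for i, (c, p) in enumerate(zip(chunks, priorities))]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, c = heapq.heappop(heap)
        order.append(c)
    return order

chunks = ["audio-1", "video-1", "audio-2", "video-2"]
prios  = [2, 1, 2, 1]  # audio prioritized over video (assumption)
print(packetize(chunks, prios))
# ['audio-1', 'audio-2', 'video-1', 'video-2']
```

With equal priorities the same routine degenerates to arrival order, which corresponds to the same-priority packetization method compared in the study.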

Olympic Advertisers Win Gold, Experience Stock Price Gains During and After the Games (오운선수작위엄고대언인영득금패(奥运选手作为广告代言人赢得金牌), 비새중화비새후적고표개격상양(比赛中和比赛后的股票价格上扬))

  • Tomovick, Chuck;Yelkur, Rama
    • Journal of Global Scholars of Marketing Science / v.20 no.1 / pp.80-88 / 2010
  • There has been considerable research examining the relationship between stockholders' equity and various marketing strategies, including studies linking stock price performance to advertising, customer service metrics, new product introductions, research and development, celebrity endorsers, brand perception, brand extensions, brand evaluation, company name changes, and sports sponsorships. Another facet of marketing investment that has received heightened scrutiny for its purported influence on stockholder equity is television advertising embedded within specific sporting events such as the Super Bowl. Research indicates that firms which advertise in Super Bowls experience stock price gains. Given this reported relationship between advertising investment and increased shareholder value, for both general and special events, it is surprising that relatively little research attention has been paid to the relationship between advertising in the Olympic Games and its subsequent impact on stockholder equity. While attention has been directed at examining the effectiveness of sponsoring the Olympic Games, much less focus has been placed on the financial soundness of advertising during the telecasts of these Games. Notable exceptions include Peters (2008), Pfanner (2008), Saini (2008), and Keller Fay Group (2009). This paper presents a study of Olympic advertisers who ran TV ads on NBC in the American telecasts of the 2000, 2004, and 2008 Summer Olympic Games. Five hypotheses were tested: H1: The stock prices of firms which advertised on American telecasts of the 2008, 2004, and 2000 Olympics (referred to as O-Stocks) will outperform the S&P 500 during this same period of time (i.e., the Monday before the Games through the Friday after the Games).
H2: O-Stocks will outperform the S&P 500 during the medium term, that is, for the period of the Monday before the Games through to the end of each Olympic calendar year (December 31st of 2000, 2004, and 2008 respectively). H3: O-Stocks will outperform the S&P 500 in the longer term, that is, for the period of the Monday before the Games through to the midpoint of the following years (June 30th of 2001, 2005, and 2009 respectively). H4: There will be no difference in the performance of these O-Stocks vs. the S&P 500 in the Non-Olympic time control periods (i.e. three months earlier for each of the Olympic years). H5: The annual revenue of firms which advertised on American telecasts of the 2008, 2004 and 2000 Olympics will be higher for those years than the revenue for those same firms in the years preceding those three Olympics respectively. In this study, we recorded stock prices of those companies that advertised during the Olympics for the last three Summer Olympic Games (i.e. Beijing in 2008, Athens in 2004, and Sydney in 2000). We identified these advertisers using Google searches as well as with the help of the television network (i.e., NBC) that hosted the Games. NBC held the American broadcast rights to all three Olympic Games studied. We used Internet sources to verify the parent companies of the brands that were advertised each year. Stock prices of these parent companies were found using Yahoo! Finance. Only companies that were publicly held and traded were used in the study. We identified changes in Olympic advertisers' stock prices over the four-week period that included the Monday before through the Friday after the Games. In total, there were 117 advertisers of the Games on telecasts which were broadcast in the U.S. for 2008, 2004, and 2000 Olympics. Figure 1 provides a breakdown of those advertisers, by industry sector. 
Results indicate the stock of the firms that advertised (O-Stocks) out-performed the S&P 500 during the period of interest and under-performed the S&P 500 during the earlier control periods. These same O-Stocks also outperformed the S&P 500 from the start of these Games through to the end of each Olympic year, and for six months beyond that. Price pressure linkage, signaling theory, high involvement viewers, and corporate activation strategies are believed to contribute to these positive results. Implications for advertisers and researchers are discussed, as are study limitations and future research directions.
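The comparison underlying H1-H4 reduces to an excess-return calculation over each event window. A minimal sketch follows, with entirely hypothetical prices standing in for an O-Stock and the S&P 500 over one four-week window.

```python
def pct_return(prices):
    """Simple percentage return from first to last price in a window."""
    return (prices[-1] - prices[0]) / prices[0] * 100.0

def excess_return(stock_prices, index_prices):
    """Stock return over a window minus the S&P 500 return over
    the same window (the Monday-before to Friday-after period)."""
    return pct_return(stock_prices) - pct_return(index_prices)

# Hypothetical prices for one event window (not data from the study)
o_stock = [100.0, 103.0, 107.0]
sp500   = [1000.0, 1005.0, 1012.0]
print(round(excess_return(o_stock, sp500), 2))  # 5.8
```

A positive value over the Games window but not over the non-Olympic control window is the pattern the hypotheses test.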

Knowledge Management Strategy of a Franchise Business : The Case of a Paris Baguette Bakery (프랜차이즈 기업의 지식경영 전략 : 파리바게뜨 사례를 중심으로)

  • Cho, Joon-Sang;Kim, Bo-Yong
    • Journal of Distribution Science / v.10 no.6 / pp.39-53 / 2012
  • It is widely known that knowledge management plays a facilitating role that contributes to upgrading organizational performance. Knowledge management systems (KMS), especially, support the knowledge management process including the sharing, creating, and using of knowledge within a company, and maximize the value of knowledge resources within an organization. Despite this widely held belief, there are few studies that describe how companies actually develop, share, and practice their knowledge. Companies in the domestic small franchise sector, which are in the early stages in terms of knowledge management, need to improve their KMS to manage their franchisees effectively. From this perspective, this study uses a qualitative approach to explore the actual process of knowledge management implementation. This article presents a case study of PB (Paris Baguette) company, which is the first to build a KMS in the franchise industry. The study was able to confirm the following facts through the analysis of target companies. First, the chief executive's support is a critical success factor and this support can increase the participation of organization members. Second, it is important to build a process and culture that actively creates and leverages information in knowledge management activities. The organizational learning culture should be one where the creation, learning, and sharing of new knowledge is developed continuously. Third, a horizontal network organization is needed in order to make relationships within the organization more close-knit. Fourth, in order to connect the diverse processes such as knowledge acquisition, storage, and utilization of knowledge management activities, information technology (IT) capabilities are essential. Indeed, IT can be a powerful tool for improving the quality of work and maximizing the spread and use of knowledge. 
However, during the construction of an intranet-based KMS, research is required to ensure that the most efficient system is implemented. Finally, proper evaluation and compensation are important success factors. In order to develop knowledge workers, an appropriate program of promotion and compensation should be established, and building members' confidence in the benefits of knowledge management should be an ongoing activity. The company developed its original KMS to achieve a flexible and proactive organization, and a new KMS to improve organizational and personal capabilities. The PB case shows that there are differences between participants' perceptions and actual performance in managing knowledge; that knowledge management is not a matter of formality but a paradigm that ensures the sharing of knowledge; and that IT boosts communication, creating a mutual relationship that enhances the flow of knowledge and information between people. Knowledge management for building organizational capabilities can be successful when its focus and ways of increasing its acceptance are considered. This study suggests guidelines on the major factors that corporate executives of domestic franchises should consider to improve knowledge management and its application to operating activities.


A preliminary assessment of high-spatial-resolution satellite rainfall estimation from SAR Sentinel-1 over the central region of South Korea (한반도 중부지역에서의 SAR Sentinel-1 위성강우량 추정에 관한 예비평가)

  • Nguyen, Hoang Hai;Jung, Woosung;Lee, Dalgeun;Shin, Daeyun
    • Journal of Korea Water Resources Association / v.55 no.6 / pp.393-404 / 2022
  • Reliable terrestrial rainfall observations from satellites at finer spatial resolution are essential for urban hydrology and microscale agricultural demands. Although various satellite rainfall products based on the traditional "top-down" approach are widely used, they are limited in spatial resolution. This study assesses the potential of a novel "bottom-up" approach to rainfall estimation, the parameterized SM2RAIN model, applied to C-band SAR Sentinel-1 satellite data (SM2RAIN-S1), to generate high-spatial-resolution terrestrial rainfall estimates (0.01° grid/6-day) over Central South Korea. Its performance was evaluated for both spatial and temporal variability using rainfall data from a conventional reanalysis product and a rain gauge network over a one-year period in two different sub-regions of Central South Korea: the mixed-forest-dominated middle sub-region and the cropland-dominated west coast sub-region. The evaluation indicates that the SM2RAIN-S1 product captures the general rainfall patterns of Central South Korea and holds potential for high-spatial-resolution rainfall measurement at the local scale across different land covers, while providing less biased rainfall estimates relative to rain gauge observations. Moreover, the SM2RAIN-S1 product performed better over mixed forests in terms of Pearson's correlation coefficient (R = 0.69), implying the suitability of the 6-day SM2RAIN-S1 data for capturing the temporal dynamics of soil moisture and rainfall in mixed forests. In terms of RMSE and bias, however, better performance was obtained over croplands than over mixed forests, indicating that the larger errors induced by high evapotranspiration losses (especially in mixed forests) need to be addressed in further improvement of SM2RAIN.
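The evaluation metrics used in this assessment (Pearson's R, RMSE, and bias) can be computed as follows; the 6-day rainfall totals here are hypothetical, not the study's data.

```python
import math

def evaluation_metrics(est, obs):
    """Pearson's R, RMSE, and mean bias between estimated (satellite)
    and observed (rain gauge) rainfall series of equal length."""
    n = len(est)
    me, mo = sum(est) / n, sum(obs) / n
    cov = sum((e - me) * (o - mo) for e, o in zip(est, obs))
    se = math.sqrt(sum((e - me) ** 2 for e in est))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    r = cov / (se * so)
    rmse = math.sqrt(sum((e - o) ** 2 for e, o in zip(est, obs)) / n)
    bias = me - mo
    return r, rmse, bias

# Hypothetical 6-day rainfall totals (mm): SM2RAIN-S1 vs. rain gauges
est = [12.0, 30.5, 5.1, 48.0, 22.3]
obs = [10.0, 33.0, 4.0, 51.0, 20.0]
r, rmse, bias = evaluation_metrics(est, obs)
print(round(r, 3))
```

The split in the study's findings (R favouring forests, RMSE and bias favouring croplands) is possible precisely because R measures co-variation while RMSE and bias measure magnitude agreement.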