• Title/Summary/Keyword: Network Evaluation


Review on Rock-Mechanical Models and Numerical Analyses for the Evaluation of the Mechanical Stability of Rock Mass as a Natural Barrier (천연방벽 장기 안정성 평가를 위한 암반역학적 모델 고찰 및 수치해석 검토)

  • Myung Kyu Song;Tae Young Ko;Sean S. W. Lee;Kunchai Lee;Byungchan Kim;Jaehoon Jung;Yongjin Shin
    • Tunnel and Underground Space / v.33 no.6 / pp.445-471 / 2023
  • Long-term safety over millennia is the top priority in the construction of disposal sites. However, ensuring the mechanical stability of deep geological repositories for spent nuclear fuel (radioactive waste) during construction and operation is also crucial for safe operation of the repository. Restrictions on tunnel support and lining materials such as shotcrete, concrete, and grouting, which might compromise the sealing performance of the backfill and buffer materials essential to the long-term safety of disposal sites, present a highly challenging task for rock engineers and tunnelling experts. In this study, as part of a broad investigation to aid the proper selection of disposal sites, the construction of a deep geological repository at a depth of 500 meters was examined under conditions where the site properties are not yet known. Through 2D and 3D numerical analyses, the study explored the range of rock properties that ensure stability. Preliminary findings identified the range of properties that secure the stability of the central and disposal tunnels, while the stability of the vertical tunnel network was confirmed through 3D analysis, outlining the fundamental rock conditions necessary for constructing disposal sites.

A Study on the Critical Factors Affecting Investment Decision on TIPS (민간주도형 기술창업지원 팁스(TIPS) 투자의사 결정요인에 관한 연구)

  • Goh, Byeong Ki;Park, Sol Ip;Kim, Da Hye;Sung, Chang Soo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.17 no.5 / pp.31-47 / 2022
  • TIPS, a representative public-private cooperation program for revitalizing the start-up ecosystem, is a government-supported policy that promotes successful commercialization through various forms of support for technology-based startups. The purpose of this study is to analyze the investment decision factors of the TIPS program and derive their priorities. To achieve this, investment decision factors were first derived through a literature review; Delphi surveys were then conducted on investors and experts participating in the evaluation of the TIPS program; and finally an AHP analysis of 20 VCs was conducted to empirically determine the priority of the factors in investment decisions. The analysis confirmed the importance of the critical factors in the order entrepreneur (team) > market > product/service > finance > network. The detailed factors ranked as follows: entrepreneur's reliability and authenticity > market growth and scalability > team members' expertise and capabilities > adequacy of current market size > new market creation. This study presents the capabilities required of technology-based startups preparing to participate in the TIPS program by deriving the factors that influence investment decisions from an investor's perspective and comparing their importance. It also provides basic data on the determinants of private-led investment decision-making for stakeholders such as venture capital firms, accelerators, and start-up support institutions.
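The AHP step described above can be sketched in code. This is a minimal illustration of deriving AHP priority weights from a pairwise comparison matrix using the row geometric-mean approximation; the matrix values and the three criteria implied by them are made-up examples, not data from the study.

```python
import numpy as np

def ahp_priorities(pairwise: np.ndarray) -> np.ndarray:
    """Approximate AHP priority weights via the row geometric-mean method."""
    n = pairwise.shape[0]
    gm = np.prod(pairwise, axis=1) ** (1.0 / n)  # geometric mean of each row
    return gm / gm.sum()                          # normalize to sum to 1

# Illustrative 3x3 pairwise matrix (e.g. entrepreneur vs. market vs. product);
# the comparison values are invented for demonstration only.
A = np.array([
    [1.0, 2.0, 3.0],
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])
w = ahp_priorities(A)
print(w)  # weights sum to 1; the first criterion ranks highest
```

In full AHP one would also check the consistency ratio of the matrix before trusting the weights.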

A Study on the Certification System for Offline Stores Selling Copyrighted Contents: Copyright OK Case (정품 콘텐츠 판매 오프라인 업체 인증제도 방안 연구: 저작권 OK 사례)

  • Gyoo Gun Lim;Jae Young Choi;Woong Hee Lee
    • Information Systems Review / v.19 no.4 / pp.27-42 / 2017
  • With rapid advances in network, graphics, and digital technology, the content industry is emerging as an important driver of new cultural and economic development. Developments in digital content technology have remarkably expanded the generation and distribution of content, creating new value and a large distribution market. However, the ease of distribution and duplication that characterizes digital technology has also increased the circulation of illegal content through copying, theft, and alteration, and the resulting damage is severe. A copyright protection system targeting online sites currently exists; by contrast, no such system has been established for offline stores that sell genuine content and compete with online companies. Demand for content from overseas tourists is increasing due to the Korean Wave, yet many offline content providers have lost competitiveness to online companies because of illegal content distribution. In this study, we analyzed cases and the status of similar copyright certification systems in Korea and overseas through prior research, and designed a system to certify offline genuine-content businesses. In addition to the case analysis, we drew on in-depth interviews with copyright stakeholders. We developed a certification framework by establishing the certification domain, certification direction, and incentives of the certification system for offline businesses selling genuine content. The selected certification directions are ethical, open, inward, store, and rigid (post-evaluation). This study aims to raise consumer awareness of genuine content use and to establish a transparent trading order in a healthy content market.

A Study on the Perception and Experience of Daejeon Public Library Users Using Text Mining: Focusing on SNS and Online News Articles (텍스트마이닝을 활용한 대전시 공공도서관 이용자의 인식과 경험 연구 - SNS와 온라인 뉴스 기사를 중심으로 -)

  • Jiwon Choi;Seung-Jin Kwak
    • Journal of the Korean Society for Library and Information Science / v.58 no.2 / pp.363-384 / 2024
  • This study examined users' experiences with public libraries in Daejeon using big data analysis, focusing on text mining techniques. To this end, first, the overall evaluation and perception of users of Daejeon public libraries were explored by collecting social media data. Second, through analysis of online news articles, the pending issues under social discussion were identified. The analysis first showed a high proportion of users with children. Next, LDA topic analysis yielded four categories: 'cultural event/program', 'data use', 'physical environment and facilities', and 'library service'. Finally, keywords concerning the construction of additional libraries and complex cultural spaces and the establishment of a library cooperation system were found to be central in the news article data. Based on this, the study proposes building libraries with regional balance in mind and creating a social parenting community network through agreements with childcare institutions. This will contribute to identifying policy and social trends for public libraries in Daejeon and to implementing data-based public library operations that reflect local community demands.

IoT-based Smart Tunnel Accident Alert System (사물 인터넷 기반의 스마트 터널 사고 경보 시스템)

  • Ki-Ung Min;Seong-Noh Lee;Yoon-Hwa Choi;Yeon-Taek Hong;Chul-Sun Lee;Yun-Seok Ko
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.4 / pp.753-762 / 2024
  • Tunnels have limited evacuation areas, and it is difficult for cars approaching from behind to recognize an accident situation ahead. Since an accident is very likely to lead to a serious secondary accident, an IoT-based smart tunnel accident warning system was studied to prepare for traffic accidents occurring in tunnels. The system is designed to judge an emergency and trigger an alert when the measured values from the flame, gas, and shock detection sensors in the tunnel exceed reference values. The accident information message is displayed on an LCD and transmitted to drivers inside and outside the tunnel through a Wi-Fi communication network. A performance test system was established and evaluation was performed for several accident scenarios. The tests confirmed that the accident alert system can accurately detect accidents based on the given reference values, perform alert procedures, and transmit alert messages to smartphones through Wi-Fi wireless communication, confirming its effectiveness.
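The detection rule described above, where readings exceeding reference values trigger the alert, can be sketched as follows; the sensor names, threshold values, and function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative reference values; the paper does not publish its thresholds.
THRESHOLDS = {"flame": 500, "gas": 400, "shock": 3.0}

def check_emergency(readings: dict) -> list:
    """Return the sensors whose readings exceed their reference values."""
    return [name for name, value in readings.items()
            if value > THRESHOLDS.get(name, float("inf"))]

def build_alert_message(triggered: list) -> str:
    """Compose the accident message shown on the LCD / pushed over Wi-Fi."""
    if not triggered:
        return "NORMAL"
    return "TUNNEL ACCIDENT: " + ", ".join(sorted(triggered))

# Example: flame and shock sensors exceed their thresholds, gas does not.
msg = build_alert_message(check_emergency({"flame": 620, "gas": 120, "shock": 4.2}))
print(msg)  # → TUNNEL ACCIDENT: flame, shock
```

In the real system this message would be pushed to LCD panels and smartphones over the Wi-Fi network rather than printed.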

A Study on the Competitive Factors of Global Logistics Hub Cities Using an Importance-Performance Analysis: Focusing on the Case of Incheon Metropolitan City (IPA분석을 통한 글로벌 물류 허브도시 경쟁요인에 관한 연구 : 인천광역시 사례를 중심으로)

  • Lee, Myeong-Hwa;Shin, Mi-Na;Kim, Un-Soo
    • Journal of Korea Port Economic Association / v.40 no.2 / pp.205-219 / 2024
  • This study assesses Incheon Metropolitan City's potential as a global logistics hub amid intensified competition since the 2000s. Utilizing Importance-Performance Analysis (IPA), it evaluates competitive factors for logistics hub cities and Incheon's current positioning. The research identifies world-class infrastructure development and global city connectivity as key competitiveness factors. While Incheon, with its international airport and port, currently functions as a logistics hub, areas for improvement emerge. Recommendations include developing specialized cargo infrastructure for cold-chain and e-commerce, expanding the global network through multimodal transportation, and addressing gaps in smart and eco-friendly logistics. These suggestions encompass professional training, information platform establishment, and sector-wide decarbonization initiatives. The study's significance lies in its IPA-driven evaluation of competitiveness factors and Incheon's status, providing actionable recommendations for strategic planning to enhance the city's position as a global logistics hub.
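The IPA method used in the study places each factor in one of four quadrants by comparing its importance and performance scores against the overall means. A minimal sketch with made-up scores (the function name and values are illustrative, not the study's data):

```python
def ipa_quadrant(importance: float, performance: float,
                 imp_mean: float, perf_mean: float) -> str:
    """Classify a factor into the classic IPA matrix quadrants."""
    if importance >= imp_mean and performance >= perf_mean:
        return "Keep up the good work"
    if importance >= imp_mean:           # important but underperforming
        return "Concentrate here"
    if performance >= perf_mean:         # performing well but unimportant
        return "Possible overkill"
    return "Low priority"

# Illustrative factor: above-average importance, below-average performance,
# so it lands in the improvement-priority quadrant.
print(ipa_quadrant(4.5, 3.0, imp_mean=4.0, perf_mean=3.5))  # → Concentrate here
```

Factors in the "Concentrate here" quadrant are the ones the study's recommendations target, such as specialized cargo infrastructure.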

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing the simple body movements of an individual user to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. Accompanying status was defined as a subset of user interaction behavior, covering whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence generation was introduced. Nearest interpolation was applied to synchronize the timestamps of data collected from different sensors.
Normalization was performed on each x, y, z axis value of the sensor data, and sequence data was generated with the sliding window method. The sequences became the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of 3 convolutional layers and had no pooling layer, to preserve the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps and learned long-term dependencies from them. The LSTM network consisted of two layers, each with 128 cells. Finally, the extracted features were classified by a softmax classifier. The loss function was cross entropy, and the weights were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained with the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. Dropout was applied to the inputs of the LSTM layers to prevent overfitting. The initial learning rate was 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using this data, the model classified accompanying and conversation with accuracies of 98.74% and 98.83%, respectively. Both the F1 score and accuracy of the model were higher than those of a majority vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences, and will further study transfer learning methods that allow models fitted to the training data to transfer to evaluation data following a different distribution.
This is expected to yield a model with robust recognition performance against data changes not considered during training.
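The preprocessing pipeline the abstract describes (nearest-neighbour time synchronization, per-axis normalization, sliding-window sequence generation) can be sketched as follows; the window size, stride, and function names are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def nearest_sync(t_ref: np.ndarray, t_src: np.ndarray, v_src: np.ndarray) -> np.ndarray:
    """Nearest-neighbour interpolation of one sensor's samples onto
    the reference timestamps of another sensor."""
    idx = np.abs(t_src[None, :] - t_ref[:, None]).argmin(axis=1)
    return v_src[idx]

def normalize(x: np.ndarray) -> np.ndarray:
    """Per-axis z-normalization of sensor data, shape (T, axes)."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def sliding_windows(x: np.ndarray, window: int, stride: int) -> np.ndarray:
    """Cut the stream into fixed-length sequences for the CNN input."""
    return np.stack([x[i:i + window] for i in range(0, len(x) - window + 1, stride)])

# Toy example: 100 samples of a 3-axis accelerometer; window/stride are assumed.
acc = np.random.randn(100, 3)
seqs = sliding_windows(normalize(acc), window=20, stride=10)
print(seqs.shape)  # → (9, 20, 3): 9 sequences of 20 timesteps x 3 axes
```

Each sequence of shape (window, axes) would then be fed to the 3-layer CNN, whose feature maps go to the 2-layer, 128-cell LSTM.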

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data is multidimensional time series data, which makes it difficult to consider the characteristics of multidimensional data and of time series data at once. When dealing with multidimensional data, correlations between variables should be considered; existing approaches such as probabilistic, linear, and distance-based methods degrade due to the curse of dimensionality. In addition, time series data is typically preprocessed with sliding windows and time series decomposition for autocorrelation analysis, but these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field; statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural networks. Statistically based methods are difficult to apply when the data is non-homogeneous and do not detect local outliers well. Regression-based methods fit a regression model based on parametric statistics and detect anomalies by comparing predicted and actual values; their performance degrades when the model is not solid or when the data contains noise or outliers, which restricts the training data that can be used. An autoencoder based on artificial neural networks is trained to reproduce its input as closely as possible. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy a probability distribution or linearity assumption,
and it can be trained in an unsupervised manner without labeled data. However, autoencoders remain limited in identifying local outliers in multidimensional data, and the characteristics of time series data greatly increase the data dimensionality. In this study, we propose CMAE (Conditional Multimodal Autoencoder), which improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to address the limitations in identifying local outliers in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and image; the different modalities share the autoencoder's bottleneck, allowing correlations between them to be learned. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of time series data effectively without increasing the data dimension. Conditional inputs are usually categorical variables, but in this study time was used as the condition to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance of the autoencoders for 41 variables was examined for the proposed and comparison models. Reconstruction performance differed by variable: reconstruction worked well, with small loss values, for the Memory, Disk, and Network modalities in all three autoencoder models; the Process modality showed no significant difference across the three models; and the CPU modality showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE.
In particular, the recall of CMAE was 0.9828, confirming that it detects nearly all anomalies. The accuracy of the model also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has advantages beyond performance improvement: techniques such as time series decomposition and sliding windows require managing extra procedures, and the dimensional increase they cause can slow down inference. The proposed model is easy to apply to practical tasks in terms of inference speed and model management.
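The key idea of using time as the conditional input can be illustrated with a small sketch. The abstract does not specify the encoding, so a cyclical sine/cosine encoding of the hour is assumed here; the function names and sample values are hypothetical.

```python
import numpy as np

def time_condition(hour_of_day: float) -> np.ndarray:
    """Encode the timestamp cyclically so 23:59 and 00:00 stay close.
    This 2-value condition vector captures daily periodicity without
    widening the time series window."""
    angle = 2 * np.pi * hour_of_day / 24.0
    return np.array([np.sin(angle), np.cos(angle)])

def conditional_input(x: np.ndarray, hour_of_day: float) -> np.ndarray:
    """Append the time condition to a modality's feature vector,
    keeping the input dimension nearly unchanged."""
    return np.concatenate([x, time_condition(hour_of_day)])

cpu_metrics = np.array([0.71, 0.64, 0.80])      # illustrative CPU modality sample
x_cond = conditional_input(cpu_metrics, hour_of_day=14)
print(x_cond.shape)  # → (5,): 3 features + 2 condition values
```

Conditioning each modality's encoder on this vector lets the shared bottleneck learn daily periodicity directly, which is the alternative the paper proposes to dimension-inflating sliding windows and decomposition.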

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big data and artificial intelligence fields, opening up high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems. Nevertheless, there are still few realized business models based on big data analysis. Against this background, this paper aims to develop a new business model for ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of the predictive models: Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have sought better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Research on predicting financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and widely used in both research and practice to this day; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of the previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans.
Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and a logit model; and Kim and Kim (2001) applied artificial neural networks to the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress more precisely with diverse models such as Random Forest or SVM. One major distinction of our research from previous work is that we examine the predicted probability of default for each sample case, rather than only the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70% on the entire sample: LightGBM shows the highest accuracy of 71.1% and the logit model the lowest at 69%. However, these results are open to multiple interpretations. In the business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. We therefore also compare classification accuracy by splitting the predicted probability of default into ten equal intervals. Examining accuracy within each interval, the logit model has the highest accuracy, 100%, for the 0~10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90~100% interval.
On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results: they achieve higher accuracy for both the 0~10% and 90~100% intervals but lower accuracy around the 50% interval. As for the distribution of samples across intervals, both LightGBM and XGBoost place relatively many samples in the 0~10% and 90~100% intervals. Although Random Forest has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost may be more desirable because they classify many cases into the two extreme intervals of predicted default probability, even allowing for their relatively low classification accuracy there. Considering the importance of type 2 error and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. However, each predictive model has a comparative advantage under different evaluation standards; for instance, Random Forest shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble that combines multiple classification models with majority voting could maximize overall performance.
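The interval-wise evaluation described above, splitting predicted default probabilities into ten equal intervals and computing accuracy within each, can be sketched as follows; the toy data and the 0.5 decision threshold are assumptions for illustration, not the paper's data.

```python
import numpy as np

def accuracy_by_decile(y_true: np.ndarray, p_pred: np.ndarray) -> dict:
    """Split predicted default probabilities into ten equal intervals
    (0~10%, ..., 90~100%) and report classification accuracy within each,
    using a 0.5 threshold for the hard prediction."""
    bins = np.minimum((p_pred * 10).astype(int), 9)   # decile index 0..9
    y_hat = (p_pred >= 0.5).astype(int)
    out = {}
    for d in range(10):
        mask = bins == d
        if mask.any():
            out[d] = float((y_hat[mask] == y_true[mask]).mean())
    return out

# Toy data: a well-calibrated model, so the extreme deciles are easiest.
rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 1000)
y = (rng.uniform(0, 1, 1000) < p).astype(int)
acc = accuracy_by_decile(y, p)
print(acc[0], acc[9])  # high accuracy at both extremes, as in the paper
```

This is the view under which the paper prefers models that push many samples, correctly, into the 0~10% and 90~100% intervals.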

GPR Development for Landmine Detection (지뢰탐지를 위한 GPR 시스템의 개발)

  • Sato, Motoyuki;Fujiwara, Jun;Feng, Xuan;Zhou, Zheng-Shu;Kobayashi, Takao
    • Geophysics and Geophysical Exploration / v.8 no.4 / pp.270-279 / 2005
  • Under a research project supported by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), we have conducted the development of GPR systems for landmine detection. By 2005, we had finished developing two prototype GPR systems, ALIS (Advanced Landmine Imaging System) and SAR-GPR (Synthetic Aperture Radar-Ground Penetrating Radar). ALIS is a novel landmine detection sensor system combining a metal detector and GPR. It is hand-held equipment with a sensor position tracking system and can visualize the sensor output in real time. To achieve sensor tracking, ALIS needs only one CCD camera attached to the sensor handle. The CCD image is superimposed with the GPR and metal detector signals, making the detection and identification of buried targets easy and reliable. A field evaluation test of ALIS was conducted in December 2004 in Afghanistan, where we demonstrated that it can detect buried antipersonnel landmines and can discriminate metal fragments from landmines. SAR-GPR is a machine-mounted sensor system composed of a GPR and a metal detector. The GPR employs an array antenna for advanced signal processing and better subsurface imaging. Combined with a synthetic aperture radar algorithm, SAR-GPR can suppress clutter and image buried objects in strongly inhomogeneous material. SAR-GPR is a stepped-frequency radar system whose RF component is a newly developed compact vector network analyzer. The system measures 30 cm x 30 cm x 30 cm and is composed of six Vivaldi antennas and three vector network analyzers. It weighs 17 kg and can be mounted on a robotic arm on a small unmanned vehicle. The field test of this system was carried out in March 2005 in Japan.