• Title/Summary/Keyword: Smart Network

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection are becoming important for ICT infrastructure. System monitoring data is multidimensional time-series data, and it is difficult to account for its multidimensional and its time-series characteristics at the same time. Multidimensional data requires the correlations between variables to be considered, and existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. Time-series data, in turn, is usually preprocessed with sliding windows and time-series decomposition for autocorrelation analysis, but these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used early on, and machine learning and artificial neural networks are now being actively applied. Statistical methods are difficult to apply to non-homogeneous data and do not detect local outliers well. Regression-based methods fit a regression model using parametric statistics and detect anomalies by comparing predicted and actual values; their performance drops when the model is not solid or when the data contains noise or outliers, and they are restricted to training data free of noise and outliers. An autoencoder built on artificial neural networks is trained to reproduce its input as closely as possible. It has many advantages over existing probability and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy probability-distribution or linearity assumptions, and it can be trained without labeled data. However, it still has difficulty identifying local outliers in multidimensional data, and the characteristics of time-series data greatly increase the dimensionality of its input. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly-detection performance by considering local outliers and time-series characteristics. First, we apply a Multimodal Autoencoder (MAE) to mitigate the limits of local-outlier identification in multidimensional data. Multimodal networks are commonly used to learn inputs of different types, such as voice and image; the modalities share the autoencoder's bottleneck, which forces cross-modal correlation to be learned. Second, a Conditional Autoencoder (CAE) is used to learn the characteristics of time-series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but here time itself is used as the condition so that periodicity can be learned. The proposed CMAE model was verified against a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance for 41 variables was compared across the three models: reconstruction quality differs by variable, and the Memory, Disk, and Network modals reconstruct well (small loss values) in all three autoencoders. The Process modal showed no significant difference among the three models, while the CPU modal performed best in CMAE. For anomaly-detection performance, ROC curves were drawn and AUC, accuracy, precision, recall, and F1-score were compared; on every indicator the ordering was CMAE, MAE, then UAE. In particular, CMAE's recall of 0.9828 shows that it detects almost all anomalies; its accuracy improved to 87.12% and its F1-score reached 0.8883, which we consider suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond the performance gain: techniques such as time-series decomposition and sliding windows add procedures that must be managed, and the dimensional increase they cause slows inference. The proposed model avoids both, making it easy to apply to practical tasks in terms of inference speed and model management.
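To make the architecture concrete, below is a minimal PyTorch sketch of the CMAE idea described in this abstract: one encoder per modal (e.g. CPU, Memory, Disk, Network, Process), a shared bottleneck that forces cross-modal correlation to be learned, and a time condition concatenated at the bottleneck and decoders instead of widening the input with sliding windows. The modal split of the 41 variables, the layer sizes, and the sine/cosine time encoding are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a conditional multimodal autoencoder (CMAE); sizes and the
# time encoding are assumptions for illustration.
import math
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims, cond_dim=2, latent_dim=8):
        super().__init__()
        # One encoder per modal; all modals meet in one shared bottleneck.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 16), nn.ReLU()) for d in modal_dims])
        self.bottleneck = nn.Sequential(
            nn.Linear(16 * len(modal_dims) + cond_dim, latent_dim), nn.ReLU())
        # One decoder per modal; the time condition is injected again here.
        self.decoders = nn.ModuleList(
            [nn.Linear(latent_dim + cond_dim, d) for d in modal_dims])

    def forward(self, modals, cond):
        h = torch.cat([e(x) for e, x in zip(self.encoders, modals)] + [cond], 1)
        z = torch.cat([self.bottleneck(h), cond], 1)
        return [dec(z) for dec in self.decoders]

def time_condition(hour):
    # Time-of-day mapped onto the unit circle, so periodicity is learnable
    # without sliding windows or time-series decomposition.
    angle = 2 * math.pi * hour / 24.0
    return torch.stack([torch.sin(angle), torch.cos(angle)], dim=1)

model = CMAE(modal_dims=[10, 8, 8, 8, 7])   # 41 variables split across modals
xs = [torch.randn(4, d) for d in [10, 8, 8, 8, 7]]
cond = time_condition(torch.tensor([0.0, 6.0, 12.0, 18.0]))
recon = model(xs, cond)
# Anomaly score: total reconstruction error; large values flag anomalies.
score = sum(((x - r) ** 2).mean(dim=1) for x, r in zip(xs, recon))
```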

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous-City (U-City) is a smart, intelligent city that satisfies people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE/IoT) and includes many networked video cameras. Together with sensors, these cameras are one of the main input sources for U-City services, and they generate a huge amount of video information all the time, genuinely big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. It is also often necessary to analyze the accumulated video data to detect an event or find a figure in it, which requires a lot of computational power and usually takes a long time. Current research tries to reduce the processing time for big video data, and cloud computing can be a good solution to this problem. Among the many applicable cloud-computing methodologies, MapReduce is an attractive one: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, leading to exponential growth of the data produced by networked cameras, so video from high-quality cameras is genuinely big data. Video surveillance systems were hard to operate at this scale before cloud computing, but they are now spreading widely in U-Cities thanks to such methodologies. Video data is unstructured, so good research results on analyzing it with MapReduce are hard to find. This paper presents an analysis system for video surveillance: a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for video images, the storage client, and the streaming-in component. The video monitor consists of a video translator and a protocol manager, and the storage contains the MapReduce analyzer; all components were designed according to the functional requirements of a video surveillance system. The streaming-in component receives video data from the networked cameras and delivers it to the storage client, and it also manages network bottlenecks to smooth the data stream. The storage client receives the video data from the streaming-in component, stores it, and helps other components access the storage. The video monitor component streams video data smoothly and manages the protocols; its video translator sub-component lets users manage the resolution, codec, and frame rate of the video, and its protocol sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage: Hadoop stores the data in HDFS and provides a platform that can process it with the simple MapReduce programming model. We suggest our own methodology for analyzing video images with MapReduce: the workflow of the video analysis is presented and explained in detail. The performance evaluation was carried out experimentally, and the proposed system worked well; the results are presented with analysis. On our cluster system we used compressed 1920×1080 (FHD) video data with the H.264 codec and HDFS as the video storage, and we measured the processing time as a function of the number of frames per mapper. Tracing the optimal split size of the input data and the processing time as the number of nodes varied, we found that the system performance scales linearly.
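For readers unfamiliar with the frames-per-mapper idea measured above, the following Hadoop-streaming-style sketch shows the shape such a job can take: mappers emit one record per frame in which a detector fires, and a reducer aggregates events per video. The key layout and the detector stub are illustrative assumptions, not the paper's actual analysis workflow.

```python
# Hedged Hadoop-streaming sketch: run as "python job.py map" for the map phase
# and "python job.py reduce" for the reduce phase; Hadoop sorts between them.
import sys

def analyze_frame(path):
    # Stand-in for a real per-frame detector (event or figure detection).
    return path.endswith("_event.jpg")

def mapper(stream=sys.stdin):
    # Input: one frame per line, "<video_id>\t<frame_no>\t<frame_path>".
    for line in stream:
        video_id, frame_no, path = line.rstrip("\n").split("\t")
        if analyze_frame(path):
            print(f"{video_id}\t{frame_no}")

def reducer(stream=sys.stdin):
    # Mapper output arrives sorted by key, so counting per video is simple.
    counts = {}
    for line in stream:
        video_id, _ = line.rstrip("\n").split("\t")
        counts[video_id] = counts.get(video_id, 0) + 1
    for video_id, n in sorted(counts.items()):
        print(f"{video_id}\t{n} event frames")

if __name__ == "__main__":
    {"map": mapper, "reduce": reducer}[sys.argv[1]]()
```

The number of frame lines handed to each mapper corresponds to the frames-per-mapper parameter whose effect on processing time the paper measures.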

Packaging Technology for the Optical Fiber Bragg Grating Multiplexed Sensors (광섬유 브래그 격자 다중화 센서 패키징 기술에 관한 연구)

  • Lee, Sang Mae
    • Journal of the Microelectronics and Packaging Society / v.24 no.4 / pp.23-29 / 2017
  • The packaged optical fiber Bragg grating sensors, networked by multiplexing the gratings with WDM technology, were investigated for structural health monitoring of the marine trestle structure that transports ships. Each optical fiber Bragg grating sensor was packaged in a cylindrical aluminum tube; the packaged sensor was then inserted into a polymeric tube and the tube was filled with epoxy so that the sensor resists and endures sea water. The packaged sensor component was tested under 0.2 MPa of hydraulic pressure and found to be robust. The number and locations of the Bragg gratings attached to the trestle were chosen where finite element simulation showed high displacement. The strain of the part of the trestle subjected to the maximum load was analyzed to be ~1,000 με, so the shift in the Bragg wavelength of a sensor caused by the maximum load of the trestle was found to be ~1,200 pm. Based on the finite element analysis, Bragg wavelength spacings of 3~5 nm were chosen so that grating wavelengths do not overlap between sensors when the trestle is under load; thus 50 grating sensors, in modules of 5 sensors each, could be networked within the 150 nm optical window of the Bragg wavelength interrogator around the 1550 nm wavelength. Shifts in the Bragg wavelengths of the 5 packaged sensors attached to a mock trestle unit were well interrogated by the grating interrogator, which used an optical fiber loop mirror, and the maximum strain was measured to be about 235.650 με. The modeling results for the sensor packaging and networking were in good agreement with the experimental results.
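The two quoted figures are mutually consistent under the standard FBG strain relation; as a hedged back-of-the-envelope check, assuming a typical effective photo-elastic coefficient of p_e ≈ 0.22 (a value not stated in the abstract):

```latex
% Bragg wavelength shift under axial strain:
\Delta\lambda_B = \lambda_B\,(1 - p_e)\,\varepsilon
\approx 1550\,\mathrm{nm}\times(1-0.22)\times(1000\times10^{-6})
\approx 1.2\,\mathrm{nm} = 1200\,\mathrm{pm}
```

The 3~5 nm channel spacing then leaves each sensor room for this full-load shift without overlapping its WDM neighbors.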

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE: Computing Practices and Letters / v.13 no.5 / pp.293-299 / 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along within a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects, such as holes and curvatures, and other potential causes of gas explosions. There are two major data access patterns when an analyzer accesses the pipeline signal data. The first is a sequential pattern, where an analyst reads the sensor data only once, in a sequential fashion. The second is the repetitive pattern, where an analyzer repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well only for the sequential pattern, not for the more dominant repetitive pattern. This problem becomes very serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devise a fast in-memory cache manager, called T-Cache, by treating pipeline sensor data as multiple time-series and caching those time-series efficiently in T-Cache. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose a new concept, the signal cache line, as the caching unit: a set of time-series signal data for a fixed distance. We also provide the various data structures, including smart cursors, and the algorithms used in T-Cache. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system that does not use any caching, indicating that the caching overhead in T-Cache is negligible.
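A minimal sketch of the signal-cache-line idea, under stated assumptions: sensor time-series are cached client-side in chunks covering a fixed pipeline distance, so repeated reads over the same range hit memory instead of the server. The line size, the LRU eviction policy, and the fetch callback are illustrative; they are not T-Cache's actual design parameters.

```python
# Hedged sketch of client-side caching by "signal cache line": one cache line
# holds the time-series signals for a fixed pipeline distance.
from collections import OrderedDict

LINE_METERS = 100  # assumed fixed distance covered by one cache line

class SignalCache:
    def __init__(self, capacity_lines=256, fetch=None):
        self.lines = OrderedDict()   # line index -> signals (kept in LRU order)
        self.capacity = capacity_lines
        self.fetch = fetch           # server round-trip, paid only on a miss

    def read_range(self, start_m, end_m):
        out = []
        for idx in range(start_m // LINE_METERS, end_m // LINE_METERS + 1):
            if idx in self.lines:
                self.lines.move_to_end(idx)         # LRU: mark recently used
            else:
                self.lines[idx] = self.fetch(idx * LINE_METERS,
                                             (idx + 1) * LINE_METERS)
                if len(self.lines) > self.capacity:
                    self.lines.popitem(last=False)  # evict least recently used
            out.extend(self.lines[idx])
        return out

# The dominant repetitive pattern: the second read of the same range is served
# entirely from memory, with no network transfer or disk access.
cache = SignalCache(fetch=lambda s, e: [f"signal@{m}m" for m in range(s, e, 10)])
cache.read_range(0, 250)   # misses: three lines fetched from the server
cache.read_range(0, 250)   # hits: answered from the client-side cache
```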

The Analysis of the Successful Factors from User Side of MMORPG <World of Warcraft> (사용자 측면에서의 MMORPG <월드 오브 워크래프트> 성공요인 분석)

  • Baek, Jaeyong;Kim, Kenneth Chi Ho
    • Cartoon and Animation Studies / s.42 / pp.151-175 / 2016
  • The game industry has shifted from PC online games to mobile games since the smartphone market opened up. In this environment, the industry has developed its commercial means rather than delivering fundamental entertainment to users: many games have been released with better graphics but poor originality, staying popular without improving the market itself. Meanwhile, users' awareness has grown. With the development of the network environment, users easily share their online gaming experiences, and they give and receive feedback on the quality of games through online channels and media. Although the market looks good for now, the industry's high margins invite negative feedback and criticism of its content from users. The industry's evolution therefore has to be reviewed from the users' perspective, looking back at successful cases from before the mobile era and analyzing what quality games and content should aim for. This research focuses on the success factors of <World of Warcraft> (WOW) from the user's point of view, a franchise widely recognized as a popular game before mobile games rose. WOW has been the most successful MMORPG, with a record of 12 million subscribers. For these reasons, this study analyzes the success factors of <World of Warcraft> from the user's point of view by configuring five expert groups and sequentially applying an expert group survey, interviews, Jobs-to-be-done, and the Fishbein model as UX methodologies, grounded in the business model, to explain its long-term reign in the industry. Consequently, the success factors of this MMORPG from the user side are that it gives users an opportunity to interact deeply with the game by (1) using a well-designed 'world view' maintained for over 10 years, (2) providing a 'localization policy' based on the users' culture and language, and (3) providing 'expansions' over time that give users elements to dig into.
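For context, the Fishbein model applied with the expert groups above is the classic multi-attribute attitude model; in its standard form (the attribute set and the weights used in the paper are its own):

```latex
% Fishbein multi-attribute model: attitude toward object o, where b_i is the
% belief that o has attribute i and e_i is the evaluation of attribute i.
A_o = \sum_{i=1}^{n} b_i \, e_i
```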

A Method for Evaluating News Value based on Supply and Demand of Information Using Text Analysis (텍스트 분석을 활용한 정보의 수요 공급 기반 뉴스 가치 평가 방안)

  • Lee, Donghoon;Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.45-67 / 2016
  • Given the recent development of smart devices, users are producing, sharing, and acquiring a variety of information via the Internet and social network services (SNSs). Because users tend to use multiple media simultaneously according to their goals and preferences, domestic SNS users use around 2.09 media concurrently on average. Since the information provided by such media is usually textually represented, recent studies have been actively conducting textual analysis in order to understand users more deeply. Earlier studies using textual analysis focused on analyzing a document's contents without substantive consideration of the diverse characteristics of the source medium. However, current studies argue that analytical and interpretive approaches should be applied differently according to the characteristics of a document's source. Documents can be classified into the following types: informative documents for delivering information, expressive documents for expressing emotions and aesthetics, operational documents for inducing the recipient's behavior, and audiovisual media documents for supplementing the above three functions through images and music. Further, documents can be classified according to their contents, which comprise facts, concepts, procedures, principles, rules, stories, opinions, and descriptions. Documents have unique characteristics according to the source media by which they are distributed. In terms of newspapers, only highly trained people tend to write articles for public dissemination. In contrast, with SNSs, various types of users can freely write any message and such messages are distributed in an unpredictable way. Again, in the case of newspapers, each article exists independently and does not tend to have any relation to other articles. However, messages (original tweets) on Twitter, for example, are highly organized and regularly duplicated and repeated through replies and retweets. There have been many studies focusing on the different characteristics between newspapers and SNSs. However, it is difficult to find a study that focuses on the difference between the two media from the perspective of supply and demand. We can regard the articles of newspapers as a kind of information supply, whereas messages on various SNSs represent a demand for information. By investigating traditional newspapers and SNSs from the perspective of supply and demand of information, we can explore and explain the information dilemma more clearly. For example, there may be superfluous issues that are heavily reported in newspaper articles despite the fact that users seldom have much interest in these issues. Such overproduced information is not only a waste of media resources but also makes it difficult to find valuable, in-demand information. Further, some issues that are covered by only a few newspapers may be of high interest to SNS users. To alleviate the deleterious effects of information asymmetries, it is necessary to analyze the supply and demand of each information source and, accordingly, provide information flexibly. Such an approach would allow the value of information to be explored and approximated on the basis of the supply-demand balance. Conceptually, this is very similar to the price of goods or services being determined by the supply-demand relationship. Adopting this concept, media companies could focus on the production of highly in-demand issues that are in short supply. 
In this study, we selected Internet news sites and Twitter as representative media of information supply and demand, respectively. We present the News Value Index (NVI), which evaluates the value of a piece of news in terms of the magnitude of the Twitter messages associated with it. In addition, we visualize the change of information value over time using the NVI. We conducted an analysis using 387,014 news articles and 31,674,795 Twitter messages. The analysis revealed interesting patterns: most issues show a lower NVI than the average across all issues, whereas a few issues maintain a steadily higher NVI than the average.
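The abstract does not give the NVI formula itself; as a hedged sketch of the supply-demand idea it describes, one natural form scores an issue higher when demand (associated Twitter messages) is large relative to supply (articles covering it). The ratio form and the smoothing constant below are illustrative assumptions, not the paper's definition.

```python
# Hedged sketch: value of an issue as demand over supply, with smoothing.
from collections import Counter

def news_value_index(article_issues, tweet_issues, smoothing=1.0):
    supply = Counter(article_issues)   # issue -> number of news articles
    demand = Counter(tweet_issues)     # issue -> number of Twitter messages
    return {issue: demand[issue] / (supply[issue] + smoothing)
            for issue in supply}

articles = ["election", "election", "weather", "festival"]
tweets = ["election"] * 5 + ["festival"] * 8
print(news_value_index(articles, tweets))
# "festival" is undersupplied relative to demand, so it scores highest.
```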

Improving Performance of Recommendation Systems Using Topic Modeling (사용자 관심 이슈 분석을 통한 추천시스템 성능 향상 방안)

  • Choi, Seongi;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.101-116 / 2015
  • Recently, with the development of smart devices and social media, vast amounts of information in various forms have accumulated. In particular, considerable research effort is being directed toward analyzing unstructured big data to resolve various social problems, and the focus of data-driven decision-making is moving from structured to unstructured data analysis. In the field of recommendation systems, a typical area of data-driven decision-making, the need to use unstructured data to improve system performance has steadily increased. Performance can be improved in two ways: improving the algorithms, or acquiring useful, high-quality data. Traditionally, most efforts took the former approach, while the latter attracted relatively little attention; efforts to utilize unstructured data from various sources are therefore timely and necessary. In particular, since users' interests are directly connected to their needs, identifying those interests through unstructured big data analysis can be a clue to improving recommendation performance. This study therefore proposes a methodology for improving a recommendation system by measuring the interests of the user. Specifically, it proposes a method to quantify user interests by analyzing internet usage patterns, and to predict repurchase based on the discovered preferences. There are two important modules. The first module predicts the repurchase probability of each category by analyzing users' purchase history; we include it in our research scope to compare the accuracy of the traditional purchase-based prediction model with the new model presented in the second module. The core of our methodology is the second module, which extracts users' interests by analyzing the news articles they have read. It constructs a correspondence matrix between topics and news articles by performing topic modeling on real-world news articles, then analyzes users' news access patterns to construct a correspondence matrix between articles and users. By merging these results, we obtain a correspondence matrix between users and topics, which describes users' interests in a structured manner. Finally, using this matrix, the second module builds a model for predicting the repurchase probability of each category. We also report a performance evaluation. The data used in our experiments is as follows: we acquired web transaction data for 5,000 panels from a company that specializes in analyzing the rankings of internet sites. We first extracted 15,000 URLs of news articles published from July 2012 to June 2013 and crawled the main contents of those articles. We then selected the 2,615 users who had read at least one of the extracted articles; among them, 359 target users had purchased at least one item from our target shopping mall 'G'. In the experiments, we analyzed the purchase history and news access records of these 359 internet users. The evaluation showed that our prediction model, which uses both users' interests and purchase history, outperforms a model using only purchase history in terms of misclassification ratio. In detail, our model outperformed the traditional one in the appliance, beauty, computer, culture, digital, fashion, and sports categories when artificial-neural-network-based models were used, and in the beauty, computer, digital, fashion, food, and furniture categories when decision-tree-based models were used, although the latter improvement is very small.
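The matrix pipeline of the second module can be sketched compactly; the corpus, the reading log, and the topic count below are toy assumptions, but the matrix algebra mirrors the description above: topic modeling gives an article-topic matrix, reading logs give a user-article matrix, and their product is the structured user-topic interest matrix.

```python
# Hedged sketch of building the user-topic interest matrix via topic modeling.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

news = ["stock market rises on tech earnings",
        "new smartphone camera review",
        "team wins the championship final",
        "chip makers report record profit"]

counts = CountVectorizer().fit_transform(news)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
article_topic = lda.fit_transform(counts)      # articles x topics

# Reading log: rows = users, columns = articles (1 = user read the article).
user_article = np.array([[1, 0, 0, 1],
                         [0, 1, 1, 0]])

user_topic = user_article @ article_topic      # users x topics (interests)
print(user_topic)   # structured description of each user's interests,
                    # usable as extra features in the repurchase model
```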

A Study on the Effects of Career-Interrupted Women's Personal Attitude and Subjective Norm on Entrepreneurial Intention: Focusing on the Moderating Effects of the Entrepreneurial Supporting Policy (경력단절여성의 창업행위에 대한 태도와 주관적 규범이 창업의도에 미치는 영향)

  • Choi, Jinsook;Lee, Namhee;Hwang, Kumju
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.14 no.4 / pp.113-132 / 2019
  • The degree of women's participation in corporate activity has recently increased around the world, and women's participation in economic activity may provide new momentum for the Korean economy, which has fallen into a vicious cycle of low growth. Start-up has therefore attracted increasing attention as an opportunity for women whose careers were interrupted to re-enter the labor market, and the growing ratio of female start-ups increases the need for studies of the factors influencing the decision to start a business. This study empirically verifies the effects of women's characteristics (perceived discrimination against women and women's role conflict) and human networks on entrepreneurial intention, mediated by the personal attitude and subjective norm suggested by Ajzen's Theory of Reasoned Action. The findings show that women's human networks affect both the attitude toward start-up activity and the subjective norm, and that perceived discrimination influences personal attitude. In contrast, women's role conflict has no effect on either the personal attitude toward start-up activity or the subjective norm; given the subjects' age distribution, this can be supposed to result from a low level of conflict over their sex roles. The relation between subjective norm and entrepreneurial intention appeared to be moderated by a strongly perceived entrepreneurial supporting policy. Attitude toward start-up activity was found to mediate the relations between discrimination, human networks, and entrepreneurial intention, while subjective norm mediated only the relation between human networks and entrepreneurial intention. Based on these results, this study offers theoretical suggestions and directions for entrepreneurial supporting policies to increase and grow start-ups by career-interrupted women in Korea.
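For reference, the mediation structure tested here follows Ajzen and Fishbein's Theory of Reasoned Action, in which behavioral (here, entrepreneurial) intention is a weighted function of attitude and subjective norm; the weights are estimated empirically, and the moderation by perceived supporting policy is this study's extension:

```latex
% Theory of Reasoned Action: behavioral intention BI from attitude toward the
% behavior (A_B) and subjective norm (SN), with empirical weights w_1, w_2.
BI = w_1\,A_B + w_2\,SN
```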

The Effects of Game User's Social Capital and Information Privacy Concern on SNG Reuse Intention and Recommendation Intention Through Flow (게임 이용자의 사회자본과 개인정보제공에 대한 우려가 플로우를 통해 SNG 재이용의도와 추천의도에 미치는 영향)

  • Lee, Ji-Hyeon;Kim, Han-Ku
    • Management & Information Systems Review / v.37 no.4 / pp.21-39 / 2018
  • Today, mobile instant messaging (MIM) has become a common means of communication for many people as smartphone technology has advanced. Among such services, KakaoGame continuously generates large profits on the back of the representative Kakao platform. However, even though the number of KakaoGame users has increased and their characteristics have diversified, there is little research on the relationship between the characteristics of SNG users and continuous game use. The social capital that SNG users form with acquaintances creates a sense of belonging, so its role is emphasized in the social-network environment. In addition, users' concerns about information privacy may decrease their trust in a game app and may even be perceived as a threat to the game system. This study was therefore designed to examine the structural relationships among SNG users' social capital, information privacy concerns, flow, SNG reuse intention, and recommendation intention. The results are as follows. First, participants' bridging social capital had a positive effect on SNG flow, while bonding social capital had a negative effect. In addition, awareness of information privacy concern had a negative effect on SNG flow, but control of information privacy concern had a positive effect. Finally, SNG flow had a positive effect on both reuse intention and recommendation intention, and reuse intention had a positive effect on recommendation intention. Academic and practical implications can be drawn from these results. First, this study focused on KakaoTalk, which has both the closed and the open characteristics of an SNS, and found that SNG users' social capital can influence user behavior through flow experiences in the SNG. Second, it extends prior research by empirically analyzing the relationship between SNG users' information privacy concerns and flow. Finally, the results can provide practical guidelines for SNG companies developing marketing strategies.

Operation Measures of Sea Fog Observation Network for Inshore Route Marine Traffic Safety (연안항로 해상교통안전을 위한 해무관측망 운영방안에 관한 연구)

  • Joo-Young Lee;Kuk-Jin Kim;Yeong-Tae Son
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.2 / pp.188-196 / 2023
  • Among marine accidents caused by bad weather, the visibility restrictions that accompany sea fog lead to accidents such as ship stranding and ship-bottom damage, together with the casualties those accidents involve, and they continue to occur every year. Low visibility at sea is also emerging as a social problem: passenger ships are collectively delayed and controlled, causing considerable inconvenience to island residents who depend on them, even when conditions differ from region to region. Such measures are all the more problematic because sea fog cannot yet be quantified objectively, owing to regional deviations and observation criteria that differ from person to person. Currently, the VTS of each port controls the operation of ships when the visibility distance falls below 1 km, and in such cases the assessment of sea-fog visibility depends on a visibility meter or visual observation, which limits objective data collection. As part of addressing these obstacles to marine traffic safety, the government is building marine weather signal signs and sea fog observation networks for sea-fog detection and prediction, but the system for observing locally occurring sea fog remains very insufficient in practice. Accordingly, this paper examines domestic and foreign policy trends aimed at solving the social problems caused by low visibility at sea, and provides basic data on the need for government support to ensure maritime traffic safety in sea fog by investigating and analyzing those social problems. It also aims to establish a more stable maritime traffic operation system by blocking, in advance, the marine safety risks that sea fog can ultimately cause.
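As a small illustration of the 1 km control rule mentioned above, a sea fog observation network reduces, operationally, to threshold checks on networked visibility readings; the station names and the aggregation below are assumptions for illustration, not the paper's system design.

```python
# Hedged sketch: flag route segments for VTS operation control when the
# observed visibility distance falls below the 1 km threshold.
VISIBILITY_CONTROL_M = 1000  # VTS controls ship operation below 1 km

def control_flags(readings_m):
    # readings_m: {station: latest visibility distance in metres}
    return {station: dist < VISIBILITY_CONTROL_M
            for station, dist in readings_m.items()}

print(control_flags({"Incheon-N1": 820, "Mokpo-S3": 4200}))
# {'Incheon-N1': True, 'Mokpo-S3': False} -> control traffic near Incheon-N1
```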