• Title/Summary/Keyword: real-time network system


A Design of the Emergency-notification and Driver-response Confirmation System (EDCS) for Autonomous Vehicle Safety (자율차량 안전을 위한 긴급상황 알림 및 운전자 반응 확인 시스템 설계)

  • Son, Su-Rak;Jeong, Yi-Na
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.14 no.2
    • /
    • pp.134-139
    • /
    • 2021
  • Currently, the autonomous vehicle market is commercializing Level 3 autonomous vehicles, which still require the driver's attention. Beyond Level 3, the most critical aspect of Level 4 autonomous vehicles is vehicle stability, because from Level 4 onward a vehicle must drive autonomously even when the driver is inattentive. Therefore, in this paper we propose the Emergency-notification and Driver-response Confirmation System (EDCS) for autonomous vehicle safety, which notifies the driver of an emergency and recognizes the driver's reaction in situations where the driver is inattentive. The EDCS uses an emergency-situation delivery module to convert the emergency situation into text and deliver it to the driver as speech, and a driver-response confirmation module to recognize the driver's reaction to the emergency and decide whether control can be handed back to the driver. In the experiments, the HMM of the emergency delivery module learned speech 25% faster than an RNN and 42.86% faster than an LSTM. The Tacotron2 of the driver-response confirmation module converted text to speech about 20 ms faster than Deep Voice and 50 ms faster than the DeepMind model. Therefore, the proposed system can train its neural network models efficiently and check the driver's response in real time.

Accuracy Analysis of GNSS-based Public Surveying and Proposal for Work Processes (GNSS관측 공공측량 정확도 분석 및 업무프로세스 제안)

  • Bae, Tae-Suk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.6
    • /
    • pp.457-467
    • /
    • 2018
  • Currently, the regulations and rules for public surveying and the UCPs (Unified Control Points) adopt those of triangulated traverse surveying. In addition, such regulations do not take into account the unique characteristics of GNSS (Global Navigation Satellite System) surveying, so there are difficulties in field work and in the data processing afterwards. A detailed procedure for GNSS processing has not yet been described either, and the verification of accuracy does not follow generic standards. In order to propose an appropriate procedure for field surveys, we processed short sessions (30 minutes) based on scenarios similar to actual situations. The reference network in Seoul was used to process the same data span for 3 days, and the temporal variation during the day was evaluated as well. We analyzed the accuracy of the estimated coordinates depending on the parameterization of the tropospheric delay, compared with 24-hour static processing results. Estimating the tropospheric delay is advantageous for the accuracy and stability of the coordinates, resulting in RMSE (Root Mean Squared Error) of about 5 mm and 10 mm for the horizontal and vertical components, respectively. Based on the test results, we propose a procedure that estimates daily solutions and then combines them into a final solution by applying minimum constraints (a no-net-translation condition). It is necessary to develop a web-based processing system using high-end software, and to standardize the IDs of the public control points and the UCPs for automatic GNSS processing.
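The combination of daily solutions into a final solution can be illustrated with a simple inverse-variance weighted mean and an RMSE about the combined value. The coordinates and formal sigmas below are made-up numbers for illustration; the paper's actual combination also applies network-level minimum constraints, which this one-coordinate sketch omits.

```python
# Hedged sketch: combining independent daily coordinate estimates by
# inverse-variance weighting, then reporting the scatter (RMSE) of the
# daily solutions about the combined value. All numbers are illustrative.
import math

def combine_solutions(values, sigmas):
    """Inverse-variance weighted mean of independent estimates."""
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = math.sqrt(1.0 / sum(weights))  # formal sigma of the mean
    return mean, sigma

def rmse(values, reference):
    return math.sqrt(sum((v - reference) ** 2 for v in values) / len(values))

# Three hypothetical daily height estimates (m) and their sigmas (m).
days = [102.413, 102.405, 102.418]
sig = [0.008, 0.010, 0.009]
h, h_sigma = combine_solutions(days, sig)
err = rmse(days, h)
```

The combined sigma is always smaller than the best single-day sigma, which is the point of accumulating daily solutions.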

A Study on the Analysis of the Congestion Level of Tourist Sites and Visitors Characteristics Using SNS Data (SNS 데이터를 활용한 관광지 혼잡도 및 방문자 특성 분석에 관한 연구)

  • Lee, Sang Hoon;Kim, Su-Yeon
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.27 no.5
    • /
    • pp.13-24
    • /
    • 2022
  • SNS has become a service deeply embedded in our daily lives. As marketing is done through SNS, places often called "hot places" emerge, and users flock to them. However, such places are often crowded with a large number of people within a short period, resulting in a negative experience for both visitors and service providers. To improve this, the congestion level must be recognized, but the means for an individual to determine the congestion level of a specific area are very limited. Therefore, in this study we propose a system that identifies congestion-level information and visitor characteristics for a specific tourist destination using data posted on SNS. For this purpose, posting data uploaded by users and image analysis were used, and the performance of the proposed system was verified against the Naver DataLab system. Comparing three places selected by type of tourist destination, the congestion levels calculated in this study and those provided by DataLab were found to be similar. In particular, this study is meaningful in that it provides a congestion measure based on real user data that does not depend on a specific company or service.
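The basic idea of turning SNS posting volume into a congestion level can be sketched by normalizing per-slot post counts against a site's observed maximum and bucketing the ratio. The slot counts, thresholds, and level names below are illustrative assumptions, not the paper's actual procedure (which also uses image analysis).

```python
# Illustrative sketch: post counts per time slot -> coarse congestion
# levels. Counts, thresholds, and level names are assumptions only.

def congestion_level(post_count, historical_max):
    """Map a slot's post count to a coarse congestion level."""
    ratio = post_count / historical_max if historical_max else 0.0
    if ratio >= 0.8:
        return "crowded"
    if ratio >= 0.4:
        return "moderate"
    return "quiet"

# Hypothetical hourly posting volume at one tourist site.
hourly_posts = {"10:00": 12, "14:00": 87, "18:00": 45}
peak = max(hourly_posts.values())
levels = {slot: congestion_level(n, peak) for slot, n in hourly_posts.items()}
```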

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers. Managing it requires professionals capable of classifying relevant information, and hence text classification was introduced. Text classification is a challenging task in modern data analysis: assigning a text document to one or more predefined categories or classes. Various techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge: depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts propose a new algorithm or modify an existing one, and such research can be said to have reached its limits for further improvement. In this study, rather than proposing a new algorithm or modifying an existing one, we focus on modifying the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets usually contain noise, and this noisy data can affect the decisions made by classifiers built from it.
In this study, we consider that data from different domains, i.e., heterogeneous data, may carry noise-like characteristics that can be exploited in the classification process. Machine learning algorithms build classifiers on the assumption that the characteristics of the training data and the target data are the same or very similar. However, for unstructured data such as text, the features are determined by the vocabulary of the documents, so if the viewpoints of the training data and target data differ, their features may also differ. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various sources is likely to be formatted differently, which causes difficulties for traditional machine learning algorithms, since they were not developed to recognize different types of data representation at once and generalize over them jointly. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the classifier, so we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision making.
In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
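The core idea of artificially injecting noise into the classifier-construction process can be sketched on binary bag-of-words features: each term-presence bit is flipped with some probability before training. The feature vectors, noise rate, and seed below are illustrative assumptions, not the paper's actual RSESLA procedure.

```python
# Sketch of noise injection into binary bag-of-words training features.
# Vectors, noise rate, and seed are illustrative assumptions.
import random

def inject_noise(features, noise_rate, rng):
    """Flip each binary term-presence feature with probability noise_rate."""
    return [
        [(1 - f) if rng.random() < noise_rate else f for f in doc]
        for doc in features
    ]

rng = random.Random(42)
train = [[1, 0, 1, 0, 0], [0, 1, 0, 1, 1]]  # two hypothetical documents
noisy = inject_noise(train, noise_rate=0.2, rng=rng)
```

A classifier trained on both the clean and the perturbed copies sees feature patterns it would otherwise only meet in out-of-domain target data, which is the robustness argument the abstract makes.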

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • At one time, the anomaly detection field was dominated by methods that judged abnormality from statistics derived from specific data. This was possible because data used to be low-dimensional, so classical statistical methods worked effectively. However, as data characteristics have grown complex in the big data era, it has become difficult to accurately analyze and predict data generated across industry in the conventional way, and supervised learning algorithms such as SVMs and Decision Trees came into use. Supervised models, however, predict test data accurately only when the class distribution of the training data is balanced, while most data generated in industry has unbalanced classes, so their predictions are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model built on convolutional neural networks that performs anomaly detection on medical images. By contrast, research on anomaly detection for sequence data with generative adversarial networks is scarce compared to image data. Li et al. (2018) proposed a model using LSTM, a type of recurrent neural network, to classify abnormalities in numerical sequence data, but it has not been applied to categorical sequence data, nor has it used the feature matching method of Salimans et al. (2016).
This suggests that many studies remain to be attempted on anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is composed of LSTMs: the generator is a 2-stacked LSTM with 32-dim and 64-dim hidden unit layers, and the discriminator uses a 64-dim hidden unit LSTM layer. Existing work on anomaly detection for sequence data derives anomaly scores from the entropy of the probability of the actual data, but in this paper, as mentioned earlier, anomaly scores are derived using the feature matching technique. In addition, the process of optimizing the latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more accurate than the autoencoder in all experiments in terms of precision, and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also performed better than the autoencoder, because it can learn the data distribution from real categorical sequence data and so is not dominated by a single normal pattern, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% and that of the generative adversarial network 96%; in terms of sensitivity, the autoencoder reached 40% and the generative adversarial network 51%. Experiments were also conducted to show how much performance changes with the optimization structure of the latent variables; sensitivity improved by about 1%. These results offer a new perspective on optimizing latent variables, which had been treated as relatively insignificant.
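A feature-matching anomaly score of the kind the abstract describes compares intermediate discriminator features of a real sequence with those of its generated reconstruction. In the sketch below the "discriminator features" are a stand-in summary-statistics function; in the paper they would come from an LSTM discriminator's hidden layer, so this only illustrates the scoring step.

```python
# Sketch of a feature-matching anomaly score: L2 distance between
# "discriminator features" of a real sequence and of its generated
# reconstruction. feature_vector is a stand-in (an assumption), not
# a trained discriminator.
import math

def feature_vector(sequence):
    # Stand-in for intermediate discriminator activations: simple
    # summary statistics of the sequence.
    n = len(sequence)
    mean = sum(sequence) / n
    var = sum((x - mean) ** 2 for x in sequence) / n
    return [mean, var]

def feature_matching_score(real_seq, generated_seq):
    """L2 distance between feature vectors of real and generated data."""
    fr = feature_vector(real_seq)
    fg = feature_vector(generated_seq)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fr, fg)))

normal_score = feature_matching_score([1, 2, 1, 2], [1, 2, 1, 2])
anomaly_score = feature_matching_score([1, 2, 9, 2], [1, 2, 1, 2])
```

A sequence the generator can reproduce well scores near zero; one it cannot reproduce scores high and is flagged as anomalous.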

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.19-28
    • /
    • 2012
  • After the Internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibition venues such as galleries, museums, and parks. There are also attempts to provide additional services based on the audience's location information, or to improve and deploy interaction between exhibits and audience by analyzing people's usage patterns. To provide multimodal interaction services at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used: it obtains the real-time location of fast-moving subjects, making it an important technology wherever location tracking is required. However, because GPS tracks location via satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, indoor location tracking studies use very-short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. These technologies have shortcomings: the audience needs to carry an additional sensor device, and the system becomes difficult and expensive as the density of the target area grows. In addition, the usual exhibition environment contains many obstacles to the network, which degrades system performance. Above all, interaction based on these older technologies cannot provide a natural service to users, and because the system relies on sensor recognition, every user must carry a device, limiting the number of users that can use the system simultaneously.
To make up for these shortcomings, this study suggests a technology that obtains exact user location information through location mapping using Wi-Fi and a 3D camera together with the users' smartphones. We apply the signal amplitude of wireless LAN access points to develop a low-cost indoor location tracking system: an AP is cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the phone itself becomes the tracking device. For the 3D camera we used the Microsoft Kinect sensor, which is equipped with the function of discriminating depth and human information inside the shooting area and is therefore appropriate for extracting the user's body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. 3D cameras located in each cell area obtain the exact location and status information of the users. They are connected to the Camera Client, which calculates the mapping information aligned to each cell and obtains the users' exact location as well as the audience's status and pattern information. The location mapping technique of the Camera Client decreases the error rate of indoor location services, increases the accuracy of individual discrimination through body-based identification, and establishes a foundation for multimodal interaction technology at exhibitions. The calculated data enable users to receive the appropriate interaction service through the main server.
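The Wi-Fi side of the system, confirming the audience's cell from the access point signal, can be sketched as picking the cell of the AP with the strongest received signal. The AP names, cell assignments, and RSSI values (in dBm) below are hypothetical; a real deployment would also smooth readings over time.

```python
# Minimal sketch: choose the audience cell from the AP with the
# strongest RSSI. AP names, cells, and readings are hypothetical.

AP_TO_CELL = {"ap-entrance": "cell-A", "ap-hall": "cell-B", "ap-exit": "cell-C"}

def locate_cell(rssi_readings):
    """Pick the cell of the AP with the strongest (least negative) RSSI."""
    strongest_ap = max(rssi_readings, key=rssi_readings.get)
    return AP_TO_CELL[strongest_ap]

# RSSI in dBm: closer to 0 means stronger.
cell = locate_cell({"ap-entrance": -71, "ap-hall": -54, "ap-exit": -83})
```

Once the coarse cell is known, the 3D camera assigned to that cell refines the position and identifies individuals by body information, as the abstract describes.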

A study on the wire reduction design and effect analysis for the train vehicle line (철도차량 배선절감 방안 및 효과분석에 관한 연구)

  • Lee, Kangmi;Kim, Seong Jin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.11
    • /
    • pp.711-717
    • /
    • 2017
  • The railway is a public transportation system that provides large-scale passenger transport, for which reliability and safety are the top priorities. The wiring of railway vehicles is classified into train control lines (train lines) and communication lines. Train lines carry input/output signals related to vehicle driving and safety functions, while communication lines carry input/output signals for passenger services such as broadcasting. To secure the reliability of railway vehicles, train lines are applied to the input/output interface of control signals between the electric control devices in the vehicle, with many electromechanical devices such as relays and contactors implementing the control logic. In practice, since the vehicle control circuit comprises several thousand contacts, it is difficult to check for errors such as contact failure, real-time status checks are impossible, and considerable manpower and time are required for regular maintenance. Therefore, we analyze the current train-line design of the electric equipment used for driving and services in domestic railway cars and propose three wiring-reduction methods to improve it. Based on analysis of domestic electric vehicles, we confirmed a wiring-reduction effect of 35% or more.

Encryption Method Based on Chaos Map for Protection of Digital Video (디지털 비디오 보호를 위한 카오스 사상 기반의 암호화 방법)

  • Yun, Byung-Choon;Kim, Deok-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.49 no.1
    • /
    • pp.29-38
    • /
    • 2012
  • Due to the rapid development of network environments and wireless communication technology, the distribution of digital video has become easy, and the importance of protecting digital video has increased. This paper proposes a digital video encryption system based on multiple chaos maps for the MPEG-2 video encoding process. The proposed method generates a 128-bit secret hash key from a hash chain that uses the Tent map as a basic block, and generates an 8×8 lattice cipher by applying this hash key to the Logistic map and the Henon map. The method reduces encryption overhead by performing selective XOR operations between the 8×8 lattice cipher and some low-frequency coefficients in each DCT block, and it provides simplicity and randomness because it combines chaos maps. Experimental results show that the PSNR of the proposed method is at most 12 dB with respect to the encrypted video, while the encoding-time change ratio and compression-ratio change are 2% and 0.4%, respectively, so the method provides good visual security and can be applied in real time.
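The selective-encryption idea can be sketched end to end: a Tent-map chain acts as key expansion, a Logistic-map orbit is quantized into an 8×8 byte lattice, and the lattice is XORed with an 8×8 block of coefficients. The map parameters, iteration counts, and quantization below are illustrative assumptions, and the paper's actual construction also involves the Henon map and a 128-bit hash key.

```python
# Sketch (assumed parameters): Tent-map key expansion -> Logistic-map
# keystream -> 8x8 byte lattice -> selective XOR with a DCT-like block.

def tent(x, mu=1.99):
    return mu * x if x < 0.5 else mu * (1.0 - x)

def logistic(x, r=3.99):
    return r * x * (1.0 - x)

def lattice_cipher(seed, size=8):
    """Generate a size x size byte lattice from chained chaos maps."""
    x = seed
    for _ in range(16):          # Tent-map chain acting as key expansion
        x = tent(x)
    lattice = []
    for _ in range(size):
        row = []
        for _ in range(size):
            x = logistic(x)
            row.append(int(x * 256) & 0xFF)   # quantize orbit to a byte
        lattice.append(row)
    return lattice

def xor_block(block, lattice):
    """Selective XOR of an 8x8 coefficient block with the lattice."""
    return [[c ^ k for c, k in zip(brow, krow)]
            for brow, krow in zip(block, lattice)]

key = lattice_cipher(seed=0.3141592)
block = [[(r * 8 + c) & 0xFF for c in range(8)] for r in range(8)]
cipher = xor_block(block, key)
plain = xor_block(cipher, key)   # XOR with the same lattice decrypts
```

Because XOR is an involution, applying the same lattice twice restores the block, so encryption and decryption share one code path and the overhead stays low.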

Multi-Channel MAC Protocol Based on V2I/V2V Collaboration in VANET (VANET에서 V2I/V2V 협력 기반 멀티채널 MAC 프로토콜)

  • Heo, Sung-Man;Yoo, Sang-Jo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.1
    • /
    • pp.96-107
    • /
    • 2015
  • VANET technologies provide real-time traffic information for mitigating traffic jams and preventing traffic accidents, as well as in-vehicle infotainment services through Telematics/Intelligent Transportation Systems (ITS). With the rapid increase of such diverse requirements, conventional techniques, with their limited resources and fixed frame architecture, cannot provide an efficient communication service. A new, flexible operation that adapts to surrounding-situation information is therefore required, with an adaptive design of the network architecture and protocol for efficiently predicting, distributing, and sharing context-aware information. In this paper, Vehicle-to-Infrastructure (V2I) communication between a vehicle and a Road Side Unit (RSU) and Vehicle-to-Vehicle (V2V) communication between vehicles are effectively combined in a new MAC architecture in which V2I and V2V vehicles collaborate in channel management. As a result, many vehicles and the RSU can use the resources more efficiently and send data rapidly. Simulation results show that the proposed method achieves high resource utilization, and we determine the optimal transmission relay time and the second-relay-vehicle selection probability for spreading the V2V/V2I collaborative schedule message rapidly.

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.109-122
    • /
    • 2016
  • Recently, big data analysis has grown into a field of interest to individuals and non-experts as well as companies and professionals, and it is used for marketing and for solving social problems by analyzing data that is openly published or collected directly. In Korea, various companies and individuals are attempting big data analysis, but they struggle from the initial stage because of limits on big data disclosure and difficulties in collection. System improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, mainly services for opening public data such as the domestic Government 3.0 portal (data.go.kr). Alongside the government's efforts, services that share data held by corporations or individuals are running, but useful data is hard to find because little is shared. Moreover, big data traffic problems can occur because a user must download and examine the entire dataset just to grasp its attributes and basic characteristics. A new system for big data processing and utilization is therefore needed. First, big data pre-analysis technology is needed to solve the sharing problem. Pre-analysis, a concept proposed in this paper, means providing users with results generated by analyzing the data in advance. It improves the usability of big data by letting a data user grasp the properties and characteristics of a dataset while searching for it. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problems of disclosing the original data can be avoided, enabling big data sharing between the data provider and the data user.
Second, appropriate preprocessing results must be generated quickly according to the disclosure level of the raw data and the network status, and provided to users through distributed big data processing using Spark. Third, to solve the big-traffic problem, the system monitors network traffic in real time: when preprocessing the data requested by a user, it preprocesses the data to a size transferable on the current network before transmission, so that no big traffic occurs. In this paper, we present various data sizes according to the disclosure level through pre-analysis; compared with the conventional approach of sharing only raw data across many systems, this method is expected to produce a low traffic volume. We describe how to solve the problems that occur when big data is released and used, and how to facilitate sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, and comprises a Server Agent and a Client Agent, deployed on the server and client sides respectively. The Server Agent, required by the data provider, performs the pre-analysis of big data to generate a Data Descriptor containing information on the Sample Data, Summary Data, and Raw Data; it also performs fast and efficient big data preprocessing through distributed processing and continuously monitors network traffic. The Client Agent, placed on the data user's side, searches big data through the Data Descriptor produced by the pre-analysis, finds data quickly, and requests the desired data from the server to download it according to the disclosure level. The Server Agent and Client Agent are separated so that data published by a provider can be used by a user.
In particular, we focus on big data sharing, distributed big data processing, and the big-traffic problem, construct the detailed modules of the client-server model, and present the design of each module. In a system built on the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses it into new data; by publishing the newly processed data through the Server Agent, the data user takes on the role of data provider. A data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user, performing new analysis with the sample data. In this way, raw data is processed, the processed big data is utilized by users, and a natural sharing environment forms in which the roles of data provider and data user are not distinguished: everyone can be both a provider and a user. The client-server model thus solves the big data sharing problem and provides a free sharing environment for secure big data disclosure and easy discovery of big data.
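The Server Agent's pre-analysis step can be sketched as building a Data Descriptor, summary data plus sample data, so a Client Agent can judge a dataset without downloading it. The field names, the simple head sample, and the toy numeric dataset below are assumptions for illustration; the paper's system would run this over Spark on real big data.

```python
# Sketch of pre-analysis: raw data -> Data Descriptor with summary
# data and sample data. Field names and sampling rule are assumptions.

def build_data_descriptor(raw, sample_size=3):
    n = len(raw)
    summary = {
        "count": n,
        "min": min(raw),
        "max": max(raw),
        "mean": sum(raw) / n,
    }
    sample = raw[:sample_size]   # simple head sample (assumption)
    return {"summary": summary, "sample": sample, "raw_size": n}

raw_data = [13, 7, 22, 4, 18, 9]     # stand-in for a large dataset
descriptor = build_data_descriptor(raw_data)
```

A Client Agent that receives only `descriptor` can inspect the dataset's range and a few example records, then decide whether to request the raw data, which is exactly the traffic saving the abstract argues for.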