• Title/Summary/Keyword: integrated system algorithm

Development of a deep-learning based tunnel incident detection system on CCTVs (딥러닝 기반 터널 영상유고감지 시스템 개발 연구)

  • Shin, Hyu-Soung;Lee, Kyu-Beom;Yim, Min-Jin;Kim, Dong-Gyou
    • Journal of Korean Tunnelling and Underground Space Association / v.19 no.6 / pp.915-936 / 2017
  • In this study, the current status of the Korean hazard mitigation guideline for tunnel operation is summarized. It shows that the requirements for CCTV installation have gradually become stricter and that the need for a tunnel incident detection system working in conjunction with the CCTVs has greatly increased. Despite this, the mathematical-algorithm-based incident detection systems commonly applied in current tunnel operation show very low detection rates of less than 50%. The putative major reasons seem to be (1) very weak illumination, (2) dust in the tunnel, and (3) the low installation height of the CCTVs, about 3.5 m. Therefore, this study attempts to develop a deep-learning-based tunnel incident detection system that is relatively insensitive to very poor visibility conditions. Its theoretical background is given, and validation investigations are undertaken focusing on moving vehicles and persons outside vehicles in the tunnel, which are the official major objects to be detected. Two scenarios are set up: (1) training and prediction in the same tunnel, and (2) training in one tunnel and prediction in another tunnel. In both cases, the targeted objects are detected in prediction mode at rates higher than 80% when the training and prediction periods are similar, but the rate drops to about 40% when the prediction time is far from the training time and no further training has taken place. However, it is believed that the predictability of the AI-based system will improve automatically as further training is carried out on accumulated CCTV big data, without any revision or calibration of the incident detection system.
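
As a rough illustration of how per-frame detector output could drive incident alarms (not the authors' implementation; the detector below is a hypothetical stub standing in for a trained CNN):

```python
# Minimal sketch: turn per-frame detections into tunnel incident alarms.
# The detector is a stub; a real system would plug in a trained CNN detector.

from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str      # e.g. "vehicle" or "person"
    score: float    # detector confidence in [0, 1]
    box: tuple      # (x1, y1, x2, y2) in pixels

def detect_objects(frame) -> List[Detection]:
    """Stub standing in for a trained CNN detector (hypothetical)."""
    return [Detection("person", 0.91, (120, 40, 160, 110))]

def incident_alarms(frame, score_threshold: float = 0.8) -> List[str]:
    """Raise an alarm for any confidently detected person on the roadway."""
    alarms = []
    for det in detect_objects(frame):
        if det.label == "person" and det.score >= score_threshold:
            alarms.append(f"person on roadway at {det.box} (score {det.score:.2f})")
    return alarms

print(incident_alarms(frame=None))
```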

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.47-73 / 2020
  • A KTX rolling stock is a system consisting of several machines, electrical devices, and components, and its maintenance requires considerable expertise and experience from maintenance workers. In the event of a rolling stock failure, the knowledge and experience of the maintainer determine how quickly and how well the problem is solved, so the resulting availability of the vehicle varies accordingly. Although problem solving is generally based on fault manuals, experienced and skilled professionals can quickly diagnose a failure and take action by applying personal know-how. Since this knowledge exists in a tacit form, it is difficult to pass it on completely to a successor, and previous studies have developed case-based rolling stock expert systems to turn it into a data-driven resource. Nonetheless, research on the KTX rolling stock most commonly used on the main line, and on systems that extract the meaning of text and search for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for emerging failures by using the know-how of rolling stock maintenance experts as problem-solving examples. For this purpose, a case base was constructed by collecting rolling stock failure data generated from 2015 to 2017, and an integrated dictionary covering the essential terminology and failure codes was built separately from the case base in consideration of the specialized character of the railway rolling stock sector. Based on the deployed case base, a new failure is matched against past cases and the three most similar failure cases are retrieved so that the actual actions taken in those cases can be proposed as a diagnostic guide. To compensate for the limitation of keyword-matching case retrieval in previous case-based rolling stock expert system studies, various dimensionality reduction techniques were applied so that similarity reflects the meaningful relationships among failure descriptions, and their usefulness was verified through experiments. Among the dimensionality reduction techniques, three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, were applied to extract the characteristics of each failure, and similar cases were retrieved by measuring the cosine distance between the resulting vectors. Precision, recall, and the F-measure were used to assess the quality of the proposed actions. To compare the dimensionality reduction techniques, an analysis of variance confirmed that the performance differences among the five algorithms, including a baseline that randomly extracts failure cases with identical failure codes and a baseline that applies cosine similarity directly to the word vectors, were statistically significant. In addition, optimal settings for practical application were derived by verifying how performance varies with the number of dimensions used for dimensionality reduction. The analysis showed that direct cosine similarity performed better than the reduced representations based on NMF and LSA, and that the algorithm using Doc2Vec performed best. Furthermore, for the dimensionality reduction techniques, performance improved as the number of dimensions grew toward an appropriate level. Through this study, we confirmed the usefulness of effective methods for extracting data characteristics and converting unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are recorded as text. Text mining is being studied for use in many areas, but studies using such text data are still lacking in environments, like the one addressed here, with many specialized terms and limited access to data. In this regard, it is significant that this study is the first to present an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract the characteristics of a failure, complementing keyword-based case search. It is expected that this will serve as a basic study for developing diagnostic systems that can be used immediately on site.
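
A minimal sketch of the retrieval step described above, on a tiny illustrative corpus: failure descriptions are embedded with TF-IDF, reduced with LSA (the paper also evaluates NMF and Doc2Vec), and the three most similar past cases are returned by cosine similarity.

```python
# Sketch of similar-case retrieval: TF-IDF -> LSA -> cosine similarity, top 3.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    "traction motor overheating alarm during acceleration",
    "pantograph contact loss and main circuit breaker trip",
    "brake cylinder pressure drop on trailer car",
    "door closing fault caused by obstacle detection sensor",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(past_cases)

# Reduce dimensionality (2 latent topics for this toy corpus).
svd = TruncatedSVD(n_components=2, random_state=0)
case_vecs = svd.fit_transform(tfidf)

new_failure = "overheating of traction motor reported by driver"
query_vec = svd.transform(vectorizer.transform([new_failure]))

# Rank past cases by cosine similarity and take the top three.
sims = cosine_similarity(query_vec, case_vecs)[0]
for idx in sims.argsort()[::-1][:3]:
    print(f"{sims[idx]:.3f}  {past_cases[idx]}")
```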

A Numerical Simulation of Three-Dimensional Nonlinear Free Surface Flows (3차원 비선형 자유표면 유동의 수치해석)

  • Chang-Gu Kang;In-Young Gong
    • Journal of the Society of Naval Architects of Korea / v.28 no.1 / pp.38-52 / 1991
  • In this paper, a semi-Lagrangian method is used to solve the nonlinear hydrodynamics of a three-dimensional body beneath the free surface in the time domain. The boundary value problem is solved by using the boundary integral method. The geometries of the body and the free surface are represented by curved panels. The surfaces are discretized into small surface elements using a bi-cubic B-spline algorithm. The boundary values of $\phi$ and $\frac{\partial{\phi}}{\partial{n}}$ are assumed to be bilinear on the subdivided surface. The singular part, proportional to $\frac{1}{R}$, is subtracted off and integrated analytically in the calculation of the potential induced by the singularities. The far-field flow away from the body is represented by a dipole at the origin of the coordinate system. The 4th-order Runge-Kutta algorithm is employed in the time-stepping scheme. The three-dimensional form of the integral equation and the boundary conditions for the time derivative of the potential is derived. By using these formulas, the free surface shape and the equations of motion are calculated simultaneously. The free surface shape and the forces acting on a body oscillating sinusoidally with large amplitude are calculated and compared with published results. Nonlinear effects on a body near the free surface are investigated.
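
The time-stepping scheme named above is the classical 4th-order Runge-Kutta method. A minimal, generic sketch on a toy oscillator is shown below; the actual right-hand side in the paper comes from the boundary integral solution and is not reproduced here.

```python
# Generic classical RK4 step for dy/dt = f(t, y), demonstrated on a toy system.
import numpy as np

def rk4_step(f, t, y, dt):
    """Advance y by one 4th-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Toy right-hand side: simple harmonic motion, y = [position, velocity].
f = lambda t, y: np.array([y[1], -y[0]])

y, t, dt = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(int(2 * np.pi / dt)):
    y = rk4_step(f, t, y, dt)
    t += dt
print(y)  # close to the initial state after roughly one period
```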

A Study of Rayleigh Damping Effect on Dynamic Crack Propagation Analysis using MLS Difference Method (MLS 차분법을 활용한 동적 균열전파해석의 Rayleigh 감쇠영향 분석)

  • Kim, Kyeong-Hwan;Lee, Sang-Ho;Yoon, Young-Cheol
    • Journal of the Computational Structural Engineering Institute of Korea / v.29 no.6 / pp.583-590 / 2016
  • This paper presents a dynamic crack propagation algorithm with Rayleigh damping effect based on the MLS (Moving Least Squares) Difference Method. The dynamic equilibrium equation and the constitutive equation are derived by considering Rayleigh damping, and the governing equations are discretized by the MLS derivative approximation; the proportional damping, which has not been properly treated in conventional strong formulations, is implemented in both the equilibrium equation and the constitutive equation. The dynamic equilibrium equation, including the time-dependent terms, is integrated by the Central Difference Method, and the discrete equations are simplified by lagging the velocity one step behind. The geometrical feature of the crack is modeled by imposing the traction-free condition on the nodes placed at the crack surfaces, and the effect of moving and adding nodes at every time step due to crack growth is appropriately reflected in the construction of the total system. The robustness of the proposed numerical algorithm is demonstrated by simulating single and multiple crack growth problems, and the effect of proportional damping on the dynamic crack propagation analysis is effectively shown.
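
A minimal sketch of central-difference time stepping with Rayleigh damping C = a·M + b·K and the velocity lagged one step behind, on a single-degree-of-freedom toy system; the paper applies the same scheme to the MLS difference discretization.

```python
# Central-difference integration with Rayleigh damping and lagged velocity.
M, K = 1.0, 100.0           # mass and stiffness (toy values)
a, b = 0.05, 0.001          # Rayleigh damping coefficients
C = a * M + b * K           # proportional (Rayleigh) damping

dt = 0.001
u_prev, u = 0.0, 0.01       # displacement at t - dt and t (initial conditions)
history = []

for _ in range(1000):
    v_lag = (u - u_prev) / dt                  # velocity lagged one step behind
    f_ext = 0.0                                # no external load in this toy case
    accel = (f_ext - C * v_lag - K * u) / M    # equilibrium: M*a = f - C*v - K*u
    u_next = 2 * u - u_prev + dt**2 * accel    # central-difference update
    u_prev, u = u, u_next
    history.append(u)

print(max(history), min(history))  # damped oscillation about zero
```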

Energy Efficient Distributed Intrusion Detection Architecture using mHEED on Sensor Networks (센서 네트워크에서 mHEED를 이용한 에너지 효율적인 분산 침입탐지 구조)

  • Kim, Mi-Hui;Kim, Ji-Sun;Chae, Ki-Joon
    • The KIPS Transactions: Part C / v.16C no.2 / pp.151-164 / 2009
  • The importance of sensor networks as a basis for realizing ubiquitous computing is being highlighted, and security in particular is recognized as an important research issue because of their characteristics. Several efforts are underway to provide security services in sensor networks, but most of them are preventive approaches based on cryptography. However, sensor nodes are extremely vulnerable to capture or key compromise. To ensure the security of the network, it is critical to develop an Intrusion Detection System (IDS) that can survive malicious attacks from "insiders" who have access to keying materials or full control of some nodes, taking the characteristics of sensor networks into consideration. In this paper, we design a distributed and adaptive IDS architecture for sensor networks, considering both energy efficiency and IDS efficiency. Utilizing a modified HEED (mHEED) algorithm, a clustering algorithm, distributed IDS nodes (dIDS) are selected according to each node's residual energy and degree. The monitoring results of the dIDSs, together with detection codes, are then transferred to the dIDSs of the next round in order to perform a consecutive and integrated IDS process, and urgent reports are sent through high-priority messages. Simulation results show the superiority of our architecture in terms of efficiency, overhead, and detection capability in comparison with a recent existing approach, an adaptive IDS.
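
A minimal sketch of the dIDS selection idea, assuming a simple weighted score of residual energy and node degree; the exact mHEED probabilities and rounds from the paper are not reproduced.

```python
# Sketch: pick IDS cluster heads (dIDS) by residual energy and degree.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    residual_energy: float   # joules remaining
    degree: int              # number of one-hop neighbours

def select_dids(nodes, num_heads, energy_weight=0.7, degree_weight=0.3):
    """Pick the nodes with the best combined energy/degree score for this round."""
    max_e = max(n.residual_energy for n in nodes)
    max_d = max(n.degree for n in nodes)
    def score(n):
        return (energy_weight * n.residual_energy / max_e
                + degree_weight * n.degree / max_d)
    return sorted(nodes, key=score, reverse=True)[:num_heads]

nodes = [Node(1, 0.9, 4), Node(2, 0.4, 7), Node(3, 0.8, 6), Node(4, 0.2, 3)]
print([n.node_id for n in select_dids(nodes, num_heads=2)])
```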

A Preliminary Development of Real-Time Hardware-in-the-Loop Simulation Testbed for the Satellite Formation Flying Navigation and Orbit Control (편대비행위성의 항법 및 궤도제어를 위한 실시간 Hardware-In-the-Loop 시뮬레이션 테스트베드 초기 설계)

  • Park, Jae-Ik;Park, Han-Earl;Shim, Sun-Hwa;Park, Sang-Young;Choi, Kyu-Hong
    • Journal of Astronomy and Space Sciences / v.26 no.1 / pp.99-110 / 2009
  • The main purpose of the current research is to develop a real-time Hardware-In-the-Loop (HIL) simulation testbed for satellite formation flying navigation and orbit control. The HIL simulation testbed is integrated for demonstrating and evaluating navigation and orbit control algorithms. The testbed is composed of an environment computer, a GPS simulator, a flight computer, and a visualization computer. GPS measurements are generated by a SPIRENT GSS6560 multi-channel RF simulator to produce pseudorange and carrier phase measurements. The measurement data are transferred to a Satrec Initiative spaceborne GPS receiver, exchanged with the flight computer system, and subsequently processed in a navigation filter to generate relative or absolute state estimates. These results are fed into the control algorithm to generate the orbit control maneuvers required to maintain the formation. The maneuvers are passed back to the environment computer system to close the simulation loop. In this paper, the overall design of the HIL simulation testbed for satellite formation flying navigation and control is presented, and each component of the testbed is described. Finally, a LEO formation navigation and control simulation is demonstrated using a virtual scenario.
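
A minimal sketch of the closed HIL loop described above, with every component replaced by a software stub; the real testbed uses an RF GPS simulator, a spaceborne receiver, and a flight computer in place of these functions.

```python
# Sketch of the closed loop: environment -> measurement -> filter -> control -> environment.
def environment_step(state, maneuver):
    """Propagate a toy formation state and return a slightly biased measurement."""
    state = state + maneuver - 0.1 * state      # drift plus applied control
    return state, state + 0.01                  # measurement with small bias

def navigation_filter(measurement):
    """Stub estimator: pass the measurement through."""
    return measurement

def orbit_controller(estimate, target=1.0, gain=0.5):
    """Simple proportional controller toward the target separation."""
    return gain * (target - estimate)

state, maneuver = 0.0, 0.0
for step in range(20):
    state, meas = environment_step(state, maneuver)
    est = navigation_filter(meas)
    maneuver = orbit_controller(est)
print(round(state, 3))   # settles near the target (proportional control leaves an offset)
```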

Sentiment Analysis of Movie Reviews Using an Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content using the text data associated with products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. This has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research topics in natural language processing and is widely studied in text mining. Real online reviews are not only easy to collect openly but also affect business. In marketing, real-world information from customers is gathered from websites rather than surveys. Depending on whether the posts on a website are positive or negative, the customer response is reflected in sales, so businesses try to identify this information. However, the many reviews on a website are not always of good quality and are difficult to analyze. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, direction of the sentiment lexicon, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the pretrained IMDB review data set. First, for the text classification algorithms related to sentiment analysis, popular machine learning algorithms such as NB (Naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and Gradient Boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider the sequential attributes of the data. RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and we tried to understand how and why the models work well for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for the classification automatically by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can address the long-term dependency problem. Furthermore, when LSTM is placed after CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. The combined CNN-LSTM model achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and it was more accurate than the other models. In addition, each word embedding layer can be improved while training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, and it has the advantage of improving learning layer by layer using the end-to-end structure of the LSTM. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
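
A minimal sketch of one common way to build an integrated CNN-LSTM text classifier in the spirit of the model described above; the vocabulary size, sequence length, and layer sizes are illustrative, not the paper's settings.

```python
# Sketch: convolution extracts local n-gram features, pooling downsamples,
# and an LSTM after the pooling layer models their order end to end.
from tensorflow.keras import Sequential, layers

vocab_size, max_len = 10_000, 200   # assumed preprocessing settings

model = Sequential([
    layers.Embedding(vocab_size, 128),                     # word embeddings
    layers.Conv1D(64, kernel_size=5, activation="relu"),   # local features
    layers.MaxPooling1D(pool_size=4),                      # downsample
    layers.LSTM(64),                                       # sequence modelling
    layers.Dense(1, activation="sigmoid"),                 # positive/negative
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.build(input_shape=(None, max_len))
model.summary()
# Training would use padded IMDB review sequences, e.g.:
# model.fit(x_train, y_train, validation_split=0.2, epochs=3, batch_size=64)
```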

Development of an Automatic Seed Marker Registration Algorithm Using CT and kV X-ray Images (CT 영상 및 kV X선 영상을 이용한 자동 표지 맞춤 알고리듬 개발)

  • Cheong, Kwang-Ho;Cho, Byung-Chul;Kang, Sei-Kwon;Kim, Kyoung-Joo;Bae, Hoon-Sik;Suh, Tae-Suk
    • Radiation Oncology Journal / v.25 no.1 / pp.54-61 / 2007
  • Purpose: The purpose of this study is to develop a practical method for determining accurate marker positions for prostate cancer radiotherapy using CT images and kV x-ray images obtained from the on-board imager (OBI). Materials and Methods: Three gold seed markers were implanted into the reference position inside the prostate gland by a urologist. Multiple digital image processing techniques were used to determine each seed marker position, and the center-of-mass (COM) technique was employed to determine a representative reference seed marker position. A setup discrepancy can be estimated by comparing a computed $COM_{OBI}$ with the reference $COM_{CT}$. The proposed algorithm was applied to a seed phantom and to four prostate cancer patients with seed implants treated in our clinic. Results: In the phantom study, the calculated $COM_{CT}$ and $COM_{OBI}$ agreed with $COM_{actual}$ to within a millimeter. The algorithm could also localize each seed marker correctly and calculated $COM_{CT}$ and $COM_{OBI}$ for all CT and kV x-ray image sets, respectively. Discrepancies between the setup errors from 2D-2D matching using the OBI application and those from the proposed algorithm were less than one millimeter on each axis. The setup error of each patient was in the range of 0.1±2.7 to 1.8±6.6 mm in the AP direction, 0.8±1.6 to 2.0±2.7 mm in the SI direction, and -0.9±1.5 to 2.8±3.0 mm in the lateral direction, although the setup error was quite patient dependent. Conclusion: As it took less than 10 seconds to evaluate a setup discrepancy, the method can help reduce the setup correction time while minimizing subjective, user-dependent factors. However, the on-line correction process should be integrated into the treatment machine control system for a more reliable procedure.
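
A minimal sketch of the COM comparison described above, with made-up marker coordinates: each marker set is reduced to its center of mass, and the setup discrepancy is the per-axis difference between the two.

```python
# Sketch: setup error as the difference of seed-marker centers of mass.
import numpy as np

# (x, y, z) positions of the three implanted seed markers, in mm (illustrative).
markers_ct  = np.array([[ 2.0, 10.0, -5.0],
                        [ 8.0, 14.0, -3.0],
                        [ 5.0,  7.0, -8.0]])
markers_obi = np.array([[ 2.9, 10.6, -4.1],
                        [ 8.8, 14.7, -2.2],
                        [ 5.9,  7.5, -7.3]])

com_ct  = markers_ct.mean(axis=0)    # reference COM from the planning CT
com_obi = markers_obi.mean(axis=0)   # COM measured on the treatment day (OBI)

setup_error = com_obi - com_ct       # per-axis setup discrepancy, mm
print(np.round(setup_error, 2))
```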

Design and Hardware Implementation of High-Speed Variable-Length RSA Cryptosystem (가변길이 고속 RSA 암호시스템의 설계 및 하드웨어 구현)

  • 박진영;서영호;김동욱
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.9C / pp.861-870 / 2002
  • In this paper, targeting the main drawback of RSA, its operation speed, a new 1024-bit RSA cryptosystem has been proposed and implemented in hardware to increase the operational speed and to support variable-length encryption. The proposed cryptosystem mainly consists of a modular exponentiation part and a modular multiplication part. For the modular exponentiation, the RL-binary method, which performs squaring and modular multiplication in parallel, was improved and then applied. A 4-stage CSA structure and the radix-4 Booth algorithm were applied to support variable-length operation and to reduce the number of partial products in the modular multiplication arithmetic. The proposed RSA cryptosystem, which can process at most 1024 bits at a time, was mapped into an integrated circuit using the Hynix Phantom Cell Library for the Hynix 0.35㎛ 2-Poly 4-Metal CMOS process. The result of a software implementation, programmed prior to the hardware work, was used to verify the operation of the hardware system. The hardware implementation was about 190k gates in size and the operational clock frequency was 150㎒. Considering the variable length of the modulus, the data rate of the proposed scheme is one and a half times that of previous works. Therefore, the proposed high-speed variable-length RSA cryptosystem should be usable in various information security systems that require high-speed operation.
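
A minimal software sketch of the RL (right-to-left) binary modular exponentiation that the hardware design parallelizes: at each bit the running square is updated while the result is conditionally multiplied, so the two operations are independent and can proceed concurrently in hardware. The CSA and Booth-multiplier details of the actual circuit are omitted.

```python
# Right-to-left binary modular exponentiation (software reference model).
def rl_binary_modexp(base: int, exponent: int, modulus: int) -> int:
    """Compute base**exponent mod modulus, scanning exponent bits LSB first."""
    result = 1
    square = base % modulus
    while exponent > 0:
        if exponent & 1:                      # conditional multiply (bit = 1)
            result = (result * square) % modulus
        square = (square * square) % modulus  # squaring, independent of result
        exponent >>= 1
    return result

# Tiny check against Python's built-in pow().
assert rl_binary_modexp(7, 560, 561) == pow(7, 560, 561)
print(rl_binary_modexp(7, 560, 561))
```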

Method of Differential Corrections Using GPS/Galileo Pseudorange Measurement for DGNSS RSIM (DGNSS RSIM을 위한 GPS/Galileo 의사거리 보정기법)

  • Seo, Ki-Yeol;Kim, Young-Ki;Jang, Won-Seok;Park, Sang-Hyun
    • Journal of Navigation and Port Research / v.38 no.4 / pp.373-378 / 2014
  • In order to prepare for the recapitalization of differential GNSS (DGNSS) reference stations and integrity monitors (RSIM) due to GNSS diversification, this paper focuses on a differential correction algorithm using GPS/Galileo pseudorange. The technical standards on the operation and broadcast of DGNSS RSIM are described with respect to the operation of differential GPS (DGPS) RSIM being converted to DGNSS RSIM. In general, to obtain differential corrections of the GNSS pseudorange, the system must know the true positions of the satellites and the user. Therefore, to calculate the positions of the Galileo satellites correctly, the algorithm uses the SV position equations in the Galileo ICD (Interface Control Document) to estimate the SV position from the ephemeris data obtained from the user receiver, calculates the clock offsets of the satellite and the user receiver and the system time offset between GPS and Galileo, and then determines the GPS/Galileo pseudorange corrections. On a performance verification platform connected to a GPS/Galileo integrated signal simulator, the PRC (pseudorange correction) errors of GPS and Galileo were compared, and the position errors of DGPS, DGalileo, and DGPS/DGalileo were analyzed, respectively. The proposed method was evaluated according to the PRC errors and position accuracy on the simulation platform. When the DGPS/DGalileo corrections were used, the results were confirmed to meet the performance requirements of the RTCM.
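
A minimal sketch of forming a single-satellite pseudorange correction at a surveyed reference station, assuming illustrative values and the common convention PRC = geometric range minus the clock-corrected measured pseudorange; sign conventions and additional correction terms vary between implementations.

```python
# Sketch: pseudorange correction at a reference station with a known position.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def pseudorange_correction(sat_pos_ecef, ref_pos_ecef,
                           measured_pr, sat_clock_bias_s):
    """PRC in metres for one satellite seen from a surveyed reference station."""
    geometric_range = np.linalg.norm(np.asarray(sat_pos_ecef)
                                     - np.asarray(ref_pos_ecef))
    # Remove the satellite clock error from the measured pseudorange so the
    # PRC reflects the remaining orbit/atmospheric errors.
    corrected_pr = measured_pr + C * sat_clock_bias_s
    return geometric_range - corrected_pr

sat_pos = (15_600e3, 7_540e3, 20_140e3)     # ECEF, m (illustrative)
ref_pos = (-3_120e3, 4_086e3, 3_761e3)      # surveyed reference station, m
print(pseudorange_correction(sat_pos, ref_pos,
                             measured_pr=25_112_000.0,
                             sat_clock_bias_s=-1.0e-5))
```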