• Title/Summary/Keyword: System-level Simulation

Implementation of Policy based In-depth Searching for Identical Entities and Cleansing System in LOD Cloud (LOD 클라우드에서의 연결정책 기반 동일개체 심층검색 및 정제 시스템 구현)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Internet Computing and Services / v.19 no.3 / pp.67-77 / 2018
  • This paper suggests that each LOD establish its own link policy and publish it to the LOD cloud to provide identity among entities in different LODs. For specifying the link policy, we also propose a vocabulary set founded on the RDF model. We implemented the Policy-based In-depth Searching and Cleansing (PISC, for short) system, which performs in-depth searching across LODs by referencing the link policies. PISC has been published on GitHub. LODs participate in the LOD cloud voluntarily, so the degree of entity identity needs to be evaluated. PISC therefore evaluates the identities and cleanses the searched entities, confining them to those that exceed the user's criterion for entity identity level. As search results, PISC provides an entity's detailed contents, collected from diverse LODs, along with an ontology customized to the content. A simulation of PISC was performed on five of DBpedia's LODs. We found that a similarity of 0.9 between the objects of source and target RDF triples provided an appropriate expansion ratio and inclusion ratio for the search results. For sufficient identity of the searched entities, three or more target LODs need to be specified in the link policy.
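
The 0.9 object-similarity criterion the abstract reports can be illustrated with a minimal sketch (the string-based measure and the example threshold usage are assumptions for illustration; the paper does not specify its similarity function here):

```python
from difflib import SequenceMatcher

def objects_identical(src_obj: str, tgt_obj: str, threshold: float = 0.9) -> bool:
    """Treat two RDF triple objects as the same real-world value when their
    string similarity meets the threshold (0.9 per the paper's finding)."""
    return SequenceMatcher(None, src_obj, tgt_obj).ratio() >= threshold
```

In PISC's setting the comparison would run over objects of source and target triples reached while following link policies; `SequenceMatcher` is just a convenient stand-in measure.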

Design of Ku-Band Low Noise Amplifiers including Band Pass Filter Characteristics for Communication Satellite Transponders (대역통과여파기 특성을 갖는 통신위성중계기용 Ku-Band 저잡음증폭기의 설계 및 제작)

  • 임종식;김남태;박광량;김재명
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.5 / pp.872-882 / 1994
  • In this paper, a low noise amplifier (LNA) is designed and fabricated to include band pass filter characteristics, considering the antenna system characteristics according to the transmitting and receiving signal levels of a communication satellite transponder. As examples, a 2-stage low noise amplifier and a 4-stage amplifier are designed, fabricated, and measured in the 14.0~14.5 GHz receiving frequency band. The fabricated LNA shows gain with very good flatness within the pass-band, and its gain decreases rapidly out of band, resulting in suppression of transmitting signal power leakage. It shows a pass-band gain of 20.3 dB ± 0.1 dB, a noise figure of 1.44 dB ± 0.04 dB, and 14 dB of out-of-band rejection (12.25~12.75 GHz). The gain flatness, noise figure, and group delay of this 2-stage LNA satisfactorily matched the simulation results. The fabricated 4-stage amplifier shows more than 42 dB of pass-band gain, ±0.25 dB of flatness, and 28 dB of rejection of transmitting power leakage. The 2-stage LNA and 4-stage amplifier presented in this paper will provide a design margin for the input filter and also result in a reduction of system cost.
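
The cascaded gain and noise-figure behaviour summarized above follows Friis' formula; the sketch below is a generic illustration (the per-stage gain and noise values are hypothetical, chosen only so two stages total the reported 20.3 dB gain):

```python
import math

def cascade(stages):
    """Friis' formula for cascaded stages: stages = [(gain_dB, nf_dB), ...].
    Returns (total_gain_dB, total_noise_figure_dB)."""
    gain_lin = 1.0   # running linear gain ahead of the current stage
    f_total = 1.0    # running noise factor
    for i, (g_db, nf_db) in enumerate(stages):
        f = 10 ** (nf_db / 10)
        f_total = f if i == 0 else f_total + (f - 1) / gain_lin
        gain_lin *= 10 ** (g_db / 10)
    return 10 * math.log10(gain_lin), 10 * math.log10(f_total)

# Hypothetical identical-gain stages: the second stage's noise contribution
# is suppressed by the first stage's gain, so the first stage dominates NF.
gain_db, nf_db = cascade([(10.15, 1.0), (10.15, 2.0)])
```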
Machine Learning Based Structural Health Monitoring System using Classification and NCA (분류 알고리즘과 NCA를 활용한 기계학습 기반 구조건전성 모니터링 시스템)

  • Shin, Changkyo;Kwon, Hyunseok;Park, Yurim;Kim, Chun-Gon
    • Journal of Advanced Navigation Technology / v.23 no.1 / pp.84-89 / 2019
  • This is a pilot study of a machine learning based structural health monitoring system using flight data of composite aircraft. In this study, the most suitable machine learning algorithm for structural health monitoring was selected, and a dimensionality reduction method for application to actual flight data was applied. For these tasks, an impact test on a cantilever beam with added mass, simulating damage in an aircraft wing structure, was conducted, and a classification model for damage states (damage location and level) was trained. Through vibration tests of the cantilever beam with a fiber Bragg grating (FBG) sensor, data for the normal state and 12 damaged states were acquired, and the most suitable algorithm was selected through comparison among algorithms such as tree, discriminant, support vector machine (SVM), kNN, and ensemble classifiers. In addition, neighborhood component analysis (NCA) feature selection provided the dimensionality reduction necessary to deal with high-dimensional flight data. As a result, quadratic SVMs performed best, with 98.7% accuracy without NCA and 95.9% with NCA. The application of NCA also improved prediction speed, training time, and model memory.
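
As a rough illustration of the classify-then-reduce-dimensionality workflow described, here is a numpy-only sketch on synthetic data (a nearest-class-mean classifier and a spread-based feature filter stand in for the paper's quadratic SVM and NCA, which would normally come from a library such as scikit-learn):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for FBG vibration features: 3 damage states,
# 40 samples each, 20 features of which only the first 4 are informative.
n_per_class, n_feat, n_informative = 40, 20, 4
X_parts, y_parts = [], []
for label in range(3):
    center = np.zeros(n_feat)
    center[:n_informative] = 2.0 * label
    X_parts.append(center + rng.normal(scale=0.5, size=(n_per_class, n_feat)))
    y_parts.append(np.full(n_per_class, label))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

def select_features(X, y, k):
    """Crude stand-in for NCA: keep the k features whose per-class means
    are spread furthest apart."""
    means = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])
    spread = means.max(axis=0) - means.min(axis=0)
    return np.argsort(spread)[-k:]

def fit_predict(X_tr, y_tr, X_te):
    """Nearest-class-mean classifier standing in for the quadratic SVM."""
    means = {c: X_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)}
    return np.array([min(means, key=lambda c: np.linalg.norm(x - means[c]))
                     for x in X_te])

idx = select_features(X, y, n_informative)
accuracy = (fit_predict(X[:, idx], y, X[:, idx]) == y).mean()
```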

A Study on Technology Evaluation Models and Evaluation Indicators focusing on the Fields of Marine and Fishery (기술력 평가모형 및 평가지표에 대한 연구: 해양수산업을 중심으로)

  • Kim, Min-Seung;Jang, Yong-Ju;Lee, Chan-Ho;Choi, Ji-Hye;Lee, Jeong-Hee;Ahn, Min-Ho;Sung, Tae-Eung
    • The Journal of the Korea Contents Association / v.21 no.10 / pp.90-102 / 2021
  • Technology evaluation assesses the ability of technology commercialization entities to generate profits using the subject technology, and domestic technology evaluation agencies have established and implemented their own evaluation systems. In particular, the recently developed technology evaluation model in the marine and fishery fields does not sufficiently reflect the poor environment for technology development compared to other industries, so evaluations rarely exceed the T4 rating, which is considered appropriate for investment. This is recognized as a problem that arises when the common evaluation indicators and scales used in other industries, and the scoring system for T1 to T10 grading, are applied similarly or identically. Therefore, through this study, we intend to secure the appropriateness and reliability of the comprehensive rating results by developing technology evaluation models and indicators that well explain the nine marine and fisheries industry classifications. Based on KED and technology evaluation case data, AHP-based index weighting and a Monte Carlo simulation-based rating system are applied, and the results of case studies are verified. Through the proposed model, we aim to enhance the usability of R&D and commercialization support programs based on fast, convenient, and objective evaluation results when applied to upcoming technology evaluation cases.
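
The AHP weighting plus Monte Carlo rating pipeline mentioned above can be sketched as follows (the pairwise comparison matrix, score ranges, and T1~T10 bucketing are hypothetical stand-ins, not the paper's calibrated model):

```python
import numpy as np

# Hypothetical 3-indicator AHP pairwise comparison matrix (1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Principal-eigenvector weights via power iteration (dependency-light).
w = np.ones(len(A))
for _ in range(100):
    w = A @ w
    w /= w.sum()

# Monte Carlo rating: sample indicator scores, aggregate with the weights,
# map the aggregate onto an illustrative T1(best)~T10 scale.
rng = np.random.default_rng(42)
scores = rng.uniform(40, 90, size=(10_000, 3))          # hypothetical ranges
total = scores @ w                                       # weighted aggregate
grade = np.clip(10 - (total // 10).astype(int), 1, 10)   # T1..T10 buckets
modal_grade = int(np.bincount(grade, minlength=11).argmax())
```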

Analysis of Quantization Noise in Magnetic Resonance Imaging Systems (자기공명영상 시스템의 양자화잡음 분석)

  • Ahn C.B.
    • Investigative Magnetic Resonance Imaging / v.8 no.1 / pp.42-49 / 2004
  • Purpose : The quantization noise in magnetic resonance imaging (MRI) systems is analyzed. The signal-to-quantization-noise ratio (SQNR) in the reconstructed image is derived from the level of quantization of the signal in the spatial frequency domain. Based on the derived formula, the SQNRs at various main magnetic fields with different receiver systems are evaluated. From this evaluation, the quantization noise can be a major noise source determining the overall system signal-to-noise ratio (SNR) in high-field MRI systems. A few methods to reduce the quantization noise are suggested. Materials and methods : In Fourier imaging methods, the spin density distribution is encoded by phase- and frequency-encoding gradients in such a way that it becomes a distribution in the spatial frequency domain. Thus the quantization noise in the spatial frequency domain can be expressed in terms of the SQNR in the reconstructed image. The validity of the derived formula is confirmed by experiments and computer simulation. Results : Using the derived formula, the SQNRs at various main magnetic fields with various receiver systems are evaluated. Since the quantization noise is proportional to the signal amplitude, yet cannot be reduced by simple signal averaging, it can be a serious problem in high-field imaging. In many receiver systems employing analog-to-digital converters (ADCs) of 16 bits/sample, the quantization noise can be a major noise source limiting the overall system SNR, especially in high-field imaging. Conclusion : The field strength of MRI systems keeps increasing for functional imaging and spectroscopy. In high-field MRI systems, the signal amplitude becomes larger, with more susceptibility effect and wider spectral separation. Since the quantization noise is proportional to the signal amplitude, if the conversion bits of the ADCs in the receiver system are not large enough, the increase in signal amplitude may not be fully utilized for SNR enhancement due to the increase in quantization noise. Evaluation of the SQNR for various systems using the formula shows that the quantization noise can be a major noise source limiting the overall system SNR, especially in three-dimensional imaging at high field. Oversampling and off-center sampling would be alternative solutions for reducing the quantization noise without replacing the receiver system.
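
The abstract's k-space SQNR derivation is not reproduced here, but the standard quantizer relation it builds on, including the oversampling remedy it suggests, can be sketched as follows (assumes an ideal b-bit quantizer driven by a full-scale sinusoidal signal, not the paper's exact formula):

```python
import math

def sqnr_db(bits: int, oversampling_ratio: float = 1.0) -> float:
    """Ideal SQNR of a b-bit quantizer for a full-scale sinusoid, plus the
    10*log10(OSR) improvement that oversampling spreads over the band."""
    return 6.02 * bits + 1.76 + 10 * math.log10(oversampling_ratio)

baseline = sqnr_db(16)        # the 16 bits/sample receivers discussed above
oversampled = sqnr_db(16, 4)  # 4x oversampling recovers about 6 dB
```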
Context Prediction Using Right and Wrong Patterns to Improve Sequential Matching Performance for More Accurate Dynamic Context-Aware Recommendation (보다 정확한 동적 상황인식 추천을 위해 정확 및 오류 패턴을 활용하여 순차적 매칭 성능이 개선된 상황 예측 방법)

  • Kwon, Oh-Byung
    • Asia pacific journal of information systems / v.19 no.3 / pp.51-67 / 2009
  • Developing an agile recommender system for nomadic users has been regarded as a promising application in mobile and ubiquitous settings. To increase the quality of personalized recommendation in terms of accuracy and elapsed time, estimating the user's future context correctly is highly crucial. Traditionally, time series analysis and Markovian processes have been adopted for such forecasting. However, these methods are not adequate for predicting context data, because most context data are represented on a nominal scale. To resolve these limitations, the alignment-prediction algorithm has been suggested for context prediction, especially for predicting future context from low-level context. Recently, an ontological approach has been proposed for guided context prediction without context history. However, due to the variety of context information, acquiring sufficient context prediction knowledge a priori is not easy in most service domains. Hence, the purpose of this paper is to propose a novel context prediction methodology that does not require a priori knowledge, and to increase accuracy and decrease elapsed time for service response. To do so, we have developed a new pattern-based context prediction approach. First of all, a set of individual rules is derived from each context attribute using the context history. Then a pattern, consisting of the results of reasoning with the individual rules, is developed for pattern learning. If at least one context property matches, say R, then regard the pattern as right. If the pattern is new, add it as a right pattern, set the value of mismatched properties to 0, set freq = 1, and set w(R, 1). Otherwise, increase the frequency of the matched right pattern by 1 and set w(R, freq). After training, if the frequency is greater than a threshold value, save the right pattern in the knowledge base. On the other hand, if at least one context property mismatches, say W, then regard the pattern as wrong. If the pattern is new, modify the result into the wrong answer, add it as a wrong pattern, set the frequency to 1, and set w(W, 1). Otherwise, increase the matched wrong pattern's frequency by 1 and set w(W, freq). After training, if the frequency value is greater than a threshold level, save the wrong pattern in the knowledge base. Context prediction is then performed with combinatorial rules as follows: first, identify the current context. Second, find matched patterns among the right patterns. If no pattern matches, find a matching pattern among the wrong patterns. If a matching pattern is still not found, choose the one context property whose predictability is higher than that of any other property. To show the feasibility of the proposed methodology, we collected actual context history from travelers who had visited the largest amusement park in Korea. As a result, 400 context records were collected in 2009. We then randomly selected 70% of the records as training data; the rest were used as testing data. To examine the performance of the methodology, prediction accuracy and elapsed time were chosen as measures, and the performance was compared with case-based reasoning (CBR) and voting methods. Through a simulation test, we conclude that our methodology is clearly better than the CBR and voting methods in terms of accuracy and elapsed time. This shows that the methodology is relatively valid and scalable. In a second round of the experiment, we compared a full model to a partial model. The full model uses both right and wrong patterns for reasoning about the future context, whereas the partial model performs the reasoning only with right patterns, as is generally adopted in the legacy alignment-prediction method. It turned out that the full model is better in terms of accuracy, while the partial model is better in terms of elapsed time. In a last experiment, we took into consideration potential privacy problems that might arise among the users. To mediate such concerns, we excluded context properties such as the date of the tour and user profile attributes such as gender and age. The outcome shows that the cost of preserving privacy is endurable. The contributions of this paper are as follows. First, academically, we have improved sequential matching methods in prediction accuracy and service time by considering individual rules for each context property and by learning from wrong patterns. Second, the proposed method is found to be quite effective for privacy-preserving applications, which are frequently required by B2C context-aware services; a privacy-preserving system applying the proposed method can also decrease elapsed time. Hence, the method is very practical for establishing privacy-preserving context-aware services. Future research issues, taking into account some limitations of this paper, can be summarized as follows. First, user acceptance and usability will be tested with actual users in order to prove the value of the prototype system. Second, we will apply the proposed method to more general application domains, as this paper focused on tourism in an amusement park.
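
The right/wrong pattern bookkeeping described above can be sketched minimally (the frequency threshold, context encoding, and tie-breaking are assumptions; the paper's w(R, freq) weighting is reduced here to a plain frequency count):

```python
from collections import defaultdict

FREQ_THRESHOLD = 2  # hypothetical cut-off for keeping a learned pattern

class PatternStore:
    """Count how often each (context, outcome) pair was judged right or
    wrong during training, keep only patterns seen more often than a
    threshold, and prefer right patterns at prediction time."""

    def __init__(self):
        self.freq = defaultdict(int)

    def observe(self, context, outcome, correct):
        self.freq[(context, outcome, correct)] += 1

    def knowledge_base(self):
        return {k: f for k, f in self.freq.items() if f > FREQ_THRESHOLD}

    def predict(self, context):
        kb = self.knowledge_base()
        for want_correct in (True, False):      # right patterns first
            hits = [(f, k) for k, f in kb.items()
                    if k[0] == context and k[2] == want_correct]
            if hits:
                _, (_, outcome, correct) = max(hits)
                return outcome if correct else f"avoid:{outcome}"
        return None  # fall back to per-property prediction in the paper

store = PatternStore()
for _ in range(3):
    store.observe(("sunny", "weekend"), "ride_rollercoaster", True)
store.observe(("rainy", "weekday"), "ride_rollercoaster", False)
```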

A Fully Digital Automatic Gain Control System with Wide Dynamic Range Power Detectors for DVB-S2 Application (넓은 동적 영역의 파워 검출기를 이용한 DVB-S2용 디지털 자동 이득 제어 시스템)

  • Pu, Young-Gun;Park, Joon-Sung;Hur, Jeong;Lee, Kang-Yoon
    • Journal of the Institute of Electronics Engineers of Korea SD / v.46 no.9 / pp.58-67 / 2009
  • This paper presents a fully digital gain control system with a new high-bandwidth, wide dynamic range power detector for DVB-S2 applications. Because the peak-to-average power ratio (PAPR) of the DVB-S2 system is very high and the settling time requirement is stringent, the conventional closed-loop analog gain control scheme cannot be used. Digital gain control is necessary for robust gain control and a direct digital interface with the baseband modem. It also has several advantages over analog gain control in terms of settling time and insensitivity to process, voltage, and temperature variation. In order to achieve a wide gain range with fine step resolution, a new AGC system is proposed. The system is composed of high-bandwidth digital VGAs, wide dynamic range power detectors with an RMS detector, a low-power SAR-type ADC, and a digital gain controller. To reduce power consumption and chip area, only one SAR-type ADC is used, and its input is time-interleaved across the four power detectors. Simulation and measurement results show that the new AGC system converges to the desired level with a gain error of less than 0.25 dB within 10 μs. It is implemented in a 0.18 μm CMOS process. The measurement results of the proposed IF AGC system exhibit an 80 dB gain range with 0.25 dB resolution, 8 nV/√Hz input-referred noise, and 5 dBm IIP3 at 60 mW power consumption. The power detector shows a 35 dB dynamic range for a 100 MHz input.
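
The digital gain-control loop described can be caricatured with a fixed-step sketch (the loop structure and the idealized detector are assumptions; only the 0.25 dB step resolution is taken from the abstract):

```python
import math

def digital_agc(input_dbm: float, target_dbm: float,
                step_db: float = 0.25, max_iter: int = 400) -> float:
    """Fixed-step digital AGC sketch: compare an idealized power-detector
    reading against the target and move the VGA gain one 0.25 dB step at a
    time until the residual error is within half a step."""
    gain_db = 0.0
    for _ in range(max_iter):
        error = target_dbm - (input_dbm + gain_db)  # detector vs. target
        if abs(error) <= step_db / 2:
            break
        gain_db += math.copysign(step_db, error)
    return gain_db
```

A real implementation would converge far faster (e.g. coarse/fine or SAR-style gain search) to meet the 10 μs settling budget; the sketch only shows the error-driven, quantized-gain idea.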

Effectiveness Analysis of HOT Lane and Application Scheme for Korean Environment (HOT차로 운영에 대한 효과분석 및 국내활용방안)

  • Choi, Kee Choo;Kim, Jin Howan;Oh, Seung Hwoon
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.1D / pp.25-32 / 2009
  • Currently, various types of TDM (Transportation Demand Management) policies are being studied and implemented in an attempt to overcome the limitations of supply-oriented policies. In this context, this paper addresses the effectiveness and possible domestic implementation of the HOT lane. The possible implementation site selected for this simulation study is part of the Kyung-bu freeway, where a dedicated bus lane is currently in operation. The minimum distance required between interchanges and HOT lane access points for vehicles to safely enter and exit the lane, and traffic management policies for effectively managing the weaving traffic entering and exiting the HOT lane, are presented. A 5.2 km section of freeway from Ki-heung IC to Suwon IC and an 8.3 km section from Hak-uei JC to Pan-gyo JC were selected as possible implementation sites for the HOT lane, where congestion occurs regularly due to the high level of travel demand. The VISSIM simulation program was used to analyze the effects of the HOT lane under the assumption that a one-lane HOT lane was put into operation in these sections and that the lane change rate was between 5% and 30%. The results of each scenario show that overall travel speed on the general lanes increased by 1.57~2.62 km/h after the implementation of the HOT lane. This study can serve as basic reference data for follow-up studies on the HOT lane as an effective TDM policy. Considering that the bus travel rate will continue to increase, and assuming improved travel speed on the general lanes, a similar scheme can be implemented where gaps between buses on the bus lane are available, as a possible alternative for efficient bus lane management.

Combustion Characteristic Study of LNG Flame in an Oxygen Enriched Environment (산소부화 조건에 따른 LNG 연소특성 연구)

  • Kim, Hey-Suk;Shin, Mi-Soo;Jang, Dong-Soon;Lee, Dae-Geun
    • Journal of Korean Society of Environmental Engineers / v.29 no.1 / pp.23-30 / 2007
  • The ultimate objective of this study is to develop oxygen-enriched combustion techniques applicable to practical industrial boiler systems. To this end, the combustion characteristics of a lab-scale LNG combustor were investigated as a first step using numerical simulation, by analyzing the flame characteristics and pollutant emission behaviour as a function of the oxygen enrichment level. Several useful conclusions could be drawn from this study. First of all, increasing the oxygen enrichment level instead of using air produced a long, thin flame with laminar flame features. This was in good agreement with experimental results in the open literature and is explained by the decrease of turbulent mixing caused by the reduced absolute oxidizer flow rate in the absence of nitrogen. Further, as expected, oxygen enrichment increased the flame temperatures to a significant level, together with the concentrations of CO2 and H2O species, because of the elimination of the heat sink and dilution effects of the inert N2 gas. However, the increased flame temperature with O2-enriched air implies a high possibility of thermal NOx generation if nitrogen species are present. To remedy this problem of oxygen-enriched combustion, an appropriate amount of recirculated CO2 gas is desirable to enhance the turbulent mixing and thereby the flame stability, and further optimal determination of operating conditions is necessary. For example, adjusting the burner to a swirl angle of 30~45° increased the combustion efficiency of the LNG fuel and simultaneously reduced NOx formation.

Study on the Consequence Effect Analysis & Process Hazard Review at Gas Release from Hydrogen Fluoride Storage Tank (최근 불산 저장탱크에서의 가스 누출시 공정위험 및 결과영향 분석)

  • Ko, JaeSun
    • Journal of the Society of Disaster Information / v.9 no.4 / pp.449-461 / 2013
  • As the hydrofluoric acid leak in Gumi-si, Gyeongsangbuk-do, and the hydrochloric acid leak in Ulsan, Gyeongsangnam-do, demonstrated, chemical-related accidents are mostly caused by large amounts of volatile toxic substances leaking due to damage to storage tanks or the pipelines of transporters. Safety assessment is the most important concern because such toxic material accidents cause human and material damage to the environment and atmosphere of the surrounding area. Therefore, in this study, hydrofluoric acid leaking from a storage tank was selected as the study example to simulate the leaked substance diffusing into the atmosphere, and result analysis was performed through numerical analysis and the diffusion simulation of ALOHA (Areal Locations of Hazardous Atmospheres). The results of a qualitative HAZOP (Hazard and Operability) evaluation showed that flange leaks, operation delays due to leakage of valves and hoses, and toxic gas leaks were danger factors. The possibility of fire from temperature, pressure, and corrosion, nitrogen supply overpressure, and toxic leaks from internal corrosion of tanks or pipe joints were also found to be high. The ALOHA results differed somewhat depending on the input data of the dense gas model; however, wind direction and speed played a bigger role than atmospheric stability, with higher wind speed dominating the diffusion of the contaminant. In terms of diffusion concentration, both liquid and gas leaks resulted in almost the same LC50 and ALOHA AEGL-3 (Acute Exposure Guideline Level) values, and each scenario showed almost identical results in the ALOHA model. Therefore, a buffer distance for toxic gas can be determined by comparing the numerical analysis and the diffusion concentration to the IDLH (Immediately Dangerous to Life or Health) value. This study will help perform risk assessments of toxic leaks more efficiently and can be utilized in properly establishing community emergency response systems.
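
For orientation, a passive Gaussian-plume centerline estimate, the textbook counterpart of what ALOHA computes, can be sketched as below (Briggs rural class-D dispersion coefficients are assumed purely for illustration; ALOHA's dense-gas model for heavy vapors such as HF behaves differently):

```python
import math

def centerline_concentration(q_g_per_s: float, wind_m_per_s: float,
                             x_m: float, release_height_m: float = 0.0) -> float:
    """Ground-level centerline concentration (g/m^3) of a steady Gaussian
    plume with ground reflection, using Briggs rural class-D (neutral)
    dispersion coefficients for illustration only."""
    sigma_y = 0.08 * x_m / math.sqrt(1 + 0.0001 * x_m)
    sigma_z = 0.06 * x_m / math.sqrt(1 + 0.0015 * x_m)
    return (q_g_per_s / (math.pi * wind_m_per_s * sigma_y * sigma_z)
            * math.exp(-release_height_m ** 2 / (2 * sigma_z ** 2)))

# Concentration scales as 1/u in this model: higher wind speed dilutes the
# plume, echoing the dominant role of wind speed noted in the study.
c_2ms = centerline_concentration(100.0, 2.0, 500.0)
c_5ms = centerline_concentration(100.0, 5.0, 500.0)
```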