• Title/Summary/Keyword: Adaptive technique

Design of High-Speed Multi-Layer PCB for Ultra High Definition Video Signals

  • Jin, Jong-Ho; Son, Hui-Bae; Rhee, Young-Chul
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.7 / pp.1639-1645 / 2015
  • In a UHD high-speed video transmission system, when a signal in a certain frequency region coincides electrically and structurally with the board, the system becomes unstable: energy is concentrated, and the signal flux is interfered with and distorted. Addressing this instability requires a power integrity analysis. To remove signal distortion in the multi-layer board (MLB), a high-frequency design technique for the EMI phenomenon was applied, and the EMI radiated by electromagnetic energy flowing into the power layer was analyzed with system stabilization in mind. In this paper, we propose an adaptive MLB design method that minimizes high-frequency noise in the MLB structure, enhances signal integrity and power integrity, and suppresses EMI. The characteristic-impedance parameters proposed for the multi-layer circuit board were a High-Speed Video Differential Signaling (HSVDS) line width w = 0.203, line gap d = 0.203, dielectric layer height h = 0.145, line thickness t = 0.0175, and dielectric constant εr = 4.3, giving a characteristic impedance Zdiff = 100.186 Ω. When a high-speed video differential signal interface board was tested with the optimized parameters, the eye-diagram output magnitude was 672 mV, jitter was 6.593 ps, the transmission frequency was 1.322 GHz, and the signal-to-noise ratio was 29.62 dB, a transmission-quality improvement of 10 dB over the previous system.
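
The abstract does not state which impedance formula was used, but the widely cited IPC-2141-style edge-coupled microstrip approximation reproduces the quoted Zdiff almost exactly from the listed geometry. A minimal sketch, assuming the dimensions are in mm (only the ratios w/h, t/h, d/h enter the formula, so the unit cancels):

```python
import math

def microstrip_z0(w, h, t, er):
    """Single-ended surface microstrip impedance (IPC-2141-style approximation)."""
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h / (0.8 * w + t))

def differential_z(w, d, h, t, er):
    """Edge-coupled differential impedance (IPC-2141-style approximation)."""
    return 2.0 * microstrip_z0(w, h, t, er) * (1.0 - 0.48 * math.exp(-0.96 * d / h))

# Parameters quoted in the abstract (apparently mm; unit cancels in the ratios)
print(differential_z(w=0.203, d=0.203, h=0.145, t=0.0175, er=4.3))
```

Running this gives about 100.2 Ω, matching the reported Zdiff = 100.186 Ω.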

Seismic investigation of cyclic pushover method for regular reinforced concrete bridge

  • Shafigh, Afshin; Ahmadi, Hamid Reza; Bayat, Mahmoud
    • Structural Engineering and Mechanics / v.78 no.1 / pp.41-52 / 2021
  • Inelastic static pushover analysis has been widely used in academic research for the seismic analysis of structures. A variety of pushover methods have since been developed, including modal, adaptive, and cyclic pushover, which rectify some weaknesses of the conventional method. Conventional pushover analysis does not consider the effect of the cumulative growth of cracks on the reduction of strength and stiffness of RC members that occurs during earthquake or cyclic loading. The Cyclic Pushover Analysis (CPA) method was therefore proposed; it is a powerful technique for the seismic evaluation of regular reinforced concrete buildings in which the first mode is dominant. Since bridges are structured differently from buildings, those results cannot simply be transferred to bridges, and more research is needed. In this study, cyclic pushover analysis with four loading protocols (suggested by established references) was conducted in the OpenSees software for the seismic evaluation of two regular reinforced concrete bridges. The modeling approach was validated by comparing analytical and experimental results under both cyclic and dynamic loading. Pier failure was considered in two modes: flexural failure and flexural-shear failure. Conventional pushover analysis was carried out alongside the cyclic analysis, and nonlinear incremental dynamic analysis (IDA) was used to examine and benchmark the pushover results; time histories of 20 far-field earthquake records were used for the IDA. After the analyses, base shear was plotted against displacement at the middle of the deck. The results show that the cyclic pushover method can accurately evaluate the seismic behavior of the reinforced concrete piers of the bridges and converges well toward the IDA results. Its accuracy was much higher than that of the conventional pushover when the bridge piers failed in flexural-shear mode; in the flexural failure mode, the two pushover methods gave approximately equal results. Moreover, cyclic pushover with the ACI or ATC-24 loading protocol provides more accurate results for the seismic evaluation of bridges, especially when the piers fail in flexural-shear mode.
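
The four loading protocols are not reproduced in the abstract; as a rough illustration of what a cyclic pushover input looks like, the sketch below generates a symmetric, stepwise-increasing displacement history of the general ATC-24 kind. The amplitude levels, cycle counts, and yield displacement `dy` are placeholders, not the paper's values:

```python
import numpy as np

def cyclic_protocol(amplitudes, cycles_per_level=3, points_per_cycle=100):
    """Schematic cyclic pushover displacement history: symmetric sine
    cycles repeated at stepwise-increasing amplitude levels."""
    t = np.linspace(0.0, 2.0 * np.pi, points_per_cycle, endpoint=False)
    history = [amp * np.sin(t)
               for amp in amplitudes
               for _ in range(cycles_per_level)]
    return np.concatenate(history)

dy = 10.0  # mm, placeholder yield displacement
disp = cyclic_protocol([0.5 * dy, 1.0 * dy, 2.0 * dy, 3.0 * dy])
```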

Numerical comparative investigation on blade tip vortex cavitation and cavitation noise of underwater propeller with compressible and incompressible flow solvers

  • Ha, Junbeom; Ku, Garam; Cho, Junghoon; Cheong, Cheolung; Seol, Hanshin
    • The Journal of the Acoustical Society of Korea / v.40 no.4 / pp.261-269 / 2021
  • Most previous studies of cavitation flow and its noise have used numerical methods based on the incompressible Reynolds-averaged Navier-Stokes (RANS) equations for their efficiency, without validating the incompressible assumption. In this study, to investigate the effect of flow compressibility on Tip Vortex Cavitation (TVC) flow and noise, both incompressible and compressible simulations of the TVC flow are performed, and the Ffowcs Williams-Hawkings (FW-H) integral equation is used to predict the TVC noise. The DARPA Suboff submarine body with an underwater propeller of 17-degree skew angle is targeted to account for the effect of upstream disturbance. The computational domain is set to match the test section of the large cavitation tunnel at the Korea Research Institute of Ships and Ocean Engineering so that predictions can be compared with measurements. To predict the TVC accurately, the Delayed Detached Eddy Simulation (DDES) technique is used in combination with adaptive grid techniques. The acoustic spectrum obtained with the compressible flow solver shows closer agreement with the measured one.
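
The comparison with measurements is made on acoustic spectra. The paper's post-processing is not specified, but a common way to produce such a spectrum from an FW-H acoustic pressure time history is a Welch power-spectral-density estimate; a minimal sketch with a placeholder signal and sampling rate:

```python
import numpy as np
from scipy.signal import welch

fs = 100_000                          # Hz, placeholder sampling rate
t = np.arange(0.0, 1.0, 1.0 / fs)
p = 1e-2 * np.random.randn(t.size)    # stand-in for FW-H acoustic pressure [Pa]

f, psd = welch(p, fs=fs, nperseg=4096)      # Pa^2/Hz
p_ref = 1e-6                                # Pa, underwater reference pressure
spl = 10.0 * np.log10(psd / p_ref**2)       # spectrum level, dB re 1 uPa^2/Hz
```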

A Study on Dimming Improvement and Flicker Reduction in Visible Light Communication System

  • Han, Doo-Hee; Lee, Kyu-Jin
    • Journal of Industrial Convergence / v.21 no.2 / pp.125-131 / 2023
  • In this paper, we address the reduced dimming level and the flicker that occur in visible light communication systems. Visible light communication is a convergence technology that provides both communication and lighting, so it must satisfy lighting performance as well as communication performance. The existing data transmission method, however, transmits without considering the transmitted data sequence, which lowers the dimming level and causes flicker. To solve this problem, we study a Dimming Improvement and Flicker Reduction (DIFR) mapping technique. Existing systems simply transmitted '0' and '1' data; in this system, an original-data transmission channel and DIFR transmission channels are assigned to the RGB channels. The original data is allocated to the R channel, the original data or its inverse is allocated to the DIFR-G channel, and the DIFR-B channel maintains the maximum dimming level by transmitting the result of a logical operation on the R and G channels. At the same time, flicker is prevented by avoiding consecutive 'OFF' patterns. On this basis we propose an adaptive data allocation algorithm that serves faithfully as lighting as well as communication.
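
The abstract leaves the exact channel logic open; the sketch below shows one possible reading, with G fixed to the inverse of R and B = R OR G, which keeps exactly two of the three channels ON for every symbol (constant dimming at this mapping's maximum, never all-OFF). This is an interpretation for illustration, not the paper's specification:

```python
from typing import List, Tuple

def difr_map(bits: List[int]) -> List[Tuple[int, int, int]]:
    """One reading of the DIFR mapping: R = original bit, G = original bit
    or its inverse (inverse chosen here), B = logical OR of R and G so the
    symbol is never all-OFF."""
    symbols = []
    for b in bits:
        r = b
        g = 1 - b       # inverse original data on the DIFR-G channel
        blue = r | g    # logical operation of R and G; always 1 with this choice
        symbols.append((r, g, blue))
    return symbols

print(difr_map([1, 0, 0, 1, 0]))
# [(1, 0, 1), (0, 1, 1), (0, 1, 1), (1, 0, 1), (0, 1, 1)]
```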

Building a Model to Estimate Pedestrians' Critical Lags on Crosswalks

  • Kim, Kyung Whan; Kim, Daehyon; Lee, Ik Su; Lee, Deok Whan
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.1D / pp.33-40 / 2009
  • The critical lag of crosswalk pedestrians is an important parameter in analyzing traffic operation at unsignalized crosswalks, yet there has been little research on it in Korea. The purpose of this study is to develop a model to estimate the critical lag. Among the factors that influence it, pedestrian age and crosswalk length, which have fuzzy characteristics, were collected along with each rejected or accepted lag on crosswalks ranging from 3.5 m to 10.5 m in length. The observed critical lags range from 2.56 s to 5.56 s. Age and crosswalk length are each divided into three fuzzy variables, and the critical lag of each case is estimated with Raff's technique, yielding a total of 9 fuzzy rules. Based on these rules, an ANFIS (Adaptive Neuro-Fuzzy Inference System) model for estimating the critical lag is built. The model's predictive ability is evaluated by comparing observed critical lags with those estimated by the model; the $R^2$, MAE, and MSE statistics are 0.96, 0.097, and 0.015 respectively, so the model explains the data well. The study also found that the critical lag increases rapidly beyond a pedestrian age of 40 years.
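
Raff's technique defines the critical lag as the time t at which the cumulative share of accepted lags shorter than t equals the share of rejected lags longer than t (the intersection of the two curves). A minimal sketch with hypothetical lag samples, not the paper's data:

```python
import numpy as np

def raff_critical_lag(accepted, rejected):
    """Raff's method: find t where P(accepted lag < t) = P(rejected lag > t)."""
    accepted, rejected = np.sort(accepted), np.sort(rejected)
    ts = np.linspace(0.0, max(accepted.max(), rejected.max()), 1000)
    f_acc = np.searchsorted(accepted, ts) / accepted.size        # P(accepted < t)
    f_rej = 1.0 - np.searchsorted(rejected, ts) / rejected.size  # P(rejected > t)
    return ts[np.argmin(np.abs(f_acc - f_rej))]

acc = np.array([3.1, 3.8, 4.2, 4.9, 5.5, 6.0])  # hypothetical accepted lags, s
rej = np.array([1.2, 1.9, 2.5, 3.0, 3.6, 4.1])  # hypothetical rejected lags, s
print(raff_critical_lag(acc, rej))              # crossing near 3.6-3.8 s here
```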

Estimation of the Marginal Walking Time of Bus Users in Small-Medium Cities

  • Kim, Kyung Whan; Yoo, Hwan Hee; Lee, Sang Ho
    • KSCE Journal of Civil and Environmental Engineering Research / v.28 no.4D / pp.451-457 / 2008
  • Establishing realistic bus service coverage is needed to build optimal city bus line networks and reasonable bus service coverage areas. The purposes of this study are to characterize the present and marginal walking times in small-medium cities and to construct an ANFIS (Adaptive Neuro-Fuzzy Inference System) model that estimates the marginal walking time for a given age and income. Masan, Chongwon, and Jinju were selected as study cities. The 80th-percentile present walking time of bus users in these cities is 10.2-11.1 minutes, greater than the 5-minute maximum walking time used in the USA, and the marginal walking times of 21.1-21.8 minutes are much greater still. An ANFIS model based on the pooled data of the cities was constructed to estimate the marginal walking time for small-medium cities. Analyzing the relationship between marginal walking time and age/income with the model, the marginal walking time decreases as age increases, but is nearly constant from age 25 to 35, and it is inversely proportional to income. Comparing surveyed and estimated values, the coefficient of determination, MSE, and MAE are 0.996, 0.163, and 0.333 respectively, so the explanatory power of the model is judged to be very high. The technique developed in this study can be applied to other cities.
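
ANFIS trains a Sugeno-type fuzzy system; the zero-order sketch below shows only the inference step for age and income inputs with Gaussian memberships. All centers, widths, and rule consequents are illustrative placeholders, not the paper's fitted parameters:

```python
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_walk_time(age, income):
    """Zero-order Sugeno inference of the kind ANFIS fits: rules
    'IF age is A_i AND income is B_j THEN t = c_ij', output is the
    firing-strength-weighted average of the consequents."""
    age_sets = [(20, 10), (40, 10), (60, 10)]        # young / middle / old
    inc_sets = [(1.0, 0.7), (2.5, 0.7), (4.0, 0.7)]  # low / mid / high income
    c = np.array([[24, 22, 20],                      # consequent minutes, placeholders
                  [22, 20, 18],
                  [20, 18, 16]])
    w = np.array([[gauss(age, ca, sa) * gauss(income, ci, si)
                   for (ci, si) in inc_sets]
                  for (ca, sa) in age_sets])
    return float((w * c).sum() / w.sum())

print(round(sugeno_walk_time(age=30, income=2.0), 1))
```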

Validation of Deep-Learning Image Reconstruction for Low-Dose Chest Computed Tomography Scan: Emphasis on Image Quality and Noise

  • Joo Hee Kim; Hyun Jung Yoon; Eunju Lee; Injoong Kim; Yoon Ki Cha; So Hyeon Bak
    • Korean Journal of Radiology / v.22 no.1 / pp.131-138 / 2021
  • Objective: Iterative reconstruction degrades image quality; thus, further advances in image reconstruction are necessary to overcome its limitations in low-dose computed tomography (LDCT) of the chest. Deep-learning image reconstruction (DLIR) is a new method used to reduce dose while maintaining image quality. The purpose of this study was to evaluate the image quality and noise of LDCT images reconstructed with DLIR and to compare them with images reconstructed with adaptive statistical iterative reconstruction-Veo at a level of 30% (ASiR-V 30%). Materials and Methods: This retrospective study included 58 patients who underwent LDCT for lung cancer screening. Datasets were reconstructed with ASiR-V 30% and with DLIR at medium and high levels (DLIR-M and DLIR-H, respectively). The objective image signal and noise, which represent the mean attenuation value and standard deviation in Hounsfield units for the lungs, mediastinum, liver, and background air, and the subjective image contrast, image noise, and conspicuity of structures were evaluated, and the differences among the ASiR-V 30%, DLIR-M, and DLIR-H images were assessed. Results: In the objective analysis, the image signals did not differ significantly among ASiR-V 30%, DLIR-M, and DLIR-H (p = 0.949, 0.737, 0.366, and 0.358 in the lungs, mediastinum, liver, and background air, respectively), but the noise was significantly lower in DLIR-M and DLIR-H than in ASiR-V 30% (all p < 0.001). DLIR yielded higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) than ASiR-V 30% (p = 0.027, < 0.001, and < 0.001 for the SNR of the lungs, mediastinum, and liver, respectively; all p < 0.001 for the CNR). In the subjective analysis, DLIR showed higher image contrast and lower image noise than ASiR-V 30% (all p < 0.001) and was superior in identifying the pulmonary arteries and veins, trachea and bronchi, lymph nodes, and pleura and pericardium (all p < 0.001). Conclusion: DLIR significantly reduced image noise in chest LDCT compared with ASiR-V 30% while maintaining superior image quality.
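
The paper does not print its SNR/CNR formulas, but the standard ROI-based definitions used in CT image-quality studies are simple; a sketch with hypothetical HU values (lung ROIs have negative mean HU, so the absolute value is often taken in the numerator):

```python
def snr(roi_mean_hu: float, roi_sd_hu: float) -> float:
    """ROI signal-to-noise ratio: mean attenuation (HU) over noise (SD, HU)."""
    return abs(roi_mean_hu) / roi_sd_hu

def cnr(roi_mean_hu: float, bg_mean_hu: float, bg_sd_hu: float) -> float:
    """Contrast-to-noise ratio: ROI-to-background contrast over background noise."""
    return abs(roi_mean_hu - bg_mean_hu) / bg_sd_hu

# Hypothetical liver ROI: the lower noise (DLIR-like) raises both metrics
print(snr(60.0, 12.0), snr(60.0, 8.0))
print(cnr(60.0, -5.0, 12.0), cnr(60.0, -5.0, 8.0))
```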

Adaptive RFID anti-collision scheme using collision information and m-bit identification

  • Lee, Je-Yul; Shin, Jongmin; Yang, Dongmin
    • Journal of Internet Computing and Services / v.14 no.5 / pp.1-10 / 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. Tags are either active or passive: active tags carry a power source and operate on their own, while passive tags are small and low-cost, which makes them more suitable than active tags for the distribution industry. The reader processes the information received from the tags, and the system identifies multiple tags quickly over radio frequency. RFID systems have been applied in a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage their adoption, several problems (price, size, power consumption, security) must be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by the simultaneous responses of multiple tags. Anti-collision schemes fall into three classes: probabilistic, deterministic, and hybrid; ALOHA-based protocols are probabilistic and Tree-based protocols are deterministic. In ALOHA-based protocols, time is divided into slots and each tag randomly selects a slot in which to transmit its ID; being probabilistic, these protocols cannot guarantee that all tags are identified. In contrast, Tree-based protocols guarantee that the reader identifies every tag within its transmission range: the reader sends a query, tags respond with their IDs, and when two or more tags respond, a collision occurs and the reader issues a new query. Frequent collisions degrade identification performance, so identifying tags quickly requires reducing collisions efficiently. Each RFID tag carries a 96-bit EPC (Electronic Product Code) ID, and tags from the same company or manufacturer have similar IDs sharing a prefix, so unnecessary collisions occur when identifying multiple tags with the Query Tree protocol; query-responses and idle time grow, and the identification time increases significantly. The Collision Tree protocol and the M-ary Query Tree protocol have been proposed to address this, but Collision Tree and Query Tree identify only one bit per query-response, and M-ary Query Tree generates unnecessary query-responses when similar tag IDs exist. In this paper, we propose an Adaptive M-ary Query Tree protocol that improves identification performance using m-bit recognition, the collision information of tag IDs, and a prediction technique. We compare the proposed scheme with other Tree-based protocols under the same conditions and show that it outperforms them in both identification time and identification efficiency.
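
For reference, the baseline binary Query Tree protocol that the proposed Adaptive M-ary variant improves on can be simulated in a few lines; the tag IDs below are hypothetical 4-bit strings rather than 96-bit EPCs:

```python
from collections import deque

def query_tree(tag_ids):
    """Basic binary Query Tree identification. The reader broadcasts a
    prefix; tags whose ID starts with it respond; a collision (>= 2
    responders) splits the query into prefix+'0' and prefix+'1'."""
    identified, queries = [], 0
    frontier = deque([""])            # start with the empty prefix
    while frontier:
        prefix = frontier.popleft()
        queries += 1
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 1:      # exactly one reply: tag identified
            identified.append(responders[0])
        elif len(responders) > 1:     # collision: extend the prefix by one bit
            frontier.extend([prefix + "0", prefix + "1"])
    return identified, queries

tags = ["0010", "0011", "0110", "1100"]   # shared prefixes force collisions
ids, n_queries = query_tree(tags)
print(ids, n_queries)                     # all four identified in 9 queries
```

The shared "00" prefix is what drives the extra query rounds here; m-bit recognition and collision information, as in the paper, aim to skip those one-bit-at-a-time splits.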

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies

  • Heo, Junyoung; Yang, Jin Yong
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue amid the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of the industry's distinctive capital structure and debt-to-equity ratios they are also harder to forecast. Construction firms operate on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of a project. The economic cycle strongly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates, and high leverage coupled with rising bankruptcy rates places a greater burden on the banks that lend to them. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy forecasting models based on corporate financial data have been studied for many years in various forms, but they target companies in general and may not be appropriate for accurately forecasting companies with disproportionately large liquidity risks, such as construction companies. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; with this unique capital structure, criteria used to judge the financial risk of companies in general are difficult to apply to construction firms. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It estimates the likelihood of bankruptcy with a simple formula, classifying firms into three categories: dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while one in the "safe" category has a low likelihood; for companies in the "moderate" category the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made their risk difficult to forecast. With the development of machine learning, recent studies of corporate bankruptcy forecasting have adopted it: pattern recognition, a representative application of machine learning, is applied by analyzing patterns in a company's financial information and judging whether the pattern belongs to the bankruptcy-risk group or the safe group.
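
For context, the original 1968 Altman Z-score for publicly traded manufacturers combines five financial ratios with fixed coefficients and two cutoffs that define the three zones; a sketch with hypothetical ratios (the illustrative inputs happen to land in the hard-to-classify middle zone the abstract describes):

```python
def altman_z(x1, x2, x3, x4, x5):
    """Original Altman (1968) Z-score.
    x1: working capital / total assets
    x2: retained earnings / total assets
    x3: EBIT / total assets
    x4: market value of equity / book value of total liabilities
    x5: sales / total assets"""
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    if z < 1.81:
        return "dangerous"   # high bankruptcy likelihood within ~2 years
    if z > 2.99:
        return "safe"
    return "moderate"        # the grey zone where risk is hard to forecast

print(zone(altman_z(0.1, 0.05, 0.03, 0.8, 1.1)))  # hypothetical ratios -> moderate
```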
The machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), along with many hybrid studies combining them. Existing studies, whether using the traditional Z-score technique or machine-learning bankruptcy prediction, focus on companies from no specific industry, so industry-specific characteristics are not considered. In this paper, we confirm that AdaBoost is the most appropriate forecasting model for construction companies based on company size. We classified construction companies into three groups - large, medium, and small - based on capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results show that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
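
A minimal sketch of the modeling step with scikit-learn's AdaBoostClassifier; the 500-firm sample, the eight features, and the labels are synthetic placeholders, and the paper's per-size-group fitting is collapsed to a single group:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: rows = construction firms, columns = financial ratios
# (e.g. debt-to-equity, liquidity, profitability); y = 1 means bankrupt.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The paper fits and evaluates per size group (large/medium/small by capital);
# a single group stands in for that procedure here.
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```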

Behavioural Analysis of Password Authentication and Countermeasure to Phishing Attacks - from User Experience and HCI Perspectives

  • Ryu, Hong Ryeol; Hong, Moses; Kwon, Taekyoung
    • Journal of Internet Computing and Services / v.15 no.3 / pp.79-90 / 2014
  • User authentication based on an ID and password (PW) is widely used. As the Internet has become a growing part of people's lives, the number of times IDs/PWs are entered has increased across a variety of services. People have learned the authentication procedure so thoroughly that they enter their ID/PW without conscious attention. This is the adaptive unconscious: a set of mental processes that take in information and produce judgements and behaviors outside conscious awareness, within a second. Most people sign up for many websites with a small number of IDs/PWs because they rely on memory to manage them; human memory decays with time, and items in memory interfere with one another, so people may enter an invalid ID/PW. These characteristics of ID/PW authentication lead to human vulnerabilities: people reuse a few PWs across websites, manage IDs/PWs from memory, and enter them unconsciously. Exploiting these human factors, information leakage attacks such as phishing and pharming have increased exponentially. In the past, information leakage attacks exploited vulnerabilities in hardware, operating systems, and software; most current attacks instead exploit the vulnerabilities of the human factor. Such attacks are called social-engineering attacks, and malicious social-engineering techniques such as phishing and pharming are now among the biggest security problems. Phishing attempts to obtain valuable information such as IDs/PWs, while pharming steals personal data by redirecting a website's traffic to a fraudulent copy of a legitimate site. The screens of the fraudulent copies used in both attacks are almost identical to those of the legitimate websites, and in the case of pharming even the URL address can be deceptive. Without the support of prevention and detection techniques such as anti-virus software and reputation systems, it is therefore difficult for users to determine intuitively whether a site is legitimate or a phishing/pharming copy. Previous research on phishing and pharming has mainly studied technical solutions. In this paper, we focus on how users behave when confronted with phishing and pharming attacks they are unaware of. We conducted an attack experiment to find out how many IDs/PWs are leaked by pharming and phishing. We first configured experimental settings replicating the conditions of phishing and pharming attacks and built a phishing site for the experiment, then recruited 64 voluntary participants and asked them to log in to the experimental site; each participant also completed a questionnaire about the experiment. Through the attack experiment and survey, we observed whether passwords were leaked when logging in to the experimental phishing site, and how many distinct passwords were leaked out of each participant's total. We found that most participants logged in to the site unconsciously, and that ID/PW management dependent on human memory caused the leakage of multiple passwords.
Users should actively utilize reputation systems, and online service providers should support prevention techniques that let users intuitively determine whether a site is a phishing site.