• Title/Summary/Keyword: filter performance


Study on effect of fuel property change on vehicle important parts and exhaust gas (연료 물성 변화가 자동차 주요부품 및 배출가스에 미치는 영향 연구)

  • Lee, Jung-Cheon;Kim, Sung-Woo;Lee, Min-Ho;Kim, Ki-Ho;Park, An-Young
    • Journal of the Korean Applied Science and Technology / v.34 no.4 / pp.866-873 / 2017
  • Automobile exhaust regulations are being increasingly reinforced as environmental problems have come to the fore with industrial development. However, it is known that exhaust emissions are influenced not only by the vehicle's systems but also by the fuel properties. In particular, as CRDI (common-rail direct injection) diesel engines have been developed and commercialized, high-performance engines have required high-performance fuels with high lubricity. This paper examines how fuel property variations affect major automotive parts and exhaust gas. It was confirmed that the high-pressure pump, the injector, and the DPF (diesel particulate filter) were damaged and that fuel efficiency worsened when using fuel lacking lubricity (651 μm against a quality standard of less than 400 μm). In addition, since an iron component was detected in the broken DPF, it was estimated that the DPF breakage was caused by excessive exhaust of particulate matter due to the iron component in the fuel.

High Purification of Hg2Br2 Powder for Acousto-Optic Tunable Filters Utilizing a PVT Process (PVT공정을 이용한 음향광학 가변 필터용 Hg2Br2 파우더의 고순도 정제)

  • Kim, Tae Hyeon;Lee, Hee Tae;Kwon, In Hoi;Kang, Young-Min;Woo, Shi-Gwan;Jang, Gun-Eik;Cho, Byungjin
    • Korean Journal of Materials Research / v.28 no.12 / pp.732-737 / 2018
  • We develop a purification process for Hg2Br2 raw powder using a physical vapor transport (PVT) process, which is essential for the fabrication of a high-performance acousto-optic tunable filter (AOTF) module. Specifically, we characterize and compare three Hg2Br2 powders: raw powder, powder purified under pumping conditions, and powder purified under vacuum sealing. Before and after purification, we characterize the powder samples by X-ray diffraction and X-ray photoelectron spectroscopy. The results indicate that the physical properties of the Hg2Br2 compound are not damaged even after the purification process. The impurities and their concentrations in the purified Hg2Br2 powder are evaluated by inductively coupled plasma mass spectrometry. Notably, compared to the sample purified under pumping conditions, purification under vacuum sealing results in higher-purity Hg2Br2 (99.999 %). In addition, when a second vacuum-sealing purification is performed, the remaining impurities are almost entirely removed, giving rise to Hg2Br2 with ultra-high purity. This high level of purification is likely possible because impurities and the Hg2Br2 material can be controlled independently under the optimized vacuum sealing. Preparation of such highly purified Hg2Br2 material will pave a promising way toward high-quality Hg2Br2 single crystals and, in turn, high-performance AOTF modules.

Hardware Design of High-Performance SAO in HEVC Encoder for Ultra HD Video Processing in Real Time (UHD 영상의 실시간 처리를 위한 고성능 HEVC SAO 부호화기 하드웨어 설계)

  • Cho, Hyun-pyo;Park, Seung-yong;Ryoo, Kwang-ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.271-274 / 2014
  • This paper proposes a high-performance SAO (sample adaptive offset) design in an HEVC (High Efficiency Video Coding) encoder for real-time Ultra HD video processing. SAO is a newly adopted technique belonging to the in-loop filter in HEVC. The proposed SAO encoder hardware architecture uses three-layered buffers to minimize memory access time and simplify pixel processing, and uses only adders, subtractors, shift registers, and a feedback comparator to reduce area. Furthermore, the proposed architecture pipelines pixel classification and the application of SAO parameters, and classifies four consecutive pixels into EO (edge offset) and BO (band offset) concurrently. These choices reduce processing time and computation. The proposed SAO encoder architecture is designed in Verilog HDL and implemented with 180k logic gates in a TSMC 0.18 μm process. At 110 MHz, the proposed SAO encoder can support 4K Ultra HD video encoding at 30 fps in real time.
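The EO classification above follows the standard HEVC edge-offset convention: each pixel is compared with its two neighbors along one of four directions and sorted into a local-minimum/edge/local-maximum category. A minimal Python sketch under that assumption (function names are illustrative, not from the paper):

```python
def sao_edge_category(a, c, b):
    """Classify pixel c against its two neighbors a and b (one EO direction).

    Categories follow the HEVC edge-offset convention:
    1 = local minimum, 2 = concave edge, 3 = convex edge,
    4 = local maximum, 0 = none (monotonic region).
    """
    if c < a and c < b:
        return 1
    if (c < a and c == b) or (c == a and c < b):
        return 2
    if (c > a and c == b) or (c == a and c > b):
        return 3
    if c > a and c > b:
        return 4
    return 0

def apply_edge_offsets(row, offsets):
    """Apply per-category offsets (for categories 1..4) to the interior
    pixels of a 1-D row of samples."""
    out = list(row)
    for i in range(1, len(row) - 1):
        cat = sao_edge_category(row[i - 1], row[i], row[i + 1])
        if cat:
            out[i] = row[i] + offsets[cat - 1]
    return out
```

Classifying a pixel needs only two comparisons per direction, which is why the hardware described above can make do with adders, subtractors, and comparators.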


Technology Development for Non-Contact Interface of Multi-Region Classifier based on Context-Aware (상황 인식 기반 다중 영역 분류기 비접촉 인터페이스기술 개발)

  • Jin, Songguo;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.6 / pp.175-182 / 2020
  • Non-contact eye tracking is a nonintrusive human-computer interface providing hands-free communication for people with severe disabilities. Recently, it has also been expected to play an important role in non-contact systems due to the coronavirus (COVID-19) pandemic. This paper proposes a novel approach to an eye mouse using an eye-tracking method based on a context-aware AdaBoost multi-region classifier and an ASSL algorithm. The conventional AdaBoost algorithm, however, cannot provide sufficiently reliable performance in face tracking for eye cursor pointing estimation, because it cannot take advantage of the spatial context relations among facial features. Therefore, we propose an eye-region-context-based AdaBoost multiple classifier for efficient non-contact gaze tracking and mouse implementation. The proposed method detects, tracks, and aggregates various eye features to evaluate the gaze, and adjusts active and semi-supervised learning based on the on-screen cursor. The proposed system has been successfully employed for eye localization and can also be used to detect and track eye features. The system controls the computer cursor along the user's gaze, and the output of the real-time Kalman-filter tracking is post-processed with Gaussian modeling to prevent shaking. Target objects were randomly generated, and the eye-tracking performance was analyzed in real time according to Fitts' law. The proposed approach is expected to contribute to the utilization of non-contact interfaces.
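The cursor-stabilization step (Kalman filtering plus smoothing) can be illustrated with a minimal constant-velocity Kalman filter on one cursor axis. This is a generic sketch, not the paper's implementation; the noise parameters q and r are hypothetical tuning values:

```python
def kalman_smooth(measurements, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter on one cursor axis.

    q: process-noise scale, r: measurement-noise variance (both
    hypothetical tuning values). Returns the smoothed positions.
    """
    x, v = measurements[0], 0.0          # state: position, velocity
    pxx, pxv, pvv = 1.0, 0.0, 1.0        # covariance [[pxx, pxv], [pxv, pvv]]
    out = []
    for z in measurements:
        # predict (dt = 1 frame): x' = x + v, v' = v
        x, v = x + v, v
        pxx = pxx + 2 * pxv + pvv + q
        pxv = pxv + pvv + q
        pvv = pvv + q
        # update with the gaze measurement z (observation H = [1, 0])
        s = pxx + r
        kx, kv = pxx / s, pxv / s        # Kalman gain
        resid = z - x
        x += kx * resid
        v += kv * resid
        pxx, pxv, pvv = (1 - kx) * pxx, (1 - kx) * pxv, pvv - kv * pxv
        out.append(x)
    return out
```

A steady gaze produces a steady cursor, while single-frame jitter in the measured gaze point is attenuated by the gain rather than passed straight through to the cursor.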

Development of Urban Wildlife Detection and Analysis Methodology Based on Camera Trapping Technique and YOLO-X Algorithm (카메라 트래핑 기법과 YOLO-X 알고리즘 기반의 도시 야생동물 탐지 및 분석방법론 개발)

  • Kim, Kyeong-Tae;Lee, Hyun-Jung;Jeon, Seung-Wook;Song, Won-Kyong;Kim, Whee-Moon
    • Journal of the Korean Society of Environmental Restoration Technology / v.26 no.4 / pp.17-34 / 2023
  • Camera trapping has been used as a non-invasive survey method that minimizes anthropogenic disturbance to ecosystems. Nevertheless, it is labor-intensive and time-consuming, requiring researchers to quantify species and populations. In this study, we aimed to improve the preprocessing of camera-trapping data by utilizing an object detection algorithm. Wildlife monitoring using unmanned sensor cameras was conducted in an urban forest and in green space on a university campus in Cheonan City, Chungcheongnam-do, Korea. The collected camera-trapping data were classified by a researcher to identify the occurrence of species, and were then used to test the performance of the YOLO-X object detection algorithm for wildlife detection. The camera trapping yielded 10,500 images of the urban forest and 51,974 images of the campus green space. Out of the total 62,474 images, 52,993 images (84.82%) were false triggers containing no wildlife, while 9,481 images (15.18%) contained wildlife. As a result of the wildlife monitoring, 19 species of birds, 5 species of mammals, and 1 species of reptile were observed within the study area. In addition, there were statistically significant differences in the frequency of occurrence of the following species according to the type of urban greenery: Parus varius (t = -3.035, p < 0.01), Parus major (t = 2.112, p < 0.05), Passer montanus (t = 2.112, p < 0.05), Paradoxornis webbianus (t = 2.112, p < 0.05), Turdus hortulorum (t = -4.026, p < 0.001), and Sitta europaea (t = -2.189, p < 0.05). The detection performance of the YOLO-X model was then analyzed: it correctly classified 94.2% of the camera-trapping data, with 7,809 true positive predictions and 51,044 true negative predictions (empty images correctly rejected). In this study, the YOLO-X model was used with a filter activated to detect 10 specific animal taxa out of the 80 classes trained on the COCO dataset, without any additional training. In future studies, it is necessary to create and apply training data for key occurrence species to make the model suitable for wildlife monitoring.
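A quick arithmetic check: the reported 94.2% figure is consistent with treating the 7,809 detected wildlife images as true positives and the 51,044 correctly rejected empty images as true negatives:

```python
def detection_accuracy(tp, tn, total):
    """Overall accuracy: correctly classified images over all images."""
    return (tp + tn) / total

# Counts from the abstract: 62,474 camera-trap images in total,
# 7,809 wildlife images correctly detected (TP) and 51,044 empty
# images correctly rejected (TN).
acc = detection_accuracy(7809, 51044, 62474)
print(round(acc * 100, 1))  # → 94.2
```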

Performance Analysis of GPS and QZSS Orbit Determination using Pseudo Ranges and Precise Dynamic Model (의사거리 관측값과 정밀동역학모델을 이용한 GPS와 QZSS 궤도결정 성능 분석)

  • Beomsoo Kim;Jeongrae Kim;Sungchun Bu;Chulsoo Lee
    • Journal of Advanced Navigation Technology / v.26 no.6 / pp.404-411 / 2022
  • The main function in operating a satellite navigation system is to accurately determine the orbits of the navigation satellites and transmit them in the navigation message. In this study, we developed software to determine the orbit of a navigation satellite by combining an extended Kalman filter with a precise dynamic model. Global Positioning System (GPS) and Quasi-Zenith Satellite System (QZSS) orbit determination was performed using International GNSS Service (IGS) ground station observations, and the user range error (URE), a key performance indicator of a navigation system, was calculated by comparison with the IGS precise ephemeris. When the clock error of the navigation satellite is estimated, the radial orbit error and the clock error have a high inverse correlation and cancel each other out, so the standard deviations of the URE of GPS and QZSS are as small as 1.99 m and 3.47 m, respectively. Instead of estimating the clock error of the navigation satellite, the orbit was also determined by replacing the clock error with the modeled value from the navigation message, and the regional correlation with the URE and the effect of the ground station arrangement were analyzed.
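The cancellation between radial orbit error and clock error mentioned above is visible in the commonly quoted global-average SISRE/URE approximation for GPS, where the radial and clock terms enter as a difference. A sketch under that assumption (the weights 0.98 and 1/49 are the values usually cited for GPS altitude; the paper's exact URE formula may differ):

```python
import math

def gps_sisre(dr, da, dc, dt):
    """Common global-average SISRE approximation for GPS (MEO):

        SISRE = sqrt((w_r * dR - dT)^2 + (dA^2 + dC^2) / 49)

    dr/da/dc: radial, along-track, cross-track orbit errors [m];
    dt: satellite clock error expressed in meters. An error in dR is
    largely absorbed by an equal-sign error in dT, which is the
    radial/clock cancellation described in the abstract.
    """
    return math.sqrt((0.98 * dr - dt) ** 2 + (da ** 2 + dc ** 2) / 49.0)
```

For example, a 1 m radial error paired with a 0.98 m clock error contributes nothing to this URE measure, while the same radial error with a perfectly known clock contributes almost its full magnitude.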

Vehicle Visible Light Communication System Utilizing Optical Noise Mitigation Technology (광(光)잡음 저감 기술을 이용한 차량용 가시광 통신시스템)

  • Nam-Sun Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.6 / pp.413-419 / 2023
  • Light-emitting diodes (LEDs) are widely used not only in lighting but also in applications such as mobile phones, automobiles, and displays. The integration of LED lighting with communication, specifically visible light communication (VLC), has gained significant attention. This paper presents the direct implementation and experimental evaluation of a vehicle-to-vehicle (V2V) visible light communication system using the red and yellow LEDs commonly found in vehicles. Data collected from the leading vehicle, including position and speed information, were modulated using non-return-to-zero on-off keying (NRZ-OOK) and transmitted through the rear lights equipped with red and yellow LEDs. A photodetector (PD) received the visible light signals and demodulated and restored the data. To mitigate interference from fluorescent light and natural light, a second PD dedicated to interference removal was installed, and an interference-removal device using a polarizing filter and a differential amplifier was employed. The performance of the proposed visible light communication system was analyzed in an ideal case and in indoor and outdoor environments. In an outdoor setting, at a distance of approximately 30[cm] and a transmission rate of 4800[bps] for inter-vehicle data transmission, the red LED exhibited a performance improvement of approximately 13.63[dB], while the yellow LED showed an improvement of about 11.9[dB].
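The NRZ-OOK link with differential noise cancellation can be sketched as follows. This is a toy simulation, not the paper's hardware: both photodetectors are assumed to see the same ambient light, so subtracting the noise-only PD (the differential-amplifier step) leaves only the LED signal:

```python
import random

def nrz_ook_modulate(bits, high=1.0, low=0.0):
    """Map bits to LED intensity levels (NRZ-OOK: 1 -> on, 0 -> off)."""
    return [high if b else low for b in bits]

def differential_receive(signal_pd, noise_pd, threshold=0.5):
    """Subtract the noise-only photodetector from the signal
    photodetector (the differential-amplifier step), then slice."""
    return [1 if s - n > threshold else 0 for s, n in zip(signal_pd, noise_pd)]

# Hypothetical demo: both PDs see the same varying ambient light.
random.seed(0)
bits = [random.randint(0, 1) for _ in range(100)]
ambient = [2.0 + 0.3 * random.random() for _ in bits]   # fluorescent/sunlight
signal_pd = [s + a for s, a in zip(nrz_ook_modulate(bits), ambient)]
noise_pd = ambient                                      # PD shielded from the LED
assert differential_receive(signal_pd, noise_pd) == bits
```

Without the subtraction, the ambient level (2.0+) would sit far above the slicing threshold and every symbol would be read as 1; the differential step removes the common-mode light before the decision.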

Using noise filtering and sufficient dimension reduction method on unstructured economic data (노이즈 필터링과 충분차원축소를 이용한 비정형 경제 데이터 활용에 대한 연구)

  • Jae Keun Yoo;Yujin Park;Beomseok Seo
    • The Korean Journal of Applied Statistics / v.37 no.2 / pp.119-138 / 2024
  • Text indicators are increasingly valuable in economic forecasting, but are often hindered by noise and high dimensionality. This study aims to explore post-processing techniques, specifically noise filtering and dimensionality reduction, to normalize text indicators and enhance their utility through empirical analysis. Predictive target variables for the empirical analysis include monthly leading index cyclical variations, BSI (business survey index) all-industry sales performance, BSI all-industry sales outlook, as well as quarterly real GDP SA (seasonally adjusted) growth rate and real GDP YoY (year-on-year) growth rate. This study explores the Hodrick-Prescott (HP) filter, which is widely used in econometrics for noise filtering, and employs sufficient dimension reduction, a nonparametric dimensionality reduction methodology, in conjunction with unstructured text data. The analysis results reveal that noise filtering of text indicators significantly improves predictive accuracy for both monthly and quarterly variables, particularly when the dataset is large. Moreover, this study demonstrates that applying dimensionality reduction further enhances predictive performance. These findings imply that post-processing techniques, such as noise filtering and dimensionality reduction, are crucial for enhancing the utility of text indicators and can contribute to improving the accuracy of economic forecasts.
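The HP filtering step can be sketched in pure Python. The trend τ solves (I + λDᵀD)τ = y, where D is the second-difference operator; λ = 1600 is the conventional quarterly setting (the study's actual smoothing parameter is not stated in the abstract):

```python
def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott trend: solve (I + lam * D'D) tau = y, where D is
    the (n-2) x n second-difference operator. Pure-Python sketch using
    Gaussian elimination; lam=1600 is the conventional quarterly value."""
    n = len(y)
    # Build A = I + lam * D'D (each row k of D is [..., 1, -2, 1, ...]).
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 2):
        d = [0.0] * n
        d[k], d[k + 1], d[k + 2] = 1.0, -2.0, 1.0
        for i in range(k, k + 3):
            for j in range(k, k + 3):
                A[i][j] += lam * d[i] * d[j]
    # Solve A tau = y by Gaussian elimination with partial pivoting.
    b = list(map(float, y))
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    tau = [0.0] * n
    for r in range(n - 1, -1, -1):
        tau[r] = (b[r] - sum(A[r][c] * tau[c] for c in range(r + 1, n))) / A[r][r]
    return tau  # trend component; the cycle is y - tau
```

A useful sanity check of the construction: a purely linear series has zero second differences, so its HP trend is the series itself and the cyclical component is zero.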

The Performance Improvement of PLC by Using RTP Extension Header Data for Consecutive Frame Loss Condition in CELP Type Vocoder (CELP Type Vocoder에서 RTP 확장 헤더 데이터를 이용한 연속적인 프레임 손실에 대한 PLC 성능개선)

  • Hong, Seong-Hoon;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.29 no.1 / pp.48-55 / 2010
  • Speech quality degrades, especially when consecutive packet losses occur, even if a vocoder deployed in a packet network has its own packet loss concealment (PLC) algorithm. PLC algorithms divide into transmitter-side and receiver-side methods. Transmitter-side algorithms give superior quality through additional side information; however, they cannot provide mutual compatibility, and they incur extra delay and transmission rate. Receiver-side methods require no additional delay, but their ability to improve speech quality is limited. In this paper, we propose a new method that puts extra PLC information in a part of the extension header data that is not otherwise used in the RTP header. This solves the compatibility problem while achieving enhanced speech quality, and the proposed algorithm introduces no extra delay because the receiver's jitter buffer already absorbs network delay. Since the transmitter sends speech data every 20 ms in the RTP payload, 16 bits of extra information per frame are allocated for G.729 PLC: the MA filter index in LP synthesis, the excitation signal, the excitation signal gain, and residual gain reconstruction. As a result, the proposed method shows a performance improvement of about 13.5%.
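The idea of carrying PLC side information in the RTP header extension (RFC 3550, X bit set) can be sketched as follows. The profile-defined extension id and the field layout here are hypothetical illustrations; only the 16-bits-per-frame budget comes from the abstract:

```python
import struct

EXT_PROFILE = 0x1000  # hypothetical profile-defined extension identifier

def build_rtp_with_extension(seq, ts, ssrc, plc_words, payload, pt=18):
    """Build an RTP packet whose header extension (RFC 3550) carries
    per-frame PLC side information. plc_words: 16-bit values (16 bits
    per frame, as in the abstract); pt=18 is the static G.729 payload
    type."""
    if len(plc_words) % 2:                   # pad to a 32-bit boundary
        plc_words = plc_words + [0]
    header = struct.pack('!BBHII',
                         (2 << 6) | (1 << 4),  # V=2, P=0, X=1, CC=0
                         pt, seq, ts, ssrc)
    ext = struct.pack('!HH', EXT_PROFILE, len(plc_words) // 2)  # len in 32-bit words
    ext += struct.pack('!%dH' % len(plc_words), *plc_words)
    return header + ext + payload

def parse_extension(packet):
    """Return the PLC words if the X bit is set, else an empty list."""
    if not (packet[0] >> 4) & 1:
        return []
    profile, length = struct.unpack_from('!HH', packet, 12)
    n = length * 2
    return list(struct.unpack_from('!%dH' % n, packet, 16))
```

Because the extension rides inside the normal RTP header, receivers that ignore it still decode the G.729 payload unchanged, which is the compatibility property the paper relies on.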

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, a Go-playing artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that a machine would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems, with especially good performance in image recognition, and it also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to achieve good performance with existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer intends to open an account.
In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested due to the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output channels (filters), and the application of the dropout technique. The F1 score was used to evaluate the models, since it shows how well a model classifies the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but the distance between business data fields usually does not matter because each field is independent. In this experiment, we therefore set the filter size of the CNN to the number of fields so that it learns the characteristics of the whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of field position. For the dropout technique, neurons were dropped with probability 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best model was the MLP model with two hidden layers using dropout.
From the experiments, we obtained several findings. First, models using dropout make slightly more conservative predictions than those without it, and they generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because the CNN performed well not only in fields where its effectiveness has been proven but also in binary classification problems to which it has rarely been applied. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
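The F1 score used throughout the study balances precision and recall, which matters when the positive class (customers who open an account) is rare. A minimal sketch with hypothetical confusion counts:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall. Preferred over
    overall accuracy when the interesting class is rare, as in
    telemarketing response data where most customers decline."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: a conservative model (fewer positive predictions)
# can raise precision enough to win on F1 despite lower recall.
print(f1_score(tp=80, fp=20, fn=40))   # precision 0.8, recall = 2/3
```

This also illustrates the first finding above: a model that predicts the positive class more conservatively trades recall for precision, and F1 rewards that trade when the precision gain outweighs the recall loss.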