• Title/Abstract/Keyword: complex signal processing

Search results: 261

JPEG Pleno: Providing representation interoperability for holographic applications and devices

  • Schelkens, Peter;Ebrahimi, Touradj;Gilles, Antonin;Gioia, Patrick;Oh, Kwan-Jung;Pereira, Fernando;Perra, Cristian;Pinheiro, Antonio M.G.
    • ETRI Journal
    • /
    • v.41 no.1
    • /
    • pp.93-108
    • /
    • 2019
  • Guaranteeing interoperability between devices and applications is the core role of standards organizations. Since its first JPEG standard in 1992, the Joint Photographic Experts Group (JPEG) has published several image coding standards that have been successful in a plethora of imaging markets. Recently, these markets have become subject to potentially disruptive innovations owing to the rise of new imaging modalities such as light fields, point clouds, and holography. These so-called plenoptic modalities hold the promise of facilitating a more efficient and complete representation of 3D scenes when compared to classic 2D modalities. However, due to the heterogeneity of plenoptic products that will hit the market, serious interoperability concerns have arisen. In this paper, we particularly focus on the holographic modality and outline how the JPEG committee has addressed these tremendous challenges. We discuss the main use cases and provide a preliminary list of requirements. In addition, based on the discussion of real-valued and complex data representations, we elaborate on potential coding technologies that range from approaches utilizing classical 2D coding technologies to holographic content-aware coding solutions. Finally, we address the problem of visual quality assessment of holographic data covering both visual quality metrics and subjective assessment methodologies.
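
As a minimal illustration of the real-valued versus complex data representations discussed in the abstract (not part of the JPEG Pleno specification itself), the Python sketch below converts a complex-valued hologram field between real/imaginary planes and amplitude/phase planes; either real-valued pair can be handed to a classical 2D codec and the complex field recovered afterwards. The field here is random stand-in data.

```python
import numpy as np

# Hypothetical complex-valued hologram field (random data stands in for a real capture).
rng = np.random.default_rng(0)
field = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Real-valued representation 1: real/imaginary planes.
real_plane, imag_plane = field.real, field.imag

# Real-valued representation 2: amplitude/phase planes.
amplitude, phase = np.abs(field), np.angle(field)

# Either pair can be fed to a classical 2D image codec; the complex field is recoverable.
reconstructed = amplitude * np.exp(1j * phase)
assert np.allclose(reconstructed, field)
```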

NEW ASPECTS OF MEASURING NOISE AND VIBRATION

  • Genuit, K.
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06a
    • /
    • pp.796-801
    • /
    • 1994
  • Measuring noise, sound quality, or acoustical comfort is a difficult task for the acoustic engineer. Sound and noise are ultimately judged by human beings acting as analysers. Regulations for determining noise levels are based on A-weighted SPL measurements performed with only one microphone. This method of measurement is usually specified when determining whether the ear can be physically damaged; such a simple procedure cannot determine the annoyance of sound events or sound quality in general. For some years, investigations with binaural measurement and analysis techniques have shown new possibilities for the objective determination of sound quality. By using Artificial Head technology /1/, /2/ in conjunction with psychoacoustic evaluation algorithms, and by taking into account the binaural signal processing of human hearing, considerable progress has been made in the analysis of sounds. Because sound events often arise in a complex way, direct conclusions about the causes and transmission paths of components that are subjectively judged to be annoying can be drawn only to a limited extent. A new procedure has therefore been developed that complements binaural measurement technology with multi-channel measurements of acceleration sensor signals. It correlates the signals influencing sound quality, analyzed by means of human hearing, with the signals from acceleration sensors fixed at different positions on the sound source. It is now possible to recognize the source and the transmission path of those signals which influence the annoyance of a sound.
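
A minimal sketch of the correlation idea described in the abstract, assuming synthetic microphone and accelerometer signals and plain normalized cross-correlation; the signal names and values are hypothetical and do not reproduce the authors' measurement setup.

```python
import numpy as np

def normalized_xcorr_peak(mic, accel):
    """Peak of the normalized cross-correlation between a microphone and an accelerometer signal."""
    mic = (mic - mic.mean()) / (mic.std() + 1e-12)
    accel = (accel - accel.mean()) / (accel.std() + 1e-12)
    xcorr = np.correlate(mic, accel, mode="full") / len(mic)
    k = np.argmax(np.abs(xcorr))
    return xcorr[k], k - (len(mic) - 1)

# Hypothetical signals: one binaural channel and accelerometers at two mounting points.
fs = 8000
t = np.arange(fs) / fs
mic = np.sin(2 * np.pi * 120 * t) + 0.2 * np.random.randn(fs)
accel_engine = np.sin(2 * np.pi * 120 * (t - 0.003))   # structurally correlated path
accel_panel = np.random.randn(fs)                       # uncorrelated path

for name, acc in [("engine mount", accel_engine), ("panel", accel_panel)]:
    peak, lag = normalized_xcorr_peak(mic, acc)
    print(f"{name}: peak correlation {peak:.2f} at lag {lag} samples")
```

Ranking the sensor positions by their peak correlation with the (hearing-weighted) microphone signal points to the dominant source and transmission path.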

A Study on the Tracking Algorithm for BSD Detection of Smart Vehicles (스마트 자동차의 BSD 검지를 위한 추적알고리즘에 관한 연구)

  • Kim Wantae
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.19 no.2
    • /
    • pp.47-55
    • /
    • 2023
  • Recently, sensor technologies have been emerging to prevent traffic accidents and support safe driving in complex environments where human perception may be limited. The UWS uses an ultrasonic sensor to detect objects at short distances; while it is simple to use, its detection distance is limited. The LDWS, on the other hand, uses front-image processing to detect lane departure and ensure the safety of the driving path, but it may not be sufficient for determining the driving environment around the vehicle. To overcome these limitations, systems based on FMCW radar are used. A BSD radar system using FMCW continuously emits signals while driving, and the emitted signals bounce off nearby objects and return to the radar. The key technologies in designing a BSD radar system are the tracking algorithms for detecting the situation around the vehicle. This paper presents a tracking algorithm for a BSD radar system, explaining the principles of FMCW radar technology and its signal types. It also presents the target tracking procedure and the target filter needed to design an accurate tracking system, and the performance is verified through simulation.
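
As one possible form of the target filter mentioned in the abstract, the sketch below shows a simple alpha-beta range tracker fed by hypothetical FMCW range detections of a closing vehicle; the filter structure and parameters used in the paper may differ.

```python
class AlphaBetaTracker:
    """Minimal alpha-beta filter tracking a target's range from noisy FMCW radar detections."""

    def __init__(self, initial_range, initial_rate=0.0, alpha=0.5, beta=0.1, dt=0.05):
        self.r = initial_range      # estimated range (m)
        self.v = initial_rate       # estimated range rate (m/s)
        self.alpha, self.beta, self.dt = alpha, beta, dt

    def update(self, measured_range):
        # Predict the range one update interval ahead.
        r_pred = self.r + self.v * self.dt
        # Correct the prediction with the measurement residual.
        residual = measured_range - r_pred
        self.r = r_pred + self.alpha * residual
        self.v = self.v + (self.beta / self.dt) * residual
        return self.r, self.v

tracker = AlphaBetaTracker(initial_range=12.0)
for z in [11.8, 11.5, 11.3, 10.9, 10.6]:   # hypothetical range detections (m)
    print(tracker.update(z))
```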

Revisiting Deep Learning Model for Image Quality Assessment: Is Strided Convolution Better than Pooling? (영상 화질 평가 딥러닝 모델 재검토: 스트라이드 컨볼루션이 풀링보다 좋은가?)

  • Uddin, AFM Shahab;Chung, TaeChoong;Bae, Sung-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.11a
    • /
    • pp.29-32
    • /
    • 2020
  • Due to imperfections in the image acquisition process, the introduction of noise is inevitable. As a result, objective image quality assessment (IQA) plays an important role in estimating the visual quality of noisy images. Plenty of IQA methods have been proposed, from traditional signal-processing-based methods to current deep-learning-based methods, where the latter show promising performance due to their complex representation ability. The deep-learning-based methods consist of several convolution and down-sampling layers for feature extraction and fully connected layers for regression. Usually, down-sampling is performed by a max-pooling layer after each convolutional block. We show that this max-pooling causes information loss, discarding features regardless of their importance. Consequently, we propose a better IQA method that replaces the max-pooling layers with strided convolutions to down-sample the feature space; since the strided convolution layers have learnable parameters, they preserve optimal features and discard redundant information, thereby improving prediction accuracy. The experimental results verify the effectiveness of the proposed method.
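
A minimal PyTorch sketch contrasting the two downsampling choices discussed in the abstract; the channel counts and kernel sizes are illustrative, not the exact architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

# Conventional block: convolution followed by max-pooling for downsampling.
pool_block = nn.Sequential(
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),      # fixed, parameter-free downsampling
)

# Strided-convolution block: the downsampling itself has learnable parameters.
strided_block = nn.Sequential(
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # stride=2 halves the feature map
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 32, 64, 64)
print(pool_block(x).shape, strided_block(x).shape)  # both: torch.Size([1, 64, 32, 32])
```

Both blocks halve the spatial resolution, but in the strided block the network learns which features to keep while downsampling instead of always taking the local maximum.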

Development of an Electro Impedance Spectroscopy device for EDLC super capacitor characterization in a mass production line (EDLC 슈퍼 캐피시터 특성 분석을 위한 양산용 전기화학 분석 장치 개발)

  • Park, Chan-Hee;Lee, Hye-In;Kim, Sang-Jung;Lee, Jung-Ho;Kim, Sung-Jin;Lee, Hee-Gwan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.12
    • /
    • pp.5647-5654
    • /
    • 2012
  • In this paper, we developed an electrochemical impedance spectroscopy (EIS) device, which is primarily used for the analysis of fuel cells or batteries, and extended its coverage to the characterization of the next-generation supercapacitor, the EDLC. The developed system is composed of a signal generator that can produce various signal patterns, a potentiostat, a high-speed digital filter for signal processing, and a measurement program. The system is portable and suitable not only for laboratory use but also for a mass production line. Its special features include a patterned output signal from 0.01 Hz to 20 kHz and fast Fourier transform (FFT) analysis of the voltage and current signals, which are acquired simultaneously. In our tests, the EDLC complex impedance characteristics measured with the newly developed device agreed well with the impedance obtained from an equivalent circuit model. By applying the present system to a supercapacitor production line, a fast inspection time can be expected, based on time-varying changes in electrochemical impedance.
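
A minimal sketch of the FFT-based impedance estimation the abstract describes, assuming simultaneously sampled voltage and current signals; the excitation frequency, amplitudes, and phase below are made up for illustration and are not the device's measurements.

```python
import numpy as np

fs = 10_000            # sampling rate (Hz)
f_exc = 100.0          # excitation frequency (Hz), within the 0.01 Hz - 20 kHz sweep range
t = np.arange(0, 1.0, 1 / fs)

# Hypothetical measured signals: the current leads the voltage, as for a capacitive cell.
voltage = 0.01 * np.sin(2 * np.pi * f_exc * t)                 # 10 mV excitation
current = 0.002 * np.sin(2 * np.pi * f_exc * t + np.pi / 4)    # 2 mA response, +45 deg

# Complex impedance at the excitation bin: Z(f) = V(f) / I(f)
V_f = np.fft.rfft(voltage)
I_f = np.fft.rfft(current)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
k = np.argmin(np.abs(freqs - f_exc))
Z = V_f[k] / I_f[k]
print(f"|Z| = {abs(Z):.2f} ohm, phase = {np.degrees(np.angle(Z)):.1f} deg")
```

Sweeping f_exc over the pattern of excitation frequencies and repeating this ratio gives the complex impedance spectrum of the EDLC.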

Seismic Pre-processing and AVO analysis for understanding the gas-hydrate structure (가스 하이드레이트 부존층의 구조 파악을 위한 탄성파 전산처리 및 AVO 분석)

  • Chung Bu-Heung
    • Proceedings of the Korean Society for New and Renewable Energy Conference
    • /
    • 2005.06a
    • /
    • pp.634-637
    • /
    • 2005
  • Multichannel seismic data were acquired in the Ulleung Basin of the East Sea for gas hydrate exploration. The seismic sections of this area show strong BSRs (bottom simulating reflections) associated with methane hydrate occurrence in deep marine sediments. Very limited information is available from deep-sea drilling, because the risk of heating and destabilizing the in-situ hydrate conditions during drilling is considerably high. Since gas hydrate exploration in Korea is not yet far advanced, most information on gas hydrate characteristics and properties is inferred from seismic reflection data. In this study, AVO analysis of the long-offset seismic data acquired in the Ulleung Basin was used to explain the characteristics and structure of the gas hydrate, relying primarily on the P-wave velocity accessible from the seismic data. To provide good-quality input data for the AVO analysis, seismic preprocessing, including true-gain correction, source-signature deconvolution, two passes of velocity analysis, several kinds of multiple rejection, and signal-to-noise enhancement, was carried out very carefully. From the AVO analysis, eight basic AVO attributes were estimated, and additional attributes were evaluated to support the interpretation. The impedance variation at the boundary between gas hydrate and free gas was estimated to investigate the BSR characteristics and properties. Complex trace analysis was also performed to verify the amplitude variation and phase shift occurring at the BSR. A Type III AVO anomaly was detected on the BSR in the free-gas-saturated area, which can be important evidence of gas hydrate deposits above the BSR.
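
A minimal sketch of complex (analytic) trace analysis, one standard way to examine amplitude variation and phase shifts such as those at a BSR; the synthetic trace and wavelet below are illustrative and are not the Ulleung Basin data.

```python
import numpy as np
from scipy.signal import hilbert

# Hypothetical seismic trace: a normal reflection plus a stronger, polarity-reversed event,
# loosely mimicking the reversed-polarity BSR reflection.
dt = 0.002                                 # 2 ms sample interval
t = np.arange(0, 1.0, dt)
trace = np.zeros_like(t)
trace[200] = 1.0                           # normal-polarity reflection
trace[350] = -1.2                          # reversed-polarity, higher-amplitude event
trace = np.convolve(trace, np.hanning(21), mode="same")   # crude wavelet

analytic = hilbert(trace)                  # complex (analytic) trace
inst_amplitude = np.abs(analytic)          # reflection strength
inst_phase = np.angle(analytic)            # instantaneous phase

print("peak reflection strength at t =", t[np.argmax(inst_amplitude)], "s")
```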

Design and Implementation of Direct Torque Control Based on an Intelligent Technique of Induction Motor on FPGA

  • Krim, Saber;Gdaim, Soufien;Mtibaa, Abdellatif;Mimouni, Mohamed Faouzi
    • Journal of Electrical Engineering and Technology
    • /
    • v.10 no.4
    • /
    • pp.1527-1539
    • /
    • 2015
  • In this paper, the hardware implementation on a Field-Programmable Gate Array (FPGA) of direct torque control of an induction motor based on the fuzzy logic technique is presented. Due to its complexity, the fuzzy logic technique implemented on a digital system such as a DSP (Digital Signal Processor) or a microcontroller suffers from a computation delay, which depends on the processing speed and on the system complexity; the limitation of these solutions is unavoidable. To solve this problem, an alternative digital solution based on the FPGA, which is characterized by a fast processing speed, is used to exploit the performance of the fuzzy logic technique in spite of its complex computation. Conventional Direct Torque Control (CDTC) of the induction machine faces problems such as high stator flux and electromagnetic torque ripples and stator current distortion. To overcome these problems, many methods are used, such as space vector modulation, which is sensitive to variations of the machine parameters; increasing the number of inverter switches, which increases the cost of the inverter; and artificial intelligence. In this paper, an intelligent technique based on fuzzy logic is used because it allows controlling systems without knowing their mathematical model. We also use a new method based on the Xilinx System Generator for the hardware implementation of Direct Torque Fuzzy Control (DTFC) on the FPGA. The simulation results of the DTFC are compared to those of the CDTC. The comparison illustrates the reduction in the torque and stator flux ripples achieved by the DTFC and shows the performance of the Xilinx Virtex V FPGA in terms of execution time.
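
A minimal sketch of a single fuzzy inference step of the kind a DTFC controller applies to the torque error; the membership functions and singleton outputs are illustrative assumptions, not the rule base implemented in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_torque_command(torque_error):
    """Map a normalized torque error to a crisp command in [-1, 1] via three fuzzy sets.

    -1 means 'apply a torque-decreasing voltage vector', +1 a torque-increasing one.
    Membership functions and singleton outputs are illustrative assumptions.
    """
    mu_negative = tri(torque_error, -1.5, -1.0, 0.0)
    mu_zero     = tri(torque_error, -0.5,  0.0, 0.5)
    mu_positive = tri(torque_error,  0.0,  1.0, 1.5)
    # Weighted-average (centre-of-gravity) defuzzification over singleton outputs.
    num = -1.0 * mu_negative + 0.0 * mu_zero + 1.0 * mu_positive
    den = mu_negative + mu_zero + mu_positive + 1e-12
    return num / den

for e in (-1.0, -0.3, 0.0, 0.4, 1.0):
    print(f"torque error {e:+.1f} -> command {fuzzy_torque_command(e):+.2f}")
```

On an FPGA, each membership evaluation and the weighted average map to parallel fixed-point arithmetic, which is what removes the sequential computation delay seen on a DSP.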

Development of the Algorithm for Traffic Accident Auto-Detection in Signalized Intersection (신호교차로 내 실시간 교통사고 자동검지 알고리즘 개발)

  • O, Ju-Taek;Im, Jae-Geuk;Hwang, Bo-Hui
    • Journal of Korean Society of Transportation
    • /
    • v.27 no.5
    • /
    • pp.97-111
    • /
    • 2009
  • Image-based traffic information collection systems have entered widespread adoption in many countries, since they not only can replace existing loop-based detectors, which have limitations in management and administration, but also can provide and manage a wide variety of traffic-related information; their purpose and scope of use are expanding rapidly. Currently, the use of image processing in traffic accident management is limited to installing surveillance cameras at locations where accidents are expected to occur and digitizing the recorded data. Accurately recording the sequence of events around a traffic accident at a signalized intersection, and then objectively and clearly analyzing how the accident occurred, is more urgent and important than anything else in resolving it. As many past studies have pointed out, existing advanced technologies have limitations in real-time processing because of the large data volumes involved in separating and tracking vehicle objects, which is difficult under the environmental diversity and change found at a signalized intersection with complex traffic conditions. In this research, we therefore present a technology that overcomes these problems, and we propose and implement an active, environmentally adaptive methodology that effectively reduces the false detections which occur frequently even with the Gaussian mixture model, considered the best among the well-known methods for reducing environmental interference. To prove that the technology developed in this research outperforms existing automatic traffic accident recording systems, a test was performed with image data fed online in real time from an intersection in actual operation, and the results were compared with the performance of existing technologies.
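
A minimal sketch of Gaussian-mixture background subtraction with OpenCV, the kind of model the abstract refers to for separating moving vehicles from the intersection background; the file name, thresholds, and area filter are illustrative assumptions, not the paper's pipeline.

```python
import cv2

cap = cv2.VideoCapture("intersection.avi")   # hypothetical intersection video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)                        # foreground (moving objects) mask
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,      # suppress small false detections
                               cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vehicles = [c for c in contours if cv2.contourArea(c) > 500]       # crude size filter
cap.release()
```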

Design of a computationally efficient frame synchronization scheme for wireless LAN systems (무선랜 시스템을 위한 계산이 간단한 초기 동기부 설계)

  • Cho, Jun-Beom;Lee, Jong-Hyup;Han, Jin-Woo;You, Yeon-Sang;Oh, Hyok-Jun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.12
    • /
    • pp.64-72
    • /
    • 2012
  • Synchronization, including timing recovery, frequency offset compensation, and frame synchronization, is the most important signal processing block in all wireless and wired communication systems, and in most systems synchronization schemes based on training sequences or preambles are used. The IEEE 802.11a/g/n WLAN standards are based on OFDM, and OFDM systems are known to be much more sensitive to frequency and timing synchronization errors than single-carrier systems: a loss of orthogonality between the multiplexed subcarriers can result in severe performance degradation. The starting position of the frame and the beginning of the symbol and training symbol can be estimated using correlation methods. The correlation processing is usually complex to implement because of the large number of multipliers required, especially when the reference signal is non-binary. In this paper, a simple correlation-based synchronization scheme is proposed for IEEE 802.11a/g/n systems that exploits the periodicity inherent in the training symbols. Simulation and implementation results show that the proposed method has much lower complexity than existing schemes without any performance degradation.
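
A minimal sketch of a delay-and-correlate metric that exploits the periodicity of the training symbols, in the spirit of the scheme described above; the synthetic preamble, threshold, and period are illustrative, not the paper's exact design (which further reduces the multiplier count).

```python
import numpy as np

def delay_correlate(rx, period):
    """Correlate the received signal with a copy of itself delayed by one training-symbol period.

    A plateau of values near 1 marks the start of the periodic preamble, without needing
    a stored non-binary reference sequence.
    """
    n = len(rx) - 2 * period
    metric = np.empty(n)
    for d in range(n):
        seg1 = rx[d:d + period]
        seg2 = rx[d + period:d + 2 * period]
        corr = np.sum(seg1 * np.conj(seg2))
        energy = np.sum(np.abs(seg2) ** 2) + 1e-12
        metric[d] = np.abs(corr) / energy
    return metric

# Hypothetical received burst: noise followed by a preamble of repeated 16-sample symbols.
rng = np.random.default_rng(1)
short_sym = rng.standard_normal(16) + 1j * rng.standard_normal(16)
rx = np.concatenate([0.1 * (rng.standard_normal(100) + 1j * rng.standard_normal(100)),
                     np.tile(short_sym, 10)])

metric = delay_correlate(rx, period=16)
start = int(np.argmax(metric > 0.9))      # first index where the metric plateaus near 1
print("estimated preamble start near sample", start)
```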

Implementation of Sonar Bearing Accuracy Measurement Equipment with Parallax Error and Time Delay Error Correction (관측위치오차와 시간지연오차를 보정하는 소나방위정확도 측정 장비 구현)

  • Kim, Sung-Duk;Kim, Do-Young;Park, Gyu-Tae;Shin, Kee-Cheol
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.20 no.4
    • /
    • pp.245-251
    • /
    • 2019
  • Sonar bearing accuracy is the correspondence between the target bearing predicted by the sonar and the actual target bearing, and it is obtained from measurements. However, measured sonar bearing accuracy contains many errors, because the measurements are made at sea, where complex and diverse environmental factors apply. In particular, the parallax error caused by the difference between the position of the GPS receiver and that of the sonar sensor, and the time delay error arising from the difference between the speed of sound underwater and the speed of electromagnetic waves in the air, have a great influence on the accuracy. Correcting these parallax and time delay errors without an automated tool is a laborious task. Therefore, in this study, we propose sonar bearing accuracy measurement equipment with parallax error and time delay error correction. Tests were carried out on simulation data and on real data, and confirmed that the parallax and time delay errors were systematically corrected, with improvements of 51.7% for the simulation data and more than 18.5% for the real data. The proposed method is expected to improve the efficiency and accuracy of sonar system detection performance verification in the future.
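
A minimal sketch of the two corrections described above: referencing the bearing to the sonar sensor rather than the GPS antenna (parallax), and compensating for the acoustic travel time (time delay). The geometry, coordinate frame, and numerical values are illustrative assumptions, not the equipment's actual algorithm.

```python
import math

SPEED_OF_SOUND = 1500.0     # nominal speed of sound in seawater (m/s)

def corrected_bearing(gps_xy, sensor_offset_xy, target_xy, target_velocity_xy):
    """Bearing to a target, corrected for parallax and acoustic time delay.

    gps_xy: GPS receiver position; sensor_offset_xy: sonar sensor offset from the GPS antenna;
    target_xy: reference target position at reception time; target_velocity_xy: target velocity.
    All coordinates are metres in a local East/North frame (illustrative geometry).
    """
    # Parallax correction: reference the geometry to the sonar sensor, not the GPS antenna.
    sensor_xy = (gps_xy[0] + sensor_offset_xy[0], gps_xy[1] + sensor_offset_xy[1])

    # Time-delay correction: the sound arriving now left the target one travel time ago,
    # so shift the reference target position back along its track by that (approximate) delay.
    dx, dy = target_xy[0] - sensor_xy[0], target_xy[1] - sensor_xy[1]
    delay = math.hypot(dx, dy) / SPEED_OF_SOUND
    emitted_xy = (target_xy[0] - target_velocity_xy[0] * delay,
                  target_xy[1] - target_velocity_xy[1] * delay)

    dx, dy = emitted_xy[0] - sensor_xy[0], emitted_xy[1] - sensor_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0    # bearing clockwise from North

# Illustrative values: GPS at the origin, sonar 15 m astern, target 5 km away moving east.
print(corrected_bearing((0.0, 0.0), (2.0, -15.0), (3000.0, 4000.0), (5.0, 0.0)))
```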