• Title/Summary/Keyword: Filter-based technique

Localization and Navigation of a Mobile Robot using Single Ultrasonic Sensor Module (단일 초음파 센서모듈을 이용한 이동로봇의 위치추정 및 주행)

  • Jin Taeseok;Lee JangMyung
    • Journal of the Institute of Electronics Engineers of Korea SC / v.42 no.2 s.302 / pp.1-10 / 2005
  • This paper presents a technique for localization of a mobile robot using a single ultrasonic sensor. The mobile robot is designed to operate in a well-structured environment that can be represented, in terms of structural features, by planes, edges, corners and cylinders. For ultrasonic sensors, these features yield range information in the form of a circular arc, generally referred to as an RCD (Region of Constant Depth). Localization is the continual provision of knowledge of position, deduced from the robot's a priori position estimate. The environment of the robot is modeled as a two-dimensional grid map. We define a physically based sonar sensor model and employ an extended Kalman filter to estimate the position of the robot. The performance and simplicity of the approach are demonstrated with results from sets of experiments using a mobile robot.
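
A minimal sketch of the kind of EKF position estimation described above, assuming a planar robot pose state (x, y, θ), an odometry-driven prediction step, and a single sonar range measurement to a known point feature; the motion model, measurement model, and noise values are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def ekf_predict(mu, Sigma, u, Q):
    """Predict step: state mu = [x, y, theta], odometry input u = [d, dtheta]."""
    x, y, th = mu
    d, dth = u
    mu_pred = np.array([x + d * np.cos(th), y + d * np.sin(th), th + dth])
    F = np.array([[1.0, 0.0, -d * np.sin(th)],   # Jacobian of the motion model
                  [0.0, 1.0,  d * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return mu_pred, F @ Sigma @ F.T + Q

def ekf_update_range(mu, Sigma, z, feature, R):
    """Update with a single sonar range measurement to a known point feature."""
    dx, dy = feature[0] - mu[0], feature[1] - mu[1]
    r_hat = np.hypot(dx, dy)                         # expected range
    H = np.array([[-dx / r_hat, -dy / r_hat, 0.0]])  # measurement Jacobian
    S = H @ Sigma @ H.T + R
    K = Sigma @ H.T @ np.linalg.inv(S)               # Kalman gain
    mu_new = mu + K.ravel() * (z - r_hat)
    Sigma_new = (np.eye(3) - K @ H) @ Sigma
    return mu_new, Sigma_new

# Toy usage: one predict/update cycle against a feature at (3.0, 0.5)
mu, Sigma = np.zeros(3), np.eye(3) * 0.1
Q, R = np.diag([0.01, 0.01, 0.005]), np.array([[0.04]])
mu, Sigma = ekf_predict(mu, Sigma, u=[0.5, 0.05], Q=Q)
mu, Sigma = ekf_update_range(mu, Sigma, z=2.9, feature=(3.0, 0.5), R=R)
print(mu)
```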

Design of Random Number Generator for Simulation of Speech-Waveform Coders (음성엔코더 시뮬레이션에 사용되는 난수발생기 설계)

  • 박중후
    • The Journal of the Acoustical Society of Korea / v.20 no.2 / pp.3-9 / 2001
  • In this paper, a random number generator for the simulation of speech-waveform coders was designed. A random number generator having a desired probability density function and a desired power spectral density is discussed and experimental results are presented. The technique is based on the Sondhi algorithm, which consists of a linear filter followed by a memoryless nonlinearity. Several methods of obtaining memoryless nonlinearities for some typical continuous distributions are discussed. The Sondhi algorithm is analyzed in the time domain using the diagonal expansion of the bivariate Gaussian probability density function. It is shown that the Sondhi algorithm gives satisfactory results when the memoryless nonlinearity can be given in an antisymmetric form, as in the uniform, Cauchy, binary and gamma distributions. It is also shown that the Sondhi algorithm does not perform well when the corresponding memoryless nonlinearity cannot be obtained analytically, as in the Student-t and F distributions, or when the memoryless nonlinearity cannot be expressed in an antisymmetric form, as in the chi-squared and lognormal distributions.
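
A minimal sketch of the Sondhi-style generator structure described above (a linear coloring filter followed by a memoryless nonlinearity), assuming a first-order IIR shaping filter and the standard transformation G(x) = F⁻¹(Φ(x)) to impose a target distribution; the filter coefficient and the gamma target are illustrative choices, not the paper's.

```python
import numpy as np
from scipy import signal, stats

def sondhi_generator(n, rho=0.9, target=stats.gamma(a=2.0), seed=0):
    """Correlated non-Gaussian noise: white Gaussian -> linear coloring filter
    -> memoryless nonlinearity imposing the target distribution."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n)
    # First-order IIR filter shapes the power spectral density (unit-variance output)
    x = signal.lfilter([np.sqrt(1 - rho**2)], [1.0, -rho], w)
    # Memoryless nonlinearity G(x) = F^{-1}(Phi(x)) imposes the target PDF
    return target.ppf(stats.norm.cdf(x))

samples = sondhi_generator(10_000)
print(samples.mean(), samples.var())
```

Note that the nonlinearity also reshapes the spectrum somewhat; quantifying that interaction is what the diagonal-expansion analysis mentioned in the abstract addresses.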

New Strategy for Eliminating Zero-sequence Circulating Current between Parallel Operating Three-level NPC Voltage Source Inverters

  • Li, Kai;Dong, Zhenhua;Wang, Xiaodong;Peng, Chao;Deng, Fujin;Guerrero, Josep;Vasquez, Juan
    • Journal of Power Electronics / v.18 no.1 / pp.70-80 / 2018
  • A novel strategy based on a zero common mode voltage pulse-width modulation (ZCMV-PWM) technique and zero-sequence circulating current (ZSCC) feedback control is proposed in this study to eliminate ZSCCs between three-level neutral point clamped (NPC) voltage source inverters, with common AC and DC buses, that are operating in parallel. First, an equivalent model of the ZSCC in a three-phase three-level NPC inverter paralleled system is developed. Second, on the basis of the analysis of the excitation source of ZSCCs, i.e., the difference in common mode voltages (CMVs) between paralleled inverters, the ZCMV-PWM method is presented to reduce CMVs, and a simple electric circuit is adopted to control ZSCCs and the neutral point potential. Finally, simulations and experiments are conducted to illustrate the effectiveness of the proposed strategy. Results show that ZSCCs between paralleled inverters can be eliminated effectively under steady and dynamic states. Moreover, the proposed strategy exhibits the advantage of not requiring carrier synchronization, and it can be utilized in inverters with different types of filters.
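
A toy discrete-time sketch of the ZSCC feedback idea, assuming the zero-sequence loop behaves as a first-order L-R circuit driven by the common-mode-voltage difference between two paralleled inverters, with a PI controller injecting a compensating zero-sequence voltage reference; the plant model, parameters, and controller gains are illustrative assumptions and do not represent the paper's ZCMV-PWM design.

```python
import numpy as np

# Illustrative zero-sequence loop parameters (assumed, not taken from the paper)
L, R, Vdc, Ts = 2e-3, 0.1, 600.0, 1e-4   # inductance [H], resistance [ohm], DC bus [V], step [s]
Kp, Ki = 0.02, 5.0                        # PI gains of the ZSCC controller

i0, integ = 0.0, 0.0                      # zero-sequence current and integrator state
trace = []
for k in range(2000):
    # CMV mismatch between the two inverters acts as the excitation source of the ZSCC
    dv_cm = 20.0 * np.sign(np.sin(2 * np.pi * 150 * k * Ts))
    # PI feedback drives i0 toward zero by injecting a compensating zero-sequence reference
    err = 0.0 - i0
    integ += Ki * err * Ts
    v_comp = (Kp * err + integ) * Vdc
    # First-order zero-sequence loop dynamics, integrated with forward Euler
    i0 += Ts * (dv_cm + v_comp - R * i0) / L
    trace.append(i0)

print("final |i0|:", abs(trace[-1]))
```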

Real Estate Price Forecasting by Exploiting the Regional Analysis Based on SOM and LSTM (SOM과 LSTM을 활용한 지역기반의 부동산 가격 예측)

  • Shin, Eun Kyung;Kim, Eun Mi;Hong, Tae Ho
    • The Journal of Information Systems / v.30 no.2 / pp.147-163 / 2021
  • Purpose: The study aims to predict real estate prices by utilizing regional characteristics. Since real estate is immovable, the characteristics of a region have a great influence on its price. In addition, real estate prices are closely related to economic development and are a major concern for policy makers and investors. Accurate house price forecasting is necessary to prepare for the impact of house price fluctuations. To improve the performance of our predictive models, we applied LSTM, a widely used deep learning technique for predicting time series data. Design/methodology/approach: This study used time series data on real estate prices provided by the Ministry of Land, Infrastructure and Transport. For preprocessing, HP filters were applied to decompose trends, and SOM was used to cluster regions with similar price movements. To build the real estate price prediction models, SVR and LSTM were applied, and the prices of regions classified into the same cluster by SOM were used as input variables. Findings: The clustering results showed that regions in the same cluster were geographically close, and regions with similar price levels and industrial compositions also tended to be assigned to the same cluster. In predicting real estate prices 1, 2, and 3 months ahead, LSTM showed better predictive performance than SVR, and LSTM performed better on the 3-month-ahead long-term forecast than on the 1-month-ahead short-term forecast.
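
A minimal sketch of the preprocessing pipeline described above, using synthetic series in place of the MOLIT price data: an HP filter separates trend from cycle for each region, and a small SOM groups regions whose trend movements are similar, so that same-cluster prices can feed one prediction model. It relies on statsmodels' hpfilter and the third-party minisom package; the grid size, smoothing parameter, and data are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter
from minisom import MiniSom  # third-party package: pip install minisom

rng = np.random.default_rng(0)
n_regions, n_months = 20, 120
# Synthetic stand-in for regional price indices (the paper uses MOLIT data)
prices = np.cumsum(rng.normal(0.2, 1.0, size=(n_regions, n_months)), axis=1)

# 1) HP filter: decompose each regional series into cycle and trend, keep the trend
trends = np.array([hpfilter(p, lamb=129600)[1] for p in prices])  # monthly lambda

# 2) SOM: cluster regions whose (normalized) trend movements are similar
X = np.diff(trends, axis=1)                            # month-over-month trend change
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
som = MiniSom(2, 2, X.shape[1], sigma=0.8, learning_rate=0.5, random_seed=0)
som.train_random(X, 1000)
clusters = [som.winner(x) for x in X]                  # BMU coordinates per region

# Regions sharing a cluster would then feed one LSTM/SVR model as joint inputs
print(clusters)
```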

Optimal Variable Step Size for Simplified SAP Algorithm with Critical Polyphase Decomposition (임계 다위상 분해기법이 적용된 SAP 알고리즘을 위한 최적 가변 스텝사이즈)

  • Heo, Gyeongyong;Choi, Hun
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.11 / pp.1545-1550 / 2021
  • We propose an optimal variable step size adjustment method for the simplified subband affine projection algorithm (Simplified SAP; SSAP) in a subband structure based on a polyphase decomposition technique. The proposed method provides an optimal step size derived to minimize the mean square deviation (MSD) at the time the coefficients of the subband adaptive filter are updated. Applying the proposed optimal step size to the SSAP algorithm with colored input signals ensures a fast convergence speed and a small steady-state error. The results of computer simulations performed using AR(2) signals and real speech as input signals demonstrate the validity of the proposed optimal step size for the SSAP algorithm. The simulation results also show that the proposed algorithm has a faster convergence rate and a lower steady-state error than other existing adaptive algorithms.
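
A minimal sketch of an affine projection adaptive filter with a variable step size, in the spirit of the method described above; the error-energy-based step rule below is an illustrative placeholder, not the MSD-optimal step size derived in the paper, and the polyphase subband structure is omitted.

```python
import numpy as np

def affine_projection(x, d, L=32, P=4, delta=1e-4, mu_max=1.0):
    """Affine projection adaptive filter with a simple variable step size.
    The paper derives an MSD-optimal step; the rule below just shrinks the step
    as the error energy falls (an illustrative heuristic, not that derivation)."""
    N = len(x)
    w = np.zeros(L)
    e_out = np.zeros(N)
    for n in range(L + P, N):
        # Data matrix: the P most recent length-L input vectors (most recent sample first)
        X = np.array([x[n - p - L + 1:n - p + 1][::-1] for p in range(P)])
        dv = d[n - P + 1:n + 1][::-1]
        e = dv - X @ w
        # Variable step: close to mu_max while converging, small near steady state
        e_norm2 = float(e @ e)
        mu = mu_max * e_norm2 / (e_norm2 + 1e-2)
        w += mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(P), e)
        e_out[n] = e[0]
    return w, e_out

# Toy system identification: colored AR(2) input through an unknown FIR channel
rng = np.random.default_rng(1)
v = rng.standard_normal(5000)
x = np.zeros(5000)
for n in range(2, 5000):
    x[n] = 1.0 * x[n - 1] - 0.5 * x[n - 2] + v[n]           # AR(2) colored input signal
h = rng.standard_normal(32) * np.exp(-0.2 * np.arange(32))  # unknown system
d = np.convolve(x, h)[:5000] + 0.01 * rng.standard_normal(5000)
w, e = affine_projection(x, d)
print("steady-state error power:", np.mean(e[-500:] ** 2))
```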

A Study on Diagnosis of BLDC motor and New data-set Feature Extraction using Park's Vector Approach (Park's Vector Approach를 이용한 BLDC모터진단 방법과 새로운 데이터 셋 특징 추출 연구)

  • Goh, Yeong-Jin;Kim, Ji-Seon;Lee, Buhm;Kim, Kyoung-Min
    • Journal of IKEEE / v.26 no.1 / pp.104-110 / 2022
  • In this paper, we propose a BLDC motor diagnosis method for UAVs and a new data set for AI-based diagnosis. In BLDC motor diagnosis, PVA (Park's Vector Approach) is difficult to apply because of the many ripple frequency components. However, since the ripple components are dominated by the third harmonic, we propose a method that applies a Savitzky-Golay filter, which handles the third harmonic well, and then uses PVA with circle fitting. On the other hand, PVA, a technique for converting from three phases to two phases, is always referenced to the origin during the transformation process. This study demonstrates that the error between the origin and the measured center can be detected and diagnosed in the circle-fitting process, and that it can be used as a new data set for AI techniques.
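
A minimal sketch of the processing chain described above: the three phase currents are mapped onto the Park's vector plane, smoothed with a Savitzky-Golay filter to suppress ripple, and fitted with a circle so that the offset between the fitted center and the origin can serve as a diagnostic feature; the synthetic currents, filter window, and fault offset are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def parks_vector(ia, ib, ic):
    """Three-phase to two-phase (d, q) transformation used in PVA."""
    i_d = np.sqrt(2 / 3) * ia - ib / np.sqrt(6) - ic / np.sqrt(6)
    i_q = (ib - ic) / np.sqrt(2)
    return i_d, i_q

def fit_circle(x, y):
    """Algebraic (Kasa) circle fit: returns center (cx, cy) and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = c[0] / 2, c[1] / 2
    return cx, cy, np.sqrt(c[2] + cx**2 + cy**2)

# Synthetic BLDC-like phase currents with third-harmonic ripple and a small
# asymmetry standing in for a fault (all values are illustrative)
t = np.linspace(0, 0.1, 2000)
w = 2 * np.pi * 100
ripple = 0.15 * np.sin(3 * w * t)
ia = np.sin(w * t) + ripple + 0.05           # 0.05 offset emulates a fault
ib = np.sin(w * t - 2 * np.pi / 3) + ripple
ic = np.sin(w * t + 2 * np.pi / 3) + ripple

i_d, i_q = parks_vector(ia, ib, ic)
# Savitzky-Golay smoothing attenuates the ripple before circle fitting
i_d_s = savgol_filter(i_d, window_length=51, polyorder=3)
i_q_s = savgol_filter(i_q, window_length=51, polyorder=3)

cx, cy, r = fit_circle(i_d_s, i_q_s)
print("center offset from origin:", np.hypot(cx, cy), "radius:", r)
```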

The Near-Infrared Imaging Spectroscopy to Visualize the Distribution of Sugar Content in the Flesh of a Melon

  • Tsuta, Mizuki;Sugiyama, Junichi;Sagara, Yasuyuki
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1526-1526 / 2001
  • To improve the accuracy of sweetness sensors in automated sorting operations, it is necessary to clarify the unevenness of the sugar content distribution within fruits, and a technique to evaluate this distribution is expected to contribute to the development of near-infrared (NIR) imaging spectroscopy. Sugiyama (1999) succeeded in visualizing the distribution of sugar content on the surface of a half-cut green-fleshed melon. However, this method cannot be applied to red-fleshed melons because it depends on the absorption band of chlorophyll (676 nm), which is affected by the color of the flesh. The objective of this study was to develop a universal visualization method based on the absorption band of sugar, which can be applied to various kinds of melons and other fruits. The relationship between the sugar contents and the absorption spectra of both green- and red-fleshed melons was investigated using a NIR spectrometer to determine the absorption band of sugar. The combination of 2nd derivative absorbances at 902 nm and 874 nm was highly correlated with the sugar contents; the wavelength of 902 nm is attributed to the absorption band of sugar. A cooled charge-coupled device (CCD) imaging camera with 16-bit (65536 steps) A/D resolution was equipped with a rotating band-pass filter wheel and used to capture spectral absorption images of the flesh of a vertically half-cut red-fleshed melon. The advantage of the high A/D resolution is that each pixel of the CCD is expected to function as a detector of the NIR spectrometer for quantitative analysis. Images at 846 nm, 874 nm, 902 nm and 930 nm were acquired with this CCD camera, and the 2nd derivative absorbances at 902 nm and 874 nm at each pixel were calculated from these four images. In parallel, parts of the same melon were extracted at the imaged locations and squeezed for the measurement of sugar content, and a calibration curve between the combination of 2nd derivative absorbances at 902 nm and 874 nm and the sugar content was developed. This calibration, based on NIR spectroscopy techniques, was applied to each pixel of the images to convert the 2nd derivative absorbances into Brix sugar content. Mapping the sugar content of each pixel with a linear color scale, the distribution of the sugar content was visualized. As a result, it was quantitatively confirmed that the Brix sugar content is low near the skin and becomes higher toward the seeds. This result suggests that the visualization technique by NIR imaging spectroscopy could become a new useful method for the quality evaluation of melons.
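
A minimal pixel-wise sketch of the mapping step described above, assuming four co-registered absorbance images at 846, 874, 902, and 930 nm: 2nd derivative values at 874 nm and 902 nm are approximated by finite differences across the equally spaced bands, combined, and passed through a linear calibration to Brix. The band combination and calibration coefficients are placeholders, not the paper's regression.

```python
import numpy as np

def second_derivative_maps(A846, A874, A902, A930):
    """Finite-difference 2nd derivative of absorbance across equally spaced bands."""
    d2_874 = A846 - 2 * A874 + A902   # 2nd derivative centred at 874 nm
    d2_902 = A874 - 2 * A902 + A930   # 2nd derivative centred at 902 nm
    return d2_874, d2_902

def brix_map(A846, A874, A902, A930, a=40.0, b=-25.0, c=9.5):
    """Apply a linear calibration Brix = a*d2_902 + b*d2_874 + c to every pixel.
    The coefficients are placeholders; in the paper they come from a calibration
    curve built against squeezed-juice reference measurements."""
    d2_874, d2_902 = second_derivative_maps(A846, A874, A902, A930)
    return a * d2_902 + b * d2_874 + c

# Toy 64x64 "images": random absorbances standing in for the CCD captures
rng = np.random.default_rng(0)
imgs = [rng.uniform(0.2, 0.6, size=(64, 64)) for _ in range(4)]
sugar = brix_map(*imgs)
print("Brix range over the image:", sugar.min(), sugar.max())
```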

THE INFRARED MEDIUM-DEEP SURVEY. V. A NEW SELECTION STRATEGY FOR QUASARS AT z > 5 BASED ON MEDIUM-BAND OBSERVATIONS WITH SQUEAN

  • JEON, YISEUL;IM, MYUNGSHIN;PAK, SOOJONG;HYUN, MINHEE;KIM, SANGHYUK;KIM, YONGJUNG;LEE, HYE-IN;PARK, WOOJIN
    • Journal of The Korean Astronomical Society / v.49 no.1 / pp.25-35 / 2016
  • Multiple color selection techniques are successful in identifying quasars from wide-field broadband imaging survey data. Among the quasars that have been discovered so far, however, there is a redshift gap at 5 ≲ z ≲ 5.7 due to the limitations of the filter sets in previous studies. In this work, we present a new selection technique for high redshift quasars using a sequence of medium-band filters: nine filters with central wavelengths from 625 to 1025 nm and bandwidths of 50 nm. Photometry with these medium bands traces the spectral energy distribution (SED) of a source, similar to spectroscopy with resolution R ~ 15. By conducting medium-band observations of high redshift quasars at 4.7 ≤ z ≤ 6.0 and brown dwarfs (the main contaminants in high redshift quasar selection) using the SED camera for QUasars in EArly uNiverse (SQUEAN) on the 2.1-m telescope at the McDonald Observatory, we show that these medium-band filters are superior to multi-color broad-band selection in separating high redshift quasars from brown dwarfs. In addition, we show that redshifts of high redshift quasars can be determined to an accuracy of Δz/(1 + z) = 0.002 - 0.026. The selection technique can be extended to z ~ 7, suggesting that medium-band observations can be powerful in identifying quasars even at the re-ionization epoch.
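
A toy sketch of the medium-band idea described above: synthetic fluxes are obtained by averaging a source spectrum through nine 50-nm-wide top-hat filters centred from 625 to 1025 nm, giving a coarse SED (R ~ 15) in which a z > 5 quasar's sharp Lyman-alpha break stands out against a brown dwarf's smooth red continuum; the toy spectra and the break-strength criterion are illustrative assumptions, not the survey's actual selection cuts.

```python
import numpy as np

CENTERS = np.arange(625, 1026, 50)   # nine medium bands, 625-1025 nm
WIDTH = 50.0                         # bandwidth in nm

def medium_band_fluxes(wave, flux):
    """Mean flux of a spectrum through top-hat medium-band filters."""
    out = []
    for c in CENTERS:
        m = (wave > c - WIDTH / 2) & (wave < c + WIDTH / 2)
        out.append(flux[m].mean())
    return np.array(out)

def break_strength(band_fluxes):
    """Ratio of flux just redward to just blueward of the strongest jump,
    a crude stand-in for locating the Lyman-alpha break of a z > 5 quasar."""
    jumps = band_fluxes[1:] / np.maximum(band_fluxes[:-1], 1e-12)
    return jumps.max(), CENTERS[np.argmax(jumps) + 1]

# Toy spectra: a z ~ 5.5 quasar (sharp break near 790 nm) vs. a smooth red dwarf
wave = np.linspace(600, 1050, 2000)
quasar = np.where(wave > 790, 1.0, 0.05) + 0.02 * np.random.default_rng(0).random(2000)
dwarf = (wave / 600) ** 3 * 0.1

for name, spec in [("quasar", quasar), ("brown dwarf", dwarf)]:
    fluxes = medium_band_fluxes(wave, spec)
    strength, band = break_strength(fluxes)
    print(name, "max band-to-band jump:", round(strength, 1), "at", band, "nm")
```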

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, achieved a huge victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible game paths exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we tried to find out whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used deep learning algorithms and techniques, with that of MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested due to the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons in the hidden layers, the number of output channels (filters), and the application conditions of the dropout technique. The F1 score was used to evaluate model performance, to show how well the models classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm recognizes features by reading adjacent values around a given value, but the distance between business data fields carries little meaning because the fields are usually independent. In this experiment, we therefore set the filter size of the CNN algorithm to the number of fields, so that the whole record is learned at once, and added a hidden layer to make the decision based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first layer in order to reduce the influence of the position of each field. For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using the dropout technique, and the next best model was the MLP model with two hidden layers using the dropout technique. From the experiments, we obtained several findings. First, models using dropout make slightly more conservative predictions than those without dropout and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because CNN performed well not only in the fields where its effectiveness has been proven but also in binary classification problems to which it has rarely been applied. Third, the LSTM algorithm seems unsuitable for binary classification problems because the training time is too long compared to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
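
A minimal Keras sketch of the CNN setting described above, with the convolution kernel spanning all input fields and 0.5 dropout on the hidden layers, evaluated by F1 score; the synthetic data, layer sizes, and training settings are illustrative assumptions and stand in for the Portuguese bank telemarketing data and the paper's exact architecture.

```python
import numpy as np
from sklearn.metrics import f1_score
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
n_samples, n_fields = 4000, 16          # stand-in for the bank telemarketing data
X = rng.standard_normal((n_samples, n_fields, 1))
y = (X[:, :, 0].sum(axis=1) + 0.5 * rng.standard_normal(n_samples) > 0).astype("float32")
X_tr, X_te, y_tr, y_te = X[:3200], X[3200:], y[:3200], y[3200:]

# Kernel size = number of fields, so one convolution reads the whole record at once,
# and a dense hidden layer (with 0.5 dropout) makes the decision on those features.
model = keras.Sequential([
    layers.Input(shape=(n_fields, 1)),
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(16, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_tr, y_tr, epochs=10, batch_size=64, verbose=0)

pred = (model.predict(X_te, verbose=0).ravel() > 0.5).astype(int)
print("F1 score:", f1_score(y_te, pred))
```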

An Interface Technique for Avatar-Object Behavior Control using Layered Behavior Script Representation (계층적 행위 스크립트 표현을 통한 아바타-객체 행위 제어를 위한 인터페이스 기법)

  • Choi Seung-Hyuk;Kim Jae-Kyung;Lim Soon-Bum;Choy Yoon-Chul
    • Journal of KIISE:Software and Applications / v.33 no.9 / pp.751-775 / 2006
  • In this paper, we suggested an avatar control technique using high-level behaviors. We separated behaviors into three levels according to their level of abstraction and defined layered scripts. Layered scripts provide the user with control over avatar behaviors at the abstract level and make the scripts reusable. As the 3D environment gets complicated, the number of required avatar behaviors increases accordingly, and thus controlling avatar-object behaviors gets even more challenging. To solve this problem, we embed avatar behaviors into each environment object, which informs how the avatar can interact with the object. Even with a large number of environment objects, our system can manage avatar-object interactions in an object-oriented manner. Finally, we suggest an easy-to-use user interface technique that allows the user to control avatars based on context menus. Using the avatar behavior information embedded in the object, the system can analyze the object state and filter the behaviors; as a result, the context menu shows only the behaviors that the avatar can perform. In this paper, we built a virtual presentation environment and applied our model to the system.
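
A minimal sketch of the object-embedded behavior idea described above, with hypothetical class and method names: each environment object carries the avatar behaviors it supports together with state preconditions, and the context menu is produced by filtering those behaviors against the object's current state.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Behavior:
    name: str                                   # abstract-level behavior label
    precondition: Callable[[Dict], bool]        # is it valid in the current object state?
    script: str                                 # lower-level layered script to run

@dataclass
class EnvObject:
    name: str
    state: Dict = field(default_factory=dict)
    behaviors: List[Behavior] = field(default_factory=list)

    def context_menu(self) -> List[str]:
        """Filter embedded behaviors by the object's state: only the behaviors
        the avatar can actually perform right now appear in the menu."""
        return [b.name for b in self.behaviors if b.precondition(self.state)]

# Hypothetical presentation-room door object
door = EnvObject(
    name="door",
    state={"open": False, "locked": False},
    behaviors=[
        Behavior("open door",  lambda s: not s["open"] and not s["locked"], "script/open.xml"),
        Behavior("close door", lambda s: s["open"],                         "script/close.xml"),
        Behavior("knock",      lambda s: not s["open"],                     "script/knock.xml"),
    ],
)
print(door.context_menu())   # -> ['open door', 'knock']
```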