• Title/Summary/Keyword: Signal


Inhibitory Effect of Chloroform Extract of Marine Algae Hizikia Fusiformis on Angiogenesis (Hizikia fusiformis 클로로포름 추출물의 in vitro 및 in vivo 혈관신생 억제 연구)

  • Myeong-Eun Jegal;Yu-Seon Han;Shi-Young Park;Ji-Hyeok Lee;Eui-Yeun Yi;Yung-Jin Kim
    • Journal of Life Science
    • /
    • v.34 no.6
    • /
    • pp.399-407
    • /
    • 2024
  • Angiogenesis is the process by which new blood vessels form from existing blood vessels. This phenomenon occurs during growth, healing, and menstrual cycle changes. Angiogenesis is a complex and multifaceted process that is important for the continued growth of primary tumors, metastasis promotion, the support of metastatic tumors, and cancer progression. Dysregulated angiogenesis can contribute to cancer, autoimmune diseases, rheumatoid arthritis, cardiovascular disease, and delayed wound healing. Currently, only a handful of effective antiangiogenic drugs are available. Recent studies have shown that natural marine products exhibit antiangiogenic effects. In a previous study, we reported that the hexane extract of H. fusiformis (HFH) could inhibit the development of new blood vessels both in vitro and in vivo. The aim of this study was to describe the inhibitory effect of the chloroform extract of H. fusiformis on angiogenesis. To investigate how the chloroform extract prevents blood vessel growth, we examined its effects on human umbilical vein endothelial cells (HUVECs), including cell migration, invasion, and tube formation. In a mouse Matrigel plug assay, the H. fusiformis chloroform extract (HFC) also inhibited angiogenesis in vivo. Levels of proteins associated with blood vessel growth, including vascular endothelial growth factor (VEGF), mitogen-activated protein kinase (MAPK)/extracellular signal-regulated kinase, and serine/threonine kinase 1 (AKT), were reduced after HFC treatment. These results show that the chloroform extract of H. fusiformis can inhibit blood vessel growth both in vitro and in vivo.

Highband Coding Method Using Matching Pursuit Estimation and CELP Coding for Wideband Speech Coder (광대역 음성부호화기를 위한 매칭퍼슈잇 알고리즘과 CELP 방법을 이용한 고대역 부호화 방법)

  • Jeong Gyu-Hyeok;Ahn Yeong-Uk;Kim Jong-Hark;Shin Jae-Hyun;Seo Sang-Won;Hwang In-Kwan;Lee In-Sung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.1
    • /
    • pp.21-29
    • /
    • 2006
  • In this paper, a split-band wideband speech coder and its highband coding method are proposed. The coder uses a split-band approach, where the wideband input speech signal is split into two equal frequency bands, 0-4 kHz and 4-8 kHz. The lowband and the highband are coded by the 11.8 kb/s G.729 Annex E coder and the proposed coding method, respectively. After LPC analysis, the highband is handled in one of two modes according to the properties of the signal. In stationary mode, the highband signals are compressed with a mixed excitation model combining the CELP algorithm and the MP (matching pursuit) algorithm; otherwise, they are coded by the CELP algorithm alone. We compare the performance of the new wideband speech coder with that of the 48 kbps G.722 SB-ADPCM coder and 12.85 kbps G.722.2 in a subjective test. The simulation results show that the proposed wideband speech coder performs better than 48 kbps G.722 but not better than 12.85 kbps G.722.2.
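
A minimal sketch of the split-band front end described in this abstract, assuming a 16 kHz input; the paper's actual filter bank and decimation structure are not specified here, so a Butterworth low/high-pass pair followed by 2:1 decimation is used purely for illustration.

```python
# Illustrative split of a wideband (16 kHz) signal into 0-4 kHz and 4-8 kHz
# subbands. Filter design values are assumptions, not the authors' choices.
import numpy as np
from scipy.signal import butter, lfilter

FS = 16000          # wideband sampling rate (Hz)
CUTOFF = 4000.0     # split frequency between lowband and highband (Hz)

def split_bands(x, fs=FS, cutoff=CUTOFF, order=8):
    """Return (lowband, highband), each decimated by 2."""
    b_lo, a_lo = butter(order, cutoff / (fs / 2), btype='low')
    b_hi, a_hi = butter(order, cutoff / (fs / 2), btype='high')
    low = lfilter(b_lo, a_lo, x)[::2]    # lowband -> G.729 Annex E in the paper
    high = lfilter(b_hi, a_hi, x)[::2]   # highband -> CELP / matching-pursuit coding
    return low, high

if __name__ == "__main__":
    t = np.arange(FS) / FS
    x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 6000 * t)
    low, high = split_bands(x)
    print(low.shape, high.shape)
```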

Geoacoustic Inversion and Source Localization with an L-Shaped Receiver Array (L-자형 선배열을 이용한 지음향학적 인자 역산 및 음원 위치 추정)

  • Kim, Kyung-Seop;Lee, Keun-Hwa;Kim, Seong-Il;Kim, Young-Gyu;Seong, Woo-Jae
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.7
    • /
    • pp.346-355
    • /
    • 2006
  • Acoustic data from a shallow-water experiment in the East Sea of Korea (MAPLE IV) is processed to investigate the performance of matched-field geoacoustic inversion and source localization. The receiver array consists of two legs in an L shape, one vertical and the other horizontal, lying on the seabed. A narrowband multi-tone CW source was towed along a slightly inclined bathymetry track. The matched-field geoacoustic inversion compares three processing techniques, all based on the Bartlett processor: (1) coherent processing of the data from the full array, (2) the incoherent product of the outputs from the horizontal and vertical arrays, and (3) the cross-correlation between the horizontal and vertical arrays, as well as processing each array leg separately. To verify the inversion results, matched-field source localization for low-level source signal components was performed using the same processors used at the inversion stage.
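
A hedged sketch of the Bartlett processor and the first two combining schemes listed above; the function names, data shapes, and normalization are assumptions for illustration, not the authors' implementation.

```python
# d_v, d_h are measured data vectors on the vertical and horizontal legs;
# w_v, w_h are replica vectors for a candidate environment/source position.
import numpy as np

def bartlett(d, w):
    """Normalized Bartlett power |w^H d|^2 / (|w|^2 |d|^2)."""
    return np.abs(np.vdot(w, d)) ** 2 / (np.vdot(w, w).real * np.vdot(d, d).real)

def coherent_full_array(d_v, d_h, w_v, w_h):
    """(1) Treat the L-shaped array as one coherent aperture."""
    return bartlett(np.concatenate([d_v, d_h]), np.concatenate([w_v, w_h]))

def incoherent_product(d_v, d_h, w_v, w_h):
    """(2) Multiply the separate vertical and horizontal Bartlett outputs."""
    return bartlett(d_v, w_v) * bartlett(d_h, w_h)
```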

Development of a Listener Position Adaptive Real-Time Sound Reproduction System (청취자 위치 적응 실시간 사운드 재생 시스템의 개발)

  • Lee, Ki-Seung;Lee, Seok-Pil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.7
    • /
    • pp.458-467
    • /
    • 2010
  • In this paper, a new audio reproduction system was developed in which the cross-talk signals are reasonably cancelled at an arbitrary listener position. To adaptively remove the cross-talk signals according to the listener's position, a method of tracking the listener position was employed. This was achieved using two microphones, where the listener direction was estimated from the time delay between the two microphone signals. Moreover, room reverberation effects were taken into consideration by means of linear prediction analysis. To remove the cross-talk signals at the left and right ears, the paths between the sources and the ears were represented using the KEMAR head-related transfer functions (HRTFs) measured from an artificial dummy head. To evaluate the usefulness of the proposed listener tracking system, the performance of cross-talk cancellation was evaluated at the estimated listener positions. The performance was evaluated in terms of the channel separation ratio (CSR); a CSR of -10 dB was experimentally achieved even when the listener positions deviated to some extent. A real-time system was implemented using a floating-point digital signal processor (DSP). It was confirmed that the average error of the listener direction was 5 degrees, and the subjects indicated that 80% of the stimuli were perceived in the correct direction.
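
A hedged sketch of the two-microphone direction estimate described above: the inter-channel delay is taken from the cross-correlation peak and converted to an arrival angle. Microphone spacing and sampling rate are illustrative values, not those of the paper.

```python
# Estimate listener direction from the time delay between two microphones.
import numpy as np

FS = 48000        # sampling rate (Hz), assumed
D = 0.20          # microphone spacing (m), assumed
C = 343.0         # speed of sound (m/s)

def estimate_direction(left, right, fs=FS, d=D, c=C):
    """Return the estimated listener direction in degrees from broadside."""
    corr = np.correlate(left, right, mode='full')
    lag = np.argmax(corr) - (len(right) - 1)        # delay in samples
    tau = lag / fs                                   # delay in seconds
    sin_theta = np.clip(c * tau / d, -1.0, 1.0)      # geometry: tau = d*sin(theta)/c
    return np.degrees(np.arcsin(sin_theta))
```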

Estimation of Cavitation Bubble Distribution Using Multi-Frequency Acoustic Signals (다중 주파수를 이용한 캐비테이션 기포의 분포량 추정)

  • Kim, Dae-Uk;La, Hyoung-Sul;Choi, Jee-Woong;Na, Jung-Yul;Kang, Don-Hyug
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.3
    • /
    • pp.198-207
    • /
    • 2009
  • The distribution of cavitation bubbles was estimated from changes in sound speed and attenuation in water, using acoustic signals from 20 to 300 kHz, for the two cases in which cavitation bubbles are present and absent. To study the generation and extinction properties of cavitation bubbles, the bubble distribution was estimated for three conditions: rotation speed (3000-4000 rpm), blade surface area (32-98 mm²), and elapsed time (30-120 sec). As a result, the radii of the generated bubbles ranged from 10 to 60 μm, and bubbles with radii of 10-20 μm and 20-30 μm accounted for 45% and 25% of the total number of cavitation bubbles, respectively. The generated bubble population correlated closely with the rotation speed of the blades but did not correlate with the blade surface area. It was observed that 80% of the total bubble population disappeared within 2 minutes. Finally, the acoustic estimates of the bubble distribution were compared with optical data.
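
Multi-frequency acoustic bubble sizing of this kind rests on the fact that a bubble of a given radius resonates near a characteristic frequency. The sketch below shows only the textbook Minnaert resonance relation as an orientation aid; it is not the paper's inversion from sound speed and attenuation.

```python
# Map an insonifying frequency to the Minnaert-resonant bubble radius.
import numpy as np

GAMMA = 1.4        # ratio of specific heats of air
P0 = 101325.0      # ambient pressure (Pa)
RHO = 1000.0       # water density (kg/m^3)

def minnaert_radius(freq_hz):
    """Resonant bubble radius (m) for a given acoustic frequency (Hz)."""
    return np.sqrt(3.0 * GAMMA * P0 / RHO) / (2.0 * np.pi * freq_hz)

for f in (20e3, 100e3, 300e3):
    print(f"{f/1e3:5.0f} kHz -> radius ~ {minnaert_radius(f)*1e6:6.1f} um")
```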

Blind Rhythmic Source Separation (블라인드 방식의 리듬 음원 분리)

  • Kim, Min-Je;Yoo, Ji-Ho;Kang, Kyeong-Ok;Choi, Seung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.8
    • /
    • pp.697-705
    • /
    • 2009
  • An unsupervised (blind) method is proposed for extracting rhythmic sources from commercial polyphonic music whose number of channels is limited to one. Commercial music signals are usually provided with no more than two channels, yet they often contain multiple instruments including singing voice. Therefore, instead of relying on conventional models of the mixing environment or of statistical source characteristics, other source-specific characteristics have to be introduced to separate or extract sources in such underdetermined environments. In this paper, we concentrate on extracting rhythmic sources from a mixture with other harmonic sources. An extension of nonnegative matrix factorization (NMF), called nonnegative matrix partial co-factorization (NMPCF), is used to analyze the relationships between the spectral and temporal properties of the given input matrices, and the temporal repeatability of the rhythmic sound sources is exploited as a rhythmic property common to the segments of an input mixture signal. The proposed method shows acceptable, though not superior, separation quality compared with prior-knowledge-based drum source separation systems, but its blind operation gives it wider applicability, for example when no prior information is available or the target rhythmic source is irregular.
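
A simplified sketch of the partial co-factorization idea, under the assumption that each segment spectrogram X_i is modeled as a shared rhythmic part plus a segment-specific part, X_i ≈ W_r H_r,i + W_i H_i, with the rhythmic basis W_r tied across segments. The multiplicative updates below are a plain Euclidean-NMF adaptation for illustration; the paper's exact NMPCF objective and update rules may differ.

```python
# Toy NMPCF-style factorization with a shared rhythmic basis across segments.
import numpy as np

def nmpcf(segments, n_shared=8, n_local=8, n_iter=200, eps=1e-9):
    F = segments[0].shape[0]
    rng = np.random.default_rng(0)
    Wr = rng.random((F, n_shared))                      # shared (rhythmic) basis
    Wl = [rng.random((F, n_local)) for _ in segments]   # per-segment harmonic basis
    Hr = [rng.random((n_shared, X.shape[1])) for X in segments]
    Hl = [rng.random((n_local, X.shape[1])) for X in segments]

    for _ in range(n_iter):
        num, den = np.zeros_like(Wr), np.zeros_like(Wr)
        for i, X in enumerate(segments):
            V = Wr @ Hr[i] + Wl[i] @ Hl[i] + eps
            Hr[i] *= (Wr.T @ X) / (Wr.T @ V + eps)       # activation updates
            Hl[i] *= (Wl[i].T @ X) / (Wl[i].T @ V + eps)
            V = Wr @ Hr[i] + Wl[i] @ Hl[i] + eps
            Wl[i] *= (X @ Hl[i].T) / (V @ Hl[i].T + eps) # local basis update
            V = Wr @ Hr[i] + Wl[i] @ Hl[i] + eps
            num += X @ Hr[i].T                           # accumulate over segments
            den += V @ Hr[i].T
        Wr *= num / (den + eps)                          # shared basis update
    return Wr, Wl, Hr, Hl
```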

Salient Region Detection Algorithm for Music Video Browsing (뮤직비디오 브라우징을 위한 중요 구간 검출 알고리즘)

  • Kim, Hyoung-Gook;Shin, Dong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.2
    • /
    • pp.112-118
    • /
    • 2009
  • This paper proposes a rapid salient-region detection algorithm for a music video browsing system, which can be applied to mobile devices and digital video recorders (DVRs). The input music video is decomposed into music and video tracks. For the music track, the music highlight, including the musical chorus, is detected by structure analysis using energy-based peak position detection, and the music signal is automatically classified into one of the predefined emotional classes using emotional models trained with an SVM-AdaBoost learning algorithm. For the video track, face scenes including the singer or actor/actress are detected with a boosted cascade of simple features. Finally, the salient region is generated by aligning the boundaries of the music highlight and the detected face scenes. Users first select their favorite music videos on a mobile device or DVR using the emotion information of each music video, and can then quickly browse the 30-second salient region produced by the proposed algorithm. A mean opinion score (MOS) test on a database of 200 music videos was conducted to compare the detected salient region with a predefined manually selected part. The MOS results show that the salient region detected by the proposed method performed much better than the predefined manual part chosen without audiovisual processing.
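
A minimal sketch of energy-based highlight localization of the kind mentioned above: frame energies are computed and the 30-second window with the largest accumulated energy is returned. Frame length, hop size, and the windowing choice are assumptions, not the paper's parameters.

```python
# Locate the highest-energy 30-second window of a music track.
import numpy as np

def frame_energy(x, frame_len=2048, hop=1024):
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.array([np.sum(x[i*hop:i*hop+frame_len] ** 2) for i in range(n_frames)])

def highlight_start(x, fs, segment_sec=30.0, frame_len=2048, hop=1024):
    """Return the start time (s) of the segment_sec window with maximum energy."""
    e = frame_energy(x, frame_len, hop)
    win = max(1, int(segment_sec * fs / hop))
    window_energy = np.convolve(e, np.ones(win), mode='valid')  # running energy sum
    return int(np.argmax(window_energy)) * hop / fs
```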

Cepstral Distance and Log-Energy Based Silence Feature Normalization for Robust Speech Recognition (강인한 음성인식을 위한 켑스트럼 거리와 로그 에너지 기반 묵음 특징 정규화)

  • Shen, Guang-Hu;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.4
    • /
    • pp.278-285
    • /
    • 2010
  • The mismatch between training and test environments is one of the major causes of performance degradation in noisy speech recognition, and many silence feature normalization methods have been proposed to resolve this inconsistency. Conventional silence feature normalization achieves high classification performance at high SNR, but its performance degrades at low SNR because of the low accuracy of speech/silence classification. On the other hand, the cepstral distance represents the distribution of speech and silence (or noise) well at low SNR. In this paper, we propose a cepstral distance and log-energy based silence feature normalization (CLSFN) method which uses both the log energy and the cepstral Euclidean distance to classify speech and silence. Because the proposed method combines the merit of the log energy, which is less affected by noise at high SNR, with the merit of the cepstral distance, which discriminates speech from silence accurately at low SNR, the classification accuracy is expected to improve. The experimental results show that the proposed CLSFN yields improved recognition performance compared with the conventional SFN-I/II and CSFN methods in all noisy environments tested.
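
An illustrative speech/silence decision combining frame log-energy with the cepstral Euclidean distance to an average silence cepstrum, as motivated above. The thresholds and the silence-template estimate are assumptions; the paper's CLSFN decision rule is not reproduced exactly.

```python
# Toy speech/silence classifier using log energy plus cepstral distance.
import numpy as np

def cepstral_distance(c_frame, c_silence):
    """Euclidean distance between a frame cepstrum and the silence template."""
    return np.linalg.norm(c_frame - c_silence)

def classify_frames(log_energy, cepstra, energy_thr, dist_thr, n_init=10):
    c_silence = cepstra[:n_init].mean(axis=0)     # assume the first frames are silence
    labels = []
    for e, c in zip(log_energy, cepstra):
        is_speech = (e > energy_thr) or (cepstral_distance(c, c_silence) > dist_thr)
        labels.append(1 if is_speech else 0)      # 1 = speech, 0 = silence
    return np.array(labels)
```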

A DB Pruning Method in a Large Corpus-Based TTS with Multiple Candidate Speech Segments (대용량 복수후보 TTS 방식에서 합성용 DB의 감량 방법)

  • Lee, Jung-Chul;Kang, Tae-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.6
    • /
    • pp.572-577
    • /
    • 2009
  • Large corpus-based concatenative text-to-speech (TTS) systems can generate natural synthetic speech without additional signal processing. To prune redundant speech segments in a large speech segment DB, a decision-tree based triphone clustering algorithm widely used in speech recognition can be utilized. However, the conventional methods have problems in representing the acoustic transitional characteristics of phones and in applying context questions with hierarchical priority. In this paper, we propose a new clustering algorithm to downsize the speech DB. First, the three 13th-order MFCC vectors from the first, medial, and final frames of a phone are combined into a 39-dimensional vector that represents the transitional characteristics of the phone. Then, three hierarchically grouped question sets are used to construct the triphone trees. For the performance test, we used the DTW algorithm to calculate the acoustic similarity between the target triphone and the triphone returned by the tree search. Experimental results show that the proposed method reduces the size of the speech DB by 23% and selects phones with higher acoustic similarity. Therefore, the proposed method can be applied to build a small-sized TTS system.
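
A sketch of the 39-dimensional phone representation described above: the 13th-order MFCC vectors of the first, medial, and final frames of a phone are stacked. The small DTW routine is a generic implementation included only to illustrate the acoustic-similarity check, not the authors' code.

```python
# Stack first/medial/final MFCC frames and compare sequences with DTW.
import numpy as np

def phone_vector(mfcc):                 # mfcc: (n_frames, 13) array for one phone
    first, medial, final = mfcc[0], mfcc[len(mfcc) // 2], mfcc[-1]
    return np.concatenate([first, medial, final])   # shape (39,)

def dtw_distance(a, b):                 # a: (n, d), b: (m, d) MFCC sequences
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```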

Noise-Biased Compensation of Minimum Statistics Method using a Nonlinear Function and A Priori Speech Absence Probability for Speech Enhancement (음질향상을 위해 비선형 함수와 사전 음성부재확률을 이용한 최소통계법의 잡음전력편의 보상방법)

  • Lee, Soo-Jeong;Lee, Gang-Seong;Kim, Sun-Hyob
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.1
    • /
    • pp.77-83
    • /
    • 2009
  • This paper proposes a new method of compensating the noise-power bias of the minimum statistics (MS) method, using a nonlinear function and the a priori speech absence probability (SAP), for speech enhancement in non-stationary noisy environments. The MS method is a well-known technique for noise power estimation in non-stationary noisy environments, but it tends to bias the noise estimate below the true noise level. The proposed method combines an adaptive parameter based on a sigmoid function with the a priori SAP for bias compensation. Specifically, we adjust the adaptive parameter according to the a posteriori SNR; in addition, when the a priori SAP equals unity, the adaptive bias compensation factor increases to δmax separately in each frequency bin, and vice versa. We evaluate the noise power estimation capability in highly non-stationary and various noise environments, the improvement in segmental signal-to-noise ratio (SNR), and the Itakura-Saito distortion measure (ISDM) when the estimator is integrated into spectral subtraction (SS). The results show that the proposed method is superior to the conventional MS approach.
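
A hedged sketch of the sigmoid-controlled bias compensation described above: the minimum-statistics noise estimate is scaled by a factor that grows as the a posteriori SNR falls and is pushed toward δmax as the a priori SAP approaches unity. The constants and the exact mapping below are assumptions for illustration, not the paper's formulation.

```python
# Toy bias-compensation factor for a minimum-statistics noise estimate.
import numpy as np

DELTA_MIN, DELTA_MAX = 1.0, 2.5     # assumed bias-compensation bounds

def bias_factor(post_snr_db, sap, slope=0.5, midpoint=5.0):
    """Sigmoid in the a posteriori SNR, pushed toward DELTA_MAX as SAP -> 1."""
    sig = 1.0 / (1.0 + np.exp(slope * (post_snr_db - midpoint)))   # large at low SNR
    delta = DELTA_MIN + (DELTA_MAX - DELTA_MIN) * sig
    return sap * DELTA_MAX + (1.0 - sap) * delta

def compensated_noise_power(min_stat_noise, post_snr_db, sap):
    """Scale the (under-biased) minimum-statistics estimate."""
    return bias_factor(post_snr_db, sap) * min_stat_noise
```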