• Title/Summary/Keyword: Digital Audio


Analysis on the Possibility of Electronic Surveillance Society in the Intelligence Information age

  • Chung, Choong-Sik
    • Journal of Platform Technology / v.6 no.4 / pp.11-17 / 2018
  • In the smart intelligent information society, social dysfunctions such as personal information protection issues and the risk of an electronic surveillance society may become more pronounced. Drawing on various existing categorizations, this paper classifies electronic surveillance into audio surveillance, visual surveillance, location surveillance, biometric information surveillance, and data surveillance. Responding to new forms of electronic surveillance in the intelligent information society requires a change of perception from that of the past: one that begins with the importance of digital privacy and leads to the right of self-determination over personal information. Therefore, to respond preemptively to the dysfunctions that may arise in the intelligent information society, civil society's awareness of protecting information human rights must be further raised.

A study on the risk of taking out specific information by VoIP sniffing technique (VoIP 스니핑을 통한 특정정보 탈취 위험성에 관한 연구)

  • Lee, Donggeon;Choi, Woongchul
    • Journal of Korea Society of Digital Industry and Information Management / v.14 no.4 / pp.117-125 / 2018
  • Recently, VoIP technology has become widespread in daily life; it can be accessed easily through services ranging from home telephony to KakaoTalk. Most such Internet telephones use the RTP protocol, which has a vulnerability: users' audio data can be intercepted through packet sniffing. We therefore build a tool to check the security level of a VoIP network that uses RTP, by capturing packets flowing to and from the network. We first configure a virtual VoIP network using a Raspberry Pi and demonstrate the security vulnerability by applying our sniffing tool to it. We then analyze the captured packets and extract meaningful information from the analyzed data using the Google Speech API. Finally, we address the causes of these vulnerabilities and possible solutions.
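
The interception the abstract describes starts with parsing captured RTP packets. As an illustration (not the authors' tool), here is a minimal sketch of parsing the fixed 12-byte RTP header defined in RFC 3550, which any RTP sniffer must do before it can reassemble the audio payload; the example packet is hand-built:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550, section 5.1)."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # should be 2 for RTP
        "padding": bool(b0 & 0x20),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,     # e.g. 0 = PCMU (G.711 mu-law)
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

# Hand-built example: version 2, payload type 0 (PCMU), seq=1, ts=160,
# followed by 160 bytes of (silent) audio payload.
pkt = struct.pack("!BBHII", 0x80, 0x00, 1, 160, 0xDEADBEEF) + b"\x00" * 160
hdr = parse_rtp_header(pkt)
print(hdr["version"], hdr["payload_type"], hdr["sequence"])  # 2 0 1
```

Once headers are parsed, consecutive payloads with the same SSRC can be reordered by sequence number and decoded; that unencrypted payloads decode directly to audio is exactly the weakness the paper exploits.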

Design and Implementation of Scent-Supported Educational Content using Arduino

  • Hye-kyung Kwon;Heesun Kim
    • International journal of advanced smart convergence / v.12 no.4 / pp.260-267 / 2023
  • Due to developments in science and technology in the Fourth Industrial Revolution, a variety of content is being developed and used in educational courses linked to digital textbooks. Students use smart devices to engage in realistic virtual learning experiences, interacting with the content in digital textbooks. However, while much realistic content offers visual and auditory effects such as 3D VR, AR, and holograms, olfactory content that evokes an actual sense of smell has not yet been introduced. In this paper, we therefore designed and implemented 4D educational content by adding scent to existing content. The implemented content was tested in classrooms through a curriculum-based evaluation: classes taught with olfactory-enhanced content showed a higher percentage of correct answers than those using traditional audio-visual materials, indicating improved understanding.

Data Visualization of Site-Specific Underground Sounds

  • Tae-Eun, Kim
    • International journal of advanced smart convergence / v.13 no.1 / pp.77-84 / 2024
  • This study delves into the subtle sounds emanating from beneath the earth's surface to unveil hidden messages and the movements of life, transforming these acoustic phenomena into digital data and reimagining them as visual elements. Employing Sismophone microphones and the FFT function in p5.js, it analyzes the frequency components of subterranean sounds and translates them into various visual elements, including 3D geometric shapes, flowing lines, and moving particles. The project is grounded in sounds recorded in diverse 'spaces of death,' ranging from the tombs of Joseon Dynasty officials to abandoned areas of modern cities, leveraging the power of sound to transcend space and time and convey the concealed narratives of forgotten places. Through the visualization of these sounds, the research blurs the boundaries between 'death' and 'life,' 'past' and 'present,' aiming to explore new forms of artistic expression and to broaden perception through the sensory connection between sound and vision.
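
The FFT-driven mapping described above can be illustrated outside p5.js as well. Below is a minimal NumPy sketch, with a synthetic low-frequency signal standing in for a Sismophone recording (the actual field recordings are not available), that extracts the dominant frequency a visual element might be driven by:

```python
import numpy as np

# Synthetic stand-in for a subterranean recording: a 40 Hz rumble
# plus a fainter 90 Hz component, sampled at 1 kHz for one second.
fs = 1000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 40 * t) + 0.3 * np.sin(2 * np.pi * 90 * t)

# Magnitude spectrum, analogous to what p5.js's FFT analyser
# exposes per animation frame.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The dominant bin could drive, e.g., the radius of a 3D shape,
# while weaker bins modulate particle motion.
dominant_hz = freqs[np.argmax(spectrum)]
print(dominant_hz)  # 40.0
```

In the installation, one such spectrum per frame would be mapped onto geometry; the sketch shows only the analysis half of that pipeline.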

A Study of Voice-Recognition-Based Interactive Media Art - Focusing on the Sound-Visual Interactive Installation "Water Music" - (음성인식 기반 인터렉티브 미디어아트의 연구 - 소리-시각 인터렉티브 설치미술 "Water Music" 을 중심으로-)

  • Lee, Myung-Hak;Jiang, Cheng-Ri;Kim, Bong-Hwa;Kim, Kyu-Jung
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.354-359 / 2008
  • This audio-visual interactive installation combines video projection and digital interface technology with recognition of the viewer's voice. Viewers can interact with computer-generated moving images growing on the screen by blowing, breathing, or making sounds. This symbiotic audio-visual environment allows viewers to experience an illusionistic space physically as well as psychologically. The moving water waves that interact with the viewer were implemented in Visual C++ with the DirectX SDK; full-3D rendering and a particle system were used to generate the waves.
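
Interactive water surfaces of the kind described above are commonly implemented with the classic two-buffer ripple algorithm. A minimal NumPy sketch of that general technique follows (an illustration, not the installation's Visual C++/DirectX code), with a point disturbance standing in for the viewer's breath:

```python
import numpy as np

def ripple_step(prev, curr, damping=0.99):
    """One step of the two-buffer water-ripple algorithm: each interior
    cell becomes half the sum of its four neighbours minus its value two
    steps ago, then is damped so the surface settles over time."""
    nxt = np.zeros_like(curr)
    nxt[1:-1, 1:-1] = (
        (curr[:-2, 1:-1] + curr[2:, 1:-1] +
         curr[1:-1, :-2] + curr[1:-1, 2:]) / 2.0
        - prev[1:-1, 1:-1]
    ) * damping
    return nxt

# A "breath" from the viewer disturbs the surface at the centre.
h, w = 64, 64
prev = np.zeros((h, w))
curr = np.zeros((h, w))
curr[32, 32] = 1.0

for _ in range(10):
    prev, curr = curr, ripple_step(prev, curr)
```

Each height field would then be rendered (e.g. as displaced geometry or refraction offsets); the wavefront expands one cell per step, so after 10 steps the disturbance has spread about 10 cells from the centre.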


Development of Hardware Platform for Extracting & Composing of SDI Embedded Audio Data at Real-time Capture/Playback System of UHD Video/Audio (UHD 영상/음향 데이터의 실시간 획득/재생 시스템에서의 SDI 내장 음향 데이터의 추출 및 합성을 위한 하드웨어 플랫폼 개발)

  • Lee, Sang-Seol;Jang, Sung-Joon;Choi, Jung-Min;Kim, Je Woo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.06a / pp.258-259 / 2016
  • In a typical UHD broadcast editing system, the volume of UHD video data is enormous, so the video is compressed with a codec for real-time transfer and streamed to and from the editing server. Unlike the video, SDI (Serial Digital Interface) embedded audio transmitted in BT.1120 form is multiplexed with other metadata in the ancillary data area, which makes its extraction and composition relatively difficult. Playback is particularly challenging because the audio must be synchronized with the output of the video codec and re-composed into the ancillary data area according to the BT.1120 standard. This paper therefore proposes an FPGA (Field Programmable Gate Array) based hardware platform for extracting and composing SDI embedded audio in a real-time UHD video/audio capture/playback system. The audio extraction and composition logic was designed in an HDL (Hardware Description Language), implemented on the FPGA, and integrated with the camera, display, and editing server. In tests, video and audio were correctly separated/captured and composed/played back for 4K 60 fps data.


A Study for DVD Authoring with IEEE 1394 (IEEE 1394를 이용한 DVD Authoring에 관한 연구)

  • Yoon Young-Doo;Lee Heun-Jung
    • The Journal of the Korea Contents Association / v.5 no.5 / pp.145-151 / 2005
  • DVD authoring can be defined as the process of combining an MPEG-2 video stream, an AC-3 audio stream, and subtitles, each in its own category; programming elements such as the region code and playback-restriction menus; assigning attributes, ordering, and behavior; and producing the final disc image of the DVD (digital versatile disc). Various authoring tools on the market enable, encourage, and assist users ('authors') in producing simple titles and in video production and editing. In this paper, we compare and analyze the authoring process, in which video and sound are captured over an IEEE 1394 port and authored onto DVD, between a Windows desktop PC and a Macintosh running OS X.


Implementation of the MPEG-1 Layer II Decoder Using the TMS320C64x DSP Processor (TMS320C64x 기반 MPEG-1 LayerII Decoder의 DSP 구현)

  • Cho, Choong-Sang;Lee, Young-Han;Oh, Yoo-Rhee;Kim, Hong-Kook
    • Proceedings of the IEEK Conference / 2006.06a / pp.257-258 / 2006
  • In this paper, we address several issues in the real-time implementation of an MPEG-1 Layer II decoder on a fixed-point digital signal processor (DSP), specifically the TMS320C6416. There is a trade-off between processing speed and program/data memory size in the optimal implementation. For speed optimization, we first convert the floating-point operations into fixed-point ones with little degradation in audio quality, and force the look-up tables used for inverse quantization into the internal memory of the DSP. The window functions and filter coefficients in the decoder are then precalculated and stored as constants, which makes the decoder faster even though a larger memory size is required. Real-time experiments show that the fixed-point implementation lets the decoder, at a sampling rate of 48 kHz, run about three times faster than real time on a TMS320C6416 clocked at 600 MHz.
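
Float-to-fixed conversion of the kind mentioned above typically targets a Q15 representation (1 sign bit, 15 fraction bits) on 16-bit DSP data paths. The following Python sketch illustrates Q15 quantization, saturation, and multiplication in general terms; it is not the paper's TMS320C6416 code:

```python
def float_to_q15(x: float) -> int:
    """Quantise a float in [-1, 1) to Q15, saturating out-of-range values
    rather than letting them wrap around."""
    q = int(round(x * 32768))
    return max(-32768, min(32767, q))

def q15_to_float(q: int) -> float:
    return q / 32768.0

def q15_mul(a: int, b: int) -> int:
    """Fixed-point multiply: the 32-bit product is shifted right by 15
    to bring the result back into Q15."""
    return (a * b) >> 15

# A synthesis-window coefficient stored as a precalculated Q15 constant,
# applied to a subband sample -- the kind of operation a Layer II
# synthesis filterbank performs throughout.
coeff = float_to_q15(0.707107)
sample = float_to_q15(0.5)
out = q15_to_float(q15_mul(coeff, sample))
print(round(out, 4))  # ≈ 0.3535, vs. 0.35355 in floating point
```

The small error visible in the result is the "little degradation in audio quality" such conversions trade for integer-only arithmetic.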


Automatic Generation of Video Metadata for the Super-personalized Recommendation of Media

  • Yong, Sung Jung;Park, Hyo Gyeong;You, Yeon Hwi;Moon, Il-Young
    • Journal of information and communication convergence engineering / v.20 no.4 / pp.288-294 / 2022
  • The media content market has been growing as various types of content are mass-produced, owing to the recent proliferation of the Internet and digital media. Platforms providing personalized services for content consumption are emerging and competing to recommend personalized content. Existing platforms rely on users manually entering video metadata, which consumes significant time and cost when processing large amounts of data. In this study, keyframes based on the YCbCr color model and audio spectra were extracted from movie trailers for the automatic generation of metadata. The extracted audio spectra and image keyframes were used as training data for genre recognition with deep learning. Deep learning was implemented to determine the genre among the video metadata, and uses of the approach were proposed. A system that automatically generates metadata, built on the results of this study, will be helpful for research on super-personalized media recommendation systems.
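
YCbCr-based keyframe extraction of the kind described above rests on a standard RGB-to-YCbCr conversion. Below is a minimal NumPy sketch using ITU-R BT.601 luma weights and a mean-luma-difference cue for shot changes; it is illustrative only, as the abstract does not give the authors' exact weights or keyframe rule:

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Convert 8-bit RGB to full-range YCbCr using BT.601 weights."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def luma_diff(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Mean absolute luma difference -- a simple shot-boundary cue:
    a large jump suggests a cut, and the next frame is a keyframe
    candidate."""
    ya = rgb_to_ycbcr(frame_a)[..., 0]
    yb = rgb_to_ycbcr(frame_b)[..., 0]
    return float(np.abs(ya - yb).mean())

black = np.zeros((4, 4, 3), dtype=np.uint8)
white = np.full((4, 4, 3), 255, dtype=np.uint8)
print(luma_diff(black, white))  # ≈ 255.0
```

Frames whose luma difference exceeds a threshold would be kept as keyframes and fed, alongside audio spectra, into the genre classifier.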

Case Study : Cinematography using Digital Human in Tiny Virtual Production (초소형 버추얼 프로덕션 환경에서 디지털 휴먼을 이용한 촬영 사례)

  • Jaeho Im;Minjung Jang;Sang Wook Chun;Subin Lee;Minsoo Park;Yujin Kim
    • Journal of the Korea Computer Graphics Society / v.29 no.3 / pp.21-31 / 2023
  • In this paper, we introduce a case study of cinematography using a digital human in virtual production. The case study covers a system overview of LED-based virtual production and an efficient filming pipeline using a digital human. Unlike typical LED virtual production, which mainly projects backgrounds onto the LED wall, we use the digital human as a virtual actor and film scenes in which it communicates with a real actor. To film dialogue between the real actor and the digital human in a real-time engine, we generated the digital human's speech animation in advance by applying our Korean lip-sync technology, which is driven by audio and text. We validated this approach by producing short drama content with a real actor and a digital human in an LED-based virtual production environment, rendered with a real-time engine.