• Title/Summary/Keyword: 화상회의 (video conferencing)

Search Results: 362

A Study on the Current Use of Remote Video Conferencing Systems (원격화상회의시스템의 활용실태에 관한 연구)

  • 김영문;김숙원
    • Proceedings of the Korea Association of Information Systems Conference
    • /
    • 1997.10b
    • /
    • pp.243-259
    • /
    • 1997
  • This paper discusses remote video conferencing systems in concrete terms, covering (1) their theoretical background, (2) the current state of their use, and (3) their effects and problems, based on the theoretical literature and actual cases.

An Algorithm for Stable Video Conference System (안정적인 화상회의 시스템을 위한 알고리즘)

  • Lee Moon-Ku
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.2 s.302
    • /
    • pp.11-20
    • /
    • 2005
  • In previous video conference systems, when the number of participants increases to n, bandwidth and memory on the order of n² are required, and voice transmission suffers both from increased traffic and from the problem of managing the right to speak during a conference. In this paper, we propose a remote video conferencing algorithm that uses silence detection to resolve these issues, together with a server-side buffering method for video data and a heavy-traffic detection algorithm to cope with the growing number of participants. The video data buffering algorithm does not simply broadcast from the server to the other clients; instead it combines two methods: buffering the compressed video data received from each client, and indexing so that each client fetches the other participants' video data according to its own bandwidth and network transmission speed. We also apply a voice transmission algorithm and a channel management algorithm to the remote video conferencing system. The voice transmission algorithm uses silence detection, so that silent participants' voice data is not sent to the server. The channel management algorithm grants the right to speak to the participant with the highest priority. Since an average of 20 frames and 30 ms is maintained regardless of the number of participants, we conclude that the transmission of video and voice data is stable.
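
A minimal sketch of the silence-detection idea described in the abstract above: voice frames whose short-term energy falls below a threshold are simply not sent to the server. The frame size and threshold are illustrative assumptions, not figures from the paper.

```python
import numpy as np

FRAME_SIZE = 480          # 30 ms of audio at 16 kHz (illustrative values)
SILENCE_THRESHOLD = 1e-4  # mean-square energy threshold (illustrative value)

def is_silent(frame: np.ndarray) -> bool:
    """Return True if the frame's short-term energy is below the threshold."""
    return float(np.mean(frame.astype(np.float64) ** 2)) < SILENCE_THRESHOLD

def frames_to_send(signal: np.ndarray):
    """Yield only voiced frames, so a silent participant's audio is never
    transmitted to the server (the core of the silence-detection idea)."""
    for start in range(0, len(signal) - FRAME_SIZE + 1, FRAME_SIZE):
        frame = signal[start:start + FRAME_SIZE]
        if not is_silent(frame):
            yield frame
```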

Real-Time Vision Based Speaker Location Detection for Realistic Audio Reproduction (실감 음향 재생을 위한 영상기반의 실시간 화자 위치 검출)

  • Lim Jaehyun;Lee Chulhee
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.143-146
    • /
    • 2004
  • In video conferencing, the speaker's location has generally been detected from the acoustic signal. However, when the physical environment imposes constraints or when noise exceeds the limits of the detection system, detection performance degrades. In this paper, we propose a vision-based speaker detection algorithm that can be used independently of, or as a complement to, acoustic detection systems. Information about the speaker's location can then be used for 3D audio reproduction, which makes the video conference feel more realistic.
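
A minimal sketch of how a vision-based detector could map a detected face position to an azimuth angle for 3D audio reproduction. The Haar-cascade face detector and the linear mapping onto an assumed camera field of view are illustrative stand-ins, not the method of the paper.

```python
from typing import Optional

import cv2

# OpenCV's bundled frontal-face Haar cascade, used here only as a stand-in
# for the vision-based face detection step.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

CAMERA_FOV_DEG = 60.0  # assumed horizontal field of view of the camera

def speaker_azimuth(frame) -> Optional[float]:
    """Return an approximate azimuth in degrees (0 = straight ahead) of the
    largest detected face, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # take the largest face
    center_x = x + w / 2.0
    # Map the horizontal pixel offset linearly onto the assumed field of view.
    return ((center_x / frame.shape[1]) - 0.5) * CAMERA_FOV_DEG
```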

Design of a Three Dimensional Audio System for Multicast Conferencing (멀티캐스트 화상회의를 위한 3-D 음향시스템 설계)

  • 김영오;고대식
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.1B
    • /
    • pp.71-76
    • /
    • 2000
  • In a multimedia teleconferencing system with several participants, each participant's face can be perceived from the visual image, but distinguishing individual voices and perceiving spaciousness are very difficult because all the voices are processed as one-dimensional data. In this paper, we implement a three-dimensional audio rendering system using HRTFs (Head Related Transfer Functions) and a distance-cue reproduction method, and determine the optimal placement of participants for a teleconferencing system. Listening tests over elevation and azimuth angles showed that directional perception was better for azimuth angles than for elevation angles. In particular, for a teleconferencing system with four participants, placing the participants using the HRTFs for azimuth angles of 10°, 90°, 270°, and 350° was effective. We also propose using distance cues to enhance realism and to place five or more participants.
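
A minimal sketch of HRTF-based placement of four participants at the azimuths mentioned in the abstract above. The HRIR arrays are placeholders (real impulse responses would come from a measured HRTF database), and the convolution rendering shown is the generic technique, not the paper's exact implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

AZIMUTHS = [10, 90, 270, 350]  # participant placements from the abstract (degrees)
IR_LENGTH = 256                # impulse response length in samples (illustrative)

# Placeholder HRIR pairs per azimuth: {azimuth: (left_ir, right_ir)}.
# Real impulse responses would be loaded from a measured HRTF database.
hrirs = {az: (np.random.randn(IR_LENGTH) * 0.01, np.random.randn(IR_LENGTH) * 0.01)
         for az in AZIMUTHS}

def render_scene(voices: dict) -> np.ndarray:
    """Mix each participant's mono voice (keyed by assigned azimuth) into a
    stereo scene by convolving it with the HRIR pair for that azimuth."""
    length = max(len(v) for v in voices.values()) + IR_LENGTH - 1
    out = np.zeros((length, 2))
    for az, voice in voices.items():
        left_ir, right_ir = hrirs[az]
        left = fftconvolve(voice, left_ir)
        right = fftconvolve(voice, right_ir)
        out[:len(left), 0] += left
        out[:len(right), 1] += right
    return out
```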

Metaverse Platform to Improve Immersion of Online Video Conferencing System (온라인 화상 회의 시스템의 몰입 개선을 위한 메타버스 플랫폼)

  • Yoon, Dong-eon;Oh, Am-suk
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.35-37
    • /
    • 2022
  • Online video conferencing systems such as Zoom and Discord are widely used for remote work because they are easily accessible and free of spatial restrictions. However, most online video conferencing systems make two-way interaction difficult, and participants report problems such as poor communication and a lack of immersion. Meanwhile, the metaverse, which has attracted attention with the development of spatial computing, enables smooth, real-world-like interaction in a three-dimensional virtual world by drawing on auditory, visual, and tactile information. In this paper, we propose a way to use a metaverse platform to address the lack of immersion in existing online video conferencing systems. With the proposed method, platform users can share and view each other's work screens in a virtual space and exchange various kinds of data. To improve participants' immersion, we built the metaverse platform environment with the proposed improvements and compared it with existing online video conferencing platforms.

Recording Support System for Off-Line Conference using Face and Speaker Recognition (얼굴 인식 및 화자 정보를 이용한 오프라인 회의 기록 지원 시스템)

  • Son, Yun-Sik;Jeong, Jin-U;Park, Han-Mu;Gye, Seung-Cheol;Yun, Jong-Hyeok;Jeong, Nak-Cheon;O, Se-Man
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.11a
    • /
    • pp.33-37
    • /
    • 2007
  • Recent multimedia services, built on advances in video compression and networking, offer a wide range of applications, and the video conferencing system is a representative example in which these two technologies are used effectively. Video conferencing systems, designed for smooth communication between remote users, are considered an effective application service, yet applications that use the same technologies to support ordinary face-to-face meetings, which occur far more often, are rare. In this paper, we propose a system that assists offline (face-to-face) meetings based on face information and speaker information. The proposed system locates the speaker using a small microphone and a camera, analyzes and recognizes the face regions in the camera image, extracts speaker information, and then tracks and records who is speaking.
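
A minimal sketch of how such a recording aid could log who is speaking and when, assuming face-recognition and voice-activity components are available as callbacks. The polling loop and the log structure are illustrative assumptions, not the paper's design.

```python
import time
from dataclasses import dataclass

@dataclass
class LogEntry:
    speaker: str   # identity returned by the face-recognition component
    start: float   # UNIX timestamp when this speaker started talking
    end: float     # UNIX timestamp when this speaker stopped talking

def record_meeting(identify_speaker, is_voice_active, poll_interval=0.5):
    """Build a log of (speaker, start, end) segments for an offline meeting.

    `identify_speaker() -> str | None` and `is_voice_active() -> bool` are
    placeholders for the camera-based and microphone-based components."""
    log, current = [], None
    try:
        while True:
            now = time.time()
            speaker = identify_speaker() if is_voice_active() else None
            if current is not None and speaker != current.speaker:
                current.end = now
                log.append(current)
                current = None
            if speaker is not None and current is None:
                current = LogEntry(speaker, now, now)
            time.sleep(poll_interval)
    except KeyboardInterrupt:
        if current is not None:
            current.end = time.time()
            log.append(current)
        return log
```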

An Efficient On-line Frame Scheduling Algorithm for Video Conferences (화상회의를 위한 효율적인 온-라인 프레임 스케줄링 알고리즘)

  • 안성용;이정아;심재홍
    • Journal of KIISE: Computer Systems and Theory
    • /
    • v.31 no.7
    • /
    • pp.387-396
    • /
    • 2004
  • In this paper, we propose an algorithm that distributes processor time among the tasks that decode encoded frames, with the goal of maximizing the total QoS (quality of service) of a video conferencing system. An encoded frame has the property that the quality of the recovered image increases as more processor time is spent decoding it, so the quality of each decoded frame can be represented as a QoS function of the service time given to its decoding. In addition, every video stream in a conference has strong temporal dependencies between consecutive frames of the same stream. Based on these dependencies and the QoS functions, we propose an on-line frame scheduling algorithm that schedules only a few frames at a time rather than all frames in the system, while maximizing the total QoS of the video streams in the conference. Simulation results show that, as the system load increases, the proposed algorithm degrades the quality of decoded frames more gracefully than the existing EDF algorithm and renders the movements of conference attendees more naturally, without abrupt cuts.
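
A minimal sketch of a greedy way to split a processor-time budget across decoding tasks when each frame's quality is a concave, non-decreasing QoS function of its decode time. The discrete time quanta and the example QoS curves are illustrative assumptions, not the scheduling algorithm from the paper.

```python
import heapq

def allocate_decode_time(qos_funcs, budget, quantum=1):
    """Split `budget` units of processor time among decoding tasks.

    `qos_funcs[i](t)` gives the QoS of frame i when decoded for t time units.
    With non-decreasing, concave QoS functions, repeatedly granting the next
    quantum to the task with the largest marginal gain maximizes total QoS
    for this discretization."""
    alloc = [0] * len(qos_funcs)
    # Max-heap of (negated marginal gain, task index).
    heap = [(-(f(quantum) - f(0)), i) for i, f in enumerate(qos_funcs)]
    heapq.heapify(heap)
    remaining = budget
    while remaining >= quantum and heap:
        neg_gain, i = heapq.heappop(heap)
        if -neg_gain <= 0:  # no task benefits from more decode time
            break
        alloc[i] += quantum
        remaining -= quantum
        f, t = qos_funcs[i], alloc[i]
        heapq.heappush(heap, (-(f(t + quantum) - f(t)), i))
    return alloc

# Example: three frames with diminishing-returns QoS curves (illustrative).
curves = [lambda t, a=a: a * (1 - 0.5 ** t) for a in (10, 6, 3)]
print(allocate_decode_time(curves, budget=8))
```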

Mutual Gaze Correction for Videoconferencing using View Morphing (모핑을 이용한 화상회의의 시선 맞춤 보정 방법)

  • Baek, Eu-Tteum;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.4 no.1
    • /
    • pp.9-15
    • /
    • 2015
  • Nonverbal cues such as eye gaze, posture, and gestures send powerful messages, and eye gaze is one of the strongest forms of nonverbal communication an individual can use. However, mutual gaze is lost in video conferencing: the displacement between the eyes shown on the screen and the camera prevents eye contact, and this lack of eye contact can make a speaker seem unapproachable and unpleasant. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual viewpoint; we apply view morphing to the detected face region and then composite the morphed face with the warped image. The results show that eye gaze is corrected while the rest of the image is preserved, and that the composite is synthesized seamlessly.
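
A minimal sketch of the two-camera idea: detect the face in images from cameras above and below the display and cross-fade the face regions toward a virtual mid-viewpoint. The plain alpha blend stands in for true view morphing (which would warp along point correspondences before blending), and the Haar-cascade detector substitutes for the paper's face detection step.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def largest_face(img):
    """Return (x, y, w, h) of the largest detected face, or None."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return max(faces, key=lambda f: f[2] * f[3]) if len(faces) else None

def gaze_corrected(top_img, bottom_img, alpha=0.5):
    """Composite an approximate mid-viewpoint face: detect the face in each
    camera image, resize the top-camera face to the bottom-camera face box,
    and cross-fade the two regions (alpha = 0.5 ~ the virtual midpoint)."""
    top_box, bot_box = largest_face(top_img), largest_face(bottom_img)
    if top_box is None or bot_box is None:
        return bottom_img
    tx, ty, tw, th = top_box
    bx, by, bw, bh = bot_box
    top_face = cv2.resize(top_img[ty:ty + th, tx:tx + tw], (bw, bh))
    bottom_face = bottom_img[by:by + bh, bx:bx + bw]
    blended = cv2.addWeighted(top_face, 1 - alpha, bottom_face, alpha, 0)
    out = bottom_img.copy()
    out[by:by + bh, bx:bx + bw] = blended
    return out
```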