• Title/Summary/Keyword: Moving camera

Efficient Video Service Providing Methods for Terminal Mobility between Indoor APs (실내 AP간 단말 이동에 따른 효율적인 동영상 서비스 제공 방안)

  • Hong, Sung-Hwa;Kim, Byoung-Kug
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.585-587
    • /
    • 2022
  • Time synchronization between AP devices is normally NTP-based, obtained over the Internet through the internal wired LAN, but depending on the network this leaves time differences ranging from hundreds of milliseconds (msec) up to several seconds. The video output frame rate varies with the application, but usually 24 image frames are displayed per second. Therefore, time synchronization between neighboring devices can be performed through an adjacent moving camera device rather than over the wired path. When an application generates a synchronization command through an API and delivers it to the AP via the MAC, the time actually applied may differ from the time carried in the command, depending on the operating-system environment on the transmitting side and the state of the MAC buffer queue. Therefore, updating the time information in the device driver that controls the MAC can be much more effective in solving this problem. (A minimal sketch of the NTP offset computation appears after this entry.)

  • PDF
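
The NTP-based synchronization that this abstract contrasts with MAC-level updating boils down to the standard offset/delay computation from one request/response timestamp exchange. A minimal sketch, with illustrative timestamps that are not from the paper:

```python
# Minimal sketch of the classic NTP offset/delay computation from one
# request/response exchange. t1: client send, t2: server receive,
# t3: server send, t4: client receive (all in seconds).
def ntp_offset_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated clock offset of client vs. server
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

if __name__ == "__main__":
    # Illustrative timestamps: the client clock runs ~0.2 s behind the server.
    offset, delay = ntp_offset_delay(t1=100.000, t2=100.250, t3=100.260, t4=100.110)
    print(f"offset = {offset:.3f} s, round-trip delay = {delay:.3f} s")
```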

Implementation of a Self Controlled Mobile Robot with Intelligence to Recognize Obstacles (장애물 인식 지능을 갖춘 자율 이동로봇의 구현)

  • 류한성;최중경
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.312-321
    • /
    • 2003
  • In this paper, we implement a robot that is able to recognize obstacles and move automatically to a destination. We present two results: a hardware implementation of an image processing board and a software implementation of a visual feedback algorithm for a self-controlled robot. In the first part, the mobile robot depends on commands from a control board that performs the image processing. We have studied this self-controlled mobile robot system, equipped with a CCD camera, for a long time. The robot system consists of an image processing board implemented with DSPs, a stepping motor, and a CCD camera. We propose an algorithm in which commands are delivered for the robot to move along the planned path. The distance the robot is supposed to move is calculated on the basis of the absolute coordinates and the coordinates of the target spot, and the image signal acquired by the CCD camera mounted on the robot is captured at every sampling time so that the robot can automatically avoid obstacles and finally reach the destination. The image processing board consists of a DSP (TMS320VC33), ADV611, SAA7111, ADV7176A, CPLD (EPM7256ATC144), and SRAM memories. In the second part, the visual feedback control has two types of vision algorithms: obstacle avoidance and path planning. The first algorithm is based on cells, parts of the image divided by blob analysis. Image preprocessing is applied to improve the input image, consisting of filtering, edge detection, NOR conversion, and thresholding; the main image processing includes labeling, segmentation, and pixel-density calculation. In the second algorithm, after an image frame has gone through preprocessing (edge detection, conversion, thresholding), the histogram is measured vertically (in the y-axis direction). The binary histogram of the image then shows waveforms with only black and white variations. Here we use the fact that, since obstacles appear as wall-like sections, there is little variation in the histogram. The intensities of the line histogram are measured vertically at intervals of 20 pixels, so we can find uniform and non-uniform regions of the waveforms and define a run of uniform waveforms as an obstacle region. The algorithm proves very useful for the robot to move while avoiding obstacles.
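
The column-histogram obstacle test described above can be sketched roughly as follows; the 20-pixel sampling interval comes from the abstract, while the uniformity tolerance, the minimum edge density, and the test image are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def find_obstacle_columns(binary_edges, step=20, uniform_tol=5.0, min_density=30):
    """Sketch of obstacle detection from a vertical (column-wise) projection
    histogram of a binarized edge image: sampled columns whose edge counts are
    both high and nearly constant are treated as a wall-like obstacle region.
    Returns a list of (start_col, end_col) pixel ranges."""
    cols = np.arange(0, binary_edges.shape[1], step)
    hist = binary_edges[:, cols].sum(axis=0).astype(float)

    obstacles, run_start = [], None
    for i in range(1, len(hist)):
        flat = (abs(hist[i] - hist[i - 1]) <= uniform_tol
                and min(hist[i], hist[i - 1]) >= min_density)
        if flat and run_start is None:
            run_start = i - 1
        elif not flat and run_start is not None:
            obstacles.append((int(cols[run_start]), int(cols[i - 1])))
            run_start = None
    if run_start is not None:
        obstacles.append((int(cols[run_start]), int(cols[-1])))
    return obstacles

if __name__ == "__main__":
    edges = np.zeros((120, 320), dtype=np.uint8)
    edges[:, 100:200] = 1                      # a wall-like block of edge pixels
    print(find_obstacle_columns(edges))        # expected: [(100, 180)]
```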

Rotating Brush Strokes to Track Movement for Painterly Rendering (회화적 렌더링에서 움직임을 따라 회전하는 붓질 기법)

  • Han, Jeong-Hun;Gi, Hyeon-U;Kim, Hyo-Won;O, Gyeong-Su
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.426-432
    • /
    • 2008
  • We introduce a method of rendering a scene containing 3D objects so that it looks as if an artist had drawn it on a canvas with brush strokes. Painting is an art form that presents its subject with color and line on a 2D plane. Following this definition of "painting", we draw brush strokes on billboards in screen space to obtain a 2D brushing effect. The brush-stroke orientation has to be rotated to preserve the orientation established in the first frame when the object or camera moves; if the strokes are not rotated, an undesirable shower-door effect appears in the scene. We present a stroke-rotation method that keeps the orientation consistent under changes of the view direction and rigid object animation. The stroke direction is computed with Horn's 2D similarity transform via a least-squares solution. We observed that the strokes change so as to track the motion of the object and the view. (A least-squares 2D similarity sketch appears after this entry.)

  • PDF
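
The least-squares rotation mentioned above corresponds to the classic closed-form 2D similarity fit. A minimal sketch under that assumption, using generic point correspondences rather than the authors' billboard data:

```python
import numpy as np

def similarity_2d(src: np.ndarray, dst: np.ndarray):
    """Least-squares 2D similarity transform (rotation, uniform scale, translation)
    mapping src -> dst, in the spirit of Horn's closed-form absolute orientation.
    src, dst: (N, 2) arrays of corresponding points.
    Returns (theta, scale, t) with dst ~ scale * R(theta) @ src + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    a, b = src - src_c, dst - dst_c                      # centered coordinates

    # Rotation angle from the cross- and dot-product sums of the centered points.
    s_cross = np.sum(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])
    s_dot = np.sum(a[:, 0] * b[:, 0] + a[:, 1] * b[:, 1])
    theta = np.arctan2(s_cross, s_dot)

    scale = np.hypot(s_cross, s_dot) / np.sum(a ** 2)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = dst_c - scale * R @ src_c
    return theta, scale, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(10, 2))
    true_theta = np.deg2rad(15.0)
    R = np.array([[np.cos(true_theta), -np.sin(true_theta)],
                  [np.sin(true_theta),  np.cos(true_theta)]])
    moved = pts @ R.T * 1.2 + np.array([0.5, -0.3])
    theta, scale, t = similarity_2d(pts, moved)
    print(np.rad2deg(theta), scale, t)   # brush strokes would be rotated by theta
```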

Regional Projection Histogram Matching and Linear Regression based Video Stabilization for a Moving Vehicle (영역별 수직 투영 히스토그램 매칭 및 선형 회귀모델 기반의 차량 운행 영상의 안정화 기술 개발)

  • Heo, Yu-Jung;Choi, Min-Kook;Lee, Hyun-Gyu;Lee, Sang-Chul
    • Journal of Broadcast Engineering
    • /
    • v.19 no.6
    • /
    • pp.798-809
    • /
    • 2014
  • Video stabilization is performed to remove unexpected shaky and irregular motion from a video, and is often used as preprocessing for robust feature tracking and matching. Typical video stabilization algorithms are developed to compensate motion in surveillance video or outdoor recordings captured by a hand-held camera. However, since vehicle video contains rapid changes of motion and local features, typical video stabilization algorithms are hard to apply as they are. In this paper, we propose a novel approach to compensating shaky and irregular motion in vehicle video using a linear regression model and vertical projection histogram matching. Toward this goal, we perform vertical projection histogram matching in each sub-region of an input frame and then build a linear regression model from the estimated regional vertical movement vectors to extract vertical translation and rotation parameters. Multiple binarization with sub-region analysis for generating the linear regression model is effective in typical recording environments, where rapid changes of motion and local features occur. We demonstrated the effectiveness of our approach on blackbox videos and showed that employing the linear regression model achieves robust estimation of motion parameters and generates stabilized video in a fully automatic manner.
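
A rough sketch of the regional projection-histogram matching plus linear regression described above, assuming vertical bands, a brute-force shift search, and a small-angle rotation model; the paper's multiple-binarization step is omitted:

```python
import numpy as np

def vertical_shift(ref_band: np.ndarray, cur_band: np.ndarray, max_shift: int = 20) -> int:
    """Estimate the vertical displacement of one sub-region by matching its
    row-projection histograms (sum of intensities per row). The returned shift
    is what must be applied to `cur_band` to re-align it with `ref_band`."""
    ref_h = ref_band.sum(axis=1).astype(float)
    cur_h = cur_band.sum(axis=1).astype(float)
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.mean((ref_h - np.roll(cur_h, s)) ** 2)
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift

def estimate_translation_rotation(ref: np.ndarray, cur: np.ndarray, n_bands: int = 8):
    """Split the frame into vertical bands, estimate each band's vertical shift,
    then fit shift ~ a * x_center + b by linear regression: b approximates the
    vertical translation and a the (small-angle) rotation."""
    h, w = ref.shape
    xs, shifts = [], []
    for i in range(n_bands):
        c0, c1 = i * w // n_bands, (i + 1) * w // n_bands
        xs.append((c0 + c1) / 2.0)
        shifts.append(vertical_shift(ref[:, c0:c1], cur[:, c0:c1]))
    a, b = np.polyfit(np.array(xs), np.array(shifts), deg=1)
    return b, np.arctan(a)     # vertical translation (pixels), rotation (radians)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 255, size=(240, 320)).astype(np.uint8)
    cur = np.roll(ref, 5, axis=0)                 # simulate a 5-pixel downward shake
    print(estimate_translation_rotation(ref, cur))  # expected roughly (-5.0, 0.0)
```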

A Study on High-Speed Extraction of Bar Code Region for Parcel Automatic Identification (소포 자동식별을 위한 바코드 관심영역 고속 추출에 관한 연구)

  • Park, Moon-Sung;Kim, Jin-Suk;Kim, Hye-Kyu;Jung, Hoe-Kyung
    • The KIPS Transactions: Part D
    • /
    • v.9D no.5
    • /
    • pp.915-924
    • /
    • 2002
  • Conventional parcel-sorting systems consist of two steps: loading the parcel onto the conveyor belt system and entering the postal code. The parcels to be recorded and managed are identified using bar code information. This paper describes a 32 × 32 mini-block inspection method for extracting the bar code region of interest (ROI) from images captured by a line-scan charge-coupled device (CCD) camera of parcels moving at 2 m/sec. First, the min-max distribution of each mini-block is used to discard the parcel background and the conveyor-belt region from the image. Second, a diagonal inspection is used to extract the letter and bar code regions. Five horizontal line scans detect the number and size of edges, from which the ROI is acquired. Incorrectly detected areas are removed by comparing group sizes obtained from the labeling process. To recover bar code regions excluded during mini-block processing and to analyze the bar code information, the eight boundary points of the extracted ROI and the slope distribution are used together with central-axis adjustment. ROI extraction and central-axis creation are completed within 60~80 msec, with an accuracy of over 99.44%.
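
The first stage above, min-max screening of 32 × 32 mini-blocks, can be sketched as follows; the contrast threshold and the synthetic test image are illustrative assumptions, not the paper's values:

```python
import numpy as np

def candidate_blocks(gray: np.ndarray, block: int = 32, min_range: int = 60):
    """Split the image into 32 x 32 mini-blocks and keep only blocks whose
    min-max intensity range is large, discarding the low-contrast background
    (parcel surface, conveyor belt)."""
    h, w = gray.shape
    keep = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = gray[r:r + block, c:c + block]
            if int(patch.max()) - int(patch.min()) >= min_range:
                keep.append((r, c))           # likely bar code or printed text
    return keep

if __name__ == "__main__":
    img = np.full((128, 128), 200, dtype=np.uint8)      # bright parcel background
    img[32:64, 32:96:4] = 20                            # dark stripes imitate bars
    print(candidate_blocks(img))                        # expected: [(32, 32), (32, 64)]
```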

Hierarchical Enhancement of Motion Parameters for Sports Video Mosaicking (스포츠 동영상의 모자익을 위한 이동계수의 계층적 향상)

    • Lee, Jae-Cheol;Lee, Soo-Jong;Ko, Young-Hoon;Noh, Heung-Sik;Lee, Wan-Ju
    • The Journal of Information Technology
    • /
    • v.7 no.2
    • /
    • pp.93-104
    • /
    • 2004
  • Sports scenes are characterized by a large amount of global motion due to camera pan and zoom, and include many small objects moving independently. Short periods of sports games are thrilling to viewers and important to producers, yet such scenes exhibit exceptionally dynamic motion that is very difficult to analyze with conventional algorithms. In this paper, several algorithms are proposed for global motion analysis of these dynamic scenes, and the proposed algorithms are shown to work well for motion compensation and panorama synthesis. When cascading the inter-frame motions, accumulated errors are unavoidable; to minimize these errors, an interpolation method for motion vectors is introduced. An affine or perspective projection transform is regarded as a square matrix that can be factorized into a small number of motion vectors. To solve the factorization problem, we propose an adaptation of the Newton-Raphson method to vector and matrix form, which is also computationally efficient. By combining multi-frame motion estimation and the corresponding interpolation in a hierarchical manner, an enhancement algorithm for motion parameters is proposed that is suitable for motion compensation and panorama synthesis. The proposed algorithms are suitable for special-effect rendering in broadcast systems, video indexing, tracking in complex scenes, and other fields requiring global motion estimation. (A sketch of cascading inter-frame transforms appears after this entry.)

  • PDF
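
The cascading of inter-frame motions discussed above, and the error accumulation it causes, can be illustrated with a minimal sketch that composes per-frame affine matrices into frame-to-reference transforms; the interpolation and Newton-Raphson refinement steps of the paper are not reproduced:

```python
import numpy as np

def cascade_to_reference(inter_frame_affines):
    """Compose 3x3 affine matrices A_k (mapping frame k to frame k-1) so that
    each frame can be warped into the first frame's coordinate system for
    panorama synthesis: M_k = A_1 @ A_2 @ ... @ A_k. Error accumulated in this
    cascade is what a hierarchical interpolation scheme is meant to reduce."""
    mosaics, M = [], np.eye(3)
    for A in inter_frame_affines:
        M = M @ A
        mosaics.append(M.copy())
    return mosaics

def affine(tx=0.0, ty=0.0, theta=0.0, s=1.0):
    """Small helper building a 3x3 similarity/affine matrix."""
    c, sn = np.cos(theta), np.sin(theta)
    return np.array([[s * c, -s * sn, tx],
                     [s * sn,  s * c,  ty],
                     [0.0,     0.0,    1.0]])

if __name__ == "__main__":
    # Simulated pan: every frame is shifted 10 px right of the previous one.
    per_frame = [affine(tx=10.0) for _ in range(5)]
    for k, M in enumerate(cascade_to_reference(per_frame), start=1):
        print(f"frame {k}: cumulative tx = {M[0, 2]:.1f}")
```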

The momentary movement of soft contact lens by blinking: The change of movement depending on wearing time (순목에 의한 소프트콘택트렌즈의 순간적인 움직임 : 착용시간의 증가에 따른 움직임의 변화)

  • Park, Sang-Il;Lee, Youn Jin;Lee, Heum-Sook;Park, Mijung
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.12 no.1
    • /
    • pp.1-7
    • /
    • 2007
  • To investigate the momentary movement pattern of soft contact lenses (SCLs) as a function of wearing time, eight types of soft contact lenses were worn by 10 normal subjects and the momentary movements of the SCLs were estimated using a high-speed camera (FASTCAM Ultima 1024). When the momentary movements of the SCLs on the cornea between blinks were compared after 15 minutes of wear, the vertical movements of all eight SCLs were about two times larger than the horizontal movements, but the extent of this difference depended on the kind of SCL. The momentary moving distance also varied with the kind of SCL; lenses A and B, which are daily-wear lenses, moved significantly larger distances than the other SCLs. The momentary movements between blinks decreased significantly after 8 hours of SCL wear. The extent of this decrease differed among the SCLs, with the reduction in horizontal and vertical movement ranging over 24.6~60.0% and 20.4~94.3%, respectively. Lenses A, B and C, which had relatively higher water content, showed the larger movement reduction after SCL wear. These results suggest that wearing an SCL for several hours decreases its movement, which can induce changes in tear flow.

  • PDF

A Method for Recovering Text Regions in Video using Extended Block Matching and Region Compensation (확장적 블록 정합 방법과 영역 보상법을 이용한 비디오 문자 영역 복원 방법)

  • 전병태;배영래
    • Journal of KIISE: Software and Applications
    • /
    • v.29 no.11
    • /
    • pp.767-774
    • /
    • 2002
  • Conventional research on image restoration has focused on restoring images degraded during image formation, storage, and communication, mainly in the signal processing field. Related research on recovering the original image information of caption regions includes a method using the BMA (block matching algorithm). That method suffers from frequent incorrect matches and from propagation of the resulting errors; moreover, it cannot recover the frames between two scene changes when scene changes occur more than twice. In this paper, we propose a method for recovering original images using an EBMA (extended block matching algorithm) and a region compensation method. To use them for original-image recovery, the method extracts a priori knowledge such as information about scene changes, camera motion, and caption regions. It decides the direction of recovery using the extracted caption information (the start and end frames of a caption) and the scene change information. According to the direction of recovery, the recovery is performed in units of character components using the EBMA and the region compensation method. Experimental results show that the EBMA gives good recovery regardless of the speed of moving objects and the complexity of the background in the video, and the region compensation method recovers original images successfully when there is no original image information to refer to.
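
The BMA baseline that the proposed EBMA extends is ordinary block matching with a sum-of-absolute-differences criterion. A minimal sketch of that baseline (block size, search range, and test data are illustrative; the extension itself is not shown):

```python
import numpy as np

def block_match(prev: np.ndarray, cur: np.ndarray, top: int, left: int,
                block: int = 16, search: int = 8):
    """Plain block matching with a sum-of-absolute-differences (SAD) criterion:
    find where the block at (top, left) in `cur` came from in `prev`, within a
    +/- `search` pixel window."""
    target = cur[top:top + block, left:left + block].astype(int)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + block > prev.shape[0] or c + block > prev.shape[1]:
                continue
            sad = np.abs(prev[r:r + block, c:c + block].astype(int) - target).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    prev = rng.integers(0, 255, size=(64, 64)).astype(np.uint8)
    cur = np.roll(prev, shift=(3, -2), axis=(0, 1))    # content moves down 3, left 2
    # Expect (dy, dx) = (-3, 2): the block came from 3 rows up, 2 cols right in prev.
    print(block_match(prev, cur, top=24, left=24))
```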

Design and Implementation of Mobile Vision-based Augmented Galaga using Real Objects (실제 물체를 이용한 모바일 비전 기술 기반의 실감형 갤러그의 설계 및 구현)

  • Park, An-Jin;Yang, Jong-Yeol;Jung, Kee-Chul
    • Journal of Korea Game Society
    • /
    • v.8 no.2
    • /
    • pp.85-96
    • /
    • 2008
  • Recently, research on augmented games as a new game genre has attracted a lot of attention. An augmented game overlays virtual objects onto an augmented reality (AR) environment, allowing game players to interact with the AR environment by manipulating real and virtual objects. However, it is difficult to release existing augmented games to ordinary game players, as the games generally rely on very expensive and inconvenient 'backpack' systems. To solve this problem, several augmented games using camera-equipped mobile devices have been proposed, but they can only be enjoyed at a previously prepared location, since a 'color marker' or 'pattern marker' is used to register the virtual objects with the real environment. Accordingly, this paper introduces an augmented game called Augmented Galaga, based on the traditional, well-known Galaga and executed on mobile devices, so that game players can experience the game without any economic burden. Augmented Galaga uses real objects in real environments and recognizes them using scale-invariant features (SIFT) and Euclidean distance. Virtual aliens appear randomly around specific objects, several specific objects are used to increase interest, and game players attack the virtual aliens by moving the mobile device toward a specific object and clicking a button. As a result, we expect Augmented Galaga to provide an exciting experience for players without any economic burden, based on a game paradigm where the user interacts both with the physical world captured by the mobile camera and with the virtual aliens generated by the mobile device. (A sketch of SIFT matching with Euclidean distance appears after this entry.)

  • PDF
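
The recognition step described above (SIFT features compared by Euclidean distance) can be illustrated with a desktop OpenCV sketch; the paper targets mobile devices, and the file names and thresholds here are placeholders rather than the authors' values:

```python
import cv2

def object_present(reference_path: str, scene_path: str,
                   ratio: float = 0.75, min_matches: int = 10) -> bool:
    """Decide whether a known real object appears in a camera frame by matching
    SIFT descriptors with Euclidean (L2) distance and a ratio test."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    if ref is None or scene is None:
        raise FileNotFoundError("reference or scene image not found")

    sift = cv2.SIFT_create()
    _, ref_desc = sift.detectAndCompute(ref, None)
    _, scene_desc = sift.detectAndCompute(scene, None)
    if ref_desc is None or scene_desc is None:
        return False

    matcher = cv2.BFMatcher(cv2.NORM_L2)            # Euclidean distance on descriptors
    knn = matcher.knnMatch(ref_desc, scene_desc, k=2)
    good = [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good) >= min_matches

if __name__ == "__main__":
    # Placeholder file names; any two photos of the same object will do.
    print(object_present("object_reference.jpg", "camera_frame.jpg"))
```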

Design and Implementation of a Real-Time Lipreading System Using PCA & HMM (PCA와 HMM을 이용한 실시간 립리딩 시스템의 설계 및 구현)

    • Lee, Chi-Geun;Lee, Eun-Suk;Jung, Sung-Tae;Lee, Sang-Seol
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.11
    • /
    • pp.1597-1609
    • /
    • 2004
  • Many lipreading systems have been proposed to compensate for the drop in speech recognition rate in noisy environments. Previous lipreading systems work only under specific conditions, such as artificial lighting and a predefined background color. In this paper, we propose a real-time lipreading system that allows speaker motion and relaxes the restrictions on color and lighting conditions. The proposed system extracts the face and lip regions, along with the essential visual information, in real time from an input video sequence captured with a common PC camera, and recognizes uttered words from this visual information in real time. It uses a hue histogram model to extract the face and lip regions, the mean shift algorithm to track the face of a moving speaker, PCA (principal component analysis) to extract the visual features for training and testing, and an HMM (hidden Markov model) as the recognition algorithm. The experimental results show that our system achieves a recognition rate of 90% for speaker-dependent lipreading and, when combined with audio speech recognition, raises the speech recognition rate to 40~85% depending on the noise level. (A sketch of the PCA feature-extraction step appears after this entry.)

  • PDF
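
The PCA feature-extraction step described above can be sketched as follows; the frame size, component count, and random stand-in data are assumptions, and the per-word HMM that would consume these feature sequences is not shown:

```python
import numpy as np

def fit_pca(lip_vectors: np.ndarray, n_components: int = 10):
    """Fit PCA on flattened lip-region images (rows = frames, cols = pixels).
    Returns the mean vector and the top principal components."""
    mean = lip_vectors.mean(axis=0)
    centered = lip_vectors - mean
    # SVD of the centered data: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(lip_vectors: np.ndarray, mean: np.ndarray, components: np.ndarray):
    """Project frames onto the principal components: one low-dimensional feature
    vector per frame, which would then feed a per-word HMM (not shown here)."""
    return (lip_vectors - mean) @ components.T

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    frames = rng.random((60, 32 * 32))        # stand-in for 60 flattened 32x32 lip crops
    mean, comps = fit_pca(frames, n_components=10)
    feats = project(frames, mean, comps)
    print(feats.shape)                        # (60, 10): observation sequence for an HMM
```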