• Title/Summary/Keyword: 3D Manipulation


Helicopter Pilot Metaphor for 3D Space Navigation and its implementation using a Joystick (3차원 공간 탐색을 위한 헬리콥터 조종사 메타포어와 그 구현)

  • Kim, Young-Kyoung;Jung, Moon-Ryul;Paik, Doowon;Kim, Dong-Hyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.3 no.1
    • /
    • pp.57-67
    • /
    • 1997
  • The navigation of virtual space comes down to the manipulation of the virtual camera. The movement of the virtual camera has 6 degrees of freedom. However, input devices such as mice and joysticks are 2D, so the movement of the camera that corresponds to the input device is a 2D movement at any given moment. Therefore, the 3D movement of the camera must be implemented by combining 2D and 1D movements of the camera. Many virtual space navigation browsers use several navigation modes to solve this problem. But the criteria for distinguishing the different modes are not clear, some of the manipulations in each mode are repeated in other modes, and the kinesthetic correspondence of the input devices is often confusing. Hence the user has difficulty in making correct decisions when navigating the virtual space. To solve this problem, we use a single navigation metaphor in which the different modes are organically integrated. In this paper we propose a helicopter pilot metaphor. Using the helicopter pilot metaphor means that the user navigates the virtual space like the pilot of a helicopter flying in space. We distinguish six 2D movement spaces of the helicopter: (1) movement on the horizontal plane, (2) movement on the vertical plane, (3) pitch and yaw rotations about the current position, (4) roll and pitch rotations about the current position, (5) horizontal and vertical turning, and (6) rotation about the target object. The six 2D movement spaces are visualized and displayed as a sequence of auxiliary windows, and the user can select the desired movement space simply by jumping from one window to another while looking at the displayed 2D movement spaces. The movement of the camera in each movement space is controlled by the usual movements of the joystick.

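The mode-dependent mapping described in this abstract, where the same 2D joystick deflection drives the camera differently in each of the six movement spaces, might be organized roughly as in the sketch below. This is only an illustration of the metaphor, not the paper's implementation: the mode names, the Euler-angle camera state, and the step sizes are assumptions.

```python
import math

# Minimal sketch of a mode-dependent joystick-to-camera mapping, loosely
# following the six movement spaces listed in the abstract above.
class Camera:
    def __init__(self):
        self.x = self.y = self.z = 0.0           # position
        self.yaw = self.pitch = self.roll = 0.0  # orientation (radians)

def apply_joystick(cam, mode, jx, jy, step=0.1, ang=0.02):
    """Apply a 2D joystick deflection (jx, jy) in the selected movement space."""
    if mode == "horizontal_plane":        # (1) translate on the ground plane
        cam.x += jx * step
        cam.z += jy * step
    elif mode == "vertical_plane":        # (2) translate on a vertical plane
        cam.x += jx * step
        cam.y += jy * step
    elif mode == "pitch_yaw":             # (3) rotate about the current position
        cam.yaw += jx * ang
        cam.pitch += jy * ang
    elif mode == "roll_pitch":            # (4) rotate about the current position
        cam.roll += jx * ang
        cam.pitch += jy * ang
    elif mode == "turning":               # (5) turn horizontally/vertically while flying forward
        cam.yaw += jx * ang
        cam.pitch += jy * ang
        cam.x += step * math.sin(cam.yaw)
        cam.z += step * math.cos(cam.yaw)
    elif mode == "orbit_target":          # (6) rotate about a target (here: the origin)
        r = math.hypot(cam.x, cam.z)
        theta = math.atan2(cam.x, cam.z) + jx * ang
        cam.x, cam.z = r * math.sin(theta), r * math.cos(theta)
        cam.y += jy * step
    return cam
```

Jumping between the auxiliary windows in the paper's interface would then correspond to changing `mode`, while the joystick handling itself stays the same.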

Hardware Implementation of Real-Time Blind Watermarking by Substituting Bitplanes of Wavelet DC Coefficients (웨이블릿 DC 계수의 비트평면 치환방법에 의한 실시간 블라인드 워터마킹 및 하드웨어 구현)

  • 서영호;김동욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.3C
    • /
    • pp.398-407
    • /
    • 2004
  • In this paper, a blind watermarking method suitable for video compression using the 2-D discrete wavelet transform is proposed and implemented in hardware using VHDL (VHSIC Hardware Description Language). The goal of the proposed watermarking algorithm is to authenticate manipulation of the watermarked image and to detect the positions of the errors. Considering compressed video, the proposed watermarking scheme is independent of the quantization and can embed or extract the watermark concurrently. We experimentally verified that the lowest-frequency subband (LL4) is not sensitive to changes in the spatial domain, so the LL4 subband was selected as the mark space. Within the mark space of the LL4 subband, a combination of bitplanes offering both minimal image degradation and robustness was chosen as the embedding point. Since the embedding positions are known and the watermark is embedded by substituting bits rather than by modulating the coefficient values, the watermark can be extracted without the original image. Also, for security in case the embedding positions are exposed, we embed a watermark encrypted by a block cipher. The proposed watermarking algorithm shows robustness against general image manipulation and can easily be integrated into an image or video compressor with minimal structural changes. The designed hardware uses 4037 LABs (24%) and 85 ESBs (3%) in an Altera APEX20KC EP20K400CF672C7 FPGA and operates stably at an 82 MHz clock frequency.
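
The core operation described above, substituting a bitplane of the low-frequency wavelet coefficients with encrypted watermark bits and reading it back blindly, could look roughly like the sketch below. The single-level Haar-style subband, the fixed bitplane index, and the XOR key stream are stand-ins for the paper's 4-level wavelet, selected bitplane combination, and block cipher.

```python
import numpy as np

def haar_ll(img):
    """One level of a Haar-like transform: return the LL (low-low) subband."""
    img = img.astype(np.int32)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) // 4

def embed_bitplane(coeffs, wm_bits, plane=2):
    """Substitute bitplane `plane` of the coefficients with the watermark bits."""
    flat = coeffs.flatten().copy()
    n = min(len(flat), len(wm_bits))
    mask = ~(1 << plane)
    flat[:n] = (flat[:n] & mask) | (wm_bits[:n].astype(np.int32) << plane)
    return flat.reshape(coeffs.shape)

def extract_bitplane(coeffs, n_bits, plane=2):
    """Blind extraction: read the watermark bits back from the same bitplane."""
    return (coeffs.flatten()[:n_bits] >> plane) & 1

# Toy usage: random "image" and watermark, XOR-encrypted with a key stream
# (the key stream stands in for the block cipher mentioned in the abstract).
rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64))
watermark = rng.integers(0, 2, 256)
key = rng.integers(0, 2, 256)

ll = haar_ll(image)
marked = embed_bitplane(ll, watermark ^ key)
recovered = extract_bitplane(marked, 256) ^ key
assert np.array_equal(recovered, watermark)
```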

Design and Comparison of Digital Predistorters for High Power Amplifiers (비선형 고전력 증폭기의 디지털 전치 보상기 설계 및 비교)

  • Lim, Sun-Min;Eun, Chang-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.4C
    • /
    • pp.403-413
    • /
    • 2009
  • We compare three predistortion methods for preventing the signal distortion and spectral re-growth caused by the high PAPR (peak-to-average power ratio) of OFDM signals and the non-linearity of high-power amplifiers. The three methods are the pth-order inverse, the indirect learning architecture, and the look-up table. The pth-order inverse and indirect learning architecture methods require less memory because they use a polynomial model with a small number of coefficients; their convergence is also fast owing to the small number of coefficients and the simple computation, which avoids complex-number manipulation by compensating the magnitude and phase separately. The look-up table method is easy to implement because of its simple computation, but has the disadvantage of requiring a large memory. Computer simulation results reveal that the indirect learning architecture shows the best performance, though its gain is less than 1 dB at $BER = 10^{-4}$ for 64-QAM. All three predistorters adapt to amplifier aging and environmental changes, and can be selected according to the implementation requirements.
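
As a rough illustration of the look-up-table approach mentioned in the abstract, the sketch below indexes a table of complex gain corrections by input magnitude and adapts the entries from the error between the desired and observed amplifier output. The table size, adaptation step, and the toy cubic amplifier model are assumptions, not the paper's configuration.

```python
import numpy as np

def amplifier(x):
    """Toy memoryless amplifier with a mild 3rd-order compression (an assumption)."""
    return x - 0.1 * x * np.abs(x) ** 2

class LutPredistorter:
    """Magnitude-indexed LUT of complex gains, adapted with a simple LMS-style update."""
    def __init__(self, size=128, max_amp=1.0, mu=0.1):
        self.size, self.max_amp, self.mu = size, max_amp, mu
        self.table = np.ones(size, dtype=complex)   # start from unity gain

    def _index(self, x):
        i = int(abs(x) / self.max_amp * (self.size - 1))
        return min(i, self.size - 1)

    def predistort(self, x):
        return self.table[self._index(x)] * x

    def adapt(self, x, y):
        """Update the entry used for input x from the output error (desired output = x)."""
        i = self._index(x)
        err = x - y
        self.table[i] += self.mu * err / (x if x != 0 else 1)

# Usage: drive random samples through predistorter + amplifier and adapt the LUT.
pd = LutPredistorter()
rng = np.random.default_rng(1)
for _ in range(20000):
    x = rng.uniform(0.05, 0.9) * np.exp(1j * rng.uniform(0, 2 * np.pi))
    y = amplifier(pd.predistort(x))
    pd.adapt(x, y)
```

A polynomial predistorter (pth-order inverse or indirect learning) would replace the table with a small set of polynomial coefficients, trading memory for a model-fitting step.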

Effects of different energy and rumen undegradable protein levels on dairy cow's production performance at mid-lactation period (에너지 및 반추위 미분해단백질 수준을 달리한 사료급여가 비유중기 유우에 미치는 영향)

  • Park, Su Bum;Lim, Dong Hyun;Park, Seong Min;Kim, Tae Il;Choi, Sun Ho;Kwon, Eung Gi;Seo, Jakyeom;Seo, Seongwon;Ki, Kwang Seok
    • Korean Journal of Agricultural Science
    • /
    • v.40 no.4
    • /
    • pp.333-338
    • /
    • 2013
  • Sources of energy and rumen undegradable protein (RUP) have been used to meet the nutrient requirements of high-producing dairy cows. However, studies manipulating the levels of energy and RUP in diets have mainly been conducted with dairy cows in the early-lactation period. The objective of this study, therefore, was to investigate the effects of different energy and RUP levels on dry matter intake and milk yield in Holstein cows at the mid-lactation period. The basal diet was prepared as a TMR to meet the nutrient requirements of dairy cows at mid-lactation according to the NRC recommendations. Cows in the control group (Con) were fed only the basal diet, while ground corn (0.5 kg/d), heat-treated soybean meal (0.5 kg/d), and their mixture (0.25 kg of each supplement/d) were added to the diets of the treatment groups (T1, T2, and T3, respectively) to modulate the energy and RUP contents of the diets. Addition of an energy or RUP source to the basal TMR did not affect total DMI, while TMR intake tended to be higher in Con than in T3. Cows fed the T3 diet tended to show higher milk yield and MUN content than those of Con. Cows in T2 as well as T3 had lower ADG (P<0.05) than those in Con. We conclude that the addition of an RUP source to the diets of dairy cows in the mid-lactation period may decrease DMI and ADG.

Multi-view Video Coding using View Interpolation (영상 보간을 이용한 다시점 비디오 부호화 방법)

  • Lee, Cheon;Oh, Kwan-Jung;Ho, Yo-Sung
    • Journal of Broadcast Engineering
    • /
    • v.12 no.2
    • /
    • pp.128-136
    • /
    • 2007
  • Since a multi-view video is a set of video sequences captured by multiple arrayed cameras for the same three-dimensional scene, it can provide images from multiple viewpoints using geometric manipulation and intermediate view generation. Although multi-view video allows us to experience a more realistic feeling with a wide range of images, the amount of data to be processed increases in proportion to the number of cameras; therefore, we need to develop efficient coding methods. One possible approach to multi-view video coding is to generate an intermediate image using a view interpolation method and to use the interpolated image as an additional reference frame. The previous view interpolation method for multi-view video coding employs fixed-size block matching over a pre-determined disparity search range; however, if the disparity search range is not appropriate, disparity errors may occur. In this paper, we propose an efficient view interpolation method using initial disparity estimation, variable block-based estimation, and pixel-level estimation with adjusted search ranges. In addition, we propose a multi-view video coding method based on H.264/AVC that exploits the intermediate image. With the proposed method, the intermediate images are improved by about 1~4 dB compared to the previous view interpolation method, and the coding efficiency is improved by about 0.5 dB compared to the reference model.
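
The building blocks referred to above, block matching over a disparity search range followed by synthesis of an image at the halfway viewpoint, can be sketched as below. Fixed block size, SAD matching, and purely horizontal disparity are simplifying assumptions; the paper itself uses initial, variable block-based, and pixel-level estimation with adjusted search ranges.

```python
import numpy as np

def block_disparity(left, right, block=8, max_disp=16):
    """Per-block horizontal disparity from the left to the right image via SAD matching."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(int)
            best, best_d = None, 0
            for d in range(0, min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block].astype(int)
                sad = np.abs(ref - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp

def interpolate_midview(left, disp, block=8):
    """Warp each block of the left image by half its disparity to approximate the middle view."""
    mid = np.zeros_like(left)
    for by in range(disp.shape[0]):
        for bx in range(disp.shape[1]):
            y, x = by * block, bx * block
            shift = disp[by, bx] // 2
            mid[y:y + block, x - shift:x - shift + block] = left[y:y + block, x:x + block]
    return mid
```

In the coding scheme described in the abstract, the interpolated image would then serve as an additional reference frame for H.264/AVC prediction.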

Effective Utilization of Domain Knowledge for Relational Reinforcement Learning (관계형 강화 학습을 위한 도메인 지식의 효과적인 활용)

  • Kang, MinKyo;Kim, InCheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.3
    • /
    • pp.141-148
    • /
    • 2022
  • Recently, reinforcement learning combined with deep neural network technology has achieved remarkable success in various fields, such as board games like Go and chess, computer games like Atari and StarCraft, and robotic object manipulation tasks. However, such deep reinforcement learning describes states, actions, and policies in vector representations. Therefore, existing deep reinforcement learning has limitations in the generality and interpretability of the learned policy, and it is difficult to effectively incorporate domain knowledge into policy learning. On the other hand, dNL-RRL, a new relational reinforcement learning framework proposed to solve these problems, uses vector representations for sensor input data and lower-level motion control, as in existing deep reinforcement learning, but for states, actions, and learned policies it uses a relational representation with logic predicates and rules. In this paper, we present dNL-RRL-based policy learning for transport mobile robots in a manufacturing environment. In particular, this study proposes an effective method for utilizing the prior domain knowledge of human experts to improve the efficiency of relational reinforcement learning. Through various experiments, we demonstrate the performance improvement of relational reinforcement learning obtained by using domain knowledge as proposed in this paper.
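
To make the idea of a relational, predicate-based policy and of injecting expert domain knowledge more concrete, the toy sketch below encodes a transport-robot state as ground facts and expresses prior knowledge as if-then rules over logic predicates. The predicate and rule names are invented for illustration and are not taken from dNL-RRL, where such rules would bias or initialize a differentiable rule-learning policy rather than act as a hard-coded controller.

```python
# Illustrative relational state: a set of ground facts (predicate, arguments).
state = {
    ("at", "robot", "loading_dock"),
    ("carrying", "robot", "part_7"),
    ("destination", "part_7", "assembly_cell"),
}

# Expert domain knowledge as rules: body predicates -> preferred action.
rules = [
    ({("carrying", "robot", "X"), ("at", "robot", "Y"), ("destination", "X", "Y")},
     ("unload", "X")),
    ({("carrying", "robot", "X"), ("destination", "X", "Y")},
     ("move_to", "Y")),
]

def match(rule_body, facts):
    """Try to bind variables (upper-case strings) so every body atom is a fact."""
    def unify(atoms, binding):
        if not atoms:
            return binding
        pred, *args = atoms[0]
        for fact in facts:
            if fact[0] != pred or len(fact) != len(atoms[0]):
                continue
            b = dict(binding)
            ok = True
            for a, f in zip(args, fact[1:]):
                if a.isupper():                 # variable: bind or check consistency
                    if b.get(a, f) != f:
                        ok = False
                        break
                    b[a] = f
                elif a != f:                    # constant must match exactly
                    ok = False
                    break
            if ok:
                result = unify(atoms[1:], b)
                if result is not None:
                    return result
        return None
    return unify(list(rule_body), {})

def prior_action(state):
    """Return the first rule-suggested action, with variables substituted."""
    for body, (act, arg) in rules:
        binding = match(body, state)
        if binding is not None:
            return (act, binding.get(arg, arg))
    return None

print(prior_action(state))   # ('move_to', 'assembly_cell'): not yet at the destination
```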

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 1) Theoretical Principle

  • Lari, Zahra;Habib, Ayman;Mazaheri, Mehdi;Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.3
    • /
    • pp.191-204
    • /
    • 2014
  • In recent years, multi-camera systems have been recognized as an affordable alternative for the collection of 3D spatial data from physical surfaces. The collected data can be applied to different mapping (e.g., mobile mapping and mapping of inaccessible locations) or metrology applications (e.g., industrial, biomedical, and architectural). In order to fully exploit the potential accuracy of these systems and ensure successful manipulation of the involved cameras, a careful system calibration should be performed prior to the data collection procedure. The calibration of a multi-camera system is accomplished when the individual cameras are calibrated and the geometric relationships among the different system components are defined. In this paper, a new single-step approach is introduced for the calibration of a multi-camera system (i.e., individual camera calibration and estimation of the lever-arm and boresight angles among the system components). In this approach, one of the cameras is set as the reference camera and the system mounting parameters are defined relative to that reference camera. The proposed approach is easy to implement and computationally efficient. The major advantage of this method, when compared to available multi-camera system calibration approaches, is the flexibility of being applied to either directly or indirectly geo-referenced multi-camera systems. The feasibility of the proposed approach is verified through experimental results using real data collected by a newly-developed indirectly geo-referenced multi-camera system.
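
The mounting parameters mentioned above, the lever-arm offset and boresight rotation of a camera relative to the chosen reference camera, can be computed from two camera poses as sketched below. The pose convention (rotation from the camera frame to a common frame, camera centre expressed in that frame) is an assumption for illustration; in the paper's single-step approach these parameters are estimated directly within the calibration adjustment rather than derived afterwards.

```python
import numpy as np

def relative_orientation(R_ref, C_ref, R_cam, C_cam):
    """
    Mounting parameters of a camera with respect to the reference camera.
    R_*: 3x3 rotation from the camera frame to a common (e.g., mapping) frame.
    C_*: 3-vector position of the camera centre in that common frame.
    Returns (boresight rotation, lever-arm) expressed in the reference camera frame.
    """
    R_rel = R_ref.T @ R_cam            # boresight: rotation from camera to reference frame
    t_rel = R_ref.T @ (C_cam - C_ref)  # lever-arm: offset in the reference camera frame
    return R_rel, t_rel

# Toy check: reference camera at the origin, second camera offset 0.5 m along x
# and rotated 10 degrees about the vertical axis.
a = np.deg2rad(10.0)
R_ref = np.eye(3)
R_cam = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
R_rel, t_rel = relative_orientation(R_ref, np.zeros(3), R_cam, np.array([0.5, 0.0, 0.0]))
```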

Collaborative Authoring System using 3D Spatio-Temporal Space (삼차원 시.공간을 이용하는 프레젠테이션 공동저작 시스템)

  • 이도형;성미영
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.9 no.6
    • /
    • pp.623-634
    • /
    • 2003
  • In this paper, we propose a collaborative multimedia authoring system. Our authoring system represents a multimedia presentation in a 3D coordinate system: one axis represents the traditional timeline information (T-zone), and the other two axes represent spatial coordinates (XY-zone). The system represents visual media objects as 3D parallelepipeds and audio media objects as cylinders. This interface allows simultaneous authoring and manipulation of both the temporal and the spatial aspects of a presentation. Using our system, users can design multimedia presentations collaboratively in the unified spatio-temporal space while freely traversing the spatial and temporal domains without changing the context of authoring. In addition, we suggest an efficient concurrency-control mechanism for the shared objects generated by our collaborative authoring system. The mechanism is mainly based on user awareness, multiple versions, and access permissions for shared objects. Our concurrency control mechanism is designed to keep data consistent by minimizing the collisions caused by delays or failures of network communication, and to allow maximum responsiveness for users through optimistic concurrency control. The mechanism also maximizes responsiveness by refining the locking granularity and applying different concurrency control mechanisms to each level of granularity.
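
A minimal sketch of how a presentation object might be described in such a unified spatio-temporal space is given below: a visual object carries both a screen rectangle (XY-zone) and a time interval (T-zone), while an audio object carries only the time interval. The field names, the version counter, and the lock field are illustrative assumptions, not the paper's data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MediaObject:
    """A presentation object placed in the unified 3D spatio-temporal space."""
    name: str
    start: float                     # T-zone: start time (seconds)
    end: float                       # T-zone: end time (seconds)
    kind: str = "visual"             # "visual" -> parallelepiped, "audio" -> cylinder
    x: float = 0.0                   # XY-zone: screen position (unused for audio)
    y: float = 0.0
    width: float = 0.0
    height: float = 0.0
    version: int = 1                 # multiple versions kept for optimistic concurrency
    locked_by: Optional[str] = None  # finer-grained locks could sit at attribute level

    def overlaps_in_time(self, other: "MediaObject") -> bool:
        return self.start < other.end and other.start < self.end

# Usage: a video clip and its narration share the timeline; only one has a screen area.
clip = MediaObject("intro_clip", 0.0, 12.0, "visual", x=10, y=10, width=320, height=240)
voice = MediaObject("narration", 2.0, 12.0, "audio")
print(clip.overlaps_in_time(voice))   # True
```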

Cubical User Interface for Toy Block Composition in Augmented Reality (증강 현실에서의 장난감 블록 결합을 위한 큐브형 사용자 인터페이스)

  • Lee, Hyeong-Mook;Lee, Young-Ho;Woo, Woon-Tack
    • 한국HCI학회:학술대회논문집
    • /
    • 2009.02a
    • /
    • pp.363-367
    • /
    • 2009
  • We propose a Cubical User Interface (CUI) for toy block composition in Augmented Reality. Creating new objects by composing virtual objects makes it possible to construct various AR contents effectively. However, existing GUI methods require learning time or lack intuitiveness between the user's actions and the offered interface, and existing AR interfaces have mainly supported one-handed operation and have not considered the properties of composition well. Therefore, the CUI provides a tangible cube as the manipulation tool for virtual toy block composition in AR. The tangible cube, to which multi-markers, magnets, and buttons are attached, supports free rotation, combination, and button input. We also propose two kinds of two-handed composing interactions based on the CUI: the first is the Screw Driving (SD) method, which allows free 3D positioning, and the second is the Block Assembly (BA) method, which provides visual guidance and is fast and intuitive. We expect that the proposed interface can be applied as an authoring system for contents such as education, entertainment, and Digilogbook.


Motion Capture using both Human Structural Characteristic and Inverse Kinematics (인체의 구조적 특성과 역운동학을 이용한 모션 캡처)

  • Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo;Lee, Chil-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.2
    • /
    • pp.20-32
    • /
    • 2010
  • Previous hardware devices for capturing human motion have many limitations: expensive equipment, complexity of manipulation, and constraints on human motion. In order to overcome these problems, real-time motion capture algorithms based on computer vision have been actively proposed. This paper presents an efficient method for analyzing multiple-view images for real-time motion capture. First, we detect the skin-color regions of the human body, and then correct the image coordinates of these regions using camera calibration and epipolar geometry. Finally, we track the human body parts and capture human motion using a Kalman filter. Experimental results show that the proposed algorithm can estimate precise positions of the human body.
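
The final step of the pipeline above, tracking a body part with a Kalman filter, can be sketched as a constant-velocity filter on the 2D centroid of a detected skin-color region; the state layout and noise levels below are assumptions chosen for illustration.

```python
import numpy as np

class ConstantVelocityKalman:
    """Track a 2D point with state [x, y, vx, vy] under a constant-velocity model."""
    def __init__(self, dt=1.0 / 30, q=1e-3, r=2.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)            # process noise
        self.R = r * np.eye(2)            # measurement noise (pixels)
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """z: measured (x, y) centre of the detected skin-color region."""
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Usage: feed the per-frame centroid of a detected hand/face region;
# a missed detection simply skips the update and keeps the prediction.
kf = ConstantVelocityKalman()
for z in [(100, 120), (103, 121), None]:
    kf.predict()
    if z is not None:
        kf.update(z)
```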