• Title/Summary/Keyword: Virtual Electronic Devices


A Case Study on Digital Interactive Training Content <Tamagotchi> and <Peridot>

  • DongHee Choi;Jeanhun Chung
    • International journal of advanced smart convergence
    • /
    • v.12 no.4
    • /
    • pp.306-313
    • /
    • 2023
  • Having a pet is one of the ways people in modern society relieve stress and find peace of mind. The object of pet companionship has now moved beyond a real 'living entity' to a stage where an animal's upbringing can be enjoyed in a virtual space as programmed digital content. This paper studies detailed elements such as character design, interaction, and realism in 'Tamagotchi (1996)', which can be regarded as the beginning of digital training content, and 'Peridot (2023)', a recently introduced augmented-reality-based training content. What the two share is that both are training content running on portable electronic devices. However, while the environment in which the Tamagotchi character exists was a simple black-and-white screen, the environment in which the Peridot character operates is the real world projected onto the screen through augmented reality. Interaction with the Tamagotchi character was limited to pressing buttons, whereas in Peridot users can pet the character by touching the smartphone screen. In addition, through object and step recognition, the sense of reality was confirmed to have become more convincing, with toys thrown by the user on the screen bouncing off real objects. We hope that this study will serve as a useful reference for the development of digital training content in the near future.
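As a rough illustration of the touch-based interaction described above, the sketch below maps a screen tap to a "petting" response; the class and method names are hypothetical and are not taken from either app.

```python
# Hypothetical sketch of a Peridot-style touch interaction; names are illustrative.
from dataclasses import dataclass

@dataclass
class ARPet:
    name: str
    affection: int = 0

    def pet(self) -> None:
        """Increase affection when the user strokes the pet on screen."""
        self.affection += 1
        print(f"{self.name} reacts happily (affection={self.affection})")

def on_touch(pet: ARPet, touch_xy, pet_screen_bbox) -> None:
    """If the tap lands inside the pet's projected bounding box, pet it."""
    x, y = touch_xy
    x0, y0, x1, y1 = pet_screen_bbox
    if x0 <= x <= x1 and y0 <= y <= y1:
        pet.pet()

dot = ARPet("Dot")
on_touch(dot, touch_xy=(120, 200), pet_screen_bbox=(100, 150, 300, 400))
```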

Fabrication of Flexible Micro LED for Beauty/Biomedical Applications (미용/의료용 유연 마이크로 발광 다이오드 디바이스 제작 공정)

  • Jae Hee Lee
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers
    • /
    • v.36 no.6
    • /
    • pp.563-569
    • /
    • 2023
  • Micro light-emitting diodes (LEDs), with a chip size of 100 micrometers or less, have attracted significant attention in flexible displays, augmented reality/virtual reality (AR/VR), and biomedical applications as next-generation light sources due to their outstanding electrical, optical, and mechanical performance. In biomedical devices, it is crucial to transfer tiny micro LED chips onto the desired flexible substrates with low placement error, high speed, and high yield for practical applications on various parts of the human body, including the face and organs. This paper introduces a fabrication process for flexible micro LED devices and proposes micro LED transfer techniques for cosmetic and medical applications. Flexible micro LED technology holds promise for treating skin disorders, cancers, and neurological diseases.

Image Analysis Module for AR-based Navigation Information Display (증강현실 기반의 항행정보 가시화를 위한 영상해석 모듈)

  • Lee, Jung-Min;Lee, Kyung-Ho;Kim, Dae-Seok
    • Journal of Ocean Engineering and Technology
    • /
    • v.27 no.3
    • /
    • pp.22-28
    • /
    • 2013
  • This paper suggests a navigation information display system that is based on augmented reality technology. A navigator always has to confirm the information from marine electronic navigation devices and then compare it with the view of targets outside the windows. This "head down" posture causes discomfort and sometimes leads to near-accidents such as collisions or missed objects, because he or she cannot keep an eye on the front view of the windows. Augmented reality can display both virtual and real information in a single display. Therefore, we attempted to adapt AR technology to assist navigators. Analyzing the outside view of the bridge window requires various computer image processing techniques, because the sea surface contains many noise sources that disturb object detection, such as waves, wakes, and light reflections. In this study, we investigated an analysis module to extract navigational information from images captured by a CCTV camera, and we validated our prototype.
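As a hedged illustration of the kind of preprocessing such a module might perform (not the authors' actual pipeline), the following OpenCV sketch suppresses high-frequency sea-surface noise and extracts candidate target regions from a frame:

```python
# Minimal sketch: suppress sea-surface noise and extract candidate target
# contours from a CCTV frame using OpenCV. Thresholds are illustrative.
import cv2

def candidate_targets(frame_bgr, min_area=500):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Smooth out high-frequency noise from waves, wakes, and reflections.
    blurred = cv2.GaussianBlur(gray, (9, 9), 0)
    edges = cv2.Canny(blurred, 50, 150)
    # Close small gaps so object outlines form connected contours.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only regions large enough to be vessels or buoys, not wave crests.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```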

Virtual Lecture for Digital Logic Circuit Using Flash (플래쉬를 이용한 디지털 논리회로 교육 콘텐츠)

  • Lim Dong-Kyun;Cho Tae-Kyung;Oh Won-Geun
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.4
    • /
    • pp.180-187
    • /
    • 2005
  • In this paper, we developed an online lecture for digital logic circuits, a basic course in electrical/electronic education. Because of the importance of laboratory experience in this course, and to reflect industrial demands, we selected the most effective experimental examples for each chapter and inserted instructions on the basic usage of OrCAD and on digital clock design. Moreover, we developed a cyber lab in which students design their own circuits using Flash animation. Two features of this cyber lab are realistic graphics of devices and breadboards, which improve the sense of reality, and patented new IC chip objects for easy experiments, which help students understand digital logic easily.
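For illustration only, the following Python sketch models the kind of behavior a cyber-lab "IC chip object" has to reproduce, building a half adder from NAND gates on a virtual breadboard; it is not the paper's Flash implementation.

```python
# Illustrative gate-level simulation: everything built from 2-input NAND gates,
# as a student would wire them on the virtual breadboard.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

def and_(a: int, b: int) -> int:
    return nand(nand(a, b), nand(a, b))

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for one-bit inputs a and b."""
    return xor(a, b), and_(a, b)

# Truth-table check, the virtual counterpart of probing the breadboard outputs.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```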


Gesture Recognition by Analyzing a Trajectory on Spatio-Temporal Space (시공간상의 궤적 분석에 의한 제스쳐 인식)

  • 민병우;윤호섭;소정;에지마 도시야끼
    • Journal of KIISE:Software and Applications
    • /
    • v.26 no.1
    • /
    • pp.157-157
    • /
    • 1999
  • Research on gesture recognition has become a very interesting topic in the computer vision area. Gesture recognition from visual images has a number of potential applications, such as HCI (Human-Computer Interaction), VR (Virtual Reality), and machine vision. To overcome the technical barriers in visual processing, conventional approaches have employed cumbersome devices such as data gloves or color-marked gloves. In this research, we capture gesture images without using external devices and generate a gesture trajectory composed of point-tokens. The trajectory is spotted using phase-based velocity constraints and recognized using a discrete left-right HMM. Input vectors to the HMM are obtained by applying the LBG clustering algorithm in a polar-coordinate space, to which the point-tokens in Cartesian space are converted. The gesture vocabulary is composed of twenty-two dynamic hand gestures for editing drawing elements. In our experiment, one hundred data samples per gesture were collected from twenty persons; fifty were used for training and the other fifty for the recognition experiment. The results show a recognition rate of about 95% and suggest that they can be applied to several potential systems operated by gestures. The developed system runs in real time for editing basic graphic primitives in a hardware environment of a Pentium Pro (200 MHz), a Matrox Meteor graphics board, and a CCD camera, with a Windows 95 and Visual C++ software environment.
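A minimal sketch of the front end described above, assuming plain NumPy and a fixed codebook standing in for the LBG-trained one: successive point-tokens are converted to polar motion vectors and quantized into the discrete symbols a left-right HMM would consume.

```python
# Sketch of the feature pipeline: Cartesian point-tokens -> polar motion
# vectors -> nearest-codeword symbols (the HMM observation sequence).
import numpy as np

def trajectory_to_polar(points):
    """Convert successive (x, y) point-tokens to (magnitude, angle) vectors."""
    diffs = np.diff(np.asarray(points, dtype=float), axis=0)
    mag = np.linalg.norm(diffs, axis=1)
    ang = np.arctan2(diffs[:, 1], diffs[:, 0])
    return np.column_stack([mag, ang])

def quantize(features, codebook):
    """Map each polar feature to the index of its nearest codebook vector."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy example: a short rightward-then-upward stroke with an 8-symbol codebook.
points = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
codebook = np.column_stack([np.ones(8), np.linspace(-np.pi, np.pi, 8)])
symbols = quantize(trajectory_to_polar(points), codebook)
print(symbols)  # discrete observation sequence for the HMM
```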

Real-time and Power Hardware-in-the-loop Simulation of PEM Fuel Cell Stack System

  • Jung, Jee-Hoon
    • Journal of Power Electronics
    • /
    • v.11 no.2
    • /
    • pp.202-210
    • /
    • 2011
  • The polymer electrolyte membrane (PEM) fuel cell is one of the popular renewable energy sources and is widely used in commercial medium-power areas, from portable electronic devices to electric vehicles. In addition, the increased integration of the PEM fuel cell with power electronics, dynamic loads, and control systems requires accurate electrical models and simulation methods to emulate their electrical behaviors. Advancements in parallel computation techniques, various real-time simulation tools, and smart power hardware have allowed prototypes of novel apparatus to be investigated in a virtual system under a wide range of realistic conditions repeatedly, safely, and economically. This paper presents optimized model constructions for a fuel cell stack system on a real-time simulator, from the viewpoints of improving dynamic model accuracy and boosting computation speed. In addition, several considerations for a power hardware-in-the-loop (PHIL) simulation are provided to electrically emulate the PEM fuel cell stack system with power facilities. The effectiveness of the proposed PHIL simulation method, developed on Opal-RT's RT-LAB Matlab/Simulink-based real-time engineering simulator and a programmable power supply, is verified using experimental results of the proposed PHIL simulation system with a Ballard Nexa fuel cell stack.
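As a hedged sketch of the kind of model a PHIL setup emulates (a generic static polarization curve, not the paper's dynamic real-time model; all parameter values are illustrative), the stack voltage could be computed each time step and sent as a setpoint to the programmable power supply:

```python
# Generic static PEM fuel cell polarization model; parameters are illustrative.
import math

def cell_voltage(i, E_oc=1.0, A=0.05, i0=1e-3, r_ohm=0.15, m=3e-5, n=8.0):
    """Single-cell voltage [V] at current density i [A/cm^2]:
    open-circuit voltage minus activation, ohmic, and concentration losses."""
    i = max(i, i0)                          # clamp below the exchange current density
    activation = A * math.log(i / i0)
    ohmic = r_ohm * i
    concentration = m * math.exp(n * i)
    return E_oc - activation - ohmic - concentration

def stack_voltage(i, n_cells=47):
    """Stack voltage for an n_cells-cell stack (47 is illustrative)."""
    return n_cells * cell_voltage(i)

# In a PHIL setup, this value would be sent each time step as the setpoint
# for the programmable power supply emulating the stack.
print(round(stack_voltage(0.5), 2))
```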

Computer Image Processing for AR Conceptional Display 3D Navigational Information (증강현실 개념의 항행정보 가시화를 위한 영상처리 기술)

  • Lee, Jung-Min;Lee, Kyung-Ho;Kim, Dae-Soek;Nam, Byeong-Wook
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2014.10a
    • /
    • pp.245-246
    • /
    • 2014
  • This paper suggests a navigation information display system based on augmented reality technology, focusing in particular on image analysis. A navigator always has to confirm the information from marine electronic navigation devices and then compare it with the view of targets outside the windows. This 'head down' posture is uncomfortable and sometimes causes near-accidents such as collisions or missed objects, because the navigator cannot keep an eye on the front view of the windows. Augmented reality can display both virtual and real information in a single display. Therefore, we attempted to adapt AR technology to assist navigators, and we had already studied and developed an image pre-processing module in previous research. To analyze the outside view of the bridge window, we extracted navigational information from camera images using image processing. This paper mainly describes recognizing ship features with Haar-like features and filtering the region of interest using AIS data, both of which improve the accuracy of the image analysis.
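A minimal sketch of the described combination, assuming a hypothetical pre-trained cascade file ("ship_cascade.xml") and AIS-derived screen-space regions of interest, neither of which comes from the paper:

```python
# Haar-cascade ship detection filtered by AIS-predicted regions of interest.
import cv2

def detect_ships(frame_bgr, ais_rois, cascade_path="ship_cascade.xml"):
    """ais_rois: list of (x, y, w, h) boxes projected from AIS targets."""
    cascade = cv2.CascadeClassifier(cascade_path)   # hypothetical trained cascade
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

    def inside_any_roi(box):
        x, y, w, h = box
        cx, cy = x + w / 2, y + h / 2
        return any(rx <= cx <= rx + rw and ry <= cy <= ry + rh
                   for rx, ry, rw, rh in ais_rois)

    # Keep only detections that fall inside an AIS-predicted region of interest.
    return [tuple(b) for b in detections if inside_any_roi(b)]
```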


Point Cloud Video Codec using 3D DCT based Motion Estimation and Motion Compensation (3D DCT를 활용한 포인트 클라우드의 움직임 예측 및 보상 기법)

  • Lee, Minseok;Kim, Boyeun;Yoon, Sangeun;Hwang, Yonghae;Kim, Junsik;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.680-691
    • /
    • 2021
  • Due to recent developments in attaining 3D content using devices such as 3D scanners, the diversity of content used in AR (Augmented Reality)/VR (Virtual Reality) fields is increasing significantly. There are several ways to represent 3D data, and using point clouds is one of them. A point cloud is a cluster of points and has the advantage of capturing actual 3D data with high precision. However, expressing 3D content requires much more data than 2D images. The amount of data needed to represent dynamic 3D point cloud objects consisting of multiple frames is especially large, which is why an efficient compression technology for this kind of data must be developed. In this paper, a motion estimation and compensation method for dynamic point cloud objects using the 3D DCT is proposed. This allows the 3D video frames to be coded as I frames and P frames, which ensures a higher compression ratio. We then confirm the compression efficiency of the proposed technology by comparing it with an anchor technology, an intra-frame-based compression method, and 2D-DCT-based V-PCC.
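To make the general idea concrete (an assumption-laden sketch, not the authors' codec), the following NumPy/SciPy example performs block-matching motion estimation between two voxelized frames and applies a 3D DCT to the prediction residual:

```python
# Block-matching motion estimation on voxelized occupancy volumes, followed by
# a 3D DCT of the residual (the P-frame idea in miniature).
import numpy as np
from scipy.fft import dctn

def best_motion_vector(ref, cur_block, origin, search=2):
    """Exhaustive search for the offset in `ref` best matching `cur_block`."""
    bz, by, bx = cur_block.shape
    oz, oy, ox = origin
    best_err, best_mv = None, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                z, y, x = oz + dz, oy + dy, ox + dx
                if z < 0 or y < 0 or x < 0:
                    continue
                cand = ref[z:z + bz, y:y + by, x:x + bx]
                if cand.shape != cur_block.shape:
                    continue
                err = np.sum((cand.astype(int) - cur_block.astype(int)) ** 2)
                if best_err is None or err < best_err:
                    best_err, best_mv = err, (dz, dy, dx)
    return best_mv

# Toy 16^3 occupancy volumes: a reference (I) frame and a current (P) frame
# containing the same small object shifted by one voxel.
ref = np.zeros((16, 16, 16), dtype=np.uint8)
cur = np.zeros_like(ref)
ref[4:8, 4:8, 4:8] = 1
cur[5:9, 4:8, 4:8] = 1

cur_block = cur[4:12, 4:12, 4:12]
dz, dy, dx = best_motion_vector(ref, cur_block, origin=(4, 4, 4))
pred = ref[4 + dz:12 + dz, 4 + dy:12 + dy, 4 + dx:12 + dx]
residual = cur_block.astype(int) - pred.astype(int)
coeffs = dctn(residual.astype(float), norm="ortho")   # 3D DCT of the residual
print((dz, dy, dx), np.abs(coeffs).max())              # perfect match -> ~0 energy
```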

2D Interpolation of 3D Points using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트의 2차원 보간 방안)

  • Hwang, Yonghae;Kim, Junsik;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.692-703
    • /
    • 2021
  • Recently, with the development of computer graphics technology, research on expressing real objects as more realistic virtual graphics has been actively conducted. A point cloud represents a 3D object with numerous points, each carrying 3D spatial coordinates and color information, and point clouds require huge data storage and high-performance computing devices to provide various services. Video-based Point Cloud Compression (V-PCC) technology, currently being studied by the international standards organization MPEG, is a projection-based method that projects a point cloud onto 2D planes and then compresses them using 2D video codecs. V-PCC compresses point cloud objects using 2D images such as the occupancy map, geometry image, and attribute image, together with auxiliary information describing the relationship between the 2D plane and 3D space. When increasing the density of a point cloud or expanding an object, 3D calculations are generally used, but they are complicated, take a lot of time, and make it difficult to determine the correct location of a new point. This paper proposes a method to generate additional points at more accurate locations with less computation by applying 2D interpolation to the images onto which the point cloud is projected in the V-PCC technology.
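As a hedged sketch of the core idea (not the V-PCC reference software), the following example fills gaps in a projected geometry (depth) image by 2D interpolation between occupied neighbors, marking the filled pixels as new points:

```python
# Densify a projected geometry image by 2D interpolation instead of 3D math.
import numpy as np

def interpolate_geometry(depth, occupancy):
    """Fill each unoccupied pixel whose horizontal neighbors are both occupied
    with the average of their depth values, and mark it as a new point."""
    out_depth = depth.astype(float).copy()
    out_occ = occupancy.copy()
    h, w = depth.shape
    for y in range(h):
        for x in range(1, w - 1):
            if not occupancy[y, x] and occupancy[y, x - 1] and occupancy[y, x + 1]:
                out_depth[y, x] = 0.5 * (depth[y, x - 1] + depth[y, x + 1])
                out_occ[y, x] = True
    return out_depth, out_occ

# Toy 1x5 patch with a gap between two projected points.
depth = np.array([[10, 12, 0, 16, 18]], dtype=float)
occ = np.array([[True, True, False, True, True]])
d2, o2 = interpolate_geometry(depth, occ)
print(d2, o2)   # the gap is filled with depth 14 and marked occupied
```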

Speaker Adapted Real-time Dialogue Speech Recognition Considering Korean Vocal Sound System (한국어 음운체계를 고려한 화자적응 실시간 단모음인식에 관한 연구)

  • Hwang, Seon-Min;Yun, Han-Kyung;Song, Bok-Hee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.6 no.4
    • /
    • pp.201-207
    • /
    • 2013
  • Voice recognition technology has been developed and actively applied to various information devices such as smartphones and car navigation systems, but the basic research techniques related to speech recognition are based on results for English. Lip-sync production generally requires tedious hand work by animators, which seriously affects the animation production cost and the development period needed to obtain high-quality lip animation. In this research, a real-time automatic lip-sync algorithm for virtual characters in digital content is studied, taking the Korean vocal sound system into account. The suggested algorithm contributes to producing natural lip animation at a lower production cost and with a shorter development period.
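Purely as an illustration of the intended use (the mapping below is an assumption, not the paper's model), recognized Korean simple vowels could drive a character's mouth shapes in real time like this:

```python
# Hypothetical vowel-to-mouth-shape table for real-time lip sync; the shape
# labels are placeholders for whatever the animation rig exposes.
KOREAN_VOWEL_TO_MOUTH = {
    "ㅏ": "open_wide",      # /a/: jaw open, lips unrounded
    "ㅣ": "spread",         # /i/: lips spread
    "ㅜ": "rounded_small",  # /u/: lips rounded and protruded
    "ㅗ": "rounded_open",   # /o/: lips rounded, jaw slightly open
    "ㅡ": "neutral_flat",   # /ɯ/: lips unrounded, nearly closed
    "ㅔ": "half_open",      # /e/: mid open, lips spread
    "ㅓ": "half_open_back", # /ʌ/: mid open, lips unrounded
}

def mouth_shapes(recognized_vowels):
    """Map a recognized vowel sequence to mouth-shape keys for the character."""
    return [KOREAN_VOWEL_TO_MOUTH.get(v, "neutral_flat") for v in recognized_vowels]

print(mouth_shapes(["ㅏ", "ㅣ", "ㅗ"]))  # unknown symbols fall back to neutral
```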