• Title/Summary/Keyword: 3D Environment Recognition


A Case Study on Application of Realistic Content to Space Design (실감형콘텐츠의 공간디자인 적용사례연구)

  • Kang, Jae-Shin
    • Journal of Digital Convergence
    • /
    • v.15 no.6
    • /
    • pp.369-376
    • /
    • 2017
  • In a digital multimedia environment offering diverse experiences and communication, we live in an age in which what we imagine can be experienced as reality. Remarkable ICT-based technologies such as 3DTV, UHD TV, and holograms have been attracting attention as next-generation video service technologies. Combined with space design, these media can offer us surprising and diverse experiences. In addition, there is now demand for more differentiated contents using recognition technology for the five human senses. We analyzed the application of realistic contents to space design. As a result, we concluded that creative production that can deliver more fun and convenience will be an important issue.

FTFM: An Object Linkage Model for Virtual Reality (가상현실을 위한 객체 연결 모델)

  • Ju, U-Seok;Choe, Seong-Un;Park, Gyeong-Hui;Lee, Hui-Seung
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.1
    • /
    • pp.95-106
    • /
    • 1996
  • The most fundamental difference between general three-dimensional computer graphics technology and virtual reality technology lies in the degree of realism as we feel it, and thus virtual reality methods rely heavily on tools such as data gloves and 3D auditory systems to enhance human perception and recognition. Although these tools serve this purpose, a more essential ingredient is still missing. This paper provides further realism by modeling active interactions between the objects inside scenes. For this purpose, it proposes and implements a field model in which the virtual reality space is treated as a physical field defined on the characteristic radii of stimulus and sense of each individual object. In the field model, the rule of cause and effect, an essential feature of realism, can be interpreted simply as an energy exchange between objects; consequently, variation of the radius information together with behavioral logic alone can build a virtual environment in which each object reacts to other objects actively and controllably.

  • PDF
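The field model above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the class and function names and the energy-transfer rule are hypothetical, standing in for the paper's radius-of-stimulus/radius-of-sense formulation:

```python
import math
from dataclasses import dataclass

@dataclass
class FieldObject:
    # Hypothetical representation: each object carries a characteristic
    # radius of stimulus and a radius of sense, as in the field model.
    x: float
    y: float
    r_stimulus: float   # how far this object's influence reaches
    r_sense: float      # how far this object can perceive stimuli
    energy: float = 0.0

def interacts(a: FieldObject, b: FieldObject) -> bool:
    """a stimulates b when b lies within a's stimulus field and
    a lies within b's sense field."""
    d = math.hypot(a.x - b.x, a.y - b.y)
    return d <= a.r_stimulus and d <= b.r_sense

def exchange(a: FieldObject, b: FieldObject, amount: float) -> None:
    # Cause and effect modeled as a simple energy transfer.
    if interacts(a, b):
        a.energy -= amount
        b.energy += amount

bell = FieldObject(0.0, 0.0, r_stimulus=5.0, r_sense=1.0)
dog = FieldObject(3.0, 0.0, r_stimulus=1.0, r_sense=4.0)
exchange(bell, dog, 1.0)
print(interacts(bell, dog), dog.energy)  # True 1.0
```

Varying only the radii and the per-object reaction logic then changes which objects react to which, which is the controllability the abstract describes.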

Study on Changes of Street Furniture in Digital Environment (디지털 환경에서 가로시설물의 변화에 관한 연구)

  • In, Chiho;Kim, Hyunsoo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.1D
    • /
    • pp.129-136
    • /
    • 2008
  • Along with developments in cutting-edge digital technology, street space is also changing. Mobile telecommunication devices and the internet have greatly changed human lifestyles, and ubiquitous computing is expanding its application range from indoor space to street space. Street furniture, the convenience facilities that enrich life in street space, is advancing accordingly. This study first analyzed research cases on the application of street space and actual changes in the number of street furniture units, through a literature review on ubiquitous computing. It also investigated actual use of street furniture on the pedestrian streets of Daehak-ro and the Hongdae district, two representative places for the digital generation. Next, a focus group interview (FGI) was carried out with users of the street space in front of Hongik University and with street furniture managers, covering use and management behavior and the level of recognition of street furniture. From this, key elements were extracted, such as information exchange for cultural activities and automation for variability of interactive functions. Finally, the core elements of a future vision for street furniture in the digital era were summarized as 3I: Information, Intellectualization, and Integration. These are expected to be applied in setting directions for the high-tech digitalization of street furniture.

Development of System for Real-Time Object Recognition and Matching using Deep Learning at Simulated Lunar Surface Environment (딥러닝 기반 달 표면 모사 환경 실시간 객체 인식 및 매칭 시스템 개발)

  • Jong-Ho Na;Jun-Ho Gong;Su-Deuk Lee;Hyu-Soung Shin
    • Tunnel and Underground Space
    • /
    • v.33 no.4
    • /
    • pp.281-298
    • /
    • 2023
  • Continuous research efforts are being devoted to unmanned mobile platforms for lunar exploration. There is an ongoing demand for real-time information processing to accurately determine the positioning and mapping of areas of interest on the lunar surface. To apply deep learning processing and analysis techniques to practical rovers, research on software integration and optimization is imperative. In this study, a foundational investigation was conducted on real-time analysis of virtual lunar base construction site images, aimed at automatically quantifying spatial information of key objects. The study transitioned from an existing region-based object recognition algorithm to a bounding-box-based algorithm, enhancing object recognition accuracy and inference speed. To facilitate extensive data-based object matching training, the Batch Hard Triplet Mining technique was introduced, and both training and inference processes were optimized. Furthermore, an improved software system for object recognition and identical-object matching was integrated, accompanied by the development of visualization software for automatically matching identical objects within input images. Using simulated satellite-captured video data for training and video data captured from a moving platform for inference, training and inference for identical-object matching were successfully executed. The outcomes suggest the feasibility of building 3D spatial information from continuously captured video data of mobile platforms and utilizing it for positioning objects within regions of interest. These findings are expected to contribute to an integrated, automated on-site system for video-based construction monitoring and control of significant target objects at future lunar base construction sites.
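The Batch Hard Triplet Mining technique mentioned in the abstract is a standard formulation: for each anchor in a batch, take the hardest (farthest) positive and the hardest (nearest) negative. A minimal NumPy sketch of that standard loss (not the authors' code; the toy embeddings are illustrative):

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Batch-hard triplet mining: for each anchor, use its hardest
    positive and hardest negative within the batch."""
    # Pairwise Euclidean distances between all embeddings
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)
    same = labels[:, None] == labels[None, :]
    n = len(labels)
    losses = []
    for i in range(n):
        pos = dist[i][same[i] & (np.arange(n) != i)]   # same identity
        neg = dist[i][~same[i]]                        # other identities
        if pos.size == 0 or neg.size == 0:
            continue
        losses.append(max(0.0, pos.max() - neg.min() + margin))
    return float(np.mean(losses))

# Two well-separated identity clusters -> zero loss at margin 0.2
emb = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
lab = np.array([0, 0, 1, 1])
print(batch_hard_triplet_loss(emb, lab))  # 0.0
```

Mining the hardest pairs inside each batch is what makes large-scale matching training tractable, since no offline triplet enumeration is needed.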

Speech Recognition Using Linear Discriminant Analysis and Common Vector Extraction (선형 판별분석과 공통벡터 추출방법을 이용한 음성인식)

  • 남명우;노승용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.4
    • /
    • pp.35-41
    • /
    • 2001
  • This paper describes Linear Discriminant Analysis and common vector extraction for speech recognition. A voice signal contains psychological and physiological properties of the speaker as well as dialect differences, acoustical environment effects, and phase differences. For these reasons, the same word spoken by different speakers can sound very different, which makes it difficult to extract common properties within the same speech class (word or phoneme). Linear algebra methods such as the KLT (Karhunen-Loeve Transformation) are generally used to extract common properties from speech signals, but this paper uses the common vector extraction method suggested by M. Bilginer et al. That method extracts the optimized common vector from the speech signals used for training, and achieves 100% recognition accuracy on the training data used for common vector extraction. Despite these characteristics, the method has drawbacks: the number of speech signals usable for training is limited, and the discriminant information among common vectors is not defined. This paper suggests an improved method that reduces the error rate by maximizing the discriminant information among common vectors, and adds a novel method to normalize the size of the common vector. The results show improved performance of the algorithm, with recognition accuracy about 2% higher than the conventional method.

  • PDF
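The common-vector extraction described above is, in its standard formulation, a projection of one class sample onto the orthogonal complement of the within-class difference subspace. A toy sketch under that assumption (not the authors' implementation; the 4-dimensional "feature" vectors are illustrative):

```python
import numpy as np

def common_vector(class_samples):
    """Common-vector sketch: remove from one sample every component
    lying in the subspace spanned by the within-class differences."""
    X = np.asarray(class_samples, dtype=float)
    diffs = X[1:] - X[0]                      # difference subspace
    if diffs.size:
        s = np.linalg.svd(diffs, compute_uv=False)
        Vt = np.linalg.svd(diffs, full_matrices=False)[2]
        basis = Vt[: int(np.sum(s > 1e-10))]  # orthonormal basis of it
        return X[0] - basis.T @ (basis @ X[0])
    return X[0]

# Toy word class: a shared part [1,1,0,0] plus speaker-dependent parts
samples = [[1, 1, 0, 0], [1, 1, 2, 0], [1, 1, 0, 3]]
cv = common_vector(samples)
print(np.round(cv, 6))  # [1. 1. 0. 0.]
```

A key property motivating the method is that the result does not depend on which sample of the class is projected, so it captures exactly the class-invariant part that the abstract calls the common properties.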

Investigating Smart TV Gesture Interaction Based on Gesture Types and Styles

  • Ahn, Junyoung;Kim, Kyungdoh
    • Journal of the Ergonomics Society of Korea
    • /
    • v.36 no.2
    • /
    • pp.109-121
    • /
    • 2017
  • Objective: This study aims to find suitable gesture types and styles for gesture interaction as a remote control on smart TVs. Background: Smart TVs are developing rapidly worldwide, and gesture interaction has a wide range of research areas, especially based on vision techniques. However, most studies focus on gesture recognition technology, and few previous studies of gesture types and styles on smart TVs have been carried out. It is therefore necessary to check which gesture types and styles users prefer for each operation command. Method: We conducted an experiment to extract the target user manipulation commands required for smart TVs and to select the corresponding gestures. We looked at the gesture styles people use for every operation command and checked whether there are gesture styles they prefer over others. From these results, smart TV operation commands and gestures were selected. Results: Eighteen TV commands were used in this study. With the agreement level as a basis, we compared six gesture types and five gesture styles for each command. As for gesture type, participants generally preferred Path-Moving gestures; for the Pan and Scroll commands, the highest agreement level (1.00) of the 18 commands was observed. As for gesture style, participants preferred a manipulative style for 11 commands (Next, Previous, Volume up, Volume down, Play, Stop, Zoom in, Zoom out, Pan, Rotate, Scroll). Conclusion: Based on an analysis of user-preferred gestures, nine gesture commands are proposed for gesture control on smart TVs. Most participants preferred Path-Moving type and Manipulative style gestures based on the actual operations. Application: The results can be applied to more advanced forms of gestures in 3D environments, such as VR studies. The method used in this study can be utilized in various domains.
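In gesture-elicitation studies, the "agreement level" reported above is typically Wobbrock's agreement score. Assuming that standard definition (the abstract does not give the formula), it can be computed per command as:

```python
from collections import Counter

def agreement_level(proposals):
    """Wobbrock-style agreement score for one command:
    A = sum over groups of identical proposals Pi of (|Pi|/|P|)^2.
    Identical proposals from every participant give A = 1.0."""
    n = len(proposals)
    return sum((c / n) ** 2 for c in Counter(proposals).values())

# All four participants proposed the same gesture -> perfect agreement,
# as reported for Pan and Scroll (agreement level 1.00)
print(agreement_level(["swipe", "swipe", "swipe", "swipe"]))  # 1.0
# A 2/1/1 split among four participants
print(agreement_level(["swipe", "swipe", "grab", "point"]))   # 0.375
```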

On Motion Planning for Human-Following of Mobile Robot in a Predictable Intelligent Space

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.1
    • /
    • pp.101-110
    • /
    • 2004
  • The robots that will be needed in the near future are human-friendly robots able to coexist with humans and support them effectively. To realize this, humans and robots need to be in close proximity as much as possible, and their interactions should occur naturally. It is desirable for a robot to carry out human following as one of its human-affinitive movements. A human-following robot requires several techniques: recognition of moving objects, feature extraction and visual tracking, and trajectory generation for following a human stably. In this research, a predictable intelligent space is used to achieve these goals. An intelligent space is a 3-D environment in which many sensors and intelligent devices are distributed; mobile robots exist in this space as physical agents providing humans with services. A mobile robot is controlled to follow a walking human using distributed intelligent sensors as stably and precisely as possible. The moving object is assumed to be a point object and is projected onto an image plane to form a geometrical constraint equation that provides position data of the object based on the kinematics of the intelligent space. Uncertainties in the position estimation caused by the point-object assumption are compensated using the Kalman filter. To generate the shortest-time trajectory for following the walking human, the linear and angular velocities are estimated and utilized. Computer simulation and experimental results of estimating and following a walking human with the mobile robot are presented.
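The Kalman-filter compensation described above can be illustrated with a minimal constant-velocity filter for a single coordinate of the walking human. This is a sketch under simplified assumptions (noise values and time step are illustrative), not the paper's full image-plane model:

```python
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=1e-3, r=0.05):
    """One predict/update cycle of a constant-velocity Kalman filter.
    State x = [position, velocity]; z is a noisy position measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured position z
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in [0.10, 0.21, 0.29, 0.41]:          # noisy observed positions
    x, P = kalman_step(x, P, np.array([z]), dt=0.1)
print(x)  # filtered [position, velocity] estimate
```

The same predict/update structure, with the geometrical constraint equation as the measurement model, is what smooths out the errors introduced by the point-object assumption, and the filtered velocity feeds the shortest-time trajectory generation.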

Jewelry case design study with fusion of cultural contents (문화콘텐츠가 융합된 주얼리케이스 디자인연구)

  • Hwang, Sun Wook
    • Journal of Digital Convergence
    • /
    • v.16 no.4
    • /
    • pp.335-342
    • /
    • 2018
  • The role of a jewelry case in modern society is not simply to store and keep its contents. Depending on circumstances and environment, it can deliver a special performance together with its contents. To achieve this, I considered a new type of case design and sought its starting point in cultural contents, a mainstream of modern society. I focused on the 'story' of many cultural prototypes and adopted 'Heungbujeon', a work of classical literature whose case, imagery, and popular recognition could be shared. After the design process, a sample was produced. Research fusing cultural contents can thus be both creative and popular, and I believe it can be another direction for the development of modern craft.

Analysis of Eye-safe LIDAR Signal under Various Measurement Environments and Reflection Conditions (다양한 측정 환경 및 반사 조건에 대한 시각안전 LIDAR 신호 분석)

  • Han, Mun Hyun;Choi, Gyu Dong;Seo, Hong Seok;Mheen, Bong Ki
    • Korean Journal of Optics and Photonics
    • /
    • v.29 no.5
    • /
    • pp.204-214
    • /
    • 2018
  • Since LIDAR is advantageous for accurate information acquisition and for realizing high-resolution 3D images based on its precise measurement characteristics, it is essential to autonomous navigation systems that must acquire and judge accurate peripheral information without user intervention. Recently, as autonomous navigation systems applying LIDAR have come into use in human living spaces, it has become necessary to solve the eye-safety problem and to make reliable judgments through accurate obstacle recognition in various environments. In this paper, we construct a single-shot LIDAR system (SSLS) using a 1550-nm eye-safe light source and report the analysis method and results for LIDAR signals under various measurement environments, reflective materials, and material angles. We analyze the signals of materials with different reflectance in each measurement environment, using a 5% Al reflector and a building wall located at a distance of 25 m under indoor, daytime, and nighttime conditions. In addition, signal analysis for changes in material angle is carried out, considering actual obstacles at various angles. This signal analysis makes it possible to confirm the correlation between measurement environment, reflection conditions, and the LIDAR signal, using the SNR to determine the reliability of the received information and the timing jitter as an index of the accuracy of the distance information.
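The two indices used above, SNR and timing jitter, can be computed from repeated return pulses roughly as follows. This is a sketch with synthetic data; the noise floor and jitter levels are illustrative assumptions, not measured values from the paper:

```python
import numpy as np

def snr_db(signal_peak, noise_samples):
    """SNR of a return pulse: peak amplitude over noise RMS, in dB."""
    noise_rms = np.sqrt(np.mean(np.square(noise_samples)))
    return 20.0 * np.log10(signal_peak / noise_rms)

def timing_jitter(arrival_times_ns):
    """Timing jitter as the standard deviation of repeated
    time-of-arrival measurements (an index of range accuracy)."""
    return float(np.std(arrival_times_ns))

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.01, 1000)         # simulated detector noise floor
# ~25 m target: round trip of 50 m at c gives ~166.7 ns time of arrival
toas = 166.7 + rng.normal(0.0, 0.05, 200)   # repeated measurements, ns
print(round(snr_db(1.0, noise), 1), "dB,", round(timing_jitter(toas), 3), "ns")
```

A lower SNR (e.g., from a low-reflectance material in daylight) makes the received information less reliable, and a larger timing jitter directly widens the uncertainty of the reported distance.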

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.1-21
    • /
    • 2012
  • In recent years, the mobile phone has evolved extremely quickly. It is equipped with a high-quality color display, a high-resolution camera, and real-time accelerated 3D graphics, along with other features such as a GPS sensor and a digital compass. This evolution significantly helps application developers use the power of smart-phones to create rich environments offering a wide range of services and exciting possibilities. In outdoor mobile AR research to date, there are many popular location-based AR services, such as Layar and Wikitude, but these systems have a major limitation: the AR contents are rarely overlaid precisely on the real target. Another line of research is context-based AR services using image recognition and tracking, in which the AR contents are precisely overlaid on the real target, but real-time performance is restricted by retrieval time and is hard to achieve over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR: the system first easily finds surrounding landmarks and then performs recognition and tracking on them. The proposed system mainly consists of two major parts, a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media) such as text, pictures, and video in their smart-phone viewfinder when they point their smart-phone at a certain building or landmark. For this, a landmark recognition technique is applied; SURF point-based features are used in the matching process due to their robustness. To ensure that the image retrieval and matching processes are fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the nearest landmarks in the pointed orientation. The queried image is matched only against this selected data, so matching speed is significantly increased.
The second part is the annotation module. Instead of only viewing the augmented information media, users can create virtual annotations based on linked data. Full knowledge of the landmark is not required; users can simply look for the appropriate topic by searching with a keyword in linked data, which helps the system find the target URI needed to generate correct AR contents. To recognize target landmarks, images of each selected building or landmark are captured from different angles and distances, a procedure similar to building a connection between the real building and the virtual information in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates; a grid-based clustering method and user location information are used to restrict the retrieval range. In existing research using clusters and GPS information, the retrieval time is around 70~80 ms, while our approach reduces it to around 18~20 ms on average. The total processing time is therefore reduced from 490~540 ms to 438~480 ms, and the improvement will be more pronounced as the database grows. This demonstrates that the proposed system is efficient and robust in many cases.
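The grid-based clustering with GPS restriction described above can be sketched as follows. The cell size, coordinates, and landmark names here are hypothetical; this is not the authors' implementation:

```python
import math
from collections import defaultdict

def grid_key(lat, lon, cell_deg=0.001):
    """Assign a coordinate to a grid cell (~100 m cells; assumed size)."""
    return (math.floor(lat / cell_deg), math.floor(lon / cell_deg))

def build_index(landmarks, cell_deg=0.001):
    """Cluster landmark images into grid cells by their coordinates."""
    index = defaultdict(list)
    for name, lat, lon in landmarks:
        index[grid_key(lat, lon, cell_deg)].append(name)
    return index

def candidates(index, lat, lon, cell_deg=0.001):
    """Only images in the user's cell and its 8 neighbors are passed to
    SURF matching, restricting the retrieval range."""
    gi, gj = grid_key(lat, lon, cell_deg)
    found = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            found.extend(index.get((gi + di, gj + dj), []))
    return found

db = [("LandmarkA", 37.5512, 126.9882), ("LandmarkB", 37.5796, 126.9770)]
idx = build_index(db)
print(candidates(idx, 37.5511, 126.9881))  # ['LandmarkA']
```

The digital-compass bearing would further narrow this candidate list to landmarks in the pointed orientation before the SURF descriptors of the query frame are matched, which is what brings the retrieval time down.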