• Title/Summary/Keyword: Virtual and real


Development of Metrics to Measure Reusability of Services of IoT Software

  • Cho, Eun-Sook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.12
    • /
    • pp.151-158
    • /
    • 2021
  • Internet of Things (IoT) technology, which provides services by connecting objects in the real world with objects in the virtual world over the Internet, is emerging as an enabling technology for the hyper-connected society of the Fourth Industrial Revolution. Because IoT technology is a convergence technology encompassing devices, networks, platforms, and services, it is studied from many angles; however, studies on measures of the service quality provided by IoT software remain insufficient. IoT software involves Internet-of-Things hardware, the technologies built on it, the characteristics of embedded software, and network characteristics, and these characteristics should serve as elements in defining IoT software quality metrics; so far, however, they have not been sufficiently reflected in metrics for IoT software quality measurement. This paper therefore presents metrics for measuring reusability, one of the various quality factors of IoT software, in consideration of these characteristics. In particular, because IoT software is used through IoT devices, its services must be designed so that they can be changed, replaced, or extended, and metrics that can measure these properties are needed. We propose three such metrics, changeability, replaceability, and scalability, for measuring and evaluating the reusability of IoT software services, and verify them through case studies. We expect that verifying the service quality of IoT software with these metrics will contribute to improving users' service satisfaction.
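
The abstract names the three metrics but not their formulas, so the ratio-style definitions below are purely illustrative assumptions (the function names and inputs are invented), sketching how service-level reusability scores of this kind are often computed:

```python
# Hypothetical sketch only: the paper's actual metric formulas are not given in
# the abstract, so these ratio-style definitions are illustrative assumptions.

def changeability(changeable_ops, total_ops):
    """Fraction of a service's operations that can be modified without
    affecting its clients (assumed definition)."""
    return changeable_ops / total_ops if total_ops else 0.0

def replaceability(substitutable_ifaces, required_ifaces):
    """Fraction of required interfaces for which a drop-in alternative
    service exists (assumed definition)."""
    return substitutable_ifaces / required_ifaces if required_ifaces else 0.0

def scalability(max_supported_devices, baseline_devices):
    """Growth factor the service sustains relative to its baseline
    deployment (assumed definition)."""
    return max_supported_devices / baseline_devices if baseline_devices else 0.0

# Example: a service with 6 of its 8 operations changeable
score = changeability(6, 8)
```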

2D Interpolation of 3D Points using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트의 2차원 보간 방안)

  • Hwang, Yonghae;Kim, Junsik;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.692-703
    • /
    • 2021
  • Recently, with the development of computer graphics technology, research on representing real objects as more realistic virtual graphics has been actively conducted. A point cloud represents a 3D object with numerous points, each carrying 3D spatial coordinates and color information, and point clouds require huge data storage and high-performance computing devices to support various services. Video-based Point Cloud Compression (V-PCC), currently being standardized by the international standards organization MPEG, is a projection-based method that projects a point cloud onto 2D planes and then compresses the result with 2D video codecs. V-PCC compresses point cloud objects using 2D images such as the occupancy map, geometry image, and attribute image, together with auxiliary information describing the relationship between the 2D planes and 3D space. Increasing the density of a point cloud or enlarging an object is generally done with 3D computation, but such computation is complicated and time-consuming, and it is difficult to determine the correct location for a new point. This paper proposes a method that generates additional points at more accurate locations with less computation by applying 2D interpolation to the images onto which the point cloud is projected in V-PCC.
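
A minimal sketch of the idea (not the paper's exact algorithm): once the point cloud is projected, new points can be created by interpolating between occupied pixels of a V-PCC-style geometry image, here simply averaging horizontally adjacent depth samples:

```python
import numpy as np

# Illustrative sketch: densify projected points by interpolating between
# horizontally adjacent occupied pixels of a geometry image. Each new point's
# depth is the average of its two neighbours, mimicking 2D interpolation on
# the projection plane instead of a costly 3D computation.

def interpolate_geometry(geometry, occupancy):
    """Return (row, col+0.5, depth) midpoints wherever two neighbouring
    pixels in a row are both occupied."""
    new_points = []
    rows, cols = geometry.shape
    for r in range(rows):
        for c in range(cols - 1):
            if occupancy[r, c] and occupancy[r, c + 1]:
                depth = (geometry[r, c] + geometry[r, c + 1]) / 2.0
                new_points.append((r, c + 0.5, depth))
    return new_points

geometry = np.array([[10.0, 12.0, 0.0],
                     [11.0, 13.0, 15.0]])
occupancy = np.array([[1, 1, 0],
                      [1, 1, 1]], dtype=bool)
points = interpolate_geometry(geometry, occupancy)
```

Because the interpolation happens on the 2D projection, the occupancy map guarantees that new points are only created between pixels that actually carry geometry.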

Spatial Replicability Assessment of Land Cover Classification Using Unmanned Aerial Vehicle and Artificial Intelligence in Urban Area (무인항공기 및 인공지능을 활용한 도시지역 토지피복 분류 기법의 공간적 재현성 평가)

  • Geon-Ung, PARK;Bong-Geun, SONG;Kyung-Hun, PARK;Hung-Kyu, LEE
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.25 no.4
    • /
    • pp.63-80
    • /
    • 2022
  • As technologies for analyzing and predicting urban issues by reconstructing real space in virtual space have developed, acquiring precise spatial information in complex cities has become increasingly important. In this study, images of an urban area with a complex landscape were acquired using an unmanned aerial vehicle, and land cover classification was performed with object-based image analysis (OBIA) and semantic segmentation, image classification techniques suited to high-resolution imagery. In addition, based on imagery collected at the same time, the spatial replicability of each artificial intelligence (AI) model's land cover classification was examined on areas the model had not learned. When trained and evaluated on the training site, the land cover classification accuracy was 89.3% for OBIA-RF, 85.0% for OBIA-DNN, and 95.3% for U-Net. When the models were applied to the replicability assessment site, the accuracy of OBIA-RF decreased by 7%, OBIA-DNN by 2.1%, and U-Net by 2.3%. U-Net, which considers both morphological and spectral characteristics, performed well in both classification accuracy and the replicability evaluation. As precise spatial information becomes more important, the results of this study are expected to contribute to urban environment research as a method for generating basic data.
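
The replicability check described above reduces to comparing a model's accuracy on its training site with its accuracy on an unseen site; a toy sketch (the labels below are made up, not the study's imagery):

```python
# Sketch of the replicability assessment: accuracy on the training site versus
# an unseen replication site. The class labels are illustrative toy data.

def accuracy(pred, truth):
    """Fraction of samples classified correctly."""
    correct = sum(p == t for p, t in zip(pred, truth))
    return correct / len(truth)

train_truth  = ["road", "grass", "roof", "grass", "road"]
train_pred   = ["road", "grass", "roof", "grass", "road"]
replic_truth = ["road", "grass", "roof", "grass", "road"]
replic_pred  = ["road", "grass", "roof", "road", "road"]

train_acc = accuracy(train_pred, train_truth)
replic_acc = accuracy(replic_pred, replic_truth)
drop = train_acc - replic_acc   # the quantity reported per model in the study
```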

A Comparative Study on the Brand Experiences of Metaverse and Offline Stores (메타버스와 오프라인 스토어의 브랜드 체험 비교 연구)

  • Gwang-Ho Yi;Yu-Jin Kim
    • Science of Emotion and Sensibility
    • /
    • v.26 no.2
    • /
    • pp.53-66
    • /
    • 2023
  • In recent years, more fashion brands have been seeking ways to use metaverse platforms, in which users can actively participate, as new brand touch-points. This study compares the brand experiences of the fashion brand Gentle Monster's offline store and its equivalent metaverse store. Two groups took part in a field study in which they directly experienced both stores, with the order of offline and metaverse visits reversed between groups. The analysis yielded the following findings: (1) in the overall experiential response, sensory modules responding to new information occurred much more frequently than feeling experiences; (2) experiential responses were more active in the offline store, where subjects could touch and use products directly, than in the metaverse; (3) among the four types of theme space, experiential responses were most frequent in the product space; and (4) the group that visited the metaverse store before the offline store showed more active experiences than the group that visited the offline store first. These results show that metaverse brand stores in virtual space not only provide differentiated experiences beyond the spatiotemporal constraints of real space but can also serve as a strategic tool for making offline store experiences more meaningful and rich.

Performance Evaluation of LTE-VPN based Disaster Investigation System for Sharing Disaster Field Information (재난사고 정보공유를 위한 LTE-VPN기반 현장조사시스템 성능평가)

  • Kim, Seong Sam;Shin, Dong Yoon;Nho, Hyun Ju
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.11
    • /
    • pp.602-609
    • /
    • 2020
  • In the event of a large-scale disaster such as an earthquake, typhoon, landslide, or building collapse, awareness of the disaster situation and timely sharing of disaster information play a key role in the response and decision-making stages of disaster management, such as controlling the disaster site and evacuating residents. In this paper, an existing field investigation system of the National Disaster Management Research Institute (NDMI) was enhanced with an LTE-VPN-based wireless communication system to support effective on-site response in urgent disaster situations and to share observation data and analysis results acquired in the field in real time. The wireless communication performance required by the disaster field investigation system was then analyzed and evaluated. Field data transmission experiments with the enhanced system showed that UDP throughput of at least 4.1 Mbps is required to sustain a seamless video conference between disaster sites, and that a wireless communication bandwidth of approximately 10 Mbps should be guaranteed to smoothly share communications and field data among the survey equipment mounted on the survey vehicle.
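
A back-of-the-envelope check based on the two figures above: the video conference needs at least 4.1 Mbps of UDP throughput, and the combined field-data streams should fit within the recommended ~10 Mbps wireless budget. The individual stream rates below are illustrative assumptions, not measurements from the paper:

```python
# Link-budget sketch using the abstract's figures; the extra stream rates are
# hypothetical examples of survey-equipment feeds, not values from the study.

VIDEO_CONF_MBPS = 4.1    # minimum UDP throughput found in the experiment
LINK_BUDGET_MBPS = 10.0  # recommended wireless bandwidth

def fits_budget(streams_mbps, budget=LINK_BUDGET_MBPS):
    """True if the summed stream rates stay within the link budget."""
    return sum(streams_mbps) <= budget

streams = [VIDEO_CONF_MBPS, 3.0, 1.5]  # video conference + two assumed feeds
ok = fits_budget(streams)
```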

Interactive 3D Visualization of Ceilometer Data (운고계 관측자료의 대화형 3차원 시각화)

  • Lee, Junhyeok;Ha, Wan Soo;Kim, Yong-Hyuk;Lee, Kang Hoon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.2
    • /
    • pp.21-28
    • /
    • 2018
  • We present interactive methods for visualizing the cloud height data and backscatter data collected from ceilometers in three-dimensional virtual space. Because ceilometer data is high-dimensional, large, and tied to both spatial and temporal information, it is practically impossible to convey all its aspects with static two-dimensional images. Based on 3D rendering technology, our visualization methods allow the user to observe both the global variations and the local features of the 3D representations of ceilometer data from various angles by interactively manipulating the time and the view. The cloud height data, coupled with terrain data, is visualized as a realistic cloud animation in which clouds form and dissipate over the terrain. The backscatter data is visualized as a three-dimensional terrain that effectively shows how the amount of backscatter changes with time and altitude. Our system facilitates multivariate analysis of ceilometer data by letting the user select the date to examine, the level of detail of the terrain, and additional data such as the planetary boundary layer height. We demonstrate the usefulness of our methods through experiments with real ceilometer data collected from 93 sites across the country.
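
The backscatter "terrain" described above is, at its core, a 2D grid indexed by time and altitude whose values become surface heights in the 3D view. A data-preparation sketch with a synthetic profile standing in for real ceilometer returns (the cloud-base height and decay scale are invented):

```python
import numpy as np

# Data-preparation sketch for a backscatter height field: rows are time
# samples, columns are altitude gates, and values would be rendered as the
# surface height by any 3D plotting API. The profile is synthetic.

n_times, n_alts = 24, 50               # hourly samples, 50 range gates
alts = np.linspace(0, 5000, n_alts)    # altitude in metres

# Synthetic backscatter: strongest near an assumed 1500 m cloud base,
# fading with distance from it; constant over time for simplicity.
grid = np.exp(-((alts[None, :] - 1500.0) / 600.0) ** 2) * np.ones((n_times, 1))

# Gate with the strongest return in the first time sample:
peak_gate = int(np.argmax(grid[0]))
```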

Low Resolution Depth Interpolation using High Resolution Color Image (고해상도 색상 영상을 이용한 저해상도 깊이 영상 보간법)

  • Lee, Gyo-Yoon;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.2 no.4
    • /
    • pp.60-65
    • /
    • 2013
  • In this paper, we propose a method for generating a high-resolution disparity map using a low-resolution time-of-flight (TOF) depth camera together with a color camera. The TOF depth camera is efficient because it measures the range of objects in real time using an infrared (IR) signal, quantizes the range information, and outputs a depth image. However, the TOF depth camera suffers from noise and lens distortion, and its output resolution is too low for 3D applications; it is therefore essential not only to reduce the noise and distortion but also to enlarge the resolution of the TOF depth image. Our method generates a depth map for a color image by using the TOF camera and the color camera simultaneously. We warp the depth value at each pixel to its position in the color image. The color image is segmented with the mean-shift segmentation method, and we define a cost function over color values and segmented color values. We then apply a weighted average filter whose weighting factors are random-walk probabilities computed from the block's cost function. Experimental results show that the proposed method generates the depth map efficiently and enables the reconstruction of good virtual view images.
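
A much-simplified sketch of color-guided depth filling in the spirit of the method above: missing high-resolution depth pixels take a weighted average of nearby warped samples, with neighbours of similar color weighted more heavily. The Gaussian color weight stands in for the paper's random-walk probability, and the sigma value is an illustrative choice:

```python
import numpy as np

# Simplified color-guided fill: zero entries in `depth` are missing pixels to
# be interpolated from 8-connected neighbours that carry measured depth,
# weighted by color similarity. This is a stand-in for the paper's
# random-walk-probability weighting, not its exact filter.

def guided_fill(depth, color, sigma=10.0):
    """Fill zero (missing) depth pixels from neighbours with similar color."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            if depth[y, x] != 0:
                continue                       # keep measured samples as-is
            wsum, dsum = 0.0, 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] != 0:
                        cdiff = float(color[y, x]) - float(color[ny, nx])
                        wgt = np.exp(-(cdiff ** 2) / (2 * sigma ** 2))
                        wsum += wgt
                        dsum += wgt * depth[ny, nx]
            if wsum > 0:
                out[y, x] = dsum / wsum
    return out

depth = np.array([[5, 0, 9],
                  [5, 0, 9]])
color = np.array([[100, 101, 200],
                  [100, 101, 200]])
filled = guided_fill(depth, color)
```

Because the missing pixel's color (101) is close to the left column (100) and far from the right (200), the filled value follows the left depth samples rather than blending both sides across the color edge.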


Factors Affecting the Delay of a Decision to Admit Severe Trauma Patients and the Effect of a Multidisciplinary Department System: a Preliminary Study (중증 외상 환자의 입원 결정 지연에 영향을 미치는 요인과 공동진료시스템)

  • Kang, Mun-Ju;Shin, Tae-Gun;Sim, Min-Seob;Jo, Ik-Joon;Song, Hyoung-Gon
    • Journal of Trauma and Injury
    • /
    • v.23 no.2
    • /
    • pp.113-118
    • /
    • 2010
  • Purpose: A prolonged stay in the emergency department (ED), which is closely related to the time from the ED visit to the decision to admit, may be associated with poor outcomes for trauma patients and with ED overcrowding. We therefore examined the factors that delay the decision to admit severe trauma patients, and preliminarily evaluated whether a multidisciplinary department system could reduce the time from triage to the admission decision. Methods: A retrospective observational study was conducted at a tertiary care university hospital without a specialized trauma team or specialized trauma surgeons from January 2009 to March 2010. Severe trauma patients with an International Classification of Disease-based Injury Severity Score (ICISS) below 0.9 were included. Multivariable logistic regression analysis was used to find independent variables associated with a delayed admission decision, defined as a time from ED arrival to the admission decision exceeding 4 hours. We also simulated the time from triage to the admission decision under a multidisciplinary department system. Results: A total of 89 patients were enrolled. The mean time from triage to the admission decision was 5.2 ± 7.1 hours, and the mean length of ED stay was 9.0 ± 11.5 hours. The rate of delayed admission decisions was 31.5%. Multivariable regression analysis revealed that multiple trauma (odds ratio [OR]: 30.6, 95% confidence interval [CI]: 3.18-294.71), emergency operation (OR: 0.55, 95% CI: 0.01-0.96), and treatment in the Department of Neurosurgery (OR: 0.07, 95% CI: 0.01-0.78) were significantly associated with the delay. In the simulation based on a multidisciplinary department system, the virtual time from triage to the admission decision was 2.1 ± 1.5 hours. Conclusion: Among severe trauma patients in the ED, multiple trauma was a significant factor delaying the admission decision, whereas emergency operation and treatment in the Department of Neurosurgery were negatively associated with the delay. The simulated time from triage to the admission decision under a multidisciplinary department system was 3 hours shorter than the actual time.
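
The odds ratios above come from multivariable logistic regression, where an OR is simply the exponential of the fitted coefficient. A one-line reminder of that relationship (the coefficient below is back-derived from the reported OR for illustration, not taken from the study's model):

```python
import math

# In logistic regression, odds ratio = exp(coefficient). Here we back-derive
# an illustrative coefficient from the reported OR of 30.6 for multiple trauma.

def odds_ratio(beta):
    """Convert a logistic-regression coefficient to an odds ratio."""
    return math.exp(beta)

beta_multiple_trauma = math.log(30.6)  # hypothetical coefficient
or_mt = odds_ratio(beta_multiple_trauma)
```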

Preliminary Study on the Enhancement of Reconstruction Speed for Emission Computed Tomography Using Parallel Processing (병렬 연산을 이용한 방출 단층 영상의 재구성 속도향상 기초연구)

  • Park, Min-Jae;Lee, Jae-Sung;Kim, Soo-Mee;Kang, Ji-Yeon;Lee, Dong-Soo;Park, Kwang-Suk
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.443-450
    • /
    • 2009
  • Purpose: Conventional image reconstruction uses simplified physical models of the projection process. Reconstruction with realistic physics, for example full 3D reconstruction, takes too long to process all the data in a clinical setting and is infeasible on a common reconstruction machine because complex physical models require large amounts of memory. We propose a distributed-memory model for fast reconstruction using parallel processing on personal computers to enable such large-scale computation. Materials and Methods: Preliminary feasibility tests on virtual machines and various performance tests on the commercial supercomputer Tachyon were performed. An expectation-maximization algorithm was tested with both a conventional 2D projector and a realistic 3D line-of-response model. Because processing slowed down (by up to 6 times) after a certain number of iterations, compiler optimization was performed to maximize the efficiency of parallelization. Results: Parallel processing of a program across multiple computers was realized on Linux with MPICH and NFS. We verified that the differences between parallel-processed and single-processed images at the same iteration were below the significant digits of floating-point numbers (about 6 bits). Two processors showed good parallel-computing efficiency (a 1.96-fold speed-up). The slowdown was resolved by vectorization using SSE. Conclusion: This study established a clinically realistic parallel computing system that, by pooling sufficient memory, can reconstruct images using realistic physical models that could not previously be used without simplification.
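
The parallelization verified above relies on the EM (MLEM) update decomposing cleanly: the backprojection can be split across workers by partitioning the projection rows and summing the partial results. A toy sketch of that decomposition (the system matrix and counts are random toy values, not a real scanner model; chunks stand in for MPI ranks):

```python
import numpy as np

# Sketch of splitting one MLEM update across workers: each chunk of projection
# rows (lines of response) contributes a partial backprojection, and the sums
# agree with the serial update to floating-point precision, mirroring the
# equivalence checked in the study. Toy data throughout.

rng = np.random.default_rng(0)
A = rng.random((40, 16))           # system matrix: 40 LORs x 16 voxels
x = np.ones(16)                    # initial image estimate
y = A @ (rng.random(16) + 0.5)     # simulated measured counts

def em_update(A, y, x):
    """One serial MLEM iteration."""
    ratio = y / (A @ x)
    return x * (A.T @ ratio) / A.sum(axis=0)

def em_update_chunked(A, y, x, n_chunks=4):
    """Same update, with the backprojection split into chunks (one per worker)."""
    back = np.zeros_like(x)
    for Ai, yi in zip(np.array_split(A, n_chunks), np.array_split(y, n_chunks)):
        back += Ai.T @ (yi / (Ai @ x))
    return x * back / A.sum(axis=0)

serial = em_update(A, y, x)
chunked = em_update_chunked(A, y, x)
```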

L-CAA : An Architecture for Behavior-Based Reinforcement Learning (L-CAA : 행위 기반 강화학습 에이전트 구조)

  • Hwang, Jong-Geun;Kim, In-Cheol
    • Journal of Intelligence and Information Systems
    • /
    • v.14 no.3
    • /
    • pp.59-76
    • /
    • 2008
  • In this paper, we propose an agent architecture called L-CAA that is effective in real-time dynamic environments. L-CAA extends CAA, a behavior-based agent architecture also developed by our research group, with reinforcement learning capability to improve adaptability to changing environments. To keep performance stable, however, behavior selection and execution in the L-CAA architecture do not rely entirely on learning; learning serves merely as a complementary means for behavior selection and execution. The behavior selection mechanism consists of two phases. In the first phase, candidate behaviors are extracted from the behavior library by checking each behavior's user-defined applicability conditions and utility. If multiple behaviors are extracted in the first phase, a single behavior is selected for execution in the second phase with the help of reinforcement learning: the behavior with the highest expected reward is chosen by comparing the Q values of the individual behaviors, which are updated through reinforcement learning. L-CAA monitors the maintenance conditions of the executing behavior and stops the behavior immediately when some of the conditions fail due to dynamic change in the environment. Additionally, L-CAA can suspend and later resume the current behavior whenever it encounters a behavior of higher utility. To analyze the effectiveness of the L-CAA architecture, we implemented an L-CAA-enabled agent that plays autonomously in Unreal Tournament, a well-known dynamic virtual environment, and conducted several experiments with it.
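
The two-phase selection described above can be sketched in a few lines: phase 1 filters the behavior library by applicability conditions, phase 2 picks the applicable behavior with the highest learned Q value. Behavior names, conditions, and Q values below are invented for illustration:

```python
# Sketch of L-CAA-style two-phase behavior selection. The behavior library,
# applicability predicates, and Q values are hypothetical examples; in the
# real architecture the Q values are updated by reinforcement learning.

behaviors = {
    "patrol":  {"applicable": lambda s: True,             "q": 0.3},
    "attack":  {"applicable": lambda s: s["enemy_seen"],  "q": 0.9},
    "retreat": {"applicable": lambda s: s["low_health"],  "q": 0.7},
}

def select_behavior(state):
    # Phase 1: keep only behaviors whose applicability conditions hold.
    candidates = [name for name, b in behaviors.items() if b["applicable"](state)]
    # Phase 2: among the candidates, choose the highest expected reward (Q).
    return max(candidates, key=lambda n: behaviors[n]["q"]) if candidates else None

choice = select_behavior({"enemy_seen": True, "low_health": False})
```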
