• Title/Summary/Keyword: Space Images


Control of Temperature and the Direction of Wind Using Thermal Images and a Fuzzy Control Method (열 영상과 퍼지 제어 기법을 이용한 온도 및 풍향 제어)

  • Kim, Kwang-Baek; Cho, Jae-Hyun; Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.11 / pp.2083-2090 / 2008
  • In this paper, we propose a method for controlling the temperature and wind direction of an air cooler using thermal images and fuzzy inference rules in order to save energy. In a simulation for controlling temperature, a thermal image is transformed into a 300×400 color distribution image for analysis. The color distribution image is composed of R, G, and B values representing the temperature values of red, magenta, yellow, green, cyan, and blue. Each color corresponds to a temperature between 24.0°C and 27.0°C, and the color distribution image is classified into height hierarchies from level 1 to level 10. Each hierarchy has its own characteristic color distribution, and a temperature value is assigned to each level according to the temperatures of those colors. The process for controlling the overall balance of temperature and wind direction in an indoor space is as follows: fuzzy membership functions are designed from the wind direction, duration time, and the temperature and height values of the color distribution image, and the strength of the wind is then calculated from the membership values of these functions.
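As a rough illustration of the inference step described above, the following Python sketch computes a wind-strength value from triangular fuzzy membership functions; the membership break points, height/duration weighting, and rule weights are hypothetical, since the paper's actual rule base is not given in the abstract.

```python
# Minimal sketch of a fuzzy-inference step for wind strength, assuming
# triangular membership functions and a hypothetical rule base.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def wind_strength(temp_c, level, duration_min):
    # Membership of the measured temperature in "cool", "moderate", "warm";
    # the 24.0-27.0 °C range follows the abstract, the break points do not.
    cool = tri(temp_c, 23.5, 24.0, 25.0)
    moderate = tri(temp_c, 24.5, 25.5, 26.5)
    warm = tri(temp_c, 26.0, 27.0, 27.5)

    # Hypothetical rules: warmer air, higher levels (1..10 per the abstract),
    # and longer duration all push the fan output higher.
    height_factor = level / 10.0
    duration_factor = min(duration_min / 30.0, 1.0)

    # Weighted-average (centroid-style) defuzzification to a 0..1 output.
    num = 0.2 * cool + 0.5 * moderate + 0.9 * warm
    den = cool + moderate + warm or 1.0
    return (num / den) * (0.5 + 0.5 * height_factor) * (0.5 + 0.5 * duration_factor)

print(wind_strength(temp_c=26.2, level=7, duration_min=15))
```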

A Theory of Intermediality and its Application in Peter Greenaway's Prospero's Books (상호매체성의 이론과 그 적용 - 피터 그리너웨이의 <프로스페로의 서재>를 중심으로)

  • PARK, Ki-Hyun
    • Cross-Cultural Studies / v.19 / pp.39-77 / 2010
  • The cinema of Peter Greenaway has consistently engaged questions of the relationship between the arts, and particularly the relations of image and writing to cinema. When different types of images are correlated and merged with each other on the borders of painting, photography, film, video, and computer animation, the interrelationships of the distinct elements cause a shift in the notion of the whole image. This analysis proposes to articulate the complex relationship between the 'interartial' dimension and the 'intermedial' dimension in Peter Greenaway's film Prospero's Books (1991). While interartiality is concerned with the interaction between various arts, including the transition from one to another, intermediality articulates the same type of relationship between two or more media. The interactional relationship is the same on both sides; on the contrary, the relationship between art and media does not show the same symmetry. All art is based on one or more media - a medium is a condition of existence of art - but no art can be reduced to the status of a medium. This suggests that while interartiality always involves intermediality, the proposition may not be reversed. First, we analyse a self-conscious investigation into digital art and technology. Prospero's Books can be read as a daring visual essay that self-consciously investigates the technical and philosophical functions of letters, books, images, animated paintings, digital arts, and other magical illusions, which have been modern, or will be post-modern, media for representing the world. Greenaway uses both conventional film techniques and the resources of high-definition television to layer image upon image, superimposing a second or third frame within his frame. Greenaway uses the frame-within-frame as the cinematic equivalent of Shakespeare's play-within-play: it offers him the possibility of analysing the work of art/artist/spectator relationship. Secondly, we analyse the relationship between the written word, the oral word, and the books. Like the written word, the oral word changes into a visual image: the linguistic richness and nuances of Shakespeare's characters turn into the powerful and authoritative, but monotone, voice of Gielgud-Prospero, who speaks the Shakespearean lines aloud, shaping the characters so powerfully through his words that they are conjured before us. Notably, each book is placed over the frame of the play's action, only partially covering the image, so that it gives virtually every frame at least two space-time orientations. Thirdly, we try to show how Peter Greenaway uses pictorial references in order to illustrate the context of the Renaissance, as well as pictorial techniques and language in order to question the nature of artistic representation. For example, the storm is visualised through a reference to Botticelli, and the storm of papers swirling around the library is constructed to look like a facsimile of Michelangelo's Laurentian Library in Florence. Greenaway's modern mannerism consists in imposing his own aesthetic vision and his questioning of art beyond the play's meta-theatricality: in other words, Shakespeare's text has been adapted without being betrayed.

Implementation of virtual reality for interactive disaster evacuation training using close-range image information (근거리 영상정보를 활용한 실감형 재난재해 대피 훈련 가상 현실 구현)

  • KIM, Du-Young; HUH, Jung-Rim; LEE, Jin-Duk; BHANG, Kon-Joon
    • Journal of the Korean Association of Geographic Information Studies / v.22 no.1 / pp.140-153 / 2019
  • Close-range image information from drones and ground-based cameras has frequently been used in the field of disaster mitigation for 3D modeling and mapping. In addition, the use of virtual reality (VR) is increasing, as realistic 3D models combined with VR technology can simulate disaster circumstances at large scale. In this paper, we created a VR training program by extracting realistic 3D models from close-range images taken by an unmanned aircraft and a hand-held digital camera, and we examined several issues arising during implementation as well as the effectiveness of applying VR to training for disaster mitigation. First, we built a disaster scenario and created 3D models after processing the close-range imagery. The 3D models were imported as a background into Unity, a development platform for augmented/virtual reality, targeting Android-based mobile phones, and the VR environment was created with C# scripts. The generated virtual reality includes a scenario in which the trainee moves to a safe place along the evacuation route in the event of a disaster, and it was judged that successful training can be achieved in virtual reality. In addition, training in virtual reality has advantages over actual evacuation training in terms of cost, space, and time efficiency.

Pixel-level Crack Detection in X-ray Computed Tomography Image of Granite using Deep Learning (딥러닝을 이용한 화강암 X-ray CT 영상에서의 균열 검출에 관한 연구)

  • Hyun, Seokhwan; Lee, Jun Sung; Jeon, Seonghwan; Kim, Yejin; Kim, Kwang Yeom; Yun, Tae Sup
    • Tunnel and Underground Space / v.29 no.3 / pp.184-196 / 2019
  • This study aims to extract a 3D image of micro-cracks generated by hydraulic fracturing tests, using a deep learning method and X-ray computed tomography images. Pixel-level cracks are difficult to detect with conventional image processing methods such as global thresholding, Canny edge detection, and the region growing method. Thus, a convolutional neural network-based encoder-decoder network is adopted to extract and analyze the micro-cracks quantitatively. The training data are augmented by dividing, rotating, and flipping images, and the optimum combination of augmentation methods is verified. Applying the optimal image augmentation method shows enhanced performance not only for the validation dataset but also for the test dataset. In addition, the influence of the original number of training data on the performance of the deep learning-based neural network is confirmed, leading to successful pixel-level crack detection.
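A minimal sketch of the patch-based augmentation described above (dividing, rotating, and flipping CT slices) is given below in Python; the patch size and the particular rotation/flip set are illustrative assumptions, not the paper's verified optimal combination.

```python
# Sketch of dividing a CT slice into patches and augmenting each patch
# by rotation and horizontal flipping, under assumed patch/rotation settings.
import numpy as np

def augment_patches(image, mask, patch=128):
    """Split an image/label pair into patches and return rotated/flipped copies."""
    h, w = image.shape
    samples = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            img_p = image[y:y + patch, x:x + patch]
            msk_p = mask[y:y + patch, x:x + patch]
            for k in range(4):                      # 0/90/180/270 degree rotations
                r_img, r_msk = np.rot90(img_p, k), np.rot90(msk_p, k)
                samples.append((r_img, r_msk))
                samples.append((np.fliplr(r_img), np.fliplr(r_msk)))  # horizontal flip
    return samples

# Example with a dummy 512x512 slice and a sparse binary crack mask.
slice_img = np.random.rand(512, 512).astype(np.float32)
crack_mask = (np.random.rand(512, 512) > 0.99).astype(np.uint8)
print(len(augment_patches(slice_img, crack_mask)))   # 16 patches x 8 variants = 128
```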

FBX Format Animation Generation System Combined with Joint Estimation Network using RGB Images (RGB 이미지를 이용한 관절 추정 네트워크와 결합된 FBX 형식 애니메이션 생성 시스템)

  • Lee, Yujin; Kim, Sangjoon; Park, Gooman
    • Journal of Broadcast Engineering / v.26 no.5 / pp.519-532 / 2021
  • Recently, in various fields such as games, movies, and animation, there is increasing content that uses motion capture to build body models and express characters in 3D space. Studies are underway to generate animations with RGB-D cameras in order to avoid the filming cost of placing joints by attaching markers, but problems of pose estimation accuracy and equipment cost remain. Therefore, in this paper, we propose a system that feeds RGB images into a joint estimation network and converts the results into 3D data to create FBX-format animations, in order to reduce the equipment cost required for animation creation and to increase joint estimation accuracy. First, the two-dimensional joints are estimated from the RGB image, and the three-dimensional coordinates of the joints are estimated from these values. The result is converted into quaternion rotations, and an animation in the FBX format is created. To measure the accuracy of the proposed method, system operation was verified by comparing the error between an animation generated from the 3D positions of markers attached to the body and the animation generated by the proposed system.
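The conversion from estimated 3D joint positions to bone rotations can be sketched as follows; the rest-pose bone axis and the example joint direction are hypothetical, and the paper's actual skeleton and conversion pipeline may differ.

```python
# Sketch of turning a 3D joint direction into a quaternion: the rotation that
# aligns an assumed rest-pose bone direction with the estimated direction.
import numpy as np

def quat_from_vectors(v_from, v_to):
    """Quaternion (w, x, y, z) rotating unit vector v_from onto v_to."""
    a = v_from / np.linalg.norm(v_from)
    b = v_to / np.linalg.norm(v_to)
    axis = np.cross(a, b)
    w = 1.0 + float(np.dot(a, b))
    if w < 1e-8:                       # opposite vectors: pick any orthogonal axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        w = 0.0
    q = np.array([w, *axis])
    return q / np.linalg.norm(q)

# Example: shoulder->elbow bone, rest pose assumed to point down the -Y axis.
rest_dir = np.array([0.0, -1.0, 0.0])
estimated_dir = np.array([0.3, -0.9, 0.2])         # from the 3D joint estimates
print(quat_from_vectors(rest_dir, estimated_dir))  # rotation to bake into the FBX bone
```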

Comparative Experiment of 2D and 3D DCT Point Cloud Compression (2D 및 3D DCT를 활용한 포인트 클라우드 압축 비교 실험)

  • Nam, Kwijung; Kim, Junsik; Han, Muhyen; Kim, Kyuheon; Hwang, Minkyu
    • Journal of Broadcast Engineering / v.26 no.5 / pp.553-565 / 2021
  • A point cloud is a set of points representing a 3D object, and it consists of geometric information, i.e., 3D coordinates, and attribute information representing color, reflectance, and the like. Expressed this way, it involves a vast amount of data compared to 2D images, so compression is required in order to transmit point cloud data or use it in various fields. Unlike a 2D image, in which every pixel position carries color information, a point cloud carries attribute information such as color in only part of the 3D space, so separate processing of the geometric information is also required. Based on these characteristics of point clouds, MPEG under ISO/IEC is standardizing V-PCC, which projects point clouds into 2D images and compresses them with 2D DCT-based image compression codecs, as a compression method for high-density point cloud data. This approach has limitations in accurately representing 3D spatial information, since compression proceeds by converting the 3D point cloud to 2D, while utilizing a 3D DCT instead raises the difficulty of handling unoccupied points. Therefore, in this paper, we present 3D Discrete Cosine Transform-based Point Cloud Compression (3DCT PCC), a method that compresses point cloud data as a 3D image using the 3D DCT, and confirm the efficiency of the 3D DCT compared to 2D DCT-based V-PCC.
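A minimal sketch of a 3D DCT applied to one voxelized attribute block is shown below, using SciPy's dctn/idctn; the block size, occupancy handling, and uniform quantization step are assumptions for illustration rather than the 3DCT PCC design itself.

```python
# Sketch of 3D DCT + uniform quantization on an 8x8x8 voxel block of attributes.
import numpy as np
from scipy.fft import dctn, idctn

block = np.zeros((8, 8, 8), dtype=np.float32)       # voxel block of attribute values
occupied = np.random.rand(8, 8, 8) > 0.7            # which voxels actually hold points
block[occupied] = np.random.rand(occupied.sum()) * 255.0   # e.g. luminance values

coeffs = dctn(block, type=2, norm="ortho")           # forward 3D DCT
q_step = 16.0
quantized = np.round(coeffs / q_step)                # uniform quantization
reconstructed = idctn(quantized * q_step, type=2, norm="ortho")

# Report error only on occupied voxels, since empty voxels carry no attribute.
err = np.abs(reconstructed[occupied] - block[occupied]).mean()
print(f"mean attribute error on occupied voxels: {err:.2f}")
```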

Early Estimation of Rice Cultivation in Gimje-si Using Sentinel-1 and UAV Imagery (Sentinel-1 및 UAV 영상을 활용한 김제시 벼 재배 조기 추정)

  • Lee, Kyung-do; Kim, Sook-gyeong; Ahn, Ho-yong; So, Kyu-ho; Na, Sang-il
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.503-514 / 2021
  • Maintaining an adequate rice cultivation area is important for decision-making on rice supply and demand policy, and it is essential to grasp rice cultivation areas in advance in order to estimate the year's rice production. This study was carried out to classify paddy rice cultivation in Gimje-si using Sentinel-1 SAR (synthetic aperture radar) and UAV imagery in early July. Time-series Sentinel-1A and 1B images acquired from early May to early July were converted into sigma naught (dB) images using the SNAP (SeNtinel Application Platform, version 8.0) toolbox provided by the European Space Agency. A farm map and a parcel map, which are vector polygon spatial data, were used to stratify the paddy field population for classifying rice cultivation. To distinguish paddy rice from other crops grown in the paddy fields, we used a threshold-based decision tree method and a random forest model. The random forest model, trained mainly on rice cultivation areas and on rice and soybean cultivation areas within the UAV image coverage, showed the best performance, with an overall accuracy of 89.9% and a Kappa coefficient of 0.774. Through this, we were able to confirm the possibility of early estimation of the rice cultivation area in Gimje-si using UAV imagery.
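A minimal sketch of the random-forest classification step on per-parcel time-series backscatter features is given below; the synthetic feature layout (mean sigma naught in dB over several acquisition dates) and the hyperparameters are assumptions, not the study's configuration.

```python
# Sketch of classifying parcels as rice / non-rice from time-series SAR features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
n_parcels, n_dates = 500, 6                          # e.g. early May to early July scenes
X = rng.normal(-15.0, 3.0, (n_parcels, n_dates))     # per-parcel mean sigma naught (dB)
y = rng.integers(0, 2, n_parcels)                    # 1 = paddy rice, 0 = other crop

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("overall accuracy:", accuracy_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))
```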

Accuracy analysis of Multi-series Phenological Landcover Classification Using U-Net-based Deep Learning Model - Focusing on the Seoul, Republic of Korea - (U-Net 기반 딥러닝 모델을 이용한 다중시기 계절학적 토지피복 분류 정확도 분석 - 서울지역을 중심으로 -)

  • Kim, Joon; Song, Yongho; Lee, Woo-Kyun
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.409-418 / 2021
  • The land cover map is important data used as a basis for decision-making in land and environmental policy. It is produced from remote sensing data, and the classification results may vary depending on the acquisition time of the data even for the same area. In this study, to overcome the accuracy limit of single-period data, multi-seasonal satellite images were used to train a U-Net model, one of the deep learning algorithms, on the seasonal differences in the spectral reflectance characteristics of the land surface, thereby improving classification accuracy. The degree of improvement is then assessed by comparison with the accuracy obtained from single-period data. Seoul, which contains various land covers including about 30% green space and the Han River, was set as the study area, and quarterly Sentinel-2 satellite images for 2020 were acquired. The U-Net model was trained using the sub-class land cover map produced by the Korean Ministry of Environment. When the trained U-Net model was used to classify single-period, two-season, three-season, and four-season inputs, the multi-seasonal cases showed accuracies of 81%, 82%, and 79%, exceeding the 75% standard for securing land cover classification accuracy, whereas the single-period case did not. Through this, it was confirmed that classification accuracy can be improved through multi-seasonal classification.
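The multi-seasonal input can be formed by stacking the quarterly band sets along the channel axis, as in the sketch below; the band count, tile size, and class handling are illustrative assumptions, not the study's exact setup.

```python
# Sketch of building single-period vs. four-season inputs for a U-Net.
import numpy as np

def stack_seasons(seasonal_tiles):
    """Concatenate per-season (bands, H, W) arrays along the channel axis."""
    return np.concatenate(seasonal_tiles, axis=0)

bands, h, w = 4, 256, 256                   # e.g. B2, B3, B4, B8 at 10 m resolution
quarters = [np.random.rand(bands, h, w).astype(np.float32) for _ in range(4)]

single = quarters[0]                        # single-period input: 4 channels
quad = stack_seasons(quarters)              # four-season input: 16 channels
print(single.shape, quad.shape)             # (4, 256, 256) (16, 256, 256)

# A U-Net for the four-season case would simply take in_channels = quad.shape[0]
# and out_channels equal to the number of land-cover sub-classes.
```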

Research on the Value Shift of Ne zha's Character Image in Different Periods : Take Ne Zha : I am the destiny(2019) and Ne Zha Conquers the Dragon King(1979) as Examples (시기별 나타 캐릭터의 가치전향 연구- <나타지마동강세(2019)> <나타요해(1979)>를 중심으로 -)

  • Wang, Xin-Xin
    • Journal of Korea Entertainment Industry Association / v.14 no.7 / pp.239-249 / 2020
  • Ne Zha: I am the destiny (2019) and Ne Zha Conquers the Dragon King (1979) are animated films based on the Ne Zha theme in different eras and contexts, reflecting the value and significance of different artistic images of Ne Zha. Studying the value shift in the character image of Ne Zha reveals the different styles of these artistic images. This thesis attempts to interpret the changing values behind the different artistic expressions of the Ne Zha character in different films. The transformation of Ne Zha's image from good to evil, and the shift from the idea of "submit to the will of heaven" to the idea of "taking one's destiny into one's own hands", reflect the different values and meanings of Ne Zha's image in different eras. Each film is distinctly contemporary, and Ne Zha: I am the destiny is an innovative and in-depth look at the character of Ne Zha in the new era of cinema. There are many forms of artistic expression, and these differences greatly increase the appeal of Ne Zha's artistic image, giving Chinese animation films more space and value for development.

3D Reconstruction of Pipe-type Underground Facility Based on Stereo Images and Reference Data (스테레오 영상과 기준데이터를 활용한 관로형 지하시설물 3차원 형상 복원)

  • Cheon, Jangwoo; Lee, Impyeong
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1515-1526 / 2022
  • Image-based 3D reconstruction restores the shape and color of real-world objects; image sensors mounted on mobile platforms are used for positioning and mapping in indoor and outdoor environments. With the increase in accidents in underground spaces, the location accuracy of underground spatial information has become an issue. Image-based location estimation studies have been conducted because image data acquired inside pipe-type underground facilities can be used both to determine 3D location and to identify internal damage. In this study, we performed 3D reconstruction based on images acquired inside a pipe-type underground facility together with reference data. An unmanned mobile system equipped with a stereo camera was used to acquire image data within a pipe-type underground facility in which reference data were placed at the entrance and exit. Using the acquired images and reference data, the pipe-type underground facility is reconstructed into a geo-referenced 3D shape. The accuracy of the 3D reconstruction result was verified by location and length: the location was determined with an accuracy of 20 to 60 cm, and the length was estimated with an accuracy of about 20 cm. Using the image-based 3D reconstruction method, the position and line-shape of pipe-type underground facilities can be effectively updated.
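The core stereo triangulation can be sketched with OpenCV as follows; the intrinsics, baseline, and matched pixel coordinates are placeholder values rather than the calibration of the system described above.

```python
# Sketch of triangulating one matched point from a calibrated stereo pair.
import numpy as np
import cv2

# Simple stereo rig: identical intrinsics, 10 cm horizontal baseline (assumed).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])   # right camera

# One matched feature point on the pipe wall (pixel coordinates, shape 2xN).
pts_left = np.array([[700.0], [400.0]])
pts_right = np.array([[684.0], [400.0]])

X_h = cv2.triangulatePoints(P1, P2, pts_left, pts_right)   # homogeneous 4x1
X = (X_h[:3] / X_h[3]).ravel()                             # metric 3D point
print("triangulated point (m):", X)
# Geo-referencing would then transform such points using the reference data
# placed at the facility entrance and exit.
```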