• Title/Summary/Keyword: Image information


A Study on the Development of Storytelling for Culture and Tourism Market Development - Based on Jecheon Central Market (문화관광형시장 육성을 위한 스토리텔링개발연구 - 제천중앙시장을 중심으로)

  • Park, Jin-Soo
    • Journal of Convergence for Information Technology
    • /
    • v.8 no.6
    • /
    • pp.367-374
    • /
    • 2018
  • The purpose of this research is to promote traditional markets, which are part of the urban regeneration project, and to foster a culture-and-tourism market by applying differentiated characteristics through story development based on market-related resources, in order to secure the identity of the Jecheon Central Market, which has lost its function as a traditional market amid regional aging. To this end, a basic survey and analysis of the Jecheon area and the current situation of the Jecheon Central Market were conducted to diagnose problems, and keywords were analyzed through surveys of local merchants and visitors. By drawing up measures to vitalize the Jecheon Central Market by floor and by space, a design story for the market is developed and applied so that it can restore the image of the local traditional market through regional and cultural elements and become a spatial and cultural center that can serve as a landmark for the region in the future. The storytelling designed for this purpose is linked to the spatial planning of each floor as well as to the C.I. and the exterior of the building of the Jecheon Central Market, so that the identity of the Jecheon Central Market can be reestablished.

Research On Development of Usability Evaluation Contents and Weight of Importance for the Fire Detector Product (화재감지기 제품디자인 사용성 평가항목 개발 및 이해관계자 가중치평가 연구)

  • Jung, Ji-Yoon;Lee, Sang-Ki;Kim, Ji-Hyang;Yun, Su-Ji;Jang, Gi-Yong;Lee, Sung-Pil
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.1
    • /
    • pp.404-412
    • /
    • 2019
  • The purpose of this study is to develop usability evaluation contents based on the needs of the different stakeholders related to the usability of the product, to derive a design direction from the results, and to apply it as an evaluation standard for the product design. We created a stakeholder map for a fire detector product and identified the stakeholders related to usability. Based on three factors of usability evaluation (physical, cognitive, and emotional), we conducted a survey of building users and building managers, who have different requirements. Twelve design directions were derived (ease of installation, durability, maintainability, additional functionality, effectiveness, attractiveness, visibility, consistency of information, environmental harmony, consistency, image suitability, and reliability). A weighted analysis of the three usability evaluation factors showed that both groups ranked the factors in the same order of importance, but their importance scores differed. Based on the survey results, overall product usability was improved, but the effectiveness and environmental harmony aspects still needed improvement.
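
A minimal sketch of the kind of weighted scoring the abstract alludes to: a per-group usability score is obtained by weighting the three evaluation factors by their importance. The group names, weights, and ratings below are placeholders, not the values derived from the paper's survey.

```python
# Illustrative weighted usability score from the three evaluation factors.
# Weights and scores are hypothetical; the paper derives them from its survey.
factors = ["physical", "cognitive", "emotional"]

weights = {            # assumed importance weights per stakeholder group (sum to 1)
    "building_user":    {"physical": 0.30, "cognitive": 0.45, "emotional": 0.25},
    "building_manager": {"physical": 0.45, "cognitive": 0.40, "emotional": 0.15},
}
scores = {"physical": 3.8, "cognitive": 4.1, "emotional": 3.5}  # hypothetical 5-point ratings

for group, w in weights.items():
    weighted = sum(w[f] * scores[f] for f in factors)
    print(f"{group}: weighted usability score = {weighted:.2f}")
```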

Fire Detection using Deep Convolutional Neural Networks for Assisting People with Visual Impairments in an Emergency Situation (시각 장애인을 위한 영상 기반 심층 합성곱 신경망을 이용한 화재 감지기)

  • Kong, Borasy;Won, Insu;Kwon, Jangwoo
    • 재활복지
    • /
    • v.21 no.3
    • /
    • pp.129-146
    • /
    • 2017
  • In an emergency such as a fire in a building, visually impaired and blind people are exposed to a greater level of danger than sighted people because they cannot become aware of the situation quickly. Current fire detection devices such as smoke detectors are slow and unreliable because they usually rely on chemical sensors to detect fire particles. By using a vision sensor instead, fire can be detected much faster, as we show in our experiments. Previous studies have applied various image processing and machine learning techniques to detect fire, but they usually do not work well because these techniques require hand-crafted features that do not generalize to various scenarios. With the help of recent advances in deep learning, this research addresses the problem by using a deep learning-based object detector that detects fire in images from a security camera. Deep learning-based approaches learn features automatically, so they usually generalize well to various scenes. To ensure maximum capability, we applied the latest computer vision technology, the YOLO detector, to this task. Considering the trade-off between recall and complexity, we introduce two convolutional neural networks with slightly different model complexity to detect fire at different recall rates. Both models detect fire at 99% average precision, but one model has 76% recall at 30 FPS while the other has 61% recall at 50 FPS. We also compare the memory consumption of the two models and demonstrate their robustness by testing on various real-world scenarios.
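
As a rough illustration of the pipeline the abstract describes, the sketch below runs a YOLO-style detector (via OpenCV's DNN module) on frames from a camera feed and raises an alert when a fire detection exceeds a confidence threshold. The config/weight file names, the video source, the single "fire" class index, and the threshold are placeholders, not the authors' released models.

```python
# Frame-by-frame fire detection with a YOLO-style detector (sketch).
import cv2

net = cv2.dnn.readNetFromDarknet("yolo-fire.cfg", "yolo-fire.weights")  # hypothetical files
layer_names = net.getUnconnectedOutLayersNames()

def detect_fire(frame, conf_threshold=0.5):
    """Return True if any detection of the assumed 'fire' class exceeds the threshold."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(layer_names):
        for det in output:            # det = [cx, cy, w, h, objectness, class scores...]
            if det[4] * det[5] > conf_threshold:   # class index 0 assumed to be "fire"
                return True
    return False

cap = cv2.VideoCapture("security_camera.mp4")     # placeholder video source
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if detect_fire(frame):
        print("Fire detected - trigger an audible alert for visually impaired users")
cap.release()
```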

Case Study of SM Entertainment on the K-Pop Visual Directing Strategy (K-Pop 비주얼디렉팅 전략에 관한 SM 사례연구)

  • Choi, Lia Sung-Yee;Ko, Jeong-Min
    • Journal of Digital Convergence
    • /
    • v.17 no.2
    • /
    • pp.373-379
    • /
    • 2019
  • This study examines the process and scope of visual directing through the case of SM Entertainment and explores the role of visual directors. The case study shows that SM created a visual directing team composed of an art director and visual designers within its creative headquarters and has actively introduced visual directing for the development of its idols. SM's visual directing process, developed as part of star marketing, consists of analyzing the marketing environment, establishing a marketing strategy for each idol, setting a target image per artist, and finally planning and managing the visual directing project. The visual director at SM is required to have creative talent, logical persuasiveness, information-analysis ability, visual expression ability, and field application ability. SM also applies the visual directing know-how accumulated from its idol singers to its business areas, such as MD product design and production, product composition and designer collaboration, and the SMTOWN COEX Artium. This paper is significant in bringing visual directing into the academic field.

The Method for Colorizing SAR Images of Kompsat-5 Using Cycle GAN with Multi-scale Discriminators (다양한 크기의 식별자를 적용한 Cycle GAN을 이용한 다목적실용위성 5호 SAR 영상 색상 구현 방법)

  • Ku, Wonhoe;Chun, Daewon
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_3
    • /
    • pp.1415-1425
    • /
    • 2018
  • Kompsat-5 is the first Korean Earth observation satellite equipped with a SAR. SAR images are generated by receiving the signals reflected from objects illuminated by microwaves emitted from the SAR antenna. Because the wavelengths of microwaves are longer than the size of particles in the atmosphere, SAR can penetrate clouds and fog, and high-resolution images can be obtained day and night. However, SAR images contain no color information. To overcome this limitation, we colorized SAR images using Cycle GAN, a deep learning model developed for domain translation. Training of Cycle GAN is unstable because it performs unsupervised learning on an unpaired dataset. Therefore, in this paper we propose MS Cycle GAN, which applies a multi-scale discriminator, to address the training instability of Cycle GAN and to improve colorization performance. To compare the colorization performance of MS Cycle GAN and Cycle GAN, the images generated by both models were compared qualitatively and quantitatively. Training Cycle GAN with the multi-scale discriminator significantly reduces the generator and discriminator losses compared to the conventional Cycle GAN, and the images generated by MS Cycle GAN match the characteristics of regions such as leaves, rivers, and land well.
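
A minimal PyTorch sketch of the multi-scale discriminator idea: the same PatchGAN-style discriminator is applied to the image at several resolutions and the adversarial losses are summed. The layer sizes, number of scales, and LSGAN-style loss are illustrative assumptions, not necessarily the configuration used in the paper.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.InstanceNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.InstanceNorm2d(256), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 1, 4, padding=1),  # patch-wise real/fake map
        )

    def forward(self, x):
        return self.net(x)

class MultiScaleDiscriminator(nn.Module):
    def __init__(self, num_scales=3):
        super().__init__()
        self.discriminators = nn.ModuleList([PatchDiscriminator() for _ in range(num_scales)])
        self.downsample = nn.AvgPool2d(3, stride=2, padding=1, count_include_pad=False)

    def forward(self, x):
        outputs = []
        for disc in self.discriminators:
            outputs.append(disc(x))
            x = self.downsample(x)    # next discriminator sees a coarser view
        return outputs

def discriminator_loss(ms_disc, real, fake):
    """LSGAN-style loss summed over scales (one common choice; the paper may differ)."""
    loss = 0.0
    for out_real, out_fake in zip(ms_disc(real), ms_disc(fake.detach())):
        loss = loss + ((out_real - 1) ** 2).mean() + (out_fake ** 2).mean()
    return loss
```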

Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision (사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.4
    • /
    • pp.267-277
    • /
    • 2019
  • Photogrammetry and computer vision are identical in that both determine the three-dimensional coordinates of objects from images taken with a camera, but the two fields are not directly compatible with each other because of differences in camera lens distortion modeling and camera coordinate systems. In general, drone images are processed by bundle block adjustment using computer vision-based software, and the plotting of the images for mapping is then performed by photogrammetry-based software. In this case, we face the problem of converting the camera lens distortion model into the formulation used in photogrammetry. Therefore, this study describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision and proposes a methodology for converting between them. To verify the conversion formulas for the camera lens distortion models, lens distortions were first added to distortion-free virtual coordinates using the computer vision-based lens distortion models. The distortion coefficients were then determined using the photogrammetry-based lens distortion models, the lens distortions were removed from the photo coordinates, and the results were compared with the original distortion-free virtual coordinates. The root mean square distance was within 0.5 pixels. In addition, epipolar images were generated to assess accuracy when the photogrammetric lens distortion coefficients were applied; the root mean square error of the y-parallax was within 0.3 pixels.
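
A small sketch of the verification idea described in the abstract: distort ideal (distortion-free) coordinates with a computer-vision-style radial model, then fit photogrammetry-style radial coefficients by least squares and check how well removing the distortion recovers the original points. The coefficients, the two-term radial-only models, and the normalized coordinates are illustrative assumptions; the paper's full models and pixel-level numbers are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))            # ideal normalized coordinates

# Computer-vision convention: distortion is ADDED to ideal coordinates.
k1_cv, k2_cv = 0.12, -0.03                             # assumed CV radial coefficients
r2 = (xy ** 2).sum(axis=1)
xy_dist = xy * (1 + k1_cv * r2 + k2_cv * r2 ** 2)[:, None]

# Photogrammetry convention: a correction is SUBTRACTED from measured coordinates,
#   x_ideal = x_meas - x_meas * (K1*r^2 + K2*r^4), with r computed at the measured point.
# Fit K1, K2 by linear least squares on the same point set.
r2m = (xy_dist ** 2).sum(axis=1)
Ax = np.column_stack([xy_dist[:, 0] * r2m, xy_dist[:, 0] * r2m ** 2])
Ay = np.column_stack([xy_dist[:, 1] * r2m, xy_dist[:, 1] * r2m ** 2])
A = np.vstack([Ax, Ay])
b = np.concatenate([xy_dist[:, 0] - xy[:, 0], xy_dist[:, 1] - xy[:, 1]])
K, *_ = np.linalg.lstsq(A, b, rcond=None)

# Remove distortion with the fitted photogrammetric coefficients and measure the residual.
xy_corr = xy_dist - xy_dist * (K[0] * r2m + K[1] * r2m ** 2)[:, None]
rms = np.sqrt(((xy_corr - xy) ** 2).sum(axis=1).mean())
print(f"K1={K[0]:.4f}, K2={K[1]:.4f}, RMS residual={rms:.2e} (normalized units)")
```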

Low-complexity Local Illuminance Compensation for Bi-prediction mode (양방향 예측 모드를 위한 저복잡도 LIC 방법 연구)

  • Choi, Han Sol;Byeon, Joo Hyung;Bang, Gun;Sim, Dong Gyu
    • Journal of Broadcast Engineering
    • /
    • v.24 no.3
    • /
    • pp.463-471
    • /
    • 2019
  • This paper proposes a method for reducing the complexity of LIC (Local Illuminance Compensation) for bi-directional inter prediction. LIC performs local illumination compensation using the neighboring reconstructed samples of the current block and of the reference block to improve the accuracy of inter prediction. Since the weight and offset required for local illumination compensation are calculated on both the encoder and decoder sides using reconstructed samples, coding efficiency is improved without signaling any additional information. However, because the weight and offset are derived in both the encoder's prediction step and the decoder's reconstruction step, encoder and decoder complexity increase. This paper proposes two low-complexity LIC methods. The first applies illumination compensation with an offset only in bi-directional prediction, and the second applies LIC after the weighted averaging of the reference blocks obtained by bi-directional prediction. To evaluate the performance of the proposed methods, BD-rate was compared with BMS-2.0.1 using the B, C, and D classes of the MPEG standard test sequences under the RA (Random Access) condition. Experimental results show that the proposed method incurs average BD-rate losses of 0.29%, 0.23%, and 0.04% for Y, U, and V, respectively, compared to BMS-2.0.1, while the encoding/decoding time remains almost the same. Although some BD-rate is lost, the computational complexity of LIC is greatly reduced, as multiplication operations are removed and addition operations are halved in the LIC parameter derivation process.
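
A minimal sketch of LIC parameter derivation from neighboring reconstructed samples, contrasting the usual least-squares weight/offset with an offset-only variant in the spirit of the paper's first proposal. The linear model y = a·x + b follows the standard LIC formulation; the exact integer arithmetic and bit shifts of BMS-2.0.1 are omitted, and the function names are ours.

```python
import numpy as np

def lic_weight_offset(ref_neighbors, cur_neighbors):
    """Full LIC: least-squares weight a and offset b (requires multiplications)."""
    x = ref_neighbors.astype(np.float64)
    y = cur_neighbors.astype(np.float64)
    n = x.size
    denom = n * (x * x).sum() - x.sum() ** 2
    a = (n * (x * y).sum() - x.sum() * y.sum()) / denom if denom else 1.0
    b = (y.sum() - a * x.sum()) / n
    return a, b

def lic_offset_only(ref_neighbors, cur_neighbors):
    """Low-complexity variant: weight fixed to 1, offset = mean difference (additions only)."""
    return 1.0, cur_neighbors.mean() - ref_neighbors.mean()

def compensate(pred_block, a, b):
    """Apply illumination compensation to a reference (predicted) block."""
    return a * pred_block + b
```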

A Study on Pipe Model Registration for Augmented Reality Based O&M Environment Improving (증강현실 기반의 O&M 환경 개선을 위한 배관 모델 정합에 관한 연구)

  • Lee, Won-Hyuk;Lee, Kyung-Ho;Lee, Jae-Joon;Nam, Byeong-Wook
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.32 no.3
    • /
    • pp.191-197
    • /
    • 2019
  • As the shipbuilding and offshore plant industries grow larger and more complex, their maintenance and inspection systems become more important. Recently, maintenance and inspection systems based on augmented reality have attracted much attention for improving workers' understanding of their tasks and their efficiency, but they are often difficult to use because accurate registration between the augmented model and the real-world information is not achieved. To solve this problem, marker-based AR technology, which attaches a specific image to the model, has been used. However, markers are easily damaged owing to the characteristics of the shipbuilding and offshore plant industries, and the camera must be able to see the entire marker clearly, which requires sufficient space between the operator and the marker. To overcome these limitations of existing AR systems, this study adopts markerless AR and proposes a registration methodology that accurately recognizes the actual model of the pipe system, which accounts for the largest share of processes in the shipbuilding and offshore plant industries. With this system, the distortion of the augmented model caused by the worker's posture and the constrained environment is expected to be improved.
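
A minimal sketch of the registration step at the heart of markerless AR: given 3D feature points observed on the real pipe and their corresponding points on the CAD model, estimate the rigid transform (Kabsch/Procrustes) that overlays the augmented model on the camera view. Correspondence search and tracking, which a full pipeline such as the paper's would need, are assumed to be done elsewhere.

```python
import numpy as np

def rigid_registration(model_pts, scene_pts):
    """Return rotation R and translation t such that R @ model + t ≈ scene (Nx3 arrays)."""
    mu_m, mu_s = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (scene_pts - mu_s)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_s - R @ mu_m
    return R, t
```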

Meteorological drought outlook with satellite precipitation data using Bayesian networks and decision-making model (베이지안 네트워크 및 의사결정 모형을 이용한 위성 강수자료 기반 기상학적 가뭄 전망)

  • Shin, Ji Yae;Kim, Ji-Eun;Lee, Joo-Heon;Kim, Tae-Woong
    • Journal of Korea Water Resources Association
    • /
    • v.52 no.4
    • /
    • pp.279-289
    • /
    • 2019
  • Unlike other natural disasters, drought is a recurring, region-wide phenomenon triggered by a prolonged precipitation deficiency. Considering that remote sensing products provide temporally and spatially consistent measurements of precipitation, this study developed a remote sensing data-based drought outlook model. Meteorological drought was defined by the Standardized Precipitation Index (SPI) obtained from PERSIANN-CDR, TRMM 3B42, and GPM IMERG images. Bayesian networks were employed to combine historical drought information with dynamical prediction products for the drought outlook. The outlook was determined through a decision-making model that considers both the current drought condition and the condition forecasted by the Bayesian networks. The drought outlook condition was classified into four states: no drought, drought occurrence, drought persistence, and drought removal. Receiver operating characteristic (ROC) curve analysis was employed to measure the outlook performance relative to the dynamical prediction product, the Multi-Model Ensemble (MME). The ROC analysis indicated that the proposed outlook model performed better than the MME, especially for drought occurrence and persistence in the 2- and 3-month outlooks.
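
A minimal sketch of the four-state decision step described in the abstract: the current drought state (from observed SPI) is combined with the state forecasted by the Bayesian network. The SPI threshold of -1.0 is a common convention and is an assumption here, not necessarily the paper's exact cut-off.

```python
def drought_state(spi, threshold=-1.0):
    """Classify an SPI value as 'drought' or 'normal' (threshold is an assumed convention)."""
    return "drought" if spi <= threshold else "normal"

def outlook(current_spi, forecast_spi):
    """Map (current, forecast) drought states to the four outlook conditions."""
    now, later = drought_state(current_spi), drought_state(forecast_spi)
    if now == "normal" and later == "normal":
        return "no drought"
    if now == "normal" and later == "drought":
        return "drought occurrence"
    if now == "drought" and later == "drought":
        return "drought persistence"
    return "drought removal"

# Example: currently wet, forecast dips below -1.0 -> "drought occurrence"
print(outlook(0.3, -1.2))
```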

Solitary Work Detection of Heavy Equipment Using Computer Vision (컴퓨터비전을 활용한 건설현장 중장비의 단독작업 자동 인식 모델 개발)

  • Jeong, Insoo;Kim, Jinwoo;Chi, Seokho;Roh, Myungil;Biggs, Herbert
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.41 no.4
    • /
    • pp.441-447
    • /
    • 2021
  • Construction sites are complex and dangerous because heavy equipment and workers perform various operations simultaneously within limited working areas. Solitary work by heavy equipment in such complex job sites can cause fatal accidents; equipment operators should therefore interact with spotters and obtain information about the surrounding environment during operations. Recently, many computer vision technologies have been developed to automatically monitor construction equipment and detect its interactions with other resources. However, previous methods did not take into account the interactions between equipment and spotters, which are crucial for identifying solitary work by heavy equipment. To address this drawback, this research develops a computer vision-based solitary work detection model that considers interactive operations between heavy equipment and spotters. To validate the proposed model, the research team performed experiments using image data collected from actual construction sites. The results showed that the model was able to detect workers and equipment with 83.4 % accuracy, classify workers and spotters with 84.2 % accuracy, and analyze the equipment-to-spotter interactions with 95.1 % accuracy. The findings of this study can be used to automate the manual monitoring of heavy equipment operations and reduce the time and cost required for on-site safety management.
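
A minimal sketch of the interaction-analysis step: after a detector returns bounding boxes for equipment and spotters in a frame, equipment with no spotter nearby is flagged as working solitarily. The pixel-distance criterion and threshold are illustrative assumptions; the paper likely uses a more elaborate interaction model.

```python
import math

def center(box):                      # box = (x1, y1, x2, y2)
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def solitary_equipment(equipment_boxes, spotter_boxes, max_dist=300):
    """Return equipment boxes that have no spotter within max_dist pixels (assumed criterion)."""
    solitary = []
    for eq in equipment_boxes:
        near = any(math.dist(center(eq), center(sp)) <= max_dist for sp in spotter_boxes)
        if not near:
            solitary.append(eq)
    return solitary

# Example: one excavator with a spotter nearby, one without.
equipment = [(100, 100, 400, 300), (900, 500, 1200, 700)]
spotters = [(420, 260, 460, 340)]
print(solitary_equipment(equipment, spotters))   # -> the second box only
```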