• Title/Summary/Keyword: simultaneous observation


Spatial Distribution of Urban Heat and Pollution Islands using Remote Sensing and Private Automated Meteorological Observation System Data -Focused on Busan Metropolitan City, Korea- (위성영상과 민간자동관측시스템 자료를 활용한 도시열섬과 도시오염섬의 공간 분포 특성 - 부산광역시를 대상으로 -)

  • HWANG, Hee-Soo; KANG, Jung Eun
    • Journal of the Korean Association of Geographic Information Studies, v.23 no.3, pp.100-119, 2020
  • In recent years, the thermal environment and particulate matter (PM10) have become serious environmental problems, as increasingly frequent heat waves driven by rising global temperatures interact with weakening atmospheric wind speeds. Urban heat islands and urban pollution islands are areas with higher temperatures and air pollution concentrations than their surroundings. However, few studies have examined the two phenomena together because of a lack of micro-scale spatial data. Today, satellite images and big data collected by private telecommunication companies make detailed spatial distribution analyses possible. This study therefore examined the spatial distribution patterns of urban heat islands and urban pollution islands within Busan Metropolitan City and compared the distributions of the two phenomena. Land surface temperature from Landsat 8 satellite images, together with air temperature and particulate matter concentrations from a private automated meteorological observation system, was gridded in 30 m × 30 m units, and spatial analysis was performed. The analysis showed that the zones where urban heat islands and urban pollution islands co-occur include some vulnerable residential areas and industrial areas. Policy-driven relocation districts such as Seo-dong and Bansong-dong, representative vulnerable residential areas in Busan, fall within the co-occurrence zones. These areas have a high density of buildings and poor ventilation, and most of their residents are vulnerable to heat waves and air pollution; they should therefore be considered first when establishing related policies. In the industrial areas within the co-occurrence zones, concrete or asphalt-concrete impervious surfaces account for the overwhelming majority of land cover, vegetation is sparse, and vehicular traffic is heavy. A hot-spot analysis performed to check the reliability of the results confirmed that more than 99.96% of the identified cells were hot spots at the 99% confidence level.
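The abstract does not include the authors' code, but the core of the workflow (flagging grid cells that exceed the city-wide mean by one standard deviation for both heat and PM10, then confirming them with a Getis-Ord Gi* hot-spot statistic) can be sketched as below. This is a minimal illustration, not the study's implementation: the `lst` and `pm10` arrays are random placeholders standing in for the 30 m × 30 m gridded Landsat 8 LST and AWS-derived PM10 surfaces, and the Gi* weights are a simple 3×3 binary window.

```python
# Minimal sketch (not the authors' code): flag grid cells where an urban heat
# island and an urban pollution island co-occur, then score them with a
# Getis-Ord Gi* statistic using a 3x3 binary weight window.
import numpy as np
from scipy.ndimage import uniform_filter

def gi_star(values: np.ndarray, size: int = 3) -> np.ndarray:
    """Getis-Ord Gi* z-scores with a square moving window of binary weights.
    Border cells are approximated by the filter's reflect boundary handling."""
    n = values.size
    x_bar = values.mean()
    s = values.std()
    w_sum = size * size                               # sum of binary weights per cell
    local_sum = uniform_filter(values, size=size) * w_sum
    num = local_sum - x_bar * w_sum
    den = s * np.sqrt((n * w_sum - w_sum**2) / (n - 1))
    return num / den                                  # |z| > 2.58 ~ 99% confidence

rng = np.random.default_rng(0)
lst = rng.normal(30.0, 2.0, size=(200, 200))    # hypothetical land surface temperature (deg C)
pm10 = rng.normal(45.0, 10.0, size=(200, 200))  # hypothetical PM10 concentration (ug/m^3)

# Island cells: more than one standard deviation above the city-wide mean.
uhi = lst > lst.mean() + lst.std()
upi = pm10 > pm10.mean() + pm10.std()
co_occurrence = uhi & upi

z = gi_star(lst)
hot_spots_99 = (z > 2.576) & co_occurrence
print("co-occurring cells:", int(co_occurrence.sum()),
      "| of which 99%-level hot spots:", int(hot_spots_99.sum()))
```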

Visualization and Localization of Fusion Image Using VRML for Three-dimensional Modeling of Epileptic Seizure Focus (VRML을 이용한 융합 영상에서 간질환자 발작 진원지의 3차원적 가시화와 위치 측정 구현)

  • 이상호; 김동현; 유선국; 정해조; 윤미진; 손혜경; 강원석; 이종두; 김희중
    • Progress in Medical Physics, v.14 no.1, pp.34-42, 2003
  • In medical imaging, three-dimensional (3D) display using the Virtual Reality Modeling Language (VRML), a portable file format, can deliver intuitive information more efficiently on the World Wide Web (WWW). Web-based 3D visualization of functional images combined with anatomical images has not been studied much in a systematic way. The goal of this study was to achieve simultaneous observation of 3D anatomical and functional models together with planar images on the WWW, providing their locational information in 3D space with a measuring implement built in VRML. MRI and ictal-interictal SPECT images were obtained from one epileptic patient. Subtraction ictal SPECT co-registered to MRI (SISCOM) was performed to improve identification of the seizure focus. The SISCOM image volumes were thresholded at one standard deviation (1-SD) and two standard deviations (2-SD). The SISCOM foci and the boundaries of gray matter, white matter, and cerebrospinal fluid (CSF) in the MRI volume were segmented and rendered to VRML polygonal surfaces with the marching cubes algorithm. Line profiles along the x- and y-axes representing real lengths on an image were acquired, and their maximum lengths were both 211.67 mm. The ratio of real size to rendered VRML surface size was approximately 1:605.9. A VRML measuring tool was made and merged with the previous VRML surfaces. User-interface tools with JavaScript routines were embedded to display MRI planar images as cross sections of the 3D surface models and to set the transparencies of the 3D surface models. When the transparencies were properly controlled, a fused display of the brain geometry with the 3D distributions of focally activated regions provided intuitive spatial correlations among the three 3D surface models. The epileptic seizure focus was in the right temporal lobe of the brain. The real position of the seizure focus could be verified with the VRML measuring tool, and the anatomy corresponding to the focus could be confirmed from the MRI planar images crossing the 3D surface models. The VRML application developed in this study has several advantages. First, fused 3D display and control of anatomical and functional images were achieved on the WWW. Second, vector analysis of a 3D surface model was enabled by the VRML measuring tool based on real size. Finally, the anatomy corresponding to the seizure focus was detected intuitively through correlation with the MRI images. Our web-based visualization of 3D fusion images and their localization should aid online research and education in diagnostic radiology, therapeutic radiology, and surgical applications.
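The two key steps described here, thresholding the SISCOM volume at 1-SD/2-SD and converting the result to VRML polygonal surfaces with marching cubes, can be illustrated with the short sketch below. It is not the paper's pipeline: the volume is a synthetic Gaussian blob standing in for a real SISCOM difference volume, and only a bare-bones VRML97 IndexedFaceSet is written out.

```python
# Minimal sketch: threshold a SISCOM-like volume, extract isosurfaces with
# marching cubes, and export one surface as a VRML97 IndexedFaceSet.
import numpy as np
from skimage import measure

# Synthetic volume with one hypothetical focal activation (stand-in for SISCOM).
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
siscom = np.exp(-((x - 0.2)**2 + (y + 0.1)**2 + z**2) / 0.05)

def surface_at_sd(volume: np.ndarray, n_sd: float):
    """Extract the isosurface at mean + n_sd standard deviations."""
    level = volume.mean() + n_sd * volume.std()
    verts, faces, _normals, _vals = measure.marching_cubes(volume, level=level)
    return verts, faces

verts_1sd, faces_1sd = surface_at_sd(siscom, 1.0)
verts_2sd, faces_2sd = surface_at_sd(siscom, 2.0)

def write_vrml(path: str, verts, faces, transparency: float = 0.5) -> None:
    """Write a VRML97 IndexedFaceSet so the surface can be viewed on the web."""
    points = ", ".join(f"{px:.2f} {py:.2f} {pz:.2f}" for px, py, pz in verts)
    indices = ", ".join(f"{a} {b} {c} -1" for a, b, c in faces)
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        f.write("Shape {\n")
        f.write("  appearance Appearance { material Material "
                f"{{ transparency {transparency} }} }}\n")
        f.write("  geometry IndexedFaceSet {\n")
        f.write(f"    coord Coordinate {{ point [ {points} ] }}\n")
        f.write(f"    coordIndex [ {indices} ]\n")
        f.write("  }\n}\n")

# Transparency controls how the focus surface shows through the brain geometry.
write_vrml("siscom_1sd.wrl", verts_1sd, faces_1sd, transparency=0.3)
```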


An Analysis on the Sinking Resistance of Purse Seine - 2. In the Case of the Model Purse Seine with Different Netting Material and Sinkers - (旋網의 沈降 抵抗 解析 - 2. 網地材料와 沈子量 다른 模型網의 경우 -)

  • Kim, Suk-Jong
    • Journal of the Korean Society of Fisheries and Ocean Technology, v.40 no.1, pp.29-36, 2004
  • This study analyzes the sinking resistance of model purse seines with different netting materials and sinker weights. The experiments used nine simplified model seines of knotless netting. The model seines measured 420 cm along the corkline and 85 cm in seine depth, and three groups of models were rigged with sinkers of 25, 45 and 60 g weight in water; these were named the PP-25, PA-25, PES-25, PP-45, PA-45, PES-45, PP-60, PA-60 and PES-60 seines. The densities ($\rho$) of the netting materials were 0.91 g/cm$^3$, 1.14 g/cm$^3$ and 1.38 g/cm$^3$. The experiments were carried out in the observation channel of a flume tank under still-water conditions. The sinking motion was recorded with a TV camera and VTR, and the coordinates were read with a video digitization system. Differential equations were derived from the conservation of momentum of the model purse seines and used to determine the sinking speeds and depths of the leadline and the other portions of the seines. The simultaneous differential equations were solved numerically with a Runge-Kutta-Gill subroutine. The results were as follows: 1. Among the model seines rigged with 60 g sinkers (weight in water), the average sinking speed of the leadline was fastest for the PES seine at 12.2 cm/s, followed by the PA seine at 11.4 cm/s and the PP seine at 10.7 cm/s. 2. The coefficient of resistance for the seine netting was estimated to be $K_D=0.09(\frac{\rho}{\rho_w})^4$. 3. The coefficient of resistance for the netting bundle of the seine was estimated to be $C_R=0.91(\frac{\rho}{\rho_w})$. 4. For all seines, the calculated leadline depths closely agreed with the measured ones; for the 25 g, 45 g and 60 g sinkers the relations were meas. = 1.04 cal., meas. = 0.99 cal. and meas. = 0.98 cal., respectively.
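The Runge-Kutta-Gill integration mentioned above can be illustrated with the short sketch below. It is not the paper's model: the equation is a simplified single-degree-of-freedom sinking balance (weight in water minus quadratic hydrodynamic resistance) and all parameter values are illustrative placeholders, but the stepping formula is the standard fourth-order Gill variant.

```python
# Minimal sketch: Runge-Kutta-Gill integration of a simplified leadline
# sinking model  m*dv/dt = W_w - 0.5*rho_w*C_R*A*v|v|,  dz/dt = v.
import numpy as np

SQ2 = np.sqrt(2.0)

def rkg_step(f, t, y, h):
    """One step of the fourth-order Runge-Kutta-Gill method."""
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + (SQ2 - 1) / 2 * k1 + (2 - SQ2) / 2 * k2)
    k4 = h * f(t + h, y - SQ2 / 2 * k2 + (2 + SQ2) / 2 * k3)
    return y + (k1 + (2 - SQ2) * k2 + (2 + SQ2) * k3 + k4) / 6

# Illustrative parameters (not the experimental values).
rho_w, C_R, A = 1000.0, 0.9, 0.01   # water density (kg/m^3), drag coeff., projected area (m^2)
m, W_w = 0.08, 0.06 * 9.8           # effective mass (kg), sinker weight in water (N)

def f(t, y):
    """State y = [depth z (m), sinking speed v (m/s)]."""
    z, v = y
    drag = 0.5 * rho_w * C_R * A * v * abs(v)
    return np.array([v, (W_w - drag) / m])

t, h, y = 0.0, 0.01, np.array([0.0, 0.0])
for _ in range(1000):               # integrate 10 s of sinking
    y = rkg_step(f, t, y, h)
    t += h
print(f"depth after {t:.1f} s: {y[0]:.2f} m, sinking speed: {y[1]*100:.1f} cm/s")
```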

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin; Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.79-104, 2020
  • In recent years, as deep learning has attracted attention, it is being considered as a way to solve problems in various fields. Deep learning is known to perform particularly well when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of deep learning for text and images, interest in image captioning technology and its applications is increasing rapidly. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. Despite the high entry barrier of image captioning, which requires analysts to handle both image and text data, it has established itself as one of the key fields in AI research owing to its wide applicability, and many studies have been conducted to improve its performance in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image in a more sophisticated way. Despite these efforts, however, it is difficult to find studies that interpret images from the perspective of domain experts rather than that of the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer, and the way the image is interpreted and expressed also differs with the level of expertise. The public tends to recognize an image from a holistic, general perspective, that is, by identifying the image's constituent objects and their relationships. Domain experts, on the contrary, tend to focus on the specific elements needed to interpret the image in light of their expertise. This implies that the meaningful parts of an image differ with the viewer's perspective even for the same image, and image captioning needs to reflect this. Therefore, in this study, we propose a method that generates domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, a simple application of transfer learning to expertise data can introduce another problem: simultaneous learning on captions with different characteristics may cause so-called 'inter-observation interference', which makes it difficult to learn each characteristic point of view purely. When learning on a vast amount of data, most of this interference cancels out and has little impact on the results; in fine-tuning on a small amount of data, however, its impact can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each caption characteristic. To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. In addition, with the advice of an art therapist, about 300 'image / expert caption' pairs were created and used for the expertise-transplantation experiments. The experiments confirmed that captions generated by the proposed methodology reflect the perspective of the transplanted expertise, whereas captions learned only from general data contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. We expect that applying the proposed methodology to expertise transplantation in various fields will stimulate research on overcoming the lack of expertise data and on improving the performance of image captioning.
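The transfer-learning scaffold described in the abstract (pre-train on general data, then fine-tune on a small set of expert captions) can be sketched roughly as follows. This is not the authors' implementation: the encoder, decoder, vocabulary size, and the random tensors standing in for the ~300 expert image/caption pairs are all hypothetical, and ImageNet weights stand in for the paper's MSCOCO pre-training. The paper's Character-Independent Transfer-learning would repeat this fine-tuning step separately for each caption characteristic, with one decoder per characteristic, to avoid inter-observation interference.

```python
# Minimal sketch: freeze a pre-trained image encoder and fine-tune only the
# caption decoder on a small expert-caption dataset (expertise transplantation).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
import torchvision

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM = 5000, 256, 512   # hypothetical sizes

class CaptionDecoder(nn.Module):
    """LSTM decoder that generates a caption conditioned on image features."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.init_h = nn.Linear(512, HIDDEN_DIM)     # 512-d resnet18 features
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, feats, tokens):
        h0 = self.init_h(feats).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        hidden, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(hidden)

# Encoder pre-trained on general data (ImageNet weights as a stand-in).
encoder = torchvision.models.resnet18(
    weights=torchvision.models.ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()            # expose pooled 512-d features
for p in encoder.parameters():        # freeze general knowledge
    p.requires_grad = False
encoder.eval()

decoder = CaptionDecoder()            # in practice: load general pre-trained weights

# ~300 expert image/caption pairs (random tensors stand in for real data).
images = torch.randn(300, 3, 224, 224)
captions = torch.randint(0, VOCAB_SIZE, (300, 20))
loader = DataLoader(TensorDataset(images, captions), batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):                # fine-tune only the decoder
    for imgs, caps in loader:
        with torch.no_grad():
            feats = encoder(imgs)
        logits = decoder(feats, caps[:, :-1])         # teacher forcing
        loss = criterion(logits.reshape(-1, VOCAB_SIZE), caps[:, 1:].reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```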