• Title/Abstract/Keyword: Number of Images

Search results: 2,198

VALIDATION OF SEA ICE MOTION DERIVED FROM AMSR-E AND SSM/I DATA USING MODIS DATA

  • Yaguchi, Ryota; Cho, Ko-Hei
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2008 International Symposium on Remote Sensing / pp.301-304 / 2008
  • Since longer-wavelength microwave radiation can penetrate clouds, satellite passive microwave sensors can observe the sea ice of the entire polar region on a daily basis. Thus, it is becoming popular to derive sea ice motion vectors from a pair of satellite passive microwave images observed at an interval of one or a few days. Usually, the accuracy of the derived vectors is validated by comparison with the position data of drifting buoys. However, the number of buoys available for validation is always quite limited compared with the large number of vectors derived from satellite images. In this study, sea ice motion vectors automatically derived from pairs of AMSR-E 89 GHz images (IFOV = 3.5 × 5.9 km) by image-to-image cross correlation were validated against sea ice motion vectors manually derived from pairs of cloudless MODIS images (IFOV = 250 × 250 m). Since AMSR-E and MODIS are both on NASA's Aqua satellite, the observation times of the two sensors are identical. The relative errors of the AMSR-E vectors against the MODIS vectors were calculated, and the validation was conducted for five scenes. If vectors with a relative error of less than 30% are accepted as correct, 75% to 92% of the AMSR-E vectors derived from a scene were correct. In contrast, the percentage of correct sea ice vectors derived from a pair of SSM/I 85 GHz images (IFOV = 15 × 13 km) observed nearly simultaneously with one of the AMSR-E images was 46%. The difference in accuracy between AMSR-E and SSM/I reflects the difference in IFOV. The accuracies of the H and V polarizations differed from scene to scene, which may reflect differences in sea ice distribution and snow cover between scenes.
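
The cross-correlation matching described above can be illustrated, independently of the authors' actual processing chain, with a minimal block-matching sketch: a template from the first image is compared against shifted candidates in the second image by normalized cross-correlation, and the shift with the highest score becomes the motion vector. The array sizes, window sizes, and toy data below are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_block(img0, img1, row, col, block=11, search=5):
    """Find the displacement (drow, dcol) of the block centered at (row, col)
    in img0 that best matches img1 within +/- search pixels."""
    h = block // 2
    tpl = img0[row - h:row + h + 1, col - h:col + h + 1]
    best, best_dr, best_dc = -1.0, 0, 0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = img1[row + dr - h:row + dr + h + 1,
                        col + dc - h:col + dc + h + 1]
            score = ncc(tpl, cand)
            if score > best:
                best, best_dr, best_dc = score, dr, dc
    return best_dr, best_dc, best

# toy usage: a random "ice field" shifted by (2, -1) pixels between days
rng = np.random.default_rng(0)
day1 = rng.random((64, 64))
day2 = np.roll(day1, shift=(2, -1), axis=(0, 1))
print(match_block(day1, day2, 32, 32))   # expected displacement (2, -1)
```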


소셜큐레이션과 광고 - 버티컬 SNS에서 표현된 패션브랜드 이미지의 메시지 전략 - (Social curation as an advertising tool - Message strategy of fashion brand images on vertical SNS -)

  • 신인준; 이규혜
    • 복식문화연구 / Vol. 23, No. 3 / pp.498-511 / 2015
  • This paper examines advertising images of fashion brands on vertical social network sites (SNSs) from the viewpoint of message strategies. Vertical SNSs are a type of social curation system applied to social networking, in which information is selected, organized, and maintained. Fashion brands communicate with consumers by presenting images on vertical SNSs, anticipating improvements in brand image, popularity, and loyalty. Those images portray content for particular brands and seasonal concepts, thereby creating paths to product sales information. Marketing via SNSs corresponds to relationship marketing, which refers to long-term interrelationship and value augmentation between company and consumer, and to viral advertising, which relies on word-of-mouth distribution via social network platforms. Taylor's six-segment message strategy wheel, often used for analyzing viral ads, was applied in a content analysis of the images. A total of 2,656 images of fashion brands advertised on Instagram were selected and analyzed. Results indicated that brand values were somewhat related to the number of followers, and follower rankings and comment rankings were also correlated. In general, fashion brands projected sensory messages most often; acute need and rational messages were less common. Sports brands and luxury brands presented sensory messages, whereas fast fashion brands projected routine images most often. Fashion brands promoted on vertical SNSs should therefore portray advertising images that combine message strategies.

Denoise of Astronomical Images with Deep Learning

  • Park, Youngjun; Choi, Yun-Young; Moon, Yong-Jae; Park, Eunsu; Lim, Beomdu; Kim, Taeyoung
    • 천문학회보 / Vol. 44, No. 1 / pp.54.2-54.2 / 2019
  • Removing the noise that inevitably occurs when taking image data has been a long-standing concern. Image stacking, in which the pixel values of multiple exposures of the same area are averaged or summed, is regarded as the standard way to raise the signal-to-noise ratio. Its performance and reliability are unquestioned, but its weaknesses are also evident: objects with fast proper motion can vanish, and, above all, it requires a long total observation time. If a single-shot image can be processed to achieve similar performance, those weaknesses can be overcome. Recent developments in deep learning have enabled things that were not possible with former algorithm-based programming, one of which is generating data with more information from data with less information. As part of that effort, we reproduced stacked images from single-shot images using a conditional generative adversarial network (cGAN). The r-band camcol2 south data of SDSS Stripe 82 were used. From all fields, we used images stacked from only 22 individual exposures and, paired with each stacked image, the single-pass data included in that stack. All fields were cut into 128 × 128 pixel patches, giving 17,930 images in total; 14,234 pairs were used to train the cGAN and 3,696 pairs to verify the result. As a result, the RMS error of pixel values between the data generated under the best condition and the target data was 7.67 × 10⁻⁴, compared with 1.24 × 10⁻³ for the original input data. We also applied the network to a few test galaxy images, and the generated images were qualitatively similar to the stacked images compared with other de-noising methods. In addition, in photometry the number count of sources matched between the stacked and cGAN images is larger than that between the single-pass and stacked images, especially for fainter objects, and the magnitude completeness also improved for fainter objects. With this work, it becomes possible to observe objects about one magnitude fainter reliably.
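
For context on the stacking baseline that the cGAN is trained to reproduce, the small sketch below shows pixelwise averaging of multiple noisy exposures and the resulting drop in RMS error (roughly by a factor of √N). It is not the authors' cGAN; the noise level, image size, and number of exposures are illustrative assumptions echoing the 22-frame stacks mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "true" sky patch and 22 noisy single-pass exposures of it.
truth = rng.random((128, 128))
sigma = 0.2
exposures = [truth + rng.normal(0.0, sigma, truth.shape) for _ in range(22)]

single = exposures[0]
stacked = np.mean(exposures, axis=0)   # image stacking = pixelwise averaging

def rms(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print("single-pass RMS error:", rms(single, truth))   # ~ sigma
print("stacked RMS error:   ", rms(stacked, truth))   # ~ sigma / sqrt(22)
```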


Analysis of X-ray image qualities-accuracy of shape and clearness of image-using X-ray digital tomosynthesis

  • Roh, Young Jun; Kang, Sung Taek; Kim, Hyung Cheol; Kim, Sung-Kwon
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1997 한국자동제어학술회의논문집 (한국전력공사 서울연수원, 17-18 Oct. 1997) / pp.572-576 / 1997
  • X-ray laminography and digital tomosynthesis (DT), which can form cross-sectional images of 3-D objects, promise to be good solutions for inspecting interior defects in industrial products. The major factors of digital tomosynthesis that influence the quality of X-ray cross-sectional images are also discussed. The quality of images acquired from the DT system varies with the image synthesizing method, the number of images used in synthesis, and the X-ray projection angles. In this paper, a new image synthesizing method named the 'log-root method' is proposed to obtain clear and accurate cross-sectional images; it can reduce both the artifacts and the blurring generated by materials outside the focal plane. To evaluate the quality of cross-sectional images, two evaluation criteria are defined: (1) shape accuracy and (2) clearness of the cross-sectional image. Based on these criteria, a series of simulations was performed, and the results show the superiority of the new synthesizing method over existing ones such as the averaging and minimum methods.
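
As a rough illustration of the conventional combining rules the abstract compares against, the sketch below merges projections that have already been shifted to register a chosen focal plane, using either pixelwise averaging or the pixelwise minimum. The proposed 'log-root method' is not specified in the abstract and is therefore not reproduced; all names and data here are illustrative.

```python
import numpy as np

def synthesize(shifted_projections, method="average"):
    """Combine projections that have already been shifted so the chosen focal
    plane is in registration. 'average' and 'minimum' are the conventional
    combining rules named in the abstract; the proposed log-root rule is not
    specified there and is not reproduced."""
    stack = np.stack(shifted_projections, axis=0)
    if method == "average":
        return stack.mean(axis=0)
    if method == "minimum":
        return stack.min(axis=0)
    raise ValueError(f"unknown method: {method}")

# minimal usage with three dummy pre-shifted projections
rng = np.random.default_rng(2)
projections = [rng.random((64, 64)) for _ in range(3)]
plane_avg = synthesize(projections, "average")
plane_min = synthesize(projections, "minimum")
```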


Cone-beam CT와 multi-detector CT영상에서 측정된 CT number에 대한 비교연구 (Comparison of CT numbers between cone-beam CT and multi-detector CT)

  • 김동수; 한원정; 김은경
    • Imaging Science in Dentistry / Vol. 40, No. 2 / pp.63-68 / 2010
  • Purpose: To compare the CT numbers on images from three cone-beam CT (CBCT) scanners with those on a multi-detector CT (MDCT) image using a CT phantom, and to derive linear regression equations relating CT number to material density for each scanner. Materials and Methods: A mini CT phantom comprising five 1-inch-thick cylindrical models, 1.125 inches in diameter, of materials with different densities (polyethylene, polystyrene, plastic water, nylon, and acrylic) was used. It was scanned on three CBCTs (i-CAT, Alphard VEGA, Implagraphy SC) and one MDCT (Somatom Emotion). The images were saved in DICOM format and CT numbers were measured using OnDemand 3D. The CT numbers obtained from the CBCT and MDCT images were compared, and linear regression analysis was performed with the density ρ (g/cm³) as the dependent variable and the CT number as the independent variable. Results: CT numbers on the i-CAT and Implagraphy CBCT images were smaller than those on the Somatom Emotion MDCT image (p<0.05). The linear relationships over the range of materials used in this study were ρ = 0.001H + 1.07 with R² = 0.999 for Somatom Emotion, ρ = 0.002H + 1.09 with R² = 0.991 for Alphard VEGA, ρ = 0.001H + 1.43 with R² = 0.980 for i-CAT, and ρ = 0.001H + 1.30 with R² = 0.975 for Implagraphy. Conclusion: CT numbers on the i-CAT and Implagraphy CBCT images were not the same as those on the Somatom Emotion MDCT image. Linear regression equations for determining density from CT numbers, with very high correlation coefficients, were obtained for the three CBCT scanners and the MDCT scanner.
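
The reported equations are ordinary least-squares lines of density against CT number. The sketch below applies the Somatom Emotion relation (ρ = 0.001H + 1.07) and shows how such a line can be fitted with NumPy; the phantom measurement values in the example are placeholders, not the paper's data.

```python
import numpy as np

def density_from_ct(hounsfield, slope=0.001, intercept=1.07):
    """Linear CT-number-to-density model; defaults use the Somatom Emotion
    relation reported in the abstract (rho = 0.001*H + 1.07, in g/cm^3)."""
    return slope * hounsfield + intercept

# Fitting such a line from phantom measurements. The (CT number, density)
# pairs below are placeholders for the five phantom materials, NOT the
# paper's measured values.
ct_numbers = np.array([-85.0, -35.0, 5.0, 95.0, 125.0])
densities = np.array([0.92, 1.05, 1.00, 1.14, 1.18])
slope, intercept = np.polyfit(ct_numbers, densities, 1)
print(f"fitted: rho = {slope:.4f}*H + {intercept:.2f}")
print("density at H = 40:", density_from_ct(40.0))
```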

Affine-Invariant Image normalization for Log-Polar Images using Momentums

  • Son, Young-Ho; You, Bum-Jae; Oh, Sang-Rok; Park, Gwi-Tae
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2003 ICCAS / pp.1140-1145 / 2003
  • Image normalization is one of the important areas in pattern recognition. Log-polar images are also useful in that their data size is reduced dramatically compared with conventional images, making faster pattern recognition algorithms possible, and the log-polar image closely resembles the structure of the human eye. However, while a number of studies on visual tracking with log-polar images have been carried out, there is almost no research on pattern recognition using them. We propose an image normalization technique for log-polar images using moments, applicable to affine-invariant pattern recognition. We handle the basic distortions of a log-polar image, including translation, rotation, scaling, and skew. The algorithm was demonstrated successfully in a PC-based real-time vision system.
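
A common way to realize moment-based normalization, sketched below for an ordinary Cartesian image rather than the paper's log-polar representation, is to take the intensity centroid (removing translation) and whiten the second-order central moments (removing scale and skew up to a residual rotation). The function and variable names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def moment_normalize(img):
    """Translation/scale/skew normalization parameters from first- and
    second-order image moments (Cartesian sketch; the paper applies the idea
    to log-polar images, which is not reproduced here)."""
    ys, xs = np.indices(img.shape)
    m00 = img.sum()
    cx, cy = (img * xs).sum() / m00, (img * ys).sum() / m00   # centroid
    mu20 = (img * (xs - cx) ** 2).sum() / m00                 # central second-order
    mu02 = (img * (ys - cy) ** 2).sum() / m00                 # moments
    mu11 = (img * (xs - cx) * (ys - cy)).sum() / m00
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    # Whitening transform: maps the intensity distribution to unit covariance,
    # removing scale and skew up to a residual rotation.
    vals, vecs = np.linalg.eigh(cov)
    whiten = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return (cx, cy), whiten

# usage on a toy rectangular blob
img = np.zeros((64, 64))
img[20:40, 10:50] = 1.0
(cx, cy), whiten = moment_normalize(img)
```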


일반 방사선검사의 소요 시간 실태조사 (Investigation of the Time Required for General Radiography)

  • 임우택; 주영철; 김연민
    • 대한방사선기술학회지:방사선기술과학 / Vol. 45, No. 3 / pp.255-262 / 2022
  • In this study, the examination time for each procedure was analyzed to determine the appropriate workload of radiologic technologists, based on actual examination times in the current clinical setting and compared with the examination times assumed for radiology procedures by the Health Insurance Review and Assessment Service. By introducing this result into the calculation of relative value units, the study aimed to provide accurate and objective evidence for the field of radiology. From May 2020 to December 2021, examination times recorded in the electronic medical record and picture archiving and communication system of five tertiary general hospitals and one general hospital were investigated retrospectively. Sixteen examination parts were included: head, sinuses, chest, ribs, abdomen, pelvis, cervical spine, thoracic spine, lumbar spine, shoulder, elbow, wrist, hip, femur, knee, and ankle. The minimum number of images that could be obtained per radiation generator in one hour was 3.6 and the maximum was 6.4. Based on the 50% median of the procedure times, the minimum number of obtainable images was 16.7 and the maximum 35.3, with a minimum examination time of 1.7 minutes and a maximum of 3.6 minutes. In conclusion, the current schedule is judged to leave insufficient time during examinations for explaining basic infection-control instructions such as hand hygiene. By complying with work guidelines and setting an appropriate workload on their own, radiologic technologists can contribute to providing higher-quality radiographic examination services to the public.
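
The per-hour image counts based on the median times follow directly from the reported examination durations, assuming each examination occupies the generator for its full duration; a quick arithmetic check:

```python
# Images obtainable per generator-hour implied by the reported median
# examination times (assumption: one examination occupies the generator
# for its full examination time).
for minutes_per_exam in (3.6, 1.7):
    print(f"{minutes_per_exam} min/exam -> {60 / minutes_per_exam:.1f} images/hour")
# 3.6 min/exam -> 16.7 images/hour; 1.7 min/exam -> 35.3 images/hour
```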

조명조건이 다른 다수영상의 융합을 통한 영상의 분할기법 (Image segmentation by fusing multiple images obtained under different illumination conditions)

  • 전윤산; 한헌수
    • 제어로봇시스템학회논문지 / Vol. 1, No. 2 / pp.105-111 / 1995
  • This paper proposes a segmentation algorithm using the gray-level discontinuity and surface reflectance ratio of input images obtained under different illumination conditions. Each image is divided into a number of subregions based on thresholds determined from the histogram of a fusion image obtained by ANDing the multiple input images. The subregions are projected onto an eigenspace whose bases are the major eigenvectors of the image matrix, and the points in the eigenspace are classified into two clusters. The images associated with the larger cluster are fused by a revised ANDing to form a combined edge image, and missing edges are detected using the surface reflectance ratio and chain codes. The proposed algorithm obtains more accurate edge information and allows the environment to be recognized more efficiently under various illumination conditions.
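
The AND-style fusion and histogram thresholding can be sketched as follows; a pixelwise minimum stands in for ANDing of grayscale images, and the crude valley-based threshold, toy images, and function names are illustrative assumptions (the eigenspace clustering and reflectance-ratio steps are not reproduced).

```python
import numpy as np

def fuse_and(images):
    """AND-like fusion of grayscale images: keep the darkest response at each
    pixel (a simple stand-in for the paper's fusion step)."""
    return np.min(np.stack(images, axis=0), axis=0)

def threshold_from_histogram(img, bins=64):
    """Pick a crude global threshold at the deepest valley of the histogram."""
    hist, edges = np.histogram(img, bins=bins)
    valley = np.argmin(hist[1:-1]) + 1          # ignore the end bins
    return 0.5 * (edges[valley] + edges[valley + 1])

# usage with toy images taken under three "illumination conditions"
rng = np.random.default_rng(3)
base = np.where(rng.random((64, 64)) > 0.5, 0.8, 0.2)
views = [np.clip(base + rng.normal(0, 0.05, base.shape) + s, 0, 1)
         for s in (-0.1, 0.0, 0.1)]
fused = fuse_and(views)
mask = fused > threshold_from_histogram(fused)
```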


3차원 물체의 반복된 다중 직교 투영 영상을 이용한 푸리에 홀로그램의 재생 (Reconstruction of Fourier hologram for 3D objects using repeated multiple orthographic view images)

  • 김민수; 김남; 박재형; 길상근
    • 한국광학회:학술대회논문집 / 한국광학회 2009 동계학술발표회 논문집 / pp.167-168 / 2009
  • We propose a new method for computing the Fourier hologram of 3D objects captured by a lens array. The Fourier hologram of two objects positioned at different distances can be calculated from multiple orthographic view images. The size of the Fourier hologram is proportional to the number of orthographic view images, so repeating the orthographic view images increases the size of the Fourier hologram. The principle is verified by numerically reconstructing a hologram synthesized from orthographic images captured optically.
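
The relationship between the number of (repeated) orthographic view images and the hologram size can be illustrated very roughly as below: the view images are tiled into one mosaic whose 2D FFT plays the role of the hologram. The per-view phase factors of the actual hologram synthesis are omitted, so this only shows how the computed hologram grows with the number of view images; all data are toy values.

```python
import numpy as np

def fourier_hologram_from_views(view_grid, repeats=1):
    """Rough illustration only: concatenate a grid of orthographic view
    images (optionally repeated) into one large image and take its 2D FFT.
    The per-view phase terms used in the actual synthesis are omitted."""
    rows = [np.hstack(row) for row in view_grid]
    mosaic = np.vstack(rows)
    mosaic = np.tile(mosaic, (repeats, repeats))
    return np.fft.fftshift(np.fft.fft2(mosaic))

# toy usage: a 4x4 grid of 16x16 view images, repeated twice in each direction
rng = np.random.default_rng(4)
views = [[rng.random((16, 16)) for _ in range(4)] for _ in range(4)]
print(fourier_hologram_from_views(views, repeats=2).shape)   # (128, 128)
```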


Multi-camera based Images through Feature Points Algorithm for HDR Panorama

  • Yeong, Jung-Ho
    • International journal of advanced smart convergence / Vol. 4, No. 2 / pp.6-13 / 2015
  • With the spread of various kinds of cameras, such as digital cameras and DSLRs, and growing interest in high-definition, high-resolution images, methods that synthesize multiple images are being actively studied. High dynamic range (HDR) images store light exposure over a much wider numeric range than normal digital images, so they can record quite accurately the intensity of light in a scene as produced by real-world light sources. This study proposes a feature-point synthesis algorithm that improves the performance of the HDR panorama recognition method at the recognition and alignment stages by classifying the feature points used for image recognition across multiple frames.
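
The abstract does not detail the authors' feature-point algorithm, so the following is only a generic sketch of a common feature-point pipeline for registering overlapping frames (ORB keypoints, brute-force matching, RANSAC homography with OpenCV), the kind of step an HDR panorama system builds on.

```python
import cv2
import numpy as np

def match_and_align(img1, img2, min_matches=10):
    """Generic feature-point registration sketch: detect ORB keypoints in two
    grayscale frames, match descriptors by brute force, and estimate a
    RANSAC homography mapping img1 coordinates into img2."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise RuntimeError("not enough feature matches to align the frames")
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# usage (hypothetical file names):
# H = match_and_align(cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE),
#                     cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE))
```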