• Title/Summary/Keyword: Processing Image


Detection of Forest Fire Damage from Sentinel-1 SAR Data through the Synergistic Use of Principal Component Analysis and K-means Clustering (Sentinel-1 SAR 영상을 이용한 주성분분석 및 K-means Clustering 기반 산불 탐지)

  • Lee, Jaese;Kim, Woohyeok;Im, Jungho;Kwon, Chunguen;Kim, Sungyong
    • Korean Journal of Remote Sensing / v.37 no.5_3 / pp.1373-1387 / 2021
  • Forest fires pose a significant threat to the environment and society, affecting the carbon cycle and surface energy balance and resulting in socioeconomic losses. Widely used multi-spectral satellite image-based approaches for burned area detection have the problem that they do not work under cloudy conditions. Therefore, in this study, Sentinel-1 Synthetic Aperture Radar (SAR) data from the European Space Agency, which can be collected in all weather conditions, were used to identify forest fire-damaged areas through a series of processes including Principal Component Analysis (PCA) and K-means clustering. Four forest fire cases were examined: fires in Gangneung·Donghae and Goseong·Sokcho in Gangwon-do, South Korea, and two areas in North Korea, all of which occurred on April 4, 2019. The estimated burned areas were evaluated using fire reference data provided by the National Institute of Forest Science (NIFOS) for the two South Korean cases, and the differenced normalized burn ratio (dNBR) for all four cases. The average accuracy against the NIFOS reference data was 86% for the Gangneung·Donghae and Goseong·Sokcho fires, and evaluation using dNBR showed an average accuracy of 84% across all four cases. It was also confirmed that the stronger the burn severity, the higher the detection accuracy, and vice versa. Given the advantages of SAR remote sensing, the proposed statistical processing and K-means clustering-based approach can be used to quickly identify forest fire-damaged areas across the Korean Peninsula, where cloud cover is frequent and small-scale forest fires occur often.
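The PCA-and-K-means sequence described in the abstract can be illustrated with a minimal scikit-learn sketch on synthetic dual-polarization backscatter. The feature stacking, the choice of k=2, and the rule that the cluster with the larger VH backscatter drop is "burned" are assumptions for illustration, not the authors' exact processing chain:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def detect_burned_area(pre_vv, pre_vh, post_vv, post_vh, n_components=2):
    """Cluster PCA-transformed pre/post-fire SAR backscatter (dB) into
    burned vs. unburned pixels. All inputs are 2-D arrays of equal shape."""
    h, w = pre_vv.shape
    # Stack per-pixel features: pre/post backscatter and their change
    features = np.stack([
        pre_vv, pre_vh, post_vv, post_vh,
        post_vv - pre_vv, post_vh - pre_vh,
    ], axis=-1).reshape(-1, 6)

    # PCA compresses the correlated channels into a few components
    pcs = PCA(n_components=n_components).fit_transform(features)

    # K-means with k=2 separates fire-affected from unaffected pixels
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)

    # Assumption: the cluster with the larger VH backscatter drop is "burned"
    vh_change = (post_vh - pre_vh).reshape(-1)
    burned_cluster = int(vh_change[labels == 1].mean()
                         < vh_change[labels == 0].mean())
    return (labels == burned_cluster).reshape(h, w)
```

On real Sentinel-1 scenes, speckle filtering and terrain correction would precede this step.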

Introduction of GOCI-II Atmospheric Correction Algorithm and Its Initial Validations (GOCI-II 대기보정 알고리즘의 소개 및 초기단계 검증 결과)

  • Ahn, Jae-Hyun;Kim, Kwang-Seok;Lee, Eun-Kyung;Bae, Su-Jung;Lee, Kyeong-Sang;Moon, Jeong-Eon;Han, Tai-Hyun;Park, Young-Je
    • Korean Journal of Remote Sensing / v.37 no.5_2 / pp.1259-1268 / 2021
  • The 2nd Geostationary Ocean Color Imager (GOCI-II) is the successor to the Geostationary Ocean Color Imager (GOCI); it employs one near-ultraviolet band (380 nm), eight visible bands (412, 443, 490, 510, 555, 620, 660, and 680 nm), and three near-infrared bands (709, 745, and 865 nm) to observe the marine environment in Northeast Asia, including the Korean Peninsula. However, the multispectral radiance image observed at satellite altitude includes both the water-leaving radiance and the atmospheric path radiance. Therefore, an atmospheric correction process that estimates the water-leaving radiance without the path radiance is essential for analyzing the ocean environment. This manuscript describes the GOCI-II standard atmospheric correction algorithm and its initial-phase validation. The GOCI-II atmospheric correction method is theoretically based on the previous GOCI atmospheric correction, partially improved for turbid water using GOCI-II's two additional bands, i.e., 620 and 709 nm. The match-up showed acceptable results, with mean absolute percentage errors falling within 5% in the blue bands. Part of the deviation over case-II waters presumably arose from the lack of near-infrared vicarious calibration. We expect the GOCI-II atmospheric correction algorithm to be improved and updated regularly in the GOCI-II data processing system through continuous calibration and validation activities.
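At its core, ocean-color atmospheric correction removes the path reflectance from the top-of-atmosphere signal. A minimal sketch of that accounting (ignoring sun glint, whitecaps, and gas absorption; the function name and inputs are illustrative, not the GOCI-II operational code):

```python
def water_leaving_reflectance(rho_toa, rho_rayleigh, rho_aerosol, t_diffuse):
    """Invert rho_toa = rho_r + rho_a + t * rho_w for the water-leaving term.

    rho_toa      -- top-of-atmosphere reflectance in one band
    rho_rayleigh -- Rayleigh (molecular) path reflectance
    rho_aerosol  -- aerosol path reflectance (estimated from the NIR bands)
    t_diffuse    -- diffuse transmittance along the sun-surface-sensor path
    """
    return (rho_toa - rho_rayleigh - rho_aerosol) / t_diffuse
```

The operational difficulty lies in estimating the aerosol term, which is why turbid case-II waters (where the NIR signal is not negligible) need the 620 and 709 nm refinements mentioned above.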

Host-Based Intrusion Detection Model Using Few-Shot Learning (Few-Shot Learning을 사용한 호스트 기반 침입 탐지 모델)

  • Park, DaeKyeong;Shin, DongIl;Shin, DongKyoo;Kim, Sangsoo
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.271-278 / 2021
  • As cyber attacks become more intelligent, existing intrusion detection systems have difficulty detecting intelligent attacks that deviate from stored patterns. To address this, deep learning-based intrusion detection models that learn the patterns of intelligent attacks from data have emerged. Intrusion detection systems are divided into host-based and network-based systems depending on the installation location. Unlike network-based systems, host-based intrusion detection systems have the disadvantage of having to observe the entire system, inside and out, but the advantage of detecting intrusions that a network-based system cannot. Therefore, this study focuses on a host-based intrusion detection system. To evaluate and improve the performance of the model, we used the host-based Leipzig Intrusion Detection-Data Set (LID-DS) published in 2018. In the performance evaluation, 1D vector data were converted to 3D image data so that the similarity of samples could be assessed and each sample identified as normal or abnormal. Deep learning models also have the drawback of requiring retraining whenever a new cyber attack method appears, which is inefficient because learning from a large amount of data takes a long time. To solve this problem, this paper proposes a Siamese Convolutional Neural Network (Siamese-CNN), a Few-Shot Learning approach that performs well while learning from only a small amount of data. Siamese-CNN determines whether attacks are of the same type from the similarity score between samples of cyber attacks converted into images. Accuracy was calculated using the Few-Shot Learning technique, and a Vanilla Convolutional Neural Network (Vanilla-CNN) and Siamese-CNN were compared. Measuring Accuracy, Precision, Recall, and F1-Score confirmed that the recall of the proposed Siamese-CNN model was about 6% higher than that of the Vanilla-CNN model.
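The core Siamese mechanics described above (shared-weight encoding of two samples, then a similarity decision) can be sketched in a few lines of NumPy. The linear "encoder", the 1/(1+d) score, and the 0.5 threshold are placeholder assumptions for illustration, not the paper's CNN architecture or training procedure:

```python
import numpy as np

def embed(x, w):
    """Toy shared encoder: one linear layer + ReLU (both branches use the same w)."""
    return np.maximum(w @ x, 0.0)

def similarity_score(x1, x2, w):
    """Siamese comparison: encode both inputs with shared weights, then map
    the Euclidean distance between embeddings to a (0, 1] similarity score."""
    d = np.linalg.norm(embed(x1, w) - embed(x2, w))
    return 1.0 / (1.0 + d)

def same_attack_type(x1, x2, w, threshold=0.5):
    """Decide whether two samples belong to the same attack class."""
    return similarity_score(x1, x2, w) >= threshold
```

In a real few-shot setting, a query sample is compared against one labeled support sample per class, and the class with the highest similarity wins; only the encoder weights need training.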

Diagnosis of Nitrogen Content in the Leaves of Apple Tree Using Spectral Imagery (분광 영상을 이용한 사과나무 잎의 질소 영양 상태 진단)

  • Jang, Si Hyeong;Cho, Jung Gun;Han, Jeom Hwa;Jeong, Jae Hoon;Lee, Seul Ki;Lee, Dong Yong;Lee, Kwang Sik
    • Journal of Bio-Environment Control / v.31 no.4 / pp.384-392 / 2022
  • The objective of this study was to estimate nitrogen content and chlorophyll using RGB and hyperspectral sensors to diagnose nitrogen nutrition in apple tree leaves. Spectral data were acquired through image processing after imaging two-year-old 'Hongro/M.9' apple trees with high-resolution RGB and hyperspectral sensors. Chlorophyll and leaf nitrogen content (LNC) were measured immediately after imaging. Growth models were developed by regression analysis (simple, multiple, and partial least squares) relating growth data (chlorophyll, LNC) to spectral data (SPAD meter, color vegetation indices, wavelengths). Chlorophyll and LNC showed statistically significant differences according to nitrogen fertilizer level regardless of date. Leaf color became pale over time as leaf nutrients were transferred to the fruit. The RGB sensor showed a statistically significant difference at the red wavelength regardless of date, and the hyperspectral sensor showed larger spectral differences by nitrogen fertilizer level in non-visible than in visible wavelengths on June 10th and July 14th. For estimating chlorophyll and LNC, partial least squares regression using hyperspectral data performed better than simple and multiple linear regression using RGB data (chlorophyll R2: 81%, LNC: 81%). This is attributed to the hyperspectral sensor's narrow Full Width at Half Maximum (FWHM) and broad wavelength range (400-1,000 nm), which enabled spectral analysis of crop stress caused by nitrogen deficiency. In future studies, diagnostic models of physiology and pests across all growth stages of trees using hyperspectral imagery are expected to contribute to technology for high-quality, stable fruit production.

An Implementation of OTB Extension to Produce TOA and TOC Reflectance of LANDSAT-8 OLI Images and Its Product Verification Using RadCalNet RVUS Data (Landsat-8 OLI 영상정보의 대기 및 지표반사도 산출을 위한 OTB Extension 구현과 RadCalNet RVUS 자료를 이용한 성과검증)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.449-461 / 2021
  • Analysis Ready Data (ARD) for optical satellite images represents a pre-processed product generated by applying the spectral characteristics and viewing parameters of each sensor. Atmospheric correction is one of the fundamental and complicated topics, and it helps to produce Top-of-Atmosphere (TOA) and Top-of-Canopy (TOC) reflectance from multi-spectral image sets. Most remote sensing software provides algorithms or processing schemes dedicated to these corrections for the Landsat-8 OLI sensor. Furthermore, Google Earth Engine (GEE) provides direct access to Landsat reflectance products, USGS-based ARD (USGS-ARD), in a cloud environment. We implemented an atmospheric correction extension for the Orfeo ToolBox (OTB), an open-source remote sensing software package for manipulating and analyzing high-resolution satellite images. This is the first such tool, as OTB has not provided calibration modules for any Landsat sensor. Using this extension, we conducted absolute atmospheric correction on Landsat-8 OLI images of Railroad Valley, United States (RVUS) and validated the reflectance products against RVUS reflectance data sets from the RadCalNet portal. The results showed that the reflectance products from the OTB extension for Landsat differed by less than 5% from the RadCalNet RVUS data. In addition, we performed a comparative analysis with reflectance products obtained from other open-source tools, namely the QGIS semi-automatic classification plugin and SAGA, as well as USGS-ARD products. The reflectance products from the OTB extension showed high consistency with those of USGS-ARD, within an acceptable level of the RadCalNet RVUS measurement range, compared to those of the other two open-source tools. In this study, the atmospheric correction processor in the OTB extension was verified, demonstrating its applicability to other satellite sensors such as the Compact Advanced Satellite (CAS)-500 or new optical satellites.
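The TOA reflectance step such a tool performs follows the standard USGS rescaling for Landsat-8 OLI: digital numbers are scaled with the band's multiplicative and additive factors from the scene's MTL metadata file, then corrected for sun elevation. A minimal sketch of that formula (the metadata values in the docstring are the typical OLI reflectance coefficients, not values read from any specific scene):

```python
import math

def toa_reflectance(dn, reflectance_mult, reflectance_add, sun_elevation_deg):
    """USGS Landsat-8 OLI rescaling: rho' = M_rho * Qcal + A_rho,
    then divide by sin(sun elevation) for the sun-angle-corrected TOA value.

    M_rho and A_rho come from the MTL metadata
    (typically REFLECTANCE_MULT_BAND_x = 2.0e-5, REFLECTANCE_ADD_BAND_x = -0.1).
    """
    rho_prime = reflectance_mult * dn + reflectance_add
    return rho_prime / math.sin(math.radians(sun_elevation_deg))
```

The TOC (surface) reflectance product additionally requires an atmospheric model, which is where the extension's absolute correction comes in.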

Estimation of Chlorophyll Contents in Pear Tree Using Unmanned AerialVehicle-Based-Hyperspectral Imagery (무인기 기반 초분광영상을 이용한 배나무 엽록소 함량 추정)

  • Ye Seong Kang;Ki Su Park;Eun Li Kim;Jong Chan Jeong;Chan Seok Ryu;Jung Gun Cho
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.669-681 / 2023
  • Researchers have tried to apply remote sensing, a non-destructive survey method, in place of destructive surveys, which require relatively large labor input and long times, to estimate chlorophyll content, an important indicator for evaluating fruit tree growth. This study non-destructively evaluated the chlorophyll content of pear tree leaves using unmanned aerial vehicle-based hyperspectral imagery over two years (2021, 2022). The single-band reflectance of the pear tree canopy extracted through image processing was converted to band ratios to minimize unstable radiation effects over time. Estimation (calibration and validation) models were developed using the machine learning algorithms elastic-net, k-nearest neighbors (KNN), and support vector machine, with band ratios as input variables. By comparing the performance of models built on the full set of band ratios, key band ratios were selected that reduce computational cost and improve reproducibility. As a result, comparing all machine learning models with calibration performance of coefficient of determination (R2) ≥ 0.67, root mean squared error (RMSE) ≤ 1.22 ㎍/cm2, and relative error (RE) ≤ 17.9%, and validation performance of R2 ≥ 0.56, RMSE ≤ 1.41 ㎍/cm2, and RE ≤ 20.7% using full band ratios, four key band ratios were selected. There was no significant difference in validation performance between the machine learning models, so the KNN model with the highest calibration performance was used as the standard; its key band ratios were 710/714, 718/722, 754/758, and 758/762 nm. Calibration performance was R2 = 0.80, RMSE = 0.94 ㎍/cm2, and RE = 13.9%, and validation performance was R2 = 0.57, RMSE = 1.40 ㎍/cm2, and RE = 20.5%. Although the validation performance was not sufficient to estimate the chlorophyll content of pear tree leaves, it is meaningful that key band ratios were selected as a standard for future research. In future research, additional datasets should be secured continuously to improve estimation performance, verify the reliability of the selected key band ratios, and upgrade the estimation model to be reproducible in actual orchards.
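The band-ratio-plus-KNN idea above can be sketched on synthetic red-edge reflectance. The toy reflectance model, its coefficients, and the noise level are invented for illustration; only the four ratio pairs (710/714, 718/722, 754/758, 758/762 nm) come from the abstract:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(7)
n = 200
chl = rng.uniform(20, 60, n)  # hypothetical chlorophyll content, ug/cm^2

def reflectance(wl_nm, chl, noise=0.0005):
    """Toy red-edge model: reflectance rises past 700 nm, faster at high chlorophyll."""
    slope = 0.004 + 0.0002 * chl
    return 0.05 + slope * (wl_nm - 700) + noise * rng.standard_normal(len(chl))

r = {wl: reflectance(wl, chl) for wl in (710, 714, 718, 722, 754, 758, 762)}

# The four key band ratios selected in the paper as input features
X = np.column_stack([
    r[710] / r[714],
    r[718] / r[722],
    r[754] / r[758],
    r[758] / r[762],
])

model = KNeighborsRegressor(n_neighbors=5).fit(X, chl)
pred = model.predict(X)
```

Ratioing adjacent bands cancels multiplicative illumination changes between flights, which is why the ratios, rather than raw reflectances, are used as features.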

The Effect of Retinal and Perceived Motion Trajectory of Visual Motion Stimulus on Estimated Speed of Motion (운동자극의 망막상 운동거리와 지각된 운동거리가 운동속도 추정에 미치는 영향)

  • Park Jong-Jin;Hyng-Chul O. Li;ShinWoo Kim
    • Korean Journal of Cognitive Science / v.34 no.3 / pp.181-196 / 2023
  • Size, velocity, and time equivalence are mechanisms that allow us to perceive objects in three-dimensional space consistently, despite errors in the two-dimensional retinal image. These mechanisms operate on common cues, suggesting that the perception of motion distance, motion speed, and motion time may share common processing. This leads to the hypothesis that, even though the spatial properties of visual stimuli distort temporal perception, perceived motion speed and perceived motion duration will tend to oppose each other, as they do for objects moving in the environment. To test this hypothesis, the present study measured perceived speed using Müller-Lyer illusion stimuli in order to relate the time-perception results for motion stimuli observed in previous studies to the speed perception measured here. Experiment 1 manipulated the perceived motion trajectory while controlling the retinal motion trajectory, and Experiment 2 manipulated the retinal motion trajectory while controlling the perceived motion trajectory. The result is that the speed of the inward stimulus, whose trajectory is perceived as shorter than the actual distance traveled, is estimated to be higher than that of the outward stimulus, whose trajectory is perceived as longer. Taken together with previous time-perception findings, namely that perceived time is expanded for outward stimuli and contracted for inward stimuli, this suggests that when the perceived trajectory of a stimulus manipulated by the Müller-Lyer illusion is controlled, perceived speed decreases with increasing duration and increases with decreasing duration while the perceived distance of the stimulus is constant. This relationship suggests that the relation between time and speed perceived from spatial cues corresponds to the properties of objects moving in the environment, i.e., when distance remains the same, an increase in time decreases speed and a decrease in time increases speed.

A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science / v.57 no.1 / pp.82-108 / 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted in the field, and research on digital archaeological data gathered from 3D scanning and photogrammetry continues. However, due to cost and manpower issues, most buried cultural heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics utilizing open-source software and photogrammetry, believed to be the most efficient of the 3D scanning methods. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and creating a digital drawing. To enhance accessibility, this method uses only open-source software throughout the entire process. The results confirm that, in quantitative evaluation, the deviation between measurements of the actual artifact and the 3D model was minimal, and the quantitative quality analyses from the open-source and commercial software showed high similarity. However, data processing was overwhelmingly faster in the commercial software, presumably a result of higher computational speed from improved algorithms. In qualitative evaluation, some differences in mesh and texture quality occurred: 3D models generated by open-source software showed noise on the mesh surface, a harsh mesh surface, and difficulty in confirming the production marks and patterns of the relics. Nevertheless, some open-source software produced quality comparable to commercial software in both quantitative and qualitative evaluations. Open-source software for editing 3D models could not only post-process, match, and merge 3D models but also adjust scale, produce joining surfaces, and render the images needed for measured drawings of relics. The final drawing was traced in a CAD program that is also open-source. In archaeological research, photogrammetry is applicable to various processes, including excavation, report writing, and research on numerical data from 3D models. With breakthrough developments in computer vision, the available open-source software has diversified and its performance has significantly improved. With such accessible digital technology, 3D model data acquired in archaeology will serve as basic data for the preservation and active study of cultural heritage.

The Impacts of Need for Cognitive Closure, Psychological Wellbeing, and Social Factors on Impulse Purchasing (인지폐합수요(认知闭合需要), 심리건강화사회인소대충동구매적영향(心理健康和社会因素对冲动购买的影响))

  • Lee, Myong-Han;Schellhase, Ralf;Koo, Dong-Mo;Lee, Mi-Jeong
    • Journal of Global Scholars of Marketing Science / v.19 no.4 / pp.44-56 / 2009
  • Impulse purchasing is defined as an immediate purchase with no pre-shopping intentions. Previous studies of impulse buying have focused primarily on factors linked to marketing mix variables, situational factors, and consumer demographics and traits. In previous studies, marketing mix variables such as product category, product type, and atmospheric factors including advertising, coupons, sales events, promotional stimuli at the point of sale, and media format have been used to evaluate product information. Some authors have also focused on situational factors surrounding the consumer. Factors such as the availability of credit card usage, time available, transportability of the products, and the presence and number of shopping companions were found to have a positive impact on impulse buying and/or impulse tendency. Research has also been conducted to evaluate the effects of individual characteristics such as the age, gender, and educational level of the consumer, as well as perceived crowding, stimulation, and the need for touch, on impulse purchasing. In summary, previous studies have found that all products can be purchased impulsively (Vohs and Faber, 2007), that situational factors affect and/or at least facilitate impulse purchasing behavior, and that various individual traits are closely linked to impulse buying. The recent introduction of new distribution channels such as home shopping channels, discount stores, and Internet stores that are open 24 hours a day increases the probability of impulse purchasing. However, previous literature has focused predominantly on situational and marketing variables and thus studies that consider critical consumer characteristics are still lacking. To fill this gap in the literature, the present study builds on this third tradition of research and focuses on individual trait variables, which have rarely been studied. 
More specifically, the current study investigates whether impulse buying tendency has a positive impact on impulse buying behavior, and evaluates how consumer characteristics such as the need for cognitive closure (NFCC), psychological wellbeing, and susceptibility to interpersonal influences affect the tendency of consumers towards impulse buying. The survey results reveal that while consumer affective impulsivity has a strong positive impact on impulse buying behavior, cognitive impulsivity has no impact on impulse buying behavior. Furthermore, affective impulse buying tendency is driven by sub-components of NFCC such as decisiveness and discomfort with ambiguity, psychological wellbeing constructs such as environmental control and purpose in life, and by normative and informational influences. In addition, cognitive impulse tendency is driven by sub-components of NFCC such as decisiveness, discomfort with ambiguity, and close-mindedness, and the psychological wellbeing constructs of environmental control, as well as normative and informational influences. The present study has significant theoretical implications. First, affective impulsivity has a strong impact on impulse purchase behavior. Previous studies based on affectivity and flow theories proposed that low to moderate levels of impulsivity are driven by reduced self-control or a failure of self-regulatory mechanisms. The present study confirms the above proposition. Second, the present study also contributes to the literature by confirming that impulse buying tendency can be viewed as a two-dimensional concept with both affective and cognitive dimensions, and illustrates that impulse purchase behavior is explained mainly by affective impulsivity, not by cognitive impulsivity. Third, the current study accommodates new constructs such as psychological wellbeing and NFCC as potential influencing factors in the research model, thereby contributing to the existing literature. 
Fourth, by incorporating multi-dimensional concepts such as psychological wellbeing and NFCC, more diverse aspects of consumer information processing can be evaluated. Fifth, the current study extends the existing literature by confirming the two competing routes of normative and informational influence. Normative influence occurs when individuals conform to the expectations of others or seek to enhance their self-image, whereas informational influence occurs when individuals search for information from knowledgeable others or make inferences based on observations of others' behavior. The present study shows that these two competing routes of social influence can be attributed to different sources of influence power. The current study also has many practical implications. First, it suggests that people with affective impulsivity may be primary targets to whom companies should pay closer attention; cultivating a more amenable and mood-elevating shopping environment will appeal to this segment. Second, the present results demonstrate that NFCC is closely related to the cognitive dimension of impulsivity. These consumers are driven by thoughts rather than by feelings or excitement, so rational advertising at the point of purchase will attract them. Third, people susceptible to normative influences are another potential target market; retailers and manufacturers could appeal to this segment by advertising their products and/or services as means of identifying with, or conforming to the expectations of, an aspiration group. However, retailers should avoid targeting people susceptible to informational influences as a segment market: these consumers engage in extensive information search relevant to their purchase, so more elaborate, long-term rational advertising messages, which can be internalized into their thought processes, will appeal to them.
The current findings should be interpreted with caution for several reasons. The study used a small convenience sample, and only investigated behavior in two dimensions. Accordingly, future studies should incorporate a sample with more diverse characteristics and measure different aspects of behavior. Future studies should also investigate personality traits closely related to affectivity theories. Trait variables such as sensory curiosity, interpersonal curiosity, and atmospheric responsiveness are interesting areas for future investigation.


Performance Evaluation of Siemens CTI ECAT EXACT 47 Scanner Using NEMA NU2-2001 (NEMA NU2-2001을 이용한 Siemens CTI ECAT EXACT 47 스캐너의 표준 성능 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.38 no.3 / pp.259-267 / 2004
  • Purpose: NEMA NU2-2001 was proposed as a new standard for the performance evaluation of whole-body PET scanners. In this study, the system performance of the Siemens CTI ECAT EXACT 47 PET scanner, including spatial resolution, sensitivity, scatter fraction, and count rate performance in 2D and 3D mode, was evaluated using this new standard method. Methods: The ECAT EXACT 47 is a BGO crystal-based PET scanner covering an axial field of view (FOV) of 16.2 cm; retractable septa allow both 2D and 3D data acquisition. All PET data were acquired according to the NEMA NU2-2001 protocols (coincidence window: 12 ns, energy window: 250~650 keV). For the spatial resolution measurement, an F-18 point source was placed at the center of the axial FOV and at one fourth of the axial FOV from the center, at (x=0, y=1), (x=0, y=10), and (x=10, y=0) cm, where x and y are the transaxial horizontal and vertical directions and z is the scanner's axial direction. Images were reconstructed using FBP with a ramp filter without any post-processing. To measure system sensitivity, the NEMA sensitivity phantom filled with F-18 solution and surrounded by 1~5 aluminum sleeves was scanned at the center of the transaxial FOV and at a 10 cm offset from the center. Attenuation-free sensitivity values were estimated by extrapolating the data to zero wall thickness. The NEMA scatter phantom, 70 cm in length, was filled with F-18 or C-11 solution (2D: 2,900 MBq; 3D: 407 MBq), and coincidence count rates were measured over 7 half-lives to obtain the noise equivalent count rate (NECR) and scatter fraction. We confirmed that the dead time loss of the last frame was below 1%. The scatter fraction was estimated by averaging the true-to-background (scatter+random) ratios of the last 3 frames, in which the random rates are negligibly small. Results: Axial and transverse resolutions at a 1 cm offset from the center were 0.62 and 0.66 cm (FBP in 2D and 3D) and 0.67 and 0.69 cm (FBP in 2D and 3D), respectively. Axial, transverse radial, and transverse tangential resolutions at a 10 cm offset from the center were 0.72 and 0.68 cm (FBP in 2D and 3D), 0.63 and 0.66 cm (FBP in 2D and 3D), and 0.72 and 0.66 cm (FBP in 2D and 3D), respectively. Sensitivity values were 708.6 (2D) and 2931.3 (3D) counts/sec/MBq at the center, and 728.7 (2D) and 3398.2 (3D) counts/sec/MBq at a 10 cm offset from the center. Scatter fractions were 0.19 (2D) and 0.49 (3D). Peak true count rate and NECR were 64.0 kcps at 40.1 kBq/mL and 49.6 kcps at 40.1 kBq/mL in 2D, and 53.7 kcps at 4.76 kBq/mL and 26.4 kcps at 4.47 kBq/mL in 3D. Conclusion: The performance information for the CTI ECAT EXACT 47 PET scanner reported in this study will be useful for quantitative data analysis and for determining optimal image acquisition protocols on this widely used scanner for clinical and research purposes.
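The count-rate figures of merit reported above follow directly from the measured true (T), scatter (S), and random (R) coincidence rates. A minimal sketch of both definitions (note that some analyses use 2R in the NECR denominator when randoms are estimated from a delayed coincidence window; the single-R form is shown here):

```python
def scatter_fraction(scatter_rate, true_rate):
    """SF = S / (S + T), evaluated where randoms are negligible."""
    return scatter_rate / (scatter_rate + true_rate)

def necr(true_rate, scatter_rate, random_rate):
    """Noise equivalent count rate: NECR = T^2 / (T + S + R).

    This is the true rate that would give the same statistical quality
    as the measured rates once scatter and random noise are accounted for.
    """
    return true_rate ** 2 / (true_rate + scatter_rate + random_rate)
```

Because scatter and randoms grow with activity, NECR peaks at an intermediate activity concentration, which is why the abstract reports each peak together with its kBq/mL value.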