• Title/Summary/Keyword: application feasibility

F-18-FDG Whole Body Scan using Gamma Camera equipped with Ultra High Energy Collimator in Cancer Patients: Comparison with FDG Coincidence PET (종양 환자에서 초고에너지(511 keV) 조준기를 이용한 전신 F-18-FDG 평면 영상: Coincidence 감마카메라 단층 촬영 영상과의 비교)

  • Pai, Moon-Sun;Park, Chan-H.;Joh, Chul-Woo;Yoon, Seok-Nam;Yang, Seung-Dae;Lim, Sang-Moo
    • The Korean Journal of Nuclear Medicine / v.33 no.1 / pp.65-75 / 1999
  • Purpose: The aim of this study was to demonstrate the feasibility of 2-[fluorine-18] fluoro-2-deoxy-D-glucose (F-18-FDG) whole body scanning (FDG W/B scan) using a dual-head gamma camera equipped with ultra-high-energy collimators in patients with various cancers, and to compare the results with those of coincidence imaging. Materials and Methods: Phantom studies of planar imaging with ultra-high-energy collimators and of coincidence tomography (FDG CoDe PET) were performed. Fourteen patients with known or suspected malignancy were examined. F-18-FDG whole body scanning was performed using a dual-head gamma camera with ultra-high-energy (511 keV) collimators, immediately followed by regional FDG CoDe PET. Radiological and clinical follow-up and histologic results were correlated with the F-18-FDG findings. Results: The planar phantom study showed a spatial resolution of 13.1 mm at 10 cm with a sensitivity of 2638 cpm/MBq/ml. In coincidence PET, the spatial resolution was 7.49 mm and the sensitivity was 5351 cpm/MBq/ml. Eight out of 14 patients showed hypermetabolic sites in primary or metastatic tumors on FDG CoDe PET. The lesions showing no hypermetabolic FDG uptake with either method were all less than 1 cm, except for one 2-cm metastatic lymph node. The metastatic lymph nodes with positive FDG uptake were larger than 1.5 cm or were conglomerates of lymph nodes smaller than 1 cm. The FDG W/B scan showed similar results but had additional false positive and false negative cases. The FDG W/B scan could not visualize liver metastasis in one case that showed multiple metastatic sites on FDG CoDe PET. Conclusion: The FDG W/B scan with specially designed collimators depicted some cancers and their metastatic sites, although its image quality was limited compared to that of FDG CoDe PET. This study suggests that F-18-FDG positron imaging using a dual-head gamma camera is feasible in oncology and would become more helpful as regional distribution makes FDG more widely available.
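
To put the two phantom results side by side, here is a trivial Python sketch; the numbers are taken from the abstract, while the variable names are ours:

```python
# Compare the phantom figures reported in the abstract; the numbers come
# straight from the Results section, the names are illustrative.
planar = {"resolution_mm": 13.1, "sensitivity": 2638}    # cpm/MBq/ml
code_pet = {"resolution_mm": 7.49, "sensitivity": 5351}  # cpm/MBq/ml

sens_gain = code_pet["sensitivity"] / planar["sensitivity"]
res_gain = planar["resolution_mm"] / code_pet["resolution_mm"]
print(f"CoDe PET: {sens_gain:.1f}x the sensitivity, {res_gain:.1f}x the resolution")
```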

Development of Industrial Embedded System Platform (산업용 임베디드 시스템 플랫폼 개발)

  • Kim, Dae-Nam;Kim, Kyo-Sun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.5 / pp.50-60 / 2010
  • For the last half century, the personal computer and software industries have prospered thanks to the incessant evolution of computer systems. In the 21st century, the embedded system market has grown greatly as the market shifted toward mobile gadgets. While many multimedia gadgets such as mobile phones, navigation systems, and PMPs are pouring into the market, most industrial control systems still rely on 8-bit micro-controllers and simple application software techniques. Unfortunately, the technological barrier, which requires additional investment and higher-quality manpower to overcome, and the business risks, which stem from the uncertainty of market growth and of the competitiveness of the resulting products, have kept companies in the industry from taking advantage of such advanced technologies. However, high-performance, low-power, and low-cost hardware and software platforms will enable their high-technology products to be developed and recognized by potential clients in the future. This paper presents such a platform for industrial embedded systems. The platform was designed around the Telechips TCC8300 multimedia processor, which embeds a variety of parallel hardware for implementing multimedia functions, and open-source Embedded Linux, TinyX, and GTK+ are used to implement the GUI and minimize technology costs. To estimate the expected performance and power consumption, the performance improvement and the power consumption attributable to each of the enabled hardware sub-systems, including the YUV2RGB frame converter, were measured. An analytic model was devised to check the feasibility of a new application and to trade off its performance against its power consumption; the validity of the model was confirmed by implementing a real target system. The cost can be further mitigated by using hardware parts that are already mass-produced, mostly for the cell-phone market.
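
As a rough illustration of what such an analytic trade-off model might look like, consider the hedged Python sketch below; the sub-system names, speedup factors, and power figures are illustrative assumptions, not measurements from the TCC8300 platform:

```python
# Hedged sketch of an analytic performance/power trade-off model in the spirit
# of the paper. All sub-system figures below are illustrative assumptions.
SUBSYSTEMS = {
    # name: (speedup over a software-only path, added power in mW) -- assumed
    "yuv2rgb_converter": (4.0, 35.0),
    "video_decoder": (6.0, 80.0),
}

def estimate(frame_time_sw_ms, base_power_mw, enabled):
    """Estimate frame time and power with the given hardware sub-systems on."""
    speedup, power = 1.0, base_power_mw
    for name in enabled:
        s, p = SUBSYSTEMS[name]
        speedup *= s  # assume independent, multiplicative speedups
        power += p
    return frame_time_sw_ms / speedup, power

t_ms, p_mw = estimate(120.0, 400.0, ["yuv2rgb_converter"])
print(f"frame time {t_ms:.1f} ms at {p_mw:.0f} mW")
```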

A Study on Location Selection for Rainwater Circulation System Elements at a City Level - Focusing on the Application of the Environmental and Ecological Plan of a Development - (도시차원의 빗물순환체계 요소별 입지선정에 관한 연구 - 개발예정지역의 환경생태계획 적용방안을 중심으로 -)

  • Kim, Hyo-Min;Kim, Kwi-Gon
    • Journal of the Korean Institute of Landscape Architecture / v.40 no.3 / pp.1-11 / 2012
  • This study focused on establishing a natural rainwater circulation system for relatively large urban development projects such as new town developments. In particular, if location selection techniques for the individual elements of a natural rainwater circulation system are developed for integrated rainwater management, changes in the hydrological environment will be minimized and the natural water cycle can be restored, realizing low impact development (LID). In that case, not only will excess runoff be reduced, but water spaces and green areas in the city will also increase, improving urban sustainability. First, five elements were selected for the location selection of a rainwater circulation system intended for integrated rainwater management: rainwater collection, infiltration, filtration, retention, and movement spaces. Location selection items and criteria were then defined for each of the five elements. To apply the generated evaluation items and criteria, a grid cell analysis was conducted based on the suitability index theory: thematic maps were overlaid through a suitability assessment of each element and graded by suitability index, and priority areas were identified for each element. The developed technique was applied to the planned Gim-cheon Innovation City development site to review its feasibility and limitations. The combined score of the overlaid map for each element was divided into five levels: very low, low, moderate, high, and very high. Finally, it was concluded that creating a rainwater circulation system conceptual map on the current land use plan based on the outcome of the application would be useful in building a water circulation system at the detailed space planning stage after environmental and ecological planning. Furthermore, the results of this study can serve as a means of environment-friendly urban planning for sustainable urban development.
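
A minimal sketch of the kind of grid-cell overlay analysis the study describes might look as follows; the layer names, weights, and grade thresholds are illustrative assumptions, not the criteria used by the authors:

```python
# Weighted overlay of thematic suitability layers on a grid, graded into the
# study's five levels. Layers, weights, and thresholds are assumed.
import numpy as np

rows, cols = 4, 4
rng = np.random.default_rng(0)
layers = {  # per-cell suitability scores (0..1) for one element, e.g. infiltration
    "soil_permeability": rng.random((rows, cols)),
    "slope": rng.random((rows, cols)),
    "land_cover": rng.random((rows, cols)),
}
weights = {"soil_permeability": 0.5, "slope": 0.3, "land_cover": 0.2}

composite = sum(w * layers[name] for name, w in weights.items())
grades = np.digitize(composite, bins=[0.2, 0.4, 0.6, 0.8])  # 0..4
labels = np.array(["very low", "low", "moderate", "high", "very high"])
print(labels[grades])
```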

Simulation and Feasibility Analysis of Aging Urban Park Refurbishment Project through the Application of Japan's Park-PFI System (일본 공모설치관리제도(Park-PFI)의 적용을 통한 노후 도시공원 정비사업 시뮬레이션 및 타당성 분석)

  • Kim, Yong-Gook;Kim, Young-Hyeon;Kim, Min-Seo
    • Journal of the Korean Institute of Landscape Architecture / v.51 no.5 / pp.13-29 / 2023
  • Urban parks are social infrastructure supporting citizens' health, quality of life, and community formation. As the proportion of urban parks established more than 20 years ago increases, so does the need for refurbishment to improve the physical environment and enhance the functions of aging urban parks. Since government-led refurbishment of aging urban parks faces limitations in securing financial resources and attractiveness, it must be promoted through public-private partnerships. Japan, which suffered from the same problem of aging urban parks, has successfully promoted several park refurbishment projects by introducing the Park-PFI system through the 2017 revision of the 「Urban Park Act」. This study examines and analyzes the characteristics of Japan's Park-PFI as an alternative for improving the quality of aging domestic urban park services through public-private partnerships, and the validity of aging urban park refurbishment projects carried out through Park-PFI. The main findings are as follows. First, it is necessary to start discussions on introducing Japan's Park-PFI, adapted to domestic conditions, as a means of public-private partnership to improve the service quality and diversify the functions of aging urban parks. To introduce Park-PFI, social discussions and follow-up studies on the deterioration of urban parks must be conducted, and the installation of privately funded profit-making facilities and the improvement of related regulations, such as the 「Parks and Green Spaces Act」 and the 「Public Property Act」, are required. Second, the Park-PFI project is judged to be a policy alternative that can enhance the benefits to citizens, local governments, and private operators, on the premise that the need to refurbish the aging urban park is high and its location is suitable for the project. In a pilot application of the Park-PFI project to Seyeong Park, an aging urban park located in Bupyeong-gu, Incheon, the project was analyzed to be profitable in terms of the profitability index (PI), financial net present value (FNPV), and financial internal rate of return (FIRR), so participation by the private business sector appears possible. At the local government level, it was found that private capital can be used to improve the physical environment of aging urban parks, and that management budgets for the refurbished parks can be secured from the financial resources generated by private operators returning a portion of their facility usage fees and profits (0.5% of annual sales).
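
The feasibility metrics named above (PI, FNPV, FIRR) follow standard definitions; a minimal Python sketch on an invented cash-flow series (the figures are assumptions, not the Seyeong Park numbers) might look like this:

```python
# Standard project-feasibility metrics on an assumed cash-flow series.
def npv(rate, cashflows):
    # cashflows[0] is the up-front investment (negative)
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.9, hi=1.0, tol=1e-6):
    """Bisection for the discount rate at which NPV crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-1000.0] + [180.0] * 10    # assumed: invest 1,000, earn 180/year for 10 years
rate = 0.045                        # assumed discount rate
fnpv = npv(rate, flows)
pi = (fnpv - flows[0]) / -flows[0]  # PV of inflows divided by the investment
print(f"FNPV = {fnpv:.1f}, PI = {pi:.2f}, FIRR = {irr(flows):.2%}")
```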

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.23-46 / 2017
  • Although there have been cases of evaluating the value of specific companies or projects, concentrated in the developed countries of North America and Europe since the early 2000s, systems and methodologies for estimating the economic value of individual technologies or patents have been activated only gradually. There do exist several online systems that qualitatively evaluate a technology's grade or patent rating, such as 'KTRS' of the KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. However, a web-based technology valuation system, the 'STAR-Value system', which calculates quantitative values of a subject technology for purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and is now spreading. In this study, we introduce the types of methodology and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that discounts anticipated future economic income to present value, and the relief-from-royalty method, which calculates the present value of royalties, taking the contribution of the subject technology to the business value created as the royalty rate. We look at how the models and related supporting information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on classifications such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC) of the technology to be evaluated, the STAR-Value system automatically returns metadata such as technology cycle time (TCT), sales growth rate and profitability data of similar companies or industry sectors, weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the resulting technology value has high reliability and objectivity. Furthermore, if information on the potential market size of the target technology and the market share of the commercialization subject draws on data-driven sources, or if estimated value ranges of similar technologies by industry sector are provided from evaluation cases already completed and accumulated in the database, the STAR-Value system is anticipated to present highly accurate value ranges in real time by intelligently linking its various support modules. Together with the explanation of the various valuation models and their primary variables presented in this paper, the STAR-Value system is intended to be utilized more systematically and in a data-driven way through an optimal model selection guideline module, an intelligent technology value range reasoning module, a similar-company-selection-based market share prediction module, and so on.
In addition, the research on the development and intelligence of the web-based STAR-Value system is significant in that it widely disseminates a web-based system that can validate the theoretical feasibility of technology valuation and apply it in practice, and it is expected to be utilized in various fields of technology commercialization.
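
As a point of reference, the two valuation methods named above can be sketched in a few lines of Python; all inputs (sales forecast, margin, royalty rate, tax rate, WACC) are illustrative assumptions, not STAR-Value system defaults:

```python
# Minimal sketches of the DCF and relief-from-royalty methods named in the
# abstract; every input figure below is an illustrative assumption.
def dcf_value(free_cash_flows, wacc):
    """Discounted cash flow: present value of expected free cash flows."""
    return sum(cf / (1 + wacc) ** (t + 1) for t, cf in enumerate(free_cash_flows))

def relief_from_royalty(sales_forecast, royalty_rate, tax_rate, wacc):
    """Present value of the after-tax royalties the owner is 'relieved' from paying."""
    royalties = [s * royalty_rate * (1 - tax_rate) for s in sales_forecast]
    return dcf_value(royalties, wacc)

sales = [1000, 1200, 1400, 1500, 1500]  # assumed sales over a 5-year tech life
print(f"DCF value: {dcf_value([s * 0.12 for s in sales], wacc=0.10):,.0f}")
print(f"RfR value: {relief_from_royalty(sales, 0.03, 0.22, 0.10):,.0f}")
```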

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a decisive victory over Lee Sedol. Many people had thought that machines would not be able to beat a human at Go because, unlike in chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains, and deep learning in particular drew interest as the core technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems. It performs especially well in image recognition, and also in high-dimensional domains such as voice, image, and natural language, where traditional machine learning techniques struggled to perform well. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, against MLP models, the traditional artificial neural network. Since all network design alternatives cannot be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used instead of overall accuracy to evaluate how well the models classify the class of interest. The detailed methods for applying each deep learning technique were as follows. The CNN algorithm recognizes features by reading adjacent values around a given value, but in business data the adjacency of fields usually carries no meaning because each field is independent. In this experiment, we therefore set the filter size of the CNN to the number of fields so as to learn the characteristics of a whole record at once, and added a hidden layer to make the decision based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position.
For the dropout technique, we set neurons to drop out with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because CNNs performed well on a binary classification problem to which they have rarely been applied, in addition to the fields where their effectiveness is already proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
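
A hedged Keras sketch of the best-performing configuration described above (a CNN whose filter spans all input fields, plus dropout) might look as follows; the layer sizes and field count are illustrative assumptions, and the Portuguese bank data is replaced with placeholders:

```python
# Sketch of a CNN for tabular binary classification in the spirit of the
# paper: one Conv1D filter spanning all fields, dropout at p=0.5. Sizes are
# assumptions; the data here is a random placeholder.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_fields = 16  # assumed number of input variables

model = keras.Sequential([
    # Filter spans all fields so each filter sees the whole record at once.
    layers.Conv1D(32, kernel_size=n_fields, activation="relu",
                  input_shape=(n_fields, 1)),
    layers.Dropout(0.5),                    # drop neurons with probability 0.5
    layers.Flatten(),
    layers.Dense(16, activation="relu"),    # extra hidden layer for the decision
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary response: open account or not
])
model.compile(optimizer="adam", loss="binary_crossentropy")

X = np.random.rand(100, n_fields, 1)        # placeholder records
y = np.random.randint(0, 2, size=(100, 1))  # placeholder labels
model.fit(X, y, epochs=1, verbose=0)
```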

A Study on the Observation of Soil Moisture Conditions and its Applied Possibility in Agriculture Using Land Surface Temperature and NDVI from Landsat-8 OLI/TIRS Satellite Image (Landsat-8 OLI/TIRS 위성영상의 지표온도와 식생지수를 이용한 토양의 수분 상태 관측 및 농업분야에의 응용 가능성 연구)

  • Chae, Sung-Ho;Park, Sung-Hwan;Lee, Moung-Jin
    • Korean Journal of Remote Sensing / v.33 no.6_1 / pp.931-946 / 2017
  • The purpose of this study is to observe and analyze soil moisture conditions at high resolution and to evaluate the feasibility of applying the results to agriculture. For this purpose, we used three Landsat-8 OLI (Operational Land Imager)/TIRS (Thermal Infrared Sensor) optical and thermal infrared satellite images taken from May to June of 2015, 2016, and 2017, including rural areas of Jeollabuk-do in which agricultural areas account for 46%. The soil moisture condition on each date in the study area was obtained from the 3-month Standardized Precipitation Index (SPI3) drought index; the three images correspond to near normal, moderately wet, and moderately dry conditions. The temperature vegetation dryness index (TVDI) was calculated to observe the soil moisture status from the Landsat-8 OLI/TIRS images under these different conditions and was compared against the soil moisture conditions obtained from the SPI3 drought index. TVDI is estimated from the relationship between LST (Land Surface Temperature) and NDVI (Normalized Difference Vegetation Index) calculated from the Landsat-8 OLI/TIRS images: the maximum and minimum LST values as a function of NDVI are extracted from the distribution of pixels in LST-NDVI feature space, the dry and wet edges of LST are determined by linear regression, and the TVDI value of a pixel is the ratio of its LST position between the two edges. We classified the relative soil moisture conditions derived from TVDI into five grades, very wet, wet, normal, dry, and very dry, and compared them with the soil moisture conditions obtained from SPI3. Because May to June is the rice-planting season, 62% of each image was classified as wet or very wet, reflecting the paddy fields that occupy the largest proportion of the scene, while pixels classified as normal were attributed to the influence of dry-field areas. The TVDI classification for the whole image roughly corresponded to the SPI3 soil moisture condition, but not at the finer grades of very dry, wet, and very wet. In addition, after extracting and classifying the agricultural areas into paddy fields and dry fields, the paddy fields did not correspond to the SPI3 drought index for the very dry, normal, and very wet grades, and the dry fields did not correspond for the normal grade. This is attributed to problems in dry/wet edge estimation caused by outliers such as extremely dry bare soil, very wet paddy fields, water, clouds, and mountain topography effects (shadow). Nevertheless, in the agricultural areas, especially the dry fields, soil moisture conditions in May and June could be observed effectively at the subdivision level. Applying this method to high-spatial-resolution optical satellite imagery is expected to make it possible to observe temporal and spatial changes in soil moisture status in agricultural areas and to forecast agricultural production.
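
The TVDI construction described above can be summarized in a short sketch: sample the dry and wet edges as per-NDVI-bin maxima and minima of LST, fit them by linear regression, and normalize each pixel's LST between the edges. The data below is synthetic:

```python
# TVDI from LST-NDVI feature space, following the construction in the
# abstract; the NDVI/LST samples here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
ndvi = rng.uniform(0.1, 0.9, 5000)
lst = 320 - 30 * ndvi + rng.normal(0, 3, ndvi.size)  # synthetic LST in kelvin

# Sample the dry/wet edges as the max/min LST in each NDVI bin.
bins = np.linspace(0.1, 0.9, 20)
idx = np.digitize(ndvi, bins)
centers, dry, wet = [], [], []
for b in range(1, bins.size):
    sel = idx == b
    if sel.any():
        centers.append(bins[b - 1:b + 1].mean())
        dry.append(lst[sel].max())
        wet.append(lst[sel].min())

a_d, b_d = np.polyfit(centers, dry, 1)  # dry edge: LSTmax = a_d*NDVI + b_d
a_w, b_w = np.polyfit(centers, wet, 1)  # wet edge: LSTmin = a_w*NDVI + b_w

# TVDI: position of each pixel's LST between the wet (0) and dry (1) edges.
tvdi = (lst - (a_w * ndvi + b_w)) / ((a_d * ndvi + b_d) - (a_w * ndvi + b_w))
print(f"TVDI range: {tvdi.min():.2f} .. {tvdi.max():.2f}")
```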

A Feasibility Study on GMC (Geo-Multicell-Composite) of the Leachate Collection System in Landfill (폐기물 매립시설의 배수층 및 보호층으로서의 Geo-Multicell-Composite(GMC)의 적합성에 관한 연구)

  • Jung, Sung-Hoon;Oh, Seungjin;Oh, Minah;Kim, Joonha;Lee, Jai-Young
    • Journal of the Korean Geosynthetics Society / v.12 no.4 / pp.67-76 / 2013
  • Landfills require special care because of the danger that leaking leachate will pollute nearby surface water and groundwater. The installed geomembrane keeps leachate from leaking, but sharp waste or landfill equipment can damage it, so a means of protecting the geomembrane is required. In Korea, under the Waste Control Act as amended in 1999, protecting the geosynthetic liner on the landfill slope and installing a drainage layer to drain leachate smoothly became mandatory, and technologies that both protect the geomembrane and drain leachate quickly are being researched. This research therefore studies the leachate drainage function and the geomembrane protection function of Geo-Multicell-Composite (GMC), in which a filler with high water permeability is inserted into the drainage net, to examine its applicability as a Leachate Collection, Removal and Protection System (LCRPs) on the slope above the landfill geomembrane. GMC's horizontal permeability coefficient of $8.0\times10^{-4}\ m^2/s$ satisfies the legal standard. The crushed gravel used as filler showed a vertical permeability of 5.0 cm/s and a puncture strength of 140.2 kgf. In a drainage test using artificial rain in a GMC model facility, about 92~97% of the sprayed water penetrated without surface runoff, even at the maximum flow rate of 1,120 L/hr. In further studies, using recycled aggregate as the filler instead of crushed gravel is expected to increase the utilization of recycled materials and reduce construction costs.
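
As a back-of-the-envelope check of the drainage figure, one can assume Darcy-type flow in the drainage layer, where the discharge per unit width is q = T·i with transmissivity T and hydraulic gradient i; the gradient below is an illustrative assumption, not a value from the paper:

```python
# Rough Darcy-type capacity check; the hydraulic gradient is an assumption.
T = 8.0e-4  # m^2/s, GMC horizontal permeability (transmissivity) reported above
i = 0.33    # assumed hydraulic gradient for a roughly 1:3 landfill slope
q = T * i   # m^3/s per metre width of slope
print(f"q = {q:.2e} m^3/s per m  (~{q * 3.6e6:.0f} L/hr per m width)")
```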

Study of Prediction Model Improvement for Apple Soluble Solids Content Using a Ground-based Hyperspectral Scanner (지상용 초분광 스캐너를 활용한 사과의 당도예측 모델의 성능향상을 위한 연구)

  • Song, Ahram;Jeon, Woohyun;Kim, Yongil
    • Korean Journal of Remote Sensing / v.33 no.5_1 / pp.559-570 / 2017
  • A partial least squares regression (PLSR) model was developed to map the internal soluble solids content (SSC) of apples using a ground-based hyperspectral scanner that can acquire data outdoors and image large quantities of apples simultaneously. We evaluated various preprocessing techniques for constructing an optimal prediction model and selected optimal bands through the variable importance in projection (VIP) score. From the 515 bands of the hyperspectral images, covering wavelengths of 360-1019 nm, 70 apple reflectance spectra were extracted, and the SSC (°Brix) was measured with a digital photometer. The optimal prediction model was selected considering the root-mean-square error of cross-validation (RMSECV), the root-mean-square error of prediction (RMSEP), and the coefficient of determination of prediction ($r_p^2$). As a result, preprocessing methods based on multiplicative scatter correction (MSC) outperformed the others. For example, with a combination of MSC and standard normal variate (SNV), RMSECV and RMSEP were lowest at 0.8551 and 0.8561, $r_c^2$ and $r_p^2$ were highest at 0.8533 and 0.6546, and the wavelength ranges of 360-380, 546-690, 760, 915, 931-939, 942, 953, 971, 978, 981, 988, and 992-1019 nm were most influential for SSC determination. A PLSR model using only the spectral values of these regions reduced the RMSEP to 0.6841 and increased $r_p^2$ to 0.7795 compared with the model using the entire wavelength band. This study confirms the feasibility of using hyperspectral scanner images acquired outdoors for measuring the SSC of apples, and the results suggest that the application of such field data and sensors could expand in the future.
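
A minimal sketch of the modelling pipeline described above, MSC preprocessing followed by PLS regression with cross-validation, might look as follows; the spectra are synthetic placeholders for the 70 apple spectra over 515 bands, and the component count is an assumption:

```python
# MSC preprocessing + PLSR with 10-fold cross-validation; synthetic data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def msc(spectra):
    """Multiplicative scatter correction against the mean spectrum."""
    ref = spectra.mean(axis=0)
    out = np.empty_like(spectra)
    for k, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)
        out[k] = (s - intercept) / slope
    return out

rng = np.random.default_rng(0)
shape = np.sin(np.linspace(0, 3, 515))  # a smooth synthetic "spectrum" shape
X = shape * rng.uniform(0.8, 1.2, (70, 1)) + rng.uniform(-0.1, 0.1, (70, 1))
y = 10 + 4 * X[:, 200] + rng.normal(0, 0.3, 70)  # placeholder SSC (deg Brix)

pls = PLSRegression(n_components=8)  # assumed number of latent variables
y_cv = cross_val_predict(pls, msc(X), y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.3f} deg Brix")
```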

Superficial Dosimetry for Helical Tomotherapy (토모테라피를 이용한 표면 치료 계획과 선량 분석)

  • Kim, Song-Yih;You, Sei-Hwan;Song, Tae-Soo;Kim, Yong-Nam;Keum, Ki-Chang;Cho, Jae-Ho;Lee, Chang-Geol;Seong, Jin-Sil
    • Radiation Oncology Journal / v.27 no.2 / pp.103-110 / 2009
  • Purpose: To investigate the feasibility of helical tomotherapy for treating a wide, curved area of skin, and its accuracy in calculating the absorbed dose in the superficial region. Materials and Methods: Two treatment plans were made with the cylinder-shaped 'cheese phantom'. In the first, 2 Gy was prescribed to a depth of 1 cm from the surface. In the second, 2 Gy was prescribed to a depth of 1 cm from a plane extended 5 mm outside the surface. The inner part of the phantom was completely blocked. To measure the surface dose and the depth dose profile, an EDR2 film was inserted into the phantom and 6 TLD chips were attached to the surface. Results: The film indicated a surface dose of 118.7 cGy in the former case and 130.9 cGy in the latter. The TLD chips indicated higher surface doses, but this was due to the finite thickness of the TLD chips. In the former case, 95% of the prescribed dose was reached at a depth of 2.1 mm, and in the latter case at 2.2 mm. The maximum dose was about 110% of the prescribed dose. As the depth increased, the dose decreased rapidly; at a depth of 2 cm, it was 20% of the prescribed dose. Conclusion: Helical tomotherapy could be useful for treating a wide, curved area of skin. However, at depths up to 2 mm, the planning system overestimated the superficial dose; for shallower targets, the use of a compensator such as a bolus is required.