• Title/Summary/Keyword: data utility


Assessment of the Potential Consumers' Preference for the V2G System (V2G 시스템에 대한 잠재적 소비자의 선호 평가)

  • Lim, Seul-Ye; Kim, Hee-Hoon; Yoo, Seung-Hoon
    • Journal of Energy Engineering / v.25 no.4 / pp.93-102 / 2016
  • The Vehicle-to-Grid (V2G) system, a bi-directional power trading technology, enables electric vehicle drivers to sell the spare electricity stored in their vehicles to a power distribution company. Drivers can earn a profit by trading electricity during daytime hours, when electricity rates are high. In this regard, the government is preparing policies to build and support V2G infrastructure and needs information on potential consumers' preferences for the V2G system. This paper analyzes consumers' preferences using data obtained from a survey of 1,000 randomly selected individuals. To this end, a choice experiment, an economic valuation technique, is employed. The attributes considered in the study are the residual amount of electricity, electricity trading hours, required plug-in time, and price, measured as an amount additional to the current gasoline vehicle price. The multinomial logit model, which requires the assumption of 'independence of irrelevant alternatives', was applied first, but the assumption was not satisfied by our data. Thus, we ultimately used a nested logit model, which does not require the assumption. All parameter estimates in the utility function are statistically significant at the 10% level. The estimation results show that the marginal willingness to pay (MWTP) for a one-hour increase in electricity trading hours is KRW 1,601,057. On the other hand, the MWTP for a one-percent reduction in the residual amount of electricity and for a one-hour reduction in required plug-in time are computed to be KRW -91,911 and KRW -470,619, respectively. The findings can provide policy makers with useful information for decision-making about introducing and managing the V2G system.
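As a rough illustration of how such MWTP figures arise from a discrete choice model, the sketch below computes MWTP as the negative ratio of an attribute's utility coefficient to the price coefficient; the coefficient values are placeholders, not the paper's estimates.

```python
# Minimal sketch of marginal willingness to pay (MWTP) from choice-model estimates.
# All coefficient values below are illustrative placeholders, not the paper's results.
beta_price = -0.0025         # assumed utility change per KRW of additional vehicle price
beta_trading_hours = 4.0     # assumed utility change per extra hour of electricity trading
beta_residual = -0.23        # assumed utility change per percent of residual electricity

def mwtp(beta_attribute, beta_price):
    # MWTP = -(attribute coefficient) / (price coefficient)
    return -beta_attribute / beta_price

print(mwtp(beta_trading_hours, beta_price))  # KRW per additional trading hour
print(mwtp(beta_residual, beta_price))       # KRW per percent of residual electricity
```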

Simulation Approach for the Tracing the Marine Pollution Using Multi-Remote Sensing Data (다중 원격탐사 자료를 활용한 해양 오염 추적 모의 실험 방안에 대한 연구)

  • Kim, Keunyong; Kim, Euihyun; Choi, Jun Myoung; Shin, Jisun; Kim, Wonkook; Lee, Kwang-Jae; Son, Young Baek; Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing / v.36 no.2_2 / pp.249-261 / 2020
  • Coastal monitoring using multiple platforms and sensors is a very important tool for accurately understanding changes in the offshore marine environment and disasters at high temporal and spatial resolution. However, integrated observation studies using multiple platforms and sensors are scarce, and the efficiency and limitations of such convergence have not been evaluated. In this study, we aimed to propose an integrated observation method using multiple remote sensing platforms and sensors, and to diagnose its utility and limitations. Integrated in situ surveys were conducted using Rhodamine WT (RWT) fluorescent dye to simulate various marine disasters. In September 2019, after injecting the fluorescent dye into the waters of the South Sea-Yeosu Sea, the distribution and movement of the RWT dye patches were detected using satellite (Kompsat-2/3/3A, Landsat-8 OLI, Sentinel-3 OLCI and GOCI), unmanned aircraft (Mavic 2 Pro and Inspire 2), and manned aircraft platforms. The initial RWT dye patch covered 2,600 ㎡ and spread to 62,000 ㎡ about 138 minutes later. The RWT patches gradually moved southwestward from the release point, similar to the pattern of the tidal current flowing southwest as the tide gradually ebbed. Unmanned aerial vehicle (UAV) images showed the highest spatial and temporal resolution, but the narrowest coverage area. Satellite images covered a wide area, but had some limitations in operability compared with the other platforms because of their long revisit cycles. Among the satellite sensors, Sentinel-3 OLCI and GOCI had the highest spectral resolution and signal-to-noise ratio (SNR), but detection of small dye patches was limited by their spatial resolution. The hyperspectral sensor mounted on the manned aircraft had the highest spectral resolution, but was also somewhat limited in operability. This simulation approach confirmed that multi-platform integrated observation can significantly improve temporal, spatial, and spectral resolution. In the future, if these results are linked to coastal numerical models, it will be possible to predict the transport and diffusion of contaminants, and they are expected to contribute to improving model accuracy when used as input and verification data for the numerical models.
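For reference, the reported patch growth corresponds to a simple average areal spreading rate, computed from the figures quoted above:

```python
# Average areal spreading rate of the RWT dye patch, from the figures reported above.
initial_area_m2 = 2_600
final_area_m2 = 62_000
elapsed_min = 138

rate = (final_area_m2 - initial_area_m2) / elapsed_min
print(f"{rate:.0f} m^2/min")  # roughly 430 m^2 per minute on average
```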

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok; Kim, Soo-Mee; Park, Min-Jae; Lee, Dong-Soo; Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging / v.43 no.5 / pp.459-467 / 2009
  • Purpose: The maximum likelihood-expectation maximization (ML-EM) algorithm is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and CUDA (Compute Unified Device Architecture), NVIDIA's parallel computing technology, the projection and backprojection steps of the ML-EM algorithm were parallelized. The computation times per iteration for projection, for the errors between measured and estimated data, and for backprojection were measured. Total time included the latency of data transfer between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, computing speed improved by about a factor of 15 on the GPU. When the number of iterations was increased to 1,024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold, which was caused by growing delays in the CPU-based computation after a certain number of iterations. In contrast, the GPU-based computation showed very little variation in the time per iteration owing to the use of shared memory. Conclusion: GPU-based parallel computation significantly improved the computing speed and stability of ML-EM. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
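For readers unfamiliar with the update rule being parallelized, the sketch below shows a plain NumPy version of the ML-EM iteration. It is only an illustrative matrix-based simplification; the paper's implementation maps the forward projection and backprojection steps to CUDA kernels on the GPU.

```python
import numpy as np

def ml_em(A, y, n_iter=32):
    """Basic ML-EM reconstruction sketch.
    A: system matrix (detector bins x image voxels), y: measured projection data."""
    x = np.ones(A.shape[1])                  # uniform initial image estimate
    sensitivity = A.sum(axis=0)              # backprojection of ones (normalization term)
    for _ in range(n_iter):
        expected = A @ x                                      # forward projection
        ratio = y / np.maximum(expected, 1e-12)               # measured / estimated data
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)   # backproject and update
    return x
```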

A Geographically Weighted Regression on the Effect of Regulation of Space Use on the Residential Land Price - Evidence from Jangyu New Town - (공간사용 규제가 택지가격에 미치는 영향에 대한 공간가중회귀분석 - 장유 신도시지역을 대상으로-)

  • Kang, Sun-Duk; Park, Sae-Woon; Jeong, Tae-Yun
    • Management & Information Systems Review / v.37 no.3 / pp.27-47 / 2018
  • In this study, we examine how land use zoning affects land prices, controlling for other variables such as the land's road-facing condition, land form, land age since development, and land size. As methodology we employ geographically weighted regression (GWR), which reflects spatial dependency, with a sample of land transaction price data from Jangyu, a new town in Korea. The results of our empirical analysis show that the respective coefficients of the traditional regression and the geographically weighted regression are not significantly different. However, after calculating Moran's Index on the residuals of both the OLS and GWR models, we find that the Moran's Index of the GWR model decreases by around 26% compared with that of the OLS model, thus considerably alleviating the problem of spatial autocorrelation in the residuals. Contrary to our expectation, though, in both the traditional regression and the geographically weighted regression, where exclusively residential land is used as the reference category, the dummy variable for residential land for both housing and shops shows a negative sign. This may be because residential land for both housing and shops is usually located on level ground, while exclusively residential land is located at the foot of a mountain or on a gentle hill where residents enjoy good air quality and scenery. Although the utility of residential land for both housing and shops is higher than its counterpart's, since it has a higher floor area ratio, amenity, defined in this study as good air quality and scenery, seems to have a greater impact on purchases of land for housing. On the other hand, land zoned for neighbourhood living facilities appears to be valued higher than the other land zonings used in this research, since it has a much higher floor area ratio than the two zonings above and a building of up to 5 stories can be constructed on it. With regard to road-facing condition, land buyers seem, as expected, to prefer land facing a medium-width road. Land facing a wide road can be exposed to noise and exhaust gas from cars, and entry may be difficult because of high-speed traffic. In contrast, land facing a narrow road can be free of noise and fumes and offers protected privacy, though entry may be blocked by cars parked on both sides of the narrow road. Finally, the land age variable shows a negative sign, which means that the price of land declines over time. This may be because the land price decline in Jangyu during the global financial crisis of 2008 was larger than in other parts of Gimhae, to which Jangyu, a new town, belongs.
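As a reference for the residual diagnostic mentioned above, Moran's I on model residuals can be computed as in the following sketch; the residual vector and spatial weight matrix are assumed inputs, and this is not the authors' code.

```python
import numpy as np

def morans_i(residuals, W):
    """Moran's I of model residuals.
    residuals: (n,) OLS or GWR residuals; W: (n, n) spatial weight matrix, zero diagonal."""
    z = residuals - residuals.mean()      # deviations from the mean
    numerator = z @ W @ z                 # spatially weighted cross-products
    denominator = (z ** 2).sum()
    return (len(z) / W.sum()) * (numerator / denominator)

# A Moran's I closer to zero for GWR residuals than for OLS residuals (about 26% lower
# in the study) indicates reduced spatial autocorrelation in the residuals.
```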

Development and Utility Evaluation of Portable Respiration Training Device for Image-guided Stereotactic Body Radiation Therapy (SBRT) (영상유도 체부정위방사선 치료시 호흡동조를 위한 휴대형 호흡연습장치의 개발 및 유용성 평가)

  • Hwang, Seon Bung; Park, Mun Kyu; Park, Seung Woo; Cho, Yu Ra; Lee, Dong Han; Jung, Hai Jo; Ji, Young Hoon; Kwon, Soo-Il
    • Progress in Medical Physics / v.25 no.4 / pp.264-270 / 2014
  • This study developed a portable respiratory training device to improve breathing stability, which is an important element in using the CyberKnife Synchrony respiratory tracking system, one of the representative stereotactic radiation therapy (SRT) devices. The device provides an interface that lets users select one of two display types, a graph type or a bar type, and supports an auditory system that helps them anticipate the next breath by improving their sense of the rhythm of their respiratory period, thereby inducing comfortable respiration. For five volunteers, the individual respiratory period detected by a self-developed program was applied; signal data for 'guided respiration', induced from the signal data obtained during 'free respiration' together with the auditory system, were acquired, and usability was evaluated by comparing the average deviation values of respiratory period and respiratory amplitude. The deviation of the respiratory period decreased by 55.74±0.14% and the deviation of the respiratory amplitude by 28.12±0.10% compared with free respiration, which confirmed the consistency and stability of respiration. SBRT for liver cancer or lung cancer using the portable respiratory training device developed from these results is expected to help reduce treatment delays caused by respiratory instability and to improve treatment accuracy, and if it is applied in the future to developing respiratory training applications for Android-based portable devices, greater convenience and economic efficiency can also be expected.
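The usability comparison reduces to the relative decrease in breathing variability between free and guided respiration. A minimal sketch of that computation follows; the variable names and the use of the standard deviation as the 'deviation' measure are assumptions, not the paper's exact definition.

```python
import numpy as np

def deviation_reduction(free_values, guided_values):
    """Percent reduction in variability from free to guided respiration.
    Inputs are per-breath measurements (e.g., periods in seconds or amplitudes)."""
    free_dev = np.std(free_values)
    guided_dev = np.std(guided_values)
    return (free_dev - guided_dev) / free_dev * 100.0

# Applied separately to the respiratory periods and amplitudes of each volunteer and then
# averaged across volunteers, this yields reduction figures like those reported above.
```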

Evaluation Criteria and Preferred Image of Jeans Products based on Benefit Segmentation (진 제품 구매자의 추구혜택에 따른 평가기준 및 선호 이미지)

  • Park, Na-Ri; Park, Jae-Ok
    • Journal of the Korean Society of Clothing and Textiles / v.31 no.6 s.165 / pp.974-984 / 2007
  • The purpose of this study was to identify differences in evaluation criteria and in preferred images across benefit-segmented groups of jeans consumers. Male and female Korean university students participated in the study. A quota sampling method based on the gender and residential area of the respondents was used to collect the data. Data from 492 questionnaires were used in the analysis. Factor analysis, Cronbach's alpha coefficient, cluster analysis, one-way ANOVA, and post-hoc tests were conducted. As a result, respondents who seek multiple benefits considered aesthetic criteria (e.g., color, style, design, fit) and quality performance criteria (e.g., durability, ease of care, contractibility, flexibility) more important when evaluating and purchasing jeans products. Respondents who seek brand name considered extrinsic criteria (e.g., brand reputation, status symbol, country of origin, fashionability) more important than respondents who seek economic efficiency did. Respondents who seek multiple benefits such as attractiveness, fashion, individuality, and utility tend to prefer all of the images (individual, active, sexual, sophisticated, and simple) when wearing jeans. Respondents who seek fashion are likely to prefer an individual image, and respondents who seek brand name prefer both an individual image and a polished image. Meanwhile, respondents who seek economic efficiency are less likely to prefer a sexual image and a polished image.

Quality Evaluation through Inter-Comparison of Satellite Cloud Detection Products in East Asia (동아시아 지역의 위성 구름탐지 산출물 상호 비교를 통한 품질 평가)

  • Byeon, Yugyeong; Choi, Sungwon; Jin, Donghyun; Seong, Noh-hun; Jung, Daeseong; Sim, Suyoung; Woo, Jongho; Jeon, Uujin; Han, Kyung-soo
    • Korean Journal of Remote Sensing / v.37 no.6_2 / pp.1829-1836 / 2021
  • Cloud detection means determining the presence or absence of clouds in each pixel of a satellite image, and it is an important factor affecting the utility and accuracy of the satellite image. In this study, among the satellite products of various agencies that provide cloud detection data, we perform a quantitative and qualitative comparative analysis of the differences between the cloud detection data of GK-2A/AMI, Terra/MODIS, and Suomi-NPP/VIIRS. In the quantitative comparison, the Proportion Correct (PC) index values in January were 74.16% for GK-2A & MODIS and 75.39% for GK-2A & VIIRS, while in April they were 87.35% for GK-2A & MODIS and 87.71% for GK-2A & VIIRS, showing better agreement in April than in January without much difference between satellite pairs. As for the qualitative comparison, when checked against RGB images, the April results detected clouds better than the January results, consistent with the preceding quantitative results. However, when thin clouds or snow cover were present, there were some differences between the satellites' cloud detection results.
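The Proportion Correct (PC) index used above is an agreement measure taken from the binary contingency table of two cloud masks; the sketch below assumes pixel-wise boolean cloud masks as inputs.

```python
import numpy as np

def proportion_correct(mask_a, mask_b):
    """Proportion Correct (PC): percentage of pixels on which two cloud masks agree.
    mask_a, mask_b: boolean arrays of equal shape, True = cloudy pixel."""
    agreement = (mask_a == mask_b)     # both cloudy or both clear
    return agreement.mean() * 100.0

# e.g., comparing the GK-2A/AMI mask with the MODIS or VIIRS mask over a common
# East Asia grid yields PC values like those reported above.
```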

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification, with one label among two classes; multi-class classification, with one label among several classes; and multi-label classification, with multiple labels among several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because instances carry multiple labels. In addition, since the number of labels to be predicted grows as the number of labels and classes increases, performance improvement becomes difficult owing to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, in which (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed label, and (iii) the predicted label is restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they have difficulty capturing non-linear relationships between labels, and thus cannot create a latent label space that sufficiently contains the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the vanishing gradient problem that occurs during backpropagation. To solve this problem, the skip connection was devised: by adding the input of a layer to its output, gradients are preserved during backpropagation and efficient learning is possible even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. In addition, the proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment that predicts the compressed keyword vector in the latent label space from the paper abstract and evaluates multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared with traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improved the performance of multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and different numbers of dimensions of the latent label space.
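To make the proposed architecture concrete, the sketch below shows one way an autoencoder with skip connections around its hidden layers could compress a label vector into a latent label space and reconstruct it. Layer sizes, activations, and the placement of the residual additions are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SkipLabelAutoencoder(nn.Module):
    """Label-embedding autoencoder with skip (residual) connections in encoder and decoder."""
    def __init__(self, n_labels, latent_dim, hidden_dim=256):
        super().__init__()
        self.enc_in = nn.Linear(n_labels, hidden_dim)
        self.enc_hidden = nn.Linear(hidden_dim, hidden_dim)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)
        self.dec_in = nn.Linear(latent_dim, hidden_dim)
        self.dec_hidden = nn.Linear(hidden_dim, hidden_dim)
        self.to_labels = nn.Linear(hidden_dim, n_labels)
        self.act = nn.ReLU()

    def encode(self, y):
        h = self.act(self.enc_in(y))
        h = h + self.act(self.enc_hidden(h))      # skip connection in the encoder
        return self.to_latent(h)                  # compressed (latent) label vector

    def decode(self, z):
        h = self.act(self.dec_in(z))
        h = h + self.act(self.dec_hidden(h))      # skip connection in the decoder
        return torch.sigmoid(self.to_labels(h))   # reconstructed multi-label probabilities

    def forward(self, y):
        return self.decode(self.encode(y))
```

Training would minimize a reconstruction loss (e.g., binary cross-entropy) over the original label vectors; a separate model would then map document features, such as abstract embeddings, to the latent vectors for prediction.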

A Study on the Potential Use of ChatGPT in Public Design Policy Decision-Making (공공디자인 정책 결정에 ChatGPT의 활용 가능성에 관한연구)

  • Son, Dong Joo; Yoon, Myeong Han
    • Journal of Service Research and Studies / v.13 no.3 / pp.172-189 / 2023
  • This study investigated the potential contribution of ChatGPT, a massive language and information model, to the decision-making process for public design policies, focusing on the characteristics inherent to public design. Public design uses the principles and approaches of design to address societal issues and aims to improve public services. Formulating public design policies and plans must be based on extensive data, including the general status of the area, population demographics, infrastructure, resources, safety, existing policies, legal regulations, landscape, spatial conditions, the current state of public design, and regional issues. Public design is therefore a field of design research that encompasses a vast amount of data and language. Considering the rapid advancement of artificial intelligence technology and the significance of public design, this study explores how massive language and information models like ChatGPT can contribute to public design policy. We also reviewed the concepts and principles of public design and its role in policy development and implementation, and examined an overview of ChatGPT and its features, application cases, and preceding research, to determine its utility in the decision-making process for public design policies. The study found that ChatGPT could offer substantial linguistic information during the formulation of public design policies and assist in decision-making. In particular, ChatGPT proved useful in providing various perspectives and swiftly supplying the information necessary for policy decisions. The trend of utilizing artificial intelligence in government policy development was also confirmed through various studies. However, the use of ChatGPT also revealed ethical, legal, and personal privacy issues; notably, ethical dilemmas were raised, along with issues related to bias and fairness. To apply ChatGPT practically in the decision-making process for public design policies, it is first necessary to enhance the capacities of policy developers and public design experts to a certain extent; second, it is advisable to create a provisional regulation, tentatively named an 'Ordinance on the Use of AI in Policy', to continuously refine its use until legal adjustments are made. Implementing these two strategies is currently deemed necessary. Consequently, employing massive language and information models like ChatGPT in the public design field, which involves a vast amount of language, holds substantial value.

Evaluation of the Utilization Potential of High-Resolution Optical Satellite Images in Port Ship Management: A Case Study on Berth Utilization in Busan New Port (고해상도 광학 위성영상의 항만선박관리 활용 가능성 평가: 부산 신항의 선석 활용을 대상으로)

  • Hyunsoo Kim; Soyeong Jang; Tae-Ho Kim
    • Korean Journal of Remote Sensing / v.39 no.5_4 / pp.1173-1183 / 2023
  • Over the past 20 years, Korea's overall import and export cargo volume has increased at an average annual rate of approximately 5.3%, and about 99% of that cargo is still transported by sea. With recent increases in maritime cargo volume, congestion in maritime logistics has become a challenge owing to factors such as the COVID-19 pandemic and armed conflicts, making continuous monitoring of ports crucial. Various ground observation systems and Automatic Identification System (AIS) data have been utilized to monitor ports, and numerous preliminary studies have been conducted on the efficient operation of container terminals and on cargo volume prediction. However, the ports of small and developing countries face difficulties in monitoring compared with large ports because of environmental issues and aging infrastructure. Recently, with the increasing utility of artificial satellites, preliminary studies have used satellite imagery for continuous collection of maritime cargo data and for establishing ocean monitoring systems over vast and hard-to-reach areas. This study aims to visually detect ships docked at berths in Busan New Port using high-resolution satellite imagery and to quantitatively evaluate berth utilization rates. Using high-resolution imagery from Compact Advanced Satellite 500-1 (CAS500-1), Korea Multi-Purpose Satellite-3 (KOMPSAT-3), PlanetScope, and Sentinel-2A, ships docked within the port berths were visually detected, and the berth utilization rate was calculated from the total number of ships that could be docked at the berths. The results showed berth utilization rates of 0.67, 0.7, and 0.59 on June 2, 2022, indicating fluctuations depending on the time of satellite image capture. On June 3, 2022, the value remained at 0.7, signifying a consistent berth utilization rate despite changes in ship types. A higher berth utilization rate indicates more active operations at the berth. This information can assist in basic planning of new ship operation schedules, as congested berths can lead to longer waiting times for ships at anchorage and potentially higher freight rates. The duration of operations at a berth can vary from several hours to several days. Calculating changes in ships at berths from differences in satellite image capture times showed variations in ship presence even with a time difference of only 4 minutes and 49 seconds. With short observation intervals and high-resolution satellite imagery, continuous monitoring within ports can be achieved. In addition, using satellite imagery to monitor minute-scale changes in ships at berths could prove useful for ports in small and developing countries where harbor management is not well established, offering valuable insights and solutions.
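The berth utilization figures above reduce to a simple ratio of observed ships to berth capacity; the counts in this minimal sketch are illustrative, not taken from the paper.

```python
def berth_utilization(ships_docked, berth_capacity):
    """Berth utilization = ships observed at berths / total ships that can be docked."""
    return ships_docked / berth_capacity

# Illustrative only: 7 occupied slots out of 10 available gives 0.7, the same order
# as the rates reported for Busan New Port above.
print(berth_utilization(7, 10))
```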