• Title/Summary/Keyword: Computational Complexity (계산 복잡성)


WQI Class Prediction of Sihwa Lake Using Machine Learning-Based Models (기계학습 기반 모델을 활용한 시화호의 수질평가지수 등급 예측)

  • KIM, SOO BIN;LEE, JAE SEONG;KIM, KYUNG TAE
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.27 no.2
    • /
    • pp.71-86
    • /
    • 2022
  • The water quality index (WQI) has been widely used to evaluate marine water quality. In Korea, the WQI is categorized into five classes by marine environmental standards. However, WQI calculation on huge datasets is a complex and time-consuming process. In this regard, the current study proposed machine learning (ML)-based models to predict the WQI class from water quality datasets. Sihwa Lake, one of the specially-managed coastal zones, was selected as the modeling site. In this study, the adaptive boosting (AdaBoost) and tree-based pipeline optimization (TPOT) algorithms were used to train models, and each model's performance was evaluated with classification metrics (accuracy, precision, F1, and log loss). Before training, feature importance and sensitivity analyses were conducted to find the best input combination for each algorithm. The results showed that bottom dissolved oxygen (DOBot) was the most important variable affecting model performance, whereas surface dissolved inorganic nitrogen (DINSur) and dissolved inorganic phosphorus (DIPSur) had weaker effects on WQI class prediction. In addition, comparing the spatio-temporal and class sensitivities of each best model showed that performance varied across stations, seasons, and WQI classes. In conclusion, the TPOT algorithm performed better than the AdaBoost algorithm when feature selection was not considered. Moreover, the WQI class of unknown water quality datasets could be reliably predicted using a TPOT model trained on satisfactory training datasets.
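As a rough illustration of this kind of workflow, the sketch below trains an AdaBoost classifier on synthetic water-quality-like features; the variable names, the synthetic labeling rule, and the five-class split are assumptions standing in for the study's actual data:

```python
# Hedged sketch: AdaBoost WQI-class prediction on synthetic features.
# Columns stand in for DO_bot, DIN_sur, DIP_sur, chl-a; not the paper's data.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, log_loss

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 4))
# Synthetic rule: bottom DO (column 0) dominates the class label, mirroring
# the paper's feature-importance finding; DIN/DIP columns barely matter.
score = 2.0 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.3, size=n)
y = np.digitize(score, bins=[-2, -0.7, 0.7, 2])   # five WQI-like classes 0..4

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

acc = accuracy_score(y_te, model.predict(X_te))
ll = log_loss(y_te, model.predict_proba(X_te))
print(f"accuracy={acc:.2f}, log_loss={ll:.2f}")
print("feature importances:", model.feature_importances_.round(2))
```

On this synthetic data the importance of the dominant feature (column 0) comes out highest, echoing the DOBot result reported above.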

The Study on Control Algorithm of Elevator EDLC Emergency Power Converter (승강기 EDLC 비상전원 전력변환장치 제어 알고리즘 연구)

  • Lee, Sang-min;Kim, IL-Song;Kim, Nam
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.6
    • /
    • pp.709-718
    • /
    • 2017
  • The installation of elevator ARD (Automatic Rescue Device) systems has recently been mandated by law in order to safely rescue passengers during power failures. The ARD system consists of an energy storage device, a power converter, and a control system. An EDLC (Electric Double Layer Capacitor) is used as the energy storage device for rapid charge and discharge. The power conditioning system (PCS) consists of a bi-directional converter, a 3-phase converter, and the control system. Dead-beat control is adopted in most systems; however, it requires complex mathematical calculations, making high-performance microprocessors mandatory, which can raise manufacturing cost. In this paper, a new average current mode control method with a simple structure is presented. The control algorithm is applied to a single-phase system and then extended to a three-phase system to meet the system requirements. A mathematical model derived by the average modeling method is presented and analyzed by PSIM computer simulation to verify the validity of the proposed control method.
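As a loose illustration of the average current mode control idea (not the paper's controller, converter, or parameters), the sketch below closes a PI loop around an averaged buck-type converter model; every component value and gain here is an assumption chosen for the example:

```python
# Hedged sketch: PI average current-mode control of an averaged converter model.
# All values (L, voltages, gains) are illustrative assumptions.
L_ind, Vin, Vout = 1e-3, 100.0, 48.0   # inductance [H], input/output voltages [V]
dt, i_ref = 1e-5, 10.0                 # time step [s], average current command [A]
kp, ki = 0.02, 40.0                    # PI gains, tuned by trial for this model

i_L, integ = 0.0, 0.0
for _ in range(5000):                  # simulate 50 ms
    err = i_ref - i_L
    integ += err * dt
    # duty cycle: PI term plus steady-state feedforward Vout/Vin, clipped to [0, 1]
    d = min(max(kp * err + ki * integ + Vout / Vin, 0.0), 1.0)
    # averaged inductor dynamics: L * di/dt = d*Vin - Vout
    i_L += (d * Vin - Vout) / L_ind * dt

print(f"inductor current after 50 ms: {i_L:.2f} A (command {i_ref} A)")
```

The averaged model replaces switching-frequency detail with the duty-cycle-weighted voltage balance, which is what makes this approach computationally lighter than dead-beat control.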

Strength Prediction of PSC Box Girder Diaphragms Using 3-Dimensional Grid Strut-Tie Model Approach (3차원 격자 스트럿-타이 모델 방법을 이용한 PSC 박스거더 격벽부의 강도예측)

  • Park, Jung Woong;Kim, Tae Young
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.5A
    • /
    • pp.841-848
    • /
    • 2006
  • There is a complex variation of stress in PSC anchorage zones and box girder diaphragms because of the large concentrated loads introduced by prestressing. According to the AASHTO LRFD design code, three-dimensional effects due to concentrated jacking loads shall be investigated using three-dimensional analysis procedures, or may be approximated by considering separate submodels for two or more planes; in that case the interaction of the submodels should be considered, and the model loads and results should be consistent. However, box girder diaphragms are three-dimensional disturbed regions that require a fully three-dimensional model, and two-dimensional models cannot satisfactorily capture the flow of forces in diaphragms. In this study, the strengths of prestressed box girder diaphragms that were tested to failure at the University of Texas are predicted using the 3-dimensional grid strut-tie model approach. According to the analysis results, the 3-dimensional strut-tie model approach can be applied to the analysis and design of PSC box girder anchorage zones as a reasonable computer-aided approach with satisfactory accuracy.

Design of Authentication Mechanism for Command Message based on Double Hash Chains (이중 해시체인 기반의 명령어 메시지 인증 메커니즘 설계)

  • Park Wang Seok;Park Chang Seop
    • Convergence Security Journal
    • /
    • v.24 no.1
    • /
    • pp.51-57
    • /
    • 2024
  • Although industrial control systems (ICSs) have recently kept evolving with the introduction of the Industrial IoT, converging information technology (IT) and operational technology (OT), this also leads to a variety of threats and vulnerabilities not experienced in past ICSs, which had no connection to external networks. Since various control command messages are sent to the field devices of an ICS to monitor and control operational processes, both message integrity and control center authentication must be guaranteed. Conventional message integrity codes and signature schemes, based on symmetric keys and public keys respectively, are not suitable considering the asymmetry between the control center and the field devices; in particular, compromised-node attacks can be mounted against symmetric-key-based schemes. In this paper, we propose a message authentication scheme based on double hash chains constructed from a cryptographic hash function without introducing other primitives, and then propose an extension scheme using a Merkle tree for multiple uses of the double hash chains. The proposed scheme is shown to be much more efficient in computational complexity than other conventional schemes.
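The building blocks named above can be sketched generically. The following is a minimal hash-chain-plus-Merkle-tree illustration of the general technique, not the paper's specific double-chain construction:

```python
# Hedged sketch: a one-way hash chain (preimages released in reverse order)
# and a Merkle tree committing to several chain anchors at once.
import hashlib
import os

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, n: int) -> list:
    """Build chain so that h(chain[i+1]) == chain[i]; chain[0] is the anchor."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain[::-1]        # anchor first, secret seed last

def merkle_root(leaves: list) -> bytes:
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:    # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A field device stores only the anchor; each authenticated command releases
# the next preimage, which the device verifies by one hash evaluation.
chain = make_chain(os.urandom(32), n=4)
anchor = chain[0]
assert h(chain[1]) == anchor          # first released value verifies

# Multiple chains can be committed together via a Merkle root over anchors.
root = merkle_root([make_chain(os.urandom(32), 4)[0] for _ in range(4)])
print("anchor verified; merkle root:", root.hex()[:16])
```

Verification costs one hash per released chain value (plus a logarithmic number of hashes for a Merkle path), which is the source of the computational efficiency claimed for hash-chain schemes.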

Beverage and Tea Industry (음료·다류 산업)

  • 손헌수
    • Food Industry
    • /
    • s.180
    • /
    • pp.27-64
    • /
    • 2004
  • Food and beverage products account for 6.31% of total domestic manufacturing. As of 2000, total sales of the food industry were KRW 32.3463 trillion, of which foodstuffs accounted for KRW 26.0921 trillion and beverages (including alcoholic beverages) for KRW 6.2516 trillion. In 2001, the beverage market was worth about KRW 2.79 trillion. After slowing somewhat following the IMF crisis, total beverage sales continued to rise, reaching KRW 3.4 trillion in 2002, with growth led by lemon-lime sodas, flavored drinks, regular and chilled juices, sports drinks, soy milk, and functional beverages. Sales for the first half of 2003 were KRW 1.65 trillion, down 3% year-on-year, but full-year 2003 sales were expected to be roughly the same as the previous year, at about KRW 3.5 trillion. Domestic consumption of carbonated and fruit-juice drinks declined, but overall consumption increased. The tea market was estimated at about KRW 65 billion and the canned-coffee market at about KRW 240 billion. The beverage and tea market thus forms a large market of about KRW 4 trillion in total, and with domestic demand steadily increasing, it is expected to keep growing. Recently, functional beverages and teas have grown to about 20% of the total market, and patent applications for them continue to increase; this is a worldwide phenomenon, not just a domestic one, and the trend is expected to continue. The growth of functional beverages and teas stems from changes in modern lifestyles, diets, and disease patterns, and growing interest in health keeps creating rapid growth and niche demand in the beverage market. The global health-functional food market is about KRW 160 trillion, and the domestic market about KRW 1 trillion, of which functional beverages account for about KRW 400 billion, or 40% of the domestic functional food market. The functional beverage market has recently grown about 7% annually and is expected to sustain a real annual growth rate of 6.7%. Starting from electrolyte (ion) drinks and dietary-fiber diet drinks, functional beverages have expanded into hangover relief, prevention of lifestyle diseases, stress relief, and other areas of disease treatment and prevention. Going forward, patent applications are expected to continue for beverages and teas with specific functions such as constitution improvement, dieting, and hangover relief, along with the development of ingredients from new functional materials whose effects are being identified through biotechnology (BT) and nanotechnology. R&D is also expected to continue on beverages and teas that simplify the complex preparation of herbal medicines into tea-bag or canned form while balancing taste and function; as the market expands, non-beverage companies such as pharmaceutical firms, alongside existing beverage and tea companies, are expected to keep developing diverse products aimed at younger consumers who want both functionality and convenience. In particular, developing new functional materials is a high-value-added technology, and with the advancement and adoption of cutting-edge technologies such as BT and NT, functional materials that were previously imported are expected to be supplied in large part through the accumulated base technologies of domestic companies. However, the distribution of unsanitary foods of uncertain efficacy and the disorder in distribution channels that overheated competition may bring as many companies enter the market could dampen the technology development that has only recently regained momentum; systematic and efficient management at the government level should therefore be established promptly, supported by policies that back it institutionally. In particular, the trade imbalance in raw materials must be resolved. Given that most functional raw materials are currently imported, more investment and effort should be concentrated on expanding domestic BT capabilities. The development of globally competitive functional materials, and of new functional beverages based on them, goes beyond simple mathematical calculation: it is an opportunity to showcase the nation's technological capability. As enthusiasm for beverages and teas leads the food market, there is no question that continued technology development is needed for the market to expand stably.


Optimal Spatial Scale for Land Use Change Modelling : A Case Study in a Savanna Landscape in Northern Ghana (지표피복변화 연구에서 최적의 공간스케일의 문제 : 가나 북부지역의 사바나 지역을 사례로)

  • Nick van de Giesen;Paul L. G. Vlek;Park Soo Jin
    • Journal of the Korean Geographical Society
    • /
    • v.40 no.2 s.107
    • /
    • pp.221-241
    • /
    • 2005
  • Land Use and Land Cover Changes (LUCC) occur over a wide range of space and time scales and involve complex natural, socio-economic, and institutional processes. Modelling and predicting LUCC therefore demands an understanding of how various measured properties behave when considered at different scales. Understanding the spatial and temporal variability of the driving forces and constraints on LUCC is central to understanding these scaling issues. This paper aims to 1) assess the heterogeneity of land cover change processes over the landscape in northern Ghana, where intensification of agricultural activities has been the dominant land cover change process during the past 15 years, 2) characterise the dominant land cover change mechanisms at various spatial scales, and 3) identify the optimal spatial scale for LUCC modelling in a savanna landscape. A multivariate statistical method was first applied to identify land cover change intensity (LCCI), using four time-sequenced NDVI images derived from LANDSAT scenes. Three proxy land use change predictors: distance from roads, distance from surface water bodies, and a terrain characterisation index, were regressed against the LCCI using a multi-scale hierarchical adaptive model to identify the scale dependency and spatial heterogeneity of LUCC processes. High spatial associations between the LCCI and the land use change predictors were mostly limited to moving windows smaller than 10$\times$10 km. With increasing window size, LUCC processes within the window tend to be too diverse to establish clear trends, because changes in one part of the window are compensated elsewhere; this reduces the correlation between the LCCI and the land use change predictors at coarser spatial extents. A spatial coverage of 5-10 km is, incidentally, equivalent to a village or community area in the study region. To reduce the spatial variability of land use change processes for regional or national level LUCC modelling, we suggest that the village level is the optimal spatial investigation unit in this savanna landscape.
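The window-size effect described above can be illustrated on synthetic data (not the Ghana imagery): where the local coupling between a driver and a response flips sign across the landscape, correlations computed in large windows average out opposing trends:

```python
# Hedged illustration of scale dependency: moving-window correlation between a
# synthetic driver and response whose coupling flips sign across the grid.
import numpy as np

rng = np.random.default_rng(1)
size = 60
xx, yy = np.meshgrid(np.arange(size), np.arange(size))
driver = np.sin(xx / 5.0) + rng.normal(scale=0.1, size=(size, size))
# The response tracks the driver locally, but the sign of the coupling flips
# between the two halves of the grid, so coarse windows mix opposing trends.
sign = np.where(yy < size // 2, 1.0, -1.0)
response = sign * driver + rng.normal(scale=0.1, size=(size, size))

def mean_window_corr(a, b, w):
    """Mean absolute correlation over non-overlapping w-by-w windows."""
    cors = []
    for i in range(0, size - w + 1, w):
        for j in range(0, size - w + 1, w):
            pa = a[i:i + w, j:j + w].ravel()
            pb = b[i:i + w, j:j + w].ravel()
            cors.append(abs(np.corrcoef(pa, pb)[0, 1]))
    return float(np.mean(cors))

results = {w: mean_window_corr(driver, response, w) for w in (10, 30, 60)}
for w, r in results.items():
    print(f"window {w}x{w}: mean |r| = {r:.2f}")
```

Small windows that fit inside one coherent region recover a strong association, while the single grid-spanning window does not, which is the same pattern the study reports for windows beyond roughly 10$\times$10 km.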

On Using Near-surface Remote Sensing Observation for Evaluation Gross Primary Productivity and Net Ecosystem CO2 Partitioning (근거리 원격탐사 기법을 이용한 총일차생산량 추정 및 순생태계 CO2 교환량 배분의 정확도 평가에 관하여)

  • Park, Juhan;Kang, Minseok;Cho, Sungsik;Sohn, Seungwon;Kim, Jongho;Kim, Su-Jin;Lim, Jong-Hwan;Kang, Mingu;Shim, Kyo-Moon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.4
    • /
    • pp.251-267
    • /
    • 2021
  • Remotely sensed vegetation indices (VIs) are empirically related to gross primary productivity (GPP) at various spatio-temporal scales, and the uncertainty in the GPP-VI relationship increases with temporal resolution. Uncertainty also exists in the eddy covariance (EC)-based estimation of GPP, arising from the partitioning of the measured net ecosystem CO2 exchange (NEE) into GPP and ecosystem respiration (RE). For two forest and two agricultural sites, we correlated the EC-derived GPP at various time scales with three near-surface remotely sensed VIs: (1) the normalized difference vegetation index (NDVI), (2) the enhanced vegetation index (EVI), and (3) near-infrared reflectance of vegetation (NIRv), along with NIRvP (i.e., NIRv multiplied by photosynthetically active radiation, PAR). Among the compared VIs, NIRvP showed the highest correlation with half-hourly and monthly GPP at all sites. NIRvP was then used to test the reliability of GPP derived by two different NEE partitioning methods: (1) the original KoFlux method (GPPOri) and (2) a machine-learning based method (GPPANN). GPPANN showed higher correlation with NIRvP at the half-hourly time scale, but there was no difference at the daily time scale. The NIRvP-GPP correlation was lower under clear-sky conditions due to co-limitation of GPP by other environmental conditions such as air temperature, vapor pressure deficit, and soil moisture. Under cloudy conditions, however, when photosynthesis is mainly limited by radiation, NIRvP was more promising for testing the credibility of NEE partitioning methods. Although further analyses are necessary, the results suggest that NIRvP can be used as a proxy for GPP at high temporal scales. However, for VI-based GPP estimation with high temporal resolution to be meaningful, complex systems-based analysis methods (related to systems thinking and self-organization, going beyond the empirical VI-GPP relationship) should be developed.
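The compared indices follow standard band-ratio formulas. A minimal sketch with illustrative reflectance and PAR values (assumed for the example, not the study's measurements):

```python
# Hedged sketch of the vegetation indices compared above.
def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    """Enhanced vegetation index with MODIS-style coefficients."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def nirv(nir, red):
    """NIR reflectance of vegetation: NDVI times NIR reflectance."""
    return ndvi(nir, red) * nir

def nirvp(nir, red, par):
    """NIRvP: NIRv scaled by incident PAR (e.g., umol m-2 s-1)."""
    return nirv(nir, red) * par

# Illustrative values for a dense canopy under midday light (assumptions).
nir, red, blue, par = 0.45, 0.06, 0.04, 1500.0
print(f"NDVI={ndvi(nir, red):.3f}  EVI={evi(nir, red, blue):.3f}  "
      f"NIRv={nirv(nir, red):.3f}  NIRvP={nirvp(nir, red, par):.1f}")
```

Because NIRvP folds incident radiation into the index, it tracks sub-daily GPP variation that reflectance-only indices cannot, consistent with the half-hourly results reported above.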

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop;Chang Jeong-Ho
    • The KIPS Transactions:PartB
    • /
    • v.11B no.6
    • /
    • pp.749-758
    • /
    • 2004
  • In this paper, we propose a new method that utilizes only a raw corpus, without additional human effort, for disambiguation of target word selection in English-Korean machine translation. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These techniques can represent the complex semantic structures of given contexts, such as text passages. We construct linguistic semantic knowledge using the two techniques and apply it to target word selection in English-Korean machine translation, utilizing grammatical relationships stored in a dictionary. We use the k-nearest neighbor learning algorithm to resolve the data sparseness problem in target word selection, estimating the distance between instances based on these models. In the experiments, we use TREC AP news data to construct the latent semantic space and the Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, using correlation calculations, we showed how the accuracy relates to two important factors: the dimensionality of the latent space and the value of k in k-NN learning.
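As a toy illustration of the LSA step only (not the paper's corpus, dimensionality, or k-NN setup), a term-document matrix can be reduced with truncated SVD and context similarity then measured in the latent space:

```python
# Hedged sketch: LSA as truncated SVD over a term-document count matrix,
# with cosine similarity between contexts in the latent space. Toy corpus.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Four contexts for the ambiguous word "bank" (financial vs. riverside).
contexts = [
    "the bank raised interest rates on deposits",
    "the bank approved the loan and the mortgage",
    "the river bank was covered with reeds",
    "fishermen sat on the bank of the river",
]
X = CountVectorizer().fit_transform(contexts)            # term-document counts
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

sim = cosine_similarity(Z)                               # context similarities
print("finance vs finance:", round(float(sim[0, 1]), 2))
print("finance vs river  :", round(float(sim[0, 2]), 2))
```

In the paper's setting, the nearest neighbors of a new context in such a latent space vote on the translation of the ambiguous word; this sketch shows only how the space and the distances are obtained.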

Analysis of Uncertainty in Ocean Color Products by Water Vapor Vertical Profile (수증기 연직 분포에 의한 GOCI-II 해색 산출물 오차 분석)

  • Kyeong-Sang Lee;Sujung Bae;Eunkyung Lee;Jae-Hyun Ahn
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_2
    • /
    • pp.1591-1604
    • /
    • 2023
  • In ocean color remote sensing, atmospheric correction is a vital process for ensuring the accuracy and reliability of ocean color products, and in recent years the remote sensing community has intensified its requirements for understanding errors in satellite data. Accordingly, research is currently addressing errors in remote sensing reflectance (Rrs) resulting from inaccuracies in the meteorological variables (total ozone, pressure, wind field, and total precipitable water) used as auxiliary data for atmospheric correction. However, there has been no investigation into the error in Rrs caused by the variability of the water vapor profile, despite it being a recognized error source. In this study, we used Second Simulation of the Satellite Signal in the Solar Spectrum, Vector, version 2.1 (6SV2.1) simulations to compute the errors in water vapor transmittance arising from variations in the water vapor profile within the GOCI-II observation area, and then analyzed the associated errors in ocean color products. The observed water vapor profiles not only exhibited complex shapes but also varied significantly near the surface, leading to transmittance differences of up to 0.007 compared with the U.S. Standard 62 water vapor profile used in the GOCI-II atmospheric correction. The resulting variation in water vapor transmittance led to differences in aerosol reflectance estimation, consequently introducing errors in Rrs across all GOCI-II bands. However, the error in Rrs in the 412-555 nm bands due to the difference in the water vapor profile was below 2%, which is lower than the required accuracy, and similar errors appeared in other ocean color products such as chlorophyll-a concentration, colored dissolved organic matter, and total suspended matter concentration. These results indicate that the variability of water vapor profiles has minimal impact on the accuracy of atmospheric correction and ocean color products; improving the accuracy of the input data for the water vapor column concentration is therefore more critical for enhancing the accuracy of ocean color products in terms of water vapor absorption correction.

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • A Convolutional Neural Network (ConvNet) is one class of powerful deep neural network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s, but at that time neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains it is difficult, and requires much effort, to gather a large-scale dataset to train a ConvNet; moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Our primary pipeline has three steps. First, an image from the target task is fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, which carries more information about the image; concatenating the three fully connected layer features yields an image representation with 9,192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single ConvNet layer representations, using PCA for feature selection and dimensionality reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets respectively, compared to existing work.
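The fusion-and-reduction step (concatenate per-layer features, then PCA, then a linear classifier) can be sketched with stand-in features. The dimensions mirror AlexNet's three fully connected layers, but the random "activations" and the injected class signal are assumptions for the example, not real network outputs:

```python
# Hedged sketch: concatenating multi-layer feature vectors and reducing with
# PCA before a linear classifier. Random features stand in for FC6/FC7/FC8.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_classes = 400, 5
y = rng.integers(0, n_classes, size=n)

def fake_layer(dim):
    """Stand-in activations with a weak class-correlated signal injected."""
    base = rng.normal(size=(n, dim))
    base[:, :10] += y[:, None] * 0.8
    return base

# 4096 + 4096 + 1000 = 9,192-dimensional concatenated representation.
X = np.hstack([fake_layer(4096), fake_layer(4096), fake_layer(1000)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pca = PCA(n_components=128, random_state=0).fit(X_tr)   # keep salient components
clf = LogisticRegression(max_iter=2000).fit(pca.transform(X_tr), y_tr)
acc = clf.score(pca.transform(X_te), y_te)
print("test accuracy:", round(acc, 2))
```

PCA here plays the role the paper assigns it: the concatenated representation is redundant (the layers come from one network), and projecting onto the leading components discards that redundancy before the classifier is trained.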