• Title/Summary/Keyword: Data Sampling (데이터 샘플링)

Search results: 510 (processing time: 0.034 seconds)

Real-time Nutrient Monitoring of Hydroponic Solutions Using an Ion-selective Electrode-based Embedded System (ISE 기반의 임베디드 시스템을 이용한 실시간 수경재배 양액 모니터링)

  • Han, Hee-Jo;Kim, Hak-Jin;Jung, Dae-Hyun;Cho, Woo-Jae;Cho, Yeong-Yeol;Lee, Gong-In
    • Journal of Bio-Environment Control / v.29 no.2 / pp.141-152 / 2020
  • The rapid on-site measurement of hydroponic nutrients allows for more efficient use of crop fertilizers. This paper reports on the development of an embedded on-site system consisting of multiple ion-selective electrodes (ISEs) for the real-time measurement of the concentrations of macronutrients in hydroponic solutions. The system included a combination of PVC ISEs for the detection of NO3, K, and Ca ions, a cobalt electrode for the detection of H2PO4, a double-junction reference electrode, a solution container, and a sampling system consisting of pumps and valves. An Arduino Due board was used to collect data and to control the sample volume. Prior to the measurement of each sample, a two-point normalization method was employed to adjust the sensitivity and offset, minimizing the potential drift that can occur during continuous measurement. The predictive capabilities of the PVC-membrane NO3 and K ISEs were satisfactory, producing results in close agreement with those of standard analyzers (R2 = 0.99). Although the Ca ISE fabricated with Ca ionophore II underestimated the Ca concentration by an average of 55%, the strong linear relationship (R2 > 0.84) makes it possible for the embedded system to be used in hydroponic NO3, K, and Ca sensing. The cobalt-rod-based phosphate electrodes exhibited a relatively high error of 24.7±9.26% over the phosphate concentration range of 45 to 155 mg/L compared to standard methods, due to inconsistent signal readings between replicates, illustrating the need for further research on the signal conditioning of cobalt electrodes to improve their predictive ability in hydroponic P sensing.
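The two-point normalization the abstract mentions amounts to re-fitting the slope and offset of the electrode's Nernst-type response from two standard solutions. The following is a minimal sketch of that idea; the standard concentrations and voltages are hypothetical, not values from the paper.

```python
import math

def two_point_calibration(c1, e1, c2, e2):
    """Fit slope and offset of a Nernst-type response E = offset + slope*log10(C)
    from two standard solutions (concentration, measured voltage)."""
    slope = (e2 - e1) / (math.log10(c2) - math.log10(c1))
    offset = e1 - slope * math.log10(c1)
    return slope, offset

def voltage_to_concentration(e, slope, offset):
    """Invert the calibrated response to estimate ion concentration."""
    return 10 ** ((e - offset) / slope)

# Hypothetical NO3 standards: 10 mg/L reads 150 mV, 100 mg/L reads 95 mV
slope, offset = two_point_calibration(10.0, 150.0, 100.0, 95.0)
est = voltage_to_concentration(122.5, slope, offset)  # sample voltage in mV
```

Re-running this fit before each sample, as described above, absorbs the sensitivity and baseline drift accumulated during continuous measurement.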

Reconfiguration of Physical Structure of Vegetation by Voxelization Based on 3D Point Clouds (3차원 포인트 클라우드 기반 복셀화에 의한 식생의 물리적 구조 재구현)

  • Ahn, Myeonghui;Jang, Eun-kyung;Bae, Inhyeok;Ji, Un
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.6 / pp.571-581 / 2020
  • Vegetation affects water-level change and flow resistance in rivers and impacts waterway ecosystems as a whole. It is therefore important to have accurate information about the species, shape, and size of any river vegetation. However, collecting full vegetation data on-site is not easy, so recent studies have attempted to obtain large amounts of vegetation data using terrestrial laser scanning (TLS). Because of the complex shape of vegetation, accurate information about the canopy area is also difficult to obtain, and a complex range of variables imposes further limitations. In this study, the physical structure of vegetation was therefore analyzed by reconfiguring high-resolution point cloud data collected through 3-dimensional terrestrial laser scanning (3D TLS) into voxels. Each physical structure was analyzed under three conditions: a simple vegetation formation without leaves, a complete formation with leaves, and a patch-scale vegetation formation. In the raw data, outliers and unnecessary data were filtered out by Statistical Outlier Removal (SOR), which removed 17%, 26%, and 25% of the data, respectively. Vegetation volume by voxel size was then reconfigured from the post-processed point clouds and compared with the measured vegetation volume; the analysis showed margins of error of 8%, 25%, and 63% for the three conditions, respectively: the larger the target sample, the larger the error. The vegetation surface looked visually similar when the voxel was resized, but the volume of the entire vegetation was susceptible to error.
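The voxel-based volume estimate described above can be sketched in a few lines: snap each point to an integer voxel index and count the occupied voxels. This is a toy illustration of the general technique, not the authors' pipeline.

```python
def voxelize(points, voxel_size):
    """Map each (x, y, z) point to the integer index of the voxel containing it."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def voxel_volume(points, voxel_size):
    """Estimate occupied volume as (number of occupied voxels) * voxel_size**3."""
    return len(voxelize(points, voxel_size)) * voxel_size ** 3

# Toy cloud: the 8 corners of a near-unit cube
cloud = [(x, y, z) for x in (0.0, 0.9) for y in (0.0, 0.9) for z in (0.0, 0.9)]
coarse = voxel_volume(cloud, 1.0)  # one voxel covers all points
fine = voxel_volume(cloud, 0.5)    # eight voxels, one per corner
```

The sensitivity to voxel size that the study reports shows up even here: the estimate depends on how the chosen grid happens to intersect the point cloud.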

Influencing Factors Analysis for the Number of Participants in Public Contracts Using Big Data (빅데이터를 활용한 공공계약의 입찰참가자수 영향요인 분석)

  • Choi, Tae-Hong;Lee, Kyung-Hee;Cho, Wan-Sup
    • The Journal of Bigdata / v.3 no.2 / pp.87-99 / 2018
  • This study analyzes the factors affecting the number of bidders in public contracts by collecting contract data, such as purchases of goods, services, and facility construction, through KONEPS, which covers various forms of public contracts. The number of bidders matters in public contracts because it is a minimum criterion for judging whether a rational contract has been concluded through fair competition, and it is closely related to the budget savings of the ordering organization and the profitability of the bidders. The purpose of this study is to analyze the factors that determine bidder participation in public contracts and to present the problems and policy implications of such participation. This research is distinguished from existing sampling-based research in that it analyzes 4.35 million contracts for purchasing, services, and facility construction, placed on the national market by 50,000 public institutions, with 300,000 individual companies and corporations participating. In the research model, the number of announcement days, the budget amount, the contract method, and the winning-bid method are used as independent variables, and the number of bidders is used as the dependent variable. Big data and multidimensional analysis techniques are used for the analysis. The conclusions are as follows. First, the larger the budget of a public works project, the smaller the number of participants. Second, among contract methods, restricted competition attracts more participants than general competition. Third, the duration of the bidding notice did not significantly affect the number of bidders. Fourth, among winning-bid methods, the qualification-examination bidding system attracts more bidders than the lowest-bid system.
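The first finding (larger budgets attract fewer bidders) is a regression-style relationship between an independent and a dependent variable. A minimal sketch of estimating such a slope on entirely made-up toy numbers, not the study's data:

```python
def ols_slope(x, y):
    """Least-squares slope of y on x (simple linear regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

# Hypothetical toy data: project budget vs. number of bidders
budget = [1.0, 2.0, 3.0, 4.0]
bidders = [40.0, 31.0, 22.0, 13.0]
slope = ols_slope(budget, bidders)  # negative slope: bigger budget, fewer bidders
```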

A Fast Processor Architecture and 2-D Data Scheduling Method to Implement the Lifting Scheme 2-D Discrete Wavelet Transform (리프팅 스킴의 2차원 이산 웨이브릿 변환 하드웨어 구현을 위한 고속 프로세서 구조 및 2차원 데이터 스케줄링 방법)

  • Kim Jong Woog;Chong Jong Wha
    • Journal of the Institute of Electronics Engineers of Korea SD / v.42 no.4 s.334 / pp.19-28 / 2005
  • In this paper, we propose a parallel fast 2-D discrete wavelet transform hardware architecture based on the lifting scheme. The proposed architecture improves the 2-D processing speed and reduces the internal memory buffer size. Previous lifting-scheme-based parallel 2-D wavelet transform architectures consisted of row-direction and column-direction modules, each a pair of prediction and update filter modules. In the 2-D wavelet transform, the column-direction processing uses the row-direction results, which are generated in row-direction rather than column-direction order, so most hardware architectures need an internal buffer memory. The proposed architecture focuses on reducing the internal memory buffer size and the total calculation time. To reduce the total calculation time, we propose a 4-way data flow scheduling and a memory-based parallel hardware architecture. The 4-way data flow scheduling increases the row-direction parallelism and reduces the initial latency of starting the row-direction calculation. In this hardware architecture, the internal buffer memory is not used to store the results of the row-direction calculation; instead, it holds intermediate values of the column-direction calculation. This method is very effective in column-direction processing, because the input data for the column direction are not generated in column-direction order. The proposed architecture was implemented in VHDL on an Altera Stratix device. The implementation results show that the overall calculation time is reduced from $N^2/2+\alpha$ to $N^2/4+\beta$, and the internal buffer memory size is reduced to around $50\%$ of that of previous works.
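The prediction/update filter pair the abstract refers to is the core of any lifting scheme. As a software reference point (not the paper's hardware design), one Haar-style lifting level looks like this; the 2-D transform applies it to every row and then to every column of the results, which is exactly the ordering problem the paper's scheduling addresses.

```python
def haar_lift(signal):
    """One lifting level: split into even/odd samples, predict the odd
    samples from the even ones, then update the even samples."""
    even, odd = signal[::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]         # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

approx, detail = haar_lift([2.0, 4.0, 6.0, 8.0])
```

Because each row emits its outputs in row order, a column pass started immediately must buffer partial column sums, which is why the architecture keeps only column-direction intermediates in the internal memory.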

Complexity-based Sample Adaptive Offset Parallelism (복잡도 기반 적응적 샘플 오프셋 병렬화)

  • Ryu, Eun-Kyung;Jo, Hyun-Ho;Seo, Jung-Han;Sim, Dong-Gyu;Kim, Doo-Hyun;Song, Joon-Ho
    • Journal of Broadcast Engineering / v.17 no.3 / pp.503-518 / 2012
  • In this paper, we propose a complexity-based parallelization method for the sample adaptive offset (SAO) algorithm, one of the HEVC in-loop filters. The SAO algorithm can be regarded as a region-based process, with the regions obtained and represented by a quad-tree scheme. An offset that minimizes the reconstruction error is sent for each partitioned region. The SAO of HEVC can be parallelized at the data level; however, because the sizes and complexities of the SAO regions are irregular, workload imbalance occurs on multi-core platforms. In this paper, we propose an LCU-based SAO algorithm and a complexity prediction algorithm for each LCU. With the proposed complexity-based LCU processing, the algorithm is faster than the sequential implementation by a factor of 2.38, and 21% faster than a regular parallel SAO implementation.
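For context on what each region's work consists of, here is a toy sketch of SAO's band-offset idea (not the paper's parallel scheduler): 8-bit samples fall into 32 bands of width 8, and selected bands carry an additive offset, clipped back to the valid range.

```python
def sao_band_offset(samples, offsets):
    """Simplified SAO band-offset pass: band index = sample >> 3 for
    8-bit samples; offsets maps band index -> additive correction."""
    return [min(255, max(0, s + offsets.get(s >> 3, 0))) for s in samples]

# Band 1 (values 8..15) gets +2; band 31 (values 248..255) gets +5
corrected = sao_band_offset([10, 100, 255], {1: 2, 31: 5})
```

Since the cost of this per-sample pass scales with region size, irregular quad-tree regions give cores unequal work, which motivates the paper's per-LCU complexity prediction.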

Performance of Uncompressed Audio Distribution System over Ethernet with a L1/L2 Hybrid Switching Scheme (L1/L2 혼합형 중계 방법을 적용한 이더넷 기반 비압축 오디오 분배 시스템의 성능 분석)

  • Nam, Wie-Jung;Yoon, Chong-Ho;Park, Pu-Sik;Jo, Nam-Hong
    • Journal of the Institute of Electronics Engineers of Korea TC / v.46 no.12 / pp.108-116 / 2009
  • In this paper, we propose an Ethernet-based audio distribution system with a new L1/L2 hybrid switching scheme and evaluate its performance. The proposed scheme not only offers the guaranteed low latency and jitter characteristics that are essential for distributing high-quality uncompressed audio traffic, but also provides efficient transmission of data traffic in an Ethernet environment. The audio distribution system consists of a master node and a number of relay nodes, all mutually connected in a daisy-chain topology through uplinks and downlinks. The master node generates an audio frame every 125 us cycle; the frame has 24 time-slotted audio channels carrying 24 stereo channels of 16-bit PCM-sampled audio. On receiving the audio frame from its upstream node via the downlink, each intermediate node inserts its audio traffic into the time slot reserved for it, then relays the frame to the next node through physical-layer (L1) transmission, i.e., repeating. After reaching the end node, the audio frame is looped back through the uplink. As the frame repeats through the uplink, each node copies the audio slot it is due to receive and plays the audio. When the audio transmission is complete, each node works as a normal L2 switch, so data frames are switched during the remaining period. To support this L1/L2 hybrid switching capability, we insert glue logic for parsing and multiplexing audio and data frames at the MII (Media Independent Interface) between the physical and data link layers. The proposed scheme provides better delay performance and transmission efficiency than legacy Ethernet-based audio distribution systems. To verify the feasibility of the proposed L1/L2 hybrid switching scheme, we use OMNeT++ as a simulation tool with various parameters. The simulation results show that the proposed scheme provides outstanding characteristics in terms of both the jitter of audio traffic and the transmission efficiency of data traffic.
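The downlink pass described above, where each relay node fills its reserved slot as the frame passes, can be modeled as a trivial slot-insertion loop. This is a conceptual toy, not the OMNeT++ model or the MII glue logic; slot assignments and payload names are made up.

```python
SLOTS = 24  # 24 time-slotted audio channels per 125 us cycle

def relay_downlink(frame, node_slot, payload):
    """A relay node writes its audio payload into its reserved slot and
    forwards the frame to the next node (the L1 'repeating' pass)."""
    frame = list(frame)
    frame[node_slot] = payload
    return frame

frame = [None] * SLOTS  # frame as emitted by the master node
for slot, pcm in [(0, "node0-pcm"), (1, "node1-pcm"), (2, "node2-pcm")]:
    frame = relay_downlink(frame, slot, pcm)  # frame hops node to node
```

Because every node touches only its own slot and forwards at L1, the per-cycle latency and jitter are fixed by the topology, which is what the scheme exploits.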

Development of Sauces Made from Gochujang Using the Quality Function Deployment Method: Focused on U.S. and Chinese Markets (품질기능전개(Quality Function Deployment) 방법을 적용한 고추장 소스 콘셉트 개발: 미국과 중국 시장을 중심으로)

  • Lee, Seul Ki;Kim, A Young;Hong, Sang Pil;Lee, Seung Je;Lee, Min A
    • Journal of the Korean Society of Food Science and Nutrition / v.44 no.9 / pp.1388-1398 / 2015
  • Quality Function Deployment (QFD) is the most complete and comprehensive method for translating what customers need into a product. This study utilized QFD to develop sauces made from Gochujang and to determine how to fulfill international customers' requirements. A customer survey and an expert opinion survey were conducted from May 13 to August 22, 2014, targeting 220 consumers and 20 experts in the U.S. and China; a total of 208 usable responses (190 consumers and 18 experts) were finally selected. The top three customer requirements for Gochujang sauces were identified as fresh flavor (4.40), making better flavor (3.99), and cooking availability (3.90). Thirty-three engineering characteristics were developed. The calculation of the relative importance of the engineering characteristics identified 'cooking availability', 'free sample and food testing', 'unique concept', and 'development of brand' as the highest. The relative importance of the engineering characteristics, their correlations, and their technical difficulties are ranked, and these results could contribute to the development of Korean sauces based on customer needs and engineering characteristics.
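The "relative importance of engineering characteristics" in QFD is conventionally a weighted roll-up of the house-of-quality relationship matrix. A minimal sketch, with a hypothetical 2x2 matrix on the common 9/3/1 relationship scale (the study used 33 characteristics):

```python
def engineering_priorities(customer_weights, relationships):
    """QFD roll-up: priority of each engineering characteristic is the sum
    over customer requirements of (importance x relationship strength).
    Rows of `relationships` are requirements; columns are characteristics."""
    return [sum(w * r for w, r in zip(customer_weights, col))
            for col in zip(*relationships)]

weights = [4.40, 3.99]  # e.g. fresh flavor, making better flavor (from the study)
matrix = [[9, 1],       # hypothetical relationship strengths
          [3, 3]]
priorities = engineering_priorities(weights, matrix)
```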

Building a battery deterioration prediction model using real field data (머신러닝 기법을 이용한 납축전지 열화 예측 모델 개발)

  • Choi, Keunho;Kim, Gunwoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.243-264 / 2018
  • Although the worldwide battery market has recently spurred the development of lithium secondary batteries, lead-acid batteries (rechargeable batteries), which perform well and can be reused, are consumed in a wide range of industrial fields. However, lead-acid batteries have a serious problem: a battery deteriorates quickly once even one of the several cells packed inside it begins to degrade. To overcome this problem, previous research has attempted to identify the mechanism of battery deterioration in many ways. However, most previous studies have analyzed the deterioration mechanism using data obtained in a laboratory rather than in the real world, even though using real data can increase the feasibility and applicability of a study's findings. Therefore, this study aims to develop a model that predicts battery deterioration using data obtained in the real world. To this end, we collected data describing changes in battery state by attaching sensors that monitor battery condition in real time to dozens of golf carts operating on a real golf course, obtaining a total of 16,883 samples. We then developed a model that predicts a precursor phenomenon of battery deterioration by analyzing the sensor data with machine learning techniques.
As initial independent variables, we used 1) inbound time of a cart, 2) outbound time of a cart, 3) duration (from outbound time to charge time), 4) charge amount, 5) used amount, 6) charge efficiency, 7) lowest temperature of battery cells 1 to 6, 8) lowest voltage of battery cells 1 to 6, 9) highest voltage of battery cells 1 to 6, 10) voltage of battery cells 1 to 6 at the beginning of operation, 11) voltage of battery cells 1 to 6 at the end of charge, 12) used amount of battery cells 1 to 6 during operation, 13) used amount of the battery during operation (Max-Min), 14) duration of battery use, and 15) highest current during operation. Since the values of the cell-level variables (lowest temperature, lowest voltage, highest voltage, voltage at the beginning of operation, voltage at the end of charge, and used amount during operation of cells 1 to 6) are similar across cells, we conducted principal component analysis with varimax orthogonal rotation to mitigate the multicollinearity problem. Based on the results, we created new variables by averaging the independent variables clustered together and used them as the final independent variables in place of the original ones, thereby reducing the dimensionality. We used decision trees, logistic regression, and Bayesian networks as algorithms for building prediction models, and also built models using bagging and boosting of each of them, as well as random forest. Experimental results show that the prediction model using bagging of decision trees yields the best accuracy, 89.3923%. This study has the limitation that additional variables affecting battery deterioration, such as weather (temperature, humidity) and driving habits, were not considered; we plan to address them in future research.
Nevertheless, the battery deterioration prediction model proposed in the present study is expected to enable effective and efficient management of batteries used in the field and to dramatically reduce the costs incurred when battery deterioration goes undetected.
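Bagging of decision trees, the best-performing model above, trains each base model on a bootstrap resample and majority-votes the predictions. The following toy sketch uses a hypothetical one-feature decision stump as the weak learner and made-up data; it illustrates the ensemble mechanics only, not the study's model.

```python
import random

def majority(labels):
    """Most frequent label in a non-empty list."""
    return max(set(labels), key=labels.count)

def train_stump(data):
    """Hypothetical weak learner: a one-feature decision stump whose
    threshold is the feature mean; each side predicts its majority label."""
    t = sum(f for f, _ in data) / len(data)
    hi = [y for f, y in data if f >= t]
    lo = [y for f, y in data if f < t]
    hi_label = majority(hi) if hi else 0
    lo_label = majority(lo) if lo else 0
    return lambda x: hi_label if x >= t else lo_label

def bagging_predict(data, x, n_models=25, seed=0):
    """Bagging: train each stump on a bootstrap resample of the data,
    then take a majority vote over the models' predictions."""
    rng = random.Random(seed)
    votes = [train_stump([rng.choice(data) for _ in data])(x)
             for _ in range(n_models)]
    return majority(votes)

# Toy (feature, label) pairs: high feature values mean "deteriorating" (1)
data = [(1, 0), (2, 0), (3, 0), (4, 0), (101, 1), (102, 1), (103, 1), (104, 1)]
```

Averaging many high-variance trees this way is what gives bagging its robustness relative to a single tree.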

Effect of Gas Flow Modulation on Etch Depth Uniformity for Plasma Etching of 150 mm GaAs Wafers (150 mm GaAs 웨이퍼의 플라즈마 식각에서 식각 깊이의 균일도에 대한 가스 흐름의 최적화 연구)

  • 정필구;임완태;조관식;전민현;임재영;이제원;조국산
    • Journal of the Korean Vacuum Society / v.11 no.2 / pp.113-118 / 2002
  • We developed engineering methods to control gas flow in a plasma reactor in order to achieve good etch depth uniformity for large-area GaAs etching. A finite-difference numerical method proved quite useful for simulating the gas flow distribution in the reactor during dry etching of GaAs. The experimental results in $BCl_3/N_2/SF_6/He$ ICP plasmas confirmed that the simulated data fitted the real data very well. We note that a focus ring can help improve both gas flow and etch uniformity for 150 mm diameter GaAs plasma etch processing. The simulation results showed that, with the reactor and electrode fixed, optimizing the clamp configuration could reduce the gas flow nonuniformity to as low as ±1.5% on a 100 mm (4 inch) GaAs wafer and ±3% on a 150 mm (6 inch) wafer. Comparison between the simulated gas flow uniformity and the real etch depth distribution data leads to the conclusion that control of the gas flow distribution in the chamber is significantly important in order to achieve excellent dry etch uniformity on large-area GaAs wafers.
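As a generic illustration of the finite-difference approach mentioned above (not the authors' reactor model), a steady scalar field on a grid can be relaxed with Jacobi iteration: each interior cell is repeatedly replaced by the average of its four neighbors while boundary values stay fixed.

```python
def jacobi_laplace(grid, iters=200):
    """Jacobi relaxation for the 2-D Laplace equation on a rectangular
    grid; the outermost cells act as fixed boundary conditions."""
    g = [row[:] for row in grid]
    for _ in range(iters):
        new = [row[:] for row in g]
        for i in range(1, len(g) - 1):
            for j in range(1, len(g[0]) - 1):
                new[i][j] = 0.25 * (g[i - 1][j] + g[i + 1][j]
                                    + g[i][j - 1] + g[i][j + 1])
        g = new
    return g

# 3x3 toy grid: boundary held at 1.0, single interior cell starts at 0.0
field = jacobi_laplace([[1.0, 1.0, 1.0],
                        [1.0, 0.0, 1.0],
                        [1.0, 1.0, 1.0]])
```

Hardware such as a focus ring or clamp enters this kind of model through the boundary conditions, which is why reshaping them can flatten the simulated flow distribution.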

A comparison of imputation methods using nonlinear models (비선형 모델을 이용한 결측 대체 방법 비교)

  • Kim, Hyein;Song, Juwon
    • The Korean Journal of Applied Statistics / v.32 no.4 / pp.543-559 / 2019
  • Data often include missing values for various reasons. If the missing-data mechanism is not MCAR, analysis based only on fully observed cases may cause estimation bias and decrease the precision of the estimates, since partially observed cases are excluded. Missing values cause especially serious problems when data include many variables. Many imputation techniques have been suggested to overcome this difficulty. However, imputation methods using parametric models may not fit real data well when the model assumptions are not satisfied. In this study, we review imputation methods using nonlinear models, such as kernel, resampling, and spline methods, which are robust to model assumptions. In addition, we suggest utilizing imputation classes to improve imputation accuracy, and adding random errors to correctly estimate the variance of the estimates, in nonlinear imputation models. The performance of imputation methods using nonlinear models is compared under various simulated data settings. The simulation results indicate that the performance of the imputation methods varies as the data settings change; however, imputation based on kernel regression or the penalized spline performs better in most situations, and utilizing imputation classes or adding random errors improves the performance of imputation methods using nonlinear models.
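A minimal sketch of kernel-regression imputation, the approach the simulations favor: fit a Nadaraya-Watson estimator on the observed pairs and fill each missing response with its prediction. The data and bandwidth are illustrative; the study's methods also cover resampling, splines, imputation classes, and added random errors.

```python
import math

def nadaraya_watson(x_obs, y_obs, x, bandwidth=1.0):
    """Gaussian-kernel (Nadaraya-Watson) regression estimate of y at x."""
    w = [math.exp(-(((x - xi) / bandwidth) ** 2) / 2) for xi in x_obs]
    return sum(wi * yi for wi, yi in zip(w, y_obs)) / sum(w)

def kernel_impute(xs, ys, bandwidth=1.0):
    """Replace each missing response (None) with a kernel-regression
    prediction built from the fully observed pairs."""
    obs = [(x, y) for x, y in zip(xs, ys) if y is not None]
    x_obs = [x for x, _ in obs]
    y_obs = [y for _, y in obs]
    return [y if y is not None else nadaraya_watson(x_obs, y_obs, x, bandwidth)
            for x, y in zip(xs, ys)]

# Missing value at x = 2 on an underlying linear trend y = 2x
completed = kernel_impute([0.0, 1.0, 2.0, 3.0, 4.0],
                          [0.0, 2.0, None, 6.0, 8.0])
```

Note that this deterministic fill understates variability; adding a random error to each imputed value, as the study suggests, restores a correct variance estimate.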