• Title/Abstract/Keywords: collection optimization


The study of Data Logging Model Development for ICT Instruction in elementary school (초등학교 ICT 활용 교육을 위한 데이터 로깅 모델 개발에 관한 연구)

  • Lee, Gil-Kyung;Hong, Myung-Hui
    • Annual Conference of KIPS
    • /
    • 2007.05a
    • /
    • pp.1410-1413
    • /
    • 2007
  • ICT (Information & Communication Technology) education currently conducted in elementary schools is divided into ICT literacy education and ICT utilization education. ICT literacy education is organized around basic literacy in information and communication technologies, grounded in computer science. With the spread of high-speed networks and computer use, academic achievement in literacy education has been improving considerably. Accordingly, future ICT education needs to devote more effort to utilization education, applying information and communication technology across all subjects and changing not only the content of instruction but also its methods. Using computers in ICT utilization education starts with data logging, the process of entering data acquired or measured in everyday life into the computer. In today's ubiquitous computing environment, it is very important to build a computing environment that captures data immediately where it is generated and delivers information immediately where results are needed. This study therefore proposes a data logging model that applies real-time data logging, one of the basic concepts of ubiquitous computing, to bring the various raw data generated during ICT-based learning activities in elementary schools into the computer; experimental elements of the curriculum, centered on elementary school science, were analyzed and applied to the developed model. Applying the data logging model made it easy to collect the raw data and to process and analyze it, so that more class time could be devoted to discussion and debate of the experimental results based on accurate data, and a new model for ICT utilization education in schools was presented.
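
The abstract above describes real-time logging of raw experimental data into a computer but gives no implementation detail. The following is a minimal sketch, in Python, of what such a logging loop could look like; the `read_sensor()` function, the sampling interval, and the CSV file name are hypothetical placeholders, not part of the original study.

```python
import csv
import random
import time
from datetime import datetime

def read_sensor() -> float:
    """Hypothetical sensor read; replace with a real probe (e.g., a temperature sensor)."""
    return 20.0 + random.random()  # stand-in value in degrees Celsius

def log_experiment(path: str = "experiment_log.csv", samples: int = 10, interval_s: float = 1.0) -> None:
    """Append timestamped raw readings to a CSV file for later processing and analysis."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "value"])
        for _ in range(samples):
            writer.writerow([datetime.now().isoformat(), read_sensor()])
            time.sleep(interval_s)

if __name__ == "__main__":
    log_experiment()
```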

Optimization of fractionation efficiency (FE) and throughput (TP) in a large scale splitter less full-feed depletion SPLITT fractionation (Large scale FFD-SF) (대용량 splitter less full-feed depletion SPLITT 분획법 (Large scale FFD-SF)에서의 분획효율(FE)및 시료처리량(TP)의 최적화)

  • Eum, Chul Hun;Noh, Ahrahm;Choi, Jaeyeong;Yoo, Yeongsuk;Kim, Woon Jung;Lee, Seungho
    • Analytical Science and Technology
    • /
    • v.28 no.6
    • /
    • pp.453-459
    • /
    • 2015
  • Split-flow thin cell fractionation (SPLITT fractionation, SF) is a particle separation technique that allows continuous (and thus preparative-scale) separation into two subpopulations based on particle size or density. In SF, there are two basic performance parameters. One is the throughput (TP), defined as the amount of sample that can be processed in a unit time. The other is the fractionation efficiency (FE), defined as the number percentage of particles that have the size predicted by theory. The full-feed depletion mode of SF (FFD-SF) has only one inlet for the sample feed, and the channel is equipped with a flow stream splitter only at the outlet. In the conventional FFD mode, it is difficult to enlarge the channel because of the splitter inside it, so a large-scale splitter-less FFD-SF was used to increase TP by increasing the channel scale. In this study, an FFD-SF channel with no flow stream splitters ('splitter-less') was developed for large-scale fractionation and then tested for optimum TP and FE by varying the sample concentration and the flow rates at the inlet and outlet of the channel. Polyurethane (PU) latex beads with two different size distributions (about 3~7 µm and about 2~30 µm) were used for the test. The sample concentration was varied from 0.2 to 0.8% (wt/vol), and the channel flow rate was varied among 70, 100, 120, and 160 mL/min. The fractionated particles were monitored by optical microscopy (OM), and the sample recovery was determined by collecting the particles on a 0.1 µm membrane filter. Accumulation of relatively large micron-sized particles in the channel could be prevented by feeding carrier liquid. It was found that, in order to achieve effective TP, the sample concentration should be higher than 0.4%.
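
As a worked illustration of the two performance parameters defined above, the short sketch below computes TP as the mass of sample processed per unit time (sample concentration times feed flow rate) and FE as the number percentage of collected particles whose size matches a theoretical cutoff. The numbers and the cutoff check are illustrative assumptions, not values from the paper.

```python
def throughput_g_per_min(concentration_wt_vol_pct: float, feed_rate_mL_per_min: float) -> float:
    """TP: grams of sample processed per minute = (g / 100 mL) * (mL / min)."""
    return concentration_wt_vol_pct / 100.0 * feed_rate_mL_per_min

def fractionation_efficiency(diameters_um, cutoff_um: float) -> float:
    """FE: number % of particles in a collected fraction whose diameter is at or
    below the theoretical cutoff (illustrative definition)."""
    below = sum(1 for d in diameters_um if d <= cutoff_um)
    return 100.0 * below / len(diameters_um)

# Illustrative values only (not from the paper):
print(throughput_g_per_min(0.4, 160))                              # 0.64 g/min
print(fractionation_efficiency([2.5, 3.8, 4.9, 6.2, 8.1], 5.0))    # 60.0 %
```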

Performance Evaluation of Machine Learning and Deep Learning Algorithms in Crop Classification: Impact of Hyper-parameters and Training Sample Size (작물분류에서 기계학습 및 딥러닝 알고리즘의 분류 성능 평가: 하이퍼파라미터와 훈련자료 크기의 영향 분석)

  • Kim, Yeseul;Kwak, Geun-Ho;Lee, Kyung-Do;Na, Sang-Il;Park, Chan-Won;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.5
    • /
    • pp.811-827
    • /
    • 2018
  • The purpose of this study is to compare a machine learning algorithm and a deep learning algorithm for crop classification using multi-temporal remote sensing data. To this end, the impacts of (1) hyper-parameters and (2) training sample size on the machine learning and deep learning algorithms were compared and analyzed for Haenam-gun, Korea and Illinois State, USA. In the comparison experiment, a support vector machine (SVM) was applied as the machine learning algorithm and a convolutional neural network (CNN) as the deep learning algorithm. In particular, a 2D-CNN considering two-dimensional spatial information and a 3D-CNN that extends the 2D-CNN with a time dimension were applied. The experiment showed that the optimal hyper-parameter values of the CNNs, selected over various hyper-parameters, were similar in the two study areas compared with those of the SVM. Based on this result, although it takes much time to optimize a CNN model, it appears feasible to apply transfer learning that extends an optimized CNN model to other regions. In the experiments with various training sample sizes, the impact of sample size on the CNNs was larger than on the SVM. In particular, this impact was exaggerated in Illinois State, which has heterogeneous spatial patterns. In addition, the lowest classification performance of the 3D-CNN was observed in Illinois State, which is considered to be due to over-fitting caused by the complexity of the model. That is, although the training accuracy of the 3D-CNN model was high, its classification performance was relatively degraded by the heterogeneous patterns and the noise in the input data. This result implies that a proper classification algorithm should be selected considering the spatial characteristics of the study area, and that a large amount of training samples is necessary to guarantee higher classification performance with a CNN, particularly a 3D-CNN.
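
To make the 2D-CNN vs. 3D-CNN distinction above concrete, here is a minimal PyTorch sketch (an assumption of this rewrite; the paper does not state its framework or layer configuration). The 2D variant stacks the temporal images as input channels, while the 3D variant keeps time as an explicit convolution dimension.

```python
import torch
import torch.nn as nn

T, B, H, W, N_CLASSES = 10, 4, 9, 9, 5  # time steps, batch, patch size, classes (illustrative)

# 2D-CNN: temporal images are flattened into input channels.
cnn2d = nn.Sequential(
    nn.Conv2d(T, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, N_CLASSES),
)

# 3D-CNN: the time axis is kept as a separate convolution dimension.
cnn3d = nn.Sequential(
    nn.Conv3d(1, 32, kernel_size=(3, 3, 3), padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, N_CLASSES),
)

x2d = torch.randn(B, T, H, W)        # (batch, time-as-channels, height, width)
x3d = torch.randn(B, 1, T, H, W)     # (batch, channel, time, height, width)
print(cnn2d(x2d).shape, cnn3d(x3d).shape)  # both (B, N_CLASSES)
```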

Statistical Analysis of Protein Content in Wheat Germplasm Based on Near-infrared Reflectance Spectroscopy (밀 유전자원의 근적외선분광분석 예측모델에 의한 단백질 함량 변이분석)

  • Oh, Sejong;Choi, Yu Mi;Yoon, Hyemyeong;Lee, Sukyeung;Yoo, Eunae;Hyun, Do Yoon;Shin, Myoung-Jae;Lee, Myung Chul;Chae, Byungsoo
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.64 no.4
    • /
    • pp.353-365
    • /
    • 2019
  • A near-infrared reflectance spectroscopy (NIRS) prediction model was established to build a rapid analysis system for wheat germplasm and to provide statistical information on the characteristics of protein content. The variability index value (VIV) of the calibration resources was 0.80, the average protein content was 13.2%, and the content range was from 7.0% to 13.2%. After measuring the near-infrared spectra of the calibration resources, the NIRS prediction model was developed through a regression analysis between protein content and spectral data and then optimized by excluding outliers. The standard error of calibration, R2, and slope of the optimized model were 0.132, 0.997, and 1.000, respectively, and those of the external validation were 0.994, 0.191, and 1.013, respectively. Based on these results, the developed NIRS model could be applied to the rapid analysis of protein in wheat. The distribution of NIRS protein content of 6,794 resources was analyzed using a normal distribution analysis. The VIV was 0.79, the average protein content was 12.1%, and the content ranges of the resources accounting for 42.1% and 68% of the total accessions were 10-13% and 9.5-14.6%, respectively. The total resources were classified into breeding lines (3,128), landraces (2,705), and varieties (961). The VIV of the breeding lines was 0.80, the average protein content was 11.8%, and the contents of 68% of these resources ranged from 9.2% to 14.5%. The VIV of the landraces was 0.76, the average protein content was 12.1%, and the content range of 68% of these accessions was 9.8-14.4%. The VIV of the varieties was 0.80, the average protein content was 12.8%, and the accessions representing 68% of these resources ranged from 10.2% to 15.4%. These results should be helpful to experts involved in wheat breeding.
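
The calibration statistics quoted above (standard error of calibration, R2, slope) can be reproduced in outline with the sketch below. The use of PLS regression, the synthetic spectra, and the simplified RMSE-style standard error are assumptions of this rewrite; the paper does not state which regression method or software it used.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

# Synthetic stand-ins for NIR spectra (rows) and reference protein contents (%).
rng = np.random.default_rng(0)
spectra = rng.normal(size=(120, 700))           # 120 samples x 700 wavelengths
protein = 7.0 + 6.0 * rng.random(120)           # reference values in the 7-13 % range
protein += 0.5 * spectra[:, 100]                # weak spectral dependence so the model can fit

model = PLSRegression(n_components=8).fit(spectra, protein)
predicted = model.predict(spectra).ravel()

sec = np.sqrt(np.mean((protein - predicted) ** 2))   # simplified standard error of calibration
r2 = r2_score(protein, predicted)
slope = np.polyfit(protein, predicted, 1)[0]         # slope of predicted vs. reference
print(f"SEC={sec:.3f}, R2={r2:.3f}, slope={slope:.3f}")
```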

A Comparison between Multiple Satellite AOD Products Using AERONET Sun Photometer Observations in South Korea: Case Study of MODIS, VIIRS, Himawari-8, and Sentinel-3 (우리나라에서 AERONET 태양광도계 자료를 이용한 다종위성 AOD 산출물 비교평가: MODIS, VIIRS, Himawari-8, Sentinel-3의 사례연구)

  • Kim, Seoyeon;Jeong, Yemin;Youn, Youjeong;Cho, Subin;Kang, Jonggu;Kim, Geunah;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.543-557
    • /
    • 2021
  • Because aerosols have different spectral characteristics depending on particle size and composition and on the satellite sensor, a comparative analysis of aerosol products from various satellite sensors is required. In South Korea, however, a comprehensive comparison of the various official satellite AOD (Aerosol Optical Depth) products over a long period is not easily found. In this paper, we assessed the performance of the AOD products from MODIS (Moderate Resolution Imaging Spectroradiometer), VIIRS (Visible Infrared Imaging Radiometer Suite), Himawari-8, and Sentinel-3 against AERONET (Aerosol Robotic Network) sun photometer observations for the period between January 2015 and December 2019. Seasonal and geographical characteristics of the satellite AOD accuracy were also analyzed. The MODIS products, which have been accumulated for a long time and optimized by the new MAIAC (Multiangle Implementation of Atmospheric Correction) algorithm, showed the best accuracy (CC=0.836), followed by the products from VIIRS and Himawari-8. On the other hand, Sentinel-3 AOD did not appear to be of good quality because, according to ESA (European Space Agency), the sensor was launched recently and is not yet sufficiently optimized. The AOD of MODIS, VIIRS, and Himawari-8 did not show significant differences in accuracy between seasons or between urban and non-urban regions, but the mixed pixel problem was partly found in a few coastal regions. Because AOD is an essential component of atmospheric correction, the results of this study can serve as a reference for future work on atmospheric correction for the Korean CAS (Compact Advanced Satellite) series.
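
As an illustration of the validation step described above (comparing satellite AOD against AERONET sun photometer AOD), the sketch below computes the correlation coefficient and RMSE for already-collocated pairs. The collocation window and the sample arrays are assumptions for illustration, not the paper's actual matchup protocol.

```python
import numpy as np

def validate_aod(satellite_aod: np.ndarray, aeronet_aod: np.ndarray) -> dict:
    """Compare collocated satellite AOD retrievals against AERONET reference values."""
    cc = np.corrcoef(satellite_aod, aeronet_aod)[0, 1]            # Pearson correlation (CC)
    rmse = np.sqrt(np.mean((satellite_aod - aeronet_aod) ** 2))   # root-mean-square error
    bias = np.mean(satellite_aod - aeronet_aod)                   # mean bias
    return {"CC": cc, "RMSE": rmse, "bias": bias}

# Illustrative collocated pairs (e.g., satellite pixel vs. a +/-30 min AERONET mean):
sat = np.array([0.21, 0.35, 0.48, 0.15, 0.62])
aer = np.array([0.19, 0.33, 0.52, 0.18, 0.58])
print(validate_aod(sat, aer))
```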

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.73-85
    • /
    • 2013
  • In today's information society, the importance of knowledge services that turn information into value is growing day by day. With the development of IT, it has also become easy to collect and use information, and many companies in a variety of industries actively use customer information for marketing. In the 21st century, companies have been actively using culture and the arts, closely linked to their commercial interests, to manage their corporate image and marketing. However, it is difficult for companies to attract or maintain consumers' interest through technology alone, so cultural activities have become a common tool of differentiation among firms, and many firms have used the customer experience as a new marketing strategy to respond effectively to a competitive market. Accordingly, the need for personalized services that provide people with new experiences based on personal profile information containing individual characteristics is emerging rapidly. Personalized service using individual profile information such as language, symbols, behavior, and emotions is therefore very important today; through it, the interaction between people and content can be judged and customer experience and satisfaction can be maximized. Among the various related works on customer-centered service, emotion recognition research in particular has emerged recently. Existing research has mostly performed emotion recognition using bio-signals, and most studies target voice and face, which show large emotional changes. However, there are several difficulties in predicting people's emotions caused by the limitations of equipment and service environments. In this paper, we therefore develop an emotion prediction model based on a vision-based interface to overcome these limitations. Emotion recognition based on people's gestures and postures has been studied by several researchers. This paper develops a model that recognizes people's emotional states from body gesture and posture using the difference image method and searches for the best-validated model for predicting four kinds of emotions. The proposed model aims to automatically determine and predict four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in the KOCCA lobby, and appropriate stimulus movies were shown to collect participants' body gestures and postures as their emotions changed. Body movements were then extracted using the difference image method, and the data were preprocessed to build the proposed model with a neural network. The proposed emotion prediction model used three time-frame settings (20 frames, 30 frames, and 40 frames), and the model with the best performance among them was adopted. Before building the three models, the entire set of 97 samples was divided into learning, test, and validation sets. The proposed emotion prediction model was constructed as an artificial neural network. We used the back-propagation algorithm as the learning method, set the learning rate to 10% and the momentum rate to 10%, and used the sigmoid function as the transfer function. The network was designed as a three-layer perceptron with one hidden layer and four output nodes. Based on the test data set, learning was stopped at 50,000 iterations after the minimum error had been reached, in order to explore the stopping point of learning. We finally computed each model's accuracy and found the best model for predicting each emotion. The results showed a prediction accuracy of 100% for sadness and 96% for joy in the 20-frame model, and 88% for surprise and 98% for disgust in the 30-frame model. The findings of this research are expected to provide an effective algorithm for personalized services in various industries such as advertisement, exhibition, and performance.
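
The network configuration quoted above (three-layer perceptron, one hidden layer, four output nodes, sigmoid transfer function, back-propagation with a 10% learning rate and 10% momentum) can be sketched in plain NumPy as below. The feature dimension, hidden-layer size, and synthetic data are assumptions for illustration; the paper's actual input features come from the difference image method.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative data: 97 samples of motion features -> one of 4 emotions (one-hot).
n_samples, n_features, n_hidden, n_outputs = 97, 20, 10, 4
X = rng.random((n_samples, n_features))
y = np.eye(n_outputs)[rng.integers(0, n_outputs, n_samples)]

# Weights of a three-layer perceptron (input -> hidden -> output).
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_outputs))
V1 = np.zeros_like(W1)   # momentum buffers
V2 = np.zeros_like(W2)
lr, momentum = 0.1, 0.1  # learning rate 10%, momentum 10%, as stated in the abstract

for epoch in range(50_000):          # 50,000 iterations, matching the abstract
    h = sigmoid(X @ W1)              # hidden activations
    out = sigmoid(h @ W2)            # output activations
    err = y - out                    # error at the output layer
    # Back-propagate through the sigmoid derivatives.
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    V2 = momentum * V2 + lr * (h.T @ d_out) / n_samples
    V1 = momentum * V1 + lr * (X.T @ d_hid) / n_samples
    W2 += V2
    W1 += V1

accuracy = np.mean(out.argmax(axis=1) == y.argmax(axis=1))
print(f"training accuracy: {accuracy:.2%}")
```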

A Study on Sample Allocation for Stratified Sampling (층화표본에서의 표본 배분에 대한 연구)

  • Lee, Ingue;Park, Mingue
    • The Korean Journal of Applied Statistics
    • /
    • v.28 no.6
    • /
    • pp.1047-1061
    • /
    • 2015
  • Stratified random sampling is a powerful sampling strategy that reduces the variance of estimators by incorporating useful auxiliary information to stratify the population. Sample allocation is one of the important decisions in selecting a stratified random sample. There are two common methods, proportional allocation and Neyman allocation, when the data collection cost can be assumed equal across observation units. Theoretically, Neyman allocation, which considers the size and standard deviation of each stratum, is known to be more efficient than proportional allocation, which uses only the stratum size. However, if the information on the standard deviations is inaccurate, the performance of Neyman allocation is in doubt, and it has been pointed out that Neyman allocation is not suitable for multi-purpose sample surveys that require the estimation of several characteristics. In addition to sampling error, non-response error is another factor that affects the statistical precision of the estimator and should be considered when evaluating a sampling strategy. We propose new sample allocation methods that use the information available on stratum response rates at the design stage to improve stratified random sampling. The proposed methods are efficient when response rates differ considerably among strata. In particular, the method using population sizes and response rates improves on Neyman allocation in multi-purpose sample surveys.
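
To make the two standard allocation rules mentioned above explicit, here is a small Python sketch of proportional and Neyman allocation. The stratum sizes and standard deviations are illustrative, and the paper's proposed response-rate-based allocations are not reproduced because their exact form is not given in the abstract.

```python
def proportional_allocation(n, stratum_sizes):
    """n_h = n * N_h / N : allocate in proportion to stratum size."""
    total = sum(stratum_sizes)
    return [n * N_h / total for N_h in stratum_sizes]

def neyman_allocation(n, stratum_sizes, stratum_sds):
    """n_h = n * N_h * S_h / sum_k(N_k * S_k) : also weight by within-stratum SD."""
    weights = [N_h * S_h for N_h, S_h in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [n * w / total for w in weights]

# Illustrative population: three strata with sizes and standard deviations.
sizes, sds, n = [5000, 3000, 2000], [4.0, 10.0, 20.0], 500
print(proportional_allocation(n, sizes))   # [250.0, 150.0, 100.0]
print(neyman_allocation(n, sizes, sds))    # [111.1..., 166.6..., 222.2...]
```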

Evaluation of Image Qualities for a Digital X-ray Imaging System Based on Gd2O2S(Tb) Scintillator and Photosensor Array by Using a Monte Carlo Imaging Simulation Code (몬테카를로 영상모의실험 코드를 이용한 Gd2O2S(Tb) 섬광체 및 광센서 어레이 기반 디지털 X-선 영상시스템의 화질평가)

  • Jung, Man-Hee;Jung, In-Bum;Park, Ju-Hee;Oh, Ji-Eun;Cho, Hyo-Sung;Han, Bong-Soo;Kim, Sin;Lee, Bong-Soo;Kim, Ho-Kyung
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.4
    • /
    • pp.253-259
    • /
    • 2004
  • In this study, we developed a Monte Carlo imaging simulation code, written in the Visual C++ programming language, for the design optimization of a digital X-ray imaging system. As the digital X-ray imaging system, we considered a Gd2O2S(Tb) scintillator and a photosensor array, and included a 2D parallel grid to simulate general test conditions. The interactions between the X-ray beams and the system structure, the behavior of the light generated in the scintillator, and its collection in the photosensor array were simulated using the Monte Carlo method. The scintillator thickness and the photosensor array pitch were set to 66 µm and 48 µm, respectively, and the pixel format was set to 256 x 256. Using the code, we obtained X-ray images under various simulation conditions and evaluated their image quality through calculations of the SNR (signal-to-noise ratio), MTF (modulation transfer function), NPS (noise power spectrum), and DQE (detective quantum efficiency). The image simulation code developed in this study can be applied effectively to a variety of digital X-ray imaging systems for design optimization over various design parameters.
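
For readers unfamiliar with how the quoted metrics combine, the sketch below evaluates the commonly used frequency-domain relation DQE(f) = S^2 * MTF(f)^2 / (q * NPS(f)), where S is the mean signal and q the incident photon fluence. The arrays are illustrative stand-ins, not outputs of the paper's simulation code.

```python
import numpy as np

def dqe(mean_signal: float, mtf: np.ndarray, nps: np.ndarray, fluence: float) -> np.ndarray:
    """DQE(f) = S^2 * MTF(f)^2 / (q * NPS(f)) for a linear, shift-invariant detector model."""
    return (mean_signal ** 2) * mtf ** 2 / (fluence * nps)

# Illustrative values only (not from the paper's simulation):
freqs = np.linspace(0.0, 5.0, 6)              # spatial frequency, cycles/mm
mtf = np.exp(-0.5 * freqs)                    # a falling MTF curve
nps = 2e-9 * (1.0 + 0.2 * freqs)              # noise power spectrum, arbitrary units
print(dqe(mean_signal=1.0e-3, mtf=mtf, nps=nps, fluence=1.0e3))  # DQE(0) = 0.5 here
```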

The Effect of Supply Chain Dynamic Capabilities, Open Innovation and Supply Uncertainty on Supply Chain Performance (공급사슬 동적역량, 개방형 혁신, 공급 불확실성이 공급사슬 성과에 미치는 영향)

  • Lee, Sang-Yeol
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.4
    • /
    • pp.481-491
    • /
    • 2018
  • As the global business environment is dynamic, uncertain, and complex, supply chain management determines the performance of the supply chain in terms of how the resources and capabilities of the companies involved are utilized. Companies pursuing open innovation gain greater access to the external environment, accumulate knowledge flows and learning experiences, and may generate better business performance from dynamic capabilities. This study analyzed the effects of supply chain dynamic capabilities, open innovation, and supply uncertainty on supply chain performance. Based on questionnaires from 178 companies listed on KOSDAQ, the empirical results are as follows. First, among the supply chain dynamic capabilities, integration and reactivity have a positive effect on supply chain performance. Second, the moderating effect of open innovation showed a negative correlation for information exchange and a positive correlation for integration, cooperation, and reactivity. Third, two of the three-way interaction terms, "information exchange*open innovation*supply uncertainty" and "integration*open innovation*supply uncertainty", were statistically significant. The implications of this study are as follows. First, since the supply chain needs to optimize the whole process across its components rather than individual companies, dynamic capabilities play an important role in improving performance. Second, for KOSDAQ companies with limited capital resources, open innovation that integrates external knowledge is valuable, and dynamic capabilities need to be developed accordingly to increase synergistic effects. Third, since resources are constrained, managers must determine the type or level of capabilities and open innovation in accordance with supply uncertainty. Because this study is limited to the analysis of survey data, it is necessary to collect secondary or longitudinal data and to further analyze the internal and external factors that significantly affect supply chain performance.
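
The three-way interaction terms reported above are the kind of moderated-moderation effects commonly tested with a regression containing the full set of product terms. The sketch below shows one way to specify such a model with statsmodels; the variable names, the synthetic data, and the use of OLS are assumptions for illustration, not the paper's actual measurement model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 178  # sample size matching the abstract; the data themselves are synthetic
df = pd.DataFrame({
    "integration": rng.normal(size=n),
    "open_innovation": rng.normal(size=n),
    "supply_uncertainty": rng.normal(size=n),
})
df["performance"] = (
    0.4 * df["integration"]
    + 0.2 * df["integration"] * df["open_innovation"] * df["supply_uncertainty"]
    + rng.normal(scale=0.5, size=n)
)

# 'a * b * c' in a formula expands to all main effects, 2-way, and 3-way interaction terms.
model = smf.ols("performance ~ integration * open_innovation * supply_uncertainty", data=df).fit()
print(model.summary().tables[1])
```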

The Flow-rate Measurements in a Multi-phase Flow Pipeline by Using a Clamp-on Sealed Radioisotope Cross Correlation Flowmeter (투과 감마선 계측신호의 Cross correlation 기법 적용에 의한 다중상 유체의 유량측정)

  • Kim, Jin-Seop;Kim, Jong-Bum;Kim, Jae-Ho;Lee, Na-Young;Jung, Sung-Hee
    • Journal of Radiation Protection and Research
    • /
    • v.33 no.1
    • /
    • pp.13-20
    • /
    • 2008
  • The flow rates in a multi-phase flow pipeline were evaluated quantitatively by means of clamp-on sealed radioisotope sources and a cross-correlation signal processing technique. The flow rates were calculated by determining the transit time between two sealed gamma sources using a cross-correlation function after FFT filtering, and then corrected with the vapor fraction in the pipeline measured by the γ-ray attenuation method. The pipeline model was manufactured from acrylic resin (ID 8 cm, L = 3.5 m, t = 10 mm), and the multi-phase flow patterns were realized by injecting compressed N2 gas. Two sealed gamma sources of 137Cs (E = 0.662 MeV, Γ factor = 0.326 R·h⁻¹·m²·Ci⁻¹) of 20 mCi and 17 mCi, together with 2"×2" NaI(Tl) scintillation counters (Eberline SP-3) as radiation detectors, were used for this study. Under the given conditions (distance between the two sources: 4D (D: inner diameter), N/S ratio: 0.12~0.15, sampling time Δt: 4 msec), the measured flow rates showed a maximum relative error of 1.7% compared with the real ones after the vapor content corrections (6.1%~9.2%). A subsequent experiment proved that the closer the distance between the two sealed sources, the more precise the measured flow rates. Provided that additional studies on the selection of radioisotopes and their activities and on the optimization of the experimental geometry are carried out, it is anticipated that this radioisotope application for flow rate measurement can be used as an important tool for monitoring multi-phase facilities in the petrochemical and refinery industries and can contribute economically to their maintenance and control.
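
Below is a minimal sketch of the cross-correlation transit-time idea described above: the lag at which the cross-correlation of the upstream and downstream detector signals peaks gives the transit time, from which a velocity and a vapor-fraction-corrected volumetric flow rate follow. The synthetic signals, the vapor fraction value, and the simple (1 - vapor fraction) correction are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def transit_time(upstream: np.ndarray, downstream: np.ndarray, dt: float) -> float:
    """Lag (in seconds) at which the cross-correlation of the two detector signals peaks."""
    up = upstream - upstream.mean()
    down = downstream - downstream.mean()
    xcorr = np.correlate(down, up, mode="full")
    lag = np.argmax(xcorr) - (len(up) - 1)      # number of samples the downstream signal trails by
    return lag * dt

# Illustrative signals: the downstream detector sees the same density fluctuation 0.2 s later.
dt = 0.004                                      # 4 ms sampling time, as in the abstract
t = np.arange(0, 10, dt)
noise = np.random.default_rng(3).normal(size=t.size)
up_sig = np.convolve(noise, np.ones(25) / 25, mode="same")   # smoothed random fluctuation
down_sig = np.roll(up_sig, 50)                  # 50 samples * 4 ms = 0.2 s delay

tau = transit_time(up_sig, down_sig, dt)
distance = 4 * 0.08                             # source spacing of 4D with D = 8 cm
area = np.pi * 0.04 ** 2                        # pipe cross-section, m^2
vapor_fraction = 0.1                            # illustrative value from gamma-ray attenuation
flow_m3_per_s = (distance / tau) * area * (1 - vapor_fraction)
print(f"transit time {tau:.3f} s, corrected flow {flow_m3_per_s * 1000:.2f} L/s")
```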