• Title/Summary/Keyword: 정규혼합 (normal mixture)

Search results: 233

Variation of Inflow Density Currents with Different Flood Magnitude in Daecheong Reservoir (홍수 규모별 대청호에 유입하는 하천 밀도류의 특성 변화)

  • Yoon, Sung-Wan;Chung, Se-Woong;Choi, Jung-Kyu
    • Journal of Korea Water Resources Association / v.41 no.12 / pp.1219-1230 / 2008
  • Stream inflows induced by flood runoff have a higher density than the ambient reservoir water because of their lower water temperature and elevated suspended sediment (SS) concentration. Because the propagation of density currents formed by the density difference between inflow and ambient water affects reservoir water quality and the ecosystem, an understanding of reservoir density currents is essential for optimizing field monitoring, for analyzing and forecasting SS and nutrient transport, and for their proper management and control. This study aimed to quantify the characteristics of inflow density currents, including plunge depth ($d_p$) and distance ($X_p$), separation depth ($d_s$), interflow thickness ($h_i$), arrival time at the dam ($t_a$), and the reduction ratio (${\beta}$) of SS contained in the stream inflow, for different flood magnitudes in Daecheong Reservoir, using a validated two-dimensional (2D) numerical model. Ten flood scenarios, corresponding to inflow densimetric Froude numbers ($Fr_i$) ranging from 0.920 to 9.205, were set up based on the hydrograph obtained from June 13 to July 3, 2004. A fully developed stratification condition was assumed as the initial water temperature profile. Higher $Fr_i$ (inertia-to-buoyancy ratio) resulted in greater $d_p$, $X_p$, $d_s$, and $h_i$, and faster propagation of the interflow, while the effect of reservoir geometry on these characteristics was also significant. The Hebbert equation, which estimates $d_p$ assuming a steady-state flow condition and a triangular cross section, substantially over-estimated $d_p$ because it does not consider the spatial variation of reservoir geometry and water surface changes during flood events. The ${\beta}$ values between the inflow and dam sites decreased as $Fr_i$ increased, but the trend reversed for $Fr_i > 9.0$ because of turbulent mixing effects. The results provide a practical and effective prediction measure for reservoir operators to capture the behavior of turbidity inflows early.
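The inflow densimetric Froude number $Fr_i$ used in this abstract has a standard definition in stratified-flow hydraulics: the inflow velocity divided by a buoyancy velocity scale built from the reduced gravity. A minimal sketch under that standard definition (the variable names and example values are illustrative, not taken from the paper):

```python
import math

def densimetric_froude(u, h, rho_in, rho_amb, g=9.81):
    """Inflow densimetric Froude number Fr_i = u / sqrt(g' * h),
    where g' = g * |rho_amb - rho_in| / rho_amb is the reduced gravity
    from the density difference between inflow and ambient water.
    u: inflow velocity (m/s), h: inflow depth (m), densities in kg/m^3."""
    g_prime = g * abs(rho_amb - rho_in) / rho_amb
    return u / math.sqrt(g_prime * h)

# Illustrative case: a cold, sediment-laden (denser) inflow entering a reservoir
fr_i = densimetric_froude(u=0.5, h=2.0, rho_in=1001.5, rho_amb=1000.0)
# Fr_i > 1 indicates an inertia-dominated inflow; Fr_i < 1, buoyancy-dominated
```

Larger $Fr_i$ means inertia dominates buoyancy, which is consistent with the abstract's finding that higher $Fr_i$ drives the current deeper and farther before plunging.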

Development of Prediction Model for Capsaicinoids Content in Red-Pepper Powder Using Near-Infrared Spectroscopy - Particle Size Effect (근적외선 스펙트럼을 이용한 고춧가루의 캡사이신 함량 예측 모델 개발 - 입자의 영향)

  • Mo, Changyeun;Kang, Sukwon;Lee, Kangjin;Lim, Jong-Guk;Cho, Byoung-Kwan;Lee, Hyun-Dong
    • Food Engineering Progress / v.15 no.1 / pp.48-55 / 2011
  • In this research, near-infrared absorption spectra from 1,100 to 2,300 nm were used to measure the capsaicinoid content of red-pepper powder, using an acousto-optic tunable filter (AOTF) spectrometer with a sample plate and a sample rotating unit. Non-spicy red-pepper samples from one location (Younggwang-gun, Korea) were mixed with a spicy variety (var. Chungyang) to prepare samples separated by particle size (below 0.425 mm, 0.425-0.71 mm, and 0.71-1.4 mm). Partial least squares regression (PLSR) models for predicting capsaicinoid content at each particle size were developed from the spectra measured with the AOTF spectrometer, with the capsaicinoid amounts determined by HPLC as reference values. The cross-validated PLSR models for red-pepper powder below 0.425 mm, 0.425-0.71 mm, and 0.71-1.4 mm had ${R_V}^2$ = 0.948-0.979 and a standard error of prediction (SEP) of 6.56-7.94 mg%. The prediction error was lower for smaller particle sizes. The best cross-validated PLSR model was obtained with range normalization, standard normal variate, and first-derivative pretreatments for red-pepper powder below 1.4 mm, with ${R_V}^2$ = 0.959 and SEP = 8.82 mg%.
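The standard normal variate (SNV) pretreatment named in this abstract is a common per-spectrum correction for particle-size scatter in NIR spectroscopy: each spectrum is centered and scaled by its own mean and standard deviation. A minimal numpy sketch (the toy array shapes and values are illustrative assumptions, not the paper's data):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row) by its
    own mean and standard deviation, reducing multiplicative scatter effects
    caused, e.g., by differing particle sizes."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two toy "spectra" that differ only by a multiplicative scatter factor
x = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0]])
x_snv = snv(x)
# After SNV both rows coincide: the scatter difference has been removed
```

This is why SNV (often combined with derivatives, as in the abstract) helps a single PLSR model cope with mixed particle sizes.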

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing the simple body movements of an individual user to recognizing low-level and high-level behavior. However, HAR tasks that recognize interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data, such as accelerometer, magnetic field, and gyroscope data, was proposed. Accompanying status was defined as a redefinition of part of the user's interaction behavior, covering whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation was proposed. First, a data preprocessing method was introduced, consisting of time synchronization of multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors. Normalization was performed on each x, y, z axis value of the sensor data, and sequence data were generated with a sliding-window method. The sequence data then became the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of three convolutional layers and had no pooling layer, so as to preserve the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps and learned long-term dependencies from them to extract features. The LSTM network consisted of two layers, each with 128 cells. Finally, the extracted features were classified by a softmax classifier. The loss function was cross entropy, and the model weights were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained with the adaptive moment estimation (Adam) optimization algorithm and a mini-batch size of 128. Dropout was applied to the inputs of the LSTM layers to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. Future research will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences, and will further study transfer learning methods that allow models trained on the training data to transfer to evaluation data following a different distribution. A model that exhibits robust recognition performance against changes in the data not considered during training is expected to result.
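The preprocessing pipeline described in this abstract (per-axis normalization followed by sliding-window sequence generation) can be sketched in numpy. The window length and stride below are illustrative assumptions, not values reported in the paper:

```python
import numpy as np

def make_sequences(data, window, stride):
    """Normalize each axis (column) of a sensor stream to zero mean and unit
    variance, then cut the stream into overlapping fixed-length windows,
    yielding the (n_windows, window, n_axes) tensor a CNN would consume."""
    data = np.asarray(data, dtype=float)
    normed = (data - data.mean(axis=0)) / data.std(axis=0)
    windows = [normed[s:s + window]
               for s in range(0, len(normed) - window + 1, stride)]
    return np.stack(windows)

# Toy stream: 100 timesteps of 3-axis accelerometer readings
rng = np.random.default_rng(0)
stream = rng.normal(size=(100, 3))
seqs = make_sequences(stream, window=20, stride=10)
# seqs.shape == (9, 20, 3): 9 overlapping windows of 20 timesteps, 3 axes
```

With stride smaller than the window, consecutive windows overlap, which is what lets the CNN-LSTM model see every transition in the stream rather than only disjoint chunks.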