• Title/Summary/Keyword: model experiment

Search Results: 7,981

Annotation Method based on Face Area for Efficient Interactive Video Authoring (효과적인 인터랙티브 비디오 저작을 위한 얼굴영역 기반의 어노테이션 방법)

  • Yoon, Ui Nyoung;Ga, Myeong Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.83-98
    • /
    • 2015
  • Many TV viewers mainly use portal sites to retrieve information related to a broadcast while watching TV. However, retrieving the desired information takes a long time because the internet presents a great deal of irrelevant information, so this process cannot satisfy users who want to consume information immediately. Interactive video is being actively investigated to solve this problem. An interactive video provides clickable objects, areas, or hotspots for interacting with users: when users click an object on the interactive video, they can instantly see additional information related to the video. Making an interactive video with an authoring tool involves three basic steps: (1) create an augmented object; (2) set the object's area and the time it is displayed on the video; (3) set an interactive action linked to pages or hyperlinks. However, users of existing authoring tools such as Popcorn Maker and Zentrick spend a lot of time in step (2). Users of wireWAX can save much of the time needed to set an object's location and display time because wireWAX uses a vision-based annotation method, but they must wait for object detection and tracking. It is therefore desirable to reduce the time spent in step (2) by effectively combining the benefits of manual and vision-based annotation. This paper proposes a novel annotation method that allows the annotator to annotate easily based on face areas. The proposed method consists of two steps: a pre-processing step and an annotation step. Pre-processing is necessary because the system detects shots so that users can easily find the contents of a video.
The pre-processing step is as follows: 1) extract shots from the video frames using a color-histogram-based shot boundary detection method; 2) cluster the shots by similarity and align them into shot sequences; and 3) detect and track faces in all shots of each shot sequence and save the results into the shot sequence metadata. After pre-processing, the user annotates objects as follows: 1) the annotator selects a shot sequence and then a keyframe of a shot in that sequence; 2) the annotator annotates objects at positions relative to the actor's face on the selected keyframe, and the same objects are then annotated automatically through the end of the shot sequence wherever a face area was detected; and 3) the user assigns additional information to the annotated objects. In addition, this paper designs a feedback model to compensate for defects that may occur after object annotation: wrongly aligned shots, wrongly detected faces, and inaccurate object locations. Users can also apply an interpolation method to restore the positions of objects deleted by the feedback. After feedback, the user can save the annotated object data as interactive object metadata. Finally, this paper presents an interactive video authoring system implemented to verify the performance of the proposed annotation method using the presented models. The experiment analyzes object annotation time and reports a user evaluation. First, the average object annotation time shows that the proposed tool is twice as fast as existing authoring tools; annotation occasionally took longer than with existing tools when wrong shots were detected during pre-processing. The usefulness and convenience of the system were measured through a user evaluation of users experienced with interactive video authoring systems: 19 recruited experts answered 11 questions drawn from the Computer System Usability Questionnaire (CSUQ).
CSUQ was designed by IBM for evaluating systems. The user evaluation showed that the proposed tool scored about 10% higher for authoring interactive video than the other interactive video authoring systems.
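The color-histogram shot boundary detection used in pre-processing step 1) can be sketched as follows. This is a minimal illustration rather than the authors' implementation: frames are flat lists of 8-bit gray values, and the bin count and distance threshold are arbitrary assumptions.

```python
def gray_histogram(frame, bins=8):
    """Histogram of pixel intensities (0-255), normalized to sum to 1."""
    hist = [0] * bins
    for px in frame:
        hist[px * bins // 256] += 1
    n = len(frame)
    return [h / n for h in hist]

def detect_shot_boundaries(frames, threshold=0.5):
    """Flag a boundary wherever successive frame histograms differ strongly.

    Uses the histogram intersection distance: 1 - sum(min(h1, h2)).
    """
    boundaries = []
    prev = gray_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = gray_histogram(frames[i])
        distance = 1.0 - sum(min(a, b) for a, b in zip(prev, cur))
        if distance > threshold:
            boundaries.append(i)  # a new shot starts at frame i
        prev = cur
    return boundaries

# Two synthetic "shots": dark frames followed by bright frames.
dark = [10] * 100
bright = [240] * 100
print(detect_shot_boundaries([dark, dark, bright, bright]))  # → [2]
```

In practice, per-channel color histograms and an adaptive threshold are common refinements of this basic scheme.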

Optimization of Medium for the Carotenoid Production by Rhodobacter sphaeroides PS-24 Using Response Surface Methodology (반응 표면 분석법을 사용한 Rhodobacter sphaeroides PS-24 유래 carotenoid 생산 배지 최적화)

  • Bong, Ki-Moon;Kim, Kong-Min;Seo, Min-Kyoung;Han, Ji-Hee;Park, In-Chul;Lee, Chul-Won;Kim, Pyoung-Il
    • Korean Journal of Organic Agriculture
    • /
    • v.25 no.1
    • /
    • pp.135-148
    • /
    • 2017
  • Response Surface Methodology (RSM), combining a Plackett-Burman design with a Box-Behnken experimental design, was applied to optimize the ratios of the nutrient components for carotenoid production by Rhodobacter sphaeroides PS-24 in liquid-state fermentation. Nine nutrient ingredients, namely yeast extract, sodium acetate, NaCl, $K_2HPO_4$, $MgSO_4$, mono-sodium glutamate, $Na_2CO_3$, $NH_4Cl$ and $CaCl_2$, were finally selected for optimizing the medium composition based on their statistical significance and positive effects on carotenoid yield. The Box-Behnken design was employed for further optimization of the selected nutrient components to increase carotenoid production. Based on the Box-Behnken assay data, a second-order coefficient model was set up to investigate the relationship between carotenoid productivity and the nutrient ingredients. The optimal medium constituents for carotenoid production by Rhodobacter sphaeroides PS-24 were determined as follows (per liter): yeast extract 1.23 g, sodium acetate 1 g, $NH_4Cl$ 1.75 g, NaCl 2.5 g, $K_2HPO_4$ 2 g, $MgSO_4$ 1.0 g, mono-sodium glutamate 7.5 g, $Na_2CO_3$ 3.71 g, $NH_4Cl$ 3.5 g, $CaCl_2$ 0.01 g. A maximum carotenoid yield of 18.11 mg/L was measured in a confirmatory experiment in liquid culture using a 500 L fermenter.
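The second-order model at the heart of RSM can be illustrated on a single-factor slice. The sketch below fits y = b0 + b1·x + b2·x² exactly through three hypothetical (concentration, yield) points and locates the stationary point; the data values are invented, not taken from the paper.

```python
def fit_quadratic(p0, p1, p2):
    """Fit y = b0 + b1*x + b2*x^2 exactly through three points."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    b2 = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b1 = (x2**2 * (y0 - y1) + x1**2 * (y2 - y0) + x0**2 * (y1 - y2)) / denom
    b0 = y0 - b1 * x0 - b2 * x0**2
    return b0, b1, b2

def stationary_point(b1, b2):
    """x where dy/dx = b1 + 2*b2*x = 0 (a maximum when b2 < 0)."""
    return -b1 / (2 * b2)

# Hypothetical (yeast extract g/L, carotenoid yield mg/L) observations.
b0, b1, b2 = fit_quadratic((0.5, 15.0), (1.25, 18.0), (2.0, 15.5))
x_opt = stationary_point(b1, b2)
print(round(x_opt, 2))  # → 1.28, the concentration at the predicted maximum
```

A full Box-Behnken analysis fits this model jointly over all factors, including interaction terms, but the one-factor case shows how the fitted coefficients yield an optimum analytically.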

Analysis of Chinese Consumer Preference of Country of Origin for Apples based on National Organic Certification (사과의 국가별 유기인증 결합에 대한 중국 소비자 선호분석)

  • Kwon, Jae-Hyun;Kim, Jeong-Nyeon;Hong, Na-Kyoung;Kim, Tae-Kyun
    • Current Research on Agriculture and Life Sciences
    • /
    • v.32 no.4
    • /
    • pp.225-230
    • /
    • 2014
  • This study investigates the effect of organic certification of apples on consumer preference in China as a way to support the expanded export of Korean apples to China. A choice experiment was designed to analyze apple consumption in China. A total of 298 Chinese consumers answered the survey, and multinomial logit models were used to analyze the results. Organic certification was identified as an important determinant of consumer preference for apples in China, affecting both the evaluation and choice of country of origin. The results also indicated that Korean organic certification significantly increased the probability of Chinese consumers choosing Korean apples. Thus, organic certification by the Korean government should be strengthened to promote apple exports to China, and the results of this study may provide useful information for promoting agricultural product exports and improving the organic certification system.
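A multinomial logit model assigns each alternative a choice probability proportional to the exponential of its deterministic utility. The sketch below computes these probabilities for one respondent; the utility coefficients (origin and organic-certification effects) are hypothetical, not the study's estimates.

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities.values())  # subtract the max for numerical stability
    expv = {k: math.exp(v - m) for k, v in utilities.items()}
    total = sum(expv.values())
    return {k: v / total for k, v in expv.items()}

# Hypothetical deterministic utilities: a country-of-origin base term
# plus a premium when the apple carries organic certification.
utilities = {
    "korea_organic": 1.2 + 0.8,       # origin + organic-certification effect
    "korea_conventional": 1.2,
    "china_organic": 1.5 + 0.8,
    "china_conventional": 1.5,
}
probs = mnl_probabilities(utilities)
print(max(probs, key=probs.get))  # → china_organic
```

The certification premium raises the choice probability of an organic alternative relative to its conventional counterpart, which is the effect the study estimates from survey responses.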

Effects of Humic Acid on the pH-dependent Sorption of Europium (Eu) to Kaolinite (PH 변화에 따른 카올리나이트와 유로퓸(Eu)의 흡착에 대한 휴믹산의 영향)

  • Harn, Yoon-I;Shin, Hyun-Sang;Rhee, Dong-Seok;Lee, Myung-Ho;Chung, Euo-Cang
    • Journal of Soil and Groundwater Environment
    • /
    • v.14 no.4
    • /
    • pp.23-32
    • /
    • 2009
  • The sorption of europium (Eu(III)) onto kaolinite and the influence of humic acid (HA) over the range pH 3 ~ 11 were studied by batch adsorption experiments (V/m = 250 : 1 mL/g, $C_{Eu(III)}\;=\;1\;{\times}\;10^{-5}\;mol/L$, $C_{HA}\;=\;5{\sim}50\;mg/L$, $P_{CO2}=10^{-3.5}\;atm$). The concentrations of HA and Eu(III) in the aqueous phase were measured by UV absorbance at 254 nm ($UV_{254}$) and by ICP-MS after microwave digestion for HA removal, respectively. The results showed that HA sorption onto kaolinite decreased with increasing pH, and the sorption isotherms fit well with the Langmuir adsorption model (except at pH 3). The maximum sorbed amount ($q_{max}$) for HA at pH 4 to 11 ranged from 4.73 to 0.47 mg/g. Europium adsorption onto kaolinite in the absence of HA was typical, increasing with pH and showing a distinct adsorption edge at pH 3 to 5. In the presence of HA, however, Eu adsorption to kaolinite was significantly affected: HA enhanced Eu adsorption in the acidic range (pH 3 ~ 4), owing to the additional binding sites for Eu provided by HA adsorbed onto the kaolinite surface, but reduced Eu adsorption at intermediate and high pH (above 6) due to the formation of aqueous Eu-HA complexes. The results for the ternary kaolinite-Eu-HA interaction are compared with those for the binary kaolinite-HA and kaolinite-Eu systems, and the pH-dependent adsorption mechanism is discussed.
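The Langmuir fit reported for the HA isotherms can be sketched via the common linearized form C/q = C/q_max + 1/(K_L·q_max). Below, synthetic data are generated with q_max = 4.73 mg/g (the pH 4 value above) and a hypothetical K_L, then recovered by least squares.

```python
def linear_fit(xs, ys):
    """Ordinary least squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def fit_langmuir(ce, qe):
    """Fit q = qmax*KL*C / (1 + KL*C) via C/q = C/qmax + 1/(KL*qmax)."""
    slope, intercept = linear_fit(ce, [c / q for c, q in zip(ce, qe)])
    qmax = 1.0 / slope
    kl = slope / intercept  # = 1 / (intercept * qmax)
    return qmax, kl

# Synthetic isotherm: qmax matches the reported pH 4 value; KL is invented.
qmax_true, kl_true = 4.73, 0.2
ce = [2.0, 5.0, 10.0, 20.0, 40.0]           # equilibrium concentration, mg/L
qe = [qmax_true * kl_true * c / (1 + kl_true * c) for c in ce]
qmax_fit, kl_fit = fit_langmuir(ce, qe)
```

With real data the linearized fit is a quick diagnostic; nonlinear least squares on the original Langmuir form weights the points differently and is often preferred.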

Evaluation of Moisture and Feed Values for Winter Annual Forage Crops Using Near Infrared Reflectance Spectroscopy (근적외선분광법을 이용한 동계사료작물 풀 사료의 수분함량 및 사료가치 평가)

  • Kim, Ji Hea;Lee, Ki Won;Oh, Mirae;Choi, Ki Choon;Yang, Seung Hak;Kim, Won Ho;Park, Hyung Soo
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.39 no.2
    • /
    • pp.114-120
    • /
    • 2019
  • This study was carried out to explore the accuracy of near infrared spectroscopy (NIRS) for predicting the moisture content and chemical parameters of winter annual forage crops. A population of 2,454 winter annual forage samples representing a wide range of chemical parameters was used. Forage samples were scanned in intact fresh condition at 1 nm intervals over the wavelength range 680-2500 nm, and the optical data were recorded as log 1/Reflectance (log 1/R). The spectral data were regressed against a range of chemical parameters using partial least squares (PLS) multivariate analysis in conjunction with spectral math treatments to reduce the effect of extraneous noise. The optimum calibrations were selected based on the highest coefficient of determination in cross-validation ($R^2$) and the lowest standard error of cross-validation (SECV). The results showed that the NIRS calibration models predicted the moisture content and chemical parameters with a very high degree of accuracy, except for barley. The $R^2$ (SECV) values for the integrated winter annual forage calibrations were 0.99 (1.59%) for moisture, 0.89 (1.15%) for acid detergent fiber, 0.86 (1.43%) for neutral detergent fiber, 0.93 (0.61%) for crude protein, 0.90 (0.45%) for crude ash, and 0.82 (3.76%) for relative feed value, on a dry matter (%) basis. These results show the potential of NIRS as a routine analysis method for predicting the moisture and chemical composition of winter annual forages for feed value evaluation.
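The two selection criteria, R² and SECV, can be computed as follows; SECV is taken here as the RMSE of the held-out cross-validation predictions, noting that some definitions divide by n − 1 instead of n.

```python
import math

def r_squared(reference, predicted):
    """Coefficient of determination between reference and predicted values."""
    mean_ref = sum(reference) / len(reference)
    ss_res = sum((r - p) ** 2 for r, p in zip(reference, predicted))
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    return 1.0 - ss_res / ss_tot

def secv(reference, predicted):
    """Standard error of cross-validation: RMSE of held-out predictions."""
    n = len(reference)
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)

# Hypothetical moisture values (%): lab reference vs. NIRS prediction.
reference = [10.0, 20.0, 30.0]
predicted = [11.0, 19.0, 31.0]
print(round(r_squared(reference, predicted), 3), round(secv(reference, predicted), 2))
```

A calibration is preferred when R² is high and SECV is low on the same cross-validation folds, exactly the selection rule stated in the abstract.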

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification, with one label from two classes; multi-class classification, with one label from several classes; and multi-label classification, with multiple labels from several classes. Multi-label classification in particular requires a different training method from binary and multi-class classification because instances carry multiple labels. In addition, since the number of labels to be predicted grows with the number of labels and classes, performance improvement is difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted: (i) compress the initially given high-dimensional label space into a low-dimensional latent label space, (ii) train a model to predict the compressed labels, and (iii) restore the predicted labels to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they have difficulty capturing non-linear relationships between labels, and therefore cannot create a latent label space that sufficiently contains the information of the original label space.
Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space; this is related to the vanishing gradient problem that occurs during backpropagation. The skip connection was devised to solve this problem: by adding the input of a layer to its output, it prevents gradient loss during backpropagation and enables efficient learning even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment that predicts the compressed keyword vector in the latent label space from the paper abstract and evaluates multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods.
This indicates that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improves the performance of multi-label classification itself. In addition, the utility of the proposed methodology was assessed by comparing its performance across domain characteristics and numbers of dimensions of the latent label space.
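A forward pass of a skip-connection autoencoder of the kind described above can be sketched as follows. This is a minimal, untrained illustration with invented dimensions (8-dimensional label space, 3-dimensional latent space) and random weights; training by backpropagation on a reconstruction loss is omitted. The skip connections add a layer's input to its output in both the encoder and the decoder.

```python
import random
random.seed(0)

def linear(x, w, b):
    """Dense layer: w is (out x in)."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def relu(v):
    return [max(0.0, x) for x in v]

def make_layer(n_out, n_in):
    w = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    return w, [0.0] * n_out

LABEL_DIM, LATENT_DIM = 8, 3   # hypothetical sizes
enc_hidden = make_layer(LABEL_DIM, LABEL_DIM)
enc_out = make_layer(LATENT_DIM, LABEL_DIM)
dec_hidden = make_layer(LABEL_DIM, LATENT_DIM)
dec_out = make_layer(LABEL_DIM, LABEL_DIM)

def encode(y):
    h = relu(linear(y, *enc_hidden))
    h = [a + b for a, b in zip(h, y)]        # skip connection in the encoder
    return linear(h, *enc_out)

def decode(z):
    h = relu(linear(z, *dec_hidden))
    out = linear(h, *dec_out)
    return [a + b for a, b in zip(out, h)]   # skip connection in the decoder

label = [1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0]  # multi-hot label vector
restored = decode(encode(label))
```

Because the skip path passes the layer input forward unchanged, gradients flow through the identity term during backpropagation, which is what lets the trained latent space retain more of the original label information.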

Recommender system using BERT sentiment analysis (BERT 기반 감성분석을 이용한 추천시스템)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.1-15
    • /
    • 2021
  • When decisions are difficult, we ask for advice from friends or the people around us; when we decide to buy products online, we read anonymous reviews before buying. With the advent of the data-driven era, the development of IT technology is producing vast amounts of data, from individuals to objects. Companies and individuals have accumulated, processed, and analyzed so much data that decisions which used to depend on experts can now be made or executed directly from data. Nowadays, the recommender system plays a vital role in determining users' preferences for purchasing goods, and web services (Facebook, Amazon, Netflix, YouTube) use recommender systems to induce clicks. For example, YouTube's recommender system, used by one billion people worldwide every month, draws on the videos users have liked and watched. Recommender system research is deeply linked to practical business, so many researchers are interested in building better solutions. Recommender systems use the information obtained from their users to generate recommendations, because developing a recommender system requires information on the items a user is likely to prefer. Through recommender systems, we have come to trust patterns and rules derived from data rather than empirical intuition, and the growing capacity of data has pushed machine learning toward deep learning. However, recommender systems are not a universal solution: they work correctly only when data are sufficient, without scarcity, and when detailed information about the individual is available. When the interaction log is insufficient, recommendation becomes a complex problem for both consumers and sellers.
From the seller's perspective, recommendations must be made to the consumer at a personal level; from the consumer's perspective, appropriate recommendations must be received based on reliable data. In this paper, to improve accuracy and deliver "appropriate recommendations" to consumers, a recommender system is proposed in combination with context-based deep learning. This research combines user-based data to create a hybrid recommender system. The hybrid approach developed is not a purely collaborative recommender system but a collaborative extension that integrates user data with deep learning. Customer review data were used as the data set. Consumers buy products in online shopping malls and then write product reviews; ratings from buyers who have already purchased give users confidence before purchasing the product. However, recommender systems mainly use scores or ratings, rather than reviews, to suggest items purchased by many users. In fact, consumer reviews contain product opinions and user sentiment relevant to evaluation, and by incorporating these, this paper aims to improve the recommender system. The proposed algorithm supports individuals who have difficulty selecting an item: consumer reviews and record patterns make it possible to rely on recommendations appropriately. The algorithm implements a recommendation system through collaborative filtering. Predictive accuracy is measured by Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE); Netflix, for example, has strategically used annual competitions to reduce RMSE, making fair use of predictive accuracy. Research on hybrid recommender systems that combine NLP approaches and deep learning for personalized recommendation has been increasing, and among NLP studies, sentiment analysis began to take shape in the mid-2000s as user review data increased.
Sentiment analysis is a text classification task based on machine learning. Conventional machine-learning-based sentiment analysis has the disadvantage that it is difficult to capture the information expressed in a review, because the characteristics of the text are hard to take into account. In this study, we propose a deep learning recommender system that utilizes BERT's sentiment analysis to minimize these disadvantages. The comparison models were recommender systems based on Naive-CF (collaborative filtering), SVD (singular value decomposition)-CF, MF (matrix factorization)-CF, BPR-MF (Bayesian personalized ranking matrix factorization)-CF, LSTM, CNN-LSTM, and GRU (Gated Recurrent Units). As a result of the experiment, the BERT-based recommender system performed best.
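The collaborative extension described above, which folds review sentiment into collaborative filtering, might look like the following sketch. The BERT classifier is replaced by a placeholder sentiment score in [0, 1], and the blending weight alpha is a hypothetical choice, not the paper's.

```python
import math

def cosine(u, v):
    """Cosine similarity over co-rated items."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common))
           * math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den

def blend(rating, sentiment, alpha=0.7):
    """Blend a 1-5 star rating with a review sentiment score in [0, 1]
    (as a BERT sentiment classifier might output); alpha is hypothetical."""
    return alpha * rating + (1 - alpha) * 5 * sentiment

def predict(ratings, user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    num = den = 0.0
    for other, prefs in ratings.items():
        if other == user or item not in prefs:
            continue
        s = cosine(ratings[user], prefs)
        num += s * prefs[item]
        den += abs(s)
    return num / den if den else None

# Ratings already blended with each review's sentiment via blend().
ratings = {
    "u1": {"a": blend(5, 0.9), "b": blend(3, 0.4)},
    "u2": {"a": blend(4, 0.8), "b": blend(2, 0.3), "c": blend(5, 0.95)},
    "u3": {"a": blend(1, 0.1), "c": blend(2, 0.2)},
}
print(round(predict(ratings, "u1", "c"), 2))
```

Evaluating such predictions against held-out ratings with RMSE and MAE, as the abstract describes, then ranks this hybrid against the pure CF and deep learning baselines.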

Predicting the Pre-Harvest Sprouting Rate in Rice Using Machine Learning (기계학습을 이용한 벼 수발아율 예측)

  • Ban, Ho-Young;Jeong, Jae-Hyeok;Hwang, Woon-Ha;Lee, Hyeon-Seok;Yang, Seo-Yeong;Choi, Myong-Goo;Lee, Chung-Keun;Lee, Ji-U;Lee, Chae Young;Yun, Yeo-Tae;Han, Chae Min;Shin, Seo Ho;Lee, Seong-Tae
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.22 no.4
    • /
    • pp.239-249
    • /
    • 2020
  • Rice flour varieties have been developed to replace wheat, and consumption of rice flour has been encouraged. However, damage related to pre-harvest sprouting has occurred due to weather disasters during the ripening period, so a pre-harvest sprouting rate prediction system is needed to minimize such damage. Rice cultivation experiments from 2017 to 2019 were conducted with three rice flour varieties at six regions in Gangwon-do, Chungcheongbuk-do, and Gyeongsangbuk-do. The survey components were the heading date and pre-harvest sprouting at the harvest date. The weather data collected were daily mean temperature, relative humidity, and rainfall from the Automated Synoptic Observing System (ASOS) station with the same region name. A Gradient Boosting Machine (GBM), a machine learning model, was used to predict the pre-harvest sprouting rate, with mean temperature, relative humidity, and total rainfall as the training input variables. An experiment over the period from days after the heading date (DAH) to a subsequent period (DA2H) was also conducted to establish the period related to pre-harvest sprouting. The data were divided into a training set and a validation set for calibrating the period related to pre-harvest sprouting, and a test set for validation. The results on the training and validation sets showed the highest score for a period of 22 DAH and 24 DA2H. On the test set, the model tended to overpredict pre-harvest sprouting rates below 3.0%, but overall showed high prediction performance ($R^2$ = 0.76). Therefore, the pre-harvest sprouting rate is expected to be easily predictable from weather components over a specific period using machine learning.
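Gradient boosting for squared loss repeatedly fits a weak learner to the current residuals and adds a shrunken copy of its predictions. The sketch below uses one-feature regression stumps; the (temperature, sprouting rate) data are invented, and the actual study ran GBM on several weather variables.

```python
def fit_stump(x, residuals):
    """Best single-threshold regression stump on one feature (min SSE)."""
    best = None
    values = sorted(set(x))
    for k in range(1, len(values)):
        thr = (values[k - 1] + values[k]) / 2
        left = [r for xi, r in zip(x, residuals) if xi <= thr]
        right = [r for xi, r in zip(x, residuals) if xi > thr]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    return best[1:]  # threshold, left mean, right mean

def gbm_fit(x, y, rounds=100, lr=0.1):
    """Boosting for squared loss: each stump fits the current residuals."""
    f0 = sum(y) / len(y)
    pred = [f0] * len(y)
    stumps = []
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        thr, lm, rm = fit_stump(x, residuals)
        stumps.append((thr, lm, rm))
        pred = [p + lr * (lm if xi <= thr else rm) for p, xi in zip(pred, x)]
    return f0, lr, stumps

def gbm_predict(model, xi):
    f0, lr, stumps = model
    return f0 + sum(lr * (lm if xi <= thr else rm) for thr, lm, rm in stumps)

# Hypothetical data: mean temperature (°C) vs. pre-harvest sprouting rate (%).
temps = [18, 19, 20, 21, 22, 23, 24, 25]
rates = [0.5, 0.8, 1.5, 2.0, 3.5, 5.0, 7.5, 9.0]
model = gbm_fit(temps, rates)
```

Library implementations such as scikit-learn's GradientBoostingRegressor add deeper trees, subsampling, and early stopping on a validation set, mirroring the training/validation/test split described in the abstract.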

An Outlier Detection Using Autoencoder for Ocean Observation Data (해양 이상 자료 탐지를 위한 오토인코더 활용 기법 최적화 연구)

  • Kim, Hyeon-Jae;Kim, Dong-Hoon;Lim, Chaewook;Shin, Yongtak;Lee, Sang-Chul;Choi, Youngjin;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.33 no.6
    • /
    • pp.265-274
    • /
    • 2021
  • Outlier detection in ocean data has traditionally been performed using statistical and distance-based machine learning algorithms. Recently, AI-based methods have received much attention, and so-called supervised learning methods that require classification information for the data are mainly used. Supervised learning requires much time and cost because classification information (labels) must be manually assigned to all the data required for learning. In this study, an autoencoder based on unsupervised learning was applied to outlier detection to overcome this problem. Two experiments were designed: univariate learning, in which only SST data among the observations at Deokjeok Island were used, and multivariate learning, in which SST, air temperature, wind direction, wind speed, air pressure, and humidity were used. The data cover 25 years, from 1996 to 2020, and pre-processing that considers the characteristics of ocean data was applied. We then tried to detect outliers in real SST data using the trained univariate and multivariate autoencoders. To compare model performance, various outlier detection methods were applied to synthetic data with artificially inserted errors. Quantitative evaluation showed multivariate/univariate accuracies of about 96%/91%, respectively, indicating that the multivariate autoencoder had better outlier detection performance. Outlier detection using an unsupervised-learning-based autoencoder is expected to be used in various ways, as it can reduce subjective classification errors and the cost and time required for data labeling.
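Autoencoder-based outlier detection ultimately reduces to thresholding per-sample reconstruction errors. The sketch below shows that final step; the mean + k·std rule is one common heuristic and may differ from the paper's actual threshold choice.

```python
import math

def reconstruction_errors(originals, reconstructions):
    """Per-sample squared reconstruction error."""
    return [sum((o - r) ** 2 for o, r in zip(x, xr))
            for x, xr in zip(originals, reconstructions)]

def flag_outliers(errors, k=3.0):
    """Flag samples whose error exceeds mean + k * std of all errors.

    A trained autoencoder reconstructs normal samples well, so outliers
    stand out as unusually large reconstruction errors.
    """
    n = len(errors)
    mean = sum(errors) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    threshold = mean + k * std
    return [i for i, e in enumerate(errors) if e > threshold]

# 20 well-reconstructed samples and one badly reconstructed (outlier) sample.
errors = [0.01] * 20 + [5.0]
print(flag_outliers(errors))  # → [20]
```

In a multivariate setting the squared error is summed over all variables (SST, air temperature, wind, pressure, humidity), which is one reason the multivariate model can separate outliers more cleanly.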

Forecasting Leaf Mold and Gray Leaf Spot Incidence in Tomato and Fungicide Spray Scheduling (토마토 재배에서 점무늬병 및 잎곰팡이병 발생 예측 및 방제력 연구)

  • Lee, Mun Haeng
    • Journal of Bio-Environment Control
    • /
    • v.31 no.4
    • /
    • pp.376-383
    • /
    • 2022
  • The current study, consisting of two independent experiments (laboratory and greenhouse), was carried out to develop a hypothetical fungicide-spray schedule for leaf mold and gray leaf spot in tomato, and to evaluate the effect of temperature and leaf wetness duration on the effectiveness of different fungicides against these diseases. In the first experiment, tomato leaves were inoculated with $1{\times}10^4$ conidia·mL$^{-1}$ and placed in a dew chamber for 0 to 18 hours at 10 to 25℃ (Fulvia fulva) or 10 to 30℃ (Stemphylium lycopersici). In the greenhouse study, tomato plants were treated for 240 hours with diluted (1,000 times) 30% trimidazole, 50% polyoxin B, and 40% iminoctadine tris (Belkut) for protection against leaf mold, and with 10% etridiazole + 55% thiophanate-methyl (Gajiran) and 15% tribasic copper sulfate (Sebinna) for protection against gray leaf spot. In the laboratory test, condensation on the leaves of tomato plants emerged after 9 hours of incubation. The incidence of leaf mold and gray leaf spot on tomato plants was very closely related to the formation of leaf condensation: the incidence of leaf mold was greater at 20 and 15℃, while 25 and 20℃ enhanced the incidence of gray leaf spot. Both diseases developed 20 days after inoculation, and the latency period was estimated to be 14-15 days. Trihumin fungicide had the maximum effectiveness up to 168 hours at 12 hours of wetness duration for leaf mold, whereas Gajiran fungicide had the highest control (93%) of gray leaf spot up to 144 hours. All the chemicals showed an approximately 30-50% decrease in effectiveness after 240 hours of treatment. The model predictions in the present study could help in the timely, effective, and eco-friendly management of leaf mold disease in tomato.