• Title/Summary/Keyword: Systems Analysis and Design


Effect of Organizational Support Perception on Intrinsic Job Motivation : Verification of the Causal Effects of Work-Family Conflict and Work-Family Balance (조직지원인식이 내재적 직무동기에 미치는 영향 : 일-가정 갈등 및 일-가정 균형의 인과관계 효과 검증)

  • Yoo, Joon-soo;Kang, Chang-wan
    • Journal of Venture Innovation / v.6 no.1 / pp.181-198 / 2023
  • This study analyzes the influence of organizational support perception among workers in medical institutions on intrinsic job motivation, and tests whether work-family conflict and work-family balance significantly mediate this relationship. The results of the empirical analysis of the questionnaire data are as follows. First, organizational support perception had a significant positive effect on both work-family balance and intrinsic job motivation, and work-family balance in turn had a significant positive effect on intrinsic job motivation. Second, organizational support perception had a significant negative effect on work-family conflict, but work-family conflict had no significant influence on intrinsic job motivation. Third, to reduce job stress among medical institution workers, it is necessary to lower job intensity and assign workloads appropriate to ability; to improve manpower operation and job efficiency, job training and placement of staff in the right positions are needed. Fourth, to strengthen positive organizational support perception and intrinsic job motivation, long-term service should be encouraged through support measures and institutional devices that increase attachment to the current job, together with various incentive systems that lead workers to regard organizational problems as their own. The limitations of this study and directions for future research are as follows. First, an expanded analysis of medical institution workers nationwide by region, gender, institution type, education, and income would not only provide more valuable results but also allow the quality of medical services to be evaluated. Second, the impact of the work-life balance support system on individual employees should be examined in light of the environmental uncertainty or degree of competition faced by the hospital to which they belong. Third, organizational support perception will differ depending on organizational culture and type, organizational size, work characteristics, years of service, and work type, so these factors need to be reflected. Fourth, various new personnel management techniques, such as the hospital's organizational structure, job design, organizational support methods, motivational approaches, and personnel evaluation methods, should be analyzed in line with recent changes in the government's medical institution policy and the global business environment. Analyses reflecting current and near-future medical trends are also considered important.

Integrated Management Data Warehouse Development Process of Research Expenses in Enterprise Environment (엔터프라이즈 환경의 연구비 통합관리 데이터 웨어하우스 개발 프로세스)

  • Choi, Seong-Man;Yoo, Cheol-Jung;Chang, Ok-Bae
    • The KIPS Transactions: Part D / v.11D no.1 / pp.183-194 / 2004
  • The existing management of research expenses has been divided into three parts: budget planning, budget draw-up, and budget settlement. This division, however, has caused several problems, and under the current circumstances it is necessary to secure research expenses steadily, operate them efficiently, and use them transparently. A review of the data warehouse development processes of existing system integration vendors shows that Inmon's process uses a systematic, stepwise approach following the classical development cycle, which causes overlap between steps and requires feedback to previous steps. IBM's process, in turn, separates functions from data during development, making it difficult to tell which functions reference and modify which data. This paper proposes an integrated research expense management data warehouse development process for the enterprise environment that applies UML in the planning and analysis step, the design step, and the implementation and test step. An information retrieval agent uses the existing budget planning DB, budget draw-up DB, and budget settlement DB to find the information a user wants, collecting and saving it in an integrated database, while an information integration agent extracts, transports, transforms, and loads the data. The information integration agent reduces the user's effort to access and check each of a number of information sources, and screens out data the user does not need. As a result, the proposed process reflects user requirements as much as possible and provides the various types of information needed to make decisions and establish research expense management policy. It helps end users reach the analysis information they want quickly and view data from a comprehensive rather than a fragmentary viewpoint. Furthermore, by integrating three systems into one, it makes it possible to share data, integrate the system, reduce operating expenses, and simplify the decision-support environment.
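The extract-transport-transform-load role of the information integration agent described above can be sketched in a few lines of Python. This is a hypothetical illustration only; the paper publishes no code, and the project IDs, field names, and amounts below are invented:

```python
# Hypothetical sketch of the information integration agent: extract
# records from the three source DBs, transform them into a common
# schema, and load them into one integrated warehouse.

def extract(source):
    """Extract raw rows from a source 'DB' (here: a list of dicts)."""
    return list(source)

def transform(rows, stage):
    """Normalize each row to a common schema, tagging its budget stage."""
    return [{"project_id": r["id"], "amount": r["amount"], "stage": stage}
            for r in rows]

def load(warehouse, rows):
    """Append transformed rows to the integrated warehouse, keyed by project."""
    for r in rows:
        warehouse.setdefault(r["project_id"], []).append(r)
    return warehouse

# Invented sample data standing in for the budget planning,
# draw-up, and settlement databases.
plan_db = [{"id": "P-001", "amount": 10_000}]
drawup_db = [{"id": "P-001", "amount": 9_500}]
settle_db = [{"id": "P-001", "amount": 9_200}]

warehouse = {}
for src, stage in [(plan_db, "plan"), (drawup_db, "draw-up"),
                   (settle_db, "settlement")]:
    load(warehouse, transform(extract(src), stage))
```

After loading, each project exposes its full plan-to-settlement history in one place, which is the point of integrating the three systems.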

Analysis of Reform Model to Records Management System in Public Institution -from Reform to Records Management System in 2006- (행정기관의 기록관리시스템 개선모델 분석 -2006년 기록관리시스템 혁신을 중심으로-)

  • Kwag, Jeong
    • The Korean Journal of Archival Studies / no.14 / pp.153-190 / 2006
  • Externally, the business environment of public institutions has been changing as the government business reference model (BRM) appeared and business management systems were introduced for transparency in the policy decision process. After the Records Automation System began operation, dissatisfaction grew because of inadequacies in system functions and problems with the authenticity of electronic records. Against this background, the National Archives and Records Service carried out 'Information Strategy Planning for Reform of the Records Management System' for five months from September 2005. The project reengineered current records management processes and presented a world-class system model. Since the Records and Archives Management Act was enacted, records management in public institutions has been driven by the concept of handling paper records by means of electronic data management. The reformed model, however, concentrates on electronic records, which have gradually replaced paper records, and investigates a management methodology that considers the attributes of electronic records. Under this new paradigm, electronic records management raises new issues in the records management field. By analyzing the major contents of the models connected with electronic records management and closely reviewing their significance and limits, this paper aims to understand the future direction of the management system. Before analyzing the reformed models, issues in the new business environment and its records management were reviewed. The government's BRM and business management system prepared a general basis for managing the government's entire output online and classifying it by function; in this respect the model is innovative. From the records management perspective, however, problems were identified, such as the division of records classification, the definitions and capturing methods of records management objects, and the limitations of the Records Automation System. To solve these problems, a reformed model was proposed that has a records classification system based on the business classification, an extended electronic records filing system, and added functions for strengthening electronic records management. By dramatically improving the role of records centers in public institutions, seeking a basic methodology for managing records objects from various agencies, and introducing a detailed design to preserve documents' authenticity, this model forms the basis of an electronic records management system. Despite these innovations, however, the proposed system is still only a beginning for a genuine era of electronic records management. In the near future, as studies concentrate on refining the classifications, on plans for capturing records from external structures such as administration information systems, and on further development of preservation technology, the prospects for the electronic records management system will be very bright.

Hydrograph Separation and Flow Characteristic Analysis for Observed Rainfall Events during Flood Season in a Forested Headwater Stream (산지계류에 있어서 홍수기의 강우사상에 대한 유출수문곡선 분리 및 특성 분석)

  • Nam, Sooyoun;Chun, Kun-Woo;Lee, Jae Uk;Kang, Won Seok;Jang, Su-Jin
    • Korean Journal of Ecology and Environment / v.54 no.1 / pp.49-60 / 2021
  • We examined the flow characteristics of direct runoff and base flow in a headwater stream during 59 rainfall events observed in the flood seasons (June~September) of 2017 to 2020. Across the events, total precipitation ranged from 5.0 to 400.8 mm, total runoff from 0.1 to 176.5 mm, and the runoff ratio from 0.1 to 242.9%. Hydrograph separation showed that the flow duration of base flow (139.3 days) tended to be longer than that of direct runoff (78.3 days), while the contribution of direct runoff to total runoff (54.2%) was greater than that of base flow (45.8%). Among the rainfall and soil moisture conditions, the total amount and peak flow of both direct runoff and base flow were most strongly correlated (p<0.05) with total precipitation and rainfall duration. Rainfall events dominated by base flow in total amount and peak flow occurred under 5.0~200.4 and 10.5~110.5 mm of total precipitation, respectively, whereas events dominated by direct runoff occurred under 267.4~400.8 and 169.0~400.8 mm. The unique aspects of our study design therefore permitted inferences about flow characteristics and the contributions of base flow and direct runoff to total runoff in a headwater stream. These results should be useful for long-term strategies of effective, integrated surface-groundwater management in forested headwater streams.
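The abstract does not state which hydrograph separation method was used. Purely as an illustration of the technique, the widely used Lyne-Hollick one-parameter digital filter is one common way to split a streamflow series into quick (direct) flow and base flow; the filter parameter 0.925 is a conventional choice, not a value from this paper:

```python
def lyne_hollick(q, alpha=0.925):
    """Single-pass Lyne-Hollick filter: split total streamflow q into
    quick (direct) flow and base flow.  alpha is the filter parameter;
    0.925 is a conventional value for daily data."""
    quick = [0.0] * len(q)
    for t in range(1, len(q)):
        f = alpha * quick[t - 1] + (1 + alpha) / 2 * (q[t] - q[t - 1])
        # constrain quick flow to the physically valid range [0, q[t]]
        quick[t] = min(max(f, 0.0), q[t])
    base = [qt - ft for qt, ft in zip(q, quick)]
    return quick, base

# Invented example hydrograph (m^3/s) rising through a storm peak.
flow = [1.0, 1.2, 5.0, 9.0, 6.0, 3.0, 1.8, 1.3, 1.1]
quick, base = lyne_hollick(flow)
```

In practice the filter is usually run in multiple forward and backward passes; a single pass is kept here for brevity.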

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.35-52 / 2019
  • As public services are provided in various forms, including e-government, public demand for service quality keeps rising. Although continuous measurement and improvement are needed, traditional surveys are costly and time-consuming. There is therefore a need for an analytical technique that can measure the quality of public services quickly and accurately, at any time, from the data the services themselves generate. In this study, we analyzed public service quality using process mining techniques on the building licensing complaint service of N city, chosen because the necessary data could be secured and because the approach can spread to other institutions through public service quality management. We applied process mining to a total of 3,678 building license complaints filed in N city over two years from January 2014, and identified the process maps and the departments with high frequency and long processing times. The analysis showed that at certain points in time some departments were overloaded while others handled relatively few cases, and there was reasonable ground to suspect that increases in the number of complaints lengthened the time needed to complete them. Completion times varied from same-day to one year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division) exceeded 50%, and that of the top nine departments exceeded 70%; the heavily involved departments were few, and the load among departments was highly unbalanced. Most complaints followed a variety of different process patterns. The analysis also shows that the number of 'supplement' decisions has the greatest impact on the duration of a complaint: a 'supplement' decision requires a physical period in which the complainant revises and resubmits documents, lengthening the time until the whole complaint is completed. Thorough preparation before or at filing could therefore drastically reduce overall processing time. By identifying and disclosing the causes of and solutions for 'supplement' decisions, the system can help complainants prepare in advance and give them confidence that documents prepared from the disclosed information will pass, making the handling of complaints more predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by removing the need for reconsultation and duplicated work. The results of this study can be used to find the departments that carry a heavy complaint burden at particular times and to manage workforce allocation between departments flexibly. By analyzing the patterns of the departments consulted for each type of complaint, the results can also support automation of, or recommendations for, consultation requests. Furthermore, by applying machine learning techniques to the various data generated during complaint processing, the patterns of the complaint process can be learned and applied to the system for automated, intelligent complaint handling. This study is expected to inform future public service quality improvement through process mining analysis of civil services.
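The frequency and processing-time summary that a process-mining performance map provides can be sketched from an event log with plain Python. The log below is entirely hypothetical (invented case IDs, department names, and dates), but it shows the kind of per-department statistics the study reports:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log for a complaint-handling process:
# (case id, department, start date, end date).
log = [
    ("C1", "Waterworks Division",  "2014-01-02", "2014-01-10"),
    ("C1", "Urban Design Division", "2014-01-10", "2014-01-20"),
    ("C2", "Waterworks Division",  "2014-02-01", "2014-02-04"),
]

freq = defaultdict(int)          # how often each department appears
total_days = defaultdict(float)  # total processing days per department
for case, dept, start, end in log:
    d0 = datetime.strptime(start, "%Y-%m-%d")
    d1 = datetime.strptime(end, "%Y-%m-%d")
    freq[dept] += 1
    total_days[dept] += (d1 - d0).days

# Mean processing time per department, sorted by frequency -- the kind
# of summary a process-mining frequency/performance map visualizes.
for dept in sorted(freq, key=freq.get, reverse=True):
    print(dept, freq[dept], total_days[dept] / freq[dept])
```

Sorting departments by cumulative frequency like this is how one would reproduce the "top four departments exceed 50%" finding from a real log.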

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study predicts default risk using corporate data from 2012 to 2018, when K-IFRS was applied in earnest. The data comprise 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and captures the differences in default risk that exist among ordinary companies. Because learning used only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. The model can therefore provide stable default risk assessment services to companies that are hard to rate with traditional credit rating models, such as small and medium-sized companies and startups. Although machine learning-based corporate default prediction has been actively studied recently, most studies make predictions with a single model, so model bias remains an issue. Given that default risk information is used very widely in the market and sensitivity to differences in default risk is high, a stable and reliable valuation methodology, and strict standards for its calculation, are required. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for evaluation methods to be prepared, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced the bias of individual models by using stacking ensemble techniques that combine various machine learning models, capturing complex nonlinear relationships between default risk and corporate information while preserving the short computation time of machine learning-based prediction. To produce the forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data and each was evaluated on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts in each pair differed significantly. The forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models. In addition, because traditional credit rating models can also be included as sub-models in calculating the final default probability, this study provides a methodology by which existing credit rating agencies can adopt machine learning-based default risk prediction. The stacking ensemble techniques proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used to increase practical adoption by overcoming the limitations of existing machine learning-based models.
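The stacking step described above can be sketched with scikit-learn's `StackingClassifier`. This is a generic illustration, not the authors' pipeline: the data are synthetic, the sub-models are simplified, and only the seven-fold split (`cv=7`) mirrors a detail from the abstract:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the default-risk data (the real study used
# 10,545 rows of financial-statement features).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Sub-models whose out-of-fold predictions feed a meta-learner;
# cv=7 mirrors the seven-piece training split mentioned above.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=7,
)
stack.fit(X_tr, y_tr)
print("test accuracy:", stack.score(X_te, y_te))
```

The out-of-fold construction is what reduces the bias of any single sub-model: the meta-learner never sees a sub-model's prediction on data that sub-model was trained on.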

Understanding User Motivations and Behavioral Process in Creating Video UGC: Focus on Theory of Implementation Intentions (Video UGC 제작 동기와 행위 과정에 관한 이해: 구현의도이론 (Theory of Implementation Intentions)의 적용을 중심으로)

  • Kim, Hyung-Jin;Song, Se-Min;Lee, Ho-Geun
    • Asia Pacific Journal of Information Systems / v.19 no.4 / pp.125-148 / 2009
  • UGC (User Generated Contents) is emerging as the center of e-business in the Web 2.0 era. The trend reflects the changing roles of users in the production and consumption of contents on websites and helps us understand the new strategies of websites such as web portals and social network sites. Nowadays, we consume contents created by other non-professional users for both utilitarian (e.g., knowledge) and hedonic (e.g., fun) values. Contents we produce ourselves (e.g., photos, videos) are also posted on websites so that our friends, family, and even the public can consume them. Non-professionals, who used to be a passive audience, are now creating contents and sharing their UGCs with others on the Web, while accessible media, tools, and applications have reduced the difficulty and complexity of creating contents. Realizing that users create plenty of material that is very interesting to other people, media companies (i.e., web portals and social networking websites) are adjusting their strategies and business models accordingly. Increased demand for UGC may lead to website visits, which are the source of advertising revenue; companies therefore put more effort into making their websites open platforms where UGCs can be created and shared among users without technical or methodological difficulty. Many websites have adopted new technologies such as RSS and open APIs, and some have even changed the structure of their web pages so that UGC is exposed more often to more visitors. This mainstream position of UGC indicates that acquiring more UGCs and supporting participating users have become important to media companies. Although these companies need to understand why general users have shown increasing interest in creating and posting contents and what matters to them in the production process, few research results address these issues. The behavioral process of creating video UGCs, in particular, has not been explored enough to be fully understood. Built on a solid theoretical background (the theory of implementation intentions), part of our proposed research model mirrors the process of user behavior in creating video contents, which consists of intention to upload, intention to edit, edit, and upload. In addition, to explain how those behavioral intentions develop, we investigated the influence of antecedents from three motivational perspectives (intrinsic, editing software-oriented, and website network effect-oriented). First, from the intrinsic motivation perspective, we studied the roles of self-expression, enjoyment, and social attention in forming the intention to edit with preferred editing software or to upload video contents to preferred websites. Second, we explored the role of editing software for non-professionals, in terms of how it makes the production process easier and how useful it is in that process. Finally, from the website characteristic perspective, we investigated a website's network externality as an antecedent of users' intention to upload to preferred websites; the rationale is that posting UGCs is a basically social-oriented behavior, so users prefer a website with a high level of network externality for uploading contents. This study adopted a longitudinal research design: we emailed recipients twice with different questionnaires. Guided by an invitation email including a link to a web survey page, respondents answered most of the questions except edit and upload in the first survey. They were asked to name the UGC editing software they mainly used and the website they preferred for uploading edited contents, and then to answer related questions. For example, before answering the questions on network externality, each respondent had to name the website to which they would be willing to upload. At the end of the first survey, we asked whether they agreed to participate in a follow-up survey a month later. Over twenty days, 333 complete responses were gathered in the first survey. One month later, we emailed those recipients to ask for participation in the second survey, and 185 of the 333 (about 56 percent) answered. Personalized questionnaires reminded them of the editing software and website they had reported in the first survey, and they reported the degree to which they had edited with the software and uploaded video contents to the website over the past month. All recipients of the two surveys received book exchange tickets (about 5,000~10,000 Korean Won) according to the frequency of their participation. PLS analysis shows that user behavior in creating video contents is well explained by the theory of implementation intentions. In fact, intention to upload significantly influences intention to edit in the process of accomplishing the goal behavior, upload. These relationships reveal the behavioral process, previously unclear, by which users create video contents for uploading, and highlight the important role of editing in that process. Regarding the intrinsic motivations, the results show that users edit their own video contents in order to express intrinsic traits such as their thoughts and feelings, and that their intention to upload to a preferred website forms because they want to attract attention from others through contents reflecting themselves. This result corresponds well to the role of the website characteristic, network externality. Based on the PLS results, the network effect of a website significantly influences users' intention to upload to the preferred website, indicating that users with social attention motivations upload their video UGCs to a website whose network is big enough to realize those motivations easily. Finally, regarding the editing software-oriented motivations, making exclusively provided editing software more user-friendly (i.e., ease of use, usefulness) plays an important role in leading to users' intention to edit. Our research contributes to both academic scholars and practitioners. For researchers, our results show that the theory of implementation intentions applies well to the video UGC context and is very useful for explaining the relationship between implementation intentions and goal behaviors. With the theory, this study theoretically and empirically confirmed that editing is a behavior distinct from, and important to, uploading, and we tested the behavioral process of ordinary users in creating video UGCs, focusing on the significant motivational factors in each step. In addition, parts of our research model are rooted in solid theoretical backgrounds such as the technology acceptance model and the theory of network externality to explain the effects of UGC-related motivations. For practitioners, our results suggest that media companies restructure their websites so that users' needs for social interaction through UGC (e.g., self-expression, social attention) are well met. We also emphasize the strategic importance of a website's network size in leading non-professionals to upload video contents; such websites need to find ways to use network effects to acquire more UGCs. Finally, we suggest improving editing software as a way to increase editing behavior, a very important step leading to UGC uploading.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo which is Bakuk (Go) artificial intelligence program by Google DeepMind, had a huge victory against Lee Sedol. Many people thought that machines would not be able to win a man in Go games because the number of paths to make a one move is more than the number of atoms in the universe unlike chess, but the result was the opposite to what people predicted. After the match, artificial intelligence technology was focused as a core technology of the fourth industrial revolution and attracted attentions from various application domains. Especially, deep learning technique have been attracted as a core artificial intelligence technology used in the AlphaGo algorithm. The deep learning technique is already being applied to many problems. Especially, it shows good performance in image recognition field. In addition, it shows good performance in high dimensional data area such as voice, image and natural language, which was difficult to get good performance using existing machine learning techniques. However, in contrast, it is difficult to find deep leaning researches on traditional business data and structured data analysis. In this study, we tried to find out whether the deep learning techniques have been studied so far can be used not only for the recognition of high dimensional data but also for the binary classification problem of traditional business data analysis such as customer churn analysis, marketing response prediction, and default prediction. And we compare the performance of the deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper is the telemarketing response data of a bank in Portugal. It has input variables such as age, occupation, loan status, and the number of previous telemarketing and has a binary target variable that records whether the customer intends to open an account or not. 
In this study, to evaluate the possibility of utilization of deep learning algorithms and techniques in binary classification problem, we compared the performance of various models using CNN, LSTM algorithm and dropout, which are widely used algorithms and techniques in deep learning, with that of MLP models which is a traditional artificial neural network model. However, since all the network design alternatives can not be tested due to the nature of the artificial neural network, the experiment was conducted based on restricted settings on the number of hidden layers, the number of neurons in the hidden layer, the number of output data (filters), and the application conditions of the dropout technique. The F1 Score was used to evaluate the performance of models to show how well the models work to classify the interesting class instead of the overall accuracy. The detail methods for applying each deep learning technique in the experiment is as follows. The CNN algorithm is a method that reads adjacent values from a specific value and recognizes the features, but it does not matter how close the distance of each business data field is because each field is usually independent. In this experiment, we set the filter size of the CNN algorithm as the number of fields to learn the whole characteristics of the data at once, and added a hidden layer to make decision based on the additional features. For the model having two LSTM layers, the input direction of the second layer is put in reversed position with first layer in order to reduce the influence from the position of each field. In the case of the dropout technique, we set the neurons to disappear with a probability of 0.5 for each hidden layer. The experimental results show that the predicted model with the highest F1 score was the CNN model using the dropout technique, and the next best model was the MLP model with two hidden layers using the dropout technique. 
In this study, we obtained several findings from the experiments. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because the CNN performed well not only in fields where its effectiveness has been proven but also in binary classification problems, to which it has rarely been applied. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
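The choice of the F1 score over overall accuracy matters because telemarketing response data are typically imbalanced. A minimal sketch (hypothetical class counts, using scikit-learn rather than the authors' tooling) shows why accuracy can be misleading for the class of interest:

```python
# Why F1, not accuracy, for an imbalanced response-prediction problem.
# Class counts below are illustrative, not the paper's actual data.
from sklearn.metrics import accuracy_score, f1_score

# 100 customers, only 10 actually open an account (positive class).
y_true = [1] * 10 + [0] * 90

# A naive model that predicts "no" for everyone.
naive = [0] * 100

# A model that finds 7 of the 10 responders at the cost of 5 false alarms.
model = [1] * 7 + [0] * 3 + [1] * 5 + [0] * 85

print(accuracy_score(y_true, naive))              # 0.9  -- looks great, finds nobody
print(f1_score(y_true, naive, zero_division=0))   # 0.0
print(accuracy_score(y_true, model))              # 0.92
print(round(f1_score(y_true, model), 2))          # 0.64 -- reflects real detection skill
```

The naive model's high accuracy hides the fact that it never identifies a responder; the F1 score exposes this, which is why the study uses it to compare the CNN, LSTM, and MLP variants.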

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been realized in the information society, the usage and collection of information have become important. Like an artistic painting, a facial expression contains a wealth of information and can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy, which is inevitable since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model prediction.
Using SVR, we tried to build a model that can measure the level of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid search technique to find the optimal values of the parameters C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We repeated the experiments, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events, and we used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal, or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA, but showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
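The SVR-with-grid-search procedure described above can be sketched with scikit-learn. Everything below is an illustrative assumption standing in for the study's setup: the data are synthetic (297 cases to match the sample size, five made-up feature columns), the parameter grid is arbitrary, and MAE drives model selection as in the paper:

```python
# Sketch of SVR hyperparameter tuning by grid search, selected on MAE.
# Synthetic data; real inputs would be the extracted facial features.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(297, 5))  # 297 cases, 5 illustrative features
# Synthetic "arousal level" with a noisy linear relationship to the features.
y = X @ np.array([0.5, -0.3, 0.2, 0.0, 0.1]) + rng.normal(scale=0.1, size=297)

grid = GridSearchCV(
    SVR(kernel="rbf"),  # epsilon-insensitive loss is built into SVR
    param_grid={
        "C": [1, 10, 100],
        "epsilon": [0.01, 0.1, 0.5],
        "gamma": [0.01, 0.1, 1.0],  # RBF width, analogous to 1/sigma^2
    },
    scoring="neg_mean_absolute_error",  # MAE, as in the study
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
print(round(-grid.best_score_, 3))  # cross-validated MAE of the best model
```

In practice the tuned model would then be scored once on a hold-out set, which is how the paper compares SVR against the MRA and ANN baselines.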

Mechanical and Rheological Properties of Rice Plant (수도(水稻)의 역학적(力學的) 및 리올러지 특성(特性)에 관(關)한 연구(硏究))

  • Huh, Yun Kun;Cha, Gyun Do
    • Korean Journal of Agricultural Science
    • /
    • v.14 no.1
    • /
    • pp.98-133
    • /
    • 1987
  • The mechanical and rheological properties of agricultural materials are important for the engineering design and analysis of mechanical harvesting, handling, transporting, and processing systems. Agricultural materials, which are composed of structural members and fluids, do not react in a purely elastic manner; their response to stress and strain is a combination of elastic and viscous behavior, so-called viscoelastic behavior. Many researchers have studied the mechanical and rheological properties of various agricultural products, but few have studied those properties for the rice plant, and the available data cover only foreign rice varieties. This study was conducted to experimentally determine mechanical and rheological properties such as the axial compressive strength, tensile strength, bending and shear strength, stress relaxation, and creep behavior of rice stems, as well as the grain detachment strength. Rheological models for the rice stem were developed from the test data. The shearing characteristics were examined at several levels of stem portion, cross-sectional area, moisture content, and shearing angle. The results obtained from this study are summarized as follows. 1. The mechanical properties of the stems of the Japonica types were greater than those of the Indica × Japonica hybrid in compression, tension, bending, and shearing. 2. The mean compressive force was 80.5 N in the Japonica types and 55.5 N in the Indica × Japonica hybrid, about 70 percent of that of the Japonica types, and the value generally increased toward the lower portion of the stems. 3. The average tensile force was about 226.6 N in the Japonica types and 123.6 N in the Indica × Japonica hybrid, about 55 percent of that of the Japonica types. 4.
The bending moment was 0.19 N·m in the Japonica types and 0.13 N·m in the Indica × Japonica hybrid, 68 percent of that of the Japonica types, and the bending strength was 7.7 MPa in the Japonica types and 6.5 MPa in the Indica × Japonica hybrid, respectively. 5. The shearing force was 141.1 N in Jinju, the Japonica type, and 101.4 N in Taebaeg, the Indica × Japonica hybrid, 72 percent of that of Jinju, and the shearing strength of Taebaeg was 63 percent of that of Jinju. 6. The shearing force and shearing energy along the stem in Jinju both increased progressively toward the lower portions; in Taebaeg, the shearing force showed its maximum value at the intermediate portion, the shearing energy was greatest at 21 cm from the ground level, and the shearing strength and shearing energy per unit cross-sectional area of the stem were greater at the intermediate portion than at any other portion. 7. The shearing force and shearing energy increased with increasing cross-sectional area of the rice stem and with decreasing shearing angle from 90° to 50°. 8. The shearing forces showed minimum values of 110 N in Jinju and 60 N in Taebaeg, and the shearing energy at a moisture content about 15 percentage points below the initial moisture content was 50 mJ in Jinju and 30 mJ in Taebaeg, respectively. 9. The stress relaxation behavior of the rice stem could be described by the generalized Maxwell model, and the compression creep behavior by the Burgers model. 10. With increasing loading rate, the stress relaxation intensity increased, while the relaxation time and residual stress decreased. 11. In the compression creep test, logarithmic creep occurred at stresses less than 2.0 MPa and steady-state creep at stresses larger than 2.0 MPa. 12.
The stress level did not have a significant effect on the relaxation time, while the relaxation intensity and residual stress increased with increasing stress level. 13. In the compression creep test of the rice stem, the instantaneous elastic modulus of the Burgers model ranged from 60 to 80 MPa, and the viscosities of the free dashpot were very large, which confirms that the rice stem is a viscoelastic material. 14. The tensile detachment forces were about 1.7 to 2.3 N in the Japonica types and about 1.0 to 1.3 N in the Indica × Japonica hybrid, corresponding to 58 percent of the Japonica types; the bending detachment forces were about 0.6 to 1.1 N, corresponding to 30 to 50 percent of the tensile detachment forces, and the bending detachment force of the Indica × Japonica hybrid was 0.1 to 0.3 N, 7 to 21 percent of that of the Japonica types. 15. The detachment force of the lower portion was slightly larger than that of the upper portion within a panicle and was not significantly affected by the harvesting period from September 28 to October 20. 16. The tensile and bending detachment forces decreased as the moisture content decreased from 23 to 13 percent (w.b.) under natural drying, and the rate of decrease with moisture content was greater for the bending detachment force than for the tensile detachment force.
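The Burgers (four-element) model fitted to the compression creep data combines an instantaneous spring, a free dashpot, and a Kelvin-Voigt element for delayed elasticity. A minimal sketch of its creep-strain response under constant stress follows; the instantaneous modulus is taken within the 60-80 MPa range reported above, but the dashpot and Kelvin-Voigt parameters are purely illustrative assumptions, not the paper's fitted values:

```python
# Creep strain of a Burgers model under constant stress:
#   eps(t) = sigma * (1/E1 + t/eta1 + (1/E2) * (1 - exp(-E2*t/eta2)))
# Stress in MPa, moduli in MPa, viscosities in MPa*s, time in s.
import math

def burgers_creep_strain(t, stress, E1, eta1, E2, eta2):
    """Strain at time t under constant stress.

    E1   -- instantaneous elastic modulus (Maxwell spring)
    eta1 -- free-dashpot viscosity (Maxwell dashpot, steady viscous flow)
    E2, eta2 -- Kelvin-Voigt element (delayed, recoverable elasticity)
    """
    return stress * (1.0 / E1
                     + t / eta1
                     + (1.0 / E2) * (1.0 - math.exp(-E2 * t / eta2)))

# Example: 2.0 MPa stress (the logarithmic/steady-state creep boundary),
# E1 = 70 MPa; eta1, E2, eta2 are illustrative.
strain0 = burgers_creep_strain(0.0, 2.0, 70.0, 1e5, 30.0, 1e3)
print(round(strain0, 4))  # instantaneous strain = stress/E1, about 0.0286
```

At t = 0 only the instantaneous spring responds; as t grows, the Kelvin-Voigt term saturates while the free dashpot contributes the unbounded linear term, matching the very large dashpot viscosities reported for the rice stem.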
