• Title/Summary/Keyword: system design (시스템 설계)


Oil Production Evaluation for Hybrid Method of Low-Salinity Water and Polymer in Carbonate Oil Reservoir (탄산염암 저류층에 저염수주입공법과 폴리머공법의 복합 적용에 따른 오일 생산량 평가)

  • Lee, Yeonkyeong;Kim, Sooyeon;Lee, Wonsuk;Jang, Youngho;Sung, Wonmo
    • Journal of the Korean Institute of Gas / v.22 no.5 / pp.53-61 / 2018
  • Low-salinity water based polymer flooding (LSPF) is one of the most promising enhanced oil recovery (EOR) methods, offering the synergistic effect of combining polymer injection with low-salinity water injection. To maximize EOR efficiency, it is essential to design the low-salinity water appropriately, considering the properties of the polymer. The main purpose of this study is therefore to investigate the effect of pH and of the SO₄²⁻ ion, one of the potential determining ions (PDI), on oil production when LSPF is applied to a carbonate oil reservoir. First, the stability and adsorption of the polymer molecules were analyzed for different pH values and SO₄²⁻ concentrations of the injection water. The polymer solution remained stable whenever SO₄²⁻ was present in the injection water, regardless of pH or SO₄²⁻ concentration. However, the polymer retention analysis showed that, in neutral injection water, SO₄²⁻ interfered with polymer adsorption, so the adsorbed polymer layer became thinner as the SO₄²⁻ concentration increased. In contrast, when the injection water was acidic (pH 4), the amount of adsorbed polymer increased as the polymer solution was injected, greatly lowering the mobility of the polymer solution. Regarding wettability alteration by the low-salinity water effect, with neutral injection water more of the oil attached to the rock surface was detached as the SO₄²⁻ concentration increased, altering the wettability from oil-wet to water-wet. Under acidic conditions, because of the combined effect of rock dissolution and polymer adsorption, the wettability of the core system was altered less than under neutral conditions.
Overall, the best EOR efficiency was obtained when injecting a neutral, low-salinity polymer solution containing a high concentration of SO₄²⁻, which enhanced oil production by up to 12.3% compared with the low-salinity water injection method alone.

Development of cardiopulmonary resuscitation nursing education program of web-based instruction (웹 기반의 심폐소생술 간호교육 프로그램 개발)

  • Sin, Hae-Won;Hong, Hae-Sook
    • Journal of Korean Biological Nursing Science / v.4 no.1 / pp.25-39 / 2002
  • The purpose of this study is to develop and evaluate a web-based instruction (WBI) program to help nurses improve their knowledge and skills in cardiopulmonary resuscitation. Using the WBI program design model of Rhu (1999), the study was carried out during February-April 2002 in five steps: analysis, design, data collection and reconstruction, programming and publishing, and evaluation. The results were as follows. 1) The goal of the program was to improve the accuracy of CPR knowledge and skills. The program texts cover the concepts and importance of cardiopulmonary resuscitation (CPR), basic life support (BLS), advanced cardiac life support (ACLS), CPR treatment, and nursing care after CPR. In the file-making step, photographs, drawings, and image files were collected and edited with a web editor (Namo), a scanner, and Adobe Photoshop, and the files were then modified and posted on the web by file transfer protocol (FTP). Finally, the program was demonstrated, revised once more based on the results, and completed. 2) For evaluation, questionnaires were distributed to 36 nurses at K university hospital in D city. The nurses gave high scores for the learning contents (4.2 ± 0.67), for the structure and interaction of the program (4.0 ± 0.79), and for overall satisfaction with the program (4.2 ± 0.58). In conclusion, if the contents of this WBI educational program are further upgraded based on analysis of the evaluation results, it can serve as an effective tool for continuing education within a life-long educational system for nurses.


A Study on NCS-based Team Teaching Operation in Animation Related Department (애니메이션 관련학과 NCS기반 팀 티칭 운영방안에 관한 연구)

  • Jung, Dong-hee;An, Dong-kyu;Choi, Jung-woong
    • Cartoon and Animation Studies / s.47 / pp.31-52 / 2017
  • NCS education was created to realize a society in which skills and abilities are respected beyond formal credentials, by reforming recruitment systems and by developing and disseminating National Competency Standards. At the university level, special lectures and job training are being strengthened to raise industry experts. In the field of animation in particular, new technologies are emerging rapidly and convergent talent spanning various fields is in demand. The existing single-instructor teaching method has limits in meeting these social demands, and solving this problem requires the participation of a variety of specialized teachers. In other words, the aim is to address students' job training and employment jointly rather than individually, and team teaching was suggested as the method for doing so. The expected effects are as follows. First, the field of animation is becoming more diverse and complex; by using the NCS pools of job-related skills, professors from other departments can be matched to enable a wider range of professional instruction. Second, partial teaching loads can be shared with other departments by actively utilizing professors within the university, which strengthens the teaching capacity of the university as a whole. Third, a broader and more integrated educational system can be built through cooperative teaching with professors from other departments. Finally, a pool of college professors offers broader special lectures and mentoring than outside field specialists alone: students can receive varied guidance from responsible professors, and time and space constraints are avoided because mentors within the university are easy to meet.

A Proposal for Simplified Velocity Estimation for Practical Applicability (실무 적용성이 용이한 간편 유속 산정식 제안)

  • Tai-Ho Choo; Jong-Cheol Seo; Hyeon-Gu Choi; Kun-Hak Chun
    • Journal of Wetlands Research / v.25 no.2 / pp.75-82 / 2023
  • Stream flow-rate measurements serve as important basic data for the development and maintenance of water resources, and many experts are conducting research to make them more accurate. In Korea especially, monsoon and heavy rains are concentrated in summer, so floods occur frequently; flow rates must therefore be measured most accurately during floods in order to predict and prevent flooding. The U.S. Geological Survey (USGS) introduces the 1-, 2-, and 3-point methods using a current meter as one way to measure the mean velocity, but it is difficult to calculate the mean velocity accurately with the existing 1-, 2-, 3-point methods alone. This paper proposes a new, more accurate 1-, 2-, 3-point formula utilizing the probabilistic entropy concept, a highly practical study that can supplement the limitations of existing measurement methods. Coleman data and flume data were used to demonstrate the utility of the proposed formula. In the analysis of the flume data, the existing USGS methods showed average errors against the measured values of 7.6% for the 1-point method, 8.6% for the 2-point method, and 8.1% for the 3-point method; for the Coleman data, the 1-point method showed an average error rate of 5%, the 2-point method 5.6%, and the 3-point method 5.3%. By contrast, the proposed entropy-based formula reduced the error rate by about 60% compared with the existing method on the flume data, averaging 4.7% for the 1-point method, 5.7% for the 2-point method, and 5.2% for the 3-point method.
For the Coleman data it showed average errors of 2.5% for the 1-point method, 3.1% for the 2-point method, and 2.8% for the 3-point method, reducing the error rate by about 50% compared with the existing method. This study calculates the mean velocity more accurately than the existing 1-, 2-, 3-point methods, which can be useful in many ways, including future river disaster management, design, and administration.
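The conventional USGS point methods referenced above average velocities measured at fixed fractions of the flow depth. A minimal sketch of those standard formulas (this reflects the USGS convention only, not the paper's entropy-based revision, which the abstract does not give):

```python
def mean_velocity_1pt(v_06):
    """USGS 1-point method: the velocity observed at 0.6 of the depth
    below the surface is taken as the vertical's mean velocity."""
    return v_06

def mean_velocity_2pt(v_02, v_08):
    """USGS 2-point method: average of the velocities observed
    at 0.2 and 0.8 of the depth."""
    return (v_02 + v_08) / 2

def mean_velocity_3pt(v_02, v_06, v_08):
    """USGS 3-point method: the 2-point average combined equally
    with the 0.6-depth observation, i.e. (v02 + 2*v06 + v08) / 4."""
    return (mean_velocity_2pt(v_02, v_08) + v_06) / 2
```

For example, observations of 1.2, 1.0, and 0.8 m/s at 0.2, 0.6, and 0.8 depth give a 3-point mean of 1.0 m/s.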

Review of Domestic Research Trends on Layered Double Hydroxide (LDH) Materials: Based on Research Articles in Korean Citation Index (KCI) (이중층수산화물(layered double hydroxide, LDH) 소재의 국내 연구동향 리뷰: 한국학술지인용색인(KCI)에 발표된 논문을 대상으로)

  • Seon Yong Lee;YoungJae Kim;Young Jae Lee
    • Economic and Environmental Geology / v.56 no.1 / pp.23-53 / 2023
  • In this review paper, previous studies on layered double hydroxides (LDHs) published in the Korean Citation Index (KCI) were examined to investigate research trends on LDHs in Korea. Since the first publication in 2002, 160 papers on LDHs had been published as of January 2023. Among 31 academic fields, the top 5 were, in order, chemical engineering, chemistry, materials engineering, environmental engineering, and physics. Chemical engineering showed the highest number of published papers (71), while around 10 papers were published in each of the other four fields. All papers were reclassified into 15 research fields based on the industrial and academic purposes of using LDHs; the top 5 of these were, in order, environmental purification materials, polymer catalyst materials, battery materials, pharmaceutical/medicinal materials, and basic physicochemical properties. These findings suggest that research on applying LDH materials, within the academic fields of chemical engineering and chemistry, to improve their function as environmental purification materials, polymer catalysts, and battery materials has been conducted most actively. The application of LDHs for cosmetic and agricultural purposes and for developing environmental sensors is still at an early stage of research; considering their market potential and the trend toward high-efficiency, eco-friendly materials, however, these deserve attention as emerging application fields. All reclassified papers are summarized in our tables and a supplementary file, including information on the applied materials, key results, and the characteristics and synthesis methods of the LDHs used. We expect that these findings on overall LDH research trends in Korea can help in designing future LDH research and in efficiently formulating policies for resources, energy, and the environment.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a landmark victory against Lee Sedol. Many people thought that a machine could not beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning attracted attention as the core AI technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems: it performs especially well in image recognition, and it also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for the binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account.
To evaluate the applicability of deep learning algorithms to binary classification, we compared models using CNN, LSTM, and dropout, which are widely used deep learning algorithms and techniques, with MLP models, the traditional artificial neural network. Because not every network design alternative can be tested, the experiments were conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the conditions for applying dropout. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm recognizes features by reading values adjacent to a specific value, but in business data the proximity of fields usually does not matter because each field is independent; we therefore set the CNN filter size to the number of fields so that the whole record is learned at once, and added a hidden layer so that decisions are made from the additional features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of each field's position. For dropout, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout.
Several findings emerged as the experiments proceeded. First, models using dropout make slightly more conservative predictions than those without it, and generally classify better. Second, CNN models classify better than MLP models; this is interesting because CNN performed well on a binary classification problem to which it has rarely been applied, as well as in the fields where its effectiveness is already proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve binary business classification problems.
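The F1 score used above balances precision and recall on the class of interest. A minimal sketch of how it is computed from confusion-matrix counts:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall for the positive class.
    tp, fp, fn: counts of true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Unlike overall accuracy, F1 ignores true negatives, which is why it is preferred when the interesting class (e.g., customers who respond) is rare.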

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.221-241 / 2018
  • Deep learning has been getting attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the convolutional neural network (CNN). CNN is characterized by dividing the input image into small sections, recognizing partial features, and combining them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been limited to image recognition and natural language processing, and the use of deep learning for business problems is still at an early research stage. If their performance is proven, they can be applied to traditional business problems such as marketing response prediction, fraud transaction detection, and bankruptcy prediction. It is therefore a very meaningful experiment to assess the possibility of solving business problems with deep learning, based on the case of online shopping companies, which have big data, relatively easy-to-identify customer behavior, and high utilization value. In online shopping in particular, the competitive environment is changing rapidly and becoming more intense, so analyzing customer behavior to maximize profit is increasingly important. In this study, we propose a 'CNN model of heterogeneous information integration' using CNN as a way to improve the prediction of customer behavior in online shopping enterprises.
To arrive at a model that optimizes performance, that is, a convolutional neural network built on a multi-layer perceptron structure that learns from combined structured and unstructured information, the model uses 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design'; we evaluate the performance of each architectural element and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and voice-of-customer (VOC) data from a specific online shopping company in Korea. The extraction criteria cover 47,947 customers who registered at least one VOC in January 2011 (one month); their customer profiles, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month were used. The experiment has two stages: in the first, we evaluate the three architectural elements that affect the performance of the proposed model and select optimal parameters; in the second, we evaluate the performance of the proposed model. The experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that using unstructured information contributes to predicting customer behavior, and that CNN can be applied to solve business problems as well as image recognition and natural language processing problems.
The experiments confirm that CNN is effective in understanding and interpreting the meaning of context in textual VOC data. It is also significant that this empirical study, based on the actual data of an e-commerce company, extracted very meaningful information for predicting customer behavior from VOC data written directly by customers in text form. Finally, through the various experiments, the proposed model provides useful information for future research on parameter selection and performance.
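The 'heterogeneous information integration' step described above can be pictured as joining structured customer fields with a vector derived from the text VOC into a single input row; a minimal sketch under that assumption (the function name and feature choices are illustrative, not taken from the paper):

```python
def integrate_heterogeneous(structured_fields, voc_vector):
    """Join structured features (e.g., purchase count, total amount)
    with an unstructured-text vector (e.g., a vectorized VOC message)
    into one flat input row for the downstream network."""
    return list(structured_fields) + list(voc_vector)
```

The downstream network then sees one fixed-length row per customer, regardless of whether each feature came from transactions or from text.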

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.185-202 / 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information have become important. A facial expression, like an artistic painting, contains information that could be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, the MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied its studies commercially. In the academic literature, conventional methods such as multiple regression analysis (MRA) and artificial neural networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy, which is inevitable since MRA can only capture a linear relationship between the independent variables and the dependent variable. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. ANN, however, has also been criticized for overfitting and for the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using support vector regression (SVR) to increase prediction accuracy. SVR is an extension of the support vector machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that lies close (within a threshold ε) to the model prediction.
Using SVR, we built a model that measures the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data; preprocessing steps were then taken to choose statistically significant variables. In total, 297 cases were used in the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid search to find optimal values for the parameters C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer; the learning rate and momentum were set to 10%, and the sigmoid function was used as the transfer function of the hidden and output nodes. We repeated the experiments varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables, with a stopping condition of 50,000 learning events. MAE (mean absolute error) was used as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared with MRA and ANN. Regardless of the target variable (the level of arousal, or the level of positive/negative valence), SVR performed best on the hold-out set. ANN also outperformed MRA, but showed considerably lower prediction accuracy than SVR for both target variables. The findings of this research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
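The ε-insensitive loss adopted above ignores errors smaller than the threshold ε, which is precisely why the fitted SVR model depends only on a subset of the training data (points inside the ε tube contribute nothing). A minimal sketch of that loss and of the MAE metric used for the comparison:

```python
def eps_insensitive_loss(y_true, y_pred, eps):
    """SVR's epsilon-insensitive loss: zero inside the eps tube,
    linear in the excess error outside it."""
    return max(0.0, abs(y_true - y_pred) - eps)

def mean_absolute_error(y_true, y_pred):
    """MAE, the performance measure used in the paper's comparison."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

For instance, with ε = 0.1 a prediction off by 0.05 incurs zero loss, while one off by 0.3 incurs a loss of 0.2.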

Geochemical Equilibria and Kinetics of the Formation of Brown-Colored Suspended/Precipitated Matter in Groundwater: Suggestion to Proper Pumping and Turbidity Treatment Methods (지하수내 갈색 부유/침전 물질의 생성 반응에 관한 평형 및 반응속도론적 연구: 적정 양수 기법 및 탁도 제거 방안에 대한 제안)

  • 채기탁;윤성택;염승준;김남진;민중혁
    • Journal of the Korean Society of Groundwater Environment / v.7 no.3 / pp.103-115 / 2000
  • The formation of brown-colored precipitates is one of the serious problems frequently encountered in the development and supply of groundwater in Korea, because it causes the water to exceed the drinking-water standard in terms of color, taste, turbidity, and dissolved iron concentration, and often results in scaling within the water supply system. In groundwaters from the Pajoo area, brown precipitates typically form within a few hours after pumping-out. In this paper we examine the process of brown precipitate formation using equilibrium thermodynamic and kinetic approaches, in order to understand the origin and geochemical pathway of turbidity generation in groundwater. The results of this study are used to suggest both a proper pumping technique to minimize precipitate formation and an optimal design of water treatment methods to improve water quality. The bedrock groundwater in the Pajoo area belongs to the Ca-HCO₃ type, evolved through water/rock (gneiss) interaction. Based on SEM-EDS and XRD analyses, the precipitates are identified as amorphous, Fe-bearing oxides or hydroxides. Multi-step filtration with pore sizes of 6, 4, 1, 0.45, and 0.2 μm shows that the precipitates mostly fall in the colloidal size range (1 to 0.45 μm) but are concentrated (about 81%) in the 1 to 6 μm range in terms of mass (weight) distribution. The large amounts of dissolved iron possibly originated from dissolution of clinochlore in cataclasite, which contains high amounts of Fe (up to 3 wt.%). Saturation-index calculations (using the computer code PHREEQC), as well as pH-Eh stability relations, also indicate that the final precipitates are Fe-oxyhydroxides formed by the change of water chemistry (mainly oxidation) due to exposure to oxygen during the pumping-out of Fe(II)-bearing, reduced groundwater.
After pumping-out, the groundwater shows progressive decreases in pH, DO, and alkalinity with elapsed time, whereas turbidity first increases and then decreases. The decrease in dissolved Fe concentration as a function of elapsed time t after pumping-out is expressed by the regression equation Fe(II) = 10.1 exp(-0.0009t). The oxidation reaction caused by the influx of free oxygen during the pumping and storage of groundwater produces the brown precipitates, in a manner dependent on time, the partial pressure of oxygen (PO₂), and pH. To obtain drinkable water quality, therefore, the precipitates should be removed by filtering after stepwise storage and aeration in tanks of sufficient volume for sufficient time; the particle-size distribution data also suggest that stepwise filtration would be cost-effective. To minimize scaling within wells, continuous pumping (if possible) at the optimum pumping rate is recommended, because this technique is most effective at minimizing mixing between the deep Fe(II)-rich water and the shallow O₂-rich water. Simultaneous pumping of the shallow O₂-rich water in separate wells is also recommended.
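The regression reported above can be evaluated directly; a minimal sketch (the units of elapsed time t follow the study's measurements, which the abstract does not state explicitly):

```python
import math

def fe2_concentration(t):
    """Dissolved Fe(II) remaining after elapsed time t since pumping-out,
    per the study's fitted regression Fe(II) = 10.1 * exp(-0.0009 * t)."""
    return 10.1 * math.exp(-0.0009 * t)
```

At t = 0 the fitted initial concentration is 10.1, and the fitted rate constant implies a half-life of ln(2)/0.0009, about 770 time units, which is consistent with the observation that precipitates form over a few hours.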
