• Title/Summary/Keyword: information flow


Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. The partial derivative on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding, in decreasing order, is CNTK, Tensorflow, and Theano. This criterion is based simply on code length; the learning curve and ease of coding are not the main concern.
According to this criterion, Theano was the most difficult to implement with, while CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or search method we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept identical: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers is also important. And for users who are still learning deep learning models, the availability of sufficient examples and references also matters.
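The chain-rule mechanism described above can be sketched in a few lines of Python. This is a toy reverse-mode example with our own `Node`/`backward` names, not any framework's API: each edge stores a local partial derivative, and `backward` accumulates gradients along the graph.

```python
# A minimal sketch of reverse-mode automatic differentiation over a
# computational graph (illustrative only; names are ours, not a framework's).

class Node:
    def __init__(self, value, parents=()):
        self.value = value        # forward value at this node
        self.parents = parents    # (parent_node, local_partial) pairs, one per edge
        self.grad = 0.0           # accumulated d(output)/d(this node)

def add(a, b):
    # local partials: d(a+b)/da = 1, d(a+b)/db = 1
    return Node(a.value + b.value, parents=((a, 1.0), (b, 1.0)))

def mul(a, b):
    # local partials: d(ab)/da = b, d(ab)/db = a
    return Node(a.value * b.value, parents=((a, b.value), (b, a.value)))

def backward(output):
    """Propagate gradients from the output by the chain rule on each edge.
    (This naive traversal suffices here because shared nodes are leaves.)"""
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += node.grad * local
            stack.append(parent)

# f(x, y) = (x + y) * x at x = 2, y = 3: f = 10, df/dx = 2x + y = 7, df/dy = x = 2
x, y = Node(2.0), Node(3.0)
f = mul(add(x, y), x)
backward(f)
print(f.value, x.grad, y.grad)   # 10.0 7.0 2.0
```

Frameworks build essentially this structure for every expression, which is why gradients of arbitrary models come for free once the forward graph is defined.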

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.121-139
    • /
    • 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction is greatly important for financial institutions. Many researchers have dealt with bankruptcy prediction over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers to obtain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers: different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while removing irrelevant and harmful instances from the original set. Instance selection and bagging are both well known in data mining; however, few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) for improving the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. It uses the idea of survival of the fittest by progressively accepting better solutions to the problem, and it searches by maintaining a population of solutions from which better solutions are created, rather than by making incremental changes to a single solution. The initial solution population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation. The solutions, encoded as strings, are evaluated by the fitness function.
The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select an optimal instance subset, which is then used as the input data of the bagging model. In this study, the chromosome is encoded as a binary string representing the instance subset. In this phase, the population size was set to 100 and the maximum number of generations to 150, with crossover and mutation rates of 0.7 and 0.1, respectively. We used the prediction accuracy of the model as the fitness function of the GA: an SVM model is trained on the training set using the selected instance subset, and its prediction accuracy on the test set is used as the fitness value in order to avoid overfitting. In the second phase, we used the optimal instance subset selected in the first phase as the input data of the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real data set from Korean companies. The research data contain 1,832 externally non-audited firms: 916 bankruptcy cases and 916 non-bankruptcy cases. Financial ratios categorized as stability, profitability, growth, activity, and cash flow were investigated through a literature review and basic statistical methods, and we selected 8 financial ratios as the final input variables. We separated the whole data set into training, test, and validation subsets. We compared the proposed model with several comparative models, including the simple individual SVM model, the simple bagging model, and the instance-selection-based SVM model. McNemar tests were used to examine whether the proposed model significantly outperforms the other models. The experimental results show that the proposed model outperforms the others.
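The two-phase design above can be sketched in dependency-free Python. This is a toy illustration, not the paper's implementation: a 1-NN classifier stands in for the SVM base learner, the data are synthetic, and the GA fitness reuses the test set for both selection and final evaluation, whereas the paper holds out a separate validation set.

```python
import random

random.seed(0)

# Toy 2-class data: two well-separated noisy clusters (hypothetical).
def make_data(n=40):
    data = []
    for i in range(n):
        label = i % 2
        base = 0.0 if label == 0 else 2.0
        data.append(([base + random.gauss(0, 0.5), base + random.gauss(0, 0.5)], label))
    return data

train, test = make_data(40), make_data(20)

def predict_1nn(instances, x):
    # 1-NN stands in for the paper's SVM base classifier (sketch only).
    best = min(instances, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return best[1]

def accuracy(instances, dataset):
    if not instances:
        return 0.0
    return sum(predict_1nn(instances, x) == y for x, y in dataset) / len(dataset)

def ga_select(train, test, pop_size=20, generations=30, cx=0.7, mut=0.1):
    """Phase 1: GA over binary chromosomes marking which training instances
    to keep; fitness is the base classifier's accuracy on held-out data."""
    n = len(train)
    def fitness(chrom):
        return accuracy([train[i] for i in range(n) if chrom[i]], test)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]                              # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)      # select among the fittest
            cut = random.randrange(1, n) if random.random() < cx else 0
            child = a[:cut] + b[cut:]              # one-point crossover
            child = [1 - g if random.random() < mut else g for g in child]
            nxt.append(child)
        pop = nxt
    best = max(pop, key=fitness)
    return [train[i] for i in range(n) if best[i]]

def bagging_predict(selected, x, n_estimators=5):
    """Phase 2: bootstrap-sample the GA-selected subset, train one base
    classifier per sample, and combine predictions by majority vote."""
    votes = []
    for _ in range(n_estimators):
        boot = [random.choice(selected) for _ in selected]
        votes.append(predict_1nn(boot, x))
    return max(set(votes), key=votes.count)

selected = ga_select(train, test)
acc = sum(bagging_predict(selected, x) == y for x, y in test) / len(test)
print(f"bagged accuracy on GA-selected subset: {acc:.2f}")
```

The point of the sketch is the data flow: the GA first prunes the training set, and only the surviving instances feed the bootstrap samples of the bagging ensemble.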

Analyzing the Issue Life Cycle by Mapping Inter-Period Issues (기간별 이슈 매핑을 통한 이슈 생명주기 분석 방법론)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.25-41
    • /
    • 2014
  • Recently, the number of social media users has increased rapidly because of the prevalence of smart devices. As a result, the amount of real-time data has been increasing exponentially, which, in turn, is generating more interest in using such data to create added value. For instance, several attempts are being made to analyze the search keywords frequently used on news portal sites and the words regularly mentioned on various social media in order to identify social issues. The technique of "topic analysis" is employed to identify topics and themes in a large collection of text documents. As one of the most prevalent applications of topic analysis, issue tracking investigates changes in the social issues that topic analysis identifies. Traditional issue tracking is conducted by identifying the main topics of the documents covering the entire period at once and then analyzing the occurrence of each topic by period. This traditional approach, however, has two limitations. First, when a new period is added, topic analysis must be repeated on all the documents of the entire period rather than only on the documents of the added period. This creates practical limitations in the form of significant time and cost burdens, so the traditional approach is difficult to apply in most settings that require analysis of additional periods. Second, issues are not only constantly created and terminated; a single issue can also split into several issues, and multiple issues can merge into one. In other words, each issue has a life cycle consisting of the stages of creation, transition (merging and segmentation), and termination. Existing issue tracking methods do not address the connections and influence relationships among these issues.
The purpose of this study is to overcome the two limitations of existing issue tracking: the limitation of the analysis method and the lack of consideration of the changeability of issues. Suppose we perform a separate topic analysis for each of several periods. It then becomes essential to map issues across periods in order to trace issue trends. However, discovering connections between issues of different periods is not easy, because the issues derived for each period are mutually heterogeneous. In this study, to overcome these limitations, the analysis is performed independently for each period, without having to analyze the documents of the entire period simultaneously, and issue mapping is then performed to link the identified issues of adjacent periods. An integrated view across the individual periods was presented, and the issue flow over the entire integrated period was depicted. In this way, the entire issue life cycle, including the stages of creation, transition (merging and segmentation), and extinction, is identified and examined systematically, and the changeability of issues is analyzed. The proposed methodology is highly efficient in terms of time and cost while sufficiently considering the changeability of issues, and the results of this study can be used to adapt the methodology to practical situations. By applying the proposed methodology to actual Internet news, we analyze its potential practical applications. The proposed methodology was able to extend the period of analysis and to follow the course of each issue's life cycle. It can thus facilitate a clearer understanding of complex social phenomena through topic analysis.
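One way to sketch the per-period mapping idea is to link issues of adjacent periods whenever the cosine similarity of their keyword-weight vectors exceeds a threshold, and to read the life-cycle transition off the link pattern. The issue names, keyword weights, and threshold below are hypothetical; the paper's actual mapping procedure may differ.

```python
import math

def cosine(a, b):
    """Cosine similarity of two sparse keyword-weight vectors (dicts)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_issues(prev, curr, threshold=0.5):
    """Link each current-period issue to its similar predecessors and label
    its transition: 'created' (no predecessor), 'continued' (one),
    'merged' (several). Symmetrically, a previous issue linked to no
    current issue has terminated; one linked to several has split."""
    links, labels = {}, {}
    for name, vec in curr.items():
        preds = [p for p, pv in prev.items() if cosine(vec, pv) >= threshold]
        links[name] = preds
        labels[name] = ("created" if not preds
                        else "continued" if len(preds) == 1 else "merged")
    return links, labels

# Hypothetical topic keyword-weight vectors for two adjacent periods.
period1 = {"election": {"vote": 0.9, "poll": 0.8},
           "economy":  {"rate": 0.9, "bank": 0.7}}
period2 = {"election2": {"vote": 0.8, "poll": 0.9},
           "scandal":   {"leak": 0.9, "probe": 0.8}}

links, labels = map_issues(period1, period2)
print(labels)   # 'election2' continues 'election'; 'scandal' is newly created
```

Because each period is analyzed independently, adding a new period only requires one new topic analysis plus this cheap pairwise mapping, which is the efficiency gain the study targets.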

Problems with ERP Education at College and How to Solve the Problems (대학에서의 ERP교육의 문제점 및 개선방안)

  • Kim, Mang-Hee;Ra, Ki-La;Park, Sang-Bong
    • Management & Information Systems Review
    • /
    • v.31 no.2
    • /
    • pp.41-59
    • /
    • 2012
  • ERP is a new technique of process innovation. It stands for enterprise resource planning, whose purpose is the integrated, total management of enterprise resources. ERP can also be seen as one of the latest management systems, one that uses computers to organically connect all business processes, including marketing, production, and delivery, and to control those processes in real time. Currently, however, it is not easy for local enterprises to find operators to take charge of ERP programs, even if they want to introduce such a resource management system. This suggests an urgent need to train such operators through ERP education at school. In the field of education, however, the lack of professional ERP instructors and the limited effectiveness of learning programs for industrial applications of ERP are obstacles to producing ERP workers as competent as enterprises require. Within ERP, accounting is more important than any other area: accountants are assuming more and more roles in ERP, and the demand for experts in ERP accounting is rapidly increasing. This study examined previous research and literature concerning ERP education, identified problems with current ERP education at college, and proposed how to solve them. The proposed improvements are as follows. First, prerequisite learning for ERP, that is, education in the principles of accounting, should be intensified so that students acquire sufficient basic theoretical knowledge for ERP. Second, many different scenarios designed for trying out ERP programs in business settings should be created, and students should be educated to understand the incidents and events in those scenarios and to apply that understanding by trying ERP for themselves. Third, as mentioned earlier, ERP is a system that integrates all enterprise resources, such as marketing, procurement, personnel management, remuneration, and production, under the framework of accounting.
It should be noted that under ERP, business activities are organically connected with accounting modules. More importantly, those modules should be recognized not individually but as parts of a whole flow of accounting. This study has a limitation in that it is a literature study relying heavily on previous studies, publications, and reports. This suggests the need to compare the effectiveness of ERP education before and after applying the improvements this study proposes. It is also necessary to survey students' and professors' perceived effectiveness of current ERP education and to compare and analyze the differences in perception between the two groups.


Accounting Conservatism and Excess Executive Compensation (회계 보수주의와 경영자 초과보상)

  • Byun, Seol-Won;Park, Sang-Bong
    • Management & Information Systems Review
    • /
    • v.37 no.2
    • /
    • pp.187-207
    • /
    • 2018
  • This study examines the negative relationship between accounting conservatism and excess executive compensation, and whether this relationship strengthens as managerial incentive compensation intensity increases. For this purpose, a total of 2,755 firm-years of companies listed on the Korea Stock Exchange from December 2012 to 2016 were selected as the final sample. The results of this study are as follows. First, there is a statistically significant negative relationship between accounting conservatism and excess executive compensation. This implies that when manager compensation is linked to firm performance, managers have incentives to distort future cash flow estimates by overbooking assets or accounting profits in order to maximize their compensation. In this sense, accounting conservatism can reduce opportunistic behavior by restricting managerial accounting choices, which can be interpreted as a reduction in overpayment to managers. Second, we found that the relationship between accounting conservatism and excess executive compensation strengthens with the intensity of incentive compensation for accounting performance. The higher the intensity of incentive compensation for accounting performance, the more likely the manager is to have an incentive to make earnings adjustments; a high level of incentive compensation for accounting performance therefore means that the ex post settling-up problem due to over-compensation can become serious. In this case, the higher the incentive compensation intensity for accounting performance, the greater the role and utility of conservatism in manager compensation contracts. This study presents empirical evidence on the usefulness of accounting conservatism in managerial compensation contracts, as theoretically proposed by Watts (2003), and provides an additional basis for the view that conservatism can serve as a useful tool for investment decisions.

Reproducibility of Regional Pulse Wave Velocity in Healthy Subjects

  • Im Jae-Joong;Lee, Nak-Bum;Rhee Moo-Yong;Na Sang-Hun;Kim, Young-Kwon;Lee, Myoung-Mook;Cockcroft John R.
    • International Journal of Vascular Biomedical Engineering
    • /
    • v.4 no.2
    • /
    • pp.19-24
    • /
    • 2006
  • Background: Pulse wave velocity (PWV), which is inversely related to the distensibility of the arterial wall, offers a simple and potentially useful approach for the evaluation of cardiovascular diseases. In spite of the clinical importance and widespread use of PWV, there exists no standard either for pulse sensors or for the system requirements for accurate pulse wave measurement. The objective of this study was to assess the reproducibility of PWV values using a newly developed PWV measurement system in healthy subjects prior to a large-scale clinical study. Methods: The system used for the study was the PP-1000 (Hanbyul Meditech Co., Korea), which provides regional PWV values based on simultaneous measurements of electrocardiography (ECG), phonocardiography (PCG), and pulse waves from four arterial sites (carotid, femoral, radial, and dorsalis pedis). Seventeen healthy male subjects with a mean age of 33 years (range 22 to 52 years) and without any cardiovascular disease participated in the experiment. Two observers (A and B) each performed two consecutive measurements on the same subject in random order. To evaluate system reproducibility, two analyses (within-observer and between-observer) were performed, with results expressed as mean difference ± 2SD, as described by Bland and Altman plots. Results: The means and SDs of PWV for the aorta, arm, and leg were 7.07 ± 1.48 m/sec, 8.43 ± 1.14 m/sec, and 8.09 ± 0.98 m/sec as measured by observer A, and 6.76 ± 1.00 m/sec, 7.97 ± 0.80 m/sec, and 7.97 ± 0.72 m/sec by observer B, respectively. Between-observer differences (mean ± 2SD) for the aorta, arm, and leg were 0.14 ± 0.62 m/sec, 0.18 ± 0.84 m/sec, and 0.07 ± 0.86 m/sec, and the correlation coefficients were high, notably 0.93 for aortic PWV.
Within-observer differences (mean ± 2SD) for the aorta, arm, and leg were 0.01 ± 0.26 m/sec, 0.02 ± 0.26 m/sec, and 0.08 ± 0.32 m/sec for observer A, and 0.01 ± 0.24 m/sec, 0.04 ± 0.28 m/sec, and 0.01 ± 0.20 m/sec for observer B, respectively. All the measurements showed significantly high correlation coefficients, ranging from 0.94 to 0.99. Conclusion: The PWV measurement system used in this study offers comfortable, simple operation and provides accurate analysis results with high reproducibility. Since the reproducibility of the measurement is critical for clinical diagnosis, it is necessary to provide an accurate algorithm for the detection of additional features, such as the flow wave, reflection wave, and dicrotic notch, from a pulse waveform. This study will be extended to the comparison of PWV values from patients with various vascular risks for clinical application. The data acquired from this study could be used to determine the appropriate sample size for further studies of various types of arteriosclerosis-related vascular disease.
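The within- and between-observer statistics above follow the Bland-Altman convention: agreement between paired measurements is summarized as the mean difference with limits of agreement at mean ± 2SD. A minimal sketch of that computation, with illustrative values rather than the study's data:

```python
import math

def bland_altman(m1, m2):
    """Return (mean difference, lower limit, upper limit) for two paired
    lists of measurements, with limits of agreement at mean ± 2SD."""
    diffs = [a - b for a, b in zip(m1, m2)]
    mean = sum(diffs) / len(diffs)
    # sample standard deviation of the paired differences
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return mean, mean - 2 * sd, mean + 2 * sd

# e.g. aortic PWV (m/sec) measured twice by the same observer (made-up values):
first  = [7.1, 6.8, 7.5, 8.2, 6.9]
second = [7.0, 6.9, 7.4, 8.1, 7.0]
mean, lo, hi = bland_altman(first, second)
print(f"mean difference {mean:+.2f} m/sec, limits of agreement ({lo:+.2f}, {hi:+.2f})")
```

A mean difference near zero with narrow limits, as reported in the study, indicates that repeated measurements rarely disagree by a clinically meaningful amount.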


Study on the Consequence Effect Analysis & Process Hazard Review at Gas Release from Hydrogen Fluoride Storage Tank (최근 불산 저장탱크에서의 가스 누출시 공정위험 및 결과영향 분석)

  • Ko, JaeSun
    • Journal of the Society of Disaster Information
    • /
    • v.9 no.4
    • /
    • pp.449-461
    • /
    • 2013
  • As the hydrofluoric acid leak in Gumi-si, Gyeongsangbuk-do, and the hydrochloric acid leak in Ulsan, Gyeongsangnam-do demonstrated, chemical accidents are mostly caused by large amounts of volatile toxic substances leaking from damaged storage tanks or transporter pipe lines. Safety assessment is the most important concern because such toxic material accidents cause human and material damage to the environment and atmosphere of the surrounding area. Therefore, in this study, hydrofluoric acid leaking from a storage tank was selected as the study example: the diffusion of the leaked substance into the atmosphere was simulated, and the consequences were analyzed through numerical analysis and diffusion simulation with ALOHA (Areal Locations of Hazardous Atmospheres). The results of a qualitative HAZOP (Hazard and Operability) evaluation showed that flange leaks, operation delays due to leakage of valves and hoses, and toxic gas leaks were danger factors. The possibility of fire arising from temperature, pressure, and corrosion, of nitrogen supply overpressure, and of toxic leaks from internal corrosion of the tank or pipe joints was also found to be high. The ALOHA results differed somewhat depending on the input data of the dense gas model; however, wind direction and speed played a bigger role than atmospheric stability, with higher wind speed promoting the diffusion of the contaminant. In terms of diffusion concentration, both liquid and gas leaks resulted in almost the same LC50 and ALOHA AEGL-3 (Acute Exposure Guideline Level) values, and the scenarios showed almost identical results in the ALOHA model. Therefore, a buffer distance for toxic gas can be determined by comparing the numerical analysis and the diffusion concentration with the IDLH (Immediately Dangerous to Life or Health) level. Such studies will help perform risk assessments of toxic leaks more efficiently and can be used to establish a proper community emergency response system.
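The buffer-distance reasoning above compares predicted downwind concentrations against thresholds such as the IDLH. As a rough illustration only (a passive Gaussian plume, not ALOHA's dense-gas model, and with made-up release numbers), the ground-level centerline concentration of a continuous ground release is C = Q / (π u σy σz):

```python
import math

def plume_centerline(q, u, sigma_y, sigma_z):
    """Ground-level centerline concentration (g/m^3) of a continuous ground
    release under a passive Gaussian plume: q release rate (g/s), u wind
    speed (m/s), sigma_y/sigma_z horizontal and vertical dispersion
    coefficients (m) evaluated at the downwind distance of interest."""
    return q / (math.pi * u * sigma_y * sigma_z)

# Hypothetical release: 500 g/s in a 3 m/s wind, dispersion 50 m x 25 m.
c = plume_centerline(500.0, 3.0, 50.0, 25.0)
print(f"{c * 1000:.1f} mg/m^3")   # value to compare against IDLH-type thresholds
print(plume_centerline(500.0, 6.0, 50.0, 25.0) < c)  # stronger wind dilutes
```

The sketch shows the two sensitivities the study reports: concentration falls with wind speed at a fixed point, while the dispersion coefficients (which grow with distance) determine how far the threshold concentration extends.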

Development of Cyber R&D Platform on Total System Performance Assessment for a Potential HLW Repository ; Application for Development of Scenario through QA Procedures (고준위 방사성폐기물 처분 종합 성능 평가 (TSPA)를 위한 Cyber R&D Platform 개발 ; 시나리오 도출 과정에서의 품질보증 적용 사례)

  • Seo Eun-Jin;Hwang Yong-soo;Kang Chul-Hyung
    • Proceedings of the Korean Radioactive Waste Society Conference
    • /
    • 2005.06a
    • /
    • pp.311-318
    • /
    • 2005
  • Transparency of the Total System Performance Assessment (TSPA) is the key issue in enhancing public acceptance of a permanent high-level radioactive waste repository. To ensure it, quality assurance of all TSPA activities is necessary. An integrated Cyber R&D Platform has been developed by KAERI using the T2R3 principles applicable to the five major steps of R&D. The proposed system is implemented as a web-based system so that all participants in TSPA can access it. It is composed of FEAS (FEp to Assessment through Scenario development), which presents the systematic flow from FEPs to assessment methods in a flow chart; PAID (Performance Assessment Input Databases), which presents the PA (Performance Assessment) input data sets on the web; and a QA system recording those data. All information is integrated into the Cyber R&D Platform so that every item in the system can be checked whenever necessary. For a more user-friendly system, an upgrade including an input data and documentation package is under development. In the next R&D phase, the Cyber R&D Platform will be connected with the assessment tool for TSPA, so that all information can be searched in one unified system.


Development of an Intelligent ATP System Using a Genetic Algorithm (유전 알고리듬을 적용한 지능형 ATP 시스템 개발)

  • Kim, Tai-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.131-145
    • /
    • 2010
  • The framework for making coordinated decisions for large-scale facilities has become an important issue in supply chain (SC) management research. The competitive business environment requires companies to continuously search for ways to achieve high efficiency and lower operational costs. In the area of production/distribution planning, many researchers and practitioners have developed and evaluated deterministic models to coordinate important and interrelated logistic decisions such as capacity management, inventory allocation, and vehicle routing. They initially investigated the various processes of the SC separately and later became more interested in problems encompassing the whole SC system. The accurate quotation of ATP (Available-To-Promise) plays a very important role in enhancing customer satisfaction and maximizing the fill rate. The complexity of an intelligent manufacturing system, which includes all the linkages among procurement, production, and distribution, makes accurate ATP quotation quite difficult. Moreover, various alternative models for an ATP system with time lags have been developed and evaluated, and in most cases these models have assumed that the time lags are integer multiples of a unit time grid. However, integer time lags are very rare in industry practice, so models developed using integer time lags only approximate real systems, and the differences introduced by this approximation frequently result in significant accuracy degradation. To introduce the ATP model with time lags, we first introduce the dynamic production function. Hackman and Leachman's dynamic production function initiated the research directly related to the topic of this paper.
They propose a modeling framework for systems with non-integer time lags and show how to apply the framework to a variety of systems, including continuous time series, manufacturing resource planning, and the critical path method. Their formulation requires no additional variables or constraints and is capable of representing real-world systems more accurately. Previously, to cope with non-integer time lags, modelers usually either rounded lags to the nearest integers or subdivided the time grid so that the lags became integer multiples of the grid. Each approach has a critical weakness: the first underestimates lead times, potentially leading to infeasibilities, or overestimates them, potentially resulting in excessive work-in-process; the second drastically inflates the problem size. We consider an optimized ATP system with non-integer time lags in supply chain management, focusing on a globally networked system of a worldwide headquarters, distribution centers, and manufacturing facilities. We develop a mixed integer programming (MIP) model for the ATP process, which defines the required data flow; the illustrative ATP module shows that the proposed system has a substantial effect in SCM. The system we consider is composed of multiple production facilities with multiple products, multiple distribution centers, and multiple customers, and for this system we address an ATP scheduling and capacity allocation problem. In this study, we propose a model for the ATP system in SCM using the dynamic production function with non-integer time lags. The model is developed under a framework suitable for non-integer lags and is therefore more accurate than the models we usually encounter. We developed an intelligent ATP system for this model using a genetic algorithm.
We focus on a capacitated production planning and capacity allocation problem, develop a mixed integer programming model, and propose an efficient heuristic procedure using an evolutionary system to solve it. This method makes it possible for the population to reach an approximate solution easily. Moreover, we designed and utilized a representation scheme that allows the proposed models to represent real-valued variables. The proposed regeneration procedures, which evaluate each infeasible chromosome, make the solutions converge to the optimum quickly.
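The evolutionary scheme just described, a real-valued representation with regeneration of infeasible chromosomes, can be illustrated with a toy capacity-allocation sketch. The data, operators, and parameters below are our own illustrations, not the paper's model: each chromosome allocates a single facility's capacity across orders, infeasible allocations are repaired back into capacity, and fitness rewards the total quantity promised.

```python
import random

random.seed(1)

CAPACITY = 100.0
DEMAND = [30.0, 45.0, 50.0]        # hypothetical order quantities to promise

def repair(chrom):
    """Regeneration: clamp genes to [0, demand] and scale an infeasible
    allocation back into the capacity limit instead of discarding it."""
    chrom = [min(max(g, 0.0), d) for g, d in zip(chrom, DEMAND)]
    total = sum(chrom)
    if total > CAPACITY:
        chrom = [g * CAPACITY / total for g in chrom]
    return chrom

def fitness(chrom):
    return sum(chrom)              # total promised quantity

def evolve(pop_size=30, generations=60, mut=0.2):
    pop = [repair([random.uniform(0, d) for d in DEMAND]) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]                                   # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)           # select among the fittest
            w = random.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            child = [g + random.gauss(0, 5.0) if random.random() < mut else g
                     for g in child]                    # real-valued mutation
            nxt.append(repair(child))
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print([round(g, 1) for g in best], "total promised:", round(sum(best), 1))
```

Because total demand (125) exceeds capacity (100), the repair step keeps every chromosome feasible while the GA pushes the promised total toward the capacity limit, mirroring how regeneration speeds convergence in the paper's procedure.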

An Oceanic Current Map of the East Sea for Science Textbooks Based on Scientific Knowledge Acquired from Oceanic Measurements (해양관측을 통해 획득된 과학적 지식에 기반한 과학교과서 동해 해류도)

  • Park, Kyung-Ae;Park, Ji-Eun;Choi, Byoung-Ju;Byun, Do-Seong;Lee, Eun-Il
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.18 no.4
    • /
    • pp.234-265
    • /
    • 2013
  • Oceanic current maps in secondary school science and earth science textbooks have played an important role in piquing students' curiosity about and interest in the ocean. Such maps can provide students with important opportunities to learn about oceanic currents relevant to abrupt climate change and global energy balance issues. Nevertheless, serious and diverse errors in these secondary school oceanic current maps have been discovered upon comparison with up-to-date scientific knowledge concerning oceanic currents. This study presents fundamental methods and strategies for constructing such maps error-free, through the unification of the diverse current maps currently in the textbooks. To do so, we analyzed the maps found in 27 different textbooks, compared them with up-to-date maps from scientific journals, and developed a mapping technique for extracting digitized quantitative information on the warm and cold currents in the East Sea. We devised analysis items for current visualization in relation to the branching features of the Tsushima Warm Current (TWC) in the Korea Strait. These analysis items include its nearshore and offshore branches; the northern limit of, and distance from the coast of, the East Korea Warm Current; the outflow features of the TWC near the Tsugaru and Soya Straits and their returning currents; and the flow patterns of the Liman Cold Current and the North Korea Cold Current. The first draft of the current map was constructed based on the scientific knowledge and input of oceanographers, grounded in oceanic in-situ measurements, and was corrected with the help of a questionnaire survey of the members of an oceanographic society.
In addition, diverse comments were collected from a special session of the 2013 spring meeting of the Korean Oceanographic Society to assist in the construction of an accurate current map of the East Sea, which was corrected repeatedly through in-depth discussions with oceanographers. Finally, we obtained constructive comments on and evaluations of the interim version of the current map from several well-known ocean current experts and incorporated their input to complete the map's final version. To help avoid errors in the production of oceanic current maps in future textbooks, we provide the geolocation information (latitude and longitude) of the currents by digitizing the map. This study is expected to be the first step towards the completion of an oceanographic current map suitable for secondary school textbooks, and to encourage oceanographers to take more interest in ocean education.