• Title/Summary/Keyword: User Function


Effectiveness of Leading Light by Reflecting the Characteristics of Marine Traffic at Gamcheon Port (감천항 선박교통 특성을 반영한 도등 효용성 분석)

  • Shin-Young Ha;Seung-gi Gug
    • Journal of Navigation and Port Research
    • /
    • v.48 no.3
    • /
    • pp.232-238
    • /
    • 2024
  • This study examines the effectiveness of Gamcheon Port's leading lights in light of the characteristics of ship traffic entering the port. The leading lights of Gamcheon Port were proposed and installed in 1996, during the basic design process of supplementing the port's route aids for the entry and exit of 4,000 TEU container ships. Since then, they have been improved to accommodate the entry of 50,000 DWT general cargo ships and to reflect the crane height of Hanjin Pier, as a result of a review study conducted by the Busan Regional Maritime Affairs and Fisheries Administration on improving the harbor tranquility of Gamcheon Port by relocating existing outer facilities. However, an analysis of the current characteristics of marine traffic at Gamcheon Port reveals that traffic congestion is low and traffic flows smoothly, and that the proportion of small and medium-sized ships under 10,000 tons is higher than that of large ships, so the utility of leading lights intended for the entry of large ships has decreased. Nevertheless, considering that the entry ratio of ships of 30,000 tons or more is growing at a CAGR of 8.45%, preparation for the anticipated increase in the proportion of large ships entering the port is necessary, and it is preferable to maintain the function of the leading lights at the entrance to Gamcheon Port rather than demolish them. The narrowness of the Gamcheon Port fairway poses a higher risk of collision when entering and exiting ships meet, which can burden the navigator. Therefore, while maintaining the function of the leading lights, it is possible to relocate them to reduce the maintenance burden and install a direction light in their place.
When installing the direction light, it is worth considering Double Sector Lights instead of the Single Sector Lights currently installed at nearby Busan Bukhang Port, as the former provide a clearer centerline and reduce the difficulty of distinguishing between sectors, improving user satisfaction.
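
The growth figure above is a compound annual growth rate (CAGR). As a quick illustration of how such a rate is computed (the numbers below are made up, not taken from the study):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two observations `years` apart."""
    return (end_value / start_value) ** (1 / years) - 1

# Illustrative figures only: an entry ratio growing from 10% to 15% in 5 years.
print(f"{cagr(0.10, 0.15, 5):.2%}")
```

A rate of 8.45% sustained over a decade would more than double the share of large ships, which supports the case for keeping the leading lights.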

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to handle with the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, and can flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data.
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that the system can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases make it difficult to expand nodes when rapidly growing data must be distributed across various nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies them according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and per type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through a MongoDB log data insertion performance evaluation over various chunk sizes.
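
The routing step performed by the log collector module — classify each record by log type, then send real-time types to the MySQL module and the rest to the MongoDB module — can be sketched in a few lines of Python. The log type names here are invented for illustration; the paper does not enumerate the banks' actual log types:

```python
# Hypothetical log types, invented for illustration: real-time analysis
# types go to the relational (MySQL) module, all others to MongoDB.
REALTIME_TYPES = {"transaction", "auth_failure"}

def route_log(record: dict) -> str:
    """Return the destination module for one classified log record."""
    if record.get("type") in REALTIME_TYPES:
        return "mysql"    # real-time graphing path
    return "mongodb"      # aggregated, Hadoop-analyzed batch path

logs = [
    {"type": "transaction", "msg": "transfer ok"},
    {"type": "page_view", "msg": "/loans"},
]
print([route_log(r) for r in logs])  # ['mysql', 'mongodb']
```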

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare some of the deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense the predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. The criterion is simply based on the lengths of the codes; the learning curve and the ease of coding are not the main concern.
According to these criteria, Theano was the most difficult to implement with, while CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason CNTK and Tensorflow are easier to implement with is that those frameworks provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or any new search method that we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the code written in CNTK had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. But we concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate each framework. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers is also important. And for someone learning deep learning models, the availability of sufficient examples and references matters as well.
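
The automatic differentiation mechanism common to all three frameworks — a computational graph whose per-edge partial derivatives are combined by the chain rule — can be sketched with a toy reverse-mode implementation in plain Python (this is our minimal illustration, not code from Theano, Tensorflow, or CNTK):

```python
class Node:
    """A scalar node in a computational graph with reverse-mode autodiff."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent_node, local_partial) pairs
        self.grad = 0.0

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a  -> edge-local partial derivatives
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def backward(self, seed=1.0):
        # Chain rule: push the upstream gradient along each incoming edge.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Node(3.0)
y = Node(4.0)
z = x * y + x          # z = x*y + x, so dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

Real frameworks apply the same idea on tensors, with a topological sweep instead of this naive recursion.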

A Time Series Analysis of Urban Park Behavior Using Big Data (빅데이터를 활용한 도시공원 이용행태 특성의 시계열 분석)

  • Woo, Kyung-Sook;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.48 no.1
    • /
    • pp.35-45
    • /
    • 2020
  • This study focused on the park as a space supporting the behavior of urban citizens in modern society. Modern city parks do not play one fixed role; because they are used by many people, their function and meaning may change depending on users' behavior. In addition, online data may now influence which parks people visit and how they use them. Therefore, this study analyzed the change of behavior in Yeouido Park, Yeouido Hangang Park, and Yangjae Citizen's Forest from 2000 to 2018 through a time series analysis. The analysis used Big Data techniques such as text mining and social network analysis. The summary of the study is as follows. The usage behavior of Yeouido Park changed over time from "Ride" (dynamic behavior) in the first period (I), to "Take" (information and communication service behavior) in the second period (II), "See" (communicative behavior) in the third period (III), and "Eat" (energy source behavior) in the fourth period (IV). In the case of Yangjae Citizens' Forest, the usage behavior changed over time from "Walk" (dynamic behavior) in the first, second, and third periods (I, II, III) to "Play" (dynamic behavior) in the fourth period (IV). Looking at the factors affecting behavior, Yeouido Park had more varied factors related to sports, leisure, culture, art, and spare time than Yangjae Citizens' Forest, whereas the main usage behavior of Yangjae Citizens' Forest was shaped by its various natural resources. Second, the behavior in the target areas became focused on certain main behaviors over time, which in turn selected or limited future behaviors. These results indicate that the spaces and facilities of the target areas have not been utilized evenly: instead of diverse behaviors, a certain main behavior dominated each target area.
This study is significant in that it analyzes the usage of urban parks using Big Data techniques and finds that urban parks have been transformed into play spaces where consumption takes place, beyond their role as places of rest and walking. The behavior occurring in modern urban parks is changing in both quantity and content. Therefore, through discussion based on the behavior data collected through Big Data, we can better understand how citizens are using city parks. This study also found that the static behaviors in both parks had a great impact on other behaviors.
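
A minimal sketch of the kind of text-mining step used in such a study — counting behavior keywords per period to find the dominant one — is shown below with invented toy posts; the study's actual corpus and Korean-language processing are not reproduced here:

```python
from collections import Counter

# Toy posts per period, invented for illustration; real input would be
# crawled blog/SNS text about each park.
posts_by_period = {
    "I":  ["ride bike in park", "ride along the river"],
    "IV": ["eat snacks in park", "eat and see the night view"],
}

def top_behavior(posts: list) -> str:
    """Return the most frequent token as the period's dominant keyword."""
    words = Counter(w for p in posts for w in p.split())
    return words.most_common(1)[0][0]

print({period: top_behavior(p) for period, p in posts_by_period.items()})
```

In the actual study this frequency step would be followed by social network analysis of keyword co-occurrence, as the abstract describes.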

The Research on the Life-safety Implementation using the Natural Light LED Lamp in the Disaster Prevention and Safety Management (방재안전 자연광 LED 조명을 이용한 생활안전 개선에 관한 연구)

  • Lee, Taeshik;Seok, Gumcheul;So, Yooseb;Choi, Byungshik;Kim, Jaekwon;Cho, Woncheol
    • Journal of Korean Society of Disaster and Security
    • /
    • v.9 no.2
    • /
    • pp.53-62
    • /
    • 2016
  • This paper presents a new method of using LED lighting in which the lighting environment is upgraded with natural-light LEDs in the area of disaster prevention and safety management (PDSD), with the aims of reducing deaths from suicide, infectious disease, safety accidents, traffic accidents, crime, fire, and natural disasters, and of improving health, the environment, and safety through the value of colored LED light. The research findings include the following. For three weeks in November 2016, PDSD natural-light LED lighting was installed, at the users' request, in the bedrooms or living rooms of ten residents with depressive symptoms in the northern part of Seoul (average residence 28.7 years, average age 67.5 years). Expectations of restored physical and mental stability were high (88%). Physical and mental changes were then compared after one week and after three weeks: the depression-relieving effect was high, at 84% in the first week and 90% from the third week onward. We conclude that the patients with depression slept better and gained a sense of security and well-being. PDSD LED lighting is expected to contribute in various areas: reducing suicides, increasing immunity against infectious diseases, reducing accidents caused by negligence, improving the safe operation of heavy vehicles at points with the highest traffic accident incidence, improving the 'safe way home' function for community safety, alleviating the emotional anxiety of firefighters at fires, and improving the environment of control rooms during long decision-making sessions caused by natural disasters.

Development of Forest Road Network Model Using Digital Terrain Model (수치지형(數値地形)모델을 이용(利用)한 임도망(林道網) 배치(配置)모델의 개발(開發))

  • Lee, Jun Woo
    • Journal of Korean Society of Forest Science
    • /
    • v.81 no.4
    • /
    • pp.363-371
    • /
    • 1992
  • This study aimed at developing a computer model to determine rational road networks in mountainous forests. The computer model is composed of two major subroutines, for digital terrain analysis and for route selection. The digital terrain model (DTM) provides various information on the topographic and vegetative characteristics of forest stands. The DTM also evaluates the effectiveness of road construction based on slope gradients. Using the results of the digital terrain analysis, the route selection subroutine heuristically determines the optimal road layout satisfying the predefined road densities. The route selection subroutine uses an area-partitioning method in order to distribute roads evenly; this method leads to unbiased road layouts in forest areas. The size of the unit partitioned area can be calculated as a function of the predefined road density. In addition, the user-defined road density of the area-partitioning method provides flexibility in applying the model to real situations. A rational road network can easily be achieved for varying road densities, which is an essential element of forest road network design. The optimality conditions are evaluated in conjunction with longitudinal gradients, investment efficiency, earthwork quantity, or a mixed criterion of the three. The performance of the model was measured and then compared with those of conventional models in terms of average skidding distance, accessibility of stands, development index, and circulated road network index. The results of the performance analysis indicate that selecting road routes using the digital terrain analysis and the area-partitioning method improves the performance of the network design model.
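
The dependence of the unit partitioned area on the predefined road density can be illustrated with a small calculation. Road density is road length per area (m/ha); parallel roads spaced s metres apart yield a density of 10000/s m/ha, so the spacing is s = 10000/d. The square-cell assumption below is ours, for illustration; the paper's exact partitioning formula is not reproduced:

```python
def road_spacing_m(density_m_per_ha: float) -> float:
    """Theoretical spacing (m) between parallel roads at a given density.

    density = road length / area; 1 ha = 10,000 m^2, so parallel roads
    spaced s metres apart yield a density of 10000/s m/ha.
    """
    return 10_000 / density_m_per_ha

def unit_partition_area_ha(density_m_per_ha: float) -> float:
    """Area (ha) of a square partition cell whose side equals the spacing
    (a simplifying assumption, for illustration only)."""
    side = road_spacing_m(density_m_per_ha)
    return side * side / 10_000

print(road_spacing_m(25))          # 400.0 m between roads at 25 m/ha
print(unit_partition_area_ha(25))  # 16.0 ha per cell
```

Doubling the target density halves the spacing and quarters the cell area, which is why the user-defined density gives the layout its flexibility.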


Function of the Korean String Indexing System for the Subject Catalog (주제목록을 위한 한국용어열색인 시스템의 기능)

  • Yoon Kooho
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.15
    • /
    • pp.225-266
    • /
    • 1988
  • Various theories and techniques for the subject catalog have been developed since Charles Ammi Cutter first tried to formulate rules for the construction of subject headings in 1876. However, they do not seem to be appropriate to the Korean language, because its syntax and semantics differ from those of English and other European languages. This study therefore attempts to develop a new Korean subject indexing system, the Korean String Indexing System (KOSIS), in order to increase the use of subject catalogs. For this purpose, the advantages and disadvantages of the classed subject catalog and the alphabetical subject catalog, the typical subject catalogs in libraries, are investigated, and the most notable subject indexing systems, in particular PRECIS, developed by the British National Bibliography, are reviewed and analysed. KOSIS is a string indexing system based purely on the syntax and semantics of the Korean language, even though considerable principles of PRECIS are applied to it. The outlines of KOSIS are as follows: 1) KOSIS is based on the fundamentals of natural language and an ingenious conjunction of human indexing skills and computer capabilities. 2) KOSIS is a string indexing system based on the 'principle of context-dependency.' A string of terms organized according to this principle shows remarkable affinity with certain patterns of words in ordinary discourse; from that point onward, natural language rather than classificatory terms becomes the basic model for indexing schemes. 3) KOSIS uses 24 role operators. One or more operators should be allocated to the index string, which is organized manually by the indexer's intellectual work, in order to establish the most explicit syntactic relationship of index terms.
4) Traditionally, a single-line entry format is used, in which a subject heading or index entry is presented as a single sequence of words consisting of the entry terms plus, in some cases, an extra qualifying term or phrase. KOSIS, however, employs a two-line entry format which contains three basic positions for the production of index entries: the 'lead' serves as the user's access point, the 'display' contains those terms which are themselves context-dependent on the lead, and the 'qualifier' sets the lead term into its wider context. 5) Each KOSIS entry is co-extensive with the initial subject statement prepared by the indexer, since it displays all the subject specificities. Compound terms are always presented in their natural-language order, and inverted headings are not produced in KOSIS; consequently, the precision ratio of information retrieval can be increased. 6) KOSIS uses 5 relational codes for the system of references among semantically related terms. Semantically related terms are handled by a different set of routines, leading to the production of 'see' and 'see also' references. 7) KOSIS was originally developed for a classified catalog system, which requires a subject index, that is, an index which 'translates' subjects expressed in natural language into the appropriate classification numbers. However, KOSIS can also be used for a dictionary catalog system. Accordingly, KOSIS strings can be manipulated to produce either appropriate subject indexes for a classified catalog system or acceptable subject headings for a dictionary catalog system. 8) KOSIS is able to maintain consistency of index entries and cross references by means of routine identification of the established index strings and reference system. For this purpose, an individual Subject Indicator Number and Reference Indicator Number is allocated to each new index string and each new index term, respectively.
9) KOSIS can produce all the index entries, cross references, and authority cards by either manual or mechanical methods. Thus, detailed algorithms for the machine production of various outputs are provided for institutions with access to computer facilities.
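
The two-line entry format (lead / qualifier / display) can be illustrated with a small rotation routine in the spirit of PRECIS-style string indexing: each term of the indexer's context-dependent string takes a turn as the lead, the terms before it form the qualifier (its wider context), and the terms after it form the display. This is a simplified sketch, not KOSIS's actual algorithm or its role-operator handling:

```python
def two_line_entries(terms: list) -> list:
    """Generate (lead, qualifier, display) entries from a context-
    dependent string of terms, widest context last in the qualifier."""
    entries = []
    for i, lead in enumerate(terms):
        qualifier = ". ".join(reversed(terms[:i]))  # wider context of the lead
        display = ". ".join(terms[i + 1:])          # terms dependent on the lead
        entries.append((lead, qualifier, display))
    return entries

# Toy index string, invented for illustration.
for lead, qualifier, display in two_line_entries(["Korea", "Libraries", "Cataloguing"]):
    print(f"{lead} — {qualifier}")
    print(f"    {display}")
```

Each rotation yields one access point, so a user can reach the same subject statement from any of its terms.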


A Study on the Deduction of the Forest Play Activity and Space through Preschooler Participatory Workshop (유아참여 워크숍을 통한 숲놀이 활동 및 공간 요소의 도출에 관한 연구)

  • Kang, Taesun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.46 no.5
    • /
    • pp.69-81
    • /
    • 2018
  • Recently, user participatory workshops have been applied as a way to plan landscape spaces that reflect the needs and demands of users. Such an approach is also needed to improve the quality of the FECC (Forest Experience Center for Children), which is growing rapidly in number. Therefore, the purpose of this study is to derive the design elements (forest play activities and spaces) and the basic needs and demands of users for an FECC through a preschooler participatory workshop. For this, materials suitable for preschooler participation were selected, and a step-by-step workshop was conducted to match the preschoolers' developmental stage. First, in the pre-workshop phase, standard types of design elements were derived from the results of preschooler participation (41 kindergarten children aged 6 and 7). Second, in the main workshop phase, the design elements to be introduced on the site (Songsan-mulbit FECC) were derived through the participating preschoolers' selections, and the results were analyzed. The material used in the pre-workshop phase was 'drawing a picture'; in the main workshop, the standard-type charts of design elements were a forest play activity pictogram chart and a chart of general images of forest play spaces. As for the results, first, 38 standard types of forest play activities were derived, consisting of 27 cognitive play activities (16 functional, 4 constructive, 4 symbolic, 3 games with rules), 9 other play activities (5 sensory, 4 other), and 2 social play activities (solo, group). There were 21 standard types of forest play spaces, consisting of 8 play facility spaces (5 facility, 3 natural), 2 water spaces, and 11 other spaces of 5 types. Second, when these results were applied to the site, the forest play activities to be introduced were selected, with functional play selected most often; climbing and water play were the most selected unit activities.
Also, functional, constructive, symbolic, and rule-based games were all selected, consistent with the preschoolers' developmental play types. For the forest play spaces to be introduced on the site, the preschoolers' selections by sex and age tended to be similar to their overall selections, but boys preferred functional and adventure spaces more than girls, while girls preferred rest spaces more than boys. These results are similar to those of a previous study that directly observed preschoolers' forest play behavior, and they suggest that the preschoolers understood the site and selected the design elements to be introduced on it. Therefore, the participatory workshop process and materials used here served the purpose of the study, and this work is valuable as a case to be applied in the design of FECCs from this point forward.

The Brassica rapa Tissue-specific EST Database (배추의 조직 특이적 발현유전자 데이터베이스)

  • Yu, Hee-Ju;Park, Sin-Gi;Oh, Mi-Jin;Hwang, Hyun-Ju;Kim, Nam-Shin;Chung, Hee;Sohn, Seong-Han;Park, Beom-Seok;Mun, Jeong-Hwan
    • Horticultural Science & Technology
    • /
    • v.29 no.6
    • /
    • pp.633-640
    • /
    • 2011
  • Brassica rapa is the A-genome model species for Brassica crop genetics, genomics, and breeding. With the completion of sequencing of the B. rapa genome, functional analysis of the genome is the forthcoming issue. Expressed sequence tags (ESTs) are fundamental resources supporting annotation and functional analysis of the genome, including identification of tissue-specific genes and promoters. As of July 2011, 147,217 ESTs from 39 cDNA libraries of B. rapa have been reported in the public database. However, little information can be retrieved from the sequences due to the lack of organized databases. To leverage the sequence information and to maximize the use of publicly available EST collections, the Brassica rapa tissue-specific EST database (BrTED) was developed. BrTED includes sequence information of 23,962 unigenes assembled by the StackPack program. The unigene set is used as a query unit for various analyses, such as BLAST against the TAIR gene models, functional annotation using MIPS and UniProt, gene ontology analysis, and prediction of tissue-specific unigene sets based on a statistical test. The database is composed of two main units: an EST sequence processing and information retrieval unit, and a tissue-specific expression profile analysis unit. Information and data in both units are tightly interconnected through a web-based browsing system. RT-PCR evaluation of 29 selected unigene sets successfully amplified amplicons from the target tissues of B. rapa. BrTED allows the user to identify and analyze the expression of genes of interest and aids efforts to interpret the B. rapa genome through functional genomics. In addition, it can be used as a public resource providing reference information for the study of the genus Brassica and other closely related crucifer crops.
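
A statistical test commonly used to call tissue-specific genes from EST library counts is the Audic–Claverie test; the sketch below computes its per-count probability with log-gamma arithmetic for numerical stability. The choice of this particular test, and the counts used, are our illustration; the abstract does not specify which test BrTED applies:

```python
import math

def audic_claverie_p(x: int, y: int, n1: int, n2: int) -> float:
    """P(y counts in library 2 | x counts in library 1), after
    Audic & Claverie (1997); n1, n2 are the libraries' total EST counts.

    p(y|x) = (n2/n1)^y * (x+y)! / (x! * y! * (1 + n2/n1)^(x+y+1)),
    computed in log space to avoid overflow for large counts.
    """
    r = n2 / n1
    log_p = (y * math.log(r)
             + math.lgamma(x + y + 1)
             - math.lgamma(x + 1)
             - math.lgamma(y + 1)
             - (x + y + 1) * math.log(1 + r))
    return math.exp(log_p)

# A unigene seen 20 times in a 10,000-EST root library but only once in a
# 10,000-EST leaf library: a small probability flags a candidate
# tissue-specific gene (illustrative numbers only).
print(audic_claverie_p(20, 1, 10_000, 10_000))
```

In practice these per-count probabilities are accumulated into a p-value and corrected for multiple testing before a unigene is declared tissue-specific.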

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.241-254
    • /
    • 2011
  • Financial time-series forecasting is one of the most important problems because it is essential for the risk management of financial institutions. Researchers have therefore tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, and k-nearest neighbor. Recently, support vector machines (SVMs) have been popular in this research area because they do not require huge training data and have a low possibility of overfitting. However, a user must determine several design factors heuristically in order to use an SVM; the selection of an appropriate kernel function and its parameters, and proper feature subset selection, are the major ones. Beyond these factors, proper selection of an instance subset may also improve the forecasting performance of an SVM by eliminating irrelevant and distorting training instances. Nonetheless, few studies have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection tries to choose proper instance subsets from the original training data. It may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and the kernel parameters simultaneously. We call the model ISVM (SVM with instance selection). Experiments on stock market data are implemented using ISVM. The GA searches for optimal or near-optimal values of the kernel parameters and relevant instances for the SVM, so the chromosome contains two sets of codes: one for the kernel parameters and one for instance selection.
For the control parameters of the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1; as the stopping condition, 50 generations are permitted. The application data consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI), for a total of 2,218 trading days. We separate the data into training, test, and hold-out subsets of 1,056, 581, and 581 samples, respectively. This study compares ISVM to several comparative models, including logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with kernel parameters optimized by the genetic algorithm (PSVM). The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data. For ISVM, only 556 of the 1,056 original training instances are used to produce this result. In addition, a two-sample test for proportions is used to examine whether ISVM significantly outperforms the other models: ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and performs better than Logit, SVM, and PSVM at the 5% level.
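
The chromosome described above — kernel parameter codes plus an instance-selection code — can be sketched as a list holding C, gamma, and a 0/1 mask over training instances, evolved with the paper's GA control parameters (population 50, crossover 0.7, mutation 0.1, 50 generations). To keep the example self-contained we plug in a trivial stand-in fitness; in ISVM the fitness would be the validation accuracy of an SVM trained on the selected instances with the decoded kernel parameters:

```python
import random

random.seed(0)
N_INSTANCES = 20          # toy training-set size, for illustration

def decode(chrom):
    """Split a chromosome into (C, gamma, instance mask)."""
    return chrom[0], chrom[1], chrom[2:]

def fitness(chrom):
    # Stand-in objective: in ISVM this would be SVM hold-out accuracy.
    c, gamma, mask = decode(chrom)
    return -(c - 10) ** 2 - (gamma - 0.1) ** 2 + 0.01 * sum(mask)

def random_chrom():
    return [random.uniform(0.1, 100),          # kernel parameter C
            random.uniform(0.001, 1.0),        # kernel parameter gamma
            *(random.randint(0, 1) for _ in range(N_INSTANCES))]

def evolve(pop_size=50, generations=50, cx_rate=0.7, mut_rate=0.1):
    """GA loop using the paper's control parameters (50/50/0.7/0.1)."""
    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]         # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a)) if random.random() < cx_rate else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < mut_rate:     # flip one instance-mask bit
                i = random.randrange(2, len(child))
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
c, gamma, mask = decode(best)
print(round(c, 1), round(gamma, 2), sum(mask), "instances kept")
```

Encoding the instance mask in the same chromosome as the kernel parameters is what lets the GA optimize both simultaneously, as the abstract describes.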