
Pallet Size Optimization for Special Cargo based on Neighborhood Search Algorithm (이웃해 탐색 알고리즘 기반의 특수화물 팔레트 크기 최적화)

  • Hyeon-Soo Shin;Chang-Hyeon Kim;Chang-Wan Ha;Hwan-Seong Kim
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2023.05a
    • /
    • pp.250-251
    • /
    • 2023
  • The pallet, typically a form of tertiary packaging, is a flat structure used as a base for the unitization of goods in the supply chain. Standard pallets such as the T-11 and T-12 are used throughout the logistics industry to reduce cost and enhance the efficiency of transportation. However, special cargo often cannot be handled on a standard pallet because of its size and weight, so many companies have developed and now use their own customized pallets. This study therefore suggests a pallet size optimization method that calculates the optimal pallet size, minimizing the loss of space on the pallet. The main input features are the specifications and storage quantity of each cargo item, and an optimization method based on a modified neighborhood search algorithm calculates the optimal pallet size. To verify the optimality of the developed algorithm, a comparative analysis was conducted through simulation.

  • PDF
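The abstract above describes the optimization loop only at a high level; a minimal Python sketch of a neighborhood search over pallet dimensions, under an assumed and deliberately simplified grid-packing cost (none of this is the authors' actual model), might look like:

```python
def wasted_area(pw, pd, cw, cd, qty):
    """Wasted area when qty boxes of size cw x cd are grid-packed
    on a pw x pd pallet (a deliberately simple packing model)."""
    fit = min((pw // cw) * (pd // cd), qty)
    return pw * pd - fit * cw * cd

def neighborhood_search(cw, cd, qty, start=(120, 100), step=5, iters=200):
    """Greedy neighborhood search: move to the best neighboring pallet size
    (width/depth changed by +-step) while wasted area keeps decreasing."""
    best = start
    best_cost = wasted_area(*best, cw, cd, qty)
    for _ in range(iters):
        neighbors = [(best[0] + dw, best[1] + dd)
                     for dw in (-step, 0, step) for dd in (-step, 0, step)
                     if (dw, dd) != (0, 0)
                     and best[0] + dw >= cw and best[1] + dd >= cd]
        cand = min(neighbors, key=lambda s: wasted_area(*s, cw, cd, qty))
        cost = wasted_area(*cand, cw, cd, qty)
        if cost >= best_cost:
            break  # local optimum: no neighbor improves
        best, best_cost = cand, cost
    return best, best_cost
```

The paper's "modified" algorithm and its multi-cargo input are not public; this sketch only conveys the move-and-improve structure of a neighborhood search.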

Automated Story Generation with Image Captions and Recursive Calls (이미지 캡션 및 재귀호출을 통한 스토리 생성 방법)

  • Isle Jeon;Dongha Jo;Mikyeong Moon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.24 no.1
    • /
    • pp.42-50
    • /
    • 2023
  • The development of technology has brought digital innovation throughout the media industry, including production and editing technologies, and has diversified how consumers view content through OTT services and the streaming era. The convergence of big data and deep learning networks has enabled automatic text generation in formats such as news articles, novels, and scripts, but few studies have reflected the author's intention and generated contextually smooth stories. In this paper, we describe the flow of pictures in a storyboard using image caption generation techniques, and automatically generate story-tailored scenarios through a language model. Using image captioning based on a CNN and an attention mechanism, we generate sentences describing the pictures on the storyboard, and input the generated sentences into KoGPT-2, a natural language processing model, to automatically generate scenarios that meet the planning intention. This approach mass-produces scenarios customized to the author's intention and story, easing the burden of content creation, and lets artificial intelligence participate in the overall process of digital content production to activate media intelligence.
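The recursive-call generation scheme described above can be sketched as follows; the `toy_lm` stand-in replaces KoGPT-2, and the function names are hypothetical, not the paper's:

```python
def generate_scenario(captions, language_model):
    """Feed each caption, together with the previously generated passage,
    back into the language model -- the recursive-call idea the paper
    applies with KoGPT-2 (stubbed here with a toy model)."""
    story, context = [], ""
    for caption in captions:
        prompt = (context + " " + caption).strip()
        passage = language_model(prompt)  # KoGPT-2 stand-in
        story.append(passage)
        context = passage  # generated output becomes the next call's input
    return " ".join(story)

def toy_lm(prompt):
    # Hypothetical stand-in: a real system would query KoGPT-2 here.
    return prompt + " ..."
```

In a real pipeline the captions would come from the CNN-plus-attention captioning model, and `language_model` would wrap a KoGPT-2 inference call.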

Analysis of Color Distortion in Hazy Images (안개가 포함된 영상에서의 색 왜곡 특성 분석)

  • JeongYeop Kim
    • Journal of Platform Technology
    • /
    • v.11 no.6
    • /
    • pp.68-78
    • /
    • 2023
  • In this paper, the color distortion in images containing haze is analyzed. When haze is present in a scene, the color signal reflected from the scene is distorted by the transmittance variation introduced by the haze component. When the influence of haze is removed by a conventional de-hazing method, the color distortion tends not to be sufficiently resolved. Khoury et al. used the dark channel prior technique, a haze model cited in many studies, to determine the degree of color distortion; however, they only confirmed the tendency of distortion, such as color error values, and did not perform a specific analysis of the color distortion. This paper analyzes the characteristics of the color distortion and proposes a restoration method that can reduce it. The input images in the database used by Khoury et al. include the Macbeth color checker, a standard color tool. Using the Macbeth color checker's color values, the color distortion under changing haze concentration was analyzed, and a new color distortion model was proposed through modeling. The proposed method obtains a mapping function from the step-by-step change in chromaticity with haze concentration and the colors of the ground truth. Since the form of the color distortion varies from step to step in proportion to the haze concentration, an integrated mapping function that operates stably at all steps is required. In this paper, the improvement in color distortion achieved by the proposed method was estimated with the angular error, and an improvement effect of about 15% over the conventional method was verified.

  • PDF
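The angular error used above to quantify the improvement is a standard color-constancy metric; it can be computed as below (the tuple-based RGB representation is an assumption for illustration):

```python
import math

def angular_error(rgb1, rgb2):
    """Angular error in degrees between two RGB vectors: the angle between
    them, ignoring magnitude, so pure brightness changes score zero."""
    dot = sum(a * b for a, b in zip(rgb1, rgb2))
    n1 = math.sqrt(sum(a * a for a in rgb1))
    n2 = math.sqrt(sum(b * b for b in rgb2))
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for float safety
    return math.degrees(math.acos(cos))
```

Averaging this error over the Macbeth patches, before and after restoration, is how an "about 15%" improvement claim would typically be evaluated.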

A Study on Animation Character Face Design System Based on Physiognomic Judgment of Character Study in the Cosmic Dual Forces and the Five Elements Thoughts (음양오행(陰陽五行)사상의 관상학에 기반한 애니메이션 캐릭터 얼굴 설계 시스템 연구)

  • Hong, Soo-Hyeon;Kim, Jae-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.7
    • /
    • pp.872-893
    • /
    • 2006
  • In this study, the elements of physiognomic judgment of character are classified with regard to form and meaning from a visual perspective, based on the physiognomy of the cosmic dual forces and the Five Elements theory. Individual characters for each type are designed using graphic data, and on that basis a design system for individual characters of each personality type is investigated using a neural network. Faces with O-Haeng (Five Elements) shapes are shown to constitute the system with an error tolerance of ±0.3% for the non-learning input data. For the shapes corresponding to the Chinese characters for tree, fire, soil, gold, and water, the MSE (mean square error) values are 0.3, 0.3, 0.2, 0.5, and 0.2 respectively, which is close to the best on a scoring scale ranging from 0 to 5. This system can therefore be regarded as automatically producing the most accurate facial shape for a character when the desired personality is given as input.

  • PDF
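The per-shape MSE values reported above follow the usual mean-square-error definition, which can be sketched as (the score vectors below are hypothetical, not the study's data):

```python
def mse(predicted, target):
    """Mean squared error between a predicted and a target score vector,
    the per-shape fit measure reported for the five element shapes."""
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(target)
```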

A Study on the Relationship of Learning, Innovation Capability and Innovation Outcome (학습, 혁신역량과 혁신성과 간의 관계에 관한 연구)

  • Kim, Kui-Won
    • Journal of Korea Technology Innovation Society
    • /
    • v.17 no.2
    • /
    • pp.380-420
    • /
    • 2014
  • Employees increasingly need to acquire expert or innovation capability to prepare for ever-growing uncertainties in their operational domains. Despite this, there has not been enough research on how the operational inputs to employees' innovation outcomes, innovation activities such as the acquisition, exercise, and promotion of innovation capability, and the resulting innovation outcomes interact with each other. This is believed to be because most current research on innovation focuses on the country, industry, and corporate levels rather than on an individual corporation's innovation inputs, outcomes, and activities themselves. This study therefore departs from the currently prevalent frames and views on innovation and focuses on the strategic policies required to enhance an organization's innovation capability by quantitatively analyzing employees' innovation outcomes and their most relevant innovation activities. The research model offers both a linear and a structural model of the trio of learning, innovation capability, and innovation outcome, and quantitatively tests the following four hypotheses: Hypothesis 1, different levels of innovation capability produce different innovation outcomes (accepted, p = 0.000 < 0.05); Hypothesis 2, different amounts of learning time produce different innovation capabilities (rejected, p = 0.199, 0.220 > 0.05); Hypothesis 3, different amounts of learning time produce different innovation outcomes (accepted, p = 0.000 < 0.05); Hypothesis 4, innovation capability acts as a significant parameter in the relationship between learning time and innovation outcome (structural modeling test).
The structural model, after the t-tests on Hypotheses 1 through 4, shows that irregular on-the-job training and e-learning directly affect the learning-time factor, while job experience level, employment period, and capability-level measurement directly impact the innovation-capability factor. This is further supported by the finding that patent time directly affects the innovation-capability factor rather than the learning-time factor. Based on the four hypotheses, this study proposes three measures to maximize an organization's innovation outcome: first, frequent irregular on-the-job training based on an e-learning system; second, efficient innovation management of employment period, job skill levels, and the like through active sponsorship and energization of communities of practice (CoP) as a form of irregular learning; and third, an innovation outcome function of the form Yi = f(e, i, s, t, w) + ε, soundly based on a smart system of capability-level measurement. This study considers the innovation outcome function the most appropriate and important reference model.
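The accept/reject decisions on the hypotheses above follow the usual p < 0.05 rule, which can be sketched as follows (the dictionary of p-values simply restates the abstract; the function name is hypothetical):

```python
def decide(p_values, alpha=0.05):
    """Accept a hypothesis when every reported p-value is below alpha,
    mirroring the accept/reject calls stated in the abstract."""
    return {h: all(p < alpha for p in ps) for h, ps in p_values.items()}

# p-values as reported in the abstract
reported = {
    "H1": [0.000],          # capability -> outcome: accepted
    "H2": [0.199, 0.220],   # learning time -> capability: rejected
    "H3": [0.000],          # learning time -> outcome: accepted
}
decisions = decide(reported)
```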

A Dynamic Management Method for FOAF Using RSS and OLAP cube (RSS와 OLAP 큐브를 이용한 FOAF의 동적 관리 기법)

  • Sohn, Jong-Soo;Chung, In-Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.39-60
    • /
    • 2011
  • Since the introduction of Web 2.0 technology, social network services have been recognized as a foundation of important future information technology. The advent of Web 2.0 changed who creates content: in the earlier web, content creators were service providers, whereas in the recent web they are the service users. Users share experiences with other users and improve content quality, thereby increasing the importance of social networks. As a result, diverse forms of social network service have emerged from the relations and experiences of users. A social network is a network that constructs and expresses social relations among people who share interests and activities. Today's social network services have not merely confined themselves to showing user interactions but have developed to a level at which content generation and evaluation interact with each other. As the volume of content generated by social network services and the number of connections between users have drastically increased, social network extraction has become more complicated, and the following problems arise. The first problem lies in the insufficient representational power for objects in the social network. The second is the inability to express the diverse connections among users. The third is the difficulty of reflecting dynamic change in the social network as user interests change. And the last is the lack of a method capable of integrating and processing data efficiently in a heterogeneous distributed computing environment. The first and last problems can be solved by using FOAF, a tool for describing ontology-based user profiles for the construction of social networks. Solving the second and third problems, however, requires a novel technique to reflect dynamic changes in user interests and relations.
In this paper, we propose a method to overcome these problems of existing social network extraction by applying FOAF and RSS to an OLAP system in order to dynamically renew and manage FOAF. We exploit data interoperability, an important characteristic of FOAF. We then use RSS to reflect changes over time and in user interests; RSS is a web content syndication format that provides a standard vocabulary for distributing site contents in RDF/XML form. We collect users' personal information and relations via FOAF, collect user contents via RSS, and insert the collected data into a database organized as a star schema. The proposed system generates an OLAP cube from the data in the database, and the cube is processed by the Dynamic FOAF Management Algorithm, which consists of two functions: find_id_interest() and find_relation(). Find_id_interest() extracts user interests during the input period, and find_relation() extracts users matching those interests. Finally, the proposed system reconstructs FOAF by reflecting the extracted relationships and interests. To justify the suggested idea, we present the implemented result together with its analysis. We used the C# language and an MS-SQL database, with FOAF and RSS data collected from livejournal.com as input. The implemented result shows that users' foaf:interest reached an average increase of 19 percent over four weeks, and in proportion to this change, the number of users' foaf:knows grew an average of 9 percent over the same period.
Since we use FOAF and RSS, which have wide support in Web 2.0 and social network services, as basic data, we have a definite advantage in utilizing user data distributed across diverse web sites and services regardless of language and type of computer. Using the suggested method, better services can be provided that cope with rapid changes in user interests through the automatic updating of FOAF.
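The two functions of the Dynamic FOAF Management Algorithm can be sketched on plain Python dicts standing in for the OLAP cube (the post schema and function bodies are assumptions, not the paper's implementation, which used C# and MS-SQL):

```python
def find_id_interest(posts, user, start, end):
    """Extract a user's interests from posts made during [start, end],
    after the paper's find_id_interest() over the OLAP cube.
    posts: list of dicts with 'user', 'week', and 'tags' keys (assumed schema)."""
    interests = set()
    for p in posts:
        if p["user"] == user and start <= p["week"] <= end:
            interests.update(p["tags"])
    return interests

def find_relation(posts, user, start, end):
    """Find other users sharing at least one interest in the period, after
    the paper's find_relation(); results would update foaf:knows."""
    mine = find_id_interest(posts, user, start, end)
    others = {p["user"] for p in posts if p["user"] != user}
    return {u for u in others if find_id_interest(posts, u, start, end) & mine}
```

Reconstructing FOAF then amounts to writing the extracted interests back as foaf:interest and the matched users as foaf:knows.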

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised models have gained more popularity than unsupervised models such as deep belief networks, because they have shown impressive applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling.
By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields; all the neurons in a hidden layer therefore detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers. What the pooling layers do is simplify the information in the output of the convolutional layer. Recent convolutional architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of unstable gradient problems such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through the layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not only through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from.
It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
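The three convolutional ideas described above -- local receptive fields, shared weights, and pooling -- can be illustrated with a minimal sketch (pure Python for clarity; real systems use optimized libraries):

```python
def conv2d(image, kernel):
    """Valid 2-D convolution with a single shared kernel: every output unit
    applies the SAME weights (shared weights) to its own small patch of the
    input (its local receptive field)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool(feature, size=2):
    """Non-overlapping max pooling: summarizes each size x size patch of the
    feature map by its maximum, simplifying the convolutional output."""
    return [[max(feature[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(feature[0]) - size + 1, size)]
            for i in range(0, len(feature) - size + 1, size)]
```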

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and research related to AI has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, due to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. Recently, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence uses DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, since the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we explain a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of the appropriate sentences from which to extract triples, and value selection and transformation into RDF triple structure. The structures of Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process.
Through this proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort experts must spend constructing instances according to the ontology schema.
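The final step above, turning BIO-tagged sentences into triples, can be sketched as follows (the tag set and helper name are illustrative assumptions, not the paper's code):

```python
def bio_to_triples(tokens, tags, subject, relation):
    """Collect contiguous B-/I- spans from BIO-tagged tokens and emit
    RDF-style (subject, relation, value) triples for each span."""
    triples, span = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":                 # start of a new value span
            if span:
                triples.append((subject, relation, " ".join(span)))
            span = [tok]
        elif tag == "I" and span:      # continuation of the current span
            span.append(tok)
        else:                          # "O": close any open span
            if span:
                triples.append((subject, relation, " ".join(span)))
            span = []
    if span:
        triples.append((subject, relation, " ".join(span)))
    return triples
```

In the paper's pipeline the subject and relation would come from the document-level class and the sentence-level attribute classification, with a CRF or Bi-LSTM-CRF predicting the tags.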

Knowledge graph-based knowledge map for efficient expression and inference of associated knowledge (연관지식의 효율적인 표현 및 추론이 가능한 지식그래프 기반 지식지도)

  • Yoo, Keedong
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.49-71
    • /
    • 2021
  • Users who intend to utilize knowledge to actively solve given problems proceed by cross-wise and sequential exploration of associated knowledge related to each other by certain criteria, such as content relevance. A knowledge map is a diagram or taxonomy giving an overview of the knowledge currently managed in a knowledge base, and it supports users' knowledge exploration based on certain relationships between knowledge. A knowledge map must therefore be expressed in a networked form by linking related knowledge through certain types of relationships, and should be implemented with technologies or tools specialized in defining and inferring them. To this end, this study suggests a methodology for developing a knowledge graph-based knowledge map using a graph DB, known to perform well in expressing and inferring the entities and relationships stored in a knowledge base. The procedures of the proposed methodology are modeling the graph data; creating nodes, properties, and relationships; and composing knowledge networks by combining the identified links between knowledge. Among the various graph DBs, Neo4j is used in this study for the high credibility and applicability it has demonstrated through wide and varied application cases. To examine the validity of the proposed methodology, a knowledge graph-based knowledge map is implemented on the graph DB, and a performance comparison test is performed by applying a previous study's data, to check whether this study's knowledge map yields the same level of performance as the previous one. The previous study built a process-based knowledge map using ontology technology, which identifies links between related knowledge based on the sequences of tasks producing, or being activated by, knowledge.
In other words, since a task is not only activated by knowledge as an input but also produces knowledge as an output, input and output knowledge are linked as a flow by the task. Also, since a business process is composed of affiliated tasks that fulfill the purpose of the process, the knowledge networks within a business process can be derived from the sequences of the tasks composing it. Therefore, using Neo4j, the processes, tasks, and knowledge concerned, as well as the relationships among them, are defined as nodes and relationships so that knowledge links can be identified from the task sequences. The resulting knowledge network, aggregated from the identified knowledge links, is a knowledge map with the functionality of a knowledge graph, and its performance was tested against the level of the previous study's validation results. The performance test examines two aspects: the correctness of knowledge links, examined using 7 questions, and the possibility of inferring new types of knowledge, checked by extracting two new types of knowledge. As a result, the knowledge map constructed through the proposed methodology showed the same level of performance as the previous one, while handling knowledge definition and knowledge relationship inference more efficiently. Furthermore, compared to the previous study's ontology-based approach, this study's graph DB-based approach showed additional beneficial functionality: intensively managing only the knowledge of interest, dynamically defining knowledge and relationships to reflect various meanings from situations to purposes, agilely inferring knowledge and relationships through Cypher-based queries, and easily creating a new relationship by aggregating existing ones.
This study's artifacts can be applied to implement user-friendly knowledge exploration reflecting users' cognitive processes toward associated knowledge, and can further underpin the development of an intelligent knowledge base that expands autonomously through the discovery of new knowledge and relationships by inference. Beyond these, this study has an immediate effect on implementing the networked knowledge map essential to satisfying contemporary users eager to find the proper knowledge to use.
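The task-sequence-based identification of knowledge links described above can be sketched in plain Python, with the equivalent Cypher idea shown in a comment (the schema and labels are hypothetical, not the study's actual Neo4j model):

```python
def knowledge_links(tasks):
    """Derive knowledge-to-knowledge links from task sequences: every piece of
    knowledge consumed by a task links to every piece of knowledge that task
    produces (the study does this with Neo4j nodes and relationships; plain
    tuples stand in here).
    tasks: list of (inputs, outputs) sets per task, in process order."""
    links = set()
    for inputs, outputs in tasks:
        for k_in in inputs:
            for k_out in outputs:
                links.add((k_in, k_out))
    return links

# Equivalent Cypher idea over a hypothetical schema:
# MATCH (a:Knowledge)-[:INPUT_TO]->(t:Task)-[:PRODUCES]->(b:Knowledge)
# MERGE (a)-[:LEADS_TO]->(b)
```

Chaining the links across the tasks of a process yields the networked knowledge map the study validates.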

A Study on the Costumes in the Dong-A Ilbo - 1920~1945 - (동아일보(東亞日報)에 나타난 복식연구(服飾硏究))

  • Son, Myong-Im;Kim, Jin-Goo
    • Journal of the Korean Society of Costume
    • /
    • v.14
    • /
    • pp.145-165
    • /
    • 1990
  • This study closely examines costume conditions between the modernization period and Liberation using newspaper materials, because newspapers of that era reported the social conditions of the day rapidly and across the board. The modernization period saw extreme changes in the history of costume (the ordinance prohibiting topknots, the allowance of wearing foreign clothes). Because these changes arose not from internal desire but from Western input imposed by the imperialist powers surrounding Chosun, they were accepted by the general public only later, under Japanese imperial rule. Consequently, the study of costume in the period between the Japanese annexation of Korea and Liberation plays an important part. This study applies newspaper characteristics to costume, closely examines the important costume conditions of those days, and presents the costume materials that appeared in accounts published between the first issue of the Dong-A Ilbo (1920) and the year 1945, as follows. 1. Men's foreign clothes were generally accepted by the public; judging from changes in form, short jackets and narrow trousers were in fashion in the 1920s, while trousers with a generous waistband, broad shoulders, and long jackets came into fashion in the 1930s. The catalog of major clothes is as follows: spring coat, jacket, vest, shirt, etc. 2. The functional character of foreign clothes made possible the acceptance of women's foreign clothes. This was related to the much-discussed improvement of Korean dress at the time and to arguments for substituting foreign clothes for Korean clothes in pursuit of a more functional clothing life. 3. Improvements to women's Korean clothes generally aimed at the liberation of women's breasts, substituting a vest waist for the skirt waist, a seamless one-piece skirt of shorter length, and a longer jacket. 4. Accounts of children's clothes covered functional and sanitary conditions, handling methods, and washing methods. 5. Clothing materials covered foreign clothing materials, artificial silk, furs, cotton fabrics, etc. 6. Clothing management covered washing, keeping methods, washing methods for foreign clothes, and the keeping of furs. 7. For men, short hair generally came into fashion, while for women accounts of foreign long-hair fashions had influence. 8. Descriptions of beauty care covered primary beauty care, reform, plastic surgery, and shaded beauty care; the ideal dealt with natural and dignified beauty. 9. Accessories (hats, handbags, handkerchiefs, gloves) changed with clothing fashion, and more rapidly than clothing fashion. 10. The abolition of white clothes and the wearing of dyed clothes were encouraged because of the economic, psychological, and aesthetic defects of white clothes; consequently the abolition of white clothes was on the rise, and at the national level dyed clothes were substituted as a means of control and improvement for people of all social standings. In sum, the dress and ornament conditions of those days, analyzed from accounts in the Dong-A Ilbo, show the acceptance of foreign clothes introduced earlier in the century and the substitution of dyed clothes for white. Costume conditions of the time show a mixture of Korean and foreign clothes. In the 1920s, Korean dress forms were the first consideration; as time went on, foreign clothes gained weight in overall clothing life. Accounts of how to manage foreign clothes appeared in the 1920s, while accounts dealing regularly with the concrete content of foreign fashion appeared in the 1930s.

  • PDF