• Title/Summary/Keyword: Information value approach

Evaluating the Levels of Port Services by the Average Waiting Cost of Ships (선박당 평균대기비용에 의한 항만의 서비스 수준 평가)

  • Park, Byung-In;Bae, Jong-Wook;Park, Sang-June
    • Journal of Korea Port Economic Association / v.25 no.4 / pp.183-202 / 2009
  • This study estimates the port waiting costs of Korea's international trade ports using an opportunity cost approach, and then presents a method for assessing the level of port services based on the average waiting cost per ship derived from those estimates. Because the port waiting cost reflects a social cost, it is difficult to use as a service indicator, even though it provides decision support for expanding particular port facilities. The percentage of waiting ships and the waiting time are also insufficient indicators, since they capture only the quantitative, time-based aspects of waiting. The average waiting cost per ship proposed in this study, by contrast, can serve as a service indicator that reflects both waiting time and the loss of economic value, and it is useful information for shippers and carriers selecting a port. Based on the average waiting cost per ship in 2007, the ports with the lowest service levels are, in order, Pyeongtaek-Dangjin, Pohang, Donghae, and Samcheonpo. This ranking differs from the one based on total port waiting cost, namely Pohang, Incheon, Gwangyang, Pyeongtaek-Dangjin, and Ulsan. The port waiting cost is to a port authority what the average waiting cost per ship is to a port user: a key indicator for evaluating the level of port services.
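
The service indicator itself reduces to simple arithmetic once per-ship waiting times and an hourly opportunity cost rate have been estimated. The minimal sketch below contrasts the two indicators discussed above; the cost rate, waiting hours, and call count are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of the two indicators; all figures are illustrative.

def port_waiting_cost(waiting_hours, hourly_opportunity_cost):
    """Total waiting cost of a port: sum of each ship's hours times the cost rate."""
    return sum(h * hourly_opportunity_cost for h in waiting_hours)

def average_waiting_cost_per_ship(waiting_hours, hourly_opportunity_cost, n_calls):
    """The proposed service indicator: total waiting cost per calling ship."""
    return port_waiting_cost(waiting_hours, hourly_opportunity_cost) / n_calls

# Three ships waited at this port; 100 ships called in total (made-up data).
hours = [12.0, 5.5, 30.0]
print(average_waiting_cost_per_ship(hours, hourly_opportunity_cost=1500.0, n_calls=100))
```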

e-Gov's Big Data utilization plan for social crisis management (사회 위기관리를 위한 전자정부의 빅데이터 활용 방안)

  • Choung, Young-chul;Choy, Ik-su;Bae, Yong-guen
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.2 / pp.435-442 / 2017
  • Public anxiety has risen with the recent increase in unpredictable disasters. To protect future society against the kinds of disasters that social crises can produce, preventive measures must be prepared in advance. As an ICT-advanced country, the government therefore needs to recognize the significance of its role and the value of Big Data applications in managing social crises at all times. This paper analyzes public anxiety about such disasters and describes the government's search for new ways to use Big Data in the public sector, in order to make Big Data-related issues and their significance and urgency visible. It also reviews domestic and international trends in public-sector Big Data applications together with new practical approaches to Big Data. Finally, it emphasizes the role of e-Government in Big Data utilization and, drawing on case studies of countermeasures against unpredictable crises, suggests policy implications for the governmental use of Big Data in social crisis management.

Automation of Regression Analysis for Predicting Flatfish Production (광어 생산량 예측을 위한 회귀분석 자동화 시스템 구축)

  • Ahn, Jinhyun;Kang, Jungwoon;Kim, Mincheol;Park, So-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.128-130 / 2021
  • This study implements a regression analysis system for predicting an appropriate production volume of flatfish. As Korea signs FTAs with countries around the world and market opening accelerates, Korean flatfish farming businesses face many difficulties owing to the specificity and uncertainty of their environment. Solutions are also needed for problems such as sluggish consumption and falling prices caused by the recent surge in imported seafood, such as salmon and yellowtail, and by changes in dietary habits. In this study, the Python module xlwings was used to obtain flatfish production data and to predict the amount of flatfish to be produced in the future. Based on these production forecasts, the flatfish aquaculture industry can plan for an appropriate production volume and control supply and demand, reducing unnecessary economic loss and promoting new data-driven value creation. In addition, the data-driven approach attempted in this study, combined with analysis techniques such as artificial neural networks and multiple regression analysis, can serve as a foundation for future research that effectively analyzes and utilizes big data across various industries.
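
A minimal sketch of the kind of pipeline the abstract describes, assuming a workbook with one year/production pair per row: read the data through xlwings, fit a linear regression, and extrapolate. The file name, sheet layout, and choice of a simple linear model are all assumptions for illustration.

```python
import xlwings as xw
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical workbook: column A holds years, column B production (tonnes).
wb = xw.Book("flatfish_production.xlsx")
rows = wb.sheets[0].range("A2").expand().value  # [[year, tonnes], ...]

X = np.array([[r[0]] for r in rows])  # year as the single feature
y = np.array([r[1] for r in rows])    # observed production volume

model = LinearRegression().fit(X, y)
print("forecast for 2025:", model.predict(np.array([[2025]]))[0])
```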

Methods for Quantitative Disassembly and Code Establishment of CBS in BIM for Program and Payment Management (BIM의 공정과 기성 관리 적용을 위한 CBS 수량 분개 및 코드 정립 방안)

  • Hando Kim;Jeongyong Nam;Yongju Kim;Inhye Ryu
    • Journal of the Computational Structural Engineering Institute of Korea / v.36 no.6 / pp.381-389 / 2023
  • Data is one of the crucial components of building information modeling (BIM). To manage such data systematically, various studies have focused on the creation of object breakdown structures and property sets. In particular, the key data for managing programs and payments are work breakdown structures (WBSs) and cost breakdown structures (CBSs), which are indispensable for mapping BIM objects. Achieving this requires disassembling CBS quantities based on 3D objects and the WBS. However, this task is highly tedious owing to the large volume of CBS items and the divergent coding practices employed by different organizations, and manual processes, such as those based on Excel, become nearly impossible at such a scale. In response to the challenge of computing quantities that are difficult to derive from BIM objects, this study presents methods for disassembling length-based quantities, which account for a significant portion of the bill of quantities (BOQ). The proposed approach recommends suitable CBS items by leveraging the accumulated history of WBS-CBS mapping databases. Additionally, it establishes a unified CBS code, facilitating the effective operation of CBS databases.
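
As a rough illustration of the disassembly step (not the paper's implementation), the sketch below allocates one length-based CBS quantity across the BIM objects mapped to a WBS item in proportion to each object's length; the object IDs, codes, and quantities are invented.

```python
# Illustrative sketch (not the paper's implementation): spread one length-based
# CBS quantity over the BIM objects mapped to a WBS item, proportional to length.

def disassemble_by_length(cbs_quantity, objects):
    """objects: list of (object_id, wbs_code, length_m); returns per-object shares."""
    total_length = sum(length for _, _, length in objects)
    return {obj_id: cbs_quantity * length / total_length
            for obj_id, _, length in objects}

# Hypothetical example: 120 m of curb quantity spread over three alignment objects.
objects = [("OBJ-01", "WBS.RD.CURB", 30.0),
           ("OBJ-02", "WBS.RD.CURB", 30.0),
           ("OBJ-03", "WBS.RD.CURB", 60.0)]
print(disassemble_by_length(120.0, objects))  # {'OBJ-01': 30.0, 'OBJ-02': 30.0, 'OBJ-03': 60.0}
```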

A Data-Driven Approach and Network Analysis of Technological Innovation Resources in SMEs (데이터 기반 접근법을 활용한 중소기업 기술혁신자원의 네트워크 분석)

  • Kyung Min An;Young-Chan Lee
    • Knowledge Management Research / v.24 no.4 / pp.103-129 / 2023
  • This study analyzes the network structure of technological innovation resources in SMEs, especially manufacturing firms, and reveals the differences between innovative and non-innovative firms. The study first analyzes degree centrality, betweenness centrality, and power centrality for all firms, and derives structural equivalence through CONCOR analysis. The network structures of innovative and non-innovative firms are then compared according to innovation performance and creation. The results show that entrepreneurship and corporate innovation strategy have a significant impact in the analysis of the technological innovation resources of all firms. According to the CONCOR analysis, the innovation resources of SMEs are organized into seven clusters, including intrinsic product innovation resources, competitive advantage promotion resources, cooperative activities resources, information system resources, and innovation protection resources. The network analysis of innovative and non-innovative firms shows that innovative firms focus on enhancing competitiveness and improving quality, while non-innovative firms tend to focus more on existing products and customers. In addition, innovative firms form eight clusters, while non-innovative firms form six, suggesting that innovative firms use their resources in diverse ways to pursue structural change and new value creation, whereas non-innovative firms operate their technological innovation resources in a more stable form. This study emphasizes the importance of entrepreneurship and corporate innovation strategy in SMEs' technological innovation and suggests that strong internal efforts are needed to increase innovativeness. These findings have important implications for strategy formulation and policy development for technological innovation in SMEs.
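
For readers unfamiliar with these measures, the hedged sketch below computes them with networkx on a toy resource network; the edge list is invented, and eigenvector centrality stands in for Bonacich power centrality, which networkx does not provide directly.

```python
import networkx as nx

# Toy innovation-resource network; nodes and edges are invented for illustration.
G = nx.Graph([
    ("entrepreneurship", "innovation_strategy"),
    ("innovation_strategy", "R&D_capability"),
    ("R&D_capability", "information_system"),
    ("entrepreneurship", "cooperation_activities"),
    ("cooperation_activities", "information_system"),
])

print("degree:     ", nx.degree_centrality(G))
print("betweenness:", nx.betweenness_centrality(G))
# Eigenvector centrality used as a stand-in for Bonacich power centrality.
print("eigenvector:", nx.eigenvector_centrality(G, max_iter=1000))
```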

A Study on the Optimization of Fire Awareness Model Based on Convolutional Neural Network: Layer Importance Evaluation-Based Approach (합성곱 신경망 기반 화재 인식 모델 최적화 연구: Layer Importance Evaluation 기반 접근법)

  • Won Jin;Mi-Hwa Song
    • The Transactions of the Korea Information Processing Society / v.13 no.9 / pp.444-452 / 2024
  • This study proposes a deep learning architecture optimized for fire detection, derived through Layer Importance Evaluation. To address the unnecessary complexity and computation of existing Convolutional Neural Network (CNN)-based fire detection systems, the behavior of the model's inner layers was analyzed based on weights and activation values using the Layer Importance Evaluation technique; layers contributing strongly to fire detection were identified; the model was reconstructed from those layers alone; and its performance indicators were compared with those of the original models. After training four transfer learning models (Xception, VGG19, ResNet, and EfficientNetB5) on fire data, Layer Importance Evaluation was applied to analyze the weights and activation values of each layer, and a new model was constructed by selecting the top-ranked layers with the highest contribution. The results confirm that the resulting architecture maintains the same performance with about 80% fewer parameters than the original models: it matches the conventional, more complex transfer learning models on accuracy, loss, and confusion matrix indicators while training about 3 to 5 times faster, which can contribute to more efficient fire monitoring equipment.
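
A minimal sketch, under assumptions, of the weight-based half of such a layer ranking in Keras: layers are scored by the mean absolute value of their parameters. The paper's Layer Importance Evaluation also analyzes activation values and rebuilds the model from the top-ranked layers, both omitted here.

```python
import numpy as np
from tensorflow.keras.applications import VGG19

model = VGG19(weights="imagenet", include_top=False)

# Score each parameterized layer by the mean absolute value of its weights.
importance = {}
for layer in model.layers:
    weights = layer.get_weights()
    if weights:  # skip layers without trainable parameters (e.g., pooling)
        importance[layer.name] = float(np.mean([np.abs(w).mean() for w in weights]))

# The top-ranked layers would be kept when reconstructing the lighter model.
for name, score in sorted(importance.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{name}: {score:.4f}")
```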

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • In line with the rapidly increasing demand for text data analysis, research on and investment in text mining are being actively pursued not only in academia but also across industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining research focused on applications in the second step. However, with the recognition that the structuring process substantially influences the quality of the results, various embedding methods have been actively studied to preserve the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be applied directly to a variety of operations and traditional analysis techniques, unstructured text must first be transformed into a form the computer can understand. Mapping arbitrary objects into a vector space of a given dimension while maintaining their algebraic properties is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as demand for document embedding grows rapidly, many algorithms have been developed to support it; among them, doc2Vec, which extends word2Vec to embed each document as a single vector, is the most widely used. However, traditional document embedding methods such as doc2Vec generate a vector for each document from the entire text of that document, so the document vector is affected not only by core words but also by miscellaneous ones. Moreover, such schemes usually map each document to a single vector, which makes it difficult to accurately represent a complex document covering multiple subjects. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. The method targets documents that explicitly separate body content and keywords; for documents without keywords, it can be applied after extracting keywords through other analysis methods, although that step is not the core of the proposal, so we present the process for documents with predefined keywords. The proposed method consists of (1) parsing, (2) word embedding, (3) keyword vector extraction, (4) keyword clustering, and (5) multiple-vector generation. Specifically, all text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to avoid the influence of miscellaneous words, the vectors corresponding to each document's keywords are extracted to form a keyword vector set per document.
Next, the keyword set of each document is clustered to identify the multiple subjects it contains. Finally, a multi-vector representation is generated from the keyword vectors constituting each cluster. Experiments on 3,147 academic papers revealed that the traditional single-vector approach cannot properly map complex documents because subjects interfere with one another within each vector, whereas the proposed multi-vector method vectorizes complex documents more accurately by eliminating this interference.
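
A hedged sketch of steps (2) through (5) above: embed words, extract the keyword vectors, cluster them, and take cluster centroids as the document's multiple vectors. The corpus, keywords, and cluster count are toy data, not the paper's setup.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Toy tokenized corpus standing in for the academic papers.
corpus = [["deep", "learning", "image", "classification"],
          ["port", "logistics", "shipping", "cost"],
          ["deep", "network", "port", "container"]]
w2v = Word2Vec(corpus, vector_size=50, min_count=1, seed=42)  # step (2)

doc_keywords = ["deep", "learning", "port", "shipping"]  # keywords of one document
keyword_vecs = np.array([w2v.wv[k] for k in doc_keywords])  # step (3)

k = 2  # number of subjects assumed for this document
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(keyword_vecs)  # step (4)

# Step (5): one vector per identified subject, the centroid of each keyword cluster.
doc_vectors = [keyword_vecs[labels == c].mean(axis=0) for c in range(k)]
print(len(doc_vectors), doc_vectors[0].shape)
```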

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, the GA has a drawback to consider: the large amount of computation time needed to construct the desired holograms. A major reason the GA's operation is time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into a fitness value. To remedy this drawback, the artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. Optimizing a CGH with the genetic algorithm is an iterative process involving selection, crossover, and mutation operators [2]. The evaluation of the cost function, which selects the better holograms, plays an important role in the GA, but it wastes much time Fourier transforming the encoded hologram parameters into the value to be assessed; depending on the speed of the computer, this process can last up to ten minutes. It is more effective if, instead of merely generating random holograms initially, a set of approximately desired holograms is employed: the initial population then contains fewer trial holograms, reducing the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initialize the GA's procedure is proposed, so that the initial population contains fewer random holograms, compensated by approximately desired ones. Figure 1 shows the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, holograms are simulated with the ANN method [1] to acquire approximately desired holograms. With a training data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained network attains approximately desired holograms in fairly good agreement with theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the hybrid algorithm operates like the GA except for the modified initialization; hence the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size.
A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the number of iterations is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is shown in Fig. 2: with a 66.7% reduction in computation time and a 2% increase in diffraction efficiency over the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm; a diffraction efficiency of 75.8% and a uniformity of 5.8% are measured, in fairly good agreement with the simulation. In this paper, the genetic algorithm and a neural network have been successfully combined in designing CGHs, giving a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
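
To make the hybrid structure concrete, here is a schematic sketch with invented parameters: a small GA whose initial population is seeded by a stand-in for the trained network, and whose fitness is a diffraction efficiency estimated from the far-field intensity via an FFT. None of the numbers reproduce the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, POP, GENS = 32, 30, 200  # hologram size, population size, iterations
target = np.zeros((N, N)); target[12:20, 12:20] = 1.0  # toy target pattern

def fitness(holo):
    # Binary phase 0/pi -> field +1/-1; efficiency = energy landing on the target.
    field = np.exp(1j * np.pi * holo)
    inten = np.abs(np.fft.fft2(field)) ** 2
    return (inten * target).sum() / inten.sum()

def ann_seed():
    # Stand-in for the trained network's "approximately desired" hologram.
    return (rng.random((N, N)) < 0.5).astype(int)

pop = [ann_seed() for _ in range(POP)]  # seeded rather than purely random
for _ in range(GENS):
    scores = [fitness(h) for h in pop]
    order = np.argsort(scores)[::-1]
    pop = [pop[i] for i in order[:POP // 2]]          # selection
    children = []
    for a, b in zip(pop[::2], pop[1::2]):
        mask = rng.random((N, N)) < 0.5               # uniform crossover
        child = np.where(mask, a, b)
        flip = rng.random((N, N)) < 0.001             # mutation
        children.append(np.where(flip, 1 - child, child))
    pop += children + [ann_seed() for _ in range(POP - len(pop) - len(children))]

print("best diffraction efficiency:", max(fitness(h) for h in pop))
```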

Revisiting the cause of unemployment problem in Korea's labor market: The job seeker's interests-based topic analysis (취업준비생 토픽 분석을 통한 취업난 원인의 재탐색)

  • Kim, Jung-Su;Lee, Suk-Jun
    • Management & Information Systems Review / v.35 no.1 / pp.85-116 / 2016
  • The present study explores the causes of employment difficulty on the basis of job applicants' interests, from a person-environment (P-E) fit perspective. Our approach relies on textual analysis to draw insights from applicants' situational interests during a job search amid changes in the labor market. To investigate their major interests and psychological responses, user-generated texts were collected from an online community for job seeking and information sharing, covering January 1, 2013 through December 31, 2015. The topic analysis indicates that users' primary interests fall into four types: perception of vocational expectations, employment pre-preparation behaviors, perception of the labor market, and job-seeking stress. Notably, job applicants were concerned mainly with monetary reward and the form of employment rather than with their work values or career exploration, and young applicants expressed their psychological responses in contextualized language (e.g., slang, vulgarisms), projecting their unstable state under the uncertainty of environmental changes. They also perceived a narrow set of preparation activities (e.g., certifications, English exams) as the decisive factors for success in employment, and suffered from job-seeking stress. On the basis of these findings, the current unemployment problem can be attributed to the absence of the pursuit of the value of vocation and work among individuals, organizations, and society. Concretely, job seekers are preoccupied with occupational prestige in its social aspect and hold undecided vocational values, while most companies do not perceive the importance of human resources and have overlooked the need to develop proper work environments that stimulate individual motivation. The attempt in this study to reinterpret the effect of the environment by classifying job applicants' interests with reference to linguistic and psychological theories not only supports a more comprehensive understanding of this social problem but also points to new directions for future research on job applicants' psychological factors (e.g., attitudes, motivation) using topic analysis.
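
The abstract does not name the specific topic model, so the sketch below uses LDA in gensim as one plausible way to derive interest topics from such posts; the tokenized posts and the number of topics are invented for illustration.

```python
from gensim import corpora, models

# Toy tokenized community posts standing in for the crawled 2013-2015 corpus.
posts = [["salary", "contract", "permanent", "position"],
         ["toeic", "certificate", "exam", "score"],
         ["stress", "anxious", "interview", "rejection"],
         ["salary", "benefits", "overtime", "contract"]]

dictionary = corpora.Dictionary(posts)
bow = [dictionary.doc2bow(p) for p in posts]
lda = models.LdaModel(bow, num_topics=4, id2word=dictionary, random_state=0)

# Inspect the top words of each derived interest topic.
for topic_id in range(4):
    print(topic_id, lda.print_topic(topic_id, topn=3))
```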

A study of SCM strategic plan: Focusing on the case of LG electronics (공급사슬 관리 구축전략에 관한 연구: LG전자 사례 중심으로)

  • Lee, Gi-Wan;Lee, Sang-Youn
    • Journal of Distribution Science / v.9 no.3 / pp.83-94 / 2011
  • Most domestic companies, with the exception of major firms, are reluctant to implement a supply chain management (SCM) network in their operations, and most small- and medium-sized enterprises are not even aware of SCM. Owing to its inherent total-systems efficiency, SCM coordinates domestic manufacturers, subcontractors, distributors, and physical distributors and cuts the costs of inventory control and demand management. The lack of SCM therefore weakens the competitiveness of domestic companies. The reason lies in the fundamentals of SCM: information sharing, process innovation throughout the supply chain, and the broad range of problems the SCM management tool can address. This study examines and seeks to reform the current SCM situation by analyzing SCM strategic plans, the discourse and logical discussions on the topic, and a successful case of adopting SCM, with the aim of implementing SCM productively. First, the theoretical background of SCM must be considered before discussing how to implement it successfully. Chapter 2 describes the concept and background of SCM, including its definition, types of SCM promotional activities, fields of SCM, the necessity of applying SCM, and its effects. Chapter 3 introduces the obstacles to implementing SCM, including the following: the bullwhip effect; breakdowns in supply chains and sales networks due to e-business; and the issue that, although cooperation between production and distribution companies is the key to successful SCM, the companies often put their own profits first during the process, which can lead to faulty demand estimation. Furthermore, the problems of implementing SCM in domestic distribution and production companies also concern information technology; for example, a newly introduced system may be incompatible with the company's pre-existing document architecture. Second, for effective management, distribution and production companies should cooperate and strengthen their partnership at the corporate level; in reality, this seldom occurs. Third, in terms of the work process, introducing SCM can create friction within corporations during the integration of the distribution and production processes. Fourth, to make the SCM strategy succeed, companies need to set up cross-functional teams, yet business partners often lack the cooperation and business-information-sharing tools necessary for the transition to SCM. Chapter 4 addresses an SCM strategic plan and a case study of LG Electronics, covering the purpose of the strategic plan, strategic plans by type of business, the adoption of SCM in a distribution company, and LG Electronics' global supply chain process. Chapter 5 concludes: facing fierce competition in the global market, companies are increasing their investment in SCM to cope better with short product life cycles and high customer expectations. The SCM management system has evolved through the adoption of improved information, communication, and transportation technologies and now demands the utilization of various strategic resources.
The introduction of SCM benefits the management of a network of interconnected businesses by securing customer loyalty through the cost and time savings derived from consolidating many distribution systems; it also helps enterprises form a wide range of marketing strategies. We therefore conclude that not only distributors but all types of businesses should adopt a systems approach to supply chain strategy. SCM deals with the basic flow of distribution and increases the value of a company by replacing physical distribution with information: by obtaining and sharing timely information, a company can create customer satisfaction at the final point of delivery to the consumer.
