• Title/Summary/Keyword: Information structure

A Study on Image Copyright Archive Model for Museums (미술관 이미지저작권 아카이브 모델 연구)

  • Nam, Hyun Woo;Jeong, Seong In
    • Korea Science and Art Forum / v.23 / pp.111-122 / 2016
  • The purpose of this multidisciplinary convergent study is to establish an image copyright archive model for museums that protects image copyright and vitalizes the use of images. The study responds to the need for research and development on copyright services across the life cycle of art contents created by museums, the need to vitalize the distribution market for image copyright contents in the creative industry, and the need to formulate a management system for copyright services. It makes various suggestions for enhancing the transparency and efficiency of the art contents ecosystem through the use and recycling of image copyright materials, proposing a standard system for the calculation, distribution, settlement, and monitoring of copyright royalties for 1,000 domestic museums, galleries, and exhibition halls. First, the study proposes the contents and structural design of the image copyright archive model and, by proposing an art contents distribution service platform for prototype simulation, execution simulation, and model operation simulation, establishes an art contents copyright royalty process model. Because billing systems and technologies for image contents are still at an early stage, the study uses an existing contents billing framework as the base model for developing billing technology for the distribution of museum collections and artworks, together with an engine for the automatic division and calculation of copyright royalties. Ultimately, the study suggests an image copyright archive model that can be used by artists, curators, and distributors. As a business strategy, it suggests niche-market penetration for the museum image copyright archive model. As a sales expansion strategy, it establishes a business model in which image transactions can be conducted effectively in B2B, B2G, B2C, and C2B forms through flexible connection to museum archive systems, with controllable management of image copyright materials. The study is expected to minimize disputes between the copyright holders of artwork images and the owners of the works, and to enhance the manageability of copyrighted artworks, by preventing such disputes and providing information on the distribution and utilization of art contents (collections and new creations) owned by museums. In addition, by providing a guideline for archiving museum collections and new creations, it is expected to increase the registration of image copyrights and to enable various convergent businesses, such as billing and the division and settlement of copyright royalties for image copyright distribution services.
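
The automatic division and settlement of royalties that the abstract describes comes down to pro-rata splitting of each licensing fee among stakeholders. Below is a minimal sketch in Python; the stakeholder roles, split ratios, and fee figures are assumptions for illustration, since the paper does not publish its actual engine or formula.

```python
# Hypothetical royalty division engine; the 50/35/15 split and the fee
# amount are invented for illustration, not figures from the paper.
from dataclasses import dataclass

@dataclass
class RoyaltySplit:
    artist: float      # share for the copyright holder
    museum: float      # share for the owning/archiving museum
    platform: float    # share for the distribution platform

def divide_royalty(gross_fee: float, split: RoyaltySplit) -> dict:
    """Divide a single image-use fee among the stakeholders."""
    assert abs(split.artist + split.museum + split.platform - 1.0) < 1e-9
    return {
        "artist": round(gross_fee * split.artist, 2),
        "museum": round(gross_fee * split.museum, 2),
        "platform": round(gross_fee * split.platform, 2),
    }

# Example: a 50/35/15 split on a 120,000 KRW licensing fee (assumed values).
print(divide_royalty(120_000, RoyaltySplit(0.50, 0.35, 0.15)))
```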

Analysis of Tourism Popularity Using T-map Search and Sometrend Data: Focusing on Chuncheon-city, Gangwon-province (T맵 검색지와 썸트랜드 데이터를 이용한 관광인기도분석: 강원도 춘천을 중심으로)

  • TaeWoo Kim;JaeHee Cho
    • Journal of Service Research and Studies / v.12 no.1 / pp.25-35 / 2022
  • Covid-19, whose first Korean patient was confirmed in January 2020, has affected many fields, and the tourism sector may have been hit the hardest. The damage is especially great in Gangwon-province, where a tourism-based industrial structure forms the basis of the regional economy and the tourism industry is the main source of income for small businesses and enterprises. To check the situation and extent of the damage, this study conducted an empirical data analysis of the Chuncheon region, the most accessible area of Gangwon-province: one-day tours from Seoul and the metropolitan area are possible by public transportation, and the area is generally perceived as a low-cost destination. The general status of the region was first checked using the visitor data for Chuncheon-city provided by the tourist information system. To compare the levels of interest in 2019, before Covid-19, with those in 2020, after Covid-19, keywords collected from Sometrend, a web service of Vibe Company Inc., a company specializing in keyword collection, were compared with destination-search data from SK Telecom's T-map, which provides in-vehicle navigation and communication services, and the general regional image of Chuncheon-city was analyzed. In addition, a tourism popularity index combining the keywords and T-map destination-search data was developed, the two years of data were compared, and the study examined, through data analysis, how much the Covid-19 situation affected visitors' interest in the Chuncheon area and their actual visits. According to the results of big data analysis applying the tourism popularity index after designing the data mart, the effect of the Covid-19 situation on tourism popularity in Chuncheon-city, Gangwon-province was not significant, and the image of tourist destinations based on the characteristics of the region was confirmed. It is hoped that these results can serve as useful reference data for tourism economic policy making.
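
The tourism popularity index the study develops blends keyword-mention counts with T-map destination-search counts. Here is a minimal sketch of one plausible construction, assuming min-max normalization and an equal-weight blend (the abstract does not state the exact formula); the figures are toy values.

```python
# Hypothetical tourism popularity index: normalize each signal to [0, 1]
# and blend them. Column names, weights, and data are assumptions.
import pandas as pd

def popularity_index(df: pd.DataFrame, w_keyword: float = 0.5) -> pd.Series:
    """Blend normalized keyword mentions and T-map destination searches.

    df must have columns 'keyword_mentions' and 'tmap_searches',
    one row per period (e.g., month).
    """
    kw = (df["keyword_mentions"] - df["keyword_mentions"].min()) / (
        df["keyword_mentions"].max() - df["keyword_mentions"].min())
    nav = (df["tmap_searches"] - df["tmap_searches"].min()) / (
        df["tmap_searches"].max() - df["tmap_searches"].min())
    return w_keyword * kw + (1 - w_keyword) * nav

# Example: compare 2019 vs. 2020 monthly index values (toy numbers).
data = pd.DataFrame({
    "keyword_mentions": [120, 95, 130, 110],
    "tmap_searches": [5400, 4800, 5900, 5100],
}, index=["2019-07", "2019-08", "2020-07", "2020-08"])
print(popularity_index(data))
```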

Degree of Self-Understanding Through "Self-Guided Interpretation" in Yeoncheon, Hantan River UNESCO Geopark: Focusing on Readability and Curriculum Relevance (한탄강 세계지질공원 연천 지역의 자기-안내식 해설 매체를 통한 스스로 이해 가능 정도: 이독성과 교육과정 관련성을 중심으로)

  • Min Ji Kim;Chan-Jong Kim;Eun-Jeong Yu
    • Journal of the Korean earth science society / v.44 no.6 / pp.655-674 / 2023
  • This study examined whether the "self-guided interpretation" media in the Yeoncheon area of the Hantangang River UNESCO Geopark are intelligible to visitors. Two on-site investigations were conducted in the geopark in September and November 2022. The Yeoncheon area, known for its diverse geological features and the range of eras in which its geological attractions formed, was selected for analysis. We analyzed the readability levels, graphic characteristics, and alignment with the science curriculum of the interpretive media specific to geological sites, out of a total of 36 self-guided interpretive media in the Yeoncheon area. Results indicated that information boards, primarily offering guidance on geological attractions, were the most prevalent type of interpretive media in the area. The quantity of text in the explanatory media surpassed that of a 12th-grade science textbook. The average vocabulary grade was similar to that of 11th- and 12th-grade science textbooks, with somewhat reduced readability due to a high occurrence of complex sentences. The predominant graphic type was the illustrative photograph, which aids comprehension of geological formation processes through multi-structure graphics. Regarding the scientific terms used in the interpretive media, 86.3% fell within the "Solid Earth" section of the 2015 revised curriculum, the majority at the 4th-grade level; terms from the 11th-grade optional curriculum comprised the second-largest portion, and 13.7% of all science terms were from outside the curriculum. Notably, the complexity of the scientific terminology varied across geological attractions, and the terminology level on the homepage tended to be higher than that on the information boards. Through these findings, specific factors impeding visitor comprehension of the geological attractions in the Yeoncheon area were identified for each interpretation medium. We suggest further research to improve self-guided interpretation media, fostering geological-resource education for general visitors and advancing geology education.
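
The readability and curriculum-alignment analysis amounts to profiling each panel's text against a graded vocabulary list. The sketch below is illustrative only: the grading thresholds and the toy term list are assumptions, not the authors' actual instrument.

```python
# Illustrative term-profile analysis for interpretive-panel text; the
# curriculum term list and grade levels are invented stand-ins for the
# 2015 revised curriculum vocabulary the study used.
import re

CURRICULUM_TERMS = {
    "basalt": 4, "lava": 4, "stratum": 7, "unconformity": 11,
}

def term_profile(text: str) -> dict:
    """Count how many science terms fall inside the curriculum list
    and compute their mean grade level."""
    words = re.findall(r"[a-z]+", text.lower())
    in_curr = [w for w in words if w in CURRICULUM_TERMS]
    grades = [CURRICULUM_TERMS[w] for w in in_curr]
    return {
        "terms_in_curriculum": len(in_curr),
        "mean_grade_level": sum(grades) / len(grades) if grades else None,
    }

print(term_profile("Columnar basalt formed as lava cooled above the stratum."))
```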

Proposal for the Hourglass-based Public Adoption-Linked National R&D Project Performance Evaluation Framework (Hourglass 기반 공공도입연계형 국가연구개발사업 성과평가 프레임워크 제안: 빅데이터 기반 인공지능 도시계획 기술개발 사업 사례를 바탕으로)

  • SeungHa Lee;Daehwan Kim;Kwang Sik Jeong;Keon Chul Park
    • Journal of Internet Computing and Services / v.24 no.6 / pp.31-39 / 2023
  • The purpose of this study is to propose a scientific performance evaluation framework for measuring and managing the overall outcomes of complex projects whose results are linked to public demand-based commercialization, such as information system projects and public procurement, within integrated national R&D projects. For integrated national R&D projects in which multiple research institutes jointly produce a single final product, and in which the results are demonstrated and commercialized on the basis of demand, the existing evaluation system, which assesses performance based on the short-term outputs of the detailed tasks comprising the R&D project, has limitations in evaluating the mid- to long-term effects and practicality of the integrated research products. Moreover, as the paradigm of national R&D projects shifts toward a mission-oriented one that emphasizes efficiency, performance evaluation needs to refocus on the effectiveness and practicality of the results. In this study, we propose a performance evaluation framework, built on the Hourglass model, that takes a structural perspective and evaluates the completeness of each national R&D project in practical terms, such as effectiveness, beyond simple short-term outputs. In particular, it presents an integrated performance evaluation framework that links top-down and bottom-up approaches along the Tool-System-Service-Effect chain according to the structure of R&D projects. By applying the proposed detailed evaluation indicators and the performance evaluation frame to an actual national R&D project, the validity of the indicators and the effectiveness of the frame were verified. These results are expected to provide academic, policy, and industrial implications for the performance evaluation systems of national R&D projects that emphasize efficiency.
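
The Tool-System-Service-Effect chain can be read as a bottom-up aggregation of indicator scores into a project-level completeness score. The following is a hypothetical illustration only; the layer weights and indicator scores are invented, and the paper's actual indicator set is not reproduced here.

```python
# Hypothetical bottom-up scoring along the Tool-System-Service-Effect chain;
# weights and scores are illustrative assumptions, not the paper's indicators.
LAYER_WEIGHTS = {"tool": 0.2, "system": 0.2, "service": 0.3, "effect": 0.3}

def project_score(indicator_scores: dict) -> float:
    """Aggregate per-layer indicator scores (0-100) into one
    weighted project-level completeness score."""
    return sum(
        LAYER_WEIGHTS[layer] * (sum(scores) / len(scores))
        for layer, scores in indicator_scores.items()
    )

print(project_score({
    "tool": [80, 90], "system": [70], "service": [85, 75], "effect": [60],
}))
```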

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.59-77 / 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. The study approaches the predictive models from the perspective of two different analyses. The first is the analysis period: we divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction horizon: to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one, two, or three years later. In total, six prediction models are therefore developed and analyzed. We employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method; it builds trees that label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanation capabilities. Well-known decision tree induction algorithms include CHAID, CART, QUEST, and C5.0. Among them, we use the C5.0 algorithm, the most recently developed, which yields better performance than the others. We obtained the rights-issue and financial analysis data from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables: 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. In total, 84 variables from the financial analysis data were selected as the input variables of each model, and the rights-issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The results of the experimental analysis show that the prediction accuracies for the period after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for the period before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intention to conduct rights issues has become more evident. The experimental results also show that stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction, whereas the long-term prediction of rights issues is affected by indices of profitability, stability, activity, and productivity. All the prediction models include the industry code as a significant variable, meaning that companies in different industries show different patterns of rights issues. We conclude that stakeholders should take into account stability-related indices for short-term prediction and a broader range of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare differences in accuracy using other data mining techniques such as neural networks, logistic regression, and SVM. Second, we need to develop and evaluate new prediction models that include variables which capital structure theory has identified as relevant to rights issues.
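
The modeling setup is straightforward to reconstruct. Below is a sketch using scikit-learn's CART-style DecisionTreeClassifier as a stand-in for C5.0, since the authors used the C5.0 node in PASW Modeler 13 and C5.0 is not available in scikit-learn; the file name and column names are assumptions.

```python
# Sketch of the paper's setup: 84 financial indices in, rights-issue status
# out, 60/40 build/test split. CART stands in for C5.0 here; the CSV file
# and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("financial_indices.csv")      # assumed file of index records
X = df.drop(columns=["rights_issue"])          # growth/profitability/... indices
y = df["rights_issue"]                         # issued (1) or not issued (0)

# 60% for model building, 40% for testing, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=0, stratify=y)

tree = DecisionTreeClassifier(min_samples_leaf=20, random_state=0)
tree.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```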

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • Large amounts of data are now available for research and business sectors to extract knowledge from. These data can take the form of unstructured data such as audio, text, and images, and can be analyzed by deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through to the outputs. Its layer structure is well suited to image classification, comprising convolutional layers that generate feature maps, pooling layers that reduce the dimensionality of the feature maps, and fully connected layers that classify the extracted features. However, most classification models have been trained on online product images taken under controlled conditions, such as the apparel itself or a professional model wearing it. Such images may not train the classifier effectively for cases where one wants to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. We therefore propose to train the model with a runway apparel image dataset that captures mobility. This allows the classification model to be trained with far more variable data and enhances adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply transfer learning to our training network. As transfer learning in CNNs consists of pre-training and fine-tuning stages, we divide training into two steps. First, we pre-train our architecture with a large-scale dataset, ImageNet, which consists of 1.2 million images in 1,000 categories, including animals, plants, activities, materials, instrumentation, scenes, and foods. We use GoogLeNet as our main architecture, as it has achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. Because we could not find any previously published runway image dataset, we collected one from Google Image Search, attaining 2,426 images of 32 major fashion brands: Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. Our research offers several advantages over previous studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest training the model with images capturing all possible postures, which we denote as mobility, using our own runway apparel image dataset. Moreover, by applying transfer learning and using the checkpoints and parameters provided by TensorFlow Slim, we could reduce the time spent training the classifier to about 6 minutes per experiment. This model can be used in many business applications where the query image may be a runway image, product image, or street fashion image. Specifically, runway query images can support a mobile application service during fashion week to facilitate brand search; street-style query images can be classified during fashion editorial work to label the brand or style; and website query images can be processed by e-commerce multi-complex services that provide item information or recommend similar items.
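
The two-stage transfer-learning recipe (ImageNet pre-training, then fine-tuning on 32 brand classes) can be sketched as follows in PyTorch. The authors used GoogLeNet via TensorFlow Slim, so this is an analogous reconstruction rather than their code, and the dataset path is an assumption.

```python
# Minimal transfer-learning sketch mirroring the paper's two-stage setup:
# stage 1 loads ImageNet-pre-trained GoogLeNet weights, stage 2 fine-tunes
# on 32 brand classes. "runway_images/train" is a hypothetical path.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

model = models.googlenet(weights="IMAGENET1K_V1")   # stage 1: pre-trained weights
model.fc = nn.Linear(model.fc.in_features, 32)      # stage 2: 32 fashion brands

tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("runway_images/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:                       # one fine-tuning epoch
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```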

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.125-140 / 2013
  • We call a data set in which the number of records belonging to a certain class far outnumbers the records belonging to the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When evaluating the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records form the majority class and 'churn' records the minority class. Sensitivity measures the proportion of actual retentions correctly identified as such; specificity measures the proportion of churns correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to low specificity. Much previous research on imbalanced data sets has employed the 'oversampling' technique, in which members of the minority class are sampled more heavily than those of the majority class to produce a relatively balanced data set. When a classification model is constructed on this oversampled balanced data set, specificity can be improved but sensitivity decreases. In this research, we developed a hybrid model of a support vector machine (SVM), an artificial neural network (ANN), and a decision tree that improves specificity while maintaining sensitivity; we named it the 'hybrid SVM model'. Construction and prediction proceed as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. The SVM_I and ANN_I models are constructed on the imbalanced data set, and the SVM_B model on the balanced data set. SVM_I is superior in sensitivity and SVM_B in specificity. For a record on which SVM_I and SVM_B make the same prediction, that prediction becomes the final solution. If they disagree, the final solution is determined by discrimination rules obtained from the ANN and the decision tree: for records on which SVM_I and SVM_B make different predictions, a decision tree is constructed using the ANN_I output value as input and the actual retention or churn as the target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ≥ 0.285, THEN Final Solution = Churn'. The threshold 0.285 is the value optimized for the data used in this research; what we present here is the structure, or framework, of the hybrid SVM model, not a specific threshold value, so the threshold in the discrimination rules can be changed to suit the data. To evaluate the performance of the hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, better than that of either SVM_I or SVM_B. Worth noting are its sensitivity of 95.02% and specificity of 69.24%: the sensitivity of SVM_I is 94.65% and the specificity of SVM_B is 67.00%, so the hybrid SVM model developed in this research improves on the specificity of SVM_B while maintaining the sensitivity of SVM_I.
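
The hybrid decision logic is explicit enough to sketch directly: the two SVMs vote, and disagreements fall through to the ANN-based discrimination rule. A minimal sketch follows, assuming scikit-learn-style fitted models; using predict_proba as the "ANN_I output value" is an assumption, and 0.285 is the paper's data-specific threshold.

```python
# Hybrid SVM decision logic: SVM_I (trained on the imbalanced set) and
# SVM_B (trained on the oversampled balanced set) vote; disagreements are
# resolved by the ANN_I threshold rule from the paper.
import numpy as np

def hybrid_predict(svm_i, svm_b, ann_i, X, threshold=0.285):
    """Return 1 for churn, 0 for retention, following the paper's rules."""
    p_i = svm_i.predict(X)                  # superior in sensitivity
    p_b = svm_b.predict(X)                  # superior in specificity
    out = np.where(p_i == p_b, p_i, -1)     # agreement becomes the final answer
    disagree = out == -1
    if disagree.any():
        # Discrimination rules: ANN_I output >= threshold -> Churn (1),
        # otherwise Retention (0).
        scores = ann_i.predict_proba(X[disagree])[:, 1]
        out[disagree] = (scores >= threshold).astype(int)
    return out
```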

Understanding User Motivations and Behavioral Process in Creating Video UGC: Focus on Theory of Implementation Intentions (Video UGC 제작 동기와 행위 과정에 관한 이해: 구현의도이론 (Theory of Implementation Intentions)의 적용을 중심으로)

  • Kim, Hyung-Jin;Song, Se-Min;Lee, Ho-Geun
    • Asia pacific journal of information systems / v.19 no.4 / pp.125-148 / 2009
  • UGC (User Generated Contents) is emerging as the center of e-business in the Web 2.0 era. The trend reflects the changing roles of users in the production and consumption of contents on websites and helps us understand the new strategies of websites such as web portals and social network sites. Nowadays, we consume contents created by other non-professional users for both utilitarian (e.g., knowledge) and hedonic (e.g., fun) values. Contents we produce ourselves (e.g., photos, videos) are also posted on websites so that our friends, family, and even the public can consume them. Non-professionals, who used to be a passive audience, are now creating contents and sharing their UGCs with others on the Web. Accessible media, tools, and applications have also reduced the difficulty and complexity of creating contents. Realizing that users create plenty of material that is very interesting to other people, media companies (i.e., web portals and social networking websites) are adjusting their strategies and business models accordingly. Increased demand for UGC may lead to website visits, which are the source of advertising revenue; media companies therefore put more effort into making their websites open platforms where UGCs can be created and shared among users without technical and methodological difficulty. Many websites have adopted new technologies such as RSS and open APIs, and some have even restructured their web pages so that UGC is exposed more often to more visitors. This mainstream position of UGC indicates that acquiring more UGC and supporting participating users have become important to media companies. Although these companies need to understand why general users have shown increasing interest in creating and posting contents and what matters to them in the production process, few research results address these issues, and the behavioral process of creating video UGC has not been explored enough to be fully understood. With a solid theoretical background (the theory of implementation intentions), parts of our proposed research model mirror the process of user behavior in creating video contents, consisting of intention to upload, intention to edit, edit, and upload. In addition, to explain how those behavioral intentions develop, we investigated the influence of antecedents from three motivational perspectives: intrinsic, editing-software-oriented, and website-network-effect-oriented. First, from the intrinsic motivation perspective, we studied the roles of self-expression, enjoyment, and social attention in forming the intention to edit with preferred editing software and the intention to upload video contents to preferred websites. Second, we explored the role of editing software for non-professionals, in terms of how it makes the production process easier and how useful it is in that process. Finally, from the website-characteristic perspective, we investigated a website's network externality as an antecedent of users' intention to upload to their preferred website; the rationale is that posting UGC is a basically social behavior, so users prefer a website with a high level of network externality for uploading contents. This study adopted a longitudinal research design, emailing respondents twice with different questionnaires. Guided by an invitation email with a link to the web survey page, respondents answered most questions, except edit and upload, in the first survey. They were asked to name the UGC editing software they mainly used and the website to which they preferred to upload edited contents, and then answered the related questions; for example, before answering the network externality questions, each respondent had to declare the website to which they would be willing to upload. At the end of the first survey, we asked whether they agreed to participate in a follow-up survey a month later. Over twenty days, 333 complete responses were gathered in the first survey. One month later, we emailed those respondents to ask for participation in the second survey, and 185 of the 333 (about 56 percent) answered. Personalized questionnaires reminded them of the editing software and website they had reported in the first survey, and they reported the degree of editing with that software and of uploading video contents to that website over the past month. All participants in the two surveys received book exchange tickets (about 5,000-10,000 Korean Won) according to the frequency of their participation. PLS analysis shows that user behavior in creating video contents is well explained by the theory of implementation intentions. Intention to upload significantly influences intention to edit in the process of accomplishing the goal behavior, upload. These relationships reveal the behavioral process, previously unclear, by which users create video contents for uploading, and they highlight the important role of editing in that process. Regarding intrinsic motivations, the results show that users are likely to edit their own video contents in order to express intrinsic traits such as thoughts and feelings, and that their intention to upload to a preferred website is formed because they want to attract attention from others through contents reflecting themselves. This corresponds well to the role of the website characteristic, network externality: based on the PLS results, a website's network effect significantly influences users' intention to upload to the preferred website, indicating that users motivated by social attention are likely to upload their video UGCs to a website whose network is large enough to realize those motivations easily. Finally, regarding editing-software-oriented motivations, making the exclusively provided editing software more user-friendly (i.e., ease of use, usefulness) plays an important role in forming users' intention to edit. Our research contributes to both academic scholars and professionals. For researchers, our results show that the theory of implementation intentions applies well to the video UGC context and is very useful for explaining the relationship between implementation intentions and goal behaviors. With the theory, this study theoretically and empirically confirmed that editing is a distinct and important behavior apart from uploading, and we tested the behavioral process of ordinary users in creating video UGCs, focusing on the significant motivational factors at each step. Parts of our research model are also rooted in solid theoretical backgrounds, such as the technology acceptance model and the theory of network externality, to explain the effects of UGC-related motivations. For practitioners, our results suggest that media companies restructure their websites so that users' needs for social interaction through UGC (e.g., self-expression, social attention) are well met. We also emphasize the strategic importance of a website's network size in leading non-professionals to upload video contents; such websites need to find ways to utilize network effects to acquire more UGC. Finally, we suggest that improvements to editing software be considered as a way to increase editing behavior, which is a very important step leading to UGC uploading.

SANET-CC : Zone IP Allocation Protocol for Offshore Networks (SANET-CC : 해상 네트워크를 위한 구역 IP 할당 프로토콜)

  • Bae, Kyoung Yul;Cho, Moon Ki
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.87-109 / 2020
  • Currently, thanks to major strides in wired and wireless communication technology, a variety of IT services are available on land, and this trend is leading to increasing demand for IT services on vessels at sea as well. Requests for services such as two-way digital data transmission, the Web, and apps are expected to rise to the level available on land. However, while a high-speed information communication network is easily accessible on land because it rests on fixed infrastructure such as APs and base stations, this is not the case at sea, so a radio-network-based voice communication service is usually used there. To solve this problem, an additional frequency for digital data exchange was allocated, and a ship ad-hoc network (SANET) that uses this frequency was proposed. Instead of satellite communication, which is costly to install and use, SANET was developed to provide various IP-based IT services to ships at sea. Connectivity between land base stations and ships is important in a SANET: to have this connection, a ship must be a member of the network with an IP address assigned. This paper proposes SANET-CC, a protocol that allows ships to be assigned their own IP addresses. SANET-CC propagates several non-overlapping IP address blocks through the entire network, from land base stations to ships, in the form of a tree. Ships obtain their own IP addresses through the exchange of simple request and response messages with land base stations or with M-ships that can allocate IP addresses. SANET-CC can therefore eliminate the IP collision prevention (Duplicate Address Detection) process and the network separation or integration processes caused by ship movement. Various simulations were performed to verify the applicability of this protocol to SANET, with the following outcomes. First, using SANET-CC, about 91% of the ships in the network were able to receive IP addresses under all circumstances, about 6 percentage points higher than in existing studies, and adjusting the variables to each port's environment may yield further improvement. Second, vessels received IP addresses in an average of 10 seconds regardless of conditions, a 50% decrease from the 20-second average of the previous study; moreover, considering that existing studies covered 50 to 200 vessels while this study covers 100 to 400, the efficiency gain may be even higher. Third, existing studies could not derive optimal values for the protocol variables because their results showed no consistent pattern across variables, meaning optimal values could not be set for each port under diverse environments; this paper, however, shows that the result values exhibit a consistent pattern, which is significant because the protocol can be applied to each port by adjusting the variable values. It was also confirmed that, regardless of the number of ships, IP allocation was most efficient, at about 96 percent, when the waiting time after an IP request was 75 ms, and that the tree structure could maintain a stable network configuration when the number of IPs exceeded 30,000. Fourth, this study can be used to design networks supporting intelligent maritime control systems and services offshore instead of satellite communication, and if LTE-M is deployed, it can also support various intelligent services.
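
The request/response allocation without Duplicate Address Detection follows from the non-overlapping address pools. Below is a toy sketch of that idea; the message format, class names, and block sizes are invented for illustration and are not the protocol's actual wire format.

```python
# Toy sketch of SANET-CC-style tree allocation: a land base station holds a
# delegated, non-overlapping address block and answers ship requests; since
# pools never overlap, no Duplicate Address Detection is needed.
from ipaddress import ip_network

class Allocator:
    """A node (land base station or M-ship) that can assign addresses."""
    def __init__(self, block: str):
        self.pool = list(ip_network(block).hosts())  # non-overlapping pool

    def handle_request(self, ship_id: str) -> dict:
        """Respond to a ship's IP request with the next free address."""
        addr = self.pool.pop(0)
        return {"ship": ship_id, "ip": str(addr)}

base = Allocator("10.10.0.0/28")        # block delegated to one base station
print(base.handle_request("ship-001"))  # {'ship': 'ship-001', 'ip': '10.10.0.1'}
```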

The Photography as Technological Aesthetics (데크놀로지 미학으로서의 사진)

  • Jin, Dong-Sun
    • Journal of Science of Art and Design / v.11 / pp.221-249 / 2007
  • Today, photography faces a crisis of identity and a dilemma of ontology arising from the digital imaging process in its new technological form. To say that the traditional photographic medium has changed the way we view the world and ourselves is perhaps an understatement: photography has transformed our essential understanding of reality. Photographic images are no longer regarded as true automatic recordings, innocent evidence, or a mirror of reality; rather, photography constructs the world for our entertainment, helping to create the comforting illusions by which we live. The recognition that photographs are constructions as much as reflections of reality is the basis of the actual condition of the contemporary photographic world, and it comes as a shock. This thesis examines the problems of photographic identity and the ontological crisis posed by digital photographic imagery, which permits controlled, regulated reproduction in the era of electronic simulation. Photography loses the special aesthetic status it held as true information and as evidence fixed in traditional film and paper, where it appeared both as a technological accuracy and as a medium-specific aesthetic. As a result, photography faces two crises: one of photographic ontology (the introduction of computerized digital images) and one of photographic epistemology (broader changes in ethics, knowledge, and culture). Taken together, these crises apparently threaten us with the death of photography, with the 'end' of photography and the culture it sustains. The thesis considers the dilemma of photography's ontology and epistemology, especially the automatic index and the digital code, from the standpoint of the medium's origin, meaning, and identity as a technological medium. It focuses in particular on the presence of analog imagery, grounded in the nature of the material world, and the presence of digital imagery, grounded in the cultural situations of our society. It also examines how the main issues in the history of photography have concentrated on ontological arguments since the medium's discovery in 1839. Photography has never been a single static technological form; rather, its nearly two centuries of development have been marked by numerous competing technological innovations and self-revolutions on both sides. The thesis examines recent accounts of photography by analyzing the medium's concept, meaning, and identity across film-based and digital-based images, from the aspects of photographic ontology and epistemology. Its structure is fairly straightforward: it examines what appear to be two opposing views of photographic conditions and ontological situations, contrasting views that locate the value of photography in its fundamental characteristics as a medium. It also seeks a possible solution to the dilemma of photographic ontology by tracing the medium's origin from the early nineteenth century to the questions now being raised about the different (analog/digital) meanings of photography. Finally, the thesis concludes that the photographic ontological crisis reflects a paradoxical dynamic structure, unresolved since the origins of the medium itself; moreover, photography has no single ontological identity and cannot be understood as having a static identity or singular status within the dynamic field of technologies, practices, and images.
