
Dynamics of Technology Adoption in Markets Exhibiting Network Effects

  • Hur, Won-Chang
    • Asia Pacific Journal of Information Systems / v.20 no.1 / pp.127-140 / 2010
  • The benefit that a consumer derives from the use of a good often depends on the number of other consumers purchasing the same good or other compatible items. This property, known as network externality, is significant in many IT-related industries. Over the past few decades, network externalities have been recognized in the context of physical networks such as the telephone and railroad industries. Today, as many products are provided as systems consisting of compatible components, an appreciation of network externality is becoming increasingly important. Network externalities have been extensively studied by economists seeking to explain new phenomena resulting from rapid advances in ICT (Information and Communication Technology). As a result of these efforts, a new body of theories for the 'New Economy' has been proposed. The bottom-line theoretical argument of such theories is that technologies subject to network effects exhibit multiple equilibria and will finally lock into a monopoly, with one standard cornering the entire market. They emphasize that such "tippiness" is a typical characteristic of networked markets: multiple incompatible technologies rarely coexist, and the switch to a single, leading standard occurs suddenly. Moreover, it is argued that this standardization process is path dependent and the ultimate outcome is unpredictable. With incomplete information about other actors' preferences, there can be excess inertia: consumers who only moderately favor the change are themselves insufficiently motivated to start the bandwagon rolling, but would get on it once it did start to roll. This startup problem can prevent the adoption of any standard at all, even one preferred by everyone. Conversely, excess momentum is another possible outcome, for example, if a sponsoring firm uses low prices during early periods of diffusion. 
The aim of this paper is to analyze the dynamics of the adoption process in markets exhibiting network effects by focusing on two factors: switching and agent heterogeneity. Switching is an important factor that should be considered in analyzing the adoption process. An agent's switching invokes switching by other adopters, bringing about a positive feedback process that can significantly complicate the adoption process. Agent heterogeneity also plays an important role in shaping the early development of the adoption process, which has a significant impact on its later development. The effects of these two factors are analyzed by developing an agent-based model (ABM), a computer-based simulation methodology that offers many advantages over traditional analytical approaches. The model is designed such that agents have diverse preferences regarding technology and are allowed to switch their previous choice. The simulation results showed that adoption processes in a market exhibiting network effects are significantly affected by the distribution of agents and the occurrence of switching. In particular, it is found that both weak heterogeneity and strong network effects cause agents to start switching early, which expedites the emergence of 'lock-in.' When network effects are strong, agents are easily affected by changes in early market shares. This causes agents to switch earlier and in turn speeds up the market's tipping. The same effect is found in the case of highly homogeneous agents. When agents are highly homogeneous, the market starts to tip toward one technology rapidly, and its choice is not always consistent with the population's initial inclination. Increased volatility and faster lock-in increase the possibility that the market will reach an unexpected outcome. 
The primary contributions of this study are the elucidation of the role of parameters characterizing the market in the development of the lock-in process, and the identification of conditions under which such unexpected outcomes occur.
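The switching-plus-network-effect dynamics described above can be illustrated with a minimal agent-based sketch. This is not the paper's actual model: the utility form, parameter values, and synchronous update rule are illustrative assumptions. Each agent's utility for a technology is its intrinsic preference plus a network term proportional to current market share, and agents may switch every period.

```python
import random

def simulate_adoption(n_agents=200, steps=100, network_strength=3.0,
                      heterogeneity=1.0, seed=0):
    """Toy adoption model with network effects. Each step, every agent
    re-chooses technology A or B based on intrinsic preference plus a
    network-effect term tied to the share of A at the start of the step.
    Returns the share of A over time."""
    rng = random.Random(seed)
    # Intrinsic preference for A over B; the spread models heterogeneity.
    prefs = [rng.gauss(0.0, heterogeneity) for _ in range(n_agents)]
    choices = [rng.choice([0, 1]) for _ in range(n_agents)]  # 1 = A, 0 = B
    shares = []
    for _ in range(steps):
        share_a = sum(choices) / n_agents  # fixed for this step
        for i, p in enumerate(prefs):
            # Utility difference (A minus B); switching allowed every step.
            du = p + network_strength * (2 * share_a - 1)
            choices[i] = 1 if du > 0 else 0
        shares.append(sum(choices) / n_agents)
    return shares

shares = simulate_adoption()
print(round(shares[-1], 3))  # final share of technology A
```

With a strong network term the share is pushed away from the 50/50 split toward one technology, mimicking tipping; raising `heterogeneity` (more dispersed preferences) slows or softens the lock-in, in line with the paper's qualitative finding.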

Regeneration of a defective Railroad Surface for defect detection with Deep Convolution Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho;Han, Seokmin
    • Journal of Internet Computing and Services / v.21 no.6 / pp.23-31 / 2020
  • This study was carried out to generate various images of railroad surfaces with random defects as training data to improve defect detection. Defects on the surface of railroads are caused by various factors, such as friction between track binding devices and adjacent tracks, and can lead to accidents such as broken rails, so railroad maintenance for defects is necessary. Therefore, various studies on defect detection and inspection using image processing or machine learning on railway surface images have been conducted to automate railroad inspection and to reduce railroad maintenance costs. In general, the performance of image processing analysis methods and machine learning techniques is affected by the quantity and quality of data. For this reason, some studies require specific devices or vehicles that acquire images of the track surface at regular intervals to obtain a database of various railway surface images. In contrast, in this study, in order to reduce the operating cost of image acquisition, we constructed a 'Defective Railroad Surface Regeneration Model' by applying methods presented in related studies of the Generative Adversarial Network (GAN). Thus, we aimed to detect defects on the railroad surface even without a dedicated database. The constructed model is designed to learn to generate railroad surfaces by combining different railroad surface textures with the original surface, considering the ground truth of the railroad defects. The generated images of the railroad surface were used as training data for a defect detection network based on a Fully Convolutional Network (FCN). To validate its performance, we clustered and divided the railroad data into three subsets: one subset of original railroad texture images and the remaining two subsets of other railroad surface texture images. In the first experiment, we used only original texture images as the training set for the defect detection model. 
In the second experiment, we trained on generated images produced by combining the original images with a few railroad textures from the other images. Each defect detection model was evaluated in terms of intersection over union (IoU) and F1-score against the ground truths. As a result, the scores increased by about 10~15% when the generated images were used, compared to the case in which only the original images were used. This shows that it is possible to detect defects using the existing data and a few different texture images, even for railroad surface images for which no dedicated training database has been constructed.
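The evaluation step can be made concrete with a short sketch of how IoU and F1 are typically computed for binary defect masks. The function name and the toy masks below are illustrative, not taken from the paper.

```python
import numpy as np

def iou_and_f1(pred, gt):
    """Intersection over union and F1 for binary masks.
    pred and gt are boolean arrays of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0
    tp = inter                                  # true positives
    fp = np.logical_and(pred, ~gt).sum()        # false positives
    fn = np.logical_and(~pred, gt).sum()        # false negatives
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return iou, f1

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(iou_and_f1(pred, gt))
```

For binary masks the two measures are monotonically related (F1 = 2·IoU / (1 + IoU)), which is why papers often report both moving in the same direction.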

Operation Measures of Sea Fog Observation Network for Inshore Route Marine Traffic Safety (연안항로 해상교통안전을 위한 해무관측망 운영방안에 관한 연구)

  • Joo-Young Lee;Kuk-Jin Kim;Yeong-Tae Son
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.2 / pp.188-196 / 2023
  • Among marine accidents caused by bad weather, visibility restrictions caused by sea fog lead to accidents such as grounding and hull damage, together with the casualties those accidents entail, and they continue to occur every year. In addition, low visibility at sea is emerging as a social problem: passenger ships are collectively delayed and controlled even when conditions differ between regions, causing considerable inconvenience to island residents who depend on them for transportation. Moreover, such control measures are problematic because they cannot be objectively quantified, owing to regional deviations and observation criteria that differ from person to person. Currently, the VTS of each port controls the operation of ships when the visibility distance is less than 1 km, and in this case objective data collection is limited to the extent that the assessment of sea fog depends on a visibility meter or visual observation. The government is building marine weather signal signs and sea fog observation networks for sea fog detection and prediction as part of addressing these obstacles to marine traffic safety, but the system for observing locally occurring sea fog remains highly insufficient in practice. Accordingly, this paper examines domestic and foreign policy trends aimed at solving the social problems caused by low visibility at sea and provides basic data on the need for government support to ensure maritime traffic safety, based on a factual investigation and analysis of those social problems. It also aims to establish a more stable maritime traffic operation system by blocking, in advance, the marine safety risks that may arise from sea fog.

The CH3CHO Removal Characteristics of Lightweight Aggregate Concrete with TiO2 Spreaded by Low Temperature Firing using Sol-gel Method (Sol-gel법으로 이산화티탄(TiO2)을 저온소성 도포시킨 경량골재콘크리트의 아세트알데히드(CH3CHO) 제거 특성)

  • Lee, Seung Han;Yeo, In Dong;Jung, Yong Wook;Jang, Suk Soo
    • KSCE Journal of Civil and Environmental Engineering Research / v.31 no.2A / pp.129-136 / 2011
  • Recently, studies on functional concrete with a photocatalytic material such as $TiO_2$ have actively been carried out in order to remove air pollutants. In those studies, $TiO_2$ is applied either by mixing it directly into the concrete or by coating a suspension on the surface. In terms of effectiveness relative to the amount of $TiO_2$ used, the former process is less effective than the latter, so direct coating of $TiO_2$ on the material's surface is more commonly used. Surface coating, however, requires heat treatment above $400^{\circ}C$ to stimulate the activation and adhesion of the photocatalyst. Such heat treatment dehydrates and shrinks the hydration products in concrete and causes cracking. This study produces $TiO_2$ by the sol-gel method, which enables coating with a low-temperature treatment, applies it to perlite-based lightweight aggregate concrete fired at low temperature, and evaluates its coating performance. In addition, the perlite is divided into two size classes, 2.5 mm to 5.0 mm and more than 5.0 mm, in order to find out whether the removal characteristics of $CH_3CHO$ are affected by perlite size, mixing method and ratio with $TiO_2$, and elapsed time. The results of this experiment show that although the $TiO_2$ produced by the sol-gel method is fired at $120^{\circ}C$, it maintains a high coating rate in the XRF (X-ray fluorescence) quantitative analysis, which gives $TiO_2$ 38 percent, $SiO_2$ 29 percent, and CaO 18 percent. For perlite sized from 2.5 mm to 5.0 mm, the removal of $CH_3CHO$ by the low-temperature-fired lightweight concrete is about 20 percent higher when sol-gel $TiO_2$ is coated on the surface at 7 percent: the removal rate is 94 percent, compared with 72 percent when $TiO_2$ is mixed in at 10 percent. 
For perlite larger than 5.0 mm, the removal rate of $CH_3CHO$ when $TiO_2$ is mixed in at 10 percent is 69 percent, similar to the previous case. This suggests that perlite size has little effect on the removal rate of $CH_3CHO$. In terms of elapsed time, removal is most apparent at the early stage: the average removal rate over the first 10 hours accounts for 84 percent of that over 20 hours.

Development on Early Warning System about Technology Leakage of Small and Medium Enterprises (중소기업 기술 유출에 대한 조기경보시스템 개발에 대한 연구)

  • Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.143-159 / 2017
  • Due to the rapid development of IT in recent years, the leakage not only of personal information but also of the key technologies and information that companies hold is becoming an important issue. For an enterprise, its core technology is vital both for survival and for a sustained competitive advantage, and there have recently been many cases of technology infringement. Technology leaks not only cause tremendous financial losses, such as falling stock prices, but also damage corporate reputation and delay corporate development. For SMEs, where core technology is an even more important part of the enterprise than for large corporations, preparation against technology leakage can be seen as an indispensable factor in the firm's existence. As the necessity and importance of Information Security Management (ISM) emerge, it is necessary to check and prepare for the threat of technology infringement early in the enterprise. Nevertheless, about 90% of previous studies are limited to proposing policy alternatives; as research methods, literature analysis accounts for 76%, while empirical and statistical analysis accounts for a relatively low 16%. For this reason, it is necessary to study management and prediction models for preventing technology leakage that fit the characteristics of SMEs. In this study, before the empirical analysis, we divided the influencing factors identified in many previous studies into technical characteristics, from a technology-value perspective, and organizational factors, from a technology-control perspective. A total of 12 related variables were selected for the two factors, and the analysis was performed with these variables. 
In this study, we use three years of data from the 'Small and Medium Enterprise Technical Statistics Survey' conducted by the Small and Medium Business Administration (SMBA). The analysis data cover 30 industries based on the 2-digit KSIC classification, and the number of companies affected by technology leakage is 415 over the 3 years. From these data, we drew random samples from the same KSIC industry in the same year and prepared 1:1 matched samples of affected companies (n = 415) and unaffected firms (n = 415) for comparison. We conduct an empirical analysis to search for factors influencing technology leakage and propose an early warning system built through data mining. Specifically, based on the SMBA questionnaire survey of SMEs, we classified the factors that affect the technology leakage of SMEs into two groups (technology characteristics and organization characteristics). We then propose a model that signals the possibility of technology infringement using the Support Vector Machine (SVM), one of various data mining techniques, based on the factors proven through statistical analysis. Unlike previous studies, this study covers cases from various industries over several years, and an artificial intelligence model was developed in the process. In addition, since the factors are derived empirically from actual cases of SME technology leakage, it will be possible to suggest to policy makers which companies should be managed from the viewpoint of technology protection. Finally, it is expected that the early warning model on the possibility of technology leakage proposed in this study will provide an opportunity for enterprises and the government to prevent technology leakage in advance.
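A hedged sketch of the modeling step: train an SVM on a 1:1 matched sample and use the predicted probability as an early-warning score. Everything below (the use of scikit-learn, the synthetic 12-variable data, the RBF kernel and its settings) is an illustrative assumption, not the authors' actual specification or data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 415  # affected firms, matched 1:1 with unaffected firms

# Synthetic stand-ins for the 12 technology/organization variables;
# affected firms are shifted slightly so the classes are separable.
X_pos = rng.normal(0.4, 1.0, size=(n, 12))
X_neg = rng.normal(0.0, 1.0, size=(n, 12))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * n + [0] * n)  # 1 = leakage occurred

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)

# The predicted probability of the leakage class serves as the warning score.
scores = model.predict_proba(X_te)[:, 1]
acc = model.score(X_te, y_te)
print(round(acc, 3))
```

In practice, the score threshold at which a firm is flagged would be tuned to the relative cost of false alarms versus missed leakage cases.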

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization. This problem has therefore raised the interest of many researchers in managing such information, and it requires professionals capable of classifying relevant information; hence, text classification is introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. In the text classification field, different kinds of techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most attempts have been based on proposing a new algorithm or modifying an existing one; this kind of research can be said to have already reached its limits for further improvement. In this study, rather than proposing or modifying an algorithm, we focus on finding a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets most of the time contain noise, or in other words noisy data, which can affect the decisions made by the classifiers built from them. 
In this study, we consider that data from different domains, i.e., heterogeneous data, may have noise characteristics that can be utilized in the classification process. To build a classifier, a machine learning algorithm is run on the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary included in the documents. If the viewpoints of the training data and the target data differ, the features may appear different between the two. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various kinds of sources are likely formatted differently, which causes difficulties for traditional machine learning algorithms because they are not designed to recognize different types of data representation at one time and to combine them in the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier. We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making. 
In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
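The selection idea — keep only the unlabeled documents the current model labels confidently — can be sketched with a simple self-training step. This is a generic illustration, not RSESLA itself: the actual method uses multi-view rule selection, and the toy corpus, confidence threshold, and classifier choice below are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny toy corpus: labeled news-like docs plus unlabeled docs
# from a different (heterogeneous) source.
labeled = ["stock market rises", "team wins match",
           "shares fall sharply", "player scores goal"]
labels = np.array([0, 1, 0, 1])  # 0 = finance, 1 = sports
unlabeled = ["market shares climb", "goal in final match",
             "prices fall on market", "team player match"]

vec = TfidfVectorizer().fit(labeled + unlabeled)
X_l, X_u = vec.transform(labeled), vec.transform(unlabeled)

# Initial classifier trained on labeled data only.
clf = LogisticRegression().fit(X_l, labels)

# Pseudo-label the unlabeled docs, keep only confident ones.
proba = clf.predict_proba(X_u)
keep = proba.max(axis=1) >= 0.55  # illustrative confidence threshold
pseudo = proba.argmax(axis=1)

# Retrain on labeled data plus the selected pseudo-labeled documents.
X_aug = np.vstack([X_l.toarray(), X_u[keep].toarray()])
y_aug = np.concatenate([labels, pseudo[keep]])
clf2 = LogisticRegression().fit(X_aug, y_aug)
print(clf2.predict(vec.transform(["market prices fall"])))
```

The threshold plays the role of the "contributing documents" filter: raising it admits fewer but cleaner pseudo-labels, lowering it admits more but noisier ones.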

Limitations on Exclusive Rights of Authors for Library Reprography : A Comparative Examination of the Draft Revision of Korean Copyright Law with the New American Copyright Act of 1976 (저작권법에 준한 도서관봉사에 관한 연구 -미국과 한국의 저자재산권의 제한규정을 중시으로-)

  • 김향신
    • Journal of Korean Library and Information Science Society / v.11 / pp.69-99 / 1984
  • A dramatic development in the new technology of copying materials has presented us with massive problems in reconciling the conflicts between copyright owners and potential users of copyrighted materials. Adaptation to this changing condition led some countries to revise their copyright laws, as in the U.S. in 1976 and in Korea in 1984, with a view to merging with the international or universal copyright conventions in the future. Copyright, defined as exclusive rights given to copyright owners, aims to secure a fair return for an author's creative labor and to stimulate artistic creativity for the general public good. The exclusive rights in copyrightable matter, generally covering reproduction, preparation of derivative works, public distribution, public performance, and public display, are limited by fair use for scholarship and criticism and by library reproduction for preservation and interlibrary loan. These limitations on the exclusive rights concern all aspects of library services and place a great burden on librarians' daily duty to balance the rights of creators against the needs of library patrons. Fair use, as one of these limitations, has been coupled with the enormous growth of new technology and has extended from xerography to online database systems. The implementation of fair use and library reprography under Korean law in local practice is examined on the basis of the new American copyright act of 1976. Under the draft revision of Korean law, librarians will face many potential problems, as summarized below. 1. Because the new provision of 'lifetime plus 50 years' will tie up substantial bodies of material longer than the old law, librarians would need permissions from the owners until that date and should pay attention to the author's death date. 2. Because the copyright can be sold, distributed, given to heirs, or donated, in whole or in part, librarians should trace the heirs and other secondary owners. 
In the case of a derivative work, this is a real problem. 3. Since a work is protected from the moment of its creation, the coverage of copyrightable matter would extend to both published and unpublished works, and librarians' workload would be heavier. Without copyright registration, no one can be certain that a work is in the public domain; therefore, librarians will need to check with an authority. 4. In implementing the limitations on exclusive rights, fair use, and library reproduction for interlibrary loan, there can be no substantial aggregate use and no systematic distribution of multiple copies. Therefore, librarians should not substitute reproductions for subscriptions or purchases. 5. For interlibrary loan by photocopying, librarians should understand the procedure for royalty payment. 6. Compulsory licenses should be understood by librarians. 7. Because the draft revision of Korean law is a reciprocal treaty, librarians should attend to other countries' copyright laws to protect foreign authors under Korean law. In order to solve the above problems, some suggestions are presented below. 1. That a copyright clearinghouse or central agency be established as a centralized royalty payment mechanism. 2. That the Korean Library Association establish a committee on copyright. 3. That the Korean Library Association propose guidelines for each occasion, e.g., for interlibrary loan, books and periodicals, music, etc. 4. That the Korean government establish a copyright office or an official organization for copyright control other than the copyright committee already organized by the government. 5. That the Korean Library Association establish educational programs on copyright for librarians through seminars or articles in its magazines. 6. That individual libraries provide librarians' copyright kits. 7. That school libraries distribute subject bibliographies on copyright law to teachers. 
However, librarians should keep in mind that the limitations on exclusive rights are not an exemption for unrestricted library reprography but a means of convenient access to library resources.

Pulmonary Oxalosis Caused by Aspergillus Niger Infection (Aspergillus Niger 감염에 의한 폐옥살산염 1예)

  • Cho, Gye Jung;Ju, Jin Young;Park, Kyung Hwa;Choi, Yoo-Duk;Kim, Kyu Sik;Kim, Yu Il;Kim, Soo-Ok;Lim, Sung-Chul;Kim, Young-Chul;Park, Kyung-Ok;Nam, Jong-Hee;Yoon, Woong
    • Tuberculosis and Respiratory Diseases / v.55 no.5 / pp.516-521 / 2003
  • Aspergillus species produce metabolic products that play a significant role in destructive processes in the lung. We experienced a case of chronic necrotizing pulmonary aspergillosis caused by Aspergillus niger infection, in which numerous calcium oxalate crystals were found in the necrotic lung tissue. A 46-year-old man with a history of pulmonary tuberculosis presented with high fever, intermittent hemoptysis, and pulmonary infiltrations with a cavity on the chest radiograph. Despite treatment with several antibiotics and anti-tuberculosis regimens, the high fever continued. Sputum cultures repeatedly yielded A. niger, and intravenous amphotericin B was introduced. The pathological specimen obtained by transbronchial lung biopsy revealed numerous calcium oxalate crystals against a background of acute inflammatory exudates, with no identification of the organism. Intravenous amphotericin B was continued to a total dose of 1,600 mg, by which time he was afebrile, although the intermittent hemoptysis continued. On the $63^{rd}$ hospital day, massive hemoptysis (about 800 mL) developed, which could not be controlled despite embolization of the left bronchial artery. He died of respiratory failure the next day. It is believed that oxalic acid produced by A. niger was the main cause of the patient's pulmonary injury and the ensuing massive hemoptysis.

A Study on the Temporomandibular Joint Disorder and School Life Stress of High School Student by Department (계열별 남자고등학생의 학교생활스트레스와 측두하악장애에 관한 연구)

  • Lee, Jung-Hwa;Choi, Jung-Mi
    • Journal of Dental Hygiene Science / v.7 no.3 / pp.179-185 / 2007
  • The purpose of this study, targeting male high school students in the liberal arts and industrial departments of Daegu metropolitan city, is to obtain basic data necessary for developing a dental educational program and for the prevention and treatment of temporomandibular joint disorder, by observing the occurrence of temporomandibular joint disorder, its contributing factors, and its relationship to school life stress. The results are as follows: 1. The rates of temporomandibular joint disorder symptoms among the students were joint noise 61.8%, joint dislocation 6.9%, sharp pain on chewing 47.5%, pain when not chewing 29.8%, lockjaw 11.3%, and headache 40.4%. 2. Among the contributing factors, the causes of joint noise were teeth clenching and lip and cheek clenching. For pain on chewing, teeth clenching, unilateral chewing, over-chewing, lip clenching, and sideways sleeping showed differences (P < 0.01). For pain when not chewing, teeth clenching, bruxism, unilateral chewing, and lip and cheek clenching were similar, and for lockjaw, teeth clenching, bruxism, and sideways sleeping showed differences. For headache, the contributing factors were all the bad habits mentioned above except sideways sleeping (P < 0.01, P < 0.05). 3. The rate of experiencing oral and maxillofacial injury related to temporomandibular joint disorder was 13.4% in the industrial department and 19.6% in liberal arts. The causes of injury in liberal arts were exercise 26.8%, others 24.4%, and falls 19.5%; in the industrial department, exercise 44.4%, falls 22.2%, and others 14.9%. Treatment experience was 5.0% in the industrial department and 2.9% in liberal arts. 
The medical institutions visited were, for liberal arts, dental clinics 50% and orthopedics 50%; for the industrial department, orthopedics 40%, oriental medicine clinics 30%, and dental clinics 30%. 4. In the occurrence of temporomandibular joint disorder, there was no difference by grade or educational background, while pain on chewing and when not chewing showed significant differences (P < 0.01). 5. School life stress was generally higher in liberal arts than in the industrial department, owing to school records; its score was $3.75{\pm}1.14$ in liberal arts and $3.01{\pm}1.23$ in the industrial department. 6. School record stress, school life stress, stress from problems with teachers, chewing/non-chewing pain of temporomandibular joint disorder, and joint noise showed significant correlations (P < 0.01, P < 0.05).

Effect of Storage Conditions on the Dormancy Release and the Induction of Secondary Dormancy in Weed Seeds (저장조건이 잡초종자의 휴면타파와 이차휴면 유도에 미치는 효과)

  • Kim, J.S.;Hwang, I.T.;Cho, K.Y.
    • Korean Journal of Weed Science / v.16 no.3 / pp.200-209 / 1996
  • If a secondary dormancy is not induced when weed seeds whose dormancy has been broken under wet conditions are transferred to dry conditions, such a transfer is assumed to be an efficient method for keeping the seeds germinable as long as possible. To investigate its validity, two experiments were carried out on seeds of 9 weed species: one to find the most effective storage condition for breaking the dormancy of each species, and the other to determine whether germinability decreases upon transfer to dry storage. The dormancy of Chenopodium album and Stellaria aquatica was released well under dry conditions, but that of Echinochloa crus-galli var. oryzicola by soaking in water. The other weed species were released from dormancy by storage under wet conditions. When seeds stored under wet or soaking conditions were air-dried and then re-stored at room or low temperature, no decrease in germinability that might cause trouble in practical use was observed, except for the seeds of Persicaria vulgaris. In the case of Persicaria vulgaris, the low germination after 3 months of storage seemed not to be caused by drying, because a decrease in its germinability was observed with increasing storage period under all of the storage conditions. In contrast, high germination was induced when seeds of Echinochloa crus-galli var. oryzicola, which had not germinated during storage at low temperature under wet conditions, were transferred to room temperature and dry conditions. These results suggest that this approach can be one of the efficient methods for keeping good germinability as long as possible in most weed seeds.
