• Title/Summary/Keyword: on-the-machine


THE EFFECTS OF SURFACE TREATMENTS ON SHEAR BOND STRENGTHS OF LIGHT-CURED AND CHEMICALLY CURED GLASS IONOMER CEMENTS TO ENAMEL (법랑질의 표면처리가 광중합형 및 화학중합형 글래스아이오노머 시멘트의 전단결합강도에 미치는 영향)

  • Shin, Kang-Seob; Lee, Ki-Soo / The Korean Journal of Orthodontics / v.25 no.2 s.49 / pp.223-233 / 1995
  • The purpose of this study was to evaluate the effects of surface conditioning with 10% polyacrylic acid, etching with 38% phosphoric acid, and polishing with a pumice slurry on the shear bond strengths of a light-cured glass ionomer cement, a chemically cured glass ionomer cement, and a composite resin to enamel, and to observe the failure patterns of bracket bonding. The shear bond strengths of the glass ionomer cements were compared with that of the composite resin. Metal brackets were bonded to extracted human bicuspids after the enamel surface treatments, the samples were immersed in a 37°C distilled water bath, shear bond strengths were measured on an Instron machine after 24 hours, and the debonded samples were scored with the adhesive remnant index. Scanning electron micrographs were taken of the enamel surfaces after the various treatments. The data were evaluated with ANOVA and Duncan's multiple range test, with the following results. 1. The shear bond strength of the light-cured glass ionomer cement was statistically higher than that of the chemically cured glass ionomer cement. 2. The shear bond strengths of the light-cured and chemically cured glass ionomer cements to enamel treated with 10% polyacrylic acid or 38% phosphoric acid were statistically higher than those to enamel polished with a pumice slurry. 3. In the scanning electron micrographs, enamel conditioned with 10% polyacrylic acid was slightly etched and cleaned, enamel etched with 38% phosphoric acid was severely etched, and enamel polished with a pumice slurry was irregularly scratched and not completely cleaned. 4. After debonding, the light-cured glass ionomer cement on enamel treated with 10% polyacrylic acid left less residual material on the enamel surface than the composite resin on enamel etched with 38% phosphoric acid. 5. There was no significant difference between the shear bond strength of the light-cured glass ionomer cement to enamel treated with 10% polyacrylic acid and that of the composite resin to enamel etched with 38% phosphoric acid.
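The statistical comparison described above can be reproduced in outline. The sketch below is illustrative only: the bond-strength values are made up, and Tukey's HSD (available in SciPy) stands in for Duncan's multiple range test, which SciPy does not provide.

```python
# Illustrative sketch only: hypothetical shear bond strengths (MPa), not the study's data.
# One-way ANOVA followed by a pairwise post-hoc test; Tukey's HSD (SciPy >= 1.11)
# is used here as a common substitute for Duncan's multiple range test.
from scipy import stats

polyacrylic = [11.2, 10.8, 12.1, 11.5, 10.9]   # hypothetical values, 10% polyacrylic acid
phosphoric  = [12.0, 11.6, 12.4, 11.9, 12.2]   # hypothetical values, 38% phosphoric acid
pumice      = [ 6.1,  5.8,  6.5,  6.0,  5.7]   # hypothetical values, pumice slurry

f_stat, p_value = stats.f_oneway(polyacrylic, phosphoric, pumice)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

posthoc = stats.tukey_hsd(polyacrylic, phosphoric, pumice)
print(posthoc)
```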


Life Cycle Assessment (LCA) on Rice Production Systems: Comparison of Greenhouse Gases (GHGs) Emission on Conventional, Without Agricultural Chemical and Organic Farming (쌀 생산체계에 대한 영농방법별 전과정평가: 관행농, 무농약, 유기농법별 탄소배출량 비교)

  • Ryu, Jong-Hee; Kwon, Young-Rip; Kim, Gun-Yeob; Lee, Jong-Sik; Kim, Kye-Hoon; So, Kyu-Ho / Korean Journal of Soil Science and Fertilizer / v.45 no.6 / pp.1157-1163 / 2012
  • This study performed a comparative life cycle assessment (LCA) of three rice production systems in order to analyze differences in greenhouse gas (GHG) emissions and environmental impacts. The life cycle inventory (LCI) database was established using data obtained from interviews with conventional, pesticide-free ('without agricultural chemical'), and organic farms at Gunsan and Iksan, Jeonbuk province, in 2011. According to the LCI analysis, CO2 was emitted mostly from the fertilizer production process and the rice cropping phase, while CH4 and N2O were emitted almost entirely from the rice cultivation phase. The carbon footprint of producing 1 kg of unhulled rice in the conventional system was 1.01E+00 kg CO2-eq. kg^-1, the highest of the three systems; the carbon footprints of the without-agricultural-chemical and organic systems were 5.37E-01 kg CO2-eq. kg^-1 and 6.58E-01 kg CO2-eq. kg^-1, respectively. The without-agricultural-chemical system, whose input amounts were the smallest, had the lowest carbon footprint. Although the rice yield of organic farming was the lowest, its carbon footprint was lower than that of conventional farming because no compound fertilizer is used in organic farming. Compound fertilizer production and methane emission during rice cultivation were the main contributors to GHG emissions in the conventional and without-agricultural-chemical systems; in the organic system, the main contributors were fossil fuel use for machine operation and methane emission from the paddy field.
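The per-kilogram footprints quoted above come from aggregating upstream inputs and field emissions and dividing by yield. The sketch below shows that aggregation with purely illustrative numbers and assumed 100-year GWP factors; it is not the study's LCI database.

```python
# Minimal sketch (illustrative coefficients only, not the study's LCI data) of how a
# per-kilogram carbon footprint is aggregated from upstream inputs and field CH4/N2O
# emissions converted to CO2-equivalents.
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}   # assumed 100-year GWP factors

def carbon_footprint_per_kg(upstream_kg_co2eq, field_emissions_kg, rice_yield_kg):
    """Carbon footprint in kg CO2-eq. per kg of unhulled rice."""
    upstream = sum(upstream_kg_co2eq.values())                      # fertilizer, fuel, ...
    field = sum(GWP[gas] * amount for gas, amount in field_emissions_kg.items())
    return (upstream + field) / rice_yield_kg

# Hypothetical per-hectare example standing in for a conventional system.
upstream = {"compound_fertilizer": 1200.0, "fuel": 300.0, "pesticide": 80.0}  # kg CO2-eq.
field = {"CH4": 250.0, "N2O": 2.0}                                            # kg of gas
print(round(carbon_footprint_per_kg(upstream, field, rice_yield_kg=8000.0), 2))
```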

Dose Distribution of Co-60 Photon Beam in Total Body Irradiation (Co-60에 의한 전신조사시 선량분포)

  • Kang, Wee-Saing / Progress in Medical Physics / v.2 no.2 / pp.109-120 / 1991
  • Total body irradiation (TBI) is performed to eradicate malignant cells in the bone marrow of patients who are to receive bone marrow transplantation. The field size of a linear accelerator or cobalt teletherapy unit in normal geometry for routine techniques is too small to cover the whole body of a patient, so a special method to cover the whole body must be developed. Because room conditions and machine design are not universal, each hospital may develop its own TBI technique. At Seoul National University Hospital, only a cobalt unit is currently available for TBI, because the source head of the unit can be tilted. When the head is tilted outward by 90°, the beam direction is horizontal and perpendicular to the opposite wall; the distance from the cobalt source to the wall is 319 cm. Provided that the distance from the wall to the midsagittal plane of a patient is 40 cm, the nominal field size at that plane (SCD 279 cm) is 122 cm × 122 cm, but the field size measured from the exposure profile was 130 cm × 129 cm, and the vertical profile was not symmetric. That field size is large enough to cover the whole body of a patient resting on a couch in a squatting posture. Assuming an average lateral patient width of 30 cm, the percent depth dose for SSD 264 cm and a nominal field size of 115.5 cm × 115.5 cm was measured with a plane-parallel chamber in a polystyrene phantom and was linear over the depth range 10-20 cm. For an anthropomorphic phantom 25 cm wide and 30 cm deep, the depth of dose maximum, surface dose, and depth of the 50% dose were 0.3 cm, 82%, and 16.9 cm, respectively. The dose profile on the beam axis for two opposing beams was uniform within 10% of the mid-depth dose. The tissue-phantom ratio with a reference depth of 15 cm for the maximum field size at SCD 279 cm was measured in a small polystyrene phantom and was linear over the depth range 10-20 cm. An anthropomorphic phantom with TLD chips inserted in holes on its largest coronal plane was irradiated bilaterally for 15 minutes in each direction, with the cobalt beam axis aligned with the cross line of the coronal plane and the contact surface of sections No. 27 and 28. When doses were normalized to the dose at mid-depth on the beam axis, doses in the head/neck, abdomen, and lower lung regions were within ±10% of the reference dose, but doses in the upper lung, shoulder, and pelvis regions were more than 10% below the reference dose; in particular, doses in the shoulder region were more than 30% below. From these results it can be concluded that, under the geometric conditions for cobalt-beam TBI at the SNUH radiotherapy department, compensators for the head/neck and lung shielding are not required, but boost irradiation to the shoulder is required.
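The 122 cm nominal field quoted for SCD 279 cm follows from simple similar-triangle projection of the collimator setting to the extended distance. The sketch below reproduces that geometry; the 80 cm reference SAD and 35 cm field setting are assumptions for illustration, not values taken from the paper.

```python
# Geometry sketch: the nominal field side at an extended source-to-chamber distance
# scales linearly with distance. The 80 cm reference SAD and 35 cm collimator setting
# are assumed for illustration; together they reproduce the ~122 cm field at SCD 279 cm.
def projected_field(side_at_ref_cm, ref_distance_cm, target_distance_cm):
    return side_at_ref_cm * target_distance_cm / ref_distance_cm

source_to_wall = 319.0          # cm, cobalt source to opposite wall
patient_offset = 40.0           # cm, wall to patient midsagittal plane
scd = source_to_wall - patient_offset
print(scd)                                           # 279.0
print(round(projected_field(35.0, 80.0, scd), 1))    # ~122.1 cm nominal field side
```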


Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong; Choi, Heung Sik / Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since the 1987 Black Monday crash, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel shows exceptionally low forecasting accuracy. We also suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher, buy volatility today; if it is lower, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic, because historical volatility values themselves cannot be traded, but the simulation results are still meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows +526.4%; MLE-based asymmetric E-GARCH shows -72% and SVR-based asymmetric E-GARCH shows +245.6%; MLE-based asymmetric GJR-GARCH shows -98.7% and SVR-based asymmetric GJR-GARCH shows +126.3%.
The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be examined in search of better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. The IVTS trading performance is unrealistic, since historical volatility values are used as the trading object. Exact forecasting of stock market volatility is essential for real trading as well as for asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
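As a rough illustration of the SVR-based GARCH idea, the sketch below regresses squared returns on their own lag and a lagged volatility proxy, mirroring the GARCH(1,1) recursion sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2. The feature construction, hyperparameters, and simulated returns are assumptions, not the authors' exact specification.

```python
# Minimal sketch of SVR-based GARCH(1,1) volatility forecasting (assumed setup, not the
# paper's exact specification): squared returns are regressed on the lagged squared
# return and a rolling-window volatility proxy standing in for sigma_{t-1}^2.
import numpy as np
from sklearn.svm import SVR

def svr_garch_forecast(returns, kernel="rbf", train_size=1187):
    r2 = returns ** 2
    proxy = np.convolve(r2, np.ones(5) / 5, mode="same")   # crude sigma^2 proxy
    X = np.column_stack([r2[:-1], proxy[:-1]])              # features at t-1
    y = r2[1:]                                               # target: squared return at t
    model = SVR(kernel=kernel, C=1.0, epsilon=1e-3)
    model.fit(X[:train_size], y[:train_size])
    return model.predict(X[train_size:])                     # one-step-ahead variance forecasts

# Example with simulated daily percent returns standing in for KOSPI 200 data.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 1.0, 1487)
print(svr_garch_forecast(returns)[:5])
```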

Influence analysis of Internet buzz to corporate performance : Individual stock price prediction using sentiment analysis of online news (온라인 언급이 기업 성과에 미치는 영향 분석 : 뉴스 감성분석을 통한 기업별 주가 예측)

  • Jeong, Ji Seon; Kim, Dong Sung; Kim, Jong Woo / Journal of Intelligence and Information Systems / v.21 no.4 / pp.37-51 / 2015
  • Due to the development of Internet technology and the rapid increase of Internet data, studies are actively being conducted on how to use and analyze Internet data for various purposes. In particular, many recent studies have applied text mining techniques to overcome the limitations of using only structured data. Among them are studies on sentiment analysis, which scores opinions based on the polarity (positive or negative) of the vocabulary or sentences in documents. As part of such work, this study tries to predict the ups and downs of companies' stock prices by performing sentiment analysis on online news about those companies. A variety of news about companies is produced online by different economic agents and is diffused quickly and accessed easily on the Internet. Based on the inefficient market hypothesis, we can therefore expect that news about an individual company can be used to predict fluctuations in its stock price if proper data analysis techniques are applied. However, since companies' areas of business activity differ, machine-learning analysis of text data should take the characteristics of each company into account. In addition, since news containing positive or negative information about certain companies also affects other companies and industry sectors in various ways, a separate analysis is necessary to predict each company's stock price. Therefore, this study attempts to predict changes in the stock prices of individual companies by applying sentiment analysis to online news data. Accordingly, the top companies in the KOSPI 200 were chosen as the subjects of the analysis, and two years of online news data for each company were collected from Naver, a representative domestic search portal service, and analyzed. In addition, considering that the same vocabulary can carry different meanings for different economic subjects, the study aims to improve performance by building a lexicon for each individual company and applying it to the analysis. As a result, prediction accuracy differs across companies, with an average accuracy of 56%. Comparing prediction accuracy across industry sectors, 'energy/chemical', 'consumer goods for living', and 'consumer discretionary' showed relatively higher stock price prediction accuracy, while sectors such as 'information technology' and 'shipbuilding/transportation' showed lower accuracy. Since only five representative companies were collected per industry, it is somewhat difficult to generalize, but it could be confirmed that prediction accuracy differs depending on the industry sector. At the individual company level, companies such as 'Kangwon Land', 'KT&G', and 'SK Innovation' showed relatively higher prediction accuracy, while companies such as 'Young Poong', 'LG', 'Samsung Life Insurance', and 'Doosan' had prediction accuracy below 50%.
In this paper, we predicted the stock price movements of individual companies using company-specific sentiment lexicons built from online news. Building on this, future work can improve stock price prediction accuracy by addressing the problem of unnecessary words being added to the sentiment dictionaries.
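A minimal sketch of the lexicon-based scoring described above follows. The company lexicon, polarity weights, and headlines are hypothetical; the study itself builds company-specific dictionaries from two years of Naver news.

```python
# Illustrative sketch (hypothetical lexicon and headlines, not the study's data) of
# lexicon-based sentiment scoring of daily news and a simple up/down call.
company_lexicon = {            # assumed polarity weights for one company
    "record profit": 1.0, "expansion": 0.6, "lawsuit": -0.8, "recall": -1.0,
}

def daily_sentiment(headlines):
    """Sum the polarity weights of lexicon terms appearing in the day's headlines."""
    score = 0.0
    for headline in headlines:
        text = headline.lower()
        score += sum(w for term, w in company_lexicon.items() if term in text)
    return score

def predict_direction(headlines, threshold=0.0):
    """Return 'up' if the aggregate news sentiment is positive, else 'down'."""
    return "up" if daily_sentiment(headlines) > threshold else "down"

print(predict_direction(["Company posts record profit", "Plant expansion announced"]))
print(predict_direction(["Regulator files lawsuit over product recall"]))
```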

The Research on the Life-safety Implementation using the Natural Light LED Lamp in the Disaster Prevention and Safety Management (방재안전 자연광 LED 조명을 이용한 생활안전 개선에 관한 연구)

  • Lee, Taeshik; Seok, Gumcheul; So, Yooseb; Choi, Byungshik; Kim, Jaekwon; Cho, Woncheol / Journal of Korean Society of Disaster and Security / v.9 no.2 / pp.53-62 / 2016
  • This paper presents a new method of using natural-light LED lamps to upgrade the lighting environment in the area of disaster prevention and safety management (PDSD), aiming to reduce deaths from suicide, infectious diseases, safety accidents, traffic accidents, crime, fire, and natural disasters, and to improve health, environment, and safety through the value of colored LED light. The research findings include the following: during three weeks in November 2016, natural-light PDSD LED lighting was installed, at the users' request, in the home bedrooms or living rooms of ten residents with depressive symptoms in northern Seoul (average residence 28.7 years, average age 67.5 years). Expectations of restored physical and mental stability were high (88%); physical and mental changes measured in the same way after one week and after three weeks showed a strong depression-relieving effect (84% in the first week, 90% in the third week and thereafter). We conclude that the participants with depression reported better sleep, reduced anxiety, and a greater sense of security and well-being. The PDSD LED lighting is expected to contribute in various areas: reducing suicides, increasing immunity against infectious diseases, reducing accidents caused by negligence, improving the safe operation of heavy vehicles at locations with the highest traffic accident incidence, improving the 'safe way home' function for community safety, alleviating the emotional anxiety of firefighters at fires, and improving the environment of control rooms during long decision-making sessions caused by natural disasters.

Function of the Korean String Indexing System for the Subject Catalog (주제목록을 위한 한국용어열색인 시스템의 기능)

  • Yoon Kooho / Journal of the Korean Society for Library and Information Science / v.15 / pp.225-266 / 1988
  • Various theories and techniques for the subject catalog have been developed since Charles Ammi Cutter first tried to formulate rules for the construction of subject headings in 1876. However, they do not seem appropriate for the Korean language, because the syntax and semantics of Korean differ from those of English and other European languages. This study therefore attempts to develop a new Korean subject indexing system, the Korean String Indexing System (KOSIS), in order to increase the use of subject catalogs. For this purpose, the advantages and disadvantages of the classed subject catalog and the alphabetical subject catalog, the typical subject catalogs in libraries, are investigated, and the most notable subject indexing systems, in particular PRECIS developed by the British National Bibliography, are reviewed and analyzed. KOSIS is a string indexing system based purely on the syntax and semantics of the Korean language, even though considerable principles of PRECIS are applied to it. The outlines of KOSIS are as follows: 1) KOSIS is based on the fundamentals of natural language and an ingenious conjunction of human indexing skills and computer capabilities. 2) KOSIS is a string indexing system based on the 'principle of context-dependency.' A string of terms organized according to this principle shows remarkable affinity with certain patterns of words in ordinary discourse; from that point onward, natural language rather than classificatory terms becomes the basic model for indexing schemes. 3) KOSIS uses 24 role operators. One or more operators should be allocated to the index string, which is organized manually by the indexer's intellectual work, in order to establish the most explicit syntactic relationships among the index terms. 4) Traditionally, a single-line entry format is used, in which a subject heading or index entry is presented as a single sequence of words consisting of the entry terms plus, in some cases, an extra qualifying term or phrase. KOSIS instead employs a two-line entry format with three basic positions for the production of index entries: the 'lead' serves as the user's access point, the 'display' contains those terms which are context-dependent on the lead, and the 'qualifier' sets the lead term into its wider context. 5) Each KOSIS entry is co-extensive with the initial subject statement prepared by the indexer, since it displays all the subject specificities. Compound terms are always presented in their natural language order, and inverted headings are not produced in KOSIS; consequently, the precision ratio of information retrieval can be increased. 6) KOSIS uses 5 relational codes for the system of references among semantically related terms. Semantically related terms are handled by a different set of routines, leading to the production of 'see' and 'see also' references. 7) KOSIS was originally developed for a classified catalog system, which requires a subject index, that is, an index which 'translates' subjects expressed in natural language into the appropriate classification numbers. However, KOSIS can also be used for a dictionary catalog system. Accordingly, KOSIS strings can be manipulated to produce either appropriate subject indexes for a classified catalog system or acceptable subject headings for a dictionary catalog system.
8) KOSIS is able to maintain consistency of index entries and cross references by means of routine identification of the established index strings and reference system. For this purpose, an individual Subject Indicator Number and Reference Indicator Number is allocated to each new index string and each new index term, respectively. KOSIS can produce all the index entries, cross references, and authority cards by either manual or mechanical methods; thus, detailed algorithms for the machine production of the various outputs are provided for institutions with access to computer facilities.
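To make the two-line entry format concrete, the sketch below rotates a context-dependent string through the lead, qualifier, and display positions. It illustrates the general PRECIS-style shunting idea with a hypothetical term string, not the KOSIS role-operator machinery itself.

```python
# Illustrative sketch (not the authors' implementation): rotating a context-dependent
# string into two-line index entries with 'lead', 'qualifier', and 'display' positions,
# in the spirit of PRECIS/KOSIS shunting.
def rotate_entries(terms):
    entries = []
    for i, lead in enumerate(terms):
        qualifier = ", ".join(reversed(terms[:i]))   # wider context of the lead
        display = ". ".join(terms[i + 1:])           # terms context-dependent on the lead
        entries.append((lead, qualifier, display))
    return entries

# Hypothetical term string, ordered from wider context to narrower topic.
for lead, qualifier, display in rotate_entries(
        ["Korea", "Libraries", "Catalogs", "Subject indexing"]):
    print(f"{lead.upper()}{' -- ' + qualifier if qualifier else ''}")
    print(f"    {display}")
```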


Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon; Nam, Kihwan / Journal of Intelligence and Information Systems / v.25 no.3 / pp.1-17 / 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. Stock price forecasting research can be classified into studies using structured data and studies using unstructured data. With structured data such as historical stock prices and financial statements, past studies usually applied technical analysis and fundamental analysis. In the big data era, the amount of information has increased rapidly, and artificial intelligence methodologies that can find meaning by quantifying textual information, an unstructured data type that accounts for a large share of this information, have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining to stock price forecasting. The methodology adopted in many papers is to forecast a target company's stock price with news about that company. However, according to previous research, not only news about the target company but also news about related companies can affect its stock price. Yet finding highly relevant companies is not easy because of market-wide effects and random signals. Thus, existing studies have identified relevant companies primarily through pre-determined international industry classification standards. According to recent research, however, the Global Industry Classification Standard has varying homogeneity within its sectors, so forecasting stock prices by taking all companies in a sector together, without isolating the truly relevant ones, can adversely affect predictive performance. To overcome this limitation, we are the first to combine random matrix theory with text mining for stock prediction. When the dimension of the data is large, the classical limit theorems are no longer suitable because statistical efficiency is reduced; a simple correlation analysis of the financial market therefore does not reflect the true correlation. To solve this issue, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlations between companies. With the true correlations, we perform cluster analysis to find relevant companies. Based on the clustering results, we then use a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously; each kernel is assigned to predict stock prices from features of the financial news of the target firm or one of its relevant firms. The results of this study are as follows. (1) Following the existing research flow, we confirmed that using news from relevant companies is an effective way to forecast stock prices. (2) When looking for relevant companies, looking for them in the wrong way can lower AI prediction performance. (3) The proposed approach with random matrix theory shows better performance than previous studies when cluster analysis is performed on the true correlations, with market-wide effects and random signals removed. The contributions of this study are as follows. First, it shows that random matrix theory, which is used mainly in econophysics, can be combined with artificial intelligence to produce sound methodologies.
This suggests that it is important not only to develop AI algorithms but also to adopt theory from physics, extending existing research that integrated artificial intelligence with complex systems theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue; it is not only important to study artificial intelligence algorithms but also to consider how to theoretically shape the input values. Third, we confirmed that firms grouped under the Global Industry Classification Standard (GICS) may have low relevance to one another, and we suggest that relevance should be defined theoretically rather than simply taken from GICS.
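The random-matrix filtering step can be sketched as follows: eigenvalues of the correlation matrix above the Marchenko-Pastur upper edge are kept as genuine structure, the rest are discarded as noise, and the filtered correlations feed a clustering of firms. The simulated returns and the hierarchical clustering used below are stand-ins for the paper's data and clustering method.

```python
# Sketch of Marchenko-Pastur filtering of a correlation matrix followed by clustering.
# Simulated factor-driven returns stand in for real stock data; the clustering choice
# (average-linkage hierarchical) is an assumption, not the paper's exact method.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(42)
T, N = 500, 50                                       # observations, firms
group = np.repeat(np.arange(5), 10)                  # 5 hypothetical sectors
factors = rng.normal(size=(T, 5))
returns = rng.normal(size=(T, N)) + 0.6 * factors[:, group]

corr = np.corrcoef(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)

q = N / T
lambda_max = (1 + np.sqrt(q)) ** 2                   # Marchenko-Pastur upper edge
signal = eigvals > lambda_max                        # eigenvalues carrying real structure

filtered = (eigvecs[:, signal] * eigvals[signal]) @ eigvecs[:, signal].T
np.fill_diagonal(filtered, 1.0)

# Cluster firms on the filtered correlation (distance = sqrt(2 * (1 - rho))).
dist = np.sqrt(np.clip(2.0 * (1.0 - filtered), 0.0, None))
labels = fcluster(linkage(dist[np.triu_indices(N, 1)], method="average"),
                  t=5, criterion="maxclust")
print(labels)
```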

A Comparative Study on the Possibility of Land Cover Classification of the Mosaic Images on the Korean Peninsula (한반도 모자이크 영상의 토지피복분류 활용 가능성 탐색을 위한 비교 연구)

  • Moon, Jiyoon; Lee, Kwang Jae / Korean Journal of Remote Sensing / v.35 no.6_4 / pp.1319-1326 / 2019
  • KARI (Korea Aerospace Research Institute) operates the government satellite information application consultation body to cope with the ever-increasing demand for satellite images in the public sector, and carries out various support projects, including the annual generation and provision of mosaic images of the Korean Peninsula, to enhance user convenience and promote the use of satellite images. In particular, the government has wanted to increase the utilization of the Korean Peninsula mosaic images and has sought to classify and update them so that users can easily apply them in their work. However, it is necessary to test and verify whether classification results from the mosaic images can be used in the field, since the original spectral information is distorted during pan-sharpening and color balancing and only the R, G, and B bands are provided. Therefore, in this study, the reliability of the classification result of the mosaic image was compared with that of a KOMPSAT-3 image. The study found that the accuracy of the KOMPSAT-3 classification was between 81% and 86% (overall accuracy about 85%), while the accuracy of the mosaic-image classification was between 69% and 72% (overall accuracy about 72%). This is interpreted as resulting not only from the distortion of the original spectral information in the pan-sharpening and mosaic processes, but also from the fact that NDVI and NDWI information had to be extracted from the KOMPSAT-3 image rather than from the mosaic image, since only the three color bands (R, G, B) are provided. Although it is deemed inadequate to distribute classification results extracted from mosaic images at present, it will be necessary to explore ways to minimize the distortion of spectral information when making mosaic images, to develop classification techniques suited to mosaic images, and to provide NIR band information. In addition, the utilization of images with limited spectral information could increase in the future if related research continues, such as comparative analysis of classification results by geomorphological characteristics and the development of machine learning methods for classifying objects of interest.
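The NDVI and NDWI values mentioned above are simple band ratios, NDVI = (NIR - Red) / (NIR + Red) and NDWI = (Green - NIR) / (Green + NIR), which is why they cannot be derived from the RGB-only mosaic. A minimal sketch with synthetic arrays standing in for KOMPSAT-3 bands:

```python
# Sketch of the index computation: NDVI and NDWI from per-pixel band arrays.
# The synthetic reflectance arrays are placeholders, not KOMPSAT-3 data.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    return (nir - red) / (nir + red + eps)

def ndwi(green, nir, eps=1e-6):
    return (green - nir) / (green + nir + eps)

rng = np.random.default_rng(0)
red, green, nir = (rng.uniform(0.0, 1.0, (4, 4)) for _ in range(3))
print(ndvi(nir, red).round(2))
print(ndwi(green, nir).round(2))
```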

A Study on Image-Based Mobile Robot Driving on Ship Deck (선박 갑판에서 이미지 기반 이동로봇 주행에 관한 연구)

  • Seon-Deok Kim; Kyung-Min Park; Seung-Yeol Wang / Journal of the Korean Society of Marine Environment & Safety / v.28 no.7 / pp.1216-1221 / 2022
  • Ships tend to become larger to increase the efficiency of cargo transportation. Larger ships lead to longer travel times for ship workers, higher work intensity, and reduced work efficiency. Problems such as increased work intensity, together with the younger generation's avoidance of high-intensity labor, are reducing the inflow of young people into this workforce. In addition, rapid population aging and the shrinking young labor force aggravate the labor shortage in the maritime industry. To overcome this, the maritime industry has recently introduced technologies such as intelligent production design platforms and smart production operation management systems, and the smart autonomous logistics system is one of these technologies. The smart autonomous logistics system delivers various goods using intelligent mobile robots that drive themselves using sensors such as LiDAR and cameras. Therefore, in this paper, we checked whether a mobile robot could autonomously drive to a stop sign by detecting the passageway of the ship deck. Autonomous driving was performed by detecting the ship deck passageway through the camera mounted on the mobile robot, based on a model trained with NVIDIA's end-to-end learning approach, and the mobile robot was stopped by detecting the stop sign with SSD MobileNetV2. The experiment, in which the mobile robot autonomously drives about 70 m to the stop sign without deviating from the ship deck passageway, was repeated five times. The results confirmed that the mobile robot drove without deviating from the passageway. If a smart autonomous logistics system incorporating these results is used in the maritime industry, it is expected to improve safety, reduce the required labor force, and increase work efficiency for workers.
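As an illustration of the stop-sign check, the sketch below runs a camera frame through an off-the-shelf SSD MobileNet V2 detector from TensorFlow Hub and looks for the COCO "stop sign" class. The model URL, class id, and threshold are assumptions for illustration; the paper's own detector was trained separately for the ship-deck setting.

```python
# Minimal sketch (assumptions: the public TF Hub SSD MobileNet V2 COCO detector and the
# COCO label id 13 for "stop sign"; not the paper's own trained model) showing how a
# camera frame could be checked for a stop sign to trigger a halt command.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")
STOP_SIGN_CLASS_ID = 13  # "stop sign" in the COCO label map

def should_stop(frame_rgb, score_threshold=0.5):
    """Return True if a stop sign is detected in an RGB uint8 frame."""
    batch = tf.convert_to_tensor(frame_rgb[np.newaxis, ...], dtype=tf.uint8)
    result = detector(batch)
    classes = result["detection_classes"][0].numpy().astype(int)
    scores = result["detection_scores"][0].numpy()
    return bool(np.any((classes == STOP_SIGN_CLASS_ID) & (scores >= score_threshold)))

# Example: a dummy frame stands in for the robot's camera image.
dummy_frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(should_stop(dummy_frame))
```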