• Title/Summary/Keyword: Navigation-System

Search Results: 6,434 (processing time: 0.034 seconds)

The Simulation for the Organization of Fishing Vessel Control System in Fishing Ground (어장에 있어서의 어선관제시스템 구축을 위한 모의실험)

  • 배문기;신형일
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.36 no.3
    • /
    • pp.175-185
    • /
    • 2000
  • This paper describes a basic study on organizing a fishing vessel control system for the efficient control of fishing vessels in Korean offshore waters. ARPA images of a fleet of purse seiners conducting fishing operations off Cheju, Korea were digitized with a digital camera and then simulated using a VTMS. Furthermore, the application of an FVTMS that can efficiently control fishing vessels on the fishing ground was investigated. The results obtained were as follows: (1) Casting and hauling the net took 16 minutes and 35 minutes, respectively. The length of the rope pulled by the scout boat was 200 m, the tactical diameter in casting the net was 340.8 m, and the turning speed was 6 kts. (2) In the simulation, the casting and hauling process shifted to the SW and NE when the current was set to NE at 2 kts and SW at 2 kts, respectively. These results suggest that control of the fishing vessels can be planned in advance from information on the fishing ground, the fishery, the ship's maneuvering, etc. (3) The control range of the VTMS radar used in the simulation was about 16 miles. Even when switching from the control vessel's radar to another, vector and target data were acquired continuously. The optimum control position could be determined by measuring and analyzing the distance and bearing between the control vessel and the fleet of fishing vessels. (4) An FVTMS (fishing vessel traffic management services) model was proposed in which fishing vessels receiving fishing-condition and safe-navigation information can operate safely and efficiently.

  • PDF

A Study on the Characteristics Measurement of Main Engine Exhaust Emission in Training Ship HANBADA (실습선 한바다호 주기관 배기가스 배출물질 특성 고찰에 관한 연구)

  • Choi, Jung-Sik;Lee, Sang-Deuk;Kim, Seong-Yun;Lee, Kyoung-Woo;Chun, Kang-Woo;Nam, Youn-Woo;Jung, Kyun-Sik;Park, Sang-Kyun;Choi, Jae-Hyuk
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.19 no.6
    • /
    • pp.658-665
    • /
    • 2013
  • In this study, we measured particulate matter (PM), which has emerged as a key issue at the International Maritime Organization (IMO), and other exhaust emissions aboard HANBADA, the training ship of Korea Maritime University. The PM was collected on TEM grids and its structure was observed by electron microscopy, while exhaust gases such as NOx, CO2, and CO were measured with a combustion gas analyzer (PG-250A, HORIBA). The results of this study are as follows. 1) When the ship departed from port, PM emissions differed by up to 30 % due to the bunker change. 2) Under steady navigation, PM emission was 1.34 mg/m³ while changing from Bunker-A to L.R.F.O (3 %), and 1.19 mg/m³ on fixed L.R.F.O (3 %). When the main engine RPM was increased by up to 20 % on fixed L.R.F.O (3 %), PM emission was 1.40 mg/m³. Changing to the lower-quality oil (L.R.F.O (3 %)) increased the CO concentration from the main engine by about 16 %, whereas raising the main engine RPM by up to 20 % increased the CO concentration by more than 152 %. These results imply that the change in RPM is the dominant factor in exhaust emissions, although fuel oil type is also an important factor. 3) The PM collected on the TEM grids was about 4~10 µm in diameter, and its structure was a porous aggregate.

2-D/3-D Seismic Data Acquisition and Quality Control for Gas Hydrate Exploration in the Ulleung Basin (울릉분지 가스하이드레이트 2/3차원 탄성파 탐사자료 취득 및 품질관리)

  • Koo, Nam-Hyung;Kim, Won-Sik;Kim, Byoung-Yeop;Cheong, Snons;Kim, Young-Jun;Yoo, Dong-Geun;Lee, Ho-Young;Park, Keun-Pil
    • Geophysics and Geophysical Exploration
    • /
    • v.11 no.2
    • /
    • pp.127-136
    • /
    • 2008
  • To identify potential gas hydrate areas in the Ulleung Basin, 2-D and 3-D seismic surveys using R/V Tamhae II were conducted in 2005 and 2006. The seismic survey equipment consisted of a navigation system, a recording system, a streamer cable, and an air-gun source. For reliable velocity analysis in a deep-sea area where water depths are mostly greater than 1,000 m and the target depth is up to about 500 msec below the seafloor, a 3-km-long streamer and a 1,035 in³ tuned air-gun array were used. During the survey, a suite of quality control operations was performed, including source signature analysis, 2-D brute stack, RMS noise analysis, and FK analysis. The source signature was calculated to verify its conformity to the quality specification, and a gun drop-out test was carried out to examine signature changes due to a single air gun's failure. From the online quality analysis, we could conclude that the overall data quality was very good, even though some seismic data were affected by swell noise, parity errors, spike noise, and current rip noise. In particular, for the 3-D seismic data inevitably contaminated with current rip noise, checking the results of data quality enhancement using FK filtering and a missing-trace restoration technique allowed the acquired data to be accepted and the field survey to continue without interruption. Even in survey areas where the acquired data would otherwise fail the quality specification, marine seismic survey efficiency could be improved by demonstrating the possibility of noise suppression through onboard data processing.
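The RMS noise analysis mentioned above can be illustrated with a minimal sketch: compute the RMS amplitude of each trace over a window recorded before the first seismic arrival and flag traces whose ambient noise (e.g. swell noise) stands out. The window length and threshold below are illustrative assumptions, not the survey's actual QC parameters:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a sequence of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def noisy_traces(traces, noise_window=100, threshold=3.0):
    """traces: list of lists of amplitude samples, one list per trace.
    Returns the indices of traces whose pre-arrival RMS noise exceeds
    `threshold` times the median RMS level across all traces."""
    levels = [rms(t[:noise_window]) for t in traces]
    median = sorted(levels)[len(levels) // 2]
    return [i for i, level in enumerate(levels) if level > threshold * median]
```

A trace dominated by swell noise would show a pre-arrival RMS level several times the fleet median and be flagged for onboard filtering or restoration.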

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, which involve two distinct approaches. The first approach is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule-set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN Do E using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip in a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a rule-based fuzzy inference program from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions, which would also speed up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single dedicated program, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time required for a single inference; the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even with a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes: an ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

    TABLE I. INFERENCE TIME BY 51 RULES
                          MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
      6,000 inferences    125 s                  49 s                        0.0038 s
      1 inference         20.8 ms                8.2 ms                      6.4 µs
      FLIPS               48                     122                         156,250
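The max-min compositional inference and centroid defuzzification described in this abstract can be sketched in software. The rule format and membership values below are illustrative assumptions, not data from the actual chips:

```python
# Mamdani-style max-min compositional inference: each rule's strength is
# the min over its antecedent memberships (fuzzy AND); the clipped
# consequents are aggregated with max (fuzzy OR). These are exactly the
# min/max operations whose hardware support the talk evaluates.

def infer(rules, universe_size=64):
    """rules: list of (antecedent_memberships, consequent_set) pairs.
    antecedent_memberships: pre-fuzzified degrees of match, each in [0, 1].
    consequent_set: membership values over the output universe."""
    aggregated = [0.0] * universe_size
    for antecedents, consequent in rules:
        strength = min(antecedents)              # rule firing strength
        for i in range(universe_size):
            # Clip the consequent at the rule strength, then union
            aggregated[i] = max(aggregated[i], min(strength, consequent[i]))
    return aggregated

def defuzzify_centroid(fuzzy_set):
    """Centroid defuzzification, the method used on the UNC/MCNC chip."""
    total = sum(fuzzy_set)
    if total == 0:
        return 0.0
    return sum(i * m for i, m in enumerate(fuzzy_set)) / total
```

Every rule evaluation is dominated by min/max over 64-element arrays, which is why dedicated min and max instructions (or parallel datapaths in the ASIC) yield such large speed-ups.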

  • PDF

Estimation of the CY Area Required for Each Container Handling System in Mokpo New Port (목표 신항만의 터미널 운영시스템에 따른 CY 소요면적 산정에 관한 연구)

  • Keum, J.S.
    • Journal of Korean Port Research
    • /
    • v.12 no.1
    • /
    • pp.35-46
    • /
    • 1998
  • The CY functions in various respects as a buffer zone between the maritime and overland inflow and outflow of containers. The amount of storage area needed requires very careful appraisal at the pre-operational stage. A container terminal should be designed to handle and store containers in the most efficient and economical way possible. To achieve this, it is necessary to forecast the numbers and types of containers to be handled, the CY area required, and the internal handling systems to be adopted. This paper calculates the CY area required for each container handling system in Mokpo New Port. The required CY area depends directly on the equipment being used and the storage demand, and also on the dwell time. Furthermore, containers need to be segregated by destination, weight, class, FCL (full container load) versus LCL (less than container load), and direction of travel, and sometimes by type, and often by shipping line or service. Thus full use of a storage area is not always possible, as major imbalances and fluctuations in these flows occur all the time. The calculation of CY area must therefore take these operational factors into account. All of these factors were applied to the estimation of the CY area in Mokpo New Port, and the resulting CY area requirements are summarized in the conclusion.
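The abstract does not reproduce the paper's formulas, but the dependence of the required CY area on storage demand, dwell time, and the handling equipment's stacking height can be illustrated with a standard back-of-the-envelope calculation. All parameter values and the slot-footprint constant below are illustrative assumptions, not figures from the Mokpo study:

```python
def required_ground_slots(annual_teu, dwell_days, stacking_height,
                          utilization=0.7, peak_factor=1.2):
    """Rough count of 20-ft ground slots a yard needs.
    Average yard inventory follows from throughput and dwell time
    (Little's law); peak_factor allows for traffic fluctuations and
    utilization for the segregation losses the abstract describes."""
    avg_inventory_teu = annual_teu * dwell_days / 365.0
    return avg_inventory_teu * peak_factor / (stacking_height * utilization)

# Hypothetical footprint of one ground slot including margins, in m^2
TGS_AREA_M2 = 29.0

# Illustrative scenario: 200,000 TEU/year, 7-day dwell, 4-high stacking
slots = required_ground_slots(annual_teu=200_000, dwell_days=7,
                              stacking_height=4)
area_m2 = slots * TGS_AREA_M2
```

The stacking height is where the choice of handling system enters: a chassis system stacks one high, a transtainer system four or five high, so the same storage demand translates into very different CY areas.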

  • PDF

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.79-92
    • /
    • 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method of integrating national R&D data and helping users navigate the integrated data through a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data are integrated with the research project at the center; that is, the other R&D data such as research papers, patents, and reports are connected to the research project as its outputs. The lightweight ontology represents the simple relationships among the integrated data, such as project-output relationships, document-author relationships, and document-topic relationships. The knowledge map enables us to infer further relationships, such as co-author and co-topic relationships. To extract the relationships among the integrated data, a Relational Data-to-Triples transformer was implemented. A topic modeling approach is also introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: those used in knowledge management to store, manage, and process an organization's data as knowledge, and those for analyzing and representing knowledge extracted from science & technology documents. This research focuses on the latter. In this research, a knowledge map service is introduced for integrating national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map. Using the lightweight ontology enables us to represent and process knowledge as a simple network, which fits the knowledge navigation and visualization characteristics of the knowledge map. The lightweight ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected by the national R&D data through authorship and project-performer relationships. A knowledge map displaying researchers' networks is created, where the networks are derived from the co-authoring relationships of national R&D documents and the co-participation relationships of national R&D projects. To sum up, a knowledge map service system based on topic modeling and ontology is introduced for processing knowledge about national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system's goals are 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based search over the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information such as research papers, research reports, patents, and GTB data is updated daily from NDSL, and R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and merged into an integrated database. A knowledge base is constructed by transforming the relational data into triples referencing the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them. The topic modeling approach enables us to extract these relationships and topic keywords based on semantics rather than simple keyword matching. Lastly, we present an experiment on the construction of the integrated knowledge base using the lightweight ontology and topic modeling, and introduce the knowledge map services created on top of the knowledge base.
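The Relational Data-to-Triples idea and the co-author inference described above can be sketched as follows. The table and column names, the namespace, and the predicate names are illustrative assumptions, not the paper's actual ontology vocabulary:

```python
NS = "http://example.org/rnd#"  # hypothetical ontology namespace

def rows_to_triples(papers, authorship):
    """Turn relational rows into subject-predicate-object triples:
    project-output relationships from the papers table and
    document-author relationships from the authorship link table."""
    triples = []
    for paper in papers:
        triples.append((NS + paper["project_id"], NS + "hasOutput",
                        NS + paper["paper_id"]))
    for link in authorship:
        triples.append((NS + link["paper_id"], NS + "hasAuthor",
                        NS + link["author_id"]))
    return triples

def co_authors(triples):
    """Infer co-author pairs: two authors are connected when they share
    a hasAuthor triple with the same document."""
    authors_by_doc = {}
    for s, p, o in triples:
        if p == NS + "hasAuthor":
            authors_by_doc.setdefault(s, []).append(o)
    pairs = set()
    for authors in authors_by_doc.values():
        for i in range(len(authors)):
            for j in range(i + 1, len(authors)):
                pairs.add(tuple(sorted((authors[i], authors[j]))))
    return pairs
```

In the paper's system the triples would live in a triple store and the co-author inference would be a graph query, but the derivation is the same: explicit document-author triples imply the implicit researcher network.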

A Study on the Historical Origin of Private Security Industry in Korea (우리나라 보안산업의 역사적 기원에 관한 연구)

  • Lee, Chang-Moo
    • Korean Security Journal
    • /
    • no.22
    • /
    • pp.91-111
    • /
    • 2010
  • Around the middle of the ninth century, the strict bone-rank system of Silla frustrated many people who had political ambition but lacked nobility. They had to seek other paths, including maritime trade, an undertaking that both reflected and increased their economic and military power. Trade prospered with T'ang China and with Japan as well. The threat that piracy posed to Silla's thriving maritime trade led to the creation of a succession of garrisons at important coastal points, of which Chonghae Jin (the Chonghae garrison) was regarded as the most important. It was established in 828 by Chang Pogo on Wando, an island just east of the southwestern tip of Korea and a key point at that time in the trade among China, Korea, and Japan. From this vantage point Chang Pogo became a merchant-prince with extensive holdings and commercial interests in China and trade contacts with Japan. Although piracy was rampant in East Asia at that time, neither the Chinese nor the Silla government was able to control it, owing to internal political strife and a lack of policing resources. Infuriated by the piracy and the government's inability to control it, Chang Pogo returned to Silla to fight the pirates and protect maritime trade. He persuaded the king of Silla to permit him to command private armed forces to sweep away the pirates, and in 829 he was appointed Commissioner of Chonghae Jin with the mission of curbing piracy in that region. Chang's forces were created to protect people from pirates, but in the 9th century they also developed into traders among Silla Korea, T'ang China, and Japan. This was geographically possible because the Chonghae garrison was situated at the midpoint of Korea, China, and Japan, and because Chang's naval forces effectively dominated the East Asian seas while patrolling the sea lanes. On the strength of these advantages, Chang Pogo amassed a great fortune, which likely came from charges for protecting people from pirates and from trade with China and Japan. Chang's forces could be termed the first private security company in Korean history, at least as far as historical documents record. Based on those documents, the number of private soldiers can be estimated to have exceeded tens of thousands, since Chang's forces alone were recorded at more than ten thousand. Because local powers and aristocratic elites were each said to have thousands of armed men, the extent of private forces is assumed to have been vast, although they were available only to the privileged class. In short, the dominance of Chang's forces was attributable to the decline of the central government and its loss of control over local powers. In addition, it would not have been possible without advanced technologies in shipbuilding and navigation.

  • PDF

Comparison of Response Systems and Education Courses against HNS Spill Incidents between Land and Sea in Korea (국내 HNS 사고 대응체계 및 교육과정에 관한 육상과 해상의 비교)

  • Kim, Kwang-Soo;Gang, Jin Hee;Lee, Moonjin
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.21 no.6
    • /
    • pp.662-671
    • /
    • 2015
  • As the types of Hazardous and Noxious Substances (HNS) become more diverse and the transport volume of HNS increases, HNS spill incidents occur frequently both on land and at sea. In view of the damage HNS spills cause to human lives and property, it is necessary to educate and train professional personnel in preparedness for, and response to, potential HNS spills. This study surveys the current state of response systems and education courses for HNS spill incidents on land and at sea in Korea and compares them with each other. The incident command system on land is basically similar to that at sea, but the lead authority responsible for combating HNS spills at sea changes with the location of the spill; that is, the Korea Coast Guard (KCG) is responsible for the urgent response to HNS spills at sea, while municipalities are responsible for responding to HNS drifted ashore. Education courses for HNS responders on land are established at the National Fire Service Academy (NFSA), the National Institute of Chemical Safety (NICS), and elsewhere, and are diverse. Education and training courses for HNS responders at sea are established at the Korea Coast Guard Academy (KCGA) and the Marine Environment Research & Training Institute (MERTI), and are comparatively simple. Education courses for dangerous cargo handlers who work in ports, where land meets sea, are established at the Korea Maritime Dangerous Goods Inspection & Research Institute (KOMDI), the Korea Port Training Institute (KPTI), and the Korea Institute of Maritime and Fisheries Technology (KIMFT). From this comparison of education courses for HNS responders on land and at sea, several recommendations are proposed for improving the education courses of the KCG and KOEM (Korea Marine Environment Management Corporation) so as to train professionals for combating HNS spills at sea in Korea: extending the range of trainees, dividing the existing integrated HNS course into an operational-level course and a manager-level course with respective refresher courses, offering an online cyber course, and running joint courses in cooperation with other relevant educational institutes.

A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu;Kim, Namgyu;Jung, Inhwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.123-136
    • /
    • 2014
  • Recently, online shopping has further developed as the use of the Internet and a variety of smart mobile devices becomes more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls. Consequently, there is a tendency for increasingly fierce competition among online retailers, and as a result, many Internet shopping malls are making significant attempts to attract online users to their sites. One such attempt is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they insert a specific keyword on an Internet portal site. The price related to each keyword is generally estimated by the keyword's frequency of appearance. However, it is widely accepted that the price of keywords cannot be based solely on their frequency because many keywords may appear frequently but have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on some keywords simply because people frequently use them. Therefore, from the perspective of shopping malls, a specialized process is required to extract meaningful keywords. Further, the demand for automating this extraction process is increasing because of the drive to improve online sales performance. In this study, we propose a methodology that can automatically extract only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as a keyword that is used directly before shopping behaviors. In other words, only search keywords that direct the search results page to shopping-related pages are extracted from among the entire set of search keywords. A comparison is then made between the extracted keywords' rankings and the rankings of the entire set of search keywords. Two types of data are used in our study's experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information. 
The experimental dataset was from a web site ranking site, and the biggest portal site in Korea. The original sample dataset contains 150 million transaction logs. First, portal sites are selected, and search keywords in those sites are extracted. Search keywords can be easily extracted by simple parsing. The extracted keywords are ranked according to their frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal site. As a result, a total of 344,822 search keywords were extracted. Next, by using web browsing history and site information, the shopping-related keywords were taken from the entire set of search keywords. As a result, we obtained 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of all the search keywords with the shopping-related keywords. To achieve this, we extracted 80,298 search keywords from several Internet shopping malls and then chose the top 1,000 keywords as a set of true shopping keywords. We measured precision, recall, and F-scores of the entire amount of keywords and the shopping-related keywords. The F-Score was formulated by calculating the harmonic mean of precision and recall. The precision, recall, and F-score of shopping-related keywords derived by the proposed methodology were revealed to be higher than those of the entire number of keywords. This study proposes a scheme that is able to obtain shopping-related keywords in a relatively simple manner. We could easily extract shopping-related keywords simply by examining transactions whose next visit is a shopping mall. The resultant shopping-related keyword set is expected to be a useful asset for many shopping malls that participate in keyword marketing. Moreover, the proposed methodology can be easily applied to the construction of special area-related keywords as well as shopping-related ones.
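The evaluation metrics described above are standard; a minimal sketch of the precision/recall/F-score computation over keyword sets (the keyword values are illustrative, not from the study's data):

```python
def prf(extracted, true_keywords):
    """Precision, recall, and F-score for a set of extracted keywords
    against a set of true shopping keywords. The F-score is the
    harmonic mean of precision and recall, as in the abstract."""
    hits = len(extracted & true_keywords)
    precision = hits / len(extracted) if extracted else 0.0
    recall = hits / len(true_keywords) if true_keywords else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if (precision + recall) else 0.0)
    return precision, recall, f_score
```

Filtering the 344,822 raw keywords down to 4,709 shopping-related ones trades a little recall for much higher precision, which is why the filtered set's F-score comes out higher.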

RPC Correction of KOMPSAT-3A Satellite Image through Automatic Matching Point Extraction Using Unmanned Aerial Vehicle Imagery (무인항공기 영상 활용 자동 정합점 추출을 통한 KOMPSAT-3A 위성영상의 RPC 보정)

  • Park, Jueon;Kim, Taeheon;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1135-1147
    • /
    • 2021
  • In order to geometrically correct high-resolution satellite imagery, a sensor modeling process that restores the geometric relationship between the satellite sensor and the ground surface at the image acquisition time is required. In general, high-resolution satellites provide RPC (Rational Polynomial Coefficient) information, but the vendor-provided RPC includes geometric distortion caused by the position and orientation of the satellite sensor. GCPs (Ground Control Points) are generally used to correct RPC errors. The representative method of acquiring GCPs is a field survey to obtain accurate ground coordinates; however, it is often difficult to locate GCPs in the satellite image because of image quality, land cover change, relief displacement, and the like. By using image maps acquired from various sensors as reference data, GCP collection can be automated with an image matching algorithm. In this study, the RPC of a KOMPSAT-3A satellite image was corrected using matching points extracted from UAV (Unmanned Aerial Vehicle) imagery. We propose a pre-processing method for extracting matching points between the UAV imagery and the KOMPSAT-3A satellite image. To this end, we compared the characteristics of matching points extracted by independently applying SURF (Speeded-Up Robust Features) and phase correlation, which are representative feature-based and area-based matching methods, respectively. The RPC adjustment parameters were calculated using the matching points extracted by each algorithm. To verify the performance and usability of the proposed method, it was compared with a GCP-based RPC correction. The GCP-based method improved the correction accuracy over the vendor-provided RPC by 2.14 pixels in the sample direction and 5.43 pixels in the line direction. In the proposed method, the SURF and phase correlation variants improved the sample accuracy by 0.83 and 1.49 pixels and the line accuracy by 4.81 and 5.19 pixels, respectively, compared to the vendor-provided RPC. These experimental results show that the proposed method using UAV imagery is a viable alternative to the GCP-based method for RPC correction.
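The area-based matching method used above, phase correlation, can be sketched in a few lines: the normalized cross-power spectrum of two patches has a sharp inverse-FFT peak at their relative translation. This is a minimal integer-shift version; a real matching pipeline (like the one the paper describes) would add windowing and subpixel peak interpolation:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) translation that maps `ref` onto
    `moved` (two equal-size 2-D patches) by phase correlation."""
    F_ref = np.fft.fft2(ref)
    F_moved = np.fft.fft2(moved)
    # Cross-power spectrum, normalized to keep only phase information
    cross_power = F_moved * np.conj(F_ref)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the patch into negative offsets
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Applied to a UAV orthoimage patch and the corresponding satellite image window, the recovered offsets become matching points from which the RPC adjustment parameters can be estimated.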