• Title/Summary/Keyword: Optimal Control Problem

Development for Fishing Gear and Method of the Non-Float Midwater Pair Trawl Net (III) - Opening Efficiency of the Model Net attaching the Kite - (무부자 쌍끌이 중층망 어구어법의 개발 (III) - 카이트를 부착한 모형어구의 전개성능 -)

  • 유제범;이주희;이춘우;권병국;김정문
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.39 no.3
    • /
    • pp.197-210
    • /
    • 2003
  • The non-float midwater pair trawl was effective for mouth opening and for controlling the working depth in midwater and near the bottom. However, full-scale field experiments and model tests in a circulating water channel confirmed that it was difficult to keep the net shallower than about 30 m below the surface. To solve this problem, kites were attached to the head rope of the non-float midwater pair trawl. In this study, four kinds of model experiments were carried out with the aim of applying the kite to the Korean midwater pair trawl. The results can be summarized as follows: 1. The working depth of the non-float midwater pair trawl with the kite was shallower than that of the prototype and the non-float type: approximately 20 m with 2 kites and about 5 m with 4 kites at 4.0 knots. The working depth remained almost constant, but the depth of the head rope sank by approximately 15 m and 10 m with increases in the front weight and the wing-end weight, respectively, and by approximately 22 m with an increase in the lower warp length (dL). 2. The hydrodynamic resistance of the kite type increased almost linearly as the flow speed increased from 2.0 to 5.0 knots, and its rate of increase grew with flow speed. The resistance of the kite type was approximately 5~10 ton larger than that of the non-float type and the prototype. It increased by approximately 3 ton as the front weight changed from 1.40 to 3.50 ton, by approximately 4 ton as the wing-end weight changed from 0 to 1.11 ton, and by approximately 5.5 ton as the lower warp length (dL) changed from 0 to 40 m. 3. The net height of the kite type increased by approximately 10 m as the kite area changed from $2,270mm^2$ to $4,540mm^2$, and was approximately 50 m and 30 m larger than that of the prototype and the non-float type, respectively. The net width changed by approximately 5 m as the flow speed varied from 2.0 to 5.0 knots. 4. The filtering volume of the kite type was larger than that of the prototype and the non-float type by 28% and 34% at a flow speed of 2.0 knots, 42% and 41% at 3.0 knots, 62% and 45% at 4.0 knots, and 74% and 54% at 5.0 knots, respectively. The optimal towing speed was approximately 3.0 knots for the prototype, over 4.0 knots for the non-float type, and 5.0 knots for the kite type. 5. The opening efficiency of the kite type was approximately 50% and 25% larger than that of the prototype and the non-float type, respectively.
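
As a side note, the filtering-volume comparison above is commonly computed as net mouth area times towing speed; the sketch below illustrates that arithmetic with invented mouth dimensions (the paper's actual mouth areas are not given here), not the study's own computation.

```python
# Hedged sketch: filtering volume per unit time is commonly estimated as
# net mouth area (height x width) times towing speed. The dimensions
# below are illustrative stand-ins, not figures from the paper.

def filtering_volume(net_height_m, net_width_m, speed_knots):
    """Approximate filtered water volume per second (m^3/s)."""
    speed_ms = speed_knots * 0.5144          # knots -> m/s
    mouth_area = net_height_m * net_width_m  # rough rectangular mouth
    return mouth_area * speed_ms

proto = filtering_volume(30.0, 40.0, 3.0)   # hypothetical prototype mouth
kite = filtering_volume(50.0, 40.0, 3.0)    # hypothetical kite-type mouth
print(f"prototype: {proto:.1f} m^3/s, kite type: {kite:.1f} m^3/s")
print(f"kite/proto ratio: {kite / proto:.2f}")
```

With equal width and towing speed, the ratio reduces to the ratio of net heights, which is why the reported net-height gain translates directly into filtering-volume gain.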

SANET-CC : Zone IP Allocation Protocol for Offshore Networks (SANET-CC : 해상 네트워크를 위한 구역 IP 할당 프로토콜)

  • Bae, Kyoung Yul;Cho, Moon Ki
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.87-109
    • /
    • 2020
  • Currently, thanks to major strides in wired and wireless communication technology, a variety of IT services are available on land. This trend is driving an increasing demand for IT services on vessels as well, and demand for services such as two-way digital data transmission, the Web, and apps is expected to rise to the level available on land. However, while a high-speed communication network is easily accessible on land because it rests on fixed infrastructure such as APs and base stations, this is not the case at sea, so a radio-based voice communication service is usually used instead. To solve this problem, an additional frequency for digital data exchange was allocated, and a ship ad-hoc network (SANET) that uses this frequency was proposed. Instead of satellite communication, which is costly to install and use, SANET was developed to provide various IP-based IT services to ships at sea. Connectivity between land base stations and ships is essential in SANET: to join the network, a ship must be assigned an IP address. This paper proposes SANET-CC, a protocol that allows ships to be assigned their own IP addresses. SANET-CC propagates several non-overlapping IP address ranges through the entire network, from land base stations to ships, in the form of a tree. Ships obtain their own IP addresses through a simple exchange of request and response messages with land base stations or with M-ships that can allocate IP addresses. SANET-CC can therefore eliminate the IP collision prevention (Duplicate Address Detection) process as well as the network separation or integration caused by ship movement. Various simulations were performed to verify the applicability of this protocol to SANET, with the following results.
First, with SANET-CC, about 91% of the ships in the network received IP addresses under all tested conditions, 6% higher than in existing studies, and the results suggest that adjusting the variables to each port's environment could improve this further. Second, all vessels received IP addresses in an average of 10 seconds regardless of conditions, a 50% decrease from the 20-second average of the previous study; considering that existing studies covered 50 to 200 vessels while this study covers 100 to 400, the efficiency gain may be even larger. Third, existing studies could not derive optimal values for the protocol variables because their results showed no consistent pattern as the variables changed, which means optimal variable values could not be set for each port's diverse environment. In this paper, however, the results vary with the variables in a consistent pattern, which is significant because the protocol can be tuned to each port by adjusting the variable values. It was also confirmed that, regardless of the number of ships, the IP allocation ratio was most efficient, at about 96%, when the waiting time after an IP request was 75 ms, and that the tree structure maintained a stable network configuration when the number of IPs exceeded 30,000. Fourth, this study can be used to design a network supporting intelligent maritime control systems and services offshore instead of satellite communication, and if LTE-M is deployed, it can also serve various intelligent services.
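
The core idea of collision-free, tree-style address delegation can be illustrated with a loose Python analogy. This is not the SANET-CC message protocol itself; the class and method names are invented, and real SANET-CC exchanges request/response messages over radio rather than sharing objects in memory.

```python
# Illustrative sketch (assumed names): a land base station or M-ship holds a
# disjoint address range and either answers single-address requests from
# ships or delegates a disjoint sub-range to a child allocator. Because
# delegated ranges never overlap, no Duplicate Address Detection is needed.

class Allocator:
    def __init__(self, start, end):
        if start > end:
            raise ValueError("empty range")
        self.start, self.end = start, end  # inclusive range of host ids
        self.next_free = start

    def assign(self):
        """Answer a single-address request from a ship (None if exhausted)."""
        if self.next_free > self.end:
            return None
        addr = self.next_free
        self.next_free += 1
        return addr

    def delegate(self, size):
        """Split off a disjoint sub-range for a child node (e.g. an M-ship)."""
        if self.end - self.next_free + 1 < size:
            return None  # not enough free addresses left
        child = Allocator(self.end - size + 1, self.end)
        self.end -= size  # shrink own range; ranges stay disjoint
        return child

base = Allocator(1, 1000)
mship = base.delegate(100)   # M-ship receives 901..1000
ship_a = base.assign()       # assigned by the base station
ship_b = mship.assign()      # assigned by the M-ship
print(ship_a, ship_b)        # disjoint by construction
```

Delegating from the top of the range keeps each node's bookkeeping to two integers, which is one simple way such a tree can avoid collisions without any coordination after the handoff.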

Estimation of Optimal Size of the Treatment Facility for Nonpoint Source Pollution due to Watershed Development (비점오염원의 정량화방안에 따른 적정 설계용량결정)

  • Kim, Jin-Kwan
    • Journal of the Korean Society of Hazard Mitigation
    • /
    • v.8 no.6
    • /
    • pp.149-153
    • /
    • 2008
  • The pollutant load generated before and after the development of a watershed should be quantitatively estimated and controlled to minimize water contamination. The Ministry of Environment has provided a guideline for the legal management of nonpoint source pollution since 2006. However, the rational method proposed in the guideline for determining the treatment capacity for nonpoint sources is problematic in field application because it does not reflect project-specific cases and overestimates the pollutant load to be reduced. We therefore perform a standard rainfall analysis using an analytical probabilistic method to estimate the additional pollutant load generated by a project, and suggest a methodology for estimating the contaminant load instead of the simple rational method. The suggested methodology can determine a reasonable capacity and efficiency for a treatment facility by estimating the pollutant load from nonpoint sources, enabling appropriate watershed management. We applied the methodology to a housing land development project and a dam construction project. When the treatment capacity is determined by the rational method without considering the project type, 90% of the pollutant load generated by the development must be treated, which would require about 30% of the total development cost to be invested in the treatment facility; this cost is too large to be realistic. With the suggested method, the target pollutant load to be reduced is 10 to 30% of the load generated by the development, requiring only about 5 to 10% of the total cost. Nonpoint sources must be controlled for water resources management, but treating 90% of the pollutant load generated by development is not feasible.
The pollutant load from nonpoint sources should instead be estimated and controlled according to the project type, which is very important in practice for watershed management. The results of this study may therefore be more reasonable than the rational method proposed by the Ministry of Environment.
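
For context, the rational method that the guideline builds on estimates peak runoff as Q = C·i·A. The sketch below shows that arithmetic with illustrative values (the runoff coefficient, rainfall intensity, and catchment area are invented, not figures from the paper, which replaces this approach with an analytical probabilistic rainfall analysis).

```python
# Rational method for peak runoff: Q = 0.00278 * C * i * A, where
#   C = dimensionless runoff coefficient,
#   i = rainfall intensity in mm/hr,
#   A = catchment area in hectares,
# and 0.00278 converts mm/hr * ha to m^3/s. Input values are illustrative.

def rational_peak_flow(c, intensity_mm_per_hr, area_ha):
    """Peak runoff in m^3/s by the rational method."""
    return 0.00278 * c * intensity_mm_per_hr * area_ha

q = rational_peak_flow(c=0.6, intensity_mm_per_hr=50.0, area_ha=120.0)
print(f"peak flow: {q:.2f} m^3/s")
```

Sizing a facility to capture a fixed fraction of this design flow, regardless of project type, is what the paper argues leads to overestimated treatment capacity.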

Clinical Investigation of Women with Asthma Worsened During Pregnancy (임신 중 천식의 악화로 내원한 환자의 임상적 고찰)

  • Kwon, Young-Hwan;Kim, Kyung-Kyu;Jung, Hye-Cheol;Lee, Sung-Yong;Kim, Je-Hyeong;Lee, So-Ra;Lee, Sang-Yeub;Lee, Sin-Hyeong;Cho, Jae-Yun;Shim, Jae-Jeong;Kang, Kyung-Ho;Yoo, Se-Hwa;In, Kwang-Ho
    • Tuberculosis and Respiratory Diseases
    • /
    • v.46 no.4
    • /
    • pp.548-554
    • /
    • 1999
  • Background: Asthma is the most common respiratory crisis encountered in clinical practice, occurring in up to 4% of all pregnancies. Pregnancy often appears to alter the course of asthma, but the mechanisms responsible for these variable changes remain unknown. Poor control and exacerbations of asthma during pregnancy may result in serious maternal and fetal complications. To investigate the course of asthma during pregnancy in Korean women, we conducted a retrospective study of 27 pregnant women who had been admitted to Korea University Hospital for worsening asthma. Method: Twenty-seven pregnant women who had visited Korea University Hospital for worsening asthma were enrolled in this retrospective study. We reviewed medical records and interviewed the patients. Results: Of the 27 pregnant women evaluated, 25 were enrolled in the study; two patients experienced abortions at 6 and 25 weeks of gestation, respectively. Asthma most commonly worsened during weeks 20 to 28 of gestation, and all patients who worsened improved during the last 4 weeks of pregnancy. Twenty (80%) of the 25 women whose asthma worsened during pregnancy reverted toward their prepregnancy status after delivery (p<0.002). The causes of worsening asthma during pregnancy were reduction or even complete cessation of medication due to fears about its safety (40%), worsening after upper respiratory infection (28%), and unknown causes (32%). There were no adverse perinatal outcomes in the 25 subjects. Conclusion: A major problem in treating asthma during pregnancy is the reduction or complete cessation of medication due to fears of fetal effects. Maternal education and optimal clinical and pharmacologic management are therefore necessary to mitigate maternal and fetal complications.

Effects of the Energy Level of the Finisher Diet on Growth Efficiency and Carcass Traits of 'High'-Market Weight Pigs (비육후기 사료의 에너지 수준이 '고체중' 출하돈의 성장효율 및 도체특성에 미치는 영향)

  • Lee, C.Y.;Kim, M.H.;Ha, D.M.;Park, J.W.;Oh, G.Y.;Lee, J.R.;Ha, Y.J.;Park, B.C.
    • Journal of Animal Science and Technology
    • /
    • v.49 no.4
    • /
    • pp.471-480
    • /
    • 2007
  • The aim of the present study was to determine the effects of a low-energy finisher diet on the feed and growth efficiencies and carcass traits of 'high'-market weight (MW) finishing pigs, and thereby to estimate the optimal dietary energy level for high-MW swine. A total of 160 (Yorkshire × Landrace) × Duroc-crossbred finishing gilts and barrows weighing approximately 90 kg were fed a low-energy (3,200 kcal DE/kg) diet (LE) or a control (3,400 kcal) diet (CON) ad libitum in 16 pens up to 135- and 125-kg live weights, respectively, at which point the animals were slaughtered and their carcasses analyzed [2 (sex) × 2 (diet) factorial design]. Average daily gain, average daily feed intake and feed efficiency did not differ between sexes or between diet groups. Backfat thickness was less (P<0.05) in the LE (22.4 mm) than in the CON group (24.3 mm) in gilts, but not in barrows (24.4 ± 0.4 mm). The percentage of C- and D-grade carcasses exceeded 90% in gilts because of the 'over-weight' problem, whereas in barrows the percentages of A plus B grades and C plus D grades were 79% and 21%, respectively. The yield percentage of each trimmed primal cut per total trimmed cuts (w/w) did not differ between sexes or diet groups. Physicochemical characteristics of the longissimus muscle, including color (lightness and redness), pH, drip loss and chemical composition, which overall were within the normal range, also did not differ between sexes or diet groups. In conclusion, both LE and CON are judged adequate for high-MW swine during the latter finishing period. If fat deposition in a given herd of high-MW pigs needs to be suppressed by dietary treatment, the energy content of the diet will have to be reduced below 3,200 kcal DE/kg.

Personalized Recommendation System for IPTV using Ontology and K-medoids (IPTV환경에서 온톨로지와 k-medoids기법을 이용한 개인화 시스템)

  • Yun, Byeong-Dae;Kim, Jong-Woo;Cho, Yong-Seok;Kang, Sang-Gil
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.147-161
    • /
    • 2010
  • With the recent convergence of broadcasting and communication, communication has been joined to TV, and TV viewing has changed considerably. IPTV (Internet Protocol Television) provides information services, movie content, and broadcasts over the Internet, combining live programs with VOD (video on demand), and has become a new business opportunity over communication networks. It has also raised new technical issues: imaging technology for the service, networking technology without video interruption, security technologies to protect copyright, and so on. Through an IPTV network, users can watch the programs they want whenever they want. However, IPTV makes it hard to find programs, whether by search or by menu navigation. Menu navigation takes a long time to reach the desired program, and search fails when the title, genre, or actors' names are unknown; entering letters through a remote control is also cumbersome. A bigger problem is that users are often unaware of the services available to them. To resolve these difficulties in selecting VOD services in IPTV, a personalized service is recommended, which enhances user satisfaction and saves time. This paper proposes a filtering and recommendation system that presents programs suited to each individual, addressing IPTV's shortcomings. The proposed system collects TV program information and, from each individual's IPTV viewing records, the user's preferred genres and sub-genres, channels, watched programs, and viewing times. To compare these similarities, an ontology of TV programs is used, because the distance between programs can be measured by similarity comparison. The TV program ontology we use is extracted from TV-Anytime metadata, which represents the programs' semantics.
The ontology expresses program contents and features numerically. Vocabulary similarity is determined through WordNet: all words describing a program are expanded into their upper and lower classes, and the average similarity of the descriptive keywords is measured. Based on the calculated distances, similar programs are grouped by the K-medoids partitioning method, which divides the data into clusters of items with similar characteristics. K-medoids sets K representative objects (medoids); each object is assigned to its nearest medoid to form temporary clusters, and when the initial n objects are to be divided into K clusters, the optimal medoids are found through repeated trials after the initial medoids are selected provisionally. Through this process, similar programs are clustered. After the cluster analysis, weights are applied to the recommendations as follows. When each cluster recommends programs, the programs near its medoid are recommended to users, and their distance to the medoid, computed with the same similarity measure, provides the base score for ranking the recommended programs. A weight derived from the number of watched programs is then applied: the more programs in a cluster the user has watched, the higher the weight. This is defined as the cluster weight. In this way, the representative programs of each cluster are selected and a provisional ranking of TV programs is determined. Because the cluster-representative programs may include errors, weights reflecting the user's viewing preferences are added to determine the final ranks, and the contents users prefer are recommended accordingly. Based on the proposed method, an experiment was carried out in a controlled environment.
The experiment shows the superiority of the proposed method compared to existing approaches.
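
The K-medoids grouping step can be sketched as follows. The distance matrix here is a toy stand-in for the ontology/WordNet-based program similarity, and details such as initialization and tie-breaking are assumptions rather than the paper's exact implementation.

```python
# Minimal K-medoids sketch over a precomputed distance matrix.
import random

def k_medoids(dist, k, iters=100, seed=0):
    """Cluster n items given a symmetric n x n distance matrix."""
    n = len(dist)
    rng = random.Random(seed)
    medoids = rng.sample(range(n), k)  # provisional medoids chosen at random
    clusters = {}
    for _ in range(iters):
        # Assignment step: each item joins its nearest medoid.
        clusters = {m: [] for m in medoids}
        for p in range(n):
            nearest = min(medoids, key=lambda m: dist[p][m])
            clusters[nearest].append(p)
        # Update step: within each cluster, the member minimizing the
        # total distance to the other members becomes the new medoid.
        new_medoids = [min(members, key=lambda c: sum(dist[c][p] for p in members))
                       for members in clusters.values()]
        if set(new_medoids) == set(medoids):
            break  # converged
        medoids = new_medoids
    return medoids, clusters

# Two obvious groups of "programs" at positions 0,1,2 and 10,11,12; the
# absolute-difference matrix stands in for program dissimilarity.
pos = [0, 1, 2, 10, 11, 12]
d = [[abs(a - b) for b in pos] for a in pos]
medoids, clusters = k_medoids(d, k=2)
print(sorted(sorted(v) for v in clusters.values()))  # [[0, 1, 2], [3, 4, 5]]
```

In the recommendation setting, the distance of each program to its cluster's medoid would then serve as the base ranking score, before the cluster weight is applied.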

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of discoveries such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, which transcends geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a larger contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration, extended to reflect work characteristics.
All analyses were conducted on actual behavioral data rather than self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the highest quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income within a group of people, was applied to capture the inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article status, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary with the characteristics of a group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles are those citing at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal.
We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect is more pronounced for academic tasks.
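
As a concrete illustration of the two focal measures, the sketch below computes the Pareto ratio (top-20% share of contributions) and the Gini coefficient for a toy list of per-editor edit counts. The numbers are invented, not Wikipedia data, and the paper's exact operationalization may differ.

```python
# Hedged sketch of the two focal measures on per-editor edit counts.

def pareto_ratio(contribs):
    """Share of total contributions made by the top 20% of contributors."""
    s = sorted(contribs, reverse=True)
    top = max(1, round(0.2 * len(s)))
    return sum(s[:top]) / sum(s)

def gini(contribs):
    """Gini coefficient of the contribution distribution (0 = perfect equality)."""
    s = sorted(contribs)
    n = len(s)
    cum = sum((i + 1) * x for i, x in enumerate(s))
    return (2 * cum) / (n * sum(s)) - (n + 1) / n

edits = [1, 1, 2, 2, 3, 4, 5, 8, 30, 44]  # 10 editors, illustrative counts
print(round(pareto_ratio(edits), 2))  # 0.74: top 2 editors made 74 of 100 edits
print(round(gini(edits), 2))          # 0.63: highly unequal distribution
```

In the study's Cox models, each featured article's editor group would yield one such pair of values, entered as the focal covariates alongside the seven controls.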