
A Study on Air Operator Certification and Safety Oversight Audit Program in light of the Convention on International Civil Aviation (시카고협약체계에서의 항공안전평가제도에 관한 연구)

  • Lee, Koo-Hee;Park, Won-Hwa
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.28 no.1
    • /
    • pp.115-157
    • /
    • 2013
  • Some contracting States of the Convention on International Civil Aviation (commonly known as the Chicago Convention) issue FAOC (Foreign AOC and/or Operations Specifications) and conduct various safety audits of foreign operators. These FAOC and safety audits of foreign operators are being expanded to other parts of the world. While this trend strengthens aviation safety and reduces aircraft accidents, it is a source of concern from both legal and economic perspectives. The FAOC of the USA doubly burdens the other contracting States to the Chicago Convention because it imposes requirements beyond those prescribed by the Convention, whose provisions are faithfully observed by almost all contracting States. The Chicago Convention in its Article 33 stipulates that each contracting State recognize the validity of the certificates of airworthiness and licenses issued by other contracting States as long as they meet the minimum standards of the ICAO. Consequently, it is submitted that the unilateral action of the USA, China, Mongolia, Australia, and the Philippines in issuing FAOC for the aircraft of other States is against the Convention. It is worrisome that this breach of international law is likely to be followed by the European Union, which is believed to be preparing its own unilateral application. ICAO, established by the Chicago Convention to be in charge of the safe and orderly development of international civil aviation, has worked hard to both upgrade and emphasize the safe operation of aircraft. As a result of these endeavors, it prepared a new Annex 19 to the Chicago Convention, titled "Safety Management", applicable from 14 November 2013. It is this Annex and the other ICAO documents relevant to safety that the contracting States to the Chicago Convention have to observe. Otherwise, an economic burden arises from probable delays in issuing the FAOC and from bureaucracy combined with differing paperwork and regulations depending on where the aircraft is flown. It is exactly this type of confusion and waste that the Chicago Convention aimed to avoid when it was adopted in 1944. The State of the operator shall establish a system for both the certification and the continued surveillance of the operator in accordance with ICAO SARPs to ensure that the required standards of operations are maintained. Certainly the operator shall meet and maintain the requirements established by the States in which it operates. The authority of a State stops where the authority of another State intervenes or where the former has yielded its power by an international agreement for the sake of international cooperation. Hence, it is not within the realm of a State to issue an FAOC to foreign operators merely because those operators fly in and out of the State. Furthermore, there are other safety audits, such as ICAO USOAP, IATA IOSA, FAA IASA, and EU SAFA, that assure the safe operation of aircraft, each within the limits of its power and in compliance with the ICAO SARPs. If the safety level of any operator is not satisfactory, the operator can be banned from operating in the contracting States, under watchful eyes, until the ICAO SARPs are met. This time-honoured practice has been applied without any serious problems. Besides, we now have the new Annex 19, which strengthens and upgrades safety management with easy reference for contracting States. There is no reason to introduce additional burdens on States through the unilateral actions of some States; these actions have to be corrected. On the other hand, when it comes to the carriage of the Personal or Pilot Log Book, the Korean regulation requiring it is in contrast with the relevant provisions of the USA, USOAP, IOSA, and SAFA. The Chicago Convention, in its Articles 29 and 34, requires only the carriage of the Journey Log Book and some other certificates, and does not mention the Personal Log Book at all. Paragraph 5.1.1.1 of Annex 1 to the Chicago Convention even makes it clear that the carriage in the aircraft of the Personal Log Book is not required on international flights. The unique Korean regulation in this regard, which places an unnecessary burden on the national flag air carriers, has to be lifted at once.


Image Watermarking for Copyright Protection of Images on Shopping Mall (쇼핑몰 이미지 저작권보호를 위한 영상 워터마킹)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.147-157
    • /
    • 2013
  • With the advent of a digital environment accessible anytime, anywhere through high-speed networks, the free distribution and use of digital content became possible. Ironically, this environment is raising a variety of copyright infringements, and product images used in online shopping malls are pirated frequently. It is controversial whether shopping mall images are creative works or not. According to a Supreme Court decision in 2001, advertising pictures of ham products were judged to be mere reproductions of the appearance of the objects, conveying nothing more, and thus not creative expression; however, the photographer's losses were recognized, and damages were estimated from the typical cost of such an advertising photo shoot. According to a Seoul District Court precedent in 2003, if the photographer's personality and creativity appear in the selection of the subject, the composition of the set, the direction and amount of light, the camera angle, the shutter speed and shutter chance, other shooting methods, and the developing and printing process, the work should be protected by copyright law. For shopping mall images to receive copyright protection under the law, it is not enough to simply convey the status of the product; effort in which the photographer's personality and creativity can be recognized is required. Accordingly, the cost of producing mall images increases, and the necessity for copyright protection becomes higher. The product images of online shopping malls have a very distinctive configuration, unlike general pictures such as portraits and landscape photos, so general image watermarking techniques cannot satisfy their watermarking requirements. Because the background of product images commonly used in shopping malls is white, black, or a gray-scale gradient, there is little space to embed a watermark, and such areas are very sensitive to even slight changes. In this paper, the characteristics of images used in shopping malls are analyzed and a watermarking technology suitable for shopping mall images is proposed. The proposed technology divides a product image into small blocks, transforms the corresponding blocks by DCT (Discrete Cosine Transform), and then inserts the watermark information using quantization of the DCT coefficients. Because uniform quantization of the DCT coefficients causes visible blocking artifacts, the proposed algorithm uses a weighted mask that quantizes the coefficients located at block boundaries finely and the coefficients located in the center area of a block coarsely. This mask improves the subjective visual quality as well as the objective quality of the images. In addition, to improve the safety of the algorithm, the blocks in which the watermark is embedded are randomly selected, and a turbo code is used to reduce the BER when extracting the watermark. The PSNR (Peak Signal-to-Noise Ratio) of shopping mall images watermarked by the proposed algorithm is 40.7~48.5 dB, and the BER (Bit Error Rate) after JPEG compression with QF = 70 is 0. This means the watermarked images are of high quality and the algorithm is robust to the JPEG compression generally used at online shopping malls. Also, for a 40% change in size and 40 degrees of rotation, the BER is 0. In general, shopping malls use compressed images with a QF higher than 90. Because a pirated image is replicated from the original image, the proposed algorithm can identify copyright infringement in most cases. As the experimental results show, the proposed algorithm is suitable for shopping mall images with simple backgrounds. However, future study should be carried out to enhance the robustness of the proposed algorithm, because robustness loss occurs after the mask process.
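The embedding scheme described above is, at its core, quantization of block-DCT coefficients. The following is a minimal sketch of that core idea in Python (one bit per 8x8 block via quantization index modulation); the block size, coefficient position, and quantization step are illustrative assumptions, and the paper's weighted boundary mask, random block selection, and turbo coding are omitted.

```python
# Minimal sketch of quantization-based block-DCT watermark embedding (QIM).
# Block size, coefficient position, and step are illustrative choices, not
# the paper's parameters; the weighted mask, random block selection, and
# turbo coding described in the abstract are omitted for brevity.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_bits(image, bits, step=12.0, pos=(2, 3), bs=8):
    """Embed one bit per 8x8 block by quantizing a mid-frequency DCT coefficient."""
    out = image.astype(float).copy()          # expects a 2-D grayscale array
    h, w = out.shape
    idx = 0
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            if idx >= len(bits):
                return out
            block = dct2(out[y:y+bs, x:x+bs])
            c = block[pos]
            q = np.round(c / step)            # even multiple -> bit 0, odd -> bit 1
            if int(q) % 2 != bits[idx]:
                q += 1 if c >= q * step else -1
            block[pos] = q * step
            out[y:y+bs, x:x+bs] = idct2(block)
            idx += 1
    return out

def extract_bits(image, n_bits, step=12.0, pos=(2, 3), bs=8):
    """Recover the bits by re-quantizing the same coefficient in each block."""
    bits = []
    h, w = image.shape
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            if len(bits) >= n_bits:
                return bits
            c = dct2(image[y:y+bs, x:x+bs].astype(float))[pos]
            bits.append(int(np.round(c / step)) % 2)
    return bits
```

Under this kind of scheme, JPEG compression perturbs each coefficient but usually leaves it inside the same quantization cell, which is why quantization-based embedding tolerates moderate compression.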

A study on Development Process of Fish Aquaculture in Japan - Case by Seabream Aquaculture - (일본 어류 양식업의 발전과정과 산지교체에 관한 연구 : 참돔양식업을 사례로)

  • 송정헌
    • The Journal of Fisheries Business Administration
    • /
    • v.34 no.2
    • /
    • pp.75-90
    • /
    • 2003
  • When we think of the fundamental problems of the aquaculture industry, there are several strict constraints, and consequently the industry is forced to change. Fish aquaculture faces a structural supply surplus in production, deterioration of fishing grounds, stagnant low prices due to the recent recession, and drastic changes in distribution circumstances. We need to initiate discussion on such issues as "how does fish aquaculture establish its status in the coastal fishery?", "will fish aquaculture grow in the future?", and if so, "how will it be restructured?" These issues can be observed in the mariculture of yellowtail, sea scallop, and eel. But there have been no studies concerning sea bream, even though its production has recently been over 30% of the total production of fish aquaculture and it occupies an important status in the industry. The objective of this study is to forecast the future movement of sea bream aquaculture. The first goal of the study is to contribute to managerial and economic studies on the aquaculture industry. The second goal is to identify the factors influencing the competition between production areas and to identify the mechanisms involved. This study examines the competitive power of individual producing areas, their behavior, and the factors compelling them, based on case studies. Producing areas are categorized according to the following parameters: distance to market and availability of transportation, natural environment, the time of formation of producing areas (leader/follower), major production items, scale of business and producing areas, and degree of organization in production and sales. As a factor shaping the production areas of sea bream aquaculture, natural conditions, especially water temperature, are very important. Sea bream shows more active feeding and faster growth in areas where the water temperature does not fall below 13~14°C during the winter. Fish aquaculture is also constrained by transport distance. Aquacultured yellowtail is a mass-produced and mass-distributed item; it is sold by the cage and transported by ship. On the other hand, sea bream is sold in small amounts in markets and transported by truck, so its transportation cost is higher than yellowtail's. Aquacultured sea bream has different product characteristics depending on transport distance, so we need to study the live fish and fresh fish markets separately. Live fish was the original product form of aquacultured sea bream. Transporting live fish has more constraints than transporting fresh fish: the death rate is highly correlated with distance, and the loading capacity for live fish is lower. In the case of a 10-ton truck, only up to 1.5 tons of live fish can be loaded, whereas fresh fish packed in boxes can be loaded up to 5 to 6 tons. Because of these characteristics, live fish requires a location closer to the consumption area than fresh fish. In the consumption markets, the size of fresh fish is mainly 0.8 to 2 kg. Live fish usually goes through auction, and its quality is graded; the main purchasers are many small restaurants, so relatively small farmers and distributors can sell it. Aquacultured sea bream has been transacted as fresh fish in GMS (General Merchandise Stores) since 1993, when the price plummeted. Economies of scale work in the case of fresh fish. The characteristics of fresh fish are as follows. As large-scale demanders, General Merchandise Stores are the main purchasers of sea bream, the size of the fish is around 1.3 kg, and transactions mainly go through negotiation. Aquacultured sea bream has been established as a representative food in General Merchandise Stores. GMS require a stable mass supply, consistent size, and low prices, and the distribution of fresh fish is undertaken by large-scale distributors, which can satisfy the requirements of GMS. The market shares in the Tokyo Central Wholesale Market show that Mie Prefecture dominates in live fish and Ehime Prefecture dominates in fresh fish. Ehime Prefecture showed remarkable growth in the 1990s. At present, dealings in live fish are decreasing while dealings in fresh fish are increasing in the Tokyo Central Wholesale Market, and the price of live fish is falling more than that of fresh fish. Even though Ehime Prefecture has an ideal natural environment for sea bream aquaculture, its entry into sea bream aquaculture was late because it was located farther from consumers than the competing producing areas. However, Ehime Prefecture became the number one producing area through sales of fresh fish in the 1990s; its production volume is almost three times that of Mie Prefecture, the number two production area. More conversion from yellowtail aquaculture to sea bream aquaculture is taking place in Ehime Prefecture, because Kagoshima Prefecture has a better natural environment for yellowtail aquaculture. Transportation is worse than in Mie Prefecture, but this far-flung producing region compensates by increasing its business scale. Ehime Prefecture increases its market share of fresh fish by creating demand from GMS, and it has developed market strategies such as quick returns at small profits, a stable mass supply, and standardization of size. Ehime Prefecture also increases its market power through the capital of large-scale commission agents. Mie Prefecture, by contrast, is close to markets and composed of small-scale farmers. Mie Prefecture switched to sea bream aquaculture early, because of the price decrease in aquacultured yellowtail and natural environmental problems, and it had not changed until 1993, when the price of sea bream plummeted, because it had a better natural environment and transportation. Mie Prefecture has a water temperature range suitable for sea bream aquaculture. However, the price of live sea bream continued to decline due to excessive production and the economic recession; as a consequence, small-scale farmers faced a market price below the average production cost in 1993. In this situation, the small and inefficient managers in Mie Prefecture were obliged to withdraw from sea bream aquaculture. Kumamoto Prefecture is located farther from market sites and has natural environmental conditions unsuitable for sea bream aquaculture. Although Kumamoto Prefecture is trying to convert to puffer fish aquaculture, which requires different rearing techniques, the aquaculture technique for puffer fish is not yet established.


Studies on the Estimation of K2O Requirement for rice through the Chemical Test Data of Paddy Top Soil (화학분석(化學分析)을 통(通)한 수도(水稻)의 가리적량(加里適量) 추정(推定)에 관한 연구(硏究))

  • Kim, Moon Kyu
    • Korean Journal of Agricultural Science
    • /
    • v.2 no.1
    • /
    • pp.61-100
    • /
    • 1975
  • This study has been made to find out the possibility of successfully using the following $K_2O$ recommendation equation: $K_2O\;(kg/10a) = \left(K_O/\sqrt{Ca+Mg} - K_S/\sqrt{Ca+Mg}\right)\cdot\sqrt{Ca+Mg}\cdot 47\cdot B.D.$, where $K_O/\sqrt{Ca+Mg} = 0.03518 + 0.0007658\;SiO_2/O.M.$, $K_S/\sqrt{Ca+Mg}$ = exchangeable K (me/100g) / $\sqrt{\text{total soluble}\;(Ca+Mg)\;\text{me/100g in soil}}$, and B.D. = bulk density of the top soil, when the dose of nitrogen for rice is estimated from the following equation: $N\;(kg/10a) = (4.2 + 0.096\;SiO_2/O.M.)\cdot F$, where $F = 0.907 + 0.263x - 0.013x^2$ and $SiO_2/O.M.$ = (available $SiO_2$, ppm)/(organic matter, %) in soil. For this, two field experiments, one in a sandy and the other in a clay paddy soil, were conducted using 3 levels of wollastonite (0, 500, 100 kg/10a) as main treatments; 3 levels of $K_2O$ application were used as sub-plots, as follows: (1) 8 kg of $K_2O$/10a regardless of the K activity $K/\sqrt{Ca+Mg}$; (2) the dose of $K_2O$ (kg/10a) estimated from the above equation; and (3) same as (2) plus an additional 30% of $K_2O$. The dose of N (kg/10a) was determined from the above equation based on the value of the $SiO_2$/O.M. ratio in each treatment. There were three replications. The leading rice variety in the Chung Chong Nam Do area, Akibare (introduced from Japan), was used. The data obtained through soil and plant analysis and through growth and yield observations have been thoroughly examined to reach the following summarized conclusions. 1. The nitrogen dose estimated from the above equation was in excess for optimum growth of the rice variety Akibare, indicating the necessity of modifying the value of "F" or the constants in the equation. The concept of using $SiO_2$/O.M. in the equation was shown to be applicable. 2. The dose of potash estimated from the respective equation given above was also in excess of the rice requirements, indicating the necessity of a minor change in the estimation of the $K_O/\sqrt{Ca+Mg}$ value and a considerable modification in the calculation of the $K_S/\sqrt{Ca+Mg}$ value for the equation; however, the concept of using $K/\sqrt{Ca+Mg}$ as a basis of the $K_2O$ recommendation was shown to be quite reasonable. 3. It was found, from the correlation study using the data of paddy yield and the amount of $K_2O$ absorbed by the rice plants, that substituting $K_S/\sqrt{Ca+Mg} = 0.037 + 0.78\,K$ (me/100g soil) in the equation was much more applicable than using the value calculated from the data of soil and wollastonite analysis.
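Since the recommendation formulas above are self-contained, a short worked example may clarify how they would be applied to soil-test values. The sketch below uses entirely hypothetical inputs; the meaning of $x$ in $F$ is not specified in the abstract, so it is passed through as an opaque parameter.

```python
# Worked sketch of the recommendation equations as reconstructed above.
# All soil-test inputs below are hypothetical values for illustration only.
import math

def k2o_dose(exch_k, ca_mg, sio2_om, bulk_density):
    """K2O kg/10a = (K_O/sqrt(Ca+Mg) - K_S/sqrt(Ca+Mg)) * sqrt(Ca+Mg) * 47 * B.D."""
    ko_ratio = 0.03518 + 0.0007658 * sio2_om   # K_O / sqrt(Ca+Mg)
    ks_ratio = exch_k / math.sqrt(ca_mg)       # K_S / sqrt(Ca+Mg)
    return (ko_ratio - ks_ratio) * math.sqrt(ca_mg) * 47 * bulk_density

def n_dose(sio2_om, x):
    """N kg/10a = (4.2 + 0.096 * SiO2/O.M.) * F, with F = 0.907 + 0.263x - 0.013x^2.
    The abstract does not define x, so it is treated as an opaque input here."""
    f = 0.907 + 0.263 * x - 0.013 * x ** 2
    return (4.2 + 0.096 * sio2_om) * f

# Hypothetical topsoil test: exchangeable K 0.12 me/100g, soluble Ca+Mg 8 me/100g,
# available SiO2 / organic matter ratio 40, bulk density 1.2.
print(round(k2o_dose(0.12, 8.0, 40.0, 1.2), 1), "kg K2O/10a")   # ~3.7
print(round(n_dose(40.0, 2.0), 1), "kg N/10a")                  # ~11.1
```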


A Study on Aviation Safety and Third Country Operator of EU Regulation in light of the Convention on international Civil Aviation (시카고협약체계에서의 EU의 항공법규체계 연구 - TCO 규정을 중심으로 -)

  • Lee, Koo-Hee
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.29 no.1
    • /
    • pp.67-95
    • /
    • 2014
  • Some Contracting States of the Chicago Convention issue FAOC (Foreign Air Operator Certificates) and conduct various safety assessments of the foreign operators that fly into their territory. These FAOC and safety audits of foreign operators are being expanded to other parts of the world. While this trend strengthens aviation safety and reduces aircraft accidents, FAOC also burdens the other contracting States to the Chicago Convention with additional requirements and late permissions. EASA (European Aviation Safety Agency) is a body governed by the European Basic Regulation. EASA was set up in 2003 and conducts specific regulatory and executive tasks in the field of civil aviation safety and environmental protection. EASA's mission is to promote the highest common standards of safety and environmental protection in civil aviation. The tasks of EASA have been expanded from airworthiness to air operations and currently include the rulemaking and standardization of airworthiness, air crew, air operations, TCO, ATM/ANS safety oversight, aerodromes, etc. According to the Implementing Rule, Commission Regulation (EU) No 452/2014, EASA has the mandate to issue safety authorizations to commercial air carriers from outside the EU as from 26 May 2014. Third country operators (TCO) flying to any of the 28 EU Member States and/or to the 4 EFTA States (Iceland, Norway, Liechtenstein, Switzerland) must apply to EASA for a so-called TCO authorization. EASA only takes over the safety-related part of the foreign operator assessment; operating permits continue to be issued by the national authorities. A 30-month transition period ensures smooth implementation without interrupting the international air operations of foreign air carriers to the EU/EASA. Operators who are currently flying to Europe can continue to do so, but must submit an application for a TCO authorization before 26 November 2014. After the transition period, which lasts until 26 November 2016, a valid TCO authorization will be a mandatory prerequisite, in the absence of which an operating permit cannot be issued by a Member State. The European TCO authorization regime does not, in principle, differentiate between scheduled and non-scheduled commercial air transport operations: all TCOs performing commercial air transport need to apply for a TCO authorization. Operators with a potential need to operate to the EU at some time in the near future are advised to apply for a TCO authorization in due course, even when the date of operations is unknown. Regarding all the issues mentioned above, this paper studies the function of EASA and the EU regulations, including the newly introduced TCO Implementing Rule, and suggests some proposals. It is hoped that this paper helps 1) the preparation of TCO authorizations, 2) the understanding of this international issue, 3) the improvement of Korean aviation regulations and government organizations, and 4) compliance with international standards, thereby contributing to the promotion of aviation safety.

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.163-176
    • /
    • 2014
  • Social media is becoming the platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained in popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by recommending shorter posts. There has been a lot of research into capturing social phenomena and analyzing the chatter of microblogs. However, measuring television ratings has been given little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch. In a similar way, microblog users interact with each other while watching television or movies, or visiting a new place. For measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Therefore, modeling the time-related characteristics of features should be key when measuring TV ratings through microblogs. We show that capturing the time-dependency of features is vitally necessary for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. There are about 300 thousand posts in our data set for the experiment. After excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum level on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings. This result implies that a simple tweet rate does not reflect the satisfaction with or response to the TV programs. Content-based features extracted from the content of tweets have a relatively high correlation with TV ratings. Further, some emoticons or newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We find that there is a time-dependency in the correlation of features between the periods before and after broadcasting time. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectation of the program or their disappointment over not being able to watch it. The features highly correlated before the broadcast differ from the features after broadcasting. This result shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words have their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after broadcasting. Interestingly, some words that express the impossibility of watching the program show a high relevance despite carrying a negative meaning. Understanding the time-dependency of features can be helpful in improving the accuracy of TV ratings measurement. This research contributes a basis for estimating the response to or satisfaction with broadcast programs using the time-dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings.
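As an illustration of the time-dependency analysis described above, the sketch below correlates per-episode term counts with ratings separately for tweets posted before and after broadcast time. All column names and the toy records are hypothetical stand-ins, not the paper's data or feature set.

```python
# Minimal sketch: correlate per-episode term frequencies with TV ratings,
# split by whether the tweet was posted before or after broadcast time.
# The records and ratings below are hypothetical toy data.
import pandas as pd

tweets = pd.DataFrame({
    "episode": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "phase":   ["before", "after", "after",
                "before", "before", "after",
                "before", "after", "after"],   # relative to broadcast time
    "text":    ["hope it's good", "so good", "great ending",
                "can't wait tonight", "excited", "boring",
                "excited again", "so good", "excited, loved it"],
})
ratings = pd.Series({1: 8.2, 2: 6.9, 3: 9.1}, name="rating")  # hypothetical

def phase_correlation(term, phase):
    """Pearson correlation between a term's per-episode count and ratings."""
    subset = tweets[tweets["phase"] == phase]
    counts = (subset.groupby("episode")["text"]
                    .apply(lambda s: s.str.contains(term).sum())
                    .reindex(ratings.index, fill_value=0))
    return counts.corr(ratings)

for term in ["excited", "good"]:
    print(term,
          "before:", round(phase_correlation(term, "before"), 2),
          "after:",  round(phase_correlation(term, "after"), 2))
```

A term whose correlation flips or fades between the two phases is exactly the kind of time-dependent feature the study argues a ratings model must treat separately.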

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows, consisting of 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk using each company's market capitalization and stock price volatility, based on the Merton model. Through this, it was able to solve the problem of data imbalance due to the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can be appropriately derived. This makes it possible to provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although there have recently been active studies of predicting corporate default risk using machine learning, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely utilized in the market and sensitivity to differences in default risk is high; strict standards are also required for the methods of calculation. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained with the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs between the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, we used the nonparametric Wilcoxon rank-sum test to check whether the two models' forecasts making up each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed in a statistically significant way from those of the MLP model and the CNN model. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction methodologies, given that traditional credit rating models can also be incorporated as sub-models in calculating the final default probability. The stacking ensemble technique proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving upon the limitations of existing machine learning-based models.
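A minimal sketch of this kind of stacking setup, using scikit-learn, is shown below. The seven-piece split of the training data is mirrored here by cv=7 out-of-fold sub-model forecasts; the synthetic features stand in for the 160 financial columns, and the paper's CNN sub-model is omitted since scikit-learn provides none.

```python
# Minimal sketch of a stacking ensemble for a continuous default-risk target.
# Synthetic data stands in for the paper's 160 financial columns; the CNN
# sub-model is omitted (not available in scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))   # placeholder financial features
y = 1 / (1 + np.exp(-X[:, :3].sum(axis=1)          # placeholder risk score
                    + rng.normal(scale=0.5, size=1000)))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf",  RandomForestRegressor(n_estimators=200, random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000,
                             random_state=0)),
    ],
    final_estimator=LinearRegression(),
    cv=7,   # out-of-fold sub-model forecasts, echoing the seven-piece split
)
stack.fit(X_tr, y_tr)
print("stacked R^2:", round(stack.score(X_te, y_te), 3))
```

The same pattern accepts any estimator as a sub-model, so a traditional credit rating model wrapped in the estimator interface could be stacked alongside the machine learning models, in line with the abstract's suggestion.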

Culture Conditions of Aspergillus oryzae in Dried Food-Waste and the Effects of Feeding the AO Ferments on Nutrients Availability in Chickens (건조한 남은 음식물을 이용한 Aspergillus oryzae균주 배양조건과 그 배양물 급여가 닭의 영양소 이용률에 미치는 영향)

  • Hwangbo J.;Hong E. C.;Lee B. S.;Bae H. D.;Kim W.;Nho W. G.;Kim J. H.;Kim I. H.
    • Korean Journal of Poultry Science
    • /
    • v.32 no.4
    • /
    • pp.291-300
    • /
    • 2005
  • Two experiments were carried out to assess the appropriate incubation conditions, namely duration, moisture content, and the ideal microbial inoculant dosage, for fermented dried food waste (FW) offered to broilers. The nutrient utilization of birds fed the FW diets at varying dietary inclusion rates was also compared with a control diet. In Experiment 1, different moisture contents (MC) of 30, 40, 50, and 60% were predetermined to establish the ideal duration of incubation and the microbial inoculant dosage. A 1 mL Aspergillus oryzae (AO) seed ($1.33\times10^5$ CFU/mL) was used as the inoculant in FW. The results indicated that the ideal MC for incubation was 40~50%, while the required incubation time was > 72 hours. Consequently, AO seed at 0.25, 0.50, 0.75, and 1.00 mL was inoculated into FW to determine its effect on the AO count. The comparative AO counts of FW incubated for 12 and 96 hours, respectively, showed no significant differences among the varying inoculant dosage rates. FW inoculated with lower AO seed amounts of 0.10, 0.05, and 0.01 mL was likewise incubated for 72 and 96 hours, respectively, and no change in AO count was detected (p<0.05). The above findings indicated that the incubation requirements for FW should be 40~50% MC for 72 hours with an AO seed inoculant dosage of 0.10 mL. Consequently, in Experiment 2, after determining the appropriate processing conditions for the FW, twenty 5-week-old male Hubbard strain birds were used in a digestibility experiment. The birds were divided into 4 groups with 5 pens each (1 bird per pen). The dietary treatments were: Treatment 1, control (basal diet); Treatment 2, 60% basal + 40% FW; Treatment 3, 60% basal + 20% FW + 20% AFW (Aspergillus oryzae-inoculated dried food-waste diet); and Treatment 4, 60% basal + 40% AFW. The digestibility in Treatment 2 was lower for common nutrients and amino acids compared with the control (p<0.05), and lower for crude fat and phosphorus compared with the AFW treatments (T3, T4) (p<0.05). The digestibility in Treatments 3 and 4 increased for crude fiber and crude ash compared with Treatment 2 (p<0.05). The digestibility in the control was higher for arginine, leucine, and phenylalanine among the essential amino acids compared with Treatments 3 and 4 (p<0.05), and the digestibility in Treatments 3 and 4 was improved for arginine, lysine, and threonine among the essential amino acids. Finally, despite comparable nutrient utilization among treatments, birds fed the dietary treatments containing AO tended to show superior nutrient digestion to those fed 60% basal + 40% FW.

Coffee consumption behaviors, dietary habits, and dietary nutrient intakes according to coffee intake amount among university students (일부 대학생의 커피섭취량에 따른 커피섭취행동, 식습관 및 식사 영양소 섭취)

  • Kim, Sun-Hyo
    • Journal of Nutrition and Health
    • /
    • v.50 no.3
    • /
    • pp.270-283
    • /
    • 2017
  • Purpose: This study was conducted to examine coffee consumption behaviors, dietary habits, and nutrient intakes by coffee intake amount among university students. Methods: Questionnaires were distributed to 300 university students randomly selected in Gongju. A dietary survey was administered over two weekdays by the food record method. Results: Subjects were divided into three groups by coffee intake amount and subjects' distribution: NCG (non-coffee group), LCG (low coffee group, 1~2 cups/d), and HCG (high coffee group, 3 cups/d). Coffee intake frequency was significantly greater in the HCG than in the LCG (p < 0.001). The HCG was more likely to drink dripped coffee with or without milk and/or sugar than the LCG (p < 0.05). More than 80% of coffee drinkers chose their favorite coffee or accompanying snacks regardless of energy content. More than 75% of coffee drinkers did not eat accompanying snacks instead of meals, and the HCG ate them more frequently than the LCG (p < 0.05). The breakfast skipping rate was high, while vegetable and fruit intakes were very low in most subjects. Subjects who drank carbonated drinks, sweet beverages, or alcohol were significantly more numerous in the LCG and HCG than in the NCG (p < 0.01). Energy intakes from coffee were 0.88 ± 5.62 kcal/d and 7.07 ± 16.93 kcal/d for the LCG and HCG, respectively. For all subjects, the daily mean dietary energy intake was low, at less than 72% of the estimated energy requirement. Intakes of vitamin C and calcium were lower than the estimated average requirements, while that of vitamin D was low (24~34% of adequate intake). There was no difference in nutrient intakes by coffee intake amount, except for protein, vitamin A, and niacin. Conclusion: Coffee intake amount did not affect dietary nutrient intakes. Dietary habits were poor, and most nutrient intakes were lower than recommended levels. High coffee intake seemed to be related to high consumption of sweet beverages and alcohol. Therefore, it is necessary to improve nutritional intakes and encourage proper water intake habits, including coffee intake, for the improved nutritional status of the subjects.

Analysis of Metadata Standards of Record Management for Metadata Interoperability From the viewpoint of the Task model and 5W1H (메타데이터 상호운용성을 위한 기록관리 메타데이터 표준 분석 5W1H와 태스크 모델의 관점에서)

  • Baek, Jae-Eun;Sugimoto, Shigeo
    • The Korean Journal of Archival Studies
    • /
    • no.32
    • /
    • pp.127-176
    • /
    • 2012
  • Metadata is well recognized as one of the foundational factors in archiving and the long-term preservation of digital resources. There are several metadata standards for records management, archives, and preservation, e.g. ISAD(G), EAD, AGRkMS, PREMIS, and OAIS. Careful consideration is important in selecting appropriate metadata standards in order to design metadata schemas that meet the requirements of a particular archival system, and the interoperability of the metadata with other systems should be considered in schema design. In our previous research, we presented a feature analysis of metadata standards by identifying the primary resource lifecycle stages where each standard is applied, and we clarified that no single metadata standard can cover the whole records lifecycle for archiving and preservation. Through this feature analysis, we analyzed the features of metadata across the whole records lifecycle and clarified the relationships between the metadata standards and the stages of the lifecycle; more detailed analysis was left for future study. This paper proposes to analyze the metadata schemas from the viewpoint of the tasks performed in the lifecycle. Metadata schemas are primarily defined to describe the properties of a resource in accordance with the purposes of description, e.g. finding aids, records management, preservation, and so forth. In other words, the metadata standards are resource- and purpose-centric, and the resource lifecycle is not explicitly reflected in the standards; there are no systematic methods for mapping between different metadata standards in accordance with the lifecycle. This paper proposes a method for mapping between metadata standards based on the tasks contained in the resource lifecycle. We first propose a Task Model to clarify the tasks applied to resources in each stage of the lifecycle. This model is created as a task-centric model to identify the features of metadata standards and to create mappings among the elements of those standards. It is important to categorize the elements in order to limit the semantic scope of mapping among elements and to decrease the number of element combinations for mapping. This paper proposes to use the 5W1H (Who, What, Why, When, Where, How) model to categorize the elements. 5W1H categories are generally used for describing events, e.g. in news articles. As performing a task on a resource causes an event, and metadata elements are used in that event, we consider the 5W1H categories adequate for categorizing the elements. Using these categories, we determine the features of every element of the metadata standards AGLS, AGRkMS, PREMIS, EAD, and OAIS, and of an attribute set extracted from the DPC decision flow. Then we perform element mapping between the standards and find the relationships between them. In this study, we defined a set of terms for each of the 5W1H categories, which typically appear in the definition of an element, and used those terms to categorize the elements. For example, if the definition of an element includes terms such as person or organization, which denote a subject contributing to the creation or modification of a resource, the element is categorized into the Who category. A single element can be categorized into one or more 5W1H categories. Thus, we categorized every element of the metadata standards using the 5W1H model and then carried out mapping among the elements in each category. We conclude that the Task Model provides a new viewpoint on metadata schemas and is useful in helping us understand the features of metadata standards for records management and archives. The 5W1H model, defined on the basis of the Task Model, provides us with a core set of categories for semantically classifying metadata elements from the viewpoint of an event caused by a task.
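The term-based categorization step lends itself to a simple sketch. The code below assigns metadata elements to 5W1H categories by matching trigger terms in their definitions; the trigger-term lists and the sample element definitions are illustrative guesses, not the study's actual term sets.

```python
# Minimal sketch of term-based 5W1H categorization of metadata elements.
# The trigger-term lists and sample definitions are illustrative guesses.
CATEGORY_TERMS = {
    "Who":   ["person", "organization", "agent", "creator"],
    "What":  ["content", "object", "record", "format"],
    "Why":   ["purpose", "reason", "function"],
    "When":  ["date", "time", "period"],
    "Where": ["place", "location", "repository"],
    "How":   ["method", "process", "procedure", "technique"],
}

def categorize(definition):
    """Return every 5W1H category whose trigger terms appear in a definition;
    a single element may fall into more than one category."""
    text = definition.lower()
    return [cat for cat, terms in CATEGORY_TERMS.items()
            if any(term in text for term in terms)] or ["uncategorized"]

# Hypothetical element definitions in the style of EAD / PREMIS dictionaries.
elements = {
    "origination":   "The person or organization responsible for creating the records.",
    "eventDateTime": "The single date and time at which the event occurred.",
}
for name, definition in elements.items():
    print(name, "->", categorize(definition))
```

Once every element of two standards is tagged this way, candidate mappings only need to be compared within the same category, which is exactly how the 5W1H model limits the semantic scope and the number of element combinations.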