• Title/Summary/Keyword: Information Systems discipline


BEEF MEAT TRACEABILITY. CAN NIRS HELP?

  • Cozzolino, D.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1246-1246
    • /
    • 2001
  • The quality of meat is highly variable in many properties. This variability originates from both animal production and meat processing. At the pre-slaughter stage, animal factors such as breed, sex and age contribute to this variability. Environmental factors include feeding, rearing, transport and conditions just before slaughter (Hildrum et al., 1995). Meat can be presented in a variety of forms, each offering different opportunities for adulteration and contamination. This has imposed great pressure on the food manufacturing industry to guarantee the safety of meat. Tissue and muscle speciation of flesh foods, as well as speciation of animal-derived by-products fed to all classes of domestic animals, is now perhaps the most important uncertainty which the food industry must resolve to allay consumer concern. Recently, there has been a demand for rapid and low-cost methods of direct quality measurement in both food and food ingredients (including high performance liquid chromatography (HPLC), thin layer chromatography (TLC), enzymatic and immunological tests (e.g. the ELISA test) and physical tests) to establish their authenticity and hence guarantee the quality of products manufactured for consumers (Holland et al., 1998). The use of Near Infrared Reflectance Spectroscopy (NIRS) for the rapid, precise and non-destructive analysis of a wide range of organic materials has been comprehensively documented (Osborne et al., 1993). Most of the established methods have involved the development of NIRS calibrations for the quantitative prediction of composition in meat (Ben-Gera and Norris, 1968; Lanza, 1983; Clark and Short, 1994). This was a rational strategy to pursue during the initial stages of its application, given the type of equipment available, the state of development of the emerging discipline of chemometrics and the overwhelming commercial interest in solving such problems (Downey, 1994). One of the advantages of NIRS technology is not only to assess chemical structures through the analysis of molecular bonds in the near infrared spectrum, but also to build an optical model characteristic of the sample which behaves like a “fingerprint” of the sample. This opens the possibility of using spectra to determine complex attributes of organic structures, which are related to molecular chromophores, organoleptic scores and sensory characteristics (Hildrum et al., 1994, 1995; Park et al., 1998). In addition, statistical techniques such as principal component analysis or discriminant analysis make it possible to understand the optical properties of the sample and to perform a classification without the chemical information. The objectives of the present work were: (1) to examine two methods of sample presentation to the instrument (intact and minced) and (2) to explore the use of principal component analysis (PCA) and Soft Independent Modelling of Class Analogy (SIMCA) to classify muscles by quality attributes. Seventy-eight (n = 78) beef muscles (m. longissimus dorsi) from Hereford cattle were used. The samples were scanned on a NIRS monochromator instrument (NIR Systems 6500, Silver Spring, MD, USA) in reflectance mode (log 1/R). Both intact and minced presentations to the instrument were explored. Qualitative analysis of the optical information through PCA and SIMCA showed differences between muscles resulting from two different feeding systems.

  • PDF
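
The classification approach sketched in this abstract, PCA combined with SIMCA class modelling of NIR spectra, can be illustrated with a short, hedged example. The snippet below is a minimal sketch, not the authors' code: it generates synthetic log(1/R) spectra for two hypothetical classes standing in for the two feeding systems, fits one PCA model per class with scikit-learn, and classifies a new spectrum by its reconstruction residual under each class model, which is the core idea behind SIMCA.

```python
# Minimal sketch of PCA + SIMCA-style classification of NIR spectra.
# Synthetic data stands in for the log(1/R) spectra described in the abstract;
# the two classes represent the two feeding systems. Not the authors' code.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def simulate_spectra(n, shift):
    """Generate toy log(1/R) spectra (700 wavelengths) for one feeding system."""
    wavelengths = np.linspace(1100, 2500, 700)
    base = np.exp(-((wavelengths - 1940 - shift) / 300) ** 2)  # broad absorption band
    return base + 0.02 * rng.standard_normal((n, wavelengths.size))

class_a = simulate_spectra(39, shift=0)    # feeding system A
class_b = simulate_spectra(39, shift=40)   # feeding system B

def fit_class_model(X, n_components=3):
    """SIMCA-style class model: a PCA plus the typical residual distance."""
    pca = PCA(n_components=n_components).fit(X)
    resid = X - pca.inverse_transform(pca.transform(X))
    s0 = np.sqrt(np.mean(np.sum(resid ** 2, axis=1)))
    return pca, s0

models = {"A": fit_class_model(class_a), "B": fit_class_model(class_b)}

def classify(spectrum):
    """Assign the class whose PCA model reconstructs the spectrum best."""
    scores = {}
    for label, (pca, s0) in models.items():
        recon = pca.inverse_transform(pca.transform(spectrum[None, :]))
        scores[label] = np.sqrt(np.sum((spectrum - recon) ** 2)) / s0
    return min(scores, key=scores.get), scores

label, scores = classify(class_b[0])
print(label, scores)
```

In a real SIMCA workflow the residual distances would be compared against class-specific critical limits rather than simply minimized, but the relative-residual rule above captures the qualitative behaviour the abstract reports.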

The Effect of C Language Output Method to the Performance of CGI Gateway in the UNIX Systems (유닉스 시스템에서 C 언어 출력 방법이 CGI 게이트웨이 성능에 미치는 영향)

  • Lee Hyung-Bong;Jeong Yeon-Chul;Kweon Ki-Hyeon
    • The KIPS Transactions:PartC
    • /
    • v.12C no.1 s.97
    • /
    • pp.147-156
    • /
    • 2005
  • CGI is a standard interface between a web server and a gateway, devised so that the gateway's standard output can replace a static web document in a UNIX environment. It is therefore common to use the standard I/O statements provided by the programming language in a CGI gateway. But the standard I/O mechanism is a buffering strategy designed to be transparent to the operating system and optimized for generic cases. This means that it may be useful to apply additional optimization to the standard I/O environment in a CGI gateway. In this paper, we introduced the standard output method and the file output method as two output optimization areas for CGI gateways written in C on UNIX/LINUX systems, and applied the proposed methods of each area to Debian LINUX, IBM AIX, SUN Solaris and Digital UNIX respectively. We then analyzed their effect, focusing on execution time. The results differed from operating system to operating system. Compared to the normal case, the best case in the standard output area showed about a 10% improvement, while the worst case showed a 60% degradation in the file output area, where some performance improvement had been expected.
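
The effect this abstract measures comes from how aggressively small writes are buffered in user space before reaching the kernel. The sketch below is only a Python analogue of that idea, not the paper's C code (which would typically rely on facilities such as setvbuf or write(2)): it times a response built from many small writes once with one system call per line and once with a large user-space buffer. The file names, line contents, and sizes are arbitrary assumptions.

```python
# Analogous illustration (Python, not the paper's C methods) of how the output
# buffering strategy affects the cost of emitting a CGI-style response that
# consists of many small writes.
import os
import time

N_LINES = 200_000
LINE = b"<tr><td>row</td></tr>\n"

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.3f} s")

def unbuffered_writes(path):
    # One system call per line: roughly what an unbuffered gateway would do.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for _ in range(N_LINES):
            os.write(fd, LINE)
    finally:
        os.close(fd)

def fully_buffered_writes(path, bufsize=1 << 20):
    # Large user-space buffer: many lines are coalesced into few system calls,
    # which is the kind of output tuning the paper studies for C gateways.
    with open(path, "wb", buffering=bufsize) as f:
        for _ in range(N_LINES):
            f.write(LINE)

timed("unbuffered", lambda: unbuffered_writes("/tmp/cgi_unbuffered.html"))
timed("buffered  ", lambda: fully_buffered_writes("/tmp/cgi_buffered.html"))
```

As the abstract notes, the payoff of such tuning is platform dependent; the same change can help on one UNIX variant and hurt on another.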

Remedies for the Seller's Delivery of Defective Goods under EC Directive in Comparison with English Law, Korean Law and CISG (EC Directive상 하자물품에 대한 매수인의 구제제도에 관한 비교연구)

  • Lee, Byung-Mun
    • THE INTERNATIONAL COMMERCE & LAW REVIEW
    • /
    • v.19
    • /
    • pp.33-66
    • /
    • 2003
  • This is a comparative and analytical study of the rules on the buyer's remedies where the seller delivers defective goods under four legal systems: the Directive, the CISG, English law and Korean law. In light of the threefold main purposes of this study, it firstly attempts to describe and analyze the remedy provisions of the Directive in a comparative way in order to provide legal advice to sellers who plan to enter English consumer markets. It shows that the two-tier remedial system under the Directive is not much different from that of the other jurisdictions, except that the right of rescission under the Directive is absolute in the sense that it does not require a certain degree of seriousness of the defect. Secondly, the study compares the rules of one jurisdiction with those of the others and evaluates them in light of the discipline of comparative law, the basic question of which is whether a solution from one jurisdiction may facilitate the systematic development and reform of another. It demonstrates the following: (1) the reluctance and uncertainty in English law in ordering specific performance on the basis of discretionary power does not reflect the parties' preference, because the order is either uncertain or unhelpful where the purchase of substitute goods elsewhere is not a satisfactory solution, as is the case in many situations; (2) the position in Korean law, which places no limitation on the right to require substitute goods, is likely unfair in commercial sales but justified in consumer sales; (3) the right of termination or reduction under the Directive, which is subject to the prior applicability of the right to require repair or substitute goods, seems contrary to the consumer's preference where the defective delivery destroys the basis of trust in the quality of the seller's performance; (4) the absolute right of termination under the Directive and English law seems crucial in consumer sales because consumers are often inferior to commercial sellers in terms of information and bargaining power; (5) the right of reduction as a self-help remedy, which is absent in English law, underlines its usefulness. Thirdly, it finds that, where the CISG is deemed to have failed to unify the differing rules on the right to require specific performance between the civil and common law, the attempt is made once again in the Directive and, notwithstanding English law's hostility to awarding the right to require specific performance, the Regulations 2002 expressly stipulate such a right.

  • PDF

Risks and Supervisory Challenges of Financial Conglomerates in Korea (금융그룹화와 금융위험: 실증분석 및 정책과제)

  • Hahm, Joon-Ho;Kim, Joon-Kyung
    • KDI Journal of Economic Policy
    • /
    • v.28 no.1
    • /
    • pp.145-191
    • /
    • 2006
  • This paper studies the implications of financial conglomeration for both the financial risk of individual conglomerates and systemic risk potential in post-crisis Korea. Our analyses suggest that we cannot conclude that financial conglomerates are taking on higher risks relative to non-conglomerate independent institutions. We also find that larger financial institutions show significantly higher profitability and lower variability in profitability, operating on a superior efficient frontier. However, it turns out that consolidation has raised systemic risk potential, as direct and indirect interdependencies among large banking institutions have substantially increased. Furthermore, financial conglomerates have become more vulnerable to contagion risks from non-bank sectors and capital markets. In the face of the shifting risk structure, financial supervisory and regulatory systems must be upgraded toward more risk-based, consolidated supervision. The prompt corrective action provision for financial conglomerates must be based upon fully consolidated group risks, and effective supervisory devices need to be introduced to avoid the inadvertent extension of the public safety net to cross-sectoral activities of financial conglomerates. It is also critical to strengthen internal control and risk management capacities at financial conglomerates, and to establish strong market discipline by improving information transparency and monitoring incentives in the financial market.

  • PDF

Development of Design Space Exploration for Warship using the Concept of Negative Design (네거티브 설계 개념을 이용한 함정 설계영역탐색법 개발)

  • Park, Jin-Won
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.9
    • /
    • pp.412-419
    • /
    • 2019
  • Negative space, in the discipline of art, is the space around and between the subject of an image. The use of negative space is an element of artistic composition, since it is occasionally used to artistic effect as the "real" subject of an image. In painting, it is a technique of working the background around the object to be expressed, giving a distinctive texture and silhouette by touching the unnecessary parts while leaving the necessary parts untouched. As in art, negative space in design can be useful for identifying, at a glance, an image of the infeasible design ranges. This similarity between the two disciplines motivates the introduction of the negative space concept into design space exploration. A rough design space exploration using statistics and visual analytics may support more efficient decision-making and can provide meaningful insights into the direction of early-phase system design. To this end, the approach relies on dynamic interaction between visualized information and the human cognitive system. Visual analytics is useful for summarizing complex, large-scale data, for identifying feasible design spaces, and for avoiding infeasible or highly risky spaces. This paper investigates the possible use of the negative space concept through an application example.
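
The "negative space" idea described above, letting the infeasible region outline the feasible one, can be shown with a small, hedged sketch. The example below is an illustration only: the two design variables (hull length and displacement) and the constraints are assumptions invented for the plot, not the paper's actual warship design model.

```python
# Minimal sketch of negative-space design space exploration:
# sample a hypothetical two-variable design space, mark the infeasible
# ("negative") region, and plot it so the feasible region stands out by contrast.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 5000

length = rng.uniform(100, 180, n)          # hypothetical hull length [m]
displacement = rng.uniform(2000, 9000, n)  # hypothetical displacement [t]

# Illustrative constraints standing in for stability / speed / cost limits.
feasible = (
    (displacement > 25 * length)             # enough displacement for the length
    & (displacement < 55 * length)           # not excessively heavy
    & (length + displacement / 100 < 240)    # a combined budget-style limit
)

plt.scatter(length[~feasible], displacement[~feasible], s=4, c="lightgray",
            label="negative space (infeasible)")
plt.scatter(length[feasible], displacement[feasible], s=4, c="tab:blue",
            label="feasible designs")
plt.xlabel("length [m]")
plt.ylabel("displacement [t]")
plt.legend()
plt.title("Feasible region outlined by its negative space")
plt.show()
```

Plotting the infeasible samples in a muted colour, rather than discarding them, is what makes the feasible region's silhouette visible at a glance, mirroring the painting analogy in the abstract.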

Privilege and Immunity of Information and Data from Aviation Safety Programs in the United States (미국 항공안전데이터 프로그램의 비공개 특권과 제재 면제에 관한 연구)

  • Moon, Joon-Jo
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.23 no.2
    • /
    • pp.137-172
    • /
    • 2008
  • The earliest safety data programs, the FDR and CVR, were electronic reporting systems that generate data "automatically." The FDR program, originally instituted in 1958, had no publicly available restrictions for protections against sanctions by the FAA or an airline, although there are agreements and union contracts forbidding the use of FDR data for FAA enforcement actions. This FDR program still has the least formalized protections. With the advent of the CVR program in 1966, the precursor to the current FAR 91.25 was already in place, having been promulgated in 1964. It stated that the FAA would not use CVR data for enforcement actions. In 1982, Congress began restricting the disclosure of the CVR tape and transcripts. Congress added further clarification of the availability of discovery in civil litigation in 1994. Thus, the CVR data have more definitive protections in place than do FDR data. The ASRS was the first non-automatic reporting system; and built into its original design in 1975 was a promise of limited protection from enforcement sanctions. That promise was further codified in an FAR in 1979. As with the CVR, from its inception, the ASRS had some protections built in for the person who might have had a safety problem. However, the program did not (and to this day does not) explicitly deal with issues of use by airlines, litigants, or the public media, although it appears that airlines will either take a non-punitive stance if an ASRS report is filed, or the airline may ignore the fact that it has been filed at all. The FAA worked with several U.S. airlines in the early 1990s on developing ASAP programs, and the FAA issued an Advisory Circular about the program in 1997. From its inception, the ASAP program contained some FAA enforcement protections and company discipline protections, although some protection against litigation disclosure and public disclosure was not added until 2003, when FAA Order 8000.82 was promulgated, placing the program under the protections of FAR 193, which had been added in 2001. The FOQA program, when it was first instituted through a demonstration program in 1995, did not contain protections against sanctions. Now, however, the FAA cannot take enforcement action based on FOQA safety data, and an airline is limited to "corrective action" under the program. Union contracts can exclude FOQA from the realm of disciplinary action, although airline practice may be for airlines to require retraining if there is no contract in place forbidding it. The data is protected against disclosure for litigation and public media purposes by FAA Order 8000.81, issued in 2003, which placed FOQA under the protections of FAR 193. The figure on the next page shows when each program began, and when each statute, regulation, or order became effective for that program.

  • PDF

Development of the Accident Prediction Model for Enlisted Men through an Integrated Approach to Datamining and Textmining (데이터 마이닝과 텍스트 마이닝의 통합적 접근을 통한 병사 사고예측 모델 개발)

  • Yoon, Seungjin;Kim, Suhwan;Shin, Kyungshik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.1-17
    • /
    • 2015
  • In this paper, we report what we have observed with regard to a prediction model for the military based on enlisted men's internal data (cumulative records) and external data (SNS data). This work is significant for the military's efforts to supervise its enlisted men. In spite of these efforts, many commanders have failed to prevent accidents involving their subordinates. One of the important duties of officers is to take care of their subordinates and prevent unexpected accidents. However, it is hard to prevent accidents, so a proper method must be found. Our motivation for presenting this paper is to make it possible to predict accidents using enlisted men's internal and external data. The biggest issue facing the military is the occurrence of accidents by enlisted men related to maladjustment and the relaxation of military discipline. The core method of preventing accidents by soldiers is to identify problems and manage them quickly. Commanders predict accidents by interviewing their soldiers and observing their surroundings. This requires considerable time and effort, and the results differ significantly depending on the capabilities of the commanders. In this paper, we seek to predict accidents with objective data which can easily be obtained. Recently, records of enlisted men, as well as SNS communication between commanders and soldiers, have made it possible to predict and prevent accidents. This paper concerns the application of data mining to identify their interests, predict accidents and make use of internal and external (SNS) data. We propose both a topic analysis and a decision tree method. The study is conducted in two steps. First, topic analysis is conducted on the SNS of the enlisted men. Second, the decision tree method is used to analyze the internal data together with the results of the first analysis. The dependent variable for these analyses is the presence of any accident. In order to analyze their SNS, we require tools such as text mining and topic analysis. We used SAS Enterprise Miner 12.1, which provides a text miner module. Our approach for finding their interests is composed of three main phases: collection, topic analysis, and conversion of the topic analysis results into points for use as independent variables. In the first phase, we collect enlisted men's SNS data by commander's ID. After gathering the unstructured SNS data, the topic analysis phase extracts issues from it. For simplicity, five topics (vacation, friends, stress, training, and sports) are extracted from 20,000 articles. In the third phase, using these five topics, we quantify them as personal points. After quantifying the topics, we include these results among the independent variables, which comprise 15 internal data sets. Then, we build two decision trees. The first tree uses their internal data only; the second tree uses their external data (SNS) as well as their internal data. After that, we compare the misclassification rates from SAS Enterprise Miner. The first model's misclassification rate is 12.1%, whereas the second model's is 7.8%. This method predicts accidents with an accuracy of approximately 92%. The gap between the two models is 4.3 percentage points. Finally, we test whether the difference between them is meaningful using the McNemar test; the result is significant (p-value: 0.0003). This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment is limited to a small number of enlisted men's data. Additionally, various independent variables used in the decision tree model are treated as categorical variables instead of continuous variables, so some information is lost. In spite of extensive efforts to provide prediction models for the military, commanders' predictions are accurate only when they have sufficient data about their subordinates. Our proposed methodology can provide support to decision-making in the military. This study is expected to contribute to the prevention of accidents in the military, based on scientific analysis of enlisted men and their proper management.
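
The modelling pipeline described above, SNS-derived topic scores added to 15 internal variables, two decision trees compared on misclassification, and a McNemar test on their paired errors, can be sketched compactly. The example below is a hedged illustration with synthetic data, not the study's SAS Enterprise Miner workflow: the feature counts mirror the abstract, but every variable, label, and threshold is an assumption.

```python
# Minimal sketch of the two-model comparison described in the abstract:
# a decision tree on internal records only vs. a tree that also uses
# SNS-derived topic scores, followed by a McNemar test on paired errors.
# All data is synthetic; the paper itself used SAS Enterprise Miner.
import numpy as np
from scipy.stats import chi2
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 2000

internal = rng.normal(size=(n, 15))   # 15 internal-record features (assumed)
topics = rng.normal(size=(n, 5))      # 5 SNS topic scores (vacation, friends, ...)
risk = 0.8 * internal[:, 0] + 1.2 * topics[:, 2] + rng.normal(size=n)
accident = (risk > np.quantile(risk, 0.9)).astype(int)  # rare "accident" label

X1 = internal                         # model 1: internal data only
X2 = np.hstack([internal, topics])    # model 2: internal + SNS topics
idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

def fit_and_predict(X):
    tree = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=0)
    tree.fit(X[idx_train], accident[idx_train])
    return tree.predict(X[idx_test])

pred1, pred2 = fit_and_predict(X1), fit_and_predict(X2)
y = accident[idx_test]
print("misclassification, internal only:", np.mean(pred1 != y))
print("misclassification, internal + SNS:", np.mean(pred2 != y))

# McNemar test on the paired disagreements between the two models.
b = np.sum((pred1 == y) & (pred2 != y))   # model 1 right, model 2 wrong
c = np.sum((pred1 != y) & (pred2 == y))   # model 1 wrong, model 2 right
stat = (abs(b - c) - 1) ** 2 / (b + c) if (b + c) > 0 else 0.0
print("McNemar p-value:", chi2.sf(stat, df=1))
```

The McNemar statistic uses only the off-diagonal counts b and c, i.e. the test cases on which exactly one of the two trees errs, which is why it is the appropriate paired test for the 12.1% vs. 7.8% comparison reported in the abstract.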