• Title/Summary/Keyword: technology development

Development of Root Media Containing Carbonized and Expanded Rice Hull for Container Cultivation of Horticultural Crops (팽연왕겨와 훈탄을 포함한 원예작물 용기재배용 혼합상토의 개발)

  • Park, Eun Young; Choi, Jong Myung; Shim, Chang Young
    • Horticultural Science & Technology, v.32 no.2, pp.157-164, 2014
  • The objective of this research was to develop root media containing expanded rice hull (ERH) and carbonized rice hull (CRH). To achieve this, the physicochemical properties of the two materials were analysed, and the materials were blended with peat moss (PM) or coir dust (CD) at various ratios. Based on the physical properties of the blends, four root media were selected for further experiments. After analysis of the pH and EC of the selected root media, the kinds and amounts of pre-planting nutrient charge fertilizers (PNCF) incorporated into each root medium were varied, and the final chemical properties of the root media were analysed. The total porosity (TP), container capacity (CC), and air-filled porosity (AFP) were 81.3%, 39.9%, and 41.4% in ERH and 77.6%, 64.1%, and 13.5% in CRH, respectively. The percentages of easily available water (EAW, from CC to 4.90 kPa tension) and buffering water (BW, 4.91-9.81 kPa tension) were 11.37% and 5.27% in ERH and 17.26% and 14.28% in CRH, respectively. The pH of ERH was 7.1, whereas that of CRH was extremely high at 11.2. The EC and CEC were 1.31 dS·m⁻¹ and 12.1 meq·100 g⁻¹ in ERH and 6.53 dS·m⁻¹ and 7.79 meq·100 g⁻¹ in CRH, respectively. The ranges of TP, CC, and AFP in the four selected media (PM + ERH, 6:4, v/v; CD + ERH, 8:2; PM + CRH, 7:3; CD + CRH, 6:4) were 89.2-90.3%, 67.3-81.8%, and 8.3-21.9%, respectively. The pHs and ECs of the root media containing peat moss, PM + ERH (6:4) and PM + CRH (7:3), were 4.0-4.3 and 0.33-0.365 dS·m⁻¹, whereas those of CD + CRH were 7.4-7.9 and 1.282 dS·m⁻¹. The pHs and ECs analysed before and after the incorporation of PNCF into each root medium, however, were not significantly different. This result indicates that the fertilizers incorporated as PNCF to adjust medium pH had not yet dissolved enough to influence it, which is normal for root media whose pH is adjusted with dolomitic lime and sulfur powder. The information obtained in this study may facilitate effective formulation of root media containing rice hulls.
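The physical properties reported above are related by a simple identity: air-filled porosity is the total porosity minus the container capacity (AFP = TP - CC). A minimal sketch of this arithmetic, using the ERH and CRH values quoted in the abstract, is shown below; the variable names are illustrative only.

```python
# Air-filled porosity follows from total porosity and container capacity:
# AFP = TP - CC. The TP and CC values are those quoted in the abstract.
media = {
    "ERH": {"TP": 81.3, "CC": 39.9},   # expanded rice hull
    "CRH": {"TP": 77.6, "CC": 64.1},   # carbonized rice hull
}

for name, props in media.items():
    afp = props["TP"] - props["CC"]     # air-filled porosity, % (v/v)
    print(f"{name}: AFP = {afp:.1f}%")  # ERH -> 41.4%, CRH -> 13.5%
```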

Study on Basic Requirements of Geoscientific Area for the Deep Geological Repository of Spent Nuclear Fuel in Korea (사용후핵연료 심지층처분장부지 지질환경 기본요건 검토)

  • Bae, Dae-Seok; Koh, Yong-Kwon; Park, Ju-Wan; Park, Jin-Baek; Song, Jong-Soon
    • Journal of Nuclear Fuel Cycle and Waste Technology (JNFCWT), v.10 no.1, pp.63-75, 2012
  • This paper gives some basic requirements and preferences for the various geological environmental conditions of a final deep geological repository for spent nuclear fuel (SNF). The study also indicates how these requirements and preferences are to be considered prior to the selection of sites for site investigation as well as for final disposal in Korea. The results are based on the knowledge and experience of the IAEA and NEA/OECD as well as of countries with advanced SNF disposal programs. The study discusses and suggests preliminary guidelines for the disposal requirements, including the geological, mechanical, thermal, hydrogeological, chemical, and transport properties of the host rock, together with the long-term geological stability that influences the functions of a multi-barrier disposal system. To determine whether the requirements and preferences for a given parameter are satisfied at different stages of site selection and suitability assessment of a final disposal site, quantitative criteria in each area should be formulated with credibility through relevant research and development on the deep geological environment during the site screening and selection processes, as well as through specific studies such as the production of safety cases and validation studies using a generic underground research laboratory (URL) in Korea.

Development of analytical method for determination of spinetoram residues in livestock using LC-MS/MS (LC-MS/MS를 이용한 축산물 중 Spinetoram 공정시험법 개발 및 검증)

  • Ko, Ah-Young; Kim, Heejung; Do, Jung Ah; Jang, Jin; Lee, Eun Hyang; Ju, Yun Ji; Kim, Ji Young; Chang, Moon-Ik; Rhee, Gyu-Seek
    • Analytical Science and Technology, v.29 no.2, pp.94-103, 2016
  • An analytical method was developed to determine the amount of spinetoram (spinetoram J and spinetoram L) in livestock samples. Spinetoram was extracted with acetonitrile and purified with a primary secondary amine (PSA) sorbent. The spinetoram residues were then quantified and confirmed using liquid chromatography-tandem mass spectrometry (LC-MS/MS) in positive ion mode with multiple reaction monitoring (MRM). Matrix-matched calibration curves, prepared by spiking blank extracts, were linear over the calibration range (0.005-0.5 mg/kg) with r² > 0.994. The limits of detection and quantification were 0.002 and 0.01 mg/kg, respectively. Recoveries of spinetoram ranged from 81.9 to 106.4% at three fortification levels (LOQ, 10×LOQ, and 50×LOQ; n = 5), with relative standard deviations (RSDs) of less than 10%. All values were consistent with the criteria specified in the Codex guidelines (CAC/GL 40, 2003). An interlaboratory study was conducted to validate the method. The proposed analytical method proved to be accurate, effective, and sensitive for spinetoram determination and will be used as an official analytical method in Korea.
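Recovery and precision figures like those reported above are typically computed from replicate spiked (fortified) samples. The sketch below shows that arithmetic; the measured concentrations are made-up illustrative values, not data from the study.

```python
import statistics

# Illustrative recovery/RSD arithmetic for one fortification level.
# The measured values are hypothetical, not data from the paper.
spike_level = 0.01                                     # mg/kg, e.g. the LOQ level
measured = [0.0092, 0.0098, 0.0101, 0.0095, 0.0104]    # n = 5 replicate results

recoveries = [m / spike_level * 100 for m in measured]     # % recovery per replicate
mean_recovery = statistics.mean(recoveries)
rsd = statistics.stdev(recoveries) / mean_recovery * 100   # relative std. deviation, %

print(f"mean recovery = {mean_recovery:.1f}%, RSD = {rsd:.1f}%")
```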

Studies on the Insect Pests of Barley in Korea (한국(韓國)의 보리해충(害虫)에 관(關)한 연구(硏究))

  • Kwon, Yong Jung; An, Seung Lak
    • Current Research on Agriculture and Life Sciences, v.3, pp.129-150, 1985
  • The present investigation was conducted to provide a systematic approach necessary to establish an integrated insect pest management program for barley in Korea. Ecological surveys of barley insect pests were undertaken at the field of the Experimental Station, Kyŏngbuk Provincial Office of Rural Development, as a fixed-point survey area, and at 23 localities throughout southern and central Korea as a round survey, from 1983 to 1984. The insects previously known to injure barley in Korea were revised, and the population dynamics of 10 dominant harmful species were analyzed across 24 localities and 25 cultivars using several sampling methods: net sweeping, black light traps, yellow water pan traps, and visual counting. As a result, a total of 94 species belonging to 77 genera in 32 families are known to be injurious to barley, 20 of which are newly added here. In terms of population density, the dominant species were Laodelphax striatellus (43.1%), Macrosiphum avenae (27.0%), Rhopalosiphum padi (6.5%), R. maidis (5.4%), Psammotettix striatus (2.7%), Chlorops oryzae (2.2%), Agromyza albipennis (2.1%), Phyllotreta nemorum (1.4%), Chaetocnema cylindrica (1.0%), and Dolycoris baccarum (1.0%), in that order. The general abundance of major insect pests was highest on the cultivar P'aldal and lowest on Milyang #22. Psammotettix striatus, Dolycoris baccarum, Phyllotreta nemorum, and Chaetocnema cylindrica tended to peak in early June, whereas Chlorops oryzae and Agromyza albipennis peaked in mid-May and the aphids in late May. Among natural enemies, Nabis stenoferus accounted for 21.4% of individuals and Propylaea japonica for 9.6%.

A study on the CRM strategy for medium and small industry of distribution (중소유통업체의 CRM 도입방안에 관한 연구)

  • Kim, Gi-Pyoung
    • Journal of Distribution Science, v.8 no.3, pp.37-47, 2010
  • CRM refers to the operational activities that maintain and promote good relationships with customers in order to ultimately maximize a company's profits: understanding the value of customers in order to meet their demands, establishing a strategy that maximizes customer lifetime value, and running the business successfully by integrating the customer management processes. In Korea, many large companies are introducing CRM proactively as part of their marketing strategies; however, most small and medium-sized companies either do not understand CRM clearly or find it difficult to introduce because of the large investment required. This study presents a CRM promotion strategy and activity plan suited to small and medium-sized distributors by analyzing the success factors of leading companies that have already implemented CRM, so that distributors, operating in close contact with consumers, can overcome their weakness in scale and strengthen their competitiveness in a rapidly changing and fiercely competitive market. CRM is built in five stages: recognition of the need for CRM, establishment of an integrated CRM database, development of customer analysis and marketing strategy through data mining, practical use of that customer analysis, and implementation of response analysis and a closed-loop process. The case studies of leading companies show that CRM is needed in businesses that are in constant contact with their customers. To meet customer needs, these companies actively analyze customer information and develop their own CRM programs, personalized for their customers, to provide high-quality services and products. For profitable customers, VIP marketing is conducted to keep them from breaking off their relationship with the company. CRM should be executed through continuous management: through customer segmentation, the profitability of each customer should be maximized, and maximizing that profitability is the key to CRM. The success factors of CRM among Korean distributors are as follows. First, top management must have the will to pursue customer satisfaction management. Second, a company-wide culture of respecting customers should be established. Third, specialized customer management and CRM staff should be trained. Fourth, CRM practices should be developed for all staff members. Fifth, CRM should be carried out through systematic cooperation between the related departments. To make use of these case studies, a company should understand its customers, establish customer management programs, set an optimal CRM strategy, and pursue it continuously according to a long-term plan. To this end, customers should be segmented on the basis of the collected information and customer data, and a responsive customer system should be designed with differentiated strategies for each customer class. For the future of CRM, integrated CRM, in which customer information is gathered in one place, is essential. As customer expectations rise sharply, effective ways to meet them must be pursued. With the rapid improvement of IT technology, RFID (Radio Frequency Identification) has appeared, allowing massive amounts of information about products and customers to be obtained in a very short time on a real-time basis.
A strategy for successful CRM promotion should improve the organizations in charge of customer contact, re-plan the customer management processes, and establish a system integrated with the marketing strategy in order to maintain good relationships with customers according to a long-term plan and a method suited to market conditions, running it as a company-wide program. In addition, a CRM program should be continuously improved and complemented to fit the company's characteristics. In particular, a strategy for successful CRM at small and medium-sized distributors should be as follows. First, they should change their existing perception of CRM and take in-depth care of their customers. Second, they should benchmark CRM techniques from leading companies and identify success points to adopt. Third, they should seek the methods best suited to their particular conditions by combining their own strengths with marketing ideas. Fourth, a CRM model should be developed that promotes relationships with individual customers, as in the precedents of small businesses in Switzerland, through small but noticeable events.

Hypoxia-dependent mitochondrial fission regulates endothelial progenitor cell migration, invasion, and tube formation

  • Kim, Da Yeon; Jung, Seok Yun; Kim, Yeon Ju; Kang, Songhwa; Park, Ji Hye; Ji, Seung Taek; Jang, Woong Bi; Lamichane, Shreekrishna; Lamichane, Babita Dahal; Chae, Young Chan; Lee, Dongjun; Chung, Joo Seop; Kwon, Sang-Mo
    • The Korean Journal of Physiology and Pharmacology, v.22 no.2, pp.203-213, 2018
  • Tumors undergo uncontrolled, excessive proliferation, which leads to a hypoxic microenvironment. To fulfill their demand for nutrients and oxygen, tumor angiogenesis is required. Endothelial progenitor cells (EPCs) are known to be a main source of angiogenesis because of their potential to differentiate into endothelial cells. Therefore, understanding the mechanism of EPC-mediated angiogenesis in hypoxia is critical for the development of cancer therapies. Recently, mitochondrial dynamics has emerged as a critical mechanism for cellular function and differentiation under hypoxic conditions. However, the role of mitochondrial dynamics in hypoxia-induced angiogenesis remains to be elucidated. In this study, we demonstrated that hypoxia-induced mitochondrial fission accelerates EPC bioactivities. We first investigated the effect of hypoxia on EPC-mediated angiogenesis. Cell migration, invasion, and tube formation were significantly increased under hypoxic conditions, while the expression of EPC surface markers was unchanged, and mitochondrial fission was induced by hypoxia in a time-dependent manner. We found that hypoxia-induced mitochondrial fission was triggered by the dynamin-related protein DRP1; specifically, phosphorylation of DRP1 at Ser637, a suppression marker of mitochondrial fission, was impaired by hypoxia in a time-dependent manner. To confirm the role of DRP1 in EPC-mediated angiogenesis, we analyzed cell bioactivities using Mdivi-1, a selective DRP1 inhibitor, and DRP1 siRNA. DRP1 silencing or Mdivi-1 treatment dramatically reduced cell migration, invasion, and tube formation in EPCs, while the expression of EPC surface markers remained unchanged. In conclusion, we uncovered a novel role of mitochondrial fission in hypoxia-induced angiogenesis and suggest that specific modulation of DRP1-mediated mitochondrial dynamics may be a potential therapeutic strategy against EPC-mediated tumor angiogenesis.

Development of Agent-based Platform for Coordinated Scheduling in Global Supply Chain (글로벌 공급사슬에서 경쟁협력 스케줄링을 위한 에이전트 기반 플랫폼 구축)

  • Lee, Jung-Seung; Choi, Seong-Woo
    • Journal of Intelligence and Information Systems, v.17 no.4, pp.213-226, 2011
  • In a global supply chain, the scheduling problems for large products such as ships, airplanes, space shuttles, assembled constructions, and automobiles are complicated by nature. New scheduling systems are often developed to reduce this inherent computational complexity, so that a problem can be decomposed into small sub-problems, each handled by an independently small scheduling system that is then integrated back into the initial problem. As one of the authors experienced, DAS (Daewoo Shipbuilding Scheduling System) adopted a two-layered hierarchical architecture in which the individual scheduling systems, composed of a high-level dock scheduler, DAS-ERECT, and low-level assembly plant schedulers, DAS-PBS, DAS-3DS, DAS-NPS, and DAS-A7, search for the best schedules under their own constraints. Moreover, the rapid growth of communication technology and logistics makes it possible to introduce distributed multi-nation production plants, in which different parts are produced by designated plants, so vertical and lateral coordination among the decomposed scheduling systems is necessary. No standard coordination mechanism for multiple scheduling systems exists, even though various scheduling systems have been developed in scheduling research. Previous research on coordination mechanisms has mainly focused on external conversation without a capacity model; prior work in the agent research area has concentrated heavily on agent-based coordination but has not been applied to the scheduling domain, and research on agent-based scheduling has paid most of its attention to internal coordination of the scheduling process, which has not been efficient. In this study, we suggest a general framework for agent-based coordination of multiple scheduling systems in a global supply chain, with the aim of designing a standard coordination mechanism. To do so, we first define an individual scheduling agent responsible for its own plant and a meta-level coordination agent associated with each individual scheduling agent, and we specify the variables and values describing both kinds of agents, represented in Backus-Naur Form. Second, we suggest scheduling agent communication protocols for each scheduling agent topology, classified by system architecture, the existence or absence of a coordinator, and the direction of coordination. If there is a coordinating agent, an individual scheduling agent can communicate with another individual agent indirectly through the coordinator; if there is no coordinating agent, an individual scheduling agent must communicate with another individual agent directly. To apply an agent communication language specifically to the scheduling coordination domain, we additionally define an inner language that suitably expresses scheduling coordination, yielding a scheduling agent communication language for communication among agents that is independent of the domain. We adopt three message layers: the ACL layer, the scheduling coordination layer, and the industry-specific layer. The ACL layer is a domain-independent outer language layer, the scheduling coordination layer contains the terms necessary for scheduling coordination, and the industry-specific layer expresses the industry specification.
Third, to improve the efficiency of communication among scheduling agents and avoid possible infinite loops, we suggest a look-ahead load balancing model that supports monitoring the participating agents and analyzing their status. To build this model, the status of participating agents must be monitored, and above all the amount of shared information must be considered: if complete information is collected, the cost of updating and maintaining the shared information increases even though the frequency of communication decreases, so the level of detail and the updating period of the shared information should be decided contingently. By means of this standard coordination mechanism, coordination processes of multiple scheduling systems can easily be modeled into the supply chain. Finally, we apply this mechanism to the shipbuilding domain and develop a prototype system consisting of a dock-scheduling agent, four assembly-plant-scheduling agents, and a meta-level coordination agent. A series of experiments using real-world data was conducted to examine this mechanism empirically. The results show that the effect of the agent-based platform on coordinated scheduling is evident in terms of the number of tardy jobs, tardiness, and makespan.
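As a rough illustration of the three message layers described above (a domain-independent ACL layer wrapping a scheduling-coordination layer and an industry-specific layer), a coordination message might be structured as in the sketch below. The field names and values are illustrative assumptions; the paper's actual message syntax is defined in Backus-Naur Form and is not reproduced here.

```python
import json

# Hypothetical three-layer message exchanged between a meta-level
# coordination agent and an individual scheduling agent.
message = {
    "acl": {                       # domain-independent outer layer
        "performative": "request",
        "sender": "meta_coordinator",
        "receiver": "dock_scheduling_agent",
    },
    "scheduling_coordination": {   # terms needed for scheduling coordination
        "action": "reschedule",
        "job_id": "BLOCK-A7-042",
        "due_date": "2011-11-30",
    },
    "industry_specific": {         # shipbuilding-specific content
        "dock": "Dock-1",
        "assembly_plant": "DAS-PBS",
    },
}

print(json.dumps(message, indent=2))   # serialized form sent between agents
```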

Evaluation of Web Service Similarity Assessment Methods (웹서비스 유사성 평가 방법들의 실험적 평가)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems, v.15 no.4, pp.1-22, 2009
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration both within and across enterprises. To make Web services operational for service-oriented computing, it is important that Web service repositories not only be well structured but also provide efficient tools for developers to find reusable Web service components that meet their needs. As the potential of Web services for service-oriented computing is being widely recognized, the demand for effective Web service discovery mechanisms is growing concomitantly. A number of techniques for Web service discovery have been proposed, but the discovery challenge has not been satisfactorily addressed: most existing solutions are either too rudimentary to be useful or too domain-dependent to be generalizable. In this paper, we propose a Web service organizing framework that combines clustering techniques with string matching and leverages the semantics of the XML-based service specifications in WSDL documents. We believe that this is one of the first attempts at applying data mining techniques in the Web service discovery domain. Our proposed approach has several appealing features: (1) it minimizes the requirement of prior knowledge from both service consumers and publishers; (2) it avoids exploiting domain-dependent ontologies; and (3) it is able to visualize the semantic relationships among Web services. We have developed a prototype system based on the proposed framework using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web service registries. We report preliminary results demonstrating the efficacy of the proposed approach.
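The general idea of the framework (grouping services by the terms appearing in their WSDL-derived descriptions) can be sketched as below. TF-IDF vectors clustered with k-means are used here purely as a stand-in for the unsupervised artificial neural network the paper employs, and the service names and term lists are made up.

```python
# Sketch: cluster Web services by terms extracted from their descriptions.
# TF-IDF + k-means stand in for the paper's unsupervised neural network.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

services = {
    "getWeatherForecast": "weather forecast temperature city date",
    "getCityTemperature": "temperature city weather celsius",
    "getStockQuote":      "stock quote ticker price exchange",
    "convertCurrency":    "currency convert amount exchange rate",
}

vectorizer = TfidfVectorizer()
term_vectors = vectorizer.fit_transform(services.values())  # one vector per service

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(term_vectors)
for name, label in zip(services, labels):
    print(f"cluster {label}: {name}")   # similar services land in the same cluster
```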

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun; Hyun, Yoonjin; Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.24 no.3, pp.21-44, 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has attracted the interest of many researchers and calls for the ability to classify relevant information; hence, text classification is introduced. Text classification is a challenging task in modern data analysis: it assigns a text document to one or more predefined categories or classes. In the text classification field, various techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge, and the performance of a text classification model can vary with the type of words used in the corpus and the type of features created for classification. Most previous attempts have been based on proposing a new algorithm or modifying an existing one, and this line of research can be said to have reached its limits for further improvement. In this study, rather than proposing a new algorithm or modifying an existing one, we focus on finding a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets usually contain noise, and this noisy data can affect the decisions made by classifiers built from them. We consider that data from different domains, that is, heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. To build a classifier, a machine learning algorithm is run under the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary included in the documents, so if the viewpoints of the learning data and the target data differ, the features may appear different between the two. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data coming from various kinds of sources are likely to be formatted differently, they cause difficulties for traditional machine learning algorithms, which are not designed to recognize different types of data representation at one time and to combine them in the same generalization. Therefore, to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning, although unlabeled data may degrade the performance of the document classifier.
We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to improving the accuracy of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision making. In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
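A minimal sketch of the underlying idea (adding back into the training set only those documents from a second, heterogeneous source that the current classifier labels with high confidence) is shown below. Plain self-training with a Naïve Bayes classifier is used as a simplified stand-in for RSESLA, and the documents, labels, and confidence threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Labeled documents (e.g. news) and unlabeled heterogeneous documents (e.g. tweets).
labeled_docs = ["stock market rises", "team wins the final match",
                "central bank cuts rates", "player scores twice"]
labels = ["economy", "sports", "economy", "sports"]
unlabeled_docs = ["bank announces new interest rate",
                  "coach praises the young striker"]

vec = TfidfVectorizer()
X = vec.fit_transform(labeled_docs + unlabeled_docs)
X_lab, X_unlab = X[:len(labeled_docs)], X[len(labeled_docs):]

clf = MultinomialNB().fit(X_lab, labels)
proba = clf.predict_proba(X_unlab)
pseudo_labels = clf.classes_[proba.argmax(axis=1)]
confident = proba.max(axis=1) >= 0.6          # keep only high-confidence documents

# Retrain with the confidently pseudo-labeled documents added to the training set.
X_aug = np.vstack([X_lab.toarray(), X_unlab.toarray()[confident]])
y_aug = list(labels) + list(pseudo_labels[confident])
clf = MultinomialNB().fit(X_aug, y_aug)
print(dict(zip(unlabeled_docs, pseudo_labels)))
```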

Development of Menu Labeling System (MLS) Using Nutri-API (Nutrition Analysis Application Programming Interface) (영양분석 API를 이용한 메뉴 라벨링 시스템 (MLS) 개발)

  • Hong, Soon-Myung; Cho, Jee-Ye; Park, Yu-Jeong; Kim, Min-Chan; Park, Hye-Kyung; Lee, Eun-Ju; Kim, Jong-Wook; Kwon, Kwang-Il; Kim, Jee-Young
    • Journal of Nutrition and Health, v.43 no.2, pp.197-206, 2010
  • Nowadays, people eat outside the home more and more frequently. Menu labeling can help people make more informed decisions about the foods they eat and help them maintain a healthy diet. This study was conducted to develop a menu labeling system using Nutri-API (Nutrition Analysis Application Programming Interface). The system offers a convenient user interface and menu labeling information in a printable format, and provides useful functions such as registration of new food/menu nutrient information, a semantic food retrieval service, menu planning with subgroups, nutrient analysis, and printing. It reports nutritive values together with the ratio of the three major energy nutrients. The Menu Labeling System (MLS) can analyze nutrients for a whole menu and for each subgroup, display nutrient comparisons with the DRIs and % Daily Nutrient Values, and provide six different menu labeling formats with nutrient information. Therefore, it can be used not only by ordinary consumers but also by dietitians, restaurant managers in charge of menu planning, and experts in the field of food and nutrition. It is expected that the MLS will be useful for menu planning, nutrition education, nutrition counseling, and expert meal management.
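A minimal sketch of the kind of computation behind menu labeling (summing a menu's nutrients and expressing them as % Daily Values against reference intakes) is shown below. The reference values, menu items, and function are illustrative assumptions, not the actual Nutri-API.

```python
# Illustrative % Daily Value calculation for a menu.
# Reference intakes and menu data are assumptions, not the actual Nutri-API.
DAILY_REFERENCE = {"energy_kcal": 2000, "protein_g": 55, "sodium_mg": 2000}

menu_items = [
    {"name": "bibimbap", "energy_kcal": 560, "protein_g": 18, "sodium_mg": 950},
    {"name": "soup",     "energy_kcal": 120, "protein_g": 6,  "sodium_mg": 480},
]

def percent_daily_values(items, reference):
    """Sum each nutrient over the menu and express it as % of the daily reference."""
    totals = {n: sum(item[n] for item in items) for n in reference}
    return {n: round(totals[n] / reference[n] * 100, 1) for n in reference}

print(percent_daily_values(menu_items, DAILY_REFERENCE))
# {'energy_kcal': 34.0, 'protein_g': 43.6, 'sodium_mg': 71.5}
```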