• Title/Summary/Keyword: various techniques


Development of Textile Design for Fashion Cultural Products - Focusing on Traditional Korean Patterns - (패션문화상품을 위한 텍스타일 디자인 개발 - 한국전통문양을 중심으로 -)

  • Hyun, Seon-Hee;Bae, Soo-Jeong
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.31 no.6 s.165
    • /
    • pp.985-996
    • /
    • 2007
  • The purpose of this study is to analyze the symbolism of traditional Korean patterns, which reflect the emotional and cultural background of the Korean people, to apply modernized patterns to textile designs for fashion cultural products, and to explore a productive direction for developing such designs. The process of developing textile designs for fashion cultural products that apply the symbolism of traditional Korean patterns was conducted as follows. First, based on '05 S/S-'07 S/S fashion trends, a design concept was decided (man: urban ethnic style; woman: romantic ethnic style). Second, motifs were abstracted from selected traditional patterns and developed into modern patterns. Third, items were selected according to the symbolic meaning of the traditional Korean patterns. Men's items included shirts, neckties, and handkerchiefs, which were highly preferred by Korean and foreign visitors. Finally, the developed textile designs were diagrammed by item using textile CAD and Illustrator 10 and presented as images. The following results were obtained. First, textile designs for fashion cultural products that apply traditional patterns may reflect an understanding of traditional aesthetic beauty and a philosophical approach by drawing on the symbolic significance inherent in the patterns as well as their aesthetics. Second, traditional patterns have been regarded by consumers as old-fashioned because they have often been used for traditional handicrafts or folk products. If their unique shapes are changed or simplified to emphasize their images, and trendy styles and colors are used, they can be recreated as modern designs. Third, textile designs using traditional patterns may provide various images and visual effects according to the techniques and production methods used, so the approach can be applied to many items. Finally, since traditional patterns in fashion cultural products serve as uniquely Korean design elements, they can be utilized as a source of design inspiration for the development of value-added products.

Evaluation of Thermal Catalytic Decomposition of Chlorinated Hydrocarbons and Catalyst-Poison Effect by Sulfur Compound (염소계 탄화수소의 열촉매 분해와 황화합물에 의한 촉매독 영향 평가)

  • Jo, Wan-Kuen;Shin, Seung-Ho;Yang, Chang-Hee;Kim, Mo-Geun
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.29 no.5
    • /
    • pp.577-583
    • /
    • 2007
  • To overcome certain disadvantages of typical past control techniques for toxic contaminants emitted from various industrial processes, the current study established a thermal catalytic system using a mesh-type transition-metal platinum (Pt)/stainless steel (SS) catalyst and evaluated the catalytic thermal destruction of five chlorinated hydrocarbons [chlorobenzene (CHB), chloroform (CHF), perchloroethylene (PCE), 1,1,1-trichloroethane (TCEthane), trichloroethylene (TCE)]. In addition, the study evaluated the catalyst-poison effect on catalytic thermal destruction. Three operating parameters were tested for the thermal catalyst system: the inlet concentration, the incineration temperature, and the residence time in the catalyst system. The thermal decomposition efficiency decreased from a high of 100% to a low of almost 0% (CHB) as the input concentration increased, depending on the type of chlorinated compound. The destruction efficiencies of the four target compounds other than TCEthane increased up to almost 100% as the reaction temperature increased, whereas the destruction efficiency for TCEthane did not vary significantly. For the target compounds other than TCEthane, the catalytic destruction efficiencies increased by 30% to 97% as the residence time increased from 10 sec to 60 sec, but the increase in destruction efficiency for TCEthane stopped at a residence time of 30 sec, suggesting that long residence times are not always appropriate for the thermal destruction of VOCs when the destruction efficiency and the operating costs of the thermal catalytic system are considered together. In conclusion, the current findings suggest that when applying the transition-metal catalyst for better destruction of chlorinated hydrocarbons, the VOC type should be considered, along with the inlet concentration, reaction temperature, and residence time in the catalytic system. Meanwhile, the addition of a high level of methyl sulfide (1.8 ppm) caused a drop of 0 to 50% in the removal efficiencies of the target compounds, whereas the addition of a low level of methyl sulfide (0.1 ppm), which is lower than the concentrations of sulfur compounds measured in typical industrial emissions, did not cause such a drop.
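The destruction (removal) efficiencies compared throughout this abstract follow the standard definition based on inlet and outlet concentrations; a minimal sketch, with illustrative numbers rather than the study's measurements, is:

```python
def destruction_efficiency(c_in, c_out):
    """Percent destruction/removal efficiency from inlet and outlet
    concentrations (same units, e.g. ppm)."""
    if c_in <= 0:
        raise ValueError("inlet concentration must be positive")
    return (c_in - c_out) / c_in * 100.0

# e.g. 5.0 ppm at the inlet, 0.15 ppm at the outlet:
print(round(destruction_efficiency(5.0, 0.15), 1))  # 97.0
```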

Utility-Based Video Adaptation in MPEG-21 for Universal Multimedia Access (UMA를 위한 유틸리티 기반 MPEG-21 비디오 적응)

  • 김재곤;김형명;강경옥;김진웅
    • Journal of Broadcast Engineering
    • /
    • v.8 no.4
    • /
    • pp.325-338
    • /
    • 2003
  • Video adaptation in response to dynamic resource conditions and user preferences is required as a key technology to enable universal multimedia access (UMA) through heterogeneous networks by a multitude of devices in a seamless way. Although many adaptation techniques exist, the selection of appropriate adaptations among multiple choices that would satisfy given constraints is often ad hoc. To provide a systematic solution, we present a general conceptual framework to model video entities, adaptations, resources, utilities, and the relations among them. It allows various adaptation problems to be formulated as resource-constrained utility maximization. We apply the framework to a practical case of dynamic bit rate adaptation of MPEG-4 video streams by employing a combination of frame dropping and DCT coefficient dropping. Furthermore, we present a descriptor, which has been accepted as part of MPEG-21 Digital Item Adaptation (DIA), for supporting terminal and network quality of service (QoS) in an interoperable manner. Experiments are presented to demonstrate the feasibility of the presented framework using the descriptor.
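The resource-constrained utility maximization described above can be sketched as follows: among the candidate adaptation operations that fit the resource budget, pick the one with the highest utility. The operations, bitrates, and utility scores below are invented for illustration and are not the paper's measurements:

```python
# Each candidate adaptation: (description, resulting bitrate in kbps, utility).
# All names and numbers are illustrative assumptions.
candidates = [
    ("no dropping",              1200, 1.00),
    ("drop B-frames",             800, 0.85),
    ("drop DCT coefficients",     600, 0.70),
    ("drop frames + DCT coeff.",  350, 0.55),
]

def best_adaptation(candidates, bitrate_budget):
    """Resource-constrained utility maximization: among adaptations that
    fit the bitrate budget, return the one with the highest utility."""
    feasible = [c for c in candidates if c[1] <= bitrate_budget]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c[2])

print(best_adaptation(candidates, 700)[0])  # drop DCT coefficients
```

In the paper's setting the candidate set would be generated by combining frame-dropping and coefficient-dropping levels, and the utilities estimated from measured quality.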

Effects of TRIZ's 40 Inventive Principles Application on the Improvement of Learners' Creativity (트리즈 40가지 발명 원리 적용이 학습자의 창의성 신장에 미치는 영향)

  • Nam, Seungkwon;Choi, Wonsik
    • 대한공업교육학회지
    • /
    • v.31 no.2
    • /
    • pp.203-232
    • /
    • 2006
  • The purposes of this study are to examine the effects of learning with TRIZ's 40 inventive principles on the improvement of learners' creativity and to offer basic information necessary for the study of inventive education in technology education. To achieve these purposes, subjects from B Middle School in Daejeon were divided into two groups: an experiment group (74 students) and a control group (67 students). A Creativity Self-Assessment and the Student Inventive Rating Scale were used as research tools so that the homogeneity of the two groups could be established. The applied research design was a nonequivalent control group pretest-posttest design. The study was performed for 2 hours on the 1st and 3rd Saturday of every month from the 3rd week of March, 2006 to the 3rd of July, 2006, for a total research period of 9 weeks, during which the students were required to learn the 40 inventive principles. The results of this study are as follows. (1) Applying TRIZ's 40 inventive principles had a positive effect on students' CQ (creativity quotient), influencing subordinate factors of creativity such as originality, germinality, transformational quality, value, attraction, expressive power, and organic systemicity. However, it had no effect on adequateness, properness, merit, complexity, or elegance. (2) Applying TRIZ's 40 inventive principles had no significant effect on CQ by sex, nor on the subordinate factors of creativity, except for originality and expressive power. Based on the results of the experiment, the following suggestions were made to promote the application of TRIZ's 40 inventive principles to technology education. (1) Although this study was performed using development activities, more systematic study is necessary to apply the 40 inventive principles to regular subjects in technology education. (2) As creativity is very important in technology education, there should be studies on the various types of inventive principles and techniques for inventive education in technology education.
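Group comparisons in a pretest-posttest design like this one are typically tested on gain scores with a two-sample t statistic; a minimal sketch (Welch's t, computed from made-up gain scores rather than the study's data) is:

```python
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic for two independent samples, e.g. creativity
    gain scores of an experiment group vs. a control group."""
    na, nb = len(group_a), len(group_b)
    va, vb = variance(group_a), variance(group_b)  # sample variances
    return (mean(group_a) - mean(group_b)) / ((va / na + vb / nb) ** 0.5)

# Illustrative gain scores (not the study's data):
experiment = [12, 15, 9, 14, 11, 13]
control = [8, 10, 7, 9, 11, 6]
print(round(welch_t(experiment, control), 2))  # 3.29
```

Significance would then be judged against a t distribution with Welch-Satterthwaite degrees of freedom, which statistical packages compute automatically.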

An Investigation on Expanding Co-occurrence Criteria in Association Rule Mining (연관규칙 마이닝에서의 동시성 기준 확장에 대한 연구)

  • Kim, Mi-Sung;Kim, Nam-Gyu;Ahn, Jae-Hyeon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.23-38
    • /
    • 2012
  • There is a large difference between purchasing patterns in an online shopping mall and in an offline market. This difference may be caused mainly by the difference in accessibility between online and offline markets: the interval between the initial purchasing decision and its realization tends to be relatively short in an online shopping mall, because a customer can place an order immediately. Because of this short interval, an online shopping mall transaction usually contains fewer items than an offline market transaction. In an offline market, customers usually keep some items in mind and buy them all at once a few days after deciding to buy them, instead of buying each item individually and immediately. By contrast, more than 70% of online shopping mall transactions contain only one item. This statistic implies that traditional data mining techniques cannot be directly applied to online market analysis, because hardly any association rules can survive with an acceptable level of support in the presence of so many null transactions. Most market basket analyses of online shopping mall transactions, therefore, have been performed by expanding the co-occurrence criterion of traditional association rule mining. While the traditional co-occurrence criterion defines items purchased in one transaction as concurrently purchased items, the expanded criterion regards items purchased by a customer during some predefined period (e.g., a day) as concurrently purchased items. In studies using expanded co-occurrence criteria, however, the criterion has been defined arbitrarily by researchers, without theoretical grounds or agreement. The lack of clear grounds for adopting a certain co-occurrence criterion degrades the reliability of the analytical results. Moreover, it is hard to derive new meaningful findings by combining the outcomes of previous individual studies. In this paper, we compare expanded co-occurrence criteria and propose a guideline for selecting an appropriate one. First, we compare the accuracy of association rules discovered according to various co-occurrence criteria; from this experiment we expect to provide a guideline for selecting co-occurrence criteria that correspond to the purpose of the analysis. Additionally, we perform similar experiments with several groups of customers segmented by each customer's average duration between orders, attempting to discover the relationship between the optimal co-occurrence criterion and the customer's average duration between orders. Finally, through this series of experiments, we expect to provide basic guidelines for developing customized recommendation systems. Our experiments use a real dataset acquired from one of the largest internet shopping malls in Korea: 66,278 transactions of 3,847 customers conducted over the last two years. Overall results show that the accuracy of the association rules of frequent shoppers (whose average duration between orders is relatively short) is higher than that of casual shoppers. In addition, we discover that for frequent shoppers, the accuracy of association rules is very high when the co-occurrence criterion of the training set corresponds to that of the validation (i.e., target) set. This implies that the co-occurrence criterion for frequent shoppers should be set according to the period of the application purpose. For example, an analyzer should use a day as the co-occurrence criterion if he/she wants to offer a coupon valid only for a day to potential customers, and a month if he/she wants to publish a coupon book that can be used for a month. In the case of casual shoppers, the accuracy of association rules appears not to be affected by the period of the application purpose; their accuracy becomes higher the longer the co-occurrence criterion adopted. This implies that for casual shoppers, an analyzer should set the co-occurrence criterion as long as possible, regardless of the period of the application purpose.
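The expanded co-occurrence criterion can be sketched as follows: instead of treating each order as a transaction, a customer's purchases within a chosen time window are merged into one basket before support is computed. The order log and window sizes below are illustrative only:

```python
from collections import defaultdict
from datetime import date

# Toy order log: (customer_id, order_date, item). Illustrative data only.
orders = [
    ("c1", date(2012, 1, 5), "milk"),
    ("c1", date(2012, 1, 5), "bread"),
    ("c1", date(2012, 1, 20), "butter"),
    ("c2", date(2012, 1, 5), "milk"),
    ("c2", date(2012, 1, 6), "bread"),
]

def baskets(orders, period_days):
    """Group each customer's purchases into one 'transaction' per
    period_days window (the expanded co-occurrence criterion).
    period_days=1 approximates a per-day criterion."""
    grouped = defaultdict(set)
    for cust, d, item in orders:
        bucket = d.toordinal() // period_days  # which window the order falls in
        grouped[(cust, bucket)].add(item)
    return list(grouped.values())

def support(baskets_, itemset):
    """Fraction of transactions containing every item in itemset."""
    itemset = set(itemset)
    return sum(itemset <= b for b in baskets_) / len(baskets_)

daily = baskets(orders, 1)
monthly = baskets(orders, 30)
# Under a 1-day criterion c2's milk and bread land in separate baskets;
# under a 30-day criterion they co-occur, so support rises.
print(support(daily, {"milk", "bread"}), support(monthly, {"milk", "bread"}))
```

Running the analysis with several `period_days` values and comparing rule accuracy on a validation set is the kind of comparison the paper performs.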

A Case Study on Forecasting Inbound Calls of Motor Insurance Company Using Interactive Data Mining Technique (대화식 데이터 마이닝 기법을 활용한 자동차 보험사의 인입 콜량 예측 사례)

  • Baek, Woong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.99-120
    • /
    • 2010
  • Due to customers' widespread, frequent use of non-face-to-face services, there have been many attempts to improve customer satisfaction using the huge amounts of data accumulated through non-face-to-face channels. A call center is usually regarded as one of the most representative non-face-to-face channels. It is therefore important that a call center have enough agents to offer a high level of customer satisfaction; however, managing too many agents increases the operational costs of a call center through higher labor costs. Predicting and calculating the appropriate size of a call center's human resources is thus one of the most critical success factors of call center management. For this reason, most call centers are currently establishing a WFM (Work Force Management) department to estimate the appropriate number of agents, and they direct much effort toward predicting the volume of inbound calls. In real-world applications, inbound call prediction is usually performed based on the intuition and experience of a domain expert. In other words, a domain expert usually predicts the volume of calls by calculating the average calls of certain periods and adjusting the average according to his/her subjective estimation. However, this kind of approach has radical limitations in that the result of the prediction may be strongly affected by the expert's personal experience and competence. It is often the case that one domain expert predicts inbound calls quite differently from another when the two experts have mutually different opinions on selecting influential variables and priorities among the variables. Moreover, it is almost impossible to logically clarify the process of an expert's subjective prediction. Currently, to overcome the limitations of subjective call prediction, most call centers are adopting a WFMS (Workforce Management System) package in which experts' best practices are systemized. With a WFMS, a user can predict the volume of calls by calculating the average calls of each day of the week, excluding some eventful days. However, a WFMS costs too much capital during the early stage of system establishment, and it is hard to reflect new information in the system when some factors affecting the volume of calls have changed. In this paper, we attempt to devise a new model for predicting inbound calls that is not only based on a theoretical background but also easily applicable to real-world applications. Our model was mainly developed with the interactive decision tree technique, one of the most popular techniques in data mining. We therefore expect that our model can predict inbound calls automatically based on historical data while utilizing an expert's domain knowledge during the process of tree construction. To analyze the accuracy of our model, we performed intensive experiments on a real case from one of the largest car insurance companies in Korea. In the case study, the prediction accuracy of the two devised models and a traditional WFMS was analyzed with respect to various allowable error rates. The experiments reveal that our two data mining-based models outperform the WFMS in predicting the volume of accident calls and fault calls in most experimental situations examined.
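The interactive decision tree idea can be sketched minimally: the analyst chooses the splitting variables from domain knowledge, and the historical data supplies a mean call volume per leaf. The variables and numbers below are invented for illustration and are not the paper's model:

```python
from statistics import mean

# Toy call history: (day_of_week, is_holiday, inbound_calls). Illustrative only.
history = [
    ("Mon", False, 520), ("Mon", False, 540), ("Mon", True, 300),
    ("Tue", False, 480), ("Tue", False, 470),
    ("Sat", False, 250), ("Sat", False, 270),
]

def build_leaf_means(history):
    """One level of an 'interactive' tree: the analyst has picked the
    splitting variables (day of week, holiday flag); each leaf predicts
    the mean of the historical call counts that fall into it."""
    leaves = {}
    for dow, holiday, calls in history:
        leaves.setdefault((dow, holiday), []).append(calls)
    return {leaf: mean(counts) for leaf, counts in leaves.items()}

model = build_leaf_means(history)
print(model[("Mon", False)])  # 530
```

A full interactive tree tool would let the analyst accept, reject, or re-order candidate splits at each level instead of fixing them up front.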

Relation of Social Security Network, Community Unity and Local Government Trust (지역사회 사회안전망구축과 지역사회결속 및 지방자치단체 신뢰의 관계)

  • Kim, Yeong-Nam;Kim, Chan-Sun
    • Korean Security Journal
    • /
    • no.42
    • /
    • pp.7-36
    • /
    • 2015
  • This study aims to analyze differences in the social security network, community unity, and local government trust according to socio-demographic features, to explore the relations among them, and to present and verify a model of the relationships between the variables. This study sampled general citizens in Gwangju over about 15 days, Aug. 15 through Aug. 30, 2014, distributing a total of 450 copies of a questionnaire using cluster random sampling; 438 were collected, and 412 were used for analysis. The validity and reliability of the questionnaire were verified through an experts' meeting, a preliminary test, factor analysis, and reliability analysis. The reliability of the questionnaire was α = .809 to α = .890. The input data were analyzed by study purpose using SPSSWIN 18.0; the statistical techniques used were factor analysis, reliability analysis, correlation analysis, independent-sample t-tests, ANOVA, multiple regression analysis, and path analysis. The findings obtained through the above methods are as follows. First, building a social security network has an effect on community institutions: the more activated local autonomous anti-crime activities are, the higher the awareness of institutions, and the more activated street CCTV facilities, anti-crime design, and local government security education are, the higher the stability. Second, building a social security network has an effect on trust of local government: the more activated local autonomous anti-crime activity, anti-crime design, local government security education, and police public order services are, the greater the trust in policy, service management, and business performance. Third, community unity has an effect on trust of local government: the better community institutions are achieved, the higher the trust in policy, and the stabler community institutions are, the higher the trust in business performance. Fourth, building a social security network has both a direct and an indirect effect on community unity and local government trust: the social security network has a direct effect on trust of local government, but it has a greater effect through community unity as a mediating variable. These results show that community unity in the Gwangju region is an important factor, meaning it is an important variable mediating the building of a social security network and trust of local government. To win the trust of local residents, various cultural events and active communication spaces should be prepared, and a social security network should be built to unite them.
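The direct-versus-mediated effects reported in the path analysis decompose in the standard way for a simple mediation model; the coefficients below are illustrative assumptions, not the study's estimates:

```python
def effects(a, b, c_prime):
    """Decompose a simple mediation model with standardized path
    coefficients: network -> unity (a), unity -> trust (b),
    network -> trust directly (c')."""
    indirect = a * b              # effect transmitted through the mediator
    total = c_prime + indirect    # total effect on trust
    return {"direct": c_prime, "indirect": indirect, "total": total}

# Illustrative coefficients only:
print(effects(a=0.45, b=0.60, c_prime=0.20))
```

The study's finding that the mediated path dominates corresponds to the case where `indirect` exceeds `direct`.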


Modern Paper Quality Control

  • Olavi Komppa
    • Proceedings of the Korea Technical Association of the Pulp and Paper Industry Conference
    • /
    • 2000.06a
    • /
    • pp.16-23
    • /
    • 2000
  • The increasing functional needs of top-quality printing papers and packaging paperboards, and especially the rapid developments in electronic printing processes and various computer printers during the past few years, set new targets and requirements for modern paper quality. Most of these paper grades today have a relatively high filler content, are moderately or heavily calendered, and have multiple coating layers for the best appearance and performance. In practice, this means that many of the traditional quality assurance methods, mostly designed to measure papers made of pure, native pulp only, cannot reliably (or at all) be used to analyze or rank the quality of modern papers. Hence, the introduction of new measurement techniques is necessary to assure and further develop paper quality today and in the future. Paper formation, i.e. small-scale (millimeter-scale) variation of basis weight, is the most important quality parameter of papermaking due to its influence on practically all the other quality properties of paper. The ideal paper would be completely uniform, so that the basis weight of each small point (area) measured would be the same. In practice, of course, this is not possible, because relatively large local variations always exist in paper. However, these small-scale basis weight variations are the major reason for many other quality problems, including calender blackening, uneven coating results, uneven printing results, etc. The traditionally used visual inspection or optical measurement of paper does not give a reliable understanding of the material variations in the paper, because in the modern papermaking process the optical behavior of paper is strongly affected by the use of, e.g., fillers, dyes, or coating colors. Furthermore, the opacity (optical density) of the paper changes at different process stages, such as wet pressing and calendering. The greatest advantage of using the beta transmission method to measure paper formation is that it can be very reliably calibrated to measure the true basis weight variation of all kinds of paper and board, independently of sample basis weight or paper grade. This gives us the possibility to measure, compare, and judge papers made of different raw materials or in different colors, and even to measure heavily calendered, coated, or printed papers. Scientific research in paper physics has shown that the orientation of the top-layer (paper surface) fibers of the sheet plays the key role in paper curling and cockling, causing the typical practical problems (paper jams) with modern fax and copy machines, electronic printing, etc. On the other hand, the fiber orientation at the surface and middle layers of the sheet controls the bending stiffness of paperboard. Therefore, a reliable measurement of paper surface fiber orientation gives us a magnificent tool to investigate and predict paper curling and cockling tendency, and provides the necessary information to fine-tune the manufacturing process for optimum quality. Many papers, especially heavily calendered and coated grades, strongly resist liquid and gas penetration, being beyond the measurement range of traditional instruments or resulting in inconveniently long measuring times per sample. The increased surface hardness and the use of filler minerals and mechanical pulp make a reliable, non-leaking sample contact with the measurement head a challenge of its own. Paper surface coating creates, as expected, a layer which has completely different permeability characteristics compared to the other layers of the sheet. The latest developments in sensor technologies have made it possible to reliably measure gas flow under well-controlled conditions, allowing us to investigate the gas penetration of open structures, such as cigarette paper, tissue, or sack paper, and, in the low-permeability range, to analyze even fully greaseproof papers, silicone papers, heavily coated papers and boards, or even to detect defects in barrier coatings. Even nitrogen or helium may be used as the gas, giving completely new possibilities to rank products or to find correlations with critical process or converting parameters. All modern paper machines include many on-line measuring instruments which are used to provide the necessary information for automatic process control systems. Hence, the reliability of the information obtained from the different sensors is vital for good optimization and process stability. If any of these on-line sensors does not operate exactly as planned (having even a small measurement error or malfunction), the process control will set the machine to operate away from the optimum, resulting in loss of profit or eventual problems in quality or runnability. To assure optimum operation of the paper machines, a novel quality assurance policy for the on-line measurements has been developed, including control procedures utilizing traceable, accredited standards for the best reliability and performance.
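Small-scale basis weight variation of the kind measured by beta transmission is commonly summarized as a coefficient of variation across small measured areas; a minimal sketch, with made-up readings, is:

```python
from statistics import mean, pstdev

def formation_index(basis_weights):
    """Coefficient of variation (%) of small-scale basis weight readings
    (e.g. from beta transmission over a grid of small areas);
    lower means a more uniform sheet."""
    m = mean(basis_weights)
    return pstdev(basis_weights) / m * 100.0

# Illustrative g/m^2 readings, not real measurements:
samples = [80.1, 79.6, 80.4, 81.0, 78.9, 80.0]
print(round(formation_index(samples), 2))  # 0.81
```

Because the beta transmission signal calibrates to true basis weight regardless of color or coating, this statistic stays comparable across very different paper grades, which is the advantage the text describes.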

Increase of Tc-99m RBC SPECT Sensitivity for Small Liver Hemangioma using Ordered Subset Expectation Maximization Technique (Tc-99m RBC SPECT에서 Ordered Subset Expectation Maximization 기법을 이용한 작은 간 혈관종 진단 예민도의 향상)

  • Jeon, Tae-Joo;Bong, Jung-Kyun;Kim, Hee-Joung;Kim, Myung-Jin;Lee, Jong-Doo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.36 no.6
    • /
    • pp.344-356
    • /
    • 2002
  • Purpose: RBC blood pool SPECT has been used to diagnose focal liver lesions such as hemangioma owing to its high specificity. However, low spatial resolution is a major limitation of this modality. Recently, ordered subset expectation maximization (OSEM) has been introduced for obtaining tomographic images in clinical applications. We compared this modified iterative reconstruction method, OSEM, with conventional filtered back projection (FBP) in the imaging of liver hemangioma. Materials and Methods: Sixty-four projections were acquired using a dual-head gamma camera in 28 lesions of 24 patients with cavernous hemangioma of the liver, and these raw data were transferred to a Linux-based personal computer. After the header file was replaced as Interfile, OSEM was performed under various combinations of subsets (1, 2, 4, 8, 16, and 32) and iteration numbers (1, 2, 4, 8, and 16) to find the best setting for liver imaging; the best condition in our investigation was considered to be 4 iterations with 16 subsets. All the images were then processed by both FBP and OSEM, and three experts reviewed the images without any clinical information. Results: According to the blind review of the 28 lesions, OSEM images revealed the same or better image quality than those of FBP in nearly all cases. Although there was no significant difference in the detection of large lesions over 3 cm, 5 lesions 1.5 to 3 cm in diameter were detected by OSEM only. However, both techniques failed to depict the 4 small lesions less than 1.5 cm. Conclusion: OSEM revealed better contrast and definition in the depiction of liver hemangioma as well as higher sensitivity in the detection of small lesions. Furthermore, this reconstruction method does not require a high-performance computer system or a long reconstruction time; OSEM is therefore considered a good method that can be applied to RBC blood pool SPECT for the diagnosis of liver hemangioma.
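OSEM accelerates the maximum-likelihood EM (MLEM) update by applying it to ordered subsets of the projections; a toy single-subset (i.e., plain MLEM) sketch on a 2-pixel, 3-projection problem is shown below. The system matrix and counts are invented for illustration, far smaller than a real SPECT reconstruction:

```python
# A: system matrix (rows = projections, columns = image pixels);
# y: measured counts for each projection. Toy, internally consistent data.
A = [[1.0, 0.0],
     [0.0, 1.0],
     [0.5, 0.5]]
y = [4.0, 2.0, 3.0]

def mlem_step(x, A, y):
    """One multiplicative EM update: x <- x * A^T(y / Ax) / A^T 1.
    OSEM performs this same update cycling over subsets of the rows."""
    fp = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]  # Ax
    ratio = [y_i / f_i for y_i, f_i in zip(y, fp)]                    # y / Ax
    new_x = []
    for j in range(len(x)):
        num = sum(A[i][j] * ratio[i] for i in range(len(A)))  # back-projection
        den = sum(A[i][j] for i in range(len(A)))             # sensitivity
        new_x.append(x[j] * num / den)
    return new_x

x = [1.0, 1.0]          # flat initial image
for _ in range(50):
    x = mlem_step(x, A, y)
print([round(v, 2) for v in x])  # [4.0, 2.0]
```

Splitting the rows of `A` into several subsets and updating after each one (OSEM with 16 subsets, as in the study) gives roughly a subset-count speedup per pass over the data.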

Analysis of Mutant Chinese Cabbage Plants Using Gene Tagging System (Gene Tagging System을 이용한 돌연변이 배추의 분석)

  • Yu, Jae-Gyeong;Lee, Gi-Ho;Lim, Ki-Byung;Hwang, Yoon-Jung;Woo, Eun-Taek;Kim, Jung-Sun;Park, Beom-Seok;Lee, Youn-Hyung;Park, Young-Doo
    • Horticultural Science & Technology
    • /
    • v.28 no.3
    • /
    • pp.442-448
    • /
    • 2010
  • The objectives of this study were to analyze mutant lines of Chinese cabbage (Brassica rapa ssp. pekinensis) using a gene tagging system (plasmid rescue and inverse polymerase chain reaction) and to observe their phenotypic characteristics. Insertional mutants were derived by transferring the DNA (T-DNA) of Agrobacterium for a functional genomics study in Chinese cabbage. Hypocotyls of Chinese cabbage 'Seoul' were used to obtain transgenic plants with Agrobacterium tumefaciens harboring the pRCV2 vector. To tag T-DNA in the Chinese cabbage genomic DNA, plasmid rescue and inverse PCR were applied to multiple-copy and single-copy insertional mutants, respectively. These techniques were successfully applied to Chinese cabbage plants with high efficiency and, as a result, the T-DNA of the pRCV2 vector showed various distinct integration patterns in the transgenic plant genomes. Polyploidy level analysis showed that the change in phenotypic characteristics of 13 mutant lines was not due to variation in somatic chromosome number. Compared with the wild type, the T1 progenies showed varied phenotypes, such as decreased stamen numbers, larger or smaller flowers, upright growth habit, hairless leaves, chlorosis symptoms, narrow leaves, and deeply serrated leaves. Mutants that showed distinct phenotypic differences compared to the wild type, carried 1 copy of T-DNA by Southern blot analysis, and had a chromosome number of 2n = 20 were selected. For these selected mutant lines, the flanking DNA was sequenced and the genomic loci were mapped, and the genome information of the lines is being recorded in a specially developed database.