• Title/Summary/Keyword: 1-Dimensional


Long-Term Results of 2-Dimensional Radiation Therapy in Patients with Nasopharyngeal Cancer (이차원방사선치료를 시행한 코인두암 환자의 장기 추적 결과 및 예후인자 분석)

  • Lee, Nam-Kwon;Park, Young-Je;Yang, Dae-Sik;Yoon, Won-Sup;Lee, Suk;Kim, Chul-Yong
    • Radiation Oncology Journal
    • /
    • v.28 no.4
    • /
    • pp.193-204
    • /
    • 2010
  • Purpose: To analyze the treatment outcomes, complications, and prognostic factors after long-term follow-up of patients with nasopharyngeal carcinoma treated with radiation therapy (RT) alone or with concurrent chemoradiation therapy (CCRT). Materials and Methods: Between December 1981 and December 2006, 190 eligible patients with non-metastatic nasopharyngeal carcinoma were treated at our department with curative intent. Of these patients, 103 were treated with RT alone and 87 received CCRT. The median age was 49 years (range, 8~78 years). The distribution of clinical stage according to the AJCC 6th edition was: I, 7 (3.6%); IIA, 8 (4.2%); IIB, 33 (17.4%); III, 82 (43.2%); IVA, 31 (16.3%); and IVB, 29 (15.3%). The accumulated radiation doses to the primary tumor ranged from 66.6 to 87.0 Gy (median, 72 Gy). Treatment outcomes and prognostic factors were retrospectively analyzed. Acute and late toxicities were assessed using the RTOG criteria. Results: A total of 96.8% (184/190) of patients completed the planned treatment. With a mean follow-up of 73 months (range, 2~278 months; median, 52 months), 93 patients (48.9%) relapsed: local in 44 (23.2%), nodal in 13 (6.8%), and distant in 49 (25.8%). The 5- and 10-year overall survival (OS), disease-free survival (DFS), and disease-specific survival (DSS) rates were 55.6% and 44.5%, 54.8% and 51.3%, and 65.3% and 57.4%, respectively. Multivariate analyses revealed that CCRT, age, gender, and stage were significant prognostic factors for OS, while CCRT and gender were independent prognostic factors for both DFS and DSS. There was no grade 4 or 5 acute toxicity, but grade 3 mucositis and hematologic toxicity were present in 42 patients (22.1%) and 18 patients (9.5%), respectively. During follow-up, grade 3 hearing loss was reported in 9 patients and trismus in 6 patients. Conclusion: The results of our study are in accordance with previous findings and confirm that CCRT, low stage, female gender, and young age are related to improved OS. However, there are limits to the locoregional control achievable by CCRT combined with 2D conventional radiation therapy. This observation has motivated further studies clarifying the efficacy of concurrent chemotherapy delivered with intensity-modulated radiation therapy.
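
The survival estimates and multivariate prognostic factors reported above are the kind of quantities usually obtained with a Kaplan-Meier estimator and a Cox proportional-hazards model. The sketch below shows one way to compute them with the lifelines library; the patient-level data and column names are hypothetical and are not taken from the study.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical patient-level data: follow-up in months, event indicator, covariates.
df = pd.DataFrame({
    "months":  [12, 52, 73, 110, 30, 200, 45, 88, 60, 150, 24, 96],
    "death":   [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],   # 1 = died, 0 = censored
    "ccrt":    [0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0],    # 1 = concurrent chemoradiation
    "age":     [49, 35, 62, 28, 71, 44, 55, 60, 38, 50, 66, 42],
    "female":  [0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1],
    "stage34": [1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0],    # advanced-stage indicator
})

# Kaplan-Meier estimate of overall survival; read OS at 60 and 120 months (5 and 10 years).
kmf = KaplanMeierFitter()
kmf.fit(durations=df["months"], event_observed=df["death"])
print(kmf.survival_function_at_times([60, 120]))

# Multivariate Cox model for prognostic factors (CCRT, age, gender, stage).
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()
```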

APICAL FITNESS OF NON-STANDARDIZED GUTTA-PERCHA CONES IN SIMULATED ROOT CANALS PREPARED WITH ROTARY ROOT CANAL INSTRUMENTS (전동화일로 형성된 근관에서 비표준화 Gutta-percha Cone의 적합성)

  • Kwon, O-Sang;Kim, Sung-Kyo
    • Restorative Dentistry and Endodontics
    • /
    • v.25 no.3
    • /
    • pp.390-398
    • /
    • 2000
  • The purpose of this study was to evaluate the apical fitness of non-standardized gutta-percha cones in root canals prepared with rotary Ni-Ti root canal instruments of various tapers and apical tip sizes. Sixty simulated curved root canals in plastic blocks were prepared with a crown-down technique using rotary root canal instruments of Maillefer ProFile® .04 and .06 taper (Maillefer Instrument SA, Switzerland). Specimens were divided into six groups and prepared as follows: Group 1, prepared up to size 25 of .04 taper; Group 2, size 30 of .04 taper; Group 3, size 35 of .04 taper; Group 4, size 25 of .06 taper; Group 5, size 30 of .06 taper; Group 6, size 35 of .06 taper. After the coronal portion of the plastic blocks was cut off perpendicular to the long axis of the canal with a diamond saw, the apical 5 mm of the canal space was analyzed. The prepared apical canal spaces were duplicated with rubber-base impression material to evaluate the two-dimensional total area of the apical canal space. Gutta-percha cones of various sizes were applied in the apical 5 mm of the canal space: size 25, size 30, and size 35 standardized gutta-percha cones; Dia-Pro ISO-.04™ and .06™ cones (Diadent, Korea); and medium-fine (MF), fine (F), fine-medium (FM), and medium (M) non-standardized gutta-percha cones (Diadent, Korea). Coronal excess gutta-percha was cut off with a sharp blade. Photographs of the impressed apical canal spaces and gutta-percha cones were taken with a CCD camera under a stereomicroscope and stored in a computer. The areas of the total canal space and of the gutta-percha cones were calculated using a digital image-analysis program, CompuScope (Sungjin Multimedia Co., Korea). The apical fitness ratio was obtained as the ratio of the gutta-percha cone area to the total area of the canal space. The data were analysed statistically using one-way analysis of variance and Duncan's multiple range test. The results were as follows: 1. In canals prepared up to size 25 ProFile® of .04 taper, non-standardized MF and F cones occupied significantly more canal space than Dia-Pro ISO-.04™ or size 25 standardized cones (p<0.05). 2. In canals prepared up to size 30 ProFile® of .04 taper, non-standardized F cones occupied significantly more canal space than Dia-Pro ISO-.04™ or size 30 standardized cones (p<0.05), and non-standardized MF cones occupied more canal space than size 30 standardized cones (p<0.05). 3. In canals prepared up to size 35 ProFile® of .04 taper, there was no significant difference in canal space occupation among non-standardized MF and F, size 35 standardized, and Dia-Pro ISO-.04™ cones (p>0.05). 4. In canals prepared up to size 25 ProFile® of .06 taper, non-standardized MF and F cones occupied significantly more canal space than Dia-Pro ISO-.06™ or size 25 standardized cones (p<0.05), and Dia-Pro ISO-.06™ cones occupied significantly more space than size 25 standardized cones (p<0.05). 5. In canals prepared up to size 30 ProFile® of .06 taper, non-standardized FM cones occupied significantly more canal space than Dia-Pro ISO-.06™ or size 30 standardized cones (p<0.05), and non-standardized F cones occupied significantly more canal space than size 30 standardized cones (p<0.05). 6. In canals prepared up to size 35 ProFile® of .06 taper, non-standardized M and FM cones and Dia-Pro ISO-.06™ cones occupied significantly more canal space than size 35 standardized cones (p<0.05). In summary, in canals prepared with either .04 or .06 taper ProFile®, non-standardized cones showed better fitness than Dia-Pro ISO™ or standardized cones, and this was more pronounced in smaller canals.
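
The fitness measurement described above boils down to an area ratio followed by a group comparison. The sketch below is a minimal illustration, assuming hypothetical pre-segmented binary masks and made-up per-group ratios rather than the authors' CompuScope workflow, and it uses a one-way ANOVA (Duncan's post-hoc test is not included).

```python
import numpy as np
from scipy.stats import f_oneway

def apical_fitness_ratio(canal_mask: np.ndarray, cone_mask: np.ndarray) -> float:
    """Ratio of gutta-percha cone area to total canal area.

    Both arguments are boolean masks of the apical 5 mm region,
    e.g. obtained by thresholding stereomicroscope photographs.
    """
    canal_area = np.count_nonzero(canal_mask)
    cone_area = np.count_nonzero(cone_mask & canal_mask)  # cone pixels inside the canal
    return cone_area / canal_area

# Hypothetical per-specimen ratios for three cone types in one preparation group.
standardized = [0.62, 0.58, 0.65, 0.60]
dia_pro      = [0.70, 0.68, 0.73, 0.71]
non_standard = [0.81, 0.79, 0.84, 0.80]

f_stat, p_value = f_oneway(standardized, dia_pro, non_standard)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```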


Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • Convolutional Neural Networks (ConvNets) are a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. The first such network, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. Decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and takes a great deal of effort to gather a large-scale dataset for training a ConvNet. Moreover, even when a large-scale dataset is available, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be addressed by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases. The first uses the ConvNet as a fixed feature extractor; the second fine-tunes the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, an image from the target task is fed forward through a pre-trained AlexNet, and the activation features of the three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple ConvNet layer representation, because it carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning is improved.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against a single ConvNet layer representation, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation. Moreover, the proposed approach achieved 75.6% accuracy compared to the 73.9% accuracy of the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% accuracy of the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% accuracy of the FC7 layer on the SUN397 dataset. We also showed that the proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
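
A minimal sketch of the fixed-feature-extractor pipeline described above is given below, assuming a pre-trained AlexNet from torchvision together with scikit-learn PCA and a linear SVM; the hooked layer indices, PCA dimensionality, and classifier are illustrative choices rather than the authors' exact configuration.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

# Pre-trained AlexNet used purely as a fixed feature extractor.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

# Capture activations of the three fully connected layers (FC6, FC7, FC8).
activations = {}
def hook(name):
    def fn(module, inputs, output):
        activations[name] = output.detach().flatten(1)
    return fn

alexnet.classifier[1].register_forward_hook(hook("fc6"))  # 4096-d
alexnet.classifier[4].register_forward_hook(hook("fc7"))  # 4096-d
alexnet.classifier[6].register_forward_hook(hook("fc8"))  # 1000-d

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(pil_images):
    """Concatenate FC6+FC7+FC8 activations into a 9192-d vector per image."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    with torch.no_grad():
        alexnet(batch)
    return torch.cat([activations["fc6"], activations["fc7"],
                      activations["fc8"]], dim=1).numpy()

# Hypothetical training flow: PCA removes redundancy, then a linear classifier is trained.
# X_train = extract_features(train_images); y_train = train_labels
# pca = PCA(n_components=512).fit(X_train)        # dimensionality is illustrative
# clf = LinearSVC().fit(pca.transform(X_train), y_train)
# predictions = clf.predict(pca.transform(extract_features(test_images)))
```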

A STUDY ON THE RELATIONS OF VARIOUS PARTS OF THE PALATE FOR PRIMARY AND PERMANENT DENTITION (유치열과 영구치열의 구개 각부의 관계에 관한 연구)

  • Lee, Yong-Hoon;Yang, Yeon-Mi;Lee, Yong-Hee;Kim, Sang-Hoon;Kim, Jae-Gon;Baik, Byeong-Ju
    • Journal of the Korean Academy of Pediatric Dentistry
    • /
    • v.31 no.4
    • /
    • pp.569-578
    • /
    • 2004
  • The purpose of this study was to clarify the palatal arch length, width, and height in the primary and permanent dentitions. The samples consisted of normal occlusions in the primary dentition (50 males and 50 females) and in the permanent dentition (50 males and 50 females). Their upper plaster casts were digitized by 3-dimensional laser scanning (3D Scanner, DS4060, LDI, U.S.A.), and cloud data, polygonization, section curves, loft surfaces, and fitted horizontal planes were used to measure the palatal arch length, width, and height (Surfacer 10.0, Imageware, U.S.A.). T-tests were applied for the statistical analysis of the data. The results were as follows: 1. In the measured values, the values of the males were higher than those of the females, except for primary anterior palatal height. There were statistically significant differences between males and females not only in anterior palatal width (p<0.05) and posterior palatal width (p<0.01) in the primary dentition, but also in palatal width (p<0.05), anterior palatal length (p<0.01), and middle and posterior palatal length (p<0.05) in the permanent dentition. 2. In the palatal indices, there were statistically significant differences in the height-length index (p<0.05) and the width-length index (p<0.01) between males and females in the primary dentition; in the permanent dentition, there was also a statistically significant difference between males and females. 3. In the measured values, posterior palatal width increased the most, followed in descending order by posterior palatal height, anterior palatal width, and anterior palatal length; on the other hand, anterior palatal height and posterior palatal length decreased. 4. In the palatal indices, the height-length index, the width-length index, and the posterior height-width index increased, while the others decreased.
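
As an illustration of the sex comparison used above, the following sketch runs an independent two-sample t-test on hypothetical palatal-width measurements; the numbers are invented and only demonstrate the procedure.

```python
from scipy.stats import ttest_ind

# Hypothetical posterior palatal widths (mm) measured on male and female casts.
male_width   = [33.1, 34.0, 32.5, 35.2, 33.8, 34.6]
female_width = [31.9, 32.4, 31.1, 33.0, 32.2, 31.7]

t_stat, p_value = ttest_ind(male_width, female_width)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}  (p < 0.05 indicates a sex difference)")
```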


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common practice [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shape, or to predefined shapes. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation of intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, a significant waste of memory can also occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are typical of fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets describing the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by: Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension in bits of a membership value, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, Length = 3 * (5 + 3) = 24, and the memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the word dimension would then be 8*5 bits, and the dimension of the memory would have been 128*40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at any element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm <= 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
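
The word-length formula above is easy to check numerically. The short sketch below, with function names of our own choosing, reproduces the example figures from the abstract and compares the proposed sparse memorization with full vectorial memorization.

```python
import math

def word_length(nfm: int, truth_levels: int, n_fuzzy_sets: int) -> int:
    """Bits per memory word: Length = nfm * (dm(m) + dm(fm))."""
    dm_m = math.ceil(math.log2(truth_levels))    # bits for one membership value
    dm_fm = math.ceil(math.log2(n_fuzzy_sets))   # bits for the fuzzy-set index
    return nfm * (dm_m + dm_fm)

def vectorial_word_length(truth_levels: int, n_fuzzy_sets: int) -> int:
    """Bits per word when every membership value is stored."""
    return n_fuzzy_sets * math.ceil(math.log2(truth_levels))

universe = 128   # elements of the universe of discourse (memory rows)
sparse = word_length(nfm=3, truth_levels=32, n_fuzzy_sets=8)    # 3*(5+3) = 24
full = vectorial_word_length(truth_levels=32, n_fuzzy_sets=8)   # 8*5 = 40

print(f"sparse memory:    {universe} x {sparse} = {universe * sparse} bits")
print(f"vectorial memory: {universe} x {full} = {universe * full} bits")
```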


Recent Progress in Air-Conditioning and Refrigeration Research : A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2016 (설비공학 분야의 최근 연구 동향 : 2016년 학회지 논문에 대한 종합적 고찰)

  • Lee, Dae-Young;Kim, Sa Ryang;Kim, Hyun-Jung;Kim, Dong-Seon;Park, Jun-Seok;Ihm, Pyeong Chan
    • Korean Journal of Air-Conditioning and Refrigeration Engineering
    • /
    • v.29 no.6
    • /
    • pp.327-340
    • /
    • 2017
  • This article reviews the papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering during 2016. It is intended to provide an understanding of the status of current research in the areas of heating, cooling, ventilation, sanitation, and indoor environments of buildings and plant facilities. The conclusions are as follows. (1) Research on thermal and fluid engineering has been reviewed in the groups of flow, heat and mass transfer, reduction of pollutant exhaust gas, cooling and heating, renewable energy systems, and flow around buildings. CFD schemes were used more widely across all research areas. (2) Research on heat transfer has been reviewed in the categories of heat transfer characteristics, pool boiling and condensing heat transfer, and industrial heat exchangers. Research on heat transfer characteristics included the long-term performance variation of a plate-type enthalpy exchange element made of paper, design optimization of an extruded-type cooling structure for reducing the weight of LED street lights, and hot-plate welding of thermoplastic elastomer packing. In the area of pool boiling and condensing, the heat transfer characteristics of a finned-tube heat exchanger in a PCM (phase change material) thermal energy storage system, the influence of flow boiling heat transfer on fouling in nanofluids, and PCM under simultaneous charging and discharging conditions were studied. In the area of industrial heat exchangers, a one-dimensional flow network model, a porous-media model, and R245fa in a plate-shell heat exchanger were studied. (3) Various studies were published in the categories of refrigeration cycle, alternative refrigeration/energy system, and system control. In the refrigeration cycle category, subjects included a mobile cold-storage heat exchanger, compressor reliability, an indirect refrigeration system with CO2 as secondary fluid, a heat pump for fuel-cell vehicles, heat recovery from a hybrid drier, and heat exchangers with two-port and flat tubes. In the alternative refrigeration/energy system category, subjects included a membrane module for dehumidification refrigeration, desiccant-assisted low-temperature drying, a regenerative evaporative cooler, and ejector-assisted multi-stage evaporation. In the system control category, subjects included multi-refrigeration system control, emergency cooling of data centers, and variable-speed compressor control. (4) In the field of building mechanical systems, fifteen studies were reported on effective design of mechanical systems and on maximizing the energy efficiency of buildings. The topics included energy performance, HVAC systems, ventilation, renewable energies, and others. The proposed designs and the performance tests using numerical methods and experiments provide useful information and key data that could help improve the energy efficiency of buildings. (5) The field of architectural environment focused mostly on indoor environment and building energy. The main indoor-environment studies concerned the analysis of indoor thermal environments controlled by portable coolers, the effects of outdoor wind pressure on airflow in high-rise buildings, window airtightness in relation to filling-piece shapes, the stack effect in core-type office buildings, and the development of a movable drawer-type light shelf with adjustable reflector depth. The building-energy studies addressed energy consumption analysis in office buildings, prediction of the exit-air temperature of a horizontal geothermal heat exchanger, LS-SVM-based modeling of the hot-water supply load for a district heating system, the energy-saving effect of an ERV system using a night-purge control method, and the effect of strengthened insulation levels on building heating and cooling loads.

Investigating the Partial Substitution of Chicken Feather for Wood Fiber in the Production of Wood-based Fiberboard (목질 섬유판 제조에 있어 도계부산물인 닭털의 목섬유 부분적 대체화 탐색)

  • Yang, In;Park, Dae-Hak;Choi, Won-Sil;Oh, Sei Chang;Ahn, Dong-uk;Han, Gyu-Seong;Oh, Seung Won
    • Korean Chemical Engineering Research
    • /
    • v.56 no.4
    • /
    • pp.577-584
    • /
    • 2018
  • This study was conducted to investigate the potential of chicken feather (CF), a by-product of the poultry industry, as a partial substitute for wood fiber in the production of wood-based fiberboard. Keratin-type protein constituted the majority of CF, and its appearance did not differ from that of wood fiber. When the formaldehyde (HCHO) adsorptivities of CF were compared by pretreatment type, feather meal (FM), i.e. CF pretreated at high temperature and pressure and then ground, showed the highest HCHO adsorptivity. In addition, there was no difference in the adsorbed HCHO amounts, measured by the dinitrophenylhydrazine method, between scissors-chopped CF and CF beaten with an electric blender. The mechanical properties and HCHO emission of medium-density fiberboards (MDF) fabricated with wood fiber and 5 wt% CF, beaten CF, or FM (based on the oven-dried weight of wood fiber) were not influenced by the pretreatment type of CF. However, compared with MDF made from wood fiber alone, the thickness swelling and HCHO emission of the MDF improved greatly with the addition of CF, beaten CF, or FM. Based on these results, it might be possible to produce MDF with improved dimensional stability and low HCHO emission if CF, beaten CF, or FM is partially added as a substitute for wood fiber in the manufacture of MDF produced with conventional E1-grade urea-formaldehyde resin. However, the use of CF or FM in MDF production currently has low economic feasibility owing to the difficulty of securing CF and its high cost. To enhance the economic feasibility, CF produced at small- to medium-sized chicken meat plants should be used. More importantly, the technology developed in this research is considered to have great potential as a provision for the prohibition of animal-based feed and for the environmentally sound disposal of avian influenza-infected poultry.

An investigation of the User Research Techniques in the User-Centered Design Framework - Focused on the on-line community services development for 13-18 Young Adults (사용자 중심 디자인 프레임워크에서 사용자 조사기법의 역할에 관한 연구 - 13-18 청소년용 온라인 커뮤니티 컨텐트 개발 프로젝트를 중심으로)

  • 이종호
    • Archives of Design Research
    • /
    • v.17 no.2
    • /
    • pp.77-86
    • /
    • 2004
  • The User-Centered Design approach plays an important role in dealing with usability issues when developing modern technology products. Yet it is still questionable whether the User-Centered approach is enough for the development of successful consumer content, since User-Centered Design originated in the software engineering field, where meeting customers' functional requirements is the most critical aspect of developing software. However, the modern consumer market is already saturated, and to meet ever-increasing consumer requirements the User-Centered Design approach needs to be expanded. As a way of incorporating the User-Centered approach into consumer product development, Jordan suggested the 'pleasure-based approach' in the industrial design field, which typically generates multi-dimensional user requirements: 1) physical, 2) cognitive, 3) identity, and 4) social. Many portal and community service providers now focus on fulfilling both the functional and the emotional needs of users when developing new items, content, and services. Previously, fulfilling consumers' emotional needs depended solely on the visual designer's graphical sense and capability. However, taking a customer-centered approach to uncovering consumers' unspoken needs is becoming critical in the competitive market environment. This paper reviews different types of user research techniques and categorizes them into six groups based on Kano's (1992) product quality model. According to his theory, only performance factors, such as usability, can be identified through the User-Centered Design approach; the approach therefore has to be expanded to include factors such as personality, sociability, pleasure, and so on. To identify performance factors as well as excitement factors through user research, a user-research framework was established and tested in a case study, the development of a new online service for teens. The results of the user research are summarized at the end of the paper, and the pros and cons of each research technique are analyzed.


Performance Evaluation of Machine Learning and Deep Learning Algorithms in Crop Classification: Impact of Hyper-parameters and Training Sample Size (작물분류에서 기계학습 및 딥러닝 알고리즘의 분류 성능 평가: 하이퍼파라미터와 훈련자료 크기의 영향 분석)

  • Kim, Yeseul;Kwak, Geun-Ho;Lee, Kyung-Do;Na, Sang-Il;Park, Chan-Won;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.5
    • /
    • pp.811-827
    • /
    • 2018
  • The purpose of this study is to compare a machine learning algorithm and deep learning algorithms for crop classification using multi-temporal remote sensing data. To this end, the impacts of (1) hyper-parameters and (2) training sample size on machine learning and deep learning algorithms were compared and analyzed for Haenam-gun, Korea, and Illinois State, USA. In the comparison experiments, a support vector machine (SVM) was applied as the machine learning algorithm and convolutional neural networks (CNN) were applied as the deep learning algorithms. In particular, a 2D-CNN considering two-dimensional spatial information and a 3D-CNN that extends the 2D-CNN with a time dimension were applied. The experiments showed that, compared with SVM, the optimal hyper-parameter values of the CNNs, determined over various hyper-parameter settings, were similar in the two study areas. Based on this result, although optimizing a CNN model takes considerable time, it is considered possible to apply transfer learning to extend an optimized CNN model to other regions. In the experiments with various training sample sizes, the impact of sample size on the CNNs was larger than on SVM; in particular, this impact was amplified in Illinois State, which has heterogeneous spatial patterns. In addition, the lowest classification performance of the 3D-CNN was observed in Illinois State, which is attributed to over-fitting caused by the complexity of the model. That is, although the training accuracy of the 3D-CNN model was high, its classification performance was relatively degraded by the heterogeneous patterns and noise in the input data. This result implies that a proper classification algorithm should be selected considering the spatial characteristics of the study area. Also, a large number of training samples is necessary to guarantee higher classification performance with CNNs, particularly with the 3D-CNN.
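
To make the SVM-versus-CNN comparison above concrete, the sketch below sets up both classifiers on hypothetical multi-temporal patches; the patch size, band/date counts, hyper-parameters, and training loop are illustrative assumptions, not the study's configuration (a 3D-CNN would additionally keep the time dimension separate and use Conv3d layers).

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

# Hypothetical multi-temporal patches: N samples, (T dates x B bands) channels, 9x9 pixels.
N, T, B, P = 1000, 6, 4, 9
X = np.random.rand(N, T * B, P, P).astype("float32")   # stand-in for real imagery
y = np.random.randint(0, 5, size=N)                     # 5 hypothetical crop classes

# Machine learning baseline: SVM on the center-pixel spectral-temporal vector.
svm = SVC(kernel="rbf", C=10.0, gamma="scale")
svm.fit(X[:, :, P // 2, P // 2], y)

# Deep learning counterpart: a small 2D-CNN over the spatial patch.
cnn = nn.Sequential(
    nn.Conv2d(T * B, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 5),
)
optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
xb, yb = torch.from_numpy(X), torch.from_numpy(y).long()
for epoch in range(5):                                  # illustrative short training loop
    optimizer.zero_grad()
    loss = loss_fn(cnn(xb), yb)
    loss.backward()
    optimizer.step()
```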

Sports Biomechanical Analysis of Physical Movements on the Basis of the Patterns of the Ready Poses (준비동작의 형태 변화에 따른 신체 움직임의 운동역학적 분석)

  • Lee, Joong-Sook
    • Korean Journal of Applied Biomechanics
    • /
    • v.12 no.2
    • /
    • pp.179-195
    • /
    • 2002
  • The purpose of this research is to provide a proper model by analyzing the sports biomechanics of physical movements on the basis of two ready-to-start stance patterns (open-stance and cross-stance). The subjects of this study were five male handball players from P University and five female shooting players from S University. Three-way moving actions at the start (right, left, and forward) were recorded with two high-speed video cameras and measured with two force platforms and an EMG system. A three-dimensional motion analyzer, a GRF system, and a whole-body reaction movement system were used to characterize the movement mechanisms at the start pose. The analytical results for the movement mechanism at the start pose were as follows. 1. In the three-way moving actions at the start, the cross-stance pose was better than the open-stance pose for the speed of shifting the body weight; a knee joint angle of 175 degrees at take-off and a hip joint angle of 172 degrees were best for the start pose. 2. The support time and GRF data show that the quickest center-of-gravity shift occurred when cross-stanced male subjects started to move toward their left-hand side. The quickest male subject's average support times for the left and right foot were 0.19±0.07 s and 0.26±0.06 s, respectively; the difference between the two feet was 0.07 s. 3. Analysis of the GRF of the moving actions at the start pose showed that more than 1,550 N is loaded on one foot at the open-stance start, and this overload may cause physical injury. In the cross-stance pose, however, the GRF is properly distributed over both feet, with a maximum of 1,350 N loaded on one foot.
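
A minimal sketch, assuming a hypothetical vertical force-plate signal sampled at 1 kHz, of how support time and peak GRF per foot can be extracted from force-plate data of the kind described above; the contact threshold and the synthetic loading curves are illustrative.

```python
import numpy as np

def support_time_and_peak(fz: np.ndarray, fs: float = 1000.0,
                          contact_threshold: float = 20.0):
    """Return support time (s) and peak vertical GRF (N) from one force plate.

    fz: vertical ground reaction force samples in newtons.
    fs: sampling rate in Hz (1 kHz assumed here).
    contact_threshold: force above which the foot is considered in contact.
    """
    in_contact = fz > contact_threshold
    support_time = np.count_nonzero(in_contact) / fs
    peak_force = float(fz.max()) if in_contact.any() else 0.0
    return support_time, peak_force

# Hypothetical signals: a bell-shaped loading curve for each foot.
t = np.linspace(0, 0.4, 400)
left_fz = 1300 * np.exp(-((t - 0.10) / 0.05) ** 2)
right_fz = 1550 * np.exp(-((t - 0.15) / 0.07) ** 2)

for name, fz in (("left", left_fz), ("right", right_fz)):
    st, peak = support_time_and_peak(fz)
    print(f"{name} foot: support time {st:.2f} s, peak GRF {peak:.0f} N")
```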