• Title/Summary/Keyword: Network structure


Effect of Heat Treatment Conditions on the Characteristics of Gel Made from Arrowroot Starch in Korea Cultivars (국내산 칡 전분 젤 특성에 미치는 가열처리 조건의 영향)

  • Lee, Seog-Won;Kim, Hyo-Won;Han, Sung-Hee;Rhee, Chul
    • The Korean Journal of Food And Nutrition / v.22 no.3 / pp.387-395 / 2009
  • This study was conducted to investigate the effects of starch concentration and heating conditions on the gel characteristics of arrowroot starch. Arrowroot starch gels with various pH values and starch concentrations were prepared at different temperatures and heating times, and then stored for 24 hr at 4°C. The hardness of gels made at pH 2.0 and 4.0 increased as the starch concentration increased from 7% to 10%, with a maximum value of 94 N obtained for the gel prepared at pH 4.0 with a starch concentration of 10%. For starch concentrations of 7~9%, the maximum hardness appeared at 80°C regardless of heating time, and the hardness of samples prepared above 100°C was relatively lower than that of samples prepared at other temperatures. At a starch concentration of 8%, the degree of gelatinization (DR) increased with heating temperature, reaching a maximum of about 76% at 120°C regardless of heating time. After storage for 24 hr, the hardness of samples prepared at 70°C, 80°C, and 90°C decreased, while that of samples prepared at 100°C, 110°C, and 120°C increased. The correlation between hardness and the degree of gelatinization or retrogradation was very high for samples prepared at 80°C with a starch concentration of 9%, with a correlation coefficient greater than 0.95. Overall, the microstructure of the freeze-dried arrowroot starch gel consisted of a continuous network of amylose and amylopectin with fragmented ghost structures in an excluded phase, and these ghost structures became more evident after storage and with increased heating temperature.

Traffic Forecasting Model Selection of Artificial Neural Network Using Akaike's Information Criterion (AIC(Akaike's Information Criterion)을 이용한 교통량 예측 모형)

  • Kang, Weon-Eui;Baik, Nam-Cheol;Yoon, Hye-Kyung
    • Journal of Korean Society of Transportation / v.22 no.7 s.78 / pp.155-159 / 2004
  • Recently, many studies have examined artificial neural network (ANN) structures and training methods for forecasting traffic volume. ANNs have powerful pattern-recognition capabilities as flexible non-linear models; however, because of their non-linearity and large number of parameters, they are prone to overfitting. This research applies a variety of model selection criteria to alleviate the overfitting problem. In particular, it analyzes which selection criterion avoids overfitting and guarantees transferability over time. The results are as follows. First, the model selected in-sample does not guarantee the best out-of-sample performance; as in many existing studies, the best in-sample model shows little relationship to out-of-sample performance. Second, regarding the stability of the selection criteria, AIC3, AICC, and BIC are usable, whereas AIC4 shows large variation compared with the best model. In time-series analysis and forecasting, more quantitative data analysis and further time-series studies are needed, because model uncertainty can affect the correlation between in-sample and out-of-sample performance.
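The abstract compares several information criteria for selecting among candidate ANNs. As a rough illustration of how such criteria are typically computed from a fitted model's maximized log-likelihood, consider the sketch below; the penalty forms assumed for AIC3 and AIC4 (3 and 4 per parameter) are the common definitions, and none of the numbers come from the paper.

```python
import numpy as np

def information_criteria(log_likelihood: float, k: int, n: int) -> dict:
    """Common model-selection criteria from a fitted model's log-likelihood.

    log_likelihood : maximized log-likelihood of the candidate model
    k              : number of estimated parameters (e.g. ANN weights)
    n              : number of observations used for fitting
    """
    aic = 2 * k - 2 * log_likelihood
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction; needs n > k + 1
    bic = k * np.log(n) - 2 * log_likelihood
    # AIC3 / AIC4: variants that penalize each parameter by 3 and 4 instead of 2
    # (definitions assumed here; the paper's exact forms are not given in the abstract).
    aic3 = 3 * k - 2 * log_likelihood
    aic4 = 4 * k - 2 * log_likelihood
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "AIC3": aic3, "AIC4": aic4}

# Hypothetical example: pick the candidate network with the smallest criterion value.
candidates = {"small_net": (-512.3, 25), "large_net": (-498.7, 90)}  # (logL, k)
n_obs = 400
scores = {name: information_criteria(ll, k, n_obs)["AICc"]
          for name, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)
```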

Compact Orthomode Transducer for Field Experiments of Radar Backscatter at L-band (L-밴드 대역 레이더 후방 산란 측정용 소형 직교 모드 변환기)

  • Hwang, Ji-Hwan;Kwon, Soon-Gu;Joo, Jeong-Myeong;Oh, Yi-Sok
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.22 no.7 / pp.711-719 / 2011
  • A study on the miniaturization of an L-band orthomode transducer (OMT) for field experiments of radar backscatter is presented in this paper. Thanks to a newly designed junction structure based on a waveguide taper, the proposed OMT does not require additional waveguide taper structures to connect to a standard adaptor. The total length of the L-band OMT is about 1.2 λ0 (310 mm), roughly 60% of the size of existing OMTs. To improve the matching and isolation performance of each polarization, two conducting posts are inserted. A bandwidth of 420 MHz and an isolation level of about 40 dB were measured at the operating frequency. An L-band scatterometer consisting of the manufactured OMT, a horn antenna, and a network analyzer (Agilent 8753E) was calibrated using STCT and 2DTST to analyze the measurement accuracy of radar backscatter. The full-polarimetric RCSs of the test target, a 55 cm trihedral corner reflector, measured by the calibrated scatterometer, have errors of -0.2 dB and 0.25 dB for vv-/hh-polarization, respectively. The effective isolation level is about 35.8 dB at the operating frequency. The horn antenna used for the measurements has a length of 300 mm, an aperture size of 450×450 mm², and HPBWs of 29.5° and 36.5° in the principal E-/H-planes.
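For context on the calibration target mentioned above, a back-of-the-envelope check of the theoretical maximum RCS of a triangular trihedral corner reflector (σ_max = 4πa⁴/3λ²) is sketched below. The abstract only says "55 cm trihedral corner reflector", so taking 55 cm as the inner edge length and 1.27 GHz as the L-band frequency are purely illustrative assumptions.

```python
import math

# Theoretical maximum RCS of a triangular trihedral corner reflector:
#   sigma_max = 4 * pi * a**4 / (3 * lambda**2)
# 'a' is taken here as the inner edge length (assumed), and the operating
# frequency is assumed to be 1.27 GHz; neither value is stated in the abstract.
c = 3.0e8                      # speed of light, m/s
freq = 1.27e9                  # assumed L-band operating frequency, Hz
lam = c / freq                 # wavelength, m
a = 0.55                       # assumed edge length, m

sigma = 4 * math.pi * a**4 / (3 * lam**2)      # RCS in m^2
sigma_dbsm = 10 * math.log10(sigma)            # RCS in dBsm
print(f"sigma = {sigma:.2f} m^2 ({sigma_dbsm:.1f} dBsm)")
```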

Moho Discontinuity Studies Beneath the Broadband Stations Using Receiver Functions in South Korea (수신함수를 이용한 남한의 광대역 관측망 하부의 Moho 불연속면 연구)

  • Kim, So-Gu;Lee, Seong-Kyu
    • Journal of the Korean Society of Hazard Mitigation / v.1 no.1 s.1 / pp.139-155 / 2001
  • We investigate vertical velocity models beneath the newly installed broadband seismic network of the KMA (Korea Meteorological Administration) using the receiver function inversion technique. The seismic phases used are primarily P-to-S conversions and reverberations generated at the two highest-impedance interfaces, the Moho (crust-mantle boundary) and the sediment-basement contact. We obtained teleseismic P-wave receiver functions derived from teleseismic records of the Seoul (SEO), Inchon (INCN), Tejeon (TEJ), Sosan (SOS/SES), Kangnung (KAN), Ulchin (ULC/ULJ), Taegu (TAG), Pusan (PUS), and Ullung-do (ULL) stations. For the Kwangju (KWA/KWJ) and Chunchon (CHU) stations, the Moho-converted Ps arrivals and the waveforms of the radial receiver functions are azimuthally inconsistent and unclear. From the receiver function inversion we found that the crustal thickness is 29 km at the INCN, SEO, and SOS (SES) stations and 28 km at the KAN station in the Kyonggi Massif, 32 km at the TEJ station in the Okchon Folded Belt, 34 km at TAG and 33 km at PUS in the Kyongsang Basin, 32 km at the KWJ station (relocated from the prior KWA station) in the Youngdong-Kwangju Depression Zone, 28 km at the ULC station on the eastern margin of the Ryongnam Massif, and 17 km at the ULL station on Ullung Island in the East Sea. The Moho beneath the INCN, SOS, KWJ, and KAN stations shows a laminated, smooth transition zone 3-5 km thick. The upper crust (~5 km) beneath the KAN, ULC, and PUS stations shows complex structure with high velocities. Unusually thick crust is found at the TAG and PUS stations in the Kyongsang Basin compared with the thinner (29-32 km) crust of the western part (INCN, SEO, SOS, TEJ, and KWA stations). The crust beneath Ullung Island (ULL station) is suboceanic, about 17 km thick and complex, with a high-velocity layer in the upper crust, and the amplitudes of incoming Ps waves from the western direction are relatively large compared with those from other directions.
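The crustal thicknesses above come from full receiver function inversion. As a simpler, related illustration only, the sketch below applies the standard single-layer delay-time relation H = t_Ps / (sqrt(Vs⁻² − p²) − sqrt(Vp⁻² − p²)), which converts a Moho Ps−P delay into depth; the velocities, delay time, and ray parameter used here are assumptions, not values from the paper.

```python
import math

def moho_depth_from_ps_delay(t_ps, vp=6.3, vs=3.6, p=0.06):
    """Crustal thickness (km) from the Moho Ps-P delay of a receiver function.

    t_ps : delay of the Moho-converted Ps phase after direct P (s)
    vp   : assumed average crustal P velocity (km/s)
    vs   : assumed average crustal S velocity (km/s)
    p    : ray parameter of the incident teleseismic P wave (s/km)
    """
    eta_s = math.sqrt(1.0 / vs**2 - p**2)   # vertical S slowness
    eta_p = math.sqrt(1.0 / vp**2 - p**2)   # vertical P slowness
    return t_ps / (eta_s - eta_p)

# Example: a ~3.5 s Ps delay maps to roughly 28 km of crust with these assumed values.
print(round(moho_depth_from_ps_delay(3.5), 1))
```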


Evaluation on Removal Efficiency of Methylene Blue Using Nano-ZnO/Laponite/PVA Photocatalyzed Adsorption Ball (Nano-ZnO/Laponite/PVA 광촉매 흡착볼의 메틸렌블루 제거효율 평가)

  • Oh, Ju Hyun;Ahn, Hosang;Jang, Dae Gyu;Ahn, Chang Hyuk;Lee, Saeromi;Joo, Jin Chul
    • Journal of Korean Society of Environmental Engineers / v.35 no.9 / pp.636-642 / 2013
  • In order to overcome the drawbacks (i.e., filtration and recovery) of conventional powder-type photocatalysts, nano-ZnO/Laponite/PVA (ZLP) photocatalyzed adsorption balls were developed by in-situ mixing of nanoscale ZnO as a photocatalyst and Laponite as both adsorbent and supporting medium in deionized water, followed by poly(vinyl alcohol) polymerization with boric acid. The optimum mixing ratio of nano-ZnO:Laponite:PVA:deionized water was found to be 3:1:1:16 (by weight), and the mesh and film produced by PVA polymerization with boric acid appear to inhibit both swelling of the Laponite and detachment of nanoscale ZnO from the ZLP balls. Drying the ZLP balls in a microwave (600 W) produced balls with a stable structure in water, and pores of various sizes (55~500 μm) were observed in the SEM and TEM results. In the initial period of the reaction (i.e., 40 min), adsorption through ionic interaction between methylene blue and Laponite was the main removal mechanism; after the available Laponite adsorption sites were saturated with methylene blue, photocatalytic degradation of methylene blue occurred. The effective removal of methylene blue was thus attributed to both adsorption and photocatalytic degradation. Based on these results, the synthesized ZLP photocatalyzed adsorption balls are expected to remove recalcitrant organic compounds effectively through both adsorption and photocatalytic degradation, while reducing the risk to environmental receptors caused by detachment of nanoscale photocatalysts.

A Survey of Ecological Knowledge and Information for Climate Change Adaptation in Korea - Focused on the Risk Assessment and Adaptation Strategy to Climate Change - (기후변화 적응정책 관련 생태계 지식정보 수요와 활용도 증진 방향 - 생태계 기후변화 리스크 평가 및 적응대책을 중심으로 -)

  • Yeo, Inae;Hong, Seungbum
    • Journal of Environmental Impact Assessment / v.29 no.1 / pp.26-36 / 2020
  • This study investigated the current research and knowledge base on climate change adaptation in the ecosystem sector and analyzed the status of the basic ecosystem information that serves as the evidence base for adaptation, in order to derive suggestions for the future development of biodiversity knowledge and information. To this end, a questionnaire survey titled "the ecological knowledge-base and information needs for climate change adaptation" was conducted with researchers engaged in adaptation studies for biodiversity at ecosystem-related research institutes, including national agencies and agencies affiliated with the 17 regional local governments in Korea. The survey covered the current status of ecological information used to support climate change adaptation strategy, future needs for adaptation knowledge and ecological information, and ways to activate the use of ecological information. The majority of respondents (90.7%) replied that ecological information is highly relevant to research on climate change adaptation, but only about half (53.2%) agreed that the current information is actually usable for adaptation research. In particular, the most urgent priority identified was strengthening the knowledge base and related information on ecosystem change driven by climate change (productivity, community structure, food chain, phenology, range distribution, and number of individuals), together with overall improvement of information content and quality. The respondents emphasized the need for field surveys of local ecosystems and the construction of ecosystem inventories, improved monitoring designs for climate change in ecosystems, and case studies of regional ecosystem change, supported by guidance or guidelines for monitoring ecosystem change, in order to enhance the quality of adaptation research and the information it produces. To activate the use of ecological information, national and local adaptation networks should operate on an integrated ecological platform that supports the exchange of knowledge and information and expands the temporal and spatial coverage of ecosystem types.

Urban archaeological investigations using surface 3D Ground Penetrating Radar and Electrical Resistivity Tomography methods (3차원 지표레이다와 전기비저항 탐사를 이용한 도심지 유적 조사)

  • Papadopoulos, Nikos;Sarris, Apostolos;Yi, Myeong-Jong;Kim, Jung-Ho
    • Geophysics and Geophysical Exploration / v.12 no.1 / pp.56-68 / 2009
  • Ongoing and extensive urbanisation, which is frequently accompanied by careless construction works, may threaten important archaeological structures that are still buried in urban areas. The Ground Penetrating Radar (GPR) and Electrical Resistivity Tomography (ERT) methods are among the most promising alternatives for resolving buried archaeological structures in urban territories. In this work, three case studies are presented, each of which involves an integrated geophysical survey employing the surface three-dimensional (3D) ERT and GPR techniques in order to characterise the investigated areas archaeologically. The test field sites are located at the historical centres of two of the most populated cities of the island of Crete, in Greece. The ERT and GPR data were collected along a dense network of parallel profiles. The subsurface resistivity structure was reconstructed by processing the apparent resistivity data with a 3D inversion algorithm. The GPR sections were processed in a systematic way, applying specific filters to the data in order to enhance their information content. Finally, horizontal depth slices representing the 3D variation of the physical properties were created. The GPR and ERT images contributed significantly to reconstructing the complex subsurface properties of these urban areas. Strong GPR reflections and high-resistivity anomalies were correlated with possible archaeological structures. Subsequent excavations at specific places at both sites verified the geophysical results. These case studies demonstrate the applicability of the ERT and GPR techniques during the design and construction stages of urban infrastructure works, indicating areas of archaeological significance and guiding archaeological excavations before construction work.
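As background to the ERT processing mentioned above, a minimal sketch of how apparent resistivity is computed from a field reading is shown below, here for a Wenner array (ρa = 2πa·ΔV/I). The actual electrode arrays and the 3D inversion code used in the case studies are not specified in the abstract, so this is illustrative only.

```python
import math

def wenner_apparent_resistivity(a_m: float, delta_v: float, current: float) -> float:
    """Apparent resistivity (ohm*m) for a Wenner array with electrode spacing a_m.

    a_m     : electrode spacing (m)
    delta_v : potential difference measured between the inner electrodes (V)
    current : current injected between the outer electrodes (A)
    """
    return 2.0 * math.pi * a_m * delta_v / current

# Example reading: 1 m spacing, 25 mV measured at 10 mA -> about 15.7 ohm*m.
print(round(wenner_apparent_resistivity(1.0, 0.025, 0.010), 1))
```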

Design of Deep Learning-based Tourism Recommendation System Based on Perceived Value and Behavior in Intelligent Cloud Environment (지능형 클라우드 환경에서 지각된 가치 및 행동의도를 적용한 딥러닝 기반의 관광추천시스템 설계)

  • Moon, Seok-Jae;Yoo, Kyoung-Mi
    • Journal of the Korean Applied Science and Technology / v.37 no.3 / pp.473-483 / 2020
  • This paper proposes a tourism recommendation system for an intelligent cloud environment that uses information on tourist behavior combined with perceived value. The proposed system applies tourist information, together with empirical analysis of how tourists' perceived value is reflected in their behavioral intention, to a recommendation system built with wide and deep learning technology. Various collectable tourist information was gathered and analyzed, along with the values tourists commonly perceive and their behavioral intentions, and the associations among tourism information, perceived value, and behavior were analyzed and mapped onto the tourism platforms in the various fields where they are used, providing empirical information. In addition, a tourism recommendation system using wide and deep learning, which achieves both memorization and generalization in a single model by jointly training linear model components and neural network components, is presented together with its pipeline operation. When the wide and deep learning model was applied, the app subscription rate on the visit page of the tourism-related app store increased by 3.9% compared with the control group; another 1% group was served a model that used the same variables but only the deep side of the neural network structure, and the wide-and-deep model showed a 1% higher subscription rate than this deep-only model. In addition, the area under the receiver operating characteristic curve (AUC) was measured on the dataset: the offline AUC of the wide-and-deep model was only somewhat higher, yet its effect on online traffic was greater.
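As an illustration of the wide-and-deep idea described above (a linear "wide" part for memorization trained jointly with a neural "deep" part for generalization), here is a minimal Keras sketch; the feature sizes, layer widths, and input names are assumptions for illustration, not details from the paper.

```python
import tensorflow as tf

# Hypothetical feature dimensions; the paper's actual tourist features are not specified.
n_wide_features = 100   # sparse cross-product / one-hot "memorization" features
n_deep_features = 32    # dense numeric or embedded "generalization" features

wide_in = tf.keras.Input(shape=(n_wide_features,), name="wide_input")
deep_in = tf.keras.Input(shape=(n_deep_features,), name="deep_input")

# Deep side: a small feed-forward tower for generalization.
x = tf.keras.layers.Dense(64, activation="relu")(deep_in)
x = tf.keras.layers.Dense(32, activation="relu")(x)

# The wide side stays linear; both sides feed one logistic output and are trained jointly.
combined = tf.keras.layers.concatenate([wide_in, x])
out = tf.keras.layers.Dense(1, activation="sigmoid")(combined)

model = tf.keras.Model(inputs=[wide_in, deep_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
```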

Parameters Estimation of Clark Model based on Width Function (폭 함수를 기반으로 한 Clark 모형의 매개변수 추정)

  • Park, Sang Hyun;Kim, Joo-Cheol;Jung, Kwansue
    • Journal of Korea Water Resources Association / v.46 no.6 / pp.597-611 / 2013
  • This paper presents a methodology for constructing the time-area curve from the width function and thereby rationally estimating the time of concentration and storage coefficient of the Clark model within the framework of the method of moments. To this end, the time-area curve is built by rescaling the grid-based width function under the assumption of pure translation, and analytical expressions for the two parameters of the Clark model are proposed in terms of the method of moments. The methodology based on these analytical expressions is compared with (1) the traditional optimization method for the Clark model provided by HEC-1, in which the symmetric time-area curve is used and the difference between observed and simulated hydrographs is minimized, and (2) the same optimization method with the time-area curve replaced by the rescaled width function, in terms of the peak discharge and time to peak of the simulated direct runoff hydrographs and their coefficient of efficiency relative to the observed ones. The following points are worth emphasizing: (1) The HEC-1 optimization method with the rescaled width function yields parameters that reflect the observed runoff hydrograph well with respect to the peak discharge coordinates and coefficient of efficiency. (2) For better application of the Clark model, it is recommended to use a time-area curve that can account for the irregular drainage structure of a river basin, such as the rescaled width function, instead of the symmetric time-area curve of HEC-1. (3) The moment-based methodology with the rescaled width function developed in this study also gives satisfactory simulation results in terms of peak discharge coordinates and coefficient of efficiency; in particular, the mean velocities estimated from this method, which characterize the translation effect of the time-area curve, are consistent with the field survey results for the points of interest in this study. (4) The moment-based methodology can be an effective tool for quantitative assessment of the translation and storage effects of a natural river basin. (5) The runoff hydrographs simulated by the moment-based methodology tend to be more right-skewed than the observed ones and have lower peaks; this is inferred to be due to the use of only one mean velocity in the parameter estimation. Further research is required to incorporate the hydrodynamic heterogeneity between hillslope and channel network into the construction of the time-area curve.
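For readers unfamiliar with the Clark model referenced above, the sketch below routes a hypothetical rescaled-width-function time-area curve through a linear reservoir with storage coefficient K, and checks the moment property that the lag of the resulting IUH is approximately the lag of the time-area curve plus K. The time-area ordinates, K, and the HEC-1/HEC-HMS-style routing coefficient are illustrative assumptions, not the paper's analytical expressions or data.

```python
import numpy as np

def clark_iuh(time_area, K, dt=1.0):
    """Route a time-area curve through a linear reservoir (Clark model).

    time_area : ordinates of the time-area histogram (fraction of basin area
                contributing per time step), e.g. rescaled from a width function
    K         : storage coefficient (same time units as dt)
    dt        : computational time step
    """
    ca = dt / (K + 0.5 * dt)          # linear-reservoir routing coefficient
    cb = 1.0 - ca
    n_out = len(time_area) + int(10 * K / dt)   # allow the recession tail to decay
    out = np.zeros(n_out)
    inflow = np.zeros_like(out)
    inflow[:len(time_area)] = time_area
    for i in range(1, n_out):
        out[i] = ca * inflow[i] + cb * out[i - 1]
    return out

# Hypothetical translation-only time-area curve (rescaled width function).
ta = np.array([0.05, 0.15, 0.30, 0.25, 0.15, 0.10])
u = clark_iuh(ta, K=3.0, dt=1.0)

# Method-of-moments check: the first moment (lag) of the Clark IUH is
# approximately the first moment of the time-area curve plus K
# (up to a discretization offset of about half a time step).
t = np.arange(len(u))
lag_iuh = (t * u).sum() / u.sum()
lag_ta = (np.arange(len(ta)) * ta).sum() / ta.sum()
print(lag_ta, lag_iuh)
```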

A MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun;Jeong, Kyo Sung;Kim, Soo Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.39-58 / 2014
  • As the importance of big data and related technologies continues to grow in industry, visualizing the results of big data processing and analysis has become a highlighted topic. Visualization delivers effectiveness and clarity in understanding analysis results, and it also serves as the GUI (Graphical User Interface) that supports communication between people and analysis systems. To make development and maintenance easier, these GUI parts should be loosely coupled from the parts that process and analyze data, and to implement such a loosely coupled architecture it is necessary to adopt design patterns such as MVC (Model-View-Controller), which is designed to minimize coupling between the UI part and the data processing part. Big data can be classified as structured or unstructured, and visualizing structured data is relatively easy compared with unstructured data. Nevertheless, as the use and analysis of unstructured data has spread, visualization systems are usually developed separately for each project to overcome the limitations of traditional visualization systems designed for structured data. Furthermore, for text data, which makes up a large part of unstructured data, visualization is even more difficult. This results from the complexity of the technologies for analyzing text data, such as linguistic analysis, text mining, and social network analysis, and from the fact that these technologies are not standardized. This situation makes it difficult to reuse the visualization system of one project in other projects; we assume the reason is a lack of commonality in visualization system design that would allow it to be extended to other systems. In our research, we suggest a common information model for visualizing text data and propose a comprehensive, reusable framework, TexVizu, for visualizing text data. First, we survey representative research in the text visualization area, identify common elements for text visualization and common patterns among its various cases, and review and analyze these elements and patterns from three viewpoints: structural, interactive, and semantic. We then design an integrated model of text data that represents the elements for visualization. The structural viewpoint identifies structural elements of text documents, such as title, author, and body; the interactive viewpoint identifies the types of relations and interactions between text documents, such as post, comment, and reply; and the semantic viewpoint identifies semantic elements extracted by linguistic analysis of the text, represented as tags classifying entity types such as people, place or location, time, and event. We then extract and choose common requirements for visualizing text data, categorized into four types: structure information, content information, relation information, and trend information. Each type of requirement comprises the required visualization techniques, data, and goal (what to know). These requirements are the key requirements for designing a framework that keeps a visualization system loosely coupled from the data processing or analysis system.
Finally, we design a common text visualization framework, TexVizu, which is reusable and extensible across visualization projects by collaborating with various Text Data Loader and Analytical Text Data Visualizer components via common interfaces such as ITextDataLoader and IATDProvider. TexVizu itself comprises the Analytical Text Data Model, Analytical Text Data Storage, and Analytical Text Data Controller. In this framework, the external components are specified by the interfaces required to collaborate with the framework. As an experiment, we also apply this framework to two text visualization systems: a social opinion mining system and an online news analysis system.
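To make the framework's decoupling concrete, here is a minimal sketch of the kind of interfaces and MVC components the abstract names (ITextDataLoader, IATDProvider, and the Analytical Text Data Storage/Controller); the method names and signatures are assumptions for illustration, not the framework's actual API.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class ITextDataLoader(ABC):
    """External-component interface: supplies raw text documents to the framework.
    (Interface name is from the abstract; the method signature is assumed.)"""

    @abstractmethod
    def load(self, source: str) -> List[Dict[str, Any]]:
        ...


class IATDProvider(ABC):
    """External-component interface: provides Analytical Text Data (ATD),
    i.e. analysis results such as structural, relational, and semantic elements."""

    @abstractmethod
    def analyze(self, documents: List[Dict[str, Any]]) -> Dict[str, Any]:
        ...


class AnalyticalTextDataStorage:
    """Holds Analytical Text Data Model instances for the visualizer (view) side."""

    def __init__(self) -> None:
        self._models: List[Dict[str, Any]] = []

    def put(self, model: Dict[str, Any]) -> None:
        self._models.append(model)

    def all(self) -> List[Dict[str, Any]]:
        return list(self._models)


class AnalyticalTextDataController:
    """MVC controller: wires loaders and providers to the storage, keeping the
    visualization layer decoupled from the loading and analysis components."""

    def __init__(self, loader: ITextDataLoader, provider: IATDProvider,
                 storage: AnalyticalTextDataStorage) -> None:
        self._loader, self._provider, self._storage = loader, provider, storage

    def refresh(self, source: str) -> None:
        docs = self._loader.load(source)
        self._storage.put(self._provider.analyze(docs))
```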