• Title/Summary/Keyword: E-HowNet

Analysis on Forces Acting on the Contact Lens Fitted on the Cornea (콘택트 렌즈에 작용하는 힘의 해석)

  • Kim, Dae-Soo
    • Journal of Korean Ophthalmic Optics Society / v.7 no.2 / pp.1-11 / 2002
  • A mathematical model is proposed to analyze the forces acting on a hard contact lens fitted on the cornea. The model comprises nonlinear equations and a numerical solution program, based on formulations of the surface tension force arising from capillary action in the tear-film layer between the lens and the cornea. The model simulates how the adhesion between the lens and the cornea varies with the base curve and diameter of the lens. When a spherical lens is fitted on a spherical cornea, it rotates downward under its own weight until it reaches an equilibrium position along the cornea, where the counter (upward) moment caused by the net force between the upper and lower portions of the lens periphery balances the moment due to the lens weight. It is found that both the adhesion and the displacement of the lens along the cornea, at which the gravity of the lens balances the capillary-induced upward force, increase rapidly as the base curve of the lens increases, i.e., as the lens gets flatter, whereas increasing the lens diameter results in a smaller increase in rotation and adhesion. With the base curve and diameter of the lens held constant, an increase in the surface tension of the tear film increases the adhesion between the cornea and the lens, while the initial rotation of the lens is inversely proportional to the surface tension of the tear film.
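
A minimal numerical sketch of the moment balance described in this abstract: the lens rotates downward until the capillary-induced restoring moment equals the moment of its own weight. The moment expressions, lens weight, corneal radius, tear-film surface tension, and lens diameter below are illustrative assumptions, not the paper's actual formulation.

```python
# Toy moment balance for a lens on the cornea (illustrative values only).
import numpy as np
from scipy.optimize import brentq

W = 0.02e-3 * 9.81      # lens weight [N], assuming a 0.02 g lens
R_CORNEA = 7.8e-3       # corneal radius of curvature [m], assumed
GAMMA = 0.045           # tear-film surface tension [N/m], assumed
D_LENS = 9.0e-3         # lens diameter [m], assumed

def gravity_moment(theta):
    """Moment of the lens weight about the corneal center for a downward rotation theta [rad]."""
    return W * R_CORNEA * np.cos(theta)

def capillary_moment(theta, gamma=GAMMA):
    """Restoring moment from the net capillary force between the upper and
    lower lens periphery; assumed here to grow linearly with the rotation."""
    k = gamma * D_LENS                  # toy stiffness ~ surface tension x diameter
    return k * R_CORNEA * theta

# Equilibrium rotation: capillary restoring moment equals the gravity moment.
theta_eq = brentq(lambda t: capillary_moment(t) - gravity_moment(t), 1e-6, 0.5)
print(f"equilibrium rotation ~ {np.degrees(theta_eq):.1f} deg")
```

Because the toy restoring moment scales with the surface tension, the equilibrium rotation shrinks as the surface tension grows, which is consistent with the inverse relationship reported in the abstract.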

Characteristics of Greenup and Senescence for Evapotranspiration in Gyeongan Watershed Using Landsat Imagery (Landsat 인공위성 이미지를 이용한 경안천 유역 증발산의 생장기와 휴면기 분포 특성 분석)

  • Choi, Minha;Hwang, Kyotaek;Kim, Tae-Woong
    • KSCE Journal of Civil and Environmental Engineering Research / v.31 no.1B / pp.29-36 / 2011
  • Evapotranspiration (ET) from various surfaces needs to be understood because it is a crucial hydrological factor for grasping the interaction between the land surface and the atmosphere. A traditional way of estimating it, calculating it empirically from lysimeter and pan evaporation observations, has the limitation that the measurements represent only point values. Moreover, these measurements cannot fully describe ET because it is easily affected by the surrounding conditions. Thus, remote sensing technology was applied to estimate the spatial distribution of ET. In this study, we mapped the major components of the energy balance method (i.e., net radiation flux, soil heat flux, sensible heat flux, and latent heat flux) and ET using the Mapping Evapo-Transpiration with Internalized Calibration (METRIC) satellite-based image processing model. The model was run using Landsat imagery of the Gyeongan watershed in Korea acquired on Feb 1, 2003 and Sep 13, 2006, and basic statistical analyses were also conducted. The estimated mean daily ETs had errors of 22% and 11%, respectively, relative to pan evaporation data from the Suwon Weather Station. This result showed a distribution similar to those of previous studies and confirmed that the METRIC algorithm is highly reliable in the watershed. In addition, the ET distribution of each land use type was examined separately, which identified vegetation density as having a dominant impact on the distribution of ET. Seasonally, ET in the growing season was significantly higher than in the dormant season due to more active transpiration. The ET maps will be useful for analyzing how ET behaves with respect to surrounding conditions: land cover classification, vegetation density, elevation, and topography.
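
In METRIC-type energy balance models, the latent heat flux is obtained as the residual of the surface energy balance, $LE = R_n - G - H$. The sketch below shows that step and the conversion of the flux to an ET rate; the flux values and constants are illustrative assumptions, not results from the study.

```python
# Surface energy balance residual and conversion to an ET rate (illustrative values).
LAMBDA_V = 2.45e6          # latent heat of vaporization [J/kg], approximate
RHO_W = 1000.0             # density of water [kg/m^3]

def latent_heat_flux(rn, g, h):
    """LE = Rn - G - H, all fluxes in W/m^2."""
    return rn - g - h

def instantaneous_et(le):
    """Convert a latent heat flux [W/m^2] to an ET rate [mm/hour]."""
    return le / (LAMBDA_V * RHO_W) * 1000.0 * 3600.0

rn, g, h = 520.0, 60.0, 180.0          # assumed fluxes at the satellite overpass
le = latent_heat_flux(rn, g, h)        # 280 W/m^2
print(f"LE = {le:.0f} W/m^2, ET ~ {instantaneous_et(le):.2f} mm/h")
```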

Creation and labeling of multiple phonotopic maps using a hierarchical self-organizing classifier (계층적 자기조직화 분류기를 이용한 다수 음성자판의 생성과 레이블링)

  • Chung, Dam;Lee, Kee-Cheol;Byun, Young-Tai
    • The Journal of Korean Institute of Communications and Information Sciences / v.21 no.3 / pp.600-611 / 1996
  • Recently, neural network-based speech recognition has been studied to utilize the adaptivity and learnability of neural network models. However, conventional neural network models have difficulty with co-articulation processing and with detecting the boundaries of similar phonemes in Korean speech. Also, when a single phonotopic map is used, learning time may increase dramatically and inaccuracies may arise because a homogeneous learning and recognition method must be applied to heterogeneous data. Hence, in this paper, a neural-net typewriter has been designed using a hierarchical self-organizing classifier (HSOC), and the related algorithms are presented. During its learning stage, the HSOC distributes phoneme data over hierarchically structured multiple phonotopic maps, using Kohonen's self-organizing feature maps (SOFM). Presented and tested in this paper are the algorithms for deciding the number of maps, the map sizes, the selection of phonemes and their placement per map, and an appropriate learning and preprocessing method per map. If the maps were divided according to a priori linguistic knowledge, we would face difficulties in acquiring that linguistic knowledge and in applying it (e.g., processing extended phonemes). In contrast, our HSOC has the advantage that multiple phonotopic maps suited to the given input data are self-organized. The resulting three Korean phonotopic maps are optimally labeled, have their own optimal preprocessing schemes, and also conform to conventional linguistic knowledge.
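
The building block of the HSOC is Kohonen's self-organizing feature map. Below is a minimal sketch of one SOFM training pass; the map size, feature dimension, learning rate, and neighborhood width are illustrative assumptions rather than the paper's settings.

```python
# One Kohonen SOFM training pass over stand-in speech feature frames.
import numpy as np

rng = np.random.default_rng(0)
MAP_H, MAP_W, DIM = 10, 10, 12           # e.g. 12 cepstral coefficients per frame (assumed)
weights = rng.normal(size=(MAP_H, MAP_W, DIM))
grid = np.stack(np.meshgrid(np.arange(MAP_H), np.arange(MAP_W), indexing="ij"), axis=-1)

def train_step(x, weights, lr=0.1, sigma=2.0):
    """Move the best-matching unit and its map neighbors toward the input x."""
    dists = np.linalg.norm(weights - x, axis=-1)          # distance from x to every node
    bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit coordinates
    # Gaussian neighborhood around the best-matching unit on the map grid
    grid_d2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-grid_d2 / (2.0 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)
    return bmu

for frame in rng.normal(size=(1000, DIM)):                # stand-in for phoneme feature frames
    train_step(frame, weights)
```

In an HSOC, several such maps are organized hierarchically, and phoneme labels are assigned to map nodes after training.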

Projecting future hydrological and ecological droughts with the climate and land use scenarios over the Korean peninsula (기후 및 토지이용 변화 시나리오 기반 한반도 미래 수문학적 및 생태학적 가뭄 전망)

  • Lee, Jaehyeong;Kim, Yeonjoo;Chae, Yeora
    • Journal of Korea Water Resources Association / v.53 no.6 / pp.427-436 / 2020
  • It is uncertain how global climate change will influence future drought characteristics over the Korean peninsula. This study aims to project future droughts over the Korean peninsula using climate change and land use change scenarios with a land surface modeling system, i.e., the Weather Research and Forecasting Model Hydrological modeling system (WRF-Hydro). The Representative Concentration Pathways (RCPs) 2.6 and 8.5 are used as the future climate scenarios, and the Shared Socio-economic Pathways (SSPs), specifically SSP2, is adopted as the land use scenario. Using the Threshold Level Method (TLM), we identify future hydrological and ecological drought events from runoff and Net Primary Productivity (NPP), respectively, and assess drought durations and intensities under the different scenarios. Results show that, for hydrological drought, the duration is longer under RCP2.6-SSP2 for the near future (2031-2050) and under RCP8.5-SSP2 for the far future (2080-2099). On the other hand, RCP2.6-SSP2 for the far future and RCP8.5-SSP2 for the near future show longer durations for ecological drought. In addition, the drought intensities of both hydrological and ecological droughts show characteristics different from those of the drought durations. The intensity of the hydrological droughts was greatly affected by the threshold level, and RCP2.6-SSP2 for the far future shows the most severe intensity. However, for ecological drought, the difference in intensity among threshold levels is not significant, and RCP2.6-SSP2 for the near future shows the most severe intensity. This study suggests possible future drought characteristics in the Korean peninsula under combined climate and land use changes, which will help the community understand and manage future drought risks.
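
A minimal sketch of the Threshold Level Method applied to a runoff (or NPP) series: a drought event lasts while the variable stays below a threshold, its duration is the number of time steps below the threshold, and its intensity is taken here as the mean deficit. The synthetic series and the 30th-percentile threshold are assumptions for illustration only.

```python
# Threshold Level Method: extract drought events from a time series.
import numpy as np

def tlm_events(series, threshold):
    """Return (start index, duration, mean deficit) for each run below the threshold."""
    below = series < threshold
    events, i = [], 0
    while i < len(series):
        if below[i]:
            j = i
            while j < len(series) and below[j]:
                j += 1
            deficit = threshold - series[i:j]
            events.append((i, j - i, float(deficit.mean())))
            i = j
        else:
            i += 1
    return events

rng = np.random.default_rng(1)
runoff = rng.gamma(shape=2.0, scale=5.0, size=240)   # stand-in monthly runoff series
q30 = np.percentile(runoff, 30)                      # assumed 30th-percentile threshold
for start, duration, intensity in tlm_events(runoff, q30):
    print(f"month {start}: duration {duration}, mean deficit {intensity:.2f}")
```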

Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.159-172 / 2010
  • The recommender system is one of the possible solutions for assisting customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of different applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms that combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, we do not know much about how and why CF works. Furthermore, the relative performance of CF algorithms is known to be domain- and data-dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important and needed. In this study, we propose an efficient approach to predicting the performance of CF. Social Network Analysis (SNA) and an Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in the data for CF recommendations. An ANN model is developed through an analysis of network topology, including network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how dense the social network is beyond what is barely needed to keep the social group even indirectly connected to one another. We use these social network measures as input variables of the ANN model, and the recommendation accuracy measured by the F1-measure as the output variable. To evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, was used. A total of 396 experimental samples were gathered, and we used 40%, 40%, and 20% of them for training, testing, and validation, respectively. Five-fold cross-validation was also conducted to enhance the reliability of our experiments. The input variable measuring process consists of the following three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model has an estimated accuracy of 92.61% and an RMSE of 0.0049. Thus, our prediction model helps decide whether CF is useful for a given application with certain data characteristics.
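
A minimal sketch of deriving network-topology input features of the kind described above from a customer co-purchase graph, using networkx. The feature definitions follow common SNA usage and the toy graph is an assumption; this is not the paper's NetMiner/UCINET workflow.

```python
# Compute a few SNA features from a customer similarity/co-purchase graph.
import networkx as nx

def sna_features(G: nx.Graph) -> dict:
    n = G.number_of_nodes()
    non_isolated = n - nx.number_of_isolates(G)
    degrees = [d for _, d in G.degree()]
    max_d = max(degrees) if degrees else 0
    # Freeman degree centralization: how concentrated links are on a few nodes
    centralization = (
        sum(max_d - d for d in degrees) / ((n - 1) * (n - 2)) if n > 2 else 0.0
    )
    return {
        "density": nx.density(G),
        "inclusiveness": non_isolated / n if n else 0.0,
        "clustering": nx.average_clustering(G),
        "degree_centralization": centralization,
    }

# Toy network: an edge links customers with similar purchase histories.
G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (5, 6)])
G.add_node(7)                       # an isolated customer
print(sna_features(G))
```

Features such as these would serve as the ANN inputs, with the F1-measure of the CF recommendations as the target output.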

COATED PARTICLE FUEL FOR HIGH TEMPERATURE GAS COOLED REACTORS

  • Verfondern, Karl;Nabielek, Heinz;Kendall, James M.
    • Nuclear Engineering and Technology / v.39 no.5 / pp.603-616 / 2007
  • Roy Huddle, having invented the coated particle at Harwell in 1957, stated in the early 1970s that we now know everything about particles and coatings and should move on to other problems. This was on the occasion of the Dragon fuel performance information meeting in London in 1973: how wrong a genius can be! It took until 1978 before really good particles were made in Germany, then during the Japanese HTTR production in the 1990s, and finally the Chinese 2000-2001 campaign for HTR-10. Here, we present a review of the history and present status. Today, good fuel is measured by different standards from the seventies: where a $9{\times}10^{-4}$ initial free heavy metal fraction was typical for early AVR carbide fuel and $3{\times}10^{-4}$ was acceptable for oxide fuel in THTR, we now insist on values more than an order of magnitude below these. Half a percent of particle failure at end-of-irradiation, another ancient standard, is no longer acceptable today, even for the most severe accidents. While legislation and licensing have not changed, one of the reasons we insist on these improvements is the present preference for passive systems rather than the active controls of earlier times. With renewed HTGR interest, we report on the start of new or reactivated coated particle work in several parts of the world, considering the aspects of design, traditional and new materials, manufacturing technologies, quality control and quality assurance, irradiation and accident performance, modeling and performance predictions, and fuel cycle aspects and spent fuel treatment. In very general terms, the coated particle should be strong, reliable, retentive, and affordable. These properties have to be quantified and will eventually be optimized for a specific application system. Results obtained so far indicate that the same particle can be used for steam cycle applications with $700-750^{\circ}C$ helium coolant gas exit, for gas turbine applications at $850-900^{\circ}C$, and for process heat/hydrogen generation applications with $950^{\circ}C$ outlet temperatures. There is a clear set of standards for modern high-quality fuel in terms of low levels of heavy metal contamination, manufacture-induced particle defects during fuel body and fuel element making, irradiation- and accident-induced particle failures, and limits on fission product release from intact particles. While gas-cooled reactor design is still open-ended, with blocks for the prismatic design and spherical fuel elements for the pebble-bed design, there is near-worldwide agreement on high-quality fuel: a $500{\mu}m$ diameter $UO_2$ kernel of 10% enrichment is surrounded by a $100{\mu}m$ thick sacrificial buffer layer, followed by a dense inner pyrocarbon layer, a high-quality silicon carbide layer of $35{\mu}m$ thickness and theoretical density, and another outer pyrocarbon layer. Good performance has been demonstrated both under operational and under accident conditions, i.e., to 10% FIMA and a maximum of $1600^{\circ}C$ afterwards. And it is this wide-ranging demonstration experience that makes this particle superior. Recommendations are made for further work: 1. Generation of data for presently manufactured materials, e.g., SiC strength and strength distribution, PyC creep and shrinkage, and many more material data sets. 2. Renewed start of irradiation and accident testing of modern coated particle fuel. 3. Analysis of existing and newly created data with a view to demonstrating satisfactory performance at burnups beyond 10% FIMA and complete fission product retention even in accidents that go beyond $1600^{\circ}C$ for a short period of time. This work should proceed at both the national and international level.

The Fourth Industrial Revolution and Labor Relations : Labor-management Conflict Issues and Union Strategies in Western Advanced Countries (4차 산업혁명과 노사관계 : 노사갈등 이슈와 서구 노조들의 대응전략을 중심으로)

  • Lee, Byoung-Hoon
    • 한국사회정책 (Korea Social Policy Review) / v.25 no.2 / pp.429-446 / 2018
  • The $4^{th}$ Industrial Revolution, symbolizing the explosive innovation of digital technologies, is expected to have a great impact on labor relations and to produce many contested issues. The labor-management issues created by the $4^{th}$ Industrial Revolution are as follows: (1) employment restructuring, job re-allocation, and skill re-formation, driven by technological displacement, the resetting of the worker-machine relationship, and negotiation over labor intensity and autonomy; (2) the legislation of institutional protection for the digitally dependent self-employed, arising from the proliferation of platform-mediated labor, and the statutory recognition of their 'workerness'; (3) an unemployment safety net, income guarantees, and skill formation assistance for the precarious workforce; (4) the protection of worker privacy from workplace surveillance; and (5) protecting the labor rights of the digitally dependent self-employed and precarious workers and guaranteeing their unionization and collective bargaining. Comparing how labor unions in Western countries have responded to the $4^{th}$ Industrial Revolution, German unions have shown a strategic approach of policy formation toward digital technological innovation by effectively building and utilizing diverse channels of social dialogue and collective bargaining, while those in the US and UK have adopted the traditional approach of organizing and protesting in attempting to protect the interests of platform-mediated workers (i.e., Uber drivers). In light of the best practice demonstrated by German unions, it is necessary to build a process of productive policy consultation among the three parties - the government, employers, and labor unions - at multiple levels (i.e., workplace, sectoral, and national levels), in order to prevent destructive damage as well as labor-management confrontation caused by digital technological innovation. In such policy consultation processes, moreover, an inclusive and integrated approach is required to tackle the diverse problems arising from the $4^{th}$ Industrial Revolution in a holistic manner.