• Title/Summary/Keyword: TD method


Acute Toxicity Assessment of New Algicides of Thiazolidinediones Derivatives, TD53 and TD49, Using Ulva pertusa Kjellman

  • Yim, Eun-Chae;Park, In-Taek;Han, Hyo-Kyung;Kim, Si-Wouk;Cho, Hoon;Kim, Seong-Jun
    • Environmental Analysis Health and Toxicology
    • /
    • v.25 no.4
    • /
    • pp.273-278
    • /
    • 2010
  • Objectives: This study aimed to assess the acute toxicity to the marine ecosystem of two new algicides, the thiazolidinedione derivatives TD53 and TD49, which were synthesized to selectively control red tide. Methods: The assessment employed a new method using Ulva pertusa Kjellman, which has recently been accepted as an ISO standard method. Toxicity was assessed by calculating the EC50 (effective concentration, 50%), NOEC (no observed effect concentration), and PNEC (predicted no effect concentration) from acute toxicity data obtained in exposure experiments. The EC50 values of TD49 and TD53 were determined by 96-hour exposure, together with Solutol as a dispersing agent for TD49 and DMSO as a solvent for TD53. Results: The EC50 of TD53 was 1.65 μM; from this result, the NOEC and PNEC were calculated to be 0.63 μM and 1.65 nM, respectively. DMSO in the range of 0~10 μM, the same solvent concentration used in testing TD53, showed no toxic effect. The EC50 of TD49 was 0.18 μM and that of Solutol was 1.70 μM. The NOEC and PNEC of TD49 were 0.08 μM and 0.18 nM, respectively, and those of Solutol were 1.25 μM and 1.25 nM, respectively. Conclusions: Based on the NOEC and PNEC values of TD53 and TD49, TD49 was about nine times more toxic than TD53. DMSO showed no toxicity to Ulva pertusa Kjellman, whereas Solutol showed considerable toxicity by itself.
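The EC50/NOEC/PNEC workflow described in this abstract can be sketched as follows. This is a minimal illustration with hypothetical dose-response numbers (not the paper's raw data); the log-linear interpolation and the assessment factor of 1000 (which the paper's EC50-to-PNEC ratios imply) are assumptions of the sketch.

```python
import math

def ec50_loglinear(concs_uM, inhibition_pct):
    """Estimate EC50 by log-linear interpolation between the two test
    concentrations that bracket 50% inhibition."""
    pairs = list(zip(concs_uM, inhibition_pct))
    for (c_lo, e_lo), (c_hi, e_hi) in zip(pairs, pairs[1:]):
        if e_lo <= 50.0 <= e_hi:
            # interpolate on log10(concentration)
            frac = (50.0 - e_lo) / (e_hi - e_lo)
            log_ec50 = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_ec50
    raise ValueError("50% effect not bracketed by the tested concentrations")

# Hypothetical dose-response data (not from the paper):
concs = [0.63, 1.25, 2.5, 5.0]      # exposure concentrations, uM
effect = [10.0, 35.0, 70.0, 95.0]   # % growth inhibition of U. pertusa

ec50 = ec50_loglinear(concs, effect)
noec = 0.63                  # highest tested conc. with no observed effect
pnec = ec50 / 1000.0         # assessment factor of 1000 on the acute EC50
```

With these toy numbers the interpolated EC50 falls between 1.25 and 2.5 μM, and the PNEC is three orders of magnitude below it, matching the μM-to-nM pattern in the reported results.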

A Reinforcement Learning Method using TD-Error in Ant Colony System (개미 집단 시스템에서 TD-오류를 이용한 강화학습 기법)

  • Lee, Seung-Gwan;Chung, Tae-Choong
    • The KIPS Transactions:PartB
    • /
    • v.11B no.1
    • /
    • pp.77-82
    • /
    • 2004
  • In reinforcement learning, an agent receives a reward for the action it selects when it makes a state transition from the present state; attributing that reward to earlier decisions is the temporal credit-assignment problem, an important subject in reinforcement learning. In this paper, we examine Ant-Q, a learning method proposed as a new meta-heuristic for hard combinatorial optimization problems such as the Traveling Salesman Problem (TSP); it is a population-based approach that uses positive feedback as well as greedy search. We then propose Ant-TD, a reinforcement learning method that applies a diversification strategy to state transitions and uses the TD error in its updates. Experiments show that the proposed method finds an optimal solution faster than other reinforcement learning methods such as ACS and Ant-Q.
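The TD-error backup that Ant-TD-style methods apply to their edge values can be sketched in tabular form. This is a generic illustration, not the paper's exact update rule; the toy 3-city Q-table, learning rate, and discount factor are assumptions.

```python
def td_error_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One TD-style backup on an edge value: the TD error is the gap between
    the discounted best value of the next state and the current estimate."""
    td_error = reward + gamma * max(Q[s_next].values()) - Q[s][a]
    Q[s][a] += alpha * td_error
    return td_error

# Toy 3-city tour fragment (hypothetical, not the paper's TSP instances):
Q = {0: {1: 0.5, 2: 0.5}, 1: {0: 0.5, 2: 0.5}, 2: {0: 0.5, 1: 0.5}}
delta = td_error_update(Q, s=0, a=1, reward=1.0, s_next=1)
# delta = 1.0 + 0.9 * 0.5 - 0.5 = 0.95, so Q[0][1] rises to 0.595
```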

Formulation of a Novel Polymeric Tablet for the Controlled Release of Tinidazole (티니다졸의 제어방출을 위한 새로운 합성고분자성 정제의 조성)

  • Yoon, Dong-Jin;Shin, Young-Hee;Kim, Dae-Duk;Lee, Chi-Ho
    • Journal of Pharmaceutical Investigation
    • /
    • v.29 no.4
    • /
    • pp.349-353
    • /
    • 1999
  • A novel polymeric tablet of tinidazole (TD) was formulated to treat Helicobacter pylori and Giardia lamblia more efficiently, with reduced hepatotoxicity, by controlling the release of TD after oral administration. TD tablets containing various concentrations of xanthan gum (XG, a viscosity enhancer) and/or polycarbophil (PC, a mucoadhesive) were prepared by the wet granulation method. In vitro release of TD into pH 2.0 and pH 5.0 buffer solutions was observed at 37°C using a USP dissolution tester and a UV (313 nm) spectrophotometer. In vivo absorption of the TD tablets was investigated in rabbits by measuring the blood concentration of TD after oral administration using HPLC. Compared to a commercial TD tablet, in vitro release of TD in both pH 2.0 and pH 5.0 buffer solutions significantly decreased as the concentration of XG or PC in the tablet increased up to 30%. However, when XG and PC were added in combination, TD was completely released in pH 5.0 buffer solution within 8 hours, whereas its release in pH 2.0 buffer solution significantly decreased. TD in the commercial tablet was rapidly absorbed after oral administration in rabbits, while after oral administration of the polymeric tablets containing both XG and PC, the plasma concentration of TD decreased dramatically. Since oral absorption of TD decreased significantly with the addition of XG and PC while TD was completely released in pH 5.0 buffer solution, it was speculated that more TD was retained in the gastrointestinal tract. Thus, it was possible to control the release rate and gastrointestinal retention of TD after oral administration in rabbits by changing the content of XG and/or PC in the tablet.


Device Personalization Methods for Enhancing Packet Delay in Small-cells based Internet of Things (스몰셀 기반 사물인터넷에서 패킷 지연시간 향상을 위한 디바이스 개인화 방법)

  • Lee, ByungBog;Han, Wang Seok;Kim, Se-Jin
    • Journal of Internet Computing and Services
    • /
    • v.17 no.6
    • /
    • pp.25-31
    • /
    • 2016
  • Recently, with great improvements in wireless communication technology, new services have been created using smart sensors, e.g., machine-to-machine (M2M) communication and the Internet of Things (IoT). In this paper, we propose a novel IoT device (IoTD) personalization method that adopts Small-cell Access Points (SAPs) to let service users control IoTDs through their user equipment (UE), e.g., smartphones and tablet PCs. First, we introduce a system architecture consisting of the UE, IoTD, and SAP, and propose the IoTD personalization method with two procedures, i.e., an IoTD profile registration procedure and an IoTD control procedure. Finally, through simulations, we evaluate the performance of the proposed scheme and show that it outperforms the conventional scheme in terms of packet delay, packet loss probability, and normalized throughput.

A Localized Adaptive QoS Routing using TD(λ) method (TD(λ) 기법을 사용한 지역적이며 적응적인 QoS 라우팅 기법)

  • Han Jeong-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.5B
    • /
    • pp.304-309
    • /
    • 2005
  • In this paper, we propose a localized adaptive QoS routing scheme using the TD(λ) method and evaluate the performance of various exploration methods for path selection. In particular, extensive simulations show that the proposed routing algorithm with an exploration-bonus method significantly reduces the overall blocking probability compared with other path selection (exploration) methods, because the proposed exploration method adapts better to the network environment when selecting paths.
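An exploration-bonus path selection rule of the kind this abstract refers to can be sketched as follows. The scoring formula, bonus weight, and path names are illustrative assumptions, not the paper's exact definitions; the common idea is that a path's score grows the longer it has gone untried.

```python
def select_path(values, last_tried, t, beta=0.5):
    """Pick the candidate path maximizing its estimated value plus an
    exploration bonus that grows with the time since it was last tried."""
    def score(p):
        return values[p] + beta * (t - last_tried[p]) ** 0.5
    return max(values, key=score)

# Hypothetical per-path value estimates and last-use times:
values = {"pathA": 0.8, "pathB": 0.7, "pathC": 0.2}
last = {"pathA": 9, "pathB": 1, "pathC": 8}
choice = select_path(values, last, t=10)
# pathB wins: its slightly lower value is outweighed by its staleness bonus
```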

Human Adaptive Device Development based on TD method for Smart Home

  • Park, Chang-Hyun;Sim, Kwee-Bo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.1072-1075
    • /
    • 2005
  • This paper applies the TD method to human-adaptive devices for a smart home with a context awareness (or recognition) technique. For a smart home, the key problem is how the appliances (or devices) can adapt to the user. Since many people share the home appliances, managing them automatically is difficult, and making users satisfied with automatically managed devices is more difficult still. Several methods could be used for this, e.g., a fuzzy controller, a neural network, or reinforcement learning; in this dynamic environment, reinforcement learning is the appropriate choice. Among reinforcement learning methods, we select temporal difference (TD) learning as the core algorithm for adapting the devices to the user. Since this paper assumes a smart home environment, we briefly explain context awareness, describe the TD method, and implement an example in VC++. We then discuss how the devices can be applied to this problem.


Goal-Directed Reinforcement Learning System (목표지향적 강화학습 시스템)

  • Lee, Chang-Hoon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.5
    • /
    • pp.265-270
    • /
    • 2010
  • Reinforcement learning learns through trial-and-error interaction with a dynamic environment. In such environments, reinforcement learning methods like TD-learning and TD(λ)-learning learn faster than conventional stochastic learning methods. However, because many of the proposed reinforcement learning algorithms grant the reinforcement value only when the learning agent reaches its goal state, most of them converge to the optimal solution too slowly. In this paper, we present the GDRLS algorithm for finding the shortest path faster in a maze environment. GDRLS selects the candidate states that can guide the shortest path in the maze and learns only those candidate states. Experiments show that GDRLS finds the shortest path faster than TD-learning and TD(λ)-learning in a maze environment.

A Verification of the Numerical Energy Conservation Property of the FD-TD(Finite Difference-Time Domain) Method by Using a Plane Wave Analysis (평면파 해석을 이용한 시간영역-유한차분법의 수치적 에너지 보존성질의 증명)

  • Ihn-Seok Kim
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.7 no.4
    • /
    • pp.320-327
    • /
    • 1996
  • This paper theoretically verifies, by means of a plane wave analysis, the lossy or amplifying property of the Finite Difference-Time Domain (FD-TD) method based on the leap-frog scheme. The basic algorithm of the FD-TD method is introduced to help the reader follow the analysis procedure. Since the analysis is formulated using Von Neumann's approach, the stability inequality is also obtained as a byproduct.
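The leap-frog FD-TD update the abstract analyzes can be sketched in one dimension. This is a textbook-style illustration in normalized units, not the paper's formulation; the grid size, Courant number S = cΔt/Δx, and impulse excitation are assumptions. The Von Neumann stability bound corresponds to S ≤ 1: below it the scheme neither dissipates nor amplifies the fields.

```python
def fdtd_step(E, H, S=0.5):
    """One leap-frog step of the 1-D FD-TD scheme (normalized units).
    E has len(H) + 1 samples; E[0] and E[-1] stay zero (PEC walls)."""
    for k in range(1, len(E) - 1):
        E[k] += S * (H[k] - H[k - 1])
    for k in range(len(H)):
        H[k] += S * (E[k + 1] - E[k])

N = 200
E = [0.0] * (N + 1)
H = [0.0] * N
E[N // 2] = 1.0              # unit impulse excitation at the center
for _ in range(50):
    fdtd_step(E, H)
energy = sum(e * e for e in E) + sum(h * h for h in H)
# With S <= 1 the fields stay bounded; with S > 1 they grow without limit
```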


Reinforcement Learning using Propagation of Goal-State-Value (목표상태 값 전파를 이용한 강화 학습)

  • Kim, Byeong-Cheon;Yun, Byeong-Ju
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.5
    • /
    • pp.1303-1311
    • /
    • 1999
  • To learn in dynamic environments, reinforcement learning algorithms such as Q-learning, TD(0)-learning, and TD(λ)-learning have been proposed. However, most of them suffer from very slow learning because the reinforcement value is given only when the agent reaches its goal state. In this paper, we propose a reinforcement learning method that approaches the goal state quickly in maze environments. The proposed method separates learning into global learning and local learning. Global learning uses the replacing eligibility trace method to search for the goal state; local learning propagates the goal state's value, found through global learning, to neighboring states and then searches for the goal state among them. Experiments show that the proposed method finds an optimal solution faster than other reinforcement learning methods such as Q-learning, TD(0)-learning, and TD(λ)-learning.
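The replacing-eligibility-trace update mentioned in this abstract can be sketched in tabular form. This is a generic TD(λ) backup with replacing traces, not the paper's full global/local algorithm; the tiny 3-state chain, learning rate, discount, and λ are assumptions.

```python
def td_lambda_step(V, e, s, reward, s_next, alpha=0.1, gamma=0.9, lam=0.8):
    """One TD(lambda) backup with replacing traces: the visited state's trace
    is reset to 1 (not accumulated), then every state's value is nudged by
    the TD error in proportion to its trace, and all traces decay."""
    delta = reward + gamma * V[s_next] - V[s]
    e[s] = 1.0                       # replacing trace
    for st in V:
        V[st] += alpha * delta * e[st]
        e[st] *= gamma * lam         # decay every trace
    return delta

V = {0: 0.0, 1: 0.0, 2: 0.0}
e = {0: 0.0, 1: 0.0, 2: 0.0}
delta1 = td_lambda_step(V, e, s=0, reward=0.0, s_next=1)   # no error yet
delta2 = td_lambda_step(V, e, s=1, reward=1.0, s_next=2)   # reward arrives
# The decayed trace on state 0 lets the reward at state 1 update both states
```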


Genetic Parameters of Milk Yield and Milk Fat Percentage Test Day Records of Iranian Holstein Cows

  • Shadparvar, A.A.;Yazdanshenas, M.S.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.18 no.9
    • /
    • pp.1231-1236
    • /
    • 2005
  • Genetic parameters for first-lactation milk production based on test day (TD) records of 56,319 Iranian Holstein cows from 655 herds that first calved between 1991 and 2001 were estimated with the restricted maximum likelihood method under an animal model. The traits analyzed were milk yield and milk fat percentage. Heritabilities for TD records were highest in the second half of lactation, ranging from 0.11 to 0.19 for milk yield and from 0.038 to 0.094 for milk fat percentage. Estimates for lactation records of these traits were 0.24 and 0.26, respectively. Genetic correlations between individual TD records were high for consecutive TD records (>0.9) and decreased as the interval between tests increased. Estimates of genetic correlations of TD yield with the corresponding lactation yield were highest (0.78 to 0.86) for mid-lactation (TD3 to TD8). Phenotypic correlations were lower than the corresponding genetic correlations, but both followed the same pattern. For milk fat percentage, no clear pattern was found. The results of this study suggest that TD yields, especially in mid-lactation, may be used for genetic evaluation instead of 305-day yield.