• Title/Summary/Keyword: two-phase approach


Bright Light Therapy in the Morning or at Mid-Day in the Treatment of Non-Seasonal Bipolar Depressive Episodes (LuBi): Study Protocol for a Dose Research Phase I / II Trial

  • Geoffroy, Pierre Alexis;El Abbassi, El Mountacer Billah;Maruani, Julia;Etain, Bruno;Lejoyeux, Michel;Amad, Ali;Courtet, Philippe;Dubertret, Caroline;Gorwood, Philip;Vaiva, Guillaume;Bellivier, Frank;Chevret, Sylvie
    • Psychiatry investigation
    • /
    • v.15 no.12
    • /
    • pp.1188-1202
    • /
    • 2018
  • Objective This study protocol aims to determine, using a rigorous approach in patients with bipolar disorder (BD) and a non-seasonal major depressive episode (MDE), the characteristics of bright light therapy (BLT) administration (duration, escalation, morning and mid-day exposures) depending on tolerance (hypomanic symptoms). Methods Patients with BD I or II treated with a mood stabilizer are eligible. After 1 week of placebo, patients are randomized to either morning or mid-day exposure for 10 weeks of active BLT with glasses, using a dose escalation of 7.5, 10, 15, 30 and 45 minutes/day. A further follow-up visit is planned 6 months after inclusion. Patients will be included in cohorts of 3, with at least 3 days of delay between them, and 1 week between cohorts. If none meets a dose-limiting toxicity (DLT; i.e., hypomanic symptoms), the initiation dose of the next cohort will be increased. If one patient meets a DLT, an additional cohort will start at the same dose. If 2 or 3 patients meet a DLT, from the same cohort or from two cohorts at the same initiation dose, the maximum tolerated dose is defined. This dose escalation will also take into account DLTs observed during the intra-subject escalation in previous cohorts, with a "Target Ceiling Dose" defined if 2 DLTs occurred at a dose. Discussion Using an innovative and more ergonomic device in the form of glasses, this study aims to better codify the use of BLT in BD to ensure good initiation and tolerance.
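The cohort rules described above follow a 3+3-style escalation. As a rough illustration only (hypothetical helper name; a simplification that ignores the protocol's intra-subject escalation and "Target Ceiling Dose" tracking), the per-dose decision could be sketched as:

```python
def next_cohort_action(dlt_counts_at_dose):
    """Decide the next step at the current starting dose in a 3+3-style
    escalation. dlt_counts_at_dose lists the DLT count observed in each
    3-patient cohort already started at this dose."""
    total_dlts = sum(dlt_counts_at_dose)
    if total_dlts == 0:
        return "escalate"            # no DLT: next cohort starts at a higher dose
    if total_dlts == 1:
        return "repeat_same_dose"    # one DLT: add another cohort at the same dose
    return "maximum_tolerated_dose"  # 2+ DLTs at this dose: MTD is defined
```

For example, one DLT in the first cohort triggers a confirmation cohort at the same dose; a second DLT there defines the maximum tolerated dose.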

Behavior of a Shape Memory Alloy Actuator with Composite Strip and Spring (복합재료 스트립과 스프링을 갖는 형상기억합금 작동기의 거동)

  • Heo, Seok;Hwang, Do-Yeon;Choi, Jae-Won;Park, Hoon-Cheol;Goo, Nam-Seo
    • Composites Research
    • /
    • v.22 no.2
    • /
    • pp.37-42
    • /
    • 2009
  • This paper presents an experimental approach to designing a bending-type actuator using a shape memory alloy (SMA) wire, a composite strip, and a spring. The SMA wire is attached to two edges of the bent strip to apply pre-stress to the SMA wire. The spring is used to provide recovery force right after actuation of the SMA wire. To investigate the thermo-mechanical characteristics of the SMA wire, a series of DSC tests has been conducted, and tensile tests under various levels of pre-stress and input power have been performed. Based on the measured properties of the SMA wire, bending-type actuators are designed and tested for different combinations of strip, number of springs, and input power. It has been found that a bending-type actuator with a proper combination shows fast actuation performance and low power consumption.

Mobile Robot Localization in Geometrically Similar Environment Combining Wi-Fi with Laser SLAM

  • Gengyu Ge;Junke Li;Zhong Qin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.5
    • /
    • pp.1339-1355
    • /
    • 2023
  • Localization is a hot research topic in many areas, especially in the mobile robot field. Due to the weak signal of the global positioning system (GPS), the alternative schemes in an indoor environment include wireless signal transmitting and receiving solutions, laser rangefinders that build a map followed by a re-localization stage, visual positioning methods, etc. Among all wireless signal positioning techniques, Wi-Fi is the most common one. Wi-Fi access points are installed in most indoor areas of human activity, and smart devices equipped with Wi-Fi modules can be seen everywhere. However, the localization of a mobile robot using a Wi-Fi scheme usually lacks orientation information. Besides, the distance error is large because of indoor signal interference. Another research direction, mainly based on laser sensors, is to actively sense the environment and achieve positioning. An occupancy grid map is built using the simultaneous localization and mapping (SLAM) method when the mobile robot enters the indoor environment for the first time. When the robot enters the environment again, it can localize itself according to the known map. Nevertheless, this scheme works effectively only when those areas have salient geometrical features. If the areas have similar scanning structures, such as a long corridor or similar rooms, the traditional methods always fail. To address the weaknesses of the above two methods, this work proposes a coarse-to-fine paradigm and an improved localization algorithm that utilizes Wi-Fi to assist robot localization in a geometrically similar environment. Firstly, a grid map is built using laser SLAM. Secondly, a fingerprint database is built in the offline phase. Then, RSSI values are obtained in the localization stage to get a coarse localization. Finally, an improved particle filter method based on the Wi-Fi signal values is proposed to realize a fine localization. Experimental results show that our approach is effective and robust for both global localization and the kidnapped robot problem. The localization success rate reaches 97.33%, while the traditional method always fails.
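The coarse-to-fine idea can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `coarse_localize` matches a live RSSI scan against an offline fingerprint database, and `reweight` scales particle-filter weights by a Gaussian RSSI likelihood (the function names, the nearest-fingerprint lookup, and `sigma` are all assumptions):

```python
import math

def rssi_distance(a, b):
    """Euclidean distance between two RSSI vectors (one dBm value per AP)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def coarse_localize(rssi, fingerprints):
    """Coarse stage: return the fingerprint position whose stored mean
    RSSI vector best matches the live scan."""
    return min(fingerprints, key=lambda pos: rssi_distance(rssi, fingerprints[pos]))

def reweight(particles, weights, rssi, fingerprints, sigma=5.0):
    """Fine stage: scale each particle's weight by a Gaussian likelihood of
    the observed RSSI at the fingerprint point nearest the particle, then
    renormalize so the weights sum to one."""
    out = []
    for (px, py), w in zip(particles, weights):
        ref = min(fingerprints, key=lambda p: (p[0] - px) ** 2 + (p[1] - py) ** 2)
        d = rssi_distance(rssi, fingerprints[ref])
        out.append(w * math.exp(-d * d / (2.0 * sigma * sigma)))
    total = sum(out)
    return [w / total for w in out]
```

In a corridor of geometrically identical rooms, the Wi-Fi likelihood concentrates particles near the correct room before the laser scan refines the pose.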

A Novel Two-Stage Training Method for Unbiased Scene Graph Generation via Distribution Alignment

  • Dongdong Jia;Meili Zhou;Wei Wei;Dong Wang;Zongwen Bai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.12
    • /
    • pp.3383-3397
    • /
    • 2023
  • Scene graphs serve as semantic abstractions of images and play a crucial role in enhancing visual comprehension and reasoning. However, the performance of Scene Graph Generation is often compromised when working with biased data in real-world situations. While many existing systems focus on a single stage of learning for both feature extraction and classification, some employ Class-Balancing strategies, such as Re-weighting, Data Resampling, and Transfer Learning from head to tail. In this paper, we propose a novel approach that decouples the feature extraction and classification phases of the scene graph generation process. For feature extraction, we leverage a transformer-based architecture and design an adaptive calibration function specifically for predicate classification. This function enables us to dynamically adjust the classification scores for each predicate category. Additionally, we introduce a Distribution Alignment technique that effectively balances the class distribution after the feature extraction phase reaches a stable state, thereby facilitating the retraining of the classification head. Importantly, our Distribution Alignment strategy is model-independent and does not require additional supervision, making it applicable to a wide range of SGG models. Using the scene graph diagnostic toolkit on Visual Genome and several popular models, we achieved significant improvements over the previous state-of-the-art methods with our model. Compared to the TDE model, our model improved mR@100 by 70.5% for PredCls, by 84.0% for SGCls, and by 97.6% for SGDet tasks.
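The paper's exact adaptive calibration function is not reproduced here; a generic distribution-alignment sketch in the logit-adjustment style (subtracting a scaled log-prior so tail predicate classes are not dominated by head classes) would look like:

```python
import math

def align_logits(logits, class_counts, tau=1.0):
    """Post-hoc distribution alignment of classification scores: subtract
    tau * log(class prior) from each logit. A generic sketch of the idea,
    not the paper's calibration function; tau controls the strength."""
    total = sum(class_counts)
    return [z - tau * math.log(c / total) for z, c in zip(logits, class_counts)]
```

With equal raw scores, the adjusted score of a predicate seen 100 times now exceeds that of one seen 900 times, shifting predictions toward the tail without retraining the feature extractor.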

Investigation of the behavior of an RC beam strengthened by external bonding of a porous P-FGM and E-FGM plate in terms of interface stresses

  • Zahira Sadoun;Riadh Bennai;Mokhtar Nebab;Mouloud Dahmane;Hassen Ait Atmane
    • Structural Monitoring and Maintenance
    • /
    • v.10 no.4
    • /
    • pp.315-337
    • /
    • 2023
  • During the design phase, it is crucial to determine the interface stresses between the reinforcing plate and the concrete base in order to predict plate-end separation failures. In this work, a simple theoretical study of interface shear stresses in beams reinforced with P-FGM and E-FGM plates subjected to an arbitrarily positioned point load, or two symmetrical point loads, is presented using linear elastic theory. The presence of pores in the reinforcing plate, distributed in several forms, is also taken into account. For this purpose, we analyze the effects of porosity and its distribution shape on the interfacial normal and shear stresses of an FGM beam reinforced with an FRP plate under different types of load. Comparisons of the proposed model with existing analytical solutions in the literature confirm the feasibility and accuracy of this new approach. The influence of different parameters on the interfacial behavior of RC beams strengthened with functionally graded porous plates is further examined in a parametric study using the proposed model. From the results obtained in this study, we can say that interface stress is significantly affected by several factors, including the pores present in the reinforcing plate and their distribution shape. Additionally, we can conclude that reinforcement systems with composite plates are very effective in improving the flexural response of RC beams.

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reduce the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and experimentally demonstrated. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and flexibility of design have been widely developed for many applications such as optical information processing, optical computing, optical interconnection, etc. Among the proposed optimization methods, the GA has become popular due to its capability of reaching nearly global optima. However, there exists a drawback to consider when we use the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons that the GA's operation may be time intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. In trying to remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. For that reason, we attempt to find a new approach combining the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is merely a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time Fourier transforming the encoded parameters on the hologram into the value to be solved. Depending on the speed of the computer, this process can even last up to ten minutes. It will be more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. By doing so, the initial population will contain fewer trial holograms, which is equivalent to reducing the GA's computation time. Accordingly, a hybrid algorithm that utilizes a trained neural network to initiate the GA's procedure is proposed. Consequently, the initial population contains fewer random holograms and is complemented by approximately desired holograms. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] to acquire approximately desired holograms is carried out. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain the approximately desired holograms, which are in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operations of the hybrid algorithm and the GA are the same except for the modification of the initial step. Hence, the verified results in Ref. [2] for parameters such as the probability of crossover and mutation, the tournament size, and the crossover block size remain unchanged, apart from the reduced population size. A reconstructed image of 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. We see that the simulation and experiment results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and a Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
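The key modification, seeding the GA's initial population with network-generated holograms instead of purely random ones, can be sketched as follows (hypothetical helper; in the hybrid scheme the seeds would come from the trained ANN):

```python
import random

def seeded_population(seed_holograms, pop_size, n_pixels, rng=None):
    """Build a GA initial population of binary-phase holograms: start from
    network-suggested seeds, then pad with random individuals until the
    population is full. A sketch of the seeding idea only."""
    rng = rng or random.Random(0)          # deterministic padding for the sketch
    pop = [list(h) for h in seed_holograms[:pop_size]]
    while len(pop) < pop_size:
        pop.append([rng.randint(0, 1) for _ in range(n_pixels)])
    return pop
```

Because the seeds already sit near good solutions, a smaller population (30 here versus the classical GA's) and fewer generations suffice, which is where the reported 66.7% time saving comes from.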


Building Transparency on the Total System Performance Assessment of Radioactive Repository through the Development of the FEAS Program (FEAS 프로그램 개발을 통한 방사성폐기물 처분장 종합 성능 평가(TSPA) 투명성 증진에 관한 연구)

  • 서은진;황용수;강철형
    • Tunnel and Underground Space
    • /
    • v.13 no.4
    • /
    • pp.270-278
    • /
    • 2003
  • Transparency in the Total System Performance Assessment (TSPA) is the key issue in enhancing public acceptance of a permanent high-level radioactive repository. Traditionally, the study of features, events and processes (FEPs) and associated scenarios has been regarded as the starting point for opening communicative discussion of TSPA, such as what to evaluate, how to evaluate, and how to translate outcomes into friendlier language that many stakeholders can easily understand and react to. However, in most cases, it has been limited to one-way communication, because it is difficult for stakeholders outside the performance assessment field to assess the details of the safety assessment, the scenarios, and their technical background. Fortunately, the advent of the internet era opens up the possibility of two-way communication from the beginning of the performance assessment, so that every stakeholder can exchange their keen opinions on safety issues. To achieve this, KAERI is developing a systematic approach from FEPs to an assessment-methods flow chart. All information is integrated into a web-based program named FEAS (FEp to Assessment through Scenario development), under development at KAERI. In parallel, two independent systems are also under development: a web-based QA (Quality Assurance) system and a PA (Performance Assessment) input database. It is ideal to integrate the input database with the QA system so that every item of data in the system can be checked whenever necessary. Throughout the next R&D phase, starting in 2003, these three systems will be consolidated into one unified system.

Numerical Simulation of Solitary Wave Run-up with an Internal Wave-Maker of Navier-Stokes Equations Model (내부조파기법을 활용한 Navier-Stokes 방정식 모형의 고립파 처오름 수치모의)

  • Ha, Tae-Min;Kim, Hyung-Jun;Cho, Yong-Sik
    • Journal of Korea Water Resources Association
    • /
    • v.43 no.9
    • /
    • pp.801-811
    • /
    • 2010
  • A three-dimensional numerical model called NEWTANK is employed to investigate solitary wave run-up with an internal wave-maker on a steep slope. The numerical model solves the spatially averaged Navier-Stokes equations for two-phase flows. The LES (large-eddy-simulation) approach is adopted to model the turbulence effect by using the Smagorinsky SGS (sub-grid scale) closure model. A two-step projection method is adopted in numerical solutions, aided by the Bi-CGSTAB (Bi-Conjugate Gradient Stabilized) method to solve the pressure Poisson equation for the filtered pressure field. The second-order accurate VOF (volume-of-fluid) method is used to track the distorted and broken free surface. A solitary wave is first internally generated and propagated over a constant water depth in the three-dimensional domain. Numerically predicted results are compared with analytical solutions and numerical errors are analyzed in detail. The model is then applied to study solitary wave run-up on a steep slope and the obtained results are compared with available laboratory measurements.
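As a point of reference for the generated wave, the first-order solitary-wave surface profile commonly used to drive internal wave-makers can be computed as below (a textbook form, not the exact source term of the NEWTANK model; `H` is the wave height and `h` the still-water depth):

```python
import math

def solitary_wave_eta(x, H, h, x0=0.0):
    """First-order solitary-wave free-surface elevation:
    eta(x) = H * sech^2( sqrt(3H / (4 h^3)) * (x - x0) ).
    Generic textbook profile; returns the elevation at position x."""
    k = math.sqrt(3.0 * H / (4.0 * h ** 3))
    return H / math.cosh(k * (x - x0)) ** 2
```

The profile peaks at `x0` with elevation `H` and decays symmetrically, which is why a single internally generated crest propagates over the constant-depth region without a trailing wave train.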

Characteristics of Water Level and Velocity Changes due to the Propagation of Bore (단파의 전파에 따른 수위 및 유속변화의 특성에 관한 연구)

  • Lee, Kwang Ho;Kim, Do Sam;Yeh, Harry
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.5B
    • /
    • pp.575-589
    • /
    • 2008
  • In the present work, we investigate the hydrodynamic behavior of a turbulent bore, such as a tsunami bore or tidal bore, generated by the removal of a gate with water impounded on one side. The bore generation system is similar to that used in a general dam-break problem. For the numerical simulation of the formation and propagation of a bore, we consider the incompressible flows of two immiscible fluids, liquid and gas, governed by the Navier-Stokes equations. The interface tracking between the two fluids is achieved by the volume-of-fluid (VOF) technique, and the M-type cubic interpolated propagation (MCIP) scheme is used to solve the Navier-Stokes equations. The MCIP method is a low-diffusive, stable scheme that extends the original one-dimensional CIP to higher dimensions using a fractional step technique. Further, a large eddy simulation (LES) closure scheme, a cost-effective approach to turbulence simulation, is used to predict the evolution of quantities associated with turbulence. In order to verify the applicability of the developed numerical model to bore simulation, laboratory experiments are performed in a wave tank. Comparisons are made between the numerical results of the present model and the experimental data, and good agreement is achieved.
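For context, the textbook relation linking a bore front's speed to the water depths ahead of and behind it can be sketched as follows (a standard reference formula from hydraulic-jump theory, not taken from the paper's numerical model):

```python
import math

def bore_speed(h1, h2, g=9.81):
    """Propagation speed of a bore advancing into still water:
    U^2 = g * h2 * (h1 + h2) / (2 * h1), where h1 is the undisturbed
    depth ahead of the front and h2 the depth behind it."""
    return math.sqrt(g * h2 * (h1 + h2) / (2.0 * h1))
```

In the limit `h2 -> h1` this reduces to the shallow-water wave speed `sqrt(g * h1)`, while a stronger bore (`h2 > h1`) travels faster, consistent with the water-level and velocity jumps measured behind the front.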

Near infrared spectroscopy for classification of apples using K-mean neural network algorism

  • Muramatsu, Masahiro;Takefuji, Yoshiyasu;Kawano, Sumio
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1131-1131
    • /
    • 2001
  • To develop a nondestructive quality evaluation technique for fruits, a K-means algorithm is applied to near infrared (NIR) spectroscopy of apples. The K-means algorithm is one of the neural network partitioning methods, and the goal is to partition the set of objects O into K disjoint clusters, where K is assumed to be known a priori. The algorithm introduced by MacQueen draws an initial partition of the objects at random. It then computes the cluster centroids, assigns objects to the closest of them, and iterates until a local minimum is obtained. The advantage of using a neural network is that the spectra at the wavelengths having absorptions for chemical bonds, including C-H and O-H types, can be selected directly as input data. In conventional multiple regression approaches, the first wavelength is selected manually around the absorbance wavelengths as showing a high correlation coefficient between the NIR 2nd-derivative spectrum and the Brix value in a single regression. After that, the second and following wavelengths are selected statistically so that the calibration equation shows a high correlation. Therefore, the second and following wavelengths are selected not in an NIR spectroscopic way but in a statistical way. In this research, the spectra at six wavelengths, 900, 904, 914, 990, 1000 and 1016 nm, are selected as input data for the K-means analysis. 904 nm is selected because this wavelength shows the highest correlation coefficient and is regarded as an absorbance wavelength. The others are selected because they show relatively high correlation coefficients and were identified as absorbance wavelengths for the chemical structures by B. G. Osborne. The experiment was performed in two phases. In the first phase, reflectance was acquired using fiber optics. The reflectance was calculated by comparing the near infrared energy reflected from a Teflon sphere as a standard reference, and the 2nd-derivative spectra were used for the K-means analysis. The samples are 67 intact Fuji apples cultivated in Aomori Prefecture, Japan. In the second phase, the Brix values were measured with a commercially available refractometer in order to evaluate the result of the K-means approach. The result shows a partition of the spectral data sets of the 67 samples into eight clusters, and the apples are classified into samples having high and low Brix values. Consequently, the K-means analysis realized the classification of apples on the basis of Brix values.
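The clustering step described above (compute centroids, assign objects to the nearest one, iterate) is plain Lloyd-style K-means. A minimal sketch, with deterministic seeding for reproducibility (the paper clusters 2nd-derivative spectra at six wavelengths into K = 8; the toy data below differ):

```python
def kmeans(points, k, iters=50):
    """Lloyd-style K-means: seed centroids with the first k points, then
    alternate assignment and centroid updates for a fixed number of
    iterations. Returns (centroids, clusters)."""
    centroids = [list(p) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        for i, c in enumerate(clusters):
            if c:  # keep the old centroid if a cluster empties out
                centroids[i] = [sum(dim) / len(c) for dim in zip(*c)]
    return centroids, clusters
```

In the apple study each "point" would be the vector of 2nd-derivative absorbances at the six selected wavelengths, and the resulting clusters are then labelled high or low Brix from the refractometer measurements.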
