• Title/Summary/Keyword: Iteration

A Study for Perception of Hair Damage Using Friction Coefficient of Human Hair (모발의 마찰계수를 통한 모발 손상 인식 연구)

  • Lim, Byung Tack;Seo, Hong An;Song, Sang-Hun;Son, Seong Kil;Kang, Nae-Gyu
    • Journal of the Society of Cosmetic Scientists of Korea / v.46 no.3 / pp.295-305 / 2020
  • Cosmetic treatments using oxidizing agents damage hair by inducing structural alteration of the cuticle layer, degradation of protein, and loss of lipid. This study relates the friction coefficient of damaged hair, measured instrumentally, to sensory evaluation of texture by human panelists, and considers moisture as a factor in the damage. Friction coefficients were measured on hair subjected to successive dye, perm, and bleach treatments. Hair dyed three times showed a friction coefficient of 0.60, the point at which 58% of respondents first perceived damage as the number of dye treatments increased. Hair bleached three times showed a friction coefficient of 0.84, and 88% of respondents judged it initially damaged. To examine whether lipid loss drives the perceived damage, the friction coefficient of hair was controlled by removing 18-methyleicosanoic acid (18-MEA); 68% of respondents again recognized initial damage at a friction coefficient of 0.60. Since moisture is the largest component of hair, moisture analysis was performed to relate the perceived texture of damage to the instrumentally measured friction coefficient. As the number of dye treatments increased, the hair became more hydrophilic, with a smaller contact angle, and dye-damaged hair contained 0.42% more moisture than healthy hair. Finally, it is shown that increased moisture in hair raises the adhesive force underlying the friction coefficient, and that a friction coefficient above 0.6 corresponds to the perception of hair damage.

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging / v.43 no.5 / pp.459-467 / 2009
  • Purpose: Maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of the iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphic processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection steps of the ML-EM algorithm were parallelized. The computation times for projection, for the errors between measured and estimated data, and for backprojection within one iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively, a speedup of about 15 times on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement, about 135 times, was driven by growing delays in the CPU-based computation after a certain number of iterations, whereas the GPU-based computation showed very little variation in time per iteration owing to its use of shared memory. Conclusion: GPU-based parallel computation significantly improved the speed and stability of ML-EM. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
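
For reference, the iteration being accelerated is the standard ML-EM update. Below is a minimal NumPy sketch under the assumption of an explicit system matrix A and measured sinogram y (both illustrative, not the paper's geometry); the forward projection and backprojection inside this loop are exactly the steps the paper maps onto CUDA kernels.

```python
import numpy as np

def ml_em(A, y, n_iters=32, eps=1e-12):
    """ML-EM reconstruction. A maps image voxels to projection bins,
    y is the measured sinogram; both are illustrative assumptions."""
    x = np.ones(A.shape[1])                 # uniform initial image
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
    for _ in range(n_iters):
        proj = A @ x                        # forward projection
        ratio = y / np.maximum(proj, eps)   # measured / estimated data
        x *= (A.T @ ratio) / np.maximum(sens, eps)  # backproject, update
    return x
```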

Closed Integral Form Expansion for the Highly Efficient Analysis of Fiber Raman Amplifier (라만증폭기의 효율적인 성능분석을 위한 라만방정식의 적분형 전개와 수치해석 알고리즘)

  • Choi, Lark-Kwon;Park, Jae-Hyoung;Kim, Pil-Han;Park, Jong-Han;Park, Nam-Kyoo
    • Korean Journal of Optics and Photonics / v.16 no.3 / pp.182-190 / 2005
  • The fiber Raman amplifier (FRA) is a distinctly advantageous technology: thanks to its wide, flexible gain bandwidth and intrinsically low noise characteristics, FRA has become indispensable today. Various FRA modeling methods, with different levels of convergence speed and accuracy, have been proposed to gain insight into FRA dynamics and optimum design before real implementation. Still, all of these approaches share the common platform of coupled ordinary differential equations (ODEs) for the Raman equation set, which must be solved along the long fiber propagation axis. The ODE platform has classically set the bar for achievable convergence speed, resulting in exhaustive calculation effort. In this work, we propose an alternative, highly efficient framework for FRA analysis. Treating the Raman gain as a perturbation in an adiabatic process, we implement the algorithm by deriving a recursive relation for the integrals of power inside the fiber, expressed with the effective length, and by constructing a matrix formalism for the solution of the given FRA problem. By adiabatically turning on the Raman process in the fiber as the order of iteration increases, the FRA solution is obtained along the iteration axis for the whole fiber length, rather than along the fiber propagation axis, enabling faster convergence at accuracy equivalent to methods based on coupled ODEs. Performance comparison for co-, counter-, and bi-directionally pumped multi-channel FRAs shows convergence more than $10^2$ times faster than the average power method at the same level of accuracy (relative deviation < 0.03 dB).
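
As a rough illustration of solving along the iteration axis rather than the propagation axis, here is a minimal sketch (not the paper's matrix formalism) that fixed-point iterates the closed integral form of a single-pump, single-signal, co-pumped Raman equation set; all parameter values are illustrative assumptions.

```python
import numpy as np

L, N = 20e3, 2001                 # fiber length [m], grid points
z = np.linspace(0.0, L, N)
dz = z[1] - z[0]
a_s, a_p = 4.6e-5, 5.8e-5         # ~0.2 / 0.25 dB/km losses, in 1/m
gR = 0.4e-3                       # Raman gain efficiency [1/(W m)]
ratio = 1550.0 / 1450.0           # photon-energy ratio ~ lam_s / lam_p
Ps0, Pp0 = 1e-3, 0.5              # input signal / pump power [W]

Pp = Pp0 * np.exp(-a_p * z)       # 0th order: undepleted pump
for _ in range(20):               # turn Raman coupling on iteratively
    Is = np.cumsum(Pp) * dz       # running integral of pump power
    Ps = Ps0 * np.exp(-a_s * z + gR * Is)        # closed integral form
    Ip = np.cumsum(Ps) * dz       # running integral of signal power
    Pp_new = Pp0 * np.exp(-a_p * z - ratio * gR * Ip)  # pump depletion
    if np.max(np.abs(Pp_new - Pp)) < 1e-9:       # converged along the
        break                                    # iteration axis
    Pp = Pp_new
print(f"on-off gain: {10*np.log10(Ps[-1]/(Ps0*np.exp(-a_s*L))):.2f} dB")
```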

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are demonstrated in computer simulation and experiment. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching nearly global optima. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One major reason the GA's operation can be time-intensive is the expense of evaluating the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, the artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications demanding high precision. We therefore attempt to combine the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration comprising selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, which selects better holograms, plays an important role in the implementation of the GA; however, this evaluation wastes much time Fourier transforming the encoded hologram parameters into the fitness value, and depending on the speed of the computer can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed: the initial population then contains fewer trial holograms, reducing the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initiate the GA's procedure is proposed, so that the initial population contains fewer random holograms, compensated by approximately desired ones. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, simulation of holograms by the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network attains approximately desired holograms in fairly good agreement with the theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step; hence the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, aside from the reduced population size.
A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the number of iterations is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is shown in Fig. 2: with a 66.7% reduction in computation time and a 2% increase in diffraction efficiency over the GA method, the hybrid algorithm demonstrates efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows the diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm; a diffraction efficiency of 75.8% and a uniformity of 5.8% are measured, in fairly good agreement with the simulation. In this paper, the genetic algorithm and the neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by grant No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
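
A minimal sketch of the iteration described above: a GA over binary phase holograms whose fitness requires a Fourier transform per evaluation, with the hybrid variant seeding the initial population near an approximate hologram (a stand-in here for the trained ANN's output). Hologram size, target pattern, and the seeding noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                                   # hologram size (illustrative)
target = np.zeros((N, N))
target[12:20, 12:20] = 1.0               # desired diffraction pattern
target /= target.sum()

def fitness(h):
    """Cost evaluation: Fourier-transform the binary phase hologram to
    the reconstruction plane and score it against the target."""
    recon = np.abs(np.fft.fft2(np.exp(1j * np.pi * h))) ** 2
    recon /= recon.sum()
    return -np.sum((recon - target) ** 2)

def ga(pop, n_gen=200, p_cx=0.75, p_mut=0.001):
    for _ in range(n_gen):
        f = np.array([fitness(h) for h in pop])      # costly FFT step
        i, j = rng.integers(len(pop), size=(2, len(pop)))
        parents = [pop[a] if f[a] > f[b] else pop[b]  # size-2 tournament
                   for a, b in zip(i, j)]
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            c1, c2 = a.copy(), b.copy()
            if rng.random() < p_cx:                  # one-point crossover
                r = rng.integers(1, N)
                c1[r:], c2[r:] = b[r:].copy(), a[r:].copy()
            for c in (c1, c2):
                c ^= rng.random((N, N)) < p_mut      # bit-flip mutation
                nxt.append(c)
        pop = nxt
    return max(pop, key=fitness)

# Classical GA: fully random initial population of 30 binary holograms.
pop_random = [rng.random((N, N)) < 0.5 for _ in range(30)]
# Hybrid: seed the population near an approximate hologram (a stand-in
# for the ANN output), so fewer costly fitness evaluations are needed.
seed = pop_random[0]
pop_hybrid = [seed ^ (rng.random((N, N)) < 0.02) for _ in range(30)]
best = ga(pop_hybrid)
```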

The Evaluation of Reconstruction Method Using Attenuation Correction Position Shifting in 3D PET/CT (PET/CT 3D 영상에서 감쇠보정 위치 변화 방법을 이용한 영상 재구성법의 평가)

  • Hong, Gun-Chul;Park, Sun-Myung;Jung, Eun-Kyung;Choi, Choon-Ki;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.2 / pp.172-176 / 2010
  • Purpose: Patient motion during PET/CT scans causes a mismatch in attenuation correction (AC), affecting quantitative evaluation and degrading the accuracy of results. This study evaluated the utility of a reconstruction method that shifts the AC position when the emission scan position changes after the transmission scan in 3D PET/CT imaging. Materials and Methods: On a GE Discovery STE16 scanner, a 1 mL syringe was positioned at offsets of ${\pm}2$, 6, and 10 cm along the x and y axes from the central point of a polystyrene phantom ($20{\times}20{\times}110$ cm). After a transmission scan of the syringe filled with $^{18}F$-FDG at 5 kBq/mL, emission scans were acquired at the shifted positions and images were reconstructed using AC matched to each position. Reconstruction was iterative, with 2 iterations and 20 subsets, and decay correction for elapsed time was applied to every emission data set. ROIs were drawn at the syringe positions, and the radioactivity concentration (kBq/mL) at each position was compared with the central point via the %difference (%D). Results: The radioactivity concentration at the central point of the emission scan was 2.30 kBq/mL; along the +x axis it was 1.95, 1.82, and 1.75 kBq/mL, along the -x axis 2.07, 1.75, and 1.65 kBq/mL, along the +y axis 2.07, 1.87, and 1.90 kBq/mL, and along the -y axis 2.17, 1.85, and 1.67 kBq/mL. The corresponding %D values were 15, 20, and 23% (+x), 9, 23, and 28% (-x), 12, 21, and 20% (+y), and 8, 22, and 29% (-y). With the AC position-shifting method, the concentrations were 2.00, 1.95, and 1.80 kBq/mL (+x), 2.25, 2.15, and 1.90 kBq/mL (-x), 2.07, 1.90, and 1.90 kBq/mL (+y), and 2.10, 2.02, and 1.72 kBq/mL (-y), with %D of 13, 15, and 21% (+x), 2, 6, and 17% (-x), 9, 17, and 17% (+y), and 8, 12, and 25% (-y). Conclusion: Under AC mismatch, the AC position-shifting method increased the measured radioactivity concentration by an average of 0.14 and 0.03 kBq/mL along the x and y axes, and improved %D by 6.1 and 4.2%. The farther a position lies from the central point, where spatial resolution is lower, the larger the drop in measured radioactivity concentration. In actual clinical studies the attenuation is stronger, so the error under AC mismatch would be larger; applying the AC position-shifting method to lesions in regions of AC mismatch should therefore reduce the error in radioactivity concentration.
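
For clarity, the %difference metric used above is simply the relative deviation of each shifted ROI value from the central point. A minimal sketch using the +x axis concentrations quoted in Results (the small mismatches against the quoted 15, 20, 23% come from the concentrations being reported to only two decimals):

```python
center = 2.30                     # kBq/mL at the central point
for d, c in zip((2, 6, 10), (1.95, 1.82, 1.75)):   # +x axis offsets [cm]
    pct_d = abs(center - c) / center * 100
    # prints ~15%, 21%, 24%, matching the quoted 15, 20, 23% to within
    # rounding of the reported two-decimal concentrations
    print(f"+x {d} cm: %D = {pct_d:.0f}%")
```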

The Study about Application of LEAP Collimator at Brain Diamox Perfusion Tomography Applied Flash 3D Reconstruction: One Day Subtraction Method (Flash 3D 재구성을 적용한 뇌 혈류 부하 단층 촬영 시 LEAP 검출기의 적용에 관한 연구: One Day Subtraction Method)

  • Choi, Jong-Sook;Jung, Woo-Young;Ryu, Jae-Kwang
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.3 / pp.102-109 / 2009
  • Purpose: Flash 3D (pixon(R) method; 3D OSEM) is a software reconstruction method developed to shorten examination time and improve image quality, and is usefully applied to nuclear medicine tomography. When performing brain diamox perfusion scans with shortened acquisition times and reconstructing subtracted images with Flash 3D, the SNR of the subtraction image is lower than that of the basal image. To increase the SNR of the subtraction image, we used LEAP collimators, prioritizing sensitivity to vessel dilatation over the resolution of brain vessels. The purpose of this study is to confirm the applicability of LEAP collimators to brain diamox perfusion tomography and to identify proper reconstruction factors for Flash 3D. Materials and Methods: (1) Phantom evaluation: a Hoffman 3D brain phantom filled with $^{99m}Tc$ was imaged with LEAP and LEHR collimators (diamox image), and a second image (basal image) was obtained by the same method 6 hours later (the half-life of $^{99m}Tc$ is about 6 hours). The SNR and the gray-to-white-matter ratio were obtained for each basal and subtraction image. (2) Patient image evaluation: we quantitatively analyzed patients classified as normal who were examined with LEAP collimators and patients classified as normal who were examined with LEHR collimators between May 2008 and January 2009, applying the factors derived from the phantom study. A one-day protocol was used, with $^{99m}Tc$-ECD 925 MBq injected for both the basal and the diamox acquisitions. Results: (1) Phantom evaluation: with LEHR collimators, about 41~46 kcounts were detected in the basal image, 79~90 kcounts in the stress image, and 40~47 kcounts in the subtraction image; with LEAP collimators, about 102~113 kcounts (basal), 188~210 kcounts (stress), and 94~103 kcounts (subtraction). The SNR of the LEHR subtraction image was about 37% lower than that of the LEHR basal image, while the SNR of the LEAP subtraction image was about 17% lower than that of the LEAP basal image. The gray-to-white-matter ratio was 2.2:1 for the LEHR basal image and 1.9:1 for its subtraction image, versus 2.4:1 and 2:1 for LEAP. (2) Patient image evaluation: counts acquired with LEHR collimators were about 40~60 kcounts (basal) and 80~100 kcounts (stress); an FWHM of 7 mm was appropriate for the basal and stress images and 11 mm for the subtraction image. With LEAP, about 80~100 kcounts (basal) and 180~200 kcounts (stress) were acquired, and blurring could be reduced by setting the FWHM to 5 mm for the basal and stress images and 7 mm for the subtraction image. For basal and stress images, LEHR was superior to LEAP; however, as in the phantom experiment, the LEHR subtraction image was rough because of its decreased SNR, whereas the LEAP subtraction image was better in SNR and sensitivity. For both collimators, the proper number of subsets and iterations was 8. Conclusions: Clearer subtraction images with higher SNR could be achieved by using a proper filter with LEAP collimators. When applying the one-day protocol and reconstructing with Flash 3D, LEAP collimators may be considered for acquiring better subtraction images.
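
A minimal sketch of the one-day subtraction idea underlying the protocol: the diamox acquisition still contains residual activity from the first injection, so the basal image, decay-corrected to the diamox acquisition time, is subtracted. The array sizes, count levels, interval, and half-life value are assumptions; Flash 3D itself is vendor software and is not reproduced here.

```python
import numpy as np

HALF_LIFE_H = 6.0   # rounded Tc-99m half-life, as quoted in the abstract

def subtract_basal(diamox, basal, dt_hours):
    """Remove residual first-injection activity from the diamox image."""
    residual = basal * 0.5 ** (dt_hours / HALF_LIFE_H)  # decayed basal
    return np.clip(diamox - residual, 0.0, None)        # no negative counts

# illustrative count maps, not real acquisitions
rng = np.random.default_rng(3)
basal = rng.poisson(40.0, (128, 128)).astype(float)
diamox = basal * 0.5 ** (2.0 / HALF_LIFE_H) + rng.poisson(50.0, (128, 128))
stress_only = subtract_basal(diamox, basal, dt_hours=2.0)
```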

Analysis of Football Fans' Uniform Consumption: Before and After Son Heung-Min's Transfer to Tottenham Hotspur FC (국내 프로축구 팬들의 유니폼 소비 분석: 손흥민의 토트넘 홋스퍼 FC 이적 전후 비교)

  • Choi, Yeong-Hyeon;Lee, Kyu-Hye
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.91-108 / 2020
  • Korea's famous soccer players have steadily performed well in international leagues, raising Korean fans' interest in those leagues. Reflecting this growing phenomenon, this study examined overall consumer perception of uniform purchases by domestic soccer fans and compared changes in perception following player transfers. In particular, the paper examined the consumer perceptions and purchase factors of soccer fans expressed on social media, focusing on the periods before and after Heung-Min Son's move to the English Premier League's Tottenham Hotspur FC. Using 'EPL uniform' as the collection keyword, consumer postings were collected from domestic websites and social media with Python 3.7 and analyzed with Ucinet 6, NodeXL 1.0.1, and SPSS 25.0. The results can be summarized as follows. First, the uniform of the club that consistently topped the league attracted attention as a popular uniform, and players' performance and position were identified as key factors in the purchase and search of professional football uniforms. For clubs, actual ranking and league titles were important factors, and the club emblem and the sponsor logo attached to the uniform also interested consumers. In the purchase decision process, a uniform's form, marking, authenticity, and sponsors proved more important than price, design, size, and logo. The official online store emerged as the major purchasing channel, followed by gifts from friends or requests to acquaintances traveling to the United Kingdom. Second, classifying the key categories through convergence of iterated correlations (CONCOR) analysis and the Clauset-Newman-Moore clustering algorithm showed differences among individual groups, but groups containing EPL club and player keywords were identified as the key topics relating to professional football uniforms. Third, between 2002 and 2006 the central themes for professional football uniforms were the World Cup and the English Premier League, but from 2012 to 2015 the focus shifted toward domestic and international players in the English Premier League, and from then on the subject changed to the uniform itself. In this context, the major issues regarding professional soccer uniforms changed after Ji-Sung Park's transfer to Manchester United and the subsequent good performances of Sung-Yong Ki, Chung-Yong Lee, and Heung-Min Son in these leagues; the uniforms of the clubs the players transferred to also drew interest. Fourth, both male and female consumers show increasing interest in the English Premier League, to which Son's club Tottenham FC belongs; in particular, growing interest in Son tended to increase female consumers' interest in football uniforms. This study presents varied research on sports consumption and has value as a consumer study identifying unique consumption patterns.
It is also meaningful in that the accuracy of interpretation was enhanced by using cluster analysis via convergence of iterated correlations and the Clauset-Newman-Moore clustering algorithm to identify the main topics. Based on these results, clubs will be able to maximize profits and maintain good relationships with fans by identifying the key drivers of consumer awareness and purchasing among professional soccer fans and establishing effective marketing strategies.
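
The clustering step can be sketched with networkx, whose greedy_modularity_communities implements the Clauset-Newman-Moore algorithm; the keyword co-occurrence network below is built from a few illustrative stand-in postings, not the study's collected data.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Illustrative stand-in postings, not the study's data.
posts = [
    ["EPL", "uniform", "Tottenham", "Son Heung-Min"],
    ["uniform", "marking", "authenticity", "official store"],
    ["EPL", "Tottenham", "Son Heung-Min", "performance"],
]

G = nx.Graph()                      # keyword co-occurrence network
for words in posts:
    for i, u in enumerate(words):
        for v in words[i + 1:]:
            if G.has_edge(u, v):
                G[u][v]["weight"] += 1
            else:
                G.add_edge(u, v, weight=1)

# Clauset-Newman-Moore greedy modularity clustering of the keywords.
for community in greedy_modularity_communities(G, weight="weight"):
    print(sorted(community))
```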

Study on CGM-LMS Hybrid Based Adaptive Beam Forming Algorithm for CDMA Uplink Channel (CDMA 상향채널용 CGM-LMS 접목 적응빔형성 알고리듬에 관한 연구)

  • Hong, Young-Jin
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.9C / pp.895-904 / 2007
  • This paper proposes a robust, sub-optimal smart antenna for Code Division Multiple Access (CDMA) base stations. It exploits the properties of both the Least Mean Square (LMS) algorithm and the Conjugate Gradient Method (CGM) for the beamforming process. The weight update takes place at the symbol level, following the PN correlators of the receiver module, under the assumption that the post-correlation desired signal power is far larger than the power of each interfering signal. The proposed algorithm is simple, with a computational load as low as five times the number of antenna elements, $O(5N)$, per snapshot. The output signal-to-interference-plus-noise ratio (SINR) of the proposed smart antenna system was examined once the weight vector reached steady state. Computer simulations show that the proposed beamforming algorithm improves the SINR significantly compared with the single-antenna case. The convergence of the weight vector was also investigated, showing that the proposed hybrid algorithm performs better than either CGM or LMS alone during the initial stage of the weight-update iteration. The bit error rate (BER) characteristics of the proposed array are also shown as the processor input signal-to-noise ratio (SNR) varies.
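
A minimal sketch of the LMS half of the weight-update iteration, with one update per post-correlation symbol snapshot; the CGM stage the paper hybridizes with for fast initial convergence is omitted, and the array size, step size, and signal model are assumptions.

```python
import numpy as np

N, mu = 8, 0.01                       # antenna elements, LMS step size
rng = np.random.default_rng(1)
theta = np.deg2rad(20.0)              # desired user's angle of arrival
a = np.exp(-1j * np.pi * np.arange(N) * np.sin(theta))  # steering vector

w = np.zeros(N, dtype=complex)        # beamformer weight vector
for _ in range(500):                  # one update per symbol snapshot
    s = rng.choice([1.0, -1.0])       # desired (post-correlation) symbol
    x = a * s + 0.3 * (rng.standard_normal(N)
                       + 1j * rng.standard_normal(N))   # snapshot + noise
    y = np.vdot(w, x)                 # array output  w^H x
    e = s - y                         # error against the reference symbol
    w += mu * np.conj(e) * x          # LMS update:  w <- w + mu e* x
```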

An integrated airborne gravity survey of an offshore area near the northern Noto Peninsula, Japan (일본 노토 반도 북쪽 연안의 복합 항공 중력탐사)

  • Komazawa, Masao;Okuma, Shigeo;Segawa, Jiro
    • Geophysics and Geophysical Exploration / v.13 no.1 / pp.88-95 / 2010
  • An airborne gravity survey using a helicopter was carried out in October 2008, offshore along the northern Noto Peninsula, to understand the shallow and regional underground structure. Eleven flight lines, including three tie lines, were arranged at 2 km spacing within 20 km of the coast; the total length of the flight lines was ~700 km. The Bouguer anomalies computed from the airborne gravimetry are consistent with those computed from land and shipborne gravimetry, which gradually decrease in the offshore direction, so the accuracy of the airborne system is considered adequate. A local gravity low in Wajima Bay, already known from seafloor gravimetry, was also observed, suggesting that the airborne system has a structural resolution of ~2 km. Reduction of the gravity data to a common datum was conducted by compiling the three kinds of gravity data, from the airborne, shipborne, and land surveys; in the present study we used a solid-angle numerical integration method and an iteration method, and finally calculated the gravity anomalies at 300 m above sea level. A correction of 2.5 mGal was needed to merge the airborne and shipborne gravity data smoothly, so the accuracy of the Bouguer anomaly map is considered to be nearly 2 mGal on the whole, and 5 mGal at worst in limited or local areas.
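
The paper's solid-angle integration and iteration scheme is not reproduced here, but reduction of gridded anomalies to a common datum can be illustrated with a standard FFT upward continuation; the grid spacing and synthetic input below are assumptions.

```python
import numpy as np

def upward_continue(grid, dx, dz):
    """Continue a gridded gravity field upward by dz [m] using the
    wavenumber-domain filter exp(-|k| dz)."""
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))          # radial wavenumber |k|
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.exp(-k * dz)))

# synthetic anomaly grid [mGal] on a 2 km mesh, continued up to +300 m
g = np.random.default_rng(2).normal(0.0, 1.0, (128, 128))
g300 = upward_continue(g, dx=2000.0, dz=300.0)
```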

Numerical Simulation of Convection-dominated Flow Using SU/PG Scheme (SU/PG 기법을 이용한 이송이 지배적인 흐름 수치모의)

  • Song, Chang Geun;Seo, Il Won
    • KSCE Journal of Civil and Environmental Engineering Research / v.32 no.3B / pp.175-183 / 2012
  • In this study, the Galerkin scheme and the SU/PG scheme of the Petrov-Galerkin family were applied to the shallow water equations, and a finite element model for shallow water flow was developed. Numerical simulations were conducted in several flumes under convection-dominated flow conditions. Flow simulation of a channel with a slender structure in the watercourse revealed that the Galerkin and SU/PG schemes gave similar results at very low Fr and Re numbers. However, when the Fr number increased to 1.58, the Galerkin scheme did not converge, while the SU/PG scheme produced stable solutions after 5 iterations of the Newton-Raphson method. For the transcritical flow simulation in a diverging channel, the present model predicted the hydraulic jump accurately in terms of jump location, depth slope, and flow depth after the jump, and the numerical results agreed well with the hydraulic experiments of Khalifa (1980). In the oblique hydraulic jump simulation, where convection-dominated supercritical flow (Fr = 2.74) evolves, the Galerkin scheme blew up just after the first iteration of the initial time step, whereas the SU/PG scheme captured the boundary of the oblique hydraulic jump accurately without numerical oscillation. The maximum errors against exact solutions were less than 0.2% in water depth and velocity, so the SU/PG scheme predicted the oblique hydraulic jump phenomena more accurately than previous studies (Levin et al., 2006; Ricchiuto et al., 2007).
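
The convergence behavior quoted above refers to a Newton-Raphson iteration on the nonlinear algebraic system arising from the discretization. A minimal sketch follows, with an illustrative two-equation stand-in for the assembled finite element residual (F and J are assumptions, not the paper's system).

```python
import numpy as np

def newton_raphson(F, J, U0, tol=1e-10, max_iter=20):
    """Solve F(U) = 0 with Newton-Raphson: J(U) dU = -F(U)."""
    U = U0.astype(float)
    for k in range(max_iter):
        r = F(U)
        if np.linalg.norm(r) < tol:
            return U, k                        # converged in k iterations
        U = U - np.linalg.solve(J(U), r)       # Newton update
    return U, max_iter

# toy 2-equation system standing in for the assembled residual
F = lambda U: np.array([U[0]**2 + U[1] - 3.0, U[0] - U[1]**2 + 1.0])
J = lambda U: np.array([[2 * U[0], 1.0], [1.0, -2 * U[1]]])  # Jacobian
U, iters = newton_raphson(F, J, np.array([1.0, 1.0]))
```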