• Title/Summary/Keyword: Random divided image


Development of Smart Tote Bags with Marquage Techniques Using Optical Fiber and LEDs (광섬유와 LED를 활용한 마카쥬(marquage) 기법의 스마트 토트백 개발)

  • Park, Jinhee;Kim, Sang Jin;Kim, Jooyong
    • Journal of Fashion Business
    • /
    • v.25 no.1
    • /
    • pp.51-64
    • /
    • 2021
  • The purpose of this study was to develop smart bags that combine fashion-specific trends with smart information technologies such as light-emitting diodes (LEDs) and optical fibers, by grafting on marquage techniques that have recently become popular as part of eco-fashion. We applied e-textiles by designing leather tote bags that could show off LED luminescence. Two tote bags were produced, a white peacock design and a black paisley design; the LED light-emitting method was divided into two types, incremental lighting and random light emission, to suit each design, and the locations of the optical fibers were likewise reversed depending on the design. Circuits for the LEDs and optical fibers were produced according to the design, and a flexible conductive fabric was laser-cut and attached along the circuit lines in place of wire. A separate connector underwent three-dimensional (3D) modeling and was connected to high-luminosity LEDs and optical fiber bundles. The optical fiber logo expressed a subtle image using a white LED, which did not offset the LED's sharp luminous effect, suggesting that LEDs and fiber optics can be expressed together in harmony without appearing heterogeneous. Overall, the LEDs and fiber-optic fabric were well harmonized in the fashion bag using marquage techniques, with no sense of it being a mechanical device. Moreover, the circuit was made of conductive fabric, an e-textile that feels like a thin, flexible fabric. The study confirmed that the bag was developed as a smart wearable product that could be used in everyday life.
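The two emission modes described, incremental lighting and random light emission, can be sketched as simple frame generators. The LED count, frame counts, and 50% lighting probability are illustrative assumptions, not details from the paper:

```python
import random

def incremental_pattern(num_leds, steps):
    """Incremental lighting: LEDs turn on one after another, cumulatively,
    then the sequence restarts (assumed behavior, for illustration)."""
    for step in range(steps):
        yield tuple(range(step % num_leds + 1))

def random_pattern(num_leds, steps, seed=0):
    """Random light emission: each frame lights an arbitrary subset of LEDs
    (the 50% per-LED probability is an assumption)."""
    rng = random.Random(seed)
    for _ in range(steps):
        yield tuple(i for i in range(num_leds) if rng.random() < 0.5)
```

On actual hardware each yielded tuple would be written to the LED driver once per frame; here the generators only produce the on/off sequencing.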

Comparison of Stereoscopic Fusional Area between People with Good and Poor Stereo Acuity (입체 시력이 양호한 사람과 불량인 사람간의 입체시 융합 가능 영역 비교)

  • Kang, Hyungoo;Hong, Hyungki
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.21 no.1
    • /
    • pp.61-68
    • /
    • 2016
  • Purpose: This study investigated differences in the stereoscopic fusional area between people with good and poor stereo acuity when viewing stereoscopic displays. Methods: The stereo acuity of 39 participants (18 males and 21 females, 23.6 ± 3.15 years) was measured with the random dot stereo butterfly method. Participants with stereo-blindness were excluded. The stereoscopic fusional area was measured using a stereoscopic stimulus by varying the amount of horizontal disparity on a stereoscopic 3D TV. Participants were divided into two groups of good and poor stereo acuity; the criterion for good stereo acuity was defined as less than 60 arc seconds. The measurements were statistically analyzed. Results: 26 participants were found to have good stereo acuity and 13 poor stereo acuity. For stereoscopic stimuli farther than the fixation point, the threshold of horizontal disparity for those with poor stereo acuity was measured to be smaller than that for those with good stereo acuity, a statistically significant difference. On the other hand, there was no statistically significant difference between the two groups for stimuli nearer than the fixation point. Conclusions: In viewing stereoscopic displays, the boundary of the stereoscopic fusional area for the poor stereo acuity group was smaller than that of the good stereo acuity group only for the range behind the display. Hence, when viewing stereoscopic displays, participants with poor stereo acuity would have more difficulty perceiving the fused image at farther distances than participants with good stereo acuity.

Human Visual Perception-Based Quantization For Efficiency HEVC Encoder (HEVC 부호화기 고효율 압축을 위한 인지시각 특징기반 양자화 방법)

  • Kim, Young-Woong;Ahn, Yong-Jo;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.22 no.1
    • /
    • pp.28-41
    • /
    • 2017
  • In this paper, a fast encoding algorithm for the High Efficiency Video Coding (HEVC) encoder was studied. For coding efficiency, the current HEVC reference software divides the input image into Coding Tree Units (CTUs); each CTU is then recursively re-divided into Coding Units (CUs), up to a maximum depth, in quad-tree form for Rate-Distortion Optimization (RDO) during encoding. However, this is one of the main reasons for the high complexity of the encoding process. To reduce this complexity, this paper proposes a method that determines the maximum CU depth using hierarchical clustering in a pre-processing step. The hierarchical clustering results represent an average combination of the motion vectors (MVs) of neighboring blocks. Experimental results showed that the proposed method achieves an average 16% time saving with minimal BD-rate loss at 1080p video resolution. When combined with a previous fast algorithm, the proposed method achieves an average 45.13% time saving with a 1.84% BD-rate loss.
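The pre-processing idea, clustering the motion vectors of neighboring blocks and letting the cluster structure cap the CU quad-tree depth, can be sketched as below. The single-linkage rule, merge threshold, and cluster-count-to-depth mapping are assumptions for illustration; the paper does not specify them:

```python
import math

def mv_distance(a, b):
    """Euclidean distance between two motion vectors (dx, dy)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cluster_mvs(mvs, merge_threshold=2.0):
    """Single-linkage agglomerative (hierarchical) clustering of MVs:
    repeatedly merge the closest pair of clusters while their distance
    stays within merge_threshold."""
    clusters = [[mv] for mv in mvs]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(mv_distance(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > merge_threshold:
            break
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

def max_cu_depth(mvs):
    """Map MV homogeneity to a maximum CU split depth (0..3):
    one coherent motion cluster -> shallow split, many clusters -> deep."""
    n = len(cluster_mvs(mvs))
    return min(n - 1, 3)
```

With this cap in place, the encoder's RDO loop would simply skip evaluating CU partitions deeper than the returned depth for that CTU.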

Regeneration of a defective Railroad Surface for defect detection with Deep Convolution Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho;Han, Seokmin
    • Journal of Internet Computing and Services
    • /
    • v.21 no.6
    • /
    • pp.23-31
    • /
    • 2020
  • This study was carried out to generate various images of railroad surfaces with random defects, as training data for better defect detection. Defects on railroad surfaces are caused by various factors, such as friction between track-binding devices and adjacent tracks, and can lead to accidents such as broken rails, so railroad maintenance for defects is necessary. Therefore, various studies on defect detection and inspection using image processing or machine learning on railway surface images have been conducted to automate railroad inspection and to reduce maintenance costs. In general, the performance of image-processing analysis methods and machine learning depends on the quantity and quality of the data. For this reason, some studies require specific devices or vehicles that acquire images of the track surface at regular intervals in order to build a database of various railway surface images. In contrast, in this study, to reduce the operating cost of image acquisition, we constructed a 'Defective Railroad Surface Regeneration Model' by applying methods from related studies on Generative Adversarial Networks (GANs). We thus aimed to detect defects on the railroad surface even without a dedicated database. The model is designed to learn to generate railroad surfaces by combining different railroad surface textures with the original surface, taking the ground truth of the railroad defects into account. The generated railroad surface images were used as training data for a defect detection network based on a Fully Convolutional Network (FCN). To validate its performance, we clustered the railroad data and divided it into three subsets: one subset of original railroad texture images and two subsets of other railroad surface texture images. In the first experiment, we used only the original texture images as the training set for the defect detection model. In the second experiment, we trained on generated images produced by combining the original images with a few railroad textures from the other images. Each defect detection model was evaluated in terms of intersection over union (IoU) and F1-score against the ground truths. As a result, the scores increased by about 10~15% when the generated images were used, compared to using only the original images. This demonstrates that defects can be detected using the existing data and a few different texture images, even for railroad surfaces for which no dedicated training database has been constructed.
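The two evaluation measures used above, IoU and pixel-wise F1 over binary defect masks, can be computed with a minimal sketch (the function name and boolean-mask convention are ours, not the paper's):

```python
import numpy as np

def iou_and_f1(pred, truth):
    """Pixel-wise IoU and F1 between binary defect masks.
    IoU = TP / (TP + FP + FN); F1 = 2*TP / (2*TP + FP + FN)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    union = tp + fp + fn
    iou = tp / union if union else 1.0   # both masks empty: perfect match
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return iou, f1
```

In the study's setting, `pred` would be the FCN's thresholded defect map and `truth` the annotated ground-truth mask for the same surface image.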

Comparison of Two Different Immobilization Devices for Pelvic Region Radiotherapy in Tomotherapy

  • Kim, Dae Gun;Jung, James J;Cho, Kwang Hwan;Ryu, Mi Ryeong;Moon, Seong Kwon;Bae, Sun Hyun;Ahn, Jae Ouk;Jung, Jae Hong
    • Progress in Medical Physics
    • /
    • v.27 no.4
    • /
    • pp.250-257
    • /
    • 2016
  • The purpose of this study was to compare the patient setup errors of two different immobilization devices (Feet Fix: FF and Leg Fix: LF) for pelvic-region radiotherapy in Tomotherapy. Thirty-six patients previously treated with the IMRT technique were selected and divided into two groups based on the applied immobilization device (FF versus LF). We performed a retrospective clinical analysis including the mean, systematic, and random variations and the 3D error, and calculated the planning target volume (PTV) margin. In addition, the rotational error (angles, °) for each patient was analyzed using automatic image registration. The 3D errors for the FF and LF groups were 3.70 mm and 4.26 mm, respectively; the LF group value was 15.1% higher than that of the FF group. The treatment margins in the ML, SI, and AP directions were 5.23 mm (6.08 mm), 4.64 mm (6.29 mm), and 5.83 mm (8.69 mm) in the FF group (and the LF group), respectively; thus, the FF group margins were lower than those of the LF group. The percentage of treatment fractions for the FF group (and the LF group) with errors greater than 5 mm in the ML, SI, and AP directions was 1.7% (3.6%), 3.3% (10.7%), and 5.0% (16.1%), respectively. The two immobilization devices affected the patient setup errors differently because of their different fixation locations on the lower extremity. Radiotherapy of the pelvic region with Tomotherapy should take into account variation in the rotational angles, including the Yaw and Pitch directions, which can cause incorrect setup during treatment. In addition, the choice of an appropriate immobilization device is important because an unalterable rotation angle affects the setup error.
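The abstract reports systematic and random variations and derived PTV margins, but does not state which margin recipe was used. A common choice, assumed here purely for illustration, is the van Herk formula, margin = 2.5Σ + 0.7σ per direction:

```python
def ptv_margin(systematic_sd_mm, random_sd_mm):
    """Van Herk PTV margin recipe: margin = 2.5*Sigma + 0.7*sigma, where
    Sigma is the systematic and sigma the random setup SD (mm) in one
    direction. (Assumed recipe; the paper does not name its formula.)"""
    return 2.5 * systematic_sd_mm + 0.7 * random_sd_mm

# e.g. Sigma = 1.5 mm, sigma = 2.0 mm in one direction
margin = ptv_margin(1.5, 2.0)  # 5.15 mm
```

The margin would be computed separately for the ML, SI, and AP directions from each group's pooled systematic and random SDs.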

THE EFFECTS OF ND:YAG LASER AND IRRIGANTS ON CANAL SEALING ABILITY (근관치료시 Nd:YAG Laser 사용과 세척액에 따른 치근단 폐쇄효과의 비교)

  • Kim, Jin-Woon;Lee, Hee-Ju;Hur, Bock
    • Restorative Dentistry and Endodontics
    • /
    • v.26 no.4
    • /
    • pp.307-315
    • /
    • 2001
  • The application of an Nd:YAG laser and irrigants to the root surface can change its surface configuration. The purpose of this study was to investigate the effects of the Nd:YAG laser and irrigants on the apical seal of obturated canals. In this study, 66 single-rooted teeth were randomly assigned to four groups of 14 teeth each, and 8 teeth served as positive and negative controls. The teeth were thus divided into six groups as follows. Group A: Nd:YAG laser, 5% NaOCl + RC-Prep; Group B: Nd:YAG laser, saline; Group C: 5% NaOCl + RC-Prep; Group D: saline; Group E: positive control; Group F: negative control. The 66 teeth were instrumented using Maillefer ProFile® (Orifice Shapers, .04 taper, .06 taper; Dentsply, Switzerland). Two teeth from each group were selected at random, and the canal wall surfaces were examined under a SEM. Twelve teeth from each group were obturated using the lateral condensation technique. Specimens were immersed in India ink for 7 days, decalcified with 10% nitric acid, dehydrated in 75, 80, 85, 90, 95, and 100% alcohol in order, cleared with methyl salicylate, and then dye penetration was measured with a stereomicroscope (×15 magnification) and Image-Pro Plus. The data were analyzed statistically by one-way ANOVA and Duncan's multiple range test. The results were as follows: 1. The mean leakage was 0.128 ± 0.376 for group A, 0.237 ± 0.325 for group B, 0.397 ± 0.468 for group C, and 0.586 ± 0.402 for group D; there were statistically significant differences between groups A and D and between groups B and D (p<0.05). 2. Group A had better sealing ability than group C, but the difference was not statistically significant (p>0.05). 3. Group B had better sealing ability than group D, and the difference was statistically significant (p<0.05). 4. Group A had better sealing ability than group B, but the difference was not statistically significant (p>0.05). 5. Group C had better sealing ability than group D, but the difference was not statistically significant (p>0.05). 6. Under SEM observation, smear layers were removed in groups A and B; smear layers were partially removed and smear plugs remained in group C; and smear layers were not removed in group D. Notably, melting of the smear layer was observed in group C. 7. These results suggest that the laser has potential to reduce the apical microleakage of obturated canals.
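The one-way ANOVA applied to the leakage scores reduces to a ratio of between-group to within-group mean squares. A minimal sketch of that computation (the numbers in the test are made up, not the study's data):

```python
import numpy as np

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square, as used for the dye-leakage comparison."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)
```

The F value would then be compared against the F distribution with (k-1, n-k) degrees of freedom; the post-hoc pairwise comparisons in the study used Duncan's multiple range test.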


A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for many applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation may be time-intensive is the expense of computing the cost function, which must Fourier-transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is an iterative process including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time Fourier-transforming the encoded parameters on the hologram into the value to be solved; depending on the speed of the computer, it can last up to ten minutes. It would be more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms were employed. The initial population would then contain fewer trial holograms, equivalent to a reduction in the GA's computation time. Accordingly, a hybrid algorithm that utilizes a trained neural network to initiate the GA's procedure is proposed: the initial population contains fewer random holograms and is supplemented by approximately desired holograms. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain approximately desired holograms, in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA is the same except for the modified initial step. Hence, the verified parameter values from Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, aside from the reduced population size. A reconstructed image of 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% were measured; the simulation and experimental results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still allowing holograms of high diffraction efficiency and uniformity to be achieved. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
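The hybrid procedure, seeding part of the GA population with ANN-produced holograms and then running standard selection, crossover, and mutation with an FFT-based fitness, can be sketched as below. The fitness definition, hologram size, and operator details are simplified assumptions, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(hologram, target):
    """Diffraction-efficiency-like score: fraction of reconstructed
    far-field intensity landing on the target spots, for a binary
    phase hologram with values in {0, 1} mapped to phases {0, pi}."""
    field = np.exp(1j * np.pi * hologram)
    recon = np.abs(np.fft.fft2(field)) ** 2
    return recon[target].sum() / recon.sum()

def hybrid_ga(target, seeds, pop_size=30, iters=200, p_mut=0.001,
              shape=(16, 16)):
    """GA whose initial population mixes ANN-like seed holograms with
    random ones, then applies tournament selection (size 2),
    single-point crossover, and bit-flip mutation."""
    pop = list(seeds) + [rng.integers(0, 2, shape)
                         for _ in range(pop_size - len(seeds))]
    for _ in range(iters):
        scores = [fitness(h, target) for h in pop]
        parents = [pop[max(rng.integers(0, pop_size, 2),
                           key=lambda i: scores[i])]
                   for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.integers(1, a.size)
            fa, fb = a.ravel().copy(), b.ravel().copy()
            fa[cut:], fb[cut:] = b.ravel()[cut:], a.ravel()[cut:]
            children += [fa.reshape(shape), fb.reshape(shape)]
        pop = [np.where(rng.random(shape) < p_mut, 1 - c, c)
               for c in children]
    return max(pop, key=lambda h: fitness(h, target))
```

Passing ANN-generated approximations via `seeds` is the hybrid step; with `seeds=[]` this reduces to the classical GA the paper compares against.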


A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNNs (Convolutional Neural Networks), known as an effective solution for recognizing and classifying images or voices, have been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNNs to business problem solving. Specifically, this study proposes applying a CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNNs have strength in interpreting images. Thus, the model proposed in this study adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future financial price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The image in which each graph is drawn is 40 × 40 pixels, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph-image dataset into training and validation sets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the training images. Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layers. In the pooling layer, a 2 × 2 max-pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation function for the convolution and hidden layers was ReLU (Rectified Linear Unit), and that for the output layer was the softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two classes of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on those graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
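Steps 1-3 of CNN-FG (slice the series into 5-day windows, draw each indicator as a colored polyline in a 40 × 40 image, and keep the RGB channels as three matrices) can be sketched as below. The rendering details are assumptions, since the paper does not specify its plotting routine:

```python
import numpy as np

def window_to_rgb_graph(window, colors, size=40):
    """Render a window of indicator series (shape: n_indicators x n_days)
    as a size x size x 3 RGB array, one polyline per indicator in its own
    color -- a simplified stand-in for CNN-FG's graph-to-matrix steps."""
    img = np.ones((size, size, 3), dtype=np.float32)   # white background
    lo, hi = window.min(), window.max()
    scale = (hi - lo) or 1.0                           # avoid div-by-zero
    for series, color in zip(window, colors):
        xs = np.linspace(0, size - 1, num=len(series)).astype(int)
        ys = (size - 1 - (series - lo) / scale * (size - 1)).astype(int)
        # rasterize line segments between consecutive daily points
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            n = max(abs(x1 - x0), abs(y1 - y0)) + 1
            for t in np.linspace(0.0, 1.0, n):
                img[int(round(y0 + (y1 - y0) * t)),
                    int(round(x0 + (x1 - x0) * t))] = color
    return img
```

Each returned array already separates into the three R, G, B matrices the model feeds to the convolution layers (`img[..., 0]`, `img[..., 1]`, `img[..., 2]`).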