• Title/Summary/Keyword: color software

Search Result 446

Development of Assistive Software for color blind to Electronic Documents (전자문서용 색각 장애 보정 소프트웨어 개발)

  • Jang, Young-Gun
    • The KIPS Transactions:PartB
    • /
    • v.10B no.5
    • /
    • pp.535-542
    • /
    • 2003
  • This study concerns an assistive technology that reduces color-blind users' confusion when they access electronic documents containing color objects on their computers. I restrict the assistive technology to the Windows operating system in 256-color mode and implement it so as to minimize the color distortion that arises in multi-window environments from the color approximation process. As a basic palette, I use the 216-color web-safe palette that Christine proposed as a standard for the color blind, expand it to 256 colors so that it applies to all computer displays running Microsoft Windows, and implement it as a Windows application. To test its effectiveness, I use a simulator for dichromats; the test results show that the developed color vision deficiency correction software effectively reduces confusion. It is most effective to apply the implemented software in both the design and the client process for electronic documents.
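The palette-substitution step described above can be sketched as follows. This is a minimal illustration, assuming a simple nearest-level mapping onto the 216-color web-safe palette; the function name and per-channel snapping are illustrative, not the paper's actual implementation.

```python
# The 216-color web-safe palette uses every combination of these six
# channel levels. Mapping a color means snapping each channel to the
# closest level (an assumed, simplified color-approximation rule).
WEB_SAFE_LEVELS = [0, 51, 102, 153, 204, 255]

def nearest_web_safe(rgb):
    """Snap each RGB channel to the closest web-safe level."""
    return tuple(min(WEB_SAFE_LEVELS, key=lambda v: abs(v - c)) for c in rgb)

print(nearest_web_safe((30, 130, 250)))  # → (51, 153, 255)
```

Expanding such a palette to 256 entries, as the paper describes, would add 40 extra system/application colors on top of these 216.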

Comparison of instrumental methods for color change assessment of Giomer resins

  • Luiza de Almeida Queiroz Ferreira;Rogeli Tiburcio Ribeiro da Cunha Peixoto ;Claudia Silami de Magalhaes;Tassiana Melo Sa;Monica Yamauti ;Francisca Daniele Moreira Jardilino
    • Restorative Dentistry and Endodontics
    • /
    • v.47 no.1
    • /
    • pp.8.1-8.9
    • /
    • 2022
  • Objectives: The aim of this study was to compare the color change of the Giomer resin composite (Beautifil-Bulk) assessed using photographs obtained with a smartphone (iPhone 6S) and Adobe Photoshop software (digital method) versus the spectrophotometric method (Vita Easyshade) after immersion in different pigment solutions. Materials and Methods: Twenty resin composite samples with a diameter of 15.0 mm and a thickness of 1.0 mm were fabricated in A2 color (n = 5). Photographs and initial color readings were taken with the smartphone and spectrophotometer, respectively. Samples were then randomly divided and subjected to cycles of immersion in distilled water (control), açai, Coke, and tomato sauce, 3 times a day for 20 minutes, over 7 days. Afterwards, new photographs and color readings were taken. Results: The analysis (2-way analysis of variance, Holm-Sidak, α = 0.05) demonstrated no statistically significant difference between the methods in all groups. Similar color changes were observed for all pigment solutions when using the spectrophotometric method. For the digital method, all color changes were clinically unacceptable, with distilled water and tomato sauce similar to each other and statistically different (p < 0.005) from Coke and açai. Conclusions: Only the tomato sauce produced a color change above the acceptability threshold with both methods of color assessment. The spectrophotometric and digital methods produce different patterns of color change. According to our results, the spectrophotometric method is the more suitable choice for color change assessment.
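Both assessment routes ultimately reduce to a CIELAB color difference. A minimal sketch of the standard ΔE*ab computation follows; the sample L*a*b* readings are hypothetical, not values from the study.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIELAB color difference: ΔE*ab = sqrt(ΔL*² + Δa*² + Δb*²)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical before/after-immersion readings for one sample.
before = (70.0, 2.0, 18.0)
after = (66.0, 5.0, 22.0)
print(round(delta_e_ab(before, after), 2))  # → 6.4
```

The resulting ΔE would then be compared against a clinical acceptability threshold, as in the study's analysis.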

Effective Detection of Target Region Using a Machine Learning Algorithm (기계 학습 알고리즘을 이용한 효과적인 대상 영역 분할)

  • Jang, Seok-Woo;Lee, Gyungju;Jung, Myunghee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.5
    • /
    • pp.697-704
    • /
    • 2018
  • Since the face in image content constitutes personal information that can distinguish a specific person from others, it is important to accurately detect faces that are not hidden in an image. In this paper, we propose a method to accurately detect a face in input images using a deep learning algorithm, one of the machine learning methods. In the proposed method, the image input in the red-green-blue (RGB) color model is first converted to the luminance, blue-chroma, red-chroma ($YC_bC_r$) color model; then, non-skin regions are removed using the learned skin color model, and only the skin regions are segmented. A CNN model-based deep learning algorithm is then applied to robustly detect only the face region in the input image. Experimental results show that the proposed method segments facial regions from input images more efficiently. The proposed face-region detection method is expected to be useful in practical applications related to multimedia and shape recognition.
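The color-space conversion step can be sketched as below, using the standard ITU-R BT.601 RGB→YCbCr transform. The fixed Cb/Cr window in `is_skin` is a commonly cited illustrative bound, not the learned skin model from the paper.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB → YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Illustrative fixed Cb/Cr window for skin pixels (assumed bounds;
    the paper learns its skin color model from data instead)."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

print(is_skin(220, 170, 140))  # → True (a typical skin tone)
print(is_skin(0, 255, 0))      # → False (pure green)
```

Pixels passing such a chroma test form the candidate skin regions handed to the CNN-based face detector.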

Smart Emotional lighting control method using a wheel interface of the smart watch (스마트워치의 휠 인터페이스를 이용한 스마트 감성 조명 제어)

  • Kim, Bo-Ram;Kim, Dong-Keun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.8
    • /
    • pp.1503-1510
    • /
    • 2016
  • In this study, we implemented an emotional lighting control system using the wheel interface built into smart-watch devices. Most previous lighting control systems have adopted direct switches or smartphone applications for expressing individual emotion through lighting. However, for controlling color properties, those systems have complicated user interfaces and are limited in the range of color spectrums they can present. We therefore provide user-friendly interfaces and functions for controlling the lighting system's properties, such as color, tone, color temperature, brightness, and saturation, in detail with the wheel interface built into the smart watch. The proposed system lets the user select emotional status information to drive the emotional lighting; the selectable emotional states, such as "stable", "surprise", "tired", and "angry", number 11 in all. In addition, the designed system processes user information such as emotional status, local time, and location.
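One natural way to bind a one-dimensional wheel to a color property is to map the rotation angle onto the hue circle. The sketch below is an assumed illustration of that idea, not the paper's control scheme; brightness and saturation are shown as the other adjustable properties.

```python
import colorsys

def wheel_to_rgb(angle_deg, brightness=1.0, saturation=1.0):
    """Map a watch-wheel rotation (0-360°) onto the HSV hue circle and
    return an 8-bit RGB triple for the lighting fixture."""
    h = (angle_deg % 360) / 360.0
    r, g, b = colorsys.hsv_to_rgb(h, saturation, brightness)
    return tuple(round(c * 255) for c in (r, g, b))

print(wheel_to_rgb(0))    # → (255, 0, 0)  red
print(wheel_to_rgb(120))  # → (0, 255, 0)  green
```

A second wheel mode could reuse the same angle for brightness or color temperature, which is one way to expose several properties through a single physical control.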

Effect of internal structures on the accuracy of 3D printed full-arch dentition preparation models in different printing systems

  • Teng Ma;Tiwu Peng;Yang Lin;Mindi Zhang;Guanghui Ren
    • The Journal of Advanced Prosthodontics
    • /
    • v.15 no.3
    • /
    • pp.145-154
    • /
    • 2023
  • PURPOSE. The objective of this study was to investigate how internal structures influence the overall and marginal accuracy of full-arch preparations fabricated through additive manufacturing in different printing systems. MATERIALS AND METHODS. A full-arch preparation digital model was set up with three internal designs: solid, hollow, and grid. These were printed on three different resin printers, with nine models in each group. After scanning, each scan was imported into the 3D data processing software together with the master cast, aligned and trimmed, and then loaded into the 3D data analysis software to compare overall and marginal deviations, with results expressed as root mean square (RMS) values and color maps. To evaluate the trueness of the resin models, the test data were compared against the reference data; precision was evaluated by comparing the test data sets against each other. Color maps were inspected for qualitative analysis. Data were statistically analyzed by one-way analysis of variance, with the Bonferroni method used for post hoc comparison (α = .05). RESULTS. The influence of different internal structures on the accuracy of 3D printed resin models varied significantly (P < .05). Solid and grid models showed better accuracy, while the hollow model exhibited poor accuracy. The color maps show that the resin models tend to shrink inwards. CONCLUSION. The internal structure design influences the accuracy of the 3D printed model, and the effect varies across printing systems. Irrespective of the printing system, the printing accuracy of the hollow model was worse than that of the solid and grid models.
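The RMS value used to summarize deviation in such analyses is straightforward; a minimal sketch follows, where the per-point deviations (in mm) are hypothetical, not the study's measurements.

```python
import math

def rms_deviation(deviations):
    """Root mean square of per-point deviations between a test scan
    and the reference model (signed values; sign cancels in squaring)."""
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))

# Hypothetical signed point-to-surface deviations in mm.
print(round(rms_deviation([0.03, -0.05, 0.04, -0.02]), 4))  # → 0.0367
```

Trueness uses test-vs-reference deviations; precision uses test-vs-test deviations, but the same RMS summary applies to both.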

Deep Learning based Color Restoration of Corrupted Black and White Facial Photos (딥러닝 기반 손상된 흑백 얼굴 사진 컬러 복원)

  • Woo, Shin Jae;Kim, Jong-Hyun;Lee, Jung;Song, Chang-Germ;Kim, Sun-Jeong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.2
    • /
    • pp.1-9
    • /
    • 2018
  • In this paper, we propose a method to restore corrupted black-and-white facial images to color. Previous studies have shown that when colorizing damaged black-and-white photographs, such as old ID photos, the area around the damage is often colored incorrectly. To solve this problem, this paper proposes restoring the damaged area of the input photo first and then performing colorization on the result. The proposed method consists of two steps: restoration based on a BEGAN (Boundary Equilibrium Generative Adversarial Network) model and coloring based on a CNN (Convolutional Neural Network). Our method uses the BEGAN model, which enables clearer, higher-resolution image restoration than existing methods that use the DCGAN (Deep Convolutional Generative Adversarial Network) model, and performs colorization on the restored black-and-white image. Experiments on various types of facial images and masks confirm that our method produces realistic color restoration in many cases where previous studies do not.
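The two-stage ordering is the core design decision, and it can be sketched as a simple composition. The stubs below only stand in for the networks (a BEGAN generator and a colorization CNN in the paper); the fill value and list-based "image" are purely illustrative.

```python
def restore(gray, mask):
    """Stub for stage 1: fill masked (damaged) pixels.
    In the paper this is a BEGAN generator; here we just fill with 128."""
    return [g if m == 0 else 128 for g, m in zip(gray, mask)]

def colorize(gray):
    """Stub for stage 2: lift grayscale to RGB.
    In the paper this is a colorization CNN; here we replicate channels."""
    return [(g, g, g) for g in gray]

def restore_then_colorize(gray, mask):
    # Restoring before colorizing is the paper's key ordering: it keeps
    # the damaged region from corrupting colors around it.
    return colorize(restore(gray, mask))

print(restore_then_colorize([10, 0, 200], [0, 1, 0]))
```

Running colorization on the already-restored image is what avoids the mis-colored halos around damaged regions that the earlier, colorize-first approaches exhibited.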

Texture-based Hatching for Color Image and Video

  • Yang, Hee-Kyung;Min, Kyung-Ha
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.4
    • /
    • pp.763-781
    • /
    • 2011
  • We present a texture-based hatching technique for color images and video. Whereas existing approaches produce monochrome hatching effects on triangular mesh models by applying strokes of uniform size, our scheme produces color hatching effects from photographs and video using strokes with a range of sizes. We use a Delaunay triangulation to create a mesh of triangles with sizes that reflect the structure of the input image. At each vertex of this triangulation, the flow of the image is analyzed and a hatching texture is then created with the same alignment, based on real pencil strokes. This texture is given a modified version of a color sampled from the image, and it is then used to fill all the triangles adjoining the vertex. The three hatching textures that accumulate in each triangle are averaged, and the result of this process across all the triangles forms the output image. We can also add a paper texture effect and enhance feature lines in the image. Our algorithm can be applied to video as well. The results are visually pleasing hatching effects similar to those seen in color pencil drawings and oil paintings.
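The per-vertex flow analysis typically aligns strokes perpendicular to the local image gradient, so they follow edges rather than cross them. The sketch below illustrates that convention under the assumption of a precomputed gradient; it is not the paper's exact flow estimator.

```python
import math

def stroke_angle(gx, gy):
    """Orientation (degrees, in [0, 180)) for a hatching stroke, assumed
    perpendicular to the local image gradient (gx, gy)."""
    return (math.degrees(math.atan2(gy, gx)) + 90.0) % 180.0

# A purely horizontal gradient (vertical edge) → vertical strokes.
print(stroke_angle(1.0, 0.0))  # → 90.0
# A purely vertical gradient (horizontal edge) → horizontal strokes.
print(stroke_angle(0.0, 1.0))  # → 0.0
```

Each vertex's hatching texture would then be rotated to this angle before being tinted with the locally sampled color and blended into the adjoining triangles.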

Application of Near Infrared Spectroscopy for Nondestructive Evaluation of Color Degree of Apple Fruit (사과 착색도의 비파괴측정을 위한 근적외분광분석법의 응용)

  • Sohn, Mi-Ryeong;Cho, Rae-Kwang
    • Food Science and Preservation
    • /
    • v.7 no.2
    • /
    • pp.155-159
    • /
    • 2000
  • Apple fruit grading depends largely on skin color degree. This work reports on the possibility of nondestructive assessment of apple fruit color using near infrared (NIR) reflectance spectroscopy. NIR spectra of apple fruit were collected over the wavelength range 1100~2500 nm using an InfraAlyzer 500C (Bran+Luebbe). Calibration, calculated by the standard analysis procedures of MLR (multiple linear regression) with stepwise selection, was performed by allowing the IDAS software to select the best regression equations from the raw sample spectra. Color degree of apple skin was expressed by two factors: anthocyanin content obtained by purification and the a-value measured by colorimeter. A total of 90 fruits was split into a calibration set (54) and a prediction set (36). For determining the a-value, the calibration model composed of 6 wavelengths (2076, 2120, 2276, 2488, 2072, and 1492 nm) provided the highest accuracy: the correlation coefficient was 0.913 and the standard error of prediction was 4.94. However, the accuracy of prediction for anthocyanin content was rather low (R of 0.761).
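Applying an MLR calibration of the kind described amounts to a linear combination of reflectance at the selected wavelengths. The sketch below uses the paper's six wavelengths, but the coefficients, intercept, and spectrum values are placeholders, not the fitted model.

```python
# The six wavelengths the stepwise MLR selected for the a-value model.
WAVELENGTHS = [2076, 2120, 2276, 2488, 2072, 1492]  # nm

def predict_a_value(reflectance, coefs, intercept):
    """Apply an MLR calibration: a-value = b0 + Σ b_i * R(λ_i).
    reflectance: {wavelength_nm: value}; coefs aligned with WAVELENGTHS."""
    return intercept + sum(c * reflectance[w] for c, w in zip(coefs, WAVELENGTHS))

# Hypothetical spectrum and coefficients, for illustration only.
spectrum = {2076: 0.41, 2120: 0.38, 2276: 0.35, 2488: 0.30, 2072: 0.42, 1492: 0.55}
print(round(predict_a_value(spectrum, [10, -8, 5, 3, -4, 6], 2.0), 3))
```

The reported R of 0.913 and SEP of 4.94 describe how well such a fitted equation tracked the colorimeter a-values on the 36-fruit prediction set.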


Genome re-sequencing to identify single nucleotide polymorphism markers for muscle color traits in broiler chickens

  • Kong, H.R.;Anthony, N.B.;Rowland, K.C.;Khatri, B.;Kong, B.C.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.31 no.1
    • /
    • pp.13-18
    • /
    • 2018
  • Objective: Meat quality, including muscle color, is an important trait in chickens, and continuous selective pressure for fast growth and high yield has negatively impacted this trait. This study was conducted to investigate genetic variations responsible for regulating muscle color. Methods: Whole genome re-sequencing analysis using the Illumina HiSeq paired-end read method was performed with pooled DNA samples isolated from two broiler chicken lines divergently selected for muscle color (high muscle color [HMC] and low muscle color [LMC]) along with their random bred control line (RAN). Sequencing reads were aligned to the chicken reference genome sequence for Red Jungle Fowl (Galgal4) using reference-based genome alignment with the NGen program of the Lasergene software package. Potential causal single nucleotide polymorphisms (SNPs) showing non-synonymous changes in coding DNA sequence regions were chosen in each line. Bioinformatic analyses to interpret the functions of genes retaining SNPs were performed using Ingenuity Pathway Analysis (IPA). Results: Millions of SNPs were identified; in total, 2,884 SNPs (1,307 for HMC and 1,577 for LMC) showing >75% SNP rates could induce non-synonymous mutations in amino acid sequences. Of those, restricting to SNPs with over 10 read depth yielded 15 more reliable SNPs: 1 for HMC and 14 for LMC. The IPA analyses suggested that meat color in chickens is associated with chromosomal DNA stability, ubiquitylation (UBC) function, and the quality and quantity of various collagen subtypes. Conclusion: In this study, various potential genetic markers showing amino acid changes were identified in the divergent meat color lines; these can be used in further animal selection strategies.
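The filtering cascade described (non-synonymous, >75% SNP rate, then >10 read depth) can be sketched as below. The record layout and field names are hypothetical, chosen only to illustrate the criteria.

```python
def filter_snps(snps, min_rate=0.75, min_depth=10):
    """Keep non-synonymous SNPs exceeding both the SNP-rate and
    read-depth thresholds (the study's reliability criteria)."""
    return [s for s in snps
            if s["nonsyn"] and s["rate"] > min_rate and s["depth"] > min_depth]

# Hypothetical SNP calls, for illustration only.
calls = [
    {"id": "snp1", "nonsyn": True,  "rate": 0.90, "depth": 14},
    {"id": "snp2", "nonsyn": True,  "rate": 0.80, "depth": 6},   # too shallow
    {"id": "snp3", "nonsyn": False, "rate": 0.95, "depth": 30},  # synonymous
]
print([s["id"] for s in filter_snps(calls)])  # → ['snp1']
```

In the study, applying the depth filter on top of the rate filter reduced 2,884 candidate SNPs to the 15 most reliable ones.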

Emotion Image Retrieval through Query Emotion Descriptor and Relevance Feedback (질의 감성 표시자와 유사도 피드백을 이용한 감성 영상 검색)

  • Yoo Hun-Woo
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.3
    • /
    • pp.141-152
    • /
    • 2005
  • A new emotion-based image retrieval method is proposed in this paper. Query emotion descriptors, called the query color code and query gray code, are designed from human evaluations of 13 emotions ('like', 'beautiful', 'natural', 'dynamic', 'warm', 'gay', 'cheerful', 'unstable', 'light', 'strong', 'gaudy', 'hard', 'heavy') when 30 random patterns with different colors, intensities, and dot sizes are presented. For emotion image retrieval, once a query emotion is selected, the associated query color code and query gray code are selected. Next, a DB color code and DB gray code that capture color, intensity, and dot size are extracted from each database image, and a matching process between the two color codes and between the two gray codes is performed to retrieve relevant emotion images. A new relevance feedback method is also proposed. The method incorporates human intention in the retrieval process by dynamically updating the weights of the query and DB color codes and the weights within the query color code. In experiments over 450 images, the number of positive images was higher than that of negative images at the initial query and increased with relevance feedback.
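A simple relevance-feedback weight update in the spirit of the method described can be sketched as follows; the additive rule, learning rate, and renormalization are illustrative assumptions, not the paper's exact update.

```python
def update_weights(weights, feedback, lr=0.1):
    """Raise the weight of code components that agreed with images the
    user marked relevant (+1) and lower those that disagreed (-1),
    then renormalize so the weights stay a convex combination."""
    new = [max(0.0, w + lr * f) for w, f in zip(weights, feedback)]
    total = sum(new) or 1.0
    return [w / total for w in new]

# Two code components; the first matched a relevant image, the second did not.
print([round(w, 3) for w in update_weights([0.5, 0.5], [+1, -1])])  # → [0.6, 0.4]
```

Iterating this update across feedback rounds is what lets the retrieved set drift toward the user's intended emotion, matching the reported increase in positive images per round.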