Title/Summary/Keyword: Key Extraction

Optimization Study to Minimize Trigonelline and Chlorogenic Acid Loss in the Coffee Decaffeination Process through Supercritical Fluid Extraction

  • Ji Sun Lim;Seung Eun Lee;Seong Jun Kim;Bonggeun Shong;Young-Kwon Park;Hong-shik Lee
    • Clean Technology
    • /
    • v.30 no.3
    • /
    • pp.203-210
    • /
    • 2024
  • This study investigated the optimal conditions for efficiently removing caffeine from green coffee beans using supercritical fluid extraction while preserving the key flavor compounds trigonelline and chlorogenic acid. Experiments under various pretreatment and supercritical fluid extraction conditions showed that the highest caffeine extraction rate, 90.6%, was achieved when green coffee beans with a moisture content of 35% were soaked in hot water; however, this condition also slightly reduced the retention rates of trigonelline and chlorogenic acid. In the extraction-time experiments, the caffeine content decreased as the extraction time increased. Extraction at a temperature of 60 ℃ and a pressure of 40 MPa was the most effective in terms of both caffeine removal and flavor compound preservation. As the amount of added water increased, the caffeine extraction rate rose, but so did the loss of flavor compounds. Increasing the solvent-to-material ratio improved caffeine removal, with the best results at a ratio of 250: a caffeine extraction rate of 91.0% and retention rates of 99.9% for trigonelline and 85.9% for chlorogenic acid.
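
A minimal sketch of the rate arithmetic behind these figures, with hypothetical per-gram contents chosen only to reproduce the reported percentages:

```python
def extraction_rate(initial_mg, remaining_mg):
    """Percentage of a compound removed from the beans."""
    return 100.0 * (initial_mg - remaining_mg) / initial_mg

def retention_rate(initial_mg, remaining_mg):
    """Percentage of a compound still present after extraction."""
    return 100.0 * remaining_mg / initial_mg

# Hypothetical contents (mg per g of green bean); values are illustrative only.
caffeine_before, caffeine_after = 12.0, 1.08          # -> 91.0% extraction
trigonelline_before, trigonelline_after = 10.0, 9.99  # -> 99.9% retention
chlorogenic_before, chlorogenic_after = 70.0, 60.13   # -> 85.9% retention

print(f"caffeine extraction:        {extraction_rate(caffeine_before, caffeine_after):.1f}%")
print(f"trigonelline retention:     {retention_rate(trigonelline_before, trigonelline_after):.1f}%")
print(f"chlorogenic acid retention: {retention_rate(chlorogenic_before, chlorogenic_after):.1f}%")
```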

Fall Detection Based on Human Skeleton Keypoints Using GRU

  • Kang, Yoon-Kyu;Kang, Hee-Yong;Weon, Dal-Soo
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.4
    • /
    • pp.83-92
    • /
    • 2020
  • Recent studies on fall detection focus on analyzing fall motions with recurrent neural networks (RNNs) and use deep learning to detect 2D human poses from monocular color images. In this paper, we investigate an improved method that estimates the positions of the head and shoulder key points, and the acceleration of their position changes, from skeletal key points extracted with PoseNet from images captured by a low-cost 2D RGB camera, in order to increase the accuracy of fall judgment. In particular, we propose a fall detection method based on the characteristics of the post-fall posture, the velocity of change of the body's skeletal key points, and the change in the width-to-height ratio of the body's bounding box. A public data set was used to extract the skeletal features and to train a GRU network. In experiments to find a feature extraction method that achieves high classification accuracy, the proposed method detected falls with a 99.8% success rate, outperforming the conventional approach that uses raw skeletal data.
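
The abstract ships no code; below is a minimal PyTorch sketch of the core idea — a GRU classifying fixed-length sequences of per-frame keypoint features (positions, velocities, bounding-box aspect ratio). The feature dimension and hidden size are assumptions for illustration, not the paper's settings:

```python
import torch
import torch.nn as nn

class FallGRU(nn.Module):
    """GRU classifier over per-frame skeleton features.

    Assumed per-frame feature layout (illustrative, not the paper's exact
    choice): head/shoulder keypoint coordinates, their frame-to-frame
    velocities, and the width-to-height ratio of the body bounding box.
    """
    def __init__(self, feat_dim: int = 9, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # logits for fall / no-fall

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, feat_dim)
        _, h_n = self.gru(x)       # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])  # (batch, 2)

# Toy usage: a batch of 4 clips, 30 frames each.
model = FallGRU()
logits = model(torch.randn(4, 30, 9))
print(logits.shape)  # torch.Size([4, 2])
```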

A Novel Face Recognition Algorithm based on the Deep Convolution Neural Network and Key Points Detection Jointed Local Binary Pattern Methodology

  • Huang, Wen-zhun;Zhang, Shan-wen
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.1
    • /
    • pp.363-372
    • /
    • 2017
  • This paper presents a novel face recognition algorithm based on a deep convolutional neural network and key-point detection combined with a local binary pattern (LBP) methodology, with the goal of improving face recognition accuracy. We first propose a modified key feature point localization method that improves on the traditional localization algorithm and better pre-processes the original face images, combining grey-level and color information into a composite model of local information. We then optimize the multi-layer deep network structure, using the Fisher criterion as a reference to adjust the network structure more precisely. Furthermore, we modify the local binary pattern texture descriptor and combine it with the neural network to overcome the drawback that a deep neural network alone does not capture the local characteristics of a face image. Simulation results demonstrate that the proposed algorithm is more robust and feasible than the other state-of-the-art algorithms compared, and it offers a paradigm for applying deep learning to face recognition that can serve as a basis for further research.
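
The paper's modified LBP operator is not given in the abstract; for reference, here is a minimal NumPy sketch of the classic 3x3 LBP descriptor that such methods build on, which turns each pixel into an 8-bit code of neighbour comparisons:

```python
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """Basic 3x3 local binary pattern: each interior pixel becomes an 8-bit
    code built by thresholding its 8 neighbours against the centre value."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = gray[1:-1, 1:-1]
    # (row, col) offsets of the 8 neighbours, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour >= centre).astype(np.uint8) << bit
    return out

# Toy usage: LBP codes and the 256-bin texture histogram of a random patch.
patch = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
codes = lbp_image(patch)
hist = np.bincount(codes.ravel(), minlength=256)
print(codes.shape, hist.sum())  # (14, 14) 196
```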

Nonlinear Diffusion and Structure Tensor Based Segmentation of Valid Measurement Region from Interference Fringe Patterns on Gear Systems

  • Wang, Xian;Fang, Suping;Zhu, Xindong;Ji, Jing;Yang, Pengcheng;Komori, Masaharu;Kubo, Aizoh
    • Current Optics and Photonics
    • /
    • v.1 no.6
    • /
    • pp.587-597
    • /
    • 2017
  • Extracting the valid measurement region from an interference fringe pattern is a significant step when measuring gear tooth flank form deviation with grazing-incidence interferometry, and it affects the measurement accuracy. To overcome the drawback of the conventional method, in which an image of the object itself must also be captured, an improved segmentation approach is proposed in this paper. The fringe pattern is first smoothed by nonlinear diffusion, and its features are then extracted with the structure tensor; these features are incorporated into the vector-valued Chan-Vese model to segment the valid measurement region. The method is verified on a variety of interference fringe patterns, and the segmentation results demonstrate its feasibility and accuracy.
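
The structure-tensor feature is the step that translates most directly into code. A minimal NumPy/SciPy sketch of that step alone (the nonlinear diffusion and Chan-Vese stages are omitted), with the standard coherence measure that separates oriented fringe regions from flat background:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor(img: np.ndarray, sigma: float = 2.0):
    """Per-pixel 2x2 structure tensor: outer products of the image gradient,
    smoothed over a Gaussian window of scale sigma."""
    g = img.astype(float)
    gx = sobel(g, axis=1)
    gy = sobel(g, axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    return jxx, jxy, jyy

# Coherence in [0, 1]: near 1 inside oriented fringes, near 0 in flat areas.
x = np.linspace(0, 20 * np.pi, 128)
fringes = np.sin(x)[None, :] * np.ones((128, 1))   # synthetic vertical fringes
jxx, jxy, jyy = structure_tensor(fringes)
coherence = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2) / (jxx + jyy + 1e-12)
print(float(coherence.mean()))  # close to 1 for a pure fringe pattern
```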

Automatic Single Document Text Summarization Using Key Concepts in Documents

  • Sarkar, Kamal
    • Journal of Information Processing Systems
    • /
    • v.9 no.4
    • /
    • pp.602-620
    • /
    • 2013
  • Many previous studies on extractive text summarization treat a subset of the words in a document as keywords and use a sentence ranking function that scores sentences by their similarity to the list of extracted keywords. The use of key concepts for automatic text summarization, by contrast, has received less attention in the summarization literature. The proposed work uses key concepts identified in a document to create its summary. We view the single-word and multi-word keyphrases of a document as the important concepts that the document elaborates on. Our work is based on the hypothesis that an extract is an elaboration of the important concepts to some permissible extent, controlled by the given summary length restriction. In other words, our method chooses the subset of sentences from a document that maximizes coverage of the important concepts in the final summary. To allow diverse information in the summary, we select, for each important concept, the one sentence that best elaborates that concept. Accordingly, the most important concept contributes to the summary first, then the second most important concept, and so on. To prove the effectiveness of the proposed summarization method, we compared it to several state-of-the-art summarization systems, and the results show that it outperforms the systems to which it was compared.
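
A minimal Python sketch of this greedy, concept-driven selection — a simplified reading of the idea, not the paper's exact algorithm; the elaboration score here is a crude concept-frequency count:

```python
def summarize(sentences, concepts, max_sentences=3):
    """For each concept, in decreasing order of importance, pick the
    not-yet-chosen sentence that best elaborates it, until the length
    budget is spent."""
    def score(sentence, concept):
        return sentence.lower().count(concept.lower())

    chosen = []
    for concept in concepts:                 # assumed sorted by importance
        if len(chosen) >= max_sentences:
            break
        candidates = [s for s in sentences if s not in chosen]
        best = max(candidates, key=lambda s: score(s, concept), default=None)
        if best is not None and score(best, concept) > 0:
            chosen.append(best)
    # Restore document order for readability.
    return [s for s in sentences if s in chosen]

doc = ["Key extraction locates important phrases.",
       "Summaries should cover the document's key concepts.",
       "The weather was pleasant that day.",
       "Concept coverage is maximized under a length limit."]
print(summarize(doc, ["key concepts", "concept", "key"]))
```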

Robust Transmission Waveform Design for Distributed Multiple-Radar Systems Based on Low Probability of Intercept

  • Shi, Chenguang;Wang, Fei;Sellathurai, Mathini;Zhou, Jianjiang;Zhang, Huan
    • ETRI Journal
    • /
    • v.38 no.1
    • /
    • pp.70-80
    • /
    • 2016
  • This paper addresses the problem of robust waveform design for distributed multiple-radar systems (DMRSs) based on low probability of intercept (LPI), where the signal-to-interference-plus-noise ratio (SINR) and mutual information (MI) are used as the metrics for target detection and information extraction, respectively. Recognizing that a precise characterization of the target spectrum is impossible to obtain in practice, we assume that the target spectrum lies in an uncertainty class bounded by known upper and lower bounds. Based on this model, robust waveform design approaches for the DMRS are developed under LPI-SINR and LPI-MI criteria, minimizing the total transmitted energy for a given system performance. Numerical results show the effectiveness of the proposed approaches.
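
To make the worst-case logic concrete, here is a toy NumPy sketch under strong simplifying assumptions (a per-bin linear SINR model and a flat allocation; not the paper's signal model): because SINR grows with the target response, the worst case over the uncertainty class sits at the lower bound, and the energy is scaled until the SINR target is guaranteed there:

```python
import numpy as np

def worst_case_sinr(energy, h_low, noise_psd):
    """Worst-case SINR when the per-bin target spectrum magnitude is only
    known to lie in [h_low, h_high]: attained at the lower bound, since
    SINR increases monotonically with the target response."""
    return float(np.sum(energy * h_low**2 / noise_psd))

def min_energy_for_sinr(h_low, noise_psd, sinr_target):
    """Scale a flat allocation until the worst-case SINR target is met,
    i.e., minimize energy subject to a guaranteed detection constraint."""
    flat = np.ones_like(h_low)
    return flat * (sinr_target / worst_case_sinr(flat, h_low, noise_psd))

h_low = np.array([0.5, 1.0, 0.2])    # lower bound of the uncertainty class
noise = np.ones(3)
e = min_energy_for_sinr(h_low, noise, sinr_target=10.0)
print(e.sum(), worst_case_sinr(e, h_low, noise))  # total energy, guaranteed SINR
```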

Discussion on the Technology Route for Land Degradation Monitoring and Assessment based on 3S Technique

  • Jing, Wang;Ting, He;Zhang, Ji-Xian;Li, Hai-Tao
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.757-765
    • /
    • 2002
  • This paper analyzes three theories of land degradation assessment and the international and domestic methods for land degradation monitoring and assessment. Guided by the notion of absolute degradation, it proposes a technological framework for monitoring and appraising cultivated-land degradation based on the 3S technique (remote sensing, GIS, and GPS). By applying the 3S technique and synthetically analyzing the natural, environmental, social, and economic factors that influence land utilization and degradation, one can set up an indicator system for cultivated-land degradation monitoring and assessment; devise degradation-information extraction methods based on the 3S technique; create quantitative assessment models and methods for land degradation; quantitatively analyze the ecological-environment response of land use and degradation; and propose measures, policies, and suggestions for solving the land degradation problem from the viewpoint of land utilization.

Shuffling of Elliptic Curve Cryptography Key on Device Payment

  • Kennedy, Chinyere Grace;Cho, Dongsub
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.4
    • /
    • pp.463-471
    • /
    • 2019
  • Mobile technology, particularly smartphone applications for ticketing, access control, and payments, continues to grow. Elliptic Curve Cryptography (ECC)-based systems have also become widely available in the market, offering convenient services by bringing smartphones into proximity with ECC-enabled objects. When a system user attempts to establish a connection, the attestation identity key (AIK) sends hashes to a server, which then verifies the values. ECC can be used with various operating systems in conjunction with other technologies such as biometric verification systems, smart cards, anti-virus programs, and firewalls. Using elliptic-curve cryptography ensures efficient verification and signing of security-status verification reports, which allows the system to take advantage of trusted computing technologies. This paper proposes a device payment method based on ECC and shuffling built on distributed key exchange, focusing on a secure and efficient implementation of ECC in a payment device. The approach is secure against intruders and prevents the unauthorized extraction of information from the communication: plaintext is converted to ASCII values that map to points on the curve, after which the data are shuffled during encryption and decryption using a secret key shared by the sender and the receiver.
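
The abstract only outlines the encode-then-shuffle flow. A deliberately toy Python sketch of that flow follows; the tiny curve, the encoding scan, and the seeded permutation standing in for the shared key are all illustrative assumptions, not the paper's scheme, and nothing here is cryptographically secure:

```python
import random

# Toy short-Weierstrass curve y^2 = x^3 + Ax + B over F_P.
# Real systems use standardized curves such as secp256r1.
P, A, B = 97, 2, 3

def point_at_or_after(x):
    """Scan forward from x until x^3 + Ax + B is a quadratic residue and
    return a curve point. A real encoder must be injective and invertible;
    this stand-in is lossy and for illustration only."""
    while True:
        rhs = (x**3 + A * x + B) % P
        for y in range(P):
            if (y * y) % P == rhs:
                return (x, y)
        x = (x + 1) % P

def encode(message, key=1234):
    """Map each character's ASCII value to a curve point, then shuffle the
    point sequence with a key-seeded permutation."""
    points = [point_at_or_after(ord(c) % P) for c in message]
    order = list(range(len(points)))
    random.Random(key).shuffle(order)   # key stands in for the shared secret
    return [points[i] for i in order], order

def unshuffle(shuffled, order):
    inv = [0] * len(order)
    for dst, src in enumerate(order):
        inv[src] = dst
    return [shuffled[i] for i in inv]

pts, perm = encode("pay")
print(unshuffle(pts, perm))  # points back in original message order
```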

Implementation and characterization of flash-based hardware security primitives for cryptographic key generation

  • Mi-Kyung Oh;Sangjae Lee;Yousung Kang;Dooho Choi
    • ETRI Journal
    • /
    • v.45 no.2
    • /
    • pp.346-357
    • /
    • 2023
  • Hardware security primitives, also known as physical unclonable functions (PUFs), extract randomness unique to a specific piece of hardware. This paper proposes a novel hardware security primitive that uses a commercial off-the-shelf flash memory chip, an intrinsic part of most commercial Internet of Things (IoT) devices. First, we define a hardware security source model that describes a hardware-based fixed random bit generator for security applications such as cryptographic key generation. We then propose a hardware security primitive based on flash memory that exploits the variability of tunneling electrons in the floating gate. To meet the requirements for robustness against environmental conditions, timing variations, and random errors, we developed an adaptive extraction algorithm for the flash PUF. Experimental results show that the proposed flash PUF successfully generates a fixed random response, with a uniqueness of 49.1%, steadiness of 3.8%, uniformity of 50.2%, and min-entropy per bit of 0.87. Our approach can therefore be applied reliably to security applications with high-entropy requirements, such as cryptographic key generation for IoT devices.
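
The quality figures quoted above follow standard PUF evaluation metrics. A small NumPy sketch of how such metrics are typically computed from raw responses (synthetic data here; the paper's exact steadiness definition may differ):

```python
import numpy as np

def uniformity(resp):
    """Percentage of 1s in one device's response (ideal: 50%)."""
    return 100.0 * resp.mean()

def uniqueness(responses):
    """Mean pairwise Hamming distance between devices, in % (ideal: 50%)."""
    n = len(responses)
    dists = [100.0 * np.mean(responses[i] != responses[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

def min_entropy_per_bit(reads):
    """-log2(p_max) per bit position over repeated reads, averaged; p_max is
    the empirical probability of the more frequent value (ideal: 1.0)."""
    p1 = reads.mean(axis=0)
    return float(np.mean(-np.log2(np.maximum(p1, 1.0 - p1))))

rng = np.random.default_rng(0)
chips = rng.integers(0, 2, size=(8, 256))      # 8 devices, 256-bit responses
reads = rng.integers(0, 2, size=(100, 256))    # repeated reads of an ideal source
print(uniformity(chips[0]), uniqueness(chips), min_entropy_per_bit(reads))
```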

Implementation of Melody Generation Model Through Weight Adaptation of Music Information Based on Music Transformer

  • Seunga Cho;Jaeho Lee
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.5
    • /
    • pp.217-223
    • /
    • 2023
  • In this paper, we propose a new model for the conditional generation of music that considers key and rhythm, two fundamental elements of music. MIDI sheet music is converted into WAV format and then transformed into a Mel spectrogram using the short-time Fourier transform (STFT). From this representation, key and rhythm are classified by two convolutional neural networks (CNNs), and the resulting information is fed into the Music Transformer: the key and rhythm details are combined with the embedding vectors of the MIDI events by multiplying them with separately tuned weights. Several experiments were conducted, including a procedure for determining the optimal weights. This research is a new effort to integrate essential musical elements into music generation; it explains the detailed structure and operating principles of the model and verifies its effect and potential through experiments. In this study, rhythm classification reached an accuracy of 94.7%, key classification reached 92.1%, and the negative log-likelihood obtained with the chosen embedding-vector weights was 3.01.
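
A minimal PyTorch sketch of the weighted-embedding conditioning described above — a simplified reading of the idea, with the vocabulary size, model width, and number of key/rhythm classes all assumed for illustration:

```python
import torch
import torch.nn as nn

class ConditionedEmbedding(nn.Module):
    """Scale MIDI-event embeddings by learned key- and rhythm-specific
    weights before they enter the Transformer stack."""
    def __init__(self, vocab_size=388, d_model=512, n_keys=24, n_rhythms=4):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.key_w = nn.Embedding(n_keys, d_model)        # per-key weights
        self.rhythm_w = nn.Embedding(n_rhythms, d_model)  # per-rhythm weights

    def forward(self, events, key_id, rhythm_id):
        # events: (batch, seq); key_id, rhythm_id: (batch,)
        x = self.tok(events)                               # (batch, seq, d_model)
        w = self.key_w(key_id) * self.rhythm_w(rhythm_id)  # (batch, d_model)
        return x * w.unsqueeze(1)  # broadcast the weights over the sequence

emb = ConditionedEmbedding()
out = emb(torch.randint(0, 388, (2, 128)), torch.tensor([0, 5]), torch.tensor([1, 3]))
print(out.shape)  # torch.Size([2, 128, 512])
```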