• Title/Summary/Keyword: ADAM15

Search results: 35

Initial Ignition Time and Calorific Value Enhancement of Briquette with Added Pine Resin

  • Gustan PARI;Lisna EFIYANTI;Saptadi DARMAWAN;Nur Adi SAPUTRA;Djeni HENDRA;Joseph ADAM;Alfred INKRIWANG;Rachman EFFENDI
    • Journal of the Korean Wood Science and Technology
    • /
    • v.51 no.3
    • /
    • pp.207-221
    • /
    • 2023
  • The increasing demand for clean energy requires considerable effort to find alternative energy sources, such as briquettes. This research aims to develop a charcoal briquette with added pine resin (API) that has an excellent combustion speed and a distinctive aroma. The briquettes are composed of charcoal, pine resin (concentration: 0%-30%), and starch (up to 7%). They are produced in several stages, including conventional pyrolysis of coconut shell to obtain charcoal as the briquette precursor. Briquette compaction is conducted by mixing and densifying the charcoal, pine resin, and starch in a hydraulic press for 3 min; the press has a total surface area of 57.7 cm² and a diameter of 3.5 cm. The briquettes are dried at temperatures up to 70℃ for 24 h. The results show that the briquettes have a thickness of up to 2 cm and a diameter of 3.5 cm; moisture of 2.18%-2.62%; ash of 11.61%-13.98%; volatile matter of 27.15%-51.74%; and fixed carbon content of 40.24%-59.46%. The compressive strength of the briquettes is 186-540 kg/cm², their calorific value is 5,338-6,120 kcal/kg, and they ignite quickly, within 0.15-0.40 s. Methoxy naphthalene, phenol, benzopyrrole, lauryl alcohol, ocimene, valencene, and cembrene are found in the API. The API briquette contains several chemical compounds, such as musk ambrette, ocimene, sabinene, limonene, 1-(p-cumenyl)adamantane, butane, and propanal, which improve its aroma and support pharmaceutical and fuel applications. Accordingly, API briquettes have considerable potential as an alternative energy source and a health improvement product.

Characterization and predictive value of volume changes of extremity and pelvis soft tissue sarcomas during radiation therapy prior to definitive wide excision

  • Gui, Chengcheng;Morris, Carol D.;Meyer, Christian F.;Levin, Adam S.;Frassica, Deborah A.;Deville, Curtiland;Terezakis, Stephanie A.
    • Radiation Oncology Journal
    • /
    • v.37 no.2
    • /
    • pp.117-126
    • /
    • 2019
  • Purpose: The purpose of this study was to characterize and evaluate the clinical significance of volume changes of soft tissue sarcomas during radiation therapy (RT), prior to definitive surgical resection. Materials and Methods: Patients with extremity or pelvis soft tissue sarcomas treated at our institution from 2013 to 2016 with RT prior to resection were identified retrospectively. Tumor volumes were measured using cone-beam computed tomography obtained daily during RT. Linear regression evaluated the linearity of volume changes. Kruskal-Wallis tests, Mann-Whitney U tests, and linear regression evaluated predictors of volume change. Logistic and Cox regression evaluated volume change as a predictor of resection margin status, histologic treatment response, and tumor recurrence. Results: Thirty-three patients were evaluated. Twenty-nine tumors were high grade. Prior to RT, median tumor volume was 189 mL (range, 7.2 to 4,885 mL). Sixteen tumors demonstrated significant linear volume changes during RT. Of these, 5 tumors increased and 11 decreased in volume. Myxoid liposarcoma (n = 5, 15%) predicted decreasing tumor volume (p = 0.0002). Sequential chemoradiation (n = 4, 12%) predicted increasing tumor volume (p = 0.008) and corresponded to longer times from diagnosis to RT (p = 0.01). Resection margins were positive in three cases. Five patients experienced local recurrence, and 7 experienced distant recurrence, at median 8.9 and 6.9 months post-resection, respectively. Volume changes did not predict resection margin status, local recurrence, or distant recurrence. Conclusion: Volume changes of pelvis and extremity soft tissue sarcomas followed linear trends during RT. Volume changes reflected histologic subtype and treatment characteristics but did not predict margin status or recurrence after resection.
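The abstract above fits linear trends to daily cone-beam CT tumor volumes to test whether volume changes during RT are linear. A minimal pure-Python sketch of such an ordinary least-squares fit is shown below; the volume series is hypothetical, invented only to illustrate a tumor shrinking linearly over treatment fractions (the study itself used standard statistical software on per-patient daily measurements).

```python
from statistics import mean

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (intercept, slope)."""
    xbar, ybar = mean(xs), mean(ys)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar
    return a, b

# Hypothetical daily cone-beam CT tumor volumes (mL) over 5 RT fractions.
fractions = [1, 2, 3, 4, 5]
volumes = [200.0, 195.0, 190.0, 185.0, 180.0]

intercept, slope = linear_fit(fractions, volumes)
# A significantly negative slope indicates decreasing volume during RT.
```

In the study, a significant non-zero slope (16 of 33 tumors) classified a tumor as increasing or decreasing; here the fabricated series gives a slope of -5 mL per fraction.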

Characterization of clutch traits and egg production in six chicken breeds

  • Lei Shi;Yunlei Li;Adam Mani Isa;Hui Ma;Jingwei Yuan;Panlin Wang;Pingzhuang Ge;Yanzhang Gong;Jilan Chen;Yanyan Sun
    • Animal Bioscience
    • /
    • v.36 no.6
    • /
    • pp.899-907
    • /
    • 2023
  • Objective: A better understanding of the laying pattern of birds is crucial for developing proper breed-specific breeding schemes and management. Methods: Daily egg production until 50 wk of age of six chicken breeds, including one layer (White Leghorn, WL), three dual-purpose (Rhode Island Red, RIR; Columbian Plymouth Rock, CR; and Barred Plymouth Rock, BR), one synthetic dwarf (DY), and one indigenous (Beijing-You Chicken, BYC), was used to characterize their clutch traits and egg production. The age at first egg, egg number, average and maximum clutch length, pause length, and numbers of clutches and pauses were calculated accordingly. Results: The egg number and average clutch length in WL, RIR, CR, and BR were higher than those in DY and BYC (p<0.01). The numbers of clutches and pauses, and the pause length, in WL, RIR, CR, and BR were lower than those in DY and BYC (p<0.01). The coefficients of variation of clutch length in WL, RIR, CR, and BR (57.66%, 66.49%, 64.22%, and 55.35%, respectively) were higher than those in DY (41.84%) and BYC (36.29%), while the coefficients of variation of egg number in WL, RIR, CR, and BR (9.10%, 9.97%, 10.82%, and 9.92%) were lower than those in DY (15.84%) and BYC (16.85%). Clutch length was positively correlated with egg number (r = 0.51 to 0.66; p<0.01) but not correlated with age at first egg in any breed. Conclusion: The six breeds showed significantly different clutch and egg production traits. Owing to their selection histories, the highly and moderately productive layer breeds had longer clutch lengths than the less productive indigenous BYC. Clutch length is a proper selection criterion for further progress in egg production. The age at first egg, which is independent of clutch traits, is especially encouraged as a selection target in the BYC breed.
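The comparison above rests on the coefficient of variation (CV = standard deviation / mean × 100%), which lets variability in clutch length be compared across breeds whose means differ greatly. A small sketch with fabricated clutch-length data (the real per-hen records are not in the abstract):

```python
from statistics import mean, pstdev

def coefficient_of_variation(values):
    """CV (%) = population standard deviation / mean * 100."""
    return pstdev(values) / mean(values) * 100

# Hypothetical clutch lengths (days) for two illustrative breeds:
layer_breed = [12, 30, 5, 45, 8]   # long but highly variable clutches
indigenous = [3, 4, 3, 5, 4]       # short, uniform clutches

cv_layer = coefficient_of_variation(layer_breed)
cv_indig = coefficient_of_variation(indigenous)
# As in the abstract, the layer breed shows a higher CV of clutch length.
```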

Predicting blast-induced ground vibrations at limestone quarry from artificial neural network optimized by randomized and grid search cross-validation, and comparative analyses with blast vibration predictor models

  • Salman Ihsan;Shahab Saqib;Hafiz Muhammad Awais Rashid;Fawad S. Niazi;Mohsin Usman Qureshi
    • Geomechanics and Engineering
    • /
    • v.35 no.2
    • /
    • pp.121-133
    • /
    • 2023
  • The demand for cement and crushed limestone has increased many fold due to the tremendous increase in construction activities in Pakistan during the past few decades. The number of cement production industries has increased correspondingly, and so have the rock-blasting operations at limestone quarry sites. However, the safety procedures warranted at these sites for blast-induced ground vibrations (BIGV) have not been adequately developed and/or implemented. Proper prediction and monitoring of BIGV are necessary to ensure the safety of structures in the vicinity of these quarry sites. In this paper, an attempt has been made to predict BIGV using an artificial neural network (ANN) at three selected limestone quarries in Pakistan. The ANN was developed in Python using Keras with a sequential model and dense layers. The hyperparameters and the number of neurons in each activation layer were optimized using randomized and grid search cross-validation. The input parameters for the model include distance, maximum charge per delay (MCPD), hole depth, burden, spacing, and number of blast holes, whereas peak particle velocity (PPV) is taken as the only output parameter. A total of 110 blast vibration datasets were recorded from the three limestone quarries and divided into 85% for network training and 15% for testing. A five-layer ANN with the topology 6-32-32-256-1 was trained with the Rectified Linear Unit (ReLU) activation function and the Adam optimization algorithm, with a learning rate of 0.001 and a batch size of 32. The blast datasets were used to compare the performance of the ANN, multivariate regression analysis (MVRA), and empirical predictors. Performance was evaluated using the coefficient of determination (R²), mean absolute error (MAE), mean squared error (MSE), mean absolute percentage error (MAPE), and root mean squared error (RMSE) for predicted and measured PPV. To determine the relative influence of each parameter on the PPV, sensitivity analyses were performed for all input parameters. The analyses reveal that the ANN performs better than MVRA and the other empirical predictors, and that 83% of the PPV is governed by distance and MCPD, while hole depth, number of blast holes, burden, and spacing account for the remaining 17%. This research provides valuable insights into improving safety measures and ensuring the structural integrity of buildings near limestone quarry sites.
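The abstract fully specifies the network topology (6-32-32-256-1, ReLU hidden layers, linear output). Below is a pure-Python sketch of the forward pass of that topology, to make the architecture concrete; it is not the authors' Keras code, the weights are random, the input values are hypothetical, and the Adam training loop (learning rate 0.001, batch size 32) is omitted.

```python
import random

random.seed(0)

def dense(x, w, b, relu=True):
    """One fully connected layer: y = W x + b, with optional ReLU."""
    y = [sum(wi * xi for wi, xi in zip(row, x)) + bi
         for row, bi in zip(w, b)]
    return [max(0.0, v) for v in y] if relu else y

def init_layer(n_in, n_out):
    """Small random weights, zero biases (placeholder initialization)."""
    w = [[random.uniform(-0.1, 0.1) for _ in range(n_in)]
         for _ in range(n_out)]
    return w, [0.0] * n_out

# Topology 6-32-32-256-1 from the abstract: six blast-design inputs
# (distance, MCPD, hole depth, burden, spacing, number of holes) -> PPV.
sizes = [6, 32, 32, 256, 1]
layers = [init_layer(a, b) for a, b in zip(sizes, sizes[1:])]

x = [250.0, 120.0, 9.0, 3.0, 3.5, 24.0]  # hypothetical (unscaled) inputs
for i, (w, b) in enumerate(layers):
    x = dense(x, w, b, relu=(i < len(layers) - 1))  # linear output layer

ppv = x[0]  # single predicted peak particle velocity
```

In practice the inputs would be normalized before training, and the weights learned by minimizing MSE with Adam rather than left random.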

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect the dependency between objects that enter the model sequentially (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can only generate vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors during decomposition (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean texts. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done with Old Testament texts using the deep learning package Keras based on Theano.
After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed an input vector of 20 consecutive characters and an output of the following 21st character. In total, 1,023,411 input-output vector pairs were included in the dataset, and we divided them into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used for Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
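The data preparation described above (sliding windows of 20 characters as input, the 21st character as the target) can be sketched in pure Python. The snippet below uses a short English sample string purely for illustration; the paper's actual corpus was Korean Old Testament text decomposed into 74 phoneme-level characters, and the LSTM layers themselves (built in Keras) are omitted.

```python
def make_windows(text, seq_len=20):
    """Slice text into (input, target) index pairs: seq_len characters
    in, the immediately following character out - the windowing scheme
    described in the abstract."""
    vocab = sorted(set(text))
    idx = {ch: i for i, ch in enumerate(vocab)}
    pairs = []
    for start in range(len(text) - seq_len):
        window = text[start:start + seq_len]
        target = text[start + seq_len]
        pairs.append(([idx[c] for c in window], idx[target]))
    return vocab, pairs

sample = "In the beginning God created the heaven and the earth."
vocab, pairs = make_windows(sample)
# Each pair is a 20-character context and the index of the 21st character.
```

Applied to the full pre-processed corpus, this construction yields the 1,023,411 input-output pairs reported in the abstract, which are then split 70:15:15 into training, validation, and test sets.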