• Title/Summary/Keyword: Mode Combination


Pulsed Electric Fields: An Emerging Food Processing Technology-An Overview (PEF 처리에 의한 식품의 가공)

  • Jayaprakasha, H.M.;Yoon, Y.C.;Lee, S.K.
    • Journal of Animal Science and Technology
    • /
    • v.46 no.5
    • /
    • pp.871-878
    • /
    • 2004
  • Pulsed electric fields (PEF) technology is one of the latest nonthermal methods of food processing for obtaining safe and minimally processed foods. This technology can be effectively explored for obtaining safe food with minimum effect on the nutritional, flavor, rheological and sensory qualities of food products. The process involves the application of high-voltage pulses (typically 20~80 kV/cm) to foods placed between two electrodes. The mode of inactivation of microorganisms by PEF processing has been postulated in terms of electric breakdown and electroporation. The extent of destruction of microorganisms in PEF processing depends mainly on the electric field strength of the pulses and the treatment time. For each cell type, a specific critical electric field strength and a specific critical treatment time are required, depending on the cell characteristics and on the type and strength of the medium in which the cells are present. The effect also depends on the types of microorganisms and their phase of growth. A careful combination of processing parameters has to be selected for effective processing. The potential applications of PEF technology are numerous, ranging from biotechnology to food preservation. With respect to food processing, it has already been established that the technology is non-thermal in nature, economical and energy efficient, besides providing minimally processed foods. This article gives a brief overview of this technology for food processing applications.
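The dependence of microbial inactivation on field strength and treatment time described above can be expressed as a simple survival-fraction calculation. The sketch below is a minimal illustration, assuming a Hülsheger-type kinetic model; the critical values `E_c`, `t_c` and the constant `k` are hypothetical placeholders that would have to be fitted for each microorganism and medium.

```python
def pef_survival_fraction(E, t, E_c=15.0, t_c=20e-6, k=6.0):
    """Estimate the surviving fraction of microorganisms after PEF treatment.

    Assumes a Hulsheger-type model: s = (t / t_c) ** (-(E - E_c) / k),
    applied only above the critical field strength E_c [kV/cm] and
    critical treatment time t_c [s]. Parameter values are illustrative.
    """
    if E <= E_c or t <= t_c:
        return 1.0  # below the critical thresholds, no inactivation is assumed
    return (t / t_c) ** (-(E - E_c) / k)

# Example: 40 kV/cm pulses applied for a total treatment time of 200 microseconds
print(pef_survival_fraction(E=40.0, t=200e-6))
```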

Strain-Relaxed SiGe Layer on Si Formed by PIII&D Technology

  • Han, Seung Hee;Kim, Kyunghun;Kim, Sung Min;Jang, Jinhyeok
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2013.08a
    • /
    • pp.155.2-155.2
    • /
    • 2013
  • A strain-relaxed SiGe layer on a Si substrate has numerous potential applications for electronic and opto-electronic devices. The SiGe layer must have a high degree of strain relaxation and a low dislocation density. Conventionally, strain-relaxed SiGe on Si has been manufactured using compositionally graded buffers, in which very thick SiGe buffers of several micrometers are grown on a Si substrate with the Ge composition increasing from the Si substrate to the surface. In this study, a new plasma process, i.e., the combination of PIII&D and HiPIMS, was adopted to implant Ge ions into a Si wafer for direct formation of a SiGe layer on the Si substrate. Due to the high peak power density applied to the Ge sputtering target during HiPIMS operation, a large fraction of the sputtered Ge atoms is ionized. If the negative high-voltage pulse applied to the sample stage in the PIII&D system is synchronized with the pulsed Ge plasma, ion implantation of Ge ions can be successfully accomplished. The PIII&D system for Ge ion implantation on a Si (100) substrate was equipped with 3-inch magnetron sputtering guns with Ge and Si targets, which were operated with a HiPIMS pulsed-DC power supply. The sample stage with the Si substrate was pulse-biased using a separate hard-tube pulser. During the implantation operation, the HiPIMS pulse and the substrate's negative bias pulse were synchronized at the same frequency of 50 Hz. The pulse voltage applied to the Ge sputtering target was -1200 V and the pulse width was 80 μs. While operating the Ge sputtering gun in HiPIMS mode, a pulse bias of -50 kV was applied to the Si substrate. The pulse width was 50 μs with a 30 μs delay with respect to the HiPIMS pulse. The Ge ion implantation process was performed for 30 min to achieve approximately 20% Ge concentration in the Si substrate. Right after Ge ion implantation, a ~50 nm thick Si capping layer was deposited to prevent oxidation during the subsequent RTA process at 1000℃ in an N2 environment. The Ge-implanted Si samples were analyzed using Auger electron spectroscopy, high-resolution X-ray diffractometry, Raman spectroscopy, and transmission electron microscopy to investigate the depth distribution, the degree of strain relaxation, and the crystalline structure, respectively. The analysis results showed that a strain-relaxed SiGe layer of ~100 nm thickness could be effectively formed on a Si substrate by direct Ge ion implantation using the newly-developed PIII&D process for non-gaseous elements.
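The synchronization between the HiPIMS target pulse and the substrate bias pulse described above can be summarized as a simple timing calculation. The sketch below only reproduces the pulse parameters quoted in the abstract (50 Hz repetition, 80 μs HiPIMS pulse, 50 μs bias pulse with a 30 μs delay); the constant and function names are illustrative.

```python
# Hypothetical sketch of the synchronized PIII&D / HiPIMS pulse timing,
# based only on the parameters quoted in the abstract.

FREQUENCY_HZ = 50                 # common repetition rate of both pulses
PERIOD_US = 1e6 / FREQUENCY_HZ    # 20,000 us between pulse starts

HIPIMS_VOLTAGE_V = -1200          # voltage applied to the Ge sputtering target
HIPIMS_WIDTH_US = 80              # HiPIMS pulse width

BIAS_VOLTAGE_V = -50_000          # negative bias pulse on the Si substrate
BIAS_WIDTH_US = 50                # substrate pulse width
BIAS_DELAY_US = 30                # delay of the bias pulse relative to the HiPIMS pulse

def pulse_windows(n_pulses=3):
    """Return (HiPIMS window, bias window) start/end times in microseconds."""
    windows = []
    for i in range(n_pulses):
        t0 = i * PERIOD_US
        hipims = (t0, t0 + HIPIMS_WIDTH_US)
        bias = (t0 + BIAS_DELAY_US, t0 + BIAS_DELAY_US + BIAS_WIDTH_US)
        windows.append((hipims, bias))
    return windows

for hipims, bias in pulse_windows():
    overlap = min(hipims[1], bias[1]) - max(hipims[0], bias[0])
    print(f"HiPIMS {hipims} us, bias {bias} us, overlap {overlap} us")
```

With these numbers the 50 μs bias pulse falls entirely within the 80 μs HiPIMS pulse, which is consistent with the stated requirement that implantation occur while the pulsed Ge plasma is present.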


Deviation of Heavy-Weight Floor Impact Sound Levels According to Measurement Positions (마이크로폰의 위치에 따른 중량 바닥충격음레벨의 편차)

  • Oh Yang-Ki;Joo Moon-Ki;Park Jong-Young;Kim Ha-Geun;Yang Kwan-Seop
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.2
    • /
    • pp.49-55
    • /
    • 2006
  • Measurement of the impact sound insulation of floors, according to the current Korean Standard KS F 2810-2, is to be made with peak levels over 4 points in a receiving room. But it is often the case that the results are inconsistent across the receiving points in the receiving room. Such variations obviously affect the repeatability and reproducibility of measured data. The results show deviations of as much as 10 dB in the 63 Hz octave band, while relatively smaller variations occur in the other low-frequency ranges. Such variations seem to come from modal overlaps of the receiving room. Under the current rating method for floor impact sound, KS F 2863-2, this may affect the single-number rating scheme. The tests in this study show differences of 2 dB to 6 dB in the single-number rating depending on the combination of measurement points. This means that reducing the measurement variation due to microphone positions is needed for better credibility of measurement results.
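The position-to-position deviation discussed above can be quantified by comparing the level at each microphone position with the energy average over all positions. The sketch below is a minimal illustration using the standard energy (logarithmic) averaging of sound pressure levels; the example 63 Hz band levels are hypothetical, not measured data from the study.

```python
import math

def energy_average(levels_db):
    """Energy (logarithmic) average of sound pressure levels in dB:
    L_avg = 10 * log10(mean(10 ** (L_i / 10)))."""
    mean_intensity = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_intensity)

# Hypothetical 63 Hz octave-band peak levels measured at 4 receiving positions
levels_63hz = [72.0, 75.5, 68.0, 78.0]

avg = energy_average(levels_63hz)
spread = max(levels_63hz) - min(levels_63hz)
print(f"energy average = {avg:.1f} dB, max-min deviation = {spread:.1f} dB")
```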

Synergistic Effect of Hydrogen and 5-Aza on Myogenic Differentiation through the p38 MAPK Signaling Pathway in Adipose-Derived Mesenchymal Stem Cells

  • Wenyong Fei;Erkai Pang;Lei Hou;Jihang Dai;Mingsheng Liu;Xuanqi Wang;Bin Xie;Jingcheng Wang
    • International Journal of Stem Cells
    • /
    • v.16 no.1
    • /
    • pp.78-92
    • /
    • 2023
  • Background and Objectives: This study aims to clarify the mechanisms underlying the regulation, and the regulatory roles, of hydrogen combined with 5-Aza in the myogenic differentiation of adipose-derived mesenchymal stem cells (ADSCs). Methods and Results: In this study, ADSCs served as an in vitro myogenic differentiation model. First, Alamar Blue staining and a mitochondrial tracer technique were used to verify whether hydrogen combined with 5-Aza could promote cell proliferation. In addition, this study assessed myogenic differentiation markers (e.g., Myogenin, Mhc and Myod protein expression) by Western blotting, analysis of cellular morphological characteristics (e.g., myotube number, length, diameter and maturation index), RT-PCR (Myod, Myogenin and Mhc mRNA expression) and immunofluorescence analysis (Desmin, Myosin and 𝛽-actin protein expression). Finally, to verify the mechanism of myogenic differentiation induced by hydrogen combined with 5-Aza, we performed bioinformatics analysis and Western blotting to detect the expression of p-P38 protein. Hydrogen combined with 5-Aza significantly enhanced the proliferation and myogenic differentiation of ADSCs in vitro by increasing the number of single-cell mitochondria, upregulating the expression of myogenic biomarkers such as Myod and Mhc, and promoting myotube formation. The expression of p-P38 was up-regulated by hydrogen combined with 5-Aza. The differentiation ability was suppressed when the cells were cultivated in combination with SB203580 (a p38 MAPK signaling pathway inhibitor). Conclusions: Hydrogen alleviates the cytotoxicity of 5-Aza and synergistically promotes the myogenic differentiation capacity of adipose stem cells via the p38 MAPK pathway. Thus, these results provide insights into myogenic differentiation and may offer a potential alternative strategy for skeletal muscle-related diseases.

Analysis of Interactions in Multiple Genes using IFSA(Independent Feature Subspace Analysis) (IFSA 알고리즘을 이용한 유전자 상호 관계 분석)

  • Kim, Hye-Jin;Choi, Seung-Jin;Bang, Sung-Yang
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.3
    • /
    • pp.157-165
    • /
    • 2006
  • Changes in the external/internal factors of the cell require specific biological functions to maintain life. Such functions encourage particular genes to interact with and regulate each other in multiple ways. Accordingly, we applied a linear decomposition model, IFSA, which derives hidden variables, called 'expression modes', that correspond to these functions. To interpret gene interaction/regulation, we used a cross-correlation method given an expression mode. Linear decomposition models such as principal component analysis (PCA) and independent component analysis (ICA) have been shown to be useful in analyzing high-dimensional DNA microarray data, compared to clustering methods. These methods assume that gene expression is controlled by a linear combination of uncorrelated/independent latent variables. However, these methods have some difficulty in grouping similar patterns which are slightly time-delayed or asymmetric, since only exactly matched patterns are considered. In order to overcome this, we employ the IFSA method of [1] to locate phase- and shift-invariant features. Membership scoring functions play an important role in classifying genes, since linear decomposition models basically aim at data reduction, not at grouping data. We introduce a new scoring function essential to the IFSA method. In this paper we stress that IFSA is useful in grouping functionally-related genes in the presence of time shifts and expression phase variance. Ultimately, we propose a new approach to investigate the multiple-interaction information of genes.
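As a rough illustration of the decomposition-plus-cross-correlation idea described above, the sketch below uses plain ICA (rather than the paper's IFSA) to extract latent expression modes from a gene-expression matrix, and then cross-correlates each gene's profile with a chosen mode so that time-shifted versions of the same pattern still score highly. All array shapes, the random data, and the threshold are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Hypothetical expression matrix: 200 genes observed over 30 time points
X = rng.standard_normal((200, 30))

# Decompose into a small number of latent "expression modes" (plain ICA here,
# as a simplified stand-in for the IFSA decomposition used in the paper)
ica = FastICA(n_components=5, random_state=0)
gene_weights = ica.fit_transform(X)   # (200, 5): gene loadings on each mode
modes = ica.components_               # (5, 30): time courses of the modes

def max_crosscorr(profile, mode):
    """Maximum normalized cross-correlation over all time lags,
    so that time-shifted copies of the mode still match."""
    p = (profile - profile.mean()) / (profile.std() + 1e-12)
    m = (mode - mode.mean()) / (mode.std() + 1e-12)
    return np.max(np.correlate(p, m, mode="full")) / len(mode)

# Group genes whose profiles match expression mode 0 (possibly with a time shift)
scores = np.array([max_crosscorr(X[g], modes[0]) for g in range(X.shape[0])])
related_genes = np.where(scores > 0.5)[0]   # 0.5 is an arbitrary threshold
print(f"{len(related_genes)} genes grouped with expression mode 0")
```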

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish low-quality from high-quality content through the text data of products, and it has proliferated with the growth of text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly available, so they are not only easy to collect but also directly affect a business. In marketing, real-world information from customers is gathered on websites, not through surveys. Whether a website's posts are positive or negative is reflected in customer response and sales, so businesses try to identify this information. However, many reviews on a website are of uneven quality and difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, while more recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the pretrained IMDB review data set. First, for the text classification algorithms related to sentiment analysis, we adopt popular machine learning algorithms such as NB (Naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and Gradient Boost as comparative models. Second, deep learning has demonstrated discriminative features that can extract complex features of data. Representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. RNN handles order well because it takes the time information of the data into account, but it suffers from the long-term dependency problem. To solve the problem of long-term dependence, LSTM is used. For the comparison, CNN and LSTM were chosen as simple deep learning models. In addition to classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although there are many parameters for these algorithms, we examined the relationship between parameter values and accuracy to find the optimal combination, and we tried to figure out how well the models work for sentiment analysis and why. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for the classification automatically by applying a convolution layer and massively parallel processing; LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and controlled at the desired time. These gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the long-term dependency problem. Furthermore, when LSTM is used on top of CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be designed simultaneously. The combined CNN-LSTM model achieved 90.33% accuracy; it is slower than CNN but faster than LSTM. The presented model was more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. CNN-LSTM can improve on the weaknesses of each model, and there is the advantage of improving learning layer by layer using the end-to-end structure of LSTM. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
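The sketch below is a minimal illustration of an integrated CNN-LSTM sentiment classifier of the kind described above, with an embedding layer, a convolution and pooling stage for local feature extraction, and an LSTM on top of the pooled features. It is not the authors' exact architecture; the vocabulary size, sequence length, and layer sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20_000   # assumed vocabulary size
MAX_LEN = 200         # assumed review length after padding

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),                    # word embedding layer
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # CNN extracts local n-gram features
    layers.MaxPooling1D(pool_size=4),                     # pooling shortens the feature sequence
    layers.LSTM(64),                                      # LSTM models the order of the pooled features
    layers.Dense(1, activation="sigmoid"),                # positive / negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Usage sketch with the Keras IMDB dataset (pre-tokenized movie reviews):
# (x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=VOCAB_SIZE)
# x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)
# model.fit(x_train, y_train, epochs=3, validation_split=0.1)
```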

Comparison of using CBCT with CT Simulator for Radiation dose of Treatment Planning (CBCT와 Simulation CT를 이용한 치료계획의 선량비교)

  • Kim, Dae-Young;Choi, Ji-Won;Cho, Jung-Keun
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.12
    • /
    • pp.742-749
    • /
    • 2009
  • The use of cone-beam computed tomography (CBCT) has been proposed for guiding the delivery of radiation therapy. A kilovoltage imaging system capable of radiography, fluoroscopy, and cone-beam computed tomography (CT) has been integrated with a medical linear accelerator. A standard clinical linear accelerator, operating in arc therapy mode, together with an amorphous-silicon (a-Si) on-board electronic portal imager, can be used to treat palliative patients and verify the patient's position prior to treatment. On-board CBCT images are used to generate patient geometric models to assist patient setup. The image data can also, potentially, be used for dose reconstruction in combination with the fluence maps from the treatment plan. In this study, we evaluated the accuracy of the Hounsfield units (HU) of CBCT images as well as the accuracy of dose calculations based on CBCT images of a phantom, and compared the results with those obtained using CT simulator images. Phantom and patient studies were carried out to evaluate the achievable accuracy of using CBCT and the CT simulator for dose calculation. Relative electron density as a function of HU was obtained for both the planning CT simulator and CBCT using a Catphan-600 (The Phantom Laboratory, USA) calibration phantom. A clinical treatment planning system was employed for CT simulator and CBCT based dose calculations and subsequent comparisons. The dosimetric consequence of HU variation in CBCT was evaluated by comparing MU/cGy. The differences were about 2.7% (3-4 MU/100 cGy) in the phantom and 2.5% (1-3 MU/100 cGy) in patients. The difference in HU values in the Catphan was small. However, the magnitude of scatter and artifacts in CBCT images is affected by the limitation of the detector's FOV and the patient's involuntary motions. In addition to guiding the patient setup process, CBCT data acquired prior to treatment can be used to recalculate or verify the treatment plan based on the patient anatomy of the treatment area, and CBCT has the potential to become a very useful tool for on-line ART.
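The dosimetric comparison described above comes down to mapping HU values to relative electron density through a calibration curve and then comparing the monitor units required per 100 cGy between CT-based and CBCT-based plans. The sketch below is a minimal illustration of those two steps; the calibration points and MU values are hypothetical, not the paper's measured data.

```python
import numpy as np

# Hypothetical HU -> relative electron density calibration points, as would be
# measured with a Catphan-600 phantom on the planning CT and on CBCT separately.
hu_points  = np.array([-1000, -500, 0, 500, 1000])
red_points = np.array([0.0, 0.5, 1.0, 1.3, 1.6])

def relative_electron_density(hu):
    """Interpolate relative electron density from an HU calibration curve."""
    return np.interp(hu, hu_points, red_points)

def mu_difference_percent(mu_ct, mu_cbct):
    """Percent difference in monitor units per 100 cGy between CT- and CBCT-based plans."""
    return 100.0 * abs(mu_cbct - mu_ct) / mu_ct

# Hypothetical example: 110 MU/100 cGy on the planning CT vs. 113 MU/100 cGy on CBCT
print(relative_electron_density(40))         # soft-tissue-like HU value
print(mu_difference_percent(110.0, 113.0))   # ~2.7%, of the same order as the phantom result
```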

Progress of Composite Fabrication Technologies with the Use of Machinery

  • Choi, Byung-Keun;Kim, Yun-Hae;Ha, Jin-Cheol;Lee, Jin-Woo;Park, Jun-Mu;Park, Soo-Jeong;Moon, Kyung-Man;Chung, Won-Jee;Kim, Man-Soo
    • International Journal of Ocean System Engineering
    • /
    • v.2 no.3
    • /
    • pp.185-194
    • /
    • 2012
  • A macroscopic combination of two or more distinct materials is commonly referred to as a "composite material", designed to be mechanically and chemically superior in function and characteristics to its individual constituent materials. Composite materials are used not only for aerospace and military applications, but are also heavily used in boat/ship building and general composite industries, which we are seeing increasingly often. Despite the various applications for composite materials, the industry is still limited and requires better fabrication technology and methodology in order to expand and grow. An example of this is that the majority of nearby fabrication facilities still use an antiquated wet lay-up process, in which fabrication still requires manual hand labor in a 3D environment, impeding productivity and the advancement of composite product design. As an expert in the advanced composites field, I have developed fabrication skills with the use of machinery based on my past composite experience. In autumn 2011, the Korean government confirmed funding for my project, the development of a composite sanding machine. I began development of this semi-robotic prototype in 2009. It has the potential to replace or augment the exhausting and difficult jobs performed by human hands, such as sanding, grinding, blasting, and polishing, most often in very awkward conditions, and it will also boost productivity, improve surface quality, cut abrasive costs, eliminate vibration injuries, and protect workers from exposure to dust and airborne contamination. Ease of control and operation of the equipment in or outside of the sanding room is a key benefit to end-users. It will prove to be much more economical than normal robotics and minimize errors that commonly occur in factories. The key components and their technologies are a 360-degree rotational shoulder and a wrist, controlled by a PLC controller and a joystick in manual mode. Development of both key modules is complete and they are now operational. The Korean government fund boosted my development and I expect to complete full-scale development no later than the 3rd quarter of 2012. Even with the advantages of composite materials, there is still the need to repair or maintain composite products with a higher level of technology. I have learned many composite repair skills on composite airframes, since many composite fabrication skills, including repair, require training for non-aerospace applications. The wind energy market now requires much larger blades in order to generate more electrical energy for wind farms; a single blade is now commonly 50 meters or longer. When a wind blade is damaged by external forces, on-site repair is required on the columns, even under strong wind and freezing temperature conditions. In order to obtain correct polymerization, the repair must be performed on the damaged area within a very limited time. The use of pre-impregnated glass fabric, a heated silicone pad, and a hot bonder providing precise heating control is surely required.

Comparison of using CBCT with CT simulator for radiation dose of treatment planning (CBCT와 Simulation CT를 이용한 치료계획의 선량비교)

  • Cho, jung-keun;Kim, dae-young;Han, tae-jong
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2009.05a
    • /
    • pp.1159-1166
    • /
    • 2009
  • The use of cone-beam computed tomography (CBCT) has been proposed for guiding the delivery of radiation therapy. A kilovoltage imaging system capable of radiography, fluoroscopy, and cone-beam computed tomography (CT) has been integrated with a medical linear accelerator. A standard clinical linear accelerator, operating in arc therapy mode, together with an amorphous-silicon (a-Si) on-board electronic portal imager, can be used to treat palliative patients and verify the patient's position prior to treatment. On-board CBCT images are used to generate patient geometric models to assist patient setup. The image data can also, potentially, be used for dose reconstruction in combination with the fluence maps from the treatment plan. In this study, we evaluated the accuracy of the Hounsfield units (HU) of CBCT images as well as the accuracy of dose calculations based on CBCT images of a phantom, and compared the results with those obtained using CT simulator images. Phantom and patient studies were carried out to evaluate the achievable accuracy of using CBCT and the CT simulator for dose calculation. Relative electron density as a function of HU was obtained for both the planning CT simulator and CBCT using a Catphan-600 (The Phantom Laboratory, USA) calibration phantom. A clinical treatment planning system was employed for CT simulator and CBCT based dose calculations and subsequent comparisons. The dosimetric consequence of HU variation in CBCT was evaluated by comparing MU/cGy. The differences were about 2.7% (3-4 MU/100 cGy) in the phantom and 2.5% (1-3 MU/100 cGy) in patients. The difference in HU values in the Catphan was small. However, the magnitude of scatter and artifacts in CBCT images is affected by the limitation of the detector's FOV and the patient's involuntary motions. In addition to guiding the patient setup process, CBCT data acquired prior to treatment can be used to recalculate or verify the treatment plan based on the patient anatomy of the treatment area, and CBCT has the potential to become a very useful tool for on-line ART.


Helicopter Pilot Metaphor for 3D Space Navigation and its implementation using a Joystick (3차원 공간 탐색을 위한 헬리콥터 조종사 메타포어와 그 구현)

  • Kim, Young-Kyoung;Jung, Moon-Ryul;Paik, Doowon;Kim, Dong-Hyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.3 no.1
    • /
    • pp.57-67
    • /
    • 1997
  • The navigation of a virtual space comes down to the manipulation of the virtual camera. The movement of the virtual camera has 6 degrees of freedom. However, input devices such as mice and joysticks are 2D, so the movement of the camera that corresponds to the input device is a 2D movement at any given moment. Therefore, the 3D movement of the camera has to be implemented by means of combinations of 2D and 1D movements of the camera. Many virtual space navigation browsers use several navigation modes to solve this problem. But the criteria for distinguishing the different modes are not clear, some of the manipulations in each mode are repeated in other modes, and the kinesthetic correspondence of the input devices is often confusing. Hence the user has difficulty in making correct decisions when navigating the virtual space. To solve this problem, we use a single navigation metaphor in which the different modes are organically integrated. In this paper we propose a helicopter pilot metaphor. Using the helicopter pilot metaphor means that the user navigates the virtual space like the pilot of a helicopter flying in space. In this paper, we distinguish six 2D movement spaces of the helicopter: (1) the movement on the horizontal plane, (2) the movement on the vertical plane, (3) the pitch and yaw rotations about the current position, (4) the roll and pitch rotations about the current position, (5) the horizontal and vertical turning, and (6) the rotation about the target object. The six 2D movement spaces are visualized and displayed as a sequence of auxiliary windows, and the user can select the desired movement space simply by jumping from one window to another. The user can select the desired movement by looking at the displayed 2D movement spaces. The movement of the camera in each movement space is controlled by the usual movements of the joystick.
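As a rough illustration of how a 2D joystick input can drive a 6-DOF camera through mode-specific mappings like the movement spaces listed above, the sketch below maps the two joystick axes to different pairs of camera parameters depending on the selected mode. Only a few of the six spaces are shown, and the mode names and step sizes are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    x: float = 0.0; y: float = 0.0; z: float = 0.0            # position
    yaw: float = 0.0; pitch: float = 0.0; roll: float = 0.0   # orientation (degrees)

def apply_joystick(cam, mode, jx, jy, move_step=1.0, turn_step=2.0):
    """Map a 2D joystick deflection (jx, jy in [-1, 1]) to camera motion,
    depending on the selected movement space. Mode names are illustrative."""
    if mode == "horizontal_plane":     # (1) translate on the horizontal plane
        cam.x += jx * move_step
        cam.z += jy * move_step
    elif mode == "vertical_plane":     # (2) translate on the vertical plane
        cam.x += jx * move_step
        cam.y += jy * move_step
    elif mode == "pitch_yaw":          # (3) rotate about the current position
        cam.yaw += jx * turn_step
        cam.pitch += jy * turn_step
    elif mode == "roll_pitch":         # (4) rotate about the current position
        cam.roll += jx * turn_step
        cam.pitch += jy * turn_step
    else:
        raise ValueError(f"unknown movement space: {mode}")
    return cam

cam = Camera()
apply_joystick(cam, "horizontal_plane", jx=0.5, jy=-1.0)  # strafe right, move backward
apply_joystick(cam, "pitch_yaw", jx=1.0, jy=0.2)          # look right and slightly up
print(cam)
```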
