
Effect of Freezing and Thawing Condition on the Physical Characteristics of Blanched Bean Sprouts as Home Meal Replacement (냉.해동 조건에 따른 간편편이식 콩나물의 물리적 품질 변화)

  • Jang, Min-Young;Jung, You-Kyoung;Min, Sang-Gi;Cho, Eun-Kyung;Lee, Mi-Yeon
    • Culinary science and hospitality research
    • /
    • v.20 no.6
    • /
    • pp.235-244
    • /
    • 2014
  • The purpose of this study was to investigate the effect of freezing and thawing rates on the physical properties of soybean sprouts, in order to improve the quality of processed soybean sprouts during distribution and storage. Cooked soybean sprouts were frozen by an air-blast freezing (ABF) system at -45°C or a natural air convection freezing (NCF) system at -24°C, then thawed in a microwave oven at varying output power (0, 400, 800 and 1,000 W) until reaching 75°C. The quality of the soybean sprouts was assessed by water content, hardness and springiness, and the internal microstructure was observed by optical microscopy. Soybean sprouts thawed at 1,000 W after natural air convection freezing showed the lowest water content. The springiness of soybean sprouts thawed at all output powers decreased compared with the control. Hardness increased only in soybean sprouts thawed at 1,000 W after air-blast freezing; however, the gaps in springiness and hardness relative to the control were smallest at 1,000 W thawing after air-blast freezing. The internal microstructure of the soybean sprouts was damaged more as freezing and thawing times increased. In conclusion, high freezing and thawing rates might improve the quality of soybean sprouts, and IQF (individually quick frozen) freezing combined with 1,000 W microwave thawing appears to be the optimum condition for frozen HMR production. These results suggest that freezing and thawing process parameters might be used as quality control parameters in processing various types of sprout products.

Photocatalytic Oxidation of Arsenite Using Goethite and UV LED (침철석과 자외선 LED를 이용한 아비산염의 광촉매 산화)

  • Jeon, Ji-Hun;Kim, Seong-Hee;Lee, Sang-Woo;Kim, Soon-Oh
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.39 no.1
    • /
    • pp.9-18
    • /
    • 2017
  • Arsenic (As) is considered one of the most toxic hazardous materials, and As contamination can arise both naturally and anthropogenically. The major forms of arsenic in groundwater are arsenite [As(III)] and/or arsenate [As(V)], depending on redox conditions: arsenite and arsenate are predominant in reduced and oxidized environments, respectively. Because arsenite is much more toxic and mobile than arsenate, there have been a number of studies on reducing its toxicity through oxidation of As(III) to As(V). This study was initiated to develop a photocatalytic oxidation process for the treatment of groundwater contaminated with arsenite. The performance of two types of light sources (UV lamp and UV LED) was compared, and the feasibility of goethite as a photocatalyst was evaluated. The highest removal efficiency of the process was achieved at a goethite dose of 0.05 g/L. Comparing the oxidation efficiencies of arsenite between the two light sources, the apparent performance of the UV LED was inferior to that of the UV lamp. However, when the results were appraised on the basis of emitted UV irradiation, the UV LED achieved higher performance than the UV lamp. This study demonstrates that an environmentally friendly goethite-catalyzed photo-oxidation process, without any addition of foreign catalyst, is feasible for reducing arsenite in groundwater containing naturally occurring goethite. In addition, this study confirms that UV LEDs can be used in the photo-oxidation of arsenite as an alternative light source that remedies the drawbacks of UV lamps, such as long stabilization time, high electrical power consumption, short lifespan, and high heat output requiring large cooling facilities.

EFFICIENCY OF ENERGY TRANSFER BY A POPULATION OF THE FARMED PACIFIC OYSTER, CRASSOSTREA GIGAS IN GEOJE-HANSAN BAY (거제·한산만 양식굴 Crassostrea gigas의 에너지 전환 효율)

  • KIM Yong Sool
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.13 no.4
    • /
    • pp.179-183
    • /
    • 1980
  • The efficiency of energy transfer by a population of the farmed Pacific oyster, Crassostrea gigas, was studied during a 10-month culture period (July 1979-April 1980) in Geoje-Hansan Bay near Chungmu City. Energy use by the farmed oyster population was calculated from estimates of age-specific natural mortality (in half-month units) and data on growth, gonad output, shell organic matter production and respiration. Total mortality during the culture period was estimated at approximately 36% from data on the number of surviving individuals per cluster. Growth consisted of two phases: a curvilinear phase during the first half of the culture period (July-November) and a linear phase in the latter half (December-April). The first-half growth was approximated by the von Bertalanffy growth model for shell height, SH = 6.33(1 - e^(-0.2421(t+0.54))), where t is age in half-month units. In the latter growth period, shell height was related to t by SH = 4.44 + 0.14t. Dry meat weight (DW) was related to shell height by log DW = -2.2907 + 2.589·log SH (2 < SH < 5) and log DW = -5.8153 + 7.208·log SH (5 < SH). Size-specific gonad output (G), calculated from the condition index before and after the spawning season, was related to shell height by G = 0.0145 + (3.95×10⁻³ × SH^2.9861). Shell organic matter production (SO) was related to shell height by log SO = -3.1884 + 2.527·log SH. The size- and temperature-specific respiration rate (R), determined in a biotron system with controlled temperature, was related to dry meat weight and temperature (T) by log R = (0.386T - 0.5381) + (0.6409 - 0.0083T)·log DW. The energy used in metabolism was calculated from size- and temperature-specific respiration and data on body composition. The calorie content of oyster meat was estimated by bomb calorimetry with nitrogen correction. The assimilation efficiency of the oyster, estimated directly by an insoluble crude silicate method, was 55.5%.
From the information available from other workers, the assimilation efficiency ranges between 40% and 70%. Of the filtered food energy, 27.4% was estimated to have been rejected as pseudofaeces, 17.2% was passed as faeces, 35.04% was respired and lost as heat, 0.38% was bound up in shell organics, 2.74% was released as gonad output, and 2.06% was lost as meat through mortality. The remaining 15.28% went to meat production. The net efficiency of energy transfer from assimilation to meat production (yield/assimilation) of the farmed oyster population was estimated at 28% during the culture period of July 1979-April 1980. The gross efficiency of energy transfer from ingestion to meat production (yield/food filtered) is probably between 11% and 20%.

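The energy-budget percentages above can be checked arithmetically. The sketch below (Python; the variable names and rounding are mine, not the paper's, and the negative exponent in the growth model is assumed so that shell height grows asymptotically toward 6.33) recomputes the assimilation and the net transfer efficiency from the published figures:

```python
import math

def shell_height(t_half_months):
    """von Bertalanffy growth for the first culture phase (July-November).
    t is age in half-month units; a negative exponent is assumed."""
    return 6.33 * (1.0 - math.exp(-0.2421 * (t_half_months + 0.54)))

budget = {  # % of the energy in food filtered, as reported
    "pseudofaeces": 27.40,
    "faeces": 17.20,
    "respiration": 35.04,
    "shell_organics": 0.38,
    "gonad_output": 2.74,
    "mortality_loss": 2.06,
    "meat_production": 15.28,
}

ingested = 100.0 - budget["pseudofaeces"]        # energy actually ingested
assimilated = ingested - budget["faeces"]        # ingested minus faeces
net_eff = budget["meat_production"] / assimilated    # yield/assimilation
gross_eff = budget["meat_production"] / 100.0        # yield/food filtered

print(round(assimilated, 1))   # 55.4
print(round(net_eff * 100))    # 28
```

The recomputed assimilation (55.4% of filtered food) matches the 55.5% silicate estimate, and yield/assimilation comes out at about 28%, as reported; the gross efficiency of 15.28% falls inside the quoted 11-20% range.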

The Influences of Obstructive Apneas on Changes of Cardiovascular Function in Anesthetized Dogs with α-chloralose (α-chloralose로 마취한 개에서 폐쇄성 무호흡이 심혈관계 기능변화에 미치는 영향)

  • Jang, Jae-Soon;Kang, Ji-Ho;Lee, Sang-Haak;Choi, Young-Mee;Kwon, Soon-Seog;Kim, Young-Kyoon;Kim, Kwan-Hyoung;Song, Jeong-Sup;Park, Sung-Hak;Moon, Hwa-Sik
    • Tuberculosis and Respiratory Diseases
    • /
    • v.48 no.3
    • /
    • pp.347-356
    • /
    • 2000
  • Background : Patients with obstructive sleep apnea syndrome are known to have high long-term mortality compared to healthy subjects because of cardiovascular dysfunction. Observing the hemodynamic changes caused by obstructive apneas helps in understanding the pathophysiological mechanism by which cardiovascular dysfunction develops in these patients. We therefore studied the changes in cardiovascular function in an animal model, aiming to obtain basic data toward an ideal experimental model required for more advanced studies. Methods : Sixteen dogs anesthetized with α-chloralose were divided into two groups: 8 dogs breathing room air and 8 dogs breathing oxygen. We measured PaO2, PaCO2, heart rate, cardiac output, mean femoral artery pressure, and mean pulmonary artery pressure at specified times during the apnea-breathing cycle: before endotracheal tube occlusion (baseline), 25 seconds after endotracheal tube occlusion (apneic period), and 10 seconds (early phase of the postapneic period, EPA) and 25 seconds (late phase of the postapneic period, LPA) after resumption of spontaneous breathing. Results : In the room-air breathing group, the heart rate decreased significantly during the apneic period compared to baseline (P<0.01) and increased at EPA and LPA compared to the apneic period (P<0.01). The heart rate showed no significant changes during the apneic and postapneic periods in the oxygen breathing group. Cardiac output tended to decrease during the apneic period compared to baseline, though this change was not statistically significant. Cardiac output decreased significantly at LPA compared to baseline (P<0.01). Mean femoral artery pressure decreased significantly during the apneic period compared to baseline (P<0.05).
Conclusion : Through this experiment we were able to partially and indirectly understand the changes in cardiovascular function, but a new experimental animal model whose physiological mechanisms are closer to natural sleep should be established, and further study of the changes in cardiovascular function and their causes should continue.


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shapes, or to pre-defined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (to have good resolution), 8 fuzzy sets describing the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm·(dm(m) + dm(fm)), where nfm is the maximum number of non-null values in any element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 24, and the memory dimension is therefore 128×24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each fuzzy set: the fuzzy-set word dimension would be 8×5 bits, so the dimension of the memory would have been 128×40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on elements 32, 64 and 96 of the universe of discourse, they will be memorized as shown. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values on any element of the universe of discourse is limited. This constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].

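The word-length formula and the memory comparison above can be recomputed directly. A minimal sketch (Python; the notation nfm/dm follows the abstract, but the helper `bits` and all variable names are mine):

```python
import math

def bits(n_levels: int) -> int:
    """Bits needed to encode n_levels distinct values."""
    return math.ceil(math.log2(n_levels))

U = 128        # elements in the universe of discourse
n_sets = 8     # fuzzy sets in the term set
levels = 32    # discretization levels for membership values
nfm = 3        # max non-null memberships per element of U

dm_m = bits(levels)   # 5 bits to store one membership value
dm_fm = bits(n_sets)  # 3 bits to store a membership-function index

word_length = nfm * (dm_m + dm_fm)   # Length = nfm*(dm(m)+dm(fm)) = 24 bits
sparse_memory = U * word_length      # 128*24 = 3072 bits
vector_memory = U * n_sets * dm_m    # 128*40 = 5120 bits if all values stored
```

With nfm = 3, the sparse scheme needs 3072 bits against 5120 bits for full vectorial memorization, which is exactly the 128×24 versus 128×40 comparison made in the text.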

Application of The Semi-Distributed Hydrological Model(TOPMODEL) for Prediction of Discharge at the Deciduous and Coniferous Forest Catchments in Gwangneung, Gyeonggi-do, Republic of Korea (경기도(京畿道) 광릉(光陵)의 활엽수림(闊葉樹林)과 침엽수림(針葉樹林) 유역(流域)의 유출량(流出量) 산정(算定)을 위한 준분포형(準分布型) 수문모형(水文模型)(TOPMODEL)의 적용(適用))

  • Kim, Kyongha;Jeong, Yongho;Park, Jaehyeon
    • Journal of Korean Society of Forest Science
    • /
    • v.90 no.2
    • /
    • pp.197-209
    • /
    • 2001
  • TOPMODEL, a semi-distributed hydrological model, is frequently applied to predict the amount of discharge, main flow pathways and water quality in forested catchments, especially in the spatial dimension. TOPMODEL is a conceptual model, not a physical one; its main concept is built on the topographic index and soil transmissivity, two components that can be used to predict the surface and subsurface contributing areas. This study was conducted to validate the applicability of TOPMODEL to small forested catchments in Korea. The experimental area is located in the Gwangneung forest operated by the Korea Forest Research Institute, Gyeonggi-do, near the Seoul metropolitan area. Two study catchments in this area have been monitored since 1979: one is a natural mature deciduous forest (22.0 ha) about 80 years old, and the other is a planted young coniferous forest (13.6 ha) about 22 years old. The data collected during two events (July 1995 and June 2000) at the mature deciduous forest and three events (July 1995, July 1999 and August 2000) at the young coniferous forest were used as the observed data sets. The topographic index was calculated using a 10 m × 10 m raster digital elevation model (DEM); it ranged from 2.6 to 11.1 at the deciduous and from 2.7 to 16.0 at the coniferous catchment. Optimization using the forecasting efficiency as the objective function showed that the model parameter m and the catchment-mean surface saturated transmissivity, ln T0, had high sensitivity. The optimized values of m and ln T0 were 0.034 and 0.038, and 8.672 and 9.475, at the deciduous catchment, and 0.031, 0.032 and 0.033, and 5.969, 7.129 and 7.575, at the coniferous catchment, respectively.
The forecasting efficiencies obtained from simulation with the optimized parameters were comparatively high: 0.958 and 0.909 at the deciduous and 0.825, 0.922 and 0.961 at the coniferous catchment. The observed and simulated hyeto-hydrographs showed that the times of lag to peak coincided well. Though the total runoff and peak flow of some events showed discrepancies between the observed and simulated output, TOPMODEL could overall predict the hydrologic output with an estimation error of less than 10%. Therefore, TOPMODEL is a useful tool for predicting runoff at ungauged forested catchments in Korea.

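The abstract uses a "forecasting efficiency" as its objective function but does not give the formula; in most TOPMODEL studies this is the Nash-Sutcliffe efficiency, so the sketch below assumes that definition (the function name and toy data are mine):

```python
def nash_sutcliffe(observed, simulated):
    """1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 is a perfect fit."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Toy hydrograph: a perfect simulation scores 1.0, and a simulation that
# only reproduces the mean flow scores 0.0. Reported efficiencies such as
# 0.958 or 0.961 therefore indicate a near-perfect match.
obs = [1.0, 3.0, 5.0, 4.0, 2.0]
print(nash_sutcliffe(obs, obs))  # 1.0
```

Under this definition, an efficiency above roughly 0.9, as reported for both catchments, leaves less than 10% of the observed variance unexplained, which is consistent with the stated estimation error.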

A Practice-Oriented Study on Sawdust Pile Filtration Composting of High Moisture Pig Slurry (고수분 돈분슬러리의 톱밥여과 퇴비화 현장적용 연구)

  • Ryoo, J.W.
    • Journal of Animal Environmental Science
    • /
    • v.14 no.2
    • /
    • pp.129-138
    • /
    • 2008
  • This study investigated the operating characteristics, water balance and chemical properties of compost during composting of pig slurry in an on-farm trial. The composting plant used sawdust-pile filtration with forced aeration inside a house, and was equipped with a turning machine moving on rails. The composting pit was 4.6 m wide and 53 m long, with a maximum height of 2 m. The field-scale aerobic composting facility was tested for its efficiency in composting high-moisture pig slurry. The sawdust materials remained in place for 6 months, and pig slurry was added to the compost pile every other day during the 6-month run. The temperatures of the compost pile and the compost house, and the input and output of moisture, were measured during the composting process. The results are summarized as follows. 1. The temperature of the compost varied in the range 22.4-71.1°C. After turning, the composting temperature decreased to between 50°C and 36°C over 3-5 hours, and then rose again to 64.5°C. 2. The temperature of the compost house was maintained at 20-30°C, and relative humidity varied in the range 50-99%. 3. The BOD, CODcr and SS of the leachate water were reduced by 89.5%, 81.2% and 97.5%, respectively. 4. The heavy metal content of the final compost was lower than the Korean standards. 5. The amount of effluent was 10.2%, and total evaporation during the composting period was 74.8%. The amount of slurry treated per 1 m³ of sawdust was 3.16 m³ without treatment of effluent output.


Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.131-150
    • /
    • 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in sentences of Korean and Japanese that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system addresses is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in the case of zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage is in charge of detecting the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research is to perform binary classification on all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected.
However, we propose in this paper that antecedent search be viewed as the problem of assigning antecedent-indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed for antecedent search in the text; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent, and the other indicating that it is not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples were prepared using the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; thus the performance of our system depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor, based on binary classification using a regular SVM. Experiments showed that our system's performance is F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique. Future work enabling the system to utilize semantic information is expected to lead to a significant performance improvement.
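The optimizer named above is a modified Pegasos algorithm. As a rough illustration of the Pegasos family, not the paper's method, here is a minimal subgradient update for an ordinary linear binary SVM, the kind used in the title-restoration stage; the toy data and hyperparameters are assumptions for illustration:

```python
import random

def pegasos_train(data, lam=0.01, epochs=200, seed=0):
    """data: list of (feature_vector, label) pairs with label in {-1, +1}."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):  # shuffled pass over the data
            t += 1
            eta = 1.0 / (lam * t)                 # Pegasos decaying step size
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            # subgradient step: shrink for the L2 regularizer, then
            # add the hinge-loss term only when the margin is violated
            w = [(1 - eta * lam) * wi for wi in w]
            if margin < 1:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# Toy linearly separable data: the label is the sign of the first coordinate.
train = [([1.0, 0.2], 1), ([0.8, -0.1], 1), ([-1.0, 0.3], -1), ([-0.7, -0.2], -1)]
w = pegasos_train(train)
```

The structural variant in the paper replaces the per-example hinge loss with a loss over whole label sequences, but the same 1/(λt) subgradient schedule applies.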

Wearable Computers

  • Cho, Gil-Soo;Barfield, Woodrow;Baird, Kevin
    • Fiber Technology and Industry
    • /
    • v.2 no.4
    • /
    • pp.490-508
    • /
    • 1998
  • One of the latest fields of research in the area of output devices is tactual display devices [13,31]. These tactual or haptic devices allow the user to receive haptic feedback from a variety of sources, so the user can actually feel virtual objects and manipulate them by touch. This is an emerging technology that will be instrumental in enhancing the realism of wearable augmented environments for certain applications. Tactual displays have previously been used for scientific visualization in virtual environments by chemists and engineers to improve perception and understanding of force fields and of world models populated with impenetrable objects. In addition to tactual displays, wearable audio displays that allow sound to be spatialized are being developed. With wearable computers, designers will soon be able to pair spatialized sound with virtual representations of objects, when appropriate, to make the wearable computing experience even more realistic to the user. Furthermore, as the number and complexity of wearable computing applications continues to grow, there will be increasing need for systems that are faster, lighter, and have higher-resolution displays. Better networking technology will also need to be developed so that all users of wearable computers can have high-bandwidth connections for real-time information gathering and collaboration. Beyond the technology advances that make users need to wear computers in everyday life, there is also the desire to make users want to wear their computers. To achieve this, wearable computing needs to be unobtrusive and socially acceptable. By making wearables smaller and lighter, or actually embedding them in clothing, users can conceal them easily and wear them comfortably. The military is currently working on the development of the Personal Information Carrier (PIC), or digital dog tag. The PIC is a small electronic storage device containing medical information about the wearer.
While old military dog tags contained only 5 lines of information, the digital tags may contain volumes of multimedia information including medical history, X-rays, and cardiograms. Using handheld devices in the field, medics would be able to call this information up in real time for better treatment. A fully functional transmittable device is still years off, but this technology, once developed by the military, could be adapted to civilian users and provide any information, medical or otherwise, in a portable, unobtrusive, and fashionable way. Another future device that could increase the safety and well-being of its users is the nose-on-a-chip developed by the Oak Ridge National Laboratory in Tennessee. This tiny digital silicon chip, about the size of a dime, is capable of 'smelling' natural gas leaks in stoves, heaters, and other appliances. It can also detect dangerous levels of carbon monoxide, and can be configured to notify the fire department when a leak is detected. This nose chip should be commercially available within 2 years; it is inexpensive, requires low power, and is very sensitive. Along with gas detection, this device may someday also be configured to detect smoke and other harmful gases. By embedding this chip into workers' uniforms, name tags, etc., it could be a lifesaving computational accessory. In addition to the future safety technology soon to be available as accessories, there are devices for entertainment and security. The LCI computer group is developing a Smartpen that electronically verifies a user's signature. With the increase in credit card use and the rise in forgeries comes the need for commercial industries to constantly verify signatures. The Smartpen writes like a normal pen but uses sensors to detect the motion of the pen as the user signs their name, in order to authenticate the signature.
This computational accessory should be available in 1999, and would bring increased peace of mind to consumers and vendors alike. In the entertainment domain, Panasonic is creating the first portable hand-held DVD player. This device weighs less than 3 pounds and has a screen about 6 inches across. The color LCD has the same 16:9 aspect ratio as a cinema screen and supports a high resolution of 280,000 pixels and stereo sound. The player can play standard DVD movies and has an hour of battery life for mobile use. To summarize, in this paper we presented concepts related to the design and use of wearable computers, with extensions to smart spaces. For some time, researchers in telerobotics have used computer graphics to enhance remote scenes. Recent advances in augmented reality displays make it possible to enhance the user's local environment with 'information'. As shown in this paper, there are many application areas for this technology, such as medicine, manufacturing, training, and recreation. Wearable computers allow a much closer association of information with the user. By embedding sensors in the wearable to allow it to see what the user sees, hear what the user hears, sense the user's physical state, and analyze what the user is typing, an intelligent agent may be able to analyze what the user is doing and try to predict the resources he will need next or in the near future. Using this information, the agent may download files, reserve communications bandwidth, post reminders, or automatically send updates to colleagues to help facilitate the user's daily interactions. This intelligent wearable computer would be able to act as a personal assistant who is always around, knows the user's personal preferences and tastes, and tries to streamline interactions with the rest of the world.


Development of Solar Concentrator Cooling System (태양광 시스템의 냉각장치 개발)

  • Lee, HeeJoon;Cha, Gueesoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.7
    • /
    • pp.4463-4468
    • /
    • 2014
  • To increase the efficiency of a solar module, the development of solar concentrators using lenses or reflection plates is proceeding actively; such concentrators typically combine a concentrating lens or optical device with a solar tracking system. However, as the energy density dissipated as heat increases with the concentration ratio, care must be taken to cool the solar concentrator, to prevent the efficiency of the solar cell from being lowered by rising cell temperature. This study developed an economical concentrator module system using a low-priced reflective optical device. A concentrator was combined with a general module to increase the generation efficiency of the solar module, and the heat generated by concentration was removed through the cooling system. To increase the efficiency of the solar concentrator, a cooling system was designed and manufactured. The micro cooling system (MCS) features natural circulation driven by capillary force, which requires no external power; by using the latent heat of the phase-changing working fluid, high cooling performance can be achieved. A 117 W solar module with the reflective plate and the cooling device was compared with a module without cooling: installing the cooling device in the module resulted in a 28% increase in power output.