• Title/Abstract/Keyword: impossible

Search Result 3,702, Processing Time 0.042 seconds

Study on the Fabric and Embroidery of <초충도수병> Possessed by Dong-A University Museum (동아대학교박물관 소장 <초충도수병>의 직물과 자수 연구)

  • Sim, Yeon-ok
    • Korean Journal of Heritage: History & Science
    • /
    • v.46 no.3
    • /
    • pp.230-250
    • /
    • 2013
  • <초충도수병>, possessed by Dong-A University Museum, is designated as Treasure No. 595 and has been known for its more exquisite, delicate, and realistic expression and colorful three-dimensional structure compared to typical 'grass and insect' paintings, and for its value in art history. However, it has not been analyzed or studied from the perspective of fabric craft despite being an embroidered work. This study used scientific devices to examine and analyze the screen's fabric, thread colors, and embroidery techniques in order to clarify its patterns and fabric-craft characteristics and its value in the history of fabric craft. As a result, <초충도수병> was found to consist of eight panels whose subject matter and composition are similar to those of general paintings of grass and insects. The patterns on the panels include, from the first panel, cucumber, cockscomb, day lily, balsam pear, gillyflower, watermelon, eggplant, and chrysanthemums. Among these, the balsam pear is a special subject not found in existing paintings of grass and insects. The eighth panel has only chrysanthemums, with no insects or reptiles, making it different from the typical forms of such paintings. The ground fabric of the screen uses black, which is not seen in other decorative embroideries, to emphasize and maximize the various colors of the threads. The fabric uses the weave structure of 5-end satin called Gong Dan [non-patterned satin]. The threads are only very slightly twisted, an incidental rather than deliberate twist. Some threads use one color, while others combine two or more colors for three-dimensional expression. Because the threads are severely deteriorated and faded, it is impossible to know the original colors, but the most frequently used colors range from yellow to green, and the other colors remaining relatively prominently are blue, brown, and violet.
The colors of the day lily, gillyflower, and strawberries now remain reddish yellow, but they are presumed to have originally been orange and red, considering existing paintings of grass and insects. The embroidery technique was mostly surface satin stitch to fill the surfaces, which shows traditional women's wisdom in reducing the waste of colored threads. Satin stitch is a relatively simple embroidery technique for decorating a surface, but here it uses various colored threads and divides the surfaces into combined vertical, horizontal, and diagonal stitches, or combinations of long and short stitches, for varied textures and a sense of volume. The bodies of insects combine buttonhole stitch, outline stitch, and satin stitch for three-dimensional expression, with the use of buttonhole stitch particularly noticeable. In addition, decorative stitches were used to give volume to the leaves, and surface pine-needle stitches were worked on the scouring rush to add a more realistic texture. Decorative stitches were added on top of the gillyflower, strawberries, and cucumbers for a more delicate touch. <초충도수병> is valuable in the history of painting and art, and bears great importance in the history of Korean embroidery, as it uses outstanding Korean techniques and colors to express Shin Saimdang's 'Grass and Insect Painting'.

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the graph shown in the chart rather than on complex analysis such as corporate intrinsic-value analysis or technical auxiliary indices. However, pattern analysis is difficult and has been computerized less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT technology has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved in terms of performance so far, long-term forecasting power remains limited, so these methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but this can be vulnerable in practice because whether the patterns found are suitable for trading is a separate matter. When such studies find a meaningful pattern, they find a point that matches the pattern and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be many disparities with reality. Existing research tries to find a pattern with stock price predictive power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave pattern published by Merrill (1980) is simple because it can be distinguished by five turning points. Despite reports that some patterns have price predictability, there have been no performance reports from actual market use. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern-recognition accuracy.
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented by the system. Only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. This is close to a real situation because performance is measured assuming that both the buy and the sell have actually been executed. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and calculates the vertices. In the second, the high-low line zig-zag method, a high price that meets the n-day high-price line is taken as a peak, and a low price that meets the n-day low-price line is taken as a valley. In the third, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak; if a central low price is lower than the n low prices on its left and right, it is taken as a valley. The swing wave method was superior to the other methods in the test results. We interpret this to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is unfinished. Because the number of cases in this simulation was too large to search for high-success patterns exhaustively, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the test section and the application section separately, so we were able to respond appropriately to market changes. In this study, we optimize the stock portfolio as a whole, because there is a risk of over-optimization if we tune the variables for each individual stock.
We therefore selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that prices need some volatility for patterns to form, but higher volatility is not always better.
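The swing wave rule described in the abstract above can be sketched briefly. This is a minimal illustration, not the paper's implementation: it assumes a single price series stands in for the separate high/low series, and the function and sample data are hypothetical.

```python
# Minimal sketch of the "swing wave" turning-point rule: a central price
# higher than the n prices on each side is a peak; one lower than the
# n prices on each side is a valley. Assumption (not from the paper):
# one price series is used instead of separate high/low series.

def swing_wave_turning_points(prices, n):
    """Return (index, price, kind) tuples; kind is 'peak' or 'valley'."""
    points = []
    for i in range(n, len(prices) - n):
        left = prices[i - n:i]
        right = prices[i + 1:i + 1 + n]
        if all(prices[i] > p for p in left) and all(prices[i] > p for p in right):
            points.append((i, prices[i], 'peak'))
        elif all(prices[i] < p for p in left) and all(prices[i] < p for p in right):
            points.append((i, prices[i], 'valley'))
    return points

series = [10, 12, 15, 13, 11, 9, 12, 14, 13, 12, 16, 15]
print(swing_wave_turning_points(series, 2))
# → [(2, 15, 'peak'), (5, 9, 'valley'), (7, 14, 'peak'), (9, 12, 'valley')]
```

Note that a turning point is only confirmed n bars after it occurs, which matches the abstract's observation that trading after the pattern is complete outperforms trading on an unfinished pattern.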

The Study of Korean-style Leadership (The Great Cause-Oriented and Confidence-Oriented Leadership) (대의와 신뢰 중시의 한국형 리더십 연구)

  • Park, sang ree
    • The Journal of Korean Philosophical History
    • /
    • no.23
    • /
    • pp.99-128
    • /
    • 2008
  • This research analyzes some Korean historical figures and presents the core values of their leadership, so that we can build a theory of leadership compatible with the current circumstances around Korea. Through this work, we expected not only to find typical examples among historical leaders but also to reaffirm our identities in our history. As a result of the research, it was possible to classify some figures in history into several patterns and discover their archetypal qualities: 'transform(實事)', 'challenge(決死)', 'energize(風流)', 'create(創案)', and 'envision(開新)'. Among these qualities, this research concentrated on 'challenge', specifically the 'death-defying spirit' with which historical leaders could sacrifice their lives for their great causes. This research selected twelve figures as incarnations of the death-defying spirit: Gyebaek(階伯), Ganggamchan(姜邯贊), Euljimundeok(乙支文德), Choeyoung(崔瑩), Chungmongju(鄭夢周), Seongsammun(成三問), Yisunsin(李舜臣), Gwakjaewoo(郭再祐), Choeikhyeon(崔益鉉), Anjunggeun(安重根), Yunbonggil(尹奉吉), and Yijun(李儁). By analyzing their core values and abilities and categorizing historical cases into four spheres (a private sphere, a relational sphere, a community sphere, and a social sphere), we found a common element among these figures: they took the lead by showing the goal and the ideal to their people at all times, and their goals were always not only obvious but also unwavering. In the second chapter, I described the core value in the private sphere, so called '志靑靑'. It implies that a leader should set his ultimate goal and then try to attain it with an unyielding will. Obvious self-confidence and an unfailing self-creed are the core values in the private sphere. In the third chapter, I described the core value in the relational sphere, the relationship between oneself and others: '守信結義'.
It indicates that a leader should win confidence from others by discharging his duties in his relations with them. Confidence is the highest level of affection toward others; thus mutual reliance should be based on truthful sincerity and affection. At the same time, firmness and strictness are needed so as not to be swayed by pity. In the fourth chapter, I described the core value in the community sphere, '丹心合力'. For this value, what is required of a leader is both community spirit and loyalty to one's community, along with a strong sense of responsibility and the attitude of taking the initiative among others. Thus, it can be said that the great power that conducts the community is fine teamwork, and the attitude of the leader can exert a great influence on his community. In the fifth chapter, I described the core value of the death-defying spirit in the social sphere. This value is more definite and explicit than the others described above. A leader should willingly prepare for his own death in order to fulfill his great duties. 'What to do' is more important for a leader than 'how to do'; that is to say, a leader should always do righteous things, and efficiency is nothing but one of his interests. A leader must be one who always behaves according to righteousness. Unless a leader's behavior is based on righteousness, it is impossible for him to exert effective leadership over people. Thus, it can be said that a true leader is one who is not only moral but also tries to fulfill his duties.

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising became active, display advertising including banner advertising dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and started to outdo display advertising as of 2005. Keyword advertising refers to the technique of exposing relevant advertisements at the top of search sites when one searches for a keyword. Instead of exposing advertisements to unspecified individuals like banner advertising, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier advertising in that, instead of the seller discovering customers and running an advertisement for them as with TV, radio, or banner advertising, it exposes advertisements to visiting customers. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word and to achieve maximum efficiency at minimum cost.
The strong point of keyword advertising is that customers are allowed to directly contact the products in question through advertising that is more efficient than that of mass media such as TV and radio. The weak point is that a company must have its advertisement registered on each and every portal site and finds it hard to exercise substantial supervision over its advertisement, with the possibility that its advertising expenses exceed its profits. Keyword advertising serves as the most appropriate method of advertising for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on the meter-rate system: a company pays for the number of clicks on the keyword that users have searched for. This model is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures rather than the number of clicks. This method fixes a price for an advertisement on the basis of 1,000 exposures, and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. Its weak point is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn its visitors into prospective customers.
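The arithmetic behind the two billing models above is simple enough to sketch. This is an illustration only: the rates, traffic figures, and function names are hypothetical, not taken from any platform's actual pricing.

```python
# Minimal sketch of the two keyword-advertising billing models:
# CPC (meter-rate, pay per click) vs CPM (flat-rate, pay per
# 1,000 exposures). All figures are hypothetical.

def cpc_cost(clicks, cost_per_click):
    """CPC: pay for each click on the searched keyword."""
    return clicks * cost_per_click

def cpm_cost(impressions, cost_per_thousand):
    """CPM: pay per 1,000 exposures, regardless of clicks."""
    return impressions / 1000 * cost_per_thousand

impressions = 50_000
clicks = int(impressions * 0.01)  # assume a 1% click-through rate
print(cpc_cost(clicks, 200))       # 500 clicks at 200 (currency units) each
print(cpm_cost(impressions, 2_000))
```

Under these made-up numbers the two models cost the same; in practice, the better choice depends on the click-through rate, which is why the abstract notes that targeted CPC suits advertisers who only want to pay for interested visitors.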
Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords into use when running ads, and when first running an ad should give priority to deciding which keyword to select, considering how many search-engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords that search-engine users frequently use carry an expensive unit cost per click, advertisers without much money for advertising at the initial phase should pay attention to detailed keywords suited to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keywords are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, arousing little antipathy; but it fails to attract much attention precisely because most keyword advertising is in text form. Image-embedded advertising is easier to notice because of its images, but it is exposed on the lower part of a web page and is recognized as an advertisement, which leads to a low click-through rate; its strong point, however, is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people recognize easily, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on the events of the sites in question and the composition of products, as a vehicle for monitoring customer behavior in detail.
Besides, keyword advertising allows advertisers to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about its visitors: the number of visitors, page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data, and since it is almost impossible to analyze them directly, one analyzes them using log-analysis solutions. The generic information that can be extracted from log-analysis tools includes the total number of page views, average page views per day, basic page views, page views per visit, the total number of hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours. Such data are also useful for analyzing the situation and current status of rival companies and for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question after the advertising contract is over.
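A few of the log-analysis metrics listed above can be computed with a short aggregation. This is a minimal sketch under stated assumptions: the record format (visitor-id, page) is hypothetical, and real log analysis would parse Web-server log files and cookie values rather than in-memory tuples.

```python
# Minimal sketch of log-analysis metrics: page views, unique visitors,
# one-time visitors, and repeat visitors. Record format is hypothetical.

from collections import defaultdict

def summarize(records):
    """records: (visitor_id, page) tuples in time order."""
    page_views = len(records)
    views_per_visitor = defaultdict(int)
    for visitor_id, _page in records:
        views_per_visitor[visitor_id] += 1
    repeat_visitors = sum(1 for n in views_per_visitor.values() if n > 1)
    return {
        "page_views": page_views,
        "unique_visitors": len(views_per_visitor),
        "one_time_visitors": len(views_per_visitor) - repeat_visitors,
        "repeat_visitors": repeat_visitors,
    }

log = [("a", "/"), ("b", "/item"), ("a", "/cart"), ("c", "/")]
print(summarize(log))
# → {'page_views': 4, 'unique_visitors': 3, 'one_time_visitors': 2, 'repeat_visitors': 1}
```

Per-day averages and per-visit counts would additionally require timestamps and a session (visit) definition, which are omitted here for brevity.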
If an advertiser wants keywords sensitive to season and timeliness on sites that give priority to established advertisers, he or she may as well purchase a vacant advertising slot so as not to miss the appropriate timing. However, Naver does not give priority to existing advertisers for any keyword advertisements; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period. This study is designed to look at marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture, whose strong points are that it is based on the CPC charging model and that its advertisements are registered at the top of the most representative portal sites in Korea. These advantages make it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has its weak points too: it is not the only perfect advertising model among search advertisements in the online market. So it is absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths, so as to increase their sales and create a point of contact with customers.


A Study on Residual Hearing of Hearing Impaired Children (고도난청아(高度難聽兒)에 대(對)한 잔존청력(殘存聽力))

  • Rhee, Kyu-Shik;Kim, Doo-Hie
    • Journal of Preventive Medicine and Public Health
    • /
    • v.6 no.1
    • /
    • pp.51-63
    • /
    • 1973
  • This paper illustrates the residual hearing and socio-medical background of 207 hearing-impaired children attending the Deaf School attached to Hankuk Social Work College, Taegu, Korea. The survey was performed through interviews with their parents and testing with a diagnostic audiometer (TRIO, AS 105 type) in a soundproof room from March 10 to November 28, 1973. The results obtained were as follows. 1) The attendance rate at the compulsory primary school showed a markedly lower tendency in females than in males, in direct proportion to the prevalence rate of deafness among them; the gap was deeper at higher school levels (middle and high school). 2) Those who entered each school at the suitable age (six years old for primary school, 12 years for middle, and 15 years for high) amounted to 11.3%, and those enrolled within the school-age range for each school (6-11 years for primary, 12-14 years for middle, and 15-17 years for high) amounted to 45.9% (43.7% in males, 50.0% in females). 3) As to causative diseases, congenital cases were 23.6%, comprising 13.5% heredity and 10.1% troubles during pregnancy; acquired cases totaled 47.9%, classified as 11.6% convulsions from other diseases, 7.7% measles, 7.7% other febrile diseases, 3.4% drug intoxication (mostly streptomycin), 2.4% meningitis, 1.5% epidemic encephalitis, and 31.3% other diseases; unknown cases were 28.5%. 4) 31.4% of the children, including congenital cases, lost their hearing within six months of age, 11.6% at 6-11 months, 9.7% at 1-2 years, and 14.0% at 2-3 years. Consequently, most cases (90.0%) lost their hearing within 3 years after birth. 5) As to the quality of hearing loss, most cases, 197 (97.5%), were perceptive; only two cases were conductive, and eight were mixed. 6) The status of residual hearing according to the average grade of hearing loss, $B=\frac{a+2b+c}{4}$ (as in Table 13), was as follows.
Two cases were normal (one was mute and the other had a severe speech disorder). Ten cases were moderate. Moderately severe cases were 40 (19.3%); severe cases, 38 (18.4%); scale-out (profound) cases, 48 (23.3%). Testing was impossible in 69 cases (33.3%) because the children were infantile or had mental disorders. 7) The rate of hearing-aid use was only 12.0%. Among the children, those who had more residual hearing and could show a hearing effect with a hearing aid used one in proportionally greater numbers, but use among those for whom such an effect was difficult to expect was rare.
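The averaging formula cited in the abstract above, $B=\frac{a+2b+c}{4}$, can be illustrated in a few lines. The assignment of $a$, $b$, $c$ is an assumption here: the paper's own definition is in its Table 13, which this excerpt does not reproduce; a common convention in audiometry takes them as pure-tone thresholds at three frequencies (e.g., 500, 1000, and 2000 Hz), with the middle frequency double-weighted.

```python
# Sketch of the weighted average of hearing loss, B = (a + 2b + c) / 4.
# Assumption (not stated in this excerpt): a, b, c are pure-tone
# thresholds in dB at three test frequencies, e.g. 500, 1000, 2000 Hz,
# with the middle frequency counted twice.

def average_hearing_loss(a, b, c):
    return (a + 2 * b + c) / 4

print(average_hearing_loss(60, 70, 80))  # → 70.0 (dB)
```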


The National Survey of Open Lung Biopsy and Thoracoscopic Lung Biopsy in Korea (개흉 및 흉강경항폐생검의 전국실태조사)

  • Academic Committee, The Korean Academy of Tuberculosis and Respiratory Diseases (대한결핵 및 호흡기학회 학술위원회)
    • Tuberculosis and Respiratory Diseases
    • /
    • v.45 no.1
    • /
    • pp.5-19
    • /
    • 1998
  • Introduction: Direct histologic and bacteriologic examination of a representative specimen of lung tissue is the only certain method of providing an accurate diagnosis in various pulmonary diseases, including diffuse pulmonary diseases. The purpose of this national survey was to define the indications, incidence, effectiveness, safety, and complications of open and thoracoscopic lung biopsy in Korea. Methods: A multicenter registry from 37 university or general hospitals equipped with more than 400 beds was retrospectively collected and analyzed over the 3 years from January 1994 to December 1996, using the same registry protocol. Results: 1) There were 511 cases from the 37 hospitals during the 3 years. The mean age was 50.2 years (${\pm}15.1$ years), and men were more prevalent than women (54.9% vs 45.9%). 2) Open lung biopsy was performed in 313 cases (62%) and thoracoscopic lung biopsy in 192 cases (38%). The incidence of lung biopsy was higher in diffuse lung disease (305 cases, 59.7%) than in localized lung disease (206 cases, 40.3%). 3) The mean duration from the detection of abnormalities on chest X-ray to lung biopsy was 82.4 days (open lung biopsy: 72.8 days, thoracoscopic lung biopsy: 99.4 days). Before the open or thoracoscopic lung biopsy, bronchoscopy was performed in 272 cases (53.2%), bronchoalveolar lavage in 123 cases (24.1%), and percutaneous lung biopsy in 72 cases (14.1%). 4) There were 230 cases (45.0%) of interstitial lung disease, 133 cases (26.0%) of thoracic malignancies, 118 cases (23.1%) of infectious lung disease including tuberculosis, and 30 cases (5.9%) of other lung diseases including congenital anomalies. No significant differences were noted in diagnostic rate or disease characteristics between open and thoracoscopic lung biopsy. 5) The final diagnosis through open or thoracoscopic lung biopsy was the same as the presumptive diagnosis before the biopsy in 302 cases (59.2%).
The identical diagnostic rate was 66.5% in interstitial lung diseases, 58.7% in thoracic malignancies, 32.7% in lung infections, 55.1% in pulmonary tuberculosis, and 62.5% in other lung diseases including congenital anomalies. 6) One day after lung biopsy, $PaCO_2$ had increased from the pre-biopsy level of $38.9{\pm}5.8mmHg$ to $40.2{\pm}7.1mmHg$ (P<0.05), and $PaO_2/FiO_2$ had decreased from the pre-biopsy level of $380.3{\pm}109.3mmHg$ to $339.2{\pm}138.2mmHg$ (P=0.01). 7) The complication rate after lung biopsy was 10.1%, and was much higher for open lung biopsy than for thoracoscopic lung biopsy (12.4% vs 5.8%, P<0.05). The complications were pneumothorax (23 cases, 4.6%), hemothorax (7 cases, 1.4%), death (6 cases, 1.2%), and others (15 cases, 2.9%). 8) Five of the deaths due to lung biopsy were associated with open lung biopsy, and in one fatal case the method of lung biopsy was not described. The underlying diseases were 3 cases of thoracic malignancies (2 cases of bronchoalveolar cell cancer and one malignant mesothelioma), 2 cases of metastatic lung cancer, and one interstitial lung disease. The duration between open lung biopsy and death was $15.5{\pm}9.9$ days. 9) Despite the lung biopsy, 19 cases (3.7%) could not be diagnosed, because the biopsy was taken from a site other than the target lesion (5 cases), the specimen was too small to interpret (3 cases), or pathologic diagnosis was impossible (11 cases). 10) The contribution of open or thoracoscopic lung biopsy to the final diagnosis was definitely helpful in 334 cases (66.5%), moderately helpful in 140 cases (27.9%), and not helpful or impossible to judge in 28 cases (5.6%). Overall, open or thoracoscopic lung biopsy was helpful in diagnosing the lung lesion in 94.4% of all cases. Conclusions: Open and thoracoscopic lung biopsy are relatively safe and reliable diagnostic methods for lung lesions that cannot be diagnosed by other diagnostic approaches such as bronchoscopy.
We recommend thoracoscopic lung biopsy when the patient is in critical condition, because it is safer and has diagnostic results equal to those of open lung biopsy.


Radioimmunoassay Reagent Survey and Evaluation (검사별 radioimmunoassay시약 조사 및 비교실험)

  • Kim, Ji-Na;An, Jae-seok;Jeon, Young-woo;Yoon, Sang-hyuk;Kim, Yoon-cheol
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.25 no.1
    • /
    • pp.34-40
    • /
    • 2021
  • Purpose: If a new test is introduced or reagents are changed in the laboratory of a medical institution, the characteristics of the test should be analyzed according to procedure and the reagents should be assessed. However, several conditions must be met to perform all the required comparative evaluations: first, enough samples must be prepared for each test; second, the various reagents applicable to the comparative evaluations must be supplied. Even when enough comparative evaluations have been done, there is a limit to how well the data variation for the new reagent represents the overall patient data variation, and this fact puts a burden on the laboratory in changing reagents. Because of these various difficulties, reagent changes in the laboratory are limited. In order to introduce competitive bidding, our institute conducted a full investigation of radioimmunoassay (RIA) reagents for each test and established the range of reagents usable in the laboratory through comparative evaluations. We wanted to share this process. Materials and Methods: There are 20 test items conducted in our laboratory, excluding consignment tests. For each test, the usable RIA reagents were fully investigated with reference to external quality control reports, and the manuals for each reagent were obtained. Each reagent's manual was checked for the test method, incubation time, and sample volume needed for the test. After that, the primary selection was made according to availability in this laboratory. The primary selected reagents were supplied as two kits based on 100 tests, and a data correlation test, sensitivity measurement, recovery rate measurement, and dilution test were conducted. The secondary selection was made according to the results of the comparative evaluation. The reagents that passed the primary and secondary selections were submitted to the competitive bidding list.
When a reagent was designated as the single option, we submitted an explanatory statement with the data obtained during the primary and secondary selection processes. Results: Excluded from the primary selection were cases where the turnaround time (TAT) was expected to be delayed and cases where the reagent could not be applied to our equipment because of the large volume of reagent used during the test. In the primary selection, there were five items for which only one reagent was available (squamous cell carcinoma Ag (SCC Ag), β-human chorionic gonadotropin (β-HCG), vitamin B12, folate, free testosterone); items with two available reagents were CA19-9, CA125, CA72-4, ferritin, thyroglobulin antibody (TG Ab), microsomal antibody (Mic Ab), thyroid stimulating hormone-receptor antibody (TSH-R-Ab), and calcitonin; items with three were triiodothyronine (T3), Free T3, Free T4, TSH, and intact parathyroid hormone (intact PTH); and items with four were carcinoembryonic antigen (CEA) and TG. In the secondary selection, there were eight items for which only one reagent was available (ferritin, TG, CA19-9, SCC, β-HCG, vitamin B12, folate, free testosterone); items with two were TG Ab, Mic Ab, TSH-R-Ab, CA125, CA72-4, intact PTH, and calcitonin; and items with three were T3, Free T3, Free T4, TSH, and CEA. The reasons for exclusion from the secondary selection were a lack of reagent supply for the comparative evaluations, problems with data reproducibility, and unacceptable data variation. The most problematic part of the comparative evaluations was sample collection. It was not a problem when the number of requested tests was large and the volume needed per test was small; but it was difficult to collect samples of various concentrations for low-volume tests (100 cases per month or fewer), and difficult to conduct a recovery rate test when the sample volume required for a single test was relatively large (more than 100 uL).
In addition, the lack of a dilution solution or standard zero material for the sensitivity measurements or dilution tests was another problem. Conclusion: Comparative evaluation for changing test reagents requires adequate preparation time to collect diverse and sufficient samples. In addition, setting the total sample volume and reagent volume range required for the comparative evaluations, depending on the sample volume and reagent volume required for one test, will reduce the burden of sample collection and planning for each comparative evaluation.
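The recovery rate and dilution tests mentioned in the abstract follow standard arithmetic. A minimal sketch, assuming a spiked-sample recovery design and serial dilution of a high-concentration sample; the function and variable names are illustrative, not from the original study:

```python
# Hypothetical helpers for the comparative evaluation described above.

def recovery_rate(measured, base, expected_added):
    """Percent recovery of a known amount added to a base sample."""
    return (measured - base) / expected_added * 100.0

def dilution_linearity(neat_value, dilution_factors, measured_values):
    """Observed/expected ratio (%) at each dilution of a high sample."""
    results = []
    for factor, measured in zip(dilution_factors, measured_values):
        expected = neat_value / factor  # ideal value at this dilution
        results.append(measured / expected * 100.0)
    return results

if __name__ == "__main__":
    # Spiked 50 units into a base sample measuring 20; recovered 68 total.
    print(recovery_rate(68.0, 20.0, 50.0))
    # Neat sample 400; measured 195 and 103 at 1:2 and 1:4 dilutions.
    print(dilution_linearity(400.0, [2, 4], [195.0, 103.0]))
```

Values close to 100% in either check indicate the candidate reagent behaves acceptably across the measuring range.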

Management and Use of Oral History Archives on Forced Mobilization -Centering on oral history archives collected by the Truth Commission on Forced Mobilization under the Japanese Imperialism Republic of Korea- (강제동원 구술자료의 관리와 활용 -일제강점하강제동원피해진상규명위원회 소장 구술자료를 중심으로-)

  • Kwon, Mi-Hyun
    • The Korean Journal of Archival Studies
    • /
    • no.16
    • /
    • pp.303-339
    • /
    • 2007
  • "The damage incurred from forced mobilization under Japanese imperialism" means the loss of life and the physical and property damage suffered by those who were forcibly mobilized by the Japanese imperialists as soldiers, civilians attached to the military, laborers, and comfort women during the period between the Manchurian Incident and the Pacific War. Up to the present, efforts to restore the history of this damage have been made by the victims, bereaved families, civil organizations, and the academic circles concerned; as a result, on March 5, 2004, the Disclosure Act on Forced Mobilization under Japanese Imperialism (partially revised on May 17, 2007) was officially established and proclaimed. On the basis of this law, the Truth Commission on Forced Mobilization under the Japanese Imperialism Republic of Korea (hereafter, the Compulsory Mobilization Commission) was launched under the jurisdiction of the Prime Minister on November 10, 2004. Since February 1, 2005, this organ has carried out its work with the aim of investigating the real aspects of the damage incurred from compulsory mobilization and making the historical truth known to the world. Its major business is to receive damage reports and investigate the reported damage (examination of the alleged victims and bereaved families, and decision-making); to receive and examine applications for fact-finding, including matters on which judgment is impossible; to correct family registers subsequent to damage judgments; to collect and analyze data concerning compulsory mobilization at home and abroad and write reports; and to exhume, save, and repatriate remains, and build a historical records hall, museum, and memorial place.
The Compulsory Mobilization Commission has dug out and collected a variety of records to support its damage examination and fact-finding business. As is often the case with histories of damage, the records that had already been made public or were newly discovered are limited, in quantity and quality, in what they reveal of the diverse historical context of compulsory mobilization. In some cases, the stories of the parties concerned can fill gaps in the records, or have a foundational value greater than the related records themselves. The Commission therefore generated a variety of oral history records through interviews with alleged surviving victims, puts those data to use in its examination business, and attempts to make them available for public use while managing them in a systematic way. The Commission's oral history archives were created according to thorough planning from the beginning, and production on digital media was adopted with the convenience of later management and use in mind from the production stage. In addition, in order to overcome the limits of oral history archives produced in the course of an investigation, the Commission conducted several special training sessions for interviewers and had them record the actual context of each oral testimony in an interview journal. The Commission is not equipped with a separate records management system for the collected archives. Digital archives are generated through the damage investigation management system and the electronic approval system, which play a role in registering and searching the produced, collected, and contributed records.
The oral history archives are registered in the digital archive and preserved together with the physical records. The collected oral history archives are classified and described at the time of registration and given appropriate numbers for registration, classification, and storage. The Commission has continued to publish collections of oral history archives for their active use and is also planning to produce visual materials. The oral history archives collected by this organ are produced, managed, and used as actively as possible, despite the limits imposed by the investigation process, budgetary deficits, the absence of a records management system, and the Commission's time-limited structure. If a historical records hall and museum is built as regulated in the Disclosure Act on Forced Mobilization, the accumulated oral history archives will be managed more systematically and made available to public users.

Analysis of Greenhouse Thermal Environment by Model Simulation (시뮬레이션 모형에 의한 온실의 열환경 분석)

  • 서원명;윤용철
    • Journal of Bio-Environment Control
    • /
    • v.5 no.2
    • /
    • pp.215-235
    • /
    • 1996
  • Thermal analysis by mathematical model simulation makes it possible to reasonably predict the heating and/or cooling requirements of greenhouses located in various geographical and climatic environments. Other advantages of the model simulation technique are that it makes it possible to select an appropriate heating system, set up an energy utilization strategy, schedule seasonal cropping patterns, and plan new greenhouse ranges. In this study, the control of the greenhouse microclimate is categorized into cooling and heating. A dynamic model was adopted to simulate heating requirements and energy conservation effects such as energy saving by a night-time thermal curtain, estimation of Heating Degree-Hours (HDH), and long-term prediction of greenhouse thermal behavior. The cooling effects of ventilation, shading, and a pad & fan system were partly analyzed by a static model. Experimental work with a small model greenhouse of 1.2 m × 2.4 m showed that cooling the greenhouse by spraying cold water directly on the cover surface or by recirculating cold water through heat exchangers would be effective for summer cooling. The mathematical model developed for the simulation is highly applicable because it reflects various climatic factors such as temperature, humidity, beam and diffuse solar radiation, and wind velocity. The model was closely verified against weather data obtained through a long-period greenhouse experiment. Most of the material relating to greenhouse heating and cooling components was obtained from the greenhouse model simulated mathematically using typical-year (1987) data of Jinju, Gyeongnam, but some of the material relating to cooling was obtained from model experiments, including an analysis of the cooling effect of water sprayed directly on the greenhouse roof surface. The results are summarized as follows: 1.
The heating requirements of the model greenhouse were highly related to the minimum temperature set for the greenhouse. The night-time setting temperature is much more influential on heating energy requirements than the day-time setting; therefore, it is highly recommended that the night-time setting temperature be carefully determined and controlled. 2. HDH data obtained by the conventional method are estimated on the basis of long-term average temperatures together with a standard base temperature (usually 18.3 °C). Such data can only serve as relative comparison criteria for heating load and are not applicable to the calculation of greenhouse heating requirements, because of the limited consideration of climatic factors and the inappropriate base temperature. Comparing the HDH data with the simulation results shows that a heating system designed from HDH data will probably overshoot the actual heating requirement. 3. The energy saving effect of the night-time thermal curtain, like the estimated heating requirement, is sensitively related to weather conditions: the thermal curtain adopted in the simulation showed high effectiveness, saving more than 50% of the annual heating requirement. 4. Ventilation performance during warm seasons is mainly influenced by the air exchange rate, although there is some variation depending on greenhouse structure, weather, and cropping conditions. For air exchange rates above 1 volume per minute, the reduction of the temperature rise in both types of greenhouse considered becomes modest with additional ventilation capacity. Therefore, the desirable ventilation capacity is assumed to be 1 air change per minute, which is the recommended ventilation rate for common greenhouses. 5.
In a glass-covered greenhouse with full production, under clear weather at 50% RH and a continuous 1 air change per minute, the temperature drop in the 50% shaded greenhouse and the pad & fan greenhouse was 2.6 °C and 6.1 °C, respectively. The temperature in the control greenhouse under continuous air exchange at this time was 36.6 °C, which was 5.3 °C above the ambient temperature. As a result, the greenhouse temperature could be maintained 3 °C below the ambient temperature. At 80% RH, however, it was impossible to drop the greenhouse temperature below the ambient temperature, because the temperature reduction possible with the pad & fan system was then no more than 2.4 °C. 6. Assuming the greenhouse is cooled during the three hot summer months only when its temperature rises above 27 °C, the relationship between the RH of the ambient air and the greenhouse temperature drop (ΔT) was formulated as follows: ΔT = -0.077·RH + 7.7. 7. The time-dependent cooling effects of ventilation, 50% shading, and a pad & fan system of 80% efficiency, operated alone or in combination, were predicted continuously for a typical summer day. When the greenhouse was cooled only by 1 air change per minute, the greenhouse air temperature was 5 °C above the outdoor temperature. No single method could drop the greenhouse air temperature below the outdoor temperature under fully cropped conditions, but when the systems were operated together, the greenhouse air temperature could be controlled to about 2.0-2.3 °C below the ambient temperature. 8. When cool water of 6.5-8.5 °C was sprayed on the greenhouse roof surface at a flow rate of 1.3 liter/min per unit greenhouse floor area, the greenhouse air temperature could be dropped to 16.5-18.0 °C, about 10 °C below the ambient temperature of 26.5-28.0 °C at that time.
The most important factor in cooling greenhouse air effectively with a water spray may be securing a plentiful source of cool water, such as ground water or cold water produced by a heat pump. Future work will focus not only on analyzing the feasibility of heat pump operation but also on finding the relationships between greenhouse air temperature (Tg), spraying water temperature (Tw), water flow rate (Q), and ambient temperature (To).
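Two of the calculations above are simple enough to sketch directly: the conventional Heating Degree-Hours sum against a base temperature, and the reported regression ΔT = -0.077·RH + 7.7 relating ambient relative humidity to the achievable temperature drop. This is an illustrative sketch, not the authors' simulation code:

```python
# Illustrative sketch of two calculations from the abstract above.

BASE_TEMP_C = 18.3  # standard base temperature cited in the abstract

def heating_degree_hours(hourly_temps_c, base=BASE_TEMP_C):
    """Sum of (base - T) over hours when T falls below the base temperature."""
    return sum(base - t for t in hourly_temps_c if t < base)

def cooling_temp_drop(rh_percent):
    """Greenhouse temperature drop (deg C) predicted by the fitted line."""
    return -0.077 * rh_percent + 7.7

if __name__ == "__main__":
    temps = [12.0, 15.5, 19.0, 21.0, 16.3]  # example hourly temperatures
    print(heating_degree_hours(temps))
    print(cooling_temp_drop(50))  # drop achievable at 50% ambient RH
```

As the abstract notes, an HDH figure computed this way only ranks heating loads relative to one another; the fixed base temperature and the neglect of radiation, humidity, and wind make it unsuitable for sizing an actual heating system.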


A Study on the Effect of Booth Recommendation System on Exhibition Visitors Unplanned Visit Behavior (전시장 참관객의 계획되지 않은 방문행동에 있어서 부스추천시스템의 영향에 대한 연구)

  • Chung, Nam-Ho;Kim, Jae-Kyung
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.175-191
    • /
    • 2011
  • With the MICE (Meeting, Incentive travel, Convention, Exhibition) industry coming into the spotlight, there has been growing interest in the domestic exhibition industry. Accordingly, various studies are being conducted in Korea to enhance exhibition performance, as in the United States or Europe. Some studies focus on analyzing the visiting patterns of exhibition visitors using intelligent information technology, taking into account how the effects of viewing an exhibition vary with the exhibition environment and technique, in order to understand visitors, uncover correlations between exhibiting businesses, and improve exhibition performance. However, previous studies of booth recommendation systems have only discussed the accuracy of recommendation from the system's perspective, rather than determining how recommendations change visitors' behavior or perception. A booth recommendation system enables visitors to make unplanned visits to exhibition booths by recommending suitable booths based on information about their visits. Some visitors may be satisfied with their unplanned visits, while others may consider the recommending process cumbersome or an obstacle to free observation. In the latter case, the exhibition is likely to produce worse results than when visitors are allowed to observe freely. Thus, before a booth recommendation system is applied to exhibition halls, the factors affecting its performance should be examined, and its effects on visitors' unplanned visiting behavior should be carefully studied. This study therefore aims to determine the factors that affect the performance of a booth recommendation system by reviewing theories and literature, and to examine the effects of visitors' perceived performance of the system on their satisfaction with unplanned behavior and their intention to reuse the system.
Toward this end, the unplanned behavior theory was adopted as the theoretical framework. Unplanned behavior can be defined as "behavior that is done by consumers without any prearranged plan". Consumers' unplanned behavior has been studied in various fields. The field of marketing, in particular, has focused on unplanned purchasing, which is often confused with impulsive purchasing. Nevertheless, the two are different: while impulsive purchasing involves strong, continuous urges to purchase, unplanned purchasing involves purchasing decisions made inside a store, not before entering it. In other words, all impulsive purchases are unplanned, but not all unplanned purchases are impulsive. Why, then, do consumers engage in unplanned behavior? Many scholars have offered suggestions, but there is a consensus that it is because consumers retain enough flexibility to change their plans midway instead of developing them exhaustively in advance. In other words, if unplanned behavior were costly, it would be difficult for consumers to change their prearranged plans. In the case of the exhibition hall examined in this study, visitors learn the hall's programs and plan which booths to visit in advance, because it is practically impossible to visit all of the various booths an exhibition operates in the limited time available. Therefore, if the booth recommendation system proposed in this study recommends booths that visitors may like, they can change their plans and visit the recommended booths. Such visiting behavior can be regarded as similar to consumers' store visits or tourists' unplanned behavior at a tourist spot, and can be understood in the same context as the recent increase in tourism consumers' unplanned behavior influenced by information devices. Thus, the following research model was established.
This research model uses visitors' perceived performance of the booth recommendation system as the parameter; the factors affecting performance include trust in the system, visitors' knowledge levels, expected personalization of the system, and the system's threat to freedom. In addition, the causal relations between visitors' satisfaction with the perceived performance of the system, their unplanned behavior, and their intention to reuse the system were examined. Trust in the booth recommendation system consisted of second-order factors (competence, benevolence, and integrity), while the other constructs consisted of first-order factors. To verify this model, a booth recommendation system was developed and tested at the 2011 DMC Culture Open, and 101 visitors were empirically studied and analyzed. The results are as follows. First, visitors' trust was the most important factor in the booth recommendation system: visitors who used the system perceived its performance as a success based on their trust. Second, visitors' knowledge levels also had significant effects on the performance of the system, which indicates that the performance of a recommendation system requires advance understanding; visitors with a better understanding of the exhibition hall better appreciated the usefulness of the booth recommendation system. Third, expected personalization did not have significant effects, a result that differs from previous studies, presumably because the system used in this study did not provide sufficiently personalized services. Fourth, the recommendation information provided by the system was not considered to threaten or restrict one's freedom, which means it is valuable in terms of usefulness.
Lastly, high performance of the booth recommendation system led to high visitor satisfaction with unplanned behavior and high intention to reuse the system. In summary, to analyze the effects of a booth recommendation system on visitors' unplanned booth visits, empirical data were examined on the basis of the unplanned behavior theory, and useful suggestions for the establishment and design of future booth recommendation systems were made accordingly. Future research should employ more elaborate survey questions and subjects.
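The abstract describes recommending booths from visitors' visit information without specifying an algorithm. A minimal sketch of one plausible approach, a co-visitation heuristic that suggests unvisited booths frequently seen together with a visitor's booths in other visitors' histories; this is purely illustrative and is not the system built for the 2011 DMC Culture Open:

```python
# Hypothetical co-visitation booth recommender.

from collections import Counter
from itertools import combinations

def recommend_booths(visitor_history, all_histories, top_n=3):
    """Rank unvisited booths by how often they co-occur with visited ones."""
    co_counts = Counter()
    for history in all_histories:
        for a, b in combinations(set(history), 2):
            co_counts[(a, b)] += 1  # count the pair in both directions
            co_counts[(b, a)] += 1
    visited = set(visitor_history)
    scores = Counter()
    for booth in visited:
        for (a, b), count in co_counts.items():
            if a == booth and b not in visited:
                scores[b] += count
    return [booth for booth, _ in scores.most_common(top_n)]

if __name__ == "__main__":
    histories = [["A", "B", "C"], ["A", "C", "D"], ["B", "C", "D"], ["A", "D"]]
    print(recommend_booths(["A"], histories, top_n=2))
```

A production system would weight recency and dwell time and update recommendations as the visitor moves through the hall, but the ranking idea is the same.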