• Title/Summary/Keyword: Small system


The Comparison of the Ultra-Violet Radiation of Summer Outdoor Screened by the Landscaping Shade Facilities and Tree (조경용 차양시설과 수목에 의한 하절기 옥외공간의 자외선 차단율 비교)

  • Lee, Chun-Seok; Ryu, Nam-Hyong
    • Journal of the Korean Institute of Landscape Architecture / v.41 no.6 / pp.20-28 / 2013
  • The purpose of this study was to compare the ultraviolet (UV) radiation under landscaping shade facilities and a tree with the natural solar UV of outdoor space at summer midday. UVA+B and UVB were recorded every minute from the 20th of June to the 26th of September 2012, at a height of 1.1 m, under four different shading conditions, with four identical measuring systems each consisting of a pair of analog UVA+B sensors (220~370 nm, Genicom's GUVA-T21GH), UVB sensors (220~320 nm, Genicom's GUVA-T21GH) and a data acquisition system (Comfile Tech.'s Moacon). The four shading conditions were under a wooden shelter (W4.2 m × L4.2 m × H2.5 m), a polyester membrane structure (W4.9 m × L4.9 m × H2.6 m), a Salix koreensis (H11 × B30) and a brick-paved plot without any shading material. Based on the 648 records of 17 sunny days, the time-serial differences of natural solar UVA+B and UVB over midday periods were analysed and compared, and statistical analysis of the differences between the four shading conditions was done based on the 2,052 records of the daytime period from 10 A.M. to 4 P.M. The major findings were as follows: 1. The average UVA+B under the wooden shelter, the membrane and the tree were 39 μW/cm² (3.4%), 74 μW/cm² (6.4%) and 87 μW/cm² (7.6%) respectively, while the solar UVA+B was 1,148 μW/cm², which means those facilities and the tree screened at least 93% of solar UVA+B. 2. The average UVB under the wooden shelter, the membrane and the tree were 12 μW/cm² (5.8%), 26 μW/cm² (13%) and 17 μW/cm² (8.2%) respectively, while the solar UVB was 207 μW/cm²; the membrane showed the highest level and the wooden shelter the lowest. 3. According to the results of the time-serial analysis, the difference between the three shaded conditions around noon was very small, but the differences in the early morning and late afternoon were apparently large, which seems to be caused by the formal and structural characteristics of the shading facilities and the tree, not by the shading materials themselves. In summary, the landscaping shade facilities and the tree performed very well at screening solar UV outdoors at summer midday, but poorly at screening lateral UV during the early morning and late afternoon. Therefore, the more delicately shading facilities and large trees or forests are designed to block this additional lateral UV, the more effectively they condition outdoor space by reducing radiation that is useless or even harmful for human activities.
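The percentages reported in the abstract follow directly from the averaged irradiance readings; a minimal sketch of the calculation (irradiance values copied from the abstract, variable names hypothetical):

```python
# Average irradiance (μW/cm²) from the abstract: open-sky solar UVA+B
# versus the reading under each shading condition.
solar_uvab = 1148
shaded_uvab = {"wooden shelter": 39, "membrane": 74, "tree": 87}

for site, uv in shaded_uvab.items():
    transmitted = uv / solar_uvab   # fraction of solar UV reaching the shaded spot
    screened = 1.0 - transmitted    # the facility's screening rate
    print(f"{site}: transmits {transmitted:.1%}, screens {screened:.1%}")
```

The same two-line calculation applies to the UVB figures against the 207 μW/cm² open-sky value.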

Wind and Flooding Damages of Rice Plants in Korea (한국의 도작과 풍수해)

  • 강양순
    • KOREAN JOURNAL OF CROP SCIENCE / v.34 no.s02 / pp.45-65 / 1989
  • The Korean peninsula, having a complex topography and a variable climate, lies within the passing area of many typhoons that originate near the southern islands of the Philippines. Thus, various patterns of wind and flooding damage occur in paddy fields due to the strong wind and heavy rain concentrated during the summer rice-growing season in Korea. Wind damage to rice plants in Korea was mainly caused by saline wind, dry wind and strong wind during typhoons. Saline wind damage, with symptoms of white heads or dried leaves, was caused by 1.1 to 17.2 mg of salt per unit dry weight deposited on plants located up to 2.5 km from the seashore of the southern coastal area during the period (27th to 29th of August, 1986) of typhoon "Vera", which brought 62-96% relative humidity, wind speeds above 6 m per second and air temperatures of 22.5 to 26.4°C without rain. Most typhoons bringing winds of 4.0 to 8.5 m per second and low humidity (less than 60%) with high temperature to the east coastal and southern areas of Korea were changed into dry, hot winds by the foehn phenomenon. Dry wind damage, with symptoms of white heads or discolored brownish grain, occurred at the rice heading stage. Strong wind mechanically caused severe damage during typhoons, such as broken, cut and dried leaves before the heading stage, and lodging and shattering of grain at the ripening stage. To reduce wind damage to rice plants, it is effective to cultivate varieties resistant to wind damage, such as Sangpoongbyeo and Cheongcheongbyeo, and to let the heading stage escape the typhoon period by accelerating heading to before the 15th of August.
Though flood damage to rice plants, such as the washing away of fields, burying of fields, submergence and lodging, has been reduced by the construction of multi-purpose dams and river banks, it still occasionally occurs with regional heavy rain and water overflowing the banks of rivers. Paddy fields were submerged for 2 to 4 days when typhoons and heavy rain occurred around the end of August. At such times, rice plants at a younger growing stage in late-transplanted fields of the southern area of Korea suffered severe damage. Although panicles of rice plants at the meiotic and heading stages died when flooded, the plants showed a 66% yield-compensating ability through upper tillering panicles produced from tillers with dead panicles in ordinarily transplanted paddy fields. To reduce flooding damage, it is effective to cultivate flood-resistant varieties that are simultaneously resistant to bacterial leaf blight, lodging and the small brown planthopper. In particular, Tongil-type rice varieties are relatively resistant to flooding compared to Japonica varieties: when flooded, they had high survival, low elongation of leaf sheath and blade, high recovering ability owing to high root activity and photosynthesis, and high yield-compensating ability through upper tillering panicles. To minimize flooding and wind damage to rice plants in the future, the following research has to be carried out: 1. Data analysis by telemetering and computerization of climate, actual conditions and growth diagnosis of crops damaged by disasters. 2. Development of varieties tolerant to the poor natural conditions related to flooding and wind damage. 3. Improvement of reasonable cropping systems by introducing other crops to compensate for the loss of damaged rice. 4. Increased utilization of damaged rice plants.


Results of Hyperfractionated Radiation Therapy in Bulky Stage Ib, IIa, and IIb Uterine Cervical Cancer (종괴가 큰 병기 Ib, IIa, IIb 자궁경부암에서 다분할 방사선치료의 결과)

  • Kim, Jin-Hee; Kim, Ok-Bae
    • Radiation Oncology Journal / v.15 no.4 / pp.349-356 / 1997
  • Purpose: To evaluate the efficacy of hyperfractionated radiation therapy in carcinoma of the cervix, especially huge exophytic and endophytic stage Ib, IIa and IIb. Materials and Methods: Forty-one patients with carcinoma of the cervix were treated with hyperfractionated radiation therapy at the Department of Therapeutic Radiology, Dongsan Hospital, Keimyung University School of Medicine from July 1991 to April 1994. According to the FIGO staging system, there were stage Ib (3 patients) and IIa (6 patients) with exophytic (≥5 cm in diameter) or huge endophytic masses, and IIb (32 patients), with a median age of 55 years. Radiation therapy consisted of hyperfractionated external irradiation to the whole pelvis (120 cGy/fraction, 2 fractions/day with a minimum interval of 6 hours, 3600-5520 cGy) and boost parametrial doses (for a total of 4480-6480 cGy) with a midline shield (4 × 10 cm), combined with intracavitary irradiation (up to 7480-8520 cGy to point A in Ib and IIa, and 8480-9980 cGy in IIb). The maximum and mean follow-up durations were 70 and 47 months respectively. Results: The five year local control rate was 78%, and the actuarial overall five year survival rate was 66.1% for all patients, 44.4% for stage Ib and IIa, and 71.4% for stage IIb. In bulky IIb (tumor size above 5 cm, 11 patients), the five year local control rate and five year survival rate were 88.9% and 73% respectively. Pelvic lymph node status (negative: 74%, positive: 25%, p=0.0015) was a significant prognostic factor for the five year survival rate. There was a marginally significant survival difference by total dose to point A (>84 Gy: 70%, ≤84 Gy: 42.8%, p=0.1). We consider that the difference in total dose to point A by stage (mean Ib, IIa: 79 Gy; IIb: 89 Gy, p=0.001) is one of the causes of the worse local control and survival of Ib and IIa compared with IIb. The overall recurrence rate was 39% (16/41). The rates of local failure alone, distant failure alone, and combined local and distant failure were 9.7%, 19.5%, and 9.7% respectively. Two patients developed leukopenia (≥ grade 3) and three patients developed grade 3 gastrointestinal complications; no other complication above grade 3 was noted, and there was no treatment-related death. Conclusion: It may be necessary to increase the point A dose to more than 85 Gy in hyperfractionated radiotherapy of huge exophytic and endophytic stage Ib and IIa. Hyperfractionated radiation therapy appears tolerable in huge exophytic and endophytic stage IIb cervical carcinoma, with acceptable morbidity and a possible survival gain, but these are results from a small patient group and should be confirmed by long-term follow-up in a larger number of patients.
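As a quick sanity check on the schedule described above (illustrative arithmetic only, not taken from the paper): at 120 cGy per fraction and 2 fractions per day, the whole-pelvis range of 3600-5520 cGy implies 30-46 fractions over 15-23 treatment days.

```python
# Fraction and treatment-day counts implied by the hyperfractionation
# schedule in the abstract (120 cGy per fraction, 2 fractions per day).
DOSE_PER_FRACTION_CGY = 120
FRACTIONS_PER_DAY = 2

def schedule(total_cgy):
    fractions = total_cgy // DOSE_PER_FRACTION_CGY
    days = fractions // FRACTIONS_PER_DAY
    return fractions, days

print(schedule(3600))  # lower whole-pelvis dose bound
print(schedule(5520))  # upper whole-pelvis dose bound
```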


Actual Results on the Control of Illegal Fishing in Adjacent Sea Area of Korea (한국 연근해 불법어업의 지도 단속 실태)

  • Lee, Sang-Jo; Kim, Jin-Kun
    • Journal of Fisheries and Marine Sciences Education / v.10 no.2 / pp.139-161 / 1998
  • This thesis includes a study of the legal regulations, the system and the formalities for the control of illegal fishing. The author also analyzed the records of illegal fishing controlled by fishing patrol vessels of the Ministry of Maritime Affairs and Fisheries from 1994 to 1996 in the adjacent sea area of Korea. The results are summarized as follows: 1. The fishing patrol vessels controlled a total of 826 cases in 2,726 days of 292 voyages by 17 vessels in 1994, a total of 1,086 cases in 3,060 days of 333 voyages by 18 vessels in 1995, and a total of 933 cases in 3,126 days of 330 voyages by 19 vessels in 1996. 2. Illegal fishing was generally concentrated from April to September, but year after year it became scattered throughout the year. 3. The sea area with the most controlled cases of illegal fishing was the south central sea area near the Port of Tongyeong, which accounted for about 36~51% of the total, with the controlled cases gradually increasing every year. The second was the south western sea area near the Port of Yosu, at about 18~27%, with cases increasing slightly every year. The third was the south eastern sea area near Pusan, at about 13~23%, with cases gradually decreasing year by year. 4. The most controlled kind of illegal fishing was the small-size bottom trawl, which accounted for about 81~95% of the total, with cases gradually increasing year by year. The second was the medium-size bottom trawl, at about 4~7%, with cases gradually decreasing year by year. The third was the coastal trawl, at about 2~4%, with cases decreasing slightly every year. 5. By address of the illegal fishing operators, Pusan city was the most common at about 33~51% of the total, followed by Cheonnam at about 24~29% and Kyungnam at about 16~35%. 6. The most common violation was of Article 57 of the Fisheries Act, at about 56~64% of the total; the second was of Article 23 of the Protectorate for Fisheries Resources, at about 21~36%, with cases controlled under it gradually increasing every year.


A Study of The Medical Classics in the 'Āyurveda' (아유르베다(Āyurveda) 의경(醫經)에 관한 연구)

  • Kim, Ki-Wook; Park, Hyun-Kuk; Seo, Ji-Young
    • The Journal of Dong Guk Oriental Medicine / v.10 / pp.119-145 / 2008
  • Through a simple study of the medical classics of the 'Āyurveda', we have summarized them as follows. 1) Traditional Indian medicine started in the Ganges river area at about 1500 B.C.E., and traces of medical science can be found in the "Rigveda" and "Atharvaveda". 2) The "Charaka(閣羅迦集)" and "Suśruta(妙聞集)", ancient texts from India, are not the work of one person, but the result of the work and errors of different doctors and philosophers. Due to the lack of historical records, the times at which Charaka(閣羅迦) or Suśruta(妙聞) lived are not exactly known; the completion of the "Charaka" is estimated at the 1st~2nd century C.E. in northwestern India, and the "Suśruta" is estimated to have been completed in the 3rd~4th century C.E. in central India. The "Charaka" contains details on internal medicine, while the "Suśruta" contains comparatively more details on surgery. 3) 'Vāgbhata', one of the revered Vriddha Trayi (triad of the ancients, 三醫聖) of the 'Āyurveda', lived and worked in about the 7th century and wrote the "Astānga hrdaya samhitā(八支集)" and "Astānga Sangraha samhitā(八心集)", in which he tried to reconcile and unify the "Charaka" and "Suśruta". The "Astānga Sangraha samhitā" was translated into Tibetan and Arabic at about the 8th~9th century, and the medicinal plants recorded in the "Charaka", "Suśruta" and "Astānga Sangraha samhitā" number about 240, 370 and 240 types respectively. 4) The 'Madhava' focused on one of the subjects of Indian medicine, 'Nidāna', meaning "the cause of diseases(病因論)", and in one of the copies found by Bower, dating to the 4th century C.E., we can see that it uses prescriptions from the "BuHaLaJi(布唅拉集)", "Charaka" and "Suśruta".
5) According to the "Charaka", there were 8 branches of ancient medicine in India: treatment of the body (kayacikitsa), special surgery (salakya), removal of alien substances (salyapahartka), treatment of poison or mis-combined medicines (visagaravairodhikaprasamana), the study of ghosts (bhutavidya), pediatrics (kaumarabhrtya), perennial youth and long life (rasayana), and the strengthening of the essence of the body (vajikarana). 6) The 'Āyurveda', which originated from ancient experience, was recorded in Sanskrit, which was a theorization of knowledge, and was written in verse to make memorizing easy, making medicine the exclusive possession of the Brahmin. The first annotations were in 1060 for the "Charaka", 1200 for the "Suśruta", 1150 for the "Astānga Sangraha samhitā", and 1100 for the "Nidāna". The use of various mineral medicines in the "Charaka", the use of mercury as internal medicine in the "Astānga Sangraha samhitā", and the palpation of the pulse for diagnosis in 'Āyurveda' and 'XiZhang(西藏)' medicine are similar to TCM's pulse diagnostics. The coexistence with Arabian 'Unani' medicine, compromise with western medicine and a restorationist trend have revived the 'Āyurveda' today. 7) The "Charaka" is a book inclined to internal medicine that investigates the origin of human disease using the dualism of the 'Samkhya', the natural philosophy of the 'Vaisesika' and the logic of the 'Nyaya' in its medical theories; its structure has 16 syllables per line and 2 lines per poem, and it is recorded in poetry and prose. The "Charaka" can be summarized into the introduction, cause, judgement, body, sensory organs, treatment, pharmaceuticals and end, and can be seen as a work that strongly reflects the moral code of the Brahmin and Aryans.
8) For extracting bloody pus, the "Charaka" introduces a 'sharp tool' bloodletting treatment, while the "Suśruta" introduces many surgical methods, such as the use of gourd dippers and horns, and sucking the blood out with leeches. The "Suśruta" also has 19 chapters specializing in ophthalmology, and describes 76 types of eye diseases and their treatments. 9) Since anatomy did not develop in Indian medicine, the inner structure of the human body was not well known; the only exceptions are 'GuXiangXue(骨相學)', which developed from 'Atharvaveda' times, and the "Astānga Sangraha samhitā", whose 'ShenTiLun(身體論)' thoroughly lists the development of a child from pregnancy to birth. The 'Āyurveda' is not just an ancient traditional medical system: it is being called alternative medicine in the west because of its ability to supplement western medicine, and as its effects are proved scientifically it is gaining attention worldwide. What we have researched is just a small fragment and a limited view, and we would like to correct and supplement any insufficient parts through more research of new records.


A Study on the Differences of Information Diffusion Based on the Type of Media and Information (매체와 정보유형에 따른 정보확산 차이에 대한 연구)

  • Lee, Sang-Gun; Kim, Jin-Hwa; Baek, Heon; Lee, Eui-Bang
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.133-146 / 2013
  • While the use of the internet is routine nowadays, users receive and share information through a variety of media. Through the internet, information delivery media are diversifying from the traditional one-way media, such as newspapers, TV and radio, into two-way media. In contrast to traditional media, blogs enable individuals to directly upload and share news, and so can be considered to have a different speed of information diffusion than news media that convey information unilaterally. Therefore, this study focused on the difference between online news and social media blogs. Moreover, there are variations in the speed of information diffusion because information closely related to a person boosts communication between individuals. We believe that users' standard of evaluation changes with the type of information, and that the speed of information diffusion changes with the level of proximity. Therefore, the purpose of this study is to examine the differences in information diffusion based on the type of media; information is then segmented to examine how diffusion differs based on the type of information. This study used the Bass diffusion model, which has been used frequently because it has higher explanatory power than other models, explaining the diffusion of a market through an innovation effect and an imitation effect, and because it has been applied in many other studies of information diffusion. In the Bass model, the innovation effect measures the early-stage impact, while the imitation effect measures the impact of word of mouth at the later stage. According to Mahajan et al. (2000), the innovation effect is emphasized by usefulness and ease of use, while the imitation effect is emphasized by subjective norms and word of mouth. According to Lee et al. (2011), the innovation effect is emphasized by mass communication, and according to Moore and Benbasat (1996), by relative advantage; the imitation effect arises from within-group influences, while the innovation effect arises from the innovativeness of the product or service itself. Our study therefore compared online news and social media blogs to examine the differences between media. We also chose different types of information, including entertainment-related information (Psy's "Gentleman"), current-affairs news (the earthquake in Sichuan, China) and product-related information (the Galaxy S4), in order to examine the variations in information diffusion. We considered that users' information proximity alters with the type of information, and hence chose these three types, which have different levels of proximity from the users' standpoint, to examine the flow of information diffusion. The first conclusion of this study is that different media have similar effects on information diffusion, even though the types of media of the information providers are different; information diffusion was distinguished only by a disparity in the proximity of information. Second, information diffusion differs by type of information: from the standpoint of users, product- and entertainment-related information has a high imitation effect because of word of mouth, whereas for current-affairs news the innovation effect dominates the imitation effect. From these results, the changing flow of information diffusion can be examined and applied in practice. This study has some limitations, which provide opportunities and suggestions for future research: the small sample size makes it difficult to generalize the observed differences in information diffusion according to media and proximity. Therefore, if further studies increase the sample size and media diversity, the differences in information diffusion according to media type and information proximity could be understood in more detail.
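The Bass model the study relies on can be sketched in discrete time, with p as the innovation (external/mass-media) coefficient and q as the imitation (word-of-mouth) coefficient. The parameter values below are illustrative, not the study's estimates:

```python
# Discrete-time Bass diffusion: new adopters at each step combine an
# innovation effect p (external influence) and an imitation effect q
# (word of mouth), applied to the remaining potential market m - N.
def bass_adoption(p, q, m, steps):
    """Return cumulative adopters N(t) for t = 0..steps."""
    N = 0.0
    path = [N]
    for _ in range(steps):
        new_adopters = (p + q * N / m) * (m - N)
        N += new_adopters
        path.append(N)
    return path

# Word-of-mouth-driven diffusion (q >> p): the familiar S-shaped curve.
curve = bass_adoption(p=0.03, q=0.38, m=10_000, steps=30)
```

A high q relative to p reproduces the word-of-mouth-dominated pattern the study reports for product and entertainment information, while a high p relative to q corresponds to mass-media-driven diffusion such as current-affairs news.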

Impact of Shortly Acquired IPO Firms on ICT Industry Concentration (ICT 산업분야 신생기업의 IPO 이후 인수합병과 산업 집중도에 관한 연구)

  • Chang, YoungBong; Kwon, YoungOk
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.51-69 / 2020
  • It is now a stylized fact that a small number of technology firms, such as Apple, Alphabet, Microsoft, Amazon, Facebook and a few others, have become larger and dominant players in their industries. Coupled with the rise of these leading firms, we have also observed that a large number of young firms have become acquisition targets in their early IPO stages. This results in a sharp decline in the number of new entries on public exchanges, although a series of policy reforms have been promulgated to foster competition through an increase in new entries. Given the observed industry trend of recent decades, a number of studies have reported increased concentration in most developed countries. However, it is less well understood what caused this increase in industry concentration. In this paper, we uncover the mechanisms by which industries have become concentrated over the last decades by tracing the changes in industry concentration associated with a firm's status change in its early IPO stages. To this end, we put emphasis on the case in which firms are acquired shortly after they go public. Especially with the transition to digital-based economies, it is imperative for incumbent firms to adapt and keep pace with new ICT and related intelligent systems. For instance, after acquiring a young firm equipped with AI-based solutions, an incumbent firm may better respond to a change in customer taste and preference by integrating the acquired AI solutions and analytics skills into multiple business processes. Accordingly, it is not unusual for young ICT firms to become attractive acquisition targets. To examine the role of M&As involving young firms in reshaping the level of industry concentration, we identify a firm's status in its early post-IPO stages over the sample period spanning 1990 to 2016 as follows: i) being delisted, ii) remaining a standalone firm and iii) being acquired.
According to our analysis, firms that have conducted IPOs since the 2000s have been acquired by incumbent firms more quickly than those that did IPOs in earlier periods. We also show a greater acquisition rate for IPO firms in the ICT sector compared with their counterparts in other sectors. Our results based on multinomial logit models suggest that a large number of IPO firms have been acquired early in their post-IPO lives despite their financial soundness. Specifically, we show that IPO firms are likely to be acquired, rather than delisted due to financial distress, in their early IPO stages when they are more profitable, more mature or less leveraged. IPO firms with venture capital backing have also become acquisition targets more frequently. As a larger number of firms are acquired shortly after their IPO, our results show increased concentration. While providing limited evidence on the impact of large incumbent firms in explaining the change in industry concentration, our results show that the large firms' effect on industry concentration is pronounced in the ICT sector. This result possibly captures the current trend in which a few tech giants, such as Alphabet, Apple and Facebook, continue to increase their market share. In addition, compared with the acquisitions of non-ICT firms, the concentration impact of IPO firms in early stages becomes larger when ICT firms are acquired as targets. Our study makes new contributions. To the best of our knowledge, this is one of only a few studies linking a firm's post-IPO status to the associated changes in industry concentration. Although some studies have addressed concentration issues, their primary focus was on market power or proprietary software. In contrast to earlier studies, we are able to uncover the mechanism by which industries have become concentrated by placing emphasis on M&As involving young IPO firms.
Interestingly, the concentration impact of IPO firm acquisitions is magnified when a large incumbent firm is involved as the acquirer. This leads us to infer the underlying reasons why industries have become more concentrated in favor of large firms in recent decades. Overall, our study sheds new light on the literature by providing a plausible explanation of why industries have become concentrated.
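The abstract does not name its concentration measure; a standard one for this kind of analysis (offered here only as an illustrative sketch, not the paper's method) is the Herfindahl-Hirschman Index, the sum of squared market shares:

```python
# Herfindahl-Hirschman Index: sum of squared market shares (0..1 scale).
# Values near 1 mean a single dominant firm; values near 1/n mean
# n equal-sized firms.
def hhi(sales):
    total = sum(sales)
    return sum((s / total) ** 2 for s in sales)

concentrated = hhi([60, 20, 10, 10])  # a few large firms dominate
dispersed = hhi([10] * 10)            # ten equal-sized firms
```

When a large incumbent absorbs a young IPO firm, its share grows while the number of independent firms shrinks, so the index rises, which is the mechanism the paper traces.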

The Framework of Research Network and Performance Evaluation on Personal Information Security: Social Network Analysis Perspective (개인정보보호 분야의 연구자 네트워크와 성과 평가 프레임워크: 소셜 네트워크 분석을 중심으로)

  • Kim, Minsu; Choi, Jaewon; Kim, Hyun Jin
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.177-193 / 2014
  • Over the past decade, there has been a rapid diffusion of electronic commerce and a rising number of interconnected networks, resulting in an escalation of security threats and privacy concerns. Electronic commerce has a built-in trade-off between the necessity of providing at least some personal information to consummate an online transaction, and the risk of negative consequences from providing such information. More recently, the frequent disclosure of private information has raised concerns about privacy and its impacts. This has motivated researchers in various fields to explore information privacy issues to address these concerns. Accordingly, the necessity for information privacy policies and technologies for collecting and storing data has increased, as has information privacy research in various fields such as medicine, computer science, business, and statistics. The occurrence of various information security accidents has made finding experts in the information security field an important issue. Objective measures for finding such experts are required, as current practice is rather subjective. Based on social network analysis, this paper focused on a framework to evaluate the process of finding experts in the information security field. We collected data from the National Discovery for Science Leaders (NDSL) database, initially collecting about 2,000 papers covering the period between 2005 and 2013. Outliers and the data of irrelevant papers were dropped, leaving 784 papers to test the suggested hypotheses. The co-authorship network data on co-author relationships, publishers, affiliations, and so on were analyzed using social network measures including centrality and structural holes. The results of our model estimation are as follows. With the exception of Hypothesis 3, which deals with the relationship between eigenvector centrality and performance, all of our hypotheses were supported.
In line with our hypotheses, degree centrality (H1) was supported, with a positive influence on researchers' publishing performance (p<0.001). This finding indicates that the greater the degree of cooperation, the better the publishing performance of researchers. In addition, closeness centrality (H2) was also positively associated with researchers' publishing performance (p<0.001), suggesting that as the efficiency of information acquisition increased, so did researchers' publishing performance. This paper identified the differences in publishing performance among researchers. The analysis can be used to identify core experts and evaluate their performance in the information privacy research field. The co-authorship network for information privacy can aid in understanding the deep relationships among researchers. In addition, by extracting characteristics of publishers and affiliations, this paper suggested an understanding of the social network measures and their potential for finding experts in the information privacy field. Social concerns about securing the objectivity of experts have increased, because experts in the information privacy field frequently participate in political consultation, and in business education support and evaluation. In terms of practical implications, this research suggests an objective framework for identifying experts in the information privacy field, and is useful for people who are in charge of managing research human resources. This study has some limitations, which provide opportunities and suggestions for future research: the small sample size makes generalization of the findings difficult, so further studies could consider an increased sample size.
Moreover, previous network research has commonly assumed a causal relationship between the independent and dependent variables (Kadushin, 2012). In this study, degree centrality as an independent variable might have a causal relationship with performance as a dependent variable. However, in network analysis research, network indices can only be computed after the network relationships have been created. An annual analysis could help mitigate this limitation.
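The two supported centrality measures can be sketched on a toy co-authorship graph (the authors and edges below are made up; the study computes these on the NDSL co-authorship network):

```python
# Degree centrality (H1: amount of cooperation) and closeness centrality
# (H2: efficiency of information acquisition) on a toy undirected
# co-authorship graph. Author labels are hypothetical.
from collections import deque

coauthors = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A", "E"},
    "E": {"D"},
}

def degree_centrality(g, v):
    # Fraction of the other nodes that v has co-authored with.
    return len(g[v]) / (len(g) - 1)

def closeness_centrality(g, v):
    # BFS shortest-path distances from v, then (n-1) / sum of distances.
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in g[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return (len(g) - 1) / sum(dist.values())

print(degree_centrality(coauthors, "A"))    # best-connected author
print(closeness_centrality(coauthors, "A"))
```

Both measures peak for author "A", the hub of the toy graph, matching the intuition that well-connected, centrally placed researchers score highest.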

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon; Baek, Haedeuk; Choi, Jinho
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.163-176 / 2014
  • Social media is becoming the platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs, such as Twitter, have gained in popularity because of its ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' efforts and investment for content generation by recommending shorter posts. There has been a lot research into capturing the social phenomena and analyzing the chatter of microblogs. However, measuring television ratings has been given little attention so far. Currently, the most common method to measure TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch. In a similar way, microblog users are interacting with each other while watching television or movies, or visiting a new place. In order to measure TV ratings, some features are significant during certain hours of the day, or days of the week, whereas these same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing the time sensitive relevance is required to estimate TV ratings. Therefore, modeling time-related characteristics of features should be a key when measuring the TV ratings through microblogs. We show that capturing time-dependency of features in measuring TV ratings is vitally necessary for improving their accuracy. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. There are about 300 thousand posts in our data set for the experiment. After excluding data such as adverting or promoted tweets, we selected 149 thousand tweets for analysis. 
The number of tweets reaches its maximum level on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings. This result implies that a simple tweet rate does not reflect the satisfaction with or response to the TV programs. Content-based features extracted from the content of tweets have a relatively high correlation with TV ratings. Further, some emoticons or newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We find that there is a time-dependency in the correlation of features between the periods before and after the broadcasting time. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectation for the program or their disappointment over not being able to watch it. The features that are highly correlated before the broadcast differ from those after the broadcast. This result shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words reach their highest correlation before the broadcasting time, whereas 68 words reach it after broadcasting. Interestingly, some words that express the impossibility of watching the program show a high relevance, despite carrying a negative meaning. Understanding the time-dependency of features can help improve the accuracy of TV ratings measurement. This research contributes a basis for estimating the response to or satisfaction with broadcast programs using the time dependency of words in Twitter chatter.
More research is needed to refine the methodology for predicting or measuring TV ratings.
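The before/after split described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the word counts and ratings below are invented toy numbers, and the paper's candidate-feature filtering and Korean morpheme extraction are not reproduced.

```python
import numpy as np

def windowed_correlations(counts_before, counts_after, ratings):
    """Pearson correlation of one word's tweet counts with TV ratings,
    computed separately for the pre- and post-broadcast windows."""
    r_before = np.corrcoef(counts_before, ratings)[0, 1]
    r_after = np.corrcoef(counts_after, ratings)[0, 1]
    return r_before, r_after

# Toy example: counts of one candidate word across six episodes
# (hypothetical values, not data from the study).
before = [120, 95, 140, 80, 160, 110]           # tweets before broadcast
after = [30, 45, 25, 60, 55, 40]                # tweets after broadcast
ratings = [14.2, 12.8, 15.1, 11.9, 16.0, 13.5]  # hypothetical ratings (%)

rb, ra = windowed_correlations(before, after, ratings)
# Assign the word to the window where |r| peaks, mirroring the paper's
# split of 145 "before" words vs. 68 "after" words.
peak_window = "before" if abs(rb) > abs(ra) else "after"
```

Repeating this per word over all candidate features yields the time-dependent feature sets the abstract describes.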

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries. However, because of the distinctive nature of their capital structure and debt-to-equity ratios, it is more difficult to forecast construction companies' bankruptcies than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios, and project cash flows concentrated in the second half of a project. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, could place greater burdens on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies remain rare. Bankruptcy prediction models based on corporate finance data have been studied for some time in various ways. However, these models are intended for companies in general and may not be appropriate for forecasting the bankruptcies of construction companies, which typically carry high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries. Given its unique capital structure, a model used to judge the financial risk of companies in general can be difficult to apply to the construction industry.
Altman's Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the results into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company falling into the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood of bankruptcy. For companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have applied this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are extracted from a company's financial information and then judged as belonging to either the bankruptcy-risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are also many hybrid studies combining these models.
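The three-zone classification described above follows Altman's original 1968 formula and its classic cut-offs. The sketch below uses the published coefficients; the input figures are hypothetical and chosen to land in the "moderate" zone where, as the abstract notes, many construction firms fall.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, total_liabilities, sales, total_assets):
    """Original 1968 Altman Z-score for publicly held manufacturing firms."""
    x1 = working_capital / total_assets      # liquidity
    x2 = retained_earnings / total_assets    # cumulative profitability
    x3 = ebit / total_assets                 # operating efficiency
    x4 = market_equity / total_liabilities   # leverage
    x5 = sales / total_assets                # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    # Classic cut-offs: "dangerous" (distress) below 1.81, "safe" above
    # 2.99, and the hard-to-forecast "moderate" (grey) zone in between.
    if z < 1.81:
        return "dangerous"
    if z > 2.99:
        return "safe"
    return "moderate"

# Hypothetical firm (units: hundred million won, values invented):
z = altman_z(working_capital=50, retained_earnings=80, ebit=60,
             market_equity=600, total_liabilities=300,
             sales=500, total_assets=1000)
# z = 2.07, which zone() classifies as "moderate".
```

A firm in this grey zone is exactly the case where the paper turns to machine learning models instead of the simple formula.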
Existing studies using the traditional Z-score technique or machine-learning-based bankruptcy prediction focus on companies with no specific industry in mind, so industry-specific characteristics are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies, analyzed by company size. We classified construction companies into three groups - large, medium, and small - based on capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
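The per-size-group evaluation can be sketched with scikit-learn's AdaBoostClassifier. Everything below is synthetic: the financial ratios, the labels, and the assumption that larger firms have a cleaner signal (lower label noise) are invented stand-ins, loosely mirroring the finding that AdaBoost did best on the large-capital group; the paper's actual Korean construction-firm data and feature set are not reproduced.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_group(n, noise):
    """Synthetic stand-in for one capital-size group: five financial-ratio
    features; label 1 ("bankrupt") from a noisy linear rule."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] - 0.8 * X[:, 1] + noise * rng.normal(size=n) > 0).astype(int)
    return X, y

# Hypothetical noise levels per group (an illustrative assumption,
# not a measured property of the study's data).
groups = {"large": make_group(300, 0.3),
          "medium": make_group(300, 0.8),
          "small": make_group(300, 1.5)}

# Fit and score AdaBoost separately for each size group, as the paper does.
scores = {}
for name, (X, y) in groups.items():
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    scores[name] = cross_val_score(clf, X, y, cv=5).mean()
```

Under these assumptions the cross-validated accuracy decreases from the large to the small group, which is one way the reported size-dependent performance gap could arise.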